Hi Uche,
> This is true. I guess if you put it in that light, I can consider it
> with a more friendly eye. I know that XVIF has been designed from
> the beginning to support generic lexical processing. I guess that's
> been what Jeni has been trying to do as well, but it looked as if
> her example was couched in the sense of defining a set of operations
> and lexical mappings tailored to WXS types. Perhaps I was too hasty
> in that judgment.
Right, sorry I wasn't clearer. I wasn't aiming for W3C XML Schema
types -- I was writing to string, number and boolean -- the core types
for XPath 1.0.
> So, starting afresh on this idea, and expressing it in XVIF, which
> has the advantage of a handy implementation right now,
Well, I thought that XSLT had a few handy implementations around, and
the advantage that I know the language, so I thought I'd use that. I
think that ideally these data type definitions could be written in
*any* language, and it would be up to the processor to support
whichever language they wanted.
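For example (purely a sketch -- I'm reusing the dt:/if: names from
your example below, and guessing at a type URI for the XSLT case),
the same kind of preprocessing could be handed to an XSLT step rather
than a fragmentRules one, and a processor would only need to
recognise the transform types it chooses to support:

  <dt:lexical-preprocess>
    <if:transform type="http://www.w3.org/1999/XSL/Transform">
      <if:apply>
        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <!-- split a YYYY-MM-DD lexical value into year/month/day -->
          <xsl:template match="date">
            <year><xsl:value-of select="substring(., 1, 4)"/></year>
            <month><xsl:value-of select="substring(., 6, 2)"/></month>
            <day><xsl:value-of select="substring(., 9, 2)"/></day>
          </xsl:template>
        </xsl:stylesheet>
      </if:apply>
    </if:transform>
  </dt:lexical-preprocess>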
> People can also put whatever they want into the
> dt:lexical-preprocess element, including a pipe that defines some
> other transforms and perhaps a validation step.
Right. I assumed (I'm sorry, I should have written up what I was doing
a lot better) that there would be a separate definition that took a
parsed X and validated it. I'm not sure why I thought that separating
parsing and validation would be a good idea; certainly I agree that
once parsed, a value should be validated.
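So in the pipe I'd expect a validation step to follow the parse --
something like this for the date example below (the dt:validate
wrapper is made up, but the pattern inside is ordinary RELAX NG, and
a real definition would constrain the ranges of month and day too):

  <dt:validate>
    <element name="date" xmlns="http://relaxng.org/ns/structure/1.0">
      <element name="year"><text/></element>
      <element name="month"><text/></element>
      <element name="day"><text/></element>
    </element>
  </dt:validate>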
> So now we have the lexical representation mapped to a set of
> sub-elements that can be very readily manipulated by XPath for
> value-space declarations.
Right.
> What we'd need to add is some mechanism for such declarations,
> mapping them to operators.
Quite!
> Maybe:
>
> <dt:components xmlns="http://relaxng.org/ns/structure/1.0"
>                xmlns:if="http://namespaces.xmlschemata.org/xvif/iframe"
>                xmlns:dtl="http://www.w3.org/2001/XMLSchema-datatypes">
>   <dt:lexical-preprocess>
>     <if:transform type="http://simonstl.com/ns/fragments/">
>       <if:apply>
>         <fragmentRules xmlns="http://simonstl.com/ns/fragments/">
>           <fragmentRule
>             pattern="^[ \t\n]*([0-9]{4})-([0-9]{2})-([0-9]{2})[ \t\n]*$">
>             <applyTo>
>               <element localName="date"/>
>             </applyTo>
>             <produce>
>               <element localName="year"/>
>               <element localName="month"/>
>               <element localName="day"/>
>             </produce>
>           </fragmentRule>
>         </fragmentRules>
>       </if:apply>
>     </if:transform>
>   </dt:lexical-preprocess>
>   <dt:operator symbol="=">
>     <dt:result>
>       <if:transform type="http://www.w3.org/TR/xpath"
>                     apply="$lhs/year = $rhs/year and
>                            $lhs/month = $rhs/month and
>                            $lhs/day = $rhs/day"/>
>     </dt:result>
>   </dt:operator>
>   <!-- silly function example -->
>   <dt:function name="dtl:date-in-us-format">
>     <dt:result>
>       <if:transform type="http://www.w3.org/TR/xpath"
>                     apply="concat(month,'/',day,'/',year)"/>
>     </dt:result>
>   </dt:function>
> </dt:components>
Absolutely! That's exactly what I was trying to get at.
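Just to check that I'm reading the lexical-preprocess step the same
way as you, here's the sort of before/after I'd expect (I'm assuming
the produced year/month/day elements end up as children of the date
element -- that's my guess, not something your example pins down):

  <!-- lexical form in the instance -->
  <date>2002-08-21</date>

  <!-- after dt:lexical-preprocess -->
  <date>
    <year>2002</year>
    <month>08</month>
    <day>21</day>
  </date>

Given that second form, the XPaths in your dt:result transforms work
directly on year, month and day (via $lhs/$rhs for '=', and
presumably the context node for the function), so
dtl:date-in-us-format would give '08/21/2002' for that value.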
> Very intriguing idea, I guess, after all. Naturally, optimized
> implementations would not have to use all the above binding info and
> can just jump straight to the optimized code, similar to functions
> that implement EXSLT extensions natively and do not then have to run
> the exsl:function version, though they can always fall back to that
> for classes they do not support.
Yep, absolutely.
> This only answers one of my complaints: generic dispatch and
> constraint processing. That is a big bone of contention, so I'd be
> happy for such a solution, but the fact is that it still introduces
> a lot of complexity, which is worrisome.
In terms of the operations, I think that a lot of the complexity (from
the user side) can be managed by having sensible defaults. For
example, if a data type doesn't offer a specific equals definition,
then the application could fall back to comparing the two values as
plain strings.
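In your syntax, that default might amount to the processor behaving
as if the type had declared something like this (nobody would
actually have to write it):

  <dt:operator symbol="=">
    <dt:result>
      <!-- fallback when a type defines no '=' of its own:
           compare the values as plain strings -->
      <if:transform type="http://www.w3.org/TR/xpath"
                    apply="string($lhs) = string($rhs)"/>
    </dt:result>
  </dt:operator>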
> However, if it is inevitable that the data types juggernaut must
> have its stone of flesh in the end, I would rather a mechanism such
> as the above allowed others to put their own data types on an even
> keel, and also allowed other forms of axiomatic processing besides
> data types. I also like that it expresses value space operations as
> simple transforms on the plain lexical information, which is an
> important assertion of layering.
OK. Can you say a bit more about what you mean by that?
Cheers,
Jeni
---
Jeni Tennison
http://www.jenitennison.com/