I think that optimizations that can be implemented for user-defined types
will frequently be of a sort that can be stated in terms of
'elements whose content model has characteristics X and Y' (to be applied under
the assumption that the subject of the transform or query is valid according
to the type). For instance, an XSLT implementation should (I'm guessing, as
I've no experience as an implementor of XSLT) be able to optimize template
application and matching if it knows that foo is an empty element, or an
element containing only pcdata. A compiler might match the type description
for foo against a set of optimizable patterns in determining how to compile
a particular transform that was to act on it.
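To make the idea concrete, here's a rough sketch (in Python, purely illustrative; the names ContentModel and choose_strategy are mine, not from any real XSLT implementation) of a compiler classifying an element's declared content model and picking a matching strategy from a set of optimizable patterns:

```python
# Hypothetical sketch: classify a schema-declared content model and
# choose a faster matching/traversal strategy where the declaration
# guarantees one is safe (assuming the input is valid against the type).

from dataclasses import dataclass

@dataclass
class ContentModel:
    """Simplified stand-in for an element declaration in a schema."""
    name: str
    allows_children: bool  # may the element contain child elements?
    allows_text: bool      # may the element contain character data?

def choose_strategy(model: ContentModel) -> str:
    """Return the strategy a compiler might select for this element."""
    if not model.allows_children and not model.allows_text:
        # Declared empty: never any need to walk children at all.
        return "empty-element fast path"
    if not model.allows_children and model.allows_text:
        # pcdata-only: the string value is simply the text content.
        return "text-only fast path"
    return "generic traversal"

foo = ContentModel("foo", allows_children=False, allows_text=False)
bar = ContentModel("bar", allows_children=False, allows_text=True)
print(choose_strategy(foo))  # empty-element fast path
print(choose_strategy(bar))  # text-only fast path
```

The point is that the decision is driven entirely by the type description, examined once at compile time, rather than by inspecting each instance node at run time.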
For these optimizations, I don't think it would be necessary to have
fooElementNode objects, but it might be useful to have EmptyElementNode
objects which could be generated for suitable parts of the content model.
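Again purely as an illustration (these class names are invented for the sketch, not taken from any processor): a node factory could consult the type description and substitute a specialized EmptyElementNode for a generic one, rather than generating a distinct class per element name.

```python
# Illustrative only: one specialized node class shared by all elements
# whose type declares them empty, instead of per-name fooElementNode
# classes. Validity against the schema is assumed.

class ElementNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def string_value(self):
        # Generic case: concatenate descendant text (simplified here).
        return "".join(
            c if isinstance(c, str) else c.string_value()
            for c in self.children
        )

class EmptyElementNode(ElementNode):
    """Built only for elements whose content model is empty."""
    def __init__(self, name):
        super().__init__(name, [])

    def string_value(self):
        # The declaration guarantees there is nothing to traverse.
        return ""

def make_node(name, declared_empty):
    # The factory decides from the type description, once.
    return EmptyElementNode(name) if declared_empty else ElementNode(name)
```

So the schema analysis buys a cheap dispatch decision per declaration, not a proliferation of classes per element name.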
----- Original Message -----
From: "Jeni Tennison" <firstname.lastname@example.org>
To: "Jonathan Robie" <email@example.com>
Sent: Wednesday, May 08, 2002 7:35 AM
Subject: Re: [xml-dev] XPath 1.5? (was RE: [xml-dev] typing and markup)
> But from what I can see, the same kind of operation on user-defined
> types, particularly on complex types, is going to be a lot harder. I'm
> not an implementer, but I imagine it would take a lot of work during
> compilation to create classes for different kinds of elements so that
> you can take advantage of their particular features, such as testing
> whether an element can have a particular child before trying to
> retrieve it. The reason it's worthwhile doing this for the built-in
> types is precisely because they're built in.
> I think it comes down to what advantage it gives you to treat a 'foo'
> element as a 'foo' element rather than a generic element node, and
> whether that advantage is worth the cost of schema analysis and the
> extra time and memory it takes to have fooElementNode objects, bearing
> in mind that the compile time and run time costs have to directly
> offset each other with XSLT.
> If it ends up being roughly equal, or if the analysis time is greater
> than the time you save due to optimisation, which is what I suspect,
> then the question is what's the point of the strong typing for complex
> types? Especially as there are lots of *disadvantages*, such as the
> added complexity in the processors and in the spec to deal with all
> the different kinds of casting and validating of complex types.