> I think that optimizations that can be implemented for user-defined
> types will frequently or mostly be of a sort that can be stated in
> terms of 'elements whose content model has characteristics X and Y'
> (to be used under the assumption that the subject of the transform
> or query is valid according to the type). For instance, an XSLT
> implementation should (I'm guessing, as I've no experience as an
> implementor of XSLT) be able to optimize template application and
> matching if it knows that foo is an empty element, or an element
> containing only pcdata. A compiler might match the type description
> for foo against a set of optimizable patterns in determining how to
> compile a particular transform that was to act on it.
> For these optimizations, I don't think it would be necessary to have
> fooElementNode objects, but it might be useful to have
> EmptyElementNode objects which could be generated for suitable parts
> of the content model.
It would certainly make sense for elements with simple types (PCDATA)
to be different kinds of objects than elements with complex types, but
I was talking about elements with complex types (particularly those
with complex content) and differentiating between them.
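To make the distinction concrete, here's a rough sketch (Python, purely
illustrative; all class and field names are mine, not from any real XSLT
or schema implementation) of picking a specialised node class from the
declared content model rather than using one generic element class:

```python
from dataclasses import dataclass

# Hypothetical, simplified schema model: each element type declares
# whether its content is empty, PCDATA only, or complex.
@dataclass
class ElementType:
    name: str
    content: str  # "empty", "pcdata", or "complex"

class EmptyElementNode:
    """Node class for elements whose type forbids any content."""
    def __init__(self, name):
        self.name = name

class TextOnlyElementNode:
    """Node class for elements whose type allows only PCDATA."""
    def __init__(self, name, text=""):
        self.name, self.text = name, text

class ComplexElementNode:
    """Node class for elements with complex content."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def node_class_for(etype: ElementType):
    # The builder consults the schema once per type, so every instance
    # of that element gets the specialised class without a runtime test.
    return {"empty": EmptyElementNode,
            "pcdata": TextOnlyElementNode}.get(etype.content,
                                               ComplexElementNode)
```

The point is only that the class choice is driven by the schema, not by
inspecting each element instance.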
I agree that empty elements are a special class of those, but is
analysing the schema to work out whether a type is empty likely to give
a significant speed advantage over simply checking at run time whether
the element has any children? It might if the document had huge numbers
of those elements, or if the schema analysis only had to occur once and
be reused across many documents. Otherwise, it's much less clear cut.
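A toy comparison of the two approaches (again Python, purely
illustrative; the schema and document representations here are made up
for the example) shows why the trade-off hinges on how often the
analysis cost is paid:

```python
def analyse_schema(schema):
    # One-off pass over the schema: collect the names of types whose
    # content model is empty. This cost is paid once per schema.
    return {name for name, content in schema.items() if content == "empty"}

def is_empty_via_schema(empty_types, element):
    # Per-element cost: a single set lookup on the declared type.
    return element["type"] in empty_types

def is_empty_via_children(element):
    # Per-element cost: inspect the instance directly. Also cheap,
    # but paid for every element with no chance to amortise.
    return not element["children"]

schema = {"br": "empty", "p": "complex"}
empty_types = analyse_schema(schema)
doc = [{"type": "br", "children": []},
       {"type": "p", "children": ["some text"]}]

# Both strategies agree on a valid document; the question is only
# whether the up-front analysis buys anything.
assert ([is_empty_via_schema(empty_types, e) for e in doc]
        == [is_empty_via_children(e) for e in doc])
```

With a per-element check this cheap, the schema route only wins when
the analysis is amortised over very many elements or documents, which
is exactly the caveat above.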