From: "Sean McGrath" <sean.mcgrath@propylon.com>
> At 18:32 06/03/2002 +1100, Rick Jelliffe wrote:
>
> >There is a case to be made that, for implementability reasons, it is actually
> >good to bundle together as many orthogonal functions that can act as
> >visitors on the same traversal of the infoset.
> This makes no sense to me. I must be missing something. What do you
> mean "for implementability reasons". My gut tells me quite the
> opposite!
Oops, I meant "for efficiency reasons". Rather than traversing an infoset
many times, it can be more efficient to perform several different functions
in a single pass through a document. *
So stream-based processing is efficient because it operates in a single
traversal. Contrast this with a DOM in which "layered" functions are
implemented naively as separate passes.
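
To make that concrete, here is a rough Python sketch of several independent
"visitor" functions sharing one SAX pass (the file name and the functions
are made up purely for illustration):

    import xml.sax

    class MultiVisitorHandler(xml.sax.ContentHandler):
        def __init__(self, visitors):
            self.visitors = visitors            # independent functions
        def startElement(self, name, attrs):
            for visit in self.visitors:         # one traversal, many functions
                visit(name, attrs)

    counts = {}
    ids_seen = set()

    def count_elements(name, attrs):
        counts[name] = counts.get(name, 0) + 1

    def collect_ids(name, attrs):
        if "id" in attrs:
            ids_seen.add(attrs["id"])

    xml.sax.parse("doc.xml", MultiVisitorHandler([count_elements, collect_ids]))

Each function is written independently, but they all ride the same parse.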
But not all layers can be implemented well using streams (XPath
systems requiring arbitrary context, for example). So instead, to
get efficiency from tree-based data structures, we need to perform
as many functions as possible during one traversal.
So it makes sense, for efficiency reasons, for a schema processor
implementation to do datatyping, augmentation, and defaulting at
the same time.
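
A sketch of the same idea over a tree, with three notionally separate
schema functions applied in one walk (hypothetical details, ElementTree
used only for illustration):

    import xml.etree.ElementTree as ET

    def coerce_types(elem):                     # "datatyping"
        if elem.get("count") is not None:
            elem.set("count", str(int(elem.get("count"))))

    def apply_defaults(elem):                   # "defaulting"
        elem.attrib.setdefault("lang", "en")

    def augment(elem):                          # "augmentation"
        elem.set("validated", "true")

    tree = ET.parse("doc.xml")
    for elem in tree.iter():                    # a single traversal
        for layer in (coerce_types, apply_defaults, augment):
            layer(elem)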
So the modularity of schema languages should be seen not just
in terms of "what functions can be split out into independent
passes?", but rather "what functions can be split out into notionally
independent passes, yet implemented in the same pass?"
In practice (i.e. for designers of schema languages and scissor-happy
layerists), this means that, for efficiency, the node-selection mechanism
should be shared, while the node-manipulation mechanism should
be modular.
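
Roughly, the shape I have in mind (again just a Python sketch, with a
trivial tag match standing in for a real pattern language):

    import xml.etree.ElementTree as ET

    handlers = {}                               # pattern -> manipulation modules

    def register(pattern, func):
        handlers.setdefault(pattern, []).append(func)

    register("para", lambda e: e.set("class", "body"))    # one module
    register("para", lambda e: e.set("checked", "yes"))   # another module

    tree = ET.parse("doc.xml")
    for elem in tree.iter():                    # node selection: one shared pass
        for func in handlers.get(elem.tag, ()): # node manipulation: modular
            func(elem)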
Cheers
Rick Jelliffe
* Obviously there are other aspects at work too: the availability
of keys or IDs gathered from previous passes, and whether
a layer should act based on a traversal of each node or by following
an XPath.