Kevin Jones wrote:
> The core issue for me here is that the processing software
> needs to have a way of determining the capability of the
> data model it is being asked to use so that it can adapt
> its evaluation strategy according to those capabilities.
> Without this we are forever stuck with having to use known
> matched pairs of processing software and data model.
I think that's a bit overly complex and puts the responsibility in the
wrong place. The engine should not have to adapt itself to the
capabilities of different models.
I suspect it would be much cleaner to take an approach like Jaxen's. In
this approach there's a core set of basic operations that all
model-connectors must implement (getChild, getAttribute, getParent,
etc.). However most other axes have default implementations that build
on top of the basic operations. For instance, the ancestor axis is
easily implemented on top of getParent. Thus a minimal implementation
only has to provide about 20 fairly straightforward operations.
However, if the implementation does have more efficient ways to
implement the ancestor axis than just walking up the parent axis, it can
override the default getAncestor implementation with a more customized
version. Ditto for the other axes. The engines never need to know the
details.
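Jaxen's actual Navigator interface differs in its details, but the shape of the pattern can be sketched in a few lines of Java. All the names below (Navigator, getParent, getAncestors, SimpleNavigator) are hypothetical illustrations, not Jaxen's real API: a required core operation, a default ancestor axis built on top of it, and the option for a connector to override that default.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model-connector interface: only getParent is required.
interface Navigator {
    Object getParent(Object node); // core operation every connector must supply

    // Default ancestor axis built by walking getParent; a connector with a
    // faster native way to find ancestors simply overrides this method.
    default List<Object> getAncestors(Object node) {
        List<Object> ancestors = new ArrayList<>();
        for (Object p = getParent(node); p != null; p = getParent(p)) {
            ancestors.add(p);
        }
        return ancestors;
    }
}

// A toy node type for the minimal connector below.
class SimpleNode {
    final String name;
    final SimpleNode parent;
    SimpleNode(String name, SimpleNode parent) {
        this.name = name;
        this.parent = parent;
    }
}

// A minimal connector: implements only the core operation and inherits
// the default ancestor axis for free.
class SimpleNavigator implements Navigator {
    public Object getParent(Object node) {
        return ((SimpleNode) node).parent;
    }
}

public class AxisDemo {
    public static void main(String[] args) {
        SimpleNode root = new SimpleNode("root", null);
        SimpleNode mid = new SimpleNode("mid", root);
        SimpleNode leaf = new SimpleNode("leaf", mid);
        // The engine calls getAncestors without knowing whether it is the
        // default implementation or a connector-specific override.
        List<Object> ancestors = new SimpleNavigator().getAncestors(leaf);
        System.out.println(ancestors.size()); // prints 2 (mid, then root)
    }
}
```

The engine codes only against the interface, so a connector that overrides getAncestors with a native index lookup is indistinguishable, to the engine, from one that uses the default parent-walk.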
This seems a lot more flexible to me and much more likely to be
implemented than having the engines query the models for their
capabilities and then adjust their algorithms accordingly.
--
Elliotte Rusty Harold elharo@metalab.unc.edu
XML in a Nutshell 3rd Edition Just Published!
http://www.cafeconleche.org/books/xian3/
http://www.amazon.com/exec/obidos/ISBN=0596007647/cafeaulaitA/ref=nosim