Petr Cimprich wrote:
> From my point of view, the first step should be to define a context
> available for XPattern evaluation. The context then determines which
> XPath grammar constructs make sense as well as their real meaning. Thus,
> the context should be agreed before the reduction of XPath grammar starts.
> The context can include e.g. these items:
> 1. The current node
> 2. All the ancestors of the current node
> While 1. and 2. can be maintained at nearly no cost (and I can
> hardly imagine pattern matching without them), the other points are
> trade-offs between the language's power and implementation costs.
> 3. N events look ahead.
> There is quite a difference between 0 and 1; one-node buffering
> allows, e.g., joining consecutive text events into a single node (which
> is required by the XPath data model).
In a streaming context, the concept of nodes is less useful, I think.
We should rather think in events. The joined character data would then be
available in the end-tag event.
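To illustrate the idea (a sketch of my own, not anything the XPattern draft specifies): a SAX handler can accumulate the character-data events for an element and hand over the joined text when the end-tag event fires, so no text "nodes" ever need to exist. The handler below only deals with simple content (an element whose children are text only); mixed content would need a richer buffer.

```python
import xml.sax

class TextJoiningHandler(xml.sax.ContentHandler):
    """Joins split character events; text is delivered with the end tag."""

    def __init__(self):
        super().__init__()
        self.buf = []
        self.ended = []  # (tag, joined text) per end-tag event, for the demo

    def startElement(self, name, attrs):
        self.buf = []  # simple-content assumption: reset on each start tag

    def characters(self, content):
        # the parser may split one run of text into several events
        # (e.g. around the entity reference below)
        self.buf.append(content)

    def endElement(self, name):
        self.ended.append((name, "".join(self.buf)))

handler = TextJoiningHandler()
xml.sax.parseString(b"<p>one &amp; two</p>", handler)
```

Even though expat reports `one `, `&`, and ` two` as separate character events, the end-tag event sees the single joined string `one & two`.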
> 4. Position counters (to enable position predicates and position()
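The cheap context items from the quoted list (current node, ancestors) plus per-level position counters can indeed be kept in a streaming pass at the cost of one stack; a sketch (my own, purely illustrative):

```python
import xml.sax

class ContextTracker(xml.sax.ContentHandler):
    """Maintains ancestors (point 2) and sibling counters (point 4)."""

    def __init__(self):
        super().__init__()
        self.ancestors = []   # (name, position) of every open element
        self.positions = [{}] # one counter dict per open level, keyed by name
        self.paths = []       # context at each start tag, for the demo

    def startElement(self, name, attrs):
        counts = self.positions[-1]
        counts[name] = counts.get(name, 0) + 1  # 1-based, like position()
        self.ancestors.append((name, counts[name]))
        self.positions.append({})  # fresh counters for the children
        self.paths.append("/".join(f"{n}[{p}]" for n, p in self.ancestors))

    def endElement(self, name):
        self.positions.pop()
        self.ancestors.pop()

tracker = ContextTracker()
xml.sax.parseString(b"<a><b/><b><c/></b></a>", tracker)
```

The stack depth is bounded by the nesting depth of the document, so the memory cost stays small regardless of document length.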
Especially for a streaming processor, shouldn't the goal of a
pattern-matching language be to allow flexible cooperation between what
the language does and what the handlers do?
Predicate evaluation could also be performed in the handler call-backs.
So, maybe the XPattern language could provide for simple attribute-value
matching, but leave more difficult tasks to the callback handlers,
which can take advantage of a Turing-complete language, and which
can also easily interact with application context (e.g. if our database
has a record for this id, return true, otherwise return false).
And there should be a specified way for the handlers to return results
of predicate evaluation.
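The division of labour could look like this (a hypothetical API of my own, not anything specified for XPattern): the matcher handles name and simple attribute-value tests itself, and hands anything harder to a user callback that is contracted to return a boolean.

```python
def match(tag, attrs, pattern_tag, pattern_attrs=None, predicate=None):
    """Simple tests in the language; hard predicates in the callback."""
    if tag != pattern_tag:
        return False
    if pattern_attrs:
        for k, v in pattern_attrs.items():  # simple attribute-value matching
            if attrs.get(k) != v:
                return False
    if predicate is not None:
        # the contract: the handler's predicate returns True or False
        return bool(predicate(tag, attrs))
    return True

# a predicate with application context, as in the database example above
known_ids = {"42"}

def in_database(tag, attrs):
    return attrs.get("id") in known_ids
```

Usage: `match("item", {"id": "42"}, "item", predicate=in_database)` yields `True`, while an unknown id yields `False` -- the pattern language never needs to know the database exists.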