From: "Richard Tobin" <richard@cogsci.ed.ac.uk>
> I wouldn't expect any noticeable difference. In XSV we generate a
> finite-state machine for content models, with the states having
> pointers to the element declarations and the element declarations
> having pointers to the type declarations, so there is no looking-up of
> element or type names in the usual case (there is for wildcards of
> course).
>
> A lot will depend on whether you end up reading in the schema for each
> document. That may well take much longer than the validation itself
> for small documents.
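A minimal sketch of the kind of structure Richard describes (all names are hypothetical, not XSV's actual code): FSM states carry direct references to element declarations, and each declaration carries a direct reference to its type, so running the machine over a child sequence involves no name lookups beyond the transition table itself.

```python
# Hypothetical sketch of a compiled content model: states hold direct
# pointers to element declarations, which hold pointers to type
# declarations, so validation does no name/type lookups.

class ElementDecl:
    def __init__(self, name, type_decl):
        self.name = name
        self.type_decl = type_decl  # direct pointer to the type declaration

class State:
    def __init__(self, accepting=False):
        self.accepting = accepting
        self.transitions = {}  # child element name -> (ElementDecl, next State)

    def add(self, decl, target):
        self.transitions[decl.name] = (decl, target)

def validate(children, start):
    """Run the FSM over a sequence of child element names."""
    state = start
    for name in children:
        entry = state.transitions.get(name)
        if entry is None:
            return False  # no transition: content model violated
        decl, state = entry
        # decl.type_decl is reachable here without consulting any table
    return state.accepting

# The content model (a, b?) compiled into three states:
s0, s1, s2 = State(), State(accepting=True), State(accepting=True)
a = ElementDecl("a", type_decl="xs:string")
b = ElementDecl("b", type_decl="xs:integer")
s0.add(a, s1)
s1.add(b, s2)
```

Wildcards would still force a lookup at validation time, as Richard notes, since the matching declaration isn't known when the machine is built.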
If validating the schema and converting it to internal form is a large part of the overall cost, would transformations that "flatten" a schema make a significant difference?
The ultimate flattening, of course, would be a transformation that produced
a representation of the internal form, which would reduce the process to
parsing and memory allocation. But this sort of "compiled" form would be
highly implementation-specific. Would transformations that produce a valid
but simplified schema speed up the process? I am thinking of things like
flattening type derivations, putting the contents of included schemas
inline, etc.
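One such flattening, inlining included schema documents, could be sketched roughly as below (a toy illustration, not a conforming schema processor; it assumes all documents share a targetNamespace and ignores chameleon includes and duplicate components):

```python
# Hypothetical sketch: replace each xs:include in a schema with the
# top-level components of the included document, producing one flat file.
import xml.etree.ElementTree as ET

XS = "http://www.w3.org/2001/XMLSchema"

def inline_includes(root, loader):
    """Splice the children of each included schema in place of its
    xs:include element. `loader` maps a schemaLocation to XML text."""
    for inc in list(root.findall(f"{{{XS}}}include")):
        included = ET.fromstring(loader(inc.get("schemaLocation")))
        pos = list(root).index(inc)
        root.remove(inc)
        for child in reversed(list(included)):
            root.insert(pos, child)
    return root

# Illustrative input: a schema that includes "other.xsd".
main = ET.fromstring(
    '<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">'
    '<xs:include schemaLocation="other.xsd"/>'
    '<xs:element name="root" type="xs:string"/>'
    '</xs:schema>')
other = ('<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">'
         '<xs:element name="inner" type="xs:int"/>'
         '</xs:schema>')
flat = inline_includes(main, lambda loc: other)
```

Whether this saves anything presumably depends on how much of the processor's time goes to fetching and parsing the separate documents versus building the component model itself.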
Bob