"Christopher R. Maden" writes:
> Surely I am not the first person to try doing this, but I can't seem
> to find any prior art nor any straightforward way to do this.
> I have data that may be arbitrarily large and may conform to
> arbitrary XSDL schemata. Because of the size, I want to process the
> document as an event stream (hence SAX), and I want to make
> different processing decisions based on the declared types from the
> schema and based on the ultimate base types, if there's any type
Markup Technology has just launched a showcase server for its MT
Pipeline technology, which allows you to do precisely what you
describe: access schema-validation-based information in subsequent
stages of processing.
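The kind of dispatch you describe can be sketched in plain Python SAX, assuming an upstream stage has already annotated each element with its declared schema type in a hypothetical "t:type" attribute (MT Pipeline's actual annotation mechanism may well differ):

```python
import io
import xml.sax

class TypeDispatchHandler(xml.sax.ContentHandler):
    """Stream the document and branch on each element's declared type."""
    def __init__(self):
        super().__init__()
        self.seen = []  # (element name, declared type) pairs

    def startElement(self, name, attrs):
        # "t:type" is an assumed annotation added by a prior pipeline stage
        declared = attrs.get("t:type", "anyType")
        self.seen.append((name, declared))
        if declared == "xs:decimal":
            pass  # e.g. route numeric content to a different sink

doc = b'<root t:type="xs:anyType"><price t:type="xs:decimal">9.99</price></root>'
handler = TypeDispatchHandler()
xml.sax.parse(io.BytesIO(doc), handler)
```

Because this is event-driven, memory use stays bounded no matter how large the input document is.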
For very large documents, you need functionality not (yet) displayed
on the server, but I'll try to put together a demo which illustrates
what you have in mind over the weekend.
The closest pipeline to what you describe above is the one which does
absolutisation of relative URIs located via schema validation.
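The absolutisation step itself amounts to standard RFC 3986 reference resolution; a toy illustration, assuming validation has identified which attribute values are of type xs:anyURI (the real pipeline component is pre-built and needs no such code):

```python
from urllib.parse import urljoin

# Base URI of the document being processed, and a relative URI
# that schema validation has flagged as xs:anyURI (example values).
base = "http://example.org/docs/index.xml"
relative = "../images/logo.png"

absolute = urljoin(base, relative)
print(absolute)  # -> http://example.org/images/logo.png
```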
The important point from the user perspective about these pipelines is
that they require _no_ programming -- all the above functionality is
available from pre-built components.
I'll be making a more general announcement about the showcase server
Henry S. Thompson, Markup Technology Ltd.
4 Buccleuch Place, Edinburgh EH8 9LW, SCOTLAND -- +44 (0) 7866 471 388
Fax: (44) 131 650-4587, e-mail: firstname.lastname@example.org
[mail really from me _always_ has this .sig -- mail without it is forged spam]