Re: [Question] How to do incremental parsing?
- From: Lars Marius Garshol <email@example.com>
- To: "Xu, Mousheng (SEA)" <Mousheng.Xu@sea.celltechgroup.com>
- Date: Thu, 05 Jul 2001 02:58:31 +0200
* Mousheng Xu
| One way to get around the problem would be to read the XML file into
| memory gradually and when needed. I would like to build such a DOM
| parser, but I am not familiar with the design of the Xerces XML
| parsers. Could someone give me a suggestion on how to tackle on the
There are two approaches to this problem that have not yet been
presented fully in this thread.
The first, which is the one I usually use for data-oriented XML, no
matter what the size of the input, is to create an object model for
your data as a set of classes. Then make a SAX application which
builds instances of this data model from XML input.
For example, if your input is an RSS document, you should have classes
like Channel, NewsItem, and so on. The result of processing should be
a set of objects representing the input.
This solves the problem Anthony Coates mentioned, which arises when
the XML structure has strong interrelations that make it difficult to
work with SAX's peephole view of the document.
If your dataset is large, you need an object model implementation that
is able to deal with that, using an object-oriented database, an RDBMS
mapping tool or whatever.
The second is to use a tool that only builds fragments of the tree at
a time. Tools like Pyxie, XML::Twig, SAXON, minidom, easydom, and, I
think, Orchard, allow you to do this. Probably there are many more
that I don't remember right now. This approach can often make it
easier (though less performant) to build instances of an object model.
In some cases it can also make the object model unnecessary, if the
XML is sufficiently simple.