> Before we dive into a local bespoke development, is anyone aware
> of any tools that can efficiently handle browsing and editing of
> very large XML documents (2-300Mb of repeating structure at the top
> end) on "normal" PC workstation hardware?
Unfortunately, we don't know of any such tools :(
> At the very least I need to be able to sequentially process a large
> document and extract an identified sub-tree (ideally denoted by an
> XPath expression) for run-of-the-mill tools to manipulate. I assume
> such a beast would need to be based on a SAX parser.
That is the way to do it, but you would have to implement every query
yourself, which is a rather error-prone approach.
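
For illustration, here is a minimal sketch of what such hand-written code
looks like. It streams the document and copies out every subtree whose
element path equals a fixed target -- a crude stand-in for a real XPath
evaluator, which is exactly where the errors creep in. The file name
("big.xml") and target path are placeholders, not anything from your setup:

import java.io.FileInputStream;
import java.util.ArrayDeque;
import java.util.Deque;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class SubtreeExtractor extends DefaultHandler {
    private final String targetPath;                 // e.g. "/orders/order"
    private final Deque<String> stack = new ArrayDeque<>();
    private final StringBuilder out = new StringBuilder();
    private int copyDepth = 0;                       // > 0 while inside a match

    SubtreeExtractor(String targetPath) { this.targetPath = targetPath; }

    // Rebuild the current element path, root-first.
    private String currentPath() {
        StringBuilder p = new StringBuilder();
        for (java.util.Iterator<String> it = stack.descendingIterator(); it.hasNext(); )
            p.append('/').append(it.next());
        return p.toString();
    }

    // Minimal re-escaping; SAX hands us decoded text.
    private static String esc(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace("\"", "&quot;");
    }

    @Override public void startElement(String uri, String local, String qName, Attributes atts) {
        stack.push(qName);
        if (copyDepth == 0 && currentPath().equals(targetPath)) copyDepth = 1;
        else if (copyDepth > 0) copyDepth++;
        if (copyDepth > 0) {                         // serialize the matched subtree
            out.append('<').append(qName);
            for (int i = 0; i < atts.getLength(); i++)
                out.append(' ').append(atts.getQName(i)).append("=\"")
                   .append(esc(atts.getValue(i))).append('"');
            out.append('>');
        }
    }

    @Override public void characters(char[] ch, int start, int length) {
        if (copyDepth > 0) out.append(esc(new String(ch, start, length)));
    }

    @Override public void endElement(String uri, String local, String qName) {
        if (copyDepth > 0) { out.append("</").append(qName).append('>'); copyDepth--; }
        stack.pop();
    }

    public static void main(String[] args) throws Exception {
        SubtreeExtractor handler = new SubtreeExtractor("/orders/order");
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new FileInputStream("big.xml"), handler);
        System.out.println(handler.out);             // the extracted fragment(s)
    }
}

Even this toy version only handles exact element paths; predicates, axes,
namespaces and entity handling would all have to be added by hand.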
> I suppose my other option is to dump the doc into an XML-native
> database and get at the fragments via a query mechanism. Any suggestions
> gratefully received...
Definitely, this is the best solution, because an XML DBMS provides powerful
query facilities that can express arbitrary ad-hoc queries. We see only one
disadvantage of this solution: you need to install an XML DBMS server.
You can try our Sedna XML DBMS to solve this problem:
http://www.modis.ispras.ru/Development/sedna.htm
Sedna is able to handle such amounts of data; its query languages
are XPath and XQuery.
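
With the database route, extracting an identified subtree becomes a one-line
XQuery. As a sketch, here is how it looks through the vendor-neutral XQJ API
(javax.xml.xquery, JSR 225); the driver class name is passed in because each
XML DBMS ships its own XQDataSource implementation, and the document and
element names below are illustrative, not from your data:

import javax.xml.xquery.XQConnection;
import javax.xml.xquery.XQDataSource;
import javax.xml.xquery.XQPreparedExpression;
import javax.xml.xquery.XQResultSequence;

public class FragmentQuery {
    public static void main(String[] args) throws Exception {
        // args[0] = your driver's XQDataSource class (vendor-specific).
        XQDataSource ds = (XQDataSource)
                Class.forName(args[0]).getDeclaredConstructor().newInstance();
        XQConnection conn = ds.getConnection();
        // Ad-hoc query: pull one identified subtree out of the big document.
        XQPreparedExpression expr = conn.prepareExpression(
                "doc('big.xml')/orders/order[@id = '42']");
        XQResultSequence rs = expr.executeQuery();
        while (rs.next())
            System.out.println(rs.getItemAsString(null));
        conn.close();
    }
}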
We are interested in such applications of our system and are ready
to provide the required support.
Best regards,
Andrey