I have exactly the same problem. My approach was to chunk the file down
into lots of smaller ones; I wrote a Java program for that (something
like 50 lines).
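The program itself wasn't attached, but a splitter along those lines fits in
roughly that many lines using the StAX API that ships with the JDK
(javax.xml.stream). The following is only a sketch under assumed names: it
copies each depth-2 element of a hypothetical big.xml into its own
record-N.xml, and it does not carry namespace declarations from the root
element down into the fragments.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import javax.xml.stream.XMLEventFactory;
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLEventWriter;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.events.XMLEvent;

// Splits big.xml into record-1.xml, record-2.xml, ... -- one file per
// child element of the document root.  File and element names are
// placeholders, not anything from the original post.
public class XmlSplitter {
    public static void main(String[] args) throws Exception {
        XMLInputFactory inFactory = XMLInputFactory.newInstance();
        XMLOutputFactory outFactory = XMLOutputFactory.newInstance();
        XMLEventFactory eventFactory = XMLEventFactory.newInstance();

        XMLEventReader reader =
            inFactory.createXMLEventReader(new FileInputStream("big.xml"));

        XMLEventWriter writer = null;
        int fileNo = 0;
        int depth = 0;

        while (reader.hasNext()) {
            XMLEvent event = reader.nextEvent();

            if (event.isStartElement()) {
                depth++;
                // depth 2 = a repeating child of the document root:
                // start a new output file for it
                if (depth == 2 && writer == null) {
                    writer = outFactory.createXMLEventWriter(
                        new FileOutputStream("record-" + (++fileNo) + ".xml"));
                    writer.add(eventFactory.createStartDocument());
                }
            }

            if (writer != null) {
                writer.add(event);
            }

            if (event.isEndElement()) {
                // closing tag of the depth-2 element: finish this fragment
                if (depth == 2 && writer != null) {
                    writer.add(eventFactory.createEndDocument());
                    writer.close();
                    writer = null;
                }
                depth--;
            }
        }
        reader.close();
    }
}

Because it never builds a tree in memory, this runs in constant space
regardless of how large the input document is.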
-----Original Message-----
From: Andy Greener [mailto:andy@gid.co.uk]
Sent: Thursday 29 April 2004 15:13
To: xml-dev@lists.xml.org
Subject: [xml-dev] Handling very large instance docs
Before we dive into a local bespoke development, is anyone aware
of any tools that can efficiently handle browsing and editing of
very large XML documents (2-300Mb of repeating structure at the top
end) on "normal" PC workstation hardware?
At the very least I need to be able to sequentially process a large
document and extract an identified sub-tree (ideally denoted by an
XPath expression) for run-of-the-mill tools to manipulate. I assume
such a beast would need to be based on a SAX parser.
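Not a full XPath evaluator, but for the common case of an expression like
//item[@id='42'] over repeating structure, a single streaming pass keeps
memory flat. The sketch below is purely illustrative, with huge.xml, item
and id="42" as assumed stand-in names; it uses StAX rather than raw SAX for
brevity, and it does not repair namespace declarations inherited from
ancestor elements.

import java.io.FileInputStream;
import javax.xml.namespace.QName;
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLEventWriter;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.events.Attribute;
import javax.xml.stream.events.StartElement;
import javax.xml.stream.events.XMLEvent;

// Streams huge.xml and copies the first <item id="42"> subtree to stdout,
// roughly the effect of the XPath //item[@id='42'].  All names are
// placeholders for whatever the real document uses.
public class SubtreeExtractor {
    public static void main(String[] args) throws Exception {
        XMLEventReader reader = XMLInputFactory.newInstance()
                .createXMLEventReader(new FileInputStream("huge.xml"));
        XMLEventWriter writer = XMLOutputFactory.newInstance()
                .createXMLEventWriter(System.out);

        int depth = 0;                 // > 0 while inside the wanted subtree
        while (reader.hasNext()) {
            XMLEvent event = reader.nextEvent();

            if (depth == 0 && event.isStartElement()) {
                StartElement se = event.asStartElement();
                Attribute id = se.getAttributeByName(new QName("id"));
                if (se.getName().getLocalPart().equals("item")
                        && id != null && id.getValue().equals("42")) {
                    depth = 1;         // entering the subtree
                    writer.add(event);
                }
            } else if (depth > 0) {
                writer.add(event);
                if (event.isStartElement()) {
                    depth++;
                } else if (event.isEndElement() && --depth == 0) {
                    break;             // subtree fully copied; stop reading
                }
            }
        }
        writer.flush();
        reader.close();
    }
}

The extracted fragment can then be handed to run-of-the-mill DOM or XSLT
tools, since it is only a small slice of the original document.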
I suppose my other option is to dump the doc into an XML-native
database and get at the fragments via a query mechanism. Any suggestions
gratefully received...
--
Andy Greener Mob: +44 7836 331933
GID Ltd, Reading, UK Tel: +44 118 956 1248
andy@gid.co.uk Fax: +44 118 958 9005