- From: "Steven E. Harris" <steven.harris@tenzing.com>
- To: xml-dev@lists.xml.org
- Date: Mon, 20 Nov 2000 13:56:57 -0800
Paul Tchistopolskii <paul@qub.com> writes:
> When processing a 1 MB document and producing a result document of
> about 2 MB, the amount of RAM required is not 3 MB. It is much
> bigger. *Much* bigger. When using key() 'for speed' (it seems
> reasonable to use key() only for *large* documents, right?), add
> some more RAM for building the in-memory index.
Agreed. I worked for a while on a project with XML files that grew to
over 300 MB. Anything other than stream-based processing with
constant memory usage was impossible.
Whatever happened to that "stream-processing XSLT profile" thread from
way back when? The closest things to an implementation I've seen have
been my own Perl modules and the XML::Twig Perl module.
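For reference, here is a minimal XML::Twig sketch of the
constant-memory approach: handle one repeating element at a time and
purge the twig once each is processed, so RAM stays bounded no matter
how large the input grows. The element name 'record' and the file
name are assumptions for illustration:

    #!/usr/bin/perl
    use strict;
    use XML::Twig;

    # Minimal constant-memory sketch with XML::Twig. The element name
    # 'record' and the input file are assumptions; substitute whatever
    # repeating element your documents actually use.
    my $twig = XML::Twig->new(
        twig_handlers => {
            record => sub {
                my ( $t, $elt ) = @_;
                print $elt->text, "\n";   # process one element...
                $t->purge;                # ...then free everything
                                          # parsed so far
            },
        },
    );

    $twig->parsefile('big.xml');          # e.g. a 300 MB input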
--
Steven E. Harris :: steven.harris@tenzing.com
Tenzing :: http://www.tenzing.com