- From: Chris Lovett <clovett@microsoft.com>
- To: "'Steven E. Harris'" <steven.harris@tenzing.com>, xml-dev@lists.xml.org
- Date: Tue, 21 Nov 2000 02:08:50 -0800
I agree that a forward-only, streamable subset of XSLT/XPath would be very
useful. On further investigation, streaming transformations may turn out to
be a completely different animal: when you change a fundamental assumption
(like random access in XPath selections), it is wise to revisit the entire
design. I also like the idea of throwing in regular expressions while
you're at it. (Hey, the Schema guys got away with it -
http://www.w3.org/TR/xmlschema-2/#regexs :-)
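To make that concrete, here is a minimal sketch of what a forward-only
XPath subset could look like: a path of child steps matched against the
stack of currently open elements during a single SAX-style pass. XML::Parser
is the real Perl module; the path, file name, and matching rule are
illustrative assumptions, not a proposed design.

    use XML::Parser;

    # Illustrative only: a "path" of child steps (think /orders/order/total),
    # evaluated forward-only against the stack of currently open elements.
    my @want = qw(orders order total);
    my @stack;

    my $parser = XML::Parser->new(Handlers => {
        Start => sub { my (undef, $name) = @_; push @stack, $name },
        End   => sub { pop @stack },
        Char  => sub {
            my (undef, $text) = @_;
            # Emit text only while the open elements match the path exactly.
            print $text
                if @stack == @want
                and !grep { $stack[$_] ne $want[$_] } 0 .. $#want;
        },
    });
    $parser->parsefile('orders.xml');   # hypothetical input file

No tree is ever built, so memory use stays flat no matter how large the
input gets - which is exactly the property discussed below.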
Chris Lovett
> -----Original Message-----
> From: Steven E. Harris [mailto:steven.harris@tenzing.com]
> Sent: Monday, November 20, 2000 1:57 PM
> To: xml-dev@lists.xml.org
> Subject: Re: transformations
>
>
> Paul Tchistopolskii <paul@qub.com> writes:
>
> > When processing a 1 MB document and producing a result document of
> > about 2 MB, the amount of RAM required is not 3 MB. It is much
> > bigger. *Much* bigger. When using key() 'for speed' (it looks
> > reasonable to use key() only for *large* documents, right?), add
> > some more RAM for building the in-memory index.
>
> Agreed. I worked for a while on a project with XML files that grew to
> over 300 MB. Anything other than stream-based processing with constant
> memory use was impossible.
>
> Whatever happened to that "stream-processing XSLT profile" thread from
> way back when? The closest things to an implementation I've seen are my
> own Perl modules and the XML::Twig Perl module (see the XML::Twig
> sketch after this message).
>
> --
> Steven E. Harris :: steven.harris@tenzing.com
> Tenzing :: http://www.tenzing.com
>
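For reference, a minimal sketch of the constant-memory XML::Twig style
Steven mentions, assuming a large document made of repeated <record>
elements (the element name, the handler body, and the file name are
placeholders):

    use XML::Twig;

    # Handle each <record> as soon as it is fully parsed, then discard it,
    # so memory use stays flat no matter how large the input grows.
    my $twig = XML::Twig->new(
        twig_handlers => {
            record => sub {
                my ($t, $record) = @_;
                print $record->first_child_text('name'), "\n";  # placeholder work
                $t->purge;   # free everything parsed so far
            },
        },
    );
    $twig->parsefile('big.xml');   # hypothetical input in the 300 MB range

Because each element is purged as soon as it has been handled, only one
record is in memory at a time; an in-memory index like the one key()
builds is exactly what this style gives up.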