Re: XSLT vs. document size
- From: Paul Prescod <paulp@ActiveState.com>
- To: KRUMPOLEC Martin <firstname.lastname@example.org>
- Date: Mon, 11 Jun 2001 12:45:30 -0700
KRUMPOLEC Martin wrote:
> Good morning,
> I would like to ask the experts (yes, you are :-) on this list about
> the suitability of XSLT processing for larger documents.
> The common size of input files in our scenario is hundreds of
> kilobytes, but in rare peaks it can reach tens of megabytes, and
> XSLT processing then consumes too much memory and CPU ...
> What would you suggest? Can I use XSLT in a streaming fashion?
> (I doubt it, because of XPath patterns)
Your instinct is correct: in general, an XPath expression can require
processing of the first element in a document to halt until the last
element is available. No XSLT processor I know of can detect the
situation where a streamed subset of the document would be sufficient
to process it and optimize for that case.
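As a hypothetical illustration of why streaming fails in general,
consider a template like the following (the element and stylesheet
names are invented for this sketch): the output for the *first*
chapter depends on `count(//chapter)`, which cannot be known until
the *last* chapter has been parsed.

```xml
<!-- Hypothetical fragment: the second value-of depends on the
     total number of chapters, so no output for chapter 1 can be
     emitted until the entire input has been read. -->
<xsl:template match="chapter">
  <p>
    Chapter <xsl:value-of select="position()"/>
    of <xsl:value-of select="count(//chapter)"/>
  </p>
  <xsl:apply-templates/>
</xsl:template>
```

A processor would have to prove that no such "whole-document" XPath
appears anywhere in the stylesheet before it could safely stream.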
The closest is Napa, which is designed to load more information only
when it needs it. But it never *discards* information, so your
performance will still be poor when the document size is large
relative to available memory.
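For transforms that only need local context, a common workaround is to
step outside XSLT and stream the input yourself, discarding each
element after use. A minimal sketch with Python's standard-library
`xml.etree.ElementTree.iterparse` (the document, element names, and
numbers here are invented for illustration):

```python
# Sketch: bounded-memory processing of a large XML input using
# iterparse, discarding each element once it has been handled.
# This is an alternative to XSLT, not a streaming XSLT processor.
import io
import xml.etree.ElementTree as ET

# Stand-in for a multi-megabyte input file on disk.
big_doc = io.BytesIO(
    b"<orders>" + b"<order total='10'/>" * 1000 + b"</orders>"
)

grand_total = 0
for event, elem in ET.iterparse(big_doc, events=("end",)):
    if elem.tag == "order":
        grand_total += int(elem.get("total"))
        elem.clear()  # drop the element's contents so memory stays flat

print(grand_total)  # 10000
```

This only works because the computation is purely local per `order`
element; anything resembling `count(//chapter)` above would again
force a full pass before output could begin.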