- From: Didier PH Martin <martind@netfolder.com>
- To: Steve Muench <smuench@us.oracle.com>, Chris Lovett <clovett@microsoft.com>
- Date: Tue, 21 Nov 2000 10:56:41 -0500
Hi Steve,
Steve said:
In the database/XML environment the problem is typically
transforming huge XML documents which are huge because
they are thousands or millions of repeating "subdocuments"
wrapped by an outer, containing element. Not surprisingly
these "subdocuments" often are produced from rows in a
database query. If the transformation that needs to be
performed just needs to work on each <ROW> at a time
(that is, each "subdocument") -- but never needs to
do things like select the last() element or navigate
back to the "/" root -- it seems like this kind of
solution may be workable.
Didier replies:
Gee, millions of repeating sub-documents means a very, very large text
document. In that case, a stream-based processing environment seems more
appropriate. Just being curious: have you done any experiments using
OmniMark for the kind of processing you mention? If so, did you encounter
any limitations compared to XSLT, or in what ways was it better?
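As a minimal sketch of that row-at-a-time streaming idea, something like
Python's standard-library iterparse would do; the file name "huge.xml" and
the transform_row helper here are hypothetical, not anything from Steve's
setup:

    import xml.etree.ElementTree as ET

    def transform_row(row):
        # Hypothetical per-row transform: it sees only the current
        # subdocument, never last() or the "/" root.
        return {child.tag: child.text for child in row}

    # Stream the document; each completed <ROW> is handled and then
    # freed, so memory stays flat however many rows there are.
    for event, elem in ET.iterparse("huge.xml", events=("end",)):
        if elem.tag == "ROW":
            print(transform_row(elem))
            elem.clear()  # discard the finished subdocument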
Cheers
Didier PH Martin