Matthew.Bennett@facs.gov.au writes:
> Why parse repeatedly, if it's so damned inefficient? Why not come
> up with the concept of a 'compiled' xml document; one where
> structural info. is stored, and access is *FAST*, and validity and
> well-formedness have already been 'certified'? No-one's surprised
> that interpretive languages are execution dogs compared to compiled
> versions (because of no on-going parsing!), so why the mock horror
> that interpretive XML is so inefficient?
This is not a new idea, but despite many bold attempts and breathless
announcements, in five years no one has come up with anything that has
caught on.
Perhaps the problem is that XML parsing is usually very fast, often
close to I/O speed -- if you're reading over a network connection from
the other side of the firewall, your limiting factor will almost
certainly be transfer speed rather than parsing speed.
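If you want to check this against your own data, here is a minimal
Python sketch (the file name is just a placeholder; point it at any
large XML document you have) that times a plain read of the bytes
against a SAX parse with a do-nothing handler:

    import time
    import xml.sax

    PATH = "big-document.xml"   # placeholder: substitute any large XML file

    # Baseline: just pull the bytes off the disk.
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        while f.read(64 * 1024):
            pass
    raw = time.perf_counter() - start

    # Pure parsing cost: SAX events are generated and thrown away.
    start = time.perf_counter()
    xml.sax.parse(PATH, xml.sax.ContentHandler())
    parsed = time.perf_counter() - start

    print("raw read : %.3fs" % raw)
    print("SAX parse: %.3fs" % parsed)

On a local disk the two figures are usually within shouting distance
of each other; over a slow network link the read alone dominates.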
What takes most of an application's time is actually *doing* something
with the XML: searching it, indexing it, extracting information from
it, building an object tree (DOM or domain-specific), and so on.
Precompiling can help here only if you precompile for the specific
application, and at that point you lose the interoperability that is
XML's main justification.
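To put rough numbers on that, a companion sketch (same placeholder
file as above) compares the do-nothing parse with parsing plus
building a full in-memory tree and walking it once; the second figure
is where an application's time actually goes:

    import time
    import xml.sax
    import xml.etree.ElementTree as ET

    PATH = "big-document.xml"   # placeholder: substitute any large XML file

    # Parse only: events generated and discarded.
    start = time.perf_counter()
    xml.sax.parse(PATH, xml.sax.ContentHandler())
    parse_only = time.perf_counter() - start

    # Parse, build a full object tree, and visit every element once.
    start = time.perf_counter()
    root = ET.parse(PATH).getroot()
    elements = sum(1 for _ in root.iter())
    with_tree = time.perf_counter() - start

    print("parse only  : %.3fs" % parse_only)
    print("parse + tree: %.3fs (%d elements)" % (with_tree, elements))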
All the best,
David
--
David Megginson, david@megginson.com, http://www.megginson.com/