- From: Peter@ursus.demon.co.uk (Peter Murray-Rust)
- To: firstname.lastname@example.org
- Date: Sat, 21 Jun 1997 15:31:03 GMT
In message <199706211310.JAA17653@smtp2.erols.com> "Peat" writes:
> If the document is very large, and the parser is required to maintain the
> grove, we would then require the parser to also then include some type of
> defined memory management. Can this be a problem, where different parsers
> implement resource management differently?
This is an important point and one which I've been conscious of but ignored so
far. JUMBO is quite large (with all the MOL classes included, there's about half
a megabyte of classes) and I have had OutOfMemory failures with large files (ca.
1 Mbyte of legacy input translated into a tree). I don't know whether there is
a generic solution to this. I tried running the garbage collector (JDK 1.0.2)
occasionally, and this helps, but since the parser, browser, and document all
have to be in memory, large docs are a problem.
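A minimal sketch of that approach: drop the references to a subtree once it is
no longer displayed, then hint the collector with System.gc(). The Node class
here is purely illustrative, not JUMBO's actual tree class, and on any VM the
gc() call is only a hint:

```java
import java.util.Vector;

// Hypothetical tree node, standing in for a parsed document subtree
class Node {
    String name;
    Vector children = new Vector(); // child Nodes

    Node(String name) { this.name = name; }
}

public class GcHint {
    public static void main(String[] args) {
        Node doc = new Node("molecule");
        for (int i = 0; i < 10000; i++) {
            doc.children.addElement(new Node("atom" + i));
        }
        // Drop the subtree when the browser no longer needs it...
        doc.children = new Vector();
        // ...and suggest a collection; the VM may or may not act on it.
        System.gc();
        System.out.println(doc.children.size()); // prints 0
    }
}
```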
Presumably, in an application, subtrees could be saved to disk (serialized?).
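One way that swap-to-disk idea could look, assuming JDK 1.1 object
serialization is available (SubTree, swapOut, and swapIn are hypothetical
names, not part of JUMBO):

```java
import java.io.*;

// Hypothetical serializable subtree, standing in for part of the grove
class SubTree implements Serializable {
    String element;
    SubTree(String element) { this.element = element; }
}

public class SwapOut {
    // Write a subtree to disk so its in-memory copy can be released
    static void swapOut(SubTree t, String path) throws IOException {
        ObjectOutputStream out =
            new ObjectOutputStream(new FileOutputStream(path));
        out.writeObject(t);
        out.close();
    }

    // Read it back only when the user actually opens that branch
    static SubTree swapIn(String path)
            throws IOException, ClassNotFoundException {
        ObjectInputStream in =
            new ObjectInputStream(new FileInputStream(path));
        SubTree t = (SubTree) in.readObject();
        in.close();
        return t;
    }

    public static void main(String[] args) throws Exception {
        swapOut(new SubTree("CML fragment"), "subtree.ser");
        SubTree back = swapIn("subtree.ser");
        System.out.println(back.element); // prints "CML fragment"
    }
}
```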
> I would think if this burden is on the application layer, then knowledge of
> the application can be used to optimize resources.
I would think that if the author uses entities, then knowledge of the entity
structure would help. In the browser the entities could be treated as
'pointers' and resolved only when required.
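The 'pointer' treatment might be sketched like this: the browser keeps only
the entity's system identifier until the node is expanded, and parses the
external content on first access. EntityRef and resolve() are illustrative
names, not a real parser API:

```java
// Lazy external-entity resolution: memory cost is just the identifier
// until the content is actually requested.
class EntityRef {
    String systemId;   // e.g. a file or URL for the external entity
    String content;    // null until resolved

    EntityRef(String systemId) { this.systemId = systemId; }

    // Resolve on first access only (a 'pointer' dereference)
    String getContent() {
        if (content == null) {
            content = resolve(systemId);
        }
        return content;
    }

    // Stand-in for reading and parsing the external entity
    static String resolve(String systemId) {
        return "<parsed content of " + systemId + ">";
    }
}

public class LazyDemo {
    public static void main(String[] args) {
        EntityRef ref = new EntityRef("chap2.xml");
        // Nothing has been loaded yet; only the identifier is held.
        System.out.println(ref.getContent());
    }
}
```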
> Grove standardization is a good idea. Any ideas on how the grove
> standardization can be implemented up one layer?
^^ ??? ^^^
Again, I'd like to see something concrete in a few days, so that we don't
lose the momentum again.
Peter Murray-Rust, domestic net connection
Virtual School of Molecular Sciences
xml-dev: A list for W3C XML Developers
Archived as: http://www.lists.ic.ac.uk/hypermail/xml-dev/
To unsubscribe, send to email@example.com the following message;
List coordinator, Henry Rzepa (firstname.lastname@example.org)