- From: Alberto Reggiori <email@example.com>
- To: Leigh Dodds <firstname.lastname@example.org>
- Date: Fri, 26 Mar 1999 11:45:08 +0100
Leigh Dodds wrote:
> Wouldn't a (undoubtedly naive) implementation of this be simply serialising
> the object graph to disk, or through an I/O stream? This is obviously easy
> in Java, and again is only obviously beneficial if the serialised object
> graph is more 'compact' (which I believe is at least partly behind your
> desire) than the original textual version?
I am writing a Web application that provides an open Web space for
secondary schools in Europe, where users can interact with an OODBMS
through a treeview-like cut/paste/rename/edit paradigm on ordinary
16 MB Pentium PCs over ISDN connections.
One of the big issues in this application is quick generation and
rendering of treeviews. The first approach was a client-side JavaScript
XML parser (jeremie.com-like) to parse the XML and build the in-memory
(DOMish) data structure for those views, but that solution does not
scale once the user requests some 200/300 folders.
The current solution to those problems is a little hack on the server:
it generates the HTML docs directly, with the parsed JS structure
embedded as nested arrays and hashes that do _not_ need any parsing
anymore. The files carrying the "serialised" trees are a bit larger,
but rendering performance is a _lot_ better. The code is still able to
display textual XML treeviews.
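The hack above can be sketched roughly like this; names and shapes are
illustrative guesses, not taken from the actual application:

```javascript
// Hypothetical sketch: instead of shipping XML to be parsed in the
// browser, the server embeds the tree directly in the generated HTML
// as nested arrays and hashes, so the client only walks the structure.
var tree = {
  name: "root",
  children: [
    { name: "folder1", children: [
      { name: "doc1.xml", children: [] },
      { name: "doc2.xml", children: [] }
    ]},
    { name: "folder2", children: [] }
  ]
};

// Rendering is then a plain recursive walk: no XML parsing at all.
function render(node, depth) {
  var lines = ["  ".repeat(depth) + node.name];
  for (var i = 0; i < node.children.length; i++) {
    lines = lines.concat(render(node.children[i], depth + 1));
  }
  return lines;
}
```

The embedded form is more verbose than the raw XML, but the expensive
parse step moves to the server and happens only once.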
I think it would be really useful to have a standard, more compact way
to serialise (dump binary groves/structures) to some specific format.
I am not saying that XML should be binary, only that the parsing
business is sometimes an issue.
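As a stand-in for the standard serialisation wished for here, a plain
textual dump of the same kind of structure (JSON.stringify, not
anything from this thread) already shows the trade-off: the dump may
be no smaller than the XML, but the receiving side needs no XML parser:

```javascript
// The same folder tree, once as XML text (must be parsed) and once as
// a standard structure dump (JSON.parse is all the client needs).
var xmlText =
  "<folder name='root'><folder name='f1'/><folder name='f2'/></folder>";

var tree = { name: "root", children: [
  { name: "f1", children: [] },
  { name: "f2", children: [] }
]};

// Dump and restore: no grove/DOM construction from markup required.
var dump = JSON.stringify(tree);
var back = JSON.parse(dump);

console.log(dump.length, xmlText.length);  // the dump may be larger...
console.log(back.children.length);         // ...but restores directly: 2
```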
Just another brain dump.