Thomas B. Passin wrote:
> Bob Foster wrote:
>
>> Recent criticisms of some Eclipse-based XML editors (including
>> mine), made in part because they use a lot of memory relative to
>> file size, underline the fairly obvious fact that XML files are
>> often much larger than programming language files. When the
>> techniques used successfully for programming languages are applied
>> to XML, they can break down.
>>
>> The first person I ever saw address this issue directly was Bryan
>> Ford, in his packrat parsing paper
>> (http://www.brynosaurus.com/pub/lang/packrat-icfp02.pdf). Packrat
>> parsing requires an O(n), where n is the document size, data structure
>> with a rather large constant factor. Ford observes "For example, for
>> parsing XML streams, which have a fairly simple structure but often
>> encode large amounts of relatively flat, machine-generated data, the
>> power and flexibility of packrat parsing is not needed and its storage
>> cost would not be justified."
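
To see why the storage cost bites: a packrat parser memoizes the
result of every rule at every input position, so the memo table alone
is O(rules * n) before any tree is built. A rough sketch of the idea,
with the grammar and all names invented for illustration:

    import java.util.HashMap;
    import java.util.Map;

    // Toy packrat fragment: memoizes every (rule, position) result.
    // The memo table is what makes storage O(rules * n) -- tolerable
    // for program text, painful for multi-megabyte XML.
    class PackratSketch {
        private final String input;
        // Key = input position; value = end of match, or -1 for
        // failure. Every cell is retained for the life of the parse;
        // that is the "large constant factor".
        private final Map<Integer, Integer> memoName =
            new HashMap<Integer, Integer>();

        PackratSketch(String input) { this.input = input; }

        // Rule: Name <- [a-zA-Z]+
        int parseName(int pos) {
            Integer cached = memoName.get(pos);
            if (cached != null) return cached; // reuse, never re-parse
            int i = pos;
            while (i < input.length()
                   && Character.isLetter(input.charAt(i))) i++;
            int result = (i > pos) ? i : -1;
            memoName.put(pos, result); // one boxed entry per position
            return result;
        }
    }

A real packrat parser keeps one such column per nonterminal, which is
where the multiplier comes from.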
>
> This has reminded me of the Judy array, which can function as a list or
> a dictionary, and is supposed to have very good performance and low
> memory load whether sparsely or densely populated. This is achieved by
> having a large number of specialized substructure types, put into play
> depending on the local data characteristics. Quoting from an
> introduction to Judy arrays -
>
> "Judy adapts efficiently to a wide range of populations and data set
> densities. Since the Judy data structure is a tree of trees, each
> sub-tree is a static expanse that is optimized to match the "character"
> or density of the keys it contains. To support this flexibility, in
> 32[64]-bit Judy there are approximately 25[85] major data structures and
> a similar number of minor structures."

God bless ya, Tom, for this out-of-left-field reply. There is a definite
prejudice in popular computer science (at least in my tiny little brain)
for algorithms that can be summed up in a few words, and it's evident
from even a cursory reading of Google results that Judy can't be.
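
The core trick can at least be gestured at in a few lines, even if the
25-odd production variants can't: each sub-tree picks a representation
to match its density. A two-variant miniature, with the threshold and
all names invented (real Judy is far more elaborate):

    import java.util.Arrays;

    // Miniature of Judy's adaptive-node idea: one 256-slot expanse
    // that starts as a small sorted array of keys and switches to a
    // direct-indexed table once it grows dense.
    class AdaptiveExpanse {
        private static final int PROMOTE_AT = 48; // invented threshold
        private int[] sparse = new int[0];  // sorted keys, sparse form
        private boolean[] dense;            // 256 flags, dense form

        void insert(int key) {              // key in 0..255
            if (dense != null) { dense[key] = true; return; }
            int i = Arrays.binarySearch(sparse, key);
            if (i >= 0) return;             // already present
            i = -i - 1;
            int[] next = new int[sparse.length + 1];
            System.arraycopy(sparse, 0, next, 0, i);
            next[i] = key;
            System.arraycopy(sparse, i, next, i + 1, sparse.length - i);
            sparse = next;
            if (sparse.length >= PROMOTE_AT) { // dense: flat table wins
                dense = new boolean[256];
                for (int k : sparse) dense[k] = true;
                sparse = null;
            }
        }

        boolean contains(int key) {
            return dense != null ? dense[key]
                                 : Arrays.binarySearch(sparse, key) >= 0;
        }
    }

Multiply that decision by a couple dozen node shapes and you get
something that can't be summed up in a few words.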
> Perhaps a really top XML editor needs a similarly large number of
> specialized tree structures (glad it's not me writing them!).

Me too. No, wait, I'm expected to be writing them. If I were a professor
I could cajole grad students into taking the problem seriously. The
problem is something like, "When O(m) becomes oppressive: minimizing the
constant multiplier for m-length n-ary tagged trees."
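
To make the constant multiplier concrete: the naive Java answer is one
object per element with String fields, which costs an object header and
several pointers per node before a single character of text is stored.
One hedged way to shrink it (all names invented) is to keep the tree as
parallel int arrays of offsets into the source buffer:

    import java.util.Arrays;

    // Tree-as-arrays sketch: a node is an index into parallel int
    // arrays; tag names are offsets into the source text, never
    // copied. Roughly 16 bytes per node versus well over a hundred
    // for a typical object-per-element DOM.
    class CompactTree {
        private int count = 0;
        private int[] tagStart = new int[1024]; // tag name start offset
        private int[] tagEnd   = new int[1024]; // tag name end offset
        private int[] parent   = new int[1024]; // -1 for the root
        private int[] nextSib  = new int[1024]; // -1 if last child

        int addNode(int start, int end, int parentIx, int prevSibIx) {
            if (count == tagStart.length) grow();
            tagStart[count] = start;
            tagEnd[count]   = end;
            parent[count]   = parentIx;
            nextSib[count]  = -1;
            if (prevSibIx >= 0) nextSib[prevSibIx] = count;
            return count++;
        }

        CharSequence tag(CharSequence source, int node) {
            return source.subSequence(tagStart[node], tagEnd[node]);
        }

        private void grow() {
            tagStart = Arrays.copyOf(tagStart, count * 2);
            tagEnd   = Arrays.copyOf(tagEnd,   count * 2);
            parent   = Arrays.copyOf(parent,   count * 2);
            nextSib  = Arrays.copyOf(nextSib,  count * 2);
        }
    }

The grad-student part is making edits, namespaces, and error recovery
live on top of something like this.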
The other problems I talked about are social, e.g., getting existing
algorithms (for validation, say) to work with a solution. In that
regard I will throw in my own wildcard: the high cost of abstraction in
current programming practice. Nearly all high-performance algorithms
are tightly coupled to particular data structures. The dependency
inversion rule (paraphrased: "Don't let high-level classes depend on
low-level details") is almost never followed by great programmers
(like, e.g., James Clark, Kohsuke Kawaguchi, Michael Kay (don't feel
left out if you aren't on this highly personal list of programmers
whose work I have examined in detail)), presumably because they, like
the rest of us, a) require actual proof that their solutions work,
b) don't tolerate well the overheads introduced by indirection, and
c) (maybe, just a thought) work in a culture where dependency
inversion is neither the norm nor highly valued.
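
The indirection cost is concrete in a parser's inner loop. A hedged
illustration of the choice (names invented): the coupled version gives
the JIT a plain array loop; the abstract one pays an interface call
per character.

    // The abstraction tax in miniature: two ways to skip whitespace.
    // Same algorithm; different coupling to the data structure.
    class ScanSketch {
        // Tightly coupled: knows the data is a char[].
        static int skipWsCoupled(char[] buf, int pos) {
            while (pos < buf.length && buf[pos] <= ' ') pos++;
            return pos;
        }

        // Decoupled: depends only on the CharSequence abstraction;
        // one virtual length() and charAt() per character.
        static int skipWsAbstract(CharSequence buf, int pos) {
            while (pos < buf.length() && buf.charAt(pos) <= ' ') pos++;
            return pos;
        }
    }

A good JIT can often devirtualize the second form when only one
CharSequence implementation is loaded, but nobody on the list above
bets a parser's inner loop on "often."
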
Bob Foster
http://xmlbuddy.com/
>
> Cheers,
>
> Tom P