Do you know of some intrinsic reason why a parser can't scale linearly?
AFAIK, a parser only needs to retain an element stack and a set of
entity definitions that is fixed when document parsing begins. Without
validation, what's the issue?
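
To make the point concrete, here is a rough sketch (in C) of the kind
of non-validating scan I mean: one pass over the input, constant work
per byte, and the only per-document state is the element stack.
Attributes, comments, PIs, CDATA, and entity expansion are all glossed
over.

/* One-pass, non-validating element scanner (sketch only). Work per
 * input byte is constant; memory is the element stack, bounded by
 * nesting depth rather than document length. */
#include <stdio.h>

#define MAX_DEPTH 256
#define MAX_NAME  64

int main(void)
{
    const char *doc = "<a><b>text</b><c/></a>";
    char stack[MAX_DEPTH][MAX_NAME];   /* element name stack */
    int depth = 0;

    for (const char *p = doc; *p; ) {
        if (p[0] == '<' && p[1] == '/') {          /* end tag: pop */
            p += 2;
            if (depth > 0)
                depth--;
            while (*p && *p != '>') p++;
            if (*p) p++;
        } else if (p[0] == '<') {                  /* start tag: push */
            p++;
            int n = 0;
            while (*p && *p != '>' && *p != '/' && n < MAX_NAME - 1)
                stack[depth][n++] = *p++;
            stack[depth][n] = '\0';
            printf("open %s (depth %d)\n", stack[depth], depth);
            if (depth < MAX_DEPTH - 1)
                depth++;
            if (*p == '/') { depth--; p++; }       /* empty element */
            while (*p && *p != '>') p++;
            if (*p) p++;
        } else {
            p++;                                   /* character data */
        }
    }
    return 0;
}

Nothing in that loop grows with document size, which is why I'd expect
linear behaviour absent validation.
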
Bob
Rick Marshall wrote:
> can we just go back a minute - raw speed is not the only issue, it is
> the way in which it degrades. O(n^2) (order n squared) performance
> will always be bad, just faster bad.
>
> big documents will degrade badly - and this is the real thing to beat -
> not simply raw speed.
>
> rick
>
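
As an aside on where that n-squared typically comes from in practice,
here is a hypothetical C fragment (nothing to do with libxml) showing
the classic trap: accumulating character data with strcat() rescans the
buffer on every append, so n appends cost O(n^2) in total, while
keeping a write cursor makes the same work linear.

/* Hypothetical illustration of quadratic degradation. strcat() walks
 * from the start of buf to find the terminator on every call, so
 * appending n chunks costs O(n^2); the second loop does the same job
 * in O(n) by remembering where the end is. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNKS 20000
#define CHUNK  "0123456789"

int main(void)
{
    size_t chunk_len = strlen(CHUNK);
    char *buf = malloc(CHUNKS * chunk_len + 1);
    if (!buf) return 1;

    /* Quadratic: each strcat rescans everything appended so far. */
    buf[0] = '\0';
    for (int i = 0; i < CHUNKS; i++)
        strcat(buf, CHUNK);

    /* Linear: keep a write cursor instead. */
    char *end = buf;
    for (int i = 0; i < CHUNKS; i++) {
        memcpy(end, CHUNK, chunk_len);
        end += chunk_len;
    }
    *end = '\0';

    printf("%zu bytes accumulated\n", strlen(buf));
    free(buf);
    return 0;
}

Double CHUNKS and the first loop takes roughly four times as long, the
second only twice - which is exactly the "faster bad" degradation on
big documents.
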
> Rick Jelliffe wrote:
>
>> Oleg A. Paraschenko wrote:
>>
>>> I think the issue is a bit different. An experienced developer can
>>> implement a very fast parser, for example, in 1 year. But to whom
>>> can he sell it? I just don't see a market for XML parsers.
>>>
>> Hence the need for something like a consortium offering a cash prize.
>> Kickstart.
>>
>> Here is how I would see it working. 15 organizations (banks, vendors,
>> etc.) get together and put $1,000 each into a kitty. They announce
>> that they will pay a $10,000 first prize and a $5,000 second prize
>> for the two fastest non-viral open source XML parsers that meet the
>> bottom line of being twice as fast as libxml (as of the current
>> version) at non-validating parsing of a particular suite of
>> ASCII-dominated transactions of about 1 to 10K each. Contest to run
>> for six months.
>>
>> What do the sponsors get out of it? Worst case: no one wins; no cost,
>> no benefit (though proving we need to go beyond XML does have a
>> value, actually!). Best case: a tiny investment, a substantial
>> improvement in the performance of multi-million dollar assets and
>> transaction rates, and the ability to adopt desirable new
>> architectures. The techniques are open source and non-viral, so they
>> can potentially feed into commercial products (at the end of the day,
>> Bill gets all the $$$ no matter what!)
>>
>> Any takers? Joseph Chiusano: know anyone?
>>
>> Cheers
>> Rick
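
For anyone tempted by the contest: below is a rough sketch of the sort
of harness an entry might be timed against, using libxml2's
xmlReadMemory(). The sample transaction and the iteration count are
made up; a real contest would use the sponsors' corpus.

/* Rough benchmark harness sketch (libxml2). Parses one small ASCII
 * transaction repeatedly and reports throughput; a contest entry
 * would be timed the same way on the same corpus. Build with:
 *   gcc bench.c $(xml2-config --cflags --libs)  */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <libxml/parser.h>

int main(void)
{
    const char *doc =
        "<txn><id>42</id><amount>19.95</amount>"
        "<payee>ACME</payee></txn>";               /* placeholder doc */
    int len = (int)strlen(doc);
    const int iters = 100000;                      /* placeholder count */

    clock_t start = clock();
    for (int i = 0; i < iters; i++) {
        /* Default options: non-validating parse from memory. */
        xmlDocPtr d = xmlReadMemory(doc, len, NULL, NULL, 0);
        if (d == NULL) return 1;
        xmlFreeDoc(d);
    }
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("%d parses in %.2f s (%.0f docs/s)\n",
           iters, secs, iters / secs);
    xmlCleanupParser();
    return 0;
}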