From: "Rich Salz" <firstname.lastname@example.org>
> Sure, you made *my* code as efficient, but you made applications less
> efficient by requiring them to handle multiple calls for chardata.
I spent my early years in real-time assembler programming for microcontrollers:
writing UARTs for doing Hayes-style autodetect (which is one of the parent memes for
the XML encoding header): some people who had 6805-based modems in the late 80s
perhaps used a product that had my code in it. So I acknowledge that there are
(or, at least, were) applications where every cycle and every byte counts.
Here, the applications do not need to be less efficient: for example, if the parser counted
each entity reference, then you could have two versions of the downstream functions
that might care (and that are important enough to warrant the extra space and attention): one used
while there are still entity references outstanding, and another for when there are none (or no more).
(I.e., this is a kind of "strategy" pattern, I suppose.) Replacing
switches with function objects or dispatch arrays is one of the most basic speed optimizations.
That would add only a fixed couple of cycles per function call per document if there are no
references. Obviously, this is not an approach that would be convenient to
retro-fit onto code that was already created.
> As I responded to Miles, the (dedicated?) application greatly benefits from
> knowing that the chardata is all in a single buffer, available all at once.
For the case of internal entities, the text can still be all in a single buffer,
as my suggestion shows. I was trying to point out that the claims that
supporting (internal) entity references for documents that normally won't
have entity references *must* cause a significant increase in space or
performance seem to fail to take into account some implementation