On Thu, 11 Nov 2004 16:44:29 -0500
"Roger L. Costello" <costello@mitre.org> wrote:
> Have I missed any steps/delays? /Roger
I think so. Take a step back, view the problem slightly more widely.
I'm going to use an analogy here, so take it with a grain of salt.
Classic network file systems are designed around the idea of mapping
system file procedures into network transactions. A lot of naive
designs that use XML do much the same thing: they wrap a relatively
small amount of data up in a call, and get a relatively small amount of
data in return (fopen, fseek, fread equivalents).
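To make the fine-grained pattern concrete, here is a minimal sketch in
Python (the endpoint and element names are invented for illustration,
not any real service):

    import urllib.request

    def rpc(op, **params):
        # Wrap one tiny operation in one network round trip.
        fields = "".join("<param name='%s'>%s</param>" % kv
                         for kv in params.items())
        body = "<call op='%s'>%s</call>" % (op, fields)
        req = urllib.request.Request(
            "http://example.org/fileservice",
            data=body.encode("utf-8"),
            headers={"Content-Type": "application/xml"})
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode("utf-8")

    # Reading one file costs many round trips: open, then repeated reads.
    handle = rpc("fopen", path="/docs/report.xml")
    chunk = rpc("fread", handle=handle, offset=0, length=4096)

Every call pays a full round trip plus a parser start-up on each side,
no matter how little data it actually carries.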
XML enables larger granularity. Rather than sending "commands" and
getting "return values", you can send and receive larger documents.
This corresponds to the way that WebDAV works, for instance. Instead of
"opening, seeking, reading", you just get the document/file. Fewer,
somewhat larger network transactions; in many cases one can rely on
low-level infrastructure to speed the operations. Compression is also
more effective in this scenario. Parser start-up times contribute
overhead; if the documents are larger, that overhead recedes toward
insignificance.
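A contrasting sketch of the coarse-grained style, assuming only a plain
HTTP server that can gzip its responses (the URL is again illustrative):

    import gzip
    import urllib.request
    import xml.etree.ElementTree as ET

    # One transaction: fetch the entire document, WebDAV-style.
    req = urllib.request.Request(
        "http://example.org/docs/report.xml",
        headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req) as resp:
        data = resp.read()
        if resp.headers.get("Content-Encoding") == "gzip":
            data = gzip.decompress(data)

    # One parser start-up for the whole document,
    # instead of one per tiny call.
    doc = ET.fromstring(data)

One round trip, one parse, and the compressor gets a large, redundant
XML body to work with rather than a handful of bytes.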
To increase performance, increasing the message size (that is,
increasing the content, and with it the granularity of operations) is
often far more effective than attempting to bum a few cycles out of
fine-grained operations.
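A toy cost model makes the point; the numbers below are assumptions
chosen for illustration, not measurements:

    # Each message pays a fixed cost (round trip + parser start-up)
    # plus a per-kilobyte cost for the payload itself.
    ROUND_TRIP_MS = 20.0      # assumed latency per message
    PARSER_STARTUP_MS = 5.0   # assumed parser start-up per document
    PER_KB_MS = 0.1           # assumed cost per kilobyte moved/parsed

    def total_ms(messages, total_kb):
        fixed = messages * (ROUND_TRIP_MS + PARSER_STARTUP_MS)
        return fixed + total_kb * PER_KB_MS

    print(total_ms(1000, 1000))  # 1000 fine-grained calls: 25100.0 ms
    print(total_ms(1, 1000))     # one coarse document:       125.0 ms

The fixed per-message cost dominates until the messages get large, and
no amount of cycle-bumming on the fine-grained path recovers it.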
Amy!
--
Amelia A. Lewis amyzing {at} talsever.com
To be whole is to be part; true voyage is return.
-- Laia Asieo Odo (Ursula K. LeGuin, "The Dispossessed")