On Tuesday 04 February 2003 15:09, Rich Salz wrote:
> How much of the problem "XML needs to be compressed" is solved by just
> using GZIP transfer encoding over HTTP?
That's fine if bandwidth-on-the-wire is your problem, but not if time/space
complexity at the endpoints is; gzip needs sizeable working data structures
and a fair amount of CPU.
My machine at home, which could max out the 10Mb/sec network link with ONC RPC
requests (the null procedure, just to test architectural overheads), managed
(I think) something like 1MB (8Mb)/sec of gzipping from one file on disk to
another, at 100% CPU utilisation.
In other words, the machine would have no hope of keeping up with the network
at full load as a gzip-based XML web services / RPC server, even if the
processing time required for the application logic were not the bottleneck -
and I thought it was in all these XML apps, right? :-)
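
(For anyone who wants to repeat the measurement, here's a rough sketch of the
disk-to-disk test - the filenames are placeholders, and it uses Python's gzip
module rather than the gzip binary, so absolute numbers will differ:

    import gzip
    import os
    import shutil
    import time

    SRC = "sample.xml"      # placeholder: any large XML file
    DST = "sample.xml.gz"

    start = time.perf_counter()
    with open(SRC, "rb") as fin, gzip.open(DST, "wb") as fout:
        shutil.copyfileobj(fin, fout)   # stream-compress disk to disk
    elapsed = time.perf_counter() - start

    mbytes = os.path.getsize(SRC) / 1e6
    print("%.1f MB in %.2fs -> %.2f MB/s (%.1f Mb/s)"
          % (mbytes, elapsed, mbytes / elapsed, 8 * mbytes / elapsed))

Compare the Mb/s figure it prints against your link speed.)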
> /r$
ABS
--
A city is like a large, complex, rabbit
- ARP