On Tue, Aug 05, 2003 at 05:16:39AM +1000, Rick Jelliffe wrote:
> [...] the current web infrastructure
> (HTTP's MIME header's view of compression of text/* and the widespread
> availability of ZIP decoders on HTTP clients) has a certain level of
> support for ZIP already. It is an important aspect.
Just a note of clarification here...
gzip, bzip2 and ZIP are all different compression systems.
Zip (widely used in the Microsoft Windows and MS-DOS worlds) puts
a table of contents, the "central directory", at the end of the data
stream. This reduces memory overhead in the compressor, but makes
ZIP unsuitable in many cases for streaming.
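To illustrate (Python here is just for demonstration; it is not part of the original mail): the End of Central Directory record, signature PK\x05\x06, sits at the very tail of a ZIP archive, so a reader has to seek to the end before it can even list the entries.

```python
import io
import zipfile

# Build a small ZIP archive in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("hello.txt", "hello, world\n")
data = buf.getvalue()

# The End of Central Directory record is the LAST thing in the file;
# with no archive comment it occupies exactly the final 22 bytes.
eocd = data.rfind(b"PK\x05\x06")
assert len(data) - eocd == 22

# A local file header (PK\x03\x04) comes first, but trusting those
# alone is unreliable -- the authoritative index is at the end, which
# is why streaming readers struggle with ZIP.
assert data.startswith(b"PK\x03\x04")
```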
The GNU zip program, gzip, is not a drop-in replacement for
programs such as pkzip, winzip and zip. It compresses a single file
(or stream), but supports streaming decompression.
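A quick sketch of that streaming property, using Python's standard zlib module (the chunk size here is an arbitrary stand-in for network reads): a gzip stream can be decompressed as the bytes arrive, without ever holding the whole compressed file.

```python
import gzip
import zlib

original = b"x" * 100_000
compressed = gzip.compress(original)

# wbits=31 tells zlib to expect the gzip container format.
# Feed the compressed bytes in arbitrary-sized chunks, as a network
# client would receive them, and collect output incrementally.
d = zlib.decompressobj(wbits=31)
out = bytearray()
for i in range(0, len(compressed), 512):   # pretend 512-byte reads
    out += d.decompress(compressed[i:i + 512])
out += d.flush()

assert bytes(out) == original
```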
bzip2 is similar to gzip. It often gets better compression, at the
expense of higher CPU and/or memory usage. With the most widespread
implementation of bzip2, decompression can require over 3.4 megabytes
of memory; reducing the block size from the default (900 KB) during
compression will reduce this, at the expense of needing more CPU time.
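In Python's bz2 module, the block size is what the compresslevel argument selects: level N means N x 100 KB blocks, so level 1 caps the decompressor's memory needs well below what the default 900 KB blocks can demand. A small sketch (the sample data is made up for illustration):

```python
import bz2

data = b"the quick brown fox jumps over the lazy dog\n" * 20_000

# compresslevel 1 = 100 KB blocks (low decompression memory),
# compresslevel 9 = 900 KB blocks (the default, highest memory).
small_blocks = bz2.compress(data, compresslevel=1)
big_blocks = bz2.compress(data, compresslevel=9)

# Either way the data round-trips; only the block size, and hence
# the memory footprint and compression ratio, differs.
assert bz2.decompress(small_blocks) == data
assert bz2.decompress(big_blocks) == data
```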
Web servers usually use gzip (e.g. for Apache, install mod_gzip).
> So a programmer serializing non-XML data may face the choice:
> 1) generate XML text, and compress. Send to a vanilla client which
> understands ZIP
> 2) generate some binary (e.g. ASN.1 BER or PER), maybe compress, send to
> to [a] special purpose client, maybe which needs to be told to
> uncompress too somehow.
> These are completely different approaches at an architectural level, and no
> cause to make anyone sick.
I just wanted to rein in the loose use of "ZIP" a little.
Liam Quin, W3C XML Activity Lead, firstname.lastname@example.org, http://www.w3.org/People/Quin/
Ankh's list of IRC clients: http://www.valinor.sorcery.net/clients/