On Fri, 25 Jul 2003 11:52:42 +0100, Paul Spencer <xml-dev@boynings.co.uk>
wrote:
>
> They tried XML down the wire with very simple compression and found that
> they were transferring data faster than previously.
I'm hoping that people will come to the W3C workshop with some hard data
and profiling information. Obviously there are LOTS of variables governing
the value of "binary" formats in a system:
- The speed of the processors on both ends
- The bandwidth of the connection
- The tightness of the format coupling between the apps at each end
- The availability of well-tuned compression, XML, etc. utilities on the
platforms
- The volume of transactions expected
- The time-criticality of results in the worst case
AFAIK, compression alone makes a big difference when you have a slow
connection (or lots of traffic on a fast one) and plenty of
horsepower on at least the compressing end. It can actually slow things
down if you have a fast network and overloaded processors. ASN.1 and
friends work really well when you have tight schema coupling and the "self-
describing" XML markup is just noise. And none of these performance
optimizations matter if you don't have many transactions or you don't care
how long they take. Finally, NOTHING will help people who prematurely
optimize the bits of an application that are not actual performance
bottlenecks!
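That break-even point is easy to sketch with a toy model: total time is
compression CPU cost plus wire time for the (smaller) payload. The numbers,
payload, and `transfer_time` helper below are all made up for illustration,
but the gzip ratio is real:

```python
import gzip

def transfer_time(size_bytes, bandwidth_bps, compress_secs=0.0, ratio=1.0):
    """Seconds to ship a payload: CPU cost of compressing + wire time."""
    return compress_secs + (size_bytes / ratio) / bandwidth_bps

# A repetitive XML payload -- the kind that compresses extremely well.
xml_payload = (b"<orders>"
               + b"<order id='1'><item>widget</item></order>" * 500
               + b"</orders>")
size = len(xml_payload)
ratio = size / len(gzip.compress(xml_payload))  # actual gzip ratio

# Slow link (~56 kbit/s, about 7 KB/s): compression wins easily,
# even charging 50 ms of CPU time for it.
slow = 7_000
assert transfer_time(size, slow, 0.05, ratio) < transfer_time(size, slow)

# Fast LAN (100 MB/s) with an overloaded box (200 ms to compress):
# the CPU cost dwarfs the wire time saved, so compression loses.
fast = 100_000_000
assert transfer_time(size, fast, 0.2, ratio) > transfer_time(size, fast)

print(f"gzip ratio ~{ratio:.0f}x on this payload")
```

Crude as it is, the model captures why the answer flips with the
bandwidth/CPU mix rather than being a property of the format itself.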
I'm hoping we can get enough information to get past the "binary XML is
evil", "XML is bloated and slow", etc. generalizations and understand the
alternatives and tradeoffs more thoroughly.