On Wednesday 09 January 2002 04:34 pm, Tim Bray wrote:
> At 10:51 PM 08/01/02 -0500, Gavin Thomas Nicol wrote:
> >> "there's an evil little secret about Web services that most vendors
> >> don't talk about. Web services' protocols are very fat, and that means
> >> that Web services interactions over the network will be slow and eat up
> >> a large chunk of bandwidth"
> >
> >This is pretty well known.
>
> I think this assertion is content-free.
As you wish, Tim. I think this comment is content-free, and insulting to boot.
You should know very well that I understand the issues, and your (trivially
obvious) point about overall system throughput.
>This kind of thinking goes on all over the place. I call it
>the "junior-engineer-deciding-to-code-it-in-assembler-to-make-
>it-faster-without-measuring-first" fallacy. -Tim
I take offense at this.... I've been doing distributed programming for at
least as long as most people I know, including you. The point I was making,
had you cared to take it in, is that naive, fine-grained use of SOAP-ish
things results in poor performance... substantially worse than equally naive
use of pure binary RPC. I have tested this, and found it to be true.
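To put a rough number on "fat", here's a sketch of the flavor of comparison I
mean (illustrative only... the envelope namespace is SOAP 1.1's, everything
else is made up):

import struct

symbol = "XXXX"

# A SOAP 1.1-style request envelope for a one-argument call.
soap_request = """<?xml version="1.0"?>
<SOAP-ENV:Envelope
 xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
 <SOAP-ENV:Body>
  <m:GetQuote xmlns:m="urn:example-quotes">
   <symbol>XXXX</symbol>
  </m:GetQuote>
 </SOAP-ENV:Body>
</SOAP-ENV:Envelope>"""

# The same logical call in a naive binary RPC framing:
# 2-byte method id + 2-byte length prefix + the string itself.
binary_request = struct.pack(">HH", 42, len(symbol)) + symbol.encode("ascii")

print(len(soap_request.encode("utf-8")), "bytes of XML envelope")
print(len(binary_request), "bytes of binary framing")
# Roughly 240 bytes versus 8, per request... and that's before the HTTP
# headers, or the parse cost on each end, are even counted.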
My point was that if you use fatter protocols, you need to take that fatness
into account in the design... it skews the set of applications away from
synchronous fine-grained RPC (which is what a lot of people are doing) to
coarser-grained, possibly asynchronous RPC/messaging (which most people
aren't doing). I know this, but I'm not sure that developers at large do...
especially as most of the tools make it *trivial* to wrap any old object up
in RPC/SOAP (I remember the very cool Visual Studio .NET demo where they took
a plain ol' COM object and made it a web service with but a few clicks).
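To make the skew concrete, a back-of-the-envelope model (all numbers assumed,
purely for illustration):

PER_CALL_OVERHEAD_MS = 50   # assumed: envelope marshal/parse + round trip
PER_ITEM_WORK_MS = 1        # assumed: actual server-side work per item

def fine_grained_cost(n_items):
    # One remote call per item... the naive RPC-wrapping style.
    return n_items * (PER_CALL_OVERHEAD_MS + PER_ITEM_WORK_MS)

def coarse_grained_cost(n_items):
    # One remote call (or async message) carrying the whole batch.
    return PER_CALL_OVERHEAD_MS + n_items * PER_ITEM_WORK_MS

for n in (10, 100, 1000):
    print(n, fine_grained_cost(n), coarse_grained_cost(n))
# At 1000 items: 51000 ms versus 1050 ms.

The fixed per-call cost dominates, so the fatter the protocol, the harder the
design is pushed toward batching or messaging... whether you like it or not.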
I also remember, during the DOM work, being flamed by the CORBA folk because
the interfaces were too fine-grained for naive distributed use. We knew that,
but we chose IDL (despite MS naysaying its use) anyway. This is at least
partly because we used it as a representation language (and I wrote the
original XML DTD for those bindings), but also because those of us with
experience a) acknowledged the problem, and b) knew ways of making even a
fine-grained DOM perform well *if needed*, while also acknowledging that such
systems are much harder to build than naive ones. The problems with naive
CORBA use in a distributed world are similar to those with SOAP/XML-RPC,
though the performance degradation isn't as great. The skills needed to write
a good CORBA application and a good web service are similar, as are the
complexities.
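One such way, sketched purely from memory and with hypothetical names
(fetch_subtree included... nothing here is from any shipped code): put a
caching proxy on the client side that pays one coarse round trip for a whole
subtree, and answers the fine-grained calls locally.

class FakeTransport:
    # Stand-in for the wire; one coarse call returns a whole subtree.
    def fetch_subtree(self, node_id):
        return {"name": "doc", "children": [{"name": "p", "children": []}]}

class NodeProxy:
    def __init__(self, transport, node_id):
        self._transport = transport
        self._node_id = node_id
        self._cache = None

    def _ensure(self):
        # The first fine-grained call pays one round trip; the rest are local.
        if self._cache is None:
            self._cache = self._transport.fetch_subtree(self._node_id)

    def node_name(self):
        self._ensure()
        return self._cache["name"]

    def child_count(self):
        self._ensure()
        return len(self._cache["children"])

node = NodeProxy(FakeTransport(), node_id=1)
print(node.node_name(), node.child_count())   # one fetch, two answers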
As for "What you care about is the performance of the whole system.": of
course that is true. You, I, and many other people know this, and would take
care to make sure the application isn't very chatty. Not everyone knows how
to do this, or how to tune a network in the face of chattier systems.
I have personally seen the results of such careless design (more than once
actually), and they are simply terrible. Such applications run great on a
single-user local machine with reasonable load, but as soon as you get any
kind of distributed load, the thing falls apart. I recently saw this problem
in a leading digital asset management system (nothing I personally was
involved in), and I'd be more than happy to share the gory details offline if
you wish.
As things go, HTTP is also well known as a pretty inefficient protocol
overall (remember Erik Naggum? ;-)). It took a number of years for clients and
servers to mature, for authors to understand how to write for the web, and
for network administrators to figure out how to tune things so HTTP would run
efficiently. I fully expect a similar adoption curve for web services...
which may well be supplanted during this period by the moral equivalent
of BEEP, or something else entirely (probably not, but you never know...).
I should also note that even in some major corporations, reliable
high-bandwidth connections may or may not exist. I went to one very major
bank a bit over a year ago, and they used Lotus Notes because it was the only
system that worked reliably over their admittedly poor networks, without
sucking up too much bandwidth. As such, "throw more bandwidth at it" may not
be the answer... especially as much of the traffic now is very "bursty".
As I've said in other messages, I'm not flaming web services, just noting
that using them well will require skills that many people today simply don't
have... and that getting good performance will take a lot more effort than people
might think. They might appear "cool and new", but the problems, the
solutions, and the complexities are actually pretty old.
You are right to avoid premature optimisation... biggest waste of time
around. That said, knowing how to avoid optimisation usually requires an
understanding of the characteristics of the system being designed for....