On Wednesday 09 January 2002 08:07 pm, Tim Bray wrote:
> Sorry Gavin - I was irritated not at you but at the original
> "dirty secret" article, and I stand by my claim that it was
> effectively content-free.
I apologize too... I was probably a little too quick to respond.
> In a typical database-centric application? Wow... my experience
> is different, and my intuition is that in the end, the difference
> between SOAP et al, RMI, CORBA, etc, will all come out in the wash.
It depends on the kind of application, but yes, I have seen the use of
XML-RPC cause a *considerable* slowdown over straight binary RPC.
Again, the most recent case of this was the media asset management
system I alluded to, which is essentially a metadata wrapper over a
content store using a relational database for property storage.
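To put a rough number on the "fat", here's a quick sketch in modern
Python (the setProperty call and its arguments are made up, not from
the system I mentioned) comparing the same call marshalled as XML-RPC
and as a naive binary encoding:

    import struct
    import xmlrpc.client

    # Hypothetical call: setProperty(asset_id=1234, key=7, value=3.14)
    args = (1234, 7, 3.14)

    # XML-RPC wraps the call in a full XML document...
    xml_payload = xmlrpc.client.dumps(args, methodname="setProperty")

    # ...while a binary protocol might use a 1-byte opcode plus
    # fixed-width fields: 1 + 4 + 4 + 8 = 17 bytes.
    binary_payload = struct.pack("!BiId", 1, *args)

    print(len(xml_payload.encode()))  # a few hundred bytes of markup
    print(len(binary_payload))        # 17 bytes

And the size gap understates things, since the XML also has to be
parsed on the far end, which costs CPU on top of the bytes.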
I would bet that if you went to some of the stock trading companies in
the US (some/many of whom are great CORBA shops... they do amazing
stuff) and asked them to deploy a SOAP vs. CORBA app, you'd see a
marked difference in system behaviour. Their systems are built on
CORBA, are often fine-grained, and have very stringent performance
requirements. I doubt a SOAP-based system would (at least not
initially) meet their needs.
> >My point was that if you use fatter protocols, you need to take
> > that fatness into account in the design...
>
> Even before you have an idea what the general shape of the
> processing workload is?
No. Obviously you have to weigh the different factors. My point was
that you need to understand the general structure of the system,
including the performance characteristics of the RPC mechanism, before
working out the broad architecture. As you said, if database
performance is poor, the RPC mechanism overhead will be lost in the
noise. Conversely, if the database is fast, and there are very large
numbers of requests, the RPC overhead might be extreme. There are some
applications that require very high-speed, fine-grained, synchronous
updates (stock trades for example), and I would be very surprised if
deploying a SOAP-based application in those domains didn't require
significant hardware and network investments.
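As a back-of-envelope illustration of that weighing (all numbers
invented, just to show the shape of the trade-off):

    def overhead_fraction(rpc_ms, work_ms):
        """Share of total request time spent in the RPC layer."""
        return rpc_ms / (rpc_ms + work_ms)

    # Slow database: 2 ms of marshalling is lost in the noise.
    print(overhead_fraction(rpc_ms=2.0, work_ms=50.0))  # ~0.04 (4%)

    # Fast database, fine-grained calls: the same 2 ms dominates.
    print(overhead_fraction(rpc_ms=2.0, work_ms=1.0))   # ~0.67 (67%)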
A skilled carpenter knows the difference between a hammer and a
screwdriver, knows when each is appropriate, and knows when to use
nails vs. screws (though in a pinch, he might use a screw as a nail
;-)). The same should be true of programmers.
> My instinct has always been to build systems in the simplest
> possible way (which XML message passing usually is) as fast
> as possible and then once you have something working, as a side
> effect you'll have an understanding of the problem you're
> actually trying to solve. Sort of the Extreme Programming approach.
Yes. I know your skill in programming, and appreciate it. I'm similar,
but I try to do "the simplest thing that doesn't preclude me from
further evolution". It usually requires at least one prototype before
I understand what that is... I think experience can help you make the
right choices earlier on, so again, knowledge of all aspects of the
system is useful.
> >As things go, HTTP is also well-known as being a pretty inefficient
> > protocol overall (remember Erik Naggum ;-)).
>
> You know, I don't believe that any more. Empirically, HTTP-based
> systems seem to degrade way more gracefully under load than anyone
> would reasonably expect from analyzing things.
Well, I remember well the period from 1993-1996 or so when people were
having all kinds of problems with HTTP. It required a lot of software
development, and a lot of network tuning, before people really figured
out how to make it work well. The graceful degradation we see now is
an effect of that work, not of the design of HTTP per se.
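Part of what made early HTTP feel so inefficient for fine-grained work
is easy to see with a toy example (the header text below is
representative, not from any real trace):

    # HTTP/1.0-style framing: full headers, and originally a fresh TCP
    # connection, for every single request.
    request = (
        "POST /rpc HTTP/1.0\r\n"
        "Host: example.com\r\n"
        "Content-Type: text/xml\r\n"
        "Content-Length: 120\r\n"
        "\r\n"
    )
    payload_bytes = 120  # a small, fine-grained RPC body
    header_bytes = len(request.encode())
    print(header_bytes / (header_bytes + payload_bytes))  # ~0.4

Persistent connections and caching were among the software fixes that
eventually amortized those costs.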
> The real reason the web is slow is because of its server-centricity
> and the fact that you're not allowed to do any significant
> processing on the client;
This is SO important, and SO true. I suffered through this in a system
I built many years ago. The initial design used S-expressions over
sockets as a form of extensible RPC (not unlike SOAP). The system
worked great until we started talking to machines across the Pacific.
We eventually settled on a design that offloaded most of the work onto
the client. In fact, it was a peer-to-peer system for the most part,
where peers synchronized with one another based on a QoS metric. Some
machines acted as servers because of their speed, and because they
were weighted in the QoS so that they were always updated.
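The idea was roughly this (a sketch with invented peer names and
weights, not the original code):

    import random

    # Each peer carries a QoS weight; fast, always-updated machines
    # get large weights and so are picked most often, behaving like
    # servers, but any peer can be chosen.
    peers = {
        "fast-server-1": 10.0,
        "fast-server-2":  8.0,
        "desktop-a":      1.0,
        "desktop-b":      0.5,
    }

    def pick_sync_peer(peers):
        """Choose a peer to synchronize with, biased by QoS weight."""
        names = list(peers)
        weights = [peers[n] for n in names]
        return random.choices(names, weights=weights, k=1)[0]

    print(pick_sync_peer(peers))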
> Amen. And I would further claim that we have no idea where the
> real bottlenecks are going to turn out to be.
Agreed here... the systems now, and the interactions between them, are
complex enough that often even simulations, let alone intuitions,
aren't accurate any more.