On Wed, 9 Jan 2002, Tim Bray wrote:
> My instinct has always been to build systems in the simplest
> possible way (which XML message passing usually is) as fast
> as possible and then once you have something working, as a side
> effect you'll have an understanding of the problem you're
> actually trying to solve. Sort of the Extreme Programming approach.
This is perhaps a fundamental divide in computing :-)
I think it's a personal thing. The way my mind works, I'm much happier
visualising an entire system in my head and putting a lot of effort into
initial design than 'try it and see'.
Some people say that you can't truly know what's going to happen until
you try it. I don't find this myself, but I expect it's just the way
their minds work.
I've seen lots of debate about which is the best way to write software;
I'm beginning to think that perhaps the best way is to combine both types
of people. Have me design it, then you implement it and debug it and feed
back to me on lessons learnt that I should use in future designs - I
suspect that would be the best use of our different mindsets!
> You know, I don't believe that any more. Empirically, HTTP-based
> systems seem to degrade way more gracefully under load than anyone
> would reasonably expect from analyzing things. The real reason the
> web is slow is because of its server-centricity and the fact that
> you're not allowed to do any significant processing on the client;
> the same reasons that IBM mainframes and VAXes were slow back in
> the eighties. In that context, the Web Services approach has the
> potential of providing a major performance boost.
Java applets deserve some consideration here. The idea there was to do
work on the client. With Web Services, you'll still more often than not be
visiting a server with your generic web browser (even if it does parse
XML), so clicking on a button will require a server round trip - plus,
potentially, other server round trips as the server gathers responses
from the other services it calls on your behalf.
> I'm sure Gavin and myself could each reel off a dozen stories
> of slow systems where "everybody knew" what the problem was,
> and everybody was wrong. -Tim
I recently had to debug a database action that was taking a lot of
time. It did a query on the table to look for possible primary key
conflicts, then a few hundred inserts or updates (depending on whether the
primary key already existed).
90% of the time was in the initial query, despite that being a simple
primary key lookup:
SELECT * FROM table WHERE (id = 1 OR id = 5 OR id = 1123 OR ...).
Why? It was doing a full table scan (across a *vast* table). Despite a
UNIQUE index on 'id'. And why was that?
Because the 'id' column was of type 'bigint', and my '1' or '5' or '1123'
were being cast to normal 'integer', which was confusing the query planner
and making it not use the index.
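For the curious, the fix looks roughly like this. This is a sketch, not
my actual schema: 'my_table' is a stand-in name, and the cast syntax is
the portable CAST(... AS bigint) form; whether the planner actually
needs the hint depends on your database and version.

```sql
-- The column 'id' is bigint with a UNIQUE index. Bare literals like 1
-- are typed as plain integer, which (on the system I hit) stopped the
-- planner from matching them against the bigint index:
SELECT * FROM my_table WHERE id = 1 OR id = 5 OR id = 1123;

-- Casting each literal to the column's type lets the index be used:
SELECT * FROM my_table
 WHERE id = CAST(1 AS bigint)
    OR id = CAST(5 AS bigint)
    OR id = CAST(1123 AS bigint);
```

Running both forms under EXPLAIN (or your database's equivalent) is the
quickest way to see whether you're getting an index lookup or a full
table scan.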
Alaric B. Snell
http://www.alaric-snell.com/ http://RFC.net/ http://www.warhead.org.uk/
Any sufficiently advanced technology can be emulated in software