From: Liam Quin [mailto:liam@w3.org]
>I think your analogies are getting a little strained here :)
Likely.
>If we'd waited for a 100% reliable Web, with pre-fetching,
>distributed cache integrity, and all the other needs that
>could reasonably have been foreseen in the late 1980s,
>we'd maybe still be waiting - but the Web has come a long way
>since 1989, and we wouldn't have that experience.
We're going to do the hindsight thing. Ok. One pass
only for me. Then can we move this to a discussion of the
best architecture, one that uses Internet technology in
proportion to risk?
I agree that teams should form and get experience.
1. The events that made the web possible were the
release of the internet to commercial use and the
advent of cheap microprocessors and memory. Too many
other groups were working on distributed hypermedia
for this not to have come about in roughly the same
timeframe. It became a race to see who could do it
fast and cheap despite the numerous warnings from
the Internet community that TCP/IP-based systems
were vulnerable and insecure. As long as the system
was simply for moving hyperlinked research papers,
the risks were acceptable. Forms and the ambitions
of the inexperienced but funded Netscape company
put success in a competition ahead of risks at a time
when it was easy to bafflegab the business community
and delight the press with an underdog story. Far too
early, it became a money game. I'm not sure what
could have been done about that once Clark funded
Netscape.
>There were people who said the ISO networking stack was
>much better than TCP/IP - it was certainly more sophisticated,
>and the size (and cost) of the specs helped to keep small
>firms excluded nicely and equipment costs high. Whether
>that was intended I have no idea. But the ISO WGs didn't
>foresee modern DDoS attacks either, and neither did anyone else.
As you note below, some knew. Also, small firms don't get an
automatic right to play a game they cannot afford, and nothing
in the intervening years has changed that. In fact, the dominance
of the Microsofts and IBMs is more entrenched than it has
ever been. Nothing changed there. I don't know what you were
hearing but I was hearing from the GE network experts in
1989 that the Internet was not secure and that certain
kinds of attacks were inevitable. One might speculate
that outside small and secure groups, such knowledge
did not proliferate. What did happen
was that most people I spoke with then did not take
the WWW and TimBL seriously. It seemed insane to them
that anyone would, given the risks of what he was proposing
and the issues he did not seem to understand or else want to
acknowledge.
>When you get to the point where a 14-year-old kid sitting at
>home can quietly infect tens of thousands of Windows XP systems
>remotely,
It is doable with Linux and Unix as well. XP systems offer a
juicy target for master/zombie attacks because they dominate
the desktops. This isn't about the virus; it is about the
systemic vulnerability to DDoS.
>and then use them all at once to send multiple gigabytes
>per second of network data at a single target, it's hard to see
>how any infrastructure could have coped.
That is the point, and thanks. As long as the Internet design
is that flaky, it is risky to tie certain systems
together with it. The WWW and the
press have to acknowledge this and to heck with the hindmost.
>it's like firing up
>your space shuttle to Mars only to find the intervening space
>has suddenly filled with millions of explosive mines so densely
>that no shuttle could hope to get past... and then blaming the
>rocket engineers for such a stupid design that didn't predict
>the change ;-)
You are stretching. Compare it
to asteroids. These are predictable. What would not be
predictable would be a collision that created a micrometeoroid
swarm, but that might take out, at most, one mission, and it
wouldn't repeat often. Same for sunspots. We know they
happen; we don't always know when, but we design for them.
DDoS is understood now and was understood then. The message
here is not that the Internet can be made safe but that so far,
it can't, and certain kinds of systems shouldn't be on it. IOW,
this is a rant against 80/20, Simple Is Always Good,
and the "IP is the Whole Future" stuff coming out in
the press. It is time for the W3C to start acknowledging
the risks.
TimBL talks about 'underpowered languages' and simplicity.
Tim Bray talks about "reasonable amount of money and used by
lots of people" as the criteria of success. It makes a VC
sit up and take notice, but the other side of that rap is
the exposure to risk that comes from underdesigning a critical system.
>The online world isn't bound directly by physics - changes
>far more dramatic can and do happen. In fact, DDoS attacks
>by untrusted hosts were predicted in the early 1980s, when a
>Sun workstation cost under US$10,000 and could be connected
>to a University network via a Vampire clamp, and then could
>send forged packets onto the net... something previously
>very difficult. A couple of years later, PCs with ethernet
>cards were doing the same... and now PCs with broadband.
Yes. We knew, but we pushed ahead. For some applications
that was reasonable; for others, not.
>In this case it turns out that the ISPs have the power to
>limit most of the damage -- they can detect forged packets
>when a client sends them over the cable modem, and drop them.
Not in all cases if I understand what Gibson is saying.
Still, yes, this is the kind of thinking that is productive.
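For illustration only, here is a minimal sketch (mine, assuming
nothing about any particular vendor's gear) of the kind of
egress check an ISP could apply at the cable head-end: drop any
packet whose source address is not inside the block assigned to
the customer port it arrived on.

    # Sketch only: source-address (egress) filtering at the ISP edge.
    # Assumes the ISP knows which address block each customer port
    # owns; the port name and prefix below are made up.
    from ipaddress import ip_address, ip_network

    CUSTOMER_PREFIXES = {
        "cable-port-42": ip_network("203.0.113.0/29"),
    }

    def should_drop(port_id, src_ip):
        # Drop anything claiming a source the port was never assigned.
        prefix = CUSTOMER_PREFIXES.get(port_id)
        return prefix is None or ip_address(src_ip) not in prefix

    print(should_drop("cable-port-42", "203.0.113.5"))   # False: legitimate
    print(should_drop("cable-port-42", "198.51.100.7"))  # True: forged, drop

That only helps at the forging host's own ISP; it does nothing
once a packet is already loose on the backbone.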
>Or disconnect the user and send a bill. That would get
>people setting Administrator passwords on their XP systems,
>and turning off file sharing, and being careful before
>clicking on attachments!
I agree with part of that, but once again, you indulge
the witless part of the agenda: let's clobber Microsoft.
Let's distract the discussion by invoking the devil.
XP systems are vulnerable but so are Linux
systems. So are Unix systems. So are Solaris systems.
It is the nature of the design of the Internet.
>The ISPs could go further and reject forged email. Then
>the current wave of email viruses and spam (and viruses
>that are used for spammers to send email) would go away.
But they have to look first, and again, Gibson says such
forgeries aren't always detectable. Should we get rid
of anonymous accounts? XP could remove the raw sockets.
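To make the raw-socket point concrete, a short sketch (mine, not
Gibson's code, and it sends nothing): with an ordinary socket the
kernel stamps the host's real address onto every outgoing packet,
while a raw socket with IP_HDRINCL lets the application supply
the whole IP header, source address field included.

    # Sketch only; needs administrator/root privilege, sends no traffic.
    import socket

    # Ordinary UDP socket: the kernel fills in the source address.
    normal = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # Raw socket with IP_HDRINCL: whatever IP header the application
    # writes, source address included, goes out as-is.  Restricting
    # this is what "remove the raw sockets" would mean in practice.
    raw = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
    raw.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1)

Whether taking that away would matter much is a separate argument,
since the same facility exists on the other platforms named above.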
>But as others have said, a new wave would arise.
So lie down and be buggered? I don't think so.
>You mention DARPA funding of Web research -- it's true
>(I think) that there's DARPA funding for Semantic Web
>research, and no doubt for other work trying to move the
>Web forward. But don't confuse the Web with the Internet -
Don't confuse the steering wheel with the engine? Ok,
but when a customer buys a car, that isn't in the
sales pitch, is it?
>You could have a World Wide Web with a different
>infrastructure - e.g. over JANET with X.25 and friends.
Ok.
>At any rate, you can look back and say, "with all we
>know today, the Web should have been designed differently"
>but I don't think such reasoning is productive.
The heck it isn't. If we don't learn from our mistakes,
we are just politicians running for election and covering.
>Better
>to say "with what we know now, the following areas will
>need improvements". And that's research that's being
>done today, of course.
No d'oh. Again, with the same analogy to the German
rocket scientists: as long as the lessons are learned,
I can root for a team.
The W3C priorities should reflect the immediate realities
and needs. What is the mandate of the consortium?
len