Right. In the Kama Sutra, don't try the advanced positions
until you master the simple ones. If you are too old by the
time you get to those, at least you had fun before you died.
Kay's point was that most innovations occur in smaller sets
but that computer science en masse is a pop culture. So
exactly yes: HTML made SGML successful, much to the chagrin
of the HTMLers. Bo Bice will make southern rock successful
again, much to the chagrin of the Allman Brothers and Lynyrd
Skynyrd, and they will all be happy to let him record their
old material for his new albums. Copyrights are wonderful
when the means of distribution is tightly controlled.
And I thank Intel every day for the extra money in my bank account,
because we won that lawsuit over their theft of our IP: pushing
their crappy instruction set through our design. Sometimes,
architectures do matter. Patent them.
Yes, you would still be subscribed to comp.text.sgml and arguing
about rellocs and having fun.
Kay said he didn't worry about it. He is having fun.
len
From: Bob Foster [mailto:bob@objfac.com]
Bullard, Claude L (Len) wrote:
> Alan Kay points out an interesting bit of data: Moore's
> Law gave us approximately a 40,000x to 60,000x increase
> in raw processor speed, while CPU architecture only gave
> us about a 50x increase, therefore wasting a factor of
> about 1,000 (roughly 50,000 / 50) of Moore's Law on
> expedient architectures.
>
> http://www.acmqueue.org/modules.php?name=Content&pa=showpage&pid=273&page=3
I think if Alan Kay tried that line of argument here - that the fact
that one old Xerox benchmark hasn't sped up much means hardware
designers threw away a 1000x performance increase - he'd get the same
treatment as those who have benchmarks that prove that XML parsing
throws away 50x performance compared to binary XML. It is just as
likely that the reason that old benchmark doesn't run faster is that
it doesn't run an inner loop out of registers or, even better, flow
data through a pipeline.
Moore's Law does not guarantee that random memory access speeds up in
proportion to CPU speed. In other words, the bottleneck for the kinds
of benchmarks Kay is likely to be interested in - symbolic processing -
is the von Neumann architecture. The reason CPU designers don't build
a better general-purpose architecture is, sure, expedience, but also
that they don't know how.
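To put a number on that, here is a minimal C sketch - hypothetical
sizes and loop bodies, nothing to do with Kay's actual benchmark - of
two serial dependency chains of the same length: one that lives in a
register and one that chases pointers through a working set far larger
than any cache. On typical hardware the memory chain runs orders of
magnitude slower, and faster transistors barely help it:

    /* register chain vs. memory chain: a sketch of the von Neumann
       bottleneck, not a rigorous benchmark */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1u << 24)            /* 16M nodes, far larger than cache */

    static unsigned long long seed = 88172645463325252ULL;
    static unsigned long long xorshift(void) {   /* tiny portable PRNG */
        seed ^= seed << 13; seed ^= seed >> 7; seed ^= seed << 17;
        return seed;
    }

    int main(void) {
        /* Sattolo's algorithm builds one big cycle, so the chase must
           walk all N nodes and every load depends on the previous one. */
        size_t *next = malloc(N * sizeof *next);
        if (!next) return 1;
        for (size_t i = 0; i < N; i++) next[i] = i;
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)(xorshift() % i);
            size_t t = next[i]; next[i] = next[j]; next[j] = t;
        }

        clock_t t0 = clock();
        size_t p = 0;
        for (size_t i = 0; i < N; i++)
            p = next[p];                 /* DRAM latency bound */
        clock_t t1 = clock();

        unsigned a = 1;
        for (size_t i = 0; i < N; i++)
            a = a * 1664525u + 1013904223u;   /* register bound */
        clock_t t2 = clock();

        printf("memory chain %.3fs, register chain %.3fs (p=%zu a=%u)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, p, a);
        free(next);
        return 0;
    }

Both loops execute the same number of dependent operations; the only
difference is where the data lives.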
As for his choice of expediency as the primary reason that not all
progress is forward, I'm reminded of the doctor who was nervous about
prescribing antidepressants because their common side effects included
the very symptoms they were supposed to treat. You could just as well
credit expediency as the reason any progress gets made at all.
Automatic garbage collection sat on the shelf, in terms of mainstream
computing, for nearly 40 years before Java won acceptance for it by
the expedient of a syntax that looked a lot like C. Use of the :=
operator for assignment would probably have killed it.
Intel went merrily along building crappy little processors until it
was threatened by RISC, whereupon it expediently adopted all the
little design tricks RISC had and expediently threw silicon at the
problem of mapping its crappy instruction set onto a RISC pipeline
with register renaming. By this means it decisively demonstrated that,
in the long run, as long as you share the same meta-architecture, the
instruction set's got nothing to do with it.
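For the curious, here is a toy C sketch of that register-renaming
trick - a hypothetical register alias table, not any shipping core,
and real hardware also recycles physical registers at retirement. Each
micro-op's destination gets a fresh physical register, so back-to-back
writes to the same architectural register stop serializing and
independent micro-ops can issue out of order:

    /* toy register renaming: 8 architectural x86 registers mapped onto
       a larger physical file, the silicon Intel "threw at the problem" */
    #include <stdio.h>

    #define ARCH_REGS 8
    #define PHYS_REGS 64

    static int rat[ARCH_REGS];        /* register alias table: arch -> phys */
    static int next_free = ARCH_REGS; /* trivial free list, never recycles */

    /* Rename one micro-op "dst = src1 op src2". Sources read the current
       mapping; the destination gets a fresh physical register, so an
       older in-flight reader of dst is unaffected. */
    static void rename_uop(int dst, int src1, int src2) {
        int p1 = rat[src1], p2 = rat[src2];
        int pd = next_free++ % PHYS_REGS;  /* real cores recycle on retire */
        rat[dst] = pd;
        printf("uop: p%-2d <- p%-2d op p%-2d  (arch r%d now maps to p%d)\n",
               pd, p1, p2, dst, pd);
    }

    int main(void) {
        for (int r = 0; r < ARCH_REGS; r++) rat[r] = r;
        /* Two writes to r1 in a row: after renaming they target
           different physical registers, so they no longer serialize. */
        rename_uop(1, 2, 3);   /* r1 = r2 op r3 */
        rename_uop(1, 4, 5);   /* r1 = r4 op r5: independent after rename */
        return 0;
    }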
SGML and its predecessors were the most successful text formats you
never heard of for over 20 years, until the expediency of HTML yanked
SGML into the mainstream. Without HTML there would be no xml-dev, and
if that
ain't forward progress, I don't know what is. ;-}