That is a good technical synopsis, Didier, so I won't
attempt to reiterate it. I will touch on the other
points.
First, our navel discussions have a purpose. As I said,
intelligence in the universe is locally originated and
then only spreads by sharing taxons. Proximity in
space and the origin in time affect the rate of that
and that rate determines our overall learning curve.
The web folded that space and things have gone
considerably faster in our technical areas ever since.
We make mistakes but we make more of them so we
are learning faster than we did. Some local group
might have gotten further faster, but being advanced
is no guarantee of being able to communicate. What
we don't recognize, we don't see or hear.
In short, every time we do this addressing and data model
thread, we get a few more near neighbors to understand
the why and how of it. At some point, a critical density
of neighbors will form a community of understanding large
enough to build and share applications based on it.
Extreme Markup must have gone awfully well this year.
It is at the threshold of achieving that density of
understanding that we will see another explosion of
innovation in web application development. Will
this be because what was groves becomes RDF? I
suppose it could. Some are not comfortable with a
predicate logic system and others adore the formality
of that model. Groves stumbled not just in the timing
of its introduction (i.e., it was too far ahead of the
general understanding, so it almost slipped through the
gaps, and would have had it not been for the original,
slow-growing community of understanding), but also
in the initial difficulty of the terms
being presented. It took time for a lot of people
to get past the Jorn Barger mentality of link and do,
that overly simplistic and ultimately fatal assumption
that addressing could be completely subsumed under a
conflation of address and location (Jorn lad, you can
walk up behind a woman and hit her with a stick and
drag her into your cave, but you can't get her to
cook a meal well or not slit your throat in the
night that way. Try understanding her first.).
RDF might have a larger community of understanding, but it also has
a larger community of legacy to pull behind it, not
only in terms of previous applications (say, metatags)
but also people who have to be convinced to use it for
metadata modeling. It can't be done by fiat; SGML/HyTime
proved that. Until a community of understanding is
large enough to be self-sustaining, any concept that
defines that community (culture is a sharing of sign
systems, vocabularies if you will) is always on
life support.
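For anyone following this thread without an RDF background, the "predicate logic system" mentioned above comes down to statements of the form subject-predicate-object. A minimal sketch (the URNs, predicate names, and the match helper here are illustrative, not from any real vocabulary):

```python
# Minimal sketch of RDF-style metadata: each statement is a
# (subject, predicate, object) triple, i.e. a predicate-logic assertion.
# All identifiers below are made up for illustration.

triples = {
    ("urn:ex:doc1", "dc:creator", "Len"),
    ("urn:ex:doc1", "dc:subject", "groves"),
    ("urn:ex:doc2", "dc:creator", "Didier"),
}

def match(s=None, p=None, o=None):
    """Return the triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Who created doc1? Query by fixing subject and predicate.
print(match(s="urn:ex:doc1", p="dc:creator"))
```

The point of the model is that once metadata is reduced to triples like these, independently authored sets of statements can be merged and queried uniformly, which is where the "reason on and exchange the results of reason" payoff is supposed to come from.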
So it isn't just a matter of convincing the cobbler
(whoever that is): one has to convince his competitors
and the elves and dwarves that work for him too. The
cost of the semantic web must be met with a host of
new profitable applications. Otherwise, no money no
thrills. For that to happen, a few virgins will be
sacrificed, and then a few of them will build working,
useful applications that draw on their ability to
reason and to exchange the results of that reasoning.
Until then, we have the Barger stick.
len
-----Original Message-----
From: Didier PH Martin [mailto:martind@netfolder.com]
Hi Len,
Even if this looks like a navel discussion, in the context of what's
happening with XML and the level of dissatisfaction we feel with the
current direction, maybe this kind of discussion can help us go back
to the basics and to what is good about the Web. Simply that: the web,
the capacity to link things. This is why the linkage issue is so important,
especially in this decade, in which we will have to link multimedia stuff
with textual information (at least when broadband takes off; anyway,
Asia and the competitive pressure will help us move toward
that goal).