   Success factors for the Web and Semantic Web

  • From: Michael Champion <mike.champion@softwareag-usa.com>
  • To: xml-dev <xml-dev@lists.xml.org>
  • Date: Thu, 21 Dec 2000 04:12:54 -0500

Obviously the patronage of Tim Berners-Lee has given the semantic web
movement much of its name recognition and good PR.   Tim B-L was clearly
visionary in developing the Web as we know it (URIs, HTTP, and HTML), so I
think it would be useful to compare the factors that drove the success of
the Web with those that appear to be driving the SW.

 I would say that the success of the Web was due mainly to:
- Its leveraging of the TCP/IP infrastructure already in place
- The "network effect" that caused the value of the Web to increase
exponentially as it grew
- The fact that it met a real, if largely unrecognized, human and business
need
- The fact that HTTP and HTML were easy and cheap to implement and use


OK, how do the SW concept and its technologies (I'm referring mainly to Tim
BL's vision as stated in  http://www.w3.org/DesignIssues/Semantic.html  and
reported at http://www.xml.com/pub/a/2000/12/xml2000/timbl.html) fare if we
extrapolate these same success factors?

- It clearly DOES leverage the Web infrastructure.

- It presumably WILL have a strong network effect (although the "local vs
global ontologies" discussion makes me wonder about this)

- Does it really meet a real, unmet human and business need?  As several
people have mentioned, the search engines, especially Google, are getting
pretty darned useful lately.  True, this is partly due to the promotion of
metadata and the synergy between the search engines and the HTML <meta>
tag - if you want good placement in a search engine, you put good metadata
in your HTML.  But it's due largely, as Paul Tchistopolskii points out, to
algorithms that extract useful information from the HTML itself, especially
the "page ranking" technique.  The SW might offer real advantages over what
we have now, but not enough to overcome the "worse is better" bias built
into our brains, economic system, etc.
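
As a rough illustration of the "page ranking" idea, here's a toy Python
sketch: a page's score is a weighted sum of the scores of the pages
linking to it, iterated to a fixed point. This is a simplified version of
the published PageRank computation, not Google's production system, and
the link graph and damping factor are invented for the example.

# Toy "page ranking": rank flows from each page to the pages it links to.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = set(links)
    for targets in links.values():
        pages.update(targets)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            for target in targets:
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

# A page that many others link to ("c" here) comes out on top.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))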

- Will it be easy and cheap to implement and use?  This is where the SW
advocates have got, as near as I can tell, an unbridgeable chasm between
them and the real world. I'll bet that virtually everyone reading this list
"grokked" the URI/HTTP/HTML web concept very quickly, could hack up HTML
pages easily, etc.  On the other hand, after MONTHS of discussion here,
people are still pleading for a coherent explanation of what the SW really
is, begging to see plausible demonstrations, and basically hearing (from Tim
BL, no less) that one must be patient and have faith.  He wasn't saying that
about the WWW 10 years ago, he was demonstrating useful examples that he
hacked up!   He just cultivated the memes for URI/HTTP/HTML, set them loose,
and watched them take over the world.  This just ain't happening with the SW
memes; they've taken hold in some niches with extremely nurturing
environments, but haven't gotten anywhere in the cold, cruel world at large.

 I guess  I'm envisioning a URI/HTTP/XML Web in 10 years or so that looks a
lot like the WWW today, with search engines rather than logic engines still
being the primary way of finding new information.  I do expect them to use
more sophisticated RDF metadata embedded in XML as well as the HTML <meta>
tags, to use some topic maps or link bases to aid the search in certain
(probably limited) domains. But I can't foresee a "semantic web" of
universal ontologies guiding the development of logic bases that are
exploited by millions of autonomous agents running around deriving
interesting knowledge. I would be astonished if this proves to be
economically, intellectually, or technologically feasible in the next 25
or even 50 years.  Like Len Bullard, I remember all too well how certain
many were 25 years ago that this kind of "AI" would be a reality by the end
of the 20th century. This whole discussion gives me a rather stark sense
of deja vu ... and while computers are several orders of magnitude more
powerful than they were in 1975, the rest of the intellectual infrastructure
needed to make the semantic web a reality has not progressed anywhere near
as much.
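
To make "RDF metadata embedded in XML" a bit more concrete, here is a
hedged Python sketch in which each document carries (subject, property,
value) triples alongside its text, and a search consults both. The URIs
and the Dublin Core-style property names are invented for illustration.

# Each document: full text plus RDF-style metadata triples.
documents = {
    "http://example.org/intro": {
        "text": "An introduction to XML processing.",
        "triples": [("http://example.org/intro", "dc:title", "XML Intro"),
                    ("http://example.org/intro", "dc:subject", "XML")],
    },
    "http://example.org/notes": {
        "text": "Assorted notes on markup.",
        "triples": [("http://example.org/notes", "dc:subject", "RDF")],
    },
}

def search(term):
    """Match a term against either the full text or metadata values."""
    term = term.lower()
    return [uri for uri, doc in documents.items()
            if term in doc["text"].lower()
            or any(term in value.lower() for _, _, value in doc["triples"])]

print(search("RDF"))  # finds the notes page through its metadata alone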

So, I'm staying open minded about improved metadata and link bases/topic
maps, and especially about how to build better searching tools that use all
this data, metadata, and metametadata in a coherent way.   But I am deeply
skeptical about the possibility of coming up with useful, universal
ontologies that would underlie all the RDF assertions that the semantic web
would depend on, and deeply skeptical that there will be a sufficient
economic rationale for users to produce and maintain the complex metadata
that would support it. Would anyone care to try to convince me that: a) Tim
B-L's "semantic web" of universal ontologies/RDF logic bases/interoperable
logic processors would solve PRACTICAL problems of knowledge management
dramatically better than what we can do with our brains, the WWW, and
databases/search engines?   and b) The logic bases and metadata to support
Tim BL's vision could be developed, maintained and used by ordinary humans
who are just trying to get their jobs done?
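
For what it's worth, here is a minimal Python sketch, with an invented
vocabulary, of the sort of inference a "logic base" over RDF assertions is
supposed to perform: deriving class memberships that were never stated
directly by chaining subClassOf assertions.

from collections import defaultdict

# Hypothetical assertions; "subClassOf" and "type" stand in for the
# corresponding RDF Schema notions.
triples = [
    ("Dachshund", "subClassOf", "Dog"),
    ("Dog", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
    ("fido", "type", "Dachshund"),
]

parents = defaultdict(set)
for s, p, o in triples:
    if p == "subClassOf":
        parents[s].add(o)

def types_of(entity):
    """All classes an entity belongs to, following subClassOf chains."""
    result = {o for s, p, o in triples if s == entity and p == "type"}
    frontier = list(result)
    while frontier:
        for parent in parents[frontier.pop()]:
            if parent not in result:
                result.add(parent)
                frontier.append(parent)
    return result

print(sorted(types_of("fido")))  # ['Animal', 'Dachshund', 'Dog', 'Mammal']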