RE: First Order Logic and Semantic Web RE: NPR, Godel, Semantic Web
- From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
- To: Danny Ayers <danny@panlanka.net>
- Date: Thu, 10 May 2001 13:45:53 -0500
http://www.infotoday.com/searcher/jan00/feldman.htm
Stereotyping can take place wherever you have a tool
and the will to do it. Common stereotyping languages
and topic maps probably have a lot in common. They
are an opinion about a relationship one finds "usually
true" so worthy of "the risk of maintaining a stereotype".
Official stereotypes are a solution and a problem. No
matter how you do this, it is risky. But the risk
is similar to the risk of creating and using patterns
in the real world. The SW can add a lot of weight to
an official stereotype by reference, creating what we might call
"broadcast credibility" (something said often enough and by enough
people acquires a high truth value).
But that is dangerous. History has come up several
times on this list today. How often has the history been
wrong? What would be the consequence of taking action
based on a false history or a true one? Some lose, some gain.
A broadcast system can be gamed, and behaviors based on
it can become superstitious.
How many times have you seen the statement:
"HTML is a subset of XML."?
Dead wrong. Often repeated. Almost impossible to
remove as an assertion in the system because it has
been said too many times by too many "credible"
authors in "credible" publications. The challenge
is expungement of false assertions in very large
replicated databases many of which you do not have
update privileges over. We deal with this in the court
systems all the time.
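To make the replication problem concrete, here is a rough sketch in
Python (rdflib and the example.org URIs are stand-ins chosen purely for
illustration; the triple below simply encodes the false "HTML is a
subset of XML" claim as data). You can expunge the assertion from a
store you control, but not from the copies that already replicated it.

  # Sketch only: rdflib stands in for any triple store, the example.org
  # URIs are invented, and the triple encodes the false claim as data.
  from rdflib import Graph, Namespace
  from rdflib.namespace import RDFS

  EX = Namespace("http://example.org/terms/")
  false_triple = (EX.HTML, RDFS.subClassOf, EX.XML)

  my_store = Graph()   # a store I have update privileges over
  mirror = Graph()     # a replicated copy I do not control

  my_store.add(false_triple)
  for triple in my_store:          # the assertion escapes into the environment
      mirror.add(triple)

  my_store.remove(false_triple)    # expunged locally...

  print(false_triple in my_store)  # False
  print(false_triple in mirror)    # True: the copy still asserts it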
The reasonable rule of thumb about a global semantic
web is that if it is updateable and enough people/systems
are using it, it will be corrected. That's the theory.
That is also the risk. Right to correct is a contract.
But CAN you correct it? Did the fact escape into the
environment and spawn? Remember, this is a very large
scale semantic web we are discussing. Understand that Napster
is proving very hard to control now, and it IS
centralized by index. Suddenly those who have trumpeted the
decentralized web are discovering that without a
powerful navy, the oceans become pirate-infested.
Caution: Mammals At Work.
The semantic web must be considered a service for services,
a layer of information over information which services
can use to go about their business. If the quality
of information and the concept mappings are good, then
the service will be effective, but it cannot be the
ultimate means to resolve disputes or to discover
"truth". That is meaningless. If it gets you to
the goal, it works, but you will be responsible for
knowing when and with whom your goals are in conflict.
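As a rough sketch of that "layer of information over information" idea
(the mapping table, the service, and the example.org URLs below are
purely illustrative assumptions, not anything prescribed here): a
service consults the layer's concept mappings to find resources, but
the layer never decides who is right.

  # Sketch under assumptions: hypothetical concept mappings and metadata
  # that a service consults to go about its business. The result is only
  # as good as the mappings; the layer does not adjudicate truth.
  CONCEPT_MAP = {
      "automobile": "car",
      "physician": "doctor",
  }

  METADATA_LAYER = {
      "car": ["http://example.org/docs/buying-a-car"],
      "doctor": ["http://example.org/docs/finding-a-doctor"],
  }

  def lookup(term: str) -> list[str]:
      """Map the caller's term to the layer's concept, then fetch resources."""
      concept = CONCEPT_MAP.get(term, term)
      return METADATA_LAYER.get(concept, [])

  print(lookup("automobile"))   # ['http://example.org/docs/buying-a-car']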
We are able to cope with a stereotyped
world, and we find that the authority and legitimacy of
a source, how they are obtained and how they are maintained,
become crucial issues. These are not, as far as I know,
automatable. We can train agents but insofar as
they represent us, we are culpable for their actions.
The only truth is immediate. The only justice individual.
Unless you can negotiate and correct the Semantic Web,
don't build it. It becomes a Golem. If you understand
the legend, you understand the problem and the solution.
Len
http://www.mp3.com/LenBullard
Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h
-----Original Message-----
From: Danny Ayers [mailto:danny@panlanka.net]
Sent: Thursday, May 10, 2001 11:06 AM
To: xml-dev@lists.xml.org
Subject: RE: First Order Logic and Semantic Web RE: NPR, Godel, Semantic Web
I would question where the stereotyping is to take place, and who or what
makes the assumptions - "when someone has a Ph.D., it is ok to assume that
he is nearsighted". An assumption is made and then the reasoning is made
using FOL - why so? If the data isn't clear cut, why use clear cut logic? It
has oft been pointed out that a lot of the techniques for reasoning as
discussed in the context of the semantic web have been around for decades
(some for millennia). Techniques for inexact reasoning have also been
around for a good while. You can either (rather inefficiently) do
number crunching directly on the data/documents or you can develop metrics
and work with these. Let's not forget that in addition to the information
about a document that might have been specified by a human in Dublin Core or
might be deduced from the location of the document, there is also the
content of the document from which metadata can be mechanically extracted.
There's a lot of data around. One of the common problems with e.g. neural
networks is having large enough data sets with which to train them. The web
is pretty big though. Another requirement is the computational power - the
web's not really lacking there either. Let the machine make its own
assumptions - it might even figure out what activity the good doctors
indulged in to cause their near-sightedness.
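A minimal sketch of what "mechanically extracted" metadata could look
like (the term-frequency approach, the stopword list, and the sample
text are illustrative assumptions, not something proposed above):

  # Sketch only: a crude term-frequency extractor as one example of
  # metadata pulled mechanically from document content.
  import re
  from collections import Counter

  STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "it", "that"}

  def extract_keywords(text: str, top_n: int = 5) -> list[str]:
      """Return the most frequent non-stopword terms as candidate metadata."""
      words = re.findall(r"[a-z]+", text.lower())
      counts = Counter(w for w in words if w not in STOPWORDS)
      return [word for word, _ in counts.most_common(top_n)]

  doc = "The semantic web is a web of data. Data about data is metadata."
  print(extract_keywords(doc))   # e.g. ['data', 'web', 'semantic', ...]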