RE: First Order Logic and Semantic Web RE: NPR, Godel, Semantic Web
- From: "Bullard, Claude L (Len)" <email@example.com>
- To: Danny Ayers <firstname.lastname@example.org>, email@example.com
- Date: Thu, 10 May 2001 12:36:08 -0500
The problem is not in having two nodes, even with
different means of representing or extracting
the semantics. The problem is having two nodes
agree on that meaning such that they
can automatically complete a transaction. That
is why I say operational definitions and records
of authority (vetted systems) are required.
Otherwise, it is a let-the-buyer-beware system.
For some things, that is OK; for others, not. Money
will dictate quality of service.
Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h
[Truth is one; the wise call it by many names. Restrain. Give. Be compassionate.]
From: Danny Ayers [mailto:firstname.lastname@example.org]
Sent: Thursday, May 10, 2001 11:06 AM
Subject: RE: First Order Logic and Semantic Web RE: NPR, Godel, Semantic Web
I would question where the stereotyping is to take place, and who or what
makes the assumptions - "when someone has a Ph.D., it is ok to assume that
he is nearsighted". An assumption is made and then the reasoning is done
using FOL - why so? If the data isn't clear-cut, why use clear-cut logic? It
has often been pointed out that many of the techniques for reasoning
discussed in the context of the semantic web have been around for decades
(some for millennia). Techniques for inexact reasoning have also been
around for a good while. You can either (rather inefficiently) do
number crunching directly on the data/documents, or you can develop metrics
and work with those.

Let's not forget that in addition to the information about a document that
might have been specified by a human in Dublin Core, or might be deduced
from the location of the document, there is also the content of the
document itself, from which metadata can be mechanically extracted.
There's a lot of data around. One of the common problems with, e.g., neural
networks is having large enough data sets with which to train them; the web
is pretty big, though. Another requirement is computational power - the
web's not really lacking there either. Let the machine make its own
assumptions - it might even figure out what activity the good doctors
indulged in to cause their near-sightedness.
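The inexact reasoning mentioned above can be sketched as a weighted-evidence score rather than a crisp FOL deduction. This is a minimal illustration only: the function name, the cues, and the weights are all invented for the sketch, and the noisy-OR combination is just one of many ways to mix weak evidence.

```python
# Hypothetical sketch: score a soft conclusion from weighted cues
# instead of asserting it as a crisp first-order fact.
# All cues and weights are invented for illustration.

def near_sighted_score(person: dict) -> float:
    """Combine weak cues into a confidence in [0, 1]."""
    cues = [
        (person.get("has_phd", False), 0.4),
        (person.get("hours_reading_per_day", 0) > 6, 0.3),
        (person.get("wears_glasses", False), 0.6),
    ]
    # Noisy-OR: each cue that holds independently raises confidence.
    remaining_doubt = 1.0
    for holds, weight in cues:
        if holds:
            remaining_doubt *= (1.0 - weight)
    return 1.0 - remaining_doubt

print(near_sighted_score({"has_phd": True}))  # 0.4
print(near_sighted_score({"has_phd": True, "wears_glasses": True}))
```

A crisp FOL rule would output only true or false; here more evidence raises the score smoothly, which fits data that "isn't clear cut".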
>From: Bullard, Claude L (Len) [mailto:email@example.com]
>Sent: 10 May 2001 19:14
>Subject: FW: First Order Logic and Semantic Web RE: NPR, Godel, Semantic Web
>Forwarded by request.
>Ekam sat.h, Vipraah bahudhaa vadanti.
>Daamyata. Datta. Dayadhvam.h
>[Truth is one; the wise call it by many names. Restrain. Give. Be compassionate.]
>From: Jay Zhang [mailto:firstname.lastname@example.org]
>First, it is necessary to defend the relevance of Goedel
>to the Semantic Web, although any attempt at a direct link
>seems a stretch. The justification for comparing TBL with
>the Russell and Whitehead of a hundred years ago is not so obvious.
>The "disappointment" theorems symbolized by Goedel are
>a way to support one simple intuition we tend to forget
>occasionally: language (or data) is much larger than
>logic. Once we forget it, we begin to deduce math through
>logic, or we begin to have AI replace human decision
>making. What is ultimately achieved is hardly more than
>exhaustive search algorithms.
>SW inspires the vision that the system, when fed enough
>information from an all-encompassing network - the Web -
>would be able to figure out what we need logically or
>answer questions on the spot. This vision of Prolog at
>large does hit theoretical walls!
>I know that I am treading in dangerous waters by
>elaborating, but I will still try.
>>"Without a system of stereotypes ("for any" and "there
>>always exists") to help us draw conclusions, a logic
>>is only a brute-force search algorithm on data. We
>>have failed to find any magic."
>>"The Semantic Web could hit the wall of Goedel if it
>>attempts to draw meta-conclusions. Without
>>meta-conclusions to work on, are we looking at a
>>data-search framework on the Web? In that case, the
>>inefficiency of formal deduction is an issue."
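The contrast drawn above can be made concrete with a toy sketch: querying bare ground facts is just a scan that finds nothing new, while a single universally quantified rule ("for any X, human(X) implies mortal(X)") lets naive forward chaining derive an answer. The facts and the rule are invented for illustration; this is not a semantic web implementation.

```python
# Illustrative sketch: ground facts plus one universally
# quantified rule, applied by one naive forward-chaining pass.
# Facts are (predicate, subject) tuples invented for the example.

facts = {("human", "socrates"), ("human", "plato")}

def apply_rule(known: set) -> set:
    """For all X: human(X) -> mortal(X)."""
    derived = set(known)
    for pred, subj in known:
        if pred == "human":
            derived.add(("mortal", subj))
    return derived

# Without the rule, a query for mortal(socrates) is a brute-force
# scan over the data that finds nothing; with it, the fact follows.
closure = apply_rule(facts)
print(("mortal", "socrates") in facts)    # False
print(("mortal", "socrates") in closure)  # True
```

The quantified rule is the "stereotype" in the quoted sense: one compact statement that stands in for arbitrarily many ground facts.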
>If we always draw on the original data to answer "semantic"
>questions, SW is just a distributed database where XML
>is the DDL. If we envision support for extensive and
>fast decision making, we need to "warehouse" some
>"likely" conclusions based on our data, such as: when a
>person has Z or Q in his name, we assume he is from
>mainland China; when someone has a Ph.D., it is ok
>to assume that he is nearsighted. Every human being
>(and even other animals) operates on these "stereotypes"
>to achieve efficient decision making. When we accept
>the risk and take these as truths, a first order logic
>axiom system, if one exists, would be what we need.
>Please repost to XML-DEV when you see fit.
>Jay Zhang, Ph.D.
The xml-dev list is sponsored by XML.org, an initiative of OASIS
The list archives are at http://lists.xml.org/archives/xml-dev/
To unsubscribe from this elist send a message with the single word
"unsubscribe" in the body to: firstname.lastname@example.org