RE: First Order Logic and Semantic Web
- From: Charles Reitzel <firstname.lastname@example.org>
- To: email@example.com
- Date: Thu, 10 May 2001 14:09:53 -0400
Many postings in the NPR thread have adequately shown that the Goedel Incompleteness Theorem is inapplicable to a system that is _not_ consistent, and that such a system, therefore, _cannot_ be used as a basis for deducing new facts. If I understand it right, the GIT itself, in fact, points this out.
All the same, I am curious how the proponents of the Semantic Web expect the pieces to "layer up." I think of it this way. There are many intermediate milestones to be reached before such dynamic associativity (as the SW calls for) can be made to consistently yield useful results.
- goal-oriented knowledge acquisition
- documents + databases + metadata
Standard vertical vocabularies come to mind. Note that work on many such vocabularies predates any XML-based efforts by some years (e.g. UN/EDIFACT, vertical E-R models). Some of these are probably very useful and do, in fact, promote interoperability between trading partners' systems. After vocabularies, we'll await the development and public deployment of controlled, domain-specific knowledge bases (i.e. expert systems). Once such things become the norm, you might find ways to meaningfully add a meta-layer over such systems that lets you dynamically perform inter-domain inferences. Things will get really interesting when such inference systems can acquire knowledge (from other available systems) in a goal-oriented fashion, i.e. knowledge-acquisition bots. Publicly available knowledge bases might be the search engines of the future.
In this scenario, one could see proof engines operating on known-consistent KBs. The RDF metadata I keep hearing about is early-stage stuff (at the proto-vocabulary level) and is hard enough to keep accurate and up to date all by itself. OTOH, most of the necessary parts are available today. As always, the greatest hurdles lie elsewhere: organization, finance, culture, usefulness, data quality, etc., etc. A great many more systems are possible than actually get built.
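The "proof engines operating on known-consistent KBs" idea above can be sketched as forward-chaining inference over RDF-style triples. This is a toy illustration only; the vocabulary terms (acme:widget, ex:Product, etc.) are invented, and a real system would use a proper RDF store and rule language:

```python
# Forward-chaining inference over RDF-style (subject, predicate, object)
# triples. All vocabulary names here are illustrative, not real ontologies.

facts = {
    ("acme:widget", "rdf:type", "ex:Product"),
    ("ex:Product", "rdfs:subClassOf", "ex:TradeItem"),
}

def subclass_rule(triples):
    """RDFS-style entailment: if (x rdf:type C) and
    (C rdfs:subClassOf D), derive (x rdf:type D)."""
    derived = set()
    for (x, p1, c) in triples:
        if p1 != "rdf:type":
            continue
        for (c2, p2, d) in triples:
            if p2 == "rdfs:subClassOf" and c2 == c:
                derived.add((x, "rdf:type", d))
    return derived

def forward_chain(triples, rules):
    """Apply rules repeatedly until no new triples appear (a fixpoint)."""
    kb = set(triples)
    while True:
        new = set()
        for rule in rules:
            new |= rule(kb) - kb
        if not new:
            return kb
        kb |= new

kb = forward_chain(facts, [subclass_rule])
print(("acme:widget", "rdf:type", "ex:TradeItem") in kb)  # True
```

The point of the fixpoint loop is exactly the consistency concern raised above: on a known-consistent KB it terminates with the entailed closure, but nothing in the machinery itself guards against garbage-in, garbage-out.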
take it easy,
At 10:05 PM 5/10/01 +0600, Danny Ayers wrote:
>I would question where the stereotyping is to take place, and who or what
>makes the assumptions - "when someone has a Ph.D., it is ok to assume that
>he is nearsighted". An assumption is made and then the reasoning is made
>using FOL - why so? If the data isn't clear cut, why use clear cut logic? It
>has oft been pointed out that a lot of the techniques for reasoning as
>discussed in the context of the semantic web have been around for decades
>(some for millennia). Techniques for inexact reasoning have also been
>around for a good while. You can either (rather inefficiently) do
>number crunching directly on the data/documents or you can develop metrics
>and work with these. Let's not forget that in addition to the information
>about a document that might have been human-specified in Dublin Core or
>might be deduced from the location of the document, there is also the
>content of the document from which metadata can be mechanically extracted.
>There's a lot of data around. One of the common problems with e.g. neural
>networks is having large enough data sets with which to train them. The web
>is pretty big though. Another requirement is the computational power - the
>web's not really lacking there either. Let the machine make its own
>assumptions - it might even figure out what activity the good doctors
>indulged in to cause their near-sightedness.
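Danny's point that metadata "can be mechanically extracted" from document content can be sketched very crudely as term-frequency keyword extraction. The stopword list and the sample text below are invented for illustration; real extraction would be far more sophisticated:

```python
# Toy "mechanical metadata extraction": pick the most frequent
# non-stopword terms in a document as candidate keywords.
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "for", "so", "can"}

def extract_keywords(text, top_n=3):
    """Return the top_n most frequent non-stopword terms."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

doc = ("The semantic web layers metadata over documents. "
       "Metadata describes documents so machines can reason "
       "about the semantic content of the web.")
print(extract_keywords(doc))
```

Even this crude sketch surfaces the terms a human cataloguer would likely pick, which is Danny's point: the document body itself is a metadata source alongside anything hand-entered in Dublin Core.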
>>From: Bullard, Claude L (Len) [mailto:firstname.lastname@example.org]
>>Sent: 10 May 2001 19:14
>>Subject: FW: First Order Logic and Semantic Web RE: NPR, Godel, Semantic
>>Forwarded by request.
>>Ekam sat.h, Vipraah bahudhaa vadanti.
>>Daamyata. Datta. Dayadhvam.h
>>From: Jay Zhang [mailto:email@example.com]
>>First, it is necessary to defend the relevance of Goedel
>>to the Semantic Web, although any attempt at a direct link
>>seems a stretch. It is not so obvious how to justify
>>comparing TBL with Russell and Whitehead a hundred years apart.
>>The "disappointment" theorems symbolized by Goedel is
>>a way to support one simple intuition we tend to forget
>>occassionally: language (or data) is much larger than
>>Once we forget it, we begin to deduct math thru
>>logic or we begin to have AI to replace human decision
>>making. What is ultimately achieved is hardly more than
>>exhaustive searching algorithms.
>>SW inspires the vision that the system, when fed enough
>>information from an all encompassing network - the Web,
>>would be able to figure out what we need logically or
>>answer questions on the spot. This vision of Prolog at
>>large does hit theoretical walls!
>>I know that I am treading in dangerous waters by
>>elaborating, but I will try anyway.
>>>"Without a system of stereotypes ("for any" and "there
>>>always exists") to help us draw conclusions, a logic
>>>is only a brute-force search algorithm on data. We
>>>have failed to find any magic."
>>>"The Semantic Web could hit the wall of Goedel if it
>>>attempts to get meta-conclusions. Without
>>>meta-conclusions to work on, are we looking at a
>>>data search framework on the Web? In that case,
>>>inefficiency of formal deduction is an issue."
>>If we always draw on original data to answer "semantic"
>>questions, SW is just a distributed database where XML
>>is the DDL. If we envision the support of extensive and
>>fast decision making, we need to "warehouse" some
>>"likely" conclusions based on our data, such as: when a
>>person has Z or Q in his name, we assume he is from
>>mainland China; when someone has a Ph.D., it is ok
>>to assume that he is nearsighted. Every human being
>>(or even other animal) operates on these "stereotypes"
>>to achieve efficient decision making. When we accept
>>the risk and take these as truths, a first-order logic
>>axiom system, if one exists, would be what we need.
>>Please repost to XML-DEV when you see fit.
>>Jay Zhang, Ph.D.
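Jay's "warehoused" stereotype conclusions might be sketched as defeasible default rules kept separate from hard facts, so the risk he mentions stays visible in the data. All predicates and names below are invented for illustration:

```python
# Hard facts vs. "stereotype" defaults, stored separately so that
# assumed conclusions can be told apart from asserted ones.
facts = {("jay", "hasDegree", "PhD")}

# Each default: if (s, premise_pred, premise_obj) is a fact,
# assume (s, conclusion_pred, conclusion_obj).
defaults = [
    (("hasDegree", "PhD"), ("isNearsighted", "true")),
]

def apply_defaults(facts, defaults):
    """Return (subject, pred, obj, assumed?) tuples.
    Conclusions drawn from defaults are flagged as assumed."""
    out = {(s, p, o, False) for (s, p, o) in facts}
    for (prem_p, prem_o), (concl_p, concl_o) in defaults:
        for (s, p, o) in facts:
            if p == prem_p and o == prem_o:
                out.add((s, concl_p, concl_o, True))
    return out

for s, p, o, assumed in sorted(apply_defaults(facts, defaults)):
    tag = "assumed" if assumed else "asserted"
    print(f"{s} {p} {o} [{tag}]")
```

Flagging the derived triples as "assumed" is one way to accept the risk Jay describes without silently promoting stereotypes to first-order axioms: a downstream reasoner can choose whether to trust them.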