
Re: NPR, Godel, Semantic Web

At 2001-05-08 09:34, John Cowan wrote:
>C. M. Sperberg-McQueen wrote: [Prolog example snipped]
>>This will (I think -- I'm kind of rusty and haven't
>>run this), when provided with a suitable set of facts
>>involving 'parent', produce the expected result.
>>It will for example, infer that 'abel' is human.
>>Change the order of the rules for 'human', however,
>>to 3412, and the system will have trouble
>>figuring out even that 'adam' is human.
>I fed both versions into GNU Prolog 1.2.1 and it
>handled both with no problem.

I really am rusty: can't I even generate an infinite loop on demand?

OK.  Instead of reordering the rules for 'human', change
'descendant' to:

descendant(N,N2) :- descendant(N3,N2), parent(N,N3).

This produces the following result on my Prolog system.

   ?- human(X).

   fatal error: *** unification overflow ***
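
Since the original example was snipped above, here is a reconstructed
sketch of the whole program -- the fact base and the 'human' rules are
my guesses at its shape, not the original text:

   parent(cain, adam).        % parent(Child, Parent) -- assumed order
   parent(abel, adam).
   parent(cain, eve).
   parent(abel, eve).

   human(adam).
   human(eve).
   human(X) :- descendant(X, adam).

   descendant(N, N2) :- parent(N, N2).
   descendant(N, N2) :- parent(N, N3), descendant(N3, N2).

With the clauses in this order, ?- human(abel) succeeds: Prolog
consumes a 'parent' fact before each recursive call, so the recursion
bottoms out.  Replace the second 'descendant' clause with the
left-recursive version above and the first subgoal re-invokes
descendant/2 on fresh, unconstrained variables before anything binds
them, so SLD resolution never terminates -- hence the overflow.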

>>    append([],L,L).
>>    append([A|B],C,[A|D]) :- append(B,C,D).
>>and next the restricted one:
>>    append([],L,L) :- !.
>>    append([A|B],C,[A|D]) :- append(B,C,D).
>>The second one works only when at the initial call the
>>first two arguments are instantiated and the third is
>>possibly not instantiated.
>Again, I cannot provoke gprolog into returning the wrong
>answer with the second definition, and I tried many cases.

My apologies; I was imprecise and misleading.  The difference
between the two is not that the second one gives wrong answers,
but that it fails to generate some solutions which the first one
does generate.  In particular, if we put the following definitions
into a file:

   myappend([],L,L).
   myappend([A|B],C,[A|D]) :- myappend(B,C,D).

   xappend([],L,L) :- !.
   xappend([A|B],C,[A|D]) :- xappend(B,C,D).

then we can compare the two definitions on the same call:

   $ bp
   Free BinProlog 1.99, written by Paul Tarau, 1992-93
   ftp: clement.info.umoncton.ca, e-mail: binprolog@info.umoncton.ca

   begin loading /cygdrive/d/usr/src/BinProlog/free_bp199/src/wam.bp...
   ...finished loading
   ?- consult(app).
   consulted(app.pro,time = 0)
   ?- myappend(L1,L2,[1,2,3,4,5]).
   L1=[],L2=[1,2,3,4,5];
   L1=[1],L2=[2,3,4,5];
   L1=[1,2],L2=[3,4,5];
   L1=[1,2,3],L2=[4,5];
   L1=[1,2,3,4],L2=[5];
   L1=[1,2,3,4,5],L2=[];
   no

   ?- xappend(L1,L2,[1,2,3,4,5]).
   L1=[],L2=[1,2,3,4,5];
   no

Note that while this is a case where Prolog cannot make inferences
which are clearly correct (which is, more or less, what I was trying
to provide and which was, more or less, what Michael Campion seemed to
be asking for), I should point out that I don't think the difference
between the two versions of 'descendant' or 'append' reflects the
limitations on formal logic exposed by Goedel's theorem; it just
reflects the limitations of Prolog's particular method of theorem
proving.  In the case of 'append', it reflects the fact that the
programmer told Prolog NOT to draw the inferences in question.  (On
the other hand, at a deeper level, the method Prolog uses has the
characteristic that it can go into infinite loops, and it IS, if I
recall correctly, a consequence of Goedel's Theorem that there isn't
any better mechanical way of doing theorem proving: the best you can
do is have some theorems proved true, some proved false, and some not
terminating before the machine runs out of resources.)

On the other hand, on consideration I think I disagree with the NPR
commentator -- the Semantic Web would indeed hit its head on Goedel's
Theorem if it attempted to make every proposition decidable.  But has
anyone involved with the Semantic Web work ever said that all
propositions will become decidable? Not in my hearing.  Lots of
brave-new-world talk and lots of things that some people believe are
going to be harder than they look, but none of the blue-sky talk has
ever (in my hearing) included claims of completeness, or claims that
every statement expressible can be proven true or false.

One way to make all questions decidable would be to assume, as Prolog
does, more or less, that failure to prove something true is for
practical purposes the same as proving it false.  Negation by failure
relies on a closed-world hypothesis, and the Web differs from earlier
hypertext systems in part by resisting anything that looks remotely
like a closed-world hypothesis.  There is no central list of HTML
pages; there is no central list of HTTP servers; etc.  If you try to
open a connection to a server, and it works, you can mostly assume
that there is a server at the address you tried (unless someone is
impersonating that server ...); if it doesn't work, it's not safe to
infer that there's no server at that address.
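
In Prolog terms this is exactly negation as failure: \+ Goal succeeds
when Goal cannot be proven, which is a sound reading of "false" only
if the program's facts are the whole world.  A sketch, with an
invented fact base (the host names are illustrative, not real data):

   % The only servers this program 'knows' about.
   reachable('www.example.org').
   reachable('lists.example.net').

   % \+/1 is negation as failure: it succeeds iff its goal is
   % unprovable from the clauses above.
   no_server_at(Host) :- \+ reachable(Host).

Here ?- no_server_at('unlisted.example.com') succeeds, because that
host is simply absent from the fact base; under the closed-world
hypothesis, absence of proof counts as disproof.  On the open Web the
same failed lookup licenses no such conclusion.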

Accept 404s and broken links as part of the cost of doing business;
accept a certain amount of credit-card fraud as part of the cost of
having a credit-card system (when it costs $1 to cut the level of
fraud by $1, what is the business reason to continue eliminating
fraud?), and accept that the Semantic Web is not going to guarantee
proofs of the truth or falsity of every proposition.  If the Semantic
Web improves the ability of software to work together and to make
correct or defensible inferences from data, it may improve things.
Neither a complete list of all the propositions implied by the data on
the Web, nor a proof of the continuum hypothesis, is part of the
minimum required to declare victory here.