   Re: [xml-dev] more politics


when building a semantic database engine, the one thing i realised is
that to be intelligent, it must be able to make mistakes and learn from
them - it learns at this stage from interaction with its designer (me),
who either builds in new abilities, or accepts that some things will
always be a bit of a guess.
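
roughly the shape of that interaction, as a sketch - the names below
are invented for illustration, not taken from the real engine:

class SemanticStore:
    def __init__(self):
        self.facts = {}    # (subject, relation) -> value, confirmed by the designer
        self.guesses = {}  # (subject, relation) -> value, inferred and possibly wrong

    def answer(self, subject, relation, infer):
        # prefer confirmed facts; otherwise guess via infer() and remember it was a guess
        key = (subject, relation)
        if key in self.facts:
            return self.facts[key], True    # known, not guessed
        if key not in self.guesses:
            self.guesses[key] = infer(subject, relation)
        return self.guesses[key], False     # treat with suspicion

    def correct(self, subject, relation, value):
        # designer feedback: a wrong (or missing) guess becomes a confirmed fact
        self.guesses.pop((subject, relation), None)
        self.facts[(subject, relation)] = value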

ever since we learned about np-complete problems (a long time ago now) -
problems like finding the shortest tour around a graph, the travelling
salesman problem - we have had to accept that you can't be right all the
time. the fantastic thing about the human brain is its ability to cope
with the insoluble, and with the errors in a situation. the "life goes
on" ability.
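
a concrete example of settling for "good enough": the standard
nearest-neighbour heuristic for the shortest-tour problem. it's fast,
it's simple, it's routinely wrong, and you live with that. a sketch in
python, nothing more:

import math

def nearest_neighbour_tour(points):
    # greedy tour of 2-d points: always hop to the closest unvisited one.
    # roughly n^2 work, but the tour can be noticeably longer than optimal.
    unvisited = list(points[1:])
    tour = [points[0]]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda p: math.dist(last, p))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour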

anyone who has tried building large IT systems knows the frustration of
discovering that the so-called experts in a business often haven't the
faintest idea of what they're really doing, and that they frequently
make significant mistakes. but somehow the whole thing - business and
people - keeps working.

all of which is to say that we have to build learning, and the
acceptance of faults, into any semantic or intelligent system. i'd be
very suspicious of any insistence that we can do otherwise. it's in the
same category as perpetual motion machines.

rick

On Fri, 2003-07-25 at 00:03, Simon St.Laurent wrote:
> cowan@mercury.ccil.org (John Cowan) writes:
> >> (b) does the difference have any effect on the behavior of the
> >> system?
> >
> >Definitely, since "the system" includes human beings and other
> >inference-drawing machines.  
> 
> To me, this is where it gets interesting.  Part of the genius of the
> original Web was that it didn't mind bad URLs - humans were part of the
> system and could deal with the 404 Not Found messages themselves.
> Annoying, but not likely to cause especially complicated problems.
> 
> In the Semantic Web, on the other hand, the URIs are under the covers,
> with no simple "GET it and tell me an answer or give me an error".  "The
> system", as it did for the Web, includes human beings and all their
> interpretive and creative foibles, but not their convenient
> error-handling capabilities - at least not until an awful lot of URIs
> may have been processed.
> 
> The original Web was simple enough that exception handling could bubble
> out to humans and there wouldn't be a huge problem.  The Semantic Web is
> a lot more complicated, and its keepers try very hard to keep humans
> far away, which seems like a seriously dangerous approach to me.
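
simon's contrast, made concrete: with a plain web fetch the failure
pops straight out to whoever asked, whereas a uri sitting inside an rdf
triple is just a name - nothing ever GETs it, so there is no 404 for a
human to notice. a sketch in python (the urls are invented):

import urllib.error
import urllib.request

# the old web: a bad url fails loudly and a person deals with it
try:
    page = urllib.request.urlopen("http://example.org/no-such-page")
    print(page.read())
except urllib.error.HTTPError as err:
    print("broken link:", err.code)   # 404 - annoying, but a human copes

# the semantic web: the uri is only an identifier inside a statement;
# nothing here ever dereferences it, so a typo silently names something else
triple = ("http://example.org/people#rick",
          "http://example.org/vocab#worksOn",
          "http://example.org/projects#semantic-db")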





 
