OASIS Mailing List Archives

RE: NPR, Godel, Semantic Web

Thanks Jay!

Here are the paragraphs I would appreciate 
some elaboration on, perhaps particularly 
"a system of stereotypes".

"Without a system of stereotypes ("for any" and "there
always exists") to help us draw conclusions, a logic
is only a brute force search algorithm on data. We
failed to find a magic."

"The Semantic Web could hit the wall of Goedel if it
attempts to get meta-conclusions. Without
meta-conclusions to work on, are we looking at a
data search framework on the Web? In that case,
inefficiency of formal deduction is an issue."

I ask because in the HumanML noodling, the concept of stereotyping 
has come up, and I am wondering whether there is a 
common or hidden coupling between the Godel problem 
and the problems of classifying human communication.

I'm going off topic a bit... put on your wellies and 
please forgive my imprecision.

We can only create stereotypical 
human models, not model humans.  Why?  We can't 
model a human's free will.  Much about human behavior, say 
emotions, remains a black box.  Yes, we can 
create axioms for emotional relationships, and even 
simulate dynamism through event routing, but really 
we are just simulating, or building golems.
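
To make the golem point concrete, here is a minimal, hypothetical sketch in Python. All names (AXIOMS, Golem, the emotions and events) are invented for illustration; the point is that "axioms for emotional relationships" plus event routing yields a simulation, not a model of free will.

```python
# Hypothetical "axioms": each incoming event maps to emotion deltas.
AXIOMS = {
    "praise": {"joy": +0.3, "fear": -0.1},
    "threat": {"fear": +0.4, "joy": -0.2},
}

class Golem:
    """A simulated (not modeled) emotional agent."""
    def __init__(self):
        self.state = {"joy": 0.0, "fear": 0.0}

    def route(self, event):
        # Event routing: apply the axiom for this event, clamped to [0, 1].
        for emotion, delta in AXIOMS.get(event, {}).items():
            self.state[emotion] = min(1.0, max(0.0, self.state[emotion] + delta))
        return dict(self.state)

g = Golem()
g.route("threat")
g.route("praise")
```

However elaborate the axiom table gets, the golem only ever follows the routing; the "dynamism" is entirely in the event stream.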

For interpretation, the best we get is an analogical 
protocol (for example, emotions with intensity scales) 
that lets us make best guesses that 
we can then check against the currently accepted 
facts or tropic intents.  We can define cultures in terms of the 
current genres and expressive tropes, but we can 
only reason about the model and then trust the 
human observer to react appropriately.  It is a 
little like stand-up comedy, and for many of the 
same reasons.
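
One way such an analogical protocol might look, as a hypothetical Python sketch (the scale rungs and genre norms are invented): map a raw intensity onto an ordinal scale as a best guess, then check that guess against whatever the current genre deems plausible, leaving the final judgment to the human observer.

```python
# A hypothetical ordinal intensity scale for one emotion.
SCALE = ["calm", "annoyed", "angry", "furious"]

def best_guess(intensity):
    # Analogical step: map a raw [0, 1] intensity onto the nearest rung.
    idx = min(len(SCALE) - 1, int(intensity * len(SCALE)))
    return SCALE[idx]

def plausible(guess, genre_norms):
    # We can only reason about the model: flag guesses the current
    # genre deems implausible; a human decides what to do with them.
    return guess in genre_norms

melodrama_norms = {"angry", "furious"}   # invented genre norms
guess = best_guess(0.8)
```

The protocol never claims to know what the human feels; it only says which rung of the scale is the least-bad analogy under the current genre.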

On the next level, agents can use the semantic 
web databases and do limited axiomatic reasoning. 
When coupled to the emotional nodes, the agent 
might do some surprising things.  It doesn't 
escape the system of axioms, but the use of the 
analogical scales can make it "personable and 
surprising" not because the facts in the database 
are wrong, but because it doesn't react logically.  

So, perhaps the semantic web, with its first layer 
of logical assertion, is only a piece, a support 
layer for the agents.  It is a knowledge base, 
and the Godel dilemma, if real, is not nearly as 
interesting as what happens when it is coupled to 
semi-autonomous agents capable of cultural inference.


Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h