RE: NPR, Godel, Semantic Web
- From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
- To: Benjamin Franz <snowhare@nihongo.org>, xml-dev@lists.xml.org
- Date: Thu, 10 May 2001 08:26:42 -0500
So far so good. Modeling an illusion seems
somewhat illusive. As is often stated, we
will do the simple things first and see if
they meet expectations.
We can make our models complex. We don't
really know how to model a human brain. We
don't really know how it works. We can
model observable behavior and that is the
focus of the ontological systems I've had
the time to research.
You are right that we don't have a clue about
free will. That is why we can't model it. A
model is, by definition, not the real thing.
What we can do with HumanML is take the currently
accepted observations about what is unique in
human communications, model these, and see what
is useful. In effect, this involves stereotyping.
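For instance (and this is only a sketch, with invented element and
attribute names rather than any real HumanML vocabulary), a stereotyped
record of one observed communication act could be as plain as:

    # Hypothetical sketch: a stereotyped record of an observed
    # communication act. Element and attribute names are invented for
    # illustration; they are not taken from any HumanML draft.
    import xml.etree.ElementTree as ET

    utterance = ET.Element("utterance", speaker="A", addressee="B")
    utterance.text = "Fine, we will do the simple things first."
    ET.SubElement(utterance, "gesture", kind="nod", intensity="mild")
    ET.SubElement(utterance, "affect", kind="agreement", confidence="0.7")

    print(ET.tostring(utterance, encoding="unicode"))

The record stereotypes the observation; it says nothing about what the
speaker "really" felt.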
AI is not mystical or mysterious. Not to me,
anyway. A node is a node. A property is a
property. Who gets to name the names....
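To show how unmysterious the machinery is (a throwaway sketch, with
names made up on the spot):

    # A node is just an identifier; a property is just a named edge.
    # Everything contentious is in who chose the names below.
    graph = {}

    def add_property(node, name, value):
        graph.setdefault(node, {})[name] = value

    add_property("person#len", "expresses", "skepticism")
    add_property("person#len", "communicatesWith", "person#benjamin")

    print(graph["person#len"])

The data structure is trivial. The argument is always over the
vocabulary.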
Len
http://www.mp3.com/LenBullard
Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h
-----Original Message-----
From: Benjamin Franz [mailto:snowhare@nihongo.org]
Apologies if what you meant to say was "We cannot _currently_ model a
human's free will." - a statement with which I agree. The following is
aimed at what seems a categorical denial of the _possibility_ of modeling
'free will' (something that occurs frequently in literature that wishes to
deny the possibility of ever achieving 'true' AI by attempting to
distinguish via questionable assertions that there is some mystical 'free
will' that somehow flouts the rules of the universe and has no rules of
its own). If that is not what was meant, disregard the rest of my mail.
;)
> We can only create stereotypical
> human models, not model humans. Why? We can't
> model a human's free will. Much about human behavior, say
> emotions, remains a black box. Yes, we can
> create axioms for emotional relationships, and even
> simulate dynamism through event routing, but really
> we are just simulating, or building golems.
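(For concreteness only: "simulate dynamism through event routing" need
not mean anything grander than the following sketch, in which the event
names and update rules are invented for illustration.)

    # Sketch of routing events over a stereotyped emotional state.
    # Event names and update rules are invented, not from any real system.
    state = {"trust": 0.5, "irritation": 0.1}

    rules = {
        "praise": lambda s: s.update(trust=min(1.0, s["trust"] + 0.1)),
        "contradiction": lambda s: s.update(
            irritation=min(1.0, s["irritation"] + 0.2)),
    }

    def route(event, state):
        handler = rules.get(event)
        if handler:
            handler(state)
        return state

    for event in ["praise", "contradiction", "praise"]:
        route(event, state)

    print(state)  # a simulation of affect, not affect itself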
There is no evidence that 'free will' cannot be perfectly modeled. It may
in fact be nothing more than an illusion caused by the impossibility of
modeling our *own* behavior perfectly (the machine cannot simulate itself
perfectly because you fall into the infinite regression of modeling the
model modeling the model modeling the model ... and hence the machine
cannot predict its own actions). A *second* machine may well be able to
model the first completely (but not itself as well).
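A toy version of the regression, assuming only that a perfect model
must in turn model whatever its subject models:

    # Toy illustration: a machine modeling another machine terminates,
    # but a machine modeling itself must model the model modeling the
    # model, and never bottoms out.
    class Machine:
        def __init__(self):
            self.model = None

        def simulate(self, subject):
            self.model = Machine()
            if subject.model is not None:
                self.model.simulate(subject.model)

    a, b = Machine(), Machine()
    b.simulate(a)        # the second machine models the first just fine
    try:
        a.simulate(a)    # self-simulation recurses without end
    except RecursionError:
        print("a cannot hold a complete model of itself")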
The limitation is very probably in our current understanding of the human
machine (and so our ability to build a precise model of it), not in a
categorical 'this cannot be modeled' limitation.