I want to revisit this post a bit.
It is likely that mathematical proofs are the mote in the
eye of the semantic web community. There is a tendency to
run to math and logic when faced with uncertainty, the way
a character in a story holds up a cross or runs to holy
ground when faced with a vampire (the unknown). Logic and
math, though useful, have their limits, and absolutes are rare.
Over time, some AI researchers, such as Richard Ballard and,
for comparison, John Sowa, have pointed out that knowledge is
not merely good logic and math. It is a theory-making behavior,
a sense-making behavior, more like the traditional scientific
method than pure mathematical modeling.
Given a P2P system of ontological nodes (a sketch in code follows the list):
1. Can it remember situations and theorize about them?
2. Can it integrate new theories into existing frameworks?
3. Can it share the new theory with other nodes?
4. Can a theory shared with another node be modified
by that node, and can the modified theory be shared in turn?
5. What behaviors of the nodes change as a result
of acquiring the theory? Are new behaviors spawned?
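As a rough sketch of those five capabilities, here is what such a node
might look like in Python. Every name in it (OntologicalNode, Theory,
remember, theorize, and so on) is invented purely for illustration;
this is not a real protocol, just the shape the questions imply.

from dataclasses import dataclass, field

@dataclass
class Theory:
    author: str                                   # node that made or modified it
    rules: dict                                   # feature -> predicted outcome
    history: list = field(default_factory=list)   # chain of modifying nodes

class OntologicalNode:
    def __init__(self, name):
        self.name = name
        self.situations = []    # 1. remembered situations
        self.theories = []      # 2. the existing framework
        self.peers = []         # other nodes in the P2P network

    def remember(self, situation):
        # 1. remember a situation: {"features": set, "outcome": value}
        self.situations.append(situation)

    def theorize(self):
        # 1. theorize about remembered situations: here, naively,
        #    each observed feature predicts the outcome it came with
        rules = {f: s["outcome"] for s in self.situations
                 for f in s["features"]}
        theory = Theory(author=self.name, rules=rules)
        self.integrate(theory)
        return theory

    def integrate(self, theory):
        # 2. fold the new theory into the existing framework
        self.theories.append(theory)

    def share(self, theory):
        # 3. share the theory with other nodes
        for peer in self.peers:
            peer.receive(theory, sender=self.name)

    def receive(self, theory, sender):
        # 4. a receiving node may modify the theory and share it onward;
        #    the history list is a crude record of who touched it
        modified = Theory(author=self.name,
                          rules=dict(theory.rules),
                          history=theory.history + [sender])
        self.integrate(modified)

    def act(self, situation):
        # 5. behavior changes as theories accumulate: predict using
        #    whichever acquired theory covers the situation
        for theory in reversed(self.theories):
            for feature in situation["features"]:
                if feature in theory.rules:
                    return theory.rules[feature]
        return None             # no applicable theory yet

a, b = OntologicalNode("A"), OntologicalNode("B")
a.peers.append(b)
a.remember({"features": {"dark clouds"}, "outcome": "rain"})
a.share(a.theorize())
print(b.act({"features": {"dark clouds"}}))   # 'rain' - B's behavior changed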
The question becomes not 'is this theory mathematically
provable?' but 'does it predict outcomes reasonably given
a situation?', that is, how well does it work as a control
over uncertainty? That may be as much 'provenance' as is
needed.
Note John's model, in which deduction is the last process.
It proceeds by abduction (observation: choosing the items
of interest), induction (which axioms emerge from the
observations), and deduction (applying the axioms logically).
For any ontologically endowed node in your network, how does
it perform these tasks and share any new theories it creates?
Such sharing is a service.
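A toy rendering of that cycle, with invented function names (abduce,
induce, deduce) and a deliberately naive notion of what it means for
axioms to emerge:

def abduce(observations, interesting):
    # abduction: choose the items of interest from raw observation
    return [o for o in observations if interesting(o)]

def induce(selected):
    # induction: let axioms emerge from the chosen observations; here,
    # crudely, 'every property seen in all of them holds generally'
    common = set(selected[0])
    for o in selected[1:]:
        common &= set(o)
    return common

def deduce(axioms, case):
    # deduction: apply the axioms logically to a new case
    return axioms <= set(case)   # does the case satisfy every axiom?

observations = [{"winged", "feathered", "flies"},
                {"winged", "feathered", "sings"}]
axioms = induce(abduce(observations, lambda o: "winged" in o))
print(deduce(axioms, {"winged", "feathered", "flies"}))   # True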
Theory or control emergence is evolution. That is, an evolving
system acquires new behaviors by acquiring new theories, which
it applies as controls. When multiple controls are applied to
the same behaviors, the combined dynamics can become non-linear
and even chaotic. So the high-level understanding is learning
to apply the right control to the behavior in a given situation.
This is a dynamic context.
len
From: Kal Ahmed [mailto:kal@techquila.com]
Absolutely true. But if that centralizer is simply a node in the P2P
network, what happens when it propagates that inference to other nodes?
For example, other nodes might want to distinguish between inferences
made by that node (possibly based on evaluating the inference 'proof')
and data that comes from that node with no other provenance - that leads
to more complex models (I ended up with the possibility of having
multiple levels of reification: A says 'B says "C says 'foo'"').
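A toy sketch of that nesting, just to show the shape; the Statement
class is invented for illustration and is not RDF or topic map syntax:

from dataclasses import dataclass
from typing import Union

@dataclass
class Statement:
    speaker: str
    content: Union[str, "Statement"]   # a claim, or another statement

    def provenance(self):
        # walk inward, collecting the chain of attributions
        chain, node = [], self
        while isinstance(node, Statement):
            chain.append(node.speaker)
            node = node.content
        return chain, node

nested = Statement("A", Statement("B", Statement("C", "foo")))
print(nested.provenance())   # (['A', 'B', 'C'], 'foo')

Each extra level of reification is one more wrapper; the attribution
chain falls out of the nesting.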
I know that there are mathematical evaluations of these sorts of trust
models and it could be that ultimately an answer comes from there (but
that implies that I would have to *understand* the maths...;-).
It could also be that ultimately it's like all reporting - the receiver
has to rely on what it is given, and if it doesn't, then it will have to
follow up the sender's sources and cut out the middle man - another good
reason for tracking provenance and disseminating it with such
inferences.
BTW - I don't for a moment imagine that this has not already been an
issue in other areas of CS, and outside of CS as well.