Bullard, Claude L (Len) wrote:
>I want to revisit this post a bit.
>It is likely that mathematical proofs are the mote in the
>eye of the semantic web community. There is a tendency to
>run to math and logic when faced with uncertainty as in a
>story where one holds up a cross or runs to holy ground when faced with
>a vampire (the unknown). Logic and math, though useful, have their limits
>and absolutes are rare. Over time, some AI researchers such
>as Richard Ballard and for comparison, John Sowa point out
>that knowledge is not merely good logic and math. It is a
>theory making behavior, a sense-making behavior, more like
>traditional scientific method than pure mathematical modeling.
I was thinking of a mathematical system for assigning trust values to
inferences made across data from a variety of different sources (or
inferences across inferences from different sources). Given that the
basic trust values would be entirely subjective anyway, I don't think
that applying a mathematical formula to them would be a big step.
However, I take your point that logic/maths only gets you so far - and
leaves less room for subjectivity than the Web demands.
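To make the idea of applying a mathematical formula to subjective trust values concrete, here is a minimal sketch. The combination rules (product along a chain of premises, noisy-OR across independent derivations) and the [0,1] scale are my own illustrative choices, not an established calculus:

```python
# Hypothetical sketch: propagating subjective trust values through
# inferences made across data from different sources. A chained
# inference is only as strong as its premises (product rule), while
# independent derivations of the same conclusion corroborate each
# other (noisy-OR). Both rules are arbitrary illustrative choices.

def chain_trust(premise_trusts):
    """Trust in a conclusion that depends on all of the given premises."""
    result = 1.0
    for t in premise_trusts:
        result *= t
    return result

def corroborate(derivation_trusts):
    """Combined trust when several independent derivations agree."""
    disbelief = 1.0
    for t in derivation_trusts:
        disbelief *= (1.0 - t)
    return 1.0 - disbelief

# Subjective trusts of 0.9 and 0.6 assigned to two sources:
joint = chain_trust([0.9, 0.6])    # inference needing both: ~0.54
agree = corroborate([0.9, 0.6])    # independent agreement: ~0.96
```

The point of the sketch is only that once the base values are admitted to be subjective, the arithmetic layered on top is the easy part.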
>Given a P2P system of ontological nodes:
>1. Can it remember situations and theorize about them?
>2. Can it integrate new theories into existing frameworks?
>3. Can it share the new theory with other nodes?
>4. Can a theory shared with another node be modified
> by that node and that modified theory be shared?
>5. What behaviors of the nodes change as a result
> of acquiring the theory? Are new behaviors spawned?
>The question becomes not is this theory mathematically
>provable, but does it predict outcomes reasonably given
>a situation, that is, how well does it work as a control
>over uncertainty? That may be as much 'provenance' as is
If I understand you correctly, you are more in favour of an empirical
approach to determining the trust value of a theory, right? I agree
that mathematically provable theories are not necessarily correct. In
fact I believe that "correct" is not something which can be objectively
established. All that can be established are the statements (or the
theories) and where they come from - after all, how do I trust a
measurement of the predictive nature of a theory? Surely at the end of
the day I have to apply the theory for myself and see if it fits my
own observations.
>Note John's model in which deduction is the last process.
>It proceeds by abduction (observation: choose the items
>of interest), induction (what axioms emerge from observation)
>and deduction (apply the axioms logically). For any ontologically
>endowed node in your network, how does it perform these
>tasks and share any new theories it creates?
In my application the sharing was done purely by sharing assertions
about subjects - how a node arrives at a particular assertion is hidden
from all other nodes on the network. In fact, in the implementation I
had done, the facts simply come from a knowledge base (a set of topic
maps); the node is just a dumb conduit of facts, and it no more
understands the processes that led to the creation of those facts than
any other node in the network does. I suspect this will also be true
for much of the data that will populate the early semantic web: we will
provide semweb access to databases and systems with no theory behind
them that is reachable through the semweb.
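The "dumb conduit" behaviour described above could be sketched as follows. All the names and the triple representation here are illustrative, not taken from any real topic-map or P2P implementation:

```python
# Minimal sketch of a node that serves assertions out of a local
# knowledge base and relays queries to peers. Only bare assertions
# cross the wire; nothing about how an assertion was derived does,
# so a querying node cannot tell a measured fact from an inferred one.

class Node:
    def __init__(self, knowledge_base):
        # (subject, predicate, value) triples; provenance stays local
        self.kb = set(knowledge_base)
        self.peers = []

    def assertions_about(self, subject):
        """Facts this node can state about a subject."""
        return {a for a in self.kb if a[0] == subject}

    def query(self, subject, seen=None):
        """Collect assertions from this node and its peers."""
        seen = seen if seen is not None else set()
        if id(self) in seen:
            return set()           # avoid cycles in the peer graph
        seen.add(id(self))
        results = self.assertions_about(subject)
        for peer in self.peers:
            results |= peer.query(subject, seen)
        return results

a = Node({("puccini", "born-in", "lucca")})
b = Node({("puccini", "composed", "tosca")})
a.peers.append(b)
b.peers.append(a)
found = a.query("puccini")   # both assertions, neither with provenance
```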
>Such sharing is a service.
That is possibly true. However, I see sharing the information as the
first step - if the use of RDF/XTM gets us more machine-processable,
web-accessible information, that would be an undeniable Good Thing. I
can see the next goal that you are pushing towards, but I have to admit
that I have no idea how we get there from here. Hence I find myself in
the position of supporting the information-interchange facilities of
the semantic web and the basic (but still very useful) inferencing we
can do with machine-accessible metadata and shared vocabularies, while
remaining skeptical of claims that the semantic web will enable
generalised "knowledge" processing.
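The "basic but still very useful" inferencing can be as modest as closing a subclass hierarchy over typed resources. This is a toy RDFS-flavoured sketch (in the spirit of rdfs:subClassOf / rdf:type entailment), not tied to any particular triple store, and the vocabulary terms are invented for the example:

```python
# Toy forward-chaining closure over a shared vocabulary: whenever a
# resource has a type, it also has every superclass of that type.

SUBCLASS = "subClassOf"
TYPE = "type"

def infer_types(triples):
    """Return the input triples plus all inferable type assertions."""
    supers = {}
    for s, p, o in triples:
        if p == SUBCLASS:
            supers.setdefault(s, set()).add(o)
    inferred = set(triples)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for s, p, o in list(inferred):
            if p == TYPE:
                for parent in supers.get(o, ()):
                    t = (s, TYPE, parent)
                    if t not in inferred:
                        inferred.add(t)
                        changed = True
    return inferred

facts = {
    ("Opera", SUBCLASS, "MusicalWork"),
    ("MusicalWork", SUBCLASS, "CreativeWork"),
    ("Tosca", TYPE, "Opera"),
}
closed = infer_types(facts)
# ("Tosca", TYPE, "MusicalWork") and ("Tosca", TYPE, "CreativeWork")
# are now derivable from the shared vocabulary alone.
```

Nothing here requires the publishing system to have any "theory" behind its data - which is exactly why this level of inference seems reachable while generalised knowledge processing does not.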