That is the key to the problem of what was once called 'superstitious
acquisition' in behavioral psychology circles and in some long-lost papers.
http://www.infoloom.com/gcaconfs/WEB/seattle96/prog.HTM (how I stumbled
into this party)
Observation is insufficient to prove a fact without a test.
This is particularly true of information acquired by transmission.
As with Google, it ranks 'gossip'. This is why, in a contract-based system,
there is a notion of a 'record of authority' (ROA), and why I have mentioned
recently that public sites for standards require a URI for the ROA.
Watch two or more programmers creating a web service attempting to determine
who has the right schema. They aren't much better than the AI at solving
it.
Registries are the usual solution.
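A minimal sketch of the idea in Python (the namespace and schema URIs
here are placeholders, not any real registry):

    # The registry maps a namespace URI to the authoritative schema
    # location - the record of authority. Both programmers resolve
    # against the registry instead of arguing about whose copy wins.
    REGISTRY = {
        "http://example.org/ns/orders": "http://example.org/schemas/orders-v2.xsd",
    }

    def authoritative_schema(namespace_uri):
        """Return the ROA schema location for a namespace, or None
        if no authority is registered for it."""
        return REGISTRY.get(namespace_uri)

    print(authoritative_schema("http://example.org/ns/orders"))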
Many years ago in rock bands, we would have the fight about who was in tune
so we could use that player as a reference. The keyboard players usually won
until the cheap automated tuner was invented (a real godsend, yes). But we
chewed up hundreds of rehearsal hours arguing about that. Now we have a
'negotiation' for machines, and without a 'registry' it will burn up a not
inconsiderable number of machine cycles with no guarantee of halting.
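To make the halting point concrete, a toy sketch in Python (the agents
and offers are made up): two stubborn agents can counter-offer forever,
so the only guarantees come from a round bound or an external authority
such as the registry.

    def negotiate(agent_a, agent_b, max_rounds):
        """Toy negotiation: each round, each agent answers the other's
        offer with its own. Without the round bound (or a registry to
        arbitrate) there is no guarantee this ever exits."""
        offer_a, offer_b = agent_a(None), agent_b(None)
        for _ in range(max_rounds):
            if offer_a == offer_b:
                return offer_a                    # agreement
            offer_a, offer_b = agent_a(offer_b), agent_b(offer_a)
        return None                               # deadlock; consult the registry

    # Two stubborn agents that always re-assert their own preference:
    stubborn_a = lambda counter_offer: "schema-v1"
    stubborn_b = lambda counter_offer: "schema-v2"
    print(negotiate(stubborn_a, stubborn_b, max_rounds=10))   # -> None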
The interesting bit is the emergence of an agent that knows how to negotiate
with the human before attempting to instruct other agents as to their limits
on negotiating with other agents. There is no way around the hierarchy
of agencies that choose choices. Pure P2P has a problem there.
len
From: Kal Ahmed [mailto:kal@techquila.com]
Absolutely true. But if that centralizer is simply a node in the P2P
network, what happens when it propagates that inference to other nodes?
For example, other nodes might want to distinguish between inferences
made by that node (possibly based on evaluating the inference 'proof')
and data that comes from that node with no other provenance - that leads
to more complex models (I ended up with the possibility of having
multiple levels of reification: A says 'B says "C says 'foo'"').
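A minimal way to model that nesting, as a Python sketch - the Says
class is my own invention for illustration, not anything from the
topic map tooling:

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Says:
        speaker: str
        claim: Union["Says", str]    # a bare assertion, or another Says

    # A says 'B says "C says 'foo'"'
    stmt = Says("A", Says("B", Says("C", "foo")))

    def provenance_chain(s):
        """Unwind the reification levels into ([speakers], claim)."""
        chain = []
        while isinstance(s, Says):
            chain.append(s.speaker)
            s = s.claim
        return chain, s

    print(provenance_chain(stmt))    # (['A', 'B', 'C'], 'foo')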
I know that there are mathematical evaluations of these sorts of trust
models and it could be that ultimately an answer comes from there (but
that implies that I would have to *understand* the maths...;-).
It could also be that ultimately it's like all reporting - the receiver
has to rely on what it is given, and if it doesn't, then it will have to
follow up the sender's sources and cut out the middle man - another good
reason for tracking provenance and disseminating it with such
inferences.
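A sketch of that 'cut out the middle man' move in Python - fetch_from
is hypothetical, standing in for however a node would query a peer
directly:

    def verify(inference, fetch_from):
        """Receiver-side check: walk the provenance chain back to the
        original source and re-fetch the claim directly, bypassing the
        intermediaries."""
        original_source = inference["provenance"][-1]   # chain runs newest-first
        return fetch_from(original_source, inference["claim"]) == inference["claim"]

    # A says 'B says "C says foo"' : C is the original source.
    inference = {"claim": "foo", "provenance": ["A", "B", "C"]}
    fake_fetch = lambda node, claim: "foo" if node == "C" else None
    print(verify(inference, fake_fetch))    # True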
BTW - I don't for a moment imagine that this hasn't already been an
issue in other areas of CS, and in areas outside of CS.
Cheers,
Kal
On Mon, 2004-06-14 at 21:45, Bullard, Claude L (Len) wrote:
> If I understand that, your centralizer is the provenancing authority for
> the other provenances. It decides who or what to trust for the current
> inference in the local scope of authority.
>
> No better and no worse than table joins across distributed tables.
>
> len
>
>
> From: Kal Ahmed [mailto:kal@techquila.com]
>
> A while ago I played with P2P for topic map sharing and took some
> relatively naive approaches which I haven't tested for scalability [1].
> My approach was based simply on provenance. In RDF/Topic Maps your
> provenance information just becomes more data, which makes it easy to
> process with the same tools that you are using for processing the
> application data - which is nice :-)
>
> Another interesting aspect of inferencing over distributed data is that
> different nodes on the network might have different pieces of
> information that you need in combination in order to make an inference,
> which tends to have a centralizing influence, as you need to pull that
> data together somewhere in order to make the inference. What that does
> to a provenance-based trust model is a good question to which I don't
> currently have an answer...
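PS - to make the centralizing pull concrete, a toy Python sketch (node
names and facts made up): neither node alone supports the inference, so
some node has to gather both facts, and the result should carry the
provenance of its inputs.

    # Each node holds a partial fact; neither alone supports the inference.
    NODE_1 = {("socrates", "is_a"): "man"}
    NODE_2 = {("man", "is"): "mortal"}

    def infer_mortal(subject, *nodes):
        """Pull facts from several nodes into one place (the centralizing
        step) and derive a new statement tagged with where the inputs
        came from."""
        merged, provenance = {}, []
        for i, node in enumerate(nodes):
            merged.update(node)
            provenance.append("node%d" % (i + 1))
        kind = merged.get((subject, "is_a"))
        if kind is not None and merged.get((kind, "is")) == "mortal":
            return {"claim": (subject, "is", "mortal"), "provenance": provenance}
        return None

    print(infer_mortal("socrates", NODE_1, NODE_2))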