   RE: [xml-dev] Reductionist vs Holistic Semantics

Because, as has been said, the web reflects the society 
that uses it (subcultures and all), it will have all 
of those problems.  Humans use negotiation to resolve 
conflicts (modulo violence).

The Golem Problem: when assigning both power and authority to 
an artificial system, how does one prevent the system 
from becoming unpredictable?  The agent negotiates.  Does 
it do better than humans, as well as they do, or worse?  It 
depends on the negotiation constraints.  Given what you say 
about the semantic web, here are some starting rules (sketched 
in code after the list) for an agent working to evolve 
reciprocity in a simulation that assumes repeated encounters 
(repeated plays of the prisoner's dilemma; in a single play, 
defection is likely):

1.  Know the source (keiretsu).

2.  Negotiate with the source (if no ontology is 
    a priori in place, negotiate one).

3.  Trust but verify (ontological commitment).

4.  Be provocable (if cooperator defects, defect).

5.  Be clear (use predictable strategy.  communicate trust).

6.  Be forgiving (with onset of cooperation, cooperate).

7.  No massive retaliation (keep response proportional).
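
Rules 4 through 7 amount to generous tit-for-tat.  Here is a 
minimal Python sketch of one move of such an agent (the function 
name, the "C"/"D" move encoding, and the generosity parameter 
are my illustrative assumptions, not anything standardized):

import random

COOPERATE, DEFECT = "C", "D"

def generous_tit_for_tat(their_history, generosity=0.1):
    """One move in an iterated prisoner's dilemma, per rules
    4-7 above: provocable, clear, forgiving, proportional.
    their_history is the opponent's past moves, oldest first."""
    if not their_history:
        return COOPERATE              # be clear: open by cooperating
    if their_history[-1] == DEFECT:   # be provocable ...
        if random.random() < generosity:
            return COOPERATE          # ... but forgiving
        return DEFECT                 # at most one defection per defection
    return COOPERATE                  # cooperation is met with cooperation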

It's just an expanded set of rules from the Generous Tit for 
Tat simulations.   Then there is the win-stay, lose-shift 
strategy (aka Pavlov), which is said to be better at exploiting 
cooperators while not incentivizing defectors.  I'm not cynical 
about the semantic web per se.  The non-linear aspects of it 
are not so much within it as in the coupling of the 
semantic web to the humans who use it.   The importance of 
human training and agent training cannot be overstated.  The 
tools used to train the agent make all the difference.
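
Win-stay, lose-shift in the same sketch style (same assumed 
move encoding; again an illustration, not a reference 
implementation):

COOPERATE, DEFECT = "C", "D"

def win_stay_lose_shift(my_last, their_last):
    """Pavlov: repeat your last move if it paid off (the opponent
    cooperated), otherwise switch.  Equivalently: cooperate exactly
    when both players made the same move last round."""
    if their_last == COOPERATE:   # a win: stay with what worked
        return my_last
    if my_last == COOPERATE:      # lost as a cooperator: shift to defect
        return DEFECT
    return COOPERATE              # mutual defection: shift back to cooperate

Note how it exploits an unconditional cooperator: once it defects 
and still gets cooperation back, "win-stay" keeps it defecting, 
which tit-for-tat never does.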

And don't give the agent the power to exceed its authority; 
so much of the solution is in how to define authority.

len


From: Didier PH Martin [mailto:martind@netfolder.com]
 
a) Who will be behind the chain of trust, and what are the costs?  Costs in
terms of effort and money.  I mean by that: how hard will it be for an
individual to get a certificate, and how trustworthy will it be?  In the
past, and not only in movies, some people were able to take on an identity
different from their real one.  If some people actually are smarter than the
system and can break it, how can we prevent them from "worming" or "virusing"
ontologies, and therefore the whole reasoning of automated agents?  I can
imagine the fun some people would have turning our agents into paranoid
machines a la HAL (ref: 2001: A Space Odyssey) or creating some unknown
dummy behavior. 
b) How will I know that a chain of trust is trustworthy and virus-free?
c) How will the web struggle to remain democratic and prevent "reasoning
illnesses" caused by strange people with strange motivations and annoying
behavior?




 
