RE: First Order Logic and Semantic Web RE: NPR, Godel, Semantic Web
- From: Jeff Lowery <jlowery@scenicsoft.com>
- To: "'Bullard, Claude L (Len)'" <clbullar@ingr.com>,Jeff Lowery <jlowery@scenicsoft.com>
- Date: Fri, 11 May 2001 12:26:45 -0700
>
> Yes, the problems of amplification and catastrophe
> in a feedback system: well, essentially, at onset,
> you have to *feel* it and put your palm on the
> strings before the speakers blow... ;-) (the answer
> is in the feedback formula; the control
> or policy for returning output to input).
Yep. But with a single source and a single output, the feedback solution is
easy to define. The web is a web (clever ontology, no?). Knowing where the
inputs and outputs lie is the tricky part, and there can be many. I think
this is a harder problem than it first appears.
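To put the single-loop case in formula terms: a loop x' = u + g*x settles
down exactly when |g| < 1. With a web of coupled loops, the same test
becomes the spectral radius of the whole gain matrix, which you can only
compute if you know the whole topology. A toy sketch (numpy and the example
matrix are mine):

    import numpy as np

    # G[i][j] is how much of node j's output feeds back into node i.
    G = np.array([[0.0, 0.6, 0.0],
                  [0.3, 0.0, 0.5],
                  [0.4, 0.2, 0.0]])

    # x(t+1) = u + G @ x(t) converges iff every eigenvalue of G lies
    # inside the unit circle -- one number for a single loop, but it
    # takes the whole web to compute.
    radius = max(abs(np.linalg.eigvals(G)))
    print("settles" if radius < 1 else "speakers blow")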
Reminds me of a simple computer simulation called voters. You start with a
grid of cells, randomly populated with 1's and 0's. The feedback mechanism
here is your eight immediate neighbors. If you have four 0 neighbors and
four 1 neighbors, you vote your conscience; otherwise you vote along with the
majority of your neighbors. Unlike the real world, eventually the simulation
collapses to a single-party system: everybody votes the same. Thus, any bias
in the initial random scattering of 1's and 0's is reinforced. Once an
advantage is gained, it is maintained under these rules.
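The rule is simple enough to write down. A sketch in Python (I'm assuming a
wrap-around grid and that "voting your conscience" on a 4-4 tie means a coin
flip; the original leaves both open):

    import random

    def voters_step(grid):
        # One synchronous update: each cell polls its 8 neighbors.
        n = len(grid)
        new = [row[:] for row in grid]
        for i in range(n):
            for j in range(n):
                ones = sum(grid[(i + di) % n][(j + dj) % n]
                           for di in (-1, 0, 1) for dj in (-1, 0, 1)
                           if (di, dj) != (0, 0))
                if ones > 4:
                    new[i][j] = 1                    # neighbor majority: 1
                elif ones < 4:
                    new[i][j] = 0                    # neighbor majority: 0
                else:
                    new[i][j] = random.randint(0, 1)  # 4-4 tie: conscience
        return new

    grid = [[random.randint(0, 1) for _ in range(20)] for _ in range(20)]
    for _ in range(500):
        grid = voters_step(grid)
    print(sum(map(sum, grid)))   # tends toward 0 or 400: one-party rule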
>
> Let me ask you this, how does a human negotiate for a
> used car? In other words, many contracts start out with
> only a minimal amount of trust among the partners in
> the transaction. Ask yourself in any trading situation
> what procedures or tasks do you do to ensure the situation
> meets your needs. How do you express those needs to a
> potential partner?
I think there are all sorts of trading scenarios, some zero-sum. One factor in
car dealing is trust in oneself. Do I have confidence in my negotiating
skills? When gleaning information off the Semantic Web, do I trust my
agent's ability to discern right from wrong? Again, it goes back to track
record. It's more along the lines of auto repair: how do I trust a mechanic?
It takes experience, and you can get burned plenty of times in the meantime.
Or you can become your own authority and do it yourself.
> I see these as separate issues: logical procedures
> for negotiating a basis for trust, maintaining a
> private registry of trusted partners, creating a
> trustworthy knowledge base. How does the Survivor
> game on TV work (never watch it myself - degrading)?
I see enough of Survivor at work, thank you. It does point up another
factor in trust: understanding motivations. So track record isn't enough.
An authority can become untrustworthy on subjects where it has a vested
interest. This is why I don't read certain computer rags: too much
self-interest in protecting their ad revenue.
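If an agent kept that track record for me, the minimal version is a registry
keyed by (source, subject), since trust on one subject doesn't transfer to
another. A sketch (all the names here are invented):

    from collections import defaultdict

    history = defaultdict(lambda: [0, 0])  # (source, subject) -> [hits, misses]

    def record(source, subject, was_right):
        history[(source, subject)][0 if was_right else 1] += 1

    def trust(source, subject):
        hits, misses = history[(source, subject)]
        if hits + misses == 0:
            return 0.5                     # no track record yet: agnostic
        return hits / (hits + misses)

    record("computer_rag", "product_reviews", False)   # burned once
    print(trust("computer_rag", "product_reviews"))    # 0.0
    print(trust("computer_rag", "industry_news"))      # 0.5, separate subject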
>
> I should think one would look at the UDDI/WSDL service
> model and find the place where the ontology fits. What
> service is it providing?
>
> As to **how does one train an agent**, I should think that
> is the critical question. See DAML. What is the agent
> allowed to DO? Get to that first.
>
> How do we constrain human agents? Protocol, policy,
> backups, reviews, etc. I submit one has to look very
> hard at negotiation in contexts of policy and opportunism.
This all gets back to checks and balances. This can't be ad hoc.
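"Ad hoc" being the operative phrase. Even a crude explicit policy beats
scattering the checks through the agent's code. A sketch of
what-is-the-agent-allowed-to-DO (the action names are made up):

    ALLOWED = {"query", "compare_prices"}            # agent may act alone
    NEEDS_REVIEW = {"commit_funds", "sign_contract"} # human in the loop

    def authorize(action, human_approved=False):
        if action in ALLOWED:
            return True
        if action in NEEDS_REVIEW:
            return human_approved                    # the check and balance
        return False                                 # default deny

    assert authorize("compare_prices")
    assert not authorize("commit_funds")             # blocked until reviewed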
>
> Style counts for humans. For SW? It depends on just how
> complex a logical layer you want to devise, the kinds of
> agents, how much analogical reasoning you enable, etc.
The fact is, I won't train my agent; it's easier to buy one. What I'm
concerned about is the infrastructure in which it operates. Is it robust?
Self-correcting? We shouldn't make this up as we go along. The role of an
expert is not only knowing what he knows, it's knowing what he doesn't know.
The SW had better understand its limitations.
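In agent terms, the cheapest form of knowing-what-you-don't-know is
abstention: carry a confidence with every answer and refuse to answer below
a threshold. A sketch (the threshold and the shape of the knowledge base are
my assumptions):

    def answer(question, knowledge, threshold=0.8):
        # knowledge maps a question to (answer, confidence in [0, 1])
        fact, confidence = knowledge.get(question, (None, 0.0))
        if confidence < threshold:
            return "I don't know"     # abstain rather than guess
        return fact

    kb = {"capital of France": ("Paris", 0.99),
          "best used-car price": ("$3,500", 0.4)}
    print(answer("capital of France", kb))     # Paris
    print(answer("best used-car price", kb))   # I don't know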
>
> If you want a thought experiment, the hottest domain for
> research at the moment is using an avatar or virtual human
> interface as the GUI. What would you need to make that
> believable (not real, but believable in the sense that
> you know Bugs Bunny is not real, but he is believable)?
>
> Building the knowledge base, as hard as it looks today,
> is probably tedious but easier than what follows. After
> that, the layer that enables the agent semi-autonomous
> capacity to evolve a strategy in more or less real time
> is the hard part. It is a problem similar if not identical
> to the problems of interactive fiction and believable
> characters (which is why some of us work in that field -
> fun, artsy, and illuminating).
Well, yes. The vastness of scale is nothing to be trifled with, though. The
SW isn't just War and Peace. Getting a handle on the cast of characters is a
challenge in itself. Not that it can't be done, and by people much abler
than myself, but let's not underestimate the task. It's a long walk from
concept to fruition.
>
> So good question: how does one train an agent? Well,
> first the agent needs memory, both of specific
> facts and what was once called episodic memory, so it
> has the capacity to work with stereotypes and match
> reactions to events (feel it; put palm on strings). If a stereotype
> is identified, how can it avoid falling into local minima?
> Annealing was once a topic of discussion in that context.
Perhaps what an agent needs is self-doubt. We don't need a bunch of arrogant
agents changing our world for us. True, it's how software is done, but do we
really want to follow that model? :-)
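On the annealing point: the trick is to let the agent occasionally accept a
worse match, with a probability that shrinks as the temperature cools, so it
isn't stuck with the first stereotype that fits. A toy sketch (cost and
neighbor are placeholders, not anything from DAML):

    import math, random

    def anneal(cost, state, neighbor, temp=1.0, cooling=0.995, steps=2000):
        best = state
        for _ in range(steps):
            candidate = neighbor(state)
            delta = cost(candidate) - cost(state)
            # always accept improvements; accept worse moves with
            # probability exp(-delta/temp), which falls as temp cools
            if delta < 0 or random.random() < math.exp(-delta / temp):
                state = candidate
                if cost(state) < cost(best):
                    best = state
            temp *= cooling
        return best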
>
> But before we get that deep, basic WSDL, routing of application
> data to application, transforms, etc. Most of the business
> documents and business logic are tested long before you
> commit a mission critical operation to them. The applications
> in those domains are actually unlikely to be as open as the
> web. That is the flaw in open vs closed system assumptions.
> There is a middle ground (the keiretsu) in which the operational
> chain is defined by contract, tested, and known. It is closed
> in the sense that expectations are defined and tested prior to
> committing resources to it, so it is not chaotically seeking
> patterns; it is opportunistic.
Yeah, I think we're in basic agreement. It will take a long time.