RE: First Order Logic and Semantic Web RE: NPR, Godel, Semantic Web
- From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
- To: Jeff Lowery <jlowery@scenicsoft.com>, xml-dev@lists.xml.org
- Date: Wed, 16 May 2001 08:49:02 -0500
There were earlier incidents. In October 1960,
we almost launched based on radar signals bouncing
off the moon. The saying is: always keep a human
between the radar and the fire control.
We do have to be careful, and that same issue of
speed, latency, and criticality of task has come up
many times over the years, so most folks are aware
of it. It may be a problem for the free-wheeling
web, where people are allowed to put anything on
the system, but that is the risk and the freedom.
The harder problem is semantic drift, where the
original meaning gets lost and the intent warps.
Look at the early texts on eugenics and then
look at the warped politics that followed.
Still, I think most of this will come down to
vetted services. Just as you have to look critically
at your government processes and officials, you
have to look critically at the services. Semantic
systems are services and I suspect the most useful
ones will be very local. Things like Google
are indexing systems, not semantic services. The
difference may be a little subtle, but essentially,
Google only returns lists; it doesn't answer questions.
It is a browsing assistant, not an expert system.
Nobility. Well, it is hard to legislate that, isn't it?
We are working hard on HumanML to enable standard sets
of human properties to be added to services. Could one
pervert such properties? Sure. Does it mean we shouldn't
do it? No, it means we should do it as well as we can.
I believe the better we understand each other, the more
we are able to detect and work successfully with the
ambiguity and drift produced by our usefully diverse
cultures and origins.
Len
http://www.mp3.com/LenBullard
Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h
[Truth is one; the wise speak of it in many ways.
Be restrained. Give. Be compassionate.]
-----Original Message-----
From: Jeff Lowery [mailto:jlowery@scenicsoft.com]
> I fear people who can't tell the difference
> between a person's opinion and a machine's opinion
> or think that either has facts believable out
> of context. Whatever the SW is or is supposed
> to become, it ain't magic.
Again, it's not the people I fear, it's the machines acting on people's
behalf. If the SW is about automating decision making based on
understanding (machine understanding, not human understanding; the twain
shall never meet), then we should be careful.
Then again, maybe we should fear people. I saw a factoid show last night
giving the once-over to the incident in 1995 where the Russians interpreted
the radar signature of a Norwegian rocket test firing as an ICBM attack from
the U.S. It got down to the last 2 minutes of a 10-minute launch procedure.
Even if that wasn't entirely accurate, I think putting blind trust in us
clever monkeys and our machines is ill-advised. We have to carefully verify
our perceptions and the interpretation of facts from our systems; we get too
damned clever for our own good sometimes. Literally damned clever.
Does that mean we shouldn't pursue noble ends? No, noble ends are what got
us here now, and it is good. Powerful stuff, this SW, and let's take the
full measure of it.