- From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
- To: Joshua Allen <joshuaa@microsoft.com>
- Date: Mon, 23 Oct 2000 14:14:51 -0500
There was a saying: "Between the radar screen
and the fire control there is, and always is, a human.
Don't launch on acquisition of signal."
Perhaps simply: Trust But Verify.
One of the links I posted was to the Principia
Cybernetica, which is a hypertext semantic net.
If XLinks are implemented with role attributes,
as Eliot Kimber used to point out, you have the
basic construct for semantic nets.
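A rough sketch of what I mean (the element names and the
urn:example role are made up for illustration; only the
xlink:* attributes are the standard ones): read the arcs
of an extended link off as labeled edges and you have
your triples.

  # Hypothetical example: treat XLink arcs as the labeled
  # edges of a semantic net.
  import xml.etree.ElementTree as ET

  XLINK = "http://www.w3.org/1999/xlink"

  sample = """
  <net xmlns:xlink="http://www.w3.org/1999/xlink"
       xlink:type="extended">
    <concept xlink:type="locator" xlink:label="dog"
             xlink:href="#dog"/>
    <concept xlink:type="locator" xlink:label="mammal"
             xlink:href="#mammal"/>
    <rel xlink:type="arc" xlink:from="dog" xlink:to="mammal"
         xlink:arcrole="urn:example:is-a"/>
  </net>
  """

  def xl(name):
      # Qualified attribute name in the XLink namespace.
      return "{%s}%s" % (XLINK, name)

  root = ET.fromstring(sample)
  # Read each arc off as a (subject, role, object) triple --
  # the basic construct of a semantic net.
  triples = [(arc.get(xl("from")),
              arc.get(xl("arcrole")),
              arc.get(xl("to")))
             for arc in root
             if arc.get(xl("type")) == "arc"]
  print(triples)  # [('dog', 'urn:example:is-a', 'mammal')]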
I don't say a semantic net or expert system
isn't useful. Historically they were very
expensive to build and maintain, and they have a
temporal currency problem: keeping the knowledge
current over time. Also, as they grow and become
dense, access issues become important. For
well-understood domains without any cascading
issues, they can be effective, particularly as an advisory
service. When Wayne Uejio (GE) worked
these for CALS, they were touted as a means
to capture and preserve corporate competence,
that is, to protect against loss of knowledge
where knowledge is considered a corporate asset.
We investigated them in deep detail for IETMs.
They became one of the reasons we advanced
the concept of stable cooperating systems. Wayne
and I spent a lot of time figuring out how these
two technologies would work together. We understood
markup was a key enabler. I wanted to use HyTime
because links could have roles. Wayne wasn't that
hot on markup but saw the sense of it as we looked
into platform lifecycle and interoperation dilemmas.
The network of definitions was augmented with a
rules base typically constructed from interviewing
domain experts. The advantage was as you say
in the fuzzy domains where capturing the feel
of the human expert was important for real world
rules. Again, they had to be augmented with a
continuously updated episodic database. They
became known as learning systems and part of
the general practice of creating intelligent
tutoring systems.
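A toy sketch of how the pieces fit (the component names
and the single rule are made up, not anything from the
CALS/IETM work): the definitions live in a net of triples,
the rule encodes what the interviewed expert said, and
every consultation is appended to the episodic store.

  # Toy sketch: semantic net + hand-authored rule
  # + continuously updated episodic database.
  net = {("pump-3", "is-a", "centrifugal-pump"),
         ("centrifugal-pump", "fails-by", "seal-leak")}

  episodes = []  # episodic database, appended to over time

  def advise(symptom, component):
      """Apply an expert-interview rule over the net
      and log the episode."""
      advice = []
      for klass in (o for (s, p, o) in net
                    if s == component and p == "is-a"):
          for failure in (o for (s, p, o) in net
                          if s == klass and p == "fails-by"):
              if symptom == "fluid-loss" and failure == "seal-leak":
                  # rule taken from the domain expert
                  advice.append("inspect the shaft seal")
      episodes.append((symptom, component, advice))
      return advice

  print(advise("fluid-loss", "pump-3"))
  # ['inspect the shaft seal']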
If one wants to create, as
Dave suggests, a toy system for XML-Dev'rs,
that is a fine experiment. I'd be very critical
about the top level domains and how the rules
are assigned. I always felt that feeding them
automagically from services such as full-text
indexing and analysis was dicey. If you
use semantic nets to create semantic nets, it
is a bit like using an a-bomb to detonate
an h-bomb.
Feedback-based real-time control. Not esoteric
stuff, but tricky to implement well, and don't
fly the first one.
Len Bullard
Intergraph Public Safety
clbullar@ingr.com
http://www.mp3.com/LenBullard
Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h
-----Original Message-----
From: Joshua Allen [mailto:joshuaa@microsoft.com]
* a set of inference rules taken from classic object-oriented programming is
not necessarily going to give good results; the inference rules should be
bent to match the way we understand things, not bent to make it easy for
a computer to process. WordNet has "sense of", which is key to making the
semantics actually useful (at least I think so).
* lots of human scrubbing
* straight hierarchical (as in node-labeled graphs) relationships can give
strange outcomes (at least in the lexicon I built!). WordNet is more like
an edge-labeled graph with weights (I think they based weights on frequency
of occurrence with some human scrubbing).
Of course, I know that in the OPML/UserLand sense of outlines, outlines can
link into one another, so outlines don't always imply a tree structure. In
fact, I am not even close to being an expert on the topic of semantic
relationships. And I also concur that building a lexicon for lemmatization is
not the same as defining semantic rules for a limited test domain. Just
from my own limited experience, if building inference rules in the future I
would try to borrow, at a bare minimum, from WordNet's idea of "sense of" and
some fuzzy weightings.
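A bare-bones sketch of that suggestion (the words, senses,
and weights below are invented; this is not WordNet's
actual data or API): edges carry a label and a fuzzy
weight, so a lookup ranks candidate senses instead of
forcing a single hierarchy.

  # Hypothetical edge-labeled graph with fuzzy weights.
  from collections import defaultdict

  # (head, label, tail, weight) -- weights stand in for
  # scrubbed frequency-of-occurrence estimates.
  edges = [
      ("bank", "sense-of", "financial-institution", 0.74),
      ("bank", "sense-of", "river-edge", 0.22),
      ("financial-institution", "is-a", "organization", 1.0),
  ]

  by_label = defaultdict(list)
  for (head, label, tail, weight) in edges:
      by_label[(head, label)].append((weight, tail))

  def senses(word):
      """Candidate senses of a word, best-weighted first."""
      return sorted(by_label[(word, "sense-of")], reverse=True)

  print(senses("bank"))
  # [(0.74, 'financial-institution'), (0.22, 'river-edge')]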