[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
RE: Are we losing out because of grammars? (Re: Schema ambiguity detection algorithm for RELAX (1/4))
- From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
- To: Bill dehOra <BdehOra@interx.com>, 'Peter Jones' <peterj@wrox.com>, "'xml-dev@lists.xml.org'" <xml-dev@lists.xml.org>
- Date: Wed, 31 Jan 2001 09:25:45 -0600
A long read but well worth it...
"Hermeneutics: From Textual Explication to Computer Understanding?"; John
C. Mallery, Roger Hurwitz, Gavan Duffy.
http://www.ai.mit.edu/people/jcma/papers/1986-ai-memo-871/memo.html
A short read but immediately applicable...
Introduction to Algorithmic Information Theory, Nick Szabo;
http://www.best.com/~szabo/kolmogorov.html
My thanks to Jan Vegt for reminding me about Kolmogorov complexity measures.
These are more useful than Shannon's random-source measures for our purposes.
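To make the distinction concrete: Kolmogorov complexity (the length of the shortest program that produces a string) is uncomputable in general, but compressed size is a common practical upper bound on it. A minimal sketch in Python, illustrative only and not taken from the cited pages:

```python
import random
import zlib


def kolmogorov_proxy(data: bytes) -> int:
    """Upper-bound estimate of Kolmogorov complexity via compressed length.

    True Kolmogorov complexity is uncomputable; the zlib-compressed size
    is a standard practical stand-in for comparing strings.
    """
    return len(zlib.compress(data, 9))


# A highly regular string compresses far better than noise of equal length,
# which is the intuition behind preferring complexity over raw length.
regular = b"ab" * 500                                   # 1000 bytes, repetitive
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))  # 1000 bytes, random

print(kolmogorov_proxy(regular), "<", kolmogorov_proxy(noisy))
```

Unlike a Shannon measure computed over an assumed random source, this compares individual strings directly, which is closer to what schema or document comparison needs.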
Len
http://www.mp3.com/LenBullard
Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h
-----Original Message-----
From: Bill dehOra [mailto:BdehOra@interx.com]
How many hops you have to make to travel from one node to another, assuming
you actually have a path between them. In this case you're linking two term
definitions across a path, which implies term equivalence. My concern was that
if two definitions are far apart, the equivalence is weakened. (Thanks to Len
for "semantic drift" and for recalling a large chunk of my memory; I had
forgotten that one *completely*:
<http://www.google.com/search?q=semantic+distance>.)
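The hop count described above is just shortest-path length in a graph of term definitions. A minimal sketch using breadth-first search in Python; the graph and term names below are hypothetical, made up purely to illustrate the idea:

```python
from collections import deque

# Hypothetical graph of term definitions: an edge means one term is
# defined in terms of another. All names here are invented examples.
GRAPH = {
    "invoice": ["bill"],
    "bill": ["statement"],
    "statement": ["account"],
    "account": [],
}


def hop_distance(graph, start, goal):
    """Breadth-first search: number of hops from start to goal, or None
    if no path exists. On the view above, fewer hops means the presumed
    term equivalence is stronger; more hops means it is weaker."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None


print(hop_distance(GRAPH, "invoice", "account"))  # 3 hops: a weak equivalence
```

One could then weaken the asserted equivalence as a function of this distance, which is one way to model the "semantic drift" Len names.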