- From: Carol Ellerbeck <carol@factcity.com>
- To: 'KenNorth' <KenNorth@email.msn.com>, "'Bullard, Claude L (Len)'" <clbullar@ingr.com>
- Date: Tue, 24 Oct 2000 07:43:25 -0400
Ken,
If you "were king of the world" with the idea you express below, you would
not need "an unlimited budget"...just a modest one, to have experts build
your taxonomy/domain vocabularies. I say this as a Taxonomist who has been
in the vocabulary trenches with electronic information for years.
Automation is wonderful (and I would say, even essential), but start with
*NOT JUST* humans (albeit smart humans), start with humans who have some
expertise, and you will accomplish your goal faster, with fewer people, more
efficiently, and have a more solid foundation to build on.......
Long live the king!
C
-----Original Message-----
From: KenNorth [mailto:KenNorth@email.msn.com]
Sent: Monday, October 23, 2000 9:44 PM
To: Bullard, Claude L (Len)
Cc: xml-dev@lists.xml.org
Subject: Re: Soft Landing
> > I always felt that feeding them
> > automagically from services such as full-text
> > indexing and analysis was dicey. If you
> > use semantic nets to create semantic nets, it
> > is a bit like using an a-bomb to detonate
> > an h-bomb.
If I were king of the world, with unlimited budget and unlimited
cooperation, I'd start with a taxonomy and domain experts. Let them define a
domain vocabulary (again I keep pointing to MeSH for medical literature).
Then, when new literature is published each month, run it through machine
analysis to identify new terms that start popping up in the literature
(e.g., XML a few years ago). Also identify relationships to existing
concepts or terms (similarity searches), and so on. The domain experts set
an alert level (e.g., 5 citations), and when a term or concept exceeds that
level, it's included in a monthly update they receive -- new terms and
concepts appearing in the literature. They use that information when they
update the domain vocabularies each quarter.
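A minimal sketch of that alert step in Python, assuming each citation is just
a list of keywords and the vocabulary is a set of known terms; the names, the
threshold of 5, and the sample data below are only illustrative:

from collections import Counter

def monthly_alerts(citations, known_vocabulary, alert_level=5):
    # Count this month's occurrences of terms not already in the vocabulary.
    counts = Counter()
    for citation in citations:            # one keyword list per citation
        for term in citation:
            if term not in known_vocabulary:
                counts[term] += 1
    # Terms that popped up often enough go into the monthly update.
    return {term: n for term, n in counts.items() if n >= alert_level}

# e.g., "XML" crossing the 5-citation alert level a few years ago
update = monthly_alerts(
    citations=[["XML", "SGML"], ["XML", "MeSH"], ["XML"], ["XML"], ["XML"]],
    known_vocabulary={"SGML", "MeSH"},
)
print(update)    # {'XML': 5}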
Using a pre-defined domain vocabulary is probably more efficient than doing
it all automagically using inference engines, machine analysis of schemas,
RDF, parsing and so on.
Look at the portals that migrated to a classification scheme instead of
remaining simple keyword searches over their content.