Or some combination. My ERD diagrams are created from a metadata
database that is built by first querying the product for its
descriptions of itself (e.g., table names, field names, data types,
field widths, etc.), all of which the framework object APIs
deliver on request into the metadata db tables. Then
these table fields are annotated by the humans
with descriptions, captions, etc. All of the outputs
are just reports regardless of presentation type. I
suspect an ontological tool that deals with the continuum
of possible document types (Glushko's world) could require
a similar combination of asking the document then annotating
it. I certainly could envision that for services where one
gets some information from the code and then adds formal policy
stuff to that.
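Purely to illustrate the ask-then-annotate pattern (this is not the
actual product's API), here is a minimal Python sketch assuming a SQLite
database: it asks the database to describe itself via sqlite_master and
PRAGMA table_info, loads that self-description into a meta_fields table,
and leaves description/caption columns for a human to fill in. All table
and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Stand-in for one of the product's own tables.
cur.execute(
    "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, credit_limit REAL)"
)

# Metadata table: machine-filled columns plus human-filled annotation columns.
cur.execute("""
    CREATE TABLE meta_fields (
        table_name  TEXT,
        field_name  TEXT,
        data_type   TEXT,
        description TEXT,   -- filled in later by a human
        caption     TEXT    -- filled in later by a human
    )
""")

# Ask the database to describe itself, then load that into meta_fields.
tables = cur.execute(
    "SELECT name FROM sqlite_master WHERE type='table' AND name <> 'meta_fields'"
).fetchall()
for (tbl,) in tables:
    columns = cur.execute(f"PRAGMA table_info({tbl})").fetchall()
    for _cid, col_name, col_type, *_rest in columns:
        cur.execute(
            "INSERT INTO meta_fields (table_name, field_name, data_type) "
            "VALUES (?, ?, ?)",
            (tbl, col_name, col_type),
        )

# A human (or a small annotation UI) then fills in the rest, e.g.:
cur.execute(
    "UPDATE meta_fields SET description = ?, caption = ? "
    "WHERE table_name = ? AND field_name = ?",
    ("Maximum credit extended to the customer", "Credit Limit",
     "customer", "credit_limit"),
)
conn.commit()

# Every output (ERD, data dictionary, report) is then just a query over meta_fields.
for row in cur.execute("SELECT * FROM meta_fields"):
    print(row)
```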
A screen scraper or any analysis done by machine on raw
data to feed an ontologically-unified system has a heckuva
job to do. As Alan Cruse describes it in his paper
"Notes on Meaning in Language" the major problems are
what words mean (semantic analysis/decomposition),
how word-meaning varies with context (discourse analysis),
how word-meanings are related and how they contrast
(paradigmatic relations), and the syntactic (and idiomatic)
properties of words (syntagmatic relations).
Those of course break down into a lot of smaller problems
that are solvable on their own. I guess one might inquire
how well the Semantic Web architecture provides solutions
to these smaller problems and how well those solutions cohere
at scale.
len
From: Hunsberger, Peter [mailto:Peter.Hunsberger@STJUDE.ORG]
Defining the metadata is still work for a system like ours, but better
to define metadata than to write code. Which of course points you at
your other opportunity for metadata extraction: reverse engineering of
the code....
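Peter's point about reverse engineering the code is the same pattern
applied to source artifacts. A hypothetical sketch (the class and fields
are made up, and this is only one way to do it) using Python's ast module
to pull class names, field names, types, and docstrings as raw material
for such a metadata db:

```python
import ast
import textwrap

# Made-up source code standing in for the system being reverse engineered.
source = textwrap.dedent('''
    class Customer:
        """A party that buys things from us."""
        id: int
        name: str
        credit_limit: float
''')

tree = ast.parse(source)
extracted = []
for node in ast.walk(tree):
    if isinstance(node, ast.ClassDef):
        for item in node.body:
            # Annotated attributes become candidate field metadata.
            if isinstance(item, ast.AnnAssign) and isinstance(item.target, ast.Name):
                extracted.append({
                    "class": node.name,
                    "field": item.target.id,
                    "type": ast.unparse(item.annotation),
                    # Class docstring as a starting description, to be
                    # refined by a human annotator.
                    "doc": ast.get_docstring(node),
                })

for row in extracted:
    print(row)
```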