   Roger Costello: My Version of "Why use OWL?"

  • To: XML DEV <xml-dev@lists.xml.org>
  • Subject: Roger Costello: My Version of "Why use OWL?"
  • From: "W. E. Perry" <wperry@fiduciary.com>
  • Date: Sat, 26 Apr 2003 11:47:07 -0400
  • Organization: Fiduciary Automation
  • References: <3EA963E5.527F5F39@mitre.org>

[Last week Roger Costello initiated a discussion of OWL and OWL
ontologies both here on XML-DEV and also on the RDF-interest list. This
week he has posted a follow up to RDF-interest but not here on XML-DEV.
Without wishing to cross-post, I believe that the topic remains of as
much interest here as on RDF-interest and I am therefore initiating a
new thread based on my response to Roger on the RDF-interest list.]

"Roger L. Costello" wrote:

> Hi Folks,
> I have created a few slides to describe, at a high level, the
> motivation for using OWL:
>    http://www.xfront.com/owl/motivation/sld001.htm
> Comments welcome.  /Roger

Hi Roger.

My very different take on the role of semantics in a universal
internetwork of complementary and interdependent processes:

We can agree entirely on the headline of your slides #7 and #8:
"Meaning (semantics) applied on a per-application basis". This is
precisely how semantics are elaborated:  as the outcome of specific
expertise applied through process. It is, however, in the nature of
expertise to be idiosyncratic. The most valuable semantics for a given
purpose are elaborated from the application of the most specific
expertise. Therefore the 'problem' which you would highlight in your
slide #9 ('problems with burying the semantic definitions within each
application') is in fact an inherent property of expert processes. In
order to apply expertise, processes must comprehend a specific expert
semantics of the data upon which they operate and the nature of their
manipulation of that data. In your slide #9 you quite correctly factor
an application of expert process into code to interpret the data and
code to process the data. That 'interpretation' of the data is a
specific instantiation of the particular semantically-freighted
datastructure upon which a given expert process expects to operate. The
'code to process the data' has to be designed for a particular
instantiation of the data. The more particular the expertise of that
process, the more particular and idiosyncratic--and the less the common
denominator of a standard semantic vocabulary--must be the instantiation
of, and therefore the semantics implied by, the data upon which that
process operates.

Your example on slide #9 of the Mars probe disaster--one application
interpreted the data in inches, another application interpreted the data
in centimeters--is actually a counterexample to what you hope to
illustrate. The cause of the disaster was that different applications
expected to share *common* semantics:  that the data as given was, for
the purposes of *both* applications, in inches or in centimeters. The
devastating error was that each application deferred from its own
expertise to a presumed agreement or 'semantics in common' about which
*both* applications were fatally mistaken. It does not matter which
application happened to guess or blithely presume correctly about the
units of the data as presented. It was an unconscionable abdication of
the expertise of both applications to make any such presumption. As you
correctly illustrate on your slide #9, there are two necessary
components to an expert application, and the first of them is code to
interpret the data. Part of the application's own expertise is knowledge
of the units in which it expects to operate, and therefore it is crucial
for the application to instantiate data in those units for its own
purposes. And, in turn, crucial to doing that is first recognizing the
units intended or implied in data as received, in order to elect the
correct expertise for instantiating data in the units required. The
usual clues for such recognition are syntactic, which is why I can say
that in your example you have drawn the line between syntax and
semantics in the wrong place. An easy case would be if the units were
explicitly presented in syntax, as with e.g. an inches attribute or a
units element. Occasionally it is in fact as simple as that, and the
application can, through its expert interpretation code, readily resolve
the units presented syntactically into those required semantically. In
other cases, the application must look at the provenance or structure of
the data as received and compare either or both with previous examples
that it has encountered, in order to make an expert
interpretation of the data received. The point is that it is always
incumbent on the application by virtue of its presumed expertise to make
its independent interpretation of the data received in order to make an
informed instantiation of the data required. To defer in that necessary
task of expert processing to some presumed common semantics is to
abdicate expertise itself, and the predictable outcome is error.
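A minimal sketch of what such expert interpretation code might look
like, to make the argument concrete. All of the names here
(resolve, KNOWN_SOURCES, the "jpl-telemetry" provenance tag) are
invented for illustration, not drawn from any real system: the
application resolves units from explicit syntax first, then from
provenance it has expert knowledge of, and refuses to guess otherwise.

```python
CM_PER_INCH = 2.54

# Hypothetical provenance clues this application has expert knowledge
# of: this sender has always supplied lengths in centimeters.
KNOWN_SOURCES = {"jpl-telemetry": "cm"}

def resolve(value, units_attr=None, source=None):
    """Instantiate a length in the units this application requires
    (centimeters), using syntactic clues first and provenance second.
    Deferring to a presumed common semantics would abdicate the
    application's own expertise, so with no clue at all it fails
    loudly rather than presume."""
    if units_attr == "cm":
        return value
    if units_attr in ("in", "inches"):
        return value * CM_PER_INCH
    if source in KNOWN_SOURCES:
        unit = KNOWN_SOURCES[source]
        return value if unit == "cm" else value * CM_PER_INCH
    raise ValueError("cannot determine units of received data")
```

The point of the final branch is the whole argument: an error raised at
interpretation time is recoverable; a blithe presumption of shared
units is how probes are lost.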

Perhaps we should consider a different example. Suppose that an instance
of your SLR is presented to an application for customs duty collection.
The task of that application is not to infer that an SLR is a sort of
camera but to infer that the particular instance presented is an example
of dutiable luxury consumer goods. This application is a valuable use of
the SLR/camera ontology which you are creating, but probably not one
which you expected, nor one which you have provided 'hooks' for in the
ontology you are building. Yet our larger purpose here is to build (and
more abstractly to build the principles for) ontologies distributed
among processing nodes on a worldwide internetwork. In that effort,
harnessing the unique perspective and uniquely expert processing at each
node is the particular value we hope to add by building out the ontology
to worldwide scale. Clearly the customs application cannot function
without its own ontological distinctions between dutiable and
non-dutiable, consumer and industrial goods. Equally clearly we do not
want to burden every camera hobbyist's SLR ontology with the
distinctions which are most crucial to the customs agent. The only
workable way to reconcile those goals, and the only way to build out any
non-trivial ontology to worldwide scale, is to require as a matter of
design that semantics are locally elaborated to fill the local needs of
expert processes. Being local means that these semantics are not shared,
nor understood in some common way. While it is entirely possible that
congruent semantics might be elaborated in separate locations by locally
appropriate processes, the point is not the similarity of the semantics
but the idiosyncrasy of the processes which elaborate them.
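To sketch what "locally elaborated semantics" might mean in practice
(all names and values here are invented for illustration): the customs
node keeps its dutiable/non-dutiable distinctions entirely in its own
local vocabulary, consulting the hobbyist's SLR instance without
requiring any customs-specific 'hooks' in the source ontology.

```python
# Instance as it might arrive from the hobbyist's SLR/camera ontology:
# no customs-specific properties at all.
slr_instance = {"type": "SLR", "declared_value_usd": 1200}

# The customs node's own local, idiosyncratic semantics. These
# distinctions live here, at this processing node, and nowhere else.
LUXURY_CONSUMER_TYPES = {"SLR", "Camera"}
DUTY_FREE_THRESHOLD_USD = 800

def classify_for_duty(item):
    """Elaborate this node's own semantics over the received instance:
    dutiable luxury consumer good, or not. The source ontology is
    consulted, never extended."""
    if (item["type"] in LUXURY_CONSUMER_TYPES
            and item["declared_value_usd"] > DUTY_FREE_THRESHOLD_USD):
        return "dutiable luxury consumer good"
    return "non-dutiable"
```

The camera hobbyist's ontology carries no trace of the customs agent's
distinctions, yet the customs process functions fully; that separation
is the design requirement argued for above.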


Walter Perry

