- To: Dare Obasanjo <firstname.lastname@example.org>
- Subject: Re: [xml-dev] Ontolgies, Mappings and Transformations (was RE: WebServices/SOA)
- From: Bill de hÓra <email@example.com>
- Date: Wed, 01 Dec 2004 14:39:17 +0000
- Cc: Michael Champion <firstname.lastname@example.org>, email@example.com
- In-reply-to: <830178CE7378FC40BC6F1DDADCFDD1D103BD1C09@RED-MSG-31.redmond.corp.microsoft.com>
- References: <830178CE7378FC40BC6F1DDADCFDD1D103BD1C09@RED-MSG-31.redmond.corp.microsoft.com>
- User-agent: Mozilla Thunderbird 0.9 (Windows/20041103)
Dare Obasanjo wrote:
> One of the things I've found interesting about discussions with the
> RDF/Semantic Web crowd is that many of them fail to see that moving to
> ontologies and the like basically is swapping one mapping mechanism
> (e.g. transformations using XSLT or regular code in your favorite OOP
> language) for another (e.g. creating ontologies using technologies like
> OWL or DAML+OIL).
I wonder who they might be?
> At the end of the day one still has to transform
> format X to format Y to make sense of it; whether this mapping is done
> with XSLT or with OWL is, to me, incidental.
Not to me. That's a somewhat academic position. The costs vary a lot
depending on the technology employed. Also, the transforms do vary over
time; these things are not always a one-shot deal. You want technology
that allows people to change their minds as cheaply as possible.
> However the Semantic Web
> related mapping technologies don't allow for the kind of complex and
> messy mappings that occur in the real world.
Like Celsius to Fahrenheit? I don't understand why anyone would be
surprised at this in what is essentially data description work. But
there are cases where something like OWL has value in describing things
- Ian Davis's example for Atom versioning comes to mind.
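To make the Celsius point concrete (my own illustration, not something
from the thread): the conversion is a one-line arithmetic transform in
ordinary code or XSLT, but OWL can only assert relationships between
classes and properties - it has no construct for the arithmetic itself.

```python
# The kind of "complex and messy" mapping Dare means, at its simplest:
# an arithmetic transform that code expresses trivially and an OWL
# ontology cannot express at all.
def celsius_to_fahrenheit(c: float) -> float:
    # F = C * 9/5 + 32
    return c * 9.0 / 5.0 + 32.0

print(celsius_to_fahrenheit(100.0))  # 212.0
print(celsius_to_fahrenheit(0.0))    # 32.0
```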
There are technologies close by that can infer data descriptions and
concepts (John Sowa has an impressive use case of reverse engineering a
domain model from an enterprise's data sets), but the Semantic Web
doesn't cover them. It assumes that work has been done and the
assertions are to hand. The nearest reference to it in the
specifications is the distribution of referents in the RDF model theory.
So, I still maintain the Semantic Web architecture is missing an
infrastructure layer, essentially that of data cleansing and reverse
engineering the tidy logical assertions from the messy raw data. Fields
like robotics, machine learning, search, even data warehousing have come
to accept the need for something like this. There's always a need for
statistical methods or probabilistic inference - indeed some folks think
it's much more important to have that in place than some kind of GOFAI
frontal lobe. For me, semweb technology won't matter much on the
internet until that infrastructure is in place.
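To sketch what that missing layer looks like (a toy of my own devising -
nothing the Semantic Web specs define): before you can assert that two
records denote the same referent, something statistical has to decide
they probably do. Even a cheap string-similarity score makes the point.

```python
# Toy "data cleansing" step: derive a tidy same-referent judgement from
# messy raw strings using a similarity ratio from the stdlib. Real
# record-linkage systems use probabilistic models; the threshold here
# is an arbitrary illustrative choice.
from difflib import SequenceMatcher

def same_entity(a: str, b: str, threshold: float = 0.6) -> bool:
    # Treat two raw records as the same referent when their
    # character-level similarity clears the threshold.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

print(same_entity("Bill de hOra", "bill de hora"))  # True
print(same_entity("XSLT", "ontology"))              # False
```

Only after a pass like this do you have the clean assertions that RDF
and OWL tooling assume are already to hand.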