- To: Jonathan Borden <jonathan@openhealth.org>
- Subject: Re: [xml-dev] Data vs. Process was Re: [xml-dev] Vocabulary Combination...
- From: Bill de hÓra <bill@dehora.net>
- Date: Tue, 03 Jun 2003 21:40:42 +0100
- Cc: bill.dehora@propylon.com, "W. E. Perry" <wperry@fiduciary.com>, XML DEV <xml-dev@lists.xml.org>
- In-reply-to: <03f101c327b3$af7c3d10$b6f5d3ce@L565>
- References: <3ED676DF.6020707@prescod.net> <ab7bdvs3gol4j4trlefsiq6g4bfbaou70f@4ax.com> <3ED5B096.9080102@textuality.com> <3ED6062E.4080403@bitworking.org> <200305292144.h4TLi3F09045@dragon.flightlab.com> <3ED7E4EC.4070400@prescod.net> <k66gdvsb2srp8mvp06o6ptnvrgqhdh6tvr@4ax.com> <3ED87447.3050302@dehora.net> <3ED87E31.93365357@setf.de> <3ED8984F.8040200@propylon.com> <3ED89F75.5FC16B91@setf.de> <3ED8ABFB.9010609@propylon.com> <3ED8C185.74A10A96@setf.de> <3ED8CE36.8BC058CA@fiduciary.com> <039401c327aa$396db300$b6f5d3ce@L565> <3ED907A1.4010705@propylon.com> <03f101c327b3$af7c3d10$b6f5d3ce@L565>
- User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.3) Gecko/20030312
Jonathan Borden wrote:
> Bill de hÓra wrote:
>> Strong agreement. But let's remember that you cannot have a theory
>> of content without a process model - something some luminaries
>> involved now in the semantic web/ont efforts realized a long time ago.
>
> Please elaborate on "theory of content" and its corresponding "process
> model" -- I'm not sure what this means.
Sure. A content theory is a theory of what we know about a world,
or worlds (ontologies) - invariably these will be based on a
denotational semantics (there isn't any other game in town, so to
speak). A process model is a theory of how we use this content. A
model of process (aka a procedural semantics) isn't expected up
front in the denotational approach, because the process is assumed
to be an idealized deductive theorem prover. The idea is that we
write down what we know about a domain and feed it to an engine -
usually that engine will get back to us in a reasonable time-frame.
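To make that picture concrete, here is a toy forward-chaining
sketch in Python (my names, purely illustrative - not any real
ontology or reasoner):

    # Facts go in; an idealised deductive engine grinds out the
    # consequences. Real reasoners are far richer, but this is the shape.
    facts = {("subclass", "Haemophilia", "BloodDisorder"),
             ("subclass", "BloodDisorder", "Disorder")}

    def closure(facts):
        # Forward-chain subclass(X, Z) from subclass(X, Y) and
        # subclass(Y, Z) until a fixed point: no new facts appear.
        derived = set(facts)
        while True:
            new = {("subclass", x, z)
                   for (_, x, y) in derived
                   for (_, y2, z) in derived if y == y2}
            if new <= derived:
                return derived
            derived |= new

    print(("subclass", "Haemophilia", "Disorder") in closure(facts))
    # True - the engine "got back to us", and quickly, at toy scale.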
[The rest of this post highlights some issues with the web
ontologist's approach, and has precious little to do with markup.]
However, there is enough experience and research from before the
semantic web to know something about this approach. A lot of what
we want to know that is interesting is not deductive in the first
place - the engines are less useful than we first imagined. It
turns out that, for a computer to discern interesting things about
some state of affairs, a denotational semantics is often not
sufficient (and it might not even be necessary).
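A toy continuation of the sketch above makes the gap visible (the
medical "facts" are invented):

    # Deduction can enumerate candidate explanations, but not rank them.
    causes = {("causes", "Haemophilia", "Bruising"),
              ("causes", "Anaemia", "Fatigue"),
              ("causes", "Anaemia", "Bruising")}

    observed = "Bruising"
    candidates = {d for (_, d, s) in causes if s == observed}
    print(candidates)  # {'Haemophilia', 'Anaemia'}
    # "Which is the *likely* diagnosis?" needs priors, statistics or
    # judgement - abduction and induction, not deduction - and nothing
    # in the written-down axioms settles it.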
So we need to resort to programs, which can vary from
general-purpose programming languages to more careful stuff like
BPEL4WS, Prolog, the Pi-calculus and M-x doctor. These don't have
anything like the kind of semantics that would make us comfortable
that we know what's going on. What's liable to make us really
uncomfortable is the possibility that all these facts and axioms
we recorded aren't usable outside a particular process - that each
program or agent has to represent each fact for itself. Although
that's an extremist position, anyone thinking about web ontologies
under the guise of knowledge reuse could do worse than read the AI
literature in this area.
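A crude illustration of that worry (one invented fact, two
invented programs):

    # The same fact - "aspirin treats headache" - as two programs
    # might hold it. Neither representation is usable by the other
    # without hand-written translation code.

    # Program A: a relational tuple, convenient for a rule engine.
    fact_a = ("treats", "aspirin", "headache")

    # Program B: the knowledge baked into behaviour, convenient for
    # a dispatcher.
    class Aspirin:
        def treats(self, condition):
            return condition == "headache"

    print(fact_a[2], Aspirin().treats("headache"))  # headache True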
Now this may seem a rarefied argument, or even an irrelevant one,
for controlled domains (worlds) such as cameras or disorders of
the blood, but it's no more rarefied than saying that, for a
computer to process XML, regular expressions are not a sufficient
means of expression.
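That XML point is easy to demonstrate with a throwaway Python
example:

    import re
    import xml.etree.ElementTree as ET

    doc = "<a><a>nested</a></a>"

    # The non-greedy regex pairs the outer <a> with the *inner* </a>;
    # matching nesting is beyond a regular language.
    print(re.search(r"<a>(.*?)</a>", doc).group(1))  # '<a>nested' - wrong

    # A parser tracks nesting and recovers the intended content.
    print(ET.fromstring(doc)[0].text)                # 'nested'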
I hope all the web-ont stuff can be useful - indeed I expect it
will be helpful in getting some business logic out of system code.
But I would like to see someone, somewhere, explain what we've
learned from the failures of AI research that makes the web-ont
campaign valuable.
Bill de hÓra