re: Namespace Basic Principles
- From: Dan Brickley <Daniel.Brickley@bristol.ac.uk>
- To: David Megginson <david@megginson.com>
- Date: Mon, 01 Jan 2001 11:00:04 +0000 (GMT)
On Sun, 31 Dec 2000, David Megginson wrote:
> How could anything tell software what anything means without human
> intervention? At some point, a human (programmer, operator, or
> what-have-you) has to tell the system that something, somewhere,
> represents a concept that we'll designate "XHTML" and that the system
> has to take or avoid certain actions as a result. No computer on
> earth could figure that out itself from first principles; heck, no
> computer on earth can even understand first principles.
>
> Meaning always starts and ends with human beings.
Right on. That can't be said enough these days.
> [snip]
> 3. Semantic Web Engine
>
> Yes, Virginia, there is a Semantic Web. A little bit of it exists in
> each of us ...
>
> Seriously, behind the whole Semantic Web thing all you'll find is a
> lot of subtyping -- if an SW program finds a foo:bar element, it is
> supposed to look through a whole bunch of ancestor schemas until it
> discovers a supertype of foo:bar that it recognizes, then act on the
> foo:bar element as if it were an instance of its supertype.
>
> As an OO programmer who has not been asleep for a decade, I find it
> amusing that the SW people are chasing after the kind of deep
> inheritance that modern OO programmers are trained to avoid by using
> aggregation, decorators, etc.
Funny you see it that way; probably too many simplistic 'a Dog is a kind
of Mammal' examples floating around (http://xmlns.com/wordnet/1.6/Dog, etc.).
I've always seen RDF as taking the contrary view: 'type' and 'subclass'
lose some of their magic; they're just two rather common relations
from a genuinely extensible set. Those two got built into RDF 'cos folk
would've invented them many times over if we hadn't done that. Like
they're inventing 'inverseProperty' and other utility relations now
(DAML, OIL, etc.).
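For what it's worth, the 'look up the chain until you hit a supertype you
recognise' behaviour David describes needs nothing deeper than following one
relation. A minimal sketch, where the subclass table and class names are
invented for illustration (a real processor would read them out of the
relevant RDF schemas):

    # Minimal sketch (Python).  The subclass table and class names are
    # invented for illustration; a real processor would read them from
    # the relevant RDF schemas.
    SUBCLASS_OF = {                     # child class -> parent class
        "foo:bar":       "ex:LocalThing",
        "ex:LocalThing": "wn:Dog",
        "wn:Dog":        "wn:Mammal",
    }
    KNOWN = {"wn:Mammal", "xhtml:Element"}  # classes this app can act on

    def nearest_known_supertype(cls, subclass_of, known, max_depth=50):
        """Walk one relation upward until we reach a class we recognise."""
        for _ in range(max_depth):
            if cls in known:
                return cls
            if cls not in subclass_of:  # ran off the top of the hierarchy
                return None
            cls = subclass_of[cls]
        return None

    print(nearest_known_supertype("foo:bar", SUBCLASS_OF, KNOWN))  # wn:Mammal

Point being, nothing in that loop cares whether the relation is
rdfs:subClassOf or something an application made up for itself.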
> Anyway, your question is what an SW processor would do -- somehow, it
> would find a schema for each Namespace (I seem to recall that XML
> Schemas provided an attribute for that purpose) and then would
> download the four schemas referred to by that schema, then the four
> schemas referred to by each of those, etc., until the ninth or tenth
> level when the system broke down trying to download and parse 4^10 or
> so schemas to try to interpret a single XML document. Of course,
> that's assuming that none of the schemas was unavailable or
> maliciously altered because of security breaches at any one of the
> hundreds of different hosts being accessed.
Yeah right, just like an SW search engine would download every page from
the Web onto your desktop, convert it into an RDF representation of the XML
Infoset, and load it into a Prolog system to service your queries while
you wait. Maybe that's why SW fans have been asleep for 10 years? ;-)
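(And 4^10 is about a million, but only if you count paths rather than
schemas; keep a 'seen' set and each distinct schema gets fetched once. A toy
sketch, with an invented imports graph standing in for real schema
references:

    # Toy sketch: pulling in referenced schemas with a 'seen' set.  The
    # imports graph is invented; a real processor would dereference
    # namespace URIs instead.
    from collections import deque

    IMPORTS = {                  # schema -> schemas it refers to
        "a": ["b", "c", "d", "e"],
        "b": ["c", "d"],         # shared references cost nothing extra
        "c": ["e"],
        "d": [],
        "e": [],
    }

    def fetch_closure(start, imports):
        """Breadth-first walk; each schema is 'downloaded' at most once."""
        seen, queue = {start}, deque([start])
        while queue:
            for ref in imports.get(queue.popleft(), []):
                if ref not in seen:
                    seen.add(ref)
                    queue.append(ref)
        return seen

    print(len(fetch_closure("a", IMPORTS)))  # 5 schemas, not 4**10 paths

The visited set turns the path count into a node count, which is the same
trick every Web crawler already relies on.)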
XML-DEV hasn't changed much since I've been away...
Dan