Because it is a big leap, conceptually and functionally,
from saying a link means "get me this file" to saying
"send this abstract name for an abstract resource to an
unknown but addressable proxy for that abstraction, and
tell it that whatever it means by that, keep it handy
because I am going to use that meaning in some meaningful
way locally", and that is as much as one can say by
specification.
A semi-normalized relational database may be weaker than a fully
normalized one, but it has a consistent and fully spec'd
data model that makes addressing rational and reasonable.
That is why ASP/IIS plus the relational db of choice with
ODBC is still the dominant lifeform in this ecosphere. As
XML specs have been produced to let the XML representation,
plus whatever data model is trendy, act as a proxy to
that, or stand in lieu of that, things have gotten complicated.
It's one thing to use a document as a serialized view of
the contents of a database. It is a very big leap to use
it AS the database. Why did we need namespaces to begin
with? Aggregates. Why do we need aggregates? Is it
because of the final rendering or because of the source
of each piece and where it originates? Is that origin
in a different namespace (syntax collisions) or a
different semantic space (needs a different handler)?
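The syntax-collision case can be sketched concretely. Here is a minimal illustration, assuming a hypothetical aggregate document (the URIs, prefixes, and element names are invented) and using Python's standard xml.etree library, in which namespace-qualified names keep two colliding "title" elements from two source vocabularies distinct:

```python
import xml.etree.ElementTree as ET

# A hypothetical aggregate: two vocabularies both define a "title"
# element; namespace URIs keep the identically spelled names apart.
doc = """
<report xmlns:bk="http://example.org/book"
        xmlns:ht="http://example.org/html">
  <bk:title>Moby-Dick</bk:title>
  <ht:title>Catalog page</ht:title>
</report>
"""

root = ET.fromstring(doc)

# ElementTree exposes namespaced names in Clark notation: {uri}local
book_titles = [e.text for e in root.iter("{http://example.org/book}title")]
html_titles = [e.text for e in root.iter("{http://example.org/html}title")]

print(book_titles)  # ['Moby-Dick']
print(html_titles)  # ['Catalog page']
```

The handler question is visible here too: the two elements share a local name, but a processor dispatches on the full {uri}local pair, so each origin can get its own handler.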
The political complexities don't interest me a lot
except insofar as telling me how long it will take
the 1.25 available solutions to get sorted out.
The Hytime/DSSSL group could get to the bottom or
top of the conceptual ladder because they were a
small group talking among themselves mostly. Dragging
the whole world wide weird and wonderful down that
path so that consensus emerges of common true
understanding takes a lot longer. That at the
end the solution will look a lot like what the
first group came up with, names changed, history
cheerfully revised and all that, well, that is
the price of global consensus.
Walking on bones.... oh well... back to robbing
tombs for gold to keep the economy moving. :-)
len
-----Original Message-----
From: Mike Champion [mailto:mc@xegesis.org]
8/16/2002 10:32:03 AM, "Bullard, Claude L (Len)" <clbullar@ingr.com> wrote:
>
>For those who are understandably irritated by the increasing
>complexity of XML specifications, note where things start
>to become complex in each as one attempts to make a global
>network of resources *behave* as if it were a semi-normalized
>database. Hypertext/hypermedia is an old old form of
>a database, and it isn't simple to take any abstract
>resource anywhere anytime with n representations per
>resource and make it accessible with the same kinds
>of unified views afforded by modern relational or
>even neo-modern object-oriented systems. Trying to
>do that has resulted in much of the noted complexity.
You wouldn't want to elaborate on that, would you? It's
intriguing, but I don't completely follow.
The increasing complexity of XML comes, as far as I can
tell, from taking an SGML subset and adding namespaces,
integration with "the Web" (e.g. the URI debacle),
integration with strongly typed and/or OO programming
languages (e.g., WXS), and the attempt to reconcile all
of the above with the vision of the semantic web. I definitely
see problems treating all this as if it were a normalized
database, but I don't see the attempt to treat it as a
"database" driving the complexity. If anything, in my
humble and biased opinion, thinking of the XML/XHTML
Web as a "database" would impose a useful discipline and
motivate people to whack off a lot of complexity.
There's a certain amount of self-inflicted complexity, e.g. the
obvious political compromises one sees in WXS and DOM, and
the incompatibilities between the DOM and XPath data models.
That's just reality in a consortium of competitors in a rapidly
changing world, and will be sorted out someday, probably by fiat.
Again, I don't see this as having much to do with "semi-
normalized databases."
What am I missing?