8/16/2002 10:32:03 AM, "Bullard, Claude L (Len)" <clbullar@ingr.com> wrote:
>
>For those who are understandably irritated by the increasing
>complexity of XML specifications, note where things start
>to become complex in each as one attempts to make a global
>network of resources *behave* as if it were a semi-normalized
>database. Hypertext/hypermedia is an old old form of
>a database, and it isn't simple to take any abstract
>resource anywhere anytime with n representations per
>resource and make it accessible with the same kinds
>of unified views afforded by modern relational or
>even neo-modern object-oriented systems. Trying to
>do that has resulted in much of the noted complexity.

You wouldn't want to elaborate on that, would you? It's
intriguing, but I don't completely follow.

The increasing complexity of XML comes, as far as I can
tell, from taking an SGML subset and adding namespaces,
integration with "the Web" (e.g., the URI debacle),
integration with strongly typed and/or OO programming
languages (e.g., WXS), and the attempt to reconcile all
of the above with the vision of the semantic web. I definitely
see problems treating all this as if it were a normalized
database, but I don't see the attempt to treat it as a
"database" driving the complexity. If anything, in my
humble and biased opinion, thinking of the XML/XHTML
Web as a "database" would impose a useful discipline and
motivate people to whack off a lot of complexity.

There's a certain amount of self-inflicted complexity, e.g. the
obvious political compromises one sees in WXS and DOM, and the
incompatibilities between the DOM and XPath data models (a small
illustration follows below). That's just reality in a consortium
of competitors in a rapidly changing world, and will be sorted
out someday, probably by fiat. Again, I don't see this as having
much to do with "semi-normalized databases."
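
To make the DOM/XPath point concrete, here is a minimal sketch (in
Python, assuming lxml is available; the example is mine, not Len's):
the DOM exposes CDATA sections as a distinct node type, while the
XPath 1.0 data model has no CDATA nodes and merges adjacent character
data into a single text node.

    # One DOM/XPath data-model mismatch: DOM keeps CDATA sections as
    # their own node type; the XPath 1.0 data model does not.
    from xml.dom import minidom
    from lxml import etree

    doc = "<p>a<![CDATA[b]]>c</p>"

    # DOM view: <p> has three children --
    # Text("a"), CDATASection("b"), Text("c").
    dom = minidom.parseString(doc)
    print([n.nodeType for n in dom.documentElement.childNodes])
    # -> [3, 4, 3]  (TEXT_NODE, CDATA_SECTION_NODE, TEXT_NODE)

    # XPath 1.0 view (via lxml): one merged text node, string-value "abc".
    root = etree.fromstring(doc)
    print(root.xpath("count(text())"))   # -> 1.0
    print(root.xpath("string(.)"))       # -> "abc"

The same tree answers "how many children does <p> have?" differently
depending on which data model you ask, which is the kind of mismatch
I mean.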

What am I missing?