On Thu, 02 Dec 2004 08:56:43 -0500, Elliotte Harold
<elharo@metalab.unc.edu> wrote:
> Of course, everyone will use some data
> model to process their XML. However, everyone will not use the same data
> model. Each will choose the data model that meets their needs. Sometimes
> this data model won't look anything like XML. Often, they'll have
> several different layers of data models. Claiming that everybody must
> use the same data model to process XML in order to achieve
> interoperability is just as silly as claiming everyone must use the same
> programming language. Data models are a local choice, not a global one.
Substitute "synatax" for "data model" and all these arguments could be
used to advocate against standardizing on a one-size-fits-all syntax
such as XML text :-) Just as XML is sub-optimal for just about any
particular use case but good enough for a wide range of them, there is
something to be said for having a common XML data model that hits some
sort of 80:20 point for typical use cases. Of course anyone can use
another if the one-size-fits-all model doesn't fit a particular use
case, just as the existence of XML doesn't prevent people from using
YACC or Perl if their needs don't fit its capabilities.

Nobody I know is arguing anything remotely resembling "everybody must
use the same data model to process XML". The point of cleaning up the
data model mess would be to rectify the ugly mismatches between DOM
and XInclude, DOM and XPath, etc., i.e. make simple things much
simpler for novices, not make life harder for the uber-geeks who are
comfortable with the current situation.
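
To make the mismatch concrete, here is a minimal Java sketch (the class name
and the sample markup are mine, purely for illustration): the DOM and the
XPath 1.0 data model don't even agree on what the children of the same
element are. DOM keeps CDATA sections as a distinct node type, while XPath
has no CDATA node type at all and just sees one logical run of text.

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class ModelMismatch {
    public static void main(String[] args) throws Exception {
        // The same markup, viewed through two different data models.
        String xml = "<p>one <![CDATA[two]]> three</p>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        // DOM view (coalescing off, the JAXP default): <p> has three children --
        // a Text node, a CDATASection node, and another Text node.
        NodeList kids = doc.getDocumentElement().getChildNodes();
        System.out.println("DOM children: " + kids.getLength());             // 3
        for (int i = 0; i < kids.getLength(); i++) {
            System.out.println("  node type " + kids.item(i).getNodeType()); // 3, 4, 3
        }

        // XPath 1.0 data model: no CDATA node type, and adjacent text nodes
        // never occur, so the element's string-value is a single text run.
        String value = XPathFactory.newInstance().newXPath()
                .evaluate("string(/p)", doc);
        System.out.println("XPath string-value: \"" + value + "\"");
    }
}

That kind of discrepancy is exactly what trips up novices, and it is the sort
of thing a cleaned-up common model would smooth over.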