
Re: Data Model(s) for XML 1.0 / XML Devcon / DOM / XSL / Query



Sean McGrath wrote:

> In the light of recent debate about the intertwingling
> of XML specs and the PSVI, and Henry Thompson's
> excellent keynote at XML Devcon
> (http://www.xml.com/pub/a/2001/02/21/xmldevcon1.html),
>
> isn't it time to accept that not specifying formal
> post-parse data model(s) for XML 1.0 was a big
> mistake?

In a word, no. Those post-parse plus post-additional-processing data
models are in effect being specified now by, among others, the very
groups whose work you cite here. Some of us, however, regard (and need)
XML as a lightweight syntax cleanly separated from the specifics of the
processing--and therefore separated from the instance-local semantics
which will be derived--at every node where an XML instance document is
put to use. In my humble (if oft-expressed) personal opinion, the great
innovation and original value of XML is the concept of well-formedness.
By permitting an instance document to stand on its own as syntax,
without the expected pre-ordained semantics expressed in a DTD (or for
that matter, in any form of content model, schema, or canonical
'infoset'), XML took the decisive step which SGML never did.
Well-formedness recognizes that an instance document will be processed
afresh by every user of it, and implicitly recognizes that the needs,
and therefore the processing required, will be different for each one.
The simplest processing of every document is well-formedness syntax
checking. In some few cases that will be all the processing required.
Beyond that first pass of processing, it may be necessary in particular
cases to check a document for conformance to a content model or data
schema; to transform it to some other document form; to elaborate from
it an infoset; or not.
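
To make that first rung concrete (a minimal sketch of my own, using
Python's standard library, not anything the specs mandate): a bare SAX
parse is exactly well-formedness checking and nothing more--no DTD, no
schema, no infoset consulted. Everything beyond it is opt-in.

    import xml.sax
    from io import StringIO

    def is_well_formed(document):
        """Return True if the document parses as well-formed XML."""
        try:
            # A default SAX parser is non-validating: it checks syntax
            # only, consulting no DTD, schema, or canonical infoset.
            xml.sax.parse(StringIO(document),
                          xml.sax.handler.ContentHandler())
            return True
        except xml.sax.SAXParseException:
            return False

    print(is_well_formed("<doc><p>standalone syntax</p></doc>"))  # True
    print(is_well_formed("<doc><p>misnested</doc></p>"))          # False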

The decision of which of those processes to apply, in what order, and
with what interactions among them is a profoundly local decision, driven
by uniquely local expectations and needs. Simon St.Laurent, among
others, has repeatedly drawn attention to these interactions and asked
for, at a minimum, recognition of them, if not a packaging or processing
model to mediate their effects. The current debate on re-inventing
XPath/XSLT as XQuery is premised on the same questions of how the
various officially 'recommended' processes are supposed to defer to, or
otherwise interact with, one another. My answer to that is seditious,
but awfully useful in designing locally necessary processing:  beyond
well-formedness, W3C specifications in the XML family are often the
right tools for implementing required processes, but are certainly not
the only tools available. Every processing node requires a fresh design
because the results required at each node, and the XML documents and
other data available for it to work with, are different. At a minimum,
the order of processes needs to be carefully examined and uniquely
specified for each node, if only to control the interactions among them
in a predictable and appropriate way. Those interactions can never be
known in the abstract or general case:  they are the result of specific
instance data in a particular processing environment. It is my
suggestion that we design for that reality--using the tools in each case
most suitable--rather than attempt a grand unification of semantics
which are by their very nature utterly local.
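
By way of illustration (again a sketch of my own; the step names are
placeholders, not anyone's recommended model), the "fresh design" I
have in mind is nothing more exotic than local composition, with each
node free to order--or omit--steps as its own needs dictate:

    def compose(*steps):
        """Run processing steps in the locally specified order."""
        def run(doc):
            for step in steps:
                doc = step(doc)   # each step works on the prior output
            return doc
        return run

    # Placeholder steps: a real node might put a schema validator here,
    # an XSLT engine there, or neither--whichever tools suit that node.
    def check_well_formed(doc):
        return doc   # e.g. the SAX check sketched earlier

    def validate(doc):
        return doc   # e.g. DTD or schema conformance, if locally needed

    def transform(doc):
        return doc   # e.g. XSLT, or any other locally suitable tool

    # Two nodes, two locally determined designs:
    node_a = compose(check_well_formed, validate, transform)
    node_b = compose(check_well_formed, transform)  # skips validation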

Respectfully,

Walter Perry