Hi Sean.
Sean McGrath wrote:
> But isn't there one, canonical, element-structure-plus-content view
> dictated by XML 1.0 itself. i.e. that the syntactic form can be mechanically
> morphed into a hierarchy view?
Yes. That morphing is the first step of element (but not attribute!) processing.
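By way of illustration only, a minimal sketch in Python with the standard ElementTree parser; the document and its vocabulary are my own invention, not anything proposed in this thread:

    import xml.etree.ElementTree as ET

    # A hypothetical document; the vocabulary is invented for illustration.
    doc = '<invoice status="draft"><item qty="2">widget</item></invoice>'

    # The mechanical morphing: syntactic form in, element hierarchy out.
    root = ET.fromstring(doc)

    def walk(elem, depth=0):
        # The hierarchy view proper: elements and their content.
        print('  ' * depth + elem.tag, repr(elem.text))
        # Attributes ride along on each element but are not themselves
        # nodes of that hierarchy; they must be asked for separately.
        for name, value in elem.attrib.items():
            print('  ' * depth + '  @' + name + '=' + value)
        for child in elem:
            walk(child, depth + 1)

    walk(root)

The sketch shows only what the parse gives every process for free: the element hierarchy, with attributes attached to elements but outside the content structure itself.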
> This jejune hierarchy view does not need to be hardcoded. However, the stuff
> that is and is not *in* this hierarchy view dictates what can be in the process
> view.
Not exactly. A process will hardcode its specific data expectations, including a
hierarchy view. Given that, the question is whether the XML document is then
structured to the expectations of the process. We could ask that question in its
converse form--should the process (and its internal data expectations) be
constructed around a particular document model--which is what the advocates of
vertical industry standard data vocabularies effectively prescribe. In either
case we require a detailed a priori understanding between--and, in effect,
identical definitions of--process and document. This is what I call the
enterprise network point of view: it is pervasive in currently orthodox
system design and is, in fact, the fundamental premise on which two-phase commit
transaction processing depends.
I have often (!) suggested that we are now in a
position to embrace a different view, which I call the internetwork premise:
that in being connected as an internetwork, homogeneous networks remain internally
homogeneous, but an addressing scheme is overlaid on them which allows any node on
any of those constituent networks to address directly any other node of the
internetwork. What is lost, in gaining this direct node-to-node addressing, is
the intimate knowledge which nodes on a homogeneous enterprise network have of
one another by virtue of the data structures, and the processes exactly fitted to
those data structures, which they share.
This view of the network vs. internetwork context of XML processing is broader
than the original topic of this thread, though I believe that it is helpful in
understanding the different sorts of processing, and thereby the different sorts
of data structures, which a comprehensive general purpose XML processing model
requires. You return below to a narrower view of the nature of XML processing,
without consideration of this network/internetwork context:
> This approach is, I think, where Charles Goldfarb et al. were going with
> "structure controlled" versus "markup aware" SGML processors. The former took
> the element-structure-plus-content view as their point of departure. (A
> generation of these things became known as ESIS processors).
Actually, this distinction is at least as old as the design of the Jacquard
loom: does the programming operate fundamentally from the processor's structural
knowledge of itself, or from a template model of the output product? The principal
point which I want to make is that attributes are the 'natural' way to
represent--and attribute-based processing the natural way to execute--the "markup
aware" approach, just as elements, and element-specific processing, are best suited
to "structure controlled" processes. The larger point is that general purpose markup
processing permits both in the handling of the same document, and that there is
significant advantage in applying each sort of processing where it is strongest,
and in not muddying the distinction between the two.
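To make that contrast concrete, here is a sketch only, again in Python with the standard library parser; the document, its vocabulary, and the two dispatch routines are my own assumptions rather than a prescription:

    import xml.etree.ElementTree as ET

    # Hypothetical document; the element names and the 'role' attribute
    # are invented purely for illustration.
    doc = ('<order>'
           '<party role="buyer">Acme</party>'
           '<party role="seller">Widgets Inc</party>'
           '</order>')
    root = ET.fromstring(doc)

    # "Structure controlled": dispatch on the element hierarchy itself.
    def element_view(order):
        for party in order.findall('party'):
            print('party element:', party.text)

    # "Markup aware": dispatch on attribute values, wherever in the
    # hierarchy they happen to occur.
    def attribute_view(root):
        for elem in root.iter():
            role = elem.get('role')
            if role is not None:
                print(role + ':', elem.text)

    element_view(root)
    attribute_view(root)

The point of the sketch is simply that one parsed document supports both styles of processing, and that neither need be contaminated by the other.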
Respectfully,
Walter Perry