IMH (and oft-repeated, I'm afraid) opinion, the issues below and the security
questions associated with 'who dictates the schema?' are recurring
manifestations of the larger problem of what separates the syntax from the
semantics. The evidence which I rely on for an answer comes, out of the
peculiarity of my own experience, from two very different fields: oral poetry
and financial transaction settlement processing. In both domains it is
abundantly clear that the vocabulary of the text (in poetry's case) or document
(order, comparison, delivery instructions, etc. in the case of financial
transactions) cannot be allowed to dictate either what process operates upon it
or what the semantic input to that process might be. Semantics are elaborated
from particular instance syntax by the operation of a particular process in a
particular environment on a specific occasion. The expertise of the performer
in the case of poetry, including the selection of a particular text, determines
what that process is and therefore what semantics it will elaborate. That
expertise is opaque, though its product, the output of performance, is
accessible to analysis. In financial transactions the processes applied at
various stages--order execution, comparison matching, delivery vs. evidence of
payment--are similarly expert and opaque, including most especially the
specific form in which they instantiate input data for their particular uses.
The outputs of those processes are, however, accessible--indeed are equally
accessible to many possible subsequent processes downstream, most of which an
earlier process might know nothing of, let alone be privileged to direct input
toward.
What, then, does this model--opaque expert process, including the particular
form in which input data is selected and instantiated; accessible and readily
publishable output--imply as a suitable paradigm of processing in the general
case? I argue that such a general model requires that:
-- input data is fetched by a process as a crucial function of its
expertise; this implies that such input data is published by other processes
and that security measures which protect that data do so by securing access to
it
-- the semantics which might be elaborated from particular data or text on a
particular occasion depend upon the particular selection of data, the specifics
of its instantiation, and the particular processes then applied to that
instantiated form; as all three are privileges of processing which is (or is
entitled to be) opaque, it is a reasonable generalization that input is read as
syntax without intent or other inherent semantics, and that the semantics of
the occasion are the specific product of process
-- without the ability for input to convey intent or otherwise predictably
trigger process solely by presenting an expected form or by utilizing an
anticipated vocabulary, the premises of an API are absent; instead of asking
what should be presented as the input to a process, we need to consider how the
transparent and accessible output of a process might usefully serve as input,
when instantiated and manipulated by a process designed solely around the
requirements of its particular domain of expertise. A minimal sketch of this
inversion follows.
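To make that inversion concrete, here is a minimal sketch in Python. Every name
in it -- SettlementProcess, the <trade> vocabulary, the published document --
is hypothetical, invented only to illustrate the model: one process publishes
its output as plain text, and a downstream process reads that text as syntax
alone, elaborating its own semantics from its own instantiation of the data.

    import xml.etree.ElementTree as ET

    # The publishing side: a process exposes its output as plain text.
    # It neither knows nor directs who will read it.
    published_output = b"<trades><trade qty='100'/><trade qty='250'/></trades>"

    class SettlementProcess:
        """One expert, opaque process. What the input *means* -- here,
        that <trade qty=...> carries a quantity to settle -- is decided
        inside this process, not declared by the document."""

        def instantiate(self, raw):
            # The input is read as syntax only; this process's own
            # expertise selects and shapes the data for its own use.
            root = ET.fromstring(raw)
            return [int(t.get("qty", "0")) for t in root.iter("trade")]

        def run(self, raw):
            quantities = self.instantiate(raw)
            # The output is, in turn, published and accessible to
            # downstream processes this one knows nothing of.
            return "<settled total='%d'/>" % sum(quantities)

    print(SettlementProcess().run(published_output))
    # prints: <settled total='350'/>

Nothing in the published document names or invokes SettlementProcess; a risk or
audit process could fetch the identical bytes and instantiate them quite
differently, which is precisely the point.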
Respectfully,
Walter Perry
James Clark wrote:
> Although it's tempting to make a special case for the technology you're
> developing (and I succumbed to that temptation in the past with
> xml-stylesheet), it's really not a good idea in the long term; it just
> doesn't make any sense for each of these processing technologies to define
> their own separate mechanism for associating their processing with a
> document. Furthermore, this can't solve the problem of specifying what order
> you want these processes to be applied in. Do you do XInclude then XSD
> validation or vice-versa? I don't think there's a single right answer. I
> could even imagine wanting to do both: validate against a fairly loose schema
> first to constrain the use of XInclude, then do XInclude and then do
> validation against a tighter schema.
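The ordering question James raises can be made concrete with a small pipeline
sketch, again in Python, here using the lxml library; the schema and document
file names are assumptions. Because the order is asserted by the process rather
than by the document, his loose-validate, include, tight-validate sequence --
or either simpler ordering -- is simply a different program:

    from lxml import etree

    # Hypothetical file names; the process, not the document, fixes the order.
    loose = etree.XMLSchema(etree.parse("loose.xsd"))
    tight = etree.XMLSchema(etree.parse("tight.xsd"))

    doc = etree.parse("doc.xml")
    loose.assertValid(doc)   # first constrain how XInclude may be used
    doc.xinclude()           # then expand xi:include elements in place
    tight.assertValid(doc)   # then validate the expanded result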