Allow me to point out to the members of this list that a concrete
example of a general-purpose XML processing tool designed around a clear
separation between the producer's and the consumer's interpretation of
the data (structures, constraints, algebras) already exists in the field
of data bindings, where, to use the terminology advanced in this thread,
the 'consumer's schema language' is the very type system of the host
language.
In particular, SNAQue (http://www.cis.strath.ac.uk/research/snaque/) is
a research prototype that allows consumers to specify the required
interpretation of the data using the type system of the host language
and then to tentatively project that interpretation onto the data
presented for binding, regardless of the potential availability of a
schema (and thus of a manifestation of the producer's interpretation)
associated with the latter. If at least a subset of the data is
described by the projected type, then that subset is injected into the
language as a value of that type before being subjected to
application-specific computation. If not, notification of the failure
is communicated to the binding program, where it may be dealt with by a
heuristics-based strategy of successive type projections (perhaps
informed by a history of similar failures).
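To make this concrete, here is a minimal Java sketch of the idea
(hand-rolled over the standard DOM API; it does not reproduce SNAQue's
actual interface, and the Book type and projectBook method are
illustrative assumptions). The consumer's 'schema' is just a
host-language type, and projection either yields a value of that type
or notifies the binding program of failure:

    import java.io.StringReader;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.xml.sax.InputSource;

    // The consumer's 'schema' is simply a type of the host language.
    record Book(String title, String author) {}

    public class Projection {

        // Tentatively project the Book type onto arbitrary XML; null
        // notifies the binding program that the projection failed.
        static Book projectBook(Document doc) {
            Element root = doc.getDocumentElement();
            String title = text(root, "title");
            String author = text(root, "author");
            return (title == null || author == null)
                    ? null : new Book(title, author);
        }

        static String text(Element parent, String name) {
            var nodes = parent.getElementsByTagName(name);
            return nodes.getLength() > 0
                    ? nodes.item(0).getTextContent() : null;
        }

        public static void main(String[] args) throws Exception {
            String xml = "<book><title>Ulysses</title>"
                       + "<author>Joyce</author></book>";
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            Book b = projectBook(doc);
            System.out.println(b != null ? b : "projection failed");
        }
    }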
Of course, the definition of the types to be projected could well be
informed by the availability of a schema published by the producer of
the data, perhaps capturing the interpretation of a sufficiently
cohesive community. Such a schema, however, would serve as a convention
rather than a rule and would not participate at all in the mechanical
process of binding. By reflecting the consumer's interpretation of the
data, type projections naturally introduce *partiality* into the
binding, whereby the part of the external input which falls outside the
consumer's interpretation, and thus does not match the projected type,
is simply discarded rather than forced upon domain-specific
computations.
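As a toy illustration of that partiality (a further method for the
illustrative Projection class sketched above, reusing its Book type and
text helper), a projection of a list type binds exactly the items
described by the consumer's type and discards the rest:

    import java.util.ArrayList;
    import java.util.List;

    // Bind the subset of <item> elements matching the projected type;
    // items outside the consumer's interpretation are simply discarded
    // rather than forced upon the application.
    static List<Book> projectBooks(Element root) {
        List<Book> bound = new ArrayList<>();
        var items = root.getElementsByTagName("item");
        for (int i = 0; i < items.getLength(); i++) {
            Element item = (Element) items.item(i);
            String title = text(item, "title");
            String author = text(item, "author");
            if (title != null && author != null)
                bound.add(new Book(title, author));
        }
        return bound;
    }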
Now, while default projection rules insulate consumers from change to
the irrelevant part of their external input, they still require that
the relevant part of that input be sufficiently in sync with the
projected type. Customised projection rules (based on the definition of
a mapping between projected types and the data expected for binding)
may then be introduced to increase the degree of separation between the
consumer's and the producer's interpretations (and thus the resilience
to change) whilst remaining largely within the familiar processing
model of the host language.
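Purely as an assumption of what such a rule might look like (SNAQue's
actual mapping mechanism may differ), a customised rule could declare
how the consumer's field names map onto the producer's element names,
letting the two vocabularies drift apart without breaking the binding:

    import java.util.Map;

    // A customised projection rule: consumer field names are resolved
    // through a declared mapping onto the producer's element names, so
    // the projected type need not mirror the external vocabulary.
    static final Map<String, String> RULES =
            Map.of("title", "dc:title",      // producer's names differ
                   "author", "dc:creator");

    static String field(Element parent, String fieldName) {
        return text(parent, RULES.getOrDefault(fieldName, fieldName));
    }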
Finally, if consuming applications must exhibit the kind of adaptive
behaviour required by very loosely-coupled internetworked systems, then
they may need a surrounding full-fledged transformation layer (e.g.
based on XSLT or XQuery) to reconcile inputs which resist
community-wide conventions and assume related but different forms. In
slightly less anarchic scenarios, however, type-based filtering is
sufficient, and it has the advantage of insulating programmers from
technologies and maintenance tasks which are largely irrelevant to the
nature of the application (a Java programmer does not need to know
XSLT, for example).
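Where such a layer is warranted it can sit in front of the projection
without disturbing the binding itself; a sketch using the standard JAXP
transformation machinery (the normalise.xsl stylesheet is a
placeholder):

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMResult;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamSource;

    // Reconcile a variant input form into the one the projected type
    // expects, then bind as before.
    static Document normalise(Document variant) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("normalise.xsl"));
        DOMResult result = new DOMResult();
        t.transform(new DOMSource(variant), result);
        return (Document) result.getNode();
    }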
Regards,
Fabio Simeoni
-----Original Message-----
From: W. E. Perry [mailto:wperry@fiduciary.com]
Sent: 09 March 2003 07:07
To: XML DEV
Subject: Re: [xml-dev] Schemas as Promises and Expectations
"Thomas B. Passin" wrote:
> But I do not think that the consuming node can accept just anything
> and successfully transform it to a preferred form.
Of course not. But to understand how this works it is important to quit
thinking in terms of what the consuming node accepts. I realize that is
difficult: the notion of the public interface, the passed data
structure, the call or invocation of a procedure is pervasive. The point
of these procedures is to offer a specific expertise as a service. A
necessary, inevitable part of that expertise is knowing what to accept
as input, how to test its acceptability, where to find necessary
reference and other ancillary data, and how to instantiate a data
structure specific to the processes which will operate on it. The
measurement of success looks beyond the shallow question of whether a
process has been cajoled to execute, to the test of whether a useful
output has been produced. That can be answered only by other such
processes with their own particular interests in the output produced by
this one. The usefulness of the output of one process is in whether
other processes can in turn render from it some useful application of
their own expertise.
> More important, perhaps, is that the incoming documents have a stable
> format. You can adapt to nearly anything as long as it is consistent.
Indeed you can. In practice I am continually surprised by what
unexpected variations seem sensible in the light of what has come before
from the same provenance or in similar structures. Also, stable does not
have to mean static. Changes make sense as changes, measured against the
history of forms encountered in the past, in cases where the resulting
changed form would be unintelligible if encountered without context.
> So are you arguing for Don's point, which I take to be the following -
>
> A producer should consistently produce according to some definite
> schema (lower-case schema, not just WXS, of course), and a consumer
> should design around using some (possibly different) definite schema,
> converting as needed.
I cannot speak for Don, of course, but I would change this
characterization slightly to emphasize that an expert service should
produce the most particular expression of its expertise. From the
viewpoint of the process that particularity might in some cases best be
described by a schematic of output and in other cases not. Either way,
that output will have a published concrete instance expression from
which a schematic might be deduced if that is useful. As for consuming
processes, every process is designed and implemented to operate upon an
expected data structure. I ask that, in recognition of the particular
expertise of a process, the data structure instantiated for its
internal use be precisely that on which it natively operates. In both
cases, the
form of these input and output data structures may change over time as
the specifics of the process itself dictate. A key point of design here
is that such changes are simply made internal to the service as changes
in its process may require, without the need to coordinate those
changes with other processes either upstream or downstream.
Respectfully,
Walter Perry