From: Jeff Lowery <jlowery@scenicsoft.com>
To: xml-dev@XML.ORG
Date: Tue, 30 May 2000 13:24:35 -0700
[I originally posted this in response to "Relax or not to Relax", but it
didn't generate any heat there. Trying again, as always.]
Rick Jelliffe writes:
> The issue boils down, perhaps, to whether it is desirable to invent a
> new class of XML documents: ones which need to be "schema processed"
> in order to be used. From a power and capability view, it would sure
> be nice--we can have XPaths which include type awareness (and then
> Schematron would inherit that!), XBase would know what strings were
> URIs in documents, and so on. But is this really creating an
> overly-layered and bloated framework that will have performance
> problems over the WWW and which will create many excuses for dialects?
>
> But this new class of XML documents is where we may be heading.
Rick,
I wonder if something more fundamental is happening. What I sense is the
pendulum swinging back from the tightly bound data and methods of current
object-oriented practice to the looser coupling of data and methods that
existed before. XML facilitates this 'neo-traditional' approach because its
hierarchical data structure can be easily mapped to methods in a
hierarchical class structure. You therefore end up with a data model and a
methods model.
The point is that it's easy to generate low-level accessors and mutators
based merely on the structure defined in the data model. Add a schema to the
mix, and you get type checking and cardinality validation as well. What's
left for the class-method side of things is higher-level business-rule
functions and dependency constraints.
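As a rough sketch of what I mean by generating accessors from structure
alone (the element names, the document fragment, and the Node class are all
invented here purely for illustration, not taken from any particular
toolkit):

    import xml.etree.ElementTree as ET

    # Hypothetical purchase-order fragment; names are made up.
    DOC = """
    <order>
      <customer>Acme Corp</customer>
      <item>
        <sku>X-100</sku>
        <quantity>3</quantity>
      </item>
    </order>
    """

    class Node:
        """Generic accessor/mutator layer derived from structure alone:
        every child element becomes a readable/writable attribute."""
        def __init__(self, element):
            self._element = element

        def __getattr__(self, name):
            child = self._element.find(name)
            if child is None:
                raise AttributeError(name)
            # Leaf elements return their text; branches return another Node.
            return Node(child) if len(child) else child.text

        def set(self, name, value):
            child = self._element.find(name)
            if child is None:
                child = ET.SubElement(self._element, name)
            child.text = str(value)

    order = Node(ET.fromstring(DOC))
    print(order.customer)        # 'Acme Corp'
    print(order.item.quantity)   # '3'
    order.set("customer", "Widgets Inc")

Nothing in that layer knows anything about business rules; a schema sitting
on top would be what supplies the type and cardinality checks.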
What are the advantages? For one, many times you just need to transport the
data. Objects get in the way because of their binary nature, and extracting
the underlying data from an object requires some knowledge of its API. When
schemas accompany the data that's transported, apps accepting that data
alone (minus the higher-level methods) have at least some hope of
manipulating it successfully, given the low-level set of 'ground rules' the
schema defines.
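A minimal, hypothetical sketch of those 'ground rules': the receiving app
below has only the data and a small hand-rolled rule table (standing in for
a real schema, with invented element names), yet it can still do basic type
and cardinality checks before touching the data.

    import xml.etree.ElementTree as ET

    # Hypothetical ground rules: element name -> (leaf type, min, max).
    RULES = {
        "customer": ("string",  1, 1),
        "item":     ("element", 1, None),   # one or more items allowed
        "quantity": ("integer", 1, 1),
    }

    def check(root, rules):
        """Validate type and cardinality against the rules; no knowledge
        of the sender's higher-level methods is required."""
        errors = []
        for name, (kind, lo, hi) in rules.items():
            found = root.findall(".//" + name)
            if len(found) < lo or (hi is not None and len(found) > hi):
                errors.append("%s: expected %s..%s, got %d"
                              % (name, lo, hi or "n", len(found)))
            if kind == "integer":
                for f in found:
                    if not (f.text or "").strip().isdigit():
                        errors.append("%s: %r is not an integer"
                                      % (name, f.text))
        return errors

    doc = ET.fromstring("<order><customer>Acme</customer>"
                        "<item><sku>X-100</sku><quantity>3</quantity>"
                        "</item></order>")
    print(check(doc, RULES) or "document obeys the ground rules")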
There's another advantage: it's hard to write a set of methods for objects
that are flexible and general enough to handle unknown variations in the way
data is manipulated for one purpose or another. Even within the same
industry, process varies widely while function remains the same. I see data
models as being closer to 'function' and objects as tending to be closer to
'process'. Data models therefore tend to remain stable over time, and it's
easier (though not easy) to come up with a data model that works across
process boundaries than it is to build an OO API that does the same.
FWIW,
Jeff