>From: Dare Obasanjo [mailto:firstname.lastname@example.org]
>Sent: Thursday, March 07, 2002 02:55
>To: Nicolas LEHUEN; Thomas B. Passin; email@example.com
>Subject: RE: [xml-dev] Stupid Question (was RE: [xml-dev] XML doesn't
>deserve its "X".)
>> -----Original Message-----
>> From: Nicolas LEHUEN [mailto:firstname.lastname@example.org]
>> Sent: Wednesday, March 06, 2002 8:01 AM
>> To: 'Thomas B. Passin'; 'email@example.com'
>> Subject: RE: [xml-dev] Stupid Question (was RE: [xml-dev] XML
>> doesn't deserve its "X".)
>> I'm not asking everything to be dynamic. I try to stay as
>> pragmatic as possible. Most people write code with static
>> assumptions. Their code is statically bound to a particular
>> schema. Then the schema is extended, and the code has to be
>> modified to be bound to the new version of the schema.
>> To me, extensibility is about finding ways to write programs
>> so that the amount of work following a schema evolution is
>> null or as small as possible. It's not about magically
>> understanding data and processing it the way it has to be. I
>> don't think a pure dynamic approach is feasible.
>> I do think, however, that type inheritance and polymorphism
>> (which is equivalent to dynamic processing of data depending
>> on its type) are the kind of concepts that can be used to
>> reduce the costs of extensibility.
>Again this is functionality that already exists in XML via the
>polymorphic and inheritance mechanisms in XML schema[0,1].
>What I'd like to see are examples and design patterns that
>show how substitution groups and abstract types can be used to
>build polymorphic apps that adapt well to schema evolution.
Agreed, that's exactly what I wanted to say: XML Schema provides some of
these concepts, but there is no associated processing model.
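For concreteness, here is a minimal sketch of the substitution-group mechanism being discussed; all element and type names are illustrative, not from any real schema:

```xml
<!-- Hypothetical sketch: an abstract head element plus one substitutable
     member. A later schema version can add new members without touching
     code that is written against the abstract "payment" head. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="payment" type="PaymentType" abstract="true"/>
  <xs:complexType name="PaymentType">
    <xs:sequence>
      <xs:element name="amount" type="xs:decimal"/>
    </xs:sequence>
  </xs:complexType>

  <!-- Added in a newer schema version -->
  <xs:element name="creditCard" substitutionGroup="payment"
              type="CreditCardType"/>
  <xs:complexType name="CreditCardType">
    <xs:complexContent>
      <xs:extension base="PaymentType">
        <xs:sequence>
          <xs:element name="cardNumber" type="xs:string"/>
        </xs:sequence>
      </xs:extension>
    </xs:complexContent>
  </xs:complexType>
</xs:schema>
```

The open question is exactly the one raised above: the schema language can express this, but nothing tells a processor how old code should behave when it meets a `creditCard` where it expected a `payment`.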
>> I don't want the data to describe how it can be processed !
>> I'm not crazy enough to hope that this can be easily done...
>> I just want the schema to describe itself relative to
>> another already known schema, and dynamically provide
>> compatibility rules to legacy application, so that they can
>> read data in extended schemata as if they were in the former
>> schema, or forbid any usage of the document if it would be
>> harmful. Those compatibility rules could be expressed in
>> various ways, from views (à la AF), transformations, or a
>> type system with inheritance and polymorphism, I don't know
>> which is the best. What I notice, however, is that OOP provides
>> solutions for extensibility, so it may be interesting to
>> have a close look at extensibility patterns in OOP before
>> trying to solve the problem in XML.
>Actually the solutions for extensibility that have been much
>touted for OOP aren't inheritance (which in most knowledgeable
>corners is derided as overused and a bad tool) but the use of
>interfaces and object composition/aggregation. I'm curious as
>to how you think these can be applied to XML.
(I was thinking more about interfaces and interface inheritance rather than
class inheritance, which is quite awkward to use for code extensibility.)
I don't have any solution; if I had one, I wouldn't be complaining about
the problem :). We did implement a mechanism that allows extensible
configuration of our product through XML documents, composition and
interfaces (a bit like Apache Avalon, but much more powerful), but it has no
relation to the problem of data extensibility, only code extensibility.
A way to leverage interfaces and composition for XML data
extensibility would be to automatically map each element type to an
interface. Programs would then read and write XML data through those
interfaces. Each interface would expose a set of properties plus a set of
child (composition) interfaces corresponding to the other element types that
can appear in the element.
Extensibility would be obtained by extending an interface to add new
properties and new child interfaces. Programs designed for the old schema
would still function, while programs designed for the new schema would be
able to use the new properties and children.
This kind of design is quite similar to AFs, in fact, but its OOP
orientation makes it quite powerful, because polymorphism in the data would
be mirrored by polymorphism in the code.
The advantage over just-in-time transformation (applying a transformation to
make sure old code receives documents in the old schema) is that no data
would be lost, so some legacy code could be used in a fresh pipeline
without turning it into a data sink.
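The data-loss contrast can be sketched directly; the "downgrade" function and the v1 child list below are assumptions for illustration, standing in for a real transformation step:

```python
# Hypothetical sketch: a just-in-time "downgrade" transformation strips
# children the old schema doesn't know, while wrapping the original
# document (as in the interface approach) keeps everything available.
from xml.etree import ElementTree as ET

V1_CHILDREN = {"name"}  # assumed: the only child element the old schema knows

def downgrade(element: ET.Element) -> ET.Element:
    """Just-in-time transformation: copy only v1-known children."""
    out = ET.Element(element.tag)
    for child in element:
        if child.tag in V1_CHILDREN:
            out.append(child)
    return out

doc = ET.fromstring(
    "<person><name>Alice</name><email>a@example.org</email></person>")
old_view = downgrade(doc)

# The transformed copy has lost <email>; the original document has not.
print(old_view.find("email"))  # None: the legacy step became a data sink
print(doc.findtext("email"))   # a@example.org: still available downstream
```

With the interface approach, the legacy step reads through a v1 interface but the pipeline keeps passing the full document, so later stages still see the v2 data.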
See my previous posts in this thread; I gave examples of both the OOP
approach and a non-OOP pipeline approach that could be used to integrate
legacy code without losing data.
>THINGS TO DO IF I BECOME AN EVIL OVERLORD #230
>I will not procrastinate regarding any ritual granting immortality.