I'm not saying that PSVI will solve the extensibility problem. OK, my fault,
I brought up the PSVI during this thread. Let's forget it.
What I am saying is that an OOP-like type system, with inheritance,
interfaces, polymorphism et al., is a nice way to solve the extensibility
problem.
I don't think it is possible to solve this problem by looking at data alone,
because code is (obviously) an active part in extensibility. Extensibility
cannot be solved by building serialisation artifacts like namespaces without
providing the associated processing models. We have to stop thinking about
pure data and think about both code and data. XML 1.0 failed to deliver a
processing model, hence it failed to deliver a solution to extensibility.
The nice thing is that OOP is already there, and it's the closest thing to a
model that allows both data and code extensibility. The problem is that very
few people seem to want to add OOP concepts on top of XML, so for political
correctness we may have to look for other solutions. Sigh.
Like you said, there is very little that the PSVI can add that the
application doesn't already know. Right. But you are assuming that the data
received by an application is in the very precise format the application was
built for. Now, to come back to Eric's sample: if I want to add some new
nifty tags to my documents, so that a new part of the application can use
them while old parts remain compatible, I have to find a way to convey some
metadata that lets the old parts not only understand that the new document
can still be processed, but also how to build an old-style view of it.
Extensibility is not about being able to change something in a document
structure and then modify all the code that relies on that structure. It is
about finding a way to tell a heterogeneous mix of code that a single data
item can be viewed and processed in a way that fits each piece. It is about
being able to mix new code that uses the new document structure with old
code that uses the old document structure.
Let's take a crude example. Suppose that we want to build a processing
pipeline like this:
N1--->O1--->N2
where N1 and N2 are blocks of code that rely on the new document structure
S2, while O1 is a block of code that relies on the old one, S1.
We could simply use XSL/T stylesheets to transform documents with structure
S1 into documents with structure S2. Let's call this transform T1. The
transform that takes a document with structure S2 and gives back a document
with structure S1 would be called T2.
In order for the pipeline to work properly, we would insert transformations:
pipeline N1----T2--->O1----T1--->N2
structure S2--------->S1--------->S2
The problem is that T1 and T2 are usually not bijective. The evolution of a
document structure may simply involve renaming and reordering elements, but
that is not really interesting from an extension point of view. Extension
implies adding or removing data in the structures, which means that there is
no such thing as a true reverse transformation. In other words,
T1.T2 != Id (and there is no T3 such that T1.T3.T2 = Id).
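A toy sketch of this round-trip problem, with plain dicts standing in for documents and hypothetical field names (REBATECODE as an extended field that S1 does not know about) — this illustrates the argument, not any real schema:

```python
def t2(doc_s2):
    """S2 -> S1: the old structure has no REBATECODE field, so drop it."""
    return {k: v for k, v in doc_s2.items() if k != "REBATECODE"}

def t1(doc_s1):
    """S1 -> S2: there is no information left to rebuild the dropped field."""
    return dict(doc_s1)

d = {"PRODUCTID": "42", "REBATECODE": "SPRING10"}  # an S2 document from N1
round_trip = t1(t2(d))
print(round_trip)       # {'PRODUCTID': '42'} -- REBATECODE is gone
print(round_trip == d)  # False: T1 . T2 != Id
```

Because T2 discards information, no choice of T1 can make the composition the identity.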
In plain words, this just means that XSL/T transformation is a one-way
integration tool. Extended data produced by N1 will be lost during T2, which
means that N2 won't receive it. Surrounding O1 with T2 and T1 makes sure
that O1 can be used in the pipeline, but it builds a kind of data sink, even
though O1 is just a piece of code that should not have any effect on the
extended data produced by N1.
Suppose that O1 is just a block that has to replace PRODUCTID elements with
PRODUCT elements, including name, price, an ad blurb, and so on (a join
operator, in a way), but only in PURCHASEORDER/AD/PRODUCTID, not in
PURCHASEORDER/LINEITEM/PRODUCTID. O1 needs to receive data in a given
format and output data in another format. What if the input and output
formats change? What if S2 has important new data items, such as
REBATECODE, that must be passed to N2 (which computes the grand total)?
That new data must be passed along by O1 even though it is not part of the
structure O1 is supposed to process.
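One way O1 could look, sketched with the standard-library ElementTree API; the element names follow the purchase-order example above, and the catalogue is a made-up stand-in for whatever data source the join would really use:

```python
import xml.etree.ElementTree as ET

# Hypothetical product catalogue backing the join; not any real data source.
CATALOG = {"42": {"NAME": "Widget", "PRICE": "9.99", "BLURB": "Buy now!"}}

def o1(order):
    """Replace PRODUCTID with a full PRODUCT element, but only under AD,
    leaving PURCHASEORDER/LINEITEM/PRODUCTID untouched."""
    for ad in order.findall("AD"):
        pid = ad.find("PRODUCTID")
        if pid is None:
            continue
        info = CATALOG[pid.text]
        product = ET.SubElement(ad, "PRODUCT", ID=pid.text)
        for tag, value in info.items():
            ET.SubElement(product, tag).text = value
        ad.remove(pid)
    return order

doc = ET.fromstring(
    "<PURCHASEORDER>"
    "<AD><PRODUCTID>42</PRODUCTID></AD>"
    "<LINEITEM><PRODUCTID>42</PRODUCTID></LINEITEM>"
    "</PURCHASEORDER>")
o1(doc)
print(ET.tostring(doc, encoding="unicode"))
```

Note that this block is hard-wired to the S1 shape: any S2-only element it does not recognise is simply passed through or lost depending on how it rebuilds the document, which is exactly the fragility discussed here.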
Extensibility implies finding ways to write the block O1 so that it can
handle S2 as if it were S1, propagating any "out-of-band" (with regard to
S1) data to the next block in the pipe. That is more complex than just
building a "view", like AF or an on-the-fly XSL/T transformation.
Maybe it is still possible to have a non-intrusive way of adapting old code
to new schemas (i.e. without having to build O1 in a ready-for-extension
way), with a pipeline like this:
N1--->D1---T2--->O1--->D2---+
      |                     +---T4--->D4--->N2
      +-------T3------>D3---+
D1 is the document produced by N1. It is transformed by T2 to feed O1, which
produces D2. D1 is also transformed by T3 to produce D3, a document that
contains all the data from D1 that cannot be passed through O1 (that is to
say, all the extended data). Finally, T4 takes D2 and D3 and weaves them
together to give D4, which is D1 as processed by O1, with the extended data
from N1 preserved.
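The split/merge pipeline can be sketched the same way as before, with dicts standing in for documents and REBATECODE as the hypothetical extended field; the grand-total logic in O1 is invented purely to show that its output survives the merge:

```python
EXTENDED = {"REBATECODE"}          # fields of S2 that S1 knows nothing about

def t2(d1):
    """Strip a document down to its S1 projection, fit for O1."""
    return {k: v for k, v in d1.items() if k not in EXTENDED}

def t3(d1):
    """Extract the extended data that cannot pass through O1."""
    return {k: v for k, v in d1.items() if k in EXTENDED}

def o1(doc_s1):
    """Old-style block: works on S1 only (made-up processing step)."""
    doc = dict(doc_s1)
    doc["TOTAL"] = "9.99"
    return doc

def t4(d2, d3):
    """Weave O1's output back together with the extended data."""
    return {**d2, **d3}

d1 = {"PRODUCTID": "42", "REBATECODE": "SPRING10"}   # produced by N1
d4 = t4(o1(t2(d1)), t3(d1))
print(d4)   # REBATECODE survives even though O1 never saw it
```

The real difficulty, of course, is writing T3 and T4 for tree-structured documents, where the extended data must be re-attached at the right place rather than merged key by key.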
It seems that this may be an alternative to using OOP concepts for
extensibility. Maybe OOP isn't required after all... Now, the crucial point
is: which way is the easiest and cheapest to implement, learn and teach?
Regards,
Nicolas
>-----Original Message-----
>From: Joe English [mailto:jenglish@flightlab.com]
>Sent: Wednesday, 6 March 2002 03:32
>To: xml-dev@lists.xml.org
>Subject: Re: [xml-dev] Stupid Question (was RE: [xml-dev] XML doesn't
>deserve its "X".)
>
>
>
>Dare Obasanjo wrote:
>>
>> There is no rule that states that xsi:type should only describe
>> simpleTypes. Your post is basically stating
>>
>> I'm having a hard time envisioning a scenario where having
>> XML schema type information for an instance document would
>> be useful to an application.
>>
>> Which just means that the kind of problems you have to solve are
>> different from those that those of us that are interested in strongly
>> typed data have to solve.
>
>I'm actually very interested in strongly typed data, and
>strongly typed processes as well. But in an application designed
>to process documents conforming to a particular schema, there's
>very little that the PSVI can add that the application doesn't
>already know (by virtue of its author having coded to the schema).
>The main point of a validator IMO is to prevent ill-typed data from
>being fed to such a process to begin with.
>
>Of course this may just be a lack of imagination on my part;
>there may be many compelling use cases for xsi:type, I just
>can't think of any. In most applications I've written,
>by the time a function has its hands on a piece of data,
>it already knows what the relevant type is.
>
>
>--Joe English
>
> jenglish@flightlab.com
>
>-----------------------------------------------------------------
>The xml-dev list is sponsored by XML.org <http://www.xml.org>, an
>initiative of OASIS <http://www.oasis-open.org>
>
>The list archives are at http://lists.xml.org/archives/xml-dev/
>
>To subscribe or unsubscribe from this list use the subscription
>manager: <http://lists.xml.org/ob/adm.pl>
>