10/11/2002 12:00:24 AM, Uche Ogbuji <uche.ogbuji@fourthought.com> wrote:
>This sounds like the rallying cry of the Knights of Tag Soup.
>
>Validation is mostly an obstacle to evolution if designed using the wrong
>tools in careless hands.
That's a great turn of phrase! Still, I guess I think of myself as at
least the jester in the Court of the Knights of Tag Soup, so maybe I
should ask Uche or someone else to educate me on the point.
Uche's point was a response to a response to something I wrote:
> 2 - Namespaces - work best for mixing instances of well-defined
> vocabularies/schemas together, they don't work so well to support
> evolution or un-typed XML. Schema evolution using namespaces is
> a Known to Be Hard, TAG-level problem.
I'm under the impression that managing schema evolution
is a Hard Problem; lots of the complexity
of WXS is there to make the problem more manageable by employing
notions of partial re-use that come from the OO world, but it's
not clear to me that best practices have emerged for using it. We've seen
lots of threads on how best to use namespaces in the context of
evolving schemas (the famous XHTML "3 namespaces" flamefest comes
to mind!). I just don't get the impression that this is a situation
where the technology is well-understood but carelessly applied.
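To pick a toy example of what I mean (the namespace URIs below are made
up purely for illustration): one common way to "evolve" a vocabulary is to
mint a new namespace for version 2, at which point every schema, stylesheet,
and line of application code bound to the version-1 namespace silently stops
matching:

  <!-- version 1 -->
  <invoice xmlns="http://example.com/ns/invoice/1.0">
    <total>42.00</total>
  </invoice>

  <!-- version 2: same element names, new namespace URI, so every
       template or declaration bound to the 1.0 namespace now matches
       nothing -->
  <invoice xmlns="http://example.com/ns/invoice/2.0">
    <total currency="USD">42.00</total>
  </invoice>

Keep the old namespace instead and you get the opposite headache: version-1
validators reject perfectly reasonable version-2 documents.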
Do people here disagree?
On the other part of my original assertion: Am I missing something?
Namespaces in XML seems obviously written for the scenario where
elements and attributes from different *well-defined* namespaces are
merged in a single document. The critique from the Architectural
Forms advocates has been (to the best of my ability to repeat it):
The namespaces spec assumes that every element is in one and only
one namespace. But when you have decentralized schema
authorities whose output must be amalgamated, this assumption is untenable.
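To make the first part concrete, here is the kind of mixing the namespaces
spec handles well (the RSS 1.0 and Dublin Core namespace URIs, as best I
recall them):

  <item xmlns="http://purl.org/rss/1.0/"
        xmlns:dc="http://purl.org/dc/elements/1.1/">
    <title>An entry</title>
    <dc:creator>Somebody</dc:creator>
  </item>

The mixing itself is painless; the assumption bites because <title> here is
an RSS title and nothing else. There is no way to say that it *also*
conforms to some other authority's notion of a title, which is exactly the
kind of multiple mapping the architectural forms folks want.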
The "decentralized schema authorities" is more or less what they have in the
RSS world, and in lots of other real-world situations where people are mailing
around schema fragments, templates, and examples in order to figure out what
format the data are supposed to be put into.
So, two questions: Is this a matter of "the wrong tools in careless hands"?
I guess I balk at calling people "careless" if they're just doing what they
have to do in an unmanaged environment where there is no "schema authority"
(or the "schema authority" is locked in analysis paralysis in a meeting room
somewhere!). Or am I missing the point here?
Finally, how DOES one make XML technology choices that are not fragile in the
face of human nature? So much of traditional SGML/XML technology seems
predicated on the assumptions underlying the "waterfall model" of software
development. I realize that some of our work lives (Hi Len!!!) are characterized
by requirements, bids, contracts, and lawsuits ... but (ahem) the "marketing
is always right, they changed their minds, so get back to work" attitude
seems a wee bit more prevalent in our industry :-) Namespaces, schemas, and
validation are valuable tools in such an environment, but they don't provide
best-practice guidelines the way they would in a waterfall environment.
On the other hand, the Knights of Tag Soup can just deal with it, with
whatever witches' brew of schemas, namespaces, architectural forms,
Schematrons, pipelines, regexps, XSLTs, etc. has to be conjured up
for a specific need. I don't think that's anything to sneer at!
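For what it's worth, here is the flavor of brew I have in mind: a
Schematron rule that enforces the one thing I actually care about and
shrugs at whatever else a feed's author has mixed in (the element names
are invented for the example, and I'm going from memory on the old
Schematron namespace):

  <sch:schema xmlns:sch="http://www.ascc.net/xml/schematron">
    <sch:pattern name="Items need titles">
      <sch:rule context="item">
        <sch:assert test="title">An item should have a title, no matter
        what else has been bolted onto it.</sch:assert>
      </sch:rule>
    </sch:pattern>
  </sch:schema>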