[Rick Jelliffe]
> I don't think I have changed my position: I was not against SML as merely an
> "effort to subset XML" but as an antagonistic, premature, dogma-driven
> exercise in reductionism whose end result could only be to get rid of
> everything, as in XML 2.0alpha. And one that diverted people's attention from
> the looming XML Schemas: a real source of complication. [1]
Oh I dunno. My take is different in most respects (comments inline below):
> I think there were five tendencies at work in SML that doomed it despite
> the talent of those involved and their level of concern:
>a naive view of where the costs of parsing lies
This was never a concern in my case. Indeed I've written often about the
dangers
of focusing on parsing speed
(http://www.propylon.com/news/ctoarticles/XML_is_Too_Slow_20011110.html).
>, reductionism (the failure to value that a limited redundancy or sugar or
>lubrication in a language increases its usefulness),
The verbosity of markup never really bothered me, though the interchangeability
of aspects of it (like the classic element/attribute divide) does fascinate me.
> a related belief that anything that can be layered should be layered
> (ignoring the issue of how a document should declare
> or specify this layering),
I'm guilty of that one, I think. I've recently come to appreciate the
pervasiveness of hermeneutic circles in mainstream technologies. However, as a
transformation decomposition freak (i.e. the sadly under-attended xpipe), I
strive to decompose complex processing into loosely coupled data processing
passes where possible.
This is a very different form of "layering" from the form normally found in the
(increasingly-known-to-be-flawed) stepwise refinement -> procedural
decomposition approach to complexity management.
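A minimal sketch (in Python; the pass names are purely illustrative and are
not taken from xpipe itself) of what I mean by loosely coupled data processing
passes - each stage sees only the output of the previous one:

```python
from functools import reduce

# Each pass is a small, self-contained transformation: text in, text out.
def strip_blank_lines(text):
    return "\n".join(line for line in text.splitlines() if line.strip())

def collapse_spaces(text):
    return "\n".join(" ".join(line.split()) for line in text.splitlines())

def pipeline(passes, document):
    # Compose the passes left to right. No pass knows about any other,
    # so stages can be added, removed, or reordered independently.
    return reduce(lambda doc, p: p(doc), passes, document)

result = pipeline([strip_blank_lines, collapse_spaces],
                  "hello    world\n\n  foo   bar\n")
```

The point of the decomposition is that each pass is testable and replaceable
on its own, which is quite unlike a procedural call tree.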
> an Americanist tendency that if what is good for them will
> have to be good enough for everyone else, and the lack of agreed
> requirements.
Not applicable in my case :-)
> Furthermore, the mental furniture of infoset versus syntax was not
> available at that time.
As I've said before, the BIG problem with the infoset is that we are no
nearer to establishing a core mechanism for specifying what goes into and
what comes out of an XML processor.
We got caught up in conversations where we were talking past each other.
People on this list have mutually incompatible interpretations of fundamental
terms like "syntax" and "data model"!
The brittle nature of any XML-in, XML-out specification makes it a fog of a
problem to
join tools together.
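To make that brittleness concrete, here is a sketch (using Python's standard
library purely for illustration) of how even a straight parse-and-reserialize
round trip fails to be byte-identical:

```python
import xml.etree.ElementTree as ET

# A well-formed document using a character reference and an
# empty element written with an explicit end tag.
src = '<doc a="1"><empty></empty>&#65;</doc>'

# Parse it and write it straight back out.
out = ET.tostring(ET.fromstring(src), encoding="unicode")

# The tree is "the same", but the bytes are not: the character
# reference has been expanded and the empty element rewritten.
```

If a simple round trip through one tool changes the bytes, chaining several
tools with no agreed in/out contract compounds the problem.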
We should have fixed it but we didn't. The catastrophic effect of that
failure is that the object-serialization folk seized the opportunity to make
the in/out spec. rigorous using data typing.
A glorious opportunity has been lost in my opinion.
We got caught up in self-congratulation when XML took off.
Now it is becoming something else and I fear we markup heads have only
ourselves to blame.
Oh well...
But before I go let me say one more thing. EVERY XML app I've ever seen
IS A SUBSET of XML. By that I mean that it is a trivial matter to create a
fully XML 1.0 compliant document that causes the tool to choke. Be it not
supporting case-sensitive element type names, or not handling DOCTYPE
declarations, or not resolving external entities, or supporting a teeny
subset of Unicode, or munging white space, or ... And as for namespaces -
don't get me started on namespaces!
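By way of illustration, here is a sketch (Python's bundled expat-based parser;
the document itself is contrived) of a perfectly well-formed XML 1.0 document
that exercises a few of the features tools routinely mishandle - a DOCTYPE
with an internal entity, case-sensitive element names, and significant
whitespace:

```python
import xml.etree.ElementTree as ET

# Well-formed XML 1.0: internal subset declaring an entity,
# plus mixed-case element names and whitespace in character data.
doc = '''<!DOCTYPE Doc [<!ENTITY greet "hello">]>
<Doc><item>  &greet;  world  </item><ITEM/></Doc>'''

root = ET.fromstring(doc)

# Element type names are case-sensitive: <item> and <ITEM> are distinct.
names = [child.tag for child in root]

# A conformant parser must expand the internal entity and must NOT
# munge the surrounding white space.
text = root[0].text
```

A tool that lowercases names, rejects the DOCTYPE, or trims that text is
processing a subset of XML, whatever its marketing says.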
It's interesting that in this industry we don't have a culture of plug-fests.
We should, because the results would be surprising to a lot of people.
Joining up XML tools is plain HARD.
The XML spec. leaves a lot unsaid about what goes in and what comes out.
XML namespaces make the problems significantly more complex.
We hide behind "XML compliance" at parse time and fail to look at what
comes out
of our "XML compliant" apps. Is it any wonder we cannot integrate these things
easily!
Sean
http://seanmcgrath.blogspot.com