> -----Original Message-----
> From: Leigh Dodds [mailto:ldodds@ingenta.com]
> Sent: Friday, August 03, 2001 11:42 AM
> To: David E. Cleary; xml-dev@lists.xml.org
> Subject: RE: ANN: SAX Filters for Namespace Processing
>
>
> OK, that's the *procedural* reason why default is as it is.
>
> But what's the design decision? IOW, what thinking led to the
> 'original decision' that failed to be overturned?
I doubt (from my own W3C experience, not on the Schema WG) that that question has an answer. Remember that any W3C working group is 15-50 people with vastly different interests, skills, agendas, etc. It is almost impossible for the editors of the spec to understand everything in the early going; they're just trying to make sure that the vast blob of inputs they get is somehow reflected in the drafts. The whole point of the rather lengthy and laborious W3C process is to solicit detailed analyses of all these little "decisions" by the editors, which may be made on purely mechanical grounds. Many of us here have probably worked as programmers; an analogy would be if some Commission of Inquiry asked you, two or three years after the fact, why you initialized some variable to NULL rather than to the default value specified in the minutes of the June 23 design meeting on page 72.

Your only defense, and IMHO the Schema WG's probable defense here, is that no human being could possibly keep all these fragments of information straight!
I would have to say that the "2/3 majority in order to change anything" rule sounds like a bad idea to me (and is not anything I've encountered elsewhere in the W3C). It presumes that the original draft specification was based on sound reasoning, rather than "I think I understand this, let's throw it at the wall and see if it sticks."
Folks, David Cleary's description of how this decision got made illustrates clearly what the W3C is, and is not. It *does* have a process for getting lots of smart people together to write specs, and that process brings in all sorts of diverse requirements and potential solutions that no single person would ever think of. It does have a mechanism for ensuring that the specifications can be implemented (at least by the Microsofts and IBMs of the world!). It does *NOT* have a mechanism for ensuring that the specifications really and truly make sense as solutions to the problems specified in the Requirements. It does NOT have a mechanism (other than perhaps the new TAG) for making sure that one spec is not subtly inconsistent with another. And it most certainly does NOT have a way to ensure that the consensus of the experts is based on anything more than a desire to get the hell out of some committee room :~)
So, the W3C is a reasonable "R&D Lab for the Internet" where competitors get together to sort out where they want to compete and where they want to work in tandem. This is a "good thing" because it provides end-users with a rough sketch of what they can expect to buy, what they will have to build, and which components they can reasonably expect to interoperate in a rapidly changing world. It is NOT a reasonable standards organization, as this example illustrates all too clearly. W3C Recommendations are a common *starting* point on the road to truly robust standards.