RE: Extreme specwriting? (was RE: SAX Filters for Namespace Processing)
- From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
- To: Mike.Champion@SoftwareAG-USA.com, xml-dev@lists.xml.org
- Date: Mon, 06 Aug 2001 09:35:04 -0500
Which is precisely what the SGML developers did. SGML did not start out excruciatingly complex. It takes years of development and refactoring to do that. XML beat its record for that, but at least the complexities are in different documents.
No one is purely right in this discussion. Again, spec/standard references have value. If you can refer to only those parts you implement, and those parts can be implemented stand-alone, you can test what you did. In SGML, we had only one document for most of its history, and drafts of others. That packed all the features into one reference, and that led to the term "the SGML Nazis" being used by developers who only implemented pieces of it. So part of this issue is clean references, with everyone involved fully aware that what is not referenced is just as important, and further, just as valid, as what is. IOW, it is important to be able to say, and be precise in the saying, that a piece of software is or isn't "namespace-aware". Part of saying that is feature negotiation, and part of that might be part of where efforts like .NET are going.
You don't send a rifle squad in with artillery. You send them in with a radio.
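As a sketch of what such feature negotiation looks like in practice: later SAX implementations expose namespace handling as a queryable, settable feature rather than an all-or-nothing behavior. The snippet below uses Python's `xml.sax` module purely as an illustration (the thread discusses SAX in general, not any one binding); the probing pattern, not the specific API, is the point.

```python
# Hedged sketch: negotiating "namespace-awareness" with a SAX parser
# by probing a named feature instead of assuming the behavior.
# Python's xml.sax is used here as an illustrative stand-in.
import xml.sax
import xml.sax.handler


def negotiate_namespace_awareness():
    """Return (parser, aware) where aware reports whether the parser
    accepted namespace processing via feature negotiation."""
    parser = xml.sax.make_parser()
    try:
        # Ask the parser to turn on namespace processing...
        parser.setFeature(xml.sax.handler.feature_namespaces, True)
        # ...then read the feature back to confirm what we actually got.
        aware = bool(parser.getFeature(xml.sax.handler.feature_namespaces))
    except (xml.sax.SAXNotRecognizedException,
            xml.sax.SAXNotSupportedException):
        # The parser doesn't know or can't honor the feature:
        # an explicit "no", which is easier to work around than
        # a silent difference in spec interpretation.
        aware = False
    return parser, aware


if __name__ == "__main__":
    _, aware = negotiate_namespace_awareness()
    print("namespace-aware:", aware)
```

The try/except is the negotiation: the caller learns up front that a feature is missing, rather than discovering it later through divergent parse results.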
More XML implementors should admit that the emperor is "sartorially challenged" and just implement those parts of the specs that make sense, and call a subset a subset. This might limit interoperability in principle, but I for one would rather work around an explicitly missing feature than find (after wasted nights of debugging) that the implementers interpreted the spec differently.