   ubiquitous XML?

  • From: "Simon St.Laurent" <simonstl@simonstl.com>
  • To: XML-Dev Mailing list <xml-dev@xml.org>
  • Date: Thu, 16 Nov 2000 22:37:20 -0500

This may prove to be one of those messages people with allergies to "XML
politics" delete, but it may not.

I just got back from XMLDevCon 2000, which went pretty well.  I got solid
questions from the audience, got to meet yet more incredible XML folk, and
generally had a good time.  I left with a lot of questions about XML's
future, though, and I'm not sure there are any good answers.

My background before coming into XML was in hypertext and eventually Web
development, where I'd marveled at the way Web development seemed open to
virtually all comers.  Though there were certainly different levels of
virtuosity among developers, it was also possible for someone to learn
HTML from a single-page cheat sheet and a ten-minute introduction.
(I learned from a 116-page book, personally.)

When I first encountered XML, it seemed like it had the same potential.  It
was, after all, a simplification project, and well-formedness offered
newcomers a quick path to creating documents.  XML: A Primer weighed in at
340 pages, heftier than 116, but a lot of that was examples along with some
background material.

A simple and approachable XML seemed to offer the world a chance to move
beyond design-by-committee, making it possible for smaller organizations to
develop customized vocabularies which fit their needs very well without
precluding eventual extension.  On comp.text.xml I even suggested that
those closest to documents - secretaries, assistants, and others who were
thoroughly familiar with information in its everyday instances - might be
better-equipped to handle modeling this information than the experts, at
lower cost and perhaps more effectively.  (This didn't receive an entirely
warm reception, of course.)

As time has passed, I've had less and less hope for this vision.  Not
because of the familiar claim that 'data modeling is hard', but because the
supporting standards for XML seem intent on growing more complex and more
obscure simultaneously.  While notations and unparsed entities took a lot
of figuring out, they could be safely ignored for the most part, and XML
could generally be pared down to elements and attributes and content if
need be.

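To make the "pared-down" XML concrete: here is a sketch of the kind of
document a newcomer could write knowing nothing but elements, attributes,
and content, checked for well-formedness with Python's standard-library
parser.  The vocabulary (memo, to, body) is invented purely for
illustration.

```python
# A hypothetical example of minimal, well-formed XML: no DTD, no schema,
# no namespaces -- just elements, attributes, and character content.
import xml.etree.ElementTree as ET

doc = """<memo priority="high">
  <to>xml-dev</to>
  <body>Well-formedness is the only rule in play here.</body>
</memo>"""

# fromstring() raises ParseError if the document is not well-formed,
# which is the entire barrier to entry in this pared-down picture.
root = ET.fromstring(doc)
print(root.tag)               # memo
print(root.get("priority"))   # high
print(root.find("body").text)
```

That ten-minute-introduction quality is the whole point: mismatched tags
fail loudly, and everything else is up to the author.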
The new generation of supporting standards requires a lot more
understanding.  "Post-schema-validation-infoset", "qualified name", and
"XPath node-set" aren't optional features for developers who want to use
the XML the W3C seems to be excited about developing.  My attempt at
explaining namespaces in one hour at the conference proved more tortured
than I expected - live audiences make all the assumptions I've learned to
suppress, and it quickly became clear that the complexities still live.

This isn't all bad - there is a crew of experts who understand these things
inside and out, but it's taken even this crew over a year to really dig
into the implications of the Namespaces spec, and this crew is relatively
small compared to the hordes of people who would really like to use XML,
not to mention the folks who haven't yet heard of it but might gain by
using it, even modeling their own information.

I'm sure there are some folks out there who see this as a good recipe for
making money.  The confusion over XML has probably helped sell more than a
few of my books, and certainly contributes to consulting fees which still
seem to be climbing.  Schemas are complex enough that I generally recommend
that people hide them behind tools, and pray that those tools are always
available every time the Schemas need to be explored - there's money
there, too.

On the other hand, I'd like to suggest that it creates a bottleneck,
keeping XML to a much smaller group of people, limiting the economies of
scale available by raising the bar for entry.  The existence of complex and
powerful specs has a tendency to make certain classes of people believe
that they need the biggest most powerful tool around, and there isn't a
whole lot of room to propose alternatives while remaining even vaguely
capable of interoperating with the bloatware community.

I'm not sure there's a way out of this, since so many people seem intent
and sincere in their belief that 'bigger is better, and tools and experts
will make this workable'.  I'd still like to think that there's some room
for 'small is beautiful, and tools and experts obscure as much as they
assist'.  While 'bigger is better' may help the sectors where that rule has
generally held, it doesn't do much for the rest of the world.

It seems to me that companies developing projects with only the Fortune 500
in mind have as much potential for helping the world as Boo.com, and that
the current run of buzzword-compliant tools can thrive only if complexity
keeps organizations with clearer and smaller visions from competing.  Maybe
Microsoft will bring XML to every desktop, but I'm not sure that's a model
too many people (outside of Microsoft) see as a good solution either.

To some extent, I feel like the moralist critiquing today's fleshpots and
extravagance while promising only a threadbare alternative, a life of
simplicity that offers more by offering less.  At the same time, however, I
think that a more thorough examination of the different communities and
potential communities of XML users would be informative, maybe even
useful.

Any thoughts?  Does the path to ubiquity run through the Fortune 500?  Or
does it run through the thousands or millions of individuals working on
projects?  (Both is an acceptable answer, of course, but inflicts its own
complications.)

Simon St.Laurent
XML Elements of Style / XML: A Primer, 2nd Ed.
XHTML: Migrating Toward XML
http://www.simonstl.com - XML essays and books

