Re: [xml-dev] Interoperability
From: "Mark Baker" <firstname.lastname@example.org>
> Not if they're cached nearby, or even locally in the case of
> disconnection. It is a common misconception that HTTP URIs are
> necessarily locators. They are identifiers, which are names in some
> contexts, and locators in others.
But what the user needs is particular resources, perhaps drawn from
different sources and modified for local conditions, available without
fuss. Having only the canonical form of schemas etc. is not only
fragile (because they come from multiple sources) but also foists
too-large solutions on people who may be better off with subsets.
> > 3) Packaging
> > ----------------
> > The other mechanism, and the one I think we need,
> > is a simple file format for packaging all the resources
> > for generic and value-added XML applications: an
> > XML application archive format ("XAR", as Gavin has
> > suggested). The kind of thing I am suggesting
> > can be found at http://www.topologi.com/public/dzip.html
> The Web tried that with MHTML; package up all the images and the HTML
> together in one lump.
But packaging documents is specifically what I am *not* suggesting,
and the reasons you give might be more evidence why. In SGML
we have MIME-SGML and SDIF, and these never took off either.
When the MIMEtypes-for-XML discussions started, I pushed
for the MIME type to cover merely simple entities, not
whole document packages (not only is packaging a separate issue, it is
also a black hole of complications and competing possibilities).
> IMO, all we need is tools, plus machine-processable assertions that
> the tools can use to know when to stop caching. Basically, wget with
> some RDDL smarts.
With RDDL we do not get the ability to use variant schemas or
stylesheets (as are often needed to cope with the house rules of a
particular organization), nor internationalization, nor any ability for
integrators to value-add resources (for free or for profit). Web
resources are not persistent, nor constantly available, nor do
most developers have contracts with Akamai or someone who
could make performance or availability better, nor do we have
any way to support different versions of resources attached to the
same namespace.
I support RDDL just as much as anyone (indeed I have three
RDDL pages for my little languages Schematron, Hook and Connect)
but it does not answer the problem here. RDDL answers the
question "what is at the other end of the namespace URI?"
with the answer "a standard assortment of resources provided by
the controller of the namespace."
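To make that concrete, here is a hedged sketch (not anyone's actual tool,
just "wget with RDDL smarts" in miniature) of how a fetcher might list the
resources a namespace document points to. The rddl:resource element and the
xlink:href/role/arcrole attributes are per RDDL, but the file names and
purpose URIs in this trimmed-down page are invented for illustration:

```python
# Sketch: extract the (location, nature, purpose) triples from a
# minimal, made-up RDDL namespace document. In a real tool the
# document would be fetched from the namespace URI itself.
import xml.etree.ElementTree as ET

RDDL_NS = "http://www.rddl.org/"
XLINK_NS = "http://www.w3.org/1999/xlink"

doc = """\
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:rddl="http://www.rddl.org/"
      xmlns:xlink="http://www.w3.org/1999/xlink">
  <body>
    <rddl:resource xlink:href="example.xsd"
        xlink:role="http://www.w3.org/2001/XMLSchema"
        xlink:arcrole="http://www.rddl.org/purposes#schema-validation"/>
    <rddl:resource xlink:href="example.xsl"
        xlink:role="http://www.w3.org/1999/XSL/Transform"
        xlink:arcrole="http://www.rddl.org/purposes#transformation"/>
  </body>
</html>
"""

def related_resources(rddl_text):
    """Return (href, nature, purpose) for each rddl:resource element."""
    root = ET.fromstring(rddl_text)
    triples = []
    for res in root.iter("{%s}resource" % RDDL_NS):
        triples.append((res.get("{%s}href" % XLINK_NS),
                        res.get("{%s}role" % XLINK_NS),
                        res.get("{%s}arcrole" % XLINK_NS)))
    return triples

for href, nature, purpose in related_resources(doc):
    print(href, nature, purpose)
```

Note that everything such a tool finds is still the one canonical
assortment at the namespace URI, which is exactly the limitation argued
above.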
The question DZIP answers is "How can we make it convenient
to deploy XML?"
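By contrast, the packaging idea can be sketched like this. This is NOT the
actual DZIP format (see the URL above for that); it is just a toy archive,
using ZIP plus an OASIS-catalog-style entry with invented file names, to
show how a schema subset and a house stylesheet could ship as one
deployable file that a tool resolves locally, with no network at all:

```python
# Toy "XML application archive": bundle a catalog, a schema and a
# stylesheet into a single in-memory ZIP, then read it back the way
# a deployment tool might.
import io
import zipfile

members = {
    "catalog.xml": (
        '<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">\n'
        '  <uri name="http://example.org/ns" uri="example.xsd"/>\n'
        '</catalog>\n'),
    "example.xsd": "<!-- schema, possibly a local subset or profile -->",
    "example.xsl": "<!-- house-rules stylesheet -->",
}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    for name, text in members.items():
        zf.writestr(name, text)

# A consuming tool can now resolve everything from the one file:
with zipfile.ZipFile(buf) as zf:
    names = sorted(zf.namelist())
print(names)
```

The point of the catalog entry is that the integrator, not the namespace
owner, decides which schema the name resolves to.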
Of course there is a lot of value in making such resources
available freely over the web. However, this centralizes
power to the well-known sites and encourages fat specs:
I think we need to recognise that any (e.g. W3C) technology
must meet a wide variety of needs and so necessarily be
bulkier than we need for any individual job.
The answer is not in promoting alternative "simpler" standards
(which, by adding to the solution space may actually complicate
things) but in building infrastructure to support deployment of
appropriate subsets to targeted users. In particular, to
allow industry profiles not only of the larger DTDs but also
of XSLT, XML Schemas, and the rest.
This way the web moves away from being composed of the standards
Gods who create technology and the drones who use it. Instead,
we have the intermediate priesthood of integrators who can
tailor the technology for particular users and shield the users
from complexity. (Hey, isn't that us developers at XML-DEV!)
Microsoft has done well by assuming an intermediate
class of integrators; for XML to get really useful we need
to forget the mentality that the end user is a hacker and
think about simple technologies that hit the spot for easier