
Re: [xml-dev] Interoperability



From: "Mark Baker" <distobj@acm.org>
 
> > But what the user needs is particular resources, perhaps drawn from
> > different sources and modified for local conditions, available without
> > fuss.
> 
> If this information is republished with a URI, then use that URI.

Let us take the simplest example, using URLs with the file: scheme. The simplest case that should work is file:, not http:.

What conventions currently allow us to locate the various configuration
files in a directory which a generic XML desktop application (e.g. an editor) would use (e.g. DTD, CSS, XSLT, XML Schema, Schematron schema, RELAX NG schema, public entity files, document-type-specific entities)?

Answer: there are at present no conventions that let us locate XML configuration files in a directory according to their role. The scheme that was used to access the directory (e.g. file: or ftp: or http:), the entity type accessed (e.g. directory listing, ZIP file), and the physical access mechanism (e.g. cached or fetched directly, or mapped through a CATALOG) give no help at this stage.

But it would be trivial to make one up. The most trivial* is "if there is a file in the directory with extension .dtd, then it is the DTD". Such a convention is needed just as much for web access (of the kind that Mark imagines) as for XAR, where we bundle all the configuration files for a cross-platform "XML application" into one file.
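
To make the extension convention concrete, here is a rough sketch in Python. The role names and the extension table below are my own assumptions for illustration, not an agreed convention:

    from pathlib import Path

    # Hypothetical extension-to-role table: assumed for illustration,
    # not an established convention.
    ROLE_BY_EXTENSION = {
        ".dtd": "DTD",
        ".css": "CSS stylesheet",
        ".xsl": "XSLT stylesheet",
        ".xsd": "XML Schema",
        ".sch": "Schematron schema",
        ".rng": "RELAX NG schema",
        ".ent": "entity file",
    }

    def configuration_files(directory):
        """Map configuration roles to files found in a directory,
        judging the role of each file by its extension alone."""
        roles = {}
        for path in sorted(Path(directory).iterdir()):
            role = ROLE_BY_EXTENSION.get(path.suffix.lower())
            if role is not None:
                # First match wins; a real convention would need a
                # rule for directories with several files per role.
                roles.setdefault(role, path)
        return roles

The point is how little machinery the simplest case needs: no manifest, no lookup protocol, just a directory listing.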

That being so, I think it reduces the congeries of issues that Mark has raised down to the architectural one of whether it is better to package configuration files or to have them dispersed.

I don't believe it is feasible, at the current stage of the web,
to build XML applications whose configuration files (schemas, etc.)
are dynamically sourced over the web from multiple strangers on
demand. Indeed, it would be positively bad software engineering
for any organization to rely on configuration data it does not have under its control.

Who in their right mind would create a web page and rely on JavaScript fragments located at a stranger's website? Every intermediate agent (e.g. a system administrator, a proxy, a network) adds another point of failure:
no integrator (with contractual responsibilities) wants points of failure outside their control.

Now it may be argued that the convention should be to go from namespace URI to RDDL to related resources. But that assumes that namespaces are being used, and that the user knows and can enter (e.g. by typing) the namespace. This is not the way desktop applications work: the application presents the user with a list of the things it can do.
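
For comparison, here is what that lookup might look like as a sketch, again in Python. It assumes the namespace URI actually dereferences to a well-formed RDDL document (RDDL documents are XHTML and are required to be well-formed, but many namespace URIs return nothing at all):

    import urllib.request
    import xml.etree.ElementTree as ET

    RDDL_NS = "{http://www.rddl.org/}"
    XLINK_NS = "{http://www.w3.org/1999/xlink}"

    def related_resources(namespace_uri):
        """Dereference a namespace URI and, if a well-formed RDDL
        document lives there, list the resources it advertises."""
        with urllib.request.urlopen(namespace_uri) as response:
            tree = ET.parse(response)
        # RDDL marks each related resource with rddl:resource,
        # using xlink:role for its nature (e.g. a schema language)
        # and xlink:arcrole for its purpose (e.g. validation).
        return [
            {
                "href": resource.get(XLINK_NS + "href"),
                "nature": resource.get(XLINK_NS + "role"),
                "purpose": resource.get(XLINK_NS + "arcrole"),
            }
            for resource in tree.iter(RDDL_NS + "resource")
        ]

Note how much more is presupposed here than in the extension sketch: a namespace, a network connection, a document at the other end, and a user who knows the URI to start from.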

Cheers
Rick Jelliffe

* The next most trivial is "if there is a manifest file (e.g. an RDDL document), look it up in that", but (to shave with Ockham's razor) we shouldn't require intermediary files where the direct way is satisfactory. And we would first have to locate the RDDL document, and there we are back to looking at extensions. I happen to believe that the simplest case should be the standard case, and that if we have to use a manifest we have already introduced a complexity beyond what integrators need, but others may have different views. (I.e., this is not saying that RDDL is useless, but that even the modest requirements of RDDL are overkill here, however useful they are in general.)