On Sat, 25 Jan 2003 20:20:56 -0500, Mike Champion <mc@xegesis.org> wrote:
> the TAG is trying to squeeze WAY too much juice from this rather dry
> fruit. If they are trying to understand the actual principles of the Web
> by focusing on URIs, resources, and representations, I'm extremely
> skeptical that they will produce anything particularly useful to guide
> Webmasters, Semantic Web researchers, Web services theorists or
> practitioners, etc.
I read Tim Bray's http://lists.w3.org/Archives/Public/www-tag/2003Jan/0369.html
shortly after writing this. That is pretty thought-provoking, and I recommend
it. Still, I think that neither "a URL simply locates a specific Web page" nor
"a URI identifies an abstract Resource for
which HTTP will return an appropriate representation based on content
headers" really nails the question of what URIs are and/or could be useful
for. Clearly a query encoded in a URI (either to a specific application
such as Antarcti.ca or a database) could result in something ephemeral.
Clearly one *can* use the discipline of "Resource Oriented Programming" (I
believe the phrase is Paul Prescod's) to do interesting things, as Tim has
done. My skepticism kicks in when one asserts that this is *the*
architecture of the Web rather than *an* architecture within which one can
do useful things with the Web. Furthermore, the extent to which Resource
Oriented Programming and/or REST is a best practice for the Web seems to be
an open empirical question; I'd like to see it addressed empirically, i.e.,
do RESTfully correct sites tend to be more "successful" in some measurable
way than those that don't appear to use its principles?
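To make the "ephemeral" point a bit more concrete, here is a minimal sketch
(in Python; the host and parameter names are entirely made up) of a query
encoded in a URI. The URI identifies the query itself, not any fixed result,
so GETting it at two different times may return two different representations:

    # Hypothetical example: a query encoded in a URI. The host and the
    # parameter names are invented for illustration.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    base = "http://example.org/search"
    query = urlencode({"q": "antarctic base maps", "sort": "newest"})
    uri = base + "?" + query
    # -> http://example.org/search?q=antarctic+base+maps&sort=newest

    # The URI names the *query*; the representation returned below can
    # differ from one GET to the next as the underlying data changes.
    with urlopen(uri) as response:
        print(response.status, response.read(200))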
To the very limited extent that I think I understand the problem here or
have an answer, I'm inclined to say that a URI encodes some sort of
implicit or explicit contract between the implementer of the site/service
that "owns" the URI and any potential users/consumers. A bare-bones best
practice might be "GETing the base URI should return something useful to
the intended audience" (a human-readable links page such as CNN.com, an RDF
or RDDL file, a WSDL description of the services offered there, or
whatever). Beyond that, I don't think we have much solid theory or
practical experience for anything other than human-readable content.
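For what it's worth, a bare-bones check of that practice could look something
like the sketch below (again Python, and again the target site is just a
stand-in): GET the base URI with an Accept header that welcomes both
human-readable and machine-readable answers, and see whether anything a
consumer could plausibly use comes back.

    # Hypothetical sanity check for the "bare-bones best practice" above:
    # GET the base URI and see whether it returns something a consumer
    # could use (HTML for people, RDF/RDDL/WSDL/XML for software).
    from urllib.request import Request, urlopen

    USEFUL_TYPES = {"text/html", "application/xhtml+xml",
                    "application/rdf+xml", "application/xml", "text/xml"}

    def base_uri_is_useful(base_uri):
        req = Request(base_uri, headers={
            "Accept": "text/html, application/rdf+xml, application/xml;q=0.9"})
        with urlopen(req) as resp:
            return (resp.status == 200 and
                    resp.headers.get_content_type() in USEFUL_TYPES)

    print(base_uri_is_useful("http://example.org/"))  # stand-in base URI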
There's lots of work going on at the W3C and elsewhere to define alternate
ways of nailing down the contract implied by a URI more explicitly -- XPath
(many XML DBMS systems allow XPath queries encoded in a URI; a small
hypothetical example appears at the end of this note), XForms, XQuery, WSDL,
and the various RDF-based specs. That's great; these are extremely useful ...
but do all these things really fit within some
abstraction of what a URI really is? Or, more importantly, do any of the
abstractions that could cover all these bases really provide powerful
theoretical concepts? Our world is full of little tautologies such as "A
URI identifies a Resource in a Uniform way" and "Web services are those
things described by the Web Services Description Language." Attempts to
come up with non-tautological definitions of 'Resource' and 'Web service'
are notoriously prone to go down ratholes on the TAG and WSA mailing lists.
Not all apparent tautologies are theoretically fruitless (natural selection
can be phrased in a way that makes it sound like a sterile tautology), but
at least this has got to be a warning that lots of skepticism and
empiricism need to guide these discussions if progress is to be made.
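P.S. Since I mentioned XPath queries encoded in URIs above, here is the kind
of thing I have in mind, as a purely hypothetical sketch (the endpoint and the
query-parameter convention are invented; real XML databases each have their
own conventions):

    # Hypothetical: an XPath expression percent-encoded into a query URI,
    # in the general style some XML databases accept. Endpoint is made up.
    from urllib.parse import quote

    xpath = "/orders/order[@id='42']/total"
    uri = "http://example.org/xmldb/orders?xpath=" + quote(xpath, safe="")
    print(uri)
    # http://example.org/xmldb/orders?xpath=%2Forders%2Forder%5B%40id%3D%2742%27%5D%2Ftotal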