Mike Champion wrote:
> ... Clearly one *can* use the discipline of "Resource
> Oriented Programming" (I believe the phrase is Paul Prescod's) to do
> interesting things, as Tim has done. My skepticism kicks in when one
> asserts that this is *the* architecture of the Web rather than *an*
> architecture within which one can do useful things with the Web.
If we can't agree that pervasive use of URIs is a defining
characteristic of the Web as we know it, then I really can't imagine how
we have the building blocks for any meaningful conversation at all! All
I can do is encourage you to read the first chapter of Tim Berners-Lee's
book or his original proposal for the Web here:
"The attached document describes in more detail a Hypertext project.
HyperText is a way to link and access information of various kinds as a
web of nodes in which the user can browse at will."
A web of nodes, i.e., addressable things and links between them. That
link was down when I checked just now, so I'll include the Google cache URL:
> Furthermore, the extent to which Resource Oriented Programming and/or
> REST is a best practice for the Web seems to be an open empirical
> question; I'd like to see it addressed empirically, i.e. do RESTfully
> correct sites tend to be more "successful" in some measureable way than
> are those that don't appear to use its principles?
In any particular case there would obviously be some confounding factor
that could be argued to be the "real" reason that the particular site
took off. But let's do a thought experiment. Let's say that there are two
Googles on the Web: Google and Giggle. They have the same algorithms and
basic techniques. But one of them uses a resource-centric view where
everything has a link. This means that news sites and blogs can link to
Google caches and Google searches. The other does not expose these
resources as URIs. This means that news sites and blogs must instead
describe the steps required to force the POST-based interface to get to
the right information. Which service will win?
Now do the same thought-experiment with eBay versus eBuy, Amazon versus
Scyntians, etc. Which sites win?
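The difference between the two Googles can be sketched in a few lines. In the resource-centric style, the entire search is captured in a single GET URI that anyone can link to, bookmark, or cache; in the POST-centric style, the query travels in a request body, so no URI names the result and a link can only point at the form page. (The hostnames and parameter names below are made up for illustration; no real endpoint is contacted.)

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Resource-centric style: the whole search is one addressable GET URI.
# News sites and blogs can link straight to this resource.
search_uri = "https://google.example/search?" + urlencode({"q": "REST"})

# POST-centric style: the query is buried in a request body. There is no
# URI for the result, so a blog can only describe the steps to reproduce it.
post_target = "https://giggle.example/search"
post_body = urlencode({"q": "REST"})

# The GET URI is self-describing: the query can be recovered from the
# link itself, which is what makes linking, caching, and sharing work.
query = parse_qs(urlparse(search_uri).query)["q"][0]
print(query)  # -> REST
```

The point of the sketch is that the linkable version carries its own state in the URI, which is exactly what lets third parties refer to it.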
> To the very limited extent that I think I understand the problem here or
> have an answer, I'm inclined to say that a URI encodes some sort of
> implicit or explicit contract between the implementer of the
> site/service that "owns" the URI and any potential users/consumers. A
> bare-bones best practice might be "GETing the base URI should return
> something useful to the intended audience" (a human-readable links page
> such as CNN.com, an RDF or RDDL file, a WSDL description of the services
> offered there, or whatever). Beyond that, I don't think we have much
> solid theory or practical experience for anything other than
> human-readable content.
Does RSS count as human-readable content? Even if it is routed and
filtered through a variety of automated processes before a human sees it?
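The RSS case can be made concrete: a feed is fetched by GET from a plain URI, machine-filtered along the way, and only then read by a human. A minimal sketch, using an inline RSS 2.0 snippet as a stand-in for a fetched feed (no network access, and the feed contents are invented):

```python
import xml.etree.ElementTree as ET

# A tiny RSS 2.0 document standing in for a feed retrieved by GET.
rss = """<rss version="2.0"><channel>
  <title>Example Feed</title>
  <item><title>REST and resources</title></item>
  <item><title>Unrelated post</title></item>
</channel></rss>"""

# An automated intermediary: keep only the items whose titles mention
# REST, then hand the survivors on for a human to read.
channel = ET.fromstring(rss).find("channel")
kept = [item.findtext("title")
        for item in channel.findall("item")
        if "REST" in item.findtext("title")]
print(kept)  # -> ['REST and resources']
```

The same XML serves both the filtering robot and the eventual human reader, which is what makes the "human readable?" question hard to answer cleanly.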