On Sat, 2002-02-16 at 15:45, Jonathan Borden wrote:
> Aside from voluminous discussion, I don't see the actual problems caused by
Perhaps you see a contented gossip circle where I see symptoms of
serious problems. Smoke doesn't always mean fire, but billowing smoke
that won't disappear often suggests fire - or at least some kind of smoldering trouble.
> No doubt there _are_ problems, particularly with the definition of URI
> references (e.g. URI + fragment identifier), but with URIs themselves: what
> actual problems exist?
* Lack of comparison rules
* Lack of consistent expectations regarding "appropriate use"
* General lack of consistency among the many different URI schemes
* Lack of common understandings or best practices regarding what "URI
processing" even means.
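Take the comparison problem first: these two URIs fetch the same page over HTTP, yet nothing settles whether software should treat them as the same URI. A sketch in Python (the normalize() rules here are one arbitrary choice I made up for illustration, not any blessed algorithm):

```python
from urllib.parse import urlsplit

a = "http://EXAMPLE.org:80/path"
b = "http://example.org/path"

# Byte-for-byte, they differ - so naive software says "different URIs".
print(a == b)  # False

def normalize(uri):
    """One hypothetical set of comparison rules: lowercase the scheme
    and host, drop the default HTTP port. Other software picks other
    rules, or none, which is exactly the problem."""
    parts = urlsplit(uri)
    host = parts.hostname or ""
    port = parts.port
    netloc = host if port in (None, 80) else f"{host}:{port}"
    return f"{parts.scheme.lower()}://{netloc}{parts.path}"

print(normalize(a) == normalize(b))  # True - under this one choice of rules
```

Whether a cache, a namespace-aware XML parser, and an RDF store all agree with that choice is anyone's guess.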
> Note that I don't accept widespread confusion to be a problem. I strongly
> agree that any areas of confusion need to be cleared up, but regarding
> problems, I mean some actual problem using HTTP etc.
The problems with HTTP are problems with HTTP. The problems with URI
are not necessarily the problems of the particular schemes - the sum is
greater than the parts.
> Clearly however there is widespread confusion and disagreement, perhaps
> caused by the _process_. For example IETF RFCs may contain contradictory
> statements, etc.
The process is part of it - the processing is what worries me most.
> I do think that "REST" is a good start at clearing up some of the
> misunderstandings. In my book, working code rules, and the Web, particularly
> HTML + HTTP represents a lot of working code. Apache has to be considered one
> of the great accomplishments of the Web (at least in my book), and for this
> simple reason I give Fielding et al. considerable leeway to explain the
> rules of how things _should_ work.
I think HTML+HTTP has done a lot of good, and that REST is an
improvement on the SOAP/UDDI/WSDL pileup. However, I don't find past
performance to be a guarantee of future results, nor do I find URIs to
have very much to do with the success of HTML+HTTP. (In fact, I find
them corrosive of the good that URLs have accomplished.)
> > I see little evidence that URIs - beyond URLs - have contributed much
> > good to the developing universe.
> Again, Fielding has contributed to Apache, and Apache _has_ inarguably
> contributed to the developing universe, at least the universe we are
> concerned with here. Whatever he wants to call them: URLs, URIs. Works for me.
Fielding is a brilliant guy, fine. Taking a successful abstraction -
URLs (which I believe originated with Tim Berners-Lee) and wrapping it
in a whole new set of philosophical mumblings - URIs - does not make
URIs successful on their own philosophical merits.
URLs struck an almost unique balance between human-comprehensibility and
machine usability. URIs disrupt that balance toward the machine, while
relying on processing which doesn't really exist. I can describe a
generic XML processor. I have no clue whatsoever what a generic URI
processor looks like. (A generic URL processor is much simpler.)
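The nearest thing I know of to "generic URI processing" is pure syntax: the regular expression in Appendix B of RFC 2396 carves any URI reference into components without understanding a single scheme. A sketch:

```python
import re

# The generic splitter from RFC 2396, Appendix B. All it can do is cut
# a URI reference into five syntactic pieces; what those pieces *mean*
# is left entirely to each individual scheme.
URI_RE = re.compile(r'^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')

def split_uri(uri):
    m = URI_RE.match(uri)
    return {
        "scheme":    m.group(2),
        "authority": m.group(4),
        "path":      m.group(5),
        "query":     m.group(7),
        "fragment":  m.group(9),
    }

print(split_uri("http://www.xmlhack.com/read.php?item=1234#top"))
```

Note that this tells you nothing about what "authority" or "path" signify for any given scheme - which is rather my point about how thin the generic layer really is.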
> > And I disagree to the extent that the *identifiers* contract varies from
> > the expectations already built into pretty much every piece of Web
> > software regarding the *locations* contract.
> > And resources? What? Entities at least have electronic substance.
> The simplest answer to why URI and not URL, is that the actual _document_
> might change (perhaps a new advert is inserted, or some errata are fixed)
> yet the we don't want to assign a new URL (e.g. the current URL has been
> bookmarked all over the place). The answer is that the _resource_ stays the
> same (and the URI identifies the _resource_), while the _entity_ changes.
> This really is quite sensible.
The buildings at given addresses also change on a regular basis. The
principle of location will get you to some kind of entity, however, not
just a generic notion of 'somethingness'.
If URIs are really about "somethingness", maybe we all need to sit back
and read Heidegger for a while. I'm sure that will enhance our
understanding. (I'm game, though I'll need to take up smoking again.
And no, I can't stand Wittgenstein.)
> Another example: you bookmark: http://example.org/MyStocks?company=INTC
> The resource represents the current stock price of Intel.
> The entity is a character string.
> The URI identifies the _resource_ not the _entity_ (e.g. the particular
> document returned on a GET). If it were otherwise we would be endlessly
> debating what happens when the document changes (e.g. new URL). The point is
> that I use URIs in practice every day, and so do you: e.g. what does
> http://www.xmlhack.com identify? A _particular_ document -- nope.
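For what it's worth, the distinction Jonathan is drawing can be put into a toy model (all the names here are made up for illustration): the resource is a fixed mapping, and each GET yields a possibly different entity while the URI never changes.

```python
import time

def intc_quote():
    """Stand-in for 'the current stock price of Intel' - a different
    character string (entity) on every retrieval."""
    return f"INTC quote as of {time.time()}"

# The *resource* is the stable mapping from URI to a source of entities.
resources = {"http://example.org/MyStocks?company=INTC": intc_quote}

def get(uri):
    # Same URI every time; the entity returned may differ per call.
    return resources[uri]()

first = get("http://example.org/MyStocks?company=INTC")
second = get("http://example.org/MyStocks?company=INTC")
# The URI "identified" the same resource both times, yet first and
# second are independent entities.
```

That much I can model in ten lines; it's the further metaphysics I balk at below.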
The resource/entity distinction is deeply exciting to people seeking
extra layers of abstraction - and I'd suggest that such people have
already forgotten that locators were already a layer of abstraction, as
are schemes. Layer after layer after layer - and the net result is mere confusion.
http://www.xmlhack.com "identifies" something in our conversation. It
is a key to retrieving a given set of bytes using the HTTP protocol from
the server identified by the DNS name "www.xmlhack.com" resolved to a
particular IP address at a given point in time. That's plenty
abstracted for most of us, and I don't think "blessing"
http://www.xmlhack.com as an identifier rather than a location achieves
anything additional - or useful.
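Spelled out as code, that retrieval is about this much machinery - a bare sketch with no error handling, using HTTP/1.0 for brevity:

```python
import socket

def request_line(host, path="/"):
    # The literal bytes an HTTP/1.0 GET sends. The URI's contribution
    # ends once host and path have been split out; no metaphysics needed.
    return f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode()

def fetch(host, path="/"):
    addr = socket.gethostbyname(host)        # DNS name -> IP, at this moment
    with socket.create_connection((addr, 80)) as s:
        s.sendall(request_line(host, path))
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks)                  # the returned bytes: an entity

# fetch("www.xmlhack.com")  # requires network access
```

A DNS lookup, a TCP connection, a request, some bytes back. That is the whole "identification" story as working code experiences it.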
> That is the thesis (literally). REST states that we are better off working
> with URIs, and directly with entities not with resources. The conclusions
> include an admonition against cookies (which I hate). Looks good to me. The
> recommendations seem quite sound. I like the architecture (HTTP scales
> _remarkably_ well, better than I would have predicted). So what is the problem?
See above. Unnecessarily complicating layers of abstraction qualify as
real problems to me. That they happen to have a cult following doesn't
make them any more palatable.
Ring around the content, a pocket full of brackets
Errors, errors, all fall down!