On Mon, 2002-02-11 at 13:44, Paul Prescod wrote:
> > HTTP 1.1 was built as a protocol for HyperText Transfer, to the best of
> > my recollection.
>
> It was built as a REST platform for HyperText Transfer. Or so says one
> of its primary creators. (this bit of the conversation feels like deja
> vu all over again)
Gavin's already questioned your history, so I'll let it go. I see HTTP
1.1 as the natural extension of an IETF notion of creating protocols by
slapping extra headers onto information. HTTP was wise to reuse the
same header infrastructure that had worked for prior protocols, but that
doesn't make HTTP a brilliant fundamental architecture.
While REST may have animated some of its creators, a lot of us see HTTP
1.1 as a rebuilding of 1.0 which was more or less an extension of 0.9.
I don't think there's anything architecturally glorious there. Deeply
useful for hypertext, sure.
> HTTP "is a generic, stateless, protocol which can be used for many tasks
> beyond its use for hypertext, such as name servers and distributed
> object management systems, through extension of its request methods,
> error codes and headers [47]. A feature of HTTP is the typing and
> negotiation of data representation, allowing systems to be built
> independently of the data being transferred."
>
> * http://www.w3.org/Protocols/rfc2616/rfc2616.html
Yeah. I've seen that sentence. I've thought it to be hubris for a long
while now, once the excitement of XML-RPC wore off.
> > ... It's hardly a monument to architectural simplicity,
> > though it does an excellent job of transferring and storing information
> > built around the notions in HTML.
>
> If you can find one HTML-ism in the HTTP/1.1 specification I'll buy you
> dinner when next I see you. HTML has some HTTP-isms in it, not vice
> versa.
It's not an HTML-ism in the sense that it uses HTML syntax, but keeping
an open connection to support images transferring along with the HTML
documents certainly feels to me like support for the "HTML way". That
was most of what interested end-users in HTTP 1.1, certainly.
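That open-connection behavior is easy to see from the client side. A quick sketch with Python's standard library, fetching several resources over one socket the way a browser pulls a page and its images; the throwaway local server and the repeated "/" paths are just scaffolding for the illustration, not anything from the discussion above:

```python
import http.client
import http.server
import threading

class Handler(http.server.SimpleHTTPRequestHandler):
    # HTTP/1.1 makes persistent connections the default, so several
    # requests can share one TCP socket instead of one socket each.
    protocol_version = "HTTP/1.1"

# A throwaway local server so the sketch is self-contained.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
statuses = []
for path in ("/", "/", "/"):          # stand-ins for a page plus its images
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()                       # drain the body before reusing the socket
    statuses.append(resp.status)

conn.close()
server.shutdown()
print(statuses)
```

All three responses come back over the same connection; under HTTP/1.0 each request would have cost a fresh TCP handshake.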
> > It does mean a shift from working with metadata to working with data,
> > which may be substantial, but it's hardly the end of networking as we
> > know it.
>
> Imagine if there are a thousand different XML vocabularies floating
> around (conservative estimate!). Now you are AOL,
I'm not AOL, and I've never been interested in AOL's problems. Nor am I
interested in "scalability" or "enterprise systems" as such things are
commonly construed...
> trying to implement
> the One True Caching Proxy for all AOL customers.
Why on earth would I build "the One True Caching Proxy for all
customers"? Have I been watching Highlander too many times? "There can
be only one." Am I completely hung up on centralizing everything and
running it through the same blender? Have I forgotten about the
prospect of distributing systems and permitting local control over
processing logic?
I guess one of those must be the answer.
> How would you do it
> based upon nothing more than your protocol of "Open a socket. Send some
> XML." You couldn't.
In fact, I shouldn't. If that was my job I'd be looking for the exit
door.
> You'd probably need to configure the proxy with
> knowledge of every XML vocabulary. It would be a fulltime job just
> trying to keep up with the trendy ones, ignoring the less popular ones.
You seem to have visions of a completely different set of problems than
the ones which interest me. If you want to go build immense corporate
portals, go to it. Have a nice time. If you need me to build a bridge
between my system and your expectations, just drop me a line - I'll be
happy to talk.
> Every message should result in a new URI. The URI represents the current
> state of the transaction. You point to the last URI you got.
That's sort of vaguely usable, though I don't think I'd want to
implement anything deeply recursive on that. For hypertext navigation,
I guess it'll do.
> If you've lost that URI somehow then you start from scratch, just as in
> a pure XML protocol. Maybe "starting from scratch" is 10% harder because
> now you must keep track of METHOD, URI, headers and body, rather than
> just body. But that's not tricky:
>
> <httplog method="..." uri="...">
> <headers>
> ...
> </headers>
> <body>
> ....
> </body>
> </httplog>
>
> The real win is if you can AVOID resending the messages at all, which
> you can if you give each new state a URI.
Sure. Inclusion by reference to the current state of the conversation
is normal. Humans do it all the time. That doesn't mean I want to
send a pile of URIs every time we converse, though.
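For what it's worth, the quoted <httplog> shape really is a five-minute job to produce. A sketch in Python; note the <header> child elements are invented here, since the quoted example elides the contents of <headers> with "...":

```python
import xml.etree.ElementTree as ET

def httplog(method, uri, headers, body):
    """Serialize one HTTP exchange into the <httplog> shape quoted above.

    The element names follow the quoted example; this is not any
    standard vocabulary, just an illustration of how little work it is.
    """
    root = ET.Element("httplog", method=method, uri=uri)
    hdrs = ET.SubElement(root, "headers")
    for name, value in headers.items():
        # Hypothetical <header> element; the original elides this detail.
        ET.SubElement(hdrs, "header", name=name).text = value
    ET.SubElement(root, "body").text = body   # body is escaped as text
    return ET.tostring(root, encoding="unicode")

print(httplog("POST", "http://example.com/endpoint",
              {"Content-Type": "application/xml"},
              "<doc>payload</doc>"))
```

Tracking METHOD, URI, headers, and body instead of just the body is a bookkeeping change, not an architectural one.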
> > URIs make nice destinations. That's about as much as they're good for
> > without metaphysical metadata hocus-pocus. States are things as they
> > are, not things as they are labeled.
>
> That sounds like metaphysical metadata hocus pocus to me! Labels are
> just labels. If you give states labels then they have labels and you can
> "get back" to them. When I save an XML document from XMetaL to disk, I
> give it a label so I can get back that state!
Sure. And if someone else comes along and changes the state out from
under your label, how much good is your label?
> Fine. HTTP messages are easy to store. As discussed above it takes about
> five minutes to define an XML vocabulary if you feel that it is
> important to store them in XML rather than raw text.
I can do that, sure. I can't see the value of having the information in
the headers rather than in the document to start with, though. I don't
mind doing the extra work to support legacy systems, but I also don't
mind saying that it's time to at least think about putting a fork in the
HTTP way of communicating information.
> > Sure, XML is just a syntax. That's what makes it so flexibly usable for
> > this kind of application. I'm perfectly happy to work with MIME-based
> > systems because they're what we have right now. I'm not willing to
> > concede that MIME approaches are a good idea to continue supporting
> > moving forward, however.
>
> MIME is not going to go away until XML has a better approach to binary
> data.
That's not my problem. Nor is it especially difficult to send binary
info on another channel.
> > They won't today, but they may someday. I don't believe this idea is
> > inherently any worse than that of reusing HTTP in these contexts.
>
> Well, I disagree. HTTP was designed as a generic resource manipulation
> protocol. XML was designed as a data representation. HTTP has a ton of
> features that make it a good protocol.
I guess you've never been through the pain of writing an entire book on
Cookies. HTTP may be good enough for a lot of things, but calling it
wonderful is a hard sell.
> XML has a ton of features
> designed to make it a good data representation. Reinventing HTTP in XML
> syntax would be reasonably sane, were it not for the "binary problem."
> But anyways, your idea will get its day in the sun. If we treat SOAP
> Part 1 (not Part 2) as a truly "transport independent protocol" then it
> really isn't much more than opening a pipe and spewing XML. We'll see
> what happens.
Ah, SOAP. Yet another crap envelope idea. Oh well.
--
Simon St.Laurent
Ring around the content, a pocket full of brackets
Errors, errors, all fall down!
http://simonstl.com