"Simon St.Laurent" wrote:
> HTTP 1.1 was built as a protocol for HyperText Transfer, to the best of
> my recollection.
It was built as a REST platform for HyperText Transfer. Or so says one
of its primary creators. (this bit of the conversation feels like deja
vu all over again)
HTTP "is a generic, stateless, protocol which can be used for many tasks
beyond its use for hypertext, such as name servers and distributed
object management systems, through extension of its request methods,
error codes and headers. A feature of HTTP is the typing and
negotiation of data representation, allowing systems to be built
independently of the data being transferred."
> ... It's hardly a monument to architectural simplicity,
> though it does an excellent job of transferring and storing information
> built around the notions in HTML.
If you can find one HTML-ism in the HTTP/1.1 specification I'll buy you
dinner when next I see you. HTML has some HTTP-isms in it, not vice
versa.
> > If you do that, you make it extremely difficult to build intermediaries
> > like:
> > * store-and-forward services
> > * caches
> > * firewalls
> > * proxies
> > * message routers
> > * privacy managing intermediaries
> I don't believe that to be a true statement. You can't simply reuse
> (or, as some have noted, abuse) existing implementations of those
> systems, but it hardly rules them out.
> It does mean a shift from working with metadata to working with data,
> which may be substantial, but it's hardly the end of networking as we
> know it.
Imagine if there are a thousand different XML vocabularies floating
around (conservative estimate!). Now you are AOL, trying to implement
the One True Caching Proxy for all AOL customers. How would you do it
based on nothing more than a protocol of "Open a socket. Send some
XML"? You couldn't. You'd probably need to configure the proxy with
knowledge of every XML vocabulary. It would be a full-time job just
keeping up with the trendy ones, never mind the less popular ones.
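The contrast can be sketched in a few lines. A generic HTTP cache never has to understand the body at all; it reads standard response headers whose meaning is the same for every payload format. (This is a minimal sketch; the function name and the header subset it checks are illustrative, not any real proxy's logic.)

```python
# Sketch: why a generic cache can work on HTTP metadata alone.
# The proxy never inspects the body; standard headers such as
# Cache-Control mean the same thing no matter which of a thousand
# XML vocabularies the body happens to use.

def is_cacheable(headers: dict) -> bool:
    """Decide cacheability from standard HTTP response headers only."""
    cc = headers.get("Cache-Control", "").lower()
    if "no-store" in cc or "private" in cc:
        return False
    # Anything with an explicit freshness lifetime can be cached.
    return "max-age" in cc or "Expires" in headers

# Zero knowledge of the body's vocabulary is needed:
assert is_cacheable({"Cache-Control": "public, max-age=3600"})
assert not is_cacheable({"Cache-Control": "no-store"})
```

With "Open a socket. Send some XML." there is no equivalent of Cache-Control: every cacheability decision requires vocabulary-specific knowledge of the body.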
> > > Either approach makes it possible to, for example, recreate a given
> > > state by feeding in the data that led to that state, without having to
> > > retain additional metadata about what the headers were, what the
> > > response looked like, etc.
> > That's the point of REST. You shouldn't have to re-create states. States
> > should have URIs. You should just point to the URI of the state.
> And if that state represents the results of a few thousand individual
> messages, how exactly do I recreate it in the event of a failure?
Every message should result in a new URI. The URI represents the current
state of the transaction. You point to the last URI you got.
If you've lost that URI somehow then you start from scratch, just as in
a pure XML protocol. Maybe "starting from scratch" is 10% harder because
now you must keep track of METHOD, URI, headers and body, rather than
just body. But that's not tricky:
<httplog method="..." uri="...">...</httplog>
The real win is if you can AVOID resending the messages at all, which
you can if you give each new state a URI.
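The "each message yields a new state URI" idea can be sketched with an in-memory stand-in for the server. Everything here (the URI pattern, the function name, the list-of-messages state) is hypothetical scaffolding for the shape of the interaction, not a real protocol:

```python
# Sketch: every message produces an immutable state snapshot with its
# own URI. Recovery after a failure is not "replay a few thousand
# messages" -- it is dereferencing the last URI you recorded.

states = {}   # uri -> transaction state (immutable snapshots)
counter = 0

def post_message(prev_uri, message):
    """Apply a message to the state at prev_uri; return the new state's URI."""
    global counter
    counter += 1
    new_uri = f"/transactions/42/states/{counter}"   # hypothetical URI scheme
    prev = states.get(prev_uri, [])
    states[new_uri] = prev + [message]   # new snapshot; old one untouched
    return new_uri

u1 = post_message(None, "item: widget")
u2 = post_message(u1, "qty: 3")
# After a crash, just point at the last URI you saved:
assert states[u2] == ["item: widget", "qty: 3"]
```

The key property is that old snapshots are never mutated, so any URI you have recorded remains a valid pointer to that state.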
> URIs make nice destinations. That's about as much as they're good for
> without metaphysical metadata hocus-pocus. States are things as they
> are, not things as they are labeled.
That sounds like metaphysical metadata hocus pocus to me! Labels are
just labels. If you give states labels then they have labels and you can
"get back" to them. When I save an XML document from XMetaL to disk, I
give it a label so I can get back that state!
> > > ... Archiving a set of transactions seems a lot
> > > easier in this case, and I suspect the processing is actually less
> > > complex.
> > Don't know what you mean by that.
> It means that the messages are all you need to keep should you need an
> archive for backup or regulatory purposes. From a legal standpoint,
> it's nice to know how the contents of your data store came to be, not
> just how they are today.
Fine. HTTP messages are easy to store. As discussed above, it takes about
five minutes to define an XML vocabulary if you feel that it is
important to store them in XML rather than raw text.
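Here is roughly what those five minutes buy you, using the <httplog> element sketched above. The element and attribute names are hypothetical, not any standard archival format:

```python
# Sketch: a five-minute "httplog" vocabulary for archiving HTTP
# requests as XML instead of raw text. Element/attribute names are
# made up for illustration.
import xml.etree.ElementTree as ET

def archive_request(method, uri, headers, body):
    """Serialize one HTTP request as an <httplog> XML element."""
    root = ET.Element("httplog", method=method, uri=uri)
    for name, value in headers.items():
        ET.SubElement(root, "header", name=name).text = value
    ET.SubElement(root, "body").text = body
    return ET.tostring(root, encoding="unicode")

doc = archive_request("POST", "/orders", {"Content-Type": "text/xml"},
                      "<order/>")
assert doc.startswith('<httplog method="POST" uri="/orders">')
```

Because METHOD, URI, headers, and body are all the structure an HTTP message has, this tiny vocabulary captures everything a regulator or auditor would need to replay how the data store came to be.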
> > XML is just a syntax. Surely the interesting part is in what problem you
> > are trying to solve and the data model you build up around that problem.
> > Then it becomes useful to ask which parts should go in the XML and which
> > in MIME.
> Sure, XML is just a syntax. That's what makes it so flexibly usable for
> this kind of application. I'm perfectly happy to work with MIME-based
> systems because they're what we have right now. I'm not willing to
> concede that MIME approaches are a good idea to continue supporting
> moving forward, however.
MIME is not going to go away until XML has a better approach to binary
data.
> > > I'd love to see an "XMLchucker" protocol that just opens a port, sends
> > > the info, and maybe replies with a checksum or an error. No more.
> > It'll take about ten minutes to write the RFC for that. But your
> > intermediaries will have no idea what is going on and won't be able to
> > help you.
> They won't today, but they may someday. I don't believe this idea is
> inherently any worse than that of reusing HTTP in these contexts.
Well, I disagree. HTTP was designed as a generic resource manipulation
protocol. XML was designed as a data representation. HTTP has a ton of
features that make it a good protocol. XML has a ton of features
designed to make it a good data representation. Reinventing HTTP in XML
syntax would be reasonably sane, were it not for the "binary problem."
But anyway, your idea will get its day in the sun. If we treat SOAP
Part 1 (not Part 2) as a truly "transport independent protocol" then it
really isn't much more than opening a pipe and spewing XML. We'll see.