On Mon, 2002-02-11 at 12:40, Paul Prescod wrote:
> REST doesn't adore HTTP. It's the other way around. HTTP 1.1 was
> designed as a protocol for REST. ;)
HTTP 1.1 was built as a protocol for HyperText Transfer, to the best of
my recollection. It's hardly a monument to architectural simplicity,
though it does an excellent job of transferring and storing information
built around the notions in HTML.
> If you do that, you make it extremely difficult to build intermediaries
> like:
>
> * store-and-forward services
> * caches
> * firewalls
> * proxies
> * message routers
> * privacy managing intermediaries
I don't believe that to be a true statement. You can't simply reuse
(or, as some have noted, abuse) existing implementations of those
systems, but it hardly rules them out.
It does mean a shift from working with metadata to working with data,
which may be substantial, but it's hardly the end of networking as we
know it.
> > Either approach makes it possible to, for example, recreate a given
> > state by feeding in the data that led to that state, without having to
> > retain additional metadata about what the headers were, what the
> > response looked like, etc.
>
> That's the point of REST. You shouldn't have to re-create states. States
> should have URIs. You should just point to the URI of the state.
And if that state represents the results of a few thousand individual
messages, how exactly do I recreate it in the event of a failure?
URIs make nice destinations. That's about as much as they're good for
without metaphysical metadata hocus-pocus. States are things as they
are, not things as they are labeled.
> > ... Archiving a set of transactions seems a lot
> > easier in this case, and I suspect the processing is actually less
> > complex.
>
> Don't know what you mean by that.
It means that the messages themselves are all you need to keep should you
need an archive for backup or regulatory purposes. From a legal standpoint,
it's nice to know how the contents of your data store came to be, not
just what they are today.
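
Something like this rough sketch is what I have in mind: keep the raw XML
messages, and rebuild the state by replaying them. (Python, and the
apply_message() logic and "messages.log" file are purely illustrative,
not part of any real system.)

    # Illustration only: rebuild state by replaying an archived log of
    # XML messages, one message per line. apply_message() stands in for
    # whatever the application actually does with each message.
    import xml.dom.minidom

    def apply_message(state, doc):
        # Toy example: tally messages by their root element name.
        root = doc.documentElement.tagName
        state[root] = state.get(root, 0) + 1
        return state

    def rebuild_state(archive_path):
        state = {}
        for line in open(archive_path):
            if not line.strip():
                continue
            doc = xml.dom.minidom.parseString(line)
            state = apply_message(state, doc)
        return state

    print(rebuild_state("messages.log"))

The point is that the archive of messages is sufficient on its own; no
stored headers or response metadata are needed to get back to the state.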
> XML is just a syntax. Surely the interesting part is in what problem you
> are trying to solve and the data model you build up around that problem.
> Then it becomes useful to ask which parts should go in the XML and which
> in MIME.
Sure, XML is just a syntax. That's what makes it so flexible and usable for
this kind of application. I'm perfectly happy to work with MIME-based
systems because they're what we have right now. I'm not willing to
concede, however, that MIME approaches are a good idea to keep supporting
going forward.
> > I'd love to see an "XMLchucker" protocol that just opens a port, sends
> > the info, and maybe replies with a checksum or an error. No more.
>
> It'll take about ten minutes to write the RFC for that. But your
> intermediaries will have no idea what is going on and won't be able to
> help you.
They won't today, but they may someday. I don't believe this idea is
inherently any worse than that of reusing HTTP in these contexts.
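
For what it's worth, the whole thing could be sketched in a few lines of
Python. The port number, the end-of-message signal, and the MD5 reply here
are placeholders for illustration, not a proposal for the wire format:

    # Sketch of the "XMLchucker" idea: open a socket, send the document,
    # read back a checksum or an error string, and nothing more.
    import hashlib
    import socket

    def chuck(host, xml_bytes, port=10101):
        sock = socket.create_connection((host, port))
        try:
            sock.sendall(xml_bytes)
            sock.shutdown(socket.SHUT_WR)   # done sending; wait for reply
            reply = sock.recv(1024).decode("ascii").strip()
        finally:
            sock.close()
        if reply != hashlib.md5(xml_bytes).hexdigest():
            raise RuntimeError("server reported: " + reply)
        return reply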
--
Simon St.Laurent
Ring around the content, a pocket full of brackets
Errors, errors, all fall down!
http://simonstl.com