"Simon St.Laurent" wrote:
> Gavin's already questioned your history, so I'll let it go. I see HTTP
> 1.1 as the natural extension of an IETF notion of creating protocols by
> slapping extra headers onto information. HTTP was wise to reuse the
> same header infrastructure that had worked for prior protocols, but that
> doesn't make HTTP a brilliant fundamental architecture.
Until you make a technical argument you're just making assertions. "XML
was wise to reuse the angle bracket infrastructure that had worked for
HTML, but that doesn't make XML a brilliant fundamental architecture." I
can play that trick on any technology.
> It's not an HTML-ism in the sense that it uses HTML syntax, but keeping
> an open connection to support images transferring along with the HTML
> documents certainly feels to me like support for the "HTML way".
Supporting attachments is critically important to any Internet-scale
messaging architecture. HTML happens to benefit from it. So would SOAP
or ebXML or Jabber.
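As a concrete illustration of what "attachments" means at the messaging layer, here is a minimal Python sketch of a MIME multipart message that carries an XML document alongside an opaque binary part. The subject line, XML payload, and PNG bytes are invented for illustration; only the standard library is used.

```python
# A MIME message carrying an XML part plus a binary attachment --
# the same mechanism works whether the XML is HTML, SOAP, or ebXML.
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "order"
msg.set_content('<order id="42"/>', subtype="xml")   # the XML part
# Attach an opaque binary part alongside the XML, MIME-style.
msg.add_attachment(b"\x89PNG...", maintype="image", subtype="png",
                   filename="chart.png")

print(msg.get_content_type())  # multipart/mixed
```

Adding the attachment is what promotes the message to `multipart/mixed`; the XML part is untouched, which is the point: the envelope, not the vocabulary, handles binary data.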
> > Imagine if there are a thousand different XML vocabularies floating
> > around (conservative estimate!). Now you are AOL,
> I'm not AOL, and I've never been interested in AOL's problems. Nor am I
> interested in "scalability" or "enterprise systems" as such things are
> commonly construed...
That's fine. If you don't care about scale then your options become much
broader and REST doesn't have much to offer. I typically encourage
people with simpler problems to use XML-RPC.
> > trying to implement
> > the One True Caching Proxy for all AOL customers.
> Why on earth would I build "the One True Caching Proxy for all
> customers"? Have I been watching Highlander too many times?
No, you need one caching proxy because all of the information on your
network comes through a very few interconnects with other systems and by
compressing the information at that one point you can save yourself
hundreds of millions of dollars and pass those savings on to either your
customers or your shareholders. Plus, you can deliver data to your
end-users more quickly.
> "There can be only one." Am I completely hung up on centralizing everything and
> running it through the same blender? Have I forgotten about the
> prospect of distributing systems and permitting local control over
> processing logic?
Local control is fine. Aggregating caches exist precisely to allow
end-users to get more efficient access to their data and drive network
costs down. Nobody loses control. Everybody benefits.
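To make the economics concrete, here is a toy sketch of a shared cache at the interconnect. A thousand clients request the same resource; within the freshness window the origin is contacted once. The function names, the URI, and the 60-second `max_age` are all invented for illustration.

```python
# Toy model of a shared caching proxy honoring a max-age freshness
# window: 1000 client requests, one trip across the interconnect.
import time

origin_hits = 0

def origin_fetch(uri):
    """Simulate the expensive hop across the interconnect."""
    global origin_hits
    origin_hits += 1
    return {"body": f"<data for {uri}/>", "max_age": 60, "at": time.time()}

cache = {}

def proxy_get(uri):
    entry = cache.get(uri)
    if entry and time.time() - entry["at"] < entry["max_age"]:
        return entry["body"]              # served from the shared cache
    cache[uri] = entry = origin_fetch(uri)
    return entry["body"]

for _ in range(1000):                     # a thousand customers
    proxy_get("http://example.com/news")

print(origin_hits)  # 1
```

Real HTTP caches key on the URI and the `Cache-Control` response headers the same way; the proxy needs no knowledge of the vocabulary inside the body.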
> > You'd probably need to configure the proxy with
> > knowledge of every XML vocabulary. It would be a fulltime job just
> > trying to keep up with the trendy ones, ignoring the less popular ones.
> You seem to have visions of a completely different set of problems than
> the ones which interest me. If you want to go build immense corporate
> portals, go to it. Have a nice time. If you need me to build a bridge
> between my system and your expectations, just drop me a line - I'll be
> happy to talk.
You're right. I don't want to be in the point-to-point bridge building
business. I'm happy to admit that that isn't where HTTP excels. XML-RPC
is wonderful for that.
> > Every message should result in a new URI. The URI represents the current
> > state of the transaction. You point to the last URI you got.
> That's sort of vaguely usable, though I don't think I'd want to
> implement anything deeply recursive on that. For hypertext navigation,
> I guess it'll do.
This is more or less the model used by functional programming languages
that have nothing *except* recursion. It is known in that world as a
"continuation." It's also a proven strategy on the Web as we use it today.
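A small Python sketch of the continuation idea: each step returns the accumulated state plus the next step, and the caller only ever holds the latest handle, exactly like holding the latest URI of a transaction. The cart/checkout names are invented for illustration.

```python
# Continuation-passing in miniature: each call returns a fresh state
# plus functions for taking the next step; nothing is mutated.
def start(cart):
    def add_item(item):
        return start(cart + [item])       # new state, new continuation
    def checkout():
        return f"ordered: {cart}"
    return {"cart": cart, "add": add_item, "checkout": checkout}

step1 = start([])
step2 = step1["add"]("book")
step3 = step2["add"]("pen")
print(step3["checkout"]())   # ordered: ['book', 'pen']
```

Note that `step2` is still valid after `step3` exists, just as an earlier URI still names an earlier state of the transaction.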
> Sure. Inclusion by reference to the current state of the conversation
> is normal. Humans do it all the time. That doesn't mean I want to
> send a pile of URIs every time we converse, though.
You don't need to send a pile of URIs. You send one. It refers to the
last state of our transaction. Then you add some information. That "last
state" can contain URIs to all previous ones, or embed them.
> Sure. And if someone else comes along and changes the state out from
> under your label, how much good is your label?
That's why you make some URIs static rather than dynamic. If you use
Expedia, you do this all of the time. You can go through several steps
of a transaction and get a URI. Then you email that URI to your wife and
let her go a couple of steps further. Then she emails it back to you and
you finish it. Only your wife and you have the URI. Only you have the
password. Nobody else can overwrite or otherwise interfere with your transaction.
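The Expedia pattern can be sketched in a few lines: every step of the transaction mints a new, immutable URI, and old URIs keep naming old states, so two people can hand a URI back and forth without anyone overwriting anything. The `/booking/` path and step names here are invented for illustration.

```python
# Each POST mints a fresh URI; the state behind an old URI never
# changes, so sharing a URI by email is safe.
import itertools

states = {}                       # URI -> frozen transaction state
counter = itertools.count(1)

def post_step(prev_uri, data):
    prev = states.get(prev_uri, ())
    uri = f"/booking/{next(counter)}"
    states[uri] = prev + (data,)  # new state embeds all previous steps
    return uri

u1 = post_step(None, "pick flight")    # you take the first step
u2 = post_step(u1, "pick hotel")       # your wife, from the emailed URI
print(states[u1])  # ('pick flight',)  -- untouched by the later step
print(states[u2])  # ('pick flight', 'pick hotel')
```

Because each state embeds (or could instead reference by URI) all the previous ones, you only ever need to send the single latest URI.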
> > Fine. HTTP messages are easy to store. As discussed above it takes about
> > five minutes to define an XML vocabulary if you feel that it is
> > important to store them in XML rather than raw text.
> I can do that, sure. I can't see the value of having the information in
> the headers rather than in the document to start with, though. I don't
> mind doing the extra work to support legacy systems, but I also don't
> mind saying that it's time to at least think about putting a fork in the
> HTTP way of communicating information.
Sure, that's the SOAP envelope model. Move the headers down into the
document. If you don't care about working with today's Web then that's
fine. You'll end up reinventing a ton of stuff, but I guess that's kind of inevitable.
> > MIME is not going to go away until XML has a better approach to binary
> > data.
> That's not my problem. Nor is it especially difficult to send binary
> info on another channel.
Now your protocol is getting more complicated, negotiating these other channels.
> > Well, I disagree. HTTP was designed as a generic resource manipulation
> > protocol. XML was designed as a data representation. HTTP has a ton of
> > features that make it a good protocol.
> I guess you've never been through the pain of writing an entire book on
> Cookies. HTTP may be good enough for a lot of things, but calling it
> wonderful is a hard sell.
Cookies are not part of HTTP. They are actually a pretty poor idea. And
of course they are irrelevant to web services and distributed computing.