Gavin Thomas Nicol wrote:
> The point is that your claims smack of self-serving revisionist
> history. Claims like "HTTP was designed to support..." and "the WWW
> was designed for..." followed by some grandiose claim like broadcast,
> asynchronicity, or "HTTP was designed in the REST paradigm.." simply
> aren't borne out by history.
Tim's original design documents say nothing about physics:
"1989, while working at the European Particle Physics Laboratory, I
proposed that a global hypertext space be created in which any
network-accessible information could be referred to by a single
"Universal Document Identifier"."
That's still the central idea of the Web and its most impressive
feature. So I stand by my claim that the Web that we have is exactly a
subset of what Tim Berners-Lee dreamed of back then.
As far as what HTTP was designed to support:
"The Hypertext Transfer Protocol (HTTP) is an application-level
protocol with the lightness and speed necessary for distributed,
collaborative, hypermedia information systems. It is a generic,
stateless, object-oriented protocol which can be used for many tasks,
such as name servers and distributed object management systems,
through extension of its request methods (commands)."
"HTTP is also used as a generic protocol for communication between
user agents and proxies/gateways to other Internet protocols, such as
SMTP, NNTP, FTP, Gopher, and WAIS, allowing
basic hypermedia access to resources available from diverse
applications and simplifying the implementation of user agents."
Name servers and distributed object management systems sound like
examples of exactly what people want from SOAP and web services.
According to Fielding, here is what the situation was in 1993:
"The deployed architecture had significant limitations in its support
for extensibility, shared caching, and intermediaries, which made it
difficult to develop ad-hoc solutions to the growing problems. At the
same time, commercial competition within the software market led to an
influx of new and occasionally contradictory feature proposals for the [...]"
"Working groups within the Internet Engineering Taskforce were formed to
work on the Web's three primary standards: URI, HTTP, and HTML. The
charter of these groups was to define the subset of existing
architectural communication that was commonly and consistently
implemented in the early Web architecture, identify problems within that
architecture, and then specify a set of standards to solve those problems."
"The early Web architecture was based on solid principles--separation of
concerns, simplicity, and generality--but lacked an architectural
description and rationale. The design was based on a set of informal
hypertext notes, two early papers oriented towards the user
community [12, 13], and archived discussions on the Web developer
community mailing list (email@example.com). In reality, however, the
only true description of the early Web architecture was found within the
implementations of libwww (the CERN protocol library for clients and
servers), Mosaic (the NCSA browser client), and an assortment of other
implementations that interoperated with them."
"That is, over the past six years I have been constructing models,
adding constraints to the architectural style, and testing their effect
on the Web's protocol standards via experimental extensions to client
and server software. Likewise, others have suggested the addition of
features to the architecture that were outside the scope of my
then-current model style, but not in conflict with it, which resulted in
going back and revising the architectural constraints to better reflect
the improved architecture. The goal has always been to maintain a
consistent and correct model of how I intend the Web architecture to
behave, so that it could be used to guide the protocol standards that
define appropriate behavior, rather than to create an artificial model
that would be limited to the constraints originally imagined when the [...]"
"The next chapter introduces and elaborates the Representational State
Transfer (REST) architectural style for distributed hypermedia systems,
as it has been developed to represent the model for how the modern Web
should work. REST provides a set of architectural constraints that, when
applied as a whole, emphasizes scalability of component interactions,
generality of interfaces, independent deployment of components, and
intermediary components to reduce interaction latency, enforce security,
and encapsulate legacy systems."
"Since 1994, the REST architectural style has been used to guide the
design and development of the architecture for the modern Web. This
chapter describes the experience and lessons learned from applying REST
while authoring the Internet standards for the Hypertext Transfer
Protocol (HTTP) and Uniform Resource Identifiers (URI), the two
specifications that define the generic interface used by all component
interactions on the Web, as well as from the deployment of these
technologies in the form of the libwww-perl client library, the Apache
HTTP Server Project, and other implementations of the protocol."
> ... (and I have been involved directly, or
> peripherally since Tim BL sent out his first email announcement).
Well, either you weren't watching closely enough or Fielding is engaging
in revisionism. When I read the HTTP and URI specifications, I see
evidence of design and architecture exactly as he describes, so I am
inclined to believe him when he claims that it isn't there by accident.
> > Can XML be used for purchase orders? Yes.
> Yes, but it does *not* support them itself. Nowhere in the XML
> specification does it define "XML for purchase orders". Purchase
> orders and XML are logically distinct.
So you're saying that it is okay to say: "XML can be used for purchase
orders" but incorrect to say "XML supports purchase orders." I consider
those two statements to be logically equivalent. In either case, you
would need to clarify them for a naive reader but knowledgeable
professionals will understand that purchase orders (or asynch, or
peer-to-peer) are an *application* of the technology.
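To make the point concrete: a purchase order is an *application* of XML, defined entirely outside the XML spec by whoever chose the vocabulary. A minimal sketch using Python's standard library (the element names "order", "item", "sku", and "qty" are invented for illustration; no XML specification defines them):

```python
import xml.etree.ElementTree as ET

# Build a purchase order using a made-up vocabulary.  XML itself only
# supplies the syntax; the purchase-order semantics live in the application.
order = ET.Element("order", number="1001")
item = ET.SubElement(order, "item", sku="ABC-123")
ET.SubElement(item, "qty").text = "2"

doc = ET.tostring(order, encoding="unicode")

# Any XML parser can read the document back; only the purchase-order
# application knows what "qty" means.
parsed = ET.fromstring(doc)
print(parsed.find("item/qty").text)  # -> 2
```

Saying "XML supports purchase orders" in this sense is exactly as true (and as loose) as saying "XML can be used for purchase orders."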
> Precisely. Nowhere in HTTP does it say that it can be used
> asynchronously (though like GET bodies, it doesn't forbid them).
Funny. Neither does the SMTP spec. In what sense is SMTP asynchronous?
You connect to a server. You set up a socket. You talk back and forth on
it. Just like HTTP. In fact, SMTP has more synchronization points in a
connection than does HTTP.
It is only in the context of an application that *uses* SMTP
asynchronously that it is asynchronous. Exactly like HTTP. Asynchronicity
is intrinsically a property of the application, not the protocol.
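The same point can be made in code: every HTTP exchange is synchronous, but an application can layer asynchronous request/response on top of it, for example with the familiar accept-then-poll pattern (a 202 response pointing at a status resource). A minimal self-contained sketch using Python's standard library; the /jobs URL scheme and the toy server are invented for illustration:

```python
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Accept the work for later processing: respond *before* it is done.
        self.send_response(202)                  # 202 Accepted
        self.send_header("Location", "/jobs/1")  # where to poll for results
        self.send_header("Content-Length", "0")
        self.end_headers()

    def do_GET(self):
        body = b"done"                           # pretend the job finished
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("POST", "/jobs", body=b"<order/>")
resp = conn.getresponse()
job_url = resp.getheader("Location")
resp.read()
print(resp.status, job_url)        # 202 /jobs/1 -- answer not ready yet

# ...later, a *separate* synchronous exchange fetches the result:
conn.request("GET", job_url)
resp2 = conn.getresponse()
print(resp2.status, resp2.read())  # 200 b'done'
server.shutdown()
```

Each HTTP exchange here is plain synchronous request/response; the asynchronicity exists only in the application's decision to decouple submitting the work from collecting the result.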
> Nowhere in the HTTP spec does it claim to be a replacement for SMTP,
> or SNMP, or RPC. Does it support those *applications*? Yes. Does it
> support them in and of itself? No.
When did anyone say that HTTP billed itself as a replacement for SMTP,
SNMP or RPC? All that's been said is that it was designed to be generic
which means it could be used as a replacement for those things. More
importantly, it can be used for new applications that are asynchronous
or peer-to-peer or broadcast or whatever.
> To me, there is HTTP 0.9, and a number of revisions.... HTTP 1.1, to
> me, is a vastly more complicated, and slightly improved version of
> HTTP 0.9.
HTTP 1.1 a slight improvement over HTTP 0.9? Are you kidding? In what
sense are seven new methods, persistent connections, a completely new
data model, and support for intermediaries like caches a "slight
improvement"?
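Persistent connections alone show the gap. An HTTP/0.9 request was a single line (`GET /page`), answered by raw bytes on a connection that then closed; HTTP/1.1 adds status lines, headers, self-describing message framing, and lets one TCP connection carry many exchanges. A sketch against a throwaway local server using Python's standard library (the server and its "hello" body are invented for illustration):

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"    # enables persistent connections

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        # Content-Length framing is what lets the connection stay open:
        # the client knows exactly where one response ends.
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):    # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Two full exchanges over a single TCP connection -- impossible in 0.9.
sock = socket.create_connection(server.server_address)
request = b"GET / HTTP/1.1\r\nHost: example\r\nConnection: keep-alive\r\n\r\n"
sock.sendall(request)                # first exchange
sock.sendall(request)                # second exchange, same connection
sock.settimeout(2)
data = b""
try:
    while data.count(b"hello") < 2:
        data += sock.recv(4096)
except socket.timeout:
    pass
print(data.count(b"HTTP/1.1 200"))  # -> 2 responses over one connection
sock.close()
server.shutdown()
```

In HTTP/0.9 the second request would have required a second TCP handshake, and there would have been no status line, no headers, and no way to express caching or intermediaries at all.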
Roy Fielding has documented how and why HTTP 1.1 is radically different
from HTTP 0.9. He and Tim B-L have described the design of the
architecture of the Web that you claim was not designed. If you are
interested in the facts of the matter, I'll suggest a few references:
I doubt that I can contribute anything more than those references.