REST is RPC with primitive methods and strict adherence to
a global namespace. Further, it depends on sharing vocabularies
even if they are as small as a query name and a few arguments.
So do all of the alternatives. They vary by top-down vs bottom-up
evolution of the vocabulary.
Still, I don't think it's quite that easy. What do you want to support?
Browsing and exploration (tight coupling to navigation) or computing a
result? I think you can do either with REST with more work, and
the second with task-specific RPC with more coordination over time.
It comes down to building with generic methods. Both can
use a global namespace. (Are the critics of UDDI really
fussing about GUIDs?) At this time, I tend to favor REST
for sheer ease in the beginning of the design task
and ease at the beginning often makes for a sustainable
effort. But not at the cost of being told other protocols or
design architectures don't have a right to exist on the
Internet. That part is dumb. Impress over imprimatur.
Still, that simply means "the web" is not the Internet
and that those who want to defend "the web" are either
forging chains or building bulwarks. Caveat vendor.
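The generic-methods-versus-shared-vocabulary point can be sketched in a few lines. This is a minimal illustration, not any particular system's API, and the names (the URI, the procedure name, the argument) are all hypothetical:

```python
# Same task, two shared vocabularies. REST shares a generic method
# plus a global name; RPC shares a task-specific procedure and its
# argument names, which both parties must coordinate over time.

def rest_request(uri):
    # REST: one generic method (GET) applied to a global namespace.
    # The only vocabulary agreed in advance is the URI itself.
    return {"method": "GET", "uri": uri}

def rpc_request(procedure, **args):
    # RPC: the procedure name and argument names are a private,
    # task-specific vocabulary shared by caller and callee.
    return {"procedure": procedure, "args": args}

print(rest_request("http://example.com/orders/42"))
print(rpc_request("getOrderById", orderId=42))
```

Either form can name things globally; they differ in how much vocabulary the parties must agree on up front.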
As to whether we can represent all of the potential task
space of the Internet as hypertext, that is the most
interesting question. I'm not sure that, until there
is a widely and easily shared definition of distributed
hypermedia at the level of "nodes and edges" (that is,
precisely a network), one can answer it. I also think
that different tasks may need different representations,
but that the hypertext representation evolves nicely
with the fewest a priori constraints.
If HTTP went away, even if URIs went away, would we
still have a hypermedia system on the Internet?
Nelson: `by hypertext I mean non-sequential writing'.
Conklin: `The concept of hypertext is quite simple: windows on
the screen are associated with objects in a data base, and links are
provided between these objects, both graphically and in the data base.'
Note that no one is discussing the network but the author
goes on to say:
"What is common to all of these is the notion of nodes and links. Nodes and links are organized in a network structure (often referred to as "web"), where nodes resemble vertices and links edges of the network. Nodes are used to store "information chunks" (self-contained information units). Links model some kind of relationship between these units. By following links these relationships between information chunks can be explored. Support for tracing of links is essential to any hypertext system."
Berners-Lee and Fielding conflate the network of machines and connections with the
network of content nodes and links, through a global namespace. It is a conflation,
but one that simplifies implementation and scales.
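The "nodes and links" model the quote describes can be sketched as a tiny graph with link tracing. The node names and chunks here are purely illustrative, not drawn from any particular hypertext system:

```python
# Nodes store self-contained information chunks; links model
# relationships between them; tracing links explores the network.

nodes = {
    "a": "chunk A",
    "b": "chunk B",
    "c": "chunk C",
}
links = {            # outgoing links per node
    "a": ["b", "c"],
    "b": ["c"],
    "c": [],
}

def trace(start):
    """Follow links from a node, visiting each reachable chunk once."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(links[node])
    return seen

print(sorted(trace("a")))  # → ['a', 'b', 'c']
```

Nothing in this model mentions the network of machines; making the node names globally resolvable URIs is exactly the conflation noted above.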
The quotes above come from this article:
http://www.dbai.tuwien.ac.at/staff/herzog/thesis/dip.html
It is well worth reviewing once you have grokked what
Fielding is on about. It is old (1993) but fair and
relates design representations to the technologies applied.
len
-----Original Message-----
From: Simon St.Laurent [mailto:simonstl@simonstl.com]
Is it fair to describe REST as RPC done right?
I can't say I believe the various denials from the REST camp claiming
that REST is fundamentally different from RPC. REST has far fewer and
more generic methods than most RPC approaches encourage, and doesn't
appear as likely to create tightly (irrevocably?) coupled systems. On
the other hand, it clearly has methods and parameters, as well as a
request-response approach. URIs are not a magic wand for cleaning up
architectures in my experience either.
From my perspective (XMLChucker etc.), REST still looks like RPC. It
looks, however, like RPC done far more thoughtfully than usual.
Building REST applications requires some thought beyond slapping a
translator onto an API, and it looks like that thought process is
valuable.
Does that seem like a reasonable summary? I don't mind saying REST is
better RPC. I have a hard time saying that REST is not RPC.