Alaric Snell wrote:
>...
>
> But you can never escape the old problem of the two armies; if you send a
> single request packet, never hear anything back, and retransmit for days
> without ever hearing back, you still don't know if the remote server got the
> packet or not.
That's a brutal problem, and it is a perfect example of why it would be
incredibly wasteful to write code designed for the local case as if it
were designed for use over a network...which is what it sounds to me
like you were proposing when you started talking about trappable SEGV
signals. It simply doesn't make sense to write local code as if it were
remote code, nor vice versa.
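Just to make the ambiguity concrete, here is a minimal UDP sketch in
Java (the peer host and port are invented for illustration): once the
retries run out, the sender still cannot tell "my request never arrived"
apart from "the acknowledgement never came back".

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.SocketTimeoutException;

    public class TwoArmies {
        public static void main(String[] args) throws Exception {
            byte[] request = "do the thing".getBytes();
            byte[] ack = new byte[512];
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setSoTimeout(1_000);  // wait one second for an ack
                InetAddress peer = InetAddress.getByName("example.com"); // invented peer
                for (int attempt = 0; attempt < 5; attempt++) {
                    socket.send(new DatagramPacket(request, request.length, peer, 9999));
                    try {
                        socket.receive(new DatagramPacket(ack, ack.length));
                        System.out.println("acknowledged");
                        return;
                    } catch (SocketTimeoutException silence) {
                        // Either the request was lost or the ack was lost;
                        // from here the two cases cannot be told apart.
                    }
                }
                System.out.println("gave up: the peer may or may not have acted");
            }
        }
    }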
> Programming languages that have exceptions don't need to hide the network
> when doing RPC. The RPC calls can just throw an extra RemoteException if
> there's a networking problem, and bob's your uncle; we're exposing the
> potential unreliability of the network, and nobody ever said that any method
> call has to be particularly fast anyway so we've nothing to hide in the
> latency department!
The question isn't whether the reliability is exposed through the API.
The question is whether the application and API are *architected* around
the potential for failure and latency. If they are, then they will be
too much of a hassle for local use. This is all documented carefully in
"A Note On Distributed Computing".
You can certainly build decent protocols on top of RPC -- but only by
sacrificing the very thing that made RPC so nice: the fact that it makes
remote procedure calls look like local ones! An HTTP defined on top of
RPC would still be HTTP. But HTTP API calls do not look much like
procedure calls.
>... Not even using HTTP! I agree, though, that forcing the RPCs to be hard
> to do (by requiring conversion of your data to XML and poking it into HTTP,
> many lines of code) will force people to try to do it less, thus removing the
> above problem.
But more to the point, HTTP is optimized for handling networking issues
like lost messages and latency. It defines idempotent methods carefully
(these help reliability). It defines cacheable methods carefully (these
help latency).
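As a rough illustration (Java 11's HttpClient; the URI is invented):
because GET is defined to be safe and idempotent, a client or an
intermediary can retry it after a timeout without risking a duplicated
side effect, and shared caches can answer it outright.

    import java.io.IOException;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class IdempotentGet {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://example.com/people/42"))  // invented URI
                    .GET().build();

            // GET is idempotent, so retrying after a timeout cannot
            // duplicate a side effect; an arbitrary RPC invocation offers
            // no such guarantee.
            HttpResponse<String> response = null;
            for (int attempt = 0; attempt < 3 && response == null; attempt++) {
                try {
                    response = client.send(request,
                            HttpResponse.BodyHandlers.ofString());
                } catch (IOException lostOrTimedOut) {
                    // safe to try again
                }
            }

            if (response != null) {
                // Cache-Control and ETag headers let shared caches answer
                // later GETs for this URI without contacting the origin.
                System.out.println(response.headers()
                        .firstValue("Cache-Control").orElse("(no cache headers)"));
            }
        }
    }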
> And as for the REST argument that 'there are only a few methods, GET and POST
> and others'... I think it's wrong to say that GET and POST are methods in the
> same sense that getName () and getAge () are; in HTTP you would do GETs on
separate URLs for the name and the age, or GET a single URL that returns both
> name and age. In no way has GET replaced getName and getAge. HTTP's GET and
> POST and so on correspond more to an RPC protocol's 'INVOKE' operation than
> to the application-level getName () and getAge ().
You can surround the issue with logical sophistry, but it doesn't change
the central fact: any "application" in the world can invoke GET on any
URI without prior negotiation. There are dozens of standards and
tools with support for that built in, starting with HTML, through XLink,
through the semantic web technologies, through XSLT and XPointer,
through the stylesheet PI, through Microsoft Office (including, I'd
wager, XDocs), through caches and browsers.
That will never be true for getName and getAge. That makes GET
inherently superior to getName and getAge. Even if one does not buy the
entire REST argument, it amazes me that there are still people out there
arguing that it is better to segment the namespace rather than have a
single unified namespace for retrieving all information. You talk about
reinventing wheels and learning from the past. Surely the ONE THING we
learned from the massive success of the Web is that there should be a
single namespace.
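To put the contrast in code (Java 11's HttpClient again; the URIs are
made up): one generic client can dereference any URI it has never seen
before, while getName and getAge can only be reached through a stub
generated for that one interface.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class UniformGet {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            String[] uris = {
                "http://example.com/people/42/name",
                "http://example.com/people/42/age",
                "http://example.org/some/entirely/unrelated/resource"
            };
            for (String uri : uris) {
                // The same operation works against any URI: no stub, no
                // IDL, no prior agreement with the server.
                HttpResponse<String> r = client.send(
                        HttpRequest.newBuilder(URI.create(uri)).GET().build(),
                        HttpResponse.BodyHandlers.ofString());
                System.out.println(uri + " -> " + r.statusCode());
            }
        }
    }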
Paul Prescod