On Monday 18 February 2002 15:03, Bullard, Claude L (Len) wrote:
> I asked earlier that if REST is The Web Way and everyone
> knows that, why are the Web Service baseline specifications
> based on a functional style? As the cannonized defender
> of RPC, can you answer that question? So far, we have a
> lot of material from Paul et al on the REST position
> (no newtonian jokes, please.. :)), but no one has stepped
> forward to defend the web service specification approaches
> other than to say as you have said, the toolkits are there.
> So Why Traditional RPC? Why Web Services per UDDI/WSDL/SOAP?
I'll step in here - I'm cannonized often enough on this list for my
controversial views :-)
I think RPC is being a *bit* maligned due to assumptions being made. RPC
isn't just about faking the semantics of a local procedure call over a
network and trying to hide the networkness as much as possible.
CORBA IDL, for example, allows for methods to be marked as asynchronous; they
return nothing and the caller is not blocked.
I think that HTTP requests are RPCs, just RPCs burdened with extra stuff that
could be dropped at no loss of usefulness.
Paul has said things like 'http://www.foo.com/get_authors.php?id=123' goes
into my browser toolbar, while 'new
XMLRPC('http://www.foo.com/').getAuthors(id: 123)' doesn't. Well, that's just
a matter of syntax, and an RPC mechanism can be *defined* that just treats
those method names as parts of the URL through a formalised mapping.
This mapping would just be a convenience for programmers when referring to
RPC services from code. And since it looks like a procedure call in the code,
it can be type checked against an interface definition (WSDL or whatever) for
correctness.
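To make that concrete, here's a rough sketch of such a mapping (the names and class are illustrative, not any real toolkit's API): a proxy object that turns a method call, by pure convention, into a URL. A real implementation would actually fetch the URL; this one just builds it, to show the mapping.

```python
import urllib.parse

class RPCProxy:
    """Hypothetical proxy: svc.getAuthors(id=123) maps, by a
    formalised naming convention, to
    http://www.foo.com/getAuthors?id=123."""

    def __init__(self, base_url):
        self.base_url = base_url.rstrip('/')

    def __getattr__(self, method_name):
        # Any attribute access becomes a callable that builds the URL.
        def call(**args):
            query = urllib.parse.urlencode(args)
            return f"{self.base_url}/{method_name}?{query}"
        return call

svc = RPCProxy('http://www.foo.com/')
print(svc.getAuthors(id=123))  # http://www.foo.com/getAuthors?id=123
```

Because the call site looks like an ordinary method call, it's exactly the shape a stub generator could type-check against an interface definition.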
In this kind of setup, I'd say that http://www.foo.com/ is the RPC resource,
and the convention for providing an RPC interface is that
http://www.foo.com/<methodName>?<arg>=<value>&... performs an idempotent call,
while a POST to http://www.foo.com/<methodName> performs a
non-idempotent one. Each of those methods is therefore a separate resource,
but thanks to the naming convention it can be identified (from the HTTP
URL alone) as being part of http://www.foo.com/ - and the standard could also
dictate that a GET request to http://www.foo.com/describeInterface returns
some form of IDL listing the available operations.
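Under that convention a client could pick the HTTP method mechanically. A rough sketch, assuming the set of idempotent operations is known (in practice it would be read from the hypothetical describeInterface resource; here it is hard-coded):

```python
import urllib.parse
import urllib.request

# Would really come from GET http://www.foo.com/describeInterface;
# hard-coded for this sketch.
IDEMPOTENT = {'getAuthors'}

def build_request(base_url, method_name, **args):
    """GET <base>/<method>?<args> for idempotent calls,
    POST <base>/<method> with the arguments in the body otherwise."""
    query = urllib.parse.urlencode(args)
    if method_name in IDEMPOTENT:
        return urllib.request.Request(f"{base_url}{method_name}?{query}")
    # Supplying a body makes urllib issue a POST.
    return urllib.request.Request(f"{base_url}{method_name}",
                                  data=query.encode('ascii'))

get_req = build_request('http://www.foo.com/', 'getAuthors', id=123)
post_req = build_request('http://www.foo.com/', 'addAuthor', name='Len')
# get_req uses GET; post_req uses POST
```

The point is that idempotence is decided by the interface description, not guessed per call site.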
I don't think this *destroys* anything, as I interpret Paul as saying. I see
this as just a naming convention that allows for the relationships between
things in a common pattern to be expressed.
And as for the 'action-oriented RPC versus just sending a document' stuff -
well, look at the problem domain: We want to get something based upon
something else, or we want to do something to something, or we want to send
something to somebody, or we want to wait until a condition is met, or...
These different communication patterns crop up, and we need them all. Given just
one we can often implement most of the others; but should we start with
just one and layer the rest on top, or offer all of them as raw primitives?
I don't think PUTting to a URL really encodes 'sending something'
correctly. As I read it, PUT is meant to upload a new file, with the implication
that a subsequent GET would return what was PUT.
How, with HTTP methods, might one implement adding a file to a queue? Have a
URL to which multiple PUTs can be done? Ideally, we'd have a method callable
on a directory that accepts a MIME entity and puts it into the directory with
a unique URL, and returns that URL for future reference. How do we implement
waiting until a condition is met? A GET URL that keeps the client waiting
until the condition is met? Sure. But is that a valid use of HTTP? Would a
proxy properly proxy this, or would it time out after a few minutes and
return an error to the client?
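The queue operation itself - accept an entity, file it under a freshly minted
unique URL, hand that URL back - is easy enough to state as code, even if plain
PUT doesn't express it. A toy in-memory sketch (the class and URLs are invented
for illustration, not a proposal for the wire protocol):

```python
import uuid

class DirectoryResource:
    """Toy model of the 'directory' method described above: posting an
    entity stores it under a unique new URL, which is returned for
    future reference (a GET on that URL yields the entity back)."""

    def __init__(self, base_url):
        self.base_url = base_url
        self.entries = {}

    def post(self, entity):
        name = uuid.uuid4().hex        # guaranteed-fresh name
        self.entries[name] = entity
        return self.base_url + name    # the new entry's URL

    def get(self, url):
        return self.entries[url[len(self.base_url):]]

queue = DirectoryResource('http://www.foo.com/queue/')
url = queue.post(b'some MIME entity')
assert queue.get(url) == b'some MIME entity'
```

Multiple posts never collide, and each caller gets back a URL it can hold on to - which is precisely the contract that repeated PUTs to one URL can't give you.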
Alaric B. Snell
http://www.alaric-snell.com/ http://RFC.net/ http://www.warhead.org.uk/
Any sufficiently advanced technology can be emulated in software