OASIS Mailing List Archives
   Re: [xml-dev] Traditional RPC


Alaric Snell wrote:
>
>...
>
> I think RPC is being a *bit* maligned due to assumptions being made. RPC
> isn't just about faking the semantics of a local procedure call over a
> network and trying to hide the networkness as much as possible.
 
It sure sounds like it: "Remote Procedure Call". What advantages
does RPC have other than it applies a familiar model ("procedure call")
to an unfamiliar problem ("networking")?
 
> CORBA IDL, for example, allows for methods to be marked as asynchronous; they
> return nothing and the caller is not blocked.
 
Yes, RPCs over time tend to evolve features that make them less PC-like.
Neither fish nor fowl, they will strangle when the spring comes. ;)
 
> I think that HTTP requests are RPCs, just RPCs burdened with extra stuff that
> could be dropped at no loss of usefulness.
 
That's factually incorrect. The defining characteristic of RPCs is that
the programmer chooses his own methods to go with his endpoints. HTTP
predefines the methods. The set is extensible, but the REST constraint is
that extensions must be globally defined and meaningful, not
endpoint-specific as in RPC.

 * http://www1.ics.uci.edu/~fielding/pubs/dissertation/evaluation.htm#sec_6_5_2

"What distinguishes RPC from other forms of network-based application
communication is the notion of invoking a procedure on the remote
machine, wherein the protocol identifies the procedure and passes it a
fixed set of parameters, and then waits for the answer to be supplied
within a return message using the same interface."

"What distinguishes HTTP from RPC isn't the syntax. It isn't even the
different characteristics gained from using a stream as a parameter,
though that helps to explain why existing RPC mechanisms were not usable
for the Web. What makes HTTP significantly different from RPC is that
the requests are directed to resources using a generic interface with
standard semantics that can be interpreted by intermediaries almost as
well as by the machines that originate services. The result is an
application that allows for layers of transformation and indirection
that are independent of the information origin, which is very useful for
an Internet-scale, multi-organization, anarchically scalable information
system."

> Paul has said things like 'http://www.foo.com/get_authors.php?id=123' goes
> into my browser toolbar, while 'new
> XMLRPC('http://www.foo.com/').getAuthors(id: 123)' doesn't. Well, that's just
> a matter of syntax, and an RPC mechanism can be *defined* that just treats
> those method names as parts of the URL through a formalised mapping.

If it were "just syntax" then you wouldn't have to start talking about
idempotency, which is clearly semantics, below.
 
> This mapping would just be a convenience for programmers when referring to
> RPC services from code. And since it looks like a procedure call in the code,
> it can be type checked against an interface definition (WSDL or whatever) for
> validity.
 
If it looks like "just a procedure call" from code then I would question
whether the API is extensible, scalable and secure. I'd also wonder how
it handles streaming data applications for performance.
 
> In this kind of setup, I'd say that http://www.foo.com/ is the RPC resource,
> and the convention of it providing an RPC interface being that
> http://www.foo.com/<methodName>?<arg>=<value>&... is how an idempotent call
> is performed, and a POST to http://www.foo.com/<methodName> implements a
> non-idempotent call.
 
Now you're starting to reinvent HTTP. RPC protocols do not divide the
world into "GET" and "POST" and do not give method results URIs. In
order for this to achieve the goals you would have to strictly separate
out which methods are information-fetch and which are
information-sending. Otherwise there would be some information which
couldn't be fetched for referencing without having some side-effect.
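The convention Alaric proposes above can be sketched mechanically. A
minimal, hypothetical mapping (the function and base URL are mine, purely
for illustration):

```python
from urllib.parse import urlencode

def build_request(base, method_name, args, idempotent):
    """Sketch of the proposed convention: idempotent calls become GETs
    with query parameters; non-idempotent calls become POSTs carrying
    the arguments in the request body."""
    if idempotent:
        return ("GET", f"{base}/{method_name}?{urlencode(args)}", None)
    return ("POST", f"{base}/{method_name}", urlencode(args))

# An information-fetch and a side-effecting call:
print(build_request("http://www.foo.com", "getAuthors", {"id": 123}, True))
# ('GET', 'http://www.foo.com/getAuthors?id=123', None)
print(build_request("http://www.foo.com", "addAuthor", {"name": "X"}, False))
```

Note that the caller must already know, per method, which calls are
idempotent fetches and which have side effects; that is exactly the
discipline being discussed.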
 
Once you start to impose this form of discipline, you are moving from
RPC to REST. REST is a discipline defined by interface constraints.
There is no way to build the system we are describing by generating code
based on FoxPro or C# type declarations.
 
It seems to me, however, that you are talking about one endpoint per
machine name, which I think you'll agree is less powerful than having
multiple ones:
 
http://www.foo.com/x/y/methodName?<arg>=<value>
 
Now that we've split things into GET-like and POST-like, the getXXXX
method names are going to be redundant:
 
http://www.foo.com/x/y/getStateName?state=5
 
Why not:

http://www.foo.com/x/y/stateName/5
 
etc. etc. As we apply the interface constraints and common sense we'll
eventually reinvent REST and HTTP. If you want to spend the effort
leading the industry to REST through RPC then I'm all for it.
 
And if SOAP were moving in this direction I'd be all for that too! But
it isn't. It is "traditional RPC", but it isn't even as sophisticated as
traditional RPC because it has no notion of object references, so you
cannot refer to work you have already completed without an
application-specific convention like "object handles".

Len says stateless programming is a pain. SOAP is what causes the pain.
URIs are one solution.
 
> ... Each of those methods is therefore a separate resource,
> but due to the naming convention it can be identified (based on just the HTTP
> URL) as being part of http://www.foo.com/ - and the standard could also
> dictate that a GET request to http://www.foo.com/describeInterface returns
> some form of IDL listing the available operations.

> I don't think this *destroys* anything, as I interpret Paul as saying. I see
> this as just a naming convention that allows for the relationships between
> things in a common pattern to be expressed.
 
URI naming conventions are fine when they are for human readers. They
are bad when they are to be enforced in software:
 
  http://www.w3.org/DesignIssues/Axioms.html#opaque
 
> And as for the 'action-oriented RPC versus just sending a document' stuff -
> well, look at the problem domain: We want to get something based upon
> something else, or we want to do something to something, or we want to send
> something to somebody, or we want to wait until a condition is met, or...
 
In almost every case we want to either reference data or generate
addresses for data so that it can be referenced later.

>...
> I don't think PUTting to a URL really encodes 'sending something'
> correctly. As I read it, it's meant to upload a new file with the implication
> that a GET would return what was PUT.
 
Right. You are sending something with the implication that later you can
GET it again. That's a virtue, not a flaw. Now you have something you
can reference for legal and technical reasons.
 
If you really want to send something into a black hole you can abuse
POST. That's what XML+HTTP+POST protocols do.
 
> How, with HTTP methods, might one implement adding a file to a queue?

How do you think Kinko's Internet print queues work? You POST to a queue
resource. But if it is used properly, you will get back a URI to allow
you to refer to that job later. "Is job http://kinkos.com/1343234 done?"

> ... Have a
> URL to which multiple PUTs can be done? Ideally, we'd have a method callable
> on a directory that accepts a MIME entity and puts it into the directory with
> a unique URL, and returns that URL for future reference.
 
That method is already built into HTTP. And it is *exactly* as you
describe it. There we go reinventing again. ;)
 
"The POST method is used to request that the origin server accept the
entity enclosed in the request as a new subordinate of the resource
identified by the Request-URI in the Request-Line."
 
"The posted entity is subordinate to that URI in the same way that a
file is subordinate to a directory containing it, a news article is
subordinate to a newsgroup to which it is posted, or a record is
subordinate to a database."
 
"If a resource has been created on the origin server, the response
SHOULD be 201 (Created) and contain an entity which describes the status
of the request and refers to the new resource, and a Location header"

"The Location response-header field is used to redirect the recipient to
a location other than the Request-URI for completion of the request or
identification of a new resource. For 201 (Created) responses, the
Location is that of the new resource which was created by the request."

Part of REST discipline is understanding what facilities are already
available in the Web infrastructure!
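The POST-then-201-with-Location flow quoted above can be shown with a toy
in-memory origin server (a sketch of mine; the class, `/jobs` base path, and
job representation are all hypothetical, and a real server would of course
speak HTTP on the wire):

```python
class QueueResource:
    """Toy origin server: POST creates a new subordinate job resource and
    answers 201 Created with a Location header, as the spec excerpts
    above describe."""
    def __init__(self, base="/jobs"):
        self.base = base
        self.jobs = {}
        self.next_id = 1

    def post(self, entity):
        uri = f"{self.base}/{self.next_id}"
        self.next_id += 1
        self.jobs[uri] = {"entity": entity, "done": False}
        return 201, {"Location": uri}      # 201 Created + new resource URI

    def get(self, uri):
        job = self.jobs.get(uri)
        if job is None:
            return 404, None
        return 200, job                    # "Is job .../1 done?"

queue = QueueResource()
status, headers = queue.post("print me")
print(status, headers["Location"])  # 201 /jobs/1
print(queue.get(headers["Location"]))
```

The returned Location URI is exactly the handle the print-queue example
relies on: something the client can GET later, bookmark, or hand to
someone else.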
 
> .... How do we implement waiting until a condition is met?

http://internet.conveyor.com/RESTwiki/moin.cgi/HttpEvents

Summary: You upload a URI and ask that a notification be PUT or POSTed
to that URI when the condition is met.
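That summary can be sketched as a toy subscription registry (my own
illustration, not the HttpEvents design itself; in a real system each
delivery would be an HTTP POST or PUT to the callback URI over the network,
where here it is just recorded):

```python
class ConditionNotifier:
    """Sketch of the idea: a client uploads a callback URI; when the
    condition is met, the server delivers a notification to that URI."""
    def __init__(self):
        self.subscriptions = {}  # condition -> list of callback URIs

    def subscribe(self, condition, callback_uri):
        self.subscriptions.setdefault(condition, []).append(callback_uri)

    def condition_met(self, condition):
        # Return the deliveries that would be performed; each one would
        # be a POST (or PUT) of a notification entity to the client URI.
        return [(uri, f"POST notification: {condition}")
                for uri in self.subscriptions.pop(condition, [])]

n = ConditionNotifier()
n.subscribe("job-1343234-done", "http://client.example/inbox")
print(n.condition_met("job-1343234-done"))
```

The client waits by owning a URI, not by holding a connection open, which
keeps the interaction stateless on the server side.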
 
 Paul Prescod




 


Copyright 2001 XML.org. This site is hosted by OASIS