   RE: [xml-dev] REST as RPC done right



From: Paul Prescod [mailto:paul@prescod.net]

"Bullard, Claude L (Len)" wrote:
> 
>> REST is RPC with primitive methods and strict adherence to
>> a global namespace.  Further, it depends on sharing vocabularies
>> even if they are as small as a query name and a few arguments.
>> So do all of the alternatives.  They vary by top-down vs bottom-up
>> evolution of the vocabulary.

>I agree up to the last sentence which I do not understand.

Nothing that tangible.  I am thinking about the use of big, fat XML 
documents of information vs. small sets of arguments passed 
back and forth to build up a complex message.  It isn't an issue.
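
To make that concrete, here is a rough sketch of the two shapes I have 
in mind.  The purchase-order scenario and every element name in it are 
invented for illustration, not taken from any real service:

  <!-- RPC style: small sets of arguments, one call per step, the
       complex message built up across several exchanges -->
  <addLineItem>
    <orderID>123</orderID>
    <sku>ABC-9</sku>
    <quantity>4</quantity>
  </addLineItem>

  <!-- Document style: one big fat XML document carrying the whole
       business message in a single agreed document type -->
  <purchaseOrder orderID="123">
    <buyer>Acme Corp</buyer>
    <lineItem sku="ABC-9" quantity="4"/>
    <lineItem sku="XYZ-2" quantity="1"/>
    <shipTo>PO Box 100, Anywhere</shipTo>
  </purchaseOrder>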

>> Still, I don't think it's quite that easy.   What do you want to support?
>> Browsing and exploration (tight coupling to navigation) or computing a
>> result?  I think you can do either with REST with more work, and
>> the second with task-specific RPC with more coordination over time.

>Can you demonstrate that the "more work" is sufficient to be significant
>in any system larger than getStockQuote?

It takes some time and coordination to get an API built up for a 
business process.  Every dimension of participation (two people, 
two departments, two divisions, two companies) makes it harder. 
We can quickly come up with some function calls, but getting 
those parties to agree on a document type is a lot harder. 
If all I give them is an address for a document in HTML, life 
is simple.  That is, as you have shown, why The Web works. 
Now given an RPC interface, there is little to negotiate, 
just much to discover and orchestrate.  If the results returned 
are smaller, I might be more willing to accept some work 
on my part to handle them than if they provide me a document 
that requires me to change every process I have in order to handle it. 
It somewhat depends on what I want to coordinate and whether 
I want to do it all up front or a piece at a time.

>> It comes down to building with generic methods.  Both can
>> use a global namespace.  (Are the critics of UDDI really
>> fussing about GUIDs?) 

>UDDI is useless regardless of its technical architecture, because its
>scope is way too broad. The promises made for it are unattainable.

UDDI is about as useful as what they say it is:  a yellow pages. 
Many businesses may be in the yellow pages but don't do business 
like that.  They require, as I've said, qualified customers that 
are authenticated on use.  These are private parties or select 
memberships.  UDDI doesn't do them much good, but like a yellow 
pages, they may be in there.  So I agree.

>As far as GUIDs, the problem isn't GUIDs versus URIs. The problem is
>that one service uses GUIDs as the first parameter for addressing.

I agree.  But then isn't this just as true of using a URI with a 
query string?  In other words, yes, as you say, it is a good idea to 
share addressing or you get the equivalent of the British mail system 
or a mapping exercise as is done in geo-coded location systems. 
But below the level of addressing, we are back to vocabulary evolution, 
and it is the same for any system prior to document type contracts and 
a CONOPS (concept of operations).
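
To put that in concrete terms (the registry, the element names, and the 
addresses below are all invented for the example), a GUID-keyed lookup 
and a URI with a query string both rest on the same kind of prior 
agreement about what the parameters mean:

  <!-- GUID-first addressing: the key is opaque and means nothing
       outside the one registry that issued it -->
  <getSupplierDetail>
    <supplierKey>uuid:f47ac10b-58cc-4372-a567-0e02b2c3d479</supplierKey>
  </getSupplierDetail>

  <!-- URI addressing: anyone with HTTP can dereference these, but the
       path segments and query terms are still a shared vocabulary -->
  http://registry.example.com/suppliers/acme
  http://registry.example.com/find?supplier=acme&region=us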

>So the issue is standardization versus the lack of standardization.

Too strong a term.  It is about agreements, yes.  I truly dislike 
using "standard" like that, but I understand your intent.  "Standard" 
is a link in a chain.  I might like to use it, share its use, 
and so forth, but not to wear it and not to share it as a leash 
with another.  It's just stuff.  As an American, I can say with 
confidence, we have too much stuff. ;-)

>Who said that? The closest I can come is to say that it is highly
>questionable whether the W3C should help to standardize a technology
>that is most often used to balkanize the open, standardized namespace as
>I discussed above. But I wouldn't deny SOAP etc. a right to exist.

Ok.  I do want to insist that "The Web" is not "The Internet" and 
that some of these threads are harder if we don't have a good 
scoping definition for "The Web".  It is the same problem as 
Fielding has with "distributed hypermedia".  Given what I see  
happening, we can certainly have systems that will work well 
without using "The Web" and still might use parts of "The Web" 
technology.  I believe the Intellink folks had serious discussions 
about that, and a lot of private VPNs do.  The problem is 
the ecology of the Internet, not The Web.  Lots of designs 
really can burn it up (why don't we use two-way pipes, 
stateful protocols, etc.?  Dumb networking, yes?).

>> ... That part is dumb.  Impress over imprimatur.
>> Still, that simply means "the web" is not the Internet
>> and that those who want to defend "the web" are either
>> forging chains or building bulwarks.  Caveat vendor.

>I don't know what you're talking about. You've suggested on a couple of
>occasions that the request that you use a standardized addressing syntax
>is in some sense repressive or counterproductive. Can you please be
>more specific?

Technologies aren't repressive.  People are.  Insisting on namespaces 
in XML as core is repressive unless there is an alternative to the 
http:// thingie.  There is one (public IDs), so I am satisfied.  Insisting 
on the unification of the world's information resources in "The Web" is a 
cherry idea, and that is not a compliment to the idea.  If you aren't 
insisting on that, I'm satisfied.

But the comment from someone that Public IDs don't solve any 
problems only means that person may be unaware of some problems.
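
For those who have not used them, here is roughly what that alternative 
looks like; the document type and both identifiers are made up for the 
example:

  <!-- The formal public identifier names the DTD independently of any
       one network location; the system identifier (an http: URI) is
       just one place a copy of it happens to live -->
  <!DOCTYPE report PUBLIC "-//Example Corp//DTD Report 1.0//EN"
                          "http://www.example.com/dtds/report.dtd">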

>> If HTTP went away, even if URIs went away, would we
>> still have a hypermedia system on the Internet?

>Yes. I used hypermedia systems on the Internet before the Web. But the
>Web caught on because of URIs (then called URLs).

Sure.  On the other hand, times and requirements change.  Understanding 
why things work is helpful to understanding which requirements to 
pick at a given time for a given customer.  If we really mean 
what we say about information over implementation, separation of 
content from process, we have to accept that "The Web" is yet 
another application and be sure that we don't forge a chain 
on information, resources, and most particularly, the things 
they represent, in a zealous rage to ensure the ubiquity of 
"the Web".

One of the things that XML plus URIs does, that the web does, is 
expose a lot of resources to disruptive forces as well.  I remember 
all too well the arguments on comp.text.sgml that if a URI wasn't 
offered for a document, it didn't exist or shouldn't.  That kind 
of thinking is stupid.  You didn't do that, but the fellow who 
did convinced me to question deeply the assertions about "the web" 
and why its ubiquity must or should triumph over any other 
solution, even if it warps our memory, our recorded history, 
and the progress of innovation.

len




 
