Re: [xml-dev] Exposing resources/services vs hiding implementation detail


Michael Champion wrote:
> On Apr 5, 2005 6:31 PM, Leigh Dodds <leigh@ldodds.com> wrote:
> 
> 
>>I don't see how defining a public URL space exposes any details about
>>my application structure. I've done it, for both human and machine
>>oriented interfaces, and you can't tell from the url structure or
>>response formats what my backend is doing. Or even if the entities in
>>the URI map 1:1 with entities in my database schema.
> 
> 
> I interpreted Bill's original post as arguing that one should expose
> actual implementation objects, tables, etc. as URIs.  We cleared that
> up -- he was  talking about abstractions or "domain objects"  a la
> Amazon.com's book-specific URIs that one can exchange, bookmark, etc.
> and not the physical tables where all this stuff resides.   If the
> backend is hidden behind URIs, services, or whatever, my concerns
> about information hiding are irrelevant.
> 
> If the entities in the URI map 1:1 with your database schema, well, it
> seems like an unnecessary risk to me, but I don't claim to know much
> about security.  Judging by the state of the industry, I'm not sure
> that very many of us do either :-)  . My main reason for starting this
> thread was not to argue against this, but to wonder why several people
> are so certain that having a single message dispatcher URI is a bad
> idea, irrespective of whether exposing all the individual resource
> URIs is a good idea.
> 
> I'm not sure my strawman has been demolished yet: The SOA dogma is to expose
> the service and the service contract to the client, and hide everything
> else.  That seems to reflect decades of best practice, back to
> "information hiding" in the days when Structured Programming was the
> One True Path to software quality. I'm not sure what's wrong with that
> dogma, other than the fact that  REST advocates making "domain
> objects" visible via URIs and having clients manipulate them by
> transferring  representations.   Why is that supposedly better? 
> What's the evidence?
> 
> My motivation in all this uber-permathread is that I opposed the
> notion that was popular 3-4 years ago of extending the COM/CORBA/RMI
> distributed object paradigm to the Web;    I agreed with the
> RESTifarians that it wouldn't scale, wouldn't leverage the Web
> infrastructure, all the classic arguments that appear to have been
> validated by Amazon, Bloglines, Flickr, etc.  But now I'm questioning
> the currently popular notion that the Web architectural style is
> generally suitable for enterprise scenarios where COM/RMI/J2EE/etc.
> are entrenched.  In these, information is often confidential, lots of
> real money is on the table and armies of slimeballs want to steal it,
> there is a more even balance between readers and writers (making all
> the caching goodness somewhat irrelevant), and the data is consumed
> by mission-critical programs that can't just sigh and click on the
> next link if there is a 404, or try again later if there is a timeout.
>  MAYBE the Web architecture principles apply here as well, obviously
> many people think they should.  I'm skeptical, and asking for solid
> arguments and concrete success stories.

Well you know, my beef with all that stuff is that I didn't think it
transplanted onto the Web. Which is to say, it felt like a ton of work
to get something done between a pair of firewalls until I stuck to HTTP
and untyped XML. How far Web/REST-as-deployed can push into the
enterprise and influence approaches is an unknown to me. I don't expect
the J2EE or .NET folks to pay much attention since most of them see the
web as a problem to be solved, whereas a database is a solution. I
think approaches like NetKernel and Indigo are cool tho'. And I think
every enterprise architect looking at ESB backbones should be reading
the XMPP specs and asking hard questions of the vendors; that's not
necessarily REST.

There are two specs in the wild now for reliable transmission, POE and
HTTPLR; which one you'd go for would depend on your circumstances. Rich
has mentioned end-to-end reliability as a problem, so he should check
them out. I know I'll end up using both of them at some point, but I
could also end up using WS-ReliableMessaging - I already use BizTalk
for such things.

I wrote HTTPLR down because it was fun, but also because no-one had
solved that problem in that way and written it down well enough for the
next guy (I'm always reminded of what Marshall Rose said about syslog)
- and for three other reasons. One is the amount of baggage something
like WS-ReliableMessaging brings with it; it depends on a bunch of
other WS specs, which ups the surface area altogether too much for me,
if only because so few of those specs seem to be stable. Two is that
the WS-RM specs don't necessarily play well where one endpoint is a
pure HTTP client - I have deployment scenarios where dropping a
server/WS-stack into an infrastructure is a non-runner (getting onto
port 80 is a non-runner, period), but which are dealt with well enough
by HTTPLR. Three is that the right Webby thing to do would be to deploy
a new HTTP method, but I think your chances of adoption then go to zero
- you can't even rely on PUT being available. HTTPLR is, by one
reading, realtechnik: a litany of compromises. I wrote what I thought
would get used. But I think the fact that I wrote it indicates I agree
with Rich to some extent about the need not being addressed.
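
Roughly, the shape of that kind of client-driven exchange over plain
POST is sketched below - a stable message id plus retries gives you
at-least-once delivery and lets the receiver discard duplicates. To be
clear, this is only an illustration and not the HTTPLR wire format; the
URL and the X-Message-Id header are made up:

# Illustrative sketch only: client-driven reliable delivery over plain
# HTTP POST. The URL and the X-Message-Id header are invented for this
# example; it is not the HTTPLR exchange.
import uuid
import urllib.request

def send_reliably(body, url="http://example.org/inbox", attempts=5):
    msg_id = str(uuid.uuid4())             # same id across retries
    req = urllib.request.Request(
        url, data=body,
        headers={"Content-Type": "application/xml",
                 "X-Message-Id": msg_id})  # receiver can drop duplicates
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(req, timeout=30) as resp:
                if 200 <= resp.status < 300:
                    return msg_id          # acknowledged
        except OSError:
            pass                           # timeout/reset/5xx: retry, same id
    raise RuntimeError("no acknowledgement for message " + msg_id)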

Where webarch delivered value to that protocol is in two ways: a)
everything gets a name, and b) the protocol is kept as an application
of the HTTP layer rather than a declarative message that merely treats
HTTP as a transport. I did those things not because I drank Roy
Fielding's or TBL's kool-aid, but because I have to a) be able to
administrate systems and reconcile messages spanning a heck of a lot of
systems (too many), and b) deal with not having full control of the
deployment environment. There are maybe more reasons than that.
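
To make (a) concrete: when each message has its own URI, reconciling
across a pile of systems is nothing more exotic than a GET against that
name. Again a sketch, with an invented URI layout:

# Sketch of point (a): every message gets its own URI, so checking what
# happened to it is a plain GET. The URI layout here is invented for
# illustration; a real deployment would have its own.
import urllib.request
import urllib.error

def message_status(msg_id, base="http://example.org/messages/"):
    try:
        with urllib.request.urlopen(base + msg_id, timeout=10) as resp:
            return resp.read().decode("utf-8")  # e.g. "delivered" or "pending"
    except urllib.error.HTTPError as e:
        return "unknown" if e.code == 404 else "error:" + str(e.code)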

Concrete success story: there are deployments of early versions of that 
protocol running for over a year now; I can't talk about them in detail, 
that's just how it is, but I can say they've never lost a message. Can 
they handle 10,000 messages a minute? No. Do they need to? No. Could 
they? I guess they might if I had enough computers and resources to ramp 
up (I hear you can download the web and index it given enough computers, 
and I hear you can build LiveJournal and Slashdot out of free stuff 
given the will and the smarts). Is there a theoretical limit? Yes - 
aping full-duplex with twinned HTTP servers should be theoretically 
faster than an asymmetric client/server pair. Was it hard to build? No, 
it took two people a few weeks. Is it secure? I punt to whatever HTTP 
can provide on that - I think it would be dumb to build security into an 
RM protocol.

Security - I'm no expert on that. My observation is that we have enough
security tech, but that we can't package it adequately. Everyone says
they want it, but who's actually doing it? That's a business problem,
not a technology one.

cheers
Bill





 
