Re: [xml-dev] Exposing resources/services vs hiding implementation details


On Apr 5, 2005 6:31 PM, Leigh Dodds <leigh@ldodds.com> wrote:

> I don't see how defining a public URL space exposes any details about
> my application structure. I've done it, for both human and machine
> oriented interfaces, and you can't tell from the url structure or
> response formats what my backend is doing. Or even if the entities in
> the URI map 1:1 with entities in my database schema.

I interpreted Bill's original post as arguing that one should expose
actual implementation objects, tables, etc. as URIs.  We cleared that
up -- he was talking about abstractions or "domain objects" a la
Amazon.com's book-specific URIs that one can exchange, bookmark, etc.,
and not the physical tables where all this stuff resides.  If the
backend is hidden behind URIs, services, or whatever, my concerns
about information hiding are irrelevant.

If the entities in the URI map 1:1 with your database schema, well, it
seems like an unnecessary risk to me, but I don't claim to know much
about security.  Judging by the state of the industry, I'm not sure
that very many of us do either :-).  My main reason for starting this
thread was not to argue against this, but to wonder why several people
are so certain that having a single message dispatcher URI is a bad
idea, irrespective of whether exposing all the individual resource
URIs is a good idea.
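
To make the contrast concrete, here is a minimal, purely illustrative
sketch in Python (WSGI) of the two shapes being argued about.  The
paths, the "getBook" operation, and the toy book data are my own
assumptions for the sake of the example, not anything anyone has
actually proposed:

import json

BOOKS = {"12345": {"title": "An Example Book"}}  # toy data, not a real schema

def dispatcher_style(environ, start_response):
    # Single message-dispatcher URI: every request is a POST to one
    # endpoint, and the real operation is named inside the message body.
    length = int(environ.get("CONTENT_LENGTH") or 0)
    body = json.loads(environ["wsgi.input"].read(length) or b"{}")
    if body.get("op") == "getBook":
        result = BOOKS.get(body.get("id"), {"error": "no such book"})
    else:
        result = {"error": "unknown op"}
    start_response("200 OK", [("Content-Type", "application/json")])
    return [json.dumps(result).encode("utf-8")]

def resource_style(environ, start_response):
    # Individual resource URIs: the book itself is addressable, e.g.
    # GET /books/12345, and nothing about the backend leaks through.
    parts = environ.get("PATH_INFO", "/").strip("/").split("/")
    if len(parts) == 2 and parts[0] == "books" and parts[1] in BOOKS:
        start_response("200 OK", [("Content-Type", "application/json")])
        return [json.dumps(BOOKS[parts[1]]).encode("utf-8")]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"no such resource"]

if __name__ == "__main__":
    # Quick local test of the resource-oriented variant.
    from wsgiref.simple_server import make_server
    make_server("localhost", 8000, resource_style).serve_forever()

Either app could sit in front of exactly the same backend; the only
thing in dispute is which shape the client gets to see.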

I'm not sure my strawman has been demolished yet: the SOA dogma is to
expose the service and the service contract to the client, and hide
everything else.  That seems to reflect decades of best practice, going
back to "information hiding" in the days when Structured Programming
was the One True Path to software quality.  I'm not sure what's wrong
with that dogma, other than the fact that REST advocates making "domain
objects" visible via URIs and having clients manipulate them by
transferring representations.  Why is that supposedly better?
What's the evidence?
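
For the client side of that contrast, here is an equally hypothetical
sketch, assuming some service at localhost:8000 that understands both
shapes; every URL, operation name, and field below is an assumption
made up for illustration:

import http.client
import json

# Hypothetical host, port, and paths; illustrative only.
conn = http.client.HTTPConnection("localhost", 8000)

# Service-contract style: one endpoint, with the operation named in the
# payload rather than in the URI or the HTTP method.
conn.request("POST", "/endpoint",
             body=json.dumps({"op": "getBook", "id": "12345"}),
             headers={"Content-Type": "application/json"})
print(conn.getresponse().read())

# Representation-transfer style: GET the resource's representation,
# change it locally, and transfer the new state back with PUT.
conn.request("GET", "/books/12345")
book = json.loads(conn.getresponse().read())
book["title"] = book["title"].upper()
conn.request("PUT", "/books/12345",
             body=json.dumps(book),
             headers={"Content-Type": "application/json"})
print(conn.getresponse().status)

Neither sketch is meant as the right way to build anything; it only
pins down what the two approaches actually ask the client to do.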

My motivation in all this uber-permathread is that I opposed the
notion, popular 3-4 years ago, of extending the COM/CORBA/RMI
distributed object paradigm to the Web; I agreed with the
RESTifarians that it wouldn't scale, wouldn't leverage the Web
infrastructure -- all the classic arguments that appear to have been
validated by Amazon, Bloglines, Flickr, etc.  But now I'm questioning
the currently popular notion that the Web architectural style is
generally suitable for enterprise scenarios where COM/RMI/J2EE/etc.
are entrenched.  In these scenarios, information is often confidential,
lots of real money is on the table and armies of slimeballs want to
steal it, there is a more even balance between readers and writers
(making all the caching goodness somewhat irrelevant), and the data is
consumed by mission-critical programs that can't just sigh and click on
the next link if there is a 404, or try again later if there is a
timeout.  MAYBE the Web architecture principles apply here as well;
obviously many people think they should.  I'm skeptical, and I'm asking
for solid arguments and concrete success stories.




 
