   Re: [xml-dev] SOAP-RPC and REST and security


Amy Lewis wrote:
> Umm.  SOAP/XML Protocol does not require that toolkits generate WSDL
> from interfaces, or that they generate stubs from WSDL, or in fact
> anything about the deployment process.

WSDL's verbosity strongly discourages human beings from attempting to
read it.

> The existence of helpful IDEs, even if they break certain programming
> principles near and dear to you (and to me, btw ... I agree completely
> that offering this sort of silliness is a bad way to design services),
> has nothing to do with the protocol definition.

That's not true. The protocol was designed to make it easy to wrap
existing COM/DCOM components. Ask Don Box: that usage model was
envisioned from the beginning. I've only ever heard two advantages
cited for RPC:

1. It is intuitive because it works just like regular programming.
2. It is more compatible with legacy systems and existing functions.

Until I hear other advantages of RPC I'm going to presume that these
are the two main ones. I don't believe that either of them is desirable
from a security point of view.

> >2. SOAP lies. HTTP is an application protocol, not a transport protocol.
> No.
> SOAP, in RPC mode, uses HTTP POST to submit a complexly structured set
> of parameters, which may or may not be easily expressible using the
> traditional name-value pairs associated with a POST.  As a POST, it
> implies non-idempotent processing.  An action will be taken.  

Of course, most example web services are in fact both idempotent and
safe: no action is taken. More importantly, most example web services
tunnel a new addressing scheme through the Web. Behind a single web
resource there are millions of "logical" resources. This is a violation
of the Web architecture, and it degrades the efficacy of Web logging,
filtering and caching tools: in fact, of any Web intermediary.
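A minimal sketch of the addressing problem, using hypothetical endpoint names: with REST-style URIs an intermediary sees one URI per resource, while with SOAP-RPC every "logical" resource collapses behind a single endpoint.

```python
# Sketch: what an HTTP intermediary (log, cache, firewall) can see of
# REST-style vs SOAP-RPC-style stock-quote requests. All paths here
# are hypothetical examples, not from any real service.

rest_requests = [
    "GET /stock/IBM HTTP/1.1",
    "GET /stock/MSFT HTTP/1.1",
]

soap_requests = [
    # The real "address" (operation name, ticker symbol) is in the XML
    # body, which intermediaries do not normally inspect.
    "POST /StockQuoteService HTTP/1.1",
    "POST /StockQuoteService HTTP/1.1",
]

def visible_resource(request_line):
    """Return the (method, URI) pair an intermediary keys on."""
    method, uri, _version = request_line.split()
    return method, uri

# REST: two distinct resources visible; SOAP: one opaque endpoint.
print({visible_resource(r) for r in rest_requests})
print({visible_resource(r) for r in soap_requests})
```

A cache or log-analysis tool keyed on (method, URI) can distinguish the two REST requests but treats every SOAP call as the same resource.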

> It may be a nonstandard use of HTTP, but it certainly breaks no rules,
> and in fact is moderately RESTful.

Every SOAP web service I have ever seen breaks some Web Axioms:

"In HTTP, anything which does not have side-effects must use GET"

"Any resource of significance should be given a URI."

In particular, UDDI is one of the "holy three" specifications and it
breaks both of them.

> >It has syntax and semantics, just as XML has syntax and semantics.
> What are the semantics of XML?  XML is syntax ....


> HTTP is *a* binding for SOAP.  It is the premier binding, the most
> visible, because HTTP is widely deployed, and SOAP fits easily into the
> HTTP model: POST a request, receive a response.  SOAP doesn't specify
> what lies on the processor side of that.
> *Marketing* dorks certainly lie like this, hoping to sell product to
> other marketing dorks and get around the evil network-admin BOFH
> geekatroids who don't let said marketing dorks park their Win95 boxes
> on the internet freely.  I doubt, though, that you're going to find
> very many deployments that actually sidestep firewall policy in this
> fashion, 'cause the developers *do* talk to, and have (some) respect
> for the network admins.

Then why not use a raw TCP protocol? It would actually *reduce* the
complexity of the spec, clear up the SOAPAction question, clear up the
"Web architecture" question, and improve performance.

Firewalls are the answer. SOAP was born as DCOM-over-the-firewall and if
that's changed since then nobody told the people working on the spec
because it still looks like that is the primary goal.

> >SOAP lies both about what it is doing (POST to do getStockQuote) but
> A POST containing standard HTTP parameters that returns a stock quote
> is different how?

Such a use would also be a violation of Web architecture. I would take
the person to task if I met them, especially if they were a big company
telling people that violating Web architecture is a good thing.

Even so, even the nastiest Perl abuses of HTTP are typically more
transparent to firewalls than SOAP, because they use method names like
/get_stock_quote and /buy_stock and /sell_stock, whereas all a firewall
generally sees for SOAP is /stock_end_point .
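The filtering consequence can be sketched concretely. Assume a path-based rule, as a firewall ACL or proxy might apply (paths are hypothetical): it can block the "buy" operation of a plain-CGI service, but not of a SOAP service, because every SOAP operation shares one path.

```python
# Sketch: path-based filtering works on per-operation URIs but not on
# a single SOAP endpoint. The paths and policy are illustrative only.

BLOCKED_PATHS = {"/buy_stock", "/sell_stock"}

def allowed(path):
    """Apply a simple deny-list rule on the request path."""
    return path not in BLOCKED_PATHS

# Per-operation paths: the quote passes, the trade is blocked.
assert allowed("/get_stock_quote")
assert not allowed("/buy_stock")

# SOAP: quote and trade requests both go to the same endpoint,
# so the rule cannot tell them apart and both pass.
assert allowed("/stock_end_point")
```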

> *shrug*  With a big honking SoapAction on them, in case anyone bothers
> to look.  

SOAPAction is deprecated.

> ... And *I* can't deploy a service into the corporate webserver
> without approval of the admins.  Maybe this is different in
> cloud-cuckoo land.

I guess you don't work for as flexible a company as Don Box. ;)

> Fielding, like Schneier, here seems to confuse marketing droids with
> developers.  

Who are the developers? Don Box (who I otherwise have great respect for)
says firewall "compatibility" was a goal. Microsoft was the original
sponsoring body and says firewall "compatibility" was a goal. Nobody has
repudiated that.

> ... The two are not the same.  If there is a network admin in
> the world who allows free deployment of servlets, components, or
> whatever into the corporate firewall, he ought to be fired for
> incompetence *before* the first SOAP server gets dropped in (on top of
> the NCSA finger CGI, perhaps?).

How can the network admin *even know* that such a server is running? It
just looks like a standard HTTP POST, right? If their policy allows POST
then it will be very hard for them to disallow SOAP.

> >If I were Fielding and I carefully designed HTTP to be transparent and
> >thus more secure, I would be extremely annoyed to see SOAP hop on my
> >bandwagon. The only saving grace is that it will fail and be shut down
> >at the firewall again within a year.
> Sorry, could you explain how SOAP is defeating transparency?  It's
> expanding the syntax of POST parameters.  In the HTTP binding, which is
> the most visible, but not at all the only binding.

The very first line of an HTTP message has three things: a method name,
which is supposed to vary by operation but is always the same for SOAP;
a URI, which is supposed to vary based on the data you are working with
but is always the same for a particular SOAP component; and the protocol
version. All of the important goodies are in the body, which is the
*exact opposite* of the intent of the HTTP specification. Let me say
that again: the whole design of HTTP 1.1 was to make the "protocol"
bits of the message as transparent as possible to intermediaries
(including firewalls) so that they would not have to look at the body.
SOAP puts all of the "protocol" bits (addressing and data manipulation)
in the body.
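Here is what that looks like in a typical SOAP-RPC request (service name and operation are invented for illustration): the request line and headers are identical for every operation, and the operation name appears only inside the XML envelope.

```python
# Sketch: a representative SOAP-over-HTTP request. The parts HTTP 1.1
# exposes to intermediaries (request line + headers) carry no operation
# information; only the body does. Names here are hypothetical.

soap_request = """\
POST /StockQuoteService HTTP/1.1
Host: example.com
Content-Type: text/xml; charset=utf-8

<SOAP:Envelope xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP:Body>
    <m:getStockQuote xmlns:m="urn:example">
      <symbol>IBM</symbol>
    </m:getStockQuote>
  </SOAP:Body>
</SOAP:Envelope>"""

# Split at the blank line separating headers from body.
head, _, body = soap_request.partition("\n\n")

print("getStockQuote" in head)  # the operation is invisible at header level
print("getStockQuote" in body)  # only deep inspection of the body finds it
```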

> >3. SOAP is RPC.
> No.
> No, no, no.  In fact, SOAP as RPC, in my opinion, has a very limited
> lifetime (just as SOAP over that horrible application protocol with its
> i18n incompetence has, I hope, a short lifetime).  SOAP as messaging
> has a lot of potential, and is increasingly a direction of development.

Being "more general" than RPC is not a virtue in this argument. RPC is
already too general. Now you will receive SOAP messages without even
knowing whether to interpret them according to RPC conventions. That
makes the sysadmin's job even more difficult.

> On your *desktop*?  Does your network admin let you run web services
> exposed to the internet?  Shouldn't he be looking into "would you like
> fries with that" training, if so?

My network admin allows me to run web services. In fact, I've built a
desktop app that used XML-RPC so I've contributed to that situation. My
network admin allows me to access the Internet. There's no way he can
know whether the HTTP traffic from my desktop is from my web browser or
my desktop apps. According to all estimates, SOAP messages will be
zipping around the network every which way, as the new replacement for

> >What kind of insane system administrator would trust that the 48 SOAP
> >apps on your desktop are secure enough to allow access from outside the
> >firewall? What he's going to do?
> Refuse access, of course.  Just as HTTP doesn't go through to your
> probably-insecure desktop web server.  Duh.

The apps will also be making outgoing calls. Corporations don't tend to
think that desktop apps should be able to make any outgoing connection
they want (e.g. Morpheus). Once a connection is made, it is made, no
matter who made it.

Today sysadmins filter by ports (Morpheus uses 1214). SOAP disables
that. SOAPAction is deprecated. The only required (reliable) SOAP
elements are SOAP:Envelope and SOAP:Body. Not much to filter on, is it?
Firewalls of the future will need to understand not just IP, TCP, HTTP
and SOAP, but also apply XPaths to figure out what kind of document is
flowing through the network.
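To make that burden concrete, here is a sketch of what a "SOAP-aware" filter would have to do just to learn which operation is being invoked: parse the body as XML and walk into SOAP:Body. The namespace URI is the SOAP 1.1 envelope namespace; the message itself is a made-up example.

```python
# Sketch: minimum work for a firewall to recover the operation name
# from a SOAP message. It must parse XML and navigate namespaces,
# rather than read a line of headers.
import xml.etree.ElementTree as ET

ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_operation(message_bytes):
    """Return the local name of the invoked operation, or None."""
    root = ET.fromstring(message_bytes)
    body = root.find(f"{{{ENV}}}Body")
    if body is None or len(body) == 0:
        return None
    # First child of SOAP:Body names the operation; strip its namespace.
    return body[0].tag.rsplit("}", 1)[-1]

msg = b"""<SOAP:Envelope xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP:Body><m:buyStock xmlns:m="urn:example"/></SOAP:Body>
</SOAP:Envelope>"""

print(soap_operation(msg))  # buyStock
```

Contrast this with a port number or a request path, which a packet filter can match without parsing anything.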

"Well-known" port numbers exist for a reason. Can you offer a good
reason why SOAP should use port 80? Other than firewalls?

> Most corporations have a strong policy in favor of certain kinds of
> tunnelling, VPN being the prime example.  But you mean application
> tunneling over application protocols, in this case?  


> ... And if the
> tunneling isn't hidden, why should it be difficult to exclude? 

Based on...parsing the XML text? Anyhow, why is the onus on the sysadmin
to exclude? That's not the way the Internet has historically worked.

> Which has what to do with SOAP?  SOAP is transparent, SOAP is not
> necessarily RPC.  Reading XML is no harder than reading name-value
> pairs.

SOAP is human-readable (for those of us who like to think we're human).
It isn't transparent in the networking sense because it isn't designed
to be easy to recognize. "Easy to recognize" would be a new port.
"Almost as easy to recognize" would be a new HTTP method. Adding a
header would also be not quite as bad. Doing none of these is far, far
from easy to recognize.

But more important, SOAP is a superset of RPC, so it has no reliable
internal structure, addressing model, or even message pattern. It's
transparent to humans but opaque to computers!

 Paul Prescod



Copyright 2001 XML.org. This site is hosted by OASIS