> I think you've demonstrated that there are some minor issues with
> security in the REST model over unencrypted HTTP, given current HTTP
> authentication schemes.
Wow. I'd use "real" instead of "minor" (and try to prove why, below),
but still and all, is this the first time a REST person has admitted
that REST has issues with something real? :)
Let me try to explain why REST has real problems with real security,
removing the qualifications you used above.
In another message you questioned why my identity would be something
like a SAML assertion and not a name/password. The answer is
scalability and liveness of data, achieved by using a level of
indirection; in the security world this is known as a TTP, trusted third
party. You may choose not to believe me, but I assure you with any
security credibility I may have that TTPs are a "best practice" -- be
they CAs issuing SSL certificates, Kerberos KDCs, attributes in
Active Directory, or what have you.
Here's a real-world example. A big bank down in New York (call it BB)
has a deal with a small online bookseller: BB's employees get a
discount. BB hands the seller the public key of its HR SAML server.
When an
employee wants to buy a book, they present a SAML assertion to the
seller. The seller checks the signature, sees it came from BB HR, and
gives the employee the discount. This is the best way to do things. BB
is not going to give its employee list to the seller -- there are
privacy issues, out-of-sync issues, etc. Requiring real-time seller->BB
"is XXX an employee" queries adds too much overhead and fragility to the
system.
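
Here is a minimal sketch of the seller's side of that check, assuming
Python's "cryptography" package and a plain RSA/SHA1 signature over
the raw assertion bytes; real SAML uses XML Signature with
canonicalization, and the function and variable names here are made
up for illustration:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def entitled_to_discount(bb_hr_public_key, assertion_bytes, signature):
        # The seller got BB HR's public key out of band, when the
        # deal was struck; no employee list ever changes hands.
        try:
            bb_hr_public_key.verify(
                signature,
                assertion_bytes,
                padding.PKCS1v15(),
                hashes.SHA1(),
            )
            return True   # signed by BB HR: give the discount
        except InvalidSignature:
            return False  # anyone else: full price

The seller holds one public key and no employee list; that is the
level of indirection doing its work.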
A very small SAML assertion is about 500 bytes. A signature on that
message is between 1000 and 1500 bytes (depending on whether the
certificate is included), so we're up to around 2K. If the identity is
encrypted -- perhaps the bank wants to ensure that only the bookseller
gets to see this employment assertion -- then add another 100 or so
bytes and increase the 2K by 33% (base64), and we're approaching 3K.
Following REST principles, that information has to be sent every time
the employee contacts the seller's website. Is it scalable to require
3K overhead on every single transaction?
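
To make the arithmetic concrete, here is the same estimate as a few
lines of Python; every byte count is the rough figure from above, not
a measurement:

    # All numbers are the rough estimates from the text.
    assertion = 500                    # very small SAML assertion
    signature = 1500                   # signature, certificate included
    key_blob  = 100                    # encrypted-identity overhead
    raw = assertion + signature + key_blob      # about 2.1K
    wire = raw * 4 // 3                # base64 adds about 33%
    print(wire, "bytes on every transaction")   # about 2800, near 3K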
In the bookseller example -- easily extrapolated to any catalog-browsing
-- the client content is minuscule (a URL) compared to the 3K of secure
identification. Of course we're all guessing, but I don't think anyone
can argue that "small client request gets big server response" is an
unreasonable expectation for a non-trivial amount of (future) web traffic.
The examples you've been using are all limited because they show you as
a specific entity interacting with the web site. Many transactions --
most, in the web services world -- are an entity *within one
organization* interacting with an entity *within another
organization.* For that kind of
interaction, TTP is a must for the reasons I listed above: scalability,
privacy, and data-liveness.
> You have not demonstrated that it is a
> fundamental principle that maintaining state on both sides of a
> connection is a requirement for good security.
The paragraphs above tried to justify the size and crypto overhead --
why the data is what it is. Now let's talk about processing cost. On my
1GHz/384MB workstation, verifying a 2K file (small data, small
signature as described above) takes about 2 msec. I used xmlsec
(www.xmlsec.org), which is built on OpenSSL and the Gnome XML
libraries, so I'm using standard and very fast software. I didn't
test, but encryption is probably similar: it has another RSA
operation, but instead of XML canonicalization feeding into SHA1 it
has a base64 decode feeding into 3DES (triple-DES) or AES.
(Those playing along at home can try a similar test using just openssl:
    openssl speed rsa1024 ; openssl speed des
)
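
If you'd rather script it, here is a rough harness, assuming Python's
"cryptography" package; it times the bare RSA-1024/SHA1 verify and
leaves out the XML parsing and canonicalization work that xmlsec does
on top of whatever this prints:

    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
    data = b"x" * 2048                  # the ~2K message from above
    sig = key.sign(data, padding.PKCS1v15(), hashes.SHA1())
    pub = key.public_key()

    n = 1000
    start = time.perf_counter()
    for _ in range(n):
        pub.verify(sig, data, padding.PKCS1v15(), hashes.SHA1())
    elapsed = time.perf_counter() - start
    print("%.3f msec per verify" % (1000 * elapsed / n))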
I don't have any reasonable way to test SSL overhead in the same
framework, but the operations are comparable for the initial connection.
So now the server has non-trivial overhead any time it wants to verify
the client's identity. It probably needs to do that almost all the
time, since it needs to know if the client qualifies for the discount
price or not. If the server were running on my machine, for example,
it's an extra 4 msec per HTTP connection, or more than doubling the level
of SSL-type activity. Note that you can't use an SSL accelerator to
off-load, since this is XML application-level data, so you need to
provision some number of additional application servers to handle the
crypto work that's now being added. And that stuff is very CPU
intensive. (Shameless plug: or get a product like our XS-40, which is
designed for this; see URL below.)
Perhaps more importantly, by having the client present the security
context, the server is now very susceptible to denial-of-service attacks.
It's trivial for an adversary to make a realistic message -- no crypto
involved, just send random base64 bytes -- and send it over and over
again forcing the server to do lots of needless work. This is not a
"throw more horsepower at it" issue, it is an architectural issue. If I
have faster hardware, so does my adversary. As long as the processing
is asymmetric -- as long as the server must validate the client on most
transactions -- the weakness is intrinsic to the design.
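
A sketch of that asymmetry, under the same assumptions as the harness
above -- the attacker spends almost nothing generating junk, while the
server pays for a full verification attempt before it can reject it:

    import os, time
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    pub = rsa.generate_private_key(public_exponent=65537,
                                   key_size=1024).public_key()

    def server_side(blob):
        # The server can't know the message is junk until it has
        # paid for the verification attempt.
        sig, body = blob[:128], blob[128:]
        try:
            pub.verify(sig, body, padding.PKCS1v15(), hashes.SHA1())
            return True
        except InvalidSignature:
            return False

    junk = os.urandom(2048)   # attacker's total cost: random bytes
    n = 1000
    start = time.perf_counter()
    for _ in range(n):
        server_side(junk)
    elapsed = time.perf_counter() - start
    print("%.3f msec burned per bogus message" % (1000 * elapsed / n))

Faster hardware shrinks both sides' costs together, which is exactly
why throwing horsepower at it doesn't change the picture.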
> At most, you have shown
> that given current public key encryption algorithms and available
> hardware, it is inefficient not to maintain some state on both sides of
> the connection. However, given that faster hardware is a near certainty
> and faster algorithms are far from inconceivable, I certainly don't
> accept this as a fundamental principle.
Given that both good and bad guys benefit from increased efficiencies,
do you now see why it's a fundamental principle?
> The ideal case is
> that the key be changed for each and every transaction. This is
> computationally infeasible today. It may not be tomorrow. Maintaining
> state and using the same key more than once is a necessary compromise
> given the limitations of today's hardware and algorithms
This is partly right and partly wrong. Right, in that we want
time-limited session keys; using your login password on every
transaction is a mistake, and no real security system (cf. "all modern
browsers" :) does things that way. Wrong, because of the efficiency
issue described above.
> just as
> exchanging the encrypted password with each transaction as done in
> digest authentication is a necessary and useful compromise between the
> benefits of REST and the principles of good security.
Digest auth does not exchange encrypted passwords; it sends a hash of
the password and some plaintext. This is susceptible to dictionary
attacks (i.e., brute-force password guessing): unless you use SSL
whenever the digest challenge is issued, every exchange the adversary
observes gives it more material for mounting such an attack against you.
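
For the curious, here is what that attack looks like against the RFC
2617 computation (the no-qop variant, for brevity); the sniffed values
and the word list are, of course, made up:

    import hashlib

    def md5hex(s):
        return hashlib.md5(s.encode()).hexdigest()

    def digest_response(user, realm, password, nonce, method, uri):
        # RFC 2617: response = MD5(MD5(A1) : nonce : MD5(A2))
        ha1 = md5hex("%s:%s:%s" % (user, realm, password))
        ha2 = md5hex("%s:%s" % (method, uri))
        return md5hex("%s:%s:%s" % (ha1, nonce, ha2))

    # Sniffed from one unencrypted exchange (hypothetical values):
    user, realm, nonce = "alice", "books", "dcd98b7102dd2f0e"
    method, uri = "GET", "/order"
    captured = digest_response(user, realm, "hunter2", nonce, method, uri)

    # Offline guessing: no further contact with the server needed.
    for guess in ["password", "letmein", "hunter2", "secret"]:
        if digest_response(user, realm, guess, nonce, method, uri) == captured:
            print("password is", guess)
            break

Every input to the hash except the password crosses the wire in the
clear, which is why each observed exchange feeds the attack.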
It is a compromise and it is necessary, but it is not useful. As the
authors themselves put it, "it is better than nothing." As web services
exchange real value or real liability (e.g., your doctor's office
sending a prescription renewal to the pharmacy closest to your conference
hotel), we will need much better than that, and the compromise will have
to tip toward security and away from REST.
/r$
--
Rich Salz, Chief Security Architect
DataPower Technology http://www.datapower.com
XS40 XML Security Gateway http://www.datapower.com/products/xs40.html
XML Security Overview http://www.datapower.com/xmldev/xmlsecurity.html