   Re: [xml-dev] Re: Cookies at XML Europe 2004 -- Call for Participation


At 2:30 PM -0500 1/7/04, Rich Salz wrote:

>Here's a real-world example.  A big bank down in New York has a deal 
>with a small online bookseller whereby BB's employees get a discount. 
>The BB hands the seller the public key of its HR SAML server.  When 
>an employee wants to buy a book, they present a SAML assertion to 
>the seller.  The seller checks the signature, sees it came from BB 
>HR, and gives the employee the discount.  This is the best way to do 
>things.  BB is not going to give its employee list to the seller -- 
>there are privacy issues, out-of-sync issues, etc.  Requiring 
>real-time seller->BB "is XXX an employee" queries adds too much 
>overhead and fragility to the system.

Your real world example is based on SAML, a system which is not 
actually used in browsers today, and seems unlikely to be for at 
least the near future. You're moving the goal posts. I'm talking 
about browsers and web servers, and you're off into web services, a 
very different scenario. Can you provide a real world example of the 
problem using standard browsers?
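For concreteness, the flow being described amounts to something like 
the sketch below, a minimal Python illustration using the 
cryptography package. Plain RSA-SHA1 over raw bytes stands in for 
real SAML/XML-DSig, and every name in it is invented:

# Minimal sketch of the assertion flow, assuming the "cryptography"
# package. Plain RSA-SHA1 over raw bytes stands in for SAML/XML-DSig;
# all names here are invented for illustration.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# BB HR's key pair; the seller is handed only the public half, up front.
hr_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
seller_pinned_key = hr_key.public_key()

# HR issues a signed employment assertion to the employee.
assertion = b"subject=alice;issuer=BB-HR;attribute=employee"
signature = hr_key.sign(assertion, padding.PKCS1v15(), hashes.SHA1())

# The seller checks the signature against the pinned key. If this does
# not raise InvalidSignature, the discount applies; no per-purchase
# callback to BB is needed.
seller_pinned_key.verify(signature, assertion, padding.PKCS1v15(),
                         hashes.SHA1())
print("assertion verified; apply discount")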

>A very small SAML assertion is about 500 bytes.  A signature on that 
>message is between 1000 and 1500 bytes (depending on whether the 
>certificate is included or not), so we're up to around 2K.  If the 
>identity is encrypted -- perhaps the bank wants to ensure that only 
>the bookseller gets to see this employment assertion -- then add 
>another 100 or so bytes and increase the 2K by 33% (base64), and 
>we're approaching 3K.
>Following REST principles, that information has to be sent every 
>time the employee contacts the seller's website.  Is it scalable to 
>require 3K overhead on every single transaction?

Yes. 3K extra per page doesn't set off my alarm bells for desktop 
systems and servers, and I'm not willing to compromise the design of 
desktop systems and servers to fit the mostly theoretical needs of 
smaller devices.
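For reference, the quoted numbers do add up. A quick back-of-envelope 
in Python, using the estimates above rather than any measurement:

# Back-of-envelope check of the sizes quoted above; all inputs are the
# quoted estimates, not measurements.
assertion = 500           # small SAML assertion, bytes
signature = 1500          # signature block with certificate, upper estimate
identity_enc = 100        # extra bytes if the identity is encrypted
plaintext = assertion + signature + identity_enc   # about 2.1K
on_the_wire = plaintext * 4 // 3                   # base64 adds ~33%
print(on_the_wire, "bytes per request")            # 2800, approaching 3K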

>The examples you've been using are all limited because they show you 
>as a specific entity interacting with the web site. Many 
>transactions -- most, in the web services world -- are entities 
>*within one organization* acting with entities *within another 
>organization.*  For that kind of interaction, a TTP (trusted third 
>party) is a must for the 
>reasons I listed above: scalability, privacy, and data-liveness.

To the extent that your problems only involve web services, I claim 
that web services are broken. It is not OK to break HTTP and the HTTP 
architecture to force it to do things it was never meant to do. This 
has been a huge problem with web services since XML-RPC was invented 
as a way of sneaking through firewalls. There are things people want 
to do with web services for which HTTP is not an appropriate 
protocol. These services should use a different protocol, rather than 
trying to cut HTTP apart and reglue it back together again with 
pieces from other protocols to produce some ugly Franken-protocol 
that will never work as well as an appropriately designed protocol. 
Not all protocols have to be stateless. But HTTP is.

>The paragraphs above tried to justify the size and crypto overhead 
>-- why the data is what it is.  Now let's talk about processing 
>cost. On my 1GHz/384MB workstation, verifying a 2K file (small 
>data, small signature as described above) takes about 2 msec.  I 
>used xmlsec (www.xmlsec.org), which is built on OpenSSL and the 
>GNOME XML libraries, so I'm using standard and very fast software.  I 
>didn't test, but encryption is probably similar, since it has 
>another RSA operation, but instead of XML canonicalization feeding 
>into SHA1, it has base64 decode feeding into 3DES (triple-DES) or 
>AES.

2ms. I would have guessed more. That's why measuring is important. 
Again, this is not large enough that compromising the architecture 
in the name of increased speed seems wise to me.
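As a rough analog of that measurement, here is a hedged sketch that 
times a bare RSA-SHA1 verification over a 2K message with Python's 
cryptography package. It omits XML parsing and canonicalization, so 
treat the result as a floor on the full xmlsec cost:

# Rough analog of the quoted 2 msec: time one RSA-SHA1 verification
# over a 2K message using the "cryptography" package. No XML
# canonicalization or parsing is included.
import timeit
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
message = b"x" * 2048                    # ~2K payload, as in the quote
sig = key.sign(message, padding.PKCS1v15(), hashes.SHA1())
pub = key.public_key()

n = 1000
total = timeit.timeit(
    lambda: pub.verify(sig, message, padding.PKCS1v15(), hashes.SHA1()),
    number=n,
)
print(f"{total / n * 1000:.3f} ms per verification")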


>Perhaps more importantly, by having the client present the security 
>context, the server is now very susceptible to denial of service 
>attacks.  It's trivial for an adversary to make a realistic message 
>-- no crypto involved, just send random base64 bytes -- and send it 
>over and over again forcing the server to do lots of needless work. 
>This is not a "throw more horsepower at it" issue, it is an 
>architectural issue.  If I have faster hardware, so does my 
>adversary.  As long as the processing is asymmetric -- as long as 
>the server must validate the client on most transactions -- the 
>weakness is intrinsic to the design.

OK. I can see that. Is this not a problem in your architecture? If 
I'm understanding this attack correctly, using state would only 
remove the need to verify after the initial, verified transaction. 
Can the attacker not send just as many initial, malformed requests? 
How does maintaining state prevent this attack?
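To make the asymmetry concrete, here is a small sketch of it, again 
in Python with the cryptography package. handle_request() is a 
hypothetical server-side check, not anyone's real API:

# Sketch of the cost asymmetry: the attacker sends random base64 bytes
# at essentially zero cost; the server still pays for a decode plus an
# RSA verification attempt before it can reject them.
import base64
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

trusted = rsa.generate_private_key(public_exponent=65537, key_size=1024)
pub = trusted.public_key()

def handle_request(b64_sig: bytes, message: bytes) -> bool:
    try:
        sig = base64.b64decode(b64_sig)   # work the server must do anyway
        pub.verify(sig, message, padding.PKCS1v15(), hashes.SHA1())
        return True
    except (ValueError, InvalidSignature):
        return False                      # rejected, but cycles were spent

forged = base64.b64encode(os.urandom(128))    # attacker: no crypto at all
print(handle_request(forged, b"please apply the discount"))   # False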


>Given that both good and bad guys benefit from increased 
>efficiencies, do you now see why it's a fundamental principle?

I see you're thinking here more about DoS than key theft. For key 
theft, the good guys and the bad guys do not benefit equally from 
increased efficiencies. For DoS, they may, though you still have to 
convince me that DoS is not a problem with your architecture relative 
to the HTTP architecture.

In practice, DoS is a problem with HTTP today regardless of 
encryption, and DoS attacks are dealt with by cutting off attacking 
hosts at the router. That would remain an equally effective (or 
ineffective) response to DoS in the future. I'm not sure either 
encryption approach would have much practical impact on DoS attacks.


>It is a compromise and it is necessary, but it is not useful.  As 
>the authors themselves put it, "it is better than nothing."  As web 
>services exchange real value or real liability (e.g., your doctor's 
>office sending a prescription renewal to the pharmacy closest to your 
>conference hotel), we will need much better than that and the 
>compromise will have to tip toward security and away from REST.

Anything like this should be done with SSL.
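That is, plain HTTPS with server certificate verification, which 
every stack already provides. A minimal sketch with the Python 
standard library, where the host name is illustrative only:

# Minimal sketch of doing it over SSL: standard-library HTTPS with
# server certificate verification on.
import http.client
import ssl

ctx = ssl.create_default_context()    # verifies the certificate chain
conn = http.client.HTTPSConnection("www.example.com", context=ctx)
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)
conn.close()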

Even if you convince me that HTTP over SSL is not secure enough 
(which would require showing it's susceptible to being decrypted, not 
merely a denial of service), I am sadly still unconvinced that we 
will get better security than that, even if it's available. Compare 
the case today where the doctor's office calls the pharmacy on the 
phone. This goes in the clear, unencrypted, over lines that have been 
specifically designed to allow U.S. law enforcement agencies to 
listen in on any conversation by flipping a switch. Furthermore, the 
switches that control these taps may well be open to other bad actors 
as well. See http://www.pbs.org/cringely/pulpit/pulpit20030710.html


-- 

   Elliotte Rusty Harold
   elharo@metalab.unc.edu
   Effective XML (Addison-Wesley, 2003)
   http://www.cafeconleche.org/books/effectivexml            
   http://www.amazon.com/exec/obidos/ISBN%3D0321150406/ref%3Dnosim/cafeaulaitA 




 
