OASIS Mailing List Archives



Re: [xml-dev] Re: Javascript and plugging holes

On Dec 10, 2010, at 06:33, Stephen Green wrote:

> OK, I'm no expert on all this but from what I've seen it looks to
> me like Javascript is allowed to do things (and jQuery makes use
> of this) which other technologies like CSS are sometimes
> forbidden to do.

Concrete examples would help here.

> It's an out-of-date example but I remember some
> issues over CSS pseudo elements which were, I thought, not
> allowed in some browsers because, I assume, they caused
> security concerns with allowing elements to be inserted from a
> file outside the webpage (even if it is in the same domain).

I'm not aware of any browser banning a pseudo-element on security grounds. Can you please elaborate?

Some browsers recently restricted which properties are allowed to be changed by the navigation history-related pseudo-classes, but JavaScript isn't permitted to sniff history either, so that doesn't count as an example of something that CSS isn't allowed to do but JS is.

> Even
> now I get problems with jQuery if changing elements where any
> CSS processing is invoked, and I've taken this to be because the
> CSS processor has to be very cautious, but the same changes
> are allowed without any problems when no CSS is involved. I'm
> thinking that CSS implementations in browsers are more cautious
> because someone sat down and said what was risky and what
> wasn't, while Javascript doesn't seem to me to have yet been
> put to the same scrutiny.

That doesn't seem true to me.

> That might come later and break jQuery.
> Can anyone back this up with better facts/examples?

The only example I can come up with where CSS is subject to more restrictions than JS is cross-origin use. This is because CSS, by design, has a parsing model that recovers from errors, so if a larger non-CSS file has some CSS-looking bits, those bits may work as CSS, because the parser skips over the cruft before and after them. Thus, if a non-CSS file is maliciously used as a style sheet, some part of the file may end up treated as CSS in a way that's detectable, enabling limited information disclosure. JS, on the other hand, fails harder when it doesn't parse without error, so it's improbable that a non-JS file treated as a JS file could end up leaking information.

See http://scarybeastsecurity.blogspot.com/2009/12/generic-cross-browser-cross-domain.html
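The shape of the attack described in that post is roughly the following (host name and path are hypothetical; this is an illustration, not a working exploit):

```html
<!-- Attacker's page, visited by a user who is logged in to victim.example.
     The "style sheet" is really a private HTML page. A lenient CSS parser
     may salvage a {}-delimited fragment of it as a rule, and script on the
     attacker's page can then observe the effect of that rule, e.g. via
     getComputedStyle(), leaking private cross-origin data. -->
<link rel="stylesheet" href="https://victim.example/private/messages">
```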

> Declarative languages seem far
> safer because there is more control in the processor in how they
> achieve their end result: So it seems to me. I hope that is obvious.

Declarative languages aren't inherently safe; see the CSS attack description above. Also, the document() function makes XSLT powerful enough that it, too, needs to be subject to the Same Origin Policy in order to avoid information leakage.
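For instance, a transformation along these lines (URL hypothetical) could read an arbitrary resource and copy it into its output, which is why document() has to be subject to a same-origin check:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <!-- Without a same-origin check, this would exfiltrate the
         contents of an arbitrary URL into the transformed page. -->
    <xsl:copy-of select="document('https://intranet.example/secret.xml')"/>
  </xsl:template>
</xsl:stylesheet>
```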

> On the JSON, it still relies on newer browsers supporting native
> JSON parsing and on some recent safer parsers than the initial
> eval() which is considered unsafe.

eval() is unsafe if the input to eval() hasn't first been validated to be valid JSON-only. This validation can be done with a regular expression. JSON-P is as unsafe as eval().
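A sketch of that validate-then-eval pattern (the regular expression is the one from RFC 4627 and Crockford's json2.js; the function name is made up here):

```javascript
function parseJsonSafely(text) {
  // Pre-validate with the RFC 4627 regular expression: erase escape
  // sequences, then string/number/keyword tokens, then array openings;
  // if anything other than structural characters and whitespace remains,
  // the text is not pure JSON and must not reach eval().
  var ok = /^[\],:{}\s]*$/.test(
    text.replace(/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g, "@")
        .replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, "]")
        .replace(/(?:^|:|,)(?:\s*\[)+/g, ""));
  if (!ok) {
    throw new SyntaxError("Input is not JSON");
  }
  // Safe to evaluate now; prefer the native parser where it exists.
  return (typeof JSON === "object" && JSON.parse)
      ? JSON.parse(text)
      : eval("(" + text + ")");
}
```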

> I
> did a quick search on 'Javascript security' and JSON came up
> all over the place with warnings (especially about eval of course).

Well, it should be obvious that taking some text and running it as a program with the authority of your Origin is unsafe unless you know the text not to do bad stuff when run as a program.

On Dec 10, 2010, at 06:36, Stephen Green wrote:

> Oh and another example - remember those high profile exploits on
> Twitter where people inserted Javascript in their tweets and did
> all sorts of 'wonderful'/naughty things. Not every company has the
> kind of engineers around to plug such holes overnight like Twitter's
> had to do so I imagine at some point the W3C or the like (or browser
> vendors) coming under pressure to limit what the Javascript can do
> as more and more people use it for HTML5, etc.

The problem here is that Twitter served untrusted and improperly sanitized content from its Origin. If you send code for the browser to run, the browser runs it believing it comes from you. If someone gives you code and you include it on your page and send it to the browser, how is the browser supposed to know it's not from you? So if someone gives you random untrusted content, don't put it on your site without sanitizing it in such a way that it contains no executable parts. The only proper way to sanitize content is to sanitize an HTML data model against a white list and to serialize it using a serializer that's small enough to review for the absence of critical bugs. Twitter didn't do that step right.
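A minimal sketch of that whitelist-plus-small-serializer approach (the tree shape, names, and whitelists here are made up for illustration; real code would first parse the untrusted HTML into such a data model):

```javascript
// Whitelists: elements to keep, and per-element attributes to keep.
var ALLOWED_TAGS = { b: true, i: true, em: true, strong: true, a: true };
var ALLOWED_ATTRS = { a: { href: true } };

function escapeText(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;")
          .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
}

// Nodes are either strings (text) or { tag, attrs, children } objects.
function serializeSanitized(node) {
  if (typeof node === "string") return escapeText(node);
  if (!ALLOWED_TAGS[node.tag]) {
    // Disallowed element: drop the element, keep its sanitized children.
    return (node.children || []).map(serializeSanitized).join("");
  }
  var attrs = "";
  var allowed = ALLOWED_ATTRS[node.tag] || {};
  for (var name in (node.attrs || {})) {
    var value = node.attrs[name];
    // Drop attributes not on the whitelist, and javascript: URLs.
    if (allowed[name] && !/^\s*javascript:/i.test(value)) {
      attrs += " " + name + '="' + escapeText(value) + '"';
    }
  }
  return "<" + node.tag + attrs + ">" +
         (node.children || []).map(serializeSanitized).join("") +
         "</" + node.tag + ">";
}
```

The serializer is small enough to audit in one sitting, which is the point: every byte of markup reaching the browser was produced by reviewed code, never copied through from the untrusted input.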

Now, the more fundamental problem here is that the platform allows inline executable content, so the danger of smuggled-in executable code exists whenever you include content from a third party in your HTML. That is, the platform doesn't force JS to live in a separate file from HTML. This is similar to von Neumann machines being vulnerable to buffer overflow plus heap spraying attacks, because the same memory that holds data can be treated as code. In Firefox 4, a site can use Content Security Policy to turn off inline scripts as defense in depth, but you still need to sanitize content for CSP-unaware browsers.
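In the later standardized form of the policy syntax (Firefox 4 shipped it under the experimental X-Content-Security-Policy header name with slightly different directives), turning off inline scripts is an HTTP response header along these lines:

```
Content-Security-Policy: script-src 'self'
```

With this policy, inline <script> blocks and inline event handlers are ignored, and only external scripts loaded from the page's own origin run.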

Henri Sivonen

