Re: Traffic Analysis and Namespace Dereferencing
- From: "Clark C. Evans" <cce@clarkevans.com>
- To: David Megginson <david@megginson.com>
- Date: Tue, 02 Jan 2001 14:24:41 -0500 (EST)
On Tue, 2 Jan 2001, David Megginson wrote:
> John Wilson writes:
> > Performing an HTTP GET on an arbitrary URL is not an innocuous
> > action.
>
> Very well put -- there are many dangers, including (as John points
> out) denial-of-service (intentional or unintentional) and maliciously
> altered schema information.
As for altered schema information, hopefully digital signatures
will help sort that problem out. Right?
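Just to make that idea concrete, here is a rough sketch in Python of what a
consumer might do if the namespace owner also published a detached signature
and a public key next to the schema. The key format, padding choice, and the
function names are all my own assumptions, not anything the spec requires,
and it uses the third-party "cryptography" package:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.hazmat.primitives.serialization import load_pem_public_key

    def schema_is_authentic(schema_bytes, signature_bytes, public_key_pem):
        """Return True only if the fetched schema verifies against the
        publisher's detached RSA signature (hypothetical arrangement)."""
        public_key = load_pem_public_key(public_key_pem)
        try:
            public_key.verify(
                signature_bytes,
                schema_bytes,
                padding.PKCS1v15(),
                hashes.SHA256(),
            )
            return True
        except InvalidSignature:
            return False

If the signature does not verify, the processor would simply refuse to use
the fetched schema, so a maliciously altered copy buys the attacker nothing.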
And, from my admittedly weak understanding of the issues, denial-of-service
attacks commonly exploit dynamic content, so the CPU or the server's disk
becomes the bottleneck. A catalogue, however, would be a relatively static
web page, right? Well-known and widely implemented caching techniques are
therefore readily available; for instance, the page can be kept in the
server's memory. A denial-of-service attack in this case would then have
to flood the server's pipe... which is a bit harder to do.
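To illustrate what I mean by keeping the page in memory: a toy sketch of a
server that reads a catalogue once at startup and serves it with long-lived
cache headers might look like the following. The file name, port, and
max-age value are just placeholders I made up:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    CATALOGUE_PATH = "catalogue.xml"      # hypothetical file name
    with open(CATALOGUE_PATH, "rb") as f:
        CATALOGUE = f.read()              # held in memory for the server's lifetime

    class CatalogueHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Every request is answered from memory; no disk or CPU-heavy
            # dynamic generation is involved.
            self.send_response(200)
            self.send_header("Content-Type", "application/xml")
            self.send_header("Content-Length", str(len(CATALOGUE)))
            self.send_header("Cache-Control", "public, max-age=86400")
            self.end_headers()
            self.wfile.write(CATALOGUE)

    if __name__ == "__main__":
        HTTPServer(("", 8080), CatalogueHandler).serve_forever()

The Cache-Control header also lets downstream proxies and clients keep their
own copies, which reduces the load on the origin server even further.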
> Even without technical or security problems, however, automatic
> dereferencing will make it possible to discover trade secrets, personal
> information, etc. simply through traffic analysis.
>
> Let's say that I have defined a popular Namespace for encoding
> peer-to-peer records:
>
> http://www.megginson.com/ns/p2p
>
> Now, imagine that IBM plans a big announcement next Thursday, but is
> keeping it heavily under wraps. I bring up my server log and find
> 10,000 hits for http://www.megginson.com/ns/p2p from a research domain
> at ibm.com. Hmm.
I am no expert in this field... however, I would think that this
would be IBM's own fault for not installing a caching proxy!
For most medium to large organizations, fetching a catalogue
should be a very quick LAN operation.
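As a rough sketch of what I have in mind (the proxy address below is
obviously made up), a client could simply route its namespace dereferences
through an organization-local caching proxy, so only the very first fetch
ever leaves the building and the origin server's log shows a single hit:

    import urllib.request

    PROXY = "http://cache.example.internal:3128"   # hypothetical LAN proxy

    # All fetches go through the local caching proxy instead of straight
    # to the namespace owner's server.
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
    )

    def fetch_namespace_document(namespace_uri):
        """Fetch a namespace document via the organization's caching proxy."""
        with opener.open(namespace_uri, timeout=10) as response:
            return response.read()

    # Only the first call in the whole organization should reach the origin
    # server; later calls are answered from the proxy's cache.
    # doc = fetch_namespace_document("http://www.megginson.com/ns/p2p")

With that in place, traffic analysis of the origin server's log tells you
almost nothing about how many people inside the organization are using
the namespace.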
Kind Regards,
Clark