
Re: How could RDDL be distributed?



On Tue, Jan 16, 2001 at 04:53:06PM +0000, Miles Sabin wrote:
> Tim Bray wrote,
> > Miles Sabin wrote:
> > > Actually I think it's simply two different problems which 
> > > might have related solutions,
> > >
> > > 1. Allow for local overriding of authoritative resources.
> > > 
> > > 2. Allow for distribution and replication of authoritative
> > > resources.
> >
> > Neither of which are specific to RDDL.  I assume everyone
> > agrees with this.  In the general case, these are just 
> > consequences of dealing with URIs, n'est-ce pas? -T
> 
> Agreed that it's not specific to RDDL. And probably everyone
> agrees on that.
> 
> But this isn't the general URI case. 

Actually, it really is. In the IETF these issues have come up
in _many_ different situations. The issue of "allow for local
overriding of authoritative resources" is called the "appropriate
copy" problem in the academic world (it's also known as the
"Harvard problem" because the Harvard library is the one that
bitched about it so much). In their case they want to resolve
a URI and have it redirected to a local appropriate copy using
academic rules about which versions/translations of the document
are valid for academic research ("for this book, is edition 2
just as good as edition 1?"). The caching and replication folks
want to do this for enhanced copy protection and for content
enhancement by localizing parts of the resource (e.g., think of
a universal translator for web pages).
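
To make the "appropriate copy" idea concrete, here's a rough sketch
in Python of that kind of redirection. The URIs, file paths, and the
substitution table are all invented for illustration; no real resolver
works exactly this way:

    # Hypothetical sketch of "appropriate copy" resolution: check local
    # holdings and local substitution rules before going to the network.
    LOCAL_COPIES = {
        # URIs for which this site holds its own copy
        "http://example.org/books/widgets/edition-2": "/mirror/widgets-ed2.xml",
    }

    ACCEPTABLE_SUBSTITUTES = {
        # local rule: for this work, edition 2 is just as good as edition 1
        "http://example.org/books/widgets/edition-1":
            ["http://example.org/books/widgets/edition-2"],
    }

    def appropriate_copy(uri):
        """Return a local path if local rules say we hold a valid copy,
        otherwise the authoritative URI itself."""
        if uri in LOCAL_COPIES:
            return LOCAL_COPIES[uri]
        for substitute in ACCEPTABLE_SUBSTITUTES.get(uri, []):
            if substitute in LOCAL_COPIES:
                return LOCAL_COPIES[substitute]
        return uri  # no appropriate local copy; fetch from the source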

The second case is the generalized authoritative caching and
replication problem that URI Resolution was designed for.
The lower-layer folks have been screaming for sane caching/
replication from the application layer for almost a decade now.


> We have a very specific
> looming issue (that's my hunch anyway) of large chunks of web
> infrastructure depending (perhaps unwisely) on being able to
> retrieve resources on the ends of particular well-known URIs on
> a regular basis ... a lot of them hosted by the W3C, a lot of
> them hosted elsewhere. I predict server meltdown.

Yep....

> You could argue that people who build XML applications 
> _shouldn't_ fetch a fresh copy of the corresponding XML 
> DTD/Schema every time they parse a fresh document instance. And 
> you'd be right, but that won't stop people doing it.

As has already been illustrated by many of the parsers out there...
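
For what it's worth, the hooks to avoid that are usually already
there. Here's a rough sketch of a caching entity resolver using
Python's xml.sax interface; the cache directory, the keying of the
cache on the system identifier, and the fetch-once policy are
assumptions for illustration, not something any particular parser
ships with:

    # Sketch of a caching EntityResolver so a parser stops refetching a
    # well-known DTD on every parse. Cache location and keying scheme
    # are invented for illustration.
    import hashlib
    import os
    import urllib.request
    import xml.sax
    from xml.sax.handler import EntityResolver, feature_external_ges
    from xml.sax.xmlreader import InputSource

    CACHE_DIR = "/var/cache/dtds"   # hypothetical local cache

    class CachingResolver(EntityResolver):
        def resolveEntity(self, publicId, systemId):
            os.makedirs(CACHE_DIR, exist_ok=True)
            local = os.path.join(
                CACHE_DIR, hashlib.sha1(systemId.encode()).hexdigest())
            if not os.path.exists(local):
                # first reference only: fetch once from the authoritative URI
                with urllib.request.urlopen(systemId) as remote, \
                     open(local, "wb") as out:
                    out.write(remote.read())
            source = InputSource(systemId)
            source.setByteStream(open(local, "rb"))
            return source

    parser = xml.sax.make_parser()
    parser.setFeature(feature_external_ges, True)  # process external DTDs
    parser.setEntityResolver(CachingResolver())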

> That's my justification for a new protocol. But I think that
> there's also a very close connection with some of the areas we've
> been discussing here wrt RDDL and xmlcatalog. Both allow for
> local overriding via what is to all intents and purposes a
> local cache. I suggest we at least look at whether there's
> enough similarity between the two scenarios to make it worth
> coming up with a uniform solution.

Yep....
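
Just to picture what a uniform layer might look like, here's a rough
sketch of a single local-override-plus-cache lookup that both a
DTD/Schema fetch and an RDDL (namespace URI) fetch could be routed
through. The catalog format, lookup order, and paths are invented;
this isn't the actual xmlcatalog or RDDL machinery:

    # Invented sketch of one local-override layer shared by DTD/Schema
    # and RDDL retrievals: local catalog first, then a shared cache,
    # then the network.
    import os
    import urllib.request

    CATALOG = {
        # well-known URI -> locally controlled copy (the local override)
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd":
            "/etc/xml/xhtml1-strict.dtd",
        "http://www.rddl.org/": "/etc/xml/rddl-directory.xml",
    }

    CACHE_DIR = "/var/cache/xml-resources"   # hypothetical shared cache

    def resolve(uri):
        """Local override first, then shared cache, then the network."""
        if uri in CATALOG:
            return open(CATALOG[uri], "rb")
        cached = os.path.join(CACHE_DIR, uri.replace("/", "_"))
        if not os.path.exists(cached):
            os.makedirs(CACHE_DIR, exist_ok=True)
            with urllib.request.urlopen(uri) as remote, \
                 open(cached, "wb") as out:
                out.write(remote.read())
        return open(cached, "rb")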

-MM

-- 
--------------------------------------------------------------------------------
Michael Mealling	|      Vote Libertarian!       | www.rwhois.net/michael
Sr. Research Engineer   |   www.ga.lp.org/gwinnett     | ICQ#:         14198821
Network Solutions	|          www.lp.org          |  michaelm@netsol.com