OASIS Mailing List Archives

RE: How could RDDL be distributed ?



Actually it is *the* business problem and 
shared by all businesses all the time everywhere. 
It is part of contract law.  Being that Harvard 
is the best business law school in the world 
(Yalies weep), that they "bitched" about it is 
not surprising.  It is known in other circles 
as the "record of authority" (the governing control).  
Only the IETF and some on these lists consider it a  
novel problem.  The legal community considers 
it a freshman course topic, sanity being... a legal term.

It has been repeatedly discussed on this list as 
part of the FPI/URI sys-ANY dilemma which the IETF 
compounds by conflating name, location and identity. 
TimBL's neat solution ain't "legal" until you attach 
a waiver to it that says "only authoritative in this system" 
because it "only works in this system".

The lawyers are better at law;  the "lower layer" 
guys are better at "systems".  Note the opening 
paragraph of Shannon's "A Mathematical Theory of Communication"
for the issue as resolved in the only way it can 
be:  abandon the pursuit of universal semantics, 
and engineer a means to choose within a system 
of means, not meaning.  Choice gives meaning.

People are repeating and completing the work of a 
Harvard lawyer here because someone discarded the 
requirements of hypertext.  Ten years later, we have
the same requirements and the same work.

Kirk's solution to the Kobayashi Maru wasn't 
genius; just original thinking.

Len 
http://www.mp3.com/LenBullard

Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h


-----Original Message-----
From: Michael Mealling [mailto:michael@bailey.dscga.com]
Sent: Tuesday, January 16, 2001 1:33 PM
To: Miles Sabin
Cc: xml-dev@lists.xml.org
Subject: Re: How could RDDL be distributed ?


On Tue, Jan 16, 2001 at 04:53:06PM +0000, Miles Sabin wrote:
> Tim Bray wrote,
> > Miles Sabin wrote:
> > > Actually I think it's simply two different problems which 
> > > might have related solutions,
> > >
> > > 1. Allow for local overriding of authoritative resources.
> > > 
> > > 2. Allow for distribution and replication of authoritative
> > > resources.
> >
> > Neither of which are specific to RDDL.  I assume everyone
> > agrees with this.  In the general case, these are just 
> > consequences of dealing with URIs, n'est-ce pas? -T
> 
> Agreed that it's not specific to RDDL. And probably everyone
> agrees on that.
> 
> But this isn't the general URI case. 

Actually, it really is. In the IETF these issues have come up
in _many_ different situations. The issue of "allow for local
overriding of authoritative resources" is called the "appropriate
copy" problem in the academic world (its also known as the
"Harvard problem" because the Harvard library is the one that
bitched about it so much). In their case they want to resolve
a URI and have it redirected to a local appropriate copy using academic
rules about what versions/translations of the document are 
valid for academic research ("for this book, is edition 2 just
as good as edition 1?").  The caching and replication folks
want to do this for enhanced copy protection and content
enhancement by localizing parts of the resource (i.e. think
of a universal translator for web pages).
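The "appropriate copy" redirection Michael describes can be sketched in a few lines. This is a hypothetical illustration only, not any real resolution protocol: the URIs and the policy table (`APPROPRIATE_COPIES`, `resolve`) are invented names. The point is simply that local policy is consulted before the authoritative URI is dereferenced.

```python
# Hypothetical sketch of "appropriate copy" resolution: before
# dereferencing an authoritative URI, consult a local policy table and
# rewrite the request to a locally appropriate copy. All URIs here are
# invented for illustration.

# Local policy: authoritative URI -> locally appropriate copy.
APPROPRIATE_COPIES = {
    "http://authority.example/editions/book": "file:///local/mirror/book-ed2",
}

def resolve(uri: str) -> str:
    """Return the locally appropriate copy if policy names one,
    otherwise fall through to the authoritative URI itself."""
    return APPROPRIATE_COPIES.get(uri, uri)
```

In a real deployment the table would encode the academic rules Michael mentions (which editions and translations count as equivalent), but the shape of the decision is the same: local policy first, authoritative URI as the fallback.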

The second case is the generalized authoritative caching
and replication problem that URI Resolution was designed for.
The lower layer folks have been screaming for sane caching/
replication from the application layer for almost a decade now.


> We have a very specific
> looming issue (that's my hunch anyway) of large chunks of web
> infrastructure depending (perhaps unwisely) on being able to
> retrieve resources on the ends of particular well-known URIs on
> a regular basis ... a lot of them hosted by the W3C, a lot of
> them hosted elsewhere. I predict server meltdown.

Yep....

> You could argue that people who build XML applications 
> _shouldn't_ fetch a fresh copy of the corresponding XML 
> DTD/Schema every time they parse a fresh document instance. And 
> you'd be right, but that won't stop people doing it.

As has already been illustrated by many of the parsers out there...
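One way a parser can avoid re-fetching the DTD on every parse is a local entity resolver, in the spirit of the xmlcatalog approach discussed later in the thread. A minimal sketch using Python's standard-library SAX interface, assuming an in-memory catalog (the URI and DTD content are made up):

```python
# Sketch: intercept DTD references with a SAX EntityResolver so the
# external subset is served from a local catalog instead of fetched over
# the network on every parse. The system id and DTD text are invented.
import io
import xml.sax
import xml.sax.handler
import xml.sax.xmlreader

class LocalCatalogResolver(xml.sax.handler.EntityResolver):
    """Map well-known system identifiers to locally cached DTD text."""
    CATALOG = {
        "http://example.org/doc.dtd": "<!ELEMENT doc (#PCDATA)>",
    }

    def __init__(self):
        self.hits = []  # system ids we intercepted, for inspection

    def resolveEntity(self, publicId, systemId):
        self.hits.append(systemId)
        source = xml.sax.xmlreader.InputSource(systemId)
        # Serve the cached copy; an unknown id gets an empty subset here.
        source.setCharacterStream(io.StringIO(self.CATALOG.get(systemId, "")))
        return source

parser = xml.sax.make_parser()
resolver = LocalCatalogResolver()
parser.setEntityResolver(resolver)
# Ask the parser to process external general entities (the DTD) at all.
parser.setFeature(xml.sax.handler.feature_external_ges, True)
parser.setContentHandler(xml.sax.handler.ContentHandler())

doc = ('<?xml version="1.0"?>'
       '<!DOCTYPE doc SYSTEM "http://example.org/doc.dtd">'
       '<doc>local catalogs beat repeated fetches</doc>')
parser.parse(io.StringIO(doc))
```

After the parse, `resolver.hits` records the DTD reference that was answered locally; nothing went to the network. A real catalog would map public/system identifiers to files on disk rather than strings.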

> That's my justification for a new protocol. But I think that
> there's also a very close connection with some of the areas we've
> been discussing here wrt RDDL and xmlcatalog. Both allow for
> local overriding via what is to all intents and purposes a
> local cache. I suggest we at least look at whether there's
> enough similarity between the two scenarios to make it worth
> coming up with a uniform solution.

Yep....

-MM

-- 
--------------------------------------------------------------------------------
Michael Mealling        |      Vote Libertarian!       | www.rwhois.net/michael
Sr. Research Engineer   |   www.ga.lp.org/gwinnett     | ICQ#: 14198821
Network Solutions       |          www.lp.org          | michaelm@netsol.com