There are at least two usages of range going around on this topic. The
mathematical range of http names is countably infinite. That's to say
http: URIs give us as many names as we could possibly use. The
prescriptive range is another matter; in that case the range of opinion
offered does not seem to be denumerable.
The main things to think about with names are longevity, simplified
management and integrations with other systems. So, what of the risks in
using urn: or http:?
* If I use http: maybe someday HTTP will be outmoded, or just disappear
and my identifiers will be stuck with a legacy protocol that nobody
supports. People will snigger at my crafty old http: identifiers when
they are using the shiny Resource Description Transfer Protocol.
* If I use urn: maybe someday I'll change my mind about dereferencing.
Then again maybe the world will never agree on how to resolve them and
I'll be stuck with inaccessible resources. My data will be a prisoner of
its naming scheme.
Is there really any great risk in either case? Some RDF or RDDL can
bidirectionally link between urn: and http: so that they point to the
same resource. The http: option is better today because it offers more
opportunities due to the deployed web machinery. I don't see that
inordinate costs are incurred using http: uber alles for names. There's
no need to agonize about a naming strategy, as there's no
disproportionate cost in changing your mind later.
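To make the bidirectional-linking point concrete, here is a minimal sketch of what such a link could look like. It emits two RDF statements (as N-Triples) tying a urn: name and an http: name to the same resource via owl:sameAs; the identifiers themselves are hypothetical, and owl:sameAs is just one plausible linking predicate among several.

```python
# Sketch: state, in both directions, that a urn: name and an http: name
# identify the same resource, using owl:sameAs as the linking predicate.
SAME_AS = "http://www.w3.org/2002/07/owl#sameAs"
urn_name = "urn:example:doc-1"            # hypothetical urn: identifier
http_name = "http://example.org/doc-1"    # hypothetical http: identifier

triples = [
    (urn_name, SAME_AS, http_name),
    (http_name, SAME_AS, urn_name),
]

# Print each triple in N-Triples syntax: <subject> <predicate> <object> .
for s, p, o in triples:
    print(f"<{s}> <{p}> <{o}> .")
```

With both statements published, software that encounters either name can discover the other, which is why changing your mind later need not be costly.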
> My current position is that in the absence of a compelling reason
> (i.e. that something really breaks) there is no reason to artificially
> restrict what an HTTP URI might identify. That is, while an HTTP
> transaction always returns a document, what that document is _about_
> might be _anything_. For example, "my homepage" refers to a document
> _about me_.
I agree. The onus is on the naming authority to keep the resource's
meaning consistent; the scheme is a secondary mechanism.
The REST style is different to RDF in the way meaning is assigned to
resources. In REST meaning has something to do with the set of
representations doled out (Roy Fielding has reiterated this point
recently). In RDF it has to do with an interpretation (or as Tom Passin
put it, a validity check). My understanding is that RDF and REST are not
at odds here. Far from being 'windows on reality', both systems use
memoization to determine states of affairs about resources.
If there is a real architectural issue here, as opposed to conflicting
philosophical views on how to model data, what is it?
Bill de hÓra
Some of William Kent's writings; worth a read.
While we're at it, there's a funny essay on types and classes up there too.