Joshua Allen wrote:
> > Or use HTTP URIs in all the triples and publish them at another HTTP
> > It all works.
> Right. So we agree that using HTTP URIs in all of the triples doesn't
> have negative impact on the distribution of the triples, but it doesn't
> have positive impact either.
> >> a) take me to a site that tells me one person's opinion about
> >> predicate/subject "X"
> >> b) tell me what people have said about resource "Y" with regards to
> >> predicate "X"
> >> c) tell me what people are saying about predicate/subject "X"
> > Yes, well, Google seems to provide something akin to (b) and (c) for
> > URIs. You can do a perfectly good rdf:about=http://example.org/foo#bar
> And also for non-http URIs. Google is quite capable of searching and
> indexing terms that are not http: identifiers. (In fact, I bet only a
> small fraction of a percentage of the queries submitted to Google are
> based on http: identifiers)
Cautious agreement. Google relies on the Web infrastructure, which is largely based on HTTP. Remove HTTP and Google wouldn't be Google, would it?
> > a) this is a problem with the HTTP based Web that has not been solved
> Right. It is a "fact of life" in systems like this. In other words, by
> the very nature of the system, "a" is not the most useful information.
> I would further argue that "a" can never scale for *meta* data. But I
> was simply making the case that "b" and "c" are essential foundations of
> semantic web.
I agree that "b" and "c" are
a) essential for any real semantic web;
and also that they are
b) not there yet in any so-called "semantic web" system;
c) the hard problem that everyone is leaving for tomorrow.
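To make scenario "b" concrete -- "tell me what people have said about resource Y with regards to predicate X" -- here is a minimal sketch over triples that carry a provenance (source) component. All of the URIs and data below are hypothetical illustrations, not anyone's actual published statements:

```python
# Quads of (source, subject, predicate, object): the source records
# *who* asserted the triple, which is the provenance scenario "b" needs.
# All URIs here are made up for illustration.
quads = [
    ("http://alice.example/data", "http://example.org/foo#bar",
     "http://purl.org/dc/terms/creator", "Alice's claim"),
    ("http://bob.example/data", "http://example.org/foo#bar",
     "http://purl.org/dc/terms/creator", "Bob's claim"),
    ("http://bob.example/data", "http://example.org/other",
     "http://purl.org/dc/terms/creator", "Bob's other claim"),
]

def said_about(resource, predicate, quads):
    """Return (source, object) pairs: who said what about `resource`
    with regards to `predicate`."""
    return [(src, obj) for (src, subj, pred, obj) in quads
            if subj == resource and pred == predicate]

hits = said_about("http://example.org/foo#bar",
                  "http://purl.org/dc/terms/creator", quads)
# Two different sources have said something about foo#bar.
```

Note that nothing in the query itself depends on the URIs being http: -- the subject is just an opaque key -- which is the point made above; the http: scheme only matters when you want to go dereference a source.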
> It is only better for scenario "a", where information queries have
> strong affinity to the site owner. In fact, HTTP is much *worse* for
> information that does not have site/publisher affinity.
Perhaps, but in order to reference who said what, and whether you believe "who", you need provenance for the data to begin with. The HTTP source might not get you all the way there, but it is a start (assuming -- big assumption -- that the site can't be hacked) ... so there are real problems, and the "semantic web", as well as the "Semantic Web", are actually _research projects_ ... and since I am an academic, I have no problem admitting that.
>...I'm not saying USENET is the best solution, but it's a heck
> of a lot better than HTTP if we want to talk about "proven systems that
> are better than alternatives"
Fair enough. I guess all I am saying is that the so-called alternatives to HTTP (for naming), namely URNs, don't seem to solve the real problems (some of which you outline above), because as soon as you add a dereferencing mechanism, we are essentially back to square one. Perhaps USENET-type distributed systems would be a reasonable solution to a number of HTTP's reliability problems, but how do they solve the metaphysical debate over whether we are referencing an abstract concept (perhaps a "namespace") or a document that describes such a concept (e.g. a namespace)? Why aren't things like the "Content-Location" response header good enough? (I don't expect an answer to that, just using it as an example :-)
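For what it's worth, the Content-Location idea is just that a response can name the concrete representation separately from the resource you asked for, so a client could record both. A hypothetical sketch (the URIs and headers below are invented for illustration, not a real server's behaviour):

```python
# Hypothetical response to: GET http://example.org/ns
# (imagine the request URI names the abstract namespace itself)
request_uri = "http://example.org/ns"
response_headers = {
    "Content-Type": "application/rdf+xml",
    # Content-Location names the specific document that was served,
    # one proposed way to keep "the concept" distinct from
    # "a document describing the concept".
    "Content-Location": "http://example.org/ns/2002-03-01.rdf",
}

# A client can keep the two identifiers apart in its provenance record:
provenance = (request_uri, response_headers["Content-Location"])
```

Whether that distinction actually settles the metaphysics, rather than just giving it two names, is exactly the open question.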
In any case, suppose we agree with TimBL, i.e. http://www.w3.org/DesignIssues/HTTP-URI.html ... it seems to push the rathole onto what a fragment identifier identifies ... sigh. The more so-called solutions to these issues I see, the more questions are raised. At the moment I'm inclined to use the thing I know is broken (HTTP), rather than chuck it for something that I have no doubt is broken in some other way, where I'd just end up wasting a bunch of time figuring out how that other thing is equally broken ...
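Part of why the fragment rathole is so deep is mechanical: the fragment never reaches the server at all. It is stripped client-side before the request, so HTTP can only ever dereference the document, and "#bar" is interpreted by the client against whatever representation comes back. A tiny sketch (the URI is illustrative):

```python
from urllib.parse import urldefrag

# Split a URI-with-fragment into the part HTTP can dereference
# and the part only the client interprets.
doc, frag = urldefrag("http://example.org/foo#bar")
# doc  : the document URI, which is what goes over the wire
# frag : resolved by the client against the retrieved representation,
#        which is why "what does #bar identify?" stays open
```

So whether #bar names a section of a document or an abstract concept is, by construction, invisible to the protocol -- which is precisely where TimBL's note pushes the debate.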