> Or use HTTP URIs in all the triples and publish them at another HTTP
> URI. It all works.
Right. So we agree that using HTTP URIs in all of the triples doesn't
have a negative impact on the distribution of the triples, but it
doesn't have a positive impact either.
>> a) take me to a site that tells me one person's opinion about
>> predicate/subject "X"
>> b) tell me what people have said about resource "Y" with regards to
>> predicate "X"
>> c) tell me what people are saying about predicate/subject "X"
> Yes, well, Google seems to provide something akin to (b) and (c) for
> HTTP URIs. You can do a perfectly good
> rdf:about=http://example.org/foo#bar.
And also for non-HTTP URIs. Google is quite capable of searching and
indexing terms that are not http: identifiers. (In fact, I bet only a
tiny fraction of a percent of the queries submitted to Google are based
on http: identifiers.)
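Nothing in RDF itself privileges http:, either -- a triple can be made
about a URI in any scheme. A minimal sketch using the Python rdflib
library (the news: message-id and the dc:title predicate here are
illustrative assumptions, not anything from this thread):

    from rdflib import Graph, Literal, URIRef

    g = Graph()
    # The subject is a news: URI (a USENET message-id), not an http:
    # URL; RDF is indifferent to the scheme.
    subject = URIRef("news:hypothetical-id@example.org")
    predicate = URIRef("http://purl.org/dc/elements/1.1/title")
    g.add((subject, predicate, Literal("A post identified without HTTP")))

    print(g.serialize(format="xml"))  # rdf:about holds the news: URI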
> a) this is a problem with the HTTP based Web that has not been solved by
Right. It is a "fact of life" in systems like this. In other words, by
the very nature of the system, "a" is not the most useful information.
I would further argue that "a" can never scale for *meta* data. But I
was simply making the case that "b" and "c" are essential foundations
of the semantic web.
To take the idea further, the better job we do at "b" and "c", the less
we become dependent on DNS for "a", which is the main point Rohit Khare
makes in: http://www.ics.uci.edu/~rohit/IEEE-L7-names-trust.html
Or put another way, problem "a" is a special case of the problems being
solved in "b" and "c". If you solve "b" and "c", you get "a" for free.
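To make "b" and "c" concrete, here is a toy sketch of an aggregator
that indexes statements gathered from many publishers. All the sources
and URIs are hypothetical; the point is only that queries (b) and (c)
fall out of the index, and (a) is just (b) filtered to one source:

    from collections import defaultdict

    # Each statement is (source, subject, predicate, object); the data
    # is made up for illustration.
    statements = [
        ("alice", "urn:x:Y", "urn:x:X", "good"),
        ("bob",   "urn:x:Y", "urn:x:X", "bad"),
        ("carol", "urn:x:Z", "urn:x:X", "fine"),
    ]

    by_subject_pred = defaultdict(list)  # serves query (b)
    by_pred = defaultdict(list)          # serves query (c)
    for src, s, p, o in statements:
        by_subject_pred[(s, p)].append((src, o))
        by_pred[p].append((src, s, o))

    # (b) what have people said about "Y" with regard to "X"?
    print(by_subject_pred[("urn:x:Y", "urn:x:X")])
    # (c) what are people saying about predicate "X"?
    print(by_pred["urn:x:X"])
    # (a) one person's opinion: (b) filtered by source.
    print([o for src, o in by_subject_pred[("urn:x:Y", "urn:x:X")]
           if src == "alice"])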
> any other system of its magnitude. HTTP with all its warts remains far
> better than any of its supposed alternatives.
It is only better for scenario "a", where information queries have
strong affinity to the site owner. In fact, HTTP is much *worse* for
information that does not have site/publisher affinity.
Solutions based on HTTP have not proven to be suitable replacements for
USENET, for example. There are thousands of web-based discussion boards
active, and they have some nice features. But they fail dismally in
every respect that the semantic web cares about. In fact, I would say
that the rise of web-based discussion boards has dealt a terrible blow
to interoperability and opened the door for proprietary lock-in. USENET
is a *global* set of subject identifiers, with a basic protocol set that
allows anyone to easily become a part of the global discussion space.
If you and ten thousand other people write tools that publish to USENET,
and I write a tool that gathers data from USENET, we all automatically
work together, without needing to rent a bunch of HTTP servers or learn
each other's proprietary interfaces.
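As a rough illustration of how little coordination that takes, here is
a sketch using Python's standard nntplib module (in the stdlib through
Python 3.12; the server and group names are made up). Any reader
pointed at any peered server sees the same globally identified
articles:

    import nntplib  # stdlib through Python 3.12

    # Hypothetical server and group. The message-ids are globally
    # unique, independent of whichever server you connect to.
    server = nntplib.NNTP("news.example.org")
    resp, count, first, last, name = server.group("comp.infosystems.www")
    resp, overviews = server.over((max(first, last - 10), last))
    for artnum, over in overviews:
        print(over["message-id"], over["subject"])
    server.quit()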
USENET is optimized for the "many publishers to a single query space"
scenario, and HTTP is optimized for the "one publisher per query space"
scenario. I'm not saying USENET is the best solution, but it's a heck
of a lot better than HTTP if we want to talk about "proven systems that
are better than alternatives".