Len:
One problem, of course, is the dilemma that spawned expert systems work. The level of detail makes knowing everything about everything impossible (even putting aside quantum effects); but then I don't expect anyone in the SemWeb work claims that to be realistic. What I expect is what happens in expert systems: a particular domain with recognized and experienced subject matter experts works on the metadata for that domain. As more of these domains prove reliable, relationships among them can then be developed.
Danny:
Yes, I agree entirely. I think one of the big steps forward made by the RDF and related km work has been what is in effect a practical refactoring of the knowledge acquisition bottleneck that plagued early expert systems. The underlying modelling system has been decoupled from the domain vocabulary, which is in turn decoupled from the domain data. Once decomposed in this fashion, a properly structured (hence easier) way of obtaining the knowledge from the expert becomes possible. On top of that, cross-domain knowledge bases can be built reliably in the manner you suggest.
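The three-layer decoupling Danny describes can be sketched concretely. This is a minimal illustration in plain Python, with tuples standing in for RDF triples; the vocabulary names (`med:treats` and so on) are invented for the example, not a real schema.

```python
# Layer 1: the modelling system (RDF/RDFS terms, fixed across all domains)
RDF_TYPE = "rdf:type"
RDFS_DOMAIN = "rdfs:domain"

# Layer 2: a domain vocabulary, written once by subject matter experts
vocabulary = {
    ("med:treats", RDF_TYPE, "rdf:Property"),
    ("med:treats", RDFS_DOMAIN, "med:Drug"),
}

# Layer 3: domain data, produced independently of the vocabulary
data = {
    ("med:aspirin", RDF_TYPE, "med:Drug"),
    ("med:aspirin", "med:treats", "med:headache"),
}

# Because all three layers share only the triple model, they merge freely
graph = vocabulary | data
print(len(graph))  # 4
```

The point of the sketch is that the data author never needs to see the vocabulary author's file, and neither needs to touch the modelling layer: each can be acquired from a different expert, which is the refactoring of the acquisition bottleneck described above.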
Len:
These relationships can help root out superstitions, because assertions that are consistent within one domain can create inconsistencies in a related domain. That the machine can find this and present it to the researcher as a possible problem (it never sleeps) is very useful.
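Len's point can be shown in a minimal sketch: two domains, each internally consistent, conflict once merged under a cross-domain constraint. Triples are plain Python tuples, and all names and the single-value constraint are invented for illustration.

```python
# Each domain is internally consistent on its own
chemistry = {("ex:compoundX", "ex:classifiedAs", "ex:inert")}
pharmacology = {("ex:compoundX", "ex:classifiedAs", "ex:active")}

def find_conflicts(graph, functional_property):
    """Return subjects carrying more than one value for a property
    that the cross-domain mapping says should have exactly one."""
    values = {}
    for s, p, o in graph:
        if p == functional_property:
            values.setdefault(s, set()).add(o)
    return {s: v for s, v in values.items() if len(v) > 1}

# The inconsistency only appears when the domains are related
merged = chemistry | pharmacology
conflicts = find_conflicts(merged, "ex:classifiedAs")
print(conflicts)  # flags ex:compoundX with both classifications
```

The machine does not decide which domain is wrong; it surfaces the clash so the researcher can.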
Danny:
Yep. I reckon there's still a way to go yet, but this should get really interesting.
Len:
Tools make this easier to build. The basic SemWeb languages should not make this harder. For the SemWeb to pay off, a lot of grunt assertion-acquisition work has to be done and tested in each domain, then tested by creating relationships among domains. This takes a lot of time and effort. I would expect that once the foundation work on the languages and technologies is complete, it will take another ten years of hard use to get the kinds of benefits envisioned. That's OK. It would be worth it.
Danny:
Aye, in our lifetimes (deities willing) will do.
Len:
I realize that some money can be made up front in dedicated applications, but anyone looking for a fast buck in this field should take up another line of work.
Danny:
Heh - I'm afraid you're probably right there. I do, however, think that the technologies can make it easier to develop quite a range of apps that are mostly dedicated but have a layer of interop above syntax (so, e.g., a media player could use the MusicBrainz knowledge base).
Cheers,
Danny.

btw, I've nothing against XML or old-fashioned expert systems:
In the nearer term, having tools that can enable more structured discussion (such as the ClaiMaker app) may make it possible for current, largely human processes to be accelerated, so things like scientific advances can happen more rapidly. The web has enabled an awful lot more people to have access to an extremely broad range of material that was previously hard to come by; the SemWeb should make it easier to find and work with. It's rather prosaic, but my inability to quote the medical link piece of work is a perfect example of a problem that decent RDF-based indexing could solve.
One other bit of blue-sky thinking follows from noting that information from sensors in the real world can be fed directly into the web - nothing remarkable about that. But the data can be immediately available for analysis alongside the huge corpus that is the web - hypotheses can be checked in real time, as long as the data and those hypotheses are expressed in a machine-understandable fashion. Let's get the machines doing the work for a change.
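The closing idea, sensor data checked against a hypothesis as it arrives, might look like the following very rough sketch. The site names, the property, and the threshold are all invented for illustration; the only claim is that once a reading is machine-readable, the check is trivial to automate.

```python
def check_hypothesis(reading, threshold=30.0):
    """Hypothesis (invented): site temperature stays below the threshold."""
    subject, prop, value = reading
    if prop == "ex:temperatureC" and value >= threshold:
        return f"hypothesis violated at {subject}: {value}"
    return None

# A stream of readings, expressed as (subject, property, value) triples
stream = [
    ("ex:site1", "ex:temperatureC", 21.5),
    ("ex:site1", "ex:temperatureC", 34.0),
]

# Check each reading the moment it arrives
alerts = [a for r in stream if (a := check_hypothesis(r))]
print(alerts)  # one alert, for the 34.0 reading
```

The machine watches the stream; the researcher only sees the violations.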