That makes perfect sense. Where one can get vetted, reliable
metadata in machine-processable form, barring consistent superstition,
it can indeed speed up the process of knowledge acquisition. There is no
need to wait for the full vision to begin solving problems with the
technology, as Uche correctly states, but the big-picture SemWeb itself will take
years to come about.
One problem, of course, is the dilemma that spawned expert
systems work. The level of detail makes knowing everything about
everything impossible (even putting quantum effects aside); but then, no
one in SemWeb work claims that to be realistic. What I expect is something like
what happens in expert systems: a particular domain with recognized,
experienced subject matter experts works on the metadata for that
domain. As more of these domains prove reliable, relationships among
them can then be developed.
Those relationships can help root out superstitions, because assertions
that are consistent within one domain can create inconsistencies in a
related domain. That the machine (which never sleeps) can find this and present
it to the researcher as a possible problem is a real benefit, and good tooling should
make this easier to build. The basic SemWeb languages do
not make this harder. For the SemWeb to pay off, a lot of
grunt assertion-acquisition work has to be done and tested within each
domain, then tested by creating relationships among domains.
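The cross-domain checking described above can be sketched in a few lines. This is a toy model, not real RDF tooling: assertions are plain (subject, predicate, object) tuples, and every name in it (the two domain sets, the single-source rule) is invented for illustration.

```python
# Toy sketch of cross-domain consistency checking. Two domains each
# hold internally consistent assertions; a machine compares them and
# flags disagreements for a human to review.

# Assertions vetted within two separate domains (invented examples).
botany = {
    ("Taxol", "derived_from", "PacificYew"),
    ("PacificYew", "classified_as", "Conifer"),
}
pharmacology = {
    ("Taxol", "derived_from", "FungalCulture"),
}

def conflicting_sources(*domains):
    """Flag subjects whose 'derived_from' assertions disagree across domains."""
    sources = {}
    conflicts = []
    for domain in domains:
        for s, p, o in domain:
            if p == "derived_from":
                if s in sources and sources[s] != o:
                    conflicts.append((s, sources[s], o))
                sources.setdefault(s, o)
    return conflicts

# Each hit is presented as a *possible* problem, not an automatic error:
# both assertions might be right in context, and a researcher decides.
print(conflicting_sources(botany, pharmacology))
```

The point is only that the machine can do this comparison tirelessly across many domain pairs; the hard part remains the vetted assertions themselves.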
That takes a lot of time and effort. I would expect that once the
foundation work on the languages and technologies is complete, it
will take another ten years of hard use to get the kinds of
benefits envisioned. That's OK; it would be worth it. I
realize that some money can be made up front with
dedicated applications, but anyone looking for a fast buck
in this field
should take up another.
In the nearer term, having tools that can enable
more structured discussion (such as the ClaiMaker app) may make it
possible for current, largely human processes to be accelerated,
so things like scientific advances can happen more rapidly. The web has
given an awful lot more people access to an extremely broad range
of material that was previously hard to come by; the SemWeb should make
it easier to find and work with.
It's rather prosaic, but my inability to cite the medical-link piece of
work is a perfect example of a problem that decent RDF-based
indexing could solve.
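To make the indexing point concrete, here is a minimal sketch of the kind of metadata query that would solve the problem. The records, field names, and titles are all hypothetical stand-ins, not the actual paper in question.

```python
# Hypothetical metadata index: re-finding a half-remembered piece of
# work by the structured fields you *do* remember, instead of keyword
# guessing. Records and field names are invented for illustration.

papers = [
    {"title": "Hyperlink analysis of clinical literature",
     "topic": "medicine", "year": 1999},
    {"title": "Link structure of the web",
     "topic": "web", "year": 1998},
]

def find(records, **criteria):
    """Return records matching every supplied metadata field exactly."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

# "It was something medical, about links..." becomes a precise query.
print([m["title"] for m in find(papers, topic="medicine")])
```

A real deployment would use RDF triples and a query language over a shared vocabulary rather than ad hoc dictionaries, but the retrieval idea is the same.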
One other bit of bluish sky follows
from noting that information from sensors in the real world can be fed
directly into the web - nothing remarkable about that. But the data
can be immediately available for analysis alongside the huge corpus that is
the web - hypotheses can be checked in real time, as long as the data
and those hypotheses are expressed in a machine-understandable fashion.
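The real-time checking idea can be sketched as follows. Everything here (the stream format, the sensor name, the threshold) is a made-up illustration of the pattern, assuming both readings and the hypothesis share a machine-readable form.

```python
# Sketch: a hypothesis expressed as a machine-checkable predicate,
# evaluated against live sensor readings as they arrive.

def hypothesis(reading):
    """'River temperature never exceeds 20 C' -- True while upheld."""
    return reading["temp_c"] <= 20.0

def check_stream(readings):
    """Return the first reading, if any, that falsifies the hypothesis."""
    for r in readings:
        if not hypothesis(r):
            return r          # counter-evidence, flagged immediately
    return None               # hypothesis survives this batch

stream = [{"sensor": "river-3", "temp_c": 14.2},
          {"sensor": "river-3", "temp_c": 21.7}]
print(check_stream(stream))
```

The payoff is that the falsifying observation is surfaced the moment it arrives, rather than months later when someone rereads the logs.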
Let's get the machines doing the work for a change.