> The key is not for each ontology to clutter up the other (as you
> might have to happen), but to establish the equivalence. OTOH, I
> think that is the hard part! One problem among many is how to know when
> two terms are in fact "equivalent", since that will usually imply a lot of
> background knowledge that may or may not be articulated or shared.
> In fact, establishing equivalence is essentially the database mapping
> problem in another disguise.
This 'clutter' angle is quite interesting. Walter's arguments about
localization of expertise are valid, but IMHO care should be taken not to
make assumptions that haven't really been tested. Most of the time the
important processing will be occurring within the specific domain using
applications that are focussed upon that domain, and sharing will not occur
at this level. But there may be occasions when the camera hobbyist needs the
expertise of the customs office, and vice versa. If the hobbyist is
travelling overseas, then they may well have need of the expertise contained
in the customs office's locally elaborated knowledge base.
Typical scenario: the hobbyist wants to know what the charges will be, so they
pass a "how much?" query to their agent, and that agent asks the customs
agent. The customs agent works it all out in its local domain and passes a
message back to the hobbyist's agent, which then notifies the hobbyist. Fine.
But what about this alternative: the hobbyist's agent tells the customs
agent "I'm into cameras" and the customs agent returns a bundle of expertise
containing all it knows about cameras. The hobbyist's agent works it out.
What is to stop this particular bit of expertise crossing domains? Where is
the line of separation between one knowledge base and the other? If they are
using a common set of languages that allow for consistent logical inference,
surely the inference will be valid in either domain.
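The two exchange patterns above can be sketched as follows. This is a minimal illustration, not a real agent framework: the agent classes, the duty-rule table, and the rates are all hypothetical stand-ins for whatever expertise the customs domain actually holds.

```python
# Hypothetical duty rules held in the customs agent's local knowledge base.
CUSTOMS_RULES = {
    "camera": 0.05,  # illustrative rates, not real tariffs
    "lens": 0.08,
}

class CustomsAgent:
    def ask_duty(self, item: str, value: float) -> float:
        """Pattern A: the query is answered inside the customs domain."""
        return value * CUSTOMS_RULES.get(item, 0.0)

    def export_expertise(self, items: list[str]) -> dict[str, float]:
        """Pattern B: return a bundle of expertise about a topic."""
        return {i: CUSTOMS_RULES[i] for i in items if i in CUSTOMS_RULES}

class HobbyistAgent:
    def __init__(self) -> None:
        self.local_rules: dict[str, float] = {}

    def learn(self, bundle: dict[str, float]) -> None:
        """Absorb expertise that has crossed the domain boundary."""
        self.local_rules.update(bundle)

    def duty_locally(self, item: str, value: float) -> float:
        """After the transfer, the same inference runs in the hobbyist's domain."""
        return value * self.local_rules.get(item, 0.0)

customs = CustomsAgent()
hobbyist = HobbyistAgent()

# Pattern A: pass the query across, get the answer back.
remote_answer = customs.ask_duty("camera", 1000.0)

# Pattern B: "I'm into cameras" -- receive the expertise, infer locally.
hobbyist.learn(customs.export_expertise(["camera", "lens"]))
local_answer = hobbyist.duty_locally("camera", 1000.0)

# With a common representation, the inference is valid in either domain.
assert remote_answer == local_answer
```

The closing assertion is the point of the passage: once both ends share a representation the rules can be evaluated against, it makes no difference which domain hosts the inference.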
I started by referring to clutter because although we can't assume unlimited
resources anywhere, physical resources are a lot cheaper than when we
started joining computers together. Having clutter around, such as knowledge
about *something else*, might not turn out to be a big deal. The information
bandwidth between machines is improving gradually in terms of bps, but the
knowledge bandwidth is expanded considerably when both ends can talk about
the data in a more sophisticated fashion. I may be conflating specialised
data with specialised processing and specialised knowledge here, but I do
think the knowledge bandwidth idea works when talking
application-application as well as machine-machine.