Here's a pretty good example of how it all goes wrong
even when humans are making the selections:
http://www.perrspectives.com/
Some say Google is doing what the Semantic Web proposes,
but the mammals put their biases into their selections,
and because the selection process itself is centralized
(the critical one: the choice of choices),
those biases become system selectors. See Tim Bray's question at
ongoing.
http://tbray.org/ongoing/When/200x/2004/06/22/GoogleCensor
Tim isn't sure if it is stupid or evil. My assessment
is that it is an example of stupid becoming evil. Such
systems must provide feedback correction; yet we then
realize that, as Joshua Allen points out, this is a closed
system (Google), not an open one, and self-correction
is not assured. The owners of Google have allowed their
company and its services to become players in the politics
of the American electorate.
That is unacceptable but at least Larry and Sergey's
faces are on that decision and Tim can push back.
So now we envision a future where these same biases are
entering ontologies used for machine-to-machine communications.
This is the Golem problem at its clearest and most easily
understood. Speed-of-light advocacy that amplifies
biases in a faceless medium is dangerous in the extreme.
len
From: Danny Ayers [mailto:danny666@virgilio.it]
One final point is that no matter how good the trust and information
system, the actions that result may have little bearing on their truth
or validity. The suggestion of weapons of mass destruction is enough to
justify a war - the evidence is orthogonal.