   Re: [xml-dev] Aggregated content, fact checking, PICS, Atom/RSS (wasRigg


My basic reaction is that the web at large is not a meritocracy, but a 
global marketplace of ideas and cheap thrills that is strongly resistant 
to control of any kind, including quality control.

Yes, what makes scientific literature (and science itself) "work" is 
peer review. But the fact that you are published in reviewed journals 
does not mean that every stray thought you might blog is authoritative. 
Linus Pauling won two Nobel prizes and was widely regarded as a 
scientific genius but was something of a quack on the subject of vitamin 
C. His pronouncements on the latter were much more widely known than his 
scientific achievements. Does vitamin C prevent the common cold? Nope. 
But a generation thought it did, based on Pauling's endorsement.

When the arena is political rather than scientific, peer review is a 
dream worthy of Quixote. Political ideologies are remarkably resistant 
to "facts", there are few repeatable experiments, etc. But I don't have 
to descend to politics. Consider the dismal science, which at its most 
objective studies the conformance of models to historical data and at 
its least promulgates a set of faith-based assertions, like 
"Unemployment rates below six percent are inflationary" and "Lowering 
taxes stimulates investment", which, like dot-com stocks, are only
valuable to the extent they are believed to be valuable. No amount of 
peer review can filter out the non-science in conventional wisdom.

Do you believe global warming is fact or fiction? There is a great deal 
of evidence on both sides and both are guilty of cherry-picking the 
evidence to support their preconceptions. Is Michael Crichton right or 
wrong on this subject? Should his ideas be filtered through the 
scientific establishment? Do you really believe they could be?

The brilliance of the web is that it is (mostly) uncensored, even by 
rationality. I don't expect this to change, and rather hope it won't.

Bob



M. David Peterson wrote:
> Hmmm... Interesting points, Bob. But what about the current legitimate
> and qualified publishers such as researchers, scientists, etc.? If they
> were given a place to extend their publishing into a blog-style
> interface that required credentials like those they have been building
> year after year through their education and subsequent publishing of
> research papers, could this be considered adequate enough to extend
> past the People's Choice Awards concept? To be part of such a system
> you would already have had to prove yourself to an existing base of
> experts within a specific field. Or, by way of developing technologies,
> this could easily become a place in which only those who are qualified
> by this same existing system could rate particular papers or research
> projects. If something like this were to be developed and handled by
> the same bodies that oversee the existing publishing system, I think it
> could easily become the de facto standard to look to first on a
> particular subject.
> 
> Thoughts?
> 
> 
> On Mon, 14 Mar 2005 23:46:13 -0800, Bob Foster <bob@objfac.com> wrote:
> 
>>What people actually read isn't a good indicator of accuracy and
>>quality, so you want to provide a way for people to vote on accuracy and
>>quality? People's Choice Awards for the web? Don't you think it's likely
>>that people will vote in favor of the things they actually read?
>>
>>Or maybe you think there's a chance the literate, critical few can stuff
>>the ballot box? Think again.
>>
>>The web is a real democracy, unfiltered by layers of delegates,
>>senators, members of parliament and snotty librarians. The audience
>>prefers bread and circuses. Any sort of polling scheme will reflect that.
>>
>>Bob Foster
>>
>>Ken North wrote:
>> >>>No matter how you model it, the very biggest thing would be to
>> >>>have a way for people to very easily add to meta data about how
>> >>>accurate a given article is because relying on the publisher of
>> >>>the article to do this for you has obvious flaws.
>> >>>[snip]
>> >>>But to my mind the usability and convenience of the mechanism by
>> >>>which "accuracy" meta data gets created and reviewed by an active
>> >>>audience is far more critical.
>> >
>> > Michael Gorman, President of the American Library Association,
>> > recently wrote a Los Angeles Times op-ed piece about digitizing
>> > books for Google:
>> >
>> > "Hailed as the ultimate example of information retrieval, Google
>> > is, in fact, the device that gives you thousands of "hits" (which
>> > may or may not be relevant) in no very useful order. Those
>> > characteristics are ignored and excused by those who think that
>> > Google is the creation of "God's mind," because it gives the
>> > searcher its heaps of irrelevance in nanoseconds. Speed is of the
>> > essence to the Google boosters, just as it is to consumers of fast
>> > "food," but, as with fast food, rubbish is rubbish, no matter how
>> > speedily it is delivered."
>> >
>> > In a followup piece, he also commented about the quality and
>> > accuracy of blogs:
>> > "A blog is a species of interactive electronic diary by means of
>> > which the unpublishable, untrammeled by editors or the rules of
>> > grammar, can communicate their thoughts via the web."
>> >
>> > Attention.XML, del.icio.us, XFN, and menow are interesting, but
>> > "who's reading what" isn't really a measure of accuracy and
>> > quality.
>> >
>> > Media organizations, although imperfect, have staff people to do
>> > fact checking. Academic papers are often peer reviewed. We know
>> > there's a difference in the credibility of those sources versus
>> > blogs and fringe web sites. So we need a solution for filtering
>> > out the junk.
>> >
>> > Search engines, aggregators and semantic web technologies would
>> > benefit by having some type of qualifiers for expressing accuracy
>> > and quality.
>> >
>> > "Was this article useful?" appears on many web articles. Perhaps
>> > we need to identify fact-checked articles and start carrying
>> > quality rating information in RSS and Atom feeds. An RSS channel
>> > can carry a PICS rating, but we need that information on a
>> > per-link basis, not per channel.
>> >
>> > Chris Armstrong wrote an interesting paper about PICS as a quality
>> > filter and a proposal for labeling the quality of information
>> > sources:
>> > http://www.ariadne.ac.uk/issue9/pics/
>> >
>> > CIQM developed a labeling system for bibliographic databases that
>> > included best practices or QA policies such as using a second
>> > indexer, using authority files for name validation, and so on:
>> > http://www.i-a-l.co.uk/db_Qual/label.gif
>> >
>> > With 8 billion web pages indexed by Google and 5.8 million feeds
>> > indexed by Feedster, a quality rating system is looking more
>> > attractive every day.





 
