RE: [xml-dev] Statistical vs "semantic web" approaches to making sense of



On Thu, 24 Apr 2003, Danny Ayers wrote:

> By coincidence I've been writing up a semi-refutation of Cory's 'metacrap'
> piece, hopefully ready in a day or so.

i'd be interested to see that. my initial reaction to this piece was 'crap'!
can't help it, but i think it should be obvious that all his arguments
apply equally well to data as they do to metadata.

there seems to be an underlying view that anything done by a machine -
set-top boxes for TV stats or google for metadata - is almost by
definition better and more reliable than anything produced by a human.

"Google can derive statistics about the number of Web-authors who believe
that that page is important enough to link to, and hence make extremely
reliable guesses about how reputable the information on that page is."
really? my friend freddy's got a website with links to the most unreliable
sites on the web. how does that affect google's 'reputability' scoring?

maybe the number of links to a page is a measure of exactly that and
nothing else - but do feel free to make any assumptions you want about why
those links are there. personally i don't tend to see google's search
results as a reputability grading at all, and i wouldn't recommend that
anyone does ("it's true, i found it on google!").

ultimately, if you care about the information that you publish, then you
care about the metainformation. and yes, it's generally much easier to
find web pages that have meaningful titles.

regards,

/m

Martin Klang
http://www.o-xml.org - the object-oriented XML programming language





 
