- From: Paul Tchistopolskii <paul@qub.com>
- To: xml-dev <xml-dev@lists.xml.org>
- Date: Thu, 21 Dec 2000 18:30:18 -0800
----- Original Message -----
From: David Megginson <david@megginson.com>
> courseware; as a medievalist, however, I had had it drilled into me
> that low-quality/high-volume *always* wins (i.e. crowded school and
> chancery scripts over elegant monastic scripts, paper over parchment,
> printing over calligraphy, American culture over ... oops, sorry),
Yes. 'Worse is better' always wins. That's why, for searching and ranking,
<meta> and markup should be slaves of Google / screen scraping, not the
other way around. <meta> is for "high quality search". He-he.
My experience shows me that even though I like the idea of XSA,
it becomes too hard for me to maintain even a tiny xsa.xml...
Not to mention writing RDF / Topic Maps or something like that.
Should I write those huge RDF / Topic Maps constructions by hand?
I'm too lazy for that.
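For example, even a minimal hand-maintained RDF description of a single
page (a hypothetical sketch: made-up URL, just a few Dublin Core
properties) already looks like this:

  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:dc="http://purl.org/dc/elements/1.1/">
    <rdf:Description rdf:about="http://example.com/mypage.html">
      <dc:title>My page</dc:title>
      <dc:creator>Paul Tchistopolskii</dc:creator>
      <dc:subject>XML</dc:subject>
    </rdf:Description>
  </rdf:RDF>

And that is just three properties. A real ontology or Topic Map would be
many times bigger, and every line of it has to be kept in sync with the
page by hand.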
Do I understand correctly that the Semantic Web will provide me with
better search quality than Google does, but in return I should spend
more time maintaining my documents?
If this is the only real advantage of the Semantic Web, I think it is
obvious that your pattern applies here as well.
Google / screen scraping is 'worse is better'.
It is 'low-quality/high-volume'.
Following your rule, Google should win.
Rgds. Paul.
PS. If the SW is a layer on *top* of the searching layer, that could be
interesting, but I don't understand how 'ontology', Topic Maps,
RDF, Namespaces, URIs and other nice things could be layered
on top of Google / screen scraping.