OASIS Mailing List Archives
   Re: [xml-dev] HGRAB. Syndication. Google. Grey area.



> > So, what HGRAB actually does? It polls
> > the HTML pages ( once in a while,
> > no harm done to the load of original
> > website ). Then it places some part of the
> > content into HGRAB database (for future
> > searching). Then it provides the end-user
> > with some 'part' of the original news item
> > and with the URL to the original news source.
> > 
> > Google does *exactly* this (and also Google
> > provides a cached copy of the original content)
> > 
> > That means:
> > 
> > Either both HGRAB and Google should be sued,
> > because they both sell the content
> > *which does not belong to them*, or both
> > HGRAB and Google should be considered
> > 'just a service'.
> 
> 
> Have a look at http://www.google.com/robots.txt

I don't understand your point. Could you please
explain?

So, some websites may have a robots.txt
saying that certain pages should not be
indexed by robots.

But because HGRAB, for example, usually
polls only the home page of a website, and home
pages are almost never disallowed, they are all
open for polling anyway.
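For illustration, the standard robots.txt check looks something like the sketch below, using Python's urllib.robotparser (the rules and the "HGRAB" user-agent string here are made up for the example):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks one directory but not the home page.
rules = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The home page is allowed, so a poller like HGRAB would proceed...
print(parser.can_fetch("HGRAB", "http://example.com/"))           # True
# ...while the disallowed directory is off limits.
print(parser.can_fetch("HGRAB", "http://example.com/private/x"))  # False
```

A polite robot runs this check before every fetch; nothing in the protocol forces it to.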

Also, I'm not sure whether search engines
really respect robots.txt at all, but that's another
story.

Also, the interesting twist is that when a
robot encounters a website with *no*
robots.txt (and most sites have none),
the robot assumes it is *safe* to
'steal' the content.

Shouldn't it be the other way around: 'if there is
no robots.txt - get out of here'?
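That default-allow twist is easy to demonstrate: with no robots.txt, the parser ends up with an empty rule set, and an empty rule set permits everything (a minimal sketch; the "SomeRobot" user-agent name is made up):

```python
from urllib.robotparser import RobotFileParser

# Simulate a site with no robots.txt: the parser sees an empty rule set.
parser = RobotFileParser()
parser.parse([])  # no rules at all

# With nothing disallowed, the robot treats every URL as fair game.
print(parser.can_fetch("SomeRobot", "http://example.com/any/page"))  # True
```

So the burden is on the site owner to opt *out*; silence is read as consent.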

I think this is really a gray area, and
robots.txt is not a solution.
At the moment, at least.

What am I missing?

Rgds. Paul.


Copyright 2001 XML.org. This site is hosted by OASIS