how do they do that?

Otis Gospodnetic (otisg@panther.middlebury.edu)
Wed, 13 Nov 1996 11:32:45 -0500 (EST)


Hi,

Once a place like Infoseek or WebCrawler or Lycos has data about a substantial
number of URLs in its database, how does it re-check those URLs and at the
same time make sure it finds and indexes new URLs?

For example, if WebCrawler has already indexed URLs A, B, C, and D, does it go
through its database checking whether A has been updated (and likewise for the
other URLs), re-fetch and re-index the ones that have changed, look for any
new (never-seen-before) URLs in them, and so on?
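In Python, the model in my head looks roughly like this (just a sketch to
make the question concrete, not anyone's real implementation; it assumes the
server honors conditional GET via If-Modified-Since, and recheck() is a name
I made up):

  import urllib.request, urllib.error
  from html.parser import HTMLParser

  class LinkExtractor(HTMLParser):
      # Collects href values from anchor tags as candidate URLs.
      def __init__(self):
          super().__init__()
          self.links = []
      def handle_starttag(self, tag, attrs):
          if tag == "a":
              for name, value in attrs:
                  if name == "href" and value:
                      self.links.append(value)

  def recheck(url, last_fetched):
      # Ask for the page only if it changed since we last fetched it
      # (last_fetched is an HTTP date string saved from the prior crawl).
      req = urllib.request.Request(
          url, headers={"If-Modified-Since": last_fetched})
      try:
          with urllib.request.urlopen(req) as resp:
              html = resp.read().decode("latin-1", errors="replace")
      except urllib.error.HTTPError as e:
          if e.code == 304:   # 304 Not Modified: nothing to re-index
              return None
          raise
      # Page changed: re-index it and pull out links, some of which
      # may be never-seen-before URLs to add to the database.
      extractor = LinkExtractor()
      extractor.feed(html)
      return extractor.links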

That seems kind of limited to me....

Or does a robot have some permanent source of new URLs that it fetches and
indexes, crawling the web outward from there?
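Something like this, maybe (again only a sketch; "seeds.txt" and
fetch_links() are hypothetical stand-ins for a real seed list and a real
fetch-parse-index step):

  from collections import deque

  def crawl(seed_file, fetch_links, max_pages=1000):
      # The frontier starts from a permanent seed list; links discovered
      # along the way queue up behind the seeds.
      with open(seed_file) as f:
          frontier = deque(line.strip() for line in f if line.strip())
      seen = set(frontier)
      while frontier and max_pages > 0:
          url = frontier.popleft()
          max_pages -= 1
          for link in fetch_links(url):   # fetch, index, extract links
              if link not in seen:        # a never-seen-before URL
                  seen.add(link)
                  frontier.append(link)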

I'm confused :)

thanks,

Otis
_________________________________________________
This message was sent by the robots mailing list. To unsubscribe, send mail
to robots-request@webcrawler.com with the word "unsubscribe" in the body.
For more info see http://info.webcrawler.com/mak/projects/robots/robots.html