> At 05:11 PM 11/30/96 -0500, John D. Pritchard wrote:
>
> >browsers would have to implement an authentication scheme that robots can't
> >spoof.
Browsers only have to implement an authentication scheme that is
hard enough to spoof that it becomes easier for robots to obtain
their own authentication or to follow some other RES.
Here's a low-tech one:
The authentication is:
User-CRC: <some-funky-number>
Where <some-funky-number> is an encoding of the User-Agent and IP
address.
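
Purely to make the idea concrete, here's a rough sketch in Python. The
CRC32-plus-shared-salt encoding, the salt value, and the helper names are
just assumptions for illustration --- nothing here pins down what the
real <some-funky-number> function would be:

  import zlib

  # Hypothetical shared value known to the cooperating vendors; just a
  # stand-in for whatever makes the function non-obvious.
  SHARED_SALT = b"known-to-the-big-boys"

  def user_crc(user_agent: str, ip: str) -> str:
      # Encode the User-Agent and IP address into <some-funky-number>.
      data = SHARED_SALT + user_agent.encode() + b"|" + ip.encode()
      return format(zlib.crc32(data) & 0xFFFFFFFF, "08x")

  def check_request(headers: dict, ip: str) -> bool:
      # Server side: recompute the value and compare it with the
      # User-CRC header the client sent.
      claimed = headers.get("User-CRC", "")
      return claimed == user_crc(headers.get("User-Agent", ""), ip)

  # A browser at 128.95.1.4 sending "Mozilla/3.0" would add the header
  #   User-CRC: user_crc("Mozilla/3.0", "128.95.1.4")
  # and a server (or proxy) would refuse requests where check_request()
  # comes back False.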
All the big boys know the encoding / decoding function, and people who
do some research will find out about it. What this does is elevate the
level of expertise required to create a robot --- no longer can folks
just whip one out. Granted, people who want to abuse the system
_still_ can; what we've done is make it harder for naive users to
botch it accidentally.
So Rob, I suspect this means MS's proxy server will still hammer
you. Sorry. :)
-Erik
--
Erik Selberg                              "I get by with a little help
selberg@cs.washington.edu                  from my friends."
http://www.cs.washington.edu/homes/selberg