Point 1: All you folks who flamed Robert for his agent without actually
addressing anything did, in fact, change the subject.
Point 2: Many folks here are running robots that serve purposes others
don't like.
Point 3: We aren't here to discuss purposes, just methods.
Point 4: What we _need_ is a _fine-grained_ method to block out things
we don't want (see the sketch after these points).
Point 5: The problem is identifying those things.
Point 6: The other problem is figuring out the method.
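Just to make Point 4 concrete, here is a rough sketch in Python. This is
purely my own illustration; the rule table and names are made up, not
something anyone here has proposed. The idea is that rules operate on
(agent, path) pairs rather than one big on/off switch:

    # Hypothetical fine-grained exclusion table keyed by
    # (agent pattern, path prefix) -- illustration only.
    import fnmatch

    RULES = [
        # (agent pattern, path prefix, allowed?)
        ("EvilBot/*",  "/",         False),  # ban one agent everywhere
        ("*",          "/private/", False),  # ban all agents from one tree
        ("IndexBot/*", "/papers/",  True),   # let an indexer at the papers
        ("*",          "/",         True),   # default: open
    ]

    def allowed(agent, path):
        """Return the verdict of the first rule matching both agent and path."""
        for pattern, prefix, ok in RULES:
            if fnmatch.fnmatch(agent, pattern) and path.startswith(prefix):
                return ok
        return True

    print(allowed("EvilBot/1.0", "/index.html"))    # False
    print(allowed("IndexBot/2.3", "/papers/x.ps"))  # True

The syntax isn't the point; the granularity is.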
The issue, as I see it, is that the Web is by design inherently open;
i.e., anyone can come to any site and anonymously obtain some
information. This is good for a great many reasons; however, when it
comes to robots, or anything that looks like a robot, it causes
problems.
So, it seems to me what we want is a method whereby the Web stays
open, but there is also some accountability for its use. There needs
to be some _strong_ method of identification (and User-Agents that can
be spoofed aren't strong by any means) and some _strong_ method of
access control.
Otherwise, we're just arguing here about definitions while random
bot-like beasts are developed and run, usually with disastrous
results.
-Erik
--
Erik Selberg                "I get by with a little help from my friends."
selberg@cs.washington.edu
http://www.cs.washington.edu/homes/selberg