I think it comes back to that definition of "robot" for me ... when you say
"robot owners", are you speaking specifically about "indexing robots that
traverse the URL space"? Remember, there are many agents/robots out there
that don't fall into that classification (and wouldn't be putting any more
stress on your system than a single human user.)
The question is how broadly this kind of standard should be implemented, and
what this group's recommendation will be to other developers about which
code should implement it and which should not.
Just to provide a case in point: one of our products has spider routines
built into it ... essentially reverse-verifying incoming hyperlinks to a
website by retrieving the pages that browsers pass as "referrers". One
retrieval, executed once, without human intervention. Robot, or not robot?
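For illustration only, here's a minimal sketch of that kind of one-shot
check in Python. The site URL, function name, and naive containment test
are placeholders of my own invention, not our actual product code:

import urllib.request

OUR_SITE = "http://www.radzone.org/gmd/"  # placeholder for the site being verified

def verify_referrer(referrer_url):
    """Fetch the referrer page once and check that it links back to us."""
    try:
        with urllib.request.urlopen(referrer_url, timeout=10) as resp:
            page = resp.read().decode("latin-1", errors="replace")
    except OSError:
        return False  # page unreachable; treat the link as unverified
    # Crude substring check; a real product would parse the HTML for <a href>.
    return OUR_SITE in page

# e.g. verify_referrer("http://example.com/links.html")

The point being: one GET request, no recursion, no traversal of anyone's
URL space.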
We're willing to support any standard that emerges and appears to apply to
our software, none of which fits any of the traditional definitions of
"robots" as discussed in this group (thus my constant harping for some kind
of definition).
Brian
-------- REPLY, Original message follows --------
From: Rob Hartill (robh@imdb.com)
Subject: Re: Suggestion to help robots and sites coexist a little better
[SNIP]
Will some of the robot owners out there please make some public commitment
to this idea and perhaps suggest the string identifier to be used.
[SNIP]
-------- REPLY, End of original message --------
--------------------------------------------------------------------
Brian Clark - Production Coordinator - bclark@radzone.org
GlobalMedia Design            http://www.radzone.org/gmd/
Email Infobot                 radbot@radzone.org
--------------------------------------------------------------------