Maybe few or none now, but we might expect that internetworked appliances
and controls will become more commonplace over the next couple of centuries
as these technologies develop. It would be better to establish a sensible
framework now than to have to undo the damage done later by malicious
agents or bots. In any case, whatever kind of agent is launched, it should
be noted in what respects it represents the user's actions and authority,
and in what respects it reflects the agent or some other robot process.
Whether it's a million-dollar machine or a page broadcast over the network
at the cost of a few packets, the actions of agents and robots need to be
regulated to some degree, if only to delay the release of extremely
dangerous agents into the public arena. Robots.txt is a good start, and a
parallel agents.txt (sketched below) would become applicable as more of
these accessories develop. Also, as agent interaction is further defined,
we will probably
see more proprietary agent models that provide for consistency and tracking
of agent actions. As more positive identification becomes available for
these agents, we might even see them take over many of the mundane tasks
that are so annoying, like going to get your driver's license renewed. It
is important that prospective new users of these technologies are not
frightened away from adopting them by naivete or by crooks.
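
To give a concrete idea, here is a rough and entirely hypothetical sketch
of what an agents.txt record might look like, borrowing the field syntax of
robots.txt; none of these field names exist in any standard:

    # agents.txt -- hypothetical format, modeled on robots.txt records
    Agent: *                  # rules apply to any agent
    Disallow: /orders/        # agents may not act on a user's behalf here
    Identify: required        # agents must present positive identification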
At any rate, before releasing an agent, whether it's your mailbomb agent or
whatever, please consider how you would like that agent's effects to play
out on your own system. And just have your damn robot follow robots.txt.
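
For anyone wondering how, here is a minimal sketch of such a check in
Python; urllib.robotparser is the standard library's robots.txt parser,
while the host and user-agent string are placeholders of my own:

    import urllib.robotparser

    # Fetch and parse the site's robots.txt before crawling.
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("http://example.com/robots.txt")  # placeholder host
    rp.read()

    # Only fetch a page if the rules allow it for this user-agent.
    if rp.can_fetch("ExampleBot/1.0", "http://example.com/private/page.html"):
        print("allowed: fetch away")
    else:
        print("disallowed: the robot should skip this URL")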
Have a nice day,
Ross A. Finlayson
http://www.tomco.net/~raf/btf