> /robots.htm - an HTML list of links that robots are encouraged to traverse
A plain text file would be much better suited, similar to the
existing robots.txt - reading plain text and adding it to the
stack of URLs to be processed is surely more effective than
handing HTML to the robot's parsing engine to pick apart.
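To illustrate: a robot could slurp such a file in a handful of lines.
A rough sketch (the one-URL-per-line format with '#' comments is my
guess, borrowing robots.txt conventions; the filename is made up):

    # Sketch: read a hypothetical plain-text link list
    # (assumed format: one URL per line, '#' starts a comment,
    # blank lines ignored -- analogous to robots.txt).

    def read_link_list(path, url_stack):
        with open(path) as f:
            for line in f:
                line = line.split('#', 1)[0].strip()
                if line:
                    url_stack.append(line)  # queue for the robot

    urls = []
    read_link_list('robots-links.txt', urls)
    print(urls)

No HTML parser, no reasoning engine - just read, strip, push.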
> [snip]
>
> Organization: organisation name
> Type: commercial/non-profit/educational etc.
> Admin: email of administrator
> Webmaster: email of Web administrator
> Postal: postal address
> ZIP: ZIP/postcode
> Country:
> Position: Lat/Long
> etc.
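For the record, a file of those fields would be trivial to parse,
so that's not my objection. A minimal sketch, assuming simple
"Key: value" lines (the filename is made up):

    # Sketch: parse a hypothetical site-details file of
    # "Key: value" lines, using the field names proposed above.

    def read_site_details(path):
        details = {}
        with open(path) as f:
            for line in f:
                if ':' in line:
                    key, value = line.split(':', 1)
                    details[key.strip()] = value.strip()
        return details

    info = read_site_details('site-details.txt')
    print(info.get('Admin'), info.get('Country'))

The problem isn't the format - it's getting anyone to serve the file.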
How are you going to get system administrators to implement all these
files? How many system administrators do you know who even know about
robots.txt? Assuming you want a large chunk of sites to adopt these
details, I'd propose they be built into the HTTP protocol somehow.
An "ADMIN" request, for example, could fetch the above details from
a site just as "/admin" on IRC grabs the admin details of a server
from the lines in its configuration.
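To make that concrete, here's roughly what issuing such a request
might look like. Note "ADMIN" is not a real HTTP method - a current
server will just answer with an error - so this is purely
illustrative:

    # Sketch: issue a hypothetical "ADMIN" request over a raw
    # socket. No server implements this method today; a real one
    # will reply with something like "501 Not Implemented".

    import socket

    def admin_request(host, port=80):
        s = socket.create_connection((host, port))
        s.sendall(b'ADMIN / HTTP/1.0\r\nHost: '
                  + host.encode() + b'\r\n\r\n')
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        s.close()
        return b''.join(chunks).decode('latin-1')

    print(admin_request('www.example.com'))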
If a space were made in a server's configuration or makefile for these
details, web administrators would be far more likely to implement them.
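Something like the following stanza in a server config would cost an
administrator a minute to fill in. These directives don't exist in any
current server; the syntax is just a guess at what it could look like:

    # Hypothetical config stanza -- not any real server's syntax
    AdminInfo {
        Organization  "Example Pty Ltd"
        Type          commercial
        Admin         root@example.com
        Webmaster     www@example.com
        Country       AU
    }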
catchya,
--
Kim Davies       |  "Belief is the death of intelligence" -Snog
kimba@it.com.au  |  http://www.it.com.au/~kimba/