>Where did we get the idea that just because a thing is
>accessible, that that gives us the moral right to access it,
>perhaps against the interests of its owner?
I stated my source: the Unix environment. You are misrepresenting what
I said. There is no such moral right; the normal obligation remains to
notify people of likely mistakes. If I leave my personal papers on the
shelves in a public library, you have the moral right to open them.
You do not have the moral right to take advantage of what is obviously
a mistake, once you discover it.
BTW, the attitude that a robot is just like a human user probably stems from
the Unix environment, too. Under Unix, people routinely scan the whole
filesystem in order to search for information. The Internet is very
Unix-minded; rather typical is AFS, a single global world-wide Unix-like
file system where everyone is supposed to hook up their own files.
>In another message, Reinier states his belief that if a user
>makes the mistake of exposing his home directory to the web,
>that we (as robot owners) can index anything we find there with
>impunity;
Yes; provided that the usual amount of care and politeness is
observed, and under the moral obligation to correct mistakes
once they are reported. You left that out in your summary.
>that the error is on the part of the web-master and
>not on the part of the robot's designer.
That's what I think.
>Let me see if I understand Reinier's point and can perhaps
>state it another way: If I leave my house unlocked, I have
>given my permission for any and all to come in and read my
>personal papers. Does this strike anyone else as somewhat
>absurd?
Yes. I regard a WWW site as a public exhibit (maybe in someone's
backyard), not as a person's private home.
>In our enthusiasm to become the cartographers of this new
>region of the information universe, do we not run the risk of
>violating the privacy of the indigenous peoples we find there?
Indigenous people gain access to the Internet, not the other way round.
(Except through malicious attacks and sloppy Webmasters.)
>I believe that this "-WE- are the most comprehensive index of
>cyberspace" mentality is very dangerous and suggests a kind of
>information vigilantism that I find personally distasteful.
Many people hold your views, many hold mine. I don't think there's
an easy solution. (Like you, I grew nervous when I found out
how much Altavista knows about me.)
Wouldn't it be possible for robots to generate email to the Webmaster if
no robots.txt was found, offering an example robots.txt file and a pointer
to relevant documentation? The robot might still start its indexing
process, provided that the Webmaster has a way to undo the results.
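The courtesy check proposed above could be sketched roughly as follows. This is a minimal illustration, not a real robot: the helper names, the example robots.txt contents, and the documentation URL are all assumptions for the sake of the sketch, and the message is only composed, not actually mailed.

```python
"""Sketch of a polite robot's pre-crawl check (hypothetical helpers).

If a site has no /robots.txt, draft a courtesy email to the Webmaster
with an example robots.txt and a pointer to documentation, instead of
silently indexing. Addresses and URLs below are illustrative."""

import urllib.request
import urllib.error

# A minimal example robots.txt the robot could offer (assumed content:
# keep one directory off-limits until the Webmaster decides otherwise).
EXAMPLE_ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

# Assumed pointer to relevant documentation on the exclusion standard.
ROBOTS_DOC_URL = "http://info.webcrawler.com/mak/projects/robots/norobots.html"


def has_robots_txt(site):
    """Return True if http://<site>/robots.txt can be fetched."""
    try:
        with urllib.request.urlopen("http://%s/robots.txt" % site, timeout=10):
            return True
    except (urllib.error.URLError, OSError):
        return False


def draft_webmaster_notice(site):
    """Compose (not send) the courtesy message for a site lacking robots.txt."""
    return (
        "To: webmaster@%s\n"
        "Subject: no robots.txt found on %s\n\n"
        "Our robot found no /robots.txt on your site. Here is an example:\n\n"
        "%s\n"
        "Documentation: %s\n"
        % (site, site, EXAMPLE_ROBOTS_TXT, ROBOTS_DOC_URL)
    )
```

A robot along these lines would call `has_robots_txt()` before indexing and queue `draft_webmaster_notice()` for delivery when the check fails, while keeping its results easy to withdraw as suggested above.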
>It should be noted that there is a fairly strong case to be
>made that a robot threshing through a non-published web site is
>an illegal activity under the abuse of computing facilities
>statute in U.S. law.
Not so in the Netherlands, where entering a computer is only illegal if
a lock (protection) was broken to gain access.
-- Reinier Post reinpost@win.tue.nl a.k.a. <A HREF="http://www.win.tue.nl/win/cs/is/reinpost/">me</A>