Each probe appears to look for an HTTP service and, if found,
walks the document tree on that port.
Ignoring, for the moment, that there are over 65,000 available
TCP ports to probe, and that opening that many connections would
seem rather excessive...
Does anyone else have a problem with this kind of behavior?
While I am cognizant of the use of the robots.txt file, it seems
more than a little antisocial to index materials that are, for
all intents and purposes, unpublished.
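For what little it is worth against crawlers that ignore the
convention, a robots.txt file at the server's document root can at
least ask compliant robots to stay out entirely. A minimal example:

    User-agent: *
    Disallow: /

Of course, this only restrains crawlers that bother to fetch and
honor the file in the first place, which is precisely the problem
with the probing behavior described above.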
I, for one, do not believe that merely running a server on a port
gives anyone permission to index the material I serve from it and
provide others with navigation to it. Many times, a client needs
to access the service in the same manner as a typical user, and
imposing passwords on the service is an unacceptable burden.
I'm looking for comments on this before I take it to a higher
level.
Thanks.