Re: Domains and HTTP_HOST

John D. Pritchard (jdp@cs.columbia.edu)
Thu, 07 Nov 1996 17:50:31 -0500


Benjamin Franz said...

> No - it isn't *any* easier to handle different interfaces. As soon as the
> percentage of browsers passing 'Host: ' rises above 95% (it is currently
> around 85%) we will shift all of our client sites to non-IP virtual hosts.
> This will save us tons of IP addresses and reduce our startup overhead
> when configuring new servers since we won't have to reconfigure our
> interfaces each time we add a server. I have a suspicion that *many* other
> sites will do the same thing soon and that this will precipitate the
> 'Great Browser Upgrade' where the old (Netscape 1.2 and older, MSIE 2.0
> and older, Lynx 2.4 and older, and most other browsers) browsers are
> rapidly discarded as people find they can no longer browse many web sites.

yep, definitely will happen.
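for anyone following along, a non-IP virtual host request is just an ordinary HTTP/1.0 request with the Host: header added; an old server ignores the unfamiliar header, which is why this interoperates instead of breaking. a quick sketch (hostnames are made up for illustration):

```python
def build_request(host, path="/"):
    # An ordinary HTTP/1.0 request plus the Host: header.  An old
    # HTTP/1.0 server simply ignores the extra header; a non-IP
    # virtual host server uses it to pick the right site.
    return ("GET %s HTTP/1.0\r\n"
            "Host: %s\r\n"
            "\r\n" % (path, host))

# Two requests to the same IP address, told apart only by Host:
req_a = build_request("www.alpha.example")
req_b = build_request("www.beta.example")
```

same wire format, same address; only the Host: line differs.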


> The most important advantage of 'Host: ' is that it allows new non-IP
> server capable (but not HTTP/1.1 capable) browsers to interoperate with
> old 1.0 servers rather than breaking them by embedding the host in the
> passed URL. As for indexing - if the robots just pass the 'Host: ' header
> - it is no different than indexing the older unique IP servers. The 'is
> this the same server under a different name' problem is insoluble in the
> general case anyway. If you are determined to try and reduce that problem
> - your best approach is to checksum every page and use the checksums to
> 'fingerprint' sites. Of course - you will get a bunch of 'false idents'
> from mirror sites that way.

md5 would do for the checksumming.
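a minimal sketch of that fingerprinting idea, with made-up page bodies: hash each page with MD5 and compare the digest sets of two hosts. as noted above, mirror sites collide and produce 'false idents'.

```python
from hashlib import md5

def fingerprint(pages):
    # pages: iterable of page bodies (bytes); a site's fingerprint
    # is the set of per-page MD5 digests.
    return {md5(body).hexdigest() for body in pages}

# Hypothetical page bodies; site_b mirrors site_a exactly, so the
# two fingerprints match -- the 'false ident' case from above.
site_a = [b"<html>home</html>", b"<html>about</html>"]
site_b = [b"<html>home</html>", b"<html>about</html>"]

same = fingerprint(site_a) == fingerprint(site_b)
```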

it has to happen this way because tcp can't do it: a tcp connection only identifies an ip address and port, so the hostname has to be carried at the http layer.

-john