The CyberCash server performed client authentication based on
the IP address of the TCP connection. Placing a proxy (transparent
or otherwise) between clients and that server will break
that authentication model. The fix was simply to configure Traffic
Server to pass CyberCash traffic onward without any attempt to
proxy or cache the content.
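To make the failure concrete, here is a minimal Python sketch of
this style of server-side check (hypothetical addresses and names,
not CyberCash's actual code). Trust is tied to whatever
getpeername() reports, so once a proxy sits in the path every
connection appears to come from the proxy's own address and the
check no longer identifies the real client:

    import socket

    # Hypothetical allow-list of trusted client addresses.
    TRUSTED_CLIENTS = {"192.0.2.10", "192.0.2.11"}

    def handle(conn: socket.socket) -> None:
        peer_ip, _port = conn.getpeername()
        # "Authentication" is only a check on the TCP peer address.
        # Behind a proxy, peer_ip is always the proxy's address, so
        # legitimate clients fail the check (or everything the
        # proxy forwards passes it).
        if peer_ip in TRUSTED_CLIENTS:
            conn.sendall(b"200 OK\r\n")
        else:
            conn.sendall(b"403 Forbidden\r\n")
        conn.close()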
But, as you point out, this approach requires that each server
using this authentication model be identified, then corrected on
the cache server on a case-by-case basis. While such servers are,
at this point, a rare occurrence, it does happen, and developers
should be free to use this technology if it meets their needs
without fear of having their clients suddenly disconnected from
them by their clients' ISPs.
The second example was of a broken keepalive implementation in
an extremely early Netscape proxy cache. The Netscape proxy
incorrectly propagated proxy-keepalive protocol elements even
though it could not actually support keepalive. The fix was to
configure Traffic Server not to support keepalive connections
from that client. Afterwards, there were no further problems.
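For reference, keepalive across a proxy hinges on hop-by-hop
headers: Connection (and the nonstandard Netscape-era
Proxy-Connection) apply to a single TCP hop only and must be
stripped before forwarding. A rough Python sketch of the
filtering a well-behaved proxy performs, using the header list
from RFC 2616, section 13.5.1:

    # Hop-by-hop headers apply to one connection only and must not
    # be forwarded by a proxy (RFC 2616, section 13.5.1).
    # Proxy-Connection is nonstandard but has the same intent.
    HOP_BY_HOP = {
        "connection", "proxy-connection", "keep-alive", "te",
        "trailers", "transfer-encoding", "upgrade",
        "proxy-authenticate", "proxy-authorization",
    }

    def strip_hop_by_hop(headers):
        # Headers named in the Connection header are also hop-by-hop.
        named = {t.strip().lower()
                 for t in headers.get("Connection", "").split(",")
                 if t.strip()}
        return {name: value for name, value in headers.items()
                if name.lower() not in HOP_BY_HOP
                and name.lower() not in named}

A proxy that skips this step advertises keepalive it cannot
honor, which is exactly the failure described above.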
Although this proxy is broken, and the owners should upgrade,
there is still the issue of choice.
These two problems are examples of legacy issues. IP-based
authentication is widely known to be a weak security measure.
The Netscape server in question was years old. As time goes
on, there will be a diminishing list of such anomalies to deal
with. Inktomi works closely with all of our customers to
diagnose any reported anomaly and configure a solution.
Weak or not, it is in use. As such, it is not appropriate for
an ISP to inflict this technology on all of its customers
without consent from those customers.
Beyond that, to scale this solution, Inktomi serves as a
clearinghouse of these anomaly lists for all of our customers.
A report from any one customer is validated and made available
to other Traffic Server installations to preempt any similar
problems.
While this is a positive customer-service step, it is not
by any means a complete resolution to the problem. It still
requires potentially a good deal of overhead to keep the list
of anomalies up to date on the server. Further, the cache server
inherently degrades performance for these sites.
Inktomi also conducts proactive audits both inside live Traffic
Servers and via the extensive "web crawling" we perform as part
of our search engine business. The anomalies discovered by these
mechanisms are similarly made available to our customers.
This is a much better and likely more thorough way to gather a
list of anomalies. However, given that, I'm surprised you didn't
catch the CyberCash issue before it became a problem.
And finally, there has been confusion concerning the
confidentiality and legal issues of transparent caching.
Transparent caching does not present any new threat to the
confidentiality of data or usage patterns. All of these issues
are already present in abundance in the absence of caching.
Individuals responsible for managing networks will have to weigh
the advantages of caching against these more nebulous
considerations. We, and many others looking towards the future
of a scalable Internet, are confident that caching is becoming an
integral part of the infrastructure, and provides many benefits to
hosters, ISPs, backbones and surfers alike.
Maybe, but its use should be voluntary on the part of both sides.
Otherwise, the cache implementors (not Inktomi, but the ISPs who
implement transparent caching for all their customers) are
providing a service different from the one the bulk of their
customers expect when they sign up.
Opinions expressed are mine and mine alone, and do not reflect the
views of my employer, were not cleared by my employer and were
submitted to the net from a machine not owned or operated by my
employer.