backbone transparent proxy / connection hijacking

> The CyberCash server performed client authentication based on
> the IP address of the TCP connection. Placing a proxy (transparent
> or otherwise) in between clients and that server will break
> that authentication model. The fix was to simply configure Traffic
> Server to pass CyberCash traffic onwards without any attempt to
> proxy or cache the content.

But, as you point out, this basically requires that each server using
this authentication model be identified, then corrected on the cache
server on a case-by-case basis. While such servers are, at this point,
a rare occurrence, it does happen, and developers should be free to use
this technology if it meets their needs without fear of having their
clients suddenly disconnected from them by those clients' ISP.
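
To make the failure mode concrete, here is a rough sketch in Python of
why IP-based authentication breaks behind any proxy (the addresses are
illustrative; CyberCash's real protocol is of course more involved):

    # The server trusts a fixed set of client addresses.
    AUTHORIZED_CLIENTS = {"198.51.100.12"}   # illustrative merchant address

    def authenticate(peer_ip):
        return peer_ip in AUTHORIZED_CLIENTS

    # Direct connection: the server sees the real client address.
    print(authenticate("198.51.100.12"))   # True

    # Through a transparent proxy, the TCP connection arrives from the
    # proxy's address, so the same client now fails authentication.
    print(authenticate("203.0.113.9"))     # False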

> The second example was of a broken keepalive implementation in
> an extremely early Netscape proxy cache. The Netscape proxy
> falsely propagated some proxy-keepalive protocol pieces, even
> though it was not able to support it. The fix was to configure
> Traffic Server to not support keepalive connections from that
> client. Afterwards, there were no further problems.

Although this proxy is broken and the owners should upgrade, there
is still the issue of choice.
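
For what it's worth, the quoted workaround amounts to a per-client
override. A minimal sketch of the idea, with illustrative names and
addresses rather than Traffic Server's actual configuration:

    # Clients whose keepalive implementation is known to be broken.
    BROKEN_KEEPALIVE_CLIENTS = {"203.0.113.45"}   # e.g. the old Netscape proxy

    def connection_header(client_ip, wants_keepalive):
        # Never offer keepalive to a known-broken client; otherwise
        # honor whatever the client asked for.
        if client_ip in BROKEN_KEEPALIVE_CLIENTS:
            return "close"
        return "keep-alive" if wants_keepalive else "close"

    print(connection_header("203.0.113.45", True))   # close
    print(connection_header("198.51.100.7", True))   # keep-alive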

> These two problems are examples of legacy issues. IP-based
> authentication is widely known to be a weak security measure.
> The Netscape server in question was years old. As time goes
> on, there will be a diminishing list of such anomalies to deal
> with. Inktomi works closely with all of our customers to
> diagnose any reported anomaly and configure the solution.

Weak or not, it is in use. As such, it is not appropriate for
an ISP to inflict this technology on all of its customers
without their consent.

> Beyond that, to scale this solution, Inktomi serves as a
> clearinghouse of these anomaly lists for all of our customers.
> A report from any one customer is validated and made available
> to other Traffic Server installations to preempt any
> further occurrences.

While this is a positive customer-service step, it is not by any
means a complete resolution of the problem. It still requires a
(potentially) good deal of overhead to keep the list of anomalies
up to date on the server. Further, the cache server inherently
degrades performance for these sites.
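
The exemption mechanism itself is simple enough; what follows is a
sketch of the request path, with a hypothetical hostname standing in
for an anomaly-list entry:

    # Origins known to break behind the cache (IP-authenticating servers, etc.).
    BYPASS_HOSTS = {"secure.cybercash.example"}   # hypothetical list entry

    def route(host):
        # Pass the bytes through untouched for listed origins;
        # proxy and cache everything else as usual.
        if host in BYPASS_HOSTS:
            return "tunnel"
        return "proxy-and-cache"

    print(route("secure.cybercash.example"))   # tunnel
    print(route("www.example.com"))            # proxy-and-cache

The code is trivial; the overhead is in keeping that list current on
every cache server, which is exactly the point.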

> Inktomi also conducts proactive audits both inside live Traffic
> Servers and via the extensive "web crawling" we perform as part
> of our search engine business. The anomalies discovered by these
> mechanisms are similarly made available to our customers.

This is a much better and likely more thorough way to gather a list
of anomalies. However, given that, I'm surprised you didn't catch
the CyberCash issue before it became one.

> And finally, there has been confusion concerning the
> confidentiality and legal issues of transparent caching.
> Transparent caching does not present any new threat to the
> confidentiality of data or usage patterns. All of these issues
> are already present in abundance in the absence of caching.
> Individuals responsible for managing networks will have to weigh
> the advantages of caching against these more nebulous
> considerations. We, and many others looking towards the future
> of a scalable Internet, are confident that caching is becoming an
> integral part of the infrastructure, and provides many benefits to
> hosters, ISPs, backbones and surfers alike.

Maybe, but its use should be voluntary on the part of both sides.
Otherwise, the cache implementors (not Inktomi, but the ISPs who
implement transparent caching for all their customers) are
providing a service different from the one the bulk of their
customers expected when they signed up.

> Paul Gauthier

Owen DeLong

Opinions expressed are mine and mine alone; they do not reflect the
views of my employer, were not cleared by my employer, and were
submitted to the net from a machine not owned or operated by my
employer.

> > The CyberCash server performed client authentication based on
> > the IP address of the TCP connection. Placing a proxy (transparent
> > or otherwise) in between clients and that server will break
> > that authentication model. The fix was to simply configure Traffic
> > Server to pass CyberCash traffic onwards without any attempt to
> > proxy or cache the content.
> But, as you point out, this basically requires that each server using
> this authentication model be identified, then corrected on the cache
> server on a case-by-case basis.

My main gripe with Digex is that they did this (forced our traffic into a
transparent proxy) without authorization or notification. I wasted an
afternoon, and a customer wasted several days' worth of time over a 2-3
week period trying to figure out why their CyberCash suddenly stopped
working. This customer then had to scan their web server logs, figure
out which sales had been "lost" due to proxy breakage, and see to it that
products got shipped out. This introduced unusual delays in their
distribution and left their site shut down for several days between their
realization of the problem and its resolution yesterday, when we got Digex
to exempt certain IPs from the proxy.

I have nothing against web caching, and even think it's a good idea and
the way of the future. Digex is just going about this the wrong way. As
a customer and network administrator, I should be able to choose which of
FDT's traffic is forced into web caches. When that was the case, we had
no issues with "legacy applications" breaking, because we had no servers
going through caches.

I think it makes great sense for backbone providers to set up web caches
and use whatever means they feel are justified to encourage customers to
set up their own caches that talk to the backbone caches via ICP, or to
give the customer the _choice_ of having the backbone provider do all
their caching if the customer does not want to set up their own cache.
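
ICP itself is a small UDP protocol; here is a rough sketch of a v2
query (per RFC 2186) from a customer cache to a backbone parent. The
parent hostname is a placeholder:

    import socket
    import struct

    ICP_OP_QUERY, ICP_OP_HIT, ICP_OP_MISS = 1, 2, 3

    def build_query(url, reqnum=1):
        # Query payload: 4-byte requester host address (zero here),
        # followed by the NUL-terminated URL.
        payload = struct.pack("!I", 0) + url.encode("ascii") + b"\x00"
        # 20-byte header: opcode, version, total length, request number,
        # options, option data, sender host address.
        header = struct.pack("!BBHIIII", ICP_OP_QUERY, 2,
                             20 + len(payload), reqnum, 0, 0, 0)
        return header + payload

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    sock.sendto(build_query("http://www.example.com/"),
                ("parent-cache.example", 3130))   # replace with a real parent
    try:
        reply, _ = sock.recvfrom(4096)
        opcode = reply[0]
        print({ICP_OP_HIT: "HIT", ICP_OP_MISS: "MISS"}.get(opcode, opcode))
    except socket.timeout:
        print("no ICP reply")

A customer cache asks its parent "do you have this URL?" and fetches
through the parent only on a HIT (or per its own policy); nobody's
traffic has to be hijacked for that to work.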

> > Inktomi also conducts proactive audits both inside live Traffic
> > Servers and via the extensive "web crawling" we perform as part
> > of our search engine business. The anomalies discovered by these
> > mechanisms are similarly made available to our customers.
> This is a much better and likely more thorough way to gather a list
> of anomalies. However, given that, I'm surprised you didn't catch
> the CyberCash issue before it became one.

Yes. I can't imagine FDT is the only Digex customer that houses servers
that use CyberCash. As each customer finds an application that breaks due
to transparent proxying, will the others benefit from their debugging, or
does every customer have to jump through the same hoops, wasting time
rediscovering what breaks?