Cisco's Statement about IPR Claimed in draft-ietf-tcpm-tcpsecure

In message <Pine.NEB.4.58.0405122134560.9034@server.duh.org>, Todd Vierling writes:

: http://www.ietf.org/ietf/IPR/cisco-ipr-draft-ietf-tcpm-tcpsecure.txt

The same document that fully ignores that port number randomness will
severely limit the risk of susceptibility to such an attack?

How many zombies would it take to search the port number space
exhaustively?

    --Steve Bellovin

Irrelevant.

The limiting factor here is how many packets can make it to the CPU. Using 10K pps as a nice round (and high) figure, a single machine can generate that rate on its own.

Also, many of the calculations I've seen assume much higher pps when calculating time to reset a session. Has anyone done a test to see what a Juniper M5/10/whatever and a GSR can actually take without dropping packets due to rate limiting and/or falling over from being packeted?

How many route processors does it take to look at the packets from all those zombies? This very quickly becomes a DoS against the route processor rather than a TCP exploit.
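To put rough numbers on both questions, here is a back-of-the-envelope
sketch in Python. The window size, candidate port count, and packet
rate are illustrative assumptions, not figures measured by anyone in
the thread:

# Back-of-the-envelope cost of a blind TCP RST attack.
# Every number here is an assumption for illustration.

SEQ_SPACE = 2 ** 32      # TCP sequence-number space
WINDOW = 16_384          # assumed receive window; one in-window RST per window
PPS = 10_000             # the "nice round (and high)" single-machine figure

def fmt(seconds):
    if seconds < 120:
        return f"{seconds:.0f} seconds"
    if seconds < 2 * 86_400:
        return f"{seconds / 3600:.1f} hours"
    return f"{seconds / 86_400:.1f} days"

guesses_per_port = SEQ_SPACE // WINDOW   # 262,144 with these numbers

for ports, label in [(1, "source port known"),
                     (50_000, "source port randomized")]:
    packets = ports * guesses_per_port
    print(f"{label:24s}: {packets:>14,d} packets, "
          f"{fmt(packets / PPS)} at {PPS:,d} pps")

With these made-up numbers, port randomization turns a ~26-second
blind-reset attack into a ~15-day one for a single machine at 10K pps;
throwing zombies at it to shorten that just multiplies the packet rate
arriving at the route processor, which is exactly the DoS point above.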

> > The same document that fully ignores that port number
> > randomness will severely limit the risk of susceptibility
> > to such an attack?
> >
> > How many zombies would it take to search the port number
> > space exhaustively?
>
> Irrelevant.
>
> The limiting factor here is how many packets can make it to
> the CPU. Using 10K pps as a nice round (and high) figure,
> a single machine can generate that rate on its own.
>
> Also, many of the calculations I've seen assume much higher
> pps when calculating time to reset a session. Has anyone
> done a test to see what a Juniper M5/10/whatever and a GSR
> can actually take without dropping packets due to rate
> limiting and/or falling over from being packeted?

In some fairly informal tests that I did with an M20/RE3, I had to
saturate the PFE <-> RE link (100Mbps) with packets destined to the RE
before routing adjacencies started flapping. Packet size (64-1518
bytes) didn't make much of a difference, though larger packets seemed
to make things a bit more difficult for the routing protocols. CPU
usage on the RE rarely went above 30% during any test. Streams were
sent from random source addresses.
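For scale, "saturating" that link implies very different packet rates
at different frame sizes. A quick sketch, assuming standard Ethernet
framing (8-byte preamble plus 12-byte inter-frame gap), which may or
may not match the actual PFE <-> RE transport:

# Line-rate pps on a 100 Mbps link for the frame sizes in the test.
# Assumes Ethernet framing; the internal link may differ.

LINK_BPS = 100_000_000
PER_FRAME_OVERHEAD = 8 + 12      # preamble + inter-frame gap, bytes

for frame_bytes in (64, 512, 1518):
    pps = LINK_BPS / 8 / (frame_bytes + PER_FRAME_OVERHEAD)
    print(f"{frame_bytes:4d}-byte frames: {pps:9,.0f} pps at line rate")

So a saturated link means anywhere from roughly 8K to 149K pps reaching
the RE, and flapping that tracks link occupancy rather than packet
count fits the observation that frame size didn't matter much.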

Packets that elicited a response from the RE (e.g., pings) didn't appear
to have a greater effect on performance than ones that didn't, as there
appears to be a good amount of rate-limiting going on internally to keep
things reasonably calm. It's documented that pings to the RE are
limited to 1000/sec, but it also appears that other packet types such as
SYNs are rate-limited in some fashion, either the ingress packets
themselves or maybe the responses from the RE. But in any case,
whatever rate-limiting was going on didn't appear to be affecting
routing adjacencies.
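The behavior described above is what per-protocol policers in front of
the RE would produce. A toy token-bucket sketch of the general
mechanism, with hypothetical parameters (this is not JUNOS's actual
implementation):

import time

class TokenBucket:
    """Toy token-bucket policer: admit at most `rate` packets/sec,
    allowing short bursts of up to `burst` packets."""

    def __init__(self, rate, burst):
        self.rate = rate                  # tokens refilled per second
        self.capacity = burst             # maximum tokens held
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True                   # hand the packet up to the RE
        return False                      # police (drop) it

# e.g. the documented 1000/sec ping limit, with a small burst allowance:
icmp_to_re = TokenBucket(rate=1000, burst=100)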

Although I didn't try anything too fancy, it appears that it's pretty
difficult to bog down the CPU (a PIII 600) on an RE3. Routing
adjacencies were only affected when the PFE <-> RE link became
saturated, which isn't surprising. There was no indication of transit
traffic being affected, which also isn't surprising given that such
packets are handled by ASICs.

-Terry