If DOS is such a large concern, IPSEC can, to an extent, be
used to mitigate it. And IKEv1/v2 with IPSEC is not
the horribly inefficient mechanism it is made out to be. In
practice, it is quite easy to use.
IPSEC does nothing to protect a network device from a DOS attack.
DOS prevention on a network device needs to happen before the TCP/Packet
termination - not the Key/MD5/IPSEC stage. The signing or encrypting of
the BGP message protects against Man in the Middle and replay attacks -
not DOS attacks. Once a bad packet gets terminated, your DOS stress on
the router kicks in (especially on ASIC/NP routers). The few extra CPU
cycles it takes for walking through keys or IPSEC decrypt are irrelevant
from the router's POV. You're SOL if a miscreant can get a packet through
your classification & queuing protections on the router and have it
punted to the receive path.
The key to DOS mitigation on a network device is to have as many fields
in the packet to classify on as possible before the TCP/Packet
termination.
The more you have to classify on, the more granular you can construct
your policy. This is one of the reasons for GTSM - which adds one more
field (the IP packet's TTL) to the classification options.
Yes Jared - our software does the TTL check after the MD5, but the
hardware implementations do the check before the packet gets punted
to the receive path. That is exactly where you need to do the
classification to minimize DOS on a router - as close to the point where
the optical-electrical-airwaves convert to an IP packet as possible.
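For software implementations, the same check can also be pushed down to the socket layer. A minimal sketch, assuming Linux's IP_MINTTL socket option (numeric value 21; the constant is not exposed by every Python build, hence the fallback):

```python
# Sketch of GTSM (RFC 5082) enforcement at the socket layer on Linux.
# IP_MINTTL tells the kernel to discard any segment arriving with a TTL
# below the given floor, before TCP (and therefore before any MD5 work)
# ever sees it.
import socket

IP_MINTTL = getattr(socket, "IP_MINTTL", 21)  # fall back to the Linux value

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Sender side of GTSM: originate every segment with TTL 255 ...
s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 255)
# ... receiver side: drop anything below 255, i.e. anything that has
# crossed even one router hop on the way in.
s.setsockopt(socket.IPPROTO_IP, IP_MINTTL, 255)

print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TTL))
```

The point being that the drop happens in the kernel's IP input path, well before any per-connection crypto is attempted.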
i'm not that bright, so maybe i'm missing something, but i've heard
this claim from cisco people before and never understood it.
just to clarify: you're saying that doing the (expensive) md5 check
before the (almost free) ttl check makes sense because that
*minimizes* the DOS vectors against a router? can someone walk me
through the logic here using small words? i am obviously not able to
follow this due to my distance from the implementation.
As I parsed Barry's post, he was saying that Cisco currently does the
wrong thing today, but that some day when they actually support doing the
check in hardware, that will be the right place to do it. (aka "duh" :P)
Obviously in a perfect world, you don't want to do the expensive MD5 check
anywhere sooner than the last possible moment before you declare the data
valid and add it to the socket buffer. I assume that the reason they can't
do the check sooner in software is they lack a mechanism to tell the IP or
even TCP input code "we want to discard these packets if they are less
than TTL x". They probably can't make that decision until the packet gets
validated by TCP and makes it all the way to BGP code.
But, they should still be able to do all of the TCP layer checks which
don't require outside information, such as matching the segment to a
proper TCB by ip/port/seq #, before doing the MD5 calculation. This makes
DoS against MD5, where you don't know the full L4 port #'s and the seq #,
essentially impossible on its own, without needing to involve the TTL hack.
Actually I take that back, it should be easy enough to configure a minimum
TTL requirement on the TCB through a socket interface. Obviously they're
doing something to pass the IP TTL data outside of its normal ip_input()
function (or whatever passes for such on IOS), so if you've got that data
available in the tcp_input() code you should be able to do the check after
you find your TCB but before the MD5 check, yes?
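The ordering being argued for can be put in rough pseudocode (every name here is made up for illustration; nothing below reflects actual IOS internals):

```python
# Hypothetical receive-path ordering, cheapest checks first.
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pkt:
    src: str
    dst: str
    sport: int
    dport: int
    ttl: int
    seq: int
    payload: bytes = b""
    digest: bytes = b""

@dataclass
class Tcb:
    rcv_nxt: int                    # next expected sequence number
    rcv_wnd: int                    # receive window size
    min_ttl: Optional[int] = None   # GTSM floor, if configured
    md5_key: Optional[bytes] = None # TCP-MD5 (RFC 2385) key, if configured

def verify_md5(pkt: Pkt, key: bytes) -> bool:
    # Stand-in for the real RFC 2385 digest over pseudo-header + segment.
    return hashlib.md5(key + pkt.payload).digest() == pkt.digest

def tcp_input(pkt: Pkt, tcbs: dict) -> str:
    # 1. Almost free: demux to a connection on the 4-tuple.
    tcb = tcbs.get((pkt.src, pkt.dst, pkt.sport, pkt.dport))
    if tcb is None:
        return "drop: no matching TCB"
    # 2. Still almost free: the GTSM/TTL check.
    if tcb.min_ttl is not None and pkt.ttl < tcb.min_ttl:
        return "drop: TTL below minimum"
    # 3. Cheap: sequence number must land in the receive window.
    if not (tcb.rcv_nxt <= pkt.seq < tcb.rcv_nxt + tcb.rcv_wnd):
        return "drop: seq out of window"
    # 4. Only now pay for the expensive MD5 digest.
    if tcb.md5_key is not None and not verify_md5(pkt, tcb.md5_key):
        return "drop: bad MD5"
    return "accept"

# A spoofed segment with a plausible 4-tuple and seq # still never
# reaches step 4 if its TTL betrays that it crossed a router hop.
tcbs = {("10.0.0.1", "10.0.0.2", 179, 40000):
        Tcb(rcv_nxt=1000, rcv_wnd=16384, min_ttl=255, md5_key=b"secret")}
spoof = Pkt("10.0.0.1", "10.0.0.2", 179, 40000, ttl=250, seq=1000)
print(tcp_input(spoof, tcbs))
```

Whether the real code can actually be ordered this way is exactly the open question in this thread.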
Since there hasn't been an IOS source code leak in a while, does someone
from Cisco who actually knows how this is implemented want to comment so
we can stop guessing?
Why couldn't the network device do an AH check in hardware before passing
the packet to the receive path? If you can get to a point where all connections
or traffic TO the router should be AH, then, that will help with DOS.
If you can limit which devices _SHOULD_ talk to the router, and demand AH
on every packet from at least some subset of those, that helps but
isn't a complete solution.
If you care that much, why don't you just add an extra loopback interface, give it an RFC 1918 address, have your peer talk BGP towards that address, and filter all packets towards the actual interface address of the router?
The chance of an attacker sending an RFC 1918 packet that ends up at your router is close to zero, and even though the interface address still shows up in traceroutes etc it is bulletproof because of the filters.
(This works even better with IPv6 link local addresses, those are guaranteed to be unroutable.)
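As a quick sanity check on the "unroutable by default" claim, Python's standard ipaddress module classifies both ranges accordingly (illustrative only; the real protection still comes from your filters):

```python
import ipaddress

# RFC 1918 space is flagged as private and not globally routable ...
assert ipaddress.ip_address("10.1.2.3").is_private
assert not ipaddress.ip_address("10.1.2.3").is_global
# ... and IPv6 link-local is guaranteed unroutable off the local link.
assert ipaddress.ip_address("fe80::1").is_link_local
print("both ranges are non-global")
```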
Why is this better than using the TTL hack? Which is easier to configure, and at least as secure.
There are several tradeoffs. GTSM (or "TTL hack") requires that both ends implement it and this check may or may not be inexpensive. (Looking at the CPU stats when running with MD5 and then looking up how fast MD5 is supposed to be processed on much older hardware doesn't give me much confidence in router code efficiency.)
If you're truly paranoid, making sure that as few people as possible can enter packets into your router's CPU input queue makes a lot of sense. I prefer having a regular next hop address that shows up in traceroutes and can generate PMTUD packets but if you move the BGP session to some other address there is no need for the interface address to ever receive any packets. That's a lot better than expending resources on AH processing, which I was replying to.
RFC 1918 are an obvious choice for the addresses terminating the BGP session because they're mostly unroutable by default, but an address range that's properly filtered by your peer is even better.
And if you're on a public peering LAN (internet exchange) obviously you'll want to have static ARP and MAC forwarding table entries.