RE: What were we saying about edge filtering?

keep in mind it's not destination addresses that are the problem here, BUT
if it were: in an experiment (not a very smart one) we routed 0/1 to a lab
system inside 701 once in 2001 (as I recall, so before
Nimda/Code Red/Blaster) and received 600+ kpps of garbage traffic as a
result. Trying to ACL/analyze/deal with that flow was almost impossible...
I'm not sure what you want to do with it today, when our 'sinkhole' network
is consistently handling 20+ kpps (5x previous) MORE random garbage
than three weeks ago, before Blaster/Nachi started causing more pain :(

Christopher L. Morrow wrote:

keep in mind it's not destination addresses that are the problem here, BUT

True, but there are RPF checks based on routing: anything routed to Null0 is generally treated by such filters as an invalid route, and any packet whose source address falls within such a route will be discarded.
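For concreteness, loose-mode uRPF on vendor C looks roughly like this (a sketch only; the interface name and prefix are placeholders, not from the thread):

```
! Loose-mode uRPF: drop packets whose source address has no route,
! or whose best route points to Null0.
interface GigabitEthernet0/0
 ip verify unicast source reachable-via any
!
! Once a prefix is null-routed, packets sourced from it fail the check.
ip route 198.51.100.0 255.255.255.0 Null0
```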

Setting up internal BGP peers and applying route policies to null-route the routes received from the bogon peers would make it easy to invalidate those routes and drop packets which supposedly originate from them.
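A minimal sketch of that trigger setup on vendor C (the ASN, peer address, route-map name, and discard next-hop are all placeholders I've chosen for illustration):

```
! On every edge router: a /32 "discard" next-hop pointed at Null0.
ip route 192.0.2.1 255.255.255.255 Null0
!
router bgp 64512
 ! iBGP session to the internal bogon/trigger peer.
 neighbor 10.0.0.1 remote-as 64512
 neighbor 10.0.0.1 route-map BOGON-BLACKHOLE in
!
! Rewrite the next-hop of received bogon routes to the discard address;
! traffic to them is dropped, and with loose uRPF enabled, traffic
! claiming to be *from* them is dropped as well.
route-map BOGON-BLACKHOLE permit 10
 set ip next-hop 192.0.2.1
```

The appeal of this design is operational: withdrawing a prefix at the single trigger peer removes the blackhole network-wide, with no per-router ACL changes.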

I know this is easily done on vendor C. I suspect the other vendors have implemented something very similar (I've heard J is easier than C).


"Neglecting gravity and friction, it is trivial to show that..."

The issues with vendor C, J, N, L, R, P and A through Z have been
repeatedly discussed and are available in the archives should anyone care
to do the research. It is also a mistake to assume that all network
architectures are the same as yours, or that your architecture would
scale to solve the problems other network providers need to solve.

The use of source address validation (SAV) is expanding. Unfortunately, SAV's
effectiveness is declining due to the increase in trojaned bots.