[c-nsp] DNS amplification

uRPF / BCP38 is really the only solution. Even if we did close all the open recursive DNS servers (which is a good idea), the attackers would just shift to another protocol/service that amplifies traffic and can be aimed via spoofed-source packets. Going after DNS is playing whack-a-mole: DNS is the hip one right now, but it's not the only one available.
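
For reference, where the hardware supports it, strict-mode uRPF is a one-liner per customer-facing interface. A minimal sketch in Cisco IOS syntax (the interface name is just an example):

    ! Drop packets whose source address is not reachable back out
    ! the interface they arrived on (strict uRPF; requires CEF).
    interface GigabitEthernet0/1
     description single-homed customer port (example)
     ip verify unicast source reachable-via rx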

Many networks will say "but our gear doesn't do uRPF, and maintaining an ACL on every customer port is too hard / doesn't scale."

Consider an alternative solution. On a typical small ISP / small service provider network, if you were to ACL every customer (because your gear won't do uRPF), you might need hundreds or even thousands of ACLs. However, if you were to put output filters on your transit connections, allowing only traffic sourced from the IP networks "valid" inside your network, you might find that all you need is a single ACL of a handful to several dozen entries. Having one ACL to maintain that only needs changing when you get a new IP allocation or add/remove a customer who has their own IPs really isn't all that difficult. As far as the rest of the internet is concerned, this solves the issue of spoofed IP packets leaving your network.
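
A minimal sketch of such an egress filter, in Cisco IOS syntax with documentation prefixes standing in for your real allocations (names, prefixes, and interface are examples only):

    ! Only traffic sourced from prefixes that legitimately live inside
    ! our network may leave via transit; anything else is spoofed.
    ip access-list extended EGRESS-ANTISPOOF
     permit ip 192.0.2.0 0.0.0.255 any
     permit ip 198.51.100.0 0.0.0.255 any
     permit ip 203.0.113.0 0.0.0.255 any
     deny   ip any any log
    !
    interface GigabitEthernet0/0
     description transit uplink (example)
     ip access-group EGRESS-ANTISPOOF out

The only routine maintenance is adding or removing permit lines as allocations or IP-owning customers change.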

Yes - and it presumes your DNS servers run Linux and use iptables.

http://www.cryptonizer.com/dnsamp.html

http://serverfault.com/questions/418810/public-facing-recursive-dns-servers-iptables-rules

http://sf-alpha.bjgang.org/wordpress/2013/01/iptables-for-common-dns-amplification-attack-on-recursive-dns-inside-your-network/

these should give you a good idea of how to get started...
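
One common approach (a sketch only; the hashlimit numbers are arbitrary and need tuning for your real query load) is per-source rate limiting of UDP/53:

    # Accept a modest per-source query rate; drop the excess.
    iptables -A INPUT -p udp --dport 53 -m hashlimit \
        --hashlimit-name dns-limit --hashlimit-mode srcip \
        --hashlimit-upto 30/second --hashlimit-burst 60 -j ACCEPT
    iptables -A INPUT -p udp --dport 53 -j DROP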

Yes, BCP38 is the solution.

  Now, how widely is it deployed?

  Someone said in the IEPG session during IETF 86 that 80% of the
service providers had done it?

  This raises two questions for me. One, is it really 80%, and how do we measure it?

  Second, if it were 80%, how come the remaining 20% makes so much trouble, and
how do we encourage them to deploy BCP38?

  (well, actually 4 questions :)

Regards,
as

        Yes, BCP38 is the solution.

        Now, how widely is it deployed?

        Someone said in the IEPG session during IETF 86 that 80% of the
service providers had done it?

right... sure.

        This raises two questions for me. One, is it really 80%, and how do we measure it?

csail had a project for a while... spoofer project?
  <http://spoofer.csail.mit.edu/>

I think the last I looked they reported ONLY 35% or so coverage of
proper filtering. Looking at:
  <http://spoofer.csail.mit.edu/summary.php>

though they report 86% non-spoofable, that seems very high to me.

        Second, if it were 80%, how come the remaining 20% makes so much trouble, and
how do we encourage them to deploy BCP38?

some of the 20% seems to be very high-speed connected end hosts, and at
a 70:1 amplification ratio (roughly 15 Mbps of spoofed queries yields
about 1 Gbps of responses) you don't need much bandwidth to fill a 1G
pipe, eh?

-chris

You'd have to get access (cloud VM, dedicated server, etc.) on each network and see if you can successfully get spoofed packets out to another network.

I seriously doubt those numbers though. I'd bet it's more like 80% of service providers are too embarrassed to admit they're not doing BCP38 filtering (or don't know what it is), and 20% are doing it on at least some parts of their network.

They should publish the spoofable ASes. Not for public shaming, but at least
to show the netadmins that they are doing something wrong, or that what they
are trying to do right is not working.

  Or at least provide a tool to check your own ASN or netblock.

/as

I don't disagree, but I'd point out that in everyone's network(s) there
are likely easier places than others to do bcp38... So an unqualified
'I do bcp38' is not as helpful, especially when almost all
consumer-grade links are bcp38 by default, which is likely where a
bunch of this measurement originates. (well, I suspect a bunch of it
is from consumer-grade links anyway)

If you have packet data at a specific attack/normal-traffic sensor,
covering a sufficient number of different kinds of attacks per source
network over a long period of time, you might be able to infer some
information about which networks prevent spoofing from differences in
the kinds of attacks seen originating from those networks.

If spoofing is preferred, or is used by other nodes involved in a
particular attack, then the networks that are concentrated sources of
non-spoofed attack packets are most likely places where spoofing
prevention is present -- and has altered attacker behavior.

Possibly the presence of spoofed packets may be suggested by a sudden,
drastic difference between the average TTL of attack packets claiming a
particular source network and the TTLs normally observed for legitimate
traffic from that network.
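
As a very rough sketch of that idea (the prefix and capture filter are made up, and on its own this proves nothing), you could compare the average TTL seen from a source prefix during an attack window against its normal baseline:

    # Average IP TTL over the next 1000 packets claiming this source prefix.
    tcpdump -nn -v -c 1000 'src net 198.51.100.0/24' 2>/dev/null | \
      awk 'match($0, /ttl [0-9]+/) {
             split(substr($0, RSTART, RLENGTH), a, " "); sum += a[2]; n++
           }
           END { if (n) printf "avg TTL %.1f over %d packets\n", sum/n, n }'

A sudden, consistent shift from that prefix's usual TTL distribution is a hint, nothing more.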

If you have a sufficiently massive number of traffic sensors, and a
massive data-gathering infrastructure close enough to the attacks, it
may be possible to analyze the microsecond-level timing of packets and
the sequence/order in which they arrive at the various sensors (the
millisecond delays and propagation rates from the initiating attacker
nodes), in order to assign a probability that spoofed packets came from
certain networks.

Then, at that point, you might make some guesses about which networks
implement BCP38.

To get microsecond-level timing, you have to be so close that you're
basically just peering with everyone. And at that point you can just look
to see which fibers carry spoofed packets.

Once you know an ISP hasn't implemented BCP38, what's the next step?
De-peering just reduces your own visibility into the problem. What if
it's a transit provider, who can be legitimately expected to route for 0/0?

Damian

Arturo Servin wrote:

  Yes, BCP38 is the solution.

It is not a solution at all, because it will, instead, promote
multihomed sites bloating the global routing table.

To really solve the problem in an end-to-end fashion, it is
necessary to require that IGPs carry information on the proper
source address corresponding to each routing table entry in a
*FULL* routing table, which must be delivered to almost all, if
not all, end systems.

            Masataka Ohta

Arturo Servin wrote:

> Yes, BCP38 is the solution.

It is not a solution at all, because it will, instead, promote
multihomed sites bloating the global routing table.

How does enforcing that source addresses entering your net from
customers' sites match those that have been allocated to them
bloat the routing table?

Now, if you only accept addresses that have been allocated to them by
you, then that could bloat the routing table, but BCP 38 does NOT say
to do that. Similarly, uRPF checking is not BCP 38.

With SIDR, each multi-homed customer could provide CERTs which prove
they have been allocated an address range, which could be fed into
the ACL generators as exceptions to the default rules. This is, in
theory, automatable.

To really solve the problem in an end-to-end fashion, it is
necessary to require that IGPs carry information on the proper
source address corresponding to each routing table entry in a
*FULL* routing table, which must be delivered to almost all, if
not all, end systems.

How does that solve the problem?

Mark Andrews wrote:

  Yes, BCP38 is the solution.

It is not a solution at all, because it will, instead, promote
multihomed sites bloating the global routing table.

How does enforcing that source addresses entering your net from
customers' sites match those that have been allocated to them
bloat the routing table?

First of all, multihomed sites with their own global routing
table entries bloat the global routing table; this is the
major cause of global routing table bloat and is not acceptable.

Then, the only solution is to let the multihomed sites have
multiple prefixes, each of which is aggregated by its respective
provider.

But, then, all the end systems are required to choose the proper
source addresses corresponding to the destination addresses, which
requires that IGPs carry such information.

See draft-ohta-e2e-multihoming-05 for details.

Now, if you only accept addresses that have been allocated to them by
you, then that could bloat the routing table, but BCP 38 does NOT say
to do that. Similarly, uRPF checking is not BCP 38.

That BCP 38 is narrowly scoped is not my problem.

With SIDR, each multi-homed customer could provide CERTs which prove
they have been allocated an address range, which could be fed into
the ACL generators as exceptions to the default rules. This is, in
theory, automatable.

The problem is not in individual ISPs but in the global routing
table size.

How does that solve the problem?

In an end-to-end fashion.

See draft-ohta-e2e-multihoming-05 for details.

            Masataka Ohta

See <http://datatracker.ietf.org/wg/lisp/> for an actual solution to the problem of routing-table bloat, which has nothing to do with BCP38/84.

Once you know an ISP hasn't implemented BCP38, what's the next step?
De-peering just reduces your own visibility into the problem. What if

In general, it is a hard problem, not directly solvable in any obvious
way. It's similar to the question of what the next step is after you
have identified a probable connectivity issue. Detection does not
always grant you a way of preventing something.

Ultimately, to improve matters with regard to BCP38, I believe you
have to secure cooperation; cooperation can sometimes be achieved
through persuasion (discussing/requesting/bargaining/begging), or
coercion (bribing, threatening, public shaming, or seeking intervention
from sponsors, regulators, other networks, or other authorities).

The recommended next steps would be the ones with the least harmful
ramifications for all the networks involved that still have a chance
of being effective, with more aggressive options reserved as possible
backup plans.

In some cases, extreme methods might be warranted, such as inserting
the offending network's AS into the middle of the AS path of outgoing
announcements, so that the spoofed source's upstream network loses
reachability to the prefix under attack....

or maintaining peering, but blackholing traffic from that peer to the
local prefix under attack.

it's a transit provider, who can be legitimately expected to route for 0/0?

Restricted peering can reduce the impact of the problem; in other
words: maintain the peering, but strictly control the packets-per-second
and octets-per-second volumes. Traffic going over the peer link is
sacrificed during an attack to protect the target.
This may still be mutually beneficial for the peers.
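
A rough sketch of that idea in Cisco IOS MQC terms (the rate, names, and interface are made up; a real policy would likely match only the attack traffic rather than the whole class-default):

    ! Cap everything arriving from this peer at 500 Mbps; excess is dropped.
    policy-map PEER-CAP
     class class-default
      police 500000000 conform-action transmit exceed-action drop
    !
    interface TenGigabitEthernet0/1
     description peering link (example)
     service-policy input PEER-CAP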

If the peer is such a transit provider, the problem is indeed hard,
possibly impossible to mitigate.

Hi,

First of all, multihomed sites with their own global routing
table entries bloat the global routing table; this is the
major cause of global routing table bloat and is not acceptable.

Sorry, but that is false. Looking at the CIDR Report, the routing table could shrink from 449k to 258k just by aggregating announcements. That's a reduction of 42.5%. I can't see how multihomed end-site announcements can be worse than that... There would almost be no routing table left ;)

Anyway... Drifting off-topic for this thread.
Sander

I think BCP38 is a solution. Perhaps not complete, but hardly any single
solution would be suitable for a complex problem such as this one.

  If you are an end-user organization with a multihomed topology, you
apply BCP38 within your own scope. This will help reduce spoofed
traffic. It does not solve all the problems, but it would help keep
spoofed packets from inside your network from showing up all over the
Internet.

  And about the routing table size, multihomed sites are not the
offenders; it is large ISPs fragmenting prefixes because of traffic
engineering or because of a lack of BGP knowledge.

.as

(Not sure how this made it from c-nsp to nanog, but ...)

uRPF/BCP38 is an important part of a global solution. Similar to open relays, smurf amplifiers, and other "badness" on the network, one must assist the global network by deploying it where it makes sense.

Deploying it at your customer ports may make sense depending on your network. Deploying it on peers may also make sense.

I think having a simple set of locations where people actually deploy it is critical, e.g.:

Colocation Network
Server LANs
VPS LANs
Static Routed Customer Edge

This should be the default, and something I've pushed at my employer for years.

If you do nothing, you can expect nothing as the result. If you attempt to do something, you can at least get an idea of where it's not coming from. At least target these easy edges of the network where there is some value.

- Jared

Dobbins, Roland wrote:

See draft-ohta-e2e-multihoming-05 for details.

See <http://datatracker.ietf.org/wg/lisp/> for an actual solution
to the problem of routing-table bloat,

It is, by no means, a solution.

which has nothing to do with BCP38/84.

Locator ID separation has nothing to do with routing table bloat.

            Masataka Ohta

The usual concern with multi-homed end sites is that end sites with IPv4 PA addresses assigned from provider X who wish to multi-home with provider Y wind up adding at least two entries to the global table, a more specific route to each of X and Y (which X will need to leak beneath the covering supernet if it wants to deliver the customer any traffic).

I don't know of any recent analysis which differentiates between this multi-homing pressure on the global table vs. inter-domain traffic engineering or gratuitous deaggregation, but it's fair to say I have not been looking.

Joe

Sander Steffann wrote:

Sorry, but that is false. Looking at the CIDR Report,
the routing table could shrink from 449k to 258k just
by aggregating announcements.

What if NLRIs are aggregated?

That's a reduction of 42.5%. I can't see how multihomed end-site
announcements can be worse than that...

See "5.2. Limiting the Number of TLAs" of my draft.

Anyway... Drifting off-topic for this thread.

The current poor support for multihomed sites is a reason why
BCP38 is not operational.

          Masataka Ohta