Request comment: list of IPs to block outbound

From: Saku Ytti <saku@ytti.fi>
Sent: Tuesday, October 22, 2019 11:54 AM

> The obvious drawback especially for TCAM based systems is the scale,
> so not only we'd need to worry if our FIB can hold 800k prefixes, but
> also if the filter memory can hold the same amount -in addition to
> whatever additional filtering we're doing at the edge (comb filters
> for DoS protection etc...)

This is actually a somewhat cheap problem, if you optimise for it. That is,
rules are somewhat expensive, but N prefixes per rule are not, when designed
with that requirement. Certainly the BOM effect can be entirely ignored.
However, this is of course only true if that was a design goal; it won't help
in a situation where HW is already in place and doesn't scale there. Just
pointing out that there are no technical or commercial problems getting there,
should we so want.

Well sure, if BGP prefix = ACL prefix had been true from the get-go, both scaling problems would have been catered for in unison and we wouldn't even notice.
People here would be asking for recommendations on a new/replacement edge router that can support 1M routes and filter entries...
But the reality is that long filters can significantly decrease the performance of modern NPUs/PFEs (the ones supporting 100G interfaces).

adam

> 100.64.0.0/10  Private network  Shared address space[3] for communications
> between a service provider and its subscribers when using a carrier-grade NAT.

This space is set aside for your ISP to use, like RFC1918 but for ISPs. It
is not specifically for CGNAT. Unless you are an ISP using this space, you
should not block destinations in it.

I have a hard time finding text that prohibits me from running machines
on 100.64/10 addresses inside my network. It is just more RFC1918 space,
a /10 unwisely spent on stalling IPv6 deployment.

/Måns, guilty.

> I have a hard time finding text that prohibits me from running machines on 100.64/10 addresses inside my network.

I think you are free to use RFC 6598 — Shared Address Space — in your network. Though you should be aware of the caveats of doing so.

> It is just more RFC1918 space, a /10 unwisely spent on stalling IPv6 deployment.

My understanding is that RFC 6598 — Shared Address Space — is *EXPLICITLY* /not/ a part of RFC 1918 — Private Internet (Space). And I do mean /explicitly/.

The explicit nature of RFC 6598 is on purpose so that there is no chance that it will conflict with RFC 1918. This is important because it means that RFC 6598 can /safely/ be used for Carrier Grade NAT by ISPs without any fear of conflicting with any potential RFC 1918 IP space that clients may be using.

RFC 6598 ∩ RFC 1918 = ∅.
RFC 6598 and RFC 1918 are mutually exclusive.

Yes, you can run RFC 6598 in your home network. But you have nobody to complain to if (when) your ISP starts using RFC 6598 Shared Address Space to support Carrier Grade NAT and you end up with an IP conflict.

Aside from that caveat, sure, use RFC 6598.

So, to the reason for the comment request, you are telling me not to
blackhole 100.64/10 in the edge router downstream from an ISP as a
general rule, and to accept source addresses from this netblock. Do I
understand you correctly?

FWIW, I think I've received this recommendation before. The current
version of my NetworkManager dispatcher-d-bcp38.sh script has the
creation of the blackhole route already disabled; i.e., the netblock is
not quarantined.

>> It is just more RFC1918 space, a /10 unwisely spent on stalling IPv6
>> deployment.

> My understanding is that RFC 6598 — Shared Address Space — is *EXPLICITLY*
> /not/ a part of RFC 1918 — Private Internet (Space). And I do mean
> /explicitly/.

I understand the reasoning. I appreciate the need. I just do not agree
with the conclusion to waste a /10 on beating a dead horse. A /24 would
have been a more appropriate way of moving the cost of IPv6 non-deployment
to those responsible. (Put in RFC timescale, 6598 is 3000+ RFCs later
than the v6 specification. That is a few human-years. There are no
excuses for non-compliance except cheapness.)

Easing the operation of CGN at scale serves no purpose except stalling
necessary change. It is like installing an electric blanket to cure the
chill from bed-wetting.

> So, to the reason for the comment request, you are telling me not to
> blackhole 100.64/10 in the edge router downstream from an ISP as a
> general rule, and to accept source addresses from this netblock. Do I
> understand you correctly?

Depends. If your network is a typical home network, connected via a
normal residential ISP, then you should very much expect to need to
talk to 100.64/10, and even be assigned addresses from that block. On
the other hand, if you have a fixed public address block, be it PI or
PA space, reachable from the world, then you shouldn't see any traffic
from addresses within the CGNAT block.

So, at home I don't block such addresses. But at work (a department
within a university, connected to the Swedish NREN), I do block the
CGNAT addresses on our border links.
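
For what it's worth, on a Linux border box running nftables that could look
something like the sketch below. The table and chain names and the "wan0"
uplink are placeholders, not anyone's actual configuration, and only
100.64.0.0/10 is shown; in practice it would sit alongside the rest of the
bogon list.

    # Hypothetical border filter; "wan0" is the assumed uplink interface.
    nft add table inet border
    nft add chain inet border input   '{ type filter hook input priority 0; policy accept; }'
    nft add chain inet border forward '{ type filter hook forward priority 0; policy accept; }'

    # Drop CGNAT-sourced traffic arriving from the outside world...
    nft add rule inet border input   iifname wan0 ip saddr 100.64.0.0/10 counter drop
    nft add rule inet border forward iifname wan0 ip saddr 100.64.0.0/10 counter drop
    # ...and keep it from leaking out, since it is not globally routable anyway.
    nft add rule inet border forward oifname wan0 ip daddr 100.64.0.0/10 counter drop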

> FWIW, I think I've received this recommendation before. The current
> version of my NetworkManager dispatcher-d-bcp38.sh script has the
> creation of the blackhole route already disabled; i.e., the netblock is
> not quarantined.

If this is a laptop which you may someday connect to some guest network
somewhere in the world, then not blocking 100.64/10 is the right thing
to do. Nor should you block RFC 1918 addresses in that situation.
(Assuming you actually want to communicate with the rest of the world. :-)

  /Bellman

> I understand the reasoning. I appreciate the need. I just do not agree with the conclusion to waste a /10 on beating a dead horse. A /24 would have been a more appropriate way of moving the cost of IPv6 non-deployment to those responsible. (Put in RFC timescale, 6598 is 3000+ RFCs later than the v6 specification. That is a few human-years. There are no excuses for non-compliance except cheapness.)

For better or worse, I think IPv6 deployment is one of those things that will likely be completed about the time the spam problem is resolved. It's always going to be moving forward.

I don't know if consuming 4+ million IPs for CGN support is warranted or not.

The CGNs that I've had experience … working with … (let's be polite) … in my day job have all been with providers having way more than a /24 worth of clients behind them. As such, they would need many (virtual) CGN appliances to deal with each /24 of private space. Would a /16 be better? Maybe. That is 1/64th of what's allocated now.

I personally would rather people use 100.64/10 instead of squatting on other globally routed IPs that they think they will never need to communicate with. (I've seen a bunch of people squat on DoD IP space behind CGN. I think such practice is adding insult to injury and should be avoided.)

> Easing the operation of CGN at scale serves no purpose except stalling necessary change. It is like installing an electric blanket to cure the chill from bed-wetting.

Much like humans can move passenger planes, even an electric blanket can /eventually/ overcome a cold wet bed.

> So, to the reason for the comment request, you are telling me not to blackhole 100.64/10 in the edge router downstream from an ISP as a general rule, and to accept source addresses from this netblock. Do I understand you correctly?

It depends.

I think that 100.64/10 is /only/ locally significant and would /only/ be used within your ISP /if/ they use 100.64/10. If they don't use it, then you are probably perfectly safe considering 100.64/10 as a Bogon and treating it accordingly.

Even in ISPs that use 100.64/10, I'd expect minimal traffic to/from it. Obviously you'll need to talk to a gateway in the 100.64/10 space. You /may/ need to talk to DNS servers and the like therein. I've not heard of ISPs making any other service available via CGN bypass.

That being said, I have heard of CDNs working with ISPs to make CDN services available via CGN bypass. My limited experience with that still uses globally routed IPs on the CDN equipment with custom routing in the ISPs. So you still aren't communicating with 100.64/10 IPs directly. But my ignorance of CDNs using 100.64/10 doesn't preclude such from being done.

The simple rules that I've used are:

1) Don't use 100.64/10 in your own network. Or if you do, accept the consequences /if/ it becomes a problem.
2) Don't filter 100.64/10 /if/ your external IP from your ISP is a 100.64/10 IP.
3) Otherwise, treat 100.64/10 like a bogon.
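
A rough shell sketch of how rules 2 and 3 might be automated is below. The
"eth0" interface name and the blackhole-route approach are assumptions for
illustration, not a description of anyone's actual setup.

    #!/bin/bash
    # Hypothetical helper implementing rules 2 and 3 above.
    WAN_IF="eth0"
    WAN_IP=$(ip -4 -o addr show dev "$WAN_IF" | awk '{print $4}' | cut -d/ -f1 | head -n1)

    # 100.64.0.0/10 covers 100.64.0.0 - 100.127.255.255
    # (first octet 100, second octet 64-127).
    o1=${WAN_IP%%.*}; rest=${WAN_IP#*.}; o2=${rest%%.*}

    if [ "$o1" = "100" ] && [ "$o2" -ge 64 ] && [ "$o2" -le 127 ]; then
        # Rule 2: our ISP hands us CGNAT space, so leave 100.64/10 reachable.
        echo "upstream address $WAN_IP is in 100.64.0.0/10; not filtering it"
    else
        # Rule 3: otherwise treat it like any other bogon.
        ip route replace blackhole 100.64.0.0/10
    fi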

> FWIW, I think I've received this recommendation before. The current version of my NetworkManager dispatcher-d-bcp38.sh script has the creation of the blackhole route already disabled; i.e., the netblock is not quarantined.

I suspect things like NetworkManager are somewhat at a disadvantage in that they are inherently machine local and don't have visibility beyond the directly attached network segments. As such, they can't /safely/ filter something that may be on the other side of a router. Thus they play it safe and don't do so.

Unless somebody gets electrocuted first.

You are 100 percent correct about NetworkManager. The facility only
manages interfaces (including VPNs and bridges). What I've done is add
the ability to install and remove null routes when the upstream
interface comes on-line and goes off-line.
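
In case it helps anyone reading along, a rough sketch (not the actual
dispatcher-d-bcp38.sh) of what such a dispatcher script can look like is
below. The "wan0" name and the abbreviated netblock list are placeholders,
and the list would need trimming to fit where the machine actually lives
(see the laptop caveat above).

    #!/bin/bash
    # Sketch of a NetworkManager dispatcher script.  NetworkManager invokes
    # scripts in /etc/NetworkManager/dispatcher.d/ as: <script> <interface> <action>
    UPLINK="wan0"                           # assumed upstream interface
    BOGONS="0.0.0.0/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 192.0.2.0/24"
    # (100.64.0.0/10 deliberately left out, per the discussion above.)

    IFACE="$1"
    ACTION="$2"

    [ "$IFACE" = "$UPLINK" ] || exit 0

    case "$ACTION" in
        up)
            for net in $BOGONS; do
                ip route replace blackhole "$net"
            done
            ;;
        down)
            for net in $BOGONS; do
                ip route del blackhole "$net" 2>/dev/null || true
            done
            ;;
    esac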

So this is only the first stage of filtering. Using Netfilter (in CentOS
8's case, nftables), I will be adding rules to implement my policies on
each system I have: what exactly will be accepted, what will be
forwarded, what will be rejected, and what will be ignored.

What adding the null routes does is let me use the FIB test commands so
that the firewall files don't have to know the exact configuration of
networking, or have monster lists that have to be maintained. Consider
that one suggestion from this group is to look at using
https://www.team-cymru.com/bogon-reference-http.html and doing periodic
updates of the null routes based on the information there. (With caution.)
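
Something along the lines of nftables' fib expression fits that description:
such rules consult the kernel routing table at match time, so the ruleset
itself never carries a per-prefix list. A rough sketch, with placeholder
table and chain names:

    nft add table ip rpf
    nft add chain ip rpf prerouting '{ type filter hook prerouting priority -300; }'

    # Drop packets whose destination the FIB classifies as broadcast.
    nft add rule ip rpf prerouting fib daddr type broadcast counter drop
    # Strict reverse-path check: drop packets for which the FIB has no route
    # back to the source via the interface they arrived on.
    nft add rule ip rpf prerouting fib saddr . iif oif missing counter drop

Whether a plain blackhole route in the main table is enough to make that last
rule fire can depend on the kernel, so it is worth testing; the same idea of
null routes driving source filtering is what the classic combination of
unicast RPF plus null-routing relies on in routers.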

This is specific to Linux. The idea is to let the computer do all the
bookkeeping work, so I don't have to. Even if I have automation to "help".

The first application of this work will be to replace my existing
firewall router with up-to-date software and comprehensive rules to
handle NAT and DNAT, on a local network with quite a number of VLANs.