CGNAT

Hi,

Any recommendations for a CGNAT appliance? Who has tried one, and which brand is the best from your perspective?

The throughput I want to pass through the CGNAT is about 40 Gbps, and the subscriber count is about 40,000.

Regards,
Ahmed
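As a rough back-of-envelope for these requirements, the numbers can be translated into per-subscriber bandwidth and a translation-table target. The flows-per-subscriber figure below is an assumption for sizing, not a measurement:

```python
# Back-of-envelope for the stated requirement: 40 Gbps across
# 40,000 subscribers. FLOWS_PER_SUB_PEAK is an assumption used
# to size the translation table; real networks vary widely.

SUBSCRIBERS = 40_000
THROUGHPUT_GBPS = 40
FLOWS_PER_SUB_PEAK = 100  # assumed concurrent flows per subscriber

avg_mbps_per_sub = THROUGHPUT_GBPS * 1000 / SUBSCRIBERS
peak_translations = SUBSCRIBERS * FLOWS_PER_SUB_PEAK

print(avg_mbps_per_sub)   # 1.0 Mbps average per subscriber
print(peak_translations)  # 4,000,000 concurrent translations to plan for
```

Any appliance under consideration would need comfortable headroom above both figures.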

Hello Ahmad,

I am using F5 for CGNAT, currently 250K subscribers with 28 Gbps of bandwidth, and I will easily double that with a second appliance soon. It's high performance and I like it. Any time, any question.

Thanks,
Shahab

Last year I evaluated Cisco ASR9006/VSM-500 and Juniper MX104/MS-MIC-16G in
my lab.

I went with MX104/MS-MIC-16G. I love it.

I deployed two MX104's. Each MX104 has a single MS-MIC-16G card in it. I
integrated this CGNAT with MPLS L3VPN's, with a NAT-inside VRF and a
NAT-outside VRF. Both MX104's learn a 0/0 route for the outside and
advertise a 0/0 route for the inside to all the PE's that have DSLAMs
connected to them. So each PE with DSL connected to it learns a default
route towards two equal-cost MX104's. I could easily add a third MX104 to
this modular architecture.

I have 7,000 DSL broadband customers behind it. Peak-time throughput is
hitting about 4 Gbps... I see a little over 100,000 service flows
(translations) at peak time.

I think each MX104 MS-MIC-16G can handle about ~7 million translations and
about 7 Gbps of CGNAT throughput... so I'm good.

I have a /25 outside public address pool for each MX104 (so a /24 total for
both MX104's)... pretty sweet how I use a /24 for ~7,000 customers :)
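Those numbers work out cleanly. A quick check of what a /24 shared by ~7,000 NAPT'd subscribers means in ports per subscriber (assuming the full port range above 1024 is usable for translations, which is an assumption about the platform's configuration):

```python
# Ports-per-subscriber check for the numbers above: a /24 of
# public space shared by ~7,000 NAPT'd subscribers. Assumes the
# whole range above port 1024 is usable for translations.

POOL_ADDRESSES = 256            # addresses in a /24
SUBSCRIBERS = 7_000
USABLE_PORTS = 65_536 - 1_024   # ports above the well-known range

total_ports = POOL_ADDRESSES * USABLE_PORTS
ports_per_sub = total_ports // SUBSCRIBERS

print(ports_per_sub)  # ~2359 ports per subscriber on average
```

With a little over 100,000 flows at peak across 7,000 customers (roughly 14 concurrent flows per subscriber), that leaves a lot of headroom.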

I'll freeze this probably for DSL and not put anything else behind it. I
want to leave well-enough alone.

If I move forward with CGNAT'ing cable modem (~6,000 more subscribers) I'll
probably roll out two more MX104's with a new VRF for that...

If I move forward with CGNAT'ing FTTH (~20,000 more subscribers) I'll
probably roll out two MX240/480/960's with MS-MPC... I feel I'd want/need
something beefier for FTTH...

- Aaron

Hi Aaron, thanks for the info. I'm curious what you or others do about
DDoS attacks to CGNAT devices. It seems that a single attack could affect
the thousands of customers that use those devices. Also, do you have
issues detecting attacks vs. legitimate traffic when you have so much
traffic destined to a small group of IPs?

Rich Compton | Principal Eng | 314.596.2828
14810 Grasslands Dr, Englewood, CO 80112

On 4/6/17, 2:33 PM, "NANOG on behalf of Aaron Gould"

Thanks Rich, you bring up some good points. Yes, it would seem that an
attack aimed at a target IP address would in fact now have a greater
surface, since that IP address is being used by many people. When we
remotely-trigger-black-hole (RTBH) route an IP address (/32 host route) into
a black hole to stop an attack... you're right, now you've completed the
DDoS, not only for one customer, but for the hundreds or thousands that were
using that public IP address through the NAT appliance. ...which is why I've
told my NOC not to act on any of the /24's worth of address space that we
use for NAT.

Interestingly, the nature of NAT is that it doesn't allow inbound traffic
unless a previous outbound packet had been sent from the customer side to
the internet side and caused the NAT translation to be built... therefore,
an outside-initiated DDoS attack would be automatically blocked at the NAT
boundary*. The DDoS would not penetrate as far as it would in the non-NAT
scenario... so with CGNAT you've shortened the reach of the DDoS. But of
course this doesn't stop the DDoS from occurring, or from reaching the NAT
boundary... the attack still arrives. You have to continue with other layers
of security, defense, and mitigation in other areas/layers of your network.

- Aaron

* (Unless, I guess, they were able to guess/spoof the exact IP address and
port number of an existing NAT session; but then it would seem they would
only reach that same port-address-translated session's destination... which
I think would be a single IP address endpoint and port number.)
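The stateful behavior described above can be sketched in a few lines. This is a minimal illustration of the principle, not any vendor's implementation; the addresses use RFC 6598 / documentation space:

```python
# Minimal sketch of why unsolicited inbound traffic dies at a
# NAPT44 boundary: inbound packets are forwarded only when they
# match a translation created by earlier outbound traffic.

nat_table = {}  # (public_ip, public_port) -> (inside_ip, inside_port)

def outbound(inside_ip, inside_port, public_ip, public_port):
    """An outbound packet installs a translation."""
    nat_table[(public_ip, public_port)] = (inside_ip, inside_port)

def inbound(public_ip, public_port):
    """An inbound packet is forwarded only if a translation exists."""
    return nat_table.get((public_ip, public_port), "DROP")

# A subscriber in 100.64/10 space opens a session outbound...
outbound("100.64.0.10", 40000, "203.0.113.4", 54519)

print(inbound("203.0.113.4", 54519))  # ('100.64.0.10', 40000) -- solicited
print(inbound("203.0.113.4", 443))    # DROP -- unsolicited attack traffic
```

An attacker who cannot guess a live (address, port) pair never gets a packet past the boundary, which matches the drop-flow behavior shown later in this thread.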

BTW, has anybody measured how much implementing native IPv6 decreases the
actual load on CGNAT?

Reports are that 30-50% of traffic will be IPv6 when you enable dual stack. This would be traffic that will not traverse your CGNAT.
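Applying those rough percentages to the 4 Gbps DSL peak mentioned earlier in the thread gives a feel for the offload (the percentages are the figures quoted here, not measurements):

```python
# If 30-50% of traffic moves to IPv6 once dual stack is enabled,
# the CGNAT only has to carry the remaining IPv4 share. The peak
# figure is the ~4 Gbps DSL peak quoted earlier in this thread.

peak_gbps = 4.0
for v6_share in (0.30, 0.50):
    v4_gbps = peak_gbps * (1 - v6_share)
    print(f"{int(v6_share * 100)}% IPv6 -> {v4_gbps:.1f} Gbps through CGNAT")
```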

Thanks Max, I've thought about that and tested some IPv6 (6VPE, MPLS L3VPN
with IPv6 dual-stacked) in my network.

In my CGNAT testing for my 7,000 DSL customers, I've already tested the
inter-VRF route leaks that will be required for IPv6 to flow around and
bypass the IPv4 CGNAT boundary... so I have dual-stacked my DSL customers
with v4/v6 and seen that v4 does flow via CGNAT and v6 does bypass NAT. I
could dig up some IOS(-XR)/Junos inter-VRF route leak/policy configs if
anyone could benefit from that.
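For anyone curious what that kind of leak looks like, here is a hedged Junos-style sketch using `instance-import` with a `from instance` policy. All names (NAT-OUTSIDE, NAT-INSIDE, the policy name) are invented for illustration; this is not the actual config from this deployment:

```
/* Hypothetical sketch: leak the 0/0 learned in a NAT-outside VRF
   into the subscriber (NAT-inside) VRF. Instance and policy names
   are invented. */
policy-options {
    policy-statement LEAK-DEFAULT-FROM-OUTSIDE {
        term default {
            from {
                instance NAT-OUTSIDE;
                route-filter 0.0.0.0/0 exact;
            }
            then accept;
        }
        term reject-rest {
            then reject;
        }
    }
}
routing-instances {
    NAT-INSIDE {
        instance-type vrf;
        routing-options {
            instance-import LEAK-DEFAULT-FROM-OUTSIDE;
        }
    }
}
```

The IPv6 routes would simply be left out of the leak so v6 follows the global/6VPE path and never touches the NAT service card.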

I'm actually anxious to get started on my IPv6 deployment in my DSL
space... as you have alluded to, Max, this would extend the life of my IPv4
CGNAT Juniper MX104/MS-MIC-16G boundary, since we would expect the IPv6
traffic to flow naturally to my internet pipes un-NATted and thus relieve
the NAT nodes. ...and as more and more IPv6 is adopted in the world, the
NAT44/NAPT CGNAT boundary is less and less needed... until ultimately we are
in an IPv6-only world.

-Aaron

A lot depends on the CGNAT features you are looking to support, some
considerations:

- Are you looking for port block allocation for bulk logging, where a given
subscriber is given a block of source TCP/UDP ports on a translated IP
address?
- How many translations and what session rate are you looking to support?
- Do you require Port Control Protocol (PCP) support for inbound pinholing
reservations? Do your subscribers support UPnP-to-PCP translation?
- Are you looking to support RFC 6598 (carrier use of 100.64.0.0/10 for
CGNAT)?
- Are you looking to support DS-Lite (RFC 6333) or lw4o6 (RFC 7596)? Both
have significantly different requirements relative to CGNAT (DS-Lite
assumes translation of subscriber RFC 1918 addresses tracking their IPv6
address in the translation table, lw4o6 assumes translation from RFC 1918
to RFC 6598 at the subscriber/B4 prior to IPv6 encapsulation plus
translation of RFC 6598 to public at the CGNAT/AFTR)
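The port-block-allocation point in the first bullet is worth a sketch: with a deterministic subscriber-to-block mapping, one log line per subscriber replaces per-session logging. Pool size, block size, and the mapping function below are all illustrative assumptions:

```python
# Sketch of deterministic port block allocation: each subscriber
# index maps to a fixed (public IP, port block), so the mapping
# itself is the log. Pool and block size are illustrative only.

PUBLIC_POOL = ["203.0.113." + str(i) for i in range(1, 5)]  # 4 IPs
BLOCK_SIZE = 2_000
FIRST_PORT = 1_024
BLOCKS_PER_IP = (65_536 - FIRST_PORT) // BLOCK_SIZE  # 32 blocks per IP

def port_block(subscriber_index):
    """Return (public_ip, first_port, last_port) for a subscriber."""
    ip = PUBLIC_POOL[(subscriber_index // BLOCKS_PER_IP) % len(PUBLIC_POOL)]
    block = subscriber_index % BLOCKS_PER_IP
    start = FIRST_PORT + block * BLOCK_SIZE
    return ip, start, start + BLOCK_SIZE - 1

print(port_block(0))   # ('203.0.113.1', 1024, 3023)
print(port_block(33))  # ('203.0.113.2', 3024, 5023)
```

A real deployment would size the pool so the mapping never wraps onto another subscriber's block; this toy pool only covers 128 subscribers.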

Generally, I tend to recommend F5 BIG-IP from a CGNAT feature standpoint

- Ed

I can confirm that percentage (at least with residential customer base).
All big content providers and a number of CDNs will do IPv6 by default. One
thing that will heavily affect this is the CPE equipment (which might not
have IPv6 enabled or even be capable of it).

kind regards
Pshem

My data on customers who are supposed to be 100% dual-stack (unless they
explicitly disable IPv6 on their side, which some of them do) says 25% on
the best days. It used to be up to 35% in late 2015.
For reasons unknown, it went slightly down during 2016, with a sudden extra
decrease in January this year.

With a ~59% dual-stack percentage and an 8% DS-Lite percentage (i.e., 67%
of our subscriber base has IPv6), we get around 40% IPv6 traffic.

Rich, et al,

Circling back on some older threads... I'm doing this because I've been
growing my CGNAT environments and needing to remind myself of some things,
etc...

If an attack is targeted at one IP address, you would think that it
would/could affect all the NAPT44 (NAT-overloaded/PAT'd) IPs that hide
behind it... but isn't that only *if* that traffic actually got through the
NAT boundary and flowed to the intended target(s)?

Unsolicited outside-to-inside traffic, I believe, results in the traffic
being denied... and I'm seeing that the NAT actually builds those flows as
drop flows...

I generated some traffic at a NAT destination and I see all my traffic is
"Drop"... now I wonder if this is a fast path, i.e. dropped in ASIC (PFE)
hardware... if so, it would seem that the NAT boundary is a really nice way
to quickly drop unsolicited inbound traffic from perhaps bad sources.

My source where I was generating traffic... Hollywood IP (only works in the
movies) 256.256.191.133 (bad guy).

NAT destination where I was sending traffic... 256.256.130.4 (victim/target).

Now of course the resources/network outside the nat is bogged down, but the
inside nat domain seems to be unaffected in this case from what I can tell.

And again, I'm wondering if that "Drop" flow is lightweight/fast processing
for the Juniper MS-MPC-128G gear?

{master}
agould@960> show services sessions destination-prefix 256.256.130.4/32 | grep 256.256.191.133 | refresh 1
---(refreshed at 2019-02-07 12:36:45 CST)---
---(refreshed at 2019-02-07 12:36:46 CST)---
---(refreshed at 2019-02-07 12:36:47 CST)---
---(refreshed at 2019-02-07 12:36:48 CST)---
---(refreshed at 2019-02-07 12:36:49 CST)---
---(refreshed at 2019-02-07 12:36:50 CST)---
---(refreshed at 2019-02-07 12:36:51 CST)---
---(refreshed at 2019-02-07 12:36:52 CST)---
TCP   256.256.191.133:54519 -> 256.256.130.4:443   Drop  O  1
ICMP  256.256.191.133       -> 256.256.130.4       Drop  O  1
---(refreshed at 2019-02-07 12:36:53 CST)---
TCP   256.256.191.133:54519 -> 256.256.130.4:443   Drop  O  1
ICMP  256.256.191.133       -> 256.256.130.4       Drop  O  1
---(refreshed at 2019-02-07 12:36:54 CST)---
TCP   256.256.191.133:54519 -> 256.256.130.4:443   Drop  O  1
ICMP  256.256.191.133       -> 256.256.130.4       Drop  O  1
---(refreshed at 2019-02-07 12:36:55 CST)---
TCP   256.256.191.133:54519 -> 256.256.130.4:443   Drop  O  1
ICMP  256.256.191.133       -> 256.256.130.4       Drop  O  1
---(refreshed at 2019-02-07 12:36:56 CST)---
---(refreshed at 2019-02-07 12:36:57 CST)---
---(refreshed at 2019-02-07 12:36:58 CST)---
UDP   256.256.191.133:12998 -> 256.256.130.4:80    Drop  O  1
UDP   256.256.191.133:24444 -> 256.256.130.4:80    Drop  O  1
---(refreshed at 2019-02-07 12:36:59 CST)---
UDP   256.256.191.133:12998 -> 256.256.130.4:80    Drop  O  1
UDP   256.256.191.133:24444 -> 256.256.130.4:80    Drop  O  1
---(refreshed at 2019-02-07 12:37:00 CST)---
UDP   256.256.191.133:12998 -> 256.256.130.4:80    Drop  O  1
UDP   256.256.191.133:24444 -> 256.256.130.4:80    Drop  O  1
---(refreshed at 2019-02-07 12:37:01 CST)---
UDP   256.256.191.133:12998 -> 256.256.130.4:80    Drop  O  1
UDP   256.256.191.133:24444 -> 256.256.130.4:80    Drop  O  1

- Aaron

Hi, I would suggest that you test fragmented traffic to your NAT device as well. Fragments other than the first packet don't carry the L4 info, so the NAT device has to hold them in memory until the first fragment comes in with the L4 info. This can cause a DoS condition if the NAT device doesn't adequately prune fragments from memory when there is a flood of these types of packets.
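The hazard can be sketched as a fragment cache with a hold timer and a hard cap; without the pruning step, an orphan-fragment flood pins memory indefinitely. The timeout and cap values below are invented for illustration:

```python
# Sketch of the fragmentation hazard: non-first fragments have no
# L4 ports, so a NAT buffers them until the first fragment arrives.
# Pruning on a timer plus a hard cap keeps a flood of orphan
# fragments from exhausting memory. Values are illustrative.

import time

FRAG_TIMEOUT = 2.0    # seconds to hold fragments awaiting L4 info
MAX_PENDING = 10_000  # hard cap on buffered fragment chains

pending = {}  # (src, dst, ip_id) -> (arrival_time, fragments)

def buffer_fragment(key, frag, now=None):
    """Buffer a non-first fragment; returns False when shedding load."""
    now = now if now is not None else time.monotonic()
    # Prune expired chains first, so orphans can't pin memory forever.
    for k in [k for k, (t, _) in pending.items() if now - t > FRAG_TIMEOUT]:
        del pending[k]
    if key not in pending and len(pending) >= MAX_PENDING:
        return False  # shed load instead of running out of memory
    pending.setdefault(key, (now, []))[1].append(frag)
    return True

first = buffer_fragment(("198.51.100.9", "203.0.113.4", 1), b"frag", now=0.0)
print(first)  # True -- fragment buffered, awaiting the first fragment
```

A device that skips either the timer or the cap is the one that falls over under this kind of flood.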