DoD IP Space

Chris -

https://search.arin.net/rdap/?query=22.0.0.0 will provide a valid phone number for technical & abuse matters.
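For anyone who'd rather script the lookup, the same record is available from ARIN's RDAP service directly. A minimal sketch using Python's standard library (rdap.arin.net is the JSON service behind that search page; the contacts' phone numbers live in each entity's vCard data, per the RDAP JSON format in RFC 9083):

    import json
    import urllib.request

    # Fetch the RDAP record for the DoD block.
    url = "https://rdap.arin.net/registry/ip/22.0.0.0"
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)

    print(record["name"], record["startAddress"], "-", record["endAddress"])

    # Roles include "technical" and "abuse"; each entity's vcardArray
    # carries its "tel" entries.
    for entity in record.get("entities", []):
        print(entity.get("roles"), entity.get("handle"))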

/John

John Curran
President and CEO
American Registry for Internet Numbers

Oh, no worries.. It will never happen :wink:
There is a reason why everyone sticks to IPv4...

There was also a nice block that could have been used safely on private
networks [14.0.0.0/8]. Unfortunately, money needs to flow, so it was
converted to normal space. Shame.

The recent shady action with 44.0.0.0/8 is sad as well..
IPv4 will stay with us for a very long time....

Oh, I could not agree more. We need the IETF or other powers-that-be to stop the line-in-the-sand stuff and instead go with a line in the wet concrete.

I’m sure we all remember Y2k (well, most of us, there could be some young-uns on the list). That day was happening whether we wanted it to or not. It was an unchangeable, unmovable deadline.

THAT is what we need for IPv6 implementation. Will it happen? Probably not, sadly.

I’d love to see a line in the concrete of, say, January 1, 2025, whereby IPv6 will be the default.

Hi,

I’m sure we all remember Y2k

Ah, yes. As a young IT consultant wearing a suit and tie (rofl), I upgraded many
BIOSes in many office buildings in the months leading up to it...

I’d love to see a line in the concrete of, say, January 1, 2025, whereby IPv6
will be the default.

The challenge with that is the market. Y2K was a problem that already existed. It
was a brick wall that we would hit no matter what. The faulty code was released
years before the date.

We, the IETF, or even the UN could come up with 1/1/25 as the date when we switch
off IPv4, and you will still find networks that run IPv4, for the simple reason
that the people who own those networks have a choice. With Y2K there was no choice.

The best way to have IPv6 implemented worldwide is by having an incentive for the
executives who make the decisions. From experience, as I've said on this list a
few times before, I can tell you that decision makers with a limited budget who
have to choose between a new revenue-generating feature or a company-wide
implementation of IPv6 will choose the one that's best for their own short-term
interests.

On that note, I did have a perhaps silly idea: One way to create the demand could
be to have browser makers add a warning to the URL bar, similar to the HTTPS
warnings we see today. If a site is IPv4 only, warn that the site is using
deprecated technology.

Financial incentives also work. Perhaps we can convince Mr. Biden to give a 0.5%
tax cut to corporations that fully implement v6. That will create some bonus
targets.

Thanks,

Sabri

That’s a good one. Perhaps you don’t live/work in the US and can be excused for not knowing that US corporations don’t pay taxes. In many cases we subsidize them by giving tax credits to the point that the money is flowing in the opposite direction entirely. It would be hard to give them any more of a break :wink:

Organizations I have worked with on IPv6 transition reduced CapEx and OpEx by leveraging the IT refresh cycle, and by ensuring their investments leveraged the USGv6 (https://www.nist.gov/programs-projects/usgv6-program) or IPv6 Ready (https://www.ipv6ready.org/) programs to mitigate the “We sell IPv6 products, and want you to pay for the debugging costs” problem.

Can I assume other organizations don’t leverage the IT refresh cycle?

I’m sure we all remember Y2k (well, most of us, there could be some
young-uns on the list). That day was happening whether we wanted it to
or not. It was an unchangeable, unmovable deadline.

but i thought 3gpp was going to force ipv6 adoption

let me try it a different way

why should i care whether you deploy ipv6, move to dual stack, cgnat,
...? you will do whatever makes sense to the pointy heads in your c
suite. why should i give them or some tech religion free rent in my
mind when i already have too much real work to do?

randy

IPv6 doesn’t need a hard date. It is coming, slowly, but it is coming.
Every data set says the same thing. It may not be coming as fast as a lot
of us would want or think is reasonable, as ISPs are currently being
forced to deploy CGNs (NAT44 and NAT64) because there are laggards that
are not doing their part.

If you offer a service over the Internet, then it should be available over
IPv6; otherwise you are costing your customers more to reach you. CGNs are
not free.

Mark

What's everyone's opinion when companies such as Disney actively recommend disabling IPv6? They present it as if IPv6 is blocking their app. We all know that isn't possible. Several people have issues with their app on Amazon Fire Sticks. I use my phone and a Chromecast, and I see the issues when IPv6 is enabled. We are in the testing phase of rolling out IPv6 on our network. All the scripts are ready; we're just trying to work through the few issues like this one.

Thank you
Travis

My opinion is that such recommendations are short-sighted, simply creating tech debt and future support issues for themselves and, in some cases, for intermediaries. The example you linked, though, is pretty specific to one “smart” TV OS; it's possible that there is a v6-specific issue with that TV OS, and it's just worded that way because it's simpler.

Randy nailed it a couple messages ago though. V6 Adoption always is, and always will be, metered by time, money and resources. Everybody kicks the can on things like this until they can’t anymore. And that’s honestly not even major criticism; everybody has a list of 1000 things to do, and enough time/money/resources to reasonably do 250 of them. Triage happens, we all do it.

Randy,

In one sense I agree with you, but what I was reacting to was the idea of an ISP begging the IETF to reassign 22/8 as private space because their customers won't migrate to IPv6. That's problematic for many reasons, and it lets the folks who aren't getting with the program inflict the pain caused by their inaction on the rest of the network.

At the same time, I sympathize with the ISP, because if they can't meet their customers' needs (however dumb those needs are) then the customers will leave.

I agree that we don't need a flag day for IPv6, but we have to stop creating new accommodations, and we need to be more creative about keeping the pain (aka cost) of not moving forward isolated to the folks who are creating the problems.

Doug

Joe,

I haven't done that kind of work for a few years now, but I assume the answer to your question in terms of hardware is still yes.

By and large the problem isn't hardware, it's finding the institutional will to actually do the thing. That requires a lot of education, creating or buying resources that can do the architecture, and ultimately the rollout, etc. etc.

And before all of that you have to overcome the fear of things that are new and different, and even 20 years later that's still a tough hill to climb.

Doug

At what level of incompetence must an organization operate to squander
roughly 70,000 /24 networks?

Or to do so and then decide, "You know what we really need to do? Let's
stomp on someone else's address space instead of deploying IPv6 a decade
late.

"And not just anyone's -- the US Military's! Because there's no
possible future in which an emergency might arise and see a need for
this global network built for resiliency to carry defense related
traffic."

Disney should hire some proper developers and a proper QA team.

RFC 1123 instructed developers to make sure their products handled multi-homed servers properly, and dealing with one of the addresses being unreachable is part of that. It's not like the app can't attempt a stream from the IPv6 address and, if there is no response in 200ms, start a parallel attempt from the IPv4 address. If the IPv6 stream succeeds, drop the IPv4 stream. Happy Eyeballs is just a specific case of handling multi-homed servers.

QA should have test scenarios where the app has a dual-stack network and the servers are silently unreachable over one and then the other transport. It isn't hard to do. Dealing with broken networks is something every application should do.
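A minimal sketch of that fallback, hand-rolled with Python's standard library purely for illustration (a real client would use a proper RFC 8305 implementation; Python's asyncio, for instance, exposes one via the happy_eyeballs_delay argument to open_connection):

    import concurrent.futures
    import socket

    def connect_dual_stack(host, port, fallback_delay=0.2, timeout=5.0):
        """Try IPv6 first; if it hasn't connected after fallback_delay
        seconds, race an IPv4 attempt in parallel. First success wins."""
        infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
        v6 = [ai for ai in infos if ai[0] == socket.AF_INET6]
        v4 = [ai for ai in infos if ai[0] == socket.AF_INET]

        def attempt(ai):
            family, socktype, proto, _, sockaddr = ai
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(sockaddr)          # raises OSError on failure
            return sock

        pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)
        try:
            attempts = [pool.submit(attempt, v6[0])] if v6 else []
            # Give IPv6 a 200ms head start; if it hasn't won, race IPv4 too.
            done, _ = concurrent.futures.wait(attempts, timeout=fallback_delay)
            if v4 and not any(f.exception() is None for f in done):
                attempts.append(pool.submit(attempt, v4[0]))
            for fut in concurrent.futures.as_completed(attempts):
                if fut.exception() is None:
                    return fut.result()     # a real client would also close
                                            # the losing socket when it lands
            raise OSError(f"could not reach {host}:{port} over either family")
        finally:
            pool.shutdown(wait=False)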

You mean like Rogers?

Big networks do run out of IPv4 space. It doesn't require incompetence, just lots of devices. That said, if the devices were purchased in the last two decades, they should support IPv6.

How many devices do you think a large car manufacturer has on the shop floor? Remember, some run their own bus services to move staff around the factory.

Smashing example. They've got fewer than 4 million subscribers (only
about a million of them being Internet), and yet they have somehow gone
through over 17 million addresses?

"Ohh no! Quick! Let's abandon fundamental principles of Internet
architecture to get these poor souls more addresses right away!"

Hi,

certain large corporations that have run out of RFC1918, etc. space

At what level of incompetence must an organization operate to squander
roughly 70,000 /24 networks?

Or, at what level of scale.

Or, a combination of both.

Let me give you an example. This example is not hypothetical.

Acme Inc operates a popular social media site. This requires a lot of
compute power, and storage space. Acme owns multiple datacenters around
the world, and all must be connected.

Acme divides its data centers into "Availability Zones". Each AZ contains
a limited amount of equipment. A typical AZ is made up of multiple pods,
and each pod contains anywhere between 40 and 48 racks. Each rack contains
up to 72 servers. Each server can contain many VMs or containers.

In order to scale, each AZ and pod are designed according to blueprints. This
obviously means that tradeoffs must be made. For example, each rack will be
assigned a /25, since a /26 means that not all 72 servers can have an IP.

Just to accommodate a single IP per server, we already need a /19. Most
servers will have different NICs for different purposes. For example, it is
not uncommon to have a separate storage network, and a management network.

Now we already need three /19s (each of which is 32 /24s), and we haven't even
started to assign IPs to VMs or containers yet.

Let's start to assign IPs to VMs and containers. At one of my previous
employers, there were different groups that worked on VMs (cloud) and
containers (k8s). Both groups had automated scripts to assign IPs, but these
(obviously) did not communicate. This means that each group had their own
VLAN, with their own IRB (or BVI, or VLAN interface, however you want to
name it). On average, each group started with a /22 per ToR (later on,
we limited them to a /24). So now we need an extra 48*2*4 = 384 /24s per pod.

So, with 384+32 = 416 /24s per pod, you are looking at a maximum of 157 pods.
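To sanity-check the arithmetic, a quick back-of-the-envelope sketch in Python, using the figures above (note the 416 total counts the 384 VM/container /24s plus one /19 of per-server space):

    servers_per_rack = 72
    racks_per_pod = 48

    # A /26 holds 64 addresses and a /25 holds 128, hence a /25 per rack.
    assert 2 ** (32 - 26) < servers_per_rack <= 2 ** (32 - 25)

    # One address per server across 48 racks of /25s fits in a /19 (8192).
    assert racks_per_pod * 2 ** (32 - 25) <= 2 ** (32 - 19)

    # Cloud and k8s groups each burn a /22 (four /24s) per ToR.
    vm_slash24s = racks_per_pod * 2 * 4   # 384 /24s per pod
    pod_slash24s = vm_slash24s + 32       # plus a /19 (32 /24s) of server space

    # 10/8 contains 2**16 = 65536 /24s.
    print(pod_slash24s, 2 ** 16 // pod_slash24s)   # 416 -> 157 pods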

Now, granted, there is a lot of waste in this, hence the change from a /22 to
a /24, and the realization that the cloud and k8s groups really needed to work
together to avoid more waste.

I will tell you that this is not at all hypothetical; I have personally
created spreadsheets of every /16 in 10/8 and how they were allocated. It's
amazing how much space was wasted in the early days at said employer, and
how much I was able to reclaim simply by checking if the allocations were
still valid. Hint: when companies split up, a lot of space gets freed up.

This is the way that we avoided using DoD IP space to complement 10/8.

But, you were asking how it's possible to run out of 10/8, and here is your
answer :slight_smile:

TL;DR: a combination of scale and incompetence means you can run out of 10/8
really quick.

Thanks,

Sabri

Indeed. Thank you for providing a demonstration of my point.

I'd question the importance of having a console in Singapore be able to
directly address a BMC in Phoenix (wait for it), but I'm sure that's a
mission requirement.

But just in case you'd like to reconsider, can I interest you in NAT?
Like nutmeg, a little will add some spice to your recipe -- but too much
will cause nausea and hallucinations. It's entirely possible to put an
entire 192.168.0.0/16 network behind every single 172.16.0.0/12 address.

So, you've already "not at all hypothetical'd" entire racks completely
full of 1U hosts that are supporting lots of VMs in their beefy memory
on their two processors and also doing SAN into another universe. Let's
just magic a rack controller to handle the NAT. We can just cram it
into the extra-dimensional space where the switches live.

A standard port-mapping configuration to match your "blueprint" ought to
be straightforward. But let's elide the details and learn by
demonstration, by just using it!

If the Singapore AZ were assigned 172.18.0.0/16.
And the 7th pod were 172.18.7.0/24.
And the 12th rack were 172.18.7.12/32.
We can SSH to the 39th host at: 172.18.7.12:2239
Which NATs to 192.168.0.39:22 on the 192.168.0.0/24 standard net.

If the Phoenix AZ (payoff!) were assigned 172.22.0.0/16.
And the 9th pod were 172.22.9.0/24
And the 33rd rack were 172.22.9.33/32.
We can VNC to the BMC of the 27th host at: 172.22.9.33:5927.
Which NATs to 192.168.1.27:5900 on the 192.168.1.0/24 management net.
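A sketch of the convention those two examples follow, if you'd like it spelled out (hypothetical, obviously):

    # External address is the rack's /32; external port is a per-service
    # base (2200 for SSH, 5900 for VNC-to-the-BMC) plus the host number
    # within the rack. Internally that maps to the host's real service port.
    def external_endpoint(az_octet, pod, rack, host, base_port):
        return f"172.{az_octet}.{pod}.{rack}:{base_port + host}"

    print(external_endpoint(18, 7, 12, 39, 2200))   # SSH -> 172.18.7.12:2239
    print(external_endpoint(22, 9, 33, 27, 5900))   # VNC -> 172.22.9.33:5927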

Let's see. We've met all our requirements, left unused more than 50% of
the 172.16/12 space by being very generous to our AZs, left unused 98%
of the 192.168/16 space in each rack, threw every zero-network to the
wolves for our human counting from 1, and still haven't even touched
10/8. And all less than an hour's chin pulling.

Good for us.