Nat

There's nothing that can really be done about it now and I certainly wasn't able to participate when these things were decided.

However, keeping back 64 bits for the host was a stupid move from the beginning. We're reserving 64 bits for what's currently a 48-bit number. You can use every single MAC address, whereas IPs are lost to subnetting and other such things. I could maybe have seen holding back 56 bits for the host if, for some reason, we needed to replace the current system of MAC addresses at some point before IPv6 is replaced.

There may be address space to support it, but is there nimble boundary space for it?

The idea that there's a possible need for more than 4 bits worth of subnets in a home is simply ludicrous and we have people advocating 16 bits worth of subnets. How does that compare to the entire IPv4 Internet?

There is little that can be done about much of this now, but at least we can label some of these past decisions as ridiculous and hopefully a lesson for next time.

However, keeping back 64 bits for the host was a stupid move from the beginning. We're reserving 64 bits for what's currently a 48-bit number. You can use every single MAC address, whereas IPs are lost to subnetting and other such things. I could maybe have seen holding back 56 bits for the host if, for some reason, we needed to replace the current system of MAC addresses at some point before IPv6 is replaced.

EUI-64 isn’t the only thing out there that expects hosts to have 64-bit addresses. That was only an example.

There may be address space to support it, but is there nimble boundary space for it?

Yes. Do the math. If every end user got a /48 there’s still 281 *trillion* subnets to go around. The limiting factor in IPv4 is that nobody expected to be able to connect 4 billion devices to the Internet when it was conceived. I really doubt that we’ll see 281 trillion people walking around any time in the next 1000 generations of human civilization.
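
If you want to check that figure, the arithmetic is one line of Python:

```python
# Number of /48 prefixes in the full 128-bit IPv6 address space.
total_48s = 2 ** 48
print(f"{total_48s:,}")  # 281,474,976,710,656 -- roughly 281 trillion
```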

IPv6 is here to stay.

The idea that there's a possible need for more than 4 bits worth of subnets in a home is simply ludicrous and we have people advocating 16 bits worth of subnets. How does that compare to the entire IPv4 Internet?

You’re still stuck on “LOOOOL ADDRESSES.”

There is little that can be done about much of this now, but at least we can label some of these past decisions as ridiculous and hopefully a lesson for next time.

There isn’t going to be a next time.

*points and snickers quietly*

You're either an incredible optimist, or you're angling to be the next oft-misquoted "640KB should be enough for anyone" voice.

We got a good quarter of a century out of IPv4. I think we *might* hit the century mark with IPv6... maybe. But before we hit that, I suspect we'll have found enough shortcomings and gaps that we'll need to start developing a new addressing format to go with the newer networking protocols we'll be designing to fix those shortcomings.

Until the sun goes poof, there's *always* going to be a next time. We're never going to get it _completely_ right. You just have to consider a longer time horizon than our own careers.

Matt

I’m only going to say one more thing on this subject because this is essentially a sidebar that has very little to do with the subject matter of the OP.

If we hadn’t run out of address space, we’d still be trying to fix IPv4. The numbers don’t lie. It’s not very likely that we’re going to be space constrained on the IPv6 Internet the way we are on the IPv4 Internet. Nobody is going to want to repeat the pain of the last 17 years of trying to convince people to run IPv6.

Just about every technical challenge with the underlying protocol stack is fixable. Except for one: what happens when we run out of addresses. For all of its flaws, IPv6 addresses this one particular issue quite well.

Do those extra bits somehow physically hurt you?

Really the choice of address space was 64 or 128 bits. Anything else would
just make it cumbersome to implement in hardware.

We are assigning /48 to end users. If IPv6 addresses had been 64 bits, that
would leave just 16 bits to the users. We would have gone from "you get more
than you could possibly imagine" to "plenty for most, but not that much
really".

I am happy that we have 128 bits. It has already proven useful for many
purposes, including the ability to encode pairs of 32 bit IPv4 addresses as
part of the IPv6 address.
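
For illustration, here is a toy Python sketch of packing a pair of 32-bit IPv4 addresses into the low 64 bits of an IPv6 address. The prefix is made up and this is not any standardized encoding (RFC 6052, for example, standardizes embedding a single IPv4 address):

```python
import ipaddress

def embed_v4_pair(prefix_top64: int, a: str, b: str) -> ipaddress.IPv6Address:
    """Toy illustration: pack two 32-bit IPv4 addresses into the low
    64 bits of an IPv6 address under a fixed 64-bit prefix."""
    v4a = int(ipaddress.IPv4Address(a))
    v4b = int(ipaddress.IPv4Address(b))
    return ipaddress.IPv6Address((prefix_top64 << 64) | (v4a << 32) | v4b)

# Hypothetical translator prefix 2001:db8::/64 (documentation space)
print(embed_v4_pair(0x20010db8_0000_0000, "192.0.2.1", "198.51.100.2"))
# -> 2001:db8::c000:201:c633:6402
```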

Regards,

Baldur

There's nothing that can really be done about it now and I certainly wasn't able to participate when these things were decided.

However, keeping back 64 bits for the host was a stupid move from the beginning. We're reserving 64 bits for what's currently a 48-bit number. You can use every single MAC address, whereas IPs are lost to subnetting and other such things. I could maybe have seen holding back 56 bits for the host if, for some reason, we needed to replace the current system of MAC addresses at some point before IPv6 is replaced.

That’s not what happened. What happened was that we added 64 bits to the address space (the original thought was a 64 bit address space) in order to allow for simplified host autoconf based on EUI-64 addresses. It did seem like a good idea at the time.

At the time, IEEE had realized that they were running out of EUI-48 addresses and had decided that the next generation would be EUI-64; in fact, if you look at newer interfaces (e.g. FireWire) you will see that they ship with EUI-64 addresses baked in. Given that IEEE had already decided on EUI-64 as the way forward for “MAC” addresses, it seems to me that 64 bits makes more sense than 56.

There may be address space to support it, but is there nimble boundary space for it?

I think you mean nibble-boundary space for it and the answer is yes.

The idea that there's a possible need for more than 4 bits worth of subnets in a home is simply ludicrous and we have people advocating 16 bits worth of subnets. How does that compare to the entire IPv4 Internet?

I have more than 16 subnets in my house, so I can cite at least one house with need for more than 4 bits just in a hand-coded network.

Considering the future possibilities for automated topological hierarchies using DHCP-PD with dynamic joining and pruning routers, I think 8 bits is simply not enough to allow for the kind of flexibility we’d like to give to developers, so 16 bits seems like a reasonable compromise.
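
As a rough, hypothetical sketch of that bit budget (assuming each tier of delegating routers consumes one nibble), here is how many levels of sub-delegation fit between a delegated prefix and the /64 subnets at the leaves:

```python
# Nibble-aligned levels of DHCP-PD sub-delegation between a delegated
# prefix and the /64 subnets at the leaves (one nibble per level assumed).
def delegation_levels(prefix_len: int, bits_per_level: int = 4) -> int:
    return (64 - prefix_len) // bits_per_level

for plen in (48, 56, 60):
    print(f"/{plen}: {delegation_levels(plen)} nibble levels before /64")
# /48: 4 levels; /56: 2 levels; /60: 1 level
```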

There is little that can be done about much of this now, but at least we can label some of these past decisions as ridiculous and hopefully a lesson for next time.

TL;DR version: Below is a detailed explanation of why giving a /48 to every residence is harmless and just makes sense.

If you find that adequate, stop here. If you are still skeptical, read on…

Except that the decisions weren’t ridiculous. They not only made sense then; for the most part, if you consider a bigger picture and a longer-term view than just what we are experiencing today, they make even more sense.

First, unlike the 100 gallon or 10,000 gallon fuel tank analogy, extra bits added to the address space come at a near zero cost, so adding them if there’s any potential use is what I would classify as a no-brainer. At the time IPv6 was developed, 64-bit processors were beginning to be deployed and there was no expectation that we’d see 128-bit processors. As such, 128-bit addresses were cheap and easily implementable in anticipated hardware and feasible in existing hardware, so 128 bits made a lot of sense from that perspective.

From the 64 bits we were considering, adding another 64 bits so that we could do EUI-based addressing also made a lot of sense. 48 bits didn’t make much sense because we already knew that IEEE was looking at moving from 48 bits to 64 bits for EUI addresses. A very simple mechanism for translating an EUI-48 address into a valid unique EUI-64 address was already documented by IEEE (add an FF suffix to the OUI portion and an FE prefix to the ESI portion, and ensure that the Locally Generated bit is 1). As such, a locally generated 02:a9:3e:8c:7f:1d address becomes 02:a9:3e:ff:fe:8c:7f:1d while a registered address ac:87:a3:23:45:67 would become ae:87:a3:ff:fe:23:45:67.
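
Here is a minimal Python sketch of that expansion as just described. (Note that RFC 4291’s “modified EUI-64” formally inverts the universal/local bit rather than forcing it to 1; the two rules agree for registered addresses.)

```python
# Sketch of the EUI-48 -> EUI-64 expansion described above: insert ff:fe
# between the OUI and the ESI, and force the locally-generated bit to 1.
def eui48_to_eui64(mac: str) -> str:
    octets = [int(b, 16) for b in mac.split(":")]
    assert len(octets) == 6
    octets[0] |= 0x02  # ensure the Locally Generated bit is 1
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    return ":".join(f"{b:02x}" for b in eui64)

print(eui48_to_eui64("02:a9:3e:8c:7f:1d"))  # 02:a9:3e:ff:fe:8c:7f:1d
print(eui48_to_eui64("ac:87:a3:23:45:67"))  # ae:87:a3:ff:fe:23:45:67
```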

The justification for 16 bits of subnetting is a little more pie-in-the-sky, I’ll grant you, but given a 64-bit network numbering space, there’s really no disadvantage to giving out /48s and very little (or no) advantage to giving out smaller chunks to end-sites, regardless of their residential or commercial nature.

Let’s assume that ISPs come in essentially 3 flavors: MEGA (the Verizons, AT&Ts, Comcasts, etc. of the world) having more than 5 million customers, LARGE (having between 100,000 and 5 million customers), and SMALL (having fewer than 100,000 customers).

Let’s assume the worst possible splits and add 1 nibble to the minimum needed for each ISP and another nibble for overhead.

Further, let’s assume that 7 billion people on earth all live in individual households and that each of them runs their own small business bringing the total customer base worldwide to 14 billion.

If everyone subscribes to a MEGA and each MEGA serves 5 million customers, we need 2,800 MEGA ISPs. Each of those will need 5,000,000 /48s, which would require a /24. Let’s give each of those an additional 8 bits for overhead and bad splits and say each of them gets a /16. That’s 2,800 out of 65,536 /16s, and we’ve served every customer on the planet, with a lot of extra overhead, using approximately 4% of the address space.

Now, let’s make another copy of earth and serve everyone on a LARGE ISP with only 100,000 customers each. This requires 140,000 LARGE ISPs, each of whom will need a /28 (100,000 /48s doesn’t fit in a /32, so we bump them up to /28). Adding in bad splits and overhead at a nibble each, we give each of them a /20. Taking 140,000 /20s out of 1,048,576 total, of which we already used 44,800 for the MEGA ISPs, leaves us with 863,776 /20s still available. We’ve now managed to burn approximately 18% of the total address space and we’ve served the entire world twice.

Finally, let us serve every customer in the world using a SMALL ISP. Let’s assume that each SMALL ISP only serves about 5,000 customers. For 5,000 customers, we would need a /32. Backing that off two nibbles for bad splits and overhead, we give each one a /24.

This will require 2,800,000 /24s. (I realize lots of ISPs serve fewer than 5,000 customers, but those ISPs also don’t serve a total of 14 billion end sites, so I think in terms of averages, this is not an unreasonable place to throw the dart.)

There are 16,777,216 /24s in total, but we’ve already used 2,956,800 for the MEGA and LARGE ISPs, bringing our total utilization to 5,756,800 /24s.

We have now built three complete copies of the internet with some really huge assumptions about number of households and businesses added in and we still have only used roughly 34% of the total address space, including nibble boundary round-ups and everything else.
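
If you want to check my arithmetic, the whole scenario fits in a few lines of Python:

```python
# The three-copies-of-the-Internet scenario above, counted in /48s.
mega  = 2_800     * 2 ** (48 - 16)   # 2,800 MEGA ISPs, a /16 each
large = 140_000   * 2 ** (48 - 20)   # 140,000 LARGE ISPs, a /20 each
small = 2_800_000 * 2 ** (48 - 24)   # 2,800,000 SMALL ISPs, a /24 each
total = 2 ** 48                      # all /48s in the 128-bit space
print(f"{(mega + large + small) / total:.1%}")  # -> 34.3%
```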

I propose the following: Let’s give out /48s for now. If we manage to hit either of the following two conditions in less than 50 years, I will happily (assuming I am still alive when it happens) assist in efforts to shift to more restrictive allocations.

  Condition 1: If any RIR fully allocates more than 3 /12s worth of address space total
  Condition 2: If we somehow manage to completely allocate all of 2000::/3

I realize that Condition 2 is almost impossible without meeting Condition 1 much, much earlier, but I put it there just in case.

If we reach a point where EITHER of those conditions becomes true, I will be happy to support more restrictive allocation policy. In the worst case, we have roughly 3/4 of the address space still unallocated when we switch to more restrictive policies. In the case of Condition 1, we have a whole lot more. (At most we’ve used roughly 15[1] of the 512 /12s in 2000::/3, or less than 0.4% of the total address space.)
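
The arithmetic behind that figure:

```python
# Worst case under Condition 1: 5 RIRs x 3 /12s = 15 /12s fully allocated.
print(f"{15 * 2 ** (128 - 12) / 2 ** 128:.2%}")  # -> 0.37% of the space
```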

My bet is that we can completely roll out IPv6 to everyone with every end-site getting a /48 and still not burn more than 0.4% of the total address space.

If anyone can prove me wrong, then I’ll help to push for more restrictive policies. Until then, let’s just give out /48s and stop hand-wringing about how wasteful it is. Addresses that sit in the free pool beyond the end of the useful life of a protocol are also wasted.

Owen

[1] This figure could go up if we add more RIRs. However, even if we double it, we move from roughly 0.4% to 0.8% utilization risk with 10 RIRs.

Not quite true…

"What happens when we have to make an incompatible change to the fundamental packet header?” is the real challenge.

It happens that in the case of IPv4, we didn’t hit that particular wall until we needed a larger address.

In IPv6, if I had to guess, it will probably be something related to the ability to scale the number of routing destinations, but that’s so far in the future that predicting it now is somewhere between highly suspect and utterly impossible.

There will be a next time… There is _ALWAYS_ a next time with any human system. We always end up changing how we use things and then needing to adapt those things to those changes. That’s not a bad thing. Hopefully we will learn some lessons from this process and make the next transition somewhat less painful. However, most of those lessons are behavioral, and judging by our progress on climate change, I’m not convinced we’ve learned anything at all about addressing problems before they reach crisis status.

Owen

Owen DeLong <owen@delong.com> writes:

The idea that there's a possible need for more than 4 bits worth of
subnets in a home is simply ludicrous and we have people advocating
16 bits worth of subnets. How does that compare to the entire IPv4
Internet?

I have more than 16 subnets in my house, so I can cite at least one
house with need for more than 4 bits just in a hand-coded network.

Considering the future possibilities for automated topological
hierarchies using DHCP-PD with dynamic joining and pruning routers, I
think 8 bits is simply not enough to allow for the kind of flexibility
we’d like to give to developers, so 16 bits seems like a reasonable
compromise.

Thanks for summarizing why /48 for everybody is possible. But I fear
that is not helping much against arguments based on "need". I believe it
is difficult to argue that anyone needs any IP address at all, given
that there are lots of people in the world who seem to survive just fine
without one...

So, with that sorted out, let's consider what you can do with 16 bits of
subnets. One example is checksum-neutral prefix translation (RFC 6296)
without touching the interface ID bits. Let's say you have two upstream
ISPs handing you the prefixes A/48 and B/56. Neither offers any
multihoming support to residential users, and both do BCP38 of course.
So you use B/56 internally and do prefix translation to allow your
router to select the upstream without involving the clients. Thanks to
the A/48 from the first ISP, you are able to choose a set of 256 (or
possibly 255, since 0xffff cannot be used) checksum-neutral subnet
pairs.

Yes, I know. Evil. No need. No CPE support. Etc.

The important part is that 16 bits of subnets is enough to play
algorithmic tricks with the subnet part of your address too, whereas
this is much more difficult with fewer bits. No, you don't need to do
it. But you CAN. The sparse IPv6 addressing model is about opening up
possibilities. Note that those possibilities include restricting
yourself to using a single address. You don't have to use all your 2^80
addresses. :-)
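
To make "algorithmic tricks" concrete, here is a minimal Python sketch of a checksum-neutral translation for the simpler /48-to-/48 case. The prefixes are invented, and this is only the flavor of RFC 6296, not a complete implementation:

```python
import ipaddress

def ones_add(a: int, b: int) -> int:
    """16-bit one's-complement addition with end-around carry."""
    s = a + b
    return (s & 0xFFFF) + (s >> 16)

def prefix_words(prefix: str, n: int):
    """First n 16-bit words of a prefix such as '2001:db8:1::/48'."""
    v = int(ipaddress.ip_network(prefix).network_address)
    return [(v >> (112 - 16 * i)) & 0xFFFF for i in range(n)]

def npt66(addr: str, internal: str, external: str) -> str:
    """Swap the /48 prefix, then fold the checksum difference into the
    16-bit subnet word (bits 48..63) so that transport checksums over
    the address are unchanged. Subnet 0xffff is excluded in RFC 6296,
    hence the "255" caveat above."""
    a = int(ipaddress.IPv6Address(addr))
    w = [(a >> (112 - 16 * i)) & 0xFFFF for i in range(8)]
    delta = 0
    for old, new in zip(prefix_words(internal, 3), prefix_words(external, 3)):
        delta = ones_add(delta, old)            # add old prefix word
        delta = ones_add(delta, new ^ 0xFFFF)   # subtract new prefix word
    w[0:3] = prefix_words(external, 3)
    w[3] = ones_add(w[3], delta)                # adjust the subnet word
    out = 0
    for word in w:
        out = (out << 16) | word
    return str(ipaddress.IPv6Address(out))

print(npt66("fd01:203:405:1::1234", "fd01:203:405::/48", "2001:db8:1::/48"))
# -> 2001:db8:1:d550::1234 (same one's-complement sum as the original)
```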

And for the ISPs, using /48 for every user means fewer prefix lengths to
consider for routing and address management. Sure, we manage diverse
prefix lengths in IPv4 today, but why not take advantage of this
possible simplification if we can? Only those living on bugs will object
to simpler address databases and routing filters.

Bjørn

Owen DeLong <owen@delong.com> writes:

The idea that there's a possible need for more than 4 bits worth of
subnets in a home is simply ludicrous and we have people advocating
16 bits worth of subnets. How does that compare to the entire IPv4
Internet?

I have more than 16 subnets in my house, so I can cite at least one
house with need for more than 4 bits just in a hand-coded network.

Considering the future possibilities for automated topological
hierarchies using DHCP-PD with dynamic joining and pruning routers, I
think 8 bits is simply not enough to allow for the kind of flexibility
we’d like to give to developers, so 16 bits seems like a reasonable
compromise.

Thanks for summarizing why /48 for everybody is possible. But I fear
that is not helping much against arguments based on "need". I believe it
is difficult to argue that anyone needs any IP address at all, given
that there are lots of people in the world who seem to survive just fine
without one…

Arguments based on “need” don’t make any sense in an IPv6 context.

Sure, we shouldn’t be so profligate in our distribution of the address pool
that we run out well before the protocol’s useful life is exhausted, but I
think I’ve shown that the current allocation policies, including /48 have
adequate protection against that occurring.

Being more restrictive just for the sake of being more restrictive doesn’t
serve any purpose. It doesn’t help anyone. As such, I just don’t understand
those arguments. If someone can show me a tangible benefit from a more
restrictive policy, I’m open to considering it, but so far, none exists.

So, with that sorted out, let's consider what you can do with 16 bits of
subnets. One example is checksum-neutral prefix translation (RFC 6296)
without touching the interface ID bits. Let's say you have two upstream
ISPs handing you the prefixes A/48 and B/56. Neither offers any
multihoming support to residential users, and both do BCP38 of course.
So you use B/56 internally and do prefix translation to allow your
router to select the upstream without involving the clients. Thanks to
the A/48 from the first ISP, you are able to choose a set of 256 (or
possibly 255, since 0xffff cannot be used) checksum-neutral subnet
pairs.

That’s a really icky alternative to simple BGP multihoming (which is what
I’m currently using at home).

Of course, not the worst, but a significantly bad part of this is the provider
that’s only giving you a /56 to begin with. ;-)

Yes, I know. Evil. No need. No CPE support. Etc.

True that.

The important part is that 16 bits of subnets is enough to play
algorithmic tricks with the subnet part of your address too, whereas
this is much more difficult with fewer bits. No, you don't need to do
it. But you CAN. The sparse IPv6 addressing model is about opening up
possibilities. Note that those possibilities include restricting
yourself to using a single address. You don't have to use all your 2^80
addresses. :-)

I completely agree.

And for the ISPs, using /48 for every user means fewer prefix lengths to
consider for routing and address management. Sure, we manage diverse
prefix lengths in IPv4 today, but why not take advantage of this
possible simplification if we can? Only those living on bugs will object
to simpler address databases and routing filters.

Again, you’re preaching to the choir.

Owen

Comments inline

Owen DeLong <owen@delong.com> writes:

The idea that there's a possible need for more than 4 bits worth of
subnets in a home is simply ludicrous and we have people advocating
16 bits worth of subnets. How does that compare to the entire IPv4
Internet?

I have more than 16 subnets in my house, so I can cite at least one
house with need for more than 4 bits just in a hand-coded network.

Considering the future possibilities for automated topological
hierarchies using DHCP-PD with dynamic joining and pruning routers, I
think 8 bits is simply not enough to allow for the kind of flexibility
we’d like to give to developers, so 16 bits seems like a reasonable
compromise.

Thanks for summarizing why /48 for everybody is possible. But I fear
that is not helping much against arguments based on "need". I believe it
is difficult to argue that anyone needs any IP address at all, given
that there are lots of people in the world who seem to survive just fine
without one…

Arguments based on “need” don’t make any sense in an IPv6 context.

Sure, we shouldn’t be so profligate in our distribution of the address pool
that we run out well before the protocol’s useful life is exhausted, but I
think I’ve shown that the current allocation policies, including /48 have
adequate protection against that occurring.

Being more restrictive just for the sake of being more restrictive doesn’t
serve any purpose. It doesn’t help anyone. As such, I just don’t understand
those arguments. If someone can show me a tangible benefit from a more
restrictive policy, I’m open to considering it, but so far, none exists.

The best feature of being more restrictive is the continued employment of the people and processes that do the restricting.

The worst feature of being more restrictive is paying for those extra people and processes.

If we standardize on /48 (or whatever), then we can put all that money and labor into solving the real business problems of the IPv6 Internet.

  Cutler