Credit to Digital Ocean for IPv6 offering

There are still applications that break with a subnet smaller than /64, so all VPS providers probably have to use /64 addressing.

Wouldn't that argue for /64s?

/64 netmask, but not /64 for a customer. There are applications which break if provided with a /80 or /120, but I am not aware of an application requesting a /64 for itself.

/64 for one customer seems to be too much,

In what way? What are you trying to protect against? It can't be address exhaustion (there are 2,305,843,009,213,693,952 possible /64s in the currently used format specifier. If there are 1,000,000,000 customer assignments every day of the year, the current format specifier will last over 6 million years).
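For anyone who wants to check that arithmetic, here is a back-of-the-envelope sketch in Python, assuming the "currently used format specifier" refers to the 2000::/3 global unicast range:

    import ipaddress

    # 2000::/3 is the global unicast range that current allocations come from.
    global_unicast = ipaddress.ip_network("2000::/3")
    slash64s = 2 ** (64 - global_unicast.prefixlen)      # /64s inside that /3
    print(f"{slash64s:,} possible /64s")                 # 2,305,843,009,213,693,952

    assignments_per_day = 1_000_000_000
    years = slash64s / assignments_per_day / 365
    print(f"~{years / 1e6:.1f} million years at that rate")   # ~6.3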

Too much hassle, like an overly big config on your router. If you have 1,000 customers in a subnet, you would have to have 1,000 separate gateway IPs on your router interface plus 1,000 local /64 routes.

There's no problem with assigning at least a /64 per customer even for VPSs.

There are plenty of /64s to go around.

Please stop trying to push the IPv4 scarcity mentality onto IPv6. Subnet where it makes sense to subnet, and assign a /64 to each subnet; whether it has 2 hosts or 2,000 hosts does not matter.

In reality, the difference in waste between a /64 with 2,000 hosts on it and one with 2 hosts on it is less than 0.00001%.

Owen

There are still applications that break with a subnet smaller than /64,
so all VPS providers probably have to use /64 addressing.

Wouldn't that argue for /64s?

/64 netmask, but not /64 for a customer. There are applications which
break if provided with a /80 or /120, but I am not aware of an application
requesting a /64 for itself.

Except for SLAAC, which requires a /64 because it builds the address from
a modified EUI-64 interface identifier (derived from the 48-bit MAC),
which "applications" are these? Those applications are broken by design.

An application (unless it is a protocol like SLAAC or something else
similarly low-level) does not need to know about prefix sizes or
routing tables.

Thus, can you please identify these applications so that we can hammer
on the developers of those applications and fix that problem?

/64 for one customer seems to be too much,

In what way? What are you trying to protect against? It can't be
address exhaustion (there are 2,305,843,009,213,693,952 possible /64s
in the currently used format specifier. If there are 1,000,000,000
customer assignments every day of the year, the current format
specifier will last over 6 million years).

Too much hassle, like an overly big config on your router. If you have 1,000
customers in a subnet, you would have to have 1,000 separate gateway IPs
on your router interface plus 1,000 local /64 routes.

Wow, you really stuff all the customers into the same VLAN and thus the
same routed subnet... lots of fun those other customers will have with
that, especially as a lot of folks simply do not know that IPv6 is
already there and has been enabled in their distributions, applications
and kernels for many, many years...

As for "why" VPSs are doing the limited number of IPs per VM, simply:
https://www.youtube.com/watch?v=YcXMhwF4EtQ

And if you want more, you can buy more... or vote with your money and take
your business elsewhere...

Greets,
Jeroen

This is actually pretty easy. If I were structuring a VPS environment, I'd put a /56 or possibly a /52 on each physical server, depending on the number of virtuals expected there. Then, for each customer who got a VPS on that server, I'd create a bridge interface with a /64 assigned to that customer. Each VPS on that physical server that belonged to the same customer would be put on the same /64.

The router would route the /56 or /52 to the physical server. The hypervisor would have connected routes for the subordinate /64s and provide RAs to give default to the various VPSs.

Very low maintenance, pretty straightforward and simple.
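A rough sketch of that layout using Python's ipaddress module; the documentation prefix and customer names below are placeholders for illustration, not anything specified above:

    import ipaddress

    # The /52 that the router routes to this physical server (documentation prefix).
    server_block = ipaddress.ip_network("2001:db8:0:f000::/52")
    per_customer = server_block.subnets(new_prefix=64)    # one /64 per customer bridge

    customers = ["cust-a", "cust-b", "cust-c"]            # hypothetical customer IDs
    bridges = {name: next(per_customer) for name in customers}

    for name, net in bridges.items():
        # Every VPS of the same customer on this server sits on the same /64; the
        # hypervisor holds the connected route and answers RAs on that bridge.
        print(f"{name}: prefix {net}, hypervisor gateway {net[1]}")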

Why would you ever put multiple customers in the same subnet in IPv6? That's just asking for trouble if you ask me.

Owen

Once upon a time, Owen DeLong <owen@delong.com> said:

The router would route the /56 or /52 to the physical server. The hypervisor would have connected routes for the subordinate /64s and provide RAs to give default to the various VPSs.

Doing anything that ties networks to physical servers is a poor design
for a VPS environment. That would mean that any VM migration requires
customers to renumber (so no live migration allowed at all).

Why? Two hypervisors tossing a subnet route to a VM back and forth is
*exactly* the same problem as two routers using VRRP to toss a subnet
route back and forth. And somehow, we all manage to do that *all the time*
without machines on the subnet having to renumber.

I tried to configure my FreeBSD box at home to
use a /120 subnet mask. It consistently crashed
with a kernel panic. I eventually gave up and just
configured it with a /64. Not really an application
per se, but since the OS died, I couldn't actually
tell if the applications were happy or not. :(

Matt

[..]

I tried to configure my FreeBSD box at home to
use a /120 subnet mask. It consistently crashed
with a kernel panic.

Where is the bug report?

I am fairly confident that that really should not be an issue, with the
BSD stack being one of the oldest IPv6 stacks around (thank you itojun
and the rest of KAME!)

Greets,
Jeroen

In article <CABL6YZT7sSFxdBL1_UDVc2_t3X1drW0_AToHE51o2Pd=obDVrw@mail.gmail.com> you write:

+1+1+1 re living room

My cable company assigns my home network a /50. I can figure out what
to do with two of the /64s (wired and wireless networks), but I'm
currently stumped on the other 16,382 of them.
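The count does check out, for anyone following along (using a documentation prefix in place of the real one):

    import ipaddress

    home = ipaddress.ip_network("2001:db8::/50")   # stand-in for the cable /50
    total = 2 ** (64 - home.prefixlen)             # 16,384 /64s in a /50
    print(total - 2)                               # 16,382 left after wired + wireless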

R's,
John

announce them so folks can use the space as darknets…

/bill
PO Box 12317
Marina del Rey, CA 90295
310.322.8102

Didn't file a bug report; just used it as proof of
why a bigger IPv6 allocation was needed, and
worked around the problem that way. If you're
curious, I can change /etc/rc.conf.local back
and recreate the problem. Not sure who I'd
file the bug with, though.

Matt

Which ones?

Mark.

Which ones?

Mark.

I haven't done extensive testing. I have just tried to divide a /64 into smaller subnets and to run Debian and Windows on it (as Matthew Petach did with his FreeBSD). I think I tried /112 or /120. Debian was mostly fine; just one torrent or newsgroup client couldn't do v6 (I can't recall which one). With Windows it was a different story, and basically nothing really worked.

It was some time ago and I haven't tried Windows 7 SP1; maybe it has been fixed by now. Does anyone have Windows running IPv6 with a prefix longer than /64?

I've got a /56 which I'm then delegating /60s from - so, for example, I've
got a laptop on which I run things like VirtualBox and Docker. This laptop
has a /60 and can hand out /64s for virtual networks.

I figure that with the larger allocations to homes or offices the question
isn't "how do I allocate all of these" but "how do I delegate chunks of
this in a hierarchical manner."
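A small sketch of that kind of hierarchical carving with Python's ipaddress module (the /56 below is a documentation prefix standing in for the real delegation, and the bridge names are hypothetical):

    import ipaddress

    site = ipaddress.ip_network("2001:db8:0:ff00::/56")   # stand-in for the delegated /56
    laptop = next(site.subnets(new_prefix=60))            # hand one /60 to the laptop

    # The laptop can in turn carve /64s out of its /60 for virtual networks.
    virt = laptop.subnets(new_prefix=64)
    for name in ("virtualbox0", "docker0"):               # hypothetical bridge names
        print(f"{name}: {next(virt)}")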

Dan

Thus, can you please identify these applications so that we can hammer
on the developers of those applications and fix that problem?

I haven't done extensive testing. I have just tried to divide a /64 into
smaller subnets and to run Debian and Windows on it (as Matthew Petach
did with his FreeBSD). I think I tried /112 or /120. Debian was
mostly fine; just one torrent or newsgroup client couldn't do v6 (I can't
recall which one). With Windows it was a different story, and basically
nothing really worked.

Why would a torrent client care about the prefix length?

But anyway, you had some random application that nobody uses that was
broken; that seems to be a problem with that specific application, not
with anything else.

It was some time ago and I haven't tried Windows 7 SP1; maybe it has
been fixed by now. Does anyone have Windows running IPv6 with a prefix
longer than /64?

I've only played with the NT4, Win2k, XP, and Vista stacks, and these
work fine in every scenario (/64 SLAAC, or /128 static config).

Hence you'll need to provide a lot more details than "it didn't work"...

Greets,
Jeroen

Strictly speaking, the SLAAC standard does not care about network size; you
could write a standard that uses SLAAC on arbitrary media with an arbitrary
network size. On Ethernet a modified EUI-64 is used, but that is not a hard
technical limitation; in fact, Cisco IOS will happily accept any prefix size
on Ethernet and SLAAC will work fine.
SLAAC never makes any guarantee of uniqueness, which implies the network can
be arbitrarily small, since some other method (DAD) is needed for uniqueness
guarantees anyway.
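For context, a short sketch of the modified EUI-64 construction (RFC 4291) that SLAAC conventionally uses on Ethernet, which is where the 64-bit interface identifier, and hence the /64 habit, comes from; the MAC address and prefix are made-up example values:

    import ipaddress

    def modified_eui64(mac: str) -> bytes:
        """Build the 8-byte modified EUI-64 interface ID from a 48-bit MAC."""
        b = bytearray(int(octet, 16) for octet in mac.split(":"))
        b[0] ^= 0x02                                       # flip the universal/local bit
        return bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])   # insert ff:fe in the middle

    prefix = ipaddress.ip_network("2001:db8:1::/64")       # example advertised on-link /64
    iid = int.from_bytes(modified_eui64("52:54:00:12:34:56"), "big")   # example MAC
    addr = ipaddress.ip_address(int(prefix.network_address) + iid)
    print(addr)                                            # 2001:db8:1:0:5054:ff:fe12:3456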

bz@freebsd.org

(Looking at Bjoern with an evil grin...)

Simon

To add on to this, it appears that DO now considers the request for IPv6 to be "COMPLETE" because they have rolled it out in a single DC in Singapore, when the request was made by a lot of people BEFORE the Singapore DC was ever available.

A great lack of respect for your customer base...

http://digitalocean.uservoice.com/forums/136585-digitalocean/suggestions/2639897-ipv6-addresses

Of course, one could also read the giant paragraph written by the CEO and see exactly what's going on, including the info about the other data centers and the new ones coming up.

I love how people whine that operators don't deploy IPv6 quickly enough, and then cry even harder when it is actually being deployed because it isn't perfect and available everywhere on the first day.

Really, give it a break.