ISP customer assignments

From what I can tell from an ISP perspective, the design of IPv6 is for assignment of a /64 to an end user. Is this correct? Is this how it is currently being done? If not, where am I going wrong?

Thank you.

- Brian

Brian Johnson wrote:

From what I can tell from an ISP perspective, the design of IPv6 is for assignment of a /64 to an end user. Is this correct? Is this how it is currently being done? If not, where am I going wrong?

The most common thing I see is /64 if the end user only needs one
subnet, /56 if they need more than one.

~Seth

So a customer with a single PC hooked up to their broadband connection would be given 2^64 addresses?

I realize that this is future proofing, but OMG! That’s the IPv4 Internet^2 for a single device!

Am I still seeing/reading/understanding this correctly?

- Brian

Brrzt, wrong. Neither the end user nor you know the answer to that question!

So the only sensible thing is to always give them a /56.

(Actually, the IPv6 address architecture design was to give them a /48. Think about it: we will run out of MAC addresses before we run out of those. But some people can't manage the cognitive dissonance of coming from an address-starved IPv4 world and then "wasting" all these 2^80 addresses. My parents, who grew up around WW2, were that way, too, and never could unlearn their "saving" habits. So the current "wise" thing is to allocate a /56, "wasting" only 2^72 addresses per customer. The only way back to a connected Internet.)
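
A quick back-of-the-envelope sketch (plain Python; nothing assumed beyond the prefix lengths under discussion) shows where those 2^80 and 2^72 figures come from:

    # Each /64 subnet holds 2**64 interface addresses; a customer prefix
    # shorter than /64 holds 2**(64 - plen) such subnets.
    for plen in (64, 60, 56, 48):
        subnets = 2 ** (64 - plen)        # /64 subnets in the prefix
        addresses = 2 ** (128 - plen)     # total addresses in the prefix
        print(f"/{plen}: {subnets:>6} x /64 subnets, 2^{128 - plen} addresses")

    # /48: 65536 /64 subnets -> 2^80 addresses per customer
    # /56:   256 /64 subnets -> 2^72 addresses per customer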

Regards, Carsten

So a customer with a single PC hooked up to their broadband connection would be given 2^64 addresses?

I realize that this is future proofing, but OMG! That’s the IPv4
Internet^2 for a single device!

No, for a single LAN.

Am I still seeing/reading/understanding this correctly?

more-or-less. Can I suggest you read:

IPv6 - Wikipedia

Think of ipv6 not as 128 bits of address space, but more as an addressing system with a globally unique host part and 2^64 possible subnets. In this respect it's substantially different to ipv4.

Nick

Yes, each and every network segment (especially multi-access ones) should be a /64. Regardless of the types of machines, speed of link, etc. It is an entirely different model of addressing, whose name just happens to start with IP ...

/TJ


No. A /64 is one *subnet*. Essentially the standard, static size for
any Ethernet LAN. For a customer, the following values are more
appropriate:

/128 - connecting exactly one computer. Probably only useful for your
dynamic dialup customers. Any always-on or static-IP customer should
probably have a CIDR block.

/48 - current ARIN/IETF recommendation for a downstream customer
connecting more than one computer unless that customer is large enough
to need more than 65k LANs.

/56 - in some folks' opinion, slightly more sane than assigning 65k subnets and bazillions of addresses to a home hobbyist with half a dozen PCs.

/60 - the smallest amount you should allocate to a downstream customer with more than one computer. Anything smaller will cost you extra management overhead: not matching the nibble boundary for RDNS delegation (see the sketch below), handling multiple routes when the customer grows, not matching the standard /64 subnet size, and a myriad of other obscure issues.
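
The nibble-boundary point deserves a concrete illustration: ip6.arpa delegation happens one hex digit (4 bits) at a time, so only prefixes whose length is a multiple of 4 map onto a single reverse zone. A minimal sketch using Python's standard ipaddress module (the prefix is from the documentation range, purely illustrative):

    import ipaddress

    def rdns_zone(prefix: str) -> str:
        """Return the single ip6.arpa zone covering the prefix; one only
        exists when the prefix length falls on a nibble boundary."""
        net = ipaddress.ip_network(prefix)
        if net.prefixlen % 4:
            raise ValueError(f"/{net.prefixlen} is not on a nibble boundary")
        nibbles = net.network_address.exploded.replace(":", "")  # 32 hex digits
        keep = net.prefixlen // 4        # hex digits covered by the prefix
        return ".".join(reversed(nibbles[:keep])) + ".ip6.arpa"

    print(rdns_zone("2001:db8:ab00::/56"))
    # -> 0.0.0.0.b.a.8.b.d.0.1.0.0.2.ip6.arpa  (one zone, one delegation)
    # A /61, by contrast, spans part of a hex digit, so its reverse space
    # cannot be delegated as a single ip6.arpa zone.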

Regards,
Bill Herrin

more-or-less. Can I suggest you read:

IPv6 - Wikipedia

Think of ipv6 not as 128 bits of address space, but more as an addressing system with a globally unique host part and 2^64 possible subnets. In this respect it's substantially different to ipv4.

And after reading Wikipedia, follow it up with ARIN's
http://www.getipv6.info wiki site.

--Michael Dillon

What would be "wrong" with using a /64 for a customer who only has a
local network? Most home users won't understand what a subnet is.

- Brian


IPv6 CPEs may be designed to get one subnet per physical medium via DHCPv6-PD, so for example wireless and wired may be different subnets. Really, /56 is the way to go for residential assignments.
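
For illustration, here is roughly what that carve-up looks like with Python's ipaddress module; the delegated /56 is a made-up documentation prefix and the per-medium names are hypothetical:

    import ipaddress

    # Hypothetical /56 delegated to a residential CPE via DHCPv6-PD
    # (2001:db8::/32 is the IPv6 documentation prefix).
    delegated = ipaddress.ip_network("2001:db8:42:ff00::/56")

    # The CPE can hand a distinct /64 to each physical medium.
    pool = delegated.subnets(new_prefix=64)
    for medium in ("wired", "wireless", "guest"):
        print(medium, next(pool))
    # wired 2001:db8:42:ff00::/64
    # wireless 2001:db8:42:ff01::/64
    # guest 2001:db8:42:ff02::/64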

"Brian Johnson" <bjohnson@drtel.com> writes:

So a customer with a single PC hooked up to their broadband connection would be given 2^64 addresses?

I realize that this is future proofing, but OMG! That’s the IPv4
Internet^2 for a single device!

Most people will have more than one device. And there is no NAT as you know it from IPv4 (and hopefully there never will be; I had to troubleshoot a NAT-related problem today and it wasn't fun.[1])

And I want more than one network: I want to have a firewall between my fridge and my file server.

Am I still seeing/reading/understanding this correctly?

RFC 3177 suggests a /48.

Forget about IPv4 when assigning IPv6 networks to customers. Think big and take a one-size-fits-all(most) approach. Assign a /48 or /56 to your customers and they will never ask you about additional IPs again. This makes documentation really easy. ;-)

cheers

Jens

[1] Everybody who claims that NAT is easy should have his or her head
examined.

Am I the only one that finds this problematic? I mean, the whole point of moving to a 128-bit address was to ensure that we would never again have a problem of address depletion. Now I'm not saying that this puts us anywhere in that boat (yet), but isn't saying "oh, let's just put a /64 on every interface" pretty well ignoring the lessons of the last 20 years? Surely a /96 or even a /112 would have been just as good.

Let's think longer term... IPv4 is several decades old now and still in use. If IPv6 lasts another 50 years before someone decides that it needs a redo, with current practices, what will things look like? Consider the population at that point and consider the number of interfaces as more and more devices become IP-enabled. "Wireless" devices have their own issues to contend with (spectrum being perhaps the biggest limiter), so wired devices will always be around. That means physical interfaces and probably multiple LANs in each residence. I can see where each device may want its own LAN and will talk to components of itself using IP internally, perhaps even having a valid reason for making these individual components publicly addressable.

Like I said, I'm not necessarily saying we're going to find ourselves in that boat again, but it does seem as though more thought is required. (And yes, I fully realize the magnitude of 2^64. I also fully realize how quickly inexhaustible resources become rationable.)

-Wayne

What would be "wrong" with using a /64 for a customer who only has a
local network? Most home users won't understand what a subnet is.

It's a question of convenience... your customers', but more
importantly yours. Every time you have to deviate from your default,
whatever default you pick, that's an extra overhead cost you have to
bear. Absent a compelling reason not to, you should structure your
default choice so that it accommodates as many customers as possible.

There are too many good reasons why someone might want to use two subnets with two different security policies, and not enough reasons (zero, in fact) why it would help you to give them fewer subnets than the 16 in a /60.

So a customer with a single PC hooked up to their broadband connection would be given 2^64 addresses?
I realize that this is future proofing, but OMG! That’s the IPv4
Internet^2 for a single device!

Some clever guy figured out that if you use 64 bits you can write
algorithms that automatically assign an interface's IP address based
on its MAC address without having to arp for it. Since the details of
IPv6 were not yet firmly fixed at that point and ram is cheap, why not
add an extra 64 bits for that very convenient improvement? This is
called "stateless autoconfiguration."

Some even more clever guy figured out that if the first clever guy's
strategy is used, it becomes a trivial matter to track someone
online... based on the last 64 bits of their IP address which will
remain static for the life of the hardware they use regardless of
where they connect to the 'net. Given this rather blatant weakness and
given that you still need DHCP to assign DNS resolvers and the like,
stateless autoconfiguration will probably end up being a waste. That's
unfortunate, but look at it this way: the important part is not how
many addresses are wasted, it's how many addresses are usable.

Regards,
Bill Herrin

They probably don't -- but some appliance they buy might. Maybe some home "family-oriented" box will put the kids' machines on a separate VLAN, to permit rate-limiting, port- and destination-filtering, time-of-day limits, etc. In the past, I had to do similar things -- no AIM during homework hours, no file-sharing -- to the point that I had four subnets in my house (wireless, teen-net, workVPN, and backbone/parents). I don't expect the average consumer to set up something like that, but I sure wouldn't be surprised at appliances that did.

    --Steve Bellovin, http://www.cs.columbia.edu/~smb

[here we go again]

Some clever guy figured out that ... why not
add an extra 64 bits for that very convenient improvement? This is
called "stateless autoconfiguration."

Except that "clever guy" was in fact an idiot blinded by idealism. Not only did he fail to see the security implications of having a fixed address, but he'd apparently spent his entire life under a rock, on an island, on another planet... he completely ignored the fact that people were using DHCP [formerly known as BOOTP] (and have been now for over a decade) to provide machines with FAR MORE than just an address. A machine needs more than just an address to be useful -- something IPv6 users learn very quickly after turning off IPv4 and it's DHCP learned info.

Some even more clever guy figured out that if the first clever guy's
strategy is used, it becomes a trivial matter to track someone
online... ...
stateless autoconfiguration will probably end up being a waste.

It's ALWAYS been a waste. All these supposed "clever guys" failed to learn from the mistakes that preceded them and have doomed us to repeat them... ICMP router discovery (technology abandoned so long ago, I'd forgotten about it), RARP, bootp, dhcp. SLAAC loops us back around to the beginning. Only this time, it's inescapable: I still have to have something on the network spewing RAs for the sole purpose of telling everything to use DHCP instead; there's a hard "class" boundary smack in the middle of a "classless network" because these "clever guys" were lazy and didn't want to figure out ways to avoid address collisions. (something modern IPv6 stacks do by default for privacy -- randomly generated addresses have to be tested for uniqueness.)

--Ricky

Am I the only one that finds this problematic? I mean, the whole point of moving to a 128-bit address was to ensure that we would never again have a problem of address depletion. Now I'm not saying that this puts us anywhere in that boat (yet), but isn't saying "oh, let's just put a /64 on every interface" pretty well ignoring the lessons of the last 20 years? Surely a /96 or even a /112 would have been just as good.

The current guidance applies only to one /3 (2000::/3) out of eight. Different rules could be applied to the others.

Like I said, I'm not necessarily saying we're going to find ourselves in that boat again, but it does seem as though more thought is required. (And yes, I fully realize the magnitude of 2^64. I also fully realize how quickly inexhaustible resources become rationable.)

As it happens, Windows boxes now generate random interface IDs (not based on MACs), which could just as easily have been 32 bits, with a default 96-bit subnet prefix rather than 64. But we are where we are, and we do have interesting ideas like CGAs as a result.
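
A randomized interface ID of the sort Windows uses is trivial to generate; a sketch in the spirit of RFC 4941, using a documentation prefix in place of a real subnet:

    import ipaddress
    import secrets

    def random_interface_id() -> int:
        """64 random bits with the universal/local bit cleared, marking
        the ID as locally generated rather than derived from a MAC."""
        return secrets.randbits(64) & ~(1 << 57)

    prefix = ipaddress.ip_network("2001:db8:1:2::/64")
    addr = ipaddress.ip_address(int(prefix.network_address)
                                | random_interface_id())
    print(addr)  # e.g. 2001:db8:1:2:78c0:... -- still subject to DAD on the LAN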

[here we go again]

Some clever guy figured out that ... why not
add an extra 64 bits for that very convenient improvement? This is
called "stateless autoconfiguration."

Except that "clever guy" was in fact an idiot blinded by idealism. Not only did he fail to see the security implications of having a fixed address, but he'd apparently spent his entire life under a rock, on an

A publicly routable stateless autoconfigured address is no less secure than a publicly routable address assigned by DHCP. Security is, and should be, handled by other means.

island, on another planet... he completely ignored the fact that people were using DHCP [formerly known as BOOTP] (and have been now for over a decade) to provide machines with FAR MORE than just an address. A

That's what stateless DHCPv6 does.

Some even more clever guy figured out that if the first clever guy's
strategy is used, it becomes a trivial matter to track someone
online... ...
stateless autoconfiguration will probably end up being a waste.

It's ALWAYS been a waste. All these supposed "clever guys" failed to learn from the mistakes that preceded them and have doomed us to repeat them... ICMP router discovery (technology abandoned so long ago, I'd forgotten about it), RARP, bootp, dhcp. SLAAC loops us back around to the beginning. Only this time, it's inescapable: I still have to have something on the network spewing RAs for the sole purpose of telling everything to use DHCP instead; there's a hard "class" boundary smack in the middle of a "classless network" because these "clever guys" were lazy and didn't want to figure out ways to avoid address collisions.

I don't understand. You're saying you have overlapping class boundaries in
your network?

This is where I think there is a major disconnect on IPv6. The size of the pool is just so large that people just can't wrap their heads around it.

2^128 is enough space for every man, woman and child on the planet to have roughly 2.6 billion /64s to themselves. Even if we assume everyone might possibly need, say, 10 /64s per person, that still means we are covered until the population hits around 1,800,000,000,000,000,000.
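
The arithmetic is easy to sanity-check (the population figure is approximate):

    total_64s = 2 ** 64          # /64 subnets in the whole 128-bit space
    population = 7 * 10 ** 9     # rough world population

    print(f"{total_64s:.3e}")           # ~1.845e+19 /64 subnets
    print(total_64s // population)      # ~2.6 billion /64s per person
    print(total_64s // 10)              # at 10 each: ~1.8e18 people covered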

Chris

I think another disconnect is our understanding and expectations of
addressing needs with IPv6. The challenge of IPv6 address assignment is to
predict what home and enterprise networks will look like in 10, 20 or more
years.

Do we want to implement an assignment method based on conservation, grounded in what we know and understand today, that maximizes the lifetime of IPv6? Or do we want to use an approach that maximizes its usefulness (and the utility of the internet) over the next 50 years?