IPv6 fc00::/7 — Unique local addresses

A perfectly valid way to multihome, right? Set up each host with two
IP addresses, one in each PA range, and use multiple DNS records to
advertise both of each host's IPs. If an ISP link goes down, all the
clients should automatically retry the unacked packets against the
DNS name's other IPs in 10 or 11 seconds, and recover without having
to reconnect, right? Right?? [ No :frowning: ]

Automatic failover to other multihomed IPs seems always to have been
left out of the TCP protocol, for one reason or another.

Probably for good reasons, but that multihoming strategy isn't a very
good one for now: it disrupts active connections, and badly written
clients won't try the other DNS records even when establishing a new
connection.

Perhaps one day there will be a truly reliable transport protocol,
with an API that allows a bind() against multiple IPs and a connect()
to all of a target host's IPs instead of just one, so both hosts can
learn which of each other's IP addresses are offered for that
connection. Then "multiple PA IP addresses" would be a technically
viable multihoming strategy.
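Lacking that, the closest a client can get today is trying every address DNS returns and falling back on failure - which only helps at connection setup, not for already-established connections. A minimal sketch in Python (the function name and timeout are mine):

```python
import socket

def connect_any(host, port, timeout=2.0):
    """Try each address DNS returns for host, in order, and return
    the first socket that connects. A sketch of fallback-at-setup
    only; it does nothing for connections that are already up."""
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)
            s.settimeout(None)  # back to blocking mode for the caller
            return s
        except OSError as err:
            last_err = err
            s.close()
    raise last_err if last_err else OSError("no usable address for %r" % host)
```

Happy Eyeballs (RFC 8305) refines this idea by racing the candidates in parallel rather than trying them serially.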

>
> To make it clear, as it seems to be quite misunderstood, you'd have
> both ULA and global addressing in your network.

Right. Just like to multihome with IPv6 you would have both PA addresses
from provider #1 and PA addresses from provider #2 in your network.

Only nobody wants to do that either.

Only because there isn't good support for it yet.

ULA + PA actually works today. The IP stack can do the address
selection without worrying about reachability. The chances of the
ULA being unreachable and the PA being reachable between two nodes
in the same ULA prefix are negligible. If I'm talking to a ULA
address I'll use my ULA address. If I'm talking to a non-ULA address
I'll use my PA addresses.

PA + PA is a problem because you need to worry about source address
selection and that is driven by reachability. You also need to
worry about egress points due to source address filtering. etc.
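That selection rule is simple enough to write down. A simplified sketch (function names are mine; real stacks implement the fuller RFC 3484/6724 procedure with a policy table): pick a source of the same class as the destination, breaking ties by longest common prefix:

```python
import ipaddress

ULA = ipaddress.ip_network("fc00::/7")

def common_prefix_len(a, b):
    """Number of leading bits two IPv6 addresses share."""
    return 128 - (int(a) ^ int(b)).bit_length()

def pick_source(dest, candidates):
    """ULA destination -> prefer a ULA source; global destination ->
    prefer a global source; break ties by longest matching prefix.
    A toy slice of RFC 6724's source address selection."""
    dest = ipaddress.IPv6Address(dest)
    cands = [ipaddress.IPv6Address(c) for c in candidates]
    matching = [c for c in cands if (c in ULA) == (dest in ULA)] or cands
    return str(max(matching, key=lambda c: common_prefix_len(c, dest)))
```

Note there is no reachability probing anywhere in it, which is exactly why ULA + PA is tractable and PA + PA is not.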

> Right. Just like to multihome with IPv6 you would have both PA addresses
> from provider #1 and PA addresses from provider #2 in your network.
> Only nobody wants to do that either.

> A perfectly valid way to multihome, right? Set up each host with two
> IP addresses, one in each PA range, and use multiple DNS records to
> advertise both of each host's IPs. If an ISP link goes down, all the
> clients should automatically retry the unacked packets against the
> DNS name's other IPs in 10 or 11 seconds, and recover without having
> to reconnect, right? Right?? [ No :frowning: ]
>
> Automatic failover to other multihomed IPs seems always to have been
> left out of the TCP protocol, for one reason or another.
>
> Probably for good reasons, but that multihoming strategy isn't a very
> good one for now: it disrupts active connections, and badly written
> clients won't try the other DNS records even when establishing a new
> connection.
>
> Perhaps one day there will be a truly reliable transport protocol,
> with an API that allows a bind() against multiple IPs and a connect()

* Stream Control Transport Protocol, first spec'd in 2000 (couldn't
  be deployed widely in IPv4 because of NATs)

* "TCP Extensions for Multipath Operation with Multiple Addresses" and
  "Architectural Guidelines for Multipath TCP Development"

That protocol already exists and is installed on almost every personal computer in the world... but alas, there's still a lot of TCP out there.

By the way, the problems you listed are some, but not all, of the reasons why it isn't really a viable multihoming strategy... but they also include some of the reasons why having ULA + globally-routed space both active would be a problem for many applications.

Matthew Kaufman

> In message <4CBF9B7A.1000500@matthew.at>, Matthew Kaufman writes:
>> To make it clear, as it seems to be quite misunderstood, you'd have
>> both ULA and global addressing in your network.
>
> Right. Just like to multihome with IPv6 you would have both PA addresses
> from provider #1 and PA addresses from provider #2 in your network.
>
> Only nobody wants to do that either.
>
> Only because there isn't good support for it yet.

Too bad that support didn't come first, or all the issues with address
allocation and routing table size being discussed elsewhere wouldn't be
a problem for operators.

> ULA + PA actually works today. The IP stack can do the address
> selection without worrying about reachability. The chances of the
> ULA being unreachable and the PA being reachable between two nodes
> in the same ULA prefix are negligible. If I'm talking to a ULA
> address I'll use my ULA address. If I'm talking to a non-ULA address
> I'll use my PA addresses.
>
> PA + PA is a problem because you need to worry about source address
> selection and that is driven by reachability. You also need to
> worry about egress points due to source address filtering. etc.

ULA + PA can have the same problems, especially if your ULA is
inter-organization ULA, which was one of the cases under discussion.

Matthew Kaufman

"because of NATs" s/b "because certain parties refused to acknowledge that encapsulation of SCTP in UDP would have operational advantages sufficient to outweigh the disadvantages".

SCTP only gets you 90% of the way there, but it is a lot closer than today's TCP is.

Matthew Kaufman

> * Stream Control Transport Protocol, first spec'd in 2000 (couldn't
>   be deployed widely in IPv4 because of NATs)

I would dearly love to see SCTP take off. There are so many great potential applications for that protocol that it boggles the mind. Any type of connection between two things that might have several different kinds of data going back and forth at the same time could greatly benefit.

Which is why there is also work going on at the network layer, both on
the end-hosts via HIP or Shim6, and in the network, such as LISP.

Ultimately, this is a hard problem to solve. There is no easy solution,
otherwise it would already exist - and would have existed at least 10
years ago, as that is at least how long people have been working on
trying to solve it.

As there is no easy and perfect solution, we need to accept that
we're going to have to make trade-offs to get closer to solving it.
In other words, an imperfect solution that still improves on what we
have is worth having. The question is which trade-offs are acceptable
to make.

We know and have experienced the many drawbacks of NAT, including such
things as restricting deployment of new and better transport protocols
like SCTP, DCCP, and maybe multipath TCP if the NAT boxes inspect and
drop unknown TCP options, and forcing the nature of Internet
applications to be client-server, even when a peer-to-peer application
communications architecture would be far more reliable, scalable and
secure. As NAT ultimately was about conserving address space, and IPv6
solves that problem, it is worth exploring other options that weren't
possible with IPv4 and/or IPv4 NAT.

Regards,
Mark.

> ULA + PA can have the same problems, especially if your ULA is
> inter-organization ULA, which was one of the cases under discussion.

Which still isn't a problem. Presumably you want your inter-organization
traffic to use ULA addresses, so you set up the address selection
rules to do just that. That requires new rules being distributed to
all nodes that need to talk to the other site. Presumably DHCPv6
could do this; if there isn't yet a DHCP option to request address
selection rules, we need to define one. Use a VPN between the
organisations so you fate-share. If you have a private interconnect,
then the VPN becomes the backup.
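On glibc hosts, those selection rules are expressed in /etc/gai.conf (the RFC 3484-style policy table). A sketch with made-up prefixes for the two organisations: raising the partner prefix's precedence above the built-in ::/0 default of 40 prefers the ULA path to them, and matching labels make source selection pick the local ULA. (Caveat: glibc discards its built-in table as soon as any label line appears, so a real file must restate the defaults too.)

```
# /etc/gai.conf sketch -- fd0a:aaaa:aaaa::/48 is "our" ULA and
# fd0b:bbbb:bbbb::/48 the partner's; both prefixes are made up.
# Prefer the partner's ULA destinations over their global addresses
# (the built-in precedence for ::/0 is 40):
precedence fd0b:bbbb:bbbb::/48 45
# Matching labels make the matching-label rule pick our ULA source:
label fd0a:aaaa:aaaa::/48 13
label fd0b:bbbb:bbbb::/48 13
```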

>> ULA + PA can have the same problems, especially if your ULA is
>> inter-organization ULA, which was one of the cases under discussion.
>
> Which still isn't a problem. Presumably you want your inter-organization
> traffic to use ULA addresses, so you set up the address selection
> rules to do just that. That requires new rules being distributed to
> all nodes that need to talk to the other site. Presumably DHCPv6
> could do this; if there isn't yet a DHCP option to request address
> selection rules, we need to define one.

One is being defined -

Someone insisted to me yesterday that RFC1918-like address space was the only way to provide a 'friendly' place for people to start their journey in playing with IPv6. I think that the idea of real routable IPs on a lab network daunts many people.

I've been down the road with ULA a few years back and I have to agree with Owen - rather just do it on GUA.

I was adding IPv6 to a fairly large experimental network and started using ULA. The local NREN then invited me to peer with them but I couldn't announce my ULA to them. They are running a 'public Internet' network and have a backbone that will just filter them.

I think that the biggest thing that trips people up is that they think that they'll just fix-it-with-NAT to get onto the GUA Internet. Getting your own GUA from an RIR isn't tough - rather just do it.

Part 2 will be when the first provider accepts a large sum of money to
route it within their public network between multiple sites owned by
the same customer.

Is this happening now with RFC 1918 addresses and IPv4?

I have seen this in some small providers. Doesn't last long since the chance of collision is high. It then becomes a VPN.

Part 3 will be when that same provider (or some other provider in the
same boat) takes the next step and starts trading routes of ULA space
with other provider(s).

Is this happening now with RFC 1918 addresses and IPv4?

I've seen this too. Once again small providers who pretty quickly get caught out by collisions.

The difference is that ULA could take years or even decades to catch someone out with a collision. By then we'll have a huge mess.

I agree. One application I'd thought of was end-to-end Instant
Messaging where, when you wish to transfer a file to the other
participant, a new SCTP stream is created for the file transfer within
the existing SCTP connection. Not all that novel, but something that
would be much easier to do with SCTP than TCP.

Regards,
Mark.

You assume that people simply select ULA prefixes randomly and don't
start doing linear allocations from the beginning of the ULA range.

Adrian

I don't think there is a difference. The very small providers are
the ones who make the stupid mistakes, it's the larger ones that do the
right thing because it is in their operational interests. Operational
competence, and the resulting increased reliability, is one of the
attributes customers of ISPs value highly.

If any of the Tier-1s don't route ULA address space, then it is useless
compared to global addresses that *are* routed by *all* the Tier-1s. As
the Tier-1s also hire competent networking people, they'll also
understand the scaling issues of the ULA address space, and why it
shouldn't be globally routed. Competent networking people also exist at
the lower tiers as well.

If operators just blindly accept and implement what sales people tell
them to, then those operators aren't operators. They're mindless drones
- and the rest of the people operating the Internet will protect the
Internet from them. Darwin eventually gets rid of those operators
and the ISPs that employ them.

Since ULAs could be used as DoS attack sources, they'll also likely be
filtered out by most people as per BCP38.

Regards,
Mark.

> I've seen this too. Once again small providers who pretty quickly get
> caught out by collisions.
>
> The difference is that ULA could take years or even decades to catch
> someone out with a collision. By then we'll have a huge mess.

Having merged datacenters with multiple overlapping v4 prefixes, I'll
just observe that this is inevitable in v4; you can take steps that
make it less likely to impact you in v6.

> You assume that people simply select ULA prefixes randomly and don't
> start doing linear allocations from the beginning of the ULA range.

Actually I assume they're going to just assign the whole bottom half to
themselves, like they do with 10/8, since using fc01::/8 is clearly more work.

If you do assign randomly, the probability of someone deliberately
assigning the same /48 for use in their network seems pretty low;
you're a heck of a lot better off than with RFC 1918.
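"Pretty low" is quantifiable. A sketch of picking a /48 in the spirit of RFC 4193 (the RFC derives its 40-bit Global ID by hashing a timestamp with an EUI-64; any good 40 random bits serve the same end), plus the usual birthday-bound estimate of a collision somewhere among n randomly numbered sites:

```python
import math
import secrets

def random_ula_prefix():
    """Pick a ULA /48: the fd00::/8 block plus a random 40-bit
    Global ID, in the spirit of RFC 4193's algorithm."""
    gid = secrets.randbits(40)
    return "fd%02x:%04x:%04x::/48" % (
        gid >> 32, (gid >> 16) & 0xFFFF, gid & 0xFFFF)

def collision_probability(n_sites):
    """Birthday-bound chance that at least two of n_sites sites,
    each choosing a random 40-bit Global ID, pick the same /48."""
    return 1.0 - math.exp(-n_sites * (n_sites - 1) / (2 * 2**40))
```

Two interconnecting sites collide with probability around 10^-12, and even ten thousand merged networks stay below one in twenty thousand. Linear allocation from the bottom of the range throws that property away, which is the point above.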

Any time there is a parameter that can be configured, there is a
possibility that people will misconfigure it. The only way to
completely prevent that being a possibility is to eliminate the
parameter. We can prevent people from getting addressing wrong by not
putting addresses in the IP header - but I, and I suspect most people,
would prefer their computers not to be a dumb terminal connected to a
mainframe. Or we can make the network robust against misconfiguration,
and put in place things like BCP38.

This is all starting to sound a bit like Chicken Little.

Regards,
Mark.

>> Someone advised me to use GUA instead of ULA. But since for my purposes
>> this is used for an IPv6 LAN, would ULA not be the better choice?
>
> IMHO, no. There's no disadvantage to using GUA and I personally don't
> think ULA really serves a purpose. If you want to later connect this
> LAN to the internet, or something that connects to something that
> connects to something that connects to the internet or whatever, GUA
> provides the following advantages:
> + Guaranteed uniqueness (not just statistically probable uniqueness)
> + You can route it if you later desire to
>
> Since ULA offers no real advantages, I don't really see the point.

> Someone insisted to me yesterday that RFC1918-like address space was
> the only way to provide a 'friendly' place for people to start their
> journey in playing with IPv6. I think that the idea of real routable
> IPs on a lab network daunts many people.
>
> I've been down the road with ULA a few years back and I have to agree
> with Owen - rather just do it on GUA.

You're throwing the baby out with the bath water here.

ULA, by itself, is painful, especially when you have global IPv4
reachability, as you end up with lots of timeouts. This is similar
to having a bad 6to4 upstream link. Just don't go there.

ULA + PA works and provides stable internal addresses when your
upstream link is down, the same way RFC 1918 provides stable
internal addressing for IPv4 when your upstream link is down.

You talk to the world using PA addresses, directly for IPv6 and
indirectly via PNAT for IPv4. These can change over time.

Similarly, ULA + 6to4 works well, provided the 6to4 works when you
are connected. When your IPv4 connection is renumbered you get new
external addresses, but the internal addresses stay the same.

> I was adding IPv6 to a fairly large experimental network and started
> using ULA. The local NREN then invited me to peer with them but I
> couldn't announce my ULA to them. They are running a 'public Internet'
> network and have a backbone that will just filter them.
>
> I think that the biggest thing that trips people up is that they think
> that they'll just fix-it-with-NAT to get onto the GUA Internet. Getting
> your own GUA from an RIR isn't tough - rather just do it.

If you're big enough to get your own GUA and have the dollars to get
it routed, then do that. If you are forced to use PA (think home
networks), then having a ULA prefix as well is a good thing.

Mark

> Someone insisted to me yesterday that RFC1918-like address space was the only way to provide a 'friendly' place for people to start their journey in playing with IPv6. I think that the idea of real routable IPs on a lab network daunts many people.

I once worked at a place that really *really* didn't want "real routable IPs" on their giant disk-protocols-over-IP network and wanted to start playing with IPv6 in those labs. What they wanted was address space they knew no other company they ever merged with might be using, but which also would never, ever be on the public IPv6 Internet. At the time, there was no solution but to misuse ULA space. They're probably still doing just that.

> I've been down the road with ULA a few years back and I have to agree with Owen - rather just do it on GUA.

> I was adding IPv6 to a fairly large experimental network and started using ULA. The local NREN then invited me to peer with them but I couldn't announce my ULA to them. They are running a 'public Internet' network and have a backbone that will just filter them.

> I think that the biggest thing that trips people up is that they think that they'll just fix-it-with-NAT to get onto the GUA Internet. Getting your own GUA from an RIR isn't tough - rather just do it.

It isn't tough, but it isn't free either. I have an experimental network that I'd love to run IPv6 with my own GUAs on (for the aforementioned sorts of reasons, like what happens when you want to interconnect with others), but it wouldn't be connected (for quite some time) to the public IPv6 Internet and there are *zero* funds available for the fees for PI space. It just isn't like 1992 (or even 1994) was for IPv4.

Matthew Kaufman

The absolute win is the elimination of "head of line" blocking. So if you have a large transfer going, that little short IM or even email notification or whatever gets sent immediately by being multiplexed into the data stream instead of being dumped in at the end of a buffer full of other stuff. By having streams for different sorts of content, it has the potential to conserve considerable resources. Rather than having a separate connection for each type of content, you have only one. Now if they would figure out a good way to load balance SCTP, we would be all set. But the real win is where you have a mix of bulk data streams and interactive small data transfers. The bulk transfer doesn't interfere with the interactive experience.
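That multiplexing behaviour is easy to sketch. A toy model (all names are mine; nothing here is the SCTP API) of per-stream queues drained round-robin, so a short interactive message queued on its own stream goes out after at most one bulk chunk instead of after the whole bulk backlog:

```python
from collections import deque

class StreamMux:
    """Toy multiplexer sketching SCTP-style independent streams: each
    stream gets its own queue, and the sender round-robins chunks from
    every non-empty stream, so one bulk transfer can't block a short
    message queued on another stream (the head-of-line problem)."""
    def __init__(self, chunk=4):
        self.chunk = chunk       # bytes sent from a stream per turn
        self.streams = {}        # stream id -> deque of pending data

    def send(self, stream_id, data):
        self.streams.setdefault(stream_id, deque()).append(data)

    def drain(self):
        """Yield (stream_id, chunk) in the order they'd hit the wire."""
        while any(self.streams.values()):
            for sid, q in self.streams.items():
                if q:
                    data = q[0]
                    yield sid, data[:self.chunk]
                    rest = data[self.chunk:]
                    if rest:
                        q[0] = rest
                    else:
                        q.popleft()
```

With 12 bytes of bulk queued on stream 0 and "ping" on stream 1, the ping leaves right behind the first bulk chunk rather than behind all three.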

And there are so many other potential applications like maybe persistent VOIP "trunks" between branch offices over a long-lived SCTP connection with each of those "trunks" being a stream within one connection. The applications are potentially killer but nobody has really tapped into that area yet. Heck, multicast hasn't really lived up to its potential, either.