NAT v6->v4 and v4->v6 (was Re: WG Action: Conclusion of IP Version 6)

The IETF thinking for the last 10+ years (and I include myself in this) had been that dual stack was the answer and most things would be converted to dual stack before we ran out of v4.

Well, it has not happened.

There is still very little v6 deployment, and we will be running out of v4 space soon (substitute for "soon" your particular prediction of the day you won't be able to get space from your RIR, either because there is none left or because you no longer qualify).

Given the above, it is clear that we are going to see devices (which might be dual stack *capable*) that will be deployed and provisioned with *v6-only*...

At the edge of the network, where we have devices by the millions, where address space is a critical issue, and where dual stack is a non-starter, this is already under way...

It is also becoming apparent that:

- the "core internet" (ie the web and any infrastructure server) will take a long time to move to v6 and/or dual stack.

- new v6-only edges will have to communicate with it. So we need v6->v4 translation in the core.

- legacy v4 PCs (think win95 up to win XP) using RFC1918 addresses behind a home gateway will never be able to upgrade to an IPv6-only environment. So if we provision the home gateway with v6-only (because there will be a point where we do not have any global v4 addresses left for it), those legacy PCs are going to need a double translation, v4->v6 in the home gateway and then v6 back to v4 in the core (the address mapping involved is sketched after this list). Note: a double private v4->private v4->global v4 translation would work too, but if you are running out of private space as well, this is also a non-starter...

- there are a number of internal deployment cases where net 10 is just not big enough (it offers only 2^24, about 16.8 million, addresses), thus the idea of using v6 to glue several instances of private space together as a 'super net 10'. For this to work, legacy devices that cannot upgrade to v6 need to go through a v4->v6 translation.
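
For concreteness, here is a minimal sketch (Python, purely illustrative) of the stateless address mapping such a translator depends on: embed the 32-bit v4 address in the low bits of a v6 address under a dedicated /96 translation prefix, and strip it back out on the return leg. The prefix and the example addresses are assumptions for the sketch, not anything a particular box is known to use.

    # Stateless v4-in-v6 address mapping under an assumed /96 prefix.
    import ipaddress

    XLAT_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")  # hypothetical translator prefix

    def v4_to_v6(v4: str) -> ipaddress.IPv6Address:
        # Embed the 32-bit v4 address in the low bits of the prefix.
        return ipaddress.IPv6Address(
            int(XLAT_PREFIX.network_address) | int(ipaddress.IPv4Address(v4)))

    def v6_to_v4(v6: str) -> ipaddress.IPv4Address:
        # Recover the embedded v4 address from the low 32 bits.
        return ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFFFFFF)

    assert str(v4_to_v6("192.0.2.1")) == "64:ff9b::c000:201"
    assert str(v6_to_v4("64:ff9b::c000:201")) == "192.0.2.1"

The double-translation case is then just this mapping applied twice, once in each direction: the home gateway maps v4 to v6, and the in-network translator maps it back.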

So, no, NAT v4->v6 or v6->v4 does not solve world hunger, but it solves very real operational problems.

    - Alain.

> It is also becoming apparent that:
>
> - the "core internet" (i.e. the web and any infrastructure
> server) will take a long time to move to v6 and/or dual stack.
>
> - new v6-only edges will have to communicate with it. So we
> need v6->v4 translation in the core.

Some companies have implemented MPLS in the core, so they can easily
add IPv6 services by configuring 6PE on a couple of PE routers in each
PoP. Beyond the PoP, in the customer's network, they can run pure
IPv6 if that is what they want.

> - legacy v4 PCs (think win95 up to win XP) using RFC1918
> addresses behind a home gateway will never be able to
> upgrade to an IPv6-only environment. So if we provision the
> home gateway with v6-only (because there will be a point
> where we do not have any global v4 addresses left for it)
> those legacy PCs are going to need a double translation,
> v4->v6 in the home gateway and then v6 back to v4 in the
> core.

Not if they use an application layer proxy in their gateway. It's not
too late to specify this as a standard function for an IPv6 Internet
gateway device. Also, the "v6 back to v4" conversion could be handled in
an information provider's data center (Google, CNN) not in the core.
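
To illustrate the idea (not any specified gateway function): a toy relay that accepts legacy v4 clients on the LAN side and opens the outbound leg over v6. A real ALG would also have to rewrite any addresses carried inside the application protocol; the ports and upstream name here are hypothetical.

    # Toy LAN-v4 to WAN-v6 relay sketch.
    import socket
    import threading

    def open_v6(host: str, port: int) -> socket.socket:
        # Resolve and connect over IPv6 only (the gateway's WAN side).
        family, socktype, proto, _, addr = socket.getaddrinfo(
            host, port, socket.AF_INET6, socket.SOCK_STREAM)[0]
        s = socket.socket(family, socktype, proto)
        s.connect(addr)
        return s

    def relay(src: socket.socket, dst: socket.socket) -> None:
        # Copy bytes one way until the peer closes.
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        except OSError:
            pass
        finally:
            dst.close()

    def serve(lan_port: int, upstream_host: str, upstream_port: int) -> None:
        lan = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # v4 LAN side
        lan.bind(("0.0.0.0", lan_port))
        lan.listen()
        while True:
            client, _ = lan.accept()
            upstream = open_v6(upstream_host, upstream_port)
            threading.Thread(target=relay, args=(client, upstream), daemon=True).start()
            threading.Thread(target=relay, args=(upstream, client), daemon=True).start()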

> So, no, NAT v4->v6 or v6->v4 does not solve world hunger, but
> it solves very real operational problems.

Agreed. Just about every possible transition technique will solve real
operational problems and we should not be purists about this. Whether
the IETF has a specification for it or not, people will build and deploy
NAT and ALGs among other things.

In addition, this transition comes at a time when we have technology
that allows virtually anyone (high school kids) to build some kind of
network functionality on top of Linux or BSD. If something proves useful,
anyone can freely implement it, including the manufacturers of Internet
gateway devices, who often use Linux or BSD as the foundation of their
boxes.

Back in 1994 we started to see exponential growth of the Internet
because the barrier to entry suddenly became much lower. It was
financially feasible to buy a bunch of modems, a terminal server, Bay or
Cisco routers, and a handful of Linux/BSD servers. The technology was
cheap enough to encourage many people to take the business risk. In the
interim, technology has advanced somewhat, and I expect to see a flurry
of devices as soon as IPv4 exhaustion reaches the general press.

--Michael Dillon

> > It is also becoming apparent that:
> >
> > - the "core internet" (i.e. the web and any infrastructure
> > server) will take a long time to move to v6 and/or dual stack.
> >
> > - new v6-only edges will have to communicate with it. So we
> > need v6->v4 translation in the core.
>
> Some companies have implemented MPLS in the core, so they can easily
> add IPv6 services by configuring 6PE on a couple of PE routers in each
> PoP. Beyond the PoP, in the customer's network, they can run pure
> IPv6 if that is what they want.

The issue is not the transport of v6 packets over the network(s) in the
middle, but the servers and the applications on the other side, which may
remain IPv4-only.

In other words, the transition to IPv6 is not so much an L3 issue as an L7 one.
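
A small sketch of what that means in practice: carrying v6 packets is the easy half; it is the application code that must stop assuming v4. The two connect routines below (illustrative, not from any particular codebase) behave very differently on a v6-only host.

    import socket

    def connect_v4_only(host: str, port: int) -> socket.socket:
        # Legacy style: hardwired to AF_INET, fails without v4.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((host, port))
        return s

    def connect_any(host: str, port: int) -> socket.socket:
        # Family-agnostic: tries whatever the resolver offers, v6 or v4.
        for family, socktype, proto, _, addr in socket.getaddrinfo(
                host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
            try:
                s = socket.socket(family, socktype, proto)
                s.connect(addr)
                return s
            except OSError:
                continue
        raise OSError("no usable address family")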

> > - legacy v4 PCs (think win95 up to win XP) using RFC1918
> > addresses behind a home gateway will never be able to
> > upgrade to an IPv6-only environment. So if we provision the
> > home gateway with v6-only (because there will be a point
> > where we do not have any global v4 addresses left for it)
> > those legacy PCs are going to need a double translation,
> > v4->v6 in the home gateway and then v6 back to v4 in the
> > core.
>
> Not if they use an application layer proxy in their gateway. It's not
> too late to specify this as a standard function for an IPv6 Internet
> gateway device. Also, the "v6 back to v4" conversion could be handled in
> an information provider's data center (Google, CNN) not in the core.

If you put a proxy on the home gateway that is provisioned v6-only on
the WAN side, you will have to proxy to a v6 destination. If you go back to
my previous point, the destination may be a v4 address... Thus a
home-gateway-only solution cannot work; it needs a translation back to v4
somewhere in the network.

That said, I used the word "core" a bit generously, and I agree with you it
could be done within a datacenter if you were willing to pay the cost of the
extra round trip to that datacenter.
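
One way to wire that up (a sketch reusing the hypothetical 64:ff9b::/96 translator prefix from earlier, not a description of any deployed system): the v6-only gateway resolves the destination's A record and synthesizes a v6 address that routes to the in-network translator, which performs the final v6->v4 step.

    import ipaddress
    import socket

    XLAT_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")  # assumed translator prefix

    def synthesize_v6(hostname: str) -> ipaddress.IPv6Address:
        # Resolve the v4-only destination, then embed it under the prefix.
        v4 = ipaddress.IPv4Address(socket.gethostbyname(hostname))
        return ipaddress.IPv6Address(int(XLAT_PREFIX.network_address) | int(v4))

    # The gateway proxies to synthesize_v6("www.example.com") over v6;
    # the translator strips the prefix and completes the v4 leg.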

     - Alain.

Hi Alain,

> The IETF thinking for the last 10+ years (and I include myself in this) had been that dual stack was the answer and most things would be converted to dual stack before we ran out of v4.
>
> Well, it has not happened.
>
> There is still very little v6 deployment, and we will be running out of v4 space soon (substitute for "soon" your particular prediction of the day you won't be able to get space from your RIR, either because there is none left or because you no longer qualify).

I don't think it has happened because in the past there hasn't been a
compelling reason to. End-users haven't seen any benefit, so they
haven't asked for it, and service providers (who of course provide
services to those end-users) haven't been able to justify the investment,
because their end-users/customers haven't been asking for it.

If IPv4 addressing runs out, and the only way to grow the Internet is
to implement IPv6, then the service providers who've made the IPv6
investment, and can therefore provide access to more services (i.e. both
IPv4- and IPv6-based), will win customers from competing service providers.

I think the loss of customers to competitors would create a compelling
reason for service providers to introduce and migrate to IPv6, and I
think a "better Internet experience" (I've been around Internet
marketing people too long), i.e. IPv4+IPv6-visible content, would be the
compelling reason for customers to move to service providers who're
providing access to both protocols.

> Given the above, it is clear that we are going to see devices (which might be dual stack *capable*) that will be deployed and provisioned with *v6-only*...

> At the edge of the network, where we have devices by the millions, where address space is a critical issue, and where dual stack is a non-starter, this is already under way...

> It is also becoming apparent that:
>
> - the "core internet" (i.e. the web and any infrastructure server) will take a long time to move to v6 and/or dual stack.
>
> - new v6-only edges will have to communicate with it. So we need v6->v4 translation in the core.

MPLS, as well as the IETF softwires techniques (the MPLS model
without using MPLS, i.e. tunnel from ingress to egress via
automatically set-up tunnels - GRE, L2TP, or native IPv4 or IPv6), can
or will shortly be able to be used to tunnel IPv6 over IPv4 or vice versa.
Softwires in effect treats the non-native core infrastructure as an
NBMA layer 2.

The advantage of these techniques versus dual stack is that they push
the complexity of dual stack to the network ingress and egress devices.

Dual stack isn't all that complicated. However, when you think about
running two forwarded protocols, two routing protocols (or an integrated
one supporting two forwarded protocols), having two forwarding
topologies that may not match in the case of dual routing protocols,
and having two sets of troubleshooting methods and tools, I think the
simplicity of having a single core network forwarding protocol and
tunnelling everything else over it becomes really attractive. The price is
the tunnel overhead of course; however, processing-wise that price is only
paid at the edges, making it scalable, and in the core the bandwidth cost
of the tunnel overhead is minimal, because the network core typically
has all the high-bandwidth links.
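
As a back-of-the-envelope check on that overhead argument (a sketch; the tunnel endpoints are made up and the v4 checksum is left zero for brevity): a GRE-in-IPv4 wrapper adds 24 bytes, about 1.6% of a 1500-byte MTU.

    import struct

    GRE_HDR = struct.pack("!HH", 0, 0x86DD)  # no options; EtherType for IPv6 payload

    def ipv4_hdr(payload_len: int) -> bytes:
        # Minimal 20-byte IPv4 header, protocol 47 (GRE).
        return struct.pack("!BBHHHBBH4s4s",
                           0x45, 0, 20 + payload_len, 0, 0, 64, 47, 0,
                           bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2]))

    inner = bytes(1400)  # stand-in for an IPv6 packet
    packet = ipv4_hdr(len(GRE_HDR) + len(inner)) + GRE_HDR + inner
    overhead = len(packet) - len(inner)  # 20 (IPv4) + 4 (GRE) = 24 bytes
    print(f"{overhead} bytes of encapsulation, {overhead / 1500:.1%} of a 1500-byte MTU")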

> - legacy v4 PCs (think win95 up to win XP) using RFC1918 addresses behind a home gateway will never be able to upgrade to an IPv6-only environment. So if we provision the home gateway with v6-only (because there will be a point where we do not have any global v4 addresses left for it), those legacy PCs are going to need a double translation, v4->v6 in the home gateway and then v6 back to v4 in the core. Note: a double private v4->private v4->global v4 translation would work too, but if you are running out of private space as well, this is also a non-starter...

While it probably won't free up a huge amount, getting forwarding
based on end-node IPv4 destination addresses out of the core will help
with this a bit. I wonder if anybody has done a study of how much
public IPv4 address space is consumed by infrastructure addressing.

> - there are a number of internal deployment cases where net 10 is just not big enough, thus the idea of using v6 to glue several instances of private space together as a 'super net 10'. For this to work, legacy devices that cannot upgrade to v6 need to go through a v4->v6 translation.

I'm guessing you might be in part talking about your cable modem
management problem that I've seen an IPv6 presentation of yours about.

Is it really necessary to have global reachability to all your
customers' CPE for management purposes across the whole of your
network? Would it be possible to have e.g. 3 large management regions,
each with its own instance of network 10 (each instance offers about
16.8 million addresses, so three regions would cover roughly 50 million
CPE), and then make the infrastructure robust enough that if you ever
needed to remotely manage one group of CPE from another management
region, you'd already have much bigger problems with your network?

> So, no, NAT v4->v6 or v6->v4 does not solve world hunger, but it solves very real operational problems.

I suppose we have to weigh up whether the NAT can of worms is worth
opening again with IPv6 to solve operational problems, losing the
chance to get the benefits of the end-to-end principle back for all
the end-users of the Internet. I've experienced and seen too many cases
where the hidden costs of NAT became unhidden. In those instances,
"throwing" public address space at the problem would have instantly
destroyed the problem NAT was causing. In IPv4 we have to be a bit
careful doing that these days; in the future, with IPv6, we won't.

Regards,
Mark.

> MPLS, as well as the IETF softwires techniques (the MPLS model without
> using MPLS, i.e. tunnel from ingress to egress via automatically set-up
> tunnels - GRE, L2TP, or native IPv4 or IPv6), can or will shortly be
> able to be used to tunnel IPv6 over IPv4 or vice versa. Softwires in
> effect treats the non-native core infrastructure as an NBMA layer 2.
>
> The advantage of these techniques versus dual stack is that they push
> the complexity of dual stack to the network ingress and egress
> devices.
>
> Dual stack isn't all that complicated. However, when you think about
> running two forwarded protocols, two routing protocols or an
> integrated one supporting two forwarded protocols, having two
> forwarding topologies that may not match in the case of dual routing
> protocols, and having two sets of troubleshooting methods and tools, I
> think the simplicity of having a single core network forwarding
> protocol and tunnelling everything else over it becomes really
> attractive.

huh? and your tunnels do not have *worse* congruency problems than dual
stack? gimme a break.

randy

I do not understand what you mean.

The tunnelled traffic takes the same ingress-to-egress path through the
core that it would if the core natively supported the tunnelled payload
protocol.

This is the basic BGP/MPLS model, using IPv4, IPv6, GRE or L2TP as the
encapsulation, instead of MPLS.
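
A sketch of the congruence point (prefixes and endpoints are hypothetical): at the ingress, the v6 route already names the egress router, and the tunnel is simply addressed to that same egress over v4, so the encapsulated traffic follows the ordinary v4 path to the place the native v6 path would have ended anyway.

    # Ingress view: each v6 route's egress doubles as the v4 tunnel endpoint.
    V6_ROUTES = {
        "2001:db8:a::/48": "203.0.113.1",  # learned via BGP, next hop = egress PE 1
        "2001:db8:b::/48": "203.0.113.2",  # learned via BGP, next hop = egress PE 2
    }

    def tunnel_endpoint(v6_prefix: str) -> str:
        # The tunnel destination is the same egress the native v6 path would
        # use, so the outer v4 forwarding decides the actual hop-by-hop path.
        return V6_ROUTES[v6_prefix]

    assert tunnel_endpoint("2001:db8:a::/48") == "203.0.113.1"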

Regards,
Mark.

> > MPLS, as well as the IETF softwires techniques (the MPLS model without
> > using MPLS, i.e. tunnel from ingress to egress via automatically set-up
> > tunnels - GRE, L2TP, or native IPv4 or IPv6), can or will shortly be
> > able to be used to tunnel IPv6 over IPv4 or vice versa. Softwires in
> > effect treats the non-native core infrastructure as an NBMA layer 2.
> >
> > The advantage of these techniques versus dual stack is that they push
> > the complexity of dual stack to the network ingress and egress
> > devices.
> >
> > Dual stack isn't all that complicated. However, when you think about
> > running two forwarded protocols, two routing protocols or an
> > integrated one supporting two forwarded protocols, having two
> > forwarding topologies that may not match in the case of dual routing
> > protocols, and having two sets of troubleshooting methods and tools, I
> > think the simplicity of having a single core network forwarding
> > protocol and tunnelling everything else over it becomes really
> > attractive.
>
> huh? and your tunnels do not have *worse* congruency problems than dual
> stack? gimme a break.
>

> I do not understand what you mean.
>
> The tunnelled traffic takes the same ingress-to-egress path through the
> core that it would if the core natively supported the tunnelled payload
> protocol.
>
> This is the basic BGP/MPLS model, using IPv4, IPv6, GRE or L2TP as the
> encapsulation, instead of MPLS.

> It's also the RFC1772 BGP encapsulation model (section "A.2.3
> Encapsulation"), with the difference being the end-node traffic
> sources and sinks are the ingress and egress peers, rather than
> an AS worth of them. The model isn't very new at all.

> The model isn't very new at all.

no it isn't. many of us remember atm-1, so atm-2 is no big surprise.

randy

I'd argue it's not quite the same situation. From what I understand of
the way ATM/IP was deployed (ATM core, IP routers at the edge with
direct IP adjacencies over ATM PVCs), the ATM topology wasn't visible
to the IP layer. The IP layer then wasn't able to make informed path
decisions for IP traffic, yet it had no choice but to take
responsibility for choosing those forwarding paths, because that's its
function.

The model we're talking about seems to me to be that old model on
its head. The devices at the edge of the core network are fully aware
of the underlying topology of the core network, so they can make
informed forwarding decisions. The tunnelling encapsulation only serves
the purpose of transporting protocols/payloads that aren't native in
the core from edge to edge. The tunnelling function doesn't try to
control, or have to take responsibility for, the selection of paths
taken across the core.

Regards,
Mark.

This is just a slight move of the core/edge boundary. Core switching
capabilities (MPLS) have been added to edge devices which were pure
terminal devices from an ATM perspective. The MPLS cloud is just as
obscure as ATM to a non-MPLS-speaking IP device.

//per

No it isn't. The MPLS control plane runs IP routing protocols, so even
if an upstream device isn't capable of MPLS forwarding, it still has
visibility into the network topology information propagated both across
and within the MPLS domain. For conventional IP forwarding, MPLS is a
forwarding optimisation, not a forwarding replacement.

Regards,
Mark.