IPv6 faster/better proof? was Re: Need /24 (arin) asap

> And they both report ipv6 is faster / better.

Is it possible that simply having a much smaller routing table in the
DFZ, in terms of sheer number of prefixes, has a positive performance
impact on forwarding packets, and that this, coupled with IPv6
implementations being leaner and more efficient for lack of "legacy
cruft", creates a better experience for many packets?

Just a theory/hypothesis with no data to back it up.
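
(The FIB-size effect is at least easy to sanity-check in miniature.
Below is a toy sketch, purely illustrative and nothing like a real
hardware data plane, which real routers implement with tries/TCAM: it
times naive longest-prefix matching over a small table versus a large
one. Prefix counts and addresses are made up.)

    import ipaddress
    import random
    import time

    def build_table(n, seed=1):
        """Build n random IPv4 /16 prefixes mapped to fake next-hops."""
        rng = random.Random(seed)
        table = {}
        while len(table) < n:
            net = ipaddress.ip_network((rng.getrandbits(16) << 16, 16))
            table[net] = "nh%d" % len(table)
        return table

    def lookup(table, addr):
        """Naive longest-prefix match: scan every entry."""
        best = None
        for net in table:
            if addr in net and (best is None or net.prefixlen > best.prefixlen):
                best = net
        return best

    def bench(table, probes=50, seed=2):
        rng = random.Random(seed)
        addrs = [ipaddress.ip_address(rng.getrandbits(32)) for _ in range(probes)]
        t0 = time.perf_counter()
        for a in addrs:
            lookup(table, a)
        return (time.perf_counter() - t0) / probes

    # Stand-ins for an "IPv6-sized" versus an "IPv4-sized" DFZ table.
    for size in (1_000, 20_000):
        print(size, "prefixes: %.1f us/lookup" % (bench(build_table(size)) * 1e6))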

I suspect that this may not be an apples to apples comparison.

Perhaps lack of IPv6 is more prevalent in rural areas with poorer
connectivity to the rest of the Internet? Perhaps both these CDNs
serve content for different types of devices over the different AFIs
(maybe old mediaboxes with a slow cpu prefer IPv4?). Perhaps networks
that deploy IPv6 are more likely to allow and accommodate on-net
caches?

I theorize that the described speed difference between IPv4 and IPv6
is an artifact of how the data is analysed rather than an
architectural speed difference between the protocols themselves.

Kind regards,

Job

Besides the data bias that could indeed exist, I have noticed that many
deployed traffic shapers do not support IPv6, and I imagine that some
traffic engineering is currently focused on IPv4 traffic. So even if
the protocols themselves have comparable performance, IPv6 bandwidth
could be smoother than IPv4 bandwidth for some users.

Perhaps instead of looking at global averages, we could look at a speed
comparison for dual-stacked users, i.e. how many of them see better or
worse performance over v4 versus v6.
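
(A back-of-the-envelope version of that comparison, with made-up
numbers purely for illustration: each dual-stacked user contributes one
v4 and one v6 timing for the same object, and we count who sees which
protocol faster.)

    from dataclasses import dataclass

    @dataclass
    class Sample:
        user: str
        v4_ms: float   # fetch time over IPv4
        v6_ms: float   # fetch time over IPv6 (same user, same object)

    # Made-up numbers purely for illustration.
    samples = [
        Sample("a", 120.0, 110.0),
        Sample("b", 95.0, 101.0),
        Sample("c", 240.0, 180.0),
    ]

    faster_v6 = sum(1 for s in samples if s.v6_ms < s.v4_ms)
    print("%d/%d dual-stacked users saw faster IPv6" % (faster_v6, len(samples)))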

Rubens

A similar take is that big eyeballs (tmobile, comcast, sprint, att,
verizon wireless) and big content (goog, fb, akamai, netflix) are ipv6.
What's left on ipv4 is the long tail of people asking for help on how
to buy a /24.

Joking aside, I suspect that what's left on the long tail is actually
long-haul traffic. I'm not aware of any transit provider reporting
anything close to the numbers that the CDNs observe in terms of
IPv4 / IPv6 percentage split.

I posit that the more miles a packet has to travel, the more likely it
is to be an IPv4 packet.

Kind regards,

Job

There are a lot of big eyeball networks missing from that list. Spectrum
biz class, no IPv6, for one. And some big "content"-ish ones, too.
Google's cloud service, for example? No IPv6 for VMs you lease from them.
And some of the biggest "web hosting providers" in the world still don't
have IPv6 deployed, and they host millions if not billions of sites which,
individually, have very little traffic, but in aggregate amount to a fair
bit.

> I posit that the more miles a packet has to travel, the more likely it
> is to be an IPv4 packet.

Related: the more miles the traffic travels, the more likely it is the
long-tail IPv4, the 15% of the internet that is not the whales: google,
fb, netflix, apple, akamai ... and I will even throw in cloudflare.

I hear transit is dead

Well, be that as it may, I'm still going to go to work tomorrow ;-)

- Job

From: Ca By <cb.list6@gmail.com>

Meanwhile, FB reports that 75% of mobiles in the USA
reach them via ipv6

And Akamai reports 80% of mobiles

And they both report ipv6 is faster / better.
----------------------------------------

Let me grab a few more for you:

https://blogs.akamai.com/2016/06/preparing-for-ipv6-only-mobile-networks-why-and-how.html

https://blogs.akamai.com/2016/10/ipv6-at-akamai-edge-2016.html

"IPv6 now faster than IPv4 when visiting 20% of top websites – and just as fast for the rest" (The Register), which cites an academic paper by Vaibhav Bajpai and Jürgen Schönwälder: http://dl.acm.org/citation.cfm?doid=2959424.2959429

I'd sure like to see how they came up with these
numbers in a technically oriented paper.

Most of the above links explain how they got the numbers.
Facebook, in particular, did A/B testing using Mobile Proxygen, which is to say that they configured their mobile app to report performance over both IPv4 and IPv6 from the same handset at the same time.
Others, including APNIC's V6/V4 Relative Performance Maps, have a browser fetch two objects with unique URLs, one from an IPv4-only server and one from an IPv6-only server, and compare them.
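
(A rough sketch of the same idea using nothing but the standard
library: time a TCP connect to a dual-stacked host, once restricted to
IPv4 and once to IPv6. The hostname is a placeholder; a real
measurement would fetch equal-sized objects and repeat many times.)

    import socket
    import time

    def connect_time(host, port, family):
        """Time a TCP handshake, resolving only within one address family."""
        af, socktype, proto, _, sockaddr = socket.getaddrinfo(
            host, port, family, socket.SOCK_STREAM)[0]
        s = socket.socket(af, socktype, proto)
        s.settimeout(5)
        t0 = time.perf_counter()
        s.connect(sockaddr)
        s.close()
        return time.perf_counter() - t0

    host = "dualstack.example.net"  # placeholder for a dual-stacked server
    for name, fam in (("IPv4", socket.AF_INET), ("IPv6", socket.AF_INET6)):
        print(name, "%.1f ms" % (connect_time(host, 443, fam) * 1000))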

There should be no difference, except for the absence of CGN, or Happy
Eyeballs working better, or something similar. Am I missing something?
Same routers; same links; same RTTs; same interrupt times on servers;
same etc., etc. for both protocols.

From time to time somebody says, "Okay, maybe it works in practice, but does it work in *theory*?"

Busy engineers hardly ever investigate things going inexplicably right.

My hypothesis is that the observed difference in performance relates to how mobile networks deploy their transition mechanisms. Those with a dual-stack APN take a native path for IPv6 while using a CGN path for IPv4, which, combined with the Happy Eyeballs head start, might add 500 microseconds to a millisecond, and a millisecond is about 15% of 7 ms. Those with an IPv6-only APN use a native path for IPv6, while using either NAT64 for IPv4 (performance identical to CGN) or 464XLAT, which requires translation both in the handset and in the NAT64; handsets may not be optimized for header translation.
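
(That head start is visible even in commodity stacks; Python's asyncio,
for instance, exposes an RFC 6555-style knob directly. A minimal sketch
with a placeholder hostname, requiring Python 3.8+:)

    import asyncio
    import time

    async def timed_connect(host, port=443):
        t0 = time.perf_counter()
        # Try IPv6 first; only start an IPv4 attempt if there is no
        # answer within 250 ms (the RFC 6555 "head start").
        reader, writer = await asyncio.open_connection(
            host, port, happy_eyeballs_delay=0.25)
        peer = writer.get_extra_info("peername")[0]
        writer.close()
        await writer.wait_closed()
        print("connected to %s in %.1f ms"
              % (peer, (time.perf_counter() - t0) * 1000))

    asyncio.run(timed_connect("www.example.com"))  # placeholder hostname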

However, I have a dozen other hypotheses, and the few experiments I have been able to run have not confirmed any of them. For instance, when one protocol is faster than the other on a landline network, hop count shows no correlation (so shorter paths, traffic engineering, etc., are not the explanation).

Lee

Although the FB link is vague, the argument itself is true. We have simply become more intelligent about deploying IPv6. The same measurement done less than a decade ago, for example, would have shown IPv4 to be faster: dual-stack implementations and the slowness introduced by Teredo tunnelling were the main reasons, and we have since gotten smarter and deprecated them.

https://labs.ripe.net/Members/gih/examining-ipv6-performance

https://tools.ietf.org/html/rfc6555

https://tools.ietf.org/html/rfc7526

Things change. IPv6 response times are now showing better, as IPv4 suffers more TCP retransmissions. The culprit is still not known (more analysis is needed), but fingers are pointing at IPv4 address exhaustion: CGN, load balancers, and address sharing. We cannot eliminate peering links and firewalls either; yet if both protocols cross exactly the same topology, the same links, and the same firewall locations/policies, analysis has not confirmed why these would harm IPv4 more than IPv6.
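
(One way to eyeball the retransmission claim from a packet capture: a
rough heuristic sketch using scapy that flags repeated sequence numbers
carrying payload and tallies them per address family. It ignores SACK,
keepalives, and reordering, so treat the counts as indicative only; the
filename is a placeholder.)

    from collections import Counter
    from scapy.all import rdpcap, IP, IPv6, TCP  # pip install scapy

    def count_retransmissions(pcap_path):
        seen = set()
        retx = Counter()
        for pkt in rdpcap(pcap_path):
            if not pkt.haslayer(TCP):
                continue
            ip = pkt.getlayer(IP) or pkt.getlayer(IPv6)
            if ip is None or len(pkt[TCP].payload) == 0:
                continue
            key = (ip.src, ip.dst, pkt[TCP].sport, pkt[TCP].dport, pkt[TCP].seq)
            fam = "IPv6" if pkt.haslayer(IPv6) else "IPv4"
            if key in seen:
                retx[fam] += 1   # same segment seen again: likely a retransmission
            seen.add(key)
        return retx

    print(count_retransmissions("capture.pcap"))  # placeholder filename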

Brgds,

LG

In the SMB space, including the small-time FTTH and WISP communities, I
see operators mostly focused on IPv4 with little IPv6 going on. While
they could access most content over IPv6, their common platforms are
still not IPv6-friendly.

UBNT is finally rolling out IPv6 to some of their UniFi lines, as well as
to airOS 8.x, where it is enabled by default. There is still some way to
go, but folks are making progress.

MikroTik is getting there but most people are just not enabling it either.

The WISP folks gripe about geolocation issues for the IPv4 blocks they are leasing
as well, and some end-user content still isn’t IPv6 ready (such as Hulu).

What I can see is that the folks who made the jump are less likely to be
required to hold NAT state, so they have fewer problems. It's not quite as
simple as "96 more bits": you learn there is no ARP (it's NDP), and you can
do DHCPv6, SLAAC, or a combination thereof; many folks just don't have the
operational experience yet.

There are also the perfectly valid comments from others that they can't get
IPv6 on their FIOS, business-class DOCSIS services, etc. It's also often
easy to get a static IPv4 address and dynamic IPv6, but getting static IPv6
is harder. Thankfully progress is being made here, though often much more
slowly than the early adopters here would want.

Then again, I hear everything is in the cloud anyways so as long as I can reach the
NetBookAzureTube perhaps all is well?

- Jared

RouterOS still has "will not fix" IPv6 bugs, which doesn't encourage shops dependent on MikroTik to move forward with deploying it.

Quick, somebody port FRR to Tile…

<ducks>

I know. They're very popular in the WISP and FTTH communities that are doing sub-10G aggregate. I understand the price appeal, but I'm personally not a fan.

- Jared

I have a MikroTik hAP Lite router for my FTTH service at my house.

It has excellent support for IPv6, including a ton of translation
mechanisms.

My problem is my home provider doesn't do IPv6, so I run a 6-in-4 tunnel
back to my own backbone for the service (no latency impact as my home
provider is my IP Transit customer :slight_smile: ). This is a little unstable
because my home provider doesn't know how to give me a stable IPv4
address for my FTTH service.
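
(For anyone wanting to replicate the setup: on a Linux box the classic
6-in-4 / protocol-41 recipe is just a handful of iproute2 commands. The
addresses below are RFC documentation placeholders, not my real
endpoints or prefix.)

    # 6-in-4 (protocol 41) tunnel with Linux iproute2; placeholder addresses
    ip tunnel add he6in4 mode sit local 198.51.100.2 remote 192.0.2.1 ttl 255
    ip link set he6in4 up
    ip addr add 2001:db8:abcd::2/64 dev he6in4
    ip route add ::/0 dev he6in4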

But I do have to say that I am massively impressed by what that little
MikroTik box can do. IPv6 on my home LAN works as expected, as it does
across the 6-in-4 tunnel.

Mark.

Not a fan either for the backbone, even though a lot of ISPs in South
Africa use them for this... admittedly, small networks that simply don't
have the cash to dish out to the big vendors. I know we've had some
issues setting up BGP sessions with MikroTik-based customers/peers,
mainly around how RouterOS interprets various BGP-related RFCs.

But for the home, I can't fault them.

They do fix plenty of bugs (almost as many as the new features they push
out). I have seen some IPv6 bug fixes in recent updates they've
published, but nothing that really makes a difference to my home world,
as far as I can remember.

Mark.

I've many customers using MikroTik.

The problem with its IPv6 support is that it only supports 6in4 which, by the way, they call "6to4", so it is very weird and confuses customers ...

So for native IPv6 or a 6in4 tunnel it is fine, but no other transition mechanism is supported, so we end up reflashing them with OpenWrt.

Regards,

Jordi

-----Original Message-----

> The problem with its IPv6 support is that it only supports 6in4 which, by the way, they call "6to4", so it is very weird and confuses customers ...

That "6-to-4 actually means 6-in-4" was quite confusing to me as well. I
just enabled it to prove that they had a language moment there. Good
thing it didn't backfire on me :-).
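
(The two really are different animals: 6in4 is a manually configured
protocol-41 tunnel to a fixed endpoint, while 6to4 per RFC 3056 derives
a whole /48 from your public IPv4 address under 2002::/16. A quick
sketch of the derivation, using a documentation address:)

    import ipaddress

    def sixtofour_prefix(v4: str) -> ipaddress.IPv6Network:
        """RFC 3056: 2002::/16 followed by the 32-bit IPv4 address gives a /48."""
        v4bits = int(ipaddress.IPv4Address(v4))
        return ipaddress.IPv6Network(((0x2002 << 112) | (v4bits << 80), 48))

    print(sixtofour_prefix("192.0.2.1"))  # 2002:c000:201::/48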

> So for native IPv6 or a 6in4 tunnel it is fine, but no other transition mechanism is supported, so we end up reflashing them with OpenWrt.

Not sure I'd blame them either; they develop a lot of features for
pretty much next to nothing, and are enabled by customers willing to
take the risk in exchange for relief on the budget.

They could be more inclined to fix bugs and develop corner-case features
sooner if they were in the premium market. But, as my (well-known on
this list) American friend would say, "I conjecturbate" :-).

Mark.

> The problem with its IPv6 support is that it only supports 6in4 which, by the way, they call "6to4", so it is very weird and confuses customers ...

> That "6-to-4 actually means 6-in-4" was quite confusing to me as well. I just enabled it to prove that they had a language moment there. Good thing it didn't backfire on me :-).

Yeah, I can confirm, as I have tested it several times: "6to4" for them is plain proto-41. It is very confusing and against standard nomenclature … This doesn't say anything good about a vendor, in my opinion!

> So for native IPv6 or a 6in4 tunnel it is fine, but no other transition mechanism is supported, so we end up reflashing them with OpenWrt.

> Not sure I'd blame them either; they develop a lot of features for pretty much next to nothing, and are enabled by customers willing to take the risk in exchange for relief on the budget.

I've got very good customers of MikroTik asking them in private and in public, and they didn't even reply. No other transition mechanism is available, and there is no roadmap. So you can't use them, for example, for an IPv6-only access network, which is clearly what is needed now.

> They could be more inclined to fix bugs and develop corner-case features sooner if they were in the premium market. But, as my (well-known on this list) American friend would say, "I conjecturbate" :-).

They basically run Linux … and the OpenWrt sources have everything they need to implement 4in6 transition mechanisms, so no excuses! No excuses for other CPE vendors either, of course, but others at least have DS-Lite or even lw4o6. Very few offer 464XLAT (the CLAT part is what the CPE needs) or MAP-T/E. Hopefully this will change soon.
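
(For the curious, the CLAT/NAT64 side is mostly RFC 6052 address
synthesis; with the well-known prefix it is nearly a one-liner. A
sketch, using a documentation IPv4 address:)

    import ipaddress

    def nat64_synthesize(v4: str, prefix: str = "64:ff9b::/96") -> ipaddress.IPv6Address:
        """RFC 6052: embed the 32-bit IPv4 address in the low bits of a /96 NAT64 prefix."""
        net = ipaddress.IPv6Network(prefix)
        return ipaddress.IPv6Address(
            int(net.network_address) | int(ipaddress.IPv4Address(v4)))

    print(nat64_synthesize("192.0.2.1"))  # 64:ff9b::c000:201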