IPv6 traffic percentages?

You can configure pmacct to specify on which properties of the received
flow data it should aggregate its output. For example, one could
configure pmacct to store data using the following primitives:

    ($timeperiod, $entrypoint_router_id, $bgp_nexthop, $packet_count)

Here $timeperiod is something like a 5-minute bucket, and the
post-processing software calculates the distance between the entry-point
router and the point where the flow would leave the network
($bgp_nexthop).

See 'aggregate' on http://wiki.pmacct.net/OfficialConfigKeys
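A config along those lines might look like the following. This is a sketch only: the key names come from the pmacct documentation, but the plugin choice and values are hypothetical, and mapping peer_src_ip/peer_dst_ip onto the entry-point router and BGP next-hop assumes the BGP daemon is enabled.

```
! Sketch only -- check the pmacct CONFIG-KEYS docs before use.
! peer_src_ip ~ $entrypoint_router_id (the exporting router)
! peer_dst_ip ~ $bgp_nexthop (when BGP enrichment is enabled)
plugins: print
aggregate: peer_src_ip, peer_dst_ip
! 5-minute buckets, i.e. the $timeperiod primitive
print_history: 5m
print_refresh_time: 300
print_output: csv
```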

In short: you configure pmacct to throw away everything you don't need
(maybe after some light pre-processing), and hope that what remains is
small enough to fit in your cluster and at the same time offers enough
insight to answer the question you set out to resolve.
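The post-processing step described above could be sketched roughly as follows. Everything here is hypothetical: the records, the hop-distance map, and the function name are made up for illustration; in practice the distances would come from your IGP topology.

```python
# Hypothetical post-processing for the aggregated records above: given
# ($timeperiod, $entrypoint_router_id, $bgp_nexthop, $packet_count)
# tuples and a made-up intra-domain hop-distance map, compute the
# packet-weighted mean distance traveled inside this one routing domain.

# (timeperiod, entry_router, bgp_nexthop, packets) -- illustrative data only
records = [
    ("2019-01-01T00:00", "r1", "r3", 1000),
    ("2019-01-01T00:00", "r1", "r2", 500),
    ("2019-01-01T00:05", "r2", "r3", 2000),
]

# Hypothetical IGP hop counts between entry router and exit (next-hop) router.
distance = {("r1", "r3"): 4, ("r1", "r2"): 2, ("r2", "r3"): 3}

def weighted_mean_distance(records, distance):
    total_pkts = 0
    total_hops = 0
    for _t, entry, nexthop, pkts in records:
        total_pkts += pkts
        total_hops += pkts * distance[(entry, nexthop)]
    return total_hops / total_pkts

print(round(weighted_mean_distance(records, distance), 2))
```

Note that, as discussed below, this only measures path length within one routing domain, not end-to-end.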

it's late here, so i am a bit slower than usual. but could you explain
in detail how this tests the hypothesis?

even if all your traffic entered on a bgp hop and exited on a bgp hop,
and all bgp entries set next_hop (which i think you do), you would be
ignoring the 'distance' the packet traveled from its source to reach
your entry point, and from your exit to the final destination.


Yes, correct. This is why I mentioned before: "However, this would be
just one network's (biased) view on things."

With this I meant that I can measure something, but only within a subset
of the entire path a packet might traverse (just that one routing
domain), so not end-to-end. And what might be true for us might not be
true for others.

With this I meant that I can measure something, but only within a subset
of the entire path a packet might traverse.

considering your original hypothesis was about length of paths, this
seems a kind of dead end. you might get a modest improvement by turning
off hot potato :-)

so not end-to-end

which is the problem

And what might be true for us might not be true for others.

yes. but if it actually measured what we wanted, it would be a useful
measurement. but it doesn't.


This is some more public info.

On this page click to sort on IPv6 deployment.


About 40% of traffic inbound to our University is IPv6. I see several Universities on the list above at more than 60%.

There are more links to public info sites at the bottom of the page.

You can add Apple and Microsoft to the list of usual suspects, but for state in NAT boxes rather than traffic. With Happy Eyeballs, devices query both IPv4 and IPv6 and so end up creating state in the NAT box even if the client ultimately chooses IPv6 for the connection. We have lots of devices that like to check in with Apple whenever they wake up, and the staff here use Microsoft Exchange in the cloud, which is available via IPv6. I don’t have any verified data but I have noticed a relation between

Scroll to the bottom of this page and you will see that my latency to Google via IPv6 dropped from 40 ms to 20 ms.


If I compare some days before and after the change I see a decrease in my peak NAT pool usage. However, on other days I don’t see a difference. The theory is that now that my latency has dropped to 20 ms, it is below the magical 25 ms within which Apple devices must receive an answer via IPv6, so they don’t even send out an IPv4 query.
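The 25 ms heuristic described above can be sketched as a toy model. This is not Apple's actual implementation, just an illustration of the decision: if the IPv6 answer arrives within the delay budget, the client never sends the IPv4 query and no NAT state is created.

```python
# Toy model of the Happy Eyeballs resolution-delay heuristic discussed
# above: the client waits a short budget for the IPv6 answer before
# also trying IPv4 (which is what creates NAT state).
RESOLUTION_DELAY_MS = 25  # value cited in the thread for Apple devices

def creates_nat_state(v6_answer_latency_ms: float,
                      delay_budget_ms: float = RESOLUTION_DELAY_MS) -> bool:
    """Return True if the client falls back to also trying IPv4."""
    return v6_answer_latency_ms >= delay_budget_ms

# Before the change: 40 ms to Google -> IPv4 also attempted, NAT state made.
# After the change: 20 ms -> under budget, IPv6 only.
assert creates_nat_state(40) is True
assert creates_nat_state(20) is False
```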


  This link mentions that Microsoft is already preferring IPv6 over IPv4 95% of the time when both are available.


I’m 30 ms away from Facebook, so 95% of Microsoft clients would use IPv6, but for Apple devices it’s a gamble. It’s not clear, though, whether those 95% of Microsoft clients would send only an IPv6 SYN and no IPv4 SYN (which is what would actually save NAT table space).

The top of our wish list would be for Twitter and AWS to support IPv6; I think those two would make the biggest reduction in our NAT table size.

If you hover your mouse over the US on this page


it lists 47% for content. What that 47% means is explained here.


It is fun to play with the type of regression on this page and project 730 days or so in the future.
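The kind of projection that page offers could be sketched like this. The data points below are made up, and a plain linear fit is only one of the regression types you can pick there; real adoption curves are S-shaped, so a logistic fit would be more honest near saturation.

```python
# Hypothetical sketch: fit a simple linear trend to (synthetic) IPv6
# adoption percentages and extrapolate 730 days ahead.
import numpy as np

days = np.array([0, 90, 180, 270, 360], dtype=float)
pct = np.array([38.0, 40.5, 43.0, 45.5, 47.0])  # made-up data points

slope, intercept = np.polyfit(days, pct, 1)
projected = slope * (days[-1] + 730) + intercept
print(f"linear projection 730 days out: {projected:.1f}%")
```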


Not to put any sort of damper on wild speculation, but at the Southern California Linux Expo,
with native IPv4 and IPv6 dual stack support enabled on the wifi for the show, we saw close to
50% of all traffic on IPv6.


From: "Owen DeLong"

I know, but for the "server guys" turning on IPv6 is pretty low on the
priority list.

Which is a selfish, arrogant, and extremely short-sighted and unenlightened view of self-interest.

  Yes, yes, but is it economically rational?