So... is it time to do IPv6 day monthly yet?

It certainly sounds like it might be.

Cheers,
-- jra

I was thinking the same thing. Good call :-)

   Ryan Pavely
    Net Access Corporation
    http://www.nac.net/

+1

I've enjoyed it so far!

In a message written on Wed, Jun 08, 2011 at 10:40:56AM -0400, Jay Ashworth wrote:

It certainly sounds like it might be.

Why not just leave it on?

Sounds good to me.

I would perhaps do one more, then do an "IPv6 TURN ON DAY" with the
intent to *leave* IPv6 enabled. The longer the content providers
take to switch it on, the bigger the switch-on load will be. We
still have an opportunity to ramp up IPv6 for the very big content
providers.

I dunno; I can see why doing it in 24-hour chunks is still useful. But
I think that doing them substantially more frequently than annually
markedly increases the chance that the people who Learned the Lessons
will still be there to *implement* them.

And the responses so far suggest that this interim step might be a Pretty
Neat Idea.

Cheers,
-- jr 'shame Towel Day has passed already' a

So should monthly IPv6 day be the same week as Microsoft Patch Tuesday? :-)

My thoughts are that we need either a week or a 36-48 hour period. I thought this before the events of last week.

I am very happy with the results, and with the fact that, based on the public statements so far (I'm looking forward to hearing what is presented at NANOG on this as well), it was viewed as "didn't break much" by the major players.

This affirms what we've observed for many years: having IPv6 enabled doesn't break "much", and not in a way that is any worse than the issues you can otherwise observe in networks.

- Jared

I think this would be helpful.

Cheers
Ryan

I think this would be helpful.

Agreed. You don't need anybody's permission, kick it off.

The last v6day was an ISOC effort; there can be a separate NANOG effort or
your own.

Cb

The last v6day was an ISOC effort; there can be a separate NANOG effort or
your own.

It does make a lot of sense for NANOG (perhaps jointly with RIPE and
other NOGs) to organize monthly IPv6 days with a theme or focus for
each month. If you have a focus, then you can recruit a lot of IPv6
testers to try out certain things on IPv6 day and get a more thorough
test and more feedback.

Skip July and August because it takes time to get this organized, and
then start the next one on September the 8th or thereabouts.

For instance, one month could focus on full IPv6 DNS support, but
maybe not right away. A nice easy start would be to deal with IPv6
peering and weird paths that result from tunnels. That is the kind of
thing that would work well with a lot of testers participating and an
application that traces IPv4 and IPv6 paths and measures hop count,
latency, and packet loss.
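
A minimal sketch of the kind of measurement tool I have in mind (Python;
the hostname is a placeholder, and since real hop counting needs
traceroute-style raw sockets, this only compares connect latency over
each address family):

import socket, time

def probe(host, port, family):
    # Time a TCP connect over one address family (latency only; hop
    # counts would need traceroute-style raw sockets).
    addr = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)[0][4]
    start = time.monotonic()
    with socket.socket(family, socket.SOCK_STREAM) as s:
        s.settimeout(5)
        s.connect(addr)
    return (time.monotonic() - start) * 1000.0

host = "www.example.com"  # placeholder: substitute a dual-stacked site
for label, fam in (("IPv4", socket.AF_INET), ("IPv6", socket.AF_INET6)):
    try:
        print(f"{label}: {probe(host, 80, fam):.1f} ms")
    except OSError as e:
        print(f"{label}: failed ({e})")

With a lot of testers running something like this, a tunnel-induced
detour shows up as a consistent v6-vs-v4 latency gap.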

In conjunction with the monthly IPv6 day, NANOG should set up a blog
page or similar to publicly collect incident reports and solutions.

I really don't know why anyone is worried about advertising AAAA
records for authoritative nameservers. It just works. Recursive
nameservers have been dealing with authoritative nameservers having
IPv6 addresses for well over a decade now. This includes dealing
with them being unreachable.

DNS/UDP is not like HTTP/TCP: you don't have connect timeouts to
worry about. Recursive nameservers have much shorter timeouts, as
they already need to deal with IPv4 nameservers not being reachable.
They also have to do all this retrying within 3 or so seconds, or
else the stub clients will have timed out.
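
To illustrate, here is a hand-rolled sketch of that resolver-style
behaviour (not how any particular implementation does it; the server
addresses are documentation placeholders):

import socket, struct

def build_query(qname, qtype=1, qid=0x1234):
    # Minimal DNS query: 12-octet header (RD set) + question section.
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    question = b"".join(
        bytes([len(label)]) + label.encode() for label in qname.split(".")
    ) + b"\x00" + struct.pack("!HH", qtype, 1)  # QTYPE=A, QCLASS=IN
    return header + question

# Resolver-style behaviour: short per-server timeout, move on to the next
# server quickly, since the stub's ~3 s deadline is ticking the whole time.
servers = ["192.0.2.1", "2001:db8::1"]  # placeholders: one v4, one v6 server
query = build_query("www.example.com")
for server in servers:
    family = socket.AF_INET6 if ":" in server else socket.AF_INET
    with socket.socket(family, socket.SOCK_DGRAM) as s:
        s.settimeout(1.0)  # far shorter than a typical TCP connect timeout
        try:
            s.sendto(query, (server, 53))
            reply, _ = s.recvfrom(4096)
            print(f"{server}: got a {len(reply)}-octet reply")
            break
        except socket.timeout:
            print(f"{server}: no answer in 1 s, trying the next server")

An unreachable IPv6 nameserver costs one second here, not a browser-style
multi-second connect hang.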

Mark

Ah, but, with IPv6 records, you are much more likely to end up with
a TRUNC result and a TCP query than with IPv4.

Owen

In message <E1F85FB9-7E52-4CE9-B5A9-C9AC0DA01A1C@delong.com>, Owen DeLong writes:

Ah, but, with IPv6 records, you are much more likely to end up with
a TRUNC result and a TCP query than with IPv4.

Not really. A AAAA record adds 28 octets (an A record takes 16). Unless
you have a lot of name servers, most referrals still fall within 512 octets;
additionally, most answers also still fall within 512 octets.

Mark

I agree... not that it should be assumed there is no v6 DNS issue.
With IPv6, the main issue may be "firewalls" and "boxes in the middle"
silently munging, eating, or destroying AAAA responses.

DNSSEC, not AAAA, is really what creates the need for EDNS0 or TRUNC
on validating resolvers. AAAA records should be fine for sane domains.
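
If you want to test whether something in your path is doing that, a
quick probe along these lines works (a sketch assuming dnspython; the
resolver address and query name are placeholders):

import dns.exception, dns.message, dns.query  # assumes dnspython

resolver = "192.0.2.53"    # placeholder: your recursive resolver
name = "www.example.com."  # placeholder: any name with a AAAA record

# Probe 1: does a plain AAAA answer make it back at all?
# Probe 2: does an EDNS0 query with DO=1 (the DNSSEC case) make it back?
probes = [
    ("plain AAAA", dns.message.make_query(name, "AAAA")),
    ("EDNS0+DO AAAA", dns.message.make_query(name, "AAAA", use_edns=0,
                                             payload=4096, want_dnssec=True)),
]
for label, q in probes:
    try:
        r = dns.query.udp(q, resolver, timeout=3)
        print(f"{label}: {len(r.to_wire())}-octet response, "
              f"{len(r.answer)} answer RRset(s)")
    except dns.exception.Timeout:
        print(f"{label}: timed out -- something in the middle may be eating it")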

Consider a referral from example.com to subdomain.example.com (8
nameservers) for a query about mydomainname.example.com, and assume
you get both AAAA and A additional responses.

Header = 2 (ID) + 2 (QR,Opcode,AA,TC,RD,RA,Z,RCODE) + 2 (QDCOUNT)
+ 2 (ANCOUNT) + 2 (NSCOUNT) + 2 (ARCOUNT)
   = 12 octets

Question Section
mydomainname.example.com. IN A = 26 (QNAME) + 2 (QTYPE) + 2 (QCLASS)
   = 30 octets

Authority Section
subdomain.example.com. IN NS ns1.subdomain.example.com. = 12 (name,
compressed against the question) + 2 (type) + 2 (class) + 4 (TTL)
+ 2 (RDLENGTH) + 6 (RDATA, compressed) = 28 octets
subdomain.example.com. IN NS ns2.subdomain.example.com. = 2 (pointer)
+ 2 + 2 + 4 + 2 + 6 = 18 octets
subdomain.example.com. IN NS ns3.subdomain.example.com. = 18 octets
subdomain.example.com. IN NS ns4.subdomain.example.com. = 18 octets
subdomain.example.com. IN NS ns5.subdomain.example.com. = 18 octets
subdomain.example.com. IN NS ns6.subdomain.example.com. = 18 octets
subdomain.example.com. IN NS ns7.subdomain.example.com. = 18 octets
subdomain.example.com. IN NS ns8.subdomain.example.com. = 18 octets

Additional Section
ns1.subdomain.example.com. IN AAAA 2001:db8::1 = 2 (pointer) + 2 (type)
+ 2 (class) + 4 (TTL) + 2 (RDLENGTH) + 16 (RDATA) = 28 octets
ns2.subdomain.example.com. IN AAAA 2001:db8::2 = 28 octets
ns3.subdomain.example.com. IN AAAA 2001:db8::3 = 28 octets
ns4.subdomain.example.com. IN AAAA 2001:db8::4 = 28 octets
ns5.subdomain.example.com. IN AAAA 2001:db8::5 = 28 octets
ns6.subdomain.example.com. IN AAAA 2001:db8::6 = 28 octets
ns7.subdomain.example.com. IN AAAA 2001:db8::7 = 28 octets
ns8.subdomain.example.com. IN AAAA 2001:db8::8 = 28 octets
ns1.subdomain.example.com. IN A 192.0.2.1 = 2 (pointer) + 2 + 2 + 4
+ 2 + 4 (RDATA) = 16 octets
ns2.subdomain.example.com. IN A 192.0.2.1 = 16 octets
ns3.subdomain.example.com. IN A 192.0.2.1 = 16 octets
ns4.subdomain.example.com. IN A 192.0.2.1 = 16 octets

Total = 12 + 30 + 28 + (7 x 18) + (8 x 28) + (4 x 16) = 484 octets --
still under 512, though your domain name could only be a couple dozen
characters longer before it would not be.
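
If you'd rather not count octets by hand, dnspython can render the same
hypothetical referral and report its wire size (a sketch; all names and
addresses are the documentation placeholders from above):

import dns.message, dns.rrset  # assumes dnspython is installed

q = dns.message.make_query("mydomainname.example.com.", "A")
resp = dns.message.make_response(q)

# Referral: 8 NS records for the delegated zone in the authority section.
resp.authority.append(dns.rrset.from_text(
    "subdomain.example.com.", 3600, "IN", "NS",
    *[f"ns{i}.subdomain.example.com." for i in range(1, 9)]))

# Glue: AAAA for all 8 servers, A for 4 of them, as in the accounting above.
for i in range(1, 9):
    resp.additional.append(dns.rrset.from_text(
        f"ns{i}.subdomain.example.com.", 3600, "IN", "AAAA", f"2001:db8::{i}"))
for i in range(1, 5):
    resp.additional.append(dns.rrset.from_text(
        f"ns{i}.subdomain.example.com.", 3600, "IN", "A", "192.0.2.1"))

# With name compression this should come out just under 512 octets
# (about 484 by the accounting above).
print(len(resp.to_wire()))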

Not really. A AAAA record adds 28 octets (an A record takes 16). Unless
you have a lot of name servers, most referrals still fall within 512 octets;
additionally, most answers also still fall within 512 octets.

1. Most != All, even in IPv4 (I ran into this in a few hotels with
   some prominent MMORPG login sites).

2. 512 / 16 = 32, while 512 / 28 = 18 (the 19th record will TRUNC).

   So you get just over half the number of records to fit within
   the same space.

I would say my statement stands: you are much more likely to
encounter TRUNC results and need a TCP query with IPv6.

It also matches my experience.

Owen

This ignores the extra baggage that tends to come along in a DNS payload.

Just the root:

; <<>> DiG 9.6.0-APPLE-P2 <<>> +trace -t any www.delong.com
;; global options: +cmd
. 379756 IN NS e.root-servers.net.
. 379756 IN NS i.root-servers.net.
. 379756 IN NS l.root-servers.net.
. 379756 IN NS f.root-servers.net.
. 379756 IN NS k.root-servers.net.
. 379756 IN NS b.root-servers.net.
. 379756 IN NS j.root-servers.net.
. 379756 IN NS d.root-servers.net.
. 379756 IN NS c.root-servers.net.
. 379756 IN NS g.root-servers.net.
. 379756 IN NS m.root-servers.net.
. 379756 IN NS h.root-servers.net.
. 379756 IN NS a.root-servers.net.
;; Received 512 bytes from 192.159.10.2#53(192.159.10.2) in 7 ms

Or the GTLD servers list:

com. 172800 IN NS a.gtld-servers.net.
com. 172800 IN NS b.gtld-servers.net.
com. 172800 IN NS c.gtld-servers.net.
com. 172800 IN NS d.gtld-servers.net.
com. 172800 IN NS e.gtld-servers.net.
com. 172800 IN NS f.gtld-servers.net.
com. 172800 IN NS g.gtld-servers.net.
com. 172800 IN NS h.gtld-servers.net.
com. 172800 IN NS i.gtld-servers.net.
com. 172800 IN NS j.gtld-servers.net.
com. 172800 IN NS k.gtld-servers.net.
com. 172800 IN NS l.gtld-servers.net.
com. 172800 IN NS m.gtld-servers.net.
;; Received 495 bytes from 2001:500:3::42#53(l.root-servers.net) in 37 ms

(not quite 512, but close)

Note, none of these came with glue. They ONLY included the name data.
Had they come with glue, we would easily have been over 512 in both
cases just for IPv4, let alone a v4/v6 combination.

I know of at least one prominent MMORPG whose login servers have enough
A records that they triggered TRUNC DNS results, which I discovered when
they broke at some hotels I have stayed at. I've encountered other such
sites as well.

Owen

This ignores the extra baggage that tends to come along in a DNS payload.
Just the root:

.....

Note, none of these came with glue. They ONLY included the name data.
Had they come with glue, we would easily have been over 512 in both
cases just for IPv4, let alone a v4/v6 combination.

None of those came with glue... really?

For the root zone, I currently see a fully populated NS response with
a fair bit of glue that is 512 bytes total, and a majority of that
512-byte response is the additional section. That is, with no glue,
you would be looking at approximately 200 octets.

In addition, when I dig against 2001:dc3::35, the root server responds
to my query with A glue for all of a through m.root-servers and two
pieces of AAAA glue, without exceeding a 492-octet message size.

However, the root zone and gTLD zones are quite special, and a very
significant exception to the norm for the number of NS records
authoritative for a zone. Few domains have more than 3 NS records.


# dig +norecurse -t NS . @198.41.0.4

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5 <<>> +norecurse -t NS . @198.41.0.4
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46162
;; flags: qr aa; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 14

;; QUESTION SECTION:
;. IN NS

;; ANSWER SECTION:
. 518400 IN NS b.root-servers.net.
. 518400 IN NS e.root-servers.net.
. 518400 IN NS k.root-servers.net.
. 518400 IN NS m.root-servers.net.
. 518400 IN NS f.root-servers.net.
. 518400 IN NS c.root-servers.net.
. 518400 IN NS g.root-servers.net.
. 518400 IN NS j.root-servers.net.
. 518400 IN NS h.root-servers.net.
. 518400 IN NS d.root-servers.net.
. 518400 IN NS a.root-servers.net.
. 518400 IN NS i.root-servers.net.
. 518400 IN NS l.root-servers.net.

;; ADDITIONAL SECTION:
b.root-servers.net. 3600000 IN A 192.228.79.201
e.root-servers.net. 3600000 IN A 192.203.230.10
k.root-servers.net. 3600000 IN A 193.0.14.129
k.root-servers.net. 3600000 IN AAAA 2001:7fd::1
m.root-servers.net. 3600000 IN A 202.12.27.33
m.root-servers.net. 3600000 IN AAAA 2001:dc3::35
f.root-servers.net. 3600000 IN A 192.5.5.241
f.root-servers.net. 3600000 IN AAAA 2001:500:2f::f
c.root-servers.net. 3600000 IN A 192.33.4.12
g.root-servers.net. 3600000 IN A 192.112.36.4
j.root-servers.net. 3600000 IN A 192.58.128.30
j.root-servers.net. 3600000 IN AAAA 2001:503:c27::2:30
h.root-servers.net. 3600000 IN A 128.63.2.53
h.root-servers.net. 3600000 IN AAAA 2001:500:1::803f:235

;; Query time: 203 msec
;; SERVER: 198.41.0.4#53(198.41.0.4)
;; WHEN: Sun Jun 19 13:17:31 2011
;; MSG SIZE rcvd: 512

# dig +norecurse -t NS . @2001:dc3::35

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5 <<>> +norecurse -t NS . @m.root-servers.net
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1931
;; flags: qr aa; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 15

;; QUESTION SECTION:
;. IN NS

;; ANSWER SECTION:
. 518400 IN NS c.root-servers.net.
. 518400 IN NS j.root-servers.net.
. 518400 IN NS h.root-servers.net.
. 518400 IN NS g.root-servers.net.
. 518400 IN NS b.root-servers.net.
. 518400 IN NS f.root-servers.net.
. 518400 IN NS l.root-servers.net.
. 518400 IN NS i.root-servers.net.
. 518400 IN NS m.root-servers.net.
. 518400 IN NS d.root-servers.net.
. 518400 IN NS k.root-servers.net.
. 518400 IN NS e.root-servers.net.
. 518400 IN NS a.root-servers.net.

;; ADDITIONAL SECTION:
a.root-servers.net. 3600000 IN A 198.41.0.4
b.root-servers.net. 3600000 IN A 192.228.79.201
c.root-servers.net. 3600000 IN A 192.33.4.12
d.root-servers.net. 3600000 IN A 128.8.10.90
e.root-servers.net. 3600000 IN A 192.203.230.10
f.root-servers.net. 3600000 IN A 192.5.5.241
g.root-servers.net. 3600000 IN A 192.112.36.4
h.root-servers.net. 3600000 IN A 128.63.2.53
i.root-servers.net. 3600000 IN A 192.36.148.17
j.root-servers.net. 3600000 IN A 192.58.128.30
k.root-servers.net. 3600000 IN A 193.0.14.129
l.root-servers.net. 3600000 IN A 199.7.83.42
m.root-servers.net. 3600000 IN A 202.12.27.33
a.root-servers.net. 3600000 IN AAAA 2001:503:ba3e::2:30
d.root-servers.net. 3600000 IN AAAA 2001:500:2d::d

;; Query time: 88 msec
;; SERVER: 2001:dc3::35#53(2001:dc3::35)
;; WHEN: Sun Jun 19 13:35:41 2011
;; MSG SIZE rcvd: 492

Adding AAAA records for existing nameservers will NOT cause TC to
be set where it would not be set without AAAA records, unless you
do an "ANY" lookup of the nameserver, where it MAY result in TC
being set.
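
That's easy to check for a zone you care about (a sketch using dnspython;
the server address and zone are placeholders, and plain non-EDNS UDP
queries are used so the classic 512-octet limit applies):

import dns.flags, dns.message, dns.query  # assumes dnspython is installed

server = "192.0.2.53"   # placeholder: an authoritative server for the zone
zone = "example.com."   # placeholder zone

# Compare response sizes and the TC bit across query types.
# Note: many servers give minimal answers to ANY these days.
for rdtype in ("NS", "AAAA", "ANY"):
    q = dns.message.make_query(zone, rdtype)
    r = dns.query.udp(q, server, timeout=3)
    tc = bool(r.flags & dns.flags.TC)
    print(f"{rdtype:5} {len(r.to_wire()):4} octets  TC={tc}")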

All current implementations, including named, fail to set TC when
adding glue records to a referral, in contradiction of RFC 1034.
This issue has been raised with the IETF dnsext WG. Fixing it
would result in most COM/NET referrals from the root setting TC.

Note: recent versions of named will add glue that matches the
question section to a referral in preference to other glue, and if
that glue RRset won't fit, named will set TC to prevent the
resolution process from getting wedged. Usually only EDNS/512 +
DO=1 queries result in a referral which fits the criteria to set TC,
and even then most such queries don't.

Mark