IPv6 news

So having dual-stack backbones is very important. But...

Other global providers have an IPv6 network too, all open for business,
but there are very, VERY few customers.

And I'm not so sure we even have an "Internet" of IPv6 out there
either. It looks cold and empty to me.

Here's a challenge: I have an NTP server attached directly to
a good clock and an IPv6 network.

Is there anyone on the NANOG list who can talk to it using IPv6?

(Time20.Stupi.SE, 2001:0440:1880:1000::0020)

-Peter
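The query Peter asks for is tiny: SNTP is a single 48-byte UDP datagram to port 123, and the reply's Transmit Timestamp (bytes 40-47) is what ntpdate reports against. A minimal sketch in Python, assuming the RFC 2030 packet layout; `query()` obviously needs working IPv6 reachability, and the address in the comment is the one Peter posted:

```python
import socket
import struct

NTP_EPOCH_DELTA = 2208988800  # seconds between 1900-01-01 (NTP epoch) and 1970-01-01 (Unix epoch)

def build_sntp_request() -> bytes:
    # First byte: LI=0, VN=3, Mode=3 (client); the remaining 47 bytes are zero.
    return bytes([0x1B]) + bytes(47)

def transmit_timestamp(response: bytes) -> float:
    # Transmit Timestamp: 32-bit seconds + 32-bit fraction at offset 40.
    secs, frac = struct.unpack("!II", response[40:48])
    return secs - NTP_EPOCH_DELTA + frac / 2**32

def query(host: str, timeout: float = 2.0) -> float:
    """Send one SNTP request over IPv6 and return the server's Unix time."""
    with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_sntp_request(), (host, 123))
        data, _ = s.recvfrom(48)
        return transmit_timestamp(data)

# e.g. query("2001:440:1880:1000::20")  # the Stupi server from the post
```

ntpdate does the same exchange, just with proper offset/delay arithmetic over several samples.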

As a certain "Tier 1" still uses a mesh of tunnels, with Viagenie in
Canada as their transit provider, latency to the above IP is in the area
of 300ms, going transatlantic twice. IPv4 latency is only 66ms, though.
I do hope that some "Tier 1's" get their act together and start doing
native IPv6. I already once suggested upgrading their hardware to
them ;)

For other people wanting latency tests etc., I suggest taking a look
at the following URLs:
http://www.sixxs.net/tools/grh/ - IPv6 BGP monitor using a large number
of ISPs for input, thus basically a distributed looking glass.

http://www.sixxs.net/tools/traceroute/ - IPv4 & IPv6 traceroute, so one
can perform the traceroutes below oneself, from the web.

RIPE's RIS (http://ris.ripe.net) is of course also handy, as are a lot
of other links listed near the bottom of the main GRH page.

Greets,
Jeroen

% ntpdate -q 2001:0440:1880:1000::0020
server 2001:440:1880:1000::20, stratum 1, offset 0.012038, delay 0.45547
13 Oct 14:37:22 ntpdate[30374]: adjust time server 2001:440:1880:1000::20 offset 0.012038 sec

% traceroute6 2001:0440:1880:1000::0020
traceroute to 2001:0440:1880:1000::0020 (2001:440:1880:1000::20) from 2001:468:c80:2103:206:5bff:feea:8e4e, 30 hops max, 16 byte packets
1 isb-7301-1.gi0-1.103.cns.ip6.vt.edu (2001:468:c80:2103::1) 0.671 ms 0.844 ms 0.782 ms
2 isb-6509-2.ge2-4.cns.ip6.vt.edu (2001:468:c80:f220::1) 5.584 ms 1.438 ms 2.644 ms
3 isb-7606-2.ge1-1.cns.ip6.vt.edu (2001:468:c80:f222::2) 1.762 ms 1.59 ms 1.564 ms
4 atm10-0.10.wtn2.ip6.networkvirginia.net (2001:468:cfe:2001::1) 7.757 ms 7.987 ms 7.598 ms
5 2001:468:cff:c::1 (2001:468:cff:c::1) 8.988 ms 9.094 ms 7.944 ms
6 dcne-clpk.maxgigapop.net (2001:468:cff:3::2) 8.624 ms 8.37 ms 8.453 ms
7 washng-max.abilene.ucaid.edu (2001:468:ff:184c::1) 9.371 ms 8.711 ms 8.287 ms
8 atlang-washng.abilene.ucaid.edu (2001:468:ff:118::1) 24.465 ms 25.249 ms 27.17 ms
9 hstnng-atlang.abilene.ucaid.edu (2001:468:ff:e11::2) 59.79 ms 43.836 ms 43.905 ms
10 losang-hstnng.abilene.ucaid.edu (2001:468:ff:1114::2) 75.915 ms 80.024 ms 76.147 ms
11 transpac-1-lo-jmb-702.lsanca.pacificwave.net (2001:504:b:20::136) 75.506 ms 75.583 ms 75.574 ms
12 3ffe:8140:101:1::2 (3ffe:8140:101:1::2) 190.994 ms 188.997 ms 188.857 ms
13 3ffe:8140:101:6::3 (3ffe:8140:101:6::3) 192.198 ms 188.83 ms 190.311 ms
14 hitachi1.otemachi.wide.ad.jp (2001:200:0:1800::9c4:2) 203.086 ms 201.553 ms 201.516 ms
15 pc6.otemachi.wide.ad.jp (2001:200:0:1800::9c4:0) 202.271 ms 200.954 ms 201.457 ms
16 otm6-gate1.iij.net (2001:200:0:1800::2497:1) 255.474 ms 275.889 ms 263.122 ms
17 otm6-bb0.IIJ.Net (2001:240:100:2::1) 264.74 ms 263.295 ms 259.571 ms
18 plt001ix06.IIJ.Net (2001:240:bb20:f000::4001) 255.896 ms 257.555 ms 256.579 ms
19 plt6-gate1.IIJ.Net (2001:240:bb62:8000::4003) 256.799 ms 256.91 ms 257.353 ms
20 sl-bb1v6-rly-t-22.sprintv6.net (3ffe:2900:b:e::1) 327.795 ms 327.826 ms 327.676 ms
21 sl-bb1v6-nyc-t-1000.sprintv6.net (2001:440:1239:1001::2) 335.371 ms 334.318 ms 333.559 ms
22 sl-bb1v6-sto-t-102.sprintv6.net (2001:440:1239:100d::2) 430.171 ms sl-bb1v6-sto-t-101.sprintv6.net (2001:440:1239:1012::1) 443.515 ms sl-bb1v6-sto-t-102.sprintv6.net (2001:440:1239:100d::2) 428.871 ms
23 2001:7f8:d:fb::34 (2001:7f8:d:fb::34) 443.356 ms 449.329 ms 453.155 ms
24 2001:440:1880:1::2 (2001:440:1880:1::2) 447.132 ms 449.606 ms 454.631 ms
25 2001:440:1880:1::12 (2001:440:1880:1::12) 436.22 ms 449.6 ms 461.937 ms
26 2001:440:1880:1000::20 (2001:440:1880:1000::20) 431.528 ms 431.559 ms 434.464 ms

Blech. :) For comparison, here's the IPv4 traceroute:

% traceroute 192.36.143.234
traceroute to 192.36.143.234 (192.36.143.234), 30 hops max, 38 byte packets
1 isb-6509-1.vl103.cns.vt.edu (128.173.12.1) 0.863 ms 1.197 ms 0.637 ms
2 isb-6509-2.vl710.cns.vt.edu (128.173.0.82) 0.686 ms 0.811 ms 0.478 ms
3 isb-7606-1.ge1-1.cns.vt.edu (192.70.187.198) 5.380 ms 1.767 ms 0.625 ms
4 atm1-0.11.roa.networkvirginia.net (192.70.187.194) 6.626 ms 4.997 ms 1.970 ms
5 sl-gw20-rly-2-2.sprintlink.net (160.81.255.1) 7.282 ms 7.340 ms 8.584 ms
6 sl-bb20-rly-3-2.sprintlink.net (144.232.14.29) 17.183 ms 7.721 ms 7.182 ms
7 sl-bb20-tuk-11-0.sprintlink.net (144.232.20.137) 29.827 ms 28.928 ms 29.724 ms
8 sl-bb21-tuk-15-0.sprintlink.net (144.232.20.133) 29.537 ms 28.896 ms 34.679 ms
9 sl-bb21-lon-14-0.sprintlink.net (144.232.19.70) 100.620 ms 100.013 ms 98.216 ms
10 sl-bb22-lon-3-0.sprintlink.net (213.206.129.153) 99.232 ms 159.099 ms 160.970 ms
11 sl-bb20-bru-14-0.sprintlink.net (213.206.129.42) 239.766 ms 119.984 ms 203.193 ms
12 sl-bb21-bru-15-0.sprintlink.net (80.66.128.42) 106.066 ms 103.692 ms 105.917 ms
13 sl-bb20-ams-14-0.sprintlink.net (213.206.129.45) 106.905 ms 106.920 ms 106.243 ms
14 sl-bb21-ham-6-0.sprintlink.net (213.206.129.145) 202.024 ms 124.268 ms 208.230 ms
15 sl-bb21-cop-13-0.sprintlink.net (213.206.129.57) 118.080 ms 119.096 ms 118.624 ms
16 sl-bb21-sto-14-0.sprintlink.net (213.206.129.34) 124.976 ms 125.618 ms 126.115 ms
17 sl-bb20-sto-15-0.sprintlink.net (80.77.96.33) 126.867 ms 126.760 ms 125.839 ms
18 sl-tst1-sto-0-0.sprintlink.net (213.206.131.10) 126.482 ms 126.175 ms 124.867 ms
19 BFR3-POS-2-0.Stupi.NET (192.108.195.121) 125.468 ms 125.193 ms 124.997 ms
20 * * *
21 Time20.Stupi.SE (192.36.143.234) 126.873 ms 126.055 ms 126.962 ms

Well Valdis, that bad route also has to do with your side of the
equation; you might want to check who you are actually using as transits
and whether the routes they are providing to you are sane enough.

Text version:
2001:468::/32 is in the routing table, getting accepted by most ISPs.
This one has a reasonable route, going over GBLX (3549) in most places.
Though some get it over BT (1752), who have 'nice' (ahem) tunneled
connectivity and transit with everybody on the planet. ESNET (293) seems
to be the third 'transit', with OpenTransit (5011) being the fourth one
and ISC being the fifth. Many routes seem to go over VIAGENIE (10566),
who seem to have connectivity problems most of the time.
Path-wise most of it looks pretty sane.

Then there is a chunk of /40's, which are visible inside Abilene; GRH
does see them, but other ISPs don't. Most people will thus take the /32
towards your IP, which might go over some laggy tunneled networks.
Fortunately not criss-cross around the world yet, but...
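The effect Jeroen describes is just longest-prefix matching at work: a router that never learned the /40 falls back to the covering /32 and whatever path that carries. A small illustration with Python's ipaddress module; only the two prefixes come from the post, the two "views" are invented for illustration:

```python
import ipaddress

def best_route(table, destination):
    """Return the most specific (longest) prefix in the table covering the destination."""
    dst = ipaddress.ip_address(destination)
    matches = [p for p in table if dst in p]
    return max(matches, key=lambda p: p.prefixlen, default=None)

covering = ipaddress.ip_network("2001:468::/32")
specific = ipaddress.ip_network("2001:468:e00::/40")

outside_view = [covering]            # an ISP that filtered out the /40
abilene_view = [covering, specific]  # Abilene sees both announcements

dst = "2001:468:e00::1"
print(best_route(outside_view, dst))  # 2001:468::/32  -> possibly a laggy tunneled path
print(best_route(abilene_view, dst))  # 2001:468:e00::/40 -> the direct announcement
```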

Then the fun part:
2001:468:e00::/40 does seem to be visible globally, getting announced
by the University of California, and going directly to: Korea! :)
And then coming back to the rest of the world over Viagenie (10566).

One of those nice paths:
2001:468:e00::/40 16150 (SE) 6667 (FI) 3549 (US) 6939 (US) 6939 (US)
10566 (CA) 3786 (KR) 17832 (KR) 1237 (KR) 17579 (KR) 2153 (US)

Neatly around the world. You might want to hint to this university not
to do 'transit' uplinks with Korean networks :)

Then there is also a 2001:468:e9c::/48 which also goes over Korea.

The colored version:
http://www.sixxs.net/tools/grh/lg/?find=2001:468::/32

The above simply happens because most ISPs sanely filter on /32
boundaries, as per:
http://www.space.net/~gert/RIPE/ipv6-filters.html
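As a rough sketch of what such a boundary filter does, assuming the simplest form of that policy (accept nothing more specific than a /32 out of the 2001::/16 production space; Gert's page has the real, more nuanced rules):

```python
import ipaddress

def passes_strict_filter(prefix: str) -> bool:
    # Assumed policy sketch: within 2001::/16 production space,
    # accept only allocation-sized announcements (/32 or shorter).
    net = ipaddress.ip_network(prefix)
    if net.subnet_of(ipaddress.ip_network("2001::/16")):
        return net.prefixlen <= 32
    return False  # other address space would need its own rules

print(passes_strict_filter("2001:468::/32"))      # True
print(passes_strict_filter("2001:468:e00::/40"))  # False: filtered, so the /32 is used instead
```

This is exactly why the /40s above stay invisible outside Abilene while the covering /32 is accepted everywhere.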

For further reading see Gert Doering's excellent presentations at:
http://www.space.net/~gert/RIPE/

Greets,
Jeroen

Yes ;)

JPMac:~ jordi$ ntpdate -q 2001:0440:1880:1000::0020
13 Oct 21:23:11 ntpdate[347]: can't find host 2001:0440:1880:1000::0020
server 0.0.0.0, stratum 0, offset 0.000000, delay 32639.00000
server 17.72.133.42, stratum 2, offset -9.996023, delay 0.13766
13 Oct 21:23:12 ntpdate[347]: step time server 17.72.133.42 offset -9.996023 sec

Regards,
Jordi

And the reason why it fails (clicked send too fast!):

traceroute6 to 2001:0440:1880:1000::0020 (2001:440:1880:1000::20) from 2001:7f9:2000:100:20d:93ff:feeb:73, 30 hops max, 12 byte packets
1 2001:7f9:2000:100:200:1cff:feb5:c535 1.832 ms * 1.138 ms
2 2001:7f9:2000:1:1::1 97.941 ms 101.684 ms 93.166 ms
3 v6-tunnel40-uk6x.ipv6.btexact.com 142.381 ms 167.692 ms 180.064 ms
4 ft-euro6ix-uk6x.ipv6.btexact.com 154.574 ms 328.447 ms 352.331 ms
5 po3-2.lonbb3.london.opentransit.net 362.112 ms 232.994 ms 231.141 ms
6 so7-2-0.loncr1.london.opentransit.net 248.975 ms 248.86 ms 249.376 ms
7 po12-0.loncr3.london.opentransit.net 159.662 ms 214.85 ms 395.218 ms
8 po12-0.oakcr2.oakhill.opentransit.net 379.212 ms 257.403 ms 366.123 ms
9 so4-0-0.loacr2.los-angeles.opentransit.net 402.118 ms 281.826 ms 450.289 ms
10 po2-0.kitbb1.kitaibaraki.opentransit.net 522.638 ms 452.638 ms po1-0.tkybb2.tokyo.opentransit.net 481.732 ms
11 ge0-0-0.tkybb4.tokyo.opentransit.net 421.303 ms po1-3.tkybb2.tokyo.opentransit.net 479.118 ms 595.444 ms
12 ge0-0-0.tkybb4.tokyo.opentransit.net 514.295 ms 472.411 ms 2001:688:0:2:8::23 467.252 ms
13 hitachi1.otemachi.wide.ad.jp 588.472 ms 439.962 ms 525.157 ms
14 hitachi1.otemachi.wide.ad.jp 422.59 ms 423.892 ms 421.864 ms
15 pc6.otemachi.wide.ad.jp 404.03 ms otm6-gate1.iij.net 473.603 ms 449.513 ms
16 otm6-gate1.iij.net 418.808 ms 517.862 ms otm6-bb0.iij.net 416 ms
17 plt001ix06.iij.net 433.301 ms plt001ix06.iij.net 426.364 ms 454.844 ms
18 plt001ix06.iij.net 473.956 ms 456.72 ms plt6-gate1.iij.net 717.541 ms
19 sl-bb1v6-rly-t-22.sprintv6.net 422.121 ms plt6-gate1.iij.net 433.742 ms 504.335 ms
20 sl-bb1v6-rly-t-22.sprintv6.net 429.331 ms 445.961 ms sl-s1v6-nyc-t-1000.sprintv6.net 439.238 ms
21 sl-bb1v6-sto-t-102.sprintv6.net 591.344 ms sl-s1v6-nyc-t-1000.sprintv6.net 423.258 ms 600.439 ms
22 * * *

Regards,
Jordi

That looks a lot like my traceroute6 (except that when I tried it, the next
hop was working). Is it just me, or is this saying that 2001:440::/32 isn't
peered in enough places (since for both my eastern US location and Jordi's
London location, the route points off to iij.net)?

> Well Valdis, that bad route also has to do with your side of the
> equation, you might want to check who you are actually using as transits
> and if the routes they are providing to you are sane enough.

Well, if somebody at stupi.se wants to do a traceroute6 back at us, I'll
be glad to see what the reverse path looks like... but last I heard
traceroute and traceroute6 showed the *forward* path of packets..

> 2001:468::/32 is in the routing table, getting accepted by most ISP's.
> This one has a reasonable route

The real problem (at least for the forward direction from here) is that the
outbound packets get into the Abilene network, and the best path from there to
2001:440:1880 is a 3ffe: tunnel to Japan and then another 3ffe: tunnel back to
New York.

>> Well Valdis, that bad route also has to do with your side of the
>> equation, you might want to check who you are actually using as transits
>> and if the routes they are providing to you are sane enough.

> Well, if somebody at stupi.se wants to do a traceroute6 back at us, I'll
> be glad to see what the reverse path looks like... but last I heard
> traceroute and traceroute6 showed the *forward* path of packets..

That is correct; try tracepath, which shows at least the asymmetry.
You can also peek at GRH to see a probable AS path back. ASNs still
tell a lot in IPv6.

Next month I'll finalize the 'symmetry' tool, which allows one to do the
AS path checkup between two places automatically.
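The core of such a symmetry check is trivial once you have both AS paths in hand; something like the following sketch, where the AS paths are hypothetical, not measured:

```python
def symmetric(forward, reverse):
    """A path pair is symmetric if the return path is the forward path reversed."""
    return forward == list(reversed(reverse))

# Hypothetical AS paths, source AS ... destination AS:
fwd = [16150, 3549, 10566, 2153]
rev_good = list(reversed(fwd))
rev_bad = [2153, 3786, 10566, 3549, 16150]  # returns via an extra AS

print(symmetric(fwd, rev_good))  # True
print(symmetric(fwd, rev_bad))   # False: asymmetric return path
```

The hard part, of course, is collecting the reverse path, which is what GRH's many BGP feeds provide.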

>> 2001:468::/32 is in the routing table, getting accepted by most ISP's.
>> This one has a reasonable route

> The real problem (at least for the forward direction from here) is that the
> outbound packets get into the Abilene network, and the best path from there to
> 2001:440:1880 is a 3ffe: tunnel to japan and then another 3ffe: tunnel back to New
> York.

Kick Abilene to not be so silly and get some real transits. Then again,
Abilene is educational, and those networks seem to have very nice (read:
overcomplex) routing policies...

Greets,
Jeroen

Somehow, I don't think anything that Abilene does is going to fix Jordi's
routing. From where *you* are, do *you* have a path to 2001:0440:1880:1000::0020
that *doesn't* go through Japan? If so, what does your path look like?

My box that gets IPv6 connectivity from Kewlio (set up via the SixXS
tunnel broker) has a fairly short route which doesn't seem to go via
Japan:

traceroute6 to time20.stupi.se (2001:440:1880:1000::20) from 2001:4bd0:202a::1, 64 hops max, 12 byte packets
1 gw-121.lon-01.gb.sixxs.net 3.484 ms 3.527 ms 3.978 ms
2 po6.712-IPv6-necromancer.sov.kewlio.net.uk 16.976 ms 4.536 ms 3.979 ms
3 sl-bb1v6-bru-t-4.sprintv6.net 55.976 ms 55.614 ms 54.972 ms
4 sl-bb1v6-sto-t-100.sprintv6.net 84.971 ms 82.604 ms 82.961 ms
5 * * *
6 2001:440:1880:1::2 97.992 ms 101.565 ms 109.964 ms
7 2001:440:1880:1::12 104.966 ms 105.651 ms 102.960 ms
8 2001:440:1880:1000::20 83.971 ms 84.650 ms 85.963 ms
-bash-2.05b$

Though my other box (with connectivity via the BT Exact tunnel broker)
goes via Japan...

-bash-2.05b$ traceroute6 time20.stupi.se
traceroute6 to time20.stupi.se (2001:440:1880:1000::20) from 2001:618:400::511d:554, 64 hops max, 12 byte packets
1 tb-exit.ipv6.btexact.com 7.983 ms 8.759 ms 7.939 ms
2 uk6x-core-hopper-g0-2.ipv6.btexact.com 9.966 ms 7.892 ms 9.945 ms
3 ft-euro6ix-uk6x.ipv6.btexact.com 9.972 ms 9.899 ms 9.944 ms
4 Po3-2.LONBB3.London.opentransit.net 9.976 ms 9.910 ms 9.952 ms
5 So7-2-0.LONCR1.London.opentransit.net 39.963 ms 10.800 ms 8.944 ms
6 Po12-0.LONCR3.London.opentransit.net 9.975 ms 9.912 ms 9.944 ms
7 Po12-0.OAKCR2.Oakhill.opentransit.net 81.971 ms 81.858 ms 82.929 ms
8 Po5-0.PASCR3.Pastourelle.opentransit.net 141.972 ms 141.986 ms 167.906 ms
9 Po2-0.KITBB1.Kitaibaraki.opentransit.net 269.852 ms 269.712 ms 270.920 ms
10 Ge0-0-0.TKYBB4.Tokyo.opentransit.net 267.901 ms 267.842 ms Po1-3.TKYBB2.Tokyo.opentransit.net 271.916 ms
11 Ge0-0-0.TKYBB4.Tokyo.opentransit.net 272.865 ms 2001:688:0:2:8::23 270.868 ms 269.056 ms
12 hitachi1.otemachi.wide.ad.jp 406.900 ms 404.830 ms 2001:688:0:2:8::23 272.890 ms
13 hitachi1.otemachi.wide.ad.jp 408.073 ms 409.827 ms 410.849 ms
14 otm6-gate1.iij.net 257.918 ms 390.834 ms 286.880 ms
15 otm6-bb1.IIJ.Net 284.922 ms otm6-gate1.iij.net 259.766 ms 259.903 ms
16 plt001ix06.IIJ.Net 260.792 ms 263.903 ms otm6-bb0.IIJ.Net 259.808 ms
17 plt001ix06.IIJ.Net 266.909 ms plt001ix06.IIJ.Net 266.716 ms 266.728 ms
18 sl-bb1v6-rly-t-22.sprintv6.net 333.883 ms 332.888 ms plt6-gate1.IIJ.Net 266.886 ms
19 sl-bb1v6-rly-t-22.sprintv6.net 339.748 ms sl-s1v6-nyc-t-1000.sprintv6.net 339.852 ms 338.706 ms
20 sl-bb1v6-sto-t-102.sprintv6.net 433.779 ms sl-bb1v6-sto-t-101.sprintv6.net 435.691 ms sl-bb1v6-nyc-t-1000.sprintv6.net 342.824 ms
21 sl-bb1v6-sto-t-101.sprintv6.net 439.739 ms 2001:7f8:d:fb::34 526.720 ms 454.105 ms
22 2001:7f8:d:fb::34 461.876 ms 459.004 ms 459.913 ms
23 2001:440:1880:1::2 456.849 ms 2001:440:1880:1::12 454.025 ms 454.121 ms
24 2001:440:1880:1000::20 436.766 ms 434.023 ms 2001:440:1880:1::12 462.884 ms
-bash-2.05b$

I don't speak for Abilene or Internet2, but here's what I know. Abilene doesn't buy transit from anyone. Their internal routing policy is quite straightforward, given that it is governed by the Abilene Conditions of Use (their AUP). The CoU would normally prohibit connections to commercial ISPs, but it is set aside for IPv6 (and IPv4 multicast).

As a result, Abilene peers with a number of commercial IPv6 networks at locations where it is convenient to do so (primarily PAIX). Abilene has excellent IPv6 connectivity to the R&E world, typically along the same paths and with performance similar to IPv4. It will never have equally good connectivity to the commercial world, and in any case, once commercial uptake of IPv6 is high enough, it will drop those peerings and let its members get IPv6 connectivity through their individual commercial ISPs. I have no idea when that will happen or how the decision will be made, but that's the stated plan.

Bill.

PS - I have no interest whatsoever in debating R&E versus commercial, the role of Abilene, etc. I'm just passing along this info. . .

ntpdate -q 2001:0440:1880:1000::0020
server 2001:440:1880:1000::20, stratum 1, offset 0.048519, delay 0.56551
13 Oct 15:41:08 ntpdate[7397]: adjust time server 2001:440:1880:1000::20 offset 0.048519 sec

Tim Rainier

JORDI PALET MARTINEZ <jordi.palet@consulintel.es> wrote on 10/13/2005 03:23 PM to "nanog@merit.edu" <nanog@merit.edu> (Re: IPv6 news):

> Yes ;)
>
> JPMac:~ jordi$ ntpdate -q 2001:0440:1880:1000::0020
> 13 Oct 21:23:11 ntpdate[347]: can't find host 2001:0440:1880:1000::0020
> server 0.0.0.0, stratum 0, offset 0.000000, delay 32639.00000
> server 17.72.133.42, stratum 2, offset -9.996023, delay 0.13766
> 13 Oct 21:23:12 ntpdate[347]: step time server 17.72.133.42 offset -9.996023 sec
>
> Regards,
> Jordi

At least some parts of the US seem to.
This wouldn't be so bad if I didn't have to go across the US to get to the first hop:

FT@vash:~$ /usr/sbin/ntpdate -q 2001:0440:1880:1000::0020 192.36.143.234
server 2001:440:1880:1000::20, stratum 1, offset -0.005350, delay 0.27211
server 192.36.143.234, stratum 1, offset 0.002036, delay 0.15575
13 Oct 16:13:39 ntpdate[1526]: adjust time server 192.36.143.234 offset 0.002036 sec

FT@vash:~$ /usr/sbin/traceroute6 2001:0440:1880:1000::0020
traceroute to 2001:0440:1880:1000::0020 (2001:440:1880:1000::20) from 2001:470:1f00:ffff::649, 30 hops max, 16 byte packets
1 tommydool.tunnel.tserv1.fmt.ipv6.he.net (2001:470:1f00:ffff::648) 67.52 ms 70.567 ms 68.182 ms
2 2001:470:1fff:2::26 (2001:470:1fff:2::26) 67.607 ms 68.592 ms 70.168 ms
3 sl-bb1v6-rly-t-76.sprintv6.net (3ffe:2900:a:1::1) 143.214 ms 144.479 ms 145.113 ms
4 sl-s1v6-nyc-t-1000.sprintv6.net (2001:440:1239:1001::2) 150.654 ms 147.692 ms 151.378 ms
5 sl-bb1v6-sto-t-102.sprintv6.net (2001:440:1239:100d::2) 244.013 ms sl-bb1v6-sto-t-101.sprintv6.net (2001:440:1239:1012::1) 246.934 ms sl-bb1v6-sto-t-102.sprintv6.net (2001:440:1239:100d::2) 240.373 ms
6 2001:7f8:d:fb::34 (2001:7f8:d:fb::34) 262.842 ms 261.69 ms 263.779 ms
7 2001:440:1880:1::2 (2001:440:1880:1::2) 258.422 ms 266.312 ms 264.051 ms
8 2001:440:1880:1::12 (2001:440:1880:1::12) 250.289 ms 265.565 ms 267.93 ms
9 2001:440:1880:1000::20 (2001:440:1880:1000::20) 246.908 ms 249.03 ms 247.193 ms

Mac OS X Tiger no likey the ntpdate v6 :( but:
~> traceroute6 2001:0440:1880:1000::0020
traceroute6 to 2001:0440:1880:1000::0020 (2001:440:1880:1000::20) from 2001:408:1009:2:203:93ff:feec:f318, 30 hops max, 12 byte packets
1 350.0-0fastethernet.rtr.ops-netman.net 2.551 ms 1.894 ms 23.325 ms
2 2001.gr-0-1-0.hr6.tco4.alter.net 26.575 ms 55.095 ms *
3 2001:408:11::8 98.683 ms 95.157 ms 98.663 ms
4 sl-bb1v6-bru.sprintlink.net 174.97 ms 172.633 ms 172.264 ms
5 sl-bb1v6-sto-t-100.sprintv6.net 207.452 ms 205.804 ms 211.638 ms
6 2001:7f8:d:fb::34 213.451 ms 226.246 ms 225.651 ms
7 2001:440:1880:1::2 209.371 ms 225.198 ms 225.462 ms
8 2001:440:1880:1::12 218.561 ms 227.934 ms 226.571 ms
9 2001:440:1880:1000::20 207.678 ms 206.464 ms 207.334 ms

(Debian sarge seems to like ntpdate v6 though :) bug opened with Apple)

> I also presume you sent them a check and showed them the business case for
> the upgrade? No large provider is going to upgrade anything without a
> business reason. Oh, and some parts, critical parts even, of v6 are still
> 'broken'...

Of course, that's a business decision, but maybe instead of getting a new
check for the IPv6 service, by not providing it you will lose some checks
from existing customers who demand dual stack ;)

Business is also about being competitive, and other carriers already have
the service as a value-add for their existing IPv4 customers.

Regards,
Jordi

> Of course, that's a business decision, but maybe instead of getting a new
> check for the IPv6 service, by not providing it you will lose some checks
> from existing customers who demand dual stack ;)

As Ted and others have already said: "Show me the customers who are
asking"... So far the numbers are startlingly low, too low to justify full
builds by anyone large.

> Business is also about being competitive, and other carriers already have
> the service as a value-add for their existing IPv4 customers.

Sure, and the decision to use their network I'd suspect hardly ever comes
down to 'v6'. My point was, really, that the screaming crazy man saying
"I told them dudes to forklift their network" is hardly productive.
Showing, if folks can't find it themselves, that there is a business case
that would justify a few-million-dollar upgrade is...

A few folks that have a deployment going are ahead of the curve; hopefully
they can keep the parts they have running and upgrade away from the 7507
that is their current solution :) Hopefully other folks can make their
beancounters understand that v6 is going to happen regardless of their
wishes for it NOT to happen due to upgrade costs. Also, hopefully, as old
hardware is finally cycled out, new and v6-capable hardware will take its
place :)

-Chris

<SNIP>

> I also presume you sent them a check and showed them the business case for
> the upgrade? No large provider is going to upgrade anything without a
> business reason.

Current clients are already paying them at the moment, are they not?
They apparently didn't reserve any funds for upgrades of their network,
nor did they take IPv6 along in the last 10 years of hardware cycles,
thus clearly having played dumb for the last 10 years. Why should their
customers suddenly have to cough up for the stupidity of not being able
to run a business and plan ahead into the future? As they apparently
didn't upgrade their network for 10 years, somebody has to have a fat
bank account by now :)

Even then, they could easily do some 'good' tunnels over their own IPv4
infrastructure, enabling IPv6 at the edges where they connect their
customers, and maybe do some sensible peering, thus providing sensible
IPv6 transit to their paying customers...
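For reference, such tunnels are plain 6in4: the whole IPv6 packet becomes the payload of an IPv4 packet whose protocol field is 41. A sketch of the encapsulation step, where the addresses are documentation prefixes and the IPv4 header checksum is left zero for brevity (a real implementation must compute it):

```python
import socket
import struct

IPPROTO_IPV6 = 41  # 6in4: IPv6 carried directly inside IPv4

def encapsulate_6in4(src_v4: str, dst_v4: str, ipv6_packet: bytes) -> bytes:
    """Prepend a minimal 20-byte IPv4 header (no options, checksum left 0)."""
    total_len = 20 + len(ipv6_packet)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5,          # version 4, IHL 5 (20 bytes, no options)
        0,                     # TOS
        total_len,             # total length
        0, 0,                  # identification, flags/fragment offset
        64,                    # TTL
        IPPROTO_IPV6,          # protocol 41 = encapsulated IPv6
        0,                     # header checksum (omitted in this sketch)
        socket.inet_aton(src_v4),
        socket.inet_aton(dst_v4),
    )
    return header + ipv6_packet

# A dummy 40-byte IPv6 header (version nibble 6) as payload:
pkt = encapsulate_6in4("192.0.2.1", "198.51.100.2", b"\x60" + bytes(39))
print(len(pkt))  # 60
print(pkt[9])    # 41 -> the protocol field marks IPv6-in-IPv4
```

The extra encapsulation and the tunnel endpoints' CPU are exactly where the latency penalties in the traceroutes above come from.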

> Oh, and some parts, critical parts even, of v6 are still
> 'broken'...

Yep, there is no multihoming, but effectively, aside from the BGP tricks
currently being played, there is nothing for that in IPv4 either.
And one won't need to upgrade a Tier 1's hardware to support shim6, as
that will all be done at the end site and not at the "Tier 1" level, so
that is just another bad excuse.

Greets,
Jeroen

The larger EU/US ISPs that have real deployments all use Junipers (for
their IPv6), not Ciscos (with a few exceptions - Verio?). I don't know
whether that's true for ASPAC folks too - can someone comment?

One might conclude a thing or two from that - or not.

Best regards,
Daniel