Why is IPv6 broken?

It's broken, first and foremost, because not all network providers who claim to be tier 1 actually are tier 1 - at least not for IPv6.

Even worse, some of these providers run 6to4 relays that serve home users. A user has no choice about which provider's 6to4 relay they use... so they might end up on a relay run by a provider that doesn't peer with their intended destination. I don't think the IETF saw that one coming. But the result is to make 6to4 even more broken. Now, I know some people want 6to4 to die, but while it still exists in some form, the user experience is worse than it needs to be. The temporary fix is for every provider to run its own 6to4 relay for its own customers (assuming that they themselves have full connectivity).
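
To make the "no choice" point concrete: the 6to4 prefix is derived mechanically from the host's IPv4 address, and outbound traffic goes to the well-known anycast relay 192.88.99.1 from RFC 3068 - whichever provider answers that anycast address nearest you is the relay you get. A rough Python sketch (the sample address is made up):

    import ipaddress

    RELAY_ANYCAST = ipaddress.IPv4Address("192.88.99.1")  # RFC 3068 anycast 6to4 relay

    def sixto4_prefix(public_v4):
        """2002::/16 with the 32-bit IPv4 address embedded in bits 16..47."""
        v4 = int(ipaddress.IPv4Address(public_v4))
        return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

    print(sixto4_prefix("203.0.113.9"))  # 2002:cb00:7109::/48
    print(RELAY_ANYCAST)                 # relay is picked by anycast routing, not by the user

Nothing in there is under the user's control; the relay choice just falls out of whoever is closest in anycast/BGP terms.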

Right now, unless you buy transit from multiple, carefully chosen tier 1s, you have only part of the IPv6 internet. Many tier 1s are unsuitable even as backup connections, since you still want your backup connection to have access to the whole internet! Good tier 2 providers might be an excellent choice, since good ones have already done this legwork and can monitor their providers for compliance.

A few myths...

Routing table size has nothing to do with completeness of routes. Google may be one route, through aggregation. And SmallCo may advertise a large (covering) route through one provider and, for traffic engineering, a more specific route through a second one - in many cases, anyone who has the large route can reach SmallCo even without the more specific being present. So routing table size doesn't work as a measure. In addition, some providers aggregate their routing tables to reduce routing load and such, while others intentionally don't - or even deaggregate - so that they can brag about having bigger routing tables. What you need to ask is: "How many /64s can you get to from your network, and how many of those /64s are reachable from at least one other major provider (you don't care about internal-only networks, after all)?" They can give you that information, but many won't want to.
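
Here's a quick illustration of why counting routes tells you nothing (documentation prefixes, purely illustrative):

    import ipaddress

    # One provider carries a single aggregate; another carries 256 more-specifics.
    table_a = [ipaddress.ip_network("2001:db8::/32")]
    table_b = [ipaddress.ip_network("2001:db8:%x::/48" % i) for i in range(256)]

    dest = ipaddress.ip_address("2001:db8:42::1")

    def reachable(table, dst):
        return any(dst in net for net in table)

    print(len(table_a), reachable(table_a, dest))  # 1 True
    print(len(table_b), reachable(table_b, dest))  # 256 True -- 256x the routes, same reachability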

It's also not about technical people not getting along. It's about business players trying to make money, but not just that either. It's also about ensuring that providers don't end up assuming more than their share of costs for a link. Just because you have a common peering point doesn't mean that turning peering on would reduce your costs. In some cases it may increase costs tremendously, particularly on your long haul backbone links, because the other party would like to take advantage of an attitude of trust on the internet. That's why we end up with peering policies and contracts.

What is the issue?

Let's take Hurricane Electric. This is no different from other providers... basically, they want to say, "We shouldn't need to pay anyone for IPv6 transit." This is what Cogent said about IPv4 a few years ago. Google used to say this for IPv6 too; I'm not sure if they are still saying it. Basically, "We know we're big enough that you won't want to screw your users by not peering with us."

A small network couldn't pull off this tactic - imagine a 100-node network telling the IPv4 tier 1s: "Hey, I'm in the Podunk Internet Exchange, so are you, so I'm going to peer with you so I don't have to buy any bandwidth for my web server (placed in the Podunk exchange)." Sure, they would like to - it would save a ton of money if their site got lots of hits. I mean, who wouldn't want free connectivity? The tier 1s would simply say no.

In IPv6, we're going through what we settled years ago in IPv4 - who has to pay who to connect. After all, even free peering connections have a cost in manpower, debugging, traffic engineering, documentation, etc.

Some players who aren't getting free interconnection to tier 1s in IPv4 want to get it in IPv6. So they've worked to attract lots of users, and done so under the guise of "We like IPv6 and want to promote it." Others have not bothered with trying to attract the users, but have said, "We're too big for you to not want to give us connectivity for free, since it would piss off your users if you don't" (Google did this at one point in the past, may still be doing it). The Google example is basically trying to use a monopoly position to force business decisions.

Now, HE, Google, and others would want you to think, "Hey, IPv6 is all new, and these $#@! other providers just want to make a buck on something they have no right to." Well, perhaps. But what they aren't saying is, "We can turn on BGP for IPv6 on our existing connections to other providers, with no cost to us, and actually have full connectivity." The issue isn't about cost today - nobody is charging extra for IPv6 in addition to IPv4 on a pipe where you already buy IPv4 bandwidth. And Google and HE already buy IPv4 bandwidth. What they are thinking of is the future, 15 years from now, when there is no IPv4 - in that future, IPv6 isn't insignificant bandwidth, it's everything. Wouldn't it be nice to be a tier 1 and not pay for that? Of course! And certainly one can argue for or against the current tier 1 club's exclusivity. But it's the way the internet works right now, for better or worse. In the meantime, in pursuit of this future, today's customers are screwed by these providers trying to position themselves to make more profit margin down the road.

Which is better for the customer? A provider whose customers are screwed today so that the provider can have a better negotiating position in business discussions, OR one that does whatever it takes to give the customer full connectivity? (To HE's credit, they are giving away IPv6 transit today, so you aren't losing anything you paid for by not having the full internet routing table - but it is a big reason not to pay HE for other services, such as data center colocation. Go with a provider that you pay and which gives you what you pay for: full transit.)

A bit about peering...

Lots of people who aren't running big networks don't understand peering. They think, "Doesn't this benefit everyone if everyone exchanges traffic?" Maybe, on a pure level, but the business doesn't work that way.

I'll give you an example. Let's say you are a little ISP, and located in Virginia, near a major peering point. You say, "All the tier 1s are there, I can pull fiber to that peering point, which is only a block away, and have free internet, other than the cost of the line." So, let's say you run the line, and, let's say that all the tier 1s agree to let you peer for free, since they want your traffic too. Now, let's say your user downloads 1,000 TB from a server in California, on Qwest's network.

You paid, let's say, $15,000 for your piece of fiber going a block. You needed to hire contractors and buy permits and such, after all. So you shared in the costs of letting the user get to the server. What did Qwest pay? Well, they dug trenches, pulled fiber, negotiated with cities, counties, and states, paid taxes on their work, lit this fiber, etc. It cost a lot because they went a lot further than your one block. And a lot more than $15,000.

You say, "So what! Their customer benefits too!" That's true, but let's go a bit further. Let's say you have a network that extends to California - you by DS3s from Sprint to do it. There's some cost in that, but your user in Virginia would need more bandwidth than your DS3s. So you decide NOT to peer in California, just in Virginia. That way you don't have to upgrade your lines for your Virginia user. Maybe you even legally break your company into two entities, so that you can peer in California and Virginia both, but you can say with a straight face, "We only have Virginia offices for this user - the other company is a separate entity, and not the entity that owns either the server or the end user."

In other words, you found a way to shift most of the traffic burden and infrastructure costs to Qwest, away from your user.

This is why Qwest has some sort of peering policy. Among other things, it will require multiple exchange points, and Qwest will probably say they will send traffic to the closest peering point, to minimize their costs. You get to do the same (more on that later).

Let's say that you currently buy bandwidth from NTT - you're not big enough to get free peering from everyone, but Qwest agrees to peer with you. Of course, Qwest and NTT also have a business relationship, giving each other free peering. If Qwest gives you - and many other NTT customers - free peering, however, you'll send less traffic across NTT's network. That might be good from a technical standpoint, but NTT is now selling you a smaller pipe - and making less money. In effect, Qwest undercut NTT's business and lowered NTT's profits on the connection. How will NTT respond to that, when they were also giving free peering to and from Qwest? Well, they might decide that Qwest isn't a very nice partner and tell Qwest, "Pay us for transit or get lost." That could be ugly - both NTT and Qwest could lose, and Qwest, if they actually care about stable service, won't want to risk it. So generally you don't give free peering to anyone who is a customer of one of your free peers. You don't hurt their business. In fact, it's often a requirement written into the peering contract. (That said, you could argue whether or not there is an abuse of monopoly here... that's a different issue.)

Going one further, let's say you have the server, and Qwest has the end-user. That doesn't change anything - the economics are still such that Qwest has the cost, you don't. That said, it's convention that the person receiving the traffic pays for most of the backhaul.

Asymmetry in the Internet:

What's the path between your host and a remote server? How do you find it? If you said "traceroute", you might be right, but you are probably wrong. You need to traceroute from both sides.

Every provider on the internet is trying to minimize costs. This means you want traffic to leave your network and get to the destination network having traveled as little distance as possible on your own infrastructure, because costs go up with distance. It's cheaper to increase the size of pipes within a city to reach a peering point than to increase your backbone pipe size. So peering contracts typically specify that you dump traffic to the peer as soon as possible. That means the party receiving the traffic generally pays more. It also means that any traffic crossing an AS boundary almost certainly travels a completely different path in each direction. In many cases, one third-party provider is used in one direction and a different one in the other. So seeing packet loss in your traceroute at some random tinet router doesn't mean that router is the cause of any problem, since the return path for that probe response might cross yet another network that is never transited in either direction by your actual connection. (I'm also ignoring the fact that most large routers rate-limit ICMP generation intentionally, to spare the router CPU from overload: it takes router CPU to generate an ICMP TTL exceeded, but it doesn't take router CPU to forward a packet. So traceroute or ping indicating loss at a single router doesn't mean anything in itself - the forwarding path itself likely has zero percent loss.)
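
If it helps, here's a rough sketch of that "hand it off as early as possible" behavior, with made-up cities and distances, just to show why the two directions cross the peering fabric in different places:

    EXCHANGES = ["Ashburn", "Chicago", "San Jose"]

    # Distance (arbitrary units) from each endpoint's home city to each exchange.
    DIST = {
        "user (Virginia)":     {"Ashburn": 1, "Chicago": 5, "San Jose": 9},
        "server (California)": {"Ashburn": 9, "Chicago": 5, "San Jose": 1},
    }

    def exit_point(src):
        """Hot potato: the sending network dumps traffic at its own closest exchange."""
        return min(EXCHANGES, key=lambda ix: DIST[src][ix])

    print("user -> server hands off at:", exit_point("user (Virginia)"))      # Ashburn
    print("server -> user hands off at:", exit_point("server (California)"))  # San Jose
    # Different exchange each way, so different backbones (and possibly different
    # third parties) carry each direction.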

So, here's the scenario.

Let's say a user and a server are on two separate networks, U (user) and S (server).

Let's say they both utilize transit provider T. So the path could be: U -- T -- S. S buys an OC12 from T, while U buys a T1.

But let's say that the user has a second transit provider, BIG, who is a free peer of T. He bought an OC3 from BIG. So there's another path between U and S: U -- BIG -- T -- S. Likely this path is much faster than U -- T -- S.

So the traffic from U toward S takes the path U -- BIG -- T -- S.

Now, which path do the ICMP TTL exceeded packets take when T's routers generate them in response to the user's traceroute? Do they go straight back over the T1 line, or over the peering connection to BIG and then to the customer? The answer, it turns out, depends on network configuration and policy. Let's say they go out over the T1, but the T1 is congested. It will look like the congestion is at the connection between BIG and T, because this is the first hop that will show packet loss. BUT... the congestion is actually at U's connection to T, which is irrelevant to the actual traffic path between U and S. So the user, at this point, calls up BIG and T and bitches, "Your peering connection is congested," when the real problem is that traffic completely unrelated to the user's actual data path (the ICMP replies) is going via a congested link that is never used for connectivity between U and S.

If you add several providers into this loop, you can end up with a situation where traffic uses Sprint in one direction, but never hits a Sprint router in the other. This is actually very common. A user with slow downloads might be experiencing packet loss on the path from server to user, but not the other way around. In other words, the problem is a provider that never shows up on the user's traceroute!
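
Here's a toy model of the U/T/BIG scenario above - the names and loss numbers are invented - showing how a congested link that the data path never touches still shows up as "loss" in the user's traceroute:

    import random
    random.seed(1)

    # Forward probe path: U -> BIG -> T (toward S). The data path is clean both
    # ways; only the ICMP TTL-exceeded replies from T's routers ride back over
    # U's congested T1 to T.
    HOPS = ["BIG-r1", "BIG-r2", "T-r1", "T-r2"]
    REPLY_LOSS = {"BIG-r1": 0.0, "BIG-r2": 0.0,   # replies return via BIG: clean
                  "T-r1": 0.3,  "T-r2": 0.3}      # replies return via the congested T1

    def fake_traceroute(probes=100):
        for hop in HOPS:
            answered = sum(random.random() > REPLY_LOSS[hop] for _ in range(probes))
            print("%-7s %3d%% loss reported" % (hop, 100 - answered))

    fake_traceroute()
    # "Loss" starts at T-r1, right at the BIG/T boundary -- which is exactly the
    # link the user will blame, even though nothing on the data path drops a thing.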

Remember that providers hand off traffic as soon as possible to their peer. So whoever receives the larger amount of traffic needs the bigger cross-country (or trans-oceanic) links. If one side transmits a T1's worth of data and the other side transmits an OC48's worth of data, only one of them needs OC48s across the country - the one receiving the traffic. That's why you hear about "traffic ratios". If the traffic is even both ways, both sides pay for the same amount of cross-country infrastructure to carry it. So most providers won't peer for free with someone who sends, say, 10 times the amount of traffic that they receive. It would end up costing a lot of money.
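
As a purely hypothetical example of how a ratio clause gets applied (the 2:1 figure and the traffic numbers are made up; real policies add traffic minimums, multiple required interconnect locations, consistent announcements, and so on):

    def meets_ratio_policy(bps_received, bps_sent, max_ratio=2.0):
        """True if the imbalance between received and sent traffic is within policy."""
        heavier, lighter = max(bps_received, bps_sent), min(bps_received, bps_sent)
        return (heavier / lighter) <= max_ratio

    print(meets_ratio_policy(9e9, 8e9))   # ~1.1:1 -> True, roughly balanced
    print(meets_ratio_policy(10e9, 1e9))  # 10:1   -> False, one side eats the long-haul cost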

Back to IPv6...that's interesting, but what does it have to do with IPv6?

Some providers want to do away with traffic ratio policies, multiple-location peering requirements, the rule against providing free service to the other's customers, and so on.

THAT is why you can't ping some sites from your HE tunnel. It's not just that providers won't peer. It's also that providers have rules to keep themselves from getting screwed.

Certainly, there are ways around some of this (for example, traffic ratios - if I make sure my own network carries the cross-country traffic I send, rather than yours, then I've addressed that concern, at a bit of increased expense to myself). But it's generally not worth doing until the providers involved are sufficiently large. Other things don't have a good technical fix, like not peering with your peer's customers - that's a business rule.

<deBunk>

Where did you get all this from?

There is not even one single reference to a URL. Not to be rude, but how
long did it take you to write this theory?

As for "It's broken, first and foremost..." They may be a Tier 1
provider of other services and also happen to offer IPv6 at which they
are only a Tier 2 or 3 but using the marketing gimics of theyre original
Tier 1 status to get acknowledgement.

I stopped reading shortly after, I think, the second paragraph, and
scanned the rest for URLs that might have made this clear and to the
point, but did not find any.

Hearsay.

</deBunk>

You should have titled your thread, "my own personal rant about
Hurricane Electric's IPv6 strategy." You may also have left out the
dodgy explanation of peering policies and technicalities, since these
issues have been remarkably static since about 1996. The names of the
networks change, but the song remains the same. This is not a novel
subject on this mailing list. In fact, there have been a number of
threads discussing HE's practices lately. If you are so interested in
them, I suggest you review the list archive.

There are quite a few serious, unresolved technical problems with IPv6
adoption besides a few networks playing chicken with their collective
customer-bases. The lack of will on the part of vendors and operators
to participate in the IETF process, and make necessary and/or
beneficial changes to the IPv6 standards, has left us in a situation
where IPv6 implementation produces networks which are vulnerable to
trivial DoS attacks and network intrusions.

The lack of will on the part of access providers to insist on
functioning IPv6 support on CPE and BRAS platforms has even mid-sized
ISPs facing nine-figure (as in, hundred-million-dollars) expenses to
forklift-upgrade their access networks and end-user equipment, at a
time when IPv6 seems to be the only way to continue growing the
Internet.

The lack of will on the part of major transit networks, including
Savvis, to deploy IPv6 capabilities to their customers, means that
customers caught in multi-year contracts may have no option for native
connectivity. Cogent's policy of requiring a new contract, and from
what I am still being told by some European customers, new money, from
customers in exchange for provisioning IPv6 on existing circuits,
means a simple technical project gets caught up in the complexities of
budgeting and contract execution.

If you believe that the most serious problem facing IPv6 adoption is
that HE / Level3 / Cogent don't carry a full table, you are living in
a fantasy world.

+1

The lack of will on the part of the IETF to attract input from and involve
operators in their processes (which I would posit is a critical element in
the process). And the lack of will/foresight on the part of the IETF to
respond to input from operators that they have received. If fingers can
be pointed at both sides, i.e. operators and IETF, then both sides are to
blame. The IETF only has value if they are publishing "standards" that
work properly in the real world. If the implementers of these "standards"
say that they are broken, then the IETF has failed to provide value.

If you believe that the most serious problem facing IPv6 adoption is
that HE / Level3 / Cogent don't carry a full table, you are living in
a fantasy world.

+1

-DMM

[..]

+1

The lack of will on the part of the IETF to attract input from and involve
operators in their processes (which I would posit is a critical element in
the process).

Ehmmmm ANYBODY, including you, can sign up to the IETF mailing lists and
participate there, just like a couple of folks from NANOG are already doing.

You are on NANOG out of your own free will, the same applies to the
IETF. If you don't participate here your voice is not heard either, just
like at the IETF.

Peeking at the ipv6@ietf.org member list, I don't see your name there.
You can signup here: ipv6 Info Page

Greets,
Jeroen

True, anyone can participate in the IETF processes. However, if key players do
not participate, then something is broken. I will take my lumps for not
participating.

My point was - "If fingers can be pointed at both sides, i.e. operators and
IETF, then both sides are to blame."

In the corporate world, if I were contemplating changing the framework of a
system, then I would need to get buy in / agreement from the stakeholders of
that system. If I was going to change the framework behind an HR system, then
the HR managers and HR systems experts would all have to agree to the change.
If I changed the framework and broke all of the HR systems and then told my boss
that I scheduled a meeting and nobody from HR showed up and therefore I used
that as agreement in their absence, then I would get fired. Yes, I understand
that corporate environments are very different from the IETF environment, but
there are perhaps some lessons to learn from the corporate environment.

Most RFCs operate within a meritocracy. A standard can be proposed for
"Example Protocol v10" and if nobody likes it outside of the IETF, then it is
not implemented by anyone and it eventually dies on the vine. IPv6 is
"different" in that it is the underpinning of every other protocol/standard that
will exist on or operate over the internet for the next 20-30 years (probably).
We had 10+ years of IPv6 not being implemented by anyone (seriously), yet it
didn't die on the vine. Perhaps the process for "Example Protocol v10" and the
process for IPv6 should be different - given the fundamental difference in their
scope.

No, we can't change the past. "Those who do not learn from history are doomed
to repeat it." - Santayana. I would say that many variables that got us to
where we are today - which is out of IPv4 addresses and faced with only IPv6,
which many believe is fundamentally flawed, as our only way forward - holds some
lessons to be learned... but perhaps this is just me - and if so, I apologize
for the noise.

Peeking at the ipv6@ietf.org member list, I don't see your name there.
You can signup here: ipv6 Info Page

Absolutely true, fixed.

Greets,
  Jeroen

-DMM

so... how much of the heavy lifting are you personally willing to do and how much are you
depending/expecting others to do on your behalf?

public whining that the v6 network does not mirror the v4 network is not productive and
is not news.

of course ymmv.

/bill

Hi David,

This is a process problem, not an individual problem.

The IETF is run by volunteers. They volunteer because they find
designing protocols to be fun. For the most part, operators are not
entertained by designing network protocols. So, for the most part they
don't participate.

This is not going to change. And it also isn't the problem -- people
who enjoy the work tend to do better work.

The problem is that the IETF routinely exceeds the scope of designing
network protocols. Participants in the working groups take what are
fundamentally operations issues unto themselves. They do so knowing
they lack adequate participation by network operators. And the process
that leads to RFCs offers inadequate checks and balances to mitigate
that behavior.

Consider, for example, RFC 3484. That's the one that determines how an
IPv6 capable host selects which of a group of candidate IPv4 and IPv6
addresses for a particular host name gets priority. How is a server's
address priority NOT an issue that should be managed at an operations
level by individual server administrators? Yet the working group which
produced it came up with a static prioritization that is the root
cause of a significant portion of the IPv6 deployment headaches we
face.
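
For reference, the static prioritization in question is RFC 3484's default policy table. Here's a stripped-down sketch of just the "prefer higher precedence" rule applied to that table - the real algorithm has several more rules, and the candidate addresses below are documentation/example space:

    import ipaddress

    DEFAULT_POLICY = [          # (prefix, precedence) from RFC 3484 section 2.1
        ("::1/128", 50),
        ("::/0", 40),
        ("2002::/16", 30),      # 6to4
        ("::/96", 20),
        ("::ffff:0:0/96", 10),  # IPv4-mapped: lowest precedence
    ]
    POLICY = [(ipaddress.ip_network(p), prec) for p, prec in DEFAULT_POLICY]

    def precedence(addr):
        a = ipaddress.ip_address(addr)
        if a.version == 4:      # treat IPv4 candidates as ::ffff:a.b.c.d
            a = ipaddress.ip_address("::ffff:" + addr)
        # longest matching prefix wins, then report its precedence
        return max((net.prefixlen, prec) for net, prec in POLICY if a in net)[1]

    candidates = ["2001:db8::80", "2002:cb00:7109::80", "192.0.2.80"]
    print(sorted(candidates, key=precedence, reverse=True))
    # ['2001:db8::80', '2002:cb00:7109::80', '192.0.2.80'] -- native v6 first,
    # 6to4 next, v4 last.

An unconfigured host will try native IPv6 first no matter how broken its IPv6 path happens to be - which is exactly the deployment headache.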

I don't know the whole solution to this problem, but I'm pretty sure I
know the first step.

Today's RFC candidates are required to call out IANA considerations
and security considerations in special sections. They do so because
each of these areas has landmines that the majority of working groups
are ill equipped to consider on their own.

There should be an operations callout as well -- a section where
proposed operations defaults (as well as statics for which a solid
case can be made for an operations tunable) are extracted from the
thick of it and offered for operator scrutiny prior to publication of
the RFC.

Food for thought.

Regards,
Bill Herrin

While this is true, there are a couple of factors that make it more difficult
than it would appear on the surface.

Number one: Participating effectively in IETF is a rather time-consuming
process. While a lot of engineers and developers may have IETF effort
as a primary part of their job function and/or get their employer to let
them spend time on it, operators are often too busy keeping what they
already have running and it can be _VERY_ difficult to get management
to support the idea of investing time in things like IETF which are not
seen by management as having direct operational impact. NANOG
is about the limit of their vision on such things and even that is not
well supported in a lot of organizations.

Number two: While anyone can participate, approaching IETF as an
operator requires a rather thick skin, or, at least it did the last couple
of times I attempted to participate. I've watched a few times where
operators were shouted down by purists and religion over basic
real-world operational concerns. It seems to be a relatively routine
practice and does not lead to operators wanting to come back to
an environment where they feel unwelcome.

Owen

While this is true, there are a couple of factors that make it more difficult
than it would appear on the surface.

Number one: Participating effectively in IETF is a rather time-consuming
process. While a lot of engineers and developers may have IETF effort
as a primary part of their job function and/or get their employer to let
them spend time on it, operators are often too busy keeping what they
already have running and it can be _VERY_ difficult to get management
to support the idea of investing time in things like IETF which are not
seen by management as having direct operational impact. NANOG
is about the limit of their vision on such things and even that is not
well supported in a lot of organizations.
   

Vendors make up the vast bulk of attendance at ietf. And vendors are
there for one reason: to make stuff that you'll be paying for. So you
pay for it at ietf time, or you pay for it at deployment time. Either way,
you'll be paying.

Number two: While anyone can participate, approaching IETF as an
operator requires a rather thick skin, or, at least it did the last couple
of times I attempted to participate. I've watched a few times where
operators were shouted down by purists and religion over basic
real-world operational concerns. It seems to be a relatively routine
practice and does not lead to operators wanting to come back to
an environment where they feel unwelcome.
   
If you're trying to imply that operators get singled out, that's
not been my experience. You definitely need to have a thick skin
given egos and there's definitely a large pool of professional
ietf finger waggers, but their holier than thou attitude is spread
to all in their path, from what I've seen. I won't speak for every
working group, but the ones I've been involved with have been
pretty receptive to operator input.

Mike

You are on NANOG out of your own free will, the same applies to the
IETF. If you don't participate here your voice is not heard either, just
like at the IETF.

True, anyone can participate in the IETF processes. However, if key players
do not participate, then something is broken. I will take my lumps for not
participating.

My point was - "If fingers can be pointed at both sides, i.e. operators and
IETF, then both sides are to blame."

Hi David,

This is a process problem, not an individual problem.

The IETF is run by volunteers. They volunteer because they find
designing protocols to be fun. For the most part, operators are not
entertained by designing network protocols. So, for the most part they
don't participate.

This is not going to change. And it also isn't the problem -- people
who enjoy the work tend to do better work.

The problem is that the IETF routinely exceeds the scope of designing
network protocols. Participants in the working groups take what are
fundamentally operations issues unto themselves. They do so knowing
they lack adequate participation by network operators. And the process
that leads to RFCs offers inadequate checks and balances to mitigate
that behavior.

Consider, for example, RFC 3484. That's the one that determines how an
IPv6 capable host selects which of a group of candidate IPv4 and IPv6
addresses for a particular host name gets priority. How is a server's
address priority NOT an issue that should be managed at an operations
level by individual server administrators? Yet the working group which
produced it came up with a static prioritization that is the root
cause of a significant portion of the IPv6 deployment headaches we
face.

3484 specifies a static default. By definition, defaults in absence of
operator configuration kind of have to be static. Having a reasonable
and expected set of defaults documented in an RFC provides a known
quantity for what operators can/should expect from hosts they have
not configured. I see nothing wrong with RFC 3484 other than I would
agree that the choices made were suboptimal. Mostly that was based
on optimism and a lack of experience available at the time of writing.

There is another RFC and there are APIs and most operating systems
have configuration mechanisms where an operator CAN set that to
something other than the 3484 defaults.
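
From the application's side the ordering is just whatever getaddrinfo() hands back, so changing the host policy changes behavior without touching the application. A quick Python illustration (the hostname is a placeholder, and it obviously needs working DNS and a resolver library that sorts per policy):

    import socket

    # The application just walks the list in the order the host's address
    # selection policy produced (glibc consults /etc/gai.conf for this; other
    # operating systems have their own knobs).
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
            "www.example.com", 80, proto=socket.IPPROTO_TCP):
        print(family, sockaddr[0])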

I don't know the whole solution to this problem, but I'm pretty sure I
know the first step.

I don't know what you had in mind, but, reading RFC 5014 would be my
suggestion as a good starting point.

Today's RFC candidates are required to call out IANA considerations
and security considerations in special sections. They do so because
each of these areas has landmines that the majority of working groups
are ill equipped to consider on their own.

There should be an operations callout as well -- a section where
proposed operations defaults (as well as statics for which a solid
case can be made for an operations tunable) are extracted from the
thick of it and offered for operator scrutiny prior to publication of
the RFC.

I think this would be a good idea, actually. It would probably be more
effective to propose it to IETF than to NANOG, however.

Owen

Number two: While anyone can participate, approaching IETF as an
operator requires a rather thick skin, or, at least it did the last couple
of times I attempted to participate. I've watched a few times where

I am subscribed to the IDR (BGP, etc.) and LISP lists. These are
populated with different people and cover entirely different topics.
My opinion is the following:

* The IDR list is welcoming of operators, but whether or not your
opinion is listened to or included in the process, I do not know.
Randy Bush, alone, posts more on this list than the sum of all
operators who post in the time I've been reading. I think Randy's
influence is 100% negative, and it concerns me deeply that one
individual has the potential to do so much damage to essential
protocols like BGP. Also, the priorities of this list are pretty
fucked. Inaction within this working group is the reason we still
don't have expanded BGP communities for 32 bit ASNs. The reason for
this is operators aren't participating. The people on the list or the
current participants of the WG should not be blamed. My gripe about
Randy Bush having the potential to do huge damage would not exist if
there were enough people on the list who understand what they're doing
to offer counter-arguments.

operators were shouted down by purists and religion over basic
real-world operational concerns. It seems to be a relatively routine
practice and does not lead to operators wanting to come back to
an environment where they feel unwelcome.

I have found my input on the LISP list completely ignored because, as
you suggest, my concerns are real-world and don't have any impact on
someone's pet project. LISP as it stands today can never work on the
Internet, and regardless of the fine reputations of the people at
Cisco and other organizations who are working on it, they are either
furthering it only because they would rather work on a pet project
than something useful to customers, or because they truly cannot
understand its deep, insurmountable design flaws at Internet-scale.
You would generally hope that someone saying, "LISP can't work at
Internet-scale because anyone will be able to trivially DoS any LISP
ITR ('router' for simplicity), but here is a way you can improve it,"
well, that remark, input, and person should be taken quite seriously,
their input examined, and other assumptions about the way LISP is
supposed to work ought to be questioned. None of this has happened.
LISP is a pet project to get some people their Ph.D.s and keep some
old guard vendor folks from jumping ship to another company. It is a
shame that the IETF is manipulated to legitimize that kind of thing.

Then again, I could be wrong. Randy Bush could be a genius and LISP
could revolutionize mobility.

The IETF is run by volunteers. They volunteer because they find
designing protocols to be fun. For the most part, operators are not
entertained by designing network protocols. So, for the most part they
don't participate.

Randy Bush, "Editorial zone: Into the Future with the Internet Vendor
Task Force: a very curmudgeonly view, or testing spaghetti," ACM SIGCOMM
Computer Communication Review Volume 35 Issue 5, October 2005.
http://archive.psg.com/051000.ccr-ivtf.html

Consider, for example, RFC 3484. That's the one that determines how an
IPv6 capable host selects which of a group of candidate IPv4 and IPv6
addresses for a particular host name gets priority. How is a server's
address priority NOT an issue that should be managed at an operations
level by individual server administrators? Yet the working group which
produced it came up with a static prioritization that is the root
cause of a significant portion of the IPv6 deployment headaches we
face.

3484 specifies a static default. By definition, defaults in absence of
operator configuration kind of have to be static. Having a reasonable
and expected set of defaults documented in an RFC provides a known
quantity for what operators can/should expect from hosts they have
not configured. I see nothing wrong with RFC 3484 other than I would
agree that the choices made were suboptimal. Mostly that was based
on optimism and a lack of experience available at the time of writing.

Hi Owen,

A more optimal answer would have been to make AAAA records more like
MX or SRV records -- with explicit priorities the clients are
encouraged to follow. I wasn't there but I'd be willing to bet there
was a lonely voice in the room saying, hey, this should be controlled
by the sysadmin. A lonely voice that got shouted down.
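
Something like the following - purely hypothetical, since no such priority field exists on A/AAAA records today - where the ordering comes from the server's admin rather than from a static table baked into every client:

    # (admin-assigned priority, address) -- lower value wins, MX/SRV style.
    # Addresses are documentation-space examples.
    published = [
        (20, "2001:db8::80"),   # admin keeps v6 as second choice until it's solid
        (10, "192.0.2.80"),     # admin prefers v4 for this service today
    ]

    for _prio, addr in sorted(published):
        print("try", addr)      # 192.0.2.80 first, 2001:db8::80 as fallback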

Today's RFC candidates are required to call out IANA considerations
and security considerations in special sections. They do so because
each of these areas has landmines that the majority of working groups
are ill equipped to consider on their own.

There should be an operations callout as well -- a section where
proposed operations defaults (as well as statics for which a solid
case can be made for an operations tunable) are extracted from the
thick of it and offered for operator scrutiny prior to publication of
the RFC.

I think this would be a good idea, actually. It would probably be more
effective to propose it to IETF than to NANOG, however.

If the complaint is that the IETF doesn't adequately listen to the
operations folk, then I think it makes sense to consult the operations
folks early and often on potential fixes. If folks here think it would
help, -that- is when I'll take it to the IETF.

Regards,
Bill Herrin

A more optimal answer would have been to make AAAA records more like
MX or SRV records -- with explicit priorities the clients are
encouraged to follow. I wasn't there but I'd be willing to bet there
was a lonely voice in the room saying, hey, this should be controlled
by the sysadmin. A lonely voice that got shouted down.

Give me a break... multiple implementations have chosen to tweak the algorithm independently and at various times.

It's just an rfc, not the gospel according to richard draves.

"
   Acknowledgments

   The author would like to acknowledge the contributions of the IPng
   Working Group, particularly Marc Blanchet, Brian Carpenter, Matt
   Crawford, Alain Durand, Steve Deering, Robert Elz, Jun-ichiro itojun
   Hagino, Tony Hain, M.T. Hollinger, JINMEI Tatuya, Thomas Narten, Erik
   Nordmark, Ken Powell, Markku Savela, Pekka Savola, Hesham Soliman,
   Dave Thaler, Mauro Tortonesi, Ole Troan, and Stig Venaas. In
   addition, the anonymous IESG reviewers had many great comments and
   suggestions for clarification.
"

"Can we have IPv6 transit?"
"Yes, please turn up a session to.."

That was asking Cogent for IPv6 dual-stack on our existing IPv4
transit.

I'm not saying it's any good, but it certainly didn't cost extra.

Tom

Several people mentioned this to Jeff on IRC a short time ago, so it's not
clear why he chose to suggest that ipv6 users in Europe were being fleeced
by Cogent for a set-up fee. Perhaps it has happened, but it appears not to
be their policy.

Of course, if you actually want a full ipv6 table, you will need to go
elsewhere.

Nick

I continue to hear different. In my first-hand experience just about
three weeks ago, I was told by Cogent that I need to execute a new
contract to get IPv6 added to an existing IPv4 circuit (U.S.
customer.) This turned a simple pilot project with only a few I.T.
folks involved into, well, I'm still waiting on this new contract to
be executed. I'm not surprised.

I started participating in the IETF 1-2 years ago. Coming from a Fidonet background, the threshold of entry felt very low: as long as you make any kind of sense, people will discuss with you there, and it doesn't matter who you are. You don't even have to go to the meetings (I've only been to a single one).

I encourage everybody to participate, at least to subscribe to the WG mailing lists and keep a look out for the draft announcements and give feedback to those.

If we in the ISP business don't do this, the show will be run by the vendors and academics (as is the case right now). They're saying "come to us", you're saying "come to us", and as long as both do this the rate of communication is going to be limited. What is needed is more people with operational backgrounds. For instance, I pitched the idea that ended up as a draft, dunno what will come of it:

<http://www.ietf.org/mail-archive/web/isis-wg/current/msg02556.html>

This has a purely operational background and the puritans didn't like it (they didn't even understand why one would want to do it like that), but after a while I felt I received some traction, and it might actually end up as a protocol enhancement that will help some ISPs in their daily work. Even something like your IGP isn't "done", and can be enhanced even if it takes time.

In fairness, we have a small commit.

If you're talking multi-gigabit+, then perhaps they could be a little
more concerned about the amount of IPv6 traffic that you might start
pushing, leading to delay tactics and/or a required contract change to
protect themselves.

(Not that it's likely much to be concerned about. But then, I don't know
who your customer is. ;))

Or the more likely reality that one hand doesn't talk to the other and
everyone's getting varying answers/actions from Cogent, depending on
whom they speak with.

Tom