Redeploying most of 127/8 as public unicast

At some level I think there's a good chance that they'd just work. I wrote a significant amount of the Lantronix terminal server code, and it never occurred to me that I should enforce rules about 127.0.0.0 or Class D or Class E. It really didn't have much bearing on a terminal server or the other host-like things we built. If you typed it in, it would work; if you listened on a port, the stack didn't care what the address was. I would imagine that lots of stacks from back in the day were just like that.

Mike

Hi Eliot,

I wasn't in the working group so I'll take your word for it. Something
rather different happened later when folks on NANOG discovered that
the IETF had considered and abandoned the idea. Opinion coalesced into
two core groups:

Group 1: Shut up and use IPv6. We don't want the IETF or vendors
distracted from that effort with improvements to IPv4. Mumble mumble
titanic deck chairs harrumph.

Group 2: Why is the IETF being so myopic? We're likely to need more
IPv4 addresses, 240/4 is untouched, and this sort of change has a long
lead time. Mumble mumble heads up tailpipes harrumph.

More than a decade later, the "titanic" is shockingly still afloat,
and it would be strikingly useful if there were a mostly working /4 of
IP addresses we could argue about how best to employ.

Regards,
Bill Herrin

Hi Owen,

This has been hashed and rehashed on this group about a gajillion
times but for the sake of those who are new:

Firewalls are programmed by people. People make mistakes. Lots of
mistakes. 1:1 stateful firewalls and 1:many stateful firewalls (NAT)
behave differently in the face of those mistakes. When 1:1 stateful
firewalls are mistakenly told to pass all traffic they faithfully do
so exposing unhardened hosts directly to the Internet. When 1:many
stateful firewalls (NAT) are mistakenly told to pass all traffic they
can't do so. They don't have enough information to decide which
interior host to send a packet to so they simply break.

One fails as a security perimeter breach. The other fails as a system
down. Pick which security posture you prefer, but they're very much not
the same: a knocked-over fence versus a lost padlock key, well into
the zombie apocalypse.
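The asymmetry being described can be sketched in a few lines of toy code: with a mistaken allow-all rule, a 1:1 stateful firewall still has a well-defined interior destination and delivers the packet, while a 1:many NAT has no translation entry for unsolicited traffic and can only drop it. A minimal sketch (all names hypothetical; Python used purely for illustration):

```python
# Toy model of the two failure modes: not a real firewall, just the
# decision each device makes for an unsolicited inbound packet.

def one_to_one_firewall(policy_allows: bool, inside_host: str):
    """1:1 stateful firewall: a mistaken allow-all rule exposes the host."""
    if policy_allows:
        return inside_host       # delivered: security perimeter breach
    return None                  # dropped

def one_to_many_nat(policy_allows: bool, nat_table: dict, flow: tuple):
    """1:many NAT: even with allow-all, an unsolicited packet matches no
    translation entry, so there is no interior host to deliver it to."""
    if not policy_allows:
        return None
    return nat_table.get(flow)   # None for unsolicited traffic: it just breaks

# Both devices misconfigured to pass all traffic, unsolicited inbound flow:
assert one_to_one_firewall(True, "192.0.2.10") == "192.0.2.10"   # breach
assert one_to_many_nat(True, {}, ("198.51.100.7", 443)) is None  # system down
```

Outbound-initiated traffic is what populates `nat_table`, which is why the same misconfigured box keeps working fine for interior clients.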

Regards,
Bill Herrin

(snips for brevity and reply relevancy)

This is a common fallacy… The real concept here isn’t “universal reachability”, but universal transparent addressing. Policy then decides about reachability.

Think stateful firewall without NAT.

No, NAT is not a firewall. The stateful inspection that NAT depends on is a firewall.

You can do all of the exact same things without needing NAT. You just get additional capabilities without NAT that you didn’t have with NAT due to the limitations of shared addressing.

You can do stateful inspection and reject unwanted packets without having to mutilate the packet header in the process.

Owen

You are completely correct in theory.

However, in IPv4 there is a generally true assumption that all these sorts of devices will be deployed in a somewhat secure fashion, and not by virtue of any particular effort on the part of their manufacturers: they are rarely deployed without a NAT in front of them, simply due to address scarcity, so NAT becomes a feature of network functionality rather than of network security.

This is a fallacy which has repeatedly been proven false by numerous security researchers. It’s time to educate beyond this silly assertion and recognize that NAT is an obfuscation tool, not a security tool.

They are at least as secure behind a non-NAT stateful firewall as they are behind NAT.

The hope that there will be an equivalently pervasive stateful deny-any layer in front of these classes of devices, or that they will be deployed/developed with sufficient or equivalent security without that layer, is not nearly as reassuring.

Virtually all home gateways today ship with a stateful default deny-all policy for IPv6, so it’s not a hope, it’s current reality.

There is hope that manufacturers will eventually start improving security as well, but I agree that depending on that at this stage is rather perilous.

OTOH, it’s also perilous to believe that NAT provides adequate protection for their failures in this arena.

Worse, with the assumption of NAT-induced security in place, it's all too logical to predict and expect that these devices are woefully under-equipped to protect themselves in any way without it.

NAT does not induce security. It induces headaches. It induces difficulties in troubleshooting. It induces difficulties in correlating logs and audit trails. It induces all manner of things that make it harder to address security incidents. It does NOT induce security.

Further, 100% of the alleged or perceived security gains attributed to NAT come from stateful inspection, not from NAT itself. As such, no, there's no need for NAT to achieve equivalent security, even if you just assume a stateful default deny-all in the gateway vs. assuming NAT.

I agree that the idea of producing a home gateway without a stateful default deny-any inbound policy should be (and basically is, frankly) as unrealistic as producing an IPv4 home gateway without NAT. But once that's the case (and really, from what I have seen of current market entrants, it is), there's no meaningful difference in the security level between the two options. The non-NAT option does provide greater choice and freedom in controlling your security and permitting things in, but it is not significantly more dangerous than current port-forwarding setups with NAT.

The best-case scenario is that practically all SOHO v6 gateways' default configuration is stateful deny-any. In which case, all you can hope to get from theoretical E2E is less packet mangling.

That’s already the case from my observations. OpenWRT, Linksys, Netgear, D-Link, Belkin, and several others all default this way already.

(Packet mangling is a good test case for protocols that needlessly commit layering violations by embedding lower-layer addressing directly or implicitly into their behavior, so NAT has actually been beneficial in this manner.)

If you want to put packet mangling capability into test equipment in SQA environments, by all means, feel free, but it has no useful place in the modern internet once we move forward from restricted addressing.

The security conscious are better off deploying these devices with IPv6 turned off. Much less chance of them accidentally becoming individually responsible for their own protection due to any network changes that may not take their existence or particularly sensitive and vulnerable state into consideration.

We can agree to disagree here. The security conscious are better off deploying these products IPv6-only where they can get proper audit and log correlation with transparent addressing and making sure that the upstream router(s) have adequate protection configured. That’s at least as good as having a NAT upstream, given that a NAPT port forward can be just as dangerous to these devices as a transparent permit.

Further, security track records as they are suggest that security will never become the prime focus or even more than an afterthought for the producers of these classes of devices.

I can’t effectively argue against this, but my hope is that we can eventually arrive at a place where manufacturers face real liability for damages inflicted by the insecurity of these products. Kind of an “unsafe at any bandwidth” equivalent of the “unsafe at any speed” campaign that improved automotive safety and got seatbelt mandates and the like. Much of that happened through product liability law.

We can all wish that were not the case but it would be naive to assume otherwise.

It’s naive to assume it’s otherwise today. I do have hope that real progress will be made in liability laws helping to remedy the situation in the future.

Nothing says “fix your broken security” like a multi-million dollar jury verdict against your unlucky competitor.

Nonetheless, even with that remaining the case, I still believe that stateful inspection without header mutilation is better security than a NAPT.

Owen

Owen DeLong wrote:

Owen DeLong wrote:

I guess I don’t see the need/benefit for a dedicated loopback prefix in excess of one address. I’m not necessarily inherently opposed to designating one (which would be all that is required for IPv6 to have one; no software updates would be necessary), but I’d need some additional convincing of its utility to support such a notion.

Since the loopback prefix in IPv4 is present and usable on all systems, IPv6 parity would require the same, so merely designating a prefix would only be the beginning.

There may not be a need. But there is clearly some benefit.

Which is? You still haven’t answered that question.

You have right below.

And if there is indeed no benefit, then there is no reason not to repurpose 127/8, considering that you may use many other ranges in IPv4 for loopback, and that you can just use IPv6 for loopback, and there you go, you have a whole /10.

One doesn’t need a reason for inaction… One needs a reason to act. There is (so far) no compelling reason to repurpose 127/8 as far as I can see.

It’s not like it will overnight cause system admin headaches. And they should be running their loopback apps on IPv6 anyway.

You are arguing that just because we can do a thing, we should do a thing. I am arguing that unless there’s a compelling reason to change the standard, we should leave it as is until it dies a natural death of old age.
(or alternatively until we finally disconnect the life support keeping it artificially alive which is a more accurate metaphor for the current state of IPv4).

Well, technically, fe80::/10 is also present and predictable on every loopback interface. It does come with the additional baggage of having to specify a scope id when referencing it, but that’s pretty minor.

Nope… It’s every bit as deterministic as 127.0.0.0/8.

If you send packets to fe80::*%lo0 on a linux box, they’ll get there. If you try it on something other than linux, it probably doesn’t work.
That’s also true of 127.*.*.*.

So fe80::/10 is the loopback prefix for IPv6

It’s link local. It’s present on loopback. fe80::/10%lo0 (on a linux box) is a loopback prefix for IPv6 which is universally deployed.
The scope id becomes important in this context, but other than that, it’s identical to the semantics of IPv4.
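The scope-id point is visible directly in the sockets API: an IPv6 sockaddr carries a fourth element for the scope, which is zero for ::1 but must be supplied for a link-local address. A small Python illustration (the `%1` numeric scope id is just an example value; on most Linux boxes interface index 1 is lo):

```python
import socket

# getaddrinfo parses a "%scope" suffix into the fourth element of the
# IPv6 sockaddr tuple: (host, port, flowinfo, scope_id).
loopback = socket.getaddrinfo("::1", 80, socket.AF_INET6, socket.SOCK_STREAM)
print(loopback[0][4])    # scope_id (last element) is 0 for ::1

linklocal = socket.getaddrinfo("fe80::1%1", 80, socket.AF_INET6,
                               socket.SOCK_STREAM)
print(linklocal[0][4])   # scope_id (last element) is 1
```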

Owen

Owen DeLong wrote:

Agreed. But I have every right to express my desires and displeasures with widespread plans to encourage what I perceive as misuse and that’s exactly what’s happening here.

My right to attempt to discourage it by opposing proposed standards is exactly equal to your right to encourage it by promoting them.

Since your discouragement may take the form of preventing some amount of improvement or amelioration for IPv4 users, there is a human cost associated with that.

Since wasted effort may prevent other things I see as advantageous to the network and humanity in general from happening, there is a human cost to not preventing it.

Absent the equivalent clear correlation of harm to whatever else you believe those resources are engaged in, I would not say those two behaviors are of equal consequence.

You are entitled to your opinion. I do not happen to share it.

I’m really saying what I said. That IMHO, there’s no benefit to the internet overall if this proposed change is accepted and/or implemented and I see no benefit to standardizing it. As such, I remain opposed to doing so.

There is a clear difference of opinion on this: there stands a very good chance that prompt implementation now may prove to provide significant benefit in the future should IPv6 continue to lag, which you cannot guarantee it won’t.

There stands some chance. It’s not clear how good that chance is. Obviously you think it is a higher probability than I do. You also assume that it would be widely implemented faster than deployment of IPv6 which is also an assertion of which I remain unconvinced.

Further, there is historical precedent that discouraging re-purposing IPv4 addressing is the wrong decision.

Nope… There is historical precedent that you don’t like it. IMHO, we’ve done far too many things and put far too much effort into avoiding rather than completing IPv6 transition. As such, I think that the historical precedent argues that adding to those errors will not accelerate IPv6 transition and is, therefore, wasted effort at best and potentially counterproductive.

Whether or not the effort that would be wasted implementing it would go to IPv6 or to some other more useful pursuit is not a concern I factor into my opinion in this case.

And I appreciate that, as I consider that reasoning to be specious at best, morally dubious at worst.

At least we agree on something.

Again, have not made any such assumption here, either. It’s not relevant. The only thing I consider relevant is that any resources expended on a complete waste of time could be better
expended elsewhere.

I don’t consider my opinion as to what people’s effort should be spent on relevant to whether a particular proposal has merit of its own.

IMHO, the proposal has no merit and is therefore a waste of time. Clearly you disagree. That’s fine.

Which GUA and LL are not, no matter how readily available and easily assignable and otherwise equivalent they are in every way but the one. They are not loopback designated by standard (and system implementation).

And this matters why?

Owen

So repurpose 127/8, and if users and developers agree with you, it will become available right about the time IPv6 should have finally managed to obsolete IPv4: no harm, no foul. And if it fails at that again, at least we will have 127/8 and cohorts.

Meh, feel free to do whatever you want. In terms of any IETF WG adoption call or consensus call, I’ll object as I consider it useless at best and harmful at worst.

Nothing you have said provides any indication that there is sufficient merit to be worth the time I have wasted on this thread, let alone further effort.

Owen

# date; lscpu

Sun Nov 21 20:14:44 EST 2021
Architecture: i686
CPU op-mode(s): 32-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 8
Model name: Pentium III (Coppermine)
Stepping: 3
CPU MHz: 933.075
BogoMIPS: 1866.25

(Ok it runs a somewhat newer opensuse.)

No, it is not. Slow start and other RIR policies around scarcity and fairness of
distribution of the last crumbs are the primary contributor, with traffic engineering
a somewhat distant second. Mergers are actually somewhere around 10th on the list last
time I looked.

Owen

J. Hellenthal wrote:

FreeBSD operators have been using this space for quite a long time for
many NAT'ing reasons including firewalls and other services behind
them for jail routing and such.

FreeBSD jails on non-routable IP addresses – Dan Langille's Other Diary

That's just one example that I've seen repeated in multiple other
ways. One of which was a jail operator with about 250 addresses out of
that range enabling his jail-routed services.

Thank you for letting us know! We would be happy to improve
the draft so that it has less impact on such pre-existing users.

When we surveyed publicly visible applications based on Linux,
we only found them configured to use the lowest /16. It's true
that any system operator could configure their system in any part
of 127/8, but we focused on the default configurations of popular
software (such as systemd and Kubernetes).

Do you know of any FreeBSD software that comes with a default
configuration in 127/8 but not in 127/16? (It looks like the web page
you referenced is about specific manual configuration, not about
the default behavior of supplied software.)

I do not know the details of FreeBSD jail configuration, nor the precise
behavior of its loopback interface. From my limited understanding, it
looks like the jail configured in the web page you referenced, with
address 127.1.0.128/32 on lo1, would provide loopback service regardless
of whether the default address on lo0 was 127.0.0.1/8 or 127.0.0.1/16.
That's because lo1 is a separate interface from lo0, and the "lo"
interfaces always loop back any packets sent through them, no matter
what addresses are configured on them. (Indeed the example
configures it with a 10.80.0.128 address as well, which would not
normally be considered a loopback address.)

So, if I am right, then even if our current Internet-Draft became a
standard and FreeBSD was modified to implement it, the recommended
commands would continue to work. The only impact would be that such a
FreeBSD machine would be unable to reach a potential global Internet
service hosted out on the Internet at address 127.1.0.128 (because a
local interface has been configured at that address, shadowing the
globally reachable address). I anticipate that no such global services
would be created before 2026 at the very earliest (other than for
reachability testing), and likely much later in the 2020's or early
2030's.

If it turns out that FreeBSD usage of 127.1/16 is widespread, and the
above analysis is incorrect or unacceptable to the FreeBSD community, we
would be happy to modify the draft to retain default loopback behavior
on 127.0.0.1/17 rather than 127.0.0.1/16. That would include both
127.0.x.y and 127.1.x.y as default loopback addresses. This would
completely resolve the issue presented on the "FreeBSD jails on
non-routable IP addresses" web page, while still recovering more than 16
million addresses for global use.

The worst case might be if FreeBSD sysadmins have become accustomed to
picking "random" addresses manually from all over the 127/8 space. If
so, it is not unreasonable to expect that when one manually configures a
node to use "non-routable" addresses, some of them might become
routable with the passage of time. When upgrading any machine
to a new OS release, various small things typically need adjusting to
fit into the revised OS. Renumbering the in-system use of up to a few
hundred non-routable addresses like 127.44.22.66 into addresses like
127.0.22.66 (in a smaller non-routable range that would still
contain 65,000 or 130,000 addresses) might be one of those things that
could be easily adjusted during such an upgrade.
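The renumbering step described above is mechanical enough to script. A sketch using Python's ipaddress module, assuming (per the draft discussion) that 127.0.0.0/16 remains loopback and that the low 16 bits of each old address are kept:

```python
import ipaddress

LOOPBACK_ALL = ipaddress.ip_network("127.0.0.0/8")
RETAINED = ipaddress.ip_network("127.0.0.0/16")   # draft's default loopback range

def renumber(addr: str) -> str:
    """Map an old 127/8 address into the retained /16, preserving the
    low 16 bits (so 127.44.22.66 -> 127.0.22.66)."""
    ip = ipaddress.ip_address(addr)
    if ip not in LOOPBACK_ALL:
        raise ValueError(f"{addr} is not in 127/8")
    if ip in RETAINED:
        return addr                          # already in the retained range
    low16 = int(ip) & 0xFFFF                 # keep the host-chosen low bits
    return str(ipaddress.ip_address(int(RETAINED.network_address) | low16))

print(renumber("127.44.22.66"))   # 127.0.22.66
```

Collisions are possible if two old addresses share their low 16 bits, so a real migration script would need to check for duplicates.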

  John

I was not in this part of IETF in those days, so I did not participate
in those discussions. But I later read them on the archived mailing
list, and reached out by email to Dave Thaler for more details about his
concerns. He responded with the same general issues (and a request that
we and everyone else spend more time on IPv6). I asked in a subsequent
message for any details he has about such products that he thought would
fail. He was unable or unwilling to point out even a single operating
system, Internet node type, or firewall product that would fail unsafely
if it saw packets from the 240/4 range.

As documented in our Internet-Draft, all such products known to us
either accept those packets as unicast traffic, or reject such packets
and do not let them through. None crashes, reboots, fills logfiles with
endless messages, falls on the floor, or otherwise fails. No known
firewall is letting 240/4 packets through on the theory that it's
perfectly safe because every end-system will discard them.

As far as I can tell, what Eliot says really stopped this proposal in
2008 was Dave's hand-wave of *potential* concern, not an actual
documented problem with the proposal.

If anyone knows an *actual* documented problem with 240/4 packets,
please tell us!

(And as I pointed out subsequently to Dave, if any nodes currently in
service would *actually* crash if they received a 240/4 packet, that's a
critical denial of service issue. For reasons completely independent
from our proposal, those machines should be rapidly identified and
patched, rather than remaining vulnerable from 2008 thru 2021 and
beyond. It would be trivial for an attacker to send such
packets-of-death from any Linux, Solaris, Android, MacOS, or iOS machine
that they've broken into on the local LAN. And even Windows machines
may have ways to send raw Ethernet packets that could be crafted by
an attacker to appear to be deadly IPv4 240/4 packets.)

  John

Hi John,

I was not in this part of IETF in those days, so I did not participate
in those discussions. But I later read them on the archived mailing
list, and reached out by email to Dave Thaler for more details about his
concerns. He responded with the same general issues (and a request that
we and everyone else spend more time on IPv6). I asked in a subsequent
message for any details he has about such products that he thought would
fail. He was unable or unwilling to point out even a single operating
system, Internet node type, or firewall product that would fail unsafely
if it saw packets from the 240/4 range.

To be fair, you were asking him to recall a conversation that did take place quite some time earlier.

As documented in our Internet-Draft, all such products known to us
either accept those packets as unicast traffic, or reject such packets
and do not let them through. None crashes, reboots, fills logfiles with
endless messages, falls on the floor, or otherwise fails. No known
firewall is letting 240/4 packets through on the theory that it's
perfectly safe because every end-system will discard them.

As far as I can tell, what Eliot says really stopped this proposal in
2008 was Dave's hand-wave of *potential* concern, not an actual
documented problem with the proposal.

I wouldn't go so far as to call it a hand wave. You have found devices that drop packets. That's enough to note that this block of space would not be substitutable for other unicast address space. And quite frankly, unless you're testing every device ever made, you simply can't know how this stuff will work in the wild. That's ok, though, so long as the use is limited to environments that can cope with it.

If anyone knows an *actual* documented problem with 240/4 packets,
please tell us!

(And as I pointed out subsequently to Dave, if any nodes currently in
service would *actually* crash if they received a 240/4 packet, that's a
critical denial of service issue. For reasons completely independent
from our proposal, those machines should be rapidly identified and
patched, rather than remaining vulnerable from 2008 thru 2021 and
beyond. It would be trivial for an attacker to send such
packets-of-death from any Linux, Solaris, Android, MacOS, or iOS machine
that they've broken into on the local LAN. And even Windows machines
may have ways to send raw Ethernet packets that could be crafted by
an attacker to appear to be deadly IPv4 240/4 packets.)

Right, and indeed there are devices out there that have been known to stop functioning properly under certain forms of attack, regardless of the source address.

Eliot

Mans Nilsson wrote:

> Not everyone are Apple, "hp"[0] or MIT, where initial
> allocation still is mostly sufficient.

The number of routing table entries is growing exponentially,
not because of increase of the number of ISPs, but because of
multihoming.

As such, if entities requiring IPv4 multihoming will also
require IPv6 multihoming, the number of routing table
entries will be the same.

The proper solution is to have end to end multihoming:

  https://tools.ietf.org/id/draft-ohta-e2e-multihoming-02.txt

Your reasoning is correct, but the size of the math matters more.

Indeed, with the current operational practice, global IPv4
routing table size is bounded below 16M. OTOH, that for
IPv6 is unbounded.

            Masataka Ohta

If it turns out that FreeBSD usage of 127.1/16 is widespread, and the
above analysis is incorrect or unacceptable to the FreeBSD community, we
would be happy to modify the draft to retain default loopback behavior
on 127.0.0.1/17 rather than 127.0.0.1/16. That would include both
127.0.x.y and 127.1.x.y as default loopback addresses.

treize:~ mansaxel$ sipcalc 127.0.0.1/17 | grep "Network range"
Network range - 127.0.0.0 - 127.0.127.255
treize:~ mansaxel$ sipcalc 127.0.0.1/15 | grep "Network range"
Network range - 127.0.0.0 - 127.1.255.255
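The prefix arithmetic shown by sipcalc can be double-checked with Python's ipaddress module: a /17 stops at 127.0.127.255, so covering both 127.0.x.y and 127.1.x.y actually takes a /15, not a /17:

```python
import ipaddress

slash17 = ipaddress.ip_network("127.0.0.0/17")
slash15 = ipaddress.ip_network("127.0.0.0/15")
jail_addr = ipaddress.ip_address("127.1.0.128")  # the jail address from the thread

print(jail_addr in slash17)        # False: /17 ends at 127.0.127.255
print(jail_addr in slash15)        # True: /15 reaches 127.1.255.255
print(slash17.broadcast_address)   # 127.0.127.255
```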

I agree, generally speaking. IMO, it’s unfortunate that these addresses are being held in “limbo” while these debates go on. I’m not complaining about the debates per se, but the longer we go without resolution, the longer these addresses can’t be put to any (documented) use.

There’s background information available that might be helpful to those who haven’t yet seen it:

https://datatracker.ietf.org/doc/slides-70-intarea-4/ (links to the draft-fuller-240space slides from IETF 70)
https://datatracker.ietf.org/doc/minutes-70-intarea/ (IETF 70 INTAREA meeting minutes)
https://mailman.nanog.org/pipermail/nanog/2007-October/thread.html (NANOG October 2007 mail archives, containing links to the “240/4” thread)
https://puck.nether.net/pipermail/240-e/ (the 240-e archives)
https://mailarchive.ietf.org/arch/browse/int-area/ (IETF INTAREA archives, containing comments on the 240space draft and related issues, roughly in the same time frame as in the previous links)

—gregbo

There’s at least one: the Marvell Prestera CX (it’s either Prestera CX or DX, I forget which). It is in the Juniper EX4500, among others.
It has a hardware-based bogon filter, applied when doing L3 routing, that cannot be disabled.

cheers,

lincoln.

Mans Nilsson wrote:

> Not everyone are Apple, "hp"[0] or MIT, where initial
> allocation still is mostly sufficient.

The number of routing table entries is growing exponentially,
not because of increase of the number of ISPs, but because of
multihoming.

Again, wrong. The number is growing exponentially primarily because of the
fragmentation that comes from recycling addresses.

As such, if entities requiring IPv4 multihoming will also
require IPv6 multihoming, the number of routing table
entries will be the same.

There are actually ways to do IPv6 multihoming that don’t require using the
same prefix with both providers. Yes, there are tradeoffs, and these mechanisms
aren’t even practical in IPv4, but they have been sufficiently widely implemented
in IPv6 to say that they are viable in some cases.

Nonetheless, multihoming isn’t creating 8-16 prefixes per ASN. Fragmentation
is.

Your reasoning is correct, but the size of the math matters more.

Indeed, with the current operational practice, global IPv4
routing table size is bounded below 16M. OTOH, that for
IPv6 is unbounded.

Only by virtue of the lack of addresses available in IPv4. The other tradeoffs
associated with that limitation are rather unpalatable at best.

Owen

Owen DeLong wrote:

The number of routing table entries is growing exponentially, not
because of increase of the number of ISPs, but because of multihoming.

Again, wrong. The number is growing exponentially primarily because
of the fragmentation that comes from recycling addresses.

Such fragmentation only occurs when address ranges are rented to
others for multihoming but later recycled for internal use,
which means it is caused by multihoming.

Anyway, such cases are quite unlikely and negligible.

There are actually ways to do IPv6 multihoming that don’t require
using the same prefix with both providers.

That's what I proposed 20 years ago both with IPv4 and IPv6 in:

     https://tools.ietf.org/id/draft-ohta-e2e-multihoming-02.txt

Yes, there are tradeoffs,
but these mechanisms aren't even practical in IPv4,

Wrong. As is specified by rfc2821:

    When the lookup succeeds, the mapping can result in a list of
    alternative delivery addresses rather than a single address, because
    of multiple MX records, multihoming, or both. To provide reliable
    mail transmission, the SMTP client MUST be able to try (and retry)
    each of the relevant addresses in this list in order, until a
    delivery attempt succeeds. However, there MAY also be a configurable

the idea of end-to-end multihoming is widely deployed by SMTP at
the application layer, though wider deployment requires TCP
modification, as I wrote in my draft.

A similar specification is also found in section 7.2 of RFC 1035.
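The RFC 2821 behavior quoted above (try each candidate address in order until a delivery attempt succeeds) is straightforward to express at the application layer. A minimal sketch, with hypothetical names and no MX lookup:

```python
import socket

def connect_first_working(hosts, port, timeout=2.0):
    """Try each address in order, RFC 2821-style, returning the first
    socket that connects; re-raise the last error if all attempts fail."""
    last_err = None
    for host in hosts:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:
            last_err = err       # dead path: fall through to the next address
    raise last_err if last_err else OSError("no addresses supplied")
```

This is essentially the same retry discipline that Happy Eyeballs (RFC 8305) later refined for racing IPv6 and IPv4 connection attempts.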

but have been
sufficiently widely implemented in IPv6 to say that they are viable
in some cases.

You are just wrong. The IP layer has very little to do with it.

            Masataka Ohta

PS

LISP is garbage.

Greg

Thanks for posting the links. Our old draft seems to have largely had its intended effect without ever having been issued as an RFC (moohaha). Most implementations don’t hardcode 240/4 into a bogon filter. We had at the time left open what next steps should be.

So what’s the road to actually being able to use this space? It depends. If you want to use it for your interior, and return routability beyond your AS and external in-addr service is NOT important, all that stops you today is whatever set of issues you find in your own back yard.

If you want to allocate space to customers or need in-addr/return routability, obviously that’s More Work that should not be underestimated. 240/4 appears in a number of bogon filters, not all of which are controlled by people tracking operator lists or the IETF.

And that complicates matters in terms of whether the space should be moved to unallocated status or treated like 10/8. At least the latter seems to match the testing that has thus far been performed.

Eliot

Mans Nilsson wrote:

> Not everyone are Apple, "hp"[0] or MIT, where initial
> allocation still is mostly sufficient.

The number of routing table entries is growing exponentially,
not because of increase of the number of ISPs, but because of
multihoming.

As such, if entities requiring IPv4 multihoming will also
require IPv6 multihoming, the number of routing table
entries will be the same.

The proper solution is to have end to end multihoming:

        https://tools.ietf.org/id/draft-ohta-e2e-multihoming-02.txt

I'd never read that. We made OpenWrt in particular use "source
specific routing" for IPv6 by default many years ago, but I don't
know to what extent that facility is used.

ip -6 route add default from a:b:c:d::/64 dev A
ip -6 route add default from a:b:d:d::/64 dev B

1. Move it from "reserved" to "unallocated unicast" (IETF action)
2. Wait 10 years
3. Now that nearly all equipment that didn't treat it as
yet-to-be-allocated unicast has cycled out of use, argue about what to
allocate the addresses to for best effect.

Similar plan for 0/8, 255/8, and 127/8 (excluding 127.0/16):

1. Move from their existing status to "deprecated former use;
unallocated unicast."
2. Wait 10 years.
3. Now that most equipment that didn't treat it as yet-to-be-allocated
unicast has cycled out of use, argue about what to allocate the
addresses to.

Bottom line though is that the IETF has to act before anyone else
reasonably can.

Regards,
Bill Herrin