RFC1918 is a wonderful document. It probably added 10-15 years
to the lifespan of the IPv4 address space, made IP addressing
much simpler for internal applications, and it's prevented
a large number of problems like people randomly making up addresses
for boxes they "know" will "never" need to connect to the outside.
But it's not perfect, and it makes a couple of assumptions that
aren't correct, which lead to the kind of edge cases driving this discussion.
1 - RFC1918 refers to "hosts" having IP addresses. They don't.
Hosts have interfaces, and interfaces have IP addresses.
In some cases, hosts have multiple interfaces with
different communication needs - firewalls and routers
being prominent examples. (You could argue that the definition of
"host" excludes routers, but one of the problems here is that
routers not only have routing parts, they also have host parts,
e.g. the configuration, control, and monitoring functions
that may deny public access for security and anti-DoS reasons.)
And some software isn't all that bright about picking which
interface address to use for its responses.
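That interface-selection behavior is easy to see from user space. Here's an illustrative sketch (Python, function name mine) using the old connected-UDP-socket trick: connect() on a UDP socket sends no packets, it just makes the kernel run a route lookup and bind a source address.

```python
import socket

def source_address_for(dest: str, port: int = 9) -> str:
    """Ask the kernel which local interface address it would use
    to reach dest.  No traffic is generated; connect() on a UDP
    socket only performs the route lookup and source binding."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((dest, port))
        return s.getsockname()[0]
    finally:
        s.close()
```

On a multi-homed box (a firewall, say), the answer differs by destination: the address you get back for an internal peer and for an external one may live on different interfaces, and software that caches one answer and reuses it everywhere is exactly the "not all that bright" case above.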
2 - RFC1918 assumes that "communication" is bidirectional, and that
communication needs are bidirectional. They're not always,
particularly at the network layer as opposed to the transport layer.
Sometimes you need to send but don't want to receive,
and sometimes you want to accept packets from machines
that you don't want to send packets to.
Routers often need to send ICMP packets about "___ failed"
to destinations that they don't need to accept packets from,
such as traceroute and PMTU discovery responses -
the source doesn't always need to be a routable address,
though you could use a registered address and null-route
any incoming packets to it if you wanted to help traceroute a bit.
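As a concrete example of that last trick, an IOS-style null route could look like the fragment below. (Hypothetical: 198.51.100.0/24 is a documentation placeholder standing in for whatever registered block the ISP sets aside for ICMP/traceroute sources.)

```
! Router interfaces that source ICMP are numbered out of
! 198.51.100.0/24; anything sent back to that block is discarded.
ip route 198.51.100.0 255.255.255.0 Null0
```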
As a customer of an ISP, it's nice to be able to look at a traceroute
and ask the help desk people why my packets from San Jose to San Francisco
are going by way of Orlando, and to complain that the traceroute
shows that orlando.routers.example.net is 250ms from San Jose.
But I've also found that orlando.routers.example.net isn't always in Orlando,
and that traceroute response times aren't always what they seem:
the 250ms doesn't necessarily mean that Example.Net
has a really slow route to Orlando, or that the "Orlando" router is in Singapore;
it may just be that they're using a Vendor X router which isn't good at pings
when the CPU is busy (that's especially a problem for little DSL routers).
It's probably critical for connections between ISPs to have
registered addresses that are used for traceroute responses,
but I'm not convinced that routers internal to an ISP need to have
globally unique addresses, as long as the ISP's operations folks can tell
what interfaces are on what machines.
Using RFC1918 space does mean that traceroutes either need to report
numerical addresses or use the ISP's DNS server to resolve them,
which isn't always practical, but that's not a big limitation.
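For what it's worth, spotting those hops is mechanical. A small sketch (Python, names mine) that checks whether a traceroute hop falls in one of the three RFC1918 blocks:

```python
import ipaddress

# The three address blocks reserved by RFC1918.
RFC1918_BLOCKS = [ipaddress.ip_network(n) for n in
                  ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    """True if addr is in RFC1918 private space."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in RFC1918_BLOCKS)
```

A hop like 10.250.1.9 in a traceroute is an ISP-internal router; don't expect its address to reverse-resolve from outside the ISP's own DNS.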
PathMTU Discovery is less of a perceived problem than traceroute,
since usually anything that's broken will be broken on an edge router
or tunnelling device of some sort rather than a core router,
and core routers tend to all have the same values,
but that still shouldn't force you to use registered address space.
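To make the mechanism concrete: PMTU discovery converges on the smallest link MTU along the path, one ICMP "fragmentation needed" message at a time. A toy simulation (pure Python, assumed link MTUs, no real packets):

```python
def discover_path_mtu(link_mtus, probe=1500):
    """Simulate RFC1191-style PMTU discovery: send a DF-flagged probe,
    and whenever a link can't carry it, the router there returns an
    ICMP 'fragmentation needed' reporting its next-hop MTU."""
    while True:
        bottleneck = next((m for m in link_mtus if m < probe), None)
        if bottleneck is None:
            return probe          # probe fit end to end
        probe = bottleneck        # shrink to the reported next-hop MTU
```

If the core is uniform at 1500 and only an edge tunnel clamps to something smaller, discovery needs exactly one ICMP round trip, which is why a broken edge box hurts far more than anything in the core.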