address spoofing

Greg A. Woods wrote:

> So are you making a case to allow RFC1918 source addresses out into the
> network?

Huh? No, I thought I was saying very much the opposite! I don't want
my upstream provider to use RFC1918 on inter-router links, but they do
anyway. I'd like them to filter those addresses too, but they won't.

I do agree they should be filtered out.

At what point should we draw the line and say who can, and who cannot,
use RFC1918 addresses on links? My first thought would be any link over
which traffic from more than one AS transits, or between AS's, should
always be fully routable. Any better ideas?
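The filtering being asked for here amounts to a source-address membership test against the three RFC 1918 blocks. A minimal Python sketch of that test (the real filter belongs in router ACLs, and the sample addresses are made up):

```python
import ipaddress

# The three RFC 1918 private blocks; a real martian filter would
# also cover 127.0.0.0/8, 169.254.0.0/16, and friends.
RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr):
    """True if addr falls inside any RFC 1918 block."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

# A border router applying this test would drop the first two
# packets as unroutable/spoofed and pass the third.
for src in ("10.1.2.3", "172.31.0.1", "198.51.100.7"):
    print(src, "drop" if is_rfc1918(src) else "pass")
```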

If you do all your internal routing over ATM or FR virtual circuits then
you won't need to (and in fact cannot) use IP numbers for those circuits
-- it all looks like the physical layer from IP's perspective (the
theory being that if you don't need IPs for inter-router links then you
won't be using precious unique IPs and feel the pressure to use RFC1918
numbers instead). I'm certainly no expert at this, but from the outside
I've seen it done quite successfully. It sure cuts down on the hop
count visible from traceroute too!

The FR cloud will look like one hop as far as I can see. But none of
my RFC1918 links are FR or ATM. They are plain DS1/24*N (aside from
the internal aliases, but those aren't even links).

It's damn near impossible to debug from the outside, of course, but
sometimes that's desirable! ;-)

I remember the first place I put up a firewall, I blocked pretty much
everything, including ping (from outside) and traceroute (from outside).
The reason was to conform to corporate policy regarding confidentiality
of facilities and resources to guard against competitors snooping around.
Even so much as seeing how many IPs would answer ping was considered to
be proprietary company information. It was my goal to limit access to
just those resources required for the company's business. I think I did
it pretty well. I only got one complaint about it and that was from
Randy Bush.

> If you're proposing another set of addresses be reserved for uses like
> this, then I'd be in favor of it with you. Using RFC1918 is certainly
> not the best way to do this, but using allocated space is no better as
> long as allocations are tight.

Using any other set of reserved addresses would have exactly the same
problem as using RFC1918 addresses has. The only two viable options are
to either use globally unique addresses, or not to use any IP routing
internally at all.

I do see another possibility. I would call these "public overload"
addresses. By public, they would be allowed to transit as sources.
By overload, more than one use at a time could be made, although they
should be unique within an administrative scope much as RFC1918 is.
As to the impact that may cause on the net, I cannot say. There could
very well be more impact than RFC1918 has, so it's probably not a good
idea. I just see it as a possibility.

> People don't know how to separate their internet DNS from intranet DNS.
> Or maybe they don't want to put the money into that kind of structure.
> If BIND could be modified to deliver different results depending on the
> source of the request, or its interface, then it might become easy for
> people to set up DNS to avoid this.

Yes, it can be done, but even I am not yet using the latest software,
which makes this much easier, on all the machines I manage.

I haven't seen how to do it in the newest BIND. I tried some tricks but
haven't managed to accomplish it.
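For the record, BIND 9 (which shipped after this thread) added `view` clauses that select answers by client source address. A minimal named.conf sketch, with made-up zone file paths -- internal clients get one copy of the zone, everyone else another (note that once any view is defined, every zone has to live inside a view):

```
acl internal { 10.0.0.0/8; 172.16.0.0/12; 192.168.0.0/16; };

view "internal" {
    match-clients { internal; };
    zone "example.com" {
        type master;
        file "internal/db.example.com";
    };
};

view "external" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "external/db.example.com";
    };
};
```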

Greg A. Woods wrote:

> my upstream provider to use RFC1918 on inter-router links, but they do
> anyway. I'd like them to filter those addresses too, but they won't.

I do agree they should be filtered out.

At what point should we draw the line and say who can, and who cannot,
use RFC1918 addresses on links? My first thought would be any link over
which traffic from more than one AS transits, or between AS's, should
always be fully routable. Any better ideas?

Somewhere along the lines of this thread, the point has been lost (IMHO).

If a provider uses 1918 addresses on internal links, who cares? And when
you say 'filter' them, do you mean filter them in routing announcements,
or filter any traffic to/from those IPs?

If the former, then that's good; you should do that as part of your
martian filters. If the latter, that's fine too, but traceroutes will
show '*' on those hops.

But, once again, who cares? At worst it's conservation of IP space,
which is a good thing.

> won't be using precious unique IPs and feel the pressure to use RFC1918
> numbers instead). I'm certainly no expert at this, but from the outside
> I've seen it done quite successfully. It sure cuts down on the hop
> count visible from traceroute too!

Using 1918 space will have no bearing on hop count or visibility of the
hop. That's ridiculous.

-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
     Atheism is a non-prophet organization. I route, therefore I am.
       Alex Rubenstein, alex@nac.net, KC2BUO, ISP/C Charter Member
               Father of the Network and Head Bottle-Washer
     Net Access Corporation, 9 Mt. Pleasant Tpk., Denville, NJ 07834
Don't choose a spineless ISP; we have more backbone! http://www.nac.net
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

Phil Howard wrote:

Greg A. Woods wrote:

> > So are you making a case to allow RFC1918 source addresses out into the
> > network?
>
> Huh? No, I thought I was saying very much the opposite! I don't want
> my upstream provider to use RFC1918 on inter-router links, but they do
> anyway. I'd like them to filter those addresses too, but they won't.

I do agree they should be filtered out.

At what point should we draw the line and say who can, and who cannot,
use RFC1918 addresses on links? My first thought would be any link over
which traffic from more than one AS transits, or between AS's, should
always be fully routable. Any better ideas?

My take on this is the line is at the edge of a single, end customer. In
other words, a company which buys IP service from an ISP may use RFC
1918 addresses internally. An ISP should NOT use private address space
in their own network for any equipment, with the exception of their
administrative or management functions, provided those functions are
restricted in scope to their own use. There should be no gear that uses
private address space anywhere in the path from an end customer to the
peering points.

The reason I arrive at this conclusion is for the sake of the end user.
The purpose, in my opinion, of the private address space described in
RFC 1918 is to allow users to build large networks without consuming
public address space. The goal was to provide someplace for private
networks to go that'd be unique to themselves, provided they didn't talk
to another private end user. Now what happens when a company has already
used 10.x.x.x and built a large network, and uses NAT or proxies at
their border, but their upstream ISP decides to also use 10.x.x.x for
everything? There is real potential for conflict.

What I find most annoying about my upstream using private address space
for their own use is it takes away my ability to use that address space
as an end customer, and could have required me to renumber to ensure
there were no conflicts.
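The conflict is easy to demonstrate: it only takes the ISP and the customer independently carving ranges out of 10/8. A quick Python check with hypothetical prefixes:

```python
import ipaddress

customer = ipaddress.ip_network("10.0.0.0/16")  # customer's existing internal net
isp_mgmt = ipaddress.ip_network("10.0.4.0/22")  # upstream picks this for its gear

# ip_network.overlaps() tells you whether the two allocations
# collide -- i.e. whether someone is forced to renumber.
print(customer.overlaps(isp_mgmt))  # → True
```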

[ On Sunday, April 25, 1999 at 02:46:31 (-0500), Phil Howard wrote: ]

Subject: Re: address spoofing

At what point should we draw the line and say who can, and who cannot,
use RFC1918 addresses on links? My first thought would be any link over
which traffic from more than one AS transits, or between AS's, should
always be fully routable. Any better ideas?

I think the line's trivial to draw. You can use RFC 1918 addresses on
any interfaces so long as the router can never generate a packet with
that address in it (either as a source address, or as part of the
protocol payload for something like "echo reply" which would confuse the
recipient). I don't know if this is actually possible, but that's
irrelevant, of course! ;-)

I.e. RFC1918 addressing for private links is fine so long as the outside
world will never see mention of those addresses.
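One way to check that "never see mention" condition from the outside is to walk a traceroute path and flag any RFC 1918 hops. A small Python sketch (the hop list is invented):

```python
import ipaddress

PRIVATE = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def leaking_hops(hops):
    """Return (position, address) pairs for hops inside RFC 1918 space."""
    return [(i, h) for i, h in enumerate(hops, 1)
            if any(ipaddress.ip_address(h) in net for net in PRIVATE)]

# Hypothetical path: the middle hop answers with a private source,
# so the provider's internal numbering is visible to the world.
path = ["192.0.2.1", "10.200.0.5", "198.51.100.9"]
print(leaking_hops(path))  # → [(2, '10.200.0.5')]
```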

I remember the first place I put up a firewall, I blocked pretty much
everything, including ping (from outside) and traceroute (from outside).
The reason was to conform to corporate policy regarding confidentiality
of facilities and resources to guard against competitors snooping around.
Even so much as seeing how many IPs would answer ping was considered to
be proprietary company information. It was my goal to limit access to
just those resources required for the company's business. I think I did
it pretty well. I only got one complaint about it and that was from
Randy Bush.

:-)

I do see another possibility. I would call these "public overload"
addresses. By public, they would be allowed to transit as sources.
By overload, more than one use at a time could be made, although they
should be unique within an administrative scope much as RFC1918 is.
As to the impact that may cause on the net, I cannot say. There could
very well be more impact than RFC1918 has, so it's probably not a good
idea. I just see it as a possibility.

Hmmm... Yes. I wonder if there's any way to prove that if such
addresses are used only as *source* addresses (and perhaps "echo reply"
values, etc.) that they'll never cause any packets to be generated in
response. That way the overloading wouldn't cause as much of a problem.

I meant to mention last time that the use of a specific public block for
this purpose only is better than using RFC 1918 addresses because then
there's less confusion between internal management LANs and other truly
private uses. If I use RFC 1918 addresses behind a firewall then I
cannot permit those packets on the public side.

Of course any overloading of a block of addresses means that you've got
to be particularly careful never to introduce routing in your "public"
infrastructure for the overloaded block -- I think that would be a clear
indication that you're using such addresses for the wrong purpose.

I haven't seen how to do it in the newest BIND. I tried some tricks but
haven't managed to accomplish it.

I'm working on setting up a brand new set of systems for a client and
I'm going to try doing some split-brained DNS in production for them --
I'll try to remember to let you know how it works out and how I did it
if I'm successful. Maybe something like this is worth writing a paper
or article about too, though I think I already have some references
squirrelled away somewhere.