rfc 1918?

Hello,

Does anyone know why I get inbound packets from 10.x.x.x coming from my ISP,
UUNet? They're just headed for a webserver, so it's not likely that they're
up to no good.
This seems to violate rfc 1918. Am I crazy?

Feb 22 15:29:48 computerjobs-gw 353094: Feb 22 20:30:10.439 UTC:
%SEC-6-IPACCESSLOGP: list 135 denied tcp 10.10.5.18(62438) ->
63.67.217.184(80), 1 packet
Feb 22 15:30:02 computerjobs-gw 353095: Feb 22 20:30:24.024 UTC:
%SEC-6-IPACCESSLOGP: list 135 denied tcp 10.10.5.18(62440) ->
63.67.217.184(80), 1 packet
Feb 22 15:30:06 computerjobs-gw 353096: Feb 22 20:30:28.168 UTC:
%SEC-6-IPACCESSLOGP: list 135 denied tcp 10.10.5.18(62455) ->
63.67.217.184(80), 1 packet
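(The actual contents of "list 135" aren't shown above, but an inbound ACL that would produce log entries like these might look roughly like the following sketch; the ACL body and interface name are assumptions, not taken from the post.)

```text
! Hypothetical ingress filter in the spirit of "list 135": drop and log
! packets arriving from the upstream with RFC 1918 source addresses
access-list 135 deny   tcp 10.0.0.0 0.255.255.255 any log
access-list 135 deny   tcp 172.16.0.0 0.15.255.255 any log
access-list 135 deny   tcp 192.168.0.0 0.0.255.255 any log
access-list 135 permit ip any any
!
interface Serial0
 description uplink to UUNet (interface name is hypothetical)
 ip access-group 135 in
```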

You're not crazy, and UUNet should be filtering them.

Most likely, some poor Windows user behind a NAT is leaking the packets
and nobody's filtering them.

> Does anyone know why I get inbound packets from 10.x.x.x coming from my ISP,
> UUNet? They're just headed for a webserver, so it's not likely that they're
> up to no good.
> This seems to violate rfc 1918. Am I crazy?

> You're not crazy, and UUNet should be filtering them.

This is, again, not a foregone conclusion.

There are good reasons to want to get those packets (traceroutes from
people who have numbered their networks in rfc1918 networks, f'rinstance).

Not everyone agrees whether it is better to filter or not to filter,
but there are good arguments on both sides.

--jhawk

> There are good reasons to want to get those packets (traceroutes from
> people who have numbered their networks in rfc1918 networks, f'rinstance).

The original note specifically showed them as being TCP packets from
a 10.x.x.x address going to port 80. Does that qualify as a good reason?

> Not everyone agrees whether it is better to filter or not to filter,
> but there are good arguments on both sides.

Does anybody in the house think that these packets actually have a snowball's
chance in Hades of getting a reply back successfully?

> > There are good reasons to want to get those packets (traceroutes from
> > people who have numbered their networks in rfc1918 networks, f'rinstance).

> The original note specifically showed them as being TCP packets from
> a 10.x.x.x address going to port 80. Does that qualify as a good reason?

Err, no, but I don't think it's reasonable to expect people to filter rfc1918
packets based on specific protocols.

> > Not everyone agrees whether it is better to filter or not to filter,
> > but there are good arguments on both sides.

> Does anybody in the house think that these packets actually have a snowball's
> chance in Hades of getting a reply back successfully?

No, but that's really a thoroughly orthogonal question.

--jhawk

> You're not crazy, and UUNet should be filtering them.

> There are good reasons to want to get those packets (traceroutes from
> people who have numbered their networks in rfc1918 networks,

That's not a good reason. Nobody should be generating public traffic from
those addresses, "making them work" is not an Internet-friendly decision.

> > You're not crazy, and UUNet should be filtering them.

No Chris, you're not crazy...

> There are good reasons to want to get those packets (traceroutes from
> people who have numbered their networks in rfc1918 networks,

No John, there are exactly zero reasons, good or otherwise, for allowing
any traffic with RFC-1918 source addresses to traverse any part of the
public Internet. Period! :-)

[ On Thursday, February 22, 2001 at 13:22:27 (-0800), Eric A. Hall wrote: ]

Subject: Re: rfc 1918?

> That's not a good reason. Nobody should be generating public traffic from
> those addresses, "making them work" is not an Internet-friendly decision.

Precisely.

The sooner RFC-1918-sourced packets get filtered (i.e. the closer to
the source they get filtered, *and* the quicker *EVERYONE* introduces
such filters), the sooner the people (and that's the politely and
politically correct way of speaking of them) who think they can use
private addresses in public networks will hopefully get clue-by-4'ed
into changing their errant ways.

Now if only I could find some magic way to make all those trigger-happy
people running lame IDSes complain to the true source of such packets.
If the relatively few complaints I see from such people when accidental
ftp or http connections are attempted to their workstations are any
indication, then the mere volume of complaints alone would probably be
sufficient reason for anyone to stop using RFC-1918 addressing. Too bad
the Internet's not just one big bridged Ethernet, or we could
just look up the MAC address (on our border bridges, of course) of any
offender and then go beat them over the head directly with the mangled
packets! :-)

Thankfully there are now devices that can do such filtering effectively
even at very high core speeds.... Now we only have to convince the
manufacturers of such devices to supply them with default configurations
that do such filtering (and not to make the stupid mistake of shipping
factory configurations as if their devices will only ever live in a lab
environment)!

This gets us back to the discussion we had here about 3-4 months ago about
what should be done to create a friendly Internet environment,
that is, one where every Internet-connected entity actually gives a damn about
everyone else.

--Ariel

> > There are good reasons to want to get those packets (traceroutes from
> > people who have numbered their networks in rfc1918 networks,

> No John, there are exactly zero reasons, good or otherwise, for allowing
> any traffic with RFC-1918 source addresses to traverse any part of the
> public Internet. Period! :-)

You are being religious, and I shall not descend into this sort of
discussion with you. It is simply neither productive nor professional.

I disagree, and believe that other reasonable people do so as well,
and there is therefore argument over this issue. People should not
assert canonicity upon it. End of story.

--jhawk

> No John, there are exactly zero reasons, good or otherwise, for allowing
> any traffic with RFC-1918 source addresses to traverse any part of the
> public Internet. Period! :-)

Although Path MTU discovery messages from RFC1918 P2P links will arrive,
and if you block them you'll find strange things occur when transferring
data, so you can't say nothing should come from 1918 space.

> That's not a good reason. Nobody should be generating public traffic from
> those addresses, "making them work" is not an Internet-friendly decision.

I agree, although a lot of people do use 1918 for their p2p.

> The sooner RFC-1918-sourced packets get filtered (i.e. the closer to

Until the previous item is fixed, though, you'll break things if you do this.

Steve

John Hawkinson wrote:

> > No John, there are exactly zero reasons, good or otherwise

> I disagree, and believe that other reasonable people do so as well,
> and there is therefore argument over this issue.

Some people believe the earth is flat, so that issue is undecided?

hehe

RFC1918 addresses are not "free addresses"; they are private-use ONLY
addresses which must not appear in public networking space. It cannot be
made much clearer than that. Science has spoken. RFC1918 addresses on
public interfaces are bad. Doesn't matter who disagrees with it or how
convenient it is to adopt. There is consensus on this issue.

"Stephen J. Wilcox" wrote:

> Although Path MTU discovery messages from RFC1918 P2P links will arrive,
> and if you block them you'll find strange things occur when transferring
> data, so you can't say nothing should come from 1918 space.

Exactly why they should be expunged from ISP backbones. What if an ICMP-DU
(destination unreachable) message had to go the other way, from ISP space
out to the Internet?

Because they aren't filtering properly.

You can solve it by filtering yourself.

[ On Thursday, February 22, 2001 at 17:33:53 (-0500), John Hawkinson wrote: ]

Subject: Re: rfc 1918?

> > > There are good reasons to want to get those packets (traceroutes from
> > > people who have numbered their networks in rfc1918 networks,
>
> > No John, there are exactly zero reasons, good or otherwise, for allowing
> > any traffic with RFC-1918 source addresses to traverse any part of the
> > public Internet. Period! :-)

> You are being religious, and I shall not descend into this sort of
> discussion with you. It is simply neither productive nor professional.

OK, sorry, let me qualify that:

No John, there are exactly zero TECHNICAL reasons, good or otherwise,
for allowing any traffic with RFC-1918 source addresses to traverse any
part of the public Internet. Period! :-)

> I disagree, and believe that other reasonable people do so as well,
> and there is therefore argument over this issue. People should not
> assert canonicity upon it. End of story.

In all of the past discussions on this issue there have never been any
presentations of technical reasons for allowing RFC-1918 addresses (in
either the source *or* destination fields) to traverse the public
Internet. (At least none have been presented while I've been watching,
not anywhere.)

Yes those who have the misunderstanding that they can use such addresses
are going to fail to filter them lest they block their own uses, but
that's circular reasoning, even if it is technically correct within the
microcosms of those people's own minds.

However in public there is no possible valid technical argument: by the
very definition in RFC-1918, such addresses are solely for PRIVATE use,
and private use ONLY. Unfortunately RFC-1918 is not also a STD-*
document, but even as just a Best Current Practice it can only ever
really succeed if everyone co-operates completely, and since people are
eager to use PRIVATE addresses, the rest of us are pretty much forced to
co-operate by filtering the heck out of such "mis-uses".

RFC-1918 also clearly suggests that non-unique PRIVATE addresses are
really only useful where external connectivity is not used -- i.e. for
private networks that are never in any way connected to the public
Internet. I.e. use of private addresses on public devices, with or
without filtering at network borders, is still "wrong". One might even
go so far as to argue that use of PRIVATE addresses behind a proper NAT
is similarly "wrong", though of course with a proper NAT you'd never
know! :-)

Note that any part of the Internet which joins any two independently
controlled and operated nodes is, by definition, public. That means
that even an ISP with just direct customers must still never allow
RFC-1918 addresses to appear at either their customer sites, or on their
back-haul(s) to the rest of the Internet. Their customers have just as
much right to make private use of RFC-1918 addresses as does any other
participant on the public Internet. Any use by any ISP of any RFC-1918
addressing violates that right.

The only other technical option is to forget about allocating private
address space, deprecate RFC-1918, and open up the address space to full
and proper routing. Though I do find private address space handy, I
wouldn't mind making all that space publicly available too. So, do we
want RFC-1918 promoted to a full standard, or deleted? You choose.

[ On Thursday, February 22, 2001 at 22:40:11 (+0000), Stephen J. Wilcox wrote: ]

Subject: Re: rfc 1918?

> Although Path MTU discovery messages from RFC1918 P2P links will arrive,
> and if you block them you'll find strange things occur when transferring
> data, so you can't say nothing should come from 1918 space.

Even more reason to filter RFC-1918 src/dest addresses completely and
utterly. Such broken implementations deserve to be cut off from the
public Internet as they cause nothing but problems.

Note that anyone using PRIVATE addresses within their own networks, and
with even a half-decent security policy, is forced to filter all such
junk at their borders anyway, so they could never "win" with such broken
implementations.

I.e. the only "fair" thing to do is to filter all RFC-1918 addresses
early and often from all public Internet links.
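(As a concrete sketch, "early and often" filtering of all three RFC-1918 blocks, in both the source and destination fields, might look like this on a Cisco-style router; the ACL number is arbitrary and illustrative.)

```text
! Drop RFC 1918 sources and destinations on a public-facing link
! (sketch only; apply inbound and/or outbound as appropriate)
access-list 110 deny   ip 10.0.0.0 0.255.255.255 any log
access-list 110 deny   ip 172.16.0.0 0.15.255.255 any log
access-list 110 deny   ip 192.168.0.0 0.0.255.255 any log
access-list 110 deny   ip any 10.0.0.0 0.255.255.255
access-list 110 deny   ip any 172.16.0.0 0.15.255.255
access-list 110 deny   ip any 192.168.0.0 0.0.255.255
access-list 110 permit ip any any
```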

> > That's not a good reason. Nobody should be generating public traffic from
> > those addresses, "making them work" is not an Internet-friendly decision.

> I agree, although a lot of people do use 1918 for their p2p.

That's not necessarily quite the same issue, so long as no packets ever
traverse the rest of the public Internet with RFC-1918 source or
destination addresses.

(Un)Fortunately it's difficult, or even impossible in some cases, to
prevent packets with PRIVATE addresses from being generated and so it's
still extremely bad practice to use PRIVATE addresses for any point-to-
point links with transit PUBLIC traffic "in the raw" (i.e. not in a
tunnel that would have PUBLIC end-point addresses).

> > The sooner RFC-1918-sourced packets get filtered (i.e. the closer to

> Until the previous item is fixed, though, you'll break things if you do this.

Indeed -- but the sooner and more often such things are "broken", the
sooner they'll get fixed properly!

"Tough love", and "you've got to be good to be bad", etc., etc., etc....

It is my intention to avoid having 1918 addresses leave my network.

At our egress points the filters are fairly short -- they allow only traffic
with our IP source addresses to leave. This was my interpretation of the RFCs.
Some in this discussion seem to be saying that we should also filter for RFC1918
destinations. Am I reading this correctly?

I can see that packets destined for RFC1918 addresses will leave our network
(due to default routes) but are promptly dropped at the first BGP speaking
router they encounter. Is it worth the extra router processing time to check
all outgoing packet destinations as well? I can't see where this extra
filtering is worth the trouble.

Mark Radabaugh
VP, Amplex
(419)833-3635
mark@amplex.net
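(For illustration, an egress filter of the kind Mark describes -- permit only your own source addresses out, deny everything else -- might look like this; 192.0.2.0/24 is a placeholder prefix, not Amplex's real address space.)

```text
! Egress filter: only our own source addresses may leave
! (192.0.2.0/24 stands in for the provider's real prefix)
access-list 120 permit ip 192.0.2.0 0.0.0.255 any
access-list 120 deny   ip any any log
!
interface Serial0
 ip access-group 120 out
```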

[ On Thursday, February 22, 2001 at 19:12:14 (-0500), Mark Radabaugh wrote: ]

Subject: RE: rfc 1918?

> I can see that packets destined for RFC1918 addresses will leave our network
> (due to default routes) but are promptly dropped at the first BGP speaking
> router they encounter. Is it worth the extra router processing time to check
> all outgoing packet destinations as well? I can't see where this extra
> filtering is worth the trouble.

I suppose that depends on just how far away the first BGP speaking
router is from your network border(s), and how properly configured it
is.

In practical terms I suppose it also depends on just exactly what
filtering technology you've deployed, and just exactly how close it is
to being overloaded. If you are already pushing your router's CPU too
hard (and if your filters are done by your router's CPU rather than an
ASIC) then obviously reducing your filter load will be in your own best
interests and not filtering destination addresses against RFC-1918 will
be one relatively benign way of reducing the filter load. However if
your router's CPU is only partially utilised now (even if you push your
pipe to capacity), then adding such destination filters won't hurt
anyone.

woods@weird.com (Greg A. Woods) tapped some keys and produced:

> In practical terms I suppose it also depends on just exactly what
> filtering technology you've deployed, and just exactly how close it is
> to being overloaded. If you are already pushing your router's CPU too
> hard (and if your filters are done by your router's CPU rather than an
> ASIC) then obviously reducing your filter load will be in your own best
> interests and not filtering destination addresses against RFC-1918 will
> be one relatively benign way of reducing the filter load. However if
> your router's CPU is only partially utilised now (even if you push your
> pipe to capacity), then adding such destination filters won't hurt
> anyone.

Would routing them to Null0 not be more optimal?

Pi

Pim van Riezen <pi@vuurwerk.nl> tapped some keys and produced:

> Would routing them to Null0 not be more optimal?

Never mind I'm thinking ass-backwards, must be something in the air.
*Revokes own posting privs*

Pi

> Would routing them to Null0 not be more optimal?

> Pi

Hum... Now there is a good idea!

Thanks,

Mark
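(The Null0 idea sketched out: a common Cisco idiom in which static routes drop RFC-1918-destined packets in the forwarding path, rather than having them follow the default route out or be checked by an ACL.)

```text
! Null-route the RFC 1918 blocks so 1918-destined packets
! are discarded locally instead of leaving the network
ip route 10.0.0.0     255.0.0.0     Null0
ip route 172.16.0.0   255.240.0.0   Null0
ip route 192.168.0.0  255.255.0.0   Null0
```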

> At our egress points the filters are fairly short -- they allow only traffic
> with our IP source addresses to leave. This was my interpretation of the RFCs.

Thank you. The rest of the network appreciates you doing your part.

> Some in this discussion seem to be saying that we should also filter for RFC1918
> destinations. Am I reading this correctly?

That's probably optional, but if you have the router resources to do it,
every little bit helps. You probably should filter and log it, to find out
why one of your hosts is trying to send to a 1918 address outside your
site - if you're not using 1918 space, it shouldn't happen, and if you ARE
using it, the packet should have ended up inside your net, not on your
border router.

In either case, if a packet is trying to leave your net bound for a 1918
destination, something is probably seriously wrong(*).

(*) I'll leave ICMP replies from 1918-addressed P2P links out for the moment ;-)
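(A "filter and log" rule for outbound 1918-destined packets might look like the following sketch; the ACL number is arbitrary.)

```text
! Deny and log outbound packets bound for RFC 1918 destinations,
! so the stray senders can be tracked down from the logs
access-list 121 deny   ip any 10.0.0.0 0.255.255.255 log
access-list 121 deny   ip any 172.16.0.0 0.15.255.255 log
access-list 121 deny   ip any 192.168.0.0 0.0.255.255 log
access-list 121 permit ip any any
```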

> I can see that packets destined for RFC1918 addresses will leave our network
> (due to default routes) but are promptly dropped at the first BGP speaking
> router they encounter. Is it worth the extra router processing time to check
> all outgoing packet destinations as well? I can't see where this extra
> filtering is worth the trouble.

There are two main classes of "next router":

1) You don't filter, but it goes to the OTHER end of the link and promptly
gets stomped by a fascist filter that refuses to accept any source address
that's outside the address block that's supposed to be at your end.

2) You don't filter, and they don't filter either, because they actually USE
1918 space for their own stuff, so your packets with 1918 source addresses
and real destination addresses manage to go a LONG way before hitting
anything that will stop them (as the original poster showed, the packets
could actually *arrive* at the destination, with no way to reply).

Remember - this filtering often can't be done on core routers due to
performance issues, so if it doesn't get done at the border routers
it probably won't happen...

        Valdis Kletnieks
        Operating Systems Analyst
        Virginia Tech