The power of default configurations

> no to 1) prolong the pain, 2) beat a horsey.. BUT, why are 1918 ips
> 'special' to any application? why are non-1918 ips 'special' in a
> different way?

i know this is hard to believe, but i was asked to review 1918 before it
went to press, since i'd been vociferous in my comments about 1597. in
the text (RFC 1918) we see the following:

   Because private addresses have no global meaning, routing information
   about private networks shall not be propagated on inter-enterprise
   links, and packets with private source or destination addresses
   should not be forwarded across such links. Routers in networks not
   using private address space, especially those of Internet service
   providers, are expected to be configured to reject (filter out)
   routing information about private networks. If such a router receives
   such information the rejection shall not be treated as a routing
   protocol error.

well, so much for the importance of "shall not" in rfcspeak, huh?
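(for reference, the "shall not" above boils down to a membership test
against three prefixes. a sketch in python -- the function names are
mine, not anything from the rfc:)

```python
import ipaddress

# the three rfc 1918 blocks
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

def forward_on_inter_enterprise_link(src: str, dst: str) -> bool:
    # per the rfc text: drop packets with a private source OR
    # destination address at the enterprise boundary
    return not (is_rfc1918(src) or is_rfc1918(dst))
```

(python's ipaddress also offers an is_private attribute, but it matches
more than just the 1918 blocks -- which is exactly the kind of "special"
treatment being complained about in this thread.)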

   It is strongly recommended that routers which connect enterprises to
   external networks are set up with appropriate packet and routing
   filters at both ends of the link in order to prevent packet and
   routing information leakage. An enterprise should also filter any
   private networks from inbound routing information in order to protect
   itself from ambiguous routing situations which can occur if routes to
   the private address space point outside the enterprise.

"blah, blah, blah, ginger, blah, blah." --what your dog hears (gary larson)

   If an enterprise uses the private address space, or a mix of private
   and public address spaces, then DNS clients outside of the enterprise
   should not see addresses in the private address space used by the
   enterprise, since these addresses would be ambiguous. One way to
   ensure this is to run two authority servers for each DNS zone
   containing both publically and privately addressed hosts. One server
   would be visible from the public address space and would contain only
   the subset of the enterprise's addresses which were reachable using
   public addresses. The other server would be reachable only from the
   private network and would contain the full set of data, including the
   private addresses and whatever public addresses are reachable from the
   private network. In order to ensure consistency, both servers should
   be configured from the same data of which the publically visible zone
   only contains a filtered version. There is a certain degree of
   additional complexity associated with providing these capabilities.

yikes! i think i contributed some of that text. and i see now that it
really does have to say something about dns forwarders. so i'll withdraw
my suggestion that this thread be moved to bind-users@ -- it needs to go
to dnsop@lists.uoregon.edu since it's not a BIND-specific issue at all.
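(the two-server scheme quoted above is these days usually collapsed
into a single bind 9 server with views; a rough sketch, where the zone
name and file names are made up for illustration:)

```
acl internal { 10.0.0.0/8; 172.16.0.0/12; 192.168.0.0/16; };

view "inside" {
    match-clients { internal; };
    zone "example.com" {
        type master;
        file "example.com.internal";  // full data, incl. private addresses
    };
};

view "outside" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "example.com.external";  // filtered, public addresses only
    };
};
```

(generating both zone files from one source, as the rfc suggests, is
what keeps them consistent.)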

Paul Vixie wrote:

> no to 1) prolong the pain, 2) beat a horsey.. BUT, why are 1918 ips
> 'special' to any application? why are non-1918 ips 'special' in a
> different way?
>
> i know this is hard to believe, but i was asked to review 1918 before it
> went to press, since i'd been vociferous in my comments about 1597. in

IMO, RFC 1918 went off the track when both ISPs and registries started
asking their customers whether they had "seriously considered using 1918
space instead of applying for addresses". This caused many kinds of
renumbering nightmares, overlapping addresses, the near death of ipv6, etc.

Pete

> no to 1) prolong the pain, 2) beat a horsey.. BUT, why are 1918 ips
> 'special' to any application? why are non-1918 ips 'special' in a
> different way?

i know this is hard to believe, but i was asked to review 1918 before it
went to press, since i'd been vociferous in my comments about 1597. in
the text (RFC 1918) we see the following:

<snip>

yikes! i think i contributed some of that text. and i see now that it
really does have to say something about dns forwarders. so i'll withdraw
my suggestion that this thread be moved to bind-users@ -- it needs to go
to dnsop@lists.uoregon.edu since it's not a BIND-specific issue at all.

So, this highlights some good operational practices in networking and
DNS-applications, but doesn't answer how 1918 is 'different' or 'special'
than any other ip address. I think what I was driving at is that putting
these proposed road blocks in bind is akin to the 'cisco auto secure'
features.

Someone is attempting to 'secure' the problem (both the network and the
application problems) here in the same manner. The practices outlined in
the RFC paul quoted, if followed, should do this... So, the problem isn't
that technology is required to fix this, it's that people aren't doing
the required things to make the pain stop (at the enterprise or individual
site level).

Making the distinction between 1918 and 'other' seems, at least at the
equipment or application level, like a recipe for disaster. As paul
mentioned wrt Microsoft earlier: there are many an enterprise out there
with 1918 in sites X/Y/Z and 'globally unique ip space' in sites A/B/C.

> So, this highlights some good operational practices in networking and
> DNS-applications, but doesn't answer how 1918 is 'different' or 'special'
> than any other ip address. I think what I was driving at is that putting
> these proposed road blocks in bind is akin to the 'cisco auto secure'
> features.

when you attempt to solve a routing problem by addressing tricks,
you're gonna pay for it forever in ever-expanding ways. this is
just one of them.

and, through the brilliance of the ivtf, it has been perpetuated
in ipv6.

randy

> So, this highlights some good operational practices in networking and
> DNS-applications, but doesn't answer how 1918 is 'different' or 'special'
> than any other ip address. I think what I was driving at is that putting
> these proposed road blocks in bind is akin to the 'cisco auto secure'
> features.

> when you attempt to solve a routing problem by addressing tricks,
> you're gonna pay for it forever in ever-expanding ways. this is
> just one of them.

Hmmm... interesting. Routing is basically the dynamic exchange
of address ranges and their attributes through various protocols.
Normally routers do the talking, but that is only incidental.

One might look at this issue and say that IETF RFC human
readable documents are not the best way to communicate address
ranges and their attributes, therefore RFC 1918 is fatally flawed.
Similarly, the IANA page at
http://www.iana.org/assignments/ipv4-address-space
is also flawed because, although it is accessible via the HTTP
protocol, it is clearly intended to be a human readable document
no different from an RFC.

But now let's turn our attention to Team Cymru's bogon project.
Here we see that they are offering the dynamic exchange of
address ranges and their attributes through various protocols
such as DNS, RADB and BGP. Clearly this falls on the "routing"
side of the fence.

Which leads me to the question: Why are RFC 1918 addresses defined
in a document rather than in an authoritative protocol feed which
people can use to configure devices? Perhaps if they were defined
in a protocol feed of some sort, like DNS, then device manufacturers
would make their devices autoconfigure using that feed?

--Michael Dillon

Because they don't change terribly often.
Indeed the ones in RFC1918 don't change at all.
A protocol feed to deliver the same 6 integers?

The discussion here seems to be muddling two issues.

One is ISPs routing packets with RFC1918 source addresses, which presumably
can and should be dealt with as a routing issue; I believe there is already
a BCP outlining several ways to deal with this traffic.

This is noticeable to DNS admins, as presumably most such misconfigured boxes
never get an IP address for the service they actually want to use, since the
enquiries are unanswerable, or at least the boxes issue more DNS queries
because some of them are unanswerable.

The other is DNS queries about RFC1918 address space, which can probably
be minimised by changing the default settings when DNS server packages are
made. For example, Debian supplies the config files with the RFC1918 zones
commented out (although they are all ready to kill the traffic by removing a
"#").
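For the curious, the Debian arrangement looks roughly like this (file
paths and the comment character are from memory, so treat them as
illustrative):

```
// in named.conf.local, as shipped, the include is commented out:
// include "/etc/bind/zones.rfc1918";

// zones.rfc1918 declares each reverse zone as master for an empty zone,
// so queries are answered locally instead of leaking upstream:
zone "10.in-addr.arpa"      { type master; file "/etc/bind/db.empty"; };
zone "16.172.in-addr.arpa"  { type master; file "/etc/bind/db.empty"; };
// ... one stanza for each /16 in 172.16.0.0/12 ...
zone "168.192.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
```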

However whilst I'm sure there is a lot of dross looking up RFC1918 address
space, I also believe if the volume of such enquiries became an operational
issue for the Internet there are other ways of reducing the number of these
queries.

Whilst we are on dross that turns up at DNS servers, how about traffic for
port 0? Surely this could be killed at the routing level as well. Anyone got
any figures for how much port 0 traffic is around? My understanding is it is
mostly either scanning or broken firewalls, neither of which are terribly
desirable things to have on your network, or to ship out to other people's
networks.

> anyone got any figures for how much port 0 traffic is around?

For F-root, queries with UDP source port 0 make up about 0.001% of
the traffic. Or 4500 queries yesterday.
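(Working backwards from those two figures, taking 0.001% literally,
gives a feel for F-root's total load -- my arithmetic, not Duane's:)

```python
port0_share = 0.001 / 100          # "about 0.001%" as a fraction
port0_queries = 4500               # "4500 queries yesterday"
total_queries = port0_queries / port0_share
print(f"{total_queries:,.0f} queries/day")  # roughly 450,000,000
```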

I'm not seeing any source port 0 queries at ISC's AS112 node or their TLD server.

Duane W.

Or packet MTU fragmentation. Many security products misinterpret the
packet header on a fragment and display port "0" instead of port "N/A".

And just like people who drop all ICMP packets, if you drop all fragments,
stuff breaks in weird ways. But it's your network, you can break it any
way you want.

<stepping off horsey>

Sean makes a good point: 'randomly' dropping traffic that 'seems bad to
you' is rarely a good plan :( Hopefully people check to see if the traffic
has a use and has some operational validity before just deciding to drop
it? Even icmp has its place in the world...

</stepping off horsey>