Telcos write best practices for packet switching networks

After the SNMP excitement I asked if anyone had suggestions on how
to architect or design a backbone network to be less susceptible to
problems. It turns out the telephone industry has written a set of
best practices for the Internet.

Focus Group 2.A.2: Best Practices on Packet Switching. Karl Rauscher,
Lucent Technologies
http://www.nric.org/pubs/index.html

   Mr. Rauscher gave an example of the kind of information to be found
   there. The best practice used in the example states that critical
   packet network elements such as control elements, access and signaling
   gateways, and DNS servers, should have firewall protection such as
   screening and filtering. One hundred percent of the respondents
   indicated they were implementing this best practice.

Cool, who has an OC-192 firewall on their control elements? What is
a control element? Is that the same as a router, or is that a signaling
gateway?

Sean,

Cool, who has an OC-192 firewall on their control elements? What is
a control element? Is that the same as a router, or is that a signaling
gateway?

Hmm...gotta say it (again). Of course oc192/10ge firewalls are not
currently widely deployed (aka not a best practice), but they should be!

Of course, folks will argue that you have to pay a lot of extra $$
to make that a reality...kind of like how auto makers argue that you
should pay a lot of extra $$ for the GPS receiver in your car (which
does not COST a lot of extra $$).

-ron

Cool, who has an OC-192 firewall on their control elements? What is
a control element? Is that the same as a router, or is that a signaling
gateway?

Hmm...gotta say it (again). Of course oc192/10ge firewalls are not
currently widely deployed (aka not a best practice), but they should be!

Of course, folks will argue that you have to pay a lot of extra $$
to make that a reality...kind of like how auto makers argue that you
should pay a lot of extra $$ for the GPS receiver in your car (which
does not COST a lot of extra $$).

Firewalls are good things for general purpose networks. When you've
got a bunch of clueless employees, all using Windows shares, NFS, and
all sorts of nasty protocols, a firewall is best practice. Rather
than educate every single one of them as to the security implications
of their actions, just insulate them, and do what you can behind the
firewall.

When you've got a deployed server, run by clueful people, dedicated to
a single task, firewalls are not the way to go. You've got a DNS
server. What are you going to do with a firewall? Permit tcp/53 and
udp/53 from the appropriate net blocks. Where's the protection? Turn
off unneeded services, choose a resilient and flame-tested daemon, and
watch the patchlist for it.
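
To make that concrete, here is a minimal sketch in Python of the entire
policy such a firewall could usefully enforce in front of a dedicated DNS
server; the permitted netblock is made up:

    # Sketch of the whole firewall policy for a dedicated DNS server:
    # permit tcp/53 and udp/53 from the expected netblocks, drop the rest.
    from ipaddress import ip_address, ip_network

    ALLOWED_NETS = [ip_network("192.0.2.0/24")]      # made-up netblock
    ALLOWED_PORTS = {("tcp", 53), ("udp", 53)}

    def permit(src_ip, proto, dst_port):
        """Return True if the packet should be passed to the DNS server."""
        if (proto, dst_port) not in ALLOWED_PORTS:
            return False
        return any(ip_address(src_ip) in net for net in ALLOWED_NETS)

    # permit("192.0.2.10", "udp", 53) -> True; everything else is dropped.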

ericb

When you've got a deployed server, run by clueful people, dedicated to a
single task, firewalls are not the way to go.

Probably. And I would certainly rate "clueful people" _far_ above a firewall
when it comes time to prioritize your security needs and resources.

What are you going to do with a firewall?

Compared to your average application, firewalls often have
-better logging (more detail, adjustable, not on the vulnerable device);
-vendors focused on security;
-add-ons like IDS that can benefit from the superior logs;
-firewall admins focused on security and who do security every day;
-better response capability for unplanned/unanticipated security issues.

choose a resilient and flame-tested daemon, and watch the patchlist for it.

You've never seen a security vendor come out with a patch or workaround before
an application vendor?

When you've got a deployed server, run by clueful people, dedicated to a
single task, firewalls are not the way to go.

Probably. And I would certainly rate "clueful people" _far_
above a firewall when it comes time to prioritize your security
needs and resources.

Mind having a talk with my management?

choose a resilient and flame-tested daemon, and watch the patchlist for it.

You've never seen a security vendor come out with a patch or
workaround before an application vendor?

Sure. Sometimes they come out with patches that wouldn't be needed if
you didn't have the firewall ;-)

Stateful firewalls also suffer from state propagation problems. High
bandwidth redundant links and firewalls don't get along well together.
Some firewall packages will allow you to statelessly pass high-bandwidth
traffic (tcp,udp/53 in the DNS example), which helps with load
management and failover. But then you're back to where you were
without the firewall.
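
A minimal sketch of that failure mode in Python; the policy "new flows
must be queries to port 53" is an assumption made for the example:

    # A stateful firewall keyed on the 5-tuple drops replies whose original
    # query passed through its redundant twin; a stateless tcp,udp/53 rule
    # passes them regardless of which path they took.
    state_table = set()          # flows this particular firewall has seen

    def stateful_pass(flow):
        src, sport, dst, dport, proto = flow
        reverse = (dst, dport, src, sport, proto)
        if flow in state_table or reverse in state_table:
            return True          # we saw the original query
        if proto in ("tcp", "udp") and dport == 53:
            state_table.add(flow)    # new query toward the server
            return True
        return False             # reply whose state lives on the other box

    def stateless_pass(flow):
        src, sport, dst, dport, proto = flow
        return proto in ("tcp", "udp") and 53 in (sport, dport)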

Decent IDSes running on spanning ports against your uplinks, decent
logging on packet-filtering routers, etc. will all give you the
benefits of the firewall. In general, an IDS is a better IDS than a
firewall, and so forth.

The primary benefit of firewalls is simplicity of configuration, and
the ability to allow outbound services without opening huge inbound
holes (tcp,udp/53, tcp/20, udp > 1023, etc). This is generally not
the case with deployed ISP servers.

Finally, the "crunchy outside" thing takes over way too often.
Management is lulled into a happy place by the word "firewall", and
even good security engineers get lazy. I realize that this is 100% a
meat problem, but it's a problem either way.

ericb

Sean,

Most ISPs have a comparable set-up wrt modems/terminal servers for
managing their network elements - same deal, but ISPs can choose
between inband & OOB whereas the telcos can't. (Or couldn't, until
recently, when Net/Bell convergence started urging the market toward
big damn fiber switches with in-band mgmt tools.)

The inband/OOB debate is always squirrely. Things like BGP/OSPF
are in-band, and ISPs can't really choose an out-of-band way to
exchange routing information. It's true that console access has a
choice of accessing the management port through different paths.
The router will continue to route, even if the operator can't access
the console port.

The telephone world thinks of the debate in terms of 2600 Hz tone
signalling versus SS7 control channels. If you disrupt the SS7 control
channel, the telephone switch won't complete new calls even if the
trunk groups still work. The management or craft ports are a different
matter.

Physical attacks make it more interesting. Because the telephone
network uses separate signalling channels, you can disrupt a lot of
calls by destroying relatively few control points/links. Since
the Internet uses in-band control, as long as there is some physical
connectivity, you can use it for both control and user traffic.

Every time Illuminet has a glitch, a dozen states have problems
completing calls between ILECs and CLECs. This affects a lot of
dialup access to the Internet.

So - in the world of telco, the control elements are JUST OOB. Since
you literally can't reach them inband, the OOB element mgmt can be
done through modems or a separate network which is firewalled off
from the rest of the Internet. That's what they're talking about in
your excerpt.

Where it gets interesting is when the assumptions about what is
"outside" or "inside" is violated. I think the Internet is actually
much more secure now because its so open, we don't make assumptions
about who we trust. The telephone network is built on a house of
trust, and if you can get on the "inside" the world is yours.

What I find interesting is that I've heard a lot of cage rattling to
take the Internet in this direction, i.e. stop managing it in-band
where all the kiddies and the terrorists can get at it and start
managing it OOB. Hide it, shut it away, don't route it, etc.
Never mind what a pain it is to manage TWO networks... never mind how
much flexibility you lose. (Sorry, my bias is showing.)

Having a separate network didn't stop Mitnick :-) I think some of it is
"the grass is always greener on the other side of the fence."

Reserving bandwidth for specific purposes tends to make your network
more brittle, and less responsive to unexpected events. I try to
explain it's like car pool lanes on the highway making traffic jams
worse.

I happen to believe you need both in-band and out-of-band control
access, and you need the same level of security on both. But I
tend to order my goals with availability first. Having your network
down may be "secure" but it isn't very useful.

Kelly J. Cooper - Security Engineer, CISSP

So why did you get the CISSP? I just received my CISSP certificate,
but I needed to get it for resume padding purposes.

There are four different issues with IB vs OOB signalling & control,
actually --

1) isolation of control traffic from payload traffic to eliminate
   possible security breaches.

2) isolation of control traffic from payload traffic to prevent starvation
   of control traffic by (possibly misbehaving) payload traffic.

3) having an alternative, isolated, routing infrastructure for control
   traffic which can function even when primary routing is hosed.

4) having a physically separate network for control traffic which can
   conceivably survive when the primary network is broken.

On #1, Internet routing protocols are notoriously weak. Using globally
routable frames to carry neighbour-to-neighbour routing information is a
recipe for disaster (i think everyone on this list can think of a few
not-yet-plugged holes arising from this approach).

In most IGPs and eBGP there's simply no need to use routable IP packets,
period. The only exception is the iBGP hack (and the consequent route-reflector
kludgery), and this can only be cured by a better-designed IGP which can
carry all exterior routing information. I hope someone's doing something
to make it happen.

Using non-IP packets does not always bring isolation; the OSI stack is even
more vulnerable. So a cheap fix would be to design routing hardware in a
way that forces some reserved IP addresses to be non-routable. (127/8 seems to
be a good candidate :). Even better is to start using non-IP frame types
altogether. For all its weaknesses, outside attacks on ARP are unheard of.

Another (weaker) option is to use cryptography. Besides inevitable bugs
(like numerous problems in SSH), crypto is slow and hard to do right.

#2 is also a known pitfall (hello, OFRV :-) Although, in theory, just
jacking up the priority of packets carrying routing protocols & network
monitoring traffic could take care of this problem, the reality is quite a
bit hairier. Most hardware doesn't prioritize generation of ICMPs (so a lot
of looped or misdirected packets can swamp the routing processor, which is
incidentally used to separate control traffic from transit traffic).
Usually there are cross-interface dependencies resulting from shared
buffer memory, the supposedly "non-blocking" switching fabrics being
anything but, confusion between queueing and drop priorities, plain broken
design of packet classifiers (hard to do right at high speeds :), and
network admins simply being too lazy to configure interface ToS processing
appropriately (and/or failing to filter out packets with a ToS similar to
the routing protocols' at all ingress points!)
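
A minimal sketch of that ingress check, assuming the usual convention
that routing protocols are marked with IP precedence 6 or 7; the
neighbor address is made up:

    # At edge interfaces, refuse transit packets claiming the precedence
    # values used for control traffic (6 = internetwork control,
    # 7 = network control) unless they come from a known adjacent router.
    CONTROL_PRECEDENCE = {6, 7}
    KNOWN_NEIGHBORS = {"192.0.2.1"}              # made-up adjacent router

    def accept_at_edge(src_ip, tos_byte):
        precedence = (tos_byte >> 5) & 0x7       # top three bits of ToS
        if precedence in CONTROL_PRECEDENCE and src_ip not in KNOWN_NEIGHBORS:
            return False                         # spoofed "control" traffic
        return True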

Given the practical problems of getting ToS for control traffic set
up properly, the option of guaranteeing bandwidth and processing capacity
to control traffic by using separate, non-configurable forwarding/queueing
for non-IP traffic seems to be quite reasonable.

Problems arising from control traffic starvation are numerous and can
easily lead to prolonged network-wide failures. Therefore, making control
traffic "OOB" is definitely worth-while.

#3 is somewhat muddled by the fact that having valid routing information
while having no functioning payload pathway is somewhat irrelevant (in
theory, having such information may let the network use
unidirectionally-broken paths, or allow faster recovery from network
fractures by eliminating the need to send updates about _working_ parts of
the split-off networks). In practice, the only real gain from a redundant
control network is the ability to better diagnose problems, particularly
routing problems in the primary network (i.e. the "OOB" network is used to
carry diagnostic traffic only). A dedicated OOB network for console access
to various pieces of equipment is a lot more useful.

The horrible weakness of SNMP makes a separate control network somewhat more
resistant to attacks; however, it also requires zealous filtering of SNMP
and other control protocols (such as telnet :-) packets coming to the
router's control unit(s) from the primary network. If such filtering is
broken or glossed over, there are no security gains.

#4 is hardly useful in any situation (with the exception of a diagnostic
network). In fact, telco-issue "OOB" is usually muxed over the same
wires.

So, i would say i'm pro-OOB where it concerns clean confinement of control
traffic into non-routable, unconditionally-prioritized frames, and
contra-OOB when it comes to making separate networks for control traffic.
Your definition of "separate network" may vary :-)

--vadim

In a message written on Fri, Mar 08, 2002 at 05:52:46PM -0800, Vadim Antonov wrote:

1) isolation of control traffic from payload traffic to eliminate
   possible security breaches.

[snip]

On #1, Internet routing protocols are notoriously weak. Using globally
routable frames to carry neighbour-to-neighbour routing information is a
recipe for disaster (i think everyone on this list can think of a few
not-yet-plugged holes arising from this approach).

This is an area of interest of mine when looking at IPv6. IPv6
has the notion of link-local IP addresses, which can't (for some
definition of can't) be accessed unless you are on that link.

This could go a long way to fixing the problems you mention, but
it introduces some additional configuration issues. In particular,
the current practice of using the same link-local addresses on
every link means you would need to configure both the address and
the port.
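
The wrinkle shows up even at the socket level: a link-local address by
itself is ambiguous, so the interface (scope) has to be named as well.
A hypothetical Python sketch, with fe80::1, eth0 and the BGP port as
assumed values:

    # Connecting to a BGP peer (tcp/179) on a link-local IPv6 address;
    # the scope id identifying the interface is mandatory.
    import socket

    peer = "fe80::1"                             # made-up link-local peer
    scope = socket.if_nametoindex("eth0")        # made-up interface name
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.connect((peer, 179, 0, scope))             # (addr, port, flowinfo, scope_id)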

In any event, I wonder if there is an opportunity here for additional
security, although any changes are clearly years off.

Of course, like many things, security looks easy until you have
to do it yourself. So I don't mean to suggest there are really
any easy answers.

But I've been wondering about simple structural changes which would
improve the intrinsic security of the net. For example, remember when
BARRNET had the problem with people stealing passwords on their backbone?
One simple change was removing general-purpose computers which could
be used as sniffers from their core router LANs.

My simple question is why do exchange point prefixes or backbone
network prefixes need to be announced to peers or customers? If no
one announced IXP prefixes, it would be more difficult (modulo
LSRR/SSRR) to send bogus packets at distant routing gateways. The
attacker would need to be directly connected, or compromise something.

This has been something which has bugged me ever since I connected
a router to mae-east. There is no "true" ASN for inter-provider
network prefixes, yet the prefixes show up in the BGP tables via multiple
providers. Private inter-ISP links aren't any better. They are
frequently taken from some provider's internal space, and announced
by a combination of providers.

This isn't really OOB, but similar to your idea of not using a
globally routable network to exchange routing information. It's
not as difficult as making a 127/8 kludge. It's a small matter
of not announcing prefixes used for BGP to your BGP peers (next-hop-self).
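
A minimal sketch of that export filter in Python; the interconnect
prefix, and the idea of checking announcements against it, are
assumptions for illustration rather than anyone's actual configuration:

    # Strip prefixes covering BGP/interconnect addresses from what gets
    # announced to peers.
    from ipaddress import ip_network

    INFRASTRUCTURE = [ip_network("192.0.2.0/24")]    # made-up interconnect space

    def export(prefixes):
        """Return only the prefixes safe to announce to peers."""
        return [p for p in map(ip_network, prefixes)
                if not any(p.subnet_of(infra) for infra in INFRASTRUCTURE)]

    # export(["192.0.2.0/25", "198.51.100.0/24"]) keeps only 198.51.100.0/24.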

> So, i would say i'm pro-OOB where it concerns clean confinement of control
> traffic into non-routable, unconditionally-prioritized frames, and
> contra-OOB when it comes to making separate networks for control traffic.
> Your definition of "separate network" may vary :-)

Of course, like many things, security looks easy until you have
to do it yourself. So I don't mean to suggest there are really
any easy answers.

It would help if equipment vendors made it easier to enable management on
a per-interface basis, so you could just disable it on "in band"
interfaces.

My simple question is why do exchange point prefixes or backbone
network prefixes need to be announced to peers or customers? If no
one announced IXP prefixes, it would be more difficult (modulo
LSRR/SSRR) to send bogus packets at distant routing gateways. The
attacker would need to be directly connected, or compromise something.

Define "need". It is extremely helpful to receive ICMP messages from
within the IP address range of an exchange. If the routes aren't
announced, packets from these addresses would be dropped by routers
performing unicast RPF checks. Too bad for traceroute, but potentially
much more serious for path MTU discovery. Nearly all implementations are
broken to the degree they can't recover from the situation where they
don't receive "datagram too big" messages, and exchange points are
typically the places where networks with different MTUs come together.
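
A minimal sketch of the strict unicast RPF check in question, with a
made-up one-entry routing table; a source inside an unannounced exchange
prefix matches no route, so its "datagram too big" messages are dropped:

    # Strict uRPF: accept a packet only if the route back to its source
    # points out the interface it arrived on.
    from ipaddress import ip_address, ip_network

    routes = {ip_network("203.0.113.0/24"): "peer0"}   # made-up toy FIB

    def urpf_accept(src_ip, arrival_interface):
        src = ip_address(src_ip)
        matches = [ifname for net, ifname in routes.items() if src in net]
        return arrival_interface in matches            # no route -> drop

    # urpf_accept("203.0.113.1", "peer0") -> True
    # urpf_accept("192.0.2.1", "peer0")   -> False (unannounced exchange prefix)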

I think I would prefer to just announce the prefixes, but route them to
the null interface somewhere. This doesn't get in the way of unicast RPF
elsewhere, but protects the interconnect addresses equally well, and it
allows a more fine-grained approach.

Anyway, I feel the effort needed to educate networks about this problem
would be better spent in trying to get them to filter out outbound packets
with bogus source addresses. I still see lots of 192.168/16 source
addresses in packets received from peers.
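
A minimal sketch of that kind of source-address filter in Python,
checking outbound packets against the RFC 1918 ranges:

    # Refuse to forward packets toward peers when the source address
    # falls inside the RFC 1918 private ranges.
    from ipaddress import ip_address, ip_network

    BOGUS_SOURCES = [ip_network(n) for n in
                     ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def pass_outbound(src_ip):
        return not any(ip_address(src_ip) in net for net in BOGUS_SOURCES)

    # pass_outbound("192.168.1.5") -> False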