Arguing against using public IP space

From: William Herrin <bill@herrin.us>
Date: Sun, 13 Nov 2011 15:13:37 -0500
Subject: Re: Arguing against using public IP space
To: nanog@nanog.org

> On Sun, 13 Nov 2011 10:36:43 -0500, Jason Lewis <jlewis@packetnexus.com> wrote:
>> http://www.redtigersecurity.com/security-briefings/2011/9/16/scada-vendors-use-public-routable-ip-addresses-by-default.html
>
> Any article that claims a /12 is a 'class B', and a /16 is a 'Class C', is
> DEFINITELY 'flawed'.

Hi Robert,

Give the chart a second look. 192.168.0.0/16 (one of the three RFC1918
spaces) is, in fact, a /16 of IPv4 address space and it is, in fact,
found in the old "class C" range. Ditto 172.16.0.0/12. If there's a
nitpick, the author should have labeled the column something like
"classful area" instead of "classful description."

In the 'classful' world, neither the /12 nor the /16 space was referenceable
as a single object. Correct 'classful descriptions' would have been:
        "16 contiguous Class 'B's"
       "256 contiguous Class 'C's"

Fine. But I think you're going to find that synecdoche triumphs here, and
a Class-C-*sized* network is going to be called that, even if its first
octet is 191 or lower, Robert.

Cheers,
-- jra

Concur, GMTA. My point is that without an airgap, the attacker can jump from a production network to the SCADA network, so we're in violent agreement.

;>

What if you air-gap the SCADA network of which you are in
administrative control, and then there's a failure on it, and the people
responsible for troubleshooting it can't do it remotely (because of the
air gap), so the trouble continues for an extra hour while they drive
to the office, and that extra hour of failure causes someone to die.
Should that result in a homicide charge?

What if you air-gap the SCADA network of which you are in
administrative control, but, having done so, you can't afford the level
of redundancy you could when it wasn't air-gapped, and a transport
failure leaves you without remote control of a system at a time when
it's needed to prevent a cascading failure, and that leads to someone
dying. Should that result in a homicide charge?

Air-gap means you have to build your own facilities for the entire
SCADA network. No MPLS-VPN service from providers. Can't even use
point-to-point TDM circuits (T1, for example) from providers, since
those are typically mixed in with other circuits in the carrier's DACS,
so it's only logical separation. And even if you want to redefine
"air-gap" to be "air-gap, or at least no co-mingling in any packet
switching equipment", you've ruled out any use of commercial wireless
service (i.e. cellular) for backup paths.

A good engineer weighs all the tradeoffs and makes a judgement. In
some systems, there might be a safety component of availability that
justifies accepting some very small increase in the risk of outside
compromise.

You can argue that safety is paramount -- that the system needs to be
designed to never get into an unsafe condition because of a
communications failure (which, in fact is a good argument) -- that
there must always be sufficient local control to keep the system in a
safe state. Then you can implement the air-gap policy, knowing that
while it might make remote control less reliable, there's no chance of,
say, the plant blowing up because of loss of remote control. (Except,
of course, that that's only true if you have complete faith in the local
control system. Sometimes remote monitoring can allow a human to see
and correct a developing unsafe condition that the control system was
never programmed to deal with.)

But even if the local control is completely safe in the loss-of-comm
failure case, it's still not as cut and dried as it sounds. The plant
might not blow up. But it might trip offline with there being no way
to restart it because of a comm failure. Ok, fine, you say, it's still
in a safe condition. Except, of course, that power outages, especially
widespread ones, can kill people. Remote control of the power grid
might not be necessary to keep plants from blowing up, but it's
certainly necessary in certain cases to keep it online. (And in this
paragraph, I'm using the power grid as an example. But the point I'm
making in this post is the more general case.)

Sure, anytime there's an attack or failure on a SCADA network that
wouldn't have occurred had it been air-gapped, it's easy for people to
knee-jerk a "SCADA networks should be airgapped" response. But that's
not really intelligent commentary unless you carefully consider what
risks are associated with air-gapping the network.

Practically speaking, non-trivial SCADA networks are almost never
completely air-gapped. Have you talked to people who run them?

     -- Brett

From: "Brett Frankenberger" <rbf+nanog@panix.com>

> What if you air-gap the SCADA network of which you are in
> administrative control, and then there's a failure on it, and the
> people responsible for troubleshooting it can't do it remotely (because of
> the air gap), so the trouble continues for an extra hour while they drive
> to the office, and that extra hour of failure causes someone to die.
> Should that result in a homicide charge?

I should think it would be your responsibility, as the chief engineer, to
make sure *you have filled all such possible holes in your operations plan*.

> What if you air-gap the SCADA network of which you are in
> administrative control, but, having done so, you can't afford the
> level of redundancy you could when it wasn't air-gapped, and a transport
> failure leaves you without remote control of a system at a time when
> it's needed to prevent a cascading failure, and that leads to someone
> dying. Should that result in a homicide charge?

If it costs more to run, then it *costs more to run*.

> Air-gap means you have to build your own facilities for the entire
> SCADA network. No MPLS-VPN service from providers. Can't even use
> point-to-point TDM circuits (T1, for example) from providers, since
> those are typically mixed in with other circuits in the carrier's
> DACS, so it's only logical separation. And even if you want to redefine
> "air-gap" to be "air-gap, or at least no co-mingling in any packet
> switching equipment", you've ruled out any use of commercial wireless
> service (i.e. cellular) for backup paths.

*I* define "air gap" as "no way to move packets from the outside world
into the network." I didn't say "100% dedicated facilities", though your
implication that that still leaves an attacker a way to reconfigure things
such that they could get in is absolutely correct.

> A good engineer weighs all the tradeoffs and makes a judgement. In
> some systems, there might be a safety component of availability that
> justifies accepting some very small increase in the risk of outside
> compromise.

The line is pretty bright, I think, but you're correct in asserting that the
price difference goes up as the square of the number of nines. But that's not
important now; we're talking about cases that aren't even *99%*, much less 4 or
5 nines of unlikelihood that an outsider can find a way to break in.

> You can argue that safety is paramount -- that the system needs to be
> designed to never get into an unsafe condition because of a
> communications failure (which, in fact is a good argument) -- that
> there must always be sufficient local control to keep the system in a
> safe state. Then you can implement the air-gap policy, knowing that
> while it might make remote control less reliable, there's no chance
> of, say, the plant blowing up because of loss of remote control. (Except,
> of course, that that's only true if you have complete faith in the
> local control system. Sometimes remote monitoring can allow a human to see
> and correct a developing unsafe condition that the control system was
> never programmed to deal with.)

Yes, but that's enablement for loss of locally staffed control, all by itself.

Even power utilities have a pretty good real rate of return these days; I have
no problem with them spending a little more of their revenue on safety, instead
of profit. If that takes regulators pointing guns at them, I'm fine with that.

> But even if the local control is completely safe in the loss-of-comm
> failure case, it's still not as cut and dried as it sounds. The plant
> might not blow up. But it might trip offline with there being no way
> to restart it because of a comm failure. Ok, fine, you say, it's still
> in a safe condition. Except, of course, that power outages, especially
> widespread ones, can kill people. Remote control of the power grid
> might not be necessary to keep plants from blowing up, but it's
> certainly necessary in certain cases to keep it online. (And in this
> paragraph, I'm using the power grid as an example. But the point I'm
> making in this post is the more general case.)

And I just read the Cracked piece talking about the '77 NYC blackout, which is
sort of weird, actually. :-)  But the general-case point *I* was making was
not that I expected a conviction.

It was that I expected a charge, and a trial.

If a bridge collapses, are we going to charge the Professional Engineer who
signed off on it?

> Sure, anytime there's an attack or failure on a SCADA network that
> wouldn't have occurred had it been air-gapped, it's easy for people to
> knee-jerk a "SCADA networks should be airgapped" response. But that's
> not really intelligent commentary unless you carefully consider what
> risks are associated with air-gapping the network.
>
> Practically speaking, non-trivial SCADA networks are almost never
> completely air-gapped. Have you talked to people who run them?

No, and yes, my knee was jerking. But based on the industry stuff I have seen
from out here, security isn't anywhere in the same *county* as where it needs
to be, and -- just as with RMS and the GPL -- maybe if some extremists' knees jerk,
the end result, while more moderate, will be more salutary.

If you were that chief, and you knew the result of you screwing up might be
the loss of your liberty, not just your job... you don't think you'd fight
budget battles with your utility harder? (That's often the reason for such
regulations: to give middle to upper management more bullets to fire at the
(greedy) owners.)

Thanks for doing some of my math for me, though, Brett. :-)

Cheers,
-- jra

"It depends" on your NAT model. If you take a default Cisco PIX or ASA device...

(a) There is an option to "permit non-NAT traffic through the firewall". If not selected (nat-control) then there must be a covering NAT rule for any inside host to communicate with the outside interface, let alone outside-to-inside.

(b) By default all inbound traffic is default-deny, only "return" traffic for inside-initiated connections is allowed.

Yes, it's stateful (which is another argument altogether for placing a stateful device in the chain) but by all means, it does not allow outside traffic into the inside, regardless of the addressing scheme on the inside.
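
A rough Python sketch of that stateful behavior (purely illustrative -- this is
not ASA configuration, and the flow-table layout and names are made up for the
example): inside-initiated flows create state, and outside traffic is forwarded
only if it matches an existing entry.

    # Toy model of "default-deny inbound, allow return traffic only".
    state = set()  # flows initiated from the inside, keyed by 5-tuple

    def outbound(proto, src, sport, dst, dport):
        """Inside-initiated packet: record the flow so replies are allowed."""
        state.add((proto, src, sport, dst, dport))
        return "forward"

    def inbound(proto, src, sport, dst, dport):
        """Outside packet: allowed only if it is return traffic for a known flow."""
        if (proto, dst, dport, src, sport) in state:
            return "forward"   # reply to an inside-initiated connection
        return "drop"          # unsolicited: default-deny

    outbound("tcp", "10.0.0.5", 40000, "192.0.2.10", 80)          # inside host connects out
    print(inbound("tcp", "192.0.2.10", 80, "10.0.0.5", 40000))    # forward (return traffic)
    print(inbound("tcp", "198.51.100.7", 4444, "10.0.0.5", 22))   # drop (outside-initiated)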

Beyond that, using 1918 space decreases the possibility that a "new, unexpected" path to the inside network will result in exposure. If you are using public space on the inside, and some path develops that bypasses the firewall, the routing information is already in place, you only need to affect the last hop. You can then get end-to-end routing of inside hosts to an outside party. Using 1918 space, with even nominal BCP adherence of the intermediate transit providers, you can't leak routing naturally. (Yes, it's certainly possible, but it raises the bar).
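
For what it's worth, the "is this even routable from outside?" distinction the
previous paragraph relies on is easy to express; here's an illustrative Python
check using the standard ipaddress module and the three RFC 1918 blocks (the
function name is just an example, not anyone's production filter):

    # The three RFC 1918 blocks; a border filter or audit script can refuse
    # to accept routes or forward traffic for destinations inside them.
    import ipaddress

    RFC1918 = [ipaddress.ip_network(n) for n in
               ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def is_rfc1918(addr):
        a = ipaddress.ip_address(addr)
        return any(a in net for net in RFC1918)

    print(is_rfc1918("192.168.4.20"))   # True  - no natural return path across the Internet
    print(is_rfc1918("192.169.4.20"))   # False - public space; a leaked path is enough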

If the added protection were trivial, I would think the PCI requirement 1.3.8 requiring it would have been rejected long ago.

Jeff

Sure, anytime there's an attack or failure on a SCADA network that
wouldn't have occurred had it been air-gapped, it's easy for people to
knee-jerk a "SCADA networks should be airgapped" response. But that's
not really intelligent commentary unless you carefully consider what
risks are associated with air-gapping the network.

Not to mention that it's not the only way for these things to get
infected. Getting fixated on air-gapping is unrealistically ignoring
the other threats out there.

There needs to be a whole lot more security work done on SCADA nets.

... JG

If you designed a life-critical airgapped network that didn't have a trained
warm body at the NOC 24/7 with an airgapped management console, and hot (or at
least warm) spares for both console and console monkey, yes, you *do* deserve
that negligent homicide charge.

Sure, anytime there's an attack or failure on a SCADA network that
wouldn't have occurred had it been air-gapped, it's easy for people to
knee-jerk a "SCADA networks should be airgapped" response. But that's
not really intelligent commentary unless you carefully consider what
risks are associated with air-gapping the network.

Not to mention that it's not the only way for these things to get
infected. Getting fixated on air-gapping is unrealistically ignoring
the other threats out there.

There needs to be a whole lot more security work done on SCADA nets.

Stuxnet should provide a fairly illustrative example.

It doesn't really matter how well isolated from direct access it is if
it has a soft gooey center and a willing attacker.

A packet addressed to an endpoint that doesn't serve anything or have
a client listening will be ignored (whatever) as a matter of course.
Firewall or no firewall.

It will not go to a listening application, but neither will it be completely
ignored -- the receiving machine's TCP stack will have a look at the packet.
On common operating systems there are frequently unsafe applications
listening on ports; on certain OSes, there will be no way to turn off
system applications, or human error will leave them in place.
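
A quick way to see that the remote stack still parses and answers the packet
even with nothing listening -- this is just a sketch, and the address and port
below are documentation placeholders, not a real target:

    import socket

    def probe(host, port, timeout=2.0):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            return "open: something is listening"
        except ConnectionRefusedError:
            return "closed: the remote stack saw the SYN and answered with a RST"
        except socket.timeout:
            return "filtered: no answer at all, something dropped the packet"
        finally:
            s.close()

    print(probe("192.0.2.1", 9999))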

That's fundamental to TCP/IP and secure.

It's fundamental to TCP/IP, but not fundamentally secure. TCP/IP
implementations have flaws.
If a host is meant solely as an endpoint, then it is exposed to undue
risk if arbitrary packets can be addressed to its TCP/IP stack, which
might have remotely exploitable bugs.

The only reason we firewall (packet filter) is to provide access
control (for whatever reason).

No, we also firewall to block access entirely to applications
attempting to establish a service on unapproved ports. We also use
firewalls in certain forms to ensure that communications over a TCP/IP
socket comply with a protocol, for example, that a session
terminated on port 25 is actually an SMTP session.

The firewall might restrict the set of allowed SMTP commands, validate
the syntax of certain commands, hide server version information,
prevent use of ESMTP pipelining from outside, etc.
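
A toy illustration of that kind of application-layer check (the verb list is
made up for the example, and a real SMTP proxy does far more):

    # Only a small set of SMTP verbs is allowed through, and input that
    # carries more than one command per line (pipelining) is rejected.
    ALLOWED = {"HELO", "EHLO", "MAIL", "RCPT", "DATA", "RSET", "NOOP", "QUIT"}

    def check_command(line):
        if "\r\n" in line.rstrip("\r\n"):
            return False                              # pipelined commands
        verb = line.strip().split(" ", 1)[0].upper()
        return verb in ALLOWED

    print(check_command("MAIL FROM:<user@example.com>"))   # True
    print(check_command("VRFY root"))                       # False - verb not allowed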

I apologize in advance if this is too pedestrian (you might know this
but not agree with it) but I want to make a point:
End-to-end principle: https://en.wikipedia.org/wiki/End-to-end_principle

The end-to-end connectivity principle's purpose is not to provide
security. It is to facilitate communications with other hosts and
networks. When a private system _really_ has to be secure, end-to-end
connectivity is inherently incompatible with security objectives.

There is always a tradeoff involving sacrificing a level of security
against remote attack
when connecting a computer to a network, and then again, when
connecting the network to the internet.

A computer that is not connected to a network is perfectly secure
against network-based attacks.
A computer that is connected to a LAN is at risk of potential
network-based attack from that LAN.

If that LAN is then connected to the internet through a firewall doing
many-to-1 NAT, there is another layer of risk added.

If the NAT'ing firewall is then replaced with a simple many-to-1 NAT
router, there is another layer of risk added; there are new attacks
possible that still succeed in a NAT environment which the firewall
would have stopped.

Finally, if that same computer is then given end-to-end
connectivity with no firewall, there is a much less encumbered
communications path available to that computer for launching a
network-based attack; the attack surface is greatly increased in such
a design.

If we are talking about nodes that interface with a SCADA network,
the concept of unfirewalled end-to-end connectivity approaches the
level of insanity.

Those are not systems where end-to-end public connectivity should be a
priority over security.

Yes, the author of this article is sadly mistaken and woefully void
of clue on the issues he attempts to address.

You are completely correct.

Owen

I don't think anyone in this thread is 'fixated' on the idea of airgapping; but it's generally a good idea whenever possible, and as restrictive a communications policy as is possible is definitely called for, amongst all the other things one ought to be doing.

It's also important to note that it's often impossible to *completely* airgap things, these days, due to various interdependencies, admin requirements (mentioned before), and so forth; perhaps bastioning is a more apt term.

That's basically the case for so many things.

I was reading, recently, two articles on Ars Technica ("Die, VPN" and
"Live, VPN") which made it exceedingly clear that these sorts of designs
are still the rule for most companies. I mean, I already knew that, but
it was *depressing* to read.

We've been very successful for many years designing things as though they
were going to be deployed on the public Internet, even if we do still put
them behind a firewall. That's just belt-and-suspenders common sense.

We do run a VPN service, which I use heavily, but it really has little
to do with granting magical access to resources - the VPN endpoint is
actually outside any firewall. I've so frequently found, over the years,
that some "free" Internet connection offering is crippled in some stupid
manner (transparent proxying with ad injection!), that the value added
is mostly just that of getting an Internet connection with no interference
by third parties. The fact that third parties cannot do any meaningful
snooping is nice too.

I also recall a fairly surreal discussion with a NANOG'er who was
absolutely convinced that SSH key based access to other servers was
more secure than password based access along with some ACL's and
something like sshguard; my point was that compromise of the magic
host with the magic key would tend to be worse (because you've suddenly
got access to all the other servers) while having different secure
passwords for each host, along with some ACL's and sshguard, allows you
to retain some isolation within the network from an infected node. It's
dependent on design and forethought, of course...

Basically, getting access to some point in the network shouldn't really
allow you to go on a rampage through the rest of the network.

... JG

As far as I can see Red Tiger Security is Jonathan Pollet; and even
though they list Houston, Dubai, Milan, and Sydney as offices it looks
like Houston is the only one. Is that right? Seems a little
misleading.

It actually reminds me of a 16 year old kid I know who runs a web
hosting "company" that you'd think was a Fortune 500 by the way the
website reads, and he's more than happy to take your credit card
information and store it without being PCI compliant.

Credibility of the company aside,

At first I wanted to cut Jonathan some slack. If he was going to
point to the use of public IPs as evidence that a firewall may not be
in use and then went on to discuss the potential risks of not having
any security, then that would have been appropriate. But instead he
goes on about explaining what a public vs. private address is (poorly)
and proceeds to associate the security of the system with the use of
private IPs.

I just don't see him as credible in the security field after reading it.

Then again, he does have that interview on Fox News posted on his
website where he talks about terrorist plots to compromise the
integrity of nuclear power plants...

Honestly, people post stuff like this time and time again. It's been
debunked so many times that a quick Google will probably give you what
you need to figure it out on your own.

> Getting fixated on air-gapping is unrealistically ignoring the other
> threats out there.

> I don't think anyone in this thread is 'fixated' on the idea of airgapping;

No, but it's clear that there are many designers out there who feel this
is the way to go. That's why it's a good idea to cover the ground anyways.

> but it's generally a good idea whenever possible, and as restrictive a
> communications policy as is possible is definitely called for, amongst
> all the other things one ought to be doing.

I think the part people forget about is that last part, "amongst all the
other things one ought to be doing."

> It's also important to note that it's often impossible to *completely*
> airgap things, these days, due to various interdependencies, admin
> requirements (mentioned before), and so forth; perhaps bastioning is a
> more apt term.

If it didn't turn into a situation where everyone's bastardizing^Wbastioning
your network in insecure ways.

... JG

Chuck, you're right that this should not happen- but the reason it should not happen is because you have a properly functioning stateful firewall, not because you're using NAT. If your firewall is working properly, then having public addresses behind it is no less secure than private. And if your firewall is not working properly, then having private addresses behind it is no more secure than public. In either case, NAT gains you nothing over what you'd have with a firewalled public-address subnet.

The fact that consumer cpe's typically do both nat and stateful firewalling does not mean that those functions are inseparable.

Gabriel,

This is not accurate.

First, many:1 NAT (sometimes also called PAT) is not separable from a
stateful firewall. You can build a stateful firewall without
many-to-one NAT but the reverse is not possible.

Second, while a security benefit from RFC 1918 addressing combined
with 1:1 NAT is dubious at best, the same is not true for the much
more commonly implemented many:1 NAT.

With RFC1918 plus many:1 NAT, most if not all functions of the
interior of the network are not addressable from far locations outside
the network, regardless of the correct or incorrect operation of the
security apparatus. This is an additional boundary which must be
bypassed in order to gain access to the network interior. While there
are a variety of techniques for circumventing this boundary no
combination of them guarantees successful breach. Hence it provides a
security benefit all on its own.
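
A minimal many:1 NAT model makes that point concrete (the addresses and table
layout here are invented for the example, not any particular implementation):
an outbound packet creates a mapping, and an inbound packet that matches no
mapping has no interior address it can even be delivered to.

    PUBLIC_IP = "203.0.113.1"
    nat_table = {}          # public_port -> (inside_ip, inside_port)
    next_port = [20000]

    def outbound(inside_ip, inside_port, dst_ip, dst_port):
        """Translate an inside-initiated packet and remember the mapping."""
        pub_port = next_port[0]
        next_port[0] += 1
        nat_table[pub_port] = (inside_ip, inside_port)
        return (PUBLIC_IP, pub_port, dst_ip, dst_port)

    def inbound(dst_port):
        """Deliver an outside packet only if a mapping already exists."""
        return nat_table.get(dst_port)   # None: nowhere on the inside to send it

    outbound("192.168.1.50", 51515, "198.51.100.25", 443)
    print(inbound(20000))   # ('192.168.1.50', 51515) - return traffic delivered
    print(inbound(3389))    # None - unsolicited, no interior destination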

You would not rely on NAT+RFC1918 alone to secure a network and
neither would I. However, that's far from meaning that the use of
RFC1918 is never (or even rarely) operative in a network's security
process.

Regards,
Bill Herrin

William Herrin wrote:

If your machine is addressed with a globally routable IP, a trivial
failure of your security apparatus leaves your machine addressable
from any other host in the entire world which wishes to send it

Isn't that the case with IPv6? That the IP is addressable from any host in the entire (IPv6) world? And isn't that considered a good thing?

I don't think that being addressable from anywhere is a security hole in and of itself. It's how you implement and (mis)configure your firewall and related things that is the (potential) security hole, whether the IP is world addressable or not.

with all your stuff. Yet when you forget to throw the deadbolt, it
does stop an intruder from simply turning the knob and wandering in.

Personally I prefer car analogies when it comes to explaining (complex) computer matters. ;-)

Greetings,
Jeroen

Well, this is not quite true, is it? If your firewall is not working and you have private space internally then you are a lot better off than if you have public space internally! So if your firewall is not working then having private space on one side is a hell of a lot more secure!

As somebody else mentioned on this thread, a NAT box with private space on one side fails closed.

By the same token, if your firewall fails closed rather than fails open, you're
more secure.

And this is totally overlooking the fact that the vast majority of *actual*
attacks these days are web-based drive-bys and similar things that most
firewalls are configured to pass through. Think about it - if a NAT'ed
firewall provides any real protection against real attacks, why are there still
so many zombied systems out there? I mean, Windows Firewall has been shipping
with inbound "default deny" since XP SP2 or so. How many years ago was that?
And what *real* security over and above that host-based firewall are you
getting from that appliance?

Or as Dr. Phil would say, "Firewalls - how is that working out for you?"