I don't need no stinking firewall!

All,
    This thread certainly has been educational, and it has changed my
perception of what an appropriate outward-facing architecture should be.
But I seldom have the luxury of designing this from scratch, and the
networks I administer are small businesses.
My question is: at what size connection does a state table become
vulnerable? Are we talking 1Mb DSLs with a SOHO firewall, or, as I
suspect, something on a larger scale?
I know there are variables; I am just looking for a rule of thumb.
I would not want to recommend a change if it is not warranted, but as
fatter and fatter pipes become available, at what point would a change
be warranted?

For small pipes, a simple DoS is trivial enough to jam up the works
without worrying about the state table size.

However, that doesn't mean it isn't smart to get a handle on it.

The biggest question may be pipe size: this variable controls the total
possible packets per second (PPS) that can be tossed at the firewall.

If you consider a technology such as 10base-T Ethernet, that's 10 megabits
per second. When you add up the inter-frame gap, MAC preamble, destination
and source addresses, the EtherType, the minimum payload, and the CRC, the
smallest Ethernet frame occupies 84 bytes on the wire. 10M/(84*8) = 14880
frames per second.

Now let's say you want to block a SYN flood from hitting your server
(nobody need tell me that there are obvious problems with this; this
is an educational exercise). If your firewall is configured to expire
state table entries for partially opened connections after 15 seconds,
the speed of Ethernet combined with those 15 seconds means you need a
table some 223,200 entries large (14880 frames/sec x 15 sec).
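
In code, with the numbers from above (this is just the arithmetic, not
a recommendation):

    #include <stdio.h>

    int main(void)
    {
        /* Smallest Ethernet frame on the wire: 12-byte inter-frame gap
         * + 8-byte preamble + 14-byte MAC header + 46-byte minimum
         * payload + 4-byte CRC = 84 bytes. */
        long link_bps   = 10000000;           /* 10base-T               */
        long frame_bits = 84 * 8;
        long timeout_s  = 15;                 /* half-open entry expiry */

        long pps     = link_bps / frame_bits; /* 14880 frames/sec       */
        long entries = pps * timeout_s;       /* 223,200 state entries  */

        printf("max frames/sec:   %ld\n", pps);
        printf("entries required: %ld\n", entries);
        return 0;
    }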

But wait. If they're blowing 14880 frames per second at you, that
Ethernet's quite full. You're already getting DoS'ed by capacity.

Further, what happens when the attack moves to simply fully opening
connections? Your state table is tiny for that...

I know this is NANOG, and it's network-centric. However, fundamentally,
a stateful firewall fuzzes the boundary between network and server. It
is taking on some duties that have typically been the responsibility of
the server. So I'm going to go off on a tangent and say that it may be
useful to consider the state of the art in server technology.

A good UNIX implementation (OpenBSD, FreeBSD, maybe Linux :wink: ) will be
hardened and further-hardenable against these sorts of attacks. The
server *already* has to do things such as tracking SYNs, except
that with SYN cookies it no longer has to: it can issue a cookie back
and then simply forget about the half-open connection. If the cookie
is returned in the ACK, then a connection is established. A proper
implementation of this means that a server using SYN cookies has an
effectively infinite SYN queue; the packets on the network itself are
the queue. The technique works and scales at 1Mbps as well as it does
at 10Gbps.
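
If you've never seen the trick, here's a deliberately simplified sketch
of the idea. The hash and the key here are stand-ins (a real
implementation, such as the one in Linux, uses a keyed cryptographic
hash and also encodes the MSS and a coarse timestamp in the cookie):

    #include <stdint.h>

    /* Stand-in for a per-boot random key. */
    static const uint64_t SECRET = 0x5eed5eed5eed5eedULL;

    /* The SYN-ACK's initial sequence number is a keyed hash of the
     * connection 4-tuple, so nothing is stored per SYN. */
    static uint32_t syn_cookie(uint32_t saddr, uint32_t daddr,
                               uint16_t sport, uint16_t dport)
    {
        uint64_t h = SECRET;                /* FNV-style mixing;  */
        h = (h ^ saddr) * 1099511628211ULL; /* illustration only  */
        h = (h ^ daddr) * 1099511628211ULL;
        h = (h ^ sport) * 1099511628211ULL;
        h = (h ^ dport) * 1099511628211ULL;
        return (uint32_t)(h ^ (h >> 32));
    }

    /* On the final ACK, the handshake is genuine iff ack-1 recomputes
     * to the cookie we (statelessly) handed out. */
    static int ack_completes_handshake(uint32_t saddr, uint32_t daddr,
                                       uint16_t sport, uint16_t dport,
                                       uint32_t ack)
    {
        return ack - 1 == syn_cookie(saddr, daddr, sport, dport);
    }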

Putting a stateful firewall in front of such a server would be dumb;
the server is completely capable of coping with the superfluous SYNs
in a much more competent manner than the firewall.

I won't go into this in more detail; look at what the IRC networks
are doing to protect themselves. They're commonly beaten up and have
learned most of the best defense tricks around.

A stateless firewall that can implement filtering policies in silicon
is absolutely a fantastic thing to have, especially when faced with a
DoS for which you can write rules. Put your servers behind one heck of
a good stateless firewall, and run a well-tuned OS. You'll be a lot
more DoS-resistant.

... JG

The trouble with blanket statements about "all stateful firewalls" and
"all servers" is that there are lots of different firewall and server
platforms. Stateful firewalls can implement SYN cookies, and at least
a couple do. Firewalls do not need to build a state entry for
partial TCP sessions; there are a few different things that can be
done, such as the firewall answering on behalf of the server (using
SYN cookies) and negotiating the connection with the server after the
final ACK.

As a result, spoofed TCP packets don't consume state; the attacker is
forced to use multiple IPs at which they can actually _receive_ traffic.
Next?

Spoofed UDP is a much bigger problem, because there is no connection
establishment. And it's probably not sane to put certain
public-facing UDP services such as large public DNS service IPs
(e.g. 8.8.8.8) behind most forms of stateful filter.

But that's not the average case, by any means; most servers are not
DNS servers.
Servers consume state just like firewalls do....

E.g., a public FTP server that forks a process for each connection
goes down in a connection flood, when kernel process slots are used
up, long before the firewall does.

Servers running a robust OS, completely and correctly configured to
protect themselves (resource limits, etc.), with no Windows OSes and
no unwanted open ports: that is a wholly unwarranted assumption for
real-world server environments.

In the best cases it does hold up (to a great extent).
In other cases, it's an operational fantasy; it would be nice if that
could be relied upon....

The firewall capacity for doing this can be easily overwhelmed; and again, well-formed attack traffic can simply 'crowd out' good traffic. The other drawbacks of the stateful firewall further outweigh even this negligible benefit.

Fronting one's Web server farms/load-balancers with a tier of transparent reverse-proxy caches is a better way to scale TCP connection capacity, on top of the myriad other benefits described earlier in this thread.

James,

That's called a proxy or sometimes an application-layer gateway. The
problem with proxies, aside from the extra computing overhead, is that
they radically change the failure semantics of a TCP connection. The
sender believes itself connected and has transferred the first window
worth of data (which may be all the data he needs to transmit) while
the firewall is still trying to connect to the recipient. Often the
proxy isn't clever enough to send an RST instead of a FIN, so the
remote application thinks the transfer finished successfully. Even if it does
send an RST, most application developers aren't well enough versed in
sockets programming to block on the shutdown and check the success
status, and even if they do they may report a different error than the
basic "failed to connect."

Proxies can be a useful tool but they should be used with caution and
only when you're absolutely sure that the difference in TCP failure
semantics is not important to the protocol you're proxying.

Regards,
Bill Herrin

Sorry, I got that wrong. shutdown() will succeed without waiting for a
FIN or RST from the remote end. So will close(). Instead, after the
shutdown() you then have to block on read(), waiting for either 0
bytes read or an error. 0 bytes = FIN, error = RST.

Unfortunately, few sockets programmers realize that they have to do
this to catch that final possible error. They send what they expect to
send and, if they don't expect to receive anything back, they shut down
and close the socket without waiting.
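
In code, the dance looks roughly like this (error handling trimmed to
the essentials; fd is a connected TCP socket):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>

    /* Half-close our side, then block on read() until the peer answers
     * with a FIN (0 bytes) or an RST (read fails, e.g. ECONNRESET). */
    int close_and_check(int fd)
    {
        char buf[512];
        ssize_t n;

        shutdown(fd, SHUT_WR);       /* send our FIN; returns at once  */

        while ((n = read(fd, buf, sizeof buf)) > 0)
            ;                        /* drain anything still in flight */

        if (n < 0)                   /* RST: the far end aborted       */
            fprintf(stderr, "close failed: %s\n", strerror(errno));

        close(fd);
        return n == 0 ? 0 : -1;      /* 0 bytes read == clean FIN      */
    }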

Regards,
Bill Herrin

>> there are a few different things that can be
>> done, such as the firewall answering on behalf of the server (using
>> SYN cookies) and negotiating the connection with the server after the
>> final ACK.

> That's called a proxy or sometimes an application-layer gateway. [...]

I'm not really referring to ALGs, but to Layer 3 proxies that
are application-agnostic -- they simply manipulate the connection
setup and then step 'out of the way', performing only mechanical
translation of SEQ numbers / port numbers. However, application
layer gateways are still stateful firewalls.
Content switches and load balancers that track connections and
allow access control are also stateful firewalls.

They are widely used, for many different kinds of applications.
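
For the curious, a rough sketch of the splice such a device performs
once the handshake it answered (with its own ISN) has been replayed to
the real server. The names are invented, and a real implementation does
this per flow in the forwarding path:

    #include <stdint.h>

    /* After the handshake, the device only shifts sequence/ack numbers
     * by the fixed difference between the two ISNs; no payload
     * inspection. Unsigned wrap-around gives the mod-2^32 arithmetic
     * TCP expects. */
    struct splice {
        uint32_t seq_delta;          /* server_isn - proxy_isn */
    };

    /* client -> server: only the ACK field references our ISN */
    static void rewrite_to_server(const struct splice *s, uint32_t *ack)
    {
        *ack += s->seq_delta;
    }

    /* server -> client: shift the server's SEQ back */
    static void rewrite_to_client(const struct splice *s, uint32_t *seq)
    {
        *seq -= s->seq_delta;
    }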

> they radically change the failure semantics of a TCP connection. The
> sender believes itself connected and has transferred the first window
> worth of data (which may be all the data he needs to transmit) while [...]

And if the initial window size is 0?

> send an RST, most application developers aren't well enough versed in
> sockets programming to block on the shutdown and check the success
> status, and even if they do they may report a different error than the
> basic "failed to connect."

I agree that could be an issue. The proxy might do the wrong
thing, or the application might do the wrong thing.

> Proxies can be a useful tool but they should be used with caution and [...]

I agree they should be used with caution.
I don't agree with "You never need a proxy in front of a server,
it's only there to fail".

Again, reverse proxy *caches* are extremely useful in front of Web farms. Pure proxying makes no sense.