D/DoS mitigation hardware/software needed.

Dobbins, Roland wrote:

Firewalls do have their place in DDoS mitigation scenarios, but if used as
the "ultimate" solution you're asking for trouble.

In my experience, their role is to fall over and die, without
exception.

That hasn't been my experience but then I'm not selling anything that
might have a lower ROI than firewalls, in small to mid-sized
installations.

I can't imagine what possible use a stateful firewall has being
placed in front of servers under normal conditions, much less
during a DDoS attack; it just doesn't make sense.

Firewalls are not designed to mitigate large scale DDoS, unlike Arbors,
but they do a damn good job of mitigating small scale attacks of all
kinds including DDoS. Firewalls actually do a better job for small to
medium sites whereas you need an Arbor-like solution for large scale
server farms.

Firewalls do a good job of protecting servers, when properly configured,
because they are designed exclusively for the task. Their CAM tables,
realtime ASICs and low latencies are very much unlike the CPU-driven,
interrupt-bound hardware and kernel-locking, multi-tasking software on a
typical web server. IME it is a rare firewall that doesn't fail long,
long after (that's after, not before) the hosts behind it would have
otherwise gone belly-up.

Rebooting a hosed firewall is also considerably easier than repairing
corrupt database tables, cleaning full log partitions, identifying
zombie processes, and closing their open file handles.

Perhaps a rhetorical question but, does systems administration or
operations staff agree with netop's assertion that they 'don't need no
stinking firewall'?

Roger Marquis

That hasn't been my experience but then I'm not selling anything that might have a lower ROI than firewalls, in small to mid-sized installations.

I loudly evinced this position when I worked for the world's largest firewall vendor, so that dog won't hunt, sorry.

Think about it; firewalls go down under DDoS *much more quickly than the hosts themselves*; Arbor and other vendors' IDMSes protect many, many firewalls unwisely deployed in front of servers, worldwide. Were I that sort of person (and I'm not, ask anyone who knows me), it would be in my naked commercial interest to *promote* firewall deployments, so that *more* sites will go down more easily and require IDMSes, heh.

Firewalls are not designed to mitigate large scale DDoS, unlike Arbors, but they do a damn good job of mitigating small scale attacks of all kinds including DDoS.

Not been my experience at all - quite the opposite.

Firewalls actually do a better job for small to medium sites whereas you need an Arbor-like solution for large scale
server farms.

No, S/RTBH and/or flow-spec are a much better answer for sites which don't need IDMS, read the thread. And they essentially cost nothing from a CAPEX perspective, and little from an OPEX perspective, as they leverage the existing network infrastructure.
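For readers unfamiliar with the technique, a minimal sketch of the destination-based S/RTBH logic referred to above, in Python. The community value 65535:666 is the RFC 7999 well-known BLACKHOLE community; the discard next-hop address and the announcement structure are illustrative assumptions, since a real deployment would originate this route from a trigger router via BGP to the existing edge infrastructure.

```python
# Sketch of a destination-based Remotely Triggered Black Hole (S/RTBH)
# trigger route. Assumption: edge routers statically route the discard
# next-hop to Null0, so traffic to the victim is dropped at the edge
# instead of piling up state on a firewall.

BLACKHOLE_COMMUNITY = "65535:666"  # RFC 7999 well-known BLACKHOLE community
DISCARD_NEXT_HOP = "192.0.2.1"     # illustrative address routed to Null0/discard

def rtbh_announcement(victim_ip: str) -> dict:
    """Build a host-route announcement that causes edge routers to drop
    all traffic destined for the victim."""
    return {
        "prefix": f"{victim_ip}/32",
        "next_hop": DISCARD_NEXT_HOP,
        "communities": [BLACKHOLE_COMMUNITY],
        "no_export": True,  # keep the blackhole within your own AS
    }
```

The CAPEX point follows from the sketch: the drop happens on routers you already own, using a routing-protocol feature they already support.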

Firewalls do a good job of protecting servers, when properly configured, because they are designed exclusively for the task.

No, they don't, and no, they aren't.

Their CAM tables, realtime ASICs and low latencies are very much unlike the CPU-driven,
interrupt-bound hardware and kernel-locking, multi-tasking software on a
typical web server. IME it is a rare firewall that doesn't fail long,
long after (that's after, not before) the hosts behind it would have
otherwise gone belly-up.

Completely incorrect on all counts. Sales propaganda regurgitated as gospel.

Rebooting a hosed firewall is also considerably easier than repairing
corrupt database tables, cleaning full log partitions, identifying
zombie processes, and closing their open file handles.

Properly-designed server installations don't have these problems. Firewalls don't help, either - they just go down.

Perhaps a rhetorical question but, does systems administration or operations staff agree with netop's assertion that they 'don't need no stinking firewall'?

I've been a sysadmin, thanks. How about you?

You can assert that the sun rises in the West all you like, but that doesn't make it true. All the assertions you've made above are 100% incorrect, as borne out by the real-world operational experiences of multiple people who've commented on this thread, not just me.

I've worked inside the sausage factory, FYI, and am quite familiar with how modern firewalls function, what they can do, and their limitations. And they've no place in front of servers, period.

;>

Firewalls are not designed to mitigate large scale DDoS,

Generally speaking, if it didn't bring the firewall to its knees, it
wasn't a DoS. It was just sort of an annoying attempt at a DoS.

I think that's more or less the definition of a DoS: one that exploits
the resource limitations of the firewall to deny service to everything
behind it. The ultimate DoS, though, is simply filling the pipe with
traffic from "legitimate" data transfer requests. Nothing you do is
going to mitigate that, because to stop it you would have to DoS
yourself.
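The pipe-filling point is just arithmetic. A back-of-envelope sketch, with every figure an illustrative assumption rather than anything from the thread:

```python
# Illustrative arithmetic for "filling the pipe with legitimate requests".
# All figures are assumptions chosen for the sketch.

requests_per_sec = 1_000              # legitimate-looking download requests
file_size_bytes = 10 * 1024**2        # a 10 MB client download
uplink_bits_per_sec = 1_000_000_000   # a 1 Gb/s outbound pipe

# Demand if every request were actually served at once
demanded_bits_per_sec = requests_per_sec * file_size_bytes * 8

print(demanded_bits_per_sec / uplink_bits_per_sec)  # ~83.9x oversubscribed
```

No packet is malformed and no flow is anomalous; the only "fix" at that point is refusing service, i.e., DoSing yourself.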

Imagine thousands of requests per second from all around the internet
for a legitimate URL. How do you use a firewall to separate the wheat
from the chaff? So let's say you have some client software that you want
people to download. Suddenly you are getting more download requests than
you can handle. Nobody is flooding you with syn or icmp packets. They
are sending a single packet (a legitimate URL) that results in you
sending thousands of packets to real IP addresses that are simply
copying the traffic to what amounts to /dev/null. Now when your
download server gets slow, things get worse because connections begin to
take longer to clear. The kernel on the web server is able to handle
the tcp/ip setup fairly quickly, but getting the file actually shipped
out takes time. As connections build up on the firewall, it finally
runs out of RAM storing all those NAT translations and connection-state
entries.
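The exhaustion described above is Little's law in action: steady-state table occupancy is arrival rate times average connection lifetime. A sketch with made-up but plausible numbers:

```python
# Back-of-envelope sketch of firewall state-table exhaustion.
# All numbers are illustrative assumptions, not measurements.

new_conns_per_sec = 3_000        # "legitimate" download requests arriving
avg_conn_lifetime_sec = 120      # transfers hold state longer as the server bogs down
state_table_capacity = 250_000   # firewall's maximum concurrent state entries

# Little's law: steady-state entries = arrival rate * average lifetime
steady_state_entries = new_conns_per_sec * avg_conn_lifetime_sec

print(steady_state_entries)                          # 360000
print(steady_state_entries > state_table_capacity)   # True: table overflows
```

Note that slowing the server *raises* the average lifetime, which raises occupancy further, which is why the failure mode feeds on itself.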

Now you start noticing that services not under attack are starting to
slow down because the firewall has to sort through an increasingly large
connection table when doing stateful inspection of traffic going to
other services. All the while, there really isn't anything the firewall
can do to mitigate the traffic because it is all correct and
"legitimate".

Basically you are being Slashdotted or experiencing the Drudge Effect
but in this case you are being botnetted.

If you have the server capacity to keep up, now your outbound pipe to
the Internet is filling up. You are dropping packets, TCP/IP connections
begin to back off, and connections back up even more. At some point the
firewall just gives up by failing over to the secondary, which then
promptly fails back to the primary, and you bounce back and forth in
that state for a while until it finally hangs someplace and the whole
thing is stuck. And during the entire incident there was no "illegal
traffic" that your firewall could have done a thing to block.

Oh, and rate limiting connections isn't going to fix things either,
unless you can do it on a per-URL basis. The rate of requests for
/really-big-file.tgz that clogs your system may be very different from
the rate of requests for /somewhat-smaller-file.tgz or /index.html.
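The per-URL point above can be sketched as a token bucket per path, with a different rate for each. Everything here is an assumption for illustration; a real deployment would do this in the web server or a reverse proxy, and the rates would come from measuring what the backend can actually serve.

```python
import time

# Hedged sketch of per-URL rate limiting: each URL gets its own token
# bucket, so /really-big-file.tgz can be throttled at a different rate
# than /index.html. Rates and bursts are made-up illustrative values.

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec   # tokens replenished per second
        self.burst = burst         # maximum bucket size
        self.tokens = burst        # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, then spend one token if available.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# The expensive tarball gets a tight limit; cheap pages a loose one.
limits = {
    "/really-big-file.tgz": TokenBucket(rate_per_sec=2, burst=5),
    "/index.html": TokenBucket(rate_per_sec=500, burst=1000),
}

def allow_request(url: str) -> bool:
    bucket = limits.get(url)
    return bucket.allow() if bucket else True  # no bucket: no limit
```

A connection-level limiter can't make this distinction, which is exactly the objection in the paragraph above: one request for the big tarball costs you thousands of packets, one for /index.html costs a handful.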