Ethical DDoS drone network

Say for instance one wanted to create an "ethical botnet," how would
this be done in a manner that is legal, non-abusive toward other
networks, and unquestionably used for legitimate internal security
purposes? How does your company approach this dilemma?

Our company for instance has always relied on outside attacks to spot
check our security and i'm beginning to think there may be a more user
friendly alternative.

Thoughts?

I would say roll your own binary, hardcoded to hit only one IP address, and
have it held on a law-enforcement-approved network under the supervision of
a qualified agent. My $0.02.
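
For what it's worth, here is a minimal sketch of what "hardcoded to hit only one IP address" could look like, assuming a plain UDP load generator in Python; the target address, port, rate, and duration are illustrative placeholders, not a recommendation of any particular tool:

```python
#!/usr/bin/env python3
"""Illustrative single-target UDP load generator (sketch only).

The target is fixed in the source so the deployed binary cannot be
repointed at an arbitrary victim. All values are placeholders.
"""
import socket
import time

TARGET_IP = "192.0.2.10"      # RFC 5737 documentation address; a host you own
TARGET_PORT = 9               # discard port
PACKET_SIZE = 512             # payload bytes per datagram
PACKETS_PER_SECOND = 100      # deliberately capped send rate
DURATION_SECONDS = 60         # hard stop, even if nobody aborts the test

def main() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * PACKET_SIZE
    interval = 1.0 / PACKETS_PER_SECOND
    deadline = time.monotonic() + DURATION_SECONDS
    while time.monotonic() < deadline:
        sock.sendto(payload, (TARGET_IP, TARGET_PORT))
        time.sleep(interval)  # crude rate limiting; a real tool would use a token bucket

if __name__ == "__main__":
    main()
```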

Well, for starters, you would have to own (in the traditional sense) all of
the hosts involved. :-)

- - ferg

hello,

http://mirror.informatik.uni-mannheim.de/pub/ccc/streamdump/saal3/Tag3-Saal3-Slot15%3A30--ID3000-hacking_into_botnets-Main-2008-12-29T15%3A30%3A04%2B0100.ogm

and

http://mirror.informatik.uni-mannheim.de/pub/ccc/streamdump/saal3/Tag3-Saal3-Slot16%3A45--ID3000-hacking_into_botnets-Pause-2008-12-29T18%3A30%3A01%2B0100.ogm

have fun!!!

Marc

Say for instance one wanted to create an "ethical botnet," how would
this be done in a manner that is legal, non-abusive toward other
networks, and unquestionably used for legitimate internal security
purposes? How does your company approach this dilemma?

The company I work for has not approached this particular dilemma yet.

I'm not sure what legitimate internal security purposes you're looking to fulfill, but I think you need to ask yourself a few questions first (not an all-inclusive list, but food for thought nonetheless):

1. What is the purpose of this legit botnet? In other words, what business objective does it achieve?

2. Do you have the people in-house to write the software, or would you be willing to take a chance on using something that exists 'in the wild'?
Depending on how security-minded your shop is, your corporate security folks and legal counsel might take a dim view toward using untrusted software on your internal network, especially if source code is not available. That particular monster can get out of control very quickly.

3. Do you have a sufficient number of machines that are controlled by you to populate this botnet and achieve your goals (see point 1)?

4. How will this botnet be isolated from the rest of your internal network, and would that isolation limit or even negate the botnet's usefulness?

5. If the answer to question 4 is "no isolation", how will you demonstrably control the botnet's propagation?

6. Depending on the answer to question 5, there might be regulatory compliance (HIPAA, FERPA, GLB, SOX, internal security/privacy policies, contractual obligations, etc...) issues to consider.

Our company for instance has always relied on outside attacks to spot
check our security and i'm beginning to think there may be a more user
friendly alternative.

Infection, even for ethical purposes, is still infection.

jms

As long as some part of the system (hosts/networks) from the bots to
the target is not under your control or prepared for this sort of
activity, you may not get a satisfactory answer on this. It's quite
likely these days that a third party playing the unwitting participant in
this botnet may find it objectionable.

Is creating and running a botnet the answer? What exactly are you
trying to protect against? DDoS?

There are various sorts of penetration tests and design reviews you
could potentially go through as an alternative to running a so-called
"ethical" botnet. Further information on what you're trying to protect
against may elicit some useful strategies.

John

Say for instance one wanted to create an "ethical botnet," how would
this be done in a manner that is legal, non-abusive toward other
networks, and unquestionably used for legitimate internal security
purposes? How does your company approach this dilemma?

As long as some part of the system (hosts/networks) from the bots to
the target is not under your control or prepared for this sort of
activity, you may not get a satisfactory answer on this. It's quite
likely these days that a third party playing the unwitting participant in
this botnet may find it objectionable.

Is creating and running a botnet the answer? What exactly are you
trying to protect against? DDoS?

There are various sorts of penetration tests and design reviews you
could potentially go through as an alternative to running a so-called
"ethical" botnet. Further information on what you're trying to protect
against may elicit some useful strategies.

A legal botnet is a distributed system you own.

A legal DDoS network doesn't exist. The question is framed wrong, no?

Agreed, Gadi. It wouldn't be an attack if it were ethical. Technically,
that would be "load testing" or "stress testing".
Might I suggest this to help?
http://www.opensourcetesting.org/performance.php

kind of depends on what the model is. a botnet for hire
  to "red-team" my network might be just the ticket.

--bill

You probably don't have to entirely "own" the distributed system for it to be legal. You could just control it with proper authorization.

A legal botnet is one whose deployment and operations don't break any laws in any of the relevant jurisdictions. The ways to achieve this are legal considerations, not technical considerations.

I'm not thinking this list is really a good place to ask a question about legality and get an answer you can rely on. You need to confer with your lawyers about how exactly your botnet can or can't be built and still be legal. This may depend on what country your botnet operates in, where you are, where your nodes are, etc.

But thoroughly control and restrain every possible factor that could ever make your botnet illegal, and the result should (imho) be legal...

This is not an exhaustive enumeration, but some situations that often make botnets illegal are:

(A) The botnet operator runs code on computers without authorization, or the botnet software exploits security vulnerabilities in victim computers to install itself without permission; i.e., the operator gains unauthorized access to a computer to deploy botnet nodes, or the software is a worm.

This problem is avoided if you take measures to guarantee you own every node, or guarantee you have full permission for every computer you could possibly run botnet software on, to the full extent of the botnet node's activities, and you ensure the botnet software never automatically "spreads itself" like a worm. This way, all access you gain to node PCs is authorized.

(B) Botnet node software conducts unauthorized activities after it is installed on the host PC, e.g. theft of services. Perhaps an authorized user of the PC did install the software, but they installed it for an entirely different purpose; the botnet node is hidden software, not noted in the product brochure or other prominent information about the software.

This problem is avoided by making sure the person giving permission to install the software is aware of the botnet node and all its expected activities before a botnet node can be brought up.

(C) Traffic generated by a botnet could itself be illegal. For example, traffic in excess of agreements you have in place, or in violation of your ISP's TOS, TOU, or AUP, may be questionable.

Ethically, you need permission from the owners of the source and destination networks the botnet generates traffic on, not just the source and destination computers.

For example, you have agreements for 10 gigs, but your botnet test accidentally sends 50 gigs towards your remote site, or one of the thousands of nodes saturates a shared link at its local site that belongs to someone else. An attempt to simulate a DDoS against your own network could inadvertently turn out to be a real DoS on someone else's network as well as yours, for example one of your providers' networks.

This is best avoided by maintaining tight control over any distributed stress testing, and massively distributed stress testing should be quarantined by all available means. The destination of any testing must be a computer you have permission to blow up, and the amount of traffic generated by any botnet node on its LAN needs to stay acceptable.
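
To illustrate the per-node cap, here is a minimal token-bucket sketch in Python; the 500 pps budget and burst size are arbitrary placeholders for whatever your LAN and agreements can actually tolerate:

```python
"""Per-node traffic cap (sketch): a token bucket so a single node can never
exceed an agreed packets-per-second budget on its local LAN, regardless of
what the controller asks for. The numbers below are placeholders."""
import time

class TokenBucket:
    def __init__(self, rate_pps: float, burst: int):
        self.rate = rate_pps            # tokens replenished per second
        self.capacity = burst           # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if one more packet may be sent right now."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Example: cap each node at 500 pps with a burst of 50, no matter the command.
bucket = TokenBucket(rate_pps=500, burst=50)
```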

Always retain rigid controls over any traffic generated, and take very strong measures to prevent an unauthorized third party from ever being able to make your nodes generate any traffic.

At a bare minimum: strong PKI (no MD5 or SHA-1) and digitally-signed, timestamped commands for starting a test, with some mechanism to prevent unauthorized creation or replay of commands. Plus multiple failsafe mechanisms to allow a test to be rapidly halted, e.g. all nodes ping a "control point" once every 30 seconds, and if two pings are dropped, the node stops in its tracks.

That way you can kill a runaway botnet by unplugging your control hosts.
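
As a rough, non-authoritative sketch of that signed-command-plus-failsafe idea (assuming Python with the `cryptography` package for Ed25519 signatures; the control-point URL, intervals, and command format are invented for illustration):

```python
"""Node-side sketch: run a test only on a signed, fresh command, and stop
dead if the control point goes quiet (dead-man's switch). Transport, key
distribution, and the actual traffic generator are deliberately out of scope."""
import json
import time
import urllib.request

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The private key would live only on the operator's control host; it is
# generated inline here purely so the sketch is self-contained.
_operator_key = Ed25519PrivateKey.generate()
OPERATOR_PUBKEY = _operator_key.public_key()   # nodes ship with only this half

CONTROL_POINT = "https://control.example.test/heartbeat"  # placeholder URL
PING_INTERVAL = 30        # seconds between heartbeats, per the suggestion above
MAX_MISSED_PINGS = 2      # two dropped pings -> the node stops in its tracks
MAX_COMMAND_AGE = 60      # seconds; reject stale start commands

def command_is_valid(raw: bytes, signature: bytes) -> bool:
    """Accept only operator-signed, recent commands (a real node would also
    track nonces to close the remaining replay window)."""
    try:
        OPERATOR_PUBKEY.verify(signature, raw)
    except InvalidSignature:
        return False
    command = json.loads(raw)
    return abs(time.time() - command["issued_at"]) < MAX_COMMAND_AGE

def run_test_with_watchdog(generate_burst) -> None:
    """Send traffic in short bursts, aborting if heartbeats stop succeeding."""
    missed = 0
    while missed < MAX_MISSED_PINGS:
        try:
            urllib.request.urlopen(CONTROL_POINT, timeout=5)
            missed = 0
        except OSError:
            missed += 1       # unplugging the control host therefore kills the test
        generate_burst()      # caller-supplied, rate-limited burst of test traffic
        time.sleep(PING_INTERVAL)
```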

There are some assumptions here. First, are you considering volumetric
DDoS attacks? Second, if you plan on harvesting wild bots and using them
to serve your purpose, then I don't see how this can be ethical unless
they are just clients from your own network, which makes it less distributed.
You would then have to have language in your AUP allowing you to do this.
Hmm, I really don't know what you would gain by this. Not knowing what
your network looks like... but assuming you're somewhat scaled, I would
think this could all be done in the lab.

Date: Mon, 5 Jan 2009 11:54:06 -0500
From: "BATTLES, TIMOTHY A (TIM), ATTLABS"

assuming you're somewhat scaled, I would think this could all be done
in the lab.

And end up with a network that works in the lab. :-)

- bw * delay
- effects of flow caching, where applicable
- jitter (esp. under load)
- packet dups and loss (esp. under load)
- packet reordering and associated side-effects
- upstream/sidestream throughput (esp. under load)

No, reality is far more complex. Some things do not lend themselves to
_a priori_ models, nor even "TFAR" generalizations.

Eddy

True, real-world events differ, but so do denial of service attacks:
distribution in the network, PPS, BPS, packet type, packet size, etc.,
etc., etc. So really I don't get the point either in staging a real-life
do-it-yourself test. You put pieces of your network in jeopardy night
after night during maintenance windows to determine what? That you're
vulnerable to DDoS? We all know we are; it's just a question of what type
and how much, right?

So we identify our choke points. We all know them. We look at the vendor
data on how much PPS it can handle and quickly dismiss that. So what's
the next step? Take the device that IS the choke point and pump it full
of all different flavors until it fails. No harm, no foul, and now we have
data regarding how much and what takes the device out. If the network is
scaled, well, we now know that we have x amount of devices that can fail
if the DDoS reaches X PPS with Y packet types.

What I don't get is what you would be trying to accomplish doing this on
a production network. Worst case is you break something; best case is you
don't. So if the best-case scenario is reached, what have you learned?
Nothing! So what do you do next, ramp it up? Seems silly.
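
For what it's worth, a hedged sketch of that kind of lab sweep, assuming Python with Scapy and a device under test you own; the addresses, sizes, and rates are placeholders, the health check is a stub you would wire to your own monitoring, and a userspace generator like this won't reach serious PPS anyway (real rigs use dedicated packet generators):

```python
"""Lab-only sketch: sweep packet sizes and rates against a device under test
you own, and record where it starts to fail. Assumes Scapy is installed and
that device_is_healthy() is wired to your own monitoring (SNMP poll, probe
through the device, etc.)."""
from scapy.all import IP, UDP, Raw, send   # pip install scapy

DUT_IP = "192.0.2.1"            # device under test, in the lab (placeholder)
PACKET_SIZES = [64, 512, 1400]  # payload bytes per packet
RATES_PPS = [1_000, 10_000, 50_000]

def device_is_healthy() -> bool:
    raise NotImplementedError("poll the DUT here")

def sweep() -> None:
    for size in PACKET_SIZES:
        for pps in RATES_PPS:
            pkt = IP(dst=DUT_IP) / UDP(dport=9) / Raw(load=b"x" * size)
            # Send one second's worth of traffic at this rate, then check the DUT.
            send(pkt, count=pps, inter=1.0 / pps, verbose=False)
            if not device_is_healthy():
                print(f"DUT degraded at {pps} pps with {size}-byte payloads")
                return

if __name__ == "__main__":
    sweep()
```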

BATTLES, TIMOTHY A (TIM), ATTLABS wrote:

True, real-world events differ, but so do denial of service attacks:
distribution in the network, PPS, BPS, packet type, packet size, etc.,
etc., etc. So really I don't get the point either in staging a real-life
do-it-yourself test. You put pieces of your network in jeopardy night
after night during maintenance windows to determine what? That you're
vulnerable to DDoS? We all know we are; it's just a question of what type
and how much, right? So we identify our choke points.

<snip>

What I don't get is what you would be trying to accomplish doing this on
a production network. Worst case is you break something; best case is you
don't. So if the best-case scenario is reached, what have you learned?
Nothing! So what do you do next, ramp it up? Seems silly.

I'll personally agree with you, though there are fringe cases. For example, one or more of your peers might falter before you do. While I'm sure they won't enjoy you hurting their other customers, knowing that your peer's router is going to crater before your expensive piece of hardware does is usually good knowledge. Since it's controlled, you can minimize the damage of testing that fact.

Another thing to test is automatic mitigation measures and how well they perform. This may or may not be useful in a closed environment, though in a closed environment they'll definitely need to mirror the production environment, depending on what criteria the automatic measures key on.

A non-forging botnet which sends packets (valid or malformed) to an accepting recipient is strictly just another Internet application, with a harm profile similar to some P2P apps. IP forging, of course, could cause unintended blowback, which could have severe legal ramifications.

That being said, I'd quit calling it a botnet. I'd call it a distributed application that stress tests DDoS protection measures, and it's advisable to let your direct peers know when you plan to run it. They might even be interested in monitoring their equipment (or tell you up front that you'll crater their equipment).

Jack

In my opinion, the real thing you can puzzle out of this kind of testing is the occasional hidden dependency. I've seen ultra-robust servers fail because a performance monitoring application living on them was timing out in a remote query, and I've also seen devices fail well below their expected load because they were using multiple layers of encapsulation (IP over MPLS over IP over Ethernet over MPLS over Frame-Relay ...) and one of the hidden middle-layers was badly optimized.

The advantage of performing this DDoS-style load testing on yourself is that *you can turn it off once you experience the failure* and then go figure out why it broke when it did. This is a lot more pleasant than trying to figure it out at 2:30 in the morning with insufficient coffee.
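
A small controller-side sketch of that "stop at the first failure" loop, assuming hypothetical hooks set_load_pps() and service_responds() wired to your own (rate-limited) test nodes and health probes:

```python
"""Controller-side sketch: ramp offered load in steps and stop at the first
observed failure, so the failure point can be investigated calmly rather than
at 2:30 in the morning. Both hooks below are hypothetical placeholders."""
import time

def set_load_pps(pps: int) -> None:
    """Hypothetical hook: tell the rate-limited test nodes what to send."""
    raise NotImplementedError

def service_responds() -> bool:
    """Hypothetical hook: probe the service or path you actually care about."""
    raise NotImplementedError

def ramp_until_failure(start=1_000, step=1_000, ceiling=100_000, settle=30):
    pps = start
    while pps <= ceiling:
        set_load_pps(pps)
        time.sleep(settle)        # let queues, caches, and flow tables settle
        if not service_responds():
            set_load_pps(0)       # turn it off the moment failure is observed
            return pps            # the interesting number to go investigate
        pps += step
    set_load_pps(0)
    return None                   # no failure observed up to the ceiling
```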

David Barak
Need Geek Rock? Try The Franchise:
http://www.listentothefranchise.com

This is the AUP danger to which I was referring earlier. Also, note that the miscreants will attack intermediate systems such as routers they identify via tracerouting from multiple points to the victim - there's no way to test that externally without violating AUPs and/or various criminal statutes in multiple jurisdictions.

And then there are managed-CPE and hosting scenarios, which complicate matters further.

Tim's comments about understanding the performance envelopes of all the system/infrastructure elements are spot-on - that's a primary input into design criteria (or should be). With this knowledge in hand, one can test the most important things internally.

But prior to testing, one should ensure that the architecture and the element configurations are hardened with all the relevant BCPs, and scaled for capacity. The main purpose of the testing would be to verify correct implementation and ensure all the failure modes have been accounted for and ameliorated to the degree possible, and also as an opsec drill.

What I've seen over and over again is a desire to test because it's 'cool', but no desire to spend the time in the design and implementation (or re-implementation) phases to ensure that things are hardened in the first place, nor to spell out security policies and procedures, train, etc.

Actual *security* (as opposed to checklisting) consists of attention to lots of tedious details, drudgery and scut-work, involving the coordination of multiple groups and the attendant politics. It isn't 'sexy', it isn't 'cool', it isn't 'fun', but it pays off at 4AM on a holiday weekend.

Testing should become a priority only after one has done everything one knows to do within one's span of control, IMHO - and I've yet to run across this happy circumstance in any organization that has asked me about this kind of testing, FWIW.

Yes - but if your lab accurately reflects production, you can discover this kind of thing in the lab (and one ought to already have a lab setup which reflects production for many reasons having nothing to do with security).

I agree - having a lab of that type is absolutely ideal. However, the ideal and the real diverge tremendously in large and mid-size enterprise networks, because most enterprises just don't have enough lab equipment to adequately model all of the possible scenarios, and including the cost of a lab in the rollout immediately doubles all capital expenditures. The types of problems that the ultra-large DoS can ferret out are the kind which *don't* show up in anything smaller than a 1:1 or 1:2 scale model.

Consider for a moment a large retail chain, with several hundred or a couple thousand locations. How big a lab should they have before deciding to roll out a new network something-or-other? Should their lab be 1:10 scale? A more realistic figure is that they'll consider themselves lucky to be between 1:50 and 1:100, and that lab is probably understaffed at best. Having a dedicated lab manager is often seen as an expensive luxury, and many businesses don't have the margin to support it.

David Barak
Need Geek Rock? Try The Franchise:
http://www.listentothefranchise.com

In my experience, once one has an understanding of the performance envelopes and has built a lab which contains examples of the functional elements of the system (network infrastructure, servers, apps, databases, clients, et. al.), one can extrapolate pretty accurately well out to orders of magnitude.

The problem is that many organizations don't do the above prior to freezing the design and initiating deployment.

Roland Dobbins wrote:

In my experience, once one has an understanding of the performance envelopes and has built a lab which contains examples of the functional elements of the system (network infrastructure, servers, apps, databases, clients, et. al.), one can extrapolate pretty accurately well out to orders of magnitude.

The problem is that many organizations don't do the above prior to freezing the design and initiating deployment.

Sadly, I think money and time have a lot to do with this. Technology is a moving target, and everyone is constantly struggling to keep up while maintaining performance and security.

I've seen this from software developers, too. I'd say I've seen more outages caused by a simple command typed into a router CLI crashing the router than by DDoS traffic. Perhaps I've been lucky with the latter.

Jack