Ethical DDoS drone network

Super risky. This would be 99% a legal worry, and then some. Unless all the endpoints and the networks they cross sign off on it, the risk is beyond huge.

-jim

Refer to earlier posts.
Endpoints ('drones') would have to be legitimate endpoints, not drones on random boxes. That eliminates legal liability client-side.
If the traffic is non-abusive, then I don't see the risk for the network providers in the middle either.

If it's clearly established that the source (drones) and destination (target) are all 'opted in', and there's no 'collateral damage' (in bandwidth terms or otherwise, those being the ways in which I see other parties potentially being impacted), I don't think it's anywhere near as risky as you imply.

You'd have to be careful not to trip IDS or similar on all the networks you transit, to avoid impacting others in the event of some misfired responses...

What would be an example of a legitimate security purpose, other than perhaps drilling responses to illegitimate botnets?

Mark.

Since when do I need permission of "networks they cross" to send data from a machine I (legitimately) own to another machine I own? If this were an FTP or other data transfer, would I have any legal issues? And if not, how is that different from load testing using a random protocol?

Before anyone jumps up & down, I know that all networks reserve the right to filter, use TE, or otherwise alter traffic passing over their infrastructure to avoid damage to the whole. But if I want to (for instance) stream a few hundred Gbps and am paying transit for all bits sent or received, since when do I have any legal worries?

You want to 'attack' yourself, I do not see any problems. And I see lots of possible benefits. Hell, just figuring out which intermediate networks cannot handle the added load is useful information.

This can be done internally using various traffic-generation and exploit-testing tools (plenty of open-source and commercial ones available). No need to literally build a 'botnet'; it's more of a distributed test harness.

And it must be *kept* internal; using non-routable space is key, along with ensuring that application-layer effects like recursive DNS requests don't end up leaking and causing problems for others.
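One way to enforce that (a minimal sketch only; the lab host and discard port below are placeholders) is to have the test harness refuse to emit anything toward globally routable space:

import ipaddress
import socket

# Hypothetical guard for a home-grown traffic generator: only allow test
# destinations that resolve to non-routable (RFC 1918 / loopback / link-local) space.
def assert_internal(dest):
    addr = ipaddress.ip_address(socket.gethostbyname(dest))
    if not (addr.is_private or addr.is_loopback or addr.is_link_local):
        raise ValueError("%s is globally routable; refusing to send test traffic" % addr)
    return addr

def send_udp_burst(dest, port=9, count=1000, size=512):
    # Send a small burst of fixed-size UDP datagrams to an internal lab host
    # (port 9 is the discard service, handy as a harmless traffic sink).
    addr = assert_internal(dest)
    payload = b"\x00" * size
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        for _ in range(count):
            s.sendto(payload, (str(addr), port))

# Anything outside private space raises before a single packet leaves the box.
send_udp_burst("10.0.0.42")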

But before any testing is done on production systems (during maintenance windows scheduled for this type of testing, naturally), it should all be done on airgapped labs, first, IMHO.

And prior to any testing of this sort, it makes sense to review the architecture(s), configuration(s), et al. of the elements to be tested in order to ensure they incorporate the relevant BCPs, and then implement those which haven't yet been deployed, and *then* test.

In general, I've found that folks tend to get excited about things like launching simulated attacks, setting up honeypots, and the like, because it's viewed as 'cool' and fun; the reality is that in most cases, analyzing and hardening the infrastructure and all participating nodes/elements/apps/services is a far wiser use of time and resources, even though it isn't nearly as entertaining.

You want to 'attack' yourself, I do not see any problems. And I see lots of possible benefits.

This can be done internally using various traffic-generation and exploit-testing tools (plenty of open-source and commercial ones available). No need to literally build a 'botnet'; it's more of a distributed test harness.

And it must be *kept* internal; using non-routable space is key, along with ensuring that application-layer effects like recursive DNS requests don't end up leaking and causing problems for others.

We disagree.

I can think of several instances where it _must_ be external. For instance, as I said before, knowing which intermediate networks are incapable of handling the additional load is useful information.

But before any testing is done on production systems (during maintenance windows scheduled for this type of testing, naturally), it should all be done on airgapped labs, first, IMHO.

Without arguing that point (and there are lots of scenarios where that is not at all necessary, IMHO), it does not change the fact that external testing can be extremely useful after "air-gap" testing.

And prior to any testing of this sort, it makes sense to review the architecture(s), configuration(s), et al. of the elements to be tested in order to ensure they incorporate the relevant BCPs, and then implement those which haven't yet been deployed, and *then* test.

You live in a very structured world. Most people live in reality-land, where there are too many variables to control, and not only is it impossible to guarantee that everything involved is strictly BCP-compliant, but the opposite is almost certainly true.

Remember, systems do not work in isolation, and when you touch other networks, weird things happen.

In general, I've found that folks tend to get excited about things like launching simulated attacks, setting up honeypots, and the like, because it's viewed as 'cool' and fun; the reality is that in most cases, analyzing and hardening the infrastructure and all participating nodes/elements/apps/services is a far wiser use of time and resources, even though it isn't nearly as entertaining.

Here we agree: Entertainment has (should have?) nothing to do with it.

Fine, test it by simulation on your end or the transit end of the pipes. Do not transmit your test sh?t data across the 'net.

That solves that question? :-)

How do you propose a model is built for the simulation if you can't collect data from the real world?

This is not "sh?t data". Performance testing across networks is very real and happening now. The more knowledge I have of a path, the better decisions I can make about that path.
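To illustrate, much of that path knowledge can be gathered with entirely benign probes; a rough sketch along these lines (the endpoint name is a placeholder for a host you control) just times TCP connects:

import socket
import statistics
import time

# Placeholder endpoint -- substitute a host you actually control.
TARGET = ("test-endpoint.example.net", 443)
SAMPLES = 10

rtts = []
for _ in range(SAMPLES):
    start = time.monotonic()
    with socket.create_connection(TARGET, timeout=5):
        pass  # handshake completed; we only care about connect latency
    rtts.append((time.monotonic() - start) * 1000.0)
    time.sleep(1)  # pace the probes politely

print("median connect RTT over %d samples: %.1f ms" % (SAMPLES, statistics.median(rtts)))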

Kris

I can think of several instances where it _must_ be external. For instance, as I said before, knowing which intermediate networks are incapable of handling the additional load is useful information.

AUPs are a big issue here.

Without arguing that point (and there are lots of scenarios where that is not at all necessary, IMHO), it does not change the fact that external testing can be extremely useful after "air-gap" testing.

Agree completely.

You live in a very structured world.

The idea is to instantiate structure in order to reduce the chaos.

;>

Most people live in reality-land, where there are too many variables to control, and not only is it impossible to guarantee that everything involved is strictly BCP-compliant, but the opposite is almost certainly true.

Nothing's perfect, but one must do the basics before moving on to more advanced things. The low-hanging fruit, as it were (and of course, this is where scale becomes a major obstacle, in many cases; the fruit may be hanging low to the ground, but there can be a *lot* of it to pick).

Remember, systems do not work in isolation, and when you touch other networks, weird things happen.

One ought to get one's own house in order first, prior to looking at externalities. Agree with you 100% that they're important, but one must do what one can within one's own span of control, first.

Here we agree: Entertainment has (should have?) nothing to do with it.

Implementing BCPs is drudgery; because of this, it often receives short shrift.

I can think of several instances where it _must_ be external. For instance, as I said before, knowing which intermediate networks are incapable of handling the additional load is useful information.

But before any testing is done on production systems (during maintenance windows scheduled for this type of testing, naturally), it should all be done on airgapped labs, first, IMHO.

Without arguing that point (and there are lots of scenarios where that is not at all necessary, IMHO), it does not change the fact that external testing can be extremely useful after "air-gap" testing.

Fine, test it by simulation on your end or the transit end of the pipes. Do not transmit your test sh?t data across the 'net.

How do you propose a model is built for the simulation if you can't collect data from the real world?

This is not "sh?t data". Performance testing across networks is very real and happening now. The more knowledge I have of a path, the better decisions I can make about that path.

I am sorry for joking, I was sure we were talking about DDoS testing?

I've been called by more than one provider because I was "DDoS'ing" someone with traffic that someone requested. Strange how the word "DDoS" has morphed over time.

But back to your original point, how can you tell it is shit data? DDoSes frequently use valid requests or even full connections. If I send my web server port 80 SYNs, why would you complain?

Knowing whether the systems - internal _and_ external - can handle a certain load (and figuring out why not, then fixing it) is vital to many people / companies / applications. Despite the rhetoric here, it is simply not possible to "test" that in a lab. And I guarantee if you do not test it, there _will_ be unexpected problems when Bad Stuff happens.
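As a very rough, small-scale sketch of what testing it for real can look like, something like the following (the hostname is a placeholder; point it only at a server you own, during a window scheduled for it) simply counts how many concurrent connects succeed per round:

import asyncio
import time

# Placeholder target -- only point this at a server you own, during a
# maintenance window you have scheduled for exactly this kind of test.
HOST, PORT = "www.example-you-own.net", 80
CONCURRENCY = 200   # simultaneous connection attempts per round
ROUNDS = 5

async def one_connect():
    try:
        _reader, writer = await asyncio.open_connection(HOST, PORT)
        writer.close()
        await writer.wait_closed()
        return True
    except OSError:
        return False

async def main():
    for r in range(ROUNDS):
        start = time.monotonic()
        results = await asyncio.gather(*(one_connect() for _ in range(CONCURRENCY)))
        print("round %d: %d/%d connects succeeded in %.2fs"
              % (r, sum(results), CONCURRENCY, time.monotonic() - start))

asyncio.run(main())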

As mentioned before, Reality Land is not clean and structured.

I can think of several instances where it _must_ be external. For instance, as I said before, knowing which intermediate networks are incapable of handling the additional load is useful information.

AUPs are a big issue here.

No, they are not.

AUPs do not stop me from sending traffic from my host to my host across links I am paying for.

Without arguing that point (and there are lots of scenarios where that is not at all necessary, IMHO), it does not change the fact that external testing can be extremely useful after "air-gap" testing.

Agree completely.

You live in a very structured world.

The idea is to instantiate structure in order to reduce the chaos.

;>

Most people live in reality-land, where there are too many variables to control, and not only is it impossible to guarantee that everything involved is strictly BCP-compliant, but the opposite is almost certainly true.

Nothing's perfect, but one must do the basics before moving on to more advanced things. The low-hanging fruit, as it were (and of course, this is where scale becomes a major obstacle, in many cases; the fruit may be hanging low to the ground, but there can be a *lot* of it to pick).

Perhaps we are miscommunicating.

You seem to think I am saying people should test externally before they know whether their internal systems work. Of course that is a silly idea.

That does not invalidate the need for external testing. Nor does it guarantee everything will be "BCP compliant", especially since "everything" includes things outside your control.

Amen to that, brother.

Trust me, you definitely want to do your load testing at a 2AM (or other usually dead time) of your own choosing, when you have the ability to pull the switch on the test almost instantly if it gets out of hand.

The *last* thing you want is to get a surprise slashdotting of your web servers while the police have your entire site under lockdown. Been there, done that, it's not fun.

Date: Mon, 5 Jan 2009 06:53:49 -0500
From: Patrick W. Gilmore

But back to your original point, how can you tell it is shit data?

AFAIK, RFC 3514 is the only standards document that has addressed this. I have yet to see it implemented. ;-)
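Checking for it would be trivial, mind you; a toy check of the 'evil' bit in a raw IPv4 header (illustration only) might look like this:

import struct

EVIL_BIT = 0x8000   # the reserved bit in the IPv4 flags/fragment-offset field (RFC 3514)

def is_evil(ip_header):
    # Return True if the RFC 3514 'evil' bit is set in a raw IPv4 header.
    flags_frag, = struct.unpack_from("!H", ip_header, 6)   # bytes 6-7 of the header
    return bool(flags_frag & EVIL_BIT)

# Minimal 20-byte header with the evil bit set, purely for demonstration.
hdr = bytearray(20)
hdr[0] = 0x45                        # version 4, IHL 5
struct.pack_into("!H", hdr, 6, EVIL_BIT)
print(is_evil(bytes(hdr)))           # True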

Eddy

Date: Mon, 5 Jan 2009 15:54:50 +0800
From: Roland Dobbins

AUPs are a big issue here.

And AUPs [theoretically] set forth definitions.

Of course, there exist colo providers with "unlimited 10 Gbps bandwidth" whose AUPs read "do not use 'too much' bandwidth or we will get angry", thus introducing ambiguity regarding just _for what_ one is paying...

Perhaps "abuse" is best _operationally_ defined as "something that angers someone enough that it's at least sort of likely to cost you some money -- and maybe even a lot"?

Were the definition clear, I doubt there'd be such a long NANOG thread.
(Yes, I'm feeling optimistic today.)

Eddy

FWIW, I'm primarily concerned about testing PPS loads and not brute force bandwidth.

Best regards, Jeff

You could just troll people on IRC until you get DDOS'd. All the fun, none of the work!

Date: Mon, 5 Jan 2009 12:54:24 -0500
From: Jeffrey Lyon

FWIW, I'm primarily concerned about testing PPS loads and not brute force bandwidth.

Which underscores my point: <x> bps with minimally-sized packets is even higher pps than <x> bps with "normal"-sized packets, for any non-minimal value of "normal". Thus, the potential for breaking something that scales based on pps instead of bps _increases_ under such testing.
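Some back-of-the-envelope numbers make the point, assuming roughly 20 bytes of per-frame Ethernet overhead (preamble/SFD plus the minimum inter-frame gap):

# Assumed Ethernet overhead per frame: preamble + SFD (8 bytes) plus the
# minimum inter-frame gap (12 bytes), i.e. 20 extra bytes on the wire.
ETH_OVERHEAD = 8 + 12

def pps(bps, frame_bytes):
    return bps / ((frame_bytes + ETH_OVERHEAD) * 8)

for size in (64, 400, 1500):
    print("1 Gb/s at %4d-byte frames ~= %10.0f pps" % (size, pps(1e9, size)))

# Roughly 1,488,095, 297,619, and 82,237 pps respectively at 1 Gb/s.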

I've not [yet] seen an AUP that reads "customer shall maintain a minimum packet size of 400 bytes (combined IP header and payload) averaged over a moving one-hour window". ;-)

Eddy

Until you get hit at 8GB/s and then don't have a nice 'off' button..

-r

Ray Corbin wrote:

Until you get hit at 8GB/s and then don't have a nice 'off' button..

However, it would very accurately simulate a real-world attack where you don't get to have an "off" button.

~Seth

But I don't think his boss would be too happy when their network is up and down for days because he irked a script kiddie on IRC just to test their limits. :-)

-r