Are botnets relevant to NANOG?

Not effective against botnets.

Think of it this way: thousands of compromised hosts (zombies),
distributed to the four corners of the Internet across hundreds (if
not thousands) of ASes, all receiving their instructions via IRC from
a C&C server somewhere, one that probably also keeps moving thanks to
dynamic DNS, pump-and-dump domain registrations, or any of the other
various ways to continually move the C&C.

Simply going after (what may _seem_to_be_) the last-hop router is
like swinging a stick at a piñata you can't actually reach while
blind-folded. :-)

- ferg

For this community, would trend analysis (the best of who is getting
better and the worst of who is getting worse) plus some baseline
counts be enough for this group to understand whether the problem is
getting better?

I am suggesting that NANOG is an appropriate forum to publish general stats on who the problem is getting better/worse for and possibly why things got better/worse.

I'd like to see a general head nod that there is a problem, and then
develop some stats so we can understand whether it is getting better
or worse.

-rick

Fergie wrote:

We all know there is a problem. Botnets/zombies et al. are the
number one threat to the infrastructure, and the attacks may be
deliberate or they may be a distraction. The motive is unclear
because attacking, for example, the root servers is an effort without
an obvious economic incentive, at least none that I can see. It
doesn't make a lot of sense, because the conventional wisdom before
the open-recursive attacks was that it was in the miscreants' best
interest not to attack the infrastructure, so that it stayed
reachable and could facilitate their "goals".

The DA report went through a large thread (or several) about posting
statistics here, and I'm not sure why yours will be any better rather
than just another set of statistics which further de-sensitizes
everyone to the problem. I mean, it looks like, all of a sudden, the
DNS community has a big problem with these open-recursive attacks, ran
off privately, and has now determined that it's a feature, not a bug,
and, well, heck, operators are now responsible. I am not saying that
is the answer, but I am saying I am reading the OARC comments and this
is sort of what it feels like. As much as Gadi seems to appropriate
others' credit, he and Randy Vaughn have been doing this work for some
time and deserve some credit, so I'd say "have you spoken to them
about how to make their report better?" instead of "create more".

-M<

> I am saying I am reading the OARC comments and this is sort of what
> it feels like. As much as Gadi seems to appropriate others' credit,
> he and Randy Vaughn have been doing this work for some time and
> deserve some credit, so I'd say "have you spoken to them about how
> to make their report better?" instead of "create more".

Yes, we have worked with Gadi and Randy Vaughn; in fact, Randy helped me out today. Thanks, Randy!

There is a difference in how Randy/Gadi collect data and how we collect data. The stats we publish come from numerous DNS-based realtime blacklists and spam traps we run. Other folks black-hole botnets and capture data.

We both come up with datasets that overlap, but we don't yet know by how much. So our data is another view using a different methodology; it isn't supposed to be "better," but to confirm where the problem is and estimate its magnitude.

-rick
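
As a rough sketch of what that overlap check could look like (Python;
the function name and the toy addresses below are invented for
illustration and come from neither feed):

# Sketch: how much do two independently collected bot datasets overlap?
def overlap_report(blacklist_hits, sinkhole_hits):
    common = blacklist_hits & sinkhole_hits
    union = blacklist_hits | sinkhole_hits
    print(f"blacklist/spam-trap view: {len(blacklist_hits)} hosts")
    print(f"sinkhole view:            {len(sinkhole_hits)} hosts")
    print(f"overlap: {len(common)} hosts, "
          f"{len(common) / len(union):.0%} of the combined set")

# Toy data only; real feeds would be far larger.
overlap_report({"192.0.2.10", "192.0.2.11", "198.51.100.7"},
               {"192.0.2.11", "203.0.113.5"})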

> > I am saying I am reading the OARC comments and this is sort of what
> > it feels like. As much as Gadi seems to appropriate others' credit,
> > he and Randy Vaughn have been doing this work for some time and
> > deserve some credit, so I'd say "have you spoken to them about how
> > to make their report better?" instead of "create more".

> Yes, we have worked with Gadi and Randy Vaughn; in fact, Randy helped
> me out today. Thanks, Randy!
>
> There is a difference in how Randy/Gadi collect data and how we
> collect data. The stats we publish come from numerous DNS-based
> realtime blacklists and spam traps we run. Other folks black-hole
> botnets and capture data.
>
> We both come up with datasets that overlap, but we don't yet know by
> how much. So our data is another view using a different methodology;
> it isn't supposed to be "better," but to confirm where the problem is
> and estimate its magnitude.

The more we know, the better. I believe the time for action has come and
gone, but I was not born a pessimist. :-)

If the first step is to de-"classify" what's public so that people are
aware of what's going on, I say bring it on.

Great work, Rick. Beer is on me this Defcon.

Gadi.

> For this community, would trend analysis (the best of who is getting
> better and the worst of who is getting worse) plus some baseline
> counts be enough for this group to understand whether the problem is
> getting better?

Your 5-day numbers were very reminiscent of the
weekly CIDR report. I think that if you clean them
up for weekly submission, they would be useful to
some.

For instance, you only published data for two
categories of ASN. Where is the tier-1 data?
And numbers should cover a 7-day period, not
5 days. In addition, for each category you should
provide a fixed cutoff. The CIDR report shows
the top 30 ASNs.

I think that the ideal would be a table
including ASN category as one column and showing
the top 50 ASNs. In addition, you should attempt
to separate the dynamic addresses by some means
or other and either add that as a separate column
or else do a separate table. Since this would
be posted weekly over a long period of time, it
is best to put some thought into how to structure
it so it remains relevant.

Also, provide a URL where researchers can download
more complete datasets, not just top 50.
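
To make this concrete, here is a minimal sketch of the weekly table
described above: per-ASN bot counts over a 7-day window, a category
column, and a fixed top-N cutoff. The input format, field names, and
sample values are assumptions for illustration, not a real feed:

from collections import Counter

TOP_N = 50  # fixed cutoff, per the CIDR-report convention above

def weekly_table(sightings, categories):
    """sightings: (asn, ip) pairs observed during the 7-day window.
    categories: asn -> label, e.g. 'broadband' or 'university'."""
    counts = Counter(asn for asn, _ip in sightings)
    print(f"{'ASN':>8}  {'category':<12}  {'bots':>5}")
    for asn, n in counts.most_common(TOP_N):
        print(f"{asn:>8}  {categories.get(asn, 'unknown'):<12}  {n:>5}")

# Toy input; a real run would read a week of sensor data.
weekly_table([(701, "192.0.2.1"), (1312, "198.51.100.2"),
              (1312, "198.51.100.3")],
             {701: "tier-1", 1312: "university"})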

> I am suggesting that NANOG is an appropriate forum to publish general
> stats on who the problem is getting better/worse for and possibly why
> things got better/worse.

I think few people will complain about a weekly
posting of this nature.

--Michael Dillon

> The motive is unclear because attacking, for example, the root
> servers is an effort without an obvious economic incentive

Since when is advertising NOT a sign of
an obvious economic incentive?

> The DA report went through a large thread (or several) about posting
> statistics here, and I'm not sure why yours will be any better rather
> than just another set of statistics which further de-sensitizes
> everyone to the problem.

Stats by themselves can be boring. But making data
available regularly and publicly will inevitably
lead to some people doing analyses of this data and
presenting those analyses to NANOG meetings. This will
lead to wider understanding of what is going on and
will provide raw material for getting management support
for actions to solve the problem.

--Michael Dillon

> For instance, you only published data for two
> categories of ASN. Where is the tier-1 data?

I suspect that "tier-1" botnet data isn't at all interesting, because
in general, "tier-1" providers have almost no address space containing
the sort of machines that end up in botnets. For instance, look at AS701

http://www.cidr-report.org/cgi-bin/as-report?as=AS701&view=4637

Lots of /24's, but even if you add it all up, barely a single /9 if
that much *total*. And I bet most of those /24's just have a handful
of routers on them.
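
For anyone checking the arithmetic: a /24 holds 2^(32-24) = 256
addresses, while a /9 holds 2^23 = 8,388,608, so it takes 32,768 /24s
to fill a single /9. A quick sketch (the announcement mix here is
invented, not AS701's actual table):

def total_addresses(prefix_lengths):
    # Sum 2^(32 - prefix_length) over all announced prefixes.
    return sum(2 ** (32 - plen) for plen in prefix_lengths)

# Hypothetical announcement mix: lots of /24s plus a few /16s.
announced = [24] * 5000 + [16] * 20
total = total_addresses(announced)
print(f"{total:,} addresses = {total / 2**23:.2f} /9-equivalents")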

> And numbers should cover a 7-day period, not
> 5 days. In addition, for each category you should
> provide a fixed cutoff. The CIDR report shows
> the top 30 ASNs.

If we're playing the "shame game" the way the CIDR report does, an
interesting metric might be "bots divided by announced address space"
(so, for instance, AS1312 would have its 6 or 10 bots(*) divided by
its two /16s). I wonder if the numbers for "consumer broadband" versus
"universities" will look significantly different when done that way.

(*) Yes, our AS isn't perfectly clean. We've got a resnet in our
address space, where the best we can do is provide user education and
play whack-a-mole as we find them....
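
A short sketch of that normalized metric, scaled to bots per /16 so
small and large networks compare fairly (the AS1312 figures are the
toy values from above; AS64500 is a hypothetical broadband network,
and none of these are measurements):

def bots_per_slash16(bot_count, announced_addresses):
    # Normalize raw bot counts by announced space, scaled to a /16.
    return bot_count / (announced_addresses / 2 ** 16)

# AS1312's example figures: ~10 bots across two /16s.
print(f"AS1312:  {bots_per_slash16(10, 2 * 2**16):7.1f} bots per /16")
# Hypothetical consumer-broadband ASN (private-use AS64500).
print(f"AS64500: {bots_per_slash16(40000, 16 * 2**16):7.1f} bots per /16")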