RBL for bots?

I started a list of brute-forcers and have been updating it when I can. It's sort of a personal RBL containing just the IP addresses of the offenders, built from some scripts I wrote. For those interested, the script does two things:

1) The script runs from cron, checking /var/log/secure, messages, or the equivalent, depending on the system. If it finds an attacker, it blocks them via /etc/hosts.deny and/or iptables (a rough sketch follows the list).
2) My version also posts the attacking host to www.infiltrated.net/bruteforcers
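
Something along these lines captures step 1 -- untested, and not the actual sharpener script (that's at the URL further down); the log path, match pattern, and threshold here are guesses you'd adjust per system:

  #!/bin/sh
  # Rough sketch: scan the auth log from cron, block repeat offenders.
  LOG=/var/log/secure          # /var/log/auth.log or messages elsewhere
  THRESHOLD=5                  # failed attempts before we block

  grep 'Failed password' "$LOG" \
    | grep -Eo 'from [0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' \
    | awk '{print $2}' \
    | sort | uniq -c \
    | while read count ip; do
        [ "$count" -lt "$THRESHOLD" ] && continue
        # Block via tcp_wrappers and/or iptables, skipping IPs already listed.
        grep -q "$ip" /etc/hosts.deny || echo "sshd: $ip" >> /etc/hosts.deny
        iptables -n -L INPUT | grep -q "$ip" \
          || iptables -A INPUT -s "$ip" -j DROP
      done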

When I started it, I hadn't heard of or used Denyhosts, otherwise I would have modified that script instead. When I first wrote sharpener, I intended to find the abuse contact for the offending attacker and send an automated message with the date, time, host address, and log file information. Scenario (with a rough sketch of the lookup-and-notify step after the list):

Attack begins
Script sees attack
Script blocks out attack
Script checks the owner of the netblock and finds their abuse contact
Script sends an automated message stating something like: "At 02/17/07 10:20am EST, our host was attacked from a machine in your netblock. The offending IP address is xxx.xxx.xxx.xxx"
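
Had I finished it, that lookup-and-notify step would have looked roughly like this (untested sketch; parsing whois output for a contact address is hit-or-miss, and the mail invocation is an assumption):

  #!/bin/sh
  # Sketch of the abuse-notification step. Illustration only.
  ATTACKER="$1"                              # offending IP
  NOW=$(date '+%m/%d/%y %I:%M%p %Z')

  # Best-effort guess at the netblock's abuse contact.
  ABUSE=$(whois "$ATTACKER" | grep -i abuse \
            | grep -Eio '[a-z0-9._%+-]+@[a-z0-9.-]+' | head -1)

  if [ -n "$ABUSE" ]; then
      printf 'At %s, our host was attacked from a machine in your netblock. The offending IP address is %s.\n' \
          "$NOW" "$ATTACKER" \
        | mail -s "Brute-force activity from $ATTACKER" "$ABUSE"
  fi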

I never had the time to finish the whois $attacker|grep -i abuse portion of it, though; I got bored and sidetracked. What I do instead now is pull the bruteforcer list from cron on all the machines I maintain/manage and have those machines block attackers automatically. The theory is that if one machine is getting attacked by luzerA, all machines should block luzerA, and now they do:
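
The cron job on each box amounts to something like this (untested; assumes the published list is one IP per line and that wget and iptables are available):

  #!/bin/sh
  # Sketch of the "every machine blocks luzerA" cron job: fetch the shared
  # list and drop each entry we haven't already blocked.
  LIST_URL=http://www.infiltrated.net/bruteforcers
  TMP=$(mktemp)

  wget -q -O "$TMP" "$LIST_URL" || exit 1

  while read ip; do
      case "$ip" in ''|\#*) continue ;; esac   # skip blanks and comments
      iptables -n -L INPUT | grep -qw "$ip" \
        || iptables -A INPUT -s "$ip" -j DROP
  done < "$TMP"

  rm -f "$TMP"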

http://www.infiltrated.net/sharpener for those interested in modifying/finishing/tweaking the script.

As for creating an RBL such as SORBS or something along those lines: the last thing I need is a packet attack or political "Take my netblock off!" crap, hence my not really wanting to bother maintaining it for the Interweb folk. For those who find it useful, kudos... For those who want to ramble on, I have mail filters for you, so don't bother.

I run the network for a University with about 12,000 students and 12,000
computers in our dormitories. We, like many other Universities, have spent the
last five or six years putting systems in place that are both reactive and
preventative. From my perspective, the issues are still there but I'm not
sure that I agree with your implications.

Do we still have "compromised" systems? Yes.
Is the number of "compromised" systems at any time large? No.
Is the situation out of control? No.

Email me off-list if you want more details. IMHO, it's too bad broadband
providers have not yet picked up on what the Universities have done.

Why do you claim broadband providers haven't picked up on what universities have done?

Couldn't broadband providers say the same thing?
     > Do we still have "compromised" systems? Yes.
     > Is the number of "compromised" systems at any time large? No.
     > Is the situation out of control? No.

If you compare the infection rate of a broadband provider with 10 million subscribers (which, with NAT, WiFi, and mobile devices, probably translates to at least 30 million devices) against that of a university with 12,000 students and one computer each, would the provider's rate be significantly different?

If your university's upstream ISP implemented a policy of cutting off the
university's Internet connection anytime a device on the university network was compromised, how many hours a year would the university
be down? What if the university's ISP had a three-strikes policy; would
the university have used up all three of its strikes? What proof should
the university's upstream ISP accept that the problem has been corrected?

Is there some infection rate of university networks that upstream ISPs should accept as "normal?" Or should ISPs have a zero-tolerance policy
for universities becoming infected repeatedly?

How is the "acceptable" infection rate for universities different than the infection rate of other types of networks?

One thing to watch out for in interpreting rDNS is
that it can be deceptive. As of about two weeks ago
(last time I checked), Verizon didn't offer FiOS in DC
at all. What you're seeing is probably some of the
newer suburbs in Virginia (possibly Maryland too)
which are vaguely near DC.

$.02

-David

David Barak
Need Geek Rock? Try The Franchise:
http://www.listentothefranchise.com

Gadi,

Can you elaborate a bit on what universities have done which would be
relevant to service providers here?

Generally, we've found that most end users don't even know that their systems
are infected - be it with spyware, bots, etc - and are happy when we can help
them clear things up as they usually aren't in a position to fix things on their
own. I know that the really bad analogy of driving a car has been used a few
times in this thread, but I think part of the analogy is true. If someone owns
and uses a car but the car has no indicator lights to say that something
is wrong, it's hard to believe that the driver will be able to fix the problem
or even know to contact the repair shop. We've tried to give our users
that "indicator" light and some help repairing it.

Most Universities have adopted the general strategies that came out of the
Internet2/Educause Salsa-NetAuth working group (see links at the end). This
general type of architecture has network components doing registration,
detection, end-user notification, host isolation, and auto-remediation. In
many cases, most of these systems are already in place and they just need
to be tied together.

Where I work, we use a captive-portal-like system to do MAC registration
and then, if our detection systems determine a host has an issue, we
force the host back to that captive portal and display a self-help page
for cleaning up the particular problem that the user has with their system.
At the end of the process, we provide a mechanism for them to escape the
captive portal and regain network access automatically. From the statistics
that we collect from our tools, we used to see about a third of the Windows
systems come onto our campus at the beginning of the year with some sort of
infection, with 90% of those cleaned automatically during our registration
process (we have an initial cleaning tool for new systems). Of the systems
that make it past this first round, 90% appear to be caught by our sensors,
sent back to the captive portal, and are able to self-remediate using our
cleaning tool.
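
I won't go into the exact redirection mechanics here (our setup leans on the DHCP and DNS pieces described below), but as a rough illustration only, on a Linux gateway the "force the host back to the captive portal" step could look something like the following; the portal IP and MAC are placeholders, and this is not a description of our production system:

  #!/bin/sh
  # Hypothetical illustration: quarantine a flagged MAC by sending its web
  # traffic to the portal and dropping everything else it forwards.
  PORTAL=192.0.2.10            # captive portal / self-help web server
  BADMAC=00:11:22:33:44:55     # MAC flagged by the detection systems

  quarantine() {
      iptables -t nat -A PREROUTING -m mac --mac-source "$1" \
          -p tcp --dport 80 -j DNAT --to-destination "$PORTAL"
      iptables -A FORWARD -m mac --mac-source "$1" \
          ! -d "$PORTAL" -j DROP
  }

  release() {
      # Same rules deleted once the user finishes self-remediation.
      iptables -t nat -D PREROUTING -m mac --mac-source "$1" \
          -p tcp --dport 80 -j DNAT --to-destination "$PORTAL"
      iptables -D FORWARD -m mac --mac-source "$1" \
          ! -d "$PORTAL" -j DROP
  }

  quarantine "$BADMAC"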

Other Universities have similar systems, but invert the "registration" idea.
For example, one place allows open network access until their sensors detect
a problem with a given host. At that point, the host is logged into their
system with an indication of the problem, and then shunted back to the captive
portal with instructions for cleaning up the system.

As Sean and others pointed out, you need a business case for something like
this. In our case, we already had a help desk, tools and documentation for
cleaning up infected systems, a sensor network, web servers, DNS servers with
Views support, and a DHCP system that easily allowed the mapping of classes of
MACs into pools. The cost for us was in adding the database to track things,
some development time to build the web interface, and some of the hooks that
link everything together. The hard savings for us came from fewer calls to the
help desk and fewer incidents for our security team to handle (i.e. less staff
or slower growth in staff). We also gained the soft benefit from students
believing that the network actually works and works well.

Eric :)

Here are some presentations that I've done:

Defending Against Yourself
Automated Network Techniques to Protect and Defend Against Your End Users
http://www.roxanne.org/~eric/NERCOMP-2006.ppt
(February, 2006)

Network Architecture for Automatic Security and Policy Enforcement
(September, 2005)

Life on a University Network:
An Architecture for Automatically Detecting, Isolating, and Cleaning Infected Hosts
http://www.nanog.org/mtg-0402/gauthier.html
(February, 2004)

You forgot a big difference. Universities usually don't give tuition
refunds, so you have a $40,000 "penalty" hanging over the student's head,
which gives students an incentive to listen and respond to your notices. It's similar to why public libraries have a much harder time getting people to return books than university libraries do.

Ask car repair shops about people who keep driving after that indicator
light turns on, smoke belching out, until the car grinds to a stop. While consumers might miss one notification method, even after you notify people by e-mail, telephone, snail mail, web redirects, and any other way you can think of, consumers are very good at ignoring warnings until their computer stops working.

Detection or notification isn't the problem. Getting people to want to fix their computer is.

If there isn't a way to test whether the computer is actually fixed, then you just cycle repeatedly between the consumer saying it's fixed (or that nothing is wrong) and the ISP claiming it's broken.

What's the Turing test for a fixed computer?