Security Intelligence [Was: Re: Netblock reassigned from Chile to US ISP...]

Personally, I don't think NANOG is the proper forum for this discussion.

There are other forums, however, which do follow these issues -- some
public, some private.

If folks think that people are not "doing" massive correlation of criminal
activity on the Internet, they would be mistaken.

- - ferg

I'm not in the habit of responding to my own e-mail, but...

An in-depth strategy with hundreds or thousands of factors examined results
in a smaller (but still present) possibility of the filter/detector being
fooled.

IP-based methods can be combined with other, stronger analysis of
transaction details and other information that can be gathered about a
submitter to detect attempted abuse.
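
For illustration only, a rough sketch of how many weak factors might be
combined into one score -- every factor name, weight, and threshold below is
invented, not anyone's actual filter:

# Hypothetical sketch: combine many weak fraud signals into one score.
# Factor names, weights, and the threshold are invented for illustration.

def fraud_score(tx):
    """tx is a dict of observations about a submitted transaction."""
    weights = {
        "ip_country_mismatch": 2.0,   # IP geolocation disagrees with billing country
        "freemail_address":    0.5,   # throwaway e-mail provider
        "proxy_or_tor_exit":   2.5,   # source IP on a known proxy/Tor list
        "velocity_exceeded":   1.5,   # too many attempts from this card/IP recently
        "avs_mismatch":        2.0,   # billing address failed verification
    }
    return sum(w for name, w in weights.items() if tx.get(name))

def looks_abusive(tx, threshold=4.0):
    # No single factor is decisive; it is the combination that trips the filter.
    return fraud_score(tx) >= threshold

if __name__ == "__main__":
    tx = {"ip_country_mismatch": True, "proxy_or_tor_exit": True}
    print(fraud_score(tx), looks_abusive(tx))

The point of examining many factors is exactly that fooling all of them at
once is much harder than fooling any one of them.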

Personally, I don't think NANOG is the proper forum for this discussion.

There are other forums, however, which do follow these issues -- some
public, some private.

If folks think that people are not "doing" massive correlation of
criminal activity on the Internet, they would be mistaken.

The point I am trying to make here is that ISPs should be much more engaged
in this entire process.

In the not-so-distant past, I have tried to engage the ISP community (via
NANOG, at NANOG meetings) to get involved in the fight against cyber crime,
with lackluster response -- unfortunately.

If this problem is ever going to be reduced to a manageable level, ISPs
must play a critical role -- a role they have not been willing to play to
this day. ISPs have been (one of) the missing links here.

Of course, there are very responsible ISPs out there who handle these issues
when they are brought to their attention, and they deserve kudos -- but
unfortunately, they are in the minority.

This community should be asking itself why that is... and figuring out ways
to deal with it responsibly.

$.02,

- - ferg

If folks think that people are not "doing" massive correlation of criminal
activity on the Internet, they would be mistaken.

engineers judge by the results. and, unfortunately, we can read them in the ny times.

though some recent papers sure make interesting reading. just picking on one particular cs researcher
   http://www-cse.ucsd.edu/~savage/papers/CCS08Conversion.pdf
   http://www-cse.ucsd.edu/~savage/papers/login08.pdf
   http://www-cse.ucsd.edu/~savage/papers/LEETHeisenbot08.pdf

the last being particularly interesting in the domain of being able to *accurately* isolate evil.

randy

The point I am trying to make here is that ISPs should be much more engaged
in this entire process.

most of the larger isps have reasonable security teams with some good folk. but you need to be much more specific about what you want from medium and smaller isps, and what the immediate payoffs (cf. the financial sections of the newspaper) will be to them to justify the costs.

just whining that no one will come out to play is not a success strategy, as you say you have well demonstrated.

be specific, like "if you run X tools the payoff will be Y."

randy

Inferior people look solely for financial payoff. Superior people
recognize their fundamental obligation to prevent their operation from
being a menace to others, and do it based on ethics.

---Rsk

Wow!! That's an eye opener..

nice and glib. but we have limited resources, and they're gonna get even more limited. so we allocate them based on how we perceive need and relevance to running the actual network. and we just can't do everything. so when ferg says "i exhort and folk don't jump," perhaps there could be a problem with the exhortation, not a lack of ethics on the part of the operators.

randy

Quick comment on e-commerce.

Consider that in many/most cases, the merchant will want to capture the
customer's address, which is sent along with the credit card information for
authorization. Once the merchant has received an authorization, he is
pretty much guaranteed to get paid by the credit card company.

So the whole "geolocation" bit is not really necessary, because they will
want a real address anyway.
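
A minimal sketch of that flow (the gateway call and the AVS result codes are
made up for illustration; real processors have their own APIs):

# Illustrative only: the merchant submits the billing address along with the
# card number; the processor returns an address-verification result with the
# authorization. The gateway call and result codes below are invented.

def authorize(card_number, amount, billing_address):
    # Placeholder for a real payment-gateway call.
    return {"approved": True, "avs_result": "full_match"}  # or "zip_only", "no_match"

def accept_order(card_number, amount, billing_address):
    auth = authorize(card_number, amount, billing_address)
    if not auth["approved"]:
        return False
    # Many merchants decline or hold orders when the address doesn't verify,
    # since a verified address reduces their chargeback exposure.
    return auth["avs_result"] in ("full_match", "zip_only")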

Where geolocation is used is for media companies. If CBS has negotiated
the rights to air a program in the USA, then its web site will be
programmed to only allow USA-based IPs to view the on-line version of
that program. In the UK, the BBC gets tax revenues from UK citizens, so only
UK IPs are allowed to view the on-line versions of those programs.
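
A toy sketch of that kind of geo-fence (the country lookup is just a
placeholder for a real GeoIP database, and the allowed list is an example):

# Rough sketch of IP-based geo-fencing for streaming content.
# The lookup function is a placeholder; real sites use a commercial GeoIP
# database kept current, since accuracy is what the whole scheme rests on.

ALLOWED_COUNTRIES = {"US"}  # e.g. a show licensed for on-line viewing only in the USA

def country_of(ip_address):
    # Placeholder for a GeoIP database lookup.
    return "US"

def may_stream(ip_address):
    return country_of(ip_address) in ALLOWED_COUNTRIES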

but you need to be much more specific about what you want from
medium and smaller isps, and what the immediate payoffs (cf. the
financial sections of the newspaper) will be to them to justify the costs.

Inferior people look solely for financial payoff. Superior people
recognize their fundamental obligation to prevent their operation from
being a menace to others, and do it based on ethics.

They don't need to be moral; they need to understand that four years down the line it will cost them significantly, to the point of losing a lot of business. A good example is registrars. They lose quite a bit now.

Foresight on security is not something that really works.

do they lose 'quite a bit' now? how much is 'quite a bit'? and is that
more or less than they take home at the end of the day?

I'm curious because near as I can tell there doesn't seem to be really
any change in how registrars handle transactions... even domains
knowingly bought with stolen credit cards seem to hang around (and
change) long after the CC company frauded out the transaction(s)...

If there really was a large loss, wouldn't they make changes to
process/procedures/activities to limit their exposure?

-chris

but you need to be much more specific about what you want from
medium and smaller isps, and what the immediate payoffs (cf. the
financial sections of the newspaper) will be to them to justify the costs.

Inferior people look solely for financial payoff. Superior people
recognize their fundamental obligation to prevent their operation from
being a menace to others, and do it based on ethics.

They don't need to be moral; they need to understand that four years down the
line it will cost them significantly, to the point of losing a lot of
business. A good example is registrars. They lose quite a bit now.

do they lose 'quite a bit' now? how much is 'quite a bit'? and is that
more or less than they take home at the end of the day?

I'm curious because near as I can tell there doesn't seem to be really
any change in how registrars handle transactions... even domains
knowingly bought with stolen credit cards seem to hang around (and
change) long after the CC company frauded out the transaction(s)...

If there really was a large loss, wouldn't they make changes to
process/procedures/activities to limit their exposure?

The ones that don't take the "legal risk" now handle fraud quite differently. They are required to resolve such purchases with the credit card companies, and that costs them (the ones which do what the law requires of them) on a scale which is... very disturbing.

Randy Bush <randy@psg.com> writes:

be specific, like "if you run X tools the payoff will be Y."

Yes. And where is the appropriate forum for this? I find this
sort of thing quite interesting; and yeah, it doesn't seem like the
sort of thing NANOG is for, but on most of the small-ISP forums
(like webhostingtalk, etc...) the average technical skill level
seems to be ridiculously low.

Some people talk about ways to give spammers only one 'whack' at
your service, such as requesting a faxed ID ahead of time, or putting more
effort into preventing credit card fraud.

Me, my focus has been on detecting abuse from my customers before the
rest of the world starts complaining.

speaking as a small provider, I can tell you that I find running snort
against my inbound traffic does reduce the cost of running an abuse desk.
I do catch offenders before I get abuse@ complaints, sometimes.

Granted, my snort-fu is not awesome. Just the other week I was reminded that
I wasn't even checking for ssh dictionary attacks. There is a lot more work
I need to do with snort before I can have it automatically switch off
customers, or notify me at a high priority, rather than writing to a log
I read once every few days. Still, I think I am on the right track,
as even with my poor, neglected snort setup I still catch some problems
before I get complaints.

I don't see anyone else talking about doing anything
similar... Everyone else seems to be focused on preventing spammers
from signing up or going after them after the fact.

It seems to me that some effort put into detecting abuse as it happens
(rather than waiting for an abuse@ complaint, something that, in my
experience, takes a rather large amount of abuse to trigger) could yield
quite a lot of 'low hanging fruit', simply because not much effort has been
put out in that direction.

On the other hand, I have a hard time believing I'm smarter than the guys
running ec2. So maybe I'm missing something, and it's really not actually
any cheaper than manning the abuse@ desk with a bunch of grunts. Or
maybe other people are already doing this, and I've just missed the
conversation.
  
Maybe even if you tune snort optimally, it still can't detect enough of the
attacks to be useful?

be specific, like "if you run X tools the payoff will be Y."

Yes. And where is the appropriate forum for this?

there must be some operators' list somewhere.

> it doesn't seem like the sort of thing NANOG is for

yep. nanog is for whining about it, not doing/saying something actually constructive with technical content.

</sarcasm>

speaking as a small provider, I can tell you that I find running snort
against my inbound traffic does reduce the cost of running an abuse desk.
I do catch offenders before I get abuse@ complaints, sometimes.

unfortunately snort does not really scale to a larger provider. and, to the best of my poor knowledge, good open source tools to black-hole/redirect botted users are not generally available. universities have some that are good at campus and enterprise scale.

cymru and a few security researchers responded privately to my plea for solid open source tool sets and refs. knowing the folk involved, maybe we'll see some motion. patience is a virtue, within limits.

randy

I respectfully disagree. I have very large entities with a LOT of traffic running through Snort.

However, they are also using my company's products.

I work for Sourcefire. We make Snort.

If you're talking about throughput, Tilera recently (April) demonstrated 10Gbit/s snort on their TILE64 processors.
http://tilera.com/news_&_events/press_release_080429_snort.php

Not sure if anyone has them in products at the moment though.

Randy Bush <randy@psg.com> writes:

> speaking as a small provider, I can tell you that I find running snort
> against my inbound traffic does reduce the cost of running an abuse desk.
> I do catch offenders before I get abuse@ complaints, sometimes.

unfortunately snort does not really scale to a larger provider. and,
to the best of my poor knowledge, good open source tools to
black-hole/redirect botted users are not generally
available. universities have some that are good at campus and
enterprise scale.

I can't speak to the scaling of snort (I only eat around 20Mbps,
and snort on a 256Mb Xen VM handles it just fine) but I'm not
sure what you are getting at with regards to open-source tools to
blackhole or redirect botted users. I mean, we've all got hooks
in our billing system (or some other procedure) to manually disable
abusive (or non-paying) customers now, right? I guess I'm not seeing
how it is any harder to have a script watching snort disable the
customer than it is to have freeside disable the customer when they
don't pay their bill.

My current setup (and I'm not saying this is the right way,
or even a good way to do it) is just snort logging to a file. I
then have a perl script tailing that file and 'doing stuff' -
right now, 'doing stuff' consists of figuring out if it is abuse
from one of my customers (in which case it puts it in the log for me)
or to one of my customers (in which case, it puts it in a log for that
customer. I figure it doesn't cost me any extra, so I might as well
let customers see incoming attacks.)
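
For what it's worth, a rough sketch of that "tail the alert log and sort by
direction" idea -- written here in Python rather than perl, with an invented
customer netblock (203.0.113.0/24), and a regex aimed at snort's one-line
"fast" alert format, so adjust it to whatever you actually log:

#!/usr/bin/env python3
# Sketch: follow the snort alert log and classify alerts by direction.
import re, time, ipaddress

MY_NETS = [ipaddress.ip_network("203.0.113.0/24")]   # your customer space (example)
ALERT_RE = re.compile(r"\{\w+\} (\S+):\d+ -> (\S+):\d+")

def is_mine(addr):
    try:
        ip = ipaddress.ip_address(addr)
    except ValueError:
        return False
    return any(ip in net for net in MY_NETS)

def follow(path):
    with open(path) as fh:
        fh.seek(0, 2)                 # start at end of file, like tail -f
        while True:
            line = fh.readline()
            if not line:
                time.sleep(1)
                continue
            yield line

for line in follow("/var/log/snort/alert"):
    m = ALERT_RE.search(line)
    if not m:
        continue
    src, dst = m.groups()
    if is_mine(src):
        print("outbound (our customer may be abusive):", line.strip())
    elif is_mine(dst):
        print("inbound (attack against our customer):", line.strip())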

If I sat down at it long enough to say 'alert X (or alert X, y times in Z
seconds) means the customer is definitely botted (or abusive)', setting
the perl script to run a script that uses ssh to connect to my router
and blackhole the customer (or to connect to my freeside system and
suspend the account) is, if not trivial, at least fairly easy. It's certainly
something I could give to the junior guy on my team, and while I'd want
to check his work and test carefully before going live, I'm confident
he could implement it.
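
A hypothetical continuation of the sketch above -- the threshold, window,
router hostname, and route command are all invented, and you'd obviously run
it in notify-only mode first:

# Sketch: count alerts per customer IP and null-route repeat offenders.
import subprocess, time
from collections import defaultdict

THRESHOLD = 20          # this many alerts ...
WINDOW = 300            # ... within this many seconds
hits = defaultdict(list)

def record_alert(customer_ip):
    now = time.time()
    hits[customer_ip] = [t for t in hits[customer_ip] if now - t < WINDOW]
    hits[customer_ip].append(now)
    return len(hits[customer_ip]) >= THRESHOLD

def blackhole(customer_ip):
    # One possibility: push a null route to the edge router over ssh.
    # Hostname and command syntax are placeholders for your own gear.
    subprocess.run(
        ["ssh", "admin@router.example.net",
         f"ip route add blackhole {customer_ip}/32"],
        check=True,
    )

# e.g., inside the log-following loop above:
#   if is_mine(src) and record_alert(src):
#       blackhole(src)   # or just e-mail yourself until you trust the rules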

If you really need something pre-built, check out:
http://www.snortsam.net/ (I haven't used it, but I don't think
it's the only tool of its kind.)

the hard parts (as I see them) are going to be

1. identifying the snort attacks that mean a box should be shut down.
   I mean, I don't want to shut you down for a simple port-scan. Maybe you
   are checking one of your own networks? things like that are probably
   more of a 'warning' for the customers I target. This is probably
   easier on a network targeting 'normal' customers, 'cause you can prohibit
   many of those things in your AUP. Also, at this point, it's pretty
   important that you don't have a noticeable number of false positives. You
   probably want to run your thing in notify-only mode for a while until you
   are comfortable.

2. making sure that your system doesn't turn into an easy way to DOS another
   server on the same network. BCP38, if implemented tightly enough
   (something I'm doing quite well on IPv4, but not on IPv6) can
   largely fix this problem, and as you are watching for abuse behind your
   own router, is a realistic solution. But it still takes some effort.

Luke S Crawford wrote:

Randy Bush <randy@psg.com> writes:

speaking as a small provider, I can tell you that I find running snort
against my inbound traffic does reduce the cost of running an abuse desk.
I do catch offenders before I get abuse@ complaints, sometimes.

unfortunately snort does not really scale to a larger provider. and,
to the best of my poor knowledge, good open source tools to
black-hole/redirect botted users are not generally
available. universities have some that are good at campus and
enterprise scale.

I can't speak to the scaling of snort (I only eat around 20Mbps,
and snort on a 256Mb Xen VM handles it just fine) but I'm not sure what you are getting at with regards to open-source tools to blackhole or redirect botted users. I mean, we've all got hooks
in our billing system (or some other procedure) to manually disable
abusive (or non-paying) customers now, right? I guess I'm not seeing how it is any harder to have a script watching snort disable the
customer than it is to have freeside disable the customer when they
don't pay their bill.

I suppose it could lead to huge amounts of anger from an existing customer base if automatic cutoffs started showing up one day out of the blue (from their perspective). I automatically disable various things for a whole slew of reasons - but I've been doing it since day one and everyone is aware of it and expects it. Or slowly phase them in, with warnings leading up to automated action.

Repetitive, boring tasks are great for computers. I've only ever had one customer (a local advertising agency, who is no longer a customer) cry over automation because they thought they had a "special treatment" clause and didn't need to pay. It sent them warnings, of course, but they thought those didn't apply to them either. Automation isn't for everyone.

I like automation. It has rules and follows them. The rules are posted ahead of time for all to see. Most of the time people are happy to see the automated system put a stop to some kind of potential disaster before it has time to cause more damage. It's like your credit card company calling you because suddenly there are abnormal charges on your card.

~Seth

But it's definitely not cool when my credit card company cuts off my card
due to "abnormal charges" when I'm abroad and suddenly can't get ahold of
customer service via their international phone number. Automation in the
right places works wonders for both convenience and the bottom line. In the
wrong places, it's a sawed off shotgun pointed at your feet.

-brandon

"Brandon Galbraith" <brandon.galbraith@gmail.com> writes:

But it's definitely not cool when my credit card company cuts off my card
due to "abnormal charges" when I'm abroad and suddenly can't get ahold of
customer service via their international phone number. Automation in the
right places works wonders for both convenience and the bottom line. In the
wrong places, it's a sawed off shotgun pointed at your feet.

Yeah, in this case, I think getting the rules right is the hard part...
I don't think it matters that much if the rules are executed by a level-1
person vs. a script (the script, I think, would be more consistent,
at least.) Sure, if you can afford to page someone good to deal with it,
that's probably an even better answer, assuming they can get to it quickly,
but that's much more expensive than just blocking it. (I imagine the
right approach depends a lot on what you happen to be charging the customers
in question.)

Even if you do decide to wait around for an abuse@ complaint to take action,
having the IDS logs of the outgoing traffic makes corroborating an abuse
complaint much easier. And it's easy enough to email a human instead of
shutting off a customer automatically.

Pretty much the same thing I've been telling "security vendors" since 2003. The hard problem in 2003 wasn't detection (IDS, AV scanners, honeypots, etc.), and it still isn't; it's customer remediation (fixing things). Unfortunately, if all you are selling are hammers.... A security vendor's salesperson's concept of "scaling" is "more commission."

You may need to leave the network engineer's world and start talking to
the customer care engineer's side of the house. It's a different set of
systems, and a different set of scaling issues. How do you notify 50 million customers about an issue? Marketing people probably know how to do it better than network engineers.

1. Add flags to your customer support systems for different customer statuses,
so when customers contact your call centers the agents can start on the best
script for "known" problems (a rough sketch follows this list).

2. Include customer status flags on your portals (details behind some level of authentication in case the account is being shared).

3. Obtain and communicate with your customers through multiple channels
respecting their preferences (e.g. e-mail, alternate e-mail, postal mail,
telephone). Even non-US ISPs may want to look at the US FTC "red flag" rules.
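
As a purely illustrative sketch of what such a flag might look like (field
names invented), shared between the call-center tooling and the portal:

# Illustrative data model only: a per-account security status flag that
# call-center tools and the customer portal can both read.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SecurityFlag:
    code: str                 # e.g. "botted", "spam-source", "phish-victim"
    detail: str               # which detection fired, which remediation script to use
    raised_at: datetime = field(default_factory=datetime.utcnow)

@dataclass
class CustomerAccount:
    account_id: str
    contact_channels: list    # e-mail, alternate e-mail, postal, phone preferences
    flags: list = field(default_factory=list)

    def raise_flag(self, code, detail):
        # The same flag drives the agent's script at the call center and the
        # notice shown (behind authentication) on the customer portal.
        self.flags.append(SecurityFlag(code, detail))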

Why do I mention those things? Because I've found out (mostly the hard
way) that the remediation part of the process is the bottleneck. It doesn't
matter how many bad things you detect if you can only fix a limited
number at a time. Detection effort spent on stuff below the remediation threshold is wasted; those resources probably would have been better used for more remediation.

Yes, the bad guys may know that too. But if we got to the point where the bad guys actually worried about staying below the remediation threshold, that would be more progress than we have now.

Hint: if you could prove to a large ISP that you could shave 60 seconds off the average customer care call by fixing security problems faster, they would probably be beating down your door begging for it.