What do you want your ISP to block today?

What would be great, though, is a system that automatically checks whether there is any return traffic for what a customer sends out. If someone keeps sending traffic to the same destination without anything coming back, there is a 99% chance that this is a denial-of-service attack.

This is fine until a customer sends out legitimate multicast traffic, so any such scheme has to ignore multicast traffic. Then the worm and virus writers will just switch to using multicast as a vector.

Yes, that would be cool. I'm surprised that Microsoft doesn't send out its updates over multicast yet. That would save them unbelievable amounts of bandwidth: all Windows boxes simply join the Windows Update multicast group so they automatically receive each and every update. But we can safely assume they won't use single-source multicast, so it's only a question of time before some industrious worm builder creates the ultimate worm: one that infects all Windows systems worldwide by sending a single packet to the Windows Update multicast group...

Ok, this could happen if:

1. more than five people worldwide had interdomain multicast capability
2. anyone with multicast capability could send to any multicast group

And besides, this will happen if possible regardless of the utility of unicast for worm propagation.

Also this only works where routing is strictly symmetrical (e.g. edge connections, and to single homed edges at that).

Yes.

It also has the problem that you have to retain some state (possibly little) for all outbound traffic until you can match it to inbound traffic. Given the paucity of memory in most edge routers, this is a problem. Even with a decent amount of memory, it would soon get overrun, even on a slowish circuit like a T1. A DSLAM with several hundred DSL lines would need lots of memory to implement this, and lots of CPU cycles to manage it.

Give implementers a little credit. There is no need to do this for every packet that flows through a box. You can simply sample the traffic at regular intervals and perform the return traffic check for only a small fraction of all traffic. Statistics is on your side here, as with Random Early Detect congestion/queue management, because you automatically see more packets from sources that send out a lot of traffic.
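As a rough sketch of what that sampled check might look like (every name, rate, and threshold here is illustrative, not taken from any real router implementation):

```python
import random
from collections import defaultdict

SAMPLE_RATE = 0.01      # inspect roughly 1% of packets, as argued above
DOS_THRESHOLD = 500     # sampled outbound packets with no reply at all

# (src, dst) -> [sampled outbound count, sampled inbound count]
# A real implementation would age out entries to bound memory use.
flows = defaultdict(lambda: [0, 0])

def observe(src, dst, direction):
    """Maybe record one packet; return a verdict for this flow or None.

    direction is "out" for customer -> internet, "in" for the reverse.
    """
    if random.random() > SAMPLE_RATE:
        return None                       # not sampled; keep no state
    key = (src, dst) if direction == "out" else (dst, src)
    flows[key][0 if direction == "out" else 1] += 1
    out, back = flows[key]
    # Heavy sampled outbound traffic with nothing ever coming back:
    # per the argument above, very likely a denial-of-service flood.
    if out >= DOS_THRESHOLD and back == 0:
        return "possible-dos"
    return None
```

Note that the heavy senders are exactly the ones most likely to be sampled, which is the statistical point made above.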

At layer 3, all TCP traffic generates return traffic, since the receiving end has to send ACKs back, so this scheme can't simply work on "I've seen another packet in the reverse direction, so it's OK".

That's exactly why this works: if the other end sends ACKs, then obviously at _some_ level they're willing to talk. So that would indeed be ok. With DOS and scanning this is very different: for many/most/all packets sent by the attacking system, nothing comes back, except maybe a port unreachable or RST.
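The distinction just drawn (real replies count; RSTs and port-unreachables don't) can be sketched as a per-source scan check. Again, every name and threshold below is made up for illustration:

```python
from collections import defaultdict

MIN_DESTINATIONS = 50     # don't judge a source on a handful of flows
SILENT_FRACTION = 0.75    # roughly the 50-75% figure discussed here

# source -> {destination: True once a genuine reply was seen}
replies = defaultdict(dict)

def outbound(src, dst):
    """Record a packet from src toward a (new or known) destination."""
    replies[src].setdefault(dst, False)

def reply_seen(src, dst, kind="data"):
    """Record a reply from dst back to src.

    RSTs and ICMP port-unreachables don't count as willingness to talk.
    """
    if kind in ("rst", "port-unreachable"):
        return
    if dst in replies[src]:
        replies[src][dst] = True

def looks_like_scan(src):
    """True when src talks to many hosts and most never really answer."""
    dsts = replies[src]
    if len(dsts) < MIN_DESTINATIONS:
        return False
    silent = sum(1 for answered in dsts.values() if not answered)
    return silent / len(dsts) >= SILENT_FRACTION
```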

>
>>If you don't want to download patches from Microsoft, and don't want to
>>pay McAfee, Symantec, etc for anti-virus software; should ISPs start
>>charging people clean up fees when their computers get infected?
>
>Only if it impacts the ISP, which it doesn't most of the time unless they
>buy an unfortunate brand of dial-up concentrators.
>
>>Would you pay an extra $50/Mb a month for your ISP to operate a firewall
>>and scan your traffic for you?
>
>No way. They have no business even looking at my traffic, let alone
>filtering it.
>
>What would be great though is a system where there is an automatic check
>to see if there is any return traffic for what a customer sends out. If
>someone keeps sending traffic to the same destination without anything
>coming back, 99% chance that this is a denial of service attack. If
>someone sends traffic to very many destinations and in more than 50 or 75
>% of the cases nothing comes back or just an ICMP port unreachable or TCP
>RST, 99% chance that this is a scan of some sort.

This is fine until a customer sends out legitimate multicast traffic, so
any such scheme has to ignore multicast traffic. Then the worm and virus
writers will just switch to using multicast as a vector.

It's not just UDP multicast. Unicast streaming is moving towards UDP. In
Apple Darwin Streaming Server, for example, unicast streaming is UDP
by default. Examination of my DSS server logs shows that over two-thirds
of our video streaming in the last 2 months has been over UDP.

In this UDP streaming there is return traffic, but it is highly asymmetric.

Regards
Marshall Eubanks

Rob Thomas wrote:

Oh, good gravy! I have a news flash for all of you "security experts"
out there: The Internet is not one, big, coordinated firewall with a
handy GUI, waiting for you to provide the filtering rules. How many
of you "experts" regularly sniff OC-48 and OC-192 backbones for all
those naughty packets? Do you really want ISPs to filter the mother
of all ports-of-pain, TCP 80?

Yes. While I hate to admit it, the one thing worse than not applying filters is applying them incorrectly. A good example is ICMP rate limits. It's one thing to shut off ICMP, or even to filter 92-byte ICMP. But the second someone rate-limits ICMP echo/reply, they've destroyed the number one network troubleshooting and performance testing tool. If it were a full block, one would say "it's filtered". With rate limiting, you just see sporadic results: sometimes good, sometimes high latency, sometimes dropped.

Filter edges, and if you apply a backbone filter, apply it CORRECTLY! Rate-limiting ICMP is not applying it correctly.

-Jack

Sean Donelan wrote:

If you don't want to download patches from Microsoft, and don't want to
pay McAfee, Symantec, etc for anti-virus software; should ISPs start
charging people clean up fees when their computers get infected?

www.google.com
+Free +AntiVirus

Now was that so hard?

-Jack

Christopher L. Morrow's mention of asymmetric routing for multihomed
customers is more to the point, but if we can solve this for all those
single homed dial, cable and ADSL end-users and not for multihomed
networks, I'll be very happy.

Sorry to throw yet another insect into the topical remedy (fly in the
ointment), but I happen to look a lot like a single-homed ADSL end
user at certain levels, and yet I'm multihomed. I'd be very annoyed if
my ISP started blocking things just because my traffic pattern didn't
look like what they expect from a single-homed customer.

So which do you prefer: nobody gets to scan your systems from the outside
(including you) or everyone gets to scan your systems from the outside
(including you).

I prefer the latter.

If you want to know how TCP is working to a destination, you
have to use TCP to test it.

As I mentioned above: this will not impact TCP at all because TCP
generates return traffic. I'm sure there are one or two UDP applications
out there that don't generate return traffic, but I don't know any. The
only problem (except asymmetric routing when multihomed) would be
tunnels, but you can simply enable RIP or something else on the tunnel to
make sure it's used in both directions. Multicast doesn't generate return
traffic so this would only apply to unicast destinations.

But, TCP to a port that isn't listening (or several ports that aren't
listening) _ARE_ what you are talking about blocking. This is not a
good idea.

Scans by themselves certainly aren't inherently dangerous.

It should be possible to have a host generate special "return traffic"
that makes sure that stuff that would otherwise be blocked is allowed
through.

I don't think it's desirable or appropriate to have everyone re-engineer
their hosts to allow monitoring and external validation scans to get
around your scheme for turning off services ISPs should be providing.

Owen

That won't save them when the time required to download the patch set is an order of magnitude greater than the mean time to infection.

Seems to me that it would be far more effective to simply prohibit connection of machines without acceptable operating systems to the network. That would send a more appropriate message to the vendor, too (better than "don't bother to test before you release, we'll pay to clean up the resulting mess").

Joe

Christopher L. Morrow's mention of asymmetric routing for multihomed
customers is more to the point, but if we can solve this for all those
single homed dial, cable and ADSL end-users and not for multihomed
networks, I'll be very happy.

I happen to look a lot like a single-homed ADSL end
user at certain levels, and yet I'm multihomed. I'd be very annoyed if
my ISP started blocking things just because my traffic pattern didn't
look like what they expect from a single-homed customer.

I'm sure knife salespeople find it extremely annoying that they can't bring their wares along as carry-on when they fly. Sometimes a few people have to be inconvenienced for the greater good.

But, TCP to a port that isn't listening (or several ports that aren't
listening) _ARE_ what you are talking about blocking. This is not a
good idea.

Why not? I think it's a very good idea. TCP doesn't work if you only use it in one direction, so blocking this doesn't break anything legitimate, but it does stop a whole lot of abuse. (Obviously I'm talking about the case where the lack of return traffic can be determined with a modicum of reliability.)

It should be possible to have a host generate special "return traffic"
that makes sure that stuff that would otherwise be blocked is allowed
through.

I don't think it's desirable or appropriate to have everyone re-engineer
their hosts to allow monitoring and external validation scans to get
around your scheme for turning off services ISPs should be providing.

But then you don't seem to have any problems with letting through denial of service attacks so I'm not sure if there is any use in even discussing this with you. Today, about half of all mail is spam, and it's only getting worse. If we do nothing, tomorrow half of all network traffic could be worms, scans and DOS. We can't go on sitting on our hands.

That won't save them when the time required to download the patch set
is an order of magnitude greater than the mean time to infection.

This, in fact, is the single biggest thorn in our side at the moment. It's hard
to adopt a pious "patch your broken box" attitude when the user can't get it
patched without getting 0wned first...

Seems to me that it would be far more effective to simply prohibit
connection of machines without acceptable operating systems to the
network. That would send a more appropriate message to the vendor, too
(better than "don't bother to test before you release, we'll pay to
clean up the resulting mess").

Given the Lion worm that hit Linux boxes, and the fact there's apparently a
known remote-root (since fixed) for Apple's OSX, what operating systems would
you consider "acceptable"?

Bits are bits, very few of them actually impact the ISP itself. Most
ISPs protect their own infrastructure. Routers are very good at
forwarding bits. Routers have problems filtering bits. Whether it is
spam, viruses, or other attacks, it's mostly customers or end-users that
bear the brunt of the impact, not the ISP.

The recurring theme is: I don't want my ISP to block anything I do, but
ISPs should block other people from doing things I don't think they
should do.

So how long is reasonable for an ISP to give a customer to fix an
infected computer; when you have cases like Slammer where it takes only
a few minutes to infect the entire Internet? Do you wait 72 hours?
or until the next business day? or block the traffic immediately?

Or some major ISPs seem to have the practice of letting infected
computers continue attacking as long as it doesn't hurt their
network.

how about ACLing them?

upstream from customer:
permit udp <customer> <ISP's nameservers> port 53
permit tcp <customer> <windowsupdaterange> port 80(?)

for as much of the windows update range as can be found. Since they've
recently akamai'zed, this is somewhat predictable.

Downstream, you can either set up stateful filtering, or just be lazy and
hope that allowing the established flag is enough...

The ACL can be either templated or genericized per OS. (Replacing
<customer> with "any" means the customer PVC (assuming DSL) can only
hit Microsoft regardless of spoofing.) Similar ACLs can be set up
for Solaris, OSX, even various flavors of Linux. Being able to at
least semi-automate router config changes is a prerequisite, but not
insurmountable.
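As a sketch, the template above might look something like the following in Cisco IOS syntax. Every address here is a placeholder standing in for the real customer, nameserver, and update ranges, not an actual value:

```
! Upstream from the customer: only DNS to the ISP's nameserver and
! HTTP to the (placeholder) update range get through.
ip access-list extended QUARANTINE-IN
 permit udp host 192.0.2.10 host 192.0.2.53 eq domain
 permit tcp host 192.0.2.10 10.1.0.0 0.0.255.255 eq www
 deny   ip any any
!
! Downstream, the lazy variant: trust the established flag instead of
! keeping state.
ip access-list extended QUARANTINE-OUT
 permit udp host 192.0.2.53 eq domain host 192.0.2.10
 permit tcp 10.1.0.0 0.0.255.255 eq www host 192.0.2.10 established
 deny   ip any any
```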

This will, no doubt, increase support calls. How much, compared to a
pervasive worm, is left as an exercise for the reader.

That's a popular sentiment which derives its facade of reasonableness
from the notion that ISP's ought to provide unencumbered pipes to the
Internet core. However, it doesn't bear close scrutiny.

Would you say that ISP's should not filter spoofed source addresses?
That they should turn off "no ip directed broadcast"? Of course not,
because such traffic is clearly pathological with no redeeming social
value.

The tough part for the ISP is to decide what other traffic types are
absolutely illegitimate and should therefore be subject to being
Verboten on the net.

Well I understand why an ISP will filter these.

But those things you mentioned are not software vendor vulnerabilities, or vulnerabilities of some proprietary protocol used only by desktop systems.

Also, the ISP will filter anything it feels is a threat to its own systems, as that is where its own responsibility lies, and if they don't protect these they don't make any money.

Because an ISP chooses to filter IANA-reserved addresses (I would argue that not all ISPs perform this type of filtering; applying prefix lists and null routes is what I would expect an ISP to do, rather than filtering on source address, and I have received packets at my edge with an IANA-reserved address as the source), or to turn off IP directed broadcasts, does not compare to applying filters every single time some vendor releases faulty code or has its code exploited. These exploits affect the end-user nodes of the ISP's customers, not the ISP itself (on a grand scale). The ISP is a business.

G.

Mark Borchers writes:

Which Microsoft protocols should ISP's break today? Microsoft Exchange?
Microsoft file sharing? Microsoft Plug & Play? Microsoft SQL/MSDE?
Microsoft IIS?

All of the above. <g>

> He added that ISPs have the view and ability to prevent en-masse
> attacks. "All these attacks traverse their networks before they reach
> you and me. If they would simply stop attack traffic that has been
> identified and accepted as such, we'd all sleep better," Cooper said.

Bwahahaha. Ghod I love a good comedian.

Having recently pulped my head against the wall of a "network provider" too
clueless to provision decent IP connectivity, the last thing I want is to
have the ISP unilaterally decide what they're going to do with my packets.

The recurring theme is: I don't want my ISP to block anything I do, but
ISPs should block other people from doing things I don't think they
should do.

That's about my position, I guess. <g> There's a difference between
naively blocking ports or screwing with packets, though, and blocking known
dodgy behaviour (spoofed source addresses, for one). Yes, port 135 is a
known vector, and so is 4444 now, but they have their legitimate uses. If
you have evidence that someone is doing something dodgy with them, then you
should shut them down. But spanking everyone because some people
can't/won't take responsibility for their systems reeks of schoolroom
justice ("We're all going to sit here until the guilty party owns up").

So how long is reasonable for an ISP to give a customer to fix an
infected computer; when you have cases like Slammer where it takes only
a few minutes to infect the entire Internet? Do you wait 72 hours?
or until the next business day? or block the traffic immediately?

Immediately. The ISP is, IMO, responsible for the traffic of those they
connect to the Internet. Maybe I'm just showing my old-fashioned
values there, though.

Or some major ISPs seem to have the practice of letting infected
computers continue attacking as long as it doesn't hurt their
network.

"Welcome to my null0, O provider of loose morals".

Assuming a situation like the blaster worm, I'd expect a call to one of
the emergency contacts listed. Response time should be less than an hour.
(even if it is just a 'thanks, we're working on it')

I'm not aware of any operating system that is invulnerable. But clearly, some operating systems are more vulnerable than others :-)

This, in fact, is the single biggest thorn in our side at the moment. It's hard
to adopt a pious "patch your broken box" attitude when the user can't get it
patched without getting 0wned first...

This is where you start forcing users through a captive portal to the update
site of their vendor. I think they'll get the idea when every site they try to
bring up turns out to be windowsupdate.microsoft.com.
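On a Linux-based gateway, a minimal sketch of that redirect might look like the following. The interface name and addresses are invented for illustration, and a real deployment would point the DNAT at a proxy that only passes the vendor's update site:

```shell
# Rewrite all web traffic from the quarantined line to the local portal.
iptables -t nat -A PREROUTING -i dsl0 -s 192.0.2.10 -p tcp --dport 80 \
  -j DNAT --to-destination 10.0.0.80:8080
# DNS still has to work so the customer's browser can resolve names.
iptables -A FORWARD -i dsl0 -s 192.0.2.10 -p udp --dport 53 -j ACCEPT
# Everything else from that line is dropped until the box is patched.
iptables -A FORWARD -i dsl0 -s 192.0.2.10 -j DROP
```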

[snip]

Given the Lion worm that hit Linux boxes, and the fact there's apparently a
known remote-root (since fixed) for Apple's OSX, what operating systems would
you consider "acceptable"?

Anything that's not currently infected, and is patched to the current 'safe'
level.

Christopher L. Morrow's mention of asymmetric routing for multihomed
customers is more to the point, but if we can solve this for all those
single homed dial, cable and ADSL end-users and not for multihomed
networks, I'll be very happy.

I happen to look a lot like a single-homed ADSL end
user at certain levels, and yet I'm multihomed. I'd be very annoyed if
my ISP started blocking things just because my traffic pattern didn't
look like what they expect from a single-homed customer.

I'm sure knife salespeople find it extremely annoying that they can't
bring their wares along as carry-on when they fly. Sometimes a few people
have to be inconvenienced for the greater good.

In my opinion, this is a very unfortunate attitude largely based on FUD
and myth. Apologies for the off-topicness of the following example,
but, having just been through this level of greater good, I hope it
will serve some positive purpose if people realize how ridiculous it
gets if you let this go.

Frankly, I think the level of absurdity that the TSA and HSA have taken
things to speaks for itself. From May 21 of this year until August 1,
certain interpretations of our newfound greater good would have allowed
me to be classified as a terrorist and hauled off to prison. Why?
Because on May 21, depending on your interpretation of the statutes,
my possession of an until-then perfectly legal 2 pounds of black powder,
or my possession of an until-then perfectly legal Aerotech J-350 Ammonium
Perchlorate Composite Propellant rocket motor reload, suddenly changed
from a perfectly legal hobby to an act of terrorism for anyone who did
not possess a Low Explosives User Permit from the USDOJ/BATFE. What changed
on August 1? I got my permit (finally), which I applied for back in April.

The minor inconvenience involved in doing this consisted of:

  1. $100 to the feds.
  2. I had to file an FBI Fingerprint Card with the BATF
    + $30 to get the fingerprinting done
    + Took about 3 hours to track down the correct method of
      getting the fingerprinting done and actually have
      it done. (BATF instructions didn't work and it turned
      into a name-that-bureaucracy trip through 5 different
      agencies to find one that would do the fingerprinting
      (no, the FBI will not)).
  3. Federal Background Check
  4. Essentially sign away my 4th amendment rights and grant
    the BATFE permission to inspect my home at any time.
  5. Get a letter of agreement for contingency storage from at
    least one agency with a LEUP and a storage authorization
    (my LEUP is a non-storage LEUP).
  6. I now need to keep records of all my rocket motor purchases,
    usages, storages, and other dispositions for 10 years.

The greater good accomplished:

  Any nutcase that wants to can still pay cash for all the ammonium
nitrate and diesel fuel he/she wants with no identification required, no
record of the transaction, and no permit required.

  Did I mention that the Oklahoma City Federal building has proven
that AN+diesel does explode, while the NH state police explosives lab
has proven that APCP DOES NOT EXPLODE?

Sorry... I just don't see a greater good in forcing liability on ISPs
for forwarding IP datagrams with valid headers.

But, TCP to a port that isn't listening (or several ports that aren't
listening) _ARE_ what you are talking about blocking. This is not a
good idea.

Why not? I think it's a very good idea. TCP doesn't work if you only use
it in one direction, so blocking this doesn't break anything legitimate,
but it does stop a whole lot of abuse. (Obviously I'm talking about the
case where the lack of return traffic can be determined with a modicum of
reliability.)

1. Your assumption is false. There are multiple diagnostic things
  that can be accomplished with what appears to be a single-sided
  TCP connection.

2. I should be able to probe, portscan, or otherwise attack my own
  site from any location on the internet so long as I do not create
  a DOS or AUP violation on someone elses network that I have an
  agreement with.

3. Fixing the end hosts will stop a lot more abuse than breaking
  the network will.

It should be possible to have a host generate special "return traffic"
that makes sure that stuff that would otherwise be blocked is allowed
through.

I don't think it's desirable or appropriate to have everyone
re-engineer
their hosts to allow monitoring and external validation scans to get
around your scheme for turning off services ISPs should be providing.

But then you don't seem to have any problems with letting through denial
of service attacks so I'm not sure if there is any use in even discussing
this with you. Today, about half of all mail is spam, and it's only
getting worse. If we do nothing, tomorrow half of all network traffic
could be worms, scans and DOS. We can't go on sitting on our hands.

I don't propose sitting on our hands. I propose fixing the problem where
the problem is. What you are proposing makes as much sense as locking up
all the yeast producers to cut down on drunk driving. Sure, there are
fewer yeast producers than drunk drivers and they're in business, so they're
easier to find. However, just because it's easier doesn't make it correct
or even logical. Yes, this is an extreme example, but, other than degree
of separation, I don't see a lot of difference in the approaches.

Fixing the edge is harder, but, it will yield better results. Breaking
the core is easier, but, will yield lots of collateral damage and won't
necessarily do much more than create smarter worms.

Owen

Given the Lion worm that hit Linux boxes, and the fact there's apparently
a known remote-root (since fixed) for Apple's OSX, what operating systems
would you consider "acceptable"?

This is an old argument and it just doesn't get any better with time.

There is a fundamental difference between BUGS which all software has
and Micr0$0ft's level of engineered-in vulnerabilities and wanton
disregard for security in the name of features. If you cannot see
that many of the exploited vulnerabilities in Micr0$0ft were DESIGNED
into the software instead of accidental bugs, I can't help you. This
is not to say that Micr0$0ft has not had more than their fair share
of BUGS which created vulnerabilities as well.

BTW, how big was the patch for OSX's remote root? (less than 2MB)
How big was the patch for Lion? (don't have that number handy, but I remember
it being relatively small)
When was the last time you installed a Micr0$0ft security fix that was
less than 5MB? (I have yet to see one)

Shall we also compare the relative timetables between vulnerability awareness
and general patch availability?

Owen

    Frankly I don't want any of my ISPs filtering any of my traffic. I
think we need (especially enterprise administrators like myself) to take
some responsibility and place our own filters.

That's a popular sentiment which derives its facade of reasonableness
from the notion that ISP's ought to provide unencumbered pipes to the
Internet core. However, it doesn't bear close scrutiny.

I disagree.

Would you say that ISP's should not filter spoofed source addresses?

It depends. If a spoofed source address can be determined with 100% reliability
then, generally, yes. However, an ISP would generally only be able to
make this determination reliably on some of its own customers' links.
As such, that's not my traffic unless I'm already violating an AUP, or one
of said ISP's other customers is violating the ISP's AUP. Of course an
ISP has the right to block traffic which is in clear violation of the ISP's
AUP from the ISP's customers, who presumably signed the AUP as a condition
of their service agreement.

That they should turn off "no ip directed broadcast"? Of course not,

I cannot think of a single situation in which the ISP's configuration of
no ip directed-broadcast would affect my traffic unless I was sending
traffic _TO_ the broadcast address of some network within the ISP's backbone.
As such, I would, again, figure that falls into the AUP-violation category
above.

because such traffic is clearly pathological with no redeeming social
value.

No. Because such traffic is clearly in violation of the AUP I signed
as a customer, and for no other reason. My ISP has the right to block my
traffic in any case where I am in violation of the AUP. It has a similar
right with any of its other customers. Outside of that, no, an ISP
should not, generally, block traffic.

The tough part for the ISP is to decide what other traffic types are
absolutely illegitimate and should therefore be subject to being
Verboten on the net.

Again, this is a very slippery slope and relies on the fallacy that traffic
must have some socially redeeming value in order to be routed. In my eyes,
what traffic has value may be radically different from your opinion.
Allowing opinion to enter into rulesets is not, generally, a good plan.

Owen