Hello,
It might be interesting if some people were to post when they received
their first attack packet, and where it came from, if they happened to
be logging.
Here is the first packet we logged:
Jan 25 00:29:37 EST 216.66.11.120
--Phil
ISPrime
Interestingly, looking through my logs for UDP 1434, I saw a sequential
scan of my subnet like so:
Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.1,1434 PR udp len 20 33 IN
Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.2,1434 PR udp len 20 33 IN
Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.3,1434 PR udp len 20 33 IN
All from 206.176.210.74, all source port 53 (probably trying to
use people's DNS firewall rules to get around being filtered).
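That trick relies on stateless filters that blanket-allow UDP with a source
port of 53 so that DNS replies can get back in: a probe sourced from port 53
matches the "DNS" rule before any port-1434 block is consulted. A minimal
sketch of that failure mode, using a hypothetical rule set rather than
anyone's actual filter:

    # Hedged sketch: why a scan sourced from UDP port 53 can slip through a
    # stateless packet filter. The rules below are hypothetical -- they just
    # model the common pattern of "allow UDP source port 53 so DNS replies
    # get back in" being evaluated before any port-1434 block.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        proto: str
        sport: int
        dport: int

    # First matching rule wins; default policy is deny.
    RULES = [
        ("allow", lambda p: p.proto == "udp" and p.sport == 53),   # "DNS replies"
        ("deny",  lambda p: p.proto == "udp" and p.dport == 1434), # SQL monitor port
    ]

    def verdict(pkt: Packet) -> str:
        for action, match in RULES:
            if match(pkt):
                return action
        return "deny"

    # The Jan 16 scan packets: source port 53, destination port 1434.
    scan = Packet("udp", sport=53, dport=1434)
    print(verdict(scan))  # -> "allow": the DNS rule matches before the 1434 block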
After that, I saw nothing until the storm started last night from many
different source IPs, which was at Jan 24 21:31:53 PST for me.
-c
* Clayton Fiske (clay@bloomcounty.org) [030125 12:55] writeth:
> > It might be interesting if some people were to post when they received
> > their first attack packet, and where it came from, if they happened to
> > be logging.
> >
> > Here is the first packet we logged:
> > Jan 25 00:29:37 EST 216.66.11.120
>
> Interestingly, looking through my logs for UDP 1434, I saw a sequential
> scan of my subnet like so:
>
> Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.1,1434 PR udp len 20 33 IN
I'm not sure that going back that far is going to offer anything
conclusive, as it could have been any number of scanners looking for
vulnerabilities. Looking at my logs back to the 19th, I have isolated hits
on the 19th and 23rd. However, they really started to come in force at
22:29:39 MDT, two seconds after the 00:29:37 EST packet quoted above. My
first attempt came from an IP owned by Level 3 Communications.
Jan 23 02:43:44 c6509-core 10829487: 47w0d: %SEC-6-IPACCESSLOGP: list 130
denied udp 192.41.65.170(48962) -> 166.70.10.63(1434), 1 packet
Jan 24 22:29:39 c6509-core 10966964: 47w1d: %SEC-6-IPACCESSLOGP: list 130
denied udp 65.57.250.28(1210) -> 204.228.150.9(1434), 1 packet
Jan 24 22:29:44 border 7577864: 30w2d: %SEC-6-IPACCESSLOGP: list 100 denied
udp 129.219.122.204(1170) -> 204.228.132.100(1434), 1 packet
Jan 24 22:29:50 border 7577865: 30w2d: %SEC-6-IPACCESSLOGP: list 100 denied
udp 212.67.198.3(1035) -> 166.70.22.47(1434), 1 packet
Jan 24 22:29:52 xmission-paix 425068: 7w0d: %SEC-6-IPACCESSLOGP: list 100
denied udp 61.103.121.140(3546) -> 166.70.22.87(1434), 1 packet
Jan 24 22:29:52 border 7577868: 30w2d: %SEC-6-IPACCESSLOGP: list 100 denied
udp 65.57.250.28(1210) -> 204.228.132.18(1434), 1 packet
Jan 24 22:29:55 c6509-core 10966977: 47w1d: %SEC-6-IPACCESSLOGP: list 130
denied udp 61.103.121.140(3546) -> 166.70.10.8(1434), 1 packet
Jan 24 22:29:57 c6509-core 10966979: 47w1d: %SEC-6-IPACCESSLOGP: list 130
denied udp 12.24.139.231(3315) -> 204.228.140.81(1434), 1 packet
Jan 24 22:29:58 c6509-core 10966980: 47w1d: %SEC-6-IPACCESSLOGP: list 130
denied udp 140.115.113.252(3780) -> 207.135.133.228(1434), 1 packet
Jan 24 22:29:59 c6509-core 10966981: 47w1d: %SEC-6-IPACCESSLOGP: list 130
denied udp 17.193.12.215(3117) -> 207.135.155.209(1434), 1 packet
Jan 24 22:30:00 border 7577873: 30w2d: %SEC-6-IPACCESSLOGP: list 100 denied
udp 209.15.147.225(4543) -> 204.228.133.186(1434), 1 packet
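For anyone collating "first packet" times out of ACL logs in this
%SEC-6-IPACCESSLOGP format, a small parsing sketch along these lines may
help. It assumes each entry sits on a single syslog line (the wrapping above
is just mail formatting), and the file name is only a placeholder:

    # Hedged sketch: pull the first UDP/1434 hit per source address out of
    # syslog lines in the %SEC-6-IPACCESSLOGP format quoted above. Assumes
    # one entry per line; the file name is a placeholder, and timestamps are
    # kept as strings since these syslog lines carry no year.

    import re
    from collections import OrderedDict

    LINE = re.compile(
        r"^(?P<ts>\w{3}\s+\d+\s[\d:]{8}).*%SEC-6-IPACCESSLOGP: list \S+ denied"
        r"\s+udp\s+(?P<src>[\d.]+)\((?P<sport>\d+)\)\s+->\s+(?P<dst>[\d.]+)\(1434\)"
    )

    def first_hits(path="acl.log"):
        first = OrderedDict()          # source IP -> first timestamp seen
        with open(path) as fh:
            for line in fh:
                m = LINE.search(line)
                if m and m.group("src") not in first:
                    first[m.group("src")] = m.group("ts")
        return first

    if __name__ == "__main__":
        for src, ts in first_hits().items():
            print(ts, src)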
According to Clayton Fiske:

> Interestingly, looking through my logs for UDP 1434, I saw a sequential
> scan of my subnet like so:
>
> Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.1,1434 PR udp len 20 33 IN
> Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.2,1434 PR udp len 20 33 IN
> Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.3,1434 PR udp len 20 33 IN
>
> All from 206.176.210.74, all source port 53 (probably trying to
> use people's DNS firewall rules to get around being filtered).
>
> After that, I saw nothing until the storm started last night from many
> different source IPs, which was at Jan 24 21:31:53 PST for me.
Ditto on the sequential scan well before the actual action, except
that mine came on Jan. 19th:
Jan 19 10:59:11 Deny inbound UDP from 67.8.33.179/1 to xxx.xxx.xxx.xxx
...
...
The scan went serially across several subnets I manage inside 209.67.0.0.
My sources were all 67.8.33.179, all source port 1. The actual worm
propagation began to hit my logs at 00:28:16 EST on Jan 25.
Cheers.
-travis
Our first (this is EST):
Jan 25 00:29:44 external.firewall1.oct.nac.net firewalld[109]: deny in eth0 404 udp 20 114 61.103.121.140 66.246.x.x 3546 1434 (default)
61.103.121.140 = a host somewhere on GBLX
Date: Sat, 25 Jan 2003 06:58:46 -0500
From: Phil Rosenthal
> It might be interesting if some people were to post when they
> received their first attack packet, and where it came from,
> if they happened to be logging.
I agree, except that such high flow rates make even millisecond-scale
clock skew between observers a huge issue...
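To put rough numbers on that: if the aggregate scan rate grows roughly
exponentially (public analyses of the worm's early phase put the doubling
time around 8.5 seconds; that figure is only an assumption here), a site's
local arrival rate ramps from one packet a minute to ten a second in a bit
over a minute, at which point the inter-arrival gap is about 100 ms and a
sub-second clock offset between observers easily reorders "first packet"
claims. A back-of-the-envelope sketch:

    # Hedged back-of-the-envelope: why clock skew matters when comparing
    # "first packet" times. The doubling time is an assumption (public
    # analyses of the worm's early phase put it around 8.5 seconds); the
    # starting and ending rates are arbitrary illustration values.

    import math

    DOUBLING_TIME = 8.5          # seconds, assumed
    START_RATE    = 1.0 / 60.0   # one packet per minute at some site
    END_RATE      = 10.0         # ten packets per second at the same site

    # Time for the locally observed rate to ramp from START_RATE to END_RATE
    # under pure exponential growth:
    ramp = DOUBLING_TIME * math.log2(END_RATE / START_RATE)
    print(f"ramp from 1/min to 10/s: ~{ramp:.0f} seconds")

    # Once a site sees END_RATE packets/sec, the mean gap between packets is:
    gap_ms = 1000.0 / END_RATE
    print(f"mean inter-arrival gap at 10 pps: {gap_ms:.0f} ms")

    # Two sites whose clocks disagree by a few hundred milliseconds can
    # therefore swap the apparent order of their observations.
    for skew_ms in (50, 200, 1000):
        swapped = END_RATE * skew_ms / 1000.0
        print(f"clock skew {skew_ms:>4} ms ~ {swapped:.1f} packets of ambiguity")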
Eddy
Here is what we saw at MIT (names are subnets). These are the times when the flooding started to cause us problems.
sloan 00:31:36
oc1-t1 00:32:07
nox-link 00:32:37
extr2-bb 00:33:13
All are EST. The numbers are accurate to a minute *at best* because of the delay before the NOC is scheduled to test them.
-Jeff
Here are the IPs I got at 5:29:40 GMT, the time I got 10 packets / second
Our first ones came from:
1. L(3) space, swip'd out to an outfit in Florida
2. Sprint space, swip'd out to an outfit in Indiana
3. repeat of #1
4. Korea
5. Korea
All times are given in both EST and UTC, and are locked to a stratum 1 time source. Any researcher who needs the full logs need only ask. Our firewalling didn't permit any of this in.
Jan 25 00:29:42 gatei46 214: Jan 25 05:29:41 UTC: %SEC-6-IPACCESSLOGP: list 101 denied udp 63.209.100.22(1253) -> 208.254.46.93(1434), 1 packet
Jan 25 00:29:49 gatei46 215: Jan 25 05:29:48 UTC: %SEC-6-IPACCESSLOGP: list 101 denied udp 208.14.240.150(4315) -> 208.254.46.3(1434), 1 packet
Jan 25 00:29:52 gatei46 216: Jan 25 05:29:51 UTC: %SEC-6-IPACCESSLOGP: list 101 denied udp 63.209.100.22(1253) -> 208.254.46.150(1434), 1 packet
Jan 25 00:30:01 gatei46 217: Jan 25 05:30:00 UTC: %SEC-6-IPACCESSLOGP: list 101 denied udp 218.234.13.22(4762) -> 208.254.46.62(1434), 1 packet
Jan 25 00:30:03 gatei46 218: Jan 25 05:30:02 UTC: %SEC-6-IPACCESSLOGP: list 101 denied udp 211.172.232.82(3830) -> 208.254.47.188(1434), 1 packet
[snip]
> Ditto on the sequential scan well before the actual action, except
> that mine came on Jan. 19th:
>
> Jan 19 10:59:11 Deny inbound UDP from 67.8.33.179/1 to xxx.xxx.xxx.xxx
I have a similar packet (but only one) from the same host (time is
NTP-synced, EST).
Jan 20 12:55:47 firewall kernel: Packet log: input - ppp0 PROTO=17
67.8.33.179:1 65.83.153.253:1434 L=29 S=0x00 I=20300 F=0x0000 T=110 (#23)
> The scan went serially across several subnets I manage inside 209.67.0.0.
> My sources were all 67.8.33.179, all source port 1. The actual worm
> propagation began to hit my logs at 00:28:16 EST on Jan 25.
My first worm packet:
Jan 25 00:32:52 firewall kernel: Packet log: input - ppp0 PROTO=17
131.128.163.118:1631 65.83.153.253:1434 L=404 S=0x00 I=2610 F=0x0000 T=113
(#23)
and continued until
Jan 25 11:48:44 firewall kernel: Packet log: input - ppp0 PROTO=17
151.99.167.133:30725 65.83.153.253:1434 L=404 S=0x00 I=2 F=0x0000 T=111 (#23)
when BS.N apparently shut down port 1434.
--
Redundancy? You can say that again!
> I have a similar packet (but only one) from the same host (time is
> NTP-synced, EST).
>
> Jan 20 12:55:47 firewall kernel: Packet log: input - ppp0 PROTO=17
> 67.8.33.179:1 65.83.153.253:1434 L=29 S=0x00 I=20300 F=0x0000 T=110 (#23)
That's a busy machine apparently:
Jan 19 01:13:16 gw ipmon[32123]: 01:13:15.993484 ed0 @0:20 b 67.8.33.179,1
-> 66.92.x.x,1434 PR udp len 20 29 IN
(also EST, NTP synced)
C
+-----------------+
> 216.069.032.086 | Kentucky Community and Technical College System
> 066.223.041.231 | Interland
> 216.066.011.120 | Hurricane Electric
> 216.098.178.081 | V-Span, Inc.
+-----------------+
HE.net seems to be a recurring theme. (I speak no evil of them --
actually, there are some good people over there.)
However, it appears that one of the 'root' boxes of this attack was at HE.
This is the third or fourth time I've seen their netblocks mentioned as
the source of some of the first packets.
-- Alex Rubenstein, AR97, K2AHR, alex@nac.net, latency, Al Reuben --
-- Net Access Corporation, 800-NET-ME-36, http://www.nac.net --
> +-----------------+
> > 216.069.032.086 | Kentucky Community and Technical College System
> > 066.223.041.231 | Interland
> > 216.066.011.120 | Hurricane Electric
> > 216.098.178.081 | V-Span, Inc.
> +-----------------+
> HE.net seems to be a recurring theme. (I speak no evil of them --
> actually, there are some good people over there.)
>
> However, it appears that one of the 'root' boxes of this attack was at HE.
> This is the third or fourth time I've seen their netblocks mentioned as
> the source of some of the first packets.
Looking at the router traffic graphs for the east and west coasts, the
attack started at the same time, just before 9:30 PM PST (12:30 AM EST).
I'm sure the owners of some of the infected boxes would be able to give a
better chronology based on when the logs for other services (e.g. HTTP)
they might have been running stopped.
After looking at flow stats and figuring out that this wasn't an attack by
a single compromised box, we blocked UDP port 1434 on several of our core
routers. We then went back and contacted customers whose IPs showed up in
our flow stats. Some were reachable and coordinated with our support to
disconnect their MSSQL servers or otherwise shut down MSSQL. We then went
through all our customer aggregation switches looking for ports that showed
the pattern of the attack, i.e. 25000 pps inbound to our switch and 10
packets outbound on a 100 Mbps port. We shut down about 7 customer ports
in New York and about 16 in California. These customers were contacted,
and the majority of them have patched their machines; a few are still off.
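A minimal sketch of the kind of port sweep described above: flag access
ports pushing a large, nearly one-way stream of packets toward the switch.
It assumes per-port packet rates have already been collected elsewhere
(e.g. from SNMP counter deltas); the port names and thresholds are made up,
not Hurricane's actual tooling:

    # Hedged sketch: flag switch ports matching the worm's traffic pattern
    # described above -- tens of thousands of pps inbound from the customer,
    # almost nothing outbound. Per-port rates are assumed to have been
    # collected elsewhere; names and thresholds are hypothetical.

    from typing import NamedTuple

    class PortStats(NamedTuple):
        port: str
        pps_in: float    # packets/sec from the customer into our switch
        pps_out: float   # packets/sec from our switch toward the customer

    def looks_infected(p: PortStats,
                       min_in: float = 20000.0,
                       max_out: float = 100.0) -> bool:
        """High inbound pps with a nearly silent return path."""
        return p.pps_in >= min_in and p.pps_out <= max_out

    samples = [
        PortStats("gi2/1", pps_in=25000, pps_out=10),    # matches the pattern
        PortStats("gi2/2", pps_in=1800,  pps_out=1650),  # ordinary two-way traffic
        PortStats("gi2/3", pps_in=31000, pps_out=40),    # matches the pattern
    ]

    for p in samples:
        if looks_infected(p):
            print(f"{p.port}: candidate for shutdown "
                  f"({p.pps_in:.0f} pps in, {p.pps_out:.0f} pps out)")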
Some Hurricane sites, like our San Jose site, were unaffected (no change
from normal traffic levels), indicating that any Windows users there had
previously patched.
Mike.
+----------------- H U R R I C A N E - E L E C T R I C -----------------+
Just to add to this: we noticed a sudden burst and terminated ports to
infected customers as well. I never noticed anything odd from HE, and we
also applied 1434 blocks very quickly. Thankfully, our most infected
customer crashed his internal core and took himself offline anyway :).
> +-----------------+
> > 216.069.032.086 | Kentucky Community and Technical College System
> > 066.223.041.231 | Interland
> > 216.066.011.120 | Hurricane Electric
> > 216.098.178.081 | V-Span, Inc.
> +-----------------+
> HE.net seems to be a recurring theme. (I speak no evil of them --
> actually, there are some good people over there.)
First of all: this worm started so fast that, to find its source, we have
to look into the past, not at the 'flash point'. HE.net is a very large
colo provider, so I am in no way surprised that they show up. The same
goes for Interland.
Morning all,
In light of the recent attack and the dramatic impact it had on Internet connectivity, I was wondering if any operators (especially of exchange points) would provide information on utilization, especially any common backplane percentages.
I have received information on router utilization; it seems some routers may have held up better than others. That information is useful, but I am working on some optical exchange point / optical metro designs, and this might have a dramatic impact if one considers things like OBGP, UNI 1.0, ODSI, etc.
A working hypothesis: an attack of this type on a dynamically allocated bandwidth network (such as an optical exchange running OBGP) would have had a drastic effect on resources. All the available spare capacity would likely have been allocated out, so the "bucket" would have run dry. This assumes that exchange points of this type (or metro-area dynamic layer-1 transport networks) manage total bandwidth so as to always maintain adequate available capacity.
With the rapid onset of an attack such as Saturday morning's, models I have show that not only would the spare capacity have been consumed quickly, but that in a tiered (colored) customer system the lower-service-level customers (lead, silver, etc.) would have had their capacity confiscated and reallocated to the platinum and gold customers. The impact on them would have been much greater, especially if the "lead" customers were not just using their links for an off-hours server backup or as redundant circuits to production circuits on another network. If they were low-cost IP providers competing on price, they would have been drastically affected.
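A toy model of the reallocation hypothesized above: a fixed capacity pool,
tiered customers, and a demand spike that lets the higher tiers take
capacity away from the lower ones. The tiers, demands, and pool size are
all made up for illustration; this is not any real exchange's allocator:

    # Hedged toy model of the hypothesis above: a fixed capacity pool shared
    # by tiered customers. When a demand spike exhausts the spare pool, the
    # higher tiers are filled first and the lower ("lead") tiers get whatever
    # is left. All tiers, demands and the pool size are invented.

    TOTAL_CAPACITY = 10_000  # arbitrary units of fabric capacity

    # (name, tier priority: lower number = higher priority, demanded capacity)
    customers = [
        ("platinum-1", 0, 4000),
        ("gold-1",     1, 3000),
        ("silver-1",   2, 2500),
        ("lead-1",     3, 2000),
        ("lead-2",     3, 1500),
    ]

    def allocate(customers, pool):
        """Fill demand in strict priority order until the pool runs dry."""
        grants = {}
        for name, _tier, demand in sorted(customers, key=lambda c: c[1]):
            grant = min(demand, pool)
            grants[name] = grant
            pool -= grant
        return grants

    grants = allocate(customers, TOTAL_CAPACITY)
    for name, _tier, demand in customers:
        got = grants[name]
        note = "" if got == demand else f"  <- starved ({demand - got} short)"
        print(f"{name:<11} wanted {demand:>5}, got {got:>5}{note}")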
The effect might have been a cascading failure. If enough IP service providers were affected (disconnected) and their peering circuits or metro links dropped, that traffic would have rerouted and flooded other IXs and private peering links. Even without taking the load of BGP adds/withdraws into consideration, the traffic levels alone would have had a severe impact on border routers and networks. At least, that would be my assessment.
One other consideration is that optical IXs will have a greater impact on the Internet, possibly both good and bad. With larger circuit sizes of OC-48 and OC-192 for peering, an attack would have a greater ability to flood more traffic, and a failure of a peering session would reroute a larger volume of traffic. A possible benefit is that larger circuit sizes might mean an attack cannot overwhelm the larger capacities, especially if backbone capacity is the constraining factor rather than peering circuits or optical VPN circuits at the optical IX.
Any feedback, devil's advocate position, voodoo or "other" is welcome.
Dave
Here are the first ten minutes of packets that one of my firewalls
intercepted:
(PST Times)
Jan 24 21:32:19: UDP Drop SRC=211.205.179.133 LEN=404 TOS=0x00 PREC=0x00 TTL=115 ID=22340 PROTO=UDP SPT=1739 DPT=1434 LEN=384
Jan 24 21:32:54: UDP Drop SRC=128.122.40.59 LEN=404 TOS=0x00 PREC=0x00 TTL=108 ID=1366 PROTO=UDP SPT=1086 DPT=1434 LEN=384
Jan 24 21:33:11: UDP Drop SRC=141.142.65.14 LEN=404 TOS=0x00 PREC=0x00 TTL=113 ID=28703 PROTO=UDP SPT=1896 DPT=1434 LEN=384
Jan 24 21:38:54: UDP Drop SRC=211.57.70.131 LEN=404 TOS=0x00 PREC=0x00 TTL=102 ID=9940 PROTO=UDP SPT=1654 DPT=1434 LEN=384
Jan 24 21:39:34: UDP Drop SRC=202.96.108.140 LEN=404 TOS=0x00 PREC=0x00 TTL=108 ID=17122 PROTO=UDP SPT=4742 DPT=1434 LEN=384
Jan 24 21:41:40: UDP Drop SRC=200.162.192.22 LEN=404 TOS=0x00 PREC=0x00 TTL=108 ID=21153 PROTO=UDP SPT=3121 DPT=1434 LEN=384
Jan 24 21:41:51: UDP Drop SRC=64.70.191.74 LEN=404 TOS=0x00 PREC=0x00 TTL=109 ID=46498 PROTO=UDP SPT=1046 DPT=1434 LEN=384
Jan 24 21:42:06: UDP Drop SRC=129.242.210.240 LEN=404 TOS=0x00 PREC=0x00 TTL=107 ID=2336 PROTO=UDP SPT=1574 DPT=1434 LEN=384
I checked, and none of these source addresses had sent any visible
probes into my network within the prior month.
The really weird thing is that while I was interactively watching
router logs I saw a bunch of packets where neither the SRC nor the DST
was within my network. I looked up the MAC address of the packets,
and they seemed to be coming from a client's colocated box (an apparently
un-firewalled Linux machine). I wonder if a worm spread prior to the
attack to seed/start it by sending spoofed attack packets to a large
list of known vulnerable servers.
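The check described there -- packets whose source IP does not belong on the
local segment, grouped by Ethernet source address to find the box actually
emitting them -- sketched against a made-up observation format; the
prefixes, MAC addresses, and IPs below are placeholders, not anyone's real
network:

    # Hedged sketch of the spoof-tracing step described above: take observed
    # (source MAC, source IP) pairs, drop anything whose source IP legitimately
    # belongs on the local networks, and group the rest by MAC. A MAC emitting
    # many off-net source addresses is a good candidate for the spoofing host.
    # All prefixes, MACs and addresses here are placeholders.

    import ipaddress
    from collections import defaultdict

    LOCAL_NETS = [ipaddress.ip_network(n)
                  for n in ("192.0.2.0/24", "198.51.100.0/24")]

    def is_local(ip: str) -> bool:
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in LOCAL_NETS)

    # (source MAC, source IP) pairs as seen on the colo segment.
    observations = [
        ("00:11:22:33:44:55", "192.0.2.17"),     # normal: local source address
        ("00:de:ad:be:ef:01", "203.0.113.9"),    # off-net source address
        ("00:de:ad:be:ef:01", "198.18.45.200"),  # off-net source address
        ("00:de:ad:be:ef:01", "203.0.113.77"),   # off-net source address
    ]

    spoofed_by_mac = defaultdict(set)
    for mac, src in observations:
        if not is_local(src):
            spoofed_by_mac[mac].add(src)

    for mac, sources in spoofed_by_mac.items():
        print(f"{mac} emitted {len(sources)} off-net source IPs: {sorted(sources)}")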
It does make sense though that the origin packets would have all been
spoofed. Unfortunately I can't find any items like that in my log
files.
-Steve
On Sun, Jan 26, 2003 at 12:09:33AM -0500, Alex Rubenstein eloquently stated:
Although this MS-SQL worm used a lot of bandwidth because of the embedded
exploit code, worms usually scan first and attempt exploits afterwards.
Such a scan requires few bytes, so even a T-3 would carry a lot of host
scans per second, and could cause many routers on the receiving end to die
because of packets-per-second, new-ARPs-per-second, or syslogs-per-second
limitations.
I think the worst danger of large circuits would be the uplink capacity: a
bunch of infected hosts would easily fill up a T-3 trying to scan for new
hosts to attack, limiting the worm's propagation speed, but an OC-192 might
end up carrying all of the scan traffic and infecting more hosts faster.
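A rough back-of-the-envelope on that point, using the 404-byte IP datagrams
visible in the firewall logs earlier in the thread (LEN=404); the line rates
are nominal and framing overhead is ignored, so these are upper bounds
rather than measurements:

    # Hedged back-of-the-envelope: how many 404-byte worm packets per second
    # fit into a T-3 versus an OC-192. Line rates are nominal and framing /
    # SONET overhead is ignored, so these are upper bounds, not measurements.

    PACKET_BYTES = 404               # IP datagram size seen in the logs (LEN=404)
    BITS_PER_PACKET = PACKET_BYTES * 8

    LINKS = {
        "T-3":    45_000_000,        # ~45 Mbps
        "OC-48":  2_488_000_000,     # ~2.5 Gbps
        "OC-192": 9_953_000_000,     # ~10 Gbps
    }

    for name, bps in LINKS.items():
        pps = bps / BITS_PER_PACKET
        print(f"{name:>6}: ~{pps:,.0f} packets/sec of scan traffic")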
Rubens
> I have received information on router utilization; it seems some routers may have held up better than others. That information is useful, but I am working on some optical exchange point / optical metro designs, and this might have a dramatic impact if one considers things like OBGP, UNI 1.0, ODSI, etc.
> A working hypothesis: an attack of this type on a dynamically allocated bandwidth network (such as an optical exchange running OBGP) would have had a drastic effect on resources. All the available spare capacity would likely have been allocated out, so the "bucket" would have run dry. This assumes that exchange points of this type (or metro-area dynamic layer-1 transport networks) manage total bandwidth so as to always maintain adequate available capacity.
The problem with the ONI and MP<whatever>S models is that there is little
or no correlation between the topology-aware layers. You will most likely,
sooner or later, end up in a situation where the layers start to oscillate.
So I would not build an optical IX around this model -- if I would build an
optical IX at all.
- kurtis -