Anybody know what's going on with CW??
Also, anybody from CNN... you've barely got connectivity southbound.
Done from nitrous...
Tracing the route to cnn.com (220.127.116.11)
1 p219.t3.ans.net (18.104.22.168) 0 msec 0 msec 4 msec
2 h12-1.t60-6.Reston.t3.ans.net (22.214.171.124) [AS 1673] 8 msec 8 msec 8 msec
3 f2-1.t60-2.Reston.t3.ans.net (126.96.36.199) [AS 1673] 8 msec 8 msec 8 msec
4 h9-1.t104-0.Atlanta.t3.ans.net (188.8.131.52) [AS 1673] 28 msec 28 msec 28 msec
5 f2-0.c104-10.Atlanta.t3.ans.net (184.108.40.206) [AS 1673] 32 msec 28 msec 28 msec
6 * * *
7 * * h0-0.enss3222.t3.ans.net (220.127.116.11) [AS 1324] !A
Just got off the phone with their first-level trouble reporting. They say
that "Someone put in a block against C&W, and the resulting traffic
overloaded our backbone".
I've been looking for a better answer to show up here -- if e-mail can get
through to me.
I'd really like to know too. CW has been having massive problems since last
night. Basically, our CW link is totally unusable at the moment, so this is
looking like a 15+ hour CW outage.
5 core1-fddi-1.Sacramento.cw.net (18.104.22.168) 68.215 ms 48.120 ms 73.001 ms
6 core7.SanFrancisco.cw.net (22.214.171.124) 44.076 ms 50.416 ms 49.339 ms
7 Hssi5-1-0.BR1.SFO1.alter.net (126.96.36.199) 1025.028 ms 1135.500 ms *
8 114.ATM3-0.XR1.SFO1.ALTER.NET (188.8.131.52) 1071.055 ms * *
6 core2.SanFrancisco.cw.net (184.108.40.206) 157.969 ms 70.837 ms 97.183 ms
7 borderx1-fddi-1.SanFrancisco.cw.net (220.127.116.11) 121.741 ms 95.664 ms 88.498 ms
8 * * *
I came across this last night.
1740 1239 1800 209 286 3561, (suppressed due to dampening)
18.104.22.168 from 22.214.171.124 (126.96.36.199)
Origin IGP, valid, external
Dampinfo: penalty 3342, flapped 6 times in 00:17:37, reuse in
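For anyone unfamiliar with that "Dampinfo" line: under BGP route flap dampening (RFC 2439), each flap adds to a penalty that decays exponentially with a configured half-life, and the route is suppressed until the penalty falls below the reuse threshold. A minimal sketch of the arithmetic, assuming common Cisco-style defaults (15-minute half-life, reuse limit 750), which may not match the router above:

```python
import math

# Assumed dampening parameters (Cisco-style defaults, not confirmed for
# the router that produced the output above):
HALF_LIFE_MIN = 15.0   # penalty halves every 15 minutes
REUSE_LIMIT = 750.0    # route is re-advertised once penalty drops below this

def minutes_until_reuse(penalty: float) -> float:
    """Time for an exponentially decaying penalty to fall to the reuse limit."""
    if penalty <= REUSE_LIMIT:
        return 0.0
    # Solve penalty * 2^(-t / half_life) = reuse_limit for t.
    return HALF_LIFE_MIN * math.log2(penalty / REUSE_LIMIT)

# The snippet above shows a penalty of 3342 after 6 flaps:
print(round(minutes_until_reuse(3342), 1))  # -> 32.3 minutes with these defaults
```

So with default timers, a route at penalty 3342 stays suppressed for roughly half an hour even if it stops flapping immediately.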
There have been two problems affecting the performance of our network in
the past 24 hours. The first problem, which has been hammered out on this
list, is the leaking of our routes by other providers. Our NOC is still
working with some other ISPs to get this issue resolved. Currently, some
large backbones are still seeing more-specific pieces of our netblocks
that are not pointing directly to our network.
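Why leaked more-specifics pull traffic away: routers forward on the longest matching prefix, so a leaked more-specific wins over the legitimate aggregate regardless of where it originated. A small sketch with purely illustrative prefixes (these are not C&W's actual netblocks):

```python
import ipaddress

def best_route(dest, table):
    """Longest-prefix match: pick the most specific prefix containing dest."""
    matches = [net for net in table if dest in net]
    return table[max(matches, key=lambda n: n.prefixlen)] if matches else None

# Hypothetical routing table: the provider announces an aggregate /16,
# while another network leaks a more-specific /18 covering part of it.
table = {
    ipaddress.ip_network("10.1.0.0/16"): "C&W (legitimate aggregate)",
    ipaddress.ip_network("10.1.64.0/18"): "leaked more-specific",
}

print(best_route(ipaddress.ip_address("10.1.70.1"), table))
# -> leaked more-specific: traffic for that range no longer points at C&W
```

Hosts outside the leaked /18 still follow the aggregate, which is why only "pieces" of the netblocks misbehave.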
In addition to this (and possibly caused by this), we had an increase in
traffic at our San Francisco node. This increase congested one of our FDDI
rings, which caused problems for a subset of our routers in the San
Francisco area. Steps were taken to alleviate the immediate problem, and it
will be closely monitored. To be clear, the problem was localized to just a
handful of routers at a single node.
The congestion problem was fixed early this morning. I apologize for the delay.
Manager, Backbone Design
Cable & Wireless Internet Engineering