UUNET connectivity in Minneapolis, MN

Date: Thu, 11 Aug 2005 16:06:05 +0000 (GMT)
From: "Christopher L. Morrow" <christopher.morrow@mci.com>
Subject: Re: UUNET connectivity in Minneapolis, MN

> we had a loss of commercial power (coned) in the Downers Grove terminal.
> terminal is up on generator power now.
>

That seems to map to the internal fire drill as well. Anyone else hit by
this event?

The electric utility had a substation burn up, resulting in a medium-sized
geographic area without power -- something like 17,000 residences according
to news reports (no numbers on 'commercial' customers provided).

AT&T has a facility in the affected area and was also without utility power.

Rumor mill says that Sprint had a (moderately small) number of T-3 circuits
affected, as well.

Info from the local news stations:

http://www.nbc5.com/news/4836579/detail.html?z=dp&dpswid=2265994&dppid=65192

http://www.chicagotribune.com/news/local/chi-050811outage,0,6108555.story?coll=chi-news-hed

AT&T must adhere to some different engineering standards; the devices we monitor there were all fine, no blips... but all of the MCI customers we have in IL, MI, WI, and MN had issues...

Power went out at 4:30-ish and the circuits all dumped about 8:30 PM...

Then bounced until 6:30 AM this morning.
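
That ~4-hour gap between losing utility power and the circuits dropping is about what you would expect from a DC plant riding on batteries alone. A rough back-of-the-envelope sketch (all figures hypothetical, not MCI's actual plant numbers):

    # Estimate how long a -48V DC battery string carries the load once
    # utility power and the generator are both gone. Numbers are made up
    # for illustration only.
    def battery_runtime_hours(capacity_ah: float, load_amps: float,
                              usable_fraction: float = 0.8) -> float:
        """Usable amp-hours divided by the steady-state load current."""
        return (capacity_ah * usable_fraction) / load_amps

    # e.g. a 2000 Ah string feeding a 400 A load holds for roughly 4 hours,
    # which lines up with a ~4:30 PM outage and an ~8:30 PM circuit drop.
    print(f"{battery_runtime_hours(2000, 400):.1f} hours")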

Not sure I understand how on earth something like this happens... keeping the power on is not that complicated.

JD

Maybe they actually *HAVE* standby generators. I have no sympathy for any provider for failure to plan for the inevitable power failure. I only have moderate sympathy for a failed standby generator. It's a diesel bolted to an alternator. The design was solidly debugged by the 1930s..... exercise it, be obsessive about preventative maintenance, keep the fuel polished, have extra filters on hand, and it will rarely let you down. Having to dish out SLA credits isn't punishment enough for failing to have standby power.
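
The "exercise it, be obsessive about preventative maintenance" point boils down to tracking a handful of recurring tasks and never letting them slip. A toy sketch of that idea (task names, intervals, and dates are illustrative, not any provider's real program):

    # Flag overdue generator-maintenance tasks. Intervals and dates are
    # invented for illustration.
    from datetime import date, timedelta

    INTERVALS = {
        "load-test generator under building load": timedelta(days=30),
        "polish diesel fuel / check for algae":    timedelta(days=180),
        "replace fuel filters":                    timedelta(days=365),
        "exercise the transfer switch":            timedelta(days=90),
    }

    last_done = {
        "load-test generator under building load": date(2005, 7, 1),
        "polish diesel fuel / check for algae":    date(2005, 2, 15),
        "replace fuel filters":                    date(2004, 8, 1),
        "exercise the transfer switch":            date(2005, 5, 20),
    }

    today = date(2005, 8, 11)
    for task, interval in INTERVALS.items():
        status = "OVERDUE" if today - last_done[task] > interval else "ok"
        print(f"{task}: {status}")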

On the other hand, if the customers are in contract, and a provider can get away with having their network fall down when there's a power outage and few if any customers actually go through the hassle of seeking SLA reimbursement, then it's really the customers' fault for the provider not having a generator. Yup, this is the screwed up world we live in. :-)

> Not sure I understand how on earth something like this happens... keeping
> the power on is not that complicated.

Is that so?

Have you read the report on the Northeast blackout of 2003?
https://reports.energy.gov/

--Michael Dillon

I certainly understand why utility power goes out, and that is exactly why MCI losing power confuses me. I am pretty sure that someone at MCI also realizes why blackouts happen and how fragile things are.

It is irresponsible for a Tier 1 infrastructure provider not to be able to generate its own power and to have large chunks of its network fail due to the inability to power it. I bet you every SBC CO in the affected area was still pushing power out to customer prems.

Unless there is some sort of crazy story related to why a service provider could not keep the lights on, this should not have been an issue with proper operations and engineering.

JD

James D. Butt

> Unless there is some sort of crazy story related to why a service provider
> could not keep the lights on, this should not have been an issue with
> proper operations and engineering.

The building where one of our node sites is located got hit with an electrical fire in
the basement one day. The fire department shut off all electrical power to the
whole building, including the big diesel generators sitting outside the back
of the building, so all we had was battery power until that ran out 6 hours
later.

How do you prepare for that?

Geo.

George Roettger
Netlink Services

Yes, that is an exception... not what happened in this case...

You can come up with a lot of valid exceptions...

There are many reasons why a Tier 1 provider does not stick all its eggs in multi-tenant buildings... smart things can be done with site selection. I am not saying every customer needs to run their network like this... but the really big guys should, at the core of their network.

JD

> Unless there is some sort of crazy story related to why a service provider
> could not keep the lights on, this should not have been an issue with
> proper operations and engineering.

I'll let others tell you about the rat that caused a
short circuit when Stanford attempted to switch to
backup power. Or the time that fire crews told staff
to evacuate a Wiltel colo near San Jose because of a
backhoe that broke a gas pipe. The staff were prevented
from starting their backup generators after power to
the neighborhood was cut.

In my opinion, the only way to solve this problem is
to locate colos and PoPs in clusters within a city
and deliver resilient DC power to these clusters from
a central redundant generator plant. The generator plants,
transmission lines and clusters can be engineered for
resiliency. And then the highly flammable and dangerous
quantities of fuel can be localized in a generator plant
where they can be kept a safe distance from residential
and office buildings.
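
The resiliency argument here is just multiplication of failure probabilities, as long as the plants really do fail independently. A minimal sketch with made-up numbers (the thread's own stories -- fire marshals, bad transfer switches, fuel algae -- are exactly the common-mode failures that break the independence assumption):

    # Unavailability of one generator plant vs. two independent, redundant
    # plants feeding the same cluster. The 1% figure is illustrative only.
    p_one_down = 0.01
    p_both_down = p_one_down ** 2   # assumes fully independent failures

    print(f"single plant down: {p_one_down:.2%} of the time")
    print(f"both plants down:  {p_both_down:.4%} of the time")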

Unfortunately, to do this sort of thing requires vision
which is something that has been lacking in the network
operations field of late.

--Michael Dillon

On Behalf Of James D. Butt
> Unless there is some sort of crazy story related to why a service provider
> could not keep the lights on, this should not have been an issue with
> proper operations and engineering.

6 stories from the trenches

Once a backhoe decided to punch through a high-pressure natural gas main
right outside our offices. The fire department had us
shut down ANYTHING that MIGHT make a spark.
No, nothing was able to run. It did not matter
that we had UPSes and such;
everything went dark for hours.

During the Northridge earthquake (the one during the
world series in sf.ba.ca.us) there was a BUNCH of
disruption of the infrastructure: drives were shaken
until they crashed, power went down all over the area,
telco lines got knocked down, underground vaults got
flooded, and data centers went offline.

When ISDN was king (or ya get a T-1),
I worked for an ISP in the Bay Area that
was one of the few to have SOME
connectivity when MAE-W went down. We had a T-1 that
went "north" to another exchange point, and even though
that little guy had 50%+ packet loss, it kept chugging.
We were one of the few ISPs that
had ANY net connection; most of the people
went in through their local MAE
(that was in the days before connecting
to a MAE required that you be connected to
several other MAEs).

Once while working for a startup in SF,
I pushed for UPSes and backup power gen
sets for our rack of boxes, and I was told
that we were "in the middle of the financial district
of SF", that BART/the cable cars ran nearby,
and that a big huge substation was within
rock-throwing distance of our building,
not to mention a power plant a couple
miles away. There was no reason for us to
invest in backup gen sets, or hours of
UPS time.... I asked what the procedure
was if we lost power for an extended
period of time, and I was told, "we go home".

Wellllllll...... the power went off to the
entire SF region, and I was able to shut
down the equipment without too
much trouble, because my laptop was plugged into a UPS
(at my desk) and the critical servers were on a UPS,
as well as the hub I was on. After I verified that we
were still up at our colo (via my CDPD modem),
I stated the facts to my boss and told him
that I was following his established
procedure for extended power loss.
I was on my way home. (boss = not happy)

A backup generator failed at a co-lo because
of algae in the diesel fuel.

Another time a valve broke in the building's HVAC system,
sending pink gooey water under the door
and into the machine room.

There are reasons why a bunch of 9's get piled together;
weird stuff does happen. This is NANOG; each
'old timer' has a few dozen of these events
they can relate.

The first 2 ya really can't prepare for, other
than having all your stuff mirrored
'some place else'; the rest are preventable,
but they were still rare.

(Back to an operational slant.)
Get a microwave T-2 and shoot it over to some
other building, get a freaking cable modem as
a backup, or find another way to get your lines out.

If having things work is important to you,
YOU should make sure it happens!
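
One crude way to make it happen: ping something that is only reachable over the primary path, and shove the default route at the backup if it goes quiet. A minimal sketch under assumed Linux tooling (the addresses are documentation-range placeholders; real failover belongs in your routing protocol, not a script):

    # Toy failover watchdog: if a beacon behind the primary link stops
    # answering pings, repoint the default route at a backup next-hop.
    # Requires root; addresses below are placeholders.
    import subprocess
    import time

    BEACON = "192.0.2.1"              # reachable only via the primary link
    BACKUP_NEXT_HOP = "198.51.100.1"  # e.g. the cable-modem gateway

    def primary_is_up() -> bool:
        # One ICMP echo with a 2-second timeout (Linux ping flags).
        return subprocess.call(["ping", "-c", "1", "-W", "2", BEACON],
                               stdout=subprocess.DEVNULL) == 0

    while True:
        if not primary_is_up():
            subprocess.call(["ip", "route", "replace", "default",
                             "via", BACKUP_NEXT_HOP])
        time.sleep(30)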

If people are preventing you from doing your job
(having servers up and reachable), CYA and
point it out in the post mortem.

-charles

Curse the dark, or light a match. You decide, it's your dark.
Valdis.Kletnieks in NANOG

So a while ago, we were in the middle of some major construction to put in
infrastructure for a supercomputer. Meanwhile, as an unrelated project, we
installed a new diesel backup generator to replace an older one that was
undersized for our current systems, and took several hours of downtime
on a Saturday to wire the beast in.

The next Friday, some contractors are moving the entrance to our machine room
about 30 feet to the right, so you don't walk into the middle of the
supercomputer. Worker A starts moving a small red switch unit from its
location next to where the door used to be to its new location next to where
the door was going to be. Unfortunately, he did it before double-checking with
Worker B that the small red switch was disarmed...

Ka-blammo, a Halon dump... and of course that's interlocked with the power,
so once the Halon stopped hissing, it was *very* quiet in there.....

Moral: It only takes one guy with a screwdriver.....

So I am standing in a datacenter fiddling with some fiber and listening to an electrician explaining to the datacenter owner how he has just finished auditing all of the backup power systems and that the transfer switch will work this time (unlike the last 3 times). This is making me a little nervous, but I keep quiet (unusual for me)...

The electrician starts walking out of the DC, looks at the (glowing) Big Red Button (marked "Emergency Power Off") and says "Hey, why y'all running on emergency power?" and presses the BRB. Lights go dark, disks spin down, Warren takes his business elsewhere!

This is the same DC that had large basement mounted generators in a windowless building in NYC. Weeks before the above incident they had tried to test the generator (one of the failed transfer switch incidents), but apparently no one knew that there were manual flues at the top of the exhausts.... Carbon monoxide, building evacuated...

Warren

> During the Northridge earthquake (the one during the
> world series in sf.ba.ca.us) there was a BUNCH of
> disruption of the infrastructure: drives were shaken
> until they crashed, power went down all over the area,
> telco lines got knocked down, underground vaults got
> flooded, and data centers went offline.

Sorry.. wrong earthquake..

The Loma Prieta quake of 10/17/1989 occurred during the opening
game of the World Series, featuring the San Francisco Giants
and the Oakland Athletics in an all-SF-Bay-Area series.
The epicenter was in the Santa Cruz mountains, in the vicinity of
Mt Loma Prieta. Commercial power was lost to much of the Bay Area.

The Northridge quake occurred on 1/17/1994, in southern California.
The epicenter was located in the San Fernando Valley, 20 miles NW of
Los Angeles.

As far as I recall, network disruption was minimal following the
Northridge quake, with a few sites offline {due to a machine room flooding
at UCLA?}

               -- Welcome My Son, Welcome To The Machine --
Bob Vaughan | techie @ tantivy.net |
| P.O. Box 19792, Stanford, Ca 94309 |
-- I am Me, I am only Me, And no one else is Me, What could be simpler? --