East Coast outage?

Speaking on Deep Background, the Press Secretary whispered:

Am I the only one who is surprised that here we are now - over 7 years
later - and the electric grid industry still hasn't found/implemented a
design fix for this problem?

Guess what... Real Time is Hard. Real Time when dealing with
amounts of energy sufficient to move mountains, with dv/dt in
milliseconds, is even harder.

Spread it out over a grid that spans an appreciable fraction of a
wavelength, and it gets harder still. Then run parts at 105-110% and it
gets really hard.

Now, play Prisoner's Dilemma with it.... and watch the sparks.

I'm no power engineer but I do not envy them. Can YOU build a TCP/IP
network of equal size with the added requirement that you never drop
more than, say, one or two bits per hour? And if you ever do, you have
to cold boot the whole thing all over again?

The power industry designs a grid that runs so close to capacity that if^W when something big fails, the whole grid shuts down in a cascade. They know it:

"What happens if <$big_num_watts> power plant suddenly spikes"?

"We have a cascade failure thru the whole grid as switches overload and shut off. This causes blackouts over a wide area, and it takes many hours to restore electrical service. Also, many outlying TelCo facilities have battery backup power that will be exhausted before we can restore power to them, and there aren't enough gen sets around to keep them all running when their batteries die. So TelCo service (and by extension, also Internet service) will fail in many areas as a result of the widespread electrical grid failure."

"How often can we expect this to occur?"

"Oh, once every decade or so, on one of the major grids. It usually happens when electric use is at peak demand, late afternoon during the summer."

"Oh. Ok then. Carry on."

See:

<http://story.news.yahoo.com/news?tmpl=story&cid=578&ncid=578&e=2&u=/nm/20030815/ts_nm/power_grid_dc>

"We're a superpower with a third-world grid. We need a new grid," New Mexico Gov. and former Energy Secretary Bill Richardson told the CNN television network. "The problem is that nobody is building enough transmission capacity."

What's the point in having DOE and FERC regulation and oversight if they just rubber-stamp this type of design and endorse running at over-capacity on a routine basis? What happened to designing something so that it doesn't break when one big part fails, designing it so that switches don't get overloaded when a nearby plant spikes and goes off the grid? Is it *that* hard/expensive to have switching plants sufficiently resilient, with the extra capacity that can handle a *predictable and expected* event?

In California we design our systems to survive major earthquakes (e.g., magnitude 7.x), even though they only happen once every 10-20 years, and then only affect a relatively small portion of the state (small compared to the size of power grids). When we discover that the engineering isn't resilient enough (e.g., when the Cypress structure collapsed and a piece of the SF Bay Bridge fell during the Loma Prieta quake in 1989), we find out what went wrong and FIX it, not just in the one inadequately designed structure or system, but statewide, system-wide. (We have rebuilt a lot of bridges in the last 14 years!) Yet we keep on seeing electrical switches that can't handle the load when a nearby plant spikes or goes off the grid, causing cascade failures. It is predictable, and it has been happening for at least 40 years! Don't they notice that their design is inadequate and FIX it??? Quoting the above article again:

"According to the Electric Power Research Institute in Palo Alto, California, U.S. power demand has surged 30 percent in the last decade, while transmission capacity grew a mere 15 percent. "

They not only don't fix it, they let it get worse. sigh...

Well, at least we now have a great argument against regulation when they try to create a Department of the Internet to oversee the "Internet industry".

jc

Perhaps the lesson to learn is that very large networks don't always lead to very high stability. A much larger number of smaller, more autonomous generation and transmission facilities might have much more reasonable interconnection requirements, and hence less wide-ranging failure modes.

Seems to me, if more consumers were opportunistic generators (fuel cells, solar cells, wind turbines, whatever) the islands formed during interconnection failures would have far more accurately-matched supply and demand, and failures would stand a much better chance of having only local impact.

Joe (battery and GPRS powered, still)

>Then run parts at 105-110% and it gets really hard.

> The power industry designs a grid that runs so close to capacity that
> if^W when something big fails, the whole grid shuts down in a cascade.
> They know it:

Rubbish again.

Welcome to the wonderful world of physics. Ask your favourite physics
professor what

  E1 = E2

means in the context of yesterday's events.

The amount of energy generated must be balanced with the amount of energy
used at any time. Otherwise Bad Things (tm) will happen. The shutdown of
the grid is a very good thing compared to what would have happened had it
not shut down.

Alex

It seems to me that the power guys are still living somewhere in the last century. Is it really impossible to absorb power spikes? We can go from utility to battery or the other way around in milliseconds, so it should be possible to activate something that can absorb a short spike much the same way. Balancing intermediate-term generation/usage mismatches should be possible by simply communicating with users. There is lots of stuff out there that switches on and off periodically (all kinds of cooling systems, battery charging, lights), so let it switch on or off for a few minutes when the power network needs it to.

I think the idea that the power should be always present and always reliable is actually harmful, as it doesn't provide for any "congestion control" by bringing the users into the loop.
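
Something as dumb as the sketch below, on every water heater and AC compressor, would go a long way. The thresholds here are made up for illustration; real under-frequency settings vary by utility:

    # Toy frequency-responsive load controller: shed a deferrable load
    # when grid frequency sags (too much load, not enough generation),
    # reconnect once it recovers. Thresholds are illustrative only.
    NOMINAL_HZ = 60.0
    SHED_BELOW_HZ = 59.8
    RESTORE_ABOVE_HZ = 59.95

    def control_deferrable_load(freq_hz, load_is_on):
        """Return the desired on/off state for a deferrable load
        (water heater, AC compressor, battery charger) given the
        grid frequency measured at the outlet."""
        if load_is_on and freq_hz < SHED_BELOW_HZ:
            return False        # help the grid: switch off for a while
        if not load_is_on and freq_hz > RESTORE_ABOVE_HZ:
            return True         # grid has recovered: resume operation
        return load_is_on       # inside the hysteresis band: no change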

How many kVA are *you* switching?

How many kVA are running through those big 765 kV lines?

This is what a *circuit breaker* looks like at those sizes:

http://www.hhi.co.kr/english/IndustrialPowerSystem/product/highvoltage/product2-7.html

8,000 amps at 765 kV.

And that's just to *break* the circuit without vaporizing itself in the process.
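
Back of the envelope, assuming a three-phase line loaded to that full
8,000 A continuous rating:

    # Apparent power through one three-phase 765 kV line at 8,000 A.
    from math import sqrt

    v_line = 765e3                  # line-to-line voltage, volts
    i_phase = 8000.0                # continuous current rating, amps
    s_va = sqrt(3) * v_line * i_phase
    print(f"{s_va / 1e9:.1f} GVA")  # ~10.6 GVA

Ten-plus GVA through one breaker position. Your UPS transfer switch
this is not.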

> It seems to me that the power guys are still living somewhere in the
> last century. Is it really impossible to absorb power spikes? We can go
> from utility to battery or the other way around in milliseconds, so it
> should be possible to activate something that can absorb a short spike
> much the same way. Balancing intermediate-term generation/usage
> mismatches should be possible by simply communicating with users. There
> is lots of stuff out there that switches on and off periodically (all
> kinds of cooling systems, battery charging, lights), so let it switch
> on or off for a few minutes when the power network needs it to.

No, the problem is that by the time your users receive that information
and act upon it, you will either get a blackout (breakers) or a blowup
(transformers becoming breakers).

The reason it takes so long to restore the power is that to restore power
to section "A" one needs to deliver an amount nearly equal to what section
"A" needs at that specific time, and that is a lot of calculation.

Alex

I don't know, but this IEEE Spectrum article:
http://www.ece.umr.edu/courses/f02/ee207/spectrum/Grid/ suggests that
long-distance transmission is full of strange and nonlinear effects
such as 'reactive power', voltage support, and other technical
concepts, which made me conclude that there are nasty details that are
not widely known. Excerpts follow:

   Generators at another small power plant also tripped. The tripping
   was due to high reactive power output associated with supporting
   transmission voltage.

** Reactive power sidebar:

   Reactive power consumption tends to depress transmission voltage,
   while its production or injection tends to support
   voltage. Transmission lines both consume it (because of their
   series inductance) and produce it (from their shunt capacitance).

   Because transmission line voltage is held relatively constant, the
   production of reactive power is nearly constant. Its consumption,
   however, is low at light load and high at heavy load.

   The variable net reactive-power requirements of a transmission line
   give rise to a voltage control problem. Generators and
   reactive-power compensation equipment must absorb reactive power
   during light load, and produce it during heavy load.

   In a general emergency, when there are outages and high loading on
   remaining transmission lines, those lines consume reactive power
   that must be supplied by nearby generators and shunt capacitor
   banks. (Reactive power can be transmitted only over relatively
   short distances.)

   If reactive power cannot be supplied promptly enough in an area of
   decaying voltage, voltage may in effect collapse. Insufficient
   voltage support may in addition contribute to synchronous
   instability. --C.W.T.

** Done
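
To see that light-load/heavy-load flip numerically, here is a lossless,
single-phase toy calculation. The per-mile L and C values are
illustrative round numbers, not data for any real line:

    # Net reactive power of a transmission line: shunt capacitance
    # produces VARs (roughly constant, since line voltage is held
    # constant); series inductance consumes VARs (grows as current^2).
    from math import pi

    w = 2 * pi * 60.0        # rad/s at 60 Hz
    L = 1.6e-3               # series inductance, H/mile (illustrative)
    C = 0.02e-6              # shunt capacitance, F/mile (illustrative)
    V = 345e3                # line voltage, held roughly constant

    def net_q_mvar_per_mile(i_amps):
        q_produced = V**2 * w * C              # from shunt capacitance
        q_consumed = i_amps**2 * w * L         # from series inductance
        return (q_produced - q_consumed) / 1e6 # >0: line supplies VARs

    for i in (200, 800, 1400):                 # light, medium, heavy load
        print(f"{i:4d} A: {net_q_mvar_per_mile(i):+.2f} MVAR/mile net")

At light load the line is a net VAR source; load it heavily and it flips
to a net VAR sink, which is exactly the voltage-control problem the
sidebar describes.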

Later it talks about how ''fast capacitor-bank switching in southern
Idaho would have contained the initial 2 July outages''. It also says
something about: ''That August day, though, the power system
stabilizers at a large nuclear plant in Southern California were out
of service. (Power system stabilization at this location is especially
effective because it is near one end of the north-south intertie
oscillation mode.)''

I think that to really understand the material above one needs to read
the author's book: _Power System Voltage Stability_.

I also think that it's hard to appreciate the stability differences
between shipping power a few hundred feet and shipping power 1000
miles. It looks like long-distance shipping is the root cause of
the half-dozen major outages over the past 30 years. Why is the
northwest getting power from 800 miles away in Wyoming instead of
putting up its own plant?

Also, 'alternative generation' isn't there yet. For instance, from
California's wind energy site
http://www.energy.ca.gov/wind/overview.html: the total output of all
13,000 turbines in CA, *together*, averages only 400 MW of unreliable
power over the course of a year. Diablo Canyon (nuclear, California)
produces five times this; so does Jim Bridger (coal, Wyoming). After 20
years of effort and subsidies, that's 1% of CA's energy use, and 10% of
what was imported today. http://currentenergy.lbl.gov/ca/

Scott

Yep. That's why DC power transmission is the way to go. No potentially
harmful low-frequency EM emissions, either.

--vadim

Load management is actually fairly common here in Ohio in the cooperative
electric utilities. Residential users get rebates on heat pumps and water
heaters in exchange for allowing the utility to install RF-controlled
interrupting switches on them. Summer, ironically, isn't the problem for
them; it's winter, when they want to do peak demand management so as not
to ratchet into a higher wholesale demand rate class.

My guess is when it shakes out, the failure will be traced to a rather large
unit or interconnect tripping offline. Since the load is relatively
constant over a short enough time period, if you lose a couple hundred
MVA of feed onto the grid, the other generation on the grid is going to
attempt to absorb it. It works just like a drill, in reverse.
If you put a sanding wheel onto a drill and press it into wood, it will drag
the drill down. Opposite for generation. Steam is driving the turbine,
which is producing power. Throw more load on instantaneously, the rotor
will slow down. Now the units can absorb slight variations in load, but
500MVA falling off quickly cannot be instantaneously absorbed. So, the
rotor slows down. As it slows down, the frequency drops. When the
frequency gets low enough (and we're talking fractions of a Hz), protective
relaying kicks in and opens the breaker between the unit and the grid. This
compounds the effect, because the 500MVA loss may cause another 100MVA in
units to trip off relatively close. Now the grid has 600MVA to absorb and
that loads more units down, which drift farther down and they trip, which
adds another X MVA to the load, and it just keeps going. The same thing
can happen in reverse when the load is suddenly removed and the unit
overruns the frequency.
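
A toy model of that snowball (one lumped grid frequency, a made-up
stiffness constant, made-up unit sizes and trip settings; real governor
and relay dynamics are far richer):

    # Toy cascade: lose one big unit, the rest slow down as they absorb
    # the deficit; under-frequency relays trip more units, and so on.
    units = [500, 300, 300, 200, 200, 100]  # MVA output per unit
    load = sum(units)                       # grid starts balanced
    nominal_hz, trip_hz = 60.0, 59.5
    hz_sag_per_mva = 0.002                  # made-up grid "stiffness"

    units.pop(0)                            # the 500 MVA unit falls off
    while units:
        deficit = load - sum(units)
        freq = nominal_hz - deficit * hz_sag_per_mva
        print(f"online {sum(units):4d} MVA, f = {freq:.2f} Hz")
        if freq >= trip_hz:
            break                           # survivors ride it through
        units.pop(0)                        # next relay opens its breaker
    else:
        print("grid dark: every unit tripped")

In real life, under-frequency load shedding drops blocks of customers to
arrest the slide; this toy never sheds load, so it always goes dark.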

This effect was observed a couple of times for a muni electric I used to
work with. They had a tie line to an IOU and when it opened in the summer
because of lightning, overload, etc., it would trip all their units offline
because the tie was carrying inbound on the order of 40% of their load.
Interestingly, it had effects on the IOU also, since the muni was consuming
watts but supplying VARs, trying to help maintain power factor on the IOU
system. Units can only produce so many MVAs. MVA = sqrt(MW ** 2 + MVAR **
2). As reactive loads go up (like AC units in the summer), MVARs go up.
According to the formula, MW production goes down since the unit can only
produce so many MVAs (it's a nice right triangle: MVA is the hypotenuse, MW
is the horizontal leg, MVAR is the vertical leg, and power factor is the
cosine of the angle. With a purely resistive load like a light bulb, PF = 1
since there are no VAR flows there [cos 0 = 1]). They do cheat sometimes
and use capacitors or synchronous condensers/reactors (an overexcited motor
which looks like a variable capacitor, kind of cool) to try and even out
the power factor. The bite is, Joe Consumer doesn't pay for VARs, he pays
for watts. But the transmission and distribution system has to account for
and carry the VAR flows also. And if you size the lines and forget the VAR
flows, in the summer, things can go boom.
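
For anyone who wants to plug numbers into that triangle (the values here
are made up):

    # Power triangle: MVA is the hypotenuse, MW the horizontal leg,
    # MVAR the vertical leg; power factor is the cosine of the angle.
    from math import sqrt, atan2, cos

    mva_rating = 100.0      # all the unit can produce
    mvar = 60.0             # reactive demand (summer AC load)
    mw = sqrt(mva_rating**2 - mvar**2)   # real power left over
    pf = cos(atan2(mvar, mw))            # equals mw / mva_rating
    print(f"{mw:.0f} MW real power at PF {pf:.2f}")  # 80 MW at PF 0.80

Every VAR the unit has to supply is real-power capacity nobody gets
billed for.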

Everyone whines because of the "antiquated" system. The system worked like
it should. It may suck to be without power for 48 hours, but try 18 months
if the unit came apart. You don't go to Ace Hardware and buy a new 50MVA
steam driven unit. And the nukes tripping off was probably more an artifact
of frequency instability on the grid than a problem with the nukes
themselves. Coal, gas or nuke, you still have to maintain frequency. As an
old EE prof of mine said, the system will seek stability. Seeking may be
nice, like flow redistribution, or it may be ugly, like the rotor and frame
separating. Either way, it ends up stable (albeit maybe in the field next
to the plant) ...

Maybe a stupid question...

But what if the huge distribution systems used DC and the whole thing was only converted to AC close to the users in small installations? This would get rid of the frequency problems.

Once upon a time, Iljitsch van Beijnum <iljitsch@muada.com> said:

> Maybe a stupid question...
>
> But what if the huge distribution systems used DC and the whole thing
> was only converted to AC close to the users in small installations?
> This would get rid of the frequency problems.

Basic physics. To run DC at the power levels required, the "wire" would
have to be over 100 feet in diameter IIRC. Look up the Edison vs. Tesla
power arguments for all kinds of information on AC vs. DC.

This is one of the problems that makes the room-temperature
superconductor a "holy grail" research area.

What are the "required levels"? There is a 600 MW DC sea cable between Sweden and Germany with an outer diameter of about five inches (130 mm).

> My guess is when it shakes out, the failure will be traced to a rather
> large unit or interconnect tripping offline.

It will be traced back to a huge branch from a huge tree that fell and took
down a couple of transmission lines, which then melted the road in a fairly
expensive neighborhood in northeastern Ohio. That started a chain reaction
because it was too big a ripple.

Geo.

Edison and Tesla's arguments took place long before switching power supplies
and the development of insulating materials capable of withstanding hundreds
of kilovolts.

The size of the conductor is a function of IR losses. Losses are a function
of the resistance of the conductor and the current passing through it. By
raising the voltage, the current drops proportionally for the amount of
power delivered, and hence the conductor size also drops. The problem in
the Edison/Tesla days was the lack of a practical way to convert high
voltage DC to low voltage (120 volts or so) for distribution to homes and
businesses.
200 kV light bulbs and switches are kind of impractical for home use. :-)
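
A quick worked example of the scaling (the 10-ohm line resistance is
made up; the square law is the point):

    # I^2 * R line loss for the same delivered power at two voltages.
    p = 100e6                   # 100 MW to deliver
    r = 10.0                    # line resistance, ohms (illustrative)
    for v in (120e3, 765e3):
        i = p / v               # higher voltage -> less current
        loss = i**2 * r         # loss falls with the square of current
        print(f"{v / 1e3:.0f} kV: {loss / 1e6:.2f} MW lost")
    # 120 kV: ~6.94 MW lost; 765 kV: ~0.17 MW lost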

The advantage of AC is that a simple transformer can be used to step down
the voltage from transmission to distribution levels. Before high voltage
semiconductors and switching supplies, high voltage DC transmission was
useless as there was no practical means to convert it to the lower voltage
levels useful in homes. Rotary motor-generator sets would have been the
only choice. Huge, not very efficient, lots of (big) moving parts. Not
trivial to maintain.

AC still makes sense for distribution, but HV DC transmission lines are
becoming the norm. Think about some very large SCRs and associated parts
to convert to AC for distribution.

http://www.hydro.mb.ca/our_facilities/ts_nelson.shtml

Chris Adams wrote:

> Basic physics. To run DC at the power levels required, the "wire" would
> have to be over 100 feet in diameter IIRC. Look up the Edison vs. Tesla
> power arguments for all kinds of information on AC vs. DC.

This was under the assumption that the transmission line was at the same voltage as the end-user, because there were no good DC-DC voltage converters in that day. And a few bazillion amps at 120V needs a really fat wire.

There's no significant wire size difference between a DC and AC line at the same ampacity.

Voltage conversion is the key. _If_ you can do it, then transmission isn't a problem.

http://www.hydro.mb.ca/our_facilities/ts_nelson.shtml

From: "Chris Adams" <cmadams@hiwaay.net>
To: <nanog@merit.edu>
Sent: Friday, August 15, 2003 10:48 PM
Subject: Re: East Coast outage?

> Once upon a time, Iljitsch van Beijnum <iljitsch@muada.com> said:
> > Maybe a stupid question...
> >
> > But what if the huge distribution systems used DC and the whole thing
> > was only converted to AC close to the users in small installations?
> > This would get rid of the frequency problems.
>
> Basic physics. To run DC at the power levels required, the "wire"
> would have to be over 100 feet in diameter IIRC. Look up the Edison
> vs. Tesla power arguments for all kinds of information on AC vs. DC.

Huh? Where in the physics of Ohm's law is Hz a factor? Having lived off
the grid, where systems are often at most 48 V, yes, the wires have to be
several aughts of gauge to carry the large amperages. Much the same in the
A/B DC legs in a colo. Up the volts and the amps go down to produce the
same power (watts, or work).
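
To put numbers on it (the copper resistivity is a real constant; the
5 kW load, 2% loss budget, and 50-foot run are a made-up example):

    # Copper cross-section needed to hold resistive loss to 2% over a
    # ~50 ft (15 m) one-way run, same 5 kW delivered, at two voltages.
    rho = 1.68e-8               # resistivity of copper, ohm*m
    length_m = 15.0             # one-way run (doubled for the loop)
    p_watts, loss_frac = 5000.0, 0.02
    for v in (48.0, 480.0):
        i = p_watts / v
        r_max = loss_frac * p_watts / i**2   # max loop resistance
        area_mm2 = rho * 2 * length_m / r_max * 1e6
        print(f"{v:.0f} V: {i:.0f} A, {area_mm2:.1f} mm^2 of copper")
    # 48 V: ~104 A and ~55 mm^2 (1/0 AWG territory);
    # 480 V: ~10 A and ~0.5 mm^2. Ten times the volts, 100x less copper.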

I am a little rusty on this one, but I seem to remember that AC travels
only on the outside skin of the wire but DC uses all the wire.

Once upon a time, Chris Lewis <clewis@nortelnetworks.com> said:

> Chris Adams wrote:
> > Basic physics. To run DC at the power levels required, the "wire" would
> > have to be over 100 feet in diameter IIRC. Look up the Edison vs. Tesla
> > power arguments for all kinds of information on AC vs. DC.
>
> This was under the assumption that the transmission line was at the same
> voltage as the end-user, because there were no good DC-DC voltage
> converters in that day. And a few bazillion amps at 120V needs a really
> fat wire.

To the many that (properly) corrected me: yes, this is what I was
thinking about (well, that and the server I was restoring at the time).
I wasn't aware that there are high voltage DC long-haul lines that then
are converted to AC for local distribution.

hackerwacker@tarpit.cybermesa.com wrote:

> I am a little rusty on this one, but I seem to remember that AC travels only on the outside skin of the wire but DC uses all the wire.

"Skin effect" is only significant at high frequencies (lots of megahertz and up). At 60hz it can be ignored.