DHS letters for fuel and facility access

On some other mailing lists, FCC-licensed operators are reporting that they have received letters from the Department of Homeland Security authorizing "access" and "fuel" priority.

Occasionally, DHS issues these letters after natural disasters such as hurricanes, for hospitals and critical facilities. I haven't heard of them being issued for pandemics.

Fuel priority? Are they expecting shortages and/or power outages?

-James

I suspect it's more to solve issues for truck drivers whose job is to deliver fuel. Some areas have been instituting curfews, and this would satisfy the local authorities who may stop such a driver.

- Jared

It's a form letter.

The same letter is printed no matter what disaster it's for. I don't think they are expecting power outages (unless there is a co-disaster at the same time). It's just the standard form.

In response to a snarky question offlist. Yes, the DHS letters are just copies. Yes, the DHS letters are easy to counterfeit.

Not a lawyer, but counterfeiting an official federal document during a national state of emergency likely violates many federal and state laws.

DON'T DO IT.

We (Verizon, not me) lost a central office during 9/11 because it ran out of fuel; the tankers were staged but were not allowed to enter Manhattan.

This clears that pathway for us now, and it's been fairly standard protocol ever since.

-Ben

Got it!

I get that, thanks. I wasn't trying to be snarky, just genuinely curious.

As an ex-broadcaster, I have never seen one of these letters, even during our 1989 earthquake. In fact, I knew of one station that ordered a genset because power was down for several days after the earthquake, and it was commandeered by law enforcement while being driven to the transmitter site. Three times! Three different gensets!

The SF Bay Area shelter-in-place rules specifically exempt news media, telecommunications, and internet services, including infrastructure services thereof (presumably large internet companies, network and security vendors, etc.), as well as fuel deliveries.

I could use the infrastructure-vendor exemption, but $current_client_company is on mandatory WFH for the next five weeks, and the team had already moved to doing it informally before it became official.

I’d name the company but someone might contact me for an emergency and I have nothing to do with the customer incidents team. I don’t even know who to forward stuff to. Suffice it to say that everyone doing network security infra at all the vendors is being as safe as possible under the circumstances. We’re trying to keep all the lights on for you.

-George

It’s true, we’re all here, and we’re standing by. Also if anyone on NANOG needs something we can do, please reach out to me via email and I will make it happen. You’re not alone during times of crisis.
-Ben.

That same fuel shortage killed all Internet traffic to sub-Saharan Africa. Took us a while to figure out what was wrong with the satellite link to the US.

  paul

What year was that :-)?

Mark.

Does anyone know who to contact at DHS to see about getting a letter like this for an operator?

WISPA has the letters available in the Members Section of the website.

September 2001. Just after the 9/11 attacks, all of lower Manhattan was shut down. Our link (IIRC) was to a satellite farm on Staten Island, across the bay to 60 Hudson. Power went off, diesels kicked in, fuel trucks were not allowed in, and a few days later we lost all international connectivity.

Lots of important people lost power as well, so the feds decided to let the diesel tankers in after a few days’ deliberations.

  paul


We had some interesting failures during 9/11 as well -- for some
reason, the UPS didn't kick in, so everything went down -- and then
came back a few minutes later as the generators came online -- and
then went down again ~2 hours later. It turns out that the genset air
filters got clogged with dust and suffocated the diesels.
This was "fixed" a few days later by brushing them off with brooms and
paintbrushes -- but by that point they had completely discharged the
24V starter batteries, and so someone (not me!) had to lug up a pair
of car batteries and jumper cables. They restarted, ran for a while,
and then stopped again.

It turns out that getting a permit to store lots of diesel on the roof
is hard (fair enough), and so there was only a small holding tank on
the roof, and the primary tanks were in the basement -- and the
transfer pump from the basement to roof storage was not, as we had
been told, on generator power....

We had specified that the transfer pump be on the generator feed,
there was a schematic showing at is being on the generator feed, there
was even a breaker with a cable marked "Transfer Pump (HP4,5)" ---
but it turned out to just be a ~3ft piece of cable stuffed into a
conduit, and not actually, you know, running all the way down to the
basement and connected to the transfer pump.

W

Good reminder to test, test, test...


Indeed -- and we had tested, multiple times. Unfortunately, the only
realistic way we would have found this would have been to kill power
to the building and run on generators for many hours, and then,
likely, we would only have discovered it when the gensets ran out of
fuel and fell over. IIRC, there are (or were) noise and pollution
regulations in NYC under which you could only run generators for
short periods of time (30 min?) unless it was an actual emergency. I
also seem to remember something about having to test at night,
probably also for noise...

But, yes, regular testing is clearly a good practice -- but so is
having a good BCP/DR plan (which you also test :-)).
W

At my workplace there are enough generators, and enough fuel for the generators.

There is enough time to power things down properly.

The IT infra seems to be working OK, although some remote workers complain about a few VPN issues.

There is, however, worry that the IT infra might not keep up, or that not all employees will have access to email. To address that, they have built an Internet-facing website with internal announcement info for employees. They have also created a registry where employees record their external email addresses, so we can receive internal announcements at external addresses, something that was more or less prohibited by IT policy in normal times.

The internal emergency phone number (a two-digit number available only internally, by landline) has just been shut down; a notice was circulated announcing it. It is standard procedure in case of issues.

My desk voicemail is still active and I can check it remotely, but I'm not sure for how long. A restart of the desk power typically resets the phone and I lose the voicemail forever. I expect that restart in a few weeks or so, as part of the standard procedure of cycling power routinely. But I don't expect to be at my desk any time in the next month to press the button on the phone and reactivate the voicemail.

Alex