Why the US Government has so many data centers

At enterprise storage costs, that much storage will cost more than the OC-12, and then add data center and backup costs. The total could be 2-3x the annual OC-12 cost.

If your org can afford to buy non-top-line storage then it would probably be cheaper to go local.

However, you should check how much of the bandwidth is actually storage. I see multimillion dollar projects without basic demand / needs analysis or statistics more often than not.

George William Herbert

Politicians and sales people with inaccurate cost savings. Say it isn't so.

If you think these are $100 million "data centers," maybe a few billion dollars in cost savings is possible over 10 years. But if a majority of the "data centers" are a single server in a room, moving that server to a different room may not save billions of dollars. But no one will remember. Prediction: there will be a glowing report in a year or so about the huge cost savings, and then a couple of years later an Inspector General report about problems counting things.
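For scale, a crude back-of-the-envelope sketch (Python) follows; every figure in it is an invented assumption rather than anything from an audit, but it shows why any billions would have to come from large facilities rather than closet servers:

  # Every figure below is an invented assumption, not from any OMB or IG document.
  closet_count = 10_000                  # assumed single-server "data centers"
  closet_savings_per_year = 5_000        # assumed $/yr saved by relocating one server
  facility_count = 50                    # assumed genuinely large facilities
  facility_savings_per_year = 5_000_000  # assumed $/yr saved per consolidated facility
  years = 10

  closet_total = closet_count * closet_savings_per_year * years        # $500,000,000
  facility_total = facility_count * facility_savings_per_year * years  # $2,500,000,000
  print(f"Closet servers:   ${closet_total:,} over {years} years")
  print(f"Large facilities: ${facility_total:,} over {years} years")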

If that's what taxpayers want, that's what they'll get.

Datacenter isn't actually an issue since there's room in the same racks
(ironically, in the location the previous fileservers were) as the Domain
Controllers and WAN Accelerators. Based on the "standard" (per the Windows
admins) file storage space of 700 meg, that sounds like 3TB for user
storage. Even if it were 30TB, I still can't see a proper setup costing
more than the OC-12 after a period of two years.
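A minimal sketch of that arithmetic (Python); only the 700 meg quota comes
from the paragraph above, while the user count, circuit price, and $/TB are
assumptions for illustration:

  users = 4_300              # assumed headcount: 4,300 * 700 MB is roughly 3 TB
  per_user_mb = 700          # the "standard" per-user quota
  print(f"User storage: {users * per_user_mb / 1_000_000:.1f} TB")           # ~3.0 TB

  oc12_annual_cost = 120_000     # assumed yearly circuit cost
  storage_cost_per_tb = 3_000    # assumed $/TB for a mirrored enterprise setup
  capacity_tb = 30               # the pessimistic 30 TB case

  print(f"Local storage, one-time: ${capacity_tb * storage_cost_per_tb:,}")  # $90,000
  print(f"OC-12 over two years:    ${2 * oc12_annual_cost:,}")               # $240,000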

Org is within the Federal Government, so they're not allowed to buy
non-top-line anything. I agree we should check how much bandwidth is
storage, but since there's a snowball's chance in hell of them actually
making a change, it's almost certainly not worth the paperwork.

Based on the "standard" (per the Windows admins) file storage space of 700 meg, that sounds like 3TB for user storage. Even if it were 30TB, I still can't see a proper setup costing more than the OC-12 after a period of two years.

> Org is within the Federal Government, so they're not allowed to buy non-top-line anything.

Million-plus dollar NetApps or EMC units are not at all unusual.

This is a terrible pity if a small NAS from Imation/Nexsan would work redundantly for $150k or less.

> I agree we should check how much bandwidth is storage, but since there's a snowball's chance in hell of them actually making a change, it's almost certainly not worth the paperwork.

This is the kind of thing whoever runs it needs to know. It proves my point, and it argues against local datacenters where, much of the time, nobody bothers to even collect performance metrics.

George William Herbert

* Sean Donelan:

When you say "data center" to an ordinary, average person or reporter;
they think of big buildings filled with racks of computers. Not a
lonely server sitting in a test lab or under someone's desk.

I suspect part of the initiative is to get rid of that mindset, which
leads to such gems as “we don't have any servers, so we only need to
secure our clients”.

In other words, Hillary Clinton's bathroom closet is a data center.

I was trying to resist the urge to chime in on this one, but this discussion has continued for much longer than I had anticipated... So here it goes

I spent 5 years in the Marines (out now), during which one of my MANY duties was to manage these "data centers" (a part of me just died as I used that word to describe these server rooms). I can't get into what exactly I did or with what systems on such a public forum, but I'm pretty sure that most of the servers I managed would be exempted from this paper/policy.

Anyways, I came across a lot of servers in my time, but I never came across one that I felt should've been located elsewhere. People have brought up the case of personal share drives, but what about the combat camera (think public relations) that has to store large quantities (100s of 1000s) of high resolution photos and retain them for years? Should I remove that COTS (commercial off the shelf) NAS underneath the Boss' desk, put it in a data center 4 miles down the road, and force all that traffic down a network that was designed for light to moderate web browsing and email traffic, just so I can check a box for some politician's reelection campaign ads about how they made the government "more efficient"?
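A rough back-of-the-envelope (every number below is an assumption, not a real figure from any unit or network) shows what forcing that archive across such a link would mean:

  photos = 300_000           # "100s of 1000s" of photos, assumed 300k
  mb_per_photo = 25          # assumed high-resolution image size
  archive_tb = photos * mb_per_photo / 1_000_000
  print(f"Archive size: {archive_tb:.1f} TB")                # ~7.5 TB

  wan_mbps = 100             # assumed branch link sized for web/email traffic
  usable_share = 0.3         # assumed fraction of the link you could actually take
  seconds = archive_tb * 8_000_000 / (wan_mbps * usable_share)
  print(f"One full transfer: ~{seconds / 86_400:.0f} days")  # ~23 days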

Better yet, what about the backhoe operator who didn't call before he dug, and cut my line to the datacenter? Now we cannot respond effectively to a natural disaster in the Asian Pacific, or a bombing in the Middle East, or a platoon that has come under fire and will die if they can't get air support, all because my watch officer can't even log in to his machine since I can no longer have a backup domain controller on-site.

These seem very far-fetched to most civilian network operators, but to anybody who has maintained military systems, this is a very real scenario. As mentioned, I'm pretty sure my systems would be exempted, but most would not. When these systems are vital to national security and life-and-death situations, it can become a very real problem. I realize that this policy was intended for more run-of-the-mill scenarios, but the military is almost always grouped in with everyone else anyway.

Furthermore, I don't think most people realize the scale of these networks. NMCI, the network that the Navy and Marine Corps used (when I was in), had over 500,000 active users in the AD forest. When you have a network that size, you have to be intentional about every decision, and you should not leave it up to a political appointee who has trouble even checking their email.

When you read about how much money the US military hemorrhages, just remember....
- The multi-million-dollar storage array, combined with a complete network overhaul and multiple redundant 100G+ DWDM links, was "more efficient" than a couple of NAS units that we picked up off of Amazon for maybe $300, sitting under a desk connected to the local switch.
- Using an old machine that would otherwise be collecting dust to ensure that users can log in to their computers despite conditions outside of our control is apparently akin to treason and should be dealt with accordingly.
</rant>

--Todd

So...

Before I go on: I have not been in Todd's shoes, neither serving nor directly supporting an org like that.

However, I have indirectly supported orgs like that and consulted at or supported literally hundreds of commercial and a few educational and nonprofit orgs over the last 30 years.

There are corner cases where distributed resilience is paramount, including a lot of field operations (of all sorts), on ships (and aircraft and spacecraft), or places where the net really is unstable. Any generalization that sweeps those legitimate exceptions in is overreaching its valid descriptive range.

That said, in the vast bulk of normal-world environments, individuals make justifications like Todd's and argue for distributed services, private servers, etc. And then they do not run them reliably, with patches, backups, central security management, asset tracking, redundancy, DR plans, etc.

And then they break, and in some cases are, and will forever be, lost. In other cases they will "merely" take 2, 5, 10, or in one case more than 100 times longer to repair, and more money to recover, than they should have.

Statistically these are very very poor operational practice. Not so much because of location (some) but because of lack of care and quality management when they get distributed and lost out of IT's view.

Statistically, several hundred clients and a hundred or so organizational assessments in, if I find servers that matter under desks, you have about a 2% chance that your IT org can handle supporting and managing them appropriately.

If you think it's acceptable for 98% of servers in a particular category to be at high risk of being unrecoverable, or very difficult to recover, when problems crop up, your successor may be hiring me or someone else who consults a lot for a very bad day's cleanup.

I have literally been at a billion-dollar IT disaster, and at tens of smaller multimillion-dollar ones, trying to clean them up. This is a very sad type of work.

I am not nearly as cheap for recoveries as for preventive management and proactive fixes.

George William Herbert

Fine.

But when some Armenian script kiddie DDoSing Netflix takes down your TSA
terrorist lookup service, and you come to me asking why the plane blew up,
I'm going to tell you "because you fucking ignored my written advice on
the matter", while I'm packing my desk.

In writing.

Cheers,
-- jra

DCOI is about physical data center optimization, not about network or
service availability.

DCOI metrics:
- Energy metering
- Power Usage Effectiveness (PUE)
- Virtualization
- Server Utilization & Automated Monitoring
- Facility Utilization
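For anyone who hasn't worked with these, a minimal sketch of the first two metrics (the sample figures are made up):

  facility_kwh = 1_500_000   # assumed annual metered facility energy
  it_kwh = 1_000_000         # assumed annual IT equipment energy
  print(f"PUE: {facility_kwh / it_kwh:.2f}")   # total facility / IT energy = 1.50

  cpu_util = [0.12, 0.08, 0.05, 0.30]          # assumed per-server average utilization
  print(f"Average server utilization: {sum(cpu_util) / len(cpu_util):.0%}")  # 14%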

Why do you have two circuits with only 40% utilization? The auditor says
that's waste, and that you only need one circuit at 80% utilization for half
the cost.

Circuit utilization, capacity and availability shouldn't be calculated
separately in a data center environment. If you look at each separately you
risk making some expensive mistakes.
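A minimal sketch of the failure math (the offered load and circuit sizes are assumptions):

  demand_mbps = 400          # assumed steady offered load
  circuit_mbps = 500         # assumed capacity per circuit

  print(f"Two circuits: {demand_mbps / (2 * circuit_mbps):.0%} each normally, "
        f"{demand_mbps / circuit_mbps:.0%} on the survivor after a failure")
  print(f"One circuit:  {demand_mbps / circuit_mbps:.0%} normally, "
        "and a total outage after a failure")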

*Rafael Possamai*
Founder & CEO at E2W Solutions
*office:* (414) 269-6000
*e-mail:* rafael@e2wsolutions.com

And of course, said auditor is probably near impervious to the very real
and valid reasons you have 2 circuits. Because as Upton Sinclair wrote
around a century ago:

"You cannot make a man understand something when his paycheck depends
on him not understanding it".

Come on, the audit requirements should have diversity/redundancy concerns in them.

That's standard in all the audits I have done or participated in.

If these don't, I have a marketing opportunity to teach an HA seminar and sell follow-on consulting to the IG.

George William Herbert

Turn on C-SPAN and watch any random congressional oversight hearing.

Reasonable, rational, or logical thoughts are rare. You may be making assumptions
that aren't supported. Just ask Flint, Michigan, about saving money on cheaper
water supplies.

This seems like a good time to mention my favorite example of such a thing.

In the Navy, originally, and it ended up in a few other places, there was
invented the concept of a 'battleshort', or 'battleshunt', depending on whom
you're talking to.

This was something akin to a Big Frankenstein Knife Switch across the main
circuit breaker in a power panel (and maybe a couple branch circuit breakers),
whose job was to make sure those didn't trip on you at an inconvenient time.

Like when you were trying to lay a gun on a Bad Guy.

The engineering decision that was made there was that the minor possibility of
a circuit overheating and starting something on fire was less important than
*the ability to shoot at the bad guys*...

Or, in my favorite example, something going wrong when launching Apollo rockets.

If you examine the Firing Room recorder transcripts from the manned Apollo
launches, you will find, somewhere in the terminal count, an instruction to
"engage the battle short", or something like that.

Men were, I have been told, stationed at strategic locations with extinguishers,
in case something which would normally have tripped a breaker was forbidden from
doing so by the shunt...

so that the power wouldn't go out at T-4 seconds.

It's referenced in this article:

  http://www.honeysucklecreek.net/station/ops_areas.html

and a number of other places google will find you.

Unknown whether this protocol was still followed in the Shuttle era, or whether
it will return in the New Manned Space Flight era.

But, like the four-star saluting the Medal of Honor recipient, it's one of
those outliers that's *so far* out that I love and collect them.

And it's a good category of idea to have in the back of your head when planning.

Cheers,
-- jra

The last time I checked, the US CIO office was understaffed and fighting the bureaucratic hydra and mostly losing, but competent and doing things like providing IGs with relevant ammo.

If not true in this case then the audit should be redone with relevant criteria.

George William Herbert

FYI, similar to "battleshort", the term BATTLE OVERRIDE is described [1] on page 45 of G. Gordon Liddy's book _Will_,
and apparently [2] "Battle Override" was to be the original title of Liddy's autobiography, but the publisher wanted a one-word title. Quotes:

"On the multidialed wall behind the radar technicians was a prominent switch with a red security cover. It was marked: BATTLE OVERRIDE", and

"In the event of a battle emergency, however, the protective warm-up delay could be overridden and full power applied immediately by throwing the 'Battle Override' switch, as everything and everyone became expendable in war."

Tony Patti
CIO

[1] https://books.google.com/books?id=YRty_4HT_8kC&pg=PA45&lpg=PA45&dq="battle+override"+M-33c&source=bl&ots=RYLdUECeHF&sig=hs9i6-W_CVwe5ZcjpxbEGSh9TNE&hl=en&sa=X&ved=0ahUKEwi474vOltjLAhUKcRQKHUHmCW4Q6AEIHjAA#v=onepage&q="battle%20override"%20M-33c&f=false
[2] http://www.worldwizzy.com/library/G._Gordon_Liddy

> This seems like a good time to mention my favorite example of such a thing.
>
> In the Navy, originally, and it ended up in a few other places, there was
> invented the concept of a 'battleshort', or 'battleshunt', depending on whom
> you're talking to.

I've built one, sort of. In an outdoor broadcasting vehicle. See, in
order to get a working grounding scheme, the PDU in the bus gets to serve
as power source for a lot of things that might find themselves outside,
exposed to the weather. 200VDC feeds in triaxial cables to cameras, for
instance. (This was before cameras were connected with singlemode fiber,
but after the era of the multicore "shower handle" connectors.) All this
was of course built for some exposure to the elements, but not for
drenching. During setup, it was decided to protect people with a GFCI
breaker on the main three-phase bus in the bus[0][1], but once set up,
people were not really supposed to gefingerpoken the thingamaboobs, so
in the interest of reliability a bypass was created for the GFCI breaker.
This had to be built in-house, since no electrical contractor even wanted
to contemplate it. So we did.

/Måns, ex-builder of analog broadcast facilities.

Guess what, an IG decides to count "data centers" using OMB's definition
of a data center. CIO points out those "data centers" won't save money.

https://fcw.com/articles/2016/04/11/lyngaas-halvorsen-update.aspx
The IG report knocked Halvorsen for not adjusting his strategy to account for a revised definition of data centers from the Office of Management and Budget. But Halvorsen defended that decision, saying the revised definition focused on special-purpose processing nodes, which are data centers that have no direct connection to the DOD Information Network.

"Those nodes aren't where the money [is], and in most cases, there's no value in consolidating them," Halvorsen said.