latest Snowden docs show NSA intercepts all Google and Yahoo DC-to-DC traffic

http://www.washingtonpost.com/world/national-security/nsa-infiltrates-links-to-yahoo-google-data-centers-worldwide-snowden-documents-say/2013/10/30/e51d661e-4166-11e3-8b74-d89d714ca4dd_story.html

Google is speeding up its initiative to encrypt all DC-to-DC traffic;
this was already suspected a short time ago.

http://www.informationweek.com/security/government/nsa-fallout-google-speeds-data-encryptio/240161070

As a top-posting IT generalist pleb, can someone explain why Google/Yahoo
did not already encrypt their data between DCs?
Why is my data encrypted over the internet from my computer to theirs, but
not encrypted once it leaves their buildings, with all the fancy access
controls they like to talk about?

Thank you for your feedback,
explanoit

It's about the CPU cost of the crypto. I was once told the number of
CPUs required to do SSL on web search (which I have since forgotten),
and it was a bigger number than you'd expect -- certainly hundreds.

So, crypto costs money at scale, basically.

Cheers,
Michael

Hey explanoit,

There was a small part that jumped out at me when I read the article
earlier:

"In recent years, both of them are said to have bought
or leased thousands of miles of fiber-optic cables for their own exclusive
use. They had reason to think, insiders said, that their private, internal
networks were safe from prying eyes."

It seems as if both Yahoo and Google assumed that, since the circuits were
private, they didn't have to encrypt. Encryption would have added cost in
engineering, hardware, and, in the end, overall throughput; I would assume
they saw it as a low possibility that anyone would (a) have knowledge of
their inter-site traffic and (b) have the ability not only to accomplish
the tap but also to avoid getting caught.

This is just my take on the situation, and I'm sure there are others more
experienced who could offer a more detailed perspective with much less
speculation. Thanks.

Sincerely,

Anthony R Junk
Network Engineer
(410) 929-1838
anthonyrjunk@gmail.com

[snip]

> It's about the CPU cost of the crypto. I was once told the number of
> CPUs required to do SSL on web search (which I have since forgotten),
> and it was a bigger number than you'd expect -- certainly hundreds.
>
> So, crypto costs money at scale, basically.

SSL cryptography for web search is a different problem than, say,
site-to-site VPN encryption.

Every time a new browser connects, you have a new SSL session setup.
New SSL session setup requires public-key cryptography operations, which
impose a significant delay and carry an enormous CPU cost.

So much so that the key generation and signing operations involved in SSL
session setup are a big bottleneck, and therefore a potential DoS risk.

For encryption of traffic between datacenters, there should be very
little session setup and teardown (very few public-key operations);
almost all the crypto load would be symmetric cryptography.

No doubt there must still be some cost in terms of crypto processors
required to encrypt all the traffic on 100-gigabit links
between datacenters; it's always something, after all.
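To put rough numbers on that asymmetry, here is a stdlib-only Python sketch. The modular exponentiation is only a stand-in for an RSA-2048 private-key operation (real TLS stacks use CRT and hardware acceleration), and SHA-256 stands in for the bulk symmetric side, so treat the figures as illustrative, not benchmarks:

```python
import hashlib
import secrets
import time

# Stand-in for one RSA-2048 private-key operation: a modular
# exponentiation with a full-width exponent and modulus.
# (Real RSA implementations use CRT and run several times faster,
# but the order of magnitude is comparable.)
bits = 2048
n = secrets.randbits(bits) | (1 << (bits - 1)) | 1  # odd, full-width modulus
d = secrets.randbits(bits) | (1 << (bits - 1))      # full-width exponent
m = secrets.randbits(bits) % n                      # "message"

handshakes = 20
t0 = time.perf_counter()
for _ in range(handshakes):
    pow(m, d, n)
pk_per_op = (time.perf_counter() - t0) / handshakes

# Stand-in for the bulk/symmetric side: hash 1 MiB chunks.
# (Hardware-accelerated AES-GCM is faster still, often >1 GB/s per core.)
chunk = bytes(1 << 20)
rounds = 50
t0 = time.perf_counter()
for _ in range(rounds):
    hashlib.sha256(chunk).digest()
sym_mib_per_s = rounds / (time.perf_counter() - t0)

print(f"public-key op: {pk_per_op * 1e3:.2f} ms each")
print(f"bulk hashing:  {sym_mib_per_s:.0f} MiB/s")
```

The public-key operation costs milliseconds each, while the bulk side runs at hundreds of MiB/s per core -- which is why per-connection handshakes dominate frontend SSL cost, and long-lived inter-DC tunnels are almost entirely cheap symmetric work.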

> For encryption of traffic between datacenters, there should be very
> little session setup and teardown (very few public-key operations);
> almost all the crypto load would be symmetric cryptography.

trivial at 9600 baud between google datacenters

...

> It seems as if both Yahoo and Google assumed that, since the circuits
> were private, they didn't have to encrypt.

I actually cannot see them assuming that. Google
and Yahoo engineers are smart, and tapping fibres
has been well known for, well, "forever". I can
see them making a business decision that the
costs would be excessive to mitigate against
tapping(*) that would be allowed under the laws
in any event.

Gary

(*) "A" mitigation was to run the fibre through your
own pressurized pipe, which you monitored for loss
of pressure, so that even a "hot tap" on the pipe
itself would possibly be detected (and there are
countermeasures to countermeasures
to countermeasures of the various methods).
And even then, you had to have someone walk
the path from time to time to verify its integrity.
And I am pretty sure there is even an NSA/DOD
doc on the requirements/implementation of
those mitigations.

Given what we now know about the breadth of the NSA operations, and the
likelihood that this is still only the tip of the iceberg - would anyone
still point to NSA guidance on avoiding monitoring with any sort of
confidence?

There has always been cognitive dissonance in the dual roles of the NSA:
1. The NSA monitors.
2. The NSA provides guidance on how to avoid being monitored.

Conflict?

-DMM

I still have some one time pads if you are good writing fast ...

-J
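For the curious: J's one-time pads are the one scheme that provably resists any eavesdropper -- at the cost of shipping (and never reusing) as much truly random key material as you have traffic, which is exactly why it's a joke at inter-DC volumes. A minimal sketch in Python:

```python
import os

def otp_xor(data: bytes, pad: bytes) -> bytes:
    # XOR with the pad; the same call encrypts and decrypts.
    # Security requires the pad to be truly random, at least as long
    # as the message, kept secret, and never reused.
    if len(pad) < len(data):
        raise ValueError("pad exhausted -- write faster")
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"encrypt your inter-DC links"
pad = os.urandom(len(message))      # one-time key material

ciphertext = otp_xor(message, pad)
assert otp_xor(ciphertext, pad) == message  # XOR is its own inverse
```

Note the pad is consumed byte-for-byte with the traffic: at 100 Gb/s that is 100 Gb/s of fresh random key material to generate, distribute, and destroy.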

[...]

> Given what we now know about the breadth of the NSA operations, and the
> likelihood that this is still only the tip of the iceberg - would anyone
> still point to NSA guidance on avoiding monitoring with any sort of
> confidence?
>
> There has always been cognitive dissonance in the dual roles of the NSA:
> 1. The NSA monitors.
> 2. The NSA provides guidance on how to avoid being monitored.
>
> Conflict?
>
> -DMM

As a local 'barbecue baron' said about his brother's competing
restaurants: "I taught him everything he knows about barbecue. I just
didn't teach him everything _I_ know about barbecue."

I don't think so. The folks who actually do it are the ones who are going
to know best how to avoid it. Plenty of TV shows bear this out. :-)

I think the failure to encrypt inter-DC traffic that is on dark fibre rests
simply on the presumption that corporations are seeking to protect their
links from the actions of 'unauthorised' people. The telco they're
contracting with presumably has some sort of privacy agreement with them.
No one else is supposed to be able to get on the wire. A pre-Snowden risk
assessment probably didn't make the performance hits, costs, etc. of
high-speed line-rate encryption worthwhile - but the paradigm has shifted.
The government is using 'authorisation' to get access to that dark-fibre
link (presumably), and that authority is at the heart of the problem.

When reviewing your risk assessment around the presence (or absence) of
encryption on your inter-site links, also consider whether the methods of
encryption available to the private sector haven't also been cracked by the
NSA etc. They set the 'gold standard' for crypto, but one has to wonder
whether that standard includes an undocumented backdoor...

Mark.

While smart, most providers assume that, at least with a piece of
dark fiber, the only people with access to it are themselves and any
providers of the fiber. I don't think that's an unrealistic
expectation... Providers who have trenched their own fiber certainly do
not encrypt traffic across the network, but their fiber is probably no
less susceptible to tapping at certain locations. There have been a
number of articles in recent years about how vulnerable the fiber
infrastructure is to attacks, tapping, etc. Vaults, manhole locations,
etc. are pretty much wide open.

Phil

* mikal@stillhq.com (Michael Still) [Fri 01 Nov 2013, 05:27 CET]:

> It's about the CPU cost of the crypto. I was once told the number of CPUs required to do SSL on web search (which I have since forgotten), and it was a bigger number than you'd expect -- certainly hundreds.

False: see ImperialViolet, "Overclocking SSL":

"On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that."

  -- Niels.

That was *front end* SSL/TLS - not internal / back end SSL/TLS.

One could assert that the per-activity SSL/TLS overhead might be the same
for internal services accessed to answer a front-end request, but that's
not necessarily true. The code/request ratios and external/internal
SSL/TLS startup costs are going to vary wildly from service to service.

Anthony Junk wrote:

> It seems as if both Yahoo and Google assumed that, since the circuits
> were private, they didn't have to encrypt.

According to Snowden, there are government agents in key
security-management positions.

When they declare the private circuits secure, no one else
in the companies can argue otherwise.

Unless they are fired and all the backdoors they installed
are removed, neither Yahoo nor Google is secure.

            Masataka Ohta

This is probably not entirely true, however...

There is certainly enough in the Snowden docs to render this a valid
question, and there is enough to assume some truth to the statement.

Anyone familiar with secure organizations will recognize this as the
internal witch-hunt problem. You now have serious reason to believe that
you have been compromised. If security needs to be absolute, then the
degree of response needed to attain it will require very serious
vetting of all the staff, of the kind national-security
organizations perform (background checks, polygraphs, detailed personal
histories, intrusive random monitoring of employee actions in and outside
the office, etc.).

Most of "us" will not put up with that. However, most of "us" also desire
reasonably secure services (both those of us who work for those services,
and those of us who use them).

The prior default setting was to assume there was nobody trying hard enough
to penetrate those services that the internal witch hunt degree of internal
security was necessary. It was "reasonable" to hope that someone with
nation-state / superpower level resources was not actively Trying To Get
In. Now that's not a safe assumption.

The NSA has just put the entire profession in a horrible bind. By going
beyond the foggy-but-legally-documented FISA warrant activities into active
hostile actions against US providers we have to wonder about what degree of
paranoia is necessary.

Do we now just stick our heads back in the sand? Identify key security
groups with override authority within our organizations, vet them and
monitor them like the CIA and NSA vet and monitor their employees? Try to
establish that level of review of all our staffs?

Bruce Schneier has tiptoed around this some, but the thread from his blog
last week of "How do we know we can trust Bruce" is terrifying when we have
to consider applying that question to everyone on this list (and who should
be on this list).

> Anyone familiar with secure organizations

there are such things?

we should be more cautious with absolutes, usually :-)

Nothing is absolute, but there are certainly "white" organizations which
make no attempt to be secure, and much greyer ones where security is a big
part of organizational process and ethos.

A Snowden once a decade or so is not a bad record. Unfortunately, we ...
hoped ... they were the good guys, not the bad guys.