Impacts of Encryption Everywhere (any solution?)

Has anyone outside of tech media, Silicon Valley or academia (all places wildly out of touch with the real world) put much thought into the impacts of encryption everywhere? So often we hear about how we need the best modern encryption on all forms of communication because of whatever scary thing is trendy this week (Russia, NSA, Google, whatever). HTTPS your marketing information and generic education pieces because of the boogeyman!

However, I recently came across a thread where someone was exploring getting a one megabit connection into their village and sharing it among many. The crowd I referenced earlier also believes you can't Internet under 100 megabit/s per home.

Apparently, the current best Internet the residents of the village can get is 40 kilobit/s. With zero oversubscription, a one megabit/s connection delivers better service to up to 25 homes (1,000 / 40 = 25). Likely that could be stretched to at least 50 or 100 homes and still be better than what they currently have. Forget about streaming video; let's just focus on web browsing and messaging.

However, this could be wildly improved with caching à la Squid or something similar. The problem is that encrypted content is difficult to impossible for your average Joe to cache. The rewards for implementing caching are greatly diminished, and people like this must suffer a worse Internet experience because of some ideological high horse in a far-off land.

Some things certainly do need to be encrypted, but encrypting everything means people with limited Internet access get worse performance, OR mechanisms have to be put in place to break ALL encryption, thus compromising security and privacy when it's really needed.

To circle back to being somewhat on-topic, what mechanisms are available to maximize the amount of traffic someone in this situation could cache? The performance of third-world Internet depends on you.

Turn off JavaScript on the clients. The wastage from downloading even a single copy of react.js is sufficient to fetch dozens of Wikipedia pages, repeatedly.

That is super interesting. While one can Internet fine at 5 Mbps (save for
streaming UHD movies, maybe), I am not convinced 1 Mbps can be successfully
shared even if there were no encryption anywhere.
My understanding is that some enterprises do decrypt traffic in flight with
proxies such as Blue Coat, though I'm not sure on the particulars of how
that works. I think the overall theory is that the proxy acts as a trusted
CA for all its clients and generates a certificate for the destination
hostname on the fly, thus terminating the SSL connection and opening a new
one on behalf of the client. I do, however, recall that the solution is not
cheap, in dollars or computationally. And in the case of a village that
can't get anything faster than 1 Mbps, can they even get the power to run a
couple of compute-heavy proxies (does proxy uptime matter?)?

Another concern would be that caching implies the whole village visits the
same content. I'm not even confident my wife and I visit the same content
(save for Gmail, maybe).

And lastly, most modern websites are very media-rich. Unless the whole
village confines its usage to the same handful of sites, I can't imagine
that the experience will be pleasant in any way, shape, or form, or that
there will be any benefit to caching.

Save for the SSL proxy mentioned above, I have seen folks pull several
crappy DSL connections (let's say ~1 Mbps each) and bond them together. If
the provider supports the bonding option, great! If not, I've seen folks
basically load-balance per flow across the four connections.
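If the provider can't bond, the per-flow balancing can be done on a Linux router with a multipath default route. A sketch, with interface names and gateway addresses as assumptions:

```shell
# Each flow is hashed onto one of the four DSL nexthops, so a single TCP
# connection never exceeds ~1 Mbps, but aggregate usage spreads across lines.
ip route replace default \
    nexthop via 192.0.2.1  dev dsl0 weight 1 \
    nexthop via 192.0.2.5  dev dsl1 weight 1 \
    nexthop via 192.0.2.9  dev dsl2 weight 1 \
    nexthop via 192.0.2.13 dev dsl3 weight 1
```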



There are better places to reduce traffic while simultaneously enhancing
security and privacy. The new EU version of the home page of USA Today
is about 20% the size of the one presented in the US -- because it's
had all the tracking and scripting stripped out -- with a concomitant
reduction in load time and rendering time. Much more drastic reductions
are available elsewhere, e.g., mail messages composed of text only are
typically 5% to 10% the size of the same messages marked up with HTML.

The problem (part of the problem) is that the people doing these foolish
things are new, ignorant, and privileged: they don't realize that bandwidth
is still an expensive and scarce resource for most of the planet. I've
said for years that every web designer should be forced to work in an
environment bandlimited to 56K in order to instill in them the virtue
of frugality and strongly discourage them from flattering their egos
by creating all-singing all-dancing web sites...that look great in the
portfolios they'll show to their peers but are horribly bloated, slow,
unrenderable in a lot of browsers, and fraught with security and privacy
problems. (Try pointing a text-only browser at your favorite website.
Can you even read the home page?)


On 28 May 2018 at 17:00, Rich Kulawiec wrote:

The increase in the subscriber base increases the likelihood of visiting the same content and thus the benefit.

Before HTTPS-everywhere, caching was hugely beneficial.

Currently they are making do with 40 kilobit/s, so it's certainly possible to Internet at that level. I'm just looking at ways the service can be made that much better.

If they only have single digit megabit/s of Internet, you don't need multiple systems to add/drop the encryption. While I don't have anything to back this up, I'd suspect a couple-hundred-dollar single-board computer (since "session border controller" seems to be a more popular use of the acronym SBC) would be sufficient. I'm not overly intimate with that space, but some little ARM-based machine could probably do it just fine. Move that to hundreds of megabit/s or gigabit/s and your concern is certainly much more relevant.

I can't imagine rural third-world villages have much influence over the appropriate departments of these companies to affect all of the junk getting added to sites these days.

I'm also not foolish enough to think this thread will affect the encrypt-everything crowd, as it is more of a religion/ideology than a practical matter. However, maybe it'll shed some light on technical ways of dealing with this at the service-provider level or plant some doubt in someone's mind the next time they think they need to encrypt non-sensitive information.

The same goes for all development. My phone is significantly slower today than a couple of years ago when it was new, without a significant change in the amount of stuff that I run, because developers are lazy and fill the space the latest platforms offer them.

I've personally played with Squid's SSL-bump-in-the-wire mode (on my personal systems) and was moderately happy with it. I think that such is a realistic possibility in the scenario that you describe.

I would REQUIRE /open/ and /transparent/ communications from the ISP and *VERY* strict security controls on the caching proxy. I would naively like to believe that an ISP could establish a reputation with the community and build a trust relationship such that the community was somewhat okay with the SSL-bump-in-the-wire.
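For reference, the SSL-bump setup amounts to only a few lines of squid.conf. A sketch (squid 4.x-era directives; paths and certificate names are assumptions):

```
# Intercepting port, signing with the locally trusted CA and generating
# per-host certificates on the fly.
http_port 3128 ssl-bump \
    cert=/etc/squid/village-ca.pem \
    generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB

# Peek at the TLS ClientHello first, then bump (decrypt) everything else.
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```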

It might even be worth leveraging WPAD or PAC to route traffic for specific sites (banks, etc.) directly, bypassing the proxy, to mitigate some of the security risk.
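A PAC file for that split could be as small as the following sketch. All hostnames here are made up, and a real deployment would likely use the standard PAC helper functions such as dnsDomainIs:

```javascript
// Sensitive sites bypass the SSL-bumping cache entirely; everything else
// goes through the village proxy, falling back to direct if it is down.
function FindProxyForURL(url, host) {
  var direct = ["mybank.example", "mail.example"];
  for (var i = 0; i < direct.length; i++) {
    var d = direct[i];
    // Match the domain itself or any subdomain of it.
    if (host === d || host.indexOf("." + d, host.length - d.length - 1) !== -1) {
      return "DIRECT";
    }
  }
  return "PROXY cache.village.example:3128; DIRECT";
}
```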

I would also advocate another proxy on the upstream side of the 1 Mbps connection (in the cloud, if you will), primarily for the purpose of doing as much traffic optimization as possible. Have it fetch things and deal with fragments so that it can homogenize the traffic before it's sent across the slow link. I'd think seriously about throwing some CPU (a single core off of any machine from the last 10 years should be sufficient) at compression to try to stretch the bandwidth between the two proxy servers.

I'd also think seriously about slaving the root DNS zone locally downstream, along with any other zone that I could slave, for the purpose of minimizing the number of queries that need to be pushed across the link.
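With BIND that is only a few lines, along the lines of RFC 8806 (running a local copy of the root zone). The AXFR sources shown are the ICANN ones documented there, but verify current addresses before relying on them:

```
zone "." {
    type slave;
    file "root.zone";
    // lax/iad.xfr.dns.icann.org, per RFC 8806
    masters { 192.0.32.132; 192.0.47.132; };
    notify no;
};
```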

I've been assuming that this 1 Mbps link is terrestrial, which means that I'd also explore something like a satellite link with more bandwidth. Sure, the latency on it will be higher, but that can be worked with, particularly if you can use some intelligence to route different CoS / ToS / DiffServ (DSCP) classes across the different links.
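That CoS-based split could be done with a firewall mark plus policy routing. A sketch, with the mark value, table number, interface names, and gateways all being assumptions:

```shell
# Mark latency-sensitive (DSCP EF) traffic and send it out the terrestrial
# link; let bulk traffic default to the fatter, higher-latency satellite.
iptables -t mangle -A PREROUTING -m dscp --dscp-class EF -j MARK --set-mark 1
ip rule add fwmark 1 table 100
ip route add default via 192.0.2.1 dev dsl0 table 100   # low-latency path
ip route replace default via 198.51.100.1 dev sat0      # bulk via satellite
```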

I think there are options and things that can be done to make this viable.

Also, considering that the village has been using a 40 kbps link, sharing a 1 Mbps (or 1,000 kbps) link is going to be a LOT better than it was. The question is, how do you stretch a good thing as far as possible?

Finally, will you please provide some pointers to the discussion you're talking about? I'd like to read it if possible.

PCs within the enterprise contain an enterprise-local root in their
certificate store. The proxy re-encrypts using a key whose ephemeral
cert chains up to the enterprise root.

Bill Herrin

The "do not search for a culprit" stuff:
What is the point of blaming encryption?

If your users have very low bandwidth, they will get a crappy service,
with or without encryption.
This is our world; our HTTP-based Internet is NOT made for a 40k connection.

The "tip" stuff:
If you simply do not care about encryption, or are willing to trade
privacy for caching because you have no bandwidth, you can simply break SSL.
It costs nothing, and you will not mind the "red lock" (remember: trade-off).

The "philosophical" stuff:
About your last part, you are absolutely right; it is a sad situation,
yet nothing new.

Niklaus Wirth (the Pascal guy) said in 1995:
"Software gets slower faster than hardware gets faster."
This has never been more true.

Look at the Steam cache project, the generic downloader can also cache Windows Updates and most gaming services. I imagine Windows Updates would eat a lot of traffic.

In addition to the "bump in the wire", you could also enable larger frame
sizes downstream, since you're already completely disassembling and
reassembling the packets. Large downloads or uploads could see overhead go
from 3% at 1500B to about 0.5% at 9100B. It's not much, but every little bit
counts. (Preamble, Ethernet, IP, and TCP headers all need to be sent across
the circuit less often to get the same amount of data through.)

Looking only at the throughput of L4 payloads, you get:
1500 MTU = 956 kbps
9100 MTU = 992 kbps

That almost adds a whole additional home if my math is correct.
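Those two figures are reproducible under one particular overhead model: 8 bytes of preamble plus 18 bytes of Ethernet header/FCS per frame (inter-frame gap ignored), and 20 bytes of IP plus 20 bytes of TCP headers inside each MTU. A quick check:

```python
def l4_throughput_kbps(mtu, link_kbps=1000):
    """L4 payload rate on a 1 Mbps link, assuming 8 B preamble + 18 B
    Ethernet framing per packet and 20 B IP + 20 B TCP inside the MTU."""
    payload = mtu - 40          # TCP payload carried per frame
    wire = mtu + 18 + 8         # bytes actually serialized per frame
    return link_kbps * payload / wire

print(int(l4_throughput_kbps(1500)))   # 956 kbps
print(int(l4_throughput_kbps(9100)))   # 992 kbps
```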



Good Luck, especially in light of the poo-for-brains at Google responsible for the Chrome browser who (wrongly) equate "secure" with Transport Encryption and "unsecure" with not having Transport Encryption; when all that Transport Encryption really implies is Transport Encryption and not much else. It has little to do with whether or not a site is "secure". Generally speaking, I have found that sites engaging Transport Security are much more "unsecure" (as in subject to security breaches and flaws) than those that do not engage Transport Security for no reason.

However, the poo-for-brains crowd will get everyone to engage Transport Security so they will be called "Secure", whether trustworthy or not.

Actually, starting July Chrome will no longer say "secure" for sites with
Transport Security. It will only say "not secure" for sites without, so it
will no longer provide the false impression of equating Transport Security
with Application/Operational Security.


To be fair, most of the conversation is people not realizing the OP is in a third-world country and believing that 1 Mbit/s isn't enough for a single user, much less a village.

Also, I think it's 40 kilobit/s per user (so probably dial-up), not 40 kilobit/s for the whole village. The whole village may very well have 1 megabit/s worth of dial-up connections, but everyone potentially being able to burst to 1 megabit is a lot more useful than capping each user to 40 kilobit/s.

I'm sorry, I simply believe that in 2018, with advanced and cheap PtP radio (Ubiquiti, anyone? $300 and I have a 200 Mbit/s link over 10 miles! Spend a bit more and go 100 km) plus the advancements in cubesats about to be launched, even the third world can simply get with the times.


Once you become desensitized to HTTPS warnings because some site needlessly has SSL (or your printer's or switch's management interface, for those of us not needing to proxy SSL traffic), you no longer notice that your bank isn't secure. Being hyper-sensitive about SSL causes one to miss things that actually matter.

HTTP works just fine over a 40 kb/s connection. That's all I could get out of the dial-up that I shared with four other computers until about 2004, when I started my WISP.

I know the fixed wireless space quite well. If there's no Internet to be had, it doesn't matter how quickly you can distribute it.

He did say that (for whatever reason), relaying off of mountain-top sites to get to better connectivity wasn't a viable option.

The yet-to-be-deployed satellite constellations don't do anyone any good today.

Hi Ben,

I do not think you adequately understand the economics of the situation; see slide 22, IP transit cost.

Your 200mbit/sec link that costs you $300 in hardware
is going to cost you $4960/month to actually get IP traffic
across, in Nairobi. Yes, that's about $60,000/year.

Could *you* afford to "get with the times" if that's what
your bandwidth was going to cost you?

Please, do a little research on what the real
costs are before telling others they need to
"simply get with the times."



I live in the US of A, and this is what 200Mb/s roughly would cost me as well here in Rural Monopoly-land. Rural ILEC also has the CATV business, and, well, they are _not_ going to run cable up here. I've actually priced 150Mb/s bandwidth from the ILEC over the years; in 2003 the cost would have been about $100,000 per month. As of five years ago 10Mb/s symmetrical cost roughly $1,000 per month, the lion's share of that being per-mile NECA Tariff 5 transport costs.

The terrain here prevents fixed wireless. The terrain also prevents satellite comms to the Clarke belt (mountain to the south with trees on US Forest Service property in the line of sight). I get 1XRTT in one room of my house when the humidity is below 70% and it's winter, and once in a blue moon 3G will light up, but it's not stable enough to actually use; it's the speed of dialup. If I traipse about a hundred yards up the mountain to the south (onto US Forest Service property, so, no repeater for me) I can get semi-usable 4G; nothing like being in the middle of the woods with an active black bear population trying to get a usable signal.

I'm paying $50 per month for 7/0.5 DSL (I might add that they provide excellent DSL that has been extremely reliable) from the only ISP available in the area.

I remember a usable web experience not too long ago on 28.8K/33.6K dialup (it was quite a while before said ILEC got a 56K-capable modem bank). DSL started out here at 384k/128k. On the positive side, we have a very low oversubscription ratio, so I actually get the full bandwidth the majority of the time, even video streaming. I also know all the network engineers there, too, and that also has its advantages.

(Yes, I am aware that rural living is a choice, and there are things worth a great deal more than bandwidth, that it's a tradeoff, etc.)

So it's not just '3rd-world' countries with expensive bandwidth.