BGAN Optimized Laptops

All the features this supposedly has that make it optimized for
constrained-bandwidth environments can be accomplished with any *NIX,
including OS X (Little Snitch provides a nice GUI for it).

This is a preference setting on most Web browsers. Lynx works well for this, too, on *NIX systems, if the users can use a terminal.

The equivalent for email clients: headers-only sync, download messages only on demand, download attachments only on demand, etc.
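For what it's worth, the headers-only idea shows up in code too. A minimal Python sketch (the raw message bytes are made up for illustration): the stdlib's BytesHeaderParser reads only the header block and leaves the body alone, which is the kind of parsing a headers-only sync relies on.

```python
from email.parser import BytesHeaderParser

# A made-up raw message, roughly as a "headers first" IMAP fetch
# might hand it to the client.
raw = (b"From: alice@example.com\r\n"
       b"To: bob@example.com\r\n"
       b"Subject: BGAN link test\r\n"
       b"\r\n"
       b"A large body we'd rather not pull over a sat link.\r\n")

# BytesHeaderParser stops parsing at the blank line: headers are
# structured, the body is carried along unparsed.
msg = BytesHeaderParser().parsebytes(raw)
print(msg["Subject"])   # headers are immediately available
```

The client can then decide, per message, whether the body is worth a second round trip.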

My first thought on seeing the post was the work I used to do on TCP configuration optimisation for satellite-linked networks.
This was on XP clients, so I've no idea how applicable it is to more modern desktops, but the simple approach was to make use of tools such as TCP Optimizer[1].
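The core of that tuning is sizing TCP windows to the bandwidth-delay product, so a high-latency link can stay full. A hedged sketch in Python, with made-up numbers (a ~492 kbit/s BGAN-class link and ~700 ms RTT are assumptions, not figures from the thread):

```python
import socket

# Assumed link characteristics -- illustrative only.
link_bps = 492_000   # link bandwidth, bits per second
rtt_s = 0.7          # round-trip time, seconds

# The TCP window must cover the bandwidth-delay product (BDP)
# to keep a long fat pipe full.
bdp_bytes = int(link_bps / 8 * rtt_s)   # 43050 bytes here

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask the kernel for send/receive buffers at least BDP-sized;
# the OS may round or double these values.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
sock.close()
print(bdp_bytes)
```

Tools like TCP Optimizer are essentially doing this system-wide (plus window scaling and related options) via the registry.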

Also had a lot of success with WAFS-type devices such as the Riverbed Steelhead and their ilk, which go a bit beyond the OP's scope, in terms of making low-bandwidth, high-latency links more optimal.


[1] and which I note talks about Windows 7, 8, 10 and Server 2012, so this must still be a 'thing'.


Someone told me that there is a way for the browser to say
to the web server, send me only the parts of the web page I
request. For example, send me everything but the flash and
images. Being a browser wuss I thought the web server just
sent everything and the browser decided whether to display
it or not. That would mean the data already was transferred
over the expensive sat link incurring the data costs.


Just wanted to clear one point up...

The web is *not* a "push" model; it's a "pull" model.

The HTML document is nothing but a text document which has references to other elements that are available to the browser, should it choose to request them; but it is incumbent upon the browser to request each and every one of those other elements from the server before they are transferred. The server will not send something that was not first requested by the browser.
It's misunderstandings like this that make content providers twitch every time an eyeball network says "well you're *sending* all this data at my network" -- absolutely nothing is being sent that was not explicitly requested by the browser first. ^_^;
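The pull model is easy to see in a few lines of Python. A minimal sketch using the stdlib's html.parser: it collects the sub-resources a page merely *references* -- none of which is transferred until the browser decides to issue a separate request for it (the HTML snippet here is made up):

```python
from html.parser import HTMLParser

# Collect the sub-resources an HTML page references. Nothing here
# fetches anything: each entry would cost its own request -- or no
# bytes at all, if the browser chooses to skip it.
class ResourceLister(HTMLParser):
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("img", "script", "embed") and "src" in a:
            self.resources.append(a["src"])
        elif tag == "link" and "href" in a:
            self.resources.append(a["href"])

html = ('<html><head><link href="style.css"></head>'
        '<body><img src="logo.png"><script src="app.js"></script></body></html>')
p = ResourceLister()
p.feed(html)
print(p.resources)   # ['style.css', 'logo.png', 'app.js']
```

On an expensive sat link, a text-mode or image-blocking browser simply never issues the follow-up requests, so those bytes never cross the link.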



Matt’s totally correct about the browser requesting the info, so it’s up to the client to decide what to download, even obfuscated JavaScript links.

My question would be: how far can compression take you for something like Opera, which does some compression in the browser together with a caching server? I figure a lot of websites are probably using formats like PNG that can be compressed a bit more, but it’s still like taking a tarball and compressing it again. If a server is sending gzip’d text and the browser/cache compress that again, how much more can be gained? Compression of compression with even more compression seems to me more like a downward spiral.
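The intuition is easy to check with the stdlib. A rough illustration (exact byte counts vary by zlib version, so treat the sizes as indicative): gzipping text wins big, but gzipping the already-gzipped output is essentially compressing noise and gains nothing.

```python
import gzip

# Highly redundant text compresses very well the first time.
text = b"The quick brown fox jumps over the lazy dog. " * 200

once = gzip.compress(text)    # big win: redundancy squeezed out
twice = gzip.compress(once)   # compressed data is near-random; no further win

print(len(text), len(once), len(twice))
```

So a compressing proxy helps most on content the origin served uncompressed; re-compressing gzip'd responses just adds header overhead.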

Mostly true, yet there's that little bit that makes it not the whole truth.

HTTP/2 has server push: instead of waiting for a browser to decide which
elements to fetch, a server can send anything it likes, the basic theory
being that "everyone" will request certain (or all) objects, so sending them
without waiting for the requests will enhance performance. HTTP/2 --
derived from / started as SPDY -- became a standard in May 2015 and is
supported by various servers and clients.
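As a concrete (and hedged) illustration: some nginx releases expose this as an http2_push directive, though support depends on the version (it was later deprecated), so treat this as a sketch rather than a recipe:

```nginx
server {
    listen 443 ssl http2;

    location / {
        # Push the stylesheet alongside the page instead of waiting
        # for the browser to parse the HTML and request it.
        http2_push /style.css;
    }
}
```

On a high-latency link this saves one round trip per pushed object; the flip side is that the server may push bytes the client already has cached.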

WebSockets should probably be mentioned as well.

And there's the even older content-replacing push, multipart/x-mixed-replace (ca. '95) -- though seldom used,
it is still supported by some browsers.