HTTPS redirects to HTTP for monitoring

Hi Everyone,

I wanted to see what opinions and thoughts were out there. What software,
appliances, or services are being used to monitor web traffic for
"inappropriate" content on the SSL side of things? personal use?
enterprise enterprise?

It looks like Websense might do decryption (
http://community.websense.com/forums/t/3146.aspx) while Covenant Eyes does
some sort of session hijack to redirect to non-SSL (at least for Google) (
https://twitter.com/CovenantEyes/status/451382865914105856).

Thoughts on having a product that decrypts SSL traffic internally vs one
that doesn't allow SSL to start with?

-Grant

Admittedly I've only been on the user side of things for this, but IMO for
cases like this, MITM > stripping. Anything your users need to access
outside your intranet that requires SSL to function (Google Apps comes to
mind right away, any kind of outsourced web-based training, etc.) would be
broken by stripping, but if you MITM the connection and have your internal
certs set up properly, users won't even notice a blip.

That being said, Squid can be configured to transparently decrypt and
re-encrypt the session. (http://wiki.squid-cache.org/Features/SslBump)
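
For illustration, a rough squid.conf sketch of that setup (directives vary by
Squid version, and the CA/paths below are placeholders, not a tested config):

    # bump port: mint per-host certs signed by a local CA
    http_port 3128 ssl-bump \
        cert=/etc/squid/ssl/myCA.pem \
        generate-host-certificates=on dynamic_cert_mem_cache_size=4MB

    # helper that generates the per-host certificates (path varies by distro)
    sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB

    acl step1 at_step SslBump1
    ssl_bump peek step1      # read the SNI first
    ssl_bump bump all        # then decrypt and re-encrypt everything else

Clients also need that CA installed as a trusted root, or every HTTPS site
will throw certificate warnings.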

We use Fortinet firewalls, and SSL (HTTPS, FTPS, IMAPS, POP3S, SMTPS, SSH) inspection is a standard feature. It works by rolling out a custom CA certificate from the device to all of the desktops, and whenever you hit an SSL site, a cert signed by that CA is generated and presented to the user. If you look at the cert your browser has, you can tell the CA is different, but most users aren't looking at that.
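
If you want to script that check rather than eyeball the browser UI, here's a
rough sketch in Python (the hostname is just an example):

    # Print the issuer of the certificate actually presented to this client.
    # Behind an inspecting firewall it will be the rolled-out CA, not a public CA.
    # (If the local CA isn't in Python's trust store, wrap_socket raises a
    # verification error instead -- which is itself a hint the chain was re-signed.)
    import socket
    import ssl

    host = "www.example.com"  # example hostname
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            issuer = dict(rdn[0] for rdn in tls.getpeercert()["issuer"])
            print(issuer.get("organizationName"), "/", issuer.get("commonName"))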

Our user base uses a lot of services that can't be forced to downgrade to HTTP, so it's the only option. Fortinet has some configurations that allow you to exclude certain sites from the MITM 'attack'. For example, we don't scan the banking, health care, and personal privacy categories.

IMHO, it would be better to just block the service and say the encrypted
traffic is inconsistent with your policy instead of snooping it and
exposing sensitive data to your middle box.

These boxes that violate end to end encryption are a great place for
hackers to steal the bank and identity info of everyone in your company.

That sounds like a lot of liability to put on your shoulders.

CB

So your idea is to block every HTTPS website?

From my point of view, it is better than violating user privacy & safety.

Sneaky is evil.

Hello,

I have been going through something very interesting recently that relates
to this. We have a customer whom Google is flagging for "abusive" search
behavior. Because Google now forces all search traffic to be SSL, it has
made attempting to track down the supposed "bad traffic" extremely
difficult. We have contacted Google through several channels, and no one at
Google we've worked with has been able to provide any factual examples of
what they are seeing; because the traffic is encrypted, all our usual
capture and analysis tools have been fairly useless.

I'm sure this will be more and more prevalent, but it's really frustrating
when the vendor who forces SSL cannot or will not provide actual
documentation that can help us investigate. So far the only ideas we've
come up with are to play some tricks with DNS overrides and force the
users to non-SSL search so we can inspect the HTTP traffic, or to use
something like Squid to MITM the SSL and at least inspect the traffic
there.

Overall we're not thrilled about the other side effects and implications of
these workarounds, and in this situation our customer, who happens to use
several Google Apps services, is very disappointed that Google cannot be
more cooperative.

I am very interested to hear if others have run into similar situations and
how they were handled. I am sure we will see this type of issue again with
the number of hosted and SaaS solutions growing exponentially, so we are
looking into various options so that in the future we are better equipped
to handle this situation with or without cooperation on the hosted side.

chris

Hi Grant,

Fidelis Security (part of GD) does this for USG customers. Good guys
with a strong, scalable product.
http://www.fidelissecurity.com/

Basically, all internal web browsers get a custom CA which
authenticates a re-signing cert. HTTPS traffic is decrypted by an IDS
agent, examined, and then re-encrypted with the re-signing cert.

You have to decide for yourself whether you really want to examine
your users' HTTPS traffic. It does create a rather hostile work
environment for the folks you're playing big brother to. Not quite
camera-in-the-men's-room hostile but hostile enough to deter quality
staff from seeking and maintaining employment.

Regards,
Bill Herrin

So your idea is to block every HTTPS website?

My idea is to provide secure internet and tell the truth about it.

Proxying and MITMing SSL/TLS is telling a lie to the end user and exposing
them and the proxying organization to a great deal of liability.

If you cannot provide proper transport of TLS/SSL, then tell your users
that. Don't fake it and undermine the ecosystem.

Proxying secure traffic is extremely dangerous; you are pretty much
creating a trap door in the bank vault. It is going to hurt when the hackers
find it, and you are going to be liable for undermining all the secure
communications for all your users.

Your call. YMMV. Maybe you are especially lucky and the hackers won't find
this weak spot in your network where all the most important encrypted info
(personal and corporate) suddenly becomes cleartext.

My advice: don't do MITM, you can't afford it. It is only a matter of
time before the hackers get this info, steal the identities, and drain the
bank accounts of all your users.

So your idea is to block every HTTPS website?

From my point of view, it is better than violating user privacy & safety.

Sneaky is evil.

I expect your users would fire you when they found you'd blocked access to Google.

These boxes that violate end to end encryption are a great place for
hackers to steal the bank and identity info of everyone in your company.

Since the end user machines are generally running Windows, why would bad guys
waste time on a much harder and more obscure target?

>> So your idea is to block every HTTPS website?
>
> From my point of view, it is better than violating user privacy & safety.
>
> Sneaky is evil.

I expect your users would fire you when they found you'd blocked access to
Google.

And they would sue you for gross negligence for decrypting their SSNs when
accessing company payroll and CPNI data.

These boxes that violate end to end encryption are a great place for
hackers to steal the bank and identity info of everyone in your company.

Since the end user machines are generally running Windows, why would bad
guys waste time on a much harder and more obscure target?

Who said the MITM box was not running Windows?

That said, a properly admin'd Win7 box is about as secure as any other end
station, in my opinion. Yeah, Win2k and XP were a pain; MSFT has come a long,
long way.

The same cannot be said for Adobe or Java.

CB

Doesn't Google do certificate pinning anyway, at least in their web
browser?

Honestly, don't do this. Neither option. You can still have some control over SSL access with ordinary domain-based filtering at the proxy, via the CONNECT method or similar. You don't need filtering capabilities over full POST/DELETE/UPDATE HTTP methods, and if you believe you need them, you have a bigger problem that MITMing won't solve at all. It's just like believing a data leak prevention system will really prevent data from leaking.

Or believing a Palo Alto NGFW policy that allows Gmail but won't allow Gmail attachments of MIME type XYZ will be effective. If someone is really interested, there are clever ways to bypass it, more clever than your options to filter it.

Forcing HTTP fallback for HTTPS communication is not only wrong, it's a general regression in security policy and best practices. You are risking privacy, or "confidentiality" and "integrity" if you prefer ISO 27002 buzzwords, not to mention the "availability" breakage, since many applications just won't work (i.e., you will break them).

On the other hand, adding a MITM strategy, be it with Squid, Fortinet, pfSense, Palo Alto, SonicWall, or Endian Firewall, is just worse. You are adding your own attack vector to your company. You are doing the difficult part of the attack for the attacker by installing a custom root cert on your client stations. So you will have much more to worry about, from "who has access", "how vulnerable is it", and "how do we deploy it" to "what is deployed", "what is revoked", and "how do renegotiation, CRIME, and the like affect it". You will have more problem roots and vectors to care about: not only how safe the remote destination SSL server is, but how safe the client-to-local-proxy leg and the local-proxy-to-remote-server leg are.

You are attacking, cracking, and breaking your own network. If someone raises your Squid log levels, you will have to answer for that, and for whatever was copied before you noticed it. The same goes for Fortinet, Websense, SonicWall, or whatever open source or proprietary solution you pick. You are still breaking "confidentiality" and "integrity", but now without allowing ordinary users or applications to notice it.

Back to the beginning: you can still do some level of HTTPS filtering and per-domain control without having to fully MITM and inspect the payload. Don't add OWASP Top 10 / SANS Top 25 facilitation vectors to your company. Do the usual limited but still "safe" filtering (OK, not counting that unknown OpenSSL zero-day the NSA and people on IRC know about but the industry still ignores, or any other conspiracy theory/fact): filter whatever can be filtered without MITMing HTTPS or redirecting to HTTP. And don't be seduced by the possibility of filtering more than that. It's a trap, for both your users and your responsibilities as an organization regarding users' privacy, not to mention possible acts and other laws in your state/country.
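
To make that concrete, a minimal sketch of CONNECT-level filtering in Python
(the blocklist and hostnames are made-up examples; a real proxy obviously
needs far more than this):

    # The proxy only ever sees "CONNECT host:port", never the decrypted payload,
    # which is enough for per-domain policy without breaking end-to-end TLS.
    BLOCKED_DOMAINS = {"ads.example", "tracker.example"}  # hypothetical policy

    def allow_connect(request_line: str) -> bool:
        """Return True if the proxy should tunnel this CONNECT request."""
        parts = request_line.split()
        if len(parts) < 3 or parts[0].upper() != "CONNECT":
            return False  # not a CONNECT request
        host = parts[1].rsplit(":", 1)[0].lower()
        # Block the listed domains and any of their subdomains.
        return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

    if __name__ == "__main__":
        print(allow_connect("CONNECT www.google.com:443 HTTP/1.1"))       # True
        print(allow_connect("CONNECT ads.example:443 HTTP/1.1"))          # False
        print(allow_connect("CONNECT cdn.tracker.example:443 HTTP/1.1"))  # False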

I expect your users would fire you when they found you'd blocked access to
Google.

And they would sue you for gross negligence for decrypting their SSNs when
accessing company payroll and CPNI data.

May I suggest that playing Junior Lawyer on NANOG rarely turns out well.

These filter boxes are typically used on private enterprise networks. Companies have broad latitude in what facilities they provide to their users, and the rationales for filtering run from keeping out malware to keeping dimwit guys from porn sites that are fodder for hostile work environment claims.

Your employer already knows your SSN and payroll data. Unless, I suppose, you're using your computer at one job to moonlight at another, but you're not going to get a lot of sympathy for that.

There are also ISPs that provide intrusive filtering as a feature. I wouldn't use one, but I know people who do, typically members of conservative religious groups.

R's,
John

I don't know if you're referring to HSTS. If not, it's worth noting in
this thread. As I understand HSTS, session decryption is still possible
on sites that send the 'Strict-Transport-Security' header. See:
https://tools.ietf.org/html/rfc6797
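
The header itself is easy to check; a quick sketch (the hostname is just an
example, and the header only tells the browser to insist on HTTPS -- it does
not stop a proxy the client already trusts from decrypting the session):

    import urllib.request

    # Fetch a page and show whether it advertises HSTS.
    resp = urllib.request.urlopen("https://www.google.com/")
    print(resp.headers.get("Strict-Transport-Security"))
    # e.g. "max-age=31536000" if the header is sent, or None if it isn't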

I suspect it's only a matter of time before browsers become suspicious by
default, requiring that HTTPS responses be signed and requiring that SSL
certificates come from trusted sources. In other words, HSTS is the next
step in a long-running arms race. It will not be the last. See this 1997
article for a taste: http://www.apacheweek.com/features/ssl
  
Money quote: "The US Government imposes export restrictions on arms, in a
set of rules called ITAR"

All of this points to the deficiency of the existing commercial
certificate authority system. The fact that organizations can easily
purchase software specifically designed to subvert encrypted communication
channels is proof that HTTPS security is an illusion.

Kelly

chris <tknchris@gmail.com> writes:

I have been going through something very interesting recently that relates
to this. We have a customer whom Google is flagging for "abusive" search
behavior. Because Google now forces all search traffic to be SSL, it has
made attempting to track down the supposed "bad traffic" extremely
difficult. We have contacted Google through several channels, and no one at
Google we've worked with has been able to provide any factual examples of
what they are seeing; because the traffic is encrypted, all our usual
capture and analysis tools have been fairly useless.

I presume the problem is that Google has flagged the outgoing address
on your NAT, because that's all they can see.

Have you considered deploying IPv6 and giving each customer their own
address? Then only that customer will be flagged and it'll be between
them and Google.

I don't know if you're referring to HSTS.

No, HSTS is separate from certificate pinning. Certificate pinning would, in
fact, cause Chrome to freak out in the presence of an HTTPS-intercepting
proxy, but that's what it's supposed to do. I doubt that organisations
regressive enough to do HTTPS-MitM would be enlightened enough to allow
Chrome to be installed, though.
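
To illustrate what pinning actually compares, a rough sketch (needs the
third-party 'cryptography' package; the expected pin is a placeholder, not a
real value):

    import base64
    import hashlib
    import socket
    import ssl

    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    HOST = "www.example.com"                                # example host
    EXPECTED_PIN = "base64-spki-hash-obtained-out-of-band"  # placeholder

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    # Hash the SubjectPublicKeyInfo, HPKP-style. An intercepting proxy has to
    # present a re-signed cert with its own key, so the pin no longer matches.
    cert = x509.load_der_x509_certificate(der)
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()
    print("presented:", pin, "matches expected:", pin == EXPECTED_PIN)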

If not, it's worth noting in
this thread. As I understand HSTS, session decryption is still possible
on sites that send the 'Strict-Transport-Security' header. See:
RFC 6797 - HTTP Strict Transport Security (HSTS)

Yes, HSTS allows interception; it would, on the other hand, prevent the
downgrade attack which the OP was suggesting as one option to allow
organisational monitoring of web requests and responses.

I suspect it's only a matter of time before browsers become suspicious by
default, requiring that HTTPS responses be signed and requiring that SSL
certificates come from trusted sources.

That sounds like what has been the case since... forever.

All of this points to the deficiency of the existing commercial
certificate authority system. The fact that organizations can easily
purchase software specifically designed to subvert encrypted communication
channels is proof that HTTPS security is an illusion.

What does the existence of an HTTPS proxy have to do with the deficiency of
existing CAs? Yes, CAs have issued intermediate CA certificates to MitM
boxes (Trustwave has been caught doing it; I'm sure others have done it,
too). However, the standard mechanism for doing this sort of thing is a
locally-issued root CA certificate, which is installed in the corporate SOE
as a trusted root. That is, actually, *exactly* how the TLS certificate
system is supposed to work -- root CA certificate is marked as trusted, thus
everything issued therefrom is considered OK.
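
To put that in concrete terms, a rough sketch of how completely trust follows
the configured roots (the CA filename is hypothetical):

    import socket
    import ssl

    # Trust *only* the corporate root. The handshake now succeeds only when the
    # presented chain terminates at that root, i.e. only when the corporate
    # proxy is re-signing the connection; a genuine public chain will fail.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(cafile="corp-root-ca.pem")  # assumed local root CA

    host = "www.example.com"  # example hostname
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("chain validated against the corporate root")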

That this is possible is not "proof that HTTPS security is an illusion";
it's simply another demonstration that if the bad guy has control over your
machine, it isn't your machine any more. If TLS wasn't vulnerable to this
particular mode of subversion, I'm sure there'd be products out there that
would hook into the core of the browser and grab the requests before they
got into the encrypted channel and re-route them to the proxy, and it would
be that software, rather than the local root CA certificate, which would be
installed in the corporate SOE.

- Matt

I have been "off-line" for several days and I have to say that this is one of the most depressing thread I have seen _anywhere_ in a while and reading it on NANOG multiplies the depression.

But I am heartened to see this response (and one or two others so far)--there is still hope.

For what it is worth (and this reflects at most two people's thinking here), I go to some considerable effort to identify handlers-of-my-data that betray my trust, and on the merest hint I take considerable effort to avoid the betrayer and anybody who relies on the betrayer.

Yes, I know that there is no way that I can stop everybody, but I try very hard.

There are also ISPs that provide intrusive filtering as a feature. I
wouldn't use one, but I know people who do, typically members of
conservative religious groups.

Can you provide credible evidence to support "typically members of
conservative religious groups"?

Please explain how that contributes to the question at hand.

In article <54BCC924.1000104@cox.net> you write:

There are also ISPs that provide intrusive filtering as a feature. I
wouldn't use one, but I know people who do, typically members of
conservative religious groups.

Can you provide credible evidence to support "typically members of
conservative religious groups"?

I personally have known people who use them. If you're familiar with
some of the books that I've written, it should be evident why I'd need
to know about them and who they'd appeal to.

In any event, as should be totally obvious, the point was that there
are people who for their own reasons welcome intrusive filtering that
most of us would find unacceptable.

R's,
John