Flow collection and analysis

Wed, Jan 26, 2022 at 07:21:19AM -0600, Mike Hammett:

Why is it [TLS] even necessary for such a function?

Confidentiality and integrity, even if you do not care about authentication. I am surprised that question even needs asking.

The fewer things that are left unprotected, the better for everyone. Those with concerns about the erosion of their privacy and human rights benefit from everything being protected, everywhere, for everyone.

People who advocate TLS lash-ups like nginx front ends remind me of Mr. Bean’s DIY automobile security, which started with a screwed-on metal hasp and padlock, and then continued through a range of additional “layers”. That is not “defense-in-depth”, merely unwarranted “complexity-in-depth”.


TLS is a standardized, fully open-source package that can be integrated into even tiny IoT devices (witness this $10 WiFi module: https://www.adafruit.com/product/4201). Those who argue that people who want intrinsically secure products can just bolt on their own security are missing the point entirely. Every web-enabled product should be required to implement TLS, and then let customers decide when they want to enable it. Vendors so weak that they can’t should have their products go straight into /dev/null.

-mel via cell

While I agree that, yes, everything SHOULD support TLS, there’s a perfectly good reason for terminating TLS in something like nginx/caddy/apache/etc.: X number of things supporting TLS on their web interface means X number of ways of configuring TLS. If I terminate it on nginx, there’s only a single way: the nginx config, which is then fairly easily leveraged into having a single set of allowed protocols and ciphers.
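A minimal sketch of that single point of TLS policy: one nginx server block terminating TLS in front of a plain-HTTP backend. The hostname, certificate paths, and upstream address (127.0.0.1:8080) are placeholders, not anything from the thread:

```nginx
# Terminate TLS here; protocol and cipher policy live in one place
# for every app proxied behind this server.
server {
    listen 443 ssl;
    server_name flows.example.net;

    ssl_certificate     /etc/nginx/certs/flows.example.net.pem;
    ssl_certificate_key /etc/nginx/certs/flows.example.net.key;
    ssl_protocols       TLSv1.2 TLSv1.3;   # refuse legacy protocols
    ssl_ciphers         HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://127.0.0.1:8080;  # plain-HTTP app behind the proxy
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Tightening the allowed protocols or ciphers for every backend then means editing those two `ssl_*` lines once, rather than reconfiguring each application.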


You can always choose to use nginx if you like, but there’s no reason anyone else should be forced to.


Are you asking for commercial solutions? Free solutions? Open Source?

If the purpose of the software is not to be a dedicated-purpose HTTP daemon, use something that already exists with a deep feature set and can be configured as needed, such as apache2 with OpenSSL, or nginx.

It’s not reasonable to expect the developers of elastiflow to reinvent the wheel and write their own httpd with TLS support if it can easily be put “behind” apache2 or nginx. The risk of having people who aren’t full-time httpd specialists write their own HTTP daemon and mitigate every possible security risk in a TLS setup is greater than that of using what already exists.

It’s a one-page configuration file in nginx.

Not at all. What I’m recommending is that people who develop something specialized (like netflow analysis software) don’t need to expend the person-hours and extensive development time to implement something that has already been implemented better by people who are httpd specialists.

The number of possible design complexities and security risks that go into shipping a ‘stable’ version of apache2 or nginx is beyond the scope of any small-to-medium-sized non-httpd-related open-source software project. Let the apache2 or nginx developers handle that.

It’s like saying that because a piece of software communicates with something externally via SMTP, for inbound or outbound email or both, its developer should take the time to re-implement and write their own SMTP stack from scratch, rather than relaying mail via a postfix daemon running on the same server.

Or that because you have a piece of software that queries something over SNMP, you shouldn’t use the perfectly good ISC SNMP packages that exist for CentOS or Debian to issue snmpgets, but should write your own SNMP poller from scratch.

But nobody asked for anything from scratch, Eric. OpenSSL is a complete, ready-to-integrate package. Any developer worth his salt should be able to put it into any web application. In addition to OpenSSL, there are very compact commercial SSL libraries such as Mocana NanoSSL and wolfSSL, if you want to really simplify the process.

Nobody needs to write any crypto software at all, and the extensive man-hours you claim are not real.
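For a sense of scale, here is a sketch of how little code TLS support takes when the crypto comes from an existing library. This uses Python's standard-library `ssl` module purely for illustration (an embedded product would call OpenSSL or wolfSSL in much the same way); the certificate paths, port, and function names are placeholders, not anything from the thread:

```python
# Sketch: add TLS to an existing plain-HTTP service using only stdlib.
# "cert.pem"/"key.pem" and port 8443 are hypothetical placeholders.
import ssl
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_tls_context(certfile=None, keyfile=None):
    """Server-side TLS context with a sane protocol floor."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSL3/TLS1.0/1.1
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)
    return ctx

def serve_https(port=8443, certfile="cert.pem", keyfile="key.pem"):
    """Wrap an ordinary HTTP server's socket in TLS and serve forever."""
    httpd = HTTPServer(("0.0.0.0", port), SimpleHTTPRequestHandler)
    httpd.socket = make_tls_context(certfile, keyfile).wrap_socket(
        httpd.socket, server_side=True)
    httpd.serve_forever()
```

The application's own work reduces to loading a certificate and wrapping a socket; the library supplies all of the actual cryptography, which is the point being argued here.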


‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐

Why is DNS still travelling in clear text?

The software running DNS services worldwide is probably written in C or one of the languages you mentioned below.

Why don't they just strap LibreSSL or NanoSSL onto DNS?

Okay, there is DNS over HTTPS. I don't know the stats, but I doubt it's close to 100% adoption worldwide.

I don't understand what SSL or zero trust has to do with collecting flows. Do I need SSL to run shell commands in my terminal to read flows? Not really. Do I need to strap SSL onto grep, Notepad and Excel? I'm not sure how one could do that.

When you see the flows of your customers, you have access to how many times they used Netflix, Facebook and anything else you can think of, because those people are querying DNS to reach these services... in clear text. They are also hitting servers that are well known.

I would worry more about who is reading the flows of my business' customers than about those flows not being protected by SSL. They sit in a highly secure, zero-trust environment anyway.

So if you don't like elastiflow or any other software that is not protected by SSL, then maybe switch off your computer. Protonmail won't help you keep your digital life secure.

This email was sent by a secure infrastructure using TLS 1.2 and clear-text DNS.

Thank you


‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐

My developer colleagues started adding valid Let’s Encrypt certs everywhere. Now I have multiple NAT entry points for build servers in my VPC because of the renewal frequency.

I feel less secure with them adding valid SSL certs everywhere on things that run on a PRIVATE NETWORK.

It’s just dumb reasoning, and the CTO agreed with them. They are all gone by now, but their legacy remains.

Now I have to find all those certs, replace them with 10-year self-signed ones, and add --no-check-certificate flags to their HTTP client requests.

All NAT entrypoints are gone.

I’m feeling safe now.