Mark Andrews wrote:
Look at CableLabs specifications. There is also RFC 7084, "Basic
Requirements for IPv6 Customer Edge Routers", which CableLabs
references.
One stupidity of IPv6, among many, is that it assumes
links have millions or billions of mostly immobile hosts
and defines a very large (but not large enough for billions, or
even millions) minimum interval between ND messages, which
is then applied to links with a much smaller number of hosts.
So, though RFC 7084 says:
it MUST explicitly
invalidate itself as an IPv6 default router on each of its
advertising interfaces by immediately transmitting one or more
Router Advertisement messages with the "Router Lifetime" field
set to zero [RFC4861].
RFC 4861 forbids sending two RAs with an interval of less than 16
seconds.
Is it "immediately transmitting one or more Router Advertisement
messages"?
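For the record, the "self-invalidating" RA in question is an ordinary Router Advertisement whose Router Lifetime field is zero. A minimal sketch of the fixed ICMPv6 RA header in Python (field layout per RFC 4861 section 4.2; the checksum is left as a placeholder that a raw-socket sender or the kernel would compute):

```python
import struct

def build_ra(router_lifetime, cur_hop_limit=0, flags=0):
    """Fixed part of an ICMPv6 Router Advertisement (RFC 4861 s4.2).

    The checksum is left zero here; a raw-socket sender (or the
    kernel) would fill in the ICMPv6 pseudo-header checksum."""
    ICMPV6_RA = 134  # ICMPv6 type for Router Advertisement
    return struct.pack(
        "!BBHBBHII",
        ICMPV6_RA,        # Type
        0,                # Code
        0,                # Checksum (placeholder)
        cur_hop_limit,    # Cur Hop Limit (0 = unspecified)
        flags,            # M/O flags
        router_lifetime,  # Router Lifetime: 0 = "not a default router"
        0,                # Reachable Time (0 = unspecified)
        0)                # Retrans Timer (0 = unspecified)

# The "self-invalidating" RA from RFC 7084:
deprecation_ra = build_ra(router_lifetime=0)
```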
Masataka Ohta
Keith,
It’s called “operations security” or “OPSEC.” The idea is that from lots of pieces of insignificant information, an adversary can derive or infer more important information you’d like to deny to him. There’s a 5-step process used by the U.S. Military but the TL;DR version is: if you don’t have to reveal something, don’t.
IMO, anyone who thinks the folks who developed OPSEC don’t have a clue is the one I find wanting.
Regards,
Bill Herrin
Can somebody hand me a match? There's a straw man argument
that needs to be set afire here.
A security geek would be all over me - "too many clues!".
Anyone who says something like that is not a "security geek". They
are a "security poser", interested primarily in "security by obscurity"
and "security theatre", and have no clue what they are talking about.
It's called "operations security" or "OPSEC." The idea is that from lots
of pieces of insignificant information, an adversary can derive or infer
more important information you'd like to deny to him. There's a 5-step
process used by the U.S. Military but the TL;DR version is: if you don't
have to reveal something, don't.
You and I have completely different opinions of how security works. In my world, security must continue to be effective even in the face of an adversary that knows everything there is to know about what is being attacked (except for some authentication secrets, which of course need to be kept secret).
If the attacker does not already have that information, then obtaining it is usually a rather trivial reconnaissance operation. The job of "securing" something means to make it impervious to outside influence -- it is the other side of the "safety" coin -- and Safety and Security go hand in hand.
Security based on keeping something which is trivial to discover secret is trivial security and can still be trivially bypassed.
It is telling that of the thousands of "ransomware attacks" that occur each second, only 617 have been successful so far this year. Those victims probably relied on keeping something secret that did not matter. In other words, they expended effort on the wrong things -- their analysis of risk was inherently flawed.
Can you provide a scenario in which knowledge of the VLAN number is a vulnerability that can be exploited? And if you can find one, is there a more effective way to prevent that exploit that will work even if the attacker knows the VLAN number? Would it not be better to implement that measure than to rely on trivial means (which are trivial to defeat) to hide the VLAN number? This does not mean you need to publish the VLAN numbers on Facebook for all to see, merely that knowledge of that fact becomes irrelevant: even if someone did post the VLAN numbers on Facebook, it would not help the adversary.
IMO, anyone who thinks the folks who developed OPSEC don't have a clue is
the one I find wanting.
Opinions vary. That is the nature of opinion.
Is that a good time for me to point to the URL in my sig?
Cheers,
-- jra
As an Evil Firewall Administrator™, I have an interest in this area ...
On Fri, 4 Oct 2019 15:05:29 -0700, William Herrin <bill@herrin.us> may have
written:
> Anyone who says something like that is not a "security geek". They are
> a "security poser", interested primarily in "security by obscurity" and
> "security theatre", and have no clue what they are talking about.
Hmm ... 'primarily in "security by obscurity"' ... that does tend to
indicate a severe case of cluelessness (and that's coming from someone who
doesn't let his right hand know what his left hand is up to without
justification signed off in triplicate). To give a real-world
example: removing headers from an Apache web server doesn't do much to
increase security (it's mostly to keep auditors happy), because automated
attacks will hit your exposed Apache servers anyway, and a sophisticated
attacker will note the removal and adopt the strategy of an automated
attack.
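For reference, the Apache header removal being discussed is normally done with two stock httpd directives, no third-party modules required (note that ServerTokens only trims the Server header down to "Apache" rather than removing it entirely):

```apache
# Trim the Server response header to just "Apache"
# (no version, OS, or module details)
ServerTokens Prod

# Drop the version footer on server-generated pages
# (error pages, directory listings)
ServerSignature Off
```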
> more important information you'd like to deny to him. There's a 5-step
> process used by the U.S. Military but the TL;DR version is: if you don't
> have to reveal something, don't.
You've ignored step 1 - identifying critical information that needs
protecting. It makes sense to protect information that needs protecting and
not to lose sleep over information that doesn't. Not many of
us are planning an invasion of a Nazi-infected Europe any time soon.
We are heading toward a restatement of Kerckhoffs's principle/Shannon's maxim,
the latter of which can be paraphrased as "design systems assuming that
your adversary will know as much about them as you do".
Not that I'm advocating publishing all internal design documents, but systems
whose security is predicated on the secrecy of those are brittle and likely
to be badly compromised. Better to assume that enemies know or can find out
everything and design/build accordingly.
---rsk
Not everyone attacking your systems is going to have the skills or knowledge to get in though - simple tricks (like hiding what web server you use) can prevent casual attacks from script kiddies and others who aren't committed to targeting you, freeing your security teams to focus on the serious threats.
Mark
Not everyone attacking your systems is going to have the skills or
knowledge to get in though - simple tricks (like hiding what web server
you use) can prevent casual attacks from script kiddies and others who
aren't committed to targeting you, freeing your security teams to focus
on the serious threats.
And this is based on what evidence? It also defies logic. By
definition, script-kiddies run scripts. If you remove the identification,
those scripts can no longer identify what is running, and therefore will
continue to attack it anyway. What would be useful is to replace the
headers with alternative "disinformation" headers, so that the
script-kiddies' scripts get a positive result -- but not the one they are
looking for -- and go away. Until, that is, disinformation headers
acquire the same "old wives' tale" status as "remove the identifying
headers", at which point either course of action is a waste of
effort and $$$, because the script-kiddies will just ignore the headers:
it is just as cost-effective to run the exploit and see what happens.
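A sketch of the disinformation-header idea using Python's standard http.server; the IIS banner is an arbitrary decoy value chosen purely for illustration:

```python
from http.server import HTTPServer, BaseHTTPRequestHandler

class DecoyHandler(BaseHTTPRequestHandler):
    # Advertise software we are not actually running; scripted scanners
    # keying on the banner will select the wrong exploit set.
    server_version = "Microsoft-IIS/6.0"  # hypothetical decoy value
    sys_version = ""                      # drop the "Python/3.x" suffix

    def do_GET(self):
        body = b"ok\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("0.0.0.0", 8080), DecoyHandler).serve_forever()
```

Every response then carries the decoy in its Server header, while the real stack stays unannounced.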
In other words, simple tricks are exactly that. They usually do exactly
the opposite of what the "simple tricker" thought they were doing, or do
nothing useful at all. Which means that effort and $$$ have been
expended at best on a useless endeavour, and at worst one which
increased the very activity it was designed to thwart. One would have
been far better off putting the $$$ in the slush-fund and using it when
some particularly persistent script-kiddie showed up so you could afford
to add a filter to the firewall.
They aren’t mutually exclusive concepts. A strong security architecture has multiple layers an adversary must penetrate. No layer has to be sufficient on its own, it just has to reduce vulnerability more than it increases cost.
Limiting the server banner so it doesn’t tell an adversary the exact OS-specific binary you’re using has a near-zero cost and forces an adversary to expend more effort searching for a vulnerability. It doesn’t magically protect you from hacking on its own. As you say, your security must not be breached just because the adversary figures out what version you’re running. But viewed as one layer in an overall plan, limiting that information enhances your security at negligible cost. That’s security smart.
Regards,
Bill Herrin
I think your analysis is incorrect.
There are two cases which are relevant:
(1) The attack is non-targeted (that is, it is opportunistic).
(2) The attack is targeted at you specifically.
In the former case (1), it does not matter whether the "banner" identifies the specific OS binary, because it is irrelevant: the script either works or it does not. Even a "banner" that says "Beyond this point there be monsters" will make not one whit of difference.
In the latter case (2), it likewise does not matter whether the "banner" identifies the specific OS binary. You have been targeted. All possible exploits will be attempted until success is achieved or the vat of exploits to try runs dry.
So while the cost of doing the thing may be near-zero, it is not zero. All those near-zero-cost things you do that have no actual advantage can add up to quite a huge total, and it would be more advantageous to spend that somewhere it will, in fact, make a difference.
Any additional effort put in by an attacker will increase the chance of an attack being detected before it is successful. Consider the following two scenarios.
Scenario 1 is a webserver that makes no effort to obfuscate:
- Attacker does a HEAD request on /, which is a legitimate request, and sees the webserver vendor name
- Attacker does a quick search, and finds there is a vulnerability in the webserver
- Attacker exploits the vulnerability
Now consider scenario 2, where the server is configured to hide the webserver vendor and has an IDS/IPS system in place:
- Attacker does a HEAD request on /, which is a legitimate request, but there is no usable information in the response.
- Attacker probes the webserver with a number of attacks, which generate a number of 403, 404, 500 etc. errors in the webserver logs.
- The IDS/IPS sees the sudden spike in errors from a single IP address and blocks the source IP.
The act of obfuscation made it possible for the IDS/IPS to detect the probe, preventing the attack. Will this block every attack? Probably not, but it increases the effectiveness of the security by forcing the attacker to take additional (detectable) actions when trying to break in.
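The detection step in scenario 2 can be sketched in a few lines; the status codes and threshold below are illustrative assumptions, not values from any particular IDS/IPS product:

```python
from collections import Counter

ERROR_CODES = {403, 404, 500}  # "suspicious" statuses (illustrative)
THRESHOLD = 20                 # errors from one IP before blocking (tunable)

def ips_to_block(log_entries):
    """log_entries: iterable of (client_ip, http_status) pairs.

    Returns the set of source IPs whose error count crossed the
    threshold -- the 'sudden spike of 403/404/500s from a single IP'
    that the IDS/IPS in scenario 2 keys on."""
    errors = Counter(ip for ip, status in log_entries
                     if status in ERROR_CODES)
    return {ip for ip, count in errors.items() if count >= THRESHOLD}
```

A real deployment would run this over a sliding time window and feed the result to a firewall rule, but the core signal is just this per-source error count.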
The lock on your front door can be picked by anyone with a $10 lockpick set in under 5 minutes, does that mean you shouldn’t bother locking your doors?
Mark
And in fact, there's more than just the costs of doing it. There's also the costs
of having done it.
Obfuscating your OpenSSH versions is a *really* good way to make your security
scanners that flag backleveled systems fail to flag the systems.
Which can cause a really uncomfortable conversation with the CIO about why the
local newspaper's front page is running a story about how your organization got
totally pwned via a backleveled OpenSSH on one cluster of 5 servers.....
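The scanner failure mode is worth spelling out: a version check that silently skips banners it cannot parse will pass exactly the obfuscated hosts it most needs to flag. A sketch, with an invented minimum-version policy for illustration:

```python
import re

def audit_ssh_banner(banner, minimum=(8, 0)):
    """Classify an SSH banner as 'ok', 'backlevel', or 'unknown'.

    The minimum of (8, 0) is an illustrative policy, not a real
    security baseline. The point: an unparseable (obfuscated) banner
    must surface as 'unknown' for manual review, never silently pass."""
    m = re.search(r"OpenSSH_(\d+)\.(\d+)", banner)
    if m is None:
        return "unknown"  # obfuscated: do NOT assume it is current
    if (int(m.group(1)), int(m.group(2))) >= minimum:
        return "ok"
    return "backlevel"
```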
You would still be better served by forgetting about hiding the
webserver vendor name and using that money to buy an IDS/IPS that works
properly by detecting the actual exploit attempt rather than looking for
"a spike of errors in the log" in order to block the originating
address, especially since a "spike of errors in the log" can have quite
a few causes other than exploit attempts -- in fact such a "spike in
errors" is more likely to occur for reasons other than attempts to find
a vulnerability. Furthermore, it is quite possible for the first
exploit attempt to be successful despite the hidden banner, in
which case the entire exercise was nothing more than security
theatre. This is especially true when you consider "many" systems using
this method of protection and millions of attempted exploits per second.
Furthermore, why on earth would an opportunistic attacker use two
requests when one would suffice? There is nothing to be gained by
probing only to discover "Oh, I am getting all wet cuz this is a juicy
target" when one could merely send the exploit and see what happens --
it either works or it does not. Probing first adds no value; it
just makes each attempt expend more resources. In the time you have
probed a server and gotten a response, you could have simply sent the
exploit to a dozen servers. So clearly probing for a "good target" is
just a waste of time.
This is why most dirty e-mail spammers just "blast" out their spam
without waiting for the appropriate responses from the SMTP server, and
why having the SMTP server insist on strict RFC compliance (and test
that the connected MTA is RFC compliant) works so well at getting rid of
95% of spam.
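That strict-compliance trick is often implemented as a "greet pause": a compliant MTA must wait for the 220 banner before speaking, so anything received before the banner identifies spamware. A sketch using asyncio (the hostname, reply texts, and delay are placeholders):

```python
import asyncio

async def handle_smtp(reader, writer, greet_delay=2.0):
    """Greet-pause check: reject clients that talk before the banner.

    A compliant MTA waits for the 220 greeting; spamware that blasts
    commands immediately reveals itself during the delay."""
    try:
        # Anything readable before we greet means an "early talker".
        await asyncio.wait_for(reader.read(1), timeout=greet_delay)
        writer.write(b"554 5.7.1 protocol error: command before greeting\r\n")
    except asyncio.TimeoutError:
        writer.write(b"220 mail.example.test ESMTP\r\n")  # placeholder name
        # ... the normal SMTP dialogue would continue here ...
    await writer.drain()
    writer.close()
```

Wired into `asyncio.start_server`, this drops the "blast and run" senders before any message data is even accepted.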
So given a choice between:
(1) Spending money hiding the headers and using software to reconfigure
the firewall based on errors in the log; or,
(2) Spending money on an IDS/IPS that can detect and drop an exploit
dynamically
you are probably better served by (2) than by (1). The software that
monitors the log is most useful to send a notification that there is an
excessive error rate (since that is what it is detecting).
Of the millions of ransomware attacks per second, the 617 victims so far
this year probably relied on method (1) and in hindsight wished they had
been a little smarter and used method (2) instead.
Why would they bother performing that search? Why not use their botnets
to throw every exploit they have at a service and see if anything works?
That's easier and cheaper and faster than being selective. It also --
if they happen to have a working exploit -- blows right past
(announced) versions, whether real, fake, or elided.
Brute force is cheap, analysis is expensive.
Case in point: every mail server I have eyeballs on was probed by
attackers trying to exploit the recent exim vulnerability -- no matter
what MTA they're running, no matter that they all announce the MTA and
version, no matter anything. I doubt I'm alone in observing this.
Even a diligent, capable attacker -- someone who is willing to invest
the time and effort to ascertain what's running which service, down
to the version -- could save themselves some homework by launching an
attack like the one in the first paragraph above, examining the results,
and using those to greatly reduce their search space. It's easy, it's
cheap, it's fast, it's automated, and it yields no clues as to where
the followup (version-specific) attack is going to come from.
---rsk
On Tue, 8 Oct 2019 13:59:58 +0000, Mark Collins
<mark.collins@mariestopes.org> may have written:
> Not everyone attacking your systems is going to have the skills or
> knowledge to get in though - simple tricks (like hiding what web server
> you use) can prevent casual attacks from script kiddies and others who
> aren't committed to targeting you, freeing your security teams to focus
> on the serious threats.
Er ... no. Not according to real world data (my firewall logs).
Most attacks are fully automated and they don't (always) bother with
complex logic to determine which attacks to try. For instance, I constantly
see Apache Struts attacks against servers that a) may or may not be running
Apache (the headers are removed) and b) definitely aren't running Struts.
In fact many attacks are sufficiently automated that the human behind the
scenes won't even know a system has been compromised if it doesn't
successfully pick up the second stage of the payload and 'phone home'.