Data embedded in the binary is hard-coded. That's what hard-coded
means. If it makes you happier I'll qualify it as a "hard-coded
default," to differentiate it from settings the operator can't
override with configuration.
No. I will not indulge your invention of terms. "Hard-coded" means you
need to recompile to change it. This is a default value. A
configuration option takes precedence.
No, it is not in any respect. The code you grepped out generates a
default configuration hints file when one does not exist.
The CWE you cite specifically refers to default values for things like
cryptographic RNG seeds, salts, TCP sequence number generators, and the
like. See, for example, Debian security advisory DSA-1571-1 (openssl) from 2008.
A quick search of the CVE list shows
between 600 and 3700 CVEs related to default configurations that are
either directly insecure or unexpectedly become insecure when some but
not all of the defaults are changed by the operator. The vast majority
of these CVEs exhibit, as you say, no flaw in the computational logic.
You literally just gave me a link to the CVE search page, waved your
hand, and said, "See?" Well, I'll admit to not being as good at
conducting CVE research as you. So, as an expert on the topic: How many
of these "between 600 and 3700 CVEs" are related to a violating the
baseless expectation of confidentially in a protocol which does not
guarantee confidentiality? Somewhere between 0 and 2000?
But you know what, go ahead. Submit the CVE. Be the hero that you
believe yourself to be.
> > Data embedded in the binary is hard-coded. That's what hard-coded
> > means. If it makes you happier I'll qualify it as a "hard-coded
> > default," to differentiate it from settings the operator can't
> > override with configuration.
>
> No. I will not indulge your invention of terms. "Hard-coded" means you
> need to recompile to change it. This is a default value. A
> configuration option takes precedence.

BIND-9.18.14 requires recompilation to update the embedded defaults:
bin/named/config.c: 2001:500:200::b; # b.root-servers.net\n\
bin/named/config.c: 199.9.14.201; # b.root-servers.net\n\
lib/dns/rootns.c: "B.ROOT-SERVERS.NET. 3600000 IN A 199.9.14.201\n"
lib/dns/rootns.c: "B.ROOT-SERVERS.NET. 3600000 IN AAAA 2001:500:200::b\n"
Don't comprehend what a vulnerability is.
Don't recognize the distinction between a logic issue and a
configuration issue.
Don't understand the difference between "hard-coded" and a default
value.
Don't recognize that these defaults are overridden by an existing
configuration file that is often shipped by the operating system
distribution (see the sketch after this list).
Don't read the code.
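
To make the overridden-by-configuration point concrete, the
distribution-shipped override looks roughly like this (a sketch only;
the exact file name and path vary by distribution):

  // When a hint zone for "." is configured, named loads the root server
  // list from this file instead of the defaults compiled into
  // bin/named/config.c and lib/dns/rootns.c.
  zone "." {
      type hint;
      file "/usr/share/dns/root.hints";  // path is distribution-specific
  };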
Evidently. Since we're talking about default configurations, the
obvious search is "default configurations." That yields 770 results.
The fourth in my list is CVE-2023-33949, a piece of software whose
default configuration lets folks create accounts without verifying
their email address. That's a reasonable setting when the application
is not exposed to the public Internet and you want to minimize setup
effort. The mitigation is to change the configuration setting.
Expanding the search to "defaults" yields 3769 results. I didn't read
through 3769 results to find one that was perfectly, flawlessly on
point but there were plenty where something about the software's
default configuration is insecure until the operator changes the
configuration.
I've unpacked what a vulnerability is and is not for you.
I've unpacked how you can't be violating confidentiality in a protocol
which doesn't guarantee confidentiality for you.
I've unpacked how abusing the vulnerability reporting system for
something which isn't actually a security vulnerability dilutes the
effectiveness of that reporting system for you.
I've unpacked basic definitions of basic terms like "hard-coded" for
you.
Each time, you've just quietly picked up the goal posts and moved them
downrange. Your argument has gotten increasingly ridiculous. It's
obvious that you're more interested in "winning" than anything.
I've dispensed the last of my advice. File your CVE. I look forward to
tracking its progress.
It announces itself to an address which remains under the control of
USC/ISI, the current and ongoing root server operator for b.root-servers.net.
So apart from leaking that the root hints have not been updated I don’t
see a big risk here. The address block, as has been stated, is in a reserved
range for critical infrastructure and, I suspect, has special controls placed
on it by ARIN regarding its re-use should USC/ISI ever release it / cease to
be a root-server operator. I would hope that ARIN and all the RIRs have
the list of current and old root-server addresses and that any blocks
being transferred that contain one of these addresses are flagged for
special consideration.
I'm afraid that "old root-server addresses" will not
be considered for "critical infrastructure" at least
by those people who can't see operational difficulties
to change the addresses.
[Sorry for the delay -- this was ICANN week and I'm just getting unburied]
> > Perhaps make it a false responder in the last of those 9 years so that
> > anybody who is truly that far behind on their software updates gets
> > enough of a spanking to stop sending you packets. You'll have problems
> > repurposing the address and its subnet until folks stop sending you
> > DNS query packets, even if you don't respond to them.
>
> Not a bad idea, you could also put a nice warning page up informing
> them that their DNS resolver is broken and not enforcing DNSSEC while
> you're at it
Responding to this topic specifically:
All root server operators have made a strong commitment to only serving
the DNS root as managed by IANA [1], so I'm afraid this option is off the
table. Although you could use some wiggling to try and say this
principle doesn't apply to "old addresses", I would not be willing to
take that wiggle on behalf of b.root-servers.net.
[Sorry for the delay -- this was ICANN week and I'm just getting unburied]
> Do you have query rates over time for the old and new addresses since
> this change in 2017?

We do indeed still get traffic on the older addresses, and it's not an
insignificant amount, and it's not just priming queries either.
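
(For anyone unfamiliar with the jargon: a "priming query" is the ". IN NS"
lookup a resolver sends at startup to learn the current set of root
servers. A rough by-hand equivalent, using the B address that appears in
the BIND hints quoted earlier in the thread; the target address here is
illustrative only:

  # Query the server directly, without recursion, the way a priming
  # resolver would:
  dig @199.9.14.201 . NS +norecurse

Everything else arriving on an old address is ordinary resolution
traffic, which is the distinction being drawn above.)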
> Even if you end up with the same answer of 12mo, data supporting it
> may give comfort to the community.
As my colleague Robert Story said in a separate thread, we are still
serving our old address and will almost certainly continue to serve our
current address long beyond the promise date we put into the
announcement. Our intent is to continue serving it longer than the
announced end date, but we do not offer a promise to do so.
As I recall, the statement which started the thread could be
paraphrased as one of those root server operators saying, "Hey
Community, we'll continue to treat the old address as still a root DNS
server for a year, maybe more, but no promises."
I acknowledge that you'd prefer it be, "forever and a day," and
perhaps that's what the answer should be, but with all due respect the
document you cite is completely silent on the use of addresses which are
-no longer- root DNS servers.
At some point, somebody's going to want to do something with the old
/24. If they didn't, nothing would stop them from committing to
continue its use as a root DNS server (in addition to the new official
address) for the remaining lifetime of the b-root service. The extra
configuration and the extra route announcement just aren't costly
enough to be a reason not to.
> I acknowledge that you'd prefer it be, "forever and a day," and
> perhaps that's what the answer should be, but with all due respect the
> document you cite is completely silent on the use of addresses which are
> -no longer- root DNS servers.
I cited the document to discuss the fact that we cannot do what you
suggested:
> Not a bad idea, you could also put a nice warning page up informing
> them that their DNS resolver is broken and not enforcing DNSSEC while
> you're at it
as this would require us to return a different answer to a query than
what is in the IANA-maintained root zone (i.e., we'd be synthesizing
address records and hoping that the querier was using a web browser),
which has been tried by many companies and is heavily frowned upon.
Other options, like returning a special loopback address, have been better
appreciated [2], but this would still require returning answers that did
not match the IANA-distributed root zone data, which we will not do.
As to your other point:
At some point, somebody's going to want to do something with the old
/24.
You are correct that we did not state we will or will not be returning
the address block we have back to ARIN. We do not plan on returning it
for precisely the reasons you've specified. Even if we were going to,
we would certainly stop responding on it for a long time first. And
even if we returned it, I suspect that ARIN itself would consider
carefully what to do with a returned address in the critical
infrastructure block. TL;DR: we agree and it's covered.
> William Herrin <bill@herrin.us> writes:
> > At some point, somebody's going to want to do something with the old
> > /24.
>
> You are correct that we did not state we will or will not be returning
> the address block we have back to ARIN. We do not plan on returning it
> for precisely the reasons you've specified. Even if we were going to,
> we would certainly stop responding on it for a long time first. And
> even if we returned it, I suspect that ARIN itself would consider
> carefully what to do with a returned address in the critical
> infrastructure block.
Hi Wes,
Due respect, you should have a better fleshed-out commitment to the
community than, "Here today, gone tomorrow. Probably not. Maybe." Not
to put words in your mouth, but try something like, "We will continue
serving root DNS requests from the old address indefinitely. We will
notify the community of any change in that disposition at least 1 year
prior to the change and will describe, at that time, what will
change."
Don't say, "We'll keep it up for as long as we feel like it, but at
least a year." That's crap.
Long ago, I had suggested that given the peculiar and unique nature of root server addresses and their critical sensitivity that their addresses be treated as protocol parameters, i.e., that root service was fixed to those addresses by protocol specification. People asked if I had been touched by the Bad Idea Fairy (RIP). I still think it’s a good idea…
> Don’t say, “We’ll keep it up for as long as we feel like it, but at
> least a year.” That’s crap.
30% of the root servers have been renumbered in the last 25 years.
h: 2015
d: 2013
l: 2007
j: 2002
For these 4 cases, only a 6 month transition time was provided, and the internet as we know it did not fall over in a flaming pile. ( One could argue it was ALREADY a flaming pile, but that’s a different discussion.)
Why are we so twisted up because this time they are providing a guarantee for TWICE as much transition time? Have things changed so much since 2015 that a full year is not enough time all of a sudden?
There’s a huge difference between “no one noticed any issues because recursive resolvers will seamlessly fall back to other root servers if there’s an outage” and “there aren’t issues”.
For non-DNSSEC-verifying-resolvers (sheesh, but they still exist), if the IPs are eventually released and someone stands up a DNS server on them you could cause real harm.
Does this need to be over-engineered to prevent that? No, though doing a few tricks to help the poor folks on unmaintained recursive resolvers isn’t bad either.
But lack of visible issues doesn’t mean that users aren’t put at risk. That said, I have no idea if the old number resources were released or no longer announced in the DFZ after the previous renumberings, which would really be the point at which concern is warranted, not simply no longer responding.
IP addresses cannot and should not be trusted. It’s not like you can really trust your packets going to B today are going to and from the real B (or Bs).
If the security of DNS relies on no one intercepting or spoofing responses of some of your queries to a root server, it’s been game over for a long time.
That's great in theory, and folks should be using DNSSEC [1], but we all know there's plenty of places out there in this wide world that don't do things right, and absolutely *do* rely on packets getting to the correct place.
I'm not saying we shouldn't whack those folks with a cluestick [1] (we should), I'm saying we should also not bother making it easier for an attacker to hijack these poor misguided souls.
Matt
[1] $(dig +short pumpkey.net ds) returns nothing here, so I guess you are included in the set of folks who should really upgrade their DNS security to stop relying on trusting that packets are getting to the right place.
> > That's great in theory, and folks should be using DNSSEC [1],
>
> Wrong.
>
> Both in theory and practice, DNSSEC is not secure end to end
Indeed, but (a) there's active work in the IETF to change that (DNSSEC stapling to TLS certs) and (b) that wasn't the point - the above post said "It’s not like you can really trust your packets going to B _today_ are going to and from the real B (or Bs)." which is exactly what DNSSEC protects against! It may not protect the client, but it protects the recursive resolver, which is often on the same AS as the client (or if it's not, it's usually connected via DoH/DoT, which is itself a secure channel).
> and is not very useful.
If it's not useful, please describe a mechanism by which an average recursive resolver can be protected against someone hijacking C root on Hurricane Electric (which doesn't otherwise have the announcement at all, last I heard) and responding with bogus data?
Or, alternatively, describe a mechanism which allows a recursive resolver to not return bogus data in the case of *any* authoritative server BGP hijack.
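
For what it's worth, the protection I'm describing looks roughly like this
(a sketch, assuming a local validating resolver listening on 127.0.0.1;
iana.org is just a convenient signed zone):

  # A validating resolver sets the "ad" (authenticated data) flag in its
  # reply header when the answer passed DNSSEC validation, regardless of
  # which network path the packets to the authoritative servers took:
  dig @127.0.0.1 +dnssec iana.org SOA
  # Forged records from a hijacked authoritative server fail validation,
  # so the resolver returns SERVFAIL rather than handing them to clients.

The resolver doesn't have to trust that its packets reached the real C
(or B); it only has to trust answers that validate against the root
trust anchor.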
> For example, root key rollover is as easy/difficult as
> updating IP addresses for b.root-servers.net.
Then maybe read the rest of this thread, because lots of folks pointed out issues with *just* updating the IP and not bothering to give it some time to settle.