(forwarded w/o permission, though this hit bugtraq earlier...)
I discussed this in detail with Lucky before he posted it. I'll give a
summary of how this affects the readers of NANOG here -- feel free to
forward if you like.
Prior to Bernstein's discovery that the row-reduction step in factorization
could be made massively parallelizable, we believed that 1024 bit keys
would remain unfactorable essentially forever. Now, 1024 bit RSA keys look
to be factorable either presently, or in the very near future once Moore's
law is taken into account. However, at a price tag of $2 billion for a
specialized machine, we have a few years before anyone outside of the
intelligence community attempts this.
What is most concerning to me are a few discoveries that were made while
looking into the problem of widespread use of 1024 bit keys:
First: Verisign appears to have no minimum requirements for the key sizes
it will sign. I have discussed at length Verisign's active contributions
to the hindrance of security on the Internet in the past (see the
archives of my presentation at DEFCON 9), but I somehow missed this gem. A
few months ago, in fact, Verisign issued a 384 bit certificate. (You could
factor this on your desktop machine in days.) 512 bit keys are also
fairly commonly signed by Verisign. (Ugh.)
Question for people who know: Does Verisign allow you to submit CSRs for
2048 to 4096 bit certificates?
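Whatever the answer, you can at least check what a given site is actually serving. Here is a rough, present-day sketch of mine (not anything Verisign or the browsers provide): it pulls a server's certificate with Python's ssl module and reports the RSA modulus size, assuming the third-party "cryptography" package is installed; the host name is just a placeholder.

# Illustrative sketch: report the RSA key size of a site's certificate.
# Assumes Python 3 and a recent "cryptography" package.
import ssl
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa

def report_key_size(host, port=443):
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        print(f"{host}: RSA modulus is {key.key_size} bits")
        if key.key_size < 1024:
            print("  -> trivially factorable; replace this certificate")
    else:
        print(f"{host}: non-RSA public key ({type(key).__name__})")

report_key_size("www.example.com")   # placeholder host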
Second: As far as I can tell, OpenSSH (and I assume the commercial
versions of SSH as well) offer no mechanism for enforcing the size of
users' keys when public key authentication is turned on. This means that
users could be placing (factorable) 512 bit keys in their
~/.ssh/authorized_keys files, which is in effect worse than using weak
passwords (as an attacker would leave no false login attempts for you to
detect in your logs).
I've mailed Theo de Raadt asking if OpenSSH has an undocumented mechanism
for specifying minimum permitted key size that I don't know about. If
there is one, I'll certainly post a follow-up.
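In the meantime, here is a rough audit sketch of mine (not anything that ships with OpenSSH): it walks an authorized_keys file and flags protocol-2 ssh-rsa entries whose modulus is below a cutoff. It assumes plain "ssh-rsa <base64> <comment>" lines with no options prefix, and it skips SSH1 and ssh-dss entries entirely.

# Flag small ssh-rsa keys in an authorized_keys file (illustrative only).
import base64, struct, sys

MIN_BITS = 1024   # arbitrary cutoff for this example

def rsa_bits(blob):
    """Return the modulus size in bits of an ssh-rsa key blob, else None."""
    def take(data, off):
        (n,) = struct.unpack(">I", data[off:off + 4])
        return data[off + 4:off + 4 + n], off + 4 + n
    ktype, off = take(blob, 0)
    if ktype != b"ssh-rsa":
        return None
    _e, off = take(blob, off)         # public exponent (ignored)
    n, _ = take(blob, off)            # modulus
    return int.from_bytes(n, "big").bit_length()

def audit(path):
    for lineno, line in enumerate(open(path), 1):
        fields = line.split()
        if len(fields) < 2 or fields[0] != "ssh-rsa":
            continue
        bits = rsa_bits(base64.b64decode(fields[1]))
        if bits is not None and bits < MIN_BITS:
            print(f"{path}:{lineno}: ssh-rsa key is only {bits} bits")

audit(sys.argv[1] if len(sys.argv) > 1 else "authorized_keys")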
Lucky also mentions S/MIME, which has so many flaws I'm not going to
address it; PGP, which places the risks squarely on the key-holder and
doesn't prevent the use of 2048 bit keys (which should be safe even taking
Bernstein's findings into account), so I'm not too concerned with that; and
IPsec, which sadly isn't in widespread use.
So, my main concerns are TLS (which is damaged due to poor engineering on the part of Netscape and Microsoft, and uncouth policy issues on the part of Verisign) and SSH, which may suffer from an easily correctable
engineering flaw. Note that the biggest concerns don't have to do
specifically with 1024 bit keys, but rather, small key sizes in general.
--Len.
Len Sassaman <rabbi@quickie.net> writes:
Prior to Bernstein's discovery that the row-reduction step in factorization could be made massively parallelizable, we believed that 1024 bit keys would remain unfactorable essentially forever. Now, 1024 bit RSA keys look to be factorable either presently, or in the very near future once Moore's law is taken into account. However, at a price tag of $2 billion for a specialized machine, we have a few years before anyone outside of the intelligence community attempts this.

What is most concerning to me are a few discoveries that were made while looking into the problem of widespread use of 1024 bit keys:
Out of curiosity, was there any indication that Bernstein's
improvements might apply to the discrete log problem, DSA in general,
and the 1024-bit limit on key size built into NIST's DSS standard?
Revoking an RSA key and re-issuing a longer one might be a pain, but
there's no option for that in the current GPG implementation.
Cheers.
-travis
Personally I'm not too concerned (yet). You're probably worse off due to
implementation flaws.
But on a list of things which "should be fixed" for the future: Any RSA
implementation using RSARef (which until the patent expired was the only
legal way to write RSA implementations in the US) is limited to < 1024
bits.
I can think of a few vendors using embedded SSH who still suffer from this
problem (Vendor F comes to mind, but their SSH implementation also doesn't
work with OpenSSH w/freebsd localisations, so something else is afoot
there as well).
Since you are mentioning Verisign here, and certificate authorities in general, has anyone considered that factoring the CA's key is far simpler than breaking the underlying key [no matter how large]? Based on the
implementation, the CA's key cannot be changed often or easily. Key
revocations are not automatic or even respected, and the CA's key, once
compromised, can sign any other key you'd like for a beautiful
man-in-the-middle attack.
The man-in-the-middle is the only attack these keys are designed to thwart,
because if you can't access the physical bits, you don't have anything to
decipher anyway. The beautiful thing about compromising the CA's key is that it's not easily traceable.
Regards,
Deepak Jain
AiNET
Out of curiosity, was there any indication that Bernstein's
improvements might apply to the discrete log problem, DSA in general,
Bernstein's paper was geared toward RSA, but I believe he makes the claim
that discrete log based algorithms are susceptible as well. I'm not a
cryptographer, so my word shouldn't be taken as fact on this, but if
there is a row-reduction step in solving DL, it would apply.
and the 1024-bit limit on key size built into NIST's DSS standard?
I'm not sure about the relative bit strength when the row reduction is
taken into account, but I suspect that if the above assumptions are true,
1024 bit would be too small.
NIST's limit was specified for purely technical reasons -- until recently,
we did not have hash functions of sufficiently long bit-sizes to provide equivalent security for DSA keys larger than 1024 bits, so DSS
was limited to that size. (SHA-1, the hash function specified in the DSS,
was 160 bits. So was RIPEMD-160).
One could now do a larger DSA with a larger hash such as SHA-512, though
an updated DSS standard has not been specified yet.
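(A trivial illustration of the size pairing being described, using a modern Python interpreter purely as a calculator; DSS ties the DSA subgroup size to the hash output size, which is why 160-bit SHA-1 capped the standard where it is.)

# Digest sizes of the hashes in question, in bits.
import hashlib

for name in ("sha1", "sha256", "sha512"):
    print(name, hashlib.new(name).digest_size * 8, "bits")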
Revoking an RSA key and re-issuing a longer one might be a pain, but
there's no option for that in the current GPG implementation.
A few details on GnuPG, since I have intimate knowledge of that software
and OpenPGP in general:
While the current version (1.0.6) of GnuPG cannot generate RSA keys, it
*can* use them. If you have a version of PGP 7.x, you can generate RSA v4
keys up to 4096 bits, and then use them with GnuPG.
GnuPG 1.0.7 is slated to allow for the generation of RSA v4 keys. I think
it's safe for everyone to wait until that comes out, since the threat of
1024 bit keys being broken is not an immediate one for most threat models.
Also, the major attacks to protect against with OpenPGP are ones that are
undetectable by the intended users. For instance, if a flaw were found
that allowed an attacker to decrypt PGP messages simply by "doing things"
to them after they were intercepted over the wire, it would be huge.
In PGP, you have a 1024 bit DSA key as the master signing key. Breaking
this would allow an attacker to forge signatures of yours (bad, but
detectable, and only a real concern if you are using signatures in a
critical environment, such as part of an authentication scheme) or bind
new subkeys to your key.
It is the subkeys that are used for encryption, and these are ElGamal
keys, which are also based on the discrete log problem, but are up to 4096
bits in size, so Bernstein's attack shouldn't be an issue if we're
assuming they're roughly equivalent in strength to RSA, taking this into
account.
The way an attacker would exploit this weakness in order to read encrypted
mail would be to modify your key to attach a bogus encryption subkey to
it, and hope that people you communicate with encrypt to the bogus subkey
instead of the real key. This won't go unnoticed for long, and it doesn't let him read anything that was encrypted to your real subkeys, either.
So, I'm not too concerned about GnuPG presently, though I am going to
bring up the issue of stronger DSA keys with the IETF working group.
Remember, the main change in opinion after Bernstein's paper is that 1024
bit keys are no longer "safe forever".
Far more frightening is that 512 bit keys are still in use, and can be
broken in weeks by anyone with a few grand to throw at it (even without
Bernstein's improvements).
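If you want to check your own keyring for such keys, here is a rough sketch of mine (not a GnuPG feature): it parses the machine-readable output of gpg --list-keys --with-colons, where field 3 of the pub/sub records is the key length in bits and field 4 is the algorithm number, per GnuPG's documentation.

# Flag small primary keys and subkeys in the local GnuPG keyring.
import subprocess

MIN_BITS = 1024   # flag anything smaller; adjust to taste
ALGO = {"1": "RSA", "16": "ElGamal", "17": "DSA"}

out = subprocess.run(["gpg", "--list-keys", "--with-colons"],
                     capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    f = line.split(":")
    if f[0] in ("pub", "sub"):
        bits, algo, keyid = int(f[2]), ALGO.get(f[3], f[3]), f[4]
        if bits < MIN_BITS:
            print(f"{f[0]} {keyid}: {bits} bit {algo} key -- consider replacing")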
--Len.
Well, that's not really the case. Breaking a 384 bit key is trivial.
Breaking a 1024 bit key is probably not possible without a multi-billion
dollar budget. 2048 bit keys are still in no danger of being broken any
time soon unless further advances are made in factoring.
But I see the point you are making, which is that targeting the CA lets
you attack all of the browsers that trust keys signed by that CA, rather
than specifically targeting that one site. However, MITM attacks are
active attacks, and run the risk of being detected by the victim. If
you break the key a site is using for encryption, you can read the traffic
without fear of detection.
Other comments on this issue, which I covered in my DEFCON 9 presentation:
it would probably be a lot easier to compromise a CA's root key by means
of network or physical attack, rather than through cryptanalysis. It also
doesn't have to be Verisign you target -- there are over a hundred trusted
root certification authorities in IE, some of them issued to companies
that have gone bankrupt, or sold their root as part of their assets.
Remember, if you're attempting a MITM attack in TLS, you're really
exploiting poor design of the trust-management features of the client,
which is a whole can-o-worms in and of itself.
--Len.
Exactly. Why think $2B is some insurmountable barrier when there are far
cheaper ways of getting what you want? Most computer people think of
security only in terms of computers. Bribing a few night security guards is
far cheaper than even cryptanalysis and will give any sufficiently
interested party access to the machines signing the keys.
At present, if you have the sophistication to break an "interesting" key,
you could have the sophistication to avoid detection while doing a MITM. The difference between inserting/replacing a valid flow and simply listening [unless the attacker is stupid] isn't that big from a detection [of the attack] point of view.
Again, I am assuming things about the attacker that make them scary. If the
attacker is a little kiddie using his home broadband connection, he is not
necessarily going to be able to use that information for anything
particularly harmful.
Yes, but the trust architectures out there today are far more vulnerable [IMO] than the underlying key encryption. Again, while key negotiation is
interesting and important, RSA/DSA/etc are only used in that stage, and
generally the underlying connection [for performance reasons] moves with a
significantly less bulky encryption algorithm. Blowfish, IDEA and a few
others come to mind.
It is far easier to capture and compromise the in-stream algorithm than to attack the key at the get-go [unless you are trying to
permanently compromise a victim, at which point the CA is an easier target
anyway]. This is especially the case when you allow for dedicated hardware.
I have always been of the opinion that all of this widely available Internet encryption is primarily there to make customers feel safe and save credit card
companies some liability. There wasn't enough thought put into it at all
levels to make it more safe/secure than that.
No one is going to spend millions of dollars to get at most the same
millions of dollars back in credit card fraud [good money after bad].
Anyone who is relying on these commercial architectures to secure gov't
secrets or secrets worthy of an intelligence outfit's attention is a moron
[for numerous reasons]. If all you are doing is trying to secure machines
against script kiddies, starting huge public debates and initiatives and the
like seems like overkill to me. [investment is greater than reward]. YMMV.
Deepak Jain
AiNET
Exactly. Why think $2B is some insurmountable barrier when there are far
$2B isn't an insurmountable barrier. It is well within most intelligence
agencies' budgets, and that price will only get lower.
At present, if you have the sophistication to break an "interesting" key,
you could have the sophistication to avoid detection while doing a MITM. The difference between inserting/replacing a valid flow and simply listening [unless the attacker is stupid] isn't that big from a detection [of the attack] point of view.
Passive attacks are, by definition, undetectable. Active attacks are not;
some are simply more detectable than others.
No one is going to spend millions of dollars to get at most the same
millions of dollars back in credit card fraud [good money after bad].
Anyone who is relying on these commercial architectures to secure gov't
secrets or secrets worthy of an intelligence outfit's attention is a moron
[for numerous reasons]. If all you are doing is trying to secure machines
against script kiddies, starting huge public debates and initiatives and the
like seems like overkill to me. [investment is greater than reward]. YMMV.
Remember that there is no international law preventing a country's
intelligence agency from committing industrial espionage for its own
companies (and in fact this is common practice).
Also, remember that the US Military has considered, and may very well be
using, IPsec in the field to coordinate military maneuvers.
I think you're really missing the main point with that $2 billion figure.
The "big surprise" is that we might be able to put a price-point on
factoring 1024 bit keys -- previously, they were thought to be "secure
forever".
A machine that costs $2 billion today, according to Moore's law, will cost
about $200,000 20 years from now. Not counting inflation. That will be
well within many people's budgets.
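(Back-of-the-envelope arithmetic behind that figure, with the halving period as an explicit assumption: treating Moore's law as the cost of a fixed computation halving every 18 months gives roughly the number above.)

# Cost of a fixed computation, assuming it halves every 18 months.
cost_today = 2_000_000_000            # $2 billion machine today
halvings = 20 * 12 / 18               # ~13.3 halvings over 20 years
print(f"~${cost_today / 2 ** halvings:,.0f}")   # roughly $194,000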
Hmm. Something very interesting about that is the fact that someone
could basically just dump tons of data to a hard drive, and have it
available when they could afford to decode it.
Some information is definitely quite valuable even five years down the
road.
[snip]
$2B isn't an insurmountable barrier. It is well within most intelligence
agencies' budgets, and that price will only get lower.
Yep. The Venona program in the 40's and 50's is a good example of
this - many of the decrypted messages were actually intercepted years
and years earlier.
Storage gets cheaper and cheaper every day.
David
That is a fallacy. Moore's law is most certainly not accelerating -- in
fact:
1965-1990 Moore's law stated that the number of transistors per square
inch on integrated circuits (and therefore, the speed) doubles every 2
years. The pace has since slowed down a bit, but appears to be holding
steady at doubling every 18 months (1995-present).
http://www.physics.udel.edu/wwwusers/watson/scen103/intel.html
However, this trend cannot continue forever. In 1997, Moore predicted we
would reach the physical limits on transistor miniaturization somewhere
around 2017. Whatever the actual date, we will need a break-through in
computing to continue to obtain performance increases over time past this
point.
--Len.
If it's a big surprise that any key of any arbitrary length can be cracked in finite time and with finite resources, I think people haven't been thinking about the information presented in the security books out there. Most of the estimates that say anything is "unbreakable" don't recognize that Moore's law is real, and accelerating...
That is a fallacy. Moore's law is most certainly not accelerating -- in
fact:
1965-1990 Moore's law stated that the number of transistors per square
inch on integrated circuits (and therefore, the speed) doubles every 2
years. The pace has since slowed down a bit, but appears to be holding
steady at doubling every 18 months (1995-present).
http://www.physics.udel.edu/wwwusers/watson/scen103/intel.html
However, this trend cannot continue forever. In 1997, Moore predicted we
would reach the physical limits on transistor miniaturization somewhere
around 2017. Whatever the actual date, we will need a break-through in
computing to continue to obtain performance increases over time past this
point.
Not to be too picky, but how is going from "doubling every 2 years" to
"doubling every 18 months" slowing down?
Erm, yeah. Thanks for calling me on that -- I horribly condensed what I
was trying to say.
By the original definition (number of transistors per square inch doubles
every year), it has slowed to every 2.5 years. See the graph I linked to.
Data density is currently doubling every 18 months, and holding steady at
that rate.
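(To make the rates concrete -- simple arithmetic, with no claim about which rate is the "real" Moore's law: total growth over a fixed span depends only on how many doubling periods fit into it.)

# Growth over ten years under different doubling periods.
span_months = 10 * 12
for months in (12, 18, 30):
    growth = 2 ** (span_months / months)
    print(f"doubling every {months} months: ~{growth:,.0f}x over 10 years")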
(But this *is* off topic for NANOG...)
the new CVS versions of OpenSSH (the current portable CVS version doesn't
have the changes quite yet) allow you to specify a minimum key length as a #define at compile time. See ssh.h:
#define SSH_RSA_MINIMUM_MODULUS_SIZE 768
- brett