Re the "Manhattan project in 1944" argument - I am very cautious about the "modulo engineering scaling" carve-out -- unlike the uranium manufacturing pipeline of World War 2, that involved massively scaling up a known process, on the face of it there's no uncontroversial process/architecture to scale up in this case.
On the face of it, even relatively "point-target" goals of this kind could take many decades if at all; GaN for blue diodes come in mind as an example of a field that was stuck for a generation -- until it wasn't.
> I am very cautious about the "modulo engineering scaling" carve-out
As OP said elsewhere[0, 1]:
> Once you understand quantum fault-tolerance, asking “so when are you going to factor 35 with Shor’s algorithm?” becomes sort of like asking the Manhattan Project physicists in 1943, “so when are you going to produce at least a small nuclear explosion?”
In other words (IIUC): Some problems (here: scaling fault tolerance) seem to be easier than others.
As a software engineer with a good amount of freedom to choose what tools I want to use, what can I do presently to move towards post-quantum cryptography? AFAIK the hashes and symmetric cyphers that are in wide use are already resistant, leaving mainly public-key cryptography as the problem. Is there, for instance, a drop in replacement for `ssh-keygen -t ed25519`?
I have another comment[1] on this post with more practical instructions, but the `ssh-keygen` is a good question. The cryptography community is still focused on migrating encryption/key exchange algorithms, for fear of data being captured today and decrypted in the future. So OpenSSH 10.0+ already enables ML-KEM by default.
SSH keys, on the other hand, are authentication and would require an online Quantum Computer to break, so we have more time. Authentication is also (usually) more complicated, so there are still disagreements on what to do with the Web PKI for example. To give you a concrete target, Google, Microsoft, and CloudFlare have self-imposed deadlines of 2029 for their PQC migrations.
In practice, PQC migration means updating your software, bugging your vendors to ensure they have this on their roadmaps, and making sure your own code is flexible in respect to algorithms used.
Late edit: PQC migration also includes sometimes changing configuration files/library invocations to enable the new algorithms, and ensuring that your processes still work during the migration, where you might have both pure classical and PQC/hybrid at the same time.
I just went through some ~/.ssh/config files and realized that, along side the entries for ancient systems that need to be forced to use undesirable ciphers and Kexies, I also had some entries for current systems that stipulated only the "good" values, to "protect me from hypothetical future downgrade attacks". Which means that I wasn't getting the latest PQ Kex, because my entries hadn't been updated since ssh 9.x.
Maybe the best practice here is to have one or more Boppers on your team who send out periodic notifications to update not just algos in libraries but, more importantly, make sure those updates are reflected in the damned configs.
We also really need a cultural shift where it becomes expected that, for any given app, we have something like:
$ ssh --best user@host
which does that for us. Because this is a failure mode that shouldn't even be possible for most users and cases.
There are also scanners that you can deploy to identify vulnerable servers, like https://sshcheck.com/ . Clients are harder to check, but you can always observe your logs.
On a separate note, I've definetly been hearing worried murmurs about "harvest and decrypt" attacks along with post-quantum TEE slightly before the GCP paper, and I definetly think it appears a couple nation states are on track for a "quantum leap" by 2030 given the rate at which I've been hearing it within my network.
How does that work when Diffie Hellman key exchange is ephemeral and so compliant servers couldn’t even roll back sessions if they wanted, to let a MiTM
We're almost done countering store-now/decrypt-later, but the biggest part of the job, post-quantum authentication, still remains. Like Google, we target 2029 to be done .
SSH is working on a drop-in as we speak. TLS is further along: most stacks already support X25519MLKEM768 (by default!) to counter store-now/decrypt-later. PQ certs are not widely supported yet, but that's being sped up as we speak.
I am reminded of a certain comedian who lost his job hosting an awards ceremony because he had once said something on stage that people didn't like....
...8 years previously.[1]
Long, long ago in a datacenter far away, breaking 3DES used to be the province of expensive bespoke hardware owned by only the elite nation states. Today it is so trivial that the gpu in your second hand laptop can do it "at scale".
5 years ago ChatGPT was a wet dream.
We should be very conservative in our planning where future security is concerned. The only thing we can be sure of is that Murphy's Law is looking for every chance to make us look foolish.
As far as I know, cracking 3DES is still not trivial, and requires a very large number of operations and/or a very large amount of data. But can just about be done in some situations.
If you have any link to trivially cracking it on your second hand laptop and doing it at scale, would be very interested.
So we know that quantum computers hold a real risk of being able to break a lot of encryption. We also know that changing cyphers is hard (because reasons)
But what I don't see is what I can practically do now, as either someone who is a CTO/Big Cheese™ or a lowly engineer?
> But what I don't see is what I can practically do now, as either someone who is a CTO/Big Cheese™ or a lowly engineer?
Migrate! The major TLS and OpenSSH applications already support PQC, for example.
1. Make sure you have the required dependencies (e.g., openssl 3.5+ is when a lot of PQC algorithms got support).
2. Make sure the client/server software is up to date (this might be all that's needed, e.g., OpenSSH 10.0+ enables PQC in-transit encryption by default, and so does Chrome 131+).
3. Enable PQC support in the configuration (e.g., "ssl_ecdh_curve X25519MLKEM768;" in Nginx).
If you are the developer of anything that's explicitly using RSA or ECC (or god forbid Diffie-Hellman), you can also migrate your own software, or at least make the algorithm selectable at initialization time instead of hardcoded. If you have vendors, ask them for their PQC migration roadmaps.
Note that with encrypted data you want to protect yourself against attackers that are capturing data today and waiting to break it in the future (Harvest-Now, Decrypt-Later). So migrating encryption is more urgent than migrating authentication.
The most important thing to realize about cryptography is that, for most methods short of a Vernam cipher or quantum key distribution, coded messages need to be treated as published with delay. Cipher text can be archived today and attacked years from now with currently undeveloped, unknown, or unpredicted resources/algorithms. Sure, perhaps nobody archived the cipher text and you're fine. You don't know that for sure. Your methods may be very strong but, if they're not provably immune to attack, you also don't know what the delay before publication truly is. It might be a very long time. It might not.
If you're transmitting credit card info that changes every few years and can be changed on demand, that's no big deal. If you're transmitting information that will remain sensitive for decades, the time to look for methods that would stand up to quantum computing was years ago. However, today is still better than years in the future. At the very least, you can choose what to send in encrypted form over public networks and what not to send.
There are people who will scoff at the notion of quantum computing ever developing to the point where it can make an impact. There are people who scoff at the effort and expense of QKD or good ol' spooks carrying briefcases full of one-time PADs. You might be right to listen to them. You might not be. It's a risk. Whether you, or your organization, can tolerate that risk is entirely dependent on you and yours.
If you're a CTO, have a post quantum strategy: know what crypto you use and where it is, plan to migrate to post quantum secure ciphers over the next decade or so, or sooner if possible. If you're a lowly engineer, not very much unless you're specifically selecting technologies with crypto. In which case crypto agility (being able to switch out existing crypto when needed) is a good property to look for.
I think lobby for saner defaults (tip of the hat to Steve Gibson's term "the tyranny of the default"), configuring one's GPG config to mark certain cyphers as insecure (to prevent downgrade attacks)... and have one's (chief) information security officer write those things down as policy and maybe have a yearly onboarding workshop teaching people why it's important.
Where available, you can migrate. Even if PQ is not yet available it helps to:
1. Make sure your dependencies are up to date. Move to a recent version of your crypto libraries.
2. Make sure your server can install multiple certificates: you'll need that unless you control all your clients.
3. Automate certificate issuance as far as possible.
Also, what you can do now is to run the following wargame: assume the CRQC arrived. What's the business impact?
For the migration itself I see three parallel streams.
1. Main push of straight-forward cases (TLS, etc.) Might need to wait a bit for software support.
2. Hard cases: crypto baked into hardware; custom protocols; keys in tight spaces (JWT in URLs); etc. You need to bubble those up soon to make decisions on how to fix them.
3. External dependencies. Barely any vendor has a PQ roadmap, so asking now is probably early, but you can figure out what to do if they don't get their stuff ready in time.
TLS can already be setup to avoid store-now-decrypt-later PQC issues. That's available today, and should be implemented.
Use https://sslboard.com to inventory all your external TLS infrastructure and check for PQC readiness (creator here).
What is the biggest number factored using Shor's algorithm?
Last time I looked it was very unimpressive.
Edit: It's gotten worse. 21 from 2012. "Replication of Quantum Factorisation Records with an 8-bit Home Computer, an Abacus, and a Dog" say the factorization of 35 in 2019 actually failed.
> Sometimes these days, I'll survey the spectacular recent progress in fault-tolerance, 2-qubit gate fidelities, programmable hundred-qubit systems, etc., only to be answered with a sneer: "What's the biggest number that Shor's algorithm has factored? Still 15 after all these years? Haha, apparently the emperor has no clothes!" I've commented that this is sort of like dismissing the Manhattan Project as hopelessly stalled in 1944, on the ground that so far it hasn't produced even a tiny nuclear explosion... If there's a reason why you think it can't work beyond a certain scale, say so. But don't fixate on one external benchmark and ignore everything happening under the hood, if the experts are telling you that under the hood is where all the action now is, and your preferred benchmark is only relevant later.
> If there's a reason why you think it can't work beyond a certain scale, say so
I'm not saying it can't work. Just that in 14 years no one has managed to factor a larger number than 21. Seemingly focus has shifted to other factoring algorithms that don't have performance improvements over conventional computing.
I'm not the one implying that Shor's algorithm will breaking encryption in "a few years from now".
The concern is that there's a large enough chance that it might to be worth planning for the outcome. That chance doesn't need to be high for that to be the case. And there's good reason to believe that the size of number that has been factored up to now is not a reliable indicator that the growth rate will remain very slow.
(The analogy with the Manhattan project is apt: an adversary learning about it would have been wise to adjust their planning around the possibility of it succeeding even if they judged that it was not a given that it would)
> [...] no one has managed to factor a larger number than 21.
Small correction: no one has PUBLICLY managed to factor a larger number than 21.
There could be advances (foreign and domestic) that just don't get published because they represent having an upper-hand with regards to cryptography. So, from Game Theory perspective, not making waves is in the interest of nation states. They'll even try to be dismissive about concerns.
> dismissing the Manhattan Project as hopelessly stalled in 1944
Then again, there are enough examples of failed projects. Why should this be comparable to the Manhattan project? In 1944, it was only two years underway, whereas Shor's algorithm is over 30. Tons of articles have been published on quantum computing, while the A bomb was kept as secret as possible, making learning from other countries, sometimes even from colleagues, impossible. In 1942, an atomic explosion was still hypothetical, whereas quantum computing had its first commercial service 7 years ago. Etc.
So, while in principle lack of progress doesn't guarantee failure, a comparison to the Manhattan Project is stylistic bullshit.
> Then again, there are enough examples of failed projects. Why should this be comparable to the Manhattan project? In 1944, it was only two years underway, whereas Shor's algorithm is over 30.
1944 is a bit arbitrary. Szilard for one was thinking about it earlier:
> […] He conceived the nuclear chain reaction in 1933, and patented the idea in 1936. In late 1939 he wrote the letter for Albert Einstein's signature that resulted in the Manhattan Project that built the atomic bomb….
How long was humanity thinking about flying before the Wright brothers and 1903? We had Babbage's analytical engine (and Lovelace) in 1837, with Zuse's Z2 and the British bombes both in 1940; Zuse's Z3 in 1941.
The main point is that just as you can't ask for tiny nuclear explosion because nuclear physics just doesn't work that way, you also can't ask for factoring of 21 with Shor's algorithm. Quantum computing just doesn't work that way, sorry.
The analogy between nuclear fission and quantum computing doesn’t really work. Fission was a relatively new physical phenomenon the Manhattan Project scientists were studying to turn it into a weapon of mass destruction on a scale that too had no precedent except in natural disasters. Quantum computing is a new technology that is supposed to make already effectively computable problems computable faster; it is ideally supposed to provide an increase in capacity, not capability. It should definitely be able to make tiny computations work before going for the bigger problems. That’s how all computing works, if it can’t solve simple problems, it’s never going to solve bigger ones. What you’re saying here essentially sounds like “there will be a magical event one day when quantum computing solves the biggest computing problems and we’ll all realize it works.”
I am not particularly invested either which way about the likelihood of quantum computing being a major breakthrough or not but this is seeming like yet one more area of computing research like crypto and LLMs which in recent years is increasingly being flooded by people on a hype train.
No you really can't. Being able to factor 15 but not 21 with Shor's algorithm is normal. I know it sounds absurd, but it really is that way. Because factoring 21 is about 100x times harder than factoring 15.
My point was that the comparison with nuclear explosions is wonky, since we (in the world of that analogy) already have seen a tiny nuclear explosion 15 years ago. And we kept being told that explosions 100 times larger are just around the corner, but explosions 25% larger are way too hard to expect.
I get that there's a lot of R&D going on to make larger quantum computers a thing and that there's been very definite progress, but factoring 21 is just too hard to expect for now. But that also pushes the date where pre-quantum cryptography is broken further into the future. If we still struggle to factor one of the smaller 5 bit numbers, factoring the 128 bit numbers necessary to break elliptic curve cryptography seems quite far away.
I talked to a guy who did his doctoral degree on quantum computing and he was not worried at all. In fact he thought it was wildly overhyped, and like cold fusion, self driving cars, or string theory, always just around the corner. Just give us five more years and another grant, please.
For the neutral atoms approach in particular there doesn't seem to be a clear capability missing anymore to building a full scale CRQC: each of the separate components has been demonstrated. Of course when they try to put everything together they'll undoubtedly hit unexpected issues with integration. Wish I could be a fly on the wall at those labs.
Many people in this thread are skeptical about quantum computers, and that's fair. This migration is a big part of my current job, and even I think that there's a non negligible chance that we won't see commercially available quantum computers anytime soon.
The problem is that we're not trying to predict the exact future, we're hedging against possible developments. If there's a 50/50 chance of quantum computers being widely deployed for cryptoanalysis, then there's a 50% chance of this migration being useless. But you don't want to bet your security on a coin toss! So, we migrate.
That's the unfortunate truth of security, sometimes the protections are never triggered. But you still need them.
There are different types of skepticism when it comes to Quantum Computing (QC).
One can be skeptic about its feasibility, namely achieving Fault Tolerant Scalable Quantum Computer (for instance Gil Kalai).
Or, they can be skeptic its applications on the real world.
I am on the second camp, very much. It has mainly two important application areas: Breaking Some Public Key cryptography and Simulating Quantum System.
I think the 1st one is very real, we need to be serious and careful on the migration.
The second area is, I think, extremely overhyped.
One should ask what cases there are for investing in QC that makes financial sense. I can think of couple of areas where quantum effects are important enough to justify this. Better designs for Enzymes and Solid State Batteries.
The case for enzyme designs is weakened even more if you check the recent paper by Garnet Chan: https://bsky.app/profile/dulwichquantum.bsky.social/post/3mh...
Unless we see collorobaration between IBM/Google and BYD/CATL/Tesla that will lead to next gen solid state batteries, I would say it wont have substantial impact on the real world.
One also has to consider that QC is not the only method to simulation strongly coupled Quantum Systems, there are already other methods, tensor network, Deep Learning based, etc.
Lastly, the QCs will be coming in the future. SO, yhey kind of need to hurry up since current benchmarks for EV batteries are improving every year.
There is also the issue of translating lab result into production environment.
All of these factors are eating away the relevance of QCs when it comes to real world applications.
I think we are essentially left with a situation where the only practical application of the technology (QC) is to steal stuff on the internet.
There's a good consensus that for key exchange/encryption (TLS, SSH, age, etc) the way forward is ML-KEM 768 together with some classical algorithm, like X25519. The public keys are larger (1 KB), but that's usually ok unless you're working on very small microcontrollers. And you should migrate quickly because of harvest-now-decrypt-later attacks.
For signatures, things are harder because there are tradeoffs. Some algorithms have large signatures (10+ KB), others require keeping state and have catastrophic consequences if subkeys are reused. And the systems around it are also more complicated: in a certificate, should you put a classical and a PQC signature together? Or should the PQC signature go in an extension? Should the extension be marked as critical and fail loudly on old clients, or should new clients have a special case to always check it if PQC signature validation is available? Or should we abandon the certificate chains and move to Merkle Tree Certificates[1]?
So signatures/authentication are still up for debate. Unless your team is on the bleeding edge of either crypto research or security risks, then there's not much to do than wait for better consensus to form.
> And you should migrate quickly because of harvest-now-decrypt-later attacks.
...
> So signatures/authentication are still up for debate. Unless your team is on the bleeding edge of either crypto research or security risks, then there's not much to do than wait for better consensus to form.
I'm trying, as a layman, to find some not-too-insane middle ground between those contradictions.
That's because of the difference between Confidentiality goals and Authenticity goals.
If I send you a document encrypted with classical crypto today, an attacker could grab a copy, wait a few years, then decrypt with a quantum computer (Harvest-Now-Decrypt-Later). The contents of the document I sent today are exposed in the future.
For documents/transmissions that must remain confidential for 10 years, assuming a quantum computer available in 2030, you should have been encrypting them with PQC since 2020! And if deploying PQC for your clients and servers takes two years, you should have started migrating in 2018!
But if I send you a signed document, it's safe because you're verifying the signature today and there are no quantum computers available today to forge a new signature. The same goes for SSH authentication and web certificates, for example. They're safe right until the moment quantum computers arrive (and by then you better have a good solution!).
That's why so many open-source projects already support ML-KEM for key exchange/encryption, but signatures are still under discussion. The former is more urgent.
I'm sure eventually i'll eat my words - but Quantum still seems like a massive marketing gimmick. The technology itself is incredibly interesting, but it feels as if CERN began advertising itself as a marketing stunt - there's just something about the way I see quantum marketed + advertised right now that doesn't seem to align with reality.
I suppose in spirit of the article - it's as if the manhattan project in 1944 was telling the world that theoretically it's 6-12 months away from igniting the entire upper atmosphere.
People are starting to catch on to the AI scare mongering, let the quantum computer scare mongering begin. We should probably start giving these companies lots of money lest other countries beat us to it.
Aaronson know his stuff but I am not sure he hasn’t considered the fact that, in this current hype cycle, the quantum researchers breathlessly reporting to him on a breakthrough just around the corner are just lying to him and themselves.
I have been hearing about one more technical hurdle to solve before quantum algorithms become feasible since before I graduated. That was in 1996.
This is true, practical quantum computing is always "just a couple of years away".
At the same time, moving to more secure encryption really isn't difficult. How many times have algorithms been deprecated over the past 20 or so years? It's time to do it again.
Let's just make sure that the NSA hasn't worked in any backdoors. At latest since Snowdon, anything they work on is suspect.
There is no clear evidence that the risk of "a practical post quantum computer would arrive in the next 5 years" is greater than "post quantum scheme X is broken" for any scheme X. The only way to go is hybridation and it is quite hard from an engineering point apparently.
If Algo-A and Algo-B both rely on "factoring big numbers is hard!" then once the Quantumpocalypse occurs, breaking Algo-B(Algo-A(plaintext)) is no harder than asking ChatGPT 99.5 to add an extra step in your vibe coded cracking engine's frontend, such that it now does B_breaker < cyphertext | A_breaker >> plaintext.lol or whatever the equivalent is for the fashionable language of the that future day.
He was saying hybrid encryption as in use both a well established classical "factoring big numbers is hard!" algo and also a fancy new post quantum cryptography algo. That way if it turns out the fancy new algo can be broken by non-quantum computers at least you aren't in a worse position than you were in before because you are still protected by the well established classical algo.
You have to break both algorithms. One of them is quantum-safe if it's secure, but it could also be completely insecure like supersingular isogeny was.
It'll be a 90/10 rule: 90% of the upgrades will be straightforward. It's important the 10% that'll be hard early. For many it's probably already too late.
And in the process immediately convert huge numbers of devices into ewaste. Then check the excuse calendar again for tomorrow's reason to deprecate yet another batch of "legacy" ciphers from openSSL.
Quantum correction algorithms (that would allow factoring of thousands of digits) begin to work when the gate fidelity and other parameters are above certain threshold.
This is what bugs me about both quantum computers and commercial fusion power. There's so much talk about how it's just inevitable and will happen soonish, but a lot of the evidence suggests, in some cases strongly, that it might not ever be possible.
I find it weird how bleeding edge research, at the very edges of both physics and engineering, is treated as though it's a market development about to drop. Possibly a consequence of pure R&D having all but died? Getting funded requires pretending there's a business plan for what you're working on?
There's no strong evidence of impossibility. For quantum computers to be impossible at scale we need new unknown physics. Fusion requires lots of engineering. And before those engineering efforts would show practical impossibility or impracticality, there can't be strong evidence.
For quantum computers, the situation is quite similar. Michel Dyakonov and several others have laid out the case well.
At least we don't have anyone claiming that interstellar travel is just 10 years away, yet. Probably because it's more difficult to make an economic case for it. But the issues are quite similar. In principle, in terms of physics, nothing prevents an interstellar journey. In practice, it just isn't going to happen.
The Boy Who Cried Wolf is a story about a boy who saw a wolf and successfully scared it away by causing a commotion in a disbelieving village. One day the disbelieving villagers refused to show up, the boy was eaten, and he was thus proven correct.
But as happens in real-life politics too, the people who had just been proven wrong continued to blame the boy.
The story is told from the point of view of the villagers, trying to hide their culpability by blaming the victim.
> The Boy Who Cried Wolf is a story about a boy who saw a wolf and successfully scared it away by causing a commotion in a disbelieving village
Sounding the alarm while presenting no data or science, as a member of the National Academy of Sciences, is doing a disservice to the position, to science, and to oneself.
Show the data, the charts, let people decide for themselves.
> Sounding the alarm while presenting no data or science
One needs to read OP's blog post in the context of his other posts from the last couple months (many of which have been discussed here on HN in one way or another), where he does discuss the science.
Does djb ever frequent HN? Can we summon him with the correct incantations?
I'd really like to know what his current work on the subject entails, but when I try googling his stuff all I find are years-old papers, more recent meta discussion, and him making a few comments about other people's work.
I was sure that by now he'd have at least collaborated on some avant-garde PQ algo that was as different from the NSA approved stuff as chacha20-poly1305 was from AES. I was hoping for a PQ-NaCl folks would be using soon, not the libpqcrypto that seems to lack traction among devs (for reasons I do not understand). I am disappoint.
(It's probably all tucked away in some corner of the web that a layman like me will never find. Sigh.)
Edit: Hah! I gave up on looking for papers or repos and decided to just read his blog instead. Well would'ya look at that! It's non-stop PQ ranting of the kind we've come to love and cherish from DJB. No new repos or code with his imprimatur that I can see so far but better than I was expecting. Looks like I've got some reading to do....
> if quantum computers start breaking cryptography a few years from now, don’t you dare come to this blog and tell me that I failed to warn you. This post is your warning.
If quantum computers broke cryptography I think going to some guy's blog and complaining that he failed to warn me would be pretty low down on my todo list
Re the "Manhattan project in 1944" argument - I am very cautious about the "modulo engineering scaling" carve-out -- unlike the uranium manufacturing pipeline of World War 2, that involved massively scaling up a known process, on the face of it there's no uncontroversial process/architecture to scale up in this case.
On the face of it, even relatively "point-target" goals of this kind could take many decades if at all; GaN for blue diodes come in mind as an example of a field that was stuck for a generation -- until it wasn't.
> I am very cautious about the "modulo engineering scaling" carve-out
As OP said elsewhere[0, 1]:
> Once you understand quantum fault-tolerance, asking “so when are you going to factor 35 with Shor’s algorithm?” becomes sort of like asking the Manhattan Project physicists in 1943, “so when are you going to produce at least a small nuclear explosion?”
In other words (IIUC): Some problems (here: scaling fault tolerance) seem to be easier than others.
[0]: https://scottaaronson.blog/?p=9665#comment-2029013
[1]: See also https://news.ycombinator.com/item?id=47959531 for a very similar quote.
"The Shor of Damocles" - what a metaphor.
I thought it was a typo at first, but Wikipedia explained:
The Sword of Damocles is an ancient Greek moral anecdote, an allusion to the imminent and ever-present peril faced by those in positions of power.
Shor's algorithm is a quantum algorithm for finding the prime factors of an integer
As a software engineer with a good amount of freedom to choose what tools I want to use, what can I do presently to move towards post-quantum cryptography? AFAIK the hashes and symmetric ciphers that are in wide use are already resistant, leaving mainly public-key cryptography as the problem. Is there, for instance, a drop-in replacement for `ssh-keygen -t ed25519`?
I have another comment[1] on this post with more practical instructions, but the `ssh-keygen` question is a good one. The cryptography community is still focused on migrating encryption/key-exchange algorithms, for fear of data being captured today and decrypted in the future. So OpenSSH 10.0+ already enables ML-KEM by default.
SSH keys, on the other hand, are used for authentication and would require an online quantum computer to break, so we have more time. Authentication is also (usually) more complicated, so there are still disagreements about what to do with the Web PKI, for example. To give you a concrete target, Google, Microsoft, and Cloudflare have self-imposed deadlines of 2029 for their PQC migrations.
In practice, PQC migration means updating your software, bugging your vendors to ensure they have this on their roadmaps, and making sure your own code is flexible with respect to the algorithms used.
[1]: https://news.ycombinator.com/item?id=47959556
Late edit: PQC migration also sometimes includes changing configuration files/library invocations to enable the new algorithms, and ensuring that your processes still work during the migration, when you might have both pure-classical and PQC/hybrid deployments at the same time.
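To see where a given client stands today, here's a rough check (exact algorithm names depend on your OpenSSH version):

  # list the key-exchange algorithms your ssh client supports
  ssh -Q kex

  # see what an actual connection negotiates
  ssh -v user@host 2>&1 | grep 'kex: algorithm'
  # on OpenSSH 10.0+ you should see something like
  # "debug1: kex: algorithm: mlkem768x25519-sha256"

If the hybrid name doesn't show up, the client, the server, or a pinned config entry is holding you back.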
I just went through some ~/.ssh/config files and realized that, alongside the entries for ancient systems that need to be forced to use undesirable ciphers and Kexies, I also had some entries for current systems that stipulated only the "good" values, to "protect me from hypothetical future downgrade attacks". Which means that I wasn't getting the latest PQ kex, because my entries hadn't been updated since ssh 9.x.
Maybe the best practice here is to have one or more Boppers on your team who send out periodic notifications to update not just algos in libraries but, more importantly, make sure those updates are reflected in the damned configs.
We also really need a cultural shift where it becomes expected that, for any given app, we have something like:
$ ssh --best user@host
which does that for us. Because this is a failure mode that shouldn't even be possible for most users and cases.
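Short of an actual `ssh --best` flag, one way to avoid the stale-pinned-list failure mode is to prefer rather than pin: in ssh_config the `^` prefix puts your choices at the front of the default list instead of replacing it, so future defaults remain available as fallback. A sketch (check `man ssh_config` for your version):

  # ~/.ssh/config
  Host modern-box
      # '^' prepends to the built-in defaults instead of replacing them
      KexAlgorithms ^mlkem768x25519-sha256

It won't stop a downgrade the way a hard allow-list does, but it also won't silently freeze you on 9.x-era algorithms.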
There are also scanners that you can deploy to identify vulnerable servers, like https://sshcheck.com/ . Clients are harder to check, but you can always observe your logs.
Cloudflare should have finished its PQC migration already.
That's true for their CDN (https://blog.cloudflare.com/post-quantum-for-all/), but there's a lot more to do, with a 2029 target (https://blog.cloudflare.com/post-quantum-roadmap/).
Ah yep. Good callout.
On a separate note, I've definitely been hearing worried murmurs about "harvest and decrypt" attacks along with post-quantum TEE slightly before the GCP paper, and I definitely think it appears a couple of nation states are on track for a "quantum leap" by 2030, given the rate at which I've been hearing it within my network.
How does that work when Diffie-Hellman key exchange is ephemeral, so compliant servers couldn't roll back sessions to let a MITM in even if they wanted to?
We're almost done countering store-now/decrypt-later, but the biggest part of the job, post-quantum authentication, still remains. Like Google, we target 2029 to be done.
SSH is working on a drop-in as we speak. TLS is further along: most stacks already support X25519MLKEM768 (by default!) to counter store-now/decrypt-later. PQ certs are not widely supported yet, but that is being sped up right now.
It's still being implemented or defined.
The worry about "harvest and decrypt" in a 5 year timeframe is primarily from a nation state/natsec perspective.
If you are being targeted by a nation state as a line level engineer, harvest and decrypt is the least of your worries.
I am reminded of a certain comedian who lost his job hosting an awards ceremony because he had once said something on stage that people didn't like....
...8 years previously.[1]
Long, long ago in a datacenter far away, breaking 3DES used to be the province of expensive bespoke hardware owned only by the elite nation states. Today it is so trivial that the GPU in your second-hand laptop can do it "at scale".
5 years ago ChatGPT was a wet dream.
We should be very conservative in our planning where future security is concerned. The only thing we can be sure of is that Murphy's Law is looking for every chance to make us look foolish.
[1] https://www.bbc.com/news/entertainment-arts-46479017
As far as I know, cracking 3DES is still not trivial, and requires a very large number of operations and/or a very large amount of data. But can just about be done in some situations.
If you have any link to trivially cracking it on your second hand laptop and doing it at scale, would be very interested.
Mea culpa! I must have had a brain fart and added the 3 in there. My sincere apologies!
Of course I can't find the link to whatever I read that discussed gpu accelerated des cracking now.
Ok, maybe I'm missing something here.
So we know that quantum computers hold a real risk of being able to break a lot of encryption. We also know that changing ciphers is hard (because reasons).
But what I don't see is what I can practically do now, as either someone who is a CTO/Big Cheese™ or a lowly engineer?
> But what I don't see is what I can practically do now, as either someone who is a CTO/Big Cheese™ or a lowly engineer?
Migrate! The major TLS and OpenSSH applications already support PQC, for example.
1. Make sure you have the required dependencies (e.g., openssl 3.5+ is when a lot of PQC algorithms got support).
2. Make sure the client/server software is up to date (this might be all that's needed, e.g., OpenSSH 10.0+ enables PQC in-transit encryption by default, and so does Chrome 131+).
3. Enable PQC support in the configuration (e.g., "ssl_ecdh_curve X25519MLKEM768;" in Nginx).
If you are the developer of anything that's explicitly using RSA or ECC (or, god forbid, Diffie-Hellman), you can also migrate your own software, or at least make the algorithm selectable at initialization time instead of hardcoding it. If you have vendors, ask them for their PQC migration roadmaps.
Note that with encrypted data you want to protect yourself against attackers that are capturing data today and waiting to break it in the future (Harvest-Now, Decrypt-Later). So migrating encryption is more urgent than migrating authentication.
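A rough way to check the result from the command line, assuming an OpenSSL 3.5+ client (exact output varies by version; example.com stands in for your own host):

  # offer only the hybrid group over TLS 1.3;
  # the handshake should fail if the server can't negotiate it
  openssl s_client -connect example.com:443 -tls1_3 -groups X25519MLKEM768 </dev/null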
The most important thing to realize about cryptography is that, for most methods short of a Vernam cipher or quantum key distribution, coded messages need to be treated as published with delay. Cipher text can be archived today and attacked years from now with currently undeveloped, unknown, or unpredicted resources/algorithms. Sure, perhaps nobody archived the cipher text and you're fine. You don't know that for sure. Your methods may be very strong but, if they're not provably immune to attack, you also don't know what the delay before publication truly is. It might be a very long time. It might not.
If you're transmitting credit card info that changes every few years and can be changed on demand, that's no big deal. If you're transmitting information that will remain sensitive for decades, the time to look for methods that would stand up to quantum computing was years ago. However, today is still better than years in the future. At the very least, you can choose what to send in encrypted form over public networks and what not to send.
There are people who will scoff at the notion of quantum computing ever developing to the point where it can make an impact. There are people who scoff at the effort and expense of QKD or good ol' spooks carrying briefcases full of one-time pads. You might be right to listen to them. You might not be. It's a risk. Whether you, or your organization, can tolerate that risk is entirely dependent on you and yours.
About QKD: https://arxiv.org/abs/1803.04520
Did this pass peer review somewhere?
QKD is cool and all, but it just doesn't scale to the whole Internet. https://blog.cloudflare.com/you-dont-need-quantum-hardware/
If you're a CTO, have a post quantum strategy: know what crypto you use and where it is, plan to migrate to post quantum secure ciphers over the next decade or so, or sooner if possible. If you're a lowly engineer, not very much unless you're specifically selecting technologies with crypto. In which case crypto agility (being able to switch out existing crypto when needed) is a good property to look for.
This is what Cloudflare[1] is doing.
[1] https://blog.cloudflare.com/post-quantum-roadmap/
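On the "know what crypto you use and where it is" point, a crude but useful first pass is to grep your own code for hardcoded algorithm choices (only a sketch, and it says nothing about what your dependencies do internally):

  # rough inventory of explicit algorithm references in a repo
  grep -rniE 'rsa|ecdsa|ed25519|x25519|secp256|diffie.?hellman' \
    --include='*.py' --include='*.go' --include='*.java' .

Anything that turns up is a candidate for the "make it selectable instead of hardcoded" treatment mentioned elsewhere in the thread.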
I think: lobby for saner defaults (tip of the hat to Steve Gibson's term "the tyranny of the default"), configure one's GPG setup to mark certain ciphers as insecure (to prevent downgrade attacks)... and have one's (chief) information security officer write those things down as policy, and maybe run a yearly onboarding workshop teaching people why it's important.
Where available, you can migrate. Even if PQ is not yet available it helps to:
1. Make sure your dependencies are up to date. Move to a recent version of your crypto libraries.
2. Make sure your server can install multiple certificates: you'll need that unless you control all your clients (see the nginx sketch below).
3. Automate certificate issuance as far as possible.
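On point 2: nginx already lets you install several certificates side by side (today typically RSA + ECDSA), and the same mechanism is the natural slot for a PQC certificate once issuance and client support exist. A minimal sketch, with placeholder paths and the post-quantum pair purely hypothetical for now:

  server {
      listen 443 ssl;
      ssl_certificate     /etc/nginx/certs/ecdsa.pem;   # preferred by modern clients
      ssl_certificate_key /etc/nginx/certs/ecdsa.key;
      ssl_certificate     /etc/nginx/certs/rsa.pem;     # fallback for older clients
      ssl_certificate_key /etc/nginx/certs/rsa.key;
      # later: a third pair carrying a post-quantum signature, once supported
  }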
Also, what you can do now is run the following wargame: assume the CRQC (cryptographically relevant quantum computer) has arrived. What's the business impact?
For the migration itself I see three parallel streams.
1. The main push of straightforward cases (TLS, etc.). These might need to wait a bit for software support.
2. Hard cases: crypto baked into hardware; custom protocols; keys in tight spaces (JWT in URLs); etc. You need to bubble those up soon to make decisions on how to fix them.
3. External dependencies. Barely any vendor has a PQ roadmap, so asking now is probably early, but you can figure out what to do if they don't get their stuff ready in time.
TLS can already be set up to counter store-now/decrypt-later. That's available today, and should be implemented. Use https://sslboard.com to inventory all your external TLS infrastructure and check for PQC readiness (creator here).
> Shor of Damocles
What is the biggest number factored using Shor's algorithm?
Last time I looked it was very unimpressive.
Edit: It's gotten worse. 21, from 2012. "Replication of Quantum Factorisation Records with an 8-bit Home Computer, an Abacus, and a Dog" says the factorisation of 35 in 2019 actually failed.
https://eprint.iacr.org/2025/1237
I will let Scott Aaronson speak. (See https://scottaaronson.blog/?p=9668)
> Sometimes these days, I'll survey the spectacular recent progress in fault-tolerance, 2-qubit gate fidelities, programmable hundred-qubit systems, etc., only to be answered with a sneer: "What's the biggest number that Shor's algorithm has factored? Still 15 after all these years? Haha, apparently the emperor has no clothes!" I've commented that this is sort of like dismissing the Manhattan Project as hopelessly stalled in 1944, on the ground that so far it hasn't produced even a tiny nuclear explosion... If there's a reason why you think it can't work beyond a certain scale, say so. But don't fixate on one external benchmark and ignore everything happening under the hood, if the experts are telling you that under the hood is where all the action now is, and your preferred benchmark is only relevant later.
> If there's a reason why you think it can't work beyond a certain scale, say so
I'm not saying it can't work. Just that in 14 years no one has managed to factor a number larger than 21. Seemingly the focus has shifted to other factoring approaches that offer no performance improvement over conventional computing.
I'm not the one implying that Shor's algorithm will break encryption "a few years from now".
The concern is that there's a large enough chance that it might to be worth planning for the outcome. That chance doesn't need to be high for that to be the case. And there's good reason to believe that the size of number that has been factored up to now is not a reliable indicator that the growth rate will remain very slow.
(The analogy with the Manhattan project is apt: an adversary learning about it would have been wise to adjust their planning around the possibility of it succeeding even if they judged that it was not a given that it would)
> [...] no one has managed to factor a larger number than 21.
Small correction: no one has PUBLICLY managed to factor a larger number than 21.
There could be advances (foreign and domestic) that just don't get published because they represent having the upper hand with regard to cryptography. So, from a game-theory perspective, not making waves is in the interest of nation states. They'll even try to be dismissive about concerns.
> dismissing the Manhattan Project as hopelessly stalled in 1944
Then again, there are enough examples of failed projects. Why should this be comparable to the Manhattan Project? In 1944, it was only two years underway, whereas Shor's algorithm is over 30 years old. Tons of articles have been published on quantum computing, while the A-bomb was kept as secret as possible, making learning from other countries, sometimes even from colleagues, impossible. In 1942, an atomic explosion was still hypothetical, whereas quantum computing had its first commercial service 7 years ago. Etc.
So, while in principle a lack of progress doesn't guarantee failure, a comparison to the Manhattan Project is stylistic bullshit.
> Then again, there are enough examples of failed projects. Why should this be comparable to the Manhattan project? In 1944, it was only two years underway, whereas Shor's algorithm is over 30.
1944 is a bit arbitrary. Szilard for one was thinking about it earlier:
> […] He conceived the nuclear chain reaction in 1933, and patented the idea in 1936. In late 1939 he wrote the letter for Albert Einstein's signature that resulted in the Manhattan Project that built the atomic bomb….
* https://en.wikipedia.org/wiki/Leo_Szilard
Partly inspired in 1932 by reading Wells' book, published in 1914:
* https://en.wikipedia.org/wiki/The_World_Set_Free
How long was humanity thinking about flying before the Wright brothers and 1903? We had Babbage's analytical engine (and Lovelace) in 1837, with Zuse's Z2 and the British bombes both in 1940; Zuse's Z3 in 1941.
The main point is that just as you can't ask for a tiny nuclear explosion, because nuclear physics just doesn't work that way, you also can't ask for a factoring of 21 with Shor's algorithm. Quantum computing just doesn't work that way, sorry.
The analogy between nuclear fission and quantum computing doesn’t really work. Fission was a relatively new physical phenomenon the Manhattan Project scientists were studying to turn it into a weapon of mass destruction on a scale that too had no precedent except in natural disasters. Quantum computing is a new technology that is supposed to make already effectively computable problems computable faster; it is ideally supposed to provide an increase in capacity, not capability. It should definitely be able to make tiny computations work before going for the bigger problems. That’s how all computing works, if it can’t solve simple problems, it’s never going to solve bigger ones. What you’re saying here essentially sounds like “there will be a magical event one day when quantum computing solves the biggest computing problems and we’ll all realize it works.”
I am not particularly invested either way in the likelihood of quantum computing being a major breakthrough or not, but this seems like yet another area of computing research, like crypto and LLMs, which in recent years is increasingly being flooded by people on a hype train.
Given that 15 has already been factored using Shor's algorithm on a real quantum computer, I think we can.
No, you really can't. Being able to factor 15 but not 21 with Shor's algorithm is normal. I know it sounds absurd, but it really is that way, because factoring 21 is about 100 times harder than factoring 15.
See https://algassert.com/post/2500 for details.
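Roughly, as I understand it: for N = 15, every element of the multiplicative group mod 15 has order 1, 2, or 4 (e.g. 2, 4, 8, 16 = 1 mod 15), all powers of two, so the period-finding step collapses into an almost trivial circuit. For N = 21 you already hit orders like 6 (2, 4, 8, 16, 11, 1 mod 21), which is not a power of two, so you need a larger exponent register and genuine modular multiplication; hence the jump in difficulty.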
My point was that the comparison with nuclear explosions is wonky, since we (in the world of that analogy) already have seen a tiny nuclear explosion 15 years ago. And we kept being told that explosions 100 times larger are just around the corner, but explosions 25% larger are way too hard to expect.
I get that there's a lot of R&D going on to make larger quantum computers a thing and that there's been very definite progress, but factoring 21 is just too hard to expect for now. But that also pushes the date where pre-quantum cryptography is broken further into the future. If we still struggle to factor one of the smaller 5 bit numbers, factoring the 128 bit numbers necessary to break elliptic curve cryptography seems quite far away.
I talked to a guy who did his doctoral degree on quantum computing and he was not worried at all. In fact he thought it was wildly overhyped, and like cold fusion, self driving cars, or string theory, always just around the corner. Just give us five more years and another grant, please.
Meanwhile Waymo has 200 million autonomous miles under its belt.
Waymo had millions in 2015.
Waymo doesn't seem to know it.
https://waymo.com/research/safety-performance-of-the-waymo-r...
> Waymo’s rider-only ride-hailing operations reached its first one million rider-only miles on January 21, 2023
The key distinction being "rider-only".
If we are counting supervised miles, then Tesla is in the lead with around 8 billion miles.
N=1 sample size.
I talked to another guy with the same degree in the same field and he was concerned.
Scott used to be that guy.
I said this about LLMs a few years ago, and now here we are.
Yeah 70 years ago right.
The abacus thing is pretty funny, but it's dangerously uninformed. https://bas.westerbaan.name/notes/2026/04/02/factoring.html
Is there a better benchmark to use?
Honest question.
How can a lay person track the real word progress of quantum computers?
Most approaches have missing "capabilities" that can be tracked. Adam Zalcman lays them out for superconducting qubits here. https://westerbaan.name/~bas/rwpqc2026/adam.pdf
For the neutral atoms approach in particular, there doesn't seem to be a clear capability still missing for building a full-scale CRQC: each of the separate components has been demonstrated. Of course, when they try to put everything together they'll undoubtedly hit unexpected issues with integration. Wish I could be a fly on the wall at those labs.
Many people in this thread are skeptical about quantum computers, and that's fair. This migration is a big part of my current job, and even I think that there's a non-negligible chance that we won't see commercially available quantum computers anytime soon.
The problem is that we're not trying to predict the exact future, we're hedging against possible developments. If there's a 50/50 chance of quantum computers being widely deployed for cryptanalysis, then there's a 50% chance of this migration being useless. But you don't want to bet your security on a coin toss! So, we migrate.
That's the unfortunate truth of security, sometimes the protections are never triggered. But you still need them.
There are different types of skepticism when it comes to Quantum Computing (QC). One can be skeptical about its feasibility, namely whether a fault-tolerant, scalable quantum computer can be achieved at all (Gil Kalai, for instance).
Or one can be skeptical about its applications in the real world.
I am very much in the second camp. QC has mainly two important application areas: breaking some public-key cryptography and simulating quantum systems.
I think the first one is very real; we need to be serious and careful about the migration.
The second area is, I think, extremely overhyped.
One should ask which cases for investing in QC make financial sense. I can think of a couple of areas where quantum effects are important enough to justify it: better designs for enzymes and solid-state batteries. The case for enzyme design is weakened even more if you check the recent paper by Garnet Chan: https://bsky.app/profile/dulwichquantum.bsky.social/post/3mh...
Unless we see collaboration between IBM/Google and BYD/CATL/Tesla that leads to next-gen solid-state batteries, I would say it won't have a substantial impact on the real world. One also has to consider that QC is not the only method for simulating strongly coupled quantum systems; there are already other methods: tensor networks, deep-learning-based approaches, etc.
Lastly, QCs will only arrive at some point in the future, so they kind of need to hurry up, since current benchmarks for EV batteries are improving every year. There is also the issue of translating lab results into a production environment.
All of these factors are eating away at the relevance of QCs when it comes to real-world applications.
I think we are essentially left with a situation where the only practical application of the technology (QC) is to steal stuff on the internet.
Can you talk about what algorithms you're migrating to?
Disclaimer: what follows is my opinion.
There's a good consensus that for key exchange/encryption (TLS, SSH, age, etc) the way forward is ML-KEM 768 together with some classical algorithm, like X25519. The public keys are larger (1 KB), but that's usually ok unless you're working on very small microcontrollers. And you should migrate quickly because of harvest-now-decrypt-later attacks.
For signatures, things are harder because there are tradeoffs. Some algorithms have large signatures (10+ KB), others require keeping state and have catastrophic consequences if subkeys are reused. And the systems around it are also more complicated: in a certificate, should you put a classical and a PQC signature together? Or should the PQC signature go in an extension? Should the extension be marked as critical and fail loudly on old clients, or should new clients have a special case to always check it if PQC signature validation is available? Or should we abandon the certificate chains and move to Merkle Tree Certificates[1]?
So signatures/authentication are still up for debate. Unless your team is on the bleeding edge of either crypto research or security risks, there's not much to do other than wait for a better consensus to form.
[1] https://postquantum.com/security-pqc/googles-merkle-tree-mtc...
Your opinion is most welcome. Cheers!
> And you should migrate quickly because of harvest-now-decrypt-later attacks.
...
> So signatures/authentication are still up for debate. Unless your team is on the bleeding edge of either crypto research or security risks, there's not much to do other than wait for a better consensus to form.
I'm trying, as a layman, to find some not-too-insane middle ground between those contradictions.
That's because of the difference between Confidentiality goals and Authenticity goals.
If I send you a document encrypted with classical crypto today, an attacker could grab a copy, wait a few years, then decrypt with a quantum computer (Harvest-Now-Decrypt-Later). The contents of the document I sent today are exposed in the future.
For documents/transmissions that must remain confidential for 10 years, assuming a quantum computer available in 2030, you should have been encrypting them with PQC since 2020! And if deploying PQC for your clients and servers takes two years, you should have started migrating in 2018!
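This kind of back-of-the-envelope calculation is often written as Mosca's inequality: with x = how long the data must stay confidential, y = how long your migration takes, and z = the time until a cryptographically relevant quantum computer exists, you are in trouble whenever x + y > z. Plugging in the example above (x = 10, y = 2, z = the years left until 2030) gives exactly the 2020 and 2018 start dates.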
But if I send you a signed document, it's safe because you're verifying the signature today and there are no quantum computers available today to forge a new signature. The same goes for SSH authentication and web certificates, for example. They're safe right until the moment quantum computers arrive (and by then you better have a good solution!).
That's why so many open-source projects already support ML-KEM for key exchange/encryption, but signatures are still under discussion. The former is more urgent.
I'm sure eventually I'll eat my words - but quantum still seems like a massive marketing gimmick. The technology itself is incredibly interesting, but it feels as if CERN began advertising itself as a marketing stunt - there's just something about the way I see quantum marketed and advertised right now that doesn't seem to align with reality.
> * it feels as if CERN began advertising itself as a marketing stunt*
Quantum AI harvesting antimatter
I suppose in the spirit of the article - it's as if the Manhattan Project in 1944 were telling the world that, theoretically, it's 6-12 months away from igniting the entire upper atmosphere.
> the Shor of Damocles
Perfect.
People are starting to catch on to the AI scare mongering, let the quantum computer scare mongering begin. We should probably start giving these companies lots of money lest other countries beat us to it.
Aaronson knows his stuff, but I am not sure he has considered the possibility that, in this current hype cycle, the quantum researchers breathlessly reporting to him that a breakthrough is just around the corner are lying to him and to themselves.
I have been hearing about one more technical hurdle to solve before quantum algorithms become feasible since before I graduated. That was in 1996.
This is true, practical quantum computing is always "just a couple of years away".
At the same time, moving to more secure encryption really isn't difficult. How many times have algorithms been deprecated over the past 20 or so years? It's time to do it again.
Let's just make sure that the NSA hasn't worked in any backdoors. At the latest since Snowden, anything they work on is suspect.
There is no clear evidence that the risk of "a practical quantum computer arrives in the next 5 years" is greater than the risk of "post-quantum scheme X is broken" for any given scheme X. The only way to go is hybridization, and apparently it is quite hard from an engineering point of view.
There is evidence of the opposite: the supersingular-isogeny mumbo-jumbo scheme (SIKE) turned out to be easily breakable on an ordinary computer.
Hybrid encryption is as simple as running one encryption and then the other. The problem is mostly that post-quantum keys are large.
Am I missing something fundamental here?
If Algo-A and Algo-B both rely on "factoring big numbers is hard!", then once the Quantumpocalypse occurs, breaking Algo-B(Algo-A(plaintext)) is no harder than asking ChatGPT 99.5 to add an extra step in your vibe-coded cracking engine's frontend, such that it now does B_breaker < cyphertext | A_breaker >> plaintext.lol or whatever the equivalent is for the fashionable language of that future day.
He meant hybrid encryption as in: use both a well-established classical "factoring big numbers is hard!" algo and a fancy new post-quantum algo. That way, if it turns out the fancy new algo can be broken by non-quantum computers, at least you aren't in a worse position than before, because you are still protected by the well-established classical algo.
You have to break both algorithms. The post-quantum one is quantum-safe if it's secure at all, but it could also turn out to be completely insecure, like supersingular isogeny (SIKE) was.
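To illustrate the "break both" property, here's a minimal sketch of deriving one hybrid session key, assuming you already have a classical shared secret (say, from X25519) and a post-quantum one (say, from an ML-KEM library; no particular binding is assumed here). Only the combination step is shown:

    # Sketch only: fold two independently obtained shared secrets into one
    # session key with a KDF. Needs the 'cryptography' package; classical_ss
    # and pqc_ss are placeholders for real key-exchange outputs.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def hybrid_session_key(classical_ss: bytes, pqc_ss: bytes) -> bytes:
        """An attacker must recover BOTH inputs to learn the output."""
        return HKDF(
            algorithm=hashes.SHA256(),
            length=32,
            salt=None,
            info=b"hybrid-kex-demo",
        ).derive(classical_ss + pqc_ss)

Deployed hybrids (X25519 plus ML-KEM in TLS and OpenSSH, for example) do essentially this, just with carefully specified encodings and key schedules.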
I hard disagree with your assertion that moving to more secure encryption isn't difficult. It is insanely difficult, especially at global scale.
It'll be a 90/10 rule: 90% of the upgrades will be straightforward. It's important to identify the hard 10% early. For many it's probably already too late.
And in the process immediately convert huge numbers of devices into e-waste. Then check the excuse calendar again for tomorrow's reason to deprecate yet another batch of "legacy" ciphers from OpenSSL.
The sooner we start making devices ready for better encryption systems, the fewer devices will be wasted.
Are you saying this because it's an evergreen joke or because you really think there hasn't been meaningful progress in the field since 1996?
Duke Nukem Forever was released fifteen years ago. Some things never happen until they suddenly do.
The wolf really does eat the boy at the end of The Boy Who Cried Wolf.
But Duke Nukem was developed with visible progress.
We are still not factoring 21, let alone 35, let alone numbers with thousands of digits.
Quantum error-correction algorithms (which would allow factoring numbers with thousands of digits) only begin to work when gate fidelity and other parameters are above a certain threshold.
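As a rough illustration of why that threshold behaves like a cliff rather than a gradual slope (a surface-code-style rule of thumb; the constants below are illustrative, not from any particular hardware):

    # Rule-of-thumb model: logical error rate ~ A * (p / p_th) ** ((d + 1) // 2)
    # for physical error rate p, threshold p_th, and code distance d.
    def logical_error_rate(p: float, p_th: float = 1e-2, d: int = 25, A: float = 0.1) -> float:
        return A * (p / p_th) ** ((d + 1) // 2)

    print(logical_error_rate(p=1e-3))  # below threshold: ~1e-14, correction wins
    print(logical_error_rate(p=2e-2))  # above threshold: the model blows up past 1,
                                       # i.e. adding qubits only makes things worse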
> gate fidelity and other parameters are above a certain threshold
A threshold that might be beyond what the physical properties of our universe allow. It is still unclear.
That possibility would require the discovery of new physics, for which there is no indication yet.
This is what bugs me about both quantum computers and commercial fusion power. There's so much talk about how it's just inevitable and will happen soonish, but a lot of the evidence suggests, in some cases strongly, that it might not ever be possible.
I find it weird how bleeding edge research, at the very edges of both physics and engineering, is treated as though it's a market development about to drop. Possibly a consequence of pure R&D having all but died? Getting funded requires pretending there's a business plan for what you're working on?
There's no strong evidence of impossibility. For quantum computers to be impossible at scale, we would need new, unknown physics. Fusion mostly requires lots of engineering. And until those engineering efforts have run their course, there can't be strong evidence of practical impossibility or impracticality.
By not ever be possible, I mean in a practical sense, including e.g. the economics of it, as well as reliability, checkability, etc.
Jassby's article about fusion (https://thebulletin.org/2017/04/fusion-reactors-not-what-the...) describes several well-understood issues that could prevent commercial fusion power from ever being practically possible.
For quantum computers, the situation is quite similar. Mikhail Dyakonov and several others have laid out the case well.
At least we don't have anyone claiming that interstellar travel is just 10 years away, yet. Probably because it's more difficult to make an economic case for it. But the issues are quite similar. In principle, in terms of physics, nothing prevents an interstellar journey. In practice, it just isn't going to happen.
The Boy Who Cried Wolf is a story about a boy who had seen a wolf and successfully scared it away by raising a commotion in a disbelieving village. One day the disbelieving villagers refused to show up; the boy was eaten, and thus proven correct.
But, as happens in real-life politics too, the people who had just been proven wrong continued to blame the boy.
The story is told from the point of view of villagers trying to hide their culpability by blaming the victim.
That's one way to completely reframe the story to fit the narrative you want to push.
> The Boy Who Cried Wolf is a story about a boy who had seen a wolf and successfully scared it away by raising a commotion in a disbelieving village
What happened before that in the story?
Quantum computers will flourish the same day that fusion does.
Sounding the alarm while presenting no data or science, as a member of the National Academy of Sciences, is doing a disservice to the position, to science, and to oneself.
Show the data, the charts, let people decide for themselves.
> Sounding the alarm while presenting no data or science
One needs to read OP's blog post in the context of his other posts from the last couple months (many of which have been discussed here on HN in one way or another), where he does discuss the science.
Demanding information one won't read or understand.
Does djb ever frequent HN? Can we summon him with the correct incantations?
I'd really like to know what his current work on the subject entails, but when I try googling his stuff all I find are years-old papers, more recent meta discussion, and him making a few comments about other peoples' work.
I was sure that by now he'd have at least collaborated on some avant-garde PQ algo that was as different from the NSA-approved stuff as chacha20-poly1305 was from AES. I was hoping for a PQ-NaCl that folks would be using soon, not the libpqcrypto that seems to lack traction among devs (for reasons I do not understand). I am disappoint.
(It's probably all tucked away in some corner of the web that a layman like me will never find. Sigh.)
Edit: Hah! I gave up on looking for papers or repos and decided to just read his blog instead. Well would'ya look at that! It's non-stop PQ ranting of the kind we've come to love and cherish from DJB. No new repos or code with his imprimatur that I can see so far but better than I was expecting. Looks like I've got some reading to do....
I should have subscribed to his rss feed years ago. And his "microblog" too! https://microblog.cr.yp.to/
Tl;dr:
> if quantum computers start breaking cryptography a few years from now, don’t you dare come to this blog and tell me that I failed to warn you. This post is your warning.
If quantum computers broke cryptography, I think going to some guy's blog and complaining that he failed to warn me would be pretty low on my to-do list.