I woke up to the news of a new form of timing side-channel attack based on the dynamic frequency scaling of modern x86 processors. This is the Hertzbleed attack, which will be presented at the USENIX Security Symposium in Boston in August. The authors are Yingchen Wang and Hovav Shacham from the University of Texas at Austin, Riccardo Paccagnella, Elizabeth Tang He and Christopher Fletcher from the University of Illinois Urbana-Champaign, and David Kohlbrenner from the University of Washington.
Interestingly for this blog, they target the supersingular isogeny key exchange protocol SIKE. SIKE is implemented in constant time using standard timing-attack countermeasures from traditional ECC, such as the Montgomery ladder. But the frequency scaling feature allows an attacker to mount a timing attack even against supposedly constant-time code. The attack is a form of "zero value attack". See also Patrick Longa's comments on the NIST PQC mailing list.
The key property of SIKE that is exploited is that the protocol message is a triple $(E, P, Q)$, where $E$ is an elliptic curve and $P, Q$ are points on it. The points are a troublesome aspect of SIDH/SIKE, and are the cause of the adaptive attack by Galbraith-Petit-Shani-Ti, the torsion point attacks by Petit and de Quehen-Kutas-Leonardi-Martindale-Panny-Petit-Stange, and so on. In certain contexts, these attacks are prevented by the Fujisaki-Okamoto transform, but this doesn't help in the context of side-channel attacks.
The specific attack on SIKE given by the Hertzbleed authors is also of this form. It involves maliciously choosing the points in a key-dependent way to learn information. As explained by Patrick Longa in his post above, there is a countermeasure to such attacks.
What does this mean for ECC in general? At the moment the timing channel does not seem to be fine-grained enough to attack constant-time elliptic curve systems in use. The paper says:
“Despite its theoretical power, it is not obvious how to construct practical exploits through the frequency side channel. This is because DVFS updates depend on the aggregate power consumption over millions of CPU cycles and only reflect coarse-grained program behavior. Yet, we show … that some cryptographic primitives admit amplification of single key bit guesses into thousands of high- or low-power operations, enough to induce a measurable timing difference.”
But we know that attacks only get better. So it will be interesting and important to see if this approach can be used to attack current ECC systems in practice.
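The amplification idea in the quote above can be illustrated with a toy model. Everything below (the leaky-integrator governor, the power budget, the throttled speed) is made up for illustration and is not the real DVFS mechanism; the point is only that a data-dependent power draw, aggregated over many operations, becomes a measurable timing difference.

```python
def simulate(total_ops, operand):
    """Toy DVFS model: the core runs at full speed until a leaky integral of
    recent 'power draw' (here: the operand's Hamming weight per operation)
    crosses a budget, after which each operation takes longer."""
    time = 0.0
    power = 0.0
    for _ in range(total_ops):
        power = 0.99 * power + bin(operand).count("1")  # leaky power integrator
        time += 1.25 if power > 500 else 1.0            # throttled ops are slower
    return time

# A zero-valued operand draws no power in this model, so the loop never
# throttles; a dense operand pushes the core over the budget and the whole
# run measurably slows down.
t_zero = simulate(100_000, 0)
t_dense = simulate(100_000, (1 << 32) - 1)
print(t_zero < t_dense)
```

This is exactly the shape of a zero-value attack: a key-dependent choice forces either all-zero or dense intermediate values, and the aggregate timing reveals which.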
Eurocrypt 2021 was held as a hybrid conference, with some participants in-person in Zagreb and some online. I was one of the ones joining the conference by Zoom and Zulip. As always in this blog I focus on talks and news most relevant for elliptic curve fans.
For each accepted paper there was a short (20-30 minute) video available in advance of the conference on the conference website, and also a short live Q&A session that is immortalised on the IACR YouTube channel.
There were two invited talks:
Craig Gentry’s talk “A Decade (or So) of Fully Homomorphic Encryption” (FHE) gave an overview of work on FHE, the current main applications of it, and possible future directions. He listed privacy-preserving genome association, neural nets, and private information retrieval as three major application areas. He classified FHE schemes into four “generations”, the fourth of which is the CKKS (Cheon, Kim, Kim, Song) approach to FHE for floating point numbers. He gave an abstract formulation of FHE based on the Rothblum generic approach and showed how this seems to lead inevitably to several of the main schemes. Several times he challenged the community to construct FHE schemes not based on lattice assumptions, stating his conviction that lattices and FHE are not “soulmates” (but he is not willing to die in a duel over this).
Sarah Meiklejohn cleverly tricked the audience into attending a talk on blockchain by using the title “An Evolution of Models for Zero-Knowledge Proofs”. The talk covered a wide range of applications and aspects of zero knowledge proofs, in particular a detailed discussion of the dependence on a common random string (CRS) and various trust models relating to the generation of the CRS.
The papers “Non-Interactive Zero Knowledge from Sub-exponential DDH” by Jain and Jin, “On the (in)security of ROS” by Benhamouda, Lepoint, Loss, Orrù and Raykova, and “New Representations of the AES Key Schedule” by Leurent and Pernot, were chosen for the Best Paper Awards. The first of these explicitly mentions elliptic curves: they require the sub-exponential hardness of DDH, which is only plausible for (non-pairing) elliptic curve groups. The paper is about statistical NIZK arguments and Zaps.
Moving on to the rest of the conference program, I mention the following papers.
“Analysing the HPKE Standard” by Alwen, Blanchet, Hauck, Kiltz, Lipp and Riepel. The paper introduces the notion of “nominal group” to handle groups of non-prime order (such as Curve25519 and Curve448).
“Compact, Efficient and UC-Secure Isogeny-Based Oblivious Transfer” by Lai, Delpech de Saint Guilhem, and me. The paper shows how a special property of CSIDH (that quadratic twisting inverts the class group action) can be used to construct a two-round oblivious transfer (OT) scheme. The paper has a fully UC-secure 3-round version. In the live Q&A session, Lai announced that the paper has a “fixable bug” that means the fully secure version requires 4 rounds. Keep an eye on eprint for an updated version.
“One-way functions and malleability oracles: Hidden shift attacks on isogeny-based protocols” by Kutas, Merz, Petit and Weitkämper. The aim of the paper is to study a group action (one can also view it as a variant of the hidden shift problem) in the SIDH setting, so that the Kuperberg-type hidden shift quantum algorithm (as applies to CSIDH) can also be applied to SIDH. The paper shows how to implement this in the case of overstretched SIDH parameters. These overstretched SIDH parameters are already known to be broken due to an attack by Petit. The method of the paper currently has no impact on the security of SIDH.
“Pre-Computation Scheme of Window-NAF for Koblitz Curves Revisited” by Yu and Xu is about efficient discrete-log crypto using elliptic curves. Sadly there were technical problems during the live session and so Yu was unable to answer questions. Since window-NAF methods have been studied for decades one would not expect major progress, but this paper gives a substantial speedup over the previous state-of-the-art. This is done by giving improved operation counts for computing on Kohel’s $\mu_4$-normal form of elliptic curves, and by organising the precomputation stage for the window method more effectively. Combined, these methods make noticeably wider windows practical than before.
“Sieving for twin smooth integers with solutions to the Prouhet-Tarry-Escott problem” by Costello, Meyer and Naehrig is about choosing primes suitable for efficient implementation of B-SIDH and SQISign. The problem is to construct primes $p$ such that $p - 1$ and $p + 1$ are both very smooth.
“Delay Encryption” by Burdges and De Feo has the best pre-recorded video — Watch it now! (At least, the first 5 minutes.) The paper is about delay functions based on sequential computation of isogenies.
“On Bounded Distance Decoding with Predicate: Breaking the Lattice Barrier for the Hidden Number Problem” by Albrecht and Heninger. The paper is about lattice algorithms for bounded distance decoding (BDD) by reducing to unique-SVP using the embedding technique and then applying enumeration or sieving. The additional feature is the use of a predicate that identifies the unique desired solution. This predicate naturally arises in problems like the hidden number problem (HNP), where the hidden number is the solution to an instance of the ECDLP. The main motivation of the paper is side-channel attacks recovering ECDSA private signing keys from known nonce bits.
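A toy version of the "predicate" idea, with exhaustive search standing in for the lattice step (the real algorithms reduce BDD to unique-SVP; the modulus, leak model, and all numbers below are invented for illustration):

```python
import random

p = 8191                      # toy modulus (in practice: an EC group order)
g = 17
alpha = 2024                  # the hidden number, e.g. an ECDSA signing key
public = pow(g, alpha, p)     # public data the predicate can check against

rng = random.Random(1)
samples = []
for _ in range(2):            # deliberately too few leaks to pin alpha down
    t = rng.randrange(1, p)
    samples.append((t, (t * alpha % p) >> 8))  # leak top bits of t*alpha mod p

def consistent(x):
    """Does candidate x match every leaked most-significant-bits sample?"""
    return all((t * x % p) >> 8 == msb for t, msb in samples)

# The side-channel data alone typically leaves several candidates standing;
# the predicate (matching the public key) identifies the intended solution.
candidates = [x for x in range(p) if consistent(x)]
solutions = [x for x in candidates if pow(g, x, p) == public]
print(len(candidates), solutions)
```

The point of the paper is that checking such a predicate inside enumeration/sieving lets one succeed with fewer or noisier samples than the plain lattice attack.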
“Classical vs Quantum Random Oracles” by Yamakawa and Zhandry discussed separations between the random oracle model (ROM) and the quantum random oracle model (QROM). They show a signature scheme that is secure in the ROM but insecure in the QROM, but as is too often the case with separations it is an artificial scheme that outputs the private key when a query is made on a certain type of input. The paper then gives positive results on Fiat-Shamir signatures and Full-Domain-Hash signatures (and more).
The Rump Session was entertaining. For readers of this blog the most interesting announcement was about the solution to the $5000 $IKE challenge. The talk was given by Craig Costello, who is one of the founders of the challenge. The challenge was solved by Aleksei Udovenko and Giuseppe Vitto. Interestingly, their approach uses the classic meet-in-the-middle approach (as mentioned in the original SIDH paper by Jao and De Feo) rather than the van Oorschot and Wiener (vOW) approach. The classic meet-in-the-middle approach is a time-memory tradeoff and so requires very large storage, whereas the vOW method is a low storage random walk method (but with higher time complexity). A paper is now on eprint that explains the computation and a number of optimisations. The $50,000 challenge is still waiting to be claimed!
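The bookkeeping of the classic meet-in-the-middle approach is easy to see on a toy model. Below, a "walk" multiplies by invertible generators modulo a prime, which is commutative (so more CSIDH-flavoured than a real isogeny claw), but the tradeoff is the same: tabulate all half-length walks from one end, then walk backwards from the other end until the table is hit. All parameters are invented for illustration.

```python
import itertools
import random

P = 1_000_003
G = [2, 3, 5]                      # toy invertible 'steps'
Ginv = [pow(g, -1, P) for g in G]  # backward steps

def walk(x, path, gens):
    """Apply a sequence of steps (each a generator index) to x."""
    for d in path:
        x = x * gens[d] % P
    return x

rng = random.Random(1)
HALF = 6
secret = [rng.randrange(3) for _ in range(2 * HALF)]
start, end = 1, walk(1, secret, G)

# Meet in the middle: memory ~ 3^HALF table entries and time ~ 2 * 3^HALF
# walks, versus ~ 3^(2*HALF) time for naive search. vOW gives back most of
# the memory at the cost of extra time, via parallel random walks.
table = {walk(start, fwd, G): fwd
         for fwd in itertools.product(range(3), repeat=HALF)}
for back in itertools.product(range(3), repeat=HALF):
    mid = walk(end, back, Ginv)
    if mid in table:
        found = table[mid] + tuple(back)
        break

assert walk(start, found, G) == end
```

The $IKEp182 computation made exactly this time-for-memory trade, which is why the large storage requirement was the surprising part of the announcement.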
The 3rd PQC Standardization Conference, organized by NIST, took place online from June 7 to 9, featuring a mix of live talks, pre-recorded talks, and panels. The oral exchanges were complemented by a text-based forum, provided by an app well known for its lack of end-to-end encryption, where some topics were eventually debated at length. Slides for the talks will be available in a few days, and video recordings in a few weeks. In the meantime, I will give a personal account of the conference based exclusively on my recollections. I took no notes, and I was often preparing or eating dinner at the same time, so nothing of what I will report should be taken as an established truth.
Kicking off the conference, NIST gave some interesting bits of information on the status of the selection process and the future. The timeline for the 3rd round stays put: NIST expects to announce the selected standards sometime between the end of 2021 and the beginning of 2022, as well as the alternates that will move to Round 4. Two announcements stirred more emotions in the audience. The first: NIST reported on the difficulties of acquiring patents that are perceived to hinder standardization of some candidates, and specifically pointed to a statement recently published by CNRS (archived). Several researchers with links to French academia have already expressed their disappointment with CNRS’ strategy. The second was a confirmation of a possibility that NIST had already hinted at previously: roughly 6 months after the end of the 3rd round, NIST plans to reopen the process to submissions, specifically seeking to add more variety to signatures. The audience understood that NIST will not accept new KEM candidates in this phase. Given the recent progress in designing post-quantum signature schemes, including some that received accolades at AsiaCrypt, this announcement should interest the readers of this blog. In the same spirit, NIST doubled down on the possibility of standardizing SPHINCS+ at the end of the 3rd round.
Throughout the three days, each of the finalists and alternate finalists had a 15-minute slot to present their updates for the 3rd round. In most cases, there were minimal or no updates. Picnic appears to be the most notable exception, with important changes to the structure of the LowMC block cipher. Rainbow and GeMSS had some explaining to do, in response to recent advances in cryptanalysis, and GeMSS had to drop some parameters. Vadim Lyubashevsky and Dan Bernstein possibly gave the most opinionated talks; I recommend watching both when they are available.
Several contributed talks reported on various aspects of post-quantum cryptography. Lattices had the greatest share; I especially enjoyed the talks by Thomas Espitau and Yu Yang on variants of Falcon… or maybe it was the excellent Château Latour I was having at the same time? I also enjoyed the “Applications” session, which opened my eyes to how difficult it is to fit any of the PQC candidates into constrained environments such as smartcards and IoT.
Of particular interest to the readers of this blog should be the three contributed talks on isogeny-based cryptography:
Péter Kutas (joint work with Christophe Petit) gave an excellent, if somewhat time-constrained, survey talk on several different “torsion point attacks” against SIDH and variants, which have previously appeared in this blog. The take-away message is that SIDH, SIKE and B-SIDH are well protected against all of them, be it because of the Fujisaki–Okamoto transform or because of their intrinsic limitations, but the broader space of generalizations of SIDH a cryptographer might imagine is somewhat limited by these attacks, as has already been repeatedly shown. I would certainly like to see more research in this promising direction, which has applications beyond cryptanalysis.
Élise Tasso (joint work with Nadia El Mrabet, Simon Pontié and myself) presented an in-the-lab confirmation of a fault-injection attack on SIDH first proposed by Ti. The attack is alarmingly easy to mount (ok, we used equipment worth 40k€, but that’s only because we’re rich), but at the same time:
It requires multiple repetitions of key generation with the same secret key, something that should never happen in a correct implementation of SIDH or SIKE;
It appears to be difficult to exploit in the presence of key compression;
It has a countermeasure so simple and cheap, that it may as well be included by default in the reference code.
The old-timers of this blog will not be surprised to learn that the best talk of the conference was delivered by Craig Costello. In only 5 minutes, Craig pretended to use SageMath code to generate pairs of toy SIDH public keys (one for Alice, one for Bob), discard the secrets, and (clumsily fail to) upload the public keys to a GitHub repo. Then, he announced that Microsoft is offering $5,000 for the solution of the smaller instance, named $IKEp182, and $50,000 for that of the larger instance, named $IKEp217. The prize money matches what the SIKE team estimates to be the material cost of breaking the instances, so think twice before reallocating your BitCoin mining resources.
If you believe this stunt, then be our guest and start cracking, but don’t come whining when some mysterious bounty catcher from Australia claims the big prize. For now, I can only observe that everybody seems to trust Microsoft and no one has even forked the GitHub repo (but, do you trust GitHub, anyway?). As a public service to the community, here is the SHA1 of the latest commit, dated June 9, 2021: 72dc1cb50d5a78fee605757e2f33043b2f36f9b4, and here is the SHA-512 of the contents of the repo (excluding .git and .gitignore), as of June 9:
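Such a digest can be recomputed by anyone with a few lines. This is a sketch under assumptions of my own (sorted walk order, raw file contents, relative paths hashed alongside the data); the canonicalisation would have to match whatever was used originally for the digests to agree.

```python
import hashlib
import os

def repo_digest(root, exclude=(".git", ".gitignore")):
    """SHA-512 over relative paths and file contents, walked in a fixed
    (sorted) order, so the digest is reproducible on any machine."""
    h = hashlib.sha512()
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune excluded directories in-place so os.walk skips them.
        dirnames[:] = sorted(d for d in dirnames if d not in exclude)
        for name in sorted(filenames):
            if name in exclude:
                continue
            path = os.path.join(dirpath, name)
            h.update(os.path.relpath(path, root).encode())
            with open(path, "rb") as f:
                h.update(f.read())
    return h.hexdigest()
```

Any change to a file, a filename, or the set of files changes the digest, which is all a commitment like this needs.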
Anyway, the video recording will be available in a few days, and you will all get a chance to have a close look at Craig’s sleight of hand. Be on the watch for video stitching by “those cheeky devils at the NSA”!
There have been quite a few papers on isogeny crypto posted in the last few months. Here is a brief summary of some of them. I thank the members of my research group for fruitful discussions about these papers.
This is a greatly revised and expanded version of an earlier paper by a subset of the authors. I am writing about the March 2021 version.
The paper builds on an idea of Petit (published at ASIACRYPT 2017) to exploit the fact that SIDH gives the image of torsion points. To be precise, suppose $E_0$ is an elliptic curve over $\mathbb{F}_{p^2}$ with known endomorphism ring (for simplicity let's take $j(E_0) = 1728$). Let $E$ be an elliptic curve such that there is an isogeny $\tau : E_0 \to E$ of degree $A$. Suppose we are also given $(\tau(P), \tau(Q))$, where $P, Q$ generate the subgroup of points of order $B$ on $E_0$. The attacker wants to know $\tau$. The general approach is to choose a suitable endomorphism $\theta$ on $E_0$ such that the isogeny $\tau \circ \theta \circ \hat{\tau}$ from $E$ to itself has degree divisible by $B$. One can then compute this isogeny and hence work back to determine $\tau$. Two specific technical improvements in this paper over Petit's work are the use of the dual isogeny and the Frobenius map.
The paper contains a number of results, but the main headline result is to give a polynomial-time attack on "over-stretched" SIDH, where the torsion degree $B$ is much larger than the secret isogeny degree $A$ (Petit's original paper needed a stronger imbalance between $B$ and $A$). This is an important result, but it has no impact on standard implementations of SIDH (such as the SIKE submission to NIST), which have $A \approx B \approx \sqrt{p}$. However, the paper is a warning about non-standard variants of SIDH, and the paper discusses several such situations.
The CSIDH approach to isogeny crypto is very appealing for crypto constructions and has attracted a lot of interest. However, the problem is that it can be attacked using Kuperberg's quantum hidden-shift algorithm. Several works, in particular the two EUROCRYPT 2020 papers by Peikert and Bonnetain-Schrottenloher, have seriously challenged the proposed CSIDH parameters and suggested they do not meet the minimum required post-quantum security levels. So is CSIDH doomed?
The paper analyses a natural approach to rescue CSIDH: use a much larger prime $p$ (and hence a much larger class group), but choose private keys from a small subset of the class group, namely ideals of the form $\mathfrak{l}_1^{e_1} \cdots \mathfrak{l}_n^{e_n}$ with each exponent $e_i$ in a small fixed range. (A similar proposal was made by myself and Luca De Feo in our SeaSign paper, in the context of lossy keys.)
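Back-of-the-envelope, the cost of restricting the exponents is that many more ideals $\mathfrak{l}_i$ are needed to keep the key space large. The bound of 1 below (exponents in $\{-1, 0, 1\}$) is just an illustrative choice, not necessarily the paper's:

```python
import math

def primes_needed(security_bits, bound):
    """Each exponent e_i ranges over 2*bound + 1 values, so n split primes
    give (2*bound + 1)**n distinct keys; solve for the n reaching the
    target key-space size of 2**security_bits."""
    per_prime_bits = math.log2(2 * bound + 1)
    return math.ceil(security_bits / per_prime_bits)

print(primes_needed(256, 1))  # e_i in {-1, 0, 1}: 162 small primes
```

More primes in the factor base in turn pushes up the size of $p$, which is where the large parameters discussed below come from.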
The paper is mostly about Kuperberg's quantum algorithm and its analysis. The paper gives arguments that suggest Kuperberg's algorithm is not applicable to this variant of the problem. The analysis seems plausible, though I am not an expert.
The big question is whether this saves CSIDH. For the proposed parameters, the prime is at least 4000 bits, so that protocol messages in key exchange are now at least 4000 bits. This is much worse than the originally suggested 512 bits. The timings are of the order of a few seconds for each operation, which again is much worse than desired. In conclusion, CSIDH may be saved, but it does lose some of its advantages over lattices (e.g., small keys and messages).
This paper relates to both the papers already discussed. It constructs a group action in the SIDH setting, which opens the door to a Kuperberg-type sub-exponential quantum attack on SIDH. Again let $E_0$ be the elliptic curve over $\mathbb{F}_{p^2}$ with $j(E_0) = 1728$. Let $\iota$ be an endomorphism of $E_0$. Let $G$ be a certain multiplicative subgroup of the endomorphism ring. We need $G$ to act on some set. Let $K$ be a subgroup of $E_0$ of order $A$. The action of a group element $g$ is defined to be $E_0/g(K)$. So $G$ is acting on a set of elliptic curves (essentially a set of $j$-invariants). Computing the group action is easy when $K$ is known, but the difficulty is to compute the group action on the challenge curve in the SIDH protocol (when $K$ is not known). To do this, the authors use torsion point information, as already discussed in item 1 above. So suppose we are given $(\tau(P), \tau(Q))$, where $\tau$ has kernel $K$ of order $A$ and where $P, Q$ generate the subgroup of order $B$. The attack requires over-stretched parameters, with $B$ much larger than $A$. Since SIDH with such parameters can already be broken in classical polynomial time by Petit (or the first paper discussed above), one would not use Kuperberg's algorithm in this case. Hence this paper does not weaken SIDH. Instead the merit of the paper is to explain how group actions can be introduced in the SIDH setting, which contradicts the conventional wisdom that "SIDH is not based on group actions".
This post is about SQISign, an exciting post-quantum signature scheme based on isogenies. The blog post is intended for people who understand SIDH well, but are not experts at quaternions and Eichler orders. I do not claim to explain (or understand) all the details. I am mainly trying to give an overview of the main technical achievements of the SQISign authors.
First some background. Previously there were three isogeny-based signature schemes. A scheme based on SIDH was given by Yoo, Azarderakhsh, Jalali, Jao and Soukharev. The basic idea is simple: given a public key $E_A$, the prover wants to prove that they know the secret isogeny $\phi_A : E_0 \to E_A$. To do this, as in SIDH, commit to an isogeny $\psi : E_0 \to E_1$ with kernel $\langle R \rangle$, and the value $E_2 = E_A / \langle \phi_A(R) \rangle$. The verifier sends a single bit, and the prover reveals either $\psi$ and its image on $E_A$, or the parallel isogeny $E_1 \to E_2$.
A variant of this scheme was proposed by Galbraith, Petit and Silva. Their paper also included a completely different (and more complicated) scheme based on quaternions and endomorphism rings. Thirdly, the SeaSign signature by De Feo and Galbraith is based on CSIDH. In all three cases, the basic form is a 3-move identification protocol (Sigma protocol) with single-bit challenges. The SeaSign scheme manages to increase the challenge size by using a Merkle tree and some other ideas. But even with SeaSign, it is necessary to repeat the scheme a number of times in parallel to ensure that the probability a forger can guess the challenge is negligible.
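The arithmetic behind the repetition count is simple: with a challenge space of $2^c$ possibilities, a cheating prover who guesses the challenge in advance survives one round with probability $2^{-c}$, so reaching $2^{-128}$ soundness error takes $\lceil 128/c \rceil$ parallel repetitions. A minimal sketch, with illustrative numbers:

```python
import math

def rounds_needed(challenge_bits, security=128):
    """Parallel repetitions needed to push the per-round soundness error
    2**-challenge_bits below the target 2**-security."""
    return math.ceil(security / challenge_bits)

print(rounds_needed(1))    # 1-bit challenges: 128 repetitions
print(rounds_needed(128))  # exponentially large challenge set: 1 round
```

Each repetition carries its own commitment and response, which is what blows up signature sizes in the bit-challenge schemes.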
The fundamental problem in isogeny signatures has been to create a scheme that has an exponentially large challenge set, so that the scheme doesn’t have to be repeated. SQISign achieves this in an ingenious way, by using the Kohel-Lauter-Petit-Tignol (KLPT) algorithm.
The idea is simple enough at first sight. Again we want to prove knowledge of the secret isogeny $\tau : E_0 \to E_A$. The commitment is a curve $E_1$ such that the prover/signer knows an isogeny $\psi : E_0 \to E_1$. However, instead of a single bit challenge, the challenge is now an isogeny $\varphi : E_1 \to E_2$. The signer/prover, knowing the secret key, can compute the isogeny $\varphi \circ \psi \circ \hat{\tau} : E_A \to E_2$. However it is insecure to publish this isogeny, since it would leak the kernel of $\tau$ and hence reveal the private key. Instead, we'd like to use the KLPT algorithm to create an isogeny $\sigma : E_A \to E_2$ which is "independent" of $\tau$ (this is exactly what KLPT does). If there was an efficient way to do this then we'd have a cool signature scheme. See the figure in the SQISign paper.
The SQISign paper has to overcome two big problems with this idea. The first is that we can't run the KLPT algorithm at $E_A$. The KLPT algorithm relies on the norm form of the quaternion order having a very special shape that allows one to reduce solving norm equations to solving binary (two-variable) quadratic forms (using Cornacchia's algorithm). The endomorphism ring of $E_A$ is not expected to be so well-behaved. So we can't run KLPT with $\mathrm{End}(E_A)$. The second problem is that an attacker could easily forge signatures by choosing $E_1$ such that the attacker knows an isogeny $E_A \to E_1$. Then, given the challenge $\varphi : E_1 \to E_2$, the forger knows an isogeny from $E_A$ to $E_2$, and could win.
The SQISign paper solves both problems. Now would be a good time to mention that the authors of SQISign are an isogeny “dream team” of De Feo, Kohel, Leroux, Petit and Wesolowski. A deep understanding of KLPT is necessary to create the scheme, and the paper is a very impressive work.
We start with the special curve $E_0$ of $j$-invariant 1728. It has a "nice" endomorphism ring $\mathcal{O}_0$. The KLPT algorithm exploits the fact that isogenies from $E_0$ correspond to left $\mathcal{O}_0$-ideals. Equivalent ideals correspond to isogenies with the same image curve.
The most important thing a reader of this paper has to understand is that $E_0$ is the only curve we can work with, in terms of ideals and endomorphisms. So every computation has to be pulled back to $E_0$ in some way. This requires a whole bunch of sneaky constructions, starting with the private key. The private key is not one isogeny $E_0 \to E_A$, but two. Precisely, $\tau$ is an isogeny of "small" prime degree corresponding to an ideal $I_\tau$. That's nice in quaternion-land, but we can't compute the public key from it, so we need to compute an equivalent ideal whose norm is a power of small primes, and thus compute the corresponding isogeny $\tau' : E_0 \to E_A$. The public key of a signer is $E_A$ and the private key is both $\tau$ and $\tau'$.
The commitment is a "random" isogeny $\psi : E_0 \to E_1$ of degree coprime to the degrees of both $\tau$ and $\tau'$. The prover can compute a left $\mathcal{O}_0$-ideal $I_\psi$ corresponding to $\psi$.
The challenge (which will be created using the Fiat-Shamir transform in the signature scheme) is the isogeny $\varphi : E_1 \to E_2$. This isogeny is fully known to the prover; for example its kernel is known. There is an exponentially large set of possible challenges, which is what makes the signature scheme interesting. The degree of $\varphi$ is coprime to everything else. The prover needs to compute an ideal corresponding to $\varphi$, and to do this we pull back the kernel of $\varphi$ under $\psi$, compute the ideal and then push forward (via Lemma 3 of the paper). This results in an ideal $I_\varphi$, which is a left ideal of the order corresponding to $E_1$.
Now comes the really subtle bit. We have the three isogenies $\tau$, $\psi$ and $\varphi$, and three ideals $I_\tau$, $I_\psi$, $I_\varphi$. The isogeny $\varphi \circ \psi \circ \hat{\tau} : E_A \to E_2$ corresponds to a product of these ideals. Somehow we need to compute an equivalent ideal, but this is a left ideal of the order corresponding to $E_A$, which is no good to us. The trick is to pull back under $\tau$, which we can do since the relevant degrees are coprime to the norm of $I_\tau$. (This is the part where it is necessary to have two different private keys.) This ideal corresponds to an isogeny $E_0 \to E'$ for some curve $E'$. Then we would run KLPT on this ideal to get a random equivalent ideal of suitable smooth norm, then push forward via $\tau$ or $\tau'$ and we're done, right? Wrong!
We have two different isogenies $\tau, \tau' : E_0 \to E_A$, and the pushforward of one of them gives an isogeny to $E_2$. But the problem is: the pushforward under the other does not necessarily have an image curve isomorphic to $E_2$, which is what we need. How to enforce this requirement? This is where the Eichler orders come in. Eichler orders are intersections of two maximal orders in a quaternion algebra. The Eichler order of interest in this paper is $\mathbb{Z} + I_\tau$. (This is the reason why the degree of $\tau$ is taken as small as possible.) The key result is Corollary 1 of the paper, which restricts the equivalence of ideals to multiplication by elements in the Eichler order. It is necessary to develop a variant of KLPT to ensure this restriction is possible.
The paper contains several optimisations and also some new computational assumptions. One assumption is that a certain isogeny problem arising in the scheme admits no meet-in-the-middle attack, which seems plausible (though surprising at first sight). Other assumptions (see Section 7.3) are about the randomness of the outputs of the KLPT algorithm in this setting.
What about the attack I mentioned earlier (the "second problem"), where a forger chooses $E_1$ by computing an isogeny $E_A \to E_1$? This attack is prevented by imposing a condition on the degree of the commitment isogeny. The attacker can't run the same algorithm as the signer, since they have no way to pull back isogenies to $E_0$ and run KLPT.
To conclude, the exciting thing about SQISign is the exponentially large set of challenges, which means the signing process does not need to be repeated. This is why the signatures are very short compared with other post-quantum signatures. I have no opinion on the practicality of the scheme.
One of the reasons I started this blog was to share information with people who were unable to attend conferences. So I’ve tried to maintain a tradition of conference reviews. I’ll continue, even though online conferences are more inclusive and there is no excuse not to attend.
There were technical problems with the first panel (the “Pacific rim” group), but the audio is ok and the quality gets better after the first few minutes. The discussion in the first panel covered various attacks on signatures (including Minerva and TPM-Fail), groups of unknown order, the development of isogeny-based crypto, polynomial commitments, and computational records for factoring, finite field DLP and ECDLP. Finally we answered the question “why work on fancy crypto when we don’t even seem to be able to safely encrypt and sign data?”. Watch the recording to find out more!
The following three panels ran more smoothly and are also highly recommended. I found it particularly interesting to listen to the speculations on future developments in quantum computing in the panel “How long can we safely use pre-quantum ECC?”, but I also recommend “Formal verification of ECC” and “Is SIKE ready for prime time?”
There is also a curated list of talks which is a sort-of “greatest hits” of conference and seminar talks from the last year.
The ECC 2020 conference will take the form of a curated collection of lecture recordings, and four panel discussions. Each panel will feature a number of experts in the area, and there will be an opportunity for audience members watching live to ask questions during the session using Zulip. Each panel is followed by a 30-minute social break on wonder.me. The conference is free; please come along.
To register, email Tanja Lange.
The four panels are:
Wednesday 28th October, 01:00 UTC: Conference opening, and panel on recent trends in ECC. Moderator: Steven Galbraith. Panel: Dan Boneh, Nadia Heninger, Kristin Lauter, Mehdi Tibouchi, and Yuval Yarom.
Wednesday 28th October, 08:00 UTC: Formal verification of ECC protocols. Moderator: Benjamin Smith. Panel: Karthik Bhargavan, Bas Spitters, and Bow-Yaw Wang.
Thursday 29th October, 12:00 UTC: How long can we safely use pre-quantum ECC? Moderator: Francisco Rodríguez-Henríquez. Panel: Sam Jaques, Manfred Lochter, and Michele Mosca.
Thursday 29th October, 19:00 UTC: Is SIKE ready for prime time? Moderator: Alfred Menezes. Panel: David Jao, Christophe Petit, and Nick Sullivan.
PQCrypto 2020 took place online on September 21-23. All sessions were recorded and can be viewed here. The conference had a varied programme across all of post-quantum crypto, including a number of isogeny papers. In this blog post I will discuss the three invited talks. (I was not able to participate in the conference in real-time, as the sessions took place in the middle of the night for me.)
Benjamin Smith “Isogenies: what now, and what next?” Ben’s talk was an overview of isogenies, SIDH and CSIDH. In his words, isogenies are “absolutely weird” but they are gradually becoming “less weird”. He spent time talking about how isogenies are computed, including mentioning his recent result with Bernstein, De Feo and Leroux on computing $\ell$-isogenies in $\tilde{O}(\sqrt{\ell})$ time. He mentioned some open problems, such as the “insanely hard problem” of generating a random supersingular elliptic curve $E$ without knowing a trapdoor that would allow one to compute (for example) the endomorphism ring of $E$. He also stated his view, regarding the square-root result, that “square-root time is not the end of this story and we can probably do better than this”. Regarding what’s next: Ben says we need to better understand the foundations and to speed up isogenies, and to do this we need to bring new brains to these problems.
Frank Wilhelm-Mauch “Quantum computers – state of play and roadmap” The talk gave an excellent overview of the current state of quantum computing, including some of the different models for building a quantum computer and some of the recent computational records for problems in various domains in science. One of the main themes of the talk was that algorithmic progress is limited by the problem of fault-tolerance and lowering errors. If the error rate is too high (meaning > 0.3 percent) then computation seems to be more or less useless. The current challenge is to push error rates well below this threshold. Once that is achieved the community can consider “aggressively scaling” the number of qubits. Frank’s speculation is that quantum computers will stay in the 50-100 qubit range for the near future, while researchers focus on lowering the error. He mentioned that IBM has a roadmap to 1000-qubit hardware in 3 years and hopes to reach a million qubits a little later, but they have not set themselves any error rate targets. The conclusions of the talk: despite continual progress in hardware, “cryptanalysis still can only be addressed with fault-tolerant algorithms” and there is still a “long way to go” to achieve that.
Dustin Moody/NIST “NIST PQC Standardization Update: Round 2 and beyond” Dustin was representing the National Institute of Standards and Technology (NIST) in this talk, giving an overview of how NIST selected the round 2 and round 3 candidates. In the talk, and during the questions afterwards, he addressed the extent to which the NSA has influenced the decision-making process. He stated that “NIST alone makes the decisions” based on their own research and input from the community, and that the NSA feedback did not change any of their decisions. Regarding the “alternate” schemes in the third round (e.g. SIKE), Dustin stressed that these could be standardised in future if they continue to hold up to scrutiny. In terms of the ultimate “winners”, he mentioned that there will be at most one lattice-based KEM and at most one lattice-based signature standardised. So it looks like the final selection will contain relatively few schemes (perhaps only 4).
Some other news updates:
The Raccoon Attack is a side-channel attack on Diffie-Hellman in prime fields in old versions of TLS. The idea of the attack is to reduce the problem to the hidden number problem (HNP), which can be solved using a lattice attack. To quote from the announcement: “The vulnerability is really hard to exploit and relies on very precise timing measurements and on a specific server configuration to be exploitable”. What is interesting for this blog is that the attack works for finite field Diffie-Hellman but seems to be hard to mount on elliptic curve Diffie-Hellman, since the analogous hidden number problem does not have such a good solution. Maybe it is time to revisit elliptic curve hidden number problems? I believe the current state-of-the-art to be the paper “On the Bit Security of Elliptic Curve Diffie-Hellman” by my previous PhD student Barak Shani (published at PKC 2017). Barak’s PhD thesis contains a detailed discussion of hidden number problems in different “access models”.
The Accepted papers at Asiacrypt 2020 are online. The conference will take place online on Dec 7-11. There are lots of isogeny papers and it looks like it will be an interesting programme. Congratulations to Luca De Feo, David Kohel, Antonin Leroux, Christophe Petit and Benjamin Wesolowski for their paper “SQISign: Compact Post-Quantum signatures from Quaternions and Isogenies”, which is one of the three papers to be awarded the title of Best Paper. I will report from the conference at the time.
CRYPTO ran as an online conference this year due to COVID. Most of the sessions were while I was sleeping, but I was able to catch up on some of the sessions I missed by looking at recordings. The conference programme with links to papers and YouTube recordings is here.
The highlight was the powerful and important invited talk Crypto for the People by Seny Kamara. It covered several themes. One theme was to ask the question “Who benefits from the crypto research that we are doing?” Seny argued that mostly our work benefits companies and governments, rather than citizens. This echoed some sentiments of Phil Rogaway’s talk “The Moral Character of Cryptographic Work” from ASIACRYPT 2015. It was fascinating to hear about the low-tech system used by the ANC in the 1980s for secure communication between operatives in South Africa and the leadership in exile in the UK. (Being old enough to remember saving Basic programs from my home computer onto cassette tape using an audio format, it all made sense to me.) The need for secure communication for freedom fighters served to illustrate his argument that at least some of us should work on systems that help the marginalised and disempowered, rather than companies and governments. Another theme of his talk was diversity, both of people and research topics. Seny gave some insight into his experiences as a black immigrant minority in the crypto research community.
The Best Papers were:
Joseph Jaeger and Nirvan Tyagi, “Handling Adaptive Compromise for Practical Encryption Schemes”. Best Paper by Early Career Researchers Award.
Susan Hohenberger, Venkata Koppula and Brent Waters, “Chosen Ciphertext Security from Injective Trapdoor Functions”.
Wouter Castryck, Jana Sotáková and Frederik Vercauteren, “Breaking the decisional Diffie-Hellman problem for class group actions using genus theory”.
This is the paper of most interest to this blog. It is a wonderful paper, and I have already discussed it in this blog post. A well-deserved best paper award.
Christof Beierle, Gregor Leander and Yosuke Todo, “Improved Differential-Linear Attacks with Applications to ARX Ciphers”.
The Rump Session did not contain any talks directly related to ECC. Yvo Desmedt gave a talk “40 years Advances in Cryptology: How Will History Judge Us?” which was a plea to the community to publish more research on cryptanalysis (he also said that “side channels are not real cryptanalysis” and that everyone should read Kahn’s book). There was also a version of the game show “Jeopardy!” but I’d rather not talk about it.
Some papers related to discrete logarithms were:
Fabrice Boudot, Pierrick Gaudry, Aurore Guillevic, Nadia Heninger, Emmanuel Thomé and Paul Zimmermann, “Comparing the difficulty of factorization and discrete logarithm: a 240-digit experiment”.
The paper reports some recent large-scale factoring and finite field DLP experiments using the number field sieve. The results suggest that solving the DLP in a prime field is only about 3 times slower than factoring an integer of the same bit size (previous estimates suggested a bigger gap).
Gabrielle De Micheli, Pierrick Gaudry and Cecile Pierrot, “Asymptotic complexities of discrete logarithm algorithms in pairing-relevant finite fields”.
The paper is about the complexity of index calculus algorithms for the DLP in “medium prime” finite field settings, as arise in pairing-based crypto.
Alex Lombardi and Vinod Vaikuntanathan, “Fiat-Shamir for Repeated Squaring with Applications to PPAD-Hardness and VDFs”.
The paper is about zero-knowledge proofs, but it features a sort of index calculus algorithm. Worth a look if you are curious.
There were plenty of papers on lattices. A common theme was algorithms for algebraic structured lattices. Two papers I want to mention are:
Qian Guo, Thomas Johansson and Alexander Nilsson, “A key-recovery timing attack on post-quantum primitives using the Fujisaki-Okamoto transformation and its application on FrodoKEM”.
The paper shows that the equality check in the Fujisaki-Okamoto transformation should be implemented in constant time for lattice schemes. If not, then a timing attack can be used to mount an active attack based on deducing when the errors start to overflow (and hence learning the error terms in LWE instances).
Not mentioned in their paper, but discussed in the chat thread, is that a similar idea can be used to mount the GPST attack on SIDH if the FO transform is not constant time.
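To make the distinction concrete, here is a toy Python sketch (not the FrodoKEM code): an early-exit byte comparison returns as soon as it finds a differing byte, so its running time leaks the position of the first mismatch, while a constant-time comparison such as Python’s `hmac.compare_digest` does not.

```python
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    # Early-exit comparison: running time depends on where the
    # inputs first differ, which is exactly what a timing attack
    # on the FO equality check measures.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest inspects every byte regardless of
    # where (or whether) the inputs differ.
    return hmac.compare_digest(a, b)
```

Both functions compute the same boolean, but only the second is safe to use on secret-dependent data such as the re-encryption check in the FO transform.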
Koen de Boer, Léo Ducas, Alice Pellet-Mary and Benjamin Wesolowski, “Random Self-reducibility of Ideal-SVP via Arakelov Random Walks”.
The paper shows how to transform an ideal lattice to a random ideal lattice (and hence transform ideal-SVP to the average case) by a combination of continuous (Arakelov class group) and discrete (sparsification) processes. I recommend the pre-conference video by Koen, as it has some really awesome animations. I also like the fact that the approach is analogous to the use of random walks on isogeny graphs to get uniform mixing, as done by Jao, Miller and Venkatesan in their reduction for the ECDLP.
This is a follow-up to my previous post on the Fourteenth Algorithmic Number Theory Symposium (ANTS-XIV).
First I want to announce that the Selfridge Prize for the best submitted paper as judged by the program committee was awarded to Jonathan Love and Dan Boneh for their paper Supersingular Curves With Small Non-integer Endomorphisms. The prize is funded by the Number Theory Foundation.
The prize for the best poster was awarded to Eric Bach and Jonathan Sorenson for Generating Smooth Numbers with Known Factorization Uniformly at Random.
David Jao’s invited talk introduced and surveyed the current state of isogeny-based crypto. In his opinion, signatures are the exciting new frontier in this area.
The other two invited talks for the second half of the conference were Rachel Pries and Andrew Booker. Rachel talked about determining the Shimura datum of families of Abelian varieties coming from cyclic covers of the projective line. Andrew talked about his recent computations on sums of three cubes (in particular the 33 and 42 problems). I highly recommend watching Andrew’s talk for his descriptions of how the result about 33 was announced and his interactions with the press (and a suggestion for the conference T-shirt).
Among the contributed papers, there were several papers that will be of interest to cryptographers.
Daniel J. Bernstein, Luca De Feo, Antonin Leroux and Benjamin Smith. Faster computation of isogenies of large prime degree
The paper gives an improved method to compute isogenies of high degree, using ideas about fast computation of products that go back to Pollard and Strassen.
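As a toy illustration of the Pollard–Strassen idea of organising many multiplications into a balanced tree (the actual algorithm works with polynomials built from points on the curve, which is not shown here), here is a minimal integer product tree in Python:

```python
def product_tree(values):
    # Multiply a list of integers pairwise in a balanced tree.
    # With fast (subquadratic) integer multiplication, balancing
    # the tree beats a linear left-to-right product, because the
    # big multiplications happen between operands of similar size.
    if not values:
        return 1
    while len(values) > 1:
        values = [values[i] * values[i + 1] if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
    return values[0]
```

The same pattern, applied to products of linear polynomials, is what underlies the square-root-of-the-degree speedups.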
Novak Kaluderovic, Thorsten Kleinjung and Dusan Kostic. Cryptanalysis of the generalised Legendre pseudorandom function
The paper gives new algorithms to recover the hidden key K when given an oracle that computes the Legendre symbol of x + K modulo p. This problem goes back several decades in crypto, but recently gained new interest due to applications in multi-party computation.
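For concreteness, here is a minimal Python sketch of the Legendre pseudorandom function (the exact encoding of the symbol as a bit is a convention I have chosen here, not taken from the paper):

```python
def legendre_symbol(a: int, p: int) -> int:
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    ls = pow(a % p, (p - 1) // 2, p)
    return -1 if ls == p - 1 else ls  # 0 when p divides a

def legendre_prf_bit(key: int, x: int, p: int) -> int:
    # One keystream bit of the Legendre PRF: evaluate the Legendre
    # symbol of x + key modulo p, and encode +1 as the bit 1 and
    # everything else as 0.
    return 1 if legendre_symbol(key + x, p) == 1 else 0
```

The key-recovery problem in the paper is: given many input/output pairs of `legendre_prf_bit` for an unknown `key`, recover `key` faster than brute force.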
Thomas Espitau and Paul Kirchner. The nearest-colattice algorithm: time-approximation tradeoff for approx-CVP
The paper gives a “block” algorithm for solving the approximate closest vector problem in a lattice that generalises Babai’s nearest-plane method.
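To recall the baseline being generalised, here is a small exact-arithmetic sketch of Babai’s nearest-plane method in Python (illustrative only; real implementations use floating point and an LLL-reduced basis):

```python
from fractions import Fraction

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(basis):
    # Gram–Schmidt orthogonalisation over the rationals (no normalisation).
    ortho = []
    for b in basis:
        v = [Fraction(x) for x in b]
        for u in ortho:
            mu = dot(v, u) / dot(u, u)
            v = [vi - mu * ui for vi, ui in zip(v, u)]
        ortho.append(v)
    return ortho

def babai_nearest_plane(basis, target):
    """Return a lattice vector close to `target` (approximate CVP)."""
    ortho = gram_schmidt(basis)
    t = [Fraction(x) for x in target]
    result = [Fraction(0)] * len(target)
    # Process basis vectors from last to first, at each step rounding
    # the projection of the residual target onto the Gram–Schmidt
    # vector to choose the nearest translated sublattice ("plane").
    for b, u in reversed(list(zip(basis, ortho))):
        c = round(dot(t, u) / dot(u, u))
        t = [ti - c * bi for ti, bi in zip(t, b)]
        result = [ri + c * bi for ri, bi in zip(result, b)]
    return [int(x) for x in result]
```

The output is always a lattice vector, but only approximately closest; the quality of the approximation depends on how reduced the input basis is, which is the trade-off the nearest-colattice algorithm tunes.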
Carl Bootland, Wouter Castryck and Frederik Vercauteren. On the Security of the Multivariate Ring Learning With Errors Problem
The paper gives methods to solve the search m-RLWE problem and attack cryptosystems based on this problem.
Jean-Sébastien Coron, Luca Notarnicola and Gabor Wiese. Simultaneous diagonalization of incomplete matrices and applications
The paper is about some computations that improve algorithms to solve the approximate common divisor problem and break CLT13 cryptographic multilinear maps.
Daniel E. Martin. Short Vector Problems and Simultaneous Approximation
The paper gives a reduction from the approximate shortest vector problem to the problem of simultaneous Diophantine approximation (the reverse direction to the famous reduction by Lagarias from simultaneous Diophantine approximation to SVP).