EdDSA standardized

A new version of the NIST Federal Information Processing Standard (FIPS) for Digital Signatures has been published. Also see here.

This version includes EdDSA. There are (at least) two notable features of EdDSA. First, it is more closely related to Schnorr signatures than ECDSA. This means it avoids (in my opinion) many of the clunky aspects of ECDSA and allows a much more elegant security analysis. Second, it uses curves in Edwards form. Specifically, the document Recommendations for Discrete Logarithm-based Cryptography: Elliptic Curve Domain Parameters states “this Recommendation includes two newly specified Edwards curves, which provide increased performance, side-channel resistance, and simpler implementation when compared to traditional curves”. The curves include the widely used Edwards25519 curve.

I also take the opportunity to advertise this page which lists applications that use Ed25519.

— Steven Galbraith

Posted in Uncategorized | Leave a comment

Attacks on SIDH/SIKE

You may feel like you are having trouble keeping up with the news on SIDH/SIKE. So am I! I hope this blog post doesn’t instantly become obsolete due to new advances.

To recall, there are now three preprints giving attacks on SIDH:

The first two are parallel independent works that apply a theorem due to Kani to break SIDH in some special cases. One of the special cases was the NIST submission SIKE, meaning that SIKE could be broken easily on a laptop. The most recent paper by Damien Robert is influenced by the previous two works, and extends the attack to work in a much more general case. This now breaks SIDH completely in all cases.

I will summarise Damien Robert’s work below, but first a few other updates.

Now to sketch the idea by Damien Robert. For consistency with my previous blog post I use some of the Castryck-Decru notation.

Recall that in SIDH we have a base curve E_0 and isogenies \phi_A : E_0 \to E_A of degree 2^a and \phi_B : E_0 \to E_B of degree 3^b, together with some auxiliary points. Assume 2^a > 3^b.

The most natural way to break SIDH is to compute \phi_B on points of order 3^b, from which one can then determine \ker( \phi_B ) and SIDH is broken. (This is the goal of the torsion-point attacks by Petit et al., and it is also the idea behind the improvement to the original attack by Rémy Oudompheng and others that I mentioned above.) This is what Damien Robert’s attack does too.

Let c = 2^a - 3^b. Then we can write c as a sum of four squares. In this post I will take the simpler case when all prime factors of the square-free part of c are congruent to 1 modulo 4, in which case we can write c = a_1^2 + a_2^2 where a_1, a_2 are integers. [Note added 15/8/2022 thanks to Daniel Bernstein: To do this we need to factor c, which is polynomial-time for quantum computers or subexponential-time classically. The 4 squares case is polynomial-time classically.] Let M = \begin{pmatrix} a_1 & a_2 \\ -a_2 & a_1 \end{pmatrix}. Then M^t M = (a_1^2 + a_2^2) I_2 = c I_2 where I_2 is the 2 \times 2 identity matrix. Hence M defines an isogeny \alpha : E^2 \to E^2 for any elliptic curve E. Specifically \alpha(X,Y) = ( [a_1]X + [a_2]Y, [-a_2]X + [a_1] Y ). The dual isogeny \hat{\alpha} corresponds to the transpose M^t. We have \hat{\alpha}\alpha = [c] as a map from E^2 to itself. We will apply this map to both E_0 and E_B.
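As a quick sanity check, the two-squares decomposition and the identity M^t M = c I_2 are easy to verify numerically. The following plain-Python sketch uses illustrative toy exponents (a=9, b=5, so c = 512 - 243 = 269), not SIKE's parameters.

```python
# Toy check: write c = 2^a - 3^b as a sum of two squares and verify M^T M = c*I.
# Illustrative parameters (not SIKE's); here c = 512 - 243 = 269 = 10^2 + 13^2.
from math import isqrt

a, b = 9, 5
c = 2**a - 3**b

def two_squares(c):
    """Naive search for a1, a2 with a1^2 + a2^2 = c (a decomposition exists
    iff every prime factor p = 3 mod 4 of c occurs to an even power)."""
    for a1 in range(isqrt(c) + 1):
        rest = c - a1 * a1
        a2 = isqrt(rest)
        if a2 * a2 == rest:
            return a1, a2
    return None

a1, a2 = two_squares(c)
M  = [[a1, a2], [-a2, a1]]
Mt = [[M[j][i] for j in range(2)] for i in range(2)]  # transpose
MtM = [[sum(Mt[i][k] * M[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
print(a1, a2, MtM)   # M^T M should equal c * I_2
```

The same naive search also signals (by returning None) when no two-squares decomposition exists, which is when the four-squares version of the argument is needed.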

The secret isogeny is \phi_B : E_0 \to E_B and we can extend this to a map from E_0^2 \to E_B^2 as \phi_B( X, Y ) = ( \phi_B( X ), \phi_B( Y ) ). Note that \phi_B commutes with \alpha, since \phi_B is a group homomorphism.

We can define an isogeny F on the 4-dimensional Abelian variety {\bf A} = E_0^2 \times E_B^2, by F = \begin{pmatrix} \alpha & \hat{\phi}_B \\ -\phi_B & \hat{\alpha} \end{pmatrix}. The dual isogeny is \hat{F} = \begin{pmatrix} \hat{\alpha} & -\hat{\phi}_B \\ \phi_B & \alpha \end{pmatrix} and one can check \hat{F} F = (c + 3^b) I_4 = 2^a I_4 where I_4 is the identity map (or identity matrix if you want to view these as matrices).
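One can check the identity \hat{F} F = 2^a I_4 in a toy model where the isogenies are replaced by objects obeying the same composition rules: \alpha becomes the matrix M with M^t M = c I_2, and \phi_B, \hat{\phi}_B become commuting scalars t, s with ts = 3^b. This is only a consistency check on the matrix algebra (the real \phi_B is an isogeny between different curves, not a scalar), with the same toy exponents a=9, b=5 as above.

```python
# Toy model of the 4x4 block isogeny F, with the isogenies replaced by
# matrices/scalars that satisfy the same composition rules:
#   alpha  -> M = [[a1, a2], [-a2, a1]]  with  M^T M = c * I_2,
#   phi_B  -> t * I_2,  dual of phi_B -> s * I_2,  with  t * s = 3^b.
# (A simplification: the real phi_B maps between different curves.)
a, b = 9, 5
c = 2**a - 3**b              # 269 = 10^2 + 13^2
a1, a2, t, s = 10, 13, 9, 27 # t * s = 243 = 3^b

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# F = [[alpha, dual(phi)], [-phi, dual(alpha)]] as a 4x4 integer matrix
F    = [[a1,  a2,  s,   0],
        [-a2, a1,  0,   s],
        [-t,  0,   a1, -a2],
        [0,  -t,   a2,  a1]]
# Fhat = [[dual(alpha), -dual(phi)], [phi, alpha]]
Fhat = [[a1, -a2, -s,   0],
        [a2,  a1,  0,  -s],
        [t,   0,   a1,  a2],
        [0,   t,  -a2,  a1]]

print(mat_mul(Fhat, F))      # expect 2^a * I_4 = 512 * I_4
```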

Now, from the auxiliary points in SIDH we know how \phi_B acts on E_0[ 2^a ]. Since \hat{\phi}_B \phi_B = [ 3^b] we can also compute how \hat{\phi}_B acts on E_B[ 2^a ]. Hence we can compute how F acts on {\bf A}[2^a]. Since \ker(F) \subseteq {\bf A}[2^a], this means we can compute \ker(F).

Now, knowing \ker(F) we can use fancy methods to compute isogenies on high-dimensional Abelian varieties (and Damien Robert is a leading expert on this topic) and hence compute F on any point. So, we compute F on a basis for {\bf A}[ 3^b] and hence compute \phi_B on E_0[ 3^b] and break SIDH.

The full attack works with an 8-dimensional Abelian variety {\bf A} = E_0^4 \times E_B^4. I am not aware of any implementation of the attack as yet, but I look forward to seeing one. The attack can also be set up to compute the isogeny \phi_A instead, simply by brute-forcing an extra small 3-power isogeny so that 3^b > 2^a.

As with my earlier blog post, I want to emphasise that this attack is only possible due to decades of research on computing isogenies of general Abelian varieties. This work was not primarily motivated by cryptographic applications but by the drive to generalization in pure mathematics.

— Steven Galbraith


Breaking supersingular isogeny Diffie-Hellman (SIDH)

The paper An efficient key recovery attack on SIDH by Wouter Castryck and Thomas Decru is a major breakthrough in isogeny cryptanalysis. This relates to the SIDH protocol by Jao and De Feo, and the NIST round 4 finalist SIKE.

I do not have time to explain all the technical details, but here are some quick answers to your burning questions.

  1. Is the result true?

There is no reason to doubt this. Code is provided (though I haven’t run it myself) and I understand the SIKE team has confirmed the attack. Having solved the Microsoft $IKEp217 challenge, Castryck and Decru are eligible to claim the $50,000 USD prize.

Some aspects of the attack and its complexity analysis are heuristic, but that is normal and acceptable for cryptanalysis. The experimental results show that the attack is very practical.

  2. How does the attack work?

The attack exploits the fact that SIDH has auxiliary points and that the degree of the secret isogeny is known. The auxiliary points in SIDH have always been an annoyance and a potential weakness, and they have been exploited for fault attacks, the GPST adaptive attack, torsion point attacks, etc.

Let E_0 be the base curve and let P_0, Q_0 \in E_0 have order 2^a. Let E, P, Q be given such that there exists an isogeny \phi of degree 3^b with \phi : E_0 \to E, \phi(P_0) = P, and \phi(Q_0) = Q.

A key aspect of SIDH is that one does not compute \phi directly, but as a composition of isogenies of degree 3. In other words, there is a sequence of curves E_0 \to E_1 \to E_2 \to \cdots \to E connected by 3-isogenies.

Essentially, like in GPST, the attack determines the intermediate curves E_i and hence eventually determines the private key. At step i the attack does a brute-force search of all possible E_i \to E_{i+1}, and the magic ingredient is a gadget that shows which one is correct.

(The above is over-simplified, the isogenies E_i \to E_{i+1} in the attack are not of degree 3 but of degree a small power of 3.)
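Schematically, the attack is an outer loop that recovers the secret walk one step at a time, with the Kani-based gadget acting as a yes/no oracle on each guess. The toy Python sketch below shows only this outer loop; the oracle here is a stand-in that simply compares against the secret, in place of the real genus-2 computation.

```python
# Schematic outer loop of the attack: recover a secret walk of ternary steps
# one digit at a time, given an oracle that says whether a guessed prefix is
# correct. The oracle below is a stand-in for the real Kani/genus-2 gadget;
# it simply compares against the secret.
secret_path = [2, 0, 1, 1, 2, 0, 2, 1]      # stand-in for Bob's 3-isogeny walk

def oracle(prefix):
    """Stand-in for the magic gadget: is this prefix of the walk correct?"""
    return prefix == secret_path[:len(prefix)]

def recover():
    path = []
    while len(path) < len(secret_path):
        for step in range(3):               # brute-force the next 3-isogeny
            if oracle(path + [step]):
                path.append(step)
                break
    return path

print(recover())   # recovers the secret path with at most 3 oracle calls per step
```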

  3. What is this magic ingredient?

It is a theorem by Ernst Kani about reducible subgroups of abelian surfaces.

  4. Is there a simple way to explain the magic ingredient?

Nope. Go learn about Richelot isogenies and abelian surfaces.

  5. What does this mean for the NIST round 4 candidate SIKE?

The scheme specified in the SIKE NIST submission is broken.

  6. Can SIDH be fixed?

Already on Twitter there were several suggestions on how to possibly avoid the attack. For example, follow Peter Kutas @kutasp and Boris Fouotsa @FouotsaB for some threads.

To be able to use the magic ingredient the attacker must efficiently compute a number of isogenies with degrees of the form 2^i-3^j for various 1 \le i \le a, 1 \le j \le b and it is not clear how to do this if we are not close to a curve with complex multiplication by an order of small discriminant. So one hope is that SIDH can be saved by choosing a base curve E_0 with unknown endomorphism ring (this might require some kind of public setup).
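To get a feel for the constraint, here is a toy enumeration (plain Python, small exponents only) of which values c = 2^i - 3^j are sums of two squares, i.e. have no prime factor p \equiv 3 \pmod 4 to an odd power.

```python
# Which values c = 2^i - 3^j are sums of two squares? (A positive integer is
# a sum of two squares iff every prime p = 3 mod 4 divides it to an even
# power.) Toy enumeration for small exponents only.
from math import isqrt

def is_sum_of_two_squares(c):
    for a1 in range(isqrt(c) + 1):
        rest = c - a1 * a1
        s = isqrt(rest)
        if s * s == rest:
            return True
    return False

good = [(i, j, 2**i - 3**j)
        for i in range(1, 12) for j in range(1, 8)
        if 2**i - 3**j > 0 and is_sum_of_two_squares(2**i - 3**j)]
print(good[:5])   # [(2, 1, 1), (3, 1, 5), (4, 1, 13), (5, 1, 29), (5, 3, 5)]
```

For example 2^4 - 3^2 = 7 is excluded, since 7 \equiv 3 \pmod 4.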

The paper suggests that variants of SIDH such as B-SIDH (using primes other than 2 and 3) should be attackable. So it seems that changing the primes will not prevent the attack.

  7. Does it break CSIDH or other isogeny cryptosystems?

No. The attack very specifically relies on two things: (1) that the degree of the secret isogeny is known; (2) the attacker is provided with the auxiliary points. Hence the attack does not seem to break CSIDH or SQISign.

  8. Does it break ECC?

No. The attack assumes the degree of the isogeny is known, and that is exactly the secret key in ECC. There is no particular reason to think attacks on SIDH lead to attacks on ECC.

  9. Why was it only discovered now?

The theoretical foundations of the attack are described in a paper by Kani from 1997 (and also some useful tools are in a paper by Howe, Leprévost and Poonen from 2000). So in some sense the attack could have been noticed at any time. But a key point is that this is not an attack one is going to discover by thinking only about isogenies between elliptic curves. The attack deeply exploits Richelot isogenies and products of elliptic curves and I doubt the attack can be expressed meaningfully without that language. This is the power of generalisation and extension. So what was necessary to find the attack was to have a community of scholars studying “esoteric” subjects like extending isogeny crypto to abelian surfaces.

  10. Implications for PQ crypto

There is no doubt that this result will reduce confidence in isogenies. The sudden appearance of an attack this powerful shows that the field is not yet mature. The relatively recent attack by Ward Beullens on Rainbow has a similar impact on multivariate crypto. The correct response to this is not to attempt to minimise the impact, nor to reflexively declare the subject dead. Instead, we should keep our minds open and let the mathematicians work out the implications, wherever they lead. Personally, I can’t wait to see what Wouter and Thomas and all the isogenists come up with next!

— Steven Galbraith


Hertzbleed Attack

I woke up to the news of a new form of timing-side-channel attack based on the dynamic frequency scaling of modern x86 processors. This is the Hertzbleed attack, which will be presented at the USENIX Security Symposium in Boston in August. The authors are Yingchen Wang and Hovav Shacham from the University of Texas at Austin, Riccardo Paccagnella and Elizabeth Tang He and Christopher Fletcher from the University of Illinois Urbana-Champaign, and David Kohlbrenner from the University of Washington.

Interestingly for this blog, they target the supersingular isogeny key exchange protocol SIKE. SIKE is implemented in constant-time using standard timing-attack countermeasures from traditional ECC, such as the Montgomery ladder. But the frequency scaling feature allows the timing attack to be mounted even against supposedly constant-time code. The attack is a form of “zero value attack”. See also Patrick Longa’s comments on the NIST PQC mailing list.

The key property of SIKE that is exploited is that the protocol message is a triple (E,P,Q) where E is an elliptic curve and P,Q are points. The points P,Q are a troublesome aspect of SIDH/SIKE, and are the cause of the adaptive attack by Galbraith-Petit-Shani-Ti, the torsion point attacks by Petit and de Quehen-Kutas-Leonardi-Martindale-Panny-Petit-Stange, and so on. In certain contexts, these attacks are prevented by the Fujisaki-Okamoto transform, but this doesn’t help in the context of side-channel attacks.

The specific attack on SIKE given by the Hertzbleed authors is also of this form. It involves maliciously choosing the points P,Q in a key-dependent way to learn information. As explained by Patrick Longa in his post above, there is a countermeasure to such attacks.

What does this mean for ECC in general? At the moment the timing channel does not seem to be fine-grained enough to attack constant-time elliptic curve systems in use. The paper says:

“Despite its theoretical power, it is not obvious how to construct practical exploits through the frequency side channel. This is because DVFS updates depend on the aggregate power consumption over millions of CPU cycles and only reflect coarse-grained program behavior. Yet, we show … that some cryptographic primitives admit amplification of single key bit guesses into thousands of high- or low-power operations, enough to induce a measurable timing difference.”

But we know that attacks only get better. So it will be interesting and important to see if this approach can be used to attack current ECC systems in practice.

— Steven Galbraith


Eurocrypt 2021 – Zagreb, Zoom and Zulip

Eurocrypt 2021 was held as a hybrid conference, with some participants in-person in Zagreb and some online. I was one of the ones joining the conference by Zoom and Zulip. As always in this blog I focus on talks and news most relevant for elliptic curve fans.

For each accepted paper there was a short (20-30 minutes) video available in advance of the conference on the conference website, and also a short live Q&A session that is immortalised on the IACR youtube channel.

There were two invited talks:

Craig Gentry’s talk “A Decade (or So) of Fully Homomorphic Encryption” (FHE) gave an overview of work on FHE, the current main applications of it, and possible future directions. He listed privacy preserving genome association, neural nets, and private information retrieval as three major application areas. He classified FHE schemes into four “generations”, the fourth of which is the CKKS (Cheon, Kim, Kim, Song) approach to FHE for floating point numbers. He gave an abstract formulation of FHE based on the Rothblum generic approach and showed how this seems to inevitably lead to several of the main schemes. Several times he challenged the community to construct FHE schemes not based on lattice assumptions, stating his conviction that lattices and FHE are not “soulmates” (but he is not willing to die in a duel over this).

Sarah Meiklejohn cleverly tricked the audience into attending a talk on blockchain by using the title “An Evolution of Models for Zero-Knowledge Proofs”. The talk covered a wide range of applications and aspects of zero knowledge proofs, in particular a detailed discussion of the dependence on a common random string (CRS) and various trust models relating to the generation of the CRS.

The papers “Non-Interactive Zero Knowledge from Sub-exponential DDH” by Jain and Jin, “On the (in)security of ROS” by Benhamouda, Lepoint, Loss, Orrù and Raykova, and “New Representations of the AES Key Schedule” by Leurent and Pernot, were chosen for the Best Paper Awards. The first of these explicitly mentions elliptic curves: They require the sub-exponential hardness of DDH, which is only plausible for (non-pairing) elliptic curves. The paper is about Statistical NIZK arguments and Zaps.

Moving on to the rest of the conference program, I mention the following papers.

“Analysing the HPKE Standard” by Alwen, Blanchet, Hauck, Kiltz, Lipp and Riepel. The paper introduces the notion of “nominal group” to handle groups of non-prime order (such as Curve25519 and Curve448).

“Compact, Efficient and UC-Secure Isogeny-Based Oblivious Transfer” by Lai, Delpech de Saint Guilhem, and me. The paper shows how a special property of CSIDH (that (a * E)^t = a^{-1} * E^t) can be used to construct a two-round oblivious transfer (OT) scheme. The paper has a fully UC-secure 3-round version. In the live Q&A session, Lai announced that the paper has a “fixable bug” that means the fully secure version requires 4 rounds. Keep an eye on eprint for an updated version.

“One-way functions and malleability oracles: Hidden shift attacks on isogeny-based protocols” by Kutas, Merz, Petit and Weitkämper. The aim of the paper is to study a group action (one can also view it as a variant of the hidden number problem) in the SIDH setting, so that the Kuperberg-type hidden shift quantum algorithm (as applies to CSIDH) can also be applied to SIDH. The paper shows how to implement this in the case of overstretched SIDH parameters. These overstretched SIDH parameters are already known to be broken due to an attack by Petit. The method of the paper currently has no impact on the security of SIDH.

“Pre-Computation Scheme of Window \tau-NAF for Koblitz Curves Revisited” by Yu and Xu is about efficient discrete-log crypto using elliptic curves. Sadly there were technical problems during the live session and so Yu was unable to answer questions. Since \tau-NAF window methods have been studied for decades one would not expect major progress, but this paper gives a substantial speedup over the previous state-of-the-art. This is done by giving improved operation counts for computing P \pm Q on Kohel’s \mu_4-model of elliptic curves, and by organising the precomputation stage for the window method more effectively. Combined, these methods potentially allow windows of width as large as w=7.
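For readers unfamiliar with window methods, the sketch below computes the ordinary integer width-w NAF, the integer analogue of the \tau-NAF recoding used for Koblitz curves (the real \tau-NAF works in \mathbb{Z}[\tau] and is not shown here).

```python
# Integer width-w NAF (the integer analogue of the tau-NAF used for Koblitz
# curves): every nonzero digit is odd with |digit| < 2^(w-1), and any two
# nonzero digits are separated by at least w-1 zeros. Fewer nonzero digits
# means fewer point additions in scalar multiplication.
def wnaf(k, w):
    digits = []
    while k > 0:
        if k % 2 == 1:
            d = k % (1 << w)
            if d >= (1 << (w - 1)):
                d -= 1 << w        # choose the signed residue
            k -= d
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits                  # least significant digit first

digits = wnaf(314159, 4)
# Reconstruct k = sum(d * 2^i) to check correctness:
print(sum(d << i for i, d in enumerate(digits)))   # 314159
```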

“Sieving for twin smooth integers with solutions to the Prouhet-Tarry-Escott problem” by Costello, Meyer and Naehrig is about choosing primes suitable for efficient implementation of B-SIDH and SQI-Sign. The problem is to construct primes p=2m+1 such that m and m+1 are very smooth.
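A toy version of the search, with tiny parameters far below anything cryptographic: if m and m+1 are both B-smooth and p = 2m+1 is prime, then p-1 = 2m and p+1 = 2(m+1) are both B-smooth.

```python
# Toy twin-smooth search: find B-smooth pairs (m, m+1) with p = 2m + 1 prime,
# so that both p - 1 = 2m and p + 1 = 2(m + 1) are B-smooth.
# Tiny parameters only; real searches use sieving and clever constructions.
def is_smooth(n, B):
    for q in range(2, B + 1):
        while n % q == 0:
            n //= q
    return n == 1

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

B = 5
hits = [2 * m + 1 for m in range(2, 10**4)
        if is_smooth(m, B) and is_smooth(m + 1, B) and is_prime(2 * m + 1)]
print(hits)   # [5, 7, 11, 17, 19, 31]
```

For example p = 31 arises from the 5-smooth pair (15, 16): p - 1 = 30 and p + 1 = 32 are both 5-smooth.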

“Delay Encryption” by Burdges and De Feo has the best pre-recorded video — Watch it now! (At least, the first 5 minutes.) The paper is about delay functions based on sequential computation of isogenies.

“On Bounded Distance Decoding with Predicate: Breaking the Lattice Barrier for the Hidden Number Problem” by Albrecht and Heninger. The paper is about lattice algorithms for bounded distance decoding (BDD) by reducing to unique-SVP using the embedding technique and then applying enumeration or sieving. The additional feature is the use of a predicate that identifies the unique desired solution. This predicate naturally arises in problems like the hidden number problem (HNP) where the hidden number is the solution to an instance of the ECDLP. The main motivation of the paper is side-channel attacks recovering ECDSA private signing keys from known nonce bits.

“Classical vs Quantum Random Oracles” by Yamakawa and Zhandry discussed separations between the random oracle model (ROM) and the quantum random oracle model (QROM). They show a signature scheme that is secure in the ROM but insecure in the QROM, but as is too often the case with separations it is an artificial scheme that outputs the private key when a query is made on a certain type of input. The paper then gives positive results on Fiat-Shamir signatures and Full-Domain-Hash signatures (and more).

The Rump Session was entertaining. For readers of this blog the most interesting announcement was about the solution to the $5000 $IKE challenge. The talk was given by Craig Costello, who is one of the founders of the challenge. The challenge was solved by Aleksei Udovenko and Giuseppe Vitto. Interestingly, their approach uses the classic meet-in-the-middle approach (as mentioned in the original SIDH paper by Jao and De Feo) rather than the van Oorschot and Wiener (vOW) approach. The classic meet-in-the-middle approach is a time-memory tradeoff and so requires very large storage, whereas the vOW method is a low storage random walk method (but with higher time complexity). A paper is now on eprint that explains the computation and a number of optimisations. The $50,000 challenge is still waiting to be claimed!
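To see the tradeoff in rough numbers, here are simplified folklore cost models (constants and log factors dropped; the walk length e and memory budget are illustrative, not the challenge's actual parameters): meet-in-the-middle over the set of N = 2^{e/2} half-walks costs about N time and N memory, while vOW golden-collision search costs about N^{3/2}/\sqrt{w} time with memory w.

```python
import math

# Simplified folklore cost models (constants dropped) for recovering a secret
# 2-isogeny walk of length e: the meet-in-the-middle set has N = 2^(e/2)
# half-walks. Parameters below are illustrative, not the challenge's.
def mitm_cost(e):
    """Classic meet-in-the-middle: ~N time AND ~N memory."""
    N = 2.0 ** (e / 2)
    return N, N

def vow_cost(e, w):
    """van Oorschot-Wiener golden-collision search with memory w:
    ~N^(3/2)/sqrt(w) time, memory w."""
    return 2.0 ** (0.75 * e) / math.sqrt(w), w

e = 88
t_mitm, m_mitm = mitm_cost(e)
t_vow, m_vow = vow_cost(e, 2.0 ** 20)         # modest memory budget
print(math.log2(t_mitm), math.log2(m_mitm))   # 44.0 44.0
print(math.log2(t_vow), math.log2(m_vow))     # 56.0 20.0
```

With memory equal to the full MITM table (w = N) the two models coincide; the point of vOW is to trade memory for time.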

— Steven Galbraith


Report by Luca de Feo on the 3rd PQC Standardization Conference

The 3rd PQC Standardization Conference, organized by NIST, took place online from June 7 to 9, featuring a mix of live talks, pre-recorded talks, and panels. The oral exchanges were complemented by a text-based forum, provided by an app well known for its lack of end-to-end encryption, where some topics were eventually debated at length. Slides for the talks will be available in a few days, and video recordings in a few weeks. In the meantime, I will give a personal account of the conference based exclusively on my recollections. I took no notes, and I was often preparing or eating dinner at the same time, so nothing of what I will report should be taken as an established truth.

Kicking off the conference, NIST gave some interesting bits of information on the status of the selection process and the future. The timeline for the 3rd round stays put: NIST expects to announce the selected standards sometime between the end of 2021 and the beginning of 2022, as well as the alternates that will move to Round 4. Two announcements stirred more emotions in the audience: NIST reported on the difficulties of acquiring patents that are perceived to hinder standardization of some candidates, and specifically pointed to a statement recently published by CNRS (archived). Several researchers with links to French academia have already expressed their disappointment with CNRS’ strategy. The second was a confirmation of a possibility that NIST had already hinted at previously: roughly 6 months after the end of the 3rd round, NIST plans to reopen the process to submissions, specifically seeking to add more variety to signatures. The audience understood that NIST will not accept new KEM candidates in this phase. Given the recent progress in designing post-quantum signature schemes, including some that received accolades at AsiaCrypt, this announcement should interest the readers of this blog. In the same spirit, NIST doubled down on the possibility of standardizing SPHINCS+ at the end of the 3rd round.

Throughout the three days, each of the finalists and alternate finalists had a 15-minute slot to present their updates for the 3rd round. In most cases, there were minimal or no updates. PicNic appears to be the most notable exception, with important changes to the structure of the LowMC block cipher. Rainbow and GeMSS had some explaining to do, in response to recent advances in cryptanalysis, and GeMSS had to drop some parameters. Vadim Lyubashevsky and Dan Bernstein possibly gave the most opinionated talks; I recommend watching both when they are available.

Several contributed talks reported on various aspects of post-quantum cryptography. Lattices had the greatest share; I especially enjoyed the talks by Thomas Espitau and Yu Yang on variants of Falcon… or maybe was it the excellent Château Latour I was having at the same time? I also enjoyed the “Applications” session, which opened my eyes to how difficult it is to put any of the PQC candidates in constrained environments such as smartcards and IoT.

Of particular interest to the readers of this blog should be the three contributed talks on isogeny-based cryptography:

  • Péter Kutas (joint work with Christophe Petit) gave an excellent, if somewhat time-constrained, survey talk on several different “torsion point attacks” against SIDH and variants, which have previously appeared in this blog. The take-away message is that SIDH, SIKE and B-SIDH are well protected against all of them, be it because of the Fujisaki–Okamoto transform, or because of their intrinsic limitations, but the broader space of generalizations of SIDH a cryptographer might imagine is somewhat limited by these attacks, as it has already been repeatedly shown. I would certainly like to see more research in this promising direction, which has applications beyond cryptanalysis.
  • Élise Tasso (joint work with Nadia El Mrabet, Simon Pontié and myself) presented an in-the-lab confirmation of a fault-injection attack on SIDH first proposed by Ti. The attack is alarmingly easy to mount (ok, we used equipment worth 40k€, but that’s only because we’re rich), but at the same time:
    • It requires multiple repetitions of key generation with the same secret key, something that should never happen in a correct implementation of SIDH or SIKE;
    • It appears to be difficult to exploit in presence of key compression;
    • It has a countermeasure so simple and cheap, that it may as well be included by default in the reference code.

The old-timers of this blog will not be surprised to learn that the best talk of the conference was delivered by Craig Costello. In only 5 minutes, Craig pretended to use SageMath code to generate pairs of toy SIDH public keys (one for Alice, one for Bob), discard the secrets, and (clumsily fail to) upload the public keys to a GitHub repo. Then, he announced that Microsoft is offering $5,000 for the solution of the smaller instance, named $IKEp182, and $50,000 for that of the larger instance, named $IKEp217. The prize money matches what the SIKE team estimates to be the material cost of breaking the instances, so think twice before reallocating your Bitcoin mining resources.

If you believe this stunt, then be our guest and start cracking, but don’t come whining when some mysterious bounty catcher from Australia claims the big prize. For now, I can only observe that everybody seems to trust Microsoft and no one has even forked the GitHub repo (but, do you trust GitHub, anyway?). As a public service to the community, here is the SHA-1 of the latest commit, dated June 9, 2021: 72dc1cb50d5a78fee605757e2f33043b2f36f9b4, and here is the SHA-512 of the contents of the repo (excluding .git and .gitignore), as of June 9:

$ git clone https://github.com/microsoft/SIKE-challenges
$ cd SIKE-challenges/
$ cat * | sha512sum
d2d6c6b7627a0cdbdfc98515f0e99c3387f08ce06196f767b65541442d36fd5978a49761f6c8e6c4704c87bad3629a604ad176517a966023780288cc2c1e3ae9 -

Anyway, the video recording will be available in a few days, and you will all get a chance to have a close look at Craig’s sleight of hand. Be on the watch for video stitching by “those cheeky devils at the NSA”!

— Luca de Feo


Some recent papers in isogeny crypto

There have been quite a few papers on isogeny crypto posted in the last few months. Here is a brief summary of some of them. I thank the members of my research group for fruitful discussions about these papers.

  1. Improved torsion point attacks on SIDH variants by Victoria de Quehen, Péter Kutas, Chris Leonardi, Chloe Martindale, Lorenz Panny, Christophe Petit and Katherine E. Stange.

This is a greatly revised and expanded version of an earlier paper by a subset of the authors. I am writing about the March 2021 version.

The paper builds on an idea of Petit (published at ASIACRYPT 2017) to exploit the fact that SIDH gives the image of torsion points. To be precise, suppose E_0 is an elliptic curve over \mathbb{F}_p with known endomorphism ring (for simplicity let’s take j(E_0) = 1728). Let E_1 be an elliptic curve such that there is an isogeny \phi : E_0 \to E_1 of degree A. Suppose we are also given \phi(P), \phi(Q) where P, Q generate the subgroup of points of order B on E_0. The attacker wants to know \phi. The general approach is to choose a suitable endomorphism \theta on E_0 such that the isogeny \phi \circ \theta \circ \hat{\phi} + d from E_1 to itself has degree divisible by B. One can then compute this isogeny and hence work back to determine \phi. Two specific technical improvements in this paper over Petit’s work are the use of the dual isogeny and the Frobenius map.
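The arithmetic heart of the approach can be illustrated with a toy congruence. In the simplest trace-zero case, deg(\phi \circ \theta \circ \hat{\phi} + [d]) = A^2 \deg(\theta) + d^2, and the attacker needs this divisible by B. The sketch below searches for such \theta = [x] \circ \iota (so \deg(\theta) = x^2) and d; the toy modulus B = 5^4 is chosen so that -1 is a square modulo B (for SIDH's B a power of 3 one needs \theta from the full endomorphism ring, not just \mathbb{Z}[\iota], which is the setting of the actual attacks). All sizes are illustrative.

```python
# Toy version of the congruence behind the torsion-point attacks: for a
# trace-zero endomorphism theta, deg(phi o theta o phi_hat + [d])
# = A^2 * deg(theta) + d^2, and the attack needs this divisible by B.
# Here theta = [x] o iota, so deg(theta) = x^2, and we search small x, d.
# B = 5^4 is a toy modulus with -1 a square mod B; sizes are illustrative.
A, B = 2**8, 5**4

def find_theta_d(A, B, bound=200):
    """Naive search for x, d with A^2 * x^2 + d^2 = 0 (mod B)."""
    for x in range(1, bound):
        for d in range(1, bound):
            if (A * A * x * x + d * d) % B == 0:
                return x, d
    return None

x, d = find_theta_d(A, B)
print(x, d, (A * A * x * x + d * d) // B)   # the degree splits off a factor B
```

Here 256^2 \cdot 2^2 + 59^2 = 265625 = 425 \cdot 625, so the constructed endomorphism of E_1 has degree divisible by B, which is what lets the attacker evaluate it via the known B-torsion images.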

The paper contains a number of results, but the main headline result is to give a polynomial-time attack on “over-stretched” SIDH, where B > pA (Petit’s original paper needed the stronger condition B > A^4). This is an important result, but it has no impact on standard implementations of SIDH (such as the SIKE submission to NIST), which have A \approx B \approx \sqrt{p}. However, the paper is a warning about non-standard variants of SIDH, and the paper discusses several such situations.

  2. The SQALE of CSIDH: Square-root Vélu Quantum-resistant isogeny Action with Low Exponents by Jorge Chávez-Saab, Jesús-Javier Chi-Domínguez, Samuel Jaques and Francisco Rodríguez-Henríquez.

The CSIDH approach to isogeny crypto is very appealing for crypto constructions and has attracted a lot of interest. However, it can be attacked using Kuperberg’s quantum hidden-shift algorithm. Several works, in particular the two EUROCRYPT 2020 papers by Peikert and Bonnetain-Schrottenloher, have seriously challenged the proposed CSIDH parameters and suggested they do not meet the minimum required post-quantum security levels. So is CSIDH doomed?

The paper analyses a natural approach to rescue CSIDH, by using a much larger prime (and hence much larger class group) but by choosing private keys from a small subset of the class group, namely the ideals of the form \prod_i \ell_i^{e_i} with |e_i| \le m for small m. (A similar proposal was made by myself and Luca De Feo in our SeaSign paper, in the context of lossy keys.)
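The size bookkeeping behind this trade-off is simple: the keyspace has (2m+1)^n elements, so its bit-size is n \log_2(2m+1). A back-of-the-envelope in Python (the numbers n = 74, m = 5 are the original CSIDH-512 choices; the rest is illustrative):

```python
# Back-of-the-envelope for the "low exponents" trade-off: private keys are
# exponent vectors (e_1, ..., e_n) with |e_i| <= m, so the keyspace has
# (2m + 1)^n elements. Illustrative numbers, not the SQALE paper's.
import math

def keyspace_bits(n, m):
    return n * math.log2(2 * m + 1)

# n = 74 small primes and m = 5 are the original CSIDH-512 choices:
print(round(keyspace_bits(74, 5)))    # ~256 bits
# With m = 1 (exponents in {-1, 0, 1}) many more primes are needed
# for the same keyspace size:
print(math.ceil(256 / math.log2(3)))  # 162 primes
```

The larger n forces a larger prime p, which is exactly the source of the 4000-bit sizes discussed below.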

The paper is mostly about Kuperberg’s quantum algorithm and its analysis. The paper gives arguments that suggest Kuperberg’s algorithm is not applicable to this variant of the problem. The analysis seems plausible, though I am not an expert.

The big question is whether this saves CSIDH. For the proposed parameters, the prime p is at least 4000 bits, so that protocol messages in key exchange are now at least 4000 bits. This is much worse than the originally suggested 512 bits. The timings are of the order of a few seconds for each operation, which again is much worse than desired. In conclusion, CSIDH may be saved, but it does lose some of its advantages over lattices (e.g., small keys and messages).

  3. One-way functions and malleability oracles: Hidden shift attacks on isogeny-based protocols, by Péter Kutas, Simon-Philipp Merz, Christophe Petit and Charlotte Weitkämper (accepted to EUROCRYPT 2021).

This paper relates to both the papers already discussed. It constructs a group action in the SIDH setting, which opens the door to a Kuperberg-type sub-exponential quantum attack on SIDH. Again let E_0 : y^2 = x^3 + x be the elliptic curve over \mathbb{F}_p with j(E_0) = 1728. Let \iota(x,y) = (-x, iy) be an endomorphism of E_0. Let G be a certain multiplicative subgroup of \mathbb{Z}[\iota]. We need G to act on some set. Let L be a subgroup of E_0 of order A. The action of group element \theta=a+b\iota \in G is defined to be E_0 / \theta(L). So G is acting on a set of elliptic curves (essentially a set of j-invariants). Computing the group action is easy when L is known, but the difficulty is to compute the group action on the challenge curve E_1 = E_0/K in the SIDH protocol (when K is not known). To do this, the authors use torsion point information, as already discussed in item 1 above. So suppose we are given E_0, E_1, \phi(P), \phi(Q) where \phi : E_0 \to E_1 has kernel K of order A and where P, Q generate the subgroup of order B. The attack requires B > p A^4. Since SIDH with such parameters can already be broken in classical polynomial time by Petit (or the first paper discussed above), one would not use Kuperberg’s algorithm in this case. Hence this paper does not weaken SIDH. Instead the merit of the paper is to explain how group actions can be introduced in the SIDH setting, which contradicts the conventional wisdom that “SIDH is not based on group actions”.

— Steven Galbraith


SQISign

This post is about SQISign, an exciting post-quantum signature scheme based on isogenies. The blog post is intended for people who understand SIDH well, but are not experts at quaternions and Eichler orders. I do not claim to explain (or understand) all the details. I am mainly trying to give an overview of what are the main technical achievements of the SQISign authors.

First some background. Previously there were three isogeny-based signature schemes. A scheme based on SIDH was given by Yoo, Azarderakhsh, Jalali, Jao and Soukharev. The basic idea is simple: Given a public key (E_0, E_A) the prover wants to prove that they know the secret isogeny \tau : E_0 \to E_A. To do this, as in SIDH, commit to an isogeny \psi : E_0 \to E_1 and the value j(E_2), where \gcd( \deg(\tau), \deg(\psi)) = 1 and E_2 = E_0/\langle G_A, G_1 \rangle where G_A = \ker( \tau ) and G_1 = \ker(\psi). The verifier sends a single bit, and the prover reveals either G_1 and its image on E_A, or \psi(G_A).

A variant of this scheme was proposed by Galbraith, Petit and Silva. Their paper also included a completely different (and more complicated) scheme based on quaternions and endomorphism rings. Thirdly, the SeaSign signature scheme by De Feo and Galbraith is based on CSIDH. In all three cases, the basic form is a 3-move identification protocol (Sigma protocol) with single bit challenges. The SeaSign scheme manages to increase the challenge size by using a Merkle tree and some other ideas. But even with SeaSign, it is necessary to repeat the scheme a number of times in parallel to ensure that the probability a forger can guess the challenge is negligible.
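With single-bit challenges, a cheating prover can answer correctly with probability 1/2 per round, so reaching a soundness error of 2^{-\lambda} forces \lambda parallel repetitions; this repetition cost is exactly what SQISign avoids. A minimal sketch of the count (the function name is my own):

```python
import math

def rounds_needed(security_bits, challenge_bits=1):
    """Parallel repetitions needed so that a forger who guesses every
    challenge in advance succeeds with probability at most 2**(-security_bits)."""
    return math.ceil(security_bits / challenge_bits)
```

For 1-bit challenges and 128-bit soundness this gives 128 repetitions, while a single round suffices once the challenge space itself has at least 128 bits.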

The fundamental problem in isogeny signatures has been to create a scheme that has an exponentially large challenge set, so that the scheme doesn’t have to be repeated. SQISign achieves this in an ingenious way, by using the Kohel-Lauter-Petit-Tignol (KLPT) algorithm.

The idea is simple enough at first sight. Again we want to prove knowledge of the secret isogeny \tau : E_0 \to E_A. The commitment is a curve E_1 such that the prover/signer knows an isogeny \psi : E_0 \to E_1. However, instead of a single bit challenge, the challenge is now an isogeny \phi : E_1 \to E_2. The signer/prover, knowing the secret key, can compute the isogeny \sigma' = \phi \circ \psi \circ \hat{\tau} : E_A \to E_2. However it is insecure to publish this isogeny, since it would leak the kernel of \hat{\tau} and hence reveal the private key. Instead, we’d like to use the KLPT algorithm to create an isogeny \sigma : E_A \to E_2 which is “independent” of \sigma' (this is exactly what KLPT does). If there was an efficient way to do this then we’d have a cool signature scheme. See the below figure, taken from the SQISign paper.

There are two big problems with this idea. The first is that we can’t run the KLPT algorithm on E_A. The KLPT algorithm relies on the norm in the quaternion order having a very special form that makes it possible to reduce solving norm equations to solving binary (two variable) quadratic forms (using Cornacchia’s algorithm). The endomorphism ring of E_A is not expected to be so well-behaved. So we can’t run KLPT with E_A. The second problem is that an attacker could easily forge signatures by choosing E_1 such that the attacker knows an isogeny \phi' : E_A \to E_1. Then, given the challenge \phi : E_1 \to E_2, the forger knows an isogeny from E_A to E_2, and could win.
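Cornacchia’s algorithm solves x^2 + d y^2 = m by running the Euclidean algorithm on m and a square root of -d modulo m, stopping at the first remainder below \sqrt{m}. A sketch (the modular square root is found by brute force here, so this version is only suitable for toy m):

```python
import math

def cornacchia(d, m):
    """Find nonnegative integers (x, y) with x*x + d*y*y == m, or None.
    Toy version: brute-forces the square root of -d mod m."""
    for r in range(1, m):
        if (r * r + d) % m != 0:
            continue  # r is not a square root of -d mod m
        # Euclidean descent: the first remainder below sqrt(m) is the candidate x.
        a, b = m, r
        while b * b > m:
            a, b = b, a % b
        q, rem = divmod(m - b * b, d)
        if rem == 0:
            y = math.isqrt(q)
            if y * y == q:
                return b, y
    return None
```

For example, it finds 13 = 3^2 + 2^2 and 33 = 1^2 + 2\cdot 4^2, and returns None when no representation exists.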

The SQISign paper solves both problems. Now would be a good time to mention that the authors of SQISign are an isogeny “dream team” of De Feo, Kohel, Leroux, Petit and Wesolowski. A deep understanding of KLPT is necessary to create the scheme, and the paper is a very impressive work.

We start with the special curve E_0 of j-invariant 1728. It has a “nice” endomorphism ring \cal{O}. The KLPT algorithm exploits the fact that isogenies from E_0 correspond to left-\cal{O} ideals in \cal{O}. Equivalent ideals correspond to isogenies with the same image curve.
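Concretely, for p \equiv 3 \pmod 4 the endomorphism ring of E_0 sits inside the quaternion algebra B_{p,\infty} = (-1, -p \mid \mathbb{Q}), with basis 1, i, j, k where i^2 = -1, j^2 = -p and k = ij = -ji; the reduced norm of a + bi + cj + dk is a^2 + b^2 + p(c^2 + d^2), and it is this special diagonal shape that KLPT exploits. A minimal sketch of the arithmetic (quaternions as 4-tuples of coefficients, toy prime):

```python
p = 31  # toy prime with p ≡ 3 (mod 4)

def qmul(x, y):
    """Multiply quaternions a + b*i + c*j + d*k in B_{p,∞} = (-1, -p | Q),
    where i*i = -1, j*j = -p and k = i*j = -j*i."""
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (
        a1*a2 - b1*b2 - p*c1*c2 - p*d1*d2,
        a1*b2 + b1*a2 + p*c1*d2 - p*d1*c2,
        a1*c2 + c1*a2 - b1*d2 + d1*b2,
        a1*d2 + d1*a2 + b1*c2 - c1*b2,
    )

def nrd(x):
    """Reduced norm Nrd(a+bi+cj+dk) = a^2 + b^2 + p*(c^2 + d^2); multiplicative."""
    a, b, c, d = x
    return a*a + b*b + p*(c*c + d*d)
```

The multiplicativity of the reduced norm is what turns questions about ideals (which have norms) into the norm equations that Cornacchia-style methods can handle.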

The most important thing a reader of this paper has to understand is that E_0 is the only curve we can work with, in terms of ideals and endomorphisms. So every computation has to be pulled back to E_0 in some way. This requires a whole bunch of sneaky constructions, starting with the private key. The private key is not one isogeny \tau : E_0 \to E_A, but two. Precisely, \tau is an isogeny of “small” prime degree N_A corresponding to an ideal I_\tau. That’s nice in quaternion-land, but we can’t compute the public key this way (isogeny computations are only efficient when the degree is smooth), so we need to compute an equivalent ideal whose norm is a power of small primes, and thus compute the corresponding isogeny \tau' : E_0 \to E_A. The public key of a signer is E_A and the private key is both I_\tau and \tau'.

The commitment is a “random” isogeny \psi : E_0 \to E_1 of degree coprime to the degrees of both \tau and \tau'. The prover can compute a left \cal{O}-ideal I_\psi corresponding to \psi.

The challenge (which will be created using the Fiat-Shamir transform in the signature scheme) is the isogeny \phi : E_1 \to E_2. This isogeny is fully known to the prover, for example its kernel is known. There is an exponentially large set of possible challenges, which is what makes the signature scheme interesting. The degree of \phi is coprime to everything else. The prover needs to compute an ideal corresponding to \phi, and to do this we pull back the kernel of \phi under \psi, compute the ideal I and then push forward (via Lemma 3 of the paper). This results in I_\phi, which is a left-End(E_1)-ideal.
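As a reminder of how the Fiat-Shamir transform works in a much simpler setting, here is a toy Schnorr-style signature: the interactive challenge is replaced by a hash of the commitment and the message, so the challenge space is exponentially large and no parallel repetition is needed. The group, parameters and encoding below are all illustrative choices, not secure ones:

```python
import hashlib
import secrets

# Toy Schnorr signature via the Fiat-Shamir transform.  Toy group: Z_p^* for a
# Mersenne prime p, exponents taken mod n = p - 1.  Real schemes use a
# prime-order group of ~256 bits; nothing here is a secure parameter choice.
p = 2**127 - 1
n = p - 1
g = 3

def keygen():
    sk = secrets.randbelow(n - 1) + 1
    return sk, pow(g, sk, p)

def challenge(R, msg):
    # Fiat-Shamir: the challenge is a hash of the commitment and the message
    h = hashlib.sha256(f"{R}|{msg}".encode()).digest()
    return int.from_bytes(h, "big") % n

def sign(sk, msg):
    k = secrets.randbelow(n - 1) + 1   # ephemeral nonce
    R = pow(g, k, p)                   # commitment
    c = challenge(R, msg)
    s = (k + c * sk) % n               # response
    return R, s

def verify(pk, msg, sig):
    R, s = sig
    c = challenge(R, msg)
    return pow(g, s, p) == (R * pow(pk, c, p)) % p
```

SQISign plays the same game, with the curve E_1 as the commitment and the isogeny \phi derived by hashing; the hard part, as described above, is producing a response \sigma that does not leak the secret.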

Now comes the really subtle bit. We have the three isogenies \tau, \psi and \phi and three ideals I_\tau, I_\psi, I_\phi. The isogeny \phi \circ \psi \circ \hat{\tau} : E_A \to E_2 corresponds to I = \overline{I_\tau} I_\psi I_\phi. Somehow we need to compute an equivalent ideal, but this is a left-End(E_A)-ideal, which is no good to us. The trick is to pull back I under \tau', which we can do since the degree of \tau' is coprime to the norm of I_\tau. (This is the part where it is necessary to have two different private keys.) This ideal corresponds to an isogeny \varphi : E_0 \to E' for some curve E'. Then we would run KLPT on this ideal to get a random ideal of suitable smooth norm, then push forward via \tau or \tau' and we’re done, right? Wrong!

We have two different isogenies E_0 \to E', and the pushforward of one of them to E_A is an isogeny to E_2. But the problem is: the pushforward of the other does not necessarily have an image curve isomorphic to E_2, which is what we need. How to enforce this requirement? This is where the Eichler orders come in. Eichler orders are intersections of two maximal orders in a quaternion algebra. The Eichler order of interest in this paper is End( E_0 ) \cap End( E_A ). (This is the reason why the degree of \tau is taken as small as possible.) The key result is Corollary 1 of the paper, which restricts the equivalence of ideals to multiplication by elements \beta in the Eichler order. It is necessary to develop a variant of KLPT to ensure this restriction is possible.

The paper contains several optimisations and also some new computational assumptions. One assumption is that there is no meet-in-the-middle attack on \tau, which seems plausible (though surprising at first sight). Other assumptions (see Section 7.3) are about the randomness of the outputs of the KLPT algorithm in this setting.

What about the attack I mentioned earlier (the “second problem”), where the forger chooses E_1 by computing an isogeny \phi' : E_A \to E_1? This attack is prevented by imposing a condition on the degree of \sigma. The attacker can’t run the same algorithm as the signer, since they have no way to pull back isogenies to E_0 and run KLPT.

To conclude, the exciting thing about SQISign is the exponentially large set of challenges, which means the signing process does not need to be repeated. This is why the signatures are very short compared with other post-quantum signatures. I have no opinion on the practicality of the scheme.

— Steven Galbraith

Posted in Uncategorized | Leave a comment

Review of ECC 2020

One of the reasons I started this blog was to share information with people who were unable to attend conferences. So I’ve tried to maintain a tradition of conference reviews. I’ll continue, even though online conferences are more inclusive and there is no excuse not to attend.

ECC 2020 took place online last week. Recordings of the four panel discussions are available on YouTube here.

There were technical problems with the first panel (the “Pacific rim” group), but the audio is ok and the quality gets better after the first few minutes. The discussion in the first panel covered various attacks on signatures (including Minerva and TPM-Fail), groups of unknown order, the development of isogeny-based crypto, polynomial commitments, and computational records for factoring, finite field DLP and ECDLP. Finally we answered the question “why work on fancy crypto when we don’t even seem to be able to safely encrypt and sign data?”. Watch the recording to find out more!

The following three panels ran more smoothly and are also highly recommended. I found it particularly interesting to listen to the speculations on future developments in quantum computing in the panel “How long can we safely use pre-quantum ECC?”, but I also recommend “Formal verification of ECC” and “Is SIKE ready for prime time?”

There is also a curated list of talks which is a sort-of “greatest hits” of conference and seminar talks from the last year.

— Steven Galbraith

Posted in Uncategorized | Leave a comment

ECC 2020 Conference

The ECC 2020 conference will take the form of a curated collection of lecture recordings, and four panel discussions. Each panel will feature a number of experts in the area, and there will be an opportunity for audience members watching live to ask questions during the session using Zulip. Each panel is followed by a 30 min social break on wonder.me. The conference is free, so please come along.

To register, email Tanja Lange.

The four panels are:

  • Wednesday 28th October, 01:00 UTC: Conference opening, and panel on recent trends in ECC
    Moderator: Steven Galbraith.
    Panel: Dan Boneh, Nadia Heninger, Kristin Lauter, Mehdi Tibouchi, and Yuval Yarom.
  • Wednesday 28th October, 08:00 UTC: Formal verification of ECC protocols
    Moderator: Benjamin Smith.
    Panel: Karthik Bhargavan, Bas Spitters, and Bow-Yaw Wang.
  • Thursday 29th October, 12:00 UTC: How long can we safely use pre-quantum ECC?
    Moderator: Francisco Rodríguez Henríquez.
    Panel: Sam Jaques, Manfred Lochter, and Michele Mosca.
  • Thursday 29th October, 19:00 UTC: Is SIKE ready for prime time?
    Moderator: Alfred Menezes.
    Panel: David Jao, Christophe Petit, and Nick Sullivan.

See you there!

— Steven Galbraith

Posted in Uncategorized | Leave a comment