## The first half of ANTS 2020

The Fourteenth Algorithmic Number Theory Symposium (ANTS-XIV), which was intended to take place at the University of Auckland in New Zealand, is currently taking place online in what Noam Elkies has named “New Zoomland”. Here is a report on the first half of the conference.

Note that the pdfs of papers are all here and that there are short videos of all talks on the ANTS 2020 conference YouTube channel.

Day one of the conference opened with two papers on isogenies.

• Supersingular Curves With Small Non-integer Endomorphisms, by Jonathan Love and Dan Boneh.

The motivation for this paper is the problem of “hashing” to a random supersingular curve over $\mathbb{F}_{p^2}$. A natural solution would be to construct a CM curve with small discriminant, as done in Broker’s algorithm. The paper calls such curves $M$-small (curves whose endomorphism ring contains an endomorphism of degree at most $M$). Equivalently, the endomorphism ring contains an order of discriminant at most $M$ in absolute value. For each fundamental discriminant $D$ such that $-M \le D < 0$ let $T_D$ be the set of all $M$-small curves for which the corresponding order lies in $\mathbb{Q}(\sqrt{D})$. The paper proves a number of new results (both mathematical and algorithmic) relating to the structure of the subgraphs $T_D$ within the full isogeny graph, when $M < \sqrt{p}/2$.

The conclusion is that, given any two $M$-small curves $E, E'$, one can efficiently compute an isogeny from $E$ to $E'$, which means this is not a useful way to hash to supersingular curves.

Note that Castryck, Panny and Vercauteren (EUROCRYPT 2020) present a complementary result for hashing into the CSIDH set.

• Computing endomorphism rings of supersingular elliptic curves and connections to pathfinding in isogeny graphs, by Kirsten Eisenträger, Sean Hallgren, Chris Leonardi, Travis Morrison and Jennifer Park.

This paper is about the problem of computing $End(E)$ when $E$ is a supersingular elliptic curve. There are two approaches to this problem in the literature:

1. To compute cycles in the isogeny graph (and hence endomorphisms), giving a subring of $End(E)$, and then to work out which of the maximal orders containing that subring is the right one. This idea first appears in Kohel’s thesis, but not all the details were worked out.
2. To compute an isogeny from a nice curve $E_0$ with known endomorphism ring to $E$, and then to deduce the endomorphism ring of $E$. The key reference for such approaches is the paper by Kohel, Lauter, Petit and Tignol (from ANTS 2014). Several subsequent papers discuss details of computing $End(E)$, in particular the paper Supersingular isogeny graphs and endomorphism rings: reductions and solutions by Eisenträger, Hallgren, Lauter, Morrison and Petit.

This ANTS paper is the first to give all the details of an algorithm following the first approach (finding cycles) and to work out its complexity carefully. The main focus of the paper is a study of Bass orders. The paper pays close attention to the size of the representation of the endomorphism ring.

At a high level it is not obvious to me which of these two approaches is the better strategy, or whether they both should have basically the same complexity.

Next up was a superb invited lecture by David Harvey on Recent results on fast multiplication. A recording of the lecture is on the YouTube channel mentioned above.

The first day concluded with two more talks on superspecial curves.

• Counting Richelot isogenies between superspecial abelian surfaces, by Toshiyuki Katsura and Katsuyuki Takashima.

The paper has a careful analysis of the isogenies of superspecial dimension 2 abelian varieties. The goal is to distinguish isogenies to a product of two elliptic curves from isogenies to the Jacobian of a genus 2 curve.

• Algorithms to enumerate superspecial Howe curves of genus four, by Momonari Kudo, Shushi Harashita and Everett Howe.

The paper is about a method to construct superspecial genus 4 curves out of elliptic curves and genus 2 curves.

Day two had a number of nice papers, but probably less interesting to a cryptographic audience (also a bit early in the morning for me). Isabel Vogt gave an invited lecture on Arithmetic and geometry of Brill–Noether loci of curves. There was a poster session comprising three posters. One of the posters had a crypto context: Samuel Dobson, Steven D. Galbraith, and Benjamin Smith, Trustless Construction of Groups of Unknown Order with Hyperelliptic Curves.

Day three started with three number theory talks (again, too early in the morning for me — who organised this??). The invited talk by Felipe Voloch on Commitment Schemes and Diophantine Equations discussed some ideas for a post-quantum commitment scheme based on diophantine equations that are hard to solve but for which determining the number of solutions is easy.

Day three also featured a wonderful session by Joppe Bos and Michael Naehrig to remember Peter Montgomery. Peter had attended a number of ANTS conferences over the years, and died earlier this year. His name is legendary in computational number theory due to his contributions to factoring and fast arithmetic, such as Montgomery multiplication, the Montgomery model for elliptic curves, and the Montgomery ladder for elliptic curve point multiplication.
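
As a reminder of the last of these, here is a sketch of the Montgomery ladder, written for modular exponentiation rather than elliptic curve points (a toy illustration of the idea, not production code; the same uniform double-and-add pattern is what is used for point multiplication):

```python
def montgomery_ladder(g, k, p):
    """Compute g^k mod p, doing one 'square' and one 'multiply' per bit of k."""
    r0, r1 = 1, g                 # invariant: r1 == r0 * g (mod p)
    for bit in bin(k)[2:]:        # process bits of k left to right
        if bit == '0':
            r0, r1 = r0 * r0 % p, r0 * r1 % p
        else:
            r0, r1 = r0 * r1 % p, r1 * r1 % p
    return r0
```

The appeal for elliptic curves is that every bit triggers the same operation pattern, which helps resist side-channel attacks and works with $x$-only arithmetic on Montgomery-form curves.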

Finally on day three was the rump session, which was heavily biased towards isogenies.

• Everett Howe gave a hilarious talk on isogenies of superspecial curves in characteristic 2 (joint work with Bradley Brock).
• Enric Florit (reporting on joint work with Ben Smith) presented some results on the Ramanujan property (or not) of genus 2 isogeny graphs.
• Christophe Petit advertised the Isogeny-based cryptography winter school in Bristol in December.
• Péter Kutas talked about joint work on quantum attacks on unbalanced SIDH.
• Antonin Leroux sketched SQIsign (pronounced “ski sign”), a post-quantum signature scheme from quaternions and isogenies. It seems a nice idea and I am very curious to see the details.
• Wouter Castryck (joint work with Thomas Decru and Frederik Vercauteren) presented Radical isogenies, a major new idea to speed up CSIDH.

It has been a great three days. I will report back again on the rest of the conference, and also the workshop on post-quantum crypto that follows the conference.

— Steven Galbraith

## Eurocrypt 2020

Eurocrypt 2020 took place online, being the first major IACR cryptography conference to be run online over Zoom. The chairs did an amazing job at short notice, assisted very ably by Kevin McCurley and Kay McKelly from the IACR as Virtual Conference Administrators.

I understand there were over 1000 registrations, making it by far the largest Eurocrypt of all time. It turns out that online events are more accessible. Certainly I had not planned to fly to Europe for it, but since it was online I was able to attend.

Short videos of the talks are on YouTube and are linked from the conference webpage, and the full live sessions are on the IACR YouTube channel. Unfortunately I did not attend any of the live sessions due to time zones, but I did at least catch most of the rump session.

For the purposes of this blog, the most important session was the opening session on “Number-Theoretic Cryptography”. There was an unexpected hidden theme shared by two of the papers: the algorithm by Becker, Coron and Joux for solving subset-sum (building on the beautiful idea of Howgrave-Graham and Joux to split vectors by weight rather than length).
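
For context, the classical meet-in-the-middle approach to subset-sum that these algorithms improve on can be sketched as follows (a toy illustration; the function name and parameters are my own):

```python
from itertools import combinations

def subset_sum_mitm(a, t):
    """Find a subset of a summing to t by splitting the list in half.

    Enumerate all 2^(n/2) subset sums of the left half into a table,
    then scan the right half's subsets for a matching complement.
    """
    half = len(a) // 2
    left, right = a[:half], a[half:]
    sums = {}
    for r in range(len(left) + 1):
        for combo in combinations(range(len(left)), r):
            sums.setdefault(sum(left[i] for i in combo), combo)
    for r in range(len(right) + 1):
        for combo in combinations(range(len(right)), r):
            rest = t - sum(right[i] for i in combo)
            if rest in sums:
                return sums[rest], tuple(half + i for i in combo)
    return None
```

The Howgrave-Graham–Joux idea is to split solutions by weight rather than by position in the list, which gives each solution many “representations” and leads to better exponents; Becker–Coron–Joux refines this splitting further.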

I now discuss the papers in this session.

• “He Gives C-Sieves on the CSIDH” by Chris Peikert, and “Quantum Security Analysis of CSIDH” by Xavier Bonnetain and André Schrottenloher.

These two papers are about quantum algorithms to solve the group action problem (the computational assumption underlying the CSIDH isogeny-based post-quantum key exchange protocol). Both papers use Kuperberg’s second algorithm for the hidden shift problem. The papers advance our understanding of the security of CSIDH.

As outlined very clearly in Peikert’s talk, there are three components of such an algorithm:

1. A quantum circuit to compute the class group action $[a]*E$ for “arbitrary” ideal classes. This is used to create certain “labelled” quantum states (consisting of an integer, called the “label”, stored classically as bits, and a quantum state).
2. A sieve algorithm to combine “naughty” labelled states to form “nice” labelled quantum states.
3. A measurement process that allows one to compute some bits of the secret ideal class from a nice labelled state.

The main challenges are to get a small quantum circuit to do step 1 and to minimise the number of states needed for the sieving in step 2.

Peikert’s paper is mainly about items 2 and 3. The paper uses “quantumly accessible classical RAM”, which means there is a storage device that stores classical bits, and one can efficiently create quantum states that are superpositions of stored values. The paper then studies the sieving process (the “collimation sieve”, which is the “C-sieve” in the title of the paper). One aim is to minimise the total number of labelled states that need to be computed in step 1, and hence reduce the total number of quantum gates required.

Peikert claims that if the storage is (respectively) $2^{32}, 2^{40}, 2^{48}$ bits, then one can attack CSIDH-512 using (respectively) $2^{18.7}, 2^{15.7}, 2^{14.1}$ executions of step 1. He claims that in the case of $2^{40}$ bits of storage this gives a quantum algorithm to break CSIDH-512 that requires $2^{60}$ T-gates. Officially this is below NIST level 1, though it is still unclear to me that a quantum computer with $2^{60}$ gates is a realistic object in the foreseeable future (as far as I know, we do not currently have classical computers with $2^{60}$ gates).

Peikert discusses several optimisations and also reports on his simulations (his code is available).

Bonnetain and Schrottenloher study a different range of parameters for Kuperberg’s second algorithm. Regarding step 2 they focus on the case of small storage and aim to reduce the quantum cost of the algorithm by doing a large classical computation involving the labels. In other words, they reduce the quantum cost by increasing the classical cost. Their innovations in the sieve stage include using subset-sum algorithms (such as the one by Becker-Coron-Joux that I mentioned above) and list-merging algorithms (such as Schroeppel-Shamir). They also make progress on step 1, with new ideas to shrink the quantum circuit for computing the class group action.

For CSIDH-512, Bonnetain and Schrottenloher claim an attack that requires only about 40,000 logical qubits and a total circuit size of approximately $2^{71.6}$ T-gates, while also performing a $2^{86}$ classical computation. Again, this is technically below the required security for NIST level 1.

In the discussion it was noted that these papers do not “break” CSIDH, but both authors suggest the parameters should be increased. It was also mentioned that the attacks apply to CSURF. If I had been awake in the discussions I would have asked what is the security level if the Bonnetain-Schrottenloher quantum circuit is used with the Peikert sieve. Presumably this gives a further improvement of the attack.

• “Double-Base Chains for Scalar Multiplications on Elliptic Curves” by Wei Yu, Saud Al Musa and Bao Li.

There has been an ongoing literature on the task of computing low weight double-base chains, which are a tool to potentially speed up variable point scalar multiplication on elliptic curves. This paper, which is mostly theoretical, gives some improvements in this area.

• “Rational Isogenies from Irrational Endomorphisms” by Wouter Castryck, Lorenz Panny and Frederik Vercauteren.

This lovely paper gives a reduction of the security of the CSIDH cryptosystem to the problem of computing endomorphism rings of supersingular elliptic curves.

For simplicity I discuss the case $p \equiv 3 \mod{4}$. Let $E_0 : y^2 = x^3 + x$ be the supersingular curve with $j$-invariant 1728. The quadratic twist $E_0^t$ of $E_0$ is isomorphic over $\mathbb{F}_p$ to $E_0$.

The paper exploits the fact that if $E = [a] * E_0$ for some ideal class $[a]$ then $E^t = [a^{-1}] * E_0$. Now suppose $\tau \in End(E)$ is an isogeny of degree $\ell$ such that $\tau \circ \pi = - \pi \circ \tau$, where $\pi$ is the $p$-power Frobenius (this is a typical case). Let $\phi : E \to E^t$ be the isomorphism over $\mathbb{F}_{p^2}$ from $E$ to its quadratic twist. Then $\phi \circ \tau : E \to E^t$ is an $\ell$-isogeny that is $\mathbb{F}_p$-rational. It follows that $\phi \circ \tau$ corresponds to an ideal class $[b]$ such that $E^t = [ab] * E_0$. Hence $[ab] = [a^{-1}]$, so $[a]^2 = [b]^{-1}$, and we can solve for $[a]$ by computing a square root of $[b]^{-1}$ in the class group.

In short, if we are given $E$ and want to compute $[a]$ such that $E = [a] * E_0$, then it suffices to find a suitable element $\tau \in End(E)$. The paper explains how to do this.

• “Low Weight Discrete Logarithms and Subset Sum in $2^{0.65n}$ with Polynomial Memory” by Andre Esser and Alexander May.

This paper gives the first progress on the low Hamming weight DLP for about 20 years and it is a major result. The paper is a total mind-bender, with random walks built from random walks forming rhos of rhos. A key idea is splitting by weight, which again takes us back to the Becker-Coron-Joux algorithm.

I am sorry, but I cannot begin to summarise this paper in a few lines. Read it. Then read it again when you are stoned. Then read it again when you are sober.

The invited talks were both excellent.

• “Mathematics and Cryptography: A Marriage of Convenience?” by Alice Silverberg.

I watched the 44-minute pre-recorded lecture. There is also a live talk with a 30-minute presentation followed by Q&A.

Alice told various stories of her journey through mathematics and crypto, and some of the people she has worked with. She briefly talked about her work with Rubin on compression in finite fields (algebraic tori), her work with Boneh on multilinear maps, and her work with H.W. Lenstra Jr. on the Gentry-Szydlo algorithm. Most importantly she used these stories to illustrate the importance of community, curiosity, communication, kindness, listening, and doing the right thing.

• “Fine-Grained Cryptography: A New Frontier?” by Alon Rosen.

The talk gave an overview of recent work on fine-grained complexity and crypto. The basic idea is this: Suppose we have a computational problem that an honest user can set up in roughly linear time $O(n)$ but an attacker requires time $O(n^k)$ to solve. This is polynomial time, but if $k$ is large enough then such work may be infeasible, or at least extremely time-consuming, and that could be enough for crypto applications.

Alon explained the $k$-Orthogonal Vectors ($k$-OV) problem, which is: given $k$ lists $L_i$, each containing $n$ vectors in $\mathbb{F}_p^d$ (for small $p$ and $d = O( \log(n))$), find $k$ vectors $u_i \in L_i$ such that $\sum_{j=1}^d \prod_{i=1}^k u_{i,j} = 0$. In the case $k=2$ this becomes the problem of finding two vectors whose Euclidean inner product is zero (i.e., the vectors are orthogonal).

The obvious way to solve the $k$-OV problem is to consider all $n^k$ combinations of vectors. The $k$-OV assumption is that there is no algorithm that can do much better than this.
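
The $n^k$ baseline is simple enough to sketch (toy code; the function name and parameters are my own):

```python
from itertools import product
from math import prod

def k_ov(lists, p):
    """Brute-force k-OV: try all n^k tuples (one vector per list).

    Returns a tuple (u_1, ..., u_k) with sum_j prod_i u_{i,j} = 0 mod p,
    or None if no such tuple exists.
    """
    for choice in product(*lists):
        # zip(*choice) iterates over coordinates j; multiply down each column.
        if sum(prod(col) for col in zip(*choice)) % p == 0:
            return choice
    return None
```

The $k$-OV assumption says, roughly, that no algorithm beats this exhaustive search by a polynomial factor in the exponent.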

Impagliazzo, Paturi and Zane introduced the Strong Exponential Time Hypothesis (SETH), which asserts that one cannot solve $k$-SAT much faster than exhaustive search. The SETH supports the $k$-OV assumption, in the sense that a faster algorithm for $k$-OV would imply faster algorithms for $k$-SAT than currently known.

Alon then described how to get proof-of-work and one-way functions from $k$-OV, and said many nice things about the CRYPTO 2019 paper “Public-Key Cryptography in the Fine-Grained Setting” by Rio LaVigne, Andrea Lincoln and Virginia Vassilevska Williams. The final part of the talk was a blur of philosophical musings (and some jokes).

As always, a highlight was the rump session. There was drinking. There were cigars. There were unusual zoom backgrounds. Contacts were traced. There was a challenging quiz.

In terms of this blog, there were no rump session talks of particular relevance to ECC. Two highlights were the songs “I will survive” by Women in Theory, and “Zoom zoom zoom zoom” by Chloe Martindale.

For those who wanted to visit Zagreb, don’t worry! The conference general chairs are being rewarded for their competence by being asked to run the conference again in 2021.

In the meantime, as FakeIACR quipped on twitter, I look forward to not seeing you soon at PKC and CRYPTO.

— Steven Galbraith


## May 2020, What’s new?

Things have been quiet on this blog lately. As with everyone else on the planet, I have been distracted by COVID-19 and its impact on academic and family life. But I have decided it is time to write something.

Hopefully the next post will not be too far behind: EUROCRYPT is coming up next week, as an online conference. I plan to report on some of the activities that take place (although it is mostly taking place in the middle of the night with respect to my time-zone).

In the meantime, I thought I would mention some recent isogeny papers that I like.

• Craig Costello and Ben Smith, The Supersingular Isogeny Problem in Genus 2 and Beyond, PQCrypto 2020.

Supersingular abelian varieties of dimension $d > 1$ are isogenous to a product of supersingular elliptic curves. Superspecial abelian varieties are isomorphic to a product of supersingular elliptic curves, though generally only as unpolarized abelian varieties.

If one considers the isogeny graph of superspecial abelian varieties of dimension $d$ then some vertices (isomorphism classes of polarized abelian varieties) are simple and some are products of abelian varieties of lower dimension with a product polarization. The idea of the paper is to use a set of “easily-identifiable distinguished vertices” in the isogeny graph, namely those corresponding to $E \times A$ where $E$ is a supersingular elliptic curve and $A$ is an abelian variety of dimension $d-1$.

This leads to an iterative approach to find an isogeny between two superspecial abelian varieties $A_1$ and $A_2$ of the same dimension $d$: Take random walks from each until hitting distinguished vertices of the form $E_i \times A_i'$ for $i \in \{1,2\}$. If one can compute isogenies $E_1 \to E_2$ and $A_1' \to A_2'$ then one can “lift” them to an isogeny $A_1 \to A_2$. Since we are reducing the dimension, the search problems to find these isogenies are over smaller spaces, and hence easier to solve.

This is a really nice application of the structure of products within the space of polarized abelian varieties.

• Ali El Kaafarani, Shuichi Katsumata and Federico Pintore, Lossy CSI-FiSh: Efficient Signature Scheme with Tight Reduction to Decisional CSIDH-512, PKC 2020.

The paper is about digital signatures from isogeny problems. The SeaSign paper by De Feo and me contained a signature scheme in the random oracle model and also, in an appendix, a scheme based on lossy keys that would have tight security in the quantum random oracle model (based on a result by Kiltz, Lyubashevsky and Schaffner). The drawback is working with a massive class group and hence larger parameters.

This paper introduced a new way to get lossy keys, that is simpler and does not require such a large increase in parameters. I like the idea. Here is an analogy of the idea in the DLP setting.
Let the system parameters be $(g_1, g_2)$ in some group.
Suppose a user has private key $a$ and public key $(h_1,h_2) = (g_1^a,g_2^a)$.
Here is an interactive way to prove knowledge of $a$:

1. Prover chooses random $r$ and sends commitment $(t_1, t_2) = (g_1^r,g_2^r)$ to verifier.
2. Verifier responds with a challenge bit $c \in \{0,1\}$.
3. Prover replies with $s = r$ (if $c=0$) or $s = ar^{-1}$ (if $c=1$).
4. Verifier checks that $(g_1^s, g_2^s) = (t_1,t_2)$ if $c=0$, or $(t_1^s,t_2^s) = (h_1,h_2)$ if $c=1$.
5. Repeat.

One can see that this makes sense as a proof of knowledge, and can be turned into a signature scheme using the Fiat-Shamir transform.
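
Here is a toy instantiation of the protocol above in a small prime-order subgroup of $\mathbb{Z}_p^*$ (illustrative parameters only; real schemes use cryptographically large groups, and `pow(r, -1, q)` requires Python 3.8+):

```python
import secrets

p, q = 1019, 509              # q is prime and divides p - 1 = 2 * 509
g1, g2 = 4, 9                 # two generators of the order-q subgroup

a = secrets.randbelow(q - 1) + 1                   # private key
h1, h2 = pow(g1, a, p), pow(g2, a, p)              # public key

def prove_round(c):
    """One round: prover commits, answers challenge bit c, verifier checks."""
    r = secrets.randbelow(q - 1) + 1
    t1, t2 = pow(g1, r, p), pow(g2, r, p)          # commitment
    if c == 0:
        s = r                                      # open the commitment
        return (pow(g1, s, p), pow(g2, s, p)) == (t1, t2)
    s = a * pow(r, -1, q) % q                      # s = a / r mod q
    return (pow(t1, s, p), pow(t2, s, p)) == (h1, h2)

assert all(prove_round(secrets.randbelow(2)) for _ in range(10))
```

The $c=1$ check works because $t_i^s = g_i^{rs} = g_i^a = h_i$, with exponents taken modulo the subgroup order $q$.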

The lossy key setting is to choose a random pair $(h_1,h_2)$ as a “lossy public key”, in which case, for a given $r$, there is no single $s$ that satisfies both verification equations. The indistinguishability of real and lossy keys is the computational assumption.

Unlike the lossy keys scheme in the SeaSign paper, this approach does not need such large parameters.

• Daniel J. Bernstein, Luca De Feo, Antonin Leroux and Ben Smith, Faster computation of isogenies of large prime degree, to appear at ANTS.

Computing degree $d$ isogenies essentially boils down to evaluating some polynomials of degree $O(d)$ at some points. For example we have polynomials such as

$f(x) = \prod_{i=1}^{(d-1)/2} (x - x( [i]P ))$

and wish to evaluate them on the $x$-coordinate of some points. The classical Vélu formulae compute this in $O(d)$ steps.
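
The classical $O(d)$ evaluation can be sketched as follows, on a toy curve of my own choosing (not from the paper; `pow(x, -1, p)` requires Python 3.8+):

```python
# Toy evaluation of f(alpha) = prod_{i=1}^{(d-1)/2} (alpha - x([i]P))
# on the curve y^2 = x^3 + a*x + b mod p, walking through the multiples [i]P.
p, a, b = 101, 2, 3

def add(P, Q):
    """Affine point addition (None represents the point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def kernel_poly_eval(P, d, alpha):
    """Evaluate f(alpha): one curve addition and one multiplication per step."""
    val, Q = 1, P
    for _ in range((d - 1) // 2):
        val = val * (alpha - Q[0]) % p
        Q = add(Q, P)
    return val
```

The point of the paper is to replace this linear walk through the multiples $[i]P$ with an evaluation strategy costing only $\tilde{O}(\sqrt{d})$ operations.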

The starting point of the paper is the observation (going back to Pollard and Strassen) that there are fast methods to evaluate polynomials with a certain “structure” in their roots. The paper explains how to make this idea work for isogenies, using biquadratic polynomials to encode the addition operation on a Montgomery model.

In short, the paper shows how to compute isogenies in $\tilde{O}( \sqrt{d} )$ operations. In practice, this approach becomes useful for $d \approx 200$, which means it could be useful for CSIDH, CSURF or BSIDH. The approach is not interesting for SIDH/SIKE.

• Samuel Jaques and André Schrottenloher, Low-gate Quantum Golden Collision Finding, eprint 2020/424.

This paper is about quantum algorithms for the claw-finding problem, which is a basic “meet-in-the-middle” approach to solving the computational problem behind SIDH/SIKE. More precisely, in a claw-finding problem we have functions $f : X \to Z, g : Y \to Z$ and seek $x \in X, y \in Y$ such that $f(x) = g(y)$. In SIDH, $X = Y$ corresponds to degree $2^{n/2}$ isogenies, and $f$ is taking an isogeny from $E$ while $g$ is taking an isogeny from $E_A$.
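
The basic meet-in-the-middle approach to claw-finding (before any quantum speedups) is easy to sketch (toy stand-in functions; illustrative only):

```python
def claw(f, g, X, Y):
    """Find (x, y) with f(x) == g(y), by tabulating f and scanning g."""
    table = {}
    for x in X:
        table.setdefault(f(x), x)   # store one preimage per f-value
    for y in Y:
        if g(y) in table:
            return table[g(y)], y
    return None
```

For SIDH-sized instances the sets $X, Y$ are far too large to tabulate, which is why the van Oorschot–Wiener and quantum-walk approaches below matter.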

Two types of algorithm have previously been considered:

1. Tani’s algorithm. This was already mentioned in the original paper on SIDH by Jao and De Feo. A recent paper by Jaques and Schanck argued that managing the quantum random access storage in Tani’s algorithm would likely mean it performs less well than originally expected.

2. The van Oorschot and Wiener algorithm. This is a classical algorithm, whose relevance to SIDH was first noted by Adj, Cervantes-Vázquez, Chi-Domínguez, Menezes and Rodríguez-Henríquez. The main idea is to transform the problem of claw-finding into a collision-finding problem on the set $X$. The problem is that useless collisions arise, and so one needs to compute many collisions until finding the one (the so-called “golden collision”) that solves the claw-finding problem.

The new paper considers variants of these quantum random walk algorithms. I’m still digesting the details. However, the conclusion is that SIDH/SIKE is still holding up well against quantum attack. The paper states “SIKE parameters still meet the NIST security levels claimed in their NIST submission.”

— Steven Galbraith

## Breaking the decisional Diffie-Hellman problem for class group actions using genus theory

This post is about the paper Breaking the decisional Diffie-Hellman problem for class group actions using genus theory by Wouter Castryck, Jana Sotáková and Frederik Vercauteren.

First let us recall the situation in the setting of $\mathbb{Z}_p^*$ where $p$ is prime. A Diffie-Hellman quadruple is $(g, g^a, g^b, g^{ab} )$ for $g \in \mathbb{Z}_p^*$. The Decision Diffie-Hellman problem (DDH) is to distinguish such a quadruple (for uniformly sampled $g \in \mathbb{Z}_p^*$ and $a,b \in \mathbb{Z}_{p-1}$) from a quadruple $(g, g^a, g^b, g^c)$ for uniformly sampled $g \in \mathbb{Z}_p^*$ and $a,b,c \in \mathbb{Z}_{p-1}$.

The Legendre symbol is multiplicative, which implies that $(\tfrac{g^a}{p} ) = ( \tfrac{g}{p} )^a$. If $(\tfrac{g}{p})= -1$ (which happens exactly when $g$ has even order) then one can learn the parity of $a$ and $b$ from $g^a, g^b$ respectively, and hence test if the fourth value of the quadruple has Legendre symbol consistent with $g^{ab}$. This well-known fact allows one to reject half of all uniformly sampled quadruples $(g, g^a, g^b, g^c)$ when $(\tfrac{g}{p})= -1$, which is sufficient to say that DDH is not a hard problem in $\mathbb{Z}_p^*$.
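
This test can be sketched as follows (toy prime; the function names are my own):

```python
p = 1019                                   # 1019 ≡ 3 (mod 8), so (2/p) = -1

def legendre(x):
    """Legendre symbol (x/p) via Euler's criterion: x^((p-1)/2) mod p."""
    s = pow(x, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

def symbols_consistent(g, ga, gb, gc):
    """Accept iff the Legendre symbols could come from (g, g^a, g^b, g^ab)."""
    sg, sa, sb, sc = legendre(g), legendre(ga), legendre(gb), legendre(gc)
    if sg == 1:
        return sa == sb == sc == 1         # everything stays in the squares
    # (g/p) = -1: the symbols reveal the parities of a and b, and
    # (g^{ab}/p) = -1 exactly when both a and b are odd.
    return sc == (-1 if sa == sb == -1 else 1)
```

A genuine Diffie-Hellman quadruple always passes, while a uniformly random $g^c$ is rejected half the time when $(\tfrac{g}{p}) = -1$.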

One can increase the success rate beyond $1/2$ by using a random-self-reduction of Diffie-Hellman quadruples, but one can never get a perfect DDH oracle (i.e., an oracle that only accepts $(g, g^a, g^b, g^{ab} )$) from this technique, as the Legendre symbol only “sees” the order 2 elements.

The Legendre symbol is just a group homomorphism $\mathbb{Z}_p^* \to \{ 1, -1\}$, and for any prime $\ell \mid (p-1)$ one can get a homomorphism $\mathbb{Z}_p^* \to H$ where $H$ is a subgroup of order $\ell$. Hence, if $(p-1)$ has a range of small factors then one can get an increasingly accurate algorithm to distinguish Diffie-Hellman quadruples from random quadruples (and hence solve DDH).
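
Concretely, the homomorphism is $x \mapsto x^{(p-1)/\ell} \bmod p$, and when $\ell$ is small one can brute-force discrete logarithms in the image (a toy sketch with illustrative parameters and function names of my own):

```python
p, ell = 1019, 509                     # p - 1 = 2 * 509, so ell | p - 1

def proj(x):
    """The homomorphism Z_p^* -> H onto the subgroup H of order ell."""
    return pow(x, (p - 1) // ell, p)

def dlog(h, base):
    """Brute-force discrete log in the order-ell subgroup (ell is small)."""
    x = 1
    for e in range(ell):
        if x == h:
            return e
        x = x * base % p
    raise ValueError("h is not in the subgroup generated by base")

def ddh_consistent(g, ga, gb, gc):
    """Accept iff the quadruple projects to a DDH quadruple in H."""
    G = proj(g)
    a, b = dlog(proj(ga), G), dlog(proj(gb), G)
    return proj(gc) == pow(G, a * b, p)
```

A real quadruple always projects consistently, and a random one is rejected with probability about $1 - 1/\ell$, so several small factors of $p-1$ combine into a very accurate distinguisher.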

[As an aside: The amazing thing about the Legendre symbol, and the reason it is taught in all good number theory courses, is not the existence of a group homomorphism $\mathbb{Z}_p^* \to \{ 1, -1\}$. This is trivial. What is non-trivial is the quadratic reciprocity law, which gives a non-obvious and very efficient way to compute Legendre symbols.]

Similarly, for any finite group one can consider group homomorphisms to subgroups of small order. So one can also attack elliptic curve DDH in a similar way. This is one of the main reasons why the community works with groups of prime order, and in particular elliptic curves of prime order.

Now we turn to group actions. Let $G$ be a finite group acting on a set $X$. Write the action of $g \in G$ on $x \in X$ as $g*x$. For example, let $G = Cl( \mathcal{O} )$ be an ideal class group acting on the set $X$ of supersingular elliptic curves with j-invariant in $\mathbb{F}_p$. This is the setting of the CSIDH system in isogeny-based post-quantum crypto. The natural analogue of the DLP is: given $E$ and $a * E$, compute $a$. The natural analogue of DDH is to distinguish $(E, a*E, b*E, (ab)*E )$ from $(E, a*E, b*E, c*E )$, where $a, b, c$ are uniformly sampled from $G$.

Knowing about Legendre symbols, it is natural to speculate that one might be able to do something similar for group actions. We’d like a group homomorphism $\phi : G \to H$ for some group $H$ of small prime order, and to be able to compute $\phi(a)$ from $a*E$. That is what the paper of Castryck, Sotáková and Vercauteren does.

In some survey talks (such as at ANTS 2018 and at the Alice Silverberg birthday conference in 2018) I asked “Can subgroups of the ideal class group be exploited?” While I thought this would be a good problem, I did not have any clue how to do this. I am surprised and delighted by the new results.

Without going into the details, what the paper shows is that one can get enough information about the degree of an isogeny from $E$ to $a*E$ (and hence the norm of the ideal $a$) from looking at pairings of points on the elliptic curve. It is a wonderful and surprising (at least, to me) result, and brings a new set of ideas and techniques into isogeny crypto. The paper is not too hard to follow (don’t be put off by the phrase “genus theory” — it is not as scary as it sounds).

I end with a few small comments. First, as with the case of $\mathbb{Z}_p^*$, this does not give a perfect algorithm to distinguish DDH quadruples from uniform ones. But it does allow one to reject some quadruples as being definitely not DDH, and this is enough to break the DDH assumption. Second, breaking the DDH assumption does not, as far as I know, break any isogeny cryptosystem. This is because isogeny cryptosystems are rather unsophisticated compared with DLP-based protocols. Third, the results do not apply to SIDH and have no impact on the SIKE submission to the NIST standardization process.

To conclude, this paper is a great theoretical result that brings new ideas to the field. What will be next for isogeny crypto? Whatever it is, I look forward to it!

— Steven Galbraith

## Algorithmic Number Theory (ANTS-XIV)

The Fourteenth Algorithmic Number Theory Symposium, ANTS-XIV, will take place at the University of Auckland, New Zealand on June 30 – July 4, 2020.

The deadline for submissions is February 25. To submit, please go to the call for papers on the conference website.

I decided to have a look at the history of the ANTS conferences, and in particular to identify the most highly cited papers (using google scholar). The ANTS conferences started in 1994, with the first conference held at Cornell.

First I want to mention that citation counts are not a good measure of research quality or difficulty or importance. Citations are biased towards subject areas that have a culture of writing lots of papers and citing widely. Citation counts are also biased towards older papers, since they have had more time to accrue citations. Hence, we would expect the most highly cited papers at ANTS to be in cryptography, and from 16 or more years ago.

Nevertheless, citation counts do tell something about the interest in a paper, and are a reasonable proxy for impact on the field.

The most highly cited paper in the history of the ANTS conferences (with nearly 1700 citations according to google scholar) is:

• J. Hoffstein, J. Pipher and J. H. Silverman, NTRU: A ring-based public key cryptosystem, ANTS III, 1998.

There is no doubt that this is a massively influential paper on lattice cryptography. Several of the lattice-based submissions to the NIST Post-Quantum standardisation process build very closely on NTRU. The irony (if you can call it that) is that this paper was rejected from CRYPTO, and yet has had higher impact than most other papers published in CRYPTO around that time.

Here are the next 14 most cited ANTS papers:

• A. Joux, A One Round Protocol for Tripartite Diffie-Hellman, ANTS IV, 2000. 1369 cites.
• D. Boneh, The Decision Diffie-Hellman Problem, ANTS III, 1998. 1078 cites.
• S. D. Galbraith, K. Harrison and D. Soldera, Implementing the Tate Pairing, ANTS V, 2002. 689 cites.
• A. Joux, The Weil and Tate Pairings as Building Blocks for Public Key Cryptosystems, ANTS V, 2002. 287 cites.
• L. M. Adleman, J. DeMarrais and M.-D. A. Huang, A subexponential algorithm for discrete logarithms over the rational subgroup of the jacobians of large genus hyperelliptic curves over finite fields, ANTS I, 1994. 258 cites.
• P. Gaudry and R. Harley, Counting Points on Hyperelliptic Curves over Finite Fields, ANTS VI, 2000. 199 cites.
• P. Q. Nguyen and D. Stehlé, LLL on the Average, ANTS VII, 2006. 168 cites.
• O. Schirokauer, D. Weber and T. F. Denny, Discrete Logarithms: The Effectiveness of the Index Calculus Method, ANTS II, 1996. 167 cites.
• L. M. Adleman, The function field sieve, ANTS I, 1994. 165 cites.
• G.-J. Lay and H. G. Zimmer, Constructing elliptic curves with given group order over large finite fields, ANTS I, 1994. 163 cites.
• P. Q. Nguyen and J. Stern, Lattice Reduction in Cryptology: An Update, ANTS IV, 2000. 144 cites.
• N. D. Elkies, Shimura Curve Computations, ANTS III, 1998. 106 cites.
• M. Fouquet and F. Morain, Isogeny Volcanoes and the SEA Algorithm, ANTS V, 2002. 104 cites.
• A. Shallue and C. E. van de Woestijne, Construction of Rational Points on Elliptic Curves over Finite Fields, ANTS VII, 2006. 100 cites.

Of course, these rankings will change over time. But that is what it looked like in early 2020.

Looking at this list I see many important and favourite papers: Antoine Joux’s paper on One Round Tripartite Diffie-Hellman kick-started pairing-based crypto; the Adleman-DeMarrais-Huang paper was the first to show high genus curves are weak for DLP crypto; the Lay-Zimmer paper was influential in the early days of ECC; Fouquet and Morain introduced the phrase “Isogeny Volcano”; etc. It is also notable that several of the papers listed (e.g., those by Boneh, Elkies, Nguyen-Stern, and the second paper by Joux) are invited papers, which shows that the community does value survey/overview papers.

Anyway, I look forward to strong submissions to ANTS XIV in Auckland, including on elliptic curves, lattices and isogenies. Hopefully in 15-20 years the impact of some of those papers will be apparent.

— Steven Galbraith

## Various news items

1. The ECC 2019 conference took place in Bochum in December. Slides from some of the talks (including the rump session) are available here.

2. ASIACRYPT 2019 took place in Kobe, Japan in December. It was a very well organised conference.

The Best Paper award went to Thomas Debris-Alazard, Nicolas Sendrier and Jean-Pierre Tillich for the paper “Wave: A New Family of Trapdoor One-Way Preimage Sampleable Functions Based on Codes”. This paper gives a post-quantum signature scheme (of the “hash and sign” type) from error-correcting codes. Ward Beullens has written a blog post on this paper in the COSIC blog.

There were two conference sessions on isogenies, featuring these papers:

• Ward Beullens, Thorsten Kleinjung and Frederik Vercauteren “CSI-FiSh: Efficient Isogeny based Signatures through Class Group Computations”. This paper introduces an isogeny signature scheme of the type originally proposed by Stolbunov (also discussed by De Feo and me in the appendix of our “SeaSign” paper). The key step is a large class group computation for one specific parameter (i.e. one prime).

• Luca De Feo, Simon Masson, Christophe Petit and Antonio Sanso “Verifiable Delay Functions from Supersingular Isogenies and Pairings”. Computing a sequence of isogenies is a natural “delay function” (in the sense that the computation cannot be sped up using parallel computing). The paper shows how to get a delay function whose result can be checked efficiently, by using pairings. Carl Bootland has blogged about this talk.
• Xiu Xu, Haiyang Xue, Kunpeng Wang, Man Ho Au and Song Tian “Strongly Secure Authenticated Key Exchange from Supersingular Isogenies”. The paper gives an authenticated key exchange scheme based on isogenies, with very strong security properties in the CK+ model (and the random oracle model).
• Michael Naehrig and Joost Renes “Dual Isogenies and Their Application to Public-key Compression for Isogeny-based Cryptography”. The paper has some nice ideas about compression.
• Suhri Kim, Kisoon Yoon, Young-Ho Park and Seokhie Hong “Optimized Method for Computing Odd-Degree Isogenies on Edwards Curves”. The paper introduces some nice isogeny formulas that are similar to formulas already known for Montgomery curves.
• Salim Ali Altuğ and Yilei Chen “Hard Isogeny Problems over RSA Moduli and Groups with Infeasible Inversion”. The paper has fascinating ideas about constructing something like a group with infeasible inversion.

The invited talks were both about blockchain, so I don’t mention them here.

You can read about several other papers on the COSIC blog. Recordings were made of the talks, and will eventually appear on the IACR YouTube channel.

The rump session, hosted by Mehdi Tibouchi, featured a Samurai warrior to make sure speakers kept to time.

3. Recall the Multiparty Non-Interactive Key Exchange From Isogenies on Elliptic Curves (mentioned in this blog post). It relied on an invariant of products of elliptic curves. Recently Eric Rains, Karl Rubin, Travis Scholl, Shahed Sharif and Alice Silverberg have posted on arXiv the paper “Algebraic maps constant on isomorphism classes of unpolarized abelian varieties are constant”, which gives additional evidence that a useful invariant doesn’t exist.

4. Advance notice for ECC 2020 in Taiwan! You will find information here.

— Steven Galbraith

## Isogeny crypto

A long time ago, when pairing-based cryptography was new, cryptographers who did not fully understand the mathematics of pairings would sometimes make mistakes. They would assume that everything that can be done with discrete logarithms could also be done with pairings. Unfortunately, this would sometimes result in protocols that were insecure, or else un-implementable.

Indeed, such cases apparently still happen.

This situation is natural whenever a crypto tool that is technically subtle (and crypto tools always have technical subtleties) moves from “niche” into the mainstream. However it can result in incorrect schemes being published, for example because there are not enough experts to review all the papers.

Back in 2006, in response to those issues in pairing-based crypto, Kenny Paterson, Nigel Smart and I wrote the paper Pairings for Cryptographers. The abstract read:

Many research papers in pairing based cryptography treat pairings as a “black box”. These papers build cryptographic schemes making use of various properties of pairings. If this approach is taken, then it is easy for authors to make invalid assumptions concerning the properties of pairings. The cryptographic schemes developed may not be realizable in practice, or may not be as efficient as the authors assume. The aim of this paper is to outline, in as simple a fashion as possible, the basic choices that are available when using pairings in cryptography. For each choice, the main properties and efficiency issues are summarized. The paper is intended to be of use to non-specialists who are interested in using pairings to design cryptographic schemes.

This abstract exhibits the particular style of understated writing that is cultivated by British people. What we really meant was: Please read this and stop screwing up.

Rolling forward 15 years, isogeny-based cryptography is another area with many technical subtleties, but is moving into the mainstream of cryptography. Once again, not everything that can be done with discrete logarithms can necessarily be done with isogenies. It is therefore not surprising to find papers that have issues with their security.

It is probably time for an Isogenies for Cryptographers paper, but I don’t have time to write it. Instead, in this blog post I will mention several recent examples of incorrect papers. My hope is that these examples are instructional and will help prevent future mistakes. My intention is not to bring shame upon the authors.

• In 2014, D. Jao and V. Soukharev proposed an isogeny-based undeniable signature scheme. The security analysis of their scheme required the introduction of some computational problems about isogenies. Recently, S.-P. Merz, R. Minko and C. Petit (“Another look at some isogeny hardness assumptions”) have broken the computational assumptions and given attacks on the scheme.

In this case, there is no reason for the original authors to be embarrassed. There has been considerable progress in isogeny crypto in the last 5 years, and it is natural that new cryptanalytic tools would become available that could break earlier schemes.

• Several papers, including this one, have argued that a certain decisional assumption related to the SIDH isogeny cryptosystem should be hard.

Without going into all the details, in SIDH there is a base curve $E$ and four points $P_A, Q_A, P_B, Q_B$ on it. An SIDH instance includes a triple $(E_A, \phi_A(P_B), \phi_A(Q_B))$ where $\phi_A : E \to E_A$ is an isogeny of degree $2^n$. One of the basic computational problems is to compute $\phi_A$ when given this information.

The decisional assumption is to distinguish a valid triple $(E_A, \phi_A(P_B), \phi_A(Q_B))$ from another triple $(E', P', Q')$ where $E'$ is a supersingular curve, and $P', Q'$ are points satisfying various conditions.

At Provsec 2019, S. Terada and K. Yoneyama (“Password-based Authenticated Key Exchange from Standard Isogeny Assumptions”) proposed a password-based authenticated key exchange scheme for SIDH. The security against offline dictionary attacks was based on the hardness of a decision problem, but it was not the above decision problem. Instead, the security of the scheme under such an offline dictionary attack relies on the difficulty of distinguishing the triple $(E', P', Q')$ from a uniformly random binary string of the same length. This problem is not hard at all, since there are many properties that a valid triple satisfies (e.g., $E'$ is a supersingular elliptic curve, $P', Q' \in E'$, etc.) which would not be satisfied by a uniformly chosen binary string. Hence the scheme in the paper is not secure against offline dictionary attacks.
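To see why such a distinguisher is immediate, here is a toy sketch (my own illustration, not from the paper) over a tiny prime field. All names, the prime, and the message encoding are made up for the example: a "protocol message" encodes a curve and two points on it, so even a bare curve-membership test separates valid triples from uniformly random strings.

```python
import os

# Toy sketch (not real SIDH): a "protocol message" encodes a curve
# y^2 = x^3 + a*x + b over F_p together with two points on it, so a
# plain curve-membership test already distinguishes valid messages
# from uniformly random strings of the same length.

P = 10007  # toy prime, a stand-in for the (much larger) SIDH field

def on_curve(a, b, x, y, p=P):
    """Does (x, y) satisfy y^2 = x^3 + a*x + b over F_p?"""
    return (y * y - (x * x * x + a * x + b)) % p == 0

def looks_like_valid_triple(msg, p=P):
    """Parse msg as six field elements (a, b, xP, yP, xQ, yQ) and check
    that both points lie on the curve.  A uniformly random string passes
    only with probability roughly 1/p^2."""
    a, b, xP, yP, xQ, yQ = (m % p for m in msg)
    return on_curve(a, b, xP, yP, p) and on_curve(a, b, xQ, yQ, p)

def make_valid_triple(a=3, x=5, y=7, p=P):
    """Choose b so that the point (x, y) lies on y^2 = x^3 + a*x + b."""
    b = (y * y - x ** 3 - a * x) % p
    return [a, b, x, y, x, y]

def random_message():
    """Six uniformly random 32-bit values, playing the 'random string'."""
    return [int.from_bytes(os.urandom(4), "big") for _ in range(6)]
```

A real distinguisher would also check supersingularity and the extra conditions on the points, but membership alone already rules out indistinguishability from uniform strings.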

It is actually a really interesting open question to fix this, related to compression of SIDH protocol messages. If one could compress SIDH protocol messages down to the minimum number of bits, then one might actually be able to argue that the protocol message is indistinguishable from a uniform binary string. I don’t know any way to solve this problem and I think it is probably impossible. For the state-of-the-art in compression of SIDH messages see G. H. M. Zanon, M. A. Simplicio Jr, G. C. C. F. Pereira, J. Doliskani and P. S. L. M. Barreto, “Faster key compression for isogeny-based cryptosystems”.

• A very natural and desirable feature is to be able to hash to an SIDH or CSIDH public key. Unfortunately this is hard. Really hard.

J. Love and D. Boneh (“Supersingular Curves With Small Non-integer Endomorphisms”) show, among other things, that it is hard to hash to SIDH public keys. W. Castryck, L. Panny and F. Vercauteren (“Rational isogenies from irrational endomorphisms”) show it is hard to hash to CSIDH.

It would be great if someone can solve one of these problems, but I think they are both hard. In the meantime, cryptographers should not assume that it is possible to hash to public keys/protocol messages. This also limits the possibility to transport some protocols from the discrete-log world into the isogeny world.

• Due to the adaptive attacks on SIDH, one cannot get CCA1 or CCA2 secure encryption from SIDH without applying the Fujisaki-Okamoto transform (or something similar). Similarly, one cannot get non-interactive key exchange from SIDH. It is natural to try to get around this by some tweak to SIDH. R. Azarderakhsh, D. Jao and C. Leonardi gave a solution to this problem by running $k$ instances in parallel (e.g. for $k = 60$). S. Kayacan suggested two schemes that were hoped to be secure. However, adaptive attacks on both schemes have been found by my students and collaborators.
• A. Fujioka, K. Takashima, S. Terada and K. Yoneyama proposed an authenticated key exchange scheme similar to some previous discrete-log-based schemes that required gap assumptions in the security proof. Gap assumptions are of the form: Problem X is hard, even when given an oracle to solve the decisional variant of problem X.

For the isogeny context it is dangerous to use a gap assumption, as there are known arguments that one can reduce the computational isogeny problem to a decisional isogeny problem in certain cases. I already warned about this in the key exchange setting in this note. The solution of Fujioka et al. was to introduce a “degree-insensitive” version of the problem, which is essentially to extend the protocol to $\ell$-isogeny chains of any length (rather than fixed length). It is an interesting idea.

However, my student S. Dobson and I have given evidence (see On the Degree-Insensitive SI-GDH problem and assumption) that the distribution of public keys in the degree-insensitive case is close to uniform, and so it no longer makes sense to consider a gap problem. We do not have an attack on this protocol, but we conclude that the security proof is not correct. This shows again that one must be very careful when adapting ideas from discrete-log-based protocols to the isogeny setting.

• S. Furukawa, N. Kunihiro and K. Takashima (“Multi-party key exchange protocols from supersingular isogenies”) proposed an isogeny variant of the Burmester-Desmedt protocol for $n$-party key exchange in two rounds for any $n$. It is a nice paper, but Takashima (“Post-Quantum Constant-Round Group Key Exchange from Static Assumptions”) comments that:

Furukawa et al. [14] proposed an isogeny-based BD-type GKE protocol called SIBD. However, the security proof of SIBD (Theorem 4 in [14]) is imperfect, and several points remain unclear, for example, on how to simulate some public variables.

Once again, the scheme is not broken (as far as I know), but the security argument is not correct. Takashima gives a new security analysis in his paper (but I have not had time to check it).

What can authors do to avoid the dangers of isogeny crypto? There are some very good surveys of the basic ideas behind isogenies (for example see Mathematics of Isogeny Based Cryptography by Luca De Feo), but there is no good resource for cryptographers who want to use isogenies as a “black box”, and just want to know what is possible and what is not possible for building protocols. My best attempt so far is this note. In any case, I hope the present blog post can act as a cautionary tale: treating isogenies as a black box is risky!

— Steven Galbraith


## An attendee’s report: crypto means Crypto (2019)

Steven Galbraith, who maintains this blog, has been inviting me to write a blog post on several conferences for quite some time, and I’ve consistently postponed accepting the invitations for, well, too long, so here you go. Yet, the reader is kindly asked not to expect a masterpiece of literature in this very first attempt of mine at blogging (in other words: read on at your own peril; you won’t be able to unread it later).

Continuing the unavoidable trend for large conferences, Crypto 2019 offered two parallel tracks, and understandably I’ll report on but a few presentations of the one specific track I happened to choose at each segment of the program (I tried to vary my choice of track for every session block, though).

And yet, the dichotomy of parallel sessions got me into existential anguish (of sorts) right from the start for being unable to attend both. The very first parallel pair was on lattice-based ZK proofs on the one hand, and on certain symmetric constructions on the other. I chose symmetric constructions.
I found the notion of secure PRNGs that lack a random seed, introduced by S. Coretti et al. (“Seedless Fruit is the Sweetest: Random Number Generation, Revisited”), particularly intriguing (to say the least). The authors bypass the impossibility of attaining this by compromising: yes, the entropy source is still implicitly there despite the name, but instead of modeling the extraction procedure by feeding the PRNG a randomness seed, it assumes the underlying random oracle itself (called the “monolithic extractor”) is picked uniformly at random all at once. Building on this idea, the authors offer provably secure constructions and show how some existing ones are insecure. Unfortunately, delays between clicks and slide changes, coupled with a few other issues (including, I should say, a somewhat inordinate number of jokes), made it impossible to cover the extensive slide set in the allotted time… and to check if I got the ideas right.
My session choice meant I couldn’t attend the simultaneous presentation of the equally intriguing solution to the problem of constructing a non-interactive zero-knowledge (NIZK) proof system from the LWE assumption for any NP language, discovered by C. Peikert and S. Shiehian and described in their paper “Noninteractive Zero Knowledge for NP from (Plain) Learning with Errors”. That was a pity, but it was somewhat compensated by the work “Nonces Are Noticed: AEAD Revisited” by M. Bellare, Ruth Ng, and B. Tackmann. This work reveals an enormous gap between the usual theory of nonce-based schemes and the actual (sometimes even standardized) usage of those schemes in practice: nonces become a kind of metadata that can reveal a surprising amount of information about the users or devices originating them. Quite creepy, but the authors address it by providing new notions and solutions whereby the nonce is hidden as well, and also resist nonce misuse.

As usual, there was a session on FHE. In the work “On the Plausibility of Fully Homomorphic Encryption for RAMs” by A. Hamlin et al., the authors tackle the problem of hiding the sequence of memory addresses that are accessed when doing some processing on a large database. Using their notion of rewindable oblivious RAM, they obtain a preliminary single-hop scheme whose multiplicative running-time overhead is $O(\mathrm{polylog}\, N)$, where $N$ is the database size.
In the same session, Sri A. K. Thyagarajan talked about his joint work with G. Malavolta on “Homomorphic Time-Lock Puzzles and Applications”, whereby one can evaluate functions over puzzles without solving them. This amusing notion has nice applications like e-voting: in a simple setting, the voters create one encryption of 1 for the candidate they are voting for and distinct encryptions of 0 for all the others, so that summing up those sets over all voters yields the encrypted voting tally for all candidates (without revealing who voted for whom), while adding all the encryptions, and independently the squares of all the encryptions, for each individual voter yields a proof that they voted for exactly one candidate. Transforming the encryptions into time-lock puzzles makes the decryption operations public, and does away with the need for a trusted third party. Other applications were suggested, like sealed e-auction bidding, multiparty coin flipping, or multiparty contract signing.
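The vote-encoding arithmetic in this application is simple enough to sketch. In the toy code below (my simplification, not the paper's scheme), plain integers stand in for the homomorphic time-lock puzzles; the real scheme would add puzzles componentwise and only ever solve the sums.

```python
# Toy sketch of the e-voting arithmetic: plain integers stand in for
# homomorphic time-lock puzzles / encryptions.

def cast_ballot(choice, n_candidates):
    """One '1' for the chosen candidate, '0' for the rest.  A real voter
    would publish puzzles of these values and of their squares."""
    return [1 if i == choice else 0 for i in range(n_candidates)]

def ballot_is_wellformed(ballot):
    """sum v_i = 1 and sum v_i^2 = 1 force exactly one entry to be 1 and
    all others to be 0 (over the integers), i.e. exactly one vote."""
    return sum(ballot) == 1 and sum(v * v for v in ballot) == 1

def tally(ballots):
    """Componentwise sum over voters = per-candidate vote counts; in the
    real scheme this addition happens on the unopened puzzles."""
    return [sum(col) for col in zip(*ballots)]
```

For example, four voters choosing candidates 0, 2, 2 and 1 produce the tally `[1, 1, 2]`, while a malformed ballot such as `[1, 1, 0]` or `[2, -1, 0]` fails the well-formedness check.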

The session on the communication complexity of multiparty computation (MPC), which I chose over malleable codes, was no less striking, in particular the presentation by Mark Simkin and the one by Abhi Shelat.
Mark, who presented his work with S. Ghosh (“The Communication Complexity of Threshold Private Set Intersection”), started with applications of private set intersection (like the intersection of fingerprints) where one only cares about large intersections. In that case, it pays to set up the protocol so that one actually learns the complement of the intersection instead. One can see this as MPC of the ratio between characteristic polynomials, so that common factors (that is, those corresponding to the intersection) cancel. I didn’t quite gather whether a trusted third party is essential or just a secondary concern for the proposed protocol, though.
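The cancellation idea can be sketched in a few lines (my own toy illustration, not the actual protocol, which computes this obliviously): encode a set $S$ as $f_S(x) = \prod_{s \in S}(x - s)$; then the ratio $f_A/f_B$ depends only on the symmetric difference, because all factors coming from the intersection cancel.

```python
from fractions import Fraction

def char_poly_at(S, x):
    """Evaluate f_S(x) = prod_{s in S} (x - s)."""
    out = Fraction(1)
    for s in S:
        out *= x - s
    return out

# Two sets with a large intersection and a tiny symmetric difference.
A = {1, 2, 3, 4, 5, 6, 7, 8}
B = {1, 2, 3, 4, 5, 6, 7, 9}

x = Fraction(101)  # any evaluation point avoiding the roots
ratio = char_poly_at(A, x) / char_poly_at(B, x)

# Factors from the intersection cancel, so the ratio only sees A\B and B\A:
expected = char_poly_at(A - B, x) / char_poly_at(B - A, x)  # (101-8)/(101-9)
```

This is why learning the (small) complement of a large intersection is so much cheaper than learning the intersection itself.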
Abhi delighted the audience with a long, slow-motion clip of radical acrobatic skiing and the associated adrenaline rush. This blogger is not really sure the subject of MPC communication complexity causes a similar physiological effect, although the presenter claimed it does. After a recapitulation of the milestones of the subject, the audience was finally rewarded with a quite detailed mathematical treatment of the contribution, though this time at a very, very fast pace. Perhaps the subject does cause an adrenaline rush after all. Anyway, the work covered adaptively secure MPC with sublinear communication cost, in a scenario where the adversary can corrupt parties at any time, even after the end of the protocol, at which time the adversary can potentially corrupt all parties.

The session on post-quantum security focused on the quantum random oracle model (QROM). Both papers in the first part of that session, “How to Record Quantum Queries, and Applications to Quantum Indifferentiability” by M. Zhandry, and “Quantum Security Proofs Using Semi-classical Oracles” by A. Ambainis, M. Hamburg and D. Unruh, were thickly theoretical. The talk on “Quantum Indistinguishability of Random Sponges” by J. Czajkowski, A. Hülsing, and C. Schaffner was more approachable in my opinion (TL;DR: the sponge construction can be used to build quantum-secure pseudorandom functions when the adversary has superposition access to the input-output behavior of the sponge but not to the sponge’s internal function or permutation function itself, assumed to be random in their model). Sure enough, the more theoretically-oriented results have a clear and welcome niche even here, since these results build upon Zhandry’s prior switching lemma for pseudo-random functions or permutations from 2015. Zhandry is also a co-author of another paper from that session, “Revisiting Post-Quantum Fiat-Shamir” (joint work with Q. Liu), which was presented together with the last one, “Security of the Fiat-Shamir Transformation in the Quantum Random-Oracle Model” by J. Don et al.

Several other works are worth mentioning; I’ll mention a few more, but alas, not a full list: hanc blogis exiguitas non caperet. I found the paper “Unifying Leakage Models on a Rényi Day” by T. Prest, D. Goudarzi, A. Martinelli, and A. Passelègue, whose presentation I could not attend for not being proficient at ubiquity, quite entertaining (I assure the reader that this has nothing to do with my living in the often rainy Seattle area). The paper “It Wasn’t Me! Repudiability and Claimability of Ring Signatures” by S. Park and A. Sealfon deals with the question of enabling repudiation for ring signature non-signers, and claimability for actual signers of ring signatures. The importance of the first is to deflect undue responsibility for ring signatures produced by another ring member, and the importance of the latter lies in taking due credit for signing when that turns out to be, or becomes, desirable, but prior notions of security for ring signatures were ambivalent at best on such issues. Besides updated notions, the authors offer a repudiable scheme based on a variety of assumptions (for instance, bilinear maps), an unclaimable scheme based on the SIS assumption, and constructions for claimable or unrepudiable schemes that can be obtained from certain existing ring signatures.

Last but obviously not least, three papers got awards:

1. “Cryptanalysis of OCB2: Attacks on Authenticity and Confidentiality,” by A. Inoue, T. Iwata, K. Minematsu, and B. Poettering got the Best Paper award;
2. “Quantum Cryptanalysis in the RAM Model: Claw-Finding Attacks on SIKE,” by S. Jaques and J. M. Schanck, got a Best Young Researcher Paper award;
3. “Fully Secure Attribute-Based Encryption for $t$-CNF from LWE,” by R. Tsabary, also got a Best Young Researcher Paper award.

The papers are quite well written. The interested readers are encouraged to avail themselves of them for all of the fascinating details of these works. I was personally interested in the second of them and, to a smaller degree, the first, so I’ll try and briefly summarize those (I’m afraid the third falls somewhat outside my areas of expertise so I refer the interested reader to the corresponding paper).

Kazuhiko Minematsu began describing their work on OCB2 by showing how easy it is to attain a minimal forgery with one single encryption query. The general attack follows the model previously applied against the EAX Prime mode of operation, which lacked a formal security analysis (so it was not really a big surprise that the scheme turned out to succumb to attacks). However, OCB2 was supported by a full-fledged security proof and remained unscathed for fifteen years. The attack described in the paper stems from an observed gap in that security proof which turned out to be a severe flaw. On the bright side, the attack extends neither to OCB1 nor OCB3, nor to certain suggested tweaks to OCB2. This shows that the overall structure of OCB is sound, but also shows the necessity of actively verifying proofs.

Sam Jaques explained that their claw-finding paper set forth three goals. The first goal was to fairly compare attacks with classical and quantum resources. The second goal was to view gates as processes (which is indeed the view suggested by current quantum technology). The third goal was to include error correction as part of the cost and effort of the attack, since those are essential to overcome the exquisite fragility (in the sense of susceptibility to decoherence) of quantum computations. Their main idea was thus to view quantum memory as a physical system acted upon by a memory controller. As such, it undergoes two kinds of time evolution: free (caused by noise) and costly (caused by the controller). The computation cost becomes the number of iterations (ignoring construction costs, focusing on the controller cost instead). Two cost models are covered: the so-called G (gate) cost, which assumes passive error correction and 1 RAM operation per gate, and the DW (depth-width) cost, which counts 1 RAM operation per qubit per time unit. This sets the framework for their analysis of the claw-finding algorithm, which is a meet-in-the-middle attack to recover a path spelled out by the private key in the isogeny graph, between the initial curve and the final one (which is part of the public key). It can be realized by Tani’s collision-finding algorithm, by following random walks on two Johnson graphs, looking for a collision, and doing all computations in a quantum setting. The complexity is $\tilde{O}(p^{1/6})$. Despite the paper title, a quite surprising conclusion of their analysis is that SIDH and SIKE are actually harder to break than initially thought. In particular, it appears that the minimum SIKE parameter set (namely, SIKE434) cannot be broken by any known attack in less than the cost and effort needed to break AES128, specifically $2^{143}$. 
This scales to other parameter sets, to the effect that the revised SIKE parameters for the 2nd round of the NIST PQC process are smaller than their 1st round counterparts.
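As a rough back-of-envelope check of these numbers (my own arithmetic, assuming the SIKEp434 prime has about 434 bits): the naive $\tilde{O}(p^{1/6})$ query count alone would suggest an attack around $2^{72}$, so it is the realistic charging for memory and error correction that pushes the cost up to the quoted $2^{143}$.

```python
# Back-of-envelope: Tani query count vs the quoted attack cost.
p_bits = 434                 # approximate bit length of the SIKEp434 prime
naive_exponent = p_bits / 6  # log2 of the O~(p^(1/6)) query count

print(round(naive_exponent))  # ~72 "queries", far below the ~2^143 cost
                              # once memory and error correction are charged
```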

So, there you have it, a brief (and necessarily incomplete, but hopefully helpful) appraisal of Crypto 2019. Scripsi. Vale.

## SIAM Conference on Applied Algebraic Geometry 2019

So here we are in the nice city of Bern, in the Teutonic part of Switzerland, for the SIAM Conference on Applied Algebraic Geometry 2019, which this year counts more than 750 attendees. The weather is warm enough, but the isogenies topic has never been so hot! For this edition of the conference, Tanja Lange, Chloe Martindale and Lorenz Panny managed to organise a really great isogenies mini-symposium spread over 4 days.

Day #1

Day #1 started strong. After a quick overview of isogenies by Chloe Martindale and Lorenz Panny, including an introduction to SIDH and CSIDH, the invited speakers took the stage:

This concluded Day #1.

Day #2

• David Jao discussed recent progress in implementing isogeny-based cryptosystems in constant time to resist side-channel attacks. In particular he presented results from his recent paper (joint work with Jalali, Azarderakhsh and Kermani). One of the interesting observations was that isogeny computation on Edwards curves is relatively simple to implement in constant time (as expected), but is faster only for isogenies of degree 5 or more. He concluded his talk with some really great demos (as also reported by Thomas Decru in a second blog post).
• Christophe Petit surveyed known results on the security of isogeny-based protocols including the celebrated active attack on SIDH.
• Frederik Vercauteren gave the first of two sessions dedicated to CSI-FiSh (joint work with Beullens and Kleinjung). This part focused on the new record class group computation they achieved: computing the class group structure of CSIDH. It seems they reused some code previously written by Kleinjung, and for the final computation of the closest vector in the lattice Léo Ducas gave a hand. While the techniques used were standard, it was still a remarkably big task involving several core-years. He concluded the talk with a nice list of open problems.
• David Kohel presented joint work with his student Leonardo Colò that was recently published at NutMiC 2019. The construction, called OSIDH (which stands for oriented supersingular isogeny Diffie-Hellman), is built on top of O-oriented supersingular elliptic curves (as defined in the paper).

Day #3

Day #3 of isogenies opened with the plenary session delivered by Kristin Lauter. Her talk, as usual, was really inspiring and was about the history of Supersingular Isogeny Graphs in Cryptography. She covered the Charles-Goren-Lauter (CGL) hash construction and the panorama of post-quantum cryptography. After a quick break and a commute to the other building we were back at the isogenies mini-symposium:

• Thomas Decru presented a new CGL-type genus-two hash function (joint work with Wouter Castryck and Benjamin Smith). They reformulated a previous construction by Takashima (broken by Yan Bo Ti and Victor Flynn) by using genus-two superspecial subgraphs.
• Jean-François Biasse’s talk was about algorithms for finding isogenies between supersingular elliptic curves. He showed that under some circumstances the generic Grover algorithm might beat the more isogeny-specific Tani algorithm. This talk was also covered in a blog post by Thomas Decru.
• Benjamin Wesolowski talked about his systematic approach to analysing horizontal isogeny graphs for abelian varieties. He covered some neat theorems he proved (in a joint paper with Brooks and Jetchev) and concluded by saying that his results are not enough to say anything about the CSIDH case, but, as we saw in the next talk, they are extremely useful in the higher-genus cases.
• Dimitar Jetchev’s talk was a natural continuation of the previous one. He focused on vertical isogenies instead, and announced a possible solution to the DLP on genus-3 hyperelliptic curves.

Day #4

And here we arrived already to the last day:

• Ward Beullens delivered the second part of the CSI-FiSh paper (there is also a blog post about it here). In this part he focused on the identification scheme and signature, including the zero-knowledge proof and the optimizations.
• Florian Hess tried to answer an open problem posed in a recent paper about multiparty non-interactive key exchange. Namely, his talk was about the possibility of building invariant maps from isogenies. His conclusions were not really positive, at least so far.
• Luca De Feo brought a new topic to the isogeny world: #blockchain! He presented a new Verifiable Delay Function construction based on supersingular isogenies and pairings (joint work with Simon Masson, Christophe Petit and Antonio Sanso). Despite using isogenies, the construction is not quantum-resistant due to the use of pairings. A blog post about this construction can be found here.

• Jeff Burdges talked about some real-world applications of isogenies, including a hybrid scheme that might be used in mix networks, consensus algorithms in blockchains, and encrypt-to-the-future schemes to be employed in auctions.

That’s all from SIAM AG. See you in 2 years.

— Antonio Sanso