Isogeny crypto

A long time ago, when pairing-based cryptography was new, cryptographers who did not fully understand the mathematics of pairings would sometimes make mistakes. They would assume that everything that can be done with discrete logarithms could also be done with pairings. Unfortunately, this would sometimes result in protocols that were insecure, or else un-implementable.

Indeed, such cases apparently still happen:

This situation is natural whenever a crypto tool that is technically subtle (and crypto tools always have technical subtleties) moves from “niche” into the mainstream. However, it can result in incorrect schemes being published, for example because there are not enough experts to review all the papers.

Back in 2006, in response to those issues in pairing-based crypto, Kenny Paterson, Nigel Smart and I wrote the paper Pairings for Cryptographers. The abstract read:

Many research papers in pairing based cryptography treat pairings as a “black box”. These papers build cryptographic schemes making use of various properties of pairings. If this approach is taken, then it is easy for authors to make invalid assumptions concerning the properties of pairings. The cryptographic schemes developed may not be realizable in practice, or may not be as efficient as the authors assume. The aim of this paper is to outline, in as simple a fashion as possible, the basic choices that are available when using pairings in cryptography. For each choice, the main properties and efficiency issues are summarized. The paper is intended to be of use to non-specialists who are interested in using pairings to design cryptographic schemes.

This abstract exhibits the particular style of understated writing that is cultivated by British people. What we really meant was: Please read this and stop screwing up.

Rolling forward 15 years, isogeny-based cryptography is another area with many technical subtleties that is now moving into the mainstream of cryptography. Once again, not everything that can be done with discrete logarithms can necessarily be done with isogenies. It is therefore not surprising to find papers that have issues with their security.

It is probably time for an Isogenies for Cryptographers paper, but I don’t have time to write it. Instead, in this blog post I will mention several recent examples of incorrect papers. My hope is that these examples are instructional and will help prevent future mistakes. My intention is not to bring shame upon the authors.

  • In 2014, D. Jao and V. Soukharev proposed an isogeny-based undeniable signature scheme. The security analysis of their scheme required the introduction of some new computational problems about isogenies. Recently, S.-P. Merz, R. Minko and C. Petit (“Another look at some isogeny hardness assumptions”) have broken these computational assumptions and formulated attacks on the scheme.

    In this case, there is no reason for the original authors to be embarrassed. There has been considerable progress in isogeny crypto in the last 5 years, and it is natural that new cryptanalytic tools would become available that could break earlier schemes.

  • Several papers, including this one, have argued that a certain decisional assumption related to the SIDH isogeny cryptosystem should be hard.

    Without going into all the details, in SIDH there is a base curve E and four points P_A, Q_A, P_B, Q_B on it. An SIDH instance includes a triple (E_A, \phi_A(P_B), \phi_A(Q_B)) where \phi_A : E \to E_A is an isogeny of degree 2^n. One of the basic computational problems is to compute \phi_A when given this information.

    The decisional assumption is to distinguish a valid triple (E_A, \phi_A(P_B), \phi_A(Q_B)) from another triple (E', P', Q') where E' is a supersingular curve, and P', Q' are points satisfying various conditions.

    At Provsec 2019, S. Terada and K. Yoneyama (“Password-based Authenticated Key Exchange from Standard Isogeny Assumptions”) proposed a password-based authenticated key exchange scheme for SIDH. The security against offline dictionary attacks was based on the hardness of a decision problem, but it was not the above decision problem. Instead, the security of the scheme under such an offline dictionary attack relies on the difficulty of distinguishing the triple (E', P', Q') from a uniformly random binary string of the same length. This problem is not hard at all, since there are many properties that a valid triple must satisfy (e.g., E' is a supersingular elliptic curve, P', Q' \in E', etc.) which would not be satisfied by a uniformly chosen binary string. Hence the scheme in the paper is not secure against offline dictionary attacks.

    It is actually a really interesting open question to fix this, related to compression of SIDH protocol messages. If one could compress SIDH protocol messages down to the minimum number of bits, then one might actually be able to argue that the protocol message is indistinguishable from a uniform binary string. I don’t know any way to solve this problem and I think it is probably impossible. For the state-of-the-art in compression of SIDH messages see G. H. M. Zanon, M. A. Simplicio Jr, G. C. C. F. Pereira, J. Doliskani and P. S. L. M. Barreto, “Faster key compression for isogeny-based cryptosystems”.
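To illustrate the distinguishing argument, here is a toy Python sketch. All parameters are made up and cryptographically meaningless (real SIDH works over a large \mathbb{F}_{p^2} with compressed encodings), and the names `on_curve`, `point_on` and `plausible_message` are illustrative, not from any SIDH library. A distinguisher simply checks whether the decoded points satisfy the curve equation, which a uniformly random string does only with negligible probability.

```python
# Toy illustration (made-up small parameters, NOT real SIDH): a protocol
# message contains a curve coefficient a and two points P', Q' which must
# lie on the Montgomery curve y^2 = x^3 + a*x^2 + x over F_p.  A uniformly
# random string decodes to points satisfying this only with probability
# about 1/p^2, so a distinguisher just checks the curve equation.
p = 103  # toy prime, p % 4 == 3, so sqrt(u) = u^((p+1)/4) when u is a square

def on_curve(a, pt):
    x, y = pt
    return (y * y - (x**3 + a * x * x + x)) % p == 0

def point_on(a):
    # construct some point on the curve (to build a "valid" message)
    for x in range(1, p):
        rhs = (x**3 + a * x * x + x) % p
        y = pow(rhs, (p + 1) // 4, p)
        if (y * y) % p == rhs:      # rhs was a square: (x, y) is on the curve
            return (x, y)

def plausible_message(a, P, Q):
    # necessary conditions a real message satisfies; a random string fails them
    return on_curve(a, P) and on_curve(a, Q)
```

For instance, `plausible_message(3, point_on(3), point_on(3))` accepts, while a tuple containing an off-curve point such as (2, 5) is rejected. Real SIDH messages satisfy still more structure (correct point orders, pairing relations), so compressing them until they look uniform is exactly the compression question mentioned above.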

  • A very natural and desirable feature is to be able to hash to an SIDH or CSIDH public key. Unfortunately this is hard. Really hard.

    D. Boneh and J. Love (“Supersingular Curves With Small Non-integer Endomorphisms”) show, among other things, that it is hard to hash to SIDH public keys. W. Castryck, L. Panny and F. Vercauteren (“Rational isogenies from irrational endomorphisms”) show that it is hard to hash to CSIDH public keys.

    It would be great if someone can solve one of these problems, but I think they are both hard. In the meantime, cryptographers should not assume that it is possible to hash to public keys/protocol messages. This also limits the possibility to transport some protocols from the discrete-log world into the isogeny world.

  • Due to the adaptive attacks on SIDH, one cannot get CCA1 or CCA2 secure encryption from SIDH without applying the Fujisaki-Okamoto transform (or something similar). Similarly, one cannot get non-interactive key exchange from SIDH. It is natural to try to get around this with some tweak to SIDH. R. Azarderakhsh, D. Jao and C. Leonardi gave a solution to this problem by running k instances of the protocol in parallel (e.g., for k = 60). S. Kayacan suggested two schemes that were hoped to be secure; however, adaptive attacks on both schemes have since been found by my students and collaborators.
  • A. Fujioka, K. Takashima, S. Terada and K. Yoneyama proposed an authenticated key exchange scheme similar to some previous discrete-log-based schemes that required gap assumptions in the security proof. Gap assumptions are of the form: Problem X is hard, even when given an oracle to solve the decisional variant of problem X.

    For the isogeny context it is dangerous to use a gap assumption, as there are known arguments that reduce the computational isogeny problem to a decisional isogeny problem in certain cases. I already warned about this in the key exchange setting in this note. The solution of Fujioka et al. was to introduce a “degree-insensitive” version of the problem, which essentially extends the protocol to \ell-isogeny chains of arbitrary length (rather than fixed length). It is an interesting idea.

    However, my student S. Dobson and I have given evidence (see On the Degree-Insensitive SI-GDH problem and assumption) that the distribution of public keys in the degree-insensitive case is close to uniform, and so it no longer makes sense to consider a gap problem. We do not have an attack on this protocol, but we conclude that the security proof is not correct. This shows again that one must be very careful when adapting ideas from discrete-log-based protocols to the isogeny setting.

  • S. Furukawa, N. Kunihiro and K. Takashima (“Multi-party key exchange protocols from supersingular isogenies”) proposed an isogeny variant of the Burmester-Desmedt protocol for n-party key exchange in two rounds for any n. It is a nice paper, but Takashima (“Post-Quantum Constant-Round Group Key Exchange from Static Assumptions”) comments that:

    Furukawa et al. [14] proposed an isogeny-based BD-type GKE protocol called SIBD. However, the security proof of SIBD (Theorem 4 in [14]) is imperfect, and several points remain unclear, for example, on how to simulate some public variables.

    Once again, the scheme is not broken (as far as I know), but the security argument is not correct. Takashima gives a new security analysis in his paper (but I have not had time to check it).

What can authors do to avoid the dangers of isogeny crypto? There are some very good surveys of the basic ideas behind isogenies (for example see Mathematics of Isogeny Based Cryptography by Luca De Feo), but there is no good resource for cryptographers who want to use isogenies as a “black box”, and just want to know what is possible and what is not possible for building protocols. My best attempt so far is this note. In any case, I hope the present blog post can act as a cautionary tale: treating isogenies as a black box is risky!

— Steven Galbraith

Posted in Uncategorized | 1 Comment

An attendee’s report: crypto means Crypto (2019)

Steven Galbraith, who maintains this blog, has been inviting me to write a blog post on several conferences for quite some time, and I’ve consistently postponed accepting the invitations for, well, too long, so here you go. Yet, the reader is kindly asked not to expect a masterpiece of literature in this very first attempt of mine at blogging (in other words: read on at your own peril; you won’t be able to unread it later).

Continuing the unavoidable trend for large conferences, Crypto 2019 offered two parallel tracks, and understandably I’ll report on but a few presentations of the one specific track I happened to choose at each segment of the program (I tried to vary my choice of track for every session block, though).

And yet, the dichotomy of parallel sessions got me into existential anguish (of sorts) right from the start for being unable to attend both. The very first parallel pair was on lattice-based ZK proofs on the one hand, and on certain symmetric constructions on the other. I chose symmetric constructions.
I found the notion of secure PRNGs that lack a random seed, introduced by S. Coretti et al. (“Seedless Fruit is the Sweetest: Random Number Generation, Revisited”), particularly intriguing (to say the least). The authors bypass the impossibility of attaining this by compromising: yes, the entropy source is still implicitly there despite the name, but instead of modeling the extraction procedure by feeding the PRNG a randomness seed, it assumes the underlying random oracle itself (called the “monolithic extractor”) is picked uniformly at random all at once. Building on this idea, the authors offer provably secure constructions and show how some existing ones are insecure. Unfortunately, delays between clicks and slide changes, coupled with a few other issues (including, I should say, a somewhat inordinate number of jokes), made it impossible to cover the extensive slide set in the allotted time… and to check if I got the ideas right.
My session choice meant I couldn’t attend the simultaneous presentation of the equally intriguing solution to the problem of constructing a non-interactive zero-knowledge (NIZK) proof system from the LWE assumption for any NP language, discovered by C. Peikert and S. Shiehian and described in their paper “Noninteractive Zero Knowledge for NP from (Plain) Learning with Errors”. That was a pity, but it was somewhat compensated by the work “Nonces Are Noticed: AEAD Revisited” by M. Bellare, Ruth Ng, and B. Tackmann. This work reveals an enormous gap between the usual theory of nonce-based schemes and the actual (sometimes even standardized) usage of those schemes in practice: nonces become a kind of metadata that can reveal a surprising amount of information about the users or devices originating them. Quite creepy, but the authors address it by providing new notions and constructions in which the nonce is hidden as well, and which also resist nonce misuse.

As usual, there was a session on FHE. In the work “On the Plausibility of Fully Homomorphic Encryption for RAMs” by A. Hamlin et al., the authors tackle the problem of hiding the sequence of memory addresses that are accessed when doing some processing on a large database. Using their notion of rewindable oblivious RAM, they obtain a preliminary single-hop scheme whose multiplicative running-time overhead is O(\mathrm{polylog} N), where N is the database size.
In the same session, Sri A. K. Thyagarajan talked about his joint work with G. Malavolta on “Homomorphic Time-Lock Puzzles and Applications”, whereby one can evaluate functions over puzzles without solving them. This amusing notion has nice applications like e-voting: in a simple setting, the voters create one encryption of 1 for the candidate they are voting for and distinct encryptions of 0 for all the others, so that summing those up over all voters yields the encrypted voting tally for all candidates (without revealing who voted for whom), while adding all the encryptions, and independently the squares of all the encryptions, for each individual voter yields a proof that they voted for exactly one candidate. Transforming the encryptions into time-lock puzzles makes the decryptions publicly computable (after the time delay), and does away with the need for a trusted third party. Other applications were suggested, like sealed-bid e-auctions, multiparty coin flipping, and multiparty contract signing.
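The ballot arithmetic described above is easy to sketch in the clear (a hypothetical plaintext-level illustration; in the actual scheme these sums would be evaluated homomorphically over time-lock puzzles, not on raw votes):

```python
# Plaintext sketch of the one-hot ballot check from the e-voting example
# (the real scheme evaluates these sums under homomorphic time-lock
# puzzles rather than in the clear).

def valid_ballot(votes):
    # For integer entries: sum(v) == 1 and sum(v^2) == 1 holds exactly
    # when one entry is 1 and all the others are 0 (a one-hot vector).
    return sum(votes) == 1 and sum(v * v for v in votes) == 1

def tally(ballots):
    # Coordinate-wise sum over all voters gives the per-candidate tally
    # (done under encryption, it reveals nothing about individual votes).
    return [sum(col) for col in zip(*ballots)]
```

For example, `tally([[0, 1, 0], [1, 0, 0], [0, 1, 0]])` gives `[1, 2, 0]`, while a malformed ballot such as `[1, 1, -1]` still sums to 1 but is caught by the sum-of-squares check.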

The session on the communication complexity of multiparty computation (MPC), which I chose over malleable codes, was no less striking, in particular the presentation by Mark Simkin and the one by Abhi Shelat.
Mark, who presented his work with S. Ghosh (“The Communication Complexity of Threshold Private Set Intersection”), started with applications of private set intersection (like the intersection of fingerprints) where one only cares about large intersections. In that case, it pays to set up the protocol so that one actually learns the complement of the intersection instead. One can see this as MPC of the ratio between characteristic polynomials, so that common factors (that is, those corresponding to the intersection) cancel. I didn’t quite gather whether a trusted third party is essential or just a secondary concern for the proposed protocol, though.
Abhi delighted the audience with a long, slow-motion clip of radical acrobatic skiing and the associated adrenaline rush. This blogger is not really sure the subject of MPC communication complexity causes a similar physiological effect, although the presenter claimed it does. After a recapitulation of the milestones of the subject, the audience was finally rewarded with a quite detailed mathematical treatment of the contribution, though this time at a very, very fast pace. Perhaps the subject does cause an adrenaline rush after all. Anyway, the work covered adaptively secure MPC with sublinear communication cost, in a scenario where the adversary can corrupt parties at any time, even after the end of the protocol, at which time the adversary can potentially corrupt all parties.

The session on post-quantum security focused on the quantum random oracle model (QROM). Both papers in the first part of that session, “How to Record Quantum Queries, and Applications to Quantum Indifferentiability” by M. Zhandry, and “Quantum Security Proofs Using Semi-classical Oracles” by A. Ambainis, M. Hamburg and D. Unruh, were thickly theoretical. The talk on “Quantum Indistinguishability of Random Sponges” by J. Czajkowski, A. Hülsing, and C. Schaffner was more approachable in my opinion (TL;DR: the sponge construction can be used to build quantum-secure pseudorandom functions when the adversary has superposition access to the input-output behavior of the sponge but not to the sponge’s internal function or permutation function itself, assumed to be random in their model). Sure enough, the more theoretically-oriented results have a clear and welcome niche even here, since these results build upon Zhandry’s prior switching lemma for pseudo-random functions or permutations from 2015. Zhandry is also a co-author of another paper from that session, “Revisiting Post-Quantum Fiat-Shamir” (joint work with Q. Liu), which was presented together with the last one, “Security of the Fiat-Shamir Transformation in the Quantum Random-Oracle Model” by J. Don et al.

Several other works are worth mentioning; I’ll mention a few more, but alas, not a full list: hanc blogis exiguitas non caperet. I found the paper “Unifying Leakage Models on a Rényi Day” by T. Prest, D. Goudarzi, A. Martinelli, and A. Passelègue, whose presentation I could not attend for not being proficient at ubiquity, quite entertaining (I assure the reader that this has nothing to do with my living in the often rainy Seattle area). The paper “It Wasn’t Me! Repudiability and Claimability of Ring Signatures” by S. Park and A. Sealfon deals with the question of enabling repudiation for non-signers of ring signatures, and claimability for their actual signers. The importance of the first is to deflect undue responsibility for ring signatures produced by another ring member, and the importance of the latter lies in taking due credit for signing when that turns out to be, or becomes, desirable; prior notions of security for ring signatures were ambivalent at best on such issues. Besides updated notions, the authors offer a repudiable scheme based on a variety of assumptions (for instance, bilinear maps), an unclaimable scheme based on the SIS assumption, and constructions for claimable or unrepudiable schemes that can be obtained from certain existing ring signatures.

Last but obviously not least, three papers got awards:

  1. “Cryptanalysis of OCB2: Attacks on Authenticity and Confidentiality,” by A. Inoue, T. Iwata, K. Minematsu, and B. Poettering got the Best Paper award;
  2. “Quantum Cryptanalysis in the RAM Model: Claw-Finding Attacks on SIKE,” by S. Jaques and J. M. Schanck, got the Best Young Researcher Paper award;
  3. “Fully Secure Attribute-Based Encryption for t-CNF from LWE,” by R. Tsabary, got the Best Young Researcher Paper award.

The papers are quite well written, and interested readers are encouraged to avail themselves of them for all the fascinating details of these works. I was personally interested in the second of them and, to a lesser degree, the first, so I’ll try and briefly summarize those (I’m afraid the third falls somewhat outside my areas of expertise, so I refer the interested reader to the corresponding paper).

Kazuhiko Minematsu began describing their work on OCB2 by showing how easy it is to obtain a minimal forgery with one single encryption query. The general attack follows the model previously applied against the EAX Prime mode of operation, which lacked a formal security analysis (so it was not really a big surprise that that scheme turned out to succumb to attacks). However, OCB2 was supported by a full-fledged security proof and remained unscathed for fifteen years. The attack described in the paper stems from an observed gap in that security proof which turned out to be a severe flaw. On the bright side, the attack extends to neither OCB1 nor OCB3, nor to certain suggested tweaks of OCB2. This shows that the overall structure of OCB is sound, but also that active verification of proofs is necessary.

Sam Jaques explained that their claw-finding paper set forth three goals. The first goal was to fairly compare attacks with classical and quantum resources. The second goal was to view gates as processes (which is indeed the view suggested by current quantum technology). The third goal was to include error correction as part of the cost and effort of the attack, since those are essential to overcome the exquisite fragility (in the sense of susceptibility to decoherence) of quantum computations. Their main idea was thus to view quantum memory as a physical system acted upon by a memory controller. As such, it undergoes two kinds of time evolution: free (caused by noise) and costly (caused by the controller). The computation cost becomes the number of iterations (ignoring construction costs, focusing on the controller cost instead). Two cost models are covered: the so-called G (gate) cost, which assumes passive error correction and 1 RAM operation per gate, and the DW (depth-width) cost, which counts 1 RAM operation per qubit per time unit. This sets the framework for their analysis of the claw-finding algorithm, which is a meet-in-the-middle attack to recover a path spelled out by the private key in the isogeny graph, between the initial curve and the final one (which is part of the public key). It can be realized by Tani’s collision-finding algorithm, by following random walks on two Johnson graphs, looking for a collision, and doing all computations in a quantum setting. The complexity is \tilde{O}(p^{1/6}). Despite the paper title, a quite surprising conclusion of their analysis is that SIDH and SIKE are actually harder to break than initially thought. In particular, it appears that the minimum SIKE parameter set (namely, SIKE434) cannot be broken by any known attack in less than the cost and effort needed to break AES128, specifically 2^{143}. 
This scales to other parameter sets, to the effect that the revised SIKE parameters for the 2nd round of the NIST PQC process are smaller than their 1st round counterparts.

So, there you have it, a brief (and necessarily incomplete, but hopefully helpful) appraisal of Crypto 2019. Scripsi. Vale.

Paulo Barreto


SIAM Conference on Applied Algebraic Geometry 2019

So here we are in the nice city of Bern, in the German-speaking part of Switzerland, for the SIAM Conference on Applied Algebraic Geometry 2019, which this year counts more than 750 attendees. The weather is warm enough, but the isogeny topic has never been so hot! For this edition of the conference, Tanja Lange, Chloe Martindale and Lorenz Panny managed to organise a really great isogenies mini-symposium spread over 4 days.

Day #1

Day #1 started strong. After a quick overview of isogenies by Chloe Martindale and Lorenz Panny, including an introduction to SIDH and CSIDH, the invited speakers took the stage:

This concluded Day #1

Day #2

On Day #2 we had:

  • David Jao discussing recent progress in implementing isogeny-based cryptosystems in constant time to resist side-channel attacks. In particular, he presented results from his recent paper (joint work with Jalali, Azarderakhsh and Kermani). One of the interesting observations made was that isogeny computation on Edwards curves is relatively simple to implement in constant time (as expected), but is faster only for isogenies of degree 5 or more. He concluded his talk with some really great demos (as also reported by Thomas Decru in a second blog post).
  • Christophe Petit surveyed known results on the security of isogeny-based protocols including the celebrated active attack on SIDH.
  • Frederik Vercauteren gave the first of two sessions dedicated to CSI-FiSh (joint work with Beullens and Kleinjung). This part had as a focus the new record class group computation they achieved while computing the class group structure of CSIDH. It seems they reused some of the code previously written by Kleinjung, and for the final computation of the closest vector in the lattice Léo Ducas gave a hand. While the technique used for the computation was standard, it was still a remarkably big task involving several core years. He concluded the talk with a nice list of open problems.
  • David Kohel presented joint work with his student Leonardo Colò that was recently published at NutMiC 2019. This construction, called OSIDH (which stands for oriented supersingular isogeny Diffie-Hellman), is built on top of O-oriented supersingular elliptic curves (as defined in the paper).

Day #3

Day #3 of isogenies opened with the plenary session delivered by Kristin Lauter. Her talk, as usual, was really inspiring, and was about the history of supersingular isogeny graphs in cryptography. She covered the Charles-Goren-Lauter (CGL) hash construction and the panorama of post-quantum cryptography. After a quick break and a commute to the other building, we were back at the isogenies mini-symposium:

  • Thomas Decru presented a new CGL-type genus-two hash function (joint work with Wouter Castryck and Benjamin Smith). They reformulated a previous construction by Takashima (broken by Yan Bo Ti and Victor Flynn) by using genus-two superspecial subgraphs.
  • Jean-François Biasse’s talk was about algorithms for finding isogenies between supersingular elliptic curves. He showed that under some circumstances the generic Grover algorithm might beat the more isogeny-specific Tani algorithm. This talk was also covered by a blog post of Thomas Decru.
  • Benjamin Wesolowski talked about his systematic approach to analysing horizontal isogeny graphs of abelian varieties. He covered some neat theorems he proved (in a joint paper with Brooks and Jetchev) and concluded by saying that his results are not enough to say anything about the CSIDH case, but, as we saw in the next talk, they are extremely useful in the higher-genus cases.
  • Dimitar Jetchev’s talk was a natural continuation of the previous one. He focused on vertical isogenies instead, and announced a possible solution to the DLP on genus-3 hyperelliptic curves.

Day #4

And here we are already at the last day:

  • Ward Beullens delivered the second part of the CSI-FiSh paper (here there is also a blog post about it). In this part he focused on the identification scheme and the signature, including the zero-knowledge proofs and the optimizations.
  • Florian Hess tried to give an answer to an open problem posed in a recent paper about multiparty non-interactive key exchange: namely, the possibility of building invariant maps from isogenies. His conclusions were not really positive, at least so far.
  • Luca De Feo brought a new topic to the isogeny world: #blockchain! He presented a new verifiable delay function (VDF) construction based on supersingular isogenies and pairings (joint work with Simon Masson, Christophe Petit and Antonio Sanso). Despite using isogenies, the construction is not quantum-resistant, due to the use of pairings. A blog post about this construction can be found here.

  • Jeff Burdges talked about some real-world applications of isogenies, including a hybrid scheme that might be used in mix networks, consensus algorithms in blockchains, and encrypt-to-the-future schemes to be employed in auctions.

That’s all from SIAM AG. See you in 2 years.

— Antonio Sanso


ECC 2019, Bochum, December 2-4

It has just been announced that ECC 2019 will be held in Bochum, Germany, on December 2-4, 2019. More details will eventually be available at https://eccworkshop.org/.

— Steven Galbraith


PQCrypto 2019

PQCrypto 2019 was held at Ronghui Spa Hotel in Chongqing, China on May 8-10, 2019. Prior to the conference, two other PQC events (the PQC 2019 summer school on May 6th and the 4th Asia PQC Forum on May 7th) were held at the same place.

The session on isogeny-based cryptography was held in the afternoon of May 9. It included three talks by young researchers:

  • Thomas Decru “Faster SeaSign signatures through improved rejection sampling” (joint work with Lorenz Panny and Frederik Vercauteren)

    This talk presented a faster variant of the SeaSign signature scheme, obtained by improving the rejection sampling that is a key technical ingredient of SeaSign. Obtaining a practical isogeny-based (post-quantum) signature scheme is an important research direction in this field. This work takes a nice step towards that goal, although it does not yet fully achieve it.

  • Yan Bo Ti “Genus Two Isogeny Cryptography” (joint work with Victor Flynn)

    First, this talk presented a systematic method for finding collisions in the Charles-Goren-Lauter type genus-two hash function (which was suggested in a previous paper of mine). The collision finding was accomplished through a closer look at the structure of isogeny graphs in genus two. Somewhat surprisingly, the construction has already been fixed by a very recent paper by Castryck, Decru, and Smith (ePrint 2019/296), which reformulates it by using genus-two “superspecial” subgraphs. The talk also proposed an SIDH-type key exchange in genus two, in which (2,2)- and (3,3)-isogenies are used in place of the 2- and 3-isogenies of the genus-one case.

  • Michael Meyer “On Lions and Elligators: An efficient constant-time implementation of CSIDH” (joint work with Fabio Campos and Steffen Reith)

    This presentation proposed an efficient constant-time implementation of CSIDH. In their previous paper (INDOCRYPT 2018), the authors initiated an improvement of the CSIDH implementation, which resulted in a faster algorithm than the original. However, that implementation leaks various information about the private key. Therefore, to obtain resistance to side-channel leakage, this talk modified how key elements are sampled and used dummy isogenies, obtaining a constant-time implementation (together with several further efficiency improvements).
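    The underlying constant-time trick can be sketched abstractly as follows (a hypothetical illustration: `real_step` and `dummy_step` stand in for genuine CSIDH isogeny computations, which are not implemented here):

```python
# Sketch of the dummy-isogeny idea (abstracted; not real CSIDH arithmetic):
# always perform the same, maximal number of steps per small prime, and
# discard the effect of the steps beyond the secret exponent.

def fixed_length_walk(secret_e, max_e, real_step, dummy_step, state):
    # Performs exactly max_e operations regardless of the secret exponent
    # secret_e, so the *number* of operations no longer depends on the key.
    # (A real implementation must also select real vs dummy without a
    # secret-dependent branch, e.g. via constant-time swaps, and make the
    # two operations indistinguishable to timing and power analysis.)
    for i in range(max_e):
        if i < secret_e:
            state = real_step(state)
        else:
            state = dummy_step(state)
    return state
```

    The sketch only shows the fixed iteration count; the sampling changes and the careful branch-free selection described in the talk are the harder part in practice.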

Moreover, there were two invited talks relevant to this blog. One was by Tsuyoshi Takagi (Univ. of Tokyo), entitled “Computational Challenge Problems in Post-Quantum Cryptography”, in which he first briefly reviewed the NIST PQC standardization and then introduced PQC challenge problems, with a focus on the Fukuoka MQ Challenge and the Darmstadt Lattice Challenge. The other was given by Dustin Moody (NIST) on “Round 2 of NIST PQC Competition”. He carefully summarized the history of the competition and, for all the round-2 submissions, made brief comments on the advantages and/or unique features of the schemes. He also very briefly mentioned the future schedule of the competition.

— Katsuyuki Takashima


Recent work on pairing inversion

A few days ago (April 10, 2019) Takakazu Satoh posted on eprint the paper Miller Inversion is Easy for the Reduced Tate Pairing on Trace Zero Supersingular Curves. I was delighted to hear from Takakazu Satoh, as I know him well but I had not heard from him for several years. Satoh’s papers have a distinctive look, since he uses his own typesetting package that he wrote in the 1980s, before TeX was widely available.

Takakazu Satoh has been interested in pairing inversion for quite a while, and has published several papers on the topic, including “On Degrees of Polynomial Interpolations Related to Elliptic Curve Cryptography” (WCC 2005), “On Pairing Inversion Problems” (Pairing 2007), and “Closed formulae for the Weil pairing inversion” (Finite Fields and Their Applications 2008).

To recall the basic definitions: Let E be an elliptic curve over a field \mathbb{F}_q. Let \ell be a large prime such that \ell \mid \#E( \mathbb{F}_q ). The embedding degree is the smallest integer k such that \ell \mid (q^k  - 1). A pairing-friendly elliptic curve is one whose embedding degree is small, e.g., 2 \le k \le 30.
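As a quick sanity check, the definition of the embedding degree translates directly into code (it is just the multiplicative order of q modulo \ell):

```python
# Compute the embedding degree: the smallest k >= 1 with ell | q^k - 1,
# i.e. the multiplicative order of q modulo the prime ell.
def embedding_degree(q, ell):
    k, qk = 1, q % ell
    while qk != 1:
        k += 1
        qk = (qk * q) % ell
    return k
```

For example, for the supersingular curve y^2 = x^3 + x over \mathbb{F}_{59} with \ell = 5 (dividing \#E(\mathbb{F}_{59}) = 60) one gets k = 2.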

The (reduced) Tate-Lichtenbaum pairing takes two points P, Q \in E[ \ell ] and gives a value e_\ell( P, Q ) \in \mathbb{F}_{q^k}. A lot of researchers, including me and many of my friends, worked between 2000 and 2010 trying to compute pairings faster. The computation of the reduced Tate-Lichtenbaum pairing has two stages. First one computes a Miller function f_{\ell, P}(Q). The function f_{\ell, P} has divisor
\ell(P) - (\ell P ) - (\ell -1)(\infty).
Then one computes the final exponentiation, which is an exponentiation to the power (q^k - 1)/\ell.
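To make these quantities concrete, here is a quick sanity check in Python on a toy trace zero curve (y^2 = x^3 + x over \mathbb{F}_{59} with \ell = 5; my own illustrative parameters, not from Satoh's paper): the curve has q + 1 points, \ell divides the group order, and the final exponentiation is to the power (q^k - 1)/\ell.

```python
q = 59  # q ≡ 3 (mod 4), so y^2 = x^3 + x is supersingular over F_q

# Naive point count: for each x, count the y with y^2 = x^3 + x.
sq_count = {}
for y in range(q):
    s = y * y % q
    sq_count[s] = sq_count.get(s, 0) + 1

points = 1  # the point at infinity
for x in range(q):
    points += sq_count.get((x**3 + x) % q, 0)

assert points == q + 1      # trace zero curve: 60 points
ell = 5
assert points % ell == 0    # ell divides #E(F_q)
k = 2                       # embedding degree, since ell | q^2 - 1
print((q**k - 1) // ell)    # final exponentiation exponent: 696
```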

It was discovered that, for many pairing-friendly curves, there was a more efficient way to compute the Miller function. Essentially, instead of computing f_{\ell, P} one can compute values like f_{q, P}(Q), a simpler computation when \ell > q (which is usually the case). This observation was first made in a special case in a paper of Duursma and Lee from ASIACRYPT 2003. Further special cases were noted by Kwon (ACISP 2005) and Barreto, Galbraith, Ó hÉigeartaigh and Scott (Designs, Codes and Cryptography, 2007). The situation was finally clarified by Granger, Hess, Oyono, Thériault and Vercauteren (“Ate Pairing on Hyperelliptic Curves”, EUROCRYPT 2007). The key idea is to work with Frobenius eigenspaces and change the pairing to f_{q,Q}(P). I call this paper GHOTV below. Subsequently, Hess (“Pairing Lattices”, Pairing 2008) and Vercauteren (“Optimal pairings”, IEEE Trans. Information Theory 2010) showed how to use this idea most effectively in general pairing implementations.

One particular feature of the ate pairing introduced in GHOTV is that the pairing computation only needs the first stage (Miller’s algorithm) and does not require a final exponentiation. This makes it a lot faster to compute. The ate pairing is not exactly the same function as the reduced Tate-Lichtenbaum pairing, so one cannot use them interchangeably. But they are both bilinear maps, and so a system can be implemented using the ate pairing and it all works fine.

In those days, we thought pairings would be useful for identity-based crypto and short signatures, and we were mostly trying to work with large embedding degrees like k = 12. So the case k = 2 was considered of relatively minor interest and was not studied much.

The computational assumptions underlying pairing-based crypto all required pairing inversion to be hard. Typically pairing inversion means: given z \in \mathbb{F}_{q^k} and a point P \in E[\ell], find Q \in E[ \ell ] such that e_\ell(P, Q) = z.

It was quickly realised that there are two obstacles to pairing inversion: first, it is necessary to invert the final exponentiation; second, one needs to invert the Miller function. It turned out that sometimes one or the other of these problems can be easy, but I know of no situation for prime order groups where both are easy at the same time. For example, since the ate pairing does not require a final exponentiation, pairing inversion for the ate pairing is equivalent to Miller inversion (which seems to be hard in this case). In short, pairing inversion remains a hard computational problem, which is good news for pairing-based crypto. A good reference for these ideas is Galbraith, Hess and Vercauteren (“Aspects of Pairing Inversion”, IEEE Trans. Information Theory 2008).

Satoh’s recent paper explores the case k = 2. Work on discrete logs in finite fields (e.g., see these blog posts) has caused some pairing researchers to become very conservative and to reconsider choices such as k = 2. When k = 2 we essentially have \ell = q+1 and the final exponentiation is to the power (q^2-1)/(q+1) = q-1. The reduced Tate-Lichtenbaum pairing is
f_{q+1, Q}(P)^{q-1}.
Satoh uses Lemma 2 of GHOTV, that f_{q, Q}(P) \in \mathbb{F}_{q^2} already has order q+1. This is the fact that the ate pairing does not require a final exponentiation. Let v = f_{q+1, Q}(P) be the value computed by Miller’s algorithm, so that v^{q-1} is the pairing value. Suppose one has the value v (this is not normally the case: normally the attacker has v^{q-1}, from which there are q-1 possible values for v). Since f_{q+1, Q}(P) = (x(P) - x(Q)) f_{q,Q}(P) (I am using different notation to Satoh here), an attacker can get their hands on P by raising v to the power q+1 (remembering that exponentiation to the power q is linear) and then solving a square root to get x(P). Hence, Satoh has shown that Miller inversion is easy in this case, but pairing inversion is still hard.
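The information loss in passing from v to v^{q-1} can be seen directly in a toy field (again q = 59, my own illustrative choice): since \mathbb{F}_{q^2}^* is cyclic of order q^2 - 1, the map v \mapsto v^{q-1} has kernel of size q - 1, so every pairing value has exactly q - 1 preimages. A small Python sketch, representing \mathbb{F}_{q^2} = \mathbb{F}_q(i) with i^2 = -1 (a field since q \equiv 3 \pmod 4):

```python
q = 59

def f2_mul(a, b):
    # Multiplication in F_{q^2} = F_q(i), elements as pairs (a0, a1) = a0 + a1*i.
    (a0, a1), (b0, b1) = a, b
    return ((a0 * b0 - a1 * b1) % q, (a0 * b1 + a1 * b0) % q)

def f2_pow(a, n):
    # Square-and-multiply exponentiation in F_{q^2}.
    r = (1, 0)
    while n:
        if n & 1:
            r = f2_mul(r, a)
        a = f2_mul(a, a)
        n >>= 1
    return r

v = (2, 3)                           # stand-in for a Miller function value
z = f2_pow(v, q - 1)                 # the final exponentiation for k = 2
assert f2_pow(z, q + 1) == (1, 0)    # pairing values lie in the order-(q+1) subgroup

# An attacker who only sees z faces q - 1 candidate values for v:
preimages = [(a, b) for a in range(q) for b in range(q)
             if (a, b) != (0, 0) and f2_pow((a, b), q - 1) == z]
assert len(preimages) == q - 1
```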

In fact, when k = 2 it would be quite natural to instead use the ate pairing for any crypto application, since then there is no final exponentiation at all. However, Satoh’s attack does not work against the ate pairing, since his approach is precisely to kill off the “ate pairing” contribution to the Tate-Lichtenbaum pairing.

In short, there are two pairings one can use in embedding degree 2 and both seem to be hard to invert: the ate pairing has trivial final exponentiation but the Miller function seems hard to invert; the Tate-Lichtenbaum pairing has easy Miller inversion (as Satoh has just shown) but it seems hard to invert the final exponentiation.

I end with a comment about applications of pairings. As I mentioned, 15 years ago we thought that the “killer applications” for pairings were identity-based crypto and short signatures. Nowadays it seems pairings are most useful for short zero knowledge proofs. For example, Jens Groth’s pairing-based zk-SNARK (zero-knowledge succinct non-interactive argument of knowledge) is a key component of Zcash. As far as I can tell, the implementation in Zcash uses curves with embedding degree k=12 and does use the ate pairing.

— Steven Galbraith


AMS Sectional Meeting Special Session on The Mathematics of Cryptography

The American Mathematical Society Spring Central and Western Joint Sectional Meeting was held at the University of Hawaii at Manoa, in Honolulu, during March 22-24, 2019.

There was a Special Session on The Mathematics of Cryptography organised by Shahed Sharif and Alice Silverberg. Slides of (some of) the talks are available here.

Among the talks I attended, I mention these:

  • Isogeny cryptography: strengths, weaknesses and challenges, by Steven Galbraith.

    I talked about CSIDH and SeaSign, and then said a little bit about some work of my PhD student Yan Bo Ti on hash functions from dimension 2 supersingular abelian varieties. The slides online also cover some other topics that I did not mention (Kuperberg’s algorithm and quaternion algebras).

  • Identity-Based Encryption from the Diffie-Hellman Assumption by Sanjam Garg.

    This was a really nice talk that described a “hash encryption” scheme based on the (decisional) Diffie-Hellman problem and explained how this enables identity-based encryption from the Diffie-Hellman problem (no pairings needed). This is joint work with Nico Döttling. The schemes are not practical.

  • Ramanujan Graphs In Cryptography by Kristin Lauter.

    Kristin reported joint work with Anamaria Costache, Brooke Feigon, Maike Massierer and Anna Puskas about some computational problems in isogeny graphs. A paper on this work is eprint 2018/593.

  • Numerical Method for Comparison on Homomorphically Encrypted Numbers by Jung Hee Cheon.

    Jung Hee talked about some basic mathematical functions (such as max and min) that are useful for practical computations on encrypted data. He explained some iterated processes (in his words “nowadays I am working in numerical analysis”) that give low-depth circuits to compute approximations to these functions.

  • Multiparty Non-Interactive Key Exchange From Isogenies on Elliptic Curves by Shahed Sharif.

    Shahed talked (on the blackboard — there are no slides) about his paper eprint 2018/665 with Dan Boneh, Darren Glass, Daniel Krashen, Kristin Lauter, Alice Silverberg, Mehdi Tibouchi and Mark Zhandry. The scheme is still incomplete, as no suitable efficiently computable isomorphism invariant of abelian varieties has been found. Shahed discussed attempts to find such an invariant, and I learned some interesting facts about polarizations on abelian surfaces.

  • The Hidden Quadratic Form Problem by Joseph Silverman.

    Joe presented joint work with Jeff Hoffstein and others on a new candidate number-theoretical problem that might be interesting for new signature schemes. This is a work-in-progress and is not published yet.

  • Isolated Curves and Cryptography by Travis Scholl.

    Travis presented his papers eprint 2017/383, eprint 2018/307 and some newer work on “isolated curves”.

  • Fun with the hidden number problem by Nadia Heninger.

    Nadia surveyed her joint work with Breitner, published as “Biased Nonce Sense: Lattice Attacks against Weak ECDSA Signatures in Cryptocurrencies”. Her talk also included an overview of lattice algorithms for the hidden number problem, and a very clear sketch of Bleichenbacher’s Fourier-analysis approach to the hidden number problem.

  • Short digital signatures via isomorphisms between modular lattices based on finite field isomorphisms by Jeffrey Hoffstein.

    Jeff presented very new joint work with Joe Silverman on yet another number-theoretical problem that might be interesting for new signature schemes. This is related to their previous work on isomorphisms of finite fields, but with new ideas and applications. I was not able to follow the details of the talk. There is no preprint yet on this work.

  • Computing isogenies and endomorphism rings of supersingular elliptic curves by Travis Morrison.

    Travis gave an overview of his EUROCRYPT 2018 paper with Eisentraeger, Hallgren, Lauter and Petit.

  • Lower bounds for Hilbert class polynomials by Reinier Broker.

    Much work on algorithms to compute Hilbert class polynomials requires proving good upper bounds on the size (e.g., bitlength) of these polynomials. Reinier spoke about his current work-in-progress trying to prove lower bounds on the size of these polynomials.

There was also a Special Session on Emerging Connections with Number Theory organised by Kate Stange and Renate Scheidler, plus a lot of other sessions, that included talks of some interest to readers of this blog. However, I stayed in the Mathematics of Cryptography room.

— Steven Galbraith
