Public Key Cryptography 2015, Gaithersburg, USA

PKC 2015 was held at the Gaithersburg Campus of the National Institute of Standards and Technology (NIST), USA, from March 30th to April 1st. There were 36 accepted papers and two invited talks. The venue was quite impressive. However, our overall impression is that the conference held few surprises.

In the cryptanalysis session, I (Ludovic) had two papers: `A Polynomial-Time Key-Recovery Attack on MQQ Cryptosystems' (joint work with Jean-Charles Faugère, Danilo Gligoroski, Simona Samardjiska, and Enrico Thomae) and `Algebraic Cryptanalysis of a Quantum Money Scheme — the Noise-Free Case' (joint work with Marta Conde Pena and Jean-Charles Faugère). The first paper, presented by Simona, describes a polynomial-time attack against the multivariate schemes which use quasi-groups (until now, such systems provided the fastest signature schemes on eBACS). The second paper, presented by Marta, proposes a heuristic polynomial-time attack against a quantum-money scheme of Scott Aaronson and Paul Christiano (STOC 2012). The situation for the quantum-money scheme is not as bad as for the MQQ cryptosystems: a tweak, already proposed by Aaronson and Christiano, circumvents the attack presented by Marta, so your quantum money is still safe for now.

Ayoub Otmani gave a nice talk about a `Polynomial-Time Attack on the BBCRS Scheme' (joint work with Alain Couvreur, Jean-Pierre Tillich, and Valérie Gauthier Umaña). This is yet another attack using the square-code distinguishing technique. The target was a McEliece scheme using somewhat `hidden' GRS codes (you can find a description of the BBCRS scheme in your favourite Journal of Cryptology).

Antoine Joux gave a very accessible invited talk on `Recent Advances in Algorithms for Computing Discrete Logarithms', focusing primarily on his recent result with Cécile Pierrot (presented at Asiacrypt 2014), which enables one to compute the logs of the factor-base elements with lower complexity than before in small-characteristic fields.

Sanjam Garg gave an invited talk, `New Advances in Obfuscation and its Applications', on his latest results (EC'14) about obfuscation. In reply to a question from the audience, he said that obfuscation with rigorous security proofs is not yet practical.

Nico Döttling gave a clear presentation about `Low Noise LPN: KDM Secure Public Key Encryption and Sample Amplification'. In particular, he established a connection between solving LPN with small error and a bounded number of samples, and solving LPN with an unbounded number of samples. This result can be used to construct a KDM-secure public-key cryptosystem from LPN with small noise.

Vadim Lyubashevsky gave a well-motivated and very clear presentation on `Simple Lattice Trapdoor Sampling from a Broad Class of Distributions' (joint work with Daniel Wichs). The timely question that the authors investigated is whether we really need Gaussian distributions in lattice-based cryptography. According to Vadim, Gaussians could be replaced in most situations by different (suitable) distributions without harming security. However, Gaussians lead to the most efficient schemes in practice. So, `we can view Gaussian distributions as an optimisation parameter'.

There were only a couple of talks directly relevant to ECC.

Allison B. Lewko gave a well-motivated and very clear presentation on `A Profitable Sub-Prime Loan: Obtaining the Advantages of Composite Order in Prime-Order Bilinear Groups' (joint work with Sarah Meiklejohn). As the title indicates, this work shows how one can obtain, in prime-order bilinear groups, several useful features which occur naturally in composite-order bilinear groups.

I (Rob) presented `Faster ECC over GF(2^521 – 1)’ (joint work with Mike Scott) which gave improved timings for point multiplication on the NIST curve P-521 and the Edwards curve E-521. The latter is now under serious consideration for several new ECC standards (including NIST’s).

After PKC, NIST organised a workshop on `Cybersecurity in a Post-Quantum World'. The event was a success, with 140 participants (more than attended the PKC conference itself), many of them from industry, including Cisco, Microsoft, Security Innovation, and even RSA Security. It seems that quantum-safe cryptography is going to be a very hot topic in the future.

— Rob Granger and Ludovic Perret


Accepted presentations to NIST workshop on ECC standards

The NIST Workshop on Elliptic Curve Cryptography Standards takes place in June. The list of accepted presentations is now available here.

— Steven Galbraith


Elliptic curve discrete logarithm problem in characteristic two

Several recent preprints have discussed summation polynomial attacks on the ECDLP in characteristic 2:

  • eprint 2015/310, New algorithm for the discrete logarithm problem on elliptic curves, by Igor Semaev.
  • eprint 2015/319, Point Decomposition Problem in Binary Elliptic Curves, by Koray Karabina.
  • arxiv 1503.08001, Notes on summation polynomials, by Michiel Kosters and Sze Ling Yeo.

To recall some history, in 2004 Semaev introduced a fundamental new idea for index calculus algorithms for the ECDLP. Fix an elliptic curve E over a field K. Semaev’s summation polynomials S_m( x_1, \dots, x_m ) have the property that if S_m( a_1, \dots, a_m ) = 0 for some field elements a_1, \dots, a_m \in K then there are elliptic curve points (a_1, b_1), \dots, (a_m,b_m) on E(\overline{K}) such that (a_1, b_1) + \cdots + (a_m,b_m) = 0.
Gaudry and Diem explored how to use this idea in the context of Weil descent and the ECDLP on elliptic curves over \mathbb{F}_{2^n}. In particular, Diem showed there exists a sequence of finite fields \mathbb{F}_{2^n} (not of prime degree) such that the ECDLP along this sequence is provably subexponential. In these cases, the idea is to choose the a_i in some subspace of \mathbb{F}_{2^n} of dimension d < n and to write a "random point" R as a sum of m such points by finding a solution to S_{m+1}(a_1, \dots, a_m, x(R)) = 0. One chooses md \approx n to have a good chance of a relation, in which case one has to solve a system of polynomial equations involving md \approx n variables and n equations.
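
To make the recursive construction concrete, here is a small SymPy sketch (my own illustration, not from any of the papers above). It uses the classical third summation polynomial for y^2 = x^3 + ax + b in odd characteristic — the characteristic-2 formula is different, but the resultant recursion S_m = Res_X(S_{m-1}, S_3) is identical. The toy curve y^2 = x^3 + 1 and its point P = (2, 3) of order 6 are my choices for the sanity checks.

```python
from sympy import symbols, resultant, expand, degree

x1, x2, x3, x4, X = symbols('x1 x2 x3 x4 X')

def S3(u, v, w, a, b):
    # Classical third summation polynomial for y^2 = x^3 + a*x + b
    # (odd characteristic; the char-2 formula differs, the recursion does not).
    return ((u - v)**2 * w**2
            - 2*((u + v)*(u*v + a) + 2*b)*w
            + (u*v - a)**2 - 4*b*(u + v))

a, b = 0, 1  # toy curve y^2 = x^3 + 1; P = (2, 3) has order 6

# x-coordinates of P, 2P, 3P are 2, 0, -1, and P + 2P + 3P = 6P = O.
assert S3(2, 0, -1, a, b) == 0

# The recursion: S_4(x1,x2,x3,x4) = Res_X( S_3(x1,x2,X), S_3(x3,x4,X) ).
S4 = expand(resultant(S3(x1, x2, X, a, b), S3(x3, x4, X, a, b), X))

# Degree 2^{m-2} = 4 in each variable, as stated in the text.
assert degree(S4, x1) == 4

# x-coordinates of P, 3P, 4P, 4P are 2, -1, 0, 0 and P + 3P + 4P + 4P = O...
assert S4.subs({x1: 2, x2: -1, x3: 0, x4: 0}) == 0
# ...while a quadruple of x-coordinates not arising this way is not a root.
assert S4.subs({x1: 2, x2: -1, x3: 0, x4: 5}) != 0
```

The degree check makes the blow-up discussed below visible: each extra resultant doubles the degree in every variable.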

The main problem is that the summation polynomial S_m has degree 2^{m-2} in each variable, meaning that after Weil descent the system of equations in a large number of variables has extremely high degree. The difficult case is when n is a prime and so one cannot choose the subspace of dimension d to be a subfield. There have been a number of recent papers that exploit invariants under group actions to lower the degree, and hence improve these attacks in the case when n is prime. I mention just three of these works:

  • Jean-Charles Faugère, Pierrick Gaudry, Louise Huot and Guénaël Renault, Using Symmetries in the Index Calculus for Elliptic Curves Discrete Logarithm, J. Cryptology, Volume 27, Issue 4 (2014) 595–635.
  • Yun-Ju Huang, Christophe Petit, Naoyuki Shinohara and Tsuyoshi Takagi, Improvement of Faugère et al.'s Method to Solve ECDLP, in K. Sakiyama and M. Terada (eds.), IWSEC 2013, Springer LNCS 8231 (2013) 115–132.
  • Steven D. Galbraith and Shishay W. Gebregiyorgis, Summation Polynomial Algorithms for Elliptic Curves in Characteristic Two, in W. Meier and D. Mukhopadhyay (eds.), INDOCRYPT 2014, Springer LNCS 8885 (2014) 409–427.

The challenge in solving large systems of multivariate equations is always the same: how to choose new variables that enable lowering the degree of the system of equations. The simplest of all such ideas is linearisation: replace every monomial with a new variable. The aim is to lower the degree by increasing the number of variables, hopefully without introducing too many new ones.
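
A minimal SymPy sketch of linearisation (the toy system below is mine, chosen only for illustration): every distinct non-constant monomial is promoted to a fresh variable, so a quadratic system in 3 variables becomes a linear system in 6.

```python
from sympy import symbols, Poly

x, y, z = symbols('x y z')

# A toy quadratic system in 3 variables.
eqs = [x*y + y*z + x + 1,
       x*y + x*z + z,
       y*z + x + y + 1]

# Linearisation: treat every distinct non-constant monomial as a brand
# new independent variable; the system is then linear in these.
gens = (x, y, z)
monomials = set()
for e in eqs:
    for exps, coeff in Poly(e, *gens).terms():
        if sum(exps) > 0:
            mono = 1
            for g, k in zip(gens, exps):
                mono *= g**k
            monomials.add(mono)

# Three original variables become six linearised ones:
# x, y, z, x*y, x*z, y*z.
assert len(monomials) == 6
```

The count is the whole story: a quadratic system in n variables linearises into roughly n^2/2 variables, which is exactly the degree-versus-variables trade-off the text describes.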

The new idea in the papers by Semaev and Karabina is elegant, natural and simple. The idea is natural when one knows that the summation polynomials are computed recursively by taking resultants. The idea is to “unroll” the resultant computation by introducing new variables for the intermediate points. Putting the idea simply, instead of trying to write R = P_1 + \cdots + P_m for points P_i in the factor base, we write
Q_1 = P_1 + P_2, Q_2 = Q_1 + P_3, \dots, R = Q_{m-2} + P_m where the Q_i are completely arbitrary points.

Hence, one now has a system of md + (m-2)n \approx (m-1)n variables and (m-1)n equations, but the equations all have total degree 4 because S_3 only has total degree 4. The big question is whether this is a good way to trade off the number of variables against the degree from the point of view of solving systems of equations. Both the Semaev and Karabina papers give evidence that this choice of variables and equations is easier to solve than previous ones.

So far so good, but now the papers become more speculative. In particular, the papers both assume the “first fall” conjecture of Petit and Quisquater.

Karabina is relatively modest. He suggests taking m=5, with the aim of getting a O( q^{2/5} ) algorithm that would beat the O( q^{1/2} ) of Pollard rho (here q = 2^n). He gives some experimental evidence that the first fall conjecture applies in this case (his data only goes up to n=19 for the case m=5, but this is using S_4 rather than the full idea of S_3). He suggests the O( q^{2/5} ) algorithm might start to beat rho once q = 2^n > 2^{700}.

Semaev is more aggressive in his choice of parameters. He takes m growing with n, and assumes the first fall conjecture holds in this case. Experimental results are given in the paper, but the largest n appears to be 21 and the largest m is 6. In my opinion there is insufficient evidence in the paper to support the conjecture as n and m both grow in the required way. But it is interesting to speculate on how the algorithm might go. For n = 571 Semaev suggests m = 12, giving running time O( (n(m-1))^{4\omega} ) where \omega > 2.3 is the linear algebra constant. The claimed complexity is much better than that of Pollard rho for such a curve.
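
As a back-of-the-envelope check of that comparison (my own arithmetic, taking \omega = 2.3 as a lower bound for the linear algebra constant):

```python
import math

n, m, omega = 571, 12, 2.3

# log2 of the claimed cost O( (n*(m-1))^(4*omega) )
log2_claimed = 4 * omega * math.log2(n * (m - 1))

# log2 of Pollard rho on the roughly 2^570 prime-order subgroup
log2_rho = 570 / 2

assert 110 < log2_claimed < 120   # about 2^116 operations
assert log2_claimed < log2_rho    # far below the ~2^285 of Pollard rho
```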

However, I note that the storage cost for the matrix representing the system of equations is { N + 3 \choose 4}^2 where N = (m-1)n is the number of binary variables in the descended polynomial equations. In the case n=571, m=12 the storage cost is at least 2^{91} bits, which is more than 10^{14} terabytes. To put this in perspective, Avogadro's constant is about 2^{79}, so storing such a matrix seems far beyond current engineering, and so there is no reason at present to be concerned about such algorithms. (Francisco Rodriguez-Henriquez pointed out to me that the entire world wide web in 2013 required about 4 zettabytes, which is 4 \cdot 10^9 terabytes.) I stress that this matrix needs to be stored as part of the Groebner basis computation for every single relation computation (and this cannot be effectively distributed, as far as we know). The final linear algebra stage is easier.
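
The storage estimate is easy to reproduce with exact integer arithmetic (a quick sketch using the parameters above):

```python
import math

n, m = 571, 12                   # parameters suggested by Semaev for n = 571
N = (m - 1) * n                  # binary variables after Weil descent

# Square of the number of degree-4 monomials in N variables,
# i.e. the matrix size estimate {N+3 choose 4}^2, in bits.
bits = math.comb(N + 3, 4)**2

assert bits > 2**91                   # at least 2^91 bits, as stated
assert bits / 8 / 10**12 > 10**14     # more than 10^14 terabytes
```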

As an aside, Semaev’s choice of title “New algorithm for the discrete logarithm problem on elliptic curves” seems exaggerated. The title of the paper suggests there is a major new idea, rather than merely an elegant choice of variables for which to implement a known algorithm.

Finally I mention the Kosters-Yeo paper. Section 3 of the paper is about NP-completeness of an elliptic curve version of subset-sum, which is already well-known, see for example the discussion here. Section 4 is more relevant to this blog post as it discusses the first fall conjecture. The paper seems to give evidence that the first fall conjecture is false. It is also notable that this paper reports much larger computations, going as far as n = 40, while Karabina and Semaev do not go beyond n = 26. If it turns out that the first fall conjecture is false, then there is no reason to believe the claims by Semaev and Karabina that index calculus methods could beat Pollard rho for the ECDLP on binary curves.

One thing is clear: more work needs to be done to evaluate the first fall conjecture and the performance of this class of algorithms. But the current state of knowledge is that there is no serious threat to the ECDLP over binary fields of prime extension degree n.

— Steven Galbraith

PS. Semaev has drawn my attention to the revision of his paper on eprint, which now relies on a weaker assumption (Assumption 2). However, I am still of the opinion that there is currently insufficient evidence to support this assumption for the case of very large n.

PPS. This post had around 200 views in its first 17 hours. It is good to know there is a lot of interest in this topic.


Soliloquy, ideal lattices, and algorithms

This post is not about elliptic curve cryptography, but I think some readers will be interested in the issue. The topic is cryptosystems based on ideal lattices, and in particular the recent “Soliloquy” paper from GCHQ. Two recent blog posts are worth reading:

Dan Bernstein has written a blog post that fleshes-out some of the details. In the comments there is an extensive response by Chris Peikert and also some further discussion.

Chris Peikert’s response can also be accessed directly here.

My impression is that the ideas used by GCHQ do not threaten NTRU or Ring-LWE. But it is certainly good to see these algorithmic ideas documented. One of the drawbacks of our publication model is that “failed ideas” do not get written up and shared. Instead, the ideas become known to experts and considered as “folklore”. It is of value to the global research community to disseminate ideas more widely.

— Steven Galbraith


EUROCRYPT 2015 accepted papers

The list of accepted papers to Eurocrypt is now up. There are several papers about discrete logarithms, as well as notable papers about multilinear maps and homomorphic encryption. There has never been a better time to visit Bulgaria!

— Steven Galbraith


New 113-bit ECDLP record

Erich Wenger and Paul Wolfger have just announced on NMBRTHRY a new ECDLP record computation.

The curve is over the field GF( 2^{113} ) and is not a Koblitz/subfield curve. So there is no speedup to Pollard rho from using equivalence classes under Frobenius. The number of points on the curve is 2 \cdot 5192296858534827689835882578830703 where the second number is a 112-bit prime.
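As a quick consistency check (mine, not from the announcement), the claimed group order lies within the Hasse bound |\#E(\mathbb{F}_q) - (q+1)| \le 2\sqrt{q} for q = 2^{113}:

```python
import math

p = 5192296858534827689835882578830703   # claimed prime factor
order = 2 * p                            # claimed number of points
q = 2**113                               # field size

# Hasse's bound: |#E(F_q) - (q + 1)| <= 2*sqrt(q)
assert abs(order - (q + 1)) <= 2 * math.isqrt(q)
```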

No details of the computation are yet public, apart from the fact that 10 FPGAs were used. However, I presume the details will be similar to the SAC 2014 paper by the same authors. That paper reported on an ECDLP computation for a subfield curve, which was a significantly easier computation.

This result is not surprising, and does not change our opinions on the difficulty of the ECDLP, but it is always great to see computations of this type being performed.

— Steven Galbraith


Standardising specific elliptic curves

There is considerable ongoing discussion by standardisation committees about precisely which elliptic curves should be specified in crypto standards. One point of view on the merits of different choices is presented on the safe curves webpage of Bernstein and Lange. These views are not unanimously held. The IETF discussions can be found here.

NIST will be holding a workshop in June 2015 to discuss these questions further.

— Steven Galbraith
