More information is available at the conference page

https://ecc2017.cs.ru.nl/


The three invited talks were:

- Nadia Heninger “The Reality of Cryptographic Deployments on the Internet”.
Nadia described several bad implementations of finite field Diffie-Hellman key exchange, surveying work of several recent papers by many authors. She commented that finite field Diffie-Hellman is prevalent in practice partly due to concerns that elliptic curves might have US government trapdoors.

- Hoeteck Wee “Advances in Functional Encryption”
Hoeteck gave a wonderfully clear overview of functional encryption.

- Neal Koblitz “Cryptography in Vietnam in the French and American Wars”
Neal gave a fascinating historical talk, based on recent research by himself, the general chair Hieu Phan and others, and drawing on historical resources from museums in Hanoi and the writings of historians and former government employees. Neal emphasized the mathematical and cryptographical ingenuity of the Vietnamese people, as well as powerfully evoking the horrors of war and the heroism of certain individuals (both Vietnamese and American).

The best paper award went to Ilaria Chillotti, Nicolas Gama, Mariya Georgieva and Malika Izabachène for “Faster Fully Homomorphic Encryption: Bootstrapping in less than 0.1 Seconds”, which shows that homomorphic encryption (in this case the GSW scheme with packed ciphertexts, together with a bunch of clever new ideas) is gradually moving closer to practicality. Here is a photo of the best paper award authors with the two program chairs (Tsuyoshi Takagi on the left and Jung Hee Cheon on the right).

Some papers related to discrete logarithms and elliptic curves included:

- Palash Sarkar and Shashank Singh “A General Polynomial Selection Method and New Asymptotic Complexities for the Tower Number Field Sieve Algorithm”.
This work is relevant for assessing the security of pairing-based cryptography; more details on this application can be found here.

- Steven D. Galbraith, Christophe Petit, Barak Shani and Yan Bo Ti “On the Security of Supersingular Isogeny Cryptosystems”
The paper contains several results about the (potentially post-quantum) isogeny-based key exchange and encryption protocols of De Feo, Jao and Plut.

- There was an entire session about ABE and IBE, containing papers that use pairings:
- Nuttapong Attrapadung “Dual System Encryption Framework in Prime-Order Groups via Computational Pair Encodings”
- Junqing Gong, Xiaolei Dong, Jie Chen and Zhenfu Cao “Efficient IBE with Tight Reduction to Standard Assumption in the Multi-challenge Setting”
- Melissa Chase, Mary Maller and Sarah Meiklejohn “Déjà Q All Over Again: Tighter and Broader Reductions of q-Type Assumptions”
- Shuichi Katsumata and Shota Yamada “Partitioning via Non-Linear Polynomial Functions: More Compact IBEs from Ideal Lattices and Bilinear Maps”

- Paz Morillo, Carla Ràfols and Jorge L. Villar “The Kernel Matrix Diffie-Hellman Assumption”
This talk is about relations between variants of the Diffie-Hellman problem.

- Ted Chinburg, Brett Hemenway, Nadia Heninger and Zachary Scherr “Cryptographic applications of capacity theory: On the optimality of Coppersmith’s method for univariate polynomials”
Ted Chinburg delivered a clear and interesting survey of “capacity theory” (a branch of arithmetic geometry/algebraic number theory) that is relevant to the analysis of Coppersmith’s technique for finding small solutions to polynomial equations. The authors hope these ideas will be useful in other contexts in cryptography/cryptanalysis.

Regarding the increased focus on post-quantum crypto, there were talks on multivariate crypto (more efficient multivariate-quadratic signatures), lattices (Vadim Lyubashevsky presented a result about signatures based on ring-SIS in *any* ring and urged the audience to work on a much harder but more interesting problem relating to LWE in any ring) and code-based crypto (an adaptive attack on a decoding algorithm).

The rump session was chaired by me, and was thankfully short. The best and most entertaining talk was given by Pierre Karpman and Jerome Plut. The social activities included a Water Puppet show and a Vietnamese banquet with traditional music.

— Steven Galbraith


https://twitter.com/cryptocephaly/status/803542260256276481

UPDATED Sunday December 4: More detailed explanation on NMBRTHRY list.

UPDATED December 30: The eprint paper has been updated.


It’s remarkable that this workshop has been running successfully for 20 years now: elliptic curve cryptography has come a long way. It was great to be there to celebrate a milestone of sorts.

–Ben Smith


Apart from the excellent invited lectures, the most memorable event of the conference was the late-night walk through the forest, illuminated by hand-held flaming torches, from the conference dinner at Bremerhof.

The Selfridge Prize was presented to J. Steffen Müller (Oldenburg) for the paper “Computing canonical heights on elliptic curves in quasi-linear time” by J. Steffen Müller and Michael Stoll.

The published papers are available in the LMS Journal of Computational Mathematics. Sadly this will be the final year that the proceedings appear in this journal, since the journal is being closed down.

There were relatively few papers with major relevance to ECC, but the following papers may be of some interest to readers of this blog:

- Chris Peikert “Finding Short Generators of Ideals, and Implications for Cryptography”. This was an overview of the work presented in his paper with Cramer, Ducas and Regev.
- Gary McGuire, Henriette Heer and Oisin Robinson “JKL-ECM: An implementation of ECM using Hessian curves”. The paper was about choosing elliptic curves in Hessian form with large torsion groups for the elliptic curve factoring method.
- Jung Hee Cheon, Jinhyuck Jeong and Changmin Lee “An algorithm for NTRU problems and cryptanalysis of the GGH multilinear map without an encoding of zero”. This paper is another nail in the coffin of multilinear maps.
- François Morain, Charlotte Scribot and Benjamin Smith “Computing cardinalities of $\mathbb{Q}$-curve reductions over finite fields”. This was about a variant of the SEA method that is suitable for counting points on special curves with an endomorphism of a special type. Such curves are suitable for fast implementations of ECC, and so the method in this paper helps to speed up parameter generation when using such curves.
- Luca De Feo, Jérôme Plût, Éric Schost and Cyril Hugounenq “Explicit isogenies in quadratic time in any characteristic”. The paper is about a Couveignes-type method for computing an explicit isogeny between two curves. This is a useful ingredient in point counting algorithms. The new method is appropriate when working in characteristic that is neither “large” nor “very small”.
- Jean-François Biasse, Claus Fieker and Michael Jacobson “Fast heuristic algorithms for computing relations in the class group of a quadratic order with applications to isogeny evaluation”. This paper is about the problem of “smoothing” an isogeny by reducing the ideal corresponding to it in the ideal class group of the order. It introduces some nice techniques that had not been used in this context previously.

The rump session contained a number of jokes about Australia and New Zealand. Aurore Guillevic mentioned some recent DLP records (mostly already mentioned on this blog). Rump session slides will be available eventually here.

The 2018 edition of the ANTS conference is expected to take place in Madison, Wisconsin.

— Steven Galbraith


The scientific programs of both conferences overlapped somewhat (the CRYPTO program ran from Monday through Thursday morning, while CHES ran from Wednesday to Friday with optional tutorials on Tuesday), and CRYPTO now has parallel sessions, so attendees to both conferences effectively had to choose between three parallel tracks. Yet you could probably attend all the talks related to elliptic curve cryptography, as there just weren’t that many.

At CRYPTO, the two most relevant presentations were given in the Algorithmic Number Theory session on Wednesday morning:

- Taechan Kim presented his paper with Razvan Barbulescu, “Extended Tower Number Field Sieve: A New Complexity for the Medium Prime Case”, about which Aurore Guillevic wrote an extensive survey on this blog a few months ago. The paper obtains a better complexity for the discrete logarithm problem in some composite degree extensions of finite fields, and although Taechan spent a good part of his talk trying to downplay the concrete impact, it actually translates to a significant reduction in the security of the most popular pairing-friendly elliptic curves. In particular, after this attack, 256-bit Barreto-Naehrig curves no longer offer 128 bits of security, but perhaps closer to 96 or so.
- Craig Costello presented his paper with Patrick Longa and Michael Naehrig, “Efficient Algorithms for Supersingular Isogeny Diffie-Hellman”, which uses a number of clever tricks to implement the postquantum-secure isogeny-based key exchange protocol of De Feo, Jao and Plût significantly more efficiently than was previously thought possible. Although SIDH still lags behind other popular postquantum constructions based e.g. on lattices by several orders of magnitude in terms of performance, it uses comparatively short keys, can be combined with classical ECDH very cheaply, and in any case is based on a very different type of security assumption that may look more appealing to the algebraic-geometrically inclined.

Other papers related to elliptic curves include:

- “Design in Type-I, Run in Type-III: Fast and Scalable Bilinear-Type Conversion using Integer Programming” by Masayuki Abe, Fumitaka Hoshino and Miyako Ohkubo, which explains how to algorithmically convert pairing-based protocols using symmetric pairings to the asymmetric setting at a minimal overhead using integer linear programming techniques;
- the CRYPTO best paper, “Breaking the Circuit Size Barrier for Secure Computation Under DDH” by Elette Boyle, Niv Gilboa and Yuval Ishai, which is not elliptic curve crypto per se, but relies on an interesting observation regarding discrete logarithms. The idea is that if two parties hold a secret sharing of a small value $b$ in the exponent, i.e. $h_1 = g^{x_1}$ and $h_2 = g^{x_2}$ with $x_1 - x_2 = b$, they can derive from that an additive secret sharing of $b$ itself *without any interaction*. To do so, they agree on a polynomially dense subset $D$ of distinguished points in the group, and count how many steps it takes to reach an element of $D$ from their respective share. If $b$ is small enough compared to the relative density of $D$, they should reach the *same* element of $D$ with good probability, and in that case, if it took $c_1$ (resp. $c_2$) steps for the first (resp. second) party, we have $x_1 + c_1 = x_2 + c_2$, hence $c_2 - c_1 = b$: the pair $(-c_1, c_2)$ is a secret sharing of $b$ obtained without interaction!
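This share-conversion trick is easy to simulate. Below is a minimal sketch in a toy group (the group, generator, distinguished-point rule and parameter sizes are all arbitrary illustration choices, not from the paper); it fails with small probability when a distinguished point happens to fall strictly between the two shares.

```python
import random

# Toy simulation of the non-interactive share-conversion idea sketched above.
# The group (Z/pZ)^*, the generator, and the "distinguished point" rule
# (residues divisible by 64) are arbitrary choices for illustration.

def steps_to_distinguished(h, g, p, density):
    """Walk h, h*g, h*g^2, ... and count steps until a distinguished element."""
    steps = 0
    while h % density != 0:  # distinguished set D: residues divisible by `density`
        h = (h * g) % p
        steps += 1
    return steps

p = 1_000_003  # a prime; (Z/pZ)^* plays the role of the DDH group
g = 2
random.seed(1)

trials, successes = 200, 0
for _ in range(trials):
    b = random.randint(0, 3)            # the small shared value
    x2 = random.randint(0, p - 2)
    x1 = x2 + b                         # so that g^x1 / g^x2 = g^b
    c1 = steps_to_distinguished(pow(g, x1, p), g, p, 64)
    c2 = steps_to_distinguished(pow(g, x2, p), g, p, 64)
    if c2 - c1 == b:                    # (-c1, c2) is then an additive sharing of b
        successes += 1

print(successes, "/", trials)
```

The per-run failure probability is roughly $b$ times the density of $D$ (here at most about 3/64), which is the trade-off between correctness and walk length mentioned above.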

At CHES, on the other hand, there were several interesting papers about the implementation of elliptic and hyperelliptic curve cryptography on various platforms.

- On desktop CPUs: the paper by Thomaz Oliveira, Julio López and Francisco Rodríguez-Henríquez, “Software Implementation of Koblitz Curves over Quadratic Fields”. Usual Koblitz curves are defined over $\mathbb{F}_2$ and use the fast Frobenius endomorphism instead of doublings to speed up scalar multiplication. This paper instead investigates curves defined over $\mathbb{F}_4$ together with the corresponding, slightly less fast Frobenius, and shows that the quadratic extension structure of the corresponding fields yields interesting performance benefits. The authors obtain a constant time scalar multiplication in under 70k cycles on Haswell and 52k cycles on Skylake at the 128-bit security level, which is quite respectable, even though they have to rely on a suboptimal field size of close to 300 bits to find a curve with a sufficiently large prime subgroup.
- On embedded CPUs: the paper by Lejla Batina, Joost Renes, Peter Schwabe and Benjamin Smith, “µKummer: Efficient Hyperelliptic Signatures and Key Exchange on Microcontrollers”. It is well-known that Kummer surfaces support a notion of scalar multiplication, but not point addition directly, because addition is not compatible with quotienting by $\pm 1$. As a result, they would only be used for protocols like Diffie-Hellman, and not for e.g. signatures, which require point additions. However, Chung, Costello and Smith recently observed that you can simply lift back to the actual Jacobian after carrying out your fast variable-base point multiplication on the Kummer, and doing so is likely to be faster than doing everything in a Jacobian, especially if you want constant-time arithmetic. This CHES paper is a concrete demonstration of that idea on constrained software platforms (AVR ATmega and ARM Cortex M0), where the authors break earlier speed records for (H)ECC by wide margins.
- In hardware: the paper by Kimmo Järvinen, Andrea Miele, Reza Azarderakhsh and Patrick Longa, “FourQ on FPGA: New Hardware Speed Records for Elliptic Curve Cryptography over Large Prime Characteristic Fields”. FourQ is a very nice curve introduced by Costello and Longa at last year’s ASIACRYPT, which currently holds essentially all of the speed records on desktop CPUs for constant-time scalar multiplication (both fixed and variable base) by a comfortable margin. This CHES paper implements it on FPGA, and finds that it also performs faster than other implementations over large characteristic fields (although not nearly as fast as comparable binary field designs).

The CHES rump session also featured some announcements of note, including a concrete complexity estimate by Francisco Rodríguez-Henríquez and his colleagues of the quasilinear attack on discrete logs in the formerly 128-bit secure field that used to be recommended for pairings (answer: if everybody in the world was working on it 8 hours per day, 1000 multiplications per hour, it would only take about 10 months!). The announcement that personally got me the most excited is an improved implementation result for the binary curve GLS254 on desktop CPUs due to Thomaz Oliveira, Diego Aranha, Julio López and Francisco Rodríguez-Henríquez, who adapted techniques from the quadratic Koblitz paper above to blow up the competition again with that curve: at 48k cycles on Haswell and 38k cycles on Skylake for 128-bit secure scalar multiplication, it is even faster than Kummers and FourQ!

— Mehdi Tibouchi


The paper is written in an unusual style. It is a bit like a research notebook, containing sketches of ideas, rather than a polished mathematics paper.

The paper is mainly about the point decomposition problem, which is the fundamental problem behind all recent work on index calculus algorithms (see these blog posts). Precisely, the problem is: given a point $R$ and a factor base $\mathcal{F}$, write $R = P_1 + \cdots + P_m$ for $P_i \in \mathcal{F}$.

The standard approach these days is to use summation polynomials: we find solutions $(x_1, \dots, x_m)$ of the Semaev summation polynomial $S_{m+1}(x_1, \dots, x_m, x(R)) = 0$ and then compute the corresponding points. Currently these methods have not had any practical impact on the security of elliptic curve cryptography.
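As a tiny sanity check of this machinery (on an arbitrary toy curve, nothing to do with the preprint), one can verify that Semaev's third summation polynomial vanishes on the x-coordinates of three points summing to zero:

```python
# Semaev's third summation polynomial S3(x1, x2, x3) vanishes exactly when
# there are points P1, P2, P3 on E with those x-coordinates and P1+P2+P3 = O.
p, A, B = 103, 1, 5   # toy curve E: y^2 = x^3 + x + 5 over F_103 (arbitrary)

def on_curve(x, y):
    return (y * y - (x**3 + A * x + B)) % p == 0

def ec_add(P, Q):
    # affine chord addition (inputs with distinct x-coordinates)
    (x1, y1), (x2, y2) = P, Q
    lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def S3(x1, x2, x3):
    # Semaev's third summation polynomial for y^2 = x^3 + A*x + B
    return ((x1 - x2)**2 * x3**2
            - 2 * ((x1 + x2) * (x1 * x2 + A) + 2 * B) * x3
            + ((x1 * x2 - A)**2 - 4 * B * (x1 + x2))) % p

pts = [(x, y) for x in range(p) for y in range(p) if on_curve(x, y)]
P = pts[0]
Q = next(pt for pt in pts if pt[0] != P[0])  # distinct x, so ec_add is safe
R = ec_add(P, Q)                             # -(P+Q) has the same x-coordinate
print(S3(P[0], Q[0], R[0]))                  # → 0
```

Since $P + Q + (-(P+Q)) = O$ and negation preserves the x-coordinate, $S_3$ must vanish on the triple of x-coordinates, and it does.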

The preprint contains several ideas, whose relevance and impact are yet to be fully determined.

One idea is a way to generate a lot of equations without adding too many new variables. Courtois chooses $n$ random elliptic curve points $P_1, \dots, P_n$ (where $n$ is any natural number) and defines new variables

for $i = 1, \dots, n$.

Courtois then notes that if and then

Hence we have

There are many choices of pairs of these points, so we get a large system of equations in the new variables. On the one hand, we now have a greatly over-determined system, and so it should be easier to solve than traditional systems. On the other hand, the system has too many solutions as the variables are unconstrained.

Hence the next problem is to add constraints to the system to reduce the number of solutions. If one can find suitable constraints then one should be able to define a corresponding notion of factor base. Some ideas are sketched in the paper, in particular in Section 18.2, but I have not yet formed an opinion about how well these ideas will work. The paper only considers elliptic curves over prime fields $\mathbb{F}_p$, but similar ideas might be used for curves over other fields.

There are several other ideas in the paper, including some new polynomial equations that might be used to play a similar role to the summation polynomials.

Overall, the paper contains some interesting ideas that are not yet fully developed. Currently the paper does not describe a complete index calculus algorithm and it is difficult for me to determine whether or not the methods are likely to lead to an improvement over existing techniques. No precise complexity statements are made in the paper.

I hope that other researchers will investigate these ideas. I look forward to following the development of work on this topic.

— Steven Galbraith


http://ecc2016.yasar.edu.tr/invited.html

Here are the speakers:

- Benjamin Smith, “Fast, uniform, and compact scalar multiplication for elliptic curves and genus 2 Jacobians with applications to signature schemes” and “μKummer: efficient hyperelliptic signatures and key exchange on microcontrollers”
- Cyril Hugounenq, “Explicit isogenies in quadratic time in any characteristic”
- Daniel Genkin, “ECDH Key-Extraction via Low-Bandwidth Electromagnetic Attacks on PCs” and “ECDSA Key Extraction from Mobile Devices via Nonintrusive Physical Side Channels” and “CacheBleed: A Timing Attack on OpenSSL Constant Time RSA”
- Jens Groth, “On the Size of Pairing-based Non-interactive Arguments”
- Maike Massierer, “Computing L-series of geometrically hyperelliptic curves of genus three”
- Mehmet Sabır Kiraz, “Pairings and Cloud Security”
- Pascal Sasdrich, “Implementing Curve25519 for Side-Channel–Protected Elliptic Curve Cryptography”
- Patrick Longa, “Efficient algorithms for supersingular isogeny Diffie-Hellman”
- Razvan Barbulescu, “Extended Tower Number Field Sieve: A New Complexity for Medium Prime Case”
- Sebastian Kochinke, “Computing discrete logarithms with special linear systems”
- Shashank Singh, “A General Polynomial Selection Method and New Asymptotic Complexities for the Tower Number Field Sieve Algorithm”
- Shoukat Ali, “A new algorithm for residue multiplication modulo $2^{521}-1$”
- Tung Chou, “The Simplest Protocol for Oblivious Transfer” and “Sandy2x: new Curve25519 speed records”

The conference organisers wish to reassure conference attendees that it is safe to come to Turkey for the conference: “The life in Izmir is just as usual: sunny and slow-going. We are preparing for ECC and we would like to serve our guests in the best way we can.”

http://www.mathematik.uni-kl.de/~thofmann/ants/schedule.html.

In other words, they solved a DLP in a finite field of size around 510 bits. You can read further details here and here.

— Steven Galbraith


From a cryptographic point of view, complete addition laws are important for designing and implementing uniform and constant-time algorithms on elliptic curves. The most well-known and successful example of this is the (twisted) Edwards model for elliptic curves, where a single formula can be used for doubling and adding, without any exceptions or special cases. In contrast, consider the traditional chord-and-tangent group law on the Weierstrass model of an elliptic curve. This group law can’t be applied to compute $P + P$, for example; instead, we have a separate doubling formula using the tangent. And until now, nobody has written down a single efficient complete addition law for the points on prime-order elliptic curves – including the prime-order curves that were standardized by NIST, and their international counterparts like Brainpool and ANSSI. This means that implementing standardized curves in a safe way is a much more complicated business than it should be!
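The incompleteness of the chord-and-tangent law is easy to exhibit in code. The following sketch (a toy Weierstrass curve over a small field, parameters arbitrary) shows the chord formula blowing up on a doubling, which is exactly why implementations need a separate tangent formula and the case distinctions that come with it:

```python
p, A, B = 101, 2, 3    # arbitrary toy curve y^2 = x^3 + 2x + 3 over F_101

def chord_add(P, Q):
    # the chord law: undefined when x1 == x2 (in particular when P == Q)
    (x1, y1), (x2, y2) = P, Q
    lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def tangent_double(P):
    # the separate doubling formula using the tangent line
    x1, y1 = P
    lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
    x3 = (lam * lam - 2 * x1) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

# pick any affine point with y != 0
P = next((x, y) for x in range(p) for y in range(1, p)
         if (y * y - (x**3 + A * x + B)) % p == 0)

try:
    chord_add(P, P)
    chord_failed = False
except ValueError:          # pow(0, -1, p): zero is not invertible mod p
    chord_failed = True

D = tangent_double(P)
print(chord_failed, (D[1]**2 - (D[0]**3 + A * D[0] + B)) % p)  # → True 0
```

The chord formula divides by $x_2 - x_1 = 0$, while the tangent formula returns a valid point of the curve.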

Over twenty years ago, Bosma and Lenstra studied the group laws on elliptic curves. They concentrated on the case where an elliptic curve E is embedded in the projective plane as a Weierstrass model, but Arene, Kohel, and Ritzenthaler have since generalized their results to any projective embedding of any elliptic curve, and also to abelian varieties of any dimension. To make things more precise, suppose E is an elliptic curve in projective Weierstrass form, with coordinates $X$, $Y$, and $Z$. A *group law* is a triple $(X_3, Y_3, Z_3)$ of polynomials such that $P_1 + P_2 = (X_3(P_1, P_2) : Y_3(P_1, P_2) : Z_3(P_1, P_2))$ for the pairs of points $(P_1, P_2)$ in some open subset of $E \times E$ (the cartesian product of $E$ with itself). This means that the group law is allowed to “fail” on some subset of the pairs of points on $E$, provided that subset is defined by a nontrivial algebraic relation. Basically, the group law is allowed to fail on what I will call a *failure curve* of pairs of points in the surface $E \times E$ (this curve may be reducible; strictly speaking, it’s an effective divisor on $E \times E$). For any fixed point $P$, there is a bounded number of points $Q$ such that the group law fails to add $Q$ to $P$, and that bound depends only on the group law, not on $P$.

To make this notion of failure more precise, we can add another requirement to the definition of a group law: for any pair of input points $(P_1, P_2)$ on $E$, either $(X_3 : Y_3 : Z_3) = P_1 + P_2$ (ie, the group law holds) or $(X_3, Y_3, Z_3) = (0, 0, 0)$. From a computational point of view this is very nice, because $(0, 0, 0)$ does not correspond to a projective point; so if we evaluate the group law at a pair of points then either the result is correct, or it is not a projective point – and this case is extremely easy to identify. If the formula is ever wrong, it’s so obviously wrong that you see it immediately.

A *complete system* of group laws is a collection of group laws that covers all of the pairs of points on $E$: that is, the common intersection of all of the failure curves is empty. Bosma and Lenstra showed that for an elliptic curve, any complete system must contain *at least 2* group laws. However, this result only holds over the algebraic closure; and in cryptography, we don’t work over the algebraic closure, we work over a fixed finite field $\mathbb{F}_q$. And this is the crucial point: we can get away with *just one* group law, so long as none of the pairs of points on its failure curve are defined over $\mathbb{F}_q$! If they’re all defined over some extension, then they will never be inputs to our cryptographic algorithm, and we can simply ignore their existence. Arene, Kohel, and Ritzenthaler take this to its logical conclusion, showing that this can always be done for any elliptic curve or abelian variety (apart from some pathological cases over extremely small fields, which are irrelevant to cryptography).

The particularly nice thing about Bosma and Lenstra’s paper is that they give a clear description of what the failure curves look like for the simplest nontrivial class of group laws, which is where all of the polynomials are biquadratic (that is, each $X_3$, $Y_3$, and $Z_3$ is homogeneous of degree 2 in the coordinates of $P_1$, and also in the coordinates of $P_2$). In this case, the failure curves all correspond to lines: for each biquadratic group law, there is a line $L$ in the projective plane such that the group law fails on $(P_1, P_2)$ if and only if $P_1 - P_2$ is in the intersection of $L$ with $E$.

At Eurocrypt, Joost Renes explained that there is an obvious way to apply this to prime-order elliptic curves: for a complete group law on such a curve $E$ over $\mathbb{F}_q$, we can take the biquadratic group law whose failure line is the x-axis (the line $Y = 0$). This works because the intersection of that line with $E$ consists of the nonzero 2-torsion points – and since $E(\mathbb{F}_q)$ has prime order, none of those are defined over $\mathbb{F}_q$, so they can’t be the difference of any pair of $\mathbb{F}_q$-points, and therefore the group law can’t fail on any pair of points in $E(\mathbb{F}_q)$. So far so good. But Bosma and Lenstra actually wrote down that group law as an example in their paper, and the formulae take up a whole page and a half! I’m sure that plenty of cryptographers had seen it, and thought *“that’s all very nice in theory, but…”*
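The prime-order argument is easy to check by brute force on a toy example (a small prime field chosen arbitrarily): once the group order is prime, there are no rational points with $y = 0$, so the failure line misses every pair of rational points.

```python
# Brute-force check: a prime group order rules out rational 2-torsion
# (points with y = 0), so the failure line Y = 0 misses E(F_p).
# The field size 67 and the naive curve search are illustration choices.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def find_prime_order_curve(p):
    # search for A, B with #E(F_p) prime, counting points naively
    for A in range(p):
        for B in range(1, p):
            if (4 * A**3 + 27 * B**2) % p == 0:
                continue  # singular curve, skip
            pts = [(x, y) for x in range(p) for y in range(p)
                   if (y * y - (x**3 + A * x + B)) % p == 0]
            n = len(pts) + 1  # +1 for the point at infinity
            if is_prime(n):
                return A, B, n, pts

p = 67
A, B, n, pts = find_prime_order_curve(p)
two_torsion = [pt for pt in pts if pt[1] == 0]  # rational points of order 2
print(n, two_torsion)  # a prime order, and an empty 2-torsion list
```

By Lagrange's theorem a prime-order group has no element of order 2, so the `two_torsion` list is necessarily empty, as the search confirms.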

So now we come to the main contribution of the paper: Joost, Craig, and Lejla show that this *“it’ll never work”* intuition is completely wrong. They simplify the formulae (following Arene, Kohel, and Ritzenthaler); they derive efficient straight-line algorithms to evaluate the polynomials; and they show that not only can this group law be computed efficiently, it’s actually competitive with the best known (non-complete) formulae for projective Weierstrass curves. So if you find yourself implementing the group law on a prime-order curve (for whatever reason, be it scientific or political), then you should definitely consider doing it the way their paper suggests. You won’t lose much speed, and you’ll gain a lot of safety and simplicity.

–Ben Smith


A pairing is a bilinear map defined over an elliptic curve. It outputs a value in a finite field. This is commonly expressed as $e : \mathbb{G}_1 \times \mathbb{G}_2 \to \mathbb{G}_T$, where $\mathbb{G}_1$ and $\mathbb{G}_2$ are two (distinct) prime order subgroups of an elliptic curve, and $\mathbb{G}_T$ is the target group, of the same prime order $r$. More precisely, we have

$e : \mathbb{G}_1 \times \mathbb{G}_2 \to \mu_r \subset \mathbb{F}_{p^k}^*,$

where $\mu_r$ is the cyclotomic subgroup of $\mathbb{F}_{p^k}^*$, i.e. the subgroup of $r$-th roots of unity: $\mu_r = \{ x \in \mathbb{F}_{p^k}^* \mid x^r = 1 \}$.

The use of pairings as a constructive tool in cryptography had its first hints in 1999 and started in earnest in 2000. Its security relies on the intractability of the discrete logarithm problem on the elliptic curve (i.e. in $\mathbb{G}_1$ and $\mathbb{G}_2$) and in the finite field (i.e. in $\mathbb{G}_T$).

The expected running-time of a discrete logarithm computation on an elliptic curve and in a finite field is not the same: it is $O(\sqrt{r})$ in a group of points $E(\mathbb{F}_p)$ of prime order $r$, and $L_Q(1/3)$ in a finite field $\mathbb{F}_Q = \mathbb{F}_{p^k}$, where $L_Q$ is the notation $L_Q(\alpha, c) = \exp\left((c + o(1))(\log Q)^{\alpha}(\log\log Q)^{1-\alpha}\right)$. The asymptotic complexity in the finite field depends on the total size $Q$ of the finite field. It means that we can do cryptography in an order-$r$ subgroup of $\mathbb{F}_{p^k}^*$, whenever $p^k$ is large enough.
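As a rough illustration of these two running times (the $o(1)$ term is ignored, so this is only indicative and certainly not a key-size recommendation), one can compare the exponents numerically:

```python
import math

# Back-of-envelope comparison: sqrt(r) on the curve side vs. L_Q(1/3, c) on
# the finite-field side, with c = (64/9)^(1/3) as for NFS in a prime field.
def rho_cost_bits(r_bits):
    # Pollard rho in a subgroup of prime order r costs about sqrt(r) operations
    return r_bits / 2

def lq_cost_bits(q_bits, c=(64.0 / 9.0) ** (1.0 / 3.0)):
    # log2 of L_Q(1/3, c) = exp(c * (log Q)^(1/3) * (log log Q)^(2/3)), o(1) dropped
    log_q = q_bits * math.log(2)
    return c * log_q ** (1 / 3) * math.log(log_q) ** (2 / 3) / math.log(2)

print(round(rho_cost_bits(256)))   # → 128
print(round(lq_cost_bits(3072)))   # same ballpark as 128 bits
```

With these crude estimates, a 256-bit subgroup order and a finite field of around 3072 bits give costs in the same ballpark, which is the usual balancing act when choosing pairing parameters.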

The finite fields are commonly divided into three cases, depending on the size of the prime $p$ (the finite field characteristic) compared to the extension degree $n$. Each of the three cases has its own index calculus variant, and the most appropriate variant that applies qualifies the characteristic as small, medium or large:

- small characteristic: one uses the function field sieve (FFS) algorithm, and the quasi-polynomial-time algorithm when the extension degree is suitable for that (i.e. smooth enough);
- medium characteristic: one uses the NFS-HD algorithm. This is the High Degree variant of the Number Field Sieve (NFS) algorithm. The elements involved in the relation collection are of higher degree compared to the regular NFS algorithm.
- large characteristic: one uses the Number Field Sieve algorithm.

Each variant (QPA, FFS, NFS-HD and NFS) has a different asymptotic complexity.

Small-characteristic finite fields such as $\mathbb{F}_{2^n}$ and $\mathbb{F}_{3^m}$, where the extension degree is composite, are broken for pairing-based cryptography by the QPA algorithm (see this blog post).

It does not mean that *supersingular curves* are broken but it means that the pairing-friendly curves in small characteristic are broken, and only supersingular pairing-friendly curves were available in small characteristic. There exist supersingular pairing-friendly curves in large characteristic which are still safe, if the finite field is chosen of large enough size.

It was assumed, in order to set up the key-sizes of pairing-based cryptography, that computing a discrete logarithm in any finite field of medium to large characteristic is at least as difficult as computing a discrete logarithm in a prime field of the same total size. The key-sizes were chosen according to the complexity formula $L_Q(1/3, (64/9)^{1/3})$.

The finite fields targeted by this new variant are of the form $\mathbb{F}_{p^n}$, where $n$ is composite (e.g. $n = 6$, $n = 12$) and there exists a small factor $d$ of $n$ (e.g. 2 or 3 for $n = 6$ and $12$). In that case, it is possible to consider both the finite field and the two number fields involved in the NFS algorithm as a tower: an extension of degree $d$ on top of an extension of degree $n/d$. With this representation, combined with previously known polynomial selection methods, the norm of the elements involved in the relation collection step (second step of the NFS algorithm) is much smaller. This provides an important speed-up of the whole algorithm and decreases the asymptotic complexity below the previously assumed complexity. That is why, in certain cases, the key sizes should be enlarged.

This post starts with a brief summary of the previous state of the art in DL computation in $\mathbb{F}_{p^n}$, then the Kim–Barbulescu variant is sketched, and a possible key-size update in pairing-based cryptography is discussed. It is not reasonable at the time of writing to propose a practical key-size update, but we can discuss an estimate, based on the new asymptotic complexities.

The NFS algorithm to compute discrete logarithms applies to finite fields $\mathbb{F}_Q = \mathbb{F}_{p^n}$ where $p$ (the characteristic) is of medium to large size compared to the total size $Q$ of the finite field. This is measured asymptotically as $p = L_Q(\alpha)$, with $1/3 \le \alpha \le 2/3$ for medium characteristic and $\alpha \ge 2/3$ for large characteristic, where the notation $L_Q(\alpha, c)$ is defined as

$L_Q(\alpha, c) = \exp\left( (c + o(1)) (\log Q)^{\alpha} (\log \log Q)^{1 - \alpha} \right).$

Since in pairing-based cryptography, small characteristic is restricted to $p = 2$ and $p = 3$, we can simplify the situation and say that the NFS algorithm and its variants apply to all the non-small-characteristic finite fields used in pairing-based cryptography. The Kim–Barbulescu pre-print contains two new asymptotic complexities for computing discrete logarithms in such finite fields. Prime fields are not affected by this new algorithm. The finite fields used in pairing-based cryptography that are affected are $\mathbb{F}_{p^6}$ and $\mathbb{F}_{p^{12}}$, for example.

The NFS algorithm is made of four steps:

- polynomial selection,
- relation collection,
- linear algebra,
- individual discrete logarithm.

The Kim–Barbulescu improvement presents a new polynomial selection that combines the Tower-NFS construction (TNFS) [AC:BarGauKle15] with the Conjugation technique [EC:BGGM15]. This leads to a better asymptotic complexity of the algorithm. The polynomial selection determines the expected running time of the algorithm. The idea of NFS is to work in a number field instead of a finite field, so that a notion of *small* elements exists, as well as a factorization of elements into prime ideals. This is not possible in a finite field $\mathbb{F}_{p^n}$.

Take as an example the prime $p = 3141592653589793238462643383589$ from the Kim–Barbulescu eprint and consider $\mathbb{F}_{p^6}$. The order of the cyclotomic subgroup is $\Phi_6(p) = p^2 - p + 1$, and the 194-bit prime 13688771707474838583681679614171480268081711303057164980773 is its largest prime divisor. Since $p \equiv 1 \pmod 6$, one can construct a Kummer extension $\mathbb{F}_{p^6} = \mathbb{F}_p[x]/(x^6 - s)$, where $x^6 - s$ is irreducible modulo $p$. Then to run NFS, the idea of Joux, Lercier, Smart and Vercauteren in 2006 [JLSV06] was to use $\mathbb{Z}[x]/(f(x))$ with $f = x^6 - s$ as the first ring, and $\mathbb{Z}[x]/(g(x))$, where $g$ is a second polynomial with the same roots as $f$ modulo $p$, as the second one to collect relations in NFS. Take an element of degree 5 with small coefficients: its norm in the first number field is 122 bits long, its norm in the second is 322 bits long, and the total is 444 bits.

Why do we need two sides (the two rings $\mathbb{Z}[x]/(f(x))$ and $\mathbb{Z}[x]/(g(x))$)? Because when mapping an element from each of the two rings to $\mathbb{F}_{p^6}$, $x$ is sent to a common root of $f$ and $g$ modulo $p$, so that the two images coincide in the finite field. What is the purpose of all of this? We will obtain a different factorization into prime ideals in each ring, so that we will get a non-trivial relation when mapping each side to $\mathbb{F}_{p^6}$. Now the game is to define polynomials $f$ and $g$ such that the norm is as low as possible, so that the probability that an element will factor into *small* prime ideals is high.
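To make the two-sided picture concrete, here is a toy computation (tiny parameters for illustration, not those of the pre-print) showing how an element with small coefficients has a much smaller norm on the side whose defining polynomial has small coefficients:

```python
# Toy two-sided NFS setup: f and g = f + p share their roots modulo p, so
# Z[x]/(f) and Z[x]/(g) both map onto F_p[x]/(f mod p).  The norm of a + b*x
# with respect to a monic quadratic h = x^2 + c is |Res(h, a + b*x)| = a^2 + c*b^2.
p = 1019                         # prime, p ≡ 3 (mod 4), so x^2 + 1 is irreducible mod p
f_const, g_const = 1, 1 + p      # f = x^2 + 1 and g = f + p = x^2 + (1 + p)

def norm_quadratic(const_term, a, b):
    # |Res(x^2 + const_term, a + b*x)| = |b^2 * ((a/b)^2 + const_term)|
    return abs(a * a + const_term * b * b)

a, b = 3, 2                      # a small relation-collection element a + b*x
print(norm_quadratic(f_const, a, b))  # → 13   (f-side: small norm)
print(norm_quadratic(g_const, a, b))  # → 4089 (g-side: inflated by g's large coefficient)
```

The element maps to the same value of $\mathbb{F}_{p^2}$ through both rings, but factors over prime ideals of very different sizes on the two sides; the polynomial selection methods below are all attempts to keep both norms small simultaneously.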

A few new polynomial selection methods have been proposed since 2006 to reduce this norm, hence improving the asymptotic complexity of the NFS algorithm. In practice, each finite field is a case study, and the method with the best asymptotic running-time will not necessarily give the smallest norms in practice. That is why cryptanalysts compare the norm bounds in addition to comparing the asymptotic complexity. This is done in Table 5 and Fig. 2 of the Kim–Barbulescu pre-print.

What is the key idea of the Kim–Barbulescu method? It combines two previous techniques: the Tower-NFS variant [AC:BarGauKle15] and the Conjugation polynomial selection method [EC:BGGM15]. Moreover, it exploits the fact that the extension degree $n$ is composite. Sarkar and Singh started in this direction in [EC:SarSin16] and obtained a smaller norm estimate (the asymptotic complexity was not affected significantly). We mention that a few days ago, Sarkar and Singh combined the Kim–Barbulescu technique with their polynomial selection method [EPRINT:SarSin16].

Here is a list of the various polynomial selection methods for finite fields $\mathbb{F}_{p^n}$.

- the Joux–Lercier–Smart–Vercauteren method that applies to medium-characteristic finite fields, a.k.a. JLSV1, whose asymptotic complexity is $L_{p^n}(1/3, (128/9)^{1/3}) \approx L_{p^n}(1/3, 2.42)$;
- the Conjugation method that applies to medium-characteristic finite fields, whose asymptotic complexity is $L_{p^n}(1/3, (96/9)^{1/3}) \approx L_{p^n}(1/3, 2.20)$;
- the JLSV2 method that applies to large-characteristic finite fields, whose asymptotic complexity is $L_{p^n}(1/3, (64/9)^{1/3}) \approx L_{p^n}(1/3, 1.92)$;
- the generalized Joux–Lercier (GJL) method that applies to large-characteristic finite fields, whose asymptotic complexity is $L_{p^n}(1/3, (64/9)^{1/3}) \approx L_{p^n}(1/3, 1.92)$;
- the Sarkar–Singh method that allows a trade-off between the GJL and the Conjugation methods when $n$ is composite.
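For reference, the standard $L_{p^n}(1/3, c)$ constants of these methods evaluate numerically as follows (a quick sketch):

```python
# Numerical values of the constants c in L_{p^n}(1/3, c) for each method.
c_jlsv1 = (128 / 9) ** (1 / 3)   # JLSV1, medium characteristic
c_conj = (96 / 9) ** (1 / 3)     # Conjugation, medium characteristic
c_large = (64 / 9) ** (1 / 3)    # JLSV2 / GJL, large characteristic

print(round(c_jlsv1, 2), round(c_conj, 2), round(c_large, 2))   # -> 2.42 2.2 1.92
```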

Back to our example, the Kim–Barbulescu technique would define a polynomial $h$ of degree 3, giving an intermediate cubic tower, together with two polynomials $f$ and $g$ for the remaining quadratic extension, with $g \equiv f \bmod p$.

In this case, the elements involved in the relation collection step are of the form $(a_0 + a_1 y + a_2 y^2) + (b_0 + b_1 y + b_2 y^2)\,x$. They have six coefficients. In order to enumerate a set of the same global size $E^2$, one takes the six coefficients bounded by $E^{1/3}$ instead of two coefficients bounded by $E$. The norm of such an element on the $f$-side is 135.6 bits long, the norm on the $g$-side is 216.6 bits long, for a total of 352.2 bits. This is 92 bits less than the 444 bits obtained with the former JLSV technique.
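The search-space accounting works out as follows (with an illustrative bound chosen so that the cube root is exact, not the bound from the paper):

```python
# Two coefficients bounded by E, or six coefficients bounded by E^(1/3),
# both enumerate a search space of the same global size E^2.
E = 2 ** 21                          # illustrative bound so E^(1/3) is exact
small_bound = round(E ** (1 / 3))    # = 2^7
assert small_bound ** 6 == E ** 2    # same global search space of size E^2
print(small_bound ** 6 == E ** 2)    # -> True
```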

The norm computed in the number fields depends on the coefficient size and the degree of the polynomials defining the number fields, and on the degree and coefficient size of the element whose norm is computed.

With the Kim–Barbulescu technique, the norm is computed recursively: from the top field down to the intermediate field defined by $h$, then from that field down to $\mathbb{Q}$. The norm formula depends on the degree of the element, and since this element is of degree 1 in $x$, its norm will be as low as possible. Then over the field defined by $h$, the element is of degree (at most) $\deg h - 1$, but this does not increase its norm much, since the coefficients of $h$ are tiny (the norm bound involves the coefficient size $\|h\|_\infty$, which will be very small).

The asymptotic complexity is reduced (contrary to the Sarkar–Singh method) because the norm formula is reduced not only in practice but also in the asymptotic formula. To do so, the parameters must be tuned very tightly. In the large-characteristic case, the degree $\kappa$ of the extension on top of the tower should stay quite small. Kim and Barbulescu took $\kappa = 2$. In fact, $\kappa$ should be as small as possible, so that one can collect relations over elements of any degree in $y$, but always of degree 1 in $x$. The smallest possible value is $\kappa = 2$, when $n$ is even.

For the classical BN curve example, where $p^{12}$ is a 3072-bit prime power, the same constraint on $\kappa$ applies.

For prime fields $\mathbb{F}_p$ where $p$ has a special form, the Special-NFS algorithm may apply. Its complexity is $L_p(1/3, (32/9)^{1/3}) \approx L_p(1/3, 1.53)$. For extension fields $\mathbb{F}_{p^n}$, where $p$ is obtained by evaluating a polynomial $P$ of degree $d$ at a given value, e.g. $d = 4$ for BN curves, Joux and Pierrot [PAIRING:JouPie13] proposed a dedicated polynomial selection method such that the NFS algorithm has an expected running-time of

- $L_{p^n}(1/3, ((64/9) \cdot (d+1)/d)^{1/3})$ in medium characteristic;
- a smaller complexity in large characteristic, provided moreover that $d$ satisfies an appropriate bound;
- $L_p(1/3, (32/9)^{1/3})$ in prime fields and tiny extension degree fields, provided moreover that $d$ satisfies an appropriate bound.
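Assuming the medium-characteristic Joux–Pierrot constant has the form $((64/9) \cdot (d+1)/d)^{1/3}$ (an assumption on the exact formula), the BN case $d = 4$ evaluates as follows, next to the Special-NFS constant:

```python
# Joux-Pierrot medium-characteristic constant for a BN prime (d = 4),
# compared with the Special-NFS constant (32/9)^(1/3).
d = 4
c_joupie = (64 / 9 * (d + 1) / d) ** (1 / 3)   # = (80/9)^(1/3)
c_snfs = (32 / 9) ** (1 / 3)
print(round(c_joupie, 2), round(c_snfs, 2))    # -> 2.07 1.53
```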

The Kim–Barbulescu method combines the TNFS method and the Joux–Pierrot method to obtain an asymptotic complexity of $L_{p^n}(1/3, (32/9)^{1/3}) \approx L_{p^n}(1/3, 1.53)$ in the medium-characteristic case.

This new attack reduces the asymptotic complexity of the NFS algorithm to compute discrete logarithms in $\mathbb{F}_{p^n}$, which contains the target group of pairings. The recommended sizes of $p^n$ were computed assuming that the asymptotic complexity is $L_{p^n}(1/3, 1.92)$. Since the new complexity is below this bound, the key sizes should be enlarged. The smallest asymptotic complexity that Kim and Barbulescu obtain is for a combination of Extended-TNFS and Special-NFS. They obtain $L_{p^n}(1/3, (32/9)^{1/3}) \approx L_{p^n}(1/3, 1.53)$ when the prime $p$ is given by a polynomial of degree $d$, when $n$ is composite, $n = \eta \kappa$, and when $\kappa$ satisfies an appropriate bound. In that case, it means that the size of $p^n$ should be roughly doubled asymptotically.
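The "roughly doubled" claim can be read off from the $L$-notation: keeping $c \cdot (\log Q)^{1/3} (\log\log Q)^{2/3}$ constant while dropping $c$ from $(64/9)^{1/3}$ to $(32/9)^{1/3}$ multiplies $\log Q$ by about $(c_1/c_2)^3$, ignoring the slowly varying $\log\log$ factor:

```python
# Dropping c from (64/9)^(1/3) to (32/9)^(1/3) in L_Q(1/3, c) requires
# log Q to grow by a factor (c1/c2)^3 = (64/9)/(32/9) = 2 for the same cost.
c1 = (64 / 9) ** (1 / 3)
c2 = (32 / 9) ** (1 / 3)
print(round((c1 / c2) ** 3, 2))   # -> 2.0
```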

In order to provide a first comparison of the expected running-time of the NFS algorithm in finite fields $\mathbb{F}_{p^6}$ and prime fields of the same total size, one can compare the size of the norms involved in the relation collection. We took the example of $\mathbb{F}_{p^6}$ of 608 bits (183 decimal digits), and consider a prime field of 608 bits for comparison. The Joux–Lercier method [MathComp:JouLer03] generates two polynomials $f$ and $g$ of degree $d+1$ and $d$, with coefficient sizes $O(\log p)$ and $p^{1/(d+1)}$. We take $d = 2$, so that $\deg f = 3$ and $\deg g = 2$. The choice $d = 3$ would lead to slightly larger norms (5 to 10 bits larger).

The polynomials $f$ and $g$ would be of degree 3 and 2, and the coefficient size of $g$ would be of 203 bits. The bound $E$ on the coefficients in the relation collection would be of approximately 27.36 bits (cado-nfs) up to 30 bits (Kim–Barbulescu estimate). The norms of the elements would be of 341 bits (for $E$ of 27.36 bits) to 354 bits (for $E$ of 30 bits). Kim and Barbulescu obtain a norm of 330 bits for $\mathbb{F}_{p^6}$ [Example 1]. It means that computing a discrete logarithm in $\mathbb{F}_{p^6}$ would be easier than in $\mathbb{F}_p$, both of 608 bits.
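The 203-bit figure for the coefficients of $g$ is just $p^{1/(d+1)} = p^{1/3}$ for a 608-bit $p$:

```python
# With deg f = 3 and deg g = 2 in the Joux-Lercier method, the coefficients
# of g are of size about p^(1/3), i.e. roughly 608/3 bits here.
bits_p = 608
coeff_bits = bits_p / 3
print(round(coeff_bits))   # -> 203
```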

We can do the same analysis for $p^n$ of 3072 bits, and this starts to be a complete mess. The Joux–Lercier method for a prime field of 3072 bits would take polynomials of larger degree, and the norms would be of 1243 bits. The Kim–Barbulescu technique (ExTNFS) with Conjugation in $\mathbb{F}_{p^{12}}$ of 3072 bits would involve norms of 696 bits on average. If moreover we consider a BN prime given by a polynomial of degree 4, the norms would be of 713 bits on average. This is also what [Fig. 3] shows: the cross-over point between ExTNFS-Conj (of asymptotic complexity $L_{p^n}(1/3, (48/9)^{1/3}) \approx L_{p^n}(1/3, 1.75)$) and Special ExTNFS (of asymptotic complexity $L_{p^n}(1/3, (32/9)^{1/3}) \approx L_{p^n}(1/3, 1.53)$) is above 4250 bits.

[Edited May 3 thanks to P.S.L.M. B. comments]

What would be the actual security level of a BN curve with $p$ of 256 bits and $p^{12}$ of 3072 bits? At the moment, the best attack would use a combination of Extended TNFS with the Joux–Pierrot polynomial selection for special primes, or ExTNFS with Conjugation. We give in the following table a rough estimate of the improvement. Evaluating the asymptotic complexity for a given finite field size does not provide the number of bits of security, but could give a (theoretical) hint on the improvements of the Kim–Barbulescu technique. This table can only say that the former 256-bit prime BN curve should be changed into a 448- up to 512-bit prime BN curve.
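As a purely theoretical hint of this kind (hidden constants and $o(1)$ terms are ignored, so these are not bit-security levels), one can evaluate $\log_2 L_Q(1/3, c)$ for a 3072-bit field with the old and new constants; a sketch:

```python
import math

# log2 of L_Q(1/3, c) = exp(c * (ln Q)^(1/3) * (ln ln Q)^(2/3)),
# evaluated for a 3072-bit Q. Hidden o(1) terms and constants are ignored,
# so these numbers only hint at the size of the gap.
def log2_L(bits, c):
    ln_q = bits * math.log(2)
    return c * ln_q ** (1 / 3) * math.log(ln_q) ** (2 / 3) / math.log(2)

old = log2_L(3072, (64 / 9) ** (1 / 3))   # former complexity estimate
new = log2_L(3072, (32 / 9) ** (1 / 3))   # Kim-Barbulescu (Special ExTNFS)
print(round(old), round(new))             # -> 139 110
```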

*This is a rough and theoretical estimate. It can only say that a new lower bound on the size of BN-like fields $p^{12}$ to achieve a 128-bit security level would be at least 5376 bits ($p$ of 448 bits).*

[edited June 8th thanks to T. Kim comments. In the medium-prime case, the best complexity of Extended TNFS is obtained when $n$ is composite.]

To obtain a key size according to a security level, the constant hidden in the asymptotic complexity is required, and it is not known. Its order of magnitude is usually of a dozen. A 4608-bit field (corresponding, for Barreto–Naehrig curves, to $p$ of 384 bits) might not be large enough to achieve a 128-bit security level.

A few days ago, Chatterjee, Menezes and Rodríguez-Henríquez proposed to use embedding-degree-one curves as a safe alternative for pairing-based cryptography. However, in these constructions the prime $p$ is given by a polynomial of degree 2. It is not known which is the most secure: a prime field $\mathbb{F}_p$ with $p$ given by a polynomial of degree 2, or a supersingular curve with embedding degree 2 (or even an MNT curve with embedding degree 3, 4 or 6).

Another way would be to use generic curves of small embedding degree, generated with the Cocks–Pinch method, so that none of the recent attacks applies. In any case the pairing computation would be much less efficient than with BN curves and $p$ of 256 bits. Such low-embedding-degree curves were used at the beginnings of pairing implementation, e.g. [CT-RSA:Scott05], [ICISC:ChaSarBar04].

Aurore Guillevic.
