Eurocrypt 2017 was hosted by the ENS crypto group in Paris, France. There were four talks of special interest to researchers in curve-based cryptography, and a couple of items in the Rump Session.

## Twisted μ₄-normal form for elliptic curves

**David Kohel** introduced the μ₄-normal form for elliptic curves five years ago (at Indocrypt 2012). These curves are basically the “right way” to generalize Edwards curve arithmetic to characteristic 2. And they’re the right generalization not only mathematically, but also NIST-ically: existing standardized characteristic-2 curves cannot be transformed into μ₄-normal form. David’s paper twists its way around that obstruction, for a small cost of two extra multiplications per point addition. These twisted μ₄-normal curves are clearly the fastest and prettiest standard-compatible characteristic-2 elliptic curves out there. This is great news for binarophiles, and it will be interesting to see how much benefit implementers working at the hardware level can get from them.

## Efficient compression of SIDH public keys

**Joost Renes** gave a remarkably accessible talk about his work with **Craig Costello**, **David Jao**, **Patrick Longa**, **Michael Naehrig**, and **David Urbanik** on compressing public keys for the Supersingular Isogeny Diffie–Hellman protocol. SIDH is the best-known supposedly-quantum-resistant elliptic curve cryptosystem; while it might be slow compared with other postquantum alternatives, its principal attraction for cryptographers is its particularly small keys. Well, those keys are now even smaller (330 bytes for 128-bit security)—but the interesting thing in this paper is a much-improved key compression algorithm, which runs an order of magnitude faster than previous methods.

## Computation of a 768-bit prime field discrete logarithm

**Thorsten Kleinjung** gave a really nice talk on his record discrete logarithm computation with **Claus Diem**, **Arjen Lenstra**, **Christine Priplata**, and **Colin Stahlke**. Together they computed a discrete logarithm in a 768-bit prime field.

Why 768 bits? Because that matches the record for general integer factorization (from 2009, in a project that also included Thorsten and Arjen), which was computed with the General Number Field Sieve (GNFS); and GNFS is also what we use for prime-field discrete logs. In contrast to most recent finite-field discrete-log results which attack small-characteristic or pairing-related fields, this computation represents the state-of-the-art in the classic prime-field case.

The prime in question, p, is a 768-bit “safe prime” (“safe” meaning that (p − 1)/2 is also prime, so that this represents the hardest case for generic algorithms applied to finite fields of the same size). The element 11 generates the multiplicative group of 𝔽_p.
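A minimal sketch, with toy numbers rather than the 768-bit record prime, of what “safe prime” and “generates the multiplicative group” mean (the helper names here are my own, purely illustrative):

```python
def is_prime(n: int) -> bool:
    """Trial division; fine for the tiny numbers in this sketch."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_safe_prime(p: int) -> bool:
    """p is "safe" when (p - 1) / 2 is also prime, so the group order
    p - 1 = 2q has the largest possible prime factor for its size."""
    return is_prime(p) and is_prime((p - 1) // 2)

def generates_group(g: int, p: int) -> bool:
    """For a safe prime p = 2q + 1, g generates the full multiplicative
    group iff its order is neither 1, 2, nor q, i.e. both checks pass."""
    q = (p - 1) // 2
    return pow(g, 2, p) != 1 and pow(g, q, p) != 1

print(is_safe_prime(23))          # True: 23 = 2*11 + 1 and 11 is prime
print(generates_group(11, 23))    # True: 11 has full order 22 mod 23
```

With only the large prime factor q available to generic (square-root) algorithms, a safe prime leaves no small subgroups to exploit.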

No doubt the question you are asking yourself right now is *“what is the discrete logarithm of the target element with respect to the base 11?”* Ask no more, for Thorsten has the answer: it’s *325923617918270562238615985978623709128341338833721058543950813521768156295091638348030637920237175638117352442299234041658748471079911977497864301995972638266781162575370644813703762423329783129621567127479417280687495231463348812*.

…So now you know. But as Thorsten points out, the journey is more interesting than the final destination: using some clever techniques detailed in the paper, this calculation took *much* less time and effort (a whole order of magnitude!) than the authors expected. Before you get too excited, it still took 5300 core years—but if this isn’t the exact discrete logarithm you are looking for, computing another one in the same field will now only take two core days. From a cryptographic perspective, that two-core-day figure is especially interesting, because that’s the time required to break actual keys, after a 5-core-millennium precomputation depending only on the field.
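To put those figures side by side, here is the back-of-the-envelope cost model they imply; this is illustrative arithmetic only, using the numbers quoted above:

```python
# Figures from the talk: a one-off ~5300-core-year precomputation per
# field, then ~2 core-days per individual logarithm in that field.
CORE_YEAR_DAYS = 365.0

precomp_core_days = 5300 * CORE_YEAR_DAYS   # depends only on the field
per_log_core_days = 2.0                     # per targeted key

def total_core_days(num_logs: int) -> float:
    """Total cost of breaking num_logs keys in one fixed field."""
    return precomp_core_days + num_logs * per_log_core_days

# The precomputation dwarfs the per-key work: you could break roughly
# 967,000 individual keys for the price of the precomputation itself.
print(round(precomp_core_days / per_log_core_days))  # 967250
```

This is exactly why a field-level precomputation is so dangerous when many users share the same prime.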

## A kilobit hidden SNFS discrete logarithm computation

**Joshua Fried** spoke about his work with **Pierrick Gaudry**, **Nadia Heninger**, and **Emmanuel Thomé** on discrete logarithms in an even bigger prime field: 1024 bits. How can you compute discrete logs in such a large prime field? You cheat—or, I should say, the parameter generator cheats. Our estimates of the difficulty of these problems, and the cryptosystems that depend on them, are based on the performance of the *General* Number Field Sieve algorithm (GNFS). But Dan Gordon explained 25 years ago how to choose primes that are vulnerable to the much faster *Special* Number Field Sieve (SNFS)—but only if we know a secret backdoor, and detecting that backdoor is apparently infeasible. This project set up an instance of a backdoored 1024-bit prime, and then solved it. This means that if you’re still using 1024-bit fields (and why are you doing such a thing in the twenty-first century?), then you should be extremely careful about their provenance. Kevin McCurley asked an interesting question: is Gordon’s backdoor optimal?
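Here is a toy sketch of the trapdoor idea, at a miniature size and with an assumed special shape p = m⁶ + c; the real Gordon/hidden-SNFS construction hides a more general sparse polynomial and is considerably more careful, so treat this only as an illustration of why such primes are easy to make and hard to spot:

```python
import random

def is_probable_prime(n: int, rounds: int = 32) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def hidden_snfs_prime():
    """Search for a prime of the special form p = m**6 + c with small c.
    Whoever remembers (m, c) has an SNFS-friendly polynomial for p;
    given only p, the special form is not apparent."""
    while True:
        m = random.randrange(2 ** 15, 2 ** 16)   # ~96-bit toy p
        for c in range(1, 200):
            p = m ** 6 + c
            if is_probable_prime(p):
                return p, m, c

p, m, c = hidden_snfs_prime()
print(p.bit_length())   # ~91-96 bits; p looks like an ordinary prime
```

The published attack does this at 1024 bits, where SNFS with the hidden polynomial is dramatically cheaper than GNFS on a prime of the same size.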

## Speeding up the Huff form of elliptic curves

**Gamze Orhon** gave a lightning-fast presentation of her work with **Huseyin Hisil** on optimizing Huff curve arithmetic during the rump session. The key is viewing these curves as curves in ℙ¹ × ℙ¹, rather than ℙ². The details are in their preprint.

## A database of discrete logarithm computations

**Aurore Guillevic** and **Laurent Grémy** have established a new reference website to help you keep track of records and progress in finite-field discrete logarithm computations. It was about time we had a better solution than trawling the archives of the NMBRTHRY list! Laurent is hosting a front-end on his website, but what’s really nice is that the database itself is git-able.

*—Ben Smith*

> it still took 5300 core years

What I want to understand is how long 5300 core years takes in practice, depending on the machines used. I always see these core-year numbers thrown around but never really understand them 🙂

1 core year: 1 CPU core running for a full year at 100% load

5300 core years: 1 year of 5300 CPU cores running at 100% load

5300 core years could even run in about a second if you had enough cores (~1.67×10^11) 🙂
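A quick sanity check of these conversions (using a 365.25-day year; the function name is just for illustration):

```python
# A "core year" is one CPU core at full load for one year, so
# wall-clock time = core_years * seconds_per_year / number_of_cores.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def wall_clock_seconds(core_years: float, cores: int) -> float:
    return core_years * SECONDS_PER_YEAR / cores

# 5300 cores for one year:
print(wall_clock_seconds(5300, 5300) / SECONDS_PER_YEAR)  # 1.0 (one year)
# Number of cores needed to finish in one second:
print(5300 * SECONDS_PER_YEAR)  # ~1.67e11, matching the figure above
```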

Thanks for the answer Leon. I guess a better question would be: how affordable/practical is it to run 5300 CPU cores? And if that is affordable and practical, how many more cores can I get to make the running time practical as well? (1 year is way too much :))