A major breakthrough this week has been the announcement by Razvan Barbulescu, Pierrick Gaudry, Antoine Joux and Emmanuel Thomé of a quasi-polynomial-time DLP algorithm for finite fields. The paper is available here.
The algorithm applies to finite fields of the form $\mathbb{F}_{q^{2k}}$, where $q$ is a prime power and where $k \approx q$. We represent this field as $\mathbb{F}_{q^2}[X]/(I(X))$, where $I(X)$ is an irreducible polynomial of degree $k$ such that $I(X)$ divides $h_1(X)X^q - h_0(X)$, where $h_0(X)$ and $h_1(X)$ are linear or quadratic. Since this polynomial has degree $q+1$ or $q+2$, it follows that we need $k \le q+2$.
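To make the setup concrete, here is a toy search for such a representation. It is only a sketch under my own simplifications: the coefficients live in a prime field $\mathbb{F}_q$ rather than $\mathbb{F}_{q^2}$ as in the paper, and the parameters and the helper find_representation are mine, not the paper's.

```python
# Toy search for a BGJT-style field representation (sketch; my simplification:
# coefficients in GF(q) with q prime, rather than GF(q^2) as in the paper).
from itertools import product
from sympy import Poly, symbols

X = symbols('X')
q, k = 7, 5  # tiny illustrative parameters; note k <= q + 2

def find_representation(q, k):
    """Find low-degree h0, h1 over GF(q) such that h1*X^q - h0 has an
    irreducible factor I of degree k; then GF(q)[X]/(I) represents GF(q^k)."""
    for c in product(range(q), repeat=6):
        h0 = Poly(c[0]*X**2 + c[1]*X + c[2], X, modulus=q)
        h1 = Poly(c[3]*X**2 + c[4]*X + c[5], X, modulus=q)
        if h1.degree() < 1 or h0.degree() < 0:   # skip degenerate choices
            continue
        target = h1 * Poly(X**q, X, modulus=q) - h0   # degree q+1 or q+2
        for factor, _ in target.factor_list()[1]:
            if factor.degree() == k:
                return h0, h1, factor
    return None

h0, h1, I = find_representation(q, k)  # a suitable pair is found quickly
print("h0 =", h0.as_expr(), ", h1 =", h1.as_expr(), ", I =", I.as_expr())
```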
Following the previous work by Joux (see Section 4), the factor base is the set of linear or quadratic polynomials over $\mathbb{F}_{q^2}$. First one finds relations among the linear polynomials in polynomial-time by substituting into the systematic equation
$$X^q - X = \prod_{\alpha \in \mathbb{F}_q} (X - \alpha)$$
linear fractional transformations over $\mathbb{F}_{q^2}$ in $X$ (this is where the group $\mathrm{PGL}_2(\mathbb{F}_{q^2})$ enters; such transformations are also called homographies or Möbius transformations). One gets (after reducing $X^q$ via $X^q \equiv h_0(X)/h_1(X) \bmod I(X)$ and clearing denominators) a cubic polynomial in $X$ equal to a product of linear polynomials. If the cubic polynomial splits (which happens with probability roughly $1/6$) then one has a relation. One needs about $q^2$ such relations. Sparse linear algebra (each row has $O(q)$ non-zero entries out of $q^2$) allows one to express the discrete logarithms of the linear polynomials with respect to any fixed linear polynomial $g(X)$ that generates the field. The cost of the linear algebra is $O(q^5)$ operations in the ring $\mathbb{Z}/(q^{2k}-1)\mathbb{Z}$.
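As a sanity check on the two heuristics in that paragraph, here is a small numerical experiment. It is a sketch under my own simplifications: I work over a prime field $\mathbb{F}_p$ rather than $\mathbb{F}_{q^2}$, and the constants ($p = 101$, 20000 trials) are arbitrary.

```python
# Check (1) the systematic equation and (2) the ~1/6 splitting heuristic
# over a small prime field (my simplification; the paper uses GF(q^2)).
import random

p = 101  # small prime standing in for the field of the factor base

def polymul(f, g):
    """Multiply polynomials mod p; coefficient lists, lowest degree first."""
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] + fi * gj) % p
    return out

# (1) X^p - X = prod_{a in GF(p)} (X - a): multiply out the right-hand side.
rhs = [1]
for a in range(p):
    rhs = polymul(rhs, [-a % p, 1])
lhs = [0] * (p + 1)
lhs[1], lhs[p] = p - 1, 1                    # the polynomial -X + X^p
assert rhs == lhs

# (2) a random monic cubic splits into linear factors ~1/6 of the time.
def divide_linear(f, a):
    """Divide f by (X - a), assuming a is a root of f."""
    n = len(f) - 1
    q = [0] * n
    q[n - 1] = f[n]
    for i in range(n - 1, 0, -1):
        q[i - 1] = (f[i] + a * q[i]) % p
    return q

def splits(f):
    """True if f factors entirely into linear factors over GF(p)."""
    for a in range(p):
        while len(f) > 1 and sum(c * pow(a, i, p) for i, c in enumerate(f)) % p == 0:
            f = divide_linear(f, a)
    return len(f) == 1

random.seed(0)
trials = 20000
hits = sum(splits([random.randrange(p) for _ in range(3)] + [1]) for _ in range(trials))
print(hits / trials, "vs 1/6 =", 1 / 6)      # ~0.17 for p = 101
```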
This is not enough. We also need quadratic polynomials in our factor base. This is explained in Section 4.3 of Joux’s paper. One can handle the issue by solving roughly $q^2$ systems of equations (each taking $O(q^5)$ operations). Hence, the overall complexity of the first stage is $O(q^7)$. This is polynomial-time in the input size $2k \log_2 q$ bits (as $k \approx q$).
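For the record, here is the tally behind that bound (my arithmetic, using the counts above):
$$\underbrace{O(q^5)}_{\text{linear polynomials}} \;+\; \underbrace{q^2 \cdot O(q^5)}_{\text{systems for the quadratics}} \;=\; O(q^7).$$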
The major innovation of the new paper is a new approach for the descent procedure. This is when you take a polynomial $Q(X)$, corresponding to the original DLP instance, and want to express it as a product of elements in the factor base. The idea is to recycle ideas from the first step. Suppose $Q(X)$ has degree $D$. One substitutes into the systematic equation a linear fractional transformation $\frac{aQ(X)+b}{cQ(X)+d}$ over $\mathbb{F}_{q^2}$. One now has on the left hand side a polynomial in $X$ of degree $3D$ and on the right hand side a product of terms of the form $Q(X) - \beta$ for $\beta \in \mathbb{F}_{q^2}$ (linear in $Q$, up to multiplication by constants). With probability around $6^{-6}$ (a constant, since the ratio of the degrees $3D$ and $D/2$ is fixed) the left hand side splits into a product of polynomials of degree at most $\lceil D/2 \rceil$. If that happens we have a relation. One needs $O(q^2)$ relations and then one can perform linear algebra to eliminate all $Q(X) - \beta$ terms on the right hand side, except for $Q(X)$ itself. We therefore express $Q(X)$ as a product of $O(q^2 D)$ polynomials of smaller degree (the $O(q^2 D)$ term arises as we have combined $O(q^2)$ relations, each of which features at most $3D$ polynomials).
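One can see the degree bookkeeping of a single descent step in miniature. The sketch below is my own toy, not the paper's algorithm: over a prime field $\mathbb{F}_p$ every homography collapses to the same relation (which is precisely why BGJT work over $\mathbb{F}_{q^2}$), but the identity behind the two sides is easy to verify, and the particular $h_0$, $h_1$, $Q$ are hypothetical choices.

```python
# Toy check of the descent identity over GF(p), p prime (my simplification).
from functools import reduce
from operator import mul
from sympy import Poly, symbols

X = symbols('X')
p = 31
h0 = Poly(X**2 + 3, X, modulus=p)     # hypothetical h0, h1; we only need the
h1 = Poly(X + 1, X, modulus=p)        # 'target' below to have a large factor
target = h1 * Poly(X**p, X, modulus=p) - h0
I = next(f for f, _ in target.factor_list()[1] if f.degree() >= 3)

Q = Poly(X**4 + 5*X + 7, X, modulus=p)   # polynomial to descend, degree D
D = Q.degree()

# Modulo I we have X^p = h0/h1, so h1^D * Q(X^p) = sum_i q_i h0^i h1^(D-i).
coeffs = Q.all_coeffs()[::-1]            # lowest degree first
Qtilde = sum(c * h0**i * h1**(D - i) for i, c in enumerate(coeffs))

# Q^p - Q = prod_{a in GF(p)} (Q - a), so modulo I the two sides agree:
lhs = Qtilde - h1**D * Q                               # degree <= 3D
rhs = h1**D * reduce(mul, (Q - a for a in range(p)))   # translates of Q
assert (lhs - rhs).rem(I).is_zero
print("deg(lhs) =", lhs.degree(), "<= 3D =", 3 * D)
```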
One then proceeds recursively. Since the degree is initially about $k$, but drops from $D$ to $\lceil D/2 \rceil$ at each stage, there are $O(\log k)$ levels of recursion. The total number of polynomials that arise is an enormous $q^{O(\log k)}$. But this is quasi-polynomial in the input size.
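To get a feeling for that bound, here is a back-of-envelope count of the descent tree. The model is mine and deliberately crude: each degree-$D$ node contributes about $q^2 D$ children of degree at most $\lceil D/2 \rceil$, with all constants ignored.

```python
# Rough size of the descent tree (my model; constants ignored).
from math import ceil, log2

def descent_tree_size(q, k):
    total, frontier, D = 0, 1, k
    while D > 1:
        frontier *= q**2 * D        # each node yields ~q^2 * D children...
        D = ceil(D / 2)             # ...of roughly half the degree
        total += frontier
    return total

q, k = 2**12, 257                   # parameters of the recent record fields
print(f"{ceil(log2(k))} levels, ~2^{log2(descent_tree_size(q, k)):.0f} polynomials")
```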
And there you have it — surprisingly simple!
Much work will be done to improve the descent step. In practice one will begin with subexponential smoothing as in the function field sieve, then employ both the new method and the Gröbner basis method due to Joux.
The method is particularly suitable for fields that can already be written in the form $\mathbb{F}_{q^{2k}}$ with $k \approx q$. For example, with the recent records $\mathbb{F}_{2^{6120}}$ and $\mathbb{F}_{2^{6168}}$ we take $q = 2^{12}$ and work with $\mathbb{F}_{q^2} = \mathbb{F}_{2^{24}}$ instead of $\mathbb{F}_{2^{12}}$ for the factor base.
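The arithmetic behind those parameter choices, as a quick check (the decomposition is standard; the script is mine):

```python
# 2^6120 and 2^6168 as q^(2k) with q = 2^12, i.e. F_{q^2} = F_{2^24}.
q, m = 2**12, 12
for n in (6120, 6168):
    k = n // (2 * m)
    assert n == 2 * k * m and k <= q + 2
    print(f"F_2^{n} = F_q^(2k) with k = {k} (k <= q + 2 holds)")
```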
It is interesting to consider the case of prime extensions $\mathbb{F}_{2^n}$, where $n \approx 1000$ is prime. The traditional function field sieve would be expected to solve such problems in around $2^{60}$ to $2^{80}$ bit operations. For the new approach we need to embed this field into $\mathbb{F}_{q^{2k}}$ with $n \mid 2k \log_2 q$. The obvious choice is $k = n$ and $q = 2^{10}$. Then the first stage of the algorithm, as we have mentioned, already takes time proportional to $q^7 = 2^{70}$. The descent step will be much more time-consuming. For example $q^{\log_2 k} \approx 2^{100}$. Of course, we expect this can be improved significantly, but for the moment the new method does not seem to beat FFS for fields of this size (much larger $n$ is another story).
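Plugging in numbers, as a sketch: the cost model ($q^7$ for the first stage, $q^{\log_2 k}$ for the descent) is the crude one used above, and the prime 1009 is just an example near 1000.

```python
# Crude cost estimates for embedding F_{2^n}, n prime, into F_{q^{2k}}.
from math import ceil, log2

n = 1009                      # an illustrative prime near 1000
k = n                         # n prime forces k = n here
m = ceil(log2(k))             # smallest m with q = 2^m >= k, so m = 10
print(f"embed F_2^{n} into F_2^{2 * k * m}, q = 2^{m}, k = {k}")
print(f"first stage ~ q^7 = 2^{7 * m}")                 # 2^70
print(f"descent ~ q^(log2 k) = 2^{m * ceil(log2(k))}")  # 2^100
```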
For pairing fields such as $\mathbb{F}_{3^{6 \cdot 509}}$ things are much more risky. Taking $q = 3^6$ and $k = 509$ is already somewhat close to the ideal range. I would not be surprised if this case is broken before the end of the year.
— Steven Galbraith
Hi Steven,
You say that a factor base of linear polynomials is not enough, but I can see no mention of this in the BGJT paper. Could you comment more on this?
There was some discussion of this in Paris. Antoine seemed to be of the opinion that linear polynomials are not sufficient, but I am not sure why. I guess that in a few months this will all be more clear, once more computations have been done. I will not be surprised if we eventually can use only linear polynomials, or maybe even fewer (using a “large prime” variation).