
JOURNAL OF ALGORITHMS 12, 556-572 (1991)

Gauss’ Algorithm Revisited


BRIGITTE VALLÉE

Département de Mathématiques, Université de Caen, 14032 Caen Cedex, France

Received September 1989; revised June 1990

Gauss gave an algorithm which reduces integer lattices in the two-dimensional
case and finds a basis of a lattice consisting of its two successive minima. We
exhibit its worst-case input configuration and then show that this worst-case input
configuration generalizes the worst-case input configuration of the centered
Euclidean algorithm to dimension two. © 1991 Academic Press, Inc.

The problem of reducing integer lattice bases consists in finding "short"
lattice bases from arbitrary given ones. For the dimension n = 2, the
problem is solved completely by a polynomial-time algorithm due to Gauss
[3]. Given a lattice L described by a basis, Gauss' algorithm outputs a
basis (u, v) formed by two successive minima of L: u is a shortest non-zero
vector of L, and v is a shortest vector among all vectors of L linearly
independent from u. Such a basis is the best possible from the Euclidean
point of view and is called minimal.

In 1982, Lenstra, Lenstra, and Lovász [13] exhibited a polynomial-time
algorithm, called the LLL algorithm,¹ which, given a basis of an integer
lattice L of rank n, builds a "good" basis of L. This algorithm uses as a
subroutine a slightly modified version of Gauss' algorithm. The LLL
algorithm has been widely applied and permits the resolution of other
varied and essential problems, for example, in the theory of numbers [4, 8],
in algebra [6, 12, 13], in integer programming [5, 14], or in cryptography
[10, 17].

¹The LLL algorithm is often also called the L³-algorithm or Lovász' basis reduction
algorithm.

Gauss' algorithm is fundamental for the reduction theory of lattices and
quadratic forms. It deserves study because insights into the behaviour of
Gauss' algorithm may give rise to insights into the behaviour of the LLL
algorithm, which in turn influences many other computational algorithms.
The polynomial-time complexity of Gauss' algorithm had been established
by Lagarias in 1980 [9], but some natural questions still remain open.
Here are the main results of this paper:

(A) We give the best possible upper bound for the number of
iterations of such an algorithm.

(B) It is intuitive that Gauss' algorithm generalizes Euclid's centered
algorithm to the two-dimensional case. Here, we establish clearly the link
between the two algorithms and we deduce that the worst-case input
configuration of Gauss' algorithm generalizes the worst-case input
configuration of the centered Euclidean algorithm first exhibited by Dupré
in 1846 [2].

In addition, we consider a further question concerning the LLL
algorithm. The LLL algorithm uses a modified version of Gauss' algorithm
that depends on a parameter t, called the Gauss(t) algorithm below. The
LLL algorithm is proved to be a polynomial-time algorithm when the
parameter t is chosen larger than 1. The question is: does the LLL
algorithm remain a polynomial-time algorithm when the parameter t is
chosen equal to 1? The choice t = 1 corresponds to using the original
Gauss algorithm as a subroutine in the LLL algorithm. We do not answer
this question here, but put forward some arguments that bring plausibility
to the conjecture that the LLL algorithm has polynomial-time complexity
when the parameter t is equal to 1.

Notations. We consider the space R² endowed with its Euclidean
structure. For u and v in R², u·v is the dot product of u and v; |v| is the
length of v: |v|² = v·v.

For an element r of Q, we define the integer nearest to r to be the
unique integer m such that r − m belongs to ]−1/2, +1/2], and the sign of r
to be equal to +1 if r is non-negative and equal to −1 otherwise.

A lattice L of Z² is a Z-module of rank 2; it is usually given by a basis
(u, v). Some quantities may be used to measure a basis; the length M of
basis (u, v) is equal to M = max{|u|, |v|}, but we use mainly the inertia I of
such a basis. It is the sum of the squares of the lengths of the two vectors
of this basis; so one has

    I = u² + v² ≥ M².
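As a small illustration of these measures, here is a minimal Python sketch
(the function names are mine, not the paper's):

    def dot(u, v):
        # Dot product u.v of two vectors of Z^2.
        return u[0] * v[0] + u[1] * v[1]

    def length_squared(u, v):
        # Square M^2 of the length M = max{|u|, |v|} of the basis (u, v).
        return max(dot(u, u), dot(v, v))

    def inertia(u, v):
        # Inertia I = u^2 + v^2 of the basis (u, v).
        return dot(u, u) + dot(v, v)

    # Example: u = (3, 1), v = (1, 2) gives I = 10 + 5 = 15 >= M^2 = 10.
    assert inertia((3, 1), (1, 2)) == 15
    assert length_squared((3, 1), (1, 2)) == 10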

1. AN HISTORICAL SURVEY

Here, we follow the very nice book of Scharlau and Opolka [15], which
describes the historical evolution of the notion of reduction of integral
quadratic forms.

Lagrange [11] was the first mathematician to study the reduction theory
of binary quadratic forms, even though these words do not actually appear
in his work. The connection between bases of lattices and quadratic forms
was observed by Gauss [3] in the two-dimensional case and systematically
exploited by Dirichlet [1], who extended the notion of reduction to the
case of dimension three. It is quite clear that, among these
mathematicians, Gauss was the most interested in what we now call an
algorithmic point of view, and this explains why his name is given to the
algorithm of reduction of binary quadratic forms.

2. DIFFERENT FORMULATIONS OF GAUSS' ALGORITHM

Gauss himself described his algorithm within the general framework of
binary integer quadratic forms [3]. Here, it is described in the terms of
lattice basis reduction.

We start with a basis (u, v) of a lattice L in Z². If u is the shortest vector
of the basis (u, v), we replace the vector v by the smallest vector x(v, u) of

    K(v, u) = {w | w = ε(v − mu), m ∈ Z, ε = ±1}

that makes an acute angle with u.

Note that x(v, u) is easy to calculate from r = u·v/u²: the integer m is
the integer nearest to r and ε is the sign of r − m.

GAUSS ALGORITHM.
Repeat
  1. If u² > v², exchange u and v;
  2. v := x(v, u);
until u² ≤ v².
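A direct Python transcription of this loop, together with the computation
of x(v, u) described above, may help fix ideas. This is my own sketch (the
function names are not the paper's), using the nearest-integer convention
r − m ∈ ]−1/2, +1/2]:

    import math

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    def nearest_integer(r):
        # Unique integer m with r - m in ]-1/2, +1/2].
        return math.ceil(r - 0.5)

    def reduce_step(v, u):
        # Step 2: x(v, u), the smallest vector eps*(v - m*u)
        # making an acute angle with u.
        r = dot(u, v) / dot(u, u)
        m = nearest_integer(r)
        eps = 1 if r - m >= 0 else -1
        return (eps * (v[0] - m * u[0]), eps * (v[1] - m * u[1]))

    def gauss(u, v):
        # Gauss' algorithm: returns a minimal basis with |u| <= |v|.
        while True:
            if dot(u, u) > dot(v, v):    # Step 1: exchange
                u, v = v, u
            v = reduce_step(v, u)        # Step 2
            if dot(u, u) <= dot(v, v):   # termination test
                return u, v

    print(gauss((12, 5), (29, 12)))

On the skewed basis u = (12, 5), v = (29, 12) of the square lattice Z², the
loop returns the canonical basis ((1, 0), (0, 1)) after four iterations.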

We recover the original formulation of Gauss if we transfer the
operations on the Gram matrix G(u, v), which is the matrix of the
associated binary quadratic form. By definition, we have

    G(u, v) = ( u²   u·v )
              ( u·v  v²  ).



The following result is well known and describes the output
configuration:

PROPOSITION 1. Given a basis (u, v) of a lattice L, the Gauss algorithm
constructs a basis of L that contains two successive minima of L.

Proof. The output configuration (u, v) satisfies the two conditions

    v² ≥ u²  and  0 ≤ u·v ≤ (1/2)u²,

so that the length of the projection of v orthogonally to u is at least
(√3/2)|v|. We deduce that all the vectors w of L which are neither
parallel to u nor elements of K(v, u) satisfy

    |w| ≥ √3|v| > |v| ≥ |u|.

Thus (u, v) is a minimal basis of L. □
In order to analyze the complexity of the Gauss algorithm, we now
describe another algorithm that depends on a parameter t, which will be
assumed to be strictly greater than 1. The new algorithm is called the
Gauss(t) algorithm and the previous one arises when t is equal to 1. In
Gauss(t), the loop termination condition

    u² ≤ v²

is replaced by

    u² ≤ t²v².    (1)

The polynomial complexity of the Gauss(t) algorithm is clear. At each
loop, the length of the longer vector is decreased by a factor at least equal
to 1/t. So, we obtain an upper bound for the number k_t of iterations of
this algorithm executed on an integer basis of length M and inertia I:

    k_t ≤ log_t(M) + 1 ≤ (1/2) log_t(I) + 1.    (2)


We also consider a third version of Gauss' algorithm, in which the loop
termination condition is replaced by

    u·v < v².    (1bis)

This algorithm is called the Gauss-acute algorithm. This name is based on
a geometrical meaning of this loop termination condition. Let us consider
the situation at the end of an iteration of Gauss-acute. If the three points
A, B, C are defined in the following way

    u = AB,  v = AC,    (3)

the condition (1bis) expresses that the angle at point C is acute. On the
other hand, Step 2 has made the two other angles at A and B acute.
Hence, Gauss-acute halts when the output triangle ABC built on vectors
u and v has all three of its angles acute.
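In code, the three variants differ only in their termination test; a minimal
Python sketch of the three conditions (my own naming), to be plugged into
the loop of the previous sketch in place of the test u² ≤ v²:

    def stop_gauss(u2, v2, uv):
        # Original Gauss termination: u^2 <= v^2.
        return u2 <= v2

    def stop_gauss_t(u2, v2, uv, t):
        # Gauss(t) termination, condition (1): u^2 <= t^2 * v^2.
        # For t = 1 this is the original test.
        return u2 <= t * t * v2

    def stop_gauss_acute(u2, v2, uv):
        # Gauss-acute termination, condition (1bis): u.v < v^2, i.e.,
        # the angle at C of the triangle u = AB, v = AC is acute.
        return uv < v2

Here u2, v2, uv stand for u², v², u·v; the loop body (the exchange and
Step 2) is unchanged across the three variants.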

3. COMPARISON BETWEEN THESE ALGORITHMS

The following result links the numbers of iterations k, k̂, and k_t of the
three algorithms (Gauss, Gauss-acute, and Gauss(t)) executed on the
same basis and describes the output configurations of these algorithms.

PROPOSITION 2. For any t ≤ √3, the two numbers k_t and k satisfy

    k_t ≤ k ≤ k_t + 1.    (4)

The two numbers k and k̂ satisfy

    k̂ ≤ k ≤ k̂ + 1.    (4bis)

The triangle built on the output configuration of Gauss(t) or on the output
configuration of Gauss-acute contains two successive minima of the lattice.

Proof. The inequality k_t ≤ k is clear. Let us prove the inequality
k̂ ≤ k by considering the end of an iteration of the algorithms to be
compared.

Here, we have 0 ≤ (v·u)/u² ≤ 1/2, so that the ratio between the two
quantities to be tested,

    v·u/v² on one side,  u²/v² on the other side,

is at most equal to 1/2. So the loop termination condition of the original
Gauss algorithm is sharper than condition (1bis).

We now prove the two other inequalities. We consider the last loop of
the Gauss(t) algorithm or the last loop of the Gauss-acute algorithm and
we suppose that this loop is not the last one for the Gauss algorithm. We
prove that the Gauss algorithm finishes at the next loop. In both cases, we
have then

    |v| < |u|  and  0 ≤ v·u ≤ (1/2)u².

Remark that v is the shortest side of the triangle built on (u, v). So the
following loop of the Gauss algorithm begins by exchanging u and v and
we have

    |u| < |v|  and  0 ≤ v·u ≤ (1/2)v².

Now u is the shortest side of the triangle built on (u, v).

In the case of Gauss(t), condition (1) provides the inequality

    0 ≤ v·u/u² ≤ (1/2)(v²/u²) ≤ t²/2 ≤ 3/2.    (5)

In the case of Gauss-acute, we use directly condition (1bis):

    0 ≤ v·u/u² < 1.

So, in both cases, because of (5) or (1bis), the following Step 2 will either
leave v fixed or change it into ±(u − v), according to the relative size of
these two vectors. Since each of these vectors is longer than u, the
execution of the Gauss algorithm finishes at this loop, and this shows the
first two assertions.

At the end of this last loop, we obtain the two vectors that were the two
shortest sides of the triangle at the end of the previous loop.
Consequently, the last assertion follows from Proposition 1. □

We deduce from (2) and (4) a quite natural proof of polynomial
complexity of the Gauss algorithm.

COROLLARY 1. The number of iterations k of the Gauss algorithm
executed on a basis of inertia I satisfies

    k(I) ≤ (1/2) log_√3(I) + 2.    (6)

We have obtained a polynomial complexity bound for the Gauss
algorithm by comparing it to the Gauss(√3) algorithm. The geometrical
meaning of this value √3 is not obvious, and it is intuitively clear that the
upper bound (6) is not the best one.
Now we analyze precisely the number of iterations of the Gauss-acute
algorithm; we prefer this formulation of Gauss’ algorithm because of the
geometrical meaning of the loop termination condition. We can easily
describe the worst-case input configuration of this algorithm and deduce
from it a best possible upper bound for the number of iterations of this
formulation of Gauss’ algorithm. By using Proposition 2, we then deduce
similar facts for the original version of Gauss’ algorithm.

4. THE POSSIBLE ANTECEDENTS OF A CONFIGURATION

We say that an ordered basis (u, v) is convenient if it satisfies the
following conditions:

    0 ≤ v·u/u² ≤ 1/2  and  v·u/v² ≥ 1.

These convenient bases are exactly the bases that we obtain at the end of
any non-final iteration of the Gauss-acute algorithm. Note that for a
triangle ABC built on a convenient basis by relation (3), C is its obtuse
angle and AC is its shortest side (Fig. 1).

Given a convenient basis (u, v), we wish to describe its possible
antecedents by the Gauss-acute algorithm, i.e., the bases of the end of the
previous Step 2 that could give rise to it (cf. Fig. 1).

LEMMA 1. The possible antecedents of a convenient basis (u, v) are all the
convenient bases (w′, u), where w′ is defined by the equality

    w′ = mu + εv,

with the following conditions on m and ε:

    m is an integer, m > 1, ε = ±1, and ε = 1 if m = 2 or if u·v = (1/2)u².    (7)
(7)
Proof. Before execution of this loop, the vector u, which is now the
longer vector of the basis, was the shorter one. It remained fixed. The
antecedents of vector v are thus among the vectors w′ defined by

    w′ = mu + εv  with m ∈ Z and ε = ±1.

The possible antecedents of basis (u, v) can only be bases (w′, u) with
w′·u ≥ 0: this condition excludes the negative values of m and the pair
(m, ε) = (0, −1) (see Fig. 1). Moreover, the vector u must be the shortest
side of the triangle built on basis (w′, u): this excludes the value m = 1 and
also the pair (m, ε) = (2, −1). Finally, if the triangle built on basis (u, v) is
isosceles, i.e.,

    u·v = (1/2)u²,

all the vectors w′ associated to ε = −1 are excluded because they give,
after reduction, the vector v′ symmetric to v with respect to the line of
vector u. All the other values of the pair (m, ε) lead to convenient bases
(w′, u). □

FIGURE 1. [Figure residue: the candidate antecedents w′ = mu + εv,
labelled (m = 1, ε = +1), (m = 2, ε = +1) (the minimal antecedent),
(m = 0, ε = −1), (m = 1, ε = −1), (m = 2, ε = −1).]

The vector w defined by the equality

    w = 2u + v

is called the minimal antecedent of the basis (u, v). This definition is
justified by the next two lemmas.

For comparing different configurations built on pairs of vectors (u, v),
we use Gram matrices G(u, v) and a partial order on them induced by the
order on the coefficients. More precisely, we say that a matrix is strictly
less than another one if each coefficient of the first one is strictly less
than its associate of the second one, except perhaps for the coefficients at
the bottom right, where equality is allowed. We also say that a matrix G is
strictly minimal among a subset A if G is strictly less than all the other
elements of A.

We compare now the Gram matrices built with all the possible
antecedents (w′, u) of a convenient basis (u, v). We return to quadratic
forms!

LEMMA 2. Let (u, v) be a convenient basis. Then the Gram matrix
G(w, u) built with its minimal antecedent (w, u) is strictly minimal among all
the Gram matrices G(w′, u) built with all its possible antecedents (w′, u).

Proof. Since all points D′ defined by w′ = AD′ are at the same
distance from the line AB, it is sufficient to show the following
inequalities:

    0 ≤ w·u < w′·u  if w′ ≠ w.

Since ABC is acute at A, one has v·u ≥ 0 and the left inequality is clear.
On the other hand, for w′ ≠ w, we have to evaluate the sign of the
difference

    w′·u − w·u = (m − 2)u² + (ε − 1)u·v.

This difference is strictly positive under the conditions ε = 1 and m > 2.

If ε = −1, the triangle built on (u, v) cannot be isosceles: we have
2u·v < u² and the difference is also strictly positive under the conditions
ε = −1 and m ≥ 3. □

5. THE WORST-CASE CONFIGURATION OF GAUSS’ ALGORITHM

We start now with a convenient basis (u_0, u_{−1}), which is the last
convenient basis obtained during an execution of the Gauss-acute
algorithm, and we consider all the sequences u′_n made with the nth
possible successive antecedents of the basis (u_0, u_{−1}). Among them,
there is a particular one, the sequence u_n made with the successive
minimal antecedents. This sequence follows the linear recurrence (cf. Fig. 2)

    u_n = 2u_{n−1} + u_{n−2}  for n ≥ 1.    (8)

We compare the Gram matrices G_n and G′_n respectively built with the
particular pair (u_n, u_{n−1}) and all the pairs (u′_n, u′_{n−1}).

LEMMA 3. The Gram matrix G_n is strictly minimal among all the Gram
matrices G′_n.

Proof. The proof is by induction on n. The assertion is true for n = 2,
by applying Lemma 2 to basis (u_0, u_{−1}). For all n ≥ 0, we consider the
matrix U_n built with the column vectors u_n and u_{n−1} expressed in the
canonical basis. The same quantities with a prime denote the associated
quantities for the vectors u′_n. We have

    G_n = ᵗU_n U_n  and  G′_n = ᵗU′_n U′_n.
FIGURE 2

If we let

    Q = ( 2  1 )   and   Q′ = ( m  ε )
        ( 1  0 )              ( 1  0 ),

where the pair (m, ε) satisfies the conditions (7), we have

    ᵗU_n = Q ᵗU_{n−1}  and  ᵗU′_n = Q′ ᵗU′_{n−1},

so that

    G_n = Q G_{n−1} ᵗQ  and  G′_n = Q′ G′_{n−1} ᵗQ′.

If we apply now Lemma 2 to the possible antecedents of the pair
(u′_{n−1}, u′_{n−2}), we obtain

    G′_n ≥ Q G′_{n−1} ᵗQ.    (9)

Since the matrix Q is strictly positive, we can apply the induction
hypothesis to obtain

    Q G′_{n−1} ᵗQ ≥ Q G_{n−1} ᵗQ.

This last term is by definition equal to G_n and, using (9), this completes
the induction step. Furthermore, we remark that the last two inequalities
can become equalities only if the sequence u′_n is made with all minimal
antecedents. □
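As a quick sanity check (my own, not in the paper), the recursion
G_n = Q G_{n−1} ᵗQ can be verified numerically along the minimal-antecedent
chain:

    def mat_mul(A, B):
        # Product of two 2x2 matrices given as nested lists.
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    def transpose(A):
        return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

    def gram(a, b):
        # Gram matrix of the basis (a, b).
        d = lambda x, y: x[0] * y[0] + x[1] * y[1]
        return [[d(a, a), d(a, b)], [d(a, b), d(b, b)]]

    Q = [[2, 1], [1, 0]]

    # Minimal-antecedent chain started from u_{-1} = (0, 1), u_0 = (1, 1).
    prev, cur = (0, 1), (1, 1)
    G = gram(cur, prev)
    for _ in range(5):
        nxt = (2 * cur[0] + prev[0], 2 * cur[1] + prev[1])   # recurrence (8)
        prev, cur = cur, nxt
        G = mat_mul(mat_mul(Q, G), transpose(Q))             # G_n = Q G_{n-1} tQ
        assert G == gram(cur, prev)
    print("Gram recursion verified; last G_n =", G)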

We consider now the inertiae of these bases. We denote by I_n
(resp. I′_n) the inertia of basis (u_n, u_{n−1}) (resp. (u′_n, u′_{n−1})).
These inertiae are in fact traces of the associated quadratic forms: we have

    I_n = Trace(G_n)  and  I′_n = Trace(G′_n).    (10)

We deduce from Lemma 3 that I′_n is greater than I_n and can be equal to
I_n only if the sequence u′_n is made with all minimal antecedents. Now,
we calculate I_n exactly.

LEMMA 4. We have

    I_n ≥ (I_0/(2√2)) [(1 + √2)^{2n+1} − (1 − √2)^{2n+1}].    (11)

The equality holds only when the triangle AB_0B_{−1} defined by AB_0 = u_0
and AB_{−1} = u_{−1} is right-angled at B_{−1}.

Proof. Using G_n = Q^n G_0 Q^n and (10), we link easily I_n to G_0 via
matrix Q:

    I_n = Trace(Q^n G_0 Q^n).

Since Trace is invariant by a circular permutation, we obtain

    I_n = Trace(Q^{2n} G_0).

The symmetric matrix Q has its eigenvalues equal to 1 + √2 and 1 − √2
and can be diagonalized in an orthonormal basis associated to a matrix P,
with coefficients in Q(√2). So we have

    D = ᵗPQP  with  D = ( √2 + 1     0     )
                        (    0    1 − √2   ),

and we deduce

    I_n = Trace(D^{2n} ᵗPG_0P) = γ(√2 + 1)^{2n} + δ(1 − √2)^{2n},

where γ and δ are the diagonal coefficients of matrix ᵗPG_0P. Since I_n
must be an integer, the coefficients γ and δ are conjugates in Q(√2), so
that

    I_n = (α + β√2)(√2 + 1)^{2n} + (α − β√2)(1 − √2)^{2n}.

If we define the integers α_n and β_n by the relation

    (√2 + 1)^n = α_n + β_n√2,    (12)

we obtain

    I_n = 2(αα_{2n} + 2ββ_{2n}),

with initial values I_0 = 2α and I_1 = 2(3α + 4β). We evaluate the quantity

    I_1 − 5I_0 = 5u_0² + u_{−1}² + 4u_0·u_{−1} − 5u_0² − 5u_{−1}² = 4(u_0·u_{−1} − u_{−1}²).

Since the basis (u_0, u_{−1}) is convenient, I_1 − 5I_0 is always non-negative
and is equal to 0 only if the triangle AB_0B_{−1} is right-angled at B_{−1}.
Thus

    8β = I_1 − 3I_0 ≥ 2I_0 = 4α,


and, finally, since β_{2n+1} = α_{2n} + β_{2n},

    I_n = 2αα_{2n} + 4ββ_{2n} ≥ 2α(α_{2n} + β_{2n}) = I_0 β_{2n+1}.

From (12) we then obtain the lower bound (11), which is the best
possible since it is reached when the first triangle is right-angled. □
We deduce now an upper bound for the number k̂ of iterations of the
Gauss-acute algorithm.

THEOREM 1. Let (u, v) be a basis of inertia I of an integer lattice L. Then
the number k̂(I) of iterations executed by the Gauss-acute algorithm on basis
(u, v) is upper-bounded by K(I) with

    K(I) = (1/2)[log_{1+√2}(2√2 I/3) + 1].    (13)

This upper bound is asymptotically best possible in the following sense: there
exists a sequence of bases of lattice Z² with strictly increasing inertiae I for
which the ratio k̂(I)/K(I) tends to 1 when I tends to ∞.
Proof. Consider a convenient basis (u, v) of an integer lattice L. Let n
be the number of loops executed by the Gauss-acute algorithm on this
triangle, while the basis remains convenient. We have n = k̂(I) − 1 and,
since the inertia I_0 of the last convenient basis (u_0, u_{−1}) is at least
equal to 3, we have

    I/3 ≥ I/I_0 ≥ I_n/I_0 ≥ β_{2n+1} ≥ (1/(2√2))(1 + √2)^{2n+1}.    (14)

So, we obtain the announced bound (13).

In the first and third inequality of (14), equality is reached only if the
triangle AB_0B_{−1} defined in Lemma 4 enjoys the two following
properties: it is right-angled at B_{−1} and the inertia I_0 = AB_0² + AB_{−1}²
is equal to 3. Thus lattice L is the square lattice Z², and, after a possible
rotation of a right angle, we can choose

    u_{−1} = AB_{−1} = (0, 1)  and  u_0 = AB_0 = (1, 1).    (15)

In the second inequality of (14), equality is reached only if ABC is the nth
minimal antecedent of triangle AB_0B_{−1}. So we obtain the equality

    I/3 = β_{2n+1}

if and only if, after a possible rotation of a right angle, vectors u and v are
two successive elements of the sequence u_n defined by the linear
recurrence (8) together with the initial conditions (15). More precisely, we
must have u = u_n and v = u_{n−1}.

Since β_{2n+1} is equivalent to (1/(2√2))(1 + √2)^{2n+1} for n → ∞, we
obtain the stated asymptotic bound. □

In this proof, we also exhibited the worst-case configuration of the
Gauss-acute algorithm. The components (x_n, y_n) of the vectors u_n
satisfying conditions (8) follow the same linear recurrence as the
sequences α_n and β_n defined in (12). By comparing initial values defined
by (15), we obtain

    x_n = β_{n+1}  and  y_n = α_{n+1},

and we can now describe precisely the worst-case configuration of the
Gauss-acute algorithm:

THEOREM 2. For k ≥ 1, let (u, v) be a convenient basis such that the
Gauss-acute algorithm requires exactly k loops and such that basis (u, v) has
an inertia I which is minimal among all bases satisfying these conditions.
Then, up to a possible rotation of a right angle, the input configuration is
defined from the sequences α_n and β_n of (12) by

    u = (β_k, α_k)  and  v = (β_{k−1}, α_{k−1}).
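The following Python sketch (my own illustration, not the paper's) generates
these worst-case bases from the sequences α_n and β_n and checks, with the
Gauss-acute loop of Section 2, that the basis ((β_k, α_k), (β_{k−1}, α_{k−1}))
indeed requires exactly k loops:

    import math

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    def reduce_step(v, u):
        # Step 2: replace v by x(v, u), with r - m in ]-1/2, +1/2].
        r = dot(u, v) / dot(u, u)
        m = math.ceil(r - 0.5)
        eps = 1 if r - m >= 0 else -1
        return (eps * (v[0] - m * u[0]), eps * (v[1] - m * u[1]))

    def gauss_acute_loops(u, v):
        # Count loops until condition (1bis) u.v < v^2 holds.
        count = 0
        while True:
            if dot(u, u) > dot(v, v):
                u, v = v, u
            v = reduce_step(v, u)
            count += 1
            if dot(u, v) < dot(v, v):
                return count

    def alpha_beta(n):
        # (sqrt(2) + 1)^n = alpha_n + beta_n * sqrt(2), by recurrence.
        a, b = 1, 0
        for _ in range(n):
            a, b = a + 2 * b, a + b
        return a, b

    for k in range(1, 8):
        ak, bk = alpha_beta(k)
        ak1, bk1 = alpha_beta(k - 1)
        assert gauss_acute_loops((bk, ak), (bk1, ak1)) == k
    print("worst-case bases verified up to k = 7")

The coordinates grow like (1 + √2)^n, in agreement with the bound (13).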

6. COMPARING GAUSS' ALGORITHM AND EUCLID'S ALGORITHM

It is clear that Gauss' algorithm generalizes Euclid's algorithm, more
precisely the centered Euclidean algorithm. A basic step of the centered
Euclidean algorithm between two positive integers a and b (a ≥ b) is
written

    a = bq + r  with  −b/2 < r ≤ b/2,

and replaces the pair (b, a) by the pair (|r|, b). An iteration of Gauss'
algorithm clearly generalizes this basic step to the two-dimensional case.

We show now that, in Z², the two algorithms, Gauss' algorithm and the
centered continued fraction algorithm, are the same. This fact is deduced
from a more general result.
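For concreteness, here is a minimal Python sketch (my own) of the centered
Euclidean algorithm; it records the centered quotients, from which the
convergents p_0/q_0 used in Proposition 3 below can be rebuilt:

    def centered_step(a, b):
        # One centered division a = b*q + r with -b/2 < r <= b/2:
        # q is the integer nearest to a/b, under the convention of Section 2.
        q = -((b - 2 * a) // (2 * b))
        return q, a - b * q

    def centered_euclid(a, b):
        # Full centered Euclidean algorithm on a >= b > 0; returns the gcd
        # and the list of centered quotients (one per division step).
        quotients = []
        while b > 0:
            q, r = centered_step(a, b)
            quotients.append(q)
            a, b = b, abs(r)
        return a, quotients

    # Example on two consecutive Pell numbers (cf. Corollary 2): four steps.
    print(centered_euclid(29, 12))   # -> (1, [2, 2, 2, 2])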

PROPOSITION 3. Let (w, w′) and (u, v) be two bases of the same lattice L.
Suppose that the two following conditions hold:

(i) the two vectors u and v form an acute angle,

(ii) the two vectors w and w′ can be expressed in the basis (u, v) as

    w = pu + qv  and  w′ = p′u + q′v,

with strictly positive integers p, p′, q, q′ satisfying q < q′.

Then, a step of Gauss' algorithm, executed on the pair (w, w′), replaces the
vector w′ by the vector w_0 = p_0u + q_0v built from the rational p_0/q_0 that is
the last convergent of the centered continued fraction expansion of p/q.

Proof. Since (u, v) and (w, w′) are two bases of the same lattice L, we
have

    pq′ − p′q = ±1.    (16)

From this equality and the hypothesis q < q′, we deduce that w′ is the
longer vector of the two vectors w and w′. Thus, a step of the Gauss
algorithm working on the pair (w, w′) replaces w′ by another, shorter,
vector.

We consider the last convergent p_0/q_0 of p/q. Then, the pair (p_0, q_0)
satisfies the conditions

    pq_0 − p_0q = ±1,    (17)

    0 < p_0 ≤ p,  0 < q_0 ≤ q.    (18)

Comparing the two equalities (16) and (17), we deduce that there exists an
integer m of Z such that

    p′ = εp_0 + mp,  q′ = εq_0 + mq  with ε = ±1.

Therefore, w_0 is an element of K(w′, w). Using (18) and the fact that u and
v form an acute angle, we deduce that the scalar product w_0·w, equal to

    p_0pu² + q_0qv² + (p_0q + pq_0)u·v,

satisfies

    0 ≤ w_0·w ≤ (1/2)w².

Thus the vector w_0 is just the vector built by the Gauss algorithm from the
pair (w, w′). □

We can apply this proposition in the particular case when the basis (u, v)
is made with two sides of an acute basic triangle of lattice L. Then, after a
possible modification of the sign of the two vectors u, v, we can assume
that the hypotheses of Proposition 3 are fulfilled. So, we prove that the
whole Gauss algorithm working on basis (w′, w) coincides with the
centered Euclidean algorithm working on the coordinates of w in this basis
(u, v).

It is usual to assert that the Gauss algorithm and the centered
Euclidean algorithm are the same; but this is true only a posteriori, when
one already knows the minimal basis of the lattice. And, generally
speaking, the Euclidean algorithm is unable to find such a basis, while the
Gauss algorithm is built for finding it!

But there is a particular case when one knows a priori the minimal basis
of the lattice; this is the case of lattice Z², where this basis is the canonical
one. In this case, it is actually true that the two algorithms coincide.

The worst case of Gauss' algorithm arises precisely in this case. So, our
proof enables us, first, to recover the description of the worst-case
configuration of the centered Euclidean algorithm, which was first given by
Dupré [2] and, second, to show that the generalization of Euclid's centered
algorithm to Gauss' algorithm does not worsen the worst case.

COROLLARY 2. For k ≥ 1, let u and v be integers with u > v > 0 such
that the centered Euclidean algorithm requires exactly k division steps and
such that v is as small as possible satisfying these conditions. Then v = β_k
and u = β_{k+1}.

7. THE COMPLEXITY OF THE LLL ALGORITHM

We recall that the LLL algorithm [13] reduces bases of integer lattices:
when given a basis b = (b_1, b_2, ..., b_n) of a lattice L, it builds a new basis
of this lattice which enjoys good Euclidean properties. This algorithm
considers the box B_i which is made with the projections of b_i and b_{i+1}
orthogonally to the subspace generated by the first i − 1 vectors of basis b,
and aims at building a new basis b for which all associated boxes B_i are
reduced in the sense of the Gauss(t) algorithm.

From both the points of view of complexity and output configuration, it
seems that this algorithm depends on the parameter t, which is assumed,
as previously, to be strictly greater than 1. During its execution, it
performs steps of the Gauss(t) algorithm on boxes B_i, and the total number
of steps s(t) of Gauss(t) performed can be upper-bounded as a function of
the dimension n and of the length M of the input basis; one obtains

    s(t) ≤ (1/2)n(n − 1) log_t(M)

by considering the geometrical decrease (by a factor of 1/t²) of the
quantity

    D(b) = ∏_{i=1}^{n−1} d_i(b),

where d_i(b) is the determinant of the Gram matrix built with the first i
vectors (b_1, ..., b_i) of basis b.
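To make the potential argument concrete, here is a small Python sketch
(my own illustration) computing the quantities d_i(b) and D(b) for an
integer basis:

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def det(M):
        # Determinant by Laplace expansion (fine for small matrices).
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j] *
                   det([row[:j] + row[j + 1:] for row in M[1:]])
                   for j in range(len(M)))

    def potential_D(basis):
        # D(b): product over i = 1..n-1 of d_i(b), where d_i(b) is the
        # determinant of the Gram matrix of (b_1, ..., b_i).
        n = len(basis)
        D = 1
        for i in range(1, n):
            gram = [[dot(basis[r], basis[c]) for c in range(i)]
                    for r in range(i)]
            D *= det(gram)
        return D

    # Each Gauss(t) step performed by LLL divides D(b) by at least t^2,
    # which yields the bound s(t) <= (1/2) n (n-1) log_t(M) quoted above.
    print(potential_D([(3, 1, 0), (1, 2, 1), (0, 1, 4)]))   # -> 350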
This argument fails to prove the polynomial-time complexity of the LLL
algorithm for the value t = 1 of the parameter. However, we must
remark that this argument already fails when applied in the
two-dimensional case and does not prove the polynomial-time complexity
of the Gauss algorithm itself!

In Section 2, we proved the polynomial complexity of the Gauss
algorithm in an indirect way: we first established the polynomial-time
complexity of the Gauss(t) algorithm; then we proved that the number of
iterations of Gauss(t) is almost independent from the parameter t. For
this, we compared the output configurations of both algorithms Gauss(t)
and Gauss.

We think that the same indirect method can help to study the
complexity of the LLL algorithm for t = 1. One needs to answer the
question:

Does the output configuration of the LLL algorithm associated
to parameter t essentially depend on t?

We remark that Lagarias and Odlyzko [10] performed extensive
computations and conjectured that the LLL algorithm runs in polynomial
time even for the particular value t = 1 of the parameter.

ACKNOWLEDGMENTS

This work is a part of my Ph.D. thesis [16]. I thank Jacques Stern, who was my thesis
advisor, for many helpful discussions. Many thanks to a referee who gave me many hints to
improve the paper.

REFERENCES

1. G. L. DIRICHLET, J. Reine Angew. Math. 40 (1850), 216-219.
2. A. DUPRÉ, Sur le nombre des divisions à effectuer pour obtenir le plus grand commun
diviseur entre deux nombres entiers, J. Math. 11 (1846), 41-64, quoted in [7].
3. C. F. GAUSS, "Recherches arithmétiques," French translation of "Disquisitiones
Arithmeticae," Blanchard, Paris, 1953.
4. B. JUST, Integer relations among algebraic numbers, in "Proceedings, MFCS '89,"
Lecture Notes in Computer Science, Vol. 379, pp. 314-320, Springer-Verlag, New
York/Berlin, 1989.
5. R. KANNAN, Improved algorithms for integer programming and related lattice problems,
in "15th Annual ACM Sympos. Theory of Computing (1983)," pp. 193-206.
6. R. KANNAN, H. W. LENSTRA, AND L. LOVÁSZ, Polynomial factorization and bits of
algebraic and some transcendental numbers, Math. Comput. 50, No. 181 (1988), 235-250.
7. D. E. KNUTH, "The Art of Computer Programming," Vol. II, pp. 361, 605,
Addison-Wesley, Reading, MA, 1981.
8. J. C. LAGARIAS, Computational complexity of simultaneous diophantine approximation
problems, in "23rd IEEE Symp. FOCS, 1982."
9. J. C. LAGARIAS, Worst-case complexity bounds for algorithms in the theory of integral
quadratic forms, J. Algorithms 1 (1980), 142-186.
10. J. C. LAGARIAS AND A. M. ODLYZKO, Solving low-density subset sum problems, in "24th
IEEE Symposium FOCS, 1983," pp. 1-10.
11. J. L. LAGRANGE, in "Oeuvres III," pp. 695-795.
12. S. LANDAU, Factoring polynomials over algebraic number fields, SIAM J. Comput. 14,
No. 1 (1985), 184-195.
13. A. K. LENSTRA, H. W. LENSTRA, AND L. LOVÁSZ, Factoring polynomials with rational
coefficients, Math. Ann. 261 (1982), 513-534.
14. H. W. LENSTRA, Integer programming with a fixed number of variables, Math. Oper. Res.
8, No. 4 (1983).
15. W. SCHARLAU AND H. OPOLKA, "From Fermat to Minkowski, Lectures on the Theory of
Numbers and Its Historical Development," Undergraduate Texts in Mathematics,
Springer-Verlag, New York/Berlin, 1985.
16. B. VALLÉE, "Une approche géométrique de la réduction des réseaux en petite
dimension," Thèse de Doctorat, Université de Caen, 1986.
17. B. VALLÉE, M. GIRAULT, AND PH. TOFFIN, How to guess ℓth roots modulo n by reducing
lattice bases, in "Proceedings of AAECC-6, Roma, 1988," Lecture Notes in Computer
Science, Vol. 357, pp. 427-442, Springer-Verlag, New York/Berlin, 1988.
