Gauss' Algorithm Revisited
556-572 (1991)
The LLL algorithm is often also called the L³-algorithm or Lovász' basis reduction
algorithm.
0196-6774/91 $3.00
Copyright © 1991 by Academic Press, Inc.
All rights of reproduction in any form reserved.
(A) We give the best possible upper bound for the number of
iterations of such an algorithm.
(B) It is intuitive that Gauss’ algorithm generalizes Euclid’s centered
algorithm to the two-dimensional case. Here, we establish clearly the link
between the two algorithms and we deduce that the worst-case input
configuration of Gauss' algorithm generalizes the worst-case input
configuration of the centered Euclid's algorithm, first exhibited by Dupré
in 1846 [2].
l = u² + v² ≥ M².
BRIGITTE VALLÉE
Here, we follow the very nice book of Scharlau and Opolka [15], which
describes the historical evolution of the notion of reduction of integral
quadratic forms.
Lagrange [11] was the first mathematician to study the reduction theory
of binary quadratic forms, even though these words do not actually appear
in his work. The connection between bases of lattices and quadratic forms
was observed by Gauss [3] in the two-dimensional case and systematically
exploited by Dirichlet [1], who extended the notion of reduction to the
case of dimension three. It is quite clear that, among these mathematicians,
Gauss was the most interested in what we now call an algorithmic
point of view, and this explains why his name is given to the algorithm of
reduction of binary quadratic forms.
GAUSS ALGORITHM.
Repeat
1. If u² > v², exchange u and v;
2. v := x(v, u);
until u² ≤ v².
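The reduction operator in Step 2 is unreadable in this copy; in the classical description of Gauss' algorithm, Step 2 replaces v by v − mu, where m is the integer nearest to (u·v)/(u·u). Under that assumption, the loop can be sketched as follows (the function names are ours):

```python
def dot(a, b):
    return a[0]*b[0] + a[1]*b[1]

def gauss_reduce(u, v):
    """Gauss reduction of a basis (u, v) of a 2-dimensional integer lattice.
    Step 2 is taken to be v := v - m*u with m the integer nearest to
    (u.v)/(u.u); the original operator x(v, u) is unreadable in this copy."""
    while True:
        # Step 1: make u the shorter vector
        if dot(u, u) > dot(v, v):
            u, v = v, u
        # Step 2: reduce v against u (nearest-integer quotient, exact in integers)
        m = (2*dot(u, v) + dot(u, u)) // (2*dot(u, u))
        v = (v[0] - m*u[0], v[1] - m*u[1])
        # loop exit: u^2 <= v^2
        if dot(u, u) <= dot(v, v):
            return u, v
```

On the basis ((1, 1), (5, 6)) of Z², two loops produce a pair of unit vectors, and the output satisfies the conditions stated in the proof of Proposition 1.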
The following result is well known and describes the output configura-
tion:
PROPOSITION 1. Given a basis (u, v) of a lattice L, the Gauss algorithm
constructs a basis of L that contains two successive minima of L.
Proof. The output configuration (u, v) satisfies the two conditions
v² ≥ u² and 0 ≤ u·v ≤ (1/2)u²,
u² ≤ v²
is replaced by
u² ≤ t²v².  (1)
The polynomial complexity of the Gauss(t) algorithm is clear. At each
loop, the length of the longer vector is decreased by a factor at least equal
to 1/t. So, we obtain an upper bound for the number k_t of iterations of
this algorithm executed on an integer basis of length M and inertia l:
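A minimal sketch of Gauss(t), assuming the classical nearest-integer step rule (the rule itself and the sample value of t below are our assumptions); it differs from the Gauss algorithm only in its relaxed exit test u² ≤ t²v², and it returns the iteration count k_t:

```python
def gauss_t_reduce(u, v, t=1.5):
    """Gauss(t): same loop as the Gauss algorithm, but the exit test
    u^2 <= v^2 is relaxed to u^2 <= t^2 * v^2 (t > 1 is assumed in the
    text); returns the reduced pair and the number of iterations."""
    def dot(a, b):
        return a[0]*b[0] + a[1]*b[1]
    iters = 0
    while True:
        if dot(u, u) > dot(v, v):
            u, v = v, u
        # nearest integer to (u.v)/(u.u), computed in exact arithmetic
        m = (2*dot(u, v) + dot(u, u)) // (2*dot(u, u))
        v = (v[0] - m*u[0], v[1] - m*u[1])
        iters += 1
        if dot(u, u) <= t*t*dot(v, v):
            return (u, v), iters
```

With t = 1 the loop coincides with the Gauss algorithm; a larger t can save the final iteration, in line with inequality (4).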
the condition (1bis) expresses that the angle at the point C is acute. On
the other hand, Step 2 has made the two other angles, at A and B, acute.
Hence, Gauss-acute halts when the output triangle ABC built on vectors
u and v has all three of its angles acute.
k_t ≤ k ≤ k_t + 1.  (4)
k̂ ≤ k ≤ k̂ + 1.  (4bis)
each of the two ratios (v·u)/v², on one side, and ((v − u)·u)/u², on the
other side, is at most equal to 1/2. So the loop termination condition of
the original Gauss algorithm is sharper than condition (1bis).
We now prove the two other inequalities. We consider the last loop of
the Gauss(t) algorithm or the last loop of the Gauss-acute algorithm and
we suppose that this loop is not the last one for the Gauss algorithm. We
prove that the Gauss algorithm finishes at the next loop. In both cases, we
have then
Remark that v is the shortest side of the triangle built on (u, v). So the
following loop of the Gauss algorithm begins by exchanging u and v and
we have
(5)
In the case of Gauss-acute, we use directly condition (1bis):
0 ≤ (v·u)/u² ≤ 1.
So, in both cases, because of (5) or (1bis), the following Step 2 will either
leave v fixed or change it into ±(u − v), according to the relative sizes of
these two vectors. Since each of these vectors is longer than u, the
execution of the Gauss algorithm finishes at this loop, and this shows the
first two assertions.
At the end of this last loop, we obtain the two vectors that were the two
shortest vectors of the triangle at the end of the previous loop. Consequently,
the second assertion follows from Proposition 1. □
m is an integer, m > 1, ε = ±1, and ε = 1 if m = 2 or if u·v = 0.
(7)
Proof. Before the execution of this loop, the vector u, which is now the
longer vector of the basis, was the shorter one. It remained fixed. The
antecedents of vector v are thus among the vectors w′ defined by
w′ = mu + εv with m ∈ Z and ε = ±1.
The possible antecedents of basis (u, v) can only be bases (w′, u) with
FIGURE 1. The possible antecedents w′ = mu + εv, shown for (m, ε) =
(0, −1), (1, −1), (2, −1), (1, +1), and (2, +1); the minimal antecedent is
marked.
w′·u − w·u = (m − 2)u² + (ε − 1)u·v.
This difference is strictly positive under the conditions ε = 1 and m > 2.
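The comparison above can be explored numerically. The sketch below (the window on m and the filtering are our illustrative choices) enumerates the candidates w′ = mu + εv and keeps those longer than u, since the antecedent was the longer vector of its basis:

```python
def dot(a, b):
    return a[0]*b[0] + a[1]*b[1]

def candidate_antecedents(u, v, m_max=3):
    """List candidate antecedents w' = m*u + eps*v (m in Z, eps = +-1) of v,
    keeping those with w'^2 > u^2 and sorting by squared length; the
    shortest entry plays the role of the minimal antecedent of Figure 1."""
    cands = []
    for m in range(-m_max, m_max + 1):
        for eps in (1, -1):
            w = (m*u[0] + eps*v[0], m*u[1] + eps*v[1])
            if dot(w, w) > dot(u, u):   # w' was the longer basis vector
                cands.append((dot(w, w), m, eps, w))
    cands.sort()
    return cands
```

For the reduced basis u = (1, 0), v = (0, 1) of Z², the shortest candidates have squared length 2, and the squared length grows with m as the displayed difference predicts.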
We start now with a convenient basis (u_0, u_{-1}), which is the last
convenient basis obtained during an execution of the Gauss-acute algorithm,
and we consider all the sequences u′_n made with the nth possible
successive antecedents of the basis (u_0, u_{-1}). Among them, there is a
particular one, the sequence u_n made with the successive minimal
antecedents. This sequence follows the linear recurrence (cf. Fig. 2)
u_{n+1} = 2u_n + u_{n-1}.  (8)
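A quick numerical check of the recurrence u_{n+1} = 2u_n + u_{n-1} (our reading of (8), consistent with the computation of l_1 − 5l_0 later in the proof): with illustrative starting vectors, chosen by us since the initial conditions (15) are not readable in this copy, the inertia l_n grows geometrically with ratio (1 + √2)².

```python
def minimal_antecedent_sequence(u_prev, u0, n):
    """Iterate u_{n+1} = 2*u_n + u_{n-1} (our reading of recurrence (8));
    the starting vectors are an illustrative choice, not conditions (15)."""
    seq = [u_prev, u0]
    for _ in range(n):
        a, b = seq[-2], seq[-1]
        seq.append((2*b[0] + a[0], 2*b[1] + a[1]))
    return seq

def inertia(x, y):
    """Inertia l of the basis (x, y): the sum of the two squared lengths."""
    return x[0]**2 + x[1]**2 + y[0]**2 + y[1]**2

seq = minimal_antecedent_sequence((1, 0), (1, 1), 5)
# successive ratios l_{n}/l_{n-1}, which approach (1 + sqrt(2))^2
ratios = [inertia(seq[i+1], seq[i]) / inertia(seq[i], seq[i-1])
          for i in range(1, 6)]
```

This starting pair gives l_0 = 3, matching the lower bound on the inertia of the last convenient basis used in the proof of Theorem 1.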
We compare the Gram matrices G_n and G′_n respectively built with the
particular pair (u_n, u_{n-1}) and all the pairs (u′_n, u′_{n-1}).
LEMMA 3. The Gram matrix G_n is strictly minimal among all the Gram
matrices G′_n.
The same quantities with a prime denote the associated quantities for the
vectors u′_n. We have
FIGURE 2
If we let
so that
If we apply now Lemma 2 to the possible antecedents of the pair
(u′_{n-1}, u′_{n-2}), we obtain
Since the matrix Q is strictly positive, we can apply the induction hypothe-
sis to obtain
This last term is by definition equal to G_n and, using (9), completes the
induction step. Furthermore, we remark that the last two inequalities can
become equalities only if the sequence u′_n is made with all minimal
antecedents. □
We deduce from Lemma 3 that l′_n is at least equal to l_n, and can be
equal to l_n only if the sequence u′_n is made with all minimal antecedents.
Now, we calculate l_n exactly.
LEMMA 4. We have
l_n/l_0 ≥ (1/(2√2)) [(1 + √2)^{2n+1} − (1 − √2)^{2n+1}].  (11)
The equality holds only when the triangle AB_0B_{-1} defined by AB_0 = u_0
and AB_{-1} = u_{-1} is right-angled at B_{-1}.
l_n = Trace(Q^{2n} G_0).
We diagonalize Q: D = ᵗPQP with
D = ( √2 + 1     0
          0   1 − √2 ),
and we deduce
l_n = γ(√2 + 1)^{2n} + δ(1 − √2)^{2n},
where γ and δ are the diagonal coefficients of matrix ᵗPG_0P. Since l_n
must be an integer, the coefficients γ and δ are conjugates in Q(√2).
l_1 − 5l_0 = 5u_0² + u_{-1}² + 4u_0·u_{-1} − 5u_0² − 5u_{-1}²
           = 4(u_0·u_{-1} − u_{-1}²),
and, finally,
Finally, from (12) we obtain the lower bound (11), which is the best
possible since it is reached when the first triangle is right-angled. □
We now deduce an upper bound for the number k̂ of iterations of the
Gauss-acute algorithm.
THEOREM 1. Let (u, v) be a basis of inertia l of an integer lattice L. Then
the number k̂(l) of iterations executed by the Gauss-acute algorithm on basis
(u, v) is upper-bounded by K(l) with
This upper bound is asymptotically best possible in the following sense: there
exists a sequence of bases of lattice Z² with strictly increasing inertiae l for
which the ratio k̂(l)/K(l) tends to 1 when l tends to ∞.
Proof. Consider a convenient basis (u, v) of an integer lattice L. Let n
be the number of loops executed by the Gauss-acute algorithm on this
triangle, while the basis remains convenient. We have n = k̂(l) − 1 and,
since the inertia l_0 of the last convenient basis (u_0, u_{-1}) is at least
equal to 3, we have
l/3 ≥ l/l_0 ≥ l_n/l_0 ≥ P_{2n+1} ≥ (1/(2√2))(1 + √2)^{2n+1}.  (14)
if and only if, after a possible rotation by a right angle, vectors u and v are
two successive elements of the sequence u_n defined by the linear recurrence
(8) together with initial conditions (15). More precisely, we must
have u = u_n and v = u_{n-1}.
Since P_{2n+1} is equivalent to (1/(2√2))(1 + √2)^{2n+1} for n → ∞, we
obtain the stated asymptotic bound. □
a = bq + r with −b/2 < r ≤ b/2
and replaces the pair (b, a) by the pair (|r|, b). An iteration of Gauss'
algorithm clearly generalizes this basic step to the two-dimensional case.
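The centered division step can be written down directly; the nearest-integer quotient is computed in exact integer arithmetic, and the tie-breaking at r = ±b/2 is a bookkeeping choice of ours:

```python
def centered_euclid(a, b):
    """Centered Euclidean algorithm: write a = b*q + r with |r| <= b/2
    (q is the integer nearest to a/b), then replace (b, a) by (|r|, b).
    Returns the gcd and the number of division steps.  Assumes a >= b > 0."""
    steps = 0
    while b > 0:
        q = (2*a + b) // (2*b)   # nearest integer to a/b, exact in integers
        r = a - b*q
        a, b = b, abs(r)
        steps += 1
    return a, steps
```

Pairs of consecutive Pell numbers (…, 12, 29, 70, …) make the remainders shrink as slowly as the centered rule permits, which is consistent with Dupré's worst-case configuration recalled above.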
We show now that, in Z², the two algorithms, Gauss' algorithm and the
centered continued fraction algorithm, are the same. This fact is deduced
from a more general result.
PROPOSITION 3. Let (w, w′) and (u, v) be two bases of the same lattice L.
Suppose that the two following conditions hold:
(i) the two vectors u and v form an acute angle,
(ii) the two vectors w and w′ can be expressed in the basis (u, v) as
w = pu + qv and w′ = p′u + q′v, with q ≤ q′.
Then, a step of Gauss' algorithm, executed on the pair (w, w′), replaces the
vector w′ by the vector w_0 = p_0u + q_0v built from the rational p_0/q_0 that is
the last convergent of the centered continued fraction expansion of p/q.
Proof. Since (u, v) and (w, w′) are two bases of the same lattice L, we
have
pq′ − p′q = ±1.  (16)
From this equality and the hypothesis q ≤ q′, we deduce that w′ is the
longer vector of the two vectors w and w′. Thus, a step of the Gauss
algorithm working on the pair (w, w′) replaces w′ by another vector, a
shorter one.
We consider the last convergent p_0/q_0 of p/q. Then, the pair (p_0, q_0)
satisfies the conditions
Therefore, w_0 is an element of K(w, w′). Using (18) and the fact that u
and v form an acute angle, we deduce that the scalar product w_0·w, equal
to
Thus the vector w_0 is just the vector built by the Gauss algorithm from
the pair (w, w′). □
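Proposition 3 can be checked numerically. The sketch below expands p/q by centered divisions (signed partial quotients; the sign conventions are our choice) and rebuilds the convergents; the convergent p_0/q_0 preceding p/q itself then satisfies the unimodularity relation (16):

```python
def centered_cf(p, q):
    """Centered continued fraction expansion of p/q (p, q > 0): at each
    step take the nearest-integer quotient, keep the signed remainder."""
    quotients = []
    while q != 0:
        a = (2*p + q) // (2*q)   # nearest integer to p/q (also for signed q)
        quotients.append(a)
        p, q = q, p - a*q
    return quotients

def convergents(quotients):
    """Convergents p_i/q_i via the usual recurrence, valid for signed
    partial quotients."""
    ps, qs = [1, quotients[0]], [0, 1]
    for a in quotients[1:]:
        ps.append(a*ps[-1] + ps[-2])
        qs.append(a*qs[-1] + qs[-2])
    return list(zip(ps[1:], qs[1:]))
```

For 29/12 the expansion is [2, 2, 3, −2], the preceding convergent is 17/7, and 29·7 − 17·12 = −1.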
We can apply this proposition in the particular case when the basis (u, v)
is made with two sides of an acute basic triangle of lattice L. Then, after a
possible modification of the sign of the two vectors u, v, we can assume
that the hypotheses of Proposition 3 are fulfilled. So, we prove that the
whole Gauss algorithm working on basis (w’, w) coincides with the cen-
tered Euclidean algorithm working on the coordinates of w in this basis
(u, v).
It is usual to assert that the Gauss algorithm and the centered Eu-
clidean algorithm are the same; but it is true only a posteriori, when one
already knows the minimal basis of the lattice. And, generally speaking,
the Euclidean algorithm is unable to find such a basis while the Gauss
algorithm is built for finding it!
But there is a particular case when one knows a priori the minimal basis
of the lattice; this is the case of lattice Z², where this basis is the canonical
one. In this case, it is actually true that the two algorithms coincide.
The worst-case of Gauss’ algorithm arises precisely in this case. So, our
proof enables us, first, to recover the description of the worst-case
configuration of the centered Euclidean algorithm, which was first given by
Dupré [2] and, second, to show that the generalization of Euclid's centered
algorithm to Gauss' algorithm does not worsen the worst case.
We recall that the LLL algorithm [13] reduces bases of integer lattices:
when given a basis b = (b_1, b_2, …, b_n) of a lattice L, it builds a new
basis of this lattice which enjoys good Euclidean properties. This algorithm
considers the box B_i made with the projections of b_i and b_{i+1}
orthogonally to the subspace generated by the first i − 1 vectors of basis b,
and aims at building a new basis b for which all associated boxes B_i are
reduced in the sense of the Gauss(t) algorithm.
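For concreteness, here is a compact LLL sketch in exact rational arithmetic. It uses the usual δ-parametrization of the Lovász condition on each box B_i (a δ < 1 playing the role that the parameter t > 1 plays in the text) and recomputes the Gram-Schmidt data naively at each step; it is an illustration, not the algorithm of [13] verbatim:

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """LLL reduction sketch over Q (exact arithmetic, naive recomputation).
    The swap test is the usual delta-form of the Lovasz condition on the
    2x2 box built from the projections of b_{k-1} and b_k."""
    b = [list(map(Fraction, v)) for v in basis]
    n = len(b)

    def dot(x, y):
        return sum(xi*yi for xi, yi in zip(x, y))

    def gram_schmidt():
        bstar, mu = [], [[Fraction(0)]*n for _ in range(n)]
        for i in range(n):
            w = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                w = [wi - mu[i][j]*bj for wi, bj in zip(w, bstar[j])]
            bstar.append(w)
        return bstar, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size-reduce b_k against b_j
            _, mu = gram_schmidt()
            r = round(mu[k][j])
            if r:
                b[k] = [x - r*y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt()
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k-1]**2) * dot(bstar[k-1], bstar[k-1]):
            k += 1                               # box B_{k-1} is reduced
        else:
            b[k], b[k-1] = b[k-1], b[k]          # swap and step back
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]
```

On the basis ((1, 1, 1), (−1, 0, 2), (3, 5, 6)) this returns the reduced basis ((0, 1, 0), (1, 0, 1), (−1, 0, 2)); nothing in the sketch depends on the dimension being 3.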
From both the points of view of complexity and output configuration, it
seems that this algorithm depends on the parameter t, which is assumed,
as previously, to be strictly greater than 1. During its execution, it per-
forms steps of the Gauss(t) algorithm on boxes Bi, and the total number
of steps s(t) of Gauss(t) performed can be upper-bounded as a function of
D(b) = ∏_{i=1}^{n} d_i(b),
where d_i(b) is the determinant of the Gram matrix built with the first i
vectors (b_1, …, b_i) of basis b.
This argument fails to prove the polynomial-time complexity of the LLL
algorithm for the value t = 1 of the parameter. However, we must remark
that this argument already fails when applied in the two-dimensional case,
where it does not prove the polynomial-time complexity of the Gauss
algorithm itself!
In Section 2, we have proved the polynomial complexity of the Gauss
algorithm in an indirect way: we first established the polynomial-time
complexity of the Gauss(t) algorithm; then we proved that the number of
iterations of Gauss(t) is almost independent of the parameter t. For this,
we compared the output configurations of the two algorithms Gauss(t)
and Gauss.
We think that the same indirect method can help to study the complex-
ity of the LLL algorithm for t = 1. One needs to answer the question:
Does the output configuration of the LLL algorithm associated
to parameter t essentially depend on t?
ACKNOWLEDGMENTS
This work is a part of my Ph.D. thesis [16]. I thank Jacques Stern, who was my thesis
advisor, for many helpful discussions. Many thanks to a referee who gave me many hints to
improve the paper.
REFERENCES