
Lattices in Computer Science (Tel Aviv University, Fall 2004)
Lecture 3: CVP Algorithm
Lecturer: Oded Regev. Scribe: Eyal Kaplan.

In this lecture, we describe an approximation algorithm for the Closest Vector Problem (CVP).
This algorithm, known as the Nearest Plane Algorithm, was developed by L. Babai in 1986. It
obtains a 2(2/√3)^n approximation ratio, where n is the rank of the lattice. In many applications, this
algorithm is applied for a constant n; in such cases, we obtain a constant approximation factor.
One can define approximate-CVP as a search problem, as an optimization problem, or as a
decision problem (where the latter is often known as a gap problem). In the following definitions,
γ ≥ 1 is the approximation factor. By setting γ = 1 we obtain the exact version of the problems.

DEFINITION 1 (CVP_γ, SEARCH) Given a basis B ∈ Z^{m×n} and a point t ∈ Z^m, find a point
x ∈ L(B) such that ∀y ∈ L(B), ‖x − t‖ ≤ γ‖y − t‖.

DEFINITION 2 (CVP_γ, OPTIMIZATION) Given a basis B ∈ Z^{m×n} and a point t ∈ Z^m, find r ∈ Q
such that dist(t, L(B)) ≤ r ≤ γ · dist(t, L(B)).

DEFINITION 3 (CVP_γ, DECISION) Given a basis B ∈ Z^{m×n}, a point t ∈ Z^m and r ∈ Q, decide
whether dist(t, L(B)) ≤ r or dist(t, L(B)) > γ · r.
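For example, take the basis b1 = (5, 0), b2 = (0, 5), so that L(B) = 5Z², and t = (3, 2). The closest lattice point is (5, 0), at distance √8 ≈ 2.83, so an exact (γ = 1) search algorithm must return (5, 0), whereas a CVP_2 search algorithm may also return, say, (0, 0) or (5, 5), both at distance √13 ≈ 3.61 ≤ 2√8.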
Babai's nearest plane algorithm solves the search variant of CVP_γ for γ = 2(2/√3)^n. It is easy
to see that this implies a solution to the other two variants of CVP_γ, as they are not harder than the
search version. For simplicity, the algorithm we present here achieves γ = 2^{n/2}. It is possible to
achieve γ = 2(2/√3)^n by a straightforward modification of the parameters.

1 The Nearest Plane Algorithm


The algorithm has two main steps. First, it applies the LLL reduction to the input lattice. It then
looks for an integer combination of the basis vectors that is close to the target vector t. This step is
essentially the same as one inner loop in the reduction step of the LLL algorithm.

INPUT: A basis B ∈ Z^{m×n} and a point t ∈ Z^m
OUTPUT: A vector x ∈ L(B) such that ‖x − t‖ ≤ 2^{n/2} · dist(t, L(B))

1. Run δ-LLL on B with δ = 3/4.
2. b ← t
   for j = n down to 1 do
       b ← b − cj bj where cj = ⌈⟨b, b̃j⟩/⟨b̃j, b̃j⟩⌋, the nearest integer to ⟨b, b̃j⟩/⟨b̃j, b̃j⟩
   Output t − b
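
To make the rounding step concrete, here is a minimal Python sketch of the second step (the first step, the δ-LLL reduction, is assumed to have been performed already by some external routine; numpy and the names gram_schmidt and nearest_plane are illustrative choices, not part of the lecture):

import numpy as np

def gram_schmidt(B):
    """Return the Gram-Schmidt vectors b~_1, ..., b~_n as the columns of Bt."""
    B = np.asarray(B, dtype=float)
    Bt = np.zeros_like(B)
    for j in range(B.shape[1]):
        Bt[:, j] = B[:, j]
        for i in range(j):
            mu = np.dot(B[:, j], Bt[:, i]) / np.dot(Bt[:, i], Bt[:, i])
            Bt[:, j] -= mu * Bt[:, i]
    return Bt

def nearest_plane(B, t):
    """Second step of the nearest plane algorithm.
    B: m x n matrix whose columns are an (assumed) LLL-reduced basis; t: target vector.
    Returns x in L(B) with ||x - t|| <= 2^(n/2) * dist(t, L(B))."""
    B = np.asarray(B, dtype=float)
    t = np.asarray(t, dtype=float)
    Bt = gram_schmidt(B)
    b = t.copy()
    for j in reversed(range(B.shape[1])):        # columns from last to first (math index n down to 1)
        c = round(np.dot(b, Bt[:, j]) / np.dot(Bt[:, j], Bt[:, j]))
        b = b - c * B[:, j]                      # b <- b - c_j * b_j
    return t - b                                 # t - b = sum_j c_j * b_j lies in L(B)

On an LLL-reduced basis this reproduces the loop above exactly; running δ-LLL first is what yields the 2^{n/2} guarantee.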

It can be seen that this algorithm runs in time polynomial in the input size; indeed, the LLL
procedure runs in polynomial time and the reduction step was already analyzed in the previous
class. Notice that unlike our description of the LLL algorithm, here we consider the algorithm for
arbitrary lattices that are not necessarily full-rank. This will, in fact, make our analysis slightly
easier.
A useful way to imagine the second step of the algorithm is the following. Consider the orthonormal
set given by b̃1/‖b̃1‖, . . . , b̃n/‖b̃n‖. For full-rank lattices (i.e., m = n) this is a basis, but
in general, we need to extend it with m − n additional vectors to make it an orthonormal basis of
R^m. Using such a basis, we can now write the matrix B and the vector t as follows.

\[
B = \begin{pmatrix}
\|\tilde b_1\| & * & \cdots & * \\
0 & \|\tilde b_2\| & \cdots & * \\
\vdots & & \ddots & \vdots \\
0 & \cdots & 0 & \|\tilde b_n\| \\
0 & 0 & \cdots & 0 \\
\vdots & & & \vdots \\
0 & \cdots & \cdots & 0
\end{pmatrix},
\qquad
t = \begin{pmatrix} * \\ \vdots \\ * \\ * \\ \vdots \\ * \end{pmatrix}
\]

The algorithm looks for an integer combination of the columns for which each coordinate i =
1, . . . , n is within ±‖b̃i‖/2 of the ith coordinate of t. So our algorithm first finds a multiple of
the nth matrix column that brings the nth coordinate to within ±‖b̃n‖/2 of the nth coordinate of t.
It then continues to the (n − 1)st column, and so on. Notice that in case the lattice is not full-rank,
the last m − n dimensions correspond to the space orthogonal to the span of the lattice.
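
To see this coordinate picture concretely, one can change to the b̃-basis explicitly; the following small numpy sketch (an illustration under the same assumptions as before, reusing the gram_schmidt helper from the sketch above) prints B and t in these coordinates for a full-rank, rank-2 example:

import numpy as np

B = np.array([[2.0, 1.0],            # columns are b_1 = (2, 1) and b_2 = (1, 3)
              [1.0, 3.0]])
t = np.array([1.3, 2.2])

Bt = gram_schmidt(B)                  # Gram-Schmidt vectors b~_1, b~_2
Q = Bt / np.linalg.norm(Bt, axis=0)   # orthonormal basis b~_i / ||b~_i||

# In these coordinates B is upper triangular with ||b~_i|| on the diagonal,
# and t is an arbitrary vector (the *'s in the picture above).
print(np.round(Q.T @ B, 4))           # [[2.2361 2.2361], [0. 2.2361]]
print(np.round(Q.T @ t, 4))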
We now consider another equivalent description of the second step. This description is recursive
and will be the most convenient for our analysis. It emphasizes the geometric nature of the nearest
plane algorithm and also explains its name. See Figure 1 for an illustration.



[Figure 1: The nearest plane algorithm for a rank 3 lattice and the resulting rank 2 instance. The chosen hyperplanes are thicker.]

1. Let s be the projection of t on span(b1, . . . , bn).

2. Find c such that the hyperplane cb̃n + span(b1, . . . , bn−1) is as close as possible to s.

3. Let s′ = s − cbn. Call recursively with s′ and L(b1, . . . , bn−1). Let x′ be the answer.

4. Return x = x′ + cbn.

It is easy to verify that the above is indeed equivalent to the second step of the algorithm. Our
first step is to project t on span(b1 , . . . , bn ). Some thought reveals that the closest lattice vector to
s is the same as the closest lattice vector to t and hence this step makes sense. In Step 2 we identify
one translate of the lattice L(b1 , . . . , bn−1 ) where we suspect that the closest vector to s resides. It
is on this translate that we recurse in Step 3. More precisely, in Steps 3 and 4 we compute a close

vector to s in cbn + L(b1 , . . . , bn−1 ). Since the latter set is not a lattice (it does not contain the zero
vector), we shift it (together with s) by −cbn. Then, when we obtain the answer x′, we shift it back
by cbn . Hence, the answer x is indeed a close vector to s in cbn + L(b1 , . . . , bn−1 ).
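
A direct transcription of this recursive description into Python might look as follows (again an illustrative sketch, using numpy and the gram_schmidt helper from the first sketch; computing the projection in Step 1 by least squares is one possible choice, not dictated by the notes):

import numpy as np

def nearest_plane_recursive(B, t):
    """Recursive form of the nearest plane algorithm.
    B: m x n matrix whose columns are an (assumed) LLL-reduced basis; t: target vector."""
    B = np.asarray(B, dtype=float)
    t = np.asarray(t, dtype=float)
    n = B.shape[1]
    if n == 0:
        return np.zeros_like(t)                      # rank-0 lattice: the only point is 0
    # Step 1: project t onto span(b_1, ..., b_n).
    coeffs = np.linalg.lstsq(B, t, rcond=None)[0]
    s = B @ coeffs
    # Step 2: pick c so that the hyperplane c*b~_n + span(b_1, ..., b_{n-1}) is closest to s.
    bn_tilde = gram_schmidt(B)[:, -1]
    c = round(np.dot(s, bn_tilde) / np.dot(bn_tilde, bn_tilde))
    # Step 3: shift by -c*b_n and recurse on the rank-(n-1) lattice L(b_1, ..., b_{n-1}).
    x_prime = nearest_plane_recursive(B[:, :-1], s - c * B[:, -1])
    # Step 4: shift the answer back.
    return x_prime + c * B[:, -1]

Up to floating-point error, this returns the same lattice vector as the iterative sketch above.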

2 Correctness of the Algorithm


We first notice that the algorithm never returns points that are too far away from the input point (for
the case where t ∈ span(B)). This follows easily from the matrix description above since each
coordinate i of the output is within ±‖b̃i‖/2 of that of t.
CLAIM 4 For any t ∈ span(B), the output x of the algorithm is such that ‖x − t‖² ≤ (1/4) Σ_{i=1}^n ‖b̃i‖².

Since B is LLL-reduced, we obtain the following.

CLAIM 5 For any t ∈ span(B), the output x of the algorithm is such that ‖x − t‖ ≤ (1/2) · 2^{n/2} ‖b̃n‖.

PROOF: By properties of an LLL-reduced basis, we have that

∀ 1 ≤ i ≤ n, ‖b̃i‖ ≤ 2^{(n−i)/2} ‖b̃n‖

and hence, using Claim 4 and the fact that Σ_{i=1}^n 2^{n−i} = 2^n − 1 ≤ 2^n,

‖x − t‖² ≤ (1/4) Σ_{i=1}^n ‖b̃i‖² ≤ (1/4) Σ_{i=1}^n 2^{n−i} ‖b̃n‖² ≤ (1/4) 2^n ‖b̃n‖². □
The above claim shows that when dist(t, L(B)) ≥ ‖b̃n‖/2, the output of the algorithm is a 2^{n/2}
approximation to CVP. However, we still need to handle the case where t is very close to the lattice
(and also the case where t ∉ span(B)). This is done in the following lemma, which completes the
proof of correctness.

LEMMA 6 For any t ∈ Z^m, let y ∈ L(B) be the closest lattice point to t. Then the algorithm
described above finds a point x ∈ L(B) such that ‖x − t‖ ≤ 2^{n/2} ‖y − t‖.

PROOF: We prove by induction on the rank n that our algorithm finds a point x ∈ L(B) such that
‖x − s‖ ≤ 2^{n/2} ‖y − s‖. This yields the claim, since

‖x − t‖² = ‖s − t‖² + ‖s − x‖²
         ≤ ‖s − t‖² + 2^n ‖y − s‖²
         ≤ 2^n (‖s − t‖² + ‖y − s‖²) = 2^n ‖y − t‖²

where the first equality follows since s − t and s − x are orthogonal (x ∈ span(B) and s is the orthogonal
projection of t on span(B)), and the last equality follows similarly since s − t and s − y are orthogonal.
Now distinguish two cases. If ‖s − y‖ < ‖b̃n‖/2, then y ∈ cb̃n + span(b1, . . . , bn−1), because all
other hyperplanes are at distance at least ‖b̃n‖/2 from s. Therefore, y ∈ cbn + L(b1, . . . , bn−1). Intuitively,
this means that we identified the correct translate in Step 2. So we obtain that y′ = y − cbn ∈
L(b1, . . . , bn−1) is the closest point to s′. Hence, by our inductive assumption,

‖x − s‖ = ‖x′ − s′‖
        ≤ 2^{(n−1)/2} ‖y′ − s′‖
        = 2^{(n−1)/2} ‖y − s‖
        ≤ 2^{n/2} ‖y − s‖.

Otherwise, we must have that ‖s − y‖ ≥ ‖b̃n‖/2. In this case, it is possible that we identify the
wrong translate in Step 2. However, by Claim 5, we have that

‖s − x‖ ≤ (1/2) · 2^{n/2} ‖b̃n‖ ≤ 2^{n/2} ‖s − y‖. □
Finally, let us mention the following possible extension to Babai’s algorithm. Instead of iden-
tifying only one translate in Step 2, we take (say) two translates and recurse on both. Such an
extension (slightly) improves the approximation ratio but unfortunately runs in time exponential in
the rank.
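
As a rough illustration of this extension (again an illustrative sketch under the same assumptions, not code from the lecture), one can modify the recursive version above to branch on the two nearest translates and keep the better of the two answers; the branching is what makes the running time exponential in the rank:

import math
import numpy as np

def nearest_planes_two(B, t):
    """Variant that recurses on the two translates nearest to s.
    Runs in time exponential in the rank n, for a slightly better approximation."""
    B = np.asarray(B, dtype=float)
    t = np.asarray(t, dtype=float)
    n = B.shape[1]
    if n == 0:
        return np.zeros_like(t)
    coeffs = np.linalg.lstsq(B, t, rcond=None)[0]
    s = B @ coeffs                                   # projection of t onto span(B)
    bn_tilde = gram_schmidt(B)[:, -1]
    a = np.dot(s, bn_tilde) / np.dot(bn_tilde, bn_tilde)
    best = None
    for c in (math.floor(a), math.ceil(a)):          # the two nearest translates
        x = nearest_planes_two(B[:, :-1], s - c * B[:, -1]) + c * B[:, -1]
        if best is None or np.linalg.norm(x - s) < np.linalg.norm(best - s):
            best = x
    return best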
