
Tel Aviv University, Fall 2004

Lattices in Computer Science


Lecture 12
Average-case Hardness
Lecturer: Oded Regev
Scribe: Elad Verbin
Traditionally, lattices were used as tools in cryptanalysis, that is, as tools in breaking cryptographic
schemes. We have seen an example of such an application in a previous lecture. In 1996, Ajtai made a
surprising discovery: lattices can be used to construct cryptographic schemes [1]. His seminal work sparked
great interest in understanding the complexity of lattice problems and their relation to cryptography.
Ajtai's discovery is interesting for another reason: the security of his cryptographic scheme is based on
the worst-case hardness of lattice problems. What this means is that if one succeeds in breaking the cryptographic
scheme, even with some small probability, then one can also solve any instance of a certain lattice
problem. This remarkable property is what makes lattice-based cryptographic constructions so attractive.
In contrast, virtually all other cryptographic constructions are based on some average-case assumption.
For example, in cryptographic constructions based on factoring, the assumption is that it is hard to factor
numbers chosen from a certain distribution. But how should we choose this distribution? Obviously, we
should not use numbers with small factors (such as even numbers), but perhaps there are other numbers that
we should avoid? In cryptographic constructions based on worst-case hardness, such questions do not even
arise.
Let us describe Ajtai's result more precisely. The cryptographic construction given in [1] is known as a
family of one-way functions. Ajtai proved that the security of this family can be based on the worst-case
hardness of the $n^c$-approximate SVP for some constant $c$. In other words, the ability to invert a function
chosen from this family with non-negligible probability implies an ability to solve any instance of $n^c$-approximate
SVP. Shortly after, Goldreich et al. [4] improved on Ajtai's result by constructing a stronger cryptographic
primitive known as a family of collision resistant hash functions (CRHF). Much of the subsequent work
concentrated on decreasing the constant $c$ (thereby improving the security assumption) [3, 5, 6]. In the most
recent work, the constant is essentially $c = 1$.
Shortly after [1], in a different but related direction of research, Ajtai and Dwork [2] constructed a
public-key cryptosystem whose security is based on the worst-case hardness of lattice problems. Several
improvements were given in subsequent works [4, 7]. We should mention that, unlike the case of one-way
functions and CRHF, the security of all known lattice-based public-key cryptosystems is based on a special
case of SVP known as unique-SVP. The hardness of this problem is not understood so well, and it is an
open question whether one can base public-key cryptosystems on the (worst-case) hardness of SVP.
1 Our CRHF
In this lecture, we present a CRHF based on the worst-case hardness of $O(n^3)$-approximate SIVP. This
construction is a somewhat simplified version of the one in [6]. We remark that it is possible to improve the
security assumption to $\tilde{O}(n)$-approximate SIVP, as was done in [6]. We will indicate how this can be done
in Section 4. Let us first recall the definition of SIVP.

DEFINITION 1 (SIVP$_\gamma$) Given a basis $B \in \mathbb{Z}^{n \times n}$, find a set of $n$ linearly independent vectors in $\mathcal{L}(B)$, each
of length at most $\gamma \cdot \lambda_n(\mathcal{L}(B))$.

The transference theorem of Banaszczyk, which we saw in the last lecture, shows that a solution to SIVP$_\gamma$
implies a solution to (the optimization variant of) SVP$_{\gamma n}$. This is achieved by simply solving SIVP$_\gamma$ on
the dual lattice. Therefore our CRHF construction is also based on the worst-case hardness of $O(n^4)$-approximate
SVP. We now give the formal definition of a CRHF.
DEFINITION 2 A family of collision resistant hash functions (CRHF) is a sequence $(\mathcal{T}_n)_{n=1}^{\infty}$, where each
$\mathcal{T}_n$ is a family of functions $f : \{0,1\}^{m(n)} \to \{0,1\}^{k(n)}$, with the following properties.

1. There exists an algorithm that, given any $n \ge 1$, outputs a random element of $\mathcal{T}_n$ in time polynomial
in $n$.

2. Every function $f \in \mathcal{T}_n$ is efficiently computable.

3. For any $c > 0$, there is no polynomial-time algorithm that with probability at least $\frac{1}{n^c}$, given a random
$f \in \mathcal{T}_n$, outputs $x, y$ such that $x \neq y$ and $f(x) = f(y)$ (i.e., there is no polynomial-time algorithm
that with non-negligible probability finds a collision).

REMARK 1 We usually consider functions from $\{0,1\}^m$ to $\{0,1\}^k$ for $m > k$ so that collisions are guaranteed
to exist. If no collisions exist, the last requirement is trivially void.
REMARK 2 The more standard notion of a family of one-way functions (OWF) is defined similarly, where
instead of the last requirement we have the following:

3'. For any $c > 0$, there is no polynomial-time algorithm that with probability at least $\frac{1}{n^c}$, given a random
$f \in \mathcal{T}_n$ and the value $f(x)$ for a random $x \in \{0,1\}^m$, outputs $y$ such that $f(x) = f(y)$ (i.e., there is
no polynomial-time algorithm that with non-negligible probability inverts the function on a random
input).

It is easy to see that any CRHF is in particular a OWF. We remark that both are important primitives in
cryptography, but we will not expand on this topic.
Our CRHF is essentially the modular subset-sum function over $\mathbb{Z}_q^n$, as defined next. It is parameterized
by two functions $m = m(n)$ and $q = q(n)$.

DEFINITION 3 For any $a_1, \ldots, a_m \in \mathbb{Z}_q^n$, define $f_{a_1,\ldots,a_m}$ as the function from $\{0,1\}^m$ to $\{0,1\}^{n \log q}$ given
by
$$f_{a_1,\ldots,a_m}(b_1, \ldots, b_m) = \sum_{i=1}^{m} b_i a_i \bmod q.$$
Then, we define the family $\mathcal{T}_n$ as the set of functions $f_{a_1,\ldots,a_m}$ for all $a_1, \ldots, a_m \in \mathbb{Z}_q^n$.
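In code, a member of this family is just the list $a_1, \ldots, a_m$, and hashing is a bit-selected vector sum modulo $q$. The following Python sketch is illustrative only, with toy parameters far smaller than the $q = 2^{2n}$ and $m = 4n^2$ used in the theorem below:

```python
import random

def subset_sum_hash(a, b, q):
    # f_{a_1,...,a_m}(b_1,...,b_m) = sum of b_i * a_i mod q,
    # where each a_i is a vector in Z_q^n and each b_i is a bit.
    n = len(a[0])
    out = [0] * n
    for bi, ai in zip(b, a):
        if bi:
            out = [(o + c) % q for o, c in zip(out, ai)]
    return out

# Toy parameters, for illustration only.
n, m, q = 4, 16, 17
random.seed(0)
a = [[random.randrange(q) for _ in range(n)] for _ in range(m)]  # the key
b = [random.randrange(2) for _ in range(m)]                      # the message
digest = subset_sum_hash(a, b, q)
```

The key $a$ compresses $m$ input bits to $n \log q$ output bits, which is why collisions must exist once $m > n \log q$.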
This family clearly satisfies the first two properties of a CRHF. Our main theorem below shows that for
a certain choice of parameters, the existence of a collision finder (i.e., an algorithm that violates the third
property of a CRHF) implies a solution to SIVP$_{O(n^3)}$.
THEOREM 4 Let $q = 2^{2n}$ and $m = 4n^2$. Assume that there exists a polynomial-time algorithm COLLISIONFIND
that, given random elements $a_1, \ldots, a_m \in \mathbb{Z}_q^n$, finds $b_1, \ldots, b_m \in \{-1, 0, 1\}$, not all zero, such
that $\sum_{i=1}^{m} b_i a_i = 0 \pmod{q}$ with probability at least $n^{-c_0}$ for some constant $c_0 > 0$. Then there is a
polynomial-time algorithm that solves SIVP$_{O(n^3)}$ on any lattice.

Notice that for this choice of parameters, $m > n \log q$, so collisions are guaranteed to exist. The proof is
based on the idea of smoothing a lattice by Gaussian noise, which is described in the next section.
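At toy sizes, such a collision can be found by exhaustive search over all inputs (exponential in $m$, of course, unlike the efficient COLLISIONFIND assumed above); the difference of two colliding inputs yields the required $\{-1, 0, 1\}$ coefficients. A hypothetical small example:

```python
from itertools import product

def hash_vec(a, b, q):
    # Modular subset-sum over Z_q^n with coefficients b (entries may be negative).
    n = len(a[0])
    return tuple(sum(bi * ai[j] for bi, ai in zip(b, a)) % q for j in range(n))

n, q, m = 2, 4, 6   # m = 6 > n*log2(q) = 4, so a collision must exist
a = [[1, 2], [3, 1], [0, 3], [2, 2], [1, 0], [3, 3]]   # arbitrary a_i in Z_q^n

seen = {}
collision = None
for b in product([0, 1], repeat=m):
    h = hash_vec(a, b, q)
    if h in seen:                 # pigeonhole: 2^6 inputs, only q^n = 16 outputs
        x, y = b, seen[h]
        collision = tuple(xi - yi for xi, yi in zip(x, y))   # in {-1,0,1}^m
        break
    seen[h] = b
```

By linearity, the difference of two colliding bit vectors hashes to zero, exactly the kind of solution COLLISIONFIND is assumed to produce.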
2 The Smoothing Parameter
For $s > 0$ and $x \in \mathbb{R}^n$ define $\nu_s(x) = \rho_s(x)/s^n$. This is the Gaussian probability density function with
parameter $s$. As we have seen in the last lecture, a vector chosen randomly according to $\nu_s$ has length at
most $\sqrt{n}\,s$ with probability $1 - 2^{-\Omega(n)}$. In this section we are interested in understanding what happens
when we take the uniform distribution on a lattice and add Gaussian noise to it. An illustration of this is
shown in Figure 1. The four plots show the distribution obtained with four different values of $s$. Notice that
as we add more Gaussian noise, the distribution becomes closer to uniform. Our goal in this section is to
analyze this formally and understand how large $s$ has to be for this to happen. This will play a crucial role
in the proof of the main theorem.

Figure 1: A lattice distribution with different amounts of Gaussian noise

To make the above formal, we work modulo the parallelepiped, as was described in Lecture 7. Namely,
the statement we wish to prove is that for large enough $s$, if we reduce the distribution $\nu_s$ modulo $\mathcal{P}(B)$, we
obtain a distribution that is very close to uniform over $\mathcal{P}(B)$. This is done in the following lemma.
LEMMA 5 Let $\Lambda$ be a lattice with basis $B$. Then, the statistical distance between the uniform distribution
on $\mathcal{P}(B)$ and the distribution obtained by sampling from $\nu_s$ and reducing the result modulo $\mathcal{P}(B)$ is at most
$\frac{1}{2}\rho_{1/s}(\Lambda^* \setminus \{0\})$.
PROOF: We need to calculate the statistical distance between the following two density functions on $\mathcal{P}(B)$:
$$U(x) = \frac{1}{\det(\Lambda)} = \det(\Lambda^*)$$
and
$$Y(x) = \sum_{x' \text{ s.t. } x' \bmod \mathcal{P}(B) = x} \nu_s(x') = \frac{1}{s^n}\,\rho_s(x + \Lambda).$$
Using the Poisson summation formula and properties of the Fourier transform, we obtain
$$Y(x) = \frac{1}{s^n} \cdot \det(\Lambda^*)\, s^n \sum_{w \in \Lambda^*} \rho_{1/s}(w)\, e^{2\pi i \langle w, x \rangle}
       = \det(\Lambda^*) \Big( 1 + \sum_{w \in \Lambda^* \setminus \{0\}} \rho_{1/s}(w)\, e^{2\pi i \langle w, x \rangle} \Big).$$
So,
$$\Delta(Y, U) = \frac{1}{2} \int_{\mathcal{P}(B)} |Y(x) - U(x)|\, dx
  \le \frac{1}{2}\, \mathrm{vol}(\mathcal{P}(B)) \max_{x \in \mathcal{P}(B)} |Y(x) - \det(\Lambda^*)|
  = \frac{1}{2} \det(\Lambda) \det(\Lambda^*) \max_{x \in \mathcal{P}(B)} \Big| \sum_{w \in \Lambda^* \setminus \{0\}} \rho_{1/s}(w)\, e^{2\pi i \langle w, x \rangle} \Big|
  \le \frac{1}{2} \det(\Lambda) \det(\Lambda^*) \sum_{w \in \Lambda^* \setminus \{0\}} \rho_{1/s}(w)
  = \frac{1}{2} \rho_{1/s}(\Lambda^* \setminus \{0\}),$$
where the last inequality uses the triangle inequality and the final equality uses $\det(\Lambda)\det(\Lambda^*) = 1$. □
The above lemma motivates the following definition.

DEFINITION 6 For any $\varepsilon > 0$, we define the smoothing parameter of $\Lambda$ with parameter $\varepsilon$ as the smallest $s$
such that $\rho_{1/s}(\Lambda^* \setminus \{0\}) \le \varepsilon$, and denote it by $\eta_\varepsilon(\Lambda)$.

To see why this is well-defined, notice that $\rho_{1/s}(\Lambda^* \setminus \{0\})$ is a continuous and strictly decreasing function
of $s$ with $\lim_{s \to 0} \rho_{1/s}(\Lambda^* \setminus \{0\}) = \infty$ and $\lim_{s \to \infty} \rho_{1/s}(\Lambda^* \setminus \{0\}) = 0$. Using this definition, the lemma
can be restated as follows: for any $s \ge \eta_\varepsilon(\Lambda)$, the statistical distance between the uniform distribution on
$\mathcal{P}(B)$ and the distribution obtained by sampling from $\nu_s$ and reducing the result modulo $\mathcal{P}(B)$ is at most
$\frac{1}{2}\varepsilon$. In the rest of this section, we relate $\eta_\varepsilon(\Lambda)$ to other lattice parameters.


CLAIM 7 For any $\varepsilon < \frac{1}{100}$ we have $\eta_\varepsilon(\Lambda) \ge \frac{1}{\lambda_1(\Lambda^*)}$.

PROOF: Let $s = \frac{1}{\lambda_1(\Lambda^*)}$, and let $y \in \Lambda^*$ be of norm $\lambda_1(\Lambda^*)$. Then
$$\rho_{1/s}(\Lambda^* \setminus \{0\}) \ge \rho_{1/s}(y) = e^{-\pi \|y/\lambda_1(\Lambda^*)\|^2} = e^{-\pi} > \frac{1}{100}. \qquad \Box$$

Using Banaszczyk's transference theorem, we immediately obtain the following corollary.

COROLLARY 8 For any $\varepsilon < \frac{1}{100}$ we have $\eta_\varepsilon(\Lambda) \ge \frac{1}{n}\,\lambda_n(\Lambda)$.
CLAIM 9 For any $\varepsilon \ge 2^{-n+1}$, $\eta_\varepsilon(\Lambda) \le \frac{\sqrt{n}}{\lambda_1(\Lambda^*)}$.

PROOF: Let $s = \sqrt{n}/\lambda_1(\Lambda^*)$. Our goal is to prove that $\rho_{1/s}(\Lambda^* \setminus \{0\}) \le 2^{-n+1}$. Indeed,
$$\rho_{1/s}(\Lambda^* \setminus \{0\}) = \rho(s\Lambda^* \setminus \{0\}) \le 2^{-n+1},$$
where the inequality follows from a corollary we saw in the previous lecture together with $\lambda_1(s\Lambda^*) \ge \sqrt{n}$. □

Using the easy direction of the transference theorem, we obtain the following corollary.

COROLLARY 10 For any $\varepsilon \ge 2^{-n+1}$, $\eta_\varepsilon(\Lambda) \le \sqrt{n}\,\lambda_n(\Lambda)$.

We remark that it can be shown that $\eta_\varepsilon(\Lambda) \le \log n \cdot \lambda_n(\Lambda)$ for $\varepsilon = n^{-\log n}$ (see Lemma 15).
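For the one-dimensional lattice $\Lambda = \mathbb{Z}$ (whose dual is again $\mathbb{Z}$, with $\mathcal{P}(B) = [0, 1)$), the bound of Lemma 5 can be checked numerically. The sketch below is an illustration only, with truncated sums standing in for the infinite ones:

```python
import math

def nu(x, s):
    # nu_s(x) = rho_s(x)/s in dimension 1, with rho_s(x) = exp(-pi (x/s)^2)
    return math.exp(-math.pi * (x / s) ** 2) / s

def density_mod_Z(x, s, K=25):
    # Y(x): density of nu_s reduced modulo Z (periodization, truncated at |k| <= K)
    return sum(nu(x + k, s) for k in range(-K, K + 1))

def stat_dist_to_uniform(s, grid=1000):
    # Delta(Y, U) = (1/2) * integral over [0,1) of |Y(x) - 1| dx, midpoint rule
    return sum(abs(density_mod_Z((i + 0.5) / grid, s) - 1.0)
               for i in range(grid)) / grid / 2.0

def lemma5_bound(s, K=25):
    # (1/2) rho_{1/s}(Z \ {0}): the w and -w terms pair up, leaving sum over w >= 1
    return sum(math.exp(-math.pi * (s * w) ** 2) for w in range(1, K + 1))

for s in (0.6, 0.8, 1.0, 1.2):
    assert stat_dist_to_uniform(s) <= lemma5_bound(s) + 1e-9
```

As $s$ grows, both the true distance and the bound drop rapidly, matching the picture in Figure 1.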
3 Proof of Theorem 4
Our goal is to describe an algorithm that solves SIVP$_{O(n^3)}$ on any given lattice using calls to COLLISIONFIND
(as defined in Theorem 4). The core of the algorithm is the procedure FINDVECTOR presented below.
In this procedure and elsewhere in this section, we fix $\varepsilon$ to be $n^{-\log n}$ and recall that we choose $q = 2^{2n}$ and
$m = 4n^2$. The output of FINDVECTOR is some random short lattice vector. As we shall see later, by calling
FINDVECTOR enough times, we can obtain a set of $n$ short linearly independent vectors, as required.
Roughly speaking, FINDVECTOR works as follows. It first chooses $m$ vectors $x_1, \ldots, x_m$ independently
from the Gaussian distribution $\nu_s$ where $s$ is close to the smoothing parameter of the lattice. Since the
smoothing parameter is not much bigger than $\lambda_n$, these vectors are short. Then, these vectors are reduced
modulo $\mathcal{P}(B)$ to obtain $y_1, \ldots, y_m$. By Lemma 5, each of $y_1, \ldots, y_m$ is distributed almost uniformly in
$\mathcal{P}(B)$. We now partition $\mathcal{P}(B)$ into a very fine grid containing $q^n$ cells (see Figure 2). Each cell naturally
corresponds to an element of $\mathbb{Z}_q^n$ and we define $a_i \in \mathbb{Z}_q^n$ as the element corresponding to the cell containing
$y_i$. Notice that each $a_i$ is distributed almost uniformly in $\mathbb{Z}_q^n$. We can therefore apply COLLISIONFIND to
$a_1, \ldots, a_m$ and obtain a $\{-1, 0, 1\}$-combination of them that sums to zero in $\mathbb{Z}_q^n$. We then notice that the
same combination applied to $x_1, \ldots, x_m$ is: (i) a short vector (since each $x_i$ is short and the coefficients are
at most $1$ in absolute value); (ii) extremely close to a lattice vector (which must therefore be short as well).
The procedure outputs this close-by lattice vector.

Figure 2: Partitioning a basic parallelepiped into $4^2$ parts (a point $y_i$ and the lower-left corner $z_i$ of its cell are marked)
Procedure 1 FINDVECTOR
Input: A lattice $\Lambda$ given by an LLL-reduced basis $B$, and a parameter $s$ satisfying $2\eta_\varepsilon(\Lambda) \le s \le 4\eta_\varepsilon(\Lambda)$.
Output: A (short) element of $\Lambda$, or a message FAIL.
1: For each $i \in \{1, \ldots, m\}$ do the following:
2:    Choose a random vector $x_i$ from the distribution $\nu_s$
3:    Let $y_i = x_i \bmod \mathcal{P}(B)$
4:    Consider the sub-parallelepiped containing $y_i$. Let $a_i$ be the element of $\mathbb{Z}_q^n$ corresponding to it, and
      let $z_i$ be its lower-left corner. In symbols, $a_i = \lfloor q B^{-1} y_i \rfloor$ and $z_i = B a_i / q = B \lfloor q B^{-1} y_i \rfloor / q$.
5: Run COLLISIONFIND on $(a_1, \ldots, a_m)$. If it fails, output FAIL. Otherwise, we obtain $b_1, \ldots, b_m \in \{-1, 0, 1\}$,
   not all zero, such that $\sum_{i=1}^{m} b_i a_i = 0 \pmod{q}$
6: Return $\sum_{i=1}^{m} b_i (x_i - y_i + z_i)$
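The arithmetic of steps 3 and 4 can be traced on a toy example. The sketch below uses a hypothetical $2 \times 2$ basis and $q = 4$, with exact rational arithmetic so that the floors are computed exactly; it reduces a point $x$ modulo $\mathcal{P}(B)$ and then computes the cell index $a_i = \lfloor q B^{-1} y_i \rfloor$ and the corner $z_i = B a_i / q$:

```python
from fractions import Fraction
import math

def inv2(M):
    # Inverse of a 2x2 matrix over the rationals.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Hypothetical basis: columns b1 = (2, 0), b2 = (1, 3); real runs use an
# LLL-reduced basis in dimension n with q = 2^{2n}.
B    = [[Fraction(2), Fraction(1)], [Fraction(0), Fraction(3)]]
Binv = inv2(B)
q    = 4

x = [Fraction(73, 10), Fraction(-29, 5)]   # a sample point x_i in R^2

# Step 3: y = x mod P(B), i.e. keep only the fractional part of B^{-1} x.
coords = mat_vec(Binv, x)
y = mat_vec(B, [c - math.floor(c) for c in coords])

# Step 4: cell index a in Z_q^2 and the lower-left corner z of that cell.
a = [math.floor(q * c) for c in mat_vec(Binv, y)]
z = [v / q for v in mat_vec(B, [Fraction(ai) for ai in a])]
```

Here $x - y$ lands in the lattice (its $B$-coordinates are integers) and $y - z$ has $B$-coordinates below $1/q$, which is the source of the tiny second term in Claim 12 below.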
Later in this section, we will prove that FINDVECTOR satisfies the following properties:
- When it is successful, its output is a lattice vector, and with probability exponentially close to $1$, its
  length is at most $O(n^3 \lambda_n(\Lambda))$.
- It is successful with probability at least $n^{-c_0}/2$.
- The distribution of its output is full-dimensional, in the sense that the probability that the output
  vector lies in any fixed $(n-1)$-dimensional hyperplane is at most $0.9$.
Based on these properties, we can now describe the SIVP$_{O(n^3)}$ algorithm. Given some basis of a lattice
$\Lambda$, the algorithm starts by applying the LLL algorithm to obtain an LLL-reduced basis $B$. Assume for
simplicity that we know a value $s$ as required by FINDVECTOR. We can then apply FINDVECTOR $n^{c_0+2}$
times (where $n^{-c_0}$ is the success probability of COLLISIONFIND). Among all vectors returned, we look for
$n$ linearly independent vectors. If such vectors are found, we output them; otherwise, we fail.

By the properties mentioned above, we see that among the $n^{c_0+2}$ applications of FINDVECTOR made
by our algorithm, the expected number of successful calls is at least $n^2/2$. Using standard arguments, we
obtain that with very high probability, the number of successful calls is at least, say, $n^2/4$. Moreover, we see
that with high probability all these vectors are lattice vectors of length at most $O(n^3 \lambda_n(\Lambda))$. Finally, we
claim that these vectors contain $n$ linearly independent vectors with very high probability. Indeed, as long
as the dimension of the space spanned by the current vectors is less than $n$, each new vector increases it by
one with probability at least $0.1$. Hence, with very high probability, we find $n$ linearly independent vectors.
It remains to explain how to find a parameter $s$ in the required range. Recall that the length of the
longest vector in an LLL-reduced basis gives a $2^n$ approximation to $\lambda_n$. Together with Corollaries 8 and
10, we obtain an $n^{3/2} 2^n$ approximation to $\eta_\varepsilon(\Lambda)$. We can therefore apply the algorithm described above
with $n + \frac{3}{2} \log n$ guesses of $s$ (say, successive powers of $2$). One of them is guaranteed to be in the required range.

In the rest of this section, we show that FINDVECTOR satisfies the properties mentioned above.
CLAIM 11 If FINDVECTOR does not fail, its output is a lattice vector.

PROOF: Assuming FINDVECTOR is successful, its output is the vector $\sum_{i=1}^{m} b_i (x_i - y_i + z_i)$. Each $x_i - y_i$
is a lattice vector by the definition of $y_i$. Moreover,
$$\sum_{i=1}^{m} b_i z_i = B \sum_{i=1}^{m} b_i a_i / q$$
is a lattice vector because $\sum_{i=1}^{m} b_i a_i / q$ is an integer vector (recall that $\sum_{i=1}^{m} b_i a_i = 0 \pmod{q}$). □
The following claim shows that when FINDVECTOR is successful, its output is a short vector. By
combining the bound below with Corollary 10 and our choice of $m$, we obtain a bound of $O(n^3 \lambda_n(\Lambda))$ on
the length of the output.

CLAIM 12 If $s \ge \eta_\varepsilon(\Lambda)$, the probability that FINDVECTOR outputs a vector $v$ of length $\|v\| > 2 m s \sqrt{n}$
is at most $2^{-\Omega(n)}$.

PROOF: Using the triangle inequality and the fact that $b_i \in \{-1, 0, 1\}$ we get that
$$\Big\| \sum_{i=1}^{m} b_i (x_i - y_i + z_i) \Big\| \le \sum_{i=1}^{m} |b_i| \, \| x_i - y_i + z_i \| \le \sum_{i=1}^{m} \| x_i \| + \sum_{i=1}^{m} \| z_i - y_i \|.$$
We bound the two terms separately. First, each $x_i$ is chosen independently from the distribution $\nu_s$. As we
saw in the previous lecture, the probability that $\|x_i\| > s\sqrt{n}$ is at most $2^{-\Omega(n)}$. So the contribution of the
first term is at most $m s \sqrt{n}$ except with probability $m \cdot 2^{-\Omega(n)} = 2^{-\Omega(n)}$.

We now consider the second term. By the definition of $z_i$, both $y_i$ and $z_i$ are in the same sub-parallelepiped,
so $\|z_i - y_i\| \le \frac{1}{q} \mathrm{diam}(\mathcal{P}(B))$. This quantity is extremely small: indeed, by our choice of $q$ and Corollary
8 we obtain
$$\|z_i - y_i\| \le 2^{-2n} \cdot n\, 2^n \lambda_n(\Lambda) \le 2^{-2n} \cdot n\, 2^n \cdot n\, \eta_\varepsilon(\Lambda),$$
where we used that $B$ is LLL-reduced and therefore
$$\mathrm{diam}(\mathcal{P}(B)) \le n\, 2^n \lambda_n(\Lambda).$$
Since $s \ge \eta_\varepsilon(\Lambda)$, the total contribution of the second term is far smaller than $m s \sqrt{n}$, and the claim follows. □
CLAIM 13 If $s \ge \eta_\varepsilon(\Lambda)$, algorithm FINDVECTOR succeeds with probability at least $\frac{1}{2} n^{-c_0}$.

PROOF: By definition, COLLISIONFIND succeeds on a uniformly random input with probability at least
$n^{-c_0}$. So it would suffice to show that the input we provide to COLLISIONFIND is almost uniform, i.e.,
that the statistical distance between the $m$-tuple $(a_1, \ldots, a_m)$ and the uniform distribution on $m$-tuples of
elements in $\mathbb{Z}_q^n$ is negligible.

To show this, notice that by Lemma 5, the statistical distance between the distribution of each $y_i$ and the
uniform distribution on $\mathcal{P}(B)$ is at most $\frac{1}{2} \rho_{1/s}(\Lambda^* \setminus \{0\})$. By our assumption on $s$, this quantity is at most
$\frac{1}{2}\varepsilon$, which is negligible.

Now consider the function $f : \mathcal{P}(B) \to \mathbb{Z}_q^n$ given by $f(y) = \lfloor q B^{-1} y \rfloor \in \mathbb{Z}_q^n$. Then we can write
$a_i = f(y_i)$. Moreover, it is easy to see that on input a uniform point $y$ in $\mathcal{P}(B)$, $f(y)$ is a uniform element
of $\mathbb{Z}_q^n$. These two observations, combined with the fact that statistical distance cannot increase by applying a
function, imply that the statistical distance between $a_i$ and the uniform distribution on $\mathbb{Z}_q^n$ is negligible. Since
the $a_i$ are chosen independently, the distance between the $m$-tuple $(a_1, \ldots, a_m)$ and the uniform distribution
on $(\mathbb{Z}_q^n)^m$ is at most $m$ times larger, which is still negligible. To summarize, we have the following sequence
of inequalities:
$$\Delta\big((a_1, \ldots, a_m), (U(\mathbb{Z}_q^n))^m\big)
  \le \sum_{i=1}^{m} \Delta\big(a_i, U(\mathbb{Z}_q^n)\big)
  = m \cdot \Delta\big(f(\nu_s \bmod \mathcal{P}(B)),\, f(U(\mathcal{P}(B)))\big)
  \le m \cdot \Delta\big(\nu_s \bmod \mathcal{P}(B),\, U(\mathcal{P}(B))\big)
  \le m \varepsilon.$$
Since $m \varepsilon = 4 n^2 \cdot n^{-\log n}$ is a negligible function, we are done. □
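The data-processing fact used in the middle of this proof (applying a function cannot increase statistical distance) is easy to check numerically on small finite distributions; a toy illustration with hypothetical distributions:

```python
import random

def stat_dist(P, Q):
    # Total variation distance between distributions given as outcome -> prob dicts.
    keys = set(P) | set(Q)
    return sum(abs(P.get(k, 0.0) - Q.get(k, 0.0)) for k in keys) / 2

def push_forward(P, f):
    # Distribution of f(X) when X ~ P.
    out = {}
    for k, p in P.items():
        out[f(k)] = out.get(f(k), 0.0) + p
    return out

random.seed(1)
outcomes = range(8)
w1 = [random.random() for _ in outcomes]
w2 = [random.random() for _ in outcomes]
P = {k: w / sum(w1) for k, w in zip(outcomes, w1)}
Q = {k: w / sum(w2) for k, w in zip(outcomes, w2)}
f = lambda k: k % 3   # an arbitrary deterministic map

shrunk = stat_dist(push_forward(P, f), push_forward(Q, f))
assert shrunk <= stat_dist(P, Q) + 1e-12
```

Intuitively, a function can only merge outcomes, and merging can only cancel discrepancies, never create them.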
It remains to prove that the output of FINDVECTOR is full-dimensional. (Notice that so far we haven't
even excluded the possibility that its output is constantly the zero vector!) We cannot make any assumptions
on the behavior of COLLISIONFIND, and we need to argue that even if it acts maliciously, the vectors given
by FINDVECTOR are full-dimensional. Essentially, the idea is the following. We note that COLLISIONFIND
is only given the $a_i$. From these, it can deduce the $z_i$ and also the $y_i$ to within a good approximation. But,
as we show later, it still has lots of uncertainty about the vectors $x_i$: conditioned on any fixed value for $y_i$,
the distribution of $x_i$ is full-dimensional. So no matter what COLLISIONFIND does, the distribution of the
output vector is full-dimensional.

To argue this formally, it is helpful to imagine that the vectors $x_i$ are chosen after we call COLLISIONFIND.
This is done by introducing the following virtual procedure FINDVECTOR'. We use the notation
$D_{s,y}$ to denote the distribution obtained by conditioning $\nu_s$ on the outcome $x$ satisfying $x \bmod \mathcal{P}(B) = y$.
More precisely, for any $x \in \Lambda + y$,
$$\Pr[D_{s,y} = x] = \frac{\nu_s(x)}{\nu_s(\Lambda + y)} = \frac{\rho_s(x)}{\rho_s(\Lambda + y)}.$$
We only use FINDVECTOR' in our analysis and therefore it doesn't matter that we don't have an efficient way
to sample from $D_{s,y}$. The important thing is that its output distribution is identical to that of FINDVECTOR.

We complete the analysis with the following lemma. It shows that for $s \ge \sqrt{2}\,\eta_\varepsilon(\Lambda)$ (which holds
for the $s$ used by the procedure), any $y$, and any $(n-1)$-dimensional hyperplane $H$, the probability that a
vector $x$ chosen from $D_{s,y}$ is in $H$ is at most $0.9$. This implies that the same holds for the output distribution
of FINDVECTOR' (and hence also for that of FINDVECTOR). Indeed, consider Step 5. Not all $b_i$ are zero, so
assume for simplicity that $b_1 = 1$. Then for the output of the procedure to be in some $(n-1)$-dimensional
hyperplane $H$, the vector $x_1$ must also be in some hyperplane (namely, $H + y_1 - z_1 - \sum_{i=2}^{m} b_i (x_i - y_i + z_i)$),
which happens with probability at most $0.9$.
Procedure 2 FINDVECTOR'
Input: A lattice $\Lambda$ given by an LLL-reduced basis $B$, and a parameter $s$ satisfying $2\eta_\varepsilon(\Lambda) \le s \le 4\eta_\varepsilon(\Lambda)$.
Output: A (short) element of $\Lambda$, or a message FAIL.
1: For each $i \in \{1, \ldots, m\}$ do the following:
2:    Choose $y_i$ according to the distribution $\nu_s \bmod \mathcal{P}(B)$
3:    Define $a_i = \lfloor q B^{-1} y_i \rfloor$ and $z_i = B a_i / q = B \lfloor q B^{-1} y_i \rfloor / q$.
4: Run COLLISIONFIND on $(a_1, \ldots, a_m)$. If it fails, output FAIL. Otherwise, we obtain $b_1, \ldots, b_m \in \{-1, 0, 1\}$,
   not all zero, such that $\sum_{i=1}^{m} b_i a_i = 0 \pmod{q}$
5: For each $i \in \{1, \ldots, m\}$, choose $x_i$ from the distribution $D_{s, y_i}$
6: Return $\sum_{i=1}^{m} b_i (x_i - y_i + z_i)$
LEMMA 14 For $s \ge \sqrt{2}\,\eta_\varepsilon(\Lambda)$, any $y$ and any $(n-1)$-dimensional hyperplane $H$, $\Pr_{x \sim D_{s,y}}[x \in H] < 0.9$.

PROOF: Let $u \in \mathbb{R}^n$ be a unit vector and $c \in \mathbb{R}$ be such that $H = \{x \in \mathbb{R}^n \mid \langle x, u \rangle = c\}$. Without loss of
generality, we can assume that $u = (1, 0, \ldots, 0)$ so $\langle x, u \rangle = x_1$. Clearly, it is enough to show that
$$\mathbb{E}_{x \sim D_{s,y}}\Big[e^{-\pi \left(\frac{x_1 - c}{s}\right)^2}\Big] < 0.9.$$
The left hand side can be written as
$$\sum_{x \in \Lambda + y} \frac{\rho_s(x)}{\rho_s(\Lambda + y)}\, e^{-\pi \left(\frac{x_1 - c}{s}\right)^2}
  = \frac{1}{\rho_s(\Lambda + y)} \sum_{x \in \Lambda + y} e^{-\pi \|\frac{x}{s}\|^2} e^{-\pi \left(\frac{x_1 - c}{s}\right)^2}.$$
We now analyze this expression. Using the Poisson summation formula and the fact that $s \ge \eta_\varepsilon(\Lambda)$,
$$\rho_s(\Lambda + y) = \det(\Lambda^*)\, s^n \sum_{w \in \Lambda^*} \rho_{1/s}(w)\, e^{2\pi i \langle w, y \rangle} \ge \det(\Lambda^*)\, s^n (1 - \varepsilon).$$
To analyze the sum, we define
$$g(x) := e^{-\pi \|\frac{x}{s}\|^2} e^{-\pi \left(\frac{x_1 - c}{s}\right)^2}
       = e^{-\frac{\pi}{s^2}\left(x_1^2 + (x_1 - c)^2 + x_2^2 + \cdots + x_n^2\right)}
       = e^{-\frac{\pi}{s^2}\frac{c^2}{2}}\, e^{-\frac{\pi}{s^2}\left(\left(\sqrt{2}\left(x_1 - \frac{1}{2}c\right)\right)^2 + x_2^2 + \cdots + x_n^2\right)}.$$
From this we can see that the Fourier transform of $g$ is given by
$$\hat{g}(w) = e^{-\frac{\pi}{s^2}\frac{c^2}{2}}\, e^{-2\pi i w_1 \left(\frac{1}{2}c\right)}\, \frac{s^n}{\sqrt{2}}\, e^{-\pi s^2 \left(\left(\frac{w_1}{\sqrt{2}}\right)^2 + w_2^2 + \cdots + w_n^2\right)}$$
and in particular,
$$|\hat{g}(w)| \le \frac{s^n}{\sqrt{2}}\, e^{-\pi s^2 \left(\left(\frac{w_1}{\sqrt{2}}\right)^2 + w_2^2 + \cdots + w_n^2\right)} \le \frac{s^n}{\sqrt{2}}\, \rho_{\sqrt{2}/s}(w).$$
We can now apply the Poisson summation formula and obtain
$$g(\Lambda + y) = \det(\Lambda^*) \sum_{w \in \Lambda^*} \hat{g}(w)\, e^{2\pi i \langle w, y \rangle}
  \le \det(\Lambda^*) \sum_{w \in \Lambda^*} |\hat{g}(w)|
  \le \det(\Lambda^*)\, \frac{s^n}{\sqrt{2}}\, (1 + \varepsilon),$$
where the last inequality follows from $s \ge \sqrt{2}\,\eta_\varepsilon(\Lambda)$, so that $\rho_{\sqrt{2}/s}(\Lambda^* \setminus \{0\}) \le \varepsilon$. Combining the two
bounds, we obtain
$$\mathbb{E}_{x \sim D_{s,y}}\Big[e^{-\pi \left(\frac{x_1 - c}{s}\right)^2}\Big]
  \le \frac{\det(\Lambda^*)\, s^n (1 + \varepsilon)/\sqrt{2}}{\det(\Lambda^*)\, s^n (1 - \varepsilon)}
  = \frac{1 + \varepsilon}{\sqrt{2}\,(1 - \varepsilon)} < 0.9. \qquad \Box$$
4 Possible Improvements and Some Remarks
The reduction we presented here shows how to solve SIVP$_{O(n^3)}$ using a collision finder. The best known
reduction [6] achieves a solution to SIVP$_{\tilde{O}(n)}$, where the $\tilde{O}$ hides polylogarithmic factors. This improvement
is obtained by adding three more ideas to our reduction:

1. Use the bound $\eta_\varepsilon(\Lambda) \le \log n \cdot \lambda_n(\Lambda)$ for $\varepsilon = n^{-\log n}$ proved below in Lemma 15. This improves
the approximation factor to $\tilde{O}(n^{2.5})$.

2. It can be shown that the summands of $\sum_{i=1}^{m} b_i (x_i - y_i + z_i)$ add up like random vectors, i.e., with
cancellations. Therefore, the total norm is proportional to $\sqrt{m}$ and not $m$. This means that one can
improve the bound in Claim 12 to $\tilde{O}(\sqrt{m}\, s \sqrt{n})$. Together with the previous improvement, this
gives an approximation factor of $\tilde{O}(n^{1.5})$.

3. The last idea is to use an iterative algorithm. In other words, instead of obtaining an approximate
solution to SIVP in one go, we obtain it in steps: starting with a set of long vectors, we repeatedly
make it shorter by replacing long vectors with shorter ones. This allows us to choose a smaller value
of $q$, say, $q = n^{10}$, which in turn allows us to choose $m = \tilde{O}(n)$. This smaller value of $m$ makes the
length of the resulting basis only $\tilde{O}(n)\,\lambda_n(\Lambda)$. See [6] for more details.
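Idea 2 can be illustrated by simulation. The coefficients $b_i$ produced by COLLISIONFIND are of course not random signs, so the code below only illustrates the generic $\sqrt{m}$ cancellation for random $\pm 1$ combinations of unit vectors (the actual argument in [6] is more delicate):

```python
import math
import random

random.seed(7)
n, m, trials = 16, 256, 100

total_sq = 0.0
for _ in range(trials):
    acc = [0.0] * n
    for _ in range(m):
        # Random unit vector: normalize a Gaussian sample.
        g = [random.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(v * v for v in g))
        sign = random.choice((-1.0, 1.0))
        acc = [a + sign * v / norm for a, v in zip(acc, g)]
    total_sq += sum(v * v for v in acc)

avg_sq = total_sq / trials   # concentrates near m: typical norm ~ sqrt(m), not m
```

The cross terms $\langle u_i, u_j \rangle$ average to zero, so the expected squared norm of the signed sum is exactly $m$, far below the worst-case $m^2$.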
Let us also mention two modifications to the basic reduction. First, notice that it is enough if COLLISIONFIND
returns coefficients $b_1, \ldots, b_m$ that are small, and not necessarily in $\{-1, 0, 1\}$. So finding
small solutions to random modular equations is as hard as worst-case lattice problems. Another possible
modification is to partition the basic parallelepiped into $p_1 p_2 \cdots p_n$ parts for some distinct primes $p_1, \ldots, p_n$
(instead of $q^n$ parts). This naturally gives rise to the group $\mathbb{Z}_{p_1} \times \cdots \times \mathbb{Z}_{p_n} = \mathbb{Z}_{p_1 \cdots p_n}$. Hence, we see that
finding small solutions to a random equation in $\mathbb{Z}_N$ (for an appropriate $N$) is also as hard as worst-case
lattice problems.
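The group identity $\mathbb{Z}_{p_1} \times \cdots \times \mathbb{Z}_{p_n} = \mathbb{Z}_{p_1 \cdots p_n}$ is the Chinese Remainder Theorem; a quick sanity check with hypothetical small primes:

```python
from math import prod

def crt(residues, moduli):
    # Reconstruct x mod N from (x mod p_1, ..., x mod p_n), pairwise coprime p_i.
    N = prod(moduli)
    x = 0
    for r, p in zip(residues, moduli):
        Np = N // p
        x += r * Np * pow(Np, -1, p)   # pow(a, -1, p) is the inverse of a mod p
    return x % N

primes = [3, 5, 7]                     # distinct primes: Z_3 x Z_5 x Z_7 = Z_105
N = prod(primes)

# x -> (x mod p_1, ..., x mod p_n) is a bijection, and crt inverts it:
assert len({tuple(x % p for p in primes) for x in range(N)}) == N
assert all(crt([x % p for p in primes], primes) == x for x in range(N))
```

Distinctness of the primes is what makes the moduli coprime; with repeated primes the product group is no longer cyclic and the identity fails.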
Finally, we note that the basic reduction presented in the previous section is non-adaptive in the sense
that all oracle queries can be made simultaneously. In contrast, in an adaptive reduction, oracle queries
depend on answers from previous oracle queries and therefore cannot be made simultaneously. If we apply
the iterative technique outlined above in order to gain an extra $\sqrt{n}$ in the approximation factor, then the
reduction becomes adaptive.
4.1 A Tighter Bound on the Smoothing Parameter
LEMMA 15 Let $\varepsilon = n^{-\log n}$. Then for any lattice $\Lambda$, $\eta_\varepsilon(\Lambda) \le \log n \cdot \lambda_n(\Lambda)$.

This lemma is essentially tight: consider, for instance, the lattice $\Lambda = \mathbb{Z}^n$. Then clearly $\lambda_n(\Lambda) = 1$. On
the other hand, the dual lattice is also $\mathbb{Z}^n$ and we can therefore lower bound $\rho_{1/s}(\Lambda^* \setminus \{0\})$ by (say) $e^{-\pi s^2}$.
To make this quantity at most $\varepsilon$, $s$ should be at least $\Omega(\log n)$ and hence $\eta_\varepsilon(\Lambda) = \Omega(\log n)$.
PROOF: Let $v_1, \ldots, v_n$ be a set of $n$ linearly independent vectors in $\Lambda$ of length at most $\lambda_n(\Lambda)$ (such a set
exists by the definition of $\lambda_n$). Take $s = \log n \cdot \lambda_n(\Lambda)$. Our goal is to show that $\rho_{1/s}(\Lambda^* \setminus \{0\})$ is smaller
than $\varepsilon$. The idea is to show that for each $i$, almost all the contribution to $\rho_{1/s}(\Lambda^*)$ comes from vectors in
$\Lambda^*$ that are orthogonal to $v_i$. Since this holds for all $i$, we will conclude that almost all contribution must come
from the origin. The origin's contribution is $1$, hence $\rho_{1/s}(\Lambda^*)$ is essentially $1$ and $\rho_{1/s}(\Lambda^* \setminus \{0\})$ is very
small.

For $i = 1, \ldots, n$ and $j \in \mathbb{Z}$ we define
$$S_{i,j} = \{ y \in \Lambda^* \mid \langle v_i, y \rangle = j \}.$$
If we recall the definition of the dual lattice, we see that for any $i$, the union of $S_{i,j}$ over all $j \in \mathbb{Z}$ is $\Lambda^*$.
Moreover, if $S_{i,j}$ is not empty, then it is a translation of $S_{i,0}$ and we can write
$$S_{i,j} = S_{i,0} + w + j u_i$$
where $u_i = v_i / \|v_i\|^2$ is a vector of length $1/\|v_i\| \ge 1/\lambda_n(\Lambda)$ in the direction of $v_i$ and $w$ is some vector
orthogonal to $v_i$. Using these properties, we see that if $S_{i,j}$ is not empty, then
$$\rho_{1/s}(S_{i,j}) = e^{-\pi \|j s u_i\|^2}\, \rho_{1/s}(S_{i,0} + w)
  \le e^{-\pi \|j s u_i\|^2}\, \rho_{1/s}(S_{i,0})
  \le e^{-\pi j^2 \log^2 n}\, \rho_{1/s}(S_{i,0}),$$
where the first inequality follows from a lemma in the previous lecture. Hence,
$$\rho_{1/s}(\Lambda^* \setminus S_{i,0}) = \sum_{j \neq 0} \rho_{1/s}(S_{i,j})
  \le \rho_{1/s}(S_{i,0}) \sum_{j \neq 0} e^{-\pi j^2 \log^2 n}
  \le n^{-2 \log n}\, \rho_{1/s}(S_{i,0})
  \le n^{-2 \log n}\, \rho_{1/s}(\Lambda^*).$$
Since $v_1, \ldots, v_n$ are linearly independent, a vector orthogonal to all of them must be zero, so
$$\{0\} = \bigcap_{i=1}^{n} S_{i,0}, \qquad \text{i.e.,} \qquad \Lambda^* \setminus \{0\} = \bigcup_{i=1}^{n} (\Lambda^* \setminus S_{i,0}),$$
and therefore
$$\rho_{1/s}(\Lambda^* \setminus \{0\}) \le \sum_{i=1}^{n} \rho_{1/s}(\Lambda^* \setminus S_{i,0})
  \le n \cdot n^{-2 \log n}\, \rho_{1/s}(\Lambda^*)
  = n^{-2 \log n + 1}\, \big(1 + \rho_{1/s}(\Lambda^* \setminus \{0\})\big).$$
We obtain the result by rearranging. □
References

[1] M. Ajtai. Generating hard instances of lattice problems. In Proc. of 28th STOC, pages 99-108, 1996.
Available from ECCC.

[2] M. Ajtai and C. Dwork. A public-key cryptosystem with worst-case/average-case equivalence. In
Proc. of 29th STOC, pages 284-293, 1997.

[3] J. Cai and A. Nerurkar. An improved worst-case to average-case connection for lattice problems. In
Proc. of 38th FOCS, pages 468-477, 1997.

[4] O. Goldreich, S. Goldwasser, and S. Halevi. Collision-free hashing from lattice problems. Technical
report TR96-056, Electronic Colloquium on Computational Complexity (ECCC), 1996.

[5] D. Micciancio. Improved cryptographic hash functions with worst-case/average-case connection. In
Proc. of 34th STOC, pages 609-618, 2002.

[6] D. Micciancio and O. Regev. Worst-case to average-case reductions based on Gaussian measures. In
Proc. of 45th FOCS, pages 372-381, 2004.

[7] O. Regev. New lattice-based cryptographic constructions. Journal of the ACM, 51(6):899-942, 2004.
Preliminary version in STOC'03.