Lattice Based Cryptography

Oded Regev

L = { Σ_{i=1}^n a_i v_i : a_i ∈ Z }.

The vectors v_1, . . . , v_n are known as a basis of the lattice.

Fig. 1. A lattice in R^2 and two of its bases
Historically, lattices have been investigated since the late 18th century by mathematicians such as Lagrange, Gauss, and later Minkowski. More recently, lattices
have become an active topic of research in computer science. They are used as
an algorithmic tool to solve a wide variety of problems (e.g., [?,?,?]); they have
each encrypted bit gets blown up to Õ(n^2) bits. So if, for instance, we choose n to be several hundreds, the public key size is on the order of several gigabytes, which clearly makes the cryptosystem impractical.
Two recent works by Ajtai [?] and by the author [?] have tried to remedy this. Both works present cryptosystems whose public key scales like Õ(n^2) (or even Õ(n) if one can set up a pre-agreed random string of length Õ(n^2)) and each encrypted bit gets blown up to Õ(n) bits. Combined with a very simple encryption process (involving only modular additions), this makes these two cryptosystems good competitors for certain applications.
However, the security of these two cryptosystems is not as strong as that of other lattice-based cryptosystems. The security of Ajtai's cryptosystem [?] is based on a problem by Dirichlet, which is not directly related to any standard lattice problem. Moreover, unlike the systems previously mentioned, his system comes with no worst-case hardness guarantee. However, his system, as well as many details in its proof of security, have the flavor of a lattice-based cryptosystem, and it might be that one day its security will be established based on the worst-case hardness of lattice problems.

The second cryptosystem [?] is based on the worst-case quantum hardness of SVP. What this means is that breaking the cryptosystem implies an efficient quantum algorithm for approximating SVP. This security guarantee is incomparable to the one by Ajtai and Dwork: On one hand, it is stronger, as it is based on the general SVP and not the special case of unique-SVP. On the other hand, it is weaker, as it only implies a quantum algorithm for a lattice problem. Since no quantum algorithm is known to outperform classical algorithms for lattice problems, it is not unreasonable to conjecture that lattice problems are hard even quantumly. Moreover, it is possible that a more clever proof of security could establish the same worst-case hardness under a classical assumption. Finally, let us emphasize that the cryptosystem itself is entirely classical, and is in fact somewhat similar to the one of [?] described above.
5 An Outline of a Construction
In this section, we outline a construction of a lattice-based family of collision-resistant hash functions. We will follow a simplified description of the construction in [?], without worrying too much about the exact security guarantee achieved.^1
At the heart of the proof is the realization that by adding a sufficient amount of Gaussian noise to a lattice, one arrives at a distribution that is extremely close to uniform. An example of this effect is shown in Figure 2. This technique first appeared in [?], and is based on the work of Banaszczyk [?]. Let us denote by η = η(L) the least amount of Gaussian noise required in order to obtain a distribution whose statistical distance to uniform is negligible (where by amount we mean the standard deviation in each coordinate). This lattice parameter was analyzed in [?], where it was shown that it is relatively short, in the sense that finding nonzero lattice vectors of length at most poly(n)·η is a hard lattice problem, as it automatically implies a solution to other, more standard, lattice problems, such as an approximation to SVP to within polynomial factors.^2
Fig. 2. A lattice with different amounts of Gaussian noise
^1 A more careful analysis of the construction described below shows that its security can be based on the worst-case hardness of Õ(n^1.5)-approximate SIVP, which implies a security based on Õ(n^2.5)-approximate SVP using standard reductions. In order to obtain the best known factor of Õ(n), one needs to use an iterative procedure.
^2 To be precise, we need slightly more than just finding vectors of length at most poly(n)·η; we need to be able to find n linearly independent vectors of this length. As it turns out, by repeatedly calling the procedure described below, one can obtain such vectors.
Before going on, we need to explain what exactly we mean by adding Gaussian noise to a lattice. One way to rigorously define this is to consider the uniform distribution on all lattice points inside some large cube and then add Gaussian noise to this distribution. While this approach works just fine, it leads to some unnecessary technical complications due to the need to deal with the edges of the cube. Instead, we choose to take a mathematically cleaner approach (although it might be confusing at first): we work with the quotient R^n/L. More explicitly, we define a function h : R^n → [0, 1)^n as follows. Given any x ∈ R^n, write it as a linear combination of the lattice basis vectors x = Σ_{i=1}^n α_i v_i, and define h(x) = (α_1, . . . , α_n) mod 1. So, for instance, all points in L are mapped to (0, . . . , 0). Then the statement about the Gaussian noise above can be formally stated as follows: if we sample a point x from a Gaussian distribution in R^n centered around 0 of standard deviation η in each coordinate, then the statistical distance between the distribution of h(x) and the uniform distribution on [0, 1)^n is negligible.
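As an illustration, the map h and the smoothing effect can be sketched numerically. This is a toy example of our own: the basis, noise level, and sample count are arbitrary choices, not taken from the text.

```python
import numpy as np

def h(B, x):
    """Fractional coordinates of x with respect to the basis B (columns v_1..v_n)."""
    coords = np.linalg.solve(B, x)   # write x = sum_i alpha_i v_i
    return coords % 1.0              # reduce each alpha_i modulo 1

B = np.array([[2.0, 1.0],
              [0.0, 3.0]])           # a toy basis of a lattice in R^2

# Every lattice point is mapped to (0, ..., 0):
lattice_point = B @ np.array([5.0, -2.0])
print(h(B, lattice_point))           # ~ [0, 0]

# With enough Gaussian noise, h(x) looks close to uniform on [0,1)^n:
rng = np.random.default_rng(0)
samples = np.array([h(B, rng.normal(scale=10.0, size=2)) for _ in range(2000)])
print(samples.mean(axis=0))          # ~ [0.5, 0.5]
```

The standard deviation 10.0 here is far above the smoothing threshold of this toy lattice, so the empirical mean of each coordinate is close to 1/2, as expected for the uniform distribution.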
We now turn to the construction. Our family of hash functions is the modular subset-sum function over Z_q^n, as defined next. Fix q = 2^{2n} and m = 4n^2. For each a_1, . . . , a_m ∈ Z_q^n, the family contains the function f_{a_1,...,a_m} : {0, 1}^m → {0, 1}^{n log q} given by

f_{a_1,...,a_m}(b_1, . . . , b_m) = Σ_{i=1}^m b_i a_i mod q.
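The hash family can be sketched with toy parameters. These are our own choices, much smaller than the q = 2^{2n}, m = 4n^2 fixed above, purely for readability.

```python
import numpy as np

n = 4
q = 2**8                             # n * log2(q) = 32 output bits
m = 64                               # m > n * log2(q), so collisions must exist
rng = np.random.default_rng(1)
A = rng.integers(0, q, size=(m, n))  # the key: a_1, ..., a_m in Z_q^n

def f(bits):
    """f_{a_1,...,a_m}(b_1, ..., b_m) = sum_i b_i * a_i  (mod q)."""
    return (bits @ A) % q

b1 = rng.integers(0, 2, size=m)      # inputs in {0,1}^m
b2 = rng.integers(0, 2, size=m)
print(f(b1))                         # a vector in Z_q^n

# Linearity: a collision f(b1) = f(b2) would yield d = b1 - b2 in
# {-1,0,1}^m, not all zero, with sum_i d_i * a_i = 0 (mod q) -- exactly
# the combination the proof below asks CollisionFind to produce.
d = b1 - b2
assert np.array_equal((d @ A) % q, (f(b1) - f(b2)) % q)
```

The last assertion is the pigeonhole observation in code form: collision finding for f is equivalent to finding a {−1, 0, 1} combination of the a_i that vanishes modulo q.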
Notice that with our choice of parameters, m > n log q, so collisions are guaranteed to exist. Clearly, these functions are easy to compute. Our goal is therefore to show that they are collision resistant. We establish this by proving that if there exists a polynomial-time algorithm CollisionFind that, given a_1, . . . , a_m chosen uniformly from Z_q^n, finds with some non-negligible probability b_1, . . . , b_m ∈ {−1, 0, 1}, not all zero, such that

Σ_{i=1}^m b_i a_i = (0, . . . , 0) (mod q),

then there is a polynomial-time algorithm that finds vectors of length at most poly(n)·η in any given lattice L (which, as mentioned before, implies a solution to approximate SVP).
Our first observation is that from CollisionFind we can easily construct another algorithm, call it CollisionFind′, that, given a_1, . . . , a_m chosen uniformly from [0, 1)^n, finds with some non-negligible probability b_1, . . . , b_m ∈ {−1, 0, 1}, not all zero, such that

Σ_{i=1}^m b_i a_i ∈ [−m/q, m/q]^n (mod 1).

In other words, it finds a {−1, 0, 1} combination of a_1, . . . , a_m that is extremely close to (0, . . . , 0) modulo 1. To see this, observe that CollisionFind′ can simply round each coordinate of a_1, . . . , a_m to the nearest multiple of 1/q, multiply by q to obtain vectors in Z_q^n, and apply CollisionFind to them; since the rounding changes each coordinate by at most 1/(2q) and at most m of the b_i are nonzero, a combination that sums to (0, . . . , 0) modulo q for the rounded vectors sums to within m/q of (0, . . . , 0) modulo 1 for the original ones.
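This rounding argument can be sketched numerically. In the toy example below (our own construction), we plant a valid CollisionFind output on the rounded vectors rather than search for one, since only the error analysis is being illustrated.

```python
import numpy as np

n, q, m = 4, 2**10, 16
rng = np.random.default_rng(3)

a = rng.random((m, n))                   # a_1, ..., a_m in [0,1)^n
a_round = np.round(q * a) % q            # round each coordinate to a multiple
                                         # of 1/q, scaled into Z_q^n

b = rng.choice([-1, 0, 1], size=m)
b[m - 1] = 1                             # make sure b is not all zero
# Plant the collision: choose the last rounded vector so that the
# combination vanishes mod q, keeping a_m consistent with its rounding.
a_round[m - 1] = (-(b[:-1] @ a_round[:-1])) % q
a[m - 1] = a_round[m - 1] / q

assert np.all((b @ a_round) % q == 0)    # a valid CollisionFind output

# Each coordinate moved by at most 1/(2q), and at most m of the b_i are
# nonzero, so the unrounded combination is within m/q of 0 modulo 1:
residue = (b @ a) % 1.0
dist = np.minimum(residue, 1.0 - residue)
print(dist.max())                        # at most m/q = 1/64
```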
We now show that using CollisionFind′ we can find vectors of length at most poly(n)·η in any given lattice L. So let L be some lattice given by its basis v_1, . . . , v_n. Our first step is to apply the LLL algorithm to v_1, . . . , v_n. This makes sure that v_1, . . . , v_n are not unreasonably long: namely, none of these vectors is longer than 2^n.
We now arrive at the main part of the procedure. We first choose m vectors x_1, . . . , x_m independently from the Gaussian distribution in R^n centered around 0 of standard deviation η in each coordinate. (To be precise, we don't know η, but we can obtain a good enough estimate by trying a polynomial number of values.) Next, we compute a_i = h(x_i) for i = 1, . . . , m. By the discussion above, we know that each a_i is distributed essentially uniformly on [0, 1)^n. We can therefore apply CollisionFind′ to a_1, . . . , a_m and obtain with non-negligible probability b_1, . . . , b_m ∈ {−1, 0, 1} such that

Σ_{i=1}^m b_i a_i ∈ [−m/q, m/q]^n (mod 1).
Now consider the vector y = Σ_{i=1}^m b_i x_i. On one hand, this is a short vector, as it is the sum of at most m vectors of length roughly √n·η each. On the other hand, by the linearity of h, we have that h(y) ∈ [−m/q, m/q]^n (mod 1). What this means is that y is extremely close to a lattice vector. Indeed, write y = Σ_{i=1}^n α_i v_i for some reals α_1, . . . , α_n. Then we have that each α_i is within m/q of an integer.
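The rounding step that follows can be illustrated numerically, with a toy basis and near-integer coordinates of our own choosing:

```python
import numpy as np

B = np.array([[2.0, 1.0],
              [0.0, 3.0]])                 # columns are v_1, v_2
alphas = np.array([3.0001, -1.9999])       # coordinates of y, each within
                                           # eps = 1e-4 of an integer
y = B @ alphas
y_prime = B @ np.round(alphas)             # round each alpha_i -> lattice vector
print(np.linalg.norm(y - y_prime))         # at most sum_i eps * ||v_i||
```

In general the distance is bounded by Σ_i ε‖v_i‖ when each coordinate is within ε of an integer, which is exactly the estimate used in the text with ε = m/q.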
Consider now the lattice vector y′ = Σ_{i=1}^n ⌊α_i⌉ v_i obtained by rounding each α_i to the nearest integer. Then the distance between y and y′ is

‖y − y′‖ ≤ Σ_{i=1}^n (m/q) ‖v_i‖ ≤ (mn/q) 2^n,

which is vanishingly small since q = 2^{2n}. In particular, we found a lattice vector y′ that is extremely close to y, and hence of length at most poly(n)·η. It remains to argue that y′ is nonzero (with some non-negligible probability). The proof of this requires some effort,
so we just give the main idea. Recall that we define y′ as a (rounding of a) {−1, 0, 1} combination of x_1, . . . , x_m obtained by calling CollisionFind′ with a_1, . . . , a_m. The difficulty in proving that y′ is nonzero stems from the fact that we have no control over which combination b_1, . . . , b_m CollisionFind′ outputs. Nevertheless, one can show that we obtain a nonzero lattice vector y′ of length at most Õ(n^2.5)·η with some non-negligible probability. Obviously, by repeating this a polynomial number of times, we can obtain such a vector with very high probability. The essence of the proof, and what makes possible the connection between the average-case collision finding problem and the worst-case lattice problem, is the realization that all lattices look the same after adding a small amount of noise: they turn into a uniform distribution.
6 Open Questions
Cryptanalysis: Attacks on lattice-based cryptosystems, such as the one by Nguyen and Stern [?], seem to be limited to low dimensions (a few tens). Due to the greatly improved efficiency of the new cryptosystems in [?,?], using much higher dimensions has now become possible. It would be very interesting to see attempts to attack these new cryptographic constructions.

Improved cryptosystems: As we have seen in Section 4, the situation with lattice-based cryptosystems is not entirely satisfactory: The original construction of Ajtai and Dwork, as well as some of the follow-up work, are based on the hardness of unique-SVP and are moreover quite inefficient. Two recent attempts [?,?] give much more efficient constructions, but with less-than-optimal security guarantees. Other constructions, such as the one by NTRU [?], are extremely efficient but come with no proof of security. A very interesting open question is to obtain efficient lattice-based cryptosystems based on the worst-case hardness of unique-SVP (or, preferably, SVP). Another interesting direction is whether specific families of lattices, such as cyclic lattices, can be used to obtain more efficient constructions.

Comparison with number-theoretic cryptography: Can one factor integers or compute discrete logarithms using an oracle that solves, say,