
Lattice-based Cryptography

Oded Regev

Tel Aviv University, Israel


Abstract. We describe some of the recent progress on lattice-based cryptography, starting from the seminal work of Ajtai, and ending with some recent constructions of very efficient cryptographic schemes.
1 Introduction
In this survey, we describe some of the recent progress on lattice-based cryptography. What is a lattice? It is a set of points in n-dimensional space with a periodic structure, such as the one illustrated in Figure 1. More formally, given $n$ linearly independent vectors $v_1, \ldots, v_n \in \mathbb{R}^n$, the lattice generated by them is the set of vectors

$$L(v_1, \ldots, v_n) := \left\{ \sum_{i=1}^{n} \alpha_i v_i \;\middle|\; \alpha_i \in \mathbb{Z} \right\}.$$

The vectors $v_1, \ldots, v_n$ are known as a basis of the lattice.
Fig. 1. A lattice in $\mathbb{R}^2$ and two of its bases
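To make the definition concrete, here is a minimal Python sketch (the basis vectors and the coefficient range are arbitrary demo choices, not anything prescribed by the survey) that enumerates a finite window of the lattice generated by two vectors:

```python
import itertools

# Two linearly independent vectors in R^2 (an arbitrary demo basis).
v1 = (2.0, 0.0)
v2 = (1.0, 3.0)

def lattice_window(v1, v2, coeff_range=range(-2, 3)):
    """Enumerate the points alpha1*v1 + alpha2*v2 for integer coefficients in
    coeff_range: a small finite window into the (infinite) lattice L(v1, v2)."""
    return [
        (a1 * v1[0] + a2 * v2[0], a1 * v1[1] + a2 * v2[1])
        for a1, a2 in itertools.product(coeff_range, repeat=2)
    ]

for point in lattice_window(v1, v2):
    print(point)
```

Note that different bases can generate the same lattice: replacing v2 by v1 + v2 above leaves the set of enumerated points unchanged (up to the window's edges), which is the phenomenon depicted in Figure 1.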
Historically, lattices were investigated since the late 18th century by mathematicians such as Lagrange, Gauss, and later Minkowski. More recently, lattices have become an active topic of research in computer science. They are used as an algorithmic tool to solve a wide variety of problems (e.g., [?,?,?]); they have found many applications in cryptanalysis (e.g., [?,?]); and they have some unique properties from a computational complexity point of view (e.g., [?,?,?]). In this survey we will focus on their positive applications in cryptography, i.e., on the construction of cryptographic primitives whose security relies on the hardness of certain lattice problems.

★ Supported by an Alon Fellowship, by the Binational Science Foundation, by the Israel Science Foundation, and by the EU Integrated Project QAP.
Our starting point is Ajtai's seminal result from 1996 [?]. His surprising discovery was that lattices, which up to that point were used only as tools in cryptanalysis, can actually be used to construct cryptographic primitives. His work sparked great interest in understanding the complexity of lattice problems and their relation to cryptography.
Ajtai's discovery was surprising for another reason: the security of his cryptographic primitive is based on the worst-case hardness of lattice problems. What this means is that if one succeeds in breaking the primitive, even with some small probability, then one can also solve any instance of a certain lattice problem. This remarkable property is what makes lattice-based cryptographic constructions so attractive. In contrast, virtually all other cryptographic constructions are based on some average-case assumption. For example, in cryptographic constructions based on factoring, the assumption is that it is hard to factor numbers chosen from a certain distribution. But how should we choose this distribution? Obviously, we should not use numbers with small factors (such as even numbers), but perhaps there are other numbers that we should avoid? In cryptographic constructions based on worst-case hardness, such questions do not even arise.
There are several other reasons for our interest in lattice-based cryptography. One is that the computations involved are very simple and often require only modular addition. This can be advantageous in certain practical scenarios when encryption is performed by a low-cost device. Another reason is that we currently do not have too many alternatives to traditional number-theoretic cryptography such as RSA. Such alternatives will be needed in case an efficient algorithm for factoring integers is ever found. In fact, efficient quantum algorithms for factoring integers and computing discrete logarithms already exist [?]. Although large-scale quantum computers are not expected to exist for at least a decade, this fact should already be regarded as a warning. There are currently no known quantum algorithms for lattice problems.
Our choice of topics for this survey is clearly biased by the author's personal taste and familiarity. One notable topic that we will not discuss here is the family of cryptographic constructions following the design of Goldreich, Goldwasser, and Halevi [?]. In particular, this includes the highly efficient constructions developed by the company NTRU [?,?].

For other surveys on the topic, see, e.g., [?,?] and also the lecture notes [?,?]. Another useful resource is the book by Micciancio and Goldwasser [?], which also contains a wealth of information on the computational complexity aspects of lattice problems.
The rest of this survey is organized as follows. In Section 2 we define the shortest vector problem and state some known results. In Section 3 we describe the known constructions of hash functions, starting from Ajtai's work [?]. Then, in Section 4 we describe the known constructions of public key cryptosystems. The only technical part of this survey is Section 5, where we outline the construction of a lattice-based collision resistant hash function together with its security proof. We end with some open questions in Section 6.
2 Lattice Problems
The main computational problem associated with lattices is the shortest vector problem (SVP). In SVP, given a lattice basis, we are supposed to output a shortest nonzero vector in the lattice. In fact, we will be mostly interested in the approximation variant of this problem, where our goal is to output a nonzero lattice vector whose norm is greater than that of the shortest nonzero lattice vector by at most some approximation factor $\gamma$. There are other interesting lattice problems (such as SIVP), and roughly speaking, the goal in most of them is to find short vectors under some appropriate definition of "short". We will encounter one such problem in Section 5. The behavior of these problems is often very similar to that of SVP, so for simplicity we do not discuss them in detail here (see [?] for more details).
Part of the difficulty of SVP comes from the fact that a lattice has many different bases and that typically, the given lattice basis contains very long vectors, much longer than the shortest nonzero vector. In fact, the well-known polynomial time algorithm of Lenstra, Lenstra, and Lovász (LLL) [?] from 1982 achieves an approximation factor of $2^{O(n)}$, where $n$ is the dimension of the lattice. As bad as this might seem, this algorithm is surprisingly useful, with applications ranging from factoring polynomials over the rational numbers and integer programming to many applications in cryptanalysis (such as attacks on knapsack-based cryptographic systems and on special cases of RSA). In 1987, Schnorr presented an improved algorithm obtaining an approximation factor that is slightly subexponential, namely $2^{O(n (\log\log n)^2 / \log n)}$. This was recently improved to $2^{O(n \log\log n / \log n)}$ [?]. We should also mention that if one insists on an exact solution to SVP (or even just an approximation to within $\mathrm{poly}(n)$ factors), the best algorithm has a running time of $2^{O(n)}$ [?].
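To illustrate what solving exact SVP in exponential time means, here is a toy brute-force search (a sketch only: the coefficient bound and the example basis are arbitrary demo choices, and real exact-SVP algorithms are far more sophisticated enumeration or sieving procedures):

```python
import itertools
import math

def svp_bruteforce(basis, bound):
    """Search all nonzero integer combinations of the basis vectors with
    coefficients in [-bound, bound] and return the shortest one found.
    The search space has size (2*bound+1)^n, exponential in the dimension n."""
    n, dim = len(basis), len(basis[0])
    best, best_norm = None, math.inf
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=n):
        if not any(coeffs):
            continue  # skip the zero combination
        v = [sum(c * basis[i][j] for i, c in enumerate(coeffs)) for j in range(dim)]
        norm = math.hypot(*v)
        if norm < best_norm:
            best, best_norm = v, norm
    return best, best_norm

# A basis of the integer lattice Z^2 made of two fairly long, skewed vectors.
basis = [[1, 2], [3, 5]]  # determinant -1, so L(v1, v2) = Z^2
print(svp_bruteforce(basis, bound=5))  # finds a length-1 vector such as (0, 1)
```

The example also shows the "bad basis" phenomenon: both given basis vectors are longer than the shortest lattice vector, which is only reached by a combination with larger coefficients.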
Given the above results, one might expect SVP to be NP-hard to approximate to within very large factors. However, the best known result only shows that approximating SVP to within factors $2^{(\log n)^{1/2-\varepsilon}}$ is NP-hard (under randomized quasi-polynomial time reductions) [?]. Moreover, SVP is not believed to be NP-hard to approximate to within factors above $\sqrt{n/\log n}$ [?,?,?], since for such approximation factors it lies in classes such as $\mathsf{NP} \cap \mathsf{coNP}$.
On the practical side, it is difficult to say what is the dimension $n$ beyond which solving SVP becomes infeasible with today's computing power. A reasonable guess would be that taking $n$ to be several hundreds makes the problem extremely difficult.
To conclude, the problem of approximating SVP to within polynomial factors $n^c$ for $c \geq 1/2$ seems to be very difficult (the best algorithm runs in exponential time); however, it is not believed to be NP-hard.
3 Hash Functions
As mentioned above, the first lattice-based cryptographic construction with worst-case security guarantees was presented in the seminal work of Ajtai [?]. More precisely, Ajtai presented a family of one-way functions whose security is based on the worst-case hardness of $n^c$-approximate SVP for some constant $c > 0$. In other words, he showed that being able to invert a function chosen from this family with non-negligible probability implies the ability to solve any instance of $n^c$-approximate SVP. Shortly after, Goldreich et al. [?] improved on Ajtai's result by constructing a stronger cryptographic primitive known as a family of collision resistant hash functions. Much of the subsequent work concentrated on decreasing the constant $c$ (thereby improving the security assumption) [?,?,?]. In the most recent work, the constant is essentially $c = 1$ [?]. The hash function in all these constructions is essentially the modular subset-sum function. We will see an example of such a construction in Section 5 below.
We remark that all these constructions are based on the worst-case hardness of a problem not believed to be NP-hard. Although it seems unlikely, it is not entirely impossible that further improvements in these constructions would lead us to approximation factors of the form $n^c$ for $c$ strictly below $1/2$. That would mean that we managed to base the security on the worst-case hardness of a problem that might be NP-hard.
The constructions described above are not too efficient. For instance, $\tilde{O}(n^2)$ bits are necessary in order to specify a function in the family, where $n$ is the dimension of the lattice underlying the security and the $\tilde{O}$ hides poly-logarithmic factors (in other words, the key size is $\tilde{O}(n^2)$ bits). So if, for instance, we choose $n$ to be several hundreds, we might need roughly a megabyte just to specify the hash function. Recently, an improved construction was presented by Micciancio [?]. He gives a family of one-way functions where only $\tilde{O}(n)$ bits are needed to specify a function in the family. Its security is based on the worst-case hardness of lattice problems on a restricted set of lattices known as cyclic lattices. Since no better algorithms are known for this family, it is reasonable to assume that solving lattice problems on these lattices is as hard as the general case. Finally, in more recent work [?,?], the hash function of [?] was modified, preserving the efficiency and achieving the stronger security property of collision resistance.
4 Public-key Cryptography
Following Ajtai's discovery of lattice-based hash functions, Ajtai and Dwork [?] constructed a public-key cryptosystem whose security is based on the worst-case hardness of a lattice problem. Several improvements were given in subsequent works [?,?]. Unlike the case of hash functions, the security of these cryptosystems is based on the worst-case hardness of a special case of SVP known as unique-SVP. Here, we are given a lattice whose shortest nonzero vector is shorter by some factor than all other nonparallel lattice vectors, and our goal is to find a shortest nonzero lattice vector. The hardness of this problem is not understood as well as that of SVP, and it is a very interesting open question whether one can base public-key cryptosystems on the (worst-case) hardness of SVP.
As is often the case in lattice-based cryptography, the cryptosystems themselves have a remarkably simple description (most of the work is in establishing their security). For example, let us describe the cryptosystem from [?]. Let $N$ be some large integer. The private key is simply an integer $h$ chosen randomly in the range $[\sqrt{N}, 2\sqrt{N})$. The public key consists of $m = O(\log N)$ numbers $a_1, \ldots, a_m$ in $\{0, 1, \ldots, N-1\}$ that are close to integer multiples of $N/h$ (notice that $h$ doesn't necessarily divide $N$). We also include in the public key an index $i_0 \in [m]$ such that $a_{i_0}$ is close to an odd multiple of $N/h$. We encrypt one bit at a time. An encryption of the bit 0 is the sum of a random subset of $a_1, \ldots, a_m$, reduced modulo $N$. An encryption of the bit 1 is done in the same way, except we add $\lfloor a_{i_0}/2 \rfloor$ to the result before reducing modulo $N$. On receiving an encrypted word $w$, we consider its remainder on division by $N/h$. If this remainder is close to 0 or to $N/h$ (i.e., if $w$ is close to a multiple of $N/h$), we decrypt 0, and otherwise we decrypt 1. To establish the correctness of the decryption procedure, notice that since $a_1, \ldots, a_m$ are all close to integer multiples of $N/h$, any sum of a subset of them is also close to a multiple of $N/h$, and hence encryptions of 0 are decrypted correctly. Similarly, since $\lfloor a_{i_0}/2 \rfloor$ is far from a multiple of $N/h$, encryptions of 1 are also far from multiples of $N/h$ and hence we again decrypt correctly. The proof of security is more difficult and we omit it here (but see Section 5 for a related proof).
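The following Python sketch implements the scheme exactly as just described, with toy parameters. It is for intuition only: the noise bound $d/100$, the subset selection, and $m = 8$ are arbitrary demo choices (the real scheme couples these parameters carefully, and its security analysis is the hard part).

```python
import math
import random

def keygen(N, m):
    s = math.isqrt(N)
    h = random.randrange(s, 2 * s)   # private key: h in [sqrt(N), 2*sqrt(N))
    d = N / h                        # hidden period N/h (not an integer in general)

    def near_multiple(k):
        # an element of {0,...,N-1} close to k*(N/h); d/100 is an arbitrary noise bound
        return int(k * d + random.uniform(-d / 100, d / 100)) % N

    a = [near_multiple(random.randrange(1, h)) for _ in range(m)]
    i0 = random.randrange(m)
    a[i0] = near_multiple(2 * random.randrange(1, h // 2) + 1)  # odd multiple of N/h
    return h, (a, i0)

def encrypt(N, pub, bit):
    a, i0 = pub
    subset_sum = sum(x for x in a if random.random() < 0.5)  # random subset of the a_i
    return (subset_sum + bit * (a[i0] // 2)) % N             # add floor(a_i0/2) for a 1

def decrypt(N, h, w):
    d = N / h
    r = w % d                                  # remainder on division by N/h
    return 0 if min(r, d - r) < d / 4 else 1   # close to a multiple of N/h => bit 0

N = 2**40
h, pub = keygen(N, m=8)
for bit in (0, 1, 1, 0):
    assert decrypt(N, h, encrypt(N, pub, bit)) == bit
```

With these toy settings the accumulated noise of a subset sum is at most about $8d/100$, comfortably below the $d/4$ decryption threshold, while an encryption of 1 sits near $d/2$ away from the nearest multiple; in the real system the parameters are chosen so that this holds with negligible error probability and the $a_i$ reveal essentially nothing about $h$.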
The aforementioned lattice-based cryptosystems are unfortunately quite inefficient. It turns out that when we base the security on lattices of dimension $n$, the size of the public key is $\tilde{O}(n^4)$ and each encrypted bit gets blown up to $\tilde{O}(n^2)$ bits. So if, for instance, we choose $n$ to be several hundreds, the public key size is on the order of several gigabytes, which clearly makes the cryptosystem impractical.
Two recent works by Ajtai [?] and by the author [?] have tried to remedy this. Both works present cryptosystems whose public key scales like $\tilde{O}(n^2)$ (or even $\tilde{O}(n)$ if one can set up a pre-agreed random string of length $\tilde{O}(n^2)$), and each encrypted bit gets blown up to $\tilde{O}(n)$ bits. Combined with a very simple encryption process (involving only modular additions), this makes these two cryptosystems a good competitor for certain applications.
However, the security of these two cryptosystems is not as strong as that of other lattice-based cryptosystems. The security of Ajtai's cryptosystem [?] is based on a problem by Dirichlet, which is not directly related to any standard lattice problem. Moreover, his system has no worst-case hardness guarantee, unlike the ones previously mentioned. However, his system, as well as many details in its proof of security, has the flavor of a lattice-based cryptosystem, and it might be that one day its security will be established based on the worst-case hardness of lattice problems.
The second cryptosystem [?] is based on the worst-case quantum hardness of SVP. What this means is that breaking the cryptosystem implies an efficient quantum algorithm for approximating SVP. This security guarantee is incomparable to the one by Ajtai and Dwork: on one hand, it is stronger, as it is based on the general SVP and not the special case of unique-SVP. On the other hand, it is weaker, as it only implies a quantum algorithm for a lattice problem. Since no quantum algorithm is known to outperform classical algorithms for lattice problems, it is not unreasonable to conjecture that lattice problems are hard even quantumly. Moreover, it is possible that a more clever proof of security could establish the same worst-case hardness under a classical assumption. Finally, let us emphasize that the cryptosystem itself is entirely classical, and is in fact somewhat similar to the one of [?] described above.
5 An Outline of a Construction
In this section, we outline a construction of a lattice-based family of collision resistant hash functions. We will follow a simplified description of the construction in [?], without worrying too much about the exact security guarantee achieved.¹

At the heart of the proof is the realization that by adding a sufficient amount of Gaussian noise to a lattice, one arrives at a distribution that is extremely close to uniform. An example of this effect is shown in Figure 2. This technique first appeared in [?], and is based on the work of Banaszczyk [?]. Let us denote by $\eta = \eta(L)$ the least amount of Gaussian noise required in order to obtain a distribution whose statistical distance to uniform is negligible (where by "amount" we mean the standard deviation in each coordinate). This lattice parameter was analyzed in [?], where it was shown that it is relatively short, in the sense that finding nonzero lattice vectors of length at most $\mathrm{poly}(n) \cdot \eta$ is a hard lattice problem, as it automatically implies a solution to other, more standard, lattice problems such as an approximation to SVP to within polynomial factors.²

Fig. 2. A lattice with different amounts of Gaussian noise

¹ A more careful analysis of the construction described below shows that its security can be based on the worst-case hardness of $\tilde{O}(n^{1.5})$-approximate SIVP, which implies a security based on $\tilde{O}(n^{2.5})$-approximate SVP using standard reductions. In order to obtain the best known factor of $\tilde{O}(n)$, one needs to use an iterative procedure.

² To be precise, we need slightly more than just finding vectors of length at most $\mathrm{poly}(n) \cdot \eta$; we need to be able to find $n$ linearly independent vectors of this length. As it turns out, by repeatedly calling the procedure described below, one can obtain such vectors.
Before going on, we need to explain what exactly we mean by adding Gaussian noise to a lattice. One way to rigorously define this is to consider the uniform distribution on all lattice points inside some large cube and then add Gaussian noise to this distribution. While this approach works just fine, it leads to some unnecessary technical complications due to the need to deal with the edges of the cube. Instead, we choose to take a mathematically cleaner approach (although it might be confusing at first): we work with the quotient $\mathbb{R}^n / L$. More explicitly, we define a function $h : \mathbb{R}^n \to [0,1)^n$ as follows. Given any $x \in \mathbb{R}^n$, write it as a linear combination of the lattice basis vectors, $x = \sum_{i=1}^{n} \alpha_i v_i$, and define $h(x) = (\alpha_1, \ldots, \alpha_n) \bmod 1$. So, for instance, all points in $L$ are mapped to $(0, \ldots, 0)$. Then the statement about the Gaussian noise above can be formally stated as follows: if we sample a point $x$ from a Gaussian distribution in $\mathbb{R}^n$ centered around 0 of standard deviation $\eta$ in each coordinate, then the statistical distance between the distribution of $h(x)$ and the uniform distribution on $[0,1)^n$ is negligible.
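This smoothing effect is easy to observe numerically. The sketch below uses an arbitrary demo basis and a crude histogram-based distance estimate (which bottoms out at sampling noise once the distribution is essentially uniform); it maps Gaussian samples through $h$ and measures how far the result is from uniform on $[0,1)^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rows are the basis vectors of a planar lattice (an arbitrary demo choice).
B = np.array([[2.0, 0.0],
              [1.0, 3.0]])
B_inv = np.linalg.inv(B)

def h(x):
    """Coordinates of x with respect to the basis, reduced mod 1
    (the map R^n -> [0,1)^n from the text)."""
    return (x @ B_inv) % 1.0

def distance_from_uniform(sigma, samples=200_000, bins=20):
    """Rough statistical distance between h(Gaussian noise) and the uniform
    distribution on [0,1)^2, estimated on a bins x bins grid."""
    x = rng.normal(0.0, sigma, size=(samples, 2))
    counts, _, _ = np.histogram2d(*h(x).T, bins=bins, range=[[0, 1], [0, 1]])
    return 0.5 * np.abs(counts / samples - 1.0 / bins**2).sum()

for sigma in [0.2, 0.5, 1.0, 2.0, 4.0]:
    print(f"sigma = {sigma:3.1f}  ->  distance ~ {distance_from_uniform(sigma):.3f}")
```

As sigma grows past the scale of the basis vectors, the measured distance drops quickly, which is exactly the behavior the parameter $\eta(L)$ captures.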
We now turn to the construction. Our family of hash functions is the modular subset-sum function over $\mathbb{Z}_q^n$, as defined next. Fix $q = 2^{2n}$ and $m = 4n^2$. For each $a_1, \ldots, a_m \in \mathbb{Z}_q^n$, the family contains the function $f_{a_1,\ldots,a_m} : \{0,1\}^m \to \{0,1\}^{n \log q}$ given by

$$f_{a_1,\ldots,a_m}(b_1, \ldots, b_m) = \sum_{i=1}^{m} b_i a_i \bmod q.$$
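In code, the family is only a few lines. The sketch below uses small toy parameters so the arithmetic fits in machine integers (the survey's choice $q = 2^{2n}$ would require big-integer arithmetic):

```python
import numpy as np

def sample_hash_key(n, q, m, rng):
    """A random function from the family: m uniform vectors a_1,...,a_m in Z_q^n."""
    return rng.integers(0, q, size=(m, n), dtype=np.int64)

def subset_sum_hash(key, bits, q):
    """f_{a_1,...,a_m}(b_1,...,b_m) = sum_i b_i * a_i (mod q), an element of Z_q^n."""
    return (np.asarray(bits, dtype=np.int64) @ key) % q

rng = np.random.default_rng(1)
n, q = 8, 257                # toy parameters; the text fixes q = 2^(2n)
m = 4 * n * n                # m = 4n^2, so m > n log q and collisions must exist
key = sample_hash_key(n, q, m, rng)
message = rng.integers(0, 2, size=m)   # an m-bit input
print(subset_sum_hash(key, message, q))
```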
Notice that with our choice of parameters, $m > n \log q$, so collisions are guaranteed to exist. Clearly, these functions are easy to compute. Our goal is therefore to show that they are collision resistant. We establish this by proving that if there exists a polynomial-time algorithm CollisionFind that, given $a_1, \ldots, a_m$ chosen uniformly from $\mathbb{Z}_q^n$, finds with some non-negligible probability $b_1, \ldots, b_m \in \{-1, 0, 1\}$, not all zero, such that $\sum_{i=1}^{m} b_i a_i = (0, \ldots, 0) \pmod q$, then there is a polynomial-time algorithm that finds vectors of length at most $\mathrm{poly}(n) \cdot \eta$ in any given lattice $L$ (which, as mentioned before, implies a solution to approximate SVP).
Our first observation is that from CollisionFind we can easily construct another algorithm, call it CollisionFind′, that performs the following task: given elements $a_1, \ldots, a_m$ chosen uniformly from $[0,1)^n$, it finds with some non-negligible probability $b_1, \ldots, b_m \in \{-1, 0, 1\}$, not all zero, such that $\sum_{i=1}^{m} b_i a_i \in [-\frac{m}{q}, \frac{m}{q}]^n \pmod 1$. In other words, it finds a $\{-1, 0, 1\}$ combination of $a_1, \ldots, a_m$ that is extremely close to $(0, \ldots, 0)$ modulo 1. To see this, observe that CollisionFind′ can simply apply CollisionFind to $\lfloor q a_1 \rfloor, \ldots, \lfloor q a_m \rfloor$.
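This reduction is just a rounding step. In the sketch below, `collision_find` stands for the hypothetical $\mathbb{Z}_q^n$ collision finder assumed above (we assume only that it returns coefficients in $\{-1,0,1\}$, not all zero, whose combination of the rounded vectors is $0 \bmod q$; the modulus here must be small enough for machine integers):

```python
import numpy as np

def collision_find_prime(a_real, q, collision_find):
    """Given a_1,...,a_m uniform in [0,1)^n (the rows of a_real), call the
    Z_q^n collision finder on the rounded vectors floor(q * a_i). Rounding
    moves each entry by less than 1/q, so the returned {-1,0,1} combination
    of the *original* a_i lies within m/q of 0 modulo 1 in every coordinate."""
    a_int = np.floor(q * a_real).astype(np.int64)   # vectors in Z_q^n
    b = collision_find(a_int)                       # hypothetical oracle call
    residue = (b @ a_real) % 1.0
    assert np.all(np.minimum(residue, 1.0 - residue) <= len(b) / q)
    return b
```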
Our goal now is to show that using CollisionFind′ we can find vectors of length at most $\mathrm{poly}(n) \cdot \eta$ in any given lattice $L$. So let $L$ be some lattice given by its basis $v_1, \ldots, v_n$. Our first step is to apply the LLL algorithm to $v_1, \ldots, v_n$. This makes sure that $v_1, \ldots, v_n$ are not unreasonably long: namely, none of these vectors is longer than $2^n \cdot \eta$.
We now arrive at the main part of the procedure. We first choose $m$ vectors $x_1, \ldots, x_m$ independently from the Gaussian distribution in $\mathbb{R}^n$ centered around 0 of standard deviation $\eta$ in each coordinate. (To be precise, we don't know $\eta$, but we can obtain a good enough estimate by trying a polynomial number of values.) Next, we compute $a_i = h(x_i)$ for $i = 1, \ldots, m$. By the discussion above, we know that each $a_i$ is distributed essentially uniformly on $[0,1)^n$. We can therefore apply CollisionFind′ to $a_1, \ldots, a_m$ and obtain with non-negligible probability $b_1, \ldots, b_m \in \{-1, 0, 1\}$ such that $\sum_{i=1}^{m} b_i a_i \in [-\frac{m}{q}, \frac{m}{q}]^n \pmod 1$.
Now consider the vector $y = \sum_{i=1}^{m} b_i x_i$. On one hand, this is a short vector, as it is the sum of at most $m$ vectors of length roughly $\sqrt{n} \cdot \eta$ each. On the other hand, by the linearity of $h$, we have that $h(y) \in [-\frac{m}{q}, \frac{m}{q}]^n \pmod 1$. What this means is that $y$ is extremely close to a lattice vector. Indeed, write $y = \sum \alpha_i v_i$ for some reals $\alpha_1, \ldots, \alpha_n$. Then we have that each $\alpha_i$ is within $\frac{m}{q}$ of an integer. Consider now the lattice vector $y' = \sum \lceil \alpha_i \rfloor v_i$ obtained by rounding each $\alpha_i$ to the nearest integer. Then the distance between $y$ and $y'$ is

$$\|y - y'\| \le \sum_{i=1}^{n} \frac{m}{q} \|v_i\| \le \frac{mn}{q} \cdot 2^n \cdot \eta,$$

and in particular we have found a lattice vector $y'$ of length at most $\mathrm{poly}(n) \cdot \eta$. The procedure can now output $y'$, which is a short lattice vector.
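The final rounding step is worth seeing in code. This is a minimal sketch of "round in basis coordinates" (Babai-style rounding); it recovers the nearby lattice vector only because the proof guarantees that $y$ is extremely close to the lattice:

```python
import numpy as np

def round_to_lattice(y, B):
    """Write y = sum_i alpha_i v_i (the rows of B are the basis vectors v_i),
    round each coefficient alpha_i to the nearest integer, and map back to get
    the lattice vector y' = sum_i round(alpha_i) * v_i."""
    alpha = y @ np.linalg.inv(B)   # basis coordinates of y
    return np.rint(alpha) @ B      # nearest lattice vector in these coordinates

# Demo with an arbitrary basis: y is near the lattice point v1 + v2 = (3, 3).
B = np.array([[2.0, 0.0], [1.0, 3.0]])
y = np.array([3.02, 2.97])
print(round_to_lattice(y, B))      # -> [3. 3.]
```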


So are we done? Well, not completely: we still have to show that $y'$ is nonzero (with some non-negligible probability). The proof of this requires some effort, so we just give the main idea. Recall that we defined $y'$ as a (rounding of a) $\{-1, 0, 1\}$ combination of $x_1, \ldots, x_m$ obtained by calling CollisionFind′ with $a_1, \ldots, a_m$. The difficulty in proving that $y' \neq 0$ is that we have no control over CollisionFind′, and in particular it might act in some malicious way, trying to set the $b_1, \ldots, b_m$ so that $y'$ ends up being the zero vector. To solve this issue, one can prove that the $a_i$ do not contain enough information about the $x_i$. In other words, conditioned on any fixed values of the $a_i$, the $x_i$ still have enough uncertainty in them to guarantee that no matter what CollisionFind′ outputs, $y'$ is nonzero with very high probability.


To conclude, we have seen that by a single call to the collision finder, one can find in any given lattice a nonzero vector of length at most $(m\sqrt{n} + 1) \cdot \eta = O(n^{2.5}) \cdot \eta$ with some non-negligible probability. Obviously, by repeating this a polynomial number of times, we can obtain such a vector with very high probability. The essence of the proof, and what makes possible the connection between the average-case collision finding problem and the worst-case lattice problem, is the realization that all lattices look the same after adding a small amount of noise: they turn into a uniform distribution.
6 Open Questions
Cryptanalysis: Attacks on lattice-based cryptosystems, such as the one by Nguyen and Stern [?], seem to be limited to low dimensions (a few tens). Due to the greatly improved efficiency of the new cryptosystems in [?,?], using much higher dimensions has now become possible. It would be very interesting to see attempts to attack these new cryptographic constructions.
Improved cryptosystems: As we have seen in Section 4, the situation with lattice-based cryptosystems is not entirely satisfactory: the original construction of Ajtai and Dwork, as well as some of the follow-up work, are based on the hardness of unique-SVP and are moreover quite inefficient. Two recent attempts [?,?] give much more efficient constructions, but with less-than-optimal security guarantees. Other constructions, such as the one by NTRU [?], are extremely efficient but have no provable security. A very interesting open question is to obtain efficient lattice-based cryptosystems based on the worst-case hardness of unique-SVP (or preferably SVP). Another interesting direction is whether specific families of lattices, such as cyclic lattices, can be used to obtain more efficient constructions.
Comparison with number-theoretic cryptography: Can one factor integers or compute discrete logarithms using an oracle that solves, say, $\sqrt{n}$-approximate SVP? Such a result would prove that lattice-based cryptosystems are superior to traditional number-theoretic ones (see [?,?] for related work).
Reverse reductions: Is the security of lattice-based cryptographic constructions equivalent to the hardness of lattice problems? More concretely, assuming we have an oracle that solves (say) $\sqrt{n}$-approximate SVP, can we break lattice-based cryptography? A result along these lines is known for the Ajtai-Dwork cryptosystem [?], but it is still open whether the same can be shown for newer cryptosystems such as the ones in [?,?].
Signature schemes: Lattices have been successfully used in constructing hash functions and public key cryptosystems. Can one also construct signature schemes with worst-case hardness guarantees and similar efficiency? See [?] for some related work.
Security against chosen-ciphertext attacks: The Ajtai-Dwork cryptosystem, as well as all subsequent work, is not secure against chosen-ciphertext attacks. Indeed, it is not too difficult to see that one can extract the private key given access to the decryption oracle. In practice, there are known methods to deal with this issue. It would be interesting to find an (efficient) solution with a rigorous proof of security in the standard model (for related work, see, e.g., [?]).
Applications in learning theory: The cryptosystems of [?,?] were recently used by Klivans and Sherstov to obtain cryptographic hardness results for problems in learning theory [?]. It would be interesting to extend this line of research.
Acknowledgements. I am grateful to Ishay Haviv, Julia Kempe, Daniele Micciancio, and Phong Nguyen for many helpful comments.
