LUSA: The HPC Library For Lattice-Based Cryptanalysis: Artur Mariano
Abstract—This paper introduces LUSA - the Lattice Unified Set of Algorithms library - a C++ library that comprises many high performance, parallel implementations of lattice algorithms, with a particular focus on lattice-based cryptanalysis. Currently, LUSA offers algorithms for lattice reduction and the SVP. LUSA was designed to 1) be simple to install and use, 2) have no dependencies on other libraries, 3) target lattice-based cryptanalysis specifically, covering the majority of the most relevant algorithms in this field, and 4) offer efficient, parallel and scalable implementations of those algorithms.

LUSA explores parallelism mainly at the thread level, being based on OpenMP. However, the code is also written to be efficient at the cache and operation level, taking advantage of carefully sorted data structures and data-level parallelism.

This paper shows that LUSA delivers on these promises, being simple to use while consistently outperforming its counterparts, such as NTL, plll and fplll, and offering scalable, parallel implementations of the most relevant algorithms to date, which are currently not available in other libraries.

Index Terms—Lattices, lattice-based cryptanalysis, algorithms, hpc, parallel.

I. INTRODUCTION

Quantum computing: a new era. Although the most skeptical disbelieve the idea that quantum computers will ever exist, that possibility has already started to redefine the landscape of many scientific fields. While in some of those fields scientists wait eagerly for their arrival, in others, such as cryptography, they represent a race against the clock. Back in the nineties, the news broke that several classical cryptographic schemes (such as RSA and ElGamal) were insecure against quantum computers [1], [2]. The key change that quantum computers bring to the game is that the mathematical problems used as the foundation of many cryptosystems (e.g. factoring large numbers in RSA and solving discrete logarithms in ElGamal) would no longer be hard, which would render those cryptosystems insecure [3]. After this discovery, many cryptographers sprang to the challenge of designing cryptosystems that are safe even in the presence of quantum computers, a field which eventually became known as post-quantum cryptography.

Lattice-based cryptography and cryptanalysis. Over the years, several types of so-called "post-quantum" cryptosystems have been proposed, in order to prevent the rise of quantum computers from posing a security challenge. Not long after the security of classical cryptosystems was found to be compromised in the presence of quantum computers, Ajtai discovered that certain lattice problems have interesting properties for cryptography [4], [5], [6]. This startling discovery marked the beginning of lattice-based cryptography, leading many researchers to engage in an intensive investigation of lattice-based cryptosystems.

Lattice-based cryptography not only holds the promise of being quantum immune; lattice-based schemes also enjoy very strong security proofs based on worst-case hardness¹. To date, no fast quantum algorithms to solve hard lattice problems efficiently have been found. In 2009, lattices became very attractive as a candidate for post-quantum cryptography, as Gentry used them to construct a Fully Homomorphic Encryption scheme [7] (which allows cryptosystems to perform operations on data without decrypting it), something whose feasibility scientists had wondered about for over 30 years [8], [9], [10], after Rivest et al. introduced the idea in 1978 [11]. Over time, lattice-based cryptosystems became increasingly popular and a hot topic of research, because not only do they support fully homomorphic encryption, they are also easy to implement (e.g. [12], [13], [14]) and quite efficient in practice (e.g. [15], [16], [14]). Today, lattice-based cryptography stands out as one of the most prominent and rapidly growing fields of post-quantum cryptography.

Lattices. Lattices are discrete subgroups of the n-dimensional Euclidean space R^n, with a strong periodicity property². A lattice L generated by a basis B, a set of linearly independent vectors b_1, ..., b_m in R^n, is denoted by:

    L(B) = { x ∈ R^n : x = Σ_{i=1}^{m} u_i b_i , u ∈ Z^m },   (1)

where m ≤ n is the rank of the lattice. When m = n, the lattice is said to be of full rank. When n is at least 2, each lattice has infinitely many different bases.

Lattice-based cryptography primarily uses integer lattices: even though there are non-integer lattices, solving lattice problems on integer lattices is still (very) hard, and integer lattices are easier to handle computationally because there are no (or fewer) precision problems. There are also different types of lattices, including Goldstein-Mayer lattices (commonly referred to as random lattices [19], which we use in this paper) and Ajtai lattices [4], which typically have vectors with relatively small coordinates. There are other lattices with additional structure, such as ideal lattices [20]. Although we do not use ideal lattices in this paper, they are still important in the context of lattice-based cryptography [21], [22].

¹ Put simply, this means that breaking cryptosystems based on randomly chosen, average-case lattice problem instances is at least as hard as solving certain lattice problems in the worst case.
² We refer the reader to the papers [17], [18] in order to learn more about lattices, especially in the context of lattice-based cryptography.

Artur Mariano thanks DFG for supporting this work. Artur Mariano is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Projektnummer 382285730.
It is important to say that LUSA works with ideal lattices (and every kind of lattice, for that matter), although there are currently no routines in the algorithms that exploit the structure of ideal lattices. In the context of cryptanalysis, we should keep in mind that adversaries may take advantage of this additional structure in lattices, so this is a very relevant point. In the future, LUSA may include versions of algorithms that can indeed exploit ideal lattices.

For visual purposes, we show a lattice and lattice vector operations in Figure 1. This is a lattice in R^2, where the basis B is composed of b1 and b2, i.e. B = {b1, b2}. The vector b3 is an example of an operation with lattice vectors: it is a linear combination of the basis vectors, in particular b3 = b1 - 2×b2. This particular linear combination also shows that b1 can be made shorter (in terms of Euclidean norm) at the cost of b2, given that b3 is shorter than b1. This process of making lattice vectors (bases) shorter by adding/subtracting other lattice vectors is often referred to as vector (basis) reduction, which is widely used in various lattice algorithms and is itself carried out with specific algorithms.

Fig. 1. Example of a lattice in R^2 and its basis (b1, b2) in red.

Notes, terminology and notation. Let R^n be an n-dimensional Euclidean vector space. In this paper, we write vectors and matrices in bold face (or italic if represented with Greek letters), while vectors are written in lower-case and matrices in upper-case (or lower-case if represented with Greek letters), as in vector v and matrices M and µ. Vectors (also called lattice points or simply points) in R^n represent 1 × n matrices, from both a mathematical and a computational perspective. The Euclidean norm (or length) of a given vector v in R^n, ||v||, is sqrt(Σ_{i=1}^{n} v_i^2), where v_i is the ith coordinate of v. When we mention a "short" vector, we refer to its Euclidean norm (or length). The term zero vector is used for the vector whose norm is zero, i.e., the origin of the lattice.

"Schemes" is the short version of "cryptoschemes" or "cryptosystems" (the reader may also recognize the term "code", which means the same).

Cryptology is the science that focuses on the study of cryptography and cryptanalysis. While "cryptography" can be defined as the practice of creating and understanding codes that keep information secret, "cryptanalysis" can be defined as the science that studies the procedures, processes and methods used to translate or interpret secret writings, such as codes and ciphers, for which the key is unknown. In practice, cryptanalysis enables one to analyze cryptosystems, so that one can trust them. As we show later, cryptanalysis is also fundamental, as it is the tool used to define the parameters of new cryptosystems, so that they are secure and efficient.

Security of lattice-based cryptosystems. Cryptosystems base their security on hard mathematical problems. For instance, the security of the RSA cryptosystem is based on the hardness of factoring certain large integers. In practice, this means that an attacker is able to break the scheme if he/she can efficiently factor large numbers with the same number of bits as the key in the scheme. Therefore, the security of RSA relies on the fact that factoring large integers is a very hard problem, whose complexity grows fast with the input size. In practice, key sizes are chosen such that there is no attacker who can factor a large number of that size in a reasonable amount of time. We comment on this problem in the next subsection.

Lattice-based cryptosystems also base their security on hard mathematical problems, in particular lattice problems. Depending on the exact scheme, these may be 1) lattice basis reduction, 2) the Shortest Vector Problem (SVP), 3) the Closest Vector Problem (CVP), 4) the Learning with Errors (LWE) problem, and several variants of these, to name the most relevant ones. Due to the connection between the problems and the security of the corresponding cryptosystems, the algorithms that solve these problems are sometimes referred to as attacks.

How to choose security parameters, such as the key size of the scheme? Before cryptosystems are deployed, we must carry out an intense scrutiny of possible attacks against those cryptosystems, so that one can have increased confidence in their security, and so that appropriate parameters can be chosen for practical implementations of these systems.

Selecting security parameters is not a simple, deterministic process. Intuitively, we would simply define "very high" parameters, but these would lead to slow (also said inefficient) schemes. In practice, parameter selection is a two-step, rather empirical process. First, the potential of attacks has to be determined in practice and not just in theory. This has to be done on the highest-end computer architectures, which adversaries may have access to. Having determined this potential, the second step is to define security parameters of schemes such that it is intractable to solve the underlying security problems with those parameters, based on the known potential of the available attacks and computer architectures.

Thus, the scrutiny of possible attacks on these systems must consist of comprehensive and tenacious efforts at solving these problems. Until the arrival of quantum computers, the strongest attacks one can envision in this context are efficient implementations of the best algorithms to solve the aforementioned lattice problems. In particular, it should be investigated how suited and scalable these algorithms are on parallel, high performance computer architectures. At the same time, it is important to note that the necessity for serious testing in practice also stems from the fact that lattice algorithms tend to behave differently in practice than predicted in theory, especially in large-scale experiments.

These arguments alone go to show that high performance computing is a key tool in cryptanalysis.
Indeed, it only makes sense to assess the potential of attacks on the highest-end computer architectures. In fact, this is why we have witnessed considerable efforts in the development of parallel, efficient implementations of attacks. In short, implementing the best known attacks on the best parallel multi- and many-core architectures is the only way to actually determine the potential of attacks in practice and therefore accurately select security parameters for lattice-based cryptosystems, such that they are both secure and efficient.

We note that there are no "standard" lattice dimensions which we would see in real case scenarios; instead, the goal of lattice-based cryptanalysis is, fundamentally, to determine how high one can go with the available algorithms and computer architectures. For the sake of this paper, we have limited lattice dimensions to 60 and 65, depending on the algorithm. We note that although LUSA could, in theory, perform very differently for much higher dimensions, the goal of this paper is to show how the library performs per se and how it compares to other available implementations.

Why LUSA? First off, implementing a library with attacks (and building blocks for the development of new attacks) is of paramount importance in the context of lattice-based cryptanalysis, because, as we showed, assessing the performance of attacks on high-end computer architectures is vital.

LUSA was primarily developed due to the lack of some features in the existent lattice libraries, such as NTL³, plll⁴ and fplll⁵.

First, neither NTL nor fplll is specifically coded for low-level efficiency (although they are efficient libraries to some extent, they are better known for offering several methods, with great focus on mathematical aspects), and no parallel implementations are generally offered. For instance, NTL's BKZ implementation is often used on high dimensional lattices as a common pre-processor, and it can require days of computation, depending on the lattice dimension and BKZ parameters. This has indeed been a limiting factor in the context of trying to break into higher dimensions of the SVP-Challenge⁶.

Second, NTL, fplll and plll depend upon other libraries, rendering the overall installation cumbersome and time-consuming. This also increases the compilation and execution time of the library algorithms that are used, among other problems.

Third, although a very useful library in many fields related to number theory, NTL is not specifically designed to be used in the context of lattice-based cryptanalysis. NTL is a number theory library that includes a myriad of algorithms that, although useful and relevant in many scientific fields, are usually not used by the lattice-based cryptanalysis community. In fact, the complexity of NTL served as the subject of many (interesting) offline conversations at cryptography conferences the author attended over the past 6 years. The target public of lattice libraries consists mainly of users with a strong background in mathematics, who prefer to work with libraries that are simple to install and use. Also, because NTL comprises so many classes of algorithms, some of which growing by the day, it is difficult to catch up with the headway made on lattice algorithms and their implementations. This means that NTL is not specifically designed for lattice-based cryptography and cryptanalysis, and it is often regarded as "a sledgehammer" even when we only want to kill a fly. At the same time, the same mathematicians who find the current libraries too hard to install are interested in performance.

fplll, in contrast to NTL, was mainly designed for lattice-based cryptography and cryptanalysis purposes, offering several lattice algorithms, such as LLL [23] and BKZ [24] (cf. Section II for more details), including BKZ 2.0 [25]. The central algorithm of the library is LLL, hence the name of the library; it is also very much based on floating-point orthogonalization. The floating-point LLL reduction algorithms offered by fplll [26], [27] are based on a trade-off between speed and guarantees. fplll is thus a very centered library, as it is very focused on this particular angle (which is surely very relevant for the cryptanalysis and cryptography communities). There is some effort in creating a very modular library, with specific operations that are oblivious to the user. However, as we will show in this paper, LUSA not only offers more algorithms than fplll, it also offers efficient, parallel versions of those algorithms (to our knowledge, although fplll is somewhat thread-safe, there are no built-in functions that can run in parallel - such functions exist only in the context of fplll "ecosystems", i.e. external modules/libraries).

Released in 2014, plll is another major library in this context. The library was also mainly designed for lattice-based cryptography and cryptanalysis purposes, judging by the implementations offered, but while it includes a vast array of algorithms and options for those algorithms, it does not include the most recent ones (e.g. sieving). Additionally, plll depends upon other libraries (GMP, MPFR and the Boost library) and there is only one parallel algorithm in the entire library, enumeration (with some form of pruning).

LUSA was designed to be simple - we made LUSA 100% independent from other libraries - efficient, and specifically thought out for lattice-based cryptography. Plus, all algorithms in LUSA can run with multiple threads and scale very well (except for LLL, whose execution time is not large in most cryptanalysis setups - if LLL is to be run on high lattice dimensions, then LUSA is probably not the best library to that end).

LUSA's promises. LUSA promises two things: simplicity - it is simple to install and use, depending upon no other library - and performance/parallelism, as most methods are also very efficient and parallel. The majority of the provided implementations are considerably faster than those in similar libraries, such as NTL, fplll and plll, as we show in this paper.

But why are cryptographers interested in usability and performance, for such a library? Most cryptographers do not have a strong background in computer science (as they are mainly mathematicians), so usability is very important⁷. As for performance, there are mainly two reasons for its need:

1) The first reason is parameter selection, as we mentioned before. If LUSA's algorithms are directly used to estimate security parameters of schemes, they need to be efficient (and they are). In fact, to this day, there is no standard library for this matter; this is currently done relying on many different, isolated implementations, which are typically not normalized, as they are tested and assessed on different architectures, with different compilers, and other variables.

2) The second reason is that new algorithms are oftentimes developed using existing ones as building blocks. Having a set of parallel, efficient algorithms will ensure that the new attacks are themselves efficient and parallel. This is very relevant in order to assess the actual performance of new attacks in practice, as they are developed. In fact, some algorithms are very good in theory but intractable in practice, e.g. a Voronoi-based SVP-solver proposed in [28]. Having a tool to assist experimentation right after design is crucial to shed light on the tractability of attacks.

LUSA's scientific contribution. LUSA contributes to the scientific field of lattice-based cryptanalysis in the following ways:

i) LUSA includes implementations of algorithms that are the fastest known. These implementations contain a variety of novel HPC and parallel computing techniques/strategies.
ii) As LUSA offers algorithms that can be used in a modular way, LUSA is itself a platform that enables cryptographers to create new, high performance attacks. This is especially true for attacks that include other lattice algorithms as part of their execution (e.g. some lattice-based reduction algorithms use SVP-solvers as part of their logic). As a result, cryptographers will have a much better sense of the performance of the algorithms as they develop them.
iii) Not only does LUSA allow for the development of new algorithms in terms of performance and parallelism, it can also support the development of new algorithms by containing many routines which can be used as building blocks (e.g. from the Gram-Schmidt orthogonalization to lattice reduction algorithms).
iv) LUSA presents itself as a standard library to normalize performance assessment across several attacks and lattice algorithms, a crucial problem in parameter selection. The community can simply download LUSA and test the algorithms therein on their desired CPU platform (in the future, we will extend this to GPUs).

Roadmap. The rest of this paper is organized as follows. In Section II, we provide a brief overview of the lattice-based cryptography and cryptanalysis field, its evolution in recent years and how LUSA fits in. In Section III, we present LUSA, explaining how the library is structured and what methods are available. In Section IV, we present the benchmark platform used in this paper, which was chosen to be well representative of a possible LUSA end user. In Section V, we present and comment on the performance of LUSA, including how well it compares to other libraries and how it scales with the number of cores. In Section VI, we wrap up the paper with some brief conclusions and comments. In Section VII, we provide lines of future work, providing timelines for LUSA's next versions.

II. LATTICE-BASED CRYPTANALYSIS TODAY

"Do you hack cryptosystems for a living? Not quite..."

Lattice-based cryptanalysis has evolved quite rapidly, as a way to scrutinize lattice-based schemes which themselves developed very quickly. As for lattice-based cryptography, there are a few papers, reports and notes published on the field. In particular, we emphasize 1) introductory papers from 2006 and 2009 [17], [29], 2) an extensive tutorial for beginners, from 2015 [30], and 3) a paper/survey from 2016, which also provides an introduction to lattice-based cryptography as well as the progress in the field over a decade [31]. As for lattice-based cryptanalysis, there is a comprehensive survey, from 2017, solely on the advances of the field, to which we refer the reader who wants to know more [32]. Yet, we provide a brief overview of the field, stating where and how LUSA fits in. Other resources on both lattice-based cryptography and cryptanalysis include surveys and overviews ([33], [34], [35], [36], [37], [38], [10], [39]), PhD theses (e.g. [7], [40], [41], [42], [43], [44]) and books [45], [46], [47]. There are also multiple talk slides and videos available online for free on the topic.

A. Problems in lattice-based cryptography: SVP, CVP and lattice reduction

There are many lattice problems in the context of cryptography and cryptanalysis. As we show in the following, LUSA's first release version (v1.0) addresses the SVP and lattice basis reduction.

SVP. The norm of a shortest vector⁸ of a lattice is denoted by λ1(L). The norm of the shortest vector in the lattice is also the minimal distance between any two vectors in the lattice. Finding the shortest vector in the lattice is a problem known as the Shortest Vector Problem (SVP). The SVP is one of the most studied problems in lattice-based cryptanalysis. Formally, the SVP can be defined as: given a basis B of the lattice L, find a non-zero vector p ∈ L such that ||p|| = min{||v|| : v ∈ L(B), ||v|| ≠ 0}. This is typically called "exact SVP", as there are approximate versions of the SVP (e.g. the α-SVP, an approximate version of the SVP whose solution is at most a factor α off the SVP solution). In fact, the SVP is especially relevant in the context of lattice-based cryptography because 1) it can actually be used to break cryptosystems that rely on the α-SVP, and 2) it is used in many other practical algorithms in the field; for example, BKZ 2.0, the most practical lattice basis reduction algorithm (which can also be used to solve the α-SVP), uses SVP-solvers as part of its logic. For a comprehensive review of the approximate versions of the SVP, the reader is referred to [29]. In order to understand the impact of LUSA, note that the SVP does not state anything about the basis, but the basis used has a big impact on the practical performance of SVP-solvers and other solvers. Emde Boas showed, in 1981, that the SVP with infinity norms

³ NTL: https://www.shoup.net/ntl/
⁴ plll: https://felix.fontein.de/plll/
⁵ fplll: https://github.com/fplll/fplll
⁶ https://www.latticechallenge.org/svp-challenge/
⁷ Yet, some cryptographers have presented beautiful works on algorithms coded for efficiency, let alone computer libraries.
⁸ Note that due to the natural symmetry in lattices, there is not only one shortest vector.
• An implementation of an extended exponent double precision data type, or xdouble⁹, which allows one to represent floating point numbers with the same precision as a double, but with a much larger exponent. It is provided in LUSA as a C++ class and supports a multitude of different operators (the user is referred to LUSA's manual for more details on this). It is indeed very similar to the class with the same name in NTL.
• An implementation of ZZ, a class which allows one to represent integers that do not fit into primitive data types. This is also provided as a class with a multitude of operators (the user is referred to LUSA's manual for more details on this). Our ZZ class is unique and different from the implementations in other libraries (although some operations are based on the same algorithms, as there are only a handful, if that many, per operation).
• An implementation of RR, which allows one to represent floating point numbers with arbitrary precision. Unlike primitive floating point data types, the precision of numbers represented with this class is not fixed. This class also supports a multitude of operators (the user is referred to LUSA's manual for more details on this). What we said for ZZ also applies to the RR class.

LUSA also contains the basis class (with the basis.h header), which contains all methods and algorithms provided.

B. Lattice-reduction algorithms: LLL and BKZ

LUSA implements several LLL variants, both heuristic and exact versions. These include:

• The lll routine, the core implementation of LLL in LUSA, a floating-point implementation based on Schnorr's floating point LLL version [24]. This method uses our ZZ class. In fact, there are two methods in this description: one uses native datatypes for the floating point part of LLL (lllnd) and the other uses our xdouble class (lll). The former works up until dimension 50, the latter works on any dimension (we comment on this later on).
• An exact version of LLL, called exactlll, which makes use of native data types, based on [23]. This means that the numbers in the lattice basis should fit, at most, in long long datatypes (forced 64-bit). There is also a variant of this version (exactlllmp), which works on any lattice basis, regardless of the size of its entries.

The fastest LLL variant in the library is indeed a heuristic variant of the LLL algorithm with floating point arithmetic, which was proposed by Schnorr and Euchner [24] and can be invoked with the lll method (note that we implemented this method with xdouble so that there are no precision problems up until high lattice basis dimensions). The LLL implementations in LUSA are not parallel, as LLL is not particularly suitable for parallelization [72], [73], [74], [75] (most parallel versions of LLL do not scale well in practice; to improve upon this, many variants of the original algorithm were created as a way to improve its parallelization potential, but LUSA intends to implement the original LLL floating point algorithm by Schnorr and Euchner). In most lattice-based cryptanalysis setups, LLL tends to account for a much smaller portion of time than other algorithms, say BKZ. However, as LLL may play an important role in specific setups (and in other areas of cryptanalysis), we plan on optimizing LLL in the future.

LUSA assumes that all lattice bases are LLL-reduced before any other algorithm is invoked. Therefore, as LLL is the starting point of any significant computation in LUSA, all LLL implementations use multiple precision directly, in order to handle large numbers that may exist in the raw, unprocessed input lattices. All LLL variants yield a final basis that fits into native C datatypes. This means that users should first LLL-reduce any basis they want to run LUSA's algorithms on (even if other lattice-reduction algorithms are to be used afterwards). Listing 1 is an example of how LUSA can be used in a main.cpp file and be started off (with an LLL reduction, in this case). As this paper is not intended to be a manual for LUSA, please check the manual on LUSA's webpage¹⁰. Each algorithm has a set of parameters which are used both as input and output. This can be checked in LUSA's manual.

    #include "basis.h"

    int main(int argc, char *argv[])
    {
        Basis *B = new Basis(argv[1]);

        float delta = 0.99f;

        B->lll(delta);

        return 0;
    }

Listing 1. Example of how to start LUSA, with an LLL-reduction.

After the input basis is LLL-reduced, any of the algorithms in LUSA can be called upon it. For instance, after the basis is LLL-reduced, we can move on to SVP calculations (in this case with enumeration), as shown in Listing 2.

    #include "basis.h"

    int main(int argc, char *argv[])
    {
        Basis *B = new Basis(argv[1]);

        float delta = 0.99f;
        int dim = B->getDimension();

        B->lll(delta);

        B->convertBasis();

        int beta = 20;

        B->bkz(beta, delta);

        long *ShortestVector = (long *)calloc(dim, 8);
        double ShortestNorm = 0.0;
        ShortestNorm = B->enumerate(ShortestVector);

        return 0;
    }

⁹ We have maintained the terminology of NTL in order to reduce the learning curve of current NTL users.
¹⁰ http://alfa.di.uminho.pt/~ammm/lusa.html
LUSA - THE LATTICE UNIFIED SET OF ALGORITHMS LIBRARY 7
a “standard C LLL implementation”. This has to do with managing datatypes, allocating structures that are necessary for the overall LUSA execution, among others. We made this decision because LLL is the “entry point” of any substantial computation with LUSA, and it typically reduces lattices in less than a few seconds, even for high dimensions. In short, if a user simply wants to run LLL on a lattice basis and aims for maximum performance, LUSA's LLL is not the go-to solution.

Fig. 2. Average execution times for the LLL and BKZ (block size 20) routines, using 1 thread.

As we said, our LLL implementation runs LUSA's multi-precision module. This lowers LLL's overall performance, as we store the basis in different formats and many datatype “conversions” take place. After LLL is executed, the user has to call the convertBasis method, which converts the basis to the native datatypes (instead of the multiple-precision datatypes), and can then call any other algorithm in LUSA. We opted not to call this method transparently after an LLL execution because the user should be aware of this if basis elements are read. In Listing 4, note that enumeration is called after LLL, but the method convertBasis is called in between.

#include "basis.h"

int main(int argc, char *argv[])
{
    Basis *B = new Basis(argv[1]);
    float delta = 0.99f;

    B->lll(delta);
    B->convertBasis();

    long *ShortestVector = (long *) calloc(dim, 8);  /* dim: lattice dimension */
    double ShortestNorm = 0.0;
    ShortestNorm = B->enumerate(ShortestVector);

    return 0;
}

Listing 4. Example of an enumeration call in LUSA, after an LLL, with the explicit call of the convertBasis method.

We now compare LUSA against other libraries in terms of lattice-reduction algorithms.

Given that our LLL makes use of the multi-precision module to “convert” the lattice basis into one that fits in native datatypes and initializes other library functions, our LLL is slower than NTL's - which also uses multiple precision - and fplll's implementations12 of LLL (as shown in Figure 3). fplll implements a different LLL variant.

However, we point out that 1) performance is generally not a problem in LLL, because LLL can handle high lattice dimensions in a very short time frame (in Figures 3 and 4, LLL requires more time than BKZ not only because multi-precision is required but also because we ran BKZ with a low block size), and 2) as mentioned before, our LLL implementation encapsulates several steps that prepare the execution of following algorithms. In fact, we have centered LUSA around the premise that all input lattice bases are first LLL-reduced, and we accepted a loss of performance at that point. The main reason for this is that, as mentioned, LLL is one of the fastest lattice algorithms there is, and high performance is typically not required.

Note that NTL's LLL becomes significantly slower after dimension 50, where its performance becomes comparable to LUSA's. This is because NTL uses the xdouble class after dimension 50, whereas LUSA uses it for any dimension (in practice, we could have determined whether xdouble is necessary for a given lattice basis and turned it on/off accordingly - note that LUSA also has an LLL implementation that does not use xdouble - but we have not, as LLL runs very quickly anyway).

Fig. 3. Comparison of the LLL routine for the LUSA, fplll and NTL implementations, using 1 thread.

As for BKZ, as Figure 4 shows, our implementation is considerably faster than both NTL's and fplll's. There are two main reasons for this: 1) the LLL calls within BKZ are very much optimized, in terms of memory handling, pointer arithmetic and coding, and 2) our xdouble module (i.e. including operations on xdouble datatypes) is more efficient than NTL's. In a follow-up paper, we will scrutinize LUSA's performance from a computational standpoint, and we will show analytics pertaining to these factors (e.g. cache miss rates).

12 fplll was set to run without specifying any options, thus the library uses as much precision as it desires (it may use GMP).
Fig. 4. Comparison of the BKZ (block size 20) routine for the LUSA, fplll and NTL implementations, using 1 thread.

Another important point is that LUSA's BKZ is parallel. In Figure 4, we present the results for a single thread. The scalability of BKZ is shown in Figure 5. As the figure shows, BKZ scales well, but only if the BKZ window is large enough. Essentially, we parallelized BKZ by executing the enumeration routines on each window with LUSA's parallel enumeration routine, which itself scales very well (but only beyond a certain window size). We believe this is a good choice, as small-window runs of BKZ are not very time-consuming anyway.

Fig. 5. Performance of LUSA's parallel BKZ, for 1-8 threads. Beta (window size) set to 40.

B. SVP-solvers

This subsection shows the performance of LUSA's SVP-solvers, in isolation and compared to other libraries, and their scalability.

Performance. Figure 6 shows the performance of LUSA's enumeration routine with pruning turned on and off (enumerate and enumeratePruned) and the Hashsieve routine (hashSieve), for a single thread.

Fig. 6. Average execution times for the enumeration and hash sieve routines, using 1 thread.

In LUSA, Hashsieve is better than enumeration with (extreme) pruning, which is congruent with previous results on these algorithms in contexts other than libraries.

These implementations have all been presented in [79], [71], [67]. Although slight modifications have been made to the implementations so that they fit LUSA (e.g. allowing for generic parameterization, global variables, thread safety, etc.), their core implementation is the same as presented in the papers, and LUSA's performance is in line with that of those implementations. As data structures are allocated when each method is called, LUSA incurs overhead that isolated implementations do not.

LUSA vs other libraries. We now show results of LUSA compared to other libraries and implementations. Figure 7 compares LUSA's enumeration routine (enumerate) against fplll's. fplll is faster than LUSA running with one thread, for this particular algorithm, but fplll does not provide a parallel version of the algorithm, and LUSA is much faster if the algorithm is run with multiple threads.

Fig. 7. LUSA's enumeration, for 1-8 threads, compared to fplll.

The results (i.e. the shortest vector) yielded by the methods
Fig. 11. Performance of LUSA’s HashSieve routine, for 1-8 threads, compared
to the baseline implementation (sequential).
Fig. 12. LUSA's Voronoi cell-based SVP solver and Voronoi 2.0, using 1-8 threads, compared to plll.

Fig. 13. Performance of LUSA's enumeration routines (non-pruned and pruned enumerations), for 1-8 threads.
VII. VERSIONS SCHEDULED FOR FUTURE WORK

In the next v1.* versions of LUSA, we plan to:

• Incorporate SIMD instructions in our LLL implementation, as shown in [83], which should enable our LLL implementation to be faster;
• Integrate better bounding functions to improve the algorithmic performance of BKZ 2.0 and offer this method;
• Increase the performance of enumeration methods, so that LUSA's enumeration is the fastest among all libraries.

In LUSA v2.0, we plan to incorporate CVP algorithms and newer SVP algorithms that recently became available:

• Add variants of the existent algorithms, such as those listed in plll (LUSA aims to be the most complete library available);
• Include other methods that are not solvers;
• Add CVP-solvers (both enumeration- and sieving-based);
• Add newer sieving algorithms, such as [64].

In LUSA v3.0, we plan to:

• Add solvers of other relevant lattice problems, such as LWE and SIS.

In LUSA v4.0, we plan to make LUSA capable of executing code on GPUs, and, for some algorithms, on CPUs and GPUs simultaneously. As we intend to make LUSA 100% library-independent (except for standard libraries), it is still unclear whether we will need to implement a run-time system for CPU+GPU environments, use a built-in system or implement every method on CPU+GPU environments by hand.

Additionally, we plan to deeply assess LUSA from a computational efficiency standpoint, by 1) characterizing cache behavior (to which end we will use PAPI and assess cache miss rates and other factors) and power consumption, 2) identifying opportunities for cache optimization and code vectorization, 3) assessing and improving workload balancing/idle time among threads in parallel methods and 4) implementing built-in techniques to improve performance on NUMA systems, as e.g. done with HashSieve before [84].

ACKNOWLEDGEMENTS

I would like to thank my previous student Fábio Correia for the implementation of the multiple-precision capability in the LUSA module and most enumeration-related algorithms. I thank my previous student Filipe Cabeleira for implementing the Voronoi cell-based algorithms in LUSA and assisting with some of the testing and logistics of the library. I thank Özgür Dagdelen and Robert Fitzpatrick for the work we developed together, which helped to inspire me to create this library. I particularly thank Thijs Laarhoven for the beautiful work we have done together and for being the best science partner one can have. I thank TU Darmstadt and Christian Bischof in particular for introducing me to this awesome subject, and all the people I ever worked with in the context of lattice-based cryptanalysis (some of whom encouraged me to create and develop LUSA, as they recognized my passion for performance and optimization). I thank my previous hosts, especially Gabriel Falcão at the University of Coimbra, with whom I have worked on lattice-based cryptography. I thank my current host, professor Luis Paulo Santos, for hosting me and having taught me since I started my BSc. I thank Martin Albrecht for insightful comments on early versions of this paper. Last but not least, I thank DFG, which made this project possible. Artur Mariano is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Projektnummer 382285730.

REFERENCES

[1] P. Shor, “Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer,” SIAM J. Comput., vol. 26, no. 5, pp. 1484–1509, Oct. 1997. [Online]. Available: http://dx.doi.org/10.1137/S0097539795293172
[2] ——, “Algorithms for quantum computation: Discrete logarithms and factoring,” in Proceedings of the 35th Annual Symposium on Foundations of Computer Science, ser. SFCS '94. Washington, DC, USA: IEEE Computer Society, 1994, pp. 124–134. [Online]. Available: http://dx.doi.org/10.1109/SFCS.1994.365700
[3] D. Bernstein, J. Buchmann, and E. Dahmen, Eds., Post-quantum cryptography. Springer, 2009. [Online]. Available: http://www.springerlink.com/content/978-3-540-88701-0
[4] M. Ajtai, “Generating hard instances of lattice problems (extended abstract),” in STOC. New York, NY, USA: ACM, 1996, pp. 99–108.
[5] M. Ajtai and C. Dwork, “A public-key cryptosystem with worst-case/average-case equivalence,” in Proceedings of the Twenty-ninth Annual ACM Symposium on Theory of Computing, ser. STOC '97. New York, NY, USA: ACM, 1997, pp. 284–293. [Online]. Available: http://doi.acm.org/10.1145/258533.258604
[6] M. Ajtai, “The shortest vector problem in L2 is NP-hard for randomized reductions (extended abstract),” in STOC, 1998, pp. 10–19.
[7] C. Gentry, “A fully homomorphic encryption scheme,” Ph.D. dissertation, Stanford, CA, USA, 2009, aAI3382729.
[8] A. Acar, H. Aksu, A. S. Uluagac, and M. Conti, “A survey on homomorphic encryption schemes: Theory and implementation,” ACM Comput. Surv., vol. 51, no. 4, pp. 79:1–79:35, Jul. 2018. [Online]. Available: http://doi.acm.org/10.1145/3214303
[9] C. Fontaine and F. Galand, “A survey of homomorphic encryption for nonspecialists,” EURASIP Journal on Information Security, vol. 2007, no. 1, p. 013801, Dec 2007. [Online]. Available: https://doi.org/10.1155/2007/13801
[10] P. Martins, A. Mariano, and L. Sousa, “A survey on fully homomorphic encryption: an engineering perspective,” in ACM Computing Surveys, 2017, p. To appear.
[11] R. L. Rivest, L. Adleman, and M. L. Dertouzos, “On data banks and privacy homomorphisms,” Foundations of Secure Computation, Academia Press, pp. 169–179, 1978.
[12] V. Kuchta and O. Markowitch, “Multi-authority distributed attribute-based encryption with application to searchable encryption on lattices,” in Paradigms in Cryptology - Mycrypt 2016. Malicious and Exploratory Cryptology - Second International Conference, Mycrypt 2016, Kuala Lumpur, Malaysia, December 1-2, 2016, Revised Selected Papers, 2016, pp. 409–435. [Online]. Available: https://doi.org/10.1007/978-3-319-61273-7_20
[13] L. Zhou, Z. Hu, and F. Lv, “A simple lattice-based PKE scheme,” SpringerPlus, vol. 5, no. 1, p. 1627, Sep 2016. [Online]. Available: https://doi.org/10.1186/s40064-016-3300-4
[14] E. Alkim, P. S. L. M. Barreto, N. Bindel, P. Longa, and J. E. Ricardini, “The lattice-based digital signature scheme qTESLA,” IACR Cryptology ePrint Archive, vol. 2019, p. 85, 2019. [Online]. Available: https://eprint.iacr.org/2019/085
[15] T. Güneysu, M. Krausz, T. Oder, and J. Speith, “Evaluation of lattice-based signature schemes in embedded systems,” in 2018 25th IEEE International Conference on Electronics, Circuits and Systems (ICECS), 2018, pp. 385–388. [Online]. Available: https://app.dimensions.ai/details/publication/pub.1111608955
[16] L. Ducas, E. Kiltz, T. Lepoint, V. Lyubashevsky, P. Schwabe, G. Seiler, and D. Stehlé, “CRYSTALS-Dilithium: A lattice-based digital signature scheme,” IACR Trans. Cryptogr. Hardw. Embed. Syst., vol. 2018, no. 1, pp. 238–268, 2018.
[17] O. Regev, Lattice-Based Cryptography. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006, pp. 131–141.
[18] P. Nguyen and J. Stern, “The Two Faces of Lattices in Cryptology,” in CaLC, 2001, pp. 146–180.
[19] J. van de Pol, “Lattice-based cryptography,” Master's thesis, Technische Universiteit Eindhoven, The Netherlands, 2011.
[20] V. Lyubashevsky, “Lattice-Based Identification Schemes Secure Under Active Attacks,” 2008, pp. 162–179.
[21] V. Lyubashevsky, C. Peikert, and O. Regev, On Ideal Lattices and Learning with Errors over Rings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010, pp. 1–23.
[22] D. Stehlé, R. Steinfeld, K. Tanaka, and K. Xagawa, Efficient Public Key Encryption Based on Ideal Lattices, 2009, pp. 617–635.
[23] A. Lenstra, H. Lenstra, and L. Lovász, “Factoring polynomials with rational coefficients,” Math. Ann., vol. 261, pp. 515–534, 1982.
[24] C.-P. Schnorr and M. Euchner, “Lattice basis reduction: Improved practical algorithms and solving subset sum problems,” Mathematical Programming, vol. 66, no. 2–3, pp. 181–199, 1994.
[25] Y. Chen and P. Q. Nguyen, BKZ 2.0: Better Lattice Security Estimates. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp. 1–20.
[26] P. Nguyen and D. Stehlé, “An LLL algorithm with quadratic complexity,” SIAM J. Comput., vol. 39, no. 3, pp. 874–903, 2009.
[27] I. Morel, D. Stehlé, and G. Villard, “H-LLL: using householder inside LLL,” in ISSAC. ACM, 2009, pp. 271–278.
[28] E. Agrell, T. Eriksson, A. Vardy, and K. Zeger, “Closest point search in lattices,” IEEE Transactions on Information Theory, vol. 48, no. 8, pp. 2201–2214, Aug 2002.
[29] D. Micciancio and O. Regev, Post-Quantum Cryptography. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009, ch. Lattice-based Cryptography, pp. 147–191.
[30] D. P. Chi, J. W. Choi, J. S. Kim, and T. Kim, “Lattice based cryptography for beginners,” Cryptology ePrint Archive, Report 2015/938, 2015, https://eprint.iacr.org/2015/938.
[31] C. Peikert, “A decade of lattice cryptography,” Found. Trends Theor. Comput. Sci., vol. 10, no. 4, pp. 283–424, Mar. 2016. [Online]. Available: http://dx.doi.org/10.1561/0400000074
[32] A. Mariano, T. Laarhoven, and C. Bischof, “A parallel variant of LDSieve for the SVP on lattices,” in 2017 25th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP), March 2017, pp. 23–30.
[33] D. Micciancio, Cryptographic Functions from Worst-Case Complexity Assumptions. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010, pp. 427–452.
[34] O. Regev, “The learning with errors problem (invited survey),” in 2010 IEEE 25th Annual Conference on Computational Complexity, 2010, pp. 191–204.
[35] V. Vaikuntanathan, “Computing blindfolded: New developments in fully homomorphic encryption,” in 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, Oct 2011, pp. 5–16.
[36] P. Q. Nguyen and J. Stern, “The two faces of lattices in cryptology,” in Cryptography and Lattices, J. H. Silverman, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001, pp. 146–180.
[37] D. Stehlé, The LLL Algorithm: Survey and Applications. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010, ch. Floating-Point LLL: Theoretical and Practical Aspects, pp. 179–213.
[38] C. P. Schnorr, The LLL Algorithm: Survey and Applications. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010, ch. Progress on LLL and Lattice Reduction, pp. 145–178.
[39] H. Nejatollahi, N. Dutt, S. Ray, F. Regazzoni, I. Banerjee, and R. Cammarota, “Post-quantum lattice-based cryptography implementations: A survey,” ACM Comput. Surv., vol. 51, no. 6, pp. 129:1–129:41, Jan. 2019. [Online]. Available: http://doi.acm.org/10.1145/3292548
[40] A. Mariano, “High performance algorithms for lattice-based cryptanalysis,” Ph.D. dissertation, Technische Universität Darmstadt, Darmstadt, Germany, 2016.
[41] T. Lepoint, “Design and implementation of lattice-based cryptography,” Ph.D. dissertation, 2014.
[42] R. Bendlin, “Lattice-based Cryptography: Threshold Protocols and Multiparty Computation,” Ph.D. dissertation, Department of Computer Science, Aarhus University, Aarhus, Denmark, 2013.
[43] Y. Chen, “Réduction de réseau et sécurité concrète du chiffrement complètement homomorphe,” Ph.D. dissertation, Université Paris Diderot, Paris, France, 2015.
[44] T. Laarhoven, “Search problems in cryptography: From fingerprinting to lattice sieving,” Ph.D. dissertation, Technische Universiteit Eindhoven, The Netherlands, 2016.
[45] D. Micciancio and S. Goldwasser, Complexity of Lattice Problems: A Cryptographic Perspective, 2002, vol. 671.
[46] ——, Complexity of Lattice Problems: a cryptographic perspective, ser. The Kluwer International Series in Engineering and Computer Science. Boston, Massachusetts: Kluwer Academic Publishers, Mar. 2002, vol. 671.
[47] S. D. Galbraith, Mathematics of Public Key Cryptography, 1st ed. New York, NY, USA: Cambridge University Press, 2012.
[48] P. van Emde Boas, “Another NP-complete partition problem and the complexity of computing short vectors in a lattice,” Technical Report 81-04, Mathematische Instituut, University of Amsterdam, 1981.
[49] D. Micciancio, “Efficient reductions among lattice problems,” in Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, ser. SODA '08. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 2008, pp. 84–93. [Online]. Available: http://dl.acm.org/citation.cfm?id=1347082.1347092
[50] S. Arora, L. Babai, J. Stern, and Z. Sweedyk, “The hardness of approximate optima in lattices, codes, and systems of linear equations,” J. Comput. Syst. Sci., vol. 54, no. 2, pp. 317–331, 1997. [Online]. Available: http://dx.doi.org/10.1006/jcss.1997.1472
[51] O. Goldreich, D. Micciancio, S. Safra, and J.-P. Seifert, “Approximating shortest lattice vectors is not harder than approximating closest lattice vectors,” vol. 71, no. 2, pp. 55–61, 1999.
[52] O. Goldreich, S. Goldwasser, and S. Halevi, “Public-key cryptosystems from lattice reduction problems,” in Proceedings of the 17th Annual International Cryptology Conference on Advances in Cryptology, ser. CRYPTO '97. London, UK: Springer-Verlag, 1997, pp. 112–131.
[53] O. Regev, “On lattices, learning with errors, random linear codes, and cryptography,” J. ACM, vol. 56, no. 6, pp. 34:1–34:40, 2009. Preliminary version in STOC 2005.
[54] M. Ajtai, R. Kumar, and D. Sivakumar, “A sieve algorithm for the shortest lattice vector problem,” in STOC, 2001, pp. 601–610.
[55] P. Nguyen and T. Vidick, “Sieve algorithms for the shortest vector problem are practical,” Journal of Mathematical Cryptology, vol. 2, no. 2, pp. 181–207, 2008.
[56] D. Micciancio and P. Voulgaris, “Faster exponential time algorithms for the shortest vector problem,” in SODA, 2010, pp. 1468–1480.
[57] X. Pujol and D. Stehlé, “Solving the shortest lattice vector problem in time 2^(2.465n),” IACR Cryptology ePrint Archive, vol. 2009, p. 605, 2009. [Online]. Available: http://dblp.uni-trier.de/db/journals/iacr/iacr2009.html#PujolS09
[58] X. Wang, M. Liu, C. Tian, and J. Bi, “Improved Nguyen-Vidick Heuristic Sieve Algorithm for Shortest Vector Problem,” in Proceedings of the 6th ACM Symposium on Information, Computer and Communications Security, ser. ASIACCS '11. New York, NY, USA: ACM, 2011, pp. 1–9. [Online]. Available: http://doi.acm.org/10.1145/1966913.1966915
[59] F. Zhang, Y. Pan, and G. Hu, “A three-level sieve algorithm for the shortest vector problem,” in SAC, 2013, pp. 29–47.
[60] A. Becker, N. Gama, and A. Joux, “A sieve algorithm based on overlattices,” in ANTS, 2014, pp. 49–70.
[61] T. Laarhoven, “Sieving for shortest vectors in lattices using angular locality-sensitive hashing,” in CRYPTO, 2015, pp. 3–22.
[62] A. Becker, N. Gama, and A. Joux, “Speeding-up lattice sieving without increasing the memory, using sub-quadratic nearest neighbor search,” IACR Cryptology ePrint Archive, vol. 2015, p. 522, 2015. [Online]. Available: https://eprint.iacr.org/2015/522
[63] A. Becker, L. Ducas, N. Gama, and T. Laarhoven, “New directions in nearest neighbor searching with applications to lattice sieving,” in SODA, 2016.
[64] M. R. Albrecht, L. Ducas, G. Herold, E. Kirshanova, E. W. Postlethwaite, and M. Stevens, “The general sieve kernel and new records in lattice reduction,” in Advances in Cryptology – EUROCRYPT 2019, Y. Ishai and V. Rijmen, Eds. Cham: Springer International Publishing, 2019, pp. 717–746.
[65] G. Hanrot and D. Stehlé, “Improved analysis of Kannan's shortest lattice vector algorithm,” in Advances in Cryptology - CRYPTO 2007, A. Menezes, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007, pp. 170–186.
[66] A. Ghasemmehdi and E. Agrell, “Faster Recursions in Sphere Decoding,” IEEE Transactions on Information Theory, vol. 57, no. 6, pp. 3530–3536, 2011.
[67] F. Correia, A. Mariano, A. Proença, C. Bischof, and E. Agrell, “Parallel Improved Schnorr-Euchner Enumeration SE++ on Shared and Distributed Memory Systems, With and Without Extreme Pruning,” Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications (JoWUA), vol. 7, no. 4, pp. 1–19, December 2016.
[68] N. Gama, P. Nguyen, and O. Regev, “Lattice enumeration using extreme pruning,” in EUROCRYPT, 2010, pp. 257–278.
[69] D. Micciancio and M. Walter, “Practical, predictable lattice basis reduction,” Cryptology ePrint Archive, Report 2015/1123, 2015, http://eprint.iacr.org/2015/1123.