
A Gentle Tutorial for Lattice-Based Cryptanalysis

Joseph Surin Shaanan Cohney


University of Melbourne

Abstract. The applicability of lattice reduction to a wide variety of cryptographic situations makes
it an important part of the cryptanalyst’s toolbox. Despite this, the construction of lattices and use
of lattice reduction algorithms for cryptanalysis continue to be somewhat difficult to understand for
beginners. This tutorial aims to be a gentle but detailed introduction to lattice-based cryptanalysis
targeted towards the novice cryptanalyst with little to no background in lattices. We explain some
popular attacks through a conceptual model that simplifies the various components of a lattice attack.
Contents

1 Introduction
  1.1 Accompanying Code
2 Acknowledgements
3 Background
  3.1 Notation
  3.2 Lattices
    3.2.1 Basic Definitions
    3.2.2 Properties, Invariants and Characterisations
  3.3 Lattice Problems
  3.4 Lattice Reduction
    3.4.1 The LLL Algorithm
  3.5 Solving CVP
    3.5.1 Babai's Nearest Plane Algorithm
    3.5.2 Kannan's Embedding Method
  3.6 An Application of Lattice Reduction
4 Lattice-based Problems
  4.1 Finding Small Roots
    4.1.1 Coppersmith's Method: An Overview
    4.1.2 Coppersmith's Method: Extensions and Generalisations
  4.2 Knapsack Problem
    4.2.1 Low-density Subset Sum Problems
    4.2.2 Low-density Subset Sum Problems: Extensions and Generalisations
  4.3 Hidden Number Problem
    4.3.1 Hidden Number Problem: An Overview
    4.3.2 Extended Hidden Number Problem
5 Lattice Attacks
  5.1 RSA Stereotyped Message
    5.1.1 Simple Case
    5.1.2 Many Unknown Parts
  5.2 Partial Key Exposure Attacks on RSA
    5.2.1 Boneh-Durfee Attack
    5.2.2 Partial Key Exposure Attack
  5.3 ECDSA with Bad Nonces
    5.3.1 ECDSA with k = z ⊕ d
    5.3.2 ECDSA with Biased Nonces
    5.3.3 ECDSA Key Disclosure Problem
  5.4 Bleichenbacher's PKCS#1 v1.5 Padding Oracle Attack
    5.4.1 PKCS#1 v1.5
    5.4.2 The Attack (Lattice Version)


1 Introduction

Since its invention in 1982, the LLL algorithm [LLL82], and more generally, lattice reduction, has enriched
the field of cryptanalysis with its wide applicability to attacking all kinds of public key cryptographic
constructions. Among many others, lattice reduction has been used to attack the Merkle-Hellman cryp-
tosystem [Adl83], truncated LCGs [Fri+88], RSA with small public exponent [Cop97], RSA with small
private exponent [BV96], and (EC)DSA with bad nonces [HS01].

Although the literature is rich with the applications of lattice reduction to cryptanalysis, the algorithms
presented and mathematical background required may often make it difficult to approach. In this tutorial,
we will present several lattice-based attacks and the necessary background required to understand and
develop such attacks. The intent of this tutorial is to be a light introduction to lattice-based cryptanalysis
for the novice cryptanalyst with little to no background in lattices. A high level understanding of
how lattices can be used for cryptanalysis, intuitions for when and why certain attacks will work, and
motivations with practical examples may set a good foundation for one’s further, more rigorous studies
on the topic.

The structure of this tutorial follows the anatomy of a lattice-based attack, which we consider as consisting of 4 parts connected by 3 main steps.

    Cryptographic Construction/Setting
      → (Transformation)
    Lattice-based problem
      → (Lattice construction)
    Lattice problem
      → (Lattice reduction)
    Solution

Figure 1: Anatomy of a lattice-based attack.

As in Figure 1, we begin with a cryptographic construction in a particular setting (e.g. ECDSA with
biased nonces) and transform it into some lattice-based problem (e.g. Hidden number problem). By
constructing a lattice, the lattice-based problem can be reduced to a lattice problem (e.g. the approximate
shortest vector problem). Finally, this lattice problem may be efficiently solved with lattice reduction
algorithms to obtain the solution.

Figure 2 shows these steps in context with some concrete examples. In general, a particular lattice-based
problem can be the foundation of an attack for many different situations. These lattice-based problems
are in turn solved by solving lattice problems, which is where lattice reduction plays a key role.

This ideology will guide the structure of this tutorial. We will start from the foundations by studying
the popular lattice reduction algorithm, LLL, and how to use lattice reduction to solve lattice problems.
Then, we will look at some lattice-based problems and how to formulate lattices to solve those. Finally,
we will study attacks on cryptographic constructions and see how each setting is transformed into a
lattice-based problem.

    RSA with known bits of p, (EC)DSA with biased nonces, Truncated LCG
      → Finding small roots / Hidden number problem
      → SVPγ or CVPγ
      → Secret info

Figure 2: Examples of lattice-based attacks.

Table 1 summarises the lattice-based problems and attacks that we will study in this tutorial.

Lattice-based problem            Attack                           Description
Finding small roots              RSA stereotyped message          Low exponent RSA with a large amount
                                                                  of known plaintext
                                 Boneh-Durfee attack              RSA with small private exponent
                                                                  d < N^0.292
                                 Partial key exposure attack      RSA with small private exponent d and
                                                                  known bits of d
Knapsack problem                 ECDSA with k = z ⊕ d             ECDSA given many signatures calculated
                                                                  with nonce as the message hash XOR the
                                                                  private key
Hidden number problem            ECDSA with biased nonces         ECDSA given many signatures calculated
                                                                  with biased nonces
                                 Bleichenbacher's PKCS#1          PKCS#1 v1.5 given a large number of
                                 v1.5 padding oracle attack       requests to a decryption padding oracle
Extended hidden number problem   ECDSA key disclosure problem     ECDSA given many signatures calculated
                                                                  with partially known nonces and known
                                                                  bits of the private key

Table 1: Attacks studied in this tutorial.

1.1 Accompanying Code

We provide implementations of the various techniques described herein at
https://github.com/josephsurin/lattice-based-cryptanalysis, appropriate for following along with the
tutorial.

2 Acknowledgements

We would like to thank Gabrielle De Micheli and Joseph Bonneau for their comments on the paper and
for reviewing a prior draft. We are also indebted to Gabrielle’s work with Nadia Heninger [DH20] that

surveys lattice-based key-recovery techniques under partial information.
3 Background

3.1 Notation

Throughout this tutorial, we use bold uppercase letters such as B to denote matrices and bold lowercase
letters such as b to denote vectors. We denote the length of a vector v with respect to the Euclidean
norm by ∥v∥. When describing a lattice or a basis as a matrix, we write the basis vectors as rows of the
matrix.

3.2 Lattices

3.2.1 Basic Definitions

Definition 3.1 (Lattice). An n-dimensional lattice L is a discrete, (additive) subgroup of Rn .

A lattice can be described by a basis B which contains linearly independent basis vectors {b1 , . . . , bm }
with bi ∈ Rn . The number of vectors in the basis, m, is called the rank of the lattice. In this tutorial,
we focus mostly on full-rank lattices, which have the same rank and dimension, i.e. m = n. The lattice
itself can be thought of as the set of all integer linear combinations of these basis vectors:
    L = L(B) = { ∑_{i=1}^{m} ai bi | ai ∈ Z }

Figure 3 depicts a two dimensional lattice. Note that every non-trivial lattice consists of an infinite
number of elements (called lattice points), which are finitely generated by linear combinations of the
basis vectors. With this visualisation, we see that a lattice basis is not unique. Figure 3 shows two
different bases that generate the same lattice.

b′2
b2
b′1
b1

Figure 3: A 2-dimensional lattice and two different bases for it.

In fact, every non-trivial lattice has an infinite number of bases. However, although a lattice can be
represented by many different bases, there are some particularly “good” bases that are ideal for solving
certain computational problems. The following section defines some properties of lattices which will help
us to establish an idea of what constitutes a “good” basis, and eventually, how to find one.

3.2.2 Properties, Invariants and Characterisations

Two bases are both bases for the same lattice if they both generate exactly every point in the lattice.
But with bases coming in many different shapes and sizes, how can we efficiently determine whether or
not two particular bases are bases for the same lattice? To answer this, we will define some properties
of lattice bases, as well as some lattice invariants: quantities that are intrinsic to the lattice and do not
change depending on the basis chosen.
Definition 3.2 (Fundamental Parallelepiped). Let B = {b1 , . . . , bm } be a basis. The fundamental
parallelepiped of B is defined as
    P(B) = { ∑_{i=1}^{m} ai bi | ai ∈ [0, 1) }

Figure 4 shows a lattice and a basis, with the basis’ fundamental parallelepiped shaded in. Note that
the region is half-open and does not include the three non-zero lattice points near the perimeter.

b2

b1

Figure 4: A 2-dimensional lattice with basis {b1 , b2 } and its associated fundamental parallelepiped.

In fact, one way to characterise lattice bases geometrically is by observing whether or not its fundamental
parallelepiped contains a non-zero lattice point. A basis B is a basis for the lattice L if and only if
P(B) ∩ L = {0}. The vectors {b1 , b2 } in figure 5a do not form a basis for the lattice as the fundamental
parallelepiped contains a non-zero lattice point. Though we omit a proof here, the two dimensional case
provides some intuition as to why this is true: if the fundamental parallelepiped contains a lattice point,
then it is impossible to express that point as an integer linear combination of the basis vectors. In the
other direction, if the fundamental parallelepiped contains only the zero vector, then we can imagine it
as tiling all of Rn with each parallelepiped containing exactly one lattice point as in figure 5b.

Clearly, a different choice of basis will produce a different fundamental parallelepiped, even if both
bases represent the same lattice. However, one invariant quantity is the volume of each fundamental
parallelepiped, which is the lattice’s determinant.
Definition 3.3 (Determinant). Let B be a basis for the full-rank lattice L = L(B). The determinant of
L, denoted by det(L), is the n-dimensional volume of P(B). We have

det(L) = vol(P(B)) = | det(B)|

It isn’t immediately obvious why the determinant is a lattice invariant. Before we see why it is, we take
a detour and give an algebraic characterisation of lattice bases.

Suppose we have two bases B = {b1 , . . . , bn } and B′ = {b′1 , . . . , b′n } for a full-rank lattice L. Each
vector b′1 , . . . , b′n is in the lattice, so they can be written uniquely as integer linear combinations of the

(a) {b1 , b2 } do not form a basis, as P({b1 , b2 }) contains the non-zero lattice point c. (b) {b1 , b2 }
form a basis, as P({b1 , b2 }) contains only the zero point, and tiles R^2.

Figure 5: Geometric characterisation of lattice bases.

basis vectors b1 , . . . , bn :

    b′1 = a11 b1 + · · · + a1n bn
     ⋮
    b′n = an1 b1 + · · · + ann bn
We can write this as the matrix equation B′ = AB where A is the coefficient matrix of this system of
equations. In the same way, the bi can also be written uniquely as integer linear combinations of the b′i .
Specifically, we have B = A−1 B′ . Noting that A−1 must be a matrix of integers, we have
det(AA−1 ) = det(A) det(A−1 ) = det(I) = 1
with det(A) and det(A−1 ) being integers. Therefore, we must have | det(A)| = 1. Conversely, suppose
that for some integer matrix A with | det(A)| = 1 we have B = AB′ . In this case, clearly each vector in
B can be written as an integer linear combination of the vectors in B′ , so L(B) ⊆ L(B′ ). We also have
B′ = A−1 B, so by the same argument, we have L(B′ ) ⊆ L(B). Therefore, L(B) = L(B′ ). This leads us
to the following theorem which will play a key role in the following sections.
Theorem 3.4. Let B and B′ be bases. Then, L(B) = L(B′ ) if and only if B = UB′ for some integer
matrix U with | det(U)| = 1.

With this theorem, it is easy to see why the determinant of a lattice is invariant: suppose B and B′
are bases of the lattice L. Then B = UB′ , so det(B) = det(UB′ ) = det(U) det(B′ ) = det(B′ ). The
determinant will be a useful property for analysing algorithms and bounds later on.
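As a tiny numeric illustration (our own example, not from the paper): take the basis B = {(−2, 2), (−2, 1)} used later in Example 3.12 and any unimodular matrix U; the basis UB generates the same lattice and has the same determinant up to sign.

```python
# 2x2 integer matrices represented as lists of rows.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

B = [[-2, 2], [-2, 1]]
U = [[0, 1], [1, -1]]   # det(U) = -1, so U is unimodular

Bp = matmul(U, B)       # another basis for the same lattice
print(det2(U), abs(det2(B)), abs(det2(Bp)))  # → -1 2 2
```

Both bases have |det| = 2, consistent with Theorem 3.4 and the invariance of the lattice determinant.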

Another important invariant of a lattice is its successive minima. The first successive minimum, often
denoted as λ(L) or just λ (when the context is clear), is the length of the shortest non-zero vector in the
lattice. In general, a lattice of rank n has n successive minima which are defined as follows.
Definition 3.5 (Successive Minima). Let L be a lattice of rank n. For i ∈ {1, . . . , n}, the ith successive
minimum of L, denoted by λi (L), is the smallest r such that L has i linearly independent vectors of
length at most r.

Geometrically, λi (L) is the radius of the smallest closed ball around the origin which contains i linearly
independent vectors. Figure 6 depicts the first successive minimum of a lattice.

Interestingly, no efficient algorithm is known for computing the successive minima of a lattice. There is,
however, an upper bound due to Minkowski.
Theorem 3.6 (Minkowski's First Theorem). Let L be a full-rank n dimensional lattice. Then

    λ1 (L) ≤ √n · | det(L)|^{1/n}

The first successive minimum is of particular interest as it gives a standard by which we can judge the
length of vectors in a lattice.

Figure 6: First successive minimum λ and a shortest non-zero vector of a lattice. The red line is the one
dimensional span of the shortest vectors.

3.3 Lattice Problems

We now define some important computational lattice problems.


Definition 3.7 (Shortest Vector Problem (SVP)). Given a basis B of a lattice L = L(B), find a non-zero
lattice vector v that satisfies ∥v∥ = λ1 (L).

Figure 7: The Shortest Vector Problem.

Definition 3.8 (Closest Vector Problem (CVP)). Given a basis B of a lattice L = L(B), and a target
vector t (not necessarily in L), find a lattice vector v that satisfies ∥v − t∥ = minw∈L ∥w − t∥.

Figure 8: The Closest Vector Problem.

It is well-known that these problems are NP-hard (for SVP, under randomised reductions) [Ajt98]. However, we can define slightly relaxed

versions of these problems which turn out to have efficient solutions for certain parameters. These will
be more useful for cryptanalysis.
Definition 3.9 (Approximate Shortest Vector Problem (SVPγ )). Given a basis B of a lattice L = L(B)
and an approximation factor γ, find a non-zero lattice vector v that satisfies ∥v∥ ≤ γ · λ1 (L).
Definition 3.10 (Approximate Closest Vector Problem (CVPγ )). Given a basis B of a lattice L =
L(B), a target vector t and an approximation factor γ, find a lattice vector v that satisfies ∥v − t∥ ≤
γ · minw∈L ∥w − t∥.

To solve these computational problems, we use lattice reduction. The idea is that we can transform
an arbitrary lattice basis into a “better” basis that contains shorter and more orthogonal vectors. In
the next section, we describe the LLL algorithm [LLL82] which is a lattice reduction algorithm that
gives a polynomial-time solution to these problems for approximation factors exponential in the lattice
dimension.

3.4 Lattice Reduction

The goal of lattice reduction is to take an arbitrary lattice basis and transform it into another basis
for the same lattice that has shorter and more orthogonal vectors. Since we look for short vectors, this
process in itself may yield a solution for the approximate shortest vector problem.

Recall from Theorem 3.4 that two bases B and B′ represent the same lattice if and only if B = UB′ for
some integer matrix U with | det(U)| = 1. This leads us to two useful transformations we can perform on
a lattice basis: vector-switching and vector-addition. Left multiplying a basis by the matrix Ti,j yields a
new basis with the ith and jth basis vectors swapped, and left multiplying a basis by the matrix Li,j (k)
yields a new basis with the jth basis vector added k times to the ith basis vector. Note that both of
these matrices have determinants of ±1. These two elementary row operations are key parts of the LLL
algorithm.

Here, Ti,j denotes the identity matrix with rows i and j swapped (i.e. the entries at positions (i, i) and
(j, j) are 0, and the entries at positions (i, j) and (j, i) are 1), and Li,j (k) denotes the identity matrix
with an additional entry k at position (i, j) (row i, column j).

3.4.1 The LLL Algorithm

We now describe the algorithm of Lenstra, Lenstra and Lovász [LLL82].

The first step of this iterative algorithm is to take the basis B = {b1 , . . . , bn } and compute an orthogonal
basis by the Gram-Schmidt orthogonalisation process. This orthogonal basis is not a valid basis for the
lattice but will help us to reduce the lattice basis. The Gram-Schmidt process gives us the orthogonal
vectors b∗1 , . . . , b∗n and coefficients µi,j defined as

    b∗1 = b1
    b∗i = bi − ∑_{j=1}^{i−1} µi,j b∗j ,   1 < i ≤ n,    where µi,j = ⟨bi , b∗j ⟩ / ⟨b∗j , b∗j ⟩
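The Gram-Schmidt process above can be sketched in a few lines of plain Python using exact rational arithmetic (a sketch for illustration; the function name and representation are our own, and the accompanying repository uses Sage instead):

```python
from fractions import Fraction

def gram_schmidt(B):
    """Gram-Schmidt orthogonalisation of the rows of B.

    Returns the orthogonal vectors b*_i and the coefficients
    mu[i][j] = <b_i, b*_j> / <b*_j, b*_j> for j < i.
    """
    n = len(B)
    dot = lambda u, v: sum(Fraction(x) * y for x, y in zip(u, v))
    Bs = [list(map(Fraction, b)) for b in B]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            Bs[i] = [x - mu[i][j] * y for x, y in zip(Bs[i], Bs[j])]
    return Bs, mu
```

Using `Fraction` keeps the µi,j exact, which matters because the LLL conditions below compare these quantities against 1/2 and δ.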

With this, we have the notion of a LLL-reduced basis which is our end goal.

Definition 3.11 (δ-LLL reduced). Let δ ∈ (1/4, 1). A basis B = {b1 , . . . , bn } is δ-LLL-reduced if

1. |µi,j | ≤ 1/2 for all i > j (size-reduced)


2. (δ − µ2i+1,i )∥b∗i ∥2 ≤ ∥b∗i+1 ∥2 for all 1 ≤ i ≤ n − 1 (Lovász condition)

The first condition relates to the lengths of the basis vectors but it isn’t sufficient to ensure all basis
vectors are short (consider the two dimensional case where b1 is large). The second condition helps to
rectify this by imposing a requirement on a more local scale between consecutive Gram-Schmidt vectors
which roughly says that the second vector should be not much shorter than the first.

With these definitions, we can now present the LLL algorithm.

Algorithm 1 LLL Algorithm [LLL82]


1: function LLL(Basis {b1 , . . . , bn }, δ)
2: while true do
3: for i = 2 to n do ▷ size-reduction
4: for j = i − 1 to 1 do
5: b∗i , µi,j ← Gram-Schmidt(b1 , . . . , bn )
6: bi ← bi − ⌊µi,j ⌉bj
7: if ∃i such that (δ − µ2i+1,i )∥b∗i ∥2 > ∥b∗i+1 ∥2 then ▷ Lovász condition
8: Swap bi and bi+1
9: else
10: return {b1 , . . . , bn }
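Algorithm 1 translates almost line-for-line into Python. The sketch below (our own helper names; exact rationals via `fractions.Fraction`) is intended only for small examples — for real attacks one would use an optimised implementation such as fpylll or Sage's built-in LLL:

```python
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(x) * y for x, y in zip(u, v))

def gram_schmidt(B):
    # Gram-Schmidt orthogonalisation: returns (B*, mu).
    n = len(B)
    Bs = [list(map(Fraction, b)) for b in B]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            Bs[i] = [x - mu[i][j] * y for x, y in zip(Bs[i], Bs[j])]
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    B = [list(map(Fraction, b)) for b in B]
    n = len(B)
    while True:
        # size-reduction
        for i in range(1, n):
            for j in range(i - 1, -1, -1):
                _, mu = gram_schmidt(B)
                B[i] = [x - round(mu[i][j]) * y for x, y in zip(B[i], B[j])]
        Bs, mu = gram_schmidt(B)
        # check the Lovász condition; swap and restart if it fails
        for i in range(n - 1):
            if (delta - mu[i + 1][i] ** 2) * dot(Bs[i], Bs[i]) > dot(Bs[i + 1], Bs[i + 1]):
                B[i], B[i + 1] = B[i + 1], B[i]
                break
        else:
            return B
```

Running `lll([[-2, 2], [-2, 1]])` reproduces the reduced basis {(0, −1), (−2, 0)} computed by hand in Example 3.12 below.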

Example 3.12. Let B = {b1 , b2 } where b1 = (−2, 2), b2 = (−2, 1). We will run the LLL algorithm on
this lattice basis, using δ = 0.75

b1
b2

Figure 9: Lattice L(B) with basis b1 = (−2, 2) and b2 = (−2, 1).

We begin by computing the first set of Gram-Schmidt vectors and coefficients:

    b∗1 = b1 = (−2, 2)
    µ2,1 = ⟨b2 , b∗1 ⟩ / ⟨b∗1 , b∗1 ⟩ = 0.75
    b∗2 = b2 − µ2,1 b∗1 = (−2, 1) − 0.75 · (−2, 2) = (−0.5, −0.5)

Then, we set b2 ← b2 − ⌊µ2,1 ⌉b1 = (0, −1).

The Lovász condition between b∗1 and b∗2 is not satisfied as

    (δ − µ²2,1 )∥b∗1 ∥² = (0.75 − 0.75²) · 8 = 1.5 > 0.5 = ∥b∗2 ∥²

so we swap vectors b1 and b2 in the basis. We now have b1 = (0, −1) and b2 = (−2, 2).

We perform another iteration and compute the Gram-Schmidt vectors and coefficients from the new
basis:

    b∗1 = b1 = (0, −1)
    µ2,1 = ⟨b2 , b∗1 ⟩ / ⟨b∗1 , b∗1 ⟩ = −2
    b∗2 = b2 − µ2,1 b∗1 = (−2, 2) + 2 · (0, −1) = (−2, 0)

This time, size reduction sets b2 ← b2 − ⌊µ2,1 ⌉b1 = (−2, 2) + 2 · (0, −1) = (−2, 0), after which
µ2,1 = 0. We find that the Lovász condition holds (for every pair of consecutive Gram-Schmidt vectors):

    (δ − µ²2,1 )∥b∗1 ∥² = (0.75 − 0²) · 1 = 0.75 ≤ 4 = ∥b∗2 ∥²

so we are done.

The LLL-reduced basis is {b′1 , b′2 } where b′1 = (0, −1) and b′2 = (−2, 0).

Figure 10: (a) Lattice L(B) with original basis b1 = (−2, 2), b2 = (−2, 1). (b) Lattice with LLL-reduced basis b′1 =
(0, −1), b′2 = (−2, 0).

We note that in this toy example, a shortest vector in the lattice is found in the first vector of the reduced
basis!

Of course, LLL does not always find a shortest vector. But we can show that it will always find a vector
within an approximation factor exponential in the dimension of the lattice to the shortest vector. To do
so, we first prove a theorem for the lower bound on the shortest vector in a lattice.
Theorem 3.13. Let B = {b1 , . . . , bn } be a lattice basis and B∗ = {b∗1 , . . . , b∗n } its corresponding Gram-
Schmidt orthogonalisation. Then λ1 (L(B)) ≥ mini∈{1,...,n} ∥b∗i ∥.

Proof. Let x = (x1 , . . . , xn ) ∈ Zn be a non-zero vector. We consider the lattice point xB and show that
its length is bounded below by mini∈{1,...,n} ∥b∗i ∥. Let j be the largest index such that xj ̸= 0. Then

    |⟨xB, b∗j ⟩| = |⟨ ∑_{i=1}^{n} xi bi , b∗j ⟩|
                = |∑_{i=1}^{n} xi ⟨bi , b∗j ⟩|     by linearity of the inner product
                = |xj | ⟨bj , b∗j ⟩                since ⟨bi , b∗j ⟩ = 0 for i < j and xi = 0 for i > j
                = |xj | · ∥b∗j ∥²

From the Cauchy-Schwarz inequality, we have

    |⟨xB, b∗j ⟩|² ≤ ⟨xB, xB⟩ · ⟨b∗j , b∗j ⟩
    =⇒ |⟨xB, b∗j ⟩| ≤ ∥xB∥ · ∥b∗j ∥

so

    |xj | · ∥b∗j ∥² ≤ ∥xB∥ · ∥b∗j ∥
    =⇒ |xj | · ∥b∗j ∥ ≤ ∥xB∥
    =⇒ ∥b∗j ∥ ≤ ∥xB∥                               since xj ̸= 0 is an integer
    =⇒ min_{i∈{1,...,n}} ∥b∗i ∥ ≤ ∥xB∥

Therefore, since xB can be any non-zero lattice point, we must have λ1 (L(B)) ≥ mini∈{1,...,n} ∥b∗i ∥.
Proposition 3.14. Let B = {b1 , . . . , bn } be a δ-LLL-reduced basis. Then ∥b1 ∥ ≤ (2/√(4δ − 1))^{n−1} λ1 .

Proof. From the Lovász condition, we have

    (δ − µ²i+1,i )∥b∗i ∥² ≤ ∥b∗i+1 ∥²

and from the size-reduced condition, we have |µi+1,i | ≤ 1/2, so

    ((4δ − 1)/4) ∥b∗i ∥² ≤ ∥b∗i+1 ∥²

By chaining the inequalities, we get

    ∥b1 ∥² = ∥b∗1 ∥² ≤ (4/(4δ − 1))^{i−1} ∥b∗i ∥²
    =⇒ ∥b1 ∥ ≤ (2/√(4δ − 1))^{i−1} ∥b∗i ∥
    =⇒ ∥b1 ∥ ≤ (2/√(4δ − 1))^{n−1} min_{i∈{1,...,n}} ∥b∗i ∥   since 2/√(4δ − 1) > 1 for δ ∈ (1/4, 1)

The result then follows from Theorem 3.13.

Combining Proposition 3.14 with Theorem 3.6, we have the result

Proposition 3.15.

    ∥b1 ∥ ≤ (2/√(4δ − 1))^{n−1} · √n · | det(L)|^{1/n}

This result will be important for us as it gives us a rough idea about what short vectors LLL will find.
As we’ll see throughout this tutorial, if we can model a particular problem in which the solution exists
as a short vector of some lattice, we may be able to solve for it by using lattice reduction if the short
vector is within an exponential approximation factor of the shortest vector. We omit the analysis and
proof and take it for granted that the LLL algorithm runs in time polynomial in the lattice dimension.

3.5 Solving CVP

As we have seen, the LLL lattice reduction algorithm can be used to solve the approximate shortest
vector problem with exponential approximation factors by simply taking the first vector in the LLL-reduced
basis. Lattice reduction can also be used to solve the approximate closest vector problem with similar
approximation factors. In this section, we briefly discuss Babai's nearest plane algorithm [Bab86] and
Kannan's embedding method [Kan87].

3.5.1 Babai’s Nearest Plane Algorithm

Babai's nearest plane algorithm is a greedy algorithm that uses induction on the dimension n of the
lattice. The algorithm begins with lattice reduction to get a reduced basis B = {b1 , . . . , bn }. With
t the target vector, we consider the hyperplane generated by the first n − 1 basis vectors and let
t′ = t − cn bn , where cn is an integer such that the hyperplane translated by cn bn is as close as possible
to t; cn can be computed as cn = ⌊⟨t, b∗n ⟩/⟨b∗n , b∗n ⟩⌉. We then inductively apply this process to the first
n − 1 basis vectors and the new translated target vector t′ . The output of the algorithm is the sum of
the ci bi , which is clearly a lattice point.

Figure 11 gives an example of the algorithm in the two dimensional case.

Figure 11: Babai's Nearest Plane Algorithm. (a) The red lines represent the translated hyperplanes
zb2 + span{b1 } with z ∈ Z. c2 b2 + span{b1 } (drawn in bold) is the closest translated hyperplane to the
target t. (b) The hyperplane in this iteration is {0}, so we find the multiple c1 of b1 that minimises the
length of t′ − c1 b1 . (c) The closest lattice point to t is given by c2 b2 + c1 b1 .

The time complexity of Babai’s algorithm is dominated by the lattice reduction step. Thus, in the case
where we use LLL, the algorithm runs in time polynomial in the lattice dimension. As one might expect,
when using Babai’s algorithm with LLL, we can solve CVPγ to within an exponential approximation
factor.

Algorithm 2 Babai's Nearest Plane Algorithm [Bab86]
1: function Babai(Basis {b1 , . . . , bn }, target vector t)
2:     Perform lattice reduction on B
3:     b∗i ← Gram-Schmidt(b1 , . . . , bn )
4:     b ← t
5:     for i = n to 1 do
6:         ci ← ⌊⟨b, b∗i ⟩/⟨b∗i , b∗i ⟩⌉
7:         b ← b − c i bi
8:     return t − b
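As a sketch (our own simplified Python with exact rationals; the function name is ours, and the basis is assumed to be already LLL-reduced so the lattice reduction step of Algorithm 2 is omitted):

```python
from fractions import Fraction

def babai(B, t):
    """Babai's nearest plane algorithm (sketch).

    B is an already-reduced lattice basis (rows); t is the target vector.
    Returns the lattice point the algorithm finds near t.
    """
    n = len(B)
    dot = lambda u, v: sum(Fraction(x) * y for x, y in zip(u, v))
    # Gram-Schmidt orthogonalisation of B
    Bs = [list(map(Fraction, b)) for b in B]
    for i in range(n):
        for j in range(i):
            mu = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            Bs[i] = [x - mu * y for x, y in zip(Bs[i], Bs[j])]
    # walk down the hyperplanes, peeling off one basis vector at a time
    b = list(map(Fraction, t))
    for i in range(n - 1, -1, -1):  # i = n down to 1 in the paper's indexing
        c = round(dot(b, Bs[i]) / dot(Bs[i], Bs[i]))
        b = [x - c * y for x, y in zip(b, B[i])]
    return [x - y for x, y in zip(map(Fraction, t), b)]
```

On the reduced basis {(0, −1), (−2, 0)} from Example 3.12 with target (−1.6, 0.4), this returns the closest lattice point (−2, 0).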

3.5.2 Kannan’s Embedding Method

Kannan’s embedding method is another technique to solve the closest vector problem which works by
embedding the target vector in the lattice basis and treating the CVP instance as a SVP instance. Let
B = {b1 , . . . , bn } be the lattice basis and let t = (t1 , . . . , tn ) be the target vector. Suppose that a
solution to the CVP instance is given by c1 b1 + · · · + cn bn . Then, we have

    t ≈ ∑_{i=1}^{n} ci bi
    =⇒ t = ∑_{i=1}^{n} ci bi + e

where ∥e∥ is small. Hence, it makes sense to consider the n + 1 dimensional lattice with basis

    B′ = [ B  0 ]
         [ t  q ]

which contains the short vector (e, q) by the linear combination (−c1 , . . . , −cn , 1). The solution is then
given by subtracting e from t.

The integer q is known as the embedding factor and often affects how successful LLL will be in revealing
the correct vector. We refer to [Gal12] and [Sun+21] for discussions on the embedding factor.

3.6 An Application of Lattice Reduction

Before we begin discussing the application of lattice reduction to cryptanalysis, we will look at an
application of lattice reduction in algebra. The problem we look at here is that of reconstructing the
minimal polynomial of an algebraic number given an approximation of the number. This problem was
shown to be solvable with lattice reduction in [KLL84].
Definition 3.16 (Minimal Polynomial). Let α be algebraic over a field F . The minimal polynomial of α
is the monic polynomial of lowest degree in F [x] such that α is a root.

We will use lattice reduction to find the minimal polynomial f (x) of α = 7 + ∛5 given the approximation
β ≈ 8.70997594 to 8 decimal places. Firstly, we suspect that f is of degree 3, so we write
f (x) = a0 + a1 x + a2 x² + a3 x³ with ai ∈ Z. We have f (α) = 0, and so f (β) ≈ 0. To work over the
integers, we multiply this expression by 10⁸ to get

    10⁸ a0 + ⌊10⁸ β⌋a1 + ⌊10⁸ β²⌋a2 + ⌊10⁸ β³⌋a3 ≈ 0
Now, consider the lattice with basis (given by the rows)

    B = [ 10⁸       1 0 0 ]   [ 100000000   1 0 0 ]
        [ ⌊10⁸ β⌋   0 1 0 ] = [ 870997594   0 1 0 ]
        [ ⌊10⁸ β²⌋  0 0 1 ]   [ 7586368087  0 0 1 ]
        [ ⌊10⁸ β³⌋  0 0 0 ]   [ 66077083514 0 0 0 ]

Note that our target vector x = (c, a0 , a1 , a2 ) containing the coefficients of the minimal polynomial is in
this lattice with the linear combination t = (a0 , a1 , a2 , 1). That is, tB = x. Here, c is an integer very
close to 0.

To justify why LLL will be successful in helping us to find the coefficients of the minimal polynomial,
we assume knowledge of an upper bound, M = 400, for the height of the minimal polynomial, which is
defined to be the maximum of the magnitudes of its coefficients. This gives us a rough upper bound of
our target vector: ∥(0, M, M, M )∥ ≈ 692. We use the result of the previous section which bounds the
first vector in the LLL-reduced basis to find that
    ∥b1 ∥ ≤ (2/√(4δ − 1))^{n−1} · √n · | det(L)|^{1/n} ≈ 2868
which suggests we should be able to find the target vector with LLL.

In practice, LLL will likely produce a vector much shorter than this upper bound. [NS06] gives a heuristic
that suggests the first vector in the LLL-reduced basis will satisfy ∥b1 ∥/| det(L)|1/n ≈ 1.02n for random
bases. In general, we will tend to rely on heuristics and rough calculations to determine whether or not
LLL will be successful.
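These back-of-the-envelope numbers are easy to reproduce (our own check script, not part of the accompanying repository; `n`, `delta`, `det_L` and `M` name the example's parameters):

```python
import math

# Parameters of the running example: dimension 4, delta = 0.75,
# |det(B)| = 66077083514, and assumed height bound M = 400.
n, delta = 4, 0.75
det_L = 66077083514
M = 400

# Rough upper bound on the target vector ||(0, M, M, M)||.
target_bound = math.sqrt(3) * M

# Proposition 3.15 bound on the first LLL-reduced basis vector.
lll_bound = (2 / math.sqrt(4 * delta - 1)) ** (n - 1) * math.sqrt(n) * det_L ** (1 / n)

print(int(target_bound), round(lll_bound))  # → 692 2868
```

Since the target vector's estimated length (≈ 692) is below the guaranteed bound on the first reduced vector (≈ 2868), the heuristic 1.02^n behaviour gives good reason to expect LLL to reveal it.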

Running the LLL algorithm on the basis gives the reduced basis B′

    B′ = [ 5    −348  147  −21  ]
         [ 438  −75   116  188  ]
         [ 109  −214  −563 −159 ]
         [ 357  136   220  −419 ]

The first row gives the shortest vector, and we recognise it as our solution because the first entry, 5, is
close to 0. We read the coefficients off this vector to get a0 = −348, a1 = 147, a2 = −21. So, the minimal
polynomial of α = 7 + ∛5 is

    f (x) = x³ − 21x² + 147x − 348
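The arithmetic of this example is easy to sanity-check in a few lines of Python (this verifies the example, it does not perform the reduction): applying the combination t = (−348, 147, −21, 1) to the rows of B should reproduce the short vector (5, −348, 147, −21), and the recovered polynomial should nearly vanish at the approximation β.

```python
# Rows of B: (floor(10^8 * beta^i), e_i) for i = 0..3, as in the example.
rows = [
    [100000000, 1, 0, 0],
    [870997594, 0, 1, 0],
    [7586368087, 0, 0, 1],
    [66077083514, 0, 0, 0],
]
t = [-348, 147, -21, 1]  # (a0, a1, a2, a3) with a3 = 1

# x = t * B should be (c, a0, a1, a2) with c close to 0.
x = [sum(t[i] * rows[i][j] for i in range(4)) for j in range(4)]
print(x)  # [5, -348, 147, -21]

# The recovered polynomial nearly vanishes at the approximation beta.
beta = 8.70997594
f = lambda v: v**3 - 21 * v**2 + 147 * v - 348
assert abs(f(beta)) < 1e-5
```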

4 Lattice-based Problems

In this section, we study some problems which can be solved by modelling the situation as an SVPγ or
CVPγ instance. These problems may be characterised by their linear nature, and throughout this tutorial
we call them lattice-based problems.

4.1 Finding Small Roots

We start with arguably one of the most influential and widely applied problems in public key cryptanal-
ysis: the problem of finding “small” roots of a polynomial modulo an integer. This problem was studied
in a seminal work by Coppersmith where he also applied his techniques to breaking low exponent RSA
in particular settings [Cop96]. The power of Coppersmith’s method is in its ability to find small roots of
polynomial equations modulo composite numbers, without knowledge of the factorisation. Finding roots
modulo a prime is easy, while finding roots modulo a composite number N is equivalent to factoring
N , so Coppersmith’s method is a good compromise and has indeed found innumerable applications in
cryptanalysis.

In this section, we present a simple, informal overview of Coppersmith’s method in the univariate case,
as well as some useful extensions and generalisations which we state without proof.

4.1.1 Coppersmith’s Method: An Overview

Let N be a composite integer and f (x) = x^d + Σ_{i=0}^{d−1} ai x^i ∈ ℤ[x] a monic polynomial of degree d.
Coppersmith’s method helps us to find all integer solutions x0 to the equation f (x0) = 0 (mod N )
which satisfy |x0| < B for some bound B depending on N and d.

The main idea of the algorithm is to construct a polynomial h(x) over the integers which also satisfies
h(x0 ) = 0. Since there are efficient algorithms to find roots of univariate polynomials over the integers,
finding such a polynomial will give us the roots to the original polynomial. To proceed, we note that
adding integer multiples of gi(x) = N x^i ∈ ℤ[x] to f results in polynomials which have the same roots as
f modulo N . So we consider the lattice generated by the rows of B:

        [ N                                              ]
        [      BN                                        ]
        [            B²N                                 ]
    B = [                   ⋱                            ]
        [                        B^(d−1) N               ]
        [ a0   a1B   a2B²   ···  a_(d−1)B^(d−1)   B^d    ]

The first d rows encode the polynomials gi(Bx) for 0 ≤ i < d while the last row encodes f (Bx). Thus,
every element of this lattice represents a polynomial which shares the roots of f modulo N . Crucially,
we observe that if there is such a polynomial h(x) in this lattice which also satisfies |h(x0)| < N , then
we have h(x0) = 0 over the integers. This polynomial should have “small” coefficients to satisfy this
condition. Specifically, we require |hi x0^i| ≤ |hi B^i| < N/(d + 1) for all coefficients hi of h. Since the
elements of L(B) encode the coefficients of these polynomials of interest, perhaps using lattice reduction
to find a short vector will be helpful.

B is triangular, so we can easily calculate det(L(B)) = det(B) = B^(d(d+1)/2) N^d. From Proposition 3.15,
we have (using δ = 3/4)

    ∥b1∥ ≤ (2/√(4δ − 1))^d · √(d + 1) · |det(L)|^(1/(d+1))
         = 2^(d/2) √(d + 1) · B^(d/2) N^(d/(d+1))
         = 2^(d/2) √(d + 1) · B^(d/2) N · N^(−1/(d+1))

where b1 = (b0 , . . . , bd ) is the first vector in the LLL-reduced basis. We interpret this vector as the
coefficients of the polynomial h(Bx), and we see that by setting

    B < N^(2/(d(d+1))) / (2(d + 1)^(3/d))

we achieve the required bounds on the hi :

    |hi B^i| = |bi| ≤ ∥b1∥ < N/(d + 1)

So, to find the small roots of f mod N , we take the polynomial h(x) = b0 + (b1/B)x + (b2/B²)x² + · · · +
(bd/B^d)x^d (each bi is divisible by B^i by construction) and solve for its roots over the integers. We
check each root to ensure that it is a root of f mod N .
Example 4.1. Let N = 23 · 29 = 667 and f (x) = x² + 6x + 352 ∈ ℤ[x]. Note that f has the “small” root
x0 = 15 modulo N but not over the integers. That is, f (15) = 0 (mod N ), but f (15) ≠ 0. We will use
Coppersmith’s method as described above to recover this small root. We take B = 20 and construct the
lattice generated by the rows of B:

        [ N     0    0  ]   [ 667      0    0 ]
    B = [ 0    BN    0  ] = [   0  13340    0 ]
        [ a0   a1B   B² ]   [ 352    120  400 ]

Running LLL on this basis yields the reduced basis B′:

         [ −315    120    400 ]
    B′ = [  352    120    400 ]
         [  167  12260  −3600 ]

We read off the first row and interpret it as the coefficients of the polynomial h(Bx). So we compute
h(x) as follows:
    h(Bx) = 400x² + 120x − 315
    =⇒ h(x) = (400/20²)x² + (120/20)x − 315
            = x² + 6x − 315
Then, solving for the roots of h(x) (e.g. with the quadratic formula or Newton’s method), we find the
solution x0 = 15. Note that this also gives us the solution x0 = −21 whose magnitude is greater than B.
In practice, we can often expect Coppersmith’s method to perform slightly better than the theoretical
bounds.
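The final step of the example can be reproduced directly; this is a verification sketch only (the reduced vector (−315, 120, 400) is taken from the example, not recomputed here):

```python
from math import isqrt

N, B = 667, 20
f = lambda x: x**2 + 6 * x + 352
h = lambda x: x**2 + 6 * x - 315  # from the reduced vector (-315, 120, 400)

# Solve h over the integers with the quadratic formula.
disc = 6 * 6 - 4 * (-315)
s = isqrt(disc)
assert s * s == disc
roots = [(-6 + s) // 2, (-6 - s) // 2]
print(roots)  # [15, -21]

# Keep roots that are within the bound and are roots of f modulo N.
good = [r for r in roots if abs(r) < B and f(r) % N == 0]
print(good)  # [15]
```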

4.1.2 Coppersmith’s Method: Extensions and Generalisations

From further work of Coppersmith and many others, Coppersmith’s method has been developed into
stronger and more useful results. We give a formal statement of Coppersmith’s method and some useful
extensions and generalisations.
Theorem 4.2 (Coppersmith’s Method [Cop96]). Let N be an integer of unknown factorisation, which
has a divisor b ≥ N^β. Let f (x) be a univariate, monic polynomial of degree δ and 0 < ε ≤ β/7. Then we
can find all solutions x0 of f (x) = 0 (mod b) with

    |x0| ≤ (1/2) N^(β²/δ − ε)

in time polynomial in (log N, δ, 1/ε).

Theorem 4.3 (Coppersmith’s Method for Bivariate Integer Polynomials [Cor07]). Let f (x, y) ∈ ℤ[x, y]
be an irreducible polynomial of degree δ. Suppose X and Y are upper bounds for the desired solution
(x0, y0) and let W = max_{i,j} |f_{i,j}| X^i Y^j. If XY < W^(1/δ), then we can find all solutions (x0, y0) of
f (x, y) = 0 bounded by |x0| ≤ X and |y0| ≤ Y in time polynomial in (log W, 2^δ).

There is a generalisation of Coppersmith’s method to multivariate polynomials. The basic idea follows
that of the univariate case; we construct a similar lattice whose rows encode the coefficients of some
shift polynomials which are multiples of powers of the modulus. After reducing the lattice basis, we
obtain a set of multivariate polynomials which share the target roots over the integers. We can then use
Gröbner basis techniques or compute resultants to solve the system of equations and recover the roots.
As the polynomials we obtain after the lattice reduction step are not guaranteed to be algebraically
independent (i.e. they may share a non-trivial factor), there is no guarantee that the system will be
solvable. Therefore, this generalisation is heuristic only, though it tends to work quite well in practice.
We also note that the bounds analysis typically depends heavily on the monomials that appear in the
polynomial of interest as well as the shift polynomials used, since the bounds depend on the determinant
of the lattice. To calculate the upper bounds for the roots which we might expect the algorithm to
successfully find, we combine the LLL bounds from Proposition 3.15 with the following result of Howgrave-
Graham:
Theorem 4.4 (Howgrave-Graham [How97]). Let h(x1, . . . , xn) ∈ ℤ[x1, . . . , xn] be a polynomial consisting
of ω monomials. If

(1) h(r1, . . . , rn) = 0 (mod N ) for some |r1| < X1, . . . , |rn| < Xn

(2) ∥h(x1 X1, . . . , xn Xn)∥ < N/√ω

then h(r1, . . . , rn) = 0 holds over the integers.

In practice, performing this analysis may be tedious and difficult, especially for lattices that aren’t full-
rank, so a more empirical approach is often used. For a more in-depth understanding of Coppersmith’s
method, see [Cop96, JM06, Cor07, May03].

4.2 Knapsack Problem

The knapsack problem is a well-known NP-complete computational problem that has been used as a
trapdoor in some public key cryptosystems [MH78, CR88, Yas07]. The most common version of the
knapsack problem in cryptography and cryptanalysis is the subset sum problem which involves finding
a subset of a given set of numbers that sum to a given target. In this section, we will study a special
case of this subset sum problem as well as the modular subset sum problem and further generalisations.

4.2.1 Low-density Subset Sum Problems

Definition 4.5 (Subset Sum Problem). Given positive integers a1 , . . . , an (the weights) and a target
integer s, find some subset of the ai that sum to s. That is, find e1, . . . , en with ei ∈ {0, 1} such that

    Σ_{i=1}^{n} ei ai = s

Many cryptosystems based on the subset sum problem have been shown to be insecure [Odl91] through
algorithms that solve special “low-density” subset sum problem instances. The density of a set of weights
a1, . . . , an is defined by

    d = n / log2 max(ai)
[LO85] gives the LO algorithm which can solve subset sum problem instances with d < 0.6463 given
access to an SVP oracle. Similarly, [Cos+92] gives the CJLOSS algorithm which is a slight modification of

the LO algorithm that improves the bound to d < 0.9408. Both of these algorithms are based on lattice
reduction and have been shown to be quite practical [Cos+92].

The strategy is to construct a lattice which contains a vector encoding the ei as a short vector. A simple
lattice which follows this idea is generated by the rows of the following basis matrix:

        [ 1                 a1 ]
        [    1              a2 ]
    B = [       ⋱           ⋮  ]
        [          1        an ]
        [                   s  ]

Notice that the linear combination t = (e1 , . . . , en , −1) generates the (short) vector x1 = (e1 , . . . , en , 0).
That is, tB = x1 . So we might intuitively expect lattice reduction to help us find this vector. The
CJLOSS algorithm uses the slightly different lattice with basis B′ given by

         [ 1                      N a1 ]
         [    1                   N a2 ]
    B′ = [       ⋱                ⋮    ]
         [          1             N an ]
         [ 1/2  1/2  ···  1/2     N s  ]

where N > n is an integer. In this case, the same linear combination t = (e1, . . . , en, −1) generates
the vector x2 = (e1 − 1/2, . . . , en − 1/2, 0), and so we always have ∥x2∥ = (1/2)√n. In [Cos+92] it is shown that
the probability, P , of x2 not being the unique shortest vector in the lattice is bounded by

    P ≤ n(4n√n + 1) · 2^(c0 n) / max(ai)

where c0 = 1.0628 . . .. This upper bound tends towards 0 when max(ai) > 2^(c0 n), and so we get

    max(ai) > 2^(c0 n)
    =⇒ log2 max(ai) > c0 n
    =⇒ 1/c0 > n / log2 max(ai)
    =⇒ d < 1/c0
    =⇒ d < 0.9408 . . .

Therefore, almost all subset sum problems with density d < 0.9408 can be efficiently solved given an SVP
oracle. It is important to note that this result is theoretical and proven with the hypothetical existence
of an SVP oracle. In reality, although such an oracle does not exist, LLL often finds a sufficiently short
vector that allows us to solve the subset sum problem. We also note that the upper bound on the density
is theoretical as well and in practice it may even be possible to solve higher density subset sum problems
with this approach.
Example 4.6. Let (a1 , a2 , a3 , a4 , a5 , a6 ) = (83, 59, 47, 81, 76, 51) be the n = 6 weights of a subset sum
problem and let s = 291 be the target. The density of this subset sum problem is d = n/ log2 max(ai ) =
6/6.375 = 0.9412. Although the density is slightly higher than the theoretical upper bound, we will use
the CJLOSS algorithm and show that it can solve this subset sum problem. With N = 3 we construct

the lattice generated by the rows of B:

        [  1    0    0    0    0    0   3 · 83  ]
        [  0    1    0    0    0    0   3 · 59  ]
        [  0    0    1    0    0    0   3 · 47  ]
    B = [  0    0    0    1    0    0   3 · 81  ]
        [  0    0    0    0    1    0   3 · 76  ]
        [  0    0    0    0    0    1   3 · 51  ]
        [ 1/2  1/2  1/2  1/2  1/2  1/2  3 · 291 ]

Running LLL on this basis yields the reduced basis B′:

              [ −1   1   1  −1  −1  −1   0 ]
              [  1   1   1  −1  −1   3   0 ]
              [  2  −2   2   2  −4   0   0 ]
    B′ = 1/2  [  1   3  −3   3  −3   1   0 ]
              [  2  −2  −2  −4   0   0   0 ]
              [ −3  −1  −3  −1  −1   1   0 ]
              [  0  −2   0   0  −2  −2  −6 ]

The first row b1 = (b1, . . . , b7) = (−1/2, 1/2, 1/2, −1/2, −1/2, −1/2, 0) is our vector x2 = (e1 − 1/2, . . . , e6 − 1/2, 0)
(or −x2) which encodes the ei. Thus, we recover the ei by computing (b1 + 1/2, . . . , b6 + 1/2) (or
(1/2 − b1, . . . , 1/2 − b6)). In this case, doing the latter gives us

    (e1, e2, e3, e4, e5, e6) = (1, 0, 0, 1, 1, 1)

which indeed satisfies Σ_{i=1}^{n} ei ai = s.
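The claimed solution and the key property of the CJLOSS lattice in this example can be checked directly in Python (a verification sketch; the reduction itself is done with LLL):

```python
from fractions import Fraction as F

a = [83, 59, 47, 81, 76, 51]
s, N, n = 291, 3, 6
e = [1, 0, 0, 1, 1, 1]
assert sum(ei * ai for ei, ai in zip(e, a)) == s

# Rows of the CJLOSS basis: identity with N*a_i in the last column,
# plus the final row (1/2, ..., 1/2, N*s).
rows = [[F(int(i == j)) for j in range(n)] + [F(N * a[i])] for i in range(n)]
rows.append([F(1, 2)] * n + [F(N * s)])

# t = (e_1, ..., e_n, -1) generates the short vector x2 = (e_i - 1/2, ..., 0).
t = e + [-1]
x2 = [sum(t[i] * rows[i][j] for i in range(n + 1)) for j in range(n + 1)]
assert x2 == [F(ei) - F(1, 2) for ei in e] + [F(0)]
assert sum(c * c for c in x2) == F(n, 4)  # ||x2||^2 = n/4
```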

4.2.2 Low-density Subset Sum Problems: Extensions and Generalisations

It turns out that the approach of the previous section can be extended to the multiple subset sum
problem, the modular subset sum problem and the multiple modular subset sum problem. We have the
following definitions for these problems.
Definition 4.7 (Multiple Subset Sum Problem). Given positive integers a1,1 , . . . , ak,n (the weights) and
target integers s1, . . . , sk, find e1, . . . , en with ei ∈ {0, 1} such that

    Σ_{i=1}^{n} ei aj,i = sj

for all 1 ≤ j ≤ k.
Definition 4.8 (Modular Subset Sum Problem). Given positive integers a1 , . . . , an (the weights), a
target integer s, and a modulus M , find e1, . . . , en with ei ∈ {0, 1} such that

    Σ_{i=1}^{n} ei ai = s (mod M )

Definition 4.9 (Multiple Modular Subset Sum Problem). Given positive integers a1,1 , . . . , ak,n (the
weights), target integers s1, . . . , sk, and a modulus M , find e1, . . . , en with ei ∈ {0, 1} such that

    Σ_{i=1}^{n} ei aj,i = sj (mod M )

for all 1 ≤ j ≤ k.

The density of the multiple subset sum problem is defined as d = n/(k · log2 max(aj,i)), while the density
of the multiple modular subset sum problem is defined as d = n/(k · log2 M ). We also note that the
(modular) subset sum problem is simply the multiple (modular) subset sum problem with k = 1.

In [PZ16], it is shown that the multiple subset sum problem can be solved with the same density bound
of d < 0.9408 using the lattice with basis B given by:

        [ 1                    0    N a1,1   N a2,1   ···   N ak,1 ]
        [    1                 0    N a1,2   N a2,2   ···   N ak,2 ]
    B = [       ⋱              ⋮      ⋮        ⋮               ⋮   ]
        [          1           0    N a1,n   N a2,n   ···   N ak,n ]
        [ 1/2  1/2  ···  1/2  1/2   N s1     N s2     ···   N sk   ]

with N > (n + 1)/4. Similar to before, the linear combination (e1, . . . , en, −1) generates the short vector
x = (e1 − 1/2, . . . , en − 1/2, −1/2, 0, . . . , 0) and so we expect lattice reduction to reveal this target vector.

[PZ16] also shows that the multiple modular subset sum problem can be solved when d < 0.9408 and
k = o( n / log2((n + 1)√(n + 1)) ) using the lattice with basis B′ given by:

 
         [ 1                    0    N a1,1   N a2,1   ···   N ak,1 ]
         [    1                 0    N a1,2   N a2,2   ···   N ak,2 ]
         [       ⋱              ⋮      ⋮        ⋮               ⋮   ]
         [          1           0    N a1,n   N a2,n   ···   N ak,n ]
    B′ = [                           N M                            ]
         [                                    N M                   ]
         [                                             ⋱            ]
         [                                                   N M    ]
         [ 1/2  1/2  ···  1/2  1/2   N s1     N s2     ···   N sk   ]
∑n
To see why the target vector is in this lattice, we rewrite the modular equations Σ_{i=1}^{n} ei aj,i = sj (mod M )
as Σ_{i=1}^{n} ei aj,i = sj + ℓj M and notice that the linear combination (e1, . . . , en, −ℓ1, . . . , −ℓk, −1) generates
the target vector x = (e1 − 1/2, . . . , en − 1/2, −1/2, 0, . . . , 0).
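This linear-combination argument can be checked on a toy modular instance (all values below are of our own choosing, purely for illustration):

```python
from fractions import Fraction as F

# Toy modular subset sum instance: weights (7, 11, 15), modulus M = 20,
# e = (1, 0, 1), so 7 + 15 = 22 = 2 + 1*20, i.e. s = 2 with multiplier l = 1.
a, M, s, ell = [7, 11, 15], 20, 2, 1
e, n, k, N = [1, 0, 1], 3, 1, 2

rows = [[F(int(i == j)) for j in range(n)] + [F(0), F(N * a[i])] for i in range(n)]
rows.append([F(0)] * n + [F(0), F(N * M)])       # the N*M row
rows.append([F(1, 2)] * (n + 1) + [F(N * s)])    # the final row

# (e_1, ..., e_n, -l_1, -1) generates x = (e_i - 1/2, ..., -1/2, 0).
t = e + [-ell, -1]
x = [sum(t[i] * rows[i][j] for i in range(n + k + 1)) for j in range(n + 1 + k)]
assert x == [F(1, 2), F(-1, 2), F(1, 2), F(-1, 2), F(0)]
```

The multiple-of-M rows absorb the unknown multipliers ℓj, which is exactly why the weight columns of the target vector vanish.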

4.3 Hidden Number Problem

The hidden number problem (HNP) was introduced in [BV96] for the purpose of proving results about the
bit security of the Diffie-Hellman key-exchange protocol. At a high level, the HNP deals with recovering
a secret “hidden” number given some partial knowledge of its linear relations, so it has naturally found
further usefulness in cryptanalysis and especially side-channel attacks. In this section, we will study the
hidden number problem as well as the extended hidden number problem as formulated in [HR07].

4.3.1 Hidden Number Problem: An Overview

The original formulation of the HNP in [BV96] is in terms of finding a secret integer α modulo a public
prime p when given the most significant bits of a number of values ti α mod p, where the ti are random and
known. We follow the reformulation given in [BH19] which is a slight variant that models the problem
as seeking a solution to a system of linear equations.

20
Definition 4.10 (Hidden Number Problem). Let p be a prime and let α ∈ [1, p − 1] be a secret integer.
Recover α given m pairs of integers {(ti, ai)}_{i=1}^{m} such that

    βi − ti α + ai = 0 (mod p)

where the βi are unknown and satisfy |βi| < B for some B < p.

For appropriate parameters, the HNP can be solved via a reduction to the closest vector problem.
Consider the lattice with basis B given by

        [ p                       ]
        [    p                    ]
    B = [       ⋱                 ]
        [          p              ]
        [ t1  t2  ···  tm   1/p   ]

By rewriting the HNP equations as βi + ai = ti α + ki p for integers ki, we see that the linear com-
bination x = (k1, . . . , km, α) generates the lattice vector xB = (β1 + a1, . . . , βm + am, α/p). Defining
t = (a1, . . . , am, 0) and u = (β1, . . . , βm, α/p), we notice that xB − t = u where the length of u is
bounded above by √(m + 1)·B, whereas the lattice determinant is p^(m−1). Therefore, we can reasonably
expect an approximate CVP algorithm to reveal the vector u from which we can read off the secret
integer α by multiplying the last entry by p.

In [BV96], the authors prove that this approach is successful using Babai’s algorithm with LLL when
m = 2⌈√(log p)⌉ and B ≤ p/2^k where k = ⌈√(log p)⌉ + ⌈log log p⌉. In practice, the HNP can be solved
with looser bounds on the parameters, especially in smaller dimensions and with optimisations such as
recentering [BH19]. It has also been shown that an SVP approach using Kannan’s embedding method
is often more effective than the CVP approach [Ben+14, Sun+21].

With the SVP approach, we embed the CVP target vector as a row in the lattice basis to get the basis
B′:

         [ p                            ]
         [    p                         ]
         [       ⋱                      ]
    B′ = [          p                   ]
         [ t1  t2  ···  tm   B/p    0   ]
         [ a1  a2  ···  am    0     B   ]
This lattice contains the vector

    u′ = (β1, . . . , βm, αB/p, −B)

generated by the linear combination (k1, . . . , km, α, −1). We have ∥u′∥ < √(m + 2)·B and so Proposition
3.15 suggests that we are likely to find this short vector among the basis vectors of an LLL-reduced
basis. Notably, the shorter vector (0, . . . , 0, B, 0) is in this lattice, generated by the linear combination
(−t1, . . . , −tm, p, 0), so u′ is more likely to be the second vector of an LLL-reduced basis.
Example 4.11. Let p = 401 and (t1 , t2 , t3 ) = (143, 293, 304). We will use the SVP approach to solving
the HNP to recover the secret integer α = 309. Suppose we are given that βi − ti α + ai = 0 (mod p)
for i ∈ {1, 2, 3} where (a1 , a2 , a3 ) = (62, 300, 86) and furthermore, that βi < B = 20. We construct the
lattice generated by the rows of B:

        [ p   0   0    0    0 ]   [ 401    0    0     0     0 ]
        [ 0   p   0    0    0 ]   [   0  401    0     0     0 ]
    B = [ 0   0   p    0    0 ] = [   0    0  401     0     0 ]
        [ t1  t2  t3  B/p   0 ]   [ 143  293  304  20/401   0 ]
        [ a1  a2  a3   0    B ]   [  62  300   86     0    20 ]

Running LLL on this basis gives us the LLL-reduced basis B′:

         [   0    0    0       20      0 ]
         [ −15  −12  −16   1840/401   20 ]
    B′ = [  24   −5   −6  −1800/401   20 ]
         [   6  −42   −5  −1440/401  −40 ]
         [ −11   −1   57  −3880/401   20 ]

As expected, the first basis vector is (0, 0, 0, B, 0). To recover α, we look through the other basis vectors.
We note that our target lattice vector u′ = (β1, β2, β3, αB/p, −B) is not among the basis vectors as no
basis vector has −20 as its last entry. However, −u′ (with the second last entry shifted by a multiple of
B, which corresponds to reducing α modulo p) has the same length and is potentially among the basis
vectors. Indeed, it is found in the second basis vector, and we also find that this yields βi < B for
i ∈ {1, 2, 3}. Therefore, we read off the secret integer α from the second last entry:
α = −(1840/401) · (p/B) mod p = −92 mod 401 = 309.
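The numbers in this example are easy to verify directly (again, this checks the example rather than performing the reduction):

```python
p, alpha, B = 401, 309, 20
t = [143, 293, 304]
a = [62, 300, 86]

# beta_i = t_i * alpha - a_i (mod p) must be small.
betas = [(ti * alpha - ai) % p for ti, ai in zip(t, a)]
print(betas)  # [15, 12, 16]
assert all(b < B for b in betas)

# Reading alpha off the second reduced basis vector: its fourth entry is
# 1840/401, and alpha = -1840/B mod p.
assert (-1840 // B) % p == alpha
```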

4.3.2 Extended Hidden Number Problem

The extended hidden number problem formulated in [HR07] extends the HNP to the case in which there
are multiple chunks of information known about linear relations of the secret integer. Additionally, it
simultaneously deals with the case in which multiple chunks of the secret integer are known.
Definition 4.12 (Extended Hidden Number Problem [HR07]). Let p be a prime and let x ∈ [1, p − 1]
be a secret integer such that

    x = x̄ + Σ_{j=1}^{m} 2^(πj) xj

where the integers x̄ and πj are known, and the unknown integers xj satisfy 0 ≤ xj < 2^(νj) for known
integers νj. Suppose we are given d equations

    αi Σ_{j=1}^{m} 2^(πj) xj + Σ_{j=1}^{li} ρi,j ki,j = βi − αi x̄ (mod p)

for 1 ≤ i ≤ d where αi ≠ 0 (mod p), and the ρi,j and βi are known integers. The unknown integers ki,j are
bounded by 0 ≤ ki,j < 2^(µi,j) where the µi,j are known. The extended hidden number problem (EHNP)
is to find x. The EHNP instance is represented by

    ( x̄, p, {πj, νj}_{j=1}^{m}, { αi, {ρi,j, µi,j}_{j=1}^{li}, βi }_{i=1}^{d} )
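To make the chunk decomposition concrete, here is a toy example in Python (all values our own, for illustration only) of writing x = x̄ + Σ 2^(πj) xj and turning a modular relation in x into a linear relation in the unknown chunks:

```python
# Toy decomposition: x has two unknown 4-bit chunks at bit offsets
# pi = (4, 12); everything else is the known part xbar.
x = 0xA5C3
pi, nu = (4, 12), (4, 4)
x1 = (x >> pi[0]) & (2**nu[0] - 1)
x2 = (x >> pi[1]) & (2**nu[1] - 1)
xbar = x - (x1 << pi[0]) - (x2 << pi[1])
assert x == xbar + 2**pi[0] * x1 + 2**pi[1] * x2

# A single EHNP-style relation: alpha * x = beta (mod p) becomes a linear
# equation in the unknown chunks x1, x2 with known right-hand side.
p, alpha = 65537, 12345
beta = alpha * x % p
assert (alpha * (2**pi[0] * x1 + 2**pi[1] * x2)) % p == (beta - alpha * xbar) % p
```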

As with the hidden number problem, we model the situation as a CVP instance. The main idea behind
the lattice basis used to solve the EHNP is similar to that of the regular HNP except the EHNP lattice
involves factors to deal with the varying sizes of the unknown chunks. For a δ > 0 (which will be chosen
later), we construct the EHNP lattice basis B:
 
p · Id
 
 
B= A X 
 
R K

with the following definitions:

        [ α1 2^(π1)   α2 2^(π1)   ···   αd 2^(π1) ]
    A = [ α1 2^(π2)   α2 2^(π2)   ···   αd 2^(π2) ]      X = diag( δ/2^(ν1), δ/2^(ν2), . . . , δ/2^(νm) )
        [     ⋮           ⋮                 ⋮      ]
        [ α1 2^(πm)   α2 2^(πm)   ···   αd 2^(πm) ]

        [ ρ1,1                    ]
        [  ⋮                      ]
        [ ρ1,l1                   ]
    R = [         ⋱               ]      K = diag( δ/2^(µ1,1), . . . , δ/2^(µ1,l1), . . . , δ/2^(µd,1), . . . , δ/2^(µd,ld) )
        [              ρd,1       ]
        [               ⋮         ]
        [              ρd,ld      ]

To understand what vector we should target with CVP, we rewrite the EHNP equations as

    αi Σ_{j=1}^{m} 2^(πj) xj + Σ_{j=1}^{li} ρi,j ki,j + ri p = βi − αi x̄

for integers ri. Now, consider the lattice vector u generated by the linear combination x which contains
the secret information:

    x = (r1, . . . , rd, x1, . . . , xm, k1,1, . . . , k1,l1, . . . , kd,1, . . . , kd,ld)

We have

    xB = u = ( β1 − α1 x̄, . . . , βd − αd x̄, x1 δ/2^(ν1), . . . , xm δ/2^(νm), k1,1 δ/2^(µ1,1), . . . , kd,ld δ/2^(µd,ld) )

Then, letting

    w = ( β1 − α1 x̄, . . . , βd − αd x̄, δ/2, . . . , δ/2 )

(with δ/2 repeated in the last m + L entries), we notice that w is close to the lattice vector u. Therefore,
by solving the CVP instance with w as the target vector, we may reveal the lattice vector u that encodes
the secret chunks xj in the (d + 1)st to (d + m)th entries.

It remains to choose an appropriate δ. The proof of correctness of this algorithm in [HR07] requires δ
to be chosen such that 0 < κD δ < 1 where

    L = Σ_{i=1}^{d} li ,    D = d + m + L ,    κD = (2^(D/4) (m + L)^(1/2) + 1)/2

When this is the case, the approach described above to solving the EHNP succeeds with probability

    P > 1 − (2κD)^(L+m) · 2^(τ+ξ) / p^d

where τ = Σ_{j=1}^{m} νj and ξ = Σ_{i=1}^{d} Σ_{j=1}^{li} µi,j. As expected, we have more success when we have more
equations (given by the parameter d), and less success when the number of chunks and the amount of
unknown information grows (given by the parameters m, νj, li, µi,j).

5 Lattice Attacks

In this section, we study some attacks on cryptosystems made possible by lattice reduction techniques.
We will focus on transforming cryptographic problems into lattice-based problems which we have already
learnt how to solve.

5.1 RSA Stereotyped Message

One of the first and most well-known applications of Coppersmith’s work on finding small roots of
modular polynomials is an attack on low-exponent RSA. This attack is due to Coppersmith [Cop97]
and directly applies Coppersmith’s method to recover the full plaintext of an RSA ciphertext when the
public exponent is 3 and at least two thirds of the plaintext is known.

5.1.1 Simple Case

Let N be an RSA modulus, e a (small) public exponent, and c = m^e (mod N ) a given ciphertext for
the message m. Suppose that m is of the form m = m′ + x0 for known m′. If x0 is small enough, we can
formulate the RSA equation as a small roots problem and try to solve it with Coppersmith’s method.
We have

    c = m^e (mod N )
    =⇒ c = (m′ + x0)^e (mod N )
    =⇒ (m′ + x0)^e − c = 0 (mod N )

Writing f (x) = (m′ + x)^e − c (mod N ), we have a modular polynomial of degree e in x. From Theorem
4.2, we can solve for x to recover the root x0 using Coppersmith’s method when |x0| ≤ (1/2)N^(1/e). In the case
where e = 3, this turns out to be quite practical and implies that the unknown part of the plaintext can
be recovered if we know the remaining two thirds.
Example 5.1. As a motivating example, consider the RSA modulus
N = 318110533564846846327681562969806306267050757360741
with e = 3 and the stereotyped message m = “my secret pin is XXXX′′ where XXXX is an unknown four
digit pin code. We are given the ciphertext
c = m^e = 312332738778608882264230787188876936416561274050341 (mod N )
Because we know a part of the plaintext, we can write m as m = m′ + x0 where m′ is the known part,
and x0 is the unknown part. For this expression to make sense, we convert the known message bytes “my
secret pin is \x00\x00\x00\x00” to an integer to get
m′ = 159995190028598044409165991369948950987562188537856
Since the pin is 4 digits, we may assume that x0 is smaller than (FFFFFFFF)16. Therefore, we have the
modular polynomial in x

    f (x) = (m′ + x)^e − c

which has a root |x0| < 2^32. This root is small compared to the size of N , so we can recover it by
Coppersmith’s theorem. Performing Coppersmith’s method on this polynomial recovers the small root
x0 = 825439031 whose bytes representation reveals the secret pin 1337.
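The byte-level arithmetic in this example can be reproduced as follows (verification only; the root itself is found with Coppersmith's method):

```python
known = b"my secret pin is \x00\x00\x00\x00"
m_prime = int.from_bytes(known, "big")

x0 = 825439031
assert x0.to_bytes(4, "big") == b"1337"  # the recovered root spells the pin
assert x0 < 2**32

# The known part plus the small root reassembles the full plaintext integer.
m = m_prime + x0
assert m == int.from_bytes(b"my secret pin is 1337", "big")
```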

5.1.2 Many Unknown Parts

With an extension of Coppersmith’s method to many variables, this stereotyped message attack can be
adapted to work in the more general situation where (non-contiguous) unknown parts of the plaintext are
scattered amongst a larger portion of known plaintext and a small public exponent is used. For example,
consider the message
consider the message
m = “my four letter username is XXXX and my secret four digit pin code is YYYY′′

where the XXXX and YYYY are unknown parts of the message we wish to recover. We can rewrite m as
m = m′ + 2^(tx) x0 + 2^(ty) y0 where x0 and y0 represent the unknown parts of the message and m′ represents
the known part:

m′ =“my four letter username is \x00\x00\x00\x00


and my secret four digit pin code is \x00\x00\x00\x00′′

The 2^(tx) and 2^(ty) factors exist to capture the positions of the unknown parts in the message. In this
example, tx = 42 × 8 = 336 and, since YYYY sits at the least significant end, ty = 0. To recover x0 and y0,
we construct the modular polynomial f (x, y) of degree e in x and y:

    f (x, y) = (m′ + 2^(tx) x + 2^(ty) y)^e − c

and use Coppersmith’s method for multivariate polynomials to recover the small root (x0 , y0 ).
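The placement factors 2^(tx) and 2^(ty) can be checked with concrete bytes (the username "abcd" and pin "1337" here are placeholders of our own choosing):

```python
full = b"my four letter username is abcd and my secret four digit pin code is 1337"
blank = b"my four letter username is \x00\x00\x00\x00 and my secret four digit pin code is \x00\x00\x00\x00"
assert len(full) == len(blank)

m = int.from_bytes(full, "big")
m_prime = int.from_bytes(blank, "big")
x0 = int.from_bytes(b"abcd", "big")
y0 = int.from_bytes(b"1337", "big")

tx, ty = 42 * 8, 0  # 42 bytes follow the username; the pin sits at the end
assert m == m_prime + 2**tx * x0 + 2**ty * y0
```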

5.2 Partial Key Exposure Attacks on RSA

In 1990, Wiener gave an attack on the RSA cryptosystem showing that it can be broken if the private
exponent d is smaller than N 0.25 [Wie90]. A decade later, the bound on d was improved by Boneh and
Durfee to d < N 0.292 with a new approach using lattice reduction techniques [BD99]. Although there
have been no new results improving this bound since, the Boneh-Durfee attack has been revisited many
times and it has been shown that the bound can be improved given the relaxed condition of partial
knowledge of the private exponent [BM03, Ern+05].

5.2.1 Boneh-Durfee Attack

Let N = pq be a (balanced) RSA modulus, e a public exponent, and d the corresponding private exponent
where ed = 1 (mod φ(N )) and φ(N ) = N − p − q + 1. Suppose that d < N^δ with δ < 0.292. From the
RSA key equation, we have

    ed = 1 (mod φ(N ))
    =⇒ ed = 1 + k(N − p − q + 1)
    =⇒ 1 + k(N − p − q + 1) = 0 (mod e)
    =⇒ 1 + 2k( (N + 1)/2 − (p + q)/2 ) = 0 (mod e)

where k = (ed − 1)/φ(N ), and the last line follows since N + 1 and p + q are necessarily always even. For
simplicity, we assume that e < φ(N ), and so we get

    k = (ed − 1)/φ(N ) < d
    =⇒ k < N^δ

Furthermore, since the modulus is balanced, then (p + q)/2 < N^0.5. Thus, we consider the bivariate
modular polynomial

    f (x, y) = 1 + 2x( (N + 1)/2 − y ) (mod e)

which has the “small” root (x, y) = (k, (p + q)/2). In [BD99], the authors use a Coppersmith approach
with specific shift polynomials to show that the root can be recovered as long as δ < 1 − 1/√2 ≈ 0.292.
A more generic approach to finding small roots, such as the strategy described in [JM06], may be used to
recover the roots, although with fewer guarantees on the bounds.

After finding the root (k, s), it is easy to recover the private key d by evaluating f (k, s) over the integers
to get ed = f (k, s) and then dividing the result by e.
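The algebra behind this recovery step can be checked on a toy key (values chosen purely for illustration; an actual attack needs the bivariate Coppersmith machinery):

```python
p, q = 59, 83
N = p * q            # 4897
phi = N - p - q + 1  # 4756, i.e. (p-1)(q-1)
d = 5                # artificially small private exponent
e = pow(d, -1, phi)  # 3805
assert e * d % phi == 1

k = (e * d - 1) // phi  # 4
s = (p + q) // 2        # 71

# f(x, y) = 1 + 2x((N + 1)/2 - y) evaluated over the integers at (k, s) is e*d.
f = 1 + 2 * k * ((N + 1) // 2 - s)
assert f == e * d
assert f % e == 0 and f // e == d  # recover d by dividing by e
```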

5.2.2 Partial Key Exposure Attack

We now give an overview of the partial key exposure attack first introduced in [Ern+05], following the
simplification of [SGM10] whose general approach is quite similar to that of the Boneh-Durfee attack.

In this setting, we have a balanced RSA modulus N = pq, a public exponent e, and the corresponding
private exponent d. Furthermore, we are given that d < N^δ as well as (δ − γ) log2 N MSBs of d. That is,
we know an integer d0 such that |d − d0| < N^γ. If δ < 0.292, then we can directly apply the Boneh-Durfee
attack without using the additional partial knowledge of d. Although the Boneh-Durfee attack cannot
be used for δ > 0.292, it is still helpful to revisit the reformulation of the RSA key equation used in the
attack:

    1 + 2k( (N + 1)/2 − (p + q)/2 ) = 0 (mod e)
The key idea is that we can use the known bits of d to approximate k so that the bounds on the roots
that we need to solve for are lowered. Let d = d0 + d1 where d1 is the remainder of the unknown bits of
d. Note that

    k = (ed − 1)/φ(N )
      = (e(d0 + d1) − 1)/φ(N )
      ≈ e d0 / N

So k0 = ⌊e d0 /N⌋ is a good approximation for k. In [BM03] it is shown that this approximation satisfies
|k − k0| < 4N^λ with λ = max{γ, δ − 1/2}. Therefore, writing k = k0 + k1 where k1 is the remainder of the
unknown bits of k, we find that the polynomial

    f (x, y) = 1 + 2(k0 + x)( (N + 1)/2 − y ) (mod e)

has the “small” root (x, y) = (k1, (p + q)/2) for appropriate δ and γ.
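The quality of the approximation k0 = ⌊e·d0/N⌋ is easy to observe numerically on a toy key (our own values; the formal bound |k − k0| < 4N^λ is from [BM03]):

```python
p, q = 65537, 65539  # a (tiny) balanced modulus
N = p * q
phi = N - p - q + 1
e = 65537
d = pow(e, -1, phi)

lsb_unknown = 16               # suppose the low 16 bits of d are unknown
d0 = d - (d % 2**lsb_unknown)  # the known MSB part of d

k = (e * d - 1) // phi
k0 = e * d0 // N
# k0 lands very close to k -- far inside the 4*N^lambda bound for this key.
assert abs(k - k0) <= 10
```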

5.3 ECDSA with Bad Nonces

5.3.1 ECDSA with k = z ⊕ d

In the ECDSA signature scheme, the generation of the nonce must be handled with a lot of care. It
must not only be secret, but also needs to be sufficiently random and reveal no additional relations or
information about the private key. In this section, we study an interesting situation in which the nonce
is chosen to be the result of XORing the private key with the hash of the message being signed. Since
the private key is unknown, such a nonce is unpredictable. However, we will see that using this nonce
generation reveals linear relations in the bits of the private key which can then be transformed into a
lattice-based problem instance. The approach we present comes from the challenge Signature from the
TSJ CTF 2022 Capture-The-Flag event [map22].

Suppose we are given ℓ ECDSA signatures (zi , ri , si ). Each signature is calculated using the nonce
ki = zi ⊕ d where d is the private signing key. zi is the hash of each message and the ri and si satisfy
the ECDSA equation
    si = ki^(−1) (zi + ri d) (mod n)

where n is the order of the elliptic curve. Rearranging, we can write this equation more cleanly as

    si ki = zi + ri d (mod n)

Note that the only unknowns in this equation are ki and d. We know that these are related however,
specifically

    ki = zi ⊕ d
       = zi + d − Σ_{j=1}^{m} 2^j zi,[j] d[j]

where x[j] denotes the jth (least significant) bit of x and m is the number of bits in zi and d. Substituting
this expression for ki into the ECDSA equation, we have

    si ( zi + d − Σ_{j=1}^{m} 2^j zi,[j] d[j] ) = zi + ri d (mod n)

Furthermore, if we write d in terms of its bits, we see that we have a linear equation where the only
unknowns are the d[j]:

    si ( zi + Σ_{j=1}^{m} 2^(j−1) d[j] − Σ_{j=1}^{m} 2^j zi,[j] d[j] ) = zi + ri Σ_{j=1}^{m} 2^(j−1) d[j] (mod n)

For simplicity, we can write this as

    Σ_{j=1}^{m} ai,j d[j] = bi (mod n)

where we understand ai,j and bi to be values that we can efficiently compute from zi, ri and si.

Noting that we have ℓ such relations, this resembles a multiple modular subset sum problem instance.
Since the bit size of the weights is the same as the bit size of d, the problem becomes solvable as soon as
ℓ ≥ 2, i.e. when the density is ≤ 0.5.
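The XOR-to-linear expansion above is easy to sanity-check numerically. The following Python sketch (an illustration for this tutorial, not the challenge solution script; the function name is ours) verifies the identity z ⊕ d = z + d − Σ_j 2^j z_[j] d_[j] on random inputs:

```python
import random

def xor_as_linear(z: int, d: int, m: int) -> int:
    """Expand z XOR d as z + d - sum_{j=1}^{m} 2^j * z_[j] * d_[j], with [j] the j-th LSB."""
    total = z + d
    for j in range(1, m + 1):
        # a carry of 2^j is lost exactly when both j-th bits are set
        if (z >> (j - 1)) & 1 and (d >> (j - 1)) & 1:
            total -= 1 << j
    return total

random.seed(0)
for _ in range(1000):
    z, d = random.getrandbits(256), random.getrandbits(256)
    assert xor_as_linear(z, d, 256) == z ^ d
```

Since each term 2^j z_[j] d_[j] is linear in the unknown bit d_[j], substituting this expansion into the ECDSA relation is what makes the coefficients a_{i,j} computable.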

5.3.2 ECDSA with Biased Nonces

As we have seen in the previous section, the proper generation of the nonce when using the ECDSA
signature scheme is crucial to its security. It turns out that even the slightest bias of less than one bit in
the nonce generation can lead to a full recovery of the private key [Ara+20]. The attack of [Ara+20] uses
a transformation into a hidden number problem instance and solves it using a Fourier analysis approach.
In contrast, the lattice reduction approach to solving the underlying HNP instance as described in [HS01]
will be more successful when a smaller number of signatures with larger biases are known. Such situations
are of practical interest and have been seen in real world implementations [Ben+14, BH19].

Suppose we are given ℓ ECDSA signatures (z_i, r_i, s_i), where z_i is the hash of each message and the r_i and s_i satisfy the ECDSA equation
\[ s_i = k_i^{-1}(z_i + r_i d) \pmod{n} \quad \implies \quad s_i k_i = z_i + r_i d \pmod{n} \]
where d is the private signing key, n is the order of the elliptic curve and ki is the biased nonce. In the
remainder of this section, we will study this attack with different interpretations of the word “biased”.
In each case, we perform some simple algebraic manipulation and transform the situation into a hidden
number problem instance.

Zero MSB In this case, we assume that the nonces are generated such that the top l bits of each k_i are zero. Therefore, we have |k_i| < 2^{\log_2 n - l}, and so
\[ s_i k_i = z_i + r_i d \pmod{n} \]
\[ \implies k_i - (s_i^{-1} r_i) d + (-s_i^{-1} z_i) = 0 \pmod{n} \]
is precisely a hidden number problem instance, which can be solved given large enough l and ℓ.
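To make the transformation concrete, the sketch below simulates signatures whose nonces have l zero top bits and reads off the HNP pairs (t_i, a_i) = (s_i^{-1} r_i, −s_i^{-1} z_i). It uses a small Mersenne prime in place of the curve order and random r_i rather than real curve points, which suffices to check the algebra; all names and parameters are ours:

```python
import random

n = 2**31 - 1                 # toy Mersenne prime standing in for the curve order
l = 12                        # number of zero top bits in each nonce
random.seed(1)
d = random.randrange(1, n)    # the hidden private key

hnp = []                      # pairs (t_i, a_i) of the HNP instance
while len(hnp) < 5:
    k = random.randrange(1, n >> l)        # nonce with l zero MSBs
    z = random.randrange(1, n)
    r = random.randrange(1, n)             # not derived from a real curve point here
    if (z + r * d) % n == 0:               # skip the degenerate case s = 0
        continue
    s = pow(k, -1, n) * (z + r * d) % n
    t = pow(s, -1, n) * r % n              # coefficient of the hidden number d
    a = -pow(s, -1, n) * z % n             # known constant term
    assert (k - t * d + a) % n == 0        # k_i - (s_i^-1 r_i) d + (-s_i^-1 z_i) = 0 (mod n)
    hnp.append((t, a))
```

Feeding these (t_i, a_i) pairs, together with the bound 2^{log_2 n − l} on the k_i, into an HNP solver recovers d.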

Zero LSB If instead the l LSBs of each k_i are zero, then we can write k_i = 2^l k_i' where |k_i'| < 2^{\log_2 n - l}. Then, we have
\[ s_i k_i = z_i + r_i d \pmod{n} \]
\[ \implies s_i (2^l k_i') = z_i + r_i d \pmod{n} \]
\[ \implies k_i' - (2^{-l} s_i^{-1} r_i) d + (-2^{-l} s_i^{-1} z_i) = 0 \pmod{n} \]
which is a hidden number problem instance.

Known MSB Suppose that we know the l MSBs of each k_i. Then we can write k_i = 2^{\log_2 n - l} t_i + k_i', where t_i is the known MSBs of k_i and |k_i'| < 2^{\log_2 n - l}. Then, we have
\[ s_i k_i = z_i + r_i d \pmod{n} \]
\[ \implies s_i (2^{\log_2 n - l} t_i + k_i') = z_i + r_i d \pmod{n} \]
\[ \implies k_i' - (s_i^{-1} r_i) d + (2^{\log_2 n - l} t_i - s_i^{-1} z_i) = 0 \pmod{n} \]
which is again a hidden number problem instance.

Shared MSB In this case, we are given that the l MSBs of each k_i are the same, but not necessarily what this shared value is. We write k_i = 2^{\log_2 n - l} t + k_i', where t is the unknown shared MSBs and |k_i'| < 2^{\log_2 n - l}. Substituting this expression for k_i into the ECDSA equation, we have
\[ s_i (2^{\log_2 n - l} t + k_i') = z_i + r_i d \pmod{n} \]
\[ \implies 2^{\log_2 n - l} t + k_i' - s_i^{-1} r_i d - s_i^{-1} z_i = 0 \pmod{n} \]
Therefore, to eliminate the 2^{\log_2 n - l} t term, we take the following ℓ − 1 relations for 2 ≤ i ≤ ℓ as the equations in our hidden number problem instance:
\[ (2^{\log_2 n - l} t + k_i' - s_i^{-1} r_i d - s_i^{-1} z_i) - (2^{\log_2 n - l} t + k_1' - s_1^{-1} r_1 d - s_1^{-1} z_1) = 0 \pmod{n} \]
\[ \implies (k_i' - k_1') - (s_i^{-1} r_i - s_1^{-1} r_1) d + (s_1^{-1} z_1 - s_i^{-1} z_i) = 0 \pmod{n} \]
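The differencing step can be checked numerically. The sketch below (our own simulation over a toy prime modulus, with random r_i instead of real curve points) generates nonces sharing their top l bits and verifies the differenced relation for each i ≥ 2:

```python
import random

n = 2**61 - 1                          # toy Mersenne prime in place of the curve order
l = 16                                 # number of shared (unknown) top bits
shift = n.bit_length() - l
random.seed(2)
d = random.randrange(1, n)             # the hidden private key
t_shared = random.getrandbits(l)       # unknown shared MSBs of every nonce

sigs = []
while len(sigs) < 6:
    k = (t_shared << shift) + random.randrange(1 << shift)   # nonce with shared MSBs
    z = random.randrange(1, n)
    r = random.randrange(1, n)
    if not 0 < k < n or (z + r * d) % n == 0:                # skip degenerate cases
        continue
    s = pow(k, -1, n) * (z + r * d) % n
    sigs.append((z, r, s, k))

z1, r1, s1, k1 = sigs[0]
k1_prime = k1 - (t_shared << shift)
for z, r, s, k in sigs[1:]:
    ti = (pow(s, -1, n) * r - pow(s1, -1, n) * r1) % n
    ai = (pow(s1, -1, n) * z1 - pow(s, -1, n) * z) % n
    ki_prime = k - (t_shared << shift)
    # (k_i' - k_1') - (s_i^-1 r_i - s_1^-1 r_1) d + (s_1^-1 z_1 - s_i^-1 z_i) = 0 (mod n)
    assert ((ki_prime - k1_prime) - ti * d + ai) % n == 0
```

Note that the hidden terms k_i' − k_1' are bounded by 2^{log_2 n − l + 1}, so the differencing costs roughly one bit of the bias.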

5.3.3 ECDSA Key Disclosure Problem

The biased nonce situation of the previous section is generalised by the (EC)DSA key disclosure problem
as described in [HR07]. In this setting, we are given (non-contiguous) chunks of both the nonces and the
private signing key. Such a scenario may be realistic, for example, if an attacker is able to recover some
bits of the nonces but not enough to apply the biased nonce attack and if some bits of the private key
are known (e.g. from a side channel). We will see how this very naturally translates into an extended
hidden number problem instance.

Suppose we are given ℓ ECDSA signatures (zi , ri , si ). zi is the hash of each message and the ri and si
satisfy the ECDSA equation
si = ki−1 (zi + ri d) (mod n)
where d is the private signing key, n is the order of the elliptic curve and ki is the per-signature nonce.
Furthermore, suppose that we know some (non-contiguous) chunks of each k_i and d. So, we can write
\[ d = \bar{d} + \sum_{j=1}^{m} 2^{\pi_j} d_j, \qquad 0 \le d_j < 2^{\nu_j} \]
\[ k_i = \bar{k}_i + \sum_{j=1}^{l_i} 2^{\lambda_{i,j}} k_{i,j}, \qquad 0 \le k_{i,j} < 2^{\mu_{i,j}} \]
where we know all of \bar{d}, \nu_j, \bar{k}_i, \pi_j, \lambda_{i,j} and \mu_{i,j}.

Substituting these expressions into the ECDSA equation, we get
\[ s_i \Big( \bar{k}_i + \sum_{j=1}^{l_i} 2^{\lambda_{i,j}} k_{i,j} \Big) = z_i + r_i \Big( \bar{d} + \sum_{j=1}^{m} 2^{\pi_j} d_j \Big) \pmod{n} \]
\[ \implies \sum_{j=1}^{m} r_i 2^{\pi_j} d_j + \sum_{j=1}^{l_i} (-2^{\lambda_{i,j}} s_i) k_{i,j} = (s_i \bar{k}_i - z_i) - r_i \bar{d} \pmod{n} \]
which is the EHNP instance represented by
\[ \Big( \bar{d},\, n,\, \{\pi_j, \nu_j\}_{j=1}^{m},\, \big\{ \alpha_i,\, \{\rho_{i,j}, \mu_{i,j}\}_{j=1}^{l_i},\, \beta_i \big\}_{i=1}^{\ell} \Big) \]
where
\[ \alpha_i = r_i, \qquad \rho_{i,j} = -2^{\lambda_{i,j}} s_i \pmod{n}, \qquad \beta_i = s_i \bar{k}_i - z_i \pmod{n} \]
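As a sanity check of these coefficient definitions, the sketch below simulates one signature over a toy prime modulus, hides a few bit chunks of d and k, and verifies the EHNP relation. The helper `split_known` and all parameters are ours, chosen only for illustration:

```python
import random

n = 2**61 - 1                                  # toy prime in place of the curve order
random.seed(3)

def split_known(x: int, hidden_chunks):
    """Split x into a known part x_bar plus hidden chunks (position, width, value)."""
    x_bar = x
    parts = []
    for pos, width in hidden_chunks:
        val = (x >> pos) & ((1 << width) - 1)
        x_bar -= val << pos
        parts.append((pos, width, val))
    return x_bar, parts

d = random.randrange(1, n)
d_bar, d_parts = split_known(d, [(10, 8), (40, 8)])   # two hidden 8-bit chunks of d

while True:
    z = random.randrange(1, n)
    r = random.randrange(1, n)
    k = random.randrange(1, n)
    if (z + r * d) % n != 0:                   # avoid the degenerate s = 0
        break
s = pow(k, -1, n) * (z + r * d) % n
k_bar, k_parts = split_known(k, [(0, 16)])     # one hidden 16-bit chunk of k

alpha = r                                      # alpha_i = r_i
rhos = [(-(1 << pos) * s) % n for pos, _, _ in k_parts]   # rho_ij = -2^lambda_ij s_i
beta = (s * k_bar - z) % n                     # beta_i = s_i k_bar_i - z_i

lhs = sum(alpha * (val << pos) for pos, _, val in d_parts)
lhs += sum(rho * val for rho, (_, _, val) in zip(rhos, k_parts))
assert (lhs - (beta - alpha * d_bar)) % n == 0
```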

5.4 Bleichenbacher’s PKCS#1 v1.5 Padding Oracle Attack

As textbook RSA is well known to be susceptible to a variety of attacks, padding schemes are used in
practice to preprocess the message before encryption. The PKCS#1 standard [KJR16] is one of the
most widely implemented standards for RSA encryption and digital signatures. In 1998, Bleichenbacher
presented an attack on version 1.5 [Kal98] of this standard which showed that an arbitrary ciphertext can
be decrypted given access to a decryption oracle that reveals whether or not the corresponding plaintext
has the correct format according to the padding scheme [Ble98]. Although the original attack description
by Bleichenbacher was not in terms of lattices, there is a lattice formulation described by Nguyen [Ngu09].
This approach does not improve the query complexity of the attack (in fact, it is slightly worse); however, it may be more intuitive to understand and is an interesting application of lattice reduction. Furthermore,
a lattice approach was used in an attack [Ron+19] which improved Bleichenbacher’s attack by using
parallelisation, yielding practical results against modern TLS implementations. In this section, we give
a brief overview of the PKCS#1 v1.5 padding scheme and a lattice attack against it.

5.4.1 PKCS#1 v1.5

In this section, we describe the parts of the PKCS#1 v1.5 standard for encryption necessary to understand
Bleichenbacher’s attack.

00 02 padding string 00 message M

Figure 12: PKCS#1 v1.5 block formatting for encryption.

Let N be an RSA modulus and e a public exponent. Let k be the length of N in bytes. In this
scheme, k must be at least 12 to allow for the fixed bytes in the block format and sufficient padding.
To encrypt a message M of length |M| bytes, a padding string PS consisting of k − 3 − |M| non-zero bytes is pseudorandomly generated. The padded encryption block EB is then computed as EB = 00||02||PS||00||M (where || denotes concatenation) as per Figure 12. This block is converted to an
integer m which is then encrypted with RSA to get the ciphertext c = me (mod N ).

Given a ciphertext c, the message is recovered by decrypting c with the private key d to get m = cd
(mod N ). If the ciphertext is PKCS conforming, then m can be converted to an encryption block starting
with 0002 followed by a non-zero padding string that is separated from the message by a zero byte. By
searching for the first zero byte starting from the padding string, the message is recovered by taking all
bytes following the found zero byte.
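The formatting and the conformance check can be modelled as follows. This is a simplified sketch, not a complete implementation of the standard: it enforces the 00 02 prefix, at least eight non-zero padding bytes, and the zero separator, and the function names are ours:

```python
import os

def pkcs1_v15_pad(message: bytes, k: int) -> bytes:
    """Encode message into a k-byte PKCS#1 v1.5 encryption block 00 02 PS 00 M."""
    if len(message) > k - 11:              # 3 fixed bytes plus at least 8 padding bytes
        raise ValueError("message too long")
    ps = b""
    while len(ps) < k - 3 - len(message):
        byte = os.urandom(1)
        if byte != b"\x00":                # padding bytes must be non-zero
            ps += byte
    return b"\x00\x02" + ps + b"\x00" + message

def is_pkcs_conforming(block: bytes, k: int) -> bool:
    """Model of the decryption-side format check used as Bleichenbacher's oracle."""
    if len(block) != k or block[0:2] != b"\x00\x02":
        return False
    sep = block.find(b"\x00", 2)           # first zero byte after the 00 02 prefix
    return sep != -1 and sep >= 10         # at least 8 non-zero padding bytes

k = 128                                    # block size for e.g. a 1024-bit modulus
eb = pkcs1_v15_pad(b"attack at dawn", k)
assert is_pkcs_conforming(eb, k)
assert not is_pkcs_conforming(b"\x00\x01" + eb[2:], k)
```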

5.4.2 The Attack (Lattice Version)

We now describe the lattice attack. The goal is to recover the message m from a ciphertext c = me
(mod N ). Assume that we have access to an oracle OB which returns whether or not a given ciphertext
is PKCS conforming. That is,
\[ O_B(c) = \begin{cases} 0 & \text{if } c \text{ is not PKCS conforming} \\ 1 & \text{if } c \text{ is PKCS conforming} \end{cases} \]

The existence of such an oracle is realistic, as many implementations throw an error when an RSA ciphertext is not PKCS conforming. Importantly, we note that, if O_B(c) = 1 for a given ciphertext, then the two most significant bytes of m are 00 and 02.

The key idea is to use the oracle to find many r_i such that O_B(r_i^e c) = O_B((r_i m)^e) = 1. When choosing r_i uniformly at random, we expect the oracle to return 1 with probability close to 1/256^2 = 1/65536. Now suppose we have such integers r_1, …, r_ℓ. Since each r_i m mod N starts with 00 02, we have
\[ 2B \le r_i m \bmod N < 3B \]

where B = 2^{l-16} and l is the length of N in bits. Note that B represents the bytes 00 01 left-shifted to fill the size of N. Rearranging, we get
\[ 0 \le (r_i m \bmod N) - 2B < B \]
\[ \implies r_i m - 2B = k_i \pmod{N} \]
\[ \implies k_i - r_i m + 2B = 0 \pmod{N} \]
where |k_i| < B. This is precisely a hidden number problem instance where m is the hidden number.
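One common way to embed such an instance into a lattice is a Boneh–Venkatesan-style basis; exact scalings vary between presentations, so treat the construction below as a sketch rather than the precise basis of Section 4.3.1. The short target vector (k_1, …, k_ℓ, mB/N, B) lies in the row span:

```python
from fractions import Fraction

def hnp_basis(rs, N, B):
    """Rows of a lattice containing (k_1, ..., k_l, m*B/N, B) for k_i = r_i*m - 2B mod N."""
    l = len(rs)
    dim = l + 2
    M = [[Fraction(0)] * dim for _ in range(dim)]
    for i in range(l):
        M[i][i] = Fraction(N)          # rows N*e_i absorb the mod-N reductions
        M[l][i] = Fraction(rs[i])      # multiplier row, taken m times
        M[l + 1][i] = Fraction(-2 * B) # constant offset row, taken once
    M[l][l] = Fraction(B, N)           # scales the m coordinate down to size B
    M[l + 1][l + 1] = Fraction(B)
    return M

# sanity check: the target vector is an integer combination of the rows
N, B, m = 65537, 256, 12345
rs = [3, 4321, 31337]
ks = [(r * m - 2 * B) % N for r in rs]
basis = hnp_basis(rs, N, B)
target = [Fraction(k) for k in ks] + [Fraction(m * B, N), Fraction(B)]
coeffs = [Fraction(k - (r * m - 2 * B), N) for k, r in zip(ks, rs)]
coeffs += [Fraction(m), Fraction(1)]
combo = [sum(c * basis[i][j] for i, c in enumerate(coeffs)) for j in range(len(rs) + 2)]
assert combo == target and all(c.denominator == 1 for c in coeffs)
```

When the |k_i| really are smaller than B, every coordinate of the target vector is at most B, so it is a candidate short vector for LLL to find.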

Naturally, we want to know how large ℓ needs to be for the HNP instance to be solvable. As in Section 4.3.1, the short vector we hope to recover by solving the HNP instance has length less than \sqrt{\ell + 2}\, B. From Theorem 3.6, a shortest vector will have length approximately \sqrt{\ell + 2}\, (B^2 N^{\ell-1})^{1/(\ell+2)}. Thus, we may expect LLL to successfully solve the HNP if \sqrt{\ell + 2}\, B is much less than this:
\[ \sqrt{\ell + 2}\, B \ll \sqrt{\ell + 2}\, (B^2 N^{\ell-1})^{1/(\ell+2)} \]
\[ \implies B \ll (B^2 2^{l(\ell-1)})^{1/(\ell+2)} \]
\[ \implies 2^{l-16} \ll (2^{2l-32}\, 2^{l(\ell-1)})^{1/(\ell+2)} \]
\[ \implies l - 16 \ll \frac{2l - 32 + l(\ell-1)}{\ell + 2} \]
\[ \implies l - 16 \ll \frac{l + l\ell - 32}{\ell + 2} \]
\[ \implies (l - 16)(\ell + 2) \ll l + l\ell - 32 \]
\[ \implies l\ell - 16\ell + 2l - 32 \ll l + l\ell - 32 \]
\[ \implies l \ll 16\ell \]

For an l = 512-bit modulus, the lattice attack works when ℓ > 40.
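The chain of bounds reduces to l ≪ 16ℓ, i.e. roughly ℓ ≳ l/16 signatures; the quoted ℓ > 40 for l = 512 leaves some slack beyond the bare crossover at ℓ = 32. A quick numeric check (our helper; the √(ℓ+2) factors cancel and are ignored):

```python
def gap_in_bits(l: int, num_sigs: int) -> float:
    """log2 of the shortest-vector estimate minus log2 of the target vector's size."""
    lhs = l - 16                                            # log2 B, B = 2^(l-16)
    rhs = (2 * l - 32 + l * (num_sigs - 1)) / (num_sigs + 2)
    return rhs - lhs

assert gap_in_bits(512, 32) == 0.0   # bare crossover at l/16 = 32 signatures
assert gap_in_bits(512, 40) > 0      # a few bits of slack at the quoted threshold
```

The gap grows slowly with ℓ (it tends to 16 bits as ℓ → ∞), so adding signatures beyond the threshold buys only modest extra margin.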

References
[Adl83] Leonard M. Adleman. “On Breaking Generalized Knapsack Public Key Cryptosystems”. In: Proceedings of the Fifteenth Annual ACM Symposium on Theory of Computing. STOC ’83. New York, NY, USA: Association for Computing Machinery, 1983, pp. 402–412. isbn: 0897910990. url: https://doi.org/10.1145/800061.808771.
[Ajt98] Miklós Ajtai. “The shortest vector problem in L2 is NP-hard for randomized reductions (extended abstract)”. In: Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing – STOC ’98. ACM Press, 1998. url: https://doi.org/10.1145/276698.276705.
[Ara+20] Diego F. Aranha et al. “LadderLeak: Breaking ECDSA with Less than One Bit of Nonce Leakage”. In: Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security. New York, NY, USA: Association for Computing Machinery, 2020, pp. 225–242. isbn: 9781450370899. url: https://doi.org/10.1145/3372297.3417268.
[Bab86] L. Babai. “On Lovász’ lattice reduction and the nearest lattice point problem”. In: Combinatorica 6.1 (Mar. 1986), pp. 1–13. url: https://doi.org/10.1007/bf02579403.
[BD99] Dan Boneh and Glenn Durfee. “Cryptanalysis of RSA with Private Key d Less than N^{0.292}”. In: Advances in Cryptology — EUROCRYPT ’99. Ed. by Jacques Stern. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999, pp. 1–11. isbn: 978-3-540-48910-8.
[Ben+14] Naomi Benger et al. ““Ooh Aah... Just a Little Bit”: A Small Amount of Side Channel Can Go a Long Way”. In: Cryptographic Hardware and Embedded Systems – CHES 2014. Springer Berlin Heidelberg, 2014, pp. 75–92. url: https://doi.org/10.1007/978-3-662-44709-3_5.
[BH19] Joachim Breitner and Nadia Heninger. “Biased Nonce Sense: Lattice Attacks Against Weak ECDSA Signatures in Cryptocurrencies”. In: Financial Cryptography and Data Security. Springer International Publishing, 2019, pp. 3–20. url: https://doi.org/10.1007/978-3-030-32101-7_1.
[Ble98] Daniel Bleichenbacher. “Chosen ciphertext attacks against protocols based on the RSA encryption standard PKCS #1”. In: Advances in Cryptology — CRYPTO ’98. Springer Berlin Heidelberg, 1998, pp. 1–12. url: https://doi.org/10.1007/bfb0055716.
[BM03] Johannes Blömer and Alexander May. “New Partial Key Exposure Attacks on RSA”. In: Advances in Cryptology – CRYPTO 2003. Springer Berlin Heidelberg, 2003, pp. 27–43. isbn: 978-3-540-45146-4.
[BV96] Dan Boneh and Ramarathnam Venkatesan. “Hardness of Computing the Most Significant Bits of Secret Keys in Diffie-Hellman and Related Schemes”. In: Advances in Cryptology — CRYPTO ’96. Springer Berlin Heidelberg, 1996, pp. 129–142. url: https://doi.org/10.1007/3-540-68697-5_11.
[Cop96] Don Coppersmith. “Finding a Small Root of a Univariate Modular Equation”. In: Proceedings of the 15th Annual International Conference on Theory and Application of Cryptographic Techniques. EUROCRYPT ’96. Springer-Verlag, 1996, pp. 155–165. isbn: 354061186X.
[Cop97] Don Coppersmith. “Small Solutions to Polynomial Equations, and Low Exponent RSA Vulnerabilities”. In: Journal of Cryptology 10.4 (Sept. 1997), pp. 233–260. url: https://doi.org/10.1007/s001459900030.
[Cor07] Jean-Sébastien Coron. “Finding Small Roots of Bivariate Integer Polynomial Equations: A Direct Approach”. In: Proceedings of the 27th Annual International Cryptology Conference on Advances in Cryptology. CRYPTO ’07. Berlin, Heidelberg: Springer-Verlag, 2007, pp. 379–394. isbn: 3540741429.
[Cos+92] Matthijs J. Coster et al. “Improved low-density subset sum algorithms”. In: Computational Complexity 2.2 (June 1992), pp. 111–128. url: https://doi.org/10.1007/bf01201999.
[CR88] B. Chor and R.L. Rivest. “A knapsack-type public key cryptosystem based on arithmetic in finite fields”. In: IEEE Transactions on Information Theory 34.5 (Sept. 1988), pp. 901–909. url: https://doi.org/10.1109/18.21214.
[DH20] Gabrielle De Micheli and Nadia Heninger. “Recovering cryptographic keys from partial information, by example”. In: Cryptology ePrint Archive (2020).
[Ern+05] Matthias Ernst et al. “Partial Key Exposure Attacks on RSA up to Full Size Exponents”. In: Advances in Cryptology – EUROCRYPT 2005. Springer Berlin Heidelberg, 2005, pp. 371–386. isbn: 978-3-540-32055-5.
[Fri+88] Alan M. Frieze et al. “Reconstructing Truncated Integer Variables Satisfying Linear Congruences”. In: SIAM Journal on Computing 17.2 (Apr. 1988), pp. 262–280. url: https://doi.org/10.1137/0217016.
[Gal12] Steven D. Galbraith. Mathematics of Public Key Cryptography. Cambridge University Press, 2012.
[How97] Nicholas Howgrave-Graham. “Finding small roots of univariate modular equations revisited”. In: Cryptography and Coding. Springer Berlin Heidelberg, 1997, pp. 131–142. url: https://doi.org/10.1007/bfb0024458.
[HR07] Martin Hlaváč and Tomáš Rosa. “Extended Hidden Number Problem and Its Cryptanalytic Applications”. In: Selected Areas in Cryptography. Springer Berlin Heidelberg, 2007, pp. 114–133. url: https://doi.org/10.1007/978-3-540-74462-7_9.
[HS01] N. A. Howgrave-Graham and N. P. Smart. “Lattice Attacks on Digital Signature Schemes”. In: Designs, Codes and Cryptography 23.3 (2001), pp. 283–290. url: https://doi.org/10.1023/a:1011214926272.
[JM06] Ellen Jochemsz and Alexander May. “A Strategy for Finding Roots of Multivariate Polynomials with New Applications in Attacking RSA Variants”. In: Advances in Cryptology – ASIACRYPT 2006. Springer Berlin Heidelberg, 2006, pp. 267–282. url: https://doi.org/10.1007/11935230_18.
[Kal98] Burt Kaliski. PKCS #1: RSA Encryption Version 1.5. RFC 2313. Mar. 1998. url: https://www.rfc-editor.org/info/rfc2313.
[Kan87] Ravi Kannan. “Minkowski’s Convex Body Theorem and Integer Programming”. In: Mathematics of Operations Research 12.3 (Aug. 1987), pp. 415–440. url: https://doi.org/10.1287/moor.12.3.415.
[KJR16] B. Kaliski, J. Jonsson, and A. Rusch. PKCS #1: RSA Cryptography Specifications Version 2.2. RFC 8017. Nov. 2016. url: https://doi.org/10.17487/rfc8017.
[KLL84] R. Kannan, A. K. Lenstra, and L. Lovász. “Polynomial factorization and nonrandomness of bits of algebraic and some transcendental numbers”. In: Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing – STOC ’84. ACM Press, 1984. url: https://doi.org/10.1145/800057.808681.
[LLL82] A. K. Lenstra, H. W. Lenstra, and L. Lovász. “Factoring polynomials with rational coefficients”. In: Mathematische Annalen 261.4 (Dec. 1982), pp. 515–534. url: https://doi.org/10.1007/bf01457454.
[LO85] J. C. Lagarias and A. M. Odlyzko. “Solving low-density subset sum problems”. In: Journal of the ACM 32.1 (Jan. 1985), pp. 229–246. url: https://doi.org/10.1145/2455.2461.
[map22] maple3142. TSJ CTF 2022 – Signature. 2022. url: https://github.com/maple3142/My-CTF-Challenges/tree/master/TSJ%20CTF%202022/Signature.
[May03] Alexander May. “New RSA vulnerabilities using lattice reduction methods”. PhD thesis. Paderborn University, 2003.
[MH78] R. Merkle and M. Hellman. “Hiding information and signatures in trapdoor knapsacks”. In: IEEE Transactions on Information Theory 24.5 (Sept. 1978), pp. 525–530. url: https://doi.org/10.1109/tit.1978.1055927.
[Ngu09] P. Q. Nguyen. “Public-Key Cryptanalysis”. In: Recent Trends in Cryptography. Ed. by I. Luengo. Vol. 477. Contemporary Mathematics. AMS–RSME, 2009.
[NS06] Phong Q. Nguyen and Damien Stehlé. “LLL on the Average”. In: Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2006, pp. 238–256. url: https://doi.org/10.1007/11792086_18.
[Odl91] A. M. Odlyzko. The Rise and Fall of Knapsack Cryptosystems. 1991. url: https://doi.org/10.1090/psapm/042/1095552.
[PZ16] Yanbin Pan and Feng Zhang. “Solving low-density multiple subset sum problems with SVP oracle”. In: Journal of Systems Science and Complexity 29.1 (Feb. 2016), pp. 228–242. url: https://doi.org/10.1007/s11424-015-3324-9.
[Ron+19] Eyal Ronen et al. “The 9 Lives of Bleichenbacher’s CAT: New Cache ATtacks on TLS Implementations”. In: 2019 IEEE Symposium on Security and Privacy (SP). IEEE, May 2019. url: https://doi.org/10.1109/sp.2019.00062.
[SGM10] Santanu Sarkar, Sourav Sen Gupta, and Subhamoy Maitra. “Partial Key Exposure Attack on RSA – Improvements for Limited Lattice Dimensions”. In: Progress in Cryptology – INDOCRYPT 2010. Springer Berlin Heidelberg, 2010, pp. 2–16. url: https://doi.org/10.1007/978-3-642-17401-8_2.
[Sun+21] Chao Sun et al. “Guessing Bits: Improved Lattice Attacks on (EC)DSA with Nonce Leakage”. In: IACR Transactions on Cryptographic Hardware and Embedded Systems (Nov. 2021), pp. 391–413. url: https://doi.org/10.46586/tches.v2022.i1.391-413.
[Wie90] Michael J. Wiener. “Cryptanalysis of short RSA secret exponents”. In: IEEE Transactions on Information Theory 36 (1990), pp. 553–558.
[Yas07] Yasuyuki Murakami and Takeshi Nasako. Knapsack Public-Key Cryptosystem Using Chinese Remainder Theorem. Cryptology ePrint Archive, Report 2007/107. https://ia.cr/2007/107. 2007.
