
1. Introduction: Efficient Primality Testing

1.1 Algorithms for the Primality Problem


A natural number n > 1 is called a prime number if it has no positive
divisors other than 1 and n. If n is not prime, it is called composite, and
can be written n = a · b for natural numbers 1 < a, b < n. Ever since this
concept was defined in ancient Greece, the primality problem
“Given a number n, decide whether n is a prime number or not”
has been considered a natural and intriguing computational problem. Here
is a simple algorithm for the primality problem:
Algorithm 1.1.1 (Trial Division)
Input: Integer n ≥ 2.
Method:
0  i: integer;
1  i ← 2;
2  while i · i ≤ n repeat
3      if i divides n
4        then return 1;
5      i ← i + 1;
6  return 0;

This algorithm, when presented with an input number n, gives rise to the
following calculation: In the loop in lines 2–5 the numbers i = 2, 3, . . . , ⌊√n⌋,
in this order, are tested for being a divisor of n. As soon as a divisor is
found, the calculation stops and returns the value 1. If no divisor is found,
the answer 0 is returned. The algorithm solves the primality problem in the
following sense:
n is a prime number if and only if Algorithm 1.1.1 returns 0.
This is because if n = a · b for 1 < a, b < n, then one of the factors a and b is
not larger than √n, and hence such a factor must be found by the algorithm.
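A minimal Python rendering of Algorithm 1.1.1 may make this concrete (our own sketch; the function name is ours, and the line numbers of the pseudocode are dropped):

def trial_division(n):
    # Algorithm 1.1.1: returns 1 as soon as a divisor in {2, ..., floor(sqrt(n))}
    # is found (n is composite), and 0 if no such divisor exists (n is prime).
    i = 2
    while i * i <= n:
        if n % i == 0:
            return 1
        i += 1
    return 0
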
For moderately large n this procedure may be used for a calculation by hand;
using a modern computer, it is feasible to carry it out for numbers with 20
or 25 decimal digits. However, when confronted with a number like


n = 74838457648748954900050464578792347604359487509026452654305481,
this method cannot be used, simply because it takes too long. The 62-digit
number n happens to be prime, so the loop runs for more than 10^31 rounds.
One might think of some simple tricks to speed up the computation, like
dividing by 2, 3, and 5 at the beginning, but afterwards not by any proper
multiples of these numbers. Even after applying tricks of this kind, and under
the assumption that a very fast computer is used that can carry out one trial
division in 1 nanosecond, say, a simple estimate shows that this would take
more than 10^13 years of computing time on a single computer.
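To see where this figure comes from (our own rough calculation, under the assumptions just stated): there are about √n ≈ 10^31 candidate divisors; skipping the multiples of 2, 3, and 5 still leaves a fraction (1/2) · (2/3) · (4/5) = 4/15 of them, i.e., roughly 2.7 · 10^30 trial divisions; at 10^9 divisions per second this takes about 2.7 · 10^21 seconds, and since a year has about 3.15 · 10^7 seconds, this amounts to roughly 8 · 10^13 years.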
Presented with such a formidably large number, or an even larger one with
several hundred decimal digits, naive procedures like trial division are not
helpful, and will never be even if the speed of computers increases by several
orders of magnitude and even if computer networks comprising hundreds of
thousands of computers are employed.
One might ask whether considering prime numbers of some hundred dec-
imal digits makes sense at all, because there cannot be any set of objects
in the real world that would have a cardinality that large. Interestingly, in
algorithmics and especially in cryptography there are applications that use
prime numbers of that size for very practical purposes. A prominent example
of such an application is the public key cryptosystem by Rivest, Shamir, and
Adleman [36] (the “RSA system”), which is based on our ability to create
random primes of several hundred decimal digits. (The interested reader may
wish to consult cryptography textbooks like [37, 40] for this and other exam-
ples of cryptosystems that use randomly generated large prime numbers.)
One may also look at the primality problem from a more theoretical point
of view. A long time before prime numbers became practically important as
basic building blocks of cryptographic systems, Carl Friedrich Gauss had
written:
“The problem of distinguishing prime numbers from composites, and
of resolving composite numbers into their prime factors, is one of
the most important and useful in all of arithmetic. . . . The dignity
of science seems to demand that every aid to the solution of such an
elegant and celebrated problem be zealously cultivated.” ([20], in the
translation from Latin into English from [25])
Obviously, Gauss knew the trial division method and also methods for finding
the prime decomposition of natural numbers. So it was not just any procedure
for deciding primality he was asking for, but one with further properties —
simplicity, maybe, and speed, certainly.

1.2 Polynomial and Superpolynomial Time Bounds


In modern language, we would probably say that Gauss asked for an efficient
algorithm to test a number for being a prime, i.e., one that solves the problem
fast on numbers that are not too large. But what does "fast" and "not too
large" mean? Clearly, for any algorithm the number of computational steps
made on input n will grow as larger and larger n are considered. It is the rate
of growth that is of interest here.
To illustrate a growth rate different from √n as in Algorithm 1.1.1, we
consider another algorithm for the primality problem (Lehmann [26]).
Algorithm 1.2.1 (Lehmann’s Primality Test)
Input: Odd integer n ≥ 3, integer ℓ ≥ 2.
Method:
0  a, c: integer; b[1..ℓ]: array of integer;
1  for i from 1 to ℓ do
2      a ← a randomly chosen element of {1, . . . , n − 1};
3      c ← a^{(n−1)/2} mod n;
4      if c ∉ {1, n − 1}
5        then return 1;
6        else b[i] ← c;
7  if b[1] = · · · = b[ℓ] = 1
8    then return 1;
9    else return 0;
The intended output of Algorithm 1.2.1 is 0 if n is a prime number and 1
if n is composite. The loop in lines 1–6 causes the same action to be carried
out ℓ times, for ℓ ≥ 2 a number given as input. The core of the algorithm is
lines 2–6. In line 2 a method is invoked that is important in many efficient
algorithms: randomization. We assume that the computer that carries out
the algorithm has access to a source of randomness and in this way can
choose a number a in {1, . . . , n − 1} uniformly at random. (Intuitively, we
may imagine it casts a fair "die" with n − 1 faces. In reality, of course, some
mechanism for generating "pseudorandom numbers" is used.) In the ith round
through the loop, the algorithm chooses a number a_i at random and calculates
c_i = a_i^{(n−1)/2} mod n, i.e., the remainder when a_i^{(n−1)/2} is divided by n. If c_i
is different from 1 and n − 1, then output 1 is given, and the algorithm stops
(lines 4 and 5); otherwise (line 6) c_i is stored in memory cell b[i]. If all of the
c_i's are in {1, n − 1}, the loop runs to the end, and in lines 7–9 the outcomes
c_1, . . . , c_ℓ of the ℓ rounds are looked at again. If n − 1 appears at least once,
output 0 is given; if all c_i's equal 1, output 1 is given.
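A Python sketch of Algorithm 1.2.1 might look as follows (our own rendering; the function name is ours, random.randrange(1, n) stands in for the idealized random choice from {1, . . . , n − 1}, and pow(a, (n − 1) // 2, n) computes a^{(n−1)/2} mod n by fast modular exponentiation):

import random

def lehmann_test(n, l):
    # Algorithm 1.2.1: n odd, n >= 3, l >= 2 rounds.
    # Returns 1 ("composite") or 0 ("prime").
    b = []
    for _ in range(l):
        a = random.randrange(1, n)      # a uniform in {1, ..., n-1}
        c = pow(a, (n - 1) // 2, n)     # c = a^((n-1)/2) mod n
        if c not in (1, n - 1):
            return 1                    # n is certainly composite
        b.append(c)
    if all(ci == 1 for ci in b):
        return 1                        # all residues were 1: declare composite
    return 0                            # n - 1 appeared at least once: declare prime

For example (using the error bounds derived below), lehmann_test(221, 20) reports the composite number 221 = 13 · 17 as composite with probability at least 1 − 2^{−20}, and lehmann_test(101, 20) errs on the prime 101 only with probability 2^{−20}.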
We briefly discuss how the output should be interpreted. Since the algo-
rithm performs random experiments, the result is a random variable. What
is the probability that we get the “wrong” output? We must consider two
cases.
Case 1: n is a prime number. (The desired output is 0.) We shall see
later (Sect. 6.1) that for n an odd prime exactly half of the elements a
of {1, . . . , n − 1} satisfy a^{(n−1)/2} mod n = n − 1, while the other half satisfies
a^{(n−1)/2} mod n = 1. This means that the loop runs through all ℓ rounds,
and that the probability that c_1 = · · · = c_ℓ = 1 and the wrong output 1 is
produced is 2^{−ℓ}.
Case 2: n is a composite number. (The desired output is 1.) There are two
possibilities. If there is no a in {1, . . . , n − 1} with a^{(n−1)/2} mod n = n − 1
at all, the output is guaranteed to be 1, which is the "correct" value. On
the other hand, it can be shown (see Lemma 5.3.1) that if there is some a
in {1, . . . , n − 1} that satisfies a^{(n−1)/2} mod n = n − 1, then more than half
of the elements in {1, . . . , n − 1} satisfy a^{(n−1)/2} mod n ∉ {1, n − 1}. This
means that the probability that the loop in lines 1–6 runs for ℓ rounds is no
more than 2^{−ℓ}. The probability that output 0 is produced cannot be larger
than this bound.
Overall, the probability that the wrong output appears is bounded by 2^{−ℓ}.
This can be made very small at the cost of a moderate number of repetitions
of the loop.
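For concreteness (our own numerical illustration of this bound): with ℓ = 50 rounds the error probability is at most 2^{−50} ≈ 9 · 10^{−16}, at the cost of only 50 modular exponentiations.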
Algorithm 1.2.1, our first “efficient” primality test, exhibits some features
we will find again and again in such algorithms: the algorithm itself is very
simple, but its correctness or error probability analysis is based on facts from
number theory referring to algebraic structures not appearing in the text of
the algorithm.
Now let us turn to the computational effort needed to carry out Algo-
rithm 1.2.1 on an input number n. Obviously, the only interesting part of the
computation is the evaluation of a^{(n−1)/2} mod n in line 3. By "modular arith-
metic" (see Sect. 3.3) we can calculate with remainders modulo n throughout,
which means that only numbers of size up to n^2 appear as intermediate re-
sults. Calculating a^{(n−1)/2} in the naive way by (n − 1)/2 − 1 multiplications
is hopelessly inefficient, even worse than the naive trial division method. But
there is a simple trick (“repeated squaring”, explained in detail in Sect. 2.3)
which leads to a method that requires at most 2 log n multiplications¹ and
divisions of numbers not larger than n^2. How long will this take in terms
of single-digit operations if we calculate using decimal notation for integers?
Multiplying an h-digit and an ℓ-digit number, by the simplest methods as
taught in school, requires not more than h · ℓ multiplications and c_0 · h · ℓ
additions of single decimal digits, for some small constant c_0. The number
|n|_10 of decimal digits of n equals ⌈log_10(n + 1)⌉ ≈ log_10 n, and thus the
number of elementary operations on digits needed to carry out Algorithm 1.2.1
on input n can be estimated from above by c(log_10 n)^3 for a suitable
constant c. We thus see that Algorithm 1.2.1 can be carried out on a fast
computer in reasonable time for numbers with several thousand digits.
¹ In this text, ln x denotes the logarithm of x to the base e, while log x denotes
the logarithm of x to the base 2.
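The repeated squaring idea itself fits into a few lines of Python (our own sketch, not the version from Sect. 2.3): the exponent is processed bit by bit, and every intermediate result is reduced modulo n, so only numbers below n^2 ever occur.

def power_mod(a, e, n):
    # Compute a^e mod n by repeated squaring: one squaring per binary digit
    # of e, plus one extra multiplication per 1-digit, each followed by a
    # reduction mod n; intermediate products stay below n^2.
    result = 1
    a = a % n
    while e > 0:
        if e & 1:                   # current binary digit of e is 1
            result = (result * a) % n
        a = (a * a) % n             # square for the next binary digit
        e >>= 1
    return result

Python's built-in pow(a, e, n) performs the same computation.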
As a natural measure of the size of the input we could take the number
|n|_10 = ⌈log_10(n + 1)⌉ ≈ log_10 n of decimal digits needed to write down n.
However, closer to the standard representation of natural numbers in
computers, we take the number |n| = |n|_2 = ⌈log(n + 1)⌉ of digits of the binary
representation of n, which differs from log n by at most 1. Since
(log n)/(log_10 n) is the constant (ln 10)/(ln 2) ≈ 3.322 ≈ 10/3, we have
|n|_10 ≈ (3/10) log n. (For example, a number with 80 binary digits has about
24 decimal digits.) Similarly, as an elementary operation we view the addition
or the multiplication of two bits. A rough estimate on the basis of the naive
methods shows that certainly c · ℓ bit operations are sufficient to add, subtract,
or compare two ℓ-bit numbers; for multiplication and division we are on the
safe side if we assume an upper bound of c · ℓ^2 bit operations, for some
constant c. Assume now an algorithm A is given that performs T_A(n)
elementary operations on input n. We consider possible bounds on T_A(n)
expressed as f_i(log n), for some functions f_i : N → R; see Table 1.1. The table
lists the bounds we get for numbers with about 60, 150, and 450 decimal digits,
and it gives the binary length of numbers we can treat within 10^12 and 10^20
computational steps.

i   f_i(x)            f_i(200)        f_i(500)        f_i(1500)        s_i(10^12)        s_i(10^20)
1   c · x             200c            500c            1,500c           10^12/c           10^20/c
2   c · x^2           40,000c         250,000c        2.2c · 10^6      10^6/c^{1/2}      10^10/c^{1/2}
3   c · x^3           8c · 10^6       1.25c · 10^8    3.4c · 10^9      10^4/c^{1/3}      4.6 · 10^6/c^{1/3}
4   c · x^4           1.6c · 10^9     6.2c · 10^10    5.1c · 10^12     1,000/c^{1/4}     100,000/c^{1/4}
5   c · x^6           6.4c · 10^13    1.6c · 10^16    1.1c · 10^19     100/c^{1/6}       2150/c^{1/6}
6   c · x^9           5.1c · 10^20    2.0c · 10^24    3.8c · 10^28     22/c^{1/9}        165/c^{1/9}
7   x^{2 ln ln x}     4.7 · 10^7      7.3 · 10^9      4.3 · 10^12      1,170             22,000
8   c · 2^{√x}        18,000c         5.4c · 10^6     4.55c · 10^11    1,600             4,400
9   c · 2^{x/2}       1.3c · 10^30    1.6c · 10^60    2.6c · 10^120    80                132

Table 1.1. Growth functions for operation bounds. f_i(200), f_i(500), f_i(1500) denote
the bounds obtained for 200-, 500-, and 1500-bit numbers; s_i(10^12) and s_i(10^20)
are the maximal numbers of binary digits admissible so that an operation bound
of 10^12 resp. 10^20 is guaranteed.

We may interpret the figures in this table in a variety of ways. Let us
(very optimistically) assume that we run our algorithm on a computer or
a computer network that carries out 1,000 bit operations in a nanosecond.
Then 10^12 steps take about 1 second (feasible), and 10^20 steps take a little
more than 3 years (usually infeasible). Considering the rows for f_1 and f_2
we note that algorithms that take only a linear or quadratic number of op-
erations can be run for extremely large numbers within a reasonable time. If
the bounds are cubic (as for Algorithm 1.2.1, f_3), numbers with thousands
of digits pose no particular problem; for polynomials of degree 4 (f_4), we
begin to see a limit: numbers with 30,000 decimal digits are definitely out
of reach. Polynomial operation bounds with larger exponents (f_5 or f_6) lead
to situations where the length of the numbers that can be treated is already
severely restricted: with (log n)^9 operations we may deal with one 7-digit
number in 1 second; treating a single 50-digit number takes years. Bounds
f_7(log n) = (log n)^{2 ln ln log n}, f_8(log n) = c · 2^{√(log n)}, and f_9(log n) = c · √n
exceed any polynomial in log n for sufficiently large n. For numbers with small
binary length log n, however, some of these superpolynomial bounds may still be
smaller than high-degree polynomial bounds, as the comparison between f_6,
f_7, and f_8 shows. In particular, note that for log n = 180,000 (corresponding
to a 60,000-digit number) we have 2 ln ln(log n) < 5, so f_7(log n) < (log n)^5.
The bound f_9(log n) = c · √n, which belongs to the trial division method,
is extremely bad; only very short inputs can be treated.
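The comparison is easy to reproduce; the following throwaway Python lines (our own illustration, with the constant c set to 1) evaluate f_6, f_7, and f_8 from Table 1.1 for a 500-bit number:

import math

x = 500                                  # binary length of the input
f6 = x ** 9                              # high-degree polynomial bound
f7 = x ** (2 * math.log(math.log(x)))    # x^(2 ln ln x)
f8 = 2 ** math.sqrt(x)                   # 2^(sqrt(x))
print(f"{f6:.1e}  {f7:.1e}  {f8:.1e}")   # about 2.0e+24  7.3e+09  5.4e+06
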
Summing up, we see that algorithms with a polynomial bound with a truly
small exponent are useful even for larger numbers. Algorithms with polyno-
mial time bounds with larger exponents may become impossible to carry out
even for moderately large numbers. If the time bound is superpolynomial,
treating really large inputs is usually out of the question. From a theoretical
perspective, it has turned out to be useful to draw a line between computa-
tional problems that admit algorithms with a polynomial operation bound
and problems that do not have such algorithms, since for large enough n,
every polynomial bound will be smaller than every superpolynomial bound.
This is why the class P, to be discussed next, is of such prominent importance
in computational complexity theory.

1.3 Is PRIMES in P?

In order to formulate what exactly the question “Is PRIMES in P?” means,
we must sketch some concepts from computational complexity theory. Tra-
ditionally, the objects of study of complexity theory are “languages” and
“functions”. A nonempty finite set Σ is regarded as an alphabet, and one
considers the set Σ ∗ of all finite sequences or words over Σ. The most impor-
tant alphabet is the binary alphabet {0, 1}, where Σ ∗ comprises the words

ε (the empty word), 0, 1, 00, 01, 10, 11, 000, 001, 010, 011, 100, 101, . . . .

Note that natural numbers can be represented as binary words, e.g., by means
of the binary representation: bin(n) denotes the binary representation of n.
Now decision problems for numbers can be expressed as sets of words over
{0, 1}, e.g.
SQUARE = {bin(n) | n ≥ 0 is a square}
= {0, 1, 100, 1001, 10000, 11001, 100100, 110001, 1000000, . . .}
codes the problem “Given n, decide whether n is a square of some number”,
while
PRIMES = {bin(n) | n ≥ 2 is a prime number}
        = {10, 11, 101, 111, 1011, 1101, 10001, 10011, 10111, 11101, . . .}
codes the primality problem. Every subset of Σ ∗ is called a language. Thus,
SQUARE and PRIMES are languages.
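As a small illustration (our own, not part of the text), the first few words of these languages can be generated directly from the definitions; in Python, format(n, 'b') plays the role of bin(n):

def bin_word(n):
    # the word bin(n) over the alphabet {0, 1}
    return format(n, 'b')

# the first few elements of SQUARE and of PRIMES, as listed above
square = [bin_word(k * k) for k in range(9)]
primes = [bin_word(n) for n in range(2, 31)
          if all(n % d != 0 for d in range(2, n))]
print(square)   # ['0', '1', '100', '1001', '10000', '11001', '100100', '110001', '1000000']
print(primes)   # ['10', '11', '101', '111', '1011', '1101', '10001', '10011', '10111', '11101']
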
In computability and complexity theory, algorithms for inputs that are
words over a finite alphabet are traditionally formalized as programs for a
particular machine model, the Turing machine. Readers who are interested
in the formal details of measuring the time complexity of algorithms in terms
of this model are referred to standard texts on computational complexity
theory such as [23]. Basically, the model charges one step for performing one
operation involving a fixed number of letters, or digits.
We say that a language L ⊆ Σ* is in class P if there is a Turing machine
(program) M and a polynomial p such that on input x ∈ Σ* consisting of
m letters the machine M makes no more than p(m) steps and arrives at the
answer 1 if x ∈ L and at the answer 0 if x ∉ L.
For our purposes it is sufficient to note the following: if we have an algo-
rithm A that operates on numbers (like Algorithms 1.1.1 and 1.2.1) so that
the total number of operations that are performed on input n is bounded
by c(log n)^k for constants c and k and so that the intermediate results never
become larger than n^k, then the language

{bin(n) | A on input n outputs 0}

is in class P.
Thus, to establish that PRIMES is in P it is sufficient to find an algorithm
A for the primality problem that operates on (not too large) numbers with
a polynomial operation bound. The question of whether such an algorithm
might exist had been open ever since the terminology for asking the question
was developed in the 1960s.

1.4 Randomized and Superpolynomial Time Algorithms for the Primality Problem

In a certain sense, the search for polynomial time algorithms for the primality
problem was already successful in the 1970s, when two very efficient methods
for testing large numbers for primality were proposed, one by Solovay and
Strassen [39], and one by Rabin [35], based on previous work by Miller [29].
These algorithms have the common feature that they employ a random exper-
iment (just like Algorithm 1.2.1); so they fall into the category of randomized
algorithms. For both these algorithms the following holds.
• If the input is a prime number, the output is 0.
• If the input is composite, the output is 0 or 1, and the probability that the
outcome is 0 is bounded by 1/2.
On input n, both algorithms use at most c · log n arithmetic operations on
numbers not larger than n^2, for some constant c; i.e., they are about as
fast as Algorithm 1.2.1. If the output is 1, the input number n is definitely
composite; we say that the calculation proves that n is composite, and yields
a certificate for that fact. If the result is 0, we do not really know whether
the input number is prime or not. Certainly, an error bound of up to 1/2 is not
satisfying. However, by repeating the algorithm up to ℓ times, hence spending
ℓ · c · log n arithmetic operations on input n, the error bound can be reduced
to 2^{−ℓ}, for arbitrary ℓ. And if we choose to carry out ℓ = d log n repetitions,
the algorithms will still have a polynomial operation bound, but the error
bound drops to n^{−d}, extremely small for n with a hundred or more decimal
digits.
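The repetition argument can be written as a small generic wrapper (our own sketch; the parameter one_round stands for a single run of such a test, returning 1 only when it has found a certificate that n is composite):

def amplified_test(n, one_round, l):
    # Repeat a one-sided test l times.  Output 1 ("composite") is always
    # correct; for composite n the wrong output 0 survives all l rounds
    # with probability at most 2**(-l).
    for _ in range(l):
        if one_round(n) == 1:
            return 1            # certificate found: n is definitely composite
    return 0                    # no certificate found: declare n "prime"

With l = d · ⌈log n⌉ the total work remains polynomial in log n while the error bound drops to n^{−d}, as described above.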
These randomized algorithms, along with others with a similar behavior
(e.g., the Lucas Test and the Frobenius Test, described in [16, Sect. 3.5]),
are sufficient for solving the primality problem for quite large inputs for all
practical purposes, and algorithms of this type are heavily used in practice.
For practical purposes, there is no reason to worry about the risk of giving
output 0 on a composite input n, as long as the error bound is adjusted so
that the probability for this to happen is smaller than 1 in 10^20, say, and it is
guaranteed that the algorithm exhibits the behavior as if truly random coin
tosses were available. Such a small error probability is negligible in relation to
other (hardware or software) error risks that are inevitable with real computer
systems. The Miller-Rabin Test and the Solovay-Strassen Test are explored
in detail later in this text (Chaps. 5 and 6).
Still, from a theoretical point of view, the question remained whether there
was an absolutely error-free algorithm for solving the primality problem with
a small time bound. Here one may consider
(a) algorithms without randomization (called deterministic algorithms to
emphasize the contrast), and
(b) randomized algorithms with expected polynomial running time which
never give erroneous outputs.
As for (a), the (up to the year 2002) fastest known deterministic algorithm
for the primality problem was proposed in 1983 by Adleman, Pomerance,
and Rumeley [2]. It has a time bound of f_7^c(log n), where f_7^c(x) = x^{c ln ln x}
for some constant c > 0, which makes it slightly superpolynomial. Practical
implementations have turned out to be successful for numbers with many
hundreds of decimal digits [12].
As for (b), in 1987 Adleman and Huang [1] proposed a randomized algo-
rithm that has a (high-degree) polynomial time bound and yields primality
certificates, in the following sense: On input n, the algorithm outputs 0 or 1.
If the output is 1, the input n is guaranteed to be prime, and the calculation
carried out by the algorithm constitutes a proof of this fact. If the input n is
a prime number, then the probability that the wrong answer 0 is given is at
most 1/n. Algorithms with this kind of behavior are called primality proving
algorithms.
The algorithm of Adleman and Huang (A_AH) may be combined with, for
example, the Solovay-Strassen Test (A_SS) to obtain an error-free randomized
algorithm for the primality problem with expected polynomial time bound,
as follows: Given an input n, run both algorithms on n. If one of them gives
a definite answer (A_AH declares that n is a prime number or A_SS declares
that n is composite), we are done. Otherwise, keep repeating the procedure
until an answer is obtained. The expected number of repetitions is smaller
than 2 no matter whether n is prime or composite. The combined algorithm
gives the correct answer with probability 1, and the expected time bound is
polynomial in log n.
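In code, the combination is a simple loop (our own sketch; aah_round and ass_round are hypothetical stand-ins for single runs of A_AH and A_SS, each returning 1 only together with a proof of its claim):

def error_free_primality(n, aah_round, ass_round):
    # Las Vegas combination: the answer is always correct, only the running
    # time is random; each iteration gives a definite answer with probability
    # greater than 1/2, so the expected number of iterations is smaller than 2.
    while True:
        if aah_round(n) == 1:    # proof that n is prime
            return 0
        if ass_round(n) == 1:    # proof that n is composite
            return 1
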
There are further algorithms that provide proofs for the primality of an
input number n, many of them quite successful in practice. For much more
information on primality testing and primality proving algorithms see [16].
(A complete list of the known algorithms as of 2004 may be found in the
overview paper [11].)

1.5 The New Algorithm

Such was the state of affairs when in August 2002 M. Agrawal, N. Kayal, and
N. Saxena published their paper “PRIMES is in P”. In this paper, Agrawal,
Kayal, and Saxena described a deterministic algorithm for the primality prob-
lem, and a polynomial bound of c · (log n)^12 · (log log n)^d was proved for the
number of bit operations, for constants c and d.
In the time analysis of the algorithm, a deep result of Fouvry [19] from
analytic number theory, published in 1985, was used. This result concerns
the density of primes of a special kind among the natural numbers. Unfor-
tunately, the proof of Fouvry’s theorem is accessible only to readers with a
quite strong background in number theory. In discussions following the pub-
lication of the new algorithm, some improvements were suggested. One of
these improvements (by H.W. Lenstra [10, 27]) leads to a slightly modified
algorithm with a new time analysis, which avoids the use of Fouvry’s the-
orem altogether, and makes it possible to carry out the time analysis and
correctness proof solely by basic methods from number theory and algebra.
The new analysis even yields an improved bound of c · (log n)^{10.5} · (log log n)^d
on the number of bit operations. Employing Fouvry's result one obtains the
even smaller bound c · (log n)^{7.5} · (log log n)^d.
Experiments and number-theoretical conjectures make it seem likely that
the exponent in the complexity bound can be chosen even smaller, about 6
instead of 7.5. The reader may consult Table 1.1 to get an idea for num-
bers of which order of magnitude the algorithm is guaranteed to terminate
in reasonable time. Currently, improvements of the new algorithm are be-
ing investigated, and these may at some time make it competitive with the
primality proving algorithms currently in use. (See [11].)
Citing the title of a review of the result [13], with the improved and
simplified time analysis the algorithm by Agrawal, Kayal, and Saxena appears
even more a “Breakthrough for Everyman”: a result that can be explained
to interested high-school students, with a correctness proof and time analysis
that can be understood by everyone with a basic mathematical training as
acquired in the first year of studying mathematics or computer science. It
is the purpose of this text to describe this amazing and impressive result
in a self-contained manner, along with two randomized algorithms (Solovay-
Strassen and Miller-Rabin) to represent practically important primality tests.
The book covers just enough material from basic number theory and
elementary algebra to carry through the analysis of these algorithms, and so
frees the reader from collecting methods and facts from different sources.

1.6 Finding Primes and Factoring Integers

In cryptographic applications, e.g., in the RSA cryptosystem [36], we need
to be able to solve the prime generation problem, i.e., produce multidigit
randomly chosen prime numbers. Given a primality testing algorithm A with
one-sided error, like the Miller-Rabin Test (Chap. 5), one may generate a
random prime in [10^s, 10^{s+1} − 1] as follows: Choose an odd number a from
this interval at random; run A on a. If the outcome indicates that a is prime,
output a, otherwise start anew with a new random number a.
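A Python sketch of this generator (our own; primality_test is a placeholder for an algorithm A such as the Miller-Rabin Test, with the convention used throughout this chapter that output 0 means "believed prime"):

import random

def random_prime(s, primality_test):
    # Generate a (probable) prime with s + 1 decimal digits,
    # i.e., from the interval [10^s, 10^(s+1) - 1].
    while True:
        a = random.randrange(10 ** s, 10 ** (s + 1))   # random number in the interval
        a |= 1                                         # make the candidate odd
        if primality_test(a) == 0:                     # A believes a is prime
            return a
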
For this algorithm to succeed we need to have some information about the
density of prime numbers in [10^s, 10^{s+1} − 1]. It is a consequence of Chebychev's
Theorem 3.6.3 below that the fraction of prime numbers in [10^s, 10^{s+1} − 1]
exceeds c/s, for some constant c > 0. This implies that the expected number of
trials needed until the randomly chosen number a is indeed a prime number is
no larger than s/c. The expected computation cost for obtaining an output
is then no larger than s/c times the cost of running algorithm A. If the
probability that algorithm A declares a composite number a prime is no larger
than 2^{−ℓ}, then the probability that the output is composite is no larger than
2^{−ℓ} · s/c, which can be made as small as desired by choosing ℓ large enough.
We see that the complexity of generating primes is tightly coupled with the
complexity of primality testing. Thus, in practice, the advent of the primality
test of Agrawal, Kayal, and Saxena has not changed much with respect to the
problem of generating primes, since it is much slower than the randomized
algorithms and the error probability can be made so small that it is irrelevant
from the point of view of the applications.
On the other hand, for the security of many cryptographic systems it is
important that the factoring problem
Given a composite number n, find a proper factor of n
is not easily solvable for n sufficiently large. An introduction to the subject
of factoring is given in, for example, [41]; an in-depth treatment may be
found in [16]. As an example, we mention one algorithm from the family of
the fastest known factorization algorithms, the "number field sieve", which
has a superpolynomial running time bound of c · e^{d·(ln n)^{1/3}·(ln ln n)^{2/3}}, for a
constant d a little smaller than 1.95 and some c > 0. Using algorithms like
this, one has been able to factor single numbers of more than 200 decimal
digits.
It should be noted that with respect to factoring (and to the security
of cryptosystems that are based on the supposed difficulty of factoring) no
change is to be expected as a consequence of the new primality test. This
algorithm shares with all other fast primality tests the property that if it
declares an input number n composite, in most cases it does so on the basis
of indirect evidence, having detected a property of n that prime numbers cannot
have. Such a property usually does not help in finding a proper factor of n.

1.7 How to Read This Book

Of course, the book may be read from cover to cover. In this way, the reader
is led on a guided tour through the basics of algorithms for numbers, of
number theory, and of algebra (including all the proofs), as far as they are
needed for the analysis of the three primality tests treated here.
Chapter 2 should be checked for algorithmic notation and basic algo-
rithms for numbers. Readers with some background in basic number the-
ory and/or algebra may want to read Sects. 3.1 through 3.5 and Sects. 4.1
through 4.3 only cursorily to make sure they are familiar with the (standard)
topics treated there. Section 3.6 on the density bounds for prime numbers
and Sect. 4.4 on the fact that in finite fields the multiplicative group is cyclic
are a little more special and provide essential building blocks of the analysis
of the new primality test by Agrawal, Kayal, and Saxena.
Chapters 5 and 6 treat the Miller-Rabin Test and the Solovay-Strassen
Test in a self-contained manner; a proof of the quadratic reciprocity law,
which is used for the time analysis of the latter algorithm, is provided in
Appendix A.3. These two chapters may be skipped by readers interested
exclusively in the deterministic primality test.
Chapter 7 treats polynomials, in particular polynomials over finite fields
and the technique of constructing finite fields by quotienting modulo an ir-
reducible polynomial. Some special properties of the polynomial X^r − 1 are
developed there. All results compiled in this section are essential for the anal-
ysis of the deterministic primality test, which is given in Chap. 8.
Readers are invited to send information about mistakes, other suggestions
for improvements, or comments directly to the author’s email address:
[email protected]

A list of corrections will be held on the webpage


http://eiche.theoinf.tu-ilmenau.de/kt/pbook
