David Quarel
June 2017
Acknowledgements
I wish to thank my supervisor Tim Trudgian. His advice and weekly meetings
provided much needed support whenever I’d get stuck on a problem or become
disheartened. I find myself in the most fortunate position of finishing honours in
mathematics with the same person as when I started it, way back in 2013 when
I took MATH1115.¹
Having the most engaging lecturer during my first year of undergraduate
helped to foster my interest in mathematics, and I’ve yet to see any other lecturer
at this university take the time to write letters of congratulations to those that
did well in their course.
¹ I still remember the hilarious anecdote in class about cutting a cake “coaxially”.
Abstract
The Goldbach conjecture states that every even number can be decomposed as the
sum of two primes. Let D(N ) denote the number of such prime decompositions
for an even N. It is known that D(N) can be bounded above by

D(N) \le C^* \frac{N}{\log^2 N} \prod_{\substack{p \mid N \\ p > 2}} \left(1 + \frac{1}{p-2}\right) \prod_{p > 2} \left(1 - \frac{1}{(p-1)^2}\right) = C^* \Theta(N),
based on work done by Cheer and Goldston [6]. For each interval [j, j + 1], they
expressed ω(u) as a Taylor expansion about u = j + 1. We expanded about the point u = j + 0.5, so ω(u) was never evaluated more than 0.5 away from the centre of the Taylor expansion, which gave much stronger error bounds.
Issues arose while using this Taylor expansion to compute the required in-
tegrals for Chen’s constant, so we proceeded with solving the above differential
equation to obtain ω(u), and then integrating the result. Although the values
that were obtained undershot Wu’s results, we pressed on and refined Wu’s work
by discretising his integrals with finer granularity. The improvements to Chen’s
constant were negligible (as predicted by Wu). This provides experimental evi-
dence, but not a proof, that were Wu’s integrals computed on smaller intervals
in exact form, the improvement to Chen’s constant would be similarly negligible.
Thus, any substantial improvement on Chen’s constant likely requires a radically
different method to what Wu provided.
Contents
Acknowledgements v
Abstract vii
5 Numerical computations 41
5.1 Justifying Integration Method . . . . . . . . . . . . . . . . . . . . 41
5.2 Interpolating Wu’s data . . . . . . . . . . . . . . . . . . . . . . . 48
6 Summary 51
6.1 Buchstab function . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6.2 Chen’s constant . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6.3 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
A 55
A.1 Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Bibliography 65
Notation and terminology
Notation
GC Goldbach conjecture.
f(x) ≪ g(x)   g is an asymptotic upper bound for f; that is, there exist constants M and c such that for all x > c, |f(x)| ≤ M|g(x)|.
f(x) ≫ g(x)   g is an asymptotic lower bound for f; that is, there exist constants M and c such that for all x > c, |f(x)| ≥ M|g(x)|. Equivalent to g ≪ f.
f(x) ∼ g(x)   g is an asymptotic tight bound for f; that is, lim_{n→∞} f(n)/g(n) = 1. This implies both f ≪ g and g ≪ f.
P≤n   Set of all numbers with at most n prime factors. Also denoted Pn for brevity.
A≤x {a : a ∈ A, a ≤ x}.
γ Euler–Mascheroni constant.
1.1 Origin
One of the oldest and most difficult unsolved problems in mathematics is the
Goldbach conjecture (sometimes called the Strong Goldbach conjecture), which
originated during correspondence between Christian Goldbach and Leonhard Euler
in 1742. The Goldbach conjecture states that “Every even integer N > 2 can be
expressed as the sum of two primes” [17].
For example,
4 = 2 + 2,    8 = 5 + 3,    10 = 7 + 3 = 5 + 5.
Evidently, this decomposition need not be unique. The Goldbach Conjecture has
been verified by computer search for all even N ≤ 4 × 10^{18} [12], but the proof
for all even integers remains open.∗ Some of the biggest steps towards solving
the Goldbach conjecture include Chen’s theorem [44] and the Ternary Goldbach
conjecture [23].
36 = 31 + 5 = 29 + 7 = 23 + 13 = 19 + 17,
66 = 61 + 5 = 59 + 7 = 53 + 13 = 47 + 19 = 43 + 23 = 37 + 29,
90 = 83 + 7 = 79 + 11 = 73 + 17 = 71 + 19 = 67 + 23 = 61 + 29 = 59 + 31 = 53 + 37 = 47 + 43.
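These counts can be verified directly. A minimal brute-force sketch in Python (our own code, not from the thesis appendix):

```python
def is_prime(n):
    # Trial division suffices for the small inputs used here.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_count(N):
    # D(N): number of unordered decompositions N = p + q with p <= q both prime.
    return sum(1 for p in range(2, N // 2 + 1)
               if is_prime(p) and is_prime(N - p))

print([goldbach_count(N) for N in (36, 66, 90)])  # [4, 6, 9]
```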
Intuitively, we would expect that as N grows large, D(N ) should likewise grow
large. The extended Goldbach conjecture [20] asks how D(N ) grows asymptot-
ically. As expected, the conjectured formula for D(N ) grows without bound as
N increases (Figure 1.1).
We can see from the plots that the points of D(N ) cluster into bands, and
2Θ(N) also shares this property (Figure 1.2). The plot of D(N) is called “Goldbach's Comet” [14], and has many interesting properties. We plot D(N), and colour each point red, green or blue according as N/2 mod 3 is 0, 1 or 2 (Figure 1.3) [14]. We can see the bands of colours are visibly separated. If D(N)
is plotted only for prime multiples of an even number [2], say
N = 12 × 2, 12 × 3, 12 × 5, . . .
Figure 1.2: Plot of 2Θ(N ), the conjectured tight bound to D(N ), for N ≤ 25000.
Figure 1.3: Plot of D(N ) for N ≤ 25000, points coloured based on N/2 mod 3.
odd numbers. They also specified how D3 (n) (the number of ways n can be
decomposed as the sum of three primes) grows asymptotically
D_3(n) \sim C_3 \frac{n^2}{\log^3 n} \prod_{\substack{p \mid n \\ p\ \mathrm{prime}}} \left(1 - \frac{1}{p^2 - 3p + 3}\right),   (1.1)

where

C_3 = \prod_{\substack{p > 2 \\ p\ \mathrm{prime}}} \left(1 + \frac{1}{(p-1)^3}\right).
Using the Taylor expansion of exp(x) and the fact that every term in the sum is
positive, we obtain the inequality
1 + x \le 1 + x + \frac{x^2}{2!} + \cdots = \sum_{n=0}^{\infty} \frac{x^n}{n!} = \exp(x).
We also note that the product over all primes dividing n in (1.1) grows slowly compared to the main term, as for all n ≥ 2,

n^2 - 3n + 3 \ge \frac{n}{2},

1 - \frac{1}{n^2 - 3n + 3} \ge 1 - \frac{2}{n},

\prod_{\substack{p \mid n \\ p\ \mathrm{prime}}} \left(1 - \frac{1}{p^2 - 3p + 3}\right) \ge \prod_{\substack{p \mid n \\ p\ \mathrm{prime}}} \left(1 - \frac{2}{p}\right) \ge \prod_{\substack{2 < p \le n \\ p\ \mathrm{prime}}} \left(1 - \frac{2}{p}\right).
We obtain

\prod_{\substack{2 < p \le n \\ p\ \mathrm{prime}}} \left(1 - \frac{2}{p}\right) \ge \frac{1}{2} \prod_{\substack{2 < p \le n \\ p\ \mathrm{prime}}} \left(1 - \frac{1}{p}\right)^2 \sim \frac{2e^{-2\gamma}}{\log^2 n},

and hence

D_3(n) \gg \frac{n^2}{\log^5 n}.
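The decay rate of this product can be checked numerically. The following Python sketch (our own) computes ∏_{2<p≤n}(1 − 2/p) for n = 10^5 and confirms that, scaled by log²n, it stays above the constant 2e^{−2γ} appearing in the bound:

```python
import math

def primes_up_to(n):
    # Simple Sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p in range(2, n + 1) if sieve[p]]

n = 10 ** 5
prod = 1.0
for p in primes_up_to(n):
    if p > 2:
        prod *= 1 - 2 / p

gamma = 0.5772156649015329
scaled = prod * math.log(n) ** 2
print(scaled)   # comfortably above 2*exp(-2*gamma) ~ 0.63
```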
This gives a stronger version of the TGC, as it implies that the number of different
ways that an odd integer can be decomposed as the sum of three primes can
be arbitrarily large. In 1937, Vinogradov improved this result by removing the
dependency on GRH [50].
An explicit value for how large N needs to be before the TGC holds was found by Borozdin, who showed that N > 3^{3^{15}} is sufficient [16]. In 1997, the TGC was shown to be conditionally true for all odd N > 5 by Deshouillers et al. [11], under the assumption of GRH. The unconditional bound was further improved by Liu and Wang [34] in 2002, who showed that Borozdin's constant can be reduced to N > e^{3100}. Thus, in principle, all that was needed to prove the TGC was to check
that it held for all N < e^{3100} ≈ 10^{1346}. Given that the number of particles in
the universe is ≈ 10^{80}, no improvement to computational power would likely help.
Further mathematical work was necessary.
In 2013, Helfgott reduced the constant sufficiently to allow a computer-assisted search to fill the gap, obtaining N > 10^{27} [23]. When combined with his work numerically verifying the TGC for all N < 8.875 × 10^{30} [23], Helfgott succeeded in proving the TGC.
D_4(n) \sim C_4 \frac{n^3}{3 \log^4 n} \prod_{\substack{p \mid n \\ p > 2}} \left(1 + \frac{1}{(p-2)(p^2 - 2p + 2)}\right),   (1.5)

where

C_4 = \prod_{p > 2} \left(1 - \frac{1}{(p-1)^4}\right).
1.4. EXTENDED GOLDBACH CONJECTURE 7
The asymptotic formulae for D3 (N ) and D4 (N ) are very similar (checking con-
vergence and the long term behaviour of D4 (N ) is the same proof as for D3 (N )).
Hardy–Littlewood claim that one can easily generalise their work to obtain an
asymptotic formula for Dr (N ) for r > 2. However, for r = 2, the results do not
easily generalise:
“It does not fail in principle, for it leads to a definite result which
appears to be correct; but we cannot overcome the difficulties of the
proof...”[20, p. 32]
The following asymptotic formula for D(N) was thus conjectured [20]:

D(N) \sim 2\Theta(N),   (1.6)

where

\Theta(N) := \frac{C_N N}{\log^2 N}, \qquad C_N := C_0 \prod_{\substack{p \mid N \\ p > 2}} \left(1 + \frac{1}{p-2}\right), \qquad C_0 := \prod_{p > 2} \left(1 - \frac{1}{(p-1)^2}\right).
Now C0 denotes the twin primes constant (see 1.10), and each term in the infinite
product in CN is greater than 1. Thus, CN > C0 , which provides an asymptotic
lower bound of

D(N) \gg \frac{N}{\log^2 N}.   (1.7)
Hence, (1.6) implies the Goldbach conjecture (for large N ), which should indicate
the difficulty of the problem.
There has been progress in using (1.6) as a way to construct an upper bound
for D(N ), by seeking the smallest values of C ∗ (hereafter referred to as Chen’s
constant) such that
D(N) \le C^* \Theta(N).^{‡}   (1.8)
Upper bounds for C ∗ have been improved over the years, but recent improvements
have been small† (see Table 1.1). Recent values have been obtained by various
sieve theory methods by constructing large sets of weighted inequalities (the one
in Wu’s paper has 21 terms!). However, the reductions in C ∗ are minuscule.
One would suspect that any further improvements would require a radically new
method, instead of complicated inequalities with more terms.
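To see how much room the bound (1.8) leaves, the following Python sketch (our own; the truncation point for the infinite product C0 is an arbitrary choice) compares D(N) with Θ(N) for a sample N:

```python
import math

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def D(N):
    # Number of decompositions N = p + q with p <= q both prime.
    return sum(1 for p in range(2, N // 2 + 1)
               if is_prime(p) and is_prime(N - p))

def theta(N, cutoff=10**4):
    # Theta(N) = C_N * N / log^2 N, with the twin primes product C_0
    # truncated at `cutoff` (our choice; the tail is negligible here).
    C0 = 1.0
    for p in range(3, cutoff, 2):
        if is_prime(p):
            C0 *= 1 - 1 / (p - 1) ** 2
    CN = C0
    for p in range(3, N + 1, 2):
        if N % p == 0 and is_prime(p):
            CN *= 1 + 1 / (p - 2)
    return CN * N / math.log(N) ** 2

N = 10000
d_val, t_val = D(N), theta(N)
print(d_val, t_val)   # D(N) sits between Theta(N) and 7.8209 * Theta(N)
```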
Wu Dong Hua [53] claimed to have C ∗ ≤ 7.81565, but some functions he
defined to compute C ∗ failed to have some necessary properties [48].
‡ Some authors instead include the factor of 2 inside C∗, so they instead assert D(N) ∼ Θ(N) for (1.6), and D(N) ≤ 2C∗Θ(N) for (1.8).
† We remind the reader that C∗ is conjectured to be 2.
Table 1.1: Historical upper bounds on Chen's constant C∗.

C∗       Year   Author
16 + ε   1949   Selberg [45]
12       1964   Pan [37]
8        1966   Bombieri & Davenport [3]
7.8342   1978   Chen [7]
7.8209   2004   Wu [54]
That is, we can express N as the sum of two primes and at most K powers of two
(we refer to K as the Linnik–Goldbach constant). In 1953, Linnik [30] proved
that there exists some finite K such that the statement holds, but did not include
an explicit value for K. (See Table 1.2 for historical improvements on K.) The
methods of Heath-Brown and Puchta [22], and of Pintz and Ruzsa [40] show that
K satisfies (1.9) if
\lambda^{K-2} < \frac{C_3}{(C^* - 2)C_2 + cC_0^{-1} \log 2}.
∗ The infinite product for C0 is easily verified as convergent, as for all integers n > 2, 0 < 1 − 1/(n−1)² < 1.
Now, C2 is defined by

C_2 = \sum_{d=0}^{\infty} \frac{|\mu(2d+1)|}{(2d+1)\,\ell(2d+1)} \prod_{\substack{p \mid d \\ p > 2}} \frac{1}{p-2},

where

\ell(d) = \min_{v \in \mathbb{N}} \{v : 2^v \equiv 1 \bmod d\},
and µ(x) is the Möbius function. It has been shown by Khalfalah–Pintz [39] that
and obtain unconditionally that K ≥ 11.0953, a near miss for K = 11. Platt–
Trudgian remark that any further improvements in estimating C3 using their
method with more computational power would be limited. Obtaining K = 11 by
improving Chen’s constant (assuming all others constants used by Platt–Trudgian
are the same) would require C ∗ ≤ 7.73196, which is close to the best known value
of C ∗ ≤ 7.8209. This provides a motivation for reducing C ∗ .
Romanov [43] proved in 1934 that R− > 0, though he did not provide an
estimate on the value of R. An explicit lower bound on R was not shown un-
til 2004, with Chen–Sun [8] who proved R− ≥ 0.0868. Habsieger–Roblot [18]
improved both bounds, obtaining 0.0933 < R− ≤ R+ < 0.49045. Pintz [38]
obtained R− ≥ 0.09368, but under the assumption of Wu Dong Hua’s value of
Chen’s constant C ∗ ≤ 7.81565, the proof of which is flawed [48].
Since 2n ≠ p + 2^k for an odd prime p, it is trivial to show that R+ ≤ 1/2. Van der Corput [49] and Erdős [13] proved in 1950 that R+ < 0.5. In 1983, Romani
[42] computed A(N)/N for all N < 10^7 and noted that the minima were located
when N was a power of two. He then constructed an approximation to A(N ), by
using an approximation to the prime counting function π(x),

\pi(x) \sim \operatorname{Li}(x) := \int_2^x \frac{dt}{\log t} \sim \sum_{m=1}^{\infty} \frac{(m-1)!\, x}{\log^m x},
then using the Taylor expansion of Li(x) to obtain a formula for A(N ) with
† Some authors denote R = lim_{n→∞} 2A(N)/N (and similarly for R+ and R−), in which case the trivial bound is R+ ≤ 1.
∗ This was also proved independently by Pintz and Ruzsa [40] in 2003. They also claim to have K ≤ 8, but this is yet to appear in the literature.
unknown coefficients b_i,

A(x) = Rx + \sum_{i=1}^{\infty} b_i \frac{x}{(\log x)^i}.
By using the precomputed values for A(N ), the unknown coefficients may be
estimated, thus extrapolating values for A(N ). Using this method, Romani con-
jectured that R ≈ 0.434.
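The counting function A(N) behind Romanov's constant is easy to compute by brute force. The sketch below is our own; we assume the convention n = p + 2^k with k ≥ 1, which matters for small cases:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def A(N):
    # Count integers n <= N expressible as n = p + 2^k, p prime, k >= 1
    # (the convention for k is our assumption, not stated in the text).
    reachable = set()
    k = 1
    while 2 ** k < N:
        for p in range(2, N - 2 ** k + 1):
            if is_prime(p):
                reachable.add(p + 2 ** k)
        k += 1
    return len(reachable)

print(A(20), A(20) / 20)   # 12 of the first 20 integers are representable
```

Note that the even representable numbers in this range (4, 6, 10, 18) are all of the form 2 + 2^k, consistent with the observation that even n = p + 2^k forces p = 2.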
Bombieri's probabilistic approach [42], which randomly generates probable primes in the interval [1, n] (as they are computationally faster to find than true primes, and pseudoprimes‡ are uncommon enough that this does not severely alter the results), with the distribution chosen to closely match that of the primes, is also used to compute R. The values obtained closely match Romani's work, and so it is concluded that R ≈ 0.434 is a reasonable estimate.
The lower bound to Romanov's constant R−, the Linnik–Goldbach constant
K and Chen’s constant C ∗ are all related. There is a very complicated connection
relating R− and K [38], but here we prove a simple property relating R− and K.
Proof. If R− > 1/4, then for any given even N, more than 1/4 of the numbers up to N can be written as the sum of an odd prime and a power of two. No even n can be written as n = p + 2^k, since p is odd; so more than 1/2 of the odd numbers up to N can be written in this way. If we construct the set of pairs of odd integers

(1, N-1),\ (3, N-3),\ \ldots,\ \left(\frac{N-r}{2},\ \frac{N+r}{2}\right),

where r = N mod 4, then by the pigeonhole principle there must exist a pair (k, N−k) such that

k = p_1 + 2^{v_1}, \qquad N-k = p_2 + 2^{v_2}.

Therefore

N = k + (N-k) = p_1 + p_2 + 2^{v_1} + 2^{v_2}.
This can be combined with the proposition above to obtain a relationship be-
tween C ∗ and K, although Pintz also shows a weaker estimate on C ∗ is sufficient.
‡ A composite number that a probabilistic primality test incorrectly asserts is prime.
As an explicit lower bound for R did not appear until 2004, and this bound
is much lower than the expected 0.434 given by Romani, attempting to improve
K by improving R− appears a difficult task.
where Θ(N ) is the same function used for the conjectured tight bound of D(N )
(see 1.6). We have the asymptotic lower bound (see 1.7)
\Theta(N) \gg \frac{N}{\log^2 N},

so this implies that, for all sufficiently large even N,

N = p + q \quad \text{for some prime } p \text{ and some } q \in P_2.
In 2015, Yamada [55] proved that letting N0 = exp exp 36 is sufficient for the
above to hold.
Chapter 2

Preliminaries for Wu's paper
2.1 Summary
Wu [54] proved that C ∗ ≤ 7.8209. He obtained this result by expressing the
Goldbach conjecture as a linear sieve, and finding an upper bound on the size
of the sieved set. The upper bound constructed is very complicated, using a
series of weighted inequalities with 21(!) terms. The approximation to the sieve
is computed using numerical integration, weighting the integrals over judiciously chosen intervals. Only a few (nine) intervals are used to discretise the integration, thereby reducing the problem to a linear optimisation with 9 equations.
“If we divide the interval [2, 3] into more than 9 subintervals, we can
certainly obtain a better result. But the improvement is minuscule.”[54,
p. 253]
The main objective of this paper is to quantify how “minuscule” the improvement
is.
numbers are primes below (p_n + 2)², as the first number k that is composite and not struck out by this procedure must share no prime factors with P. So k = p_{n+1}² ≥ (p_n + 2)².
This method of generating a large set of prime numbers is efficient: by striking off the multiples of each prime from a list of numbers, the only operations used are addition and memory lookup. Contrast this with primality testing via trial division, as integer division is slower than integer addition. O'Neill [36] showed that to generate all primes up to a bound n, the Sieve of Eratosthenes takes O(n \log\log n) operations, whereas repeated trial division would take O(n^{3/2}/\log^2 n) operations. Moreover, the Sieve of Eratosthenes does not use division in computing the primes, only addition.
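The striking-out procedure described above can be sketched in a few lines of Python (our own implementation); note that the inner loop advances by repeated addition, with no division:

```python
def eratosthenes(n):
    # Strike out multiples of each prime; only addition and table lookups.
    is_p = [True] * (n + 1)
    is_p[0] = is_p[1] = False
    p = 2
    while p * p <= n:
        if is_p[p]:
            m = p * p
            while m <= n:        # step through multiples by repeated addition
                is_p[m] = False
                m += p
        p += 1
    return [k for k in range(2, n + 1) if is_p[k]]

print(eratosthenes(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```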
Sieve theory is concerned with estimating the size of the sifted set. Formally, given a finite set A ⊂ ℕ, a set of primes P and some number z ≥ 2, we define the sieve function [21]

S(A; P, z) := \#\{a \in A : p \nmid a \text{ for all } p \in P \text{ with } p < z\};

i.e. by taking the set A and removing the elements that are divisible by some prime p ∈ P with p < z, the number of elements left over is S(A; P, z). Removing the elements of A in this manner is called sifting. Analogous to a sieve that sorts through a collection of objects and only lets some through, the unsifted numbers are those in A that have no prime factor p ∈ P with p < z. Many problems in number theory can be re-expressed as a sieve, and thus attacked. We provide an example related to the Goldbach conjecture included in [21]. For a given even N, let

A = \{n(N-n) : n \in \mathbb{N},\ 2 \le n \le N-2\},

and take P to be the set of all primes.
Then

S(A; P, \sqrt{N}) = \#\{n(N-n) : n \in \mathbb{N},\ p < \sqrt{N} \Rightarrow (n(N-n), p) = 1\}.

Now since (n(N-n), p) = 1 implies (n, p) = 1 and (N-n, p) = 1 for all p < \sqrt{N}, both n and N-n must be prime: if not, then n or N-n is composite with every prime factor at least \sqrt{N} (hence at least two of them), which implies n \ge N (impossible) or N-n \ge N (also impossible). Since each surviving product n(N-n) is counted once for the unordered pair \{n, N-n\}, we have

S(A; P, \sqrt{N}) = \#\{p \in P : (N-p) \in P,\ \sqrt{N} \le p \le N/2\}.

Now for all primes p, p \le N/2 if and only if p \le N-p, hence

S(A; P, \sqrt{N}) \le \#\{p \in P : (N-p) \in P,\ p \le N/2\} = \#\{p \in P : (N-p) \in P,\ p \le N-p\} = D(N).
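This chain can be checked numerically. In the following Python sketch (our own code) we sift A = {n(N−n)} for N = 90 by the primes below √N and compare the number of surviving products with D(N):

```python
import math

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

N = 90
z = math.isqrt(N)                               # sift by primes p < sqrt(N)
small_primes = [p for p in range(2, z + 1) if is_prime(p)]
A = {n * (N - n) for n in range(2, N - 1)}      # distinct products
survivors = {a for a in A if all(a % p != 0 for p in small_primes)}

# D(N): unordered prime decompositions of N.
D = sum(1 for p in range(2, N // 2 + 1) if is_prime(p) and is_prime(N - p))
print(len(survivors), D)   # 8 <= 9
```

The one missing pair is (7, 83): the prime 7 is below √90, so its product is struck out by the sieve.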
So this particular choice of sieve is a lower bound for D(N). If we have a particular (possibly infinite) subset A of the integers, we may wish to take all numbers in A less than or equal to x, denoted∗ A≤x,

A_{\le x} = \{a \in A : a \le x\},
and ask how fast #A≤x grows with x. For example, if A were the set of all even
numbers,
A = {n ∈ N : n even} = {0, 2, 4, . . .},
the size of A≤x would be computed as

\#A_{\le x} = \#\{a \in A : a \le x\} = \left\lfloor \frac{x}{2} \right\rfloor + 1.
It is usually more difficult to determine how a set grows, and an exact formula is usually impossible, so the best case is typically an asymptotic tight bound. For example, if A = P, the set of all primes, then the prime number theorem [1, p. 9] gives

\#\{p \in P : p \le x\} = \pi(x) \sim \frac{x}{\log x}.
The main aim of sieve theory is to decompose #A≤N as the sum of a main term
X and an error term R, such that X dominates R for large N . We can then
obtain an asymptotic formula for #A≤N [19]. This partially accounts for the
many proofs in number theory that only work for some extremely large N , where
N can be gargantuan (see the Ternary Goldbach conjecture in Chapter 1), as the
main term may only dominate the error term for very large values. In some cases
it can only be shown that the main term eventually dominates the error term, with no explicit value given for how large N must be before this occurs.
Now, for a given subset A≤x , we choose a main term† X that is hopefully a
good approximation of #A≤x .
We denote

A_d = \{a \in A_{\le x} : a \equiv 0 \bmod d\},

for some square-free number d. For each prime p, we assign a value for the function ω(p), with the constraint that 0 ≤ ω(p) < p, so that

\#A_p \approx \frac{\omega(p)}{p} X.
∗ In the literature, it is common to use A instead of A≤x, and it is implicitly understood that A only includes the numbers less than some x.
† Note that since X depends on x, we would normally write X(x), but we wished to stick to convention.
Defining

\omega(1) = 1, \qquad \omega(d) = \prod_{p \mid d} \omega(p), \qquad R_d = \#A_d - \frac{\omega(d)}{d} X,

we have that

\#A_d = \frac{\omega(d)}{d} X + R_d,

where (hopefully) the main term \frac{\omega(d)}{d} X dominates the error term R_d. Given a set of prime numbers P, we define

P(z) = \prod_{p \in P,\ p < z} p.
We can rewrite the sieve using two identities of the Möbius function and
multiplicative functions.
Proof. The case n = 1 is trivial. For n > 1, write the prime decomposition of n as n = p_1^{e_1} \cdots p_r^{e_r}. Since μ(d) = 0 when d is divisible by a square, we need only consider divisors d of n of the form d = p_1^{a_1} \cdots p_r^{a_r}, where each a_i is either zero or one (i.e., divisors that are products of distinct primes). Enumerating all possible products made from \{p_1, \ldots, p_r\}, we obtain

\sum_{d \mid n} \mu(d) = \mu(1) + \mu(p_1) + \cdots + \mu(p_r) + \mu(p_1 p_2) + \cdots + \mu(p_{r-1} p_r) + \cdots + \mu(p_1 \cdots p_r)
= 1 + \binom{r}{1}(-1) + \binom{r}{2}(+1) + \cdots + \binom{r}{r}(-1)^r
= \sum_{i=0}^{r} \binom{r}{i} (-1)^i = (1-1)^r = 0.
Proof. Let

g(n) = \sum_{d \mid n} \mu(d) f(d).

Now since μ(d) is zero for d = p^2, p^3, \ldots, the only non-zero terms arising from the divisors of p_i^{a_i} are those for 1 and p_i. Hence

g(n) = \prod_{i=1}^{r} \left(\mu(1) f(1) + \mu(p_i) f(p_i)\right) = \prod_{i=1}^{r} (1 - f(p_i)) = \prod_{p \mid n} (1 - f(p)).
‡ A function f is multiplicative if f(m)f(n) = f(mn) whenever gcd(n, m) = 1.
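Both identities are easy to verify numerically. The Python sketch below (our own helpers) checks the first identity for n = 360 and the second with the multiplicative function f(d) = 1/d, for which ∏_{p|n}(1 − 1/p) = φ(n)/n:

```python
import math

def mobius(n):
    # mu(n): 0 if n has a squared prime factor,
    # otherwise (-1)^(number of distinct prime factors).
    result = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

n = 360                                    # 2^3 * 3^2 * 5
assert sum(mobius(d) for d in divisors(n)) == 0

lhs = sum(mobius(d) / d for d in divisors(n))
phi = sum(1 for k in range(1, n + 1) if math.gcd(k, n) == 1)
print(lhs, phi / n)    # both equal (1 - 1/2)(1 - 1/3)(1 - 1/5) = 4/15
```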
Writing

W(z; \omega) = \prod_{p < z,\ p \in P} \left(1 - \frac{\omega(p)}{p}\right),

we obtain

\left| S(A; P, z) - X W(z; \omega) \right| \le \left| \sum_{d \mid P(z)} \mu(d) R_d \right|.

Then, by using the triangle inequality (the worst case being when all the μ(d) terms in the sum have the same sign), we obtain

\left| S(A; P, z) - X W(z; \omega) \right| \le \sum_{d \mid P(z)} |R_d|.   (2.6)
This is the Sieve of Eratosthenes–Legendre, which crudely bounds the error between the actual sifted set and the approximations generated by choosing X and ω(p). Sometimes even approximating this sieve proves too difficult, so the problem is weakened before searching for an upper or lower bound; for example, see (1.7). For an upper bound, we search for functions μ⁺(d) that act as upper bounds to μ(d), in the sense that

\sum_{d \mid (n, P(z))} \mu^+(d) \ge \sum_{d \mid (n, P(z))} \mu(d) = \begin{cases} 1 & (n, P(z)) = 1, \\ 0 & (n, P(z)) > 1. \end{cases}   (2.7)
Then

S(A; P, z) \le X \sum_{d \mid P(z)} \frac{\mu^+(d)\,\omega(d)}{d} + \sum_{d \mid P(z)} |\mu^+(d)|\,|R_d|.   (2.8)
Minimising the upper bound while ensuring µ+ (d) satisfies (2.7) and has suffi-
ciently small support (to reduce the size of the summation) is, in general, very
difficult. Nevertheless, this method is used by Chen [26] to construct the follow-
ing upper bounds of sieves that we have related to the Goldbach conjecture and
the twin primes conjecture § .
\#\{(p, p') : p \in P,\ p' \in P_{\le 2},\ p + p' = 2n\} \gg \frac{n}{\log^2 n},   (2.9)

\#\{p \in P : p \le x,\ p + 2 \in P_{\le 2}\} \gg \frac{x}{\log^2 x},   (2.10)

where P≤2 is the set of all integers with 1 or 2 prime factors.
§ The twin primes conjecture asserts the existence of infinitely many primes p for which p + 2 is also prime. Examples include 3 and 5, 11 and 13, 41 and 43, . . .
The first sieve is a near miss for approximating the Goldbach sieve, and weak-
ens the Goldbach conjecture to allow sums of primes, or a prime and a semiprime¶ .
The second sieve similarly weakens the twin primes conjecture. Thus, Chen’s
sieves imply that there are infinitely many primes p such that p + 2 is either
prime or semiprime, and that every sufficiently large even integer n can be de-
composed into the sum of a prime and a semiprime. However, the asymptotic
bounds above do not show constants that may be present, so “sufficiently large”
could be very large indeed.
This is used to construct an upper bound for the sieve in the same form as (2.8).
S(A; P, z) \le X \sum_{\substack{d_v \mid P(z) \\ v = 1,2}} \lambda_{d_1} \lambda_{d_2} \frac{\omega(D)}{D} + \sum_{\substack{d_v \mid P(z) \\ v = 1,2}} |\lambda_{d_1} \lambda_{d_2} R_D| = X\Sigma_1 + \Sigma_2.   (2.13)
Now one can choose the remaining constants λ_d, d ≥ 2, so that the main
term Σ1 is as small an upper bound as possible, while ensuring Σ1 dominates Σ2 .
Since this is in general very difficult even for simple sequences A, Selberg looked
at the more restricted case of setting all the constants
λd = 0 for d ≥ z
A = {N − p : p ≤ N },
and apply a sieve to it, keeping only the integers in A that are prime. This
leaves the set of all primes of the form N − p, which (due to double counting) is
asymptotic to 2D(N ).
The goal is to approximate the sieve S(A; P, z), as described in Chapter 2.
Chen uses the sieve Selberg [45] used to prove C∗ ≤ 16 + ε, which has some extra
constraints on the multiplicative function ω(p) in the main term, to make it easier
to estimate. Define
V(z) = \prod_{p < z} \left(1 - \frac{\omega(p)}{p}\right),   (3.1)

and suppose there exists a constant K > 1 such that

\frac{V(z_1)}{V(z_2)} \le \frac{\log z_2}{\log z_1}\left(1 + \frac{K}{\log z_1}\right) \qquad \text{for } z_2 \ge z_1 \ge 2.   (3.2)
Then the Rosser–Iwaniec [25] linear sieve is given by

S(A; P, z) \le X V(z)\left\{F\!\left(\frac{\log Q}{\log z}\right) + E\right\} + \sum_{l < L} \sum_{q \mid P(z)} \lambda_l^{+}(q)\, r(A, q),   (3.3)

S(A; P, z) \ge X V(z)\left\{f\!\left(\frac{\log Q}{\log z}\right) + E\right\} - \sum_{l < L} \sum_{q \mid P(z)} \lambda_l^{-}(q)\, r(A, q),   (3.4)
where F and f are the solutions of the following coupled difference equations,
The Rosser–Iwaniec sieve is a more refined version of Selberg's sieve: the terms λ_l^+ and λ_l^- satisfy some restrictions which, informally, say that λ_l^+ (or λ_l^-) can be decomposed into the convolution of two other functions, λ = λ_1 ∗ λ_2. We will be concerned only with (3.3), as we only need to find an upper bound for the sieve; lower bounds on the sieve would imply the Goldbach conjecture, which would be difficult. Chen improved on the sieve (3.3) by introducing two new functions H(s) and h(s) such that (3.3) holds with f(s) + h(s) and F(s) − H(s) in place of f(s) and F(s) respectively [54]:
S(A; P, z) \le X V(z)\left\{F(s) - H(s) + E\right\} + \text{error}, \qquad s = \frac{\log Q}{\log z}.   (3.7)
Chen proved that h(s) > 0 and H(s) > 0 (which is obviously a required property, as otherwise these functions would make the bound on S(A; P, z) worse) using three sets of complicated inequalities (the largest had 43 terms!).
Define

\Phi(N, \sigma, s) = \sum_{d} \sigma(d)\, S(A_d, \{p : (p, dN) = 1\}, d^{1/s}),   (3.10)
where

E_j = \{p : (p, N) = 1\} \cap [V_j/\Delta,\ V_j),
and that no one V_i term can be too large: each V_i is bounded above both by Q and by the previous terms V_1, \ldots, V_{i-1},

V_i \le \left( \frac{Q}{V_1 \cdots V_{i-1}} \right)^{s}.
So if any one V_i term is big, the remaining terms will be constrained to be small.

Φ can be thought of as breaking up the sieve S(A; P, z) into smaller parts, where the set being sifted is only those elements of A that are multiples of d, and the index of summation is given by σ, as σ(d) will be zero everywhere except on its set of support. Breaking up the sieve this way allows Wu to prove some weighted inequalities related to Φ that would ordinarily be too difficult to prove in general for the entire sieve.
Define

\Theta(N, \sigma) = 4\operatorname{Li}(N) \sum_{d} \frac{\sigma(d)\, C_{dN}}{\varphi(d) \log d},   (3.11)
Wu shows that both H_{k,N_0}(s) and h_{k,N_0}(s) are decreasing, and defines

H(s) = \lim_{k \to \infty} \lim_{N_0 \to \infty} H_{k,N_0}(s), \qquad h(s) = \lim_{k \to \infty} \lim_{N_0 \to \infty} h_{k,N_0}(s).   (3.13)
These integral equations are still difficult to work with: all Wu proves about h and H is that H(s) is decreasing on [1, 10], and that h(s) is increasing on [1, 2] and decreasing on [2, 10].
Wu proves an upper bound for the smaller parts of the sieve Φ:

5\Phi(N, \sigma, s) \le \sum_{d} \sigma(d)\,(\Gamma_1 - \cdots - \Gamma_4 + \Gamma_5 + \cdots + \Gamma_{21}) + O_{\delta,k}(N^{1-\eta}),

2\Phi(N, \sigma, s) \le \sum_{d} \sigma(d)\,(\Omega_1 - \Omega_2 + \Omega_3) + O_{\delta,k}(N^{1-\eta}),
where the Γi terms are the dreaded 21 terms in Wu’s weighted inequality [54,
p.233]. Wu uses this weighted inequality
\Xi_1(t, s, s') := \frac{\sigma_0(t)}{2t} \log \frac{(t+1)^2}{(s-1)(s'-1)} + \frac{\mathbf{1}_{[\alpha_2, 3]}(t)}{2t} \log \frac{16}{(s-1)(s'-1)} + \frac{\mathbf{1}_{[\alpha_3, \alpha_2]}(t)}{2t} \log \frac{t+1}{(s-1)(s'-1-t)},   (3.16)
\Psi_1(s, s') := \int_2^{s'-1} \frac{\log(t-1)}{t}\, dt + \int_{1-1/s}^{1-1/s'} \frac{\log(s't-1)}{t(1-t)}\, dt - I_1(s, s'),   (3.17)
The functions Ξ1 and Ψ1 are derived from Wu's sieve inequalities; they provide a way to rewrite the complicated integral equations (3.14) as an inequality that can be attacked.
\Psi_2(s, s', k_1, k_2, k_3) = -\frac{2}{5}\int_2^{s'-1} \frac{\log(t-1)}{t}\, dt - \frac{2}{5}\int_2^{k_1-1} \frac{\log(t-1)}{t}\, dt - \frac{1}{5}\int_2^{k_2-1} \frac{\log(t-1)}{t}\, dt + \frac{1}{5}\int_{1-1/s}^{1-1/s'} \frac{\log(s't-1)}{t(1-t)}\, dt + \frac{1}{5}\int_{1-1/k_3}^{1-1/k_1} \frac{\log(k_1 t-1)}{t(1-t)}\, dt - \frac{2}{5}\sum_{i=9}^{21} I_{2,i}(s).   (3.20)
I_{2,i}(s) = \max_{\phi \ge 2} \int_{D_{2,i}} \omega\!\left(\frac{\phi - t - u - v}{u}\right) \frac{dt\, du\, dv}{t u^2 v} \qquad (9 \le i \le 15),

I_{2,i}(s) = \max_{\phi \ge 2} \int_{D_{2,i}} \omega\!\left(\frac{\phi - t - u - v - w}{v}\right) \frac{dt\, du\, dv\, dw}{t u v^2 w} \qquad (16 \le i \le 19),   (3.21)

I_{2,20}(s) = \max_{\phi \ge 2} \int_{D_{2,20}} \omega\!\left(\frac{\phi - t - u - v - w - x}{w}\right) \frac{dt\, du\, dv\, dw\, dx}{t u v w^2 x},

I_{2,21}(s) = \max_{\phi \ge 2} \int_{D_{2,21}} \omega\!\left(\frac{\phi - t - u - v - w - x - y}{x}\right) \frac{dt\, du\, dv\, dw\, dx\, dy}{t u v w x^2 y}.
This large set of integrals is derived by finding an integral equation that provides an upper bound for each term of the form

\sum_{d} \sigma(d)\, \Gamma_i.
Taking the sum of all these integrals will give a bound for Φ, and hence for H(s).
\alpha_1 := k_1 - 2, \qquad \alpha_2 := s' - 2,
\alpha_3 := s' - s'/s - 1, \qquad \alpha_4 := s' - s'/k_2 - 1,
\alpha_5 := s' - s'/k_3 - 1, \qquad \alpha_6 := s' - 2s'/k_2,
\alpha_7 := s' - s'/k_1 - s'/k_3, \qquad \alpha_8 := s' - s'/k_1 - s'/k_2,
\alpha_9 := k_1 - k_1/k_2 - 1.
Ξ2, together with the α terms, is derived via a complicated lemma [54, p. 248] relating three separate integral inequalities.
where

\sigma(a, b, c) := \int_a^b \frac{1}{t-1} \log\frac{c}{t}\, dt, \qquad \sigma_0(t) := \frac{\sigma(3, t+2, t+1)}{1 - \sigma(3, 5, 4)}.   (3.24)
All of these complicated integrals are a way of bounding Wu's 21-term inequality: each term Γi in the inequality has a corresponding integral. For example, here is one term from the inequality,
\Gamma_{10} = \sum_{d^{1/k_1} \le p_1 \le p_2 \le d^{1/k_2} \le p_3 \le d^{1/s}} S(A_{d p_1 p_2 p_3}, \{p : (p, dN) = 1\}, p_2),   (3.25)
which corresponds to

I_{2,10}(s) = \max_{\phi \ge 2} \int_{1/k_1 \le t \le u \le 1/k_2 \le v \le 1/s} \omega\!\left(\frac{\phi - t - u - v}{u}\right) \frac{dt\, du\, dv}{t u^2 v}.
The sieve is related to the function F , as F is part of the upper bound for the
sieve (3.7). Now if we define the Dickman function ρ(u) by
\rho(u) = 1 \quad (1 \le u \le 2), \qquad (u-1)\rho'(u) = -\rho(u-1) \quad (u \ge 2),   (3.26)
where

a_{i,j} := \int_{s_{j-1}}^{s_j} \Xi_2(t, s_i)\, dt, \qquad i = 1, \ldots, 4;\ j = 1, \ldots, 9,

and

H(s_i) \ge \Psi_1(s_i) + \sum_{j=1}^{9} a_{i,j} H(s_j),   (3.30)

where

a_{i,j} := \int_{s_{j-1}}^{s_j} \Xi_1(t, s_i)\, dt, \qquad i = 5, \ldots, 9;\ j = 1, \ldots, 9.
Now H(s_i) is given as a linear combination of all the values H(s_j), 1 ≤ j ≤ 9, which simplifies the problem from resolving a complicated integral equation to a simple linear optimisation problem. These discretisations of the integrals can be written as matrix equations:

A := \begin{pmatrix} a_{1,1} & \cdots & a_{1,9} \\ \vdots & \ddots & \vdots \\ a_{9,1} & \cdots & a_{9,9} \end{pmatrix}, \qquad H := \begin{pmatrix} H(s_1) \\ \vdots \\ H(s_9) \end{pmatrix}, \qquad B := \begin{pmatrix} \Psi_2(s_1) \\ \vdots \\ \Psi_2(s_4) \\ \Psi_1(s_5) \\ \vdots \\ \Psi_1(s_9) \end{pmatrix},   (3.31)

(I - A)H \ge B.   (3.32)
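Solving a system of this shape is a small linear-algebra problem. The following Python sketch (with entirely hypothetical 2×2 data in place of Wu's 9×9 integrals) solves the boundary case (I − A)H = B by Gaussian elimination:

```python
def solve(M, b):
    # Gaussian elimination with partial pivoting; M is square, b the RHS.
    n = len(M)
    M = [row[:] + [bv] for row, bv in zip(M, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical I - A and B; Wu's actual entries come from the integrals above.
I_minus_A = [[1 - 0.1, -0.2],
             [-0.3, 1 - 0.4]]
B = [0.5, 0.7]
H = solve(I_minus_A, B)
print(H)   # a vector attaining (I - A)H = B with equality
```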
By letting

A(s) = \frac{s F(s)}{2 e^{\gamma}},

and from the definition of Θ(N, σ),

\Theta(N, \{1\}) = 4\operatorname{Li}(N) \sum_{d} \frac{C_N}{\log d} = \frac{4\operatorname{Li}(N)\, C_N}{\log N^{1/2-\delta}},

we obtain

\Phi(N, \{1\}, 2.2) \le \frac{4\operatorname{Li}(N)\, C_N}{\log N^{1/2-\delta}} \left\{ A(2.2) - H_{k,N_0}(2.2) \right\} \le 8\{1 - x_1\}\Theta(N).
Chapter 4

Approximating the Buchstab function

In this chapter we examine how the Buchstab function ω(u) (which appears in most of Wu's integrals) is computed.
4.1 Background
The Buchstab function is defined by the following delay differential equation∗
\omega(u) = 1/u \quad (1 \le u \le 2), \qquad (u\omega(u))' = \omega(u-1) \quad (u \ge 2).   (4.1)
From the graph it appears that ω(u) quickly approaches a constant value.
Buchstab [4] showed that
\lim_{u \to \infty} \omega(u) = e^{-\gamma},   (4.2)

|\omega(u) - e^{-\gamma}| \le e^{-u(\log u + \log\log u + (\log\log u/\log u) - 1) + O(u/\log u)}.   (4.4)
∗ The definition of ω(u) is very similar to the other difference equations F and f, (3.5) and (3.6).
For the purposes of numerically integrating ω(u), we only consider the region
2 ≤ u ≤ 10, as beyond that, we can set ω(u) = e−γ for u ≥ 10 and bound the
error by (4.10).
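As a sanity check on the expansions that follow, the delay differential equation (4.1) can also be integrated directly on a grid. The Python sketch below (our own; the step size is an arbitrary choice) uses the trapezoidal rule, and recovers both the closed form ω(u) = (1 + log(u−1))/u on [2, 3] (easily derived from (4.1)) and the limiting value e^{−γ}:

```python
import math

h = 1e-3
n_steps = int(round((10.0 - 1.0) / h))
u = [1.0 + i * h for i in range(n_steps + 1)]
omega = [0.0] * (n_steps + 1)

# omega(u) = 1/u on [1, 2].
for i, ui in enumerate(u):
    if ui <= 2.0 + 1e-12:
        omega[i] = 1.0 / ui

# For u >= 2 integrate (u*omega(u))' = omega(u-1) with the trapezoidal rule.
lag = int(round(1.0 / h))          # index offset for a delay of 1
f = 2.0 * omega[lag]               # f(u) = u*omega(u), starting at f(2) = 1
for i in range(lag + 1, n_steps + 1):
    f += h * (omega[i - 1 - lag] + omega[i - lag]) / 2.0
    omega[i] = f / u[i]

i3 = int(round(2.0 / h))           # index of u = 3
print(omega[i3], (1 + math.log(2)) / 3)           # closed form on [2, 3]
print(omega[-1], math.exp(-0.5772156649015329))   # limit e^{-gamma}
```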
Observe that, for u ∈ [j+1, j+2], integrating (4.1) shows that

K := u\, \omega_{j+1}(u) - \int_j^{u-1} \omega_j(t)\, dt

is constant. Choosing u = j + 1 gives

K = (j+1)\, \omega_{j+1}(j+1).

To force the Buchstab function to be continuous, we assume that the piecewise splines agree at the knots, that is,

\omega_j(j+1) = \omega_{j+1}(j+1).
We note that
We note that

\sum_{m=0}^{k-1} \frac{1}{k-m} \left(\frac{2}{3}\right)^m \le \sum_{m=0}^{k-1} \left(\frac{2}{3}\right)^m \le \sum_{m=0}^{\infty} \left(\frac{2}{3}\right)^m = 3.
The Taylor expansion of ω2 (u) can be shown to converge uniformly in the interval
1.5 ≤ u ≤ 4.5, as
|\omega_2(u)| \le \sum_{k=0}^{\infty} |a_k(2)|\, |u-3|^k \le \sum_{k=0}^{\infty} \frac{|u-3|^k}{2^k} \le \sum_{k=0}^{\infty} \left(\frac{3}{4}\right)^k = 4.
Hence by the Weierstrass M-test[15], the series for ω2 (u) is uniformly convergent
for 1.5 ≤ u ≤ 4.5. This allows us to compute the integral of the Taylor series of
ω2 (u), (and thereby compute ω3 (u)) by swapping the order of the sum and the
integral, by the Fubini-Tonelli theorem [46]. By using (4.11) we can prove by
induction that the Taylor series of ωj (u) converges uniformly for j ≤ u ≤ j + 1.
This allows interchange of the sums and integrals in (4.15). Thus, we obtain
u \sum_{k=0}^{\infty} a_k(j+1)\,(u-(j+2))^k = (j+1)\, a_0(j) + \sum_{k=0}^{\infty} a_k(j)\, \frac{(u-(j+2))^{k+1} - (-1)^{k+1}}{k+1}.
By substituting T_2(u) into (4.11), Cheer and Goldston [6] show that one obtains a new approximation T_3(u) for ω_3(u), with a new error term of

|E_3(u)| = \frac{1}{u}\left|\int_2^{u-1} E_2(t)\, dt + 3E_2(3)\right| \le \frac{1}{u}\left(\int_2^{u-1} E\, dt + 3E\right) = \frac{(u-3)E + 3E}{u} = E.   (4.19)
By repeating this argument, it follows by induction that the accuracy of ω₂(u) holds for the rest of the ω_k(u), excluding computational errors due to machine-precision arithmetic† [6]. (If needed, Mathematica supports arbitrary-precision arithmetic.)
†Real-number arithmetic on a computer is implemented using floating-point numbers, which have only finite accuracy. Hence every operation introduces rounding errors, which, given enough operations, can cause issues with the final result.
4.2. A PIECEWISE TAYLOR EXPANSION
Then by definition, the error term (for a Taylor expansion to order N) can be written as
\[ E_N = \max_{2 \le \xi \le 3} |a_{N+1}(2)\,(\xi - 3)^{N+1}| \le |a_{N+1}(2)| \le \frac{1}{2^{N+1}}. \tag{4.21} \]
So if ω₂(u) is approximated with a power series up to order N, then the maximum error for ω(u) anywhere is at most 2^{-(N+1)}.
where
\[ a_k(2) = (-1)^{k+1}\left( -\frac{1 + \log\frac{3}{2}}{(5/2)^{k+1}} + \frac{3}{5}\left(\frac{2}{3}\right)^{k+1} \sum_{m=0}^{k-1} \frac{1}{k-m}\left(\frac{3}{5}\right)^m \right), \tag{4.23} \]
\[ a_0(j) = \frac{1}{j + 1/2} \sum_{k=0}^{\infty} \frac{a_k(j-1)}{2^k}\left( j + \frac{(-1)^k}{2(k+1)} \right), \tag{4.24} \]
\[ a_k(j) = \frac{1}{j + 1/2}\left( \frac{a_{k-1}(j-1)}{k} - a_{k-1}(j) \right). \tag{4.25} \]
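The coefficient recurrences (4.23)–(4.25) translate directly into code. The following is a sketch in Python rather than the thesis's Mathematica (the truncation degree `DEG`, the function names, and the cut-off beyond the last spline are our choices for illustration):

```python
import math

DEG = 40          # truncation degree N (our choice; the thesis leaves this a parameter)
GAMMA = 0.5772156649015329

def a2(k):
    # eq. (4.23): Taylor coefficients of omega_2(u) = (1 + log(u-1))/u about u = 5/2
    s = sum((3 / 5) ** m / (k - m) for m in range(k))
    return (-1) ** (k + 1) * (-(1 + math.log(3 / 2)) / (5 / 2) ** (k + 1)
                              + (3 / 5) * (2 / 3) ** (k + 1) * s)

def coeffs(jmax):
    # eqs. (4.24)-(4.25): coefficients a_k(j) of the splines on [j, j+1], 2 <= j <= jmax
    a = {(k, 2): a2(k) for k in range(DEG + 1)}
    for j in range(3, jmax + 1):
        a[0, j] = sum(a[k, j - 1] / 2 ** k * (j + (-1) ** k / (2 * (k + 1)))
                      for k in range(DEG + 1)) / (j + 1 / 2)
        for k in range(1, DEG + 1):
            a[k, j] = (a[k - 1, j - 1] / k - a[k - 1, j]) / (j + 1 / 2)
    return a

def omega(u, a, jmax):
    # piecewise Taylor approximation of the Buchstab function, u >= 2
    if u > jmax + 1:
        return math.exp(-GAMMA)      # constant region beyond the last spline
    j = min(max(int(u), 2), jmax)
    x = u - (j + 1 / 2)              # expansion centred at the interval midpoint
    return sum(a[k, j] * x ** k for k in range(DEG + 1))

coeff_table = coeffs(10)             # splines covering 2 <= u <= 11
```

On [2, 3] the output can be checked against the closed form ω₂(u) = (1 + log(u − 1))/u, and for large u it settles near e^{−γ} ≈ 0.5615.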
Applying the Taylor remainder theorem, we calculate the error of the Taylor expansion of ω₂(u), truncated to N terms:
\[ E := \max_{2 \le u \le 3} |\omega_2(u) - T_2(u)| \le |a_{N+1}(2)|\,|3 - 2.5|^{N+1} = \frac{1}{2^{N+1}}|a_{N+1}(2)|. \tag{4.26} \]
\[ \sum_{m=0}^{k-1} \frac{1}{k-m}\left(\frac{3}{5}\right)^m \le \sum_{m=0}^{k-1} \left(\frac{3}{5}\right)^m \le \sum_{m=0}^{\infty} \left(\frac{3}{5}\right)^m = \frac{5}{2}. \]
So,
\[ |a_{N+1}(2)| \le \left| -\frac{1 + \log\frac{3}{2}}{(5/2)^{N+2}} + \frac{3}{5}\left(\frac{2}{3}\right)^{N+2} \cdot \frac{5}{2} \right| \le \left(\frac{2}{3}\right)^{N+1}, \tag{4.27} \]
\[ E \le \frac{1}{2^{N+1}}\left(\frac{2}{3}\right)^{N+1} = \frac{1}{3^{N+1}}, \tag{4.28} \]
which is better than the old error bound (4.21) by an exponential factor. Again, by a similar argument to (4.19), this error E can be shown to hold everywhere. In practice this is a rather weak error bound, as the actual error is much smaller. By computing
\[ \frac{1}{2^{N+1}}|a_{N+1}(2)| \]
for 10 ≤ N ≤ 20 (discarding the first few N until the points settle) and plotting the values on a log plot, we observe that the values form a line. By fitting a curve of the form
\[ \log y = mx + c \]
to this line, the asymptotic behaviour of the numerical upper bound on the error is deduced to be approximately O(3.31^{-N}). The base case for the above derivations
was done with ω₂(u) instead of ω₁(u), as the Taylor expansion of 1/u converges slowly and the corresponding error bounds are much weaker. One could, in principle, use ω₃(u) as the base case and thus obtain a much stronger error bound for ω_j(u), j ≥ 3, but as mentioned above, ω₃(u) is not an elementary function. So ω₂(u) is the best we could hope to use. If one used the power series expression of ω₃(u) to compute the error bounds, each a_k(3) would be defined in terms of a_k(2) as shown:
\[ a_0(3) = \frac{1}{3 + 1/2} \sum_{k=0}^{\infty} a_k(2)\,2^{-k}\left( 3 + \frac{(-1)^k}{2(k+1)} \right) = \frac{1}{3 + 1/2} \sum_{k=0}^{\infty} \frac{(-1)^{k+1}}{2^k}\left( -\frac{1 + \log\frac{3}{2}}{(5/2)^{k+1}} + \frac{3}{5}\left(\frac{2}{3}\right)^{k+1} \sum_{m=0}^{k-1} \frac{1}{k-m}\left(\frac{3}{5}\right)^m \right)\left( 3 + \frac{(-1)^k}{2(k+1)} \right), \]
\[ a_k(3) = \frac{1}{3 + 1/2}\left( \frac{a_{k-1}(2)}{k} - a_{k-1}(3) \right). \]
And so, expanded out in full, the formulas for the coefficients become (even more) unwieldy. Since the error E decreases exponentially quickly in the degree of the polynomial, our implementation of the Buchstab function (given above) was deemed sufficient.
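The curve fit described above can be reproduced with an ordinary least-squares slope on the logged values. This Python sketch (ours, not from the thesis) recomputes the bound |a_{N+1}(2)|/2^{N+1} for 10 ≤ N ≤ 20 and extracts the observed decay base; the precise fitted constant depends on the range of N chosen:

```python
import math

def a2(k):
    # eq. (4.23): Taylor coefficients of omega_2 about u = 5/2
    s = sum((3 / 5) ** m / (k - m) for m in range(k))
    return (-1) ** (k + 1) * (-(1 + math.log(3 / 2)) / (5 / 2) ** (k + 1)
                              + (3 / 5) * (2 / 3) ** (k + 1) * s)

# numerical upper bounds |a_{N+1}(2)| / 2^{N+1} for 10 <= N <= 20
Ns = list(range(10, 21))
ys = [math.log(abs(a2(N + 1)) / 2 ** (N + 1)) for N in Ns]

# least-squares fit of log y = m*N + c; the observed decay base is e^{-m}
n = len(Ns)
mx = sum(Ns) / n
my = sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(Ns, ys)) / sum((x - mx) ** 2 for x in Ns)
base = math.exp(-slope)    # roughly 3.2-3.3, cf. the O(3.31^{-N}) estimate above
```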
Chapter 5
Numerical computations
We first attempted to compute the integral I₁ directly, but ran into several problems. Using Mathematica's inbuilt integration routine, the computation would either finish quickly or never halt, depending on the value of φ chosen. It was discovered that the function could only be integrated for large φ, such that ω_T(u) would always be in the constant region. These problems were not resolved. Instead, we integrated the Buchstab function by computing an antiderivative of ω_T(u) and applying the fundamental theorem of calculus (FTC) three times.
We can show that the FTC validly applies in this instance. As ω_T(u) is a piecewise spline of polynomials, it is continuous everywhere except at the points u = 2, 3, …, k + 1 where the splines meet. Using two theorems about Lebesgue integration [46], among them:

Theorem 5.1.2. If f is integrable on [a, b], then there exists an absolutely continuous function F such that F′(x) = f(x) almost everywhere, and in fact we may take \( F(x) = \int_a^x f(y)\,dy \).

it was expected that the FTC could be used three times to compute I₁; however, the values computed were nonsense. When attempting to replicate the entry in Wu's table (5.1) for i = 6, we expected 0.0094… and obtained a large negative result, −10000 or so. It is believed that since ω_T(u) was defined in a piecewise manner, integrating ω_T((φ − t − u − v)/u) would involve resolving many inequalities, which would only increase in complexity after three integrations.
Attempts to numerically integrate ω_T(u) were successful; however, it was much faster to use Mathematica to numerically solve the differential delay equation defining the Buchstab function (4.1) with standard numerical ODE solver routines, obtain a numerical solution for ω(u), and numerically integrate the result.
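Outside of Mathematica, the delay equation (uω(u))′ = ω(u − 1) can also be solved by the classical method of steps on a uniform grid, since the delayed value ω(u − 1) always lies a fixed number of grid points back and is therefore already known. A minimal Python sketch (the step size and the use of the trapezoidal rule are our choices, not the thesis's):

```python
import math

M = 1000                      # grid points per unit; step h = 1/M
h = 1.0 / M
# w[i] approximates omega(1 + i*h); the initial history is omega(u) = 1/u on [1, 2]
w = [1.0 / (1.0 + i * h) for i in range(M + 1)]
# method of steps: integrate (u*omega(u))' = omega(u - 1) with the trapezoidal rule;
# the delayed values are always exactly M indices back, hence already computed
for i in range(M, 9 * M):
    u = 1.0 + i * h
    g_next = u * w[i] + h * (w[i - M] + w[i - M + 1]) / 2.0
    w.append(g_next / (u + h))
```

The result can be checked against the closed form on [2, 3] and against the limit e^{−γ} for large u.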
5.1. JUSTIFYING INTEGRATION METHOD
The resulting values were also closer to Wu's. The following table includes Wu's results, compared to the results we obtained.

 i   si    s′i    k1,i   k2,i   k3,i   Ψ1(si) (Wu)   Ψ1(si)       Ψ2(si) (Wu)   Ψ2(si)
 1   2.2   4.54   3.53   2.90   2.44   —             —            0.01582635    0.01615180
 2   2.3   4.50   3.54   2.88   2.43   —             —            0.01524797    0.01547663
 3   2.4   4.46   3.57   2.87   2.40   —             —            0.01389875    0.01406834
 4   2.5   4.12   3.56   2.91   2.50   —             —            0.01177605    0.01187935
 5   2.6   3.58   —      —      —      0.00940521    0.00947409   —             —
 6   2.7   3.47   —      —      —      0.00655895    0.00659089   —             —
 7   2.8   3.34   —      —      —      0.00353675    0.00354796   —             —
 8   2.9   3.19   —      —      —      0.00105665    0.00105838   —             —
 9   3.0   3.00   —      —      —      0.00000000    0.00000000   —             —

Table 5.1: Table of values for Ψ1 and Ψ2, compared with Wu's results [54].
Given an si, Wu chose the parameters s′i, k1,i, k2,i, k3,i to maximise Ψ1(si) or Ψ2(si). All the values we calculated were an overshoot of Wu's results, so it was not surprising that we obtained a smaller value for Chen's constant (3.32). For i = 5, 6, …, 9, the values of s′i Wu gave were verified to maximise our version of Ψ1(si), by trying all values s′i = 3, 3.01, …, 5.00, which is given by the constraint 3 ≤ s′i ≤ 5 in (3.17). So even though our Ψ1 did not match Wu's, both were maximised at the same points. This indicated that we could use our "poor man's" Ψ1 to investigate the behaviour of Ψ1, but not to compute exact values for it.
When computing the values for this table, we first wanted to see, for a fixed s and s′, how Ĩ1(φ, s, s′) varied as a function of φ. It was determined that the function did not grow too quickly near the maximum, and that as φ grew large, Ψ1 tended to a constant, which is consistent with ω(u) quickly tending to e^{−γ}. Thus, we can apply simple numerical maximisation techniques without getting trapped in local maxima, or missing the maximum because the function changes too quickly. We need only maximise over φ in a small interval φ ∈ [2, 4]: we obtain the maximal value by computing Ĩ1 at a few points in the interval of interest, choosing the maximum point, and then computing again at a collection of points in a small neighbourhood about the previous maximum. This is iterated a few times until the required accuracy is obtained (Listing 5.1).
def I1(s, s′):
    φlow := 2
    φhigh := 4
    ε := 0.1
    φMAX := 2
    while (ε ≥ 0.001):
        Φ := {φlow, φlow + ε, φlow + 2ε, . . . , φhigh}
        φMAX := φ ∈ Φ such that Ĩ1(φ, s, s′) is maximal
        φlow := max(φMAX − ε, 2)
        φhigh := φMAX + ε
        ε := ε/10
    return Ĩ1(φMAX, s, s′)
enddef

Listing 5.1: Algorithm to compute I1(s, s′)
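The refinement loop of Listing 5.1 is generic. In this Python version (ours, for illustration), f stands in for Ĩ1(·, s, s′), and the clamp at the original lower endpoint mirrors the constraint φ ≥ 2:

```python
def grid_maximise(f, lo, hi, eps=0.1, eps_min=0.001):
    # scan a grid for the best point, then zoom in around it (cf. Listing 5.1)
    lo0 = lo                  # remember the hard lower bound (phi >= 2 in the thesis)
    best = lo
    while eps >= eps_min:
        steps = int(round((hi - lo) / eps))
        grid = [lo + i * eps for i in range(steps + 1)]
        best = max(grid, key=f)
        lo, hi = max(best - eps, lo0), best + eps
        eps /= 10
    return best

# toy objective with an interior peak, standing in for I~1(phi, s, s')
peak = grid_maximise(lambda x: -(x - 2.7) ** 2, 2.0, 4.0)
# decreasing toy objective: the maximum is cut off at the left endpoint
boundary = grid_maximise(lambda x: -x, 2.0, 4.0)
```

With an interior peak the loop homes in on it; with a decreasing objective it returns the left endpoint, exactly the φ = 2 behaviour discussed in the text.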
Initially, it seemed odd that for all the values of s and s′ that were tried, Ĩ1(φ, s, s′) appeared to always be maximal when φ = 2. The actual maximum for Ĩ1(φ, s, s′) occurs at a point φ < 2, but was being cut off by the constraint that φ ≥ 2. It seemed unusual to us that Wu would maximise an integral over φ ≥ 2 when the integral is always maximal at φ = 2. However, for some of the other integrals in (3.21), the function was maximal at some point φ > 2. This behaviour is made clear when we plot I2,11(φ, s = 2.6, s′ = 3.58) (Figure 5.2) as a function of φ, for fixed values of s and s′.
For some of the integrals needed to compute I2, the numerical integration did not converge. Whether this was a property of the integrals (i.e. they were not defined for those particular values of φ) or a symptom of how the numerical integration routines in Mathematica operate was not determined. To maximise the integrals with respect to φ, we could no longer assume they were defined for all φ ≥ 2. Instead, it was found that for each integral there would be a corresponding value φlow such that the method of numerical integration converged for all φ ≥ φlow. This value of φlow was computed by the method of bisection (Listing 5.2).
def findphi(i, s, s′, k1, k2, k3):
    if (computing I2,i(2, s, s′, k1, k2, k3) succeeded):
        return 2
    φlow := 2
    φhigh := 5
    ε := 0.001
    while (|φhigh − φlow| ≥ ε):
        φmid := (φhigh + φlow)/2
        if (computing I2,i(φmid, s, s′, k1, k2, k3) succeeded):
            φhigh := φmid
        else:
            φlow := φmid
    return φhigh
enddef

Listing 5.2: Algorithm to compute the least φ such that I2,i(φ, s) is defined
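Listing 5.2 is bisection on a monotone predicate ("does the numerical integration succeed at this φ?"). A generic Python rendering (ours), with a hypothetical toy predicate in place of the Mathematica integration call:

```python
def find_threshold(ok, lo=2.0, hi=5.0, tol=0.001):
    # least x in [lo, hi] with ok(x) true, assuming ok is false below some
    # threshold and true above it (cf. Listing 5.2)
    if ok(lo):
        return lo
    while hi - lo >= tol:
        mid = (lo + hi) / 2
        if ok(mid):
            hi = mid
        else:
            lo = mid
    return hi

# hypothetical predicate standing in for "the integration of I2,i converged"
phi_low = find_threshold(lambda x: x >= 3.3)
```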
I1 and I2,i can now be computed, and thus Ψ1 (3.17) and Ψ2 (3.20) can be easily computed using standard numerical integration techniques. This gives the entries of the vector B (3.31). Computing the matrix A (3.31) is comparatively easy, as each element a_{i,j} of A is the integral of either Ξ1 or Ξ2 (see 3.16) over an interval, which is easily computed numerically. Thus, the system of linear equations (I − A)X = B (where I is the identity matrix) can be solved for X. This is compared to the value for X that Wu obtained:
\[ X_{\mathrm{Wu}} = \begin{pmatrix} 0.0223939\ldots \\ 0.0217196\ldots \\ 0.0202876\ldots \\ 0.0181433\ldots \\ 0.0158644\ldots \\ 0.0129923\ldots \\ 0.0100686\ldots \\ 0.0078162\ldots \\ 0.0072943\ldots \end{pmatrix}, \qquad X = \begin{pmatrix} 0.0227656\ldots \\ 0.0219942\ldots \\ 0.0205028\ldots \\ 0.0182930\ldots \\ 0.0152937\ldots \\ 0.0126404\ldots \\ 0.0099076\ldots \\ 0.0078020\ldots \\ 0.0073089\ldots \end{pmatrix} \tag{5.3} \]
So from these vectors, we use (3.34) to obtain two different values for C*, ours and Wu's:
\[ C^*_{\mathrm{Wu}} = 8(1 - 0.0223938) \le 7.82085, \qquad C^* = 8(1 - 0.0227655) \le 7.8178752. \tag{5.4} \]
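The final linear-algebra step is small enough for textbook Gaussian elimination. The Python sketch below solves (I − A)X = B for a toy 3 × 3 system; the entries of `A_` and `B_` are made-up stand-ins for the actual sieve integrals, not values from the thesis:

```python
def solve(M, b):
    # Gaussian elimination with partial pivoting: solve M x = b for x
    n = len(b)
    A = [row[:] + [bi] for row, bi in zip(M, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

# (I - A) X = B with toy data standing in for the integrals of Xi_1, Xi_2
A_ = [[0.10, 0.02, 0.00],
      [0.00, 0.05, 0.01],
      [0.03, 0.00, 0.20]]
B_ = [0.0158, 0.0122, 0.0139]
n = 3
IminusA = [[(1.0 if i == j else 0.0) - A_[i][j] for j in range(n)] for i in range(n)]
X = solve(IminusA, B_)
```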
To refine the discretisation, we use the same values as before (see Table 5.1) for 1 ≤ i ≤ 4, and then compute new values of Ψ1 for each new value of si, so that we evaluate Ψ1 at 40 points instead of 5. We can then compute Ψ1 by numerically maximising over s′i, in a similar fashion to computing I1 (see Listing 5.1). Note that Ψ1 depends on I1, so we are optimising with respect to two variables, φ and s′i. We also need to recompute the matrix A given by (3.31) on the new set of intervals. Thus, the component a_{i,j} of A is now given by
ai,j of A is now given by
Z s j
Ξ2 (t, si )dt i = 1, . . . , 4; j = 1, . . . , 45
sj−1
ai,j := Z sj (5.5)
Ξ1 (t, si )dt i = 5, . . . , 45; j = 1, . . . , 45
sj−1
So we now have a new matrix A and vector B with which to solve (I − A)X = B.
\[ A := \begin{pmatrix} a_{1,1} & \cdots & a_{1,45} \\ \vdots & \ddots & \vdots \\ a_{45,1} & \cdots & a_{45,45} \end{pmatrix}, \qquad B := \begin{pmatrix} \Psi_2(s_1) \\ \vdots \\ \Psi_2(s_4) \\ \Psi_1(s_5) \\ \vdots \\ \Psi_1(s_{45}) \end{pmatrix} \quad\Rightarrow\quad X = \begin{pmatrix} 0.0228801\ldots \\ 0.0221067\ldots \\ 0.0206136\ldots \\ 0.0184024\ldots \\ 0.0153999\ldots \\ 0.0157765\ldots \\ 0.0163132\ldots \\ 0.0170428\ldots \\ \vdots \end{pmatrix} \]
Once that was complete, we redid the calculations again with even finer granularity for si, choosing
\[ s_i = \begin{cases} 2.1 + 0.1i & 1 \le i \le 5, \\ 2.6 + 0.001(i-5) & 6 \le i \le 405. \end{cases} \]
where m is the point where Ψ2(m) = Ψ1(m). Thus, Ψ_B is the best-case scenario, essentially taking the maximum of the two upper bounds. We need the values of s′, k1, k2, k3 to compute C*, as we need to compute the matrix A, which in turn depends on the values of a_{i,j} (see 5.5). So we obtain those variables by interpolation also.
We then discretise the interval [2, 3] from coarse to fine, and calculate the new value of C* for each of these discretisations. In doing so, we can estimate how much additional intervals of integration impact the value of C*. After computing C* for a varying number of intervals, from Wu's original 9 up to 600 (which took 84 minutes, 40 seconds on the same hardware used to compute Table 5.2), we can clearly see that adding more intervals does make a slight difference, but there are diminishing returns beyond a hundred or so. One would not expect to make any gains on C* by attempting to compute a million intervals.
5.2. INTERPOLATING WU'S DATA
Chapter 6
Summary
where each a_k(j) is defined in terms of the previous values a_m(n) for 2 ≤ n < j, 0 ≤ m < k, and the values of a_k(2) are derived from the Taylor expansion of
\[ \omega_2(u) = \frac{\log(u-1) + 1}{u}. \]
The coefficients are more complicated than the ones Cheer–Goldston obtained, but when the Taylor approximation of ω(u) is truncated to degree N, we show that the corresponding error is less than 1/3^{N+1} (4.28), versus Cheer's error bound of 1/2^{N+1}.
Although the spline is not continuous at the knots, the size of the discontinu-
ities can be made arbitrarily small by increasing the degree of the power series.
Our attempts at computing the integrals in closed form were not successful, so instead we numerically solved the ODE that defined ω(u), and numerically evaluated Wu's integrals. This gave us approximately the same results as Wu. Wu resolved a set of functional inequalities using a discretisation of [2, 3] into 9 pieces, so we replicated the results, and then increased the resolution for the section [2.6, 3.0] of the interval from 5 points to 40, and then 400. The corresponding change in C* was, as expected, minimal. The relative difference in C* between sampling 5 points and 400 points in the interval [2.6, 3.0] was only 1.66 × 10^{−2} %. This leads us to believe that we should expect similar results had we computed an exact form of Wu's integrals.
We interpolated the values of Ψ1 , Ψ2 and the variables required from Wu’s
data, and (under the assumption that these interpolations are a reasonable ap-
proximation to the exact form), we have shown that Wu was indeed right in
asserting that cutting [2, 3] into more subintervals would result in a better value
for C ∗ , but only by a minuscule amount.
Wu gives us that H(s) is decreasing on [1, 10]. The constraint on the parameter s is that 2 ≤ s ≤ 3, but Wu's discretisation starts at s = 2.2. Wu claims this was chosen as Ψ2 attains its maximum at approximately s = 2.1. How close this value of s is to the true maximal value is not stated, so this could be another avenue for improvement. One could explore the value of Ψ2 in a small neighbourhood around s = 2.1 to see where the true maximum is. Slightly nudging this value left of 2.2 to, say, 2.19 might yield a better upper bound for H, and thus a better value for Chen's constant.
∗Whenever Ψ is mentioned in this section, take it to mean both Ψ1 and Ψ2.
6.3. FUTURE WORK
and the derivative is not too large. It appears that Ψ is locally concave† near the local maximum, so once values for the variables have been found that bring Ψ near the maximum, one could then run a battery of standard convex optimisation techniques on the problem.
We could also look at alternative forms of approximating ω(u). Since we are never concerned with the values of ω(u) at single points, but rather the integral of ω(u) over intervals, we could look to approximations of ω_j(u) on each interval [j, j + 1] that minimise the L∞ error.
‡For f defined on [0, 1], let \( B_n(x) = \sum_{k=0}^{n} \binom{n}{k} x^k (1-x)^{n-k} f(k/n) \). This is the nth Bernstein polynomial for the function f. It can be shown that ‖f − B_n‖_{L∞([0,1])} → 0, i.e. that B_n converges to f uniformly [15].
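The Bernstein construction in the footnote is easy to experiment with numerically. This Python sketch (ours, for illustration) evaluates B_n and exhibits the uniform convergence on [0, 1]:

```python
from math import comb

def bernstein(f, n, x):
    # nth Bernstein polynomial of f, evaluated at x in [0, 1]
    return sum(comb(n, k) * x**k * (1 - x)**(n - k) * f(k / n)
               for k in range(n + 1))
```

For f(x) = x², one can check the classical identity B_n(x) = x² + x(1 − x)/n, and that the sup-norm error shrinks as n grows.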
All the code used for the results in this thesis was written in Mathematica.

A.1 Code

Implementation of D(N)
precisionT];
Clear[aw];
aw[k_, 2] := aw2[[k + 1]]
aw[0, j_] :=
aw[0, j] = (1/(j + 1/2))*
Sum[(aw[k, j - 1]/2^k)*(j + (-1)^k/(2*(k + 1))),
{k, 0, degreeT}]
aw[k_, j_] :=
aw[k, j] = (1/(j + 1/2))*(aw[k - 1, j - 1]/k - aw[k - 1, j])
wT[j_, u_] :=
ReleaseHold[
Hold[Sum[aw[k, j]*(u - (j + 1/2))^k, {k, 0, degreeT}]]]
wLim = Exp[-EulerGamma];
wA[u_] := Boole[u > intervalsT + 1]*wLim +
  Sum[Boole[Inequality[k, LessEqual, u, Less, k + 1]]*wT[k, u], {k, 1, intervalsT}]
wN[u_] := Evaluate[w[u] /.
NDSolve[{D[u*w[u], u] == w[u - 1], w[u /; u <= 2] == 1/u}, w,
{u, 1, 30}, WorkingPrecision -> 50]];
unsafeIntegrate[f_] := f;
unsafeIntegrate[f_, var__, rest___] :=
  (unsafeIntegrate[(Integrate[f, var[[1]]] /. var[[1]] -> var[[3]]), rest] -
   unsafeIntegrate[(Integrate[f, var[[1]]] /. var[[1]] -> var[[2]]), rest])
Defining Ψ2
(1/5)*Integrate[Log[k1*t - 1]/(t*(1 - t)), {t, 1 - 1/k3, 1 - 1/k1}] +
  (-2/5)*Sum[i2comp[i, k], {k, 9, 21}]
Part of Ψ2
Implementation of (Listing 5.2) to find the smallest φ such that the integral
I2,i is still defined.
and Ξ2 (3.23)
a2[i_, j_] := a2Int[sL[j - 1], sL[j], sL[i], spL[i], k1L[i], k2L[i], k3L[i]];
bChen = {0.015826357,
0.015247971,
0.013898757,
0.011776059,
0.009405211,
0.006558950,
0.003536751,
0.001056651,
0};
int1 = Interpolation[psi1data];
int2 = Interpolation[psi2data];
interpsi[s_] :=
Piecewise[{{int2[2.2], 2 <= s <= 2.2}, {int2[s], 2.2 <= s <= root},
{int1[s], root <= s <= 3}}];
intk1 = Interpolation[k1data];
intk2 = Interpolation[k2data];
intk3 = Interpolation[k3data];
intSp = Interpolation[spdata]
[2] J. Baker. Excel and the Goldbach comet. Spreadsheets in Education (eJSiE),
2(2):2, 2007.
[7] J.-R. Chen. On the Goldbach's problem and the sieve methods. Scientia Sinica, 21(6):701–739, 1978.
[8] Y.-G. Chen and X.-G. Sun. On Romanoff's constant. Journal of Number Theory, 106(2):275–284, 2004.
[13] P. Erdös. On integers of the form 2k + p and some related problems. Summa
Brasil. Math., pages 113–125, 1950.
[15] R. Goldberg. Methods of real analysis. Blaisdell book in the pure and applied
sciences. Blaisdell Pub. Co., 1964.
[18] L. Habsieger and X.-F. Roblot. On integers of the form p + 2k . Acta Arith-
metica, 122(1):45–50, 2006.
[25] H. Iwaniec. A new form of the error term in the linear sieve. Acta Arith-
metica, 37(1):307–320, 1980.
[30] Y. V. Linnik. Addition of prime numbers with powers of one and the same
number. Matematicheskii Sbornik, 74(1):3–60, 1953.
[32] J. Liu, M.-C. Liu, and T. Wang. On the almost Goldbach problem of Linnik. Journal de théorie des nombres de Bordeaux, 11(1):133–147, 1999.
[33] Z. Liu and G. Lü. Density of two squares of primes and powers of 2. International Journal of Number Theory, 7(05):1317–1329, 2011.
[37] C.-D. Pan. A new application of the Yu. V. Linnik large sieve method. Chinese Math. Acta, 5:642–652, 1964.
[44] P. Ross. On Chen's theorem that each large even number has the form p1 + p2 or p1 + p2p3. Journal of the London Mathematical Society, 2(4):500–506, 1975.
[52] J. W. Wrench Jr. Evaluation of Artin's constant and the twin-prime constant. Mathematics of Computation, pages 396–398, 1961.
[54] J. Wu. Chen’s double sieve, Goldbach’s conjecture and the twin prime prob-
lem. Acta Arithmetica, 114:215–273, 2004.