Science of Computer Programming: Roland Backhouse, João F. Ferreira
1. Introduction
An algorithm is a sequence of instructions that can be systematically executed in the solution of a given problem.
Algorithms have been studied and developed since the beginning of civilisation, but, over the last 50 years, the
unprecedented scale of programming problems and the consequent demands on precision and concision have made
computer scientists hone their algorithmic problem-solving skills to a fine degree.
Even so, and although much of mathematics is algorithmic in nature, the skills needed to formulate and solve algorithmic
problems do not form an integral part of contemporary mathematics education; also, the teaching of computer-related topics
at pre-university level focuses on enabling students to be effective users of information technology, rather than on
equipping them with the skills to develop new applications or to solve new problems.
A blatant example is the conventional treatment of Euclid’s algorithm to compute the greatest common divisor (gcd) of
two positive natural numbers, the oldest nontrivial algorithm that involves iteration and that has not been superseded by
algebraic methods. (For a modern paraphrase of Euclid’s original statement, see [1, pp. 335–336].) Most books on number
theory include Euclid’s algorithm, but rarely use the algorithm directly to reason about properties of numbers. Moreover, the
presentation of the algorithm in such books has benefited little from the advances that have been made in our understanding
of the basic principles of algorithm development. An article such as this one is, of course, not the place to rewrite
mathematics textbooks. Nevertheless, our goal in this paper is to demonstrate how a focus on the algorithmic method can
enrich and re-invigorate the teaching of mathematics. We use Euclid’s algorithm to derive both old and well-known, and
new and previously unknown, properties of the greatest common divisor and rational numbers. The leitmotiv is the notion of
a loop invariant — how it can be used as a verification interface (i.e., how to verify theorems) and as a construction interface
(i.e., how to investigate and derive new theorems).
We begin the paper in Section 2 with basic properties of the division relation and the construction of Euclid’s algorithm
from its formal specification. In contrast to standard presentations of the algorithm, which typically assume the existence of
the gcd operator with specific algebraic properties, our derivation gives a constructive proof of the existence of an infimum
operator in the division ordering of natural numbers.
The focus of Section 3 is the systematic use of invariant properties of Euclid’s algorithm to verify known identities.
Section 4, on the other hand, shows how to use the algorithm to derive new results related to the greatest common
divisor: we calculate sufficient conditions for a natural-valued function1 to distribute over the greatest common divisor,
and we derive an efficient algorithm to enumerate the positive rational numbers in two different ways.
Although the identities in Section 3 are well-known, we believe that our derivations improve considerably on standard
presentations. One example is the proof that the greatest common divisor of two numbers is a linear combination of the
numbers; by the simple device of introducing matrix arithmetic into Euclid’s algorithm, it suffices to observe that matrix
multiplication is associative in order to prove the theorem. This exemplifies the gains in our problem-solving skills that can
be achieved by the right combination of precision and concision. The introduction of matrix arithmetic at this early stage
was also what enabled us to derive a previously unknown algorithm to enumerate the rationals in so-called Stern–Brocot
order (see Section 4), which is the primary novel result (as opposed to method) in this paper.
Included in the Appendix is a brief summary of the work of Stern and Brocot, the 19th century authors after whom the
Stern–Brocot tree is named. It is interesting to review their work, particularly that of Brocot, because it is clearly motivated
by practical, algorithmic problems. The review of Stern’s paper is included in order to resolve recent misunderstandings
about the origin of the Eisenstein–Stern and Stern–Brocot enumerations of the rationals.
2. Divisibility theory
Division is one of the most important concepts in number theory. This section begins with a short, basic account of the
division relation. We observe that division is a partial ordering on the natural numbers and pose the question whether the
infimum, in the division ordering, of any pair of numbers exists. The algorithm we know as Euclid’s gcd algorithm is then
derived in order to give a positive (constructive) answer to this question.
The division relation, here denoted by an infix ‘‘\’’ symbol, is the relation on integers defined to be the converse of the
‘‘is-a-multiple-of’’ relation2 :
[ m\n ≡ ⟨∃k : k∈Z : n = k×m⟩ ].
In words, an integer m divides an integer n (or n is divisible by m) if there exists some integer k such that n = k×m. In that
case, we say that m is a divisor of n and that n is a multiple of m.
The division relation plays a prominent role in number theory. So, we start by presenting some of its basic properties
and their relation to addition and multiplication. First, it is reflexive because multiplication has a unit (i.e., m = 1×m) and
it is transitive, since multiplication is associative. It is also (almost) preserved by linear combination because multiplication
distributes over addition:
[ k\x ∧ k\y ≡ k\(x + a×y) ∧ k\y ]. (1)
(We leave the reader to verify this law; take care to note the use of the distributivity of multiplication over addition in
its proof.) Reflexivity and transitivity make division a preorder on the integers. It is not anti-symmetric but the numbers
equivalent under the preordering are given by
[ m\n ∧ n\m ≡ abs.m = abs.n ],
where abs is the absolute value function and the infix dot denotes function application. Each equivalence class thus consists
of a natural number and its negation. If the division relation is restricted to natural numbers, division becomes anti-
symmetric, since abs is the identity function on natural numbers. This means that, restricted to the natural numbers, division
is a partial order with 0 as the greatest element and 1 as the smallest element.
At this stage in our analysis, properties (5), (6) and (7) assume that Eq. (2) has a solution in the appropriate cases. For
instance, (5) means that, if (2) has a solution for certain natural numbers m and n, it also has a solution when the values of
m and n are interchanged.
In view of properties (4) and (5), it remains to show that (2) has a solution when both m and n are strictly positive and
unequal. We do this by providing an algorithm that computes the solution. Eq. (2) does not directly suggest any algorithm,
but the germ of an algorithm is suggested by observing that it is equivalent to
x, y:: x = y ∧ ⟨∀k:: k\m ∧ k\n ≡ k\x ∧ k\y⟩. (8)
This new shape strongly suggests an algorithm that, initially, establishes the truth of
⟨∀k:: k\m ∧ k\n ≡ k\x ∧ k\y⟩
– which is trivially achieved by the assignment x, y := m, n – and then, reduces x and y in such a way that the property is
kept invariant whilst making progress to a state satisfying x = y. When such a state is reached, we have found a solution to
Eq. (8), and the value of x (or y, since they are equal) is a solution of (2). Thus, the structure of the algorithm we are trying
to develop is as follows4 :
3 Unless indicated otherwise, the domain of all variables is N, the set of natural numbers. Note that we include 0 in N. The notation x:: E means that x is
the unknown and the other free variables are parameters of the equation E.
4 We use the Guarded Command Language (GCL), a very simple programming language with just four programming constructs — assignment, sequential
composition, conditionals, and loops. The GCL was introduced by Dijkstra [4]. The statement do S od is a loop that executes S repeatedly while at least one
of S’s guards is true. Expressions in curly brackets are assertions.
{ 0<m ∧ 0<n }
x , y := m , n ;
{ Invariant: ⟨∀k:: k\m ∧ k\n ≡ k\x ∧ k\y⟩ }
do x ̸= y → x , y := A , B
od

where A and B are expressions yet to be chosen. Choosing the assignments so that the invariant is maintained whilst progress is made towards the termination condition x = y, we obtain:

{ 0<m ∧ 0<n }
x , y := m , n ;
{ Invariant: ⟨∀k:: k\m ∧ k\n ≡ k\x ∧ k\y⟩ }
do y < x → x := x−y
   x < y → y := y−x
od
{ x = y }
The algorithm that we have constructed is Euclid’s algorithm for computing the greatest common divisor of two positive
natural numbers, the oldest nontrivial algorithm that has survived to the present day! (Please note that our formulation of
the algorithm differs from Euclid’s original version and from most versions found in number-theory books. While they use
the property [ m▽n = n▽(m mod n) ], we use (10), i.e., [ m▽n = (m−n)▽n ]. For an encyclopedic account of Euclid’s
algorithm, we recommend [1, p. 334].)
In Section 2.1.1, we described the problem we were tackling as establishing that the infimum of two natural numbers
under the division ordering always exists; it was only at the end of the section that we announced that the algorithm we
had derived is an algorithm for determining the greatest common divisor. This was done deliberately in order to avoid the
confusion that can – and does – occur when using the words ‘‘greatest common divisor’’. In this section, we clarify the issue
in some detail.
Confusion and ambiguity occur when a set can be ordered in two different ways. The natural numbers can be ordered
by the usual size ordering (denoted by the symbol ≤), but they can also be ordered by the division relation. When the
ordering is not made explicit (for instance, when referring to the ‘‘least’’ or ‘‘greatest’’ of a set of numbers), we might normally
understand the size ordering, but the division ordering might be meant, depending on the context.
In words, the infimum of two values in a partial ordering – if it exists – is the largest value (with respect to the ordering)
that is at most both values (with respect to the ordering). The terminology ‘‘greatest lower bound’’ is often used instead
of ‘‘infimum’’. Of course, ‘‘greatest’’ here is with respect to the partial ordering in question. Thus, the infimum (or greatest
lower bound) of two numbers with respect to the division ordering – if it exists – is the largest number with respect to the
division ordering that divides both of the numbers. Since, for strictly positive numbers, ‘‘largest with respect to the division
ordering’’ implies ‘‘largest with respect to the size ordering’’ (equally, the division relation, restricted to strictly positive
numbers, is a subset of the ≤ relation), the ‘‘largest number with respect to the division ordering that divides both of the
numbers’’ is the same, for strictly positive numbers, as the ‘‘largest number with respect to the size ordering that divides both
of the numbers’’. Both these expressions may thus be abbreviated to the ‘‘greatest common divisor’’ of the numbers, with
no problems caused by the ambiguity in the meaning of ‘‘greatest’’ — when the numbers are strictly positive. Ambiguity does
occur, however, when the number 0 is included, because 0 is the largest number with respect to the division ordering, but
the smallest number with respect to the size ordering. If ‘‘greatest’’ is taken to mean with respect to the division ordering on
numbers, the greatest common divisor of 0 and 0 is simply 0. If, however, ‘‘greatest’’ is taken to mean with respect to the size
ordering, there is no greatest common divisor of 0 and 0. This would mean that the gcd operator is no longer idempotent,
since 0▽0 is undefined, and it is no longer associative, since, for positive m, (m▽0)▽0 is well-defined whilst m▽(0▽0) is
not.
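The division-ordering convention is the one adopted by, for example, Python's standard library; a small check:

```python
from math import gcd

# In the division ordering, 0 is the greatest element, so the infimum
# of 0 and 0 is 0; math.gcd adopts exactly this convention.
assert gcd(0, 0) == 0

# With this convention the operator stays idempotent and associative:
assert gcd(gcd(7, 0), 0) == gcd(7, gcd(0, 0)) == 7
```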
Concrete evidence of the confusion in the standard mathematics literature is easy to find. We looked up the definition
of greatest common divisor in three commonly used undergraduate mathematics texts, and found three non-equivalent
definitions. The first [5, p. 30] defines ‘‘greatest’’ to mean with respect to the divides relation (as, in our view, it should be
defined); the second [6, p. 21, def. 2.2] defines ‘‘greatest’’ to mean with respect to the ≤ relation (and requires that at least
one of the numbers be non-zero). The third text [7, p. 78] excludes zero altogether, defining the greatest common divisor of
strictly positive numbers as the generator of all linear combinations of the given numbers; the accompanying explanation (in
words) of the terminology replaces ‘‘greatest’’ by ‘‘largest’’ but does not clarify with respect to which ordering the ‘‘largest’’
is to be determined.
Now that we know that ▽ is the greatest common divisor, we could change the operator to gcd, i.e., replace m▽n by
m gcd n. However, we stick to the ‘‘▽’’ notation because it makes the formulae shorter, and, so, easier to read. We also use
‘‘△’’ to denote the least common multiple operator. To remember which is which, just remember that infima (lower bounds)
are indicated by downward-pointing symbols (e.g. ↓ for minimum, and ∨ for disjunction) and suprema (upper bounds) by
upward-pointing symbols.
In this section we show how algorithms and the notion of invariance can be used to prove theorems. In particular, we
show that the exploitation of Euclid’s algorithm makes proofs related to the greatest common divisor simpler and more
systematic than the traditional ones.
There is a clear pattern in all our calculations: every time we need to prove a new theorem involving ▽, we construct
an invariant that is valid initially (with x , y := m , n) and that corresponds to the theorem to be proved upon termination
(with x = y = m▽n). Alternatively, we can construct an invariant that is valid on termination (with x = y = m▽n) and whose
initial value corresponds to the theorem to be proved. The invariant in Section 3.3 is such an example. Then, it remains to
prove that the chosen invariant is valid after each iteration of the repeatable statement.
We start with a minor change in the invariant that allows us to prove some well-known properties. Then, we explore
how the shape of the theorems to be proved determines the shape of the invariant. We also show how to prove a geometrical
property of ▽.
The invariant that we use in Section 2.2 rests on the validity of the theorem
[ k\m ∧ k\n ≡ k\(m−n) ∧ k\n ].
But, as van Gasteren observed in [8, Chapter 11], we can use the more general and equally valid theorem
[ k \ (c ×m) ∧ k \ (c ×n) ≡ k \ (c × (m−n)) ∧ k \ (c ×n) ]
to conclude that the following property is an invariant of Euclid’s algorithm:
⟨∀k, c :: k \ (c ×m) ∧ k \ (c ×n) ≡ k \ (c ×x) ∧ k \ (c ×y)⟩.
In particular, the property is true on termination of the algorithm, at which point x and y both equal m▽n. That is, for all m
and n, such that 0 < m and 0 < n,
[ k \ (c ×m) ∧ k \ (c ×n) ≡ k \ (c × (m▽n)) ] . (11)
In addition, theorem (11) holds when m < 0, since
[ (−m)▽n = m▽n ] ∧ [ k \ (c ×(−m)) ≡ k \ (c ×m) ],
and it holds when m equals 0, since [ k\0 ]. Hence, using the symmetry between m and n we conclude that (11) is indeed
valid for all integers m and n. (In van Gasteren’s presentation, this theorem only holds for all (m, n) ̸= (0, 0).)
Theorem (11) can be used to prove a number of properties of the greatest common divisor. If, for instance, we replace k
by m, we have
[ m \ (c ×n) ≡ m \ (c × (m▽n)) ],
and, as a consequence, we also have
[ (m \ (c ×n) ≡ m\c ) ⇐ m▽n = 1 ]. (12)
More commonly, (12) is formulated as the weaker
[ m\c ⇐ m▽n = 1 ∧ m\(c ×n) ],
and is known as Euclid’s Lemma. Another significant property is
[ k \ (c × (m▽n)) ≡ k \ ((c ×m)▽(c ×n)) ] , (13)
which can be proved as:
k \ (c × (m▽n))
= { (11) }
k \ (c ×m) ∧ k \ (c ×n)
= { (3) }
k \ ((c ×m)▽(c ×n)).
From (13) we conclude
[ (c ×m)▽(c ×n) = c × (m▽n) ]. (14)
Property (14) states that multiplication by a natural number distributes over ▽. It is an important property that can be
used to simplify arguments where both multiplication and the greatest common divisor are involved. An example is Van
Gasteren’s proof of the theorem
[ (m×p)▽n = m▽n ⇐ p▽n = 1 ], (15)
which is as follows:
m▽n
= { p▽n = 1 and 1 is the unit of multiplication }
(m×(p▽n))▽n
= { (14) }
(m×p) ▽ (m×n) ▽ n
= { (m×n)▽n = n }
(m×p)▽n.
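Properties (14) and (15) are easily checked exhaustively over a small range; a Python sketch using the standard-library gcd:

```python
from math import gcd

N = 25  # small exhaustive range, for illustration only

# (14): multiplication distributes over the gcd: (c*m) gcd (c*n) = c * (m gcd n)
for c in range(N):
    for m in range(N):
        for n in range(N):
            assert gcd(c * m, c * n) == c * gcd(m, n)

# (15): (m*p) gcd n = m gcd n whenever p gcd n = 1
for m in range(1, N):
    for n in range(1, N):
        for p in range(1, N):
            if gcd(p, n) == 1:
                assert gcd(m * p, n) == gcd(m, n)
```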
In the previous sections, we have derived a number of properties of the ▽ operator. However, where the divides relation
is involved, the operator always occurs on the right side of the relation. (For examples, see (3) and (13).) Now we consider
properties where the operator is on the left side of a divides relation. Our goal is to show that
[ (m▽n) \ k ≡ ⟨∃a, b:: k = m×a + n×b⟩ ], (16)
where the range of a and b is the integers.
Of course, if (16) is indeed true, then it is also true when k equals m▽n. That is, a consequence of (16) is
[ ⟨∃a, b:: m▽n = m×a + n×b⟩ ]. (17)
In words, m▽n is a linear combination of m and n. For example,
3▽5 = 1 = 3×2 − 5×1 = 5×2 − 3×3.
Vice-versa, if (17) is indeed true then (16) is a consequence. (The crucial fact is that multiplication distributes through
addition.) It thus suffices to prove (17).
We can establish (17) by constructing such a linear combination for given values of m and n.
When n is 0, we have
m▽0 = m = m×1 + 0×1.
(The multiple of 0 is arbitrarily chosen to be 1.)
When both m and n are non-zero, we need to augment Euclid’s algorithm with a computation of the coefficients. The
most effective way to establish the property is to show that ‘‘x and y are linear combinations of m and n’’ is an invariant of
the algorithm; this is best expressed using matrix arithmetic.
In the algorithm below, the assignments to x and y have been replaced by equivalent assignments to the vector (x y).
Also, an additional variable C, whose value is a 2×2 matrix of integers, has been introduced into the program. Specifically,
I, A and B are 2×2 matrices, written here with rows separated by semicolons: I is the identity matrix ( 1 0 ; 0 1 ), A is the
matrix ( 1 0 ; −1 1 ), and B is the matrix ( 1 −1 ; 0 1 ). (The assignment
(x y) := (x y)×A is equivalent to x , y := x−y , y, as can be easily checked.)
{ 0<m ∧ 0<n }
(x y) , C := (m n) , I ;
{ Invariant: (x y) = (m n) × C }
do y < x → (x y) , C := (x y) × A , C×A
x < y → (x y) , C := (x y) × B , C×B
od
{ (x y) = (m▽n m▽n) = (m n) × C }
The invariant shows only the relation between the vectors (x y) and (m n); in words, (x y) is a multiple of (m n).
It is straightforward to verify that the invariant is established by the initialising assignment, and maintained by the loop
body. Crucial to the proof that it is maintained by the loop body is that multiplication (here of matrices) is associative. Had
we expressed the assignments to C in terms of its four elements, verifying that the invariant is maintained by the loop body
would have amounted to giving in detail the proof that matrix multiplication is associative. This is a pointless duplication
of effort, avoiding which fully justifies the excursion into matrix arithmetic.
(An exercise for the reader is to express the property that m and n are linear combinations of x and y. The solution involves
observing that A and B are invertible. This will be exploited in Section 4.2.)
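The matrix-augmented program can be transcribed directly; a Python sketch (the helper names mat_mul and gcd_with_coefficients are ours; A and B are the matrices from the text, so that (x y)×A performs x, y := x−y, y):

```python
def mat_mul(P, Q):
    """Product of two 2x2 integer matrices given as nested lists."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [-1, 1]]   # (x y) := (x y)*A  is  x, y := x-y, y
B = [[1, -1], [0, 1]]   # (x y) := (x y)*B  is  x, y := x, y-x

def gcd_with_coefficients(m, n):
    """Return (g, C) with g = gcd(m, n) and (g g) = (m n) * C.

    In particular g = m*C[0][0] + n*C[1][0]: a linear combination of m and n."""
    assert 0 < m and 0 < n
    x, y = m, n
    C = [[1, 0], [0, 1]]  # invariant: (x y) = (m n) * C
    while x != y:
        if y < x:
            x, C = x - y, mat_mul(C, A)
        else:
            y, C = y - x, mat_mul(C, B)
    return x, C
```

For example, gcd_with_coefficients(3, 5) yields 1 together with a matrix whose first column gives 1 = 3×2 + 5×(−1), matching the example in the text.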
In this section, we prove that in a Cartesian coordinate system, m▽n can be interpreted as the number of points with
integral coordinates on the straight line joining the points (0, 0) and (m, n), excluding (0, 0). Formally, with dummies s and
t ranging over integers, we prove for all m and n:
⟨Σ s, t : m×t = n×s ∧ s ≤ m ∧ t ≤ n ∧ (0 < s ∨ 0 < t ) : 1⟩ = m▽n. (18)
We begin by observing that (18) holds when m = 0 or when n = 0 (we leave the proof to the reader). When 0 < m and 0 < n,
we can simplify the range of (18). First, we observe that
(0 < s ≤ m ≡ 0 < t ≤ n) ⇐ m×t = n×s,
since
0<t ≤n
= { 0<m }
0 < m×t ≤ m×n
= { m×t = n×s }
0 < n×s ≤ m×n
= { 0 < n, cancellation }
0 < s ≤ m.
As a result, (18) can be written as
[ ⟨Σ s, t : m×t = n×s ∧ 0 < t ≤ n : 1⟩ = m▽n ]. (19)
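Identity (19) can be checked by brute force. For 0 < m and 0 < n, the condition m×t = n×s with 0 < t ≤ n determines s (s = m×t/n, and then 0 < s ≤ m), so it suffices to count the admissible values of t; a Python sketch:

```python
from math import gcd

def lattice_points(m, n):
    """Number of pairs (s, t) with m*t = n*s and 0 < t <= n.

    For each such t, s = m*t/n is an integer iff n divides m*t."""
    return sum(1 for t in range(1, n + 1) if (m * t) % n == 0)

# The count agrees with the greatest common divisor, as (19) claims.
for m in range(1, 30):
    for n in range(1, 30):
        assert lattice_points(m, n) == gcd(m, n)
```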
In order to use Euclid’s algorithm, we need to find an invariant that allows us to conclude (19). If we use as invariant
⟨Σ s, t : x×t = y×s ∧ 0 < t ≤ y : 1⟩ = x▽y, (20)
its initial value is the property that we want to prove:
⟨Σ s, t : m×t = n×s ∧ 0 < t ≤ n : 1⟩ = m▽n.
Its value upon termination is
⟨Σ s, t : (m▽n)×t = (m▽n)×s ∧ 0 < t ≤ m▽n : 1⟩ = (m▽n)▽(m▽n),
which is equivalent (by cancellation of multiplication and idempotence of ▽) to
⟨Σ s, t : t = s ∧ 0 < t ≤ m▽n : 1⟩ = m▽n.
It is easy to see that the invariant reduces to true on termination (because the sum on the left equals m▽n), making its initial
value also true.
It is also easy to see that the right-hand side of the invariant is unnecessary as it is the same initially and on termination.
This motivates the generalisation of the concept ‘‘invariant’’. ‘‘Invariants’’ in the literature are always boolean-valued
functions of the program variables, but we see no reason why ‘‘invariants’’ should not be of any type: for us, an invariant of
a loop is simply a function of the program variables whose value is unchanged by execution of the loop body.5 In this case,
the value is a natural number. Therefore, we can simplify (20) and use as invariant
⟨Σ s, t : x×t = y×s ∧ 0 < t ≤ y : 1⟩. (21)
Its value on termination is
⟨Σ s, t : (m▽n)×t = (m▽n)×s ∧ 0 < t ≤ m▽n : 1⟩,
which is equivalent to
⟨Σ s, t : t = s ∧ 0 < t ≤ m▽n : 1⟩.
As said above, this sum equals m▽n.
Now, since the invariant (21) equals the left-hand side of (19) for the initial values of x and y, we only have to check if it
remains constant after each iteration. This means that we have to prove (for y < x ∧ 0 < y):
⟨Σ s, t : x×t = y×s ∧ 0 < t ≤ y : 1⟩
= ⟨Σ s, t : (x−y)×t = y×s ∧ 0 < t ≤ y : 1⟩,
which can be rewritten, for positive x and y, as:
⟨Σ s, t : (x+y)×t = y×s ∧ 0 < t ≤ y : 1⟩
= ⟨Σ s, t : x×t = y×s ∧ 0 < t ≤ y : 1⟩.
The proof is as follows:
5 Some caution is needed here because our more general use of the word ‘‘invariant’’ does not completely coincide with its standard usage for boolean-
valued functions. The standard meaning of an invariant of a statement S is a boolean-valued function of the program variables which, in the case that the
function evaluates to true, remains true after execution of S. Our usage requires that, if the function evaluates to false before execution of S, it continues to
evaluate to false after executing S.
In this section we show how to use Euclid’s algorithm to derive new theorems related to the greatest common divisor.
We start by calculating reasonable sufficient conditions for a natural-valued function to distribute over the greatest common
divisor. We also derive an efficient algorithm for enumerating the positive rational numbers in two different ways.
In addition to multiplication by a natural number, there are other functions that distribute over ▽. The goal of this
subsection is to determine sufficient conditions for a natural-valued function f to distribute over ▽, i.e., for the following
property to hold:
[ f .(m▽n) = f .m ▽ f .n ]. (22)
For simplicity’s sake, we restrict all variables to natural numbers. This implies that the domain of f is also restricted to the
natural numbers.
We explore (22) by identifying invariants of Euclid’s algorithm involving the function f . To determine an appropriate
loop invariant, we take the right-hand side of (22) and calculate:
f .m ▽ f .n
= { the initial values of x and y are m and n, respectively }
f .x ▽ f .y
= { suppose that f .x ▽ f .y is invariant;
on termination: x = m▽n ∧ y = m▽n }
f .(m▽n) ▽ f .(m▽n)
= { ▽ is idempotent }
f .(m▽n).
Property (22) is thus established under the assumption that f .x▽f .y is an invariant of the loop body. (Please note that this
invariant is of the more general form introduced in Section 3.3.)
The next step is to determine what condition on f guarantees that f .x ▽ f .y is indeed invariant. Noting the symmetry in
the loop body between x and y, the condition is easily calculated to be

[ f .x ▽ f .y = f .(x+y) ▽ f .y ]. (23)

Condition (23) guarantees that f .x ▽ f .y is invariant and, hence, that f distributes over ▽. Conversely, if f distributes over ▽, then (23) follows:
f .(x+y) ▽ f .y
= { f distributes over ▽ }
f .((x+y)▽y)
= { (7) }
f .(x▽y)
= { f distributes over ▽ }
f .x ▽ f .y.
By mutual implication we conclude that
‘‘ f distributes over ▽ ’’ ≡ (23).
We have now reached a point where we can determine if a function distributes over ▽. However, since (23) still has two
occurrences of ▽, we want to refine it into simpler properties. Towards that end we turn our attention to the condition
f .x ▽ f .y = f .(x+y) ▽ f .y,
and we explore simple ways of guaranteeing that it is everywhere true. For instance, it is immediately obvious that any
function that distributes over addition distributes over ▽. (Note that multiplication by a natural number is such a function.)
The proof is very simple:
f .(x+y) ▽ f .y
= { f distributes over addition }
(f .x+f .y) ▽ f .y
= { (7) }
f .x ▽ f .y.
In view of properties (7) and (15), we formulate the following lemma, which is a more general requirement:
Lemma 24. All functions f that satisfy
⟨∀x, y:: ⟨∃a, b : a▽f .y = 1 : f .(x+y) = a × f .x + b × f .y⟩⟩
distribute over ▽.
Proof.
f .(x+y) ▽ f .y
= { f .(x+y) = a × f .x + b × f .y }
(a × f .x + b × f .y) ▽ f .y
= { (7) }
(a × f .x) ▽ f .y
= { a ▽ f .y = 1 and (15) }
f .x ▽ f .y.
Note that since the discussion above is based on Euclid’s algorithm, Lemma 24 only applies to positive arguments. We now
investigate the case where m or n is 0. We have, for m = 0 :
f .(0▽n) = f .0 ▽ f .n
= { [ 0▽m = m ] }
f .n = f .0 ▽ f .n
= { [ a\b ≡ a = b▽a ] }
f .n \ f .0
⇐ { obvious possibilities that make the expression valid
are f .0 = 0, f .n = 1, or f .n = f .0; the first is the
interesting case }
f .0 = 0.
fib.k \ fib.(n×k)
= { [ a\b ≡ a▽b = a ] ,
with a := fib.k and b := fib.(n×k) }
fib.k ▽ fib.(n×k) = fib.k
= { fib distributes over ▽ }
fib.(k▽(n×k)) = fib.k
= { k\(n×k), so k▽(n×k) = k }
true.
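Both the distributivity of fib over ▽ and the resulting divisibility property can be checked numerically; a Python sketch, assuming the standard numbering fib.0 = 0 and fib.1 = 1 (note that fib.0 = 0 also satisfies the condition f.0 = 0 calculated above):

```python
from math import gcd

def fib(n):
    """Iterative Fibonacci numbers: fib(0) = 0, fib(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# fib distributes over the greatest common divisor ...
for m in range(1, 20):
    for n in range(1, 20):
        assert gcd(fib(m), fib(n)) == fib(gcd(m, n))

# ... and, in particular, fib(k) divides fib(n*k).
assert all(fib(n * k) % fib(k) == 0
           for k in range(1, 12) for n in range(1, 8))
```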
A standard theorem of mathematics is that the rationals are ‘‘denumerable’’, i.e. they can be put in one-to-one
correspondence with the natural numbers. Another way of saying this is that it is possible to enumerate the rationals so
that each appears exactly once.
Recently, there has been a spate of interest in the construction of bijections between the natural numbers and the
(positive) rationals (see [11–13] and [14, pp. 94–97]). Gibbons et al. [11] describe as ‘‘startling’’ the observation that the
rationals can be efficiently enumerated6 by ‘‘deforesting’’ the so-called ‘‘Calkin–Wilf’’ [13] tree of rationals. However, they
claim that it is ‘‘not at all obvious’’ how to ‘‘deforest’’ the Stern–Brocot tree of rationals.
In this section, we derive an efficient algorithm for enumerating the rationals according to both orderings. The algorithm
is based on a bijection between the rationals and invertible 2×2 matrices. The key to the algorithm’s derivation is the
reformulation of Euclid’s algorithm in terms of matrices (see Section 3.2). The enumeration is efficient in the sense that
it has the same time and space complexity as the algorithm credited to Moshe Newman in [12], albeit with a constant-factor
increase in the number of variables and in the number of arithmetic operations needed at each iteration.
Note that, in our view, it is misleading to use the name ‘‘Calkin–Wilf tree of rationals’’ because Stern [15] had already
documented essentially the same structural characterisation of the rationals almost 150 years earlier than Calkin and
Wilf. For more explanation, see the Appendix in which we review in some detail the relevant sections of Stern’s paper.
Stern attributes the structure to Eisenstein, so henceforth we refer to the ‘‘Eisenstein–Stern’’ tree of rationals where recent
publications (including our own [16]) would refer to the ‘‘Calkin–Wilf tree of rationals’’. Appendix A.1 includes background
information. For a comprehensive account of properties of the Stern–Brocot tree, including further relationships with
Euclid’s algorithm, see [10, pp. 116–118].
6 By an efficient enumeration we mean a method of generating each rational without duplication with constant cost per rational in terms of arbitrary-
precision simple arithmetic operations.
That all matrices in the tree are different is proved by showing that the tree is a binary search tree (as formalised shortly).
The key element of the proof7 is that the determinants of A and B are both equal to 1 and, hence, the determinant of any
finite product of Ls and Rs is also 1.
Formally, we define the relation ≺ on matrices that are finite products of Ls and Rs by

( a c ; b d ) ≺ ( a′ c′ ; b′ d′ ) ≡ (a+c)/(b+d) < (a′+c′)/(b′+d′).
(Note that the denominator in these fractions is strictly positive; this fact is easily proved by induction.) We prove that, for
all such matrices X, Y and Z,
X×L×Y ≺ X, (30)

suppose X = ( a c ; b d ) and Y = ( a′ c′ ; b′ d′ ). Then, since L = ( 1 0 ; 1 1 ), (30) is easily calculated to be
(x y) , C := (x y) × A , C×A

in the body of Euclid’s algorithm becomes

( x ; y ) , C := B × ( x ; y ) , B×C ,

where ( x ; y ) denotes the column vector with components x and y. Similarly, the assignment

(x y) , C := (x y) × B , C×B

becomes

( x ; y ) , C := A × ( x ; y ) , A×C.
7 The proof is an adaptation of the proof in [10, p. 117] that the rationals in the Stern–Brocot tree are all different. Our use of determinants corresponds
to their use of ‘‘the fundamental fact’’ (4.31). Note that the definitions of L and R are swapped around in [10].
In this way, we get a second bijection between the rationals and the finite products of the matrices A⁻¹ and B⁻¹. This is the
basis for our second method of enumerating the rationals.
In summary, we have:
{ 0<m ∧ 0<n }
(x y) , D := (m n) , I ;
{ Invariant: (m n) = (x y) × D }
do y < x → (x y) , D := (x y) × L⁻¹ , L×D
   x < y → (x y) , D := (x y) × R⁻¹ , R×D
od
simply premultiply by (1 1) or postmultiply by the column vector ( 1 ; 1 ). Formally, the matrices are enumerated by enumerating all strings of
Ls and Rs in lexicographic order, beginning with the empty string; each string is mapped to a matrix by the homomorphism
that maps ‘‘L’’ to L, ‘‘R’’ to R, and string concatenation to matrix product. It is easy to enumerate all such strings; as we see
shortly, converting strings to matrices is also not difficult, for the simple reason that L and R are invertible.
The enumeration proceeds level-by-level. Beginning with the unit matrix (level 0), the matrices on each level are
enumerated from left to right. There are 2ᵏ matrices on level k, the first of which is Lᵏ. The problem is to determine, for
a given matrix, which is the matrix ‘‘adjacent’’ to it. That is, given a matrix D, which is a finite product of Ls and Rs, and is
different from Rᵏ for all k, what is the matrix that is to the immediate right of D in Fig. 1?
Consider the lexicographic ordering on strings of Ls and Rs of the same length. The string immediately following a string
s (that is not the last) is found by identifying the rightmost L in s. Supposing s is the string tLRʲ, where Rʲ is a string of j Rs,
its successor is tRLʲ.
It is now easy to see how to transform the matrix identified by s to its successor matrix. Simply postmultiply by
R^-j × L^-1 × R × L^j. This is because, for all T and j,
(T × L × R^j) × (R^-j × L^-1 × R × L^j) = T × R × L^j.
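This identity is easy to check numerically. The following sketch (ours) assumes the definitions L = (1 0; 1 1) and R = (1 1; 0 1); both have determinant 1, so their inverses are integer matrices.

```python
# Numerical check (ours) of the successor identity, assuming
# L = (1 0; 1 1) and R = (1 1; 0 1).
L = ((1, 0), (1, 1))
R = ((1, 1), (0, 1))
L_INV = ((1, 0), (-1, 1))      # inverse of L (determinant 1)
R_INV = ((1, -1), (0, 1))      # inverse of R (determinant 1)
IDENT = ((1, 0), (0, 1))

def mul(*ms):
    """Product of 2x2 integer matrices (empty product = identity)."""
    out = IDENT
    for M in ms:
        out = tuple(tuple(sum(out[i][k] * M[k][j] for k in range(2))
                          for j in range(2)) for i in range(2))
    return out

def power(M, j):
    return mul(*([M] * j))

def successor_step(j):
    """The matrix R^-j × L^-1 × R × L^j."""
    return mul(power(R_INV, j), L_INV, R, power(L, j))
```

Multiplying out, successor_step(j) has the constant shape (2j+1 1; −1 0), which is the form used in the enumeration program later in this section.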
(m n) = (x y) × D.
D is initially the identity matrix and x and y are initialised to m and n, respectively; immediately following the initialisation
process, D is repeatedly premultiplied by R so long as x is less than y. Simultaneously, y is reduced by x. The number of times
that D is premultiplied by R is thus the greatest number j such that j×m is less than n, which is ⌈n/m⌉ − 1. Now suppose the
input values m and n are coprime. Then, on termination of the algorithm, (1 1) × D equals (m n). That is, if
D = ( D00 D01 ; D10 D11 ),
then
⌈n/m⌉ − 1 = ⌊(D01 + D11 − 1) / (D00 + D10)⌋.
It remains to decide how to keep track of the levels in the tree. For this purpose, it is not necessary to maintain a counter. It
suffices to observe that D is a power of R exactly when the rationals in the Eisenstein–Stern, or Stern–Brocot, tree are integers,
and this integer is the number of the next level in the tree (where the root is on level 0). So, it is easy to test whether the
last matrix on the current level has been reached. Equally, the first matrix on the next level is easily calculated. For reasons
we discuss in the next section, we choose to test whether the rational in the Eisenstein–Stern tree is an integer; that is,
we evaluate the boolean D00 + D10 = 1. In this way, we get the following (non-terminating) program which computes the
successive values of D.
D := I ;
do D00 + D10 = 1 → D := ( 1 0 ; D01+D11 1 )
   D00 + D10 ≠ 1 → j := ⌊(D01 + D11 − 1) / (D00 + D10)⌋ ;
                   D := D × ( 2j+1 1 ; −1 0 )
od
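The program transcribes directly into an executable generator. The following Python sketch is ours; it assumes the definitions L = (1 0; 1 1) and R = (1 1; 0 1), represents D by its four entries, and yields the rational obtained by postmultiplying D by the column vector (1 1)^T, i.e. the Stern–Brocot enumeration.

```python
# Sketch (ours) of the non-terminating enumeration program, assuming
# L = (1 0; 1 1) and R = (1 1; 0 1). D = (a b; c d) starts as the
# identity; each yielded pair is D × (1 1)^T, read as a rational.

def stern_brocot():
    a, b, c, d = 1, 0, 0, 1               # D := I
    while True:
        yield (a + b, c + d)              # D × (1 1)^T
        if a + c == 1:                    # last matrix on this level (D = R^k)
            a, b, c, d = 1, 0, b + d, 1   # first matrix of next level (L^(k+1))
        else:
            j = (b + d - 1) // (a + c)
            # postmultiply D by ( 2j+1 1 ; -1 0 )
            a, b = a * (2 * j + 1) - b, a
            c, d = c * (2 * j + 1) - d, c
```

The yielded pairs begin 1/1, 1/2, 2/1, 1/3, 2/3, 3/2, 3/1, 1/4, …, level by level with each level in ascending order.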
n −1
A minor simplification of this algorithm is that the ‘‘− 1’’ in the assignment to j can be omitted. This is because m
and
n
m
are equal when m and n are coprime and m is different from 1. We return to this shortly.
4.2.4. Discussion
Our construction of an algorithm for enumerating the rationals in Stern–Brocot order was motivated by reading two
publications, [10, pp. 116–118] and [11]. Gibbons et al. [11] show how to enumerate the elements of the Eisenstein–Stern
tree, but claim that ‘‘it is not at all obvious how to do this for the Stern–Brocot tree’’. Specifically, they say:9
However, there is an even better compensation for the loss of the ordering property in moving from the Stern–
Brocot to the Calkin–Wilf tree: it becomes possible to deforest the tree altogether, and generate the rationals directly,
maintaining no additional state beyond the ‘current’ rational. This startling observation is due to Moshe Newman [12].
In contrast, it is not at all obvious how to do this for the Stern–Brocot tree; the best we can do seems to be to deforest
the tree as far as its levels, but this still entails additional state of increasing size.
8 Recall that, to comply with existing literature, the enumerated rational is n/m and not m/n.
9 Recall that they attribute the tree to Calkin and Wilf rather than Eisenstein and Stern.
In this section, we have shown that it is possible to enumerate the rationals in Stern–Brocot order without incurring
‘‘additional state of increasing size’’. More importantly, we have presented one enumeration algorithm with two
specialisations, one being the ‘‘Calkin–Wilf’’ enumeration they present, and the other being the Stern–Brocot enumeration
that they described as being ‘‘not at all obvious’’.
The optimisation of Eisenstein–Stern enumeration which leads to Newman’s algorithm is not possible for Stern–Brocot
enumeration. Nevertheless, the complexity of Stern–Brocot enumeration is the same as the complexity of Newman’s
algorithm, both in time and space. The only disadvantage of Stern–Brocot enumeration is that four variables are needed
in place of two; the advantage is the (well-known) advantage of the Stern–Brocot tree over the Eisenstein–Stern tree — the
rationals on a given level are in ascending order.
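For comparison, Newman's algorithm itself fits in a few lines. The sketch below (ours) iterates the well-known recurrence x → 1/(2⌊x⌋ + 1 − x), which generates the rationals in Eisenstein–Stern order:

```python
# Sketch (ours) of Newman's algorithm: starting at 1/1, each rational x
# is followed by 1 / (2*floor(x) + 1 - x). This produces the
# Eisenstein-Stern (Calkin-Wilf) enumeration of the positive rationals.
from fractions import Fraction

def newman(count):
    x = Fraction(1, 1)
    out = []
    for _ in range(count):
        out.append(x)
        x = 1 / (2 * (x.numerator // x.denominator) + 1 - x)
    return out
```

Note that only the ‘current’ rational is maintained, which is precisely the point made in the quotation above.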
Gibbons, Lester and Bird’s goal seems to have been to show how the functional programming language Haskell
implements the various constructions – the construction of the tree structures and Newman’s algorithm. In doing so, they
repeat the existing mathematical presentations of the algorithms as given in [10,13,12]. The ingredients for an efficient
enumeration of the Stern–Brocot tree are all present in these publications, but the recipe is missing!
The fact that expressing the rationals in ‘‘lowest form’’ is essential to the avoidance of duplication in any enumeration
immediately suggests the relevance of Euclid’s algorithm. The key to our exposition is that Euclid’s algorithm can be
expressed in terms of matrix multiplications, where – significantly – the underlying matrices are invertible. Transposition
and inversion of the matrices capture the symmetry properties in a precise, calculational framework. As a result, the bijection
between the rationals and the tree elements is immediate and we do not need to give separate, inductive proofs for both
tree structures. Also, the determination of the next element in an enumeration of the tree elements has been reduced to one
unifying construction.
5. Conclusion
In our view, much of mathematics is inherently algorithmic; it is also clear that, in the modern age, algorithmic
problem solving is just as important as, if not much more so than, in the 19th century. Somehow, however, mathematical
education in the 20th century lost sight of its algorithmic roots. We hope to have exemplified in this paper how a fresh
approach to introductory number theory that focuses on the algorithmic content of the theory can combine practicality
with mathematical elegance. By continuing this endeavour we believe that the teaching of mathematics can be enriched
and given new vigour.
Acknowledgements
Thanks go to Christian Wuthrich for help in translating the most relevant parts of Stern’s paper, and to Jeremy Gibbons
for his comments on earlier drafts of this paper, and for help with TeX commands. Thanks also to our colleagues in
the Nottingham Tuesday Morning Club for helping iron out omissions and ambiguities, and to Arjan Mooij and Jeremy
Weissmann for their comments on Section 4.1.
Some material presented here was developed in the context of the MathIS project, supported by Fundação para a Ciência
e a Tecnologia (Portugal) under contract PTDC/EIA/73252/2006.
The second author was funded by Fundação para a Ciência e a Tecnologia (Portugal) under grant SFRH/BD/24269/2005.
The primary novel result of our paper is the construction given in Section 4.2 of an algorithm to enumerate the rationals
in Stern–Brocot order. Apart from minor differences, this section of our paper was submitted in April 2007 to the American
Mathematical Monthly; it was rejected in November 2007 on the grounds that it was not of sufficient interest to readers of
the Monthly. One (of two referees) did, however, recommend publication. The referee made the following general comment.
Each of the two trees of rationals – the Stern–Brocot tree and the Calkin–Wilf tree – has some history. Since this paper
now gives the definitive link between these trees, I encourage the authors, perhaps in their Discussion section, to also
give the definitive histories of these trees, something in the same spirit as the Remarks at the end of the Calkin and
Wilf paper.
Since the publication of [16], we have succeeded in obtaining copies of the original papers and it is indeed interesting to
briefly review the papers. But we do not claim to provide ‘‘definitive histories of these trees’’ — that is a task for a historian
of mathematics.
Appendix A.1 is about the paper [15] published in 1858 by Stern. The surprising fact that emerges from the review is
that the so-called ‘‘Calkin–Wilf’’ tree of rationals, and not just the ‘‘Stern–Brocot’’ tree, is studied in detail in his paper.
Moreover, of the two structures, the ‘‘Calkin–Wilf’’ tree is more readily recognised; the ‘‘Stern–Brocot’’ tree requires rather
more understanding to identify. Brocot’s paper [17], which we review in Appendix A.2, is interesting because it illustrates
how 19th century mathematics was driven by practical, algorithmic problems. (For additional historical remarks, see also
[18].)
Earlier we have commented that the structure that has recently been referred to as the ‘‘Calkin–Wilf’’ tree was
documented by Stern [15] in 1858. In this section we review those sections of Stern’s paper that are relevant to our own.
m n.
Subsequent rows are obtained by inserting between every pair of numbers the sum of the numbers. Thus the first row is
m  m+n  n.
The process of constructing such rows is repeated indefinitely. The sequence of numbers obtained by concatenating the
individual rows in order is what is now called the Eisenstein array and denoted by Ei(m,n) (see, for example, [19, sequence
A064881]). Stern refers to each occurrence of a number in rows other than the zeroth row as either a sum element
(‘‘Summenglied’’) or a source element (‘‘Stammglied’’). The sum elements are the newly added numbers. For example, in
the first row the number m+n is a sum element; in the second row the number m+n is a source element.
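The row-forming process can be sketched directly (our code; each new row inserts, between every adjacent pair, the sum of the pair):

```python
def eisenstein_rows(m, n, count):
    """Sketch: the first `count` rows of the construction underlying the
    Eisenstein array Ei(m, n). Each row is obtained from the previous one
    by inserting, between every pair of adjacent numbers, their sum."""
    row = [m, n]
    rows = [row]
    for _ in range(count - 1):
        new = [row[0]]
        for a, b in zip(row, row[1:]):
            new.extend([a + b, b])     # insert the sum, then keep b
        row = new
        rows.append(row)
    return rows
```

Concatenating the rows in order gives the Eisenstein array Ei(m,n).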
does so implicitly in section 10 where he relates the continued fraction representation of b/a to the row number in which
the pair (a, b) occurs. He does not appear to suggest a similar method for computing t in the general case of enumerating
Ei(m,n). However, it is straightforward to combine our derivation of Newman’s algorithm with Stern’s theorems to obtain
an algorithm to enumerate the elements of Ei(m,n) for arbitrary natural numbers m and n. Interested readers may consult
our website [20] where several implementations are discussed.
As stated at the beginning of this section, the conclusion is that Stern almost derives Newman’s algorithm, but not quite.
On the other hand, because his analysis is of the general case Ei(m,n) as opposed to Ei(1,1), his results are more general.
Achille Brocot was a famous French watchmaker who, some years before the publication of his paper [17], had to fix
some pendulums used for astronomical measurements. However, the device was incomplete and he did not know how
to compute the number of teeth of the missing cogs. He was unable to find any literature helpful to the solution of
the problem, so, after some experiments, he devised a method to compute the numbers. In his paper, Brocot illustrates his
method with the following example:
A shaft turns once in 23 min. We want suitable cogs so that another shaft completes a revolution in 3 h and 11 min,
that is 191 min.
The ratio between the two speeds is 191/23, so we can clearly choose a cog with 191 teeth, and another one with 23 teeth. But, as
Brocot wrote, it was not possible, at that time, to create cogs with so many teeth. And because 191 and 23 are coprime, cogs
with fewer teeth can only approximate the true ratio.
Brocot’s contribution was a method to compute approximations to the true ratios (hence the title of his paper, ‘‘Calculus
of cogs by approximation’’). He begins by observing that 191/23 must lie between the ratios 8/1 and 9/1. If we choose the ratio 8/1, the
error is −7 since 8×23 = 1×191 − 7. This means that if we choose this ratio, the slower cog completes its revolution seven
minutes early, i.e., after 8×23 min. On the other hand, if we choose the ratio 9/1, the error is +16 since 9×23 = 1×191 + 16,
meaning that the slower cog completes its revolution sixteen minutes late, i.e., after 9×23 min.
Accordingly, Brocot writes two rows:
8 1 −7
9 1 +16
His method consists in iteratively forming a new row, by adding the numbers in all three columns of the rows that produce
the smallest error. Initially, we only have two rows, so we add the numbers in the three columns and we write the row of
sums in the middle.
8 1 −7
17 2 +9
9 1 +16
(If we choose the ratio 17/2, the slower cog completes its revolution 9/2 min later, since 17/2 = (191 + 9/2)/23.) Further approximations
are constructed by adding a row adjacent to the row that minimises the error term. The process ends once we reach the
error 0, which corresponds to the true ratio. The final state of the table is:
8 1 −7
33 4 −5
58 7 −3
83 10 −1
191 23 0
108 13 +1
25 3 +2
17 2 +9
9 1 +16
The conclusion is that the two closest approximations to 191/23 are the ratios 83/10 (which runs 1/10 min fast) and 108/13 (which runs
1/13 min slow). We could continue this process, getting at each stage a closer approximation to 191/23. In fact, Brocot refines
the table shown above, in order to construct a multistage cog train (see [17, p. 191]).
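Brocot’s procedure can be sketched in code. The sketch below is our reading of the process: each new row is the sum of the two rows that currently bracket the true ratio, and the process stops at error 0 (it assumes the target ratio num/den is not an integer):

```python
# Sketch (ours) of Brocot's table-building process. Each row is a triple
# (teeth, teeth2, error) with error = teeth*den - teeth2*num for the
# target ratio num/den; a negative error means the cog runs fast.

def brocot_rows(num, den):
    q = num // den
    lo = (q, 1, q * den - num)               # error < 0: runs fast
    hi = (q + 1, 1, (q + 1) * den - num)     # error > 0: runs slow
    rows = [lo, hi]
    while True:
        mid = tuple(a + b for a, b in zip(lo, hi))   # sum of the bracket pair
        rows.append(mid)
        if mid[2] == 0:                      # true ratio reached
            return rows
        lo, hi = (mid, hi) if mid[2] < 0 else (lo, mid)
```

For 191/23 this reproduces exactly the rows of the final table above, in the order in which Brocot adds them.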
At each step in Brocot’s process we add a new ratio (m+m′)/(n+n′), which is usually called the mediant of m/n and m′/n′. Similarly, each
node in the Stern–Brocot tree is of the form (m+m′)/(n+n′), where m/n is the nearest ancestor above and to the left, and m′/n′ is the
nearest ancestor above and to the right. (Consider, for example, the rational 4/3 in Fig. 3. Its nearest ancestor above and to the left is
1/1 and its nearest ancestor above and to the right is 3/2.) Brocot’s process can be used to construct the Stern–Brocot tree: first,
create an array that contains initially the rationals 0/1 and 1/0; then, repeatedly insert the rational (m+m′)/(n+n′) between two
adjacent fractions m/n and m′/n′. In the first step we add only one rational to the array
0/1 , 1/1 , 1/0 ,
but in the second step we add two new rationals:
0/1 , 1/2 , 1/1 , 2/1 , 1/0 .
Generally, in the nth step we add 2^(n−1) new rationals. Clearly, this array can be represented as an infinite binary tree, whose
first four levels are represented in Fig. 3 (we omit the fractions 0/1 and 1/0).
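The array construction just described can be sketched as follows (our code; the formal fractions 0/1 and 1/0 are represented as (numerator, denominator) pairs):

```python
def brocot_array(steps):
    """Sketch: successive states of the array, starting from the formal
    fractions 0/1 and 1/0 and inserting the mediant (m+m2)/(n+n2)
    between every two adjacent fractions m/n and m2/n2."""
    row = [(0, 1), (1, 0)]
    states = [row]
    for _ in range(steps):
        new = [row[0]]
        for (m, n), (m2, n2) in zip(row, row[1:]):
            new.append((m + m2, n + n2))   # the mediant
            new.append((m2, n2))
        row = new
        states.append(row)
    return states
```

The fractions added at step n (those not present at step n−1) are the 2^(n−1) rationals on level n−1 of the Stern–Brocot tree, in ascending order.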
The most interesting aspect to us of Brocot’s paper is that it solves an algorithmic problem. Brocot was faced with the
practical problem of how to approximate rational numbers in order to construct clocks of satisfactory accuracy and his
solution is indisputably an algorithm. Stern’s paper is closer to a traditional mathematical paper but, even so, it is an in-
depth study of an algorithm for generating rows of numbers of increasing length.
A.3. Conclusion
There can be no doubt that what has been dubbed in recent years the ‘‘Calkin–Wilf’’ tree of rationals is, in fact, a central
topic in Stern’s 1858 paper. Calkin and Wilf [13] admit that in Stern’s paper ‘‘there is a structure that is essentially our
tree of fractions’’ but add ‘‘in a different garb’’ and do not clarify what is meant by ‘‘a different garb’’. It is unfortunate that
the misleading name has now become prevalent; in order to avoid further misinterpretations of historical fact, it would be
desirable for Stern’s paper to be translated into English.
We have not attempted to determine how the name ‘‘Stern–Brocot’’ tree came into existence. It has been very surprising
to us how much easier it is to identify the Eisenstein–Stern tree in Stern’s paper in comparison to identifying the Stern–
Brocot tree.
References
[1] D.E. Knuth, The Art of Computer Programming, 3rd ed., in: Seminumerical Algorithms, vol. 2, Addison-Wesley Longman Publishing Co., Inc., Boston,
MA, USA, 1997.
[2] R. Backhouse, Program Construction. Calculating Implementations From Specifications, John Wiley & Sons, Ltd, 2003.
[3] D. Gries, F.B. Schneider, A Logical Approach to Discrete Math, Springer-Verlag, 1993.
[4] E.W. Dijkstra, Guarded commands, nondeterminacy and formal derivation of programs, Communications of the ACM 18 (8) (1975) 453–457.
[5] K.E. Hirst, Numbers, Sequences and Series, Edward Arnold, 1995.
[6] D.M. Burton, Elementary Number Theory, 6th ed., McGraw-Hill Higher Education, 2005. URL http://www.worldcat.org/isbn/0071244255.
[7] J.B. Fraleigh, A First Course in Abstract Algebra, 6th ed., Addison Wesley Longman Inc., 1998.
[8] A. van Gasteren, On the Shape of Mathematical Arguments, in: LNCS, vol. 445, Springer-Verlag, 1990.
[9] E.W. Dijkstra, Fibonacci and the greatest common divisor, April 1990. URL http://www.cs.utexas.edu/users/EWD/ewd10xx/EWD1077.PDF.
[10] R.L. Graham, D.E. Knuth, O. Patashnik, Concrete Mathematics: A Foundation for Computer Science, 2nd ed., Addison-Wesley Publishing Company,
1994.
[11] J. Gibbons, D. Lester, R. Bird, Enumerating the rationals, Journal of Functional Programming 16 (3) (2006) 281–291.
[12] D.E. Knuth, C. Rupert, A. Smith, R. Stong, Recounting the rationals, continued, American Mathematical Monthly 110 (7) (2003) 642–643.
[13] N. Calkin, H.S. Wilf, Recounting the rationals, The American Mathematical Monthly 107 (4) (2000) 360–363.
[14] M. Aigner, G. Ziegler, Proofs From The Book, 3rd ed., Springer-Verlag, 2004.
[15] M.A. Stern, Ueber eine zahlentheoretische Funktion, Journal für die reine und angewandte Mathematik 55 (1858) 193–220.
[16] R. Backhouse, J.F. Ferreira, Recounting the rationals: twice! in: Mathematics of Program Construction, in: LNCS, vol. 5133, 2008, pp. 79–91.
doi:10.1007/978-3-540-70594-9_6. URL http://joaoff.com/publications/2008/rationals.
[17] A. Brocot, Calcul des rouages par approximation, nouvelle méthode, Revue Chronométrique 3 (1861) 186–194. Available via
http://joaoff.com/publications/2008/rationals/.
[18] B. Hayes, On the teeth of wheels, American Scientist 88 (4) (2000) 296–300. URL http://www.americanscientist.org/issues/pub/2000/4/on-the-teeth-of-wheels.
[19] N.J.A. Sloane, The on-line encyclopedia of integer sequences. URL http://www.research.att.com/~njas/sequences/.
[20] R. Backhouse, J.F. Ferreira, On Euclid’s algorithm and elementary number theory, 2009. URL http://joaoff.com/publications/2009/euclid-alg.