FIGURE 1.7 (a) Adjacency matrix and (b) adjacency lists of the graph in Figure 1.6a.
adjacency lists indicate columns of the adjacency matrix that, for a given vertex,
contain 1s.
If a graph is sparse, the adjacency list representation may use less space
than the corresponding adjacency matrix despite the extra storage consumed by
pointers of the linked lists; the situation is exactly opposite for dense graphs. In
general, which of the two representations is more convenient depends on the
nature of the problem, on the algorithm used for solving it, and, possibly, on the
type of input graph (sparse or dense).
Weighted Graphs A weighted graph (or weighted digraph) is a graph (or digraph) with numbers assigned to its edges. These numbers are called weights or costs. An interest in such graphs is motivated by numerous real-world applications, such as finding the shortest path between two points in a transportation or communication network or the traveling salesman problem mentioned earlier.
Both principal representations of a graph can be easily adopted to accommodate weighted graphs. If a weighted graph is represented by its adjacency matrix, then its element A[i, j] will simply contain the weight of the edge from the ith to the jth vertex if there is such an edge and a special symbol, e.g., ∞, if there is no such edge. Such a matrix is called the weight matrix or cost matrix. This approach is illustrated in Figure 1.8b for the weighted graph in Figure 1.8a. (For some applications, it is more convenient to put 0s on the main diagonal of the adjacency matrix.) Adjacency lists for a weighted graph have to include in their nodes not only the name of an adjacent vertex but also the weight of the corresponding edge (Figure 1.8c).
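To make the two representations concrete, here is a minimal Python sketch of a weight matrix and adjacency lists for the weighted graph of Figure 1.8a; the vertex names and edge weights are taken from the figure, while all identifiers (W, adj, and so on) are ours.

from math import inf

# The weighted graph of Figure 1.8a: vertices a, b, c, d and
# weighted undirected edges (weights as shown in the figure).
vertices = ['a', 'b', 'c', 'd']
edges = {('a', 'b'): 5, ('a', 'c'): 1, ('b', 'c'): 7,
         ('b', 'd'): 4, ('c', 'd'): 2}

# Weight (cost) matrix: W[i][j] is the edge's weight, inf if no edge.
n = len(vertices)
index = {v: i for i, v in enumerate(vertices)}
W = [[inf] * n for _ in range(n)]
for (u, v), w in edges.items():
    i, j = index[u], index[v]
    W[i][j] = W[j][i] = w              # undirected graph: symmetric matrix

# Adjacency lists: each list node carries (adjacent vertex, weight).
adj = {v: [] for v in vertices}
for (u, v), w in edges.items():
    adj[u].append((v, w))
    adj[v].append((u, w))

print(W[index['a']][index['b']])       # 5
print(adj['b'])                        # [('a', 5), ('c', 7), ('d', 4)]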
Paths and Cycles Among the many properties of graphs, two are important for a great number of applications: connectivity and acyclicity. Both are based on the notion of a path. A path from vertex u to vertex v of a graph G can be defined as a sequence of adjacent (connected by an edge) vertices that starts with u and ends with v. If all vertices of a path are distinct, the path is said to be simple. The length of a path is the total number of vertices in the vertex sequence defining the path minus 1, which is the same as the number of edges in the path. For example, a, c, b, f is a simple path of length 3 from a to f in the graph in Figure 1.6a, whereas a, c, e, c, b, f is a path (not simple) of length 5 from a to f.
[Figure 1.8 shows a weighted graph on vertices a, b, c, d with edge weights a–b: 5, a–c: 1, b–c: 7, b–d: 4, c–d: 2, together with its weight matrix (entries left blank where no edge exists) and its adjacency lists, whose nodes carry (vertex, weight) pairs.]
FIGURE 1.8 (a) Weighted graph. (b) Its weight matrix. (c) Its adjacency lists.
FIGURE 1.9 Graph that is not connected.
In the case of a directed graph, we are usually interested in directed paths. A directed path is a sequence of vertices in which every consecutive pair of the vertices is connected by an edge directed from the vertex listed first to the vertex listed next. For example, a, c, e, f is a directed path from a to f in the graph in Figure 1.6b.
A graph is said to be connected if for every pair of its vertices u and v there is a path from u to v. If we make a model of a connected graph by connecting some balls representing the graph's vertices with strings representing the edges, it will be a single piece. If a graph is not connected, such a model will consist of several connected pieces that are called connected components of the graph. Formally, a connected component is a maximal (not expandable by including another vertex and an edge) connected subgraph² of a given graph. For example, the graphs in Figures 1.6a and 1.8a are connected, whereas the graph in Figure 1.9 is not, because there is no path, for example, from a to f. The graph in Figure 1.9 has two connected components with vertices {a, b, c, d, e} and {f, g, h, i}, respectively.
Graphs with several connected components do happen in real-world applications. A graph representing the Interstate highway system of the United States would be an example (why?).
It is important to know for many applications whether or not a graph under consideration has cycles. A cycle is a path of a positive length that starts and ends at the same vertex and does not traverse the same edge more than once. For example, f, h, i, g, f is a cycle in the graph in Figure 1.9. A graph with no cycles is said to be acyclic. We discuss acyclic graphs in the next subsection.
Trees
A tree (more accurately, a free tree) is a connected acyclic graph (Figure 1.10a).
A graph that has no cycles but is not necessarily connected is called a forest: each
of its connected components is a tree (Figure 1.10b).
2. A subgraph of a given graph G = ⟨V, E⟩ is a graph G′ = ⟨V′, E′⟩ such that V′ ⊆ V and E′ ⊆ E.
FIGURE 1.10 (a) Tree. (b) Forest.
FIGURE 1.11 (a) Free tree. (b) Its transformation into a rooted tree.
Trees have several important properties other graphs do not have. In particular, the number of edges in a tree is always one less than the number of its vertices:

|E| = |V| − 1.

As the graph in Figure 1.9 demonstrates, this property is necessary but not sufficient for a graph to be a tree. However, for connected graphs it is sufficient and hence provides a convenient way of checking whether a connected graph has a cycle.
Rooted Trees Another very important property of trees is the fact that for every two vertices in a tree, there always exists exactly one simple path from one of these vertices to the other. This property makes it possible to select an arbitrary vertex in a free tree and consider it as the root of the so-called rooted tree. A rooted tree is usually depicted by placing its root on the top (level 0 of the tree), the vertices adjacent to the root below it (level 1), the vertices two edges apart from the root still below (level 2), and so on. Figure 1.11 presents such a transformation from a free tree to a rooted tree.
Rooted trees play a very important role in computer science, a much more important one than free trees do; in fact, for the sake of brevity, they are often referred to as simply trees. An obvious application of trees is for describing hierarchies, from file directories to organizational charts of enterprises. There are many less obvious applications, such as implementing dictionaries (see below), efficient access to very large data sets (Section 7.4), and data encoding (Section 9.4). As we discuss in Chapter 2, trees also are helpful in analysis of recursive algorithms. To finish this far-from-complete list of tree applications, we should mention the so-called state-space trees that underlie two important algorithm design techniques: backtracking and branch-and-bound (Sections 12.1 and 12.2).
For any vertex v in a tree T, all the vertices on the simple path from the root to that vertex are called ancestors of v. The vertex itself is usually considered its own ancestor; the set of ancestors that excludes the vertex itself is referred to as the set of proper ancestors. If (u, v) is the last edge of the simple path from the root to vertex v (and u ≠ v), u is said to be the parent of v and v is called a child of u; vertices that have the same parent are said to be siblings. A vertex with no children is called a leaf; a vertex with at least one child is called parental. All the vertices for which a vertex v is an ancestor are said to be descendants of v; the proper descendants exclude the vertex v itself. All the descendants of a vertex v with all the edges connecting them form the subtree of T rooted at that vertex. Thus, for the tree in Figure 1.11b, the root of the tree is a; vertices d, g, f, h, and i are leaves, and vertices a, b, e, and c are parental; the parent of b is a; the children of b are c and g; the siblings of b are d and e; and the vertices of the subtree rooted at b are {b, c, g, h, i}.
The depth of a vertex v is the length of the simple path from the root to v. The height of a tree is the length of the longest simple path from the root to a leaf. For example, the depth of vertex c of the tree in Figure 1.11b is 2, and the height of the tree is 3. Thus, if we count tree levels top down starting with 0 for the root's level, the depth of a vertex is simply its level in the tree, and the tree's height is the maximum level of its vertices. (You should be alert to the fact that some authors define the height of a tree as the number of levels in it; this makes the height of a tree larger by 1 than the height defined as the length of the longest simple path from the root to a leaf.)
Ordered Trees An ordered tree is a rooted tree in which all the children of each vertex are ordered. It is convenient to assume that in a tree's diagram, all the children are ordered left to right.
A binary tree can be defined as an ordered tree in which every vertex has no more than two children and each child is designated as either a left child or a right child of its parent; a binary tree may also be empty. An example of a binary tree is given in Figure 1.12a. The binary tree with its root at the left (right) child of a vertex in a binary tree is called the left (right) subtree of that vertex. Since left and right subtrees are binary trees as well, a binary tree can also be defined recursively. This makes it possible to solve many problems involving binary trees by recursive algorithms.
FIGURE 1.12 (a) Binary tree. (b) Binary search tree.
FIGURE 1.13 Standard implementation of the binary search tree in Figure 1.12b.
In Figure 1.12b, some numbers are assigned to vertices of the binary tree in Figure 1.12a. Note that a number assigned to each parental vertex is larger than all the numbers in its left subtree and smaller than all the numbers in its right subtree. Such trees are called binary search trees. Binary trees and binary search trees have a wide variety of applications in computer science; you will encounter some of them throughout the book. In particular, binary search trees can be generalized to more general types of search trees called multiway search trees, which are indispensable for efficient access to very large data sets.
As you will see later in the book, the efficiency of most important algorithms for binary search trees and their extensions depends on the tree's height. Therefore, the following inequalities for the height h of a binary tree with n nodes are especially important for analysis of such algorithms:

⌊log₂ n⌋ ≤ h ≤ n − 1.
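As a quick illustration, the following sketch (our own encoding of a binary tree as nested (left, right) pairs) verifies the two bounds on a degenerate and a more balanced tree of four nodes.

import math

def height(node):
    # A tree is either None (empty) or a pair (left, right);
    # a leaf is (None, None). The empty tree has height -1.
    if node is None:
        return -1
    left, right = node
    return 1 + max(height(left), height(right))

def size(node):
    if node is None:
        return 0
    left, right = node
    return 1 + size(left) + size(right)

chain = ((((None, None), None), None), None)     # 4 nodes in a chain
bushy = (((None, None), None), (None, None))     # 4 nodes, more balanced

for t in (chain, bushy):
    n, h = size(t), height(t)
    assert math.floor(math.log2(n)) <= h <= n - 1
    print(n, h)                                  # prints 4 3 and 4 2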
A binary tree is usually implemented for computing purposes by a collection
of nodes corresponding to vertices of the tree. Each node contains some informa-
tion associated with the vertex (its name or some value assigned to it) and two
pointers to the nodes representing the left child and right child of the vertex, re-
spectively. Figure 1.13 illustrates such an implementation for the binary search
tree in Figure 1.12b.
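In Python, such a node-based implementation might look as follows; the keys are those of Figure 1.12b, but the insertion order (and hence the exact shape of the resulting tree) is our assumption.

class Node:
    """A node of a linked binary tree: a value plus pointers to the
    left and right children, as in the layout of Figure 1.13."""
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def bst_insert(root, value):
    """Insert value into the binary search tree rooted at root."""
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = bst_insert(root.left, value)
    else:
        root.right = bst_insert(root.right, value)
    return root

def bst_search(root, value):
    """Return True if value occurs in the tree rooted at root."""
    if root is None:
        return False
    if value == root.value:
        return True
    return bst_search(root.left if value < root.value else root.right, value)

root = None
for key in [9, 5, 12, 1, 7, 4, 10]:    # keys of Figure 1.12b; order is ours
    root = bst_insert(root, key)
print(bst_search(root, 7), bst_search(root, 8))   # True False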
A computer representation of an arbitrary ordered tree can be done by simply providing a parental vertex with the number of pointers equal to the number of its children. This representation may prove to be inconvenient if the number of children varies widely among the nodes. We can avoid this inconvenience by using nodes with just two pointers, as we did for binary trees. Here, however, the left pointer will point to the first child of the vertex, and the right pointer will point to its next sibling. Accordingly, this representation is called the first child–next sibling representation. Thus, all the siblings of a vertex are linked via the nodes' right pointers in a singly linked list, with the first element of the list pointed to by the left pointer of their parent. Figure 1.14a illustrates this representation for the tree in Figure 1.11b. It is not difficult to see that this representation effectively transforms an ordered tree into a binary tree said to be associated with the ordered tree. We get this representation by rotating the pointers about 45 degrees clockwise (see Figure 1.14b).

FIGURE 1.14 (a) First child–next sibling representation of the tree in Figure 1.11b. (b) Its binary tree representation.
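A minimal Python sketch of this representation, built for the tree of Figure 1.11b using the parent–child relations stated in the text (the helper names are ours):

class TreeNode:
    """First child-next sibling node: 'child' points to the leftmost
    child, 'sibling' to the next sibling on the right. Read as the
    associated binary tree, they are the left and right pointers."""
    def __init__(self, name):
        self.name = name
        self.child = None
        self.sibling = None

def add_child(parent, name):
    """Append a new rightmost child to parent and return it."""
    node = TreeNode(name)
    if parent.child is None:
        parent.child = node
    else:
        cur = parent.child
        while cur.sibling is not None:
            cur = cur.sibling
        cur.sibling = node
    return node

# Tree of Figure 1.11b: a's children are b, d, e; b's are c and g;
# c's are h and i; e's only child is f.
a = TreeNode('a')
b = add_child(a, 'b'); add_child(a, 'd'); e = add_child(a, 'e')
c = add_child(b, 'c'); add_child(b, 'g')
add_child(c, 'h'); add_child(c, 'i')
add_child(e, 'f')
print(a.child.name, a.child.sibling.name)   # b d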
Sets and Dictionaries
The notion of a set plays a central role in mathematics. A set can be described as
an unordered collection (possibly empty) of distinct items called elements of the
set. A specific set is defined either by an explicit listing of its elements (e.g., S = {2, 3, 5, 7}) or by specifying a property that all the set's elements and only they must satisfy (e.g., S = {n: n is a prime number smaller than 10}). The most important set operations are: checking membership of a given item in a given set; finding the union of two sets, which comprises all the elements in either or both of them; and finding the intersection of two sets, which comprises all the common elements in the sets.
Sets can be implemented in computer applications in two ways. The first considers only sets that are subsets of some large set U, called the universal set. If set U has n elements, then any subset S of U can be represented by a bit string of size n, called a bit vector, in which the ith element is 1 if and only if the ith element of U is included in set S. Thus, to continue with our example, if U = {1, 2, 3, 4, 5, 6, 7, 8, 9}, then S = {2, 3, 5, 7} is represented by the bit string 011010100. This way of representing sets makes it possible to implement the standard set operations very fast, but at the expense of potentially using a large amount of storage.
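For instance, here is a bit-vector sketch in Python for the example just given (the representation and helper names are ours):

U = [1, 2, 3, 4, 5, 6, 7, 8, 9]               # the universal set

def bit_vector(subset):
    """Subset of U as a list of 0/1 flags, one per element of U."""
    return [1 if u in subset else 0 for u in U]

S = bit_vector({2, 3, 5, 7})
print(''.join(map(str, S)))                   # 011010100, as in the text

T = bit_vector({1, 2, 3})
union        = [s | t for s, t in zip(S, T)]  # elementwise OR
intersection = [s & t for s, t in zip(S, T)]  # elementwise AND
in_S = S[U.index(5)] == 1                     # membership test: True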
The second and more common way to represent a set for computing purposes is to use the list structure to indicate the set's elements. Of course, this option, too, is feasible only for finite sets; fortunately, unlike mathematics, this is the kind of sets most computer applications need. Note, however, the two principal points of distinction between sets and lists. First, a set cannot contain identical elements; a list can. This requirement for uniqueness is sometimes circumvented by the introduction of a multiset, or bag, an unordered collection of items that are not necessarily distinct. Second, a set is an unordered collection of items; therefore, changing the order of its elements does not change the set. A list, defined as an ordered collection of items, is exactly the opposite. This is an important theoretical distinction, but fortunately it is not important for many applications. It is also worth mentioning that if a set is represented by a list, depending on the application at hand, it might be worth maintaining the list in a sorted order.
In computing, the operations we need to perform for a set or a multiset most often are searching for a given item, adding a new item, and deleting an item from the collection. A data structure that implements these three operations is called the dictionary. Note the relationship between this data structure and the problem of searching mentioned in Section 1.3; obviously, we are dealing here with searching in a dynamic context. Consequently, an efficient implementation of a dictionary has to strike a compromise between the efficiency of searching and the efficiencies of the other two operations. There are quite a few ways a dictionary can be implemented. They range from an unsophisticated use of arrays (sorted or not) to much more sophisticated techniques such as hashing and balanced search trees, which we discuss later in the book.
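As one of the unsophisticated options just mentioned, a dictionary can be kept as a sorted array with binary search; a sketch (class and method names are ours):

import bisect

class SortedArrayDictionary:
    """Dictionary as a sorted array: binary search for lookups,
    element shifting for insertions and deletions."""
    def __init__(self):
        self.items = []

    def search(self, key):
        i = bisect.bisect_left(self.items, key)
        return i < len(self.items) and self.items[i] == key

    def add(self, key):
        if not self.search(key):        # a set holds distinct items
            bisect.insort(self.items, key)

    def delete(self, key):
        i = bisect.bisect_left(self.items, key)
        if i < len(self.items) and self.items[i] == key:
            self.items.pop(i)

d = SortedArrayDictionary()
for state in ["Ohio", "Iowa", "Utah"]:
    d.add(state)
print(d.search("Iowa"), d.search("Texas"))    # True False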
A number of applications in computing require a dynamic partition of some n-element set into a collection of disjoint subsets. After being initialized as a collection of n one-element subsets, the collection is subjected to a sequence of intermixed union and search operations. This problem is called the set union problem. We discuss efficient algorithmic solutions to this problem in Section 9.2, in conjunction with one of its important applications.
You may have noticed that in our review of basic data structures we almost always mentioned specific operations that are typically performed for the structure in question. This intimate relationship between the data and operations has been recognized by computer scientists for a long time. It has led them in particular to the idea of an abstract data type (ADT): a set of abstract objects representing data items with a collection of operations that can be performed on them. As illustrations of this notion, reread, say, our definitions of the priority queue and dictionary. Although abstract data types could be implemented in older procedural languages such as Pascal (see, e.g., [Aho83]), it is much more convenient to do this in object-oriented languages such as C++ and Java, which support abstract data types by means of classes.
Exercises 1.4
1. Describe how one can implement each of the following operations on an array so that the time it takes does not depend on the array's size n.
a. Delete the ith element of an array (1 ≤ i ≤ n).
b. Delete the ith element of a sorted array (the remaining array has to stay sorted, of course).
2. If you have to solve the searching problem for a list of n numbers, how can you take advantage of the fact that the list is known to be sorted? Give separate answers for
a. lists represented as arrays.
b. lists represented as linked lists.
3. a. Show the stack after each operation of the following sequence that starts
with the empty stack:
push(a), push(b), pop, push(c), push(d), pop
b. Show the queue after each operation of the following sequence that starts
with the empty queue:
enqueue(a), enqueue(b), dequeue, enqueue(c), enqueue(d), dequeue
4. a. Let A be the adjacency matrix of an undirected graph. Explain what prop-
erty of the matrix indicates that
i. the graph is complete.
ii. the graph has a loop, i.e., an edge connecting a vertex to itself.
iii. the graph has an isolated vertex, i.e., a vertex with no edges incident
to it.
b. Answer the same questions for the adjacency list representation.
5. Give a detailed description of an algorithm for transforming a free tree into
a tree rooted at a given vertex of the free tree.
6. Prove the inequalities that bracket the height of a binary tree with n vertices:

⌊log₂ n⌋ ≤ h ≤ n − 1.
7. Indicate how the ADT priority queue can be implemented as
a. an (unsorted) array.
b. a sorted array.
c. a binary search tree.
8. How would you implement a dictionary of a reasonably small size n if you knew that all its elements are distinct (e.g., names of the 50 states of the United States)? Specify an implementation of each dictionary operation.
9. For each of the following applications, indicate the most appropriate data
structure:
a. answering telephone calls in the order of their known priorities
b. sending backlog orders to customers in the order they have been received
c. implementing a calculator for computing simple arithmetical expressions
10. Anagram checking Design an algorithm for checking whether two given
words are anagrams, i.e., whether one word can be obtained by permuting
the letters of the other. For example, the words tea and eat are anagrams.
SUMMARY

- An algorithm is a sequence of nonambiguous instructions for solving a problem in a finite amount of time. An input to an algorithm specifies an instance of the problem the algorithm solves.
- Algorithms can be specified in a natural language or pseudocode; they can also be implemented as computer programs.
- Among several ways to classify algorithms, the two principal alternatives are:
  - to group algorithms according to types of problems they solve
  - to group algorithms according to underlying design techniques they are based upon
- The important problem types are sorting, searching, string processing, graph problems, combinatorial problems, geometric problems, and numerical problems.
- Algorithm design techniques (or "strategies" or "paradigms") are general approaches to solving problems algorithmically, applicable to a variety of problems from different areas of computing.
- Although designing an algorithm is undoubtedly a creative activity, one can identify a sequence of interrelated actions involved in such a process. They are summarized in Figure 1.2.
- A good algorithm is usually the result of repeated efforts and rework.
- The same problem can often be solved by several algorithms. For example, three algorithms were given for computing the greatest common divisor of two integers: Euclid's algorithm, the consecutive integer checking algorithm, and the middle-school method enhanced by the sieve of Eratosthenes for generating a list of primes.
- Algorithms operate on data. This makes the issue of data structuring critical for efficient algorithmic problem solving. The most important elementary data structures are the array and the linked list. They are used for representing more abstract data structures such as the list, the stack, the queue, the graph (via its adjacency matrix or adjacency lists), the binary tree, and the set.
- An abstract collection of objects with several operations that can be performed on them is called an abstract data type (ADT). The list, the stack, the queue, the priority queue, and the dictionary are important examples of abstract data types. Modern object-oriented languages support implementation of ADTs by means of classes.
2
Fundamentals of the Analysis
of Algorithm Efficiency
I often say that when you can measure what you are speaking about and express it in numbers you know something about it; but when you cannot express it in numbers your knowledge is of a meagre and unsatisfactory kind: it may be the beginning of knowledge but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.
Lord Kelvin (1824–1907)

Not everything that can be counted counts, and not everything that counts can be counted.
Albert Einstein (1879–1955)
This chapter is devoted to analysis of algorithms. The American Heritage Dictionary defines analysis as "the separation of an intellectual or substantial whole into its constituent parts for individual study." Accordingly, each of the principal dimensions of an algorithm pointed out in Section 1.2 is both a legitimate and desirable subject of study. But the term "analysis of algorithms" is usually used in a narrower, technical sense to mean an investigation of an algorithm's efficiency with respect to two resources: running time and memory space. This emphasis on efficiency is easy to explain. First, unlike such dimensions as simplicity and generality, efficiency can be studied in precise quantitative terms. Second, one can argue (although this is hardly always the case, given the speed and memory of today's computers) that the efficiency considerations are of primary importance from a practical point of view. In this chapter, we too will limit the discussion to an algorithm's efficiency.
We start with a general framework for analyzing algorithm efficiency in Section 2.1. This section is arguably the most important in the chapter; the fundamental nature of the topic makes it also one of the most important sections in the entire book.
In Section 2.2, we introduce three notations: O ("big oh"), Ω ("big omega"), and Θ ("big theta"). Borrowed from mathematics, these notations have become the language for discussing the efficiency of algorithms.
In Section 2.3, we show how the general framework outlined in Section 2.1 can be systematically applied to analyzing the efficiency of nonrecursive algorithms. The main tool of such an analysis is setting up a sum representing the algorithm's running time and then simplifying the sum by using standard sum manipulation techniques.
In Section 2.4, we show how the general framework outlined in Section 2.1 can be systematically applied to analyzing the efficiency of recursive algorithms. Here, the main tool is not a summation but a special kind of equation called a recurrence relation. We explain how such recurrence relations can be set up and then introduce a method for solving them.
Although we illustrate the analysis framework and the methods of its applications by a variety of examples in the first four sections of this chapter, Section 2.5 is devoted to yet another example: that of the Fibonacci numbers. Discovered 800 years ago, this remarkable sequence appears in a variety of applications both within and outside computer science. A discussion of the Fibonacci sequence serves as a natural vehicle for introducing an important class of recurrence relations not solvable by the method of Section 2.4. We also discuss several algorithms for computing the Fibonacci numbers, mostly for the sake of a few general observations about the efficiency of algorithms and methods of analyzing them.
The methods of Sections 2.3 and 2.4 provide a powerful technique for analyzing the efficiency of many algorithms with mathematical clarity and precision, but these methods are far from being foolproof. The last two sections of the chapter deal with two approaches, empirical analysis and algorithm visualization, that complement the pure mathematical techniques of Sections 2.3 and 2.4. Much newer and, hence, less developed than their mathematical counterparts, these approaches promise to play an important role among the tools available for analysis of algorithm efficiency.
2.1 The Analysis Framework

In this section, we outline a general framework for analyzing the efficiency of algorithms. We already mentioned in Section 1.2 that there are two kinds of efficiency: time efficiency and space efficiency. Time efficiency, also called time complexity, indicates how fast an algorithm in question runs. Space efficiency, also called space complexity, refers to the amount of memory units required by the algorithm in addition to the space needed for its input and output. In the early days of electronic computing, both resources, time and space, were at a premium. Half a century
of relentless technological innovations have improved the computer's speed and memory size by many orders of magnitude. Now the amount of extra space required by an algorithm is typically not of as much concern, with the caveat that there is still, of course, a difference between the fast main memory, the slower secondary memory, and the cache. The time issue has not diminished quite to the same extent, however. In addition, the research experience has shown that for most problems, we can achieve much more spectacular progress in speed than in space. Therefore, following a well-established tradition of algorithm textbooks, we primarily concentrate on time efficiency, but the analytical framework introduced here is applicable to analyzing space efficiency as well.
Measuring an Input's Size

Let's start with the obvious observation that almost all algorithms run longer on larger inputs. For example, it takes longer to sort larger arrays, multiply larger matrices, and so on. Therefore, it is logical to investigate an algorithm's efficiency as a function of some parameter n indicating the algorithm's input size.¹ In most cases, selecting such a parameter is quite straightforward. For example, it will be the size of the list for problems of sorting, searching, finding the list's smallest element, and most other problems dealing with lists. For the problem of evaluating a polynomial p(x) = aₙxⁿ + ... + a₀ of degree n, it will be the polynomial's degree or the number of its coefficients, which is larger by 1 than its degree. You'll see from the discussion that such a minor difference is inconsequential for the efficiency analysis.
There are situations, of course, where the choice of a parameter indicating an input size does matter. One such example is computing the product of two n × n matrices. There are two natural measures of size for this problem. The first and more frequently used is the matrix order n. But the other natural contender is the total number of elements N in the matrices being multiplied. (The latter is also more general since it is applicable to matrices that are not necessarily square.) Since there is a simple formula relating these two measures, we can easily switch from one to the other, but the answer about an algorithm's efficiency will be qualitatively different depending on which of these two measures we use (see Problem 2 in this section's exercises).
The choice of an appropriate size metric can be influenced by operations of the algorithm in question. For example, how should we measure an input's size for a spell-checking algorithm? If the algorithm examines individual characters of its input, we should measure the size by the number of characters; if it works by processing words, we should count their number in the input.
We should make a special note about measuring input size for algorithms solving problems such as checking primality of a positive integer n. Here, the input is just one number, and it is this number's magnitude that determines the input

1. Some algorithms require more than one parameter to indicate the size of their inputs (e.g., the number of vertices and the number of edges for algorithms on graphs represented by their adjacency lists).
44 Fundamentals of the Analysis of Algorithm Efciency
size. In such situations, it is preferable to measure size by the number b of bits in n's binary representation:

b = ⌊log₂ n⌋ + 1. (2.1)

This metric usually gives a better idea about the efficiency of algorithms in question.
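A quick Python check of formula (2.1); Python's built-in int.bit_length() computes the same quantity directly.

import math

def bit_count(n):
    """b = floor(log2 n) + 1, as in formula (2.1); n must be positive."""
    return math.floor(math.log2(n)) + 1

for n in [1, 2, 15, 16, 1000]:
    assert bit_count(n) == n.bit_length() == len(bin(n)) - 2
    print(n, bit_count(n))             # e.g., 1000 needs 10 bits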
Units for Measuring Running Time

The next issue concerns units for measuring an algorithm's running time. Of course, we can simply use some standard unit of time measurement (a second, or millisecond, and so on) to measure the running time of a program implementing the algorithm. There are obvious drawbacks to such an approach, however: dependence on the speed of a particular computer, dependence on the quality of a program implementing the algorithm and of the compiler used in generating the machine code, and the difficulty of clocking the actual running time of the program. Since we are after a measure of an algorithm's efficiency, we would like to have a metric that does not depend on these extraneous factors.
One possible approach is to count the number of times each of the algorithm's operations is executed. This approach is both excessively difficult and, as we shall see, usually unnecessary. The thing to do is to identify the most important operation of the algorithm, called the basic operation, the operation contributing the most to the total running time, and compute the number of times the basic operation is executed.
As a rule, it is not difficult to identify the basic operation of an algorithm: it is usually the most time-consuming operation in the algorithm's innermost loop. For example, most sorting algorithms work by comparing elements (keys) of a list being sorted with each other; for such algorithms, the basic operation is a key comparison. As another example, algorithms for mathematical problems typically involve some or all of the four arithmetical operations: addition, subtraction, multiplication, and division. Of the four, the most time-consuming operation is division, followed by multiplication and then addition and subtraction, with the last two usually considered together.²
Thus, the established framework for the analysis of an algorithm's time efficiency suggests measuring it by counting the number of times the algorithm's basic operation is executed on inputs of size n. We will find out how to compute such a count for nonrecursive and recursive algorithms in Sections 2.3 and 2.4, respectively.
Here is an important application. Let c_op be the execution time of an algorithm's basic operation on a particular computer, and let C(n) be the number of times this operation needs to be executed for this algorithm. Then we can estimate

2. On some computers, multiplication does not take longer than addition/subtraction (see, for example, the timing data provided by Kernighan and Pike in [Ker99, pp. 185–186]).
the running time T(n) of a program implementing this algorithm on that computer by the formula

T(n) ≈ c_op C(n).

Of course, this formula should be used with caution. The count C(n) does not contain any information about operations that are not basic, and, in fact, the count itself is often computed only approximately. Further, the constant c_op is also an approximation whose reliability is not always easy to assess. Still, unless n is extremely large or very small, the formula can give a reasonable estimate of the algorithm's running time. It also makes it possible to answer such questions as "How much faster would this algorithm run on a machine that is 10 times faster than the one we have?" The answer is, obviously, 10 times. Or, assuming that C(n) = n(n − 1)/2, how much longer will the algorithm run if we double its input size? The answer is about four times longer. Indeed, for all but very small values of n,

C(n) = n(n − 1)/2 = n²/2 − n/2 ≈ n²/2

and therefore

T(2n)/T(n) ≈ c_op C(2n) / (c_op C(n)) ≈ [(2n)²/2] / [n²/2] = 4.

Note that we were able to answer the last question without actually knowing the value of c_op: it was neatly cancelled out in the ratio. Also note that 1/2, the multiplicative constant in the formula for the count C(n), was also cancelled out. It is for these reasons that the efficiency analysis framework ignores multiplicative constants and concentrates on the count's order of growth to within a constant multiple for large-size inputs.
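The doubling argument is easy to check numerically; in this sketch the value of c_op is an arbitrary assumption (1 nanosecond) and, as the text explains, it cancels out of the ratio anyway.

def C(n):
    """The basic-operation count C(n) = n(n - 1)/2 from the text."""
    return n * (n - 1) / 2

c_op = 1e-9                            # assumed time per basic operation

for n in [10, 100, 10_000]:
    ratio = (c_op * C(2 * n)) / (c_op * C(n))
    print(n, ratio)                    # approaches 4 as n grows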
Orders of Growth

Why this emphasis on the count's order of growth for large input sizes? A difference in running times on small inputs is not what really distinguishes efficient algorithms from inefficient ones. When we have to compute, for example, the greatest common divisor of two small numbers, it is not immediately clear how much more efficient Euclid's algorithm is compared to the other two algorithms discussed in Section 1.1 or even why we should care which of them is faster and by how much. It is only when we have to find the greatest common divisor of two large numbers that the difference in algorithm efficiencies becomes both clear and important. For large values of n, it is the function's order of growth that counts: just look at Table 2.1, which contains values of a few functions particularly important for analysis of algorithms.
TABLE 2.1 Values (some approximate) of several functions important for analysis of algorithms

n       log₂ n   n       n log₂ n   n²      n³      2ⁿ          n!
10      3.3      10¹     3.3·10¹    10²     10³     10³         3.6·10⁶
10²     6.6      10²     6.6·10²    10⁴     10⁶     1.3·10³⁰    9.3·10¹⁵⁷
10³     10       10³     1.0·10⁴    10⁶     10⁹
10⁴     13       10⁴     1.3·10⁵    10⁸     10¹²
10⁵     17       10⁵     1.7·10⁶    10¹⁰    10¹⁵
10⁶     20       10⁶     2.0·10⁷    10¹²    10¹⁸
implementing an algorithm with a logarithmic basic-operation count to run practically instantaneously on inputs of all realistic sizes. Also note that although specific values of such a count depend, of course, on the logarithm's base, the formula

logₐ n = logₐ b · log_b n

makes it possible to switch from one base to another, leaving the count logarithmic but with a new multiplicative constant. This is why we omit a logarithm's base and write simply log n in situations where we are interested just in a function's order of growth to within a multiplicative constant.
On the other end of the spectrum are the exponential function 2ⁿ and the factorial function n! Both these functions grow so fast that their values become astronomically large even for rather small values of n. (This is the reason why we did not include their values for n > 10² in Table 2.1.) For example, it would take about 4·10¹⁰ years for a computer making a trillion (10¹²) operations per second to execute 2¹⁰⁰ operations. Though this is incomparably faster than it would have taken to execute 100! operations, it is still longer than 4.5 billion (4.5·10⁹) years, the estimated age of the planet Earth. There is a tremendous difference between the orders of growth of the functions 2ⁿ and n!, yet both are often referred to as "exponential-growth functions" (or simply "exponential") despite the fact that, strictly speaking, only the former should be referred to as such. The bottom line, which is important to remember, is this:

Algorithms that require an exponential number of operations are practical for solving only problems of very small sizes.
Another way to appreciate the qualitative difference among the orders of growth of the functions in Table 2.1 is to consider how they react to, say, a twofold increase in the value of their argument n. The function log₂ n increases in value by just 1 (because log₂ 2n = log₂ 2 + log₂ n = 1 + log₂ n); the linear function increases twofold; the linearithmic function n log₂ n increases slightly more than twofold; the quadratic function n² and cubic function n³ increase fourfold and eightfold, respectively (because (2n)² = 4n² and (2n)³ = 8n³); the value of 2ⁿ gets squared (because 2²ⁿ = (2ⁿ)²); and n! increases much more than that (yes, even mathematics refuses to cooperate to give a neat answer for n!).
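This reaction to doubling the argument can be tabulated directly; a small Python sketch (the sample value n = 25 is our choice):

import math

growth = [
    ("log2 n",   lambda n: math.log2(n)),
    ("n",        lambda n: n),
    ("n log2 n", lambda n: n * math.log2(n)),
    ("n^2",      lambda n: n ** 2),
    ("n^3",      lambda n: n ** 3),
    ("2^n",      lambda n: 2 ** n),
]

n = 25
for name, f in growth:
    # Multiplicative factor by which f grows when n is doubled.
    print(f"{name:8s}  f(2n)/f(n) = {f(2 * n) / f(n):,.2f}")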
Worst-Case, Best-Case, and Average-Case Efficiencies

In the beginning of this section, we established that it is reasonable to measure an algorithm's efficiency as a function of a parameter indicating the size of the algorithm's input. But there are many algorithms for which running time depends not only on an input size but also on the specifics of a particular input. Consider, as an example, sequential search. This is a straightforward algorithm that searches for a given item (some search key K) in a list of n elements by checking successive elements of the list until either a match with the search key is found or the list is exhausted. Here is the algorithm's pseudocode, in which, for simplicity, a list is implemented as an array. It also assumes that the second condition A[i] ≠ K will not be checked if the first one, which checks that the array's index does not exceed its upper bound, fails.
ALGORITHM SequentialSearch(A[0..n − 1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n − 1] and a search key K
//Output: The index of the first element in A that matches K
//        or −1 if there are no matching elements
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return −1
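A direct Python rendering of this pseudocode might look as follows; only the function name is our choice.

def sequential_search(A, K):
    """Return the index of the first element of A equal to K,
    or -1 if there are no matching elements."""
    i = 0
    # 'and' short-circuits, so A[i] is never evaluated once i == len(A),
    # matching the pseudocode's assumption about the condition order.
    while i < len(A) and A[i] != K:
        i += 1
    return i if i < len(A) else -1

print(sequential_search([6, 3, 9, 3], 9))    # 2
print(sequential_search([6, 3, 9, 3], 7))    # -1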
Clearly, the running time of this algorithm can be quite different for the same list size n. In the worst case, when there are no matching elements or the first matching element happens to be the last one on the list, the algorithm makes the largest number of key comparisons among all possible inputs of size n: C_worst(n) = n.
The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size. The way to determine the worst-case efficiency of an algorithm is, in principle, quite straightforward: analyze the algorithm to see what kind of inputs yield the largest value of the basic operation's count C(n) among all possible inputs of size n and then compute this worst-case value C_worst(n). (For sequential search, the answer was obvious. The methods for handling less trivial situations are explained in subsequent sections of this chapter.) Clearly, the worst-case analysis provides very important information about an algorithm's efficiency by bounding its running time from above. In other
words, it guarantees that for any instance of size n, the running time will not exceed C_worst(n), its running time on the worst-case inputs.
The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the fastest among all possible inputs of that size. Accordingly, we can analyze the best-case efficiency as follows. First, we determine the kind of inputs for which the count C(n) will be the smallest among all possible inputs of size n. (Note that the best case does not mean the smallest input; it means the input of size n for which the algorithm runs the fastest.) Then we ascertain the value of C(n) on these most convenient inputs. For example, the best-case inputs for sequential search are lists of size n with their first element equal to a search key; accordingly, C_best(n) = 1 for this algorithm.
The analysis of the best-case efficiency is not nearly as important as that of the worst-case efficiency. But it is not completely useless, either. Though we should not expect to get best-case inputs, we might be able to take advantage of the fact that for some algorithms a good best-case performance extends to some useful types of inputs close to being the best-case ones. For example, there is a sorting algorithm (insertion sort) for which the best-case inputs are already sorted arrays on which the algorithm works very fast. Moreover, the best-case efficiency deteriorates only slightly for almost-sorted arrays. Therefore, such an algorithm might well be the method of choice for applications dealing with almost-sorted arrays. And, of course, if the best-case efficiency of an algorithm is unsatisfactory, we can immediately discard it without further analysis.
It should be clear from our discussion, however, that neither the worst-case analysis nor its best-case counterpart yields the necessary information about an algorithm's behavior on a "typical" or "random" input. This is the information that the average-case efficiency seeks to provide. To analyze the algorithm's average-case efficiency, we must make some assumptions about possible inputs of size n.
Let's consider again sequential search. The standard assumptions are that (a) the probability of a successful search is equal to p (0 ≤ p ≤ 1) and (b) the probability of the first match occurring in the ith position of the list is the same for every i. Under these assumptions (the validity of which is usually difficult to verify, their reasonableness notwithstanding) we can find the average number of key comparisons C_avg(n) as follows. In the case of a successful search, the probability of the first match occurring in the ith position of the list is p/n for every i, and the number of comparisons made by the algorithm in such a situation is obviously i. In the case of an unsuccessful search, the number of comparisons will be n with the probability of such a search being (1 − p). Therefore,
C_avg(n) = [1 · p/n + 2 · p/n + ... + i · p/n + ... + n · p/n] + n · (1 − p)
         = (p/n)[1 + 2 + ... + i + ... + n] + n(1 − p)
         = (p/n) · n(n + 1)/2 + n(1 − p) = p(n + 1)/2 + n(1 − p).
This general formula yields some quite reasonable answers. For example, if p = 1 (the search must be successful), the average number of key comparisons made by sequential search is (n + 1)/2; that is, the algorithm will inspect, on average, about half of the list's elements. If p = 0 (the search must be unsuccessful), the average number of key comparisons will be n because the algorithm will inspect all n elements on all such inputs.
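The formula is easy to corroborate by simulation under assumptions (a) and (b); the parameters n = 10, p = 0.6 below are our arbitrary choices.

import random

def comparisons(A, K):
    """Key comparisons sequential search makes on list A and key K."""
    count = 0
    for item in A:
        count += 1                     # one comparison of item with K
        if item == K:
            break
    return count

def average_comparisons(n, p, trials=100_000):
    A = list(range(1, n + 1))          # n distinct keys; 0 never occurs
    total = 0
    for _ in range(trials):
        # Successful search with probability p, match equally likely
        # in every position; otherwise an unsuccessful search.
        K = random.choice(A) if random.random() < p else 0
        total += comparisons(A, K)
    return total / trials

n, p = 10, 0.6
print(average_comparisons(n, p))       # close to the value below
print(p * (n + 1) / 2 + n * (1 - p))   # formula: 7.3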
As you can see from this very elementary example, investigation of the average-case efficiency is considerably more difficult than investigation of the worst-case and best-case efficiencies. The direct approach for doing this involves dividing all instances of size n into several classes so that for each instance of the class the number of times the algorithm's basic operation is executed is the same. (What were these classes for sequential search?) Then a probability distribution of inputs is obtained or assumed so that the expected value of the basic operation's count can be found.
The technical implementation of this plan is rarely easy, however, and probabilistic assumptions underlying it in each particular case are usually difficult to verify. Given our quest for simplicity, we will mostly quote known results about the average-case efficiency of algorithms under discussion. If you are interested in derivations of these results, consult such books as [Baa00], [Sed96], [KnuI], [KnuII], and [KnuIII].
It should be clear from the preceding discussion that the average-case efficiency cannot be obtained by taking the average of the worst-case and the best-case efficiencies. Even though this average does occasionally coincide with the average-case cost, it is not a legitimate way of performing the average-case analysis.
Does one really need the average-case efficiency information? The answer is unequivocally yes: there are many important algorithms for which the average-case efficiency is much better than the overly pessimistic worst-case efficiency would lead us to believe. So, without the average-case analysis, computer scientists could have missed many important algorithms.
Yet another type of efficiency is called amortized efficiency. It applies not to a single run of an algorithm but rather to a sequence of operations performed on the same data structure. It turns out that in some situations a single operation can be expensive, but the total time for an entire sequence of n such operations is always significantly better than the worst-case efficiency of that single operation multiplied by n. So we can "amortize" the high cost of such a worst-case occurrence over the entire sequence in a manner similar to the way a business would amortize the cost of an expensive item over the years of the item's productive life. This sophisticated approach was discovered by the American computer scientist Robert Tarjan, who used it, among other applications, in developing an interesting variation of the classic binary search tree (see [Tar87] for a quite readable nontechnical discussion and [Tar85] for a technical account). We will see an example of the usefulness of amortized efficiency in Section 9.2, when we consider algorithms for finding unions of disjoint sets.
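A standard illustration of amortized efficiency (our example, not one of Tarjan's applications) is appending to a dynamic array with capacity doubling: one append may copy all current elements, yet n appends never perform more than about 2n copies in total.

class DynamicArray:
    """Counts element copies under the capacity-doubling policy."""
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.copies = 0                # total elements copied so far

    def append(self, x):
        if self.size == self.capacity:
            self.capacity *= 2         # expensive case: grow and
            self.copies += self.size   # copy every existing element
        self.size += 1

arr = DynamicArray()
for i in range(1000):
    arr.append(i)
print(arr.copies / arr.size)           # about 1.02: O(1) amortized per append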
Recapitulation of the Analysis Framework

Before we leave this section, let us summarize the main points of the framework outlined above.

- Both time and space efficiencies are measured as functions of the algorithm's input size.
- Time efficiency is measured by counting the number of times the algorithm's basic operation is executed. Space efficiency is measured by counting the number of extra memory units consumed by the algorithm.
- The efficiencies of some algorithms may differ significantly for inputs of the same size. For such algorithms, we need to distinguish between the worst-case, average-case, and best-case efficiencies.
- The framework's primary interest lies in the order of growth of the algorithm's running time (extra memory units consumed) as its input size goes to infinity.

In the next section, we look at formal means to investigate orders of growth. In Sections 2.3 and 2.4, we discuss particular methods for investigating nonrecursive and recursive algorithms, respectively. It is there that you will see how the analysis framework outlined here can be applied to investigating the efficiency of specific algorithms. You will encounter many more examples throughout the rest of the book.
Exercises 2.1
1. For each of the following algorithms, indicate (i) a natural size metric for its inputs, (ii) its basic operation, and (iii) whether the basic operation count can be different for inputs of the same size:
a. computing the sum of n numbers
b. computing n!
c. finding the largest element in a list of n numbers
d. Euclid's algorithm
e. sieve of Eratosthenes
f. pen-and-pencil algorithm for multiplying two n-digit decimal integers
2. a. Consider the definition-based algorithm for adding two n × n matrices. What is its basic operation? How many times is it performed as a function of the matrix order n? As a function of the total number of elements in the input matrices?
b. Answer the same questions for the definition-based algorithm for matrix multiplication.
3. Consider a variation of sequential search that scans a list to return the number of occurrences of a given search key in the list. Does its efficiency differ from the efficiency of classic sequential search?
4. a. Glove selection There are 22 gloves in a drawer: 5 pairs of red gloves, 4
pairs of yellow, and 2 pairs of green. You select the gloves in the dark and
can check them only after a selection has been made. What is the smallest
number of gloves you need to select to have at least one matching pair in
the best case? In the worst case?
b. Missing socks Imagine that after washing 5 distinct pairs of socks, you discover that two socks are missing. Of course, you would like to have the largest number of complete pairs remaining. Thus, you are left with 4 complete pairs in the best-case scenario and with 3 complete pairs in the worst case. Assuming that the probability of disappearance for each of the 10 socks is the same, find the probability of the best-case scenario; the probability of the worst-case scenario; the number of pairs you should expect in the average case.
5. a. Prove formula (2.1) for the number of bits in the binary representation of a positive decimal integer.
b. Prove the alternative formula for the number of bits in the binary representation of a positive integer n:

b = ⌈log₂(n + 1)⌉.

c. What would be the analogous formulas for the number of decimal digits?
d. Explain why, within the accepted analysis framework, it does not matter whether we use binary or decimal digits in measuring n's size.
6. Suggest how any sorting algorithm can be augmented in a way to make the best-case count of its key comparisons equal to just n − 1 (n is a list's size, of course). Do you think it would be a worthwhile addition to any sorting algorithm?
7. Gaussian elimination, the classic algorithm for solving systems of n linear equations in n unknowns, requires about (1/3)n³ multiplications, which is the algorithm's basic operation.
a. How much longer should you expect Gaussian elimination to work on a system of 1000 equations versus a system of 500 equations?
b. You are considering buying a computer that is 1000 times faster than the one you currently have. By what factor will the faster computer increase the sizes of systems solvable in the same amount of time as on the old computer?
8. For each of the following functions, indicate how much the function's value will change if its argument is increased fourfold.
a. log₂ n   b. √n   c. n   d. n²   e. n³   f. 2ⁿ
9. For each of the following pairs of functions, indicate whether the first function of each of the following pairs has a lower, same, or higher order of growth (to within a constant multiple) than the second function.
a. n(n + 1) and 2000n²   b. 100n² and 0.01n³
c. log₂ n and ln n   d. log₂² n and log₂ n²
e. 2ⁿ⁻¹ and 2ⁿ   f. (n − 1)! and n!
10. Invention of chess
a. According to a well-known legend, the game of chess was invented many centuries ago in northwestern India by a certain sage. When he took his invention to his king, the king liked the game so much that he offered the inventor any reward he wanted. The inventor asked for some grain to be obtained as follows: just a single grain of wheat was to be placed on the first square of the chessboard, two on the second, four on the third, eight on the fourth, and so on, until all 64 squares had been filled. If it took just 1 second to count each grain, how long would it take to count all the grain due to him?
b. How long would it take if instead of doubling the number of grains for each square of the chessboard, the inventor asked for adding two grains?
2.2 Asymptotic Notations and Basic Efficiency Classes

As pointed out in the previous section, the efficiency analysis framework concentrates on the order of growth of an algorithm's basic operation count as the principal indicator of the algorithm's efficiency. To compare and rank such orders of growth, computer scientists use three notations: O (big oh), Ω (big omega), and Θ (big theta). First, we introduce these notations informally, and then, after several examples, formal definitions are given. In the following discussion, t(n) and g(n) can be any nonnegative functions defined on the set of natural numbers. In the context we are interested in, t(n) will be an algorithm's running time (usually indicated by its basic operation count C(n)), and g(n) will be some simple function to compare the count with.

Informal Introduction

Informally, O(g(n)) is the set of all functions with a lower or same order of growth as g(n) (to within a constant multiple, as n goes to infinity). Thus, to give a few examples, the following assertions are all true:

n ∈ O(n²),   100n + 5 ∈ O(n²),   n(n − 1)/2 ∈ O(n²).
Indeed, the first two functions are linear and hence have a lower order of growth than g(n) = n², while the last one is quadratic and hence has the same order of growth as n². On the other hand,

n³ ∉ O(n²),   0.00001n³ ∉ O(n²),   n⁴ + n + 1 ∉ O(n²).

Indeed, the functions n³ and 0.00001n³ are both cubic and hence have a higher order of growth than n², and so has the fourth-degree polynomial n⁴ + n + 1.
The second notation, Ω(g(n)), stands for the set of all functions with a higher or same order of growth as g(n) (to within a constant multiple, as n goes to infinity). For example,

n³ ∈ Ω(n²),   n(n − 1)/2 ∈ Ω(n²),   but 100n + 5 ∉ Ω(n²).
Finally, Θ(g(n)) is the set of all functions that have the same order of growth as g(n) (to within a constant multiple, as n goes to infinity). Thus, every quadratic function an² + bn + c with a > 0 is in Θ(n²), but so are, among infinitely many others, n² + sin n and n² + log n. (Can you explain why?)
Hopefully, this informal introduction has made you comfortable with the idea behind the three asymptotic notations. So now come the formal definitions.
O-notation
DEFINITION A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n₀ such that

t(n) ≤ cg(n) for all n ≥ n₀.

The definition is illustrated in Figure 2.1 where, for the sake of visual clarity, n is extended to be a real number.
As an example, let us formally prove one of the assertions made in the introduction: 100n + 5 ∈ O(n²). Indeed,

100n + 5 ≤ 100n + n (for all n ≥ 5) = 101n ≤ 101n².

Thus, as values of the constants c and n₀ required by the definition, we can take 101 and 5, respectively.
Note that the definition gives us a lot of freedom in choosing specific values for constants c and n₀. For example, we could also reason that

100n + 5 ≤ 100n + 5n (for all n ≥ 1) = 105n

to complete the proof with c = 105 and n₀ = 1.
FIGURE 2.1 Big-oh notation: t(n) ∈ O(g(n)).
FIGURE 2.2 Big-omega notation: t(n) ∈ Ω(g(n)).
Ω-notation

DEFINITION A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n₀ such that

t(n) ≥ cg(n) for all n ≥ n₀.

The definition is illustrated in Figure 2.2.
Here is an example of the formal proof that n³ ∈ Ω(n²):

n³ ≥ n² for all n ≥ 0,

i.e., we can select c = 1 and n₀ = 0.
FIGURE 2.3 Big-theta notation: t(n) ∈ Θ(g(n)).
Θ-notation

DEFINITION A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c₁ and c₂ and some nonnegative integer n₀ such that

c₂g(n) ≤ t(n) ≤ c₁g(n) for all n ≥ n₀.
The definition is illustrated in Figure 2.3.
For example, let us prove that n(n − 1)/2 ∈ Θ(n²). First, we prove the right inequality (the upper bound):

n(n − 1)/2 = n²/2 − n/2 ≤ n²/2 for all n ≥ 0.

Second, we prove the left inequality (the lower bound):

n(n − 1)/2 = n²/2 − n/2 ≥ n²/2 − (n/2)(n/2) (for all n ≥ 2) = n²/4.

Hence, we can select c₂ = 1/4, c₁ = 1/2, and n₀ = 2.
Useful Property Involving the Asymptotic Notations
Using the formal definitions of the asymptotic notations, we can prove their general properties (see Problem 7 in this section's exercises for a few simple examples). The following property, in particular, is useful in analyzing algorithms that comprise two consecutively executed parts.
THEOREM If t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)), then

t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}).

(The analogous assertions are true for the Ω and Θ notations as well.)
PROOF The proof extends to orders of growth the following simple fact about
four arbitrary real numbers a₁, b₁, a₂, b₂: if a₁ ≤ b₁ and a₂ ≤ b₂, then a₁ + a₂ ≤ 2 max{b₁, b₂}.
Since t₁(n) ∈ O(g₁(n)), there exist some positive constant c₁ and some non-
negative integer n₁ such that

t₁(n) ≤ c₁g₁(n) for all n ≥ n₁.

Similarly, since t₂(n) ∈ O(g₂(n)),

t₂(n) ≤ c₂g₂(n) for all n ≥ n₂.

Let us denote c₃ = max{c₁, c₂} and consider n ≥ max{n₁, n₂} so that we can use
both inequalities. Adding them yields the following:

t₁(n) + t₂(n) ≤ c₁g₁(n) + c₂g₂(n)
             ≤ c₃g₁(n) + c₃g₂(n) = c₃[g₁(n) + g₂(n)]
             ≤ c₃ · 2 max{g₁(n), g₂(n)}.

Hence, t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}), with the constants c and n₀ required
by the O definition being 2c₃ = 2 max{c₁, c₂} and max{n₁, n₂}, respectively.
So what does this property imply for an algorithm that comprises two consec-
utively executed parts? It implies that the algorithm's overall efficiency is deter-
mined by the part with a higher order of growth, i.e., its least efficient part:

t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)) imply t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}).
For example, we can check whether an array has equal elements by the following
two-part algorithm: first, sort the array by applying some known sorting algorithm;
second, scan the sorted array to check its consecutive elements for equality. If, for
example, a sorting algorithm used in the first part makes no more than ½n(n − 1)
comparisons (and hence is in O(n²)) while the second part makes no more than
n − 1 comparisons (and hence is in O(n)), the efficiency of the entire algorithm
will be in O(max{n², n}) = O(n²).
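Here is a short Python sketch of the two-part algorithm just described (our illustration, not the book's pseudocode). We use the library sort, typically an n log n algorithm, rather than a quadratic one, but the point stands either way: the overall efficiency class is determined by the slower, sorting part.

def has_equal_elements(a):
    b = sorted(a)                      # part 1: sort (dominates the running time)
    for i in range(len(b) - 1):        # part 2: linear scan of adjacent pairs
        if b[i] == b[i + 1]:
            return True
    return False

print(has_equal_elements([3, 1, 4, 1, 5]))   # True
print(has_equal_elements([3, 1, 4, 5]))      # False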
Using Limits for Comparing Orders of Growth

Though the formal definitions of O, Ω, and Θ are indispensable for proving their
abstract properties, they are rarely used for comparing the orders of growth of
two specific functions. A much more convenient method for doing so is based on
computing the limit of the ratio of two functions in question. Three principal cases
may arise:

lim_{n→∞} t(n)/g(n) = 0 implies that t(n) has a smaller order of growth than g(n);
lim_{n→∞} t(n)/g(n) = c > 0 implies that t(n) has the same order of growth as g(n);
lim_{n→∞} t(n)/g(n) = ∞ implies that t(n) has a larger order of growth than g(n).³

Note that the first two cases mean that t(n) ∈ O(g(n)), the last two mean that
t(n) ∈ Ω(g(n)), and the second case means that t(n) ∈ Θ(g(n)).
The limit-based approach is often more convenient than the one based on
the definitions because it can take advantage of the powerful calculus techniques
developed for computing limits, such as L'Hôpital's rule

lim_{n→∞} t(n)/g(n) = lim_{n→∞} t′(n)/g′(n)

and Stirling's formula

n! ≈ √(2πn) (n/e)ⁿ for large values of n.
Here are three examples of using the limit-based approach to comparing
orders of growth of two functions.
EXAMPLE 1 Compare the orders of growth of ½n(n − 1) and n². (This is one of
the examples we used at the beginning of this section to illustrate the definitions.)

lim_{n→∞} ½n(n − 1)/n² = ½ lim_{n→∞} (n² − n)/n² = ½ lim_{n→∞} (1 − 1/n) = ½.

Since the limit is equal to a positive constant, the functions have the same order
of growth or, symbolically, ½n(n − 1) ∈ Θ(n²).
EXAMPLE 2 Compare the orders of growth of log₂ n and √n. (Unlike Exam-
ple 1, the answer here is not immediately obvious.)

lim_{n→∞} log₂ n/√n = lim_{n→∞} (log₂ n)′/(√n)′ = lim_{n→∞} (log₂ e)(1/n) / (1/(2√n)) = 2 log₂ e lim_{n→∞} 1/√n = 0.

Since the limit is equal to zero, log₂ n has a smaller order of growth than √n. (Since
lim_{n→∞} log₂ n/√n = 0, we can use the so-called little-oh notation: log₂ n ∈ o(√n).
Unlike the big-Oh, the little-oh notation is rarely used in analysis of algorithms.)
3. The fourth case, in which such a limit does not exist, rarely happens in the actual practice of analyzing
algorithms. Still, this possibility makes the limit-based approach to comparing orders of growth less
general than the one based on the definitions of O, Ω, and Θ.
EXAMPLE 3 Compare the orders of growth of n! and 2ⁿ. (We discussed this
informally in Section 2.1.) Taking advantage of Stirling's formula, we get

lim_{n→∞} n!/2ⁿ = lim_{n→∞} √(2πn) (n/e)ⁿ / 2ⁿ = lim_{n→∞} √(2πn) nⁿ/(2ⁿeⁿ) = lim_{n→∞} √(2πn) (n/(2e))ⁿ = ∞.

Thus, though 2ⁿ grows very fast, n! grows still faster. We can write symbolically that
n! ∈ Ω(2ⁿ); note, however, that while the big-Omega notation does not preclude
the possibility that n! and 2ⁿ have the same order of growth, the limit computed
here certainly does.
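The behavior of such ratios is easy to observe numerically before (or after) computing the limit. The Python sketch below, our illustration rather than the text's, tabulates the ratios from Examples 2 and 3; the first visibly tends to 0 while the second blows up.

import math

# Tabulate log2(n)/sqrt(n) (Example 2) and log10 of n!/2^n (Example 3).
# The second ratio is reported on a log10 scale because n! quickly exceeds
# floating-point range; lgamma(n + 1) computes ln(n!).
for n in [10, 100, 1000, 10000]:
    r1 = math.log2(n) / math.sqrt(n)                                  # tends to 0
    log10_r2 = (math.lgamma(n + 1) - n * math.log(2)) / math.log(10)  # tends to infinity
    print(f"n = {n:>5}: log2(n)/sqrt(n) = {r1:.4f}, log10(n!/2^n) = {log10_r2:.1f}")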
Basic Efficiency Classes

Even though the efficiency analysis framework puts together all the functions
whose orders of growth differ by a constant multiple, there are still infinitely many
such classes. (For example, the exponential functions aⁿ have different orders of
growth for different values of base a.) Therefore, it may come as a surprise that
the time efficiencies of a large number of algorithms fall into only a few classes.
These classes are listed in Table 2.2 in increasing order of their orders of growth,
along with their names and a few comments.
You could raise a concern that classifying algorithms by their asymptotic effi-
ciency would be of little practical use since the values of multiplicative constants
are usually left unspecified. This leaves open the possibility of an algorithm in a
worse efficiency class running faster than an algorithm in a better efficiency class
for inputs of realistic sizes. For example, if the running time of one algorithm is n³
while the running time of the other is 10⁶n², the cubic algorithm will outperform
the quadratic algorithm unless n exceeds 10⁶. A few such anomalies are indeed
known. Fortunately, multiplicative constants usually do not differ that drastically.
As a rule, you should expect an algorithm from a better asymptotic efficiency class
to outperform an algorithm from a worse class even for moderately sized inputs.
This observation is especially true for an algorithm with a better than exponential
running time versus an exponential (or worse) algorithm.
Exercises 2.2

1. Use the most appropriate notation among O, Θ, and Ω to indicate the time
efficiency class of sequential search (see Section 2.1)
a. in the worst case.
b. in the best case.
c. in the average case.

2. Use the informal definitions of O, Θ, and Ω to determine whether the follow-
ing assertions are true or false.
TABLE 2.2 Basic asymptotic efficiency classes

Class     Name          Comments
1         constant      Short of best-case efficiencies, very few reasonable examples can be given since an algorithm's running time typically goes to infinity when its input size grows infinitely large.
log n     logarithmic   Typically, a result of cutting a problem's size by a constant factor on each iteration of the algorithm (see Section 4.4). Note that a logarithmic algorithm cannot take into account all its input or even a fixed fraction of it: any algorithm that does so will have at least linear running time.
n         linear        Algorithms that scan a list of size n (e.g., sequential search) belong to this class.
n log n   linearithmic  Many divide-and-conquer algorithms (see Chapter 5), including mergesort and quicksort in the average case, fall into this category.
n²        quadratic     Typically, characterizes efficiency of algorithms with two embedded loops (see the next section). Elementary sorting algorithms and certain operations on n × n matrices are standard examples.
n³        cubic         Typically, characterizes efficiency of algorithms with three embedded loops (see the next section). Several nontrivial algorithms from linear algebra fall into this class.
2ⁿ        exponential   Typical for algorithms that generate all subsets of an n-element set. Often, the term "exponential" is used in a broader sense to include this and larger orders of growth as well.
n!        factorial     Typical for algorithms that generate all permutations of an n-element set.
a. n(n + 1)/2 ∈ O(n³)    b. n(n + 1)/2 ∈ O(n²)
c. n(n + 1)/2 ∈ Θ(n³)    d. n(n + 1)/2 ∈ Ω(n)
3. For each of the following functions, indicate the class Θ(g(n)) the function
belongs to. (Use the simplest g(n) possible in your answers.) Prove your
assertions.
a. (n² + 1)¹⁰
b. √(10n² + 7n + 3)
c. 2n lg(n + 2)² + (n + 2)² lg(n/2)
d. 2ⁿ⁺¹ + 3ⁿ⁻¹
e. ⌊log₂ n⌋
4. a. Table 2.1 contains values of several functions that often arise in the analysis
of algorithms. These values certainly suggest that the functions

log n, n, n log₂ n, n², n³, 2ⁿ, n!

are listed in increasing order of their order of growth. Do these values
prove this fact with mathematical certainty?
b. Prove that the functions are indeed listed in increasing order of their order
of growth.
5. List the following functions according to their order of growth from the lowest
to the highest:

(n − 2)!, 5 lg(n + 100)¹⁰, 2²ⁿ, 0.001n⁴ + 3n³ + 1, ln² n, ∛n, 3ⁿ.
6. a. Prove that every polynomial of degree k, p(n) = aₖnᵏ + aₖ₋₁nᵏ⁻¹ + . . . + a₀
with aₖ > 0, belongs to Θ(nᵏ).
b. Prove that exponential functions aⁿ have different orders of growth for
different values of base a > 0.
7. Prove the following assertions by using the definitions of the notations in-
volved, or disprove them by giving a specific counterexample.
a. If t(n) ∈ O(g(n)), then g(n) ∈ Ω(t(n)).
b. Θ(αg(n)) = Θ(g(n)), where α > 0.
c. Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).
d. For any two nonnegative functions t(n) and g(n) defined on the set of
nonnegative integers, either t(n) ∈ O(g(n)), or t(n) ∈ Ω(g(n)), or both.
8. Prove the section's theorem for
a. Ω notation.    b. Θ notation.
9. We mentioned in this section that one can check whether all elements of an
array are distinct by a two-part algorithm based on the array's presorting.
a. If the presorting is done by an algorithm with a time efficiency in Θ(n log n),
what will be a time-efficiency class of the entire algorithm?
b. If the sorting algorithm used for presorting needs an extra array of size n,
what will be the space-efficiency class of the entire algorithm?
10. The range of a finite nonempty set of n real numbers S is defined as the differ-
ence between the largest and smallest elements of S. For each representation
of S given below, describe in English an algorithm to compute the range. Indi-
cate the time efficiency classes of these algorithms using the most appropriate
notation (O, Θ, or Ω).
a. An unsorted array
b. A sorted array
c. A sorted singly linked list
d. A binary search tree
11. Lighter or heavier? You have n > 2 identical-looking coins and a two-pan
balance scale with no weights. One of the coins is a fake, but you do not know
whether it is lighter or heavier than the genuine coins, which all weigh the
same. Design a Θ(1) algorithm to determine whether the fake coin is lighter
or heavier than the others.

12. Door in a wall You are facing a wall that stretches infinitely in both direc-
tions. There is a door in the wall, but you know neither how far away nor in
which direction. You can see the door only when you are right next to it. De-
sign an algorithm that enables you to reach the door by walking at most O(n)
steps where n is the (unknown to you) number of steps between your initial
position and the door. [Par95]
2.3 Mathematical Analysis of Nonrecursive Algorithms
In this section, we systematically apply the general framework outlined in Section
2.1 to analyzing the time efficiency of nonrecursive algorithms. Let us start with
a very simple example that demonstrates all the principal steps typically taken in
analyzing such algorithms.

EXAMPLE 1 Consider the problem of finding the value of the largest element
in a list of n numbers. For simplicity, we assume that the list is implemented as
an array. The following is pseudocode of a standard algorithm for solving the
problem.
ALGORITHM MaxElement(A[0..n − 1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n − 1] of real numbers
//Output: The value of the largest element in A
maxval ← A[0]
for i ← 1 to n − 1 do
    if A[i] > maxval
        maxval ← A[i]
return maxval
The obvious measure of an input's size here is the number of elements in the
array, i.e., n. The operations that are going to be executed most often are in the
algorithm's for loop. There are two operations in the loop's body: the comparison
A[i] > maxval and the assignment maxval ← A[i]. Which of these two operations
should we consider basic? Since the comparison is executed on each repetition
of the loop and the assignment is not, we should consider the comparison to be
the algorithm's basic operation. Note that the number of comparisons will be the
same for all arrays of size n; therefore, in terms of this metric, there is no need to
distinguish among the worst, average, and best cases here.
Let us denote C(n) the number of times this comparison is executed and try
to find a formula expressing it as a function of size n. The algorithm makes one
comparison on each execution of the loop, which is repeated for each value of the
loop's variable i within the bounds 1 and n − 1, inclusive. Therefore, we get the
following sum for C(n):

C(n) = Σ_{i=1}^{n−1} 1.

This is an easy sum to compute because it is nothing other than 1 repeated n − 1
times. Thus,

C(n) = Σ_{i=1}^{n−1} 1 = n − 1 ∈ Θ(n).
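As a quick cross-check of this count, here is a straightforward Python rendering of MaxElement instrumented with a comparison counter (our sketch, not the book's pseudocode); it reports exactly n − 1 comparisons for an array of size n.

def max_element(a):
    """Return (largest value, number of key comparisons performed)."""
    maxval = a[0]
    comparisons = 0
    for i in range(1, len(a)):
        comparisons += 1            # the basic operation: A[i] > maxval
        if a[i] > maxval:
            maxval = a[i]
    return maxval, comparisons

print(max_element([3, 9, 2, 7, 5]))   # (9, 4), i.e., n - 1 = 4 comparisons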
Here is a general plan to follow in analyzing nonrecursive algorithms.
General Plan for Analyzing the Time Efficiency of Nonrecursive Algorithms

1. Decide on a parameter (or parameters) indicating an input's size.
2. Identify the algorithm's basic operation. (As a rule, it is located in the inner-
most loop.)
3. Check whether the number of times the basic operation is executed depends
only on the size of an input. If it also depends on some additional property,
the worst-case, average-case, and, if necessary, best-case efficiencies have to
be investigated separately.
4. Set up a sum expressing the number of times the algorithm's basic operation
is executed.⁴
5. Using standard formulas and rules of sum manipulation, either find a closed-
form formula for the count or, at the very least, establish its order of growth.
Before proceeding with further examples, you may want to review Appen-
dix A, which contains a list of summation formulas and rules that are often useful
in analysis of algorithms. In particular, we use especially frequently two basic rules
of sum manipulation

Σ_{i=l}^{u} caᵢ = c Σ_{i=l}^{u} aᵢ,    (R1)

Σ_{i=l}^{u} (aᵢ ± bᵢ) = Σ_{i=l}^{u} aᵢ ± Σ_{i=l}^{u} bᵢ,    (R2)
4. Sometimes, an analysis of a nonrecursive algorithm requires setting up not a sum but a recurrence
relation for the number of times its basic operation is executed. Using recurrence relations is much
more typical for analyzing recursive algorithms (see Section 2.4).
and two summation formulas

Σ_{i=l}^{u} 1 = u − l + 1 where l ≤ u are some lower and upper integer limits,    (S1)

Σ_{i=0}^{n} i = Σ_{i=1}^{n} i = 1 + 2 + . . . + n = n(n + 1)/2 ≈ ½n² ∈ Θ(n²).    (S2)

Note that the formula Σ_{i=1}^{n−1} 1 = n − 1, which we used in Example 1, is a special
case of formula (S1) for l = 1 and u = n − 1.
EXAMPLE 2 Consider the element uniqueness problem: check whether all the
elements in a given array of n elements are distinct. This problem can be solved
by the following straightforward algorithm.

ALGORITHM UniqueElements(A[0..n − 1])
//Determines whether all the elements in a given array are distinct
//Input: An array A[0..n − 1]
//Output: Returns "true" if all the elements in A are distinct
//            and "false" otherwise
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        if A[i] = A[j] return false
return true
The natural measure of the input's size here is again n, the number of elements
in the array. Since the innermost loop contains a single operation (the comparison
of two elements), we should consider it as the algorithm's basic operation. Note,
however, that the number of element comparisons depends not only on n but also
on whether there are equal elements in the array and, if there are, which array
positions they occupy. We will limit our investigation to the worst case only.
By definition, the worst case input is an array for which the number of element
comparisons C_worst(n) is the largest among all arrays of size n. An inspection of
the innermost loop reveals that there are two kinds of worst-case inputs, namely,
inputs for which the algorithm does not exit the loop prematurely: arrays with no equal
elements and arrays in which the last two elements are the only pair of equal
elements. For such inputs, one comparison is made for each repetition of the
innermost loop, i.e., for each value of the loop variable j between its limits i + 1
and n − 1; this is repeated for each value of the outer loop, i.e., for each value of
the loop variable i between its limits 0 and n − 2. Accordingly, we get
C_worst(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = Σ_{i=0}^{n−2} [(n − 1) − (i + 1) + 1] = Σ_{i=0}^{n−2} (n − 1 − i)
           = Σ_{i=0}^{n−2} (n − 1) − Σ_{i=0}^{n−2} i = (n − 1) Σ_{i=0}^{n−2} 1 − (n − 2)(n − 1)/2
           = (n − 1)² − (n − 2)(n − 1)/2 = (n − 1)n/2 ≈ ½n² ∈ Θ(n²).

We also could have computed the sum Σ_{i=0}^{n−2} (n − 1 − i) faster as follows:

Σ_{i=0}^{n−2} (n − 1 − i) = (n − 1) + (n − 2) + . . . + 1 = (n − 1)n/2,

where the last equality is obtained by applying summation formula (S2). Note
that this result was perfectly predictable: in the worst case, the algorithm needs to
compare all n(n − 1)/2 distinct pairs of its n elements.
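Again, the count is easy to confirm empirically. The Python sketch below (ours, under the same conventions as before) runs the algorithm on a worst-case input, an array with no equal elements, and reports exactly n(n − 1)/2 comparisons.

def unique_elements(a):
    """Return (all_distinct?, number of element comparisons made)."""
    n = len(a)
    comparisons = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            comparisons += 1          # the basic operation: A[i] = A[j]
            if a[i] == a[j]:
                return False, comparisons
    return True, comparisons

n = 6
print(unique_elements(list(range(n))))   # (True, 15): worst case, n(n-1)/2 = 15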
EXAMPLE 3 Given two n × n matrices A and B, find the time efficiency of the
definition-based algorithm for computing their product C = AB. By definition, C
is an n × n matrix whose elements are computed as the scalar (dot) products of
the rows of matrix A and the columns of matrix B:

[The accompanying diagram shows row i of A multiplied by column j of B producing element C[i, j].]

where C[i, j] = A[i, 0]B[0, j] + . . . + A[i, k]B[k, j] + . . . + A[i, n − 1]B[n − 1, j]
for every pair of indices 0 ≤ i, j ≤ n − 1.
ALGORITHM MatrixMultiplication(A[0..n − 1, 0..n − 1], B[0..n − 1, 0..n − 1])
//Multiplies two square matrices of order n by the definition-based algorithm
//Input: Two n × n matrices A and B
//Output: Matrix C = AB
for i ← 0 to n − 1 do
    for j ← 0 to n − 1 do
        C[i, j] ← 0.0
        for k ← 0 to n − 1 do
            C[i, j] ← C[i, j] + A[i, k] ∗ B[k, j]
return C
2.3 Mathematical Analysis of Nonrecursive Algorithms 65
We measure an input's size by matrix order n. There are two arithmetical
operations in the innermost loop here, multiplication and addition, that, in
principle, can compete for designation as the algorithm's basic operation. Actually,
we do not have to choose between them, because on each repetition of the
innermost loop each of the two is executed exactly once. So by counting one
we automatically count the other. Still, following a well-established tradition, we
consider multiplication as the basic operation (see Section 2.1). Let us set up a sum
for the total number of multiplications M(n) executed by the algorithm. (Since this
count depends only on the size of the input matrices, we do not have to investigate
the worst-case, average-case, and best-case efficiencies separately.)
Obviously, there is just one multiplication executed on each repetition of the
algorithm's innermost loop, which is governed by the variable k ranging from the
lower bound 0 to the upper bound n − 1. Therefore, the number of multiplications
made for every pair of specific values of variables i and j is

Σ_{k=0}^{n−1} 1,

and the total number of multiplications M(n) is expressed by the following
triple sum:

M(n) = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} Σ_{k=0}^{n−1} 1.

Now, we can compute this sum by using formula (S1) and rule (R1) given
above. Starting with the innermost sum Σ_{k=0}^{n−1} 1, which is equal to n (why?), we get

M(n) = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} Σ_{k=0}^{n−1} 1 = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} n = Σ_{i=0}^{n−1} n² = n³.
This example is simple enough so that we could get this result without all
the summation machinations. How? The algorithm computes n² elements of the
product matrix. Each of the product's elements is computed as the scalar (dot)
product of an n-element row of the first matrix and an n-element column of the
second matrix, which takes n multiplications. So the total number of multiplica-
tions is n · n² = n³. (It is this kind of reasoning that we expected you to employ
when answering this question in Problem 2 of Exercises 2.1.)
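For readers who like to see the count materialize, here is a direct Python transcription of the pseudocode with a multiplication counter added (our sketch); for n = 2 it performs n³ = 8 multiplications.

def matrix_multiplication(A, B):
    """Definition-based product of two n-by-n matrices, counting multiplications."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1            # the basic operation
    return C, mults

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C, mults = matrix_multiplication(A, B)
print(C, mults)   # [[19.0, 22.0], [43.0, 50.0]] 8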
If we now want to estimate the running time of the algorithm on a particular
machine, we can do it by the product

T(n) ≈ cₘM(n) = cₘn³,

where cₘ is the time of one multiplication on the machine in question. We would
get a more accurate estimate if we took into account the time spent on the
additions, too:

T(n) ≈ cₘM(n) + cₐA(n) = cₘn³ + cₐn³ = (cₘ + cₐ)n³,
where cₐ is the time of one addition. Note that the estimates differ only by their
multiplicative constants and not by their order of growth.
You should not have the erroneous impression that the plan outlined above
always succeeds in analyzing a nonrecursive algorithm. An irregular change in a
loop variable, a sum too complicated to analyze, and the difficulties intrinsic to
the average case analysis are just some of the obstacles that can prove to be insur-
mountable. These caveats notwithstanding, the plan does work for many simple
nonrecursive algorithms, as you will see throughout the subsequent chapters of
the book.
As a last example, let us consider an algorithm in which the loop's variable
changes in a different manner from that of the previous examples.
EXAMPLE 4 The following algorithm finds the number of binary digits in the
binary representation of a positive decimal integer.

ALGORITHM Binary(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n's binary representation
count ← 1
while n > 1 do
    count ← count + 1
    n ← ⌊n/2⌋
return count
First, notice that the most frequently executed operation here is not inside the
while loop but rather the comparison n > 1 that determines whether the loop's
body will be executed. Since the number of times the comparison will be executed
is larger than the number of repetitions of the loop's body by exactly 1, the choice
is not that important.
A more significant feature of this example is the fact that the loop variable
takes on only a few values between its lower and upper limits; therefore, we
have to use an alternative way of computing the number of times the loop is
executed. Since the value of n is about halved on each repetition of the loop,
the answer should be about log₂ n. The exact formula for the number of times
the comparison n > 1 will be executed is actually ⌊log₂ n⌋ + 1, the number of bits
in the binary representation of n according to formula (2.1). We could also get
this answer by applying the analysis technique based on recurrence relations; we
discuss this technique in the next section because it is more pertinent to the analysis
of recursive algorithms.
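A small Python check of the count just derived (our sketch): the loop below implements Binary and confirms that it returns ⌊log₂ n⌋ + 1 for several values of n.

import math

def binary_digit_count(n):
    """Number of binary digits of a positive integer n (the Binary algorithm)."""
    count = 1
    while n > 1:
        count += 1
        n //= 2              # floor division implements n <- floor(n/2)
    return count

for n in [1, 2, 5, 16, 1000]:
    assert binary_digit_count(n) == math.floor(math.log2(n)) + 1
    print(n, binary_digit_count(n))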
Exercises 2.3

1. Compute the following sums.
a. 1 + 3 + 5 + 7 + . . . + 999
b. 2 + 4 + 8 + 16 + . . . + 1024
c. Σ_{i=3}^{n+1} 1    d. Σ_{i=3}^{n+1} i    e. Σ_{i=0}^{n−1} i(i + 1)
f. Σ_{j=1}^{n} 3^{j+1}    g. Σ_{i=1}^{n} Σ_{j=1}^{n} ij    h. Σ_{i=1}^{n} 1/i(i + 1)
2. Find the order of growth of the following sums. Use the Θ(g(n)) notation with
the simplest function g(n) possible.
a. Σ_{i=0}^{n−1} (i² + 1)²    b. Σ_{i=2}^{n−1} lg i²
c. Σ_{i=1}^{n} (i + 1)2^{i−1}    d. Σ_{i=0}^{n−1} Σ_{j=0}^{i−1} (i + j)
3. The sample variance of n measurements x₁, . . . , xₙ can be computed as either

Σ_{i=1}^{n} (xᵢ − x̄)² / (n − 1)    where x̄ = (Σ_{i=1}^{n} xᵢ) / n

or

[Σ_{i=1}^{n} xᵢ² − (Σ_{i=1}^{n} xᵢ)²/n] / (n − 1).

Find and compare the number of divisions, multiplications, and additions/
subtractions (additions and subtractions are usually bunched together) that
are required for computing the variance according to each of these formulas.
4. Consider the following algorithm.

ALGORITHM Mystery(n)
//Input: A nonnegative integer n
S ← 0
for i ← 1 to n do
    S ← S + i ∗ i
return S

a. What does this algorithm compute?
b. What is its basic operation?
c. How many times is the basic operation executed?
d. What is the efficiency class of this algorithm?
e. Suggest an improvement, or a better algorithm altogether, and indicate its
efficiency class. If you cannot do it, try to prove that, in fact, it cannot be
done.
5. Consider the following algorithm.

ALGORITHM Secret(A[0..n − 1])
//Input: An array A[0..n − 1] of n real numbers
minval ← A[0]; maxval ← A[0]
for i ← 1 to n − 1 do
    if A[i] < minval
        minval ← A[i]
    if A[i] > maxval
        maxval ← A[i]
return maxval − minval

Answer questions (a)–(e) of Problem 4 about this algorithm.
6. Consider the following algorithm.

ALGORITHM Enigma(A[0..n − 1, 0..n − 1])
//Input: A matrix A[0..n − 1, 0..n − 1] of real numbers
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        if A[i, j] ≠ A[j, i]
            return false
return true

Answer questions (a)–(e) of Problem 4 about this algorithm.
7. Improve the implementation of the matrix multiplication algorithm (see Ex-
ample 3) by reducing the number of additions made by the algorithm. What
effect will this change have on the algorithm's efficiency?

8. Determine the asymptotic order of growth for the total number of times all
the doors are toggled in the locker doors puzzle (Problem 12 in Exercises 1.1).

9. Prove the formula

Σ_{i=1}^{n} i = 1 + 2 + . . . + n = n(n + 1)/2

either by mathematical induction or by following the insight of a 10-year-old
school boy named Carl Friedrich Gauss (1777–1855) who grew up to become
one of the greatest mathematicians of all times.
10. Mental arithmetic A 10×10 table is filled with repeating numbers on its
diagonals as shown below. Calculate the total sum of the table's numbers in
your head (after [Cra07, Question 1.33]).

[The table's cell in row i and column j contains i + j − 1, so each diagonal running from lower left to upper right repeats a single number; the entries range from 1 in the top left corner to 19 in the bottom right corner.]
11. Consider the following version of an important algorithm that we will study
later in the book.

ALGORITHM GE(A[0..n − 1, 0..n])
//Input: An n × (n + 1) matrix A[0..n − 1, 0..n] of real numbers
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        for k ← i to n do
            A[j, k] ← A[j, k] − A[i, k] ∗ A[j, i] / A[i, i]

a. Find the time efficiency class of this algorithm.
b. What glaring inefficiency does this pseudocode contain and how can it be
eliminated to speed the algorithm up?
12. von Neumann's neighborhood Consider the algorithm that starts with a
single square and on each of its n iterations adds new squares all around the
outside. How many one-by-one squares are there after n iterations? [Gar99]
(In the parlance of cellular automata theory, the answer is the number of cells
in the von Neumann neighborhood of range n.) The results for n = 0, 1, and
2 are illustrated below.

[Pictures of the diamond-shaped neighborhoods for n = 0, n = 1, and n = 2.]
13. Page numbering Find the total number of decimal digits needed for numbering
pages in a book of 1000 pages. Assume that the pages are numbered
consecutively starting with 1.
2.4 Mathematical Analysis of Recursive Algorithms
In this section, we will see how to apply the general framework for analysis
of algorithms to recursive algorithms. We start with an example often used to
introduce novices to the idea of a recursive algorithm.
EXAMPLE 1 Compute the factorial function F(n) = n! for an arbitrary nonneg-
ative integer n. Since

n! = 1 · . . . · (n − 1) · n = (n − 1)! · n for n ≥ 1

and 0! = 1 by definition, we can compute F(n) = F(n − 1) · n with the following
recursive algorithm.
ALGORITHM F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return F(n − 1) ∗ n
For simplicity, we consider n itself as an indicator of this algorithm's input size
(rather than the number of bits in its binary expansion). The basic operation of the
algorithm is multiplication,⁵ whose number of executions we denote M(n). Since
the function F(n) is computed according to the formula

F(n) = F(n − 1) · n for n > 0,

5. Alternatively, we could count the number of times the comparison n = 0 is executed, which is the same
as counting the total number of calls made by the algorithm (see Problem 2 in this section's exercises).
the number of multiplications M(n) needed to compute it must satisfy the equality

M(n) = M(n − 1) + 1 for n > 0.

Indeed, M(n − 1) multiplications are spent to compute F(n − 1), and one more
multiplication is needed to multiply the result by n.
The last equation defines the sequence M(n) that we need to find. This equa-
tion defines M(n) not explicitly, i.e., as a function of n, but implicitly as a function
of its value at another point, namely n − 1. Such equations are called recurrence
relations or, for brevity, recurrences. Recurrence relations play an important role
not only in analysis of algorithms but also in some areas of applied mathematics.
They are usually studied in detail in courses on discrete mathematics or discrete
structures; a very brief tutorial on them is provided in Appendix B. Our goal now
is to solve the recurrence relation M(n) = M(n − 1) + 1, i.e., to find an explicit
formula for M(n) in terms of n only.
Note, however, that there is not one but infinitely many sequences that satisfy
this recurrence. (Can you give examples of, say, two of them?) To determine a
solution uniquely, we need an initial condition that tells us the value with which
the sequence starts. We can obtain this value by inspecting the condition that
makes the algorithm stop its recursive calls:

if n = 0 return 1.

This tells us two things. First, since the calls stop when n = 0, the smallest value
of n for which this algorithm is executed and hence M(n) defined is 0. Second, by
inspecting the pseudocode's exiting line, we can see that when n = 0, the algorithm
performs no multiplications. Therefore, the initial condition we are after is

M(0) = 0.

Thus, we succeeded in setting up the recurrence relation and initial condition
for the algorithm's number of multiplications M(n):

M(n) = M(n − 1) + 1 for n > 0,    (2.2)
M(0) = 0.
Before we embark on a discussion of how to solve this recurrence, let us
pause to reiterate an important point. We are dealing here with two recursively
defined functions. The first is the factorial function F(n) itself; it is defined by the
recurrence

F(n) = F(n − 1) · n for every n > 0,
F(0) = 1.

The second is the number of multiplications M(n) needed to compute F(n) by the
recursive algorithm whose pseudocode was given at the beginning of the section.
As we just showed, M(n) is defined by recurrence (2.2). And it is recurrence (2.2)
that we need to solve now.
Though it is not difficult to guess the solution here (what sequence starts
with 0 when n = 0 and increases by 1 on each step?), it will be more useful to
arrive at it in a systematic fashion. From the several techniques available for
solving recurrence relations, we use what can be called the method of backward
substitutions. The method's idea (and the reason for the name) is immediately
clear from the way it applies to solving our particular recurrence:

M(n) = M(n − 1) + 1                          substitute M(n − 1) = M(n − 2) + 1
     = [M(n − 2) + 1] + 1 = M(n − 2) + 2     substitute M(n − 2) = M(n − 3) + 1
     = [M(n − 3) + 1] + 2 = M(n − 3) + 3.

After inspecting the first three lines, we see an emerging pattern, which makes it
possible to predict not only the next line (what would it be?) but also a general
formula for the pattern: M(n) = M(n − i) + i. Strictly speaking, the correctness of
this formula should be proved by mathematical induction, but it is easier to get to
the solution as follows and then verify its correctness.
What remains to be done is to take advantage of the initial condition given.
Since it is specified for n = 0, we have to substitute i = n in the pattern's formula
to get the ultimate result of our backward substitutions:

M(n) = M(n − 1) + 1 = . . . = M(n − i) + i = . . . = M(n − n) + n = n.
You should not be disappointed after exerting so much effort to get this
obvious answer. The benefits of the method illustrated in this simple example
will become clear very soon, when we have to solve more difficult recurrences.
Also, note that the simple iterative algorithm that accumulates the product of n
consecutive integers requires the same number of multiplications, and it does so
without the overhead of time and space used for maintaining the recursion's stack.
The issue of time efficiency is actually not that important for the problem of
computing n!, however. As we saw in Section 2.1, the function's values get so large
so fast that we can realistically compute exact values of n! only for very small n's.
Again, we use this example just as a simple and convenient vehicle to introduce
the standard approach to analyzing recursive algorithms.
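A recursive Python rendering of the algorithm, instrumented to return the multiplication count along with the value (our sketch), confirms the solution M(n) = n of recurrence (2.2).

def factorial_with_count(n):
    """Return (n!, number of multiplications), mirroring recurrence (2.2)."""
    if n == 0:
        return 1, 0                       # M(0) = 0: no multiplications
    prev, mults = factorial_with_count(n - 1)
    return prev * n, mults + 1            # M(n) = M(n - 1) + 1

for n in [0, 1, 5, 10]:
    print(n, factorial_with_count(n))     # the count equals n in every case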
Generalizing our experience with investigating the recursive algorithm for
computing n!, we can now outline a general plan for investigating recursive algo-
rithms.

General Plan for Analyzing the Time Efficiency of Recursive Algorithms

1. Decide on a parameter (or parameters) indicating an input's size.
2. Identify the algorithm's basic operation.
3. Check whether the number of times the basic operation is executed can vary
on different inputs of the same size; if it can, the worst-case, average-case, and
best-case efficiencies must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the
number of times the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution.
EXAMPLE 2 As our next example, we consider another educational workhorse
of recursive algorithms: the Tower of Hanoi puzzle. In this puzzle, we (or mythical
monks, if you do not like to move disks) have n disks of different sizes that can
slide onto any of three pegs. Initially, all the disks are on the first peg in order of
size, the largest on the bottom and the smallest on top. The goal is to move all the
disks to the third peg, using the second one as an auxiliary, if necessary. We can
move only one disk at a time, and it is forbidden to place a larger disk on top of a
smaller one.
The problem has an elegant recursive solution, which is illustrated in Fig-
ure 2.4. To move n > 1 disks from peg 1 to peg 3 (with peg 2 as auxiliary), we first
move recursively n − 1 disks from peg 1 to peg 2 (with peg 3 as auxiliary), then
move the largest disk directly from peg 1 to peg 3, and, finally, move recursively
n − 1 disks from peg 2 to peg 3 (using peg 1 as auxiliary). Of course, if n = 1, we
simply move the single disk directly from the source peg to the destination peg.
FIGURE 2.4 Recursive solution to the Tower of Hanoi puzzle. [The diagram labels the source peg 1, the auxiliary peg 2, and the destination peg 3.]
Let us apply the general plan outlined above to the Tower of Hanoi problem.
The number of disks n is the obvious choice for the input's size indicator, and so is
moving one disk as the algorithm's basic operation. Clearly, the number of moves
M(n) depends on n only, and we get the following recurrence equation for it:

M(n) = M(n − 1) + 1 + M(n − 1) for n > 1.

With the obvious initial condition M(1) = 1, we have the following recurrence
relation for the number of moves M(n):

M(n) = 2M(n − 1) + 1 for n > 1,    (2.3)
M(1) = 1.
We solve this recurrence by the same method of backward substitutions:

M(n) = 2M(n − 1) + 1                                     sub. M(n − 1) = 2M(n − 2) + 1
     = 2[2M(n − 2) + 1] + 1 = 2²M(n − 2) + 2 + 1         sub. M(n − 2) = 2M(n − 3) + 1
     = 2²[2M(n − 3) + 1] + 2 + 1 = 2³M(n − 3) + 2² + 2 + 1.

The pattern of the first three sums on the left suggests that the next one will be
2⁴M(n − 4) + 2³ + 2² + 2 + 1, and generally, after i substitutions, we get

M(n) = 2ⁱM(n − i) + 2ⁱ⁻¹ + 2ⁱ⁻² + . . . + 2 + 1 = 2ⁱM(n − i) + 2ⁱ − 1.
Since the initial condition is specified for n = 1, which is achieved for i = n − 1, we
get the following formula for the solution to recurrence (2.3):

M(n) = 2ⁿ⁻¹M(n − (n − 1)) + 2ⁿ⁻¹ − 1
     = 2ⁿ⁻¹M(1) + 2ⁿ⁻¹ − 1 = 2ⁿ⁻¹ + 2ⁿ⁻¹ − 1 = 2ⁿ − 1.
Thus, we have an exponential algorithm, which will run for an unimaginably
long time even for moderate values of n (see Problem 5 in this section's exercises).
This is not due to the fact that this particular algorithm is poor; in fact, it is not
difficult to prove that this is the most efficient algorithm possible for this problem.
It is the problem's intrinsic difficulty that makes it so computationally hard. Still,
this example makes an important general point:

One should be careful with recursive algorithms because their succinctness
may mask their inefficiency.
When a recursive algorithm makes more than a single call to itself, it can be
useful for analysis purposes to construct a tree of its recursive calls. In this tree,
nodes correspond to recursive calls, and we can label them with the value of the
parameter (or, more generally, parameters) of the calls. For the Tower of Hanoi
example, the tree is given in Figure 2.5. By counting the number of nodes in the
tree, we can get the total number of calls made by the Tower of Hanoi algorithm:

C(n) = Σ_{l=0}^{n−1} 2ˡ (where l is the level in the tree in Figure 2.5) = 2ⁿ − 1.
FIGURE 2.5 Tree of recursive calls made by the recursive algorithm for the Tower of Hanoi puzzle. [The root is labeled n, its two children n − 1, their children n − 2, and so on down to the leaves labeled 1.]
The number agrees, as it should, with the move count obtained earlier.
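For readers who want to trace the moves themselves, here is a compact Python version of the recursive solution (our sketch); the length of the move list it produces equals 2ⁿ − 1, in agreement with the analysis.

def hanoi(n, source, destination, auxiliary, moves):
    """Solve the puzzle recursively, appending each move to the list moves."""
    if n == 1:
        moves.append((source, destination))
        return
    hanoi(n - 1, source, auxiliary, destination, moves)
    moves.append((source, destination))      # move the largest disk
    hanoi(n - 1, auxiliary, destination, source, moves)

moves = []
hanoi(4, 1, 3, 2, moves)
print(len(moves))       # 15, i.e., 2**4 - 1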
EXAMPLE 3 As our next example, we investigate a recursive version of the
algorithm discussed at the end of Section 2.3.

ALGORITHM BinRec(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n's binary representation
if n = 1 return 1
else return BinRec(⌊n/2⌋) + 1
Let us set up a recurrence and an initial condition for the number of addi-
tions A(n) made by the algorithm. The number of additions made in computing
BinRec(⌊n/2⌋) is A(⌊n/2⌋), plus one more addition is made by the algorithm to
increase the returned value by 1. This leads to the recurrence

A(n) = A(⌊n/2⌋) + 1 for n > 1.    (2.4)

Since the recursive calls end when n is equal to 1 and there are no additions made
then, the initial condition is

A(1) = 0.
The presence of ⌊n/2⌋ in the function's argument makes the method of back-
ward substitutions stumble on values of n that are not powers of 2. Therefore, the
standard approach to solving such a recurrence is to solve it only for n = 2ᵏ and
then take advantage of the theorem called the smoothness rule (see Appendix B),
which claims that under very broad assumptions the order of growth observed for
n = 2ᵏ gives a correct answer about the order of growth for all values of n. (Alter-
natively, after getting a solution for powers of 2, we can sometimes fine-tune this
solution to get a formula valid for an arbitrary n.) So let us apply this recipe to our
recurrence, which for n = 2ᵏ takes the form
A(2ᵏ) = A(2ᵏ⁻¹) + 1 for k > 0,
A(2⁰) = 0.

Now backward substitutions encounter no problems:

A(2ᵏ) = A(2ᵏ⁻¹) + 1                          substitute A(2ᵏ⁻¹) = A(2ᵏ⁻²) + 1
      = [A(2ᵏ⁻²) + 1] + 1 = A(2ᵏ⁻²) + 2      substitute A(2ᵏ⁻²) = A(2ᵏ⁻³) + 1
      = [A(2ᵏ⁻³) + 1] + 2 = A(2ᵏ⁻³) + 3      . . .
      . . .
      = A(2ᵏ⁻ⁱ) + i
      . . .
      = A(2ᵏ⁻ᵏ) + k.
Thus, we end up with

A(2ᵏ) = A(1) + k = k,

or, after returning to the original variable n = 2ᵏ and hence k = log₂ n,

A(n) = log₂ n ∈ Θ(log n).

In fact, one can prove (Problem 7 in this section's exercises) that the exact solution
for an arbitrary value of n is given by just a slightly more refined formula A(n) = ⌊log₂ n⌋.
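A direct Python rendering of BinRec with an addition counter (our sketch) makes the refined formula easy to spot-check for values of n that are not powers of 2.

import math

def bin_rec(n):
    """Return (digit count, number of additions), mirroring recurrence (2.4)."""
    if n == 1:
        return 1, 0
    digits, adds = bin_rec(n // 2)
    return digits + 1, adds + 1

for n in [1, 2, 7, 64, 1000]:
    assert bin_rec(n)[1] == math.floor(math.log2(n))
    print(n, bin_rec(n))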
This section provides an introduction to the analysis of recursive algorithms.
These techniques will be used throughout the book and expanded further as
necessary. In the next section, we discuss the Fibonacci numbers; their analysis
involves more difficult recurrence relations to be solved by a method different
from backward substitutions.
Exercises 2.4
1. Solve the following recurrence relations.
a. x(n) = x(n − 1) + 5 for n > 1, x(1) = 0
b. x(n) = 3x(n − 1) for n > 1, x(1) = 4
c. x(n) = x(n − 1) + n for n > 0, x(0) = 0
d. x(n) = x(n/2) + n for n > 1, x(1) = 1 (solve for n = 2ᵏ)
e. x(n) = x(n/3) + 1 for n > 1, x(1) = 1 (solve for n = 3ᵏ)
2. Set up and solve a recurrence relation for the number of calls made by F(n),
the recursive algorithm for computing n!.
3. Consider the following recursive algorithm for computing the sum of the first
n cubes: S(n) = 1³ + 2³ + . . . + n³.
ALGORITHM S(n)
//Input: A positive integer n
//Output: The sum of the first n cubes
if n = 1 return 1
else return S(n − 1) + n ∗ n ∗ n

a. Set up and solve a recurrence relation for the number of times the algo-
rithm's basic operation is executed.
b. How does this algorithm compare with the straightforward nonrecursive
algorithm for computing this sum?
4. Consider the following recursive algorithm.

ALGORITHM Q(n)
//Input: A positive integer n
if n = 1 return 1
else return Q(n − 1) + 2 ∗ n − 1

a. Set up a recurrence relation for this function's values and solve it to deter-
mine what this algorithm computes.
b. Set up a recurrence relation for the number of multiplications made by this
algorithm and solve it.
c. Set up a recurrence relation for the number of additions/subtractions made
by this algorithm and solve it.
5. Tower of Hanoi
a. In the original version of the Tower of Hanoi puzzle, as it was published in
the 1890s by Édouard Lucas, a French mathematician, the world will end
after 64 disks have been moved from a mystical Tower of Brahma. Estimate
the number of years it will take if monks could move one disk per minute.
(Assume that monks do not eat, sleep, or die.)
b. How many moves are made by the ith largest disk (1 ≤ i ≤ n) in this
algorithm?
c. Find a nonrecursive algorithm for the Tower of Hanoi puzzle and imple-
ment it in the language of your choice.

6. Restricted Tower of Hanoi Consider the version of the Tower of Hanoi
puzzle in which n disks have to be moved from peg A to peg C using peg
B so that any move should either place a disk on peg B or move a disk from
that peg. (Of course, the prohibition of placing a larger disk on top of a smaller
one remains in place, too.) Design a recursive algorithm for this problem and
find the number of moves made by it.
7. a. Prove that the exact number of additions made by the recursive algorithm
BinRec(n) for an arbitrary positive decimal integer n is ⌊log₂ n⌋.
b. Set up a recurrence relation for the number of additions made by the
nonrecursive version of this algorithm (see Section 2.3, Example 4) and
solve it.
8. a. Design a recursive algorithm for computing 2ⁿ for any nonnegative integer
n that is based on the formula 2ⁿ = 2ⁿ⁻¹ + 2ⁿ⁻¹.
b. Set up a recurrence relation for the number of additions made by the
algorithm and solve it.
c. Draw a tree of recursive calls for this algorithm and count the number of
calls made by the algorithm.
d. Is it a good algorithm for solving this problem?
9. Consider the following recursive algorithm.

ALGORITHM Riddle(A[0..n − 1])
//Input: An array A[0..n − 1] of real numbers
if n = 1 return A[0]
else temp ← Riddle(A[0..n − 2])
    if temp ≤ A[n − 1] return temp
    else return A[n − 1]

a. What does this algorithm compute?
b. Set up a recurrence relation for the algorithm's basic operation count and
solve it.
10. Consider the following algorithm to check whether a graph defined by its
adjacency matrix is complete.

ALGORITHM GraphComplete(A[0..n − 1, 0..n − 1])
//Input: Adjacency matrix A[0..n − 1, 0..n − 1] of an undirected graph G
//Output: 1 (true) if G is complete and 0 (false) otherwise
if n = 1 return 1 //one-vertex graph is complete by definition
else
    if not GraphComplete(A[0..n − 2, 0..n − 2]) return 0
    else for j ← 0 to n − 2 do
        if A[n − 1, j] = 0 return 0
    return 1

What is the algorithm's efficiency class in the worst case?
11. The determinant of an n × n matrix

A = [ a₀ ₀     . . .  a₀ ₙ₋₁
      a₁ ₀     . . .  a₁ ₙ₋₁
       .                .
      aₙ₋₁ ₀  . . .  aₙ₋₁ ₙ₋₁ ],
denoted det A, can be defined as a₀₀ for n = 1 and, for n > 1, by the recursive
formula

det A = Σ_{j=0}^{n−1} sⱼ a₀ⱼ det Aⱼ,

where sⱼ is +1 if j is even and −1 if j is odd, a₀ⱼ is the element in row 0 and
column j, and Aⱼ is the (n − 1) × (n − 1) matrix obtained from matrix A by
deleting its row 0 and column j.
a. Set up a recurrence relation for the number of multiplications made by the
algorithm implementing this recursive definition.
b. Without solving the recurrence, what can you say about the solution's order
of growth as compared to n!?
12. von Neumann's neighborhood revisited Find the number of cells in the von
Neumann neighborhood of range n (Problem 12 in Exercises 2.3) by setting
up and solving a recurrence relation.
13. Frying hamburgers There are n hamburgers to be fried on a small grill that
can hold only two hamburgers at a time. Each hamburger has to be fried
on both sides; frying one side of a hamburger takes 1 minute, regardless of
whether one or two hamburgers are fried at the same time. Consider the
following recursive algorithm for executing this task in the minimum amount
of time. If n ≤ 2, fry the hamburger or the two hamburgers together on each
side. If n > 2, fry any two hamburgers together on each side and then apply
the same procedure recursively to the remaining n − 2 hamburgers.
a. Set up and solve the recurrence for the amount of time this algorithm needs
to fry n hamburgers.
b. Explain why this algorithm does not fry the hamburgers in the minimum
amount of time for all n > 0.
c. Give a correct recursive algorithm that executes the task in the minimum
amount of time.
14. Celebrity problem A celebrity among a group of n people is a person who
knows nobody but is known by everybody else. The task is to identify a
celebrity by only asking questions to people of the form "Do you know
him/her?" Design an efficient algorithm to identify a celebrity or determine
that the group has no such person. How many questions does your algorithm
need in the worst case?
2.5 Example: Computing the nth Fibonacci Number
In this section, we consider the Fibonacci numbers, a famous sequence

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, . . .    (2.5)

that can be defined by the simple recurrence

F(n) = F(n − 1) + F(n − 2) for n > 1    (2.6)

and two initial conditions

F(0) = 0, F(1) = 1.    (2.7)
The Fibonacci numbers were introduced by Leonardo Fibonacci in 1202 as
a solution to a problem about the size of a rabbit population (Problem 2 in this
section's exercises). Many more examples of Fibonacci-like numbers have since
been discovered in the natural world, and they have even been used in predicting
the prices of stocks and commodities. There are some interesting applications of
the Fibonacci numbers in computer science as well. For example, worst-case inputs
for Euclid's algorithm discussed in Section 1.1 happen to be consecutive elements
of the Fibonacci sequence. In this section, we briefly consider algorithms for
computing the nth element of this sequence. Among other benefits, the discussion
will provide us with an opportunity to introduce another method for solving
recurrence relations useful for analysis of recursive algorithms.
To start, let us get an explicit formula for F(n). If we try to apply the method
of backward substitutions to solve recurrence (2.6), we will fail to get an easily
discernible pattern. Instead, we can take advantage of a theorem that describes
solutions to a homogeneous second-order linear recurrence with constant co-
efficients

ax(n) + bx(n − 1) + cx(n − 2) = 0,    (2.8)

where a, b, and c are some fixed real numbers (a ≠ 0) called the coefficients of
the recurrence and x(n) is the generic term of an unknown sequence to be found.
Applying this theorem to our recurrence with the initial conditions given (see
Appendix B) we obtain the formula

F(n) = (1/√5)(φⁿ − φ̂ⁿ),    (2.9)
where φ = (1 + √5)/2 ≈ 1.61803 and φ̂ = −1/φ ≈ −0.61803. Since |φ̂| < 1, the term φ̂ⁿ
gets infinitely small as n goes to infinity. In fact, one can prove that the impact
of the second term (1/√5)φ̂ⁿ on the value of F(n) can be obtained by rounding off the
value of the first term to the nearest integer. In other words, for every nonnegative
integer n,

F(n) = (1/√5)φⁿ rounded to the nearest integer.    (2.10)
In the algorithms that follow, we consider, for the sake of simplicity, such oper-
ations as additions and multiplications at unit cost. Since the Fibonacci numbers
grow infinitely large (and grow very rapidly), a more detailed analysis than the
one offered here is warranted. In fact, it is the size of the numbers rather than a
time-efficient method for computing them that should be of primary concern here.
Still, these caveats notwithstanding, the algorithms we outline and their analysis
provide useful examples for a student of the design and analysis of algorithms.
To begin with, we can use recurrence (2.6) and initial conditions (2.7) for the
obvious recursive algorithm for computing F(n).

ALGORITHM F(n)
//Computes the nth Fibonacci number recursively by using its definition
//Input: A nonnegative integer n
//Output: The nth Fibonacci number
if n ≤ 1 return n
else return F(n − 1) + F(n − 2)
Before embarking on its formal analysis, can you tell whether this is an effi-
cient algorithm? Well, we need to do a formal analysis anyway. The algorithm's ba-
sic operation is clearly addition, so let A(n) be the number of additions performed
by the algorithm in computing F(n). Then the numbers of additions needed for
computing F(n − 1) and F(n − 2) are A(n − 1) and A(n − 2), respectively, and
the algorithm needs one more addition to compute their sum. Thus, we get the
following recurrence for A(n):

A(n) = A(n − 1) + A(n − 2) + 1 for n > 1,    (2.11)
A(0) = 0, A(1) = 0.
The recurrence A(n) − A(n − 1) − A(n − 2) = 1 is quite similar to recurrence
F(n) − F(n − 1) − F(n − 2) = 0, but its right-hand side is not equal to zero. Such
recurrences are called inhomogeneous. There are general techniques for solving
inhomogeneous recurrences (see Appendix B or any textbook on discrete mathe-
matics), but for this particular recurrence, a special trick leads to a faster solution.
We can reduce our inhomogeneous recurrence to a homogeneous one by rewriting
it as

[A(n) + 1] − [A(n − 1) + 1] − [A(n − 2) + 1] = 0

and substituting B(n) = A(n) + 1:
B(n) − B(n − 1) − B(n − 2) = 0,
B(0) = 1, B(1) = 1.

This homogeneous recurrence can be solved exactly in the same manner as recur-
rence (2.6) was solved to find an explicit formula for F(n). But it can actually be
avoided by noting that B(n) is, in fact, the same recurrence as F(n) except that it
starts with two 1s and thus runs one step ahead of F(n). So B(n) = F(n + 1), and

A(n) = B(n) − 1 = F(n + 1) − 1 = (1/√5)(φⁿ⁺¹ − φ̂ⁿ⁺¹) − 1.

Hence, A(n) ∈ Θ(φⁿ), and if we measure the size of n by the number of bits
b = ⌊log₂ n⌋ + 1 in its binary representation, the efficiency class will be even worse,
namely, doubly exponential: A(b) ∈ Θ(φ^(2ᵇ)).
The poor efficiency class of the algorithm could be anticipated by the nature of
recurrence (2.11). Indeed, it contains two recursive calls with the sizes of smaller
instances only slightly smaller than size n. (Have you encountered such a situation
before?) We can also see the reason behind the algorithm's inefficiency by looking
at a recursive tree of calls tracing the algorithm's execution. An example of such
a tree for n = 5 is given in Figure 2.6. Note that the same values of the function
are being evaluated here again and again, which is clearly extremely inefficient.
We can obtain a much faster algorithm by simply computing the successive
elements of the Fibonacci sequence iteratively, as is done in the following algo-
rithm.
ALGORITHM Fib(n)
//Computes the nth Fibonacci number iteratively by using its definition
//Input: A nonnegative integer n
//Output: The nth Fibonacci number
F[0] ← 0; F[1] ← 1
for i ← 2 to n do
    F[i] ← F[i − 1] + F[i − 2]
return F[n]
FIGURE 2.6 Tree of recursive calls for computing the 5th Fibonacci number by the definition-based algorithm. [The root F(5) branches into F(4) and F(3); F(4) into F(3) and F(2); and so on, with the same values, such as F(3) and F(2), recomputed repeatedly.]
This algorithm clearly makes n − 1 additions. Hence, it is linear as a function
of n and "only" exponential as a function of the number of bits b in n's binary
representation. Note that using an extra array for storing all the preceding ele-
ments of the Fibonacci sequence can be avoided: storing just two values is neces-
sary to accomplish the task (see Problem 8 in this section's exercises).
The third alternative for computing the nth Fibonacci number lies in using
formula (2.10). The efficiency of the algorithm will obviously be determined by
the efficiency of an exponentiation algorithm used for computing φⁿ. If it is done
by simply multiplying φ by itself n − 1 times, the algorithm will be in Θ(n) = Θ(2ᵇ).
There are faster algorithms for the exponentiation problem. For example, we
will discuss Θ(log n) = Θ(b) algorithms for this problem in Chapters 4 and 6.
Note also that special care should be exercised in implementing this approach
to computing the nth Fibonacci number. Since all its intermediate results are
irrational numbers, we would have to make sure that their approximations in the
computer are accurate enough so that the final round-off yields a correct result.
Finally, there exists a Θ(log n) algorithm for computing the nth Fibonacci
number that manipulates only integers. It is based on the equality

[ F(n − 1)   F(n)     ]   [ 0  1 ] ⁿ
[ F(n)       F(n + 1) ] = [ 1  1 ]        for n ≥ 1

and an efficient way of computing matrix powers.
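One possible Python rendering of this idea uses repeated squaring of the matrix [[0, 1], [1, 1]] (our sketch; the function names are ours, and the text leaves the exponentiation method to Chapters 4 and 6). The loop performs Θ(log n) matrix multiplications.

def mat_mult(X, Y):
    """Product of two 2-by-2 integer matrices."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def fib(n):
    """nth Fibonacci number via repeated squaring of [[0, 1], [1, 1]]."""
    if n == 0:
        return 0
    result = [[1, 0], [0, 1]]        # identity matrix
    power = [[0, 1], [1, 1]]
    k = n
    while k > 0:                     # Theta(log n) iterations
        if k % 2 == 1:
            result = mat_mult(result, power)
        power = mat_mult(power, power)
        k //= 2
    return result[0][1]              # the entry holding F(n)

print([fib(n) for n in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]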
Exercises 2.5
1. Find a Web site dedicated to applications of the Fibonacci numbers and
study it.
2. Fibonacci's rabbits problem A man put a pair of rabbits in a place sur-
rounded by a wall. How many pairs of rabbits will be there in a year if the
initial pair of rabbits (male and female) are newborn and all rabbit pairs are
not fertile during their first month of life but thereafter give birth to one new
male/female pair at the end of every month?

3. Climbing stairs Find the number of different ways to climb an n-stair stair-
case if each step is either one or two stairs. For example, a 3-stair staircase can
be climbed three ways: 1-1-1, 1-2, and 2-1.

4. How many even numbers are there among the first n Fibonacci numbers, i.e.,
among the numbers F(0), F(1), . . . , F(n − 1)? Give a closed-form formula
valid for every n > 0.
5. Check by direct substitutions that the function (1/√5)(φⁿ − φ̂ⁿ) indeed satisfies
recurrence (2.6) and initial conditions (2.7).
6. The maximum values of the Java primitive types int and long are 2³¹ − 1 and
2⁶³ − 1, respectively. Find the smallest n for which the nth Fibonacci number
is not going to fit in a memory allocated for
a. the type int.    b. the type long.
7. Consider the recursive definition-based algorithm for computing the nth Fi-
bonacci number F(n). Let C(n) and Z(n) be the number of times F(1) and
F(0) are computed, respectively. Prove that
a. C(n) = F(n).    b. Z(n) = F(n − 1).

8. Improve algorithm Fib of the text so that it requires only Θ(1) space.
9. Prove the equality

[ F(n − 1)   F(n)     ]   [ 0  1 ] ⁿ
[ F(n)       F(n + 1) ] = [ 1  1 ]        for n ≥ 1.
10. How many modulo divisions are made by Euclid's algorithm on two consec-
utive Fibonacci numbers F(n) and F(n + 1) as the algorithm's input?

11. Dissecting a Fibonacci rectangle Given a rectangle whose sides are two con-
secutive Fibonacci numbers, design an algorithm to dissect it into squares with
no more than two squares being the same size. What is the time efficiency class
of your algorithm?
12. In the language of your choice, implement two algorithms for computing the
last five digits of the nth Fibonacci number that are based on (a) the recursive
definition-based algorithm F(n); (b) the iterative definition-based algorithm
Fib(n). Perform an experiment to find the largest value of n for which your
programs run under 1 minute on your computer.
2.6 Empirical Analysis of Algorithms
In Sections 2.3 and 2.4, we saw how algorithms, both nonrecursive and recursive,
can be analyzed mathematically. Though these techniques can be applied success-
fully to many simple algorithms, the power of mathematics, even when enhanced
with more advanced techniques (see [Sed96], [Pur04], [Gra94], and [Gre07]), is
far from limitless. In fact, even some seemingly simple algorithms have proved
to be very difficult to analyze with mathematical precision and certainty. As we
pointed out in Section 2.1, this is especially true for the average-case analysis.
The principal alternative to the mathematical analysis of an algorithm's effi-
ciency is its empirical analysis. This approach implies steps spelled out in the
following plan.
General Plan for the Empirical Analysis of Algorithm Time Efficiency

1. Understand the experiment's purpose.
2. Decide on the efficiency metric M to be measured and the measurement unit
(an operation count vs. a time unit).
3. Decide on characteristics of the input sample (its range, size, and so on).
4. Prepare a program implementing the algorithm (or algorithms) for the exper-
imentation.
5. Generate a sample of inputs.
6. Run the algorithm (or algorithms) on the samples inputs and record the data
observed.
7. Analyze the data obtained.
Let us discuss these steps one at a time. There are several different goals
one can pursue in analyzing algorithms empirically. They include checking the
accuracy of a theoretical assertion about the algorithm's efficiency, comparing the
efficiency of several algorithms for solving the same problem or different implementations
of the same algorithm, developing a hypothesis about the algorithm's
efficiency class, and ascertaining the efficiency of the program implementing the
algorithm on a particular machine. Obviously, an experiment's design should depend
on the question the experimenter seeks to answer.
In particular, the goal of the experiment should influence, if not dictate, how
the algorithm's efficiency is to be measured. The first alternative is to insert a
counter (or counters) into a program implementing the algorithm to count the
number of times the algorithm's basic operation is executed. This is usually a
straightforward operation; you should only be mindful of the possibility that
the basic operation is executed in several places in the program and that all its
executions need to be accounted for. As straightforward as this task usually is,
you should always test the modified program to ensure that it works correctly, in
terms of both the problem it solves and the counts it yields.
The second alternative is to time the program implementing the algorithm in
question. The easiest way to do this is to use a system's command, such as the time
command in UNIX. Alternatively, one can measure the running time of a code
fragment by asking for the system time right before the fragment's start (t_start) and
just after its completion (t_finish), and then computing the difference between the
two (t_finish − t_start).⁷ In C and C++, you can use the function clock for this purpose;
in Java, the method currentTimeMillis() in the System class is available.
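
For instance, the approach just described might be coded in Java as follows. This
is an illustrative sketch only (neither the harness nor the iterative Fibonacci
routine fib being timed comes from the text), and it already incorporates the
repetition trick discussed below:

    // A sketch of the timing approach: record the system time before and
    // after the code fragment and take the difference.
    public class TimingDemo {
        // Hypothetical algorithm under measurement: iterative Fibonacci.
        static long fib(int n) {
            long a = 0, b = 1;
            for (int i = 0; i < n; i++) { long t = a + b; a = b; b = t; }
            return a;
        }

        public static void main(String[] args) {
            int reps = 1_000_000;          // repeat so that the time registers
            long tStart = System.currentTimeMillis();
            for (int r = 0; r < reps; r++)
                fib(40);
            long tFinish = System.currentTimeMillis();
            // Divide the total by the number of repetitions (see below).
            System.out.printf("average time: %.6f ms%n",
                              (tFinish - tStart) / (double) reps);
        }
    }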
It is important to keep several facts in mind, however. First, a system's time
is typically not very accurate, and you might get somewhat different results on
repeated runs of the same program on the same inputs. An obvious remedy is
to make several such measurements and then take their average (or the median)
as the sample's observation point. Second, given the high speed of modern computers,
the running time may fail to register at all and be reported as zero. The
standard trick to overcome this obstacle is to run the program in an extra loop
many times, measure the total running time, and then divide it by the number of
the loop's repetitions. Third, on a computer running under a time-sharing system
such as UNIX, the reported time may include the time spent by the CPU on other
programs, which obviously defeats the purpose of the experiment. Therefore, you
should take care to ask the system for the time devoted specifically to execution of
your program. (In UNIX, this time is called the user time, and it is automatically
provided by the time command.)

7. If the system time is given in units called ticks, the difference should be divided by a constant
indicating the number of ticks per time unit.
Thus, measuring the physical running time has several disadvantages, both
principal (dependence on a particular machine being the most important of them)
and technical, not shared by counting the executions of a basic operation. On the
other hand, the physical running time provides very specific information about
an algorithm's performance in a particular computing environment, which can
be of more importance to the experimenter than, say, the algorithm's asymptotic
efficiency class. In addition, measuring time spent on different segments of a
program can pinpoint a bottleneck in the program's performance that can be
missed by an abstract deliberation about the algorithm's basic operation. Getting
such data, called profiling, is an important resource in the empirical analysis of
an algorithm's running time; the data in question can usually be obtained from
the system tools available in most computing environments.
Whether you decide to measure the efficiency by basic operation counting or
by time clocking, you will need to decide on a sample of inputs for the experiment.
Often, the goal is to use a sample representing a typical input; so the challenge
is to understand what a typical input is. For some classes of algorithms (e.g., for
algorithms for the traveling salesman problem that we are going to discuss later in
the book), researchers have developed a set of instances they use for benchmarking.
But much more often than not, an input sample has to be developed by the
experimenter. Typically, you will have to make decisions about the sample size (it
is sensible to start with a relatively small sample and increase it later if necessary),
the range of instance sizes (typically neither trivially small nor excessively large),
and a procedure for generating instances in the range chosen. The instance sizes
can either adhere to some pattern (e.g., 1000, 2000, 3000, . . . , 10,000 or 500, 1000,
2000, 4000, . . . , 128,000) or be generated randomly within the range chosen.
The principal advantage of size changing according to a pattern is that its
impact is easier to analyze. For example, if a sample's sizes are generated by
doubling, you can compute the ratios M(2n)/M(n) of the observed metric M
(the count or the time) to see whether the ratios exhibit a behavior typical of
algorithms in one of the basic efficiency classes discussed in Section 2.2. The
major disadvantage of nonrandom sizes is the possibility that the algorithm under
investigation exhibits atypical behavior on the sample chosen. For example, if all
the sizes in a sample are even and your algorithm runs much more slowly on odd-size
inputs, the empirical results will be quite misleading.
Another important issue concerning sizes in an experiment's sample is
whether several instances of the same size should be included. If you expect the
observed metric to vary considerably on instances of the same size, it would
probably be wise to include several instances for every size in the sample. (There
are well-developed methods in statistics to help the experimenter make such decisions;
you will find no shortage of books on this subject.) Of course, if several
instances of the same size are included in the sample, the averages or medians of
the observed values for each size should be computed and investigated instead of
or in addition to individual sample points.
Much more often than not, an empirical analysis requires generating random
numbers. Even if you decide to use a pattern for input sizes, you will typically
want the instances themselves generated randomly. Generating random numbers on
a digital computer is known to present a difficult problem because, in principle,
the problem can be solved only approximately. This is the reason computer scientists
prefer to call such numbers pseudorandom. As a practical matter, the easiest
and most natural way of getting such numbers is to take advantage of a random
number generator available in computer language libraries. Typically, its output
will be a value of a (pseudo)random variable uniformly distributed in the interval
between 0 and 1. If a different (pseudo)random variable is desired, an appropriate
transformation needs to be made. For example, if x is a continuous random
variable uniformly distributed on the interval 0 ≤ x < 1, the variable
y = ⌊l + x(r − l)⌋ will be uniformly distributed among the integer values between
integers l and r − 1 (l < r).
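
As a quick illustration of this transformation (a sketch, with arbitrary
illustrative bounds l and r), in Java it amounts to a single line:

    import java.util.Random;

    // Mapping a uniform value x in [0, 1) to a uniform integer in l..r-1.
    public class UniformInt {
        public static void main(String[] args) {
            Random rnd = new Random();
            int l = 10, r = 20;                        // illustrative bounds
            double x = rnd.nextDouble();               // uniform in [0, 1)
            int y = l + (int) Math.floor(x * (r - l)); // uniform in l..r-1
            System.out.println(y);
        }
    }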
Alternatively, you can implement one of several known algorithms for generating
(pseudo)random numbers. The most widely used and thoroughly studied of
such algorithms is the linear congruential method.
ALGORITHM Random(n, m, seed, a, b)
//Generates a sequence of n pseudorandom numbers according to the linear
//    congruential method
//Input: A positive integer n and positive integer parameters m, seed, a, b
//Output: A sequence r_1, . . . , r_n of n pseudorandom integers uniformly
//    distributed among integer values between 0 and m − 1
//Note: Pseudorandom numbers between 0 and 1 can be obtained
//    by treating the integers generated as digits after the decimal point
r_0 ← seed
for i ← 1 to n do
    r_i ← (a ∗ r_{i−1} + b) mod m
The simplicity of this pseudocode is misleading because the devil lies in the
details of choosing the algorithm's parameters. Here is a partial list of recommendations
based on the results of a sophisticated mathematical analysis (see [KnuII,
pp. 184–185] for details): seed may be chosen arbitrarily and is often set to the
current date and time; m should be large and may be conveniently taken as 2^w,
where w is the computer's word size; a should be selected as an integer between
0.01m and 0.99m with no particular pattern in its digits but such that a mod 8 = 5;
and the value of b can be chosen as 1.
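
For concreteness, the method might be implemented in Java as follows. This is
a sketch only: the constants below merely follow the recommendations above
(m = 2^31, a mod 8 = 5, b = 1) and are not a vetted production generator.

    // A sketch of the linear congruential method with illustrative parameters.
    public class Lcg {
        static final long M = 1L << 31;      // m = 2^31
        static final long A = 1_103_515_245; // 1103515245 mod 8 = 5
        static final long B = 1;
        private long r;                      // current value r_i

        Lcg(long seed) { r = seed % M; }

        long next() {                        // r_i = (a * r_{i-1} + b) mod m
            r = (A * r + B) % M;
            return r;
        }

        public static void main(String[] args) {
            Lcg gen = new Lcg(System.currentTimeMillis());
            for (int i = 1; i <= 5; i++)
                System.out.println(gen.next());
        }
    }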
The empirical data obtained as the result of an experiment need to be recorded
and then presented for an analysis. Data can be presented numerically in a table or
graphically in a scatterplot, i.e., by points in a Cartesian coordinate system. It is a
good idea to use both these options whenever it is feasible because both methods
have their unique strengths and weaknesses.
The principal advantage of tabulated data lies in the opportunity to manipulate
it easily. For example, one can compute the ratios M(n)/g(n) where g(n) is
a candidate to represent the efficiency class of the algorithm in question. If the
algorithm is indeed in Θ(g(n)), most likely these ratios will converge to some positive
constant as n gets large. (Note that careless novices sometimes assume that
this constant must be 1, which is, of course, incorrect according to the definition
of Θ(g(n)).) Or one can compute the ratios M(2n)/M(n) and see how the running
time reacts to doubling of its input size. As we discussed in Section 2.2, such ratios
should change only slightly for logarithmic algorithms and most likely converge
to 2, 4, and 8 for linear, quadratic, and cubic algorithms, respectively, to name
the most obvious and convenient cases.
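
As a small illustration, the doubling ratios can be computed mechanically; the
counts in this sketch are hypothetical values chosen to suggest quadratic growth,
not experimental data from the text:

    // Sketch: computing doubling ratios M(2n)/M(n) for sizes n, 2n, 4n, ...
    public class Ratios {
        public static void main(String[] args) {
            // Hypothetical observed counts for sizes n, 2n, 4n, 8n.
            long[] counts = {500, 2_100, 8_300, 33_900};
            for (int k = 0; k + 1 < counts.length; k++) {
                double ratio = (double) counts[k + 1] / counts[k];
                System.out.printf("M(2n)/M(n) = %.2f%n", ratio);
            }
            // Ratios near 4 would point toward a quadratic efficiency class.
        }
    }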
On the other hand, the form of a scatterplot may also help in ascertaining
the algorithm's probable efficiency class. For a logarithmic algorithm, the scatterplot
will have a concave shape (Figure 2.7a); this fact distinguishes it from
all the other basic efficiency classes. For a linear algorithm, the points will tend
to aggregate around a straight line or, more generally, to be contained between
two straight lines (Figure 2.7b). Scatterplots of functions in Θ(n lg n) and Θ(n²)
will have a convex shape (Figure 2.7c), making them difficult to differentiate. A
scatterplot of a cubic algorithm will also have a convex shape, but it will show a
much more rapid increase in the metric's values. An exponential algorithm will
most probably require a logarithmic scale for the vertical axis, in which the values
of log_a M(n) rather than those of M(n) are plotted. (The commonly used
logarithm base is 2 or 10.) In such a coordinate system, a scatterplot of a truly
exponential algorithm should resemble a linear function because M(n) ≈ ca^n
implies log_b M(n) ≈ log_b c + n log_b a, and vice versa.
One of the possible applications of the empirical analysis is to predict the
algorithm's performance on an instance not included in the experiment sample. For
example, if you observe that the ratios M(n)/g(n) are close to some constant c
for the sample instances, it could be sensible to approximate M(n) by the product
cg(n) for other instances, too. This approach should be used with caution,
especially for values of n outside the sample range. (Mathematicians call such
predictions extrapolation, as opposed to interpolation, which deals with values
within the sample range.) Of course, you can try unleashing the standard techniques
of statistical data analysis and prediction. Note, however, that the majority
of such techniques are based on specific probabilistic assumptions that may or may
not be valid for the experimental data in question.
It seems appropriate to end this section by pointing out the basic differences
between mathematical and empirical analyses of algorithms. The principal
strength of the mathematical analysis is its independence of specific inputs;
its principal weakness is its limited applicability, especially for investigating the
average-case efficiency. The principal strength of the empirical analysis lies in its
applicability to any algorithm, but its results can depend on the particular sample
of instances and the computer used in the experiment.
FIGURE 2.7 Typical scatter plots. (a) Logarithmic. (b) Linear. (c) One of the convex
functions. (In each plot, the horizontal axis represents the input size n and
the vertical axis the observed count or time.)
Exercises 2.6
1. Consider the following well-known sorting algorithm, which is studied later
in the book, with a counter inserted to count the number of key comparisons.
ALGORITHM SortAnalysis(A[0..n − 1])
//Input: An array A[0..n − 1] of n orderable elements
//Output: The total number of key comparisons made
count ← 0
for i ← 1 to n − 1 do
    v ← A[i]
    j ← i − 1
    while j ≥ 0 and A[j] > v do
        count ← count + 1
        A[j + 1] ← A[j]
        j ← j − 1
    A[j + 1] ← v
return count
Is the comparison counter inserted in the right place? If you believe it is, prove
it; if you believe it is not, make an appropriate correction.
2. a. Run the program of Problem 1, with a properly inserted counter (or counters)
for the number of key comparisons, on 20 random arrays of sizes 1000,
2000, 3000, . . . , 20,000.
b. Analyze the data obtained to form a hypothesis about the algorithm's
average-case efficiency.
c. Estimate the number of key comparisons we should expect for a randomly
generated array of size 25,000 sorted by the same algorithm.
3. Repeat Problem 2 by measuring the program's running time in milliseconds.
4. Hypothesize a likely efficiency class of an algorithm based on the following
empirical observations of its basic operation's count:

size   1000   2000   3000   4000   5000   6000   7000   8000    9000    10000
count  11,966 24,303 39,992 53,010 67,272 78,692 91,274 113,063 129,799 140,538

5. What scale transformation will make a logarithmic scatterplot look like a
linear one?
6. How can one distinguish a scatterplot for an algorithm in Θ(lg lg n) from a
scatterplot for an algorithm in Θ(lg n)?
7. a. Find empirically the largest number of divisions made by Euclid's algorithm
for computing gcd(m, n) for 1 ≤ n ≤ m ≤ 100.
b. For each positive integer k, find empirically the smallest pair of integers
1 ≤ n ≤ m ≤ 100 for which Euclid's algorithm needs to make k divisions in
order to find gcd(m, n).
8. The average-case efficiency of Euclid's algorithm on inputs of size n can be
measured by the average number of divisions D_avg(n) made by the algorithm
in computing gcd(n, 1), gcd(n, 2), . . . , gcd(n, n). For example,

D_avg(5) = (1/5)(1 + 2 + 3 + 2 + 1) = 1.8.

Produce a scatterplot of D_avg(n) and indicate the algorithm's likely average-case
efficiency class.
9. Run an experiment to ascertain the efficiency class of the sieve of Eratosthenes
(see Section 1.1).
10. Run a timing experiment for the three algorithms for computing gcd(m, n)
presented in Section 1.1.
2.7 Algorithm Visualization
In addition to the mathematical and empirical analyses of algorithms, there is yet
a third way to study algorithms. It is called algorithm visualization and can be
defined as the use of images to convey some useful information about algorithms.
That information can be a visual illustration of an algorithm's operation, of its performance
on different kinds of inputs, or of its execution speed versus that of other
algorithms for the same problem. To accomplish this goal, an algorithm visualization
uses graphic elements (points, line segments, two- or three-dimensional bars,
and so on) to represent some interesting events in the algorithm's operation.
There are two principal variations of algorithm visualization:
Static algorithm visualization
Dynamic algorithm visualization, also called algorithm animation
Static algorithm visualization shows an algorithm's progress through a series
of still images. Algorithm animation, on the other hand, shows a continuous,
movie-like presentation of an algorithm's operations. Animation is an arguably
more sophisticated option, which, of course, is much more difficult to implement.
Early efforts in the area of algorithm visualization go back to the 1970s. The
watershed event happened in 1981 with the appearance of a 30-minute color sound
film titled Sorting Out Sorting. This algorithm visualization classic was produced
at the University of Toronto by Ronald Baecker with the assistance of D. Sherman
[Bae81, Bae98]. It contained visualizations of nine well-known sorting algorithms
(more than half of them are discussed later in the book) and provided quite a
convincing demonstration of their relative speeds.
The success of Sorting Out Sorting made sorting algorithms a perennial favorite
for algorithm animation. Indeed, the sorting problem lends itself quite
naturally to visual presentation via vertical or horizontal bars or sticks of different
heights or lengths, which need to be rearranged according to their sizes (Figure
2.8). This presentation is convenient, however, only for illustrating actions of a
typical sorting algorithm on small inputs. For larger files, Sorting Out Sorting used
the ingenious idea of presenting data by a scatterplot of points on a coordinate
plane, with the first coordinate representing an item's position in the file and the
second one representing the item's value; with such a representation, the process
of sorting looks like a transformation of a random scatterplot of points into the
points along a frame's diagonal (Figure 2.9). In addition, most sorting algorithms
work by comparing and exchanging two given items at a time, an event that can
be animated relatively easily.

FIGURE 2.8 Initial and final screens of a typical visualization of a sorting algorithm using
the bar representation.
Since the appearance of Sorting Out Sorting, a great number of algorithm
animations have been created, especially after the appearance of Java and the
World Wide Web in the 1990s. They range in scope from one particular algorithm
to a group of algorithms for the same problem (e.g., sorting) or the same application
area (e.g., geometric algorithms) to general-purpose animation systems. At
the end of 2010, a catalog of links to existing visualizations, maintained under the
NSF-supported AlgoViz Project, contained over 500 links. Unfortunately, a survey
of existing visualizations found most of them to be of low quality, with the content
heavily skewed toward easier topics such as sorting [Sha07].

FIGURE 2.9 Initial and final screens of a typical visualization of a sorting algorithm using
the scatterplot representation.
There are two principal applications of algorithm visualization: research and
education. Potential benefits for researchers are based on expectations that algorithm
visualization may help uncover some unknown features of algorithms. For
example, one researcher used a visualization of the recursive Tower of Hanoi algorithm
in which odd- and even-numbered disks were colored in two different colors.
He noticed that two disks of the same color never came in direct contact during
the algorithm's execution. This observation helped him in developing a better nonrecursive
version of the classic algorithm. To give another example, Bentley and
McIlroy [Ben93] mentioned using an algorithm animation system in their work
on improving a library implementation of a leading sorting algorithm.
The application of algorithm visualization to education seeks to help students
learning algorithms. The available evidence of its effectiveness is decisively mixed.
Although some experiments did register positive learning outcomes, others failed
to do so. The increasing body of evidence indicates that creating sophisticated
software systems is not going to be enough. In fact, it appears that the level of
student involvement with visualization might be more important than specific
features of visualization software. In some experiments, low-tech visualizations
prepared by students were more effective than passive exposure to sophisticated
software systems.
To summarize, although some successes in both research and education have
been reported in the literature, they are not as impressive as one might expect. A
deeper understanding of human perception of images will be required before the
true potential of algorithm visualization is fulfilled.
SUMMARY
There are two kinds of algorithm efficiency: time efficiency and space
efficiency. Time efficiency indicates how fast the algorithm runs; space
efficiency deals with the extra space it requires.
An algorithm's time efficiency is principally measured as a function of its input
size by counting the number of times its basic operation is executed. A basic
operation is the operation that contributes the most to running time. Typically,
it is the most time-consuming operation in the algorithm's innermost loop.
For some algorithms, the running time can differ considerably for inputs of
the same size, leading to worst-case efficiency, average-case efficiency, and
best-case efficiency.
The established framework for analyzing time efficiency is primarily grounded
in the order of growth of the algorithm's running time as its input size goes to
infinity.
The notations O, Ω, and Θ are used to indicate and compare the asymptotic
orders of growth of functions expressing algorithm efficiencies.
The efficiencies of a large number of algorithms fall into the following
few classes: constant, logarithmic, linear, linearithmic, quadratic, cubic, and
exponential.
The main tool for analyzing the time efficiency of a nonrecursive algorithm
is to set up a sum expressing the number of executions of its basic operation
and ascertain the sum's order of growth.
The main tool for analyzing the time efficiency of a recursive algorithm is to
set up a recurrence relation expressing the number of executions of its basic
operation and ascertain the solution's order of growth.
Succinctness of a recursive algorithm may mask its inefficiency.
The Fibonacci numbers are an important sequence of integers in which every
element is equal to the sum of its two immediate predecessors. There are
several algorithms for computing the Fibonacci numbers, with drastically
different efficiencies.
Empirical analysis of an algorithm is performed by running a program
implementing the algorithm on a sample of inputs and analyzing the data
observed (the basic operation's count or physical running time). This
often involves generating pseudorandom numbers. The applicability to any
algorithm is the principal strength of this approach; the dependence of results
on the particular computer and instance sample is its main weakness.
Algorithm visualization is the use of images to convey useful information
about algorithms. The two principal variations of algorithm visualization are
static algorithm visualization and dynamic algorithm visualization (also called
algorithm animation).
3  Brute Force and Exhaustive Search

Science is as far removed from brute force as this sword from a crowbar.
Edward Lytton (1803–1873), Leila, Book II, Chapter I

Doing a thing well is often a waste of time.
Robert Byrne, a master pool and billiards player and a writer
After introducing the framework and methods for algorithm analysis in the
preceding chapter, we are ready to embark on a discussion of algorithm
design strategies. Each of the next eight chapters is devoted to a particular design
strategy. The subject of this chapter is brute force and its important special case,
exhaustive search. Brute force can be described as follows:

Brute force is a straightforward approach to solving a problem, usually
directly based on the problem statement and definitions of the concepts
involved.

The "force" implied by the strategy's definition is that of a computer and
not that of one's intellect. "Just do it!" would be another way to describe the
prescription of the brute-force approach. And often, the brute-force strategy is
indeed the one that is easiest to apply.
As an example, consider the exponentiation problem: compute a^n for a
nonzero number a and a nonnegative integer n. Although this problem might
seem trivial, it provides a useful vehicle for illustrating several algorithm design
strategies, including the brute force. (Also note that computing a^n mod m for some
large integers is a principal component of a leading encryption algorithm.) By the
definition of exponentiation,

a^n = a ∗ . . . ∗ a    (n times).

This suggests simply computing a^n by multiplying 1 by a n times.
We have already encountered at least two brute-force algorithms in the book:
the consecutive integer checking algorithm for computing gcd(m, n) in Section 1.1
and the definition-based algorithm for matrix multiplication in Section 2.3. Many
other examples are given later in this chapter. (Can you identify a few algorithms
you already know as being based on the brute-force approach?)
Though rarely a source of clever or efficient algorithms, the brute-force approach
should not be overlooked as an important algorithm design strategy. First,
unlike some of the other strategies, brute force is applicable to a very wide variety
of problems. In fact, it seems to be the only general approach for which it
is more difficult to point out problems it cannot tackle. Second, for some important
problems (e.g., sorting, searching, matrix multiplication, string matching),
the brute-force approach yields reasonable algorithms of at least some practical
value with no limitation on instance size. Third, the expense of designing a
more efficient algorithm may be unjustifiable if only a few instances of a problem
need to be solved and a brute-force algorithm can solve those instances with
acceptable speed. Fourth, even if too inefficient in general, a brute-force algorithm
can still be useful for solving small-size instances of a problem. Finally,
a brute-force algorithm can serve an important theoretical or educational purpose
as a yardstick with which to judge more efficient alternatives for solving a
problem.
3.1 Selection Sort and Bubble Sort
In this section, we consider the application of the brute-force approach to the
problem of sorting: given a list of n orderable items (e.g., numbers, characters
from some alphabet, character strings), rearrange them in nondecreasing order.
As we mentioned in Section 1.3, dozens of algorithms have been developed for
solving this very important problem. You might have learned several of them in
the past. If you have, try to forget them for the time being and look at the problem
afresh.
Now, after your mind is unburdened of previous knowledge of sorting algorithms,
ask yourself a question: What would be the most straightforward method
for solving the sorting problem? Reasonable people may disagree on the answer
to this question. The two algorithms discussed here, selection sort and bubble
sort, seem to be the two prime candidates.
Selection Sort
We start selection sort by scanning the entire given list to find its smallest element
and exchange it with the first element, putting the smallest element in its final
position in the sorted list. Then we scan the list, starting with the second element,
to find the smallest among the last n − 1 elements and exchange it with the second
element, putting the second smallest element in its final position. Generally, on the
ith pass through the list, which we number from 0 to n − 2, the algorithm searches
for the smallest item among the last n − i elements and swaps it with A_i:

    A_0 ≤ A_1 ≤ . . . ≤ A_{i−1} | A_i, . . . , A_min, . . . , A_{n−1}
    in their final positions      the last n − i elements

After n − 1 passes, the list is sorted.
Here is pseudocode of this algorithm, which, for simplicity, assumes that the
list is implemented as an array:
ALGORITHM SelectionSort(A[0..n − 1])
//Sorts a given array by selection sort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
for i ← 0 to n − 2 do
    min ← i
    for j ← i + 1 to n − 1 do
        if A[j] < A[min] min ← j
    swap A[i] and A[min]
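
For readers who prefer a concrete programming language, here is a direct Java
translation of the pseudocode (a sketch for illustration):

    import java.util.Arrays;

    // Selection sort: a direct translation of the pseudocode above.
    public class SelectionSort {
        static void selectionSort(int[] a) {
            int n = a.length;
            for (int i = 0; i <= n - 2; i++) {
                int min = i;                     // index of smallest in tail
                for (int j = i + 1; j <= n - 1; j++)
                    if (a[j] < a[min]) min = j;
                int tmp = a[i]; a[i] = a[min]; a[min] = tmp; // swap
            }
        }

        public static void main(String[] args) {
            int[] a = {89, 45, 68, 90, 29, 34, 17}; // the list from Figure 3.1
            selectionSort(a);
            System.out.println(Arrays.toString(a));
        }
    }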
As an example, the action of the algorithm on the list 89, 45, 68, 90, 29, 34, 17
is illustrated in Figure 3.1.
The analysis of selection sort is straightforward. The input size is given by the
number of elements n; the basic operation is the key comparison A[j] < A[min].
The number of times it is executed depends only on the array size and is given by
the following sum:

C(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = Σ_{i=0}^{n−2} [(n − 1) − (i + 1) + 1] = Σ_{i=0}^{n−2} (n − 1 − i).
    | 89   45   68   90   29   34   17
     17 | 45   68   90   29   34   89
     17   29 | 68   90   45   34   89
     17   29   34 | 90   45   68   89
     17   29   34   45 | 90   68   89
     17   29   34   45   68 | 90   89
     17   29   34   45   68   89 | 90

FIGURE 3.1 Example of sorting with selection sort. Each line corresponds to one
iteration of the algorithm, i.e., a pass through the list's tail to the right
of the vertical bar; an element in bold indicates the smallest element
found. Elements to the left of the vertical bar are in their final positions and
are not considered in this and subsequent iterations.
Since we have already encountered the last sum in analyzing the algorithm of
Example 2 in Section 2.3, you should be able to compute it now on your own.
Whether you compute this sum by distributing the summation symbol or by
immediately getting the sum of decreasing integers, the answer, of course, must
be the same:

C(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = Σ_{i=0}^{n−2} (n − 1 − i) = (n − 1)n / 2.

Thus, selection sort is a Θ(n²) algorithm on all inputs. Note, however, that the
number of key swaps is only Θ(n), or, more precisely, n − 1 (one for each repetition
of the i loop). This property distinguishes selection sort positively from many other
sorting algorithms.
Bubble Sort
Another brute-force application to the sorting problem is to compare adjacent
elements of the list and exchange them if they are out of order. By doing it
repeatedly, we end up "bubbling up" the largest element to the last position on
the list. The next pass bubbles up the second largest element, and so on, until
after n − 1 passes the list is sorted. Pass i (0 ≤ i ≤ n − 2) of bubble sort can be
represented by the following diagram:

    A_0, . . . , A_j ?↔ A_{j+1}, . . . , A_{n−i−1} | A_{n−i} ≤ . . . ≤ A_{n−1}
                                                     in their final positions

Here is pseudocode of this algorithm.
ALGORITHM BubbleSort(A[0..n − 1])
//Sorts a given array by bubble sort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
for i ← 0 to n − 2 do
    for j ← 0 to n − 2 − i do
        if A[j + 1] < A[j] swap A[j] and A[j + 1]
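
As with selection sort, the pseudocode translates to Java directly (again, an
illustrative sketch):

    import java.util.Arrays;

    // Bubble sort: a direct translation of the pseudocode above.
    public class BubbleSort {
        static void bubbleSort(int[] a) {
            int n = a.length;
            for (int i = 0; i <= n - 2; i++)
                for (int j = 0; j <= n - 2 - i; j++)
                    if (a[j + 1] < a[j]) {       // adjacent pair out of order
                        int tmp = a[j]; a[j] = a[j + 1]; a[j + 1] = tmp;
                    }
        }

        public static void main(String[] args) {
            int[] a = {89, 45, 68, 90, 29, 34, 17}; // the list from Figure 3.2
            bubbleSort(a);
            System.out.println(Arrays.toString(a));
        }
    }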
The action of the algorithm on the list 89, 45, 68, 90, 29, 34, 17 is illustrated
as an example in Figure 3.2.
The number of key comparisons for the bubble-sort version given above is
the same for all arrays of size n; it is obtained by a sum that is almost identical to
the sum for selection sort:
    89 ?  45    68    90    29    34    17
    45    89 ?  68    90    29    34    17
    45    68    89 ?  90    29    34    17
    45    68    89    90 ?  29    34    17
    45    68    89    29    90 ?  34    17
    45    68    89    29    34    90 ?  17
    45 ?  68    89    29    34    17  | 90
    45    68 ?  89    29    34    17  | 90
    45    68    89 ?  29    34    17  | 90
    45    68    29    89 ?  34    17  | 90
    45    68    29    34    89 ?  17  | 90
    etc.

FIGURE 3.2 First two passes of bubble sort on the list 89, 45, 68, 90, 29, 34, 17. A new
line is shown after a swap of two elements is done. The elements to the
right of the vertical bar are in their final positions and are not considered in
subsequent iterations of the algorithm.
C(n) = Σ_{i=0}^{n−2} Σ_{j=0}^{n−2−i} 1 = Σ_{i=0}^{n−2} [(n − 2 − i) − 0 + 1]
     = Σ_{i=0}^{n−2} (n − 1 − i) = (n − 1)n / 2 ∈ Θ(n²).

The number of key swaps, however, depends on the input. In the worst case of
decreasing arrays, it is the same as the number of key comparisons:

S_worst(n) = C(n) = (n − 1)n / 2 ∈ Θ(n²).
As is often the case with an application of the brute-force strategy, the first
version of an algorithm obtained can often be improved upon with a modest
amount of effort. Specifically, we can improve the crude version of bubble sort
given above by exploiting the following observation: if a pass through the list
makes no exchanges, the list has been sorted and we can stop the algorithm
(Problem 12a in this section's exercises). Though the new version runs faster on
some inputs, it is still in Θ(n²) in the worst and average cases. In fact, even among
elementary sorting methods, bubble sort is an inferior choice, and if it were not for
its catchy name, you would probably have never heard of it. However, the general
lesson you just learned is important and worth repeating:

A first application of the brute-force approach often results in an algorithm
that can be improved with a modest amount of effort.
Exercises 3.1
1. a. Give an example of an algorithm that should not be considered an application
of the brute-force approach.
b. Give an example of a problem that cannot be solved by a brute-force
algorithm.
2. a. What is the time efficiency of the brute-force algorithm for computing
a^n as a function of n? As a function of the number of bits in the binary
representation of n?
b. If you are to compute a^n mod m where a > 1 and n is a large positive integer,
how would you circumvent the problem of a very large magnitude of a^n?
3. For each of the algorithms in Problems 4, 5, and 6 of Exercises 2.3, tell whether
or not the algorithm is based on the brute-force approach.
4. a. Design a brute-force algorithm for computing the value of a polynomial

p(x) = a_n x^n + a_{n−1} x^{n−1} + . . . + a_1 x + a_0

at a given point x_0 and determine its worst-case efficiency class.
b. If the algorithm you designed is in Θ(n²), design a linear algorithm for this
problem.
c. Is it possible to design an algorithm with a better-than-linear efficiency for
this problem?
5. A network topology specifies how computers, printers, and other devices
are connected over a network. The figure below illustrates three common
topologies of networks: the ring, the star, and the fully connected mesh.

ring    star    fully connected mesh

You are given a boolean matrix A[0..n − 1, 0..n − 1], where n > 3, which is
supposed to be the adjacency matrix of a graph modeling a network with one
of these topologies. Your task is to determine which of these three topologies,
if any, the matrix represents. Design a brute-force algorithm for this task and
indicate its time efficiency class.
6. Tetromino tilings Tetrominoes are tiles made of four 1 × 1 squares. There
are five types of tetrominoes shown below:
straight tetromino    square tetromino    L-tetromino    T-tetromino    Z-tetromino

Is it possible to tile (i.e., cover exactly without overlaps) an 8 × 8 chessboard
with
a. straight tetrominoes?    b. square tetrominoes?
c. L-tetrominoes?    d. T-tetrominoes?
e. Z-tetrominoes?
7. A stack of fake coins There are n stacks of n identical-looking coins. All of
the coins in one of these stacks are counterfeit, while all the coins in the other
stacks are genuine. Every genuine coin weighs 10 grams; every fake weighs
11 grams. You have an analytical scale that can determine the exact weight of
any number of coins.
a. Devise a brute-force algorithm to identify the stack with the fake coins and
determine its worst-case efficiency class.
b. What is the minimum number of weighings needed to identify the stack
with the fake coins?
8. Sort the list E, X, A, M, P, L, E in alphabetical order by selection sort.
9. Is selection sort stable? (The definition of a stable sorting algorithm was given
in Section 1.3.)
10. Is it possible to implement selection sort for linked lists with the same Θ(n²)
efficiency as the array version?
11. Sort the list E, X, A, M, P, L, E in alphabetical order by bubble sort.
12. a. Prove that if bubble sort makes no exchanges on its pass through a list, the
list is sorted and the algorithm can be stopped.
b. Write pseudocode of the method that incorporates this improvement.
c. Prove that the worst-case efficiency of the improved version is quadratic.
13. Is bubble sort stable?
14. Alternating disks You have a row of 2n disks of two colors, n dark and n light.
They alternate: dark, light, dark, light, and so on. You want to get all the dark
disks to the right-hand end, and all the light disks to the left-hand end. The
only moves you are allowed to make are those that interchange the positions
of two neighboring disks.
Design an algorithm for solving this puzzle and determine the number of
moves it takes. [Gar99]
3.2 Sequential Search and Brute-Force String Matching
We saw in the previous section two applications of the brute-force approach to the
sorting problem. Here we discuss two applications of this strategy to the problem
of searching. The first deals with the canonical problem of searching for an item
of a given value in a given list. The second is different in that it deals with the
string-matching problem.
Sequential Search
We have already encountered a brute-force algorithm for the general searching
problem: it is called sequential search (see Section 2.1). To repeat, the algorithm
simply compares successive elements of a given list with a given search key until
either a match is encountered (successful search) or the list is exhausted without
finding a match (unsuccessful search). A simple extra trick is often employed
in implementing sequential search: if we append the search key to the end of
the list, the search for the key will have to be successful, and therefore we can
eliminate the end-of-list check altogether. Here is pseudocode of this enhanced
version.
ALGORITHM SequentialSearch2(A[0..n], K)
//Implements sequential search with a search key as a sentinel
//Input: An array A of n elements and a search key K
//Output: The index of the first element in A[0..n − 1] whose value is
//    equal to K or −1 if no such element is found
A[n] ← K
i ← 0
while A[i] ≠ K do
    i ← i + 1
if i < n return i
else return −1
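
A Java rendering of this sentinel version might look as follows (a sketch; note
the assumption that the array has a spare slot at index n for the sentinel):

    // Sentinel version of sequential search.
    // The array a must have length n + 1 so that a[n] can hold the sentinel.
    public class SentinelSearch {
        static int search(int[] a, int n, int key) {
            a[n] = key;                  // plant the sentinel
            int i = 0;
            while (a[i] != key)          // no end-of-list check needed
                i++;
            return (i < n) ? i : -1;     // -1 signals an unsuccessful search
        }

        public static void main(String[] args) {
            int[] a = {31, 17, 42, 8, 0}; // n = 4 elements plus a spare slot
            System.out.println(search(a, 4, 42)); // prints 2
            System.out.println(search(a, 4, 99)); // prints -1
        }
    }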
Another straightforward improvement can be incorporated in sequential
search if a given list is known to be sorted: searching in such a list can be stopped
as soon as an element greater than or equal to the search key is encountered.
Sequential search provides an excellent illustration of the brute-force approach,
with its characteristic strength (simplicity) and weakness (inferior efficiency).
The efficiency results obtained in Section 2.1 for the standard version of
sequential search change for the enhanced version only very slightly, so that the
algorithm remains linear in both the worst and average cases. We discuss later in
the book several searching algorithms with a better time efficiency.
Brute-Force String Matching
Recall the string-matching problem introduced in Section 1.3: given a string of n
characters called the text and a string of m characters (m ≤ n) called the pattern,
find a substring of the text that matches the pattern. To put it more precisely, we
want to find i, the index of the leftmost character of the first matching substring
in the text, such that t_i = p_0, . . . , t_{i+j} = p_j, . . . , t_{i+m−1} = p_{m−1}:

    t_0 . . . t_i . . . t_{i+j} . . . t_{i+m−1} . . . t_{n−1}    text T
              p_0 . . . p_j . . .     p_{m−1}                    pattern P
If matches other than the first one need to be found, a string-matching algorithm
can simply continue working until the entire text is exhausted.
A brute-force algorithm for the string-matching problem is quite obvious:
align the pattern against the first m characters of the text and start matching the
corresponding pairs of characters from left to right until either all the m pairs
of the characters match (then the algorithm can stop) or a mismatching pair is
encountered. In the latter case, shift the pattern one position to the right and
resume the character comparisons, starting again with the first character of the
pattern and its counterpart in the text. Note that the last position in the text that
can still be a beginning of a matching substring is n − m (provided the text positions
are indexed from 0 to n − 1). Beyond that position, there are not enough characters
to match the entire pattern; hence, the algorithm need not make any comparisons
there.
ALGORITHM BruteForceStringMatch(T[0..n − 1], P[0..m − 1])
//Implements brute-force string matching
//Input: An array T[0..n − 1] of n characters representing a text and
//    an array P[0..m − 1] of m characters representing a pattern
//Output: The index of the first character in the text that starts a
//    matching substring or −1 if the search is unsuccessful
for i ← 0 to n − m do
    j ← 0
    while j < m and P[j] = T[i + j] do
        j ← j + 1
    if j = m return i
return −1
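
The same algorithm in Java (an illustrative sketch using String rather than
character arrays):

    // Brute-force string matching: a direct translation of the pseudocode.
    public class BruteForceMatch {
        static int match(String text, String pattern) {
            int n = text.length(), m = pattern.length();
            for (int i = 0; i <= n - m; i++) {   // last feasible shift: n - m
                int j = 0;
                while (j < m && pattern.charAt(j) == text.charAt(i + j))
                    j++;
                if (j == m) return i;            // full match found
            }
            return -1;                           // unsuccessful search
        }

        public static void main(String[] args) {
            System.out.println(match("NOBODY_NOTICED_HIM", "NOT")); // prints 7
        }
    }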
An operation of the algorithm is illustrated in Figure 3.3. Note that for this
example, the algorithm shifts the pattern almost always after a single character
comparison. The worst case is much worse: the algorithm may have to make
all m comparisons before shifting the pattern, and this can happen for each of
the n − m + 1 tries. (Problem 6 in this section's exercises asks you to give a
specific example of such a situation.) Thus, in the worst case, the algorithm makes
m(n − m + 1) character comparisons, which puts it in the O(nm) class. For a typical
word search in a natural language text, however, we should expect that most shifts
would happen after very few comparisons (check the example again). Therefore,
the average-case efficiency should be considerably better than the worst-case
efficiency. Indeed it is: for searching in random texts, it has been shown to be linear,
i.e., Θ(n). There are several more sophisticated and more efficient algorithms for
string searching. The most widely known of them, by R. Boyer and J. Moore, is
outlined in Section 7.2 along with its simplification suggested by R. Horspool.

    N O B O D Y _ N O T I C E D _ H I M
    N O T
      N O T
        N O T
          N O T
            N O T
              N O T
                N O T
                  N O T

FIGURE 3.3 Example of brute-force string matching. The pattern's characters that are
compared with their text counterparts are in bold type.
Exercises 3.2
1. Find the number of comparisons made by the sentinel version of sequential
search
a. in the worst case.
b. in the average case if the probability of a successful search is p (0 ≤ p ≤ 1).
2. As shown in Section 2.1, the average number of key comparisons made by
sequential search (without a sentinel, under standard assumptions about its
inputs) is given by the formula

C_avg(n) = p(n + 1)/2 + n(1 − p),

where p is the probability of a successful search. Determine, for a fixed n, the
values of p (0 ≤ p ≤ 1) for which this formula yields the maximum value of
C_avg(n) and the minimum value of C_avg(n).
3. Gadget testing A firm wants to determine the highest floor of its n-story
headquarters from which a gadget can fall without breaking. The firm has two
identical gadgets to experiment with. If one of them gets broken, it cannot be
repaired, and the experiment will have to be completed with the remaining
gadget. Design an algorithm in the best efficiency class you can to solve this
problem.
4. Determine the number of character comparisons made by the brute-force
algorithm in searching for the pattern GANDHI in the text
THERE_IS_MORE_TO_LIFE_THAN_INCREASING_ITS_SPEED
Assume that the length of the text (it is 47 characters long) is known before
the search starts.
5. How many comparisons (both successful and unsuccessful) will be made by
the brute-force algorithm in searching for each of the following patterns in
the binary text of one thousand zeros?
a. 00001    b. 10000    c. 01010
6. Give an example of a text of length n and a pattern of length m that constitutes
a worst-case input for the brute-force string-matching algorithm. Exactly how
many character comparisons will be made for such input?
7. In solving the string-matching problem, would there be any advantage in
comparing pattern and text characters right-to-left instead of left-to-right?
8. Consider the problem of counting, in a given text, the number of substrings
that start with an A and end with a B. For example, there are four such
substrings in CABAAXBYA.
a. Design a brute-force algorithm for this problem and determine its efficiency
class.
b. Design a more efficient algorithm for this problem. [Gin04]
9. Write a visualization program for the brute-force string-matching algorithm.
10. Word Find A popular diversion in the United States, word find (or word
search) puzzles ask the player to find each of a given set of words in a square
table filled with single letters. A word can read horizontally (left or right),
vertically (up or down), or along a 45 degree diagonal (in any of the four
directions) formed by consecutively adjacent cells of the table; it may wrap
around the table's boundaries, but it must read in the same direction with no
zigzagging. The same cell of the table may be used in different words, but, in a
given word, the same cell may be used no more than once. Write a computer
program for solving this puzzle.
11. Battleship game Write a program based on a version of brute-force pattern
matching for playing the game Battleship on the computer. The rules of the
game are as follows. There are two opponents in the game (in this case,
a human player and the computer). The game is played on two identical
boards (10 × 10 tables of squares) on which each opponent places his or her
ships, not seen by the opponent. Each player has five ships, each of which
occupies a certain number of squares on the board: a destroyer (two squares),
a submarine (three squares), a cruiser (three squares), a battleship (four
squares), and an aircraft carrier (five squares). Each ship is placed either
horizontally or vertically, with no two ships touching each other. The game
is played by the opponents taking turns shooting at each other's ships. The
result of every shot is displayed as either a hit or a miss. In case of a hit, the
player gets to go again and keeps playing until missing. The goal is to sink all
the opponent's ships before the opponent succeeds in doing it first. To sink a
ship, all squares occupied by the ship must be hit.
3.3 Closest-Pair and Convex-Hull Problems
by Brute Force
In this section, we consider a straightforward approach to two well-known problems
dealing with a finite set of points in the plane. These problems, aside from
their theoretical interest, arise in two important applied areas: computational geometry
and operations research.
Closest-Pair Problem
The closest-pair problem calls for finding the two closest points in a set of n
points. It is the simplest of a variety of problems in computational geometry that
deals with proximity of points in the plane or higher-dimensional spaces. Points
in question can represent such physical objects as airplanes or post offices as well
as database records, statistical samples, DNA sequences, and so on. An air-traffic
controller might be interested in two closest planes as the most probable collision
candidates. A regional postal service manager might need a solution to the closest-pair
problem to find candidate post-office locations to be closed.
One of the important applications of the closest-pair problem is cluster analysis
in statistics. Based on n data points, hierarchical cluster analysis seeks to organize
them in a hierarchy of clusters based on some similarity metric. For numerical
data, this metric is usually the Euclidean distance; for text and other nonnumerical
data, metrics such as the Hamming distance (see Problem 5 in this section's exercises)
are used. A bottom-up algorithm begins with each element as a separate
cluster and merges them into successively larger clusters by combining the closest
pair of clusters.
For simplicity, we consider the two-dimensional case of the closest-pair problem.
We assume that the points in question are specified in a standard fashion by
their (x, y) Cartesian coordinates and that the distance between two points
p_i(x_i, y_i) and p_j(x_j, y_j) is the standard Euclidean distance

d(p_i, p_j) = √((x_i − x_j)² + (y_i − y_j)²).

The brute-force approach to solving this problem leads to the following obvious
algorithm: compute the distance between each pair of distinct points and
find a pair with the smallest distance. Of course, we do not want to compute the
distance between the same pair of points twice. To avoid doing so, we consider
only the pairs of points (p_i, p_j) for which i < j.
Pseudocode below computes the distance between the two closest points;
getting the closest points themselves requires just a trivial modification.

ALGORITHM BruteForceClosestPair(P)
//Finds distance between two closest points in the plane by brute force
//Input: A list P of n (n ≥ 2) points p_1(x_1, y_1), . . . , p_n(x_n, y_n)
//Output: The distance between the closest pair of points
d ← ∞
for i ← 1 to n − 1 do
    for j ← i + 1 to n do
        d ← min(d, sqrt((x_i − x_j)² + (y_i − y_j)²))  //sqrt is square root
return d
The basic operation of the algorithm is computing the square root. In the age
of electronic calculators with a square-root button, one might be led to believe
that computing the square root is as simple an operation as, say, addition or
multiplication. Of course, it is not. For starters, even for most integers, square roots
are irrational numbers that therefore can be found only approximately. Moreover,
computing such approximations is not a trivial matter. But, in fact, computing
square roots in the loop can be avoided! (Can you think how?) The trick is to
realize that we can simply ignore the square-root function and compare the values
(x_i − x_j)² + (y_i − y_j)² themselves. We can do this because the smaller a number of
which we take the square root, the smaller its square root, or, as mathematicians
say, the square-root function is strictly increasing.
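
A sketch of this improved version in Java, comparing squared distances inside
the loops and taking a single square root at the end:

    // Brute-force closest pair, comparing squared distances in the loop.
    public class ClosestPair {
        static double closestDistance(double[] x, double[] y) {
            int n = x.length;
            double dSq = Double.POSITIVE_INFINITY; // smallest squared distance
            for (int i = 0; i < n - 1; i++)
                for (int j = i + 1; j < n; j++) {
                    double dx = x[i] - x[j], dy = y[i] - y[j];
                    dSq = Math.min(dSq, dx * dx + dy * dy);
                }
            return Math.sqrt(dSq);                 // one sqrt, outside the loop
        }

        public static void main(String[] args) {
            double[] x = {0, 3, 1}, y = {0, 4, 1};
            System.out.println(closestDistance(x, y)); // prints ~1.414
        }
    }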
Then the basic operation of the algorithm will be squaring a number. The
number of times it will be executed can be computed as follows:

C(n) = Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} 2 = 2 Σ_{i=1}^{n−1} (n − i)
     = 2[(n − 1) + (n − 2) + . . . + 1] = (n − 1)n ∈ Θ(n²).

Of course, speeding up the innermost loop of the algorithm could only decrease
the algorithm's running time by a constant factor (see Problem 1 in this
section's exercises), but it cannot improve its asymptotic efficiency class. In Chapter
5, we discuss a linearithmic algorithm for this problem, which is based on a
more sophisticated design technique.
Convex-Hull Problem
On to the other problem, that of computing the convex hull. Finding the convex
hull for a given set of points in the plane or a higher dimensional space is one of
the most important (some people believe the most important) problems in computational
geometry. This prominence is due to a variety of applications in which
this problem needs to be solved, either by itself or as a part of a larger task. Several
such applications are based on the fact that convex hulls provide convenient
approximations of object shapes and data sets given. For example, in computer animation,
replacing objects by their convex hulls speeds up collision detection; the
same idea is used in path planning for Mars mission rovers. Convex hulls are used
in computing accessibility maps produced from satellite images by Geographic
Information Systems. They are also used for detecting outliers by some statistical
techniques. An efficient algorithm for computing a diameter of a set of points,
which is the largest distance between two of the points, needs the set's convex hull
to find the largest distance between two of its extreme points (see below). Finally,
convex hulls are important for solving many optimization problems, because their
extreme points provide a limited set of solution candidates.
We start with a definition of a convex set.

DEFINITION A set of points (finite or infinite) in the plane is called convex if
for any two points p and q in the set, the entire line segment with the endpoints
at p and q belongs to the set.

All the sets depicted in Figure 3.4a are convex, and so are a straight line,
a triangle, a rectangle, and, more generally, any convex polygon,¹ a circle, and
the entire plane. On the other hand, the sets depicted in Figure 3.4b, any finite
set of two or more distinct points, the boundary of any convex polygon, and a
circumference are examples of sets that are not convex.
Now we are ready for the notion of the convex hull. Intuitively, the convex
hull of a set of n points in the plane is the smallest convex polygon that contains
all of them either inside or on its boundary. If this formulation does not fire up
your enthusiasm, consider the problem as one of barricading n sleeping tigers by
a fence of the shortest length. This interpretation is due to D. Harel [Har92]; it is
somewhat lively, however, because the fenceposts have to be erected right at the
spots where some of the tigers sleep! There is another, much tamer interpretation
of this notion. Imagine that the points in question are represented by nails driven
into a large sheet of plywood representing the plane. Take a rubber band and
stretch it to include all the nails, then let it snap into place. The convex hull is the
area bounded by the snapped rubber band (Figure 3.5).
A formal definition of the convex hull that is applicable to arbitrary sets,
including sets of points that happen to lie on the same line, follows.

DEFINITION The convex hull of a set S of points is the smallest convex set
containing S. (The "smallest" requirement means that the convex hull of S must
be a subset of any convex set containing S.)

1. By a triangle, a rectangle, and, more generally, any convex polygon, we mean here a region, i.e., the
set of points both inside and on the boundary of the shape in question.

FIGURE 3.4 (a) Convex sets. (b) Sets that are not convex.

FIGURE 3.5 Rubber-band interpretation of the convex hull.

If S is convex, its convex hull is obviously S itself. If S is a set of two points,
its convex hull is the line segment connecting these points. If S is a set of three
points not on the same line, its convex hull is the triangle with the vertices at the
three points given; if the three points do lie on the same line, the convex hull is
the line segment with its endpoints at the two points that are farthest apart. For
an example of the convex hull for a larger set, see Figure 3.6.
A study of the examples makes the following theorem an expected result.

THEOREM The convex hull of any set S of n > 2 points not all on the same line
is a convex polygon with the vertices at some of the points of S. (If all the points
do lie on the same line, the polygon degenerates to a line segment but still with
the endpoints at two points of S.)
FIGURE 3.6 The convex hull for this set of eight points is the convex polygon with
vertices at p_1, p_5, p_6, p_7, and p_3.
The convex-hull problem is the problem of constructing the convex hull for
a given set S of n points. To solve it, we need to find the points that will serve as
the vertices of the polygon in question. Mathematicians call the vertices of such
a polygon extreme points. By definition, an extreme point of a convex set is a
point of this set that is not a middle point of any line segment with endpoints in
the set. For example, the extreme points of a triangle are its three vertices, the
extreme points of a circle are all the points of its circumference, and the extreme
points of the convex hull of the set of eight points in Figure 3.6 are p_1, p_5, p_6, p_7,
and p_3.
Extreme points have several special properties other points of a convex set
do not have. One of them is exploited by the simplex method, a very important
algorithm discussed in Section 10.1. This algorithm solves linear programming
problems, which are problems of finding a minimum or a maximum of a linear
function of n variables subject to linear constraints (see Problem 12 in this section's
exercises for an example and Sections 6.6 and 10.1 for a general discussion). Here,
however, we are interested in extreme points because their identification solves
the convex-hull problem. Actually, to solve this problem completely, we need to
know a bit more than just which of n points of a given set are extreme points of the
set's convex hull: we need to know which pairs of points need to be connected to
form the boundary of the convex hull. Note that this issue can also be addressed
by listing the extreme points in a clockwise or a counterclockwise order.
So how can we solve the convex-hull problem in a brute-force manner? If you
do not see an immediate plan for a frontal attack, do not be dismayed: the convex-hull
problem is one with no obvious algorithmic solution. Nevertheless, there is a
simple but inefficient algorithm that is based on the following observation about
line segments making up the boundary of a convex hull: a line segment connecting
two points p_i and p_j of a set of n points is a part of the convex hull's boundary if and
only if all the other points of the set lie on the same side of the straight line through
these two points.² (Verify this property for the set in Figure 3.6.) Repeating this
test for every pair of points yields a list of line segments that make up the convex
hull's boundary.
A few elementary facts from analytical geometry are needed to implement
this algorithm. First, the straight line through two points (x1, y1), (x2, y2) in the
coordinate plane can be defined by the equation

ax + by = c,

where a = y2 − y1, b = x1 − x2, c = x1y2 − y1x2.
Second, such a line divides the plane into two half-planes: for all the points
in one of them, ax + by > c, while for all the points in the other, ax + by < c.
(For the points on the line itself, of course, ax + by = c.) Thus, to check whether
certain points lie on the same side of the line, we can simply check whether the
expression ax + by − c has the same sign for each of these points. We leave the
implementation details as an exercise.
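For concreteness, one way this brute-force algorithm might be implemented in
Python follows; this is a minimal sketch, assuming no three points of the set are
collinear (the simplification adopted in the footnote), and the function name
brute_force_convex_hull is ours, not the book's.

def brute_force_convex_hull(points):
    # points: a list of (x, y) tuples, no three of them collinear
    n = len(points)
    hull_edges = []
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = points[i], points[j]
            # coefficients of the line ax + by = c through the two points
            a, b = y2 - y1, x1 - x2
            c = x1 * y2 - y1 * x2
            # sign of ax + by - c for every other point of the set
            signs = [a * x + b * y - c for k, (x, y) in enumerate(points)
                     if k != i and k != j]
            if all(s > 0 for s in signs) or all(s < 0 for s in signs):
                hull_edges.append((points[i], points[j]))
    return hull_edges

The three nested passes over the points match the cubic operation count
discussed next.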
What is the time efficiency of this algorithm? It is in O(n³): for each of
n(n − 1)/2 pairs of distinct points, we may need to find the sign of ax + by − c
for each of the other n − 2 points. There are much more efficient algorithms for
this important problem, and we discuss one of them later in the book.
Exercises 3.3
1. Assuming that sqrt takes about 10 times longer than each of the other oper-
ations in the innermost loop of BruteForceClosestPoints, which are assumed
to take the same amount of time, estimate how much faster the algorithm will
run after the improvement discussed in Section 3.3.
2. Can you design a more efficient algorithm than the one based on the brute-
force strategy to solve the closest-pair problem for n points x1, x2, . . . , xn on
the real line?
3. Let x1 < x2 < . . . < xn be real numbers representing coordinates of n villages
located along a straight road. A post office needs to be built in one of these
villages.
a. Design an efficient algorithm to find the post-office location minimizing
the average distance between the villages and the post office.
b. Design an efficient algorithm to find the post-office location minimizing
the maximum distance from a village to the post office.
2. For the sake of simplicity, we assume here that no three points of a given set lie on the same line. A
modification needed for the general case is left for the exercises.
4. a. There are several alternative ways to define a distance between two points
p1(x1, y1) and p2(x2, y2) in the Cartesian plane. In particular, the Manhattan
distance is defined as

dM(p1, p2) = |x1 − x2| + |y1 − y2|.

Prove that dM satisfies the following axioms, which every distance function
must satisfy:
i. dM(p1, p2) ≥ 0 for any two points p1 and p2, and dM(p1, p2) = 0 if and
only if p1 = p2
ii. dM(p1, p2) = dM(p2, p1)
iii. dM(p1, p2) ≤ dM(p1, p3) + dM(p3, p2) for any p1, p2, and p3
b. Sketch all the points in the Cartesian plane whose Manhattan distance to
the origin (0, 0) is equal to 1. Do the same for the Euclidean distance.
c. True or false: A solution to the closest-pair problem does not depend on
which of the two metrics, dE (Euclidean) or dM (Manhattan), is used?
5. The Hamming distance between two strings of equal length is defined as the
number of positions at which the corresponding symbols are different. It is
named after Richard Hamming (1915–1998), a prominent American scientist
and engineer, who introduced it in his seminal paper on error-detecting and
error-correcting codes.
a. Does the Hamming distance satisfy the three axioms of a distance metric
listed in Problem 4?
b. What is the time efficiency class of the brute-force algorithm for the closest-
pair problem if the points in question are strings of m symbols long and the
distance between two of them is measured by the Hamming distance?
6. Odd pie fight There are n ≥ 3 people positioned on a field (Euclidean plane)
so that each has a unique nearest neighbor. Each person has a cream pie. At a
signal, everybody hurls his or her pie at the nearest neighbor. Assuming that
n is odd and that nobody can miss his or her target, true or false: There always
remains at least one person not hit by a pie. [Car79]
7. The closest-pair problem can be posed in the k-dimensional space, in which
the Euclidean distance between two points p′(x′1, . . . , x′k) and p″(x″1, . . . , x″k)
is defined as

d(p′, p″) = √((x′1 − x″1)² + . . . + (x′k − x″k)²).

What is the time-efficiency class of the brute-force algorithm for the k-
dimensional closest-pair problem?
8. Find the convex hulls of the following sets and identify their extreme points
(if they have any):
a. a line segment
b. a square
c. the boundary of a square
d. a straight line
9. Design a linear-time algorithm to determine two extreme points of the convex
hull of a given set of n > 1 points in the plane.
10. What modification needs to be made in the brute-force algorithm for the
convex-hull problem to handle more than two points on the same straight
line?
11. Write a program implementing the brute-force algorithm for the convex-hull
problem.
12. Consider the following small instance of the linear programming problem:

maximize 3x + 5y
subject to x + y ≤ 4
           x + 3y ≤ 6
           x ≥ 0, y ≥ 0.

a. Sketch, in the Cartesian plane, the problem's feasible region, defined as
the set of points satisfying all the problem's constraints.
b. Identify the region's extreme points.
c. Solve this optimization problem by using the following theorem: A linear
programming problem with a nonempty bounded feasible region always
has a solution, which can be found at one of the extreme points of its
feasible region.
3.4 Exhaustive Search
Many important problems require finding an element with a special property in a
domain that grows exponentially (or faster) with an instance size. Typically, such
problems arise in situations that involve, explicitly or implicitly, combinatorial
objects such as permutations, combinations, and subsets of a given set. Many such
problems are optimization problems: they ask to find an element that maximizes
or minimizes some desired characteristic such as a path length or an assignment
cost.

Exhaustive search is simply a brute-force approach to combinatorial prob-
lems. It suggests generating each and every element of the problem domain, se-
lecting those of them that satisfy all the constraints, and then finding a desired
element (e.g., the one that optimizes some objective function). Note that although
the idea of exhaustive search is quite straightforward, its implementation typically
requires an algorithm for generating certain combinatorial objects. We delay a dis-
cussion of such algorithms until the next chapter and assume here that they exist.
We illustrate exhaustive search by applying it to three important problems: the
traveling salesman problem, the knapsack problem, and the assignment problem.
Traveling Salesman Problem
The traveling salesman problem (TSP) has been intriguing researchers for the
last 150 years by its seemingly simple formulation, important applications, and
interesting connections to other combinatorial problems. In layman's terms, the
problem asks to find the shortest tour through a given set of n cities that visits each
city exactly once before returning to the city where it started. The problem can be
conveniently modeled by a weighted graph, with the graph's vertices representing
the cities and the edge weights specifying the distances. Then the problem can be
stated as the problem of finding the shortest Hamiltonian circuit of the graph. (A
Hamiltonian circuit is defined as a cycle that passes through all the vertices of the
graph exactly once. It is named after the Irish mathematician Sir William Rowan
Hamilton (1805–1865), who became interested in such cycles as an application of
his algebraic discoveries.)
It is easy to see that a Hamiltonian circuit can also be defined as a sequence of
n + 1 adjacent vertices v_{i_0}, v_{i_1}, . . . , v_{i_{n−1}}, v_{i_0}, where the first vertex of the sequence
is the same as the last one and all the other n − 1 vertices are distinct. Further,
we can assume, with no loss of generality, that all circuits start and end at one
particular vertex (they are cycles after all, are they not?). Thus, we can get all
the tours by generating all the permutations of n − 1 intermediate cities, compute
the tour lengths, and find the shortest among them. Figure 3.7 presents a small
instance of the problem and its solution by this method.
An inspection of Figure 3.7 reveals three pairs of tours that differ only by
their direction. Hence, we could cut the number of vertex permutations by half.
We could, for example, choose any two intermediate vertices, say, b and c, and then
consider only permutations in which b precedes c. (This trick implicitly defines a
tour's direction.)

This improvement cannot brighten the efficiency picture much, however.
The total number of permutations needed is still (n − 1)!/2, which makes the
exhaustive-search approach impractical for all but very small values of n. On the
other hand, if you always see your glass as half-full, you can claim that cutting
the work by half is nothing to sneeze at, even if you solve a small instance of the
problem, especially by hand. Also note that had we not limited our investigation
to the circuits starting at the same vertex, the number of permutations would have
been even larger, by a factor of n.
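A straightforward Python rendering of this method may make the procedure
concrete; this is only a sketch, assuming intercity distances are given by a
symmetric matrix dist, and the function name is ours.

from itertools import permutations

def tsp_exhaustive(dist):
    # dist: n-by-n symmetric matrix of intercity distances
    n = len(dist)
    best_tour, best_length = None, float('inf')
    # fix city 0 as the start/end and permute the n - 1 intermediate cities
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[k]][tour[k + 1]] for k in range(n))
        if length < best_length:
            best_tour, best_length = tour, length
    return best_tour, best_length

On the instance of Figure 3.7, this sketch examines all (n − 1)! = 6 tours; the
direction-halving trick described above is not applied here.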
Knapsack Problem
Here is another well-known problem in algorithmics. Given n items of known
weights w1, w2, . . . , wn and values v1, v2, . . . , vn and a knapsack of capacity W,
find the most valuable subset of the items that fit into the knapsack. If you do not
like the idea of putting yourself in the shoes of a thief who wants to steal the most
FIGURE 3.7 Solution to a small instance of the traveling salesman problem by exhaustive
search. The graph has four vertices a, b, c, d with edge weights ab = 2, ac = 5,
ad = 7, bc = 8, bd = 3, cd = 1.

Tour                           Length
a ---> b ---> c ---> d ---> a  l = 2 + 8 + 1 + 7 = 18
a ---> b ---> d ---> c ---> a  l = 2 + 3 + 1 + 5 = 11  optimal
a ---> c ---> b ---> d ---> a  l = 5 + 8 + 3 + 7 = 23
a ---> c ---> d ---> b ---> a  l = 5 + 1 + 3 + 2 = 11  optimal
a ---> d ---> b ---> c ---> a  l = 7 + 3 + 8 + 5 = 23
a ---> d ---> c ---> b ---> a  l = 7 + 1 + 8 + 2 = 18
valuable loot that fits into his knapsack, think about a transport plane that has to
deliver the most valuable set of items to a remote location without exceeding the
plane's capacity. Figure 3.8a presents a small instance of the knapsack problem.

The exhaustive-search approach to this problem leads to generating all the
subsets of the set of n items given, computing the total weight of each subset in
order to identify feasible subsets (i.e., the ones with the total weight not exceeding
the knapsack capacity), and finding a subset of the largest value among them. As
an example, the solution to the instance of Figure 3.8a is given in Figure 3.8b. Since
the number of subsets of an n-element set is 2^n, the exhaustive search leads to
an Ω(2^n) algorithm, no matter how efficiently individual subsets are generated.
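The following Python sketch spells out this exhaustive search; the representation
of subsets by index tuples and the function name are our own choices, not the
book's.

from itertools import combinations

def knapsack_exhaustive(weights, values, capacity):
    # try every subset of item indices and keep the most valuable feasible one
    n = len(weights)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) <= capacity:
                value = sum(values[i] for i in subset)
                if value > best_value:
                    best_value, best_subset = value, subset
    return best_subset, best_value

For the instance of Figure 3.8a, knapsack_exhaustive([7, 3, 4, 5],
[42, 12, 40, 25], 10) should return the subset of items 3 and 4 (0-based indices
(2, 3)) of value $65, in agreement with Figure 3.8b.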
Thus, for both the traveling salesman and knapsack problems considered
above, exhaustive search leads to algorithms that are extremely inefficient on
every input. In fact, these two problems are the best-known examples of so-
called NP-hard problems. No polynomial-time algorithm is known for any NP-
hard problem. Moreover, most computer scientists believe that such algorithms
do not exist, although this very important conjecture has never been proven.
More sophisticated approaches, backtracking and branch-and-bound (see Sec-
tions 12.1 and 12.2), enable us to solve some but not all instances of these and
(a) Instance: knapsack capacity W = 10;
item 1: w1 = 7, v1 = $42; item 2: w2 = 3, v2 = $12;
item 3: w3 = 4, v3 = $40; item 4: w4 = 5, v4 = $25.

(b)
Subset          Total weight    Total value
none            0               $ 0
{1}             7               $42
{2}             3               $12
{3}             4               $40
{4}             5               $25
{1, 2}          10              $54
{1, 3}          11              not feasible
{1, 4}          12              not feasible
{2, 3}          7               $52
{2, 4}          8               $37
{3, 4}          9               $65
{1, 2, 3}       14              not feasible
{1, 2, 4}       15              not feasible
{1, 3, 4}       16              not feasible
{2, 3, 4}       12              not feasible
{1, 2, 3, 4}    19              not feasible

FIGURE 3.8 (a) Instance of the knapsack problem. (b) Its solution by exhaustive search.
The information about the optimal selection (the subset {3, 4} of value $65) is in bold.
similar problems in less than exponential time. Alternatively, we can use one of
many approximation algorithms, such as those described in Section 12.3.
Assignment Problem
In our third example of a problem that can be solved by exhaustive search, there
are n people who need to be assigned to execute n jobs, one person per job. (That
is, each person is assigned to exactly one job and each job is assigned to exactly
one person.) The cost that would accrue if the ith person is assigned to the jth job
is a known quantity C[i, j] for each pair i, j = 1, 2, . . . , n. The problem is to find
an assignment with the minimum total cost.
A small instance of this problem follows, with the table entries representing
the assignment costs C[i, j]:
Job 1 Job 2 Job 3 Job 4
Person 1 9 2 7 8
Person 2 6 4 3 7
Person 3 5 8 1 8
Person 4 7 6 9 4
It is easy to see that an instance of the assignment problem is completely
specified by its cost matrix C. In terms of this matrix, the problem is to select one
element in each row of the matrix so that all selected elements are in different
columns and the total sum of the selected elements is the smallest possible. Note
that no obvious strategy for finding a solution works here. For example, we cannot
select the smallest element in each row, because the smallest elements may happen
to be in the same column. In fact, the smallest element in the entire matrix need
not be a component of an optimal solution. Thus, opting for the exhaustive search
may appear as an unavoidable evil.
We can describe feasible solutions to the assignment problem as n-tuples
⟨j1, . . . , jn⟩ in which the ith component, i = 1, . . . , n, indicates the column of the
element selected in the ith row (i.e., the job number assigned to the ith person).
For example, for the cost matrix above, ⟨2, 3, 4, 1⟩ indicates the assignment of
Person 1 to Job 2, Person 2 to Job 3, Person 3 to Job 4, and Person 4 to Job 1.
The requirements of the assignment problem imply that there is a one-to-one
correspondence between feasible assignments and permutations of the first n
integers. Therefore, the exhaustive-search approach to the assignment problem
would require generating all the permutations of integers 1, 2, . . . , n, computing
the total cost of each assignment by summing up the corresponding elements of
the cost matrix, and finally selecting the one with the smallest sum. The first few
iterations of applying this algorithm to the instance given above are shown in
Figure 3.9; you are asked to complete it in the exercises.
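In Python, this exhaustive search takes only a few lines. The sketch below (names
ours) assumes the cost matrix is given as a list of rows and reports jobs by 0-based
index.

from itertools import permutations

def assignment_exhaustive(cost):
    # generate every permutation of jobs and keep the cheapest assignment
    n = len(cost)
    best_perm, best_cost = None, float('inf')
    for perm in permutations(range(n)):
        total = sum(cost[person][perm[person]] for person in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

For the cost matrix above, this should find the assignment ⟨2, 1, 3, 4⟩ (in the
text's 1-based notation) of total cost 2 + 6 + 1 + 4 = 13.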
    | 9  2  7  8 |
C = | 6  4  3  7 |
    | 5  8  1  8 |
    | 7  6  9  4 |

<1, 2, 3, 4>  cost = 9 + 4 + 1 + 4 = 18
<1, 2, 4, 3>  cost = 9 + 4 + 8 + 9 = 30
<1, 3, 2, 4>  cost = 9 + 3 + 8 + 4 = 24
<1, 3, 4, 2>  cost = 9 + 3 + 8 + 6 = 26
<1, 4, 2, 3>  cost = 9 + 7 + 8 + 9 = 33
<1, 4, 3, 2>  cost = 9 + 7 + 1 + 6 = 23
etc.

FIGURE 3.9 First few iterations of solving a small instance of the assignment problem
by exhaustive search.
Since the number of permutations to be considered for the general case of the
assignment problem is n!, exhaustive search is impractical for all but very small
instances of the problem. Fortunately, there is a much more efficient algorithm for
this problem called the Hungarian method after the Hungarian mathematicians
König and Egerváry, whose work underlies the method (see, e.g., [Kol95]).

This is good news: the fact that a problem domain grows exponentially or
faster does not necessarily imply that there can be no efficient algorithm for solving
it. In fact, we present several other examples of such problems later in the book.
However, such examples are more of an exception to the rule. More often than
not, there are no known polynomial-time algorithms for problems whose domain
grows exponentially with instance size, provided we want to solve them exactly.
And, as we mentioned above, such algorithms quite possibly do not exist.
Exercises 3.4
1. a. Assuming that each tour can be generated in constant time, what will be
the efficiency class of the exhaustive-search algorithm outlined in the text
for the traveling salesman problem?
b. If this algorithm is programmed on a computer that makes ten billion
additions per second, estimate the maximum number of cities for which
the problem can be solved in
i. 1 hour. ii. 24 hours. iii. 1 year. iv. 1 century.
2. Outline an exhaustive-search algorithm for the Hamiltonian circuit problem.
3. Outline an algorithm to determine whether a connected graph represented
by its adjacency matrix has an Eulerian circuit. What is the efficiency class of
your algorithm?
4. Complete the application of exhaustive search to the instance of the assign-
ment problem started in the text.
5. Give an example of the assignment problem whose optimal solution does not
include the smallest element of its cost matrix.
6. Consider the partition problem: given n positive integers, partition them into
two disjoint subsets with the same sum of their elements. (Of course, the prob-
lem does not always have a solution.) Design an exhaustive-search algorithm
for this problem. Try to minimize the number of subsets the algorithm needs
to generate.
7. Consider the clique problem: given a graph G and a positive integer k, deter-
mine whether the graph contains a clique of size k, i.e., a complete subgraph
of k vertices. Design an exhaustive-search algorithm for this problem.
8. Explain how exhaustive search can be applied to the sorting problem and
determine the efficiency class of such an algorithm.
9. Eight-queens problem Consider the classic puzzle of placing eight queens on
an 8 × 8 chessboard so that no two queens are in the same row or in the same
column or on the same diagonal. How many different positions are there so
that
a. no two queens are on the same square?
b. no two queens are in the same row?
c. no two queens are in the same row or in the same column?
Also estimate how long it would take to find all the solutions to the problem by
exhaustive search based on each of these approaches on a computer capable
of checking 10 billion positions per second.
10. Magic squares A magic square of order n is an arrangement of the integers
from 1 to n² in an n × n matrix, with each number occurring exactly once, so
that each row, each column, and each main diagonal has the same sum.
a. Prove that if a magic square of order n exists, the sum in question must be
equal to n(n² + 1)/2.
b. Design an exhaustive-search algorithm for generating all magic squares of
order n.
c. Go to the Internet or your library and find a better algorithm for generating
magic squares.
d. Implement the two algorithms, the exhaustive search and the one you
have found, and run an experiment to determine the largest value of n
for which each of the algorithms is able to find a magic square of order n
in less than 1 minute on your computer.
11. Famous alphametic A puzzle in which the digits in a correct mathematical
expression, such as a sum, are replaced by letters is called a cryptarithm; if, in
addition, the puzzle's words make sense, it is said to be an alphametic. The
most well-known alphametic was published by the renowned British puzzlist
Henry E. Dudeney (1857–1930):

      S E N D
    + M O R E
    ---------
    M O N E Y

Two conditions are assumed: first, the correspondence between letters and
decimal digits is one-to-one, i.e., each letter represents one digit only and dif-
ferent letters represent different digits. Second, the digit zero does not appear
as the left-most digit in any of the numbers. To solve an alphametic means
to find which digit each letter represents. Note that a solution's uniqueness
cannot be assumed and has to be verified by the solver.
a. Write a program for solving cryptarithms by exhaustive search. Assume
that a given cryptarithm is a sum of two words.
b. Solve Dudeney's puzzle the way it was expected to be solved when it was
first published in 1924.
3.5 Depth-First Search and Breadth-First Search
The term exhaustive search can also be applied to two very important algorithms
that systematically process all vertices and edges of a graph. These two traversal
algorithms are depth-first search (DFS) and breadth-first search (BFS). These
algorithms have proved to be very useful for many applications involving graphs in
artificial intelligence and operations research. In addition, they are indispensable
for efficient investigation of fundamental properties of graphs such as connectivity
and cycle presence.
Depth-First Search
Depth-first search starts a graph's traversal at an arbitrary vertex by marking it
as visited. On each iteration, the algorithm proceeds to an unvisited vertex that
is adjacent to the one it is currently in. (If there are several such vertices, a tie
can be resolved arbitrarily. As a practical matter, which of the adjacent unvisited
candidates is chosen is dictated by the data structure representing the graph. In
our examples, we always break ties by the alphabetical order of the vertices.) This
process continues until a dead end, a vertex with no adjacent unvisited vertices,
is encountered. At a dead end, the algorithm backs up one edge to the vertex
it came from and tries to continue visiting unvisited vertices from there. The
algorithm eventually halts after backing up to the starting vertex, with the latter
being a dead end. By then, all the vertices in the same connected component as the
starting vertex have been visited. If unvisited vertices still remain, the depth-first
search must be restarted at any one of them.
It is convenient to use a stack to trace the operation of depth-first search. We
push a vertex onto the stack when the vertex is reached for the first time (i.e., the
[In part (b), the vertices carry the subscript pairs a 1,6; c 2,5; d 3,1; f 4,4;
b 5,3; e 6,2; g 7,10; h 8,9; i 9,8; j 10,7.]

FIGURE 3.10 Example of a DFS traversal. (a) Graph. (b) Traversal's stack (the first
subscript number indicates the order in which a vertex is visited, i.e.,
pushed onto the stack; the second one indicates the order in which it
becomes a dead-end, i.e., popped off the stack). (c) DFS forest with the
tree and back edges shown with solid and dashed lines, respectively.
visit of the vertex starts), and we pop a vertex off the stack when it becomes a
dead end (i.e., the visit of the vertex ends).
It is also very useful to accompany a depth-first search traversal by construct-
ing the so-called depth-first search forest. The starting vertex of the traversal
serves as the root of the first tree in such a forest. Whenever a new unvisited vertex
is reached for the first time, it is attached as a child to the vertex from which it is
being reached. Such an edge is called a tree edge because the set of all such edges
forms a forest. The algorithm may also encounter an edge leading to a previously
visited vertex other than its immediate predecessor (i.e., its parent in the tree).
Such an edge is called a back edge because it connects a vertex to its ancestor,
other than the parent, in the depth-first search forest. Figure 3.10 provides an ex-
ample of a depth-first search traversal, with the traversal stack and corresponding
depth-first search forest shown as well.
Here is pseudocode of the depth-first search.

ALGORITHM DFS(G)
    //Implements a depth-first search traversal of a given graph
    //Input: Graph G = ⟨V, E⟩
    //Output: Graph G with its vertices marked with consecutive integers
    //        in the order they are first encountered by the DFS traversal
    mark each vertex in V with 0 as a mark of being "unvisited"
    count ← 0
    for each vertex v in V do
        if v is marked with 0
            dfs(v)

dfs(v)
    //visits recursively all the unvisited vertices connected to vertex v
    //by a path and numbers them in the order they are encountered
    //via global variable count
    count ← count + 1; mark v with count
    for each vertex w in V adjacent to v do
        if w is marked with 0
            dfs(w)
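For readers who prefer running code, here is a minimal Python counterpart of
this pseudocode; the adjacency-list dictionary representation and the names are
our own choices.

def dfs_numbering(adj):
    # adj: dict mapping each vertex to the list of its adjacent vertices
    order = {}   # vertex -> consecutive integer in order of first encounter
    count = 0
    def dfs(v):
        nonlocal count
        count += 1
        order[v] = count
        for w in adj[v]:
            if w not in order:   # w is still "marked with 0"
                dfs(w)
    for v in adj:
        if v not in order:
            dfs(v)
    return order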
The brevity of the DFS pseudocode and the ease with which it can be per-
formed by hand may create a wrong impression about the level of sophistication
of this algorithm. To appreciate its true power and depth, you should trace the
algorithm's action by looking not at a graph's diagram but at its adjacency matrix
or adjacency lists. (Try it for the graph in Figure 3.10 or a smaller example.)

How efficient is depth-first search? It is not difficult to see that this algorithm
is, in fact, quite efficient since it takes just the time proportional to the size of the
data structure used for representing the graph in question. Thus, for the adjacency
matrix representation, the traversal time is in Θ(|V|²), and for the adjacency list
representation, it is in Θ(|V| + |E|), where |V| and |E| are the number of the
graph's vertices and edges, respectively.
A DFS forest, which is obtained as a by-product of a DFS traversal, deserves a
few comments, too. To begin with, it is not actually a forest. Rather, we can look at
it as the given graph with its edges classified by the DFS traversal into two disjoint
classes: tree edges and back edges. (No other types are possible for a DFS forest
of an undirected graph.) Again, tree edges are edges used by the DFS traversal to
reach previously unvisited vertices. If we consider only the edges in this class, we
will indeed get a forest. Back edges connect vertices to previously visited vertices
other than their immediate predecessors in the traversal. They connect vertices to
their ancestors in the forest other than their parents.

A DFS traversal itself and the forest-like representation of the graph it pro-
vides have proved to be extremely helpful for the development of efficient al-
gorithms for checking many important properties of graphs.³ Note that the DFS
yields two orderings of vertices: the order in which the vertices are reached for the
first time (pushed onto the stack) and the order in which the vertices become dead
ends (popped off the stack). These orders are qualitatively different, and various
applications can take advantage of either of them.
Important elementary applications of DFS include checking connectivity and
checking acyclicity of a graph. Since dfs halts after visiting all the vertices con-
nected by a path to the starting vertex, checking a graph's connectivity can be
done as follows. Start a DFS traversal at an arbitrary vertex and check, after
the algorithm halts, whether all the vertices of the graph will have been vis-
ited. If they have, the graph is connected; otherwise, it is not connected. More
generally, we can use DFS for identifying connected components of a graph
(how?).

3. The discovery of several such applications was an important breakthrough achieved by the two
American computer scientists John Hopcroft and Robert Tarjan in the 1970s. For this and other
contributions, they were given the Turing Award, the most prestigious prize in the computing field
[Hop87, Tar87].
As for checking for a cycle presence in a graph, we can take advantage of the
graph's representation in the form of a DFS forest. If the latter does not have back
edges, the graph is clearly acyclic. If there is a back edge from some vertex u to its
ancestor v (e.g., the back edge from d to a in Figure 3.10c), the graph has a cycle
that comprises the path from v to u via a sequence of tree edges in the DFS forest
followed by the back edge from u to v.

You will find a few other applications of DFS later in the book, although
more sophisticated applications, such as finding articulation points of a graph,
are not included. (A vertex of a connected graph is said to be its articulation
point if its removal with all edges incident to it breaks the graph into disjoint
pieces.)
Breadth-First Search
If depth-first search is a traversal for the brave (the algorithm goes as far from
home as it can), breadth-first search is a traversal for the cautious. It proceeds in
a concentric manner by visiting first all the vertices that are adjacent to a starting
vertex, then all unvisited vertices two edges apart from it, and so on, until all
the vertices in the same connected component as the starting vertex are visited.
If there still remain unvisited vertices, the algorithm has to be restarted at an
arbitrary vertex of another connected component of the graph.
It is convenient to use a queue (note the difference from depth-first search!)
to trace the operation of breadth-first search. The queue is initialized with the
traversal's starting vertex, which is marked as visited. On each iteration, the
algorithm identifies all unvisited vertices that are adjacent to the front vertex,
marks them as visited, and adds them to the queue; after that, the front vertex is
removed from the queue.
Similarly to a DFS traversal, it is useful to accompany a BFS traversal by con-
structing the so-called breadth-first search forest. The traversal's starting vertex
serves as the root of the first tree in such a forest. Whenever a new unvisited vertex
is reached for the first time, the vertex is attached as a child to the vertex it is being
reached from with an edge called a tree edge. If an edge leading to a previously
visited vertex other than its immediate predecessor (i.e., its parent in the tree)
is encountered, the edge is noted as a cross edge. Figure 3.11 provides an exam-
ple of a breadth-first search traversal, with the traversal queue and corresponding
breadth-first search forest shown.
[In part (b), the vertices enter the queue in the order a1, c2, d3, e4, f5, b6,
g7, h8, j9, i10.]

FIGURE 3.11 Example of a BFS traversal. (a) Graph. (b) Traversal queue, with the
numbers indicating the order in which the vertices are visited, i.e., added
to (and removed from) the queue. (c) BFS forest with the tree and cross
edges shown with solid and dotted lines, respectively.
Here is pseudocode of the breadth-first search.

ALGORITHM BFS(G)
    //Implements a breadth-first search traversal of a given graph
    //Input: Graph G = ⟨V, E⟩
    //Output: Graph G with its vertices marked with consecutive integers
    //        in the order they are visited by the BFS traversal
    mark each vertex in V with 0 as a mark of being "unvisited"
    count ← 0
    for each vertex v in V do
        if v is marked with 0
            bfs(v)

bfs(v)
    //visits all the unvisited vertices connected to vertex v
    //by a path and numbers them in the order they are visited
    //via global variable count
    count ← count + 1; mark v with count and initialize a queue with v
    while the queue is not empty do
        for each vertex w in V adjacent to the front vertex do
            if w is marked with 0
                count ← count + 1; mark w with count
                add w to the queue
        remove the front vertex from the queue
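Again, a minimal Python counterpart may be helpful; as before, the
adjacency-list dictionary representation and the names are our own.

from collections import deque

def bfs_numbering(adj):
    # adj: dict mapping each vertex to the list of its adjacent vertices
    order = {}   # vertex -> consecutive integer in order of visiting
    count = 0
    for start in adj:
        if start in order:
            continue
        count += 1
        order[start] = count
        queue = deque([start])
        while queue:
            for w in adj[queue[0]]:   # neighbors of the front vertex
                if w not in order:
                    count += 1
                    order[w] = count
                    queue.append(w)
            queue.popleft()           # remove the front vertex
    return order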
FIGURE 3.12 Illustration of the BFS-based algorithm for finding a minimum-edge path.
(a) Graph. (b) Part of its BFS tree that identifies the minimum-edge path
from a to g.
Breadth-first search has the same efficiency as depth-first search: it is in
Θ(|V|²) for the adjacency matrix representation and in Θ(|V| + |E|) for the adja-
cency list representation. Unlike depth-first search, it yields a single ordering of
vertices because the queue is a FIFO (first-in first-out) structure and hence the
order in which vertices are added to the queue is the same order in which they
are removed from it. As to the structure of a BFS forest of an undirected graph,
it can also have two kinds of edges: tree edges and cross edges. Tree edges are the
ones used to reach previously unvisited vertices. Cross edges connect vertices to
those visited before, but, unlike back edges in a DFS tree, they connect vertices
either on the same or adjacent levels of a BFS tree.
BFS can be used to check connectivity and acyclicity of a graph, essentially
in the same manner as DFS can. It is not applicable, however, for several less
straightforward applications such as finding articulation points. On the other hand,
it can be helpful in some situations where DFS cannot. For example, BFS can
be used for finding a path with the fewest number of edges between two given
vertices. To do this, we start a BFS traversal at one of the two vertices and stop
it as soon as the other vertex is reached. The simple path from the root of the
BFS tree to the second vertex is the path sought. For example, path a-b-c-g
in the graph in Figure 3.12 has the fewest number of edges among all the paths
between vertices a and g. Although the correctness of this application appears to
stem immediately from the way BFS operates, a mathematical proof of its validity
is not quite elementary (see, e.g., [Cor09, Section 22.2]).
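A Python sketch of this minimum-edge-path application follows; recording
parent links is one standard way to recover the path, and the names here are
our own.

from collections import deque

def fewest_edges_path(adj, start, goal):
    # BFS from start; parent links let us rebuild a fewest-edges path
    parent = {start: None}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        if v == goal:                 # stop as soon as the goal is reached
            path = []
            while v is not None:
                path.append(v)
                v = parent[v]
            return path[::-1]
        for w in adj[v]:
            if w not in parent:
                parent[w] = v
                queue.append(w)
    return None                       # goal is unreachable from start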
Table 3.1 summarizes the main facts about depth-first search and breadth-first
search.
TABLE 3.1 Main facts about depth-first search (DFS)
and breadth-first search (BFS)

                                   DFS                   BFS
Data structure                     a stack               a queue
Number of vertex orderings         two orderings         one ordering
Edge types (undirected graphs)     tree and back edges   tree and cross edges
Applications                       connectivity,         connectivity,
                                   acyclicity,           acyclicity,
                                   articulation points   minimum-edge paths
Efficiency for adjacency matrix    Θ(|V|²)               Θ(|V|²)
Efficiency for adjacency lists     Θ(|V| + |E|)          Θ(|V| + |E|)
Exercises 3.5
1. Consider the following graph. [Figure: an undirected graph on the vertices
a, b, c, d, e, f, g.]
a. Write down the adjacency matrix and adjacency lists specifying this graph.
(Assume that the matrix rows and columns and vertices in the adjacency
lists follow in the alphabetical order of the vertex labels.)
b. Starting at vertex a and resolving ties by the vertex alphabetical order,
traverse the graph by depth-first search and construct the corresponding
depth-first search tree. Give the order in which the vertices were reached
for the first time (pushed onto the traversal stack) and the order in which
the vertices became dead ends (popped off the stack).
2. If we define sparse graphs as graphs for which |E| ∈ O(|V|), which implemen-
tation of DFS will have a better time efficiency for such graphs, the one that
uses the adjacency matrix or the one that uses the adjacency lists?
3. Let G be a graph with n vertices and m edges.
a. True or false: All its DFS forests (for traversals starting at different ver-
tices) will have the same number of trees?
b. True or false: All its DFS forests will have the same number of tree edges
and the same number of back edges?
4. Traverse the graph of Problem 1 by breadth-first search and construct the
corresponding breadth-first search tree. Start the traversal at vertex a and
resolve ties by the vertex alphabetical order.
5. Prove that a cross edge in a BFS tree of an undirected graph can connect
vertices only on either the same level or on two adjacent levels of a BFS tree.
6. a. Explain how one can check a graph's acyclicity by using breadth-first
search.
b. Does either of the two traversals, DFS or BFS, always find a cycle faster
than the other? If you answer yes, indicate which of them is better and
explain why it is the case; if you answer no, give two examples supporting
your answer.
7. Explain how one can identify connected components of a graph by using
a. a depth-first search.
b. a breadth-first search.
8. A graph is said to be bipartite if all its vertices can be partitioned into two
disjoint subsets X and Y so that every edge connects a vertex in X with a vertex
in Y. (One can also say that a graph is bipartite if its vertices can be colored in
two colors so that every edge has its vertices colored in different colors; such
graphs are also called 2-colorable.) For example, graph (i) is bipartite while
graph (ii) is not. [Figure: graph (i) is drawn on the vertices x1, x2, x3 and
y1, y2, y3; graph (ii) on the vertices a, b, c, d.]
a. Design a DFS-based algorithm for checking whether a graph is bipartite.
b. Design a BFS-based algorithm for checking whether a graph is bipartite.
9. Write a program that, for a given graph, outputs:
a. vertices of each connected component
b. its cycle or a message that the graph is acyclic
10. One can model a maze by having a vertex for a starting point, a finishing point,
dead ends, and all the points in the maze where more than one path can be
taken, and then connecting the vertices according to the paths in the maze.
a. Construct such a graph for the following maze.
b. Which traversal, DFS or BFS, would you use if you found yourself in a
maze and why?
11. Three Jugs Siméon-Denis Poisson (1781–1840), a famous French mathemati-
cian and physicist, is said to have become interested in mathematics after
encountering some version of the following old puzzle. Given an 8-pint jug
full of water and two empty jugs of 5- and 3-pint capacity, get exactly 4 pints
of water in one of the jugs by completely filling up and/or emptying jugs into
others. Solve this puzzle by using breadth-first search.
SUMMARY
Brute force is a straightforward approach to solving a problem, usually directly
based on the problem statement and definitions of the concepts involved.

The principal strengths of the brute-force approach are wide applicability and
simplicity; its principal weakness is the subpar efficiency of most brute-force
algorithms.

A first application of the brute-force approach often results in an algorithm
that can be improved with a modest amount of effort.

The following noted algorithms can be considered as examples of the brute-
force approach:
. definition-based algorithm for matrix multiplication
. selection sort
. sequential search
. straightforward string-matching algorithm

Exhaustive search is a brute-force approach to combinatorial problems. It
suggests generating each and every combinatorial object of the problem,
selecting those of them that satisfy all the constraints, and then finding a
desired object.
The traveling salesman problem, the knapsack problem, and the assignment
problem are typical examples of problems that can be solved, at least
theoretically, by exhaustive-search algorithms.
Exhaustive search is impractical for all but very small instances of problems
it can be applied to.
Depth-first search (DFS) and breadth-first search (BFS) are two principal
graph-traversal algorithms. By representing a graph in a form of a depth-first
or breadth-first search forest, they help in the investigation of many important
properties of the graph. Both algorithms have the same time efficiency:
Θ(|V|²) for the adjacency matrix representation and Θ(|V| + |E|) for the
adjacency list representation.
4
Decrease-and-Conquer
Plutarch says that Sertorius, in order to teach his soldiers that perseverance
and wit are better than brute force, had two horses brought before them,
and set two men to pull out their tails. One of the men was a burly Hercules,
who tugged and tugged, but all to no purpose; the other was a sharp, weasel-
faced tailor, who plucked one hair at a time, amidst roars of laughter, and
soon left the tail quite bare.
E. Cobham Brewer, Dictionary of Phrase and Fable, 1898
The decrease-and-conquer technique is based on exploiting the relationship
between a solution to a given instance of a problem and a solution to its
smaller instance. Once such a relationship is established, it can be exploited either
top down or bottom up. The former leads naturally to a recursive implementa-
tion, although, as one can see from several examples in this chapter, an ultimate
implementation may well be nonrecursive. The bottom-up variation is usually
implemented iteratively, starting with a solution to the smallest instance of the
problem; it is called sometimes the incremental approach.
There are three major variations of decrease-and-conquer:
decrease by a constant
decrease by a constant factor
variable size decrease
In the decrease-by-a-constant variation, the size of an instance is reduced
by the same constant on each iteration of the algorithm. Typically, this constant
is equal to one (Figure 4.1), although other constant size reductions do happen
occasionally.
Consider, as an example, the exponentiation problem of computing a^n where
a ≠ 0 and n is a nonnegative integer. The relationship between a solution to an
instance of size n and an instance of size n − 1 is obtained by the obvious formula
a^n = a^(n−1) · a. So the function f(n) = a^n can be computed either top down by
using its recursive definition
FIGURE 4.1 Decrease-(by one)-and-conquer technique: a problem of size n is reduced
to a subproblem of size n − 1, whose solution is extended to a solution to
the original problem.
f(n) = f(n − 1) · a   if n > 0,
f(n) = 1              if n = 0,        (4.1)
or bottom up by multiplying 1 by a n times. (Yes, it is the same as the brute-force
algorithm, but we have come to it by a different thought process.) More interesting
examples of decrease-by-one algorithms appear in Sections 4.1-4.3.

The decrease-by-a-constant-factor technique suggests reducing a problem
instance by the same constant factor on each iteration of the algorithm. In most
applications, this constant factor is equal to two. (Can you give an example of such
an algorithm?) The decrease-by-half idea is illustrated in Figure 4.2.
For an example, let us revisit the exponentiation problem. If the instance of
size n is to compute a^n, the instance of half its size is to compute a^(n/2), with the
obvious relationship between the two: a^n = (a^(n/2))². But since we consider here
instances with integer exponents only, the former does not work for odd n. If n is
odd, we have to compute a^(n−1) by using the rule for even-valued exponents and
then multiply the result by a. To summarize, we have the following formula:
FIGURE 4.2 Decrease-(by half)-and-conquer technique: a problem of size n is reduced
to a subproblem of size n/2, whose solution is extended to a solution to
the original problem.
a^n = (a^(n/2))²           if n is even and positive,
a^n = (a^((n−1)/2))² · a   if n is odd,                   (4.2)
a^n = 1                    if n = 0.
If we compute a^n recursively according to formula (4.2) and measure the algo-
rithm's efficiency by the number of multiplications, we should expect the algorithm
to be in Θ(log n) because, on each iteration, the size is reduced by about a half at
the expense of one or two multiplications.
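A direct recursive Python rendering of formula (4.2), as a quick sketch (the
function name is ours):

def power(a, n):
    # computes a^n by decrease-by-half, following formula (4.2)
    if n == 0:
        return 1
    half = power(a, n // 2)   # a^(n/2) for even n, a^((n-1)/2) for odd n
    return half * half if n % 2 == 0 else half * half * a

Each call halves the exponent at the cost of one or two multiplications, which is
exactly what yields the Θ(log n) multiplication count noted above.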
A few other examples of decrease-by-a-constant-factor algorithms are given
in Section 4.4 and its exercises. Such algorithms are so efficient, however, that
there are few examples of this kind.
Finally, in the variable-size-decrease variety of decrease-and-conquer, the
size-reduction pattern varies from one iteration of an algorithm to another. Eu-
clid's algorithm for computing the greatest common divisor provides a good ex-
ample of such a situation. Recall that this algorithm is based on the formula

gcd(m, n) = gcd(n, m mod n).

Though the value of the second argument is always smaller on the right-hand side
than on the left-hand side, it decreases neither by a constant nor by a constant
factor. A few other examples of such algorithms appear in Section 4.5.
4.1 Insertion Sort
In this section, we consider an application of the decrease-by-one technique to
sorting an array A[0..n − 1]. Following the technique's idea, we assume that the
smaller problem of sorting the array A[0..n − 2] has already been solved to give
us a sorted array of size n − 1: A[0] ≤ . . . ≤ A[n − 2]. How can we take advantage
of this solution to the smaller problem to get a solution to the original problem
by taking into account the element A[n − 1]? Obviously, all we need is to find an
appropriate position for A[n − 1] among the sorted elements and insert it there.
This is usually done by scanning the sorted subarray from right to left until the
first element smaller than or equal to A[n − 1] is encountered to insert A[n − 1]
right after that element. The resulting algorithm is called straight insertion sort
or simply insertion sort.

Though insertion sort is clearly based on a recursive idea, it is more efficient
to implement this algorithm bottom up, i.e., iteratively. As shown in Figure 4.3,
starting with A[1] and ending with A[n − 1], A[i] is inserted in its appropriate place
among the first i elements of the array that have been already sorted (but, unlike
selection sort, are generally not in their final positions).
Here is pseudocode of this algorithm.

ALGORITHM InsertionSort(A[0..n − 1])
    //Sorts a given array by insertion sort
    //Input: An array A[0..n − 1] of n orderable elements
    //Output: Array A[0..n − 1] sorted in nondecreasing order
    for i ← 1 to n − 1 do
        v ← A[i]
        j ← i − 1
        while j ≥ 0 and A[j] > v do
            A[j + 1] ← A[j]
            j ← j − 1
        A[j + 1] ← v
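The same algorithm in Python, as a direct transcription of the pseudocode (a
sketch, with the function name ours):

def insertion_sort(a):
    # sorts list a in place in nondecreasing order
    for i in range(1, len(a)):
        v = a[i]
        j = i - 1
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]   # shift the larger element one position right
            j -= 1
        a[j + 1] = v          # insert v right after the first element <= v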
FIGURE 4.3 Iteration of insertion sort: A[i] is inserted in its proper position among the
preceding elements previously sorted (the elements smaller than or equal
to A[i] end up to its left, those greater than A[i] to its right).
89 | 45  68  90  29  34  17
45  89 | 68  90  29  34  17
45  68  89 | 90  29  34  17
45  68  89  90 | 29  34  17
29  45  68  89  90 | 34  17
29  34  45  68  89  90 | 17
17  29  34  45  68  89  90

FIGURE 4.4 Example of sorting with insertion sort. A vertical bar separates the sorted
part of the array from the remaining elements; the element being inserted
is in bold.
The operation of the algorithm is illustrated in Figure 4.4.
The basic operation of the algorithm is the key comparison A[j] > v. (Why not
j ≥ 0? Because it is almost certainly faster than the former in an actual computer
implementation. Moreover, it is not germane to the algorithm: a better imple-
mentation with a sentinel (see Problem 8 in this section's exercises) eliminates
it altogether.)
The number of key comparisons in this algorithm obviously depends on the
nature of the input. In the worst case, A[j] > v is executed the largest number
of times, i.e., for every j = i − 1, . . . , 0. Since v = A[i], it happens if and only if
A[j] > A[i] for j = i − 1, . . . , 0. (Note that we are using the fact that on the ith
iteration of insertion sort all the elements preceding A[i] are the first i elements in
the input, albeit in the sorted order.) Thus, for the worst-case input, we get A[0] >
A[1] (for i = 1), A[1] > A[2] (for i = 2), . . . , A[n − 2] > A[n − 1] (for i = n − 1).
In other words, the worst-case input is an array of strictly decreasing values. The
number of key comparisons for such an input is

C_worst(n) = Σ(i=1..n−1) Σ(j=0..i−1) 1 = Σ(i=1..n−1) i = (n − 1)n/2 ∈ Θ(n²).
Thus, in the worst case, insertion sort makes exactly the same number of compar-
isons as selection sort (see Section 3.1).

In the best case, the comparison A[j] > v is executed only once on every
iteration of the outer loop. It happens if and only if A[i − 1] ≤ A[i] for every
i = 1, . . . , n − 1, i.e., if the input array is already sorted in nondecreasing order.
(Though it makes sense that the best case of an algorithm happens when the
problem is already solved, it is not always the case, as you are going to see in our
discussion of quicksort in Chapter 5.) Thus, for sorted arrays, the number of key
comparisons is

C_best(n) = Σ(i=1..n−1) 1 = n − 1 ∈ Θ(n).
This very good performance in the best case of sorted arrays is not very useful by
itself, because we cannot expect such convenient inputs. However, almost-sorted
files do arise in a variety of applications, and insertion sort preserves its excellent
performance on such inputs.

A rigorous analysis of the algorithm's average-case efficiency is based on
investigating the number of element pairs that are out of order (see Problem 11 in
this section's exercises). It shows that on randomly ordered arrays, insertion sort
makes on average half as many comparisons as on decreasing arrays, i.e.,

C_avg(n) ≈ n²/4 ∈ Θ(n²).

This twice-as-fast average-case performance coupled with an excellent efficiency
on almost-sorted arrays makes insertion sort stand out among its principal com-
petitors among elementary sorting algorithms, selection sort and bubble sort. In
addition, its extension named shellsort, after its inventor D. L. Shell [She59], gives
us an even better algorithm for sorting moderately large files (see Problem 12 in
this section's exercises).
Exercises 4.1
1. Ferrying soldiers A detachment of n soldiers must cross a wide and deep
river with no bridge in sight. They notice two 12-year-old boys playing in a
rowboat by the shore. The boat is so tiny, however, that it can only hold two
boys or one soldier. How can the soldiers get across the river and leave the
boys in joint possession of the boat? How many times need the boat pass from
shore to shore?
2. Alternating glasses
a. There are 2n glasses standing next to each other in a row, the first n of them
filled with a soda drink and the remaining n glasses empty. Make the glasses
alternate in a filled-empty-filled-empty pattern in the minimum number of
glass moves. [Gar78]
b. Solve the same problem if 2n glasses, n with a drink and n empty, are
initially in a random order.
3. Marking cells Design an algorithm for the following task. For any even n,
mark n cells on an infinite sheet of graph paper so that each marked cell has an
odd number of marked neighbors. Two cells are considered neighbors if they
are next to each other either horizontally or vertically but not diagonally. The
marked cells must form a contiguous region, i.e., a region in which there is a
path between any pair of marked cells that goes through a sequence of marked
neighbors. [Kor05]
4. Design a decrease-by-one algorithm for generating the power set of a set of n
elements. (The power set of a set S is the set of all the subsets of S, including
the empty set and S itself.)
5. Consider the following algorithm to check connectivity of a graph defined by
its adjacency matrix.

ALGORITHM Connected(A[0..n − 1, 0..n − 1])
    //Input: Adjacency matrix A[0..n − 1, 0..n − 1] of an undirected graph G
    //Output: 1 (true) if G is connected and 0 (false) if it is not
    if n = 1 return 1 //one-vertex graph is connected by definition
    else
        if not Connected(A[0..n − 2, 0..n − 2]) return 0
        else for j ← 0 to n − 2 do
            if A[n − 1, j] return 1
        return 0

Does this algorithm work correctly for every undirected graph with n > 0
vertices? If you answer yes, indicate the algorithm's efficiency class in the
worst case; if you answer no, explain why.
6. Team ordering You have the results of a completed round-robin tournament
in which n teams played each other once. Each game ended either with a
victory for one of the teams or with a tie. Design an algorithm that lists the
teams in a sequence so that every team did not lose the game with the team
listed immediately after it. What is the time efficiency class of your algorithm?
7. Apply insertion sort to sort the list E, X, A, M, P, L, E in alphabetical order.
8. a. What sentinel should be put before the first element of an array being
sorted in order to avoid checking the in-bound condition j ≥ 0 on each
iteration of the inner loop of insertion sort?
b. Is the sentinel version in the same efficiency class as the original version?
9. Is it possible to implement insertion sort for sorting linked lists? Will it have
the same O(n²) time efficiency as the array version?
10. Compare the text's implementation of insertion sort with the following ver-
sion.

ALGORITHM InsertSort2(A[0..n − 1])
    for i ← 1 to n − 1 do
        j ← i − 1
        while j ≥ 0 and A[j] > A[j + 1] do
            swap(A[j], A[j + 1])
            j ← j − 1

What is the time efficiency of this algorithm? How is it compared to that
of the version given in Section 4.1?
11. Let A[0..n − 1] be an array of n sortable elements. (For simplicity, you may
assume that all the elements are distinct.) A pair (A[i], A[j]) is called an
inversion if i < j and A[i] > A[j].
a. What arrays of size n have the largest number of inversions and what is this
number? Answer the same questions for the smallest number of inversions.
b. Show that the average-case number of key comparisons in insertion sort is
given by the formula

C_avg(n) ≈ n²/4.
12. Shellsort (more accurately Shell's sort) is an important sorting algorithm that
works by applying insertion sort to each of several interleaving sublists of a
given list. On each pass through the list, the sublists in question are formed
by stepping through the list with an increment h_i taken from some predefined
decreasing sequence of step sizes, h_1 > . . . > h_i > . . . > 1, which must end with
1. (The algorithm works for any such sequence, though some sequences are
known to yield a better efficiency than others. For example, the sequence 1,
4, 13, 40, 121, . . . , used, of course, in reverse, is known to be among the best
for this purpose.)
a. Apply shellsort to the list
S, H, E, L, L, S, O, R, T, I, S, U, S, E, F, U, L
b. Is shellsort a stable sorting algorithm?
c. Implement shellsort, straight insertion sort, selection sort, and bubble sort
in the language of your choice and compare their performance on random
arrays of sizes 10^n for n = 2, 3, 4, 5, and 6 as well as on increasing and
decreasing arrays of these sizes.
4.2 Topological Sorting
In this section, we discuss an important problem for directed graphs, with a
variety of applications involving prerequisite-restricted tasks. Before we pose this
problem, though, let us review a few basic facts about directed graphs themselves.
A directed graph, or digraph for short, is a graph with directions specified for all
its edges (Figure 4.5a is an example). The adjacency matrix and adjacency lists are
still two principal means of representing a digraph. There are only two notable
differences between undirected and directed graphs in representing them: (1) the
adjacency matrix of a directed graph does not have to be symmetric; (2) an edge
in a directed graph has just one (not two) corresponding nodes in the digraph's
adjacency lists.
FIGURE 4.5 (a) Digraph. (b) DFS forest of the digraph for the DFS traversal started at a.
Depth-first search and breadth-first search are principal traversal algorithms
for traversing digraphs as well, but the structure of corresponding forests can be
more complex than for undirected graphs. Thus, even for the simple example of
Figure 4.5a, the depth-first search forest (Figure 4.5b) exhibits all four types of
edges possible in a DFS forest of a directed graph: tree edges (ab, bc, de), back
edges (ba) from vertices to their ancestors, forward edges (ac) from vertices to
their descendants in the tree other than their children, and cross edges (dc), which
are none of the aforementioned types.

Note that a back edge in a DFS forest of a directed graph can connect a vertex
to its parent. Whether or not it is the case, the presence of a back edge indicates
that the digraph has a directed cycle. A directed cycle in a digraph is a sequence
of three or more of its vertices that starts and ends with the same vertex and in
which every vertex is connected to its immediate predecessor by an edge directed
from the predecessor to the successor. For example, a, b, a is a directed cycle in
the digraph in Figure 4.5a. Conversely, if a DFS forest of a digraph has no back
edges, the digraph is a dag, an acronym for directed acyclic graph.
Edge directions lead to new questions about digraphs that are either meaning-
less or trivial for undirected graphs. In this section, we discuss one such question.
As a motivating example, consider a set of five required courses {C1, C2, C3, C4,
C5} a part-time student has to take in some degree program. The courses can be
taken in any order as long as the following course prerequisites are met: C1 and
C2 have no prerequisites, C3 requires C1 and C2, C4 requires C3, and C5 requires
C3 and C4. The student can take only one course per term. In which order should
the student take the courses?

The situation can be modeled by a digraph in which vertices represent courses
and directed edges indicate prerequisite requirements (Figure 4.6). In terms of
this digraph, the question is whether we can list its vertices in such an order that
for every edge in the graph, the vertex where the edge starts is listed before the
vertex where the edge ends. (Can you find such an ordering of this digraph's
vertices?) This problem is called topological sorting. It can be posed for an
FIGURE 4.6 Digraph representing the prerequisite structure of five courses.
[In part (b), the stack entries carry the popping-off subscripts C5 1, C4 2,
C3 3, C1 4, C2 5; the popping-off order is C5, C4, C3, C1, C2, and the
topologically sorted list in part (c) is C2, C1, C3, C4, C5.]

FIGURE 4.7 (a) Digraph for which the topological sorting problem needs to be solved.
(b) DFS traversal stack with the subscript numbers indicating the popping-
off order. (c) Solution to the problem.
arbitrary digraph, but it is easy to see that the problem cannot have a solution
if a digraph has a directed cycle. Thus, for topological sorting to be possible, a
digraph in question must be a dag. It turns out that being a dag is not only necessary
but also sufficient for topological sorting to be possible; i.e., if a digraph has no
directed cycles, the topological sorting problem for it has a solution. Moreover,
there are two efficient algorithms that both verify whether a digraph is a dag
and, if it is, produce an ordering of vertices that solves the topological sorting
problem.
The first algorithm is a simple application of depth-first search: perform a DFS
traversal and note the order in which vertices become dead ends (i.e., popped
off the traversal stack). Reversing this order yields a solution to the topological
sorting problem, provided, of course, no back edge has been encountered during
the traversal. If a back edge has been encountered, the digraph is not a dag, and
topological sorting of its vertices is impossible.
Why does the algorithm work? When a vertex v is popped off a DFS stack,
no vertex u with an edge from u to v can be among the vertices popped off before
v. (Otherwise, (u, v) would have been a back edge.) Hence, any such vertex u will
be listed after v in the popped-off order list, and before v in the reversed list.
Figure 4.7 illustrates an application of this algorithm to the digraph in Figure 4.6.
Note that in Figure 4.7c, we have drawn the edges of the digraph, and
they all point from left to right as the problem's statement requires. It is a
convenient way to check visually the correctness of a solution to an instance of the
topological sorting problem.
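To make the method concrete, here is a minimal Python sketch of the DFS-based algorithm; the function name, the dict-of-lists digraph representation, and the three-state marking are illustrative choices of ours, not the book's.

def dfs_topological_sort(digraph):
    """Topologically sort a digraph given as {vertex: list of neighbors}.
    Vertices are appended to popped as they become dead ends; the reversed
    popping-off order is a topological order. Raises ValueError on a back
    edge, i.e., when the digraph is not a dag."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / being explored / dead end
    color = {v: WHITE for v in digraph}
    popped = []

    def dfs(v):
        color[v] = GRAY
        for w in digraph[v]:
            if color[w] == GRAY:      # back edge found: directed cycle
                raise ValueError("digraph is not a dag")
            if color[w] == WHITE:
                dfs(w)
        color[v] = BLACK              # v is a dead end: pop it off
        popped.append(v)

    for v in digraph:
        if color[v] == WHITE:
            dfs(v)
    return popped[::-1]               # reverse the popping-off order

Applied to the digraph of Figure 4.6, encoded as {'C1': ['C3'], 'C2': ['C3'], 'C3': ['C4', 'C5'], 'C4': ['C5'], 'C5': []}, this sketch pops the vertices in the order C5, C4, C3, C1, C2 and returns ['C2', 'C1', 'C3', 'C4', 'C5'], in agreement with Figure 4.7.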
delete C1   delete C2   delete C3   delete C4   delete C5

The solution obtained is C1, C2, C3, C4, C5
FIGURE 4.8 Illustration of the source-removal algorithm for the topological sorting
problem. On each iteration, a vertex with no incoming edges is deleted
from the digraph.
The second algorithm is based on a direct implementation of the decrease-(by
one)-and-conquer technique: repeatedly, identify in a remaining digraph a source,
which is a vertex with no incoming edges, and delete it along with all the edges
outgoing from it. (If there are several sources, break the tie arbitrarily. If there
are none, stop because the problem cannot be solved; see Problem 6a in this
section's exercises.) The order in which the vertices are deleted yields a solution
to the topological sorting problem. The application of this algorithm to the same
digraph representing the five courses is given in Figure 4.8.
Note that the solution obtained by the source-removal algorithm is different
from the one obtained by the DFS-based algorithm. Both of them are correct, of
course; the topological sorting problem may have several alternative solutions.
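For comparison, here is a similarly minimal Python sketch of the source-removal algorithm; maintaining in-degree counts and a queue of the current sources is our implementation choice, not prescribed by the book.

from collections import deque

def source_removal_sort(digraph):
    """Topologically sort {vertex: list of neighbors} by repeatedly
    deleting a source (a vertex with no incoming edges) together with
    all its outgoing edges."""
    indegree = {v: 0 for v in digraph}
    for v in digraph:
        for w in digraph[v]:
            indegree[w] += 1
    sources = deque(v for v in digraph if indegree[v] == 0)
    order = []
    while sources:
        v = sources.popleft()         # ties among sources broken arbitrarily
        order.append(v)
        for w in digraph[v]:          # delete v's outgoing edges
            indegree[w] -= 1
            if indegree[w] == 0:
                sources.append(w)
    if len(order) != len(digraph):    # no source left but vertices remain
        raise ValueError("digraph has a directed cycle")
    return order

On the digraph of the five courses this yields C1, C2, C3, C4, C5, the same solution as in Figure 4.8.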
The tiny size of the example we used might create a wrong impression about
the topological sorting problem. But imagine a large project (e.g., in construction,
research, or software development) that involves a multitude of interrelated tasks
with known prerequisites. The first thing to do in such a situation is to make sure
that the set of given prerequisites is not contradictory. The convenient way of
doing this is to solve the topological sorting problem for the project's digraph.
Only then can one start thinking about scheduling tasks to, say, minimize the total
completion time of the project. This would require, of course, other algorithms that
you can find in general books on operations research or in special ones on CPM
(Critical Path Method) and PERT (Program Evaluation and Review Technique)
methodologies.
As to applications of topological sorting in computer science, they include
instruction scheduling in program compilation, cell evaluation ordering in
spreadsheet formulas, and resolving symbol dependencies in linkers.
Exercises 4.2
1. Apply the DFS-based algorithm to solve the topological sorting problem for
the following digraphs:
[Two digraphs, (a) and (b), with vertices a through g, are drawn in the original.]
2. a. Prove that the topological sorting problem for a digraph has a solution if
and only if the digraph is a dag.
b. For a digraph with n vertices, what is the largest number of distinct solutions
the topological sorting problem can have?
3. a. What is the time efficiency of the DFS-based algorithm for topological
sorting?
b. How can one modify the DFS-based algorithm to avoid reversing the
vertex ordering generated by DFS?
4. Can one use the order in which vertices are pushed onto the DFS stack
(instead of the order they are popped off it) to solve the topological sorting
problem?
5. Apply the source-removal algorithm to the digraphs of Problem 1 above.
6. a. Prove that a nonempty dag must have at least one source.
b. How would you find a source (or determine that such a vertex does not
exist) in a digraph represented by its adjacency matrix? What is the time
efficiency of this operation?
c. How would you find a source (or determine that such a vertex does not
exist) in a digraph represented by its adjacency lists? What is the time
efficiency of this operation?
7. Can you implement the source-removal algorithm for a digraph represented
by its adjacency lists so that its running time is in O(|V| + |E|)?
8. Implement the two topological sorting algorithms in the language of your
choice. Run an experiment to compare their running times.
9. A digraph is called strongly connected if for any pair of two distinct vertices u
and v there exists a directed path from u to v and a directed path from v to u. In
general, a digraph's vertices can be partitioned into disjoint maximal subsets
of vertices that are mutually accessible via directed paths; these subsets are
called strongly connected components of the digraph. There are two DFS-based
algorithms for identifying strongly connected components. Here is the
simpler (but somewhat less efficient) one of the two:
Step 1 Perform a DFS traversal of the digraph given and number its
vertices in the order they become dead ends.
Step 2 Reverse the directions of all the edges of the digraph.
Step 3 Perform a DFS traversal of the new digraph by starting (and, if
necessary, restarting) the traversal at the highest numbered vertex
among still unvisited vertices.
The strongly connected components are exactly the vertices of the DFS
trees obtained during the last traversal.
a. Apply this algorithm to the following digraph to determine its strongly
connected components:
[A digraph with vertices a through h is drawn in the original.]
b. What is the time efficiency class of this algorithm? Give separate answers
for the adjacency matrix representation and adjacency list representation
of an input digraph.
c. How many strongly connected components does a dag have?
10. Spider's web A spider sits at the bottom (point S) of its web, and a fly sits at
the top (F). How many different ways can the spider reach the fly by moving
along the web's lines in the directions indicated by the arrows? [Kor05]
[The web, with S at the bottom and F at the top, is drawn in the original.]
4.3 Algorithms for Generating Combinatorial Objects
In this section, we keep our promise to discuss algorithms for generating
combinatorial objects. The most important types of combinatorial objects are
permutations, combinations, and subsets of a given set. They typically arise in problems
that require a consideration of different choices. We already encountered them in
Chapter 3 when we discussed exhaustive search. Combinatorial objects are studied
in a branch of discrete mathematics called combinatorics. Mathematicians, of
course, are primarily interested in different counting formulas; we should be
grateful for such formulas because they tell us how many items need to be generated. In
particular, they warn us that the number of combinatorial objects typically grows
exponentially or even faster as a function of the problem size. But our primary
interest here lies in algorithms for generating combinatorial objects, not just in
counting them.
Generating Permutations
We start with permutations. For simplicity, we assume that the underlying set
whose elements need to be permuted is simply the set of integers from 1 to n;
more generally, they can be interpreted as indices of elements in an n-element set
{a₁, . . . , aₙ}. What would the decrease-by-one technique suggest for the problem
of generating all n! permutations of {1, . . . , n}? The smaller-by-one problem is to
generate all (n − 1)! permutations. Assuming that the smaller problem is solved,
we can get a solution to the larger one by inserting n in each of the n possible
positions among elements of every permutation of n − 1 elements. All the
permutations obtained in this fashion will be distinct (why?), and their total number will
be n(n − 1)! = n!. Hence, we will obtain all the permutations of {1, . . . , n}.
We can insert n in the previously generated permutations either left to right
or right to left. It turns out that it is beneficial to start with inserting n into
12 . . . (n − 1) by moving right to left and then switch direction every time a new
permutation of {1, . . . , n − 1} needs to be processed. An example of applying this
approach bottom up for n = 3 is given in Figure 4.9.
The advantage of this order of generating permutations stems from the fact
that it satisfies the minimal-change requirement: each permutation can be obtained
from its immediate predecessor by exchanging just two elements in it. (For
the method being discussed, these two elements are always adjacent to each other.
start 1
insert 2 into 1 right to left 12 21
insert 3 into 12 right to left 123 132 312
insert 3 into 21 left to right 321 231 213
FIGURE 4.9 Generating permutations bottom up.
Check this for the permutations generated in Figure 4.9.) The minimal-change
requirement is beneficial both for the algorithm's speed and for applications using
the permutations. For example, in Section 3.4, we needed permutations of cities
to solve the traveling salesman problem by exhaustive search. If such permutations
are generated by a minimal-change algorithm, we can compute the length of
a new tour from the length of its predecessor in constant rather than linear time
(how?).
It is possible to get the same ordering of permutations of n elements without
explicitly generating permutations for smaller values of n. It can be done by
associating a direction with each element k in a permutation. We indicate such
a direction by a small arrow written over the element in question; in this text
rendering we place the arrow immediately before the element, e.g., ←3 for 3 with
its arrow pointing left. The element k is said to be mobile in such an arrow-marked
permutation if its arrow points to a smaller number adjacent to it. For example, for
the permutation →3 ←2 ←4 ←1, the elements 3 and 4 are mobile while 2 and 1
are not. Using the notion of a mobile element, we can give the following description
of the Johnson-Trotter algorithm for generating permutations.
ALGORITHM JohnsonTrotter(n)
//Implements Johnson-Trotter algorithm for generating permutations
//Input: A positive integer n
//Output: A list of all permutations of {1, . . . , n}
initialize the first permutation with ←1 ←2 . . . ←n (all arrows pointing left)
while the last permutation has a mobile element do
    find its largest mobile element k
    swap k with the adjacent element k's arrow points to
    reverse the direction of all the elements that are larger than k
    add the new permutation to the list
This algorithm is one of the most efficient for generating permutations; it can
be implemented to run in time proportional to the number of permutations, i.e.,
in Θ(n!). Of course, it is horribly slow for all but very small values of n; however,
this is not the algorithm's fault but rather the fault of the problem: it simply asks
to generate too many items.
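The following Python sketch implements the algorithm directly; an array of directions (−1 for a left-pointing arrow, +1 for a right-pointing one) stands in for the arrows, and the generator form and names are ours.

def johnson_trotter(n):
    """Generate all permutations of 1..n; each differs from its
    predecessor by a swap of two adjacent elements."""
    perm = list(range(1, n + 1))
    arrow = [-1] * n                  # -1: arrow points left, +1: right
    yield tuple(perm)
    while True:
        # find the largest mobile element k and its position
        k, k_pos = 0, -1
        for i in range(n):
            j = i + arrow[i]
            if 0 <= j < n and perm[j] < perm[i] and perm[i] > k:
                k, k_pos = perm[i], i
        if k_pos < 0:                 # no mobile element is left: done
            return
        # swap k with the adjacent element its arrow points to
        j = k_pos + arrow[k_pos]
        perm[k_pos], perm[j] = perm[j], perm[k_pos]
        arrow[k_pos], arrow[j] = arrow[j], arrow[k_pos]
        # reverse the direction of all elements larger than k
        for i in range(n):
            if perm[i] > k:
                arrow[i] = -arrow[i]
        yield tuple(perm)

For n = 3 it yields 123, 132, 312, 321, 231, 213, exactly the order of Figure 4.9.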
One can argue that the permutation ordering generated by the Johnson-Trotter
algorithm is not quite natural; for example, the natural place for permutation
n(n − 1) . . . 1 seems to be the last one on the list. This would be the case
if permutations were listed in increasing order, also called the lexicographic order,
which is the order in which they would be listed in a dictionary if the numbers
were interpreted as letters of an alphabet. For example, for n = 3,
123 132 213 231 312 321.
So how can we generate the permutation following a₁a₂ . . . aₙ₋₁aₙ in lexicographic
order? If aₙ₋₁ < aₙ, which is the case for exactly one half of all the
permutations, we can simply transpose these last two elements. For example, 123
is followed by 132. If aₙ₋₁ > aₙ, we find the permutation's longest decreasing suffix
aᵢ₊₁ > aᵢ₊₂ > . . . > aₙ (but aᵢ < aᵢ₊₁); increase aᵢ by exchanging it with the smallest
element of the suffix that is greater than aᵢ; and reverse the new suffix to put it in
increasing order. For example, 362541 is followed by 364125. Here is pseudocode
of this simple algorithm whose origins go as far back as 14th-century India.
ALGORITHM LexicographicPermute(n)
//Generates permutations in lexicographic order
//Input: A positive integer n
//Output: A list of all permutations of {1, . . . , n} in lexicographic order
initialize the first permutation with 12 . . . n
while last permutation has two consecutive elements in increasing order do
    let i be its largest index such that aᵢ < aᵢ₊₁   //aᵢ₊₁ > aᵢ₊₂ > . . . > aₙ
    find the largest index j such that aᵢ < aⱼ   //j ≥ i + 1 since aᵢ < aᵢ₊₁
    swap aᵢ with aⱼ   //aᵢ₊₁aᵢ₊₂ . . . aₙ will remain in decreasing order
    reverse the order of the elements from aᵢ₊₁ to aₙ inclusive
    add the new permutation to the list
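A direct Python rendering of this pseudocode follows; the generator form and names are our own.

def lexicographic_permute(n):
    """Generate all permutations of 1..n in lexicographic order."""
    a = list(range(1, n + 1))            # the first permutation: 1 2 ... n
    yield tuple(a)
    while True:
        # largest i with a[i] < a[i+1]; the suffix after i is decreasing
        i = n - 2
        while i >= 0 and a[i] >= a[i + 1]:
            i -= 1
        if i < 0:                        # whole permutation decreasing: done
            return
        # largest j with a[i] < a[j]: the smallest suffix element > a[i]
        j = n - 1
        while a[j] <= a[i]:
            j -= 1
        a[i], a[j] = a[j], a[i]          # the suffix stays decreasing
        a[i + 1:] = reversed(a[i + 1:])  # put the suffix in increasing order
        yield tuple(a)

The loop body, applied to the list [3, 6, 2, 5, 4, 1], transforms it into [3, 6, 4, 1, 2, 5], matching the 362541 to 364125 example above.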
Generating Subsets
Recall that in Section 3.4 we examined the knapsack problem, which asks to find
the most valuable subset of items that fits a knapsack of a given capacity. The
exhaustive-search approach to solving this problem discussed there was based on
generating all subsets of a given set of items. In this section, we discuss algorithms
for generating all 2ⁿ subsets of an abstract set A = {a₁, . . . , aₙ}. (Mathematicians
call the set of all subsets of a set its power set.)
The decrease-by-one idea is immediately applicable to this problem, too. All
subsets of A = {a₁, . . . , aₙ} can be divided into two groups: those that do not
contain aₙ and those that do. The former group is nothing but all the subsets of
{a₁, . . . , aₙ₋₁}, while each and every element of the latter can be obtained by
adding aₙ to a subset of {a₁, . . . , aₙ₋₁}. Thus, once we have a list of all subsets of
{a₁, . . . , aₙ₋₁}, we can get all the subsets of {a₁, . . . , aₙ} by adding to the list all
its elements with aₙ put into each of them. An application of this algorithm to
generate all subsets of {a₁, a₂, a₃} is illustrated in Figure 4.10.
Similarly to generating permutations, we do not have to generate power sets of
smaller sets. A convenient way of solving the problem directly is based on a
one-to-one correspondence between all 2ⁿ subsets of an n-element set A = {a₁, . . . , aₙ}
and all 2ⁿ bit strings b₁, . . . , bₙ of length n.
n    subsets
0    ∅
1    ∅  {a₁}
2    ∅  {a₁}  {a₂}  {a₁, a₂}
3    ∅  {a₁}  {a₂}  {a₁, a₂}  {a₃}  {a₁, a₃}  {a₂, a₃}  {a₁, a₂, a₃}

FIGURE 4.10 Generating subsets bottom up.
The easiest way to establish such a
correspondence is to assign to a subset the bit string in which bᵢ = 1 if aᵢ belongs
to the subset and bᵢ = 0 if aᵢ does not belong to it. (We mentioned this idea of bit
vectors in Section 1.4.) For example, the bit string 000 will correspond to the empty
subset of a three-element set, 111 will correspond to the set itself, i.e., {a₁, a₂, a₃},
and 110 will represent {a₁, a₂}. With this correspondence in place, we can generate
all the bit strings of length n by generating successive binary numbers from 0 to
2ⁿ − 1, padded, when necessary, with an appropriate number of leading 0s. For
example, for the case of n = 3, we obtain

bit strings  000  001   010   011       100   101       110       111
subsets      ∅    {a₃}  {a₂}  {a₂, a₃}  {a₁}  {a₁, a₃}  {a₁, a₂}  {a₁, a₂, a₃}
Note that although the bit strings are generated by this algorithm in lexicographic
order (in the two-symbol alphabet of 0 and 1), the order of the subsets
looks anything but natural. For example, we might want to have the so-called
squashed order, in which any subset involving aⱼ can be listed only after all the
subsets involving a₁, . . . , aⱼ₋₁, as was the case for the list of the three-element set
in Figure 4.10. It is easy to adjust the bit string-based algorithm above to yield a
squashed ordering of the subsets involved (see Problem 6 in this section's
exercises).
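In Python, the bit string-based algorithm amounts to counting from 0 to 2ⁿ − 1; this is a minimal sketch, with the function name and list output being our choices.

def all_subsets(items):
    """Generate the 2**n subsets of items, reading each number from
    0 to 2**n - 1 as a bit string b1...bn in which bit i selects items[i]."""
    n = len(items)
    for number in range(2 ** n):
        yield [items[i] for i in range(n) if (number >> (n - 1 - i)) & 1]

For items = ['a1', 'a2', 'a3'] the subsets come out in the order of the table above: [], ['a3'], ['a2'], ['a2', 'a3'], ['a1'], and so on.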
A more challenging question is whether there exists a minimal-change algorithm
for generating bit strings so that every one of them differs from its immediate
predecessor by only a single bit. (In the language of subsets, we want every subset
to differ from its immediate predecessor by either an addition or a deletion, but
not both, of a single element.) The answer to this question is yes. For example, for
n = 3, we can get

000 001 011 010 110 111 101 100.

Such a sequence of bit strings is called the binary reflected Gray code. Frank Gray,
a researcher at AT&T Bell Laboratories, reinvented it in the 1940s to minimize
the effect of errors in transmitting digital signals (see, e.g., [Ros07], pp. 642–643).
Seventy years earlier, the French engineer Émile Baudot used such codes
in telegraphy. Here is pseudocode that generates the binary reflected Gray code
recursively.
ALGORITHM BRGC(n)
//Generates recursively the binary reflected Gray code of order n
//Input: A positive integer n
//Output: A list of all bit strings of length n composing the Gray code
if n = 1 make list L containing bit strings 0 and 1 in this order
else generate list L1 of bit strings of size n − 1 by calling BRGC(n − 1)
    copy list L1 to list L2 in reversed order
    add 0 in front of each bit string in list L1
    add 1 in front of each bit string in list L2
    append L2 to L1 to get list L
return L
The correctness of the algorithm stems from the fact that it generates 2ⁿ bit
strings and all of them are distinct. Both these assertions are easy to check by
mathematical induction. Note that the binary reflected Gray code is cyclic: its last
bit string differs from the first one by a single bit. For a nonrecursive algorithm for
generating the binary reflected Gray code see Problem 9 in this section's exercises.
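A line-by-line Python transcription of BRGC, returning the code as a list of strings, can look as follows.

def brgc(n):
    """Binary reflected Gray code of order n, generated recursively."""
    if n == 1:
        return ['0', '1']
    l1 = brgc(n - 1)                  # Gray code of order n - 1
    l2 = list(reversed(l1))           # copy of L1 in reversed order
    return ['0' + s for s in l1] + ['1' + s for s in l2]

brgc(3) returns ['000', '001', '011', '010', '110', '111', '101', '100'], the sequence displayed above.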
Exercises 4.3
1. Is it realistic to implement an algorithm that requires generating all permu-
tations of a 25-element set on your computer? What about all the subsets of
such a set?
2. Generate all permutations of {1, 2, 3, 4} by
a. the bottom-up minimal-change algorithm.
b. the Johnson-Trotter algorithm.
c. the lexicographic-order algorithm.
3. Apply LexicographicPermute to multiset {1, 2, 2, 3}. Does it generate correctly
all the permutations in lexicographic order?
4. Consider the following implementation of the algorithm for generating
permutations discovered by B. Heap [Hea63].

ALGORITHM HeapPermute(n)
//Implements Heap's algorithm for generating permutations
//Input: A positive integer n and a global array A[1..n]
//Output: All permutations of elements of A
if n = 1
    write A
else
    for i ← 1 to n do
        HeapPermute(n − 1)
        if n is odd
            swap A[1] and A[n]
        else swap A[i] and A[n]
a. Trace the algorithm by hand for n = 2, 3, and 4.
b. Prove the correctness of Heap's algorithm.
c. What is the time efficiency of HeapPermute?
5. Generate all the subsets of a four-element set A = {a₁, a₂, a₃, a₄} by each of
the two algorithms outlined in this section.
6. What simple trick would make the bit string-based algorithm generate subsets
in squashed order?
7. Write pseudocode for a recursive algorithm for generating all 2ⁿ bit strings of
length n.
8. Write a nonrecursive algorithm for generating 2ⁿ bit strings of length n that
implements bit strings as arrays and does not use binary additions.
9. a. Generate the binary reflected Gray code of order 4.
b. Trace the following nonrecursive algorithm to generate the binary reflected
Gray code of order 4. Start with the n-bit string of all 0s. For
i = 1, 2, . . . , 2ⁿ − 1, generate the ith bit string by flipping bit b in the previous
bit string, where b is the position of the least significant 1 in the binary
representation of i.
10. Design a decrease-and-conquer algorithm for generating all combinations of
k items chosen from n, i.e., all k-element subsets of a given n-element set. Is
your algorithm a minimal-change algorithm?
11. Gray code and the Tower of Hanoi
a. Show that the disk moves made in the classic recursive algorithm for the
Tower of Hanoi puzzle can be used for generating the binary reflected Gray
code.
b. Show how the binary reflected Gray code can be used for solving the Tower
of Hanoi puzzle.
12. Fair attraction In olden days, one could encounter the following attraction
at a fair. A light bulb was connected to several switches in such a way that it
lighted up only when all the switches were closed. Each switch was controlled
by a push button; pressing the button toggled the switch, but there was no
way to know the state of the switch. The object was to turn the light bulb on.
Design an algorithm to turn on the light bulb with the minimum number of
button pushes needed in the worst case for n switches.
4.4 Decrease-by-a-Constant-Factor Algorithms
You may recall from the introduction to this chapter that decrease-by-a-constant-factor
is the second major variety of decrease-and-conquer. As an example of an
algorithm based on this technique, we mentioned there exponentiation by squaring
defined by formula (4.2). In this section, you will find a few other examples of
such algorithms. The most important and well-known of them is binary search.
Decrease-by-a-constant-factor algorithms usually run in logarithmic time, and,
being very efficient, do not happen often; a reduction by a factor other than two is
especially rare.
Binary Search
Binary search is a remarkably efficient algorithm for searching in a sorted array. It
works by comparing a search key K with the array's middle element A[m]. If they
match, the algorithm stops; otherwise, the same operation is repeated recursively
for the first half of the array if K < A[m], and for the second half if K > A[m]:

    A[0] . . . A[m − 1]          A[m]          A[m + 1] . . . A[n − 1]
    search here if K < A[m]                    search here if K > A[m]
As an example, let us apply binary search to searching for K = 70 in the array

index    0   1   2   3   4   5   6   7   8   9   10  11  12
value    3   14  27  31  39  42  55  70  74  81  85  93  98

The iterations of the algorithm are given in the following table:

iteration 1    l = 0, m = 6, r = 12
iteration 2    l = 7, m = 9, r = 12
iteration 3    l = 7, m = 7 (= l), r = 8
Though binary search is clearly based on a recursive idea, it can be easily
implemented as a nonrecursive algorithm, too. Here is pseudocode of this nonre-
cursive version.
ALGORITHM BinarySearch(A[0..n − 1], K)
//Implements nonrecursive binary search
//Input: An array A[0..n − 1] sorted in ascending order and
//       a search key K
//Output: An index of the array's element that is equal to K
//        or −1 if there is no such element
l ← 0; r ← n − 1
while l ≤ r do
    m ← ⌊(l + r)/2⌋
    if K = A[m] return m
    else if K < A[m] r ← m − 1
    else l ← m + 1
return −1
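The pseudocode translates into Python almost verbatim; here is one possible rendering.

def binary_search(a, key):
    """Nonrecursive binary search in a sorted list a.
    Returns an index of an element equal to key, or -1 if there is none."""
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2              # middle index, rounded down
        if key == a[m]:
            return m
        elif key < a[m]:
            r = m - 1                 # continue in the first half
        else:
            l = m + 1                 # continue in the second half
    return -1

Searching for 70 in the 13-element array above, the function probes indices 6, 9, and 7, succeeding on the third iteration just as in the table.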
The standard way to analyze the efficiency of binary search is to count the number
of times the search key is compared with an element of the array. Moreover, for
the sake of simplicity, we will count the so-called three-way comparisons. This
assumes that after one comparison of K with A[m], the algorithm can determine
whether K is smaller, equal to, or larger than A[m].
How many such comparisons does the algorithm make on an array of n
elements? The answer obviously depends not only on n but also on the specifics of
a particular instance of the problem. Let us find the number of key comparisons
in the worst case C_worst(n). The worst-case inputs include all arrays that do not
contain a given search key, as well as some successful searches. Since after one
comparison the algorithm faces the same situation but for an array half the size,
we get the following recurrence relation for C_worst(n):

C_worst(n) = C_worst(⌊n/2⌋) + 1 for n > 1,  C_worst(1) = 1.     (4.3)

(Stop and convince yourself that n/2 must be, indeed, rounded down and that the
initial condition must be written as specified.)
We already encountered recurrence (4.3), with a different initial condition, in
Section 2.4 (see recurrence (2.4) and its solution there for n = 2ᵏ). For the initial
condition C_worst(1) = 1, we obtain

C_worst(2ᵏ) = k + 1 = log₂ n + 1.     (4.4)

Further, similarly to the case of recurrence (2.4) (Problem 7 in Exercises 2.4), the
solution given by formula (4.4) for n = 2ᵏ can be tweaked to get a solution valid
for an arbitrary positive integer n:

C_worst(n) = ⌊log₂ n⌋ + 1 = ⌈log₂(n + 1)⌉.     (4.5)
Formula (4.5) deserves attention. First, it implies that the worst-case time
efficiency of binary search is in Θ(log n). Second, it is the answer we should have
fully expected: since the algorithm simply reduces the size of the remaining array
by about half on each iteration, the number of such iterations needed to reduce the
initial size n to the final size 1 has to be about log₂ n. Third, to reiterate the point
made in Section 2.1, the logarithmic function grows so slowly that its values remain
small even for very large values of n. In particular, according to formula (4.5),
it will take no more than ⌈log₂(10³ + 1)⌉ = 10 three-way comparisons to find an
element of a given value (or establish that there is no such element) in any sorted
array of one thousand elements, and it will take no more than ⌈log₂(10⁶ + 1)⌉ = 20
comparisons to do it for any sorted array of size one million!
What can we say about the average-case efficiency of binary search? A
sophisticated analysis shows that the average number of key comparisons made by
binary search is only slightly smaller than that in the worst case:

C_avg(n) ≈ log₂ n.

(More accurate formulas for the average number of comparisons in a successful
and an unsuccessful search are C_avg^yes(n) ≈ log₂ n − 1 and C_avg^no(n) ≈ log₂(n + 1),
respectively.)
Though binary search is an optimal searching algorithm if we restrict our
operations only to comparisons between keys (see Section 11.2), there are searching
algorithms (see interpolation search in Section 4.5 and hashing in Section 7.3) with
a better average-case time efficiency, and one of them (hashing) does not even
require the array to be sorted! These algorithms do require some special calculations
in addition to key comparisons, however. Finally, the idea behind binary search
has several applications beyond searching (see, e.g., [Ben00]). In addition, it can be
applied to solving nonlinear equations in one unknown; we discuss this continuous
analogue of binary search, called the method of bisection, in Section 12.4.
Fake-Coin Problem
Of several versions of the fake-coin identification problem, we consider here
the one that best illustrates the decrease-by-a-constant-factor strategy. Among n
identical-looking coins, one is fake. With a balance scale, we can compare any two
sets of coins. That is, by tipping to the left, to the right, or staying even, the balance
scale will tell whether the sets weigh the same or which of the sets is heavier than
the other, but not by how much. The problem is to design an efficient algorithm
for detecting the fake coin. An easier version of the problem, the one we discuss
here, assumes that the fake coin is known to be, say, lighter than the genuine
one.¹
The most natural idea for solving this problem is to divide n coins into two
piles of ⌊n/2⌋ coins each, leaving one extra coin aside if n is odd, and put the two

1. A much more challenging version assumes no additional information about the relative weights of the
fake and genuine coins or even the presence of the fake coin among n given coins. We pursue this more
difficult version in the exercises for Section 11.2.
piles on the scale. If the piles weigh the same, the coin put aside must be fake;
otherwise, we can proceed in the same manner with the lighter pile, which must
be the one with the fake coin.
We can easily set up a recurrence relation for the number of weighings W(n)
needed by this algorithm in the worst case:

W(n) = W(⌊n/2⌋) + 1 for n > 1,  W(1) = 0.

This recurrence should look familiar to you. Indeed, it is almost identical to the one
for the worst-case number of comparisons in binary search. (The difference is in
the initial condition.) This similarity is not really surprising, since both algorithms
are based on the same technique of halving an instance size. The solution to the
recurrence for the number of weighings is also very similar to the one we had for
binary search: W(n) = ⌊log₂ n⌋.
This stuff should look elementary by now, if not outright boring. But wait: the
interesting point here is the fact that the above algorithm is not the most efficient
solution. It would be more efficient to divide the coins not into two but into three
piles of about n/3 coins each. (Details of a precise formulation are developed
in this section's exercises. Do not miss it! If your instructor forgets, demand that
the instructor assign Problem 10.) After weighing two of the piles, we can reduce
the instance size by a factor of three. Accordingly, we should expect the number
of weighings to be about log₃ n, which is smaller than log₂ n.
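Here is a small Python sketch of the divide-into-three idea for the lighter-fake version; a call to sum() plays the role of the balance scale, and taking the pile size as the ceiling of one third of the remaining coins is one plausible way (ours, not the book's) to handle values of n that are not multiples of 3.

def find_fake(coins):
    """Return the index of the single lighter fake in the list coins."""
    lo, hi = 0, len(coins)            # the fake is somewhere in coins[lo:hi]
    while hi - lo > 1:
        t = (hi - lo + 2) // 3        # pile size: ceiling of one third
        left = sum(coins[lo:lo + t])
        right = sum(coins[lo + t:lo + 2 * t])
        if left < right:
            hi = lo + t               # fake is in the left pile
        elif right < left:
            lo, hi = lo + t, lo + 2 * t   # fake is in the right pile
        else:
            lo = lo + 2 * t           # piles balance: fake was set aside
    return lo

Since each weighing keeps at most about one third of the coins, the number of weighings is indeed about log₃ n.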
Russian Peasant Multiplication
Now we consider a nonorthodox algorithm for multiplying two positive integers
called multiplication à la russe or the Russian peasant method. Let n and m
be positive integers whose product we want to compute, and let us measure the
instance size by the value of n. Now, if n is even, an instance of half the size has
to deal with n/2, and we have an obvious formula relating the solution to the
problem's larger instance to the solution to the smaller one:

n · m = (n/2) · 2m.

If n is odd, we need only a slight adjustment of this formula:

n · m = ((n − 1)/2) · 2m + m.
Using these formulas and the trivial case of 1 · m = m to stop, we can compute
product n · m either recursively or iteratively. An example of computing 50 · 65
with this algorithm is given in Figure 4.11. Note that all the extra addends shown
in parentheses in Figure 4.11a are in the rows that have odd values in the first
column. Therefore, we can find the product by simply adding all the elements in
the m column that have an odd number in the n column (Figure 4.11b).
Also note that the algorithm involves just the simple operations of halving,
doubling, and adding, a feature that might be attractive, for example, to those
n     m                          n     m
50    65                         50    65
25    130                        25    130     130
12    260    (130)               12    260
6     520                        6     520
3     1040                       3     1040    1040
1     2080   (1040)              1     2080    2080
      2080 + (130 + 1040) = 3250               3250
(a)                              (b)

FIGURE 4.11 Computing 50 · 65 by the Russian peasant method.
who do not want to memorize the table of multiplications. It is this feature of the
algorithm that most probably made it attractive to Russian peasants who, according
to Western visitors, used it widely in the nineteenth century and for whom the
method is named. (In fact, the method was known to Egyptian mathematicians as
early as 1650 b.c. [Cha98, p. 16].) It also leads to very fast hardware implementation
since doubling and halving of binary numbers can be performed using shifts,
which are among the most basic operations at the machine level.
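An iterative Python sketch of the method needs only the two formulas above (the function name is ours).

def russian_peasant(n, m):
    """Multiply positive integers n and m by halving, doubling, and adding."""
    extras = 0
    while n > 1:
        if n % 2 == 1:                # odd n: n*m = (n-1)/2 * 2m + m
            extras += m
        n, m = n // 2, 2 * m          # even n: n*m = n/2 * 2m
    return m + extras                 # trivial case: 1 * m = m

russian_peasant(50, 65) accumulates the extra addends 130 and 1040 and returns 2080 + 1170 = 3250, reproducing Figure 4.11.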
Josephus Problem
Our last example is the Josephus problem, named for Flavius Josephus, a famous
Jewish historian who participated in and chronicled the Jewish revolt of 66–70
C.E. against the Romans. Josephus, as a general, managed to hold the fortress of
Jotapata for 47 days, but after the fall of the city he took refuge with 40 diehards in
a nearby cave. There, the rebels voted to perish rather than surrender. Josephus
proposed that each man in turn should dispatch his neighbor, the order to be
determined by casting lots. Josephus contrived to draw the last lot, and, as one
of the two surviving men in the cave, he prevailed upon his intended victim to
surrender to the Romans.
So let n people numbered 1 to n stand in a circle. Starting the grim count with
person number 1, we eliminate every second person until only one survivor is left.
The problem is to determine the survivor's number J(n). For example (Figure
4.12), if n is 6, people in positions 2, 4, and 6 will be eliminated on the first pass
through the circle, and people in initial positions 3 and 1 will be eliminated on the
second pass, leaving a sole survivor in initial position 5; thus, J(6) = 5. To give
another example, if n is 7, people in positions 2, 4, 6, and 1 will be eliminated on
the first pass (it is more convenient to include 1 in the first pass) and people in
positions 5 and, for convenience, 3 on the second; thus, J(7) = 7.
FIGURE 4.12 Instances of the Josephus problem for (a) n = 6 and (b) n = 7. Subscript
numbers indicate the pass on which the person in that position is
eliminated. The solutions are J(6) = 5 and J(7) = 7, respectively.
It is convenient to consider the cases of even and odd n's separately. If n is
even, i.e., n = 2k, the first pass through the circle yields an instance of exactly the
same problem but half its initial size. The only difference is in position numbering;
for example, a person in initial position 3 will be in position 2 for the second pass,
a person in initial position 5 will be in position 3, and so on (check Figure 4.12a). It
is easy to see that to get the initial position of a person, we simply need to multiply
his new position by 2 and subtract 1. This relationship will hold, in particular, for
the survivor, i.e.,

J(2k) = 2J(k) − 1.

Let us now consider the case of an odd n (n > 1), i.e., n = 2k + 1. The first pass
eliminates people in all even positions. If we add to this the elimination of the
person in position 1 right after that, we are left with an instance of size k. Here, to
get the initial position that corresponds to the new position numbering, we have
to multiply the new position number by 2 and add 1 (check Figure 4.12b). Thus,
for odd values of n, we get

J(2k + 1) = 2J(k) + 1.
Can we get a closed-form solution to the two-case recurrence subject to the
initial condition J(1) = 1? The answer is yes, though getting it requires more
ingenuity than just applying backward substitutions. In fact, one way to find a
solution is to apply forward substitutions to get, say, the first 15 values of J(n),
discern a pattern, and then prove its general validity by mathematical induction.
We leave the execution of this plan to the exercises; alternatively, you can look it
up in [GKP94], whose exposition of the Josephus problem we have been following.
Interestingly, the most elegant form of the closed-form answer involves the binary
representation of size n: J(n) can be obtained by a 1-bit cyclic shift left of n itself!
For example, J(6) = J(110₂) = 101₂ = 5 and J(7) = J(111₂) = 111₂ = 7.
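Both the recurrence and the bit trick are a few lines of Python (a sketch; the names are ours).

def josephus(n):
    """Survivor's number J(n), from the two-case recurrence."""
    if n == 1:
        return 1
    if n % 2 == 0:
        return 2 * josephus(n // 2) - 1   # J(2k) = 2J(k) - 1
    return 2 * josephus(n // 2) + 1       # J(2k + 1) = 2J(k) + 1

def josephus_shift(n):
    """The same value by a 1-bit cyclic shift left of n's binary form."""
    bits = bin(n)[2:]                     # e.g., 6 -> '110'
    return int(bits[1:] + bits[0], 2)     # '101' -> 5

The two functions agree, e.g., josephus(6) == josephus_shift(6) == 5 and josephus(7) == josephus_shift(7) == 7.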
Exercises 4.4
1. Cutting a stick A stick n inches long needs to be cut into n one-inch pieces.
Outline an algorithm that performs this task with the minimum number of
cuts if several pieces of the stick can be cut at the same time. Also give a
formula for the minimum number of cuts.
2. Design a decrease-by-half algorithm for computing ⌊log₂ n⌋ and determine its
time efficiency.
3. a. What is the largest number of key comparisons made by binary search in
searching for a key in the following array?
3 14 27 31 39 42 55 70 74 81 85 93 98
b. List all the keys of this array that will require the largest number of key
comparisons when searched for by binary search.
c. Find the average number of key comparisons made by binary search in a
successful search in this array. Assume that each key is searched for with
the same probability.
d. Find the average number of key comparisons made by binary search in an
unsuccessful search in this array. Assume that searches for keys in each of
the 14 intervals formed by the array's elements are equally likely.
4. Estimate how many times faster an average successful search will be in a
sorted array of one million elements if it is done by binary search versus
sequential search.
5. The time efciency of sequential search does not depend on whether a list is
implemented as an array or as a linked list. Is it also true for searching a sorted
list by binary search?
6. a. Design a version of binary search that uses only two-way comparisons such
as ≤ and =. Implement your algorithm in the language of your choice and
carefully debug it: such programs are notorious for being prone to bugs.
b. Analyze the time efficiency of the two-way comparison version designed
in part a.
7. Picture guessing A version of the popular problem-solving task involves
presenting people with an array of 42 pictures (seven rows of six pictures each)
and asking them to identify the target picture by asking questions that can be
answered yes or no. Further, people are then required to identify the picture
with as few questions as possible. Suggest the most efficient algorithm for this
problem and indicate the largest number of questions that may be necessary.
8. Consider ternary search, the following algorithm for searching in a sorted
array A[0..n − 1]. If n = 1, simply compare the search key K with the single
element of the array; otherwise, search recursively by comparing K with
A[⌊n/3⌋], and if K is larger, compare it with A[⌊2n/3⌋] to determine in which
third of the array to continue the search.
a. What design technique is this algorithm based on?
b. Set up a recurrence for the number of key comparisons in the worst case.
You may assume that n = 3ᵏ.
c. Solve the recurrence for n = 3ᵏ.
d. Compare this algorithm's efficiency with that of binary search.
9. An array A[0..n − 2] contains n − 1 integers from 1 to n in increasing order.
(Thus one integer in this range is missing.) Design the most efficient algorithm
you can to find the missing integer and indicate its time efficiency.
10. a. Write pseudocode for the divide-into-three algorithm for the fake-coin
problem. Make sure that your algorithm handles properly all values of n,
not only those that are multiples of 3.
b. Set up a recurrence relation for the number of weighings in the
divide-into-three algorithm for the fake-coin problem and solve it for n = 3ᵏ.
c. For large values of n, about how many times faster is this algorithm than
the one based on dividing coins into two piles? Your answer should not
depend on n.
11. a. Apply the Russian peasant algorithm to compute 26 · 47.
b. From the standpoint of time efficiency, does it matter whether we multiply
n by m or m by n by the Russian peasant algorithm?
12. a. Write pseudocode for the Russian peasant multiplication algorithm.
b. What is the time efficiency class of Russian peasant multiplication?
13. Find J(40), the solution to the Josephus problem for n = 40.
14. Prove that the solution to the Josephus problem is 1 for every n that is a power
of 2.
15. For the Josephus problem,
a. compute J(n) for n = 1, 2, . . . , 15.
b. discern a pattern in the solutions for the first fifteen values of n and prove
its general validity.
c. prove the validity of getting J(n) by a 1-bit cyclic shift left of the binary
representation of n.
4.5 Variable-Size-Decrease Algorithms
In the third principal variety of decrease-and-conquer, the size reduction pattern
varies from one iteration of the algorithm to another. Euclid's algorithm for
computing the greatest common divisor (Section 1.1) provides a good example
of this kind of algorithm. In this section, we encounter a few more examples of
this variety.
Computing a Median and the Selection Problem
The selection problem is the problem of finding the kth smallest element in a list
of n numbers. This number is called the kth order statistic. Of course, for k = 1 or
k = n, we can simply scan the list in question to find the smallest or largest element,
respectively. A more interesting case of this problem is for k = ⌈n/2⌉, which asks to
find an element that is not larger than one half of the list's elements and not smaller
than the other half. This middle value is called the median, and it is one of the
most important notions in mathematical statistics. Obviously, we can find the kth
smallest element in a list by sorting the list first and then selecting the kth element
in the output of a sorting algorithm. The time of such an algorithm is determined
by the efficiency of the sorting algorithm used. Thus, with a fast sorting algorithm
such as mergesort (discussed in the next chapter), the algorithm's efficiency is in
O(n log n).
You should immediately suspect, however, that sorting the entire list is most
likely overkill since the problem asks not to order the entire list but just to find its
kth smallest element. Indeed, we can take advantage of the idea of partitioning
a given list around some value p of, say, its first element. In general, this is a
rearrangement of the list's elements so that the left part contains all the elements
smaller than or equal to p, followed by the pivot p itself, followed by all the
elements greater than or equal to p:

all are ≤ p  |  p  |  all are ≥ p
Of the two principal algorithmic alternatives to partition an array, here we
discuss the Lomuto partitioning [Ben00, p. 117]; we introduce the better known
Hoare's algorithm in the next chapter. To get the idea behind the Lomuto
partitioning, it is helpful to think of an array (or, more generally, a subarray A[l..r]
with 0 ≤ l ≤ r ≤ n − 1) under consideration as composed of three contiguous
segments. Listed in the order they follow pivot p, they are as follows: a segment with
elements known to be smaller than p, the segment of elements known to be greater
than or equal to p, and the segment of elements yet to be compared to p (see
Figure 4.13a). Note that the segments can be empty; for example, it is always the case
for the first two segments before the algorithm starts.
Starting with i = l + 1, the algorithm scans the subarray A[l..r] left to right,
maintaining this structure until a partition is achieved. On each iteration, it
compares the first element in the unknown segment (pointed to by the scanning index
i in Figure 4.13a) with the pivot p. If A[i] ≥ p, i is simply incremented to expand
the segment of the elements greater than or equal to p while shrinking the
unprocessed segment. If A[i] < p, it is the segment of the elements smaller than p
that needs to be expanded. This is done by incrementing s, the index of the last
(a)  p | all are < p | all are ≥ p | ? . . . ?   (s marks the last element smaller
     than p; i is the scanning index)
(b)  p | all are < p | all are ≥ p               (no unprocessed elements remain)
(c)  all are < p | p | all are ≥ p               (after the final swap of p with A[s])
FIGURE 4.13 Illustration of the Lomuto partitioning.
element in the first segment, swapping A[i] and A[s], and then incrementing i to
point to the new first element of the shrunk unprocessed segment. After no
unprocessed elements remain (Figure 4.13b), the algorithm swaps the pivot with A[s]
to achieve the partition being sought (Figure 4.13c).
Here is pseudocode implementing this partitioning procedure.
ALGORITHM LomutoPartition(A[l..r])
//Partitions subarray by Lomuto's algorithm using first element as pivot
//Input: A subarray A[l..r] of array A[0..n − 1], defined by its left and right
//       indices l and r (l ≤ r)
//Output: Partition of A[l..r] and the new position of the pivot
p ← A[l]
s ← l
for i ← l + 1 to r do
    if A[i] < p
        s ← s + 1; swap(A[s], A[i])
swap(A[l], A[s])
return s
How can we take advantage of a list partition to find the kth smallest element
in it? Let us assume that the list is implemented as an array whose elements
are indexed starting with 0, and let s be the partition's split position, i.e., the
index of the array's element occupied by the pivot after partitioning. If s = k − 1,
pivot p itself is obviously the kth smallest element, which solves the problem. If
s > k − 1, the kth smallest element in the entire array can be found as the kth
smallest element in the left part of the partitioned array. And if s < k − 1, it can
be found as the (k − s − 1)th smallest element in its right part. Thus, if we do not solve
the problem outright, we reduce its instance to a smaller one, which can be solved
by the same approach, i.e., recursively. This algorithm is called quickselect.
To find the kth smallest element in array A[0..n − 1] by this algorithm, call
Quickselect(A[0..n − 1], k) where
ALGORITHM Quickselect(A[l..r], k)
//Solves the selection problem by recursive partition-based algorithm
//Input: Subarray A[l..r] of array A[0..n − 1] of orderable elements and
//       integer k (1 ≤ k ≤ r − l + 1)
//Output: The value of the kth smallest element in A[l..r]
s ← LomutoPartition(A[l..r])  //or another partition algorithm
if s = l + k − 1 return A[s]
else if s > l + k − 1 Quickselect(A[l..s − 1], k)
else Quickselect(A[s + 1..r], k − 1 − s + l)
In fact, the same idea can be implemented without recursion as well. For the
nonrecursive version, we need not even adjust the value of k but just continue
until s = k − 1.
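The following Python sketch combines Lomuto partitioning with the nonrecursive version of quickselect just mentioned; it works in place on a list, with 1 ≤ k ≤ len(a), and the names are ours.

def lomuto_partition(a, l, r):
    """Partition a[l..r] around the pivot a[l]; return the pivot's new index."""
    p = a[l]
    s = l                                 # a[l+1..s] holds elements < p
    for i in range(l + 1, r + 1):
        if a[i] < p:
            s += 1
            a[s], a[i] = a[i], a[s]
    a[l], a[s] = a[s], a[l]               # put the pivot into its final place
    return s

def quickselect(a, k):
    """Return the kth smallest element of a, partitioning until the
    split position lands on index k - 1."""
    l, r = 0, len(a) - 1
    while True:
        s = lomuto_partition(a, l, r)
        if s == k - 1:
            return a[s]
        elif s > k - 1:
            r = s - 1                     # continue in the left part
        else:
            l = s + 1                     # continue in the right part

quickselect([4, 1, 10, 8, 7, 12, 9, 2, 15], 5) returns 8, reproducing the example that follows.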
EXAMPLE Apply the partition-based algorithm to find the median of the
following list of nine numbers: 4, 1, 10, 8, 7, 12, 9, 2, 15. Here, k = ⌈9/2⌉ = 5 and our
task is to find the 5th smallest element in the array.
We use the above version of array partitioning (the pivot of each pass is its
subarray's first element).

index               0   1   2   3   4   5   6   7   8

s = 0, i = 1:       4   1  10   8   7  12   9   2  15
s = 1, i = 2:       4   1  10   8   7  12   9   2  15
s = 1, i = 7:       4   1  10   8   7  12   9   2  15
s = 2, i = 8:       4   1   2   8   7  12   9  10  15
s = 2, i > r:       4   1   2   8   7  12   9  10  15
after final swap:   2   1   4   8   7  12   9  10  15
Since s = 2 is smaller than k − 1 = 4, we proceed with the right part of the array:
index               0   1   2   3   4   5   6   7   8

s = 3, i = 4:                   8   7  12   9  10  15
s = 4, i = 5:                   8   7  12   9  10  15
s = 4, i > r:                   8   7  12   9  10  15
after final swap:               7   8  12   9  10  15
Now s = k − 1 = 4, and hence we can stop: the found median is 8, which is greater
than 2, 1, 4, and 7 but smaller than 12, 9, 10, and 15.
How efficient is quickselect? Partitioning an n-element array always requires
n − 1 key comparisons. If it produces the split that solves the selection problem
without requiring more iterations, then for this best case we obtain C_best(n) =
n − 1 ∈ Θ(n). Unfortunately, the algorithm can produce an extremely unbalanced
partition of a given array, with one part being empty and the other containing n − 1
elements. In the worst case, this can happen on each of the n − 1 iterations. (For
a specific example of the worst-case input, consider, say, the case of k = n and a
strictly increasing array.) This implies that

C_worst(n) = (n − 1) + (n − 2) + . . . + 1 = (n − 1)n/2 ∈ Θ(n²),
which compares poorly with the straightforward sorting-based approach
mentioned in the beginning of our selection problem discussion. Thus, the usefulness of
the partition-based algorithm depends on the algorithm's efficiency in the average
case. Fortunately, a careful mathematical analysis has shown that the average-case
efficiency is linear. In fact, computer scientists have discovered a more
sophisticated way of choosing a pivot in quickselect that guarantees linear time even in
the worst case [Blo73], but it is too complicated to be recommended for practical
applications.
It is also worth noting that the partition-based algorithm solves a somewhat
more general problem of identifying the k smallest and n − k largest elements of
a given list, not just the value of its kth smallest element.
Interpolation Search
As the next example of a variable-size-decrease algorithm, we consider an
algorithm for searching in a sorted array called interpolation search. Unlike binary
search, which always compares a search key with the middle value of a given sorted
array (and hence reduces the problem's instance size by half), interpolation search
takes into account the value of the search key in order to find the array's element
to be compared with the search key. In a sense, the algorithm mimics the way we
search for a name in a telephone book: if we are searching for someone named
Brown, we open the book not in the middle but very close to the beginning, unlike
our action when searching for someone named, say, Smith.

[Figure 4.14 plots the array values against their indices, with the straight line
through the points (l, A[l]) and (r, A[r]) determining the probe index x for the
search value v.]

FIGURE 4.14 Index computation in interpolation search.
More precisely, on the iteration dealing with the array's portion between the
leftmost element A[l] and the rightmost element A[r], the algorithm assumes
that the array values increase linearly, i.e., along the straight line through the
points (l, A[l]) and (r, A[r]). (The accuracy of this assumption can influence the
algorithm's efficiency but not its correctness.) Accordingly, the search key's value
v is compared with the element whose index is computed as (the round-off of)
the x coordinate of the point on the straight line through the points (l, A[l]) and
(r, A[r]) whose y coordinate is equal to the search value v (Figure 4.14).
Writing down a standard equation for the straight line passing through the
points (l, A[l]) and (r, A[r]), substituting v for y, and solving it for x leads to the
following formula:

x = l + ⌊(v − A[l])(r − l) / (A[r] − A[l])⌋.     (4.6)
The logic behind this approach is quite straightforward. We know that the
array values are increasing (more accurately, not decreasing) from A[l] to A[r],
but we do not know how they do it. Had these values increased linearly, which is
the simplest manner possible, the index computed by formula (4.6) would be the
expected location of the array's element with the value equal to v. Of course, if v
is not between A[l] and A[r], formula (4.6) need not be applied (why?).
After comparing v with A[x], the algorithm either stops (if they are equal)
or proceeds by searching in the same manner among the elements indexed either
between l and x − 1 or between x + 1 and r, depending on whether A[x] is smaller
or larger than v. Thus, the size of the problem's instance is reduced, but we cannot
tell a priori by how much.
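A Python sketch of interpolation search follows; the guards for v lying outside [A[l], A[r]] and for equal endpoint values (which would make formula (4.6) divide by zero) are our additions.

def interpolation_search(a, v):
    """Search for v in a sorted list a; return its index or -1."""
    l, r = 0, len(a) - 1
    while l <= r:
        if v < a[l] or v > a[r]:          # v cannot lie in a[l..r]
            return -1
        if a[l] == a[r]:                  # avoid division by zero
            return l if a[l] == v else -1
        # probe index from formula (4.6)
        x = l + (v - a[l]) * (r - l) // (a[r] - a[l])
        if a[x] == v:
            return x
        elif a[x] < v:
            l = x + 1
        else:
            r = x - 1
    return -1

For the 13-element array used for binary search and v = 70, the first probe is index 8 and the second probe finds 70 at index 7.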
The analysis of the algorithm's efficiency shows that interpolation search uses
fewer than log₂ log₂ n + 1 key comparisons on the average when searching in a list
of n random keys. This function grows so slowly that the number of comparisons
is a very small constant for all practically feasible inputs (see Problem 6 in this
section's exercises). But in the worst case, interpolation search is only linear, which
must be considered a bad performance (why?).
Assessing the worthiness of interpolation search versus that of binary search,
Robert Sedgewick wrote in the second edition of his Algorithms that binary search
is probably better for smaller files but interpolation search is worth considering
for large files and for applications where comparisons are particularly expensive
or access costs are very high. Note that in Section 12.4 we discuss a continuous
counterpart of interpolation search, which can be seen as one more example of a
variable-size-decrease algorithm.
Searching and Insertion in a Binary Search Tree
Let us revisit the binary search tree. Recall that this is a binary tree whose nodes
contain elements of a set of orderable items, one element per node, so that for
every node all elements in the left subtree are smaller and all the elements in the
right subtree are greater than the element in the subtree's root. When we need to
search for an element of a given value v in such a tree, we do it recursively in the
following manner. If the tree is empty, the search ends in failure. If the tree is not
empty, we compare v with the tree's root K(r). If they match, a desired element
is found and the search can be stopped; if they do not match, we continue with
the search in the left subtree of the root if v < K(r) and in the right subtree if
v > K(r). Thus, on each iteration of the algorithm, the problem of searching in a
binary search tree is reduced to searching in a smaller binary search tree. The most
sensible measure of the size of a search tree is its height; obviously, the decrease in
a tree's height normally changes from one iteration to another of the binary tree
search, thus giving us an excellent example of a variable-size-decrease algorithm.
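In Python, the search and the nearly identical insertion can be sketched as follows; the plain node class is our choice, and no balancing is attempted.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_search(root, v):
    """Return the node containing v, or None if the search fails."""
    if root is None:
        return None                       # empty tree: failure
    if v == root.key:
        return root
    subtree = root.left if v < root.key else root.right
    return bst_search(subtree, v)         # search a smaller tree

def bst_insert(root, v):
    """Insert v, following the same path a search for v would take."""
    if root is None:
        return Node(v)
    if v < root.key:
        root.left = bst_insert(root.left, v)
    elif v > root.key:
        root.right = bst_insert(root.right, v)
    return root                           # duplicates are ignored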
In the worst case of the binary tree search, the tree is severely skewed.
This happens, in particular, if a tree is constructed by successive insertions of an
increasing or decreasing sequence of keys (Figure 4.15).
Obviously, the search for aₙ₋₁ in such a tree requires n comparisons, making
the worst-case efficiency of the search operation fall into Θ(n). Fortunately, the
average-case efficiency turns out to be in Θ(log n). More precisely, the number of
key comparisons needed for a search in a binary search tree built from n random
keys is about 2 ln n ≈ 1.39 log₂ n. Since insertion of a new key into a binary search
tree is almost identical to that of searching there, it also exemplifies the
variable-size-decrease technique and has the same efficiency characteristics as the search
operation.
FIGURE 4.15 Binary search trees for (a) an increasing sequence of keys and (b) a
decreasing sequence of keys.
The Game of Nim
There are several well-known games that share the following features. There are
two players, who move in turn. No randomness or hidden information is permitted:
all players know all information about gameplay. A game is impartial: each player
has the same moves available from the same game position. Each of a finite
number of available moves leads to a smaller instance of the same game. The
game ends with a win by one of the players (there are no ties). The winner is the
last player who is able to move.
A prototypical example of such games is Nim. Generally, the game is played
with several piles of chips, but we consider the one-pile version first. Thus, there is
a single pile of n chips. Two players take turns by removing from the pile at least
one and at most m chips; the number of chips taken may vary from one move to
another, but both the lower and upper limits stay the same. Who wins the game
by taking the last chip, the player moving first or second, if both players make the
best moves possible?
Let us call an instance of the game a winning position for the player to
move next if that player has a winning strategy, i.e., a sequence of moves that
results in a victory no matter what moves the opponent makes. Let us call an
instance of the game a losing position for the player to move next if every move
available for that player leads to a winning position for the opponent. The standard
approach to determining which positions are winning and which are losing is to
investigate small values of n first. It is logical to consider the instance of n = 0 as
a losing one for the player to move next because this player is the first one who
cannot make a move. Any instance with 1 ≤ n ≤ m chips is obviously a winning
position for the player to move next (why?). The instance with n = m + 1 chips
is a losing one because taking any allowed number of chips puts the opponent in
a winning position. (See an illustration for m = 4 in Figure 4.16.) Any instance
with m + 2 ≤ n ≤ 2m + 1 chips is a winning position for the player to move next
because there is a move that leaves the opponent with m + 1 chips, which is a losing
FIGURE 4.16 Illustration of one-pile Nim with the maximum number of chips that may
be taken on each move m = 4. The numbers indicate n, the number of
chips in the pile. The losing positions for the player to move are circled.
Only winning moves from the winning positions are shown (in bold).
position. 2m + 2 = 2(m + 1) chips is the next losing position, and so on. It is not
difficult to see the pattern that can be formally proved by mathematical induction:
an instance with n chips is a winning position for the player to move next if and only
if n is not a multiple of m + 1. The winning strategy is to take n mod (m + 1) chips
on every move; any deviation from this strategy puts the opponent in a winning
position.
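The strategy fits in a couple of lines of Python (the name is ours).

def nim_move(n, m):
    """Chips to take from a one-pile Nim position with n chips when
    1..m may be taken, or None if the position is losing."""
    take = n % (m + 1)
    return take if take > 0 else None     # multiples of m + 1 are losing

With m = 4, nim_move(10, 4) returns None (10 is a circled losing position in Figure 4.16), while nim_move(7, 4) returns 2, leaving the opponent with 5 chips.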
One-pile Nim has been known for a very long time. It appeared, in particular,
as the summation game in the first published book on recreational mathematics,
authored by Claude-Gaspar Bachet, a French aristocrat and mathematician, in
1612: a player picks a positive integer less than, say, 10, and then his opponent and
he take turns adding any integer less than 10; the first player to reach 100 exactly
is the winner [Dud70].
In general, Nim is played with I > 1 piles of chips of sizes n₁, n₂, . . . , n_I. On
each move, a player can take any available number of chips, including all of them,
from any single pile. The goal is the same: to be the last player able to make a
move. Note that for I = 2, it is easy to figure out who wins this game and how.
Here is a hint: the answer for the game's instances with n₁ = n₂ differs from the
answer for those with n₁ ≠ n₂.
A solution to the general case of Nim is quite unexpected because it is based
on the binary representation of the pile sizes. Let b₁, b₂, . . . , b_I be the pile sizes
in binary. Compute their binary digital sum, also known as the nim sum, defined
as the sum of binary digits discarding any carry. (In other words, a binary digit
sᵢ in the sum is 0 if the number of 1s in the ith position in the addends is even,
and it is 1 if the number of 1s is odd.) It turns out that an instance of Nim is a
winning one for the player to move next if and only if its nim sum contains at least
one 1; consequently, Nim's instance is a losing instance if and only if its nim sum
contains only zeros. For example, for the commonly played instance with n₁ = 3,
n₂ = 4, n₃ = 5, the nim sum is
    011
    100
    101
    ---
    010
Since this sum contains a 1, the instance is a winning one for the player moving
first. To find a winning move from this position, the player needs to change one of
the three bit strings so that the new nim sum contains only 0s. It is not difficult to
see that the only way to accomplish this is to remove two chips from the first pile.
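Since the nim sum is exactly the bitwise exclusive-or of the pile sizes, this test is a one-liner in most languages. Here is a short sketch (Python; the helper names are illustrative) that evaluates a Nim position and finds a winning move when one exists.

    from functools import reduce

    def nim_sum(piles):
        """Binary digital sum of the pile sizes: bitwise XOR."""
        return reduce(lambda x, y: x ^ y, piles, 0)

    def winning_move(piles):
        """Return (pile index, new pile size) that makes the nim sum 0,
        or None if the position is losing for the player to move."""
        s = nim_sum(piles)
        if s == 0:
            return None
        for i, n in enumerate(piles):
            if n ^ s < n:        # this pile can be shrunk to n XOR s
                return i, n ^ s

    print(winning_move([3, 4, 5]))   # (0, 1): remove two chips from pile 1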
This ingenious solution to the game of Nim was discovered by Harvard mathematics
professor C. L. Bouton more than 100 years ago. Since then, mathematicians
have developed a much more general theory of such games. An excellent
account of this theory, with applications to many specific games, is given in the
monograph by E. R. Berlekamp, J. H. Conway, and R. K. Guy [Ber03].
Exercises 4.5
1. a. If we measure an instance size of computing the greatest common divisor
      of m and n by the size of the second number n, by how much can the size
      decrease after one iteration of Euclid's algorithm?
   b. Prove that an instance size will always decrease at least by a factor of two
      after two successive iterations of Euclid's algorithm.
2. Apply quickselect to find the median of the list of numbers 9, 12, 5, 17, 20,
   30, 8.
3. Write pseudocode for a nonrecursive implementation of quickselect.
4. Derive the formula underlying interpolation search.
5. Give an example of the worst-case input for interpolation search and show
   that the algorithm is linear in the worst case.
6. a. Find the smallest value of n for which ⌈log₂ log₂ n⌉ + 1 is greater than 6.
   b. Determine which, if any, of the following assertions are true:
      i. log log n ∈ o(log n)   ii. log log n ∈ Θ(log n)   iii. log log n ∈ Ω(log n)
7. a. Outline an algorithm for finding the largest key in a binary search tree.
      Would you classify your algorithm as a variable-size-decrease algorithm?
   b. What is the time efficiency class of your algorithm in the worst case?
8. a. Outline an algorithm for deleting a key from a binary search tree. Would
      you classify this algorithm as a variable-size-decrease algorithm?
   b. What is the time efficiency class of your algorithm in the worst case?
9. Outline a variable-size-decrease algorithm for constructing an Eulerian circuit
   in a connected graph with all vertices of even degrees.
10. Misère one-pile Nim Consider the so-called misère version of the one-pile
    Nim, in which the player taking the last chip loses the game. All the other
    conditions of the game remain the same, i.e., the pile contains n chips and on
    each move a player takes at least one but no more than m chips. Identify the
    winning and losing positions (for the player to move next) in this game.
11. a. Moldy chocolate Two players take turns by breaking an m × n chocolate
       bar, which has one spoiled 1 × 1 square. Each break must be a single
       straight line cutting all the way across the bar along the boundaries between
       the squares. After each break, the player who broke the bar last eats the
       piece that does not contain the spoiled square. The player left with the
       spoiled square loses the game. Is it better to go first or second in this game?
    b. Write an interactive program to play this game with the computer. Your
       program should make a winning move in a winning position and a random
       legitimate move in a losing position.
12. Flipping pancakes There are n pancakes all of different sizes that are stacked
    on top of each other. You are allowed to slip a flipper under one of the
    pancakes and flip over the whole stack above the flipper. The purpose is to
    arrange pancakes according to their size with the biggest at the bottom. (You
    can see a visualization of this puzzle on the Interactive Mathematics Miscellany
    and Puzzles site [Bog].) Design an algorithm for solving this puzzle.
13. You need to search for a given number in an n × n matrix in which every
    row and every column is sorted in increasing order. Can you design an O(n)
    algorithm for this problem? [Laa10]
SUMMARY
Decrease-and-conquer is a general algorithm design technique, based on
exploiting a relationship between a solution to a given instance of a problem
and a solution to a smaller instance of the same problem. Once such a
relationship is established, it can be exploited either top down (usually
recursively) or bottom up.
There are three major variations of decrease-and-conquer:
  - decrease-by-a-constant, most often by one (e.g., insertion sort)
  - decrease-by-a-constant-factor, most often by the factor of two (e.g., binary
    search)
  - variable-size-decrease (e.g., Euclid's algorithm)
Insertion sort is a direct application of the decrease-(by one)-and-conquer
technique to the sorting problem. It is a Θ(n²) algorithm both in the worst
and average cases, but it is about twice as fast on average than in the worst case.
The algorithm's notable advantage is a good performance on almost-sorted
arrays.
A digraph is a graph with directions on its edges. The topological sorting
problem asks to list vertices of a digraph in an order such that for every edge
of the digraph, the vertex it starts at is listed before the vertex it points to.
This problem has a solution if and only if a digraph is a dag (directed acyclic
graph), i.e., it has no directed cycles.
There are two algorithms for solving the topological sorting problem. The first
one is based on depth-first search; the second is based on a direct application
of the decrease-by-one technique.
The decrease-by-one technique is a natural approach to developing algorithms
for generating elementary combinatorial objects. The most efficient
class of such algorithms are minimal-change algorithms. However, the number
of combinatorial objects grows so fast that even the best algorithms are
of practical interest only for very small instances of such problems.
Binary search is a very efficient algorithm for searching in a sorted array. It
is a principal example of a decrease-by-a-constant-factor algorithm. Other
examples include exponentiation by squaring, identifying a fake coin with a
balance scale, Russian peasant multiplication, and the Josephus problem.
For some decrease-and-conquer algorithms, the size reduction varies from
one iteration of the algorithm to another. Examples of such variable-size-decrease
algorithms include Euclid's algorithm, the partition-based algorithm
for the selection problem, interpolation search, and searching and insertion in
a binary search tree. Nim exemplifies games that proceed through a series of
diminishing instances of the same game.
5
Divide-and-Conquer
Whatever man prays for, he prays for a miracle. Every prayer reduces itself
to this: "Great God, grant that twice two be not four."
Ivan Turgenev (1818–1883), Russian novelist and short-story writer
Divide-and-conquer is probably the best-known general algorithm design
technique. Though its fame may have something to do with its catchy name, it
is well deserved: quite a few very efficient algorithms are specific implementations
of this general strategy. Divide-and-conquer algorithms work according to the
following general plan:
1. A problem is divided into several subproblems of the same type, ideally of
   about equal size.
2. The subproblems are solved (typically recursively, though sometimes a different
   algorithm is employed, especially when subproblems become small
   enough).
3. If necessary, the solutions to the subproblems are combined to get a solution
   to the original problem.
The divide-and-conquer technique is diagrammed in Figure 5.1, which depicts
the case of dividing a problem into two smaller subproblems, by far the most widely
occurring case (at least for divide-and-conquer algorithms designed to be executed
on a single-processor computer).
As an example, let us consider the problem of computing the sum of n numbers
a0, . . . , a_{n−1}. If n > 1, we can divide the problem into two instances of the same
problem: to compute the sum of the first ⌊n/2⌋ numbers and to compute the sum
of the remaining ⌈n/2⌉ numbers. (Of course, if n = 1, we simply return a0 as the
answer.) Once each of these two sums is computed by applying the same method
recursively, we can add their values to get the sum in question:

    a0 + . . . + a_{n−1} = (a0 + . . . + a_{⌊n/2⌋−1}) + (a_{⌊n/2⌋} + . . . + a_{n−1}).
Is this an efficient way to compute the sum of n numbers? A moment of
reflection (why could it be more efficient than the brute-force summation?), a
small example of summing, say, four numbers by this algorithm, a formal analysis
(which follows), and common sense (we do not normally compute sums this way,
do we?) all lead to a negative answer to this question.¹

FIGURE 5.1 Divide-and-conquer technique (typical case).
Thus, not every divide-and-conquer algorithm is necessarily more efficient
than even a brute-force solution. But often our prayers to the Goddess of
Algorithmics (see the chapter's epigraph) are answered, and the time spent on
executing the divide-and-conquer plan turns out to be significantly smaller than
solving a problem by a different method. In fact, the divide-and-conquer approach
yields some of the most important and efficient algorithms in computer science.
We discuss a few classic examples of such algorithms in this chapter. Though we
consider only sequential algorithms here, it is worth keeping in mind that the
divide-and-conquer technique is ideally suited for parallel computations, in which
each subproblem can be solved simultaneously by its own processor.
1. Actually, the divide-and-conquer algorithm, called the pairwise summation, may substantially reduce
the accumulated round-off error of the sum of numbers that can be represented only approximately
in a digital computer [Hig93].
As mentioned above, in the most typical case of divide-and-conquer a problem's
instance of size n is divided into two instances of size n/2. More generally,
an instance of size n can be divided into b instances of size n/b, with a of them
needing to be solved. (Here, a and b are constants; a ≥ 1 and b > 1.) Assuming
that size n is a power of b to simplify our analysis, we get the following recurrence
for the running time T(n):

    T(n) = aT(n/b) + f(n),     (5.1)

where f(n) is a function that accounts for the time spent on dividing an instance
of size n into instances of size n/b and combining their solutions. (For the sum
example above, a = b = 2 and f(n) = 1.) Recurrence (5.1) is called the general
divide-and-conquer recurrence. Obviously, the order of growth of its solution T(n)
depends on the values of the constants a and b and the order of growth of the
function f(n). The efficiency analysis of many divide-and-conquer algorithms is
greatly simplified by the following theorem (see Appendix B).

Master Theorem If f(n) ∈ Θ(n^d) where d ≥ 0 in recurrence (5.1), then

             Θ(n^d)          if a < b^d,
    T(n) ∈   Θ(n^d log n)    if a = b^d,
             Θ(n^{log_b a})  if a > b^d.

Analogous results hold for the O and Ω notations, too.
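Applying the theorem is mechanical, as the following small sketch shows (Python; it assumes f(n) ∈ Θ(n^d) and simply compares a with b^d):

    from math import isclose, log

    def master_theorem(a, b, d):
        """Order of growth of T(n) = a T(n/b) + Theta(n^d), a >= 1, b > 1, d >= 0."""
        if isclose(a, b ** d):
            return f"Theta(n^{d} log n)"
        if a < b ** d:
            return f"Theta(n^{d})"
        return f"Theta(n^{log(a, b):.3f})"    # exponent is log_b a

    print(master_theorem(2, 2, 0))   # summation example: Theta(n^1.000) = Theta(n)
    print(master_theorem(2, 2, 1))   # mergesort, Section 5.1: Theta(n^1 log n)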
For example, the recurrence for the number of additions A(n) made by the
divide-and-conquer sum-computation algorithm (see above) on inputs of size
n = 2^k is

    A(n) = 2A(n/2) + 1.

Thus, for this example, a = 2, b = 2, and d = 0; hence, since a > b^d,

    A(n) ∈ Θ(n^{log_b a}) = Θ(n^{log₂ 2}) = Θ(n).

Note that we were able to find the solution's efficiency class without going through
the drudgery of solving the recurrence. But, of course, this approach can only establish
a solution's order of growth to within an unknown multiplicative constant,
whereas solving a recurrence equation with a specific initial condition yields an
exact answer (at least for n's that are powers of b).
It is also worth pointing out that if a = 1, recurrence (5.1) covers decrease-by-a-constant-factor
algorithms discussed in the previous chapter. In fact, some
people consider algorithms such as binary search to be degenerate cases of divide-and-conquer,
in which just one of two subproblems of half the size needs to be solved.
It is better not to do this and to consider decrease-by-a-constant-factor and divide-and-conquer
as different design paradigms.
5.1 Mergesort
Mergesort is a perfect example of a successful application of the divide-and-conquer
technique. It sorts a given array A[0..n−1] by dividing it into two halves
A[0..⌊n/2⌋−1] and A[⌊n/2⌋..n−1], sorting each of them recursively, and then
merging the two smaller sorted arrays into a single sorted one.

ALGORITHM Mergesort(A[0..n−1])
//Sorts array A[0..n−1] by recursive mergesort
//Input: An array A[0..n−1] of orderable elements
//Output: Array A[0..n−1] sorted in nondecreasing order
if n > 1
    copy A[0..⌊n/2⌋−1] to B[0..⌊n/2⌋−1]
    copy A[⌊n/2⌋..n−1] to C[0..⌈n/2⌉−1]
    Mergesort(B[0..⌊n/2⌋−1])
    Mergesort(C[0..⌈n/2⌉−1])
    Merge(B, C, A) //see below
The merging of two sorted arrays can be done as follows. Two pointers (array
indices) are initialized to point to the first elements of the arrays being merged.
The elements pointed to are compared, and the smaller of them is added to a new
array being constructed; after that, the index of the smaller element is incremented
to point to its immediate successor in the array it was copied from. This operation
is repeated until one of the two given arrays is exhausted, and then the remaining
elements of the other array are copied to the end of the new array.

ALGORITHM Merge(B[0..p−1], C[0..q−1], A[0..p+q−1])
//Merges two sorted arrays into one sorted array
//Input: Arrays B[0..p−1] and C[0..q−1] both sorted
//Output: Sorted array A[0..p+q−1] of the elements of B and C
i ← 0; j ← 0; k ← 0
while i < p and j < q do
    if B[i] ≤ C[j]
        A[k] ← B[i]; i ← i + 1
    else A[k] ← C[j]; j ← j + 1
    k ← k + 1
if i = p
    copy C[j..q−1] to A[k..p+q−1]
else copy B[i..p−1] to A[k..p+q−1]
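For readers who prefer running code, here is a direct Python transcription of the two routines above (a sketch; it sorts a list in place):

    def mergesort(a):
        """Sort list a in nondecreasing order by recursive mergesort."""
        if len(a) > 1:
            mid = len(a) // 2
            b, c = a[:mid], a[mid:]      # copy the two halves
            mergesort(b)
            mergesort(c)
            merge(b, c, a)

    def merge(b, c, a):
        """Merge sorted lists b and c into a, where len(a) == len(b) + len(c)."""
        i = j = k = 0
        while i < len(b) and j < len(c):
            if b[i] <= c[j]:
                a[k] = b[i]; i += 1
            else:
                a[k] = c[j]; j += 1
            k += 1
        a[k:] = c[j:] if i == len(b) else b[i:]   # copy the unexhausted tail

    nums = [8, 3, 2, 9, 7, 1, 5, 4]
    mergesort(nums)
    print(nums)   # [1, 2, 3, 4, 5, 7, 8, 9]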
The operation of the algorithm on the list 8, 3, 2, 9, 7, 1, 5, 4 is illustrated in
Figure 5.2.
FIGURE 5.2 Example of mergesort operation.
How efficient is mergesort? Assuming for simplicity that n is a power of 2, the
recurrence relation for the number of key comparisons C(n) is

    C(n) = 2C(n/2) + C_merge(n) for n > 1,  C(1) = 0.

Let us analyze C_merge(n), the number of key comparisons performed during the
merging stage. At each step, exactly one comparison is made, after which the total
number of elements in the two arrays still needing to be processed is reduced
by 1. In the worst case, neither of the two arrays becomes empty before the
other one contains just one element (e.g., smaller elements may come from the
alternating arrays). Therefore, for the worst case, C_merge(n) = n − 1, and we have
the recurrence

    C_worst(n) = 2C_worst(n/2) + n − 1 for n > 1,  C_worst(1) = 0.

Hence, according to the Master Theorem, C_worst(n) ∈ Θ(n log n) (why?). In fact,
it is easy to find the exact solution to the worst-case recurrence for n = 2^k:

    C_worst(n) = n log₂ n − n + 1.
The number of key comparisons made by mergesort in the worst case comes
very close to the theoretical minimum² that any general comparison-based sorting
algorithm can have. For large n, the number of comparisons made by this algorithm
in the average case turns out to be about 0.25n less (see [Gon91, p. 173])
and hence is also in Θ(n log n). A noteworthy advantage of mergesort over quicksort
and heapsort (the two important advanced sorting algorithms to be discussed
later) is its stability (see Problem 7 in this section's exercises). The principal shortcoming
of mergesort is the linear amount of extra storage the algorithm requires.
Though merging can be done in-place, the resulting algorithm is quite complicated
and of theoretical interest only.
There are two main ideas leading to several variations of mergesort. First, the
algorithm can be implemented bottom up by merging pairs of the array's elements,
then merging the sorted pairs, and so on. (If n is not a power of 2, only slight
bookkeeping complications arise.) This avoids the time and space overhead of
using a stack to handle recursive calls; see the sketch after this paragraph. Second,
we can divide a list to be sorted in more than two parts, sort each recursively, and
then merge them together. This scheme, which is particularly useful for sorting
files residing on secondary memory devices, is called multiway mergesort.
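As a sketch of the first variation (reusing the merge routine from the previous code sample), a bottom-up mergesort merges runs of width 1, 2, 4, . . . without any recursion:

    def bottom_up_mergesort(a):
        """Iterative mergesort: repeatedly merge adjacent runs of equal width."""
        width = 1
        while width < len(a):
            for lo in range(0, len(a), 2 * width):
                left = a[lo:lo + width]
                right = a[lo + width:lo + 2 * width]
                run = a[lo:lo + 2 * width]
                merge(left, right, run)        # merge from the sketch above
                a[lo:lo + 2 * width] = run
            width *= 2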
Exercises 5.1
1. a. Write pseudocode for a divide-and-conquer algorithm for finding the
      position of the largest element in an array of n numbers.
   b. What will be your algorithm's output for arrays with several elements of
      the largest value?
   c. Set up and solve a recurrence relation for the number of key comparisons
      made by your algorithm.
   d. How does this algorithm compare with the brute-force algorithm for this
      problem?
2. a. Write pseudocode for a divide-and-conquer algorithm for finding values
      of both the largest and smallest elements in an array of n numbers.
   b. Set up and solve (for n = 2^k) a recurrence relation for the number of key
      comparisons made by your algorithm.
   c. How does this algorithm compare with the brute-force algorithm for this
      problem?
3. a. Write pseudocode for a divide-and-conquer algorithm for the exponentiation
      problem of computing a^n where n is a positive integer.
   b. Set up and solve a recurrence relation for the number of multiplications
      made by this algorithm.
2. As we shall see in Section 11.2, this theoretical minimum is ⌈log₂ n!⌉ ≈ ⌈n log₂ n − 1.44n⌉.
c. How does this algorithm compare with the brute-force algorithm for this
problem?
4. As mentioned in Chapter 2, logarithm bases are irrelevant in most contexts
   arising in analyzing an algorithm's efficiency class. Is this true for both assertions
   of the Master Theorem that include logarithms?
5. Find the order of growth for solutions of the following recurrences.
   a. T(n) = 4T(n/2) + n, T(1) = 1
   b. T(n) = 4T(n/2) + n², T(1) = 1
   c. T(n) = 4T(n/2) + n³, T(1) = 1
6. Apply mergesort to sort the list E, X, A, M, P, L, E in alphabetical order.
7. Is mergesort a stable sorting algorithm?
8. a. Solve the recurrence relation for the number of key comparisons made by
      mergesort in the worst case. You may assume that n = 2^k.
   b. Set up a recurrence relation for the number of key comparisons made by
      mergesort on best-case inputs and solve it for n = 2^k.
   c. Set up a recurrence relation for the number of key moves made by the
      version of mergesort given in Section 5.1. Does taking the number of key
      moves into account change the algorithm's efficiency class?
9. Let A[0..n−1] be an array of n real numbers. A pair (A[i], A[j]) is said to
   be an inversion if these numbers are out of order, i.e., i < j but A[i] > A[j].
   Design an O(n log n) algorithm for counting the number of inversions.
10. Implement the bottom-up version of mergesort in the language of your choice.
11. Tromino puzzle A tromino (more accurately, a right tromino) is an L-shaped
    tile formed by three 1 × 1 squares. The problem is to cover any 2^n × 2^n chessboard
    with a missing square with trominoes. Trominoes can be oriented in an
    arbitrary way, but they should cover all the squares of the board except the
    missing one exactly and with no overlaps. [Gol94]
    Design a divide-and-conquer algorithm for this problem.
5.2 Quicksort
Quicksort is the other important sorting algorithm that is based on the divide-and-conquer
approach. Unlike mergesort, which divides its input elements according
to their position in the array, quicksort divides them according to their value.
We already encountered this idea of an array partition in Section 4.5, where we
discussed the selection problem. A partition is an arrangement of the array's
elements so that all the elements to the left of some element A[s] are less than
or equal to A[s], and all the elements to the right of A[s] are greater than or equal
to it:

    A[0] . . . A[s−1]   A[s]   A[s+1] . . . A[n−1]
    (all are ≤ A[s])           (all are ≥ A[s])

Obviously, after a partition is achieved, A[s] will be in its final position in the
sorted array, and we can continue sorting the two subarrays to the left and to the
right of A[s] independently (e.g., by the same method). Note the difference with
mergesort: there, the division of the problem into two subproblems is immediate
and the entire work happens in combining their solutions; here, the entire work
happens in the division stage, with no work required to combine the solutions to
the subproblems.

Here is pseudocode of quicksort: call Quicksort(A[0..n−1]) where

ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: Subarray of array A[0..n−1], defined by its left and right
//       indices l and r
//Output: Subarray A[l..r] sorted in nondecreasing order
if l < r
    s ← Partition(A[l..r]) //s is a split position
    Quicksort(A[l..s−1])
    Quicksort(A[s+1..r])
As a partition algorithm, we can certainly use the Lomuto partition discussed
in Section 4.5. Alternatively, we can partition A[0..n−1] and, more generally, its
subarray A[l..r] (0 ≤ l < r ≤ n−1) by the more sophisticated method suggested by
C.A.R. Hoare, the prominent British computer scientist who invented quicksort.³
3. C.A.R. Hoare, at age 26, invented his algorithm in 1960 while trying to sort words for a machine
translation project from Russian to English. Says Hoare, "My first thought on how to do this was
bubblesort and, by an amazing stroke of luck, my second thought was Quicksort." It is hard to disagree
with his overall assessment: "I have been very lucky. What a wonderful way to start a career in
Computing, by discovering a new sorting algorithm!" [Hoa96]. Twenty years later, he received the
Turing Award for fundamental contributions to the definition and design of programming languages;
in 1980, he was also knighted for services to education and computer science.
As before, we start by selecting a pivot, an element with respect to whose value
we are going to divide the subarray. There are several different strategies for
selecting a pivot; we will return to this issue when we analyze the algorithm's
efficiency. For now, we use the simplest strategy of selecting the subarray's first
element: p = A[l].

Unlike the Lomuto algorithm, we will now scan the subarray from both ends,
comparing the subarray's elements to the pivot. The left-to-right scan, denoted
below by index pointer i, starts with the second element. Since we want elements
smaller than the pivot to be in the left part of the subarray, this scan skips over
elements that are smaller than the pivot and stops upon encountering the first
element greater than or equal to the pivot. The right-to-left scan, denoted below
by index pointer j, starts with the last element of the subarray. Since we want
elements larger than the pivot to be in the right part of the subarray, this scan
skips over elements that are larger than the pivot and stops on encountering the
first element smaller than or equal to the pivot. (Why is it worth stopping the scans
after encountering an element equal to the pivot? Because doing this tends to yield
more even splits for arrays with a lot of duplicates, which makes the algorithm run
faster. For example, if we did otherwise for an array of n equal elements, we would
have gotten a split into subarrays of sizes n − 1 and 0, reducing the problem size
just by 1 after scanning the entire array.)

After both scans stop, three situations may arise, depending on whether or not
the scanning indices have crossed. If scanning indices i and j have not crossed, i.e.,
i < j, we simply exchange A[i] and A[j] and resume the scans by incrementing i
and decrementing j, respectively:

    p | all are ≤ p | ≥ p  . . .  ≤ p | all are ≥ p
                      ↑ i          ↑ j

If the scanning indices have crossed over, i.e., i > j, we will have partitioned the
subarray after exchanging the pivot with A[j]:

    p | all are ≤ p | ≤ p | ≥ p | all are ≥ p
                      ↑ j    ↑ i

Finally, if the scanning indices stop while pointing to the same element, i.e., i = j,
the value they are pointing to must be equal to p (why?). Thus, we have the
subarray partitioned, with the split position s = i = j:

    p | all are ≤ p | = p | all are ≥ p
                     ↑ i = j

We can combine the last case with the case of crossed-over indices (i > j) by
exchanging the pivot with A[j] whenever i ≥ j.
Here is pseudocode implementing this partitioning procedure.
ALGORITHM HoarePartition(A[l..r])
//Partitions a subarray by Hoare's algorithm, using the first element
//       as a pivot
//Input: Subarray of array A[0..n−1], defined by its left and right
//       indices l and r (l < r)
//Output: Partition of A[l..r], with the split position returned as
//        this function's value
p ← A[l]
i ← l; j ← r + 1
repeat
    repeat i ← i + 1 until A[i] ≥ p
    repeat j ← j − 1 until A[j] ≤ p
    swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j]) //undo last swap when i ≥ j
swap(A[l], A[j])
return j
Note that index i can go out of the subarray's bounds in this pseudocode.
Rather than checking for this possibility every time index i is incremented, we can
append to array A[0..n−1] a "sentinel" that would prevent index i from advancing
beyond position n. Note that the more sophisticated method of pivot selection
mentioned at the end of the section makes such a sentinel unnecessary.
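A Python transcription of HoarePartition and Quicksort might look as follows (a sketch; an explicit bounds check on i plays the role of the sentinel discussed above, and the swap-undo of the pseudocode is replaced by testing before swapping):

    def hoare_partition(a, l, r):
        """Partition a[l..r] around the pivot a[l]; return the split position."""
        p = a[l]
        i, j = l, r + 1
        while True:
            i += 1
            while i < r and a[i] < p:     # bounds check instead of a sentinel
                i += 1
            j -= 1
            while a[j] > p:
                j -= 1
            if i >= j:
                break
            a[i], a[j] = a[j], a[i]
        a[l], a[j] = a[j], a[l]           # put the pivot in its final place
        return j

    def quicksort(a, l=0, r=None):
        if r is None:
            r = len(a) - 1
        if l < r:
            s = hoare_partition(a, l, r)
            quicksort(a, l, s - 1)
            quicksort(a, s + 1, r)

    nums = [5, 3, 1, 9, 8, 2, 4, 7]
    quicksort(nums)
    print(nums)   # [1, 2, 3, 4, 5, 7, 8, 9]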
An example of sorting an array by quicksort is given in Figure 5.3.
We start our discussion of quicksort's efficiency by noting that the number
of key comparisons made before a partition is achieved is n + 1 if the scanning
indices cross over and n if they coincide (why?). If all the splits happen in the
middle of corresponding subarrays, we will have the best case. The number of key
comparisons in the best case satisfies the recurrence

    C_best(n) = 2C_best(n/2) + n for n > 1,  C_best(1) = 0.

According to the Master Theorem, C_best(n) ∈ Θ(n log₂ n); solving it exactly for
n = 2^k yields C_best(n) = n log₂ n.

In the worst case, all the splits will be skewed to the extreme: one of the
two subarrays will be empty, and the size of the other will be just 1 less than the
size of the subarray being partitioned. This unfortunate situation will happen, in
particular, for increasing arrays, i.e., for inputs for which the problem is already
solved! Indeed, if A[0..n−1] is a strictly increasing array and we use A[0] as the
pivot, the left-to-right scan will stop on A[1] while the right-to-left scan will go all
the way to reach A[0], indicating the split at position 0:
    A[0]   A[1]   . . .   A[n−1]
     ↑ j    ↑ i

FIGURE 5.3 Example of quicksort operation. (a) Array's transformations with pivots
shown in bold. (b) Tree of recursive calls to Quicksort with input values l
and r of subarray bounds and split position s of a partition obtained.
So, after making n + 1 comparisons to get to this partition and exchanging the
pivot A[0] with itself, the algorithm will be left with the strictly increasing array
A[1..n−1] to sort. This sorting of strictly increasing arrays of diminishing sizes will
continue until the last one A[n−2..n−1] has been processed. The total number
of key comparisons made will be equal to

    C_worst(n) = (n + 1) + n + . . . + 3 = (n + 1)(n + 2)/2 − 3 ∈ Θ(n²).
Thus, the question about the utility of quicksort comes down to its average-case
behavior. Let C_avg(n) be the average number of key comparisons made by
quicksort on a randomly ordered array of size n. A partition can happen in any
position s (0 ≤ s ≤ n−1) after n + 1 comparisons are made to achieve the partition.
After the partition, the left and right subarrays will have s and n − 1 − s elements,
respectively. Assuming that the partition split can happen in each position s with
the same probability 1/n, we get the following recurrence relation:

    C_avg(n) = (1/n) Σ_{s=0}^{n−1} [(n + 1) + C_avg(s) + C_avg(n − 1 − s)] for n > 1,
    C_avg(0) = 0,  C_avg(1) = 0.

Its solution, which is much trickier than the worst- and best-case analyses, turns
out to be

    C_avg(n) ≈ 2n ln n ≈ 1.39n log₂ n.

Thus, on the average, quicksort makes only 39% more comparisons than in the
best case. Moreover, its innermost loop is so efficient that it usually runs faster than
mergesort (and heapsort, another n log n algorithm that we discuss in Chapter 6)
on randomly ordered arrays of nontrivial sizes. This certainly justifies the name
given to the algorithm by its inventor.
Because of quicksort's importance, there have been persistent efforts over the
years to refine the basic algorithm. Among several improvements discovered by
researchers are:
  - better pivot selection methods such as randomized quicksort that uses a
    random element or the median-of-three method that uses the median of the
    leftmost, rightmost, and the middle element of the array
  - switching to insertion sort on very small subarrays (between 5 and 15 elements
    for most computer systems) or not sorting small subarrays at all and finishing
    the algorithm with insertion sort applied to the entire nearly sorted array
  - modifications of the partitioning algorithm such as the three-way partition
    into segments smaller than, equal to, and larger than the pivot (see Problem 9
    in this section's exercises)
According to Robert Sedgewick [Sed11, p. 296], the world's leading expert on
quicksort, such improvements in combination can cut the running time of the
algorithm by 20%–30%.
Like any sorting algorithm, quicksort has weaknesses. It is not stable. It
requires a stack to store parameters of subarrays that are yet to be sorted. While
the size of this stack can be made to be in O(log n) by always sorting first the
smaller of two subarrays obtained by partitioning, it is worse than the O(1) space
efficiency of heapsort. Although more sophisticated ways of choosing a pivot make
the quadratic running time of the worst case very unlikely, they do not eliminate
it completely. And even the performance on randomly ordered arrays is known
to be sensitive not only to implementation details of the algorithm but also to
both computer architecture and data type. Still, the January/February 2000 issue of
Computing in Science & Engineering, a joint publication of the American Institute
of Physics and the IEEE Computer Society, selected quicksort as one of the 10
algorithms with the greatest influence on the development and practice of science
and engineering in the 20th century.
Exercises 5.2
1. Apply quicksort to sort the list E, X, A, M, P, L, E in alphabetical order.
Draw the tree of the recursive calls made.
2. For the partitioning procedure outlined in this section:
   a. Prove that if the scanning indices stop while pointing to the same element,
      i.e., i = j, the value they are pointing to must be equal to p.
   b. Prove that when the scanning indices stop, j cannot point to an element
      more than one position to the left of the one pointed to by i.
3. Give an example showing that quicksort is not a stable sorting algorithm.
4. Give an example of an array of n elements for which the sentinel mentioned
   in the text is actually needed. What should be its value? Also explain why a
   single sentinel suffices for any input.
5. For the version of quicksort given in this section:
a. Are arrays made up of all equal elements the worst-case input, the best-
case input, or neither?
b. Are strictly decreasing arrays the worst-case input, the best-case input, or
neither?
6. a. For quicksort with the median-of-three pivot selection, are strictly increas-
ing arrays the worst-case input, the best-case input, or neither?
b. Answer the same question for strictly decreasing arrays.
7. a. Estimate how many times faster quicksort will sort an array of one million
random numbers than insertion sort.
b. True or false: For every n > 1, there are n-element arrays that are sorted
faster by insertion sort than by quicksort?
8. Design an algorithm to rearrange elements of a given array of n real numbers
   so that all its negative elements precede all its positive elements. Your
   algorithm should be both time efficient and space efficient.
9. a. The Dutch national flag problem is to rearrange an array of characters R,
      W, and B (red, white, and blue are the colors of the Dutch national flag) so
      that all the R's come first, the W's come next, and the B's come last. [Dij76]
      Design a linear in-place algorithm for this problem.
   b. Explain how a solution to the Dutch national flag problem can be used in
      quicksort.
10. Implement quicksort in the language of your choice. Run your program on
    a sample of inputs to verify the theoretical assertions about the algorithm's
    efficiency.
11. Nuts and bolts You are given a collection of n bolts of different widths and
    n corresponding nuts. You are allowed to try a nut and bolt together, from
    which you can determine whether the nut is larger than the bolt, smaller than
    the bolt, or matches the bolt exactly. However, there is no way to compare
    two nuts together or two bolts together. The problem is to match each bolt
    to its nut. Design an algorithm for this problem with average-case efficiency
    in Θ(n log n). [Raw91]
5.3 Binary Tree Traversals and Related Properties
In this section, we see how the divide-and-conquer technique can be applied to
binary trees. A binary tree T is defined as a finite set of nodes that is either empty
or consists of a root and two disjoint binary trees T_L and T_R called, respectively, the
left and right subtree of the root. We usually think of a binary tree as a special case
of an ordered tree (Figure 5.4). (This standard interpretation was an alternative
definition of a binary tree in Section 1.4.)

FIGURE 5.4 Standard representation of a binary tree.

Since the definition itself divides a binary tree into two smaller structures of
the same type, the left subtree and the right subtree, many problems about binary
trees can be solved by applying the divide-and-conquer technique. As an example,
let us consider a recursive algorithm for computing the height of a binary tree.
Recall that the height is defined as the length of the longest path from the root to
a leaf. Hence, it can be computed as the maximum of the heights of the root's left
and right subtrees plus 1. (We have to add 1 to account for the extra level of the
root.) Also note that it is convenient to define the height of the empty tree as −1.
Thus, we have the following recursive algorithm.
ALGORITHM Height(T)
//Computes recursively the height of a binary tree
//Input: A binary tree T
//Output: The height of T
if T = ∅ return −1
else return max{Height(T_left), Height(T_right)} + 1
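In a language with explicit node objects the algorithm is equally short. Here is a sketch with a minimal Node class (the class itself is our assumption, not the book's):

    class Node:
        def __init__(self, key=None, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def height(t):
        """Height of binary tree t; the empty tree (None) has height -1."""
        return -1 if t is None else 1 + max(height(t.left), height(t.right))

    print(height(None))                        # -1
    print(height(Node('a', left=Node('b'))))   # 1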
We measure the problem's instance size by the number of nodes n(T) in a
given binary tree T. Obviously, the number of comparisons made to compute
the maximum of two numbers and the number of additions A(n(T)) made by the
algorithm are the same. We have the following recurrence relation for A(n(T)):

    A(n(T)) = A(n(T_left)) + A(n(T_right)) + 1 for n(T) > 0,
    A(0) = 0.

Before we solve this recurrence (can you tell what its solution is?), let us note
that addition is not the most frequently executed operation of this algorithm. What
is? Checking, and this is very typical for binary tree algorithms, that the tree is
not empty. For example, for the empty tree, the comparison T = ∅ is executed
once but there are no additions, and for a single-node tree, the comparison and
addition numbers are 3 and 1, respectively.

It helps in the analysis of tree algorithms to draw the tree's extension by
replacing the empty subtrees by special nodes. The extra nodes (shown by little
squares in Figure 5.5) are called external; the original nodes (shown by little
circles) are called internal. By definition, the extension of the empty binary tree
is a single external node.
FIGURE 5.5 Binary tree (on the left) and its extension (on the right). Internal nodes are
shown as circles; external nodes are shown as squares.

It is easy to see that the Height algorithm makes exactly one addition for every
internal node of the extended tree, and it makes one comparison to check whether
the tree is empty for every internal and external node. Therefore, to ascertain the
algorithm's efficiency, we need to know how many external nodes an extended
binary tree with n internal nodes can have. After checking Figure 5.5 and a few
similar examples, it is easy to hypothesize that the number of external nodes x is
always 1 more than the number of internal nodes n:

    x = n + 1.     (5.2)

To prove this equality, consider the total number of nodes, both internal and
external. Since every node, except the root, is one of the two children of an internal
node, we have the equation

    2n + 1 = x + n,

which immediately implies equality (5.2).

Note that equality (5.2) also applies to any nonempty full binary tree, in
which, by definition, every node has either zero or two children: for a full binary
tree, n and x denote the numbers of parental nodes and leaves, respectively.

Returning to algorithm Height, the number of comparisons to check whether
the tree is empty is

    C(n) = n + x = 2n + 1,

and the number of additions is

    A(n) = n.
The most important divide-and-conquer algorithms for binary trees are the
three classic traversals: preorder, inorder, and postorder. All three traversals visit
nodes of a binary tree recursively, i.e., by visiting the tree's root and its left and
right subtrees. They differ only by the timing of the root's visit:
  - In the preorder traversal, the root is visited before the left and right subtrees
    are visited (in that order).
  - In the inorder traversal, the root is visited after visiting its left subtree but
    before visiting the right subtree.
  - In the postorder traversal, the root is visited after visiting the left and right
    subtrees (in that order).
These traversals are illustrated in Figure 5.6. Their pseudocodes are quite
straightforward, repeating the descriptions given above. (These traversals are also
a standard feature of data structures textbooks.) As to their efficiency analysis, it
is identical to the above analysis of the Height algorithm because a recursive call
is made for each node of an extended binary tree.
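A sketch of the three traversals (Python, reusing the minimal Node class from the height example) makes the shared recursive skeleton visible; only the position of the root's visit changes:

    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def preorder(t):      # root, then left and right subtrees
        return [t.key] + preorder(t.left) + preorder(t.right) if t else []

    def inorder(t):       # left subtree, root, right subtree
        return inorder(t.left) + [t.key] + inorder(t.right) if t else []

    def postorder(t):     # left and right subtrees, then root
        return postorder(t.left) + postorder(t.right) + [t.key] if t else []

    # The tree of Figure 5.6:
    tree = Node('a',
                Node('b', Node('d', right=Node('g')), Node('e')),
                Node('c', Node('f')))
    print(preorder(tree))    # ['a', 'b', 'd', 'g', 'e', 'c', 'f']
    print(inorder(tree))     # ['d', 'g', 'b', 'e', 'a', 'f', 'c']
    print(postorder(tree))   # ['g', 'd', 'e', 'b', 'f', 'c', 'a']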
Finally, we should note that, obviously, not all questions about binary trees
require traversals of both left and right subtrees. For example, the search and insert
operations for a binary search tree require processing only one of the two subtrees.
Accordingly, we considered them in Section 4.5 not as applications of divide-and-conquer
but rather as examples of the variable-size-decrease technique.

FIGURE 5.6 Binary tree and its traversals: preorder a, b, d, g, e, c, f; inorder d, g, b, e,
a, f, c; postorder g, d, e, b, f, c, a.
Exercises 5.3
1. Design a divide-and-conquer algorithm for computing the number of levels in
   a binary tree. (In particular, the algorithm must return 0 and 1 for the empty
   and single-node trees, respectively.) What is the time efficiency class of your
   algorithm?
2. The following algorithm seeks to compute the number of leaves in a binary
   tree.

   ALGORITHM LeafCounter(T)
   //Computes recursively the number of leaves in a binary tree
   //Input: A binary tree T
   //Output: The number of leaves in T
   if T = ∅ return 0
   else return LeafCounter(T_left) + LeafCounter(T_right)

   Is this algorithm correct? If it is, prove it; if it is not, make an appropriate
   correction.
3. Can you compute the height of a binary tree with the same asymptotic efficiency
   as the section's divide-and-conquer algorithm but without using a
   stack explicitly or implicitly? Of course, you may use a different algorithm
   altogether.
4. Prove equality (5.2) by mathematical induction.
5. Traverse the following binary tree
a. in preorder.
b. in inorder.
c. in postorder.
[binary tree figure for this exercise: nodes a, b, c, d, e, f]
6. Write pseudocode for one of the classic traversal algorithms (preorder, inorder,
   and postorder) for binary trees. Assuming that your algorithm is recursive,
   find the number of recursive calls made.
7. Which of the three classic traversal algorithms yields a sorted list if applied to
a binary search tree? Prove this property.
8. a. Draw a binary tree with 10 nodes labeled 0, 1, . . . , 9 in such a way that the
      inorder and postorder traversals of the tree yield the following lists: 9, 3,
      1, 0, 4, 2, 7, 6, 8, 5 (inorder) and 9, 1, 4, 0, 3, 6, 7, 5, 8, 2 (postorder).
   b. Give an example of two permutations of the same n labels 0, 1, . . . , n−1
      that cannot be inorder and postorder traversal lists of the same binary tree.
   c. Design an algorithm that constructs a binary tree for which two given
      lists of n labels 0, 1, . . . , n−1 are generated by the inorder and postorder
      traversals of the tree. Your algorithm should also identify inputs for which
      the problem has no solution.
9. The internal path length I of an extended binary tree is defined as the sum
   of the lengths of the paths, taken over all internal nodes, from the root to
   each internal node. Similarly, the external path length E of an extended binary
   tree is defined as the sum of the lengths of the paths, taken over all external
   nodes, from the root to each external node. Prove that E = I + 2n where n
   is the number of internal nodes in the tree.
10. Write a program for computing the internal path length of an extended binary
    tree. Use it to investigate empirically the average number of key comparisons
    for searching in a randomly generated binary search tree.
11. Chocolate bar puzzle Given an n × m chocolate bar, you need to break it
    into nm 1 × 1 pieces. You can break a bar only in a straight line, and only one
    bar can be broken at a time. Design an algorithm that solves the problem with
    the minimum number of bar breaks. What is this minimum number? Justify
    your answer by using properties of a binary tree.
5.4 Multiplication of Large Integers and
    Strassen's Matrix Multiplication

In this section, we examine two surprising algorithms for seemingly straightforward
tasks: multiplying two integers and multiplying two square matrices. Both
achieve a better asymptotic efficiency by ingenious application of the divide-and-conquer
technique.

Multiplication of Large Integers

Some applications, notably modern cryptography, require manipulation of integers
that are over 100 decimal digits long. Since such integers are too long to fit in
a single word of a modern computer, they require special treatment. This practical
need supports investigations of algorithms for efficient manipulation of large
integers. In this section, we outline an interesting algorithm for multiplying such
numbers. Obviously, if we use the conventional pen-and-pencil algorithm for multiplying
two n-digit integers, each of the n digits of the first number is multiplied by
each of the n digits of the second number for the total of n² digit multiplications.
(If one of the numbers has fewer digits than the other, we can pad the shorter
number with leading zeros to equalize their lengths.) Though it might appear that
it would be impossible to design an algorithm with fewer than n² digit multiplications,
this turns out not to be the case. The miracle of divide-and-conquer comes
to the rescue to accomplish this feat.
To demonstrate the basic idea of the algorithm, let us start with a case of
two-digit integers, say, 23 and 14. These numbers can be represented as follows:

    23 = 2 · 10¹ + 3 · 10⁰ and 14 = 1 · 10¹ + 4 · 10⁰.

Now let us multiply them:

    23 ∗ 14 = (2 · 10¹ + 3 · 10⁰) ∗ (1 · 10¹ + 4 · 10⁰)
            = (2 ∗ 1)10² + (2 ∗ 4 + 3 ∗ 1)10¹ + (3 ∗ 4)10⁰.

The last formula yields the correct answer of 322, of course, but it uses the same
four digit multiplications as the pen-and-pencil algorithm. Fortunately, we can
compute the middle term with just one digit multiplication by taking advantage
of the products 2 ∗ 1 and 3 ∗ 4 that need to be computed anyway:

    2 ∗ 4 + 3 ∗ 1 = (2 + 3) ∗ (1 + 4) − 2 ∗ 1 − 3 ∗ 4.
Of course, there is nothing special about the numbers we just multiplied.
For any pair of two-digit numbers a = a1a0 and b = b1b0, their product c can be
computed by the formula

    c = a ∗ b = c2·10² + c1·10¹ + c0,

where
    c2 = a1 ∗ b1 is the product of their first digits,
    c0 = a0 ∗ b0 is the product of their second digits,
    c1 = (a1 + a0) ∗ (b1 + b0) − (c2 + c0) is the product of the sum of the
    a's digits and the sum of the b's digits minus the sum of c2 and c0.
Now we apply this trick to multiplying two n-digit integers a and b where n is
a positive even number. Let us divide both numbers in the middle; after all, we
promised to take advantage of the divide-and-conquer technique. We denote the
first half of the a's digits by a1 and the second half by a0; for b, the notations are b1
and b0, respectively. In these notations, a = a1a0 implies that a = a1·10^{n/2} + a0, and
b = b1b0 implies that b = b1·10^{n/2} + b0. Therefore, taking advantage of the same
trick we used for two-digit numbers, we get

    c = a ∗ b = (a1·10^{n/2} + a0) ∗ (b1·10^{n/2} + b0)
              = (a1 ∗ b1)10^n + (a1 ∗ b0 + a0 ∗ b1)10^{n/2} + (a0 ∗ b0)
              = c2·10^n + c1·10^{n/2} + c0,
where
    c2 = a1 ∗ b1 is the product of their first halves,
    c0 = a0 ∗ b0 is the product of their second halves,
    c1 = (a1 + a0) ∗ (b1 + b0) − (c2 + c0) is the product of the sum of the
    a's halves and the sum of the b's halves minus the sum of c2 and c0.
If n/2 is even, we can apply the same method for computing the products c2, c0,
and c1. Thus, if n is a power of 2, we have a recursive algorithm for computing the
product of two n-digit integers. In its pure form, the recursion is stopped when n
becomes 1. It can also be stopped when we deem n small enough to multiply the
numbers of that size directly.
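A compact recursive sketch of the algorithm (Python; it splits on decimal digit counts and stops the recursion at one-digit factors):

    def karatsuba(a, b):
        """Multiply nonnegative integers with three recursive multiplications."""
        if a < 10 or b < 10:                  # a one-digit factor: multiply directly
            return a * b
        half = max(len(str(a)), len(str(b))) // 2
        p = 10 ** half
        a1, a0 = divmod(a, p)                 # a = a1 * 10^half + a0
        b1, b0 = divmod(b, p)
        c2 = karatsuba(a1, b1)
        c0 = karatsuba(a0, b0)
        c1 = karatsuba(a1 + a0, b1 + b0) - (c2 + c0)
        return c2 * p * p + c1 * p + c0

    print(karatsuba(23, 14))                                          # 322
    print(karatsuba(123456789, 987654321) == 123456789 * 987654321)   # True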
How many digit multiplications does this algorithm make? Since multiplication
of n-digit numbers requires three multiplications of n/2-digit numbers, the
recurrence for the number of multiplications M(n) is

    M(n) = 3M(n/2) for n > 1,  M(1) = 1.

Solving it by backward substitutions for n = 2^k yields

    M(2^k) = 3M(2^{k−1}) = 3[3M(2^{k−2})] = 3²M(2^{k−2})
           = . . . = 3^i M(2^{k−i}) = . . . = 3^k M(2^{k−k}) = 3^k.

Since k = log₂ n,

    M(n) = 3^{log₂ n} = n^{log₂ 3} ≈ n^{1.585}.

(On the last step, we took advantage of the following property of logarithms:
a^{log_b c} = c^{log_b a}.)
But what about additions and subtractions? Have we not decreased the number
of multiplications by requiring more of those operations? Let A(n) be the
number of digit additions and subtractions executed by the above algorithm in
multiplying two n-digit decimal integers. Besides 3A(n/2) of these operations
needed to compute the three products of n/2-digit numbers, the above formulas
require five additions and one subtraction. Hence, we have the recurrence

    A(n) = 3A(n/2) + cn for n > 1,  A(1) = 1.

Applying the Master Theorem, which was stated in the beginning of the chapter,
we obtain A(n) ∈ Θ(n^{log₂ 3}), which means that the total number of additions and
subtractions have the same asymptotic order of growth as the number of multiplications.
The asymptotic advantage of this algorithm notwithstanding, how practical is
it? The answer depends, of course, on the computer system and program quality
implementing the algorithm, which might explain the rather wide disparity of
reported results. On some machines, the divide-and-conquer algorithm has been
reported to outperform the conventional method on numbers only 8 decimal digits
long and to run more than twice faster with numbers over 300 decimal digits
long, the area of particular importance for modern cryptography. Whatever this
outperformance "crossover point" happens to be on a particular machine, it is
worth switching to the conventional algorithm after the multiplicands become
smaller than the crossover point. Finally, if you program in an object-oriented
language such as Java, C++, or Smalltalk, you should also be aware that these
languages have special classes for dealing with large integers.
Discovered by 23-year-old Russian mathematician Anatoly Karatsuba in
1960, the divide-and-conquer algorithm proved wrong the then-prevailing opinion
that the time efficiency of any integer multiplication algorithm must be in Ω(n²).
The discovery encouraged researchers to look for even (asymptotically) faster
algorithms for this and other algebraic problems. We will see such an algorithm
in the next section.
Strassen's Matrix Multiplication

Now that we have seen that the divide-and-conquer approach can reduce the
number of one-digit multiplications in multiplying two integers, we should not be
surprised that a similar feat can be accomplished for multiplying matrices. Such
an algorithm was published by V. Strassen in 1969 [Str69]. The principal insight
of the algorithm lies in the discovery that we can find the product C of two 2 × 2
matrices A and B with just seven multiplications as opposed to the eight required
by the brute-force algorithm (see Example 3 in Section 2.3). This is accomplished
by using the following formulas:
    [ c00  c01 ]   [ a00  a01 ]   [ b00  b01 ]
    [ c10  c11 ] = [ a10  a11 ] ∗ [ b10  b11 ]

                   [ m1 + m4 − m5 + m7    m3 + m5           ]
                 = [ m2 + m4              m1 + m3 − m2 + m6 ],
where
    m1 = (a00 + a11) ∗ (b00 + b11),
    m2 = (a10 + a11) ∗ b00,
    m3 = a00 ∗ (b01 − b11),
    m4 = a11 ∗ (b10 − b00),
    m5 = (a00 + a01) ∗ b11,
    m6 = (a10 − a00) ∗ (b00 + b01),
    m7 = (a01 − a11) ∗ (b10 + b11).
Thus, to multiply two 2 × 2 matrices, Strassen's algorithm makes seven multiplications
and 18 additions/subtractions, whereas the brute-force algorithm requires
eight multiplications and four additions. These numbers should not lead us to
multiplying 2 × 2 matrices by Strassen's algorithm. Its importance stems from its
asymptotic superiority as matrix order n goes to infinity.

Let A and B be two n × n matrices where n is a power of 2. (If n is not a power
of 2, matrices can be padded with rows and columns of zeros.) We can divide A,
B, and their product C into four n/2 × n/2 submatrices each as follows:

    [ C00  C01 ]   [ A00  A01 ]   [ B00  B01 ]
    [ C10  C11 ] = [ A10  A11 ] ∗ [ B10  B11 ].

It is not difficult to verify that one can treat these submatrices as numbers to
get the correct product. For example, C00 can be computed either as A00 ∗ B00 +
A01 ∗ B10 or as M1 + M4 − M5 + M7 where M1, M4, M5, and M7 are found by
Strassen's formulas, with the numbers replaced by the corresponding submatrices.
If the seven products of n/2 × n/2 matrices are computed recursively by the same
method, we have Strassen's algorithm for matrix multiplication.
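A sketch of the recursion (Python with NumPy for the block arithmetic; n is assumed to be a power of 2) follows Strassen's formulas verbatim:

    import numpy as np

    def strassen(A, B):
        """Product of two n x n matrices, n a power of 2, via 7 recursive products."""
        n = A.shape[0]
        if n == 1:
            return A * B
        k = n // 2
        A00, A01, A10, A11 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
        B00, B01, B10, B11 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
        M1 = strassen(A00 + A11, B00 + B11)
        M2 = strassen(A10 + A11, B00)
        M3 = strassen(A00, B01 - B11)
        M4 = strassen(A11, B10 - B00)
        M5 = strassen(A00 + A01, B11)
        M6 = strassen(A10 - A00, B00 + B01)
        M7 = strassen(A01 - A11, B10 + B11)
        return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                         [M2 + M4, M1 + M3 - M2 + M6]])

    A = np.random.randint(0, 10, (4, 4))
    B = np.random.randint(0, 10, (4, 4))
    print(np.array_equal(strassen(A, B), A @ B))   # True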
Let us evaluate the asymptotic efficiency of this algorithm. If M(n) is the
number of multiplications made by Strassen's algorithm in multiplying two n × n
matrices (where n is a power of 2), we get the following recurrence relation for it:

    M(n) = 7M(n/2) for n > 1,  M(1) = 1.

Since n = 2^k,

    M(2^k) = 7M(2^{k−1}) = 7[7M(2^{k−2})] = 7²M(2^{k−2}) = . . .
           = 7^i M(2^{k−i}) = . . . = 7^k M(2^{k−k}) = 7^k.

Since k = log₂ n,

    M(n) = 7^{log₂ n} = n^{log₂ 7} ≈ n^{2.807},

which is smaller than n³ required by the brute-force algorithm.
Since this savings in the number of multiplications was achieved at the expense
of making extra additions, we must check the number of additions A(n) made by
Strassen's algorithm. To multiply two matrices of order n > 1, the algorithm needs
to multiply seven matrices of order n/2 and make 18 additions/subtractions of
matrices of size n/2; when n = 1, no additions are made since two numbers are
simply multiplied. These observations yield the following recurrence relation:

    A(n) = 7A(n/2) + 18(n/2)² for n > 1,  A(1) = 0.

Though one can obtain a closed-form solution to this recurrence (see Problem 8
in this section's exercises), here we simply establish the solution's order of growth.
According to the Master Theorem, A(n) ∈ Θ(n^{log₂ 7}). In other words, the number
of additions has the same order of growth as the number of multiplications. This
puts Strassen's algorithm in Θ(n^{log₂ 7}), which is a better efficiency class than Θ(n³)
of the brute-force method.
Since the time of Strassen's discovery, several other algorithms for multiplying
two n × n matrices of real numbers in O(n^α) time with progressively smaller
exponents α have been invented. [. . .]

    [ 0  1  0  1 ]
    [ 2  1  0  4 ]
    [ 2  0  1  1 ]
    [ 1  3  5  0 ]

exiting the recursion when n = 2, i.e., computing the products of 2 × 2 matrices
by the brute-force algorithm.
8. Solve the recurrence for the number of additions required by Strassen's algorithm.
   Assume that n is a power of 2.
9. V. Pan [Pan78] has discovered a divide-and-conquer matrix multiplication
   algorithm that is based on multiplying two 70 × 70 matrices using 143,640
   multiplications. Find the asymptotic efficiency of Pan's algorithm (you may
   ignore additions) and compare it with that of Strassen's algorithm.
10. Practical implementations of Strassen's algorithm usually switch to the brute-force
    method after matrix sizes become smaller than some crossover point.
    Run an experiment to determine such a crossover point on your computer
    system.
5.5 The Closest-Pair and Convex-Hull Problems
    by Divide-and-Conquer

In Section 3.3, we discussed the brute-force approach to solving two classic problems
of computational geometry: the closest-pair problem and the convex-hull
problem. We saw that the two-dimensional versions of these problems can be
solved by brute-force algorithms in Θ(n²) and O(n³) time, respectively. In this section,
we discuss more sophisticated and asymptotically more efficient algorithms
for these problems, which are based on the divide-and-conquer technique.
The Closest-Pair Problem
Let P be a set of n > 1 points in the Cartesian plane. For the sake of simplicity,
we assume that the points are distinct. We can also assume that the points are
ordered in nondecreasing order of their x coordinate. (If they were not, we could
sort them first by an efficient sorting algorithm such as mergesort.) It will also be
convenient to have the points sorted in a separate list in nondecreasing order of
the y coordinate; we will denote such a list Q.

If 2 ≤ n ≤ 3, the problem can be solved by the obvious brute-force algorithm.
If n > 3, we can divide the points into two subsets P_l and P_r of ⌈n/2⌉ and ⌊n/2⌋
points, respectively, by drawing a vertical line through the median m of their x
coordinates so that ⌈n/2⌉ points lie to the left of or on the line itself, and ⌊n/2⌋
points lie to the right of or on the line. Then we can solve the closest-pair problem
recursively for subsets P_l and P_r. Let d_l and d_r be the smallest distances between
pairs of points in P_l and P_r, respectively, and let d = min{d_l, d_r}.

FIGURE 5.7 (a) Idea of the divide-and-conquer algorithm for the closest-pair problem.
(b) Rectangle that may contain points closer than d_min to point p.

Note that d is not necessarily the smallest distance between all the point pairs
because points of a closer pair can lie on the opposite sides of the separating
line. Therefore, as a step combining the solutions to the smaller subproblems, we
need to examine such points. Obviously, we can limit our attention to the points
inside the symmetric vertical strip of width 2d around the separating line, since
the distance between any other pair of points is at least d (Figure 5.7a).
Let S be the list of points inside the strip of width 2d around the separating line, obtained from Q and hence ordered in nondecreasing order of their y coordinate. We will scan this list, updating the information about d_min, the minimum distance seen so far, if we encounter a closer pair of points. Initially, d_min = d, and subsequently d_min ≤ d. Let p(x, y) be a point on this list. For a point p′(x′, y′) to have a chance to be closer to p than d_min, the point must follow p on list S and the difference between their y coordinates must be less than d_min (why?). Geometrically, this means that p′ must belong to the rectangle shown in Figure 5.7b. The principal insight exploited by the algorithm is the observation that the rectangle can contain just a few such points, because the points in each half (left and right) of the rectangle must be at least distance d apart. It is easy to prove that the total number of such points in the rectangle, including p, does not exceed eight (Problem 2 in this section's exercises); a more careful analysis reduces this number to six (see [Joh04, p. 695]). Thus, the algorithm can consider no more than five next points following p on the list S, before moving up to the next point.
Here is pseudocode of the algorithm. We follow the advice given in Section 3.3 to avoid computing square roots inside the innermost loop of the algorithm.
ALGORITHM EfficientClosestPair(P, Q)
//Solves the closest-pair problem by divide-and-conquer
//Input: An array P of n ≥ 2 points in the Cartesian plane sorted in
//       nondecreasing order of their x coordinates and an array Q of the
//       same points sorted in nondecreasing order of the y coordinates
//Output: Euclidean distance between the closest pair of points
if n ≤ 3
    return the minimal distance found by the brute-force algorithm
else
    copy the first ⌈n/2⌉ points of P to array P_l
    copy the same ⌈n/2⌉ points from Q to array Q_l
    copy the remaining ⌊n/2⌋ points of P to array P_r
    copy the same ⌊n/2⌋ points from Q to array Q_r
    d_l ← EfficientClosestPair(P_l, Q_l)
    d_r ← EfficientClosestPair(P_r, Q_r)
    d ← min{d_l, d_r}
    m ← P[⌈n/2⌉ − 1].x
    copy all the points of Q for which |x − m| < d into array S[0..num − 1]
    dminsq ← d²
    for i ← 0 to num − 2 do
        k ← i + 1
        while k ≤ num − 1 and (S[k].y − S[i].y)² < dminsq
            dminsq ← min((S[k].x − S[i].x)² + (S[k].y − S[i].y)², dminsq)
            k ← k + 1
return sqrt(dminsq)
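For readers who want to experiment, here is a minimal Python sketch mirroring the pseudocode above (the function names and the tuple representation of points are our own illustration, not the book's code; it assumes the points are distinct, as in the text):

from math import sqrt, inf

def dist2(a, b):
    # squared Euclidean distance, avoiding square roots in the inner loop
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def closest_pair(points):
    # points: a list of distinct (x, y) tuples with len(points) >= 2
    P = sorted(points)                      # by x coordinate (ties by y)
    Q = sorted(points, key=lambda p: p[1])  # by y coordinate
    return sqrt(_cp(P, Q))

def _cp(P, Q):
    n = len(P)
    if n <= 3:   # brute force on very small subsets
        return min((dist2(P[i], P[j]) for i in range(n)
                    for j in range(i + 1, n)), default=inf)
    mid = (n + 1) // 2                      # ceil(n/2), as in the pseudocode
    m = P[mid - 1][0]                       # x coordinate of the dividing line
    Pl, Pr = P[:mid], P[mid:]
    left = set(Pl)
    Ql = [p for p in Q if p in left]        # each half, still in y order
    Qr = [p for p in Q if p not in left]
    d2 = min(_cp(Pl, Ql), _cp(Pr, Qr))
    S = [p for p in Q if (p[0] - m) ** 2 < d2]   # points inside the strip
    for i in range(len(S) - 1):
        k = i + 1
        while k < len(S) and (S[k][1] - S[i][1]) ** 2 < d2:
            d2 = min(d2, dist2(S[i], S[k]))
            k += 1
    return d2

For instance, closest_pair([(0, 0), (3, 4), (1, 1), (5, 5)]) returns sqrt(2), the distance between (0, 0) and (1, 1).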
The algorithm spends linear time both for dividing the problem into two problems half the size and combining the obtained solutions. Therefore, assuming as usual that n is a power of 2, we have the following recurrence for the running time of the algorithm:

T(n) = 2T(n/2) + f(n),

where f(n) ∈ Θ(n). Applying the Master Theorem (with a = 2, b = 2, and d = 1), we get T(n) ∈ Θ(n log n). The necessity to presort input points does not change the overall efficiency class if sorting is done by an O(n log n) algorithm such as mergesort. In fact, this is the best efficiency class one can achieve, because it has been proved that any algorithm for this problem must be in Ω(n log n) under some natural assumptions about operations an algorithm can perform (see [Pre85, p. 188]).
Convex-Hull Problem

Let us revisit the convex-hull problem, introduced in Section 3.3: find the smallest convex polygon that contains n given points in the plane. We consider here a divide-and-conquer algorithm called quickhull because of its resemblance to quicksort.
Let S be a set of n > 1 points p_1(x_1, y_1), ..., p_n(x_n, y_n) in the Cartesian plane. We assume that the points are sorted in nondecreasing order of their x coordinates, with ties resolved by increasing order of the y coordinates of the points involved. It is not difficult to prove the geometrically obvious fact that the leftmost point p_1 and the rightmost point p_n are two distinct extreme points of the set's convex hull (Figure 5.8). Let p_1 p_n be the straight line through points p_1 and p_n directed from p_1 to p_n. This line separates the points of S into two sets: S_1 is the set of points to the left of this line, and S_2 is the set of points to the right of this line. (We say that point q_3 is to the left of the line q_1 q_2 directed from point q_1 to point q_2 if q_1 q_2 q_3 forms a counterclockwise cycle. Later, we cite an analytical way to check this condition, based on checking the sign of a determinant formed by the coordinates of the three points.) The points of S on the line p_1 p_n, other than p_1 and p_n, cannot be extreme points of the convex hull and hence are excluded from further consideration.
The boundary of the convex hull of S is made up of two polygonal chains: an upper boundary and a lower boundary. The upper boundary, called the upper hull, is a sequence of line segments with vertices at p_1, some of the points in S_1 (if S_1 is not empty), and p_n. The lower boundary, called the lower hull, is a sequence of line segments with vertices at p_1, some of the points in S_2 (if S_2 is not empty), and p_n. The fact that the convex hull of the entire set S is composed of the upper and lower hulls, which can be constructed independently and in a similar fashion, is a very useful observation exploited by several algorithms for this problem.
For concreteness, let us discuss how quickhull proceeds to construct the upper hull; the lower hull can be constructed in the same manner.

FIGURE 5.8 Upper and lower hulls of a set of points.

FIGURE 5.9 The idea of quickhull.

If S_1 is empty, the upper hull is simply the line segment with the endpoints at p_1 and p_n. If S_1 is not empty, the algorithm identifies point p_max in S_1, which is the farthest from the line p_1 p_n (Figure 5.9). If there is a tie, the point that maximizes the angle ∠p_max p_1 p_n can be selected. (Note that point p_max maximizes the area of the triangle with two vertices at p_1 and p_n and the third one at some other point of S_1.) Then the algorithm identifies all the points of set S_1 that are to the left of the line p_1 p_max; these are the points that will make up the set S_1,1. The points of S_1 to the left of the line p_max p_n will make up the set S_1,2. It is not difficult to prove the following:

p_max is a vertex of the upper hull.

The points inside △p_1 p_max p_n cannot be vertices of the upper hull (and hence can be eliminated from further consideration).

There are no points to the left of both lines p_1 p_max and p_max p_n.

Therefore, the algorithm can continue constructing the upper hulls of p_1 ∪ S_1,1 ∪ p_max and p_max ∪ S_1,2 ∪ p_n recursively and then simply concatenate them to get the upper hull of the entire set p_1 ∪ S_1 ∪ p_n.
Now we have to figure out how the algorithm's geometric operations can be actually implemented. Fortunately, we can take advantage of the following very useful fact from analytical geometry: if q_1(x_1, y_1), q_2(x_2, y_2), and q_3(x_3, y_3) are three arbitrary points in the Cartesian plane, then the area of the triangle △q_1 q_2 q_3 is equal to one-half of the magnitude of the determinant

| x_1  y_1  1 |
| x_2  y_2  1 |  =  x_1 y_2 + x_3 y_1 + x_2 y_3 − x_3 y_2 − x_2 y_1 − x_1 y_3,
| x_3  y_3  1 |

while the sign of this expression is positive if and only if the point q_3 = (x_3, y_3) is to the left of the line q_1 q_2. Using this formula, we can check in constant time whether a point lies to the left of the line determined by two other points as well as find the distance from the point to the line.
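This sign test is a one-liner in code. Here is a hedged Python rendering (the helper name signed_area2 is ours); it returns twice the signed area, which is positive exactly when q3 lies to the left of the directed line through q1 and q2, and whose magnitude grows with the distance from q3 to that line, so it can also serve as the farthest-point criterion:

def signed_area2(q1, q2, q3):
    # Twice the signed area of triangle q1 q2 q3, expanding the
    # determinant above: positive if q3 is to the left of q1 -> q2,
    # negative if to the right, zero if the points are collinear.
    (x1, y1), (x2, y2), (x3, y3) = q1, q2, q3
    return x1 * y2 + x3 * y1 + x2 * y3 - x3 * y2 - x2 * y1 - x1 * y3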
Quickhull has the same Θ(n²) worst-case efficiency as quicksort (Problem 9 in this section's exercises). In the average case, however, we should expect a much better performance. First, the algorithm should benefit from the quicksort-like savings from the on-average balanced split of the problem into two smaller subproblems. Second, a significant fraction of the points, namely those inside △p_1 p_max p_n (see Figure 5.9), are eliminated from further processing. Under a natural assumption that points given are chosen randomly from a uniform distribution over some convex region (e.g., a circle or a rectangle), the average-case efficiency of quickhull turns out to be linear [Ove80].
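Putting the pieces together, here is a compact Python sketch of quickhull built on the sign test above (the function names are ours, and ties for the farthest point are broken arbitrarily rather than by the angle rule mentioned earlier):

def quickhull(points):
    # Vertices of the convex hull, clockwise from the leftmost point.
    # Assumes at least two distinct points.
    pts = sorted(set(points))                              # by x, ties by y
    p1, pn = pts[0], pts[-1]
    s1 = [p for p in pts if signed_area2(p1, pn, p) > 0]   # left of p1 -> pn
    s2 = [p for p in pts if signed_area2(pn, p1, p) > 0]   # left of pn -> p1
    return [p1] + hull_side(p1, pn, s1) + [pn] + hull_side(pn, p1, s2)

def hull_side(a, b, s):
    # Hull vertices strictly between a and b, taken from the points of s
    # lying to the left of the directed line a -> b.
    if not s:
        return []
    pmax = max(s, key=lambda p: signed_area2(a, b, p))     # farthest from a -> b
    left_a = [p for p in s if signed_area2(a, pmax, p) > 0]
    left_b = [p for p in s if signed_area2(pmax, b, p) > 0]
    return hull_side(a, pmax, left_a) + [pmax] + hull_side(pmax, b, left_b)

Note how the three facts listed earlier appear directly in the code: pmax is kept, the points inside the triangle fall into neither left_a nor left_b and disappear, and the two recursive calls partition the survivors.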
Exercises 5.5
1. a. For the one-dimensional version of the closest-pair problem, i.e., for the problem of finding two closest numbers among a given set of n real numbers, design an algorithm that is directly based on the divide-and-conquer technique and determine its efficiency class.
b. Is it a good algorithm for this problem?
2. Prove that the divide-and-conquer algorithm for the closest-pair problem examines, for every point p in the vertical strip (see Figures 5.7a and 5.7b), no more than seven other points that can be closer to p than d_min, the minimum distance between two points encountered by the algorithm up to that point.
3. Consider the version of the divide-and-conquer two-dimensional closest-pair algorithm in which, instead of presorting input set P, we simply sort each of the two sets P_l and P_r in nondecreasing order of their y coordinates on each recursive call. Assuming that sorting is done by mergesort, set up a recurrence relation for the running time in the worst case and solve it for n = 2^k.
4. Implement the divide-and-conquer closest-pair algorithm, outlined in this
section, in the language of your choice.
5. Find on the Web a visualization of an algorithm for the closest-pair problem.
What algorithm does this visualization represent?
6. The Voronoi polygon for a point p of a set S of points in the plane is defined to be the perimeter of the set of all points in the plane closer to p than to any other point in S. The union of all the Voronoi polygons of the points in S is called the Voronoi diagram of S.
a. What is the Voronoi diagram for a set of three points?
b. Find a visualization of an algorithm for generating the Voronoi diagram
on the Web and study a few examples of such diagrams. Based on your
observations, can you tell how the solution to the previous question is
generalized to the general case?
7. Explain how one can find point p_max in the quickhull algorithm analytically.
8. What is the best-case efficiency of quickhull?
9. Give a specific example of inputs that make quickhull run in quadratic time.
10. Implement quickhull in the language of your choice.
11. Creating decagons There are 1000 points in the plane, no three of them
on the same line. Devise an algorithm to construct 100 decagons with their
vertices at these points. The decagons need not be convex, but each of them
has to be simple, i.e., its boundary should not cross itself, and no two decagons
may have a common point.
12. Shortest path around There is a fenced area in the two-dimensional Euclidean plane in the shape of a convex polygon with vertices at points p_1(x_1, y_1), p_2(x_2, y_2), ..., p_n(x_n, y_n) (not necessarily in this order). There are two more points, a(x_a, y_a) and b(x_b, y_b), such that x_a < min{x_1, x_2, ..., x_n} and x_b > max{x_1, x_2, ..., x_n}. Design a reasonably efficient algorithm for computing the length of the shortest path between a and b. [ORo98]
SUMMARY
Divide-and-conquer is a general algorithm design technique that solves a
problem by dividing it into several smaller subproblems of the same type
(ideally, of about equal size), solving each of them recursively, and then
combining their solutions to get a solution to the original problem. Many efficient algorithms are based on this technique, although it can be both inapplicable and inferior to simpler algorithmic solutions.
Running time T(n) of many divide-and-conquer algorithms satisfies the recurrence T(n) = aT(n/b) + f(n). The Master Theorem establishes the order of growth of its solutions.
Mergesort is a divide-and-conquer sorting algorithm. It works by dividing an input array into two halves, sorting them recursively, and then merging the two sorted halves to get the original array sorted. The algorithm's time efficiency is in Θ(n log n) in all cases, with the number of key comparisons being very close to the theoretical minimum. Its principal drawback is a significant extra storage requirement.
Quicksort is a divide-and-conquer sorting algorithm that works by partitioning its input elements according to their value relative to some preselected element. Quicksort is noted for its superior efficiency among n log n algorithms for sorting randomly ordered arrays but also for the quadratic worst-case efficiency.
The classic traversals of a binary tree (preorder, inorder, and postorder) and similar algorithms that require recursive processing of both left and right subtrees can be considered examples of the divide-and-conquer technique. Their analysis is helped by replacing all the empty subtrees of a given tree by special external nodes.
There is a divide-and-conquer algorithm for multiplying two n-digit integers that requires about n^1.585 one-digit multiplications.
Strassen's algorithm needs only seven multiplications to multiply two 2 × 2 matrices. By exploiting the divide-and-conquer technique, this algorithm can multiply two n × n matrices with about n^2.807 multiplications.
The divide-and-conquer technique can be successfully applied to two important problems of computational geometry: the closest-pair problem and the convex-hull problem.
6
Transform-and-Conquer
That's the secret to life . . . replace one worry with another.
Charles M. Schulz (1922–2000), American cartoonist, the creator of Peanuts
This chapter deals with a group of design methods that are based on the idea of transformation. We call this general technique transform-and-conquer because these methods work as two-stage procedures. First, in the transformation stage, the problem's instance is modified to be, for one reason or another, more amenable to solution. Then, in the second or conquering stage, it is solved.
There are three major variations of this idea that differ by what we transform a given instance to (Figure 6.1):

Transformation to a simpler or more convenient instance of the same problem; we call it instance simplification.

Transformation to a different representation of the same instance; we call it representation change.

Transformation to an instance of a different problem for which an algorithm is already available; we call it problem reduction.
FIGURE 6.1 Transform-and-conquer strategy.

In the first three sections of this chapter, we encounter examples of the instance-simplification variety. Section 6.1 deals with the simple but fruitful idea of presorting. Many algorithmic problems are easier to solve if their input is sorted. Of course, the benefits of sorting should more than compensate for the time spent on it; otherwise, we would be better off dealing with an unsorted input directly. Section 6.2 introduces one of the most important algorithms in applied mathematics: Gaussian elimination. This algorithm solves a system of linear equations by first transforming it to another system with a special property that makes finding a solution quite easy. In Section 6.3, the ideas of instance simplification and representation change are applied to search trees. The results are AVL trees and multiway balanced search trees; of the latter we consider the simplest case, 2-3 trees.
Section 6.4 presents heaps and heapsort. Even if you are already familiar with this important data structure and its application to sorting, you can still benefit from looking at them in this new light of transform-and-conquer design. In Section 6.5, we discuss Horner's rule, a remarkable algorithm for evaluating polynomials. If there were an Algorithm Hall of Fame, Horner's rule would be a serious candidate for induction based on the algorithm's elegance and efficiency. We also consider there two interesting algorithms for the exponentiation problem, both based on the representation-change idea.
The chapter concludes with a review of several applications of the third variety of transform-and-conquer: problem reduction. This variety should be considered the most radical of the three: one problem is reduced to another, i.e., transformed into an entirely different problem. This is a very powerful idea, and it is extensively used in complexity theory (Chapter 11). Its application to designing practical algorithms is not trivial, however. First, we need to identify a new problem into which the given problem should be transformed. Then we must make sure that the transformation algorithm followed by the algorithm for solving the new problem is time efficient compared to other algorithmic alternatives. Among several examples, we discuss an important special case of mathematical modeling, or expressing a problem in terms of purely mathematical objects such as variables, functions, and equations.
6.1 Presorting
Presorting is an old idea in computer science. In fact, interest in sorting algorithms is due, to a significant degree, to the fact that many questions about a list are easier to answer if the list is sorted. Obviously, the time efficiency of algorithms that involve sorting may depend on the efficiency of the sorting algorithm being used. For the sake of simplicity, we assume throughout this section that lists are implemented as arrays, because some sorting algorithms are easier to implement for the array representation.
So far, we have discussed three elementary sorting algorithms (selection sort, bubble sort, and insertion sort) that are quadratic in the worst and average cases, and two advanced algorithms: mergesort, which is always in Θ(n log n), and quicksort, whose efficiency is also Θ(n log n) in the average case but is quadratic in the worst case. Are there faster sorting algorithms? As we have already stated in Section 1.3 (see also Section 11.2), no general comparison-based sorting algorithm can have a better efficiency than n log n in the worst case, and the same result holds for the average-case efficiency.¹

Following are three examples that illustrate the idea of presorting. More examples can be found in this section's exercises.
EXAMPLE 1 Checking element uniqueness in an array If this element uniqueness problem looks familiar to you, it should; we considered a brute-force algorithm for the problem in Section 2.3 (see Example 2). The brute-force algorithm compared pairs of the array's elements until either two equal elements were found or no more pairs were left. Its worst-case efficiency was in Θ(n²).

Alternatively, we can sort the array first and then check only its consecutive elements: if the array has equal elements, a pair of them must be next to each other, and vice versa.
ALGORITHM PresortElementUniqueness(A[0..n − 1])
//Solves the element uniqueness problem by sorting the array first
//Input: An array A[0..n − 1] of orderable elements
//Output: Returns true if A has no equal elements, false otherwise
sort the array A
for i ← 0 to n − 2 do
    if A[i] = A[i + 1] return false
return true
The running time of this algorithm is the sum of the time spent on sorting and the time spent on checking consecutive elements. Since the former requires at least n log n comparisons and the latter needs no more than n − 1 comparisons, it is the sorting part that will determine the overall efficiency of the algorithm. So, if we use a quadratic sorting algorithm here, the entire algorithm will not be more efficient than the brute-force one. But if we use a good sorting algorithm, such as mergesort, with worst-case efficiency in Θ(n log n), the worst-case efficiency of the entire presorting-based algorithm will be also in Θ(n log n):

T(n) = T_sort(n) + T_scan(n) ∈ Θ(n log n) + Θ(n) = Θ(n log n).
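In Python, the same presorting idea takes a few lines (a sketch; the built-in sorted plays the role of mergesort):

def presort_element_uniqueness(a):
    # True if the list a has no equal elements
    s = sorted(a)                  # O(n log n) comparison-based sort
    return all(s[i] != s[i + 1] for i in range(len(s) - 1))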
EXAMPLE 2 Computing a mode A mode is a value that occurs most often in a given list of numbers. For example, for 5, 1, 5, 7, 6, 5, 7, the mode is 5. (If several different values occur most often, any of them can be considered a mode.) The brute-force approach to computing a mode would scan the list and compute the frequencies of all its distinct values, then find the value with the largest frequency.
¹ Sorting algorithms called radix sorts are linear but in terms of the total number of input bits. These algorithms work by comparing individual bits or pieces of keys rather than keys in their entirety. Although the running time of these algorithms is proportional to the number of input bits, they are still essentially n log n algorithms because the number of bits per key must be at least log₂ n in order to accommodate n distinct keys of input.
In order to implement this idea, we can store the values already encountered,
along with their frequencies, in a separate list. On each iteration, the ith element
of the original list is compared with the values already encountered by traversing
this auxiliary list. If a matching value is found, its frequency is incremented;
otherwise, the current element is added to the list of distinct values seen so far
with a frequency of 1.
It is not difficult to see that the worst-case input for this algorithm is a list with no equal elements. For such a list, its ith element is compared with i − 1 elements of the auxiliary list of distinct values seen so far before being added to the list with a frequency of 1. As a result, the worst-case number of comparisons made by this algorithm in creating the frequency list is

C(n) = Σ_{i=1}^{n} (i − 1) = 0 + 1 + ... + (n − 1) = (n − 1)n/2 ∈ Θ(n²).

The additional n − 1 comparisons needed to find the largest frequency in the auxiliary list do not change the quadratic worst-case efficiency class of the algorithm.
As an alternative, let us first sort the input. Then all equal values will be adjacent to each other. To compute the mode, all we need to do is to find the longest run of adjacent equal values in the sorted array.
ALGORITHM PresortMode(A[0..n − 1])
//Computes the mode of an array by sorting it first
//Input: An array A[0..n − 1] of orderable elements
//Output: The array's mode
sort the array A
i ← 0                      //current run begins at position i
modefrequency ← 0          //highest frequency seen so far
while i ≤ n − 1 do
    runlength ← 1; runvalue ← A[i]
    while i + runlength ≤ n − 1 and A[i + runlength] = runvalue
        runlength ← runlength + 1
    if runlength > modefrequency
        modefrequency ← runlength; modevalue ← runvalue
    i ← i + runlength
return modevalue
The analysis here is similar to the analysis of Example 1: the running time of the algorithm will be dominated by the time spent on sorting since the remainder of the algorithm takes linear time (why?). Consequently, with an n log n sort, this method's worst-case efficiency will be in a better asymptotic class than the worst-case efficiency of the brute-force algorithm.
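The run-scanning loop of PresortMode translates directly into Python (a sketch in the same spirit):

def presort_mode(a):
    # a mode of the nonempty list a, found by sorting and scanning runs
    s = sorted(a)
    i, mode_frequency, mode_value = 0, 0, s[0]
    while i < len(s):
        run_length, run_value = 1, s[i]
        while i + run_length < len(s) and s[i + run_length] == run_value:
            run_length += 1
        if run_length > mode_frequency:
            mode_frequency, mode_value = run_length, run_value
        i += run_length
    return mode_value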
EXAMPLE 3 Searching problem Consider the problem of searching for a given value v in a given array of n sortable items. The brute-force solution here is sequential search (Section 3.1), which needs n comparisons in the worst case. If the array is sorted first, we can then apply binary search, which requires only ⌊log₂ n⌋ + 1 comparisons in the worst case. Assuming the most efficient n log n sort, the total running time of such a searching algorithm in the worst case will be

T(n) = T_sort(n) + T_search(n) = Θ(n log n) + Θ(log n) = Θ(n log n),

which is inferior to sequential search. The same will also be true for the average-case efficiency. Of course, if we are to search in the same list more than once, the time spent on sorting might well be justified. (Problem 4 in this section's exercises asks to estimate the minimum number of searches needed to justify presorting.)
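When several searches share one presorting step, the standard library's bisect module supplies the binary search (a sketch; the factory function make_searcher is our own illustration):

from bisect import bisect_left

def make_searcher(a):
    # Sort once, then answer each membership query in O(log n).
    s = sorted(a)
    def contains(v):
        j = bisect_left(s, v)
        return j < len(s) and s[j] == v
    return contains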
Before we finish our discussion of presorting, we should mention that many, if not most, geometric algorithms dealing with sets of points use presorting in one way or another. Points can be sorted by one of their coordinates, or by their distance from a particular line, or by some angle, and so on. For example, presorting was used in the divide-and-conquer algorithms for the closest-pair problem and for the convex-hull problem, which were discussed in Section 5.5. Further, some problems for directed acyclic graphs can be solved more easily after topologically sorting the digraph in question. The problems of finding the longest and shortest paths in such digraphs (see the exercises for Sections 8.1 and 9.3) illustrate this point.
Finally, most algorithms based on the greedy technique, which is the subject of
Chapter 9, require presorting of their inputs as an intrinsic part of their operations.
Exercises 6.1
1. Consider the problem of finding the distance between the two closest numbers in an array of n numbers. (The distance between two numbers x and y is computed as |x − y|.)
a. Design a presorting-based algorithm for solving this problem and determine its efficiency class.
b. Compare the efficiency of this algorithm with that of the brute-force algorithm (see Problem 9 in Exercises 1.2).
2. Let A = {a_1, ..., a_n} and B = {b_1, ..., b_m} be two sets of numbers. Consider the problem of finding their intersection, i.e., the set C of all the numbers that are in both A and B.
a. Design a brute-force algorithm for solving this problem and determine its efficiency class.
b. Design a presorting-based algorithm for solving this problem and determine its efficiency class.
3. Consider the problem of finding the smallest and largest elements in an array of n numbers.
a. Design a presorting-based algorithm for solving this problem and determine its efficiency class.
b. Compare the efficiency of the three algorithms: (i) the brute-force algorithm, (ii) this presorting-based algorithm, and (iii) the divide-and-conquer algorithm (see Problem 2 in Exercises 5.1).
4. Estimate how many searches will be needed to justify time spent on presorting an array of 10³ elements if sorting is done by mergesort and searching is done by binary search. (You may assume that all searches are for elements known to be in the array.) What about an array of 10⁶ elements?
5. To sort or not to sort? Design a reasonably efficient algorithm for solving each of the following problems and determine its efficiency class.
a. You are given n telephone bills and m checks sent to pay the bills (n ≥ m). Assuming that telephone numbers are written on the checks, find out who failed to pay. (For simplicity, you may also assume that only one check is written for a particular bill and that it covers the bill in full.)
b. You have a file of n student records indicating each student's number, name, home address, and date of birth. Find out the number of students from each of the 50 U.S. states.
6. Given a set of n ≥ 3 points in the Cartesian plane, connect them in a simple polygon, i.e., a closed path through all the points so that its line segments (the polygon's edges) do not intersect (except for neighboring edges at their common vertex). (The figure for this problem shows an example: six points P_1, ..., P_6 and a simple polygon connecting them.)
a. Does the problem always have a solution? Does it always have a unique
solution?
b. Design a reasonably efficient algorithm for solving this problem and indicate its efficiency class.
7. You have an array of n real numbers and another integer s. Find out whether the array contains two elements whose sum is s. (For example, for the array 5, 9, 1, 3 and s = 6, the answer is yes, but for the same array and s = 7, the answer is no.) Design an algorithm for this problem with a better than quadratic time efficiency.
8. You have a list of n open intervals (a_1, b_1), (a_2, b_2), ..., (a_n, b_n) on the real line. (An open interval (a, b) comprises all the points strictly between its endpoints a and b, i.e., (a, b) = {x | a < x < b}.) Find the maximum number of these intervals that have a common point. For example, for the intervals (1, 4), (0, 3), (−1.5, 2), (3.6, 5), this maximum number is 3. Design an algorithm for this problem with a better than quadratic time efficiency.
9. Number placement Given a list of n distinct integers and a sequence of n boxes with pre-set inequality signs inserted between them, design an algorithm that places the numbers into the boxes to satisfy those inequalities. For example, the numbers 4, 6, 3, 1, 8 can be placed in the five boxes as shown below:

1 < 8 > 3 < 4 < 6
10. Maxima search
a. A point (x_i, y_i) in the Cartesian plane is said to be dominated by point (x_j, y_j) if x_i ≤ x_j and y_i ≤ y_j with at least one of the two inequalities being strict. Given a set of n points, one of them is said to be a maximum of the set if it is not dominated by any other point in the set. (In the figure for this problem, all the maximum points of a set of 10 points are circled.) Design an efficient algorithm for finding all the maximum points of a given set of n points in the Cartesian plane. What is the time efficiency class of your algorithm?
b. Give a few real-world applications of this algorithm.
11. Anagram detection
a. Design an efficient algorithm for finding all sets of anagrams in a large file such as a dictionary of English words [Ben00]. For example, eat, ate, and tea belong to one such set.
b. Write a program implementing the algorithm.
6.2 Gaussian Elimination
You are certainly familiar with systems of two linear equations in two unknowns:
a_11 x + a_12 y = b_1
a_21 x + a_22 y = b_2.

Recall that unless the coefficients of one equation are proportional to the coefficients of the other, the system has a unique solution. The standard method for finding this solution is to use either equation to express one of the variables as a function of the other and then substitute the result into the other equation, yielding a linear equation whose solution is then used to find the value of the second variable.
In many applications, we need to solve a system of n equations in n unknowns:

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
    ...
a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n

where n is a large number. Theoretically, we can solve such a system by generalizing the substitution method for solving systems of two linear equations (what general design technique would such a method be based upon?); however, the resulting algorithm would be extremely cumbersome.
Fortunately, there is a much more elegant algorithm for solving systems of linear equations called Gaussian elimination.² The idea of Gaussian elimination is to transform a system of n linear equations in n unknowns to an equivalent system (i.e., a system with the same solution as the original one) with an upper-triangular coefficient matrix, a matrix with all zeros below its main diagonal:

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1         a′_11 x_1 + a′_12 x_2 + ... + a′_1n x_n = b′_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2    ⟹            a′_22 x_2 + ... + a′_2n x_n = b′_2
    ...                                                        ...
a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n                            a′_nn x_n = b′_n.

² The method is named after Carl Friedrich Gauss (1777–1855), who, like other giants in the history of mathematics such as Isaac Newton and Leonhard Euler, made numerous fundamental contributions to both theoretical and computational mathematics. The method was known to the Chinese 1800 years before the Europeans rediscovered it.
In matrix notations, we can write this as

Ax = b  ⟹  A′x = b′,

where

A = [ a_11  a_12  ...  a_1n ]        b = [ b_1 ]
    [ a_21  a_22  ...  a_2n ]            [ b_2 ]
    [         ...           ]            [ ... ]
    [ a_n1  a_n2  ...  a_nn ],           [ b_n ],

A′ = [ a′_11  a′_12  ...  a′_1n ]        b′ = [ b′_1 ]
     [   0    a′_22  ...  a′_2n ]             [ b′_2 ]
     [          ...             ]             [ ...  ]
     [   0      0    ...  a′_nn ],            [ b′_n ].

(We added primes to the matrix elements and right-hand sides of the new system to stress the point that their values differ from their counterparts in the original system.)
Why is the system with the upper-triangular coefficient matrix better than a system with an arbitrary coefficient matrix? Because we can easily solve the system with an upper-triangular coefficient matrix by back substitutions as follows. First, we can immediately find the value of x_n from the last equation; then we can substitute this value into the next to last equation to get x_{n−1}, and so on, until we substitute the known values of the last n − 1 variables into the first equation, from which we find the value of x_1.
So how can we get from a system with an arbitrary coefficient matrix A to an equivalent system with an upper-triangular coefficient matrix A′? We can do that through a series of the so-called elementary operations:

exchanging two equations of the system

replacing an equation with its nonzero multiple

replacing an equation with a sum or difference of this equation and some multiple of another equation

Since no elementary operation can change a solution to a system, any system that is obtained through a series of such operations will have the same solution as the original one.
Let us see how we can get to a system with an upper-triangular matrix. First, we use a_11 as a pivot to make all x_1 coefficients zeros in the equations below the first one. Specifically, we replace the second equation with the difference between it and the first equation multiplied by a_21/a_11 to get an equation with a zero coefficient for x_1. Doing the same for the third, fourth, and finally nth equation, with the multiples a_31/a_11, a_41/a_11, ..., a_n1/a_11 of the first equation, respectively, makes all the coefficients of x_1 below the first equation zero. Then we get rid of all the coefficients of x_2 by subtracting an appropriate multiple of the second equation from each of the equations below the second one. Repeating this elimination for each of the first n − 1 variables ultimately yields a system with an upper-triangular coefficient matrix.
Before we look at an example of Gaussian elimination, let us note that we can operate with just a system's coefficient matrix augmented, as its (n + 1)st column, with the equations' right-hand side values. In other words, we need to write explicitly neither the variable names nor the plus and equality signs.
EXAMPLE 1 Solve the system by Gaussian elimination.

2x_1 − x_2 + x_3 = 1
4x_1 + x_2 − x_3 = 5
x_1 + x_2 + x_3 = 0

[ 2  −1   1   1 ]
[ 4   1  −1   5 ]    row 2 − (4/2) row 1;  row 3 − (1/2) row 1
[ 1   1   1   0 ]

[ 2  −1    1     1  ]
[ 0   3   −3     3  ]    row 3 − (1/2) row 2
[ 0  3/2  1/2  −1/2 ]

[ 2  −1   1    1 ]
[ 0   3  −3    3 ]
[ 0   0   2   −2 ]

Now we can obtain the solution by back substitutions:

x_3 = (−2)/2 = −1,  x_2 = (3 − (−3)x_3)/3 = 0,  and  x_1 = (1 − x_3 − (−1)x_2)/2 = 1.
Here is pseudocode of the first stage, called forward elimination, of the algorithm.

ALGORITHM ForwardElimination(A[1..n, 1..n], b[1..n])
//Applies Gaussian elimination to matrix A of a system's coefficients,
//augmented with vector b of the system's right-hand side values
//Input: Matrix A[1..n, 1..n] and column-vector b[1..n]
//Output: An equivalent upper-triangular matrix in place of A with the
//corresponding right-hand side values in the (n + 1)st column
for i ← 1 to n do A[i, n + 1] ← b[i]   //augments the matrix
for i ← 1 to n − 1 do
    for j ← i + 1 to n do
        for k ← i to n + 1 do
            A[j, k] ← A[j, k] − A[i, k] * A[j, i] / A[i, i]
There are two important observations to make about this pseudocode. First, it is not always correct: if A[i, i] = 0, we cannot divide by it and hence cannot use the ith row as a pivot for the ith iteration of the algorithm. In such a case, we should take advantage of the first elementary operation and exchange the ith row with some row below it that has a nonzero coefficient in the ith column. (If the system has a unique solution, which is the normal case for systems under consideration, such a row must exist.)

Since we have to be prepared for the possibility of row exchanges anyway, we can take care of another potential difficulty: the possibility that A[i, i] is so small and consequently the scaling factor A[j, i]/A[i, i] so large that the new value of A[j, k] might become distorted by a round-off error caused by a subtraction of two numbers of greatly different magnitudes.³ To avoid this problem, we can always look for a row with the largest absolute value of the coefficient in the ith column, exchange it with the ith row, and then use the new A[i, i] as the ith iteration's pivot. This modification, called partial pivoting, guarantees that the magnitude of the scaling factor will never exceed 1.

³ We discuss round-off errors in more detail in Section 11.4.
The second observation is the fact that the innermost loop is written with a glaring inefficiency. Can you find it before checking the following pseudocode, which both incorporates partial pivoting and eliminates this inefficiency?
ALGORITHM BetterForwardElimination(A[1..n, 1..n], b[1..n])
//Implements Gaussian elimination with partial pivoting
//Input: Matrix A[1..n, 1..n] and column-vector b[1..n]
//Output: An equivalent upper-triangular matrix in place of A and the
//corresponding right-hand side values in place of the (n + 1)st column
for i ← 1 to n do A[i, n + 1] ← b[i]   //appends b to A as the last column
for i ← 1 to n − 1 do
    pivotrow ← i
    for j ← i + 1 to n do
        if |A[j, i]| > |A[pivotrow, i]|  pivotrow ← j
    for k ← i to n + 1 do
        swap(A[i, k], A[pivotrow, k])
    for j ← i + 1 to n do
        temp ← A[j, i] / A[i, i]
        for k ← i to n + 1 do
            A[j, k] ← A[j, k] − A[i, k] * temp
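As a working illustration, here is a minimal Python version of the same scheme, extended with the back-substitution stage described earlier (a sketch for nonsingular systems; it copies its inputs and does no error handling):

def gaussian_elimination(A, b):
    # Solve Ax = b by forward elimination with partial pivoting,
    # then back substitution. A is a list of n rows; b has n values.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]    # augmented matrix
    for i in range(n - 1):
        # partial pivoting: move the largest |coefficient| into row i
        pivot = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[pivot] = M[pivot], M[i]
        for j in range(i + 1, n):
            t = M[j][i] / M[i][i]
            for k in range(i, n + 1):
                M[j][k] -= M[i][k] * t
    x = [0.0] * n                                   # back substitution
    for i in range(n - 1, -1, -1):
        s = sum(M[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

On the system of Example 1, gaussian_elimination([[2, -1, 1], [4, 1, -1], [1, 1, 1]], [1, 5, 0]) returns [1.0, 0.0, -1.0].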
Let us find the time efficiency of this algorithm. Its innermost loop consists of a single line,

A[j, k] ← A[j, k] − A[i, k] * temp,

which contains one multiplication and one subtraction. On most computers, multiplication is unquestionably more expensive than addition/subtraction, and hence it is multiplication that is usually quoted as the algorithm's basic operation.⁴

⁴ As we mentioned in Section 2.1, on some computers multiplication is not necessarily more expensive than addition/subtraction. For this algorithm, this point is moot since we can simply count the number of times the innermost loop is executed, which is, of course, exactly the same number as the number of multiplications and the number of subtractions there.

The standard summation formulas and rules reviewed in Section 2.3 (see also Appendix A) are very helpful in the following derivation:
C(n) = Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} Σ_{k=i}^{n+1} 1 = Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} (n + 1 − i + 1) = Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} (n + 2 − i)

= Σ_{i=1}^{n−1} (n + 2 − i)(n − (i + 1) + 1) = Σ_{i=1}^{n−1} (n + 2 − i)(n − i)

= (n + 1)(n − 1) + n(n − 2) + ... + 3 · 1 = Σ_{j=1}^{n−1} (j + 2)j

= Σ_{j=1}^{n−1} j² + Σ_{j=1}^{n−1} 2j = (n − 1)n(2n − 1)/6 + 2 · (n − 1)n/2

= n(n − 1)(2n + 5)/6 ≈ (1/3)n³ ∈ Θ(n³).
Since the second (back substitution) stage of Gaussian elimination is in Θ(n²), as you are asked to show in the exercises, the running time is dominated by the cubic elimination stage, making the entire algorithm cubic as well.

Theoretically, Gaussian elimination always either yields an exact solution to a system of linear equations when the system has a unique solution or discovers that no such solution exists. In the latter case, the system will have either no solutions or infinitely many of them. In practice, solving systems of significant size on a computer by this method is not nearly so straightforward as the method would lead us to believe. The principal difficulty lies in preventing an accumulation of round-off errors (see Section 11.4). Consult textbooks on numerical analysis that analyze this and other implementation issues in great detail.
LU Decomposition

Gaussian elimination has an interesting and very useful byproduct called LU decomposition of the coefficient matrix. In fact, modern commercial implementations of Gaussian elimination are based on such a decomposition rather than on the basic algorithm outlined above.

EXAMPLE Let us return to the example in the beginning of this section, where we applied Gaussian elimination to the matrix

A = [ 2  −1   1 ]
    [ 4   1  −1 ]
    [ 1   1   1 ].
Consider the lower-triangular matrix L made up of 1s on its main diagonal and the row multiples used in the forward elimination process

L = [  1    0   0 ]
    [  2    1   0 ]
    [ 1/2  1/2  1 ]

and the upper-triangular matrix U that was the result of this elimination

U = [ 2  −1   1 ]
    [ 0   3  −3 ]
    [ 0   0   2 ].
It turns out that the product LU of these matrices is equal to matrix A. (For this particular pair of L and U, you can verify this fact by direct multiplication, but as a general proposition, it needs, of course, a proof, which we omit here.)

Therefore, solving the system Ax = b is equivalent to solving the system LUx = b. The latter system can be solved as follows. Denote y = Ux; then Ly = b. Solve the system Ly = b first, which is easy to do because L is a lower-triangular matrix; then solve the system Ux = y, with the upper-triangular matrix U, to find x. Thus, for the system at the beginning of this section, we first solve Ly = b:
[  1    0   0 ] [ y_1 ]   [ 1 ]
[  2    1   0 ] [ y_2 ] = [ 5 ]
[ 1/2  1/2  1 ] [ y_3 ]   [ 0 ].

Its solution is

y_1 = 1,  y_2 = 5 − 2y_1 = 3,  y_3 = 0 − (1/2)y_1 − (1/2)y_2 = −2.
Solving Ux = y means solving

[ 2  −1   1 ] [ x_1 ]   [  1 ]
[ 0   3  −3 ] [ x_2 ] = [  3 ]
[ 0   0   2 ] [ x_3 ]   [ −2 ],

and the solution is

x_3 = (−2)/2 = −1,  x_2 = (3 − (−3)x_3)/3 = 0,  x_1 = (1 − x_3 − (−1)x_2)/2 = 1.
Note that once we have the LU decomposition of matrix A, we can solve systems Ax = b with as many right-hand side vectors b as we want to, one at a time. This is a distinct advantage over the classic Gaussian elimination discussed earlier. Also note that the LU decomposition does not actually require extra memory, because we can store the nonzero part of U in the upper-triangular part of A (including the main diagonal) and store the nontrivial part of L below the main diagonal of A.
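The two triangular solves are short in code. A minimal Python sketch, given L and U from the factorization (the function name is ours; it assumes 1s on the main diagonal of L, as in the text):

def lu_solve(L, U, b):
    # Solve LUx = b: forward-solve Ly = b, then back-solve Ux = y.
    n = len(b)
    y = [0.0] * n
    for i in range(n):                   # L is lower-triangular, unit diagonal
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):       # U is upper-triangular
        s = sum(U[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

With the L and U of the example above and b = [1, 5, 0], lu_solve returns [1.0, 0.0, -1.0].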
Computing a Matrix Inverse

Gaussian elimination is a very useful algorithm that tackles one of the most important problems of applied mathematics: solving systems of linear equations. In fact, Gaussian elimination can also be applied to several other problems of linear algebra, such as computing a matrix inverse. The inverse of an n × n matrix A is an n × n matrix, denoted A⁻¹, such that

AA⁻¹ = I,

where I is the n × n identity matrix (the matrix with all zero elements except the main diagonal elements, which are all ones). Not every square matrix has an inverse, but when it exists, the inverse is unique. If a matrix A does not have an inverse, it is called singular. One can prove that a matrix is singular if and only if one of its rows is a linear combination (a sum of some multiples) of the other rows. A convenient way to check whether a matrix is nonsingular is to apply Gaussian elimination: if it yields an upper-triangular matrix with no zeros on the main diagonal, the matrix is nonsingular; otherwise, it is singular. So being singular is a very special situation, and most square matrices do have their inverses.

Theoretically, inverse matrices are very important because they play the role of reciprocals in matrix algebra, overcoming the absence of the explicit division operation for matrices. For example, in a complete analogy with a linear equation in one unknown ax = b whose solution can be written as x = a⁻¹b (if a is not zero), we can express a solution to a system of n equations in n unknowns Ax = b as x = A⁻¹b (if A is nonsingular) where b is, of course, a vector, not a number.

According to the definition of the inverse matrix for a nonsingular n × n matrix A, to compute it we need to find n² numbers x_ij, 1 ≤ i, j ≤ n, such that
[ a_11  a_12  ...  a_1n ] [ x_11  x_12  ...  x_1n ]     [ 1  0  ...  0 ]
[ a_21  a_22  ...  a_2n ] [ x_21  x_22  ...  x_2n ]  =  [ 0  1  ...  0 ]
[         ...           ] [          ...           ]     [      ...     ]
[ a_n1  a_n2  ...  a_nn ] [ x_n1  x_n2  ...  x_nn ]     [ 0  0  ...  1 ].
We can find the unknowns by solving n systems of linear equations that have the same coefficient matrix A, the vector of unknowns x_j is the jth column of the inverse, and the right-hand side vector e_j is the jth column of the identity matrix (1 ≤ j ≤ n):

Ax_j = e_j.

We can solve these systems by applying Gaussian elimination to matrix A augmented by the n × n identity matrix. Better yet, we can use forward elimination to find the LU decomposition of A and then solve the systems LUx_j = e_j, j = 1, ..., n, as explained earlier.
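For illustration only, the inverse can be assembled column by column by reusing the solver sketched in the previous subsection (this redoes the elimination for each column; the LU-based approach just described avoids that):

def inverse(A):
    # Inverse of a nonsingular matrix A, obtained by solving
    # A x_j = e_j for each column e_j of the identity matrix.
    n = len(A)
    cols = []
    for j in range(n):
        e_j = [1.0 if i == j else 0.0 for i in range(n)]
        cols.append(gaussian_elimination(A, e_j))   # jth column of the inverse
    return [[cols[j][i] for j in range(n)] for i in range(n)]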
Computing a Determinant

Another problem that can be solved by Gaussian elimination is computing a determinant. The determinant of an n × n matrix A, denoted det A or |A|, is a number whose value can be defined recursively as follows. If n = 1, i.e., if A consists of a single element a_11, det A is equal to a_11; for n > 1, det A is computed by the recursive formula

det A = Σ_{j=1}^{n} s_j a_1j det A_j,

where s_j is +1 if j is odd and −1 if j is even, a_1j is the element in row 1 and column j, and A_j is the (n − 1) × (n − 1) matrix obtained from matrix A by deleting its row 1 and column j.
In particular, for a 2 × 2 matrix, the definition implies a formula that is easy to remember:

det [ a_11  a_12 ]  =  a_11 det [a_22] − a_12 det [a_21]  =  a_11 a_22 − a_12 a_21.
    [ a_21  a_22 ]

In other words, the determinant of a 2 × 2 matrix is simply equal to the difference between the products of its diagonal elements.
For a 3 × 3 matrix, we get

det [ a_11  a_12  a_13 ]
    [ a_21  a_22  a_23 ]
    [ a_31  a_32  a_33 ]

= a_11 det [ a_22  a_23 ]  −  a_12 det [ a_21  a_23 ]  +  a_13 det [ a_21  a_22 ]
           [ a_32  a_33 ]              [ a_31  a_33 ]              [ a_31  a_32 ]

= a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32 − a_11 a_23 a_32 − a_12 a_21 a_33 − a_13 a_22 a_31.
Incidentally, this formula is very handy in a variety of applications. In particular, we used it twice already in Section 5.5 as a part of the quickhull algorithm.

But what if we need to compute a determinant of a large matrix? Although this is a task that is rarely needed in practice, it is worth discussing nevertheless. Using the recursive definition can be of little help because it implies computing the sum of n! terms. Here, Gaussian elimination comes to the rescue again. The central point is the fact that the determinant of an upper-triangular matrix is equal to the product of elements on its main diagonal, and it is easy to see how elementary operations employed by the elimination algorithm influence the determinant's value. (Basically, it either remains unchanged or changes a sign or is multiplied by the constant used by the elimination algorithm.) As a result, we can compute the determinant of an n × n matrix in cubic time.
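A hedged Python sketch of this idea: run forward elimination with partial pivoting, flip the determinant's sign on every row exchange, and multiply the diagonal entries of the resulting triangular matrix (the row-subtraction operation leaves the determinant unchanged):

def determinant(A):
    n = len(A)
    M = [row[:] for row in A]
    det = 1.0
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(M[r][i]))
        if M[pivot][i] == 0:
            return 0.0                   # a zero pivot means a singular matrix
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            det = -det                   # a row exchange changes the sign
        for j in range(i + 1, n):
            t = M[j][i] / M[i][i]
            for k in range(i, n):
                M[j][k] -= M[i][k] * t
        det *= M[i][i]
    return det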
Determinants play an important role in the theory of systems of linear equations. Specifically, a system of n linear equations in n unknowns Ax = b has a unique solution if and only if the determinant of its coefficient matrix det A is not equal to zero. Moreover, this solution can be found by the formulas called Cramer's rule,

x_1 = det A_1 / det A,  ...,  x_j = det A_j / det A,  ...,  x_n = det A_n / det A,

where det A_j is the determinant of the matrix obtained by replacing the jth column of A by the column b. You are asked to investigate in the exercises whether using Cramer's rule is a good algorithm for solving systems of linear equations.
Exercises 6.2
1. Solve the following system by Gaussian elimination:

x_1 + x_2 + x_3 = 2
2x_1 + x_2 + x_3 = 3
x_1 − x_2 + 3x_3 = 8.
2. a. Solve the system of the previous question by the LU decomposition
method.
b. From the standpoint of general algorithm design techniques, how would
you classify the LU decomposition method?
3. Solve the system of Problem 1 by computing the inverse of its coefficient matrix and then multiplying it by the vector on the right-hand side.
4. Would it be correct to get the efficiency class of the forward elimination stage of Gaussian elimination as follows?

C(n) = Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} Σ_{k=i}^{n+1} 1 = Σ_{i=1}^{n−1} (n + 2 − i)(n − i)

= Σ_{i=1}^{n−1} [(n + 2)n − i(2n + 2) + i²]

= Σ_{i=1}^{n−1} (n + 2)n − Σ_{i=1}^{n−1} (2n + 2)i + Σ_{i=1}^{n−1} i².

Since s_1(n) = Σ_{i=1}^{n−1} (n + 2)n ∈ Θ(n³), s_2(n) = Σ_{i=1}^{n−1} (2n + 2)i ∈ Θ(n³), and s_3(n) = Σ_{i=1}^{n−1} i² ∈ Θ(n³), s_1(n) − s_2(n) + s_3(n) ∈ Θ(n³).
5. Write pseudocode for the back-substitution stage of Gaussian elimination and show that its running time is in Θ(n²).
6. Assuming that division of two numbers takes three times longer than their multiplication, estimate how much faster BetterForwardElimination is than ForwardElimination. (Of course, you should also assume that a compiler is not going to eliminate the inefficiency in ForwardElimination.)
7. a. Give an example of a system of two linear equations in two unknowns that has a unique solution and solve it by Gaussian elimination.
b. Give an example of a system of two linear equations in two unknowns that has no solution and apply Gaussian elimination to it.
c. Give an example of a system of two linear equations in two unknowns that has infinitely many solutions and apply Gaussian elimination to it.
8. The Gauss-Jordan elimination method differs from Gaussian elimination in that the elements above the main diagonal of the coefficient matrix are made zero at the same time and by the same use of a pivot row as the elements below the main diagonal.
a. Apply the Gauss-Jordan method to the system of Problem 1 of these exercises.
b. What general design strategy is this algorithm based on?
c. In general, how many multiplications are made by this method in solving a system of n equations in n unknowns? How does this compare with the number of multiplications made by the Gaussian elimination method in both its elimination and back-substitution stages?
9. A system Ax = b of n linear equations in n unknowns has a unique solution if and only if det A ≠ 0. Is it a good idea to check this condition before applying Gaussian elimination to the system?
10. a. Apply Cramer's rule to solve the system of Problem 1 of these exercises.
b. Estimate how many times longer it will take to solve a system of n linear equations in n unknowns by Cramer's rule than by Gaussian elimination. Assume that all the determinants in Cramer's rule formulas are computed independently by Gaussian elimination.
11. Lights out This one-person game is played on an n × n board composed of 1 × 1 light panels. Each panel has a switch that can be turned on and off, thereby toggling the on/off state of this and four vertically and horizontally adjacent panels. (Of course, toggling a corner square affects a total of three panels, and toggling a noncorner panel on the board's border affects a total of four squares.) Given an initial subset of lighted squares, the goal is to turn all the lights off.
a. Show that an answer can be found by solving a system of linear equations with 0/1 coefficients and right-hand sides using modulo 2 arithmetic.
b. Use Gaussian elimination to solve the 2 × 2 all-ones instance of this problem, where all the panels of the 2 × 2 board are initially lit.
c. Use Gaussian elimination to solve the 3 × 3 all-ones instance of this problem, where all the panels of the 3 × 3 board are initially lit.
6.3 Balanced Search Trees

In Sections 1.4, 4.5, and 5.3, we discussed the binary search tree, one of the principal data structures for implementing dictionaries. It is a binary tree whose nodes contain elements of a set of orderable items, one element per node, so that all elements in the left subtree are smaller than the element in the subtree's root, and all the elements in the right subtree are greater than it. Note that this transformation from a set to a binary search tree is an example of the representation-change technique. What do we gain by such transformation compared to the straightforward implementation of a dictionary by, say, an array? We gain in the time efficiency of searching, insertion, and deletion, which are all in Θ(log n), but only in the average case. In the worst case, these operations are in Θ(n) because the tree can degenerate into a severely unbalanced one with its height equal to n − 1.

Computer scientists have expended a lot of effort in trying to find a structure that preserves the good properties of the classical binary search tree (principally, the logarithmic efficiency of the dictionary operations and having the set's elements sorted) but avoids its worst-case degeneracy. They have come up with two approaches.
The first approach is of the instance-simplification variety: an unbalanced binary search tree is transformed into a balanced one. Because of this, such trees are called self-balancing. Specific implementations of this idea differ by their definition of balance. An AVL tree requires the difference between the heights of the left and right subtrees of every node never exceed 1. A red-black tree tolerates the height of one subtree being twice as large as the other subtree of the same node. If an insertion or deletion of a new node creates a tree with a violated balance requirement, the tree is restructured by one of a family of special transformations called rotations that restore the balance required. In this section, we will discuss only AVL trees. Information about other types of binary search trees that utilize the idea of rebalancing via rotations, including red-black trees and splay trees, can be found in the references [Cor09], [Sed02], and [Tar83].

The second approach is of the representation-change variety: allow more than one element in a node of a search tree. Specific cases of such trees are 2-3 trees, 2-3-4 trees, and more general and important B-trees. They differ in the number of elements admissible in a single node of a search tree, but all are perfectly balanced. We discuss the simplest case of such trees, the 2-3 tree, in this section, leaving the discussion of B-trees for Chapter 7.
AVL Trees
AVL trees were invented in 1962 by two Russian scientists, G. M. Adelson-Velsky
and E. M. Landis [Ade62], after whom this data structure is named.
FIGURE 6.2 (a) AVL tree. (b) Binary search tree that is not an AVL tree. The numbers above the nodes indicate the nodes' balance factors.
DEFINITION An AVL tree is a binary search tree in which the balance factor of every node, which is defined as the difference between the heights of the node's left and right subtrees, is either 0 or +1 or −1. (The height of the empty tree is defined as −1. Of course, the balance factor can also be computed as the difference between the numbers of levels rather than the height difference of the node's left and right subtrees.)

For example, the binary search tree in Figure 6.2a is an AVL tree but the one in Figure 6.2b is not.
If an insertion of a new node makes an AVL tree unbalanced, we transform the tree by a rotation. A rotation in an AVL tree is a local transformation of its subtree rooted at a node whose balance has become either +2 or −2. If there are several such nodes, we rotate the tree rooted at the unbalanced node that is the closest to the newly inserted leaf. There are only four types of rotations; in fact, two of them are mirror images of the other two. In their simplest form, the four rotations are shown in Figure 6.3.

The first rotation type is called the single right rotation, or R-rotation. (Imagine rotating the edge connecting the root and its left child in the binary tree in Figure 6.3a to the right.) Figure 6.4 presents the single R-rotation in its most general form. Note that this rotation is performed after a new key is inserted into the left subtree of the left child of a tree whose root had the balance of +1 before the insertion.

The symmetric single left rotation, or L-rotation, is the mirror image of the single R-rotation. It is performed after a new key is inserted into the right subtree of the right child of a tree whose root had the balance of −1 before the insertion. (You are asked to draw a diagram of the general case of the single L-rotation in the exercises.)
FIGURE 6.3 Four rotation types for AVL trees with three nodes. (a) Single R-rotation. (b) Single L-rotation. (c) Double LR-rotation. (d) Double RL-rotation.
The second rotation type is called the double left-right rotation (LR-
rotation). It is, in fact, a combination of two rotations: we perform the L-rotation
of the left subtree of root r followed by the R-rotation of the new tree rooted at
r (Figure 6.5). It is performed after a new key is inserted into the right subtree of
the left child of a tree whose root had the balance of +1 before the insertion.
FIGURE 6.4 General form of the R-rotation in the AVL tree. A shaded node is the last one inserted.

FIGURE 6.5 General form of the double LR-rotation in the AVL tree. A shaded node is the last one inserted. It can be either in the left subtree or in the right subtree of the root's grandchild.
The double right-left rotation (RL-rotation) is the mirror image of the double LR-rotation and is left for the exercises.

Note that the rotations are not trivial transformations, though fortunately they can be done in constant time. Not only should they guarantee that a resulting tree is balanced, but they should also preserve the basic requirements of a binary search tree. For example, in the initial tree of Figure 6.4, all the keys of subtree T_1 are smaller than c, which is smaller than all the keys of subtree T_2, which are smaller than r, which is smaller than all the keys of subtree T_3. And the same relationships among the key values hold, as they must, for the balanced tree after the rotation.
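To make the mechanics concrete, here is a hedged Python sketch of the single R-rotation on a bare node structure (the class and helpers are our own illustration, not the book's code; heights are recomputed from scratch rather than maintained incrementally, which a real implementation would avoid):

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def height(node):
    # height of the empty tree is -1, matching the definition above
    return -1 if node is None else 1 + max(height(node.left), height(node.right))

def balance_factor(node):
    return height(node.left) - height(node.right)

def rotate_right(r):
    # Single R-rotation: the left child c becomes the new subtree root;
    # c's former right subtree (T2 in Figure 6.4) is reattached as r's left.
    c = r.left
    r.left = c.right
    c.right = r
    return c        # new root of the rotated subtree

Since all keys of T_1 are smaller than c's key and all keys of T_2 lie between c's and r's keys, the reattachment preserves the binary search tree ordering, as discussed above.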
FIGURE 6.6 Construction of an AVL tree for the list 5, 6, 8, 3, 2, 4, 7 by successive insertions. The parenthesized number of a rotation's abbreviation indicates the root of the tree being reorganized.
An example of constructing an AVL tree for a given list of numbers is shown in Figure 6.6. As you trace the algorithm's operations, keep in mind that if there are several nodes with the ±2 balance, the rotation is done for the tree rooted at the unbalanced node that is the closest to the newly inserted leaf.

How efficient are AVL trees? As with any search tree, the critical characteristic is the tree's height. It turns out that it is bounded both above and below by logarithmic functions. Specifically, the height h of any AVL tree with n nodes satisfies the inequalities

⌊log₂ n⌋ ≤ h < 1.4405 log₂(n + 2) − 1.3277.
(These weird-looking constants are round-offs of some irrational numbers related
to Fibonacci numbers and the golden ratiosee Section 2.5.)
The inequalities immediately imply that the operations of searching and in-
sertion are (log n) in the worst case. Getting an exact formula for the average
height of an AVL tree constructed for random lists of keys has proved to be dif-
cult, but it is known from extensive experiments that it is about 1.01log
2
n + 0.1
except when n is small [KnuIII, p. 468]. Thus, searching in an AVL tree requires,
on average, almost the same number of comparisons as searching in a sorted array
by binary search.
The operation of key deletion in an AVL tree is considerably more difficult than insertion, but fortunately it turns out to be in the same efficiency class as insertion, i.e., logarithmic.
These impressive efficiency characteristics come at a price, however. The drawbacks of AVL trees are frequent rotations and the need to maintain balances for its nodes. These drawbacks have prevented AVL trees from becoming the standard structure for implementing dictionaries. At the same time, their underlying idea, that of rebalancing a binary search tree via rotations, has proved to be very fruitful and has led to discoveries of other interesting variations of the classical binary search tree.
2-3 Trees
As mentioned at the beginning of this section, the second idea of balancing a
search tree is to allow more than one key in the same node of such a tree. The
simplest implementation of this idea is 2-3 trees, introduced by the U.S. computer
scientist John Hopcroft in 1970. A 2-3 tree is a tree that can have nodes of two
kinds: 2-nodes and 3-nodes. A 2-node contains a single key K and has two children: the left child serves as the root of a subtree whose keys are less than K, and the right child serves as the root of a subtree whose keys are greater than K. (In other words, a 2-node is the same kind of node we have in the classical binary search tree.) A 3-node contains two ordered keys K1 and K2 (K1 < K2) and has three children. The leftmost child serves as the root of a subtree with keys less than K1, the middle child serves as the root of a subtree with keys between K1 and K2, and the rightmost child serves as the root of a subtree with keys greater than K2 (Figure 6.7).
The last requirement of the 2-3 tree is that all its leaves must be on the same level. In other words, a 2-3 tree is always perfectly height-balanced: the length of a path from the root to a leaf is the same for every leaf. It is this property that we "buy" by allowing more than one key in the same node of a search tree.
Searching for a given key K in a 2-3 tree is quite straightforward. We start at the root. If the root is a 2-node, we act as if it were a binary search tree: we either stop if K is equal to the root's key or continue the search in the left or right
FIGURE 6.7 Two kinds of nodes of a 2-3 tree.
subtree if K is, respectively, smaller or larger than the root's key. If the root is a 3-node, we know after no more than two key comparisons whether the search can be stopped (if K is equal to one of the root's keys) or in which of the root's three subtrees it needs to be continued.
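The search logic translates into very little code. Below is a small Python sketch under the assumption that each node stores a sorted list of one or two keys and a list of children (empty for a leaf); the Node23 class and the sample tree, built to match the final tree of Figure 6.8, are illustrative.

    class Node23:
        def __init__(self, keys, children=()):
            self.keys = list(keys)            # [K] for a 2-node, [K1, K2] for a 3-node
            self.children = list(children)    # empty list for a leaf

    def search_23(node, key):
        while node is not None:
            if key in node.keys:              # at most two key comparisons
                return node
            if not node.children:             # reached a leaf without finding key
                return None
            # child index = number of the node's keys smaller than the search key
            node = node.children[sum(key > k for k in node.keys)]
        return None

    # the final tree of Figure 6.8: a full tree of 2-nodes of height 2
    root = Node23([5], [Node23([3], [Node23([2]), Node23([4])]),
                        Node23([8], [Node23([7]), Node23([9])])])
    print(search_23(root, 7) is not None, search_23(root, 6) is not None)  # True False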
Inserting a new key in a 2-3 tree is done as follows. First of all, we always insert a new key K in a leaf, except for the empty tree. The appropriate leaf is found by performing a search for K. If the leaf in question is a 2-node, we insert K there as either the first or the second key, depending on whether K is smaller or larger than the node's old key. If the leaf is a 3-node, we split the leaf in two: the smallest of the three keys (two old ones and the new key) is put in the first leaf, the largest key is put in the second leaf, and the middle key is promoted to the old leaf's parent. (If the leaf happens to be the tree's root, a new root is created to accept the middle key.) Note that promotion of a middle key to its parent can cause the parent's overflow (if it was a 3-node) and hence can lead to several node splits along the chain of the leaf's ancestors.
An example of a 2-3 tree construction is given in Figure 6.8.
As for any search tree, the efficiency of the dictionary operations depends on the tree's height. So let us first find an upper bound for it. A 2-3 tree of height h with the smallest number of keys is a full tree of 2-nodes (such as the final tree in Figure 6.8 for h = 2). Therefore, for any 2-3 tree of height h with n nodes, we get the inequality

n ≥ 1 + 2 + . . . + 2^h = 2^(h+1) − 1,

and hence

h ≤ log2(n + 1) − 1.
On the other hand, a 2-3 tree of height h with the largest number of keys is a full tree of 3-nodes, each with two keys and three children. Therefore, for any 2-3 tree with n nodes,

n ≤ 2 · 1 + 2 · 3 + . . . + 2 · 3^h = 2(1 + 3 + . . . + 3^h) = 3^(h+1) − 1
FIGURE 6.8 Construction of a 2-3 tree for the list 9, 5, 8, 3, 2, 4, 7.
and hence

h ≥ log3(n + 1) − 1.

These lower and upper bounds on height h,

log3(n + 1) − 1 ≤ h ≤ log2(n + 1) − 1,
imply that the time efficiencies of searching, insertion, and deletion are all in Θ(log n) in both the worst and average case. We consider a very important generalization of 2-3 trees, called B-trees, in Section 7.4.
Exercises 6.3
1. Which of the following binary trees are AVL trees?
(a) (b) (c)
2. a. For n =1, 2, 3, 4, and 5, draw all the binary trees with n nodes that satisfy
the balance requirement of AVL trees.
b. Draw a binary tree of height 4 that can be an AVL tree and has the smallest number of nodes among all such trees.
3. Draw diagrams of the single L-rotation and of the double RL-rotation in their general form.
4. For each of the following lists, construct an AVL tree by inserting their ele-
ments successively, starting with the empty tree.
a. 1, 2, 3, 4, 5, 6
b. 6, 5, 4, 3, 2, 1
c. 3, 6, 5, 1, 2, 4
5. a. For an AVL tree containing real numbers, design an algorithm for computing the range (i.e., the difference between the largest and smallest numbers in the tree) and determine its worst-case efficiency.
b. True or false: The smallest and the largest keys in an AVL tree can always be found on either the last level or the next-to-last level?
6. Write a program for constructing an AVL tree for a given list of n distinct
integers.
7. a. Construct a 2-3 tree for the list C, O, M, P, U, T, I, N, G. Use the alphabetical
order of the letters and insert them successively starting with the empty
tree.
b. Assuming that the probabilities of searching for each of the keys (i.e., the letters) are the same, find the largest number and the average number of key comparisons for successful searches in this tree.
8. Let TB and T2-3 be, respectively, a classical binary search tree and a 2-3 tree constructed for the same list of keys inserted in the corresponding trees in the same order. True or false: Searching for the same key in T2-3 always takes fewer or the same number of key comparisons as searching in TB?
9. For a 2-3 tree containing real numbers, design an algorithm for computing the range (i.e., the difference between the largest and smallest numbers in the tree) and determine its worst-case efficiency.
10. Write a program for constructing a 2-3 tree for a given list of n integers.
6.4 Heaps and Heapsort
The data structure called the heap is definitely not a disordered pile of items as the word's definition in a standard dictionary might suggest. Rather, it is a clever, partially ordered data structure that is especially suitable for implementing priority queues. Recall that a priority queue is a multiset of items with an orderable characteristic called an item's priority, with the following operations:
FIGURE 6.9 Illustration of the denition of heap: only the leftmost tree is a heap.
finding an item with the highest (i.e., largest) priority
deleting an item with the highest priority
adding a new item to the multiset
It is primarily an efficient implementation of these operations that makes the heap both interesting and useful. Priority queues arise naturally in such applications as scheduling job executions by computer operating systems and traffic management by communication networks. They also arise in several important algorithms, e.g., Prim's algorithm (Section 9.1), Dijkstra's algorithm (Section 9.3), Huffman encoding (Section 9.4), and branch-and-bound applications (Section 12.2). The heap is also the data structure that serves as a cornerstone of a theoretically important sorting algorithm called heapsort. We discuss this algorithm after we define the heap and investigate its basic properties.
Notion of the Heap
DEFINITION A heap can be defined as a binary tree with keys assigned to its nodes, one key per node, provided the following two conditions are met:
1. The shape property: the binary tree is essentially complete (or simply complete), i.e., all its levels are full except possibly the last level, where only some rightmost leaves may be missing.
2. The parental dominance or heap property: the key in each node is greater than or equal to the keys in its children. (This condition is considered automatically satisfied for all leaves.)^5
For example, consider the trees of Figure 6.9. The first tree is a heap. The second one is not a heap, because the tree's shape property is violated. And the third one is not a heap, because the parental dominance fails for the node with key 5.
Note that key values in a heap are ordered top down; i.e., a sequence of values on any path from the root to a leaf is decreasing (nonincreasing, if equal keys are allowed). However, there is no left-to-right order in key values; i.e., there is no
5. Some authors require the key at each node to be less than or equal to the keys at its children. We call
this variation a min-heap.
FIGURE 6.10 Heap and its array representation.
relationship among key values for nodes either on the same level of the tree or,
more generally, in the left and right subtrees of the same node.
Here is a list of important properties of heaps, which are not difficult to prove (check these properties for the heap of Figure 6.10, as an example).
1. There exists exactly one essentially complete binary tree with n nodes. Its height is equal to ⌊log2 n⌋.
2. The root of a heap always contains its largest element.
3. A node of a heap considered with all its descendants is also a heap.
4. A heap can be implemented as an array by recording its elements in the top-down, left-to-right fashion. It is convenient to store the heap's elements in positions 1 through n of such an array, leaving H[0] either unused or putting there a sentinel whose value is greater than every element in the heap. In such a representation,
a. the parental node keys will be in the first ⌊n/2⌋ positions of the array, while the leaf keys will occupy the last ⌈n/2⌉ positions;
b. the children of a key in the array's parental position i (1 ≤ i ≤ ⌊n/2⌋) will be in positions 2i and 2i + 1, and, correspondingly, the parent of a key in position i (2 ≤ i ≤ n) will be in position ⌊i/2⌋.
Thus, we could also define a heap as an array H[1..n] in which every element in position i in the first half of the array is greater than or equal to the elements in positions 2i and 2i + 1, i.e.,

H[i] ≥ max{H[2i], H[2i + 1]} for i = 1, . . . , ⌊n/2⌋.

(Of course, if 2i + 1 > n, just H[i] ≥ H[2i] needs to be satisfied.) While the ideas behind the majority of algorithms dealing with heaps are easier to understand if we think of heaps as binary trees, their actual implementations are usually much simpler and more efficient with arrays.
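This array characterization is easy to test in code. A minimal Python sketch (the function name is ours; H[0] is left unused so indices match the 1-based convention above):

    def is_heap(H):
        # H[1..n] holds the keys; H[0] is an unused slot, as suggested above
        n = len(H) - 1
        return all(H[i] >= max(H[2*i : 2*i + 2]) for i in range(1, n // 2 + 1))

    print(is_heap([None, 10, 8, 7, 5, 2, 1, 6, 3, 5, 1]))   # True: the heap of Figure 6.10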
How can we construct a heap for a given list of keys? There are two principal alternatives for doing this. The first is the bottom-up heap construction algorithm illustrated in Figure 6.11. It initializes the essentially complete binary tree with n nodes by placing keys in the order given and then heapifies the tree as follows. Starting with the last parental node, the algorithm checks whether the parental
FIGURE 6.11 Bottom-up construction of a heap for the list 2, 9, 7, 6, 5, 8. The double-
headed arrows show key comparisons verifying the parental dominance.
dominance holds for the key in this node. If it does not, the algorithm exchanges the node's key K with the larger key of its children and checks whether the parental dominance holds for K in its new position. This process continues until the parental dominance for K is satisfied. (Eventually, it has to because it holds automatically for any key in a leaf.) After completing the heapification of the subtree rooted at the current parental node, the algorithm proceeds to do the same for the node's immediate predecessor. The algorithm stops after this is done for the root of the tree.
ALGORITHM HeapBottomUp(H[1..n])
//Constructs a heap from elements of a given array
//by the bottom-up algorithm
//Input: An array H[1..n] of orderable items
//Output: A heap H[1..n]
for i ← ⌊n/2⌋ downto 1 do
    k ← i; v ← H[k]
    heap ← false
    while not heap and 2 ∗ k ≤ n do
        j ← 2 ∗ k
        if j < n //there are two children
            if H[j] < H[j + 1] j ← j + 1
        if v ≥ H[j]
            heap ← true
        else H[k] ← H[j]; k ← j
    H[k] ← v
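A direct Python transcription of HeapBottomUp might look as follows (the naming is ours; a dummy H[0] slot keeps the indices 1-based, as in the pseudocode):

    def heap_bottom_up(H):
        # in-place bottom-up heap construction on H[1..n]
        n = len(H) - 1
        for i in range(n // 2, 0, -1):
            k, v = i, H[i]
            heap = False
            while not heap and 2 * k <= n:
                j = 2 * k
                if j < n and H[j] < H[j + 1]:   # j indexes the larger child
                    j += 1
                if v >= H[j]:
                    heap = True
                else:
                    H[k] = H[j]                  # sift the vacancy down
                    k = j
            H[k] = v

    H = [None, 2, 9, 7, 6, 5, 8]                 # the list from Figure 6.11
    heap_bottom_up(H)
    print(H[1:])                                  # [9, 6, 8, 2, 5, 7]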
How efficient is this algorithm in the worst case? Assume, for simplicity, that n = 2^k − 1 so that a heap's tree is full, i.e., the largest possible number of nodes occurs on each level. Let h be the height of the tree. According to the first property of heaps in the list at the beginning of the section, h = ⌊log2 n⌋ or just ⌈log2(n + 1)⌉ − 1 = k − 1 for the specific values of n we are considering. Each key on level i of the tree will travel to the leaf level h in the worst case of the heap construction algorithm. Since moving to the next level down requires two comparisons (one to find the larger child and the other to determine whether the exchange is required), the total number of key comparisons involving a key on level i will be 2(h − i). Therefore, the total number of key comparisons in the worst case will be

C_worst(n) = Σ_{i=0}^{h−1} Σ_{keys on level i} 2(h − i) = Σ_{i=0}^{h−1} 2(h − i)2^i = 2(n − log2(n + 1)),

where the validity of the last equality can be proved either by using the closed-form formula for the sum Σ_{i=1}^{h} i2^i (see Appendix A) or by mathematical induction on h. Thus, with this bottom-up algorithm, a heap of size n can be constructed with fewer than 2n comparisons.
The alternative (and less efficient) algorithm constructs a heap by successive insertions of a new key into a previously constructed heap; some people call it the top-down heap construction algorithm. So how can we insert a new key K into a heap? First, attach a new node with key K in it after the last leaf of the existing heap. Then sift K up to its appropriate place in the new heap as follows. Compare K with its parent's key: if the latter is greater than or equal to K, stop (the structure is a heap); otherwise, swap these two keys and compare K with its new parent. This swapping continues until K is not greater than its last parent or it reaches the root (illustrated in Figure 6.12).
Obviously, this insertion operation cannot require more key comparisons than the heap's height. Since the height of a heap with n nodes is about log2 n, the time efficiency of insertion is in O(log n).
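The sifting-up insertion is equally short in Python; this sketch continues the array conventions of the earlier sketches (the function name is ours):

    def heap_insert(H, key):
        # attach key after the last leaf, then sift it up via parent swaps
        H.append(key)
        i = len(H) - 1
        while i > 1 and H[i // 2] < H[i]:
            H[i // 2], H[i] = H[i], H[i // 2]
            i //= 2

    H = [None, 9, 6, 8, 2, 5, 7]   # the heap built in Figure 6.11
    heap_insert(H, 10)             # as in Figure 6.12, 10 is sifted up to the root
    print(H[1:])                   # [10, 6, 9, 2, 5, 7, 8]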
How can we delete an item from a heap? We consider here only the most important case of deleting the root's key, leaving the question about deleting an arbitrary key in a heap for the exercises. (Authors of textbooks like to do such things to their readers, do they not?) Deleting the root's key from a heap can be done with the following algorithm, illustrated in Figure 6.13.
FIGURE 6.12 Inserting a key (10) into the heap constructed in Figure 6.11. The new key
is sifted up via a swap with its parent until it is not larger than its parent
(or is in the root).
FIGURE 6.13 Deleting the root's key from a heap. The key to be deleted is swapped with the last key, after which the smaller tree is heapified by exchanging the new key in its root with the larger key in its children until the parental dominance requirement is satisfied.
Maximum Key Deletion from a heap
Step 1 Exchange the root's key with the last key K of the heap.
Step 2 Decrease the heap's size by 1.
Step 3 Heapify the smaller tree by sifting K down the tree exactly in the same way we did it in the bottom-up heap construction algorithm. That is, verify the parental dominance for K: if it holds, we are done; if not, swap K with the larger of its children and repeat this operation until the parental dominance condition holds for K in its new position.
The efficiency of deletion is determined by the number of key comparisons needed to heapify the tree after the swap has been made and the size of the tree is decreased by 1. Since this cannot require more key comparisons than twice the heap's height, the time efficiency of deletion is in O(log n) as well.
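In code, root deletion reuses the same sifting loop as bottom-up construction; here is a sketch continuing the earlier conventions (function names are ours):

    def sift_down(H, k, n):
        # sift H[k] down within H[1..n], as in the bottom-up construction loop
        v = H[k]
        while 2 * k <= n:
            j = 2 * k
            if j < n and H[j] < H[j + 1]:
                j += 1                       # j indexes the larger child
            if v >= H[j]:
                break
            H[k] = H[j]
            k = j
        H[k] = v

    def delete_max(H):
        H[1], H[-1] = H[-1], H[1]            # Step 1: swap root with the last key
        top = H.pop()                        # Step 2: decrease the heap's size by 1
        if len(H) > 1:
            sift_down(H, 1, len(H) - 1)      # Step 3: heapify the smaller tree
        return top

    H = [None, 9, 6, 8, 2, 5, 7]
    print(delete_max(H), H[1:])              # 9 [8, 6, 7, 2, 5]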
Heapsort
Now we can describe heapsort, an interesting sorting algorithm discovered by J. W. J. Williams [Wil64]. This is a two-stage algorithm that works as follows.
Stage 1 (heap construction): Construct a heap for a given array.
Stage 2 (maximum deletions): Apply the root-deletion operation n − 1 times to the remaining heap.
As a result, the array elements are eliminated in decreasing order. But since under the array implementation of heaps an element being deleted is placed last, the resulting array will be exactly the original array sorted in increasing order. Heapsort is traced on a specific input in Figure 6.14. (The same input as the one
FIGURE 6.14 Sorting the array 2, 9, 7, 6, 5, 8 by heapsort.
in Figure 6.11 is intentionally used so that you can compare the tree and array
implementations of the bottom-up heap construction algorithm.)
Since we already know that the heap construction stage of the algorithm is in O(n), we have to investigate just the time efficiency of the second stage. For the number of key comparisons, C(n), needed for eliminating the root keys from the heaps of diminishing sizes from n to 2, we get the following inequality:

C(n) ≤ 2⌊log2(n − 1)⌋ + 2⌊log2(n − 2)⌋ + . . . + 2⌊log2 1⌋ ≤ 2 Σ_{i=1}^{n−1} log2 i
     ≤ 2 Σ_{i=1}^{n−1} log2(n − 1) = 2(n − 1) log2(n − 1) ≤ 2n log2 n.

This means that C(n) ∈ O(n log n) for the second stage of heapsort. For both stages, we get O(n) + O(n log n) = O(n log n). A more detailed analysis shows that the time efficiency of heapsort is, in fact, in Θ(n log n) in both the worst and average cases. Thus, heapsort's time efficiency falls in the same class as that of mergesort. Unlike the latter, heapsort is in-place, i.e., it does not require any extra storage. Timing experiments on random files show that heapsort runs more slowly than quicksort but can be competitive with mergesort.
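Putting the two stages together yields a compact in-place heapsort; this sketch reuses heap_bottom_up and sift_down from the sketches above (again with the 1-based array convention):

    def heapsort(H):
        heap_bottom_up(H)                # Stage 1: heap construction
        for n in range(len(H) - 1, 1, -1):
            H[1], H[n] = H[n], H[1]      # Stage 2: park the current maximum at position n
            sift_down(H, 1, n - 1)       # re-heapify the shrunken heap H[1..n-1]

    H = [None, 2, 9, 7, 6, 5, 8]
    heapsort(H)
    print(H[1:])                          # [2, 5, 6, 7, 8, 9], as traced in Figure 6.14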
Exercises 6.4
1. a. Construct a heap for the list 1, 8, 6, 5, 3, 7, 4 by the bottom-up algorithm.
b. Construct a heap for the list 1, 8, 6, 5, 3, 7, 4 by successive key insertions
(top-down algorithm).
c. Is it always true that the bottom-up and top-down algorithms yield the
same heap for the same input?
2. Outline an algorithm for checking whether an array H[1..n] is a heap and determine its time efficiency.
3. a. Find the smallest and the largest number of keys that a heap of height h
can contain.
b. Prove that the height of a heap with n nodes is equal to ⌊log2 n⌋.
4. Prove the following equality used in Section 6.4:

Σ_{i=0}^{h−1} 2(h − i)2^i = 2(n − log2(n + 1)), where n = 2^(h+1) − 1.
5. a. Design an efficient algorithm for finding and deleting an element of the smallest value in a heap and determine its time efficiency.
b. Design an efficient algorithm for finding and deleting an element of a given value v in a heap H and determine its time efficiency.
6. Indicate the time efficiency classes of the three main operations of the priority queue implemented as
a. an unsorted array.
b. a sorted array.
c. a binary search tree.
d. an AVL tree.
e. a heap.
7. Sort the following lists by heapsort by using the array representation of heaps.
a. 1, 2, 3, 4, 5 (in increasing order)
b. 5, 4, 3, 2, 1 (in increasing order)
c. S, O, R, T, I, N, G (in alphabetical order)
8. Is heapsort a stable sorting algorithm?
9. What variety of the transform-and-conquer technique does heapsort repre-
sent?
10. Which sorting algorithm other than heapsort uses a priority queue?
11. Implement three advanced sorting algorithms (mergesort, quicksort, and heapsort) in the language of your choice and investigate their performance on arrays of sizes n = 10^3, 10^4, 10^5, and 10^6. For each of these sizes consider
a. randomly generated files of integers in the range [1..n].
b. increasing files of integers 1, 2, . . . , n.
c. decreasing files of integers n, n − 1, . . . , 1.
12. Spaghetti sort Imagine a handful of uncooked spaghetti, individual rods whose lengths represent numbers that need to be sorted.
a. Outline a "spaghetti sort," a sorting algorithm that takes advantage of this unorthodox representation.
b. What does this example of computer science folklore (see [Dew93]) have to do with the topic of this chapter in general and heapsort in particular?
6.5 Horner's Rule and Binary Exponentiation
In this section, we discuss the problem of computing the value of a polynomial

p(x) = a_n x^n + a_{n−1} x^{n−1} + . . . + a_1 x + a_0        (6.1)
at a given point x and its important special case of computing x^n. Polynomials constitute the most important class of functions because they possess a wealth of good properties on the one hand and can be used for approximating other types of functions on the other. The problem of manipulating polynomials efficiently has been important for several centuries; new discoveries were still being made in the last 50 years. By far the most important of them was the fast Fourier transform (FFT). The practical importance of this remarkable algorithm, which is based on representing a polynomial by its values at specially chosen points, was such that some people consider it one of the most important algorithmic discoveries of all time. Because of its relative complexity, we do not discuss the FFT algorithm in this book. An interested reader will find a wealth of literature on the subject, including reasonably accessible treatments in such textbooks as [Kle06] and [Cor09].
Horner's Rule
Horner's rule is an old but very elegant and efficient algorithm for evaluating a polynomial. It is named after the British mathematician W. G. Horner, who published it in the early 19th century. But according to Knuth [KnuII, p. 486], the method was used by Isaac Newton 150 years before Horner. You will appreciate this method much more if you first design an algorithm for the polynomial evaluation problem by yourself and investigate its efficiency (see Problems 1 and 2 in this section's exercises).
Horner's rule is a good example of the representation-change technique since it is based on representing p(x) by a formula different from (6.1). This new formula is obtained from (6.1) by successively taking x as a common factor in the remaining polynomials of diminishing degrees:

p(x) = ( . . . (a_n x + a_{n−1})x + . . . )x + a_0.        (6.2)
For example, for the polynomial p(x) = 2x^4 − x^3 + 3x^2 + x − 5, we get

p(x) = 2x^4 − x^3 + 3x^2 + x − 5
     = x(2x^3 − x^2 + 3x + 1) − 5
     = x(x(2x^2 − x + 3) + 1) − 5
     = x(x(x(2x − 1) + 3) + 1) − 5.        (6.3)
It is in formula (6.2) that we will substitute a value of x at which the polynomial needs to be evaluated. It is hard to believe that this is a way to an efficient algorithm, but the unpleasant appearance of formula (6.2) is just that, an appearance. As we shall see, there is no need to go explicitly through the transformation leading to it: all we need is an original list of the polynomial's coefficients.
The pen-and-pencil calculation can be conveniently organized with a two-row table. The first row contains the polynomial's coefficients (including all the coefficients equal to zero, if any) listed from the highest a_n to the lowest a_0. Except for its first entry, which is a_n, the second row is filled left to right as follows: the next entry is computed as the x's value times the last entry in the second row plus the next coefficient from the first row. The final entry computed in this fashion is the value being sought.
EXAMPLE 1 Evaluate p(x) = 2x^4 − x^3 + 3x^2 + x − 5 at x = 3.

coefficients   2   −1                3                1                 −5
x = 3          2   3 · 2 + (−1) = 5  3 · 5 + 3 = 18   3 · 18 + 1 = 55   3 · 55 + (−5) = 160

Thus, p(3) = 160. (On comparing the table's entries with formula (6.3), you will see that 3 · 2 + (−1) = 5 is the value of 2x − 1 at x = 3, 3 · 5 + 3 = 18 is the value of x(2x − 1) + 3 at x = 3, 3 · 18 + 1 = 55 is the value of x(x(2x − 1) + 3) + 1 at x = 3, and, finally, 3 · 55 + (−5) = 160 is the value of x(x(x(2x − 1) + 3) + 1) − 5 = p(x) at x = 3.)
Pseudocode of this algorithm is the shortest one imaginable for a nontrivial algorithm:

ALGORITHM Horner(P[0..n], x)
//Evaluates a polynomial at a given point by Horner's rule
//Input: An array P[0..n] of coefficients of a polynomial of degree n,
//       stored from the lowest to the highest and a number x
//Output: The value of the polynomial at x
p ← P[n]
for i ← n − 1 downto 0 do
    p ← x ∗ p + P[i]
return p
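In Python the rule is a two-line loop; the function name is ours, and the coefficient list is ordered lowest to highest, as in the pseudocode:

    def horner(P, x):
        p = P[-1]                          # start with the highest coefficient
        for coeff in reversed(P[:-1]):
            p = x * p + coeff              # one multiplication, one addition per step
        return p

    # p(x) = 2x^4 - x^3 + 3x^2 + x - 5 from Example 1
    print(horner([-5, 1, 3, -1, 2], 3))    # 160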
The number of multiplications and the number of additions are given by the same sum:

M(n) = A(n) = Σ_{i=0}^{n−1} 1 = n.
To appreciate how efficient Horner's rule is, consider only the first term of a polynomial of degree n: a_n x^n. Just computing this single term by the brute-force algorithm would require n multiplications, whereas Horner's rule computes, in addition to this term, n − 1 other terms, and it still uses the same number of multiplications! It is not surprising that Horner's rule is an optimal algorithm for polynomial evaluation without preprocessing the polynomial's coefficients. But it took scientists 150 years after Horner's publication to come to the realization that such a question was worth investigating.
Horner's rule also has some useful byproducts. The intermediate numbers generated by the algorithm in the process of evaluating p(x) at some point x_0 turn out to be the coefficients of the quotient of the division of p(x) by x − x_0, and the final result, in addition to being p(x_0), is equal to the remainder of this division. Thus, according to Example 1, the quotient and the remainder of the division of 2x^4 − x^3 + 3x^2 + x − 5 by x − 3 are 2x^3 + 5x^2 + 18x + 55 and 160, respectively. This division algorithm, known as synthetic division, is more convenient than so-called long division.
Binary Exponentiation
The amazing efficiency of Horner's rule fades if the method is applied to computing a^n, which is the value of x^n at x = a. In fact, it degenerates to the brute-force multiplication of a by itself, with wasteful additions of zeros in between. Since computing a^n (actually, a^n mod m) is an essential operation in several important primality-testing and encryption methods, we consider now two algorithms for computing a^n that are based on the representation-change idea. They both exploit the binary representation of exponent n, but one of them processes this binary string left to right, whereas the second does it right to left.
Let

n = b_I . . . b_i . . . b_0

be the bit string representing a positive integer n in the binary number system. This means that the value of n can be computed as the value of the polynomial

p(x) = b_I x^I + . . . + b_i x^i + . . . + b_0        (6.4)
at x = 2. For example, if n = 13, its binary representation is 1101 and

13 = 1 · 2^3 + 1 · 2^2 + 0 · 2^1 + 1 · 2^0.
Let us now compute the value of this polynomial by applying Horner's rule and see what the method's operations imply for computing the power

a^n = a^(p(2)) = a^(b_I 2^I + . . . + b_i 2^i + . . . + b_0).
Horner's rule for the binary polynomial p(2):      Implications for a^n = a^(p(2)):

p ← 1 //the leading digit is always 1 for n ≥ 1    a^p ← a^1
for i ← I − 1 downto 0 do                          for i ← I − 1 downto 0 do
    p ← 2p + b_i                                       a^p ← a^(2p + b_i)

But

a^(2p + b_i) = a^(2p) · a^(b_i) = (a^p)^2 · a^(b_i) = (a^p)^2 if b_i = 0, or (a^p)^2 · a if b_i = 1.
Thus, after initializing the accumulator's value to a, we can scan the bit string representing the exponent n to always square the last value of the accumulator and, if the current binary digit is 1, also to multiply it by a. These observations lead to the following left-to-right binary exponentiation method of computing a^n.
ALGORITHM LeftRightBinaryExponentiation(a, b(n))
//Computes a^n by the left-to-right binary exponentiation algorithm
//Input: A number a and a list b(n) of binary digits b_I, . . . , b_0
//       in the binary expansion of a positive integer n
//Output: The value of a^n
product ← a
for i ← I − 1 downto 0 do
    product ← product ∗ product
    if b_i = 1 product ← product ∗ a
return product
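A Python sketch of the same algorithm; instead of taking an explicit digit list, it reads the bits of n from bin(n), which is an implementation convenience, not part of the pseudocode:

    def left_right_binary_exp(a, n):
        product = a
        for bit in bin(n)[3:]:             # skip the '0b' prefix and the leading 1-bit
            product *= product             # always square
            if bit == '1':
                product *= a               # extra multiplication on a 1-bit
        return product

    print(left_right_binary_exp(2, 13))    # 8192 == 2**13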
EXAMPLE 2 Compute a^13 by the left-to-right binary exponentiation algorithm. Here, n = 13 = 1101_2. So we have

binary digits of n     1    1              0               1
product accumulator    a    a^2 · a = a^3  (a^3)^2 = a^6   (a^6)^2 · a = a^13
Since the algorithm makes one or two multiplications on each repetition of its only loop, the total number of multiplications M(n) made by it in computing a^n is

(b − 1) ≤ M(n) ≤ 2(b − 1),

where b is the length of the bit string representing the exponent n. Taking into account that b − 1 = ⌊log2 n⌋, we can conclude that the efficiency of the left-to-right binary exponentiation is logarithmic. Thus, this algorithm is in a better efficiency class than the brute-force exponentiation, which always requires n − 1 multiplications.
The right-to-left binary exponentiation uses the same binary polynomial p(2) (see (6.4)) yielding the value of n. But rather than applying Horner's rule to it as the previous method did, this one exploits it differently:

a^n = a^(b_I 2^I + . . . + b_i 2^i + . . . + b_0) = a^(b_I 2^I) · . . . · a^(b_i 2^i) · . . . · a^(b_0).
Thus, a^n can be computed as the product of the terms

a^(b_i 2^i) = a^(2^i) if b_i = 1, or 1 if b_i = 0,

i.e., the product of consecutive terms a^(2^i), skipping those for which the binary digit b_i is zero. In addition, we can compute a^(2^i) by simply squaring the same term we computed for the previous value of i since a^(2^i) = (a^(2^(i−1)))^2. So we compute all such powers of a from the smallest to the largest (from right to left), but we include in the product accumulator only those whose corresponding binary digit is 1. Here is pseudocode of this algorithm.
ALGORITHM RightLeftBinaryExponentiation(a, b(n))
//Computes a^n by the right-to-left binary exponentiation algorithm
//Input: A number a and a list b(n) of binary digits b_I, . . . , b_0
//       in the binary expansion of a nonnegative integer n
//Output: The value of a^n
term ← a //initializes a^(2^i)
if b_0 = 1 product ← a
else product ← 1
for i ← 1 to I do
    term ← term ∗ term
    if b_i = 1 product ← product ∗ term
return product
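A Python sketch staying close to the pseudocode; here the binary digits are passed explicitly, least significant first (the list format is our choice):

    def right_left_binary_exp(a, bits):
        term = a                                   # holds a^(2^i)
        product = a if bits[0] == 1 else 1
        for b in bits[1:]:
            term *= term                           # a^(2^i) becomes a^(2^(i+1))
            if b == 1:
                product *= term
        return product

    # n = 13 = 1101 in binary, so the digits b_0, b_1, b_2, b_3 are 1, 0, 1, 1
    print(right_left_binary_exp(2, [1, 0, 1, 1]))  # 8192 == 2**13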
EXAMPLE 3 Compute a^13 by the right-to-left binary exponentiation method. Here, n = 13 = 1101_2. So we have the following table filled in from right to left:

1                  1               0      1      binary digits of n
a^8                a^4             a^2    a      terms a^(2^i)
a^5 · a^8 = a^13   a · a^4 = a^5          a      product accumulator
Obviously, the algorithm's efficiency is also logarithmic for the same reason the left-to-right binary exponentiation's is. The usefulness of both binary exponentiation algorithms is reduced somewhat by their reliance on availability of the explicit binary expansion of exponent n. Problem 9 in this section's exercises asks you to design an algorithm that does not have this shortcoming.
Exercises 6.5
1. Consider the following brute-force algorithm for evaluating a polynomial.

ALGORITHM BruteForcePolynomialEvaluation(P[0..n], x)
//Computes the value of polynomial P at a given point x
//by the highest to lowest term brute-force algorithm
//Input: An array P[0..n] of the coefficients of a polynomial of degree n,
//       stored from the lowest to the highest and a number x
//Output: The value of the polynomial at the point x
p ← 0.0
for i ← n downto 0 do
    power ← 1
    for j ← 1 to i do
        power ← power ∗ x
    p ← p + P[i] ∗ power
return p
Find the total number of multiplications and the total number of additions
made by this algorithm.
2. Write pseudocode for the brute-force polynomial evaluation that stems from substituting a given value of the variable into the polynomial's formula and evaluating it from the lowest term to the highest one. Determine the number of multiplications and the number of additions made by this algorithm.
3. a. Estimate how much faster Horner's rule is compared to the lowest-to-highest term brute-force algorithm of Problem 2 if (i) the time of one multiplication is significantly larger than the time of one addition; (ii) the time of one multiplication is about the same as the time of one addition.
b. Is Horner's rule more time efficient at the expense of being less space efficient than the brute-force algorithm?
4. a. Apply Horner's rule to evaluate the polynomial

p(x) = 3x^4 − x^3 + 2x + 5 at x = −2.

b. Use the results of the above application of Horner's rule to find the quotient and remainder of the division of p(x) by x + 2.
5. Apply Horner's rule to convert 110100101 from binary to decimal.
6. Compare the number of multiplications and additions/subtractions needed by the long division of a polynomial p(x) = a_n x^n + a_{n−1} x^{n−1} + . . . + a_0 by x − c, where c is some constant, with the number of these operations in the synthetic division.
7. a. Apply the left-to-right binary exponentiation algorithm to compute a^17.
b. Is it possible to extend the left-to-right binary exponentiation algorithm to work for every nonnegative integer exponent?
8. Apply the right-to-left binary exponentiation algorithm to compute a^17.
9. Design a nonrecursive algorithm for computing a^n that mimics the right-to-left binary exponentiation but does not explicitly use the binary representation of n.
10. Is it a good idea to use a general-purpose polynomial-evaluation algorithm such as Horner's rule to evaluate the polynomial p(x) = x^n + x^{n−1} + . . . + x + 1?
11. According to the corollary of the Fundamental Theorem of Algebra, every polynomial

p(x) = a_n x^n + a_{n−1} x^{n−1} + . . . + a_0

can be represented in the form

p(x) = a_n (x − x_1)(x − x_2) . . . (x − x_n),

where x_1, x_2, . . . , x_n are the roots of the polynomial (generally, complex and not necessarily distinct). Discuss which of the two representations is more convenient for each of the following operations:
a. polynomial evaluation at a given point
b. addition of two polynomials
c. multiplication of two polynomials
12. Polynomial interpolation Given a set of n data points (x_i, y_i) where no two x_i are the same, find a polynomial p(x) of degree at most n − 1 such that p(x_i) = y_i for every i = 1, 2, . . . , n.
6.6 Problem Reduction
Here is my version of a well-known joke about mathematicians. Professor X, a noted mathematician, noticed that when his wife wanted to boil water for their tea, she took their kettle from their cupboard, filled it with water, and put it on the stove. Once, when his wife was away (if you have to know, she was signing her best-seller in a local bookstore), the professor had to boil water by himself. He saw that the kettle was sitting on the kitchen counter. What did Professor X do? He put the kettle in the cupboard first and then proceeded to follow his wife's routine.
FIGURE 6.15 Problem reduction strategy.
The way Professor X approached his task is an example of an important problem-solving strategy called problem reduction. If you need to solve a problem, reduce it to another problem that you know how to solve (Figure 6.15).
The joke about the professor notwithstanding, the idea of problem reduction plays a central role in theoretical computer science, where it is used to classify problems according to their complexity. We will touch on this classification in Chapter 11. But the strategy can be used for actual problem solving, too. The practical difficulty in applying it lies, of course, in finding a problem to which the problem at hand should be reduced. Moreover, if we want our efforts to be of practical value, we need our reduction-based algorithm to be more efficient than solving the original problem directly.
Note that we have already encountered this technique earlier in the book. In Section 6.5, for example, we mentioned the so-called synthetic division done by applying Horner's rule for polynomial evaluation. In Section 5.5, we used the following fact from analytical geometry: if p1(x1, y1), p2(x2, y2), and p3(x3, y3) are three arbitrary points in the plane, then the determinant

| x1  y1  1 |
| x2  y2  1 |  =  x1 y2 + x3 y1 + x2 y3 − x3 y2 − x1 y3 − x2 y1
| x3  y3  1 |

is positive if and only if the point p3 is to the left of the directed line through points p1 and p2. In other words, we reduced a geometric question about the relative locations of three points to a question about the sign of a determinant. In fact, the entire idea of analytical geometry is based on reducing geometric problems to algebraic ones. And the vast majority of geometric algorithms take advantage of this historic insight by René Descartes (1596-1650). In this section, we give a few more examples of algorithms based on the strategy of problem reduction.
Computing the Least Common Multiple
Recall that the least common multiple of two positive integers m and n, denoted lcm(m, n), is defined as the smallest integer that is divisible by both m and n. For example, lcm(24, 60) = 120, and lcm(11, 5) = 55. The least common multiple is one of the most important notions in elementary arithmetic and algebra. Perhaps you remember the following middle-school method for computing it: Given the prime factorizations of m and n, compute the product of all the common prime factors of m and n, all the prime factors of m that are not in n, and all the prime factors of n that are not in m. For example,
24 = 2 · 2 · 2 · 3,
60 = 2 · 2 · 3 · 5,
lcm(24, 60) = (2 · 2 · 3) · 2 · 5 = 120.
As a computational procedure, this algorithm has the same drawbacks as the middle-school algorithm for computing the greatest common divisor discussed in Section 1.1: it is inefficient and requires a list of consecutive primes.
A much more efficient algorithm for computing the least common multiple can be devised by using problem reduction. After all, there is a very efficient algorithm (Euclid's algorithm) for finding the greatest common divisor, which is a product of all the common prime factors of m and n. Can we find a formula relating lcm(m, n) and gcd(m, n)? It is not difficult to see that the product of lcm(m, n) and gcd(m, n) includes every factor of m and n exactly once and hence is simply equal to the product of m and n. This observation leads to the formula

lcm(m, n) = m · n / gcd(m, n),

where gcd(m, n) can be computed very efficiently by Euclid's algorithm.
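The reduction is a one-liner in code; here is a sketch using Python's standard math.gcd as the Euclid's-algorithm building block:

    from math import gcd

    def lcm(m, n):
        return m * n // gcd(m, n)      # problem reduction: lcm via gcd

    print(lcm(24, 60), lcm(11, 5))     # 120 55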
Counting Paths in a Graph
As our next example, we consider the problem of counting paths between two vertices in a graph. It is not difficult to prove by mathematical induction that the number of different paths of length k > 0 from the ith vertex to the jth vertex of a graph (undirected or directed) equals the (i, j)th element of A^k, where A is the adjacency matrix of the graph. Therefore, the problem of counting a graph's paths can be solved with an algorithm for computing an appropriate power of its adjacency matrix. Note that the exponentiation algorithms we discussed before for computing powers of numbers are applicable to matrices as well.
As a specific example, consider the graph of Figure 6.16. Its adjacency matrix A and its square A^2 indicate the numbers of paths of length 1 and 2, respectively, between the corresponding vertices of the graph.

         a b c d                  a b c d
    a  [ 0 1 1 1 ]           a  [ 3 0 1 1 ]
A =  b [ 1 0 0 0 ]     A^2 = b  [ 0 1 1 1 ]
    c  [ 1 0 0 1 ]           c  [ 1 1 2 1 ]
    d  [ 1 0 1 0 ]           d  [ 1 1 1 2 ]

FIGURE 6.16 A graph, its adjacency matrix A, and its square A^2. The elements of A and A^2 indicate the numbers of paths of lengths 1 and 2, respectively.

In particular, there are three
paths of length 2 that start and end at vertex a (a-b-a, a-c-a, and a-d-a); but there is only one path of length 2 from a to c (a-d-c).
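This reduction is easy to try out; here is a sketch assuming NumPy is available, with the adjacency matrix of Figure 6.16 entered by hand:

    import numpy as np

    # adjacency matrix of the graph in Figure 6.16 (vertex order a, b, c, d)
    A = np.array([[0, 1, 1, 1],
                  [1, 0, 0, 0],
                  [1, 0, 0, 1],
                  [1, 0, 1, 0]])

    print(np.linalg.matrix_power(A, 2))   # entry [0][0] is 3: three length-2 a-to-a paths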
Reduction of Optimization Problems
Our next example deals with solving optimization problems. If a problem asks to find a maximum of some function, it is said to be a maximization problem; if it asks to find a function's minimum, it is called a minimization problem. Suppose now that you need to find a minimum of some function f(x) and you have an algorithm for function maximization. How can you take advantage of the latter? The answer lies in the simple formula

min f(x) = −max[−f(x)].

In other words, to minimize a function, we can maximize its negative instead and, to get a correct minimal value of the function itself, change the sign of the answer. This property is illustrated for a function of one real variable in Figure 6.17.
Of course, the formula

max f(x) = −min[−f(x)]

is valid as well; it shows how a maximization problem can be reduced to an equivalent minimization problem.
This relationship between minimization and maximization problems is very
general: it holds for functions defined on any domain D. In particular, we can
FIGURE 6.17 Relationship between minimization and maximization problems: min f(x) = −max[−f(x)].
apply it to functions of several variables subject to additional constraints. A very
important class of such problems is introduced below in this section.
Now that we are on the topic of function optimization, it is worth pointing out that the standard calculus procedure for finding extremum points of a function is, in fact, also based on problem reduction. Indeed, it suggests finding the function's derivative f′(x) and then solving the equation f′(x) = 0 to find the function's critical points. In other words, the optimization problem is reduced to the problem of solving an equation as the principal part of finding extremum points. Note that we are not calling the calculus procedure an algorithm, since it is not clearly defined. In fact, there is no general method for solving equations. A little secret of calculus textbooks is that problems are carefully selected so that critical points can always be found without difficulty. This makes the lives of both students and instructors easier but, in the process, may unintentionally create a wrong impression in students' minds.
Linear Programming
Many problems of optimal decision making can be reduced to an instance of the linear programming problem: a problem of optimizing a linear function of several variables subject to constraints in the form of linear equations and linear inequalities.
EXAMPLE 1 Consider a university endowment that needs to invest $100 million. This sum has to be split between three types of investments: stocks, bonds, and cash. The endowment managers expect an annual return of 10%, 7%, and 3% for their stock, bond, and cash investments, respectively. Since stocks are more risky than bonds, the endowment rules require the amount invested in stocks to be no more than one-third of the moneys invested in bonds. In addition, at least 25% of the total amount invested in stocks and bonds must be invested in cash. How should the managers invest the money to maximize the return?
Let us create a mathematical model of this problem. Let x, y, and z be the amounts (in millions of dollars) invested in stocks, bonds, and cash, respectively. By using these variables, we can pose the following optimization problem:

maximize 0.10x + 0.07y + 0.03z
subject to x + y + z = 100
x ≤ (1/3)y
z ≥ 0.25(x + y)
x ≥ 0, y ≥ 0, z ≥ 0.
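Such a model can be handed directly to an LP solver. Here is a sketch assuming SciPy is installed; linprog minimizes, so the objective is negated, and the two inequalities are rewritten in the "less than or equal" form the solver expects:

    from scipy.optimize import linprog

    c = [-0.10, -0.07, -0.03]               # maximize return == minimize its negation
    A_eq, b_eq = [[1, 1, 1]], [100]         # x + y + z = 100
    A_ub = [[1, -1/3, 0],                   # x - y/3 <= 0
            [0.25, 0.25, -1]]               # 0.25(x + y) - z <= 0
    b_ub = [0, 0]                           # variables are nonnegative by default
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    print(res.x, -res.fun)                  # optimal (x, y, z) and the maximal return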
Although this example is both small and simple, it does show how a problem
of optimal decision making can be reduced to an instance of the general linear
programming problem
maximize (or minimize) c_1 x_1 + . . . + c_n x_n
subject to a_i1 x_1 + . . . + a_in x_n ≤ (or ≥ or =) b_i for i = 1, . . . , m
x_1 ≥ 0, . . . , x_n ≥ 0.
(The last group of constraints, called the nonnegativity constraints, are, strictly speaking, unnecessary because they are special cases of more general constraints a_i1 x_1 + . . . + a_in x_n ≥ b_i, but it is convenient to treat them separately.)
Linear programming has proved to be flexible enough to model a wide variety of important applications, such as airline crew scheduling, transportation and communication network planning, oil exploration and refining, and industrial production optimization. In fact, linear programming is considered by many as one of the most important achievements in the history of applied mathematics.
The classic algorithm for this problem is called the simplex method (Section 10.1). It was discovered by the U.S. mathematician George Dantzig in the 1940s [Dan63]. Although the worst-case efficiency of this algorithm is known to be exponential, it performs very well on typical inputs. Moreover, a more recent algorithm by Narendra Karmarkar [Kar84] not only has a proven polynomial worst-case efficiency but has also performed competitively with the simplex method in empirical tests.
It is important to stress, however, that the simplex method and Karmarkar's algorithm can successfully handle only linear programming problems that do not limit their variables to integer values. When variables of a linear programming problem are required to be integers, the linear programming problem is said to be an integer linear programming problem. Except for some special cases (e.g., the assignment problem and the problems discussed in Sections 10.2-10.4), integer linear programming problems are much more difficult. There is no known polynomial-time algorithm for solving an arbitrary instance of the general integer linear programming problem and, as we see in Chapter 11, such an algorithm quite possibly does not exist. Other approaches such as the branch-and-bound technique discussed in Section 12.2 are typically used for solving integer linear programming problems.
EXAMPLE 2 Let us see how the knapsack problem can be reduced to a linear programming problem. Recall from Section 3.4 that the knapsack problem can be posed as follows. Given a knapsack of capacity W and n items of weights w_1, . . . , w_n and values v_1, . . . , v_n, find the most valuable subset of the items that fits into the knapsack. We consider first the continuous (or fractional) version of the problem, in which any fraction of any item given can be taken into the knapsack. Let x_j, j = 1, . . . , n, be a variable representing a fraction of item j taken into the knapsack. Obviously, x_j must satisfy the inequality 0 ≤ x_j ≤ 1. Then the total weight of the selected items can be expressed by the sum Σ_{j=1}^{n} w_j x_j, and their total value by the sum Σ_{j=1}^{n} v_j x_j. Thus, the continuous version of the knapsack problem can be posed as the following linear programming problem:
maximize Σ_{j=1}^{n} v_j x_j
subject to Σ_{j=1}^{n} w_j x_j ≤ W
0 ≤ x_j ≤ 1 for j = 1, . . . , n.
There is no need to apply a general method for solving linear programming problems here: this particular problem can be solved by a simple special algorithm that is introduced in Section 12.3. (But why wait? Try to discover it on your own now.) This reduction of the knapsack problem to an instance of the linear programming problem is still useful, though, to prove the correctness of the algorithm in question.
In the discrete (or 0-1) version of the knapsack problem, we are only allowed either to take a whole item or not to take it at all. Hence, we have the following integer linear programming problem for this version:

maximize Σ_{j=1}^{n} v_j x_j
subject to Σ_{j=1}^{n} w_j x_j ≤ W
x_j ∈ {0, 1} for j = 1, . . . , n.
This seemingly minor modification makes a drastic difference for the complexity of this and similar problems constrained to take only discrete values in their potential ranges. Despite the fact that the 0-1 version might seem to be easier because it can ignore any subset of the continuous version that has a fractional value of an item, the 0-1 version is, in fact, much more complicated than its continuous counterpart. The reader interested in specific algorithms for solving this problem will find a wealth of literature on the subject, including the monographs [Mar90] and [Kel04].
Reduction to Graph Problems
As we pointed out in Section 1.3, many problems can be solved by a reduction to one of the standard graph problems. This is true, in particular, for a variety of puzzles and games. In these applications, vertices of a graph typically represent possible states of the problem in question, and edges indicate permitted transitions among such states. One of the graph's vertices represents an initial state and another represents a goal state of the problem. (There might be several vertices of the latter kind.) Such a graph is called a state-space graph. Thus, the transformation just described reduces the problem to the question about a path from the initial-state vertex to a goal-state vertex.
FIGURE 6.18 State-space graph for the peasant, wolf, goat, and cabbage puzzle.
EXAMPLE Let us revisit the classic river-crossing puzzle that was included in the exercises for Section 1.2. A peasant finds himself on a river bank with a wolf, a goat, and a head of cabbage. He needs to transport all three to the other side of the river in his boat. However, the boat has room only for the peasant himself and one other item (either the wolf, the goat, or the cabbage). In his absence, the wolf would eat the goat, and the goat would eat the cabbage. Find a way for the peasant to solve his problem or prove that it has no solution.
The state-space graph for this problem is given in Figure 6.18. Its vertices are labeled to indicate the states they represent: P, w, g, c stand for the peasant, the wolf, the goat, and the cabbage, respectively; the two bars | | denote the river; for convenience, we also label the edges by indicating the boat's occupants for each crossing. In terms of this graph, we are interested in finding a path from the initial-state vertex labeled Pwgc| | to the final-state vertex labeled | |Pwgc.
It is easy to see that there exist two distinct simple paths from the initial-state vertex to the final-state vertex (what are they?). If we find them by applying breadth-first search, we get a formal proof that these paths have the smallest number of edges possible. Hence, this puzzle has two solutions requiring seven river crossings, which is the minimum number of crossings needed.
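The reduction to path finding can be carried out mechanically: generate the safe states and run breadth-first search over them. A Python sketch (the state encoding and helper names are ours); it reports one of the two seven-crossing solutions:

    from collections import deque

    def river_crossing():
        # a state records the bank (0 or 1) of the peasant, wolf, goat, cabbage
        def safe(s):
            p, w, g, c = s
            return (w != g or w == p) and (g != c or g == p)   # nothing gets eaten

        start, goal = (0, 0, 0, 0), (1, 1, 1, 1)
        parent, queue = {start: None}, deque([start])
        while queue:
            s = queue.popleft()
            if s == goal:                       # reconstruct the path found by BFS
                path = []
                while s is not None:
                    path.append(s)
                    s = parent[s]
                return path[::-1]
            for i in range(4):                  # i = 0: peasant alone; else take item i
                if s[i] == s[0]:                # the item must be on the peasant's bank
                    t = list(s)
                    t[0] = 1 - s[0]
                    if i:
                        t[i] = 1 - s[i]
                    t = tuple(t)
                    if safe(t) and t not in parent:
                        parent[t] = s
                        queue.append(t)

    print(len(river_crossing()) - 1)            # 7 crossings, the minimum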
Our success in solving this simple puzzle should not lead you to believe that generating and investigating state-space graphs is always a straightforward task. To get a better appreciation of them, consult books on artificial intelligence (AI), the branch of computer science in which state-space graphs are a principal subject.
In this book, we deal with an important special case of state-space graphs in
Sections 12.1 and 12.2.
Exercises 6.6
1. a. Prove the equality

lcm(m, n) = m · n / gcd(m, n)

that underlies the algorithm for computing lcm(m, n).
b. Euclid's algorithm is known to be in O(log n). If it is the algorithm that is used for computing gcd(m, n), what is the efficiency of the algorithm for computing lcm(m, n)?
2. You are given a list of numbers for which you need to construct a min-heap.
(A min-heap is a complete binary tree in which every key is less than or equal
to the keys in its children.) How would you use an algorithm for constructing
a max-heap (a heap as dened in Section 6.4) to construct a min-heap?
3. Prove that the number of different paths of length k > 0 from the ith vertex to the jth vertex in a graph (undirected or directed) equals the (i, j)th element of A^k, where A is the adjacency matrix of the graph.
4. a. Design an algorithm with a time efficiency better than cubic for checking whether a graph with n vertices contains a cycle of length 3 [Man89].
b. Consider the following algorithm for the same problem. Starting at an arbitrary vertex, traverse the graph by depth-first search and check whether its depth-first search forest has a vertex with a back edge leading to its grandparent. If it does, the graph contains a triangle; if it does not, the graph does not contain a triangle as its subgraph. Is this algorithm correct?
5. Given n > 3 points P1 = (x1, y1), . . . , Pn = (xn, yn) in the coordinate plane, design an algorithm to check whether all the points lie within a triangle with its vertices at three of the points given. (You can either design an algorithm from scratch or reduce the problem to another one with a known algorithm.)
6. Consider the problem of finding, for a given positive integer n, the pair of integers whose sum is n and whose product is as large as possible. Design an efficient algorithm for this problem and indicate its efficiency class.
7. The assignment problem introduced in Section 3.4 can be stated as follows: There are n people who need to be assigned to execute n jobs, one person per job. (That is, each person is assigned to exactly one job and each job is assigned to exactly one person.) The cost that would accrue if the ith person is assigned to the jth job is a known quantity C[i, j] for each pair i, j = 1, . . . , n. The problem is to assign the people to the jobs to minimize the total cost of the assignment. Express the assignment problem as a 0-1 linear programming problem.
8. Solve the instance of the linear programming problem given in Section 6.6:

maximize 0.10x + 0.07y + 0.03z
subject to x + y + z = 100
x ≤ (1/3)y
z ≥ 0.25(x + y)
x ≥ 0, y ≥ 0, z ≥ 0.
9. The graph-coloring problem is usually stated as the vertex-coloring prob-
lem: Assign the smallest number of colors to vertices of a given graph so
that no two adjacent vertices are the same color. Consider the edge-coloring
problem: Assign the smallest number of colors possible to edges of a given
graph so that no two edges with the same endpoint are the same color. Ex-
plain how the edge-coloring problem can be reduced to a vertex-coloring
problem.
10. Consider the two-dimensional post office location problem: given n points (x1, y1), . . . , (xn, yn) in the Cartesian plane, find a location (x, y) for a post office that minimizes (1/n) Σ_{i=1}^{n} (|x_i − x| + |y_i − y|), the average Manhattan distance from the post office to these points. Explain how this problem can be efficiently solved by the problem reduction technique, provided the post office does not have to be located at one of the input points.
11. Jealous husbands There are n ≥ 2 married couples who need to cross a river. They have a boat that can hold no more than two people at a time. To complicate matters, all the husbands are jealous and will not agree on any crossing procedure that would put a wife on the same bank of the river with another woman's husband without the wife's husband being there too, even if there are other people on the same bank. Can they cross the river under such constraints?
a. Solve the problem for n = 2.
b. Solve the problem for n = 3, which is the classical version of this problem.
c. Does the problem have a solution for n ≥ 4? If it does, indicate how many river crossings it will take; if it does not, explain why.
12. Double-n dominoes Dominoes are small rectangular tiles with dots called spots or pips embossed at both halves of the tiles. A standard double-six domino set has 28 tiles: one for each unordered pair of integers from (0, 0) to (6, 6). In general, a double-n domino set would consist of domino tiles for each unordered pair of integers from (0, 0) to (n, n). Determine all values of n for which one can construct a ring made up of all the tiles in a double-n domino set.
SUMMARY
Transform-and-conquer is the fourth general algorithm design (and problem-
solving) strategy discussed in the book. It is, in fact, a group of techniques
based on the idea of transformation to a problem that is easier to solve.
There are three principal varieties of the transform-and-conquer strategy:
instance simplication, representation change, and problem reduction.
Instance simplification is transforming an instance of a problem to an instance of the same problem with some special property that makes the problem easier to solve. List presorting, Gaussian elimination, and rotations in AVL trees are good examples of this strategy.
Representation change implies changing one representation of a problem's
instance to another representation of the same instance. Examples discussed
in this chapter include representation of a set by a 2-3 tree, heaps and heapsort,
Horner's rule for polynomial evaluation, and two binary exponentiation
algorithms.
Problem reduction calls for transforming a given problem to another problem
that can be solved by a known algorithm. Among examples of applying this
idea to algorithmic problem solving (see Section 6.6), reductions to linear
programming and reductions to graph problems are especially important.
Some examples used to illustrate transform-and-conquer happen to be very
important data structures and algorithms. They are: heaps and heapsort, AVL
and 2-3 trees, Gaussian elimination, and Horner's rule.
A heap is an essentially complete binary tree with keys (one per node)
satisfying the parental dominance requirement. Though defined as binary
trees, heaps are normally implemented as arrays. Heaps are most important
for the efficient implementation of priority queues; they also underlie
heapsort.
Heapsort is a theoretically important sorting algorithm based on arranging
elements of an array in a heap and then successively removing the largest
element from a remaining heap. The algorithm's running time is in Θ(n log n)
both in the worst case and in the average case; in addition, it is in-place.
AVL trees are binary search trees that are always balanced to the extent
possible for a binary tree. The balance is maintained by transformations of
four types called rotations. All basic operations on AVL trees are in O(log n);
this eliminates the bad worst-case efficiency of classic binary search trees.
2-3 trees achieve a perfect balance in a search tree by allowing a node to
contain up to two ordered keys and have up to three children. This idea can
be generalized to yield very important B-trees, discussed later in the book.
Gaussian elimination, an algorithm for solving systems of linear equations,
is a principal algorithm in linear algebra. It solves a system by transforming it
to an equivalent system with an upper-triangular coefficient matrix, which is
easy to solve by back substitutions. Gaussian elimination requires about
(1/3)n^3 multiplications.
Horner's rule is an optimal algorithm for polynomial evaluation without
coefficient preprocessing. It requires only n multiplications and n additions
to evaluate an n-degree polynomial at a given point. Horner's rule also has a
few useful byproducts, such as the synthetic division algorithm.
Two binary exponentiation algorithms for computing a^n are introduced in
Section 6.5. Both of them exploit the binary representation of the exponent
n, but they process it in opposite directions: left to right and right to left.
Linear programming concerns optimizing a linear function of several variables
subject to constraints in the form of linear equations and linear inequalities.
There are efficient algorithms capable of solving very large instances
of this problem with many thousands of variables and constraints, provided
the variables are not required to be integers. The latter, called integer linear
programming problems, constitute a much more difficult class.
7
Space and Time Trade-Offs
Things which matter most must never be at the mercy of things which
matter less.
Johann Wolfgang von Goethe (1749–1832)
Space and time trade-offs in algorithm design are a well-known issue for both
theoreticians and practitioners of computing. Consider, as an example, the
problem of computing values of a function at many points in its domain. If it is
time that is at a premium, we can precompute the function's values and store them
in a table. This is exactly what human computers had to do before the advent of
electronic computers, in the process burdening libraries with thick volumes of
mathematical tables. Though such tables have lost much of their appeal with the
widespread use of electronic computers, the underlying idea has proven to be quite
useful in the development of several important algorithms for other problems.
In somewhat more general terms, the idea is to preprocess the problem's input,
in whole or in part, and store the additional information obtained to accelerate
solving the problem afterward. We call this approach input enhancement¹ and
discuss the following algorithms based on it:
counting methods for sorting (Section 7.1)
Boyer-Moore algorithm for string matching and its simplified version suggested
by Horspool (Section 7.2)
The other type of technique that exploits space-for-time trade-offs simply uses
extra space to facilitate faster and/or more flexible access to the data. We call this
approach prestructuring. This name highlights two facets of this variation of the
space-for-time trade-off: some processing is done before a problem in question
1. The standard terms used synonymously for this technique are preprocessing and preconditioning.
Confusingly, these terms can also be applied to methods that use the idea of preprocessing but do not
use extra space (see Chapter 6). Thus, in order to avoid confusion, we use input enhancement as a
special name for the space-for-time trade-off technique being discussed here.
is actually solved but, unlike the input-enhancement variety, it deals with access
structuring. We illustrate this approach by:
hashing (Section 7.3)
indexing with B-trees (Section 7.4)
There is one more algorithm design technique related to the space-for-time
trade-off idea: dynamic programming. This strategy is based on recording solu-
tions to overlapping subproblems of a given problem in a table from which a solu-
tion to the problem in question is then obtained. We discuss this well-developed
technique separately, in the next chapter of the book.
Two final comments about the interplay between time and space in algorithm
design need to be made. First, the two resources, time and space, do not
have to compete with each other in all design situations. In fact, they can align to
bring an algorithmic solution that minimizes both the running time and the space
consumed. Such a situation arises, in particular, when an algorithm uses a space-efficient
data structure to represent a problem's input, which leads, in turn, to a
faster algorithm. Consider, as an example, the problem of traversing graphs. Recall
that the time efficiency of the two principal traversal algorithms, depth-first
search and breadth-first search, depends on the data structure used for representing
graphs: it is Θ(n^2) for the adjacency matrix representation and Θ(n + m)
for the adjacency list representation, where n and m are the numbers of vertices
and edges, respectively. If input graphs are sparse, i.e., have few edges relative to
the number of vertices (say, m ∈ O(n)), the adjacency list representation may well
be more efficient from both the space and the running-time points of view. The
same situation arises in the manipulation of sparse matrices and sparse polynomials:
if the percentage of zeros in such objects is sufficiently high, we can save both
space and time by ignoring zeros in the objects' representation and processing.
Second, one cannot discuss space-time trade-offs without mentioning the
hugely important area of data compression. Note, however, that in data compression,
size reduction is the goal rather than a technique for solving another problem.
We discuss just one data compression algorithm, in the next chapter. The reader
interested in this topic will find a wealth of algorithms in such books as [Say05].
7.1 Sorting by Counting
As a first example of applying the input-enhancement technique, we discuss its
application to the sorting problem. One rather obvious idea is to count, for each
element of a list to be sorted, the total number of elements smaller than this
element and record the results in a table. These numbers will indicate the positions
of the elements in the sorted list: e.g., if the count is 10 for some element, it should
be in the 11th position (with index 10, if we start counting with 0) in the sorted
array. Thus, we will be able to sort the list by simply copying its elements to their
appropriate positions in a new, sorted list. This algorithm is called comparison-
counting sort (Figure 7.1).
Array A[0..5]               62   31   84   96   19   47

Initially          Count[]   0    0    0    0    0    0
After pass i = 0   Count[]   3    0    1    1    0    0
After pass i = 1   Count[]        1    2    2    0    1
After pass i = 2   Count[]             4    3    0    1
After pass i = 3   Count[]                  5    0    1
After pass i = 4   Count[]                       0    2
Final state        Count[]   3    1    4    5    0    2

Array S[0..5]               19   31   47   62   84   96

FIGURE 7.1 Example of sorting by comparison counting.
ALGORITHM ComparisonCountingSort(A[0..n − 1])
//Sorts an array by comparison counting
//Input: An array A[0..n − 1] of orderable elements
//Output: Array S[0..n − 1] of A's elements sorted in nondecreasing order
for i ← 0 to n − 1 do Count[i] ← 0
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        if A[i] < A[j]
            Count[j] ← Count[j] + 1
        else Count[i] ← Count[i] + 1
for i ← 0 to n − 1 do S[Count[i]] ← A[i]
return S
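For concreteness, here is a direct Python transcription of this pseudocode (a sketch; the function name is ours):

def comparison_counting_sort(a):
    # Sort a list by comparison counting; returns a new sorted list.
    n = len(a)
    count = [0] * n
    # For every pair of elements, increment the count of the larger one.
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] < a[j]:
                count[j] += 1
            else:
                count[i] += 1
    # count[i] is now the final position of a[i] in the sorted list.
    s = [None] * n
    for i in range(n):
        s[count[i]] = a[i]
    return s

print(comparison_counting_sort([62, 31, 84, 96, 19, 47]))
# prints [19, 31, 47, 62, 84, 96], as in Figure 7.1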
What is the time efficiency of this algorithm? It should be quadratic because
the algorithm considers all the different pairs of an n-element array. More formally,
the number of times its basic operation, the comparison A[i] < A[j], is executed
is equal to the sum we have encountered several times already:
C(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = Σ_{i=0}^{n−2} [(n − 1) − (i + 1) + 1] = Σ_{i=0}^{n−2} (n − 1 − i) = n(n − 1)/2.
Thus, the algorithm makes the same number of key comparisons as selection sort
and in addition uses a linear amount of extra space. On the positive side, the
algorithm makes the minimum number of key moves possible, placing each
element directly in its final position in the sorted array.
The counting idea does work productively in a situation in which elements
to be sorted belong to a known small set of values. Assume, for example, that
we have to sort a list whose values can be either 1 or 2. Rather than applying a
general sorting algorithm, we should be able to take advantage of this additional
information about values to be sorted. Indeed, we can scan the list to compute
the number of 1s and the number of 2s in it and then, on the second pass,
simply make the appropriate number of the first elements equal to 1 and the
remaining elements equal to 2. More generally, if element values are integers
between some lower bound l and upper bound u, we can compute the frequency
of each of those values and store them in array F[0..u − l]. Then the first F[0]
positions in the sorted list must be filled with l, the next F[1] positions with l + 1,
and so on. All this can be done, of course, only if we can overwrite the given
elements.
Let us consider a more realistic situation of sorting a list of items with some
other information associated with their keys so that we cannot overwrite the list's
elements. Then we can copy elements into a new array S[0..n − 1] to hold the sorted
list as follows. The elements of A whose values are equal to the lowest possible
value l are copied into the first F[0] elements of S, i.e., positions 0 through F[0] − 1;
the elements of value l + 1 are copied to positions from F[0] to (F[0] + F[1]) − 1;
and so on. Since such accumulated sums of frequencies are called a distribution
in statistics, the method itself is known as distribution counting.
EXAMPLE Consider sorting the array

13  11  12  13  12  12

whose values are known to come from the set {11, 12, 13} and should not be
overwritten in the process of sorting. The frequency and distribution arrays are
as follows:

Array values           11  12  13
Frequencies             1   3   2
Distribution values     1   4   6

Note that the distribution values indicate the proper positions for the last occurrences
of their elements in the final sorted array. If we index array positions from 0
to n − 1, the distribution values must be reduced by 1 to get corresponding element
positions.
It is more convenient to process the input array right to left. For the example,
the last element is 12, and, since its distribution value is 4, we place this 12 in
position 4 − 1 = 3 of the array S that will hold the sorted list. Then we decrease
the 12's distribution value by 1 and proceed to the next (from the right) element
in the given array. The entire processing of this example is depicted in Figure 7.2.
             D[0..2]         S[0..5]
A[5] = 12    1   4*  6       S[3] = 12
A[4] = 12    1   3*  6       S[2] = 12
A[3] = 13    1   2   6*      S[5] = 13
A[2] = 12    1   2*  5       S[1] = 12
A[1] = 11    1*  1   5       S[0] = 11
A[0] = 13    0   1   5*      S[4] = 13

FIGURE 7.2 Example of sorting by distribution counting. The distribution values being
decremented are marked with an asterisk.
Here is pseudocode of this algorithm.
ALGORITHM DistributionCountingSort(A[0..n − 1], l, u)
//Sorts an array of integers from a limited range by distribution counting
//Input: An array A[0..n − 1] of integers between l and u (l ≤ u)
//Output: Array S[0..n − 1] of A's elements sorted in nondecreasing order
for j ← 0 to u − l do D[j] ← 0 //initialize frequencies
for i ← 0 to n − 1 do D[A[i] − l] ← D[A[i] − l] + 1 //compute frequencies
for j ← 1 to u − l do D[j] ← D[j − 1] + D[j] //reuse for distribution
for i ← n − 1 downto 0 do
    j ← A[i] − l
    S[D[j] − 1] ← A[i]
    D[j] ← D[j] − 1
return S
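The same algorithm in runnable Python (a sketch; the function name is ours):

def distribution_counting_sort(a, l, u):
    # Sort a list of integers known to lie between l and u; returns a new list.
    d = [0] * (u - l + 1)
    for x in a:                       # compute frequencies
        d[x - l] += 1
    for j in range(1, u - l + 1):     # reuse for distribution values
        d[j] += d[j - 1]
    s = [None] * len(a)
    for x in reversed(a):             # process the input right to left
        d[x - l] -= 1
        s[d[x - l]] = x
    return s

print(distribution_counting_sort([13, 11, 12, 13, 12, 12], 11, 13))
# prints [11, 12, 12, 12, 13, 13], as in Figure 7.2

Processing the input right to left, as in the pseudocode, also makes the sort stable: equal elements keep their relative order.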
Assuming that the range of array values is fixed, this is obviously a linear
algorithm because it makes just two consecutive passes through its input array
A. This is a better time-efficiency class than that of the most efficient sorting
algorithms, mergesort, quicksort, and heapsort, that we have encountered. It is
important to remember, however, that this efficiency is obtained by exploiting the
specific nature of inputs for which sorting by distribution counting works, in addition
to trading space for time.
Exercises 7.1
1. Is it possible to exchange numeric values of two variables, say, u and v, without
using any extra storage?
2. Will the comparison-counting algorithm work correctly for arrays with equal
values?
3. Assuming that the set of possible list values is {a, b, c, d}, sort the following
list in alphabetical order by the distribution-counting algorithm:
b, c, d, c, b, a, a, b.
4. Is the distribution-counting algorithm stable?
5. Design a one-line algorithm for sorting any array of size n whose values are n
distinct integers from 1 to n.
6. The ancestry problem asks to determine whether a vertex u is an ancestor
of vertex v in a given binary (or, more generally, rooted ordered) tree of n
vertices. Design an O(n) input-enhancement algorithm that provides sufficient
information to solve this problem for any pair of the tree's vertices in constant
time.
7. The following technique, known as virtual initialization, provides a time-efficient
way to initialize just some elements of a given array A[0..n − 1] so
that for each of its elements, we can say in constant time whether it has been
initialized and, if it has been, with which value. This is done by utilizing a
variable counter for the number of initialized elements in A and two auxiliary
arrays of the same size, say B[0..n − 1] and C[0..n − 1], defined as follows.
B[0], . . . , B[counter − 1] contain the indices of the elements of A that were
initialized: B[0] contains the index of the element initialized first, B[1] contains
the index of the element initialized second, etc. Furthermore, if A[i] was the
kth element (0 ≤ k ≤ counter − 1) to be initialized, C[i] contains k.
a. Sketch the state of arrays A[0..7], B[0..7], and C[0..7] after the three assignments
A[3] ← x; A[7] ← z; A[1] ← y.
b. In general, how can we check with this scheme whether A[i] has been
initialized and, if it has been, with which value?
8. Least distance sorting There are 10 Egyptian stone statues standing in a row
in an art gallery hall. A new curator wants to move them so that the statues
are ordered by their height. How should this be done to minimize the total
distance that the statues are moved? You may assume for simplicity that all
the statues have different heights. [Azi10]
9. a. Write a program for multiplying two sparse matrices, a p × q matrix A and
a q × r matrix B.
b. Write a program for multiplying two sparse polynomials p(x) and q(x) of
degrees m and n, respectively.
10. Is it a good idea to write a program that plays the classic game of tic-tac-toe
with the human user by storing all possible positions on the game's 3 × 3 board
along with the best move for each of them?
7.2 Input Enhancement in String Matching
In this section, we see how the technique of input enhancement can be applied
to the problem of string matching. Recall that the problem of string matching
requires finding an occurrence of a given string of m characters called the pattern
in a longer string of n characters called the text. We discussed the brute-force
algorithm for this problem in Section 3.2: it simply matches corresponding pairs
of characters in the pattern and the text left to right and, if a mismatch occurs,
shifts the pattern one position to the right for the next trial. Since the maximum
number of such trials is n − m + 1 and, in the worst case, m comparisons need to
be made on each of them, the worst-case efficiency of the brute-force algorithm is
in the O(nm) class. On average, however, we should expect just a few comparisons
before a pattern's shift, and for random natural-language texts, the average-case
efficiency indeed turns out to be in O(n + m).
Several faster algorithms have been discovered. Most of them exploit the
input-enhancement idea: preprocess the pattern to get some information about
it, store this information in a table, and then use this information during an actual
search for the pattern in a given text. This is exactly the idea behind the two best-
known algorithms of this type: the Knuth-Morris-Pratt algorithm [Knu77] and the
Boyer-Moore algorithm [Boy77].
The principal difference between these two algorithms lies in the way they
compare characters of a pattern with their counterparts in a text: the Knuth-
Morris-Pratt algorithm does it left to right, whereas the Boyer-Moore algorithm
does it right to left. Since the latter idea leads to simpler algorithms, it is the
only one that we will pursue here. (Note that the Boyer-Moore algorithm starts
by aligning the pattern against the beginning characters of the text; if the first
trial fails, it shifts the pattern to the right. It is comparisons within a trial that the
algorithm does right to left, starting with the last character in the pattern.)
Although the underlying idea of the Boyer-Moore algorithm is simple, its
actual implementation in a working method is less so. Therefore, we start our
discussion with a simplified version of the Boyer-Moore algorithm suggested by
R. Horspool [Hor80]. In addition to being simpler, Horspool's algorithm is not
necessarily less efficient than the Boyer-Moore algorithm on random strings.
Horspool's Algorithm
Consider, as an example, searching for the pattern BARBER in some text:
s0  . . .  c  . . .  sn−1
 B A R B E R
Starting with the last R of the pattern and moving right to left, we compare the
corresponding pairs of characters in the pattern and the text. If all the pattern's
characters match successfully, a matching substring is found. Then the search
can be either stopped altogether or continued if another occurrence of the same
pattern is desired.
If a mismatch occurs, we need to shift the pattern to the right. Clearly, we
would like to make as large a shift as possible without risking the possibility of
missing a matching substring in the text. Horspool's algorithm determines the size
of such a shift by looking at the character c of the text that is aligned against the
last character of the pattern. This is the case even if character c itself matches its
counterpart in the pattern.
In general, the following four possibilities can occur.
Case 1 If there are no c's in the pattern, e.g., c is letter S in our example,
we can safely shift the pattern by its entire length (if we shift less, some character
of the pattern would be aligned against the text's character c that is known not to
be in the pattern):

s0  . . .  S  . . .  sn−1
 B A R B E R
             B A R B E R
Case 2 If there are occurrences of character c in the pattern but it is not the last
one there, e.g., c is letter B in our example, the shift should align the rightmost
occurrence of c in the pattern with the c in the text:

s0  . . .  B  . . .  sn−1
 B A R B E R
     B A R B E R
Case 3 If c happens to be the last character in the pattern but there are no c's
among its other m − 1 characters, e.g., c is letter R in our example, the situation
is similar to that of Case 1 and the pattern should be shifted by the entire pattern's
length m:

s0  . . .  M E R  . . .  sn−1
     L E A D E R
                 L E A D E R
Case 4 Finally, if c happens to be the last character in the pattern and there
are other c's among its first m − 1 characters, e.g., c is letter R in our example,
the situation is similar to that of Case 2 and the rightmost occurrence of c among
the first m − 1 characters in the pattern should be aligned with the text's c:

s0  . . .  A R  . . .  sn−1
   R E O R D E R
         R E O R D E R
These examples clearly demonstrate that right-to-left character comparisons
can lead to farther shifts of the pattern than the shifts by only one position
always made by the brute-force algorithm. However, if such an algorithm had
to check all the characters of the pattern on every trial, it would lose much
of this superiority. Fortunately, the idea of input enhancement makes repetitive
comparisons unnecessary. We can precompute shift sizes and store them in a table.
The table will be indexed by all possible characters that can be encountered in a
text, including, for natural language texts, the space, punctuation symbols, and
other special characters. (Note that no other information about the text in which
eventual searching will be done is required.) The table's entries will indicate the
shift sizes computed by the formula
shift sizes computed by the formula
t (c) =
_
_
the patterns length m,
if c is not among the rst m1 characters of the pattern;
the distance from the rightmost c among the rst m1 characters
of the pattern to its last character, otherwise.
(7.1)
For example, for the pattern BARBER, all the table's entries will be equal to 6, except
for the entries for E, B, R, and A, which will be 1, 2, 3, and 4, respectively.
Here is a simple algorithm for computing the shift table entries. Initialize all
the entries to the pattern's length m and scan the pattern left to right repeating the
following step m − 1 times: for the jth character of the pattern (0 ≤ j ≤ m − 2),
overwrite its entry in the table with m − 1 − j, which is the character's distance to
the last character of the pattern. Note that since the algorithm scans the pattern
from left to right, the last overwrite will happen for the character's rightmost
occurrence, exactly as we would like it to be.
ALGORITHM ShiftTable(P[0..m − 1])
//Fills the shift table used by Horspool's and Boyer-Moore algorithms
//Input: Pattern P[0..m − 1] and an alphabet of possible characters
//Output: Table[0..size − 1] indexed by the alphabet's characters and
//        filled with shift sizes computed by formula (7.1)
for i ← 0 to size − 1 do Table[i] ← m
for j ← 0 to m − 2 do Table[P[j]] ← m − 1 − j
return Table
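In Python, the table is naturally represented by a dictionary, with the default shift m supplied at lookup time for characters absent from the pattern (a sketch; the function name is ours):

def shift_table(pattern):
    # Bad-symbol shift table of formula (7.1) as a dictionary; characters
    # not among the first m - 1 pattern characters keep the default shift m.
    m = len(pattern)
    table = {}
    for j in range(m - 1):             # scanning left to right means the
        table[pattern[j]] = m - 1 - j  # rightmost occurrence overwrites
    return table

print(shift_table("BARBER"))   # {'B': 2, 'A': 4, 'R': 3, 'E': 1}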
Now, we can summarize the algorithm as follows:
Horspool's algorithm
Step 1 For a given pattern of length m and the alphabet used in both the
pattern and text, construct the shift table as described above.
Step 2 Align the pattern against the beginning of the text.
Step 3 Repeat the following until either a matching substring is found or the
pattern reaches beyond the last character of the text. Starting with the
last character in the pattern, compare the corresponding characters in
the pattern and text until either all m characters are matched (then
stop) or a mismatching pair is encountered. In the latter case, retrieve
the entry t(c) from the c's column of the shift table where c is the text's
character currently aligned against the last character of the pattern,
and shift the pattern by t(c) characters to the right along the text.
Here is pseudocode of Horspool's algorithm.
ALGORITHM HorspoolMatching(P[0..m − 1], T[0..n − 1])
//Implements Horspool's algorithm for string matching
//Input: Pattern P[0..m − 1] and text T[0..n − 1]
//Output: The index of the left end of the first matching substring
//        or −1 if there are no matches
ShiftTable(P[0..m − 1]) //generate Table of shifts
i ← m − 1 //position of the pattern's right end
while i ≤ n − 1 do
    k ← 0 //number of matched characters
    while k ≤ m − 1 and P[m − 1 − k] = T[i − k] do
        k ← k + 1
    if k = m
        return i − m + 1
    else i ← i + Table[T[i]]
return −1
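A Python transcription of this pseudocode, reusing shift_table from the sketch above:

def horspool_matching(pattern, text):
    # Return the index of the left end of the first match, or -1.
    m, n = len(pattern), len(text)
    table = shift_table(pattern)
    i = m - 1                         # position of the pattern's right end
    while i <= n - 1:
        k = 0                         # number of matched characters
        while k <= m - 1 and pattern[m - 1 - k] == text[i - k]:
            k += 1
        if k == m:
            return i - m + 1
        i += table.get(text[i], m)    # default shift m for absent characters
    return -1

print(horspool_matching("BARBER", "JIM_SAW_ME_IN_A_BARBERSHOP"))  # 16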
EXAMPLE As an example of a complete application of Horspool's algorithm,
consider searching for the pattern BARBER in a text that comprises English letters
and spaces (denoted by underscores). The shift table, as we mentioned, is filled as
follows:

character c    A  B  C  D  E  F  . . .  R  . . .  Z  _
shift t(c)     4  2  6  6  1  6    6    3    6    6  6

The actual search in a particular text proceeds as follows:

J I M _ S A W _ M E _ I N _ A _ B A R B E R S H O P
B A R B E R
        B A R B E R
          B A R B E R
                      B A R B E R
                          B A R B E R
                                B A R B E R
A simple example can demonstrate that the worst-case efficiency of Horspool's
algorithm is in O(nm) (Problem 4 in this section's exercises). But for
random texts, it is in Θ(n), and, although in the same efficiency class, Horspool's
algorithm is obviously faster on average than the brute-force algorithm. In fact,
as mentioned, it is often at least as efficient as its more sophisticated predecessor
discovered by R. Boyer and J. Moore.
Boyer-Moore Algorithm
Now we outline the Boyer-Moore algorithm itself. If the first comparison of the
rightmost character in the pattern with the corresponding character c in the text
fails, the algorithm does exactly the same thing as Horspool's algorithm. Namely,
it shifts the pattern to the right by the number of characters retrieved from the
table precomputed as explained earlier.
The two algorithms act differently, however, after some positive number k
(0 < k < m) of the pattern's characters are matched successfully before a mismatch
is encountered:

s0  . . .  c  si−k+1  . . .  si  . . .  sn−1          text
p0  . . .  pm−k−1  pm−k  . . .  pm−1                  pattern
In this situation, the Boyer-Moore algorithm determines the shift size by considering
two quantities. The first one is guided by the text's character c that caused
a mismatch with its counterpart in the pattern. Accordingly, it is called the bad-symbol
shift. The reasoning behind this shift is the reasoning we used in Horspool's
algorithm. If c is not in the pattern, we shift the pattern to just pass this
c in the text. Conveniently, the size of this shift can be computed by the formula
t1(c) − k, where t1(c) is the entry in the precomputed table used by Horspool's
algorithm (see above) and k is the number of matched characters:
s0  . . .  c  si−k+1  . . .  si  . . .  sn−1          text
p0  . . .  pm−k−1  pm−k  . . .  pm−1                  pattern
              p0  . . .  pm−1                         pattern shifted by t1(c) − k
For example, if we search for the pattern BARBER in some text and match the last
two characters before failing on letter S in the text, we can shift the pattern by
t1(S) − 2 = 6 − 2 = 4 positions:

s0  . . .  S E R  . . .  sn−1
 B A R B E R
         B A R B E R
The same formula can also be used when the mismatching character c of the
text occurs in the pattern, provided t1(c) − k > 0. For example, if we search for the
pattern BARBER in some text and match the last two characters before failing on
letter A, we can shift the pattern by t1(A) − 2 = 4 − 2 = 2 positions:

s0  . . .  A E R  . . .  sn−1
 B A R B E R
     B A R B E R
If t1(c) − k ≤ 0, we obviously do not want to shift the pattern by 0 or a negative
number of positions. Rather, we can fall back on the brute-force thinking and
simply shift the pattern by one position to the right.
To summarize, the bad-symbol shift d1 is computed by the Boyer-Moore
algorithm either as t1(c) − k if this quantity is positive and as 1 if it is negative
or zero. This can be expressed by the following compact formula:

d1 = max{t1(c) − k, 1}.                    (7.2)
The second type of shift is guided by a successful match of the last k > 0
characters of the pattern. We refer to the ending portion of the pattern as its suffix
of size k and denote it suff(k). Accordingly, we call this type of shift the good-suffix
shift. We now apply the reasoning that guided us in filling the bad-symbol shift
table, which was based on a single alphabet character c, to the pattern's suffixes
of sizes 1, . . . , m − 1 to fill in the good-suffix shift table.
Let us first consider the case when there is another occurrence of suff(k) in
the pattern or, to be more accurate, there is another occurrence of suff(k) not
preceded by the same character as in its rightmost occurrence. (It would be useless
to shift the pattern to match another occurrence of suff(k) preceded by the same
character because this would simply repeat a failed trial.) In this case, we can shift
the pattern by the distance d2 between such a second rightmost occurrence (not
preceded by the same character as in the rightmost occurrence) of suff(k) and its
rightmost occurrence. For example, for the pattern ABCBAB, these distances for
k = 1 and 2 will be 2 and 4, respectively:

k    pattern    d2
1    ABCBAB      2
2    ABCBAB      4
What is to be done if there is no other occurrence of suff(k) not preceded by
the same character as in its rightmost occurrence? In most cases, we can shift the
pattern by its entire length m. For example, for the pattern DBCBAB and k = 3, we
can shift the pattern by its entire length of 6 characters:

s0  . . .  c B A B  . . .  sn−1
 D B C B A B
             D B C B A B

Unfortunately, shifting the pattern by its entire length when there is no other
occurrence of suff(k) not preceded by the same character as in its rightmost
occurrence is not always correct. For example, for the pattern ABCBAB and k = 3,
shifting by 6 could miss a matching substring that starts with the text's AB aligned
with the last two characters of the pattern:
s0  . . .  c B A B C B A B  . . .  sn−1
 A B C B A B
             A B C B A B
Note that the shift by 6 is correct for the pattern DBCBAB but not for ABCBAB,
because the latter pattern has the same substring AB as its prefix (beginning part
of the pattern) and as its suffix (ending part of the pattern). To avoid such an
erroneous shift based on a suffix of size k, for which there is no other occurrence
in the pattern not preceded by the same character as in its rightmost occurrence,
we need to find the longest prefix of size l < k that matches the suffix of the same
size l. If such a prefix exists, the shift size d2 is computed as the distance between
this prefix and the corresponding suffix; otherwise, d2 is set to the pattern's length
m. As an example, here is the complete list of the d2 values, the good-suffix table
of the Boyer-Moore algorithm, for the pattern ABCBAB:

k    pattern    d2
1    ABCBAB      2
2    ABCBAB      4
3    ABCBAB      4
4    ABCBAB      4
5    ABCBAB      4
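The good-suffix table can be computed by a direct, brute-force transcription of the two rules just described (a sketch; the function name is ours, and it is quadratic in the pattern length, which is fine for illustration, although production implementations precompute this table in linear time):

def good_suffix_table(pattern):
    # d2 values for k = 1..m-1, following the two rules described above.
    m = len(pattern)
    d2 = {}
    for k in range(1, m):
        suffix = pattern[m - k:]
        shift = None
        # Rule 1: rightmost other occurrence of suff(k) not preceded
        # by the same character as its rightmost occurrence.
        for j in range(m - k - 1, -1, -1):
            if pattern[j:j + k] == suffix and \
               (j == 0 or pattern[j - 1] != pattern[m - k - 1]):
                shift = (m - k) - j
                break
        if shift is None:
            # Rule 2: longest prefix of size l < k matching the suffix
            # of the same size; if none exists, shift by m.
            shift = m
            for l in range(k - 1, 0, -1):
                if pattern[:l] == pattern[m - l:]:
                    shift = m - l
                    break
        d2[k] = shift
    return d2

print(good_suffix_table("ABCBAB"))  # {1: 2, 2: 4, 3: 4, 4: 4, 5: 4}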
Now we are prepared to summarize the Boyer-Moore algorithm in its entirety.
The Boyer-Moore algorithm
Step 1 For a given pattern and the alphabet used in both the pattern and the
text, construct the bad-symbol shift table as described earlier.
Step 2 Using the pattern, construct the good-suffix shift table as described
earlier.
Step 3 Align the pattern against the beginning of the text.
Step 4 Repeat the following step until either a matching substring is found or
the pattern reaches beyond the last character of the text. Starting with
the last character in the pattern, compare the corresponding characters
in the pattern and the text until either all m character pairs are matched
(then stop) or a mismatching pair is encountered after k ≥ 0 character
pairs are matched successfully. In the latter case, retrieve the entry
t1(c) from the c's column of the bad-symbol table where c is the text's
mismatched character. If k > 0, also retrieve the corresponding d2
entry from the good-suffix table. Shift the pattern to the right by the
number of positions computed by the formula

d = d1                 if k = 0,
    max{d1, d2}        if k > 0,                    (7.3)

where d1 = max{t1(c) − k, 1}.
Shifting by the maximum of the two available shifts when k > 0 is quite logical.
The two shifts are based on the observations, the first one about a text's
mismatched character, and the second one about a matched group of the pattern's
rightmost characters, that imply that shifting by less than d1 and d2 characters, respectively,
cannot lead to aligning the pattern with a matching substring in the text.
Since we are interested in shifting the pattern as far as possible without missing a
possible matching substring, we take the maximum of these two numbers.
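Putting the two tables together yields a runnable sketch of the whole algorithm (names are ours; shift_table and good_suffix_table are the Python sketches given earlier):

def boyer_moore_matching(pattern, text):
    # Return the index of the left end of the first match, or -1.
    m, n = len(pattern), len(text)
    t1 = shift_table(pattern)            # bad-symbol table
    d2 = good_suffix_table(pattern)      # good-suffix table
    i = m - 1
    while i <= n - 1:
        k = 0
        while k <= m - 1 and pattern[m - 1 - k] == text[i - k]:
            k += 1
        if k == m:
            return i - m + 1
        c = text[i - k]                          # mismatched text character
        d1 = max(t1.get(c, m) - k, 1)            # formula (7.2)
        i += d1 if k == 0 else max(d1, d2[k])    # formula (7.3)
    return -1

print(boyer_moore_matching("BAOBAB", "BESS_KNEW_ABOUT_BAOBABS"))  # 16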
EXAMPLE As a complete example, let us consider searching for the pattern
BAOBAB in a text made of English letters and spaces. The bad-symbol table looks
as follows:

c        A  B  C  D  . . .  O  . . .  Z  _
t1(c)    1  2  6  6    6    3    6    6  6

The good-suffix table is filled as follows:

k    pattern    d2
1    BAOBAB      2
2    BAOBAB      5
3    BAOBAB      5
4    BAOBAB      5
5    BAOBAB      5
The actual search for this pattern in the text given in Figure 7.3 proceeds as
follows. After the last B of the pattern fails to match its counterpart K in the text,
the algorithm retrieves t1(K) = 6 from the bad-symbol table and shifts the pattern
by d1 = max{t1(K) − 0, 1} = 6 positions to the right. The new try successfully
matches two pairs of characters. After the failure of the third comparison on the
space character in the text, the algorithm retrieves t1(_) = 6 from the bad-symbol
table and d2 = 5 from the good-suffix table to shift the pattern by max{d1, d2} =
max{6 − 2, 5} = 5. Note that on this iteration it is the good-suffix rule that leads
to a farther shift of the pattern.
The next try successfully matches just one pair of B's. After the failure of
the next comparison on the space character in the text, the algorithm retrieves
t1(_) = 6 from the bad-symbol table and d2 = 2 from the good-suffix table to shift
B E S S _ K N E W _ A B O U T _ B A O B A B S
B A O B A B                                      d1 = t1(K) − 0 = 6
            B A O B A B                          d1 = t1(_) − 2 = 4, d2 = 5,
                                                 d = max{4, 5} = 5
                      B A O B A B                d1 = t1(_) − 1 = 5, d2 = 2,
                                                 d = max{5, 2} = 5
                                B A O B A B

FIGURE 7.3 Example of string matching with the Boyer-Moore algorithm.
the pattern by max{d1, d2} = max{6 − 1, 2} = 5. Note that on this iteration it is the
bad-symbol rule that leads to a farther shift of the pattern. The next try finds a
matching substring in the text after successfully matching all six characters of the
pattern with their counterparts in the text.
When searching for the first occurrence of the pattern, the worst-case efficiency
of the Boyer-Moore algorithm is known to be linear. Though this algorithm
runs very fast, especially on large alphabets (relative to the length of the pattern),
many people prefer its simplified versions, such as Horspool's algorithm, when
dealing with natural-language-like strings.
Exercises 7.2
1. Apply Horspool's algorithm to search for the pattern BAOBAB in the text
BESS KNEW ABOUT BAOBABS
2. Consider the problem of searching for genes in DNA sequences using Horspool's
algorithm. A DNA sequence is represented by a text on the alphabet
{A, C, G, T}, and the gene or gene segment is the pattern.
a. Construct the shift table for the following gene segment of your chromosome
10:
TCCTATTCTT
b. Apply Horspools algorithm to locate the above pattern in the following
DNA sequence:
TTATAGATCTCGTATTCTTTTATAGATCTCCTATTCTT
3. How many character comparisons will be made by Horspool's algorithm in
searching for each of the following patterns in the binary text of 1000 zeros?
a. 00001 b. 10000 c. 01010
4. For searching in a text of length n for a pattern of length m (n ≥ m) with
Horspool's algorithm, give an example of
a. worst-case input. b. best-case input.
5. Is it possible for Horspool's algorithm to make more character comparisons
than the brute-force algorithm would make in searching for the same pattern
in the same text?
6. If Horspool's algorithm discovers a matching substring, how large a shift
should it make to search for a next possible match?
7. How many character comparisons will the Boyer-Moore algorithm make in
searching for each of the following patterns in the binary text of 1000 zeros?
a. 00001 b. 10000 c. 01010
8. a. Would the Boyer-Moore algorithm work correctly with just the bad-symbol
table to guide pattern shifts?
b. Would the Boyer-Moore algorithm work correctly with just the good-suffix
table to guide pattern shifts?
9. a. If the last characters of a pattern and its counterpart in the text do match,
does Horspool's algorithm have to check other characters right to left, or
can it check them left to right too?
b. Answer the same question for the Boyer-Moore algorithm.
10. Implement Horspool's algorithm, the Boyer-Moore algorithm, and the brute-force
algorithm of Section 3.2 in the language of your choice and run an
experiment to compare their efficiencies for matching
a. random binary patterns in random binary texts.
b. random natural-language patterns in natural-language texts.
11. You are given two strings S and T, each n characters long. You have to
establish whether one of them is a right cyclic shift of the other. For example,
PLEA is a right cyclic shift of LEAP, and vice versa. (Formally, T is a right cyclic
shift of S if T can be obtained by concatenating the (n − i)-character suffix of
S and the i-character prefix of S for some 1 ≤ i ≤ n.)
a. Design a space-efficient algorithm for the task. Indicate the space and time
efficiencies of your algorithm.
b. Design a time-efficient algorithm for the task. Indicate the time and space
efficiencies of your algorithm.
7.3 Hashing
In this section, we consider a very efficient way to implement dictionaries. Recall
that a dictionary is an abstract data type, namely, a set with the operations of
searching (lookup), insertion, and deletion defined on its elements. The elements
of this set can be of an arbitrary nature: numbers, characters of some alphabet,
character strings, and so on. In practice, the most important case is that of records
(student records in a school, citizen records in a governmental office, book records
in a library).
Typically, records comprise several fields, each responsible for keeping a
particular type of information about an entity the record represents. For example,
a student record may contain fields for the student's ID, name, date of birth, sex,
home address, major, and so on. Among record fields there is usually at least one
called a key that is used for identifying entities represented by the records (e.g.,
the student's ID). In the discussion below, we assume that we have to implement
a dictionary of n records with keys K1, K2, . . . , Kn.
Hashing is based on the idea of distributing keys among a one-dimensional
array H[0..m − 1] called a hash table. The distribution is done by computing, for
each of the keys, the value of some predefined function h called the hash function.
This function assigns an integer between 0 and m − 1, called the hash address, to
a key.
For example, if keys are nonnegative integers, a hash function can be of
the form h(K) = K mod m; obviously, the remainder of division by m is always
between 0 and m − 1. If keys are letters of some alphabet, we can first assign a letter
its position in the alphabet, denoted here ord(K), and then apply the same kind
of a function used for integers. Finally, if K is a character string c0c1 . . . cs−1, we
can use, as a very unsophisticated option, (Σ_{i=0}^{s−1} ord(ci)) mod m. A better option
is to compute h(K) as follows:²

h ← 0; for i ← 0 to s − 1 do h ← (h ∗ C + ord(ci)) mod m,

where C is a constant larger than every ord(ci).
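In Python, with ord(ci) taken to be the character's code and C = 256 (an assumption that suffices for keys made of ASCII characters), this hash function looks as follows (a sketch; the function name is ours):

def string_hash(key, m, c=256):
    # Horner's-rule hash of a character string;
    # c must exceed the ord value of every character.
    h = 0
    for ch in key:
        h = (h * c + ord(ch)) % m
    return h

print(string_hash("MONEY", 13))   # an address between 0 and 12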
In general, a hash function needs to satisfy somewhat conflicting requirements:
A hash table's size should not be excessively large compared to the number of
keys, but it should be sufficient to not jeopardize the implementation's time
efficiency (see below).
A hash function needs to distribute keys among the cells of the hash table as
evenly as possible. (This requirement makes it desirable, for most applications,
to have a hash function dependent on all bits of a key, not just some of them.)
A hash function has to be easy to compute.
2. This can be obtained by treating ord(ci) as digits of a number in the C-based system, computing its
decimal value by Horner's rule, and finding the remainder of the number after dividing it by m.
FIGURE 7.4 Collision of two keys in hashing: h(Ki) = h(Kj).
Obviously, if we choose a hash table's size m to be smaller than the number
of keys n, we will get collisions, a phenomenon of two (or more) keys being
hashed into the same cell of the hash table (Figure 7.4). But collisions should be
expected even if m is considerably larger than n (see Problem 5 in this section's
exercises). In fact, in the worst case, all the keys could be hashed to the same cell
of the hash table. Fortunately, with an appropriately chosen hash table size and a
good hash function, this situation happens very rarely. Still, every hashing scheme
must have a collision resolution mechanism. This mechanism is different in the
two principal versions of hashing: open hashing (also called separate chaining)
and closed hashing (also called open addressing).
Open Hashing (Separate Chaining)
In open hashing, keys are stored in linked lists attached to cells of a hash table.
Each list contains all the keys hashed to its cell. Consider, as an example, the
following list of words:
A, FOOL, AND, HIS, MONEY, ARE, SOON, PARTED.
As a hash function, we will use the simple function for strings mentioned above,
i.e., we will add the positions of a word's letters in the alphabet and compute the
sum's remainder after division by 13.
We start with the empty table. The first key is the word A; its hash value is
h(A) = 1 mod 13 = 1. The second key, the word FOOL, is installed in the ninth
cell since (6 + 15 + 15 + 12) mod 13 = 9, and so on. The final result of this process
is given in Figure 7.5; note a collision of the keys ARE and SOON because h(ARE) =
(1 + 18 + 5) mod 13 = 11 and h(SOON) = (19 + 15 + 15 + 14) mod 13 = 11.
How do we search in a dictionary implemented as such a table of linked lists?
We do this by simply applying to a search key the same procedure that was used
for creating the table. To illustrate, if we want to search for the key KID in the hash
table of Figure 7.5, we first compute the value of the same hash function for the
key: h(KID) = 11. Since the list attached to cell 11 is not empty, its linked list may
contain the search key. But because of possible collisions, we cannot tell whether
this is the case until we traverse this linked list. After comparing the string KID first
with the string ARE and then with the string SOON, we end up with an unsuccessful
search.
In general, the efficiency of searching depends on the lengths of the linked
lists, which, in turn, depend on the dictionary and table sizes, as well as the quality
of the hash function.
keys             A   FOOL   AND   HIS   MONEY   ARE   SOON   PARTED
hash addresses   1     9     6    10      7     11     11      12

cell    0   1   2   3   4   5    6     7      8    9     10     11          12
            A               AND  MONEY      FOOL  HIS  ARE → SOON  PARTED

FIGURE 7.5 Example of a hash table construction with separate chaining.
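A minimal separate-chaining table in Python, using the letter-position hash of this example (a sketch; the class and its names are ours):

class ChainedHashTable:
    def __init__(self, m=13):
        self.m = m
        self.cells = [[] for _ in range(m)]

    def hash(self, word):
        # position of a letter in the alphabet: A -> 1, B -> 2, ...
        return sum(ord(ch) - ord('A') + 1 for ch in word) % self.m

    def insert(self, key):
        self.cells[self.hash(key)].append(key)

    def search(self, key):
        return key in self.cells[self.hash(key)]  # traverses the cell's list

t = ChainedHashTable()
for w in ["A", "FOOL", "AND", "HIS", "MONEY", "ARE", "SOON", "PARTED"]:
    t.insert(w)
print(t.search("KID"))  # False: cell 11 holds ARE and SOON but not KID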
For linear probing, the average number of probes in a table with load factor α
is about (1/2)(1 + 1/(1 − α)) for a successful search and (1/2)(1 + 1/(1 − α)^2)
for an unsuccessful one:³

load factor α    successful search    unsuccessful search
     50%               1.5                   2.5
     75%               2.5                   8.5
     90%               5.5                  50.5
Still, as the hash table gets closer to being full, the performance of linear prob-
ing deteriorates because of a phenomenon called clustering. A cluster in linear
probing is a sequence of contiguously occupied cells (with a possible wrapping).
For example, the final state of the hash table of Figure 7.6 has two clusters. Clusters
are bad news in hashing because they make the dictionary operations less
efficient. As clusters become larger, the probability that a new element will be
attached to a cluster increases; in addition, large clusters increase the probability
that two clusters will coalesce after a new key's insertion, causing even more
clustering.
Several other collision resolution strategies have been suggested to alleviate
this problem. One of the most important is double hashing. Under this scheme, we
use another hash function, s(K), to determine a fixed increment for the probing
sequence to be used after a collision at location l = h(K):

(l + s(K)) mod m, (l + 2s(K)) mod m, . . . .                    (7.6)

To guarantee that every location in the table is probed by sequence (7.6), the increment
s(K) and the table size m must be relatively prime, i.e., their only common
divisor must be 1. (This condition is satisfied automatically if m itself is prime.)
Some functions recommended in the literature are s(k) = m − 2 − k mod (m − 2)
and s(k) = 8 − (k mod 8) for small tables and s(k) = k mod 97 + 1 for larger ones.
3. This problem was solved in 1962 by a young graduate student in mathematics named Donald E.
Knuth. Knuth went on to become one of the most important computer scientists of our time. His
multivolume treatise The Art of Computer Programming [KnuI, KnuII, KnuIII, KnuIV] remains the
most comprehensive and influential book on algorithmics ever published.
Mathematical analysis of double hashing has proved to be quite difficult. Some
partial results and considerable practical experience with the method suggest that
with good hashing functions, both primary and secondary, double hashing is superior
to linear probing. But its performance also deteriorates when the table gets
close to being full. A natural solution in such a situation is rehashing: the current
table is scanned, and all its keys are relocated into a larger table.
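A sketch of insertion under double hashing in Python (the names and the particular hash functions are ours; it assumes the table has a free cell and that s(K) is relatively prime to the table size):

def double_hash_insert(table, key, h, s):
    # Probe the cells (h(key) + i * s(key)) mod m, per sequence (7.6),
    # until an empty cell (marked None) is found.
    m = len(table)
    location = h(key) % m
    step = s(key)
    while table[location] is not None:
        location = (location + step) % m
    table[location] = key

m = 13                                    # a prime table size
table = [None] * m
for k in [30, 20, 56, 75, 31, 19]:
    double_hash_insert(table, k, h=lambda x: x, s=lambda x: x % 11 + 1)
print(table)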
It is worthwhile to compare the main properties of hashing with balanced
search trees, its principal competitor for implementing dictionaries.
Asymptotic time efficiency With hashing, searching, insertion, and deletion
can be implemented to take Θ(1) time on the average but Θ(n) time in the very
unlikely worst case. For balanced search trees, the corresponding efficiencies
are Θ(log n) for both the average and worst cases.
Ordering preservation Unlike balanced search trees, hashing does not
assume existence of key ordering and usually does not preserve it. This makes
hashing less suitable for applications that need to iterate over the keys in order
or require range queries such as counting the number of keys between
some lower and upper bounds.
Since its discovery in the 1950s by IBM researchers, hashing has found many
important applications. In particular, it has become a standard technique for storing
a symbol table, a table of a computer program's symbols generated during
compilation. Hashing is quite handy for such AI applications as checking whether
positions generated by a chess-playing computer program have already been considered.
With some modifications, it has also proved to be useful for storing very
large dictionaries on disks; this variation of hashing is called extendible hashing.
Since disk access is expensive compared with probes performed in the main memory,
it is preferable to make many more probes than disk accesses. Accordingly, a
location computed by a hash function in extendible hashing indicates a disk address
of a bucket that can hold up to b keys. When a key's bucket is identified,
all its keys are read into main memory and then searched for the key in question.
In the next section, we discuss B-trees, a principal alternative for storing large
dictionaries.
Exercises 7.3
1. For the input 30, 20, 56, 75, 31, 19 and hash function h(K) = K mod 11
a. construct the open hash table.
b. find the largest number of key comparisons in a successful search in this
table.
c. find the average number of key comparisons in a successful search in this
table.
2. For the input 30, 20, 56, 75, 31, 19 and hash function h(K) = K mod 11
a. construct the closed hash table.
b. find the largest number of key comparisons in a successful search in this
table.
c. find the average number of key comparisons in a successful search in this
table.
3. Why is it not a good idea for a hash function to depend on just one letter (say,
the first one) of a natural-language word?
4. Find the probability of all n keys being hashed to the same cell of a hash table
of size m if the hash function distributes keys evenly among all the cells of the
table.
5. Birthday paradox The birthday paradox asks how many people should be
in a room so that the chances are better than even that two of them will have
the same birthday (month and day). Find the quite unexpected answer to this
problem. What implication for hashing does this result have?
6. Answer the following questions for the separate-chaining version of hashing.
a. Where would you insert keys if you knew that all the keys in the dictionary
are distinct? Which dictionary operations, if any, would benefit from this
modification?
b. We could keep keys of the same linked list sorted. Which of the dictionary
operations would benefit from this modification? How could we take
advantage of this if all the keys stored in the entire table need to be sorted?
7. Explain how to use hashing to check whether all elements of a list are distinct.
What is the time efficiency of this application? Compare its efficiency with
that of the brute-force algorithm (Section 2.3) and of the presorting-based
algorithm (Section 6.1).
8. Fill in the following table with the average-case (as the first entry) and worst-case
(as the second entry) efficiency classes for the five implementations of
the ADT dictionary:

            unordered   ordered   binary        balanced
            array       array     search tree   search tree   hashing
search
insertion
deletion
9. We have discussed hashing in the context of techniques based on space-time
trade-offs. But it also takes advantage of another general strategy. Which one?
10. Write a computer program that uses hashing for the following problem. Given
a natural-language text, generate a list of distinct words with the number of
occurrences of each word in the text. Insert appropriate counters in the program
to compare the empirical efficiency of hashing with the corresponding
theoretical results.
A parental node of a B-tree has the form

p0  K1  p1  K2  . . .  pi−1  Ki  pi  . . .  Kn−1  pn−1,

where pointer pi points to the root of subtree Ti.

FIGURE 7.7 Parental node of a B-tree.
7.4 B-Trees
The idea of using extra space to facilitate faster access to a given data set is partic-
ularly important if the data set in question contains a very large number of records
that need to be stored on a disk. A principal device in organizing such data sets
is an index, which provides some information about the location of records with
indicated key values. For data sets of structured records (as opposed to unstruc-
tured data such as text, images, sound, and video), the most important index
organization is the B-tree, introduced by R. Bayer and E. McGreight [Bay72]. It
extends the idea of the 2-3 tree (see Section 6.3) by permitting more than a single
key in the same node of a search tree.
In the B-tree version we consider here, all data records (or record keys)
are stored at the leaves, in increasing order of the keys. The parental nodes are
used for indexing. Specifically, each parental node contains n − 1 ordered keys
K1 < . . . < Kn−1 assumed, for the sake of simplicity, to be distinct. The keys are
interposed with n pointers to the node's children so that all the keys in subtree T0
are smaller than K1, all the keys in subtree T1 are greater than or equal to K1 and
smaller than K2, with K1 being equal to the smallest key in T1, and so on, through
the last subtree Tn−1 whose keys are greater than or equal to Kn−1, with Kn−1 being
equal to the smallest key in Tn−1 (see Figure 7.7).⁴
In addition, a B-tree of order m ≥ 2 must satisfy the following structural
properties:
The root is either a leaf or has between 2 and m children.
Each node, except for the root and the leaves, has between ⌈m/2⌉ and m
children (and hence between ⌈m/2⌉ − 1 and m − 1 keys).
The tree is (perfectly) balanced, i.e., all its leaves are at the same level.
4. The node depicted in Figure 7.7 is called the n-node. Thus, all the nodes in a classic binary search tree
are 2-nodes; a 2-3 tree introduced in Section 6.3 comprises 2-nodes and 3-nodes.
                           20        51

       11  15          25  34  40              60

4,7,10 | 11,14 | 15,16,19 || 20,24 | 25,28 | 34,38 | 40,43,46 || 51,55 | 60,68,80

FIGURE 7.8 Example of a B-tree of order 4.
An example of a B-tree of order 4 is given in Figure 7.8.
Searching in a B-tree is very similar to searching in the binary search tree, and
even more so in the 2-3 tree. Starting with the root, we follow a chain of pointers
to the leaf that may contain the search key. Then we search for the search key
among the keys of that leaf. Note that since keys are stored in sorted order, at
both parental nodes and leaves, we can use binary search if the number of keys at
a node is large enough to make it worthwhile.
It is not the number of key comparisons, however, that we should be concerned
about in a typical application of this data structure. When used for storing
a large data file on a disk, the nodes of a B-tree normally correspond to the disk
pages. Since the time needed to access a disk page is typically several orders of
magnitude larger than the time needed to compare keys in the fast computer memory,
it is the number of disk accesses that becomes the principal indicator of the
efficiency of this and similar data structures.
How many nodes of a B-tree do we need to access during a search for a record
with a given key value? This number is, obviously, equal to the height of the tree
plus 1. To estimate the height, let us find the smallest number of keys a B-tree of
order m and positive height h can have. The root of the tree will contain at least
one key. Level 1 will have at least two nodes with at least ⌈m/2⌉ − 1 keys in each
of them, for the total minimum number of keys 2(⌈m/2⌉ − 1). Level 2 will have at
least 2⌈m/2⌉ nodes (the children of the nodes on level 1) with at least ⌈m/2⌉ − 1
keys in each of them, for the total minimum number of keys 2⌈m/2⌉(⌈m/2⌉ − 1). In
general, the nodes of level i, 1 ≤ i ≤ h − 1, will contain at least 2⌈m/2⌉^(i−1)(⌈m/2⌉
− 1) keys. Finally, level h, the leaf level, will have at least 2⌈m/2⌉^(h−1) nodes with at
least one key in each. Thus, for any B-tree of order m with n keys and height
h > 0, we have the following inequality:

n ≥ 1 + Σ_{i=1}^{h−1} 2⌈m/2⌉^(i−1)(⌈m/2⌉ − 1) + 2⌈m/2⌉^(h−1).
After a series of standard simplifications (see Problem 2 in this section's exercises),
this inequality reduces to

n ≥ 4⌈m/2⌉^(h−1) − 1,

which, in turn, yields the following upper bound on the height h of the B-tree of
order m with n keys:

h ≤ ⌊log_⌈m/2⌉ ((n + 1)/4)⌋ + 1.                    (7.7)
Inequality (7.7) immediately implies that searching in a B-tree is a O(log n)
operation. But it is important to ascertain here not just the efficiency class but
the actual number of disk accesses implied by this formula. The following table
contains the values of the right-hand-side estimates for a file of 100 million records
and a few typical values of the tree's order m:

order m            50   100   250
h's upper bound     6     5     4

Keep in mind that the table's entries are upper estimates for the number of disk
accesses. In actual applications, this number rarely exceeds 3, with the B-tree's
root and sometimes first-level nodes stored in the fast memory to minimize the
number of disk accesses.
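Bound (7.7) is easy to evaluate directly; the following Python lines (a sketch; the function name is ours) reproduce the table's entries:

from math import ceil, floor, log

def btree_height_bound(n, m):
    # Upper bound (7.7) on the height of an order-m B-tree with n keys.
    return floor(log((n + 1) / 4, ceil(m / 2))) + 1

for m in (50, 100, 250):
    print(m, btree_height_bound(10**8, m))   # prints 50 6, 100 5, 250 4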
The operations of insertion and deletion are less straightforward than searching,
but both can also be done in O(log n) time. Here we outline an insertion
algorithm only; a deletion algorithm can be found in the references (e.g., [Aho83],
[Cor09]).
The most straightforward algorithm for inserting a new record into a B-tree
is quite similar to the algorithm for insertion into a 2-3 tree outlined in
Section 6.3. First, we apply the search procedure to the new record's key K to
find the appropriate leaf for the new record. If there is room for the record in that
leaf, we place it there (in an appropriate position so that the keys remain sorted)
and we are done. If there is no room for the record, the leaf is split in half by
sending the second half of the records to a new node. After that, the smallest key
K′ in the new node and the pointer to it are inserted into the old leaf's parent
(immediately after the key and pointer to the old leaf). This recursive procedure
may percolate up to the tree's root. If the root is already full too, a new root is
created with the two halves of the old root's keys split between two children of
the new root. As an example, Figure 7.9 shows the result of inserting 65 into the
B-tree in Figure 7.8 under the restriction that the leaves cannot contain more than
three items.
You should be aware that there are other algorithms for implementing insertions
into a B-tree. For example, to avoid the possibility of recursive node splits,
we can split full nodes encountered in searching for an appropriate leaf for the
new record. Another possibility is to avoid some node splits by moving a key to
the node's sibling. For example, inserting 65 into the B-tree in Figure 7.8 can be
done by moving 60, the smallest key of the full leaf, to its sibling with keys 51 and
55, and replacing the key value of their parent by 65, the new smallest value in
the second child. This modification tends to save some space at the expense of a
slightly more complicated algorithm.

FIGURE 7.9 B-tree obtained after inserting 65 into the B-tree in Figure 7.8.
A B-tree does not have to be always associated with the indexing of a large
file, and it can be considered as one of several search tree varieties. As with other
types of search trees, such as binary search trees, AVL trees, and 2-3 trees, a B-tree
can be constructed by successive insertions of data records into the initially
empty tree. (The empty tree is considered to be a B-tree, too.) When all keys reside
in the leaves and the upper levels are organized as a B-tree comprising an index,
the entire structure is usually called, in fact, a B⁺-tree.
Exercises 7.4
1. Give examples of using an index in real-life applications that do not involve
computers.
2. a. Prove the equality

      1 + Σ_{i=1}^{h−1} 2⌈m/2⌉^(i−1)(⌈m/2⌉ − 1) + 2⌈m/2⌉^(h−1) = 4⌈m/2⌉^(h−1) − 1,

      which was used in the derivation of upper bound (7.7) for the height of a
      B-tree.
   b. Complete the derivation of inequality (7.7).
3. Find the minimum order of the B-tree that guarantees that the number of disk
   accesses in searching in a file of 100 million records does not exceed 3. Assume
   that the root's page is stored in main memory.
4. Draw the B-tree obtained after inserting 30 and then 31 in the B-tree in
   Figure 7.8. Assume that a leaf cannot contain more than three items.
5. Outline an algorithm for finding the largest key in a B-tree.
6. a. A top-down 2-3-4 tree is a B-tree of order 4 with the following modification
      of the insert operation: Whenever a search for a leaf for a new key
      encounters a full node (i.e., a node with three keys), the node is split into
      two nodes by sending its middle key to the node's parent, or, if the full
      node happens to be the root, the new root for the middle key is created.
      Construct a top-down 2-3-4 tree by inserting the following list of keys in
      the initially empty tree:

      10, 6, 15, 31, 20, 27, 50, 44, 18.

   b. What is the principal advantage of this insertion procedure compared with
      the one used for 2-3 trees in Section 6.3? What is its disadvantage?
7. a. Write a program implementing a key insertion algorithm in a B-tree.
b. Write a program for visualization of a key insertion algorithm in a B-tree.
SUMMARY
Space and time trade-offs in algorithm design are a well-known issue for
both theoreticians and practitioners of computing. As an algorithm design
technique, trading space for time is much more prevalent than trading time
for space.
Input enhancement is one of the two principal varieties of trading space for
time in algorithm design. Its idea is to preprocess the problem's input, in whole
or in part, and store the additional information obtained in order to accelerate
solving the problem afterward. Sorting by distribution counting and several
important algorithms for string matching are examples of algorithms based
on this technique.
Distribution counting is a special method for sorting lists of elements from a
small set of possible values.
Horspool's algorithm for string matching can be considered a simplified
version of the Boyer-Moore algorithm. Both algorithms are based on the ideas
of input enhancement and right-to-left comparisons of a pattern's characters.
Both algorithms use the same bad-symbol shift table; the Boyer-Moore also
uses a second table, called the good-suffix shift table.
Prestructuring, the second type of technique that exploits space-for-time
trade-offs, uses extra space to facilitate a faster and/or more flexible access
to the data. Hashing and B-trees are important examples of prestructuring.
C(i, j) = min_{i≤k≤j} { p_k · 1 + Σ_{s=i}^{k−1} p_s · (level of a_s in T_i^{k−1} + 1)
                                + Σ_{s=k+1}^{j} p_s · (level of a_s in T_{k+1}^{j} + 1) }

        = min_{i≤k≤j} { Σ_{s=i}^{k−1} p_s · level of a_s in T_i^{k−1}
                        + Σ_{s=k+1}^{j} p_s · level of a_s in T_{k+1}^{j} + Σ_{s=i}^{j} p_s }

        = min_{i≤k≤j} { C(i, k − 1) + C(k + 1, j) } + Σ_{s=i}^{j} p_s.
Thus, we have the recurrence

C(i, j) = min_{i≤k≤j} { C(i, k − 1) + C(k + 1, j) } + Σ_{s=i}^{j} p_s   for 1 ≤ i ≤ j ≤ n.   (8.8)

We assume in formula (8.8) that C(i, i − 1) = 0 for 1 ≤ i ≤ n + 1, which can be
interpreted as the number of comparisons in the empty tree. Note that this formula
implies that

C(i, i) = p_i   for 1 ≤ i ≤ n,

as it should be for a one-node binary search tree containing a_i.
FIGURE 8.9 Table of the dynamic programming algorithm for constructing an optimal
binary search tree.
The two-dimensional table in Figure 8.9 shows the values needed for computing
C(i, j) by formula (8.8): they are in row i and the columns to the left of column
j and in column j and the rows below row i. The arrows point to the pairs of entries
whose sums are computed in order to find the smallest one to be recorded as
the value of C(i, j). This suggests filling the table along its diagonals, starting with
all zeros on the main diagonal and given probabilities p_i, 1 ≤ i ≤ n, right above it
and moving toward the upper right corner.

The algorithm we just sketched computes C(1, n), the average number of
comparisons for successful searches in the optimal binary tree. If we also want to
get the optimal tree itself, we need to maintain another two-dimensional table to
record the value of k for which the minimum in (8.8) is achieved. The table has
the same shape as the table in Figure 8.9 and is filled in the same manner, starting
with entries R(i, i) = i for 1 ≤ i ≤ n. When the table is filled, its entries indicate
indices of the roots of the optimal subtrees, which makes it possible to reconstruct
an optimal tree for the entire set given.
EXAMPLE Let us illustrate the algorithm by applying it to the four-key set we
used at the beginning of this section:

key           A     B     C     D
probability   0.1   0.2   0.4   0.3

The initial tables look like this:

main table:
       0     1     2     3     4
  1    0    0.1
  2          0    0.2
  3                0    0.4
  4                      0    0.3
  5                            0

root table:
       0    1    2    3    4
  1         1
  2              2
  3                   3
  4                        4
  5
Let us compute C(1, 2):

C(1, 2) = min { k = 1: C(1, 0) + C(2, 2) + Σ_{s=1}^{2} p_s = 0 + 0.2 + 0.3 = 0.5
                k = 2: C(1, 1) + C(3, 2) + Σ_{s=1}^{2} p_s = 0.1 + 0 + 0.3 = 0.4 }
        = 0.4.
Thus, out of two possible binary trees containing the first two keys, A and B, the
root of the optimal tree has index 2 (i.e., it contains B), and the average number
of comparisons in a successful search in this tree is 0.4.

We will ask you to finish the computations in the exercises. You should arrive
at the following final tables:
main table:
       0     1     2     3     4
  1    0    0.1   0.4   1.1   1.7
  2          0    0.2   0.8   1.4
  3                0    0.4   1.0
  4                      0    0.3
  5                            0

root table:
       0    1    2    3    4
  1         1    2    3    3
  2              2    3    3
  3                   3    3
  4                        4
  5
Thus, the average number of key comparisons in the optimal tree is equal to
1.7. Since R(1, 4) = 3, the root of the optimal tree contains the third key, i.e., C. Its
left subtree is made up of keys A and B, and its right subtree contains just key D
(why?). To find the specific structure of these subtrees, we find first their roots by
consulting the root table again as follows. Since R(1, 2) = 2, the root of the optimal
tree containing A and B is B, with A being its left child (and the root of the one-node
tree: R(1, 1) = 1). Since R(4, 4) = 4, the root of this one-node optimal tree is
its only key D. Figure 8.10 presents the optimal tree in its entirety.

        C
       / \
      B   D
     /
    A

FIGURE 8.10 Optimal binary search tree for the example.
Here is pseudocode of the dynamic programming algorithm.

ALGORITHM OptimalBST(P[1..n])
//Finds an optimal binary search tree by dynamic programming
//Input: An array P[1..n] of search probabilities for a sorted list of n keys
//Output: Average number of comparisons in successful searches in the
//        optimal BST and table R of subtrees' roots in the optimal BST
for i ← 1 to n do
    C[i, i − 1] ← 0
    C[i, i] ← P[i]
    R[i, i] ← i
C[n + 1, n] ← 0
for d ← 1 to n − 1 do //diagonal count
    for i ← 1 to n − d do
        j ← i + d
        minval ← ∞
        for k ← i to j do
            if C[i, k − 1] + C[k + 1, j] < minval
                minval ← C[i, k − 1] + C[k + 1, j]; kmin ← k
        R[i, j] ← kmin
        sum ← P[i]; for s ← i + 1 to j do sum ← sum + P[s]
        C[i, j] ← minval + sum
return C[1, n], R
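For readers who want to experiment, the following Python rendering is a sketch of ours, not the book's code; it mirrors the pseudocode's 1-based indexing and can be checked against the example's tables:

import math

def optimal_bst(p):
    # Bottom-up computation of recurrence (8.8); p lists the search
    # probabilities of the keys in sorted order (0-based here).
    # Returns the average number of comparisons and the root table R.
    n = len(p)
    C = [[0.0] * (n + 1) for _ in range(n + 2)]   # rows 0..n+1, cols 0..n
    R = [[0] * (n + 1) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i - 1]
        R[i][i] = i
    for d in range(1, n):                          # diagonal count
        for i in range(1, n - d + 1):
            j = i + d
            minval, kmin = math.inf, i
            for k in range(i, j + 1):
                q = C[i][k - 1] + C[k + 1][j]
                if q < minval:
                    minval, kmin = q, k
            R[i][j] = kmin
            C[i][j] = minval + sum(p[i - 1:j])
    return C[1][n], R

avg, R = optimal_bst([0.1, 0.2, 0.4, 0.3])
print(avg, R[1][4])   # about 1.7 and 3 (key C is the root), as in the example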
The algorithm's space efficiency is clearly quadratic; the time efficiency of this
version of the algorithm is cubic (why?). A more careful analysis shows that entries
in the root table are always nondecreasing along each row and column. This limits
values for R(i, j) to the range R(i, j − 1), . . . , R(i + 1, j) and makes it possible
to reduce the running time of the algorithm to Θ(n²).
Exercises 8.3

1. Finish the computations started in the section's example of constructing an
   optimal binary search tree.
2. a. Why is the time efficiency of algorithm OptimalBST cubic?
   b. Why is the space efficiency of algorithm OptimalBST quadratic?
3. Write pseudocode for a linear-time algorithm that generates the optimal
   binary search tree from the root table.
4. Devise a way to compute the sums Σ_{s=i}^{j} p_s, which are used in the dynamic
   programming algorithm for constructing an optimal binary search tree, in
   constant time (per sum).
5. True or false: The root of an optimal binary search tree always contains the
   key with the highest search probability?
6. How would you construct an optimal binary search tree for a set of n keys if
   all the keys are equally likely to be searched for? What will be the average
   number of comparisons in a successful search in such a tree if n = 2^k?
7. a. Show that the number of distinct binary search trees b(n) that can be
      constructed for a set of n orderable keys satisfies the recurrence relation

      b(n) = Σ_{k=0}^{n−1} b(k)b(n − 1 − k) for n > 0,  b(0) = 1.

   b. It is known that the solution to this recurrence is given by the Catalan
      numbers. Verify this assertion for n = 1, 2, . . . , 5.
   c. Find the order of growth of b(n). What implication does the answer to
      this question have for the exhaustive-search algorithm for constructing an
      optimal binary search tree?
8. Design a Θ(n²) algorithm for finding an optimal binary search tree.
9. Generalize the optimal binary search algorithm by taking into account unsuccessful
   searches.
10. Write pseudocode of a memory function for the optimal binary search tree
    problem. You may limit your function to finding the smallest number of key
    comparisons in a successful search.
11. Matrix chain multiplication Consider the problem of minimizing the total
    number of multiplications made in computing the product of n matrices
    A₁ · A₂ · . . . · Aₙ whose dimensions are d₀ × d₁, d₁ × d₂, . . . , d_{n−1} × dₙ,
    respectively. Assume that all intermediate products of two matrices are
    computed by the brute-force (definition-based) algorithm.
    a. Give an example of three matrices for which the number of multiplications
       in (A₁ · A₂) · A₃ and A₁ · (A₂ · A₃) differ at least by a factor of 1000.
    b. How many different ways are there to compute the product of n matrices?
    c. Design a dynamic programming algorithm for finding an optimal order of
       multiplying n matrices.
8.4 Warshall's and Floyd's Algorithms

In this section, we look at two well-known algorithms: Warshall's algorithm for
computing the transitive closure of a directed graph and Floyd's algorithm for the
all-pairs shortest-paths problem. These algorithms are based on essentially the
same idea: exploit a relationship between a problem and its simpler rather than
smaller version. Warshall and Floyd published their algorithms without mentioning
dynamic programming. Nevertheless, the algorithms certainly have a dynamic
programming flavor and have come to be considered applications of this technique.
Warshall's Algorithm

Recall that the adjacency matrix A = {a_ij} of a directed graph is the boolean matrix
that has 1 in its ith row and jth column if and only if there is a directed edge from
the ith vertex to the jth vertex. We may also be interested in a matrix containing
the information about the existence of directed paths of arbitrary lengths between
vertices of a given graph. Such a matrix, called the transitive closure of the digraph,
would allow us to determine in constant time whether the jth vertex is reachable
from the ith vertex.
Here are a few application examples. When a value in a spreadsheet cell
is changed, the spreadsheet software must know all the other cells affected by
the change. If the spreadsheet is modeled by a digraph whose vertices represent
the spreadsheet cells and edges indicate cell dependencies, the transitive closure
will provide such information. In software engineering, transitive closure can be
used for investigating data flow and control flow dependencies as well as for
inheritance testing of object-oriented software. In electronic engineering, it is used
for redundancy identification and test generation for digital circuits.
DEFINITION The transitive closure of a directed graph with n vertices can be
defined as the n × n boolean matrix T = {t_ij}, in which the element in the ith row
and the jth column is 1 if there exists a nontrivial path (i.e., directed path of a
positive length) from the ith vertex to the jth vertex; otherwise, t_ij is 0.
An example of a digraph, its adjacency matrix, and its transitive closure is
given in Figure 8.11.
FIGURE 8.11 (a) Digraph. (b) Its adjacency matrix. (c) Its transitive closure.

        a b c d                a b c d
    a   0 1 0 0            a   1 1 1 1
A = b   0 0 0 1        T = b   1 1 1 1
    c   0 0 0 0            c   0 0 0 0
    d   1 0 1 0            d   1 1 1 1

We can generate the transitive closure of a digraph with the help of depth-first
search or breadth-first search. Performing either traversal starting at the ith
vertex gives the information about the vertices reachable from it and hence the
columns that contain 1s in the ith row of the transitive closure. Thus, doing such
a traversal for every vertex as a starting point yields the transitive closure in its
entirety.
Since this method traverses the same digraph several times, we should hope
that a better algorithm can be found. Indeed, such an algorithm exists. It is called
Warshall's algorithm after Stephen Warshall, who discovered it [War62]. It is
convenient to assume that the digraph's vertices and hence the rows and columns
of the adjacency matrix are numbered from 1 to n. Warshall's algorithm constructs
the transitive closure through a series of n × n boolean matrices:

R^(0), . . . , R^(k−1), R^(k), . . . , R^(n).   (8.9)

Each of these matrices provides certain information about directed paths in the
digraph. Specifically, the element r^(k)_ij in the ith row and jth column of matrix
R^(k) (i, j = 1, 2, . . . , n, k = 0, 1, . . . , n) is equal to 1 if and only if there exists a
directed path of a positive length from the ith vertex to the jth vertex with each
intermediate vertex, if any, numbered not higher than k. Thus, the series starts
with R^(0), which does not allow any intermediate vertices in its paths; hence,
R^(0) is nothing other than the adjacency matrix of the digraph. (Recall that the
adjacency matrix contains the information about one-edge paths, i.e., paths with
no intermediate vertices.) R^(1) contains the information about paths that can use
the first vertex as intermediate; thus, with more freedom, so to speak, it may
contain more 1s than R^(0). In general, each subsequent matrix in series (8.9) has
one more vertex to use as intermediate for its paths than its predecessor and hence
may, but does not have to, contain more 1s. The last matrix in the series, R^(n),
reflects paths that can use all n vertices of the digraph as intermediate and hence
is nothing other than the digraph's transitive closure.
The central point of the algorithm is that we can compute all the elements of
each matrix R^(k) from its immediate predecessor R^(k−1) in series (8.9). Let r^(k)_ij,
the element in the ith row and jth column of matrix R^(k), be equal to 1. This
means that there exists a path from the ith vertex v_i to the jth vertex v_j with each
intermediate vertex numbered not higher than k:

v_i, a list of intermediate vertices each numbered not higher than k, v_j.   (8.10)
FIGURE 8.14 (a) Digraph. (b) Its weight matrix. (c) Its distance matrix.

        a   b   c   d                 a   b   c   d
    a   0   ∞   3   ∞             a   0   10  3   4
W = b   2   0   ∞   ∞         D = b   2   0   5   6
    c   ∞   7   0   1             c   7   7   0   1
    d   6   ∞   ∞   0             d   6   16  9   0
For sparse graphs represented by their adjacency lists, the traversal-based
algorithm mentioned at the beginning of this section has a better asymptotic efficiency
than Warshall's algorithm (why?). We can speed up the above implementation
of Warshall's algorithm for some inputs by restructuring its innermost loop (see
Problem 4 in this section's exercises). Another way to make the algorithm run
faster is to treat matrix rows as bit strings and employ the bitwise or operation
available in most modern computer languages.

As to the space efficiency of Warshall's algorithm, the situation is similar to
that of computing a Fibonacci number and some other dynamic programming
algorithms. Although we used separate matrices for recording intermediate results
of the algorithm, this is, in fact, unnecessary. Problem 3 in this section's exercises
asks you to find a way of avoiding this wasteful use of the computer memory.
Finally, we shall see below how the underlying idea of Warshall's algorithm can
be applied to the more general problem of finding lengths of shortest paths in
weighted graphs.
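Since the pseudocode of Warshall's algorithm is not reproduced above, here is a hedged Python sketch of ours that computes the transitive closure in place, in a single matrix, and restructures the innermost loop in the spirit of Problem 4:

def warshall(adj):
    # In-place Warshall's algorithm: adj is an n-by-n 0/1 matrix.
    # On return, adj[i][j] == 1 iff the digraph has a directed path
    # of positive length from vertex i to vertex j.
    n = len(adj)
    for k in range(n):
        for i in range(n):
            if adj[i][k]:            # restructured inner loop: row i can
                for j in range(n):   # gain new 1s only if adj[i][k] == 1
                    if adj[k][j]:
                        adj[i][j] = 1
    return adj

# Adjacency matrix of the digraph in Figure 8.11 (vertices a, b, c, d)
A = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
print(warshall([row[:] for row in A]))  # rows a, b, d become all 1s; row c stays all 0s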
Floyd's Algorithm for the All-Pairs Shortest-Paths Problem

Given a weighted connected graph (undirected or directed), the all-pairs shortest-paths
problem asks to find the distances, i.e., the lengths of the shortest paths,
from each vertex to all other vertices. This is one of several variations of the
problem involving shortest paths in graphs. Because of its important applications
to communications, transportation networks, and operations research, it has been
thoroughly studied over the years. Among recent applications of the all-pairs
shortest-paths problem is precomputing distances for motion planning in computer
games.

It is convenient to record the lengths of shortest paths in an n × n matrix D
called the distance matrix: the element d_ij in the ith row and the jth column of
this matrix indicates the length of the shortest path from the ith vertex to the jth
vertex. For an example, see Figure 8.14.
We can generate the distance matrix with an algorithm that is very similar to
Warshall's algorithm. It is called Floyd's algorithm after its co-inventor Robert W.
Floyd.¹ It is applicable to both undirected and directed weighted graphs provided
that they do not contain a cycle of a negative length. (The distance between any two
vertices in such a cycle can be made arbitrarily small by repeating the cycle enough
times.) The algorithm can be enhanced to find not only the lengths of the shortest
paths for all vertex pairs but also the shortest paths themselves (Problem 10 in this
section's exercises).

1. Floyd explicitly referenced Warshall's paper in presenting his algorithm [Flo62]. Three years earlier,
Bernard Roy published essentially the same algorithm in the proceedings of the French Academy of
Sciences [Roy59].
Floyd's algorithm computes the distance matrix of a weighted graph with n
vertices through a series of n × n matrices:

D^(0), . . . , D^(k−1), D^(k), . . . , D^(n).   (8.12)

Each of these matrices contains the lengths of shortest paths with certain constraints
on the paths considered for the matrix in question. Specifically, the element
d^(k)_ij in the ith row and the jth column of matrix D^(k) (i, j = 1, 2, . . . , n,
k = 0, 1, . . . , n) is equal to the length of the shortest path among all paths from
the ith vertex to the jth vertex with each intermediate vertex, if any, numbered
not higher than k. In particular, the series starts with D^(0), which does not allow
any intermediate vertices in its paths; hence, D^(0) is simply the weight matrix of the
graph. The last matrix in the series, D^(n), contains the lengths of the shortest paths
among all paths that can use all n vertices as intermediate and hence is nothing
other than the distance matrix being sought.
As in Warshall's algorithm, we can compute all the elements of each matrix
D^(k) from its immediate predecessor D^(k−1) in series (8.12). Let d^(k)_ij be the element
in the ith row and the jth column of matrix D^(k). This means that d^(k)_ij is equal to
the length of the shortest path among all paths from the ith vertex v_i to the jth
vertex v_j with their intermediate vertices numbered not higher than k:

v_i, a list of intermediate vertices each numbered not higher than k, v_j.   (8.13)
We can partition all such paths into two disjoint subsets: those that do not use the
kth vertex v_k as intermediate and those that do. Since the paths of the first subset
have their intermediate vertices numbered not higher than k − 1, the shortest of
them is, by definition of our matrices, of length d^(k−1)_ij.

What is the length of the shortest path in the second subset? If the graph does
not contain a cycle of a negative length, we can limit our attention only to the
paths in the second subset that use vertex v_k as their intermediate vertex exactly
once (because visiting v_k more than once can only increase the path's length). All
such paths have the following form:

v_i, vertices numbered ≤ k − 1, v_k, vertices numbered ≤ k − 1, v_j.
In other words, each of the paths is made up of a path from v_i to v_k with each
intermediate vertex numbered not higher than k − 1 and a path from v_k to v_j
with each intermediate vertex numbered not higher than k − 1. The situation is
depicted symbolically in Figure 8.15.

FIGURE 8.15 Underlying idea of Floyd's algorithm.

Since the length of the shortest path from v_i to v_k among the paths that use
intermediate vertices numbered not higher than k − 1 is equal to d^(k−1)_ik and the
length of the shortest path from v_k to v_j among the paths that use intermediate
vertices numbered not higher than k − 1 is equal to d^(k−1)_kj, the length of the shortest
path among the paths that use the kth vertex is equal to d^(k−1)_ik + d^(k−1)_kj. Taking into
account the lengths of the shortest paths in both subsets leads to the following
recurrence:

d^(k)_ij = min{ d^(k−1)_ij , d^(k−1)_ik + d^(k−1)_kj }   for k ≥ 1,   d^(0)_ij = w_ij.   (8.14)
To put it another way, the element in row i and column j of the current distance
matrix D^(k−1) is replaced by the sum of the elements in the same row i and the
column k and in the same column j and the row k if and only if the latter sum is
smaller than its current value. For example, for the digraph in Figure 8.14,
d^(2)_31 = min{ d^(1)_31, d^(1)_32 + d^(1)_21 } = min{∞, 7 + 2} = 9.

The application of Floyd's algorithm to the graph in Figure 8.14 is illustrated
in Figure 8.16.
Here is pseudocode of Floyd's algorithm. It takes advantage of the fact that
the next matrix in sequence (8.12) can be written over its predecessor.

ALGORITHM Floyd(W[1..n, 1..n])
//Implements Floyd's algorithm for the all-pairs shortest-paths problem
//Input: The weight matrix W of a graph with no negative-length cycle
//Output: The distance matrix of the shortest paths' lengths
D ← W //is not necessary if W can be overwritten
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            D[i, j] ← min{D[i, j], D[i, k] + D[k, j]}
return D
Obviously, the time efficiency of Floyd's algorithm is cubic, as is the time
efficiency of Warshall's algorithm. In the next chapter, we examine Dijkstra's
algorithm, another method for finding shortest paths.
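Here is a Python sketch of ours mirroring the pseudocode; applied to the weight matrix of Figure 8.14 it produces the distance matrix shown there:

INF = float('inf')

def floyd(W):
    # Floyd's algorithm; W is the weight matrix with INF marking
    # absent edges. A single copy plays the role of every D^(k) in turn.
    D = [row[:] for row in W]
    n = len(D)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# Weight matrix of the digraph in Figure 8.14 (vertices a, b, c, d)
W = [[0,   INF, 3,   INF],
     [2,   0,   INF, INF],
     [INF, 7,   0,   1],
     [6,   INF, INF, 0]]
print(floyd(W))   # reproduces the distance matrix of Figure 8.14c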
D^(0):
        a   b   c   d
   a    0   ∞   3   ∞
   b    2   0   ∞   ∞
   c    ∞   7   0   1
   d    6   ∞   ∞   0
Lengths of the shortest paths with no intermediate vertices (D^(0) is simply the
weight matrix).

D^(1):
        a   b   c   d
   a    0   ∞   3   ∞
   b    2   0   5   ∞
   c    ∞   7   0   1
   d    6   ∞   9   0
Lengths of the shortest paths with intermediate vertices numbered not higher
than 1, i.e., just a (note two new shortest paths from b to c and from d to c).

D^(2):
        a   b   c   d
   a    0   ∞   3   ∞
   b    2   0   5   ∞
   c    9   7   0   1
   d    6   ∞   9   0
Lengths of the shortest paths with intermediate vertices numbered not higher
than 2, i.e., a and b (note a new shortest path from c to a).

D^(3):
        a   b   c   d
   a    0   10  3   4
   b    2   0   5   6
   c    9   7   0   1
   d    6   16  9   0
Lengths of the shortest paths with intermediate vertices numbered not higher
than 3, i.e., a, b, and c (note four new shortest paths from a to b, from a to d,
from b to d, and from d to b).

D^(4):
        a   b   c   d
   a    0   10  3   4
   b    2   0   5   6
   c    7   7   0   1
   d    6   16  9   0
Lengths of the shortest paths with intermediate vertices numbered not higher
than 4, i.e., a, b, c, and d (note a new shortest path from c to a).

FIGURE 8.16 Application of Floyd's algorithm to the digraph shown. Updated elements
are shown in bold.
Exercises 8.4

1. Apply Warshall's algorithm to find the transitive closure of the digraph defined
   by the following adjacency matrix:

   0 1 0 0
   0 0 1 0
   0 0 0 1
   0 0 0 0

2. a. Prove that the time efficiency of Warshall's algorithm is cubic.
   b. Explain why the time efficiency class of Warshall's algorithm is inferior to
      that of the traversal-based algorithm for sparse graphs represented by their
      adjacency lists.
3. Explain how to implement Warshall's algorithm without using extra memory
   for storing elements of the algorithm's intermediate matrices.
4. Explain how to restructure the innermost loop of the algorithm Warshall to
   make it run faster at least on some inputs.
5. Rewrite pseudocode of Warshall's algorithm assuming that the matrix rows
   are represented by bit strings on which the bitwise or operation can be
   performed.
6. a. Explain how Warshall's algorithm can be used to determine whether a
      given digraph is a dag (directed acyclic graph). Is it a good algorithm for
      this problem?
   b. Is it a good idea to apply Warshall's algorithm to find the transitive closure
      of an undirected graph?
7. Solve the all-pairs shortest-paths problem for the digraph with the following
   weight matrix:

   0  2  ∞  1  8
   6  0  3  2  ∞
   ∞  ∞  0  4  ∞
   ∞  ∞  2  0  3
   3  ∞  ∞  ∞  0
8. Prove that the next matrix in sequence (8.12) of Floyd's algorithm can be
   written over its predecessor.
9. Give an example of a graph or a digraph with negative weights for which
   Floyd's algorithm does not yield the correct result.
10. Enhance Floyd's algorithm so that shortest paths themselves, not just their
    lengths, can be found.
11. Jack Straws In the game of Jack Straws, a number of plastic or wooden
    straws are dumped on the table and players try to remove them one by
    one without disturbing the other straws. Here, we are only concerned with
    whether various pairs of straws are connected by a path of touching straws.
    Given a list of the endpoints for n > 1 straws (as if they were dumped on a large
    piece of graph paper), determine all the pairs of straws that are connected.
    Note that touching is connecting, but also that two straws can be connected
    indirectly via other connected straws. [1994 East-Central Regionals of the
    ACM International Collegiate Programming Contest]
SUMMARY
Dynamic programming is a technique for solving problems with overlapping
subproblems. Typically, these subproblems arise from a recurrence relating a
solution to a given problem with solutions to its smaller subproblems of the
same type. Dynamic programming suggests solving each smaller subproblem
once and recording the results in a table from which a solution to the original
problem can be then obtained.

Applicability of dynamic programming to an optimization problem requires
the problem to satisfy the principle of optimality: an optimal solution to any
of its instances must be made up of optimal solutions to its subinstances.

Among many other problems, the change-making problem with arbitrary coin
denominations can be solved by dynamic programming.

Solving a knapsack problem by a dynamic programming algorithm exemplifies
an application of this technique to difficult problems of combinatorial
optimization.

The memory function technique seeks to combine the strengths of the top-down
and bottom-up approaches to solving problems with overlapping
subproblems. It does this by solving, in the top-down fashion but only
once, just the necessary subproblems of a given problem and recording their
solutions in a table.

Dynamic programming can be used for constructing an optimal binary search
tree for a given set of keys and known probabilities of searching for them.

Warshall's algorithm for finding the transitive closure and Floyd's algorithm
for the all-pairs shortest-paths problem are based on the idea that can be
interpreted as an application of the dynamic programming technique.
9
Greedy Technique

Greed, for lack of a better word, is good! Greed is right! Greed works!
Michael Douglas, US actor in the role of Gordon Gecko, in the film Wall Street, 1987
Let us revisit the change-making problem faced, at least subconsciously, by
millions of cashiers all over the world: give change for a specific amount n
with the least number of coins of the denominations d₁ > d₂ > . . . > d_m used in that
locale. (Here, unlike Section 8.1, we assume that the denominations are ordered in
decreasing order.) For example, the widely used coin denominations in the United
States are d₁ = 25 (quarter), d₂ = 10 (dime), d₃ = 5 (nickel), and d₄ = 1 (penny).
How would you give change with coins of these denominations of, say, 48 cents?
If you came up with the answer 1 quarter, 2 dimes, and 3 pennies, you followed,
consciously or not, a logical strategy of making a sequence of best choices among
the currently available alternatives. Indeed, in the first step, you could have given
one coin of any of the four denominations. Greedy thinking leads to giving one
quarter because it reduces the remaining amount the most, namely, to 23 cents. In
the second step, you had the same coins at your disposal, but you could not give
a quarter, because it would have violated the problem's constraints. So your best
selection in this step was one dime, reducing the remaining amount to 13 cents.
Giving one more dime left you with 3 cents to be given with three pennies.

Is this solution to the instance of the change-making problem optimal? Yes, it
is. In fact, one can prove that the greedy algorithm yields an optimal solution for
every positive integer amount with these coin denominations. At the same time,
it is easy to give an example of coin denominations that do not yield an optimal
solution for some amounts, e.g., d₁ = 25, d₂ = 10, d₃ = 1 and n = 30.
The approach applied in the opening paragraph to the change-making problem
is called greedy. Computer scientists consider it a general design technique
despite the fact that it is applicable to optimization problems only. The greedy
approach suggests constructing a solution through a sequence of steps, each expanding
a partially constructed solution obtained so far, until a complete solution
to the problem is reached. On each step, and this is the central point of this
technique, the choice made must be:

feasible, i.e., it has to satisfy the problem's constraints
locally optimal, i.e., it has to be the best local choice among all feasible choices
available on that step
irrevocable, i.e., once made, it cannot be changed on subsequent steps of the
algorithm
These requirements explain the technique's name: on each step, it suggests
a greedy grab of the best alternative available in the hope that a sequence
of locally optimal choices will yield a (globally) optimal solution to the entire
problem. We refrain from a philosophical discussion of whether greed is good or
bad. (If you have not seen the movie from which the chapter's epigraph is taken,
its hero did not end up well.) From our algorithmic perspective, the question is
whether such a greedy strategy works or not. As we shall see, there are problems
for which a sequence of locally optimal choices does yield an optimal solution for
every instance of the problem in question. However, there are others for which
this is not the case; for such problems, a greedy algorithm can still be of value if
we are interested in or have to be satisfied with an approximate solution.

In the first two sections of the chapter, we discuss two classic algorithms for the
minimum spanning tree problem: Prim's algorithm and Kruskal's algorithm. What
is remarkable about these algorithms is the fact that they solve the same problem
by applying the greedy approach in two different ways, and both of them always
yield an optimal solution. In Section 9.3, we introduce another classic algorithm,
Dijkstra's algorithm, for the shortest-path problem in a weighted graph. Section 9.4
is devoted to Huffman trees and their principal application, Huffman codes, an
important data compression method that can be interpreted as an application of
the greedy technique. Finally, a few examples of approximation algorithms based
on the greedy approach are discussed in Section 12.3.
As a rule, greedy algorithms are both intuitively appealing and simple. Given
an optimization problem, it is usually easy to figure out how to proceed in a greedy
manner, possibly after considering a few small instances of the problem. What is
usually more difficult is to prove that a greedy algorithm yields an optimal solution
(when it does). One of the common ways to do this is illustrated by the proof given
in Section 9.1: using mathematical induction, we show that a partially constructed
solution obtained by the greedy algorithm on each iteration can be extended to
an optimal solution to the problem.

The second way to prove optimality of a greedy algorithm is to show that
on each step it does at least as well as any other algorithm could in advancing
toward the problem's goal. Consider, as an example, the following problem: find
the minimum number of moves needed for a chess knight to go from one corner
of a 100 × 100 board to the diagonally opposite corner. (The knight's moves are
L-shaped jumps: two squares horizontally or vertically followed by one square in
the perpendicular direction.) A greedy solution is clear here: jump as close to the
goal as possible on each move. Thus, if its start and finish squares are (1, 1) and
(100, 100), respectively, a sequence of 66 moves such as

(1, 1) − (3, 2) − (4, 4) − . . . − (97, 97) − (99, 98) − (100, 100)

solves the problem. (The number k of two-move advances can be obtained from
the equation 1 + 3k = 100.) Why is this a minimum-move solution? Because if we
measure the distance to the goal by the Manhattan distance, which is the sum of
the difference between the row numbers and the difference between the column
numbers of two squares in question, the greedy algorithm decreases it by 3 on
each move, the best the knight can do.
The third way is simply to show that the final result obtained by a greedy
algorithm is optimal based on the algorithm's output rather than the way it operates.
As an example, consider the problem of placing the maximum number of
chips on an 8 × 8 board so that no two chips are placed on the same square or on
adjacent squares (vertically, horizontally, or diagonally). To follow the prescription
of the greedy strategy, we should place each new chip so as to leave as many
available squares as possible for next chips. For example, starting with the upper
left corner of the board, we will be able to place 16 chips as shown in Figure 9.1a.
Why is this solution optimal? To see why, partition the board into sixteen 2 × 2
squares as shown in Figure 9.1b. Obviously, it is impossible to place more than
one chip in each of these squares, which implies that the total number of nonadjacent
chips on the board cannot exceed 16.

FIGURE 9.1 (a) Placement of 16 chips on non-adjacent squares. (b) Partition of the board
proving impossibility of placing more than 16 chips.
FIGURE 9.2 Graph and its spanning trees, with T₁ being the minimum spanning tree:
w(T₁) = 6, w(T₂) = 9, w(T₃) = 8.
9.1 Prim's Algorithm

The following problem arises naturally in many practical situations: given n points,
connect them in the cheapest possible way so that there will be a path between every
pair of points. It has direct applications to the design of all kinds of networks,
including communication, computer, transportation, and electrical, by providing
the cheapest way to achieve connectivity. It identifies clusters of points in data sets.
It has been used for classification purposes in archeology, biology, sociology, and
other sciences. It is also helpful for constructing approximate solutions to more
difficult problems such as the traveling salesman problem (see Section 12.3).

We can represent the points given by vertices of a graph, possible connections
by the graph's edges, and the connection costs by the edge weights. Then the
question can be posed as the minimum spanning tree problem, defined formally
as follows.

DEFINITION A spanning tree of an undirected connected graph is its connected
acyclic subgraph (i.e., a tree) that contains all the vertices of the graph. If such a
graph has weights assigned to its edges, a minimum spanning tree is its spanning
tree of the smallest weight, where the weight of a tree is defined as the sum of the
weights on all its edges. The minimum spanning tree problem is the problem of
finding a minimum spanning tree for a given weighted connected graph.

Figure 9.2 presents a simple example illustrating these notions.

If we were to try constructing a minimum spanning tree by exhaustive search,
we would face two serious obstacles. First, the number of spanning trees grows
exponentially with the graph size (at least for dense graphs). Second, generating
all spanning trees for a given graph is not easy; in fact, it is more difficult than
finding a minimum spanning tree for a weighted graph by using one of several
efficient algorithms available for this problem. In this section, we outline Prim's
algorithm, which goes back to at least 1957¹ [Pri57].

1. Robert Prim rediscovered the algorithm published 27 years earlier by the Czech mathematician
Vojtěch Jarník in a Czech journal.
Prim's algorithm constructs a minimum spanning tree through a sequence
of expanding subtrees. The initial subtree in such a sequence consists of a single
vertex selected arbitrarily from the set V of the graph's vertices. On each iteration,
the algorithm expands the current tree in the greedy manner by simply attaching to
it the nearest vertex not in that tree. (By the nearest vertex, we mean a vertex not
in the tree connected to a vertex in the tree by an edge of the smallest weight. Ties
can be broken arbitrarily.) The algorithm stops after all the graph's vertices have
been included in the tree being constructed. Since the algorithm expands a tree
by exactly one vertex on each of its iterations, the total number of such iterations
is n − 1, where n is the number of vertices in the graph. The tree generated by the
algorithm is obtained as the set of edges used for the tree expansions.

Here is pseudocode of this algorithm.

ALGORITHM Prim(G)
//Prim's algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = ⟨V, E⟩
//Output: E_T, the set of edges composing a minimum spanning tree of G
V_T ← {v₀} //the set of tree vertices can be initialized with any vertex
E_T ← ∅
for i ← 1 to |V| − 1 do
    find a minimum-weight edge e* = (v*, u*) among all the edges (v, u)
    such that v is in V_T and u is in V − V_T
    V_T ← V_T ∪ {u*}
    E_T ← E_T ∪ {e*}
return E_T
The nature of Prim's algorithm makes it necessary to provide each vertex not
in the current tree with the information about the shortest edge connecting the
vertex to a tree vertex. We can provide such information by attaching two labels
to a vertex: the name of the nearest tree vertex and the length (the weight) of the
corresponding edge. Vertices that are not adjacent to any of the tree vertices can
be given the ∞ label indicating their infinite distance to the tree vertices and
a null label for the name of the nearest tree vertex. (Alternatively, we can split
the vertices that are not in the tree into two sets, the fringe and the unseen.
The fringe contains only the vertices that are not in the tree but are adjacent to at
least one tree vertex. These are the candidates from which the next tree vertex
is selected. The unseen vertices are all the other vertices of the graph, called
unseen because they are yet to be affected by the algorithm.) With such labels,
finding the next vertex to be added to the current tree T = ⟨V_T, E_T⟩ becomes a
simple task of finding a vertex with the smallest distance label in the set V − V_T.
Ties can be broken arbitrarily.

After we have identified a vertex u* to be added to the tree, we need to
perform two operations:

Move u* from the set V − V_T to the set of tree vertices V_T.
For each remaining vertex u in V − V_T that is connected to u* by a shorter
edge than the u's current distance label, update its labels by u* and the weight
of the edge between u* and u, respectively.²

Figure 9.3 demonstrates the application of Prim's algorithm to a specific graph.
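A Python sketch of ours of Prim's algorithm follows; it implements the fringe as a min-heap of candidate edges with lazy deletion (a common variant of the label-update scheme described above, not the book's exact procedure), and the sample graph at the end is made up for illustration:

import heapq

def prim(graph, start):
    # graph: dict vertex -> list of (neighbor, weight) pairs, with every
    # undirected edge listed in both directions. Returns the tree edges
    # of a minimum spanning tree as (tree_vertex, new_vertex, weight).
    tree_edges = []
    visited = {start}
    fringe = [(w, start, u) for u, w in graph[start]]
    heapq.heapify(fringe)
    while fringe and len(visited) < len(graph):
        w, v, u = heapq.heappop(fringe)
        if u in visited:
            continue                 # stale entry: u joined the tree earlier
        visited.add(u)
        tree_edges.append((v, u, w))
        for x, wx in graph[u]:
            if x not in visited:
                heapq.heappush(fringe, (wx, u, x))
    return tree_edges

# A made-up example graph (not Figure 9.3, which is not reproduced here)
g = {'a': [('b', 1), ('d', 2)],
     'b': [('a', 1), ('c', 3), ('d', 2)],
     'c': [('b', 3), ('d', 5)],
     'd': [('a', 2), ('b', 2), ('c', 5)]}
print(prim(g, 'a'))   # [('a', 'b', 1), ('a', 'd', 2), ('b', 'c', 3)], weight 6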
Does Prim's algorithm always yield a minimum spanning tree? The answer
to this question is yes. Let us prove by induction that each of the subtrees T_i,
i = 0, . . . , n − 1, generated by Prim's algorithm is a part (i.e., a subgraph) of some
minimum spanning tree. (This immediately implies, of course, that the last tree in
the sequence, T_{n−1}, is a minimum spanning tree itself because it contains all n
vertices of the graph.) The basis of the induction is trivial, since T_0 consists of a
single vertex and hence must be a part of any minimum spanning tree. For the
inductive step, let us assume that T_{i−1} is part of some minimum spanning tree T.
We need to prove that T_i, generated from T_{i−1} by Prim's algorithm, is also a part
of a minimum spanning tree. We prove this by contradiction by assuming that no
minimum spanning tree of the graph can contain T_i. Let e_i = (v, u) be the minimum
weight edge from a vertex in T_{i−1} to a vertex not in T_{i−1} used by Prim's algorithm to
expand T_{i−1} to T_i. By our assumption, e_i cannot belong to any minimum spanning
tree, including T. Therefore, if we add e_i to T, a cycle must be formed (Figure 9.4).
In addition to edge e_i = (v, u), this cycle must contain another edge (v′, u′)
connecting a vertex v′ ∈ T_{i−1} to a vertex u′ that is not in T_{i−1}. (It is possible that
v′ coincides with v or u′ coincides with u but not both.) If we now delete the edge
(v′, u′) from this cycle, we will obtain another spanning tree of the entire graph
whose weight is less than or equal to the weight of T since the weight of e_i is less
than or equal to the weight of (v′, u′). Hence, this spanning tree is a minimum
spanning tree, which contradicts the assumption that no minimum spanning tree
contains T_i. This completes the correctness proof of Prim's algorithm.
How efficient is Prim's algorithm? The answer depends on the data structures
chosen for the graph itself and for the priority queue of the set V − V_T whose
vertex priorities are the distances to the nearest tree vertices. (You may want
to take another look at the example in Figure 9.3 to see that the set V − V_T
indeed operates as a priority queue.) In particular, if a graph is represented by
its weight matrix and the priority queue is implemented as an unordered array,
the algorithm's running time will be in Θ(|V|²). Indeed, on each of the |V| − 1
iterations, the array implementing the priority queue is traversed to find and delete
the minimum and then to update, if necessary, the priorities of the remaining
vertices.

We can also implement the priority queue as a min-heap. A min-heap is a
mirror image of the heap structure discussed in Section 6.4. (In fact, it can be
implemented by constructing a heap after negating all the key values given.) Namely,
a min-heap is a complete binary tree in which every element is less than or equal
to its children.
2. If the implementation with the fringe/unseen split is pursued, all the unseen vertices adjacent to u*
must also be moved to the fringe.

Dijkstra's algorithm finds the shortest paths to a graph's vertices in order of
their distance from a given source: first to a vertex nearest to the source, then
to a second nearest, and so on. In general, before its ith iteration commences, the
algorithm has already identified the shortest paths to i − 1 other vertices nearest
to the source. These vertices, the source, and the edges of the shortest paths leading
to them from the source form a subtree T_i of the given graph (Figure 9.10). Since
all the edge weights are nonnegative, the next vertex nearest to the source can be
found among the vertices adjacent to the vertices of T_i. The set of vertices adjacent
to the vertices in T_i can be referred to as fringe vertices; they are the candidates
from which Dijkstra's algorithm selects the next vertex nearest to the source.
(Actually, all the other vertices can be treated as fringe vertices connected to tree
vertices by edges of infinitely large weights.) To identify the ith nearest vertex,
the algorithm computes, for every fringe vertex u, the sum of the distance to the
nearest tree vertex v (given by the weight of the edge (v, u)) and the length d_v of
the shortest path from the source to v (previously determined by the algorithm)
and then selects the vertex with the smallest such sum. The fact that it suffices
to compare the lengths of such special paths is the central insight of Dijkstra's
algorithm.
To facilitate the algorithm's operations, we label each vertex with two labels.
The numeric label d indicates the length of the shortest path from the source to
this vertex found by the algorithm so far; when a vertex is added to the tree, d
indicates the length of the shortest path from the source to that vertex. The other
label indicates the name of the next-to-last vertex on such a path, i.e., the parent of
the vertex in the tree being constructed. (It can be left unspecified for the source
s and vertices that are adjacent to none of the current tree vertices.) With such
labeling, finding the next nearest vertex u* becomes a simple task of finding a
fringe vertex with the smallest d value. Ties can be broken arbitrarily.

After we have identified a vertex u* to be added to the tree, we need to
perform two operations:

Move u* from the fringe to the set of tree vertices.
For each remaining fringe vertex u that is connected to u* by an edge of
weight w(u*, u) such that d_{u*} + w(u*, u) < d_u, update the labels of u by u*
and d_{u*} + w(u*, u), respectively.
Figure 9.11 demonstrates the application of Dijkstra's algorithm to a specific
graph.

The labeling and mechanics of Dijkstra's algorithm are quite similar to those
used by Prim's algorithm (see Section 9.1). Both of them construct an expanding
subtree of vertices by selecting the next vertex from the priority queue of the
remaining vertices. It is important not to mix them up, however. They solve
different problems and therefore operate with priorities computed in a different
manner: Dijkstra's algorithm compares path lengths and therefore must add edge
weights, while Prim's algorithm compares the edge weights as given.

Now we can give pseudocode of Dijkstra's algorithm. It is spelled out, in
more detail than Prim's algorithm was in Section 9.1, in terms of explicit
operations on two sets of labeled vertices: the set V_T of vertices for which a shortest
path has already been found and the priority queue Q of the fringe vertices. (Note
that in the following pseudocode, V_T contains a given source vertex and the fringe
contains the vertices adjacent to it after iteration 0 is completed.)
ALGORITHM Dijkstra(G, s)
//Dijkstra's algorithm for single-source shortest paths
//Input: A weighted connected graph G = ⟨V, E⟩ with nonnegative weights
//       and its vertex s
//Output: The length d_v of a shortest path from s to v
//        and its penultimate vertex p_v for every vertex v in V
Initialize(Q) //initialize priority queue to empty
for every vertex v in V
    d_v ← ∞; p_v ← null
    Insert(Q, v, d_v) //initialize vertex priority in the priority queue
d_s ← 0; Decrease(Q, s, d_s) //update priority of s with d_s
V_T ← ∅
for i ← 0 to |V| − 1 do
    u* ← DeleteMin(Q) //delete the minimum priority element
    V_T ← V_T ∪ {u*}
    for every vertex u in V − V_T that is adjacent to u* do
        if d_{u*} + w(u*, u) < d_u
            d_u ← d_{u*} + w(u*, u); p_u ← u*
            Decrease(Q, u, d_u)
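The following Python sketch of ours replaces the pseudocode's Decrease operation with the common lazy-deletion trick; the edge list below is read off the trace in Figure 9.11 (only the edges the trace actually reveals, so treat the graph as an assumption) and reproduces the path lengths listed there:

import heapq

def dijkstra(graph, s):
    # Single-source shortest paths; graph maps vertex -> [(neighbor, weight)].
    # Returns (dist, parent); parent lets shortest paths be traced back to s.
    dist, parent = {s: 0}, {s: None}
    finalized = set()
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in finalized:
            continue           # stale queue entry left by a later improvement
        finalized.add(u)
        for v, w in graph[u]:
            if v not in dist or d + w < dist[v]:
                dist[v], parent[v] = d + w, u
                heapq.heappush(pq, (dist[v], v))
    return dist, parent

# Edges deducible from the trace in Figure 9.11 (an assumption, not
# necessarily the figure's full edge set): undirected, listed once here.
edges = [('a', 'b', 3), ('a', 'd', 7), ('b', 'c', 4), ('b', 'd', 2), ('d', 'e', 4)]
g = {}
for x, y, w in edges:
    g.setdefault(x, []).append((y, w))
    g.setdefault(y, []).append((x, w))
dist, _ = dijkstra(g, 'a')
print(dist)   # {'a': 0, 'b': 3, 'd': 5, 'c': 7, 'e': 9}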
The time efficiency of Dijkstra's algorithm depends on the data structures used
for implementing the priority queue and for representing an input graph itself.
For the reasons explained in the analysis of Prim's algorithm in Section 9.1, it is
in Θ(|V|²) for graphs represented by their weight matrix and the priority queue
implemented as an unordered array. For graphs represented by their adjacency
lists and the priority queue implemented as a min-heap, it is in O(|E| log |V|). A
still better upper bound can be achieved for both Prim's and Dijkstra's algorithms
if the priority queue is implemented using a sophisticated data structure called
the Fibonacci heap (e.g., [Cor09]). However, its complexity and a considerable
overhead make such an improvement primarily of theoretical value.

Tree vertices    Remaining vertices
a(−, 0)          b(a, 3)  c(−, ∞)  d(a, 7)  e(−, ∞)
b(a, 3)          c(b, 3 + 4)  d(b, 3 + 2)  e(−, ∞)
d(b, 5)          c(b, 7)  e(d, 5 + 4)
c(b, 7)          e(d, 9)
e(d, 9)

The shortest paths (identified by following nonnumeric labels backward from a
destination vertex in the left column to the source) and their lengths (given by
numeric labels of the tree vertices) are as follows:

from a to b: a − b of length 3
from a to d: a − b − d of length 5
from a to c: a − b − c of length 7
from a to e: a − b − d − e of length 9

FIGURE 9.11 Application of Dijkstra's algorithm. The next closest vertex is shown in
bold.
Exercises 9.3

1. Explain what adjustments if any need to be made in Dijkstra's algorithm
   and/or in an underlying graph to solve the following problems.
   a. Solve the single-source shortest-paths problem for directed weighted
      graphs.
   b. Find a shortest path between two given vertices of a weighted graph or
      digraph. (This variation is called the single-pair shortest-path problem.)
   c. Find the shortest paths to a given vertex from each other vertex of a
      weighted graph or digraph. (This variation is called the single-destination
      shortest-paths problem.)
   d. Solve the single-source shortest-paths problem in a graph with nonnegative
      numbers assigned to its vertices (and the length of a path defined as the sum
      of the vertex numbers on the path).
2. Solve the following instances of the single-source shortest-paths problem with
   vertex a as the source:
   a. [weighted graph on vertices a, b, c, d, e]
   b. [weighted graph on vertices a through l]
3. Give a counterexample that shows that Dijkstra's algorithm may not work for
   a weighted connected graph with negative weights.
4. Let T be a tree constructed by Dijkstra's algorithm in the process of solving
   the single-source shortest-paths problem for a weighted connected graph G.
   a. True or false: T is a spanning tree of G?
   b. True or false: T is a minimum spanning tree of G?
5. Write pseudocode for a simpler version of Dijkstra's algorithm that finds
   only the distances (i.e., the lengths of shortest paths but not shortest paths
   themselves) from a given vertex to all other vertices of a graph represented
   by its weight matrix.
6. Prove the correctness of Dijkstra's algorithm for graphs with positive weights.
7. Design a linear-time algorithm for solving the single-source shortest-paths
   problem for dags (directed acyclic graphs) represented by their adjacency lists.
8. Explain how the minimum-sum descent problem (Problem 8 in Exercises 8.1)
   can be solved by Dijkstra's algorithm.
9. Shortest-path modeling Assume you have a model of a weighted connected
   graph made of balls (representing the vertices) connected by strings of appropriate
   lengths (representing the edges).
   a. Describe how you can solve the single-pair shortest-path problem with this
      model.
   b. Describe how you can solve the single-source shortest-paths problem with
      this model.
10. Revisit the exercise from Section 1.3 about determining the best route for a
    subway passenger to take from one designated station to another in a well-developed
    subway system like those in Washington, DC, or London, UK.
    Write a program for this task.
9.4 Huffman Trees and Codes

Suppose we have to encode a text that comprises symbols from some n-symbol
alphabet by assigning to each of the text's symbols some sequence of bits called
the codeword. For example, we can use a fixed-length encoding that assigns to
each symbol a bit string of the same length m (m ≥ log₂ n). This is exactly what
the standard ASCII code does. One way of getting a coding scheme that yields a
shorter bit string on the average is based on the old idea of assigning shorter codewords
to more frequent symbols and longer codewords to less frequent symbols.
This idea was used, in particular, in the telegraph code invented in the mid-19th
century by Samuel Morse. In that code, frequent letters such as e (·) and a (· −)
are assigned short sequences of dots and dashes while infrequent letters such as q
(− − · −) and z (− − · ·) have longer ones.
Variable-length encoding, which assigns codewords of different lengths to
different symbols, introduces a problem that fixed-length encoding does not have.
Namely, how can we tell how many bits of an encoded text represent the first (or,
more generally, the ith) symbol? To avoid this complication, we can limit ourselves
to the so-called prefix-free (or simply prefix) codes. In a prefix code, no codeword
is a prefix of a codeword of another symbol. Hence, with such an encoding, we
can simply scan a bit string until we get the first group of bits that is a codeword
for some symbol, replace these bits by this symbol, and repeat this operation until
the bit string's end is reached.

If we want to create a binary prefix code for some alphabet, it is natural to
associate the alphabet's symbols with leaves of a binary tree in which all the left
edges are labeled by 0 and all the right edges are labeled by 1. The codeword of a
symbol can then be obtained by recording the labels on the simple path from the
root to the symbol's leaf. Since there is no simple path to a leaf that continues to
another leaf, no codeword can be a prefix of another codeword; hence, any such
tree yields a prefix code.
Huffmans algorithm
Step 1 Initialize n one-node trees and label them with the symbols of the
alphabet given. Record the frequency of each symbol in its trees root
to indicate the trees weight. (More generally, the weight of a tree will
be equal to the sum of the frequencies in the trees leaves.)
Step 2 Repeat the following operation until a single tree is obtained. Find
two trees with the smallest weight (ties can be broken arbitrarily, but
see Problem2 in this sections exercises). Make themthe left and right
subtree of a new tree and record the sum of their weights in the root
of the new tree as its weight.
A tree constructed by the above algorithm is called a Huffman tree. It
denesin the manner described abovea Huffman code.
EXAMPLE Consider the five-symbol alphabet {A, B, C, D, _} with the following
occurrence frequencies in a text made up of these symbols:

symbol      A      B      C      D      _
frequency   0.35   0.1    0.2    0.2    0.15

The Huffman tree construction for this input is shown in Figure 9.12.
FIGURE 9.12 Example of constructing a Huffman coding tree.
The resulting codewords are as follows:

symbol      A      B      C      D      _
frequency   0.35   0.1    0.2    0.2    0.15
codeword    11     100    00     01     101
Hence, DAD is encoded as 011101, and 10011011011101 is decoded as BAD_AD.

With the occurrence frequencies given and the codeword lengths obtained,
the average number of bits per symbol in this code is

2 · 0.35 + 3 · 0.1 + 2 · 0.2 + 2 · 0.2 + 3 · 0.15 = 2.25.

Had we used a fixed-length encoding for the same alphabet, we would have to
use at least 3 bits per each symbol. Thus, for this toy example, Huffman's code
achieves the compression ratio, a standard measure of a compression algorithm's
effectiveness, of (3 − 2.25)/3 · 100% = 25%. In other words, Huffman's encoding
of the text will use 25% less memory than its fixed-length encoding. (Extensive
experiments with Huffman codes have shown that the compression ratio for this
scheme typically falls between 20% and 80%, depending on the characteristics of
the text being compressed.)
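A compact Python sketch of ours of Huffman's algorithm is given below; since ties can be broken arbitrarily, the individual codewords it produces may differ from those above, but the average of 2.25 bits per symbol is reproduced:

import heapq
from itertools import count

def huffman_codes(freq):
    # Greedily merge the two lightest trees until one tree remains.
    # freq: dict symbol -> frequency; returns dict symbol -> codeword.
    tiebreak = count()                      # keeps heap entries comparable
    heap = [(f, next(tiebreak), sym) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (left, right)))
    codes = {}
    def walk(node, code):
        if isinstance(node, tuple):         # internal node: (left, right)
            walk(node[0], code + '0')
            walk(node[1], code + '1')
        else:
            codes[node] = code or '0'       # lone-symbol alphabet edge case
    walk(heap[0][2], '')
    return codes

freq = {'A': 0.35, 'B': 0.1, 'C': 0.2, 'D': 0.2, '_': 0.15}
codes = huffman_codes(freq)
print(sum(len(codes[s]) * f for s, f in freq.items()))  # 2.25, up to rounding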
Huffman's encoding is one of the most important file-compression methods.
In addition to its simplicity and versatility, it yields an optimal, i.e., minimal-length,
encoding (provided the frequencies of symbol occurrences are independent and
known in advance). The simplest version of Huffman compression calls, in fact,
for a preliminary scanning of a given text to count the frequencies of symbol
occurrences in it. Then these frequencies are used to construct a Huffman coding
tree and encode the text as described above. This scheme makes it necessary,
however, to include the coding table into the encoded text to make its decoding
possible. This drawback can be overcome by using dynamic Huffman encoding,
in which the coding tree is updated each time a new symbol is read from the source
text. Further, modern alternatives such as Lempel-Ziv algorithms (e.g., [Say05])
assign codewords not to individual symbols but to strings of symbols, allowing
them to achieve better and more robust compressions in many applications.
It is important to note that applications of Huffman's algorithm are not limited to data compression. Suppose we have n positive numbers w_1, w_2, . . . , w_n that have to be assigned to n leaves of a binary tree, one per node. If we define the weighted path length as the sum ∑_{i=1}^{n} l_i w_i, where l_i is the length of the simple path from the root to the ith leaf, how can we construct a binary tree with minimum weighted path length? It is this more general problem that Huffman's algorithm actually solves. (For the coding application, l_i and w_i are the length of the codeword and the frequency of the ith symbol, respectively.)
This problem arises in many situations involving decision making. Consider, for example, the game of guessing a chosen object from n possibilities (say, an integer between 1 and n) by asking questions answerable by yes or no. Different strategies for playing this game can be modeled by decision trees⁵ such as those depicted in Figure 9.13 for n = 4. The length of the simple path from the root to a leaf in such a tree is equal to the number of questions needed to get to the chosen number represented by the leaf. If number i is chosen with probability p_i, the sum

5. Decision trees are discussed in more detail in Section 11.2.
FIGURE 9.13 Two decision trees for guessing an integer between 1 and 4.
∑_{i=1}^{n} l_i p_i, where l_i is the length of the path from the root to the ith leaf, indicates the average number of questions needed to guess the chosen number with a game strategy represented by its decision tree. If each of the numbers is chosen with the same probability of 1/n, the best strategy is to successively eliminate half (or almost half) the candidates as binary search does. This may not be the case for arbitrary p_i's, however. For example, if n = 4 and p_1 = 0.1, p_2 = 0.2, p_3 = 0.3, and p_4 = 0.4, the minimum weighted path tree is the rightmost one in Figure 9.13. Thus, we need Huffman's algorithm to solve this problem in its general case.
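Reusing the hypothetical huffman_codes sketch from Section 9.4 above, one can verify this numerically: building a Huffman tree on the weights p_1, . . . , p_4 yields codeword lengths that serve as the leaf depths l_i, and the resulting weighted path length beats the balanced, binary-search-like strategy.

p = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}
codes = huffman_codes(p)  # codeword length of i = depth l_i of leaf i
print(sum(p[i] * len(codes[i]) for i in p))  # about 1.9, versus 2.0 for the balanced tree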
Note that this is the second time we are encountering the problem of constructing an optimal binary tree. In Section 8.3, we discussed the problem of constructing an optimal binary search tree with positive numbers (the search probabilities) assigned to every node of the tree. In this section, given numbers are assigned just to leaves. The latter problem turns out to be easier: it can be solved by the greedy algorithm, whereas the former is solved by the more complicated dynamic programming algorithm.
Exercises 9.4
1. a. Construct a Huffman code for the following data:
symbol A B C D _
frequency 0.4 0.1 0.2 0.15 0.15
b. Encode ABACABAD using the code of question (a).
c. Decode 100010111001010 using the code of question (a).
2. For data transmission purposes, it is often desirable to have a code with a minimum variance of the codeword lengths (among codes of the same average length). Compute the average and variance of the codeword length in two
Huffman codes that result from a different tie breaking during a Huffman
code construction for the following data:
symbol A B C D E
probability 0.1 0.1 0.2 0.2 0.4
3. Indicate whether each of the following properties is true for every Huffman
code.
a. The codewords of the two least frequent symbols have the same length.
b. The codeword's length of a more frequent symbol is always smaller than or equal to the codeword's length of a less frequent one.
4. What is the maximal length of a codeword possible in a Huffman encoding of
an alphabet of n symbols?
5. a. Write pseudocode of the Huffman-tree construction algorithm.
b. What is the time efficiency class of the algorithm for constructing a Huffman tree as a function of the alphabet size?
6. Show that a Huffman tree can be constructed in linear time if the alphabet
symbols are given in a sorted order of their frequencies.
7. Given a Huffman coding tree, which algorithm would you use to get the codewords for all the symbols? What is its time-efficiency class as a function of the alphabet size?
8. Explain how one can generate a Huffman code without an explicit generation of a Huffman coding tree.
9. a. Write a program that constructs a Huffman code for a given English text and encode it.
b. Write a program for decoding an English text which has been encoded with a Huffman code.
c. Experiment with your encoding program to find a range of typical compression ratios for Huffman's encoding of English texts of, say, 1000 words.
d. Experiment with your encoding program to find out how sensitive the compression ratios are to using standard estimates of frequencies instead of actual frequencies of symbol occurrences in English texts.
10. Card guessing Design a strategy that minimizes the expected number of
questions asked in the following game [Gar94]. You have a deck of cards that
consists of one ace of spades, two deuces of spades, three threes, and on up
to nine nines, making 45 cards in all. Someone draws a card from the shuffled
deck, which you have to identify by asking questions answerable with yes
or no.
SUMMARY

The greedy technique suggests constructing a solution to an optimization problem through a sequence of steps, each expanding a partially constructed solution obtained so far, until a complete solution to the problem is reached. On each step, the choice made must be feasible, locally optimal, and irrevocable.

Prim's algorithm is a greedy algorithm for constructing a minimum spanning tree of a weighted connected graph. It works by attaching to a previously constructed subtree a vertex closest to the vertices already in the tree.

Kruskal's algorithm is another greedy algorithm for the minimum spanning tree problem. It constructs a minimum spanning tree by selecting edges in nondecreasing order of their weights provided that the inclusion does not create a cycle. Checking the latter condition efficiently requires an application of one of the so-called union-find algorithms.

Dijkstra's algorithm solves the single-source shortest-path problem of finding shortest paths from a given vertex (the source) to all the other vertices of a weighted graph or digraph. It works as Prim's algorithm but compares path lengths rather than edge lengths. Dijkstra's algorithm always yields a correct solution for a graph with nonnegative weights.

A Huffman tree is a binary tree that minimizes the weighted path length from the root to the leaves of predefined weights. The most important application of Huffman trees is Huffman codes.

A Huffman code is an optimal prefix-free variable-length encoding scheme that assigns bit strings to symbols based on their frequencies in a given text. This is accomplished by a greedy construction of a binary tree whose leaves represent the alphabet symbols and whose edges are labeled with 0s and 1s.
10
Iterative Improvement
The most successful men in the end are those whose success is the result of
steady accretion.
Alexander Graham Bell (1835–1910)
The greedy strategy, considered in the preceding chapter, constructs a solution
to an optimization problem piece by piece, always adding a locally optimal
piece to a partially constructed solution. In this chapter, we discuss a different
approach to designing algorithms for optimization problems. It starts with some
feasible solution (a solution that satisfies all the constraints of the problem) and
proceeds to improve it by repeated applications of some simple step. This step
typically involves a small, localized change yielding a feasible solution with an
improved value of the objective function. When no such change improves the
value of the objective function, the algorithm returns the last feasible solution as
optimal and stops.
There can be several obstacles to the successful implementation of this idea.
First, we need an initial feasible solution. For some problems, we can always start
with a trivial solution or use an approximate solution obtained by some other (e.g.,
greedy) algorithm. But for others, finding an initial solution may require as much effort as solving the problem after a feasible solution has been identified. Second, it is not always clear what changes should be allowed in a feasible solution so that we can check efficiently whether the current solution is locally optimal and, if not, replace it with a better one. Third, and this is the most fundamental difficulty, there is the issue of local versus global extremum (maximum or minimum). Think about the problem of finding the highest point in a hilly area with no map on a foggy day. A logical thing to do would be to start walking up the hill from the point you are at until it becomes impossible to do so because no direction would lead up. You will have reached a local highest point, but because of limited visibility, there will be no simple way to tell whether the point is the highest (the global maximum you are after) in the entire area.
Fortunately, there are important problems that can be solved by iterative-
improvement algorithms. The most important of them is linear programming.
We have already encountered this topic in Section 6.6. Here, in Section 10.1,
we introduce the simplex method, the classic algorithm for linear programming.
Discovered by the U.S. mathematician George B. Dantzig in 1947, this algorithm
has proved to be one of the most consequential achievements in the history of
algorithmics.
In Section 10.2, we consider the important problem of maximizing the amount of flow that can be sent through a network with links of limited capacities. This problem is a special case of linear programming. However, its special structure makes it possible to solve the problem by algorithms that are more efficient than the simplex method. We outline the classic iterative-improvement algorithm for this problem, discovered by the American mathematicians L. R. Ford, Jr., and D. R. Fulkerson in the 1950s.
The last two sections of the chapter deal with bipartite matching. This is
the problem of nding an optimal pairing of elements taken from two disjoint
sets. Examples include matching workers and jobs, high school graduates and
colleges, and men and women for marriage. Section 10.3 deals with the problem
of maximizing the number of matched pairs; Section 10.4 is concerned with the
matching stability.
We also discuss several iterative-improvement algorithms in Section 12.3,
where we consider approximation algorithms for the traveling salesman and knap-
sack problems. Other examples of iterative-improvement algorithms can be found
in the algorithms textbook by Moret and Shapiro [Mor91], books on continuous
and discrete optimization (e.g., [Nem89]), and the literature on heuristic search
(e.g., [Mic10]).
10.1 The Simplex Method
We have already encountered linear programming (see Section 6.6): the general problem of optimizing a linear function of several variables subject to a set of linear constraints:

maximize (or minimize) c_1x_1 + . . . + c_nx_n
subject to a_{i1}x_1 + . . . + a_{in}x_n ≤ (or ≥ or =) b_i for i = 1, . . . , m
x_1 ≥ 0, . . . , x_n ≥ 0.    (10.1)
We mentioned there that many important practical problems can be modeled as instances of linear programming. Two researchers, L. V. Kantorovich of the former Soviet Union and the Dutch-American T. C. Koopmans, were even awarded the Nobel Prize in 1975 for their contributions to linear programming theory and its applications to economics. Apparently because there is no Nobel Prize in mathematics, the Royal Swedish Academy of Sciences failed to honor the U.S. mathematician G. B. Dantzig, who is universally recognized as the father of linear
programming in its modern form and the inventor of the simplex method, the classic algorithm for solving such problems.¹
Geometric Interpretation of Linear Programming
Before we introduce a general method for solving linear programming problems,
let us consider a small example, which will help us to see the fundamental prop-
erties of such problems.
EXAMPLE 1 Consider the following linear programming problem in two variables:

maximize 3x + 5y
subject to x + y ≤ 4
x + 3y ≤ 6
x ≥ 0, y ≥ 0.    (10.2)
By definition, a feasible solution to this problem is any point (x, y) that satisfies all the constraints of the problem; the problem's feasible region is the set of all its feasible points. It is instructive to sketch the feasible region in the Cartesian plane. Recall that any equation ax + by = c, where coefficients a and b are not both equal to zero, defines a straight line. Such a line divides the plane into two half-planes: for all the points in one of them, ax + by < c, while for all the points in the other, ax + by > c. (It is easy to determine which of the two half-planes is which: take any point (x_0, y_0) not on the line ax + by = c and check which of the two inequalities hold, ax_0 + by_0 > c or ax_0 + by_0 < c.) In particular, the set of points defined by inequality x + y ≤ 4 comprises the points on and below the line x + y = 4, and the set of points defined by inequality x + 3y ≤ 6 comprises the points on and below the line x + 3y = 6. Since the points of the feasible region must satisfy all the constraints of the problem, the feasible region is obtained by the intersection of these two half-planes and the first quadrant of the Cartesian plane defined by the nonnegativity constraints x ≥ 0, y ≥ 0 (see Figure 10.1). Thus, the feasible region for problem (10.2) is the convex polygon with the vertices (0, 0), (4, 0), (0, 2), and (3, 1). (The last point, which is the point of intersection of the lines x + y = 4 and x + 3y = 6, is obtained by solving the system of these two linear equations.) Our task is to find an optimal solution, a point in the feasible region with the largest value of the objective function z = 3x + 5y.
Are there feasible solutions for which the value of the objective function equals, say, 20? The points (x, y) for which the objective function z = 3x + 5y is equal to 20 form the line 3x + 5y = 20. Since this line does not have common points

1. George B. Dantzig (1914–2005) has received many honors, including the National Medal of Science presented by the president of the United States in 1976. The citation states that the National Medal was awarded "for inventing linear programming and discovering methods that led to wide-scale scientific and technical applications to important problems in logistics, scheduling, and network optimization, and to the use of computers in making efficient use of the mathematical theory."
FIGURE 10.1 Feasible region of problem (10.2).
with the feasible region (see Figure 10.2), the answer to the posed question is no. On the other hand, there are infinitely many feasible points for which the objective function is equal to, say, 10: they are the intersection points of the line 3x + 5y = 10 with the feasible region. Note that the lines 3x + 5y = 20 and 3x + 5y = 10 have the same slope, as would any line defined by equation 3x + 5y = z where z is some constant. Such lines are called level lines of the objective function. Thus, our problem can be restated as finding the largest value of the parameter z for which the level line 3x + 5y = z has a common point with the feasible region.

We can find this line either by shifting, say, the line 3x + 5y = 20 south-west (without changing its slope!) toward the feasible region until it hits the region for the first time or by shifting, say, the line 3x + 5y = 10 north-east until it hits the feasible region for the last time. Either way, it will happen at the point (3, 1) with the corresponding z value 3·3 + 5·1 = 14. This means that the optimal solution to the linear programming problem in question is x = 3, y = 1, with the maximal value of the objective function equal to 14.
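For readers who want a computational cross-check of this geometric reasoning, the following sketch solves problem (10.2) with SciPy's general-purpose linprog routine (assuming SciPy is available); since linprog minimizes by convention, the objective coefficients are negated.

from scipy.optimize import linprog

# Problem (10.2): maximize 3x + 5y subject to x + y <= 4, x + 3y <= 6,
# x >= 0, y >= 0. Passing -c turns the maximization into a minimization.
result = linprog(c=[-3, -5],
                 A_ub=[[1, 1], [1, 3]],          # coefficients of the <= constraints
                 b_ub=[4, 6],                    # their right-hand sides
                 bounds=[(0, None), (0, None)])  # nonnegativity of x and y
print(result.x, -result.fun)                     # expected: [3. 1.] 14.0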
Note that if we had to maximize z = 3x + 3y as the objective function in problem (10.2), the level line 3x + 3y = z for the largest value of z would coincide with the boundary line segment that has the same slope as the level lines (draw this line in Figure 10.2). Consequently, all the points of the line segment between vertices (3, 1) and (4, 0), including the vertices themselves, would be optimal solutions, yielding, of course, the same maximal value of the objective function.
FIGURE 10.2 Solving a two-dimensional linear programming problem geometrically.
Does every linear programming problem have an optimal solution that can be found at a vertex of its feasible region? Without appropriate qualifications, the answer to this question is no. To begin with, the feasible region of a linear programming problem can be empty. For example, if the constraints include two contradictory requirements, such as x + y ≤ 1 and x + y ≥ 2, there can be no points in the problem's feasible region. Linear programming problems with the empty feasible region are called infeasible. Obviously, infeasible problems do not have optimal solutions.
Another complication may arise if the problem's feasible region is unbounded, as the following example demonstrates.
EXAMPLE 2 If we reverse the inequalities in problem (10.2) to x + y ≥ 4 and x + 3y ≥ 6, the feasible region of the new problem will become unbounded (see Figure 10.3). If the feasible region of a linear programming problem is unbounded, its objective function may or may not attain a finite optimal value on it. For example, the problem of maximizing z = 3x + 5y subject to the constraints x + y ≥ 4, x + 3y ≥ 6, x ≥ 0, y ≥ 0 has no optimal solution, because there are points in the feasible region making 3x + 5y as large as we wish. Such problems are called unbounded. On the other hand, the problem of minimizing z = 3x + 5y subject to the same constraints has an optimal solution (which?).
FIGURE 10.3 Unbounded feasible region of a linear programming problem with constraints x + y ≥ 4, x + 3y ≥ 6, x ≥ 0, y ≥ 0, and three level lines of the function 3x + 5y.
Fortunately, the most important features of the examples we considered above hold for problems with more than two variables. In particular, a feasible region of a typical linear programming problem is in many ways similar to convex polygons in the two-dimensional Cartesian plane. Specifically, it always has a finite number of vertices, which mathematicians prefer to call extreme points (see Section 3.3). Furthermore, an optimal solution to a linear programming problem can be found at one of the extreme points of its feasible region. We reiterate these properties in the following theorem.
THEOREM (Extreme Point Theorem) Any linear programming problem with a nonempty bounded feasible region has an optimal solution; moreover, an optimal solution can always be found at an extreme point of the problem's feasible region.²

2. Except for some degenerate instances (such as maximizing z = x + y subject to x + y = 1), if a linear programming problem with an unbounded feasible region has an optimal solution, it can also be found at an extreme point of the feasible region.
points in its feasible region. In principle, we can solve such a problem by computing the value of the objective function at each extreme point and selecting the one with the best value. There are two major obstacles to implementing this plan, however. The first lies in the need for a mechanism for generating the extreme points of the feasible region. As we are going to see below, a rather straightforward algebraic procedure for this task has been discovered. The second obstacle lies in the number of extreme points a typical feasible region has. Here, the news is bad: the number of extreme points is known to grow exponentially with the size of the problem. This makes the exhaustive inspection of extreme points unrealistic for most linear programming problems of nontrivial sizes.

Fortunately, it turns out that there exists an algorithm that typically inspects only a small fraction of the extreme points of the feasible region before reaching an optimal one. This famous algorithm is called the simplex method. The idea of this algorithm can be described in geometric terms as follows. Start by identifying an extreme point of the feasible region. Then check whether one can get an improved value of the objective function by going to an adjacent extreme point. If it is not the case, the current point is optimal: stop. If it is the case, proceed to an adjacent extreme point with an improved value of the objective function. After a finite number of steps, the algorithm will either reach an extreme point where an optimal solution occurs or determine that no optimal solution exists.
An Outline of the Simplex Method
Our task now is to translate the geometric description of the simplex method
into the more algorithmically precise language of algebra. To begin with, before
we can apply the simplex method to a linear programming problem, it has to be
represented in a special form called the standard form. The standard form has the
following requirements:
It must be a maximization problem.
All the constraints (except the nonnegativity constraints) must be in the form
of linear equations with nonnegative right-hand sides.
All the variables must be required to be nonnegative.
Thus, the general linear programming problem in standard form with m constraints and n unknowns (n ≥ m) is

maximize c_1x_1 + . . . + c_nx_n
subject to a_{i1}x_1 + . . . + a_{in}x_n = b_i, where b_i ≥ 0 for i = 1, 2, . . . , m
x_1 ≥ 0, . . . , x_n ≥ 0.    (10.3)
It can also be written in compact matrix notation:

maximize cx
subject to Ax = b
x ≥ 0,

where c = [c_1 c_2 . . . c_n] is a row vector of the objective coefficients, x = [x_1 x_2 . . . x_n]^T and b = [b_1 b_2 . . . b_m]^T are column vectors, and A = [a_{ij}] is the m×n matrix of the constraint coefficients.
Any linear programming problem can be transformed into an equivalent problem in standard form. If an objective function needs to be minimized, it can be replaced by the equivalent problem of maximizing the same objective function with all its coefficients c_j replaced by −c_j, j = 1, 2, . . . , n (see Section 6.6 for a more general discussion of such transformations). If a constraint is given as an inequality, it can be replaced by an equivalent equation by adding a slack variable representing the difference between the two sides of the original inequality. For example, the two inequalities of problem (10.2) can be transformed, respectively, into the following equations:

x + y + u = 4 where u ≥ 0    and    x + 3y + v = 6 where v ≥ 0.
Finally, in most linear programming problems, the variables are required to be nonnegative to begin with because they represent some physical quantities. If this is not the case in an initial statement of a problem, an unconstrained variable x_j can be replaced by the difference between two new nonnegative variables:

x_j = x'_j − x''_j,    x'_j ≥ 0, x''_j ≥ 0.
Thus, problem (10.2) in standard form is the following linear programming problem in four variables:

maximize 3x + 5y + 0u + 0v
subject to x + y + u = 4
x + 3y + v = 6
x, y, u, v ≥ 0.    (10.4)

It is easy to see that if we find an optimal solution (x*, y*, u*, v*) to problem (10.4), we can obtain an optimal solution to problem (10.2) by simply ignoring its last two coordinates.
The principal advantage of the standard form lies in the simple mechanism it provides for identifying extreme points of the feasible region. To do this for problem (10.4), for example, we need to set two of the four variables in the constraint equations to zero to get a system of two linear equations in two unknowns and solve this system. For the general case of a problem with m equations in n unknowns (n ≥ m), n − m variables need to be set to zero to get a system of m equations in m unknowns. If the system obtained has a unique solution, as any nondegenerate system of linear equations with the number of equations equal to the number of unknowns does, we have a basic solution; its coordinates set to zero before solving the system are called nonbasic, and its coordinates obtained by solving the system are called basic. (This terminology comes from linear algebra.
Specifically, we can rewrite the system of constraint equations of (10.4) as

x [1, 1]^T + y [1, 3]^T + u [1, 0]^T + v [0, 1]^T = [4, 6]^T.

A basis in the two-dimensional vector space is composed of any two vectors that are not proportional to each other; once a basis is chosen, any vector can be uniquely expressed as a sum of multiples of the basis vectors. Basic and nonbasic variables indicate which of the given vectors are, respectively, included and excluded in a particular basis choice.)
If all the coordinates of a basic solution are nonnegative, the basic solution is called a basic feasible solution. For example, if we set to zero variables x and y and solve the resulting system for u and v, we obtain the basic feasible solution (0, 0, 4, 6); if we set to zero variables x and u and solve the resulting system for y and v, we obtain the basic solution (0, 4, 0, −6), which is not feasible. The importance of basic feasible solutions lies in the one-to-one correspondence between them and the extreme points of the feasible region. For example, (0, 0, 4, 6) is an extreme point of the feasible region of problem (10.4) (with the point (0, 0) in Figure 10.1 being its projection on the x, y plane). Incidentally, (0, 0, 4, 6) is a natural starting point for the simplex method's application to this problem.
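Since problem (10.4) is tiny, the correspondence between basic solutions and choices of basic variables can be checked directly. The sketch below (names are illustrative, and NumPy is assumed) tries every pair of basic variables, solves the resulting 2×2 system, and reports which basic solutions are feasible.

from itertools import combinations
import numpy as np

# Constraint equations of (10.4) in the variable order x, y, u, v.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
names = "xyuv"
for basic in combinations(range(4), 2):
    B = A[:, basic]
    if abs(np.linalg.det(B)) < 1e-9:
        continue  # degenerate basis choice: skip
    sol = np.zeros(4)
    sol[list(basic)] = np.linalg.solve(B, b)  # nonbasic variables stay zero
    tag = "feasible" if (sol >= -1e-9).all() else "infeasible"
    print("basic:", [names[i] for i in basic], "solution:", sol, tag)

Among the solutions printed are (0, 0, 4, 6), (0, 2, 2, 0), (4, 0, 0, 2), and (3, 1, 0, 0), matching the four vertices of the feasible region in Figure 10.1, along with infeasible basic solutions such as (0, 4, 0, −6).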
As mentioned above, the simplex method progresses through a series of adjacent extreme points (basic feasible solutions) with increasing values of the objective function. Each such point can be represented by a simplex tableau, a table storing the information about the basic feasible solution corresponding to the extreme point. For example, the simplex tableau for (0, 0, 4, 6) of problem (10.4) is presented below:

       x    y    u    v
  u    1    1    1    0 |  4
  v    1    3    0    1 |  6
      −3   −5    0    0 |  0        (10.5)
In general, a simplex tableau for a linear programming problem in standard form with n unknowns and m linear equality constraints (n ≥ m) has m + 1 rows and n + 1 columns. Each of the first m rows of the table contains the coefficients of a corresponding constraint equation, with the last column's entry containing the equation's right-hand side. The columns, except the last one, are labeled by the names of the variables. The rows are labeled by the basic variables of the basic feasible solution the tableau represents; the values of the basic variables of this
solution are in the last column. Also note that the columns labeled by the basic variables form the m×m identity matrix.
The last row of a simplex tableau is called the objective row. It is initialized by the coefficients of the objective function with their signs reversed (in the first n columns) and the value of the objective function at the initial point (in the last column). On subsequent iterations, the objective row is transformed the same way as all the other rows. The objective row is used by the simplex method to check whether the current tableau represents an optimal solution: it does if all the entries in the objective row (except, possibly, the one in the last column) are nonnegative. If this is not the case, any of the negative entries indicates a nonbasic variable that can become basic in the next tableau.
For example, according to this criterion, the basic feasible solution (0, 0, 4, 6) represented by tableau (10.5) is not optimal. The negative value in the x-column signals the fact that we can increase the value of the objective function z = 3x + 5y + 0u + 0v by increasing the value of the x-coordinate in the current basic feasible solution (0, 0, 4, 6). Indeed, since the coefficient for x in the objective function is positive, the larger the x value, the larger the value of this function. Of course, we will need to compensate an increase in x by adjusting the values of the basic variables u and v so that the new point is still feasible. For this to be the case, both conditions

x + u = 4 where u ≥ 0
x + v = 6 where v ≥ 0

must be satisfied, which means that

x ≤ min{4, 6} = 4.

Note that if we increase the value of x from 0 to 4, the largest amount possible, we will find ourselves at the point (4, 0, 0, 2), an extreme point of the feasible region adjacent to (0, 0, 4, 6), with z = 12.
Similarly, the negative value in the y-column of the objective row signals the fact that we can also increase the value of the objective function by increasing the value of the y-coordinate in the initial basic feasible solution (0, 0, 4, 6). This requires

y + u = 4 where u ≥ 0
3y + v = 6 where v ≥ 0,

which means that

y ≤ min{4/1, 6/3} = 2.

If we increase the value of y from 0 to 2, the largest amount possible, we will find ourselves at the point (0, 2, 2, 0), another extreme point adjacent to (0, 0, 4, 6), with z = 10.
If there are several negative entries in the objective row, a commonly used rule is to select the most negative one, i.e., the negative number with the largest absolute value. This rule is motivated by the observation that such a choice yields the largest increase in the objective function's value per unit of change in a variable's value. (In our example, an increase in the x-value from 0 to 1 at (0, 0, 4, 6) changes the value of z = 3x + 5y + 0u + 0v from 0 to 3, while an increase in the y-value from 0 to 1 at (0, 0, 4, 6) changes z from 0 to 5.) Note, however, that the feasibility constraints impose different limits on how much each of the variables may increase. In our example, in particular, the choice of the y-variable over the x-variable leads to a smaller increase in the value of the objective function. Still, we will employ this commonly used rule and select variable y as we continue with our example. A new basic variable is called the entering variable, while its column is referred to as the pivot column; we mark the pivot column by ↑.
Now we will explain how to choose a departing variable, i.e., a basic variable to become nonbasic in the next tableau. (The total number of basic variables in any basic solution must be equal to m, the number of the equality constraints.) As we saw above, to get to an adjacent extreme point with a larger value of the objective function, we need to increase the entering variable by the largest amount possible to make one of the old basic variables zero while preserving the nonnegativity of all the others. We can translate this observation into the following rule for choosing a departing variable in a simplex tableau: for each positive entry in the pivot column, compute the θ-ratio by dividing the row's last entry by the entry in the pivot column. For the example of tableau (10.5), these θ-ratios are

θ_u = 4/1 = 4,    θ_v = 6/3 = 2.

The row with the smallest θ-ratio determines the departing variable, i.e., the variable to become nonbasic. Ties may be broken arbitrarily. For our example, it is variable v. We mark the row of the departing variable, called the pivot row, by ←. Note that if there are no positive entries in the pivot column, no θ-ratio can be computed, which indicates that the problem is unbounded and the algorithm stops.
Finally, the following steps need to be taken to transform a current tableau into the next one. (This transformation, called pivoting, is similar to the principal step of the Gauss-Jordan elimination algorithm for solving systems of linear equations; see Problem 8 in Exercises 6.2.) First, divide all the entries of the pivot row by the pivot, its entry in the pivot column, to obtain row_new. For tableau (10.5), we obtain

row_new:    1/3   1   0   1/3   2.

Then, replace each of the other rows, including the objective row, by the difference

row − c · row_new,

where c is the row's entry in the pivot column. For tableau (10.5), this yields

row 1 − 1 · row_new:       2/3   0   1   −1/3    2,
row 3 − (−5) · row_new:   −4/3   0   0    5/3   10.
Thus, the simplex method transforms tableau (10.5) into the following tableau:

        x    y    u     v
  u    2/3   0    1   −1/3 |  2
  y    1/3   1    0    1/3 |  2
      −4/3   0    0    5/3 | 10        (10.6)

Tableau (10.6) represents the basic feasible solution (0, 2, 2, 0) with an increased value of the objective function, which is equal to 10. It is not optimal, however (why?).
The next iteration (do it yourself as a good exercise!) yields tableau (10.7):

        x    y     u     v
  x     1    0    3/2  −1/2 |  3
  y     0    1   −1/2   1/2 |  1
        0    0     2     1  | 14        (10.7)

This tableau represents the basic feasible solution (3, 1, 0, 0). It is optimal because all the entries in the objective row of tableau (10.7) are nonnegative. The maximal value of the objective function is equal to 14, the last entry in the objective row.
Let us summarize the steps of the simplex method.

Summary of the simplex method

Step 0 Initialization Present a given linear programming problem in standard form and set up an initial tableau with nonnegative entries in the rightmost column and m other columns composing the m×m identity matrix. (Entries in the objective row are to be disregarded in verifying these requirements.) These m columns define the basic variables of the initial basic feasible solution, used as the labels of the tableau's rows.

Step 1 Optimality test If all the entries in the objective row (except, possibly, the one in the rightmost column, which represents the value of the objective function) are nonnegative, stop: the tableau represents an optimal solution whose basic variables' values are in the rightmost column and the remaining, nonbasic variables' values are zeros.

Step 2 Finding the entering variable Select a negative entry from among the first n elements of the objective row. (A commonly used rule is to select the negative entry with the largest absolute value, with ties broken arbitrarily.) Mark its column to indicate the entering variable and the pivot column.

Step 3 Finding the departing variable For each positive entry in the pivot column, calculate the θ-ratio by dividing that row's entry in the rightmost column by its entry in the pivot column. (If all the entries in the pivot column are negative or zero, the problem is unbounded; stop.) Find the row with the smallest θ-ratio (ties may be broken arbitrarily), and mark this row to indicate the departing variable and the pivot row.

Step 4 Forming the next tableau Divide all the entries in the pivot row by its entry in the pivot column. Subtract from each of the other rows, including the objective row, the new pivot row multiplied by the entry in the pivot column of the row in question. (This will make all the entries in the pivot column 0s except for 1 in the pivot row.) Replace the label of the pivot row by the variable's name of the pivot column and go back to Step 1.
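The summary above translates almost line for line into code. The following is a minimal Python sketch of the tableau-based method under the assumption that the problem arrives as inequality constraints Ax ≤ b with b ≥ 0, so that slack variables supply the initial basic feasible solution; exact rational arithmetic with Fraction keeps the tableaus free of round-off.

from fractions import Fraction

def simplex(A, b, c):
    # Maximize c.x subject to A x <= b, x >= 0, with b >= 0.
    m, n = len(A), len(A[0])
    # Initial tableau: constraint rows with slack columns appended, then
    # the objective row (coefficients with signs reversed, value 0).
    T = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(int(i == k)) for k in range(m)]
         + [Fraction(b[i])] for i in range(m)]
    T.append([-Fraction(cj) for cj in c] + [Fraction(0)] * (m + 1))
    basis = list(range(n, n + m))  # the slacks are the initial basic variables
    while True:
        # Steps 1-2: optimality test and entering variable (most negative entry).
        pivot_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][pivot_col] >= 0:
            break  # all objective-row entries nonnegative: optimal
        # Step 3: departing variable via the smallest theta-ratio.
        ratios = [(T[i][-1] / T[i][pivot_col], i)
                  for i in range(m) if T[i][pivot_col] > 0]
        if not ratios:
            raise ValueError("the problem is unbounded")
        _, pivot_row = min(ratios)
        # Step 4: pivot to form the next tableau.
        pivot = T[pivot_row][pivot_col]
        T[pivot_row] = [e / pivot for e in T[pivot_row]]
        for i in range(m + 1):
            if i != pivot_row and T[i][pivot_col] != 0:
                f = T[i][pivot_col]
                T[i] = [e - f * p for e, p in zip(T[i], T[pivot_row])]
        basis[pivot_row] = pivot_col
    x = [Fraction(0)] * n
    for i, var in enumerate(basis):
        if var < n:
            x[var] = T[i][-1]
    return x, T[-1][-1]

# Problem (10.2): this call reproduces tableaus (10.5)-(10.7) internally.
print(simplex([[1, 1], [1, 3]], [4, 6], [3, 5]))  # x = [3, 1], value = 14 (as Fractions)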
Further Notes on the Simplex Method

Formal proofs of validity of the simplex method steps can be found in books devoted to a detailed discussion of linear programming (e.g., [Dan63]). A few important remarks about the method still need to be made, however. Generally speaking, an iteration of the simplex method leads to an extreme point of the problem's feasible region with a greater value of the objective function. In degenerate cases, which arise when one or more basic variables are equal to zero, the simplex method can only guarantee that the value of the objective function at the new extreme point is greater than or equal to its value at the previous point. In turn, this opens the door to the possibility not only that the objective function's values stall for several iterations in a row but that the algorithm might cycle back to a previously considered point and hence never terminate. The latter phenomenon is called cycling. Although it rarely if ever happens in practice, specific examples of problems where cycling does occur have been constructed. A simple modification of Steps 2 and 3 of the simplex method, called Bland's rule, eliminates even the theoretical possibility of cycling. Assuming that the variables are denoted by a subscripted letter (e.g., x_1, x_2, . . . , x_n), this rule can be stated as follows:

Step 2 modified Among the columns with a negative entry in the objective row, select the column with the smallest subscript.

Step 3 modified Resolve a tie among the smallest θ-ratios by selecting the row labeled by the basic variable with the smallest subscript.
Another caveat deals with the assumptions made in Step 0. They are automatically satisfied if a problem is given in the form where all the constraints imposed on nonnegative variables are inequalities a_{i1}x_1 + . . . + a_{in}x_n ≤ b_i with b_i ≥ 0 for i = 1, 2, . . . , m. Indeed, by adding a nonnegative slack variable x_{n+i} into the ith constraint, we obtain the equality a_{i1}x_1 + . . . + a_{in}x_n + x_{n+i} = b_i, and all the requirements imposed on an initial tableau of the simplex method are satisfied for the obvious basic feasible solution x_1 = . . . = x_n = 0, x_{n+1} = b_1, . . . , x_{n+m} = b_m. But if a problem is not given in such a form, finding an initial basic feasible solution may present a nontrivial obstacle. Moreover, for problems with an empty feasible region, no initial basic feasible solution exists, and we need an algorithmic way to identify such problems. One of the ways to address these issues is to use an extension to the classic simplex method called the two-phase simplex method (see, e.g., [Kol95]). In a nutshell, this method adds a set of artificial variables to the equality constraints of a given problem so that the new problem has an obvious basic feasible solution. It then solves the linear programming problem of minimizing the sum of the artificial variables by the simplex method. The optimal solution to this problem either yields an initial tableau for the original problem or indicates that the feasible region of the original problem is empty.
How efficient is the simplex method? Since the algorithm progresses through a sequence of adjacent points of a feasible region, one should probably expect bad news because the number of extreme points is known to grow exponentially with the problem size. Indeed, the worst-case efficiency of the simplex method has been shown to be exponential as well. Fortunately, more than half a century of practical experience with the algorithm has shown that the number of iterations in a typical application ranges between m and 3m, with the number of operations per iteration proportional to mn, where m and n are the numbers of equality constraints and variables, respectively.
Since its discovery in 1947, the simplex method has been a subject of intensive
study by many researchers. Some of them have worked on improvements to the
original algorithm and details of its efficient implementation. As a result of these
efforts, programs implementing the simplex method have been polished to the
point that very large problems with hundreds of thousands of constraints and
variables can be solved in a routine manner. In fact, such programs have evolved
into sophisticated software packages. These packages enable the user to enter
a problem's constraints and obtain a solution in a user-friendly form. They also
provide tools for investigating important properties of the solution, such as its
sensitivity to changes in the input data. Such investigations are very important for
many applications, including those in economics. At the other end of the spectrum,
linear programming problems of a moderate size can nowadays be solved on a
desktop using a standard spreadsheet facility or by taking advantage of specialized
software available on the Internet.
Researchers have also tried to find algorithms for solving linear programming problems with polynomial-time efficiency in the worst case. An important milestone in the history of such algorithms was the proof by L. G. Khachian [Kha79] showing that the ellipsoid method can solve any linear programming problem in polynomial time. Although the ellipsoid method was much slower than the simplex method in practice, its better worst-case efficiency encouraged a search for alternatives to the simplex method. In 1984, Narendra Karmarkar published an algorithm that not only had a polynomial worst-case efficiency but also was competitive with the simplex method in empirical tests as well. Although we are not going to discuss Karmarkar's algorithm [Kar84] here, it is worth pointing out that it is also based on the iterative-improvement idea. However, Karmarkar's algorithm generates a sequence of feasible solutions that lie within the feasible region rather than going through a sequence of adjacent extreme points as the simplex method does. Such algorithms are called interior-point methods (see, e.g., [Arb93]).
Exercises 10.1

1. Consider the following version of the post office location problem (Problem 3 in Exercises 3.3): Given n integers x_1, x_2, . . . , x_n representing coordinates of n villages located along a straight road, find a location for a post office that minimizes the average distance between the villages. The post office may be, but is not required to be, located at one of the villages. Devise an iterative-improvement algorithm for this problem. Is this an efficient way to solve this problem?
2. Solve the following linear programming problems geometrically.

a.  maximize 3x + y
    subject to −x + y ≤ 1
               2x + y ≤ 4
               x ≥ 0, y ≥ 0

b.  maximize x + 2y
    subject to 4x ≥ y
               y ≤ 3 + x
               x ≥ 0, y ≥ 0
3. Consider the linear programming problem

minimize c_1x + c_2y
subject to x + y ≥ 4
           x + 3y ≥ 6
           x ≥ 0, y ≥ 0

where c_1 and c_2 are some real numbers not both equal to zero.

a. Give an example of the coefficient values c_1 and c_2 for which the problem has a unique optimal solution.

b. Give an example of the coefficient values c_1 and c_2 for which the problem has infinitely many optimal solutions.

c. Give an example of the coefficient values c_1 and c_2 for which the problem does not have an optimal solution.
4. Would the solution to problem (10.2) be different if its inequality constraints were strict, i.e., x + y < 4 and x + 3y < 6, respectively?
5. Trace the simplex method on
a. the problem of Exercise 2a.
b. the problem of Exercise 2b.
6. Trace the simplex method on the problem of Example 1 in Section 6.6
a. by hand.
b. by using one of the implementations available on the Internet.
7. Determine how many iterations the simplex method needs to solve the problem

maximize ∑_{j=1}^{n} x_j
subject to 0 ≤ x_j ≤ b_j, where b_j > 0 for j = 1, 2, . . . , n.
8. Can we apply the simplex method to solve the knapsack problem (see Exam-
ple 2 in Section 6.6)? If you answer yes, indicate whether it is a good algorithm
for the problem in question; if you answer no, explain why not.
9. Prove that no linear programming problem can have exactly k ≥ 1 optimal solutions unless k = 1.
10. If a linear programming problem

maximize ∑_{j=1}^{n} c_j x_j
subject to ∑_{j=1}^{n} a_{ij} x_j ≤ b_i for i = 1, 2, . . . , m
x_1, x_2, . . . , x_n ≥ 0

is considered as primal, then its dual is defined as the linear programming problem

minimize ∑_{i=1}^{m} b_i y_i
subject to ∑_{i=1}^{m} a_{ij} y_i ≥ c_j for j = 1, 2, . . . , n
y_1, y_2, . . . , y_m ≥ 0.
a. Express the primal and dual problems in matrix notations.

b. Find the dual of the linear programming problem

maximize x_1 + 4x_2 − x_3
subject to x_1 + x_2 + x_3 ≤ 6
           x_1 − x_2 − 2x_3 ≤ 2
           x_1, x_2, x_3 ≥ 0.
c. Solve the primal and dual problems and compare the optimal values of
their objective functions.
10.2 The Maximum-Flow Problem
In this section, we consider the important problem of maximizing the flow of a material through a transportation network (pipeline system, communication system, electrical distribution system, and so on). We will assume that the transportation network in question can be represented by a connected weighted digraph with n vertices numbered from 1 to n and a set of edges E, with the following properties:

It contains exactly one vertex with no entering edges; this vertex is called the source and assumed to be numbered 1.
It contains exactly one vertex with no leaving edges; this vertex is called the sink and assumed to be numbered n.
The weight u_{ij} of each directed edge (i, j) is a positive integer, called the edge capacity. (This number represents the upper bound on the amount of the material that can be sent from i to j through a link represented by this edge.)

A digraph satisfying these properties is called a flow network or simply a network.³ A small instance of a network is given in Figure 10.4.
It is assumed that the source and the sink are the only source and destination of the material, respectively; all the other vertices can serve only as points where a flow can be redirected without consuming or adding any amount of the material. In other words, the total amount of the material entering an intermediate vertex must be equal to the total amount of the material leaving the vertex. This condition is called the flow-conservation requirement. If we denote the amount sent through edge (i, j) by x_{ij}, then for any intermediate vertex i, the flow-conservation requirement can be expressed by the following equality constraint:

∑_{j: (j,i)∈E} x_{ji} = ∑_{j: (i,j)∈E} x_{ij}    for i = 2, 3, . . . , n − 1,    (10.8)

3. In a slightly more general model, one can consider a network with several sources and sinks and allow capacities u_{ij} to be infinitely large.
FIGURE 10.4 Example of a network graph. The vertex numbers are vertex names; the edge numbers are edge capacities.
where the sums in the left- and right-hand sides express the total inflow and outflow entering and leaving vertex i, respectively.

Since no amount of the material can change by going through intermediate vertices of the network, it stands to reason that the total amount of the material leaving the source must end up at the sink. (This observation can also be derived formally from equalities (10.8), a task you will be asked to do in the exercises.) Thus, we have the following equality:

∑_{j: (1,j)∈E} x_{1j} = ∑_{j: (j,n)∈E} x_{jn}.    (10.9)
This quantity, the total outflow from the source or, equivalently, the total inflow into the sink, is called the value of the flow. We denote it by v. It is this quantity that we will want to maximize over all possible flows in a network.

Thus, a (feasible) flow is an assignment of real numbers x_{ij} to edges (i, j) of a given network that satisfy flow-conservation constraints (10.8) and the capacity constraints

0 ≤ x_{ij} ≤ u_{ij} for every edge (i, j) ∈ E.    (10.10)
The maximum-flow problem can be stated formally as the following optimization problem:

maximize v = ∑_{j: (1,j)∈E} x_{1j}
subject to ∑_{j: (j,i)∈E} x_{ji} − ∑_{j: (i,j)∈E} x_{ij} = 0 for i = 2, 3, . . . , n − 1
0 ≤ x_{ij} ≤ u_{ij} for every edge (i, j) ∈ E.    (10.11)
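Before turning to specialized algorithms, it is instructive to see (10.11) solved directly as a linear program. The sketch below does this for the network of Figure 10.4 with SciPy's linprog (an assumption, as is the fixed edge ordering); the capacities are those shown in Figures 10.4 and 10.5.

from scipy.optimize import linprog

# One variable x_ij per edge, in this fixed order.
edges = [(1, 2), (1, 4), (2, 3), (2, 5), (4, 3), (3, 6), (5, 6)]
caps = [2, 3, 5, 3, 1, 2, 4]
c = [-1 if tail == 1 else 0 for tail, _ in edges]  # maximize the source outflow
A_eq, b_eq = [], []
for v in (2, 3, 4, 5):  # flow-conservation constraints (10.8) at intermediate vertices
    A_eq.append([(1 if j == v else 0) - (1 if i == v else 0) for i, j in edges])
    b_eq.append(0)
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, u) for u in caps])
print(-res.fun)  # expected maximum flow value: 3.0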
We can solve linear programming problem (10.11) by the simplex method or by another algorithm for general linear programming problems (see Section 10.1). However, the special structure of problem (10.11) can be exploited to design faster algorithms. In particular, it is quite natural to employ the iterative-improvement idea as follows. We can always start with the zero flow (i.e., set x_{ij} = 0 for every edge (i, j) in the network). Then, on each iteration, we can try to find a path from source to sink along which some additional flow can be sent. Such a path is called flow augmenting. If a flow-augmenting path is found, we adjust the flow along the edges of this path to get a flow of an increased value and try to find an augmenting path for the new flow. If no flow-augmenting path can be found, we conclude that the current flow is optimal. This general template for solving the maximum-flow problem is called the augmenting-path method, also known as the Ford-Fulkerson method after L. R. Ford, Jr., and D. R. Fulkerson, who discovered it (see [For57]).
An actual implementation of the augmenting-path idea is, however, not quite straightforward. To see this, let us consider the network in Figure 10.4. We start with the zero flow shown in Figure 10.5a. (In that figure, the zero amounts sent through each edge are separated from the edge capacities by the slashes; we will use this notation in the other examples as well.) It is natural to search for a flow-augmenting path from source to sink by following directed edges (i, j) for which the current flow x_{ij} is less than the edge capacity u_{ij}. Among several possibilities, let us assume that we identify the augmenting path 1→2→3→6 first. We can increase the flow along this path by a maximum of 2 units, which is the smallest unused capacity of its edges. The new flow is shown in Figure 10.5b. This is as far as our simpleminded idea about flow-augmenting paths will be able to take us.

Unfortunately, the flow shown in Figure 10.5b is not optimal: its value can still be increased along the path 1→4→3←2→5→6 by increasing the flow by 1 on edges (1, 4), (4, 3), (2, 5), and (5, 6) and decreasing it by 1 on edge (2, 3). The flow obtained as the result of this augmentation is shown in Figure 10.5c. It is indeed maximal. (Can you tell why?)
Thus, to find a flow-augmenting path for a flow x, we need to consider paths from source to sink in the underlying undirected graph in which any two consecutive vertices i, j are either

i. connected by a directed edge from i to j with some positive unused capacity r_{ij} = u_{ij} − x_{ij} (so that we can increase the flow through that edge by up to r_{ij} units), or

ii. connected by a directed edge from j to i with some positive flow x_{ji} (so that we can decrease the flow through that edge by up to x_{ji} units).

Edges of the first kind are called forward edges because their tail is listed before their head in the vertex list 1, . . . , i, j, . . . , n defining the path; edges of the second kind are called backward edges because their tail is listed after their head in the path list 1, . . . , i, j, . . . , n. To illustrate, for the path 1→4→3←2→5→6 of the last example, (1, 4), (4, 3), (2, 5), and (5, 6) are the forward edges, and (3, 2) is the backward edge.
For a given flow-augmenting path, let r be the minimum of all the unused capacities r_{ij} of its forward edges and all the flows x_{ji} of its backward edges. It is easy to see that if we increase the current flow by r on each forward edge and decrease it by this amount on each backward edge, we will obtain a feasible
FIGURE 10.5 Illustration of the augmenting-path method. Flow-augmenting paths are shown in bold. The flow amounts and edge capacities are indicated by the numbers before and after the slash, respectively.
flow whose value is r units greater than the value of its predecessor. Indeed, let i be an intermediate vertex on a flow-augmenting path. There are four possible combinations of forward and backward edges incident to vertex i:

+r → i → +r,    +r → i ← −r,    −r ← i → +r,    −r ← i ← −r.
For each of them, the flow-conservation requirement for vertex i will still hold after the flow adjustments indicated next to the edge arrows. Further, since r is the minimum among all the positive unused capacities on the forward edges and all the positive flows on the backward edges of the flow-augmenting path, the new flow will satisfy the capacity constraints as well. Finally, adding r to the flow on the first edge of the augmenting path will increase the value of the flow by r.

Under the assumption that all the edge capacities are integers, r will be a positive integer too. Hence, the flow value increases at least by 1 on each iteration of the augmenting-path method. Since the value of a maximum flow is bounded above (e.g., by the sum of the capacities of the source edges), the augmenting-path method has to stop after a finite number of iterations.⁴ Surprisingly, the final flow always turns out to be maximal, irrespective of a sequence of augmenting paths. This remarkable result stems from the proof of the Max-Flow Min-Cut Theorem (see, e.g., [For62]), which we replicate later in this section.
The augmenting-path method, as described above in its general form, does not indicate a specific way for generating flow-augmenting paths. A bad sequence of such paths may, however, have a dramatic impact on the method's efficiency. Consider, for example, the network in Figure 10.6a, in which U stands for some large positive integer. If we augment the zero flow along the path 1→2→3→4, we shall obtain the flow of value 1 shown in Figure 10.6b. Augmenting that flow along the path 1→3←2→4 will increase the flow value to 2 (Figure 10.6c). If we continue selecting this pair of flow-augmenting paths, we will need a total of 2U iterations to reach the maximum flow of value 2U (Figure 10.6d). Of course, we can obtain the maximum flow in just two iterations by augmenting the initial zero flow along the path 1→2→4 followed by augmenting the new flow along the path 1→3→4. The dramatic difference between 2U and 2 iterations makes the point.
Fortunately, there are several ways to generate flow-augmenting paths efficiently and avoid the degradation in performance illustrated by the previous example. The simplest of them uses breadth-first search to generate augmenting paths with the least number of edges (see Section 3.5). This version of the augmenting-path method, called shortest-augmenting-path or first-labeled-first-scanned algorithm, was suggested by J. Edmonds and R. M. Karp [Edm72]. The labeling refers to marking a new (unlabeled) vertex with two labels. The first label indicates the amount of additional flow that can be brought from the source to the vertex being labeled. The second label is the name of the vertex from which the vertex being labeled was reached. (It can be left undefined for the source.) It is also convenient to add the + or − sign to the second label to indicate whether the vertex was reached via a forward or backward edge, respectively. The source can be always labeled with ∞, −. For the other vertices, the labels are computed as follows.
4. If capacity upper bounds are irrational numbers, the augmenting-path method may not terminate (see, e.g., [Chv83, pp. 387–388], for a cleverly devised example demonstrating such a situation). This limitation is only of theoretical interest because we cannot store irrational numbers in a computer, and rational numbers can be transformed into integers by changing the capacity measurement unit.
FIGURE 10.6 Efficiency degradation of the augmenting-path method.
If unlabeled vertex j is connected to the front vertex i of the traversal queue by a directed edge from i to j with positive unused capacity r_{ij} = u_{ij} − x_{ij}, then vertex j is labeled with l_j, i+, where l_j = min{l_i, r_{ij}}.

If unlabeled vertex j is connected to the front vertex i of the traversal queue by a directed edge from j to i with positive flow x_{ji}, then vertex j is labeled with l_j, i−, where l_j = min{l_i, x_{ji}}.

If this labeling-enhanced traversal ends up labeling the sink, the current flow can be augmented by the amount indicated by the sink's first label. The augmentation is performed along the augmenting path traced by following the vertex second labels from sink to source: the current flow quantities are increased on the forward edges and decreased on the backward edges of this path. If, on the other hand, the sink remains unlabeled after the traversal queue becomes empty, the algorithm returns the current flow as maximum and stops.
ALGORITHM ShortestAugmentingPath(G)
//Implements the shortest-augmenting-path algorithm
//Input: A network with single source 1, single sink n, and
//       positive integer capacities u_{ij} on its edges (i, j)
//Output: A maximum flow x
assign x_{ij} = 0 to every edge (i, j) in the network
label the source with ∞, − and add the source to the empty queue Q
while not Empty(Q) do
    i ← Front(Q); Dequeue(Q)
    for every edge from i to j do //forward edges
        if j is unlabeled
            r_{ij} ← u_{ij} − x_{ij}
            if r_{ij} > 0
                l_j ← min{l_i, r_{ij}}; label j with l_j, i+
                Enqueue(Q, j)
    for every edge from j to i do //backward edges
        if j is unlabeled
            if x_{ji} > 0
                l_j ← min{l_i, x_{ji}}; label j with l_j, i−
                Enqueue(Q, j)
    if the sink has been labeled
        //augment along the augmenting path found
        j ← n //start at the sink and move backwards using second labels
        while j ≠ 1 //the source hasn't been reached
            if the second label of vertex j is i+
                x_{ij} ← x_{ij} + l_n
            else //the second label of vertex j is i−
                x_{ji} ← x_{ji} − l_n
            j ← i; i ← the vertex indicated by i's second label
        erase all vertex labels except the ones of the source
        reinitialize Q with the source
return x //the current flow is maximum
An application of this algorithm to the network in Figure 10.4 is illustrated in
Figure 10.7.
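The pseudocode above translates almost line for line into a runnable program. Below is a minimal Python sketch of the same breadth-first labeling scheme; the function name shortest_augmenting_path and the capacity-matrix input format are illustrative choices rather than part of the book's presentation.

from collections import deque

def shortest_augmenting_path(u, n, source, sink):
    """Maximum flow by the shortest-augmenting-path (Edmonds-Karp) scheme.
    u[i][j] is the capacity of edge (i, j); vertices are 0 .. n-1.
    Returns the flow matrix x and the value of the maximum flow."""
    x = [[0] * n for _ in range(n)]                    # start with the zero flow
    while True:
        # Breadth-first labeling: label[j] = (l_j, parent vertex, '+' or '-')
        label = {source: (float("inf"), None, None)}
        q = deque([source])
        while q and sink not in label:
            i = q.popleft()
            for j in range(n):
                if j not in label and u[i][j] - x[i][j] > 0:   # forward edge
                    label[j] = (min(label[i][0], u[i][j] - x[i][j]), i, "+")
                    q.append(j)
                elif j not in label and x[j][i] > 0:           # backward edge
                    label[j] = (min(label[i][0], x[j][i]), i, "-")
                    q.append(j)
        if sink not in label:              # sink unlabeled: current flow is maximum
            break
        delta, j = label[sink][0], sink    # augment by the sink's first label
        while j != source:                 # trace the path back by second labels
            _, i, sign = label[j]
            if sign == "+":
                x[i][j] += delta           # increase flow on forward edges
            else:
                x[j][i] -= delta           # decrease flow on backward edges
            j = i
    value = sum(x[source][j] for j in range(n)) - sum(x[j][source] for j in range(n))
    return x, value

On the last iteration the search fails, and the vertices that did receive labels form the source side of a minimum cut, a fact established by the theorem discussed next.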
The optimality of a final flow obtained by the augmenting-path method stems
from a theorem that relates network flows to network cuts. A cut induced by
partitioning vertices of a network into some subset X containing the source and
X̄, the complement of X, containing the sink is the set of all the edges with a tail
in X and a head in X̄. We denote a cut C(X, X̄) or simply C. For example, for the
network in Figure 10.4:
if X = {1} and hence X̄ = {2, 3, 4, 5, 6}, C(X, X̄) = {(1, 2), (1, 4)};
if X = {1, 2, 3, 4, 5} and hence X̄ = {6}, C(X, X̄) = {(3, 6), (5, 6)};
if X = {1, 2, 4} and hence X̄ = {3, 5, 6}, C(X, X̄) = {(2, 3), (2, 5), (4, 3)}.
The name cut stems from the following property: if all the edges of a cut
were deleted from the network, there would be no directed path from source to
sink.
FIGURE 10.7 Illustration of the shortest-augmenting-path algorithm. The diagrams on
the left show the current flow before the next iteration begins; the
diagrams on the right show the results of the vertex labeling on that
iteration, the augmenting path found (in bold), and the flow before its
augmentation. Vertices already deleted from the queue are also marked.
(First iteration: the flow is augmented by 2, the sink's first label, along
the path 1→2→3→6. Second iteration: the flow is augmented by 1 along the
path 1→4→3→2→5→6. Third iteration: the sink remains unlabeled, so the
current flow is maximal.)
Indeed, let C(X, X̄) be a cut. Consider a directed path from source to sink. If
v_i is the first vertex of that path which belongs to X̄ (the set of such vertices is not
empty, because it contains the sink), then v_i is not the source and its immediate
predecessor v_{i−1} on that path belongs to X. Hence, the edge from v_{i−1} to v_i must
be an element of the cut C(X, X̄). This proves the property in question.
The capacity of a cut C(X, X̄), denoted c(X, X̄), is defined as the sum of
capacities of the edges that compose the cut. For the three examples of cuts given
above, the capacities are equal to 5, 6, and 9, respectively. Since the set of
different cuts in a network is nonempty and finite (why?), there always exists
a minimum cut, i.e., a cut with the smallest capacity. (What is a minimum cut
in the network of Figure 10.4?) The following theorem establishes an important
relationship between the notions of maximum flow and minimum cut.

THEOREM (Max-Flow Min-Cut Theorem) The value of a maximum flow in a
network is equal to the capacity of its minimum cut.
PROOF First, let x be a feasible flow of value v and let C(X, X̄) be a cut of
capacity c in the same network. Consider the flow across this cut defined as the
difference between the sum of the flows on the edges from X to X̄ and the sum
of the flows on the edges from X̄ to X. It is intuitively clear and can be formally
derived from the equations expressing the flow-conservation requirement and the
definition of the flow value (Problem 6b in this section's exercises) that the flow
across the cut C(X, X̄) is equal to v, the value of the flow:

    v = Σ_{i∈X, j∈X̄} x_ij − Σ_{j∈X̄, i∈X} x_ji.    (10.12)
Since the second sum is nonnegative and the flow x_ij on any edge (i, j) cannot
exceed the edge capacity u_ij, equality (10.12) implies that

    v ≤ Σ_{i∈X, j∈X̄} x_ij ≤ Σ_{i∈X, j∈X̄} u_ij,

i.e.,

    v ≤ c.    (10.13)

Thus, the value of any feasible flow in a network cannot exceed the capacity of
any cut in that network.
Let v* be the value of a final flow x* obtained by the augmenting-path
method. If we now find a cut whose capacity is equal to v*, then, in view of
inequality (10.13), v* must be the maximum value among all flows and the
capacity of this cut must be the minimum among all cuts. Consider the set X*
of vertices that can be reached from the source, after the method stops, along
paths made up of forward edges with positive unused capacities and backward
edges with positive flows. The set X* contains the source but not the sink:
otherwise, there would exist one more augmenting path, which would
contradict the assumption that the flow x* is final. Consider the cut C(X*, X̄*). By
the definition of set X*, each edge (i, j) from X* to X̄* must be full, i.e.,
x*_ij = u_ij, and each edge (j, i) from X̄* to X* must be empty, i.e., x*_ji = 0.
Applying equality (10.12) to the flow x* and the cut C(X*, X̄*), we obtain

    v* = Σ_{i∈X*, j∈X̄*} x*_ij − Σ_{j∈X̄*, i∈X*} x*_ji = Σ_{i∈X*, j∈X̄*} u_ij − 0 = c(X*, X̄*),

which proves the theorem.
The proof outlined above accomplishes more than proving the equality of the
maximum-flow value and the minimum-cut capacity. It also implies that when the
augmenting-path method terminates, it yields both a maximum flow and a mini-
mum cut. If labeling of the kind utilized in the shortest-augmenting-path algorithm
is used, a minimum cut is formed by the edges from the labeled to unlabeled ver-
tices on the last iteration of the method. Finally, the proof implies that all such
edges must be full (i.e., the flows must be equal to the edge capacities), and all
the edges from unlabeled vertices to labeled, if any, must be empty (i.e., have
zero flows on them). In particular, for the network in Figure 10.7, the algorithm
finds the cut {(1, 2), (4, 3)} of minimum capacity 3, both edges of which are full as
required.
Edmonds and Karp proved in their paper [Edm72] that the number of aug-
menting paths needed by the shortest-augmenting-path algorithm never exceeds
nm/2, where n and m are the number of vertices and edges, respectively. Since
the time required to find a shortest augmenting path by breadth-first search is
in O(n + m) = O(m) for networks represented by their adjacency lists, the time
efficiency of the shortest-augmenting-path algorithm is in O(nm²).
More efficient algorithms for the maximum-flow problem are known (see the
monograph [Ahu93], as well as appropriate chapters in such books as [Cor09] and
[Kle06]). Some of them implement the augmenting-path idea in a more efficient
manner. Others are based on the concept of preflows. A preflow is a flow that
satisfies the capacity constraints but not the flow-conservation requirement. Any
vertex is allowed to have more flow entering the vertex than leaving it. A preflow-
push algorithm moves the excess flow toward the sink until the flow-conservation
requirement is reestablished for all intermediate vertices of the network. Faster al-
gorithms of this kind have worst-case efficiency close to O(nm). Note that preflow-
push algorithms fall outside the iterative-improvement paradigm because they do
not generate a sequence of improving solutions that satisfy all the constraints of
the problem.

To conclude this section, it is worth pointing out that although the initial
interest in studying network flows was caused by transportation applications, this
model has also proved to be useful for many other areas. We discuss one of them
in the next section.
Exercises 10.2
1. Since maximum-flow algorithms require processing edges in both directions,
it is convenient to modify the adjacency matrix representation of a network
as follows. If there is a directed edge from vertex i to vertex j of capacity
u_ij, then the element in the ith row and the jth column is set to u_ij, and the
element in the jth row and the ith column is set to −u_ij; if there is no edge
between vertices i and j, both these elements are set to zero. Outline a simple
algorithm for identifying a source and a sink in a network presented by such
a matrix and indicate its time efficiency.
2. Apply the shortest-augmenting-path algorithm to find a maximum flow and a
minimum cut in the following networks.
a. (network diagram)
b. (network diagram)
3. a. Does the maximum-flow problem always have a unique solution? Would
your answer be different for networks with different capacities on all their
edges?
b. Answer the same questions for the minimum-cut problem of finding a cut
of the smallest capacity in a given network.
4. a. Explain how the maximum-flow problem for a network with several
sources and sinks can be transformed into the same problem for a network
with a single source and a single sink.
b. Some networks have capacity constraints on the flow amounts that can
flow through their intermediate vertices. Explain how the maximum-flow
problem for such a network can be transformed to the maximum-flow
problem for a network with edge capacity constraints only.
5. Consider a network that is a rooted tree, with the root as its source, the leaves
as its sinks, and all the edges directed along the paths from the root to the
leaves. Design an efficient algorithm for finding a maximum flow in such a
network. What is the time efficiency of your algorithm?
6. a. Prove equality (10.9).
b. Prove that for any flow in a network and any cut in it, the value of the
flow is equal to the flow across the cut (see equality (10.12)). Explain the
relationship between this property and equality (10.9).
7. a. Express the maximum-flow problem for the network in Figure 10.4 as a
linear programming problem.
b. Solve this linear programming problem by the simplex method.
8. As an alternative to the shortest-augmenting-path algorithm, Edmonds and
Karp [Edm72] suggested the maximum-capacity-augmenting-path algorithm,
in which a flow is augmented along the path that increases the flow by the
largest amount. Implement both these algorithms in the language of your
choice and perform an empirical investigation of their relative efficiency.
9. Write a report on a more advanced maximum-flow algorithm such as
(i) Dinitz's algorithm, (ii) Karzanov's algorithm, (iii) Malhotra-Kumar-
Maheshwari algorithm, or (iv) Goldberg-Tarjan algorithm.
10. Dining problem Several families go out to dinner together. To increase their
social interaction, they would like to sit at tables so that no two members of
the same family are at the same table. Show how to find a seating arrangement
that meets this objective (or prove that no such arrangement exists) by using
a maximum-flow problem. Assume that the dinner contingent has p families
and that the ith family has a_i members. Also assume that q tables are available
and the jth table has a seating capacity of b_j. [Ahu93]
10.3 Maximum Matching in Bipartite Graphs
In many situations we are faced with a problem of pairing elements of two sets.
The traditional example is boys and girls for a dance, but you can easily think
of more serious applications. It is convenient to represent elements of two given
sets by vertices of a graph, with edges between vertices that can be paired. A
matching in a graph is a subset of its edges with the property that no two edges
share a vertex. A maximum matching (more precisely, a maximum cardinality
matching) is a matching with the largest number of edges. (What is it for the graph
in Figure 10.8? Is it unique?) The maximum-matching problem is the problem of
finding a maximum matching in a given graph. For an arbitrary graph, this is a
rather difficult problem. It was solved in 1965 by Jack Edmonds [Edm65]. (See
[Gal86] for a good survey and more recent references.)

We limit our discussion in this section to the simpler case of bipartite graphs. In
a bipartite graph, all the vertices can be partitioned into two disjoint sets V and U,
not necessarily of the same size, so that every edge connects a vertex in one of these
sets to a vertex in the other set. In other words, a graph is bipartite if its vertices
can be colored in two colors so that every edge has its vertices colored in different
colors; such graphs are also said to be 2-colorable. The graph in Figure 10.8 is
bipartite. It is not difficult to prove that a graph is bipartite if and only if it does
not have a cycle of an odd length.
FIGURE 10.8 Example of a bipartite graph. (Diagram: vertex set V = {1, 2, 3, 4, 5}
drawn above vertex set U = {6, 7, 8}.)
We will assume for the rest of this section that the vertex set of a given bipartite
graph has been already partitioned into sets V and U as required by the definition
(see Problem 8 in Exercises 3.5).
Let us apply the iterative-improvement technique to the maximum-
cardinality-matching problem. Let M be a matching in a bipartite graph G =
⟨V, U, E⟩. How can we improve it, i.e., find a new matching with more edges?
Obviously, if every vertex in either V or U is matched (has a mate), i.e., serves as
an endpoint of an edge in M, this cannot be done and M is a maximum matching.
Therefore, to have a chance at improving the current matching, both V and U
must contain unmatched (also called free) vertices, i.e., vertices that are not inci-
dent to any edge in M. For example, for the matching M_a = {(4, 8), (5, 9)} in the
graph in Figure 10.9a, vertices 1, 2, 3, 6, 7, and 10 are free, and vertices 4, 5, 8,
and 9 are matched.

Another obvious observation is that we can immediately increase a current
matching by adding an edge between two free vertices. For example, adding (1, 6)
to the matching M_a = {(4, 8), (5, 9)} in the graph in Figure 10.9a yields a larger
matching M_b = {(1, 6), (4, 8), (5, 9)} (Figure 10.9b). Let us now try to find a
matching larger than M_b by matching vertex 2. The only way to do this would
be to include the edge (2, 6) in a new matching. This inclusion requires removal of
(1, 6), which can be compensated by inclusion of (1, 7) in the new matching. This
new matching M_c = {(1, 7), (2, 6), (4, 8), (5, 9)} is shown in Figure 10.9c.
In general, we increase the size of a current matching M by constructing a
simple path from a free vertex in V to a free vertex in U whose edges are alternately
in E − M and in M. That is, the first edge of the path does not belong to M, the
second one does, and so on, until the last edge that does not belong to M. Such a
path is called augmenting with respect to the matching M. For example, the path
2, 6, 1, 7 is an augmenting path with respect to the matching M_b in Figure 10.9b.
Since the length of an augmenting path is always odd, adding to the matching M
the path's edges in the odd-numbered positions and deleting from it the path's
edges in the even-numbered positions yields a matching with one more edge than
in M. Such a matching adjustment is called augmentation. Thus, in Figure 10.9,
the matching M_b was obtained by augmentation of the matching M_a along the
augmenting path 1, 6, and the matching M_c was obtained by augmentation of the
matching M_b along the augmenting path 2, 6, 1, 7. Moving further, 3, 8, 4, 9, 5, 10
is an augmenting path for the matching M_c (Figure 10.9c). After adding to M_c
the edges (3, 8), (4, 9), and (5, 10) and deleting (4, 8) and (5, 9), we obtain the
matching M_d = {(1, 7), (2, 6), (3, 8), (4, 9), (5, 10)} shown in Figure 10.9d.
FIGURE 10.9 Augmenting paths and matching augmentations. (The diagrams show a
bipartite graph with V = {1, 2, 3, 4, 5} and U = {6, 7, 8, 9, 10}, with the
current matching in bold: (a) augmenting path 1, 6; (b) augmenting path
2, 6, 1, 7; (c) augmenting path 3, 8, 4, 9, 5, 10; (d) maximum matching.)
The matching M_d is not only a maximum matching but also perfect, i.e., a matching
that matches all the vertices of the graph.
Before we discuss an algorithm for finding an augmenting path, let us settle
the issue of what nonexistence of such a path means. According to the theorem
discovered by the French mathematician Claude Berge, it means the current
matching is maximum.
THEOREM A matching M is a maximum matching if and only if there exists no
augmenting path with respect to M.
PROOF If an augmenting path with respect to a matching M exists, then the size
of the matching can be increased by augmentation. Let us prove the more difficult
part: if no augmenting path with respect to a matching M exists, then the matching
is a maximum matching. Assume that, on the contrary, this is not the case for a
certain matching M in a graph G, i.e., there exists a matching M* in G with
|M*| > |M|. Consider the edges in the symmetric difference M ⊕ M* =
(M − M*) ∪ (M* − M), the set of all the edges that are in either M or M* but not
in both; note that |M* − M| > |M − M*| because |M*| > |M| by assumption. Let G'
be the subgraph of G made up of all the edges in M ⊕ M* and their endpoints. By
definition of a matching, any vertex in G' can be incident to no more than one
edge in M and no more than one edge in M*. Hence, each of the vertices in G'
has degree 2 or less, and therefore every connected component of G' is either a
path or an even-length cycle of alternating edges from M − M* and M* − M.
Since |M* − M| > |M − M*| and every such cycle contains the same number of
edges of the two kinds, at least one connected component of G' must be a path
with more edges from M* − M than from M − M*. Because its edges alternate,
this path begins and ends with edges from M* − M, and its endpoints are free
with respect to M (an edge of M incident to an endpoint would have to be in
M − M* and hence in G', allowing the path to be extended). Thus, the path is
augmenting with respect to M, which contradicts the assumption that no
augmenting path with respect to M exists.
FIGURE 10.10 Application of the maximum-cardinality-matching algorithm. The left
column shows a current matching and initialized queue at the next
iteration's start; the right column shows the vertex labeling generated
by the algorithm before augmentation is performed. Matching edges are
shown in bold. Vertex labels indicate the vertices from which the labeling
is done. The discovered endpoint of an augmenting path is shaded and
labeled for clarity. Vertices already deleted from the queue are also
marked. (In the run shown, augmentations are performed from the
discovered free endpoints 7 and 10; the final iteration ends with an
empty queue and no augmenting path, so the matching is maximum.)
How efficient is the maximum-matching algorithm? Each iteration except
the last one matches two previously free vertices, one from each of the sets V
and U. Therefore, the total number of iterations cannot exceed ⌊n/2⌋ + 1, where
n = |V| + |U| is the number of vertices in the graph. The time spent on each
iteration is in O(n + m), where m = |E| is the number of edges in the graph. (This
assumes that the information about the status of each vertex, free or matched,
and about the matched vertex's mate can be retrieved in constant time, e.g., by
storing it in an array.) Hence, the time efficiency of the algorithm is in
O(n(n + m)). Hopcroft and Karp [Hop73] showed how the efficiency can be
improved to O(√n (n + m)) by combining several iterations into a single stage
to maximize the number of edges added to the matching with one search.
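To make the augmentation idea concrete, here is a minimal Python sketch of the augmenting-path approach for bipartite graphs. It looks for an augmenting path by depth-first rather than breadth-first search (either search order yields a maximum matching); the function names and the adjacency-list input format are illustrative assumptions.

def maximum_bipartite_matching(adj, n_v, n_u):
    """Maximum-cardinality matching in a bipartite graph.
    adj[v] lists the U-vertices adjacent to vertex v of V (all 0-indexed).
    Returns mate_u, where mate_u[u] is the V-vertex matched with u, or None."""
    mate_u = [None] * n_u

    def try_augment(v, visited):
        # Depth-first search for an augmenting path starting at free vertex v.
        for u in adj[v]:
            if u not in visited:
                visited.add(u)
                # u is free, or u's current mate can be rematched elsewhere:
                if mate_u[u] is None or try_augment(mate_u[u], visited):
                    mate_u[u] = v          # flip the path's edges in and out of M
                    return True
        return False

    for v in range(n_v):                   # try to match each V-vertex in turn
        try_augment(v, set())
    return mate_u

Each successful call to try_augment performs one augmentation, so the loop runs through at most |V| augmentations of O(n + m) work each, consistent with the O(n(n + m)) estimate above.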
We were concerned in this section with matching the largest possible number
of vertex pairs in a bipartite graph. Some applications may require taking into ac-
count the quality or cost of matching different pairs. For example, workers may
execute jobs with different efficiencies, or girls may have different preferences for
their potential dance partners. It is natural to model such situations by bipartite
graphs with weights assigned to their edges. This leads to the problem of maxi-
mizing the sum of the weights on edges connecting matched pairs of vertices. This
problem is called maximum-weight matching. We encountered it under a differ-
ent name, the assignment problem, in Section 3.4. There are several sophisti-
cated algorithms for this problem, which are much more efficient than exhaustive
search (see, e.g., [Pap82], [Gal86], [Ahu93]). We have to leave them outside of our
discussion, however, because of their complexity, especially for general graphs.
Exercises 10.3
1. For each matching shown below in bold, find an augmentation or explain why
no augmentation exists.
a. (bipartite graph with a matching shown in bold)
b. (bipartite graph with a matching shown in bold)
2. Apply the maximum-matching algorithm to the following bipartite graph:
(diagram)
3. a. What is the largest and what is the smallest possible cardinality of a match-
ing in a bipartite graph G = ⟨V, U, E⟩ with n vertices in each vertex set V
and U and at least n edges?
b. What is the largest and what is the smallest number of distinct solutions
the maximum-cardinality-matching problem can have for a bipartite graph
G = ⟨V, U, E⟩ with n vertices in each vertex set V and U and at least n
edges?
4. a. Hall's Marriage Theorem asserts that a bipartite graph G = ⟨V, U, E⟩ has a
matching that matches all vertices of the set V if and only if for each subset
S ⊆ V, |R(S)| ≥ |S|, where R(S) is the set of all vertices adjacent to a vertex
in S. Check this property for the following graph with (i) V = {1, 2, 3, 4}
and (ii) V = {5, 6, 7}.
(bipartite graph on vertices 1, 2, 3, 4 and 5, 6, 7)
b. You have to devise an algorithm that returns yes if there is a matching in
a bipartite graph G = ⟨V, U, E⟩ that matches all vertices in V and returns
no otherwise. Would you base your algorithm on checking the condition
of Hall's Marriage Theorem?
5. Suppose there are five committees A, B, C, D, and E composed of six persons
a, b, c, d, e, and f as follows: committee A's members are b and e; committee
B's members are b, d, and e; committee C's members are a, c, d, e, and f;
committee D's members are b, d, and e; committee E's members are b and
e. Is there a system of distinct representatives, i.e., is it possible to select
a representative from each committee so that all the selected persons are
distinct?
6. Show how the maximum-cardinality-matching problem for a bipartite graph
can be reduced to the maximum-flow problem discussed in Section 10.2.
7. Consider the following greedy algorithm for finding a maximum matching
in a bipartite graph G = ⟨V, U, E⟩: Sort all the vertices in nondecreasing
order of their degrees. Scan this sorted list to add to the current matching
(initially empty) the edge from the list's free vertex to an adjacent free vertex
of the lowest degree. If the list's vertex is matched or if there are no adjacent
free vertices for it, the vertex is simply skipped. Does this algorithm always
produce a maximum matching in a bipartite graph?
8. Design a linear-time algorithm for finding a maximum matching in a tree.
9. Implement the maximum-matching algorithm of this section in the language
of your choice. Experiment with its performance on bipartite graphs with n
vertices in each of the vertex sets and randomly generated edges (in both
dense and sparse modes) to compare the observed running time with the
algorithm's theoretical efficiency.
10. Domino puzzle A domino is a 2 × 1 tile that can be oriented either hori-
zontally or vertically. A tiling of a given board composed of 1 × 1 squares is
covering it with dominoes exactly and without overlap. Is it possible to tile with
dominoes an 8 × 8 board without two unit squares at its diagonally opposite
corners?
10.4 The Stable Marriage Problem
In this section, we consider an interesting version of bipartite matching called the
stable marriage problem. Consider a set Y = {m_1, m_2, . . . , m_n} of n men and a
set X = {w_1, w_2, . . . , w_n} of n women. Each man has a preference list ordering
the women as potential marriage partners with no ties allowed. Similarly, each
woman has a preference list of the men, also with no ties. Examples of these two
sets of lists are given in Figures 10.11a and 10.11b. The same information can also
be presented by an n × n ranking matrix (see Figure 10.11c). The rows and columns
of the matrix represent the men and women of the two sets, respectively. A cell
in row m and column w contains two rankings: the first is the position (ranking)
of w in m's preference list; the second is the position (ranking) of m in w's
preference list. For example, the pair 3, 1 in Jim's row and Ann's column in the
matrix in Figure 10.11c indicates that Ann is Jim's third choice while Jim is Ann's
first. Which of these two ways to represent such information is better depends on
the task at hand. For example, it is easier to specify a match of the sets' elements
by using the ranking matrix, whereas the preference lists might be a more efficient
data structure for implementing a matching algorithm.

A marriage matching M is a set of n (m, w) pairs whose members are selected
from disjoint n-element sets Y and X in a one-one fashion, i.e., each man m from
Y is paired with exactly one woman w from X and vice versa. (If we represent
Y and X as vertices of a complete bipartite graph with edges connecting possible
marriage partners, then a marriage matching is a perfect matching in such a graph.)
men's preferences           women's preferences            ranking matrix
      1st  2nd  3rd               1st  2nd  3rd            Ann   Lea   Sue
Bob:  Lea  Ann  Sue    Ann:  Jim  Tom  Bob      Bob   2,3   1,2   3,3
Jim:  Lea  Sue  Ann    Lea:  Tom  Bob  Jim      Jim   3,1   1,3   2,1
Tom:  Sue  Lea  Ann    Sue:  Jim  Tom  Bob      Tom   3,2   2,1   1,2
        (a)                       (b)                         (c)

FIGURE 10.11 Data for an instance of the stable marriage problem. (a) Men's preference
lists; (b) women's preference lists. (c) Ranking matrix (with the boxed
cells composing an unstable matching).
A pair (m, w), where m ∈ Y, w ∈ X, is said to be a blocking pair for a marriage
matching M if man m and woman w are not matched in M but they prefer each
other to their mates in M. For example, (Bob, Lea) is a blocking pair for the
marriage matching M = {(Bob, Ann), (Jim, Lea), (Tom, Sue)} (Figure 10.11c)
because they are not matched in M while Bob prefers Lea to Ann and Lea
prefers Bob to Jim. A marriage matching M is called stable if there is no blocking
pair for it; otherwise, M is called unstable. According to this definition, the
marriage matching in Figure 10.11c is unstable because Bob and Lea can drop their
designated mates to join in a union they both prefer. The stable marriage problem
is to find a stable marriage matching for men's and women's given preferences.

Surprisingly, this problem always has a solution. (Can you find it for the
instance in Figure 10.11?) It can be found by the following algorithm.
Stable marriage algorithm
Input: A set of n men and a set of n women along with rankings of the women
by each man and rankings of the men by each woman with no ties
allowed in the rankings
Output: A stable marriage matching
Step 0 Start with all the men and women being free.
Step 1 While there are free men, arbitrarily select one of them and do the
following:
    Proposal The selected free man m proposes to w, the next
    woman on his preference list (who is the highest-ranked woman
    who has not rejected him before).
    Response If w is free, she accepts the proposal to be matched
    with m. If she is not free, she compares m with her current mate. If
    she prefers m to him, she accepts m's proposal, making her former
    mate free; otherwise, she simply rejects m's proposal, leaving m
    free.
Step 2 Return the set of n matched pairs.
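A compact Python rendering of the algorithm may also be useful; the representation of preferences by 0-indexed lists is an illustrative choice. Free men are kept on a stack here, which, as discussed later in this section, has no effect on the output.

def stable_marriage(men_prefs, women_prefs):
    """Gale-Shapley stable marriage algorithm, men-proposing version.
    men_prefs[m] is man m's list of women, best first;
    women_prefs[w] is woman w's list of men, best first.
    Returns a dict mapping each woman to her final mate."""
    n = len(men_prefs)
    # rank[w][m] = position of man m on woman w's list (lower is better)
    rank = [{m: r for r, m in enumerate(prefs)} for prefs in women_prefs]
    next_choice = [0] * n       # next woman on each man's list to propose to
    mate = {}                   # woman -> current mate
    free_men = list(range(n))
    while free_men:
        m = free_men.pop()                   # arbitrarily selected free man
        w = men_prefs[m][next_choice[m]]     # highest-ranked woman not yet asked
        next_choice[m] += 1
        if w not in mate:                    # w is free: she accepts
            mate[w] = m
        elif rank[w][m] < rank[w][mate[w]]:  # w prefers m to her current mate
            free_men.append(mate[w])         # her former mate becomes free
            mate[w] = m
        else:                                # w rejects m, who remains free
            free_men.append(m)
    return mate

# The instance of Figure 10.11 (men 0, 1, 2 = Bob, Jim, Tom;
# women 0, 1, 2 = Ann, Lea, Sue):
men = [[1, 0, 2], [1, 2, 0], [2, 1, 0]]
women = [[1, 2, 0], [2, 0, 1], [1, 2, 0]]
print(stable_marriage(men, women))   # {2: 1, 1: 2, 0: 0}, i.e., (Bob, Ann),
                                     # (Jim, Sue), (Tom, Lea), as in Figure 10.12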
Before we analyze this algorithm, it is useful to trace it on some input. Such
an example is presented in Figure 10.12.
Let us discuss properties of the stable marriage algorithm.
THEOREM The stable marriage algorithm terminates after no more than n²
iterations with a stable marriage output.

PROOF The algorithm starts with n men having the total of n² women on their
ranking lists. On each iteration, one man makes a proposal to a woman. This
reduces the total number of women to whom the men can still propose in the
future because no man proposes to the same woman more than once. Hence, the
algorithm must stop after no more than n² iterations.
Free men: Bob, Jim, Tom    Bob proposed to Lea; Lea accepted
Free men: Jim, Tom         Jim proposed to Lea; Lea rejected
Free men: Jim, Tom         Jim proposed to Sue; Sue accepted
Free men: Tom              Tom proposed to Sue; Sue rejected
Free men: Tom              Tom proposed to Lea; Lea replaced Bob with Tom
Free men: Bob              Bob proposed to Ann; Ann accepted

FIGURE 10.12 Application of the stable marriage algorithm. An accepted proposal is
indicated by a boxed cell; a rejected proposal is shown by an underlined
cell. (Each step also displays the ranking matrix of Figure 10.11 with the
corresponding cell marked.)
Let us now prove that the final matching M is a stable marriage matching.
Since the algorithm stops after all the n men are one-one matched to the n women,
the only thing that needs to be proved is the stability of M. Suppose, on the
contrary, that M is unstable. Then there exists a blocking pair of a man m and a
woman w who are unmatched in M and such that both m and w prefer each other
to the persons they are matched with in M. Since m proposes to every woman on
his ranking list in decreasing order of preference and w precedes m's match in M,
m must have proposed to w on some iteration. Whether w refused m's proposal or
accepted it but replaced him on a subsequent iteration with a higher-ranked match,
w's mate in M must be higher on w's preference list than m because the rankings
of the men matched to a given woman may only improve on each iteration of the
algorithm. This contradicts the assumption that w prefers m to her final match
in M.
The stable marriage algorithm has a notable shortcoming. It is not gender
neutral. In the form presented above, it favors men's preferences over women's
preferences. We can easily see this by tracing the algorithm on the following
instance of the problem:

             woman 1    woman 2
   man 1      1, 2       2, 1
   man 2      2, 1       1, 2

The algorithm obviously yields the stable matching M = {(man 1, woman 1), (man
2, woman 2)}. In this matching, both men are matched to their first choices, which
is not the case for the women. One can prove that the algorithm always yields a
stable matching that is man-optimal: it assigns to each man the highest-ranked
woman possible under any stable marriage. Of course, this gender bias can be
reversed, but not eliminated, by reversing the roles played by men and women
in the algorithm, i.e., by making women propose and men accept or reject their
proposals.
There is another important corollary to the fact that the stable marriage
algorithm always yields a gender-optimal stable matching. It is easy to prove
that a man (woman)-optimal matching is unique for a given set of participant
preferences. Therefore the algorithm's output does not depend on the order in
which the free men (women) make their proposals. Consequently, we can use any
data structure we might prefer (e.g., a queue or a stack) for representing this set
with no impact on the algorithm's outcome.

The notion of the stable matching as well as the algorithm discussed above was
introduced by D. Gale and L. S. Shapley in the paper titled "College Admissions
and the Stability of Marriage" [Gal62]. I do not know which of the two applications
mentioned in the title you would consider more important. The point is that
stability is a matching property that can be desirable in a variety of applications.
For example, it has been used for many years in the United States for matching
medical-school graduates with hospitals for residency training. For a brief history
of this application and an in-depth discussion of the stable marriage problem and
its extensions, see the monograph by Gusfield and Irving [Gus89].
Exercises 10.4
1. Consider an instance of the stable marriage problem given by the following
ranking matrix:

         A     B     C
   α    1,3   2,2   3,1
   β    3,1   1,3   2,2
   γ    2,2   3,1   1,3

For each of its marriage matchings, indicate whether it is stable or not. For the
unstable matchings, specify a blocking pair. For the stable matchings, indicate
whether they are man-optimal, woman-optimal, or neither. (Assume that the
Greek and Roman letters denote the men and women, respectively.)
2. Design a simple algorithm for checking whether a given marriage matching is
stable and determine its time efficiency class.
3. Find a stable marriage matching for the instance given in Problem 1 by
applying the stable marriage algorithm
a. in its men-proposing version.
b. in its women-proposing version.
4. Find a stable marriage matching for the instance defined by the following
ranking matrix:

         A     B     C     D
   α    1,3   2,3   3,2   4,3
   β    1,4   4,1   3,4   2,2
   γ    2,2   1,4   3,3   4,1
   δ    4,1   2,2   3,1   1,4
5. Determine the time-efficiency class of the stable marriage algorithm
a. in the worst case.
b. in the best case.
6. Prove that a man-optimal stable marriage set is always unique. Is it also true
for a woman-optimal stable marriage matching?
7. Prove that in the man-optimal stable matching, each woman has the worst
partner that she can have in any stable marriage matching.
8. Implement the stable-marriage algorithm given in Section 10.4 so that its
running time is in O(n²). Run an experiment to ascertain its average-case
efficiency.
9. Write a report on the college admission problem (residents-hospitals assign-
ment) that generalizes the stable marriage problem in that a college can accept
proposals from more than one applicant.
10. Consider the problem of the roommates, which is related to but more difficult
than the stable marriage problem: "An even number of boys wish to divide up
into pairs of roommates. A set of pairings is called stable if under it there are
no two boys who are not roommates and who prefer each other to their actual
roommates." [Gal62] Give an instance of this problem that does not have a
stable pairing.
SUMMARY
The iterative-improvement technique involves finding a solution to an op-
timization problem by generating a sequence of feasible solutions with
improving values of the problem's objective function. Each subsequent so-
lution in such a sequence typically involves a small, localized change in the
previous feasible solution. When no such change improves the value of the
objective function, the algorithm returns the last feasible solution as optimal
and stops.

Important problems that can be solved exactly by iterative-improvement
algorithms include linear programming, maximizing the flow in a network,
and matching the maximum possible number of vertices in a graph.
The simplex method is the classic method for solving the general linear
programming problem. It works by generating a sequence of adjacent extreme
points of the problem's feasible region with improving values of the objective
function.

The maximum-flow problem asks to find the maximum flow possible in a
network, a weighted directed graph with a source and a sink.

The Ford-Fulkerson method is a classic template for solving the maximum-
flow problem by the iterative-improvement approach. The shortest-
augmenting-path method implements this idea by labeling network vertices
in the breadth-first search manner.

The Ford-Fulkerson method also finds a minimum cut in a given network.

A maximum cardinality matching is the largest subset of edges in a graph
such that no two edges share the same vertex. For a bipartite graph, it can be
found by a sequence of augmentations of previously obtained matchings.

The stable marriage problem is to find a stable matching for elements of two n-
element sets based on given matching preferences. This problem always has
a solution that can be found by the Gale-Shapley algorithm.
11
Limitations of Algorithm Power
Intellect distinguishes between the possible and the impossible; reason
distinguishes between the sensible and the senseless. Even the possible can
be senseless.
Max Born (1882–1970), My Life and My Views, 1968
In the preceding chapters of this book, we encountered dozens of algorithms
for solving a variety of different problems. A fair assessment of algorithms as
problem-solving tools is inescapable: they are very powerful instruments, espe-
cially when they are executed by modern computers. But the power of algorithms
is not unlimited, and its limits are the subject of this chapter. As we shall see, some
problems cannot be solved by any algorithm. Other problems can be solved algo-
rithmically but not in polynomial time. And even when a problem can be solved
in polynomial time by some algorithms, there are usually lower bounds on their
efficiency.
We start, in Section 11.1, with methods for obtaining lower bounds, which are
estimates on a minimum amount of work needed to solve a problem. In general,
obtaining a nontrivial lower bound even for a simple-sounding problem is a very
difficult task. As opposed to ascertaining the efficiency of a particular algorithm,
the task here is to establish a limit on the efficiency of any algorithm, known or
unknown. This also necessitates a careful description of the operations such algo-
rithms are allowed to perform. If we fail to define carefully the "rules of the game,"
so to speak, our claims may end up in the large dustbin of impossibility-related
statements as, for example, the one made by the celebrated British physicist Lord
Kelvin in 1895: "Heavier-than-air flying machines are impossible."

Section 11.2 discusses decision trees. This technique allows us, among other
applications, to establish lower bounds on the efficiency of comparison-based
algorithms for sorting and for searching in sorted arrays. As a result, we will be
able to answer such questions as whether it is possible to invent a faster sorting
algorithm than mergesort and whether binary search is the fastest algorithm for
searching in a sorted array. (What does your intuition tell you the answers to these
questions will turn out to be?) Incidentally, decision trees are also a great vehicle
for directing us to a solution of some puzzles, such as the coin-weighing problem
discussed in Section 4.4.
Section 11.3 deals with the question of intractability: which problems can
and cannot be solved in polynomial time. This well-developed area of theoretical
computer science is called computational complexity theory. We present the basic
elements of this theory and discuss informally such fundamental notions as P, NP,
and NP-complete problems, including the most important unresolved question of
theoretical computer science about the relationship between P and NP problems.
The last section of this chapter deals with numerical analysis. This branch
of computer science concerns algorithms for solving problems of continuous
mathematics: solving equations and systems of equations, evaluating such func-
tions as sin x and ln x, computing integrals, and so on. The nature of such problems
imposes two types of limitations. First, most cannot be solved exactly. Second,
solving them even approximately requires dealing with numbers that can be rep-
resented in a digital computer with only a limited level of precision. Manipulating
approximate numbers without proper care can lead to very inaccurate results. We
will see that even solving a basic quadratic equation on a computer poses sig-
nificant difficulties that require a modification of the canonical formula for the
equation's roots.
11.1 Lower-Bound Arguments
We can look at the efficiency of an algorithm two ways. We can establish its asymp-
totic efficiency class (say, for the worst case) and see where this class stands with
respect to the hierarchy of efficiency classes outlined in Section 2.2. For exam-
ple, selection sort, whose efficiency is quadratic, is a reasonably fast algorithm,
whereas the algorithm for the Tower of Hanoi problem is very slow because its ef-
ficiency is exponential. We can argue, however, that this comparison is akin to the
proverbial comparison of apples to oranges because these two algorithms solve
different problems. The alternative and possibly fairer approach is to ask how
efficient a particular algorithm is with respect to other algorithms for the same
problem. Seen in this light, selection sort has to be considered slow because there
are O(n log n) sorting algorithms; the Tower of Hanoi algorithm, on the other
hand, turns out to be the fastest possible for the problem it solves.

When we want to ascertain the efficiency of an algorithm with respect to other
algorithms for the same problem, it is desirable to know the best possible efficiency
any algorithm solving the problem may have. Knowing such a lower bound can
tell us how much improvement we can hope to achieve in our quest for a better
algorithm for the problem in question. If such a bound is tight, i.e., we already
know an algorithm in the same efficiency class as the lower bound, we can hope
for a constant-factor improvement at best. If there is a gap between the efficiency
of the fastest algorithm and the best lower bound known, the door for possible
improvement remains open: either a faster algorithm matching the lower bound
could exist or a better lower bound could be proved.
In this section, we present several methods for establishing lower bounds and
illustrate them with specific examples. As we did in analyzing the efficiency of
specific algorithms in the preceding chapters, we should distinguish between a
lower-bound class and a minimum number of times a particular operation needs
to be executed. As a rule, the second problem is more difficult than the first.
For example, we can immediately conclude that any algorithm for finding the
median of n numbers must be in Ω(n) (why?), but it is not simple at all to prove
that any comparison-based algorithm for this problem must do at least 3(n − 1)/2
comparisons in the worst case (for odd n).
Trivial Lower Bounds
The simplest method of obtaining a lower-bound class is based on counting the
number of items in the problem's input that must be processed and the number of
output items that need to be produced. Since any algorithm must at least "read" all
the items it needs to process and "write" all its outputs, such a count yields a trivial
lower bound. For example, any algorithm for generating all permutations of n
distinct items must be in Ω(n!) because the size of the output is n!. And this bound
is tight because good algorithms for generating permutations spend a constant
time on each of them except the initial one (see Section 4.3).

As another example, consider the problem of evaluating a polynomial of
degree n

    p(x) = a_n x^n + a_{n−1} x^{n−1} + . . . + a_0

at a given point x, given its coefficients a_n, a_{n−1}, . . . , a_0. It is easy to see that all the
coefficients have to be processed by any polynomial-evaluation algorithm. Indeed,
if it were not the case, we could change the value of an unprocessed coefficient,
which would change the value of the polynomial at a nonzero point x. This means
that any such algorithm must be in Ω(n). This lower bound is tight because both
the right-to-left evaluation algorithm (Problem 2 in Exercises 6.5) and Horner's
rule (Section 6.5) are linear.
In a similar vein, a trivial lower bound for computing the product of two
n × n matrices is Ω(n²) because any such algorithm has to process 2n² elements
in the input matrices and generate n² elements of the product. It is still unknown,
however, whether this bound is tight.

Trivial lower bounds are often too low to be useful. For example, the trivial
bound for the traveling salesman problem is Ω(n²), because its input is n(n − 1)/2
intercity distances and its output is a list of n + 1 cities making up an optimal tour.
But this bound is all but useless because there is no known algorithm with the
running time being a polynomial function of any degree.

There is another obstacle to deriving a meaningful lower bound by this
method. It lies in determining which part of an input must be processed by any
algorithm solving the problem in question. For example, searching for an ele-
ment of a given value in a sorted array does not require processing all its elements
(why?). As another example, consider the problem of determining connectivity of
an undirected graph defined by its adjacency matrix. It is plausible to expect that
any such algorithm would have to check the existence of each of the n(n − 1)/2
potential edges, but the proof of this fact is not trivial.
Information-Theoretic Arguments
While the approach outlined above takes into account the size of a problem's
output, the information-theoretical approach seeks to establish a lower bound
based on the amount of information it has to produce. Consider, as an example,
the well-known game of deducing a positive integer between 1 and n selected
by somebody by asking that person questions with yes/no answers. The amount of
uncertainty that any algorithm solving this problem has to resolve can be measured
by ⌈log₂ n⌉, the number of bits needed to specify a particular number among the
n possibilities. We can think of each question (or, to be more accurate, an answer
to each question) as yielding at most 1 bit of information about the algorithm's
output, i.e., the selected number. Consequently, any such algorithm will need at
least ⌈log₂ n⌉ such steps before it can determine its output in the worst case.
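The strategy that achieves this bound is binary search on the interval of still-possible numbers; each yes/no answer halves the interval. A small Python sketch (the helper names are illustrative):

from math import ceil, log2

def guess(n, answer_is_leq):
    """Identify an integer in 1..n by asking questions of the form
    'Is the number <= m?'; never asks more than ceil(log2(n)) questions."""
    lo, hi, questions = 1, n, 0
    while lo < hi:
        mid = (lo + hi) // 2
        questions += 1
        if answer_is_leq(mid):     # each answer yields 1 bit of information
            hi = mid
        else:
            lo = mid + 1
    return lo, questions

secret = 13
print(guess(20, lambda m: secret <= m), ceil(log2(20)))
# (13, 4) 5: this instance needed 4 of the at most 5 questions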
The approach we just exploited is called the information-theoretic argument
because of its connection to information theory. It has proved to be quite useful
for finding the so-called information-theoretic lower bounds for many problems
involving comparisons, including sorting and searching. Its underlying idea can be
realized much more precisely through the mechanism of decision trees. Because
of the importance of this technique, we discuss it separately and in more detail in
Section 11.2.
Adversary Arguments
Let us revisit the same game of guessing a number used to introduce the idea of
an information-theoretic argument. We can prove that any algorithm that solves
this problem must ask at least ⌈log₂ n⌉ questions in its worst case by playing the
role of a hostile adversary who wants to make an algorithm ask as many questions
as possible. The adversary starts by considering each of the numbers between
1 and n as being potentially selected. (This is cheating, of course, as far as the
game is concerned, but not as a way to prove our assertion.) After each question,
the adversary gives an answer that leaves him with the largest set of numbers
consistent with this and all the previously given answers. This strategy leaves
him with at least one-half of the numbers he had before his last answer. If an
algorithm stops before the size of the set is reduced to 1, the adversary can exhibit
a number that could be a legitimate input the algorithm failed to identify. It is a
simple technical matter now to show that one needs ⌈log₂ n⌉ iterations to shrink
an n-element set to a one-element set by halving and rounding up the size of the
remaining set. Hence, at least ⌈log₂ n⌉ questions need to be asked by any algorithm
in the worst case.
This example illustrates the adversary method for establishing lower bounds.
It is based on following the logic of a malevolent but honest adversary: the malev-
olence makes him push the algorithm down the most time-consuming path, and
his honesty forces him to stay consistent with the choices already made. A lower
bound is then obtained by measuring the amount of work needed to shrink a set
of potential inputs to a single input along the most time-consuming path.
As another example, consider the problem of merging two sorted lists of size n

    a_1 < a_2 < . . . < a_n and b_1 < b_2 < . . . < b_n

into a single sorted list of size 2n. For simplicity, we assume that all the a's and
b's are distinct, which gives the problem a unique solution. We encountered this
problem when discussing mergesort in Section 5.1. Recall that we did merging by
repeatedly comparing the first elements in the remaining lists and outputting the
smaller among them. The number of key comparisons in the worst case for this
algorithm for merging is 2n − 1.
Is there an algorithm that can do merging faster? The answer turns out to
be no. Knuth [KnuIII, p. 198] quotes the following adversary method for proving
that 2n − 1 is a lower bound on the number of key comparisons made by any
comparison-based algorithm for this problem. The adversary will employ the
following rule: reply true to the comparison a_i < b_j if and only if i < j. This will
force any correct merging algorithm to produce the only combined list consistent
with this rule:

    b_1 < a_1 < b_2 < a_2 < . . . < b_n < a_n.

To produce this combined list, any correct algorithm will have to explicitly com-
pare 2n − 1 adjacent pairs of its elements, i.e., b_1 to a_1, a_1 to b_2, and so on. If one
of these comparisons has not been made, e.g., a_1 has not been compared to b_2, we
can transpose these keys to get

    b_1 < b_2 < a_1 < a_2 < . . . < b_n < a_n,

which is consistent with all the comparisons made but cannot be distinguished
from the correct configuration given above. Hence, 2n − 1 is, indeed, a lower
bound for the number of key comparisons needed for any merging algorithm.
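One can watch the adversary's rule in action by merging a concrete pair of lists ordered as b_1 < a_1 < b_2 < a_2 < . . .; taking a_i = 2i and b_j = 2j − 1 realizes the rule "a_i < b_j if and only if i < j." A minimal Python sketch with a comparison counter:

def merge_count(a, b):
    """Standard two-way merge; returns the merged list and the number
    of key comparisons performed."""
    merged, i, j, comparisons = [], 0, 0, 0
    while i < len(a) and j < len(b):
        comparisons += 1
        if a[i] < b[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(b[j]); j += 1
    merged += a[i:] + b[j:]       # append the leftover tail without comparisons
    return merged, comparisons

n = 6
a = [2 * k for k in range(1, n + 1)]        # a_i = 2i
b = [2 * k - 1 for k in range(1, n + 1)]    # b_j = 2j - 1
print(merge_count(a, b)[1] == 2 * n - 1)    # True: 11 comparisons for n = 6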
Problem Reduction
We have already encountered the problem-reduction approach in Section 6.6.
There, we discussed getting an algorithm for problem P by reducing it to another
problem Q solvable with a known algorithm. A similar reduction idea can be used
for finding a lower bound. To show that problem P is at least as hard as another
problem Q with a known lower bound, we need to reduce Q to P (not P to Q!).
In other words, we should show that an arbitrary instance of problem Q can be
transformed (in a reasonably efficient fashion) to an instance of problem P, so
any algorithm solving P would solve Q as well. Then a lower bound for Q will be
a lower bound for P. Table 11.1 lists several important problems that are often
used for this purpose.
TABLE 11.1 Problems often used for establishing lower bounds
by problem reduction

Problem                                Lower bound     Tightness
sorting                                Ω(n log n)      yes
searching in a sorted array            Ω(log n)        yes
element uniqueness problem             Ω(n log n)      yes
multiplication of n-digit integers     Ω(n)            unknown
multiplication of n × n matrices       Ω(n²)           unknown
We will establish the lower bounds for sorting and searching in the next sec-
tion. The element uniqueness problem asks whether there are duplicates among n
given numbers. (We encountered this problem in Sections 2.3 and 6.1.) The proof
of the lower bound for this seemingly simple problem is based on a very sophisti-
cated mathematical analysis that is well beyond the scope of this book (see, e.g.,
[Pre85] for a rather elementary exposition). As to the last two algebraic prob-
lems in Table 11.1, the lower bounds quoted are trivial, but whether they can be
improved remains unknown.

As an example of establishing a lower bound by reduction, let us consider
the Euclidean minimum spanning tree problem: given n points in the Cartesian
plane, construct a tree of minimum total length whose vertices are the given
points. As a problem with a known lower bound, we use the element uniqueness
problem. We can transform any set x_1, x_2, . . . , x_n of n real numbers into a set
of n points in the Cartesian plane by simply adding 0 as the points' y coordinate:
(x_1, 0), (x_2, 0), . . . , (x_n, 0). Let T be a minimum spanning tree found for this set of
points. Since T must contain a shortest edge, checking whether T contains a zero-
length edge will answer the question about uniqueness of the given numbers. This
reduction implies that Ω(n log n) is a lower bound for the Euclidean minimum
spanning tree problem, too.
Since the final results about the complexity of many problems are not known,
the reduction technique is often used to compare the relative complexity of prob-
lems. For example, the formulas

    x · y = ((x + y)² − (x − y)²)/4 and x² = x · x

show that the problems of computing the product of two n-digit integers and
squaring an n-digit integer belong to the same complexity class, despite the latter
being seemingly simpler than the former.
There are several similar results for matrix operations. For example, multi-
plying two symmetric matrices turns out to be in the same complexity class as
multiplying two arbitrary square matrices. This result is based on the observation
that not only is the former problem a special case of the latter one, but also that
we can reduce the problem of multiplying two arbitrary square matrices of order
n, say, A and B, to the problem of multiplying two symmetric matrices

    X = | 0    A |    and    Y = | 0    B^T |
        | A^T  0 |               | B    0   |,
,
where A
T
and B
T
are the transpose matrices of Aand B (i.e., A
T
[i, j] =A[j, i] and
B
T
[i, j] =B[j, i]), respectively, and 0 stands for the n n matrix whose elements
are all zeros. Indeed,
XY =
_
0 A
A
T
0
_ _
0 B
T
B 0
_
=
_
AB 0
0 A
T
B
T
_
,
from which the needed product AB can be easily extracted. (True, we will have
to multiply matrices twice the original size, but this is just a minor technical
complication with no impact on the complexity classes.)
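The identity is easy to check numerically. The following Python sketch builds the symmetric matrices X and Y from two arbitrary 2 × 2 matrices (the values are illustrative) and recovers AB from the upper-left block of XY:

def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def embed(upper_right, lower_left):
    """Assemble the 2n-by-2n block matrix [[0, UR], [LL, 0]]."""
    n = len(upper_right)
    zero = [[0] * n for _ in range(n)]
    return [z + u for z, u in zip(zero, upper_right)] + \
           [l + z for l, z in zip(lower_left, zero)]

A = [[1, 2], [3, 4]]              # two arbitrary square matrices
B = [[5, 6], [7, 8]]
X = embed(A, transpose(A))        # X = [[0, A], [A^T, 0]] is symmetric
Y = embed(transpose(B), B)        # Y = [[0, B^T], [B, 0]] is symmetric
XY = matmul(X, Y)
AB = [row[:2] for row in XY[:2]]  # AB sits in the upper-left block of XY
print(AB == matmul(A, B))         # True: both equal [[19, 22], [43, 50]]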
Though such results are interesting, we will encounter even more important
applications of the reduction approach to comparing problem complexity in Sec-
tion 11.3.
Exercises 11.1
1. Prove that any algorithm solving the alternating-disk puzzle (Problem 14 in
Exercises 3.1) must make at least n(n + 1)/2 moves to solve it. Is this lower
bound tight?
2. Prove that the classic recursive algorithm for the Tower of Hanoi puzzle
(Section 2.4) makes the minimum number of disk moves needed to solve the
problem.
3. Find a trivial lower-bound class for each of the following problems and indi-
cate, if you can, whether this bound is tight.
a. finding the largest element in an array
b. checking completeness of a graph represented by its adjacency matrix
c. generating all the subsets of an n-element set
d. determining whether n given real numbers are all distinct
4. Consider the problem of identifying a lighter fake coin among n identical-
looking coins with the help of a balance scale. Can we use the same
information-theoretic argument as the one in the text for the number of ques-
tions in the guessing game to conclude that any algorithm for identifying the
fake will need at least ⌈log₂ n⌉ weighings in the worst case?
5. Prove that any comparison-based algorithm for finding the largest element of
an n-element set of real numbers must make n − 1 comparisons in the worst
case.
6. Find a tight lower bound for sorting an array by exchanging its adjacent
elements.
7. Give an adversary-argument proof that the time efficiency of any algorithm
that checks connectivity of a graph with n vertices is in Ω(n²), provided the
only operation allowed for an algorithm is to inquire about the presence of
an edge between two vertices of the graph. Is this lower bound tight?
8. What is the minimum number of comparisons needed for a comparison-based
sorting algorithm to merge any two sorted lists of sizes n and n + 1 elements,
respectively? Prove the validity of your answer.
9. Find the product of matrices A and B through a transformation to a product
of two symmetric matrices if

    A = | 1  1 |    and    B = | 0  1 |
        | 2  3 |               | 1  2 |

10. a. Can one use this section's formulas that indicate the complexity equiva-
lence of multiplication and squaring of integers to show the complexity
equivalence of multiplication and squaring of square matrices?
b. Show that multiplication of two matrices of order n can be reduced to
squaring a matrix of order 2n.
11. Find a tight lower-bound class for the problem of finding two closest numbers
among n real numbers x_1, x_2, . . . , x_n.
12. Find a tight lower-bound class for the number placement problem (Problem 9
in Exercises 6.1).
11.2 Decision Trees
Many important algorithms, especially those for sorting and searching, work by
comparing items of their inputs. We can study the performance of such algorithms
with a device called a decision tree. As an example, Figure 11.1 presents a decision
tree of an algorithm for finding a minimum of three numbers. Each internal node
of a binary decision tree represents a key comparison indicated in the node,
e.g., k < k'. The node's left subtree contains the information about subsequent
comparisons made if k < k', and its right subtree does the same for the case of
k > k'. (For the sake of simplicity, we assume throughout this section that all input
items are distinct.) Each leaf represents a possible outcome of the algorithm's
run on some input of size n. Note that the number of leaves can be greater than
the number of outcomes because, for some algorithms, the same outcome can
be arrived at through a different chain of comparisons. (This happens to be the
case for the decision tree in Figure 11.1.) An important point is that the number of
leaves must be at least as large as the number of possible outcomes. The algorithm's
work on a particular input of size n can be traced by a path from the root to a leaf
in its decision tree.
FIGURE 11.1 Decision tree for finding a minimum of three numbers. (Diagram: the root
compares a < b; its subtrees compare a < c and b < c; the four leaves
output a, c, b, and c, respectively.)
The number of comparisons made by the algorithm on such a run is equal to the
length of this path. Hence, the number of comparisons in the worst case is equal
to the height of the algorithm's decision tree.
The central idea behind this model lies in the observation that a tree with a
given number of leaves, which is dictated by the number of possible outcomes, has
to be tall enough to have that many leaves. Specifically, it is not difficult to prove
that for any binary tree with l leaves and height h,

    h ≥ ⌈log₂ l⌉.    (11.1)

Indeed, a binary tree of height h with the largest number of leaves has all its leaves
on the last level (why?). Hence, the largest number of leaves in such a tree is 2^h.
In other words, 2^h ≥ l, which immediately implies (11.1).

Inequality (11.1) puts a lower bound on the heights of binary decision trees
and hence the worst-case number of comparisons made by any comparison-based
algorithm for the problem in question. Such a bound is called the information-
theoretic lower bound (see Section 11.1). We illustrate this technique below on
two important problems: sorting and searching in a sorted array.
Decision Trees for Sorting
Most sorting algorithms are comparison based, i.e., they work by comparing elements in a list to be sorted. By studying properties of decision trees for such algorithms, we can derive important lower bounds on their time efficiencies.
We can interpret an outcome of a sorting algorithm as finding a permutation of the element indices of an input list that puts the list's elements in ascending order. Consider, as an example, a three-element list a, b, c of orderable items such as real numbers or strings. For the outcome a < c < b obtained by sorting this list (see Figure 11.2), the permutation in question is 1, 3, 2. In general, the number of possible outcomes for sorting an arbitrary n-element list is equal to n!.
FIGURE 11.2 Decision tree for the three-element selection sort. A triple above a node indicates the state of the array being sorted. Note two redundant comparisons b < a with a single possible outcome because of the results of some previously made comparisons.
Inequality (11.1) implies that the height of a binary decision tree for any comparison-based sorting algorithm and hence the worst-case number of comparisons made by such an algorithm cannot be less than ⌈log₂ n!⌉:

    C_worst(n) ≥ ⌈log₂ n!⌉.    (11.2)

Using Stirling's formula for n!, we get

    ⌈log₂ n!⌉ ≈ log₂ √(2πn)(n/e)ⁿ = n log₂ n − n log₂ e + (log₂ n)/2 + (log₂ 2π)/2 ≈ n log₂ n.
In other words, about n log₂ n comparisons are necessary in the worst case to sort an arbitrary n-element list by any comparison-based sorting algorithm. Note that mergesort makes about this number of comparisons in its worst case and hence is asymptotically optimal. This also implies that the asymptotic lower bound n log₂ n is tight and therefore cannot be substantially improved. We should point out, however, that the lower bound of ⌈log₂ n!⌉ can be improved for some values of n. For example, ⌈log₂ 12!⌉ = 29, but it has been proved that 30 comparisons are necessary (and sufficient) to sort an array of 12 elements in the worst case.
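These quantities are easy to check numerically. The following short Python sketch (an illustration, not part of the book's text) computes the information-theoretic bound ⌈log₂ n!⌉ exactly and compares it with n log₂ n:

    import math

    def sorting_lower_bound(n):
        # information-theoretic bound: ceil(log2(n!)) key comparisons
        return math.ceil(math.log2(math.factorial(n)))

    for n in (4, 10, 12, 100):
        print(n, sorting_lower_bound(n), round(n * math.log2(n)))
    # For n = 12 the bound is 29, even though 30 comparisons
    # are in fact necessary, as noted above.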
We can also use decision trees for analyzing the average-case efficiencies of comparison-based sorting algorithms. We can compute the average number of comparisons for a particular algorithm as the average depth of its decision tree's leaves, i.e., as the average path length from the root to the leaves. For example, for
FIGURE 11.3 Decision tree for the three-element insertion sort.
the three-element insertion sort whose decision tree is given in Figure 11.3, this number is (2 + 3 + 3 + 2 + 3 + 3)/6 = 2⅔.
Under the standard assumption that all n! outcomes of sorting are equally likely, the following lower bound on the average number of comparisons C_avg made by any comparison-based algorithm in sorting an n-element list has been proved:

    C_avg(n) ≥ log₂ n!.    (11.3)
As we saw earlier, this lower bound is about n log₂ n. You might be surprised that the lower bounds for the average and worst cases are almost identical. Remember, however, that these bounds are obtained by maximizing the number of comparisons made in the average and worst cases, respectively. For a particular sorting algorithm, the average-case efficiency can, of course, be significantly better than its worst-case efficiency.
Decision Trees for Searching a Sorted Array
In this section, we shall see how decision trees can be used for establishing lower bounds on the number of key comparisons in searching a sorted array of n keys: A[0] < A[1] < . . . < A[n − 1]. The principal algorithm for this problem is binary search. As we saw in Section 4.4, the number of comparisons made by binary search in the worst case, C_worst^bs(n), is given by the formula

    C_worst^bs(n) = ⌊log₂ n⌋ + 1 = ⌈log₂(n + 1)⌉.    (11.4)
FIGURE 11.4 Ternary decision tree for binary search in a four-element array.
We will use decision trees to determine whether this is the smallest possible
number of comparisons.
Since we are dealing here with three-way comparisons in which search key K is compared with some element A[i] to see whether K < A[i], K = A[i], or K > A[i], it is natural to try using ternary decision trees. Figure 11.4 presents such a tree for the case of n = 4. The internal nodes of that tree indicate the array's elements being compared with the search key. The leaves indicate either a matching element in the case of a successful search or a found interval that the search key belongs to in the case of an unsuccessful search.
We can represent any algorithm for searching a sorted array by three-way comparisons with a ternary decision tree similar to that in Figure 11.4. For an array of n elements, all such decision trees will have 2n + 1 leaves (n for successful searches and n + 1 for unsuccessful ones). Since the minimum height h of a ternary tree with l leaves is ⌈log₃ l⌉, we get the following lower bound on the number of worst-case comparisons:

    C_worst(n) ≥ ⌈log₃(2n + 1)⌉.
This lower bound is smaller than ⌈log₂(n + 1)⌉, the number of worst-case comparisons for binary search, at least for large values of n (and smaller than or equal to ⌈log₂(n + 1)⌉ for every positive integer n; see Problem 7 in this section's exercises). Can we prove a better lower bound, or is binary search far from being optimal? The answer turns out to be the former. To obtain a better lower bound, we should consider binary rather than ternary decision trees, such as the one in Figure 11.5. Internal nodes in such a tree correspond to the same three-way comparisons as before, but they also serve as terminal nodes for successful searches. Leaves therefore represent only unsuccessful searches, and there are n + 1 of them for searching an n-element array.
FIGURE 11.5 Binary decision tree for binary search in a four-element array.
As comparison of the decision trees in Figures 11.4 and 11.5 illustrates, the binary decision tree is simply the ternary decision tree with all the middle subtrees eliminated. Applying inequality (11.1) to such binary decision trees immediately yields

    C_worst(n) ≥ ⌈log₂(n + 1)⌉.    (11.5)

This inequality closes the gap between the lower bound and the number of worst-case comparisons made by binary search, which is also ⌈log₂(n + 1)⌉. A much more sophisticated analysis (see, e.g., [KnuIII, Section 6.2.1]) shows that under the standard assumptions about searches, binary search makes the smallest number of comparisons on the average, as well. The average number of comparisons made by this algorithm turns out to be about log₂ n − 1 and log₂(n + 1) for successful and unsuccessful searches, respectively.
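As a quick numerical illustration (a sketch, not from the book's text), one can tabulate the two bounds side by side in Python using exact integer arithmetic:

    def ternary_bound(n):
        # smallest h with 3**h >= 2*n + 1, i.e., ceil(log3(2n + 1))
        h = 0
        while 3**h < 2 * n + 1:
            h += 1
        return h

    def binary_bound(n):
        # ceil(log2(n + 1)); for positive n this equals n.bit_length()
        return n.bit_length()

    for n in (4, 10, 100, 10**6):
        print(n, ternary_bound(n), binary_bound(n))
    # e.g., for n = 100 the ternary bound is 5 while the binary bound is 7,
    # and binary search's worst case meets the stronger bound exactly.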
Exercises 11.2
1. Prove by mathematical induction that
a. h ≥ ⌈log₂ l⌉ for any binary tree with height h and the number of leaves l.
b. h ≥ ⌈log₃ l⌉ for any ternary tree with height h and the number of leaves l.
2. Consider the problem of finding the median of a three-element set {a, b, c} of orderable items.
a. What is the information-theoretic lower bound for comparison-based al-
gorithms solving this problem?
b. Draw a decision tree for an algorithm solving this problem.
c. If the worst-case number of comparisons in your algorithm is greater than the information-theoretic lower bound, do you think an algorithm matching the lower bound exists? (Either find such an algorithm or prove its impossibility.)
3. Draw a decision tree and find the number of key comparisons in the worst and average cases for
a. the three-element basic bubble sort.
b. the three-element enhanced bubble sort (which stops if no swaps have been made on its last pass).
4. Design a comparison-based algorithm for sorting a four-element array with
the smallest number of element comparisons possible.
5. Design a comparison-based algorithm for sorting a five-element array with seven comparisons in the worst case.
6. Draw a binary decision tree for searching a four-element sorted list by sequential search.
7. Compare the two lower bounds for searching a sorted array, ⌈log₃(2n + 1)⌉ and ⌈log₂(n + 1)⌉, to show that
a. ⌈log₃(2n + 1)⌉ ≤ ⌈log₂(n + 1)⌉ for every positive integer n.
b. ⌈log₃(2n + 1)⌉ < ⌈log₂(n + 1)⌉ for every positive integer n ≥ n₀.
8. What is the information-theoretic lower bound for finding the maximum of n numbers by comparison-based algorithms? Is this bound tight?
9. A tournament tree is a complete binary tree reflecting results of a knockout tournament: its leaves represent n players entering the tournament, and each internal node represents a winner of a match played by the players represented by the node's children. Hence, the winner of the tournament is represented by the root of the tree.
a. What is the total number of games played in such a tournament?
b. How many rounds are there in such a tournament?
c. Design an efficient algorithm to determine the second-best player using the information produced by the tournament. How many extra games does your algorithm require?
10. Advanced fake-coin problem  There are n ≥ 3 coins identical in appearance; either all are genuine or exactly one of them is fake. It is unknown whether the fake coin is lighter or heavier than the genuine one. You have a balance scale with which you can compare any two sets of coins. That is, by tipping to the left, to the right, or staying even, the balance scale will tell whether the sets weigh the same or which of the sets is heavier than the other, but not by how much. The problem is to find whether all the coins are genuine and, if not, to find the fake coin and establish whether it is lighter or heavier than the genuine ones.
a. Prove that any algorithm for this problem must make at least ⌈log₃(2n + 1)⌉ weighings in the worst case.
b. Draw a decision tree for an algorithm that solves the problem for n = 3 coins in two weighings.
c. Prove that there exists no algorithm that solves the problem for n = 4 coins in two weighings.
d. Draw a decision tree for an algorithm that solves the problem for n = 4 coins in two weighings by using an extra coin known to be genuine.
e. Draw a decision tree for an algorithm that solves the classic version of the problem, that for n = 12 coins in three weighings (with no extra coins being used).
11. Jigsaw puzzle  A jigsaw puzzle contains n pieces. A section of the puzzle is a set of one or more pieces that have been connected to each other. A move consists of connecting two sections. What algorithm will minimize the number of moves required to complete the puzzle?
11.3 P, NP, and NP-Complete Problems
In the study of the computational complexity of problems, the first concern of both computer scientists and computing professionals is whether a given problem can be solved in polynomial time by some algorithm.
DEFINITION 1  We say that an algorithm solves a problem in polynomial time if its worst-case time efficiency belongs to O(p(n)) where p(n) is a polynomial of the problem's input size n. (Note that since we are using big-oh notation here, problems solvable in, say, logarithmic time are solvable in polynomial time as well.) Problems that can be solved in polynomial time are called tractable, and problems that cannot be solved in polynomial time are called intractable.
There are several reasons for drawing the intractability line in this way. First, the entries of Table 2.1 and their discussion in Section 2.1 imply that we cannot solve arbitrary instances of intractable problems in a reasonable amount of time unless such instances are very small. Second, although there might be a huge difference between the running times in O(p(n)) for polynomials of drastically different degrees, there are very few useful polynomial-time algorithms with the degree of a polynomial higher than three. In addition, polynomials that bound running times of algorithms do not usually have extremely large coefficients. Third, polynomial functions possess many convenient properties; in particular, both the sum and composition of two polynomials are always polynomials too. Fourth, the choice of this class has led to a development of an extensive theory called computational complexity, which seeks to classify problems according to their inherent difficulty. And according to this theory, a problem's intractability remains the same for all principal models of computations and all reasonable input-encoding schemes for the problem under consideration.
We just touch on some basic notions and ideas of complexity theory in this section. If you are interested in a more formal treatment of this theory, you will have no trouble finding a wealth of textbooks devoted to the subject (e.g., [Sip05], [Aro09]).
P and NP Problems
Most problems discussed in this book can be solved in polynomial time by some algorithm. They include computing the product and the greatest common divisor of two integers, sorting a list, searching for a key in a list or for a pattern in a text string, checking connectivity and acyclicity of a graph, and finding a minimum spanning tree and shortest paths in a weighted graph. (You are invited to add more examples to this list.) Informally, we can think about problems that can be solved in polynomial time as the set that computer science theoreticians call P. A more formal definition includes in P only decision problems, which are problems with yes/no answers.
DEFINITION 2 Class P is a class of decision problems that can be solved in
polynomial time by (deterministic) algorithms. This class of problems is called
polynomial.
The restriction of P to decision problems can be justified by the following reasons. First, it is sensible to exclude problems not solvable in polynomial time because of their exponentially large output. Such problems do arise naturally (e.g., generating subsets of a given set or all the permutations of n distinct items), but it is apparent from the outset that they cannot be solved in polynomial time. Second, many important problems that are not decision problems in their most natural formulation can be reduced to a series of decision problems that are easier to study. For example, instead of asking about the minimum number of colors needed to color the vertices of a graph so that no two adjacent vertices are colored the same color, we can ask whether there exists such a coloring of the graph's vertices with no more than m colors for m = 1, 2, . . . . (The latter is called the m-coloring problem.) The first value of m in this series for which the decision problem of m-coloring has a solution solves the optimization version of the graph-coloring problem as well.
It is natural to wonder whether every decision problem can be solved in polynomial time. The answer to this question turns out to be no. In fact, some decision problems cannot be solved at all by any algorithm. Such problems are called undecidable, as opposed to decidable problems that can be solved by an algorithm. A famous example of an undecidable problem was given by Alan Turing in 1936.¹ The problem in question is called the halting problem: given a computer program and an input to it, determine whether the program will halt on that input or continue working indefinitely on it.
Here is a surprisingly short proof of this remarkable fact. By way of contradiction, assume that A is an algorithm that solves the halting problem. That is, for any program P and input I,

    A(P, I) = 1, if program P halts on input I;
    A(P, I) = 0, if program P does not halt on input I.

We can consider program P as an input to itself and use the output of algorithm A for pair (P, P) to construct a program Q as follows:

    Q(P) halts,         if A(P, P) = 0, i.e., if program P does not halt on input P;
    Q(P) does not halt, if A(P, P) = 1, i.e., if program P halts on input P.

Then on substituting Q for P, we obtain

    Q(Q) halts,         if A(Q, Q) = 0, i.e., if program Q does not halt on input Q;
    Q(Q) does not halt, if A(Q, Q) = 1, i.e., if program Q halts on input Q.

This is a contradiction because neither of the two outcomes for program Q is possible, which completes the proof.
Are there decidable but intractable problems? Yes, there are, but the number
of known examples is surprisingly small, especially of those that arise naturally
rather than being constructed for the sake of a theoretical argument.
There are many important problems, however, for which no polynomial-time
algorithm has been found, nor has the impossibility of such an algorithm been
proved. The classic monograph by M. Garey and D. Johnson [Gar79] contains a
list of several hundred such problems from different areas of computer science,
mathematics, and operations research. Here is just a small sample of some of the
best-known problems that fall into this category:
Hamiltonian circuit problem  Determine whether a given graph has a Hamiltonian circuit, a path that starts and ends at the same vertex and passes through all the other vertices exactly once.
Traveling salesman problem  Find the shortest tour through n cities with known positive integer distances between them (find the shortest Hamiltonian circuit in a complete graph with positive integer weights).
1. This was just one of many breakthrough contributions to theoretical computer science made by the English mathematician and computer science pioneer Alan Turing (1912–1954). In recognition of this, the ACM, the principal society of computing professionals and researchers, has named after him an award given for outstanding contributions to theoretical computer science. A lecture given on such an occasion by Richard Karp [Kar86] provides an interesting historical account of the development of complexity theory.
Knapsack problem  Find the most valuable subset of n items of given positive integer weights and values that fit into a knapsack of a given positive integer capacity.
Partition problem  Given n positive integers, determine whether it is possible to partition them into two disjoint subsets with the same sum.
Bin-packing problem  Given n items whose sizes are positive rational numbers not larger than 1, put them into the smallest number of bins of size 1.
Graph-coloring problem  For a given graph, find its chromatic number, which is the smallest number of colors that need to be assigned to the graph's vertices so that no two adjacent vertices are assigned the same color.
Integer linear programming problem  Find the maximum (or minimum) value of a linear function of several integer-valued variables subject to a finite set of constraints in the form of linear equalities and inequalities.
Some of these problems are decision problems. Those that are not have decision-version counterparts (e.g., the m-coloring problem for the graph-coloring problem). What all these problems have in common is an exponential (or worse) growth of choices, as a function of input size, from which a solution needs to be found. Note, however, that some problems that also fall under this umbrella can be solved in polynomial time. For example, the Eulerian circuit problem, the problem of the existence of a cycle that traverses all the edges of a given graph exactly once, can be solved in O(n²) time by checking, in addition to the graph's connectivity, whether all the graph's vertices have even degrees. This example is particularly striking: it is quite counterintuitive to expect that the problem about cycles traversing all the edges exactly once (Eulerian circuits) can be so much easier than the seemingly similar problem about cycles visiting all the vertices exactly once (Hamiltonian circuits).
Another common feature of a vast majority of decision problems is the fact that although solving such problems can be computationally difficult, checking whether a proposed solution actually solves the problem is computationally easy, i.e., it can be done in polynomial time. (We can think of such a proposed solution as being randomly generated by somebody leaving us with the task of verifying its validity.) For example, it is easy to check whether a proposed list of vertices is a Hamiltonian circuit for a given graph with n vertices. All we need to check is that the list contains n + 1 vertices of the graph in question, that the first n vertices are distinct whereas the last one is the same as the first, and that every consecutive pair of the list's vertices is connected by an edge. This general observation about decision problems has led computer scientists to the notion of a nondeterministic algorithm.
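To make the verification step concrete, here is a minimal Python sketch of such a polynomial-time checker (the adjacency-set graph representation and the function name are choices made for this illustration, not the book's):

    def is_hamiltonian_circuit(graph, tour):
        # graph maps each vertex to the set of its adjacent vertices;
        # tour is a proposed list of n + 1 vertices
        n = len(graph)
        if len(tour) != n + 1 or tour[0] != tour[-1]:
            return False
        if set(tour[:-1]) != set(graph):   # first n vertices: all distinct
            return False
        return all(v in graph[u] for u, v in zip(tour, tour[1:]))

    # A 4-cycle a-b-c-d has the Hamiltonian circuit a, b, c, d, a:
    g = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c', 'a'}}
    print(is_hamiltonian_circuit(g, ['a', 'b', 'c', 'd', 'a']))  # True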
DEFINITION 3  A nondeterministic algorithm is a two-stage procedure that takes as its input an instance I of a decision problem and does the following.
Nondeterministic (guessing) stage: An arbitrary string S is generated that can be thought of as a candidate solution to the given instance I (but may be complete gibberish as well).
Deterministic (verification) stage: A deterministic algorithm takes both I and S as its input and outputs yes if S represents a solution to instance I. (If S is not a solution to instance I, the algorithm either returns no or is allowed not to halt at all.)
We say that a nondeterministic algorithm solves a decision problem if and only if for every yes instance of the problem it returns yes on some execution. (In other words, we require a nondeterministic algorithm to be capable of guessing a solution at least once and to be able to verify its validity. And, of course, we do not want it to ever output a yes answer on an instance for which the answer should be no.) Finally, a nondeterministic algorithm is said to be nondeterministic polynomial if the time efficiency of its verification stage is polynomial.
Now we can define the class of NP problems.
DEFINITION 4 Class NP is the class of decision problems that can be solved by
nondeterministic polynomial algorithms. This class of problems is called nonde-
terministic polynomial.
Most decision problems are in NP. First of all, this class includes all the problems in P:

    P ⊆ NP.

This is true because, if a problem is in P, we can use the deterministic polynomial-time algorithm that solves it in the verification stage of a nondeterministic algorithm that simply ignores string S generated in its nondeterministic (guessing) stage. But NP also contains the Hamiltonian circuit problem, the partition problem, decision versions of the traveling salesman, the knapsack, graph coloring, and many hundreds of other difficult combinatorial optimization problems cataloged in [Gar79]. The halting problem, on the other hand, is among the rare examples of decision problems that are known not to be in NP.
This leads to the most important open question of theoretical computer science: Is P a proper subset of NP, or are these two classes, in fact, the same? We can put this symbolically as

    P ≟ NP.

Note that P = NP would imply that each of many hundreds of difficult combinatorial decision problems can be solved by a polynomial-time algorithm, although computer scientists have failed to find such algorithms despite their persistent efforts over many years. Moreover, many well-known decision problems are known to be NP-complete (see below), which seems to cast more doubts on the possibility that P = NP.
NP-Complete Problems
Informally, an NP-complete problem is a problem in NP that is as difficult as any other problem in this class because, by definition, any other problem in NP can be reduced to it in polynomial time (shown symbolically in Figure 11.6).
Here are more formal definitions of these concepts.
DEFINITION 5  A decision problem D₁ is said to be polynomially reducible to a decision problem D₂, if there exists a function t that transforms instances of D₁ to instances of D₂ such that:
1. t maps all yes instances of D₁ to yes instances of D₂ and all no instances of D₁ to no instances of D₂
2. t is computable by a polynomial time algorithm
This definition immediately implies that if a problem D₁ is polynomially reducible to some problem D₂ that can be solved in polynomial time, then problem D₁ can also be solved in polynomial time (why?).
DEFINITION 6 A decision problem D is said to be NP-complete if:
1. it belongs to class NP
2. every problem in NP is polynomially reducible to D
The fact that closely related decision problems are polynomially reducible to
each other is not very surprising. For example, let us prove that the Hamiltonian
circuit problem is polynomially reducible to the decision version of the traveling
FIGURE 11.6 Notion of an NP-complete problem. Polynomial-time reductions of NP problems to an NP-complete problem are shown by arrows.
salesman problem. The latter can be stated as the existence problem of a Hamiltonian circuit not longer than a given positive integer m in a given complete graph with positive integer weights. We can map a graph G of a given instance of the Hamiltonian circuit problem to a complete weighted graph G′ representing an instance of the traveling salesman problem by assigning 1 as the weight to each edge in G and adding an edge of weight 2 between any pair of nonadjacent vertices in G. As the upper bound m on the Hamiltonian circuit length, we take m = n, where n is the number of vertices in G (and G′). Obviously, this transformation can be done in polynomial time.
Let G be a yes instance of the Hamiltonian circuit problem. Then G has a Hamiltonian circuit, and its image in G′ will have length n, making the image a yes instance of the decision traveling salesman problem. Conversely, if we have a Hamiltonian circuit of the length not larger than n in G′, then its length must be exactly n (why?) and hence the circuit must be made up of edges present in G, making the inverse image of the yes instance of the decision traveling salesman problem be a yes instance of the Hamiltonian circuit problem. This completes the proof.
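The transformation itself is mechanical enough to write down. The following Python sketch (an illustration with representation choices of its own, not the book's code) builds the complete weighted graph G′ and the bound m from a graph given as adjacency sets:

    from itertools import combinations

    def hc_to_tsp_instance(graph):
        # edges of G get weight 1, all missing edges get weight 2;
        # the tour-length bound is m = n, the number of vertices
        weights = {}
        for u, v in combinations(graph, 2):
            weights[(u, v)] = 1 if v in graph[u] else 2
        return weights, len(graph)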
The notion of NP-completeness requires, however, polynomial reducibility of all problems in NP, both known and unknown, to the problem in question. Given the bewildering variety of decision problems, it is nothing short of amazing that specific examples of NP-complete problems have been actually found. Nevertheless, this mathematical feat was accomplished independently by Stephen Cook in the United States and Leonid Levin in the former Soviet Union.² In his 1971 paper, Cook [Coo71] showed that the so-called CNF-satisfiability problem is NP-complete. The CNF-satisfiability problem deals with boolean expressions. Each boolean expression can be represented in conjunctive normal form, such as the following expression involving three boolean variables x₁, x₂, and x₃ and their negations denoted ¬x₁, ¬x₂, and ¬x₃, respectively:

    (x₁ ∨ ¬x₂ ∨ ¬x₃) & (¬x₁ ∨ x₂) & (¬x₁ ∨ ¬x₂ ∨ ¬x₃).

The CNF-satisfiability problem asks whether or not one can assign values true and false to variables of a given boolean expression in its CNF form to make the entire expression true. (It is easy to see that this can be done for the above formula: if x₁ = true, x₂ = true, and x₃ = false, the entire expression is true.)
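Checking a proposed truth assignment against a CNF formula is again a polynomial-time task, as the following Python sketch shows (the encoding of clauses as lists of (variable, negated) pairs is an assumption of this example):

    from itertools import product

    # (x1 or not x2 or not x3) & (not x1 or x2) & (not x1 or not x2 or not x3)
    formula = [[(1, False), (2, True), (3, True)],
               [(1, True), (2, False)],
               [(1, True), (2, True), (3, True)]]

    def satisfies(assignment, formula):
        # a clause is true iff at least one of its literals is true
        return all(any(assignment[v] != neg for v, neg in clause)
                   for clause in formula)

    print(satisfies({1: True, 2: True, 3: False}, formula))   # True

    # Brute force over all 2^n assignments is, of course, exponential:
    print(any(satisfies(dict(zip((1, 2, 3), bits)), formula)
              for bits in product([False, True], repeat=3)))  # True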
Since the Cook-Levin discovery of the first known NP-complete problems, computer scientists have found many hundreds, if not thousands, of other examples. In particular, the well-known problems (or their decision versions) mentioned above (Hamiltonian circuit, traveling salesman, partition, bin packing, and graph coloring) are all NP-complete. It is known, however, that if P ≠ NP there must exist NP problems that neither are in P nor are NP-complete.

2. As it often happens in the history of science, breakthrough discoveries are made independently and almost simultaneously by several scientists. In fact, Levin introduced a more general notion than NP-completeness, which was not limited to decision problems, but his paper [Lev73] was published two years after Cook's.
For a while, the leading candidate to be such an example was the problem of determining whether a given integer is prime or composite. But in an important theoretical breakthrough, Professor Manindra Agrawal and his students Neeraj Kayal and Nitin Saxena of the Indian Institute of Technology in Kanpur announced in 2002 a discovery of a deterministic polynomial-time algorithm for primality testing [Agr04]. Their algorithm does not solve, however, the related problem of factoring large composite integers, which lies at the heart of the widely used encryption method called the RSA algorithm [Riv78].
Showing that a decision problem is NP-complete can be done in two steps. First, one needs to show that the problem in question is in NP; i.e., a randomly generated string can be checked in polynomial time to determine whether or not it represents a solution to the problem. Typically, this step is easy. The second step is to show that every problem in NP is reducible to the problem in question in polynomial time. Because of the transitivity of polynomial reduction, this step can be done by showing that a known NP-complete problem can be transformed to the problem in question in polynomial time (see Figure 11.7). Although such a transformation may need to be quite ingenious, it is incomparably simpler than proving the existence of a transformation for every problem in NP. For example, if we already know that the Hamiltonian circuit problem is NP-complete, its polynomial reducibility to the decision traveling salesman problem implies that the latter is also NP-complete (after an easy check that the decision traveling salesman problem is in class NP).
The definition of NP-completeness immediately implies that if there exists a deterministic polynomial-time algorithm for just one NP-complete problem, then every problem in NP can be solved in polynomial time by a deterministic algorithm, and hence P = NP. In other words, finding a polynomial-time algorithm

FIGURE 11.7 Proving NP-completeness by reduction.
for one NP-complete problem would mean that there is no qualitative difference between the complexity of checking a proposed solution and finding it in polynomial time for the vast majority of decision problems of all kinds. Such implications make most computer scientists believe that P ≠ NP, although nobody has been successful so far in finding a mathematical proof of this intriguing conjecture. Surprisingly, in interviews with the authors of a book about the lives and discoveries of 15 prominent computer scientists [Sha98], Cook seemed to be uncertain about the eventual resolution of this dilemma whereas Levin contended that we should expect the P = NP outcome.
Whatever the eventual answer to the P ≟ NP question proves to be, knowing that a problem is NP-complete has important practical implications for today. It means that faced with a problem known to be NP-complete, we should probably not aim at gaining fame and fortune³ by designing a polynomial-time algorithm for solving all its instances. Rather, we should concentrate on several approaches that seek to alleviate the intractability of such problems. These approaches are outlined in the next chapter of the book.
Exercises 11.3
1. A game of chess can be posed as the following decision problem: given a
legal positioning of chess pieces and information about which side is to move,
determine whether that side can win. Is this decision problem decidable?
2. A certain problem can be solved by an algorithm whose running time is in O(n^(log₂ n)). Which of the following assertions is true?
a. The problem is tractable.
b. The problem is intractable.
c. Impossible to tell.
3. Give examples of the following graphs or explain why such examples cannot
exist.
a. graph with a Hamiltonian circuit but without an Eulerian circuit
b. graph with an Eulerian circuit but without a Hamiltonian circuit
c. graph with both a Hamiltonian circuit and an Eulerian circuit
d. graph with a cycle that includes all the vertices but with neither a Hamil-
tonian circuit nor an Eulerian circuit
3. In 2000, The Clay Mathematics Institute (CMI) of Cambridge, Massachusetts, designated a $1 million
prize for the solution to this problem.
4. For each of the following graphs, find its chromatic number.

(Three graphs, labeled (a), (b), and (c), are shown in the original figure.)
5. Design a polynomial-time algorithm for the graph 2-coloring problem: deter-
mine whether vertices of a given graph can be colored in no more than two
colors so that no two adjacent vertices are colored the same color.
6. Consider the following brute-force algorithm for solving the composite num-
ber problem: Check successive integers from 2 to n/2 as possible divisors of
n. If one of them divides n evenly, return yes (i.e., the number is composite);
if none of them does, return no. Why does this algorithm not put the problem
in class P?
7. State the decision version for each of the following problems and outline a polynomial-time algorithm that verifies whether or not a proposed solution solves the problem. (You may assume that a proposed solution represents a legitimate input to your verification algorithm.)
a. knapsack problem    b. bin-packing problem
8. Show that the partition problem is polynomially reducible to the decision
version of the knapsack problem.
9. Show that the following three problems are polynomially reducible to each other.
(i) Determine, for a given graph G = (V, E) and a positive integer m ≤ |V|, whether G contains a clique of size m or more. (A clique of size k in a graph is its complete subgraph of k vertices.)
(ii) Determine, for a given graph G = (V, E) and a positive integer m ≤ |V|, whether there is a vertex cover of size m or less for G. (A vertex cover of size k for a graph G = (V, E) is a subset V′ ⊆ V such that |V′| = k and, for each edge (u, v) ∈ E, at least one of u and v belongs to V′.)
(iii) Determine, for a given graph G = (V, E) and a positive integer m ≤ |V|, whether G contains an independent set of size m or more. (An independent set of size k for a graph G = (V, E) is a subset V′ ⊆ V such that |V′| = k and for all u, v ∈ V′, vertices u and v are not adjacent in G.)
10. Determine whether the following problem is NP-complete. Given several sequences of uppercase and lowercase letters, is it possible to select a letter from each sequence without selecting both the upper- and lowercase versions of any letter? For example, if the sequences are Abc, BC, aB, and ac, it is possible to choose A from the first sequence, B from the second and third, and c from the fourth. An example where there is no way to make the required selections is given by the four sequences AB, Ab, aB, and ab. [Kar86]
11. Which of the following diagrams do not contradict the current state of our knowledge about the complexity classes P, NP, and NPC (NP-complete problems)?

(Five alternative Venn diagrams, labeled (a) through (e), are shown in the original figure; (a) depicts P = NP = NPC and (b) depicts P = NP with NPC as a proper subset.)
12. King Arthur expects 150 knights for an annual dinner at Camelot. Unfortunately, some of the knights quarrel with each other, and Arthur knows who quarrels with whom. Arthur wants to seat his guests around a table so that no two quarreling knights sit next to each other.
a. Which standard problem can be used to model King Arthur's task?
b. As a research project, find a proof that Arthur's problem has a solution if each knight does not quarrel with at least 75 other knights.
11.4 Challenges of Numerical Algorithms
Numerical analysis is usually described as the branch of computer science concerned with algorithms for solving mathematical problems. This description needs an important clarification: the problems in question are problems of continuous mathematics (solving equations and systems of equations, evaluating such functions as sin x and ln x, computing integrals, and so on) as opposed to problems of discrete mathematics dealing with such structures as graphs, trees, permutations, and combinations. Our interest in efficient algorithms for mathematical problems stems from the fact that these problems arise as models of many real-life phenomena both in the natural world and in the social sciences. In fact, numerical analysis used to be the main area of research, study, and application of computer science. With the rapid proliferation of computers in business and everyday-life applications, which deal primarily with storage and retrieval of information, the relative importance of numerical analysis has shrunk in the last 30 years. However, its applications, enhanced by the power of modern computers, continue to expand in all areas of fundamental research and technology. Thus, wherever one's interests lie in the wide world of modern computing, it is important to have at least some understanding of the special challenges posed by continuous mathematical problems.
We are not going to discuss the variety of difficulties posed by modeling, the task of describing a real-life phenomenon in mathematical terms. Assuming that this has already been done, what principal obstacles to solving a mathematical problem do we face? The first major obstacle is the fact that most numerical analysis problems cannot be solved exactly.⁴ They have to be solved approximately, and this is usually done by replacing an infinite object by a finite approximation. For example, the value of eˣ at a given point x can be computed by approximating its infinite Taylor's series about x = 0 by a finite sum of its first terms, called the nth-degree Taylor polynomial:

    eˣ ≈ 1 + x + x²/2! + . . . + xⁿ/n!.    (11.6)
To give another example, the definite integral of a function can be approximated by a finite weighted sum of its values, as in the composite trapezoidal rule that you might remember from your calculus class:

    ∫ₐᵇ f(x)dx ≈ (h/2)[f(a) + 2Σᵢ₌₁ⁿ⁻¹ f(xᵢ) + f(b)],    (11.7)

where h = (b − a)/n, xᵢ = a + ih for i = 0, 1, . . . , n (Figure 11.8).
The errors of such approximations are called truncation errors. One of the
major tasks in numerical analysis is to estimate the magnitudes of truncation
4. Solving a system of linear equations and polynomial evaluation, discussed in Sections 6.2 and 6.5,
respectively, are rare exceptions to this rule.
FIGURE 11.8 Composite trapezoidal rule.
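Rule (11.7) translates directly into code. Here is a minimal Python sketch (the function name is this example's choice, not the book's):

    import math

    def trapezoidal(f, a, b, n):
        # composite trapezoidal rule (11.7) with n subintervals of width h
        h = (b - a) / n
        return h / 2 * (f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n)))

    # Integrate sin on [0, pi]; the exact value is 2.
    print(trapezoidal(math.sin, 0.0, math.pi, 100))  # about 1.99984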
errors. This is typically done by using calculus tools, from elementary to quite advanced. For example, for approximation (11.6) we have

    |eˣ − [1 + x + x²/2! + . . . + xⁿ/n!]| ≤ (M/(n + 1)!)|x|ⁿ⁺¹,    (11.8)

where M = max eʸ on the segment with the endpoints at 0 and x; for x = 0.5, M ≤ e^0.5 < 2.
Using this bound and the desired accuracy level of 10⁻⁴, we obtain from (11.8)

    (M/(n + 1)!)|0.5|ⁿ⁺¹ < (2/(n + 1)!)0.5ⁿ⁺¹ < 10⁻⁴.

To solve the last inequality, we can compute the first few values of

    (2/(n + 1)!)0.5ⁿ⁺¹ = 2⁻ⁿ/(n + 1)!

to see that the smallest value of n for which this inequality holds is 5.
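A short Python check (an illustration, not the book's) confirms that n = 5 meets the 10⁻⁴ target for x = 0.5:

    import math

    def taylor_exp(x, n):
        # nth-degree Taylor polynomial (11.6) for e**x about 0
        return sum(x**k / math.factorial(k) for k in range(n + 1))

    print(abs(math.exp(0.5) - taylor_exp(0.5, 5)))  # about 2.3e-05 < 1e-4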
Similarly, for approximation (11.7), the standard bound of the truncation error is given by the inequality

    |∫ₐᵇ f(x)dx − (h/2)[f(a) + 2Σᵢ₌₁ⁿ⁻¹ f(xᵢ) + f(b)]| ≤ ((b − a)h²/12)M₂,    (11.9)

where M₂ = max |f″(x)| on the interval a ≤ x ≤ b. You are asked to use this inequality in the exercises for this section (Problems 5 and 6).
The other type of errors, called round-off errors, are caused by the limited accuracy with which we can represent real numbers in a digital computer. These errors arise not only for all irrational numbers (which, by definition, require an infinite number of digits for their exact representation) but for many rational numbers as well. In the overwhelming majority of situations, real numbers are represented as floating-point numbers,

    .d₁d₂ . . . dₚ · B^E,    (11.10)

where B is the number base, usually 2 or 16 (or, for unsophisticated calculators, 10); d₁, d₂, . . . , dₚ are digits (0 ≤ dᵢ < B for i = 1, 2, . . . , p and d₁ > 0 unless the number is 0) representing together the fractional part of the number and called its mantissa; and E is an integer exponent with the range of values approximately symmetric about 0.
The accuracy of the floating-point representation depends on the number of significant digits p in representation (11.10). Most computers permit two or even three levels of precision: single precision (typically equivalent to between 6 and 7 significant decimal digits), double precision (13 to 14 significant decimal digits), and extended precision (19 to 20 significant decimal digits). Using higher-precision arithmetic slows computations but may help to overcome some of the problems caused by round-off errors. Higher precision may need to be used only for a particular step of the algorithm in question.
As with an approximation of any kind, it is important to distinguish between the absolute error and the relative error of representing a number α* by its approximation α:

    absolute error = |α − α*|,    (11.11)

    relative error = |α − α*|/|α*|.    (11.12)

(The relative error is undefined if α* = 0.)
Very large and very small numbers cannot be represented in floating-point arithmetic because of the phenomena called overflow and underflow, respectively. An overflow happens when an arithmetic operation yields a result outside the range of the computer's floating-point numbers. Typical examples of overflow arise from the multiplication of large numbers or division by a very small number. Sometimes we can eliminate this problem by making a simple change in the order in which an expression is evaluated (e.g., (10²⁹ · 11³⁰)/12³⁰ = 10²⁹ · (11/12)³⁰), by replacing an expression with an equal one (e.g., computing the binomial coefficient C(100, 2) not as 100!/(2!(100 − 2)!) but as (100 · 99)/2), or by computing a logarithm of an expression instead of the expression itself.
Underflow occurs when the result of an operation is a nonzero fraction of such a small magnitude that it cannot be represented as a nonzero floating-point number. Usually, underflow numbers are replaced by zero, but a special signal is generated by hardware to indicate such an event has occurred.
It is important to remember that, in addition to inaccurate representation of numbers, the arithmetic operations performed in a computer are not always exact, either. In particular, subtracting two nearly equal floating-point numbers may cause a large increase in relative error. This phenomenon is called subtractive cancellation.
EXAMPLE 1  Consider two irrational numbers

    α* = π = 3.14159265 . . . and β* = π − 6 · 10⁻⁷ = 3.14159205 . . . ,

represented by floating-point numbers α = 0.3141593 · 10¹ and β = 0.3141592 · 10¹, respectively. The relative errors of these approximations are small:

    |α − α*|/α* = 0.0000003 . . . /π < (4/3) · 10⁻⁷

and

    |β − β*|/β* = 0.00000005 . . . /(π − 6 · 10⁻⁷) < (1/3) · 10⁻⁷,

respectively. The relative error of representing the difference α* − β* = 6 · 10⁻⁷ by the difference of the floating-point representations α − β = 10⁻⁶ is

    |(α − β) − (α* − β*)|/(α* − β*) = (10⁻⁶ − 6 · 10⁻⁷)/(6 · 10⁻⁷) = 2/3,

which is very large for a relative error despite quite accurate approximations for both α and β.
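The effect is easy to reproduce in ordinary floating-point arithmetic. The following Python sketch (an illustration only) mimics the seven-digit mantissas above by rounding to six decimal places:

    import math

    a_exact = math.pi
    b_exact = math.pi - 6e-7
    a = round(a_exact, 6)   # 3.141593
    b = round(b_exact, 6)   # 3.141592

    print(abs(a - a_exact) / a_exact)   # about 1.1e-07
    print(abs(b - b_exact) / b_exact)   # about 1.7e-08
    # the relative error of the difference jumps to about 0.667:
    print(abs((a - b) - (a_exact - b_exact)) / (a_exact - b_exact))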
Note that we may get a significant magnification of round-off error if a low-accuracy difference is used as a divisor. (We already encountered this problem in discussing Gaussian elimination in Section 6.2. Our solution there was to use partial pivoting.) Many numerical algorithms involve thousands or even millions of arithmetic operations for typical inputs. For such algorithms, the propagation of round-off errors becomes a major concern from both the practical and theoretical standpoints. For some algorithms, round-off errors can propagate through the algorithm's operations with increasing effect. This highly undesirable property of a numerical algorithm is called instability. Some problems exhibit such a high level of sensitivity to changes in their input that it is all but impossible to design a stable algorithm to solve them. Such problems are called ill-conditioned.
EXAMPLE 2  Consider the following system of two linear equations in two unknowns:

    1.001x + 0.999y = 2
    0.999x + 1.001y = 2.

Its only solution is x = 1, y = 1. To see how sensitive this system is to small changes to its right-hand side, consider the system with the same coefficient matrix but slightly different right-hand side values:

    1.001x + 0.999y = 2.002
    0.999x + 1.001y = 1.998.

The only solution to this system is x = 2, y = 0, which is quite far from the solution to the previous system. Note that the coefficient matrix of this system is close to being singular (why?). Hence, a minor change in its coefficients may yield a system with either no solutions or infinitely many solutions, depending on its right-hand-side values. You can find a more formal and detailed discussion of how we can measure the degree of ill-condition of the coefficient matrix in numerical analysis textbooks (e.g., [Ger03]).
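With a linear-algebra package at hand, the sensitivity is easy to observe directly. This sketch uses NumPy, an outside library chosen for the illustration, not something the book relies on:

    import numpy as np

    A = np.array([[1.001, 0.999],
                  [0.999, 1.001]])

    print(np.linalg.solve(A, [2.0, 2.0]))      # [1. 1.]
    print(np.linalg.solve(A, [2.002, 1.998]))  # [2. 0.]
    # one standard measure of ill-condition is the condition number:
    print(np.linalg.cond(A))                   # about 1000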
We conclude with a well-known problem of finding real roots of the quadratic equation

    ax² + bx + c = 0    (11.13)

for any real coefficients a, b, and c (a ≠ 0). According to secondary-school algebra, equation (11.13) has real roots if and only if its discriminant D = b² − 4ac is nonnegative, and these roots can be found by the following formula

    x₁,₂ = (−b ± √(b² − 4ac))/2a.    (11.14)
Although formula (11.14) provides a complete solution to the posed problem as far as a mathematician is concerned, it is far from being a complete solution for an algorithm designer. The first major obstacle is evaluating the square root. Even for most positive integers D, √D is an irrational number that can be computed only approximately; a standard way to do this is Newton's method, which generates the approximation sequence

    xₙ₊₁ = (1/2)(xₙ + D/xₙ) for n = 0, 1, . . . ,    (11.15)

where the initial approximation x₀ can be chosen, among other possibilities, as x₀ = (1 + D)/2. It is not difficult to prove that sequence (11.15) is decreasing (if D ≠ 1) and always converges to √D. We can stop generating its elements either when the difference between its two consecutive elements is less than a predefined error tolerance ε > 0,

    xₙ − xₙ₊₁ < ε,
or when xₙ₊₁² is sufficiently close to D. Approximation sequence (11.15) converges very fast to √D for most values of D. In particular, one can prove that if 0.25 ≤ D < 1, then no more than four iterations are needed to guarantee that

    |xₙ − √D| < 4 · 10⁻¹⁵,

and we can always scale a given value of d to one in the interval [0.25, 1) by the formula d = D2^p, where p is an even integer.
EXAMPLE 3  Let us apply Newton's algorithm to compute √2. (For simplicity, we ignore scaling.) We will round off the numbers to six decimal places and use the standard numerical analysis notation ≐ to indicate the round-offs.

    x₀ = (1/2)(1 + 2) = 1.500000,
    x₁ = (1/2)(x₀ + 2/x₀) ≐ 1.416667,
    x₂ = (1/2)(x₁ + 2/x₁) ≐ 1.414216,
    x₃ = (1/2)(x₂ + 2/x₂) ≐ 1.414214,
    x₄ = (1/2)(x₃ + 2/x₃) ≐ 1.414214.

At this point we have to stop because x₄ = x₃ ≐ 1.414214 and hence all other approximations will be the same. The exact value of √2 is 1.41421356 . . . .
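The iteration takes only a few lines of Python (a sketch with this example's own function name and stopping tolerance):

    def newton_sqrt(D, eps=1e-6):
        # iteration (11.15) started from x0 = (1 + D)/2
        x = (1 + D) / 2
        while True:
            nxt = (x + D / x) / 2
            if abs(x - nxt) < eps:   # consecutive terms agree
                return nxt
            x = nxt

    print(newton_sqrt(2))  # 1.414213562... (sqrt(2) = 1.41421356...)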
With the issue of computing square roots squared away (I do not know whether or not the pun was intended), are we home free to write a program based on formula (11.14)? The answer is no because of the possible impact of round-off errors. Among other obstacles, we are faced here with the menace of subtractive cancellation. If b² is much larger than 4ac, √(b² − 4ac) will be very close to |b|, and a root computed by formula (11.14) might have a large relative error.
EXAMPLE 4  Let us follow a paper by George Forsythe⁵ [For69] and consider the equation

    x² − 10⁵x + 1 = 0.

Its true roots to 11 significant digits are

    x₁ ≐ 99999.999990 and x₂ ≐ 0.000010000000001.

5. George E. Forsythe (1917–1972), a noted numerical analyst, played a leading role in establishing computer science as a separate academic discipline in the United States. It is his words that are used as the epigraph to this book's preface.
If we use formula (11.14) and perform all the computations in decimal floating-point arithmetic with, say, seven significant digits, we obtain

    (−b)² = 0.1000000 · 10¹¹,
    4ac = 0.4000000 · 10¹,
    D ≐ 0.1000000 · 10¹¹,
    √D ≐ 0.1000000 · 10⁶,
    x̄₁ = (−b + √D)/2a ≐ 0.1000000 · 10⁶,
    x̄₂ = (−b − √D)/2a ≐ 0.

And although the relative error of approximating x₁ by x̄₁ is very small, for the second root it is very large:

    |x₂ − x̄₂|/x₂ = 1 (i.e., 100%)
To avoid the possibility of subtractive cancellation in formula (11.14), we can use instead another formula, obtained as follows:

    x₁ = (−b + √(b² − 4ac))/2a
       = (−b + √(b² − 4ac))(−b − √(b² − 4ac))/(2a(−b − √(b² − 4ac)))
       = 2c/(−b − √(b² − 4ac)),

with no danger of subtractive cancellation in the denominator if b > 0. As to x₂, it can be computed by the standard formula

    x₂ = (−b − √(b² − 4ac))/2a,

with no danger of cancellation either for a positive value of b.
The case of b < 0 is symmetric: we can use the formulas

    x₁ = (−b + √(b² − 4ac))/2a

and

    x₂ = 2c/(−b + √(b² − 4ac)).

(The case of b = 0 can be considered with either of the other two cases.)
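Putting the case analysis together gives a short cancellation-free root finder. The following Python sketch assumes a nonnegative discriminant, a ≠ 0, and that b and c are not both zero:

    import math

    def quadratic_roots(a, b, c):
        # the root that would subtract two nearly equal numbers is
        # computed via the equivalent 2c formula derived above
        d = math.sqrt(b * b - 4 * a * c)
        if b >= 0:
            return 2 * c / (-b - d), (-b - d) / (2 * a)
        return (-b + d) / (2 * a), 2 * c / (-b + d)

    print(quadratic_roots(1, -1e5, 1))
    # (99999.99999, 1.0000000000100001e-05): both roots come out accurately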
There are several other obstacles to applying formula (11.14), which are related to limitations of floating-point arithmetic: if a is very small, division by a can cause an overflow; there seems to be no way to fight the danger of subtractive cancellation in computing b² − 4ac other than calculating it with double precision; and so on. These problems have been overcome by William Kahan of the University of Toronto (see [For69]), and his algorithm is considered to be a significant achievement in the history of numerical analysis.
Hopefully, this brief overview has piqued your interest enough for you to seek more information in the many books devoted exclusively to numerical algorithms. In this book, we discuss one more topic in the next chapter: three classic methods for solving equations in one unknown.
Exercises 11.4
1. Some textbooks define the number of significant digits in the approximation of number α* by α as the largest integer k for which

    |α − α*|/|α*| < 5 · 10⁻ᵏ.

According to this definition, how many significant digits are there in the approximation of π by
a. 3.1415?    b. 3.1417?
2. If α = 1.5 is known to approximate some number α* with a given absolute error, find
a. the range of possible values of α*.
b. the range of the relative errors of these approximations.
3. Find the approximate value of √D for . . .

. . . any value of the initial approximation x₀ > √D.
b. Prove that if 0.25 ≤ D < 1 and x₀ = (1 + D)/2, no more than four iterations of Newton's method are needed to guarantee that

    |xₙ − √D| < 4 · 10⁻¹⁵.
10. Apply four iterations of Newton's method to compute √3 and estimate the absolute and relative errors of this approximation.
SUMMARY
Given a class of algorithms for solving a particular problem, a lower bound indicates the best possible efficiency any algorithm from this class can have.
A trivial lower bound is based on counting the number of items in the problem's input that must be processed and the number of output items that need to be produced.
An information-theoretic lower bound is usually obtained through a mechanism of decision trees. This technique is particularly useful for comparison-based algorithms for sorting and searching. Specifically:
Any general comparison-based sorting algorithm must perform at least ⌈log₂ n!⌉ ≈ n log₂ n key comparisons in the worst case.
Any general comparison-based algorithm for searching a sorted array must perform at least ⌈log₂(n + 1)⌉ key comparisons in the worst case.
The adversary method for establishing lower bounds is based on following
the logic of a malevolent adversary who forces the algorithm into the most
time-consuming path.
A lower bound can also be established by reduction, i.e., by reducing a
problem with a known lower bound to the problem in question.
Complexity theory seeks to classify problems according to their computational complexity. The principal split is between tractable and intractable problems, that is, problems that can and cannot be solved in polynomial time, respectively. For purely technical reasons, complexity theory concentrates on decision problems, which are problems with yes/no answers.
The halting problem is an example of an undecidable decision problem; i.e.,
it cannot be solved by any algorithm.
P is the class of all decision problems that can be solved in polynomial time.
NP is the class of all decision problems whose randomly guessed solutions
can be veried in polynomial time.
Many important problems in NP (such as the Hamiltonian circuit problem) are known to be NP-complete: all other problems in NP are reducible to such a problem in polynomial time. The first proof of a problem's NP-completeness was published by S. Cook for the CNF-satisfiability problem.
It is not known whether P = NP or P is just a proper subset of NP. This
question is the most important unresolved issue in theoretical computer
science. A discovery of a polynomial-time algorithm for any of the thousands
of known NP-complete problems would imply that P = NP.
Numerical analysis is a branch of computer science dealing with solving continuous mathematical problems. Two types of errors occur in solving a majority of such problems: truncation error and round-off error. Truncation errors stem from replacing infinite objects by their finite approximations. Round-off errors are due to inaccuracies of representing numbers in a digital computer.
Subtractive cancellation happens as a result of subtracting two near-equal floating-point numbers. It may lead to a sharp increase in the relative round-off error and therefore should be avoided (by either changing the expression's form or by using a higher precision in computing such a difference).
Writing a general computer program for solving quadratic equations ax² + bx + c = 0 requires attention to the possible effects of overflow, underflow, and subtractive cancellation, as the discussion in Section 11.4 shows.

. . .

    s + Σⱼ₌ᵢ₊₁ⁿ aⱼ < d    (the sum s is too small).
General Remarks
From a more general perspective, most backtracking algorithms fit the following description. An output of a backtracking algorithm can be thought of as an n-tuple (x₁, x₂, . . . , xₙ) where each coordinate xᵢ is an element of some finite linearly ordered set Sᵢ. For example, for the n-queens problem, each Sᵢ is the set of integers (column numbers) 1 through n. The tuple may need to satisfy some additional constraints (e.g., the nonattacking requirements in the n-queens problem). Depending on the problem, all solution tuples can be of the same length (the n-queens and the Hamiltonian circuit problem) and of different lengths (the subset-sum problem). A backtracking algorithm generates, explicitly or implicitly, a state-space tree; its nodes represent partially constructed tuples with the first i coordinates defined by the earlier actions of the algorithm. If such a tuple (x₁, x₂, . . . , xᵢ) is not a solution, the algorithm finds the next element in Sᵢ₊₁ that is consistent with the values of (x₁, x₂, . . . , xᵢ) and the problem's constraints, and adds it to the tuple as its (i + 1)st coordinate. If such an element does not exist, the algorithm backtracks to consider the next value of xᵢ, and so on.
To start a backtracking algorithm, the following pseudocode can be called for i = 0; X[1..0] represents the empty tuple.

ALGORITHM Backtrack(X[1..i])
//Gives a template of a generic backtracking algorithm
//Input: X[1..i] specifies first i promising components of a solution
//Output: All the tuples representing the problem's solutions
if X[1..i] is a solution write X[1..i]
else //see Problem 9 in this section's exercises
    for each element x ∈ Sᵢ₊₁ consistent with X[1..i] and the constraints do
        X[i + 1] ← x
        Backtrack(X[1..i + 1])
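As a concrete instance of the template, here is a minimal Python sketch for the n-queens problem (the list-of-columns representation is this example's choice):

    def queens(n):
        # solutions[k][i] is the column of the queen in row i, playing
        # the role of the tuple (x1, ..., xn) in the template above
        solutions = []

        def backtrack(x):                  # x corresponds to X[1..i]
            if len(x) == n:
                solutions.append(tuple(x))
                return
            i = len(x)
            for col in range(n):           # the candidate set S_{i+1}
                # consistent: no shared column and no shared diagonal
                if all(col != c and abs(col - c) != i - r
                       for r, c in enumerate(x)):
                    x.append(col)
                    backtrack(x)
                    x.pop()

        backtrack([])
        return solutions

    print(len(queens(4)), len(queens(8)))  # 2 92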
Our success in solving small instances of three difficult problems earlier in this section should not lead you to the false conclusion that backtracking is a very efficient technique. In the worst case, it may have to generate all possible candidates in an exponentially (or faster) growing state space of the problem at hand. The hope, of course, is that a backtracking algorithm will be able to prune enough branches of its state-space tree before running out of time or memory or both. The success of this strategy is known to vary widely, not only from problem to problem but also from one instance to another of the same problem.
There are several tricks that might help reduce the size of a state-space tree. One is to exploit the symmetry often present in combinatorial problems. For example, the board of the n-queens problem has several symmetries so that some solutions can be obtained from others by reflection or rotation. This implies, in particular, that we need not consider placements of the first queen in the last ⌊n/2⌋ columns, because any solution with the first queen in square (1, i), ⌈n/2⌉ ≤ i ≤ n, can be obtained by reflection (which?) from a solution with the first queen in square (1, n − i + 1). This observation cuts the size of the tree by about half.
Another trick is to preassign values to one or more components of a solution, as we did in the Hamiltonian circuit example. Data presorting in the subset-sum example demonstrates potential benefits of yet another opportunity: rearrange data of an instance given.
It would be highly desirable to be able to estimate the size of the state-space tree of a backtracking algorithm. As a rule, this is too difficult to do analytically, however. Knuth [Knu75] suggested generating a random path from the root to a leaf and using the information about the number of choices available during the path generation for estimating the size of the tree. Specifically, let c_1 be the number of values of the first component x_1 that are consistent with the problem's constraints. We randomly select one of these values (with equal probability 1/c_1) to move to one of the root's c_1 children. Repeating this operation for c_2 possible values for x_2 that are consistent with x_1 and the other constraints, we move to one of the c_2 children of that node. We continue this process until a leaf is reached after randomly selecting values for x_1, x_2, . . . , x_n. By assuming that the nodes on level i have c_i children on average, we estimate the number of nodes in the tree as

1 + c_1 + c_1 c_2 + . . . + c_1 c_2 . . . c_n.

Generating several such estimates and computing their average yields a useful estimation of the actual size of the tree, although the standard deviation of this random variable can be large.
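A minimal Python sketch of Knuth's estimator (our own illustration, reusing the n-queens consistency check from the sketch above) may clarify the computation: each random root-to-leaf walk accumulates the partial products c_1, c_1 c_2, . . . , and the estimates of many walks are averaged.

import random

def estimate_tree_size(n, trials=1000):
    # Monte Carlo estimate of the n-queens state-space tree size.
    def consistent(x, col):
        i = len(x)
        return all(c != col and abs(c - col) != i - r for r, c in enumerate(x))

    total = 0
    for _ in range(trials):
        estimate, product, x = 1, 1, []
        while True:
            candidates = [c for c in range(1, n + 1) if consistent(x, c)]
            if not candidates:             # a leaf has been reached
                break
            product *= len(candidates)     # c_1 c_2 ... c_i
            estimate += product            # 1 + c_1 + c_1 c_2 + ...
            x.append(random.choice(candidates))
        total += estimate
    return total / trials                  # average of the individual estimates

print(estimate_tree_size(8))   # compare with the tree's actual node count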
In conclusion, three things on behalf of backtracking need to be said. First, it is typically applied to difficult combinatorial problems for which no efficient algorithms for finding exact solutions possibly exist. Second, unlike the exhaustive-search approach, which is doomed to be extremely slow for all instances of a problem, backtracking at least holds a hope for solving some instances of nontrivial sizes in an acceptable amount of time. This is especially true for optimization problems, for which the idea of backtracking can be further enhanced by evaluating the quality of partially constructed solutions. How this can be done is explained in the next section. Third, even if backtracking does not eliminate any elements of a problem's state space and ends up generating all its elements, it provides a specific technique for doing so, which can be of value in its own right.
Exercises 12.1
1. a. Continue the backtracking search for a solution to the four-queens problem, which was started in this section, to find the second solution to the problem.
b. Explain how the board's symmetry can be used to find the second solution to the four-queens problem.
2. a. Which is the last solution to the five-queens problem found by the backtracking algorithm?
b. Use the board's symmetry to find at least four other solutions to the problem.
3. a. Implement the backtracking algorithm for the n-queens problem in the language of your choice. Run your program for a sample of n values to get the numbers of nodes in the algorithm's state-space trees. Compare these numbers with the numbers of candidate solutions generated by the exhaustive-search algorithm for this problem (see Problem 9 in Exercises 3.4).
b. For each value of n for which you run your program in part (a), estimate the size of the state-space tree by the method described in Section 12.1 and compare the estimate with the actual number of nodes you obtained.
4. Design a linear-time algorithm that finds a solution to the n-queens problem for any n ≥ 4.
5. Apply backtracking to the problem of finding a Hamiltonian circuit in the following graph.
[Figure: a graph on seven vertices a, b, c, d, e, f, g.]
6. Apply backtracking to solve the 3-coloring problem for the graph in Figure 12.3a.
7. Generate all permutations of {1, 2, 3, 4} by backtracking.
8. a. Apply backtracking to solve the following instance of the subset-sum problem: A = {1, 3, 4, 5} and d = 11.
b. Will the backtracking algorithm work correctly if we use just one of the two inequalities to terminate a node as nonpromising?
9. The general template for backtracking algorithms, which is given in the section, works correctly only if no solution is a prefix to another solution to the problem. Change the template's pseudocode to work correctly without this restriction.
10. Write a program implementing a backtracking algorithm for
a. the Hamiltonian circuit problem.
b. the m-coloring problem.
11. Puzzle pegs This puzzle-like game is played on a board with 15 small holes arranged in an equilateral triangle. In an initial position, all but one of the holes are occupied by pegs, as in the example shown below. A legal move is a jump of a peg over its immediate neighbor into an empty hole opposite; the jump removes the jumped-over neighbor from the board.
Design and implement a backtracking algorithm for solving the following versions of this puzzle.
a. Starting with a given location of the empty hole, find a shortest sequence of moves that eliminates 14 pegs with no limitations on the final position of the remaining peg.
b. Starting with a given location of the empty hole, find a shortest sequence of moves that eliminates 14 pegs with the remaining peg at the empty hole of the initial board.
12.2 Branch-and-Bound
Recall that the central idea of backtracking, discussed in the previous section, is to cut off a branch of the problem's state-space tree as soon as we can deduce that it cannot lead to a solution. This idea can be strengthened further if we deal with an optimization problem. An optimization problem seeks to minimize or maximize some objective function (a tour length, the value of items selected, the cost of an assignment, and the like), usually subject to some constraints. Note that in the standard terminology of optimization problems, a feasible solution is a point in the problem's search space that satisfies all the problem's constraints (e.g., a Hamiltonian circuit in the traveling salesman problem or a subset of items whose total weight does not exceed the knapsack's capacity in the knapsack problem), whereas an optimal solution is a feasible solution with the best value of the objective function (e.g., the shortest Hamiltonian circuit or the most valuable subset of items that fit in the knapsack).
Compared to backtracking, branch-and-bound requires two additional items:
a way to provide, for every node of a state-space tree, a bound on the best value of the objective function¹ on any solution that can be obtained by adding further components to the partially constructed solution represented by the node
the value of the best solution seen so far
If this information is available, we can compare a node's bound value with the value of the best solution seen so far. If the bound value is not better than the value of the best solution seen so far (i.e., not smaller for a minimization problem and not larger for a maximization problem), the node is nonpromising and can be terminated (some people say the branch is pruned). Indeed, no solution obtained from it can yield a better solution than the one already available. This is the principal idea of the branch-and-bound technique.
1. This bound should be a lower bound for a minimization problem and an upper bound for a maximization problem.
In general, we terminate a search path at the current node in a state-space tree of a branch-and-bound algorithm for any one of the following three reasons:
The value of the node's bound is not better than the value of the best solution seen so far.
The node represents no feasible solutions because the constraints of the problem are already violated.
The subset of feasible solutions represented by the node consists of a single point (and hence no further choices can be made); in this case, we compare the value of the objective function for this feasible solution with that of the best solution seen so far and update the latter with the former if the new solution is better.
Assignment Problem
Let us illustrate the branch-and-bound approach by applying it to the problem of assigning n people to n jobs so that the total cost of the assignment is as small as possible. We introduced this problem in Section 3.4, where we solved it by exhaustive search. Recall that an instance of the assignment problem is specified by an n × n cost matrix C so that we can state the problem as follows: select one element in each row of the matrix so that no two selected elements are in the same column and their sum is the smallest possible. We will demonstrate how this problem can be solved using the branch-and-bound technique by considering the same small instance of the problem that we investigated in Section 3.4:

        job 1  job 2  job 3  job 4
C =   [   9      2      7      8  ]    person a
      [   6      4      3      7  ]    person b
      [   5      8      1      8  ]    person c
      [   7      6      9      4  ]    person d
How can we find a lower bound on the cost of an optimal selection without actually solving the problem? We can do this by several methods. For example, it is clear that the cost of any solution, including an optimal one, cannot be smaller than the sum of the smallest elements in each of the matrix's rows. For the instance here, this sum is 2 + 3 + 1 + 4 = 10. It is important to stress that this is not the cost of any legitimate selection (3 and 1 came from the same column of the matrix); it is just a lower bound on the cost of any legitimate selection. We can and will apply the same thinking to partially constructed solutions. For example, for any legitimate selection that selects 9 from the first row, the lower bound will be 9 + 3 + 1 + 4 = 17.
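The following Python sketch (our own illustration; the function names are ours) computes this lower bound for a partially constructed assignment and uses it to drive a best-first branch-and-bound search with a priority queue; it arrives at the same optimal assignment of cost 13 traced below.

import heapq

C = [[9, 2, 7, 8],      # the instance's cost matrix (rows: persons a-d)
     [6, 4, 3, 7],
     [5, 8, 1, 8],
     [7, 6, 9, 4]]

def lower_bound(C, partial):
    # Cost of the rows already assigned plus, for each unassigned row,
    # the smallest entry among the still-available columns.
    n, used = len(C), set(partial)
    lb = sum(C[i][j] for i, j in enumerate(partial))
    for i in range(len(partial), n):
        lb += min(C[i][j] for j in range(n) if j not in used)
    return lb

def assign_best_first(C):
    n = len(C)
    heap = [(lower_bound(C, ()), ())]      # (lb, columns chosen so far)
    while heap:
        lb, partial = heapq.heappop(heap)  # most promising live node
        if len(partial) == n:              # a complete selection: its lb
            return lb, partial             # equals its actual cost
        for j in range(n):
            if j not in partial:
                child = partial + (j,)
                heapq.heappush(heap, (lower_bound(C, child), child))

print(assign_best_first(C))
# (13, (1, 0, 2, 3)): with 0-based columns, a->2, b->1, c->3, d->4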
One more comment is in order before we embark on constructing the problem's state-space tree. It deals with the order in which the tree nodes will be generated. Rather than generating a single child of the last promising node as we did in backtracking, we will generate all the children of the most promising node among nonterminated leaves in the current tree. (Nonterminated, i.e., still promising, leaves are also called live.) How can we tell which of the nodes is most promising? We can do this by comparing the lower bounds of the live nodes. It is sensible to consider a node with the best bound as most promising, although this does not, of course, preclude the possibility that an optimal solution will ultimately belong to a different branch of the state-space tree. This variation of the strategy is called the best-first branch-and-bound.
So, returning to the instance of the assignment problem given earlier, we start with the root that corresponds to no elements selected from the cost matrix. As we already discussed, the lower-bound value for the root, denoted lb, is 10. The nodes on the first level of the tree correspond to selections of an element in the first row of the matrix, i.e., a job for person a (Figure 12.5).
So we have four live leaves, nodes 1 through 4, that may contain an optimal solution. The most promising of them is node 2 because it has the smallest lower-bound value. Following our best-first search strategy, we branch out from that node first by considering the three different ways of selecting an element from the second row and not in the second column, i.e., the three different jobs that can be assigned to person b (Figure 12.6).
Of the six live leaves (nodes 1, 3, 4, 5, 6, and 7) that may contain an optimal solution, we again choose the one with the smallest lower bound, node 5. First, we consider selecting the third column's element from c's row (i.e., assigning person c to job 3); this leaves us with no choice but to select the element from the fourth column of d's row (assigning person d to job 4). This yields leaf 8 (Figure 12.7), which corresponds to the feasible solution {a→2, b→1, c→3, d→4} with the total cost of 13. Its sibling, node 9, corresponds to the feasible solution {a→2, b→1, c→4, d→3} with the total cost of 25. Since its cost is larger than the cost of the solution represented by leaf 8, node 9 is simply terminated.
FIGURE 12.5 Levels 0 and 1 of the state-space tree for the instance of the assignment problem being solved with the best-first branch-and-bound algorithm. The number above a node shows the order in which the node was generated. A node's fields indicate the job number assigned to person a and the lower bound value, lb, for this node. [The root has lb = 2 + 3 + 1 + 4 = 10; node 1 (a→1) has lb = 9 + 3 + 1 + 4 = 17, node 2 (a→2) has lb = 2 + 3 + 1 + 4 = 10, node 3 (a→3) has lb = 7 + 4 + 5 + 4 = 20, and node 4 (a→4) has lb = 8 + 3 + 1 + 6 = 18.]
FIGURE 12.6 Levels 0, 1, and 2 of the state-space tree for the instance of the assignment problem being solved with the best-first branch-and-bound algorithm. [Node 2 (a→2, lb = 10) is expanded into node 5 (b→1, lb = 13), node 6 (b→3, lb = 14), and node 7 (b→4, lb = 17).]
FIGURE 12.7 Complete state-space tree for the instance of the assignment problem solved with the best-first branch-and-bound algorithm. [Node 5 is expanded into leaf 8 (c→3, d→4, cost = 13, the solution) and leaf 9 (c→4, d→3, cost = 25, an inferior solution); all other live nodes are then terminated.]
(Of course, if its cost were smaller than 13, we would have to replace the information about the best solution seen so far with the data provided by this node.)
Now, as we inspect each of the live leaves of the last state-space tree (nodes 1, 3, 4, 6, and 7 in Figure 12.7), we discover that their lower-bound values are not smaller than 13, the value of the best selection seen so far (leaf 8). Hence, we terminate all of them and recognize the solution represented by leaf 8 as the optimal solution to the problem.
Before we leave the assignment problem, we have to remind ourselves again that, unlike for our next examples, there is a polynomial-time algorithm for this problem called the Hungarian method (e.g., [Pap82]). In the light of this efficient algorithm, solving the assignment problem by branch-and-bound should be considered a convenient educational device rather than a practical recommendation.
Knapsack Problem
Let us now discuss how we can apply the branch-and-bound technique to solving the knapsack problem. This problem was introduced in Section 3.4: given n items of known weights w_i and values v_i, i = 1, 2, . . . , n, and a knapsack of capacity W, find the most valuable subset of the items that fit in the knapsack. It is convenient to order the items of a given instance in descending order by their value-to-weight ratios. Then the first item gives the best payoff per weight unit and the last one gives the worst payoff per weight unit, with ties resolved arbitrarily:

v_1/w_1 ≥ v_2/w_2 ≥ . . . ≥ v_n/w_n.

It is natural to structure the state-space tree for this problem as a binary tree constructed as follows (see Figure 12.8 for an example). Each node on the ith level of this tree, 0 ≤ i ≤ n, represents all the subsets of n items that include a particular selection made from the first i ordered items. This particular selection is uniquely determined by the path from the root to the node: a branch going to the left indicates the inclusion of the next item, and a branch going to the right indicates its exclusion. We record the total weight w and the total value v of this selection in the node, along with some upper bound ub on the value of any subset that can be obtained by adding zero or more items to this selection.
A simple way to compute the upper bound ub is to add to v, the total value of the items already selected, the product of the remaining capacity of the knapsack W − w and the best per unit payoff among the remaining items, which is v_{i+1}/w_{i+1}:

ub = v + (W − w)(v_{i+1}/w_{i+1}).    (12.1)
As a specific example, let us apply the branch-and-bound algorithm to the same instance of the knapsack problem we solved in Section 3.4 by exhaustive search. (We reorder the items in descending order of their value-to-weight ratios, though.) The knapsack's capacity W is 10.

item    weight    value    value/weight
1       4         $40      10
2       7         $42      6
3       5         $25      5
4       3         $12      4
FIGURE 12.8 State-space tree of the best-first branch-and-bound algorithm for the instance of the knapsack problem. [Root: w = 0, v = 0, ub = 100. Node 1 (with item 1): w = 4, v = 40, ub = 76; node 2 (without item 1): w = 0, v = 0, ub = 60. Node 3 (with item 2): w = 11, not feasible; node 4 (without item 2): w = 4, v = 40, ub = 70. Node 5 (with item 3): w = 9, v = 65, ub = 69; node 6 (without item 3): w = 4, v = 40, ub = 64, inferior to node 8. Node 7 (with item 4): w = 12, not feasible; node 8 (without item 4): w = 9, v = 65, the optimal solution.]
At the root of the state-space tree (see Figure 12.8), no items have been selected as yet. Hence, both the total weight of the items already selected w and their total value v are equal to 0. The value of the upper bound computed by formula (12.1) is $100. Node 1, the left child of the root, represents the subsets that include item 1. The total weight and value of the items already included are 4 and $40, respectively; the value of the upper bound is 40 + (10 − 4) × 6 = $76. Node 2 represents the subsets that do not include item 1. Accordingly, w = 0, v = $0, and ub = 0 + (10 − 0) × 6 = $60. Since node 1 has a larger upper bound than the upper bound of node 2, it is more promising for this maximization problem, and we branch from node 1 first. Its children, nodes 3 and 4, represent subsets with item 1 and with and without item 2, respectively. Since the total weight w of every subset represented by node 3 exceeds the knapsack's capacity, node 3 can be terminated immediately. Node 4 has the same values of w and v as its parent; the upper bound ub is equal to 40 + (10 − 4) × 5 = $70. Selecting node 4 over node 2 for the next branching (why?), we get nodes 5 and 6 by respectively including and excluding item 3. The total weights and values as well as the upper bounds for
these nodes are computed in the same way as for the preceding nodes. Branching from node 5 yields node 7, which represents no feasible solutions, and node 8, which represents just a single subset {1, 3} of value $65. The remaining live nodes 2 and 6 have smaller upper-bound values than the value of the solution represented by node 8. Hence, both can be terminated, making the subset {1, 3} of node 8 the optimal solution to the problem.
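A best-first branch-and-bound for the knapsack problem can be sketched in Python as follows (our own illustration; it assumes the items are already sorted by value-to-weight ratio and uses the upper bound of formula (12.1)).

import heapq

def knapsack_bnb(items, W):
    # items: list of (weight, value) in nonincreasing value-to-weight order.
    n = len(items)

    def ub(i, w, v):
        # Formula (12.1): v plus remaining capacity times the best payoff
        # per weight unit among the remaining items.
        if i < n and w < W:
            return v + (W - w) * items[i][1] / items[i][0]
        return v

    best_v, best_set = 0, []
    heap = [(-ub(0, 0, 0), 0, 0, 0, [])]   # max-heap via negated bounds
    while heap:
        neg_ub, i, w, v, chosen = heapq.heappop(heap)
        if -neg_ub <= best_v or i == n:    # bound is not better than the
            continue                       # best solution seen so far
        wt, val = items[i]
        if w + wt <= W:                    # left child: include item i
            if v + val > best_v:
                best_v, best_set = v + val, chosen + [i]
            heapq.heappush(heap, (-ub(i + 1, w + wt, v + val),
                                  i + 1, w + wt, v + val, chosen + [i]))
        heapq.heappush(heap, (-ub(i + 1, w, v),    # right child: exclude it
                              i + 1, w, v, chosen))
    return best_v, best_set

items = [(4, 40), (7, 42), (5, 25), (3, 12)]       # the instance above
print(knapsack_bnb(items, 10))        # (65, [0, 2]): items 1 and 3, value $65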
Solving the knapsack problem by a branch-and-bound algorithm has a rather unusual characteristic. Typically, internal nodes of a state-space tree do not define a point of the problem's search space, because some of the solution's components remain undefined. (See, for example, the branch-and-bound tree for the assignment problem discussed in the preceding subsection.) For the knapsack problem, however, every node of the tree represents a subset of the items given. We can use this fact to update the information about the best subset seen so far after generating each new node in the tree. If we had done this for the instance investigated above, we could have terminated nodes 2 and 6 before node 8 was generated because they both are inferior to the subset of value $65 of node 5.
Traveling Salesman Problem
We will be able to apply the branch-and-bound technique to instances of the traveling salesman problem if we come up with a reasonable lower bound on tour lengths. One very simple lower bound can be obtained by finding the smallest element in the intercity distance matrix D and multiplying it by the number of cities n. But there is a less obvious and more informative lower bound for instances with a symmetric matrix D, which does not require a lot of work to compute. It is not difficult to show (Problem 8 in this section's exercises) that we can compute a lower bound on the length l of any tour as follows. For each city i, 1 ≤ i ≤ n, find the sum s_i of the distances from city i to the two nearest cities; compute the sum s of these n numbers, divide the result by 2, and, if all the distances are integers, round up the result to the nearest integer:

lb = ⌈s/2⌉.    (12.2)

For example, for the instance in Figure 12.9a, formula (12.2) yields

lb = ⌈[(1 + 3) + (3 + 6) + (1 + 2) + (3 + 4) + (2 + 3)]/2⌉ = 14.

Moreover, for any subset of tours that must include particular edges of a given graph, we can modify lower bound (12.2) accordingly. For example, for all the Hamiltonian circuits of the graph in Figure 12.9a that must include edge (a, d), we get the following lower bound by summing up the lengths of the two shortest edges incident with each of the vertices, with the required inclusion of edges (a, d) and (d, a):

⌈[(1 + 5) + (3 + 6) + (1 + 2) + (3 + 5) + (2 + 3)]/2⌉ = 16.
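In Python, bound (12.2) can be computed directly from a distance matrix; the sketch below (our own illustration, with our adjacency-matrix rendering of the edge weights of the graph in Figure 12.9a) reproduces the value lb = 14.

import math

def tsp_lower_bound(D):
    # For each city, add up its two smallest distances to other cities;
    # the bound is the ceiling of half the total -- formula (12.2).
    s = 0
    for i, row in enumerate(D):
        two_nearest = sorted(d for j, d in enumerate(row) if j != i)[:2]
        s += sum(two_nearest)
    return math.ceil(s / 2)

# Vertices a, b, c, d, e of Figure 12.9a as matrix indices 0..4.
D = [[0, 3, 1, 5, 8],
     [3, 0, 6, 7, 9],
     [1, 6, 0, 4, 2],
     [5, 7, 4, 0, 3],
     [8, 9, 2, 3, 0]]
print(tsp_lower_bound(D))    # 14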
FIGURE 12.9 (a) Weighted graph. (b) State-space tree of the branch-and-bound algorithm to find a shortest Hamiltonian circuit in this graph. The list of vertices in a node specifies a beginning part of the Hamiltonian circuits represented by the node. [The graph is complete on vertices a, b, c, d, e with edge weights a-b = 3, a-c = 1, a-d = 5, a-e = 8, b-c = 6, b-d = 7, b-e = 9, c-d = 4, c-e = 2, d-e = 3. The tree: node 0 (a, lb = 14); node 1 (a, b, lb = 14); node 2 (a, c, pruned because b is not before c); node 3 (a, d, lb = 16); node 4 (a, e, lb = 19); node 5 (a, b, c, lb = 16); node 6 (a, b, d, lb = 16); node 7 (a, b, e, lb = 19); node 8 (a, b, c, d, (e, a), l = 24, first tour); node 9 (a, b, c, e, (d, a), l = 19, better tour); node 10 (a, b, d, c, (e, a), l = 24, inferior tour); node 11 (a, b, d, e, (c, a), l = 16, optimal tour); nodes 3, 4, and 7 are terminated because lb ≥ l of node 11.]

We now apply the branch-and-bound algorithm, with the bounding function given by formula (12.2), to find the shortest Hamiltonian circuit for the graph in
Figure 12.9a. To reduce the amount of potential work, we take advantage of two observations made in Section 3.4. First, without loss of generality, we can consider only tours that start at a. Second, because our graph is undirected, we can generate only tours in which b is visited before c. In addition, after visiting n − 1 = 4 cities, a tour has no choice but to visit the remaining unvisited city and return to the starting one. The state-space tree tracing the algorithm's application is given in Figure 12.9b.
The comments we made at the end of the preceding section about the strengths and weaknesses of backtracking are applicable to branch-and-bound as well. To reiterate the main point: these state-space tree techniques enable us to solve many large instances of difficult combinatorial problems. As a rule, however, it is virtually impossible to predict which instances will be solvable in a realistic amount of time and which will not.
Incorporation of additional information, such as a symmetry of a game's board, can widen the range of solvable instances. Along this line, a branch-and-bound algorithm can be sometimes accelerated by a knowledge of the objective function's value of some nontrivial feasible solution. The information might be obtainable (say, by exploiting specifics of the data or even, for some problems, generated randomly) before we start developing a state-space tree. Then we can use such a solution immediately as the best one seen so far rather than waiting for the branch-and-bound processing to lead us to the first feasible solution.
In contrast to backtracking, solving a problem by branch-and-bound has both the challenge and opportunity of choosing the order of node generation and finding a good bounding function. Though the best-first rule we used above is a sensible approach, it may or may not lead to a solution faster than other strategies. (Artificial intelligence researchers are particularly interested in different strategies for developing state-space trees.)
Finding a good bounding function is usually not a simple task. On the one hand, we want this function to be easy to compute. On the other hand, it cannot be too simplistic; otherwise, it would fail in its principal task to prune as many branches of a state-space tree as soon as possible. Striking a proper balance between these two competing requirements may require intensive experimentation with a wide variety of instances of the problem in question.
Exercises 12.2
1. What data structure would you use to keep track of live nodes in a best-first branch-and-bound algorithm?
2. Solve the same instance of the assignment problem as the one solved in the section by the best-first branch-and-bound algorithm with the bounding function based on matrix columns rather than rows.
3. a. Give an example of the best-case input for the branch-and-bound algorithm for the assignment problem.
b. In the best case, how many nodes will be in the state-space tree of the branch-and-bound algorithm for the assignment problem?
4. Write a program for solving the assignment problem by the branch-and-bound algorithm. Experiment with your program to determine the average size of the cost matrices for which the problem is solved in a given amount of time, say, 1 minute on your computer.
5. Solve the following instance of the knapsack problem by the branch-and-bound algorithm (W = 16):

item    weight    value
1       10        $100
2       7         $63
3       8         $56
4       4         $12

6. a. Suggest a more sophisticated bounding function for solving the knapsack problem than the one used in the section.
b. Use your bounding function in the branch-and-bound algorithm applied to the instance of Problem 5.
7. Write a program to solve the knapsack problem with the branch-and-bound algorithm.
8. a. Prove the validity of the lower bound given by formula (12.2) for instances of the traveling salesman problem with symmetric matrices of integer intercity distances.
b. How would you modify lower bound (12.2) for nonsymmetric distance matrices?
9. Apply the branch-and-bound algorithm to solve the traveling salesman problem for the following graph:
[Figure: complete graph on vertices a, b, c, d with edge weights a-b = 2, a-c = 5, a-d = 7, b-c = 8, b-d = 3, c-d = 1.]
(We solved this problem by exhaustive search in Section 3.4.)
10. As a research project, write a report on how state-space trees are used for
programming such games as chess, checkers, and tic-tac-toe. The two principal
algorithms you should read about are the minimax algorithm and alpha-beta
pruning.
12.3 Approximation Algorithms for NP-Hard Problems
In this section, we discuss a different approach to handling difficult problems of combinatorial optimization, such as the traveling salesman problem and the knapsack problem. As we pointed out in Section 11.3, the decision versions of these problems are NP-complete. Their optimization versions fall in the class of NP-hard problems, i.e., problems that are at least as hard as NP-complete problems.² Hence, there are no known polynomial-time algorithms for these problems, and there are serious theoretical reasons to believe that such algorithms do not exist. What then are our options for handling such problems, many of which are of significant practical importance?
2. The notion of an NP-hard problem can be defined more formally by extending the notion of polynomial reducibility to problems that are not necessarily in class NP, including optimization problems of the type discussed in this section (see [Gar79, Chapter 5]).
If an instance of the problem in question is very small, we might be able to solve it by an exhaustive-search algorithm (Section 3.4). Some such problems can be solved by the dynamic programming technique we demonstrated in Section 8.2. But even when this approach works in principle, its practicality is limited by dependence on the instance parameters being relatively small. The discovery of the branch-and-bound technique has proved to be an important breakthrough, because this technique makes it possible to solve many large instances of difficult optimization problems in an acceptable amount of time. However, such good performance cannot usually be guaranteed.
There is a radically different way of dealing with difficult optimization problems: solve them approximately by a fast algorithm. This approach is particularly appealing for applications where a good but not necessarily optimal solution will suffice. Besides, in real-life applications, we often have to operate with inaccurate data to begin with. Under such circumstances, going for an approximate solution can be a particularly sensible choice.
Although approximation algorithms run a gamut in level of sophistication, most of them are based on some problem-specific heuristic. A heuristic is a common-sense rule drawn from experience rather than from a mathematically proved assertion. For example, going to the nearest unvisited city in the traveling salesman problem is a good illustration of this notion. We discuss an algorithm based on this heuristic later in this section.
Of course, if we use an algorithm whose output is just an approximation of the actual optimal solution, we would like to know how accurate this approximation is. We can quantify the accuracy of an approximate solution s_a to a problem of minimizing some function f by the size of the relative error of this approximation,

re(s_a) = [f(s_a) − f(s*)]/f(s*),

where s* is an exact solution to the problem. Alternatively, since re(s_a) = f(s_a)/f(s*) − 1, we can simply use the accuracy ratio

r(s_a) = f(s_a)/f(s*)

as a measure of accuracy of s_a. Note that for the sake of scale uniformity, the accuracy ratio of approximate solutions to maximization problems is usually computed as

r(s_a) = f(s*)/f(s_a)

to make this ratio greater than or equal to 1, as it is for minimization problems.
Obviously, the closer r(s_a) is to 1, the better the approximate solution is. For most instances, however, we cannot compute the accuracy ratio, because we typically do not know f(s*), the true optimal value of the objective function. Therefore, our hope should lie in obtaining a good upper bound on the values of r(s_a): a polynomial-time approximation algorithm is said to be a c-approximation algorithm, where c ≥ 1, if the accuracy ratio of the approximations it produces does not exceed c on all instances of the problem. For the traveling salesman problem, no such algorithm can exist unless P = NP.
THEOREM 1 If P ≠ NP, there exists no c-approximation algorithm for the traveling salesman problem, i.e., there exists no polynomial-time approximation algorithm for this problem so that for all instances f(s_a) ≤ cf(s*) for some constant c.
PROOF By way of contradiction, suppose that such an approximation algorithm A and a constant c exist. (Without loss of generality, we can assume that c is a positive integer.) We will show that this algorithm could then be used for solving the Hamiltonian circuit problem in polynomial time. We will take advantage of a variation of the transformation used in Section 11.3 to reduce the Hamiltonian circuit problem to the traveling salesman problem. Let G be an arbitrary graph with n vertices. We map G to a complete weighted graph G' by assigning weight 1 to each edge in G and adding an edge of weight cn + 1 between each pair of vertices not adjacent in G. If G has a Hamiltonian circuit, its length in G' is n; hence, it is the exact solution s* to the traveling salesman problem for G', and the tour obtained by algorithm A must satisfy f(s_a) ≤ cn. If G does not have a Hamiltonian circuit, the shortest tour in G' will contain at least one edge of weight cn + 1, and hence f(s_a) ≥ f(s*) > cn. Taking into account the two derived inequalities, we could solve the Hamiltonian circuit problem for graph G in polynomial time by mapping G to G', applying algorithm A to get tour s_a in G', and comparing its length with cn. Since the Hamiltonian circuit problem is NP-complete, we have a contradiction unless P = NP.
Greedy Algorithms for the TSP The simplest approximation algorithms for the
traveling salesman problem are based on the greedy technique. We will discuss
here two such algorithms.
Nearest-neighbor algorithm
The following well-known greedy algorithm is based on the nearest-neighbor
heuristic: always go next to the nearest unvisited city.
Step 1 Choose an arbitrary city as the start.
Step 2 Repeat the following operation until all the cities have been visited:
go to the unvisited city nearest the one visited last (ties can be broken
arbitrarily).
Step 3 Return to the starting city.
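A minimal Python rendering of these three steps (our own illustration) follows; applied to the instance of Example 1 below, it produces the tour a - b - c - d - a of length 10.

def nearest_neighbor_tour(D, start=0):
    # D is an intercity distance matrix; the tour closes by returning
    # to the starting city.
    n = len(D)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: D[last][j])  # nearest unvisited
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(D, tour):
    # Sums all tour edges, including the closing edge back to tour[0].
    return sum(D[tour[i - 1]][tour[i]] for i in range(len(tour)))

# Figure 12.10: a-b = 1, a-c = 3, a-d = 6, b-c = 2, b-d = 3, c-d = 1
D = [[0, 1, 3, 6], [1, 0, 2, 3], [3, 2, 0, 1], [6, 3, 1, 0]]
t = nearest_neighbor_tour(D)
print(t, tour_length(D, t))   # [0, 1, 2, 3] 10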
EXAMPLE 1 For the instance represented by the graph in Figure 12.10, with a as the starting vertex, the nearest-neighbor algorithm yields the tour (Hamiltonian circuit) s_a: a - b - c - d - a of length 10.

FIGURE 12.10 Instance of the traveling salesman problem. [Complete graph on vertices a, b, c, d with edge weights a-b = 1, a-c = 3, a-d = 6, b-c = 2, b-d = 3, c-d = 1.]
The optimal solution, as can be easily checked by exhaustive search, is the tour s*: a - b - d - c - a of length 8. Thus, the accuracy ratio of this approximation is

r(s_a) = f(s_a)/f(s*) = 10/8 = 1.25

(i.e., tour s_a is 25% longer than the optimal tour s*).
Unfortunately, except for its simplicity, not many good things can be said about the nearest-neighbor algorithm. In particular, nothing can be said in general about the accuracy of solutions obtained by this algorithm because it can force us to traverse a very long edge on the last leg of the tour. Indeed, if we change the weight of edge (a, d) from 6 to an arbitrarily large number w > 6 in Example 1, the algorithm will still yield the tour a - b - c - d - a of length 4 + w, and the optimal solution will still be a - b - d - c - a of length 8. Hence,

r(s_a) = f(s_a)/f(s*) = (4 + w)/8,

which can be made as large as we wish by choosing an appropriately large value of w. Hence, R_A = ∞ for this algorithm (as it should be according to Theorem 1).
Multifragment-heuristic algorithm
Another natural greedy algorithm for the traveling salesman problem considers it as the problem of finding a minimum-weight collection of edges in a given complete weighted graph so that all the vertices have degree 2. (With this emphasis on edges rather than vertices, what other greedy algorithm does it remind you of?) An application of the greedy technique to this problem leads to the following algorithm [Ben90].
Step 1 Sort the edges in increasing order of their weights. (Ties can be broken arbitrarily.) Initialize the set of tour edges to be constructed to the empty set.
Step 2 Repeat this step n times, where n is the number of cities in the instance being solved: add the next edge on the sorted edge list to the set of tour edges, provided this addition does not create a vertex of degree 3 or a cycle of length less than n; otherwise, skip the edge.
Step 3 Return the set of tour edges.
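The degree and cycle tests of Step 2 are easy to implement with a union-find structure over the tour fragments; the following Python sketch (our own illustration) does so and, on the instance of Figure 12.10, returns the edge set discussed in the example that follows.

def multifragment_tour(D):
    n = len(D)
    edges = sorted((D[i][j], i, j) for i in range(n) for j in range(i + 1, n))
    degree, parent = [0] * n, list(range(n))

    def find(x):                        # root of x's fragment
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tour_edges = []
    for d, i, j in edges:
        if degree[i] < 2 and degree[j] < 2:       # no vertex of degree 3
            ri, rj = find(i), find(j)
            # join two different fragments, or close the final tour
            if ri != rj or len(tour_edges) == n - 1:
                tour_edges.append((i, j))
                degree[i] += 1; degree[j] += 1
                parent[ri] = rj
                if len(tour_edges) == n:
                    break
    return tour_edges

D = [[0, 1, 3, 6], [1, 0, 2, 3], [3, 2, 0, 1], [6, 3, 1, 0]]  # Figure 12.10
print(multifragment_tour(D))    # [(0, 1), (2, 3), (1, 2), (0, 3)]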
As an example, applying the algorithm to the graph in Figure 12.10 yields {(a, b), (c, d), (b, c), (a, d)}. This set of edges forms the same tour as the one produced by the nearest-neighbor algorithm. In general, the multifragment-heuristic algorithm tends to produce significantly better tours than the nearest-neighbor algorithm, as we are going to see from the experimental data quoted at the end of this section. But the performance ratio of the multifragment-heuristic algorithm is also unbounded, of course.
There is, however, a very important subset of instances, called Euclidean, for which we can make a nontrivial assertion about the accuracy of both the nearest-neighbor and multifragment-heuristic algorithms. These are the instances in which intercity distances satisfy the following natural conditions:
triangle inequality: d[i, j] ≤ d[i, k] + d[k, j] for any triple of cities i, j, and k (the distance between cities i and j cannot exceed the length of a two-leg path from i to some intermediate city k to j)
symmetry: d[i, j] = d[j, i] for any pair of cities i and j (the distance from i to j is the same as the distance from j to i)
A substantial majority of practical applications of the traveling salesman problem are its Euclidean instances. They include, in particular, geometric ones, where cities correspond to points in the plane and distances are computed by the standard Euclidean formula. Although the performance ratios of the nearest-neighbor and multifragment-heuristic algorithms remain unbounded for Euclidean instances,
their accuracy ratios satisfy the following inequality for any such instance with n ≥ 2 cities:

f(s_a)/f(s*) ≤ (1/2)(⌈log₂ n⌉ + 1),

where f(s_a) and f(s*) are the lengths of the heuristic tour and the shortest tour, respectively.
Twice-around-the-tree algorithm
A different approach exploits a connection between Hamiltonian circuits and spanning trees. The twice-around-the-tree algorithm constructs a minimum spanning tree of the graph, walks around it (by a depth-first traversal that passes through every tree edge twice), and then shortcuts repeated vertices in the walk to obtain a Hamiltonian circuit. For Euclidean instances, this algorithm is a 2-approximation algorithm, i.e., f(s_a) ≤ 2f(s*). Since removing any edge from s* yields a spanning tree T of weight w(T), which must be greater than or equal to w(T*), the weight of the graph's minimum spanning tree, we get the inequality f(s*) > w(T) ≥ w(T*). This inequality implies that

2f(s*) > 2w(T*) ≥ f(s_a),

which is, in fact, a slightly stronger assertion than the one we needed to prove.
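A compact Python sketch of the twice-around-the-tree algorithm (our own illustration, with a simple version of Prim's algorithm used to build the tree) shows how the preorder of a depth-first traversal of the minimum spanning tree yields the shortcut tour directly.

def twice_around_the_tree(D):
    n = len(D)
    # Step 1: grow a minimum spanning tree rooted at vertex 0.
    in_tree, adj = {0}, {i: [] for i in range(n)}
    while len(in_tree) < n:
        _, u, v = min((D[i][j], i, j)
                      for i in in_tree for j in range(n) if j not in in_tree)
        adj[u].append(v); adj[v].append(u)
        in_tree.add(v)
    # Steps 2-3: walking around the tree and shortcutting repeated vertices
    # leaves exactly the vertices in depth-first preorder.
    tour, seen = [], set()

    def dfs(u):
        seen.add(u); tour.append(u)
        for v in adj[u]:
            if v not in seen:
                dfs(v)

    dfs(0)
    return tour                 # the tour closes by returning to vertex 0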
Christofides Algorithm There is an approximation algorithm with a better performance ratio for the Euclidean traveling salesman problem: the well-known Christofides algorithm [Chr76]. It also uses a minimum spanning tree but does this in a more sophisticated way than the twice-around-the-tree algorithm. Note that a twice-around-the-tree walk generated by the latter algorithm is an Eulerian circuit in the multigraph obtained by doubling every edge in the graph given. Recall that an Eulerian circuit exists in a connected multigraph if and only if all its vertices have even degrees. The Christofides algorithm obtains such a multigraph by adding to the graph the edges of a minimum-weight matching of all the odd-degree vertices in its minimum spanning tree. (The number of such vertices is always even and hence this can always be done.) Then the algorithm finds an Eulerian circuit in the multigraph and transforms it into a Hamiltonian circuit by shortcuts, exactly the same way it is done in the last step of the twice-around-the-tree algorithm.
EXAMPLE 3 Let us trace the Christofides algorithm in Figure 12.12 on the same instance (Figure 12.12a) used for tracing the twice-around-the-tree algorithm in Figure 12.11. The graph's minimum spanning tree is shown in Figure 12.12b. It has four odd-degree vertices: a, b, c, and e. The minimum-weight matching of these four vertices consists of edges (a, b) and (c, e). (For this tiny instance, it can be found easily by comparing the total weights of just three alternatives: (a, b) and (c, e); (a, c) and (b, e); (a, e) and (b, c).) The traversal of the multigraph, starting at vertex a, produces the Eulerian circuit a - b - c - e - d - b - a, which, after one shortcut, yields the tour a - b - c - e - d - a of length 37.
The performance ratio of the Christofides algorithm on Euclidean instances is 1.5 (see, e.g., [Pap82]). It tends to produce significantly better approximations to optimal tours than the twice-around-the-tree algorithm does in empirical tests. (We quote some results of such tests at the end of this subsection.) The quality of a tour obtained by this heuristic can be further improved by optimizing shortcuts made on the last step of the algorithm as follows: examine the multiply-visited cities in some arbitrary order and for each make the best possible shortcut. This
FIGURE 12.12 Application of the Christofides algorithm. (a) Graph. (b) Minimum spanning tree with added edges (in dash) of a minimum-weight matching of all odd-degree vertices. (c) Hamiltonian circuit obtained.
enhancement would not have improved the tour a - b - c - e - d - a obtained in Example 3 from a - b - c - e - d - b - a because shortcutting the second occurrence of b happens to be better than shortcutting its first occurrence. In general, however, this enhancement tends to decrease the gap between the heuristic and optimal tour lengths from about 15% to about 10%, at least for randomly generated Euclidean instances [Joh07a].
Local Search Heuristics For Euclidean instances, surprisingly good approximations to optimal tours can be obtained by iterative-improvement algorithms, which are also called local search heuristics. The best-known of these are the 2-opt, 3-opt, and Lin-Kernighan algorithms. These algorithms start with some initial tour, e.g., constructed randomly or by some simpler approximation algorithm such as the nearest-neighbor. On each iteration, the algorithm explores a neighborhood around the current tour by replacing a few edges in the current tour by other edges. If the changes produce a shorter tour, the algorithm makes it the current
tour and continues by exploring its neighborhood in the same manner; otherwise, the current tour is returned as the algorithm's output and the algorithm stops.

FIGURE 12.13 2-change: (a) Original tour. (b) New tour.
The 2-opt algorithm works by deleting a pair of nonadjacent edges in a tour
and reconnecting their endpoints by the different pair of edges to obtain another
tour (see Figure 12.13). This operation is called the 2-change. Note that there is
only one way to reconnect the endpoints because the alternative produces two
disjoint fragments.
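The following Python sketch (our own illustration) implements the 2-opt algorithm for a symmetric distance matrix: reconnecting the endpoints of the two deleted edges amounts to reversing the tour segment between them, and the process repeats until no 2-change shortens the current tour.

def two_opt(D, tour):
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                a, b = tour[i - 1], tour[i]          # first deleted edge
                c, d = tour[j], tour[(j + 1) % n]    # second deleted edge
                if a == c or b == d:                 # edges must be disjoint
                    continue
                if D[a][c] + D[b][d] < D[a][b] + D[c][d]:
                    tour[i:j + 1] = reversed(tour[i:j + 1])  # apply 2-change
                    improved = True
    return tour

# e.g., starting from the nearest-neighbor tour of the earlier sketch:
# tour = two_opt(D, nearest_neighbor_tour(D))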
EXAMPLE 4 If we start with the nearest-neighbor tour a - b - c - d - e - a in the graph of Figure 12.11, whose length l_nn is equal to 39, the 2-opt algorithm will move to the next tour as shown in Figure 12.14.
To generalize the notion of the 2-change, one can consider the k-change for any k ≥ 2. This operation replaces up to k edges in a current tour. In addition to 2-changes, only the 3-changes have proved to be of practical interest. The two principal possibilities of 3-changes are shown in Figure 12.15.
There are several other local search algorithms for the traveling salesman problem. The most prominent of them is the Lin-Kernighan algorithm [Lin73], which for two decades after its publication in 1973 was considered the best algorithm to obtain high-quality approximations of optimal tours. The Lin-Kernighan algorithm is a variable-opt algorithm: its move can be viewed as a 3-opt move followed by a sequence of 2-opt moves. Because of its complexity, we have to refrain from discussing this algorithm here. The excellent survey by Johnson and McGeoch [Joh07a] contains an outline of the algorithm and its modern extensions as well as methods for its efficient implementation. This survey also contains results from important empirical studies about the performance of many heuristics for the traveling salesman problem, including, of course, the Lin-Kernighan algorithm. We conclude our discussion by quoting some of these data.
FIGURE 12.14 2-changes from the nearest-neighbor tour of the graph in Figure 12.11. [The first three 2-changes examined produce inferior tours of lengths l = 42, l = 46, and l = 45, all exceeding l_nn = 39; the fourth produces a tour of length l = 38 < l_nn = 39, which becomes the new current tour.]

FIGURE 12.15 3-change: (a) Original tour. (b), (c) New tours.

Empirical Results The traveling salesman problem has been the subject of intense study for the last 50 years. This interest was driven by a combination of pure
theoretical interest and serious practical needs stemming from such newer applications as circuit-board and VLSI-chip fabrication, X-ray crystallography, and genetic engineering. Progress in developing effective heuristics, their efficient implementation by using sophisticated data structures, and the ever-increasing power of computers have led to a situation that differs drastically from a pessimistic picture painted by the worst-case theoretical results. This is especially true for the most important applications class of instances of the traveling salesman problem: points in the two-dimensional plane with the standard Euclidean distances between them.
Nowadays, Euclidean instances with up to 1000 cities can be solved exactly in quite a reasonable amount of time (typically, in minutes or faster on a good workstation) by such optimization packages as Concorde [App]. In fact, according to the information on the Web site maintained by the authors of that package, the largest instance of the traveling salesman problem solved exactly as of January 2010 was a tour through 85,900 points in a VLSI application. It significantly exceeded the previous record of the shortest tour through all 24,978 cities in Sweden. There should be little doubt that the latest record will also be eventually superseded and our ability to solve ever larger instances exactly will continue to expand. This remarkable progress does not eliminate the usefulness of approximation algorithms for such problems, however. First, some applications lead to instances that are still too large to be solved exactly in a reasonable amount of time. Second, one may well prefer spending seconds to find a tour that is within a few percent of optimum than to spend many hours or even days of computing time to find the shortest tour exactly.
But how can one tell how good or bad the approximate solution is if we do not know the length of an optimal tour? A convenient way to overcome this difficulty is to solve the linear programming problem describing the instance in question by ignoring the integrality constraints. This provides a lower bound, called the Held-Karp bound, on the length of the shortest tour. The Held-Karp bound is typically very close (less than 1%) to the length of an optimal tour, and this bound can be computed in seconds or minutes unless the instance is truly huge. Thus, for a tour
TABLE 12.1 Average tour quality and running times for various heuristics on the 10,000-city random uniform Euclidean instances [Joh07a]

Heuristic           % excess over the Held-Karp bound    Running time (seconds)
nearest neighbor    24.79                                0.28
multifragment       16.42                                0.20
Christofides         9.81                                1.04
2-opt                4.70                                1.41
3-opt                2.88                                1.50
Lin-Kernighan        2.00                                2.06
s_a obtained by some heuristic, we estimate the accuracy ratio r(s_a) = f(s_a)/f(s*) from above by the ratio f(s_a)/HK(s*), where f(s_a) is the length of the heuristic tour s_a and HK(s*) is the Held-Karp lower bound on the shortest tour's length.
For the knapsack problem, a more sophisticated approach makes it possible to approximate the optimal solution with any predefined accuracy level: a polynomial-time approximation scheme is a parametric family of algorithms guaranteeing that

f(s*)/f(s_a^(k)) ≤ 1 + 1/k for any instance of size n,
where k is an integer parameter in the range 0 ≤ k < n. The first approximation scheme was suggested by S. Sahni in 1975 [Sah75]. This algorithm generates all subsets of k items or less, and for each one that fits into the knapsack it adds the remaining items as the greedy algorithm would do (i.e., in nonincreasing order of their value-to-weight ratios). The subset of the highest value obtained in this fashion is returned as the algorithm's output.
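Sahni's scheme is straightforward to render in Python (our own illustration); on the instance of Figure 12.16 below, it returns the optimal subset {1, 3, 4} of value $69.

from itertools import combinations

def sahni_knapsack(items, W, k):
    # items: list of (weight, value) sorted in nonincreasing order of
    # value-to-weight ratio.
    n = len(items)
    best_value, best_subset = 0, []
    for r in range(k + 1):
        for subset in combinations(range(n), r):
            w = sum(items[i][0] for i in subset)
            v = sum(items[i][1] for i in subset)
            if w > W:                      # the subset itself does not fit
                continue
            chosen = list(subset)
            for i in range(n):             # greedy completion in ratio order
                if i not in subset and w + items[i][0] <= W:
                    w += items[i][0]; v += items[i][1]; chosen.append(i)
            if v > best_value:
                best_value, best_subset = v, sorted(chosen)
    return best_value, best_subset

items = [(4, 40), (7, 42), (5, 25), (1, 4)]   # the instance of Figure 12.16
print(sahni_knapsack(items, 10, 2))           # (69, [0, 2, 3]): items 1, 3, 4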
EXAMPLE 7 A small example of an approximation scheme with k = 2 is provided in Figure 12.16. The algorithm yields {1, 3, 4}, which is the optimal solution for this instance.
You can be excused for not being overly impressed by this example. And, indeed, the importance of this scheme is mostly theoretical rather than practical. It lies in the fact that, in addition to approximating the optimal solution with any predefined accuracy level, the time efficiency of this algorithm is polynomial in n. Indeed, the total number of subsets the algorithm generates before adding extra elements is

Σ_{j=0}^{k} C(n, j) = Σ_{j=0}^{k} n(n − 1) . . . (n − j + 1)/j! ≤ Σ_{j=0}^{k} n^j ≤ Σ_{j=0}^{k} n^k = (k + 1)n^k.
item    weight    value    value/weight
1       4         $40      10
2       7         $42      6
3       5         $25      5
4       1         $4       4

capacity W = 10
(a)

subset     added items     value
∅          1, 3, 4         $69
{1}        3, 4            $69
{2}        4               $46
{3}        1, 4            $69
{4}        1, 3            $69
{1, 2}     not feasible
{1, 3}     4               $69
{1, 4}     3               $69
{2, 3}     not feasible
{2, 4}                     $46
{3, 4}     1               $69
(b)

FIGURE 12.16 Example of applying Sahni's approximation scheme for k = 2. (a) Instance. (b) Subsets generated by the algorithm.
For each of those subsets, it needs O(n) time to determine the subset's possible extension. Thus, the algorithm's efficiency is in O(kn^{k+1}). Note that although it is polynomial in n, the time efficiency of Sahni's scheme is exponential in k. More sophisticated approximation schemes, called fully polynomial schemes, do not have this shortcoming. Among several books that discuss such algorithms, the monographs [Mar90] and [Kel04] are especially recommended for their wealth of other material about the knapsack problem.
Exercises 12.3
1. a. Apply the nearest-neighbor algorithm to the instance defined by the intercity distance matrix below. Start the algorithm at the first city, assuming that the cities are numbered from 1 to 5.

[  0   14    4   10    7 ]
[ 14    0    5    8    7 ]
[  4    5    0    9   16 ]
[ 10    8    9    0   32 ]
[  7    7   16   32    0 ]

b. Compute the accuracy ratio of this approximate solution.
2. a. Write pseudocode for the nearest-neighbor algorithm. Assume that its input is given by an n × n intercity distance matrix.
b. What is the time efficiency of the nearest-neighbor algorithm?
3. Apply the twice-around-the-tree algorithm to the graph in Figure 12.11a with a walk around the minimum spanning tree that starts at the same vertex a but differs from the walk in Figure 12.11b. Is the length of the obtained tour the same as the length of the tour in Figure 12.11b?
4. Prove that making a shortcut of the kind used by the twice-around-the-tree algorithm cannot increase the tour's length in a Euclidean graph.
5. What is the time efficiency class of the greedy algorithm for the knapsack problem?
6. Prove that the performance ratio R_A of the enhanced greedy algorithm for the knapsack problem is equal to 2.
7. Consider the greedy algorithm for the bin-packing problem, which is called the first-fit (FF) algorithm: place each of the items in the order given into the first bin the item fits in; when there are no such bins, place the item in a new bin and add this bin to the end of the bin list.
a. Apply FF to the instance
s1 = 0.4, s2 = 0.7, s3 = 0.2, s4 = 0.1, s5 = 0.5
and determine whether the solution obtained is optimal.
b. Determine the worst-case time efficiency of FF.
c. Prove that FF is a 2-approximation algorithm.
8. The first-fit decreasing (FFD) approximation algorithm for the bin-packing problem starts by sorting the items in nonincreasing order of their sizes and then acts as the first-fit algorithm.
a. Apply FFD to the instance
s1 = 0.4, s2 = 0.7, s3 = 0.2, s4 = 0.1, s5 = 0.5
and determine whether the solution obtained is optimal.
b. Does FFD always yield an optimal solution? Justify your answer.
c. Prove that FFD is a 1.5-approximation algorithm.
d. Run an experiment to determine which of the two algorithms, FF or FFD, yields more accurate approximations on a random sample of the problem's instances.
9. a. Design a simple 2-approximation algorithm for finding a minimum vertex cover (a vertex cover with the smallest number of vertices) in a given graph.
b. Consider the following approximation algorithm for finding a maximum independent set (an independent set with the largest number of vertices) in a given graph. Apply the 2-approximation algorithm of part (a) and output all the vertices that are not in the obtained vertex cover. Can we claim that this algorithm is a 2-approximation algorithm, too?
10. a. Design a polynomial-time greedy algorithm for the graph-coloring problem.
b. Show that the performance ratio of your approximation algorithm is infinitely large.
12.4 Algorithms for Solving Nonlinear Equations
In this section, we discuss several algorithms for solving nonlinear equations in one unknown,

f(x) = 0.    (12.4)

There are several reasons for this choice among subareas of numerical analysis. First of all, this is an extremely important problem from both a practical and theoretical point of view. It arises as a mathematical model of numerous phenomena in the sciences and engineering, both directly and indirectly. (Recall, for example, that the standard calculus technique for finding extremum points of a function f(x) is based on finding its critical points, which are the roots of the equation f'(x) = 0.) Second, it represents the most accessible topic in numerical analysis and, at the same time, exhibits its typical tools and concerns. Third, some methods for solving equations closely parallel algorithms for array searching and hence provide examples of applying general algorithm design techniques to problems of continuous mathematics.
Let us start with dispelling a misconception you might have about solving equations. Your experience with equation solving from middle school to calculus courses might have led you to believe that we can solve equations by factoring or by applying a readily available formula. Sorry to break it to you, but you have been deceived (with the best of educational intentions, of course): you were able to solve all those equations only because they had been carefully selected to make it possible. In general, we cannot solve equations exactly and need approximation algorithms to do so.
This is true even for solving the quadratic equation

ax² + bx + c = 0

because the standard formula for its roots

x_{1,2} = (−b ± √(b² − 4ac)) / (2a)

requires computing the square root, which can be done only approximately for most positive numbers. In addition, as we discussed in Section 11.4, this canonical formula needs to be modified to avoid the possibility of low-accuracy solutions.
What about formulas for roots of polynomials of degrees higher than two? Such formulas for third- and fourth-degree polynomials exist, but they are too cumbersome to be of practical value. For polynomials of degrees higher than four, there can be no general formula for their roots that would involve only the polynomial's coefficients, arithmetical operations, and radicals (taking roots). This remarkable result was published first by the Italian mathematician and physician Paolo Ruffini (1765–1822) in 1799 and rediscovered a quarter century later by the Norwegian mathematician Niels Abel (1802–1829); it was developed further by the French mathematician Evariste Galois (1811–1832).⁴
The impossibility of such a formula can hardly be considered a great disappointment. As the great German mathematician Carl Friedrich Gauss (1777–1855) put it in his thesis of 1801, the algebraic solution of an equation was no better than devising a symbol for the root of the equation and then saying that the equation had a root equal to the symbol [OCo98].
We can interpret solutions to equation (12.4) as points at which the graph of the function f(x) intersects with the x-axis. The three algorithms we discuss in this section take advantage of this interpretation. Of course, the graph of f(x) may intersect the x-axis at a single point (e.g., x³ = 0), at multiple or even infinitely many points (sin x = 0), or at no point (eˣ + 1 = 0). Equation (12.4) would then have a single root, several roots, and no roots, respectively. It is a good idea to sketch a graph of the function before starting to approximate its roots. It can help to determine the number of roots and their approximate locations. In general, it is a good idea to isolate roots, i.e., to identify intervals containing a single root of the equation in question.
Bisection Method
This algorithm is based on an observation that the graph of a continuous function must intersect with the x-axis between two points a and b at least once if the function's values have opposite signs at these two points (Figure 12.17).
The validity of this observation is proved as a theorem in calculus courses, and we take it for granted here. It serves as the basis of the following algorithm, called the bisection method, for solving equation (12.4). Starting with an interval [a, b] at whose endpoints f(x) has opposite signs, the algorithm computes the value of f(x) at the middle point x_mid = (a + b)/2. If f(x_mid) = 0, a root was found and the algorithm stops. Otherwise, it continues the search for a root either on [a, x_mid] or on [x_mid, b], depending on which of the two halves the values of f(x) have opposite signs at the endpoints of the new interval.
Since we cannot expect the bisection algorithm to stumble on the exact value of the equation's root and stop, we need a different criterion for stopping the algorithm.
4. Ruffini's discovery was completely ignored by almost all prominent mathematicians of that time. Abel died young after a difficult life of poverty. Galois was killed in a duel when he was only 21 years old. Their results on the solution of higher-degree equations are now considered to be among the crowning achievements in the history of mathematics.
FIGURE 12.17 First iteration of the bisection method: x₁ is the middle point of interval [a, b].
We can stop the algorithm after the interval [a_n, b_n] bracketing some root x* becomes so small that we can guarantee that the absolute error of approximating x* by x_n, the middle point of this interval, is smaller than some small preselected number ε > 0. Since x_n is the middle point of [a_n, b_n] and x* lies within this interval, we have

|x_n − x*| ≤ (b_n − a_n)/2.    (12.5)

Hence, we can stop the algorithm as soon as (b_n − a_n)/2 < ε or, equivalently,

x_n − a_n < ε.    (12.6)

It is not difficult to prove that

|x_n − x*| ≤ (b_1 − a_1)/2ⁿ for n = 1, 2, . . . .    (12.7)

This inequality implies that the sequence of approximations {x_n} can be made as close to root x* as we wish by choosing n large enough. In other words, we can guarantee the absolute error to be smaller than ε by making (b_1 − a_1)/2ⁿ < ε; i.e., any value of n satisfying

n > log₂((b_1 − a_1)/ε)    (12.8)

does the trick.
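A minimal Python implementation of the bisection method (our own illustration, stopping as soon as half the bracketing interval drops below the tolerance, as in (12.6)) follows.

def bisection(f, a, b, eps):
    # Assumes f is continuous on [a, b] with opposite signs at the endpoints.
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 >= eps:
        mid = (a + b) / 2
        if f(mid) == 0:
            return mid                  # stumbled on an exact root
        if f(a) * f(mid) < 0:
            b = mid                     # the root lies in the left half
        else:
            a = mid                     # the root lies in the right half
    return (a + b) / 2

print(bisection(lambda x: x**3 - x - 1, 0.0, 2.0, 1e-2))   # about 1.32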
EXAMPLE 1 Let us consider the equation

x³ − x − 1 = 0.    (12.9)

It has one real root. (See Figure 12.18 for the graph of f(x) = x³ − x − 1.) Since f(0) < 0 and f(2) > 0, the root must lie within interval [0, 2]. If we choose the error tolerance level as ε = 10⁻², inequality (12.8) would require n > log₂(2/10⁻²), or n ≥ 8 iterations.
Figure 12.19 contains a trace of the first eight iterations of the bisection method applied to equation (12.9).
Thus, we obtained x₈ = 1.3203125 as an approximate value for the root x* of equation (12.9), and we can guarantee that

|1.3203125 − x*| < 10⁻².

Moreover, if we take into account the signs of the function f(x) at a₈, b₈, and x₈, we can assert that the root lies between 1.3203125 and 1.328125.
The principal weakness of the bisection method as a general algorithm for solving equations is its slow rate of convergence compared with other known methods. It is for this reason that the method is rarely used. Also, it cannot be extended to solving more general equations and systems of equations. But it does have several strong points. It always converges to a root whenever we start with an interval whose properties are very easy to check. And it does not use derivatives of the function f(x) as some faster methods do.
What important algorithm does the method of bisection remind you of? If
you have found it to closely resemble binary search, you are correct. Both of
them solve variations of the searching problem, and they are both divide-by-
half algorithms. The principal difference lies in the problem's domain: discrete
for binary search and continuous for the bisection method. Also note that while
binary search requires its input array to be sorted, the bisection method does not
require its function to be nondecreasing or nonincreasing. Finally, whereas binary
search is very fast, the bisection method is relatively slow.
Method of False Position
The method of false position (also known by its name in Latin, regula falsi) is to interpolation search as the bisection method is to binary search. Like the bisection method, it has, on each iteration, some interval [a_n, b_n] bracketing a root of a continuous function f(x) that has opposite-sign values at a_n and b_n. Unlike the bisection method, however, it computes the next root approximation not as the middle of [a_n, b_n] but as the x-intercept of the straight line through the points (a_n, f(a_n)) and (b_n, f(b_n)) (Figure 12.20).

FIGURE 12.20 Iteration of the method of false position.
You are asked in the exercises to show that the formula for this x-intercept can be written as

x_n = (a_n f(b_n) − b_n f(a_n)) / (f(b_n) − f(a_n)). (12.10)
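Formula (12.10) translates into a Python sketch analogous to the bisection code above (again an illustration; running a fixed number of iterations is an assumed stopping rule, since the method's bracketing interval need not shrink to zero):

    def false_position(f, a, b, iterations):
        # Assumes f is continuous on [a, b] with f(a) * f(b) < 0 and iterations >= 1.
        for _ in range(iterations):
            # x-intercept of the line through (a, f(a)) and (b, f(b)): formula (12.10)
            x = (a * f(b) - b * f(a)) / (f(b) - f(a))
            if f(x) == 0:
                return x
            if f(a) * f(x) < 0:   # root is bracketed by [a, x]
                b = x
            else:                 # root is bracketed by [x, b]
                a = x
        return x

    # Eight iterations for Example 2 below reproduce x_8 = 1.318071:
    print(false_position(lambda x: x**3 - x - 1, 0.0, 2.0, 8))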
EXAMPLE 2 Figure 12.21 contains the results of the first eight iterations of this method for solving equation (12.9).

n    a_n          b_n     x_n        f(x_n)
1    0.0−         2.0+    0.333333   −1.296296
2    0.333333−    2.0+    0.676471   −1.366909
3    0.676471−    2.0+    0.960619   −1.074171
4    0.960619−    2.0+    1.144425   −0.645561
5    1.144425−    2.0+    1.242259   −0.325196
6    1.242259−    2.0+    1.288532   −0.149163
7    1.288532−    2.0+    1.309142   −0.065464
8    1.309142−    2.0+    1.318071   −0.028173

FIGURE 12.21 Trace of the method of false position for equation (12.9). The signs after the numbers in the second and third columns indicate the sign of f(x) = x^3 − x − 1 at the corresponding endpoints of the intervals.
Although for this example the method of false position does not perform
as well as the bisection method, for many instances it yields a faster converging
sequence.
Newton's Method

Newton's method, also called the Newton-Raphson method, is one of the most important general algorithms for solving equations. When applied to equation (12.4) in one unknown, it can be illustrated by Figure 12.22: the next element x_{n+1} of the method's approximation sequence is obtained as the x-intercept of the tangent line to the graph of function f(x) at x_n.

FIGURE 12.22 Iteration of Newton's method.
The analytical formula for the elements of the approximation sequence turns out to be

x_{n+1} = x_n − f(x_n)/f′(x_n) for n = 0, 1, . . . . (12.11)
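In code, each iteration of Newton's method is a single assignment. The Python sketch below is illustrative only: the separately supplied derivative fprime and the stopping criterion |x_{n+1} − x_n| < ε (one of the criteria discussed at the end of this section) are assumptions, not part of formula (12.11) itself.

    def newton(f, fprime, x0, eps, max_iter=100):
        # Follows tangent-line x-intercepts from x0; assumes fprime does not
        # vanish at the points visited. May diverge for a bad starting point.
        x = x0
        for _ in range(max_iter):
            x_next = x - f(x) / fprime(x)   # formula (12.11)
            if abs(x_next - x) < eps:
                return x_next
            x = x_next
        return x

    # Example 4 below: x^3 - x - 1 = 0 with x0 = 2 gives about 1.324718.
    print(newton(lambda x: x**3 - x - 1, lambda x: 3 * x**2 - 1, 2.0, 1e-10))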
In most cases, Newton's algorithm guarantees convergence of sequence (12.11) if an initial approximation x_0 is chosen close enough to the root. (Precisely defined prescriptions for choosing x_0 can be found in numerical analysis textbooks.) It may converge for initial approximations far from the root as well, but this is not always true.
EXAMPLE 3 Computing √a for a > 0 can be done by applying Newton's method to the equation x^2 − a = 0. In this case, formula (12.11) takes the form

x_{n+1} = x_n − f(x_n)/f′(x_n) = x_n − (x_n^2 − a)/(2x_n) = (x_n^2 + a)/(2x_n) = (1/2)(x_n + a/x_n),
which is exactly the formula we used in Section 11.4 for computing approximate
values of square roots.
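A few lines of Python reproduce this square-root iteration (an illustrative sketch; the starting value x = a and the tolerance are arbitrary choices):

    def newton_sqrt(a, eps=1e-10):
        # Approximates the square root of a > 0 by x = (x + a/x) / 2.
        x = a
        while abs(x * x - a) >= eps:
            x = (x + a / x) / 2
        return x

    print(newton_sqrt(2.0))   # about 1.41421356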
EXAMPLE 4 Let us apply Newton's method to equation (12.9), which we previously solved with the bisection method and the method of false position. Formula (12.11) for this case becomes

x_{n+1} = x_n − (x_n^3 − x_n − 1)/(3x_n^2 − 1).

As an initial element of the approximation sequence, we take, say, x_0 = 2. Figure 12.23 contains the results of the first five iterations of Newton's method.

n    x_n         x_{n+1}      f(x_{n+1})
0    2.0         1.545455     1.145755
1    1.545455    1.359615     0.153705
2    1.359615    1.325801     0.004625
3    1.325801    1.324719     4.7 · 10^−6
4    1.324719    1.324718     5 · 10^−12

FIGURE 12.23 Trace of Newton's method for equation (12.9).
You cannot fail to notice how much faster Newton's approximation sequence converges to the root than the approximation sequences of both the bisection method and the method of false position. This very fast convergence is typical of Newton's method if an initial approximation is close to the equation's root. Note, however, that on each iteration of this method we need to evaluate new values of the function and its derivative, whereas the previous two methods require only one new value of the function itself. Also, Newton's method does not bracket a root as these two methods do. Moreover, for an arbitrary function and arbitrarily chosen initial approximation, its approximation sequence may diverge. And, because formula (12.11) has the function's derivative in the denominator, the method may break down if it is equal to zero. In fact, Newton's method is most effective when f′(x) is bounded away from zero near root x*. In particular, if

|f′(x)| ≥ m_1 > 0

on the interval between x_n and x*, we can estimate the distance between x_n and x* with the help of the Mean Value Theorem of calculus:

f(x_n) − f(x*) = f′(c)(x_n − x*),

where c is some point between x_n and x*. Since f(x*) = 0 and |f′(c)| ≥ m_1, we obtain

|x_n − x*| ≤ |f(x_n)|/m_1. (12.12)
Formula (12.12) can be used as a criterion for stopping Newton's algorithm when its right-hand side becomes smaller than a preselected accuracy level ε. Other possible stopping criteria are

|x_n − x_{n−1}| < ε

and

|f(x_n)| < ε,

where ε is a small positive number. Since the last two criteria do not necessarily imply closeness of x_n to root x*, they should be considered inferior to the one based on (12.12).
APPENDIX A

Useful Formulas for the Analysis of Algorithms

Important Summation Formulas

1. Σ_{i=l}^{u} 1 = u − l + 1 (l, u are integer limits, l ≤ u);  Σ_{i=1}^{n} 1 = n

2. Σ_{i=1}^{n} i = 1 + 2 + · · · + n = n(n + 1)/2 ≈ (1/2)n^2

3. Σ_{i=1}^{n} i^2 = 1^2 + 2^2 + · · · + n^2 = n(n + 1)(2n + 1)/6 ≈ (1/3)n^3

4. Σ_{i=1}^{n} i^k = 1^k + 2^k + · · · + n^k ≈ (1/(k + 1))n^{k+1}

5. Σ_{i=0}^{n} a^i = 1 + a + · · · + a^n = (a^{n+1} − 1)/(a − 1) (a ≠ 1);  Σ_{i=0}^{n} 2^i = 2^{n+1} − 1

6. Σ_{i=1}^{n} i2^i = 1 · 2 + 2 · 2^2 + · · · + n2^n = (n − 1)2^{n+1} + 2

7. Σ_{i=1}^{n} 1/i = 1 + 1/2 + · · · + 1/n ≈ ln n + γ, where γ ≈ 0.5772 . . . (Euler's constant)

8. Σ_{i=1}^{n} lg i ≈ n lg n
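These closed forms are easy to sanity-check numerically; the following Python lines (an illustration, not part of the appendix) compare formulas 2, 3, and 6 with brute-force summation:

    n = 20
    assert sum(i for i in range(1, n + 1)) == n * (n + 1) // 2                    # formula 2
    assert sum(i * i for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6  # formula 3
    assert sum(i * 2**i for i in range(1, n + 1)) == (n - 1) * 2**(n + 1) + 2     # formula 6
    print("formulas 2, 3, and 6 check out for n =", n)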
Sum Manipulation Rules

1. Σ_{i=l}^{u} c·a_i = c Σ_{i=l}^{u} a_i

2. Σ_{i=l}^{u} (a_i ± b_i) = Σ_{i=l}^{u} a_i ± Σ_{i=l}^{u} b_i

3. Σ_{i=l}^{u} a_i = Σ_{i=l}^{m} a_i + Σ_{i=m+1}^{u} a_i, where l ≤ m < u

4. Σ_{i=l}^{u} (a_i − a_{i−1}) = a_u − a_{l−1}
Approximation of a Sum by a Definite Integral

∫_{l−1}^{u} f(x) dx ≤ Σ_{i=l}^{u} f(i) ≤ ∫_{l}^{u+1} f(x) dx for a nondecreasing f(x)

∫_{l}^{u+1} f(x) dx ≤ Σ_{i=l}^{u} f(i) ≤ ∫_{l−1}^{u} f(x) dx for a nonincreasing f(x)
Floor and Ceiling Formulas

The floor of a real number x, denoted ⌊x⌋, is defined as the greatest integer not larger than x (e.g., ⌊3.8⌋ = 3, ⌊−3.8⌋ = −4, ⌊3⌋ = 3). The ceiling of a real number x, denoted ⌈x⌉, is defined as the smallest integer not smaller than x (e.g., ⌈3.8⌉ = 4, ⌈−3.8⌉ = −3, ⌈3⌉ = 3).

1. x − 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1
2. ⌊x + n⌋ = ⌊x⌋ + n and ⌈x + n⌉ = ⌈x⌉ + n for real x and integer n
3. ⌊n/2⌋ + ⌈n/2⌉ = n
4. ⌈lg(n + 1)⌉ = ⌊lg n⌋ + 1
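A quick Python check of identities 3 and 4 (illustrative; math.floor, math.ceil, and math.log2 stand in for the floor, ceiling, and binary logarithm):

    import math

    for n in range(1, 100):
        assert math.floor(n / 2) + math.ceil(n / 2) == n                    # identity 3
        assert math.ceil(math.log2(n + 1)) == math.floor(math.log2(n)) + 1  # identity 4
    print("identities 3 and 4 hold for n = 1..99")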
Miscellaneous

1. n! ≈ √(2πn) (n/e)^n as n → ∞ (Stirling's formula)
2. Modular arithmetic (n, m are integers, p is a positive integer):
(n ± m) mod p = ((n mod p) ± (m mod p)) mod p
(nm) mod p = ((n mod p)(m mod p)) mod p
APPENDIX B
Short Tutorial on Recurrence
Relations
Sequences and Recurrence Relations
DEFINITION A (numerical) sequence is an ordered list of numbers.
Examples: 2, 4, 6, 8, 10, 12, . . . (positive even integers)
0, 1, 1, 2, 3, 5, 8, . . . (the Fibonacci numbers)
0, 1, 3, 6, 10, 15, . . . (numbers of key comparisons in selection sort)
A sequence is usually denoted by a letter (such as x or a) with a subindex (such as n or i) written in curly brackets, e.g., {x_n}. We use the alternative notation x(n). This notation stresses the fact that a sequence is a function: its argument n indicates a position of a number in the list, while the function's value x(n) stands for that number itself. x(n) is called the generic term of the sequence.
There are two principal ways to define a sequence:

by an explicit formula expressing its generic term as a function of n, e.g., x(n) = 2n for n ≥ 0

by an equation relating its generic term to one or more other terms of the sequence, combined with one or more explicit values for the first term(s), e.g.,

x(n) = x(n − 1) + n for n > 0, (B.1)
x(0) = 0. (B.2)
It is the latter method that is particularly important for analysis of recursive
algorithms (see Section 2.4 for a detailed discussion of this topic).
An equation such as (B.1) is called a recurrence equation or recurrence relation (or simply a recurrence), and an equation such as (B.2) is called its initial condition. An initial condition can be given for a value of n other than 0 (e.g., for n = 1) and for some recurrences (e.g., for the recurrence F(n) = F(n − 1) + F(n − 2) defining the Fibonacci numbers, see Section 2.5), more than one value needs to be specified by initial conditions.
To solve a given recurrence subject to a given initial condition means to find an explicit formula for the generic term of the sequence that satisfies both the recurrence equation and the initial condition or to prove that such a sequence does not exist. For example, the solution to recurrence (B.1) subject to initial condition (B.2) is

x(n) = n(n + 1)/2 for n ≥ 0. (B.3)

It can be verified by substituting this formula into (B.1) to check that the equality holds for every n > 0, i.e., that

n(n + 1)/2 = (n − 1)((n − 1) + 1)/2 + n,

and into (B.2) to check that x(0) = 0, i.e., that

0(0 + 1)/2 = 0.
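The same verification can be scripted. The Python fragment below (illustrative) generates the sequence from (B.1)–(B.2) and compares it with closed-form solution (B.3):

    def x(n):
        # Generic term of the sequence defined by (B.1)-(B.2).
        return 0 if n == 0 else x(n - 1) + n

    for n in range(50):
        assert x(n) == n * (n + 1) // 2   # closed-form solution (B.3)
    print("x(n) = n(n+1)/2 verified for n = 0..49")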
Sometimes it is convenient to distinguish between a general solution and a particular solution to a recurrence. Recurrence equations typically have an infinite number of sequences that satisfy them. A general solution to a recurrence equation is a formula that specifies all such sequences. Typically, a general solution involves one or more arbitrary constants. For example, for recurrence (B.1), the general solution can be specified by the formula

x(n) = c + n(n + 1)/2, (B.4)

where c is such an arbitrary constant. By assigning different values to c, we can get all the solutions to equation (B.1) and only these solutions.

A particular solution is a specific sequence that satisfies a given recurrence equation. Usually we are interested in a particular solution that satisfies a given initial condition. For example, sequence (B.3) is a particular solution to (B.1)–(B.2).
Methods for Solving Recurrence Relations
No universal method exists that would enable us to solve every recurrence relation. (This is not surprising, because we do not have such a method even for solving much simpler equations in one unknown f(x) = 0 for an arbitrary function f(x).) There are several techniques, however, some more powerful than others, that can solve a variety of recurrences.
Method of Forward Substitutions Starting with the initial term (or terms) of the sequence given by the initial condition(s), we can use the recurrence equation to generate the first few terms of its solution in the hope of seeing a pattern that can be expressed by a closed-end formula. If such a formula is found, its validity should be either checked by direct substitution into the recurrence equation and the initial condition (as we did for (B.1)–(B.2)) or proved by mathematical induction.
For example, consider the recurrence

x(n) = 2x(n − 1) + 1 for n > 1, (B.5)
x(1) = 1. (B.6)

We obtain the first few terms as follows:

x(1) = 1,
x(2) = 2x(1) + 1 = 2 · 1 + 1 = 3,
x(3) = 2x(2) + 1 = 2 · 3 + 1 = 7,
x(4) = 2x(3) + 1 = 2 · 7 + 1 = 15.

It is not difficult to notice that these numbers are one less than consecutive powers of 2:

x(n) = 2^n − 1 for n = 1, 2, 3, and 4.

We can prove the hypothesis that this formula yields the generic term of the solution to (B.5)–(B.6) either by direct substitution of the formula into (B.5) and (B.6) or by mathematical induction.
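Forward substitution is mechanical enough to automate. A small illustrative Python loop prints the first terms of (B.5)–(B.6) next to the conjectured formula 2^n − 1:

    x = 1                          # initial condition (B.6)
    for n in range(1, 6):
        if n > 1:
            x = 2 * x + 1          # recurrence (B.5)
        print(n, x, 2**n - 1)      # term of the sequence vs. conjectured formula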
As a practical matter, the method of forward substitutions works in a very limited number of cases because it is usually very difficult to recognize the pattern in the first few terms of the sequence.
Method of Backward Substitutions This method of solving recurrence relations works exactly as its name implies: using the recurrence relation in question, we express x(n − 1) as a function of x(n − 2) and substitute the result into the original equation to get x(n) as a function of x(n − 2). Repeating this step for x(n − 2) yields an expression of x(n) as a function of x(n − 3). For many recurrence relations, we will then be able to see a pattern and express x(n) as a function of x(n − i) for an arbitrary i = 1, 2, . . . . Selecting i to make n − i reach the initial condition and using one of the standard summation formulas often leads to a closed-end formula for the solution to the recurrence.
As an example, let us apply the method of backward substitutions to recurrence (B.1)–(B.2). Thus, we have the recurrence equation

x(n) = x(n − 1) + n.

Replacing n by n − 1 in the equation yields x(n − 1) = x(n − 2) + n − 1; after substituting this expression for x(n − 1) in the initial equation, we obtain

x(n) = [x(n − 2) + n − 1] + n = x(n − 2) + (n − 1) + n.

Replacing n by n − 2 in the initial equation yields x(n − 2) = x(n − 3) + n − 2; after substituting this expression for x(n − 2), we obtain

x(n) = [x(n − 3) + n − 2] + (n − 1) + n = x(n − 3) + (n − 2) + (n − 1) + n.

Comparing the three formulas for x(n), we can see the pattern arising after i such substitutions:^1

x(n) = x(n − i) + (n − i + 1) + (n − i + 2) + · · · + n.

Since initial condition (B.2) is specified for n = 0, we need n − i = 0, i.e., i = n, to reach it:

x(n) = x(0) + 1 + 2 + · · · + n = 0 + 1 + 2 + · · · + n = n(n + 1)/2.

1. Strictly speaking, the validity of the pattern's formula needs to be proved by mathematical induction on i. It is often easier, however, to get the solution first and then verify it (e.g., as we did for x(n) = n(n + 1)/2).
The method of backward substitutions works surprisingly well for a wide variety of simple recurrence relations. You can find many examples of its successful applications throughout this book (see, in particular, Section 2.4 and its exercises).
Linear Second-Order Recurrences with Constant Coefficients An important class of recurrences that can be solved by neither forward nor backward substitutions are recurrences of the type

ax(n) + bx(n − 1) + cx(n − 2) = f(n), (B.7)

where a, b, and c are real numbers, a ≠ 0. Such a recurrence is called a second-order linear recurrence with constant coefficients. It is second-order because elements x(n) and x(n − 2) are two positions apart in the unknown sequence in question; it is linear because the left-hand side is a linear combination of the unknown terms of the sequence; it has constant coefficients because of the assumption that a, b, and c are some fixed numbers. If f(n) = 0 for every n, the recurrence is said to be homogeneous; otherwise, it is called inhomogeneous.
Let us consider first the homogeneous case:

ax(n) + bx(n − 1) + cx(n − 2) = 0. (B.8)

Except for the degenerate situation of b = c = 0, equation (B.8) has infinitely many solutions. All these solutions, which make up the general solution to (B.8), can be obtained by one of the three formulas that follow. Which of the three formulas applies to a particular case depends on the roots of the quadratic equation with the same coefficients as recurrence (B.8):

ar^2 + br + c = 0. (B.9)

Quadratic equation (B.9) is called the characteristic equation for recurrence equation (B.8).
THEOREM 1 Let r_1, r_2 be two roots of characteristic equation (B.9) for recurrence relation (B.8).

Case 1 If r_1 and r_2 are real and distinct, the general solution to recurrence (B.8) is obtained by the formula

x(n) = αr_1^n + βr_2^n,

where α and β are two arbitrary real constants.

Case 2 If r_1 and r_2 are equal to each other, the general solution to recurrence (B.8) is obtained by the formula

x(n) = αr^n + βnr^n,

where r = r_1 = r_2 and α and β are two arbitrary real constants.

Case 3 If r_{1,2} = u ± iv are two distinct complex numbers, the general solution to recurrence (B.8) is obtained as

x(n) = γ^n[α cos nθ + β sin nθ],

where γ = √(u^2 + v^2), θ = arctan(v/u), and α and β are two arbitrary real constants.
Case 1 of this theorem arises, in particular, in deriving the explicit formula for the nth Fibonacci number (Section 2.5). First, we need to rewrite the recurrence defining this sequence as

F(n) − F(n − 1) − F(n − 2) = 0.

Its characteristic equation is

r^2 − r − 1 = 0,

with the roots

r_{1,2} = (1 ± √(1 + 4 · 1))/2 = (1 ± √5)/2.
Since this characteristic equation has two distinct real roots, we have to use the formula indicated in Case 1 of Theorem 1:

F(n) = α((1 + √5)/2)^n + β((1 − √5)/2)^n.
So far, we have ignored initial conditions F(0) = 0 and F(1) = 1. Now, we take advantage of them to find specific values of constants α and β. We do this by substituting 0 and 1, the values of n for which the initial conditions are given, into the last formula and equating the results to 0 and 1, respectively:

F(0) = α((1 + √5)/2)^0 + β((1 − √5)/2)^0 = 0,
F(1) = α((1 + √5)/2)^1 + β((1 − √5)/2)^1 = 1.
After some standard algebraic simplifications, we get the following system of two linear equations in two unknowns α and β:

α + β = 0,
((1 + √5)/2)α + ((1 − √5)/2)β = 1.

Solving the system (e.g., by substituting β = −α into the second equation and solving the equation obtained for α), we get the values α = 1/√5 and β = −1/√5 for the unknowns. Thus,
F(n) = (1/√5)((1 + √5)/2)^n − (1/√5)((1 − √5)/2)^n = (1/√5)(φ^n − φ̂^n),
where φ = (1 + √5)/2 ≈ 1.61803 and φ̂ = (1 − √5)/2 = −1/φ ≈ −0.61803.

The same approach extends to homogeneous linear recurrences with constant coefficients of orders higher than two; their general solutions are determined by the roots of the polynomial equation

a_k r^k + a_{k−1} r^{k−1} + · · · + a_0 = 0, (B.11)

which is the characteristic equation for recurrence (B.10).
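The explicit Fibonacci formula can be checked in Python against the recurrence itself; round() compensates for floating-point error in the irrational constants (an illustrative sketch):

    def fib_closed_form(n):
        # F(n) from the explicit formula derived above.
        sqrt5 = 5 ** 0.5
        phi, phi_hat = (1 + sqrt5) / 2, (1 - sqrt5) / 2
        return round((phi**n - phi_hat**n) / sqrt5)

    a, b = 0, 1                    # F(0), F(1)
    for n in range(20):
        assert fib_closed_form(n) == a
        a, b = b, a + b
    print("the closed-form formula matches the recurrence for n = 0..19")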
Finally, there are several other, more sophisticated techniques for solving
recurrence relations. Purdom and Brown [Pur04] provide a particularly thorough
discussion of this topic from the analysis of algorithms perspective.
Common Recurrence Types in Algorithm Analysis
There are a few recurrence types that arise in the analysis of algorithms with
remarkable regularity. This happens because they reflect one of the fundamental
design techniques.
Decrease-by-One A decrease-by-one algorithm solves a problem by exploiting a relationship between a given instance of size n and a smaller instance of size n − 1. Specific examples include recursive evaluation of n! (Section 2.4) and insertion sort (Section 4.1). The recurrence equation for investigating the time efficiency of such algorithms typically has the following form:

T(n) = T(n − 1) + f(n), (B.12)
where function f(n) accounts for the time needed to reduce an instance to a smaller one and to extend the solution of the smaller instance to a solution of the larger instance. Applying backward substitutions to (B.12) yields

T(n) = T(n − 1) + f(n)
     = T(n − 2) + f(n − 1) + f(n)
     = · · ·
     = T(0) + Σ_{j=1}^{n} f(j).
For a specific function f(x), the sum Σ_{j=1}^{n} f(j) can usually be either computed exactly or its order of growth ascertained. For example, if f(n) = 1, then Σ_{j=1}^{n} f(j) = n; if f(n) = log n, then Σ_{j=1}^{n} f(j) ∈ Θ(n log n); if f(n) = n^k, then Σ_{j=1}^{n} f(j) ∈ Θ(n^{k+1}). The sum Σ_{j=1}^{n} f(j) can also be approximated by formulas involving integrals (see, in particular, the appropriate formulas in Appendix A).
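As an illustration, the following Python fragment unrolls (B.12) with T(0) = 0 and confirms that f(n) = n yields the sum n(n + 1)/2, which is in Θ(n^2); the function name T is just a stand-in:

    def T(n, f):
        # Unrolls T(n) = T(n-1) + f(n) with T(0) = 0, i.e., sums f(1), ..., f(n).
        return sum(f(j) for j in range(1, n + 1))

    n = 100
    assert T(n, lambda j: j) == n * (n + 1) // 2   # f(n) = n gives Theta(n^2)
    print(T(n, lambda j: j))                       # prints 5050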
Decrease-by-a-Constant-Factor A decrease-by-a-constant-factor algorithm solves a problem by reducing its instance of size n to an instance of size n/b (b = 2 for most but not all such algorithms), solving the smaller instance recursively, and then, if necessary, extending the solution of the smaller instance to a solution of the given instance. The most important example is binary search; other examples include exponentiation by squaring (introduction to Chapter 4), Russian peasant multiplication, and the fake-coin problem (Section 4.4).

The recurrence equation for investigating the time efficiency of such algorithms typically has the form

T(n) = T(n/b) + f(n), (B.13)
where b > 1 and function f(n) accounts for the time needed to reduce an instance to a smaller one and to extend the solution of the smaller instance to a solution of the larger instance. Strictly speaking, equation (B.13) is valid only for n = b^k, k = 0, 1, . . . . For values of n that are not powers of b, there is typically some round-off, usually involving the floor and/or ceiling functions. The standard approach to such equations is to solve them for n = b^k first. Afterward, either the solution is tweaked to make it valid for all n's (see, for example, Problem 7 in Exercises 2.4), or the order of growth of the solution is established based on the smoothness rule (Theorem 4 in this appendix).

By considering n = b^k, k = 0, 1, . . . , and applying backward substitutions to (B.13), we obtain the following:
T(b^k) = T(b^{k−1}) + f(b^k)
       = T(b^{k−2}) + f(b^{k−1}) + f(b^k)
       = · · ·
       = T(1) + Σ_{j=1}^{k} f(b^j).
For a specific function f(x), the sum Σ_{j=1}^{k} f(b^j) can usually be either computed exactly or its order of growth ascertained. For example, if f(n) = 1,

Σ_{j=1}^{k} f(b^j) = k = log_b n.

If f(n) = n, to give another example,

Σ_{j=1}^{k} f(b^j) = Σ_{j=1}^{k} b^j = b(b^k − 1)/(b − 1) = b(n − 1)/(b − 1).
Also, recurrence (B.13) is a special case of recurrence (B.14) covered by the Master Theorem (Theorem 5 in this appendix). According to this theorem, in particular, if f(n) ∈ Θ(n^d) where d > 0, then T(n) ∈ Θ(n^d) as well.
Divide-and-Conquer A divide-and-conquer algorithm solves a problem by dividing its given instance into several smaller instances, solving each of them recursively, and then, if necessary, combining the solutions to the smaller instances into a solution to the given instance. Assuming that all smaller instances have the same size n/b, with a of them being actually solved, we get the following recurrence valid for n = b^k, k = 1, 2, . . . :

T(n) = aT(n/b) + f(n), (B.14)

where a ≥ 1, b ≥ 2, and f(n) is a function that accounts for the time spent on dividing the problem into smaller ones and combining their solutions. Recurrence (B.14) is called the general divide-and-conquer recurrence.^2

2. In our terminology, for a = 1, it covers decrease-by-a-constant-factor, not divide-and-conquer, algorithms.

Applying backward substitutions to (B.14) yields the following:
T(b^k) = aT(b^{k−1}) + f(b^k)
       = a[aT(b^{k−2}) + f(b^{k−1})] + f(b^k) = a^2 T(b^{k−2}) + af(b^{k−1}) + f(b^k)
       = a^2[aT(b^{k−3}) + f(b^{k−2})] + af(b^{k−1}) + f(b^k)
       = a^3 T(b^{k−3}) + a^2 f(b^{k−2}) + af(b^{k−1}) + f(b^k)
       = · · ·
       = a^k T(1) + a^{k−1} f(b^1) + a^{k−2} f(b^2) + · · · + a^0 f(b^k)
       = a^k [T(1) + Σ_{j=1}^{k} f(b^j)/a^j].
Since a^k = a^{log_b n} = n^{log_b a}, we get the following formula for the solution to recurrence (B.14) for n = b^k:

T(n) = n^{log_b a}[T(1) + Σ_{j=1}^{log_b n} f(b^j)/a^j]. (B.15)
Obviously, the order of growth of solution T (n) depends on the values of the
constants a and b and the order of growth of the function f (n). Under certain
assumptions about f (n) discussed in the next section, we can simplify formula
(B.15) and get explicit results about the order of growth of T (n).
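Formula (B.15) can be sanity-checked by evaluating recurrence (B.14) directly. The Python sketch below (illustrative) does this for a = b = 2 and f(n) = n, where log_b a = 1 and every term f(b^j)/a^j equals 1, so (B.15) reduces to T(n) = n[T(1) + log_2 n]:

    def T(n, a, b, f, T1=1):
        # Evaluates T(n) = a*T(n/b) + f(n) by direct recursion, for n a power of b.
        return T1 if n == 1 else a * T(n // b, a, b, f, T1) + f(n)

    a = b = 2
    for k in range(1, 11):
        n = b**k
        assert T(n, a, b, lambda m: m) == n * (1 + k)   # formula (B.15): n*(T(1) + log2 n)
    print("direct recursion agrees with formula (B.15) for n = 2, 4, ..., 1024")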
Smoothness Rule and the Master Theorem We mentioned earlier that the time efficiency of decrease-by-a-constant-factor and divide-and-conquer algorithms is usually investigated first for n's that are powers of b. (Most often b = 2, as it is in binary search and mergesort; sometimes b = 3, as it is in the better algorithm for the fake-coin problem of Section 4.4, but it can be any integer greater than or equal to 2.) The question we are going to address now is when the order of growth observed for n's that are powers of b can be extended to all its values.
DEFINITION Let f(n) be a nonnegative function defined on the set of natural numbers. f(n) is called eventually nondecreasing if there exists some nonnegative integer n_0 so that f(n) is nondecreasing on the interval [n_0, ∞), i.e.,

f(n_1) ≤ f(n_2) for all n_2 > n_1 ≥ n_0.
For example, the function (n − 100)^2 is eventually nondecreasing, although it is decreasing on the interval [0, 100], and the function sin^2(πn/2) is a function that is not eventually nondecreasing. The vast majority of functions we encounter in the analysis of algorithms are eventually nondecreasing. Most of them are, in fact, nondecreasing on their entire domains.
DEFINITION Let f(n) be a nonnegative function defined on the set of natural numbers. f(n) is called smooth if it is eventually nondecreasing and

f(2n) ∈ Θ(f(n)).
It is easy to check that functions which do not grow too fast, including log n, n, n log n, and n^α where α ≥ 0, are smooth.

THEOREM 4 (Smoothness Rule) Let f(n) be a smooth function. If T(n) ∈ Θ(f(n)) for values of n that are powers of b, where b ≥ 2, then T(n) ∈ Θ(f(n)). (The analogous assertions hold for the O and Ω notations as well.)

THEOREM 5 (Master Theorem) Let T(n) be an eventually nondecreasing function that satisfies recurrence (B.14) for n = b^k, k = 1, 2, . . . , where a ≥ 1, b ≥ 2. If f(n) ∈ Θ(n^d), where d ≥ 0, then

T(n) ∈ Θ(n^d) if a < b^d,  Θ(n^d log n) if a = b^d,  Θ(n^{log_b a}) if a > b^d.

(Similarly for the O and Ω notations.)

PROOF We will prove the theorem for the principal special case of f(n) = n^d. Substituting f(b^j) = b^{jd} into formula (B.15) yields, for n = b^k,

T(n) = n^{log_b a}[T(1) + Σ_{j=1}^{log_b n} b^{jd}/a^j] = n^{log_b a}[T(1) + Σ_{j=1}^{log_b n} (b^d/a)^j].
The sum in this formula is that of a geometric series, and therefore
Σ_{j=1}^{log_b n} (b^d/a)^j = (b^d/a) · ((b^d/a)^{log_b n} − 1) / ((b^d/a) − 1) if b^d ≠ a

and

Σ_{j=1}^{log_b n} (b^d/a)^j = log_b n if b^d = a.
If a < b^d, then b^d/a > 1, and therefore

Σ_{j=1}^{log_b n} (b^d/a)^j = (b^d/a) · ((b^d/a)^{log_b n} − 1) / ((b^d/a) − 1) ∈ Θ((b^d/a)^{log_b n}).
Hence, in this case,

T(n) = n^{log_b a}[T(1) + Σ_{j=1}^{log_b n} (b^d/a)^j] ∈ Θ(n^{log_b a}(b^d/a)^{log_b n}) = Θ(a^{log_b n}(b^d/a)^{log_b n})
     = Θ(b^{d log_b n}) = Θ((b^{log_b n})^d) = Θ(n^d).
If a > b^d, then b^d/a < 1, and therefore

Σ_{j=1}^{log_b n} (b^d/a)^j = (b^d/a) · ((b^d/a)^{log_b n} − 1) / ((b^d/a) − 1) ∈ Θ(1).
Hence, in this case,

T(n) = n^{log_b a}[T(1) + Σ_{j=1}^{log_b n} (b^d/a)^j] ∈ Θ(n^{log_b a}).
If a = b^d, then b^d/a = 1, and therefore

T(n) = n^{log_b a}[T(1) + Σ_{j=1}^{log_b n} (b^d/a)^j] = n^{log_b a}[T(1) + log_b n]
     ∈ Θ(n^{log_b a} log_b n) = Θ(n^{log_b b^d} log_b n) = Θ(n^d log_b n).
Since f(n) = n^d is a smooth function for any d ≥ 0, a reference to Theorem 4 completes the proof.
Theorem 5 provides a very convenient tool for a quick efficiency analysis of divide-and-conquer and decrease-by-a-constant-factor algorithms. You can find examples of such applications throughout the book.
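In code form, applying Theorem 5 is a matter of comparing a with b^d. The helper below is hypothetical (not from the book) and merely returns the Θ class as a string:

    import math

    def master_theorem(a, b, d):
        # Order of growth of T(n) = a*T(n/b) + Theta(n^d), per Theorem 5.
        if a < b**d:
            return f"Theta(n^{d})"
        if a == b**d:
            return f"Theta(n^{d} log n)"
        return f"Theta(n^{math.log(a, b):.3f})"   # the exponent is log_b a

    print(master_theorem(1, 2, 0))   # binary search: Theta(n^0 log n), i.e., Theta(log n)
    print(master_theorem(2, 2, 1))   # mergesort: Theta(n^1 log n), i.e., Theta(n log n)
    print(master_theorem(4, 2, 1))   # a > b^d: Theta(n^2.000)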
References

[Ade62] Adelson-Velsky, G.M. and Landis, E.M. An algorithm for organization of information. Soviet Mathematics Doklady, vol. 3, 1962, 1259–1263.

[Adl94] Adleman, L.M. Molecular computation of solutions to combinatorial problems. Science, vol. 266, 1994, 1021–1024.

[Agr04] Agrawal, M., Kayal, N., and Saxena, N. PRIMES is in P. Annals of Mathematics, vol. 160, no. 2, 2004, 781–793.

[Aho74] Aho, A.V., Hopcroft, J.E., and Ullman, J.D. The Design and Analysis of Computer Algorithms. Addison-Wesley, 1974.

[Aho83] Aho, A.V., Hopcroft, J.E., and Ullman, J.D. Data Structures and Algorithms. Addison-Wesley, 1983.

[Ahu93] Ahuja, R.K., Magnanti, T.L., and Orlin, J.B. Network Flows: Theory, Algorithms, and Applications. Prentice Hall, 1993.

[App] Applegate, D.L., Bixby, R.E., Chvátal, V., and Cook, W.J. The Traveling Salesman Problems. www.tsp.gatech.edu/index.html.

[App07] Applegate, D.L., Bixby, R.E., Chvátal, V., and Cook, W.J. The Traveling Salesman Problem: A Computational Study. Princeton University Press, 2007.

[Arb93] Arbel, A. Exploring Interior-Point Linear Programming: Algorithms and Software (Foundations of Computing). MIT Press, 1993.

[Aro09] Arora, S. and Barak, B. Computational Complexity: A Modern Approach. Cambridge University Press, 2009.

[Ata09] Atallah, M.J. and Blanton, M., eds. Algorithms and Theory of Computation Handbook, 2nd ed. (two-volume set), Chapman and Hall/CRC, 2009.

[Avi07] Avidan, S. and Shamir, A. Seam carving for content-aware image resizing. ACM Transactions on Graphics, vol. 26, no. 3, article 10, July 2007, 9 pages.
[Azi10] Aziz, A. and Prakash, A. Algorithms for Interviews. algorithmsforinterviews.com, 2010.

[Baa00] Baase, S. and Van Gelder, A. Computer Algorithms: Introduction to Design and Analysis, 3rd ed. Addison-Wesley, 2000.

[Bae81] Baecker, R. (with assistance of D. Sherman) Sorting out Sorting. 30-minute color sound film. Dynamic Graphics Project, University of Toronto, 1981. video.google.com/videoplay?docid=3970523862559-774879#docid=-4110947752111188923.

[Bae98] Baecker, R. Sorting out sorting: a case study of software visualization for teaching computer science. In Software Visualization: Programming as a Multimedia Experience, edited by J. Stasko, J. Domingue, M.C. Brown, and B.A. Price. MIT Press, 1998, 369–381.

[BaY95] Baeza-Yates, R.A. Teaching algorithms. ACM SIGACT News, vol. 26, no. 4, Dec. 1995, 51–59.

[Bay72] Bayer, R. and McCreight, E.M. Organization and maintenance of large ordered indices. Acta Informatica, vol. 1, no. 3, 1972, 173–189.

[Bel09] Bell, J., and Stevens, B. A survey of known results and research areas for n-queens. Discrete Mathematics, vol. 309, issue 1, Jan. 2009, 1–31.

[Bel57] Bellman, R.E. Dynamic Programming. Princeton University Press, 1957.

[Ben00] Bentley, J. Programming Pearls, 2nd ed. Addison-Wesley, 2000.

[Ben90] Bentley, J.L. Experiments on traveling salesman heuristics. In Proceedings of the First Annual ACM-SIAM Symposium on Discrete Algorithms, 1990, 91–99.

[Ben93] Bentley, J.L. and McIlroy, M.D. Engineering a sort function. Software: Practice and Experience, vol. 23, no. 11, 1993, 1249–1265.

[Ber03] Berlekamp, E.R., Conway, J.H., and Guy, R.K. Winning Ways for Your Mathematical Plays, 2nd ed., volumes 1–4. A K Peters, 2003.

[Ber00] Berlinski, D. The Advent of the Algorithm: The Idea That Rules the World. Harcourt, 2000.

[Ber05] Berman, K.A. and Paul, J.L. Algorithms: Sequential, Parallel, and Distributed. Course Technology, 2005.

[Ber01] Bertsekas, D.P. Dynamic Programming and Optimal Control: 2nd Edition (Volumes 1 and 2). Athena Scientific, 2001.

[Blo73] Blum, M., Floyd, R.W., Pratt, V., Rivest, R.L., and Tarjan, R.E. Time bounds for selection. Journal of Computer and System Sciences, vol. 7, no. 4, 1973, 448–461.

[Bog] Bogomolny, A. Interactive Mathematics Miscellany and Puzzles. www.cut-the-knot.org.
[Boy77] Boyer, R.S. and Moore, J.S. A fast string searching algorithm. Communications of the ACM, vol. 21, no. 10, 1977, 762–772.

[Bra96] Brassard, G. and Bratley, P. Fundamentals of Algorithmics. Prentice Hall, 1996.

[Car79] Carmony, L. Odd pie fights. Mathematics Teacher, vol. 72, no. 1, 1979, 61–64.

[Cha98] Chabert, Jean-Luc, ed. A History of Algorithms: From the Pebble to the Microchip. Translated by Chris Weeks. Springer, 1998.

[Cha00] Chandler, J.P. Patent protection of computer programs. Minnesota Intellectual Property Review, vol. 1, no. 1, 2000, 33–46.

[Chr76] Christofides, N. Worst-Case Analysis of a New Heuristic for the Traveling Salesman Problem. Technical Report, GSIA, Carnegie-Mellon University, 1976.

[Chv83] Chvátal, V. Linear Programming. W.H. Freeman, 1983.

[Com79] Comer, D. The ubiquitous B-tree. ACM Computing Surveys, vol. 11, no. 2, 1979, 121–137.

[Coo71] Cook, S.A. The complexity of theorem-proving procedures. In Proceeding of the Third Annual ACM Symposium on the Theory of Computing, 1971, 151–158.

[Coo87] Coppersmith, D. and Winograd, S. Matrix multiplication via arithmetic progressions. In Proceedings of Nineteenth Annual ACM Symposium on the Theory of Computing, 1987, 1–6.

[Cor09] Cormen, T.H., Leiserson, C.E., Rivest, R.L., and Stein, C. Introduction to Algorithms, 3rd ed. MIT Press, 2009.

[Cra07] Crack, T.F. Heard on the Street: Quantitative Questions from Wall Street Job Interviews, 10th ed., self-published, 2007.

[Dan63] Dantzig, G.B. Linear Programming and Extensions. Princeton University Press, 1963.

[deB10] de Berg, M., Cheong, O., van Kreveld, M., and Overmars, M. Computational Geometry: Algorithms and Applications, 3rd ed. Springer, 2010.

[Dew93] Dewdney, A.K. The (New) Turing Omnibus. Computer Science Press, 1993.

[Dij59] Dijkstra, E.W. A note on two problems in connection with graphs. Numerische Mathematik, vol. 1, 1959, 269–271.

[Dij76] Dijkstra, E.W. A Discipline of Programming. Prentice-Hall, 1976.

[Dud70] Dudley, U. The first recreational mathematics book. Journal of Recreational Mathematics, 1970, 164–169.
[Eas10] Easley, D., and Kleinberg, J. Networks, Crowds, and Markets: Reasoning About a Highly Connected World. Cambridge University Press, 2010.

[Edm65] Edmonds, J. Paths, trees, and flowers. Canadian Journal of Mathematics, vol. 17, 1965, 449–467.

[Edm72] Edmonds, J. and Karp, R.M. Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM, vol. 19, no. 2, 1972, 248–264.

[Flo62] Floyd, R.W. Algorithm 97: shortest path. Communications of the ACM, vol. 5, no. 6, 1962, 345.

[For57] Ford, L.R., Jr., and Fulkerson, D.R. A simple algorithm for finding maximal network flows and an application to the Hitchcock problem. Canadian Journal of Mathematics, vol. 9, no. 2, 1957, 210–218.

[For62] Ford, L.R., Jr., and Fulkerson, D.R. Flows in Networks. Princeton University Press, 1962.

[For68] Forsythe, G.E. What to do till the computer scientist comes. American Mathematical Monthly, vol. 75, no. 5, 1968, 454–462.

[For69] Forsythe, G.E. Solving a quadratic equation on a computer. In The Mathematical Sciences, edited by COSRIMS and George Boehm, MIT Press, 1969, 138–152.

[Gal62] Gale, D. and Shapley, L.S. College admissions and the stability of marriage. American Mathematical Monthly, vol. 69, Jan. 1962, 9–15.

[Gal86] Galil, Z. Efficient algorithms for finding maximum matching in graphs. Computing Surveys, vol. 18, no. 1, March 1986, 23–38.

[Gar99] Gardiner, A. Mathematical Puzzling. Dover, 1999.

[Gar78] Gardner, M. aha! Insight. Scientific American/W.H. Freeman, 1978.

[Gar88] Gardner, M. Hexaflexagons and Other Mathematical Diversions: The First Scientific American Book of Puzzles and Games. University of Chicago Press, 1988.

[Gar94] Gardner, M. My Best Mathematical and Logic Puzzles. Dover, 1994.

[Gar79] Garey, M.R. and Johnson, D.S. Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman, 1979.

[Ger03] Gerald, C.F. and Wheatley, P.O. Applied Numerical Analysis, 7th ed. Addison-Wesley, 2003.

[Gin04] Ginat, D. Embedding instructive assertions in program design. In Proceedings of ITiCSE'04, June 28–30, 2004, Leeds, UK, 62–66.

[Gol94] Golomb, S.W. Polyominoes: Puzzles, Patterns, Problems, and Packings. Revised and expanded second edition. Princeton University Press, 1994.
[Gon91] Gonnet, G.H. and Baeza-Yates, R. Handbook of Algorithms and Data Structures in Pascal and C, 2nd ed. Addison-Wesley, 1991.

[Goo02] Goodrich, M.T. and Tamassia, R. Algorithm Design: Foundations, Analysis, and Internet Examples. John Wiley & Sons, 2002.

[Gra94] Graham, R.L., Knuth, D.E., and Patashnik, O. Concrete Mathematics: A Foundation for Computer Science, 2nd ed. Addison-Wesley, 1994.

[Gre07] Green, D.H. and Knuth, D.E. Mathematics for Analysis of Algorithms, 3rd edition. Birkhäuser, 2007.

[Gri81] Gries, D. The Science of Programming. Springer, 1981.

[Gus89] Gusfield, D. and Irving, R.W. The Stable Marriage Problem: Structure and Algorithms. MIT Press, 1989.

[Gut07] Gutin, G. and Punnen, A.P., eds. Traveling Salesman Problem and Its Variations. Springer, 2007.

[Har92] Harel, D. Algorithmics: The Spirit of Computing, 2nd ed. Addison-Wesley, 1992.

[Har65] Hartmanis, J. and Stearns, R.E. On the computational complexity of algorithms. Transactions of the American Mathematical Society, vol. 117, May 1965, 285–306.

[Hea63] Heap, B.R. Permutations by interchanges. Computer Journal, vol. 6, 1963, 293–294.

[Het10] Hetland, M.L. Python Algorithms: Mastering Basic Algorithms in the Python Language. Apress, 2010.

[Hig93] Higham, N.J. The accuracy of floating point summation. SIAM Journal on Scientific Computing, vol. 14, no. 4, July 1993, 783–799.

[Hoa96] Hoare, C.A.R. Quicksort. In Great Papers in Computer Science, Phillip Laplante, ed. West Publishing Company, 1996, 31–39.

[Hoc97] Hochbaum, D.S., ed. Approximation Algorithms for NP-Hard Problems. PWS Publishing, 1997.

[Hop87] Hopcroft, J.E. Computer science: the emergence of a discipline. Communications of the ACM, vol. 30, no. 3, March 1987, 198–202.

[Hop73] Hopcroft, J.E. and Karp, R.M. An n^{5/2} algorithm for maximum matchings in bipartite graphs. SIAM Journal on Computing, vol. 2, 1973, 225–231.

[Hor07] Horowitz, E., Sahni, S., and Rajasekaran, S. Computer Algorithms, 2nd ed. Silicon Press, 2007.

[Hor80] Horspool, R.N. Practical fast searching in strings. Software: Practice and Experience, vol. 10, 1980, 501–506.

[Hu02] Hu, T.C. and Shing, M.T. Combinatorial Algorithms: Enlarged Second Edition. Dover, 2002.
[Huf52] Huffman, D.A. A method for the construction of minimum redundancy codes. Proceedings of the IRE, vol. 40, 1952, 1098–1101.

[Joh07a] Johnson, D.S. and McGeoch, L.A. Experimental analysis of heuristics for the STSP. In The Traveling Salesman Problem and Its Variations, edited by G. Gutin and A.P. Punnen, Springer, 2007, 369–443.

[Joh07b] Johnson, D.S., Gutin, G., McGeoch, L.A., Yeo, A., Zhang, W., and Zverovitch, A. Experimental analysis of heuristics for the ATSP. In The Traveling Salesman Problem and Its Variations, edited by G. Gutin and A.P. Punnen, Springer, 2007, 445–487.

[Joh04] Johnsonbaugh, R. and Schaefer, M. Algorithms. Pearson Education, 2004.

[Kar84] Karmarkar, N. A new polynomial-time algorithm for linear programming. Combinatorica, vol. 4, no. 4, 1984, 373–395.

[Kar72] Karp, R.M. Reducibility among combinatorial problems. In Complexity of Computer Computations, edited by R.E. Miller and J.W. Thatcher. Plenum Press, 1972, 85–103.

[Kar86] Karp, R.M. Combinatorics, complexity, and randomness. Communications of the ACM, vol. 29, no. 2, Feb. 1986, 89–109.

[Kar91] Karp, R.M. An introduction to randomized algorithms. Discrete Applied Mathematics, vol. 34, Nov. 1991, 165–201.

[Kel04] Kellerer, H., Pferschy, U., and Pisinger, D. Knapsack Problems. Springer, 2004.

[Ker99] Kernighan, B.W. and Pike, R. The Practice of Programming. Addison-Wesley, 1999.

[Kha79] Khachian, L.G. A polynomial algorithm in linear programming. Soviet Mathematics Doklady, vol. 20, 1979, 191–194.

[Kle06] Kleinberg, J. and Tardos, É. Algorithm Design. Pearson, 2006.

[Knu75] Knuth, D.E. Estimating the efficiency of backtrack programs. Mathematics of Computation, vol. 29, Jan. 1975, 121–136.

[Knu76] Knuth, D.E. Big omicron and big omega and big theta. ACM SIGACT News, vol. 8, no. 2, 1976, 18–23.

[Knu96] Knuth, D.E. Selected Papers on Computer Science. CSLI Publications and Cambridge University Press, 1996.

[KnuI] Knuth, D.E. The Art of Computer Programming, Volume 1: Fundamental Algorithms, 3rd ed. Addison-Wesley, 1997.

[KnuII] Knuth, D.E. The Art of Computer Programming, Volume 2: Seminumerical Algorithms, 3rd ed. Addison-Wesley, 1998.

[KnuIII] Knuth, D.E. The Art of Computer Programming, Volume 3: Sorting and Searching, 2nd ed. Addison-Wesley, 1998.

[KnuIV] Knuth, D.E. The Art of Computer Programming, Volume 4A, Combinatorial Algorithms, Part 1. Addison-Wesley, 2011.

[Knu77] Knuth, D.E., Morris, Jr., J.H., and Pratt, V.R. Fast pattern matching in strings. SIAM Journal on Computing, vol. 5, no. 2, 1977, 323–350.
[Kol95] Kolman, B. and Beck, R.E. Elementary Linear Programming with Applications, 2nd ed. Academic Press, 1995.

[Kor92] Kordemsky, B.A. The Moscow Puzzles. Dover, 1992.

[Kor05] Kordemsky, B.A. Mathematical Charmers. Oniks, 2005 (in Russian).

[Kru56] Kruskal, J.B. On the shortest spanning subtree of a graph and the traveling salesman problem. Proceedings of the American Mathematical Society, vol. 7, 1956, 48–50.

[Laa10] Laakmann, G. Cracking the Coding Interview, 4th ed. CareerCup, 2010.

[Law85] Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G., and Shmoys, D.B., eds. The Traveling Salesman Problem. John Wiley, 1985.

[Lev73] Levin, L.A. Universal sorting problems. Problemy Peredachi Informatsii, vol. 9, no. 3, 1973, 115–116 (in Russian). English translation in Problems of Information Transmission, vol. 9, 265–266.

[Lev99] Levitin, A. Do we teach the right algorithm design techniques? In Proceedings of SIGCSE'99, New Orleans, LA, 1999, 179–183.

[Lev11] Levitin, A. and Levitin, M. Algorithmic Puzzles. Oxford University Press, 2011.

[Lin73] Lin, S. and Kernighan, B.W. An effective heuristic algorithm for the traveling-salesman problem. Operations Research, vol. 21, 1973, 498–516.

[Man89] Manber, U. Introduction to Algorithms: A Creative Approach. Addison-Wesley, 1989.

[Mar90] Martello, S. and Toth, P. Knapsack Problems: Algorithms and Computer Implementations. John Wiley, 1990.

[Mic10] Michalewicz, Z. and Fogel, D.B. How to Solve It: Modern Heuristics, second, revised and extended edition. Springer, 2010.

[Mil05] Miller, R. and Boxer, L. Algorithms Sequential and Parallel: A Unified Approach, 2nd ed. Charles River Media, 2005.

[Mor91] Moret, B.M.E. and Shapiro, H.D. Algorithms from P to NP. Volume I: Design and Efficiency. Benjamin Cummings, 1991.

[Mot95] Motwani, R. and Raghavan, P. Randomized Algorithms. Cambridge University Press, 1995.

[Nea09] Neapolitan, R. and Naimipour, K. Foundations of Algorithms, Fourth Edition. Jones and Bartlett, 2009.
[Nem89] Nemhauser, G.L., Rinnooy Kan, A.H.G., and Todd, M.J., eds. Optimization. North-Holland, Amsterdam, 1989.

[OCo98] O'Connor, J.J. and Robertson, E.F. The MacTutor History of Mathematics archive, June 1998, www-history.mcs.st-andrews.ac.uk/history/Mathematicians/Abel.html.

[Ong84] Ong, H.L. and Moore, J.B. Worst-case analysis of two traveling salesman heuristics. Operations Research Letters, vol. 2, 1984, 273–277.

[ORo98] O'Rourke, J. Computational Geometry in C, 2nd ed. Cambridge University Press, 1998.

[Ove80] Overmars, M.H. and van Leeuwen, J. Further comments on Bykat's convex hull algorithm. Information Processing Letters, vol. 10, no. 4/5, 1980, 209–212.

[Pan78] Pan, V.Y. Strassen's algorithm is not optimal. Proceedings of Nineteenth Annual IEEE Symposium on the Foundations of Computer Science, 1978, 166–176.

[Pap82] Papadimitriou, C.H. and Steiglitz, K. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, 1982.

[Par95] Parberry, I. Problems on Algorithms. Prentice-Hall, 1995.

[Pol57] Pólya, G. How to Solve It: A New Aspect of Mathematical Method, 2nd ed. Doubleday, 1957.

[Pre85] Preparata, F.P. and Shamos, M.I. Computational Geometry: An Introduction. Springer, 1985.

[Pri57] Prim, R.C. Shortest connection networks and some generalizations. Bell System Technical Journal, vol. 36, no. 1, 1957, 1389–1401.

[Pur04] Purdom, P.W., Jr., and Brown, C. The Analysis of Algorithms. Oxford University Press, 2004.

[Raw91] Rawlins, G.J.E. Compared to What? An Introduction to the Analysis of Algorithms. Computer Science Press, 1991.

[Rei77] Reingold, E.M., Nievergelt, J., and Deo, N. Combinatorial Algorithms: Theory and Practice. Prentice-Hall, 1977.

[Riv78] Rivest, R.L., Shamir, A., and Adleman, L.M. A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM, vol. 21, no. 2, Feb. 1978, 120–126.

[Ros07] Rosen, K. Discrete Mathematics and Its Applications, 6th ed., McGraw-Hill, 2007.

[Ros77] Rosenkrantz, D.J., Stearns, R.E., and Lewis, P.M. An analysis of several heuristics for the traveling salesman problem. SIAM Journal of Computing, vol. 6, 1977, 563–581.
[Roy59] Roy, B. Transitivité et connexité. Comptes rendus de l'Académie des Sciences, vol. 249, 216–218, 1959.

[Sah75] Sahni, S. Approximation algorithms for the 0/1 knapsack problem. Journal of the ACM, vol. 22, no. 1, Jan. 1975, 115–124.

[Sah76] Sahni, S. and Gonzalez, T. P-complete approximation problems. Journal of the ACM, vol. 23, no. 3, July 1976, 555–565.

[Say05] Sayood, K. Introduction to Data Compression, 3rd ed. Morgan Kaufmann Publishers, 2005.

[Sed02] Sedgewick, R. Algorithms in C/C++/Java, Parts 1–5: Fundamentals, Data Structures, Sorting, Searching, and Graph Algorithms, 3rd ed. Addison-Wesley Professional, 2002.

[Sed96] Sedgewick, R. and Flajolet, P. An Introduction to the Analysis of Algorithms. Addison-Wesley Professional, 1996.

[Sed11] Sedgewick, R. and Wayne, K. Algorithms, Fourth Edition. Pearson Education, 2011.

[Sha07] Shaffer, C.A., Cooper, M., and Edwards, S.H. Algorithm visualization: a report on the state of the field. ACM SIGCSE Bulletin, vol. 39, no. 1, March 2007, 150–154.

[Sha98] Shasha, D. and Lazere, C. Out of Their Minds: The Lives and Discoveries of 15 Great Computer Scientists. Copernicus, 1998.

[She59] Shell, D.L. A high-speed sorting procedure. Communications of the ACM, vol. 2, no. 7, July 1959, 30–32.

[Sho94] Shor, P.W. Algorithms for quantum computation: discrete algorithms and factoring. Proceedings 35th Annual Symposium on Foundations of Computer Science (Shafi Goldwasser, ed.). IEEE Computer Society Press, 1994, 124–134.

[Sip05] Sipser, M. Introduction to the Theory of Computation, 2nd ed. Course Technology, 2005.

[Ski10] Skiena, S.S. Algorithm Design Manual, 2nd ed. Springer, 2010.

[Str69] Strassen, V. Gaussian elimination is not optimal. Numerische Mathematik, vol. 13, no. 4, 1969, 354–356.

[Tar83] Tarjan, R.E. Data Structures and Network Algorithms. Society for Industrial and Applied Mathematics, 1983.

[Tar85] Tarjan, R.E. Amortized computational complexity. SIAM Journal on Algebraic and Discrete Methods, vol. 6, no. 2, Apr. 1985, 306–318.

[Tar87] Tarjan, R.E. Algorithm design. Communications of the ACM, vol. 30, no. 3, March 1987, 204–212.

[Tar84] Tarjan, R.E. and van Leeuwen, J. Worst-case analysis of set union algorithms. Journal of the ACM, vol. 31, no. 2, Apr. 1984, 245–281.
[War62] Warshall, S. A theorem on boolean matrices. Journal of the ACM, vol. 9, no. 1, Jan. 1962, 11–12.

[Wei77] Weide, B. A survey of analysis techniques for discrete algorithms. Computing Surveys, vol. 9, no. 4, 1977, 291–313.

[Wil64] Williams, J.W.J. Algorithm 232 (heapsort). Communications of the ACM, vol. 7, no. 6, 1964, 347–348.

[Wir76] Wirth, N. Algorithms + Data Structures = Programs. Prentice-Hall, Englewood Cliffs, NJ, 1976.

[Yan08] Yanofsky, N.S. and Mannucci, M.A. Quantum Computing for Computer Scientists. Cambridge University Press, 2008.

[Yao82] Yao, F. Speed-up in dynamic programming. SIAM Journal on Algebraic and Discrete Methods, vol. 3, no. 4, 1982, 532–540.
Hints to Exercises
CHAPTER 1
Exercises 1.1
1. It is probably faster to do this by searching the Web, but your library should
be able to help, too.
2. One can find arguments supporting either view. There is a well-established principle pertinent to the matter, though: scientific facts or mathematical expressions of them are not patentable. (Why do you think this is the case?) But should this preclude granting patents for all algorithms?

3. You may assume that you are writing your algorithms for a human rather than a machine. Still, make sure that your descriptions do not contain obvious ambiguities. Knuth provides an interesting comparison between cooking recipes and algorithms [KnuI, p. 6].

4. There is a quite straightforward algorithm for this problem based on the definition of ⌊√n⌋.

5. Try to design an algorithm that always makes less than mn comparisons.

6. a. Just follow Euclid's algorithm as described in the text.

b. Compare the number of divisions made by the two algorithms.

7. Prove that if d divides both m and n (i.e., m = sd and n = td for some positive integers s and t), then it also divides both n and r = m mod n and vice versa. Use the formula m = qn + r (0 ≤ r < n) and the fact that if d divides two integers u and v, it also divides u + v and u − v (why?).

8. Perform one iteration of the algorithm for two arbitrarily chosen integers m < n.

9. The answer to part (a) can be given immediately; the answer to part (b) can be given by checking the algorithm's performance on all pairs 1 < m < n ≤ 10.

10. a. Use the equality gcd(m, n) = gcd(m − n, n) for m ≥ n > 0.

b. The key is to figure out the total number of distinct integers that can be written on the board, starting with an initial pair m, n where m > n ≥ 1. You should exploit a connection of this question to the question of part (a). Considering small examples, especially those with n = 1 and n = 2, should help, too.

11. Of course, for some coefficients, the equation will have no solutions.

12. Tracing the algorithm by hand for, say, n = 10 and studying its outcome should help answering both questions.
Exercises 1.2
1. The farmer would have to make several trips across the river, starting with the only one possible.

2. Unlike the Old World puzzle of Problem 1, the first move solving this puzzle is not obvious.

3. The principal issue here is a possible ambiguity.

4. Your algorithm should work correctly for all possible values of the coefficients, including zeros.

5. You almost certainly learned this algorithm in one of your introductory programming courses. If this assumption is not true, you have a choice between designing such an algorithm on your own or looking it up.

6. You may need to make a field trip to refresh your memory.

7. Question (a) is difficult, though the answer to it, discovered in the 1760s by the German mathematician Johann Lambert, is well-known. By comparison, question (b) is incomparably simpler.

8. You probably know two or more different algorithms for sorting an array of numbers.

9. You can: decrease the number of times the inner loop is executed, make that loop run faster (at least for some inputs), or, more significantly, design a faster algorithm from scratch.
Exercises 1.3
1. Trace the algorithm on the input given. Use the definitions of stability and being in-place that were introduced in the section.

2. If you do not recall any searching algorithms, you should design a simple searching algorithm (without succumbing to the temptation to find one in the latter chapters of the book).

3. This algorithm is introduced later in the book, but you should have no trouble designing it on your own.

4. If you have not encountered this problem in your previous courses, you may look up the answers on the Web or in a discrete structures textbook. The answers are, in fact, surprisingly simple.

5. No efficient algorithm for solving this problem for an arbitrary graph is known. This particular graph does have Hamiltonian circuits that are not difficult to find. (You need to find just one of them.)

6. a. Put yourself (mentally) in a passenger's place and ask yourself what criterion for the best route you would use. Then think of people that may have different needs.

b. The representation of the problem by a graph is straightforward. Give some thoughts, though, to stations where trains can be changed.

7. a. What are tours in the traveling salesman problem?

b. It would be natural to consider vertices colored the same color as elements of the same subset.

8. Create a graph whose vertices represent the map's regions. You will have to decide on the edges on your own.

9. Assume that the circumference in question exists and find its center first. Also, do not forget to give a special answer for n ≤ 2.

10. Be careful not to miss some special cases of the problem.
Exercises 1.4
1. a. Take advantage of the fact that the array is not sorted.

b. We used this trick in implementing one of the algorithms in Section 1.1.

2. a. For a sorted array, there is a spectacularly efficient algorithm you almost certainly have heard about.

b. Unsuccessful searches can be made faster.

3. a. Push(x) puts x on the top of the stack; pop deletes the item from the top of the stack.

b. Enqueue(x) adds x to the rear of the queue; dequeue deletes the item from the front of the queue.

4. Just use the definitions of the graph properties in question and data structures involved.

5. There are two well-known algorithms that can solve this problem. The first uses a stack; the second uses a queue. Although these algorithms are discussed later in the book, do not miss this chance to discover them by yourself!

6. The inequality h ≤ n − 1 follows immediately from the height's definition. The lower bound inequality follows from the inequality 2^{h+1} − 1 ≥ n, which can be proved by considering the largest number of vertices a binary tree of height h can have.

7. You need to indicate how each of the three operations of the priority queue will be implemented.

8. Because of insertions and deletions, using an array of the dictionary's elements (sorted or unsorted) is not the best implementation possible.

9. You need to know about postfix notation in order to answer one of these questions. (If you are not familiar with it, find the information on the Internet.)

10. There are several algorithms for this problem. Keep in mind that the words may contain multiple occurrences of the same letter.
CHAPTER 2
Exercises 2.1
1. The questions are indeed as straightforward as they appear, though some of them may have alternative answers. Also, keep in mind the caveat about measuring an integer's size.

2. a. The sum of two matrices is defined as the matrix whose elements are the sums of the corresponding elements of the matrices given.

b. Matrix multiplication requires two operations: multiplication and addition. Which of the two would you consider basic and why?

3. Will the algorithm's efficiency vary on different inputs of the same size?

4. a. Gloves are not socks: they can be right-handed and left-handed.

b. You have only two qualitatively different outcomes possible. Find the number of ways to get each of the two.

5. a. First, prove that if a positive decimal integer n has b digits in its binary representation, then

2^{b−1} ≤ n < 2^b.

Then, take binary logarithms of the terms in these inequalities.

b. The proof is similar to the proof of formula (2.1).

c. The formulas will be the same, with just one small adjustment to account for the different radix.

d. How can we switch from one logarithm base to another?

6. Insert a verification of whether the problem is already solved.

7. A similar question was investigated in the section.

8. Use either the difference between or the ratio of f(4n) and f(n), whichever is more convenient for getting a compact answer. If it is possible, try to get an answer that does not depend on n.

9. If necessary, simplify the functions in question to single out terms defining their orders of growth to within a constant multiple. (We discuss formal methods for answering such questions in the next section; however, the questions can be answered without knowledge of such methods.)

10. a. Use the formula Σ_{i=0}^{n} 2^i = 2^{n+1} − 1.

b. Use the formula for the sum of the first n odd numbers or the formula for the sum of arithmetic progression.
Exercises 2.2
1. Use the corresponding counts of the algorithm's basic operation (see Section 2.1) and the definitions of O, Ω, and Θ.

2. Establish the order of growth of n(n + 1)/2 first and then use the informal definitions of O, Ω, and Θ. (Similar examples were given in the section.)

3. Simplify the functions given to single out the terms defining their orders of growth.

4. a. Check carefully the pertinent definitions.

b. Compute the ratio limits of every pair of consecutive functions on the list.

5. First, simplify some of the functions. Then, use the list of functions in Table 2.2 to anchor each of the functions given. Prove their final placement by computing appropriate limits.

6. a. You can prove this assertion either by computing an appropriate limit or by applying mathematical induction.

b. Compute lim_{n→∞} a_1^n / a_2^n.

7. Prove the correctness of (a), (b), and (c) by using the appropriate definitions; construct a counterexample for (d) (e.g., by constructing two functions behaving differently for odd and even values of their arguments).

8. The proof of part (a) is similar to the one given for the theorem's assertion in Section 2.2. Of course, different inequalities need to be used to bound the sum from below.

9. Follow the analysis plan used in the text when the algorithm was mentioned for the first time.

10. You may use straightforward algorithms for all the four questions asked. Use the O notation for the time efficiency class of one of them, and the Θ notation for the three others.

11. The problem can be solved in two weighings.

12. You should walk intermittently left and right from your initial position until the door is reached.
Exercises 2.3
1. Use the common summation formulas and rules listed in Appendix A. You
may need to performsome simple algebraic operations before applying them.
2. Find a sum among those in Appendix A that looks similar to the sum in
question and try to transform the latter to the former. Note that you do not
have to get a closed-form expression for a sum before establishing its order
of growth.
3. Just follow the formulas in question.
4. a. Tracing the algorithm to get its output for a few small values of n (e.g.,
n =1, 2, and 3) should help if you need it.
b. We faced the same question for the examples discussed in this section. One
of them is particularly pertinent here.
c. Follow the plan outlined in the section.
d. As a function of n, the answer should follow immediately from your answer
to part (c). You may also want to give an answer as a function of the number
of bits in n's representation (why?).
e. Have you not encountered this sum somewhere?
5. a. Tracing the algorithm to get its output for a few small values of n (e.g.,
n =1, 2, and 3) should help if you need it.
b. We faced the same question for the examples discussed in the section. One
of them is particularly pertinent here.
c. You can either follow the section's plan by setting up and computing a sum
or answer the question directly. (Try to do both.)
d. Your answer will immediately follow from the answer to part (c).
e. Does the algorithm always have to make two comparisons on each iteration? This idea can be developed further to get a more significant improvement than the obvious one: try to do it for a four-element array and then
generalize the insight. But can we hope to find an algorithm with a better
than linear efficiency?
6. a. Elements A[i, j] and A[j, i] are symmetric with respect to the main diag-
onal of the matrix.
b. There is just one candidate here.
c. You may investigate the worst case only.
d. Your answer will immediately follow from the answer to part (c).
e. Compare the problem the algorithm solves with the way it does this.
7. Computing a sum of n numbers can be done with n − 1 additions. How many
does the algorithm make in computing each element of the product matrix?
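To experiment with this question, here is a definition-based multiplication of two n × n matrices that also counts its one-number additions (a sketch; all names are ours):

    def matrix_product(A, B):
        n = len(A)
        C = [[0] * n for _ in range(n)]
        additions = 0
        for i in range(n):
            for j in range(n):
                s = A[i][0] * B[0][j]
                for k in range(1, n):   # n - 1 additions per element of C
                    s += A[i][k] * B[k][j]
                    additions += 1
                C[i][j] = s
        return C, additions             # n^2 elements, n - 1 additions each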
8. Set up a sum for the number of times all the doors are toggled and find its
asymptotic order of growth by using some formulas from Appendix A.
9. For the general step of the proof by induction, use the formula
Σ_{i=1}^{n+1} i = Σ_{i=1}^{n} i + (n + 1).
The young Gauss computed the sum 1 + 2 + · · · + 99 + 100 by noticing that
it can be computed as the sum of 50 pairs, each with the same sum.
10. There are at least two different ways to solve this problem, which comes from
a collection of Wall Street interview questions.
11. a. Setting up a sum should pose no difficulties. Using the standard summation
formulas and rules will require more effort than in the previous examples,
however.
b. Optimize the algorithm's innermost loop.
12. Set up a sum for the number of squares after n iterations of the algorithm and
then simplify it to get a closed-form answer.
13. To derive a formula expressing the total number of digits as a function of
the number of pages n, where 1 ≤ n ≤ 1000, it is convenient to partition the
function's domain into several natural intervals.
Exercises 2.4
1. Each of these recurrences can be solved by the method of backward substitu-
tions.
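For instance, backward substitution applied to the generic recurrence x(n) = x(n − 1) + n with x(0) = 0 (an illustration only, not one of the exercise's recurrences) unfolds as

    x(n) = x(n − 1) + n
         = [x(n − 2) + (n − 1)] + n
         = [x(n − 3) + (n − 2)] + (n − 1) + n
         = · · ·
         = x(0) + 1 + 2 + · · · + n = n(n + 1)/2.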
2. The recurrence relation in question is almost identical to the recurrence
relation for the number of multiplications, which was set up and solved in
the section.
3. a. The question is similar to that about the efficiency of the recursive algorithm for computing n!.
b. Write pseudocode for the nonrecursive algorithm and determine its efficiency.
4. a. Note that you are asked here about a recurrence for the function's values,
not about a recurrence for the number of times its operation is executed.
Just follow the pseudocode to set it up. It is easier to solve this recurrence
by forward substitutions (see Appendix B).
b. This question is very similar to one we have already discussed.
c. You may want to include the subtractions needed to decrease n.
5. a. Use the formula for the number of disk moves derived in the section.
b. Solve the problem for three disks to investigate the number of moves made
by each of the disks. Then generalize the observations and prove their
validity for the general case of n disks.
6. The required algorithm and the method of its analysis are similar to those of
the classic version of the puzzle. Because of the additional constraint, more
than two smaller instances of the puzzle need to be solved here.
7. a. Consider separately the cases of even and odd values of n and show that
for both of them ⌊log₂ n⌋ satisfies the recurrence relation and its initial
condition.
b. Just follow the algorithm's pseudocode.
8. a. Use the formula 2^n = 2^(n−1) + 2^(n−1) without simplifying it; do not forget to
provide a condition for stopping your recursive calls.
b. A similar algorithm was investigated in the section.
c. A similar question was investigated in the section.
d. A bad efficiency class of an algorithm by itself does not mean that the
algorithm is bad. For example, the classic algorithm for the Tower of Hanoi
puzzle is optimal despite its exponential-time efficiency. Therefore, a claim
that a particular algorithm is not good requires a reference to a better one.
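A minimal Python sketch of the algorithm part (a) describes, with the unsimplified formula driving two recursive calls (which is what makes the question of part (d) interesting):

    def power_of_two(n):
        # Compute 2^n from the formula 2^n = 2^(n-1) + 2^(n-1).
        if n == 0:                # condition for stopping the recursive calls
            return 1
        return power_of_two(n - 1) + power_of_two(n - 1)  # two half-size calls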
9. a. Tracing the algorithm for n =1 and n =2 should help.
b. It is very similar to one of the examples discussed in the section.
10. Get the basic operation count either by solving a recurrence relation or by
computing directly the number of the adjacency matrix elements the algo-
rithm checks in the worst case.
11. a. Use the definition's formula to get the recurrence relation for the number
of multiplications made by the algorithm.
b. Investigate the right-hand side of the recurrence relation. Computing the
first few values of M(n) may be helpful, too.
12. You might want to use the neighborhood's symmetry to obtain a simple
formula for the number of squares added to the neighborhood on the nth
iteration of the algorithm.
13. The minimum amount of time needed to fry three hamburgers is smaller than
4 minutes.
14. Solve first a simpler version in which a celebrity must be present.
Exercises 2.5
1. Use a search engine.
2. Set up an equation expressing the number of rabbits after n months in terms
of the number of rabbits in some previous months.
3. There are several ways to solve this problem. The most elegant of them makes
it possible to put the problem in this section.
4. Writing down the first, say, ten Fibonacci numbers makes the pattern obvious.
5. It is easier to substitute φ^n and φ̂^n into the recurrence equation separately.
Why will this suffice?
6. Use an approximate formula for F(n) to find the smallest values of n to exceed
the numbers given.
7. Set up the recurrence relations for C(n) and Z(n), with appropriate initial
conditions, of course.
8. All the information needed on each iteration of the algorithm is the values of
the last two consecutive Fibonacci numbers. Modify the algorithm Fib(n) to
take advantage of this fact.
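A sketch of the modified computation, assuming F(0) = 0 and F(1) = 1:

    def fib(n):
        # F(n) computed iteratively, keeping only the last two Fibonacci numbers.
        f_prev, f_curr = 0, 1   # F(0) and F(1)
        for _ in range(n):
            f_prev, f_curr = f_curr, f_prev + f_curr
        return f_prev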
9. Prove it by mathematical induction.
10. Consider first a small example such as computing gcd(13, 8).
11. Take advantage of the special nature of the rectangle's dimensions.
12. The last k digits of an integer N can be obtained by computing N mod 10^k. Performing all operations of your algorithms modulo 10^k (see Appendix A) will
enable you to circumvent the exponential growth of the Fibonacci numbers.
Also note that Section 2.6 is devoted to a general discussion of the empirical
analysis of algorithms.
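A minimal sketch of the modular idea (the function name is ours):

    def fib_last_digits(n, k):
        # Last k digits of F(n), with every addition done modulo 10^k.
        m = 10 ** k
        f_prev, f_curr = 0, 1
        for _ in range(n):
            f_prev, f_curr = f_curr, (f_prev + f_curr) % m
        return f_prev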
Exercises 2.6
1. Does it return a correct comparison count for every array of size 2?
2. Debug your comparison counting and random input generating for small
array sizes first.
3. On a reasonably fast desktop, you may well get zero time, at least for smaller
sizes in your sample. Section 2.6 mentions a trick for overcoming this difficulty.
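One common form of that trick is to time many repetitions and divide (a sketch; f and the repetition count are placeholders):

    import time

    def average_time(f, n, repetitions=10000):
        # Run f(n) many times so the total elapsed time is measurable,
        # then divide to estimate the time of a single run.
        start = time.perf_counter()
        for _ in range(repetitions):
            f(n)
        return (time.perf_counter() - start) / repetitions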
4. Check how fast the count values grow with doubling the input size.
5. A similar question was discussed in the section.
6. Compare the values of the functions lg lg n and lg n for n = 2^k.
7. Insert the division counter in a program implementing the algorithm and run
it for the input pairs in the range indicated.
8. Get the empirical data for random values of n in a range of, say, between 10^2
and 10^4 or 10^5 and plot the data obtained. (You may want to use different
scales for the axes of your coordinate system.)
CHAPTER 3
Exercises 3.1
1. a. Think of algorithms that have impressed you with their efficiency and/or sophistication. Neither characteristic is indicative of a brute-force algorithm.
b. Surprisingly, it is not a very easy question to answer. Mathematical prob-
lems (including those you've studied in your secondary school and college
courses) are a good source of such examples.
2. a. The first question was all but answered in the section. Expressing the
answer as a function of the number of bits can be done by using the formula
relating the two metrics.
b. How can we compute (ab) mod m?
3. It helps to have done the exercises in question.
4. a. The most straightforward algorithm, which is based on substituting x_0 into
the formula, is quadratic.
b. Analyzing what unnecessary computations the quadratic algorithm does
should lead you to a better (linear) algorithm.
c. How many coefficients does a polynomial of degree n have? Can one
compute its value at an arbitrary point without processing all of them?
5. For each of the three network topologies, what properties of the matrix should
the algorithm check?
6. The answer to four of the questions is yes.
7. a. Just apply the brute-force thinking to the problem in question.
b. The problem can be solved in one weighing.
8. Just trace the algorithm on the input given. (It was done for another input in
the section.)
9. Although the majority of elementary sorting algorithms are stable, do not
rush with your answer. A general remark about stability made in Section 1.3,
where the notion of stability is introduced, could be helpful, too.
10. Generally speaking, implementing an algorithm for a linked list poses prob-
lems if the algorithm requires accessing the list's elements not in sequential
order.
11. Just trace the algorithm on the input given. (See an example in the section.)
12. a. A list is sorted if and only if all its adjacent elements are in a correct order.
Why?
b. Add a boolean flag to register the presence or absence of switches.
c. Identify worst-case inputs first.
13. Can bubblesort change the order of two equal elements in its input?
14. Thinking about the puzzle as a sorting-like problem may or may not lead you
to the most simple and efficient solution.
Exercises 3.2
1. Modify the analysis of the algorithm's version in Section 2.1.
2. As a function of p, what kind of function is C_avg?
3. Solve a simpler problem with a single gadget first. Then design a better than
linear algorithm for the problem with two gadgets.
4. The content of this quote from Mahatma Gandhi is more thought provoking
than this drill.
5. For each input, one iteration of the algorithm yields all the information you
need to answer the question.
6. It will suffice to limit your search for an example to binary texts and patterns.
7. The answer, surprisingly, is yes.
8. a. For a given occurrence of A in the text, what are the substrings you need
to count?
b. For a given occurrence of B in the text, what are the substrings you need
to count?
9. You may use either bit strings or a natural-language text for the visualization
program. It would be a good idea to implement, as an option, a search for all
occurrences of a given pattern in a given text.
10. Test your program thoroughly. Be especially careful about the possibility of
words read diagonally with wrapping around the table's border.
11. A (very) brute-force algorithm can simply shoot at adjacent feasible cells
starting at, say, one of the corners of the board. Can you suggest a better
strategy? (You can investigate relative efficiencies of different strategies by
making two programs implementing them play each other.) Is your strategy
better than the one that shoots at randomly generated cells of the opponent's
board?
Exercises 3.3
1. You may want to consider two versions of the answer: without taking into
account the comparison and assignments in the algorithm's innermost loop
and with them.
2. Sorting n real numbers can be done in O(n log n) time.
3. a. Solving the problem for n = 2 and n = 3 should lead you to the critical
insight.
b. Where would you put the post office if it did not have to be at one of the
village locations?
4. a. Check requirements (i)–(iii) by using basic properties of absolute values.
b. For the Manhattan distance, the points in question are defined by the
equation |x − 0| + |y − 0| = 1. You can start by sketching the points in
the positive quadrant of the coordinate system (i.e., the points for which
x, y ≥ 0) and then sketch the rest by using the symmetries.
c. The assertion is false. You can choose, say, p_1(0, 0) and p_2(1, 0) and find p_3
to complete a counterexample.
5. a. Prove that the Hamming distance does satisfy the three axioms of a dis-
tance metric.
b. Your answer should include two parameters.
6. True; prove it by mathematical induction.
7. Your answer should be a function of two parameters: n and k. A special case
of this problem (for k =2) is solved in the text.
8. Review the examples given in the section.
9. Some of the extreme points of a convex hull are easier to find than others.
10. If there are other points of a given set on the straight line through p_i and p_j,
which of all these points need to be preserved for further processing?
11. Your program should work for any set of n distinct points, including sets with
many collinear points.
12. a. The set of points satisfying inequality ax + by ≤ c is the half-plane of the
points on one side of the straight line ax + by = c, including all the points
on the line itself. Sketch such a half-plane for each of the inequalities and
find their intersection.
b. The extreme points are the vertices of the polygon obtained in part (a).
c. Compute and compare the values of the objective function at the extreme
points.
Exercises 3.4
1. a. Identify the algorithm's basic operation and count the number of times it
will be executed.
b. For each of the time amounts given, find the largest value of n for which
this limit won't be exceeded.
2. How different is the traveling salesman problem from the problem of finding
a Hamiltonian circuit?
3. Your algorithm should check the well-known condition that is both necessary
and sufficient for the existence of an Eulerian circuit in a connected graph.
4. Generate the remaining 4! − 6 = 18 possible assignments, compute their costs,
and nd the one with the minimal cost.
5. Make the size of your counterexample as small as possible.
6. Rephrase the problem so that the sum of elements in one subset, rather than
two, needs to be checked on each try of a possible partition.
7. Follow the definitions of a clique and of an exhaustive-search algorithm.
8. Try all possible orderings of the elements given.
9. Use common formulas of elementary combinatorics.
10. a. Add all the elements in the magic square in two different ways.
b. What combinatorial objects do you have to generate here?
11. a. For testing, you may use alphametic collections available on the Internet.
b. Given the absence of electronic computers in 1924, you must refrain here
from using the Internet.
Exercises 3.5
1. a. Use the definitions of the adjacency matrix and adjacency lists given in
Section 1.4.
b. Perform the DFS traversal the same way it is done for another graph in the
text (see Figure 3.10).
2. Compare the efficiency classes of the two versions of DFS for sparse graphs.
3. a. What is the number of such trees equal to?
b. Answer this question for connected graphs first.
4. Perform the BFS traversal the same way it is done in the text (see Figure 3.11).
5. You may use the fact that the level of a vertex in a BFS tree indicates the
number of edges in the shortest (minimum-edge) path from the root to that
vertex.
6. a. What property of a BFS forest indicates a cycle's presence? (The answer
is similar to the one for a DFS forest.)
b. The answer is no. Find two examples supporting this answer.
7. Given the fact that both traversals can reach a new vertex if and only if it is
adjacent to one of the previously visited vertices, which vertices will be visited
by the time either traversal halts (i.e., its stack or queue becomes empty)?
8. Use a DFS forest and a BFS forest for parts (a) and (b), respectively.
9. Use either DFS or BFS.
10. a. Follow the instructions of the problem's statement.
b. Trying both traversals should lead you to a correct answer very fast.
11. You can apply BFS without an explicit sketch of a graph representing the
states of the puzzle.
CHAPTER 4
Exercises 4.1
1. Solve the problem for n =1.
2. You may consider pouring soda from a filled glass into an empty glass as one
move.
3. It's easier to use the bottom-up approach.
4. Use the fact that all the subsets of an n-element set S = {a_1, . . . , a_n} can be
divided into two groups: those that contain a_n and those that do not.
5. The answer is no.
6. Use the same idea that underlies insertion sort.
7. Trace the algorithm as we did in the text for another input (see Figure 4.4).
8. a. The sentinel should stop the smallest element from moving beyond the first
position in the array.
b. Repeat the analysis performed in the text for the sentinel version.
9. Recall that one can access elements of a singly linked list only sequentially.
10. Compare the running times of the algorithms inner loop.
11. a. Answering the questions for an array of three elements should lead to the
general answers.
b. Assume for simplicity that all elements are distinct and that inserting A[i]
in each of the i + 1 possible positions among its predecessors is equally
likely. Analyze the sentinel version of the algorithm first.
12. a. Note that it's more convenient to sort sublists in parallel, i.e., compare A[0]
with A[h_i], then A[1] with A[1 + h_i], and so on.
b. Recall that, generally speaking, sorting algorithms that can exchange ele-
ments far apart are not stable.
Exercises 4.2
1. Trace the algorithm as it is done in the text for another digraph (see Figure 4.7).
2. a. You need to prove two assertions: (i) if a digraph has a directed cycle, then
the topological sorting problem does not have a solution; (ii) if a digraph
has no directed cycles, then the problem has a solution.
b. Consider an extreme type of a digraph.
3. a. How does it relate to the time efciency of DFS?
b. Do you know the length of the list to be generated by the algorithm? Where
should you put, say, the first vertex being popped off a DFS traversal stack
for the vertex to be in its final position?
4. Try to do this for a small example or two.
5. Trace the algorithm on the instances given as it is done in the section (see
Figure 4.8).
6. a. Use a proof by contradiction.
b. If you have difficulty answering the question, consider an example of a
digraph with a vertex with no incoming edges and write down its adjacency
matrix.
c. The answer follows from the definitions of the source and adjacency lists.
7. For each vertex, store the number of edges entering the vertex in the remain-
ing subgraph. Maintain a queue of the source vertices.
9. a. Trace the algorithm on the input given by following the steps of the algo-
rithm as indicated.
b. Determine the efficiency for each of the three principal steps of the algorithm and then determine the overall efficiency. Of course, the answers
depend on whether a digraph is represented by its adjacency matrix or by
its adjacency lists.
10. Take advantage of topological sorting and the graph's symmetry.
Exercises 4.3
1. Use standard formulas for the numbers of these combinatorial objects. For the
sake of simplicity, you may assume that generating one combinatorial object
takes the same time as, say, one assignment.
2. We traced the algorithms on smaller instances in the section.
3. See an outline of this algorithm in the section.
4. a. Trace the algorithm for n =2; take advantage of this trace in tracing the
algorithm for n =3 and then use the latter for n =4.
b. Show that the algorithm generates n! permutations and that all of them are
distinct. Use mathematical induction.
c. Set up a recurrence relation for the number of swaps made by the algorithm. Find its solution and the solution's order of growth. You may need
the formula e ≈ Σ_{i=0}^{n} 1/i! for large values of n.
5. We traced both algorithms on smaller instances in the section.
6. Tricks become boring after they have been given away.
7. This is not a difficult exercise because of the obvious way of getting bit strings
of length n from bit strings of length n − 1.
8. You may still mimic the binary addition without using it explicitly.
9. Just trace the algorithms for n =4.
10. There are several decrease-and-conquer algorithms for this problem. They
are more subtle than one might expect. Generating combinations in a predefined order (increasing, decreasing, lexicographic) helps with both a
design and a correctness proof. The following simple property is very helpful in that regard. Assuming with no loss of generality that the underlying
set is {1, 2, . . . , n}, there are C(n − i, k − 1) k-subsets whose smallest element is i,
i = 1, 2, . . . , n − k + 1.
11. Represent the disk movements by flipping bits in a binary n-tuple.
12. Thinking about the switches as bits of a bit string could be helpful but not
necessary.
Exercises 4.4
1. Take care of the length of the longest piece present.
2. If the instance of size n is to compute ⌊log₂ n⌋, what is the instance of size ⌊n/2⌋?
What is the relationship between the two?
3. For part (a), take advantage of the formula that gives the immediate answer.
The most efficient prop for answering questions (b)–(d) is a binary search tree
that mirrors the algorithm's operations in searching for an arbitrary search
key.
4. Estimate the ratio of the average number of key comparisons made by se-
quential search to the average number made by binary search in successful
searches.
5. How would you reach the middle element in a linked list?
6. a. Use the comparison K ≤ A[m] where m ← ⌊(l + r)/2⌋ until l = r. Then
check whether the search is successful or not.
b. The analysis is almost identical to that of the text's version of binary search.
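A sketch of the variant described in part (a), assuming a sorted array A and a search key K (the return convention is ours):

    def binary_search_leq(A, K):
        # Shrink [l, r] using only the comparison K <= A[m] until l == r.
        l, r = 0, len(A) - 1
        while l < r:
            m = (l + r) // 2
            if K <= A[m]:
                r = m          # K, if present, lies in A[l..m]
            else:
                l = m + 1      # K, if present, lies in A[m+1..r]
        return l if A and A[l] == K else -1  # final success check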
7. Number the pictures and use this numbering in your questions.
8. The algorithm is quite similar to binary search, of course. In the worst case,
how many key comparisons does it make on each iteration and what fraction
of the array remains to be processed?
9. Start by comparing the middle element A[m] with m + 1.
10. It is obvious how one needs to proceed if n mod 3 =0 or n mod 3 =1; it is
somewhat less so if n mod 3 =2.
11. a. Trace the algorithm for the numbers given as it is done in the text for
another input (see Figure 4.14b).
b. How many iterations does the algorithm perform?
12. You may implement the algorithm either recursively or nonrecursively.
13. The fastest way to answer the question is to use the formula that exploits the
binary representation of n, which is mentioned at the end of the section.
14. Use the binary representation of n.
15. a. Use forward substitutions (see Appendix B) into the recurrence equations
given in the text.
b. On observing the pattern in the first 15 values of n obtained in part (a),
express it analytically. Then prove its validity by mathematical induction.
c. Start with the binary representation of n and translate into binary the
formula for J(n) obtained in part (b).
Exercises 4.5
1. a. The answer follows immediately from the formula underlying Euclid's
algorithm.
b. Let r = m mod n. Investigate two cases of r's value relative to n's value.
2. Trace the algorithm on the input given, as is done in the section for another
input.
3. The nonrecursive version of the algorithm was applied to a particular instance
in the section's example.
4. Write an equation of the straight line through the points (l, A[l]) and (r, A[r])
and find the x coordinate of the point on this line whose y coordinate is v.
5. Construct an array for which interpolation search decreases the remaining
subarray by one element on each iteration.
6. a. Solve the inequality log₂ log₂ n + 1 > 6.
b. Compute lim_{n→∞} (log log n)/(log n). Note that to within a constant multiple, one can
consider the logarithms to be natural, i.e., base e.
7. a. The definition of the binary search tree suggests such an algorithm.
b. What is the worst-case input for your algorithm? How many key compar-
isons does it make on such an input?
8. a. Consider separately three cases: (i) the key's node is a leaf; (ii) the key's
node has one child; (iii) the key's node has two children.
b. Assume that you know a location of the key to be deleted.
9. Starting at an arbitrary vertex of the graph, traverse a sequence of its untra-
versed edges until either all the edges are traversed or no untraversed edge is
available.
10. Follow the plan used in the section for analyzing the normal version of the
game.
11. Play several rounds of the game on the graph paper to become comfortable
with the problem. Considering special cases of the spoiled square's location
should help you to solve it.
12. Do yourself a favor: try to design an algorithm on your own. It does not have
to be optimal, but it should be reasonably efficient.
13. Start by comparing the search number with the last element in the first row.
CHAPTER 5
Exercises 5.1
1. In more than one respect, this question is similar to the divide-and-conquer
computation of the sum of n numbers.
2. Unlike Problem 1, a divide-and-conquer algorithm for this problem can be
more efcient by a constant factor than the brute-force algorithm.
3. How would you compute a^8 by solving two exponentiation problems of size
4? How about a^9?
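For example, a^8 can be obtained by squaring a^4, and a^9 needs one extra multiplication by a; a sketch of the resulting divide-and-conquer exponentiation:

    def power(a, n):
        # a^n from one exponentiation problem of half the size.
        if n == 0:
            return 1
        half = power(a, n // 2)
        return half * half if n % 2 == 0 else half * half * a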
4. Look at the notations used in the theorem's statement.
5. Apply the Master Theorem.
6. Trace the algorithm as it was done for another input in the section.
7. How can mergesort reverse a relative ordering of two elements?
8. a. Use backward substitutions, as usual.
b. What inputs minimize the number of key comparisons made by mergesort?
How many comparisons are made by mergesort on such inputs during the
merging stage?
c. Do not forget to include key moves made both before the split and during
the merging.
9. Modify mergesort to solve the problem.
11. A divide-and-conquer algorithm works by reducing a problem's instance to
several smaller instances of the same problem.
Exercises 5.2
1. We traced the algorithm on another instance in the section.
2. Use the rules for stopping the scans.
3. The definition of stability of a sorting algorithm was given in Section 1.3. Your
example does not have to be large.
4. Trace the algorithm to see on which inputs index i gets out of bounds.
5. Study what the section's version of quicksort does on such arrays. You should
base your answers on the number of key comparisons, of course.
6. Where will splits occur on the inputs in question?
7. a. Computing the ratio n^2/(n log₂ n) for n = 10^6 is incorrect.
b. Think about the best-case and worst-case inputs.
8. Use the partition idea.
9. a. You may want to first solve the two-color flag problem, i.e., rearrange
efficiently an array of R's and B's. (A similar problem is Problem 8 in this
section's exercises.)
b. Extend the definition of a partition.
11. Use the partition idea.
Exercises 5.3
1. The problem is almost identical to the one discussed in the section.
2. Trace the algorithm on a small input.
3. This can be done by an algorithm discussed in an earlier chapter of the book.
4. Use strong induction on the number of internal nodes.
5. This is a standard exercise that you have probably done in your data struc-
tures course. With the traversal definitions given at the end of the section,
you should be able to trace them even if you have never encountered these
algorithms before.
6. Your pseudocode can simply mirror the traversal definition.
7. If you do not know the answer to this important question, you may want to
check the results of the traversals on a small binary search tree. For a proof,
answer this question: What can be said about two nodes with keys k_1 and k_2 if
k_1 < k_2?
8. Find the root's label of the binary tree first, and then identify the labels of the
nodes in its left and right subtrees.
9. Use strong induction on the number of internal nodes.
11. Breaking the chocolate bar can be represented by a binary tree.
Exercises 5.4
1. You might want to answer the question for n =2 first and then generalize it.
2. Trace the algorithm on the input given. You will have to use it again in order
to compute the products of two-digit numbers as well.
3. a. Take logarithms of both sides of the equality.
b. What did we use the closed-form formula for?
4. a. How do we multiply by powers of 10?
b. Try to repeat the argument for, say, 98 × 76.
5. Counting the number of one-digit additions made by the pen-and-pencil al-
gorithm in multiplying, say, two four-digit numbers, should help answer the
general question.
6. Check the formulas by simple algebraic manipulations.
7. Trace Strassen's algorithm on the input given. (It takes some work, but it
would have been much more of it if you were asked to stop the recursion when
n =1.) It is a good idea to check your answer by multiplying the matrices by
the brute-force (definition-based) algorithm, too.
8. Use the method of backward substitutions to solve the recurrence given in
the text.
9. The recurrence for the number of multiplications in Pan's algorithm is similar
to that for Strassen's algorithm. Use the Master Theorem to find the order of
growth of its solution.
Exercises 5.5
1. a. How many points need to be considered in the combining-solutions stage
of the algorithm?
b. Design a simpler algorithm in the same efficiency class.
2. Divide the rectangle in Figure 5.7b into eight congruent rectangles and show
that each of these rectangles can contain no more than one point of interest.
3. Recall (see Section 5.1) that the number of comparisons made by mergesort
in the worst case is C_worst(n) = n log₂ n − n + 1 (for n = 2^k). You may use just
the highest-order term of this formula in the recurrence you need to set up.
6. The answer to part (a) comes directly from a textbook on plane geometry.
7. Use the formula relating the value of a determinant with the area of a triangle.
8. It must be in Ω(n), of course. (Why?)
9. Design a sequence of n points for which the algorithm decreases the problem's
size just by 1 on each of its recursive calls.
11. Apply an idea used in this section to construct a decagon with its vertices at
ten given points.
12. The path cannot cross inside the fenced area, but it can go along the fence.
CHAPTER 6
Exercises 6.1
1. This problem is similar to one of the examples in the section.
2. a. Compare every element in one set with all the elements in the other.
b. In fact, you can use presorting in three different ways: sort elements of
just one of the sets, sort elements of each of the sets separately, and sort
elements of the two sets together.
3. a. How do we find the smallest and largest elements in a sorted list?
b. The brute-force algorithm and the divide-and-conquer algorithm are both
linear.
4. Use the known results about the average-case comparison numbers of the
algorithms in this question.
5. a. The problem is similar to one of the preceding problems in these exercises.
b. How would you solve this problem if the student information were written
on index cards? Better yet, think how somebody else, who has never taken
a course on algorithms but possesses a good dose of common sense, would
solve this problem.
6. a. Many problems of this kind have exceptions for one particular configuration of points. As to the question about a solution's uniqueness, you can get
the answer by considering a few small random instances of the problem.
b. Construct a polygon for a few small random instances of the problem.
Try to construct polygons in some systematic fashion.
7. It helps to think about real numbers as ordered points on the real line. Con-
sidering the special case of s =0, with a given array containing both negative
and positive numbers, might be helpful, too.
8. After sorting the a_i's and b_i's, the problem can be solved in linear time.
9. Start by sorting the number list given.
10. a. Sort the points in nondecreasing order of their x coordinates and then scan
them right to left.
b. Think of choice problems with two desirable characteristics to take into
account.
11. Use the presorting idea twice.
Exercises 6.2
1. Trace the algorithm as we did in solving another system in the section.
2. a. Use the Gaussian elimination results as explained in the text.
b. It is one of the varieties of the transform-and-conquer technique. Which
one?
3. To find the inverse, you can either solve the system with three simultaneous
right-hand side vectors representing the columns of the 3 × 3 identity matrix
or use the LU decomposition of the system's coefficient matrix found in
Problem 2.
4. Though the final answer is correct, its derivation contains an error you have
to find.
5. Pseudocode of this algorithm is quite straightforward. If you are in doubt,
see the section's example tracing the algorithm. The order of growth of the
algorithm's running time can be found by following the standard plan for the
analysis of nonrecursive algorithms.
6. Estimate the ratio of the algorithm running times by using the approximate
formulas for the number of divisions and the number of multiplications in
both algorithms.
7. a. This is a normal case: one of the two equations should not be propor-
tional to the other.
b. The coefficients of one equation should be the same or proportional to the
corresponding coefficients of the other equation, whereas the right-hand
sides should not.
c. The two equations should be either the same or proportional to each other
(including the right-hand sides).
8. a. Manipulate the matrix rows above a pivot row the same way the rows below
the pivot row are changed.
b. Are the Gauss-Jordan method and Gaussian elimination based on the same
algorithm design technique or on different ones?
c. Derive a formula for the number of multiplications in the Gauss-Jordan
method in the same manner this was done for Gaussian elimination in the
section.
9. How long will it take to compute the determinant compared to the time
needed to apply Gaussian elimination to the system?
10. a. Apply Cramer's rule to the system given.
b. How many distinct determinants are there in the Cramer's rule formulas?
11. a. If x_ij is the number of times the panel in the ith row and jth column needs
to be toggled in a solution, what can be said about x_ij? After you answer
this question, show that the binary matrix representing an initial state of the
board can be represented as a linear combination (in modulo-2 arithmetic)
of n^2 binary matrices each representing the effect of toggling an individual
panel.
b. Set up a system of four equations in four unknowns (see part (a)) and
solve it by Gaussian elimination, performing all operations in modulo-2
arithmetic.
c. If you believe that a system of nine equations in nine unknowns is too large
to solve by hand, write a program to solve the problem.
Exercises 6.3
1. Use the definition of AVL trees. Do not forget that an AVL tree is a special
case of a binary search tree.
2. For both questions, it is easier to construct the required trees bottom up, i.e.,
for smaller values of n first.
3. The single L-rotation and the double RL-rotation are the mirror images of the
single R-rotation and the double LR-rotation, whose diagrams can be found
in the section.
4. Insert the keys one after another doing appropriate rotations the way it was
done in the section's example.
5. a. An efficient algorithm immediately follows from the definition of the binary search tree of which the AVL tree is a special case.
b. The correct answer is opposite to the one that immediately comes to mind.
7. a. Trace the algorithm for the input given (see Figure 6.8 for an example).
b. Keep in mind that the number of key comparisons made in searching for a
key in a 2-3 tree depends not only on its node's depth but also on whether
the key is the first or second one in the node.
8. False; find a simple counterexample.
9. Where will the smallest and largest keys be located?
Exercises 6.4
1. a. Trace the algorithm outlined in the text on the input given.
b. Trace the algorithm outlined in the text on the input given.
c. A mathematical fact may not be established by checking its validity on a
single example.
2. For a heap represented by an array, only the parental dominance requirement
needs to be checked.
3. a. What structure does a complete tree of height h with the largest number
of nodes have? What about a complete tree with the smallest number of
nodes?
b. Use the results established in part (a).
4. First, express the right-hand side as a function of h. Then, prove the obtained
equality by either using the formula for the sum Σ i2^i given in Appendix A
or by mathematical induction on h.
5. a. Where in a heap should one look for its smallest element?
b. Deleting an arbitrary element of a heap can be done by generalizing the
algorithm for deleting its root.
6. Fill in a table with the time efficiency classes of efficient implementations
of the three operations: finding the largest element, finding and deleting the
largest element, and adding a new element.
7. Trace the algorithm on the inputs given (see Figure 6.14 for an example).
8. As a rule, sorting algorithms that can exchange far-apart elements are not
stable.
9. One can claim that the answers are different for the two principal represen-
tations of a heap.
10. This algorithm is less efcient than heapsort because it uses the array rather
than the heap to implement the priority queue.
12. Pick the spaghetti rods up in a bundle and place them end down (i.e., verti-
cally) onto a tabletop.
Exercises 6.5
1. Set up sums and simplify them by using the standard formulas and rules for
sum manipulation. Do not forget to include the multiplications outside the
inner loop.
2. Take advantage of the fact that the value of x^i can be easily computed from
the previously computed x^(i−1).
3. a. Use the formulas for the number of multiplications (and additions) for
both algorithms.
b. Does Horner's rule use any extra memory?
4. Apply Horner's rule to the instance given the same way it is applied to another
one in the section.
5. Compute p(2) where p(x) = x^8 + x^7 + x^5 + x^2 + 1.
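For checking your hand computation, a minimal Horner evaluator (coefficients listed from the highest degree down; names are ours):

    def horner(coefficients, x):
        # Evaluate a polynomial by Horner's rule.
        value = 0
        for c in coefficients:
            value = value * x + c
        return value

    # p(x) = x^8 + x^7 + x^5 + x^2 + 1
    print(horner([1, 1, 0, 1, 0, 0, 1, 0, 1], 2))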
6. If you implement the algorithm for long division by x − c efficiently, the
answer might surprise you.
7. a. Trace the left-to-right binary exponentiation algorithm on the instance
given the same way it is done for another instance in the section.
b. The answer is yes: the algorithm can be extended to work for the zero
exponent as well. How?
8. Trace the right-to-left binary exponentiation algorithm on the instance given
the same way it is done for another instance in the section.
9. Compute and use the binary digits of n on the fly.
10. Use a formula for the sum of the terms of this special kind of a polynomial.
11. Compare the number of operations needed to implement the task in question.
12. Although there exists exactly one such polynomial, there are several different
ways to represent it. You may want to generalize Lagrange's interpolation
formula for n = 2:
p(x) = y_1 (x − x_2)/(x_1 − x_2) + y_2 (x − x_1)/(x_2 − x_1).
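A direct evaluation of this two-point formula (a sketch; the parameter names mirror the formula):

    def lagrange2(x, x1, y1, x2, y2):
        # The unique polynomial of degree <= 1 through (x1, y1) and (x2, y2).
        return y1 * (x - x2) / (x1 - x2) + y2 * (x - x1) / (x2 - x1)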
Exercises 6.6
1. a. Use the rules for computing lcm(m, n) and gcd(m, n) from the prime factors
of m and n.
b. The answer immediately follows from the formula for computing lcm(m, n).
2. Use a relationship between minimization and maximization problems.
3. Prove the assertion by induction on k.
4. a. Base your algorithm on the following observation: a graph contains a cycle
of length 3 if and only if it has two adjacent vertices i and j that are also
connected by a path of length 2.
b. Do not jump to a conclusion in answering this question.
5. An easier solution is to reduce the problem to another one with a known
algorithm. Since we did not discuss many geometric algorithms in the book,
it should not be difficult to figure out to which one this problem needs to be
reduced.
6. Express this problem as a maximization problem of a function in one variable.
7. Introduce double-indexed variables x_ij to indicate an assignment of the ith
person to the jth job.
8. Take advantage of the specific features of this instance to reduce the problem
to one with fewer variables.
9. Create a new graph.
10. Solve first the one-dimensional version of the post office location problem
(Problem 3(a) in Exercises 3.3).
11. a. Create a state-space graph for the problem as it is done for the river-
crossing puzzle in the section.
b. Create a state-space graph for the problem.
c. Look at the state obtained after the first six river crossings in the solution
to part (b).
12. The problem can be solved by reduction to a well-known problem about a
graph traversal.
CHAPTER 7
Exercises 7.1
1. Yes, it is possible. How?
2. Check the algorithm's pseudocode to see what it does upon encountering
equal values.
3. Trace the algorithm on the input given (see Figure 7.2 for an example).
4. Check whether the algorithm can reverse a relative ordering of equal ele-
ments.
5. Where will A[i] be in the sorted array?
6. Take advantage of the standard traversals of such trees.
7. a. Follow the definitions of the arrays B and C in the description of the
method.
b. Find, say, B[C[3]] for the example in part (a).
8. Start by finding the target positions for all the statures.
9. a. Use linked lists to hold nonzero elements of the matrices.
b. Represent each of the given polynomials by a linked list with nodes containing exponent i and coefficient a_i for each nonzero term a_i x^i.
10. You may use a search of the literature/Internet to answer this question.
Exercises 7.2
1. Trace the algorithm in the same way it is done in the section for another
instance of the string-matching problem.
2. A special alphabet notwithstanding, this application is not different than
applications to natural-language strings.
3. For each pattern, fill in its shift table and then determine the number of
character comparisons (both successful and unsuccessful) on each trial and
the total number of trials.
4. Find an example of a binary string of length m and a binary string of length n
(n ≥ m) so that Horspool's algorithm makes
a. the largest possible number of character comparisons before making the
smallest possible shift.
b. the smallest possible number of character comparisons.
5. It is logical to try a worst-case input for Horspool's algorithm.
6. Can the algorithm shift the pattern by more than one position without the
possibility of missing another matching substring?
7. For each pattern, fill in the two shift tables and then determine the number
of character comparisons (both successful and unsuccessful) on each trial and
the total number of trials.
8. Check the description of the Boyer-Moore algorithm.
9. Check the descriptions of the algorithms.
11. a. A brute-force algorithm fits the bill here.
b. Enhance the input before a search.
Exercises 7.3
1. Apply the open hashing (separate chaining) scheme to the input given, as is
done in the text for another input (see Figure 7.5). Then compute the largest
number and average number of comparisons for successful searches in the
constructed table.
2. Apply the closed hashing (open addressing) scheme to the input given as it is
done in the text for another input (see Figure 7.6). Then compute the largest
number and average number of comparisons for successful searches in the
constructed table.
3. How many different addresses can such a hash function produce? Would it
distribute keys evenly?
4. The question is quite similar to computing the probability of having the same
result in n throws of a fair die.
5. Find the probability that n people have different birthdays. As to the hashing
connection, what hashing phenomenon deals with coincidences?
6. a. There is no need to insert a new key at the end of the linked list it is
hashed to.
b. Which operations are faster in a sorted linked list and why? For sorting,
do we have to copy all elements in the nonempty lists in an array and
then apply a general purpose sorting algorithm, or is there a way to take
advantage of the sorted order in each of the nonempty linked lists?
7. A direct application of hashing solves the problem.
8. Consider this question as a mini-review: the answers are in Section 7.3 for
hashing and in the appropriate sections of the book for the others. Of course,
you should use the best algorithms available.
9. If you need to refresh your memory, check the book's table of contents.
Exercises 7.4
1. Thinking about searching for information should lead to a variety of examples.
2. a. Use the standard rules of sum manipulation and, in particular, the geometric series formula.
b. You will need to take logarithms base ⌈m/2⌉ in your derivation.
3. Find this value from the inequality in the text that provides the upper bound
of the B-tree's height.
4. Follow the insertion algorithm outlined in the section.
5. The algorithm is suggested by the definition of the B-tree.
6. a. Just follow the description of the algorithm given in the statement of the
problem. Note that a new key is always inserted in a leaf and that full nodes
are always split on the way down, even though the leaf for the new key may
have room for it.
b. Can a split of a full node cause a cascade of splits through the chain of its
ancestors? Can we get a taller search tree than necessary?
CHAPTER 8
Exercises 8.1
1. Compare the definitions of the two techniques.
2. Use the table generated by the dynamic programming algorithm in solving
the problem's instance in Example 1 of the section.
3. a. The analysis is similar to that of the top-down recursive computation of
the nth Fibonacci number in Section 2.5.
b. Set up and solve a recurrence for the number of candidate solutions that
need to be processed by the exhaustive search algorithm.
4. Apply the dynamic programming algorithm to the instance given as it is done
in Example 2 of the section. Note that there are two optimal coin combinations
here.
5. Adjust formula (8.5) for inadmissible cells and their immediate neighbors.
6. The problem is similar to the change-making problem discussed in the section.
7. a. Relate the number of the rook's shortest paths to the square in the ith row
and the jth column of the chessboard to the numbers of the shortest paths
to the adjacent squares.
b. Consider one shortest path as 14 consecutive moves to adjacent squares.
8. One can solve the problem in quadratic time.
9. Use a well-known formula from elementary combinatorics relating C(n, k) to
smaller binomial coefcients.
10. a. Topologically sort the dag's vertices first.
b. Create a dag with n + 1 vertices: one vertex to start and the others to
represent the coins given.
11. Let F(i, j) be the order of the largest all-zero submatrix of a given matrix with
its lower right corner at (i, j). Set up a recurrence relating F(i, j) to F(i − 1, j),
F(i, j − 1), and F(i − 1, j − 1).
12. a. In the situation where teams A and B need i and j games, respectively,
to win the series, consider the result of team A winning the game and the
result of team A losing the game.
b. Set up a table with five rows (0 ≤ i ≤ 4) and five columns (0 ≤ j ≤ 4) and
fill it by using the recurrence derived in part (a).
c. Your pseudocode should be guided by the recurrence set up in part (a).
The efficiency answers follow immediately from the table's size and the
time spent on computing each of its entries.
Exercises 8.2
1. a. Use formulas (8.6)–(8.7) to fill in the appropriate table, as is done for
another instance of the problem in the section.
b., c. What would the equality of the two terms in
max{F(i − 1, j), v_i + F(i − 1, j − w_i)}
mean?
2. a. Write pseudocode to fill the table in Figure 8.4 (say, row by row) by using
formulas (8.6)–(8.7).
b. An algorithm for identifying an optimal subset is outlined in the section
via an example.
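A sketch of the table-filling computation that part (a) asks to pseudocode, assuming a list of weights w, a list of values v, and capacity W (our names; items are numbered from 1 as in the text):

    def knapsack(w, v, W):
        # Bottom-up filling: F[i][j] = best value achievable with
        # the first i items and knapsack capacity j.
        n = len(w)
        F = [[0] * (W + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(W + 1):
                F[i][j] = F[i - 1][j]                 # item i left out
                if j >= w[i - 1]:                     # item i fits
                    F[i][j] = max(F[i][j],
                                  v[i - 1] + F[i - 1][j - w[i - 1]])
        return F[n][W]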
3. How many values does the algorithm compute? How long does it take to
compute one value? How many table cells need to be traversed to identify
the composition of an optimal subset?
4. Use the definition of F(i, j) to check whether it is always true that
a. F(i, j − 1) ≤ F(i, j) for 1 ≤ j ≤ W.
b. F(i − 1, j) ≤ F(i, j) for 1 ≤ i ≤ n.
5. The problem is similar to one of the problems discussed in Section 8.1.
6. Trace the calls of the function MemoryKnapsack(i, j) on the instance in
question. (An application to another instance can be found in the section.)
7. The algorithm applies formula (8.6) to fill some of the table's cells. Why can
we still assert that its efficiencies are in Θ(nW)?
8. One of the reasons deals with the time efficiency; the other deals with the
space efficiency.
9. You may want to include algorithm visualizations in your report.
Exercises 8.3
1. Continue applying formula (8.8) as prescribed by the algorithm.
2. a. The algorithm's time efficiency can be investigated by following the standard plan of analyzing the time efficiency of a nonrecursive algorithm.
b. How much space do the two tables generated by the algorithm use?
3. k = R[1, n] indicates that the root of an optimal tree is the kth key in the list of
ordered keys a_1, . . . , a_n. The roots of its left and right subtrees are specified
by R[1, k − 1] and R[k + 1, n], respectively.
4. Use a space-for-time trade-off.
5. If the assertion were true, would we not have a simpler algorithm for con-
structing an optimal binary search tree?
6. The structure of the tree should simply minimize the average depth of its
nodes. Do not forget to indicate a way to distribute the keys among the nodes
of the tree.
7. a. Since there is a one-to-one correspondence between binary search trees
for a given set of n orderable keys and binary trees with n nodes (why?),
you can count the latter. Consider all the possibilities of partitioning the
nodes between the left and right subtrees.
b. Compute the values in question using the two formulas.
c. Use the formula for the nth Catalan number and Stirling's formula for n!.
8. Change the bounds of the innermost loop of algorithm OptimalBST by ex-
ploiting the monotonicity of the root table mentioned at the end of the section.
9. Assume that a_1, . . . , a_n are distinct keys ordered from the smallest to the
largest; p_1, . . . , p_n are the probabilities of searching for them; and q_0, q_1, . . . ,
q_n are probabilities of unsuccessful searches for keys in intervals (−∞, a_1),
(a_1, a_2), . . . , (a_n, ∞), respectively; (p_1 + · · · + p_n) + (q_0 + · · · + q_n) = 1. Set
up a recurrence relation similar to recurrence (8.8) for the expected number
of key comparisons that takes into account both successful and unsuccessful
searches.
10. See the memory function solution for the knapsack problem in Section 8.2.
11. a. It is easier to find a general formula for the number of multiplications
needed for computing (A_1 · A_2) · A_3 and A_1 · (A_2 · A_3) for matrices A_1 with
dimensions d_0 × d_1, A_2 with dimensions d_1 × d_2, and A_3 with dimensions
d_2 × d_3 and then choose some specific values for the dimensions to get a
required example.
b. You can get the answer by following the approach used for counting binary
trees.
c. The recurrence relation for the optimal number of multiplications in computing A_i · . . . · A_j is very similar to the recurrence relation for the optimal
number of comparisons in searching a binary search tree composed of keys
a_i, . . . , a_j.
Exercises 8.4
1. Apply the algorithm to the adjacency matrix given, as is done in the section
for another matrix.
2. a. The answer can be obtained either by considering how many values the
algorithm computes or by following the standard plan for analyzing the
efficiency of a nonrecursive algorithm (i.e., by setting up a sum to count its
basic operation's executions).
b. What is the efficiency class of the traversal-based algorithm for sparse
graphs represented by their adjacency lists?
3. Show that one can simply overwrite elements of R^(k−1) with elements of R^(k)
without any other changes in the algorithm.
4. What happens if R^(k−1)[i, k] = 0?
5. Show first that formula (8.11) (from which the superscripts can be eliminated
according to the solution to Problem 3)
r_ij ← r_ij or (r_ik and r_kj)
is equivalent to
if r_ik    r_ij ← (r_ij or r_kj).
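A sketch of Warshall's algorithm with this overwriting done in a single matrix (the input is assumed to be a list of 0/1 rows):

    def warshall(adjacency):
        # Transitive closure: R[i][j] = R[i][j] or (R[i][k] and R[k][j]),
        # with one matrix overwritten in place.
        n = len(adjacency)
        R = [row[:] for row in adjacency]
        for k in range(n):
            for i in range(n):
                if R[i][k]:            # the equivalent test from the hint
                    for j in range(n):
                        if R[k][j]:
                            R[i][j] = 1
        return R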
6. a. What property of the transitive closure indicates a presence of a directed
cycle? Is there a better algorithm for checking this?
b. Which elements of the transitive closure of an undirected graph are equal
to 1? Can you find such elements with a faster algorithm?
7. See an example of applying the algorithm to another instance in the section.
8. What elements of matrix D^(k−1) does d_ij^(k), the element in the ith row and the
jth column of matrix D^(k), depend on? Can these values be changed by the
overwriting?
9. Your counterexample must contain a cycle of a negative length.
10. It will suffice to store, in a single matrix P, indices of intermediate vertices k
used in updates of the distance matrices. This matrix can be initialized with
all its elements equal to, say, −1.
CHAPTER 9
Exercises 9.1
1. You may use integer divisions in your algorithm.
2. You can apply the greedy approach either to each of its rows (or columns) or
to the entire cost matrix.
3. Considering the case of two jobs might help. Of course, after forming a
hypothesis, you will have to prove the algorithm's optimality for an arbitrary
input or find a specific counterexample showing that it is not the case.
4. Only the earliest-finish-first algorithm always yields an optimal solution.
5. Simply apply the greedy approach to the situation at hand. You may assume
that t_1 ≤ t_2 ≤ · · · ≤ t_n.
6. Think about the minimum positive amount of water among all the vessels in their
current state.
7. The minimum number of messages for n =4 is six.
8. For both versions of the problem, it is not difcult to get to a hypothesis about
the solution's form after considering the cases of n =1, 2, and 3. It is proving
the solution's optimality that is at the heart of this problem.
9. a. Trace the algorithm for the graph given. An example can be found in the
text.
b. After the next fringe vertex is added to the tree, add all the unseen vertices
adjacent to it to the priority queue of fringe vertices.
10. Applying Prim's algorithm to a weighted graph that is not connected should
help in answering this question.
11. Check whether the proof of the algorithm's correctness is valid for negative
edge weights.
12. The answer is no. Give a counterexample.
13. Since Prim's algorithm needs weights on a graph's edges, some weights have
to be assigned. As to the second question, think of other algorithms that can
solve this problem.
14. Strictly speaking, the wording of the question asks you to prove two things: the
fact that at least one minimum spanning tree exists for any weighted connected
graph and the fact that a minimum spanning tree is unique if all the weights are
distinct numbers. The proof of the former stems from the obvious observation
about finiteness of the number of spanning trees for a weighted connected
graph. The proof of the latter can be obtained by repeating the correctness
proof of Prim's algorithm with a minor adjustment at the end.
15. Consider two cases: the key's value was decreased (this is the case needed for
Prim's algorithm) and the key's value was increased.
Exercises 9.2
1. Trace the algorithm for the given graphs the same way it is done for another
input in the section.
2. Two of the four assertions are true; the other two are false.
3. Applying Kruskal's algorithm to a disconnected graph should help to answer
the question.
4. One way to answer the question is to transform a graph with negative weights
to one with all positive weights.
5. Is the general trick of transforming maximization problems to their minimiza-
tion counterparts (see Section 6.6) applicable here?
6. Substitute the three operations of the disjoint subsets ADT (makeset(x),
find(x), and union(x, y)) in the appropriate places of the algorithm's pseudocode given in the section.
7. Follow the plan used in Section 9.1 to prove the correctness of Prim's algo-
rithm.
8. The argument is very similar to the one made in the section for the union-by-
size version of quick find.
11. The question is not trivial, because introducing extra points (called Steiner
points) may make the total length of the network smaller than that of a
minimum spanning tree of the square. Solving first the problem for three
equidistant points might give you an indication of what a solution to the
problem in question might look like.
Exercises 9.3
1. One of the questions requires no changes in either the algorithm or the graph;
the others require simple adjustments.
2. Just trace the algorithm on the input graphs the same way it was done for an
example in the section.
3. Your counterexample can be a graph with just three vertices.
4. Only one of the assertions is correct. Find a small counterexample for the
other.
5. Simplify the pseudocode given in the section by implementing the priority
queue as an unordered array and eliminating the parental labeling of vertices.
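A minimal sketch of the simplification this hint describes, with the priority queue kept as an unordered collection scanned linearly and no parental labels; the graph format {u: {v: weight}} is an illustrative assumption.

    def dijkstra(graph, source):
        dist = {v: float('inf') for v in graph}
        dist[source] = 0
        unprocessed = list(graph)            # the "unordered array"
        while unprocessed:
            # a linear scan for the minimum replaces all heap operations
            u = min(unprocessed, key=lambda v: dist[v])
            unprocessed.remove(u)
            for v, weight in graph[u].items():
                if dist[u] + weight < dist[v]:
                    dist[v] = dist[u] + weight   # relax the edge (u, v)
        return dist

The two linear scans per iteration give the $\Theta(|V|^2)$ running time that this simplified version is usually credited with.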
6. Prove it by induction on the number of vertices included in the tree con-
structed by the algorithm.
7. Topologically sort the dag's vertices first.
8. To get a graph, connect numbers on adjacent levels that can be components
of a sum from the apex to the base. Then figure out how to deal with the fact
that the weights are assigned to vertices rather than edges.
9. Take advantage of the ways of thinking used in geometry and physics.
10. Before you embark on implementing a shortest-path algorithm, you would
have to decide what criterion determines the best route. Of course, it would
be highly desirable to have a program asking the user which of several possible
criteria s/he wants to be applied.
Exercises 9.4
1. See the example given in the section.
2. After combining the two nodes with the lowest probabilities, resolve the tie
arising on the next iteration in two different ways. For each of the two Huffman
codes obtained, compute the mean and variance of the codeword length.
3. You may base your answers on the way Huffman's algorithm works or on the
fact that Huffman codes are known to be optimal prefix codes.
4. The maximal length of a codeword relates to the height of Huffman's coding
tree in an obvious fashion. Try to find a set of n specific frequencies for
an alphabet of size n for which the tree has the shape yielding the longest
codeword possible.
5. a. What is the most appropriate data structure for an algorithm whose principal
operation is finding the two smallest elements in a given set and
replacing them by their sum?
b. Identify the principal operations of the algorithm, the number of times they
are executed, and their efficiencies for the data structure used.
6. Maintain two queues: one for given frequencies, the other for weights of new
trees.
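A minimal sketch of this two-queue technique (the queue discipline and the tuple-based tree representation are illustrative assumptions): because newly created trees are produced in nondecreasing weight order, the front of each queue always holds its minimum, so no heap is needed.

    from collections import deque

    def huffman_two_queues(sorted_freqs):
        leaves = deque(sorted_freqs)      # given frequencies, sorted ascending
        trees = deque()                   # (weight, tree) pairs for new trees

        def pop_smallest():
            # the front of each queue is its minimum element
            if not trees or (leaves and leaves[0] <= trees[0][0]):
                w = leaves.popleft()
                return w, w               # a leaf is represented by its weight
            return trees.popleft()

        while len(leaves) + len(trees) > 1:
            w1, t1 = pop_smallest()
            w2, t2 = pop_smallest()
            trees.append((w1 + w2, (t1, t2)))   # merged tree goes to the back
        return pop_smallest()[1]

    print(huffman_two_queues([1, 1, 2, 3, 5]))  # nested-tuple coding tree

Since each of the n - 1 merges does constant work, the construction itself runs in linear time once the frequencies are sorted.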
7. It would be natural to use one of the standard traversal algorithms.
8. Generate the codewords right to left.
10. A similar example was discussed at the end of Section 9.4. Construct Huffman's
tree and then come up with specific questions that would yield that tree.
(You are allowed to ask questions such as: Is this card the ace, or a seven, or
an eight?)
CHAPTER 10
Exercises 10.1
1. Start at an arbitrary integer point x and investigate whether a neighboring
point is a better location for the post office than x is.
2. Sketch the feasible region of the problem in question. Follow this up by either
applying the Extreme-Point Theorem or by inspecting level lines, whichever
is more appropriate. Both methods were illustrated in the text.
3. Sketch the feasible region of the problem. Then choose values of the parameters
$c_1$ and $c_2$ to obtain a desired behavior of the objective function's level
lines.
4. What is the principal difference between maximizing a linear function, say,
$f(x) = 2x$, on a closed vs. semi-open interval, e.g., $0 \le x \le 1$ vs. $0 \le x < 1$?
5. Trace the simplex method on the instances given, as was done for an example
in the text.
6. When solving the problem by hand, you might want to start by getting rid
of fractional coefficients in the problem's statement. Also, note that the
problem's specifics make it possible to replace its equality constraint by
one inequality constraint. You were asked to solve this problem directly in
Problem 8 of Exercises 6.6.
7. The specifics of the problem make it possible to see the optimal solution at
once. Sketching its feasible region for n = 2 or n = 3, though not necessary,
may help to see both this solution and the number of iterations needed by the
simplex method to find it.
8. Consider separately two versions of the problem: continuous and 0-1 (see
Example 2 in Section 6.6).
9. If $x' = (x'_1, x'_2, \ldots, x'_n)$ and $x'' = (x''_1, x''_2, \ldots, x''_n)$ are two distinct optimal solutions
to the same linear programming problem, what can we say about any
point of the line segment with the endpoints at $x'$ and $x''$? Note that any such
point $x$ can be expressed as $x = tx' + (1-t)x'' = (tx'_1 + (1-t)x''_1,\; tx'_2 + (1-t)x''_2, \ldots, tx'_n + (1-t)x''_n)$, where $0 \le t \le 1$.
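A one-line check (our addition, using only the linearity of the objective function $c^T x$, with $z^*$ denoting the common optimal value, a notation introduced here for convenience):
$$c^T x = c^T\big(t x' + (1-t)x''\big) = t\,c^T x' + (1-t)\,c^T x'' = t z^* + (1-t) z^* = z^*.$$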
10. a. You will need to use the notion of a matrix transpose, defined as the matrix
whose rows are the columns of the given matrix.
b. Apply the general definition to the specific problem given. Note the change
from maximization to minimization, the change of the roles played by the
objective function's coefficients and the constraints' right-hand sides, the
transposition of the constraints, and the reversal of their signs.
c. You may use either the simplex method or the geometric approach.
Exercises 10.2
1. What properties of the elements of the modified adjacency matrix stem from
the source and sink definitions, respectively?
2. See the algorithm and an example illustrating it in the text.
3. Of course, the value (capacity) of an optimal flow (cut) is the same for any
optimal solution. The question is whether distinct flows (cuts) can yield the
same optimal value.
4. a. Add extra vertices and edges to the network given.
b. If an intermediate vertex has a constraint on the flow amount that can flow
through it, split the vertex in two.
5. Take advantage of the recursive structure of a rooted tree.
6. a. Sum the equations expressing the flow-conservation requirements.
b. Sum the equations defining the flow value and flow-conservation requirements
for the vertices in set X inducing the cut.
7. a. Use template (10.11) given in the text.
b. Use either an add-on tool of your spreadsheet or some software available
on the Internet.
10. Use edge capacities to impose the problem's constraints. Also, take advantage
of the solution to Problem 4(a).
Exercises 10.3
1. You may (but do not have to) use the algorithm described in the section.
2. See an application of this algorithm to another bipartite graph in the section.
3. The definition of a matching and its cardinality should lead you to the answers
to these questions with no difficulty.
4. a. You do not have to check the inequality for each subset S of V if you can
point out a subset for which the inequality does not hold. Otherwise, fill in
a table for all the subsets S of the indicated set V with columns for $S$, $R(S)$,
and $|R(S)| \ge |S|$.
b. Think time efficiency.
5. Reduce the problem to finding a maximum matching in a bipartite graph.
6. Transform a given bipartite graph into a network by making vertices of the
former be intermediate vertices of the latter.
7. Since this greedy algorithm is arguably simpler than the augmenting-path
algorithm given in the section, should we expect a positive or negative answer?
Of course, this point cannot be substituted for a more specific argument or a
counterexample.
8. Start by presenting a tree given as a BFS tree.
9. For pointers regarding an efficient implementation of the algorithm, see
[Pap82, Section 10.2].
10. Although not necessary, thinking about the problem as one dealing with
matching squares of a chessboard might lead you to a short and elegant proof
that this well-known puzzle has no solution.
Exercises 10.4
1. A marriage matching is obtained by selecting three matrix cells, one cell from
each row and column. To determine the stability of a given marriage matching,
check each of the remaining matrix cells for a blocking pair.
2. It suffices to consider each member of one sex (say, the men) as a potential
member of a blocking pair.
3. An application of the men-proposing version to another instance is given in
the section. For the women-proposing version, reverse the roles of the sexes.
4. You may use either the men-proposing or women-proposing version of the
algorithm.
5. The time efficiency is clearly defined by the number of proposals made. You
may (but are not required to) provide the exact number of proposals in the
worst and best cases, respectively; an appropriate class will suffice.
6. Prove it by contradiction.
7. Prove it by contradiction.
8. Choose data structures so that the innermost loop of the algorithm can run in
constant time.
9. The principal references are [Gal62] and [Gus89].
10. Consider four boys, three of whom rate the fourth boy as the least desired
roommate. Complete these rankings to obtain an instance with no stable
pairing.
CHAPTER 11
Exercises 11.1
1. Is it possible to solve the puzzle by making fewer moves than the brute-force
algorithm? Why?
2. Since you know that the number of disk moves made by the classic algorithm
is $2^n - 1$, you can simply prove (e.g., by mathematical induction) that for any
algorithm solving this problem, the number of disk moves $M(n)$ made by the
algorithm is greater than or equal to $2^n - 1$. Alternatively, you can show that
if $M(n)$ is the minimum number of disk moves needed, it satisfies the recurrence relation
$$M(n) = 2M(n-1) + 1 \text{ for } n > 1, \qquad M(1) = 1,$$
whose solution is $2^n - 1$.
3. All these questions have straightforward answers. If a trivial lower bound is
tight, don't forget to mention a specific algorithm that proves its tightness.
4. Reviewing Section 4.4, where the fake-coin problem was introduced, should
help in answering the question.
5. Pay attention to comparison losers.
6. Think inversions.
7. Divide the set of vertices of an input graph into two disjoint subsets U and
W having $\lfloor n/2 \rfloor$ and $\lceil n/2 \rceil$ vertices, respectively, and show that any algorithm
will have to check for an edge between every pair of vertices $(u, w)$, where
$u \in U$ and $w \in W$, before the graph's connectivity can be established.
8. The question and the answer are quite similar to the case of two n-element
sorted lists discussed in the section. So is the proof of the lower bound.
9. Simply follow the transformation formula suggested in the section.
10. a. Check whether the formulas hold for two arbitrary square matrices.
b. Use a formula similar to the one showing that multiplication of arbitrary
square matrices can be reduced to multiplication of symmetric matrices.
11. What problem with a known lower bound is most similar to the one in question?
After finding an appropriate reduction, do not forget to indicate an
algorithm that makes the lower bound tight.
12. Use the problem reduction method.
Exercises 11.2
1. a. Prove first that $2^h \ge l$ by induction on $h$.
b. Prove first that $3^h \ge l$ by induction on $h$.
2. a. How many outcomes does the problem have?
b. Of course, there are many ways to solve this simple problem.
c. Thinking about a, b, and c as points on the real line should help.
3. This is a straightforward question. You may assume that the three elements
to be sorted are distinct. (If you need help, see decision trees for the three-
element selection sort and three-element insertion sort in the section.)
4. Compute a nontrivial lower bound for sorting a four-element array and then
identify a sorting algorithm whose number of comparisons in the worst case
matches the lower bound.
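As a sanity check (our computation, not part of the hint), the information-theoretic bound for this case is
$$\lceil \log_2 4! \rceil = \lceil \log_2 24 \rceil = 5,$$
so you are looking for an algorithm that sorts any four keys with five comparisons in the worst case.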
5. This is not an easy task. None of the standard sorting algorithms can do this.
Try to design a special algorithm that squeezes as much information as possible
from each of its comparisons.
6. This is a very straightforward question. Use the obvious observation that
sequential search in a sorted list can be stopped as soon as an element larger
than the search key is encountered.
7. a. Start by transforming the logarithms to the same base.
b. The easiest way is to prove that
$$\lim_{n \to \infty} \frac{\lceil \log_2(n+1) \rceil}{\lceil \log_3(2n+1) \rceil} > 1.$$
To get rid of the ceiling functions, you can use
$$\frac{f(n) - 1}{g(n) + 1} < \frac{\lceil f(n) \rceil}{\lceil g(n) \rceil} < \frac{f(n) + 1}{g(n) - 1},$$
where $f(n) = \log_2(n+1)$ and $g(n) = \log_3(2n+1)$, and show that
$$\lim_{n \to \infty} \frac{f(n) - 1}{g(n) + 1} = \lim_{n \to \infty} \frac{f(n) + 1}{g(n) - 1} > 1.$$
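Both limits are easy to evaluate (our computation): adding or subtracting 1 does not affect the limit of a ratio of two functions growing to infinity, and
$$\lim_{n \to \infty} \frac{f(n)}{g(n)} = \lim_{n \to \infty} \frac{\ln(n+1)/\ln 2}{\ln(2n+1)/\ln 3} = \frac{\ln 3}{\ln 2} = \log_2 3 \approx 1.585 > 1.$$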
8. The answer to the first question follows directly from inequality (11.1). The
answer to the second is no (why?).
9. a. Think losers.
b. Think the height of the tournament tree or, alternatively, the number of
steps needed to reduce an n-element set to a one-element set by halving.
c. After the winner has been determined, which player can be the second
best?
10. a. How many outcomes does this problem have?
b. Draw a ternary decision tree that solves the problem.
c. Show that each of the two cases, weighing two coins (one on each cup of
the scale) or four coins (two on each cup of the scale), yields at least one
situation with more than three outcomes still possible. The latter cannot
be resolved uniquely with a single weighing.¹
d. Decide first whether you should start with weighing two coins. Do not
forget that you can take advantage of the extra coin known to be genuine.
e. This is a famous puzzle. The principal insight is that of the solution to
part (d).
11. If you want to solve the problem in the spirit of the section, represent the
process of assembling the puzzle by a binary tree.
¹ This approach of using information-theoretic reasoning for the problem was suggested by Brassard
and Bratley [Bra96].
Exercises 11.3
1. Check the definition of a decidable decision problem.
2. First, determine whether $n^{\log_2 n}$ is a polynomial function. Then, read carefully
the definitions of tractable and intractable problems.
3. All four combinations are possible, and none of the examples needs to be
large.
4. Simply use the definition of the chromatic number. Solving Problem 5 first
might be helpful but not necessary.
5. This problem should be already familiar to you.
6. What is a proper measure of an input's size for this problem?
7. See the formulation of the decision version of graph coloring and the verification
algorithm for the Hamiltonian circuit problem given in the section.
8. You may start by expressing the partition problem as a linear equation with
0-1 variables $x_i$, $i = 1, \ldots, n$.
9. If you are not familiar with the notions of a clique, vertex cover, and independent
set, it would be a good idea to start by finding a maximum-size clique,
a minimum-size vertex cover, and a maximum-size independent set for a few
simple graphs such as those in Problem 4. As far as Problem 9 is concerned,
try to find a relationship between these three notions. You will find it useful
to consider the complement of your graph, which is the graph with the same
vertices and the edges between vertices that are not adjacent in the graph
itself.
10. The same problem in a different wording can be found in the section.
11. Just two of them do not contradict the current state of our knowledge about
the complexity classes.
12. The problem you need was mentioned explicitly in the section.
Exercises 11.4
1. As the given definition of the number of significant digits requires, compute
the relative errors of the approximations. One of the answers doesn't agree
with our intuitive idea of this notion.
2. Use the definitions of the absolute and relative errors and the properties of
the absolute value.
3. Compute the value of $\sum_{i=0}^{5} \frac{0.5^i}{i!}$ and the magnitude of the difference between
it and $\sqrt{e} = 1.648721\ldots$
4. Apply the formula for the area of a trapezoid to each of the n approximating
trapezoid strips and sum them up.
5. Apply formulas (11.7) and (11.9) to the integrals given.
6. Find an upper bound for the second derivative of $e^{\sin x}$ and use formula (11.9)
to find a value of n guaranteeing the truncation error smaller than the given
error limit.
7. A similar problem is discussed in the section.
8. Consider all possible values for the coefficients a, b, and c. Keep in mind that
solving an equation means finding all its roots or proving that no roots exist.
9. a. Prove that every element $x_n$ of the sequence is (i) positive, (ii) greater
than $\sqrt{D}$ (by computing $x_{n+1} - \sqrt{D} = \frac{(x_n - \sqrt{D})^2}{2x_n}$).
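The identity in part (a) is exactly what one gets for the iteration $x_{n+1} = \frac{1}{2}\left(x_n + D/x_n\right)$, i.e., Newton's method applied to $x^2 - D = 0$. A minimal sketch, with the starting point and tolerance as illustrative assumptions:

    def newton_sqrt(D, x0=1.0, eps=1e-10):
        x = x0
        while abs(x * x - D) > eps:
            x = (x + D / x) / 2   # the error is squared (up to a factor) each step
        return x

    print(newton_sqrt(2))         # 1.41421356...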
10. It is done for $\sqrt{2}$ in the section.
CHAPTER 12
Exercises 12.1
1. a. Resume the algorithm by backtracking from the first solution's leaf.
b. How can you get the second solution from the first one by exploiting a
symmetry of the board?
2. Think backtracking applied backward.
3. a. Take advantage of the general template for backtracking algorithms. You
will have to figure out how to check whether no two queens attack each
other in a given placement of the queens.
To make your comparison with an exhaustive-search algorithm easier,
you may consider the version that finds all the solutions to the problem
without taking advantage of the symmetries of the board. Also note that
an exhaustive-search algorithm can try either all placements of n queens on
n distinct squares of the $n \times n$ board, or only placements of the queens in
different rows, or only placements in different rows and different columns.
b. Although it is interesting to see how accurate such an estimate is for a single
random path, you would want to compute the average of several of them
to get a reasonably accurate estimate of the tree size.
4. Consider separately six cases of different remainders of the division of n by
6. The cases of n mod 6 = 2 and n mod 6 = 3 are harder than the others and
require an adjustment of a greedy placement of the queens.
5. Another instance of this problem is solved in the section.
6. Note that without loss of generality, one can assume that vertex a is colored
with color 1 and hence associate this information with the root of the state-
space tree.
7. This application of backtracking is quite straightforward.
8. a. Another instance of this problem is solved in the section.
b. Some of the nodes will be deemed promising when, in fact, they are not.
9. A minor change in the template given does the job.
11. Make sure that your program does not duplicate tree nodes for the same board
position. And, of course, if a given instance of the puzzle does not have a
solution, your program should issue a message to that effect.
Exercises 12.2
1. What operations does a best-first branch-and-bound algorithm perform on
the live nodes of its state-space tree?
2. Use the smallest numbers selected from the columns of the cost matrix to
compute the lower bounds. With this bounding function, it's more logical to
consider four ways to assign job 1 for the nodes on the first level of the tree.
3. a. Your answer should be an $n \times n$ matrix with a simple structure making the
algorithm work the fastest.
b. Sketch the structure of the state-space tree for your answer to part (a).
5. A similar problem is solved in the section.
6. Take into account more than a single item from those not included in the
subset under consideration.
8. A Hamiltonian circuit must have exactly two edges incident to each vertex of
the graph.
9. A similar problem is solved in the section.
Exercises 12.3
1. a. Start by marking the first column of the matrix and finding the smallest
element in the first row and an unmarked column.
b. You will have to find an optimal solution by exhaustive search or by a
branch-and-bound algorithm or by some other method.
2. a. The simplest approach is to mark matrix columns that correspond to visited
cities. Alternatively, you can maintain a linked list of unvisited cities.
b. Following the standard plan for analyzing algorithm efficiency should pose
no difficulty (and yield the same result for either of the two options mentioned
in the hint to part (a)).
3. Do the walk in the clockwise direction.
4. Extend the triangle inequality to the case of $k \ge 1$ intermediate vertices and
prove its validity by mathematical induction.
5. First, determine the time efficiency of each of the three steps of the algorithm.
6. You will have to prove two facts:
i. $f(s^*) \le 2f(s_a)$ for any instance of the knapsack problem, where $f(s_a)$ is
the value of the approximate solution obtained by the enhanced greedy
algorithm and $f(s^*)$ is the value of an exact optimal solution to the same instance;
ii. ...
7. ... $\sum_{i=1}^{n} s_i$ for any instance with $B_{FF} > 1$,
where $B_{FF}$ is the number of bins obtained by applying the first-fit (FF) algorithm
to an instance with sizes $s_1, s_2, \ldots, s_n$. To prove it, take advantage
of the fact that there can be no more than one bin that is half full or less.
8. a. Trace the algorithm on the instance given and then answer the question
whether you can put the same items in fewer bins.
b. You can answer the question either with a theoretical argument or by
providing a counterexample.
c. Take advantage of the two following properties:
i. All the items placed by FFD in extra bins, i.e., bins after the first $B$ ...
...
$\chi_a(G_n)/\chi(G_n)$
(where $\chi_a(G_n)$ and $\chi(G_n)$ are the number of colors obtained by the greedy
algorithm and the minimum number of colors, respectively) can be made
as large as one wishes.
Exercises 12.4
1. It might help your search to know that the solution was first published by the
Italian Renaissance mathematician Girolamo Cardano.
2. You can answer these questions without using calculus or a sophisticated
calculator by representing equations in the form $f_1(x) = f_2(x)$ and graphing
the functions $f_1(x)$ and $f_2(x)$.
3. a. Use the property underlying the bisection method.
b. Use the definition of division of polynomial $p(x)$ by $x - x_0$, i.e., the equality
$$p(x) = q(x)(x - x_0) + r,$$
where $x_0$ is a root of $p(x)$, and $q(x)$ and $r$ are the quotient and remainder of
this division, respectively.
c. Differentiate both sides of the equality given in part (b) and substitute $x_0$
in the result.
4. Use the fact that $|x_n - x^*|$ ...
5. Sketch the graph to determine a general location of the root and choose an
initial interval bracketing it. Use an appropriate inequality given in Section 12.4
to determine the smallest number of iterations required. Perform the iterations
of the algorithm, as is done for the example in the section.
6. Write an equation of the line through the points $(a_n, f(a_n))$ and $(b_n, f(b_n))$
and find its $x$-intercept.
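Carrying this out (our derivation, not quoted from the text): the line is $y = f(a_n) + \frac{f(b_n) - f(a_n)}{b_n - a_n}(x - a_n)$, and setting $y = 0$ gives the method-of-false-position approximation
$$x = a_n - \frac{f(a_n)(b_n - a_n)}{f(b_n) - f(a_n)}.$$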
7. See the example given in the section. As a stopping criterion, you may use
either the length of the interval $[a_n, b_n]$ or inequality (12.12).
8. Write an equation of the tangent line to the graph of the function at $(x_n, f(x_n))$
and find its $x$-intercept.
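Again carrying out the computation (our derivation): the tangent line is $y = f(x_n) + f'(x_n)(x - x_n)$, and its $x$-intercept yields Newton's formula
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$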
9. See the example given in the section. Of course, you may start with a different
$x_0$ than the one used in that example.
10. Consider, for example, $f(x) = \sqrt[3]{x}$.
11. Derive an equation for the area in question and then solve it by using one of
the methods discussed in the section.
Index
The index covers the main text, the exercises, the epilogue, and the appendices.
The following indicators are used after page numbers: ex for exercises, fig for
figures, n for footnotes, sum for summaries.
Numbers and Symbols
2-approximation algorithm, 447, 458ex
2-change, 450
2-colorable graph, 129ex. See also bipartite
graph
2-node, 223
2-opt, 449–450, 451fig, 453, 469sum
2–3 tree, 218, 223–225, 226ex, 250sum
2–3–4 tree, 218
top-down, 279ex
3-change, 450, 452fig
3-node, 223
3-opt, 449, 450, 453, 469sum
e, 475
o. See little-oh notation
O. See big-oh notation
Θ. See big-theta notation
Ω. See big-omega notation
γ (Euler's constant), 476
π, 17ex, 419ex
φ (golden ratio), 80
A
Abel, Niels, 460
absolute error, 414, 419ex, 461, 468ex
abstract data type (ADT), 37, 39sum
accuracy ratio, 442–444, 446–447, 453, 455, 457ex
Adelson-Velsky, G. M., 218
adjacency lists, 29, 30, 37ex, 39sum, 43n
adjacency matrix, 29, 30, 37ex, 39sum
Adleman, L. M., 474
ADT. See abstract data type
adversary arguments, 390–391, 394ex, 420sum
algorithm, 3–4, 38sum
analyzing of, 14–15
approximation, 11
basic operation of, 44, 46
coding of, 15–16
correctness of, 13–14
efficiency of, 14
exact, 11
generality of, 14
input to, 9
methods of specifying, 12–13
nondeterministic, 404
optimality of, 16. See also lower
bounds
origin of word, 7ex
parallel, 10, 472
patentability of, 7ex
randomized, 180, 472
running time of, 44–45
sequential, 10, 473
simplicity of, 14
space efficiency of, 14, 41–42, 94sum, 95sum
time efficiency of, 14, 41–42, 94sum, 95sum
algorithm animation, 91–92. See also
algorithm visualization
algorithm design paradigm. See algorithm
design technique
algorithm design strategy. See algorithm
design technique
algorithm design technique, 11, 471
algorithmic problem solving, 9–18
algorithmics, 1
Algorithmics: the Spirit of Computing, 1
algorithm visualization, 91–94, 95sum
al-Khorezmi (al-Khwarizmi), 7ex
all-pairs shortest-paths problem, 308, 312ex
amortized efficiency, 49, 330n
analysis of algorithm efficiency, 41–95
amortized. See amortized efficiency
average-case, 48–49, 84–91
best-case, 48
empirical, 84–91, 95sum
framework for, 42–52, 94–95sum
mathematical
of decrease-by-a-constant-factor algorithms, 150–157, 486–487
of decrease-by-one algorithms, 135, 137–138ex, 485–486
of divide-and-conquer algorithms, 171–197, 198sum, 487–491
of nonrecursive algorithms, 61–70, 95sum
of recursive algorithms, 70–79, 84ex, 95sum
useful formulas for, 475–477
worst-case, 47–48
ancestor, of a tree vertex, 33, 258ex
proper, 33
ancestry problem, 258ex
approximation algorithms, 11, 13, 441–459. See also numerical algorithms
for bin packing, 458ex
for graph coloring, 459ex
for knapsack problem, 453–457
approximation schemes, 456–457, 469sum
greedy algorithms, 454–456, 458ex, 469sum
for maximum independent set, 458ex
for minimum vertex cover, 458ex
for TSP, 443–453, 469sum
empirical data, 453
greedy algorithms, 444–446, 457–458ex, 469sum
local search heuristics, 449–453, 469sum
minimum-spanning-tree-based, 446–449, 458ex, 469sum
approximation schemes, 456–457, 469sum
fully polynomial, 457
array, 25–26, 38ex, 39sum
deletion in, 37ex
index, 26
articulation point, 125
artificial intelligence (AI), 247
assignment operation (←), 13
assignment problem, 119–120, 120ex, 130sum
branch-and-bound for, 433–436, 440ex
exhaustive search for, 119–120, 120ex
greedy technique for, 322ex
Hungarian method for, 120
linear programming for, 248–249ex
asymptotic notations, 52–61
and limits, 56–58
big-oh notation, 52–53, 54fig
big-omega notation, 52, 54–55
big-theta notation, 52, 55
informal introduction to, 52–53
properties of, 55–56, 60ex
augmentation
of matching, 373, 374fig, 377ex, 378fig
of network flow. See flow-augmenting path
augmenting-path method (Ford-Fulkerson), 363–369
average-case efficiency, 48–49, 90ex, 94sum. See also empirical analysis of algorithms
AVL tree, 218–223, 225ex, 226ex, 250sum
efficiency of dictionary operations, 223
height of, 223
rotations, 219–222, 226ex
B
Bachet, Claude-Gaspar, 165
Bachet's problem of weights, 323–324ex
back edge
directed, 139, 140
undirected, 123, 124, 125, 128ex, 248ex
back substitutions, in Gaussian elimination,
209, 210, 216ex
backtracking, 424–432, 468sum
estimating efficiency of, 430, 431ex
general template for, 429, 431ex
for graph coloring, 431ex
for Hamiltonian circuit problem, 426–427, 431ex
for n-queens problem, 425–426, 430ex, 431ex
for peg solitaire, 431–432ex
for permutation generation, 431ex
for subset-sum problem, 427–428, 431ex
backward edge, in augmenting path, 363–369
backward substitutions, for solving
recurrences. See method of
backward substitutions
bad-symbol shift. See Boyer-Moore algorithm
bag, 36
balance factor, in AVL tree, 219
balanced search trees, 218, 274. See also 2–3 tree; 2–3–4 tree; AVL tree; B-tree
basic efficiency classes, 58, 59, 60ex, 95sum. See also asymptotic notations
basic operation, 44, 50, 94sum
count, 44–45, 50ex, 90ex
order of growth, 45–47. See also asymptotic notations
basic solution, 352
feasible, 353
Bellman, Richard, 283, 284
Bentley, Jon, 15n
Berge, Claude, 375
best-case efficiency, 48, 51ex
best subway route, 24–25ex, 338ex
BFS. See breadth-first search
big-oh notation, 52–53, 54fig, 57, 58–61ex
big-omega notation, 52–53, 54, 56, 57, 58–60ex
big-theta notation, 52–53, 57, 58–61ex
binary digital sum (nim sum), 165–166
binary exponentiation, 236–239, 251sum
left-to-right, 237, 240ex
right-to-left, 238–239, 240ex
binary reflected Gray code. See Gray code
binary representation of a decimal integer
length counter algorithm, 66, 75, 78ex
number of bits in, 44, 51ex
binary search, 150–152, 156ex, 163, 168sum, 205, 463
efficiency of, 151–152, 157ex
binary search tree, 34, 38ex, 60ex, 166ex,
186ex, 218, 226ex, 303ex. See also
AVL tree
deletion in, 166ex
insertion in, 163, 164fig
optimal, 297–302, 303ex
searching in, 163
self-balancing, 218
binary string. See bit string
binary tree, 33–35, 38ex, 39sum, 182–185. See also decision trees
(essentially) complete, 227
extension of, 183
full, 184
height of, 33, 183–184
path length in
external, 186ex
internal, 186ex
minimum weighted, 341–342
traversals of, 182–185, 185ex, 186ex
binary tree search. See binary search tree,
searching in
binomial coefficient, 292ex, 297ex
bin-packing problem, 404, 407, 410ex
approximation algorithms for, 458ex
bipartite graph, 129ex
maximum matching in, 372–380
birthday paradox, 275ex
bisection method, 460–463, 466, 467ex, 469sum. See also binary search
bit string, 26
and subset, 147
as codeword, 338–341, 342ex, 343ex
bit vector, 36
Bland's rule, 357
Bouton, C. L., 166
Boyer-Moore algorithm, 259, 263–267, 268ex, 280sum
branch-and-bound, 432–441, 468sum
best-first, 434, 435fig, 437fig, 440ex
for assignment problem, 433–436, 440ex
for knapsack problem, 436–438, 440–441ex
for TSP, 438–440, 441ex
breadth-first search (BFS), 125–128, 129ex, 130ex, 130sum
efficiency of, 127
forest, 125, 126fig, 128ex, 130sum
main facts about, 128
queue, 125
brute force, 97–130, 130sum. See also exhaustive search
for closest-pair problem, 108–109, 113ex, 114ex
for composite number problem, 410ex
for convex-hull problem, 112–113, 115ex
for element-uniqueness problem, 63–64
for matrix multiplication, 64–66, 68ex
for polynomial evaluation, 239ex
for searching. See sequential search
for sorting. See bubble sort; selection sort
for string matching, 105–106, 107ex, 268ex
vs. presorting, 203–204, 205ex, 206ex
B-tree, 276–279, 279ex, 280ex, 281sum
efficiency of, 278
height of, 277–278, 279ex