An Introduction to the
Analysis of Algorithms
3rd Edition


An Introduction to the
Analysis of Algorithms
3rd Edition

Michael Soltys
California State University Channel Islands, USA

World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI • TOKYO



Published by
World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

British Library Cataloguing-in-Publication Data


A catalogue record for this book is available from the British Library.

AN INTRODUCTION TO THE ANALYSIS OF ALGORITHMS


Third Edition
Copyright © 2018 by World Scientific Publishing Co. Pte. Ltd.
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means,
electronic or mechanical, including photocopying, recording or any information storage and retrieval
system now known or to be invented, without written permission from the publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance
Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy
is not required from the publisher.

ISBN 978-981-3235-90-8

For any available supplementary material, please visit


http://www.worldscientific.com/worldscibooks/10.1142/10875#t=suppl

Desk Editor: Amanda Yun

Printed in Singapore



October 27, 2017 10:59 BK032 - An Introduction to the Analysis of Algorithms (3rd Edition) soltys˙alg page v

To my family




Preface

If he had only learnt a little less, how infinitely better he might have taught much more!

Charles Dickens [Dickens (1854)], pg. 7

This book is a short introduction to the analysis of algorithms, from the point of view of proving algorithm correctness. The quote above refers to
Mr. M‘Choakumchild, a caricature of a teacher in Charles Dickens’ Hard
Times, who chokes the minds of his pupils with too much information. We
will avoid M‘Choakumchild’s mistake, and make a virtue out of brevity.
Our theme is the following: how do we argue mathematically, without
a burden of excessive formalism, that a given algorithm does what it is
supposed to do? And why is this important? In the words of C.A.R.
Hoare:
As far as the fundamental science is concerned, we still certainly
do not know how to prove programs correct. We need a lot of
steady progress in this area, which one can foresee, and a lot of
breakthroughs where people suddenly find there’s a simple way
to do something that everybody hitherto has thought to be far
too difficult1 .

Software engineers know many examples of things going terribly wrong because of program errors; their particular favorites are the following two2.
The blackout in the American North-East during the summer of 2003 was
due to a software bug in an energy management system; an alarm that
1 From An Interview with C.A.R. Hoare, in [Shustek (2009)].
2 These two examples come from [van Vliet (2000)], where many more instances of spectacular failures may be found.


should have been triggered never went off, leading to a chain of events that
climaxed in a cascading blackout. The Ariane 5, flight 501, the maiden
flight of the rocket on June 4, 1996, ended with an explosion 40 seconds
into the flight; this $500 million loss was caused by an overflow in the
conversion from a 64-bit floating point number to a 16-bit signed integer.
When Richard A. Clarke, the former National Coordinator for Security,
asked Ed Amoroso, head of AT&T Network Security, what is to be done
about the vulnerabilities in the USA cyber-infrastructure, Amoroso said:
Software is most of the problem. We have to write software
which has many fewer errors and which is more secure3 .
Similarly, Fred D. Taylor, Jr., a Lt. Colonel in the United States Air Force
and a National Security Fellow at the Harvard Kennedy School, wrote:
The extensive reliance on software has created new and ex-
panding opportunities. Along with these opportunities, there
are new vulnerabilities putting the global infrastructure and
our national security at risk. The ubiquitous nature of the In-
ternet and the fact that it is serviced by common protocols
and processes has allowed anyone with the knowledge to create
software to engage in world-wide activities. However, for most
software developers there is no incentive to produce software
that is more secure4 .

Software security falls naturally under the umbrella of software correctness. While the goal of program correctness is elusive, we can develop methods
and techniques for reducing errors. The aim of this book is modest: we
want to present an introduction to the analysis of algorithms—the “ideas”
behind programs, and show how to prove their correctness.
The algorithm may be correct, but the implementation itself might be
flawed. Some syntactical errors in the program implementation may be
uncovered by a compiler or translator—which in turn could also be buggy—
but there might be other hidden errors. The hardware itself might be faulty;
the libraries on which the program relies at run time might be unreliable,
etc. It is the main task of a programmer to write code that works given
such a delicate, error prone, environment. Finally, the algorithmic content
of a piece of software might be very small; the majority of the lines of code
could be the “menial” task of interface programming. Thus, the ability to
argue correctly about the soundness of an algorithm is only one of many
facets of the task at hand, yet an important one, if only for the pedagogical
reason of learning to argue rigorously about algorithms.
3 See page 272 in [Clarke and Knake (2011)].
4 Harvard Law School National Security Journal, [Fred D. Taylor (2011)].

We begin this book with a chapter of preliminaries, containing the key ideas of induction and invariance, and the framework of pre/post-conditions
and loop invariants. We also prove the correctness of some classical algo-
rithms, such as the integer division algorithm, and Euclid’s procedure for
computing the greatest common divisor of two numbers.
We present three standard algorithm design techniques in eponymous
chapters: greedy algorithms, dynamic programming and the divide and
conquer paradigm. We are concerned with correctness of algorithms, rather
than, say, efficiency or the underlying data structures. For example, in the
chapter on the greedy paradigm we explore in depth the idea of a promising
partial solution, a powerful technique for proving the correctness of greedy
algorithms. We include online algorithms and competitive analysis, as well
as randomized algorithms with a section on cryptography.
Algorithms solve problems, and many of the problems in this book fall
under the category of optimization problems, whether cost minimization,
such as Kruskal’s algorithm for computing minimum cost spanning trees—
section 2.1, or profit maximization, such as selecting the most profitable
subset of activities—section 4.4.
The book is sprinkled with problems. Most problems are theoretical,
but many require the implementation of an algorithm; we suggest the
Python 3 programming language for such problems. The reader is ex-
pected to learn Python on their own; see for example, [Dierbach (2013)]
or [Downey (2015)]5 . One of the advantages of Python is that it is easy
to start writing small snippets of code that work—and most of the coding
in this book falls into the “small snippet” category. The solutions to most
problems are included in the “Answers to selected problems” at the end of
each chapter. The solutions to most of the programming exercises will be
available for download from the author’s web page6 .
The intended audience of this book are graduate and undergraduate
students in Computer Science and Mathematics. The presentation is
self-contained: the first chapter introduces the aforementioned ideas of
pre/post-conditions, loop invariants and termination. The last chapter,
Chapter 9, Mathematical Foundations, contains the necessary background
in Induction, Invariance Principle, Number Theory, Relations and Logic.
The reader unfamiliar with discrete mathematics is encouraged to start
with Chapter 9 and do all the problems therein.
5 The PDFs of earlier versions, up to 2.0.17 at the time of writing, are available for free download from Green Tea Press, http://greenteapress.com/wp/think-python.
6 http://www.msoltys.com.

This book draws on many sources. First of all, [Cormen et al. (2009)] is a
fantastic reference for anyone who is learning algorithms. I have also used as
reference the elegantly written [Kleinberg and Tardos (2006)]. A classic in
the field is [Knuth (1997)], and I base my presentation of online algorithms
on the material in [Borodin and El-Yaniv (1998)]. I have learned greedy
algorithms, dynamic programming and logic from Stephen A. Cook at the
University of Toronto. Section 9.3, a digest of relations, is based on lectures
given by Ryszard Janicki in 2008 at McMaster University. Section 9.4 is
based on logic lectures by Stephen A. Cook taught at the University of
Toronto in the 1990s.
I am grateful to Ryan McIntyre who proof-read the 3rd edition
manuscript, and updated the Python solutions, during the summer of 2017.
As stated at the beginning of this Preface, we aim to present a concise,
mathematically rigorous, introduction to the beautiful field of Algorithms. I
agree strongly with [Su (2010)] that the purpose of education is to cultivate
the “yawp”:
I sound my barbaric yawp over the root(top)s of the world!

which are words of John Keating, quoting a Walt Whitman poem ([Whit-
man (1892)]), in the movie Dead Poets Society. This yawp is the deep
yearning inside each of us for an aesthetic experience ([Scruton (2011)]).
Hopefully, this book will supply a yawp or two.

Contents

Preface vii

1. Preliminaries 1
1.1 What is correctness? . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Complexity . . . . . . . . . . . . . . . . . . . . . . 3
1.1.2 Division . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.3 Euclid . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.4 Palindromes . . . . . . . . . . . . . . . . . . . . . 7
1.1.5 Further examples . . . . . . . . . . . . . . . . . . 9
1.2 Ranking algorithms . . . . . . . . . . . . . . . . . . . . . . 11
1.2.1 PageRank . . . . . . . . . . . . . . . . . . . . . . 11
1.2.2 A stable marriage . . . . . . . . . . . . . . . . . . 14
1.2.3 Pairwise Comparisons . . . . . . . . . . . . . . . . 17
1.3 Answers to selected problems . . . . . . . . . . . . . . . . 19
1.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

2. Greedy Algorithms 29
2.1 Minimum cost spanning trees . . . . . . . . . . . . . . . . 29
2.2 Jobs with deadlines and profits . . . . . . . . . . . . . . . 37
2.3 Further examples and problems . . . . . . . . . . . . . . . 40
2.3.1 Make-change . . . . . . . . . . . . . . . . . . . . . 40
2.3.2 Maximum weight matching . . . . . . . . . . . . . 41
2.3.3 Shortest path . . . . . . . . . . . . . . . . . . . . 41
2.3.4 Huffman codes . . . . . . . . . . . . . . . . . . . . 45
2.4 Answers to selected problems . . . . . . . . . . . . . . . . 47
2.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54


3. Divide and Conquer 57


3.1 Mergesort . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.2 Multiplying numbers in binary . . . . . . . . . . . . . . . 59
3.3 Savitch’s algorithm . . . . . . . . . . . . . . . . . . . . . . 62
3.4 Further examples and problems . . . . . . . . . . . . . . . 64
3.4.1 Extended Euclid’s algorithm . . . . . . . . . . . . 64
3.4.2 Quicksort . . . . . . . . . . . . . . . . . . . . . . . 65
3.4.3 Git bisect . . . . . . . . . . . . . . . . . . . . . . . 66
3.5 Answers to selected problems . . . . . . . . . . . . . . . . 66
3.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

4. Dynamic Programming 71
4.1 Longest monotone subsequence problem . . . . . . . . . . 71
4.2 All pairs shortest path problem . . . . . . . . . . . . . . . 73
4.2.1 Bellman-Ford algorithm . . . . . . . . . . . . . . . 74
4.3 Simple knapsack problem . . . . . . . . . . . . . . . . . . 75
4.3.1 Dispersed knapsack problem . . . . . . . . . . . . 78
4.3.2 General knapsack problem . . . . . . . . . . . . . 79
4.4 Activity selection problem . . . . . . . . . . . . . . . . . . 80
4.5 Jobs with deadlines, durations and profits . . . . . . . . . 82
4.6 Further examples and problems . . . . . . . . . . . . . . . 84
4.6.1 Consecutive subsequence sum problem . . . . . . 84
4.6.2 Shuffle . . . . . . . . . . . . . . . . . . . . . . . . 85
4.7 Answers to selected problems . . . . . . . . . . . . . . . . 87
4.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

5. Online Algorithms 95
5.1 List accessing problem . . . . . . . . . . . . . . . . . . . . 96
5.2 Paging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
5.2.1 Demand paging . . . . . . . . . . . . . . . . . . . 101
5.2.2 FIFO . . . . . . . . . . . . . . . . . . . . . . . . . 104
5.2.3 LRU . . . . . . . . . . . . . . . . . . . . . . . . . 104
5.2.4 Marking algorithms . . . . . . . . . . . . . . . . . 108
5.2.5 FWF . . . . . . . . . . . . . . . . . . . . . . . . . 109
5.2.6 LFD . . . . . . . . . . . . . . . . . . . . . . . . . 110
5.3 Answers to selected problems . . . . . . . . . . . . . . . . 114
5.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

6. Randomized Algorithms 119


6.1 Perfect matching . . . . . . . . . . . . . . . . . . . . . . . 120
6.2 Pattern matching . . . . . . . . . . . . . . . . . . . . . . . 124
6.3 Primality testing . . . . . . . . . . . . . . . . . . . . . . . 126
6.4 Public key cryptography . . . . . . . . . . . . . . . . . . . 129
6.4.1 Diffie-Hellman key exchange . . . . . . . . . . . . 130
6.4.2 ElGamal . . . . . . . . . . . . . . . . . . . . . . . 133
6.4.3 RSA . . . . . . . . . . . . . . . . . . . . . . . . . 135
6.5 Further problems . . . . . . . . . . . . . . . . . . . . . . . 137
6.6 Answers to selected problems . . . . . . . . . . . . . . . . 138
6.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

7. Algorithms in Linear Algebra 149


7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 149
7.2 Gaussian Elimination . . . . . . . . . . . . . . . . . . . . 150
7.2.1 Formal proofs of correctness over Z2 . . . . . . . . 153
7.3 Gram-Schmidt . . . . . . . . . . . . . . . . . . . . . . . . 156
7.4 Gaussian lattice reduction . . . . . . . . . . . . . . . . . . 157
7.5 Computing the characteristic polynomial . . . . . . . . . . 157
7.5.1 Csanky’s algorithm . . . . . . . . . . . . . . . . . 158
7.5.2 Berkowitz’s algorithm . . . . . . . . . . . . . . . . 158
7.5.3 Proving properties of the characteristic polynomial 159
7.6 Answers to selected problems . . . . . . . . . . . . . . . . 166
7.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169

8. Computational Foundations 171


8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 171
8.2 Alphabets, strings and languages . . . . . . . . . . . . . . 172
8.3 Regular languages . . . . . . . . . . . . . . . . . . . . . . 173
8.3.1 Deterministic Finite Automaton . . . . . . . . . . 173
8.3.2 Nondeterministic Finite Automata . . . . . . . . . 176
8.3.3 Regular Expressions . . . . . . . . . . . . . . . . . 179
8.3.4 Algebraic laws for Regular Expressions . . . . . . 182
8.3.5 Closure properties of regular languages . . . . . . 184
8.3.6 Complexity of transformations and decisions . . . 184
8.3.7 Equivalence and minimization of automata . . . . 185
8.3.8 Languages that are not regular . . . . . . . . . . . 186
8.3.9 Automata on terms . . . . . . . . . . . . . . . . . 189

8.4 Context-free languages . . . . . . . . . . . . . . . . . . . . 190


8.4.1 Context-free grammars . . . . . . . . . . . . . . . 190
8.4.2 Pushdown automata . . . . . . . . . . . . . . . . . 192
8.4.3 Chomsky Normal Form . . . . . . . . . . . . . . . 195
8.4.4 CYK algorithm . . . . . . . . . . . . . . . . . . . 197
8.4.5 Pumping Lemma for CFLs . . . . . . . . . . . . . 198
8.4.6 Further observations about CFL . . . . . . . . . . 199
8.4.7 Other grammars . . . . . . . . . . . . . . . . . . . 200
8.5 Turing machines . . . . . . . . . . . . . . . . . . . . . . . 201
8.5.1 Nondeterministic TMs . . . . . . . . . . . . . . . 202
8.5.2 Encodings . . . . . . . . . . . . . . . . . . . . . . 203
8.5.3 Decidability . . . . . . . . . . . . . . . . . . . . . 204
8.5.4 Church-Turing thesis . . . . . . . . . . . . . . . . 205
8.5.5 Undecidability . . . . . . . . . . . . . . . . . . . . 206
8.5.6 Reductions . . . . . . . . . . . . . . . . . . . . . . 208
8.5.7 Rice’s theorem . . . . . . . . . . . . . . . . . . . . 209
8.5.8 Post’s Correspondence Problem . . . . . . . . . . 209
8.5.9 Undecidable properties of CFLs . . . . . . . . . . 215
8.6 Answers to selected problems . . . . . . . . . . . . . . . . 217
8.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229

9. Mathematical Foundations 231


9.1 Induction and Invariance . . . . . . . . . . . . . . . . . . . 231
9.1.1 Induction . . . . . . . . . . . . . . . . . . . . . . . 231
9.1.2 Invariance . . . . . . . . . . . . . . . . . . . . . . 234
9.2 Number Theory . . . . . . . . . . . . . . . . . . . . . . . . 236
9.2.1 Prime numbers . . . . . . . . . . . . . . . . . . . . 236
9.2.2 Modular arithmetic . . . . . . . . . . . . . . . . . 237
9.2.3 Group theory . . . . . . . . . . . . . . . . . . . . 239
9.2.4 Applications of group theory to number theory . . 240
9.3 Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
9.3.1 Closure . . . . . . . . . . . . . . . . . . . . . . . . 243
9.3.2 Equivalence relation . . . . . . . . . . . . . . . . . 244
9.3.3 Partial orders . . . . . . . . . . . . . . . . . . . . 246
9.3.4 Lattices . . . . . . . . . . . . . . . . . . . . . . . . 248
9.3.5 Fixed point theory . . . . . . . . . . . . . . . . . . 250
9.3.6 Recursion and fixed points . . . . . . . . . . . . . 254
9.4 Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
9.4.1 Propositional logic . . . . . . . . . . . . . . . . . . 256

9.4.2 First Order Logic . . . . . . . . . . . . . . . . . . 262


9.4.3 Peano Arithmetic . . . . . . . . . . . . . . . . . . 267
9.4.4 Formal verification . . . . . . . . . . . . . . . . . 268
9.5 Answers to selected problems . . . . . . . . . . . . . . . . 271
9.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292

Bibliography 295
Index 303

Chapter 1

Preliminaries

It is commonly believed that more than 70% (!) of the effort and cost of developing a complex software system is devoted, in one way or another, to error correcting.

Algorithmics, pg. 107 [Harel (1987)]

1.1 What is correctness?

To show that an algorithm is correct, we must show somehow that it does what it is supposed to do. The difficulty is that the algorithm unfolds in
time, and it is tricky to work with a variable number of steps, i.e., while-
loops. We are going to introduce a framework for proving algorithm (and
program) correctness that is called Hoare’s logic. This framework uses
induction and invariance (see Section 9.1), and logic (see Section 9.4) but
we are going to use it informally. For a formal example see Section 9.4.4.
We make two assertions, called the pre-condition and the post-condition;
by correctness we mean that whenever the pre-condition holds before the
algorithm executes, the post-condition will hold after it executes. By termi-
nation we mean that whenever the pre-condition holds, the algorithm will
stop running after finitely many steps. Correctness without termination is
called partial correctness, and correctness per se is partial correctness with
termination. All this terminology is there to connect a given problem with
some algorithm that purports to solve it. Hence we pick the pre and post
condition in a way that reflects this relationship and proves it true.


These concepts can be made more precise, but we need to introduce some standard notation. Boolean connectives: ∧ is "and," ∨ is "or" and ¬ is "not." We also use → as Boolean implication, i.e., x → y is logically equivalent to ¬x ∨ y, and ↔ is Boolean equivalence: α ↔ β expresses ((α → β) ∧ (β → α)). ∀ is the "for-all" universal quantifier, and ∃ is the "there exists" existential quantifier. We use "⇒" to abbreviate the word "implies," i.e., 2|x ⇒ x is even, while "⇏" abbreviates "does not imply."
Let A be an algorithm, and let I_A be the set of all well-formed inputs for A; the idea is that if I ∈ I_A then it "makes sense" to give I as an input to A. The concept of a "well-formed" input can also be made precise, but it is enough to rely on our intuitive understanding—for example, an algorithm that takes a pair of integers as input will not be "fed" a matrix. Let O = A(I) be the output of A on I, if it exists. Let α_A be a pre-condition and β_A a post-condition of A; if I satisfies the pre-condition we write α_A(I) and if O satisfies the post-condition we write β_A(O). Then, partial correctness of A with respect to pre-condition α_A and post-condition β_A can be stated as:

(∀I ∈ I_A)[(α_A(I) ∧ ∃O(O = A(I))) → β_A(A(I))].    (1.1)

In words: for any well formed input I, if I satisfies the pre-condition and A(I) produces an output (i.e., terminates), then this output satisfies the post-condition.

Full correctness is (1.1) together with the assertion that for all I ∈ I_A, A(I) terminates (and hence there exists an O such that O = A(I)).

Problem 1.1. Modify (1.1) to express full correctness.

A fundamental notion in the analysis of algorithms is that of a loop invariant; it is an assertion that stays true after each execution of a "while" (or "for") loop. Coming up with the right assertion, and proving it, is a creative endeavor. If the algorithm terminates, the loop invariant is an assertion that helps to prove the implication α_A(I) → β_A(A(I)).
Once the loop invariant has been shown to hold, it is used for proving
partial correctness of the algorithm. So the criterion for selecting a loop
invariant is that it helps in proving the post-condition. In general many
different loop invariants (and for that matter pre and post-conditions) may
yield a desirable proof of correctness; the art of the analysis of algorithms
consists in selecting them judiciously. We usually need induction to prove
that a chosen loop invariant holds after each iteration of a loop, and usually
we also need the pre-condition as an assumption in this proof.

1.1.1 Complexity
Given an algorithm A, and an input x, the running time of A on x is the
number of steps it takes A to terminate on input x. The delicate issue here
is to define a “step,” but we are going to be informal about it: we assume
a Random Access Machine (a machine that can access memory cells in a
single step), and we assume that an assignment of the type x ← y is a
single step, and so are arithmetical operations, and the testing of Boolean
expressions (such as x ≥ y ∧ y ≥ 0). Of course this simplification does
not reflect the true state of affairs if for example we manipulate numbers
of 4,000 bits (as in the case of cryptographic algorithms). But then we
redefine steps appropriately to the context.
We are interested in worst-case complexity. That is, given an algorithm A, we let T_A(n) be the maximal running time of A on any input x of size n. Here "size" means the number of bits in a reasonable fixed encoding of x. We tend to write T(n) instead of T_A(n) as the algorithm under discussion is given by the context. It turns out that even for simple algorithms T(n) may be very complicated, and so we settle for asymptotic bounds on T(n).
In order to provide asymptotic approximations to T(n) we introduce the Big O notation, pronounced as "big-oh." Consider functions f and g from N to R, that is, functions whose domain is the natural numbers but can range over the reals. We say that g(n) ∈ O(f(n)) if there exist constants c, n_0 ∈ N such that for all n ≥ n_0, g(n) ≤ cf(n). There is also the little o notation: g(n) ∈ o(f(n)) denotes that lim_{n→∞} g(n)/f(n) = 0. We also say that g(n) ∈ Ω(f(n)) if there exist constants c, n_0 such that for all n ≥ n_0, g(n) ≥ cf(n). Finally, we say that g(n) ∈ Θ(f(n)) if it is the case that g(n) ∈ O(f(n)) ∩ Ω(f(n)). If g(n) ∈ Θ(f(n)), then f(n) is called an asymptotically tight bound for g(n), and it means that f(n) is a very good approximation to g(n). Note that in practice we will often write g(n) = O(f(n)) instead of the formal g(n) ∈ O(f(n)); a slight but convenient abuse of notation.
For example, an^2 + bn + c = Θ(n^2), where a > 0. To see this, note that an^2 + bn + c ≤ (a + |b| + |c|)n^2, for all n ∈ N, and so an^2 + bn + c = O(n^2), where we took the absolute value of b, c because they may be negative. On the other hand, an^2 + bn + c = a((n + c_1)^2 − c_2) where c_1 = b/2a and c_2 = (b^2 − 4ac)/4a^2, so we can find a c_3 and an n_0 so that for all n ≥ n_0, c_3 n^2 ≤ a((n + c_1)^2 − c_2), and so an^2 + bn + c = Ω(n^2).

Problem 1.2. Find c_3 and n_0 in terms of a, b, c. Then prove that for k ≥ 0, ∑_{i=0}^{k} a_i n^i = Θ(n^k); this shows the simplifying advantage of the Big O.
October 27, 2017 10:59 BK032 - An Introduction to the Analysis of Algorithms (3rd Edition) soltys˙alg page 4

1.1.2 Division
What could be simpler than integer division? We are given two integers,
x, y, and we want to find the quotient and remainder of dividing x by y.
For example, if x = 25 and y = 3, then q = 8 and r = 1. Note that the q
and r returned by the division algorithm are usually denoted as div(x, y)
(the quotient) and rem(x, y) (the remainder), respectively.

Algorithm 1.1 Division

Pre-condition: x ≥ 0 ∧ y > 0 ∧ x, y ∈ N
1: q ← 0
2: r ← x
3: while y ≤ r do
4:   r ← r − y
5:   q ← q + 1
6: end while
7: return q, r
Post-condition: x = (q · y) + r ∧ 0 ≤ r < y

We propose the following assertion as the loop invariant:

x = (q · y) + r ∧ r ≥ 0,    (1.2)

and we show that (1.2) holds after each iteration of the loop. Basis case
(i.e., zero iterations of the loop—we are just before line 3 of the algorithm):
q = 0, r = x, so x = (q · y) + r and since x ≥ 0 and r = x, r ≥ 0.
Induction step: suppose x = (q · y) + r ∧ r ≥ 0 and we go once more
through the loop, and let q′, r′ be the new values of q, r, respectively (computed
in lines 4 and 5 of the algorithm). Since we executed the loop one
more time it follows that y ≤ r (this is the condition checked in line 3
of the algorithm), and since r′ = r − y, we have that r′ ≥ 0. Thus,

x = (q · y) + r = ((q + 1) · y) + (r − y) = (q′ · y) + r′,

and so q′, r′ still satisfy the loop invariant (1.2).
Now we use the loop invariant to show that (if the algorithm terminates)
the post-condition of the division algorithm holds, if the pre-condition
holds. This is very easy in this case since the loop ends when it is no
longer true that y ≤ r, i.e., when it is true that r < y. On the other
hand, (1.2) holds after each iteration, and in particular the last iteration.
Putting together (1.2) and r < y we get our post-condition, and hence
partial correctness.

To show termination we use the least number principle (LNP). We need
to relate some non-negative monotone decreasing sequence to the algorithm;
just consider r0 , r1 , r2 , . . ., where r0 = x, and ri is the value of r after the
i-th iteration. Note that ri+1 = ri − y. First, ri ≥ 0, because the algorithm
enters the while loop only if y ≤ r, and second, ri+1 < ri , since y > 0. By
LNP such a sequence “cannot go on for ever,” (in the sense that the set
{ri |i = 0, 1, 2, . . .} is a subset of the natural numbers, and so it has a least
element), and so the algorithm must terminate.
Thus we have shown full correctness of the division algorithm.
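The full correctness just proved can also be checked mechanically. Here is a minimal Python sketch of algorithm 1.1, with the loop invariant (1.2) and the post-condition verified by assert statements:

```python
def divide(x, y):
    """Quotient and remainder of x by y, as in algorithm 1.1.

    Pre-condition: x >= 0 and y > 0."""
    assert x >= 0 and y > 0
    q, r = 0, x
    while y <= r:
        r = r - y
        q = q + 1
        assert x == q * y + r and r >= 0   # loop invariant (1.2)
    assert x == q * y + r and 0 <= r < y   # post-condition
    return q, r

print(divide(25, 3))  # (8, 1)
```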

Problem 1.3. What is the running time of algorithm 1.1? That is, how
many steps does it take to terminate? Assume that assignments (lines 1
and 2), and arithmetical operations (lines 4 and 5) as well as testing “≤”
(line 3) all take one step.

Problem 1.4. Suppose that the pre-condition in algorithm 1.1 is changed
to say: “x ≥ 0 ∧ y > 0 ∧ x, y ∈ Z,” where Z = {. . . , −2, −1, 0, 1, 2, . . .}.
Is the algorithm still correct in this case? What if it is changed to the
following: “y > 0 ∧ x, y ∈ Z”? How would you modify the algorithm to work
with negative values?

Problem 1.5. Write a program that takes as input x and y, and outputs
the intermediate values of q and r, and finally the quotient and remainder
of the division of x by y.

1.1.3 Euclid
Given two positive integers a, b, their greatest common divisor, denoted
as gcd(a, b), is the greatest integer that divides both. Euclid’s algorithm,
presented as algorithm 1.2, is a procedure for finding the greatest common
divisor of two numbers. It is one of the oldest known algorithms; it appeared
in Euclid’s Elements (Book 7, Propositions 1 and 2) around 300 BC.
Note that to compute rem(n, m) in lines 1 and 3 of Euclid’s algorithm
we need to use algorithm 1.1 (the division algorithm) as a subroutine; this
is a typical “composition” of algorithms. Also note that lines 1 and 3 are
executed from left to right, so in particular in line 3 we first do m ← n,
then n ← r, and finally r ← rem(m, n). This is important for the algorithm
to work correctly, because when we are executing r ← rem(m, n), we are
using the newly updated values of m, n.

Algorithm 1.2 Euclid

Pre-condition: a > 0 ∧ b > 0 ∧ a, b ∈ Z
1: m ← a ; n ← b ; r ← rem(m, n)
2: while (r > 0) do
3:   m ← n ; n ← r ; r ← rem(m, n)
4: end while
5: return n
Post-condition: n = gcd(a, b)

To prove the correctness of Euclid’s algorithm we are going to show that
after each iteration of the while loop the following assertion holds:

m > 0, n > 0 and gcd(m, n) = gcd(a, b),    (1.3)

that is, (1.3) is our loop invariant. We prove this by induction on the
number of iterations. Basis case: after zero iterations (i.e., just before the
while loop starts—so after executing line 1 and before executing line 2) we
have that m = a > 0 and n = b > 0, so (1.3) holds trivially. Note that
a > 0 and b > 0 by the pre-condition.
For the induction step, suppose m, n > 0 and gcd(a, b) = gcd(m, n),
and we go through the loop one more time, yielding m′, n′. We want to
show that gcd(m, n) = gcd(m′, n′). Note that from line 3 of the algorithm
we see that m′ = n, n′ = r = rem(m, n), so in particular m′ = n > 0 and
n′ = r = rem(m, n) > 0, since if r = rem(m, n) were zero, the loop would
have terminated (and we are assuming that we are going through the loop
one more time). So it is enough to prove the assertion in Problem 1.6.

Problem 1.6. Show that for all m, n > 0, gcd(m, n) = gcd(n, rem(m, n)).

Now the correctness of Euclid’s algorithm follows from (1.3), since the
algorithm stops when r = rem(m, n) = 0, so m = q·n, and so gcd(m, n) = n.
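Algorithm 1.2 translates almost line for line into Python. In this sketch the built-in % operator stands in for rem(m, n), and the simultaneous tuple assignment captures the left-to-right updates of line 3:

```python
def gcd(a, b):
    """Euclid's algorithm 1.2. Pre-condition: a > 0 and b > 0."""
    assert a > 0 and b > 0
    m, n, r = a, b, a % b          # line 1
    while r > 0:
        # Line 3: the right-hand side is evaluated with the old n, r,
        # so n % r is rem(new m, new n), exactly as in the algorithm.
        m, n, r = n, r, n % r
    return n

print(gcd(12, 18))  # 6
```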

Problem 1.7. Show that Euclid’s algorithm terminates, and establish its
Big O complexity.

Problem 1.8. How would you make the algorithm more efficient? This
question is asking for simple improvements that lower the running time by
a constant factor.

Problem 1.9. Modify Euclid’s algorithm so that given integers m, n as
input, it outputs integers a, b such that am + bn = g = gcd(m, n). This is
called the extended Euclid’s algorithm. Follow this outline:

(a) Use the LNP to show that if g = gcd(m, n), then there exist a, b such
that am + bn = g.
(b) Design Euclid’s extended algorithm, and prove its correctness.
(c) The usual Euclid’s extended algorithm has a running time polynomial
in min{m, n}; show that this is the running time of your algorithm, or
modify your algorithm so that it runs in this time.

Problem 1.10. Write a program that implements Euclid’s extended algorithm.
Then perform the following experiment: run it on a random selection
of inputs of a given size, for sizes bounded by some parameter N; compute
the average number of steps of the algorithm for each input size n ≤ N,
and use gnuplot (see the footnote below) to plot the result. What does f(n)—the “average
number of steps” of Euclid’s extended algorithm on inputs of size n—look like?
Note that size is not the same as value; inputs of size n are inputs with a
binary representation of n bits.

1.1.4 Palindromes

Algorithm 1.3 tests if a string is a palindrome, which is a word that reads
the same backwards as forwards, e.g., madamimadam or racecar.
In order to present this algorithm we need to introduce a little bit of
notation. The floor and ceiling functions are defined, respectively, as follows:
⌊x⌋ = max{n ∈ Z | n ≤ x} and ⌈x⌉ = min{n ∈ Z | n ≥ x}, while [x] refers to
the “rounding” of x, defined as [x] = ⌊x + 1/2⌋.
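These functions are available in, or easily built from, Python’s standard math module; a quick sketch (note that Python’s own round() follows a different convention from the rounding just defined):

```python
import math

x = 2.5
print(math.floor(x))        # 2: the floor of x
print(math.ceil(x))         # 3: the ceiling of x
print(math.floor(x + 1/2))  # 3: rounding as defined above, [x] = floor(x + 1/2)
# Caution: Python's built-in round() uses banker's rounding, so
# round(2.5) == 2, which differs from the floor(x + 1/2) convention.
```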

Algorithm 1.3 Palindromes

Pre-condition: n ≥ 1 ∧ A[0 . . . n − 1] is a character array
1: i ← 0
2: while (i < ⌊n/2⌋) do
3:   if (A[i] ≠ A[n − i − 1]) then
4:     return F
5:   end if
6:   i ← i + 1
7: end while
8: return T
Post-condition: return T iff A is a palindrome

1 Gnuplot is a command-line driven graphing utility (http://www.gnuplot.info). Also,
Python has a plotting library, matplotlib (https://matplotlib.org).



Let the loop invariant be: after the k-th iteration, i = k and for
all j such that 0 ≤ j < k, A[j] = A[n − j − 1]. We prove that the loop
invariant holds by induction on k. Basis case: before any iterations take
place, i.e., after zero iterations, there are no j’s such that 0 ≤ j < 0, so the
second part of the loop invariant is (vacuously) true. The first part of the
loop invariant holds since i is initially set to 0.
Induction step: we know that after k iterations, i = k and A[j] = A[n − j − 1]
for all 0 ≤ j < k; after one more iteration (one that does not return F) we know
that A[k] = A[n − k − 1] and i = k + 1,
so the statement follows for all 0 ≤ j < k + 1. This proves the loop invariant.

Problem 1.11. Using the loop invariant argue the partial correctness of
the palindromes algorithm. Show that the algorithm terminates.
It is easy to manipulate strings in Python; a segment of a string is
called a slice. Consider the word palindrome; if we set the variable s to
this word,

s = 'palindrome'

then we can access different slices as follows:

print(s[0:5])    # palin
print(s[5:10])   # drome
print(s[5:])     # drome
print(s[2:8:2])  # lnr

where the notation [i:j] means the segment of the string starting from the
i-th character (and we always start counting at zero!), to the j-th character,
including the first but excluding the last. The notation [i:] means from
the i-th character, all the way to the end, and [i:j:k] means starting from
the i-th character to the j-th (again, not including the j-th itself), taking
every k-th character.
One way to understand the string delimiters is to write the indices “in
between” the characters, as well as at the beginning and at the end. For
example

0 p 1 a 2 l 3 i 4 n 5 d 6 r 7 o 8 m 9 e 10

and to notice that a slice [i:j] contains all the symbols between index i
and index j.
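Slices also accept a negative step, which walks the string backwards; in particular s[::-1] is the idiomatic way to reverse a string:

```python
s = 'palindrome'
print(s[::-1])                        # emordnilap
print('racecar'[::-1])                # racecar
# A word is a palindrome exactly when it equals its own reversal:
print('racecar' == 'racecar'[::-1])   # True
```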
Problem 1.12. Using Python’s inbuilt facilities for manipulating slices of
strings, write a succinct program that checks whether a given string is a
palindrome.

1.1.5 Further examples

In this section we provide more examples of algorithms that take integers
as input and manipulate them with a while-loop. We also present an
example of an algorithm that is very easy to describe, but for which no proof
of termination is known (algorithm 1.6). This further supports the notion
that proofs of correctness are not just pedantic exercises in mathematical
formalism, but a real certificate of validity of a given algorithmic solution.

Problem 1.13. Give an algorithm which takes as input a positive integer
n, and outputs “yes” if n = 2^k for some k (i.e., n is a power of 2), and “no”
otherwise. Prove that your algorithm is correct.

Problem 1.14. What does algorithm 1.4 compute? Prove your claim.

Algorithm 1.4 See Problem 1.14

1: x ← m ; y ← n ; z ← 0
2: while (x ≠ 0) do
3:   if (rem(x, 2) = 1) then
4:     z ← z + y
5:   end if
6:   x ← div(x, 2)
7:   y ← y · 2
8: end while
9: return z
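To experiment with algorithm 1.4 before answering Problem 1.14, here is a direct Python transcription, with div and rem rendered as // and %:

```python
def mystery(m, n):
    """A line-by-line transcription of algorithm 1.4."""
    x, y, z = m, n, 0
    while x != 0:
        if x % 2 == 1:   # rem(x, 2) = 1
            z = z + y
        x = x // 2       # div(x, 2)
        y = y * 2
    return z

# Try a few inputs and compare z against familiar functions of m and n.
print(mystery(6, 7), mystery(13, 11))
```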

Problem 1.15. What does algorithm 1.5 compute? Assume that a, b are
positive integers (i.e., assume that the pre-condition is that a, b > 0). For
which starting a, b does this algorithm terminate? In how many steps does
it terminate, if it does terminate?

Algorithm 1.5 See Problem 1.15

1: while (a > 0) do
2:   if (a < b) then
3:     (a, b) ← (2a, b − a)
4:   else
5:     (a, b) ← (a − b, 2b)
6:   end if
7: end while

Consider algorithm 1.6 given below.

Algorithm 1.6 Ulam’s algorithm

Pre-condition: a > 0
x ← a
while last three values of x not 4, 2, 1 do
  if x is even then
    x ← x/2
  else
    x ← 3x + 1
  end if
end while

This algorithm is different from all the algorithms that we have seen thus
far in that there is no known proof of termination, and therefore no known
proof of correctness. Observe how simple it is: for any positive integer a, set
x = a, and repeat the following: if x is even, divide it by 2, and if it is odd,
multiply it by 3 and add 1. Repeat this until the last three values obtained
were 4, 2, 1. For example, if a = 22, then one can check that x takes on
the following values: 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1, and
algorithm 1.6 terminates. It is conjectured that regardless of the initial
value of a, as long as a is a positive integer, algorithm 1.6 terminates. This
conjecture is known as “Ulam’s problem,”2 and despite decades of work no
one has been able to solve this problem.
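The run for a = 22 above is easy to reproduce; here is a short Python sketch of algorithm 1.6 that collects the values taken by x:

```python
def ulam(a):
    """Return the sequence of x-values produced by algorithm 1.6."""
    assert a > 0
    x, seq = a, [a]
    while seq[-3:] != [4, 2, 1]:        # stop once the last three values are 4, 2, 1
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        seq.append(x)
    return seq

print(ulam(22))
# [22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
```

Note that, exactly as the text says, there is no known proof that this loop terminates for every positive a.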
In fact, recent work shows that variants of Ulam’s problem are undecidable.
We will look at undecidability in Chapter 9, but [Lehtonen (2008)] showed that
for a very simple variant of the problem, where we let x be 3x + t for x in a
particular set At (for details see the paper), there simply is no algorithm
whatsoever that will decide for which initial a’s the new algorithm terminates
and for which it does not.
Problem 1.16. Write a program that takes a as input and displays all the
values of Ulam’s problem until it sees 4, 2, 1 at which point it stops. You
have just written an almost trivial program for which there is no proof of
termination. Now do an experiment: compute how many steps it takes to
reach 4, 2, 1 for all a < N , for some N . Any conjectures?

2 It is also called “Collatz Conjecture,” “Syracuse Problem,” “Kakutani’s Problem,”

or “Hasse’s Algorithm.” While it is true that a rose by any other name would smell
just as sweet, the preponderance of names shows that the conjecture is a very alluring
mathematical problem.

1.2 Ranking algorithms

The algorithms we have seen so far in the book are classical, but to some
extent they are “toy examples.” In this section we want to demonstrate the
power and usefulness of some very well known “grown up” algorithms. We
will focus on three different ranking algorithms. Ranking items is a primordial
human activity, and we will take a brief look at ranking procedures that
range from the ancient, such as that of Ramon Llull, a 13th-century mystic and
philosopher, through the old, such as the Marquis de Condorcet’s work discussed in
Section 1.2.3, to the state of the art in Google’s simple and elegant PageRank,
discussed in the next section.

1.2.1 PageRank
In 1945, Vannevar Bush wrote an article in the Atlantic Monthly entitled
As we may think [Bush (1945)], where he demonstrated an eerie prescience
of the ideas that became the World Wide Web. In that gem of an article
Bush pointed out that information retrieval systems are organized in a lin-
ear fashion (whether books, databases, computer memory, etc.), but that
human conscious experience exhibits what he called “an associative mem-
ory.” That is, the human mind has a semantic network, where we think of
one thing, and that reminds us of another, etc. Bush proposed a blueprint
for a human-like machine, the “Memex,” which had ur-web characteristics:
digitized human knowledge interconnected by associative links.
When in the early 1990s Tim Berners-Lee finally implemented the ideas
of Bush in the form of HTML, and ushered in the World Wide Web, the web
pages were static and the links had a navigational function. Today links
often trigger complex programs such as Perl, PHP, MySQL, and while some
are still navigational, many are transactional, implementing actions such as
“add to shopping cart,” or “update my calendar.”
As there are now billions of active web pages, how does one search them
to find relevant high-quality information? We accomplish this by ranking
those pages that meet the search criteria; pages of a good rank will appear
at the top — this way the search results will make sense to a human reader
who only has to scan the first few results to (hopefully) find what he wants.
These top pages are called authoritative pages.
In order to rank authoritative pages at the top, we make use of the fact
that the web consists not only of pages, but also of hyperlinks that connect
these pages. This hyperlink structure (which can be naturally modeled by a

directed graph) contains a lot of latent human annotation that can be used
to automatically infer authority. This is a profound observation: after all,
items ranked highly by a user are ranked so in a subjective manner; exploit-
ing the hyperlink structure allows us to connect the subjective experience
of the users with the output of an algorithm!
More specifically, by creating a hyperlink, the author gives an implicit
endorsement to a page. By mining the collective judgment expressed by
these endorsements we get a picture of the quality (or subjective perception
of the quality) of a given web page. This is very similar to our perception
of the quality of scholarly citations, where an important publication is cited
by other important publications. The question now is how do we convert
these ideas into an algorithm. A seminal answer was given by the now
famous PageRank algorithm, authored by S. Brin and L. Page, the founders
of Google — see [Brin and Page (1998)]. PageRank mines the hyperlink
structure of the web in order to infer the relative importance of the pages.
Consider Figure 1.1 which depicts a web page X, and all the pages
T1, T2, T3, . . . , Tn that point to it. Given a page X, let C(X) be the number
of distinct links that leave X, i.e., these are links anchored in X that point
to a page outside of X. Let PR(X) be the page rank of X. We also employ
a parameter d, which we call the damping factor, and which we will explain
later.

[Figure 1.1 here: pages T1, T2, T3, . . . , Tn, each with an arrow pointing to page X.]

Fig. 1.1 Computing the rank of page X.

Then, the page rank of X can be computed as follows:

PR(X) = (1 − d) + d (PR(T1)/C(T1) + PR(T2)/C(T2) + · · · + PR(Tn)/C(Tn)).    (1.4)
We now explain (1.4): the damping factor d is a constant 0 ≤ d ≤ 1, and
usually set to .85. The formula posits the behavior of a “random surfer”
who starts clicking on links on a random page, following a link out of that
page, and clicking on links (never hitting the “back button”) until the
random surfer gets bored, and starts the process from the beginning by
going to a random page. Thus, in (1.4) the (1 − d) is the probability of

choosing X at random, while PR(Ti)/C(Ti) is the probability of reaching X by
coming from Ti, normalized by the number of outlinks from Ti. We make
a slight adjustment to (1.4): we normalize it by the size of the web, N,
that is, we divide (1 − d) by N. This way, the chance of stumbling on X is
adjusted to the overall size of the web.
The problem with (1.4) is that it appears to be circular. How do we
compute PR(Ti ) in the first place? The algorithm works in stages, refining
the page rank of each page at each stage. Initially, we take the egalitarian
approach and assign each page a rank of 1/N , where N is the total number
of pages on the web. Then recompute all page ranks using (1.4) and the
initial page ranks, and continue. After each stage PR(X) gets closer to the
actual value, and in fact converges fairly quickly. There are many technical
issues here, such as knowing when to stop, and handling a computation
involving N, which may be over a trillion, but this is the PageRank algorithm
in a nutshell.
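The staged computation just described can be sketched in a few lines of Python. The adjacency matrix, damping factor, and iteration count below are illustrative choices (not from the text), and we use the N-normalized form of (1.4):

```python
def pagerank(link, d=0.85, iterations=50):
    """link[i][j] == 1 iff page i links to page j (0-1 adjacency matrix)."""
    N = len(link)
    out = [sum(row) for row in link]     # C(X): number of outlinks per page
    pr = [1.0 / N] * N                   # egalitarian initial ranks
    for _ in range(iterations):
        new = []
        for x in range(N):
            # sum of PR(Ti)/C(Ti) over the pages Ti that link to x
            s = sum(pr[t] / out[t] for t in range(N) if link[t][x])
            new.append((1 - d) / N + d * s)
        pr = new
    return pr

# A tiny 3-page web forming a cycle: 0 -> 1, 1 -> 2, 2 -> 0.
print(pagerank([[0, 1, 0], [0, 0, 1], [1, 0, 0]]))
```

On this symmetric cycle every page ends up with rank 1/3, which is a useful sanity check; a fixed iteration count stands in for the convergence test discussed above.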
Of course the web is a vast collection of heterogeneous documents,
and (1.4) is too simple a formula to capture everything, and so Google
search is a lot more complicated. For example, not all outlinks are treated
equally: a link in larger font, or emphasized with a “<STRONG>” tag, will
have more weight. Documents differ internally in terms of language, format
such as PDF, image, text, sound, video; and externally in terms of reputa-
tion of the source, update frequency, quality, popularity, and other variables
that are now taken into account by a modern search engine. The reader is
directed to [Franceschet (2011)] for more information about PageRank.
Furthermore, the presence of search engines also affects the web. As
the search engines direct traffic, they themselves shape the ranking of the
web. A similar effect in Physics is known as the observer effect, where
instruments alter the state of what they observe. As a simple example,
consider measuring the pressure in your tires: you have to let some air out,
and therefore change the pressure slightly, in order to measure it. All these
fascinating issues are the subject matter of Big Data Analytics.

Problem 1.17. Consider the following small network:


[Figure here: a directed network on the six pages A, B, C, D, E, F; the exact link structure is not recoverable from this copy.]

Compute the PageRank of the different pages in this network using (1.4)
with damping factor d = 1, that is, assuming all navigation is done by
following links (no random jumps to other pages).

Problem 1.18. Write a program which computes the ranks of all the pages
in a given network of size N . Let the network be given as a 0-1 matrix,
where a 1 in position (i, j) means that there is a link from page i to page
j. Otherwise, there is a 0 in that position. Use (1.4) to compute the page
rank, starting with a value of 1/N . You should stop when all values have
converged — does this algorithm always terminate? Also, keep track of all
the values as fractions a/b, where gcd(a, b) = 1; Python has a convenient
fractions library: import fractions.

1.2.2 A stable marriage

Suppose that we want to match interns with hospitals, or students with
colleges; both are instances of the admission process problem, and both
have a solution that optimizes, to a certain degree, the overall satisfaction
of all the parties concerned. The solution to this problem is an elegant
algorithm for the so-called “stable marriage problem,” which has been
used since the 1960s for the college admission process and for matching
interns with hospitals.
An instance of the stable marriage problem of size n consists of two
disjoint finite sets of equal size; a set of boys B = {b1, b2, . . . , bn}, and a set
of girls G = {g1, g2, . . . , gn}. Let “<i” denote the ranking of boy bi; that
is, g <i g′ means that boy bi prefers g over g′. Similarly, “<j” denotes
the ranking of girl gj. Each boy bi has such a ranking (linear ordering)
<i of G which reflects his preference for the girls that he wants to marry.
Similarly each girl gj has a ranking (linear ordering) <j of B which reflects
her preference for the boys she would like to marry.
A matching (or marriage) M is a 1-1 correspondence between B and
G. We say that b and g are partners in M if they are matched in M and
write pM (b) = g and also pM (g) = b. A matching M is unstable if there
is a pair (b, g) from B × G such that b and g are not partners in M but b
prefers g to pM (b) and g prefers b to pM (g). Such a pair (b, g) is said to
block the matching M and is called a blocking pair for M (see figure 1.2).
A matching M is stable if it contains no blocking pairs.
It turns out that there always exists a stable marriage solution to the
matching problem. This solution can be computed with the celebrated

[Figure 1.2 here: b is matched to pM(b) and g is matched to pM(g), but the crossing arrows show that b and g prefer each other.]

Fig. 1.2 A blocking pair: b and g prefer each other to their partners pM(b) and pM(g).

algorithm due to Gale and Shapley ([Gale and Shapley (1962)]) that outputs
a stable marriage for any input B, G, regardless of the rankings.3
The matching M is produced in stages Ms so that bt always has a
partner at the end of stage s, where t ≤ s. However, the partners of bt do not
get better, i.e., pMt (bt ) ≤t pMt+1 (bt ) ≤t · · · . On the other hand, for each
g ∈ G, if g has a partner at stage t, then g will have a partner at each stage
s ≥ t and the partners do not get worse, i.e., pMt (g) ≥t pMt+1 (g) ≥t . . ..
Thus, as s increases, the partners of bt become less preferable and the
partners of g become more preferable.
At the end of stage s, assume that we have produced a matching
Ms = {(b1 , g1,s ), . . . , (bs , gs,s )},
where the notation gi,s means that gi,s is the partner of boy bi after the
end of stage s.
We will say that partners in Ms are engaged. The idea is that at stage
s+1, bs+1 will try to get a partner by proposing to the girls in G in his order
of preference. When bs+1 proposes to a girl gj , gj accepts his proposal if
either gj is not currently engaged or is currently engaged to a less preferable
boy b, i.e., bs+1 <j b. In the case where gj prefers bs+1 over her current
partner b, then gj breaks off the engagement with b and b then has to search
for a new partner.

Problem 1.19. Show that each b need propose at most once to each g.

From problem 1.19 we see that we can make each boy keep a bookmark
on his list of preference, and this bookmark is only moving forward. When
a boy’s turn to choose comes, he starts proposing from the point where
his bookmark is, and by the time he is done, his bookmark moved only
forward. Note that at stage s + 1 each boy’s bookmark cannot have moved
beyond the girl number s on the list without choosing someone (after stage
3 In 2012, the Nobel Prize in Economics was awarded to Lloyd S. Shapley and Alvin E.

Roth “for the theory of stable allocations and the practice of market design,” i.e., for
the stable marriage algorithm.

Algorithm 1.7 Gale-Shapley

1: Stage 1: b1 chooses his top g and M1 ← {(b1, g)}
2: for s = 1, . . . , |B| − 1, Stage s + 1: do
3:   M ← Ms
4:   b∗ ← bs+1
5:   for b∗ proposes to all g’s in order of preference: do
6:     if g was not engaged: then
7:       Ms+1 ← M ∪ {(b∗, g)}
8:       end current stage
9:     else if g was engaged to b but g prefers b∗: then
10:      M ← (M − {(b, g)}) ∪ {(b∗, g)}
11:      b∗ ← b
12:      repeat from line 5
13:    end if
14:  end for
15: end for

s only s girls are engaged). As the boys take turns, each boy’s bookmark
is advancing, so some boy’s bookmark (among the boys in {b1 , . . . , bs+1 })
will advance eventually to a point where he must choose a girl.
The discussion in the above paragraph shows that stage s + 1 in algorithm
1.7 must end. The concern here was that the else-if case (lines 9–12) of stage s + 1 might
end up being circular. But the fact that the bookmarks are advancing shows
that this is not possible.
Furthermore, this gives an upper bound of (s + 1)² steps at stage (s + 1)
in the procedure. This means that there are n stages, and each stage takes
O(n²) steps, and hence algorithm 1.7 takes O(n³) steps altogether. The
question, of course, is what do we mean by a step? Computers operate on
binary strings, yet here the implicit assumption is that we compare numbers
and access the lists of preferences in a single step. But the cost of these
operations is negligible when compared to our idealized running time, and
so we allow ourselves this poetic license to bound the overall running time.
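A compact Python sketch of the Gale-Shapley procedure is given below. It uses the standard free-list formulation rather than the explicit stages of algorithm 1.7, and the preference lists shown are arbitrary illustrative data, but the bookmark idea is the same: each boy’s bookmark only moves forward.

```python
def stable_marriage(boys_pref, girls_pref):
    """boys_pref[b] lists girl indices in b's order of preference (best first);
    girls_pref[g] lists boy indices similarly. Returns {girl: boy}."""
    n = len(boys_pref)
    # rank[g][b] = position of boy b on girl g's list (lower = more preferred)
    rank = [{b: i for i, b in enumerate(prefs)} for prefs in girls_pref]
    bookmark = [0] * n            # each boy's bookmark only advances
    partner = {}                  # girl -> boy currently engaged
    free = list(range(n))         # boys without a partner
    while free:
        b = free.pop()
        g = boys_pref[b][bookmark[b]]
        bookmark[b] += 1
        if g not in partner:
            partner[g] = b                 # g was not engaged: accept
        elif rank[g][b] < rank[g][partner[g]]:
            free.append(partner[g])        # g upgrades; old partner resumes
            partner[g] = b
        else:
            free.append(b)                 # g rejects b; b proposes again later
    return partner

# Example with n = 3 and arbitrary preference lists.
print(stable_marriage([[0, 1, 2], [1, 0, 2], [0, 2, 1]],
                      [[1, 0, 2], [0, 1, 2], [2, 1, 0]]))
# {0: 0, 1: 1, 2: 2}: girl g is matched to boy partner[g]
```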

Problem 1.20. Show that there is exactly one girl that was not engaged
at stage s but is engaged at stage (s + 1) and that, for each girl gj that is
engaged in Ms , gj will be engaged in Ms+1 and that pMs+1 (gj ) <j pMs (gj ).
(Thus, once gj becomes engaged, she will remain engaged and her partners
will only gain in preference as the stages proceed.)

Problem 1.21. Suppose that |B| = |G| = n. Show that at the end of stage
n, Mn will be a stable marriage.

We say that a pair (b, g) is feasible if there exists a stable matching in
which b, g are partners. We say that a matching is boy-optimal if every boy
is paired with his highest-ranked feasible partner. We say that a matching is
boy-pessimal if every boy is paired with his lowest-ranked feasible partner.
Similarly, we define girl-optimal/pessimal.

Problem 1.22. Show that our version of the algorithm produces a boy-optimal
and girl-pessimal stable matching. Does this mean that the ordering
of the boys is irrelevant?

Problem 1.23. Implement algorithm 1.7.

1.2.3 Pairwise Comparisons


A fundamental application of algorithmic procedures is to choose the best
option from among many. The selection requires a ranking procedure that
guides it, but given the complexity of the world in the Information Age, the
ranking procedure and selection are often done based on an extraordinary
number of criteria. It may also require the chooser to provide a justification
for the selection and to convince someone else that the best option has
indeed been chosen. For example, imagine the scenario where a team of
doctors must decide whether or not to operate on a patient [Kakiashvili
et al. (2012)], and how important it is to both select the optimal course of
action and provide a strong justification for the final selection. Indeed, a
justification in this case may be as important as selecting the best option.
Considerable effort has been devoted to research in search engine ranking
[Easley and Kleinberg (2010)], in the case of massive amounts of highly
heterogeneous items. On the other hand, relatively little work has been
done on ranking smaller sets of highly similar (homogeneous) items, differentiated
by a large number of criteria. Today’s state of the art consists of
an assortment of domain-specific ad hoc procedures: one approach in the
medical profession [Kakiashvili et al. (2012)]; another in the world of
management [Koczkodaj et al. (2014)], etc.
Pairwise Comparisons (PC) has a surprisingly old history for a method
that to a certain degree is not widely known. The ancient beginnings are
often attributed to a thirteenth century mystic and philosopher Ramon

Llull. In 2001 a manuscript of Llull’s was discovered, titled Ars notandi,
Ars eleccionis, and Alia ars eleccionis (see [Hägele and Pukelsheim (2001);
Faliszewski et al. (2010)]), in which he discussed voting systems and prefigured
the PC method. The modern beginnings are attributed to the Marquis
de Condorcet (see [Condorcet (1785)], written four years before the French
Revolution, and nine years before he lost his head to the same). Like
Llull, Condorcet applied the PC method to analyzing voting outcomes.
Almost a century and a half later, Thurstone [Thurstone (1927)] refined
the method and employed a psychological continuum with the scale values
as the medians of the distributions of judgments.
Modern PC can be said to have started with the work of Saaty in 1977
[Saaty (1977)], who proposed a finite nine-point scale of measurements.
Furthermore, Saaty introduced the Analytic Hierarchy Process (AHP), which
is a formal method to derive ranking orders from numerical pairwise comparisons.
AHP is widely used around the world for decision making, in education,
industry, government, etc. [Koczkodaj (1993)] proposed a smaller
five-point scale, which is less fine-grained than Saaty’s nine-point scale, but
easier to use. Note that while AHP is a respectable tool for practical applications,
it is nevertheless considered by many [Dyer (1990); Janicki (2011)]
to be a flawed procedure that produces arbitrary rankings.
Let X = {x1 , x2 , . . . , xn } be a finite set of objects to be ranked. Let
aij express the numerical preference between xi and xj . The idea is that
aij estimates “how much better” xi is compared to xj . Clearly, for all i, j,
aij > 0 and aij = 1/aji . The intuition is that if aij > 1, then xi is preferred
over xj by that factor. So, for example, Apple’s Retina display has four
times the resolution of the Thunderbolt display, and so if x1 is Retina,
and x2 is Thunderbolt, we could say that the image quality of x1 is four
times better than the image quality of x2 , and so a12 = 4, and a21 = 1/4.
The assignment of values to the aij's is often done subjectively by human judges. Let A = [aij] be a pairwise comparison matrix, also known as a preference matrix. We say that a pairwise comparison matrix is consistent if for all i, j, k we have that aij ajk = aik. Otherwise, it is inconsistent.
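The consistency condition can be checked directly from the definition. The sketch below is ours, not from the book; the function name and the sample weights are assumptions for illustration:

```python
def is_consistent(A, tol=1e-9):
    """Return True iff A[i][j] * A[j][k] == A[i][k] for all triples i, j, k
    (up to floating-point tolerance)."""
    n = len(A)
    return all(abs(A[i][j] * A[j][k] - A[i][k]) <= tol
               for i in range(n) for j in range(n) for k in range(n))

# A consistent matrix built from weights w = (1, 2, 4): A[i][j] = w[i]/w[j].
w = [1.0, 2.0, 4.0]
A = [[wi / wj for wj in w] for wi in w]
print(is_consistent(A))   # True
```

Perturbing a single entry (say, setting A[0][2] to 3) breaks the triple (0, 1, 2) and the check fails, which foreshadows Theorem 1.24: consistency is exactly the existence of such weights.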

Theorem 1.24 (Saaty). A pairwise comparison matrix A is consistent if and only if there exist w1, w2, . . . , wn such that aij = wi/wj.
Problem 1.25. Note that the wi ’s that appear in Theorem 1.24 create a
ranking, in that xj is preferable to xi if and only if wi < wj . Suppose that
A is a consistent PC matrix. How can you extract the wi ’s from A?
Preliminaries 19

In practice, the subjective evaluations aij are seldom consistent, which
poses a set of problems ([Janicki and Zhai (2011)]), namely, how do we:
(i) measure inconsistency and what level is acceptable? (ii) remove incon-
sistencies, or lower them to an acceptable level? (iii) derive the values wi
starting with an inconsistent ranking A? (iv) justify a certain method for removing inconsistencies? An inconsistent matrix has value in that its degree of inconsistency measures, to some extent, the degree of subjectiveness of the referees. But we need to be able to answer the above questions before we can take advantage of an inconsistent matrix in a meaningful way.
Problem 1.26. [Bozóki and Rapcsák (2008)] propose several methods for
measuring inconsistencies in a matrix (see especially Table 1 on page 161
of their article). Consider implementing some of these measures. Can you
propose a method for resolving inconsistencies in a PC matrix?
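As a starting point, here is a sketch (ours, not from the article) of one measure from that literature, Koczkodaj's inconsistency index: for each triad it compares the direct entry aik with the indirect product aij ajk, and the index of the matrix is the worst triad; the index is 0 exactly when the matrix is consistent.

```python
from itertools import combinations

def koczkodaj_index(A):
    """max over triads i < j < k of
    min(|1 - aik/(aij*ajk)|, |1 - (aij*ajk)/aik|)."""
    worst = 0.0
    for i, j, k in combinations(range(len(A)), 3):
        direct, indirect = A[i][k], A[i][j] * A[j][k]
        worst = max(worst, min(abs(1 - direct / indirect),
                               abs(1 - indirect / direct)))
    return worst

w = [1.0, 2.0, 4.0]
A = [[wi / wj for wj in w] for wi in w]   # consistent, so the index is 0
print(koczkodaj_index(A))
```

A triad-based index also suggests a repair strategy: locate the worst triad and nudge its entries toward aij ajk = aik, repeating until the index falls below a chosen threshold.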
1.3 Answers to selected problems
Problem 1.1. (∀I ∈ IA)[∃O(O = A(I)) ∧ (αA(I) → βA(A(I)))]. This says that for any well-formed input I, there is an output, i.e., the algorithm A terminates. This is expressed with ∃O(O = A(I)). It also says that if the well-formed input I satisfies the pre-condition, stated as the antecedent αA(I), then the output satisfies the post-condition, stated as the consequent βA(A(I)).
Problem 1.2. Clearly,

    an^2 + bn + c ≥ an^2 − |b|n − |c| = n^2(a − |b|/n − |c|/n^2).    (1.5)

|b| is finite, so ∃nb ∈ N such that |b|/nb ≤ a/4. Similarly, ∃nc ∈ N such that |c|/nc^2 ≤ a/4. Let n0 = max{nb, nc}. For n ≥ n0, a − |b|/n0 − |c|/n0^2 ≥ a − a/4 − a/4 = a/2. This, combined with (1.5), gives

    (a/2)n^2 ≤ an^2 + bn + c

for all n ≥ n0. We need only assign c3 the value a/2 to complete the proof that an^2 + bn + c ∈ Ω(n^2).
Next we deal with the general polynomial with a positive leading coefficient. Let

    p(n) = Σ_{i=0}^{k} ai n^i = n^k · Σ_{i=0}^{k} ai/n^(k−i),

where ak > 0. Clearly p(n) ≤ n^k · Σ_{i=0}^{k} |ai| for all n ∈ N, so p(n) = O(n^k). Moreover, every ai is finite, so for each i ∈ N such that 0 ≤ i ≤ k − 1, ∃ni such that |ai|/n^(k−i) ≤ ak/(2k) for all n ≥ ni. Let n0 be the maximum of these ni's. p(n) can be rewritten as n^k(ak + Σ_{i=0}^{k−1} ai/n^(k−i)), so

    p(n) ≥ n^k(ak − Σ_{i=0}^{k−1} |ai|/n^(k−i)).

We have shown that for n ≥ n0, Σ_{i=0}^{k−1} |ai|/n^(k−i) ≤ k · ak/(2k) = ak/2, so let c = ak/2. For all n ≥ n0, p(n) ≥ (ak − ak/2)n^k = c · n^k. Thus, p(n) = Ω(n^k).
We have shown that p(n) ∈ O(n^k) and p(n) ∈ Ω(n^k), so p(n) = Θ(n^k).
Problem 1.3. The while loop starts with r = x, and then y is subtracted
each time; this is bounded by x (the slowest case, when y = 1). Each time
the while loop executes, it tests y ≤ r, and recomputes r, q, and so it costs
3 steps. Adding the original two assignments (q ← 0,r ← x), we get a total
of 3x + 2 steps. Note that we assume that x, y are presented in binary (the usual encoding), and that it takes log_2 x bits to encode x, so the running time is 3 · 2^(log_2 x) + 2, i.e., the running time is exponential in the length of the input! This is not a desirable running time; if x were big, say 1,000 bits long, and y small, this algorithm would take longer than the lifetime of the sun (10 billion years) to end. There are much faster algorithms for division, such as the Newton-Raphson method.
Problem 1.4. The original precondition (under which the algorithm is
correct) is:
x ≥ 0 ∧ y > 0 ∧ x, y ∈ N
where N = {0, 1, 2, . . . }. So in the first case our work has already been done
for us; any member of Z which is ≥ 0 is also in N (and any member of N is
in Z), so these preconditions are equivalent. Given that the algorithm was
correct under the original precondition, it is also correct under the new one.
In the second case it is not correct: consider x = −5 and y = 2, so initially
r = −5, and the loop would not execute, and r ≥ 0 in the post-condition
would not be true.
Problem 1.6. First observe that if u divides x and y, then for any a, b ∈ Z
u also divides ax + by. Thus, if i|m and i|n, then
i|(m − qn) = r = rem(m, n).
So i divides both n and rem(m, n), and so i has to be bounded by their
greatest common divisor, i.e., i ≤ gcd(n, rem(m, n)). As this is true
for every i, it is in particular true for i = gcd(m, n); thus gcd(m, n) ≤
gcd(n, rem(m, n)). Conversely, suppose that i|n and i|rem(m, n). Then
i|m = qn + r, so i ≤ gcd(m, n), and again, gcd(n, rem(m, n)) meets the
condition of being such an i, so we have gcd(n, rem(m, n)) ≤ gcd(m, n).
Both inequalities taken together give us gcd(m, n) = gcd(n, rem(m, n)).
Problem 1.7. Let ri be r after the i-th iteration of the loop. Note that r0 = rem(m, n) = rem(a, b) ≥ 0, and in fact every ri ≥ 0 by definition of remainder. Furthermore,

    ri+1 = rem(mi+1, ni+1) = rem(ni, ri) = rem(ni, rem(mi, ni)) < ri,

and so we have a decreasing, and yet non-negative, sequence of numbers;
by the LNP this must terminate. To establish the complexity, we count the
number of iterations of the while-loop, ignoring the swaps (so to get the
actual number of iterations we should multiply the result by two).
Suppose that m = qn + r. If q ≥ 2, then m ≥ 2n, and since m ← n, m decreases by at least a half. If q = 1, then m = n + r where 0 < r < n, and we examine two cases: if r ≤ n/2, then n decreases by at least a half as n ← r; if r > n/2, then m = n + r > n + n/2 = (3/2)n, so since m ← n, m decreases by at least a third. Thus, in all cases at least one element of the pair shrinks to at most 2/3 of its value, and so the running time is bounded by the k such that (3/2)^k = m · n, which is O(log(m · n)) = O(log m + log n). As inputs are assumed to be given in binary, we can conclude from this that the running time is linear in the size of the input.
A tighter analysis, known as Lamé's theorem, can be found in [Cormen et al. (2009)] (Theorem 31.11): for any integer k ≥ 1, if a > b ≥ 1 and b < Fk+1, where Fi is the i-th Fibonacci number (see Problem 9.5), then it takes fewer than k iterations of the while-loop (not counting swaps) to run Euclid's algorithm.
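Lamé's bound is easy to observe empirically. The sketch below is ours: it counts division steps (Python's simultaneous assignment folds in the swaps), and consecutive Fibonacci numbers give the worst case.

```python
def euclid_iterations(m, n):
    """Run Euclid's algorithm, returning (gcd, number of division steps)."""
    count = 0
    while n != 0:
        m, n = n, m % n
        count += 1
    return m, count

# Consecutive Fibonacci numbers F_11 = 89 and F_10 = 55: with b = 55 < F_11,
# Lame's theorem promises fewer than 10 iterations.
print(euclid_iterations(89, 55))   # (1, 9)
```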
Problem 1.8. When m < n, rem(m, n) = m, and so one iteration sets m′ = n and n′ = m. Thus, when m < n we execute one iteration of the loop only to swap m and n. In order to be more efficient, we could add a line 2.5 to algorithm 1.2 saying: if (m < n) then swap(m, n).
Problem 1.9. (a) We show that if d = gcd(a, b), then there exist u, v such that au + bv = d. Let S = {ax + by : ax + by > 0}; clearly S ≠ ∅. By the LNP there exists a least element g ∈ S, say g = ax0 + by0. We show that g = d. Let a = q · g + r, 0 ≤ r < g. Suppose that r > 0; then

    r = a − q · g = a − q(ax0 + by0) = a(1 − qx0) + b(−qy0).

Thus, r ∈ S, but r < g, a contradiction. So r = 0, and so g|a, and a similar argument shows that g|b. It remains to show that g is greater than any other common divisor of a, b. Suppose c|a and c|b; then c|(ax0 + by0), and so c|g, which means that c ≤ g. Thus g = gcd(a, b) = d.
(b) Euclid's extended algorithm is algorithm 1.8. Note that in the algorithm, the assignments in line 1 and line 8 are evaluated left to right.

Algorithm 1.8 Extended Euclid's algorithm
Pre-condition: m > 0, n > 0
1: a ← 0; x ← 1; b ← 1; y ← 0; c ← m; d ← n
2: loop
3: q ← div(c, d)
4: r ← rem(c, d)
5: if r = 0 then
6: stop
7: end if
8: c ← d; d ← r; t ← x; x ← a; a ← t − qa; t ← y; y ← b; b ← t − qb
9: end loop
Post-condition: am + bn = d = gcd(m, n)
We can prove the correctness of algorithm 1.8 by using the following loop invariant, which consists of four assertions:

    am + bn = d,   xm + yn = c,   d > 0,   gcd(c, d) = gcd(m, n).   (LI)
The basis case:

    am + bn = 0 · m + 1 · n = n = d
    xm + yn = 1 · m + 0 · n = m = c

both by line 1. Then d = n > 0 by the pre-condition, and gcd(c, d) = gcd(m, n) by line 1. For the induction step, assume that the “primed” variables are the
result of one more full iteration of the loop on the “un-primed” variables:

    a′m + b′n = (x − qa)m + (y − qb)n      by line 8
              = (xm + yn) − q(am + bn)
              = c − qd                     by the induction hypothesis
              = r                          by lines 3 and 4
              = d′                         by line 8

Then x′m + y′n = am + bn = d = c′, where the first equality is by line 8, the second by the induction hypothesis, and the third by line 8. Also, d′ = r by line 8, and the algorithm would have stopped in line 5 if r = 0; on the other hand, from line 4, r = rem(c, d) ≥ 0, so r > 0 and hence d′ > 0. Finally,

    gcd(c′, d′) = gcd(d, r)               by line 8
                = gcd(d, rem(c, d))       by line 4
                = gcd(c, d)               see Problem 1.6
                = gcd(m, n)               by the induction hypothesis
For partial correctness it is enough to show that if the algorithm terminates, the post-condition holds. If the algorithm terminates, then r = 0, so rem(c, d) = 0 and gcd(c, d) = gcd(d, 0) = d. On the other hand, by (LI), we have that am + bn = d, so am + bn = d = gcd(c, d) and gcd(c, d) = gcd(m, n).
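The proof can be animated by transcribing algorithm 1.8 into Python and asserting the invariant (LI) on every iteration; this sketch is ours, with the gcd(c, d) = gcd(m, n) part of (LI) left unchecked for brevity:

```python
def extended_euclid(m, n):
    assert m > 0 and n > 0                                  # pre-condition
    a, x, b, y, c, d = 0, 1, 1, 0, m, n                     # line 1
    while True:
        assert a*m + b*n == d and x*m + y*n == c and d > 0  # (LI)
        q, r = divmod(c, d)                                 # lines 3-4
        if r == 0:
            break                                           # lines 5-6
        c, d = d, r                                         # line 8, left to
        x, a = a, x - q*a                                   # right, as tuple
        y, b = b, y - q*b                                   # assignments
    assert a*m + b*n == d                                   # post-condition
    return a, b, d

print(extended_euclid(99, 78))   # (-11, 14, 3): -11*99 + 14*78 = 3
```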
(c) On pp. 292–293 in [Delfs and Knebl (2007)] there is a nice analysis
of their version of the algorithm. They bound the running time in terms of
Fibonacci numbers, and obtain the desired bound on the running time.
Problem 1.11. For partial correctness of algorithm 1.3, we show that if the pre-condition holds, and if the algorithm terminates, then the post-condition will hold. So assume the pre-condition, and suppose first that A is not a palindrome. Then there exists a smallest i0 (there exists one, and so by the LNP there exists a smallest one) such that A[i0] ≠ A[n − i0 + 1], and so, after the first i0 − 1 iterations of the while-loop, we know from the loop invariant that i = (i0 − 1) + 1 = i0, and so line 4 is executed and the algorithm returns F. Therefore, “A not a palindrome” ⇒ “return F.”
Suppose now that A is a palindrome. Then line 4 is never executed (as no such i0 exists), and so after the k = ⌊n/2⌋-th iteration of the while-loop, we know from the loop invariant that i = ⌊n/2⌋ + 1, and so the while-loop is not executed any more, and the algorithm moves on to line 8, and returns T. Therefore, “A is a palindrome” ⇒ “return T.”
Therefore, the post-condition, “return T iff A is a palindrome,” holds. Note that we have used only part of the loop invariant, namely the fact that after the k-th iteration, i = k + 1. It still holds that after the k-th iteration, A[j] = A[n − j + 1] for 1 ≤ j ≤ k, but we do not need this fact in the above proof.
To show that the algorithm terminates, let di = ⌊n/2⌋ − i. By the pre-condition, we know that n ≥ 1. The sequence d1, d2, d3, . . . is a decreasing sequence of positive integers (because i ≤ ⌊n/2⌋), so by the LNP it is finite, and so the loop terminates.
Problem 1.12. It is very easy once you realize that in Python the slice
[::-1] generates the reverse string. So, to check whether string s is a
palindrome, all we do is write s == s[::-1].
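Concretely:

```python
# The one-line palindrome test described above; s[::-1] is s reversed.
def is_palindrome(s):
    return s == s[::-1]

print(is_palindrome("madamimadam"))  # True
print(is_palindrome("algorithm"))    # False
```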
Problem 1.13. The solution is given by algorithm 1.9.

Algorithm 1.9 Powers of 2

Pre-condition: n ≥ 1
x←n
while (x > 1) do
if (2|x) then
x ← x/2
else
stop and return “no”
end if
end while
return “yes”
Post-condition: “yes” ⇐⇒ n is a power of 2
Let the loop invariant be: “x is a power of 2 iff n is a power of 2.”
We show the loop invariant by induction on the number of iterations of the main loop. Basis case: zero iterations, and since x ← n, x = n, so obviously x is a power of 2 iff n is a power of 2. For the induction step, note that if we ever get to update x, we have x′ = x/2, and clearly x′ is a power of 2 iff x is. Note that the algorithm always terminates (let x0 = n, and xi+1 = xi/2, and apply the LNP as usual).
We can now prove correctness: if the algorithm returns “yes”, then after the last iteration of the loop x = 1 = 2^0, and by the loop invariant n is a power of 2. If, on the other hand, n is a power of 2, then so is every x, so eventually x = 1, and so the algorithm returns “yes”.
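Algorithm 1.9 transcribes directly into Python (a sketch, ours, keeping the book's “yes”/“no” return values):

```python
def is_power_of_two(n):
    assert n >= 1          # pre-condition
    x = n
    while x > 1:
        if x % 2 == 0:     # the test 2 | x
            x //= 2
        else:
            return "no"    # stop and return "no"
    return "yes"

print([k for k in range(1, 20) if is_power_of_two(k) == "yes"])  # [1, 2, 4, 8, 16]
```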
Problem 1.14. Algorithm 1.4 computes the product of m and n, that is,
the returned z = m · n. A good loop invariant is x · y + z = m · n.
Problem 1.17. We start by initializing all nodes to have rank 1/6, and
then repeatedly apply the following formulas, based on (1.4):
PR(A) = PR(F )
PR(B) = PR(A)
PR(C) = PR(B)/4 + PR(E)
PR(D) = PR(B)/4
PR(E) = PR(B)/4 + PR(D)
PR(F ) = PR(B)/4 + PR(C)
The result is given in figure 1.3.
0 1 2 3 4 5 6 ... 17
A 0.17 0.17 0.21 0.25 0.29 0.18 0.20 0.22
B 0.17 0.17 0.17 0.21 0.25 0.29 0.18 0.22
C 0.17 0.21 0.25 0.13 0.14 0.16 0.19 ... 0.17
D 0.17 0.04 0.04 0.04 0.05 0.06 0.07 0.06
E 0.17 0.21 0.08 0.08 0.09 0.11 0.14 0.11
F 0.17 0.21 0.25 0.29 0.18 0.20 0.23 0.22
Total 1.00 1.00 1.00 1.00 1.00 1.00 1.00 ... 1.00
Fig. 1.3 Pagerank convergence in Problem 1.17. Note that the table is obtained with
a spreadsheet: all values are rounded to two decimal places, but column 1 is obtained
by placing 1/6 in each row, column 2 is obtained from column 1 with the formulas, and
all the remaining columns are obtained by “dragging” column 2 all the way to the end.
The values converged (more or less) in column 17.
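The spreadsheet computation can be reproduced in a few lines of Python (a sketch, ours), iterating the six formulas from the uniform start of 1/6:

```python
# One round of the update rules of Problem 1.17; pr holds the ranks of
# the nodes (A, B, C, D, E, F) as a tuple.
def iterate(pr):
    A, B, C, D, E, F = pr
    return (F, A, B/4 + E, B/4, B/4 + D, B/4 + C)

pr = (1/6,) * 6            # initialize all nodes to rank 1/6
for _ in range(60):
    pr = iterate(pr)
print([round(v, 2) for v in pr])   # approaches [0.22, 0.22, 0.17, 0.06, 0.11, 0.22]
```

Note that each update rule redistributes rank without creating or destroying any, so the total stays at 1.00, just as in the last row of the table.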
Problem 1.19. After b proposed to g for the first time, whether this
proposal was successful or not, the partners of g could have only gotten
better. Thus, there is no need for b to try again.
Problem 1.20. bs+1 proposes to the girls according to his list of preferences; some g ends up accepting, and if the g who accepted bs+1 was free, she is the new one with a partner. Otherwise, some b∗ ∈ {b1, . . . , bs} became disengaged, and we repeat the same argument. The g's disengage only if a better b proposes, so it is true that pMs+1(gj) <j pMs(gj).
Problem 1.21. Suppose that we have a blocking pair {b, g} (meaning that {(b, g′), (b′, g)} ⊆ Mn, but b prefers g to g′, and g prefers b to b′). Either b came after b′ or before. If b came before b′, then g would have been with b or someone better when b′ came around, so g would not have become engaged to b′. On the other hand, since (b′, g) is a pair, no better offer has been made to g after the offer of b′, so b could not have come after b′. In either case we get an impossibility, and so there is no blocking pair {b, g}.
Problem 1.22. To show that the matching is boy-optimal, we argue by contradiction. Let “g is an optimal partner for b” mean that among all the stable matchings, g is the best partner that b can get.
We run the Gale-Shapley algorithm, and let b be the first boy who is rejected by his optimal partner g. This means that g has already been paired with some b′, and g prefers b′ to b. Furthermore, g is at least as desirable to b′ as his own optimal partner (since the proposal of b is the first time during the run of the algorithm that a boy is rejected by his optimal partner). Since g is optimal for b, we know (by definition) that there exists some stable matching S where (b, g) is a pair. On the other hand, the optimal partner of b′ is ranked (by b′ of course) at most as high as g, and since g is taken by b, whoever b′ is paired with in S, say g′, b′ prefers g to g′. This gives us an unstable pairing, because {b′, g} prefer each other to the partners they have in S.
Yes, this means that the ordering of the boys is immaterial, because there is a unique boy-optimal matching, and it is independent of the ordering of the boys.
To show that the Gale-Shapley algorithm is girl-pessimal, we use the fact that it is boy-optimal (which we just showed). Again, we argue by contradiction. Suppose there is a stable matching S where g is paired with b, and g prefers b′ to b, where (b′, g) is the result of the Gale-Shapley algorithm. By boy-optimality, we know that in S we have (b′, g′), where g′ is not higher on the preference list of b′ than g, and since g is already paired with b, we know that g′ is actually lower. This says that S is unstable, since {b′, g} would rather be together than with their partners.
1.4 Notes
This book is about proving things about algorithms; their correctness, their
termination, their running time, etc. The art of mathematical proofs is a
difficult art to master; a very good place to start is [Velleman (2006)].
On page vii we mentioned the North-East blackout of 2003. At the time
the author was living in Toronto, Canada, on the 14th floor of an apartment
building (which really was the 13th floor, but as number 13 was outlawed
in Toronto elevators, after the 12th floor, the next button on the elevator
was 14). After the first 24 hours, the emergency generators gave out, and
we all had to climb the stairs to our floors; we would leave the building,
and scavenge the neighborhood for food and water, but as refrigeration was
out in most places, it was not easy to find fresh items. In short, we really
felt the consequences of that algorithmic error intimately.
In the footnote to Problem 1.10 we mention the Python library
matplotlib. Below we provide a simple example, plotting the functions
f (x) = x3 and h(x) = −x3 over the interval [0, 10] using this library:
import matplotlib.pyplot as plt
import numpy as np

def f(x):
    return x**3

def h(x):
    return -x**3

Input = np.arange(0, 10.1, .5)
Outputf = [f(x) for x in Input]
Outputh = [h(x) for x in Input]

plt.plot(Input, Outputf, 'r.', label='f - label')
plt.plot(Input, Outputh, 'b--', label='h - label')
plt.xlabel('This is the X axis label')
plt.ylabel('This is the Y axis label')
plt.suptitle('This is the title')
plt.legend()
plt.show()
Of course, matplotlib has lots of features; see the documentation for more
complex examples.
The palindrome madamimadam comes from Joyce’s Ulysses. We discussed
the string manipulating facilities of Python in the section on palindromes,
Section 1.1.4, but perhaps the most powerful language for string manip-
ulations is Perl. For example, suppose that we have a text that contains
hashtags which are words of characters that start with ‘#’, and we wish to
collect all those hashtags into an array. One trembles at the prospect of
having to implement this in, say, the C programming language, but in Perl
this can be accomplished in one line:
@TAGS = ($TEXT =~ m/\#([a-zA-Z0-9]+)/g);
where $TEXT contains the text with zero or more hashtags, and the array
@TAGS will be a list of all the hashtags that occur in $TEXT without the ‘#’
prefix. For the great pleasure of Perl see [Schwartz et al. (2011)].
Search engines are complex and vast software systems, and ranking
pages is not the only technical issue that has to be solved. For example,
parsing keywords to select relevant pages (pages that contain the keywords),
before any ranking is done on these pages, is also a challenging task: the
search system has to solve many problems, such as synonymy (multiple
ways to say the same thing) and polysemy (multiple meanings), and many
others. See [Miller (1995)].
Section 1.2.2 is based on §2 in [Cenzer and Remmel (2001)]. For another
presentation of the Stable Marriage problem see chapter 1 in [Kleinberg
and Tardos (2006)]. The reference to the Marquis de Condorcet in the first
sentence of section 1.2.2 comes from the PhD thesis of Yun Zhai ([Zhai
(2010)]), written under the supervision of Ryszard Janicki. In that thesis,
Yun Zhai references [Arrow (1951)] as the source of the remark regarding
the Marquis de Condorcet’s early attempts at pairwise ranking. There is a
wonderfully biting description of Condorcet and his ideas in Roger Kimball’s
The Fortunes of Permanence [Kimball (2012)], pp. 237–244. Condorcet may have given us the method of Pairwise Comparisons, but he was a
tragic figure of the Enlightenment: he promised “perfectionnement même
de l’espèce humaine” (“the absolute perfection of the human race”), but
his utopian ideas were the precursor of countless hacks who insisted on
perfecting man whether he wanted it or not, ushering in the inevitable
tyrannical excesses that are the culmination of utopian dreams.
Professor Thomas L. Saaty (Theorem 1.24) died on August 14, 2017. He was a distinguished professor at the University of Pittsburgh's Katz School of Business. The government of Poland gave Prof. Saaty a national award after its use of his theory AHP for making decisions resulted in the country initially not joining the European Union.
Chapter 2

Greedy Algorithms

It may be profitable to you to reflect, in future, that there never were greed and cunning in the world yet, that did not do too much, and overreach themselves.

D. Copperfield, [Dickens (1850)]

Greedy algorithms are algorithms prone to instant gratification. They make choices that are locally optimal, hoping that these will lead to a global optimum at the end. An example of a greedy procedure is the dispensing
of change by a convenience store clerk. In order to use the fewest coins
possible, the clerk gives out the coins of the highest value for as long as
possible, moving on to the next lower denomination when that is no longer
possible, and repeats.
Greediness is a simple strategy that works well with some computational
problems but fails with others. In the case of cash dispensing, if we have
coins of value 1, 5, 25 the greedy procedure always produces the smallest
possible number of coins, but the same is not true for 1, 10, 25. Just consider
dispensing 30, which greedily is 25, 1, 1, 1, 1, 1, while 10, 10, 10 is optimal.
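The coin-dispensing procedure, and its failure mode, can be sketched in a few lines (ours, not from the book):

```python
# Greedy change-making: spend the largest denomination for as long as
# possible, then move to the next one down.
def greedy_change(amount, denominations):
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            amount -= d
            coins.append(d)
    return coins

print(greedy_change(30, [1, 5, 25]))   # [25, 5], which is optimal
print(greedy_change(30, [1, 10, 25]))  # [25, 1, 1, 1, 1, 1], but [10, 10, 10] is better
```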

2.1 Minimum cost spanning trees

We represent finite graphs with adjacency matrices. Given a directed or undirected graph G = (V, E), its adjacency matrix is a matrix AG of size n × n, where n = |V|, such that entry (i, j) is 1 if (i, j) is an edge in G, and it is 0 otherwise.

dispute, of jealousy, and of suspicion between nations. The pretext
will vary, the excuse will be presented under plausible guises for
popular consumption, but the ultimate cause, the fundamental origin
will be the same. Imperialism economic in its origin is fostered
largely by an exaggerated spirit of nationalism.
The remarkable extent of Empire expansion in the latter part of
the nineteenth century is best illustrated by the following figures:—

Acquisitions of Territory

To the British Empire 1870–1900: 4,754,000 square miles; 88,000,000 population.
To France 1884–1900: 3,583,580 square miles; 36,553,000 population.
To Germany 1884–1900: 1,026,220 square miles; 16,687,100 population.

But perhaps the chief and most frequent cause of war is war
itself. In the Balkan Peninsula—where, whenever the fighting has
ceased, nothing approaching a satisfactory settlement has ever been
concluded—this is specially true. Eight or nine of the wars recorded
concern the Balkans. Or take the Crimean War. Sir Spencer Walpole
says:
“From 1856 to 1878 the Continent of Europe was afflicted with
five great wars—the Franco-Austrian War of 1859; the Danish of
1864; the Austro-Prussian of 1866; the Franco-German of 1870 and
the Russo-Turkish of 1878: all of which can be lineally traced to the
war of 1854,” and one at least of those wars, as we know, sowed the
seeds of future war. The war that is concluded by a dictated peace,
the war that leaves a sense of grievance and unsatisfied though
legitimate claims, the war that inspires a lasting desire for revenge
inevitably leads to future war. Wars are never aggressive but always
defensive on the part of those who are responsible for waging them.
Wars are never defensive but always aggressive on the part of those
against whom they are waged. The Ministers and monarchs do the
quarrelling, the people believe the version they are told and obey.
The people do the fighting and make the sacrifice, the Ministers and
monarchs do the treaty-making without consulting them. The
people’s part is one of valiance, endurance, and suffering; the part
of the Ministers and monarchs is one too often marred by failure and
frequently disfigured by intrigue and deception.
Cast your eye through these forty-two very brief records of wars.
Think of the valour, the determination, and the heroism of the
people, be they soldiers or civilians. Consider the noble part played
by those who without question obeyed what they were led to believe
was their country’s call. And then look on the other side at the
results—the ineptitude of the statesmen, the patched-up treaties,
the worthless agreements, the wars that led to further wars, the
failure to secure a settlement after the soldier had done his part, and
the unnecessary prolongation of conflicts when agreement might
have been reached by the exercise of a little wisdom and foresight.
The contrast is remarkable between the actions on the battlefield
and the intrigue in the council chamber. Blood has been spilt, lives
lost, and victories won often without any positive advantage being
gained in the final result.
The wars are arranged according to date. Some were long-
drawn-out struggles, others sharp conflicts of a few months. The
number of men engaged in any battle and the casualties if they
could be tabulated would no doubt seem comparatively small to our
modern eyes. The total loss of life in the Crimean War amounted to
about 600,000 men.[1] An estimate of the loss in killed and wounded
in some of the other great battles may be given as follows: Solferino
(1859), 31,500; Chickamauga (1863), 35,100; Gettysburg (1863),
37,000; Königgrätz (1866), 26,894; Vionville (1870), 32,800;[2]
Gravelotte (1870), 30,000; Plevna (1877), 19,000; The Boer War
(1899–1902): British losses, 28,603; Boers killed, 4,000, prisoners
40,000;[3] Mukden (1905), 131,000.

[1] The Cambridge Modern History, vol. xii.
[2] An article in Current History, by General Duryee, of the U.S.A. Army.
[3] Encyclopædia Britannica.

Wars to the generation that experiences them are unmixed evils
engendering hatred and evil passions and bringing in their train loss,
suffering, destruction, and impoverishment, all of which are acutely
felt. The next generation inherits their consequences in the
shape of high taxation and the attempts to mend and reconstruct
the dislocated national life. The horror has gone but the memory
remains. To the succeeding generation they become episodes read
of in the cold pages of history, and then at last they fade into mere
names—a battle with a vaguely remembered date.
Each war is terminated by a treaty. The main provisions of a few
additional treaties which were not concluded after wars are also
given. In but few instances have war treaties been observed, and in
several cases they were not worth the paper they were written on.
Treaties are signed and ratified by statesmen without the sanction or
approval, and sometimes without the knowledge, of their people.
The statesmen enter the council chamber as individuals bent on
securing advantages at other people’s expense, and ready by
bargain and intrigue to attain their ends. These instruments
therefore are expressions of temporary expediency sometimes
exacted after defeat, sometimes the result of compromise and
generally inconclusive. If treaties are to become sacred obligations
founded on international justice and respected not merely by
changing governments but by whole nations, the spirit in which they
are drawn up and the method by which they are concluded must be
radically altered. The existence of secret treaties and engagements
has proved to be one of the gravest dangers to European peace.
There are a large number of conventions which have been
concluded between nations, by which social intercourse with regard
to such matters as post and telegraph is facilitated, and of late years
arbitration treaties between one Power and another have multiplied
very rapidly. This is the one advance in which the efforts of
diplomacy have borne fruit. The important treaty of Arbitration
between Great Britain and the United States is the only one of these
treaties mentioned in the list. Agreements with regard to the
conduct of war have been made, such as the Geneva Convention of
1864 and 1906, and the Hague Declarations of 1899 and 1907, but
they have proved to a large extent futile.
Treaties are generally concluded for an undefined period, and
lapse owing to deliberate breach or altered circumstances. But no
people, and it may safely be said no government, was precisely
aware which of the innumerable treaties were still in force, and what
actually in given circumstances its obligations were.
There may be many instances in which a nation may look back
with pride at the victory of its arms and the achievements of its
generals. There are but few instances in which a nation can look
back with pride at the advantages gained by treaties of peace and at
the achievements of its diplomatists. From the Treaty of Vienna,
1815, to the Treaty of Bukarest, 1913, the record of so-called
settlements is not one to inspire confidence in the efficacy of warfare
or in the methods of diplomacy.
After the termination of the Napoleonic Wars in 1815 there were
great hopes of an era of peace. But two antagonistic elements
existed in Europe which were bound sooner or later to come into
open conflict. On the one hand the French Revolution had
engendered in the peoples a spirit of unrest, of discontent, of
impatience with the unfettered monarchical system, and at the same
time confidence in their power and hope of success in the
destruction of tyranny and arbitrary government. It was in fact the
rise of democracy. On the other side the despotic governments were
ready to co-operate, and, under the guidance of Metternich,
endeavour to repress and exterminate the movement for the
establishment of constitutional government, and for the expression
of nationalist and democratic aspirations. Two waves of revolution
passed over Europe in 1830 and 1848, and by the middle of the
century the reactionaries could no longer hold their own, and many
states had been freed from despotism and oppression.
In the latter part of the century, however, as has already been
pointed out, fresh causes for war arose in the competitive ambition
of governments for imperial expansion. Wars became more frequent
and extended into remote regions of the world which had become
accessible. There are forty-seven wars mentioned in these records;
of these thirteen took place before the Crimean War, which is about
the middle of the period, and thirty-three after. In twenty-one out of
the forty-five wars Great Britain was either directly or indirectly
concerned as a belligerent. There were only two wars in which
Christian nations were not primarily involved.
It must be remembered that in no country had the peoples any
voice in the determination of policy so far as international affairs
were concerned. While for brevity’s sake the usual phraseology is
adopted, and such expressions used as “France decided,” “Russia
refused,” “Italy intended,” etc., etc., in no case does the name of the
country mean the people or indeed anything more than a monarch
and a few statesmen. Although constitutional monarchy became
established during the period in many countries, and with it,
parliamentary government, the idea of diplomacy, foreign policy,
international engagements, and treaties being under parliamentary
supervision and control, had not yet been suggested.
The solution of the vast problem of the avoidance of war in the
future, if it rests alone on the wisdom of sovereigns and statesmen,
is not likely, judging by the experience of the past, to be reached
very rapidly. In the meanwhile a careful examination of the events of
recent history is a necessary preparation for all who want to dispel
the strange but prevalent delusion that force of arms settles
international disputes, and this record may be useful as a manual for
reference.
THE GREEK WAR
1821–1828

Belligerents:
Greece and later Russia, France and Great Britain.
Turkey.

Cause:
Nationalist aspirations had been growing in Greece ever since the
French Revolution. These were encouraged by an intellectual revival
and commercial development. The tyranny and cruel oppression of
Turkish misgovernment under Sultan Mahmud gradually inflamed
public opinion.

Occasion:
The Hetæria Philike, a secret society, inaugurated the rebellion.
The first move was made in Moldavia, where it completely failed.
This was followed by a revolt in the Morea and the islands of the
Ægean and subsequently in Central Greece.

Course of the War:
There were wholesale massacres on both sides, notably the
destruction by the Turks of the inhabitants of Chios. The Turks were
unable to suppress the revolt. The Greeks under Kolokotrones
exhausted the Turkish army, and assistance was sought by the
Sultan from Mehemet Ali, of Egypt, who in 1823 conquered Crete
and defeated the Greeks at Psara. The Egyptians and Turks entered
Morea. Missolonghi fell after a year’s siege, and the garrison in the
Acropolis at Athens surrendered in June 1827. By a treaty signed at
London in July 1827 Great Britain, France, and Russia decided to
intervene as mediators. The Turks rejected mediation. The victory of
the allied fleets at Navarino took place on October 20, 1827.

Political Result:
By the Treaty of Adrianople, September 1829 (see also p. 17)
Greece became autonomous under the supreme sovereignty of the
Sultan. Shortly afterwards the Powers agreed that Greece should be
established as an absolutely independent kingdom, but without
Crete or Samos, and with a frontier line drawn from the mouth of
the River Achelous to a spot near Thermopylæ. Prince Leopold of
Saxe-Coburg accepted the crown, but renounced it after a few
months. Prince Otho of Bavaria accepted it in February 1833. After a
revolution in 1862 he was succeeded by Prince George of Denmark
in 1863, the father of King Constantine who was deposed in 1917.

Remarks:
Greece was confined within far too narrow limits, with which she
could not rest contented. The enmity between Russia and Turkey
was in no way mitigated, and Russian ambitions remained
unsatisfied.
RUSSO-TURKISH WAR
1828–1829

Belligerents:
Russia.
Turkey.

Cause:
By the Treaty of London, July 1827, Great Britain, Russia, and
France undertook to put an end to the conflict in the East, which had
arisen out of the Greek struggle for independence. After the victory
of Navarino, Canning died and Great Britain was inactive. By the
Treaty of Akerman, October 1826, the points of contention between
Russia and Turkey had been settled in Russia’s favour. But the
Russian Government ardently desired a contest with Turkey.

Occasion:
The Sultan Mahmud issued a proclamation which was a direct
challenge to Russia, and followed it by a levy of troops and the
expulsion of Christians from Constantinople. On April 26, 1828,
Russia replied by declaring war.
Course of the War:
The Russians occupied the Roumanian principalities and crossed
the Danube. At first the Turks had considerable successes in the
Dobrudja, and the Russians, who suffered enormous losses, were
only able to capture Varna. Reserves were brought up during the
winter. After fierce resistance the Turks were routed near Shumla. In
July 1829 the Russians crossed the Balkans, the fleet co-operated in
the Black Sea, and the army began to march on Constantinople. In
Asia, Kars and Erzeroum having fallen into Russian hands, the
Sultan yielded.

Political Result:
By the Treaty of Adrianople, September 14, 1829, Russian
ascendancy in the principalities of the Danube was permanently
assured, and the whole of the Caucasus was converted into Russian
territory. The Straits were declared free and open to merchant ships
of all Powers. The Turkish Government gave its adhesion to the
Treaty of London regulating the Greek frontier.

Remarks:
Russia’s hold over Turkey was greatly strengthened, but the
establishment of an absolutely independent kingdom in Greece was
finally secured.
WAR BETWEEN HOLLAND AND
BELGIUM
1830–1839

Belligerents:
Holland.
Belgium, France, Great Britain.

Cause:
The Kingdom of the Netherlands was set up by the Congress of
Vienna in 1815, but from the first there was discord between the
two states of the kingdom. King William was a Dutchman and a
Protestant. Holland, although the smaller of the two states, had a
permanent majority in the Chamber. Public offices and appointments
were filled by Dutchmen. The hatred of Dutch rule grew, and with it
a desire for separation.

Occasion:
The success of the French Revolution of 1830 led to an outbreak
in Brussels, and Belgian insurgents fought against the Dutch
soldiers. The Powers met in London, and Belgium was declared a
separate kingdom. Leopold of Saxe-Coburg was offered the crown
and entered Brussels as King of the Belgians on July 21, 1831; at
the same time the Dutch prepared for an invasion.

Course of the War:
On August 9, 1831, the Belgians were routed in an encounter
with the Dutch, but on the intervention of the French army King
William withdrew. The Conference in London drew up a treaty, but
King William refused to come to terms and retained possession of
Antwerp. In November a combined British and French fleet sailed for
the coast of Holland, and a French army laid siege to Antwerp. The
Dutch garrison capitulated on December 23, 1832, and the town was
handed over to the Belgians and the French troops withdrew. Still
the Dutch refused to yield and held two forts which enabled them to
command the navigation of the Scheldt. Not till March 1838 did
Holland signify her readiness to accept the treaty.

Political Result:
The Conference throughout had endeavoured to come to an
agreement; Austria, Prussia, and Russia sympathized with Holland;
but eventually the final Treaty of London was signed on April 19,
1839. Luxemburg was divided, and also the district of Maestricht.
The Scheldt was declared open to the commerce of both countries.
The national debt was divided, and the five Powers guaranteed the
independence and neutrality of Belgium.

Remarks:
As independent states the two countries lived side by side
amicably. The neutrality of Belgium was reaffirmed in 1870 on the
outbreak of the Franco-German War.
Leopold was succeeded in 1865 by his son Leopold II, under
whose sovereignty the Congo Free State was placed in 1885. King
Albert succeeded his uncle in 1909.
WAR IN PORTUGAL AND SPAIN
1830–1839

Belligerents:
Followers of Don Miguel.
Portuguese Constitutionalists.
Spaniards.
Carlists.
and for a period France and Great Britain.

Cause:
Don Miguel, the head of the reactionary party, was betrothed to
Donna Maria, daughter of Pedro of Brazil. In 1828, disregarding his
professions of loyalty to the Constitution, he declared himself King of
Portugal. The Constitutionalists, who were adherents of Donna
Maria, were crushed. She received no assistance from outside to
deal with the usurper.
In Spain Don Carlos, the King’s brother, was the representative of
the reactionary party. King Ferdinand, before his death, issued the
Pragmatic Sanction, which enabled his daughter to succeed to the
throne. The King was weak and unpopular, and Don Carlos had a
great following in Spain.

Occasion: