Python Demacus
James Hetherington
1 Introduction to Python 14
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.1.1 Why teach Python? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.1.2 Why Python? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.1.3 Why write programs for research? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.1.4 Sensible Input - Reasonable Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.2 Many kinds of Python . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.2.1 The Jupyter Notebook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.2.2 Typing code in the notebook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2.3 Python at the command line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2.4 Python scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2.5 Python Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.3 An example Python data analysis notebook . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.3.1 Why write software to manage your data and plots? . . . . . . . . . . . . . . . . . . . 18
1.3.2 Importing Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.3.3 Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.3.4 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.3.5 Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.3.6 More complex functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.3.7 Checking our work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.3.8 Displaying results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.3.9 Manipulating Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.3.10 Creating Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.3.11 Looping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.3.12 Plotting graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.3.13 Composing Program Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.4 Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.4.1 Variable Assignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.4.2 Reassignment and multiple labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
1.4.3 Objects and types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
1.4.4 Reading error messages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
1.4.5 Variables and the notebook kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
1.5 Using Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
1.5.1 Calling functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
1.5.2 Using methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
1.5.3 Functions are just a type of object! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
1.5.4 Getting help on functions and methods . . . . . . . . . . . . . . . . . . . . . . . . . . 45
1.5.5 Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
1.6 Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
1.6.1 Floats and integers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
1.6.2 Strings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
1.6.3 Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
1.6.4 Ranges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
1.6.5 Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
1.6.6 Unpacking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
1.7 Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
1.7.1 Checking for containment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
1.7.2 Mutability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
1.7.3 Tuples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
1.7.4 Memory and containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
1.7.5 Identity vs Equality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
1.8 Dictionaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
1.8.1 The Python Dictionary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
1.8.2 Keys and Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
1.8.3 Immutable Keys Only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
1.8.4 No guarantee of order (before Python 3.7) . . . . . . . . . . . . . . . . . . . . . . . . . 60
1.8.5 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
1.9 Data structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
1.9.1 Nested Lists and Dictionaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
1.9.2 Exercise: a Maze Model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
1.9.3 Solution: my Maze Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
1.10 Control and Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
1.10.1 Turing completeness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
1.10.2 Conditionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
1.10.3 Else and Elif . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
1.10.4 Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
1.10.5 Automatic Falsehood . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
1.10.6 Indentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
1.10.7 Pass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
1.10.8 Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
1.10.9 Iterables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
1.10.10 Dictionaries are Iterables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
1.10.11 Unpacking and Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
1.10.12 Break, Continue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
1.10.13 Classroom exercise: the Maze Population . . . . . . . . . . . . . . . . . . . . . . . . . 71
1.10.14 Solution: counting people in the maze . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
1.11 Comprehensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
1.11.1 The list comprehension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
1.11.2 Selection in comprehensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
1.11.3 Comprehensions versus building lists with append: . . . . . . . . . . . . . . . . . . . . 73
1.11.4 Nested comprehensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
1.11.5 Dictionary Comprehensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
1.11.6 List-based thinking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
1.11.7 Classroom Exercise: Occupancy Dictionary . . . . . . . . . . . . . . . . . . . . . . . . 74
1.11.8 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
1.12 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
1.12.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
1.12.2 Default Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
1.12.3 Side effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
1.12.4 Early Return . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
1.12.5 Unpacking arguments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
1.12.6 Sequence Arguments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
1.12.7 Keyword Arguments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
1.13 Using Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
1.13.1 Import . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
1.13.2 Why bother? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
1.13.3 Importing from modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
1.13.4 Import and rename . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
1.14 Defining your own classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
1.14.1 User Defined Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
1.14.2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
1.14.3 Constructors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
1.14.4 Object-oriented design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
1.14.5 Object oriented design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
1.14.6 Exercise: Your own solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
2.8.4 Figures and Axes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
2.8.5 Saving figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
2.8.6 Subplots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
2.8.7 Versus plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
2.8.8 Learning More . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
2.9 NumPy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
2.9.1 The Scientific Python Trilogy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
2.9.2 Limitations of Python Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
2.9.3 The NumPy array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
2.9.4 Elementwise Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
2.9.5 arange and linspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
2.9.6 Multi-Dimensional Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
2.9.7 Array Datatypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
2.9.8 Broadcasting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
2.9.9 Newaxis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
2.9.10 Dot Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
2.9.11 Record Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
2.9.12 Logical arrays, masking, and selection . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
2.9.13 Numpy memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
2.10 The Boids! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
2.10.1 Flocking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
2.10.2 Setting up the Boids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
2.10.3 Flying in a Straight Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
2.10.4 Matplotlib Animations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
2.10.5 Fly towards the middle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
2.10.6 Avoiding collisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
2.10.7 Match speed with nearby birds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
2.11 Recap: Understanding the “Greengraph” Example . . . . . . . . . . . . . . . . . . . . . . . . 142
2.11.1 Classes for Greengraph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
2.11.2 Invoking our code and making a plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
2.12 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
2.12.1 What’s version control? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
2.12.2 Why use version control? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
2.12.3 Git != GitHub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
2.12.4 How do we use version control? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
2.12.5 What is version control? (Team version) . . . . . . . . . . . . . . . . . . . . . . . . . . 147
2.12.6 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
2.13 Practising with Git . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
2.13.1 Example Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
2.13.2 Programming and documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
2.13.3 Markdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
2.13.4 Displaying Text in this Tutorial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
2.13.5 Setting up somewhere to work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
2.14 Solo work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
2.14.1 Configuring Git with your name and email . . . . . . . . . . . . . . . . . . . . . . . . 149
2.14.2 Initialising the repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
2.15 Solo work with Git . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
2.15.1 A first example file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
2.15.2 Telling Git about the File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
2.15.3 Our first commit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
2.15.4 Configuring Git with your editor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
2.15.5 Git log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
2.15.6 Hash Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
2.15.7 Nothing to see here . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
2.15.8 Unstaged changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
2.15.9 Staging a file to be included in the next commit . . . . . . . . . . . . . . . . . . . . . 153
2.15.10 The staging area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
2.15.11 Message Sequence Charts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
2.15.12 The Levels of Git . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
2.15.13 Review of status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
2.15.14 Carry on regardless . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
2.15.15 Commit with a built-in-add . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
2.15.16 Review of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
2.15.17 Git Solo Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
2.16 Fixing mistakes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
2.16.1 Referring to changes with HEAD and ^ . . . . . . . . . . . . . . . . . . . . . . . . . . 159
2.16.2 Reverting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
2.16.3 Conflicted reverts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
2.16.4 Review of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
2.16.5 Antipatch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
2.16.6 Rewriting history . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
2.16.7 A new lie . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
2.16.8 Using reset to rewrite history . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
2.16.9 Covering your tracks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
2.16.10 Resetting the working area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
2.17 Publishing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
2.17.1 Sharing your work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
2.17.2 Creating a repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
2.17.3 Paying for GitHub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
2.17.4 Adding a new remote to your repository . . . . . . . . . . . . . . . . . . . . . . . . . . 166
2.17.5 Remotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
2.17.6 Playing with GitHub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
2.18 Working with multiple files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
2.18.1 Some new content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
2.18.2 Git will not by default commit your new file . . . . . . . . . . . . . . . . . . . . . . . . 168
2.18.3 Tell git about the new file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
2.19 Changing two files at once . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
2.20 Collaboration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
2.20.1 Form a team . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
2.20.2 Giving permission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
2.20.3 Obtaining a colleague’s code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
2.20.4 Nonconflicting changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
2.20.5 Rejected push . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
2.20.6 Merge commits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
2.20.7 Nonconflicted commits to the same file . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
2.20.8 Conflicting commits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
2.20.9 Resolving conflicts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
2.20.10 Commit the resolved file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
2.20.11 Distributed VCS in teams with conflicts . . . . . . . . . . . . . . . . . . . . . . . . . . 182
2.20.12 The Levels of Git . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
2.21 Editing directly on GitHub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
2.21.1 Editing directly on GitHub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
2.22 Social Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
2.22.1 GitHub as a social network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
2.23 Fork and Pull . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
2.23.1 Different ways of collaborating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
2.23.2 Forking a repository on GitHub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
2.23.3 Pull Request . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
2.23.4 Practical example - Team up! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
2.23.5 Some Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
2.24 Branches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
2.24.1 Publishing branches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
2.24.2 Find out what is on a branch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
2.24.3 Merging branches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
2.24.4 Cleaning up after a branch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
2.24.5 A good branch strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
2.24.6 Grab changes from a branch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
2.25 Git Stash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
2.26 Tagging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
2.27 Working with generated files: gitignore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
2.28 Git clean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
2.29 Hunks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
2.29.1 Git Hunks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
2.29.2 Interactive add . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
2.30 GitHub pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
2.30.1 Yaml Frontmatter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
2.30.2 The gh-pages branch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
2.30.3 UCL layout for GitHub pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
2.31 Working with multiple remotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
2.31.1 Distributed versus centralised . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
2.31.2 Referencing remotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
2.32 Hosting Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
2.32.1 Hosting a local server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
2.32.2 Home-made SSH servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
2.33 SSH keys and GitHub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
2.34 Rebasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
2.34.1 Rebase vs merge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
2.34.2 An example rebase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
2.34.3 Fast Forwards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
2.34.4 Rebasing pros and cons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
2.35 Squashing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
2.35.1 Using rebase to squash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
2.36 Debugging With Git Bisect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
2.36.1 An example repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
2.36.2 Bisecting manually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
2.36.3 Solving Manually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
2.36.4 Solving automatically . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
3 Testing 211
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
3.1.1 A few reasons not to do testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
3.1.2 A few reasons to do testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
3.1.3 Not a panacea . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
3.1.4 Tests at different scales . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
3.1.5 Legacy code hardening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
3.1.6 Testing vocabulary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
3.1.7 Branch coverage: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
3.2 How to Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
3.2.1 Equivalence partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
3.2.2 Using our tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
3.2.3 Boundary cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
3.2.4 Positive and negative tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
3.2.5 Raising exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
3.3 Testing frameworks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
3.3.1 Why use testing frameworks? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
3.3.2 Common testing frameworks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
3.3.3 pytest framework: usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
3.4 Testing with floating points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
3.4.1 Floating points are not reals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
3.4.2 Comparing floating points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
3.4.3 Comparing vectors of floating points . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
3.5 Classroom exercise: energy calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
3.5.1 Diffusion model in 1D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
3.5.2 Starting point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
3.5.3 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
3.5.4 Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
3.6 Mocking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
3.6.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
3.6.2 Mocking frameworks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
3.6.3 Recording calls with mock . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
3.6.4 Using mocks to model test resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
3.6.5 Testing functions that call other functions . . . . . . . . . . . . . . . . . . . . . . . . . 235
3.7 Using a debugger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
3.7.1 Stepping through the code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
3.7.2 Using the python debugger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
3.7.3 Basic navigation: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
3.7.4 Breakpoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
3.7.5 Post-mortem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
3.8 Continuous Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
3.8.1 Test servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
3.8.2 Memory and profiling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
3.9 Recap example: Monte-Carlo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
3.9.1 Problem: Implement and test a simple Monte-Carlo algorithm . . . . . . . . . . . . . 238
3.9.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
4.5.1 Distribution tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
4.5.2 Laying out a project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
4.5.3 Using setuptools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
4.5.4 Convert the script to a module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
4.5.5 Write an executable script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
4.5.6 Specify dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
4.5.7 Specify entry point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
4.5.8 Installing from GitHub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
4.5.9 Write a readme file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
4.5.10 Write a license file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
4.5.11 Write a citation file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
4.5.12 Define packages and executables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
4.5.13 Write some unit tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
4.5.14 Developer Install . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
4.5.15 Distributing compiled code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
4.6 Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
4.6.1 Documentation is hard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
4.6.2 Prefer readable code with tests and vignettes . . . . . . . . . . . . . . . . . . . . . . . 266
4.6.3 Comment-based Documentation tools . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
4.7 Example of using Sphinx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
4.7.1 Write some docstrings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
4.7.2 Set up sphinx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
4.7.3 Define the root documentation page . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
4.7.4 Run sphinx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
4.7.5 Sphinx output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
4.8 Doctest - testing your documentation is up to date . . . . . . . . . . . . . . . . . . . . . . . . 269
4.9 Software Project Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
4.9.1 Software Engineering Stages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
4.9.2 Requirements Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
4.9.3 Functional and architectural design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
4.9.4 Waterfall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
4.9.5 Why Waterfall? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
4.9.6 Problems with Waterfall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
4.9.7 Software is not made of bricks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
4.9.8 Software is not made of bricks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
4.9.9 Software is not made of bricks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
4.9.10 The Agile Manifesto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
4.9.11 Agile is not absence of process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
4.9.12 Elements of an Agile Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
4.9.13 Ongoing Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
4.9.14 Iterative Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
4.9.15 Continuous Delivery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
4.9.16 Self-organising teams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
4.9.17 Agile in Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
4.9.18 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
4.10 Managing software issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
4.10.1 Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
4.10.2 Some Issue Trackers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
4.10.3 Anatomy of an issue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
4.10.4 Reporting a Bug . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
4.10.5 Owning an issue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
4.10.6 Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
4.10.7 Resolutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
4.10.8 Bug triage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
4.10.9 The backlog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
4.10.10 Development cycles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
4.10.11 GitHub issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
4.11 Software Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
4.11.1 Reuse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
4.11.2 Disclaimer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
4.11.3 Choose a licence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
4.11.4 Open source doesn’t stop you making money . . . . . . . . . . . . . . . . . . . . . . . 276
4.11.5 Plagiarism vs promotion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
4.11.6 Your code is good enough . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
4.11.7 Worry about licence compatibility and proliferation . . . . . . . . . . . . . . . . . . . 276
4.11.8 Academic licence proliferation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
4.11.9 Licences for code, content, and data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
4.11.10 Licensing issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
4.11.11 Permissive vs share-alike . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
4.11.12 Academic use only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
4.11.13 Patents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
4.11.14 Use as a web service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
4.11.15 Library linking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
4.11.16 Citing software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
4.11.17 Referencing the licence in every file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
4.11.18 Choose a licence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
4.11.19 Open source does not equal free maintenance . . . . . . . . . . . . . . . . . . . . . . . 278
5 Construction 279
5.1 Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
5.1.1 Construction vs Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
5.1.2 Low-level design decisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
5.1.3 Algorithms and structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
5.1.4 Architectural design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
5.1.5 Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
5.1.6 Literate programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
5.1.7 Programming for humans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
5.1.8 Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
5.2 Coding Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
5.2.1 One code, many layouts: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
5.2.2 So many choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
5.2.3 Layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
5.2.4 Layout choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
5.2.5 Naming Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
5.2.6 Hungarian Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
5.2.7 Newlines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
5.2.8 Syntax Choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
5.2.9 Syntax choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
5.2.10 Coding Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
5.2.11 Lint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
5.3 Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
5.3.1 Why comment? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
5.3.2 Bad Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
5.3.3 Comments which are obvious . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
5.3.4 Comments which could be replaced by better style . . . . . . . . . . . . . . . . . . . . 285
5.3.5 Comments vs expressive code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
5.3.6 Comments which belong in an issue tracker . . . . . . . . . . . . . . . . . . . . . . . . 285
5.3.7 Comments which only make sense to the author today . . . . . . . . . . . . . . . . . . 286
5.3.8 Comments which are unpublishable . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
5.3.9 Good commenting: pedagogical comments . . . . . . . . . . . . . . . . . . . . . . . . . 286
5.3.10 Good commenting: reasons and definitions . . . . . . . . . . . . . . . . . . . . . . . . 286
5.4 Refactoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
5.4.1 Refactoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
5.4.2 A word from the Master . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
5.4.3 List of known refactorings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
5.4.4 Replace magic numbers with constants . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
5.4.5 Replace repeated code with a function . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
5.4.6 Change of variable name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
5.4.7 Separate a complex expression into a local variable . . . . . . . . . . . . . . . . . . . . 288
5.4.8 Replace loop with iterator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
5.4.9 Replace hand-written code with library code . . . . . . . . . . . . . . . . . . . . . . . 289
5.4.10 Replace set of arrays with array of structures . . . . . . . . . . . . . . . . . . . . . . . 289
5.4.11 Replace constants with a configuration file . . . . . . . . . . . . . . . . . . . . . . . . . 289
5.4.12 Replace global variables with function arguments . . . . . . . . . . . . . . . . . . . . . 290
5.4.13 Merge neighbouring loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
5.4.14 Break a large function into smaller units . . . . . . . . . . . . . . . . . . . . . . . . . . 290
5.4.15 Separate code concepts into files or modules . . . . . . . . . . . . . . . . . . . . . . . . 291
5.4.16 Refactoring is a safe way to improve code . . . . . . . . . . . . . . . . . . . . . . . . . 291
5.4.17 Tests and Refactoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
5.4.18 Refactoring Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
6 Design 293
6.1 Object-Oriented Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
6.1.1 Design processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
6.1.2 Design and research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
6.2 Recap of Object-Orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
6.2.1 Classes: User defined types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
6.2.2 Declaring a class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
6.2.3 Object instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
6.2.4 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
6.2.5 Constructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
6.2.6 Member Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
6.3 Object refactorings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
6.3.1 Replace ad-hoc structure with user defined classes . . . . . . . . . . . . . . . . . . 294
6.3.2 Replace function with a method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
6.3.3 Replace method arguments with class members . . . . . . . . . . . . . . . . . . . . . . 295
6.3.4 Replace global variable with class and member . . . . . . . . . . . . . . . . . . . . . . 296
6.3.5 Object Oriented Refactoring Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
6.4 Class design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
6.4.1 UML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
6.4.2 YUML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
6.4.3 Information Hiding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
6.4.4 Property accessors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
6.4.5 Class Members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
6.5 Inheritance and Polymorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
6.5.1 Object-based vs Object-Oriented . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
6.5.2 Inheritance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
6.5.3 Ontology and inheritance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
6.5.4 Inheritance in python . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
6.5.5 Inheritance terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
6.5.6 Inheritance and constructors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
6.5.7 Inheritance UML diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
6.5.8 Aggregation vs Inheritance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
6.5.9 Polymorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
6.5.10 Polymorphism and Inheritance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
6.5.11 Undefined Functions and Polymorphism . . . . . . . . . . . . . . . . . . . . . . . . . . 305
6.5.12 Refactoring to Polymorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
6.5.13 Interfaces and concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
6.5.14 Interfaces in UML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
6.5.15 Further UML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
6.6 Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
6.6.1 Class Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
6.6.2 Design Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
6.6.3 Reading a pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
6.6.4 Introducing Some Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
6.6.5 Supporting code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
6.7 Factory Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
6.7.1 Factory UML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
6.7.2 Factory Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
6.7.3 Agent model constructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
6.7.4 Agent derived classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
6.7.5 Refactoring to Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
6.8 Builder Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
6.8.1 Builder example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
6.8.2 Builder preferred to complex constructor . . . . . . . . . . . . . . . . . . . . . . . . . 313
6.8.3 Using a builder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
6.8.4 Avoid staged construction without a builder. . . . . . . . . . . . . . . . . . . . . . . . 314
6.9 Strategy Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
6.9.1 Strategy pattern example: sunspots . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
6.9.2 Sunspot cycle has periodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
6.9.3 Years are not constant length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
6.9.4 Strategy Pattern for Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
6.9.5 Uneven time series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
6.9.6 Too many classes! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
6.9.7 Apply the strategy pattern: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
6.9.8 Results: Deviation of year length from average . . . . . . . . . . . . . . . . . . . . . . 319
6.10 Model-View-Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
6.10.1 Separate graphics from science! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
6.10.2 Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
6.10.3 View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
6.10.4 Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
6.10.5 Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
6.11 Exercise: Refactoring The Bad Boids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
6.11.1 Bad_Boids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
6.11.2 Your Task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
6.11.3 A regression test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
6.11.4 Invoking the test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
6.11.5 Make the regression test fail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
6.11.6 Start Refactoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
7.2.4 Lambda Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
7.2.5 Using functional programming for numerical methods . . . . . . . . . . . . . . . . . . 332
7.3 Iterators and Generators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
7.3.1 Iterators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
7.3.2 Defining Our Own Iterable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
7.3.3 A shortcut to iterables: the __iter__ method . . . . . . . . . . . . . . . . . . . . . . . 337
7.3.4 Generators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
7.4 Related Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
7.4.1 Context managers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
7.4.2 Decorators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
7.5 Supplementary material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
7.5.1 Test generators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
7.5.2 Negative test contexts managers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
7.5.3 Negative test decorators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
7.6 Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
7.6.1 Create your own Exception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
7.6.2 Managing multiple exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
7.6.3 Design with Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
14 Marks Scheme 409
Chapter 1
Introduction to Python
1.1 Introduction
1.1.1 Why teach Python?
• In this first session, we will introduce Python.
• This course is about programming for data analysis and visualisation in research.
• It’s not mainly about Python.
• But we have to use some language.
• Sensible input
• Reasonable output
In [1]: ### Make plot
%matplotlib inline
import math
import numpy as np
import matplotlib.pyplot as plt
We’re going to be mainly working in the Jupyter notebook in this course. To get hold of a copy of the
notebook, follow the setup instructions shown on the course website, or use the installation in Desktop@UCL
(available in the teaching cluster rooms or anywhere).
Jupyter notebooks consist of discussion cells, referred to as “markdown cells”, and “code cells”, which
contain Python. This document has been created using Jupyter notebook, and this very cell is a Markdown
Cell.
Code cell inputs are numbered, and show the output below.
Markdown cells contain text which uses a simple format to achieve pretty layout, for example, to obtain:
bold, italic
• Bullet
Quote
We write:
**bold**, *italic*
* Bullet
> Quote
• When in a cell, press escape to leave it. When moving around outside cells, press return to enter.
• Outside a cell:
• Use arrow keys to move around.
• Press b to add a new cell below the cursor.
• Press m to turn a cell from code mode to markdown mode.
• Press shift+enter to calculate the code in the block.
• Press h to see a list of useful keys in the notebook.
• Inside a cell:
• Press tab to suggest completions of variables. (Try it!)
In [3]: %%bash
# Above line tells Python to execute this cell as *shell code*
# not Python, as if we were in a command line
# This is called a 'cell magic'
In [4]: %%bash
echo "print(2 * 4)" > eight.py
python eight.py
We can make the script directly executable (on Linux or Mac) by inserting a hashbang (https://en.wikipedia.org/wiki/Shebang_(Unix)) and setting the permissions to execute.
Writing fourteen.py
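The %%writefile cell that created fourteen.py is not reproduced in this extract; given the hashbang discussion above and the output of 14 further down, it presumably contained something like the following sketch (the exact contents are an assumption):

#!/usr/bin/env python
# Hashbang line: tells the shell to run this file with the Python
# interpreter when it is executed directly as ./fourteen.py
print(2 * 7)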
In [6]: %%bash
chmod u+x fourteen.py
./fourteen.py
14
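The %%writefile cell that created fourteen.py isn't shown above; a minimal sketch of a script consistent with the output 14 is:
#!/usr/bin/env python
# The hashbang line above tells the shell which interpreter should run this file
print(2 * 7)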
import math
import numpy as np
import matplotlib.pyplot as plt
def make_figure():
theta = np.arange(0, 4 * math.pi, 0.1)
eight = plt.figure()
axes = eight.add_axes([0, 0, 1, 1])
axes.plot(0.5 * np.sin(theta), np.cos(theta / 2))
return eight
Writing draw_eight.py
In a real example, we could edit the file on disk using a program such as Atom or VS code.
In [8]: import draw_eight # Load the library file we just wrote to disk
There is a huge variety of available packages to do pretty much anything. For instance, try import
antigravity.
The %% at the beginning of a cell marks a “cell magic”. There’s a large list of them available, and you can
create your own.
1.3.2 Importing Libraries
Research programming is all about using libraries: tools other people have written that do many
cool things. By combining them we can feel really powerful while doing minimal work ourselves. The Python
syntax to import someone else’s library is “import”.
Now, if you try to follow along with this example in a Jupyter notebook, you’ll probably find that you
just get an error message.
You’ll need to wait until we’ve covered installation of additional Python libraries later in the course, then
come back to this and try again. For now, just follow along and try to get a feel for how programming for
data-focused research works.
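The geocoding examples below use the geopy library. The import cell isn't preserved here, but it would be along these lines (a sketch); a Yandex geocoder object is then constructed from it, as the traceback below shows:
import geopy  # A Python library for investigating geographic information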
---------------------------------------------------------------------------
<ipython-input-2-7fa4fab2949a> in <module>
      1 geocoder = geopy.geocoders.Yandex(lang="en_US")
----> 2 geocoder.geocode('Cambridge', exactly_one=False)
GeocoderInsufficientPrivileges: HTTP Error 403: Forbidden
(The geocoding service returned HTTP 403 Forbidden when this document was built, so the geocoding cells in this section show errors rather than real results.)
The results come out as a list inside a list: [Name, [Latitude, Longitude]]. Programs represent data
in a variety of different containers like this.
1.3.3 Comments
Code after a # symbol doesn’t get run.
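For example (a sketch of the cell whose output appears below):
print("This runs")      # print("This doesn't run")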
This runs
1.3.4 Functions
We can wrap code up in a function, so that we can repeatedly get just the information we want.
Defining functions which put together code to make a more complex task seem simple from the outside
is the most important thing in programming. The output of the function is stated by “return”; the input
comes in in brackets after the function name:
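The definition of geolocate, recovered from the traceback frames in the original notebook, wraps the geocoder call:
def geolocate(place):
    # Return the [latitude, longitude] of the first match for a place name
    return geocoder.geocode(place, exactly_one=False)[0][1]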
In [5]: geolocate('Cambridge')
---------------------------------------------------------------------------
GeocoderInsufficientPrivileges: HTTP Error 403: Forbidden
1.3.5 Variables
We can store a result in a variable:
---------------------------------------------------------------------------
<ipython-input-6-e33090ca51bc> in <module>
----> 1 london_location = geolocate("London")
      2 print(london_location)
GeocoderInsufficientPrivileges: HTTP Error 403: Forbidden
import requests  # A library for making web (HTTP) requests

def request_map_at(lat, long, satellite=False, zoom=10, size=(400, 400)):
    # (function name, signature and base URL assumed from how the map is requested below)
    base = "https://static-maps.yandex.ru/1.x/?"
    params = dict(
        z = zoom,
        size = str(size[0]) + "," + str(size[1]),
        ll = str(long) + "," + str(lat),
        l = "sat" if satellite else "map",
        lang = "en_US"
    )
    return requests.get(base, params=params)
https://static-maps.yandex.ru/1.x/?z=10&size=400%2C400&ll=-0.1275%2C51.5072&l=sat&lang=en_US
We can write automated tests so that if we change our code later, we can check the results are still
valid.
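For example, a very simple check on the function sketched above (request_map_at is the assumed name) might be:
london_map = request_map_at(51.5072, -0.1275, satellite=True)
assert "static-maps.yandex.ru" in london_map.url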
Our previous function comes back with an Object representing the web request. In object oriented
programming, we use the . operator to get access to a particular property of the object, in this case, the
actual image at that URL is in the content property. It’s a big file, so I’ll just get the first few chars:
In [11]: map_response.content[0:20]
Out[11]: b'\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x01\x00H\x00H\x00\x00'
1.3.8 Displaying results
I’ll need to do this a lot, so I’ll wrap up our previous function in another function, to save on typing.
I can use a library that comes with Jupyter notebook to display the image. Being able to work with
variables which contain images, or documents, or any other weird kind of data, just as easily as we can with
numbers or letters, is one of the really powerful things about modern programming languages like Python.
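A sketch of such a wrapper (map_at is the name used in the rest of this chapter; request_map_at is the name assumed above):
def map_at(*args, **kwargs):
    # Pass all arguments through, and return just the raw image bytes
    return request_map_at(*args, **kwargs).content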
---------------------------------------------------------------------------
<ipython-input-13-d69fb2ebab72> in <module>
1 import IPython
----> 2 map_png = map_at(*london_location)
---------------------------------------------------------------------------
<ipython-input-14-6c68dc8fc8d1> in <module>
----> 1 print("The type of our map result is actually a: ", type(map_png))
In [15]: IPython.core.display.Image(map_png)
---------------------------------------------------------------------------
<ipython-input-15-ac666969e449> in <module>
----> 1 IPython.core.display.Image(map_png)
---------------------------------------------------------------------------
<ipython-input-16-9124101779f1> in <module>
----> 1 IPython.core.display.Image(map_at(*geolocate("New Delhi")))
GeocoderInsufficientPrivileges: HTTP Error 403: Forbidden
In [17]: from io import BytesIO # A library to convert between files and strings
import numpy as np # A library to deal with matrices
import imageio # A library to deal with images
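The cell defining the green-detection code isn't preserved; a sketch of the idea (the exact comparison used originally is unknown) is:
def is_green(pixels):
    # Compare the green layer, element by element, with the red and blue layers
    greener_than_red = pixels[:, :, 1] > pixels[:, :, 0]
    greener_than_blue = pixels[:, :, 1] > pixels[:, :, 2]
    return greener_than_red & greener_than_blue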
This code has assumed we have our pixel data for the image as a 400 × 400 × 3 3-d matrix, with each of
the three layers being red, green, and blue pixels.
We find out which pixels are green by comparing, element-by-element, the middle (green, number 1) layer
to the top (red, number 0) and bottom (blue, number 2) layers.
Now we just need to parse in our data, which is a PNG image, and turn it into our matrix format:
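A sketch of that step (imageio does the parsing):
def count_green_in_png(data):
    pixels = imageio.imread(BytesIO(data))   # parse the PNG bytes into a pixel matrix
    return np.sum(is_green(pixels))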
---------------------------------------------------------------------------
<ipython-input-20-1df2d88d5544> in <module>
----> 1 print(count_green_in_png( map_at(*london_location) ))
We’ll also need a function to get an evenly spaced set of places between two endpoints:
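A sketch of such a function, using numpy's linspace:
def location_sequence(start, end, steps):
    # start and end are (latitude, longitude) pairs; return `steps` evenly spaced points
    lats = np.linspace(start[0], end[0], steps)
    longs = np.linspace(start[1], end[1], steps)
    return np.vstack([lats, longs]).transpose()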
---------------------------------------------------------------------------
<ipython-input-22-ed53afe2376e> in <module>
----> 1 location_sequence(geolocate("London"), geolocate("Cambridge"), 5)
GeocoderInsufficientPrivileges: HTTP Error 403: Forbidden
    # (tail of a function, show_green_in_png below, which writes the computed green mask back out as PNG bytes)
    buffer = BytesIO()
    result = imageio.imwrite(buffer, out, format='png')
    return buffer.getvalue()
In [24]: IPython.core.display.Image(
map_at(*london_location, satellite=True)
)
---------------------------------------------------------------------------
<ipython-input-24-84d560d5795b> in <module>
1 IPython.core.display.Image(
----> 2 map_at(*london_location, satellite=True)
3 )
In [25]: IPython.core.display.Image(
show_green_in_png(
map_at(
*london_location,
satellite=True)))
---------------------------------------------------------------------------
<ipython-input-25-ba1938f843d6> in <module>
2 show_green_in_png(
3 map_at(
----> 4 *london_location,
5 satellite=True)))
1.3.11 Looping
We can loop over each element in our list of coordinates, and get a map for that place:
---------------------------------------------------------------------------
<ipython-input-26-b3877d0d28cf> in <module>
----> 1 for location in location_sequence(geolocate("London"),
      2                                    geolocate("Birmingham"),
      3                                    4):
      4     IPython.core.display.display(
      5         IPython.core.display.Image(map_at(*location)))
GeocoderInsufficientPrivileges: HTTP Error 403: Forbidden
In [27]: [count_green_in_png(map_at(*location))
for location in
location_sequence(geolocate("London"),
geolocate("Birmingham"),
10)]
---------------------------------------------------------------------------
<ipython-input-27-b5d8a75e50ec> in <module>
      1 [count_green_in_png(map_at(*location))
      2  for location in
----> 3  location_sequence(geolocate("London"),
      4                    geolocate("Birmingham"),
      5                    10)]
GeocoderInsufficientPrivileges: HTTP Error 403: Forbidden
In [29]: plt.plot([count_green_in_png(map_at(*location))
for location in
location_sequence(geolocate("London"),
geolocate("Birmingham"),
10)])
---------------------------------------------------------------------------
GeocoderInsufficientPrivileges: HTTP Error 403: Forbidden
From a research perspective, of course, this code needs a lot of work. But I hope the power of using
programming is clear.
By putting these together, we can make a function which can plot this graph automatically for any two
places:
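A sketch of such a function, composed from the pieces above:
def green_between(start, end, steps):
    # Count green pixels in maps of evenly spaced places between two named locations
    return [count_green_in_png(map_at(*location))
            for location in location_sequence(geolocate(start),
                                              geolocate(end),
                                              steps)]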
---------------------------------------------------------------------------
<ipython-input-31-eea54607c6ae> in <module>
----> 1 plt.plot(green_between('New York', 'Chicago', 20))
GeocoderInsufficientPrivileges: HTTP Error 403: Forbidden
And that’s it! We’ve covered, very very quickly, the majority of the python language, and much of the
theory of software engineering.
Now we’ll go back, carefully, through all the concepts we touched on, and learn how to use them properly
ourselves.
1.4 Variables
1.4.1 Variable Assignment
When we generate a result, the answer is displayed, but not kept anywhere.
In [1]: 2 * 3
Out[1]: 6
If we want to get back to that result, we have to store it. We put it in a box, with a name on the box.
This is a variable.
In [2]: six = 2 * 3
In [3]: print(six)
If we look for a variable that hasn’t ever been defined, we get an error.
In [4]: print(seven)
---------------------------------------------------------------------------
<ipython-input-4-25c0309421cb> in <module>
----> 1 print(seven)
In [6]: print(nothing)
None
In [7]: type(None)
Out[7]: NoneType
30
In [10]: print(scary)
216
1.4.2 Reassignment and multiple labels
But here’s the real scary thing: it seems like we can put something else in that box:
In [11]: scary = 25
In [12]: print(scary)
25
Note that the data that was there before has been lost.
No labels refer to it any more - so it has been “Garbage Collected”! We might imagine something pulled
out of the box, and thrown on the floor, to make way for the next occupant.
In fact, though, it is the label that has moved. We can see this because we can have more than one label
referring to the same box:
In [13]: name = "Eric"
In [14]: nom = name
In [15]: print(nom)
Eric
In [16]: print(name)
Eric
In [19]: print(nom)
Idle
So we can now develop a better understanding of our labels and boxes: each box is a piece of space (an
address) in computer memory. Each label (variable) is a reference to such a place.
When the number of labels on a box (“variables referencing an address”) gets down to zero, then the
data in the box cannot be found any more.
After a while, the language’s “Garbage collector” will wander by, notice a box with no labels, and throw
the data away, making that box available for more data.
Old fashioned languages like C and Fortran don’t have Garbage collectors. So a memory address with
no references to it still takes up memory, and the computer can more easily run out.
So when I write:
In [20]: name = "Michael"
The following things happen:
1. A new text object is created, and an address in memory is found for it.
2. The variable “name” is moved to refer to that address.
3. The old address, containing “James”, now has no labels.
4. The garbage collector frees the memory at the old address.
Supplementary materials: There’s an online python tutor which is great for visualising memory and
references. Try the scenario we just looked at.
Labels are contained in groups called “frames”: our frame contains two labels, ‘nom’ and ‘name’.
1.4.3 Objects and types
An object, like name, has a type. In the online python tutor example, we see that the objects have type
“str”. str means a text object: Programmers call these ‘strings’.
In [21]: type(name)
Out[21]: str
Depending on its type, an object can have different properties: data fields inside the object.
Consider a Python complex number for example:
In [22]: z = 3 + 1j
We can see what properties and methods an object has available using the dir function:
In [23]: dir(z)
Out[23]: ['__abs__',
'__add__',
'__bool__',
'__class__',
'__delattr__',
'__dir__',
'__divmod__',
'__doc__',
'__eq__',
'__float__',
'__floordiv__',
'__format__',
'__ge__',
'__getattribute__',
'__getnewargs__',
'__gt__',
'__hash__',
'__init__',
'__init_subclass__',
'__int__',
'__le__',
'__lt__',
'__mod__',
'__mul__',
'__ne__',
'__neg__',
'__new__',
'__pos__',
'__pow__',
'__radd__',
'__rdivmod__',
'__reduce__',
'__reduce_ex__',
'__repr__',
'__rfloordiv__',
'__rmod__',
'__rmul__',
'__rpow__',
'__rsub__',
'__rtruediv__',
'__setattr__',
'__sizeof__',
'__str__',
'__sub__',
'__subclasshook__',
'__truediv__',
'conjugate',
'imag',
'real']
You can see that there are several methods whose name starts and ends with __ (e.g. __init__): these
are special methods that Python uses internally, and we will discuss some of them later on in this course.
The others (in this case, conjugate, imag and real) are the methods and fields through which we can interact
with this object.
In [24]: type(z)
Out[24]: complex
In [25]: z.real
Out[25]: 3.0
In [26]: z.imag
Out[26]: 1.0
In [27]: z.wrong
---------------------------------------------------------------------------
<ipython-input-27-0cc5a8ef8f99> in <module>
----> 1 z.wrong
In [28]: z2 = 5 - 6j
print("Gets to here")
print(z.wrong)
print("Didn't get to here")
Gets to here
---------------------------------------------------------------------------
<ipython-input-28-f92e96af0737> in <module>
1 z2 = 5 - 6j
2 print("Gets to here")
----> 3 print(z.wrong)
4 print("Didn't get to here")
But in the above, we can see that the error happens on the third line of our code cell.
We can also see that the error message, ‘complex’ object has no attribute ‘wrong’, tells us something
important. Even if we don’t understand the rest, this is useful for debugging!
In [29]: number = 0
In [30]: print(number)
If I change a variable:
In [32]: print(number)
In [1]: len("pneumonoultramicroscopicsilicovolcanoconiosis")
Out[1]: 45
Here we have “called a function”.
The function len takes one input, and has one output. The output is the length of whatever the input
was.
Programmers also call function inputs “parameters” or, confusingly, “arguments”.
Here’s another example:
In [2]: sorted("Python")
Out[2]: ['P', 'h', 'n', 'o', 't', 'y']
Which gives us back a list of the letters in Python, sorted alphabetically (more specifically, according to
their Unicode order).
The input goes in brackets after the function name, and the output emerges wherever the function is
used.
So we can put a function call anywhere we could put a “literal” object or a variable.
In [3]: len('Jim') * 8
Out[3]: 24
In [4]: x = len('Mike')
y = len('Bob')
z = x + y
In [5]: print(z)
7
---------------------------------------------------------------------------
<ipython-input-9-328ac508ff1b> in <module>
----> 1 x.upper()
If you try to use a method that doesn’t exist, you get an error:
In [10]: x.wrong
---------------------------------------------------------------------------
<ipython-input-10-29321da545fa> in <module>
----> 1 x.wrong
Methods and properties are both kinds of attribute, so both are accessed with the dot operator.
Objects can have both properties and methods:
In [11]: z = 1 + 5j
In [12]: z.real
Out[12]: 1.0
In [13]: z.conjugate()
Out[13]: (1-5j)
In [14]: z.conjugate
Out[14]: <function complex.conjugate>
1.5.4 Getting help on functions and methods
The ‘help’ function, when applied to a function, gives help on it!
In [23]: help(sorted)
A custom key function can be supplied to customize the sort order, and the
reverse flag can be set to request the result in descending order.
The ‘dir’ function, when applied to an object, lists all its attributes (properties and methods):
In [24]: dir("Hexxo")
Out[24]: ['__add__',
'__class__',
'__contains__',
'__delattr__',
'__dir__',
'__doc__',
'__eq__',
'__format__',
'__ge__',
'__getattribute__',
'__getitem__',
'__getnewargs__',
'__gt__',
'__hash__',
'__init__',
'__init_subclass__',
'__iter__',
'__le__',
'__len__',
'__lt__',
'__mod__',
'__mul__',
'__ne__',
'__new__',
'__reduce__',
'__reduce_ex__',
'__repr__',
'__rmod__',
'__rmul__',
'__setattr__',
'__sizeof__',
'__str__',
'__subclasshook__',
'capitalize',
'casefold',
'center',
'count',
'encode',
'endswith',
'expandtabs',
'find',
'format',
'format_map',
'index',
'isalnum',
'isalpha',
'isascii',
'isdecimal',
'isdigit',
'isidentifier',
'islower',
'isnumeric',
'isprintable',
'isspace',
'istitle',
'isupper',
'join',
'ljust',
'lower',
'lstrip',
'maketrans',
'partition',
'replace',
'rfind',
'rindex',
'rjust',
'rpartition',
'rsplit',
'rstrip',
'split',
'splitlines',
'startswith',
'strip',
'swapcase',
'title',
'translate',
'upper',
'zfill']
Most of these are confusing methods beginning and ending with __, part of the internals of python.
Again, just as with error messages, we have to learn to read past the bits that are confusing, to the bit
we want:
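The call that produced the next output was presumably something like:
"Hexxo".replace("xx", "ll")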
Out[25]: 'Hello'
In [26]: help("FIsh".replace)
replace(old, new, count=-1, /) method of builtins.str instance
Return a copy with all occurrences of substring old replaced by new.
count
Maximum number of occurrences to replace.
-1 (the default value) means replace all occurrences.
If the optional argument count is given, only the first count occurrences are
replaced.
1.5.5 Operators
Now that we know that functions are a way of taking a number of inputs and producing an output, we
should look again at what happens when we write:
In [27]: x = 2 + 3
In [28]: print(x)
This is just a pretty way of calling an “add” function. Things would be more symmetrical if add were
actually written
x = +(2, 3)
Where ‘+’ is just the name of the adding function.
In Python, these functions do exist, but they’re actually methods of the first input: they’re the mysterious
__ functions we saw earlier (two underscores).
In [29]: x.__add__(7)
Out[29]: 12
Out[30]: 'HelloGoodbye'
Out[31]: [2, 3, 4, 5, 6]
In [32]: 7 - 2
Out[32]: 5
---------------------------------------------------------------------------
<ipython-input-33-5b64b789ad11> in <module>
----> 1 [2, 3, 4] - [5, 6]
In [34]: [2, 3, 4] + 5
---------------------------------------------------------------------------
<ipython-input-34-67b01a5c24ab> in <module>
----> 1 [2, 3, 4] + 5
To add a single element to a list, you need to wrap it in a list of its own first, e.g. [2, 3, 4] + [5]:
Out[35]: [2, 3, 4, 5]
Just as in Mathematics, operators have a built-in precedence, with brackets used to force an order of
operations:
In [36]: print(2 + 3 * 4)
14
In [37]: print((2 + 3) * 4)
20
1.6 Types
We have seen that Python objects have a ‘type’:
In [1]: type(5)
Out[1]: int
1.6.1 Floats and integers
Python has two core numeric types, int for integer, and float for real number.
In [2]: one = 1
ten = 10
one_float = 1.0
ten_float = 10.
In [4]: tenth
Out[4]: 0.1
In [5]: type(one)
Out[5]: int
In [6]: type(one_float)
Out[6]: float
The meaning of an operator varies depending on the type it is applied to! (And on the python version.)
Out[8]: 0.1
<class 'float'>
In [10]: type(tenth)
Out[10]: float
The single-slash operator / always performs real (floating point) division, even on integers. The double-slash
operator // performs integer division, rounding down:
In [11]: 10 // 3
Out[11]: 3
In [12]: 10.0 / 3
Out[12]: 3.3333333333333335
In [13]: 10 / 3.0
Out[13]: 3.3333333333333335
So if I have two integer variables, and I want the float division, I need to change the type first.
There is a function for every type name, which is used to convert the input to an output of the desired
type.
In [14]: x = float(5)
type(x)
Out[14]: float
In [15]: 10 / float(3)
Out[15]: 3.3333333333333335
I lied when I said that the float type was a real number. It’s actually a computer representation of
a real number called a “floating point number”. Representing √2 or 1/3 perfectly would be impossible in a
computer, so we use a finite amount of memory to do it.
In [16]: N = 10000.0
sum([1 / N] * int(N))
Out[16]: 0.9999999999999062
Supplementary material:
1.6.2 Strings
Python has a built in string type, supporting many useful methods.
In [18]: print(full.upper())
TERRY JONES
As for float and int, the name of a type can be used as a function to convert between types:
Out[19]: (10, 1)
11
101.0
We can remove extraneous material from the start and end of a string:
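For example (a sketch of the kind of call involved):
"    Hello  ".strip()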
Out[22]: 'Hello'
Note that you can write strings in Python using either single (' ... ') or double (" ... ") quote
marks. The two ways are equivalent. However, if your string includes a single quote (e.g. an apostrophe),
you should use double quotes to surround it:
And vice versa: if your string has a double quote inside it, you should wrap the whole string in single
quotes.
1.6.3 Lists
Python’s basic container type is the list.
We can define our own list with square brackets:
In [25]: [1, 3, 7]
Out[25]: [1, 3, 7]
Out[26]: list
In [28]: various_things[2]
Out[28]: 'banana'
In [29]: index = 0
various_things[index]
Out[29]: 1
Sir==Michael==Edward==Palin
In [31]: "Ernst Stavro Blofeld".split(" ")
Out[33]: 'John->Ronald->Reuel->Tolkein'
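That output would have come from join, which glues a list of strings together with a chosen separator, for example:
"->".join(["John", "Ronald", "Reuel", "Tolkein"])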
A matrix can be represented by nesting lists – putting lists inside other lists.
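For example, a small identity matrix (a sketch of the list used below):
identity = [[1, 0], [0, 1]]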
In [35]: identity[0][0]
Out[35]: 1
1.6.4 Ranges
Another useful type is range, which gives you a sequence of consecutive numbers. In contrast to a list, ranges
generate the numbers as you need them, rather than all at once.
If you try to print a range, you’ll see something that looks a little strange:
In [36]: range(5)
Out[36]: range(0, 5)
We don’t see the contents, because they haven’t been generated yet. Instead, Python gives us a description
of the object - in this case, its type (range) and its lower and upper limits.
We can quickly make a list of the numbers counted up by converting this range with list(range(5)):
[0, 1, 2, 3, 4]
Ranges in Python can be customised in other ways, such as by specifying the lower limit or the step (that
is, the difference between successive elements). You can find more information about them in the official
Python documentation.
1.6.5 Sequences
Many other things can be treated like lists. Python calls things that can be treated like lists sequences.
A string is one such sequence type.
Sequences support various useful operations, including:
• Accessing a single element at a particular index: sequence[index]
• Accessing multiple elements (a slice): sequence[start:end_plus_one]
• Getting the length of a sequence: len(sequence)
• Checking whether the sequence contains an element: element in sequence
The following examples illustrate these operations with lists, strings and ranges.
In [38]: print(count_to_five[1])
In [39]: print("Palin"[2])
In [41]: count_to_five[1:3]
Out[41]: range(1, 3)
In [43]: len(various_things)
Out[43]: 5
In [44]: len("Python")
Out[44]: 6
In [45]: name
Out[46]: True
In [47]: 3 in count_to_five
Out[47]: True
1.6.6 Unpacking
Multiple values can be unpacked when assigning from sequences, like dealing out decks of cards.
World
In [49]: range(4)
Out[49]: range(0, 4)
In [51]: two
Out[51]: 2
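The assignments behind those outputs would look something like this (a sketch):
first, second = ["Hello", "World"]
print(second)                  # World

zero, one, two, three = range(4)
print(two)                     # 2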
If there is too much or too little data, an error results:
---------------------------------------------------------------------------
<ipython-input-52-3331a3ab5222> in <module>
----> 1 zero, one, two, three = range(7)
---------------------------------------------------------------------------
<ipython-input-53-8575e9410b1d> in <module>
----> 1 zero, one, two, three = range(2)
Python provides some handy syntax to split a sequence into its first element (“head”) and the remaining
ones (its “tail”):
head is 0
tail is [1, 2, 3]
Note the syntax with the *. The same pattern can be used, for example, to extract the middle segment
of a sequence whose length we might not know:
one is 0
two is [1, 2, 3, 4, 5, 6, 7, 8]
three is 9
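The assignments that produced those outputs would be along these lines (a sketch):
head, *tail = range(4)
print("head is", head)        # head is 0
print("tail is", tail)        # tail is [1, 2, 3]

one, *two, three = range(10)
print("one is", one)          # one is 0
print("two is", two)          # two is [1, 2, 3, 4, 5, 6, 7, 8]
print("three is", three)      # three is 9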
1.7 Containers
1.7.1 Checking for containment.
The list we saw is a container type: its purpose is to hold other objects. We can ask python whether or
not a container contains a particular item:
Out[1]: True
Out[2]: False
In [3]: 2 in range(5)
Out[3]: True
In [4]: 99 in range(5)
Out[4]: False
1.7.2 Mutability
A list can be modified:
print(" ".join(name))
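Only the final print survives above; a complete version of the cell might look like this (the replacement value is an assumption):
name = "Sir Michael Edward Palin".split(" ")
name[0] = "Knight"
print(" ".join(name))         # Knight Michael Edward Palin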
1.7.3 Tuples
A tuple is an immutable sequence. It is like a list, except it cannot be changed. It is defined with round
brackets.
In [7]: x = 0,
type(x)
Out[7]: tuple
---------------------------------------------------------------------------
<ipython-input-8-242e9dae76d3> in <module>
1 my_tuple = ("Hello", "World")
----> 2 my_tuple[0] = "Goodbye"
In [9]: type(my_tuple)
Out[9]: tuple
---------------------------------------------------------------------------
<ipython-input-10-7127277fc72e> in <module>
1 fish = "Hake"
----> 2 fish[0] = 'R'
But note that container reassignment is moving a label, not changing an element:
Supplementary material: Try the online memory visualiser for this one.
In [12]: x = list(range(3))
x
Out[12]: [0, 1, 2]
In [13]: y = x
y
Out[13]: [0, 1, 2]
In [14]: z = x[0:3]
y[1] = "Gotcha!"
In [15]: x
In [16]: y
In [17]: z
Out[17]: [0, 1, 2]
In [19]: x
In [20]: y
In [21]: z
In [24]: x
In [25]: y
In [26]: z
1.7.5 Identity vs Equality
Having the same data is different from being the same actual object in memory:
Out[27]: True
Out[28]: False
The == operator checks, element by element, that two containers have the same data. The is operator
checks that they are actually the same object.
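For example (a sketch consistent with the two outputs above):
[1, 2] == [1, 2]     # True: the two lists hold the same data
[1, 2] is [1, 2]     # False: they are two separate objects in memory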
But, and this point is really subtle, for immutables, the python language might save memory by reusing
a single instantiated copy. This will always be safe.
Out[29]: True
Out[30]: True
In [31]: x = range(3)
y = x
z = x[:]
In [32]: x == y
Out[32]: True
In [33]: x is y
Out[33]: True
In [34]: x == z
Out[34]: True
In [35]: x is z
Out[35]: False
1.8 Dictionaries
1.8.1 The Python Dictionary
Python supports a container type called a dictionary.
This is also known as an “associative array”, “map” or “hash” in other languages.
In a list, we use a number to look up an element:
In [2]: names[1]
Out[2]: 'Luther'
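In a dictionary, we use a key to look up an element. The defining cell was lost here; a sketch consistent with the lookups below (the name and job values are assumptions) is:
chapman = {
    "name": "Graham",
    "Jobs": ["Comedian", "Writer"],
    "age": 48
}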
In [4]: chapman
In [5]: chapman['Jobs']
In [6]: chapman['age']
Out[6]: 48
In [7]: type(chapman)
Out[7]: dict
In [8]: chapman.keys()
In [9]: chapman.values()
Out[10]: True
Out[11]: False
Out[12]: True
1.8.3 Immutable Keys Only
The way in which dictionaries work is one of the coolest things in computer science: the “hash table”. The
details of this are beyond the scope of this course, but we will consider some aspects in the section on
performance programming.
One consequence of this implementation is that you can only use immutable things as keys.
In [13]: good_match = {
("Lamb", "Mint"): True,
("Bacon", "Chocolate"): False
}
but:
In [14]: illegal = {
["Lamb", "Mint"]: True,
["Bacon", "Chocolate"]: False
}
---------------------------------------------------------------------------
<ipython-input-14-514a4c981e6d> in <module>
1 illegal = {
2 ["Lamb", "Mint"]: True,
----> 3 ["Bacon", "Chocolate"]: False
4 }
1.8.5 Sets
A set is a list which cannot contain the same element twice. We make one by calling set() on any sequence,
e.g. a list or string.
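The defining cell isn't shown; given the letters displayed below, it was presumably something like:
unique_letters = set("Graham Chapman")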
In [17]: unique_letters
Out[17]: {' ', 'C', 'G', 'a', 'h', 'm', 'n', 'p', 'r'}
In [18]: primes_below_ten = { 2, 3, 5, 7}
In [19]: type(unique_letters)
Out[19]: set
In [20]: type(primes_below_ten)
Out[20]: set
In [21]: unique_letters
Out[21]: {' ', 'C', 'G', 'a', 'h', 'm', 'n', 'p', 'r'}
This will be easier to read if we turn the set of letters back into a string, with join:
In [22]: "".join(unique_letters)
A set has no particular order, but is really useful for checking or storing unique values.
Set operations work as in mathematics:
In [23]: x = set("Hello")
y = set("Goodbye")
In [25]: x | y # Union
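Intersection and difference work the same way (a sketch; remember that sets display in arbitrary order):
x & y     # intersection: the letters in both "Hello" and "Goodbye", i.e. {'e', 'o'}
x - y     # difference: the letters only in "Hello", i.e. {'H', 'l'}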
Your programs will be faster and more readable if you use the appropriate container type for your data’s
meaning. Always use a set for lists which can’t in principle contain the same data twice, always use a
dictionary for anything which feels like a mapping from keys to values.
In [1]: UCL = {
'City': 'London',
'Street': 'Gower Street',
'Postcode': 'WC1E 6BT'
}
In [2]: Chapman = {
'City': 'London',
'Street': 'Southwood ln',
'Postcode': 'N6 5TB'
}
In [4]: addresses
A more complicated data structure, for example for a census database, might have a list of residents or
employees at each address:
In [7]: addresses
Which is then a list of dictionaries, whose values are strings or lists.
We can go further, e.g.:
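The cell itself is missing; it would have been a comprehension over the addresses list, something like:
[address['Postcode'] for address in addresses]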
This was an example of a ‘list comprehension’, which we have used to get data of this structure, and which
we’ll see more of in a moment…
1.9.2 Exercise: a Maze Model.
Work with a partner to design a data structure to represent a maze using dictionaries and lists.
• The front room can hold 2 people. Graham is currently there. You can go outside to the garden, or
upstairs to the bedroom, or north to the kitchen.
• From the kitchen, you can go south to the front room. It fits 1 person.
• From the garden you can go inside to front room. It fits 3 people. David is currently there.
• From the bedroom, you can go downstairs to the front room. You can also jump out of the window to
the garden. It fits 2 people.
In [1]: house = {
'living' : {
'exits': {
'north' : 'kitchen',
'outside' : 'garden',
'upstairs' : 'bedroom'
},
'people' : ['Graham'],
'capacity' : 2
},
'kitchen' : {
'exits': {
'south' : 'living'
},
'people' : [],
'capacity' : 1
},
'garden' : {
'exits': {
'inside' : 'living'
},
'people' : ['David'],
'capacity' : 3
},
'bedroom' : {
'exits': {
'downstairs' : 'living',
'jump' : 'garden'
},
'people' : [],
'capacity' : 1
}
}
• Control whether a program statement should be executed or not, based on a variable. “Conditionality”
• Jump back to an earlier point in the program, and run some statements again. “Branching”
Once we have these, we can write computer programs to process information in arbitrary ways: we are
Turing Complete!
1.10.2 Conditionality
Conditionality is achieved through Python’s if statement:
In [1]: x = 5
if x < 0:
print(f"{x} is negative")
The absence of output here means the if clause prevented the print statement from running.
In [2]: x = -10
if x < 0:
print(f"{x} is negative")
-10 is negative
1.10.3 Else and Elif
Python’s if statement has optional elif (else-if) and else clauses:
In [3]: x = 5
if x < 0:
print("x is negative")
else:
print("x is positive")
x is positive
In [4]: x = 5
if x < 0:
print("x is negative")
elif x == 0:
print("x is zero")
else:
print("x is positive")
x is positive
Try editing the value of x here, and note that the other branches are executed.
if choice == 'high':
print(1)
elif choice == 'medium':
print(2)
else:
print(3)
1.10.4 Comparison
True and False are used to represent boolean (true or false) values.
In [6]: 1 > 2
Out[6]: False
Out[7]: True
Out[8]: False
There’s no automatic conversion of the string ‘True’ to the boolean value True:
Out[9]: False
In Python 2 there were subtle implied order comparisons between types, but it was bad style to rely
on these. In Python 3, comparing values of different types like this is an error:
---------------------------------------------------------------------------
<ipython-input-10-2ae56e567bff> in <module>
----> 1 '1' < 2
---------------------------------------------------------------------------
<ipython-input-11-4b266c2a1d9b> in <module>
----> 1 '5' < 2
---------------------------------------------------------------------------
<ipython-input-12-142f2d5d83a7> in <module>
----> 1 '1' > 2
Any statement that evaluates to True or False can be used to control an if statement.
In [13]: mytext = "Hello"
In [14]: if mytext:
print("Mytext is not empty")
The not operator also understands this implicit conversion of “truthy” and “falsy” values to True or False.
In [18]: not not "Who's there!" # Thanks to Mysterious Student
Out[18]: True
In [19]: bool("")
Out[19]: False
In [20]: bool("Graham")
Out[20]: True
In [21]: bool([])
Out[21]: False
In [22]: bool(['a'])
Out[22]: True
In [23]: bool({})
Out[23]: False
In [24]: bool({'name': 'Graham'})
Out[24]: True
In [25]: bool(0)
Out[25]: False
In [26]: bool(1)
Out[26]: True
But subtly, although these quantities evaluate True or False in an if statement, they’re not themselves
actually True or False under ==:
In [27]: [] == False
Out[27]: False
In [28]: bool([]) == False
Out[28]: True
1.10.6 Indentation
In Python, indentation is semantically significant. You can choose how much indentation to use, so long as
you are consistent, but four spaces is conventional. Please do not use tabs.
In the notebook, and most good editors, when you press <tab>, you get four spaces.
No indentation when it is expected results in an error:
In [29]: x = 2
In [30]: if x > 0:
print(x)
but with the body correctly indented it works:
In [31]: if x > 0:
    print(x)
2
1.10.7 Pass
A statement expecting indentation must have some indented code. This can be annoying when commenting
things out with #.
In [32]: if x > 0:
    # print x
print("Hello")
This fails, because the if statement then has no indented body at all. Adding pass gives it one:
In [33]: if x > 0:
    # print x
    pass
print("Hello")
Hello
1.10.8 Iteration
Our other aspect of control is looping back on ourselves.
We use for … in to “iterate” over lists:
9
49
225
4
Each time through the loop, the variable in the value slot is updated to the next element of the sequence.
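The loop that produced the numbers above was lost; it would have squared each element of a list, along these lines (the exact list is an assumption consistent with the output):
for value in [3, 7, 15, 2]:
    print(value ** 2)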
1.10.9 Iterables
Any sequence type is iterable:
sarcasm.append(letter * repetition)
"".join(sarcasm)
Out[3]: 'OOOkaaay'
The above is a little puzzle, work through it to understand why it does what it does.
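Only the last two lines of that cell survive above; a complete version consistent with the 'OOOkaaay' output might be:
vowels = "aeiou"
sarcasm = []

for letter in "Okay":
    if letter.lower() in vowels:
        repetition = 3      # stretch the vowels
    else:
        repetition = 1
    sarcasm.append(letter * repetition)

"".join(sarcasm)            # 'OOOkaaay'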
current_year = now.year
1.10.11 Unpacking and Iteration
Unpacking can be useful with iteration:
In [5]: triples = [
[4, 11, 15],
[39, 4, 18]
]
In [6]: for whatever in triples:
print(whatever)
[4, 11, 15]
[39, 4, 18]
In [8]: # A reminder that the words you use for variable names are arbitrary:
for hedgehog, badger, fox in triples:
print(badger)
11
4
print(things.items())
dict_items([('Eric', [1943, 'South Shields']), ('UCL', [1826, 'Bloomsbury']), ('Cambridge', [1209, 'Camb
1
3
5
7
9
11
13
15
17
19
These aren’t useful that often, but are worth knowing about. There’s also an optional else clause on
loops, executed only if you don’t break, but I’ve never found that useful.
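The loop that printed the odd numbers above was lost; break and continue work along these lines (a sketch consistent with that output):
for n in range(50):
    if n == 20:
        break        # stop the whole loop once we reach 20
    if n % 2 == 0:
        continue     # skip even numbers and move straight on to the next n
    print(n)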
In [1]: house = {
'living' : {
'exits': {
'north' : 'kitchen',
'outside' : 'garden',
'upstairs' : 'bedroom'
},
'people' : ['Graham'],
'capacity' : 2
},
'kitchen' : {
'exits': {
'south' : 'living'
},
'people' : [],
'capacity' : 1
},
'garden' : {
'exits': {
'inside' : 'living'
},
'people' : ['David'],
'capacity' : 3
},
'bedroom' : {
'exits': {
'downstairs' : 'living',
'jump' : 'garden'
},
'people' : [],
'capacity' : 1
}
}
We can count the occupants and capacity like this:
In [2]: capacity = 0
occupancy = 0
for name, room in house.items():
capacity += room['capacity']
occupancy += len(room['people'])
print(f"House can fit {capacity} people, and currently has: {occupancy}.")
House can fit 7 people, and currently has: 2.
Note how we included the values of capacity and occupancy in the last line using an f-string. This is a
handy syntax for building strings that contain the values of variables. You can read more about it in the
Python String Formatting Best Practices guide or in the official documentation.
1.11 Comprehensions
1.11.1 The list comprehension
If you write a for loop inside a pair of square brackets, you magic up a list as defined by the loop. This can
make for concise but hard to read code, so be careful.
In [1]: [2 ** x for x in range(10)]
Out[1]: [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
Which is equivalent to the following code without using comprehensions:
In [2]: result = []
for x in range(10):
result.append(2 ** x)
result
Out[2]: [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
You can do quite weird and cool things with comprehensions:
In [3]: [len(str(2 ** x)) for x in range(10)]
Out[3]: [1, 1, 1, 1, 2, 2, 2, 3, 3, 3]
1.11.3 Comprehensions versus building lists with append:
This code:
In [6]: result = []
for x in range(30):
if x % 3 == 0:
result.append(2 ** x)
result
Out[6]: [1, 8, 64, 512, 4096, 32768, 262144, 2097152, 16777216, 134217728]
Does the same as the comprehension above. The comprehension is generally considered more readable.
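The comprehension being referred to, with its if clause, isn't preserved above; it would read:
[2 ** x for x in range(30) if x % 3 == 0]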
Comprehensions are therefore an example of what we call ‘syntactic sugar’: they do not increase the
capabilities of the language.
Instead, they make it possible to write the same thing in a more readable way.
Almost everything we learn from now on will be either syntactic sugar or interaction with something other
than idealised memory, such as a storage device or the internet. Once you have variables, conditionality, and
branching, your language can do anything. (And this can be proved.)
Out[8]: [0, 1, 0, 2, 1, 0, 3, 2, 1, 0]
If you want something more like a matrix, you need to do two nested comprehensions!
Out[9]: [[0, 1, 2, 3], [-1, 0, 1, 2], [-2, -1, 0, 1], [-3, -2, -1, 0]]
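The inputs for those two outputs were lost; comprehensions consistent with them are:
[x - y for x in range(4) for y in range(x + 1)]      # flattened into one long list
[[x - y for x in range(4)] for y in range(4)]        # nested: a list of rows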
In [10]: [x+y for x in ['a', 'b', 'c'] for y in ['1', '2', '3']]
Out[10]: ['a1', 'a2', 'a3', 'b1', 'b2', 'b3', 'c1', 'c2', 'c3']
In [11]: [[x+y for x in ['a', 'b', 'c']] for y in ['1', '2', '3']]
Out[11]: [['a1', 'b1', 'c1'], ['a2', 'b2', 'c2'], ['a3', 'b3', 'c3']]
1.11.6 List-based thinking
Once you start to get comfortable with comprehensions, you find yourself working with containers, nested
groups of lists and dictionaries, as the ‘things’ in your program, not individual variables.
Given a way to analyse some dataset, we’ll find ourselves writing stuff like:
There are lots of built-in methods that provide actions on lists as a whole:
Out[13]: True
Out[14]: False
Out[15]: 3
Out[16]: 6
My favourite is map, which, similar to a list comprehension, applies one function to every member of a
list:
Out[17]: ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
Out[18]: ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
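The two identical outputs above come from the two equivalent spellings (a sketch):
[str(x) for x in range(10)]      # a list comprehension
list(map(str, range(10)))        # map applies str to every element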
So I can write:
We’ll learn more about map and similar functions when we discuss functional programming later in the
course.
Now, write a program to print out a new dictionary, which gives, for each room’s name, the number of
people in it. Don’t add in a zero value in the dictionary for empty rooms.
The output should look similar to:
1.11.8 Solution
With this maze structure:
In [1]: house = {
'living' : {
'exits': {
'north' : 'kitchen',
'outside' : 'garden',
'upstairs' : 'bedroom'
},
'people' : ['Graham'],
'capacity' : 2
},
'kitchen' : {
'exits': {
'south' : 'living'
},
'people' : [],
'capacity' : 1
},
'garden' : {
'exits': {
'inside' : 'living'
},
'people' : ['David'],
'capacity' : 3
},
'bedroom' : {
'exits': {
'downstairs' : 'living',
'jump' : 'garden'
},
'people' : [],
'capacity' : 1
}
}
To get the current number of occupants, we can use a similar dictionary comprehension. Remember that
we can filter (only keep certain rooms) by adding an if clause:
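A sketch of such a comprehension (the original solution cell isn't preserved here):
{name: len(room['people'])
 for name, room in house.items()
 if room['people']}              # e.g. {'living': 1, 'garden': 1}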
1.12 Functions
1.12.1 Definition
We use def to define a function, and return to pass back a value:
In [1]: def double(x):
return x * 2
10 [5, 5] fivefive
In [3]: jeeves()
In [4]: jeeves('John')
If you have some parameters with defaults, and some without, those with defaults must go later.
If you have multiple default arguments, you can specify neither, one or both:
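The definition behind the jeeves calls isn't preserved; a hypothetical function with default parameters (the name and wording are assumptions) looks like this:
def jeeves(greeting="Very good", name="Sir"):
    return f"{greeting}, {name}"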
In [6]: jeeves()
In [7]: jeeves("Hello")
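The next example modifies a list in place; its definition was lost, but it would be along these lines (a sketch consistent with the output below):
def double_inplace(vec):
    # Assign into the existing list's contents rather than rebinding the name
    vec[:] = [element * 2 for element in vec]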
z = list(range(4))
double_inplace(z)
print(z)
[0, 2, 4, 6]
In this example, we’re using [:] to write into the existing list, rather than making the name refer to a new one.
Let’s remind ourselves of the behaviour of assignment and of slicing with [:], using a simple list:
In [14]: x = 5
x = 7
x = ['a', 'b', 'c']
y = x
In [15]: x
In [17]: y
In [19]: x = list(range(3))
extend(6, x, 'a')
print(x)
In [20]: z = range(9)
extend(6, z, 'a')
print(z)
range(0, 9)
1.12.5 Unpacking arguments
If a sequence is supplied to a function call with a *, its elements are used to fill each of the function’s arguments.
arrow(1, 3)
neutron -> 0
proton -> 1
electron -> -1
In [25]: doubler(1, 2, 3)
Out[25]: [2, 4, 6]
neutron -> n
proton -> p
electron -> e
These different approaches can be mixed:
A: 1
B: 2
args: (3, 4, 5)
keyword args {'fish': 'Haddock'}
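A definition consistent with that output might be (the function name is an assumption):
def demonstrate(a, b, *args, **kwargs):
    print("A:", a)
    print("B:", b)
    print("args:", args)
    print("keyword args", kwargs)

demonstrate(1, 2, 3, 4, 5, fish="Haddock")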
In [1]: math.sin(1.6)
---------------------------------------------------------------------------
<ipython-input-1-12dcc3af2e0c> in <module>
----> 1 math.sin(1.6)
In [3]: math.sin(1.6)
Out[3]: 0.9995736030415051
In [4]: type(math)
Out[4]: module
The tools supplied by a module are attributes of the module, and as such, are accessed with a dot.
In [5]: dir(math)
Out[5]: ['__doc__',
'__file__',
'__loader__',
'__name__',
'__package__',
'__spec__',
'acos',
'acosh',
'asin',
'asinh',
'atan',
'atan2',
'atanh',
'ceil',
'copysign',
'cos',
'cosh',
'degrees',
'e',
'erf',
'erfc',
'exp',
'expm1',
'fabs',
'factorial',
'floor',
'fmod',
'frexp',
'fsum',
'gamma',
'gcd',
'hypot',
'inf',
'isclose',
'isfinite',
'isinf',
'isnan',
'ldexp',
'lgamma',
'log',
'log10',
'log1p',
'log2',
'modf',
'nan',
'pi',
'pow',
'radians',
'remainder',
'sin',
'sinh',
'sqrt',
'tan',
'tanh',
'tau',
'trunc']
In [6]: math.pi
Out[6]: 3.141592653589793
You can always find out where on your storage medium a library has been imported from:
In [7]: print(math.__file__[0:50])
print(math.__file__[50:])
/home/travis/virtualenv/python3.7.5/lib/python3.7/
lib-dynload/math.cpython-37m-x86_64-linux-gnu.so
Note that import does not install libraries. It just makes them available to your current notebook session,
assuming they are already installed. Installing libraries is harder, and we’ll cover it later. So what libraries
are available? Until you install more, you might have just the modules that come with Python, the standard
library.
Supplementary Materials: Review the list of standard library modules.
If you installed via Anaconda, then you also have access to a bunch of modules that are commonly used
in research.
Supplementary Materials: Review the list of modules that are packaged with Anaconda by default
on different architectures (modules installed by default are shown with ticks).
We’ll see later how to add more libraries to our setup.
Out[8]: 1.2246467991473532e-16
Out[9]: 1.2246467991473532e-16
Importing one-by-one like this is a nice compromise between typing and risk of name clashes.
It is possible to import everything from a module, but you risk name clashes.
Out[10]: 1.2246467991473532e-16
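The cells behind those outputs would have looked something like this (a sketch):
from math import sin, pi
sin(pi)          # 1.2246467991473532e-16: not exactly zero, because pi is stored approximately

from math import *
sin(pi)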
1.13.4 Import and rename
You can rename things as you import them to avoid clashes or for typing convenience
Out[11]: 1.0
In [12]: pi = 3
from math import pi as realpi
print(sin(pi), sin(realpi))
0.1411200080598672 1.2246467991473532e-16
A class can be defined either as class Room(object): or, more briefly, as class Room:.
What’s the difference? Before Python 2.2 a class was distinct from all other Python types, which caused
some odd behaviour. To fix this, classes were redefined as user programmed types by extending object, e.g.,
class Room(object).
So most Python 2 code will use this syntax, as very few people want to use old style Python classes. Python
3 has formalised this by removing old-style classes, so they can be defined without extending object, or
indeed without the parentheses.
Just as with other python types, you use the name of the type as a function to make a variable of that
type:
Out[4]: int
Out[5]: __main__.Room
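The defining cell isn't preserved; a minimal class definition consistent with the output above is:
class Room:
    pass

myroom = Room()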
In [6]: myroom.name = "Living"
In [7]: myroom.name
Out[7]: 'Living'
The most common use of a class is to allow us to group data into an object in a way that is easier to
read and understand than organising data into lists and dictionaries.
In [8]: myroom.capacity = 3
myroom.occupants = ["Graham", "Eric"]
1.14.2 Methods
So far, our class doesn’t do much!
We define functions inside the definition of a class, in order to give them capabilities, just like the
methods on built-in types.
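For example (a sketch, using the overfull logic that also appears with the constructor below); myroom would then be re-created and given its capacity and occupants as above:
class Room:
    def overfull(self):
        # A room is overfull when it holds more occupants than its capacity
        return len(self.occupants) > self.capacity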
In [11]: myroom.overfull()
Out[11]: False
In [12]: myroom.occupants.append(['TerryG'])
In [13]: myroom.occupants.append(['John'])
In [14]: myroom.overfull()
Out[14]: True
When we write methods, we always write the first function argument as self, to refer to the object
instance itself, the argument that goes “before the dot”.
This is just a convention for this variable name, not a keyword. You could call it something else if you
wanted.
1.14.3 Constructors
Normally, though, we don’t want to add data to the class attributes on the fly like that. Instead, we define
a constructor that converts input data into an object.
def overfull(self):
return len(self.occupants) > self.capacity
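Combined with the constructor described here, and matching the call below, the class would look something like this (a sketch):
class Room:
    def __init__(self, name, exits, capacity):
        self.name = name
        self.exits = exits          # a dictionary from directions to room names
        self.capacity = capacity
        self.occupants = []         # rooms start empty

    def overfull(self):
        return len(self.occupants) > self.capacity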
In [16]: living = Room("Living Room", {'north': 'garden'}, 3)
In [17]: living.capacity
Out[17]: 3
Methods which begin and end with two underscores in their names fulfil special capabilities in Python,
such as constructors.
For example, the below program might describe our “Maze of Rooms” system:
We define a “Maze” class which can hold rooms:
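The top of the Maze class was lost; a sketch of the missing part (method names inferred from how the maze is used later) is:
class Maze:
    def __init__(self, name):
        self.name = name
        self.rooms = {}

    def add_room(self, room):
        room.maze = self            # each room keeps a reference back to its maze
        self.rooms[room.name] = room

    def simulate(self, steps):
        for _ in range(steps):
            self.step()

    # ...followed by the occupants, wander, describe and step methods shown below.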
def occupants(self):
return [occupant for room in self.rooms.values()
for occupant in room.occupants.values()]
def wander(self):
"""Move all the people in a random direction"""
for occupant in self.occupants():
occupant.wander()
def describe(self):
for room in self.rooms.values():
room.describe()
def step(self):
self.describe()
print("")
self.wander()
print("")
We define a “Room” class for the rooms in the maze:
class Room:
    def __init__(self, name, exits, capacity, maze=None):
        # (class line and signature assumed from the attribute assignments below)
        self.maze = maze
        self.name = name
        self.occupants = {} # Note the default argument, occupants start empty
        self.exits = exits # Should be a dictionary from directions to room names
        self.capacity = capacity
def has_space(self):
return len(self.occupants) < self.capacity
def available_exits(self):
return [exit for exit, target in self.exits.items()
if self.maze.rooms[target].has_space()]
def random_valid_exit(self):
import random
if not self.available_exits():
return None
return random.choice(self.available_exits())
def describe(self):
if self.occupants:
print(f"{self.name}: " + " ".join(self.occupants.keys()))
We define a “Person” class for room occupants:
In [20]: class Person:
def __init__(self, name, room=None):
self.name = name
def wander(self):
exit = self.room.random_valid_exit()
if exit:
self.use(exit)
And we use these classes to define our people, rooms, and their relationships:
In [21]: graham = Person('Graham')
eric = Person('Eric')
terryg = Person('TerryG')
john = Person('John')
In [25]: living.add_occupant(graham)
In [26]: garden.add_occupant(eric)
garden.add_occupant(terryg)
In [27]: bedroom.add_occupant(john)
In [28]: house.simulate(3)
livingroom: Graham
garden: Eric TerryG
bedroom: John
1.14.5 Object oriented design
There are many choices for how to design programs to do this. Another choice would be to separately define
exits as a different class from rooms. This way, we can use arrays instead of dictionaries, but we have to
first define all our rooms, then define all our exits.
def wander(self):
"Move all the people in a random direction"
for occupant in self.occupants:
occupant.wander()
def describe(self):
for occupant in self.occupants:
occupant.describe()
def step(self):
house.describe()
print("")
house.wander()
print("")
def has_space(self):
return self.occupancy < self.capacity
def available_exits(self):
return [exit for exit in self.exits if exit.valid()]
def random_valid_exit(self):
import random
if not self.available_exits():
return None
return random.choice(self.available_exits())
def wander(self):
exit = self.room.random_valid_exit()
if exit:
self.use(exit)
def describe(self):
print("{who} is in the {where}".format(who=self.name,
where=self.room.name))
def valid(self):
return self.target.has_space()
In [38]: house.add_exit('jump', bed, garden)
In [40]: house.simulate(3)
This is a huge topic, about which many books have been written. The differences between these two
designs are important, and will have long-term consequences for the project. That is how we start to
think about software engineering, as opposed to learning to program, and it is an important part of this
course.
Chapter 2
So let us put it all back together, not forgetting ultimately what it is for.
Let it give us one more final pleasure; drink it and forget it all!
- Richard Feynman
Writing mydata.txt
Where did that go? It went to the current folder, which for a notebook, by default, is where the notebook
is on disk.
In [2]: import os # The 'os' module gives us all the tools we need to search in the file system
os.getcwd() # Use the 'getcwd' function from the 'os' module to find where we are on disk.
Out[2]: '/home/travis/build/UCL/rsd-engineeringcourse/ch01data'
In [3]: import os
[x for x in os.listdir(os.getcwd()) if ".txt" in x]
Out[3]: ['mydata.txt']
Yep! Note how we used a list comprehension to filter all the extraneous files.
In [4]: os.path.dirname(os.getcwd())
Out[4]: '/home/travis/build/UCL/rsd-engineeringcourse'
In [5]: "/".join(os.getcwd().split("/")[:-1])
Out[5]: '/home/travis/build/UCL/rsd-engineeringcourse'
But this would not work on Windows, where path elements are separated with a \ instead of a /. So it’s
important to use os.path for this stuff.
Supplementary Materials: If you’re not already comfortable with how files fit into folders, and folders
form a tree, with folders containing subfolders, then look at this Software Carpentry lesson on navigating
the file system.
Satisfy yourself that after using %%writefile, you can then find the file on disk with Windows Explorer,
OSX Finder, or the Linux Shell.
We can see how in Python we can investigate the file system with functions in the os module, using just
the same programming approaches as for anything else.
We’ll gradually learn more features of the os module as we go, allowing us to move around the disk, walk
around the disk looking for relevant files, and so on. These will be important to master for automating our
data analyses.
2.1.4 Opening files in Python
So, let’s read our file:
In [7]: type(myfile)
Out[7]: _io.TextIOWrapper
Even though the name of this type is not very clear, it offers various ways of accessing the file.
We can go line-by-line, by treating the file as an iterable:
Out[8]: ["A poet once said, 'The whole universe is in a glass of wine.'\n",
'We will probably never know in what sense he meant it, \n',
'for poets do not write to be understood. \n',
'But it is true that if we look at a glass of wine closely enough we see the entire universe. \
'There are the things of physics: the twisting liquid which evaporates depending\n',
'on the wind and weather, the reflection in the glass;\n',
'and our imagination adds atoms.\n',
"The glass is a distillation of the earth's rocks,\n",
"and in its composition we see the secrets of the universe's age, and the evolution of stars. \
'What strange array of chemicals are in the wine? How did they come to be? \n',
'There are the ferments, the enzymes, the substrates, and the products.\n',
'There in wine is found the great generalization; all life is fermentation.\n',
'Nobody can discover the chemistry of wine without discovering, \n',
'as did Louis Pasteur, the cause of much disease.\n',
'How vivid is the claret, pressing its existence into the consciousness that watches it!\n',
'If our small minds, for some convenience, divide this glass of wine, this universe, \n',
'into parts -- \n',
'physics, biology, geology, astronomy, psychology, and so on -- \n',
'remember that nature does not know it!\n',
'\n',
'So let us put it all back together, not forgetting ultimately what it is for.\n',
'Let it give us one more final pleasure; drink it and forget it all!\n',
' - Richard Feynman\n']
If we do that again, the file has already been read to the end: there is no more data.
Out[9]: []
In [10]: myfile.seek(0)
[len(x) for x in myfile if 'know' in x]
It’s really important to remember that a file is a different built-in type from a string.
2.1.5 Working with files
We can read one line at a time with readline:
In [11]: myfile.seek(0)
first = myfile.readline()
In [12]: first
Out[12]: "A poet once said, 'The whole universe is in a glass of wine.'\n"
In [13]: second = myfile.readline()
In [14]: second
Out[14]: 'We will probably never know in what sense he meant it, \n'
We can read the whole remaining file with read:
In [15]: rest = myfile.read()
In [16]: rest
Out[16]: "for poets do not write to be understood. \nBut it is true that if we look at a glass of wine c
This means that when a file is first opened, read is useful just to get the whole thing as a string:
In [17]: open('mydata.txt').read()
Out[17]: "A poet once said, 'The whole universe is in a glass of wine.'\nWe will probably never know in
You can also read just a few characters:
In [18]: myfile.seek(1335)
Out[18]: 1335
In [19]: myfile.read(15)
Out[19]: '\n - Richard F'
---------------------------------------------------------------------------
<ipython-input-22-8fadd4a635f7> in <module>
----> 1 mystring.readline()
This is important, because some file format parsers expect input from a file and not a string. We can
convert between them using the StringIO class of the io module in the standard library:
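The conversion cell itself is not shown here; a minimal sketch, assuming mystring holds the text used above:
from io import StringIO
mystringasafile = StringIO(mystring)  # an ordinary string, wrapped so it behaves like an open file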
In [25]: mystringasafile.readline()
In [26]: mystringasafile.readline()
In [27]: myfile.close()
Because it’s so easy to forget this, python provides a context manager to open a file, then close it
automatically at the end of an indented block:
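The cell is not shown in this rendering; it would have looked something like the sketch below (the bare content expression that follows displays the result):
with open('mydata.txt') as myfile:
    content = myfile.read()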
content
Out[28]: "A poet once said, 'The whole universe is in a glass of wine.'\nWe will probably never know in
The code to be done while the file is open is indented, just like for an if statement.
You should pretty much always use this syntax for working with files. We will see more about context
managers in a later chapter.
HelloWorld
In [32]: with open('mywrittenfile','r') as source:
print(source.read())
HelloWorldHelloJames
2.2.1 URLs
All internet resources are defined by a Uniform Resource Locator.
In [1]: "https://static-maps.yandex.ru/1.x/?size=400,400&ll=-0.1275,51.51&z=10&l=sat&lang=en_US"
Out[1]: 'https://static-maps.yandex.ru/1.x/?size=400,400&ll=-0.1275,51.51&z=10&l=sat&lang=en_US'
Supplementary materials: These can actually be different for different protocols; the above is a
simplification. You can see more, for example, at the wikipedia article about the URI scheme.
URLs are not allowed to include all characters; we need to, for example, “escape” a space that appears
inside the URL, replacing it with %20, so e.g. a request of http://some example.com/ would need to be
http://some%20example.com/
Supplementary materials: The code used to replace each character is the ASCII code for it.
Supplementary materials: The escaping rules are quite subtle. See the wikipedia article for more
detail. The standard library's urllib.parse module provides functions such as quote and urlencode that can take care of this for you.
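A small sketch using the standard library:
from urllib.parse import quote, urlencode
quote("http://some example.com/", safe=":/")     # -> 'http://some%20example.com/'
urlencode({'size': '400,400', 'lang': 'en_US'})  # -> 'size=400%2C400&lang=en_US'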
2.2.2 Requests
The python requests library can help us manage and manipulate URLs. It is easier to use than the urllib
library that is part of the standard library, and is included with anaconda and canopy. It sorts out escaping,
parameter encoding, and so on for us.
To request the above URL, for example, we write:
In [3]: response = requests.get("https://static-maps.yandex.ru/1.x/?size=400,400&ll=-0.1275,51.51&z=10&l=sat&lang=en_US",
                                params={
                                    'size': '400,400',
                                    'll': '-0.1275,51.51',
                                    'zoom': 10,
                                    'l': 'sat',
                                    'lang': 'en_US'
                                })
In [4]: response.content[0:50]
Out[4]: b'\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x01\x00H\x00H\x00\x00\xff\xdb\x00C\x00\x08\x06\x06\x0
When we do a request, the result comes back as text. For the png image in the above, this isn’t very
readable.
Just as for file access, therefore, we will need to send the text we get to a python module which understands
that file format.
Again, it is important to separate the transport model (e.g. a file system, or an “http request” for the
web) from the data model of the data that is returned.
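The download cell is not shown in this rendering. A minimal sketch, assuming the SILSO monthly sunspot-number service (the URL here is an assumption):
import requests
spots = requests.get('http://www.sidc.be/silso/INFO/snmtotcsv.php').text  # semicolon-separated text, one record per line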
In [6]: spots[0:80]
This looks like semicolon-separated data, with different records on different lines. (Line separators come
out as \n)
There are many many scientific datasets which can now be downloaded like this - integrating the download
into your data pipeline can help to keep your data flows organised.
In [9]: years[0:15]
Out[9]: ['1749',
'1749',
'1749',
'1749',
'1749',
'1749',
'1749',
'1749',
'1749',
'1749',
'1749',
'1749',
'1750',
'1750',
'1750']
But don’t: what if, for example, one of the records contains the separator inside it? By convention,
such a field is wrapped in quotes, so that a record might contain, for example,
"something; something"
A naive split on the separator would then give too many fields, the first of which would be
"something
You’ll never manage to get all that right; so you’ll be better off using a library to do it.
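For instance, Python's built-in csv module already handles quoted separators correctly; a small sketch:
import csv
from io import StringIO

line = 'field1;"something; something";field3'
list(csv.reader(StringIO(line), delimiter=';'))
# -> [['field1', 'something; something', 'field3']]: the quoted separator is not split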
Typical separators are the space, tab, comma, and semicolon, leading to correspondingly-named file
formats, e.g.:
• Space-separated value (e.g. field1 "field two" field3 )
• Comma-separated value (e.g. field1, another field, "wow, another field")
Comma-separated value is abbreviated CSV, and tab-separated value TSV.
CSV is also used to refer to all the different sub-kinds of separated value files, i.e. some people use CSV
to refer to tab-, space- and semicolon-separated files.
CSV is not a particularly superb data format, because it forces your data model to be a list of lists.
Richer file formats describe “serialisations” for dictionaries and for deeper-than-two nested list structures as
well.
Nevertheless, because you can always export spreadsheets as CSV files (each row is a record, each cell is
a field), CSV files are very popular.
Out[6]: [<matplotlib.lines.Line2D at 0x7f4d6d3df550>]
The plot command accepted an array of ‘X’ values and an array of ‘Y’ values. We used a special NumPy
“:” syntax, which we’ll learn more about later. Don’t worry about the %matplotlib magic command for
now - we’ll also look at this later.
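The plotting cell is not shown here; it was presumably along these lines, assuming sunspots is the plain 2-d array parsed from the downloaded text (column numbers follow the description below):
%matplotlib inline
from matplotlib import pyplot as plt
plt.plot(sunspots[:, 2], sunspots[:, 3])  # date as fraction of year against monthly mean sunspot number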
CSV
Contents:
• Column 1-2: Gregorian calendar date - Year - Month
• Column 3: Date in fraction of year.
• Column 4: Monthly mean total sunspot number.
• Column 5: Monthly mean standard deviation of the input sunspot numbers.
• Column 6: Number of observations used to compute the monthly mean total sunspot number.
• Column 7: Definitive/provisional marker. ‘1’ indicates that the value is definitive. ‘0’ indicates that the value is still provisional.
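A load with named columns, consistent with the dtype shown below, might look like this (a sketch; the names are assumptions based on the column description above):
import numpy as np
from io import StringIO

sunspots = np.genfromtxt(StringIO(spots), delimiter=';',
                         names=['year', 'month', 'date', 'mean',
                                'deviation', 'observations', 'definite'])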
In [8]: sunspots
Out[8]: array([(1749., 1., 1749.042, 96.7, -1. , -1., 1.),
(1749., 2., 1749.123, 104.3, -1. , -1., 1.),
(1749., 3., 1749.204, 116.7, -1. , -1., 1.), …,
(2019., 10., 2019.79 , 0.4, 0.1, 857., 0.),
(2019., 11., 2019.873, 0.5, 0.1, 693., 0.),
(2019., 12., 2019.958, 1.6, 0.6, 719., 0.)],
dtype=[('year', '<f8'), ('month', '<f8'), ('date', '<f8'), ('mean', '<f8'), ('deviation',
In [10]: sunspots
Now, NumPy understands the names of the columns, so our plot command is more readable:
In [11]: sunspots['year']
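The more readable plot command would then be something like:
plt.plot(sunspots['date'], sunspots['mean'])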
2.4 Structured Data
2.4.1 Structured data
CSV files can only model data where each record has several fields, and each field is a simple datatype, a
string or number.
We often want to store data which is more complicated than this, with nested structures of lists and
dictionaries. Structured data formats like JSON, YAML, and XML are designed for this.
2.4.2 JSON
JSON is a very common open-standard data format that is used to store structured data in a human-readable
way.
This allows us to represent data which is combinations of lists and dictionaries as a text file which looks
a bit like a Javascript (or Python) data literal.
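The dictionary being serialised is not shown in this rendering; judging from the indented output printed below, it was presumably something like:
import json
mydata = {'key': ['value1', 'value2'], 'key2': {'key4': 'value3'}}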
In [3]: json.dumps(mydata)
If you would like a more readable output, you can use the indent argument.
{
    "key": [
        "value1",
        "value2"
    ],
    "key2": {
        "key4": "value3"
    }
}
Writing myfile.json
In [7]: my_data_as_string
In [9]: mydata['somekey']
This is a very nice solution for loading and saving Python data structures.
It’s a very common way of transferring data on the internet, and of saving datasets to disk.
There’s good support in most languages, so it’s a nice inter-language file interchange format.
2.4.3 YAML
YAML is a very similar data format to JSON, with some nice additions:
• You don’t need to quote strings if they don’t have funny characters in
• You can have comment lines, beginning with a #
• You can write dictionaries without the curly brackets: it just notices the colons.
• You can write lists like this:
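The cell that wrote the file is not shown; based on the dump printed later, it was presumably a %%writefile cell along these lines (the comment text is illustrative):
%%writefile myfile.yaml
somekey:
    - a list # And we can comment with a hash
    - with values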
Writing myfile.yaml
In [12]: with open('myfile.yaml') as yaml_file:
             my_data = yaml.safe_load(yaml_file)
         print(my_data)
YAML is a popular format for ad-hoc data files, but the library doesn’t ship with default Python (though
it is part of Anaconda and Canopy), so some people still prefer JSON for its universality.
Because YAML gives the option of serialising a list either as newlines with dashes or with square brackets,
you can control this choice:
In [13]: print(yaml.safe_dump(mydata))
somekey:
- a list
- with values
default_flow_style=False (the default) uses a “block style” (rather than an “inline” or “flow style”)
to delineate data structures. See the YAML docs for more details.
2.4.4 XML
Supplementary material: XML is another popular choice when saving nested data structures. It’s very
careful, but verbose. If your field uses XML data, you’ll need to learn a python XML parser (there are a
few), and about how XML works.
In [1]: house = {
            'living': {
                'exits': {
                    'north': 'kitchen',
                    'outside': 'garden',
                    'upstairs': 'bedroom'
                },
                'people': ['James'],
                'capacity': 2
            },
            'kitchen': {
                'exits': {
                    'south': 'living'
                },
                'people': [],
                'capacity': 1
            },
            'garden': {
                'exits': {
                    'inside': 'living'
                },
                'people': ['Sue'],
                'capacity': 3
            },
            'bedroom': {
                'exits': {
                    'downstairs': 'living',
                    'jump': 'garden'
                },
                'people': [],
                'capacity': 1
            }
        }
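The cell that wrote this structure to disk is not shown; a minimal sketch that would produce the maze.json file inspected below:
import json
with open('maze.json', 'w') as json_maze_out:
    json.dump(house, json_maze_out)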
In [4]: %%bash
cat 'maze.json'
{"living": {"exits": {"north": "kitchen", "outside": "garden", "upstairs": "bedroom"}, "people": ["James
In [6]: maze_again
Or with YAML:
In [9]: %%bash
cat 'maze.yaml'
bedroom:
  capacity: 1
  exits:
    downstairs: living
    jump: garden
  people: []
garden:
  capacity: 3
  exits:
    inside: living
  people:
  - Sue
kitchen:
  capacity: 1
  exits:
    south: living
  people: []
living:
  capacity: 2
  exits:
    north: kitchen
    outside: garden
    upstairs: bedroom
  people:
  - James
In [11]: maze_again
"minlatitude": "50.008",
"maxlongitude": "1.67",
"minlongitude": "-9.756",
"minmagnitude": "1",
"endtime": "2018-10-11",
"orderby": "time-asc"}
)
In [2]: quakes.text[0:100]
Out[2]: '{"type":"FeatureCollection","metadata":{"generated":1579287393000,"url":"https://earthquake.usg
Your exercise: determine the location of the largest magnitude earthquake in the UK this century.
You’ll need to:
• Get the text of the web result
• Parse the data as JSON
• Understand how the data is structured into dictionaries and lists
  – Where is the magnitude?
  – Where is the place description or coordinates?
• Program a search through all the quakes to find the biggest quake
• Find the place of the biggest quake
• Form a URL for an online map service at that latitude and longitude: look back at the introductory example
• Display that image
In [4]: type(requests_json)
Out[4]: dict
Now we can navigate through this dictionary to see how the information is stored in the nested dictionaries
and lists. The keys method can indicate what kind of information each dictionary holds, and the len function
tells us how many entries are contained in a list. How you explore is up to you!
In [5]: requests_json.keys()
In [6]: len(requests_json['features'])
Out[6]: 120
In [7]: requests_json['features'][0].keys()
In [8]: requests_json['features'][0]['properties'].keys()
Out[8]: dict_keys(['mag', 'place', 'time', 'updated', 'tz', 'url', 'detail', 'felt', 'cdi', 'mmi', 'aler
In [9]: requests_json['features'][0]['properties']['mag']
Out[9]: 2.6
In [10]: requests_json['features'][0]['geometry']
Also note that some IDEs display JSON in a way that makes its structure easier to understand. Try
saving this data in a text file and opening it in an IDE or a browser.
Out[12]: 4.8
        params = dict(
            z=zoom,
            size="{},{}".format(size[0], size[1]),
            ll="{},{}".format(long, lat),
            l="sat" if satellite else "map",
            lang="en_US"
        )
Out[16]:
2.7 Scientific File Formats
CSV, JSON and YAML are very common formats for representing general-purpose data, but their simplicity
sometimes makes them inconvenient for scientific applications. A common drawback, for example, is that
reading very large amounts of data from a CSV or JSON file can be inefficient. This has led to the
use of more targeted file formats which better address scientists’ requirements for storing, accessing or
manipulating data.
In this section, we will see an example of such a file format, and how to interact with files written in it
programmatically.
2.7.1 HDF5
HDF5 is the current version of the Hierarchical Data Format (HDF), and is commonly used to store large
volumes of scientific data, such as experimental results or measurements. An HDF5 file contains two kinds
of entities organised in a hierarchy, similar to a filesystem.
• Datasets contain scalar or array values. Each dataset has a type, such as integer, floating-point or
string.
• Groups contain datasets or other groups, much like directories contain files and directories.
Both datasets and groups can have attributes associated with them, which provide metadata about the
contents.
For example, let’s imagine we are trying to store some measurements of sea level at different locations
and dates. One way to organise it is shown in the image below:
We will store the locations of our sampling points in a dataset called locations, and the actual results
in a group called measurements. Within that group, we will have a dataset for each date we took samples
on, which will contain results for all locations on that date. For instance, if we are collecting data from 𝑁
locations at 𝑇 times per day, each dataset will be a 𝑁 ×𝑇 array of numerical values (integer or floating-point,
depending on how we want to record it).
One of the strengths of the HDF5 format is that a file can contain disparate kinds of data, of arbitrary
size and types. The attributes provide additional information about the meaning or provenance of the data,
and can even link to other datasets and groups within the file.
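To talk to HDF5 files from Python we use the h5py library introduced below; in a notebook it would typically have been installed with something like:
%pip install h5py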
Some distributions (like Anaconda) already include this library by default, in which case this command
will not do anything except report that the library is already installed.
Once installed, we must import it in our file like any other library:
Let’s create a new HDF5 file that mirrors the structure of the above example. We start by creating an
object that will represent this file in our program.
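A minimal sketch (the file name is illustrative):
import h5py
new_file = h5py.File('sea_levels.hdf5', 'w')  # 'w' creates a new, empty file for writing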
In the example, the file contains a dataset named locations and a group called measurements at the
root level. We can add these to our empty file using some of the methods that the file object provides.
In [4]: new_file.create_group('measurements')
Note that the library lets us create empty datasets, which can be populated later. In this case, however,
we initialise the dataset with some values at creation using the data argument.
The HDF5 file objects behave somewhat like Python dictionaries: we can access the new group with the
usual indexing syntax ([...]). This next section shows how to do that and how to add a dataset to the
group. Here, we add 4 measurements for each location for that day.
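A sketch of both steps, with illustrative values for two locations and four measurements each:
new_file.create_dataset('locations', data=[[55.0, -1.6], [51.5, -0.1]])      # dataset at the root level
measurements = new_file['measurements']                                      # dictionary-style access to the group
measurements.create_dataset('sea_level_20191012', data=[[10, 12, 7, 9],
                                                        [20, 18, 23, 19]])   # one row per location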
Out[5]: <HDF5 dataset "sea_level_20191012": shape (2, 4), type "<i8">
When we are done with writing to the file, we must make sure to close it, so that all the changes are
written to it (if they have not been already) and any used memory is released:
In [6]: new_file.close()
There is a different style for reading and writing files, which is safer and saves you the need to close the
file after you are finished. We can use this to read a file and iterate over its contents:
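A sketch that would produce the listing below, assuming the same illustrative file name as above:
with h5py.File('sea_levels.hdf5', 'r') as hdf_file:
    print('/ contains...')
    for name in hdf_file:
        print(name)
    print('/measurements contains...')
    for name in hdf_file['measurements']:
        print(name)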
/ contains…
locations
measurements
/measurements contains…
sea_level_20191012
This is similar to the with open(...) syntax we use to work with text files - it is another example of a
context manager.
There are many more ways you can access a file with h5py. If you are interested, you can look at the
quick-start guide from its documentation for an overview.
2.8.1 Importing Matplotlib
We import the pyplot object from Matplotlib, which provides us with an interface for making figures. We
usually abbreviate it.
We tell the Jupyter notebook to show the figures we generate alongside the code that created them, rather than
in a separate window. Lines beginning with a single percent are not python code: they control how the
notebook deals with python code.
Lines beginning with two percents are “cell magics”, that tell Jupyter notebook how to interpret the
particular cell; we’ve seen %%writefile, for example.
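The corresponding cells are not shown in this rendering; they would have been something like:
from matplotlib import pyplot as plt
%matplotlib inline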
The plot command returns a figure, just like the return value of any function. The notebook then displays
this.
To add a title, axis labels etc, we need to get that figure object, and manipulate it. For convenience,
matplotlib allows us to do this just by issuing commands to change the “current figure”:
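A sketch of that style (the data plotted here is an assumption; the title matches the output below):
from math import sin, pi
plt.plot([sin(pi * x / 100.0) for x in range(100)])
plt.title("Hello")   # acts on the "current figure"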
Out[4]: Text(0.5, 1.0, 'Hello')
But this requires us to keep all our commands together in a single cell, and makes use of a single, “global”
“current plot”, which, while convenient for quick exploratory sketches, is a bit cumbersome. To produce
proper plots from our notebook for use in papers, the library defines some types that let us treat individual
figures as variables, and manipulate these directly.
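The figure and axes were presumably created with something like:
sine_graph, sine_graph_axes = plt.subplots()   # a Figure and an Axes object we can hold in variables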
Once we have some axes, we can plot a graph on them:
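For example (the label is an assumption, used by the legend added later):
from math import sin, pi
sine_graph_axes.plot([sin(pi * x / 100.0) for x in range(100)], label="sin(x)")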
In [8]: sine_graph_axes.set_ylabel("f(x)")
Now we need to actually display the figure. As always with the notebook, if we make a variable be
returned by the last line of a code cell, it gets displayed:
In [10]: sine_graph
Out[10]:
We can add another curve:
In [12]: sine_graph
Out[12]:
A legend will help us distinguish the curves:
In [13]: sine_graph_axes.legend()
In [14]: sine_graph
Out[14]:
2.8.5 Saving figures
We must be able to save figures to disk, in order to use them in papers. This is really easy:
In [15]: sine_graph.savefig('my_graph.png')
In order to be able to check that it worked, we need to know how to display an arbitrary image in the
notebook.
The programmatic way is like this:
In [16]: from IPython.display import Image # Get the notebook's own library for manipulating itself.
Image(filename='my_graph.png')
Out[16]:
2.8.6 Subplots
We might have wanted the sin and cos graphs on separate axes:
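The cells creating the figure and its two axes are not shown; presumably something like:
double_graph = plt.figure()
sin_axes = double_graph.add_subplot(2, 1, 1)   # two rows, one column, first plot
cos_axes = double_graph.add_subplot(2, 1, 2)   # two rows, one column, second plot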
In [20]: double_graph
Out[20]:
In [21]: sin_axes.plot([sin(pi * x / 100.0) for x in range(100)])
In [22]: sin_axes.set_ylabel("sin(x)")
In [24]: cos_axes.set_ylabel("cos(x)")
In [26]: double_graph
Out[26]:
2.8.7 Versus plots
When we specify a single list to plot, the x-values are just the array index number. We usually want to
plot something more meaningful:
In [28]: sin_axes.plot([x / 100.0 for x in range(100)], [sin(pi * x / 100.0) for x in range(100)])
cos_axes.plot([x / 100.0 for x in range(100)], [cos(pi * x / 100.0) for x in range(100)])
Out[28]: [<matplotlib.lines.Line2D at 0x7f529e1d7fd0>]
In [29]: double_graph
Out[29]:
2.8.8 Learning More
There’s so much more to learn about matplotlib: pie charts, bar charts, heat maps, 3-d plotting, animated
plots, and so on. You can learn all this via the Matplotlib Website. You should try to get comfortable with
all this, so please use some time in class, or at home, to work your way through a bunch of the examples.
2.9 NumPy
2.9.1 The Scientific Python Trilogy
Why is Python so popular for research work?
MATLAB has typically been the most popular “language of technical computing”, with strong built-in
support for efficient numerical analysis with matrices (the mat in MATLAB is for Matrix, not Maths), and
plotting.
Other languages arguably have cleaner, more logical syntax (Ruby, Haskell), but Python users developed
three critical libraries, matching the power of MATLAB for scientific work: Matplotlib, NumPy, and IPython.
By combining a plotting library, a matrix maths library, and an easy-to-use interface allowing live plotting
commands in a persistent environment, the powerful capabilities of MATLAB were matched by a free and
open toolchain.
We’ve learned about Matplotlib and IPython in this course already. NumPy is the last part of the trilogy.
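The definition of x is not shown in this rendering; any nested Python list with x[2][2] == 2 fits, for example:
x = [list(range(5)), list(range(5)), list(range(5))]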
In [2]: x
In [3]: x[2][2]
Out[3]: 2
In [4]: x + 5
---------------------------------------------------------------------------
<ipython-input-4-9e8324a7b754> in <module>
----> 1 x + 5
Hello
HelloHello
HelloHelloHello
HelloHelloHelloHello
We can also see our first weakness of NumPy arrays versus Python lists:
In [10]: my_array.append(4)
---------------------------------------------------------------------------
<ipython-input-10-b12177763178> in <module>
----> 1 my_array.append(4)
For NumPy arrays, you typically don’t change the data size once you’ve defined your array, whereas for
Python lists, you can do this efficiently. However, you get back lots of goodies in return…
2.9.4 Elementwise Operations
Most operations can be applied element-wise automatically!
In [11]: my_array + 2
These “vectorized” operations are very fast: (the %%timeit magic reports how long it takes to run a cell;
there is more information available if interested)
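The definitions themselves are not shown here; a sketch with an illustrative size:
import numpy as np
big_list = list(range(10000))
big_array = np.arange(10000)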
In [13]: %%timeit
[x**2 for x in big_list]
2.99 ms ± 28.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [14]: %%timeit
big_array**2
5.03 µs ± 18.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
This is similar to Python’s range, although note that we can’t use non-integer steps with the latter!
---------------------------------------------------------------------------
<ipython-input-16-90c31a0aefc9> in <module>
----> 1 y = list(range(0, 10, 0.1))
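np.arange happily takes a float step, and np.linspace is another option; the values array shown below was presumably created with something like:
import numpy as np
y = np.arange(0, 10, 0.1)            # float steps are fine here
values = np.linspace(0, np.pi, 100)  # 100 evenly spaced values from 0 to pi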
In [18]: values
Out[18]: array([0. , 0.03173326, 0.06346652, 0.09519978, 0.12693304,
0.1586663 , 0.19039955, 0.22213281, 0.25386607, 0.28559933,
0.31733259, 0.34906585, 0.38079911, 0.41253237, 0.44426563,
0.47599889, 0.50773215, 0.53946541, 0.57119866, 0.60293192,
0.63466518, 0.66639844, 0.6981317 , 0.72986496, 0.76159822,
0.79333148, 0.82506474, 0.856798 , 0.88853126, 0.92026451,
0.95199777, 0.98373103, 1.01546429, 1.04719755, 1.07893081,
1.11066407, 1.14239733, 1.17413059, 1.20586385, 1.23759711,
1.26933037, 1.30106362, 1.33279688, 1.36453014, 1.3962634 ,
1.42799666, 1.45972992, 1.49146318, 1.52319644, 1.5549297 ,
1.58666296, 1.61839622, 1.65012947, 1.68186273, 1.71359599,
1.74532925, 1.77706251, 1.80879577, 1.84052903, 1.87226229,
1.90399555, 1.93572881, 1.96746207, 1.99919533, 2.03092858,
2.06266184, 2.0943951 , 2.12612836, 2.15786162, 2.18959488,
2.22132814, 2.2530614 , 2.28479466, 2.31652792, 2.34826118,
2.37999443, 2.41172769, 2.44346095, 2.47519421, 2.50692747,
2.53866073, 2.57039399, 2.60212725, 2.63386051, 2.66559377,
2.69732703, 2.72906028, 2.76079354, 2.7925268 , 2.82426006,
2.85599332, 2.88772658, 2.91945984, 2.9511931 , 2.98292636,
3.01465962, 3.04639288, 3.07812614, 3.10985939, 3.14159265])
Regardless of the method used, the array of values that we get can be used in the same way.
In fact, NumPy comes with “vectorised” versions of common functions which work element-by-element
when applied to arrays:
In [19]: %matplotlib inline
2.9.6 Multi-Dimensional Arrays
NumPy’s true power comes from multi-dimensional arrays:
[[0., 0.],
[0., 0.],
[0., 0.],
[0., 0.]],
[[0., 0.],
[0., 0.],
[0., 0.],
[0., 0.]]])
In [21]: x = np.array(range(40))
x
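The array printed below is presumably a reshaped view of x, for example:
y = x.reshape([4, 5, 2])   # consistent with the indexing results further down, e.g. y[3, 2, 1] == 35
y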
[[10, 11],
[12, 13],
[14, 15],
[16, 17],
[18, 19]],
[[20, 21],
[22, 23],
[24, 25],
[26, 27],
[28, 29]],
[[30, 31],
[32, 33],
[34, 35],
[36, 37],
[38, 39]]])
And index along multiple dimensions at once:
In [23]: y[3, 2, 1]
Out[23]: 35
Including selecting on inner axes while taking all from the outermost:
In [24]: y[:, 2, 1]
Out[24]: array([ 5, 15, 25, 35])
And subselecting ranges:
In [25]: y[2:, :1, :] # Last two blocks along the first axis, first row only, all columns
Out[25]: array([[[20, 21]],
[[30, 31]]])
And transpose arrays:
In [26]: y.transpose()
Out[26]: array([[[ 0, 10, 20, 30],
[ 2, 12, 22, 32],
[ 4, 14, 24, 34],
[ 6, 16, 26, 36],
[ 8, 18, 28, 38]],
2.9.7 Array Datatypes
A Python list can contain data of mixed type:
In [33]: x = ['hello', 2, 3.4]
In [34]: type(x[2])
Out[34]: float
In [35]: type(x[1])
Out[35]: int
A NumPy array always contains just one datatype:
In [36]: np.array(x)
Out[36]: array(['hello', '2', '3.4'], dtype='<U5')
NumPy will choose the least-generic-possible datatype that can contain the data:
In [37]: y = np.array([2, 3.4])
In [38]: y
Out[38]: array([2. , 3.4])
You can access the array’s dtype, or check the type of individual elements:
In [39]: y.dtype
Out[39]: dtype('float64')
In [40]: type(y[0])
Out[40]: numpy.float64
In [41]: z = np.array([3, 4, 5])
z
Out[41]: array([3, 4, 5])
In [42]: type(z[0])
Out[42]: numpy.int64
The results are, when you get to know them, fairly obvious string codes for datatypes: NumPy supports
all kinds of datatypes beyond the python basics.
NumPy will convert python type names to dtypes:
In [43]: x = [2, 3.4, 7.2, 0]
In [44]: int_array = np.array(x, dtype=int)
In [45]: float_array = np.array(x, dtype=float)
In [46]: int_array
Out[46]: array([2, 3, 7, 0])
In [47]: float_array
Out[47]: array([2. , 3.4, 7.2, 0. ])
In [48]: int_array.dtype
Out[48]: dtype('int64')
In [49]: float_array.dtype
Out[49]: dtype('float64')
2.9.8 Broadcasting
This is another really powerful feature of NumPy.
By default, array operations are element-by-element:
---------------------------------------------------------------------------
<ipython-input-51-d87da4b8a218> in <module>
----> 1 np.arange(5) * np.arange(6)
ValueError: operands could not be broadcast together with shapes (5,) (6,)
---------------------------------------------------------------------------
<ipython-input-52-b6b30bdbcb53> in <module>
----> 1 np.zeros([2,3]) * np.zeros([2,4])
ValueError: operands could not be broadcast together with shapes (2,3) (2,4)
In [55]: m1 + m2
---------------------------------------------------------------------------
<ipython-input-55-92db99ada483> in <module>
----> 1 m1 + m2
ValueError: operands could not be broadcast together with shapes (10,10) (10,5,2)
In [56]: np.ones([3, 3]) * np.ones([3, 3]) # Note elementwise multiply, *not* matrix multiply.
Out[56]: array([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]])
Except that if one array has size 1 along a dimension, then its data is repeated (broadcast) along that dimension to match the other.
In [57]: col = np.arange(10).reshape([10, 1])
col
Out[57]: array([[0],
[1],
[2],
[3],
[4],
[5],
[6],
[7],
[8],
[9]])
In [58]: row = col.transpose()
row
Out[58]: array([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]])
In [59]: col.shape # "Column Vector"
Out[59]: (10, 1)
In [60]: row.shape # "Row Vector"
Out[60]: (1, 10)
In [61]: row + col
Out[61]: array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
[ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
[ 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
[ 4, 5, 6, 7, 8, 9, 10, 11, 12, 13],
[ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
[ 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
[ 7, 8, 9, 10, 11, 12, 13, 14, 15, 16],
[ 8, 9, 10, 11, 12, 13, 14, 15, 16, 17],
[ 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]])
In [62]: 10 * row + col
Out[62]: array([[ 0, 10, 20, 30, 40, 50, 60, 70, 80, 90],
[ 1, 11, 21, 31, 41, 51, 61, 71, 81, 91],
[ 2, 12, 22, 32, 42, 52, 62, 72, 82, 92],
[ 3, 13, 23, 33, 43, 53, 63, 73, 83, 93],
[ 4, 14, 24, 34, 44, 54, 64, 74, 84, 94],
[ 5, 15, 25, 35, 45, 55, 65, 75, 85, 95],
[ 6, 16, 26, 36, 46, 56, 66, 76, 86, 96],
[ 7, 17, 27, 37, 47, 57, 67, 77, 87, 97],
[ 8, 18, 28, 38, 48, 58, 68, 78, 88, 98],
[ 9, 19, 29, 39, 49, 59, 69, 79, 89, 99]])
This works for arrays with more than one unit dimension.
2.9.9 Newaxis
Broadcasting is very powerful, and numpy allows indexing with np.newaxis to temporarily create new
one-long dimensions on the fly.
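The arrays used in the cells below are not shown being created; a reconstruction consistent with the shapes and the sum that follow (the exact values are assumptions):
import numpy as np
x = np.arange(10).reshape(2, 5)
y = np.arange(8).reshape(2, 2, 2)
x_dash = x[:, :, np.newaxis, np.newaxis]   # shape (2, 5, 1, 1)
y_dash = y[:, np.newaxis, :, :]            # shape (2, 1, 2, 2)
res = x_dash * y_dash                      # broadcasts to shape (2, 5, 2, 2)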
In [64]: x
In [65]: y
[[4, 5],
[6, 7]]])
Out[66]: (2, 5, 1, 1)
Out[67]: (2, 1, 2, 2)
In [69]: res.shape
Out[69]: (2, 5, 2, 2)
In [70]: np.sum(res)
Out[70]: 830
Note that newaxis works because a 3 × 1 × 3 array and a 3 × 3 array contain the same data, differently
shaped:
[[3, 4, 5]],
[[6, 7, 8]]])
2.9.10 Dot Products
NumPy multiply is element-by-element, not a dot-product:
In [73]: a = np.arange(9).reshape(3, 3)
a
In [75]: a * b
In [76]: np.dot(a, b)
Out[77]: (3, 3, 1)
Out[78]: (1, 3, 3)
[[ 9, 12, 15],
[24, 28, 32],
[45, 50, 55]],
Out[80]: array([[ 24, 27, 30],
[ 78, 90, 102],
[132, 153, 174]])
Or if you prefer:
[[ 9, 12, 15],
[24, 28, 32],
[45, 50, 55]],
Then we sum over the middle, j, axis [which is the 1-axis of the three axes numbered (0, 1, 2)] of this 3-d
array. Thus we generate Σ_j A_ij B_jk.
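A sketch of that computation, assuming a as above and b = np.arange(3, 12).reshape(3, 3), which is consistent with the results shown:
b = np.arange(3, 12).reshape(3, 3)                   # assumed definition of b
stacked = a[:, :, np.newaxis] * b[np.newaxis, :, :]  # shape (3, 3, 3); element [i, j, k] is A_ij * B_jk
stacked.sum(axis=1)                                  # sum over j: the same answer as np.dot(a, b)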
We can see that the broadcasting concept gives us a powerful and efficient way to express many linear
algebra operations computationally.
In [85]: record_x
Record arrays can be addressed with field names, as if they were a dictionary:
In [86]: record_x['col1']
In [89]: iszero = x == y
iszero
In [90]: y[np.logical_not(iszero)]
Although this prints as a flat list, if we assign to it, the selected elements of the array are
changed!
In [91]: y[iszero] = 5
In [92]: y
2.9.13 Numpy memory
Numpy memory management can be tricksy:
In [93]: x = np.arange(5)
y = x[:]
In [94]: y[2] = 0
x
In [95]: x = list(range(5))
y = x[:]
In [96]: y[2] = 0
x
Out[96]: [0, 1, 2, 3, 4]
We must use np.copy to force separate memory. Otherwise NumPy tries its hardest to make slices be
views on data.
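A small sketch of the difference:
x = np.arange(5)
y = np.copy(x)   # a genuinely separate array, not a view
y[2] = 0
x                # -> array([0, 1, 2, 3, 4]); x is untouched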
Now, this has all been very theoretical, but let’s go through a practical example, and see how powerful
NumPy can be.
2.10.1 Flocking
The aggregate motion of a flock of birds, a herd of land animals, or a school of fish is a beautiful
and familiar part of the natural world… The aggregate motion of the simulated flock is created by
a distributed behavioral model much like that at work in a natural flock; the birds choose their
own course. Each simulated bird is implemented as an independent actor that navigates according
to its local perception of the dynamic environment, the laws of simulated physics that rule its
motion, and a set of behaviors programmed into it… The aggregate motion of the simulated flock
is the result of the dense interaction of the relatively simple behaviors of the individual simulated
birds.
– Craig W. Reynolds, “Flocks, Herds, and Schools: A Distributed Behavioral Model”, Computer Graphics
21 4 1987, pp 25-34
The model includes three main behaviours which, together, give rise to “flocking”. In the words of the paper:
• Collision Avoidance: avoid collisions with nearby flockmates
• Velocity Matching: attempt to match velocity with nearby flockmates
• Flock Centering: attempt to stay close to nearby flockmates
2.10.2 Setting up the Boids
Our boids will each have an x velocity and a y velocity, and an x position and a y position.
We’ll build this up in NumPy notation, and eventually, have an animated simulation of our flying boids.
In [2]: boid_count = 10
In [5]: positions.shape
We used broadcasting with np.newaxis to apply our upper limit to each boid. rand gives us a random
number between 0 and 1. We multiply by our limits to get a number up to that limit.
Out[6]: array([[2000],
[2000]])
Out[7]: (2, 1)
For example, let’s assume that we want our initial positions to vary between 100 and 200 in the x axis,
and 900 and 1100 in the y axis. We can generate random positions within these constraints with:
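The helper is not shown in this rendering; a sketch of how new_flock might look, using the broadcasting idea described above (the function body is an assumption):
import numpy as np

def new_flock(count, lower_limits, upper_limits):
    width = upper_limits - lower_limits
    return (lower_limits[:, np.newaxis] +
            np.random.rand(2, count) * width[:, np.newaxis])

positions = new_flock(boid_count, np.array([100, 900]), np.array([200, 1100]))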
But each bird will also need a starting velocity. Let’s make these random too:
We can reuse the new_flock function defined above, since we’re again essentially just generating random
numbers from given limits. This saves us some code, but keep in mind that using a function for something
other than what its name indicates can become confusing!
Here, we will let the initial x velocities range over [0, 10] and the y velocities over [−20, 20].
figure = plt.figure()
axes = plt.axes(xlim=(0, limits[0]), ylim=(0, limits[1]))
scatter = axes.scatter(positions[0, :], positions[1, :],
marker='o', edgecolor='k', lw=0.5)
scatter
Then, we define a function which updates the figure for each timestep
def animate(frame):
    update_boids(positions, velocities)
    scatter.set_offsets(positions.transpose())
2.10.5 Fly towards the middle
Boids try to fly towards the middle:
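The update rule itself is not shown here; a minimal sketch of a fly-towards-the-middle step (the strength constant is an assumption):
def update_boids(positions, velocities):
    move_to_middle_strength = 0.01                  # assumed tuning constant
    middle = np.mean(positions, 1)                  # average position of the flock
    direction_to_middle = positions - middle[:, np.newaxis]
    velocities -= direction_to_middle * move_to_middle_strength
    positions += velocities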
In [19]: positions
In [20]: velocities
2.10.6 Avoiding collisions
We’ll want to add our other flocking rules to the behaviour of the Boids.
We’ll need a matrix giving the distances between each bird. This should be 𝑁 × 𝑁 .
In [31]: xsep_matrix.shape
Out[31]: (4, 4)
In [32]: xsep_matrix
But in NumPy we can be cleverer than that, and make a 2 × 𝑁 × 𝑁 matrix of separations:
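A sketch of how such a matrix can be built with broadcasting (here N is 4, matching the shapes shown below):
separations = positions[:, np.newaxis, :] - positions[:, :, np.newaxis]   # shape (2, N, N)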
In [34]: separations.shape
Out[34]: (2, 4, 4)
And then we can get the sum-of-squares 𝛿𝑥2 + 𝛿𝑦2 like this:
In [37]: square_distances
Find the direction distances only to those birds which are too close:
In [39]: separations_if_close = np.copy(separations)
far_away = np.logical_not(close_birds)
Set x and y values in separations_if_close to zero if they are far away:
In [40]: separations_if_close[0, :, :][far_away] = 0
separations_if_close[1, :, :][far_away] = 0
separations_if_close
Out[40]: array([[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]],
positions += velocities
In [44]: def animate(frame):
             update_boids(positions, velocities)
             scatter.set_offsets(positions.transpose())
Out[44]: <IPython.core.display.HTML object>
positions += velocities
Hopefully the power of NumPy should be pretty clear now. This would be enormously slower and, I
think, harder to understand using traditional lists.
Now, we can even write it up into a class, and save it as a module. Remember that it is generally a
better idea to create files in an editor or integrated development environment (IDE) rather than through the
notebook!
In [1]: %%bash
mkdir -p greengraph # Create the folder for the module (on mac or linux)
import geopy.geocoders  # needed by the geocoder below; the file's full import list is not shown in this rendering

class Greengraph(object):
    def __init__(self, start, end):
        self.start = start
        self.end = end
        self.geocoder = geopy.geocoders.Yandex(lang="en_US")
Writing greengraph/graph.py
import numpy as np
from io import BytesIO
import imageio as img
import requests
class Map(object):
    def __init__(self, lat, long, satellite=True, zoom=10,
                 size=(400, 400), sensor=False):
        base = "https://static-maps.yandex.ru/1.x/?"
        params = dict(
            z=zoom,
            size=str(size[0]) + "," + str(size[1]),
            ll=str(long) + "," + str(lat),
            l="sat" if satellite else "map",
            lang="en_US"
        )
        self.image = requests.get(
            base, params=params).content  # Fetch our PNG image data
        content = BytesIO(self.image)
        self.pixels = img.imread(content)  # Parse our PNG image as a numpy array
Writing greengraph/map.py
Writing greengraph/__init__.py
---------------------------------------------------------------------------
~/virtualenv/python3.7.5/lib/python3.7/site-packages/geopy/geocoders/base.py in _call_geocoder(s
354 try:
--> 355 page = requester(req, timeout=timeout, **kwargs)
356 except Exception as error:
<ipython-input-5-a69e6d6508d4> in <module>
4
5 mygraph = Greengraph('New York', 'Chicago')
----> 6 data = mygraph.green_between(20)
~/build/UCL/rsd-engineeringcourse/ch01data/greengraph/graph.py in geolocate(self, place)
11
12 def geolocate(self, place):
---> 13 return self.geocoder.geocode(place, exactly_one=False)[0][1]
14
15 def location_sequence(self, start, end, steps):
~/virtualenv/python3.7.5/lib/python3.7/site-packages/geopy/geocoders/yandex.py in geocode(self,
114 logger.debug("%s.geocode: %s", self.__class__.__name__, url)
115 return self._parse_json(
--> 116 self._call_geocoder(url, timeout=timeout),
117 exactly_one,
118 )
~/virtualenv/python3.7.5/lib/python3.7/site-packages/geopy/geocoders/base.py in _call_geocoder(s
371 exc_info=False)
372 try:
--> 373 raise ERROR_CODE_MAP[code](message)
374 except KeyError:
375 raise GeocoderServiceError(message)
In [6]: plt.plot(data)
---------------------------------------------------------------------------
<ipython-input-6-727d88478626> in <module>
----> 1 plt.plot(data)
2.12 Introduction
2.12.1 What’s version control?
Version control is a tool for managing changes to a set of files.
There are many different version control systems:
• Git
• Mercurial (hg)
• CVS
• Subversion (svn)
• …
2.12.2 Why use version control?
• Better kind of backup.
• Review history (“When did I introduce this bug?”).
• Restore older code versions.
• Ability to undo mistakes.
• Maintain several versions of the code at a time.
Graham                      Eric
my_vcs commit               …
…                           Join the team
…                           my_vcs checkout
…                           Do some programming
…                           my_vcs commit
my_vcs update               …
Do some programming         Do some programming
my_vcs commit               …
my_vcs update               …
my_vcs merge                …
my_vcs commit               …
2.12.6 Scope
This course will use the git version control system, but much of what you learn will be valid with other
version control tools you may encounter, including subversion (svn) and mercurial (hg).
2.13 Practising with Git
2.13.1 Example Exercise
In this course, we will use, as an example, the development of a few text files containing a description of a
topic of your choice.
This could be your research, a hobby, or something else. In the end, we will show you how to display the
content of these files as a very simple website.
2.13.3 Markdown
The text files we create will use a simple “wiki” markup style called markdown to show formatting. This is
the convention used in this file, too.
You can view the content of this file in the way Markdown renders it by looking on the web, and compare
the raw text.
In [1]: %%bash
echo some output
some output
Writing somefile.md
But if you are following along, you should edit the file using a text editor. On either Windows, Mac or
Linux, we recommend VS Code.
In [4]: import os
top_dir = os.getcwd()
top_dir
Out[4]: '/home/travis/build/UCL/rsd-engineeringcourse/ch02git'
Out[5]: '/home/travis/build/UCL/rsd-engineeringcourse/ch02git/learning_git'
In [7]: os.chdir(working_dir)
In [8]: %%bash
git config --global user.name "Lancelot the Brave"
git config --global user.email "[email protected]"
In [9]: %%bash
pwd # Note where we are standing-- MAKE SURE YOU INITIALISE THE RIGHT FOLDER
git init
/home/travis/build/UCL/rsd-engineeringcourse/ch02git/learning_git/git_example
Initialized empty Git repository in /home/travis/build/UCL/rsd-engineeringcourse/ch02git/learning_git/gi
In [10]: %%bash
ls
In [11]: %%bash
git status
On branch master
No commits yet
2.15 Solo work with Git
So, we’re in our git working directory:
In [1]: import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir = os.path.join(git_dir, 'git_example')
os.chdir(working_dir)
working_dir
Out[1]: '/home/travis/build/UCL/rsd-engineeringcourse/ch02git/learning_git/git_example'
2.15.4 Configuring Git with your editor
If you don’t type in the log message directly with -m “Some message”, then an editor will pop up, to allow
you to edit your message on the fly.
For this to work, you have to tell git where to find your editor.
In [6]: %%bash
git config --global core.editor vim
In [7]: %%bash
git config --get core.editor
vim
To configure Notepad++ on Windows you'll need something like the command below; ask a demonstrator to help set it up
for your machine.
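For example (the exact path depends on where Notepad++ is installed):
git config --global core.editor "'C:/Program Files/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin"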
I’m going to be using vim as my editor, but you can use whatever editor you prefer. Find how to setup
your favourite editor in the setup chapter of Software Carpentry’s Git lesson.
In [8]: %%bash
git log
commit 8db2c9c5e612a7c5ca7eeab79121fbee6b8f0f6f
Author: Lancelot the Brave <[email protected]>
Date: Fri Jan 17 18:57:30 2020 +0000
In [9]: %%bash
git status
On branch master
nothing to commit, working tree clean
vim index.md
Overwriting index.md
Mountains in the UK
===================
England is not very mountainous.
But has some tall hills, and maybe a mountain or two depending on your definition.
On branch master
Changes not staged for commit:
(use "git add <file>…" to update what will be committed)
(use "git restore <file>…" to discard changes in working directory)
modified: index.md
no changes added to commit (use "git add" and/or "git commit -a")
We can now see that there is a change to “index.md” which is currently “not staged for commit”. What
does this mean?
If we do a git commit now nothing will happen.
Git will only commit changes to files that you choose to include in each commit.
This is a difference from other version control systems, where committing will affect all changed files.
We can see the differences in the file with:
In [13]: %%bash
git diff
+++ b/index.md
@@ -2,3 +2,5 @@ Mountains in the UK
===================
England is not very mountainous.
But has some tall hills, and maybe a mountain or two depending on your definition.
+
+Mount Fictional, in Barsetshire, U.K. is the tallest mountain in the world.
Deleted lines are prefixed with a minus, added lines prefixed with a plus.
In [14]: %%bash
git add --update
This says “include in the next commit, all files which have ever been included before”.
Note that git add is the command we use to introduce git to a new file, but also the command we use
to “stage” a file to be included in the next commit.
import re
import requests
import IPython.core.display

def wsd(code):
    response = requests.post("http://www.websequencediagrams.com/index.php", data={
        'message': code,
        'apiVersion': 1,
    })
    expr = re.compile(r"(\?(img|pdf|png|svg)=[a-zA-Z0-9]+)")
    m = expr.search(response.text)
    if m is None:
        print("Invalid response from server.")
        return False
    image = requests.get("http://www.websequencediagrams.com/" + m.group(0))
    return IPython.core.display.Image(image.content)
Writing wsd.py
Out[16]:
In [17]: message="""
Working Directory -> Staging Area : git add
Staging Area -> Local Repository : git commit
Working Directory -> Local Repository : git commit -a
"""
wsd(message)
Out[17]:
2.15.13 Review of status
In [18]: %%bash
git status
On branch master
Changes to be committed:
(use "git restore --staged <file>…" to unstage)
modified: index.md
Untracked files:
(use "git add <file>…" to include in what will be committed)
__pycache__/
wsd.py
In [19]: %%bash
git commit -m "Add a lie about a mountain"
In [20]: %%bash
git log
commit 9aa861a3e3fc30830515a1673f565dd7c8b988dd
Author: Lancelot the Brave <[email protected]>
Date: Fri Jan 17 18:57:31 2020 +0000
commit 8db2c9c5e612a7c5ca7eeab79121fbee6b8f0f6f
Author: Lancelot the Brave <[email protected]>
Date: Fri Jan 17 18:57:30 2020 +0000
First commit of discourse on UK topography
vim index.md
Overwriting index.md
This last command, git commit -a, automatically adds the changes to all tracked files to the staging area
as part of the commit command. So, if you only ever want to commit the changes to all your tracked files
at once, you can use this and forget about the staging area!
commit c9b2323938b782b0cf1231038ff2855097c3f7f1
Author: Lancelot the Brave <[email protected]>
Date: Fri Jan 17 18:57:31 2020 +0000
Change title
commit 9aa861a3e3fc30830515a1673f565dd7c8b988dd
Author: Lancelot the Brave <[email protected]>
Date: Fri Jan 17 18:57:31 2020 +0000
In [25]: %%bash
git log --oneline
In [26]: message="""
participant "Cleese's repo" as R
participant "Cleese's index" as I
participant Cleese as C
Out[26]:
2.16 Fixing mistakes
We’re still in our git working directory:
In [1]: import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir = os.path.join(git_dir, 'git_example')
os.chdir(working_dir)
working_dir
Out[1]: '/home/travis/build/UCL/rsd-engineeringcourse/ch02git/learning_git/git_example'
2.16.1 Referring to changes with HEAD and ^
The commit we want to revert to is the one before the latest.
HEAD refers to the latest commit. That is, we want to go back to the change before the current HEAD.
We could use the hash code (e.g. 73fbeaf) to reference this, but you can also refer to the commit before
the HEAD as HEAD^, the one before that as HEAD^^, the one before that as HEAD~3.
2.16.2 Reverting
Ok, so now we’d like to undo the nasty commit with the lie about Mount Fictional.
In [2]: %%bash
git revert HEAD^
Auto-merging index.md
[master c124739] Revert "Add a lie about a mountain"
Date: Fri Jan 17 18:57:34 2020 +0000
1 file changed, 2 deletions(-)
An editor may pop up, with some default text which you can accept and save.
In [3]: %%bash
git log --date=short
commit c1247395a9617120445afe96dc1cb74cbaed57d6
Author: Lancelot the Brave <[email protected]>
Date: 2020-01-17
commit c9b2323938b782b0cf1231038ff2855097c3f7f1
Author: Lancelot the Brave <[email protected]>
Date: 2020-01-17
Change title
commit 9aa861a3e3fc30830515a1673f565dd7c8b988dd
Author: Lancelot the Brave <[email protected]>
Date: 2020-01-17
Add a lie about a mountain
commit 8db2c9c5e612a7c5ca7eeab79121fbee6b8f0f6f
Author: Lancelot the Brave <[email protected]>
Date: 2020-01-17
2.16.5 Antipatch
Notice how the mistake has stayed in the history.
There is a new commit which undoes the change: this is colloquially called an “antipatch”. This is nice:
you have a record of the full story, including the mistake and its correction.
Overwriting index.md
In [5]: %%bash
cat index.md
In [6]: %%bash
git diff
+mountain or two depending on your definition.
In [7]: %%bash
git commit -am "Add a silly spelling"
[master ffaeb55] Add a silly spelling
1 file changed, 3 insertions(+), 2 deletions(-)
In [8]: %%bash
git log --date=short
commit ffaeb556011d90ae3bc9f74bd591e0c34fb37984
Author: Lancelot the Brave <[email protected]>
Date: 2020-01-17
commit c1247395a9617120445afe96dc1cb74cbaed57d6
Author: Lancelot the Brave <[email protected]>
Date: 2020-01-17
commit c9b2323938b782b0cf1231038ff2855097c3f7f1
Author: Lancelot the Brave <[email protected]>
Date: 2020-01-17
Change title
commit 9aa861a3e3fc30830515a1673f565dd7c8b988dd
Author: Lancelot the Brave <[email protected]>
Date: 2020-01-17
commit 8db2c9c5e612a7c5ca7eeab79121fbee6b8f0f6f
Author: Lancelot the Brave <[email protected]>
Date: 2020-01-17
In [10]: %%bash
git log --date=short
commit c1247395a9617120445afe96dc1cb74cbaed57d6
Author: Lancelot the Brave <[email protected]>
Date: 2020-01-17
commit c9b2323938b782b0cf1231038ff2855097c3f7f1
Author: Lancelot the Brave <[email protected]>
Date: 2020-01-17
Change title
commit 9aa861a3e3fc30830515a1673f565dd7c8b988dd
Author: Lancelot the Brave <[email protected]>
Date: 2020-01-17
commit 8db2c9c5e612a7c5ca7eeab79121fbee6b8f0f6f
Author: Lancelot the Brave <[email protected]>
Date: 2020-01-17
In [11]: %%bash
cat index.md
If you want to lose the change from the working directory as well, you can do git reset --hard.
I’m going to get rid of the silly spelling, and I didn’t do --hard, so I’ll reset the file from the working
directory to be the same as in the index:
In [12]: %%bash
git checkout index.md
Updated 1 path from the index
In [13]: %%bash
cat index.md
In [14]: message="""
Working Directory -> Staging Area : git add
Staging Area -> Local Repository : git commit
Working Directory -> Local Repository : git commit -a
Staging Area -> Working Directory : git checkout
Local Repository -> Staging Area : git reset
Local Repository -> Working Directory: git reset --hard
"""
from wsd import wsd
%matplotlib inline
wsd(message)
Out[14]:
In [15]: message="""
participant "Cleese's repo" as R
participant "Cleese's index" as I
participant Cleese as C
"""
wsd(message)
Out[15]:
2.17 Publishing
We’re still in our working directory:
In [1]: import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir = os.path.join(git_dir, 'git_example')
os.chdir(working_dir)
working_dir
Out[1]: '/home/travis/build/UCL/rsd-engineeringcourse/ch02git/learning_git/git_example'
Fill in a short name and a description. Choose a “public” repository. Don’t choose to initialize the
repository with a README: that would create a repository with content, and we only want an empty placeholder
to which we can upload what we’ve created locally.
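The commands that connected the local repository to GitHub are not shown in this rendering; they would have been along these lines (the URL matches the push output below; -f forces the push because this example repository is rebuilt from scratch):
git remote add origin [email protected]:UCL/github-example.git
git push -uf origin master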
Warning: Permanently added the RSA host key for IP address '140.82.113.4' to the list of known hosts.
To github.com:UCL/github-example.git
+ 7fd37ad…c124739 master -> master (forced update)
2.17.5 Remotes
The first command sets up the server as a new remote, called origin.
Git, unlike some earlier version control systems, is a “distributed” version control system, which means
you can work with multiple remote servers.
Usually, commands that work with remotes allow you to specify the remote to use, but assume the origin
remote if you don’t.
Here, git push will push your whole history onto the server, and now you’ll be able to see it on the
internet! Refresh your web browser where the instructions were, and you’ll see your repository!
Let’s add these commands to our diagram:
In [4]: message="""
Working Directory -> Staging Area : git add
Staging Area -> Local Repository : git commit
Working Directory -> Local Repository : git commit -a
Staging Area -> Working Directory : git checkout
Local Repository -> Staging Area : git reset
Local Repository -> Working Directory: git reset --hard
Local Repository -> Remote Repository : git push
"""
from wsd import wsd
%matplotlib inline
wsd(message)
Out[4]:
vim lakeland.md
Writing lakeland.md
Lakeland
========
2.18.2 Git will not by default commit your new file
In [7]: %%bash --no-raise-error
git commit -am "Try to add Lakeland"
On branch master
Your branch is up to date with 'origin/master'.
Untracked files:
__pycache__/
lakeland.md
wsd.py
This didn’t do anything, because we’ve not told git to track the new file yet.
Ok, now we have added the change about Cumbria to the file. Let’s publish it to the origin repository.
In [9]: %%bash
git push
To github.com:UCL/github-example.git
c124739..28202e2 master -> master
Visit GitHub, and notice this change is on your repository on the server. We could have said git push
origin to specify the remote to use, but origin is the default.
Mountains:
* Helvellyn
Overwriting lakeland.md
In [11]: %%writefile index.md
Mountains and Lakes in the UK
===================
Engerland is not very mountainous.
But has some tall hills, and maybe a
mountain or two depending on your definition.
Overwriting index.md
In [12]: %%bash
git status
On branch master
Your branch is up to date with 'origin/master'.
Untracked files:
(use "git add <file>…" to include in what will be committed)
__pycache__/
wsd.py
no changes added to commit (use "git add" and/or "git commit -a")
These changes should really be separate commits. We can do this with careful use of git add, to stage
first one commit, then the other.
In [13]: %%bash
git add index.md
git commit -m "Include lakes in the scope"
Because we “staged” only index.md, the changes to lakeland.md were not included in that commit.
In [14]: %%bash
git commit -am "Add Helvellyn"
In [15]: %%bash
git log --oneline
c124739 Revert "Add a lie about a mountain"
c9b2323 Change title
9aa861a Add a lie about a mountain
8db2c9c First commit of discourse on UK topography
In [16]: %%bash
git push
To github.com:UCL/github-example.git
28202e2..e899c3c master -> master
In [17]: message="""
participant "Cleese's remote" as M
participant "Cleese's repo" as R
participant "Cleese's index" as I
participant Cleese as C
Out[17]:
2.20 Collaboration
2.20.1 Form a team
Now we’re going to get to the most important question of all with Git and GitHub: working with others.
Organise into pairs. You’re going to be working on the website of one of the two of you, together, so
decide who is going to be the leader, and who the collaborator.
In [1]: import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir = os.path.join(git_dir, 'git_example')
os.chdir(git_dir)
In [2]: %%bash
pwd
rm -rf github-example # cleanup after previous example
rm -rf partner_repo # cleanup after previous example
/home/travis/build/UCL/rsd-engineeringcourse/ch02git/learning_git
Next, the collaborator needs to find out the URL of the repository: they should go to the leader’s
repository’s GitHub page, and note the URL at the top of the screen. Make sure the “ssh” button is selected;
the URL should begin with [email protected].
Copy the URL into your clipboard by clicking on the icon to the right of the URL, and then:
In [3]: %%bash
pwd
git clone [email protected]:UCL/github-example.git partner_repo
/home/travis/build/UCL/rsd-engineeringcourse/ch02git/learning_git
In [5]: %%bash
pwd
ls
/home/travis/build/UCL/rsd-engineeringcourse/ch02git/learning_git/partner_repo
index.md
lakeland.md
Note that your partner’s files are now present on your disk:
In [6]: %%bash
cat lakeland.md
Lakeland
========
Mountains:
* Helvellyn
In [7]: os.chdir(working_dir)
* Tryfan
* Yr Wyddfa
Writing Wales.md
In [9]: %%bash
ls
index.md
lakeland.md
__pycache__
Wales.md
wsd.py
In [10]: %%bash
git add Wales.md
git commit -m "Add wales"
[master cd12c03] Add wales
1 file changed, 5 insertions(+)
create mode 100644 Wales.md
In [11]: os.chdir(partner_dir)
* Ben Eighe
* Cairngorm
Writing Scotland.md
In [13]: %%bash
ls
index.md
lakeland.md
Scotland.md
In [14]: %%bash
git add Scotland.md
git commit -m "Add Scotland"
In [15]: %%bash
git push
To github.com:UCL/github-example.git
e899c3c..95afb78 master -> master
In [16]: os.chdir(working_dir)
To github.com:UCL/github-example.git
! [rejected] master -> master (fetch first)
error: failed to push some refs to '[email protected]:UCL/github-example.git'
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first integrate the remote changes
hint: (e.g., 'git pull …') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
Do as it suggests:
In [18]: %%bash
git pull
From github.com:UCL/github-example
e899c3c..95afb78 master -> origin/master
* [new branch] gh-pages -> origin/gh-pages
In [19]: %%bash
git push
To github.com:UCL/github-example.git
95afb78..dbc9d65 master -> master
In [20]: os.chdir(partner_dir)
In [21]: %%bash
git pull
Updating 95afb78..dbc9d65
Fast-forward
Wales.md | 5 +++++
1 file changed, 5 insertions(+)
create mode 100644 Wales.md
From github.com:UCL/github-example
95afb78..dbc9d65 master -> origin/master
In [22]: %%bash
ls
index.md
lakeland.md
Scotland.md
Wales.md
* Tryfan
* Snowdon
Overwriting Wales.md
In [24]: %%bash
git diff
diff --git a/Wales.md b/Wales.md
index f3e88b4..90f23ec 100644
--- a/Wales.md
+++ b/Wales.md
@@ -2,4 +2,4 @@ Mountains In Wales
==================
* Tryfan
-* Yr Wyddfa
+* Snowdon
In [25]: %%bash
git commit -am "Translating from the Welsh"
[master 37d36fa] Translating from the Welsh
1 file changed, 1 insertion(+), 1 deletion(-)
In [26]: %%bash
git log --oneline
37d36fa Translating from the Welsh
dbc9d65 Merge branch 'master' of github.com:UCL/github-example
cd12c03 Add wales
95afb78 Add Scotland
e899c3c Add Helvellyn
f033da8 Include lakes in the scope
28202e2 Add lakeland
c124739 Revert "Add a lie about a mountain"
c9b2323 Change title
9aa861a Add a lie about a mountain
8db2c9c First commit of discourse on UK topography
In [27]: os.chdir(working_dir)
In [28]: %%writefile Wales.md
Mountains In Wales
==================
* Pen y Fan
* Tryfan
* Snowdon
Overwriting Wales.md
In [29]: %%bash
git commit -am "Add a beacon"
[master fc30222] Add a beacon
1 file changed, 2 insertions(+), 1 deletion(-)
In [30]: %%bash
git log --oneline
fc30222 Add a beacon
dbc9d65 Merge branch 'master' of github.com:UCL/github-example
cd12c03 Add wales
95afb78 Add Scotland
e899c3c Add Helvellyn
f033da8 Include lakes in the scope
28202e2 Add lakeland
c124739 Revert "Add a lie about a mountain"
c9b2323 Change title
9aa861a Add a lie about a mountain
8db2c9c First commit of discourse on UK topography
In [31]: %%bash
git push
To github.com:UCL/github-example.git
dbc9d65..fc30222 master -> master
In [34]: %%bash
git pull
Auto-merging Wales.md
Merge made by the 'recursive' strategy.
Wales.md | 1 +
1 file changed, 1 insertion(+)
From github.com:UCL/github-example
dbc9d65..fc30222 master -> origin/master
In [35]: %%bash
git push
To github.com:UCL/github-example.git
fc30222..7cae13a master -> master
In [36]: %%bash
git log --oneline --graph
In [37]: os.chdir(working_dir)
In [38]: %%bash
git pull
Updating fc30222..7cae13a
Fast-forward
From github.com:UCL/github-example
fc30222..7cae13a master -> origin/master
In [39]: %%bash
git log --graph --oneline
* 7cae13a Merge branch 'master' of github.com:UCL/github-example
|\
| * fc30222 Add a beacon
* | 37d36fa Translating from the Welsh
|/
* dbc9d65 Merge branch 'master' of github.com:UCL/github-example
|\
| * 95afb78 Add Scotland
* | cd12c03 Add wales
|/
* e899c3c Add Helvellyn
* f033da8 Include lakes in the scope
* 28202e2 Add lakeland
* c124739 Revert "Add a lie about a mountain"
* c9b2323 Change title
* 9aa861a Add a lie about a mountain
* 8db2c9c First commit of discourse on UK topography
In [40]: message="""
participant Palin as P
participant "Palin's repo" as PR
participant "Shared remote" as M
participant "Cleese's repo" as CR
participant Cleese as C
"""
from wsd import wsd
%matplotlib inline
wsd(message)
Out[40]:
* Pen y Fan
* Tryfan
* Snowdon
* Fan y Big
Overwriting Wales.md
In [42]: %%bash
git commit -am "Add another Beacon"
git push
[master 41e49f6] Add another Beacon
1 file changed, 1 insertion(+)
To github.com:UCL/github-example.git
7cae13a..41e49f6 master -> master
In [43]: os.chdir(partner_dir)
* Pen y Fan
* Tryfan
* Snowdon
* Glyder Fawr
Overwriting Wales.md
To github.com:UCL/github-example.git
! [rejected] master -> master (fetch first)
error: failed to push some refs to '[email protected]:UCL/github-example.git'
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first integrate the remote changes
hint: (e.g., 'git pull …') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
When you pull, instead of offering an automatic merge commit message, it says:
Auto-merging Wales.md
CONFLICT (content): Merge conflict in Wales.md
Automatic merge failed; fix conflicts and then commit the result.
From github.com:UCL/github-example
7cae13a..41e49f6 master -> origin/master
In [47]: %%bash
cat Wales.md
Mountains In Wales
==================
* Pen y Fan
* Tryfan
* Snowdon
<<<<<<< HEAD
* Glyder Fawr
=======
* Fan y Big
>>>>>>> 41e49f613a7c00f2c735a4ce06f64b8755abf7c4
Manually edit the file, to combine the changes as seems sensible and get rid of the symbols:
* Pen y Fan
* Tryfan
* Snowdon
* Glyder Fawr
* Fan y Big
Overwriting Wales.md
In [49]: %%bash
git commit -a --no-edit # I added a No-edit for this non-interactive session. You can edit the
In [50]: %%bash
git push
To github.com:UCL/github-example.git
41e49f6..d61e07e master -> master
In [51]: os.chdir(working_dir)
In [52]: %%bash
git pull
Updating 41e49f6..d61e07e
Fast-forward
Wales.md | 1 +
1 file changed, 1 insertion(+)
From github.com:UCL/github-example
41e49f6..d61e07e master -> origin/master
In [53]: %%bash
cat Wales.md
Mountains In Wales
==================
* Pen y Fan
* Tryfan
* Snowdon
* Glyder Fawr
* Fan y Big
In [54]: %%bash
git log --oneline --graph
note left of P: git commit -am "update wales.md"
P->PR: add commit to local repo
"""
wsd(message)
Out[55]:
2.20.12 The Levels of Git
In [56]: message="""
Working Directory -> Staging Area : git add
Staging Area -> Local Repository : git commit
Working Directory -> Local Repository : git commit -a
Staging Area -> Working Directory : git checkout
Local Repository -> Staging Area : git reset
Local Repository -> Working Directory: git reset --hard
Local Repository -> Remote Repository : git push
Remote Repository -> Local Repository : git fetch
Local Repository -> Working Directory : git merge
Remote Repository -> Working Directory: git pull
"""
wsd(message)
Out[56]:
2.21 Editing directly on GitHub
2.21.1 Editing directly on GitHub
Note that you can also make changes in the GitHub website itself. Visit one of your files, and hit “edit”.
Make a change in the edit window, and add an appropriate commit message.
That change now appears on the website, but not in your local copy. (Verify this).
Now pull, and check the change is now present on your local version.
You can inspect and clone Numpy's code on GitHub, play around a bit, and work out how to fix the bug.
Numpy has done so much for you while asking nothing in return, so you really want to contribute back by
fixing the bug for them.
You make all of the changes, but you can't push them back to Numpy's repository because you don't have
permission.
The right way to do this is to fork Numpy's repository.
1. Fork repository
You will see on the top right of the page a Fork button with an accompanying number indicating how many
GitHub users have forked that repository.
Collaborators need to navigate to the leader’s repository and click the Fork button.
Collaborators: note how GitHub has redirected you to your own GitHub page and you are now looking
at an exact copy of the team leader’s repository.
4. Make, commit and push changes to new branch
For example, let’s create a new file called SouthWest.md and edit it to add this text:
* Exmoor
* Dartmoor
* Bodmin Moor
Save it, and push these changes to your fork's new branch:
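For example, assuming the new branch is the southwest branch referred to below (the commit message is illustrative):

git add SouthWest.md
git commit -m "Add the south-western moors"
git push origin southwest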
7. Fixes by collaborator
Collaborators will be notified of this comment by email and also in their profiles page. Click the link
accompanying this notification to read the comment from the team leader.
Go back to your local repository, make the changes suggested and push them to the new branch.
Add this at the beginning of your file:
git add .
git commit -m "Titles added as requested."
git push origin southwest
This change will automatically be added to the pull request you started.
8. Leader accepts pull request
The team leader will be notified of the new changes that can be reviewed in the same fashion as earlier.
Let’s assume the team leader is now happy with the changes.
Leaders can see in the “Conversation” tab of the pull request a green button labelled Merge pull
request. Click it and confirm the decision.
The collaborator’s pull request has been accepted and appears now in the original repository owned by
the team leader.
Fork and Pull Request done!
• Numpy’s example is only illustrative. Normally, Open Source projects have in their documentation
(sometimes in the form of a wiki) a set of instructions you need to follow if you want to contribute to
their software.
• Pull Requests can also be done for merging branches in a non-forked repository. It’s typically used in
teams to merge code from a branch into the master branch and ask team colleagues for code reviews
before merging.
• It’s a good practice before starting a fork and a pull request to have a look at existing forks and pull
requests. On GitHub, you can find the list of pull requests on the horizontal menu on the top of the
page. Try to also find the network graph displaying all existing forks of a repo, e.g., NumpyDoc repo’s
network graph.
2.24 Branches
Branches are incredibly important to why git is cool and powerful.
They are an easy and cheap way of making a second version of your software, which you work on in
parallel, and pull in your changes when you are ready.
In [1]: import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir = os.path.join(git_dir, 'git_example')
os.chdir(working_dir)
In [2]: %%bash
git branch # Tell me what branches exist
* master
In [3]: %%bash
git checkout -b experiment # Make a new branch
In [4]: %%bash
git branch
* experiment
master
* Pen y Fan
* Tryfan
* Snowdon
* Glyder Fawr
* Fan y Big
* Cadair Idris
Overwriting Wales.md
In [6]: %%bash
git commit -am "Add Cadair Idris"
In [7]: %%bash
git checkout master # Switch to an existing branch
In [8]: %%bash
cat Wales.md
Mountains In Wales
==================
* Pen y Fan
* Tryfan
* Snowdon
* Glyder Fawr
* Fan y Big
In [9]: %%bash
git checkout experiment
Mountains In Wales
==================
* Pen y Fan
* Tryfan
* Snowdon
* Glyder Fawr
* Fan y Big
* Cadair Idris
In [11]: %%bash
git push -u origin experiment
Warning: Permanently added the RSA host key for IP address '140.82.114.4' to the list of known hosts.
remote:
remote: Create a pull request for 'experiment' on GitHub by visiting:
remote: https://github.com/UCL/github-example/pull/new/experiment
remote:
To github.com:UCL/github-example.git
* [new branch] experiment -> experiment
We use --set-upstream (abbreviated -u) to tell git that this branch should be pushed to and pulled from
origin by default.
If you are following along, you should be able to see your branch in the list of branches in GitHub.
Once you’ve used git push -u once, you can push new changes to the branch with just a git push.
If others checkout your repository, they will be able to do git checkout experiment to see your branch
content, and collaborate with you in the branch.
In [12]: %%bash
git branch -r
origin/experiment
origin/gh-pages
origin/master
Local branches can be, but do not have to be, connected to remote branches. They are said to “track”
remote branches. push -u sets up the tracking relationship. You can see the remote branch for each of your
local branches if you ask for “verbose” output from git branch:
In [13]: %%bash
git branch -vv
2.24.2 Find out what is on a branch
In addition to using git diff to compare to the state of a branch, you can use git log to look at lists of
commits which are in a branch and haven’t been merged yet.
In [14]: %%bash
git log master..experiment
commit dc38d6fc18f743da76c59defadb59559e26a073c
Author: Lancelot the Brave <[email protected]>
Date: Fri Jan 17 18:58:22 2020 +0000
Git uses various symbols to refer to sets of commits. The double dot A..B means “ancestor of B and not
ancestor of A”.
So in a purely linear sequence, it does what you’d expect.
In [15]: %%bash
git log --graph --oneline HEAD~9..HEAD~5
But in cases where a history has branches, the definition in terms of ancestors is important.
In [16]: %%bash
git log --graph --oneline HEAD~5..HEAD
In [17]: %%bash
git checkout master
In [18]: %%writefile Scotland.md
Mountains In Scotland
==================
* Ben Eighe
* Cairngorm
* Aonach Eagach
Overwriting Scotland.md
In [19]: %%bash
git diff Scotland.md
* Ben Eighe
* Cairngorm
+* Aonach Eagach
In [20]: %%bash
git commit -am "Commit Aonach onto master branch"
Then this notation is useful to show the content of what’s on what branch:
In [21]: %%bash
git log --left-right --oneline master...experiment
Three dots means “everything which is not a common ancestor” of the two commits, i.e. the differences
between them.
In [22]: %%bash
git branch
git merge experiment
experiment
* master
Merge made by the 'recursive' strategy.
Wales.md | 1 +
1 file changed, 1 insertion(+)
In [23]: %%bash
git log --graph --oneline HEAD~3..HEAD
experiment
* master
In [25]: %%bash
git branch -d experiment
In [26]: %%bash
git branch
* master
In [27]: %%bash
git branch --remote
origin/experiment
origin/gh-pages
origin/master
In [28]: %%bash
git push --delete origin experiment
# Remove remote branch
# - also can use github interface
To github.com:UCL/github-example.git
- [deleted] experiment
In [29]: %%bash
git branch --remote
origin/gh-pages
origin/master
2.24.5 A good branch strategy
• A production branch: code used for active work
• A develop branch: for general new code
• feature branches: for specific new ideas
• release branches: when you share code with others
• Useful for isolated bug fixes
git checkout can also take a path, to quickly grab a file from one branch into another. This creates a
copy of the file as it exists in <branch> in your current branch, overwriting it if it already existed. For
example, if you have been experimenting in a new branch but want to undo all your changes to a particular
file (that is, restore the file to its version in the master branch), you can do that with:
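For example, to restore Wales.md (the file used throughout this chapter) to its state on master:

git checkout master -- Wales.md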
Using git checkout with a path takes the content of files. To grab the content of a specific commit from
another branch, and apply it as a patch to your branch, use:
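One command that does this is git cherry-pick; the commit hash here is a placeholder:

git cherry-pick <commit>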
In [1]: import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir = os.path.join(git_dir, 'git_example')
os.chdir(working_dir)
* Pen y Fan
* Tryfan
* Snowdon
* Glyder Fawr
* Fan y Big
* Cadair Idris
Overwriting Wales.md
In [3]: %%bash
git stash
git pull
No local changes to save
Already up to date.
By stashing your work first, your repository becomes clean, allowing you to pull. To restore your changes,
use git stash apply.
The “Stash” is a way of temporarily saving your working area, and can help out in a pinch.
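A typical round trip looks like this; git stash pop applies the stashed changes and drops them from the stash in one step:

git stash        # park uncommitted changes; the working tree is now clean
git pull         # the pull can now proceed
git stash pop    # reapply (and drop) the stashed changes
git stash list   # at any point, show what is currently stashed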
2.26 Tagging
Tags are easy to read labels for revisions, and can be used anywhere we would name a commit.
Produce real results only with tagged revisions
In [5]: %%bash
git tag -a v1.0 -m "Release 1.0"
git push --tags
To github.com:UCL/github-example.git
! [rejected] v1.0 -> v1.0 (already exists)
error: failed to push some refs to '[email protected]:UCL/github-example.git'
hint: Updates were rejected because the tag already exists in the remote.
---------------------------------------------------------------------------
<ipython-input-5-30c586933bd0> in <module>
----> 1 get_ipython().run_cell_magic('bash', '', 'git tag -a v1.0 -m "Release 1.0"\ngit push --tags\
~/virtualenv/python3.7.5/lib/python3.7/site-packages/IPython/core/interactiveshell.py in run_cel
2350 with self.builtin_trap:
2351 args = (magic_arg_s, cell)
-> 2352 result = fn(*args, **kwargs)
2353 return result
2354
~/virtualenv/python3.7.5/lib/python3.7/site-packages/IPython/core/magics/script.py in named_scri
140 else:
141 line = script
--> 142 return self.shebang(line, cell)
143
144 # write a basic docstring:
</home/travis/virtualenv/python3.7.5/lib/python3.7/site-packages/decorator.py:decorator-gen-110>
~/virtualenv/python3.7.5/lib/python3.7/site-packages/IPython/core/magics/script.py in shebang(se
243 sys.stderr.flush()
244 if args.raise_error and p.returncode!=0:
--> 245 raise CalledProcessError(p.returncode, cell, output=out, stderr=err)
246
247 def _run_script(self, p, cell, to_close):
CalledProcessError: Command 'b'git tag -a v1.0 -m "Release 1.0"\ngit push --tags\n'' returned no
* Cross Fell
Writing Pennines.md
In [7]: %%bash
git add Pennines.md
git commit -am "Add Pennines"
You can also use tag names in the place of commit hashes, such as to list the history between particular
commits:
In [8]: %%bash
git log v1.0.. --graph --oneline
Examples include .o and .x files for compiled languages, .pyc files in Python.
In our example, we might want to make our .md files into a PDF with pandoc:
MDS=$(wildcard *.md)
PDFS=$(MDS:.md=.pdf)
default: $(PDFS)
%.pdf: %.md
pandoc $< -o $@
Writing Makefile
In [10]: %%bash
make
We now have a bunch of output .pdf files corresponding to each Markdown file.
But we don’t want those to show up in git:
In [11]: %%bash
git status
On branch master
Your branch is ahead of 'origin/master' by 4 commits.
(use "git push" to publish your local commits)
Untracked files:
(use "git add <file>…" to include in what will be committed)
Makefile
Pennines.pdf
Scotland.pdf
Wales.pdf
__pycache__/
index.pdf
lakeland.pdf
wsd.py
nothing added to commit but untracked files present (use "git add" to track)
Use .gitignore files to tell Git not to pay attention to files with certain paths:
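A minimal ignore file consistent with the status output below needs only one pattern:

*.pdf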
Writing .gitignore
In [13]: %%bash
git status
On branch master
Your branch is ahead of 'origin/master' by 4 commits.
(use "git push" to publish your local commits)
Untracked files:
(use "git add <file>…" to include in what will be committed)
.gitignore
Makefile
__pycache__/
wsd.py
nothing added to commit but untracked files present (use "git add" to track)
In [14]: %%bash
git add Makefile
git add .gitignore
git commit -am "Add a makefile and ignore generated files"
git push
To github.com:UCL/github-example.git
d61e07e..afac5fd master -> master
In [15]: %%bash
git clean -fX
Removing Pennines.pdf
Removing Scotland.pdf
Removing Wales.pdf
Removing index.pdf
Removing lakeland.pdf
In [16]: %%bash
ls
index.md
lakeland.md
Makefile
Pennines.md
__pycache__
Scotland.md
Wales.md
wsd.py
2.29 Hunks
2.29.1 Git Hunks
A “Hunk” is one git change. This changeset has three hunks:
+import matplotlib
+import numpy as np
+def increment_or_add(key,hash,weight=1):
+ if key not in hash:
+ hash[key]=0
+ hash[key]+=weight
+
data_path=os.path.join(os.path.dirname(
os.path.abspath(__file__)),
-regenerate=False
+regenerate=True
+import matplotlib
+import numpy as np
#Stage this hunk [y,n,a,d,/,j,J,g,e,?]?
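That prompt comes from interactive staging, which walks through the changes hunk by hunk and lets you stage each one separately:

git add -p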
---
---
Add a pair of lines with three dashes, as above, to the top of each markdown file. This is how GitHub
knows which markdown files to make into web pages. Here's why, for the curious.
Overwriting index.md
In [18]: %%bash
git commit -am "Add github pages YAML frontmatter"
In [19]: os.chdir(working_dir)
In [20]: %%bash
The first time you do this, GitHub takes a few minutes to generate your pages.
The website will appear at http://username.github.io/repositoryname, for example:
http://UCL.github.io/github-example/
2.31 Working with multiple remotes
2.31.1 Distributed versus centralised
Older version control systems (cvs, svn) were “centralised”: the history was kept only on a server, and all
commits required an internet connection.
Centralised                          Distributed
Server has history                   Every user has full history
Your computer has one snapshot       Many local branches
To access history, need internet     History always available
You commit to remote server          Users synchronise histories
cvs, subversion (svn)                git, mercurial (hg), bazaar (bzr)
With modern distributed systems, we can add a second remote. This might be a personal fork on github:
In [1]: import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir = os.path.join(git_dir, 'git_example')
os.chdir(working_dir)
In [2]: %%bash
git checkout master
git remote add rits [email protected]:ucl-rits/github-example.git
git remote -v
Your branch is ahead of 'origin/master' by 1 commit.
(use "git push" to publish your local commits)
origin [email protected]:UCL/github-example.git (fetch)
origin [email protected]:UCL/github-example.git (push)
rits [email protected]:ucl-rits/github-example.git (fetch)
rits [email protected]:ucl-rits/github-example.git (push)
* Cross Fell
* Whernside
Overwriting Pennines.md
In [4]: %%bash
git commit -am "Add Whernside"
[master f8b480d] Add Whernside
1 file changed, 1 insertion(+)
In [5]: %%bash
git push -uf rits master
In [6]: %%bash
git fetch
git log --oneline --left-right rits/master...origin/master
From github.com:ucl-rits/github-example
* [new branch] gh-pages -> rits/gh-pages
In [7]: %%bash
git diff --name-only origin/master
Pennines.md
index.md
When you reference remotes like this, you’re working with a cached copy of the last time you interacted
with the remote. You can do git fetch to update local data with the remotes without actually pulling. You
can also get useful information about whether tracking branches are ahead or behind the remote branches
they track:
In [8]: %%bash
git branch -vv
• Pushing to someone’s working copy is dangerous
• Use git init --bare to make a copy for pushing
• You don’t need to create a “server” as such, any ‘bare’ git repo will do.
In [10]: %%bash
mkdir -p bare_repo
cd bare_repo
git init --bare
In [11]: os.chdir(working_dir)
In [12]: %%bash
git remote add local_bare ../bare_repo
git push -u local_bare master
To ../bare_repo
* [new branch] master -> master
In [13]: %%bash
git remote -v
You can now work with this local repository, just as with any other git server. If you have a colleague
on a shared file system, you can use this approach to collaborate through that file system.
ssh <mymachine>
mkdir mygitserver
cd mygitserver
git init --bare
exit
git remote add <somename> ssh://user@host/mygitserver
git push -u <somename> master
2.33 SSH keys and GitHub
Classroom exercise: If you haven’t already, you should set things up so that you don’t have to keep typing
in your password whenever you interact with GitHub via the command line.
You can do this with an “ssh keypair”. You may have created a keypair in the Software Carpentry shell
training. Go to the ssh settings page on GitHub and upload your public key by copying the content from
your computer. (Probably at .ssh/id_rsa.pub)
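If you need to create a keypair, a typical invocation is the following; the comment string is only a label:

ssh-keygen -t rsa -b 4096 -C "[email protected]"
cat ~/.ssh/id_rsa.pub   # the public key, to paste into GitHub's ssh settings page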
If you have difficulties, the instructions for this are on the GitHub website.
2.34 Rebasing
2.34.1 Rebase vs merge
A git merge is only one of two ways to get someone else’s work into yours. The other is called a rebase.
In a merge, a revision is added, which brings the branches together. Both histories are retained. In a
rebase, git tries to work out
What would you need to have done, to make your changes, if your colleague had already made
theirs?
Git will invent some new revisions, and the result will be a repository with an apparently linear history.
This can be useful if you want a cleaner, non-branching history, but it has the risk of creating inconsistencies,
since you are, in a way, “rewriting” history.
On the “carollian” branch, a commit has been added translating the initial state into Lewis Carroll's
language:
'Twas brillig,
and the slithy toves
* 2a74d89 Dancing
* 6a4834d Initial state
If we now merge carollian into master, the final state will include both changes:
'Twas brillig,
and the slithy toves
danced and spun in the waves
But the graph shows a divergence and then a convergence:
git log --oneline --graph
* b41f869 Merge branch 'carollian' into master_merge_carollian
|\
| * 2232bf3 Translate into Caroll's language
* | 2a74d89 Dancing
|/
* 6a4834d Initial state
But if we rebase, the final content of the file is still the same, but the graph is different:
git log --oneline --graph master_rebase_carollian
* df618e0 Dancing
* 2232bf3 Translate into Caroll's language
* 6a4834d Initial state
We have essentially created a new history, in which our changes come after the ones in the carollian branch.
Note that, in this case, the hash for our “Dancing” commit has changed (from 2a74d89 to df618e0)!
To trigger the rebase, we did:
git checkout master
git rebase carollian
If this had been a remote branch, we would rebase onto it while pulling, with:
git pull --rebase
2.35 Squashing
A second way to use the git rebase command is to rebase your work on top of one of your own earlier
commits, in interactive mode (-i). A common use of this is to “squash” several commits that should really
be one, i.e. combine them into a single commit that contains all their changes:
git log
We can rewrite select commits to be merged, so that the history is neater before we push. This is a great
idea if you have lots of trivial typo commits.
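In practice this looks something like the following sketch; the hashes and messages are illustrative. Run an interactive rebase over the commits you want to tidy, then change pick to squash for each commit that should be folded into the one above it:

git rebase -i HEAD~3

pick   ab11 Some good work
squash cd22 Fix a typo
squash ef33 Fix another typo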
Save the interactive rebase configuration file, and rebase will build a new history:
git log
Note the commit hash codes for ‘Some good work’ and ‘A great piece of work’ have changed, as the
change they represent has changed.
2.36 Debugging With Git Bisect
You can use git bisect to find the commit that introduced a bug, by efficiently searching through your
history.
In [1]: import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
os.chdir(git_dir)
In [2]: %%bash
rm -rf bisectdemo
git clone [email protected]:shawnsi/bisectdemo.git
In [3]: bisect_dir=os.path.join(git_dir,'bisectdemo')
os.chdir(bisect_dir)
In [4]: %%bash
python squares.py 2 # 4
This has been set up to break itself at a random commit, and leave you to use bisect to work out where
it has broken:
In [5]: %%bash
./breakme.sh > break_output
This will make a bunch of commits, one of which is broken, and leave you in the broken final state.
2.36.2 Bisecting manually
In [7]: %%bash
git bisect start
git bisect bad # We know the current state is broken
git checkout master
git bisect good # We know the master branch state is OK
Your branch is up to date with 'origin/master'.
Bisecting: 500 revisions left to test after this (roughly 9 steps)
[506a15c8e30778254808357b84bbd7ba9aa63346] Comment 500
Bisect needs one known good and one known bad commit to get started.
python squares.py 2
4
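From here, bisecting by hand is a loop: test the checked-out revision, report the result, and repeat until git announces the first bad commit:

python squares.py 2   # prints 4 if this revision is good
git bisect good       # or, if the script failed: git bisect bad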
2.36.4 Solving automatically
If we have an appropriate unit test, we can do all this automatically:
In [8]: %%bash
git bisect start
git bisect bad HEAD # We know the current state is broken
git bisect good master # We know master is good
git bisect run python squares.py 2
squares.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
bisect run success
Boom!
Chapter 3
Testing
3.1 Introduction
When programming, it is very important to know that the code we have written does what it was intended.
Unfortunately, this step is often skipped in scientific programming, especially when developing code for our
own personal work.
Researchers sometimes check that their code behaves correctly by manually running it on some sample
data and inspecting the results. However, it is much better and safer to automate this process, so the tests
can be run often – perhaps even after each new commit! This not only reassures us that the code behaves
as it should at any given moment, it also gives us more flexibility to change it, because we have a way of
knowing when we have broken something by accident.
In this chapter, we will mostly look at how to write unit tests, which check the behaviour of small parts
of our code. We will work with a particular framework for Python code, but the principles we discuss are
general. We will also look at how to use a debugger to locate problems in our code, and services that simplify
the automated running of tests.
Sensibility                          Sense
It's boring                          Maybe
Code is just a one off throwaway     As with most research codes
No time for it                       A bit more code, a lot less debugging
Tests can be buggy too               See above
Not a professional programmer        See above
Will do it later                     See above
3.1.3 Not a panacea
Trying to improve the quality of software by doing more testing is like trying to lose weight by
weighing yourself more often. - Steve McConnell
We can’t write a test for every possible input: this is an infinite amount of work.
We need to write tests to rule out different bugs. There’s no need to separately test equivalent inputs.
Let’s look at an example of this question outside of coding:
from matplotlib import pyplot as plt
from matplotlib import patches
from matplotlib.path import Path

# The imports and the vertices/show_fields wrappers are reconstructed here so the
# example runs stand-alone; each field is a (left, bottom, right, top) rectangle.
def vertices(left, bottom, right, top):
    return [(left, bottom), (left, top), (right, top),
            (right, bottom), (left, bottom)]

def show_fields(field1, field2):
    codes = [Path.MOVETO,
             Path.LINETO,
             Path.LINETO,
             Path.LINETO,
             Path.CLOSEPOLY]
    path1 = Path(vertices(*field1), codes)
    path2 = Path(vertices(*field2), codes)
    fig = plt.figure()
    ax = fig.add_subplot(111)
    patch1 = patches.PathPatch(path1, facecolor='orange', lw=2)
    patch2 = patches.PathPatch(path2, facecolor='blue', lw=2)
    ax.add_patch(patch1)
    ax.add_patch(patch2)
    ax.set_xlim(0, 5)
    ax.set_ylim(0, 5)

show_fields((1., 1., 4., 4.), (2., 2., 3., 3.))
Here we can see that the area of overlap is the same as the area of the smaller field: 1.
We could now go ahead and write a subroutine to calculate that, and also write some test cases for our
answer.
But first, let's consider the question abstractly: what other cases, not equivalent to this one, might there
be?
For example, this case is still just a full overlap, and is sufficiently equivalent that it's not worth another
test:
In [3]: show_fields((1.,1.,4.,4.),(2.5,1.7,3.2,3.4))
But this case is no longer a full overlap, and should be tested separately:
In [4]: show_fields((1.,1.,4.,4.),(2.,2.,3.,4.5))
On a piece of paper, sketch now the other cases you think should be treated as non-equivalent. Some
answers are below:
Spoiler space
Spoiler space
Spoiler space
Spoiler space
Spoiler space
Spoiler space
Spoiler space
Spoiler space
Spoiler space
Spoiler space
In [7]: show_fields((1.,1.,4.,4.),(2.,2.,3.,4.)) # Just touching
In [9]: show_fields((1.,1.,4.,4.),(2.5,4,3.5,4.5)) # Just touching from outside
3.2.2 Using our tests
OK, so how might our tests be useful?
Here’s some code that might correctly calculate the area of overlap:
In [12]: overlap((1.,1.,4.,4.),(2.,2.,3.,3.))
Out[12]: 1.0
In [16]: assert overlap((1.,1.,4.,4.),(4.5,4.5,5,5)) == 0.0
---------------------------------------------------------------------------
<ipython-input-16-21bafdf6842e> in <module>
----> 1 assert overlap((1.,1.,4.,4.),(4.5,4.5,5,5)) == 0.0
AssertionError:
In [17]: print(overlap((1.,1.,4.,4.),(4.5,4.5,5,5)))
0.25
In [18]: show_fields((1.,1.,4.,4.),(4.5,4.5,5,5))
Both width and height are negative, resulting in a spuriously positive area: the code above didn't handle
the non-overlapping case correctly.
It should be:
In [20]: def overlap(field1, field2):
             # Fields are (left, bottom, right, top) rectangles
             left1, bottom1, right1, top1 = field1
             left2, bottom2, right2, top2 = field2
             # Clamp negative extents to zero so non-overlapping fields give zero area
             overlap_width = max(0, min(right1, right2) - max(left1, left2))
             overlap_height = max(0, min(top1, top2) - max(bottom1, bottom2))
             return overlap_height * overlap_width
Note, we reran our other tests, to check our fix didn’t break something else. (We call that “fallout”)
• Limit between two equivalence classes: edge and corner sharing fields
• Wherever indices appear, check values at 0, N, N+1
• Empty arrays:
Bad input should be expected and should fail early and explicitly.
Testing should ensure that explicit failures do indeed happen.
def I_only_accept_positive_numbers(number):
    # Check input
    if number < 0:
        raise ValueError("Input {} is negative".format(number))

    # Do something
In [23]: I_only_accept_positive_numbers(5)
In [24]: I_only_accept_positive_numbers(-5)
---------------------------------------------------------------------------
<ipython-input-24-ac3b0fd3c476> in <module>
----> 1 I_only_accept_positive_numbers(-5)
<ipython-input-22-198af6344050> in I_only_accept_positive_numbers(number)
2 # Check input
3 if number < 0:
----> 4 raise ValueError("Input {} is negative".format(number))
5
6 # Do something
But to do that, we need to learn about more sophisticated testing tools, called “test frameworks”.
3.3.2 Common testing frameworks
• Language agnostic: CTest
• Test runner for executables, bash scripts, etc…
• Great for legacy code hardening
• C unit-tests:
• C++ unit-tests:
– CppTest,
– Boost::Test,
– google-test,
– Catch (best)
• Python unit-tests:
• R unit-tests:
– RUnit,
– svUnit (works with the SciViews GUI)
• Fortran unit-tests:
– funit,
– pfunit (works with MPI)
# Do something
but the real power comes when we write a test file alongside our code files in our homemade packages:
In [4]: %%bash
mkdir -p saskatchewan
touch saskatchewan/__init__.py
Writing saskatchewan/overlap.py
def test_full_overlap():
assert overlap((1.,1.,4.,4.), (2.,2.,3.,3.)) == 1.0
def test_partial_overlap():
assert overlap((1,1,4,4), (2,2,3,4.5)) == 2.0
def test_no_overlap():
assert overlap((1,1,4,4), (4.5,4.5,5,5)) == 0.0
Writing saskatchewan/test_overlap.py
def test_no_overlap():
> assert overlap((1,1,4,4), (4.5,4.5,5,5)) == 0.0
E assert 0.25 == 0.0
E + where 0.25 = overlap((1, 1, 4, 4), (4.5, 4.5, 5, 5))
test_overlap.py:10: AssertionError
========================= 1 failed, 2 passed in 0.03s ==========================
Note that it reported which test had failed, how many tests ran, and how many failed.
The symbol ..F means there were three tests, of which the third one failed.
Pytest will:
Some options:
Out[8]: 2.2737367544323206e-13
Out[9]: 2.220446049250313e-13
Out[10]: 1.4901161193847656e-08
Or be more explicit:
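For example, with pytest's approx helper you can say exactly what tolerance you mean (the values here are illustrative):

from pytest import approx

assert 0.1 + 0.2 == approx(0.3)
assert 6.02e23 == approx(6.02e23 + 1e10, rel=1e-6)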
3.4.3 Comparing vectors of floating points
Numerical vectors are best represented using numpy.
Numpy ships with a number of assertions (in numpy.testing) to make comparison easy:
It compares the difference between actual and expected to atol + rtol * abs(expected).
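For example, with assert_allclose (the arrays and tolerances here are illustrative):

from numpy.testing import assert_allclose
import numpy as np

assert_allclose(np.array([1.0, 2.0]) / 3.0,
                np.array([0.3333333, 0.6666667]),
                rtol=1e-5, atol=0)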
Implementation:
Here, the total energy due to position 2 is 3(3 − 1) = 6, and due to column 7 is 1(1 − 1) = 0. We need
to sum these to get the total energy.
In [2]: %%bash
mkdir -p diffusion
touch diffusion/__init__.py
Parameters
----------
Writing diffusion/model.py
• Testing file: test_diffusion_model.py
Writing diffusion/test_model.py
In [5]: %%bash
cd diffusion
pytest
test_model.py . [100%]
Now, write your code (in model.py), and tests (in test_model.py), testing as you do.
3.5.3 Solution
Don’t look until after you’ve tried!
def energy(density):
"""
Energy associated with the diffusion model
:Parameters:
# and the right values (positive or null)
if any(density < 0):
raise ValueError("Density should be an array of *positive* integers.")
if density.ndim != 1:
raise ValueError("Density should be an a *1-dimensional*" +
"array of positive integers.")
Overwriting diffusion/model.py
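For reference, a minimal complete model.py along these lines; the energy formula is assumed from the worked example above, in which a site holding n particles contributes n(n - 1):

from numpy import array, any, sum

def energy(density, coefficient=1):
    """Energy associated with the diffusion model.

    density: array of positive integers, the number of particles at each site.
    """
    density = array(density)
    # Must be an array of integers (unless empty, when the dtype is irrelevant)
    if density.dtype.kind != 'i' and len(density) > 0:
        raise TypeError("Density should be an array of *integers*.")
    # ... with non-negative values ...
    if any(density < 0):
        raise ValueError("Density should be an array of *positive* integers.")
    # ... and one-dimensional
    if density.ndim != 1:
        raise ValueError("Density should be a *1-dimensional* "
                         "array of positive integers.")
    return coefficient * sum(density * (density - 1))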
def test_energy_fails_on_non_integer_density():
with raises(TypeError) as exception:
energy([1.0, 2, 3])
def test_energy_fails_on_negative_density():
with raises(ValueError) as exception: energy(
[-1, 2, 3])
def test_energy_fails_ndimensional_density():
with raises(ValueError) as exception: energy(
[[1, 2, 3], [3, 4, 5]])
def test_zero_energy_cases():
# Zero energy at zero density
densities = [ [], [0], [0, 0, 0] ]
for density in densities:
assert energy(density) == 0
def test_derivative():
from numpy.random import randint
# modified densities
density_plus_one = density.copy()
density_plus_one[element_index] += 1
if density[element_index] > 0
else 0 )
actual = energy(density_plus_one) - energy(density)
assert expected == actual
def test_derivative_no_self_energy():
""" If particle is alone, then its participation to energy is zero """
from numpy import array
expected = 0
actual = energy(density_plus_one) - energy(density)
assert expected == actual
Overwriting diffusion/test_model.py
In [8]: %%bash
cd diffusion
pytest
test_model.py … [100%]
3.5.4 Coverage
With py.test, you can use the “pytest-cov” plugin to measure test coverage:
In [9]: %%bash
cd diffusion
pytest --cov="diffusion"
test_model.py … [100%]
Name            Stmts   Miss  Cover
-----------------------------------
model.py           10      0   100%
test_model.py      31      0   100%
-----------------------------------
TOTAL              41      0   100%
Or an html report:
In [10]: %%bash
cd diffusion
pytest --cov="diffusion" --cov-report html
test_model.py … [100%]
In [ ]:
3.6 Mocking
3.6.1 Definition
Mock: verb,
Mocking
• Replace a real object with a pretend object, which records how it is called, and can assert if it is called
wrong
3.6.3 Recording calls with mock
Mock objects record the calls made to them:
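These cells assume a mock created along the following lines; the name and return value are inferred from the outputs shown, and the side_effect variant is what the later cells rely on:

from unittest.mock import Mock

function = Mock(name="myroutine", return_value=2)   # used by In [2]-[4]
function = Mock(side_effect=[2, 'xyz'])             # used by In [7]-[9]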
In [2]: function(1)
Out[2]: 2
Out[3]: 2
In [4]: function.mock_calls
In [7]: function(1)
Out[7]: 2
Out[8]: 'xyz'
In [9]: function()
---------------------------------------------------------------------------
<ipython-input-9-30ca0b4348da> in <module>
----> 1 function()
/opt/python/3.7.5/lib/python3.7/unittest/mock.py in _mock_call(_mock_self, *args, **kwargs)
1071 raise effect
1072 elif not _callable(effect):
-> 1073 result = next(effect)
1074 if _is_exception(result):
1075 raise result
StopIteration:
import requests

def map_at(lat, long, satellite=False, zoom=12, size=(400, 400)):
    # The import, signature and return line are reconstructed; the default
    # zoom and size are inferred from the test parameters shown below
    base = "https://static-maps.yandex.ru/1.x/?"
    params = dict(
        z = zoom,
        size = ",".join(map(str, size)),
        ll = ",".join(map(str, (long, lat))),
        lang = "en_US")
    if satellite:
        params["l"] = "sat"
    else:
        params["l"] = "map"
    return requests.get(base, params=params)
Out[12]:
We would like to test that it is building the parameters correctly. We can do this by mocking the
requests object. We need to temporarily replace a method in the library with a mock. We can use “patch”
to do this:
from unittest.mock import patch
import requests

def test_build_default_params():
    # Patch requests.get so no real web request is made; the (51.0, 0.0)
    # arguments are inferred from the expected 'll' parameter
    with patch.object(requests, 'get') as mock_get:
        map_at(51.0, 0.0)
        mock_get.assert_called_with(
            "https://static-maps.yandex.ru/1.x/?",
            params={
                'z': 12,
                'size': '400,400',
                'll': '0.0,51.0',
                'lang': 'en_US',
                'l': 'map'})

test_build_default_params()
That was quiet, so it passed. When I’m writing tests, I usually modify one of the expectations, to
something ‘wrong’, just to check it’s not passing “by accident”, run the tests, then change it back!
We want to test that the above function does the right thing. It is supposed to compute the derivative
of a function of a vector in a particular direction.
E.g.:
Out[16]: 1.0
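A sketch of such a function, assuming a forward difference with a unit step, which is consistent with the mocked calls asserted below:

def partial_derivative(function, x, direction, delta=1.0):
    # Shift the point by delta along the chosen coordinate direction
    x_plus_delta = list(x)
    x_plus_delta[direction] += delta
    # Forward finite difference
    f_x_plus_delta = function(x_plus_delta)
    f_x = function(x)
    return (f_x_plus_delta - f_x) / delta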
How do we assert that it is doing the right thing? With tests like this:
def test_derivative_2d_y_direction():
func = MagicMock()
partial_derivative(func, [0,0], 1)
func.assert_any_call([0, 1.0])
func.assert_any_call([0, 0])
test_derivative_2d_y_direction()
We made our mock a “Magic Mock” because otherwise, the mock results f_x_plus_delta and f_x can’t
be subtracted:
---------------------------------------------------------------------------
<ipython-input-19-ef96ecbf0feb> in <module>
----> 1 Mock() - Mock()
• python: spyder, pdb (https://docs.python.org/3.6/library/pdb.html)
• R: RStudio, debug, browser
The python debugger is a python shell: it can print and compute values, and even change the values
of the variables at that point in the program.
3.7.4 Breakpoints
Breakpoints tell the debugger where and when to stop. We say: b somefunctionname
In [1]: %%writefile solutions/diffusionmodel/energy_example.py
from diffusion_model import energy
print(energy([5, 6, 7, 8, 0, 1]))
Writing solutions/diffusionmodel/energy_example.py
The debugger is, of course, most used interactively, but here I’m showing a prewritten debugger script:
In [2]: %%writefile commands
restart # restart session
n
b energy # program will stop when entering energy
c # continue program until break point is reached
print(density) # We are now "inside" the energy function and can print any variable.
Writing commands
In [3]: %%bash
python -m pdb solutions/diffusionmodel/energy_example.py < commands
> /home/travis/build/UCL/rsd-engineeringcourse/ch03tests/solutions/diffusionmodel/energy_example.py(1)<m
-> from diffusion_model import energy
(Pdb) Restarting solutions/diffusionmodel/energy_example.py with arguments:
solutions/diffusionmodel/energy_example.py
> /home/travis/build/UCL/rsd-engineeringcourse/ch03tests/solutions/diffusionmodel/energy_example.py(1)<m
-> from diffusion_model import energy
(Pdb) > /home/travis/build/UCL/rsd-engineeringcourse/ch03tests/solutions/diffusionmodel/energy_example.p
-> print(energy([5, 6, 7, 8, 0, 1]))
(Pdb) Breakpoint 1 at /home/travis/build/UCL/rsd-engineeringcourse/ch03tests/solutions/diffusionmodel/di
(Pdb) > /home/travis/build/UCL/rsd-engineeringcourse/ch03tests/solutions/diffusionmodel/diffusion_model.
-> from numpy import array, any, sum
(Pdb) [5, 6, 7, 8, 0, 1]
(Pdb)
Alternatively, break-points can be set on files: b file.py:20 will stop on line 20 of file.py.
3.7.5 Post-mortem
Debugging when something goes wrong:
1. Have a crash somewhere in the code
2. run python -m pdb file.py or run the cell with %pdb on
The program should stop where the exception was raised
1. use w and l for position in code and in call stack
2. use up and down to navigate up and down the call stack
3. inspect variables along the way to understand failure
This does work in the notebook.
%pdb on
from diffusion.model import energy
partial_derivative(energy,[5,6,7,8,0,1],5)
3.8 Continuous Integration
3.8.1 Test servers
Goal:
3.9.2 Solution
We need to break our problem down into pieces:
Next Step: Think about the possible unit tests
1. Input insanity: e.g. density should be a non-negative integer array; test by giving negative values, etc.
2. change_density(): is the density changed by a particle hopping left or right? Do all positions have an
equal chance of moving?
3. accept_change(): will a move be accepted when the second energy is lower? (A sketch of this rule follows
this list.)
4. Make a small test case for the main algorithm. (Hint: by using mocking, we can pre-set who to move
where.)
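The tests below exercise accept_change; a minimal sketch of the Metropolis-style acceptance rule they assume, written here as a free function taking the temperature explicitly:

from numpy import exp
from numpy.random import uniform

def accept_change(prior, successor, temperature):
    # Always accept moves that lower (or keep) the energy; accept uphill moves
    # with probability exp(-(E_new - E_old) / T)
    if successor <= prior:
        return True
    return exp(-(successor - prior) / temperature) > uniform()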
In [1]: %%bash
mkdir -p DiffusionExample
class MonteCarlo(object):
""" A simple Monte Carlo implementation """
if temperature == 0:
raise NotImplementedError(
"Zero temperature not implemented")
if temperature < 0e0:
raise ValueError(
"Negative temperature makes no sense")
if len(density) < 2:
raise ValueError("Density is too short")
# of the right kind (integer). Unless it is zero length,
# in which case type does not matter.
if density.dtype.kind != 'i' and len(density) > 0:
raise TypeError("Density should be an array of *integers*.")
# and the right values (positive or null)
if any(density < 0):
raise ValueError("Density should be an array of" +
"*positive* integers.")
if density.ndim != 1:
raise ValueError("Density should be an a *1-dimensional*" +
"array of positive integers.")
if sum(density) == 0:
raise ValueError("Density is empty.")
self.current_energy = energy(density)
self.temperature = temperature
self.density = density
def random_agent(self, density):
# Particle index
particle = randint(sum(density))
current = 0
for location, n in enumerate(density):
current += n
if current > particle:
break
return location
location = self.random_agent(density)
# Move direction
if(density[location]-1 < 0):
return array(density)
if location == 0:
direction = 1
elif location == len(density) - 1:
direction = -1
else:
direction = self.random_direction()
def step(self):
iteration = 0
while iteration < self.itermax:
new_density = self.change_density(self.density)
new_energy = energy(new_density)
def energy(density, coefficient=1):
""" Energy associated with the diffusion model
:Parameters:
density: array of positive integers
Number of particles at each position i in the array/geometry
"""
from numpy import array, any, sum
# of the right kind (integer). Unless it is zero length, in which case type does not matter.
if density.dtype.kind != 'i' and len(density) > 0:
raise TypeError("Density should be an array of *integers*.")
# and the right values (positive or null)
if any(density < 0):
raise ValueError("Density should be an array" +
"of *positive* integers.")
if density.ndim != 1:
raise ValueError("Density should be an a *1-dimensional*" +
"array of positive integers.")
Writing DiffusionExample/MonteCarlo.py
Temperature = 0.1
density = [np.sin(i) for i in np.linspace(0.1, 3, 100)]
density = np.array(density)*100
density = density.astype(int)
fig = plt.figure()
ax = plt.axes(xlim=(-1, len(density)), ylim=(0, np.max(density)+1))
image = ax.scatter(range(len(density)), density)
def simulate(step):
energy, density = mc.step()
image.set_offsets(np.vstack((range(len(density)), density)).T)
txt_energy.set_text('Energy = {}'.format(energy))
def test_input_sanity():
""" Check incorrect input do fail """
energy = MagicMock()
with raises(TypeError) as exception:
    MonteCarlo(energy, [1.0, 2, 3])
with raises(ValueError) as exception:
MonteCarlo(energy, [-1, 2, 3])
with raises(ValueError) as exception:
MonteCarlo(energy, [[1, 2, 3], [3, 4, 5]])
with raises(ValueError) as exception:
MonteCarlo(energy, [3])
with raises(ValueError) as exception:
MonteCarlo(energy, [0, 0])
def test_move_particle_one_over():
""" Check density is change by a particle hopping left or right. """
from numpy import nonzero, multiply
from numpy.random import randint
energy = MagicMock()
for i in range(100):
# Do this n times, to avoid
# issues with random numbers
# Create density
def test_equal_probability():
""" Check particles have equal probability of movement. """
from numpy import array, sqrt, count_nonzero
energy = MagicMock()
def test_accept_change():
""" Check that move is accepted if second energy is lower """
from numpy import sqrt, count_nonzero, exp
energy = MagicMock()
mc = MonteCarlo(energy, [1, 1, 1], temperature=100.0)
# Should always be true.
# But do more than one draw,
# in case randomness incorrectly crept into
# implementation
for i in range(10):
assert mc.accept_change(0.5, 0.4)
assert mc.accept_change(0.5, 0.5)
def test_main_algorithm():
import numpy as np
from numpy import testing
from unittest.mock import Mock
density=[1, 1, 1, 1, 1]
energy=MagicMock()
mc=MonteCarlo(energy, density, itermax = 5)
Writing DiffusionExample/test_model.py
In [5]: %%bash
cd DiffusionExample
py.test
test_model.py … [100%]
-- Docs: https://docs.pytest.org/en/latest/warnings.html
========================= 5 passed, 1 warning in 0.86s =========================
Chapter 4
---------------------------------------------------------------------------
~/virtualenv/python3.7.5/lib/python3.7/site-packages/geopy/geocoders/base.py in _call_geocoder(s
354 try:
--> 355 page = requester(req, timeout=timeout, **kwargs)
356 except Exception as error:
<ipython-input-2-ca3d5ea40875> in <module>
----> 1 geocoder.geocode('Cambridge', exactly_one=False)
~/virtualenv/python3.7.5/lib/python3.7/site-packages/geopy/geocoders/yandex.py in geocode(self,
114 logger.debug("%s.geocode: %s", self.__class__.__name__, url)
115 return self._parse_json(
--> 116 self._call_geocoder(url, timeout=timeout),
117 exactly_one,
118 )
~/virtualenv/python3.7.5/lib/python3.7/site-packages/geopy/geocoders/base.py in _call_geocoder(s
371 exc_info=False)
372 try:
--> 373 raise ERROR_CODE_MAP[code](message)
374 except KeyError:
375 raise GeocoderServiceError(message)
That was actually pretty easy, I hope. This is how you’ll install new libraries when you need them.
Troubleshooting:
On mac or linux, you might get a complaint that you need “superuser”, “root”, or “administrator” access.
If so type:
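For example, for the geopy package used above; installing into your user area avoids needing administrator rights at all:

sudo pip install geopy
# or, without administrator rights:
pip install --user geopy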
If you get a complaint like: ‘pip is not recognized as an internal or external command’, try the following:
• conda install pip (if you are using Anaconda - though it should be already available)
• or follow the official instructions otherwise.
If you are using Anaconda, you can also use conda install instead of pip install. This will fetch the
python package not from PyPI, but from Anaconda's distribution for your platform, and manage any
non-python dependencies too.
Typically, if you’re using Anaconda, whenever you come across a python package you want, you should
check if Anaconda package it first using conda search. If it is there you can conda install it, you’ll likely
have less problems. But Anaconda doesn’t package everything, so you’ll need to pip install from time to
time.
The maintainers of packages may have also provided releases of their software via conda-forge, a
community-driven project that provides a collection of packages for the anaconda environment. In such
case you can add conda-forge to your anaconda installation and use search and install as explained
above.
Out[3]: ['/home/travis/virtualenv/python3.7.5/lib/python3.7/site-packages/geopy']
Your computer will be configured to keep installed Python packages in a particular place.
Python knows where to look for possible library installations in a list of places, called the $PYTHONPATH
(%PYTHONPATH% in Windows). It will try each of these places in turn, until it finds a matching library name.
In [4]: import sys
sys.path
Out[4]: ['/home/travis/build/UCL/rsd-engineeringcourse/ch04packaging',
'/home/travis/virtualenv/python3.7.5/lib/python37.zip',
'/home/travis/virtualenv/python3.7.5/lib/python3.7',
'/home/travis/virtualenv/python3.7.5/lib/python3.7/lib-dynload',
'/opt/python/3.7.5/lib/python3.7',
'',
'/home/travis/virtualenv/python3.7.5/lib/python3.7/site-packages',
'/home/travis/virtualenv/python3.7.5/lib/python3.7/site-packages/IPython/extensions',
'/home/travis/.ipython']
You can add (append) more paths to this list, and so allow libraries to be loaded from there. Though this
is not a recommended practice, let's do it once to understand how importing works (see the example just
after the steps below).
• cd my_python_libs
• cd <library name> (e.g. cd JSAnimation-master)
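In code, appending a directory to the search path looks like this; the path matches the one printed a little further below:

import sys
sys.path.append('/home/travis/devel/libraries/python')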
This is all pretty awkward, but it is worth practising this stuff, as most of the power of using programming
for research resides in all the libraries that are out there.
4.2 Libraries
4.2.1 Libraries are awesome
The strength of a language lies as much in the set of libraries available, as it does in the language itself.
A great set of libraries allows for a very powerful programming style:
• Write minimal code yourself
• Choose the right libraries
• Plug them together
• Create impressive results
Not only is this efficient with your programming time, it’s also more efficient with computer time.
The chances are any algorithm you might want to use has already been programmed better by someone
else.
4.2.5 Sensible Version Numbering
The best approach to version numbers clearly distinguishes kinds of change:
Given a version number MAJOR.MINOR.PATCH, e.g. 2.11.14, increment the:
• MAJOR version when you make incompatible API changes,
• MINOR version when you add functionality in a backwards-compatible manner, and
• PATCH version when you make backwards-compatible bug fixes.
In [1]: import os
if 'mazetool' not in os.listdir(os.getcwd()):
os.mkdir('mazetool')
class Maze(object):
def __init__(self, name):
self.name = name
self.rooms = []
self.occupants = []
self.rooms.append(result)
return result
def wander(self):
"Move all the people in a random direction"
for occupant in self.occupants:
occupant.wander()
def describe(self):
for occupant in self.occupants:
occupant.describe()
def step(self):
house.describe()
print()
house.wander()
print()
Writing mazetool/maze.py
class Room(object):
def __init__(self, name, capacity):
self.name = name
self.capacity = capacity
self.occupancy = 0
self.exits = []
def has_space(self):
return self.occupancy < self.capacity
def available_exits(self):
return [exit for exit in self.exits if exit.valid() ]
def random_valid_exit(self):
import random
if not self.available_exits():
return None
return random.choice(self.available_exits())
Writing mazetool/room.py
class Person(object):
def __init__(self, name, room = None):
self.name=name
self.room=room
def wander(self):
exit = self.room.random_valid_exit()
if exit:
self.use(exit)
def describe(self):
print(self.name, "is in the", self.room.name)
Writing mazetool/person.py
class Exit(object):
def __init__(self, name, target):
self.name = name
self.target = target
def valid(self):
return self.target.has_space()
Writing mazetool/exit.py
In order to tell Python that our “mazetool” folder is a Python package, we have to make a special file
called __init__.py. If you import things in there, they are imported as part of the package:
Writing mazetool/__init__.py
In this case we are making it easier to import Maze as we are making it available one level above.
4.3.2 Loading Our Package
We just wrote the files, there is no “Maze” class in this notebook yet:
---------------------------------------------------------------------------
<ipython-input-7-3bc371b39bcd> in <module>
----> 1 myhouse = Maze('My New House')
But now we can import Maze (and the other files will get imported via the chained import statements,
starting from the __init__.py file).
In [9]: mazetool.exit.Exit
Out[9]: mazetool.exit.Exit
Note the files we have created are on the disk in the folder we made:
In [12]: import os
You may get also .pyc files. Those are “Compiled” temporary python files that the system generates to
speed things up. They’ll be regenerated on the fly when your .py files change. They may appear inside the
__pycache__ directory.
/home/travis/virtualenv/python3.7.5/lib/python3.7/site-packages
/home/travis/virtualenv/python3.7.5/lib/python3.7/site-packages/IPython/extensions
/home/travis/.ipython
In [16]: print(sys.path[-1])
/home/travis/devel/libraries/python
I’ve thus added a folder to the list of places searched. If you want to do this permanently, you should
set the PYTHONPATH Environment Variable, which you can learn about in a shell course, or can read about
online for your operating system.
4.4 Argparse
This is the standard library for building programs with a command-line interface. Here we show a short
introduction to it, but we recommend reading the official tutorial.
#!/usr/bin/env python
from argparse import ArgumentParser
if __name__ == "__main__":
    parser = ArgumentParser(description="Generate appropriate greetings")
    parser.add_argument('--title', '-t')
    parser.add_argument('--polite', '-p', action="store_true")
    parser.add_argument('personal')
    parser.add_argument('family')
    arguments = parser.parse_args()
    # Greeting construction reconstructed to match the sample output below
    greeting = "How do you do, " if arguments.polite else "Hey, "
    if arguments.title:
        greeting += arguments.title + " "
    print(greeting + arguments.personal + " " + arguments.family + ".")
Writing greeter.py
If you are using MacOS or Linux, you do the following to create an executable:
In [2]: %%bash
chmod u+x greeter.py
In [3]: %%bash
./greeter.py --help
positional arguments:
personal
family
optional arguments:
-h, --help show this help message and exit
--title TITLE, -t TITLE
--polite, -p
If you are using Windows, change bash to cmd, and prepend the commands with python:
%%cmd
python greeter.py John Cleese
In [4]: %%bash
./greeter.py John Cleese
Hey, John Cleese.
In [5]: %%bash
./greeter.py --polite John Cleese
How do you do, John Cleese.
In [6]: %%bash
./greeter.py John Cleese --title Dr
Hey, Dr John Cleese.
Yes, he is!
4.5 Packaging
Once we’ve made a working program, we’d like to be able to share it with others.
A good cross-platform build tool is the most important thing: you can always have collaborators build
from source.
4.5.2 Laying out a project
When planning to package a project for distribution, defining a suitable project layout is essential.
In [1]: %%bash
tree --charset ascii greetings -I "doc|build|Greetings.egg-info|dist|*.pyc"
greetings
|-- CITATION.md
|-- conf.py
|-- greetings
| |-- command.py
| |-- greeter.py
| |-- __init__.py
| `-- test
| |-- fixtures
| | `-- samples.yaml
| |-- __init__.py
| `-- test_greeter.py
|-- index.rst
|-- LICENSE.md
|-- README.md
`-- setup.py
3 directories, 12 files
We can start by making our directory structure. You can create many nested directories at once using
the -p switch on mkdir.
In [2]: %%bash
mkdir -p greetings/greetings/test/fixtures
mkdir -p greetings/scripts
setup(
name="Greetings",
version="0.1.0",
packages=find_packages(exclude=['*test']),
)
pip install .
And the package will then be available to use everywhere on the system.
4.5.4 Convert the script to a module
Of course, there’s more to do when taking code from a quick script and turning it into a proper module:
We need to add docstrings to our functions, so people can know how to use them.
Out[4]:
Parameters
----------
personal: str
A given name, such as Will or Jean-Luc
family: str
A family name, such as Riker or Picard
title: str
An optional title, such as Captain or Reverend
polite: bool
True for a formal greeting, False for informal.
Returns
-------
string
An appropriate greeting
Examples
--------
>>> from greetings.greeter import greet
>>> greet("Terry", "Jones")
'Hey, Terry Jones.
"""
Parameters
----------
personal: str
A given name, such as Will or Jean-Luc
family: str
A family name, such as Riker or Picard
title: str
An optional title, such as Captain or Reverend
polite: bool
True for a formal greeting, False for informal.
Returns
-------
string
An appropriate greeting
Examples
--------
>>> from greetings.greeter import greet
>>> greet("Terry", "Jones")
'Hey, Terry Jones.
The documentation string explains how to use the function; don't worry about this for now, we'll consider
it in the next section.
def process():
parser = ArgumentParser(description="Generate appropriate greetings")
parser.add_argument('--title', '-t')
parser.add_argument('--polite', '-p', action="store_true")
parser.add_argument('personal')
parser.add_argument('family')
arguments = parser.parse_args()
print(greet(arguments.personal, arguments.family,
arguments.title, arguments.polite))
if __name__ == "__main__":
process()
4.5.7 Specify entry point
This allows us to create a command to execute part of our library. In this case when we execute greet on
the terminal, we will be calling the process function under greetings/command.py.
In [7]: Code("greetings/setup.py")
Out[7]:
from setuptools import setup, find_packages
setup(
name="Greetings",
version="0.1.0",
packages=find_packages(exclude=['*test']),
entry_points={
'console_scripts': [
'greet = greetings.command:process'
]})
And the scripts are now available as command line commands:
In [8]: %%bash
greet --help
usage: greet [-h] [--title TITLE] [--polite] personal family
positional arguments:
personal
family
optional arguments:
-h, --help show this help message and exit
--title TITLE, -t TITLE
--polite, -p
In [9]: %%bash
greet Terry Gilliam
greet --polite Terry Gilliam
greet Terry Gilliam --title Cartoonist
Hey, Terry Gilliam.
How do you do, Terry Gilliam.
Hey, Cartoonist Terry Gilliam.
4.5.9 Write a readme file
e.g.:
In [11]: Code("greetings/README.md")
Out[11]:
Greetings!
==========
Usage:
In [12]: Code("greetings/LICENSE.md")
Out[12]:
In [13]: Code("greetings/CITATION.md")
Out[13]:
You may well want to formalise this using the codemeta.json standard or the Citation File Format - these
don't have wide adoption yet, but we recommend them.
4.5.13 Write some unit tests
Separating the script from the logical module made this possible:
In [15]: Code("greetings/greetings/test/test_greeter.py")
Out[15]:
import yaml
import os
from ..greeter import greet
def test_greeter():
with open(os.path.join(os.path.dirname(__file__),
'fixtures',
'samples.yaml')) as fixtures_file:
fixtures = yaml.safe_load(fixtures_file)
for fixture in fixtures:
answer = fixture.pop('answer')
assert greet(**fixture) == answer
In [16]: Code("greetings/greetings/test/fixtures/samples.yaml")
Out[16]:
- personal: Eric
family: Idle
answer: "Hey, Eric Idle."
- personal: Graham
family: Chapman
polite: True
answer: "How do you do, Graahm Chapman."
- personal: Michael
family: Palin
title: CBE
answer: "Hey, CBE Mike Palin."
greetings/greetings/test/test_greeter.py F [100%]
def test_greeter():
with open(os.path.join(os.path.dirname(__file__),
'fixtures',
'samples.yaml')) as fixtures_file:
fixtures = yaml.safe_load(fixtures_file)
for fixture in fixtures:
answer = fixture.pop('answer')
> assert greet(**fixture) == answer
E AssertionError: assert 'How do you d…aham Chapman.' == 'How do you d…aahm Chapman.'
E - How do you do, Graham Chapman.
E ? -
E + How do you do, Graahm Chapman.
E ? +
greetings/greetings/test/test_greeter.py:12: AssertionError
============================== 1 failed in 0.06s ===============================
However, this hasn’t told us that also the third test is wrong! A better aproach is to parametrize the
test as follows:
def read_fixture():
with open(os.path.join(os.path.dirname(__file__),
'fixtures',
'samples.yaml')) as fixtures_file:
fixtures = yaml.safe_load(fixtures_file)
return fixtures
@pytest.mark.parametrize("fixture", read_fixture())
def test_greeter(fixture):
answer = fixture.pop('answer')
assert greet(**fixture) == answer
Overwriting greetings/greetings/test/test_greeter.py
Now when we run pytest, we get a failure for each element in our fixture, so we know everything that fails.
fixture = {'family': 'Chapman', 'personal': 'Graham', 'polite': True}
@pytest.mark.parametrize("fixture", read_fixture())
def test_greeter(fixture):
answer = fixture.pop('answer')
> assert greet(**fixture) == answer
E AssertionError: assert 'How do you d…aham Chapman.' == 'How do you d…aahm Chapman.'
E - How do you do, Graham Chapman.
E ? -
E + How do you do, Graahm Chapman.
E ? +
greetings/greetings/test/test_greeter.py:16: AssertionError
____________________________ test_greeter[fixture2] ____________________________
@pytest.mark.parametrize("fixture", read_fixture())
def test_greeter(fixture):
answer = fixture.pop('answer')
> assert greet(**fixture) == answer
E AssertionError: assert 'Hey, CBE Michael Palin.' == 'Hey, CBE Mike Palin.'
E - Hey, CBE Michael Palin.
E ? ^^^ -
E + Hey, CBE Mike Palin.
E ? ^
greetings/greetings/test/test_greeter.py:16: AssertionError
========================= 2 failed, 1 passed in 0.07s ==========================
We can also make pytest check whether the docstrings are correct by adding the --doctest-modules flag (pytest --doctest-modules):
greetings/greetings/greeter.py F [ 25%]
greetings/greetings/test/test_greeter.py .FF [100%]
021 --------
022 >>> from greetings.greeter import greet
023 >>> greet("Terry", "Jones")
Expected:
'Hey, Terry Jones.
Got:
'Hey, Terry Jones.'
/home/travis/build/UCL/rsd-engineeringcourse/ch04packaging/greetings/greetings/greeter.py:23: DocTestFai
____________________________ test_greeter[fixture1] ____________________________
@pytest.mark.parametrize("fixture", read_fixture())
def test_greeter(fixture):
answer = fixture.pop('answer')
> assert greet(**fixture) == answer
E AssertionError: assert 'How do you d…aham Chapman.' == 'How do you d…aahm Chapman.'
E - How do you do, Graham Chapman.
E ? -
E + How do you do, Graahm Chapman.
E ? +
greetings/greetings/test/test_greeter.py:16: AssertionError
____________________________ test_greeter[fixture2] ____________________________
@pytest.mark.parametrize("fixture", read_fixture())
def test_greeter(fixture):
answer = fixture.pop('answer')
> assert greet(**fixture) == answer
E AssertionError: assert 'Hey, CBE Michael Palin.' == 'Hey, CBE Mike Palin.'
E - Hey, CBE Michael Palin.
E ? ^^^ -
E + Hey, CBE Mike Palin.
E ? ^
greetings/greetings/test/test_greeter.py:16: AssertionError
========================= 3 failed, 1 passed in 0.10s ==========================
• dpkg for apt-get on Ubuntu and Debian
• rpm for yum/dnf on Redhat and Fedora
• homebrew on OSX (Possibly macports as well)
• An executable msi installer for Windows.
Homebrew
Homebrew: a Ruby DSL; you host it off your own web page.
See an installer for the cppcourse example
If you’re on OSX, do:
4.6 Documentation
4.6.1 Documentation is hard
• Good documentation is hard, and very expensive.
• Bad documentation is detrimental.
• Good documentation quickly becomes bad if not kept up-to-date with code changes.
• Professional companies pay large teams of documentation writers.
Instead of lots of written documentation, projects are often better served by:
• Readable code
• Automated tests
• Small code samples demonstrating how to use the API
"""
Generate a greeting string for a person.
Parameters
----------
personal: str
A given name, such as Will or Jean-Luc
family: str
A family name, such as Riker or Picard
title: str
An optional title, such as Captain or Reverend
polite: bool
True for a formal greeting, False for informal.
Returns
-------
string
An appropriate greeting
"""
sphinx-quickstart
Which responds:
Please enter values for the following settings (just press Enter to
accept a default value, if one is given in brackets).
and then look at and adapt the generated config, a file called conf.py in the root of the project. This
contains the project’s Sphinx configuration, as Python variables:
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc', # Support automatic documentation
'sphinx.ext.coverage', # Automatically check if functions are documented
'sphinx.ext.mathjax', # Allow support for algebra
'sphinx.ext.viewcode', # Include the source code in documentation
'numpydoc' # Support NumPy style docstrings
]
To proceed with the example, we'll copy a finished conf.py into our folder, though normally you would generate it with sphinx-quickstart.
import sys
import os
extensions = [
'sphinx.ext.autodoc', # Support automatic documentation
'sphinx.ext.coverage', # Automatically check if functions are documented
'sphinx.ext.mathjax', # Allow support for algebra
'sphinx.ext.viewcode', # Include the source code in documentation
'numpydoc' # Support NumPy style docstrings
]
templates_path = ['_templates']
source_suffix = '.rst'
master_doc = 'index'
project = u'Greetings'
copyright = u'2014, James Hetherington'
version = '0.1'
release = '0.1'
exclude_patterns = ['_build']
pygments_style = 'sphinx'
htmlhelp_basename = 'Greetingsdoc'
latex_elements = {
}
latex_documents = [
('index', 'Greetings.tex', u'Greetings Documentation',
u'James Hetherington', 'manual'),
]
man_pages = [
('index', 'greetings', u'Greetings Documentation',
[u'James Hetherington'], 1)
]
texinfo_documents = [
('index', 'Greetings', u'Greetings Documentation',
u'James Hetherington', 'Greetings', 'One line description of project.',
'Miscellaneous'),
]
Overwriting greetings/conf.py
.. autofunction:: greetings.greeter.greet
Overwriting greetings/index.rst
4.7.4 Run sphinx
We can run Sphinx using:
In [3]: %%bash
cd greetings/
sphinx-build . doc
Parameters
----------
personal: str
A given name, such as Will or Jean-Luc
family: str
A family name, such as Riker or Picard
title: str
An optional title, such as Captain or Reverend
polite: bool
True for a formal greeting, False for informal.
Returns
-------
string
An appropriate greeting
Examples
--------
>>> from greetings.greeter import greet
>>> greet("Terry", "Jones")
'Hey, Terry Jones.
"""
Overwriting greetings/greetings/greeter.py
**********************************************************************
File "greetings/greetings/greeter.py", line 23, in greeter.greet
Failed example:
greet("Terry", "Jones")
Expected:
'Hey, Terry Jones.
Got:
'Hey, Terry Jones.'
**********************************************************************
1 items had failures:
1 of 2 in greeter.greet
***Test Failed*** 1 failures.
• Architectural Design
• Implementation
• Integration
As a clinician, when I finish an analysis, I want a report to be created on the test results, so that
I can send it to the patient.
As a role, when condition or circumstance applies I want a goal or desire so that benefits occur.
These are easy to map into the Gherkin behaviour driven design test language.
4.9.4 Waterfall
The Waterfall design philosophy argues that the elements of design should occur in order: first requirements
capture, then functional design, then architectural design. This approach is based on the idea that if a
mistake is made in the design, then programming effort is wasted, so significant effort is spent in trying to
ensure that requirements are well understood and that the design is correct before programming starts.
Waterfall results in a paperwork culture, where people spend a long time designing standard forms to
document each stage of the design, with less time actually spent making things.
Waterfall results in excessive adherence to a plan, even when mistakes in the design are obvious to people
doing the work.
4.9.8 Software is not made of bricks
Software is not the same ‘stuff’ as that from which physical systems are constructed. Software
systems differ in material respects from physical systems. Much of this has been rehearsed by
Fred Brooks in his classic ‘No Silver Bullet’ paper. First, complexity and scale are different
in the case of software systems: relatively functionally simple software systems comprise more
independent parts, placed in relation to each other, than do physical systems of equivalent func-
tional value. Second, and clearly linked to this, we do not have well developed components and
composition mechanisms from which to build software systems (though clearly we are working
hard on providing these) nor do we have a straightforward mathematical account that permits
us to reason about the effects of composition.
– Prof. Anthony Finkelstein, UCL Dean of Engineering, and Professor of Software Systems Engineering
That is, while there is value in the items on the right, we value the items on the left more.
– Jim Highsmith.
4.9.13 Ongoing Design
Agile development doesn’t eschew design. Design documents should still be written, but treated as living
documents, updated as more insight is gained into the task, as work is done, and as requirements change.
Use of a Wiki or version control repository to store design documents thus works much better than using
Word documents!
Test-driven design and refactoring are essential techniques to ensure that lack of “Big Design Up Front”
doesn’t produce badly constructed spaghetti software which doesn’t meet requirements. By continously
scouring our code for smells, and stopping to refactor, we evolve towards a well-structured design with
weakly interacting units. By starting with tests which describe how our code should behave, we create
executable specifications, giving us confidence that the code does what it is supposed to.
4.9.18 Conclusion
• Don’t ignore design
• See if there’s a known design pattern that will help
• Do try to think about how your code will work before you start typing
• Do use design tools like UML to think about your design without coding straight away
• Do try to write down some user stories
• Do maintain design documents.
BUT
• Do change your design as you work, updating the documents if you have them
• Don’t go dark – never do more than a couple of weeks programming without showing what you’ve
done to colleagues
• Don’t get isolated from the reasons for your code’s existence, stay involved in the research, don’t be a
Code Monkey.
• Do keep a list of all the things your code needs, estimate and prioritise tasks carefully.
• Version
• Steps
If possible, submit a minimal reproducing code fragment - look at this detailed answer about how to create a minimal example for LaTeX.
4.10.6 Status
• Submitted
• Accepted
• Underway
• Blocked
4.10.7 Resolutions
• Resolved
• Will Not Fix
• Not reproducible
• Not a bug (working as intended)
4.11.2 Disclaimer
Here we attempt to give some basic advice on choosing a licence for your software. But:
For an in-depth discussion of software licences, read the O’Reilly book Understanding Open Source and
Free Software Licensing.
Your department, or UCL, may have policies about applying licences to code you create while a UCL
employee or student. This training doesn’t address this issue, and does not represent UCL policy – seek
advice from your supervisor or manager if concerned.
4.11.3 Choose a licence
It is important to choose a licence and to create a license file to tell people what it is.
The licence lets people know whether they can reuse your code and under what terms. This course has
one, for example.
Your licence file should typically be called LICENSE.txt or similar. GitHub will offer to create a licence
file automatically when you create a new repository.
XXXX NON-COMMERCIAL EDUCATIONAL LICENSE Copyright (c) 2013 Prof. Foo. All
rights reserved.
You may use and modify this software for any non-commercial purpose within your educational
institution. Teaching, academic research, and personal experimentation are examples of purpose
which can be non-commercial.
You may redistribute the software and modifications to the software for non-commercial purposes,
but only to eligible users of the software (for example, to another university student or faculty
to support joint academic research).
Please don't do this. Your desire to slightly tweak the terms is harmful to the future software ecosystem. Also, unless you are a lawyer, you cannot do this safely!
If you want your code to be maximally reusable, use a permissive licence. If you want to force other people using your code to make derivatives open source, use a copyleft licence.
If you want to use code that has a permissive licence, it’s safe to use it and keep your code secret. If you
want to use code that has a copyleft licence, you’ll have to release your code under such a licence.
4.11.13 Patents
Intellectual property law distinguishes copyright from patents. This is a complex field, which I am far from
qualified to teach!
People who think carefully about intellectual property law distinguish software licences based on how
they address patents. Very roughly, if you want to ensure that contributors to your project can’t then go off
and patent their contribution, some licences, such as the Apache licence, protect you from this.
4.11.14 Use as a web service
If I take copyleft code and use it to host a web service, I have not distributed the software. Therefore, under some licences, I do not have to release any derivative software. This "loophole" in the GPL is closed by the AGPL ("Affero GPL").
Check your licence at opensource.org for details of how to apply it to your software. For example, for
the GPL.
Chapter 5
Construction
5.1 Construction
Software design gets a lot of press (Object orientation, UML, design patterns).
In this session we’re going to look at advice on software construction.
5.1.5 Construction
So, we’ve excluded most of the exciting topics. What’s left is the bricks and mortar of software: how letters
and symbols are used to build code which is readable.
Software has beauty at these levels too: stories and characters correspond to architecture and object design, plots correspond to algorithms, but the rhythm of sentences and the choice of words corresponds to software construction.
5.1.8 Setup
This notebook is based on a number of fragments of code, with an implicit context. We’ve made a library
to set up the context so the examples work.
from unittest.mock import MagicMock

sEntry="2.0"
entry ="2.0"
iOffset=1
offset =1
anothervariable=1
flag1=True
variable=1
flag2=False
def do_something(): pass
chromosome=None
start_codon=None
subsequence=MagicMock()
transcribe=MagicMock()
ribe=MagicMock()
find=MagicMock()
can_see=MagicMock()
my_name=""
your_name=""
flag1=False
flag2=False
start=0.0
end=1.0
step=0.1
birds=[MagicMock()]*2
resolution=100
pi=3.141
result= [0]*resolution
import numpy as np
import math
data= [math.sin(y) for y in np.arange(0,pi,pi/resolution)]
import yaml
import os
Writing context.py
def add_to_reaction(a_name,
                    a_reaction):
    l_species = Species(a_name)
    a_reaction.append( l_species )
5.2.3 Layout
In [4]: reaction = {
"reactants": ["H", "H", "O"],
"products": ["H2O"]
}
In [5]: reaction2=(
{
"reactants":
[
"H",
"H",
"O"
],
"products":
[
"H2O"
]
}
)
5.2.6 Hungarian Notation
Prefix denotes type:
So in the example above we know that we are creating a float number as a composition of a string entry
and an integer offset.
People may find this useful in languages like Python where the type is intrinsic in the variable.
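The code cell being referred to isn't reproduced in this extract; as a rough sketch of the idea, reusing the sEntry / iOffset names from the context fragment above (the rest is illustrative):

# The prefix encodes the type: s for string, i for integer, f for float.
fValue = float(sEntry) + iOffset

# The same line without Hungarian notation relies on good names alone:
value = float(entry) + offset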
5.2.7 Newlines
• Newlines make code easier to read
• Newlines make less code fit on a screen
In [10]: anothervariable += 1
if ((variable == anothervariable) and flag1 or flag2): do_something()
We create extra variables as an intermediate step. Don’t worry about the performance now, the compiler
will do the right thing.
What about operator precedence? Being explicit helps to remind yourself what you are doing.
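The refactored cell isn't shown here; a sketch of the intermediate-variable version being described, using the names from the context fragment (the new variable names are illustrative):

anothervariable += 1

# Intermediate variables make both the intent and the operator
# precedence explicit, at the cost of a few extra lines.
variables_equal = (variable == anothervariable)
should_act = (variables_equal and flag1) or flag2

if should_act:
    do_something()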
• Python: PEP8
• R: Google’s guide for R, tidyverse style guide
• C++: Google’s style guide, Mozilla’s
• Julia: Official style guide
5.2.11 Lint
There are automated tools which enforce coding conventions and check for common mistakes.
These are called linters. A popular one is pycodestyle:
E.g. pip install pycodestyle
It is a good idea to run a linter before every commit, or include it in your CI tests.
There are other tools that help with linting that are worth mentioning. With pylint you can also get
other useful information about the quality of your code:
pip install pylint
-------------------------------------
Your code has been rated at -15.00/10
and with black you can automatically fix many formatting errors at once.
black species.py
These linters can be configured to choose which points to flag and which to ignore.
Do not blindly believe all these automated tools! Style guides are guides not rules.
Finally, there are tools like editorconfig to help sharing the conventions used within a project, where
each contributor uses different IDEs and tools. There are also bots like pep8speaks that comment on contributors' pull requests, suggesting what to change to follow the project's conventions.
5.3 Comments
Let’s import first the context for this chapter.
5.3.2 Bad Comments
“I write good code, you can tell by the number of comments.”
This is wrong.
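The notebook's comment-heavy counterexample isn't reproduced in this extract; purely as an illustration, the style being criticised looks something like this:

class Agent:
    # This function adds the angular velocity to the direction
    def update(self):
        self.direction += self.angular_velocity  # add angular velocity
        # Now add the step length times the sine of the direction to x
        self.x += Agent.step_length * sin(self.direction)
        # Now add the step length times the cosine of the direction to y
        self.y += Agent.step_length * cos(self.direction)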
class Agent:
    def turn(self):
        self.direction += self.angular_velocity

    def move(self):
        self.x += Agent.step_length * sin(self.direction)
        self.y += Agent.step_length * cos(self.direction)
This is probably better. We are using the name of the functions (i.e., turn, move) instead of comments.
Therefore, we’ve got self-documenting code.
In [6]: if x.safe_to_clear(): # Guard added as temporary workaround for #32
x.clear()
is OK. And platforms like GitHub will create a link to it when browsing the code.
@double
def try_me_twice():
pass
5.4 Refactoring
Let’s import first the context for this chapter.
Let's put ourselves in a scenario that you've probably been in before. Imagine you are changing a large piece of legacy code that's not well structured, introducing many changes at once, trying to keep in your head all the bits and pieces that need to be modified to make it all work again. And suddenly, your officemate comes over and asks you to go for coffee… and you've lost all track of what you had in your head and need to start again.
Instead of doing so, we could use a more robust approach to go from nasty ugly code to clean code in a
safer way.
5.4.1 Refactoring
To refactor is to:
• Make a change to the design of some software
• Which improves the structure or readability
• But which leaves the actual behaviour of the program completely unchanged.
After:
if can_see(hawk, starling):
hawk.hunting()
if can_see(starling, hawk):
starling.flee()
In [6]: z = find(x,y)
if z:
ribe(x)
After:
vs
In [10]: sum = 0
for i in range(resolution):
sum += data[i]
After:
In [11]: sum = 0
for value in data:
sum += value
5.4.9 Replace hand-written code with library code
Smell: It feels like surely someone else must have done this at some point.
Before:
After:
After:
Warning: this refactoring greatly improves readability but can make code slower, depending on memory
layout. Be careful.
After:
Writing config.yaml
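The cells for this refactoring aren't reproduced here; a minimal sketch of the idea, assuming config.yaml holds the constants the code needs (the key names are illustrative):

import yaml

# Read the constants from the configuration file written above,
# instead of hard-coding them in the source.
with open("config.yaml") as config_file:
    config = yaml.safe_load(config_file)

viewport = config["viewport"]
step_length = config["step_length"]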
5.4.12 Replace global variables with function arguments
Smell: A global variable is assigned and then used inside a called function:
if hawk.can_see(starling):
hawk.hunt(starling)
class Hawk(object):
def can_see(self, target):
return (self.facing - target.facing) < viewport
Becomes:
class Hawk(object):
def can_see(self, target, viewport):
return (self.facing - target.facing) < viewport
Becomes:
Though there may be a case where all the nests need to be built before the birds can start laying eggs.
Before:
After:
class Two(object):
    def __init__(self):
        self.child = One()
After:
Writing anotherfile.py
class Two(object):
    def __init__(self):
        self.child = One()
5.4.18 Refactoring Summary
• Replace magic numbers with constants
• Replace repeated code with a function
• Change of variable/function/class name
• Replace loop with iterator
• Replace hand-written code with library code
• Replace set of arrays with array of structures
• Replace constants with a configuration file
• Replace global variables with function arguments
• Break a large function into smaller units
• Separate code concepts into files or modules
Chapter 6
Design
Notice that in Python you can add properties to an object once it's been defined. Just because you can doesn't mean you should!
6.2.4 Method
Method: A function which is “built in” to a class
my_object = MyClass()
my_object.someMethod(value)
6.2.5 Constructor
Constructor: A special method called when instantiating a new object
my_object = MyClass(value)
my_object = MyClass()
assert(my_object.member == "Value")
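The class these snippets assume isn't shown in this extract; a minimal sketch that would make them work (the default value is an assumption):

class MyClass:
    def __init__(self, value="Value"):  # the constructor
        self.member = value             # a member variable

    def someMethod(self, value):        # a method
        print("Called with", value)

With this in place, the three snippets above run as written.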
In [8]: from random import random
birds = [{"position": random(),
"velocity": random(),
"type": kind} for kind in bird_types]
if can_see(hawk, starling):
hawk.hunt()
After:
In [11]: class Bird:
def can_see(self, target):
return (self.facing - target.facing) < self.viewport
if hawk.can_see(starling):
hawk.hunt()
6.3.4 Replace global variable with class and member
Smell: A global variable is referenced by a few functions
if today == birthday[0:2]:
print(f"Happy Birthday, {name}")
else:
print("No birthday for you today.")
In [1]: class Particle:
def __init__(self, position, velocity):
self.position = position
self.velocity = velocity
def move(self, delta_t):
self.position += self.velocity * delta_t
In C++:
class Particle {
public:
    std::vector<double> position;
    std::vector<double> velocity;
    Particle(std::vector<double> position, std::vector<double> velocity);
    void move(double delta_t);
};
In Fortran:
type particle
real :: position
real :: velocity
contains
procedure :: init
procedure :: move
end type particle
6.4.1 UML
UML is a conventional diagrammatic notation used to describe “class structures” and other higher level
aspects of software design.
Computer scientists get worked up about formal correctness of UML diagrams and learning the conven-
tions precisely. Working programmers can still benefit from using UML to describe their designs.
6.4.2 YUML
We can see a YUML model for a Particle class with position and velocity data and a move() method
using the YUML online UML drawing tool (example).
http://yuml.me/diagram/boring/class/[Particle|position;velocity|move%28%29]
Here’s how we can use Python code to get an image back from YUML:
In [2]: import requests
from IPython.display import Image
def yuml(model):
result = requests.get("http://yuml.me/diagram/boring/class/" + model)
return Image(result.content)
In [3]: yuml("[Particle|position;velocity|move()]")
Out[3]:
In UML, the Particle class defined above is represented as a box with three sections: the name of the class at the top, the member variables in the middle, and the methods at the bottom. We will see later why this is useful.
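The class definition that the following fragments extend isn't included in this extract; a sketch with public and private members consistent with the calls below:

class MyClass:
    def __init__(self):
        self.public_data = 1
        self._private_data = 1    # private by convention (single underscore)
        self.__private_data = 1   # name-mangled, "really" private

    def public_method(self):
        return self.public_data

    def _private_method(self):
        return self._private_data

    def __private_method(self):
        return self.__private_data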
def called_inside(self):
self.__private_method()
self._private_method()
self.__private_data = 1
self._private_data = 1
MyClass().called_inside()
In [6]: MyClass().public_method() # OK
print(MyClass()._private_data)
In [7]: print(MyClass().public_data)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-8-e4355512aeb6> in <module>
----> 1 MyClass().__private_method() # Generates error
---------------------------------------------------------------------------
<ipython-input-9-6c81459189e2> in <module>
----> 1 print(MyClass().__private_data) # Generates error
becomes:
@property
def name(self):
    return f"{self._first} {self._second}"
Making the same external code work as before.
Note that the code behaves the same way to the outside user. The implementation detail is hidden by
private variables. In languages without this feature, such as C++, it is best to always make data private,
and always access data through functions:
---------------------------------------------------------------------------
<ipython-input-13-bb03ebf6e67c> in <module>
12 assert(graham.name == "Graham Chapman")
13 graham.get_married(david)
---> 14 assert(graham.name == "Graham Sherlock")
AssertionError:
This type of situation can leave the object's data structure inconsistent with itself, with variables out of sync with other variables. Each piece of information should only be stored in one place! In this case, name should be calculated each time it's required, as previously shown. In database design, this is called Normalisation.
In [14]: yuml("[Particle|+public;-private|+publicmethod();-privatemethod]")
Out[14]:
class Counted:
    number_created = 0

    def __init__(self):
        Counted.number_created += 1

    @classmethod
    def howMany(cls):
        return cls.number_created
Counted.howMany() # 0
x = Counted()
Counted.howMany() # 1
z = [Counted() for x in range(5)]
Counted.howMany() # 6
Out[15]: 6
The data is shared among all the objects instantiated from that class. Note that in __init__ we are
not using self.number_created but the name of the class. The howMany function is not a method of a
particular object. It’s called on the class, not on the object. This is possible by using the @classmethod
decorator.
6.5.2 Inheritance
• Inheritance is a mechanism that allows related classes to share code.
• Inheritance allows a program to reflect the ontology of kinds of thing in a program.
6.5.3 Ontology and inheritance
• A bird is a kind of animal
• An eagle is a kind of bird
• A starling is also a kind of bird
• All animals can be born and die
• Only birds can fly (Ish.)
• Only eagles hunt
• Only starlings flock
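The Animal base class used below isn't reproduced in this extract; a minimal sketch consistent with the output shown (the die message is an assumption):

class Animal:
    def beBorn(self):
        print("I exist")

    def die(self):
        print("Goodbye, cruel world!")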
class Bird(Animal):
def fly(self):
print("Whee!")
class Eagle(Bird):
def hunt(self):
print("I'm gonna eatcha!")
class Starling(Bird):
    def flock(self):
        print("I'm flying away!")
Eagle().beBorn()
Eagle().hunt()
I exist
I'm gonna eatcha!
Another equivalent definition uses the synonym child / parent for derived / base class:
• A child class extends a parent class.
class Person(Animal):
def __init__(self, age, name):
super().__init__(age)
self.name = name
Read Raymond Hettinger’s article about super to see various real examples.
In [18]: yuml("[Animal]^-[Bird],[Bird]^-[Eagle],[Bird]^-[Starling]%")
Out[18]:
Aggregation in UML
The Boids situation can be represented thus:
In [19]: yuml("[Model]<>-*>[Boid],[Boid]position++->[Vector],[Boid]velocity++->[Vector]%")
Out[19]:
The open diamond indicates Aggregation, the closed diamond composition. (A given boid might
belong to multiple models, a given position vector is forever part of the corresponding Boid.)
The asterisk represents cardinality: a model may contain multiple Boids. This is a one-to-many relationship. A many-to-many relationship is shown with a * on both sides.
Refactoring to inheritance
Smell: Repeated code between two classes which are both ontologically subtypes of something
Before:
class Pet:
def __init__(self, age, owner):
self.age = age
self.owner = owner
def birthday(self):
self.age += 1
After:
class Person(Animal):
def __init__(self, age, job):
self.job = job
super().__init__(age)
class Pet(Animal):
def __init__(self, age, owner):
self.owner = owner
super().__init__(age)
6.5.9 Polymorphism
In [22]: class Dog:
def noise(self):
return "Bark"
class Cat:
def noise(self):
return "Miaow"
class Pig:
def noise(self):
return "Oink"
class Cow:
def noise(self):
return "Moo"
animals = [Dog(), Dog(), Cat(), Pig(), Cow(), Cat()]
for animal in animals:
print(animal.noise())
Bark
Bark
Miaow
Oink
Moo
Miaow
class Dog(Animal):
def noise(self):
return "Bark"
class Worm(Animal):
pass
class Poodle(Dog):
pass
Bark
I don't make a noise.
Oink
Moo
Bark
In [24]: class Animal:
pass
class Worm(Animal):
pass
---------------------------------------------------------------------------
<ipython-input-25-9a56606e40c2> in <module>
----> 1 Worm().noise() # Generates error
def noise(self):
if self.animal_kind == "Dog":
return "Bark"
elif self.animal_kind == "Cat":
return "Miaow"
elif self.animal_kind == "Cow":
return "Moo"
In [27]: yuml("[<<Animal>>]^-.-[Dog]")
Out[27]:
6.5.15 Further UML
UML is a much larger diagram language than the aspects we’ve shown here.
• Message sequence charts show signals passing back and forth between objects (Web Sequence Dia-
grams).
• Entity Relationship Diagrams can be used to show more general relationships between things in a
system.
Read more about UML on Martin Fowler’s book about the topic.
6.6 Patterns
6.6.1 Class Complexity
We’ve seen that using object orientation can produce quite complex class structures, with classes owning
each other, instantiating each other, and inheriting from each other.
There are lots of different ways to design things, and decisions to make.
• Should I inherit from this class, or own it as a member variable? (“is a” vs “has a”)
• How much flexibility should I allow in this class’s inner workings?
• Should I split this related functionality into multiple classes or keep it in one?
6.6.4 Introducing Some Patterns
There are lots and lots of design patterns, and there is a great literature to get into if you want to read about design questions in programming and learn from other people's experience.
We’ll just show a few in this session:
• Factory Method
• Builder
• Strategy
• Model-View-Controller
def yuml(model):
result=requests.get("http://yuml.me/diagram/boring/class/" + model)
return Image(result.content)
Out[2]:
6.7.2 Factory Example
An “agent based model” is one like the Boids model from last week: agents act and interact under certain
rules. Complex phenomena can be described by simple agent behaviours.
However, this common constructor doesn’t know what kind of agent to create; as a common base, it
could be a model of boids, or the agents could be remote agents on foreign servers, or they could even be
physical hardware robots connected to the driving model over Wifi!
We need to defer the construction of the agents. We can do this with polymorphism: each derived class
of the ABM can have an appropriate method to create its agents:
This is the factory method pattern: a common design solution to the need to defer the construction of daughter objects to a derived class. self.create is not defined here, but in each of the model classes that inherit from AgentModel: we use polymorphism to defer the decision about what kind of agent to create.
There is no need to define an explicit base interface for the “Agent” concept in Python: anything that
responds to “simulate” and “interact” methods will do: this is our Agent concept.
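The AgentModel base class isn't reproduced in this extract; a sketch of the factory-method structure being described (the attribute names are assumptions):

class AgentModel:
    def __init__(self, config):
        self.agents = []
        for agent_config in config:
            # create() is the factory method: it is defined only in the
            # derived model classes, which know what kind of agent to build.
            self.agents.append(self.create(agent_config))

    def simulate(self):
        for agent in self.agents:
            agent.simulate()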
class BirdModel(AgentModel):
def __init__(self, config):
self.boids = []
for boid_config in config:
self.boids.append(Boid(**boid_config))
class WebAgentFactory(AgentModel):
def __init__(self, url, config):
self.url = url
connection = AmazonCompute.connect(url)
AgentModel.__init__(self)
self.web_agents = []
for agent_config in config:
self.web_agents.append(OnlineAgent(agent_config, connection))
The agent creation loop is almost identical in the two classes, so we can be sure we need to refactor it away; but the type that is created differs between the two cases, and that is the smell telling us we need a factory pattern.
Out[8]:
6.8.1 Builder example
Let’s continue our Agent Based modelling example.
There’s a lot more to defining a model than just adding agents of different kinds: we need to define
boundary conditions, specify wind speed or light conditions.
We could define all of this for an imagined advanced Model with a very very long constructor, with lots
of optional arguments:
def finish(self):
    self.validate()
    return self.model

def validate(self):
    assert(self.model.xlim is not None)
    # Check that all the parameters that need to be set
    # have indeed been set.
Inheritance of an Abstract Builder for multiple concrete builders could be used where there might be
multiple ways to build models with the same set of calls to the builder: for example a version of the model
builder yielding models which can be executed in parallel on a remote cluster.
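The builder class itself is not reproduced in this extract; a rough sketch consistent with the finish/validate fragment above and the calls below (the ModelBuilder, Model and Agent names are assumptions):

class ModelBuilder:
    def __init__(self):
        self.model = Model()  # assumed Model class with unset (None) settings

    def set_bounds(self, xlim, ylim):
        self.model.xlim = xlim
        self.model.ylim = ylim

    def add_agent(self, x, y):
        self.model.agents.append(Agent(x, y))

    def finish(self):
        self.validate()
        return self.model

    def validate(self):
        # Check that all the parameters that need to be set have been set.
        assert self.model.xlim is not None

The point of finish() is that the builder, not the caller, decides when a model is complete enough to run.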
builder.set_bounds(500, 500)
builder.add_agent(40, 40)
builder.add_agent(400, 100)
model = builder.finish()
model.simulate()
6.9.2 Sunspot cycle has periodicity
In [16]: spectrum = rfft(spots)
plt.figure()
plt.plot(abs(spectrum))
plt.savefig('fixed.png')
6.9.3 Years are not constant length
There’s a potential problem with this analysis however:
We also want to find the period of the strongest periodic signal in the data. There are various methods we could use for this too, such as integrating the Fourier series by quadrature to find the mean frequency, or choosing the largest single value.
• The constructors for each derived class will need arguments for all the numerical method’s control
parameters, such as the degree of spline for the interpolation method, the order of quadrature for
integrators, and so on.
• Where we have multiple algorithmic choices to make (interpolator, periodogram, peak finder…) the
number of derived classes would explode: class SunspotAnalyzerSplineFFTTrapeziumNearMode is
a bit unwieldy.
• The algorithmic choices are not then available for other projects
• This design doesn’t fit with a clean Ontology of “kinds of things”: there’s no Abstract Base for
spectrogram generators…
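The cells showing the Strategy alternative aren't all reproduced here; the essence is that the analysis class receives its numerical methods as objects. A sketch (the class name is an assumption; frequency_strategy and transform appear in the code below):

class SunspotDataAnalyser:
    def __init__(self, frequency_strategy):
        # The algorithm is injected rather than inherited, so it can be
        # swapped without defining a new subclass.
        self.frequency_strategy = frequency_strategy
        self.series = self.load_data()

    def frequency_data(self):
        return self.frequency_strategy.transform(self.series)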
self.end = self.data[-1, 0]
self.range = self.end - self.start
self.step = self.range / self.count
self.times = self.data[:, 0]
self.values = self.data[:, 1]
self.plot_data = [self.times, self.values]
self.inverse_plot_data = [1.0 / self.times[20:], self.values[20:]]
Then, our class which contains the analysis code, except the numerical methods
def load_data(self):
start_date_str = '1700-12-31'
end_date_str = '2014-01-01'
self.start_date = self.format_date(start_date_str)
end_date = self.format_date(end_date_str)
url_base = ("http://www.quandl.com/api/v1/datasets/" +
"SIDC/SUNSPOTS_A.csv")
x = requests.get(url_base,params={'trim_start': start_date_str,
'trim_end': end_date_str,
'sort_order': 'asc'})
secs_per_year = (datetime(2014, 1, 1) - datetime(2013, 1, 1)
).total_seconds()
data = csv.reader(StringIO(x.text)) # Convert requests
# result to look
# like a file buffer before
# reading with CSV
next(data) # Skip header row
self.series = Series([[
(self.format_date(row[0]) - self.start_date
).total_seconds()/secs_per_year,
float(row[1])] for row in data])
def frequency_data(self):
return self.frequency_strategy.transform(self.series)
"Return the next power of 2 above value"
return 2**(1 + int(log(value) / log(2)))
In [25]: plt.plot(*comparison)
plt.xlim(0, 16)
6.9.8 Results: Deviation of year length from average
In [26]: plt.plot(deviation)
6.10 Model-View-Controller
6.10.1 Separate graphics from science!
Whenever we are coding a simulation or model we want to:
We often see scientific programs where the code which is used to display what is happening is mixed up
with the mathematics of the analysis. This is hard to understand.
We can do better by separating the Model from the View, and using a “Controller” to manage them.
6.10.2 Model
This is where we describe our internal logic, rules, etc.
class Model:
def __init__(self):
self.positions = np.random.rand(100, 2)
self.speeds = (np.random.rand(100, 2) +
np.array([-0.5, -0.5])[np.newaxis, :])
self.deltat = 0.01
def simulation_step(self):
self.positions += self.speeds * self.deltat
def agent_locations(self):
return self.positions
6.10.3 View
This is where we describe what the user sees of our Model, i.e. what is displayed. You may have different types of visualisation (e.g. one type of projection, a 3D view, a surface view, …), which can be implemented in different view classes.
def update(self):
self.scatter.set_offsets(
self.model.agent_locations())
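The View's constructor isn't included in this extract; a sketch of what it might look like, assuming a matplotlib scatter plot of the agent locations (the update method repeats the fragment above so the class is complete):

from matplotlib import pyplot as plt

class View:
    def __init__(self, model):
        self.model = model
        self.figure = plt.figure()
        axes = plt.axes(xlim=(0, 1), ylim=(0, 1))
        positions = model.agent_locations()
        self.scatter = axes.scatter(positions[:, 0], positions[:, 1])

    def update(self):
        self.scatter.set_offsets(self.model.agent_locations())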
6.10.4 Controller
This is the class that tells the view that the model has changed, and updates the model with any change the user has input through the view.
In [29]: class Controller:
def __init__(self):
self.model = Model() # Or use Builder
self.view = View(self.model)
def animate(frame_number):
self.model.simulation_step()
self.view.update()
self.animator = animate
def go(self):
from matplotlib import animation
anim = animation.FuncAnimation(self.view.figure,
self.animator,
frames=200,
interval=50)
return anim.to_jshtml()
In [31]: HTML(contl.go())
6.11 Exercise: Refactoring The Bad Boids
6.11.1 Bad_Boids
We have written some very bad code implementing our Boids flocking example.
Here’s the Github link.
Please fork it on GitHub, and clone your fork.
For the Exercise, you should start from the GitHub repository, but here’s our terrible code:
In [1]: """
A deliberately bad implementation of
[Boids](http://dl.acm.org/citation.cfm?doid=37401.37406)
for use as an exercise on refactoring.
"""
import random
def update_boids(boids):
xs,ys,xvs,yvs=boids
# Fly towards the middle
for i in range(len(xs)):
for j in range(len(xs)):
xvs[i]=xvs[i]+(xs[j]-xs[i])*0.01/len(xs)
for i in range(len(xs)):
for j in range(len(xs)):
yvs[i]=yvs[i]+(ys[j]-ys[i])*0.01/len(xs)
# Fly away from nearby boids
for i in range(len(xs)):
for j in range(len(xs)):
if (xs[j]-xs[i])**2 + (ys[j]-ys[i])**2 < 100:
xvs[i]=xvs[i]+(xs[i]-xs[j])
yvs[i]=yvs[i]+(ys[i]-ys[j])
# Try to match speed with nearby boids
for i in range(len(xs)):
for j in range(len(xs)):
if (xs[j]-xs[i])**2 + (ys[j]-ys[i])**2 < 10000:
xvs[i]=xvs[i]+(xvs[j]-xvs[i])*0.125/len(xs)
yvs[i]=yvs[i]+(yvs[j]-yvs[i])*0.125/len(xs)
# Move according to velocities
for i in range(len(xs)):
xs[i]=xs[i]+xvs[i]
ys[i]=ys[i]+yvs[i]
figure=plt.figure()
axes=plt.axes(xlim=(-500,1500), ylim=(-500,1500))
scatter=axes.scatter(boids[0],boids[1])
def animate(frame):
update_boids(boids)
scatter.set_offsets(list(zip(boids[0],boids[1])))
cd bad_boids
python boids.py
You should be able to see some birds flying around, and then disappearing as they leave the window.
6.11.3 A regression test
First, have a look at the regression test we made.
To create it, we saved out the before and after state for one iteration of some boids, using ipython:
import yaml
import boids
from copy import deepcopy
before = deepcopy(boids.boids)
boids.update_boids(boids.boids)
after = boids.boids
fixture = {"before": before, "after": after}
fixture_file = open("fixture.yml", 'w')
fixture_file.write(yaml.dump(fixture))
fixture_file.close()
def test_bad_boids_regression():
regression_data = yaml.safe_load(open(os.path.join(os.path.dirname(__file__),'fixture.yml')))
boid_data = regression_data["before"]
update_boids(boid_data)
for after, before in zip(regression_data["after"], boid_data):
for after_value, before_value in zip(after, before):
assert_almost_equal(after_value, before_value, delta=0.01)
pytest
Edit the file to make the test fail, see the fail, then reset it:
Chapter 7
All concepts, ideas, or instructions should be in the program in just one place. Every line in the program
should say something useful and important.
We refer to code that respects this principle as DRY code.
In this chapter, we’ll look at some techniques that can enable us to refactor away repetitive code.
Since many of these techniques involve working with functions as if they were variables, we'll learn some functional programming. We'll also learn more about the innards of how Python implements classes.
We’ll also think about how to write programs that generate the more verbose, repetitive program we
could otherwise write. We call this metaprogramming.
A conceptual trick which is often used by computer scientists to teach the core idea of functional pro-
gramming is this: to write a program, in theory, you only ever need functions with one argument, even
when you think you need two or more. Why?
Let’s define a program to add two numbers:
add(5, 6)
Out[1]: 11
How could we do this, in a fictional version of Python which only defined functions of one argument? In
order to understand this, we’ll have to understand several of the concepts of functional programming. Let’s
start with a program which just adds five to something:
add_five(6)
Out[2]: 11
OK, we could define lots of these, one for each number we want to add. But that would be infinitely
repetitive. So, let’s try to metaprogram that: we want a function which returns these add_N() functions.
Let’s start with the easy case: a function which returns a function which adds 5 to something:
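The defining cell isn't reproduced in this extract; it would have been along these lines:

def generate_five_adder():
    def _add_five(a):
        return a + 5
    return _add_five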
coolfunction = generate_five_adder()
coolfunction(7)
Out[3]: 12
OK, so what happened there? Well, we defined a function inside the other function. We can always do
that:
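The pruned cell defined something like the sketch below (the name add_seven matches the error that follows; the other name is an assumption):

def thirty_function():
    def times_three(a):
        return a * 3
    def add_seven(a):
        return a + 7
    return times_three(add_seven(3))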
thirty_function()
Out[4]: 30
When we do this, the functions enclosed inside the outer function are local functions, and can’t be seen
outside:
In [5]: add_seven
---------------------------------------------------------------------------
<ipython-input-5-6fa1bcd39365> in <module>
----> 1 add_seven
There's not really much of a difference between functions and other variables in Python. A function is just a variable which can have () put after it to call the code!
In [6]: print(thirty_function)
<function thirty_function at 0x7f96601e50e0>
And we know that one of the things we can do with a variable is return it. So we can return a function,
and then call it outside:
In [9]: def deferred_greeting():
def greet():
print("Hello")
return greet
friendlyfunction = deferred_greeting()
In [10]: # Do something else
print("Just passing the time...")
Just passing the time…
So now, to finish this, we just need to return a function to add an arbitrary amount:
In [12]: def generate_adder(increment):
def _adder(a):
return a + increment
return _adder
add_3 = generate_adder(3)
In [13]: add_3(9)
Out[13]: 12
We can make this even prettier: let's make another variable pointing to our generate_adder() function:
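The missing cell presumably did no more than:

add = generate_adder  # "add" is now just another name for the same function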
In [15]: add(8)(5)
Out[15]: 13
In summary, we have started with a function that takes two arguments (add(a, b)) and replaced it with
a new function (add(a)(b)). This new function takes a single argument, and returns a function that itself
takes the second argument.
This may seem like an overly complicated process - and, in some cases, it is! However, this pattern of
functions that return functions (or even take them as arguments!) can be very useful. In fact, it is the basis
of decorators, a Python feature that we will discuss more in this chapter [notebook].
7.2.2 Closures
You may have noticed something a bit weird:
In the definition of generate_adder, increment is a local variable. It should have gone out of scope and
died at the end of the definition. How can the amount the returned adder function is adding still be kept?
This is called a closure. In Python, whenever a function definition references a variable in the surrounding
scope, it is preserved within the function definition.
You can close over global module variables as well:
name = "Eric"

def greet():
    print("Hello, ", name)

greet()
Hello, Eric
And note that the closure stores a reference to the variable in the surrounding scope: (“Late Binding”)
name = "John"
greet()
Hello, John
Out[18]: [5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
But this is sufficiently common that there’s a quick built-in:
In [19]: list(map(add_five, numbers))
Out[19]: [5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
This map operation is really important conceptually when understanding efficient parallel programming:
different computers can apply the mapped function to their input at the same time. We call this Single
Program, Multiple Data (SPMD). map is half of the map-reduce functional programming paradigm which
is key to the efficient operation of much of today’s “data science” explosion.
Let’s continue our functional programming mind-stretch by looking at reduce operations.
We very often want to loop with some kind of accumulator (an intermediate result that we update), such
as when finding a mean:
In [20]: def summer(data):
total = 0.0
for x in data:
total += x
return total
In [21]: summer(range(10))
Out[21]: 45.0
or finding a maximum:
In [22]: import sys
def my_max(data):
# Start with the smallest possible number
highest = sys.float_info.min
for x in data:
if x > highest:
highest = x
return highest
In [23]: my_max([2, 5, 10, -11, -5])
Out[23]: 10
These operations, where we have some variable which is building up a result, and the result is updated
with some operation, can be gathered together as a functional program, taking in (as an argument) the
operation to be used to combine results:
In [24]: def accumulate(initial, operation, data):
accumulator = initial
for x in data:
accumulator = operation(accumulator, x)
return accumulator
def my_sum(data):
def _add(a, b):
return a + b
return accumulate(0, _add, data)
In [25]: my_sum(range(5))
Out[25]: 10
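The cell defining bigger isn't shown in this extract; a definition along these lines is assumed:

def bigger(a, b):
    if b > a:
        return b
    return a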
def my_max(data):
return accumulate(sys.float_info.min, bigger, data)
Out[26]: 10
from functools import reduce

def my_max(data):
    return reduce(bigger, data, sys.float_info.min)
Out[27]: 10
Efficient map-reduce
Now, because these operations, bigger and _add, are such that e.g. (a+b)+c = a+(b+c) , i.e. they are
associative, we could apply our accumulation to the left half and the right half of the array, each on a
different computer, and then combine the two halves:
1 + 2 + 3 + 4 = (1 + 2) + (3 + 4)
Indeed, with a bigger array, we can divide-and-conquer more times:
1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 = ((1 + 2) + (3 + 4)) + ((5 + 6) + (7 + 8))
So with enough parallel computers, we could do this operation on eight numbers in three steps: first, we
use four computers to do one each of the pairwise adds.
Then, we use two computers to add the four totals.
Then, we use one of the computers to do the final add of the two last numbers.
You might be able to do the maths to see that with an N-element list, the number of such steps is proportional to the logarithm of N. We say that, with enough computers, reduction operations are O(ln N).
This course isn’t an introduction to algorithms, but we’ll talk more about this O() notation when we
think about programming for performance.
def count_Cs(sequence):
return sequence.count('C')
return max(counts)
def most_Gs_in_any_sequence(sequences):
return max(map(lambda sequence: sequence.count('G'), sequences))
data = [
"CGTA",
"CGGGTAAACG",
"GATTACA"
]
most_Gs_in_any_sequence(data)
Out[28]: 4
The syntax here means that these two definitions are identical:
most_of_given_base_in_any_sequence(data, 'A')
Out[30]: 3
The above fragment defined a lambda function as a closure over base. If you understood that, you’ve
got it!
To double all elements in an array:
Out[33]: 10
7.2.5 Using functional programming for numerical methods
Probably the most common use in research computing for functional programming is the application of a
numerical method to a function.
Consider this example which uses the newton function from SciPy, a root-finding function implementing
the Newton-Raphson method. The arguments we pass to newton are the function whose roots we want to
find, and a starting point to search from.
We will be using this to find the roots of the function f(x) = x² − x.
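The cell defining solve_me is not included in this extract; it would have looked something like:

def solve_me(x):
    # f(x) = x^2 - x has roots at x = 0 and x = 1
    return x ** 2 - x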
In [34]: %matplotlib inline
In [35]: from scipy.optimize import newton
from numpy import linspace, zeros
from matplotlib import pyplot as plt
xs = linspace(-1, 2, 50)
solved = [xs, list(map(solve_me, xs)), xs, zeros(len(xs))]
plt.plot(*solved)
Starting from 2, the root I found is 1.0
Starting from 0.2, the root I found is -3.441905100203782e-21
Sometimes such tools return another function, for example the derivative of their input function. This is
what a naive implementation of that could look like:
def derivative(func, eps):
    def _func_derived(x):
        return (func(x + eps) - func(x)) / eps
    return _func_derived
The derivative of solve_me is f′(x) = 2x − 1, which represents a straight line. We can verify that our computations are correct, i.e. that the returned function straight matches f′(x), by checking the value of straight at some x:
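The cell creating straight isn't reproduced here; it presumably used the derivative function above:

straight = derivative(solve_me, 0.01)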
In [38]: straight(3)
Out[38]: 5.00999999999987
or by plotting it:
In [39]: derived = (
xs, list(map(solve_me, xs)),
xs, list(map(derivative(solve_me, 0.01), xs))
)
plt.plot(*derived)
print(newton(derivative(solve_me, 0.01), 0))
0.495000000000001
Of course, coding your own numerical methods is bad, because the implementations you develop are
likely to be less efficient, less accurate and more error-prone than what you can find in existing established
libraries.
For example, the above definition could be replaced by:
import scipy.misc

def derivative(func):
    def _func_derived(x):
        return scipy.misc.derivative(func, x)
    return _func_derived
newton(derivative(solve_me), 0)
Out[40]: 0.5
If you’ve done a moderate amount of calculus, then you’ll find similarities between functional program-
ming in computer science and Functionals in the calculus of variations.
In [1]: bowl = {
"apple": 5,
"banana": 3,
"orange": 7
}
APPLE
BANANA
ORANGE
Surprisingly often, we want to iterate over something that takes a moderately large amount of memory
to store - for example, our map images in the green-graph example.
Our green-graph example involved making an array of all the maps between London and Birmingham.
This kept them all in memory at the same time: first we downloaded all the maps, then we counted the
green pixels in each of them.
This would NOT work if we used more points: eventually, we would run out of memory. We need to
use a generator instead. This chapter will look at iterators and generators in more detail: how they work,
when to use them, how to create our own.
7.3.1 Iterators
Consider the basic python range function:
In [2]: range(10)
In [3]: total = 0
for x in range(int(1e6)):
total += x
total
Out[3]: 499999500000
In order to avoid allocating a million integers, range actually uses an iterator.
We don’t actually need a million integers at once, just each integer in turn up to a million.
Because we can get an iterator from it, we say that a range is an iterable.
So we can for-loop over it:
In [4]: for i in range(3):
print(i)
0
1
2
There are two important Python built-in functions for working with iterables. First is iter, which lets
us create an iterator from any iterable object.
In [5]: a = iter(range(3))
Once we have an iterator object, we can pass it to the next function. This moves the iterator forward,
and gives us its next element:
In [6]: next(a)
Out[6]: 0
In [7]: next(a)
Out[7]: 1
In [8]: next(a)
Out[8]: 2
When we are out of elements, a StopIteration exception is raised:
In [9]: next(a)
---------------------------------------------------------------------------
<ipython-input-9-15841f3f11d4> in <module>
----> 1 next(a)
StopIteration:
This tells Python that the iteration is over. For example, if we are in a for i in range(3) loop, this
lets us know when we should exit the loop.
We can turn an iterable or iterator into a list with the list constructor function:
In [10]: list(range(5))
Out[10]: [0, 1, 2, 3, 4]
7.3.2 Defining Our Own Iterable
When we write next(a), under the hood Python tries to call the __next__() method of a. Similarly,
iter(a) calls a.__iter__().
We can make our own iterators by defining classes that can be used with the next() and iter() functions:
this is the iterator protocol.
For each of the concepts in Python, like sequence, container, iterable, the language defines a protocol, a
set of methods a class must implement, in order to be treated as a member of that concept.
To define an iterator, the methods that must be supported are __next__() and __iter__().
__next__() must update the iterator.
We’ll see why we need to define __iter__ in a moment.
Here is an example of defining a custom iterator class:
In [11]: class fib_iterator:
    """An iterator over part of the Fibonacci sequence."""

    def __init__(self, limit, current=1, previous=1):
        # Starting values chosen to match the outputs below.
        self.limit = limit
        self.current = current
        self.previous = previous

    def __iter__(self):
        return self

    def __next__(self):
        (self.previous, self.current) = (self.current, self.previous + self.current)
        self.limit -= 1
        if self.limit < 0:
            raise StopIteration()
        return self.current
In [12]: x = fib_iterator(5)
In [13]: next(x)
Out[13]: 2
In [14]: next(x)
Out[14]: 3
In [15]: next(x)
Out[15]: 5
In [16]: next(x)
Out[16]: 8
In [17]: for x in fib_iterator(5):
print(x)
2
3
5
8
13
In [18]: sum(fib_iterator(1000))
Out[18]: 29792421850814336033688281998163190091567313054381975903277817344053672219048890452003450816384
7.3.3 A shortcut to iterables: the __iter__ method
In fact, we don’t always have to define both __iter__ and __next__!
If, to be iterated over, a class just wants to behave as if it were some other iterable, you can just implement
__iter__ and return iter(some_other_iterable), without implementing next. For example, an image
class might want to implement some metadata, but behave just as if it were just a 1-d pixel array when
being iterated:
from numpy import array
from matplotlib import pyplot as plt

class MyImage(object):
    def __init__(self, pixels):
        self.pixels = array(pixels, dtype='uint8')
        self.channels = self.pixels.shape[2]

    def __iter__(self):
        # return an iterator over just the pixel values
        return iter(self.pixels.reshape(-1, self.channels))

    def show(self):
        plt.imshow(self.pixels, interpolation="None")

x = [[[255, 255, 0], [0, 255, 0]], [[0, 0, 255], [255, 255, 255]]]
image = MyImage(x)
In [21]: image.channels
Out[21]: 3
yellow
lime
blue
white
See how we used image in a for loop, even though it doesn’t satisfy the iterator protocol (we didn’t
define both __iter__ and __next__ for it)?
The key here is that we can use any iterable object (like image) in a for expression, not just iterators!
Internally, Python will create an iterator from the iterable (by calling its __iter__ method), but this means
we don’t need to define a __next__ method explicitly.
The iterator protocol is to implement both __iter__ and __next__, while the iterable protocol is to
implement __iter__ and return an iterator.
7.3.4 Generators
There’s a fair amount of “boiler-plate” in the above class-based definition of an iterable.
Python provides another way to specify something which meets the iterator protocol: generators.
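The generator's definition isn't shown in this extract; given the outputs below, it would have been something like:

def my_generator():
    yield 5
    yield 10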
x = my_generator()
In [24]: next(x)
Out[24]: 5
In [25]: next(x)
Out[25]: 10
In [26]: next(x)
---------------------------------------------------------------------------
<ipython-input-26-92de4e9f6b1e> in <module>
----> 1 next(x)
StopIteration:
5
10
In [28]: sum(my_generator())
Out[28]: 15
A function which has yield statements instead of a return statement returns temporarily: it automag-
ically becomes something which implements __next__.
Each call of next() returns control to the function where it left off.
Control passes back-and-forth between the generator and the caller. Our Fibonacci example therefore
becomes a function rather than a class.
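The yield_fibs generator itself isn't reproduced here; a sketch consistent with the outputs below (the starting values are chosen to match):

def yield_fibs(limit, current=1, previous=1):
    while limit > 0:
        limit -= 1
        current, previous = current + previous, current
        yield current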
We can now use the output of the function like a normal iterable:
In [30]: sum(yield_fibs(5))
Out[30]: 31
2
8
34
144
Sometimes we may need to gather all values from a generator into a list, such as before passing them to
a function that expects a list:
In [32]: list(yield_fibs(10))
In [33]: plt.plot(list(yield_fibs(20)))
7.4 Related Concepts
Iterables and generators can be used to achieve complex behaviour, especially when combined with functional
programming. In fact, Python itself contains some very useful language features that make use of these
practices: context managers and decorators. We have already seen these in this class, but here we discuss
them in more detail.
Writing example.yaml
{'modelname': 'brilliant'}
In addition to more convenient syntax, this takes care of any clean-up that has to be done after the file
is closed, even if any errors occur while we are working on the file.
How could we define our own one of these, if we too have clean-up code we always want to run after a
calling function has done its work, or set-up code we want to do first?
We can define a class that meets an appropriate protocol:
In [36]: class verbose_context():
def __init__(self, name):
self.name=name
def __enter__(self):
print("Get ready, ", self.name)
def __exit__(self, exc_type, exc_value, traceback):
print("OK, done")
with verbose_context("Monty"):
print("Doing it!")
However, this is pretty verbose! Again, a generator with yield makes for an easier syntax:
from contextlib import contextmanager

@contextmanager
def verbose_context(name):
    print("Get ready for action, ", name)
    yield name.upper()
    print("You did it")
7.4.2 Decorators
When doing functional programming, we may often want to define mutator functions which take in one
function and return a new function, such as our derivative example earlier.
def repeater(count):
def wrap_function_in_repeat(func):
def _repeated(x):
counter = count
while counter > 0:
counter -= 1
x = func(x)
return x
return _repeated
return wrap_function_in_repeat
fiftytimes = repeater(50)
fiftyroots = fiftytimes(sqrt)
print(fiftyroots(100))
1.000000000000004
It turns out that, quite often, we want to apply one of these to a function as we’re defining a class. For
example, we may want to specify that after certain methods are called, data should always be stored:
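The example cell isn't reproduced here; as an illustration only (the save_after decorator and Table class are not from the original notebook), such a decorator might look like:

def save_after(method):
    def _wrapped(self, *args, **kwargs):
        result = method(self, *args, **kwargs)
        self.save()  # assumes the object provides a save() method
        return result
    return _wrapped

class Table:
    def __init__(self):
        self.rows = []

    def save(self):
        print("Saving", self.rows)

    @save_after
    def add_row(self, row):
        self.rows.append(row)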
Any function which accepts a function as its first argument and returns a function can be used as a
decorator like this.
Much of Python's standard functionality is implemented as decorators: we've seen @contextmanager, @classmethod and @property. The @contextmanager decorator, for example, takes a generator function and turns it into something conforming to the context manager protocol.
In [39]: @repeater(3)
         def hello(name):
             return f"Hello, {name}"

In [40]: hello("Cleese")

Out[40]: 'Hello, Hello, Hello, Cleese'
Generators are also used by some testing frameworks: in nose, a test function which yields produces a series of sub-tests, one per yield. For example, loading fixtures from a YAML file and yielding one assertion per fixture:

def test_greeter():
    with open(os.path.join(os.path.dirname(
            __file__), 'fixtures', 'samples.yaml')
            ) as fixtures_file:
        fixtures = yaml.safe_load(fixtures_file)
        for fixture in fixtures:
            yield assert_exemplar(**fixture)

Each time a function beginning with test_ does a yield, it results in another test.
342
7.5.2 Negative test context managers
We have seen this:

with raises(AttributeError):
    x = 2
    x.foo()
We could reimplement this ourselves as a context manager:

@contextmanager
def reimplement_raises(exception):
    try:
        yield
    except exception:
        pass
    else:
        raise Exception("Expected,", exception,
                        " to be raised, nothing was.")
nose also supplies this pattern as a decorator, which marks a test as expecting a particular exception:

@nose.tools.raises(TypeError, ValueError)
def test_raises_type_error():
    raise TypeError("This test passes")
In [46]: test_raises_type_error()
In [47]: @nose.tools.raises(Exception)
def test_that_fails_by_passing():
pass
In [48]: test_that_fails_by_passing()
---------------------------------------------------------------------------
<ipython-input-48-627706dd82d1> in <module>
----> 1 test_that_fails_by_passing()
~/virtualenv/python3.7.5/lib/python3.7/site-packages/nose/tools/nontrivial.py in newfunc(*arg, *
65 else:
66 message = "%s() did not raise %s" % (name, valid)
---> 67 raise AssertionError(message)
68 newfunc = make_decorator(func)(newfunc)
69 return newfunc
We could reimplement this ourselves now too, using the context manager we wrote above:
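The cell defining homemade_raises_decorator is not shown; a minimal sketch, built on the reimplement_raises context manager above, might be:

def homemade_raises_decorator(exception):
    def wrap(func):
        def _wrapped(*args, **kwargs):
            # Run the test inside our home-made negative-test context manager
            with reimplement_raises(exception):
                func(*args, **kwargs)
        return _wrapped
    return wrap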
In [50]: @homemade_raises_decorator(TypeError)
def test_raises_type_error():
raise TypeError("This test passes")
In [51]: test_raises_type_error()
7.6 Exceptions
When we learned about testing, we saw that Python complains when things go wrong by raising an "Exception" naming a type of error:

In [1]: 1/0

---------------------------------------------------------------------------
<ipython-input-1-9e1622b385b6> in <module>
----> 1 1/0

ZeroDivisionError: division by zero
Exceptions are objects, forming a class hierarchy. We just raised an instance of the ZeroDivisionError
class, making the program crash. If we want more information about where this class fits in the hierarchy,
we can use Python’s inspect module to get a chain of classes, from ZeroDivisionError up to object:
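A sketch of the missing cell:

import inspect

inspect.getmro(ZeroDivisionError)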
Out[2]: (ZeroDivisionError, ArithmeticError, Exception, BaseException, object)
So we can see that a zero division error is a particular kind of Arithmetic Error.
In [3]: x = 1
for y in x:
print(y)
---------------------------------------------------------------------------
<ipython-input-3-127e9e41ff29> in <module>
1 x = 1
2
----> 3 for y in x:
4 print(y)

TypeError: 'int' object is not iterable
In [4]: inspect.getmro(TypeError)

Out[4]: (TypeError, Exception, BaseException, object)
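We can also define and raise our own exception types. The class definition cell is not shown here; a minimal sketch (the base class is an assumption - any Exception subclass will do) is:

class MyCustomErrorType(Exception):
    pass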
raise(MyCustomErrorType("Problem"))
---------------------------------------------------------------------------
<ipython-input-5-d49058201ff8> in <module>
3
4
----> 5 raise(MyCustomErrorType("Problem"))
MyCustomErrorType: Problem
345
You can add custom data to your exception:
class MyCustomErrorType(Exception):
    def __init__(self, category=None):
        self.category = category
    def __str__(self):
        return f"Error, category {self.category}"

raise(MyCustomErrorType(404))
---------------------------------------------------------------------------
<ipython-input-6-edbc5ba6fff8> in <module>
7
8
----> 9 raise(MyCustomErrorType(404))

MyCustomErrorType: Error, category 404
The real power of exceptions comes, however, not in letting them crash the program, but in letting your
program handle them. We say that an exception has been “thrown” and then “caught”.
try:
    config = yaml.safe_load(open("datasource.yaml"))
    user = config["userid"]
    password = config["password"]
except FileNotFoundError:
    print("No password file found, using anonymous user.")
    user = "anonymous"
    password = None

print(user)
Note that we specify only the error we expect to happen and want to handle. Sometimes you see code
that catches everything:
In [8]: try:
            config = yaml.lod(open("datasource.yaml"))
            user = config["userid"]
            password = config["password"]
        except:
            user = "anonymous"
            password = None

        print(user)
anonymous
This can be dangerous and can make it hard to find errors! There was a mistyped function name there
(‘lod’), but we did not notice the error, as the generic except caught it. Therefore, we should be specific
and catch only the type of error we want.
Let's create a function that reads a credentials file and returns the username and password to use:
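The definition of read_credentials is not shown in the surviving cells; a sketch consistent with the outputs below and with the discussion of the un-closed file (the exact messages and exception types are assumptions) is:

def read_credentials(source):
    try:
        datasource = open(source)
        config = yaml.safe_load(datasource)
        user = config["userid"]
        password = config["password"]
        datasource.close()   # never reached if a key is missing -- see below
    except FileNotFoundError:
        print("No password file found, using anonymous user.")
        user = "anonymous"
        password = None
    except KeyError:
        print("Expected keys not found in file")
        user = "anonymous"
        password = None
    return user, password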
In [11]: print(read_credentials('datasource2.yaml'))
('eidle', 'secret')
In [12]: print(read_credentials('datasource.yaml'))
In [13]: print(read_credentials('datasource3.yaml'))
Expected keys not found in file
('anonymous', None)
This last code has a flaw: the file was successfully opened and the missing key was noticed, but the file was never explicitly closed. This is usually OK in practice, as Python will close the file as soon as it notices there are no longer any references to it in memory, after the function exits. But it is not good practice: you should keep a file handle open for as short a time as possible.
Exceptions do not have to be caught close to the part of the program raising them. They can be caught anywhere "above" the raising point in the call stack: control can jump arbitrarily far in the program, up to the except clause of the nearest enclosing try statement on the call stack.
348
raise SyntaxError()
if x == 3:
raise TypeError()
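Only fragments of the cells defining this example survive above. A sketch of three nested functions consistent with the traces below (the specific exception types are assumptions) is:

def f3(x):
    try:
        print("F3Before")
        if x == 1:
            raise ValueError()    # caught here, in f3
        if x == 2:
            raise KeyError()      # caught one level up, in f2
        if x == 3:
            raise TypeError()     # caught two levels up, in f1
        print("F3After")
    except ValueError:
        print("F3Except")

def f2(x):
    try:
        print("F2Before")
        f3(x)
        print("F2After")
    except KeyError:
        print("F2Except")

def f1(x):
    try:
        print("F1Before")
        f2(x)
        print("F1After")
    except TypeError:
        print("F1Except")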
In [20]: f1(0)
F1Before
F2Before
F3Before
F3After
F2After
F1After
In [21]: f1(1)
F1Before
F2Before
F3Before
F3Except
F2After
F1After
In [22]: f1(2)
F1Before
F2Before
F3Before
F2Except
F1After
349
In [23]: f1(3)
F1Before
F2Before
F3Before
F1Except
Exceptions also affect design. Consider a function which can analyse a model supplied either as a dictionary or as the name of a YAML file. A first version checks the type explicitly:

def analysis(source):
    if type(source) == dict:
        name = source['modelname']
    else:
        content = open(source)
        source = yaml.safe_load(content)
        name = source['modelname']
    print(name)
In [25]: analysis({'modelname': 'Super'})
Super
analysis('example.yaml')
brilliant
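The improved version is not shown in the surviving cells: it replaces the explicit type check with try/except. A sketch consistent with the surrounding examples (the exact exception handling is an assumption) is:

import yaml

def analysis(source):
    try:
        name = source['modelname']     # works if source is dictionary-like
    except TypeError:
        # Not indexable by a string: treat it as a file name, or raw YAML text
        try:
            content = open(source)
            source = yaml.safe_load(content)
        except (FileNotFoundError, OSError):
            source = yaml.safe_load(source)
        name = source['modelname']
    print(name)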
This approach is more extensible, and behaves properly if we give it some other data-source
which responds like a dictionary or string.
analysis("modelname: Amazing")
Amazing
Sometimes we want to catch an error, partially handle it, perhaps add some extra data to the exception,
and then re-raise to be caught again further up the call stack.
The keyword “raise” with no argument in an except: clause will cause the caught error to be re-thrown.
Doing this is the only circumstance where it is safe to do except: without catching a specific type of error.
In [30]: try:
             # Something
             pass
         except:
             # Do this code here if anything goes wrong
             raise
If you want to be more explicit about where the error came from, you can use the raise from syntax,
which will create a chain of exceptions:
def lower_function():
    raise ValueError("Error in lower function!")

def higher_function():
    try:
        lower_function()
    except ValueError as e:
        raise RuntimeError("Error in higher function!") from e

higher_function()
---------------------------------------------------------------------------
<ipython-input-31-88616a1de55f> in higher_function()
6 try:
----> 7 lower_function()
8 except ValueError as e:
<ipython-input-31-88616a1de55f> in lower_function()
1 def lower_function():
----> 2 raise ValueError("Error in lower function!")
3
ValueError: Error in lower function!

The above exception was the direct cause of the following exception:
<ipython-input-31-88616a1de55f> in <module>
10
11
---> 12 higher_function()
<ipython-input-31-88616a1de55f> in higher_function()
7 lower_function()
8 except ValueError as e:
----> 9 raise RuntimeError("Error in higher function!") from e
10
11

RuntimeError: Error in higher function!
It can be useful to catch and re-throw an error as you go up the chain, doing any clean-up needed for
each layer of a program.
The error will finally be caught and not re-thrown only at a higher program layer that knows how to
recover. This is known as the “throw low catch high” principle.
352
Chapter 8
Operator overloading
We’ve seen already during the course that some operators behave differently depending on the data type.
For example, + adds numbers but concatenates strings or lists:
In [1]: 4 + 2
Out[1]: 6
In [2]: '4' + '2'
Out[2]: '42'
* is used for multiplication, or repeated addition:
In [3]: 6 * 7
Out[3]: 42
In [4]: 'me' * 3
Out[4]: 'mememe'
/ is division for numbers, and wouldn’t have a real meaning on strings. However, it’s used to separate
files and directories on your file system. Therefore, this has been overloaded in the pathlib module:
In [5]: import os
from pathlib import Path
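The example itself is not shown; a sketch of the overloaded / on Path objects (the file names here are made up) is:

data_dir = Path('data')
config_file = data_dir / 'config' / 'settings.yaml'   # a Path pointing at data/config/settings.yaml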
353
8.1 Overloading operators for your own classes
Now that we have seen that in Python operators do different things, how can we use + or other operators
on our own classes to achieve similar behaviour?
Let’s go back to our Maze example, and simplify our room object so it’s defined as:
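The simplified definition is not shown; consistent with the fuller version later in this chapter it is essentially:

class Room:
    def __init__(self, name, area):
        self.name = name
        self.area = area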
However, when we print it we don't get much information on the object. So, the first operator we are overloading is its string representation, by defining __str__:
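A sketch of such a method, to be added to the Room class above (the exact wording of the message is an assumption):

    def __str__(self):
        return f"Room: {self.name}, {self.area}m2"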
How can we add two rooms together? What would it mean? Let's define that the addition (+) of two rooms makes a new room with the combined size. We produce this behaviour by defining the __add__ method.
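A sketch of __add__ consistent with that description (the way the combined name is built is an assumption):

    def __add__(self, other):
        # A combined room: joined name, summed area
        return Room(f"{self.name}_{other.name}", self.area + other.area)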
Would the order in which the rooms are added affect the final room? As they are added now, the name is determined by the order, but do we want that? Or would we prefer to have:
354
small + big == big + small
That brings us to another operator: equality, ==. The method needed to produce such a comparison is __eq__.
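A sketch of __eq__, under the same assumption about how names are combined:

    def __eq__(self, other):
        # Equal if the areas match and the names contain the same parts,
        # whichever order the rooms were added in
        return (self.area == other.area and
                sorted(self.name.split('_')) == sorted(other.name.split('_')))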
So, in this way, two rooms are "equal" if they have the same area and their names are composed of the same parts.
True
False
You can add the other comparisons to know which room is bigger or smaller with the following functions:
Operator Function
< __lt__(self, other)
<= __le__(self, other)
> __gt__(self, other)
>= __ge__(self, other)
Let’s add people to the rooms and check whether they are in one room or not.
circus = Room('Circus', 3)
circus.add_occupant('Graham')
circus.add_occupant('Eric')
circus.add_occupant('Terry')
How do we know if John is in the room? We can check the occupants list ('John' in circus.occupants):

Out[16]: False
355
In [17]: class Room:
             def __init__(self, name, area):
                 self.name = name
                 self.area = area
                 self.occupants = []
             def add_occupant(self, name):
                 self.occupants.append(name)
             def __contains__(self, value):
                 return value in self.occupants

         circus = Room('Circus', 3)
         circus.add_occupant('Graham')
         circus.add_occupant('Eric')
         circus.add_occupant('Terry')

         'Terry' in circus

Out[17]: True
We can add lots more operators to classes. For example, __getitem__ lets you index or access part of your object like a sequence or dictionary, e.g. newObject[1] or newObject["data"], and __len__ returns the number of elements in your object. Probably the most exciting one is __call__, which overrides the () operator; this allows us to define classes that behave like functions! We call these callables.
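The Greeter class is not shown in the surviving cells; a minimal sketch consistent with the output below is:

class Greeter:
    def __init__(self, greeting):
        self.greeting = greeting
    def __call__(self, name):
        print(self.greeting, name)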
greeter_instance = Greeter("Hello")
greeter_instance("Eric")
Hello Eric
We’ve now come full circle in the blurring of the distinction between functions and objects! The full
power of functional programming is really remarkable.
If you want to know more about the topics in this lecture, using a different language syntax, I recommend
you watch the Abelson and Sussman “Structure and Interpretation of Computer Programs” lectures. These
are the Computer Science equivalent of the Feynman Lectures!
The next notebook shows a detailed example of how to apply operator overloading to create your own symbolic algebra system.
356
Chapter 9

Operator overloading: a symbolic algebra example
357
result = Expression([first, second, third])
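The cells defining Term, Expression and the extend helper are not shown here. extend is not part of standard Python; one possible sketch of such a class decorator (the original implementation may differ) is:

def extend(original_class):
    # Class decorator: copy the methods of the decorated (temporary) class
    # onto an existing class, so that a class can be built up cell by cell.
    def decorator(extension):
        for name, value in vars(extension).items():
            if callable(value):
                setattr(original_class, name, value)
        return original_class
    return decorator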
In [7]: @extend(Term)
        class Term:
            def add(self, *others):
                return Expression((self,) + others)
In [8]: @extend(Term)
        class Term:
            def multiply(self, *others):
                result_data = dict(self.data)
                result_coeff = self.coefficient
                # Convert arguments to Terms first if they are
                # constants or integers
                others = map(Term, others)
                # (the rest of the cell is missing; it presumably combined the
                #  exponents and coefficients along these lines)
                for another in others:
                    for symbol, exponent in another.data.items():
                        result_data[symbol] = result_data.get(symbol, 0) + exponent
                    result_coeff *= another.coefficient
                return Term(result_data, result_coeff)   # constructor form assumed
In [9]: @extend(Expression)
        class Expression:
            def add(self, *others):
                result = Expression(self.terms)
                for another in others:   # completion of the missing lines (assumed)
                    if type(another) == Term:
                        result.terms.append(another)
                    else:
                        result.terms += another.terms
                return result
In [10]: x = Term('x')
y = Term('y')
This is better, but we still can't write the expression in a 'natural' way.
However, we can define what * and + do when applied to Terms:
In [11]: @extend(Term)
class Term:
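    # (the body of this cell is not shown; presumably it simply delegated the
    #  operators to the add and multiply methods defined above, e.g.:)
    def __add__(self, other):
        return self.add(other)

    def __mul__(self, other):
        return self.multiply(other)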
In [12]: @extend(Expression)
         class Expression:
             def multiply(self, another):
                 # Distributive law left as exercise
                 pass
Out[13]: 'y'
print(five_x_ysq.data, five_x_ysq.coefficient)
{'x': 1, 'y': 2} 5
This is called operator overloading. We can define what add and multiply mean when applied to our class.
Note that so far this only works if the Term is on the left of the operator. However, we can define a reflected multiplication, which Python uses as a fallback when the left-hand operand does not know how to perform the multiplication:
In [15]: @extend(Expression)
         class Expression:
             def __radd__(self, other):
                 return self.__add__(other)

In [16]: @extend(Term)
         class Term:
             def __rmul__(self, other):
                 return self.__mul__(other)

In [17]: 5 * Term('x')
It's not easy at the moment to see if these things are working!
We can add another special method, __str__, which defines what happens if we try to print our class:
In [19]: @extend(Term)
         class Term:
             def __str__(self):
                 def symbol_string(symbol, power):
                     if power == 1:
                         return symbol
                     else:
                         return f"{symbol}^{power}"
                 symbol_strings = [symbol_string(symbol, power)
                                   for symbol, power in self.data.items()]
                 prod = '*'.join(symbol_strings)
                 if not prod:
                     return str(self.coefficient)
                 if self.coefficient == 1:
                     return prod
                 else:
                     return f"{self.coefficient}*{prod}"

In [20]: @extend(Expression)
         class Expression:
             def __str__(self):
                 return '+'.join(map(str, self.terms))
In [22]: print(expr)
5*x^2*y+7*x+2
9.1 Metaprogramming
Warning: Advanced topic!
In [1]: bananas = 0
apples = 0
oranges = 0
bananas += 1
apples += 1
oranges += 1
The right hand side of these assignments doesn’t respect the DRY principle. We could of course define
a variable for our initial value:
In [2]: initial_fruit_count = 0
bananas = initial_fruit_count
apples = initial_fruit_count
oranges = initial_fruit_count
However, this is still not as DRY as it could be: what if we wanted to replace the assignment with, say,
a class constructor and a buy operation:
361
class Basket:
    def __init__(self):
        self.count = 0
    def buy(self):
        self.count += 1

bananas = Basket()
apples = Basket()
oranges = Basket()

bananas.buy()
apples.buy()
oranges.buy()
We had to make the change in three places. Whenever you see a situation where a refactoring or change
of design might require you to change the code in multiple places, you have an opportunity to make the code
DRYer.
In this case, metaprogramming would let us create and initialise these variables with a single loop over the names of all the fruit baskets we want. So can we declare a new variable programmatically? Given a list of the names of the fruit baskets we want, can we initialise a variable with each of those names, as sketched below?
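The cell that does this is not shown; a sketch, assuming a list basket_names of the desired names (the same list is used again further down), is:

basket_names = ['bananas', 'apples', 'oranges', 'kiwis']
for name in basket_names:
    globals()[name] = Basket()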
globals()['apples']
Wow, we can! Every module or class in Python is, under the hood, a special dictionary storing the values in its namespace. We can therefore create new variables by assigning into this dictionary: globals() gives a reference to the attribute dictionary for the current module.
kiwis.count
Out[7]: 0
This is metaprogramming.
I would NOT recommend using it for an example as trivial as the one above. A better, more Pythonic
choice here would be to use a data structure to manage your set of fruit baskets:
In [8]: baskets = {}
for name in basket_names:
baskets[name] = Basket()
baskets['kiwis'].count
362
Out[8]: 0
Out[9]: 0
This is the nicest way to do it, I think. Code which feels like it needs metaprogramming to make it less repetitive can often instead be DRYed up using a refactored data structure, in a way which is cleaner and easier to understand. Nevertheless, metaprogramming is worth knowing.
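The Boring class used below is not shown; it is presumably just an empty class we can hang attributes on:

class Boring:
    pass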
In [11]: x = Boring()
x.name = "Michael"
In [12]: x.name
Out[12]: 'Michael'
And these turn up, as expected, in the attribute dictionary of the instance:
In [13]: x.__dict__
Out[14]: 'Michael'
If we want to add an attribute given its name as a string, we can use setattr:

In [15]: setattr(x, 'age', 75)
         x.age

Out[15]: 75
363
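The cell attaching a describe method is not shown; a sketch (the message format is an assumption) is:

def describe(self):
    return f"{self.name} is {self.age} years old"

setattr(Boring, 'describe', describe)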
In [17]: x.describe()
In [18]: x.describe
In [19]: Boring.describe
Note that we set this method as an attribute of the class, not the instance, so it is available to other
instances of Boring:
In [20]: y = Boring()
y.name = 'Terry'
y.age = 78
In [21]: y.describe()
We can define a standalone function, and then bind it to the class. Its first argument automagically
becomes self.
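The defining cell is not shown; a sketch consistent with the outputs below (a 75-year-old born in 1945, evaluated in 2020; the arithmetic is an assumption) is:

import datetime

def broken_birth_year(b_instance):
    # The first argument acts as `self` once the function is bound to the class
    current = datetime.datetime.now().year
    return current - b_instance.age

Boring.birth_year = broken_birth_year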
In [24]: x.birth_year()
Out[24]: 1945
In [25]: x.birth_year
In [26]: x.birth_year.__name__
Out[26]: 'broken_birth_year'
364
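The definition of Person is not shown; in the spirit of this section it presumably set its attributes with a loop and setattr, for example:

class Person:
    def __init__(self, name, age, job, children_count):
        # One loop instead of four separate assignments
        for attribute_name, value in zip(
                ['name', 'age', 'job', 'children_count'],
                [name, age, job, children_count]):
            setattr(self, attribute_name, value)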
In [28]: terry = Person("Terry", 78, "Screenwriter", 0)
In [29]: terry.name
Out[29]: 'Terry'
Sometimes metaprogramming will be really helpful in making non-repetitive code, and you should have it in your toolbox, which is why I'm teaching you it. But doing it all the time over-complicates matters. We've talked a lot about the DRY principle, but there is another, equally important principle:
Whenever you write code and you think, "Gosh, I'm really clever", you're probably doing it wrong. Code should be about clarity, not showing off.
365
Chapter 10
Performance programming
We’ve spent most of this course looking at how to make code readable and reliable. For research work, it is
often also important that code is efficient: that it does what it needs to do quickly.
It is very hard to work out beforehand whether code will be efficient or not: it is essential to profile code, to measure its performance, and to determine which aspects of it are slow.
When we looked at Functional programming, we claimed that code which is conceptualised in terms of
actions on whole data-sets rather than individual elements is more efficient. Let’s measure the performance
of some different ways of implementing some code and see how they perform.
value = position
return limit
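Only two lines of the definition of mandel1 survive above; a sketch of the full function, consistent with those fragments and with the Cython version later in this chapter, is:

def mandel1(position, limit=50):
    value = position
    while abs(value) < 2:
        limit -= 1
        value = value**2 + position
        if limit < 0:
            return 0
    return limit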
In [2]: xmin = -1.5
ymin = -1.0
xmax = 0.5
ymax = 1.0
resolution = 300
xstep = (xmax - xmin) / resolution
ystep = (ymax - ymin) / resolution
xs = [(xmin + (xmax - xmin) * i / resolution) for i in range(resolution)]
ys = [(ymin + (ymax - ymin) * i / resolution) for i in range(resolution)]
In [3]: %%timeit
data = [[mandel1(complex(x, y)) for x in xs] for y in ys]
523 ms ± 3.97 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In this lesson we will learn how to make a version of this code which runs ten times faster:
return diverged_at_count
367
In [10]: %matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(data_numpy, interpolation='none')
In [11]: %%timeit
data_numpy = mandel_numpy(values)
44.1 ms ± 621 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Out[12]: 0.0
368
def mandel1(position, limit=50):
    value = position
    while abs(value) < 2:
        limit -= 1
        value = value**2 + position
        if limit < 0:
            return 0
    return limit
In [4]: %%timeit
data2 = []
for y in ys:
row = []
for x in xs:
row.append(mandel1(complex(x, y)))
data2.append(row)
509 ms ± 8.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [5]: data2 = []
for y in ys:
row = []
for x in xs:
row.append(mandel1(complex(x, y)))
data2.append(row)
Interestingly, not much difference. I would have expected this to be slower, due to the normally high cost
of appending to data.
369
We ought to be checking if these results are the same by comparing the values in a test, rather than
re-plotting. This is cumbersome in pure Python, but easy with NumPy, so we’ll do this later.
Let’s try a pre-allocated data structure:
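The allocation cell is not shown; something like:

data3 = [[0] * resolution for _ in range(resolution)]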
In [8]: %%timeit
for j, y in enumerate(ys):
for i, x in enumerate(xs):
data3[j][i] = mandel1(complex(x, y))
511 ms ± 5.51 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
370
Nope, no gain there.
Let’s try using functional programming approaches:
In [11]: %%timeit
data4 = []
for y in ys:
bind_mandel = lambda x: mandel1(complex(x, y))
data4.append(list(map(bind_mandel, xs)))
521 ms ± 8.39 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [12]: data4 = []
for y in ys:
bind_mandel = lambda x: mandel1(complex(x, y))
data4.append(list(map(bind_mandel, xs)))
371
That was a tiny bit slower.
So, what do we learn from this? Our mental image of what code should be faster or slower is often wrong,
or doesn’t make much difference. The only way to really improve code performance is empirically, through
measurements.
The real magic of numpy arrays is that most python operations are applied, quickly, on an elementwise
basis:
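The setup cell for this example is not shown; arrays consistent with the outputs below would be:

import numpy as np

x = np.arange(0, 256, 4).reshape(8, 8)   # multiples of 4, matching the output of x + 10 below
y = np.zeros((8, 8), dtype=int)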
In [4]: %%timeit
for i in range(8):
for j in range(8):
y[i][j] = x[i][j] + 10
51.6 µs ± 926 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [5]: x + 10
Out[5]: array([[ 10, 14, 18, 22, 26, 30, 34, 38],
[ 42, 46, 50, 54, 58, 62, 66, 70],
[ 74, 78, 82, 86, 90, 94, 98, 102],
[106, 110, 114, 118, 122, 126, 130, 134],
[138, 142, 146, 150, 154, 158, 162, 166],
[170, 174, 178, 182, 186, 190, 194, 198],
[202, 206, 210, 214, 218, 222, 226, 230],
[234, 238, 242, 246, 250, 254, 258, 262]])
Numpy’s mathematical functions also happen this way, and are said to be “vectorized” functions.
In [6]: np.sqrt(x)
Out[6]: array([[ 0. , 2. , 2.82842712, 3.46410162, 4. ,
4.47213595, 4.89897949, 5.29150262],
[ 5.65685425, 6. , 6.32455532, 6.63324958, 6.92820323,
7.21110255, 7.48331477, 7.74596669],
[ 8. , 8.24621125, 8.48528137, 8.71779789, 8.94427191,
9.16515139, 9.38083152, 9.59166305],
[ 9.79795897, 10. , 10.19803903, 10.39230485, 10.58300524,
10.77032961, 10.95445115, 11.13552873],
[11.3137085 , 11.48912529, 11.66190379, 11.83215957, 12. ,
12.16552506, 12.32882801, 12.489996 ],
[12.64911064, 12.80624847, 12.9614814 , 13.11487705, 13.26649916,
13.41640786, 13.56465997, 13.7113092 ],
[13.85640646, 14. , 14.14213562, 14.28285686, 14.4222051 ,
14.56021978, 14.69693846, 14.83239697],
[14.96662955, 15.09966887, 15.23154621, 15.3622915 , 15.49193338,
15.62049935, 15.74801575, 15.87450787]])
Numpy contains many useful functions for creating matrices. In our earlier lectures we’ve seen linspace
and arange for evenly spaced numbers.
In [7]: np.linspace(0, 10, 21)
Out[7]: array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. ,
5.5, 6. , 6.5, 7. , 7.5, 8. , 8.5, 9. , 9.5, 10. ])
In [8]: np.arange(0, 10, 0.5)
Out[8]: array([0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. , 5.5, 6. ,
6.5, 7. , 7.5, 8. , 8.5, 9. , 9.5])
Here’s one for creating matrices like coordinates in a grid:
In [9]: xmin = -1.5
ymin = -1.0
xmax = 0.5
ymax = 1.0
resolution = 300
xstep = (xmax - xmin) / resolution
ystep = (ymax - ymin) / resolution
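The cell building the grid is not shown; one way to do it, consistent with the variables used below, is np.mgrid:

ymatrix, xmatrix = np.mgrid[ymin:ymax:ystep, xmin:xmax:xstep]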
In [10]: print(ymatrix)
We can add these together to make a grid containing the complex numbers we want to test for membership
in the Mandelbrot set.
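The cell combining them is not shown; it presumably formed the complex grid as:

values = xmatrix + 1j * ymatrix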
In [12]: print(values)
In [13]: z0 = values
z1 = z0 * z0 + values
z2 = z1 * z1 + values
z3 = z2 * z2 + values
In [14]: print(z3)
[[24.06640625+20.75j 23.16610231+20.97899073j
22.27540349+21.18465854j … 11.20523832 -1.88650846j
11.5734533 -1.6076251j 11.94394738 -1.31225596j]
[23.82102149+19.85687829j 22.94415031+20.09504528j
22.07634812+20.31020645j … 10.93323949 -1.5275283j
11.28531994 -1.24641067j 11.63928527 -0.94911594j]
[23.56689029+18.98729242j 22.71312709+19.23410533j
21.86791017+19.4582314j … 10.65905064 -1.18433756j
10.99529965 -0.90137318j 11.33305161 -0.60254144j]
…
[23.30453709-18.14090998j 22.47355537-18.39585192j
21.65061048-18.62842771j … 10.38305264 +0.85663867j
10.70377437 +0.57220289j 11.02562928 +0.27221042j]
[23.56689029-18.98729242j 22.71312709-19.23410533j
21.86791017-19.4582314j … 10.65905064 +1.18433756j
10.99529965 +0.90137318j 11.33305161 +0.60254144j]
[23.82102149-19.85687829j 22.94415031-20.09504528j
22.07634812-20.31020645j … 10.93323949 +1.5275283j
11.28531994 +1.24641067j 11.63928527 +0.94911594j]]
In [16]: mandel1(values)
---------------------------------------------------------------------------
<ipython-input-16-484a82ca909a> in <module>
----> 1 mandel1(values)
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or
So can we just apply our mandel1 function to the whole matrix of values? No. The logic of our current routine would require stopping for some elements and not for others.
We can ask numpy to vectorise our method for us:
375
In [17]: mandel2 = np.vectorize(mandel1)
In [20]: %%timeit
data5 = mandel2(values)
476 ms ± 2.92 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
This is not significantly faster. When we use vectorize it is just hiding a plain old Python for loop under the hood. We want to make the loop over matrix elements take place in the "C layer".
What if we just apply the Mandelbrot algorithm without checking for divergence until the end:
376
In [22]: data6 = mandel_numpy_explode(values)
/home/travis/virtualenv/python3.7.5/lib/python3.7/site-packages/ipykernel_launcher.py:5: RuntimeWarning:
"""
/home/travis/virtualenv/python3.7.5/lib/python3.7/site-packages/ipykernel_launcher.py:5: RuntimeWarning:
"""
/home/travis/virtualenv/python3.7.5/lib/python3.7/site-packages/ipykernel_launcher.py:6: RuntimeWarning:
/home/travis/virtualenv/python3.7.5/lib/python3.7/site-packages/ipykernel_launcher.py:6: RuntimeWarning:
/home/travis/virtualenv/python3.7.5/lib/python3.7/site-packages/ipykernel_launcher.py:9: RuntimeWarning:
if __name__ == '__main__':
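The definition of mandel_numpy is not shown in the surviving cells; a sketch consistent with mandel4 and mandel5 later in this chapter, which avoids the overflow above by clamping points once they have diverged, is:

def mandel_numpy(position, limit=50):
    value = position
    diverged_at_count = np.zeros(position.shape)
    while limit > 0:
        limit -= 1
        value = value**2 + position
        diverging = abs(value) > 2
        # Record the step at which each point first diverged
        first_diverged_this_time = np.logical_and(diverging,
                                                  diverged_at_count == 0)
        diverged_at_count[first_diverged_this_time] = limit
        # Stop diverged points from overflowing by clamping them
        value[diverging] = 2
    return diverged_at_count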
In [25]: %%timeit
data6 = mandel_numpy(values)
377
Wow, that was TEN TIMES faster.
There are quite a few NumPy tricks there; let's remind ourselves how they work:
In [28]: x = np.arange(10)
y = np.ones([10]) * 5
z = x > y
In [29]: x
In [30]: y
Out[30]: array([5., 5., 5., 5., 5., 5., 5., 5., 5., 5.])
In [31]: print(z)
[False False False False False False True True True True]
In [32]: x[x>3]
In [33]: x[np.logical_not(z)]
Out[33]: array([0, 1, 2, 3, 4, 5])
And you can use such an index as the target of an assignment:
In [34]: x[z] = 5
x
Out[34]: array([0, 1, 2, 3, 4, 5, 5, 5, 5, 5])
Note that we didn’t compare two arrays to get our logical array, but an array to a scalar integer – this
was broadcasting again.
return diverged_at_count
In [36]: data7 = mandel4(values)
In [37]: plt.imshow(data7, interpolation='none')
Out[37]: <matplotlib.image.AxesImage at 0x7f47f9df5c50>
379
In [38]: %%timeit
data7 = mandel4(values)
65.4 ms ± 753 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Note that here, the looping over Mandelbrot steps still happened in Python, but the loop over positions happened in C, inside NumPy. The code was amazingly quick compared to pure Python.
Can we do better by avoiding a square root?
In [39]: def mandel5(position, limit=50):
             value = position
             diverged_at_count = np.zeros(position.shape)
             while limit > 0:
                 limit -= 1
                 value = value**2 + position
                 diverging = value * np.conj(value) > 4
                 first_diverged_this_time = np.logical_and(diverging, diverged_at_count == 0)
                 diverged_at_count[first_diverged_this_time] = limit
                 value[diverging] = 2
             return diverged_at_count
In [40]: %%timeit
data8 = mandel5(values)
44.3 ms ± 303 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
380
Out[43]: 0.0
In [44]: data2 = []
for y in ys:
row = []
for x in xs:
row.append(mandel1(complex(x, y)))
data2.append(row)
---------------------------------------------------------------------------
<ipython-input-45-b9ae9db328ea> in <module>
----> 1 data2 - data1

TypeError: unsupported operand type(s) for -: 'list' and 'list'
Out[46]: 0
NumPy provides some convenient assertions to help us write unit tests with NumPy arrays:
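For example (the arrays compared and the tolerances here are illustrative):

from numpy.testing import assert_allclose

assert_allclose(data7, data_numpy, rtol=1e-7, atol=0)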
381
np.conj(value[calculating])>4
calculating = np.logical_and(calculating,
np.logical_not(diverging_now))
diverged_at_count[diverging_now] = limit
return diverged_at_count
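Only the tail of the definition of mandel6 survives above; a sketch of the whole function, consistent with those lines (the earlier part is an assumption), is:

def mandel6(position, limit=50):
    value = np.copy(position)
    calculating = np.ones(position.shape, dtype=bool)   # points still iterating
    diverged_at_count = np.zeros(position.shape)
    while limit > 0:
        limit -= 1
        value[calculating] = value[calculating]**2 + position[calculating]
        diverging_now = np.zeros(position.shape, dtype=bool)
        diverging_now[calculating] = (value[calculating] *
                                      np.conj(value[calculating]) > 4)
        calculating = np.logical_and(calculating,
                                     np.logical_not(diverging_now))
        diverged_at_count[diverging_now] = limit
    return diverged_at_count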
In [52]: %%timeit
data8 = mandel6(values)
58.8 ms ± 530 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
This was not faster, even though it was doing less work.
This often happens: on modern computers, branches (if statements, function calls) and memory access are usually the rate-determining steps, not maths.
Complicating your logic to avoid calculations can therefore sometimes slow you down. The only way to know is to measure.
382
In [54]: x = np.arange(64)
y = x.reshape([8,8])
y
We can use a : to indicate we want all the values from a particular axis:
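z, used in the cells below, is not defined in the surviving text; consistent with the outputs that follow, it must be a three-axis view of the same 64 values:

z = x.reshape([4, 4, 4])
z[1, :, :]     # ':' selects everything along the second and third axes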
We can mix array selectors, boolean selectors, :s and ordinary array sequencers:
383
Out[59]: array([[[ 4, 5, 6],
[12, 13, 14]],
Out[60]: (4, 1, 2)
When we use basic indexing with integers and : expressions, we get a view on the matrix so a copy is
avoided:
In [61]: a = z[:, :, 2]
a[0, 0] = -500
z
In [62]: z[1]
In [63]: z[...,2]
384
Out[63]: array([[-500, 6, 10, 14],
[ 18, 22, 26, 30],
[ 34, 38, 42, 46],
[ 50, 54, 58, 62]])
However, boolean mask indexing and array filter indexing always cause a copy.
Let’s try again at avoiding doing unnecessary work by using new arrays containing the reduced data
instead of a mask:
value = value[carry_on]
indices = indices[:, carry_on]
positions = positions[carry_on]
diverged_at_count[diverging_now_indices[0,:],
diverging_now_indices[1,:]] = limit
return diverged_at_count
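Again, only the tail of mandel7 survives above. A sketch of the whole function, keeping explicit index arrays for the points still being calculated (the details of the setup are assumptions), is:

def mandel7(position, limit=50):
    shape = position.shape
    # A (row, column) index for every point, kept alongside its value
    indices = np.array(np.meshgrid(np.arange(shape[0]),
                                   np.arange(shape[1]),
                                   indexing='ij')).reshape(2, -1)
    positions = position.reshape(-1)
    value = positions.copy()
    diverged_at_count = np.zeros(shape)
    while limit > 0:
        limit -= 1
        value = value**2 + positions
        diverging_now = value * np.conj(value) > 4
        diverging_now_indices = indices[:, diverging_now]
        carry_on = np.logical_not(diverging_now)
        # Keep only the points that have not yet diverged
        value = value[carry_on]
        indices = indices[:, carry_on]
        positions = positions[carry_on]
        diverged_at_count[diverging_now_indices[0, :],
                          diverging_now_indices[1, :]] = limit
    return diverged_at_count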
385
In [67]: %%timeit
data9 = mandel7(values)
65.7 ms ± 441 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Still slower. Probably due to lots of copies – the point here is that you need to experiment to see which
optimisations will work. Performance programming needs to be empirical.
10.4 Profiling
We’ve seen how to compare different functions by the time they take to run. However, we haven’t obtained
much information about where the code is spending more time. For that we need to use a profiler. IPython
offers a profiler through the %prun magic. Let’s use it to see how it works:
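The profiling cell itself is not shown; profiling one of the pure-Python implementations above would look something like:

%prun data = [[mandel1(complex(x, y)) for x in xs] for y in ys]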
%prun shows a line per function call, ordered by the total time spent in each. However, sometimes a line-by-line output may be more helpful. For that we can use the line_profiler package (you need to install it using pip). Once installed, you can activate it in any notebook by running:
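That activation command, and a line-by-line profile of one of the functions above (the profiled function here is just an example), would look something like:

%load_ext line_profiler

%lprun -f mandel1 [[mandel1(complex(x, y)) for x in xs] for y in ys]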
Here, it is clearer to see which operations are keeping the code busy.
10.5 Cython
Cython can be viewed as an extension of Python where variables and functions are annotated with extra information, in particular types. The resulting Cython source code is compiled into optimized C or C++ code, which can yield a substantial speed-up of slow Python code. In other words, Cython provides a way of writing Python with performance comparable to that of C/C++.
In a Jupyter notebook, everything is a lot easier. One needs only to load the Cython extension (%load_ext Cython) at the beginning and put the %%cython mark at the top of cells containing Cython code. Cells with the Cython mark will be treated as .pyx code and, consequently, compiled into C.
For details, please see Building Cython Code.
Pure python Mandelbrot set:
386
In [1]: xmin = -1.5
ymin = -1.0
xmax = 0.5
ymax = 1.0
resolution = 300
xstep = (xmax - xmin) / resolution
ystep = (ymax - ymin) / resolution
xs = [(xmin + (xmax - xmin) * i / resolution) for i in range(resolution)]
ys = [(ymin + (ymax - ymin) * i / resolution) for i in range(resolution)]
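The definition of mandel itself is not shown; it has the same body as mandel1 earlier in this chapter:

def mandel(position, limit=50):
    value = position
    while abs(value) < 2:
        limit -= 1
        value = value**2 + position
        if limit < 0:
            return 0
    return limit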
Compiled by Cython:
In [4]: %%cython
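         # (the body of this cell is not shown; presumably it is the same function,
         #  unchanged, renamed so the compiled version can be compared -- a sketch:)
         def mandel_cython(position, limit=50):
             value = position
             while abs(value) < 2:
                 limit -= 1
                 value = value**2 + position
                 if limit < 0:
                     return 0
             return limit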
387
In [6]: %timeit [[mandel(complex(x,y)) for x in xs] for y in ys] # pure python
%timeit [[mandel_cython(complex(x,y)) for x in xs] for y in ys] # cython
537 ms ± 5.64 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
303 ms ± 6.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
We have improved the performance by a factor of nearly two (537 ms down to 303 ms) just by using the Cython compiler, without changing the code!
In [7]: %%cython
        def var_typed_mandel_cython(position, limit=50):
            cdef double complex value # typed variable
            value = position
            while abs(value) < 2:
                limit -= 1
                value = value**2 + position
                if limit < 0:
                    return 0
            return limit
In [8]: %%cython
        cpdef call_typed_mandel_cython(double complex position,
                                       int limit=50): # typed function
            cdef double complex value # typed variable
            value = position
            while abs(value) < 2:
                limit -= 1
                value = value**2 + position
                if limit < 0:
                    return 0
            return limit
Timing a single call of each version in turn (pure Python, compiled unchanged, with a typed variable, and with a fully typed function signature, respectively):

10.5 µs ± 59.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
5.95 µs ± 67.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
2.97 µs ± 5.62 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
656 ns ± 3.89 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [14]: %%cython
         import numpy as np
         cimport numpy as np

         # (the signature line is missing in the source; it declared the argument
         #  as a typed array, roughly as follows -- the function name is assumed)
         cpdef numpy_cython_1(np.ndarray[double complex, ndim=2] position,
                              int limit=50):
             xlim = position.shape[1]
             ylim = position.shape[0]
             diverged_at = np.zeros([ylim, xlim], dtype=int)
             for x in xrange(xlim):
                 for y in xrange(ylim):
                     steps = limit
                     value = position[y,x]
                     pos = position[y,x]
                     while abs(value) < 2 and steps >= 0:
                         steps -= 1
                         value = value**2 + pos
                     diverged_at[y,x] = steps
             return diverged_at
Note the double import of numpy: the standard numpy module and a Cython-enabled version of numpy that ensures fast indexing of (and other operations on) arrays. Both import statements are necessary in Cython code that uses numpy arrays. The new thing in the code above is the declaration of the array argument as np.ndarray.
In [15]: %timeit data_cy = [[mandel(complex(x,y)) for x in xs] for y in ys] # pure python
513 ms ± 1.04 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
390
In [22]: %timeit [math.sin(i) for i in range(int(1e7))] # python
1.75 s ± 15.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
917 ms ± 18.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # the same loop compiled with Cython
4.53 ms ± 2.24 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # vectorised with np.sin on a NumPy array
Let’s look at appending data into a NumPy array, compared to a plain Python list:
391
In [6]: plot_time(time_append_to_ndarray, counts)
392
Adding an element to a Python list is way faster! Also, it seems that adding an element to a Python list
is independent of the length of the list, but it’s not so for a NumPy array.
How do they perform when accessing an element in the middle?
393
Both scale well for accessing the middle element.
What about inserting at the beginning?
If we want to insert an element at the beginning of a Python list we can do:
In [11]: x = list(range(5))
x
Out[11]: [0, 1, 2, 3, 4]
Out[12]: [-1, 0, 1, 2, 3, 4]
394
list performs badly for insertions at the beginning!
There are containers in Python that work well for insertion at the start:
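For example, collections.deque (discussed below):

from collections import deque

x = deque(range(5))
x.appendleft(-1)      # cheap insertion at the front
list(x)               # [-1, 0, 1, 2, 3, 4]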
395
But looking up in the middle scales badly:
396
What is going on here?
Arrays are stored as contiguous memory. Anything which changes the length of the array requires the
whole array to be copied elsewhere in memory.
This copy takes time proportional to the array size.
The Python list type is also an array, but it is allocated with extra memory. Only when that extra memory is exhausted is a copy needed.
If the extra memory is typically proportional to the current size of the array, a copy is needed only once in every N appends, and costs N to make, so on average appends are cheap. We call this amortized constant time.
This makes it fast to look up values in the middle. However, it may also use more space than is needed.
The deque type works differently: each element contains a pointer to the next. Inserting elements is
therefore very cheap, but looking up the Nth element requires traversing N such pointers.
398
10.6.1 Dictionary performance
For another example, let’s consider the performance of a dictionary versus a couple of other ways in which
we could implement an associative array.
If we have an evil dictionary of N elements, how long would it take - on average - to find an element?
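The "evil" dictionary is not defined in the surviving cells; it is presumably a deliberately naive implementation that checks each key in turn, something like (the sample data beyond the "Job" entry is made up):

class evildict:
    def __init__(self, data):
        self.data = data                    # a list of [key, value] pairs
    def __getitem__(self, akey):
        for key, value in self.data:
            if key == akey:
                return value
        raise KeyError(akey)

eric = [["Name", "Eric Idle"], ["Job", "Comedian"], ["Home", "London"]]
eric_evil = evildict(eric)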
In [23]: eric_evil["Job"]
Out[23]: 'Comedian'
In [25]: eric_evil["Job"]
Out[25]: 'Comedian'
399
What if we created a dictionary where we bisect the search?
class sortdict:
    # The class name and constructor are assumptions; only __getitem__ survives here
    def __init__(self, data):
        self.data = sorted(data, key=lambda item: item[0])
        self.keys = [key for key, value in self.data]
    def __getitem__(self, akey):
        from bisect import bisect_left
        loc = bisect_left(self.keys, akey)
        if loc != len(self.data):
            return self.data[loc][1]
        raise KeyError()
In [30]: eric_sorted["Job"]
Out[30]: 'Comedian'
400
We can’t really see what’s going on here for the sorted example as there’s too much noise, but theoretically
we should get logarithmic asymptotic performance. We write this down as 𝑂(ln 𝑁 ). This doesn’t mean
there isn’t also a constant term, or a term proportional to something that grows slower (such as ln(ln 𝑁 )):
we always write down just the term that is dominant for large 𝑁 . We saw before that list is 𝑂(1) for
appends, 𝑂(𝑁 ) for inserts. Numpy’s array is 𝑂(𝑁 ) for appends.
401
The simple check-each-in-turn solution is 𝑂(𝑁 ) - linear time.
402
Python’s built-in dictionary is, amazingly, O(1): the time is independent of the size of the dictionary.
This uses a miracle of programming called the Hash Table: you can learn more about these issues at this
video from Harvard University. This material is pretty advanced, but, I think, really interesting!
Optional exercise: determine the asymptotic performance of the Boids model in terms of the number of Boids. Make graphs to support this. Bonus: how would the performance scale with the number of dimensions?
403
Chapter 11
An Adventure In Packaging: An
exercise in research software
engineering.
In this exercise, you will convert the already provided solution to the programming challenge defined in this
Jupyter notebook, into a proper Python package.
The code to actually solve the problem is already given, but as roughly sketched out code in a notebook.
Your job will be to convert the code into a formally structured package, with unit tests, a command line
interface, and demonstrating your ability to use git version control.
The exercise will be semi-automatically marked, so it is very important that you adhere in your solution
to the correct file and folder structure, as defined in the rubric below. An otherwise valid solution which
doesn’t work with our marking tool will not be given credit.
First, we set out the problem we are solving, and its informal solution. Next, we specify in detail the
target for your tidy solution. Finally, to assist you in creating a good solution, we state the marks scheme
we will use.
404
Chapter 12
We are going to look at a simple game, a modified version of one with a long history. Games of this kind
have been used as test-beds for development of artificial intelligence.
A dungeon is a network of connected rooms. One or more rooms contain treasure. Your character, the
adventurer, moves between rooms, looking for the treasure. A troll is also in the dungeon. The troll moves
between rooms at random. If the troll catches the adventurer, you lose. If you find treasure before being
eaten, you win. (In this simple version, we do not consider the need to leave the dungeon.)
The starting rooms for the adventurer and troll are given in the definition of the dungeon.
The way the adventurer moves is called a strategy. Different strategies are more or less likely to succeed.
We will consider only one strategy this time - the adventurer will also move at random.
We want to calculate the probability that this strategy will be successful for a given dungeon.
We will use a Monte Carlo approach - simply executing the random strategy many times, and counting
the proportion of times the adventurer wins.
Our data structure for a dungeon will be somewhat familiar from the Maze example:
In [1]: dungeon1 = {
'treasure' : [1], # Room 1 contains treasure
'adventurer': 0, # The adventurer starts in room 0
'troll': 2, # The troll starts in room 2
'network': [[1], #Room zero connects to room 1
[0,2], #Room one connects to rooms 0 and 2
[1] ] #Room 2 connects to room 1
}
So this example shows a 3-room linear corridor: with the adventurer at one end, the troll at the other,
and the treasure in the middle.
With the adventurer following a random walk strategy, we can define a function to update a dungeon:
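The definition of update_dungeon is not shown in the surviving cells; a sketch consistent with the result below (both characters take one random step along the network) is:

import random

def update_dungeon(dungeon):
    # Move the adventurer and the troll each to a random neighbouring room
    adventurer_moves = dungeon['network'][dungeon['adventurer']]
    dungeon['adventurer'] = random.choice(adventurer_moves)
    troll_moves = dungeon['network'][dungeon['troll']]
    dungeon['troll'] = random.choice(troll_moves)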
In [4]: update_dungeon(dungeon1)
dungeon1
Out[4]: {'treasure': [1], 'adventurer': 1, 'troll': 1, 'network': [[1], [0, 2], [1]]}
We can also define a function to test if the adventurer has won, died, or if the game continues:
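The definition of outcome is also not shown; a sketch consistent with how it is used (losing takes precedence when the adventurer and the troll share a room, as Out[6] shows) is:

def outcome(dungeon):
    if dungeon['adventurer'] == dungeon['troll']:
        return -1                                  # caught by the troll: lose
    if dungeon['adventurer'] in dungeon['treasure']:
        return 1                                   # found the treasure: win
    return 0                                       # game continues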
In [6]: outcome(dungeon1)
Out[6]: -1
import copy

def run_to_result(dungeon):
    dungeon = copy.deepcopy(dungeon)
    max_steps = 1000
    for _ in range(max_steps):
        result = outcome(dungeon)
        if result != 0:
            return result
        update_dungeon(dungeon)
    # don't run forever; return 0 (e.g. if there is no treasure and the troll can't reach the adventurer)
    return result
In [8]: dungeon2 = {
'treasure' : [1], # Room 1 contains treasure
'adventurer': 0, # The adventurer starts in room 0
'troll': 2, # The troll starts in room 2
'network': [[1], #Room zero connects to room 1
[0,2], #Room one connects to rooms 0 and 2
[1,3], #Room 2 connects to room 1 and 3
[2]] # Room 3 connects to room 2
         }
In [9]: run_to_result(dungeon2)
Out[9]: -1
Note that we might get a different result sometimes, depending on how the adventurer moves, so we need
to run multiple times to get our probability:
406
def success_chance(dungeon, trials=10000):   # default trial count assumed
    successes = 0
    for _ in range(trials):
        outcome = run_to_result(dungeon)
        if outcome == 1:
            successes += 1
    success_fraction = successes / trials
    return success_fraction
In [11]: success_chance(dungeon2)
Out[11]: 0.5002
Make sure you understand why this number should be a half, given a large value for trials.
In [12]: dungeon3 = {
'treasure' : [2], # Room 2 contains treasure
'adventurer': 0, # The adventurer starts in room 0
'troll': 4, # The troll starts in room 4
'network': [[1], #Room zero connects to room 1
[0,2], #Room one connects to rooms 0 and 2
[1,3], #Room 2 connects to room 1 and 3
[2, 4], # Room 3 connects to room 2 and 4
[3]] # Room 4 connects to room 3
         }
In [13]: success_chance(dungeon3)
Out[13]: 0.4044
[Not for credit] Do you understand why this number should be 0.4? Hint: The first move is always the
same. In the next state, a quarter of the time, you win. 3/8 of the time, you end up back where you were
before. The rest of the time, you lose (eventually). You can sum the series: (1/4)(1 + 3/8 + (3/8)² + ⋯) = 2/5.
407
Chapter 13
You must submit your exercise solution to Moodle as a single uploaded Zip format archive. (You must use only the zip tool, not any other archiver, such as .tgz or .rar. If we cannot unzip the archive with unzip, you will receive zero marks.)
The folder structure inside your zip archive must have a single top-level folder, whose folder name is
your student number, so that on running unzip this folder appears. This top level folder must contain
all the parts of your solution. You will lose marks if, on unzip, your archive creates other files or folders at
the same level as this folder, as we will be unzipping all the assignments in the same place on our computers
when we mark them!
Inside your top level folder, you should create a setup.py file to make the code installable. You should
also create some other files, per the lectures, that should be present in all research software packages. (Hint,
there are three of these.)
Your tidied-up version of the solution code should be in a sub-folder called adventure which will be the
python package itself. It will contain an __init__.py file, and the code itself must be in a file called dungeon.py.
This should define a class Dungeon: instead of a data structure and associated functions, you must refactor
this into a class and methods.
Thus, if you run python in your top-level folder, you should be able to execute from adventure.dungeon import Dungeon. If you cannot do this, you will receive zero marks.
You must create a command-line entry point, called hunt. This should use the entry_points facility in
setup.py, to point toward a module designed for use as the entry point, in adventure/command.py. This
should use the Argparse library. When invoked with hunt mydungeon.yml --samples 500 the command
must print on standard output the probability of finding the treasure in the specified dungeon, using the
random walk strategy, after the specified number of test runs.
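As a sketch of the mechanism (not the marked solution; the module-level function name process is an assumption), the relevant part of setup.py might look like:

from setuptools import setup, find_packages

setup(
    name='adventure',
    version='0.1.0',
    packages=find_packages(exclude=['*tests']),
    entry_points={
        'console_scripts': [
            'hunt = adventure.command:process'
        ]
    }
)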
The dungeon.yml file should be a yml file containing a structure representing the dungeon state. Use
the same structure as the sample code above, even though you’ll be building a Dungeon object from this
structure rather than using it directly.
You must create unit tests which cover a number of examples. These should be defined in
adventure/tests/test_dungeon.py. Don't forget to add an __init__.py file to that folder too, so that at the top of the test file you can write from ..dungeon import Dungeon. If your unit tests use a fixture file to
DRY up tests, this must be called adventure/tests/fixtures.yml. For example, this could contain a yaml
array of many dungeon structures.
You should git init inside your student-number folder, as soon as you create it, and git commit your
work regularly as the exercise progresses.
Due to our automated marking tool, only work that has a valid git repository, and follows the folder and
file structure described above, will receive credit.
Due to the need to avoid plagiarism, do not use a public GitHub repository for your work - instead, use git on your local disk (with git commit but not git push), and ensure the hidden .git folder is part of your zipped archive.
408
Chapter 14
Marks Scheme
Note that because of our automated marking tool, a solution which does not match the standard solution
structure defined above, with file and folder names exactly as stated, may not receive marks, even if the
solution is otherwise good. “Follow on marks” are not guaranteed in this case.
Total: 25 marks
409
Chapter 15
In this exercise, you will convert badly written code, provided here, into better-written code.
You will do this not through simply writing better code, but by taking a refactoring approach, as discussed
in the lectures.
As such, your use of git version control, to make a commit after each step of the refactoring, with a
commit message which indicates the refactoring you took, will be critical to success.
You will also be asked to look at the performance of your code, and to make changes which improve the
speed of the code.
The script as supplied has its parameters hand-coded within the code. You will be expected, in your
refactoring, to make these available as command line parameters to be supplied when the code is invoked.
410
Chapter 16
411
412
Chapter 17
• Identify which variables in the code would more sensibly be input parameters, and use Argparse to manage these.
– 4 marks: 1 for each of four arguments identified.
• The code above makes use of append() which is not appropriate for NumPy. Create a new solution
(in a file called tree_np.py) which makes use of NumPy. Compare the performance (again, excluding
the plotting from your measurements), and discuss in comments.md
– 5 marks: [1] NumPy solution uses array operations to subtract the change angle from all angles with a single subtraction, [1] to take the sine of all angles using np.sin, [1] to move all the positions with a single vector displacement addition, [1] NumPy solution uses hstack or similar to create new arrays with twice the length, by composing the left-turned array with the right-turned array, [1] performance comparison recorded.
As with assignment one, to facilitate semi-automated marking, submit your code to Moodle as a single
Zip file (not .tgz, nor any other archive format), which unzips to produce files in a folder named with your
student number.
413