
Appendix A: Python 101

Fig. A.1: Guido van Rossum (b.1956), creator of Python. (Image credit: Michael Cavotta
CC BY-NC-ND 4.0)

Since its inception in 1989 at the hands of the Dutch computer scientist Guido van Rossum,
Python has grown to become arguably the world’s most popular programming language
(as of 2023). Python is easy and intuitive to learn, with strong community support. It is
therefore ideal for learners with little or no background in computing, and as such it is now
taught in schools all over the world.

A.1 Installation

If you are totally new to Python, the following installation method is recommended.
• Anaconda and JupyterLab. We recommend installing Anaconda on your computer
from
https://www.anaconda.com
Anaconda is a popular software suite containing, amongst other applications, JupyterLab,
which we recommend for use with this book. The complete guide to using JupyterLab
can be found at
https://jupyterlab.readthedocs.io

• Pip is an essential tool for installing and updating Python libraries that are not part of
the standard distribution.


Pip is used from the command line, so you will need to have a terminal open (this applies
to both Windows and Mac).
To check whether you already have pip on your machine, type the following in your terminal:
pip --version
If pip has not yet been installed, follow the instructions at
https://pip.pypa.io
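Once pip is available, installing or updating a library takes a single command in the terminal.
For example (typical pip usage, not specific to this book), to install NumPy and later upgrade
it to the latest version:
pip install numpy
pip install --upgrade numpy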

A.2 Learning Python

There is no shortage of learning resources for beginners. Here are some free resources that
you may find useful (as of 2023); I have selected resources that focus on scientific computing.
Of course, Python evolves all the time, so you may like to start by searching for the latest
resources out there.
There is no need to complete an entire Python course before starting this book: you can
pick things up along the way by modifying the code given in this book.
• Online resources
– Aalto University in Finland has produced a wonderful, free course with a focus
on scientific computing. There are lots of videos and easy-to-follow tutorials. See:
https://aaltoscicomp.github.io/python-for-scicomp/
– Scientific Python Lectures (Gaël Varoquaux et al.) is a comprehensive set of tutorials,
guiding beginners through NumPy, SciPy, SymPy and more, all the way up to machine
learning.
https://lectures.scientific-python.org
• Books
– Linge and Langtangen, Programming for computations – Python, Springer (2020)
[131]
– Kong, Siauw and Bayen, Python programming and numerical methods – a guide
for engineers and scientists, Academic Press (2020) [114]
– Lin, Aizenman, Espinel, Gunnerson and Liu, An introduction to Python program-
ming for scientists and engineers, Cambridge University Press (2022) [130]
– Lynch, Python for scientific computing and artificial intelligence, CRC Press
(2023) [136]

A.3 Python data types

Here we summarise key facts about data types in Python.

Basic data types

The basic data types used in this book are:


• Integer e.g. 0, 1, -99
• Float e.g. 1.0, 2/3
• String e.g. 'Hello', '2'
• Boolean namely, True, False
• Complex e.g. 1j, 5-2j

Composite data types

Composite data types and their properties are given in table A.1 below.

Data type              Ordered?     Mutable?     Duplicate elements allowed?
List and NumPy array   Ordered      Mutable      Duplicate elements allowed
Tuple                  Ordered      Immutable    Duplicate elements allowed
Set                    Unordered    Mutable      No duplicate elements
Dictionary             Ordered      Mutable      No duplicate keys

Table A.1: Properties of composite data types in Python 3.9+

Examples:
• Lists and arrays are used in most programs in this book.
• Tuple: s in ellipse.ipynb (§3.3).
• Set: uniq in birthday.ipynb (§7.6).
• Dictionary: step in planes.ipynb (§5.3).
Let’s now discuss the three properties in the heading of table A.1.
• Ordered. A composite data type is said to be ordered if its elements are arranged in a
fixed order. This means that it is possible to pinpoint, say, the first or second element of
the data type. For example, suppose we define a list L as:
L = ['goat', 999, True]
Then, the first element L[0] is reported as the string 'goat'. Similarly, define a tuple
as
T = ('goat', 999, True)

Then, T[0] is reported as the string 'goat'.


In contrast, suppose we define a set S as:
S = {'goat', 999, True}
Then, calling S[0] gives an error.
Finally, a dictionary is ordered (since Python 3.7), but its elements (the keys) cannot be
accessed by a positional index.
• Mutable. A composite data type is said to be mutable if its elements can be changed
after it is defined. For example, using the list L above, the command
L[1] = L[1] + 1
changes L to ['goat', 1000, True]. This happens because lists are mutable. How-
ever, the command
T[1] = T[1] + 1
produces an error, since the elements of T cannot be changed.
• Duplicate elements. The list
L1 = ['goat', 999, True, True]
is different from L (they have different numbers of elements). However, the set
S1 = {'goat', 999, True, True} equals the set S (you can test this by performing
the equality test S==S1).
Whilst duplicate elements are simply ignored by a set, a dictionary retains only the last
value assigned to a duplicated key. For example, if we define a dictionary of animal
types with the same key repeated three times:
D = {'animal': 'goat', 'animal': 'pig', 'animal': 'bear'}
then D is simply a dictionary with one key, namely D = {'animal': 'bear'}.
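The following short snippet (an illustrative example, not a program from this book)
verifies this behaviour:
S = {'goat', 999, True}
S1 = {'goat', 999, True, True}
print(S == S1)    # True: the duplicate element is ignored by the set
D = {'animal': 'goat', 'animal': 'pig', 'animal': 'bear'}
print(D)          # {'animal': 'bear'}: only the last value survives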

Lists vs arrays

                      Python list        NumPy array

Mixed data types?     Allowed            Not allowed
+ means               Concatenation      Addition elementwise
∗ means               Duplication        Multiplication elementwise
Storage               Less efficient     More efficient
Computational speed   Slower             Faster (due to vectorisation)

Table A.2: Comparing properties of Python lists and NumPy arrays.

Let’s now discuss table A.2 line-by-line.



• Mixed data types. A list can hold mixed data types. For example, the list:
Lmixed = [1 , 2.3 , 4+5j]
contains an integer, a float and a complex number.
Now let’s convert the list to an array using the command A = np.array(Lmixed). We
find that the array A reads
array([1. +0.j, 2.3+0.j, 4. +5.j])
This shows that the list has been converted to an array of a single data type (i.e. complex).
This is because NumPy arrays are homogeneous, meaning that every element in an
array is of the same data type. NumPy casts the elements of the list to the data type
that can accommodate all of the elements involved.
• + operator. Adding two NumPy arrays together element-wise is what we often need to
do as mathematicians (e.g. adding vectors and matrices). But take note that for two
lists, L1 and L2, the operation L1+L2 creates a new list by merging (also known as
concatenating) the two lists.
In the mathematical tasks discussed in this book, we sometimes find ourselves adding
an array of numbers to a list of numbers, in which case the operator + thankfully acts
like element-wise addition.
In short, the operator + acts differently depending on the data types involved. In technical
terms, this is called operator overloading.
Run the following code which demonstrates overloading of the operator +.
import numpy as np
A = np.array([7, 8, 5])
X = [0, -1, 2]
L = ['goat', 999, True]
Sum1 = A + A
Sum2 = A + X
Sum3 = X + L
#Sum4 = A + L   # This line would produce an error
print('A+A =', Sum1, type(Sum1),
      '\nA+X =', Sum2, type(Sum2),
      '\nX+L =', Sum3, type(Sum3))
Output:
A+A = [14 16 10] <class 'numpy.ndarray'>
A+X = [7 7 7] <class 'numpy.ndarray'>
X+L = [0, -1, 2, 'goat', 999, True] <class 'list'>

• ∗ operator. Let L be a list and A be an array. Let c be a constant. Consider the following
‘multiplications’ involving the operator ∗. The results are not always what you might
expect due to overloading.
1. If c is a positive integer, then c*L is a list comprising c concatenated copies of the
list L. In other words, c*L= L + L + ... + L (c copies).
If c is a negative integer or zero, then c*L is an empty list.
If c is not an integer, c*L produces an error.
2. c*A is an array whose elements are those of A multiplied by c.
3. L*A is an array whose ith element is the product of the ith element of L and the ith
element of A.

4. A*A is an array whose ith element is the square of the ith element of A. This is
equivalent to A**2.
The following code demonstrates the above points.
import numpy as np
L = [0, -1, 2]
A = np.array([7, 8, 5])
Prod1 = 3*L; Prod2 = 3*A
Prod3 = L*A; Prod4 = A*A
print('3*L =', Prod1, type(Prod1),
      '\n3*A =', Prod2, type(Prod2),
      '\nL*A =', Prod3, type(Prod3),
      '\nA*A =', Prod4, type(Prod4))
Output:
3*L = [0, -1, 2, 0, -1, 2, 0, -1, 2] <class 'list'>
3*A = [21 24 15] <class 'numpy.ndarray'>
L*A = [ 0 -8 10] <class 'numpy.ndarray'>
A*A = [49 64 25] <class 'numpy.ndarray'>
Here is another example: in the code classification.ipynb (§8.10), we find the
following line:
label = len(data0)*[0] + len(data1)*[1]
This line uses the ∗ operator to duplicate the singleton lists [0] and [1], and the + operator
to concatenate the results into a single list of labels.
• Storage. In broad terms, a large array takes up less storage (in terms of bytes) than a
list of the same length. Let’s quantify this statement.
In the code ratio-size.ipynb, we store the sequence

S = (0, 1, 2, . . . , n − 1)                                            (A.1)

in two ways: as a list and as an array. We then find out how many bytes are required to
store each representation, and calculate the ratio

    (Number of bytes used to store S as a list) / (Number of bytes used to store S as an array).

This ratio is plotted in fig. A.2 for sequence lengths n up to 10⁶. We see that for a
long sequence (n ≳ 10³), storing it as a list can take up as much as 10% more space
compared to an array. On the other hand, there are no real space-saving advantages for
short sequences (n ≲ 100).

Fig. A.2: Ratio of the number of bytes needed to store the sequence (0, 1, 2, . . . , n − 1) as a
list vs as an array. For a long sequence (n ≳ 10³), storing it as a list can require as much as
10% more space compared to an array. This graph is produced by ratio-size.ipynb.

ratio-size.ipynb (for plotting fig. A.2)

import numpy as np
import matplotlib.pyplot as plt
from sys import getsizeof               # getsizeof = size of an object in bytes

N = np.round(np.logspace(1, 6, 1000))   # Sequence lengths (up to 10^6)

sizeL = []                              # For storing list sizes...
sizeA = []                              # and array sizes

for n in N:                             # For the sequence (0, 1, 2, ..., n-1)
    L = [x for x in range(int(n))]      # Create the corresponding list...
    A = np.arange(n)                    # and the corresponding array
    sizeL.append(getsizeof(L))          # Store their sizes
    sizeA.append(getsizeof(A))

ratio = np.array(sizeL)/np.array(sizeA) # Size ratio

plt.semilogx(N, ratio, 'b')
plt.xlabel('Length')
plt.ylabel('Ratio of sizes (List:Array)')
plt.xlim(10, max(N))
plt.grid('on')
plt.show()

• Computational speed. Broadly, computations using arrays are faster than those using
lists. This is because many array operations can be vectorised, meaning that the
operations are performed in parallel by the CPU. This is much faster than, say, using a
for loop to perform the same operations on each element of a list one at a time.
Let’s quantify this speed boost.
In the code ratio-time.ipynb, we measure how long it takes to add one to each
element of the list and array representations of the sequence S (eq. A.1). Using a list L,
we time how long it takes to perform the list comprehension

[l+1 for l in L]

In contrast, using an array A, the operation A + 1 is vectorised, where 1 is understood
by NumPy to be the array (1, 1, . . . , 1) of the same size as A (this shape-matching is
called broadcasting¹).
Fig. A.3 shows the ratio of the runtimes for the list and the array calculations. We see
that the list computation is generally slower. On my computer, the worst cases occur for
sequences of length ≈ 3 × 10⁴, where the list computation is up to 70 times slower than
the array computation.
Your graph will be slightly different, depending on your hardware and Python distribution.
But your graph should support the conclusion that list-based calculations are generally
slower than those using arrays.

Fig. A.3: Ratio of the runtimes taken to add 1 to the sequence (0, 1, 2, . . . , n − 1), using a list
vs using an array. In the worst case (when n ≈ 3 × 10⁴), the list computation is around 70
times slower than the array computation. This graph is produced by ratio-time.ipynb.
Your graph will be slightly different.

1 https://numpy.org/doc/stable/user/basics.broadcasting.html

ratio-time.ipynb (for plotting fig. A.3)

import numpy as np
import matplotlib.pyplot as plt
from time import perf_counter as timer   # For measuring operation runtime

N = np.around(np.logspace(1, 6, 1000))   # Array of sequence lengths (up to 10^6)

timeL = []                               # For storing runtime using a list...
timeA = []                               # and using an array

for n in N:                              # For the sequence (0, 1, 2, ..., n-1)
    L = [x for x in range(int(n))]       # Create the corresponding list...
    A = np.arange(n)                     # and the corresponding array

    tic = timer()                        # Start the clock!
    [l+1 for l in L]                     # Add 1 to every element in the list
    toc = timer()                        # Stop the clock!
    timeL.append(toc-tic)                # Store the runtime

    tic = timer()                        # Start the clock!
    A + 1                                # Repeat for the array (vectorised method)
    toc = timer()                        # Stop the clock!
    timeA.append(toc-tic)                # Store the runtime

ratio = np.array(timeL)/np.array(timeA)  # Ratio of runtimes

plt.semilogx(N, ratio, 'r')
plt.xlabel('Length')
plt.ylabel('Ratio of runtimes (List:Array)')
plt.xlim(10, max(N))
plt.grid('on')
plt.show()

A.4 Random musings

Free visualisation tools

In this book, we have used Matplotlib, Pandas, Plotly and Seaborn to create visualisations
in Python. Here are other useful (and free) visualisation tools.
• Visualising curves and surfaces. Desmos² is a powerful visualisation tool for plotting
curves and exploring 2D geometry. One of its best features is that sliders are instantly
and intuitively created for you. For 3D geometry, math3d³ offers an easy-to-use,
Desmos-like interface for creating 3D surfaces and vector fields. GeoGebra⁴ offers more
advanced functionality and is my go-to app for creating beautiful 3D figures like fig.
3.9.

2 https://www.desmos.com
3 https://www.math3d.org
4 https://www.geogebra.org

• Data visualisation. R is a powerful programming language used by the statistics
community. The R library htmlwidgets⁵ makes it easy to create interactive data
visualisations with only basic R knowledge.

Parallelisation

We have come across tasks that can be done in parallel, e.g. performing multiple Monte
Carlo simulations, solving an ODE with a range of initial conditions, and machine-learning
tasks. Your computer will most likely contain multiple computing cores, and we can speed
up our code by manually distributing tasks over multiple cores. To see how many computing
cores your computer has, run the following lines:
import multiprocessing
multiprocessing.cpu_count()
If you have not tried parallel programming before, a good starting point is the
documentation⁶ for the module multiprocessing.
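As a minimal sketch of the idea (an illustrative example, not a program from this book),
the Pool class in multiprocessing distributes a function over several worker processes;
here we square the numbers 0 to 9 in parallel:
from multiprocessing import Pool

def square(x):
    return x**2

if __name__ == '__main__':
    with Pool(4) as pool:                    # a pool of 4 worker processes
        results = pool.map(square, range(10))
    print(results)   # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]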

Python pitfalls and oddities

Overall, Python is easy for beginners to pick up, but there are some pitfalls that can trip
up the unwary.
We have already mentioned that when using lists, the + and ∗ operators are not really
addition and multiplication.
Here are more pitfalls and some oddities that Python learners should watch out for.
1. The last element. One of the most common beginner’s mistakes is forgetting that
range and np.arange do not include the last element, but np.linspace does.
This applies to slicing of arrays and lists too. For example, A[-1] is the last element of
array A, but A[0:-1] is the array A excluding the last element.
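For instance (a quick illustrative check):
import numpy as np
print(np.arange(0, 1, 0.25))    # [0.   0.25 0.5  0.75]      : endpoint 1 excluded
print(np.linspace(0, 1, 5))     # [0.   0.25 0.5  0.75 1.  ] : endpoint 1 included
A = np.array([1, 2, 3, 4])
print(A[-1], A[0:-1])           # 4 [1 2 3] : the slice excludes the last element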
2. Transposing 1D arrays. Sometimes you may want to transpose a one-dimensional array
(e.g. to turn a row vector into a column vector). For example:
u = np.array([0,1,2])
v = u.T
However, you will find that v is still precisely u: nothing happens when you
transpose a 1D array!
If you really need the transpose explicitly, try adding another pair of brackets and
transposing v = np.array([[0,1,2]]) instead. In practice, such a transpose can often
be avoided altogether.
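A quick check of the shapes involved (illustrative only):
import numpy as np
u = np.array([0, 1, 2])
print(u.T.shape)     # (3,)  : transposing a 1D array changes nothing
v = np.array([[0, 1, 2]])
print(v.T.shape)     # (3, 1): a genuine column vector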

5 http://www.htmlwidgets.org
6 https://docs.python.org/3/library/multiprocessing.html

3. Modifying a duplicate list/array can be dangerous. Consider the following lines of
code:
A = [1,1,1]
B = A
B[0] = 9
Clearly B=[9,1,1]. But you may be surprised to find that A is also [9,1,1]! The
original list has been modified, perhaps inadvertently.
To modify B whilst preserving the original list, use B=A.copy() instead of B=A.
Alternatively, if using arrays, use B=np.copy(A).
This caution applies to all mutable objects in Python. In short, be careful when
duplicating mutable objects⁷.
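The following sketch (illustrative only) shows safe duplication, including the deepcopy
needed when a list contains lists:
import copy
A = [1, 1, 1]
B = A.copy()             # an independent (shallow) copy
B[0] = 9
print(A, B)              # [1, 1, 1] [9, 1, 1] : the original is preserved
N = [[1, 2], [3, 4]]
M = copy.deepcopy(N)     # a deep copy is needed for a list containing lists
M[0][0] = 9
print(N, M)              # [[1, 2], [3, 4]] [[9, 2], [3, 4]]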
4. Iterators can only be used once. What do you think you will see if you run this code?
A = [1,2]
B = [5,9]
Z = zip(A,B)
for i,j in Z:
    print(i,j)
for i,j in Z:
    print('Can you see this?')

You will find that the string 'Can you see this?' is never printed: zip returns an
iterator, and an iterator can only be traversed once. By the time we reach the second for
loop, Z is already exhausted.
To avoid this pitfall, we should replace each Z in the for loops by zip(A,B).
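In other words (an illustrative fix; converting the iterator to a list with list(zip(A,B))
also works):
A = [1, 2]
B = [5, 9]
for i, j in zip(A, B):      # a fresh iterator for the first loop
    print(i, j)
for i, j in zip(A, B):      # and another fresh one for the second
    print('Can you see this?')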
5. Mutable default argument. Probably the most infamous Python pitfall of all is
illustrated in the following function
def F(x=[]):
    x.append(9)
    return x
print(F(), F())
The function F takes an argument x, which, if not supplied, is set to the empty list by
default. Therefore, calling F() gives [9]. So we might expect the output of the code to
be [9] [9]. However, you may be surprised to see the output

[9, 9] [9, 9]

This happens because the default list is created only once, when the function is defined,
and is then shared between all calls of F. See this discussion thread⁸ for more details. The
commonly prescribed remedy is to use the (immutable) default x=None instead:
def F(x=None):
    if x is None:
        x = []
    x.append(9)
    return x

7 You will need deepcopy rather than copy when duplicating, say, a list containing lists. Read more about
deepcopy at https://docs.python.org/3/library/copy.html
8 https://stackoverflow.com/questions/1132941

Concluding remarks

Mathematics and programming are both lifelong pursuits for me. The main difference is
that whilst mathematics is a body of universal truths that will never change, programming
changes constantly. Even in the space of almost two years of writing this book, Python has
constantly evolved, and things that used to work now produce warnings or errors.
Naturally this means that this book will not have an indefinite shelf-life, but at least I
hope that I have demonstrated how maths and programming can work in synergy (in the
non-contrived sense of the word). I hope that the techniques demonstrated in this book have
given readers plenty of inspiration to explore mathematics more deeply for themselves. I
am looking forward to sharing more ideas for exploring university mathematics with Python
in future updates.
REFERENCES

1. Agarwal, R.P., Hodis, S., O’Regan, D.: 500 examples and problems of applied differential equations.
Springer, Cham (2019)
2. Aggarwal, C.: Linear algebra and optimization for machine learning. Springer, Cham (2020)
3. Ahlfors, L.V.: Complex analysis, third edn. McGraw-Hill, New York (1979)
4. Aigner, M., Ziegler, G.M.: Proofs from THE BOOK, 6th edn. Springer, Berlin (2018)
5. Alcock, L.: How to think about abstract algebra. Oxford University Press, Oxford (2021)
6. Alligood, K.T., Sauer, T.D., Yorke, J.A.: Chaos: an introduction to dynamical systems. Springer, New
York (1997)
7. Altmann, S.L.: Hamilton, rodrigues, and the quaternion scandal. Mathematics Magazine 62(5), 291
(1989)
8. Anderson, D.F., Seppäläinen, T., Valkó, B.: Introduction to Probability. Cambridge University Press,
Cambridge (2017)
9. Andreescu, T., Andrica, D.: Complex numbers from A to –Z, 2nd edn. Birkhäuser, New York (2014)
10. Apostol, T.M.: Mathematical analysis, 2nd edn. Addison-Wesley, London (1974)
11. Apostol, T.M.: Introduction to analytic number theory. Springer, New York (2010)
12. Armstrong, M.A.: Groups and Symmetry. Springer, New York (1988)
13. Atkinson, K.E., Han, W., Stewart, D.: Numerical solution of ordinary differential equations. Wiley,
New Jersey (2009)
14. Axler, S.: Linear algebra done right, 3rd edn. Springer (2015)
15. Ayoub, R.: Euler and the zeta function. The American Mathematical Monthly 81(10), 1067 (1974)
16. Bak, J., Newman, D.J.: Complex analysis, 3rd edn. Springer, New York (2010)
17. Baker, G.L., Blackburn, J.A.: The pendulum : a case study in physics. Oxford University Press, Oxford
(2006)
18. Bannink, T., Buhrman, H.: Quantum Pascal’s Triangle and Sierpinski’s carpet. arXiv e-prints
arXiv:1708.07429 (2017)
19. Barnard, T., Neill, H.: Discovering group theory: a transition to advanced mathematics. CRC Press,
Boca Raton (2017)
20. Bartle, R.G., Sherbert, D.R.: Introduction to Real Analysis, 4th edn. Wiley, New Jersey (2011)
21. Bas, E.: Basics of probability and stochastic processes. Springer, Cham (2019)
22. Bays, C., Hudson, R.H.: A new bound for the smallest x with π(x) > li(x). Mathematics of
Computation 69(231), 1285 (1999)
23. Beléndez, A., Pascual, C., Méndez, D.I., Beléndez, T., Neipp, C.: Exact solution for the nonlinear
pendulum. Revista Brasiliera de Ensino de Física 29(4), 645 (2007)
24. Beltrametti, M.C., Carletti, E., Gallarati, D., Bragadin, G.M.: Lectures on curves, surfaces and
projective varieties. European Mathematical Society, Zürich (2009)
25. Berndt, B.C., Robert, A.R.: Ramanujan: Letters and Commentary. American Mathematical Society,
Providence (1995)
26. Birkhoff, G., Mac Lane, S.: A survey of modern algebra. Macmillan, New York (1941)
27. Birkhoff, G., Mac Lane, S.: A survey of modern algebra, 4th edn. Macmillan, London; New York;
(1977)
28. Boas, M.L.: Mathematical Methods in the Physical Sciences, 3rd edn. Wiley (2005)
29. Bork, A.M.: “vectors versus quaternions"—the letters in nature. American Journal of Physics 34(3),
202 (1966)
30. Borwein, J.M., Bradley, D.M., Crandall, R.E.: Computational strategies for the riemann zeta function.
Journal of Computational and Applied Mathematics 121(1), 247 (2000)

31. Bower, O.K.: Note concerning two problems in geometrical probability. The American Mathematical
Monthly 41(8), 506 (1934)
32. Brauer, F., Castillo-Chavez, C., Feng, Z.: Mathematical models in epidemiology. Springer, New York
(2019)
33. Bronson, R., Costa, G.B.: Schaum’s outline of differential equations, 5th edn. McGraw Hill (2021)
34. Brunton, S.L., Kutz, J.N.: Data-Driven Science and Engineering: Machine Learning, Dynamical
Systems, and Control, 2nd edn. Cambridge University Press, Cambridge (2022)
35. Burton, D.M.: Elementary number theory, 7th edn. McGraw Hill, New York (2011)
36. Butcher, J.C.: Numerical methods for ordinary differential equations, 3rd edn. Wiley, Chichester
(2016)
37. do Carmo, M.P.: Differential geometry of curves and surfaces. Prentice Hall (1976)
38. Carter, N.C.: Visual group theory. Mathematical Association of America, Washington, D.C. (2009)
39. Chao, K.F., Plymen, R.: A new bound for the smallest x with π(x) > li(x) (2005). URL https://arxiv.org/abs/math/0509312
40. Cheney, W., Kincaid, D.: Numerical mathematics and computing, 7th edn. Cengage (2012)
41. Chihara, T.S.: An introduction to orthogonal polynomials. Dover, New York (2011)
42. Choquet-Bruhat, Y., de Witt-Morette, C., Dillard-Bleick, M.: Analysis, manifolds and physics. North
Holland, Amsterdam (1983)
43. Chung, K.L., AitSahlia, F.: Elementary probability theory: with stochastic processes and an introduction
to mathematical finance, 4th edn. Springer, New York (2003)
44. Clarke, R.D.: An application of the poisson distribution. Journal of the Institute of Actuaries 72(3),
481 (1946)
45. Collins, P.J.: Differential and integral equations. Oxford University Press, Oxford (2006)
46. Comninos, P.: Mathematical and computer programming techniques for computer graphics. Springer,
London (2006)
47. Conrey, J.B.: The riemann hypothesis. Notices of the AMS 50(3), 341 (2003)
48. Cramér, H.: On the order of magnitude of the difference between consecutive prime numbers. Acta
Arithmetica 2(1), 23 (1936)
49. Crandall, R., Pomerance, C.: Prime numbers: a computational perspective, 2nd edn. Springer, New
York (2005)
50. Crowe, M.J.: A history of vector analysis: the evolution of the idea of a vectorial system. University
of Notre Dame Press, London;Notre Dame (Illinois) (1967)
51. Crowe, W.D., Hasson, R., Rippon, P.J., Strain-Clark, P.E.D.: On the structure of the mandelbar set.
Nonlinearity 2(4), 541 (1989)
52. DeGroot, M.H., Schervish, M.J.: Probability and statistics, 4th edn. Pearson Education, London
(2012)
53. Deisenroth, M.P., Faisal, A.A., Ong, C.S.: Mathematics for machine learning. Cambridge University
Press, Cambridge (2020)
54. Dolotin, V., Morozov, A.: The Universal Mandelbrot Set: The beginning of the story. World Scientific,
Singapore (2006)
55. Douady, A., Hubbard, J.H.: Étude dynamique des polynômes complexes. Publications Mathématiques
d’Orsay 84 (1984)
56. Dougherty, C.: Introduction to econometrics, fifth edn. Oxford University Press, Oxford (2016)
57. Dudley, U.: Elementary number theory, 2nd edn. W. H. Freeman, San Francisco (1978)
58. Elaydi, S.: An introduction to difference equations, 3rd edn. Springer, New York (2005)
59. Evans, G., Blackledge, J., Yardley, P.: Numerical methods for partial differential equations. Springer-
Verlag, London (2000)
60. Farlow, S.J.: Partial differential equations for scientists and engineers. Dover, New York (1982)
61. Feigelson, E.D., Babu, G.J.: Modern statistical methods for astronomy: with R applications. Cambridge
University Press, Cambridge (2012)
62. Feller, W.: An introduction to probability theory and its applications, vol. I, 3rd edn. Wiley, London
(1968)
63. Fine, B., Rosenberger, G.: Number Theory: an introduction via the density of primes, 2nd edn.
Birkhäuser, Cham (2016)
64. Fischer, H.: A History of the Central Limit Theorem. Springer, New York (2011)
65. Folland, G.B.: Fourier analysis and its applications. American Mathematical Society, Providence
(2009)
66. Forbes, C.S., Evans, M.: Statistical distributions, 4th edn. Wiley-Blackwell, Oxford (2010)
67. Fortney, J.P.: A Visual Introduction to Differential Forms and Calculus on Manifolds. Birkhäuser,
Cham (2018)

68. Fraleigh, J.B., Brand, N.E.: A first course in abstract algebra, 8th edn. Pearson (2020)
69. Friedman, N., Cai, L., Xie, X.S.: Linking stochastic dynamics to population distribution: An analytical
framework of gene expression. Phys. Rev. Lett. 97, 168302 (2006)
70. Gallian, J.A.: Contemporary abstract algebra, 10th edn. Chapman and Hall /CRC, Boca Raton (2020)
71. Gardner, M.: Mathematical games. Scientific American 231(4), 120 (1974)
72. Gelbaum, B.R., Olmsted, J.M.H.: Counterexamples in Analysis. Dover, New York (1964)
73. Gerver, J.: The differentiability of the Riemann function at certain rational multiples of π. Proceedings
of the National Academy of Sciences of the United States of America 62(3), 668–670 (1969). URL
http://www.jstor.org/stable/59156
74. Gezerlis, A.: Numerical methods in physics with Python. Cambridge University Press, Cambridge
(2020)
75. Glendinning, P.: Stability, instability, and chaos: an introduction to the theory of nonlinear differential
equations. Cambridge University Press, Cambridge (1994)
76. Goldstein, H., Poole, C., Safko, J.: Classical mechanics, 3rd edn. Addison Wesley, San Francisco
(2002)
77. Gorroochurn, P.: Classic problems of probability. John Wiley, Hoboken (2012)
78. Gradshteyn, I.S., Ryzhik, I.M.: Table of integrals, series, and products, 8th edn. Academic Press,
Amsterdam (2014)
79. Granville, A.: Zaphod beeblebrox’s brain and the fifty-ninth row of pascal’s triangle. The American
Mathematical Monthly 99(4), 318 (1992)
80. Granville, A., Martin, G.: Prime number races. The American Mathematical Monthly 113(1), 1 (2006)
81. Gray, J.: A history of abstract algebra. Springer, Cham (2018)
82. Grieser, D.: Exploring mathematics. Springer, Cham (2018)
83. Griffiths, D.J.: Introduction to Electrodynamics, 4th edn. Cambridge University Press, Cambridge
(2017)
84. Griffiths, D.J., Schroeter, D.F.: Introduction to quantum mechanics, 3rd edn. Cambridge University
Press, Cambridge (2018)
85. Griffiths, M., Brown, C., Penrose, J.: From pascal to fibonacci via a coin-tossing scenario. Mathematics
in School 43(2), 25–27 (2014)
86. Grimmett, G., Stirzaker, D.: One thousand exercises in probability, 3rd edn. Oxford University Press,
Oxford (2020)
87. Hall, L., Wagon, S.: Roads and wheels. Mathematics Magazine 65(5), 283–301 (1992)
88. Hamill, P.: A student’s guide to Lagrangian and Hamiltonians. Cambridge University Press, Cambridge
(2014)
89. Hanley, J.A., Bhatnagar, S.: The “poisson" distribution: History, reenactments, adaptations. The
American Statistician 76(4), 363 (2022)
90. Hansen, J., Sato, M.: Regional climate change and national responsibilities. Environmental Research
Letters 11(3), 034009 (2016)
91. Hart, M.: Guide to Analysis, 2nd edn. Palgrave, Basingstoke (2001)
92. Haslwanter, T.: An introduction to statistics with Python: with applications in the life sciences.
Springer, Switzerland (2016)
93. Hass, J., Heil, C., Weir, M.: Thomas’ Calculus, 14th edn. Pearson (2019)
94. Herman, R.L.: An introduction to Fourier analysis. Chapman and Hall /CRC, New York (2016)
95. Hiary, G.A.: Fast methods to compute the riemann zeta function (2007). URL https://arxiv.org/abs/0711.5005
96. Hirsch, M.W., Smale, S., Devaney, R.L.: Differential equations, dynamical systems, and an introduction
to chaos, 3rd edn. Academic Press, Amsterdam (2013)
97. Howell, K.B.: Ordinary differential equations: an introduction to the fundamentals, 2nd edn. Chapman
and Hall /CRC, Abingdon (2020)
98. Jarnicki, M., Pflug, P.: Continuous Nowhere Differentiable Functions: The Monsters of Analysis.
Springer, Cham (2015)
99. Jaynes, E.T.: The well-posed problem. Foundations of Physics 3(4), 477 (1973)
100. Johansson, R.: Numerical Python. Apress, Berkeley (2019)
101. Johnson, P.B.: Leaning Tower of Lire. American Journal of Physics 23(4), 240 (1955)
102. Johnston, D.: Random Number Generators—Principles and Practices. De Gruyter Press, Berlin (2018)
103. Johnston, N.: Advanced linear and matrix algebra. Springer, Cham (2021)
104. Johnston, N.: Introduction to linear and matrix algebra. Springer, Cham (2021)
105. Jones, G.A., Jones, J.M.: Elementary number theory. Springer, London (1998)
106. Jones, H.F.: Groups, representations and physics, 2nd edn. Taylor & Francis, New York (1988)
107. Kajiya, J.T.: The rendering equation. SIGGRAPH Comput. Graph. 20(4), 143–150 (1986)

108. Katz, V.J.: The history of stokes’ theorem. Mathematics Magazine 52(3), 146 (1979)
109. Kay, S.M.: Intuitive probability and random processes using MATLAB. Springer, New York (2006)
110. Kenett, R., Zacks, S., Gedeck, P.: Modern statistics: a computer-based approach with Python.
Birkhäuser, Cham (2022)
111. Kettle, S.F.A.: Symmetry and structure: readable group theory for chemists, 3rd edn. John Wiley,
Chichester (2007)
112. Kibble, T.W.B., Berkshire, F.H.: Classical mechanics, 5th edn. Imperial College Press, London (2004)
113. Kifowit, S.J., Stamps, T.A.: The Harmonic Series diverges again and again. AMATYC Review 27(2),
31–43 (2006)
114. Kong, Q., Siauw, T., Bayen, A.: Python programming and numerical methods – a guide for engineers
and scientists. Academic Press (2020)
115. Kortemeyer, J.: Complex numbers: an introduction for first year students. Springer, Wiesbaden (2021)
116. Kosinski, A.A.: Cramer’s rule is due to cramer. Mathematics Magazine 74(4), 310–312 (2001)
117. Kubat, M.: An introduction to machine learning, third edn. Springer, Cham (2021)
118. Kucharski, A.: Math’s beautiful monsters; how a destructive idea paved the way to modern math.
Nautilus Q.(11) (2014)
119. Kuczmarski, F.: Roads and wheels, roulettes and pedals. The American Mathematical Monthly 118(6),
479–496 (2011)
120. Kuhl, E.: Computational epidemiology: data-driven modelling of COVID-19. Springer, Cham (2021)
121. Lagarias, J.C.: Euler’s constant: Euler’s work and modern developments. Bulletin of the American
Mathematical Society 50(4), 527–628 (2013)
122. Lam, L.Y.: Jiu zhang suanshu (nine chapters on the mathematical art): An overview. Archive for
History of Exact Sciences 47(1), 1 (1994)
123. Lambert, B.: A student’s guide to Bayesian statistics. SAGE Publications, London (2018)
124. Langtangen, H.P., Linge, S.: Finite difference computing with PDEs. Springer, Cham (2017). URL
https://link.springer.com/book/10.1007/978-3-319-55456-3
125. Lay, D.C., Lay, S.R., McDonald, J.: Linear algebra and its applications, 5th edn. Pearson, Boston
(2016)
126. Lemmermeyer, F.: Reciprocity laws. Springer, Berlin (2000)
127. Lengyel, E.: Mathematics for 3D game programming and computer graphics. Cengage (2011)
128. Leon, S.J., Björck, A., Gander, W.: Gram-schmidt orthogonalization: 100 years and more. Numerical
Linear Algebra with Applications 20(3), 492 (2013)
129. Li, T.Y., Yorke, J.A.: Period three implies chaos. The American Mathematical Monthly 82(10), 985
(1975)
130. Lin, J.W., Aizenman, H., Espinel, E.M.C., Gunnerson, K.N., Liu, J.: An introduction to Python
programming for scientists and engineers. Cambridge University Press, Cambridge (2022)
131. Linge, S., Langtangen, H.P.: Programming for Computations - Python, 2nd edn. Springer (2020)
132. Liu, Y.: First semester in numerical analysis with Python (2020). URL http://digital.auraria.edu/IR00000195/00001
133. Lorenz, E.N.: Deterministic nonperiodic flow. Journal of Atmospheric Sciences 20(2), 130 (1963)
134. Lyche, T.: Numerical linear algebra and matrix factorizations. Springer, Cham (2020)
135. Lynch, S.: Dynamical systems with applications using Python. Birkhäuser, Cham (2018)
136. Lynch, S.: Python for scientific computing and artificial intelligence. CRC Press (2023)
137. MacTutor History of Mathematics Archive: URL https://mathshistory.st-andrews.ac.uk/
138. MacTutor History of Mathematics Archive: URL https://mathshistory.st-andrews.ac.uk/Curves/Cycloid/
139. Matsuura, K.: Bayesian Statistical Modeling with Stan, R, and Python. Springer, Singapore (2022)
140. May, R.M.: Simple mathematical models with very complicated dynamics. Nature 261(5560), 459
(1976)
141. Mazo, R.M.: Brownian motion: fluctuations, dynamics, and applications, vol. 112. Clarendon Press,
Oxford (2002)
142. Mazur, B., Stein, W.A.: Prime numbers and the Riemann hypothesis. Cambridge University Press,
Cambridge (2016)
143. McCluskey, A., McMaster, B.: Undergraduate Analysis. Oxford University Press, Oxford (2018)
144. McMullen, C.T.: The Mandelbrot set is universal. In The Mandelbrot Set, Theme and variations, p. 1.
Cambridge University Press, Cambridge (2007)
145. Michelitsch, M., Rössler, O.E.: The “burning ship” and its quasi-julia sets. Computers & Graphics
16(4), 435 (1992)
146. Michon, G.P.: Surface area of an ellipsoid (2004). URL http://www.numericana.com/answer/ellipsoid.htm

147. Mickens, R.E.: Difference equations: Theory, applications and advanced topics, 3rd edn. Chapman
and Hall /CRC (2015)
148. Misner, C.W., Thorne, K.S., Wheeler, J.A.: Gravitation. W. H. Freeman, San Francisco (1973)
149. Mullen, G.L., Sellers, J.A.: Abstract algebra: a gentle introduction. Chapman and Hall /CRC, Boca
Raton (2017)
150. Muller, N., Magaia, L., Herbst, B.M.: Singular value decomposition, eigenfaces, and 3d reconstructions.
SIAM review 46(3), 518 (2004)
151. Nahin, P.J.: Duelling Idiots and Other Probability Puzzlers. Princeton University Press, Princeton
(2012)
152. Nahin, P.J.: Digital Dice: Computational Solutions to Practical Probability Problems. Princeton
University Press, New Jersey (2013)
153. Nahin, P.J.: Inside Interesting Integrals. Springer-Verlag, New York (2015)
154. Needham, T.: Visual complex analysis. Clarendon, Oxford (1997)
155. Nickerson, R.: Penney ante: counterintuitive probabilities in coin tossing. UMAP journal 27(4), 503
(2007)
156. Paolella, M.S.: Fundamental probability: a computational approach. John Wiley, Chichester, England
(2006)
157. Patarroyo, K.Y.: A digression on hermite polynomials (2019). URL https://arxiv.org/abs/1901.01648
158. Paul, W., Baschnagel, J.: Stochastic processes: from physics to finance, 2nd edn. Springer, New York
(2013)
159. Pearl, J., Glymour, M., Jewell N, P.: Causal Inference in Statistics: A Primer. John Wiley and Sons,
Newark (2016)
160. Peck, R., Short, T.: Statistics: learning from data, 2nd edn. Cengage, Australia (2019)
161. Pedersen, S.: From calculus to analysis. Springer, Cham (2015)
162. Peitgen, H.O., Jürgens, H., Saupe, D.: Chaos and fractals: new frontiers of science, 2nd edn. Springer
(2004)
163. Petrov, V.V.: Limit theorems of probability theory: sequences of independent random variables.
Clarendon, Oxford (1995)
164. Pharr, M., Humphreys, G.: Physically based rendering: from theory to implementation, 2nd edn.
Morgan Kaufmann, San Francisco (2010)
165. Piessens, R., Doncker-Kapenga, E.d., Überhuber, C., Kahaner, D.: QUADPACK: A subroutine package
for automatic integration. Springer-Verlag, Berlin (1983)
166. Platt, D., Trudgian, T.: The Riemann hypothesis is true up to 3 · 10¹². Bulletin of the London
Mathematical Society 53(3), 792 (2021)
167. Pollack, P., Roy, A.S.: Steps into analytic number theory: a problem-based introduction. Springer,
Cham (2021)
168. Polyanin, A.D., Zaitsev, V.F.: Handbook of ordinary differential fquations, 3rd edn. Chapman and
Hall /CRC, New York (2017)
169. Posamentier, A.S., Lehmann, I.: The (Fabulous) Fibonacci Numbers. Prometheus Books, New York
(2007)
170. Press, W.H., Tekolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical Recipes: the art of scientific
computing, 3rd edn. Cambridge University Press, Cambridge (2007)
171. Pressley, A.N.: Elementary Differential Geometry. Springer-Verlag, London (2010)
172. Priestley, H.A.: Introduction to complex analysis, second edn. Oxford University Press, Oxford (2005)
173. Reid, M.: Undergraduate algebraic geometry. Cambridge University Press, Cambridge (1988)
174. Riley, K.F., Hobson, M.P., Bence, S.J.: Mathematical Methods for Physics and Engineering, 3rd edn.
Cambridge University Press (2006)
175. Robinson, J.C.: An introduction to ordinary differential equations. Cambridge University Press,
Cambridge (2004)
176. Rosenhouse, J.: The Monty Hall Problem. Oxford University Press, Oxford (2009)
177. Roser, M., Appel, C., Ritchie, H.: Human height. Our World in Data (2013). URL https://ourworldindata.org/human-height
178. Ross, S.M.: Stochastic processes, 2nd edn. Wiley, Chichester (1996)
179. Ross, S.M.: A first course in probability, 10th edn. Pearson, Harlow (2020)
180. Salinelli, E., Tomarelli, F.: Discrete dynamical models, vol. 76. Springer, Wien (2014)
181. Salsburg, D.: The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century.
Holt Paperbacks (2002)
182. Sauer, T.: Numerical analysis, 2nd edn. Pearson, Boston (2012)
183. Selvin, S.: Letters to the editor. The American Statistician 29(1), 67 (1975)

184. Serway, R.A., Jewett, J.W.: Physics for scientists and engineers, 10th edn. Cengage (2011)
185. Shafarevich, I.R.: Basic algebraic geometry I, 3rd edn. Springer, Berlin (2013)
186. Shah, C.: A Hands-On Introduction to Machine Learning. Cambridge University Press (2022)
187. Shinbrot, T., Grebogi, C., Wisdom, J., Yorke, J.: Chaos in a double pendulum. American Journal of
Physics 60(6), 491–499 (1992)
188. Skegvoll, E., Lindqvist, B.H.: Modeling the occurrence of cardiac arrest as a poisson process. Annals
of emergency medicine 33(4), 409 (1999)
189. Spivak, M.: Calculus, 3rd edn. Cambridge University Press, Cambridge (2006)
190. Stein, E.M.: Fourier analysis: an introduction. Princeton University Press, Princeton (2003)
191. Stewart, I., Tall, D.: Complex analysis: the hitch hiker’s guide to the plane, second edn. Cambridge
University Press, Cambridge (2018)
192. Stewart, J., Watson, S., Clegg, D.: Multivariable Calculus, 9th edn. Cengage (2020)
193. Stinerock, R.: Statistics with R: A Beginner’s Guide. SAGE Publishing.1 (2018)
194. Strang, G.: Introduction to linear algebra, 5th edn. Wellesley-Cambridge Press (2016)
195. Strang, G.: Linear algebra and learning from data. Wellesley-Cambridge Press (2019)
196. Strauss, W.A.: Partial differential equations : an introduction, 2nd edn. Wiley, New Jersey (2008)
197. Strogatz, S.H.: Nonlinear dynamics and chaos, with applications to physics, biology, chemistry, and
engineering, 2nd edn. Westview Press (2015)
198. Sutton, E.C.: Observational Astronomy: Techniques and Instrumentation. Cambridge University Press
(2011)
199. Sýkora, S.: Approximations of ellipse perimeters and of the complete elliptic integral (2005). URL
http://dx.doi.org/10.3247/SL1Math05.004
200. Tall, D.: The blancmange function continuous everywhere but differentiable nowhere. The Mathemat-
ical Gazette 66(435), 11–22 (1982)
201. Tapp, K.: Differential Geometry of Curves and Surfaces. Springer (2016)
202. Thim, J.: Continuous nowhere differentiable functions. Masters Thesis, Luleå University of Technology
(2003)
203. Tong, Y.L.: The Multivariate Normal Distribution. Springer New York, New York (1990)
204. Tversky, A., Kahneman, D.: Judgment under uncertainty: Heuristics and biases. Science 185(4157),
1124 (1974)
205. Unpingco, J.: Python for Probability, Statistics, and Machine Learning, 2nd edn. Springer International
Publishing, Cham (2019)
206. Vince, J.: Quaternions for computer graphics, 2nd edn. Springer, London (2021)
207. Vince, J.: Mathematics for computer graphics, 6th edn. Springer, London (2022)
208. Vălean, C.I.: (Almost) Impossible Integrals, Sums, and Series. Springer, Cham (2019)
209. Watson, G.N.: Three triple integrals. The Quarterly Journal of Mathematics os-10(1), 266 (1939)
210. Wikimedia: URL https://commons.wikimedia.org/wiki/File:Blaise_Pascal_Versailles.JPG
211. Wikimedia: URL https://en.wikipedia.org/wiki/Jia_Xian#/media/File:Yanghui_triangle.gif
212. Wikimedia: URL https://en.wikipedia.org/wiki/Florence_Nightingale#/media/File:Florence_Nightingale_(H_Hering_NPG_x82368).jpg
213. Wikimedia / Mario Biondi: URL https://commons.wikimedia.org/wiki/File:Al_Khwarizmi%27s_Monument_in_Khiva.png
214. Wikipedia: URL https://en.wikipedia.org/wiki/Seki_Takakazu
215. Wilcox, A.J.: On the importance—and the unimportance— of birthweight. International Journal of
Epidemiology 30(6), 1233 (2001)
216. Witte, R.S., Witte, J.S.: Statistics, 4th edn. Wiley, Hoboken (2021)
217. Wolfram, S.: Statistical mechanics of cellular automata. Rev. Mod. Phys. 55, 601 (1983)
218. Wolfram, S.: Geometry of binomial coefficients. The American Mathematical Monthly 91(9), 566
(1984)
219. Yesilyurt, B.: Equations of Motion Formulation of a Pendulum Containing N-point Masses. arXiv
e-prints arXiv:1910.12610
220. Young, H.D., Freedman, R.A.: University physics with modern physics, 15th edn. Pearson (2020)
221. Yuan, Y.: Jiu zhang suan shu and the gauss algorithm for linear equations. Documenta Mathematica
(Extra volume: optimization stories) p. 9 (2012)
INDEX

A Generalisations 408
Bisection method 34
Algebra 283 Blancmange function 88
Anaconda 489 Broadcasting (Python concept) 496
Analysis Buffon’s needle 401
Complex 342 Buffon’s noodles 404
Real 5 Bézout’s identity 320
Animation (Matplotlib) 150
Ansombe’s quartet 449 C
Apéry’s constant ζ (3) 338
arange (syntax) 6 Cardioid 187
Arc length 104 Cartographic projections 122
Archimedian property 31 Catalan’s constant 396
Arcsine law (random walk) 483 Cayley’s Theorem 310
Array (NumPy) 6 Cayley-Hamilton Theorem 248, 279
Ceiling 13
B Cellular automata 368
Central Limit Theorem 424
Bayes’ Theorem 381 Generalised 427
Bayesian statistics Chaos 167, 172
Conjugate priors 484 Chinese Remainder Theorem 323
Credible interval 465 Circulation 135
Evidence 464 Classification 471
Inference 463 k-nearest neighbour algorithm 473
Likelihood 464 Clustering 471
Posterior 464 k-means algorithm 471
Prior 464 Coefficient of determination R2 448,
vs. Frequentist statistics 468 482
Bernoulli numbers 338 Coin toss (simulation techniques) 369
Bernoulli trials 369 Combination 356
Bertrand paradox 413 Comma (Python operator) 29
Bifurcation 179 Comparison test 16
Big O notation 58 Completeness Axiom 32
Binet’s formula 22 Confidence interval 432
Binomial coefficient 364 Conic sections 118
Birthday problem 377 Continuity (ε-δ definition) 23

Contractive sequence 22 Division ring 315


Correlation coefficient r 448, 482 Douady’s Rabbit 211
Counterexamples 44
Coupon collector’s problem 379, 409 E
Covariance 405, 482
Cramer’s rule 233 Eccentricity 106
Curse of dimensionality 398 Eigenvalues and eigenvectors 243
Curvature 108 Multiplicity (of an eigenvalue) 247
Signed 109 Ellipsoid 117
Cycloid 101 Surface area 122
Cylindrical coordinates 98 Elliptic integrals 106
Epiphylum oxypetalum 262
D Error function (erf) 89
Euclid’s orchard 50
d’Alembertian operator  200 Euclidean algorithm 318
Data types (Python) 491 Euler’s criterion 330
De Moivre-Laplace Theorem 388 Euler’s product 342
Degrees of freedom (statistics) 430, Euler-Mascheroni constant 18
480 Expectation value 360
Differentiability 36
Differentiation 36 F
Backward difference 55
Five-point stencil 87 Factorial (syntax) 48
Forward difference 37, 55 Feigenbaum constant 179
of power series 62 Fibonacci sequence 20, 251
Symmetric difference 55 From coin tosses 373
Directional derivative 126 Generalised 49
Dirichlet eta function 340 In Pascal’s triangle 407
Distribution (probability) Field 315
χ 459 First fundamental form 122
χ2 438 Fixed point (of a dynamical system)
Arcsine 425 171
Bates 425 Period 177
Bernoulli 369 Stability 177, 248
Beta 465, 484 Floor 13
Binomial 370 Flux 129
Bivariate (joint) 402 Fourier series 82
Bivariate normal 451, 482 Fractal 180, 210
Cauchy 427 Frenet-Serret equations 112
Exponential 393 Function 8
Gamma 442 Bijective 292
Marchenko-Pastur 280 Homomorphism 292
Marginal 456 Injective 292
Normal (Gaussian) 387 Python syntax 9
Poisson 391, 410 Surjective 292
Student’s t 429 Fundamental Theorem of Arithmetic
Triangular 425 286
Uniform 361 Fundamental Theorem of Calculus 133
Divergence (div operator ∇·) 130 Fundamental Theorem of Plane Curves
Divergence Theorem 129 109

Fundamental Theorem of Space Curves Critical values 433


115 p value 434
t-test 434
G Tea tasting 442
Test statistic ( χ2 ) 440
Galton board 386 Type I and type II errors 435
Gambler’s ruin 483 Test statistic (t) 433
Gamma function 340
Gauss’s Lemma 350 I
Gaussian integrals 80
Gibbs phenomenon 83 Image compression
Gimbal lock 313 SVD (colour) 281
Golden Ratio 22 SVD (greyscale) 262
Generalised 50 Image segmentation 486
Gradient (grad operator ∇) 124 Improper integral 75
Gram-Schmidt process 272 Incomplete gamma function 352
Greatest common divisor (gcd) 30, Initial-value problem 148
285 Integrating factor 150
Green’s Theorem 139 Integration
Groups 284 dblquad and tplquad (syntax) 100
Abelian 291 Double and triple 99
Alternating 303 of power series 62
Cayley graph 296 Romberg 74
Cayley table 289 Trapezium Rule 70
Coset 307 Boole’s Rule 92
Cyclic 291 Midpoint Rule 90
Dihedral 295 Simpson’s Rule 72, 90
Isomorphism 291 Trapezium Rule 90
Klein four 299 Intermediate Value Theorem 33
Normal subgroup 309 Invertible Matrix Theorem 223
Permutation 302 ipywidgets (library) 24
Quaternion 311
J
Quotient 309
Special orthogonal 240 Julia set 211
Subgroups 299 JupyterLab 489
Symmetric 302
L
H
Lagrange’s Theorem 299
Harmonic numbers 379 Laplacian operator ∇2 133
Harmonic series 17 Laurent series 339
Heat equation (diffusion equation) 190 Legendre polynomials 274
Helix 113 Legendre symbol 327
Hermite polynomials 282 Lemniscate of Bernoulli 108, 143
Hyperbolic paraboloid (saddle) 145 Li-Yorke Theorem 180
Hyperboloid of one sheet 117, 144 Limit 28
Hyperboloid of two sheets 144 Linear algebra 215
Hypocycloid 142 Linear combination 221
Hypothesis testing 432 Linear congruences (system of) 322
chi-squared (goodness of fit) test 439 Solvability criterion 325

Linear independence 221 LU 230


Linear regression 444 QR 235
Least square 444 Singular-value (SVD) 258
Linear systems (of equations) 219, 224 MCMC algorithms 468, 485
Linear transformation 222 Mean 360
Image 222 Mean Value Theorem 41
Kernel 222 Cauchy’s 43
Matrix representation 237 Median 420
Rotation 237 Metropolis-Hastings algorithm 485
Scaling and reflection 240 Minimum χ2 480
Shear 239 Mode 420
Translation 241 Modular arithmetic 286
linspace (syntax) 6 Monotone Convergence Theorem 14
Logistic Map 176 Monte Carlo integration 395, 411
Lorenz equations 171, 207 Error 399
Lucas sequences 49 Graphics rendering 399
Lyapunov exponent 167 Monty Hall problem 381
Generalisations 409
M Möbius strip 145
Möbius transformation 353
Machine epsilon (ε mach ) 57
Machine learning 471
N
Supervised 471
Training 475
Unsupervised 471 Neighbourhood 23
Madhava series 47 Newton-Raphson method 34, 51
Magic commands (including Nielsen’s spiral 143
%matplotlib) ix Nine Chapters of Mathematical Art
Mandelbrot set 181 216, 220
Connection to the logistic map 185 Normal modes 200
Matplotlib 6 Normal to a surface 124
Matrices 217 Number theory 285
Change of basis 254 Python syntax 288
Characteristic polynomial 247 NumPy 6
Defective 279
Determinant 218, 240 O
Fundamental subspaces 268
Inverse 218, 232 Order of convergence (for sequences)
Nullspace and nullity 222 49
Orthogonal 240 Ordinary differential equations 148
Pauli 315 Existence and uniqueness of solutions
Rank 221 157
Row and column spaces 221 Forward-Euler method 152
Row reduction 218 Heun’s method 155
Row-echelon form 218 Runge-Kutta (RK4) method 156
RREF 219 Separation of variables 151
Matrix decomposition solve_ivp (syntax) 158
Cholesky 235 Orthogonal polynomials 275
Diagonalisation Osculating circle 144
(eigen-decomposition) 251 Overloading (Python concept) 493

P Quadratic form 453


Diagonalisation 455
p-series 15 Quadratic reciprocity law 329
Pandas (library) 421 Quadratic residue 327
Paraboloid 145 Quadric surfaces 117
Parametric curves 96 Quaternions 311
Arc-length parametrisation 106 Rotation in R3 267, 312
Regular 103
Parametric surfaces 97 R
Parseval’s theorem 83
Partial derivatives 98 Radius of convergence 61
Partial differential equations 148 Ramanujan 104
Crank-Nicolson method 196 π approximation 48
Finite-difference method (1D) 190 Elliptic perimeter approximation
Finite-difference method (2D) 197 104, 143
Separation of variables 194, 199 Random number generators (RNG)
Uniqueness of solution 196 376
Pascal’s triangle 363 Random numbers (Python) 362
Pendulum 160 Random variables
Damped 206 Continuous 359
Double 165, 207 Discrete 358
Simple harmonic motion 161 Iid 418
Upside-down 206 Independent 403
Permutation (combinatorics) 356 Uncorrelated 452
Pigeonhole Principle 325 Random variates (sampling) 416
PIL (library) 261 Random walk 458
Pip (Python installation tool) 489 2D 459
Plotly (library) 225 Recurrent 460
Poincaré-Bendixson Theorem 173 Symmetric 458
Prime Number Theorem 332 Transient 460
Prime numbers 286 Rank-Nullity Theorem 265
Chebyshev bias 350 Ratio Lemma 65
Computer search 335 Ratio Test 61
Infinitude of 335 Raytracing 127, 399
Prime-counting function π 332 Reciprocal Fibonacci number 48
Primitive roots 348 Recurrence relation 149
Probability 355 Solution using matrices 251
Conditional 357 Solution using method of
Cumulative distribution function (cdf) characteristics 255
359 Regression to the mean 448
Density function (pdf) 359 Rejection region (critical region) 432
Kolmogorov axioms 357 Riemann Hypothesis 344
Law of total 358 Computer verification 345
Mass function (pmf) 358 Prime numbers and 345
Percent point function (ppf) 433 Riemann’s non-differentiable function
PyMC (library) 468 89
Riemann’s zeta function ζ 336
Q Analytic continuation 340
Critical strip 340
quad (syntax) 76 Efficient computation 352

Functional equation 340 Generalised 140


Nontrivial zeros 343 Strange attractor 172
Prime numbers and 342 Superposition principle 195
Trivial zeros 342 Surface of revolution 145
Ring 315 Sympify (SymPy operator) 247
Rolle’s Theorem 43 SymPy (library) 216
Roulettes 103
Rounding error 58 T

S Taylor series 59
Remainder 63
Sample mean 418 Taylor’s Theorem 63
Sample variance 420 Theorema Egregium 122
Satellite orbits 205 Thomae’s function 30
Scalar field 124 Throwing (Python) 35
Scikit-learn (library) 421 Torsion 112
Torus 146
SciPy (library) 54
Totient function φ 293
Seaborn (library) 421
Triangular numbers 364
Sequence 6
Truncation error 58
Convergence 10
Turtle (library) 368
Sequential criterion 32
Series 8 U
Sharkovsky’s Theorem 180
Sierpiński’s triangle 365 Unbiased estimator 419
Sieve of Eratosthenes 287
Simple pole 339 V
Simpson’s paradox 447, 481
sinc (function) 28 Variance 360
Sine and cosine integrals (Si and Ci) Vector field 129
80, 92 Vector space 221
Singular-value decomposition (SVD) Basis 221
258 Dimension 221
SIR model (epidemiology) 209 Viète’s formula 47
Slider (in Python) 26 Viviani’s curve 144
Span 221
Spectral Theorem 255 W
Spherical coordinates 98 Wallis product 47
Spherical harmonics 275 Wave equation 197
Squeeze Theorem 13 Weierstrass function 67
Standard deviation 360 Wiener process 460
Statistics 415 Wigner’s semicircle law 279
Tables 434, 480 Witch of Agnesi 143
Steepest descent 127
Stieltjes constants 339 Z
Stirling’s approximation 351
Stokes’ Theorem 138 Z-score 417
BIOGRAPHICAL INDEX

Abel, Niels Henrik, 291 Fatou, Pierre, 211 Leibniz, Gottfried, 53


Agnesi, Maria Gaetana, Feigenbaum, Mitchell, Lindelöf, Ernst, 158
143 179 Lorenz, Edward, 171
al-Khwarizmi, Fibonacci, 20 Lyapunov, Aleksandr
Muhammad ibn Fisher, Ronald, 442 Mikhailovich, 168
Musa, 283 Fourier, Joseph, 82
Apéry, Roger, 338 Frenet, Jean Frédéric, Mandelbrot, Benoit, 181
Archimedes, 122 112 May, Robert, 176
Menaechmus, 118
Bates, Grace, 425 Galton, Francis, 386
Newton, Isaac, 53
Bayes, Thomas, 382 Gauss, Carl Friedrich,
Nightingale, Florence,
Bernoulli, Jacob, 108 95
415
Bertrand, Joseph, 413 Gibbs, Josiah, 83
Bolzano, Bernard, 23 Gosset, William Sealy Ostrogradsky, Mikhail
Boole, George, 74 (‘Student’), 429 Vasilyevich, 130
Brown, Robert, 460 Gram, Jørgen, 273
Green, George, 139 Parseval, Marc-Antoine,
Buffon, Georges-Louis
83
Leclerc, 401
Hadamard, Jacques, 332 Pascal, Blaise, 355
Bézout, Étienne, 320
Hamilton, William Pauli, Wolfgang, 315
Catalan, Eugène Rowan, 249 Pearson, Karl, 440
Charles, 396 Heun, Karl, 155 Picard, Emile, 158
Cauchy, Augustin-Louis, Pingala, 363
Jia Xian, 363 Poisson, Siméon-Denis,
5 Julia, Gaston, 211
Cayley, Arthur, 249 392
Cramer, Gabriel, 233 Kepler, Johannes, 104 Ramanujan, Srinivasa,
Klein, Felix, 299 104
d’Alembert, Jean, 197 Kutta, Martin, 156 Riemann, Bernhard, 336
de la Vallée Poussin, Rodrigues, Olinde, 311
Charles Jean, 332 Lagrange, Joseph-Louis, Rolle, Michel, 43
de Moivre, Abraham, 63 Romberg, Werner, 74
388 Laplace, Pierre-Simon, Runge, Carl, 156
388
Eratosthenes, 287 Legendre, Schmidt, Erhard, 273
Euclid, 320 Adrien-Marie, Serret, Joseph Alfred,
Euler, Leonhard, 147 275 112

Sharkovsky, Oleksandr Stokes, George, 139 van Rossum, Guido, 489


Mikolaiovich, 180 Sun Zi, 323 Viviani, Vincenzo, 144
Sierpiński, Wacław, 365 von Koch, Helge, 345
Simpson, Edward Hugh, Takakazu, Seki, 215
447 Taylor, Brook, 63
Simpson, Thomas, 72 Thomae, Carl Johannes, Wiener, Norbert, 460
Stirling, James, 351 30 Wilbraham, Henry, 83
