

An Introduction to Scientific Computing with MATLAB® and Python Tutorials

Sheng Xu
MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book's use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.

First edition published 2022


by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742

and by CRC Press


4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

© 2022 Sheng Xu

CRC Press is an imprint of Taylor & Francis Group, LLC

Reasonable efforts have been made to publish reliable data and information, but the author and pub-
lisher cannot assume responsibility for the validity of all materials or the consequences of their use.
The authors and publishers have attempted to trace the copyright holders of all material reproduced
in this publication and apologize to copyright holders if permission to publish in this form has not
been obtained. If any copyright material has not been acknowledged please write and let us know so
we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information
storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact mpkbookspermis-[email protected]

Trademark notice: Product or corporate names may be trademarks or registered trademarks and are
used only for identification and explanation without intent to infringe.

ISBN: 978-1-032-06315-7 (hbk)


ISBN: 978-1-032-06318-8 (pbk)
ISBN: 978-1-003-20169-4 (ebk)

DOI: 10.1201/9781003201694

Typeset in Nimbus Roman by KnowledgeWorks Global Ltd.

Publisher's note: This book has been prepared from camera-ready copy provided by the authors.

To my family and friends.
Contents

Preface

Author

1 An Overview of Scientific Computing
  1.1 What Is Scientific Computing?
  1.2 Errors in Scientific Computing
    1.2.1 Absolute and Relative Errors
    1.2.2 Upper Bounds
    1.2.3 Sources of Errors
  1.3 Algorithm Properties
  1.4 Exercises

2 Taylor's Theorem
  2.1 Polynomials
    2.1.1 Polynomial Evaluation
  2.2 Taylor's Theorem
    2.2.1 Taylor Polynomials
    2.2.2 Taylor Series
    2.2.3 Taylor's Theorem
  2.3 Alternating Series Theorem
  2.4 Exercises
  2.5 Programming Problems

3 Roundoff Errors and Error Propagation
  3.1 Numbers
    3.1.1 Integers
  3.2 Floating-Point Numbers
    3.2.1 Scientific Notation and Rounding
    3.2.2 DP Floating-Point Representation
  3.3 Error Propagation
    3.3.1 Catastrophic Cancellation
    3.3.2 Algorithm Stability
  3.4 Exercises
  3.5 Programming Problems

4 Direct Methods for Linear Systems
  4.1 Matrices and Vectors
  4.2 Triangular Systems
  4.3 GE and A=LU
    4.3.1 Elementary Matrices
    4.3.2 A=LU
    4.3.3 Solving Ax = b by A=LU
  4.4 GEPP and PA=LU
    4.4.1 GEPP
    4.4.2 PA=LU
    4.4.3 Solving Ax = b by PA=LU
  4.5 Tridiagonal Systems
  4.6 Conditioning of Linear Systems
    4.6.1 Vector and Matrix Norms
    4.6.2 Condition Numbers
    4.6.3 Error and Residual Vectors
  4.7 Software
  4.8 Exercises
  4.9 Programming Problems

5 Root Finding for Nonlinear Equations
  5.1 Roots and Fixed Points
  5.2 The Bisection Method
  5.3 Newton's Method
    5.3.1 Convergence Analysis of Newton's Method
    5.3.2 Practical Issues of Newton's Method
  5.4 Secant Method
  5.5 Fixed-Point Iteration
  5.6 Newton's Method for Systems of Nonlinear Equations
    5.6.1 Taylor's Theorem for Multivariate Functions
    5.6.2 Newton's Method for Nonlinear Systems
  5.7 Unconstrained Optimization
  5.8 Software
  5.9 Exercises
  5.10 Programming Problems

6 Interpolation
  6.1 Terminology of Interpolation
  6.2 Polynomial Space
    6.2.1 Chebyshev Basis
    6.2.2 Legendre Basis
  6.3 Monomial Interpolation
  6.4 Lagrange Interpolation
  6.5 Newton's Interpolation
  6.6 Interpolation Error
    6.6.1 Error in Polynomial Interpolation
    6.6.2 Behavior of Interpolation Error
      6.6.2.1 Equally-Spaced Nodes
      6.6.2.2 Chebyshev Nodes
  6.7 Spline Interpolation
    6.7.1 Piecewise Linear Interpolation
    6.7.2 Cubic Spline
    6.7.3 Cubic Spline Interpolation
  6.8 Discrete Fourier Transform (DFT)
  6.9 Exercises
  6.10 Programming Problems

7 Numerical Integration
  7.1 Definite Integrals
  7.2 Numerical Integration
    7.2.1 Change of Intervals
  7.3 The Midpoint Rule
    7.3.1 Degree of Precision (DOP)
    7.3.2 Error of the Midpoint Rule
  7.4 The Trapezoidal Rule
  7.5 Simpson's Rule
  7.6 Newton-Cotes Rules
  7.7 Gaussian Quadrature Rules
  7.8 Other Numerical Integration Techniques
    7.8.1 Integration with Singularities
    7.8.2 Adaptive Integration
  7.9 Exercises
  7.10 Programming Problems

8 Numerical Differentiation
  8.1 Differentiation Using Taylor's Theorem
    8.1.1 The Method of Undetermined Coefficients
  8.2 Differentiation Using Interpolation
    8.2.1 Differentiation Using DFT
  8.3 Richardson Extrapolation
  8.4 Exercises
  8.5 Programming Problems

9 Initial Value Problems and Boundary Value Problems
  9.1 Initial Value Problems (IVPs)
    9.1.1 Euler's Method
      9.1.1.1 Local Truncation Error and Global Error
      9.1.1.2 Consistency, Convergence and Stability
      9.1.1.3 Explicit and Implicit Methods
    9.1.2 Taylor Series Methods
    9.1.3 Runge-Kutta (RK) Methods
  9.2 Boundary Value Problems (BVPs)
    9.2.1 Finite Difference Methods
      9.2.1.1 Local Truncation Error and Global Error
      9.2.1.2 Consistency, Stability and Convergence
  9.3 Exercises
  9.4 Programming Problems

10 Basic Iterative Methods for Linear Systems
  10.1 Jacobi and Gauss-Seidel Methods
    10.1.1 Jacobi Method
    10.1.2 Gauss-Seidel (G-S) Method
  10.2 Convergence Analysis
  10.3 Other Iterative Methods
  10.4 Exercises
  10.5 Programming Problems

11 Discrete Least Squares Problems
  11.1 The Discrete LS Problems
  11.2 The Normal Equation by Calculus
  11.3 The Normal Equation by Linear Algebra
  11.4 LS Problems by A=QR
  11.5 Artificial Neural Network
  11.6 Exercises
  11.7 Programming Problems

12 Monte Carlo Methods and Parallel Computing
  12.1 Monte Carlo Methods
  12.2 Parallel Computing
  12.3 Exercises
  12.4 Programming Problems

Appendices

A An Introduction of MATLAB for Scientific Computing
  A.1 What is MATLAB?
    A.1.1 Starting MATLAB
    A.1.2 MATLAB as an Advanced Calculator
    A.1.3 Order of Operations
    A.1.4 MATLAB Built-in Functions and Getting Help
    A.1.5 Keeping a Record for the Command Window
    A.1.6 Making M-Scripts
  A.2 Variables, Vectors and Matrices
    A.2.1 Variables
    A.2.2 Suppressing Output
    A.2.3 Vectors and Matrices
    A.2.4 Special Matrices
    A.2.5 The Colon Notation and linspace
    A.2.6 Accessing Entries in a Vector or Matrix
  A.3 Matrix Arithmetic
    A.3.1 Scalar Multiplication
    A.3.2 Matrix Addition
    A.3.3 Matrix Multiplication
    A.3.4 Transpose
    A.3.5 Entry-wise Convenience Operations
  A.4 Outputting/Plotting Results
    A.4.1 disp
    A.4.2 fprintf
    A.4.3 plot
  A.5 Loops and Decisions
    A.5.1 for Loops
    A.5.2 Logicals and Decisions
    A.5.3 while Loops
  A.6 Functions
    A.6.1 M-Functions
    A.6.2 Anonymous Functions
    A.6.3 Passing Functions to Functions
  A.7 Creating Live Scripts in the Live Editor
  A.8 Concluding Remarks
  A.9 Programming Problems

B An Introduction of Python for Scientific Computing
  B.1 What is Python?
    B.1.1 Starting Python
    B.1.2 Python as an Advanced Calculator
    B.1.3 Python Programs
  B.2 Variables, Lists and Dictionaries
    B.2.1 Variables
    B.2.2 Lists
    B.2.3 Dictionaries
  B.3 Looping and Making Decisions
    B.3.1 for Loops
    B.3.2 if Statements
    B.3.3 while Loops
    B.3.4 break and continue in Loops
  B.4 Functions
    B.4.1 Passing Arguments
    B.4.2 Passing Lists
  B.5 Classes
    B.5.1 Attributes
    B.5.2 Methods
    B.5.3 Inheritance
    B.5.4 Objects as Attributes
  B.6 Modules
  B.7 numpy, scipy, matplotlib
    B.7.1 numpy
    B.7.2 scipy
    B.7.3 matplotlib
  B.8 Jupyter Notebook
  B.9 Concluding Remarks
  B.10 Programming Problems

Index
Preface

Features of the Book


We write this book to have the following features.
• We cover fundamental numerical methods in the book. The book can be used as a
textbook for a first course in scientific computing. The prerequisites are calculus
and linear algebra. A college student in science, math or engineering can take the
course in sophomore year.
• We write this book for students. We present the material in a self-contained
accessible manner for a student to easily follow and explore. We use motivating
examples and application problems to catch a student’s interest. We add remarks
at various places to satisfy a student’s curiosity.
• We provide short tutorials on MATLAB® and Python. We give pseudocodes of
algorithms instead of source codes. With the tutorials and pseudocodes, a student
can enjoy writing programs in MATLAB or Python to implement, test and apply
algorithms.
• We balance the underlying idea, algorithm implementation and performance analysis for a fundamental numerical method so that a student can gain comprehensive understanding of the method.
• We review necessary material from calculus and linear algebra at appropriate places. We also make the connection between the fundamental numerical methods and advanced topics such as machine learning and parallel computing.
• We design paper-and-pen exercises and programming problems so that students can apply, test and analyze numerical methods toward comprehensive understanding and practical applications.

Sample Syllabi
Below is a sample syllabus for a two-semester course sequence.

Semester 1
1. Appendix A or B: MATLAB or Python Tutorial


2. Chapter 1: An Overview of Scientific Computing


3. Chapter 2: Taylor’s Theorem
4. Chapter 3: Roundoff Errors and Error Propagation
5. Sections 5.1–5.5: Root Finding for Nonlinear Equations
6. Sections 6.1–6.6: Interpolation
7. Sections 7.1–7.6: Numerical Integration
8. Chapter 8: Numerical Differentiation

Semester 2
1. Chapter 4: Direct Methods for Linear Systems
2. Sections 5.6–5.7: Nonlinear Systems and Optimization
3. Sections 6.6–6.7: Spline Interpolation and DFT
4. Sections 7.7–7.8: Gaussian Quadrature Rules
5. Chapter 9: IVPs and BVPs
6. Chapter 10: Iterative Methods for Linear Systems
7. Chapter 11: LS Problems
8. Chapter 12: Monte Carlo Methods and Parallel Computing

Acknowledgments
We acknowledge our colleagues Daniel Reynolds and Johannes Tausch for detailed
critiques of this first edition. We would appreciate any comments, suggestions and
corrections that readers may wish to send us using the email address [email protected].
Author

Sheng Xu is associate professor of mathematics at Southern Methodist University


(SMU). He holds a Ph.D. in mechanical engineering from Cornell University. He
conducts research on development and application of computational methods for
problems in fluid mechanics, including aerodynamics of insect flight, two-fluid flows,
supersonic turbulence, and turbulence control. His published work has appeared in
Journal of Computational Physics, SIAM Journal on Scientific Computing, Physics of
Fluids, and Journal of Fluid Mechanics.

1 An Overview of Scientific Computing

In this chapter, we address what scientific computing is about and why it is important.
We give a brief introduction to algorithms and errors. We emphasize the importance
of error upper bounds.

1.1 What Is Scientific Computing?


Let’s consider two examples first.
Example 1.1 (Apple in free fall) An apple falls from a tree under gravity. How long
does it take to reach the ground (or Newton’s head)?
[Solution:] Assume that the apple has a zero initial velocity and falls without drag.
Denote the height of the apple as H and Earth’s gravity as g. By Newton’s second law,
the time T that the apple takes to reach the ground satisfies
$$\frac{1}{2}\,g T^2 = H$$
This equation can be solved exactly (analytically) to give
$$T = \sqrt{\frac{2H}{g}}$$


Example 1.2 (Particle chasing) Two particles A and B move from the same location
along a straight path in the same direction. The particle A moves with the constant
speed 2, and the particle B starts with a zero speed and accelerates with a time-dependent speed e^t − 1, where t is the time. How long does the particle B take to catch the particle A?
[Solution:] When the particle B catches up with the particle A at t = T > 0, they travel the same distance. The distance traveled by the particle A is 2T, and the distance traveled by the particle B is the integral of e^t − 1 from 0 to T. The time T therefore satisfies
$$2T = \int_0^T (e^t - 1)\,dt, \quad T > 0$$


which gives
$$3T + 1 = e^T, \quad T > 0$$
This equation cannot be solved analytically using elementary functions, but we clearly know it has a positive real solution from physical intuition or the plots of f(T) = 3T + 1 and g(T) = e^T in Fig. 1.1. □

FIGURE 1.1
Graphs of f(T) = 3T + 1 and g(T) = e^T for T in [0, 3].

In the first example, we can find an analytical solution of the equation. An


analytical solution is a closed-form expression for an unknown variable in terms
of the known variables. However, the analytical solutions of many mathematical
problems may not be available or may be difficult to obtain, such as in the second
example. We therefore resort to scientific computing to find numerical solutions of
those mathematical problems. A numerical solution is an approximate solution of a
problem, and it appears as numerical values instead of closed-form expressions.
Scientific computing involves the development and study of step-by-step procedures to find numerical solutions of mathematical problems. The step-by-step procedures are called (numerical) algorithms. A numerical/computational method refers to both the idea underlying an algorithm and the fulfillment of the idea as an algorithm. We sometimes do not distinguish between a method and an algorithm.
For the second example above, we may devise an algorithm to approximate the solution as follows. We notice that the solution is between 1 and 3 because f(1) = 3 × 1 + 1 > g(1) = e^1 while f(3) = 3 × 3 + 1 < g(3) = e^3 (the solid curve is above the dashed curve at 1 while below at 3 in Fig. 1.1). We can then bisect the interval [1, 3] at the middle point 2 and look at the two subintervals [1, 2] and [2, 3]. We can use the same strategy to narrow down the solution to one subinterval, in this case [1, 2]. We can repeatedly apply this strategy until we find a subinterval that contains the solution and is small enough that its middle point is a good approximation of the solution. This procedure is called bisection iteration, which will be studied in more detail in Section 5.2 of Chapter 5.
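As a preview, below is a minimal Python sketch of this bisection strategy applied to f(T) = e^T − 3T − 1 on [1, 3]; the function name, tolerance and stopping rule are illustrative choices, not prescribed by the text.

import math

def bisect(f, lo, hi, tol=1e-8):
    # Keep a sign change of f inside [lo, hi] and repeatedly halve the interval.
    assert f(lo) * f(hi) < 0, "need a sign change on [lo, hi]"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:   # the solution lies in the left half
            hi = mid
        else:                     # the solution lies in the right half
            lo = mid
    return 0.5 * (lo + hi)

# e^T = 3T + 1  <=>  f(T) = e^T - 3T - 1 = 0
print(bisect(lambda t: math.exp(t) - 3.0 * t - 1.0, 1.0, 3.0))  # about 1.9039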
Scientific computing, theoretical study and experiments are three pillars to support
scientific research and technological applications. The diagram in Fig. 1.2 shows
where scientific computing comes into play.
[Diagram: real-world problems (physics, engineering, biology, ...) are converted by modeling into mathematical models; scientific computing on computers turns the models into numerical solutions used for understanding, analysis, prediction and design; experiments inform each stage.]

FIGURE 1.2
Scientific computing in solving real-world problems.

With the advancement of modern computers, scientific computing plays an ever greater role in almost all areas. This introductory text will give you a taste of scientific computing and prepare you for your broader and deeper exploration and application of scientific computing.

1.2 Errors in Scientific Computing


The results from scientific computing are typically approximate. One important goal
in scientific computing is to ensure that the approximate results are close enough to
the true/exact results.

1.2.1 Absolute and Relative Errors

Definition 1.1 Let T denote the true/exact value of a scalar/number, and A an approximation of the scalar. We define the error E, the absolute error |E| and the relative error R of the approximation A as follows, respectively
• The error: E = T − A
• The absolute error: |E| = |T − A|
• The relative error: $R = \dfrac{|T - A|}{|T|} = \dfrac{|E|}{|T|}$, if T ≠ 0

The relative error is often used to compare approximations of numbers of widely different scales.
Example 1.3 (Absolute and relative errors) The absolute errors in the measurements of the thickness of a book 5 cm thick and a paper 0.2 mm thick are both 0.1 mm. Are the two measurements of the same quality? Which one is better?
[Solution:] The absolute errors are the same, but the relative errors are quite different. The relative error in the measured book thickness is 0.1/50 = 0.002 = 0.2%, which is much smaller than the relative error in the measured paper thickness, 0.1/0.2 = 0.5 = 50%. The two measurements are not of the same quality, and the measurement of the book thickness has better quality. □
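The arithmetic is easy to check in Python (a small illustrative sketch, with both thicknesses converted to millimeters):

# book: 5 cm = 50 mm, paper: 0.2 mm; both measured with absolute error 0.1 mm
for name, true_value, abs_error in [("book", 50.0, 0.1), ("paper", 0.2, 0.1)]:
    print(name, "relative error:", abs_error / true_value)
# book  relative error: 0.002  (0.2%)
# paper relative error: 0.5    (50%)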

1.2.2 Upper Bounds


In real practice, we know the approximation but do not know the true value (otherwise we might not need the approximation), so we do not know the exact value of an error (absolute or relative). However, we can find an upper bound for the absolute or relative error, which is a value that the error cannot exceed. The error may reach an upper bound if the upper bound is sharp. For example, if we know the absolute error in an approximation ranges in [0, 1] and can be 1, then 1 is a sharp upper bound; it is also true that the absolute error is less than the upper bound 100, but the upper bound 100 may not be very useful. So we want to find an upper bound as sharp as possible (i.e. close to the minimum upper bound).
Example 1.4 (Upper bounds) Let A = 3.14 be an approximation of π. Find a meaningful upper bound of the absolute error in the approximation.
[Solution:] The true value of π is

π = 3.14159 · · ·

The approximation A is
A = 3.14
The absolute error |E| in the approximation is

|E| = |π − A| = |3.14159 · · · − 3.14| = 0.00159 · · · < 0.0016

We can choose 0.0016 to be an upper bound, which is “sharper” (“better”) than an


upper bound 0.002. An upper bound 0.1 is not wrong, but not meaningful. 

1.2.3 Sources of Errors


Where are errors from? According to their sources, we can categorize errors into the
following three types.

• Modeling errors: the errors due to the simplifications and hypotheses in the
modeling process that convert a real-world problem into a mathematical model.
For example, the error due to the neglect of air resistance in the apple’s free fall
problem above. Modeling errors are not in the scope of scientific computing.
• Mathematical approximation errors: the errors due to the approximation of an
actual quantity by an approximate formula.
Example 1.5 We know that f'(a), the derivative of f(x) at x = a, is the slope of the tangent line to the curve y = f(x) at x = a; and (f(a+h) − f(a))/h, where h is finite, is the slope of the secant line through the points (a, f(a)) and (a+h, f(a+h)) on the curve y = f(x). If we approximate the tangent slope T = f'(a) by the secant slope A = (f(a+h) − f(a))/h, we have the mathematical approximation error:
$$|T - A| = \left| f'(a) - \frac{f(a+h) - f(a)}{h} \right|$$
Later, we will analyze how this error depends on the value of h using Taylor's theorem and introduce the big-O notation to describe this dependence. □
• Roundoff errors: the errors due to finite-precision representation of real numbers.
Example 1.6 Let x̂ = 1.2345679 be an 8-digit representation of x = 1.23456789. The absolute error |x − x̂| = 0.00000001 = 10^{-8} is caused by rounding x to x̂ and is the roundoff error in the representation. Later, we will describe how computers represent real numbers and learn that roundoff errors are inevitable in computer representations. □
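A one-line Python experiment already exhibits roundoff in double-precision arithmetic (a sketch; Chapter 3 explains the representation behind it):

# 0.1 and 0.2 have no exact binary representations, so their sum is rounded.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# rounding x to 8 digits as in Example 1.6
x = 1.23456789
xhat = round(x, 7)       # 1.2345679
print(abs(x - xhat))     # about 1e-8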

1.3 Algorithm Properties


A numerical algorithm is a step-by-step procedure to find the numerical solution of a
mathematical problem. We care about the following three properties of a numerical
algorithm.
Accuracy Accuracy concerns the magnitude of the error in a numerical solution. The error needs to be controlled within an acceptable range.
Efficiency Efficiency includes time efficiency and storage efficiency. Time efficiency depends on how many arithmetic operations are needed in an algorithm. Storage efficiency depends on how much memory space is needed to execute an algorithm.
Stability An algorithm is stable if small errors (inevitable in practice) introduced in the algorithm stay small. If the small errors get amplified to be out of control, the algorithm is unstable.
Later we will use specific algorithms to illustrate and discuss these properties.

1.4 Exercises
Exercise 1.1 A1 is an approximation of T1 = 1 with the absolute error 1, and A2 is
an approximation of T2 = 100 with the absolute error 1. Which approximation (A1 or
A2 ) has better accuracy? Why?

Exercise 1.2 (1) Suppose the true value T = 2, and the relative error of its approx-
imation A is less than 5%. What is the range for A? (2) Suppose an approximation
A = 99, and the relative error in A is 10%. What are the possible values of the true
value T ?

Exercise 1.3 If A = 10 is an approximation of a true value T which is in the range


8 ≤ T ≤ 11, find an upper bound for the absolute error and an upper bound for the
relative error of the approximation.

Exercise 1.4 Euler's number e can be defined as $e = \lim_{n\to\infty} \left(1 + \frac{1}{n}\right)^n$. It is a famous transcendental number with infinitely many digits e = 2.718281828459.... Suppose one approximates e by $A = \left(1 + \frac{1}{2}\right)^2$ with n = 2. (1) Find a meaningful upper bound for the absolute error of the approximation. (2) Find a meaningful upper bound for the relative error of the approximation.

Exercise 1.5 If the function y = sin(x) is approximated by the function y = x for


x ∈ [−π/2, π/2], which type does the error of the approximation belong to? How
does the absolute error change as |x| increases toward π/2?

Exercise 1.6 Let A_h be an approximation of the true value T. Suppose the approximation A_h depends on a small parameter h, and the absolute error |T − A_h| depends on h as |T − A_h| = C h^p, where C > 0 and p > 0 are constants. (1) To make the error small, do we want a small or large value of p? (2) If we double the value of h, how many times larger does the absolute error become for p = 1, p = 2 and p = 3, respectively?

Exercise 1.7 Count the total number of multiplications in the nested loops of the following pseudocodes.
(1)
for m = 1 to 20 do
    for n = 1 to 20 do
        b_m = b_m − G_{m,n} b_n
    end for
end for
(2)
for m = 1 to 20 do
    for n = m to 20 do
        b_m = b_m − G_{m,n} b_n
    end for
end for

Exercise 1.8 If you are asked to evaluate p_4(x) = 1 + 2x + 3x^2 + 4x^3 + 5x^4 at a given value of x, how many multiplications do you use? (Note that evaluating x^n as x · x · ... · x needs n − 1 multiplications.) Can you use fewer?
2 Taylor's Theorem

In this chapter, we present Taylor's theorem, which is used later for the development and analysis of some numerical methods. We show how to approximate functions by Taylor polynomials and how to analyze the errors in such approximations. The big-O notation is introduced to describe efficiency and accuracy.

2.1 Polynomials
A polynomial p_n(x) in the variable x of degree n, where n is a non-negative integer, can be written in the general form as
$$p_n(x) = c_0 + c_1 x + c_2 x^2 + \cdots + c_{n-1} x^{n-1} + c_n x^n \qquad (2.1)$$
where c_0, c_1, c_2, ..., c_{n-1} and c_n are constant coefficients (if c_n ≠ 0, n is the true/exact degree). The summation in p_n(x) can be written in the sigma notation as
$$c_0 + c_1 x + c_2 x^2 + \cdots + c_{n-1} x^{n-1} + c_n x^n \equiv \sum_{k=0}^{n} c_k x^k \qquad (2.2)$$
Note that p_n(x) is built from constant coefficients and the variable x using only addition, multiplication and exponentiation of x to non-negative integer powers (repeated multiplication). So polynomials can be easily evaluated, differentiated and integrated, and are good candidates to approximate functions or data.
The polynomial p_n(x) can be regarded as a linear combination of the monomial basis functions 1, x, x^2, ..., x^{n-1} and x^n using the constant weights c_0, c_1, ..., c_{n-1} and c_n, respectively. Later we will construct a polynomial of degree at most n as a linear combination of other basis functions.
A polynomial centered at the number a has the form
$$p_n(x) = c_0 + c_1(x-a) + c_2(x-a)^2 + \cdots + c_{n-1}(x-a)^{n-1} + c_n(x-a)^n \qquad (2.3)$$

Below are a few examples of polynomials
• z(x) = 0: a zero polynomial (of undefined degree)
• p_0(x) = −5: a constant polynomial (of degree 0)
• p_1(x) = 2 + 3x: a linear polynomial (of degree 1)
• p_2(x) = π + 1.3x − 2.7x^2: a quadratic polynomial (of degree 2)
• p_3(x) = −6 + 2(x−7) − 5(x−7)^2 + (x−7)^3: a cubic polynomial (of degree 3) centered at 7
• p_4(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + c_4 x^4: a polynomial of degree ≤ 4 (= 4 if c_4 ≠ 0)
Later, we will define the orthogonality of polynomials and introduce special
polynomials such as Chebyshev polynomials and Legendre polynomials.
A very important theorem regarding polynomials is the fundamental theorem of
algebra.

Theorem 2.1 (Fundamental theorem of algebra) Let p_n(x) = c_n x^n + c_{n-1} x^{n-1} + ... + c_2 x^2 + c_1 x + c_0, c_n ≠ 0, be a polynomial of degree n. Then p_n(x) can be factorized as
$$p_n(x) = c_n (x - r_1)(x - r_2) \cdots (x - r_n) \qquad (2.4)$$
i.e. the polynomial equation p_n(x) = 0 has n roots r_1, r_2, ..., r_n.

Remark The roots can be repeated and can be complex numbers. The theorem can be proved using complex analysis.

2.1.1 Polynomial Evaluation
Let's count the numbers of additions and multiplications in evaluating
$$p_4(x) = 1 + 2x + 3x^2 + 4x^3 + 5x^4$$
at a given value of x using different methods.

Naive method: x^k, k = 2, 3, 4, in p_4(x) is calculated as x · x · ... · x, the repeated multiplication of x for k − 1 times. So the term c_k x^k needs k multiplications. The total number of multiplications in evaluating p_4(x) is
$$0 + 1 + 2 + 3 + 4 \equiv \sum_{k=0}^{4} k = 10.$$
The total number of additions in evaluating p_4(x) is 4.

Horner's method: We can reduce the number of multiplications by calculating x^k as x^{k-1} · x if x^{k-1} is known. To use this fact, we can write p_4(x) in the form of nested multiplication
$$p_4(x) = 1 + x \cdot (2 + x \cdot (3 + x \cdot (4 + 5 \cdot x)))$$
where only 4 multiplications are required. The number of additions is still 4. The evaluation procedure using nested multiplication is known as Horner's method.
The above analysis can be easily extended to a polynomial of degree n, $p_n(x) = \sum_{k=0}^{n} c_k x^k$, for which the naive method needs
$$\sum_{k=0}^{n} k = 0 + 1 + 2 + 3 + \cdots + n = \frac{n(n+1)}{2} \qquad (2.5)$$
multiplications and n additions, while Horner's method requires n multiplications and n additions. So Horner's method is more efficient (in terms of computational time).
We may use the big O notation to describe the efficiency of a method. Denote the size of a problem as n and the computational time of a method for the problem as w. We say that the computational time is of order f(n), i.e.
$$w = O(f(n)) \qquad (2.6)$$
which means that the growth of the computational time w is bounded from above for large enough n as
$$w \le C f(n) \qquad (2.7)$$
where C is a positive constant. For example, the computational time of the naive method for polynomial evaluation is O(n^2), while that of Horner's method is O(n), where n is the degree of a polynomial.
The polynomial p_n(x) centered at a in the form of nested multiplication is
$$p_n(x) = c_0 + (x-a)\cdot(c_1 + (x-a)\cdot(c_2 + \cdots + (x-a)\cdot(c_{n-1} + (x-a)\cdot c_n)\cdots)) \qquad (2.8)$$
which can be evaluated by Horner's method by starting with the innermost parentheses and working outward. The pseudocode of Horner's method is

Algorithm 1 Horner's method to evaluate the polynomial p_n(x) = c_0 + c_1(x−a) + c_2(x−a)^2 + ... + c_{n-1}(x−a)^{n-1} + c_n(x−a)^n
    p ← c_n
    z ← x − a
    for k from n − 1 down to 0 do
        p ← c_k + z · p
    end for
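A direct Python transcription of Algorithm 1 might look as follows (a sketch only; the book itself gives pseudocode and leaves the implementation to the reader):

def horner(c, x, a=0.0):
    # Evaluate p(x) = c[0] + c[1](x-a) + ... + c[n](x-a)^n by nested
    # multiplication: n multiplications and n additions for degree n.
    p = c[-1]                            # p <- c_n
    z = x - a                            # z <- x - a
    for k in range(len(c) - 2, -1, -1):  # k = n-1 down to 0
        p = c[k] + z * p
    return p

print(horner([1, 2, 3, 4, 5], 2.0))  # p4(2) = 1 + 4 + 12 + 32 + 80 = 129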

2.2 Taylor’s Theorem


Taylor’s theorem is very important for us in scientific computing to construct and
analyze numerical approximations. Below we distinguish Taylor polynomials and
Taylor series, and then review Taylor’s theorem.

2.2.1 Taylor Polynomials
Let f(x) be a function with continuous first n derivatives (i.e. continuous f', f'', ..., f^{(n)}) in an interval I containing the number a (we may say f ∈ C^n_I, where C^n_I denotes the function space consisting of all functions with continuous first n derivatives in the interval I). The Taylor polynomial of degree n for the function f(x) about a is
$$T_n(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n \equiv \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x-a)^k \qquad (2.9)$$
where k! = 1 · 2 · ... · k is the factorial of k and 0! := 1.

Remark T_1(x) = f(a) + f'(a)(x − a) is the linearization of f(x) at a (the tangent line equation).

The Taylor polynomial T_n(x) is constructed using the derivatives of f(x) at a such that T_n(x) and f(x) satisfy the following matching conditions at a
$$T_n(a) = f(a) \qquad (2.10)$$
$$T_n'(a) = f'(a) \qquad (2.11)$$
$$T_n''(a) = f''(a) \qquad (2.12)$$
$$\cdots$$
$$T_n^{(n)}(a) = f^{(n)}(a) \qquad (2.13)$$

Example 2.1 (Taylor polynomials) Find the Taylor polynomial of degree 2 for f(x) = −1 + 2x + 3x^2 about a = 1.
[Solution:] In this problem, f(x) = −1 + 2x + 3x^2 and a = 1. The Taylor polynomial is
$$T_2(x) = f(1) + f'(1)(x-1) + \frac{f''(1)}{2!}(x-1)^2$$
We have
$$f(x) = -1 + 2x + 3x^2, \quad f(1) = 4$$
$$f'(x) = 2 + 6x, \quad f'(1) = 8$$
$$f''(x) = 6, \quad f''(1) = 6$$
So the Taylor polynomial is
$$T_2(x) = 4 + 8(x-1) + 3(x-1)^2$$


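Such hand computations can be checked symbolically, for instance with SymPy (an optional sketch, assuming the sympy package is available):

import sympy as sp

x = sp.symbols('x')
f = -1 + 2*x + 3*x**2
T2 = f.series(x, 1, 3).removeO()  # degree-2 Taylor polynomial about a = 1
print(sp.expand(T2))              # 3*x**2 + 2*x - 1, i.e. T2(x) = f(x)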

2.2.2 Taylor Series
If f(x) is infinitely differentiable over an interval containing the number a, we can write down the Taylor series/expansion of f(x) about a as
$$f(x) \sim f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots \equiv \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(x-a)^k \qquad (2.14)$$
Note that
(1) the Taylor series (on the right) can be regarded as $\lim_{n\to\infty} T_n(x)$, and a Taylor polynomial is a truncated Taylor series;
(2) the symbol ∼ is used to indicate that the Taylor series may not equal f(x); if the Taylor series converges to f(x) for any x in an interval, we can replace ∼ by = with the specification of the convergence interval;
(3) the Taylor series is also called the Maclaurin series if a = 0.
Below are some familiar convergent Maclaurin series with their convergence intervals.
$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots \equiv \sum_{k=0}^{\infty} \frac{x^k}{k!}, \quad |x| < \infty \qquad (2.15)$$
$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \equiv \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k+1}}{(2k+1)!}, \quad |x| < \infty \qquad (2.16)$$
$$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots \equiv \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k}}{(2k)!}, \quad |x| < \infty \qquad (2.17)$$
$$\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots \equiv \sum_{k=0}^{\infty} x^k, \quad |x| < 1 \qquad (2.18)$$
$$\tan^{-1} x = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots \equiv \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k+1}}{2k+1}, \quad |x| \le 1 \qquad (2.19)$$
$$\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots \equiv \sum_{k=1}^{\infty} \frac{(-1)^{k-1} x^k}{k}, \quad -1 < x \le 1 \qquad (2.20)$$
$$(1+x)^p = 1 + px + \frac{p(p-1)}{2!}x^2 + \frac{p(p-1)(p-2)}{3!}x^3 + \cdots \equiv \sum_{k=0}^{\infty} \binom{p}{k} x^k, \quad |x| < 1 \qquad (2.21)$$
In the last binomial series, the power p is real, and the binomial notation is defined by
$$\binom{p}{k} := \frac{p(p-1)\cdots(p-k+1)}{k!} \qquad (2.22)$$
We may substitute x by new variables in the above series, or integrate or differentiate the above series, to obtain new series.
Example 2.2 Substituting x by −t^2 in the Maclaurin series for e^x, we get
$$e^{-t^2} = 1 - t^2 + \frac{t^4}{2!} - \frac{t^6}{3!} + \cdots \equiv \sum_{k=0}^{\infty} \frac{(-1)^k t^{2k}}{k!}, \quad |t| < \infty \qquad (2.23)$$

Example 2.3 Substituting 1 + x by t in the Maclaurin series for ln(1 + x), we get the Taylor series of ln t about a = 1 as
$$\ln t = (t-1) - \frac{(t-1)^2}{2} + \frac{(t-1)^3}{3} - \frac{(t-1)^4}{4} + \cdots \equiv \sum_{k=1}^{\infty} \frac{(-1)^{k-1}(t-1)^k}{k}, \quad |t-1| < 1 \ (0 < t < 2) \qquad (2.24)$$
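These substitutions are easy to confirm symbolically (a sketch assuming SymPy is available; the printed series match Eqs. (2.23) and (2.24)):

import sympy as sp

t = sp.symbols('t')
print(sp.exp(-t**2).series(t, 0, 8))  # 1 - t**2 + t**4/2 - t**6/6 + O(t**8)
print(sp.log(t).series(t, 1, 4))      # series of ln t in powers of (t - 1)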


2.2.3 Taylor's Theorem
Polynomials can be easily evaluated with only additions, subtractions and multiplications. In addition, polynomials can be easily differentiated and integrated. Many other functions, for example sin x, e^x, ln x and √x, cannot be evaluated exactly using only these arithmetic operations. It is thus desirable to approximate a function by a polynomial. Here we approximate a function by its Taylor polynomials and use Taylor's theorem to analyze the errors. Later, we will also approximate a function by other kinds of polynomials.
Fig. 2.1 shows the graphs of y = cos x and the Taylor polynomials T_0(x), T_2(x) and T_4(x) of the degrees 0, 2 and 4, respectively, for f(x) = cos x about a = 0 on [−π, π], where
$$T_0(x) = 1, \quad T_2(x) = 1 - \frac{x^2}{2!}, \quad T_4(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!}$$
which are obtained by truncating the Maclaurin series of y = cos x. In this case, the Taylor polynomials approximate f(x) well if x is close to a; and a higher-order Taylor polynomial approximates f(x) better for a fixed value of x on [−π, π].
The accuracy of Taylor polynomial approximation can be analyzed using Taylor's theorem.
FIGURE 2.1
Approximations of y = cos x, x in [−π, π] by its Taylor polynomials T_0(x), T_2(x) and T_4(x) about 0.

Theorem 2.2 (Taylor's Theorem) Assume f(x) ∈ C^{n+1}_{[α,β]}, and let a be a number in (α, β). Then
$$f(x) = T_n(x) + R_n(x), \quad x \in [\alpha, \beta] \qquad (2.25)$$
where
$$T_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x-a)^k \qquad (2.26)$$
is the n-th order Taylor polynomial of f at a, and R_n(x) is called Taylor's remainder (or the error term in approximating f(x) by T_n(x)). Taylor's remainder R_n(x) in Lagrange mean value form is
$$R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1} \qquad (2.27)$$
where c is a point (generally unknown to us) between a and x.

Remark Taylor's theorem can be proved by repeatedly applying Rolle's theorem to the function
$$g(t) = f(t) - T_n(t) - (f(x) - T_n(x))\frac{(t-a)^{n+1}}{(x-a)^{n+1}}$$
which is a function of t that satisfies g(a) = g'(a) = ... = g^{(n)}(a) = 0 and g(x) = 0.
Remark Taylor's remainder R_n(x) also has an integral form as
$$R_n(x) = \int_a^x \frac{f^{(n+1)}(t)}{n!}(x-t)^n \, dt \qquad (2.28)$$
which can be revealed using integration by parts repeatedly as follows
$$f(x) = f(a) + \int_a^x f'(t)\,dt = f(a) + f'(a)(x-a) + \int_a^x f''(t)(x-t)\,dt$$
$$= f(a) + f'(a)(x-a) + f''(a)\frac{(x-a)^2}{2} + \int_a^x f'''(t)\frac{(x-t)^2}{2}\,dt$$
$$= f(a) + f'(a)(x-a) + f''(a)\frac{(x-a)^2}{2} + f'''(a)\frac{(x-a)^3}{3!} + \int_a^x f^{(4)}(t)\frac{(x-t)^3}{3!}\,dt$$
$$= \cdots = T_n(x) + \int_a^x \frac{f^{(n+1)}(t)}{n!}(x-t)^n \, dt$$
where each step integrates the last term by parts, using −(x−t)^{k+1}/(k+1) as an antiderivative of (x−t)^k with respect to t.

Remark There is also Taylor’s theorem for multivariate functions in high


dimensions (see Section 5.6.1 of Chapter 5).

Below are a few notes regarding the theorem.


• Considering n = 0 in Taylor's theorem, we get
$$f(x) = f(a) + f'(c)(x-a) \quad \text{or} \quad \frac{f(x) - f(a)}{x - a} = f'(c) \qquad (2.29)$$
which is the mean value theorem (MVT). The MVT has a clear geometric interpretation: existence of a tangent line parallel to the secant line through the two end points.
• Let h = x − a; then x = a + h and we can write Taylor's theorem in terms of h as
$$f(a+h) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!} h^k + \frac{f^{(n+1)}(c)}{(n+1)!} h^{n+1} \qquad (2.30)$$
where c is an unknown point between a and a + h.
Similarly, if we let h = a − x, then x = a − h (and x − a = −h) and we can write Taylor's theorem in terms of h as
$$f(a-h) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!} (-h)^k + \frac{f^{(n+1)}(c)}{(n+1)!} (-h)^{n+1} \qquad (2.31)$$

• The Taylor series $\lim_{n\to\infty} T_n(x)$ converges to f(x) if $\lim_{n\to\infty} R_n(x) = 0$.


Below are two examples in which Taylor’s theorem is applied to analyze errors in
mathematical approximations.

Example 2.4 (Approximation by a Taylor polynomial) f(x) = e^x is approximated by T_n(x), the n-th order Taylor polynomial of f about 0, for x in [−2, 2]. (1) Find an upper bound of the absolute error in terms of only the degree n. (2) Then determine the degree n such that the absolute error is at most 10^{-4}.
[Solution:] According to Taylor's theorem, the absolute error |f(x) − T_n(x)| is given by Taylor's remainder (the error term) as
$$|f(x) - T_n(x)| = \left|\frac{f^{(n+1)}(c)}{(n+1)!}(x-0)^{n+1}\right| = \frac{e^c}{(n+1)!}|x|^{n+1}$$
where the unknown number c is between 0 and x.


(1) For x in [−2, 2] and a = 0, we have
• c ∈ (−2, 2), i.e. c falls in the same interval [−2, 2] as x, as illustrated in Fig. 2.2, and thus e^c < e^2;
• |x| ≤ 2 and thus |x|^{n+1} ≤ 2^{n+1}.

FIGURE 2.2
Relative positions of a, x and c if a ∈ [α, β] and x ∈ [α, β]: c lies between a and x on the number line, hence also in [α, β].

So we obtain the upper bound of the absolute error in terms of only n as
$$|f(x) - T_n(x)| = \frac{e^c}{(n+1)!}|x|^{n+1} < \frac{e^2}{(n+1)!}2^{n+1}$$

(2) If we set the upper bound to be at most 10^{-4}:
$$\frac{e^2}{(n+1)!}2^{n+1} \le 10^{-4}$$
then we can guarantee the absolute error is at most 10^{-4} for x in [−2, 2] as
$$|f(x) - T_n(x)| \le \frac{e^2}{(n+1)!}2^{n+1} \le 10^{-4}$$
Since 2^{n+1}/(n+1)! is decreasing with n for n = 1, 2, 3, ... (see Exercise 2.14), the inequality can be satisfied if n is large enough, and in this case we require n ≥ 11. □
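The smallest admissible degree can also be found by brute force (a small Python sketch of the inequality above):

import math

# smallest n with e^2 * 2^(n+1) / (n+1)! <= 1e-4
n = 0
while math.exp(2) * 2**(n + 1) / math.factorial(n + 1) > 1e-4:
    n += 1
print(n)  # 11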
Example 2.5 (Finite difference approximation) We can approximate T = f'(a) (the slope of a tangent line) by the so-called forward finite difference (more on such approximations in Chapter 8)
$$A = \frac{f(a+h) - f(a)}{h}$$
which is the slope of a secant line, where h > 0 is a small spatial step (if h < 0, then it is called a backward finite difference). Assume f(x) ∈ C^2_{[a,a+h]}. Use Taylor's theorem to determine how the absolute error in the approximation depends on the spatial step h.
[Solution:] We use Taylor's theorem in the form in terms of h, i.e. Eq. (2.30), with n + 1 = 2 (note that f(x) ∈ C^2_{[a,a+h]} is twice differentiable):
$$f(a+h) = f(a) + f'(a)h + \frac{f''(c)}{2!}h^2$$
where c ∈ (a, a+h). We therefore obtain the dependence of the absolute error on h as
$$|T - A| = \left|\frac{f(a+h) - f(a)}{h} - f'(a)\right| = \frac{|f''(c)|}{2!}h$$
Note that if h is sufficiently small, then f''(c) ≈ f''(a) as c ∈ (a, a+h), and the absolute error is approximately |f''(a)|h/2 (a constant multiple of h). □
Previously we used the big O notation to describe the efficiency of a method. We can also use the big O notation to describe the accuracy of a method. If the absolute error e of the approximation in a method depends on a small parameter h, we say the absolute error is of order g(h), i.e.
$$e = O(g(h)) \qquad (2.32)$$
which means that the absolute error e is bounded from above as
$$e \le C g(h) \qquad (2.33)$$
for some constant C > 0 when h is small enough. In the last example above, we have the absolute error
$$|T - A| = \left|\frac{f(a+h) - f(a)}{h} - f'(a)\right| = \frac{|f''(c)|}{2!}h \le Ch \qquad (2.34)$$
where $C = \max_{a \le x \le a+h} |f''(x)|/2$. So we can say the absolute error in this example is of O(h).
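The O(h) behavior is easy to observe numerically: halving h roughly halves the error. Below is a Python sketch using f(x) = e^x at a = 0 (an illustrative choice), for which f'(0) = 1:

import math

a, exact = 0.0, 1.0                  # f'(0) = e^0 = 1
for h in [0.1, 0.05, 0.025, 0.0125]:
    approx = (math.exp(a + h) - math.exp(a)) / h  # forward difference
    print(h, abs(exact - approx))    # error shrinks roughly in proportion to h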

2.3 Alternating Series Theorem
An alternating series is a series whose terms alternate in signs. It has the form
$$b_0 - b_1 + b_2 - b_3 + \cdots \equiv \sum_{k=0}^{\infty} (-1)^k b_k \quad \text{or} \quad -b_0 + b_1 - b_2 + b_3 - \cdots \equiv \sum_{k=0}^{\infty} (-1)^{k+1} b_k$$
where b_k > 0, k = 0, 1, 2, .... Looking at the Maclaurin series listed in Section 2.2.2, we recognize that the Maclaurin series of e^x for x < 0, sin x and cos x for |x| < ∞, 1/(1−x) for −1 < x < 0, tan^{-1} x for |x| < 1, and ln(1+x) for 0 < x < 1 are convergent alternating series.

Theorem 2.3 The alternating series
$$b_0 - b_1 + b_2 - b_3 + \cdots \equiv \sum_{k=0}^{\infty} (-1)^k b_k \qquad (2.35)$$
or
$$-b_0 + b_1 - b_2 + b_3 - \cdots \equiv \sum_{k=0}^{\infty} (-1)^{k+1} b_k \qquad (2.36)$$
where b_k > 0, k = 0, 1, 2, ..., is convergent if
$$b_0 > b_1 > b_2 > b_3 > \cdots > 0 \qquad (2.37)$$
and
$$\lim_{k\to\infty} b_k = 0 \qquad (2.38)$$
Let the sum of the convergent alternating series be S, i.e.
$$S = \sum_{k=0}^{\infty} (-1)^k b_k \quad \text{or} \quad S = \sum_{k=0}^{\infty} (-1)^{k+1} b_k \qquad (2.39)$$
Let the partial sum S_n (the truncated series) be
$$S_n = \sum_{k=0}^{n} (-1)^k b_k \quad \text{or} \quad S_n = \sum_{k=0}^{n} (-1)^{k+1} b_k \qquad (2.40)$$
Then
$$|S - S_n| \le b_{n+1} \qquad (2.41)$$
The Maclaurin series mentioned above satisfy the conditions of the alternating series theorem for x in the specified intervals and are therefore convergent (as we already know from Taylor's theorem).
The alternating series theorem provides an upper bound b_{n+1} for the error in approximating the sum S by the partial sum S_n of a convergent alternating series that satisfies the conditions in the theorem.

Example 2.6 (Approximation by an alternating series) If we use the partial sum of the Maclaurin series of cos x to approximate cos 1, at which term (included) should we stop in the partial sum to make the absolute error less than 10^{-6}?
[Solution:] The Maclaurin series of cos x is
$$1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots \equiv \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k}}{(2k)!}, \quad |x| < \infty$$
which converges to the sum S(x) = cos x. Suppose we stop the partial sum S_n(x) right at the term (−1)^n x^{2n}/(2n)! (included). (Note that S_n(x) is the Taylor polynomial of degree 2n for f(x) = cos x about a = 0.) By the alternating series theorem, we have
$$|S(x) - S_n(x)| = |\cos x - S_n(x)| \le \frac{|x|^{2(n+1)}}{(2(n+1))!}$$
So the absolute error in approximating cos 1 by the partial sum S_n(1) is bounded as
$$|\cos 1 - S_n(1)| \le \frac{1}{(2n+2)!}$$
Letting 1/(2n+2)! < 10^{-6}, we obtain n ≥ 4. Note that S_4(x) and S_4(1) are
$$S_4(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \frac{x^8}{8!}, \quad S_4(1) = 1 - \frac{1}{2!} + \frac{1}{4!} - \frac{1}{6!} + \frac{1}{8!}$$
□
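A quick numerical confirmation (a Python sketch): the actual error of S_4(1) is indeed below the bound 1/10! from the theorem.

import math

def S(n, x=1.0):
    # partial sum of the Maclaurin series of cos x through the (-1)^n x^(2n)/(2n)! term
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(n + 1))

err = abs(math.cos(1.0) - S(4))
print(err, err <= 1.0 / math.factorial(10))  # ~2.73e-07, True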


2.4 Exercises
Exercise 2.1 Write the polynomial p_4(x) = 1 + 5x + 4x^2 + 3x^3 + 2x^4 in the form of nested multiplication. Then evaluate p_4(2) using the nested form (i.e. Horner's method). How many multiplications and additions are used in your evaluation?

Exercise 2.2 How can you evaluate p(x) = 2 + 3x^3 − 4x^6 + 8x^9 − 11x^{12} at a given value of x using only 6 multiplications?

Exercise 2.3 Count how many multiplications (in terms of n) are used in the following pseudocode. Write down the number of multiplications in the big O notation. Hint: $\sum_{k=1}^{n} k^2 = n(n+1)(2n+1)/6$.
for k from 1 to n − 1 do
    for i from k + 1 to n do
        for j from k + 1 to n do
            A_{ij} ← A_{ij} − A_{kj} A_{ik}/A_{kk}
        end for
    end for
end for

Exercise 2.4 Find the Taylor polynomial T_3(x) of degree 3 for f(x) = 1 + 4x + 3x^2 + 2x^3 about a = 1. Show that T_3(x) = f(x) in this case by simplifying T_3(x) or by applying Taylor's theorem.
Exercise 2.5 Find the Taylor polynomial T_1(x) of degree 1 for f(x) = ∛(x^5) about a = 0. Sketch the graphs of T_1(x) and f(x) in the same plot. Can you find the Taylor polynomial T_2(x) of degree 2 for f(x) about a = 0?

Exercise 2.6 Write down the Taylor polynomial T_4(x) of degree 4 for f(x) about a. Verify that T_4^{(k)}(a) = f^{(k)}(a) for k = 0, 1, 2, 3, 4. Note that we define the 0-th derivative of a function as the function itself.

Exercise 2.7 Let T_n(x) be the Taylor polynomial of degree n for the polynomial p_n(x) = c_0 + c_1 x + c_2 x^2 + ... + c_n x^n about a. Prove that T_n(x) = p_n(x).

Exercise 2.8 Derive the Maclaurin series for f(x) = ln(1 − x) in two ways: (1) finding f^{(k)}(0)/k!, k = 0, 1, 2, 3, ...; (2) using a substitution in an existing Maclaurin series. State the range of x on which the series converges to f(x) = ln(1 − x).

Exercise 2.9 Let θ = (1/55···5)° be a small angle in degrees, where the denominator has n fives. (1) Compute sin θ for n = 3, 5, 7, 10 and write down the results in scientific notation. Do you see something unusual? (2) Why does sin θ have more and more decimal digits in the same order as those of π as n increases? (Hint: What is the truncated degree-1 Maclaurin series of sin x? Is x in the series in degrees or radians?)

Exercise 2.10 Let T_n(x) be the n-th order Taylor polynomial for f(x) = e^x about
a = 0. (1) Find the expression for T_n(x). (2) If T_9(x) is used to approximate f(x) = e^x
for −2 ≤ x ≤ 1, find an upper bound of the error. (3) If T_n(x) is used to approximate
f(x) = e^x for −1 ≤ x ≤ 1 with the absolute error at most 10^{−3}, how large should n
be?

Exercise 2.11 Let f(x) = sin x, a = π. (1) Derive the degree-3 Taylor polynomial
T_3(x) for f(x) at a. (2) If f(x) is approximated by T_3(x) for π − 1 ≤ x ≤ π + 1, find
an upper bound for the error of this approximation using Taylor's theorem.

Exercise 2.12 Derive the degree-2 Taylor polynomial T_2(x) for f(x) = ln x at a = 1. If
f(x) is approximated by T_2(x) for 0.5 ≤ x ≤ 1.5, find an upper bound for the absolute
error of this approximation using Taylor's remainder.

Exercise 2.13 Prove Taylor’s theorem. (Hint: See remarks for Theorem 2.2.)
Exercise 2.14 Show that 2^{n+1}/(n + 1)! is decreasing in n for n = 1, 2, 3, . . . using proof
by induction.
Exercise 2.15 Use the alternating series theorem to determine the value of n in the
Taylor polynomial

T_{2n}(x) = ∑_{k=0}^n (−1)^k x^{2k}/(2k)!

for cos x about 0 such that cos 1 is approximated by T_{2n}(1) with the absolute error less
than 10^{−6}.
Exercise 2.16 Approximate T = f(a) by A = f(a + h), where h is small, and f(x)
has continuous first derivative in an interval containing a and a + h. (1) Use Taylor's
theorem to find an expression for the absolute error |T − A| in terms of h. (2) Write the
absolute error in big-O notation. (3) What is the limit of the error as h → 0?
Exercise 2.17 If f(a) is approximated by

f(a) ≈ (f(a + h) + f(a − h))/2

where h is a small value, and f(x) has continuous second derivative everywhere, (1)
find the absolute error of the approximation in terms of h using Taylor's theorem
f = T_n + R_n with n = 1; (2) write the absolute error in big-O notation.
Exercise 2.18 Suppose f(x) ∈ C^2 near a. Approximate f′(a) as

f′(a) ≈ (f(a) − f(a − h))/h

where h is a small positive value. Use Taylor's theorem to show that the error of the
approximation is O(h).

2.5 Programming Problems


Problem 2.1 Write the MATLAB m-function

function [p] = nest(c,x,a)

to implement Horner's method (not the naive method) for evaluating the degree-n
polynomial centered at the number a:

p_n(x) = c_0 + c_1(x − a) + c_2(x − a)^2 + ··· + c_n(x − a)^n.

The n + 1 coefficients c_0, c_1, ..., c_n are entries of the input vector c. The values of the
independent variable x are passed as a vector.
Test your m-function by evaluating p_3(x) = 1 + 3(x + 1) − 2(x + 1)^3 at x = 0.
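One possible shape of such an m-function is sketched below (our illustration, not the
book's reference solution):

function [p] = nest(c,x,a)
% Horner's method for p(x) = c0 + c1*(x-a) + ... + cn*(x-a)^n.
% c: coefficients c0, ..., cn; x: vector of evaluation points; a: center.
n = length(c) - 1;
p = c(n+1)*ones(size(x));      % start from the innermost coefficient cn
for k = n:-1:1
    p = p.*(x - a) + c(k);     % one multiplication and one addition per step
end
end

For the suggested test, nest([1 3 0 -2], 0, -1) returns 2, which matches
p_3(0) = 1 + 3(1) − 2(1)^3 = 2.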

Problem 2.2 Let T_n(x) be the Taylor polynomial of degree n for f(x) = ln(1 + x) at
a = 0. Write a MATLAB m-script to use your function nest in the previous problem
to evaluate T_4(x) and T_9(x) at

x = −0.5, −0.49, −0.48, ..., 0.48, 0.49, 0.5.

(1) Plot f(x), T_4(x) and T_9(x) in one figure using the MATLAB command plot.
(2) Plot the Taylor remainders (the error terms) |f(x) − T_4(x)| and |f(x) − T_9(x)|
in another figure using the MATLAB command semilogy (use help semilogy to
learn what semilogy does and how to use it).
(3) Derive an upper bound of |ln(1 + x) − T_9(x)| for −0.5 ≤ x ≤ 0.5 using Taylor's
theorem. Is the absolute error |ln(1 + x) − T_9(x)| in your second plot less than your
derived upper bound?
3 Roundoff Errors and Error Propagation

In this chapter, we explain why roundoff errors are inevitable in scientific computing
and how roundoff errors are propagated in arithmetic and algorithms. We use examples
to illustrate and analyze the stability of algorithms.

Let's start by running the following MATLAB demo code

a = 0;
n = 0;
while a~=1 && n<20
    a = a + 0.1;
    n = n + 1;
end
fprintf('a = %18.16f after addition of 0.1 for %d times\n', a, n)
The output of the code is
a = 2.0000000000000004 after addition of 0.1 for 20 times
We might expect the while loop to end once a reaches 1 (violating the condition
a~=1) after n = 10 additions of 0.1, but it stops only when n reaches 20 (violating
the condition n<20).
Why is 0.1 added 10 times not equal to 1 in this demo? The reason is that the
innocent-looking real number 0.1 cannot be exactly represented by computers. As we
will see shortly, a computer can exactly represent only a finite number of real numbers,
which are called machine numbers. Most likely, a real number (for example 0.1) cannot
be represented exactly by a computer and is instead represented inexactly as a nearby
machine number, introducing the so-called roundoff error.

3.1 Numbers
We have seen the following different types of numbers.
• natural numbers: N = {1, 2, 3, . . .}
• integers: Z = {. . . , −3, −2, −1, 0, 1, 2, 3, . . .}
• rational numbers: Q = {p/q | p ∈ Z, q ∈ Z, q ≠ 0}


• real numbers: R = the set of all the numbers on the real number line, including
rational numbers (such as 0.1 and 1/3) and irrational numbers (such as √2, π and e).
• complex numbers: C = {a + bi | a ∈ R, b ∈ R, i = √−1}
The above sets of numbers expand from top to bottom: N ⊂ Z ⊂ Q ⊂ R ⊂ C.
A computer stores a number in a physical unit, such as a register, RAM, or a disk
drive, which may be regarded as an ordered list of switches. Each switch has 2
statuses: on and off. We assign the on and off statuses the values 1 and 0, respectively,
and call such a switch a bit. So a computer represents a number as a pattern of ordered
bits (for example 01101010). The mapping between a bit pattern and a number is
determined by the IEEE (Institute of Electrical and Electronics Engineers, pronounced
as I-triple-E) standards. Below we describe how a computer represents integers and
real numbers.

3.1.1 Integers
Signed integers are commonly used in computations. An 8-bit (1-byte) signed integer
is stored as the bit pattern

b_7 b_6 b_5 b_4 b_3 b_2 b_1 b_0

which corresponds to the value

(−b_7)×2^7 + b_6×2^6 + b_5×2^5 + b_4×2^4 + b_3×2^3 + b_2×2^2 + b_1×2^1 + b_0×2^0   (3.1)

With 8 bits, only 2^8 = 256 signed integers −128, −127, . . . , 126, 127 can be represented.
We also use unsigned integers in indexing (for example, row and column indices
of a matrix) and in representing real numbers (as we will see shortly). An 8-bit unsigned
integer is stored as the bit pattern

b_7 b_6 b_5 b_4 b_3 b_2 b_1 b_0

which corresponds to the value

b_7×2^7 + b_6×2^6 + b_5×2^5 + b_4×2^4 + b_3×2^3 + b_2×2^2 + b_1×2^1 + b_0×2^0   (3.2)

With 8 bits, only 2^8 = 256 unsigned integers 0, 1, . . . , 254, 255 can be represented.
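As a small illustration (our own sketch, not from the text), the values in Eqs. (3.1)
and (3.2) can be computed in MATLAB from a vector of bits:

b = [0 1 1 0 1 0 1 0];                        % example pattern b7 b6 ... b0
w = 2.^(7:-1:0);                              % place values 2^7, ..., 2^0
unsignedVal = sum(b.*w);                      % Eq. (3.2): 106 for this pattern
signedVal = -b(1)*2^7 + sum(b(2:8).*w(2:8));  % Eq. (3.1): also 106 since b7 = 0
fprintf('unsigned = %d, signed = %d\n', unsignedVal, signedVal)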

3.2 Floating-Point Numbers


Real numbers are represented by a computer as floating-point numbers, including
the 64-bit double precision (DP) type and the 32-bit single precision (SP) type (by
default, MATLAB uses DP floating-point numbers for real-number arithmetic). What are
floating-point numbers? Let's look at scientific notation first to answer this question.

3.2.1 Scientific Notation and Rounding


We are familiar with decimal numbers. The decimal number system is a base-10
system in which the place values are integer powers of 10 (for example, 234.56 =
2×10^2 + 3×10^1 + 4×10^0 + 5×10^{−1} + 6×10^{−2}).

Definition 3.1 The scientific notation of a decimal number T is

T = σ · T̄ · 10^e   (3.3)

where
• σ = +1/−1: the sign of T
• 1 ≤ T̄ < 10: the significand/mantissa of T
• e: the integer exponent

The mantissa T̄ has the form

T̄ = d_1.d_2d_3···   (3.4)

where d_i ∈ {0, 1, 2, . . . , 9} (i = 1, 2, 3, . . .) except d_1 ≠ 0. The leftmost nonzero digit d_1
in the mantissa corresponds to the largest place value and is called the first significant
digit (the most significant digit), the second digit d_2 from the left in the mantissa is called
the second significant digit, and so on.
A floating-point representation A of the number T in the decimal system has the
same form as scientific notation:

A = σ · Ā · 10^E   (3.5)

but the number of digits in the mantissa is limited. If the mantissa Ā of a floating-point
representation allows only t decimal digits, then it can be written as

Ā = d̃_1.d̃_2d̃_3 ··· d̃_{t−1}d̃_t   (3.6)

It is called a floating-point representation because the decimal point is always floating
between d̃_1 and d̃_2 with the help of the power 10^E. If the mantissa T̄ of the number T
in scientific notation has more than t digits:

T̄ = d_1.d_2d_3 ··· d_t d_{t+1} ···   (3.7)

then its floating-point representation A can be obtained by rounding. The rule for
rounding is "round to nearest, ties to even", comparing the digit d_{t+1} (the first digit
to be discarded) with 5 (half of the base value 10) as follows.
• "Round to nearest": If d_{t+1} < 5, round T̄ down to Ā by simply chopping off
all the less significant digits to the right of d_t. If d_{t+1} > 5, round T̄ up to Ā by
discarding all the digits to the right of d_t and adding 1 to the digit d_t, which
may lead to carrying (or even the adjustment of the exponent e to E = e + 1). If
d_{t+1} = 5 but is not the rightmost nonzero digit (i.e. there are other nonzero digits
after it), round T̄ up.
• "Ties to even": If d_{t+1} = 5 and is the rightmost nonzero digit, then when d_t is
even, round T̄ down; and when d_t is odd, round T̄ up. In either case, the digit d̃_t
is even.

Example 3.1 (Rounding) The fixed-point numbers

3.1416, −124.63, −43.652, 0.002375, −0.2385

can be written in scientific notation as

+3.1416 × 10^0, −1.2463 × 10^2, −4.3652 × 10^1, 2.375 × 10^{−3}, −2.385 × 10^{−1}

respectively. If the mantissa of a floating-point representation (in base 10) can have
only 3 significant digits, then the floating-point representations of these numbers are
obtained by rounding as

+3.14 × 10^0, −1.25 × 10^2, −4.37 × 10^1, 2.38 × 10^{−3}, −2.38 × 10^{−1}

respectively. □

So the floating-point representation A can be an approximation of the decimal number
T. We say A is a t-digit approximation of T, meaning A has t correct/significant
digits. The error induced by rounding is called roundoff error. The rounding rule
implies that the absolute roundoff error is bounded by 5 times the place value at the
digit d_{t+1}:

|T − A| ≤ 5 · 10^{−t} · 10^e   (3.8)

Since |T| ≥ 1 · 10^e, the relative roundoff error is bounded as

|T − A|/|T| ≤ (5 · 10^{−t} · 10^e)/10^e = 5 · 10^{−t} = 10^{−(t−1)}/2   (3.9)

If the relative error in an approximation A of a number T does not exceed 5 · 10^{−t}, we
say A has t correct/significant decimal digits.

Example 3.2 (Significant digits) The floating-point representation with a 3-digit
mantissa for the number −1.2463 × 10^2 is −1.25 × 10^2. The absolute roundoff error
in the representation is

|(−1.2463 × 10^2) − (−1.25 × 10^2)| = 0.0037 × 10^2 < 0.005 × 10^2

and the relative error is

|(−1.2463 × 10^2) − (−1.25 × 10^2)| / |−1.2463 × 10^2| < (0.005 × 10^2)/10^2 = 5 × 10^{−3}

The representation has 3 significant decimal digits. □

The above description can be extended to a floating-point representation fl_β(x) of
a number x in a base-β system. In particular, the relative roundoff error is bounded as

|x − fl_β(x)|/|x| ≤ β^{−(t−1)}/2   (3.10)

where t is the number of digits in the mantissa of fl_β(x).

Definition 3.2 The rounding unit η of a floating-point number system characterized
by the base β and the t-digit mantissa is defined as

η = β^{−(t−1)}/2   (3.11)

which is the sharp upper bound for the relative roundoff error in the floating-point
representation of a number.

3.2.2 DP Floating-Point Representation


Since a computer represents a real number as a bit (0 or 1) pattern, it naturally
uses the binary number system to represent the number as a floating-point number
in base 2. If 64 bits are used in the floating-point representation of the number x,
the representation is denoted as fl_DP(x), which is called a double precision (DP)
floating-point representation or a DP machine number. Soon we will find out that
generally

x ≠ fl_DP(x)

The bit pattern of the 64 bits for storing x as fl_DP(x) can be given as

b_64 | b_63 b_62 ··· b_53 | b_52 b_51 ··· b_1   (3.12)
  s           b                    f

which is divided into three fields:

• the sign bit s: a 1-bit unsigned integer b_64, so s is either 0 or 1.
• the biased exponent b: an 11-bit unsigned integer

b = b_63 × 2^10 + b_62 × 2^9 + ··· + b_53 × 2^0,   b ∈ [0, 2047]   (3.13)

• the fraction field f: a 52-bit unsigned integer

f = b_52 × 2^51 + b_51 × 2^50 + ··· + b_1 × 2^0,   f ∈ [0, 2^52 − 1]   (3.14)


The decimal value fl_DP(x) corresponding to this bit pattern, as determined by the
IEEE standard, is

fl_DP(x) =
  (−1)^s (1 + f/2^52) 2^{b−1023},   if 1 ≤ b ≤ 2046
  (−1)^s (f/2^52) 2^{−1022},        if b = 0, f ≠ 0
  ±0,                               if b = 0, f = 0
  ±∞,                               if b = 2047, f = 0
  NaN,                              if b = 2047, f ≠ 0
                                                      (3.15)

Remark In Eq. (3.15), e := b − 1023 ∈ [−1022, 1023] is called the unbiased
exponent; and f/2^52 = (0.b_52 b_51 ··· b_1)_2 (the reason f is called the fraction
field), where the subscript 2 is used to denote a binary number (so
1 + f/2^52 = (1.b_52 b_51 ··· b_1)_2 is a 53-digit mantissa in base 2).

Example 3.3 Find the decimal value of the DP floating-point representation of the
real number x, the DP number fl_DP(x), whose bit pattern is

1 10000000101 111000010···0

[Solution:] The unsigned integers corresponding to the three fields in the pattern are
• s = 1
• b = 2^10 + 2^2 + 2^0 = 1029
• f = 2^51 + 2^50 + 2^49 + 2^44
So according to Eq. (3.15), we have

fl_DP(x) = (−1)^1 (1 + (2^51 + 2^50 + 2^49 + 2^44)/2^52) 2^{1029−1023} = −120.25
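We can cross-check this value in MATLAB by evaluating Eq. (3.15) directly from the
three fields (a quick check of our own, not part of the example):

s = 1;                             % sign bit
b = 2^10 + 2^2 + 2^0;              % biased exponent, 1029
f = 2^51 + 2^50 + 2^49 + 2^44;     % fraction field
x = (-1)^s * (1 + f/2^52) * 2^(b - 1023)   % displays -120.25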

Example 3.4 The real number 1 is a DP number with the bit pattern

0 01111111111 00 · · · 0

The next larger DP number adjacent to 1 has the bit pattern

0 01111111111 00 · · · 01

and its value is 1 + 2^{−52}. □

We call the distance from 1 to its next larger DP number the DP machine epsilon
εDP . As we will see shortly, this value characterizes the precision (the level of relative
roundoff error) of the DP floating-point representation.
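In MATLAB, the built-in constant eps returns ε_DP (a side demonstration of our own,
not from the text):

eps            % 2^(-52), about 2.2204e-16
1 + eps > 1    % true: 1 + eps is the next DP number after 1
1 + eps/4 > 1  % false: 1 + eps/4 is rounded back down to 1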
There are 2^64 bit patterns with 64 bits, so the DP floating-point representation on a
computer provides 2^64 DP numbers. They are a finite number of discrete points on the

real number line, but the real number line is continuous and contains infinitely many
numbers/points. If a real number x on the real number line is not one of those discrete
DP numbers (i.e. there is no bit pattern whose corresponding value is exactly x), how
does the computer represent x and do arithmetic involving x? The computer represents
x by flDP (x) which is the DP number nearest to x as illustrated in Fig. 3.1, and use
flDP (x) to do arithmetic. In this case, the true value x is rounded to the approximation
flDP (x), introducing roundoff error in the approximation flDP (x). By Eq. (3.11), the
relative roundoff error satisfies

x − flDP (x) 2−(53−1) εDP


≤η = = = 2−53 ≈ 1.11 × 10−16 < 5 × 10−16 (3.16)
x 2 2

which implies that flDP (x), as a representation of x, has at least 16 correct/significant


decimal digits.

FIGURE 3.1
A computer rounds a non-DP number x to fl_DP(x), the DP number nearest to x.

Example 3.5 The real number 1 is a DP number. The next larger DP number adjacent
to 1 is 1 + 2^{−52}. If x = 1 + 2^{−54}, what is the value of fl_DP(x)?
[Solution:] x = 1 + 2^{−54} lies between the consecutive DP numbers 1 and 1 + 2^{−52}. It
is closer to 1 than to 1 + 2^{−52}. So fl_DP(x) = 1, i.e. x = 1 + 2^{−54} is rounded to 1. The
relative roundoff error is

|x − fl_DP(x)|/|x| = 2^{−54}/(1 + 2^{−54}) < 2^{−54} < ε_DP/2 = 2^{−53}
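MATLAB confirms this rounding (a one-line check of our own):

x = 1 + 2^-54;
x == 1         % logical 1 (true): x has been rounded to 1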

Now we can explain why 0.1 used in the beginning demo is not a DP number. If
we convert 0.1 to a binary number, we get

0.1 = (0.0001 1001 1001 ···)_2 = [1 + (0.1001 1001 ···)_2] · 2^{−4}

where the pattern 1001 repeats forever. So 0.1 in the floating-point binary form
requires infinitely many bits in the fraction field and cannot be a DP number with
only 52 fraction bits. In the demo, MATLAB rounds 0.1 to its nearest DP number
fl_DP(0.1), which is slightly different from 0.1, and repeatedly adds fl_DP(0.1) (instead
of 0.1), so the result is not exactly equal to 1, which is a DP machine number.
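Printing extra digits makes this rounding visible (our demo, not from the text):

fprintf('%.20f\n', 0.1)   % prints 0.10000000000000000555, i.e. fl_DP(0.1), not 0.1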

From Eq. (3.15), we can figure out the largest and smallest DP numbers, which give
the DP numbers a range. If a number resulting from a computation has a magnitude
outside this range, we say overflow occurs (or the computation overflows).
Overflow may cause a fatal error or an exception in the execution of a program.
Sometimes we may avoid overflow by properly scaling the numbers.
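In MATLAB, the limits of the DP range are available as built-in constants, and
exceeding them produces Inf (a side demonstration of our own):

realmax       % largest DP number, about 1.7977e+308
realmin       % smallest positive normalized DP number, about 2.2251e-308
2*realmax     % overflows to Inf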

3.3 Error Propagation


We now know roundoff error is inevitable in computation on a computer. So we need
to know how roundoff error affects computational results. Below we first analyze how
errors propagate in arithmetic operations.
Let x̂ and ŷ be approximations of the nonzero true values x and y, respectively (for
example x̂ = fl_DP(x) and ŷ = fl_DP(y)). We may write

x̂ = x(1 + ε_x),   ŷ = y(1 + ε_y)   (3.17)
Note that |ε_x| and |ε_y| are the relative errors of the approximations x̂ and ŷ, respectively.
Let's first consider error propagation in multiplication. The true product x·y of x
and y is approximated by the approximate product x̂·ŷ. We have

x̂·ŷ = x(1 + ε_x) · y(1 + ε_y) = x·y(1 + ε_x + ε_y + ε_x ε_y) ≡ x·y(1 + ε)

where the relative error of the approximate product is |ε| = |ε_x + ε_y + ε_x ε_y|. So the
errors in the multiplicand and the multiplier are propagated to the product. By the
triangle inequality,

||a| − |b|| ≤ |a ± b| ≤ |a| + |b|   (3.18)

we have

|ε| ≤ |ε_x| + |ε_y| + |ε_x ε_y|   (3.19)

So if the relative errors |ε_x| and |ε_y| of the multiplicand and the multiplier are small, the
relative error |ε| of the approximate product is also small. Multiplication is therefore
safe in terms of error propagation.

3.3.1 Catastrophic Cancellation


Now let's consider error propagation in subtraction. The true difference x − y (assumed
nonzero) of x and y is approximated by the approximate difference x̂ − ŷ. We
have

x̂ − ŷ = x(1 + ε_x) − y(1 + ε_y) = (x − y)(1 + (x/(x − y))ε_x − (y/(x − y))ε_y) ≡ (x − y)(1 + ε)

where the relative error of the approximate difference is

|ε| = |(x/(x − y))ε_x − (y/(x − y))ε_y| ≤ |x/(x − y)||ε_x| + |y/(x − y)||ε_y|   (3.20)

The upper bound above can be sharp. So if x and y are very close, i.e. x − y ≈ 0, then
the relative error can be huge even if ε_x and ε_y are very small, because of the division by
x − y (≈ 0). We call the phenomenon that subtraction of relatively accurate numbers
produces a relatively inaccurate result catastrophic cancellation. Subtraction of close
approximate numbers cancels many significant digits and is dangerous.

Example 3.6 (Catastrophic cancellation) x̂ = 1.001 is an approximation of x =
1.000. The relative error of the approximation is 0.1%. ŷ = 0.998 is an approximation
of y = 0.999. The relative error of this approximation is approximately 0.1%. The
difference x̂ − ŷ = 0.003 approximates (very badly) the true difference x − y = 0.001.
The relative error of the approximate result is 200%. □

In algorithm design, we try our best to avoid subtracting close approximate numbers
so as to prevent catastrophic cancellation (loss of significant digits). Sometimes we
may reformulate the problem to do so.

Example 3.7 (Reformulation) To evaluate f(x) = √(x^2 + 1) − 1 for very small |x|,
we can reformulate f(x) as f(x) = x^2/(√(x^2 + 1) + 1). The former expression for f(x)
can lead to catastrophic cancellation, while the latter does not. □
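The difference is easy to see numerically (our sketch, using the test value x = 10^{−9}):

x = 1e-9;
f1 = sqrt(x^2 + 1) - 1;           % subtracts nearly equal numbers: returns 0
f2 = x^2/(sqrt(x^2 + 1) + 1);     % reformulated: returns 5.0000e-19
fprintf('f1 = %.4e, f2 = %.4e\n', f1, f2)

The true value is about x^2/2 = 5 × 10^{−19}; the first form loses all significant digits to
cancellation, while the second retains full accuracy.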

3.3.2 Algorithm Stability


A numerical algorithm is a step-by-step procedure for finding an approximate
numerical solution of a mathematical problem. An algorithm is stable if small errors
introduced in the algorithm stay small. If the small errors get amplified out of control,
the algorithm is unstable. Below we use error propagation analysis to check algorithm
stability/instability in an example.
In this example, we consider the sequence of values {V_j}_{j=0}^∞ defined by the definite
integrals

V_j = ∫_0^1 e^{x−1} x^j dx,   j = 0, 1, 2, . . .   (3.21)

The first term V_0 is given by

V_0 = ∫_0^1 e^{x−1} dx = 1 − 1/e   (3.22)

The sequence satisfies the inequalities

0 < V_{j+1} < V_j < 1/j,   j = 1, 2, 3, . . .   (3.23)

and the forward recurrence relation

V_j = 1 − jV_{j−1},   j = 1, 2, 3, . . .   (3.24)
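A short experiment (our own sketch) shows how roundoff in V_0 is amplified by this
forward recurrence:

V = 1 - exp(-1);        % V_0 = 1 - 1/e, stored with a small roundoff error
for j = 1:20
    V = 1 - j*V;        % forward recurrence (3.24)
end
fprintf('V_20 from the forward recurrence: %g\n', V)

The computed V_20 badly violates the bounds (3.23) (it is not even in (0, 1/20)),
because the error in V_0 is multiplied by j at every step and grows like 20!; used
forward, the recurrence is unstable.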