CHAPTER 20. Numerical Differentiation
This notebook contains an excerpt from the Python Programming and Numerical Methods - A Guide for Engineers and Scientists; the content is also available at Berkeley Python Numerical Methods. The copyright of the book belongs to Elsevier. We also have this interactive book online for a better learning experience. The code is released under the MIT license. If you find this content useful, please consider supporting the work on Elsevier or Amazon!
CHAPTER OUTLINE
20.1 Numerical Differentiation Problem Statement
20.2 Finite Difference Approximating Derivatives
20.3 Approximating Higher Order Derivatives
20.4 Numerical Differentiation with Noise
20.5 Summary and Problems
Motivation
Many engineering and science systems change over time, space, and many other dimensions of interest. In mathematics, function derivatives are
often used to model these changes. However, in practice the function may not be explicitly known, or the function may be implicitly represented by a
set of data points. In these cases and others, it may be desirable to compute derivatives numerically rather than analytically.
The focus of this chapter is numerical differentiation. By the end of this chapter, you should be able to derive some basic numerical differentiation schemes and analyze their accuracy.
20.1 Numerical Differentiation Problem Statement
There are several functions in Python that can be used to generate numerical grids. For numerical grids in one dimension, it is sufficient to use the
linspace function, which you have already used for creating regularly spaced arrays.
In Python, a function f(x) can be represented over an interval by computing its value on a grid. Although the function itself may be continuous, this
discrete or discretized representation is useful for numerical calculations and corresponds to data sets that may be acquired in engineering and
science practice. Specifically, the function value may only be known at discrete points. For example, a temperature sensor may deliver temperature
versus time pairs at regular time intervals. Although temperature is a smooth and continuous function of time, the sensor only provides values at
discrete time intervals, and in this particular case, the underlying function would not even be known.
Whether f is an analytic function or a discrete representation of one, we would like to derive methods of approximating the derivative of f over a
numerical grid and determine their accuracy.
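To make this concrete, here is a minimal sketch that builds a grid with linspace and evaluates a function on it; the function cos(x), the interval, and the number of points are illustrative choices, not prescribed by the text.

import numpy as np

# a regular grid of 101 points on [0, 2*pi] (illustrative choices)
x = np.linspace(0, 2*np.pi, 101)
# discrete representation of the continuous function f(x) = cos(x)
y = np.cos(x)
# for a linspace grid, the spacing is constant
h = x[1] - x[0]
print(h)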
20.2 Finite Difference Approximating Derivatives
The derivative $f'(a)$ of a function $f(x)$ at the point $x = a$ is defined as

$$f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x - a}.$$
The derivative at $x = a$ is the slope of the function at this point. In finite difference approximations of this slope, we use values of the function in a neighborhood of the point $x = a$. There are various finite difference formulas used in different applications; three of these, in which the derivative is calculated using the values at two points, are presented below.

The forward difference estimates the slope of the function at $x_j$ using the line that connects $(x_j, f(x_j))$ and $(x_{j+1}, f(x_{j+1}))$:

$$f'(x_j) = \frac{f(x_{j+1}) - f(x_j)}{x_{j+1} - x_j}.$$
The backward difference estimates the slope of the function at $x_j$ using the line that connects $(x_{j-1}, f(x_{j-1}))$ and $(x_j, f(x_j))$:

$$f'(x_j) = \frac{f(x_j) - f(x_{j-1})}{x_j - x_{j-1}}.$$
The central difference estimates the slope of the function at $x_j$ using the line that connects $(x_{j-1}, f(x_{j-1}))$ and $(x_{j+1}, f(x_{j+1}))$:

$$f'(x_j) = \frac{f(x_{j+1}) - f(x_{j-1})}{x_{j+1} - x_{j-1}}.$$
The following figure illustrates the three different types of formulas to estimate the slope.
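Before deriving their accuracy, it may help to see the three formulas as code. The following is a minimal sketch; the function names and the vectorized NumPy style are our choices, not the book's.

import numpy as np

def forward_diff(x, y):
    # slope of the line joining (x_j, y_j) and (x_{j+1}, y_{j+1}); valid at all but the last point
    return (y[1:] - y[:-1])/(x[1:] - x[:-1]), x[:-1]

def backward_diff(x, y):
    # slope of the line joining (x_{j-1}, y_{j-1}) and (x_j, y_j); valid at all but the first point
    return (y[1:] - y[:-1])/(x[1:] - x[:-1]), x[1:]

def central_diff(x, y):
    # slope of the line joining (x_{j-1}, y_{j-1}) and (x_{j+1}, y_{j+1}); valid at interior points
    return (y[2:] - y[:-2])/(x[2:] - x[:-2]), x[1:-1]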
If $x$ is on a grid of points with spacing $h$, we can compute the Taylor series around $a = x_j$ at $x = x_{j+1}$ to get

$$f(x_{j+1}) = \frac{f(x_j)(x_{j+1} - x_j)^0}{0!} + \frac{f'(x_j)(x_{j+1} - x_j)^1}{1!} + \frac{f''(x_j)(x_{j+1} - x_j)^2}{2!} + \frac{f'''(x_j)(x_{j+1} - x_j)^3}{3!} + \cdots.$$

Substituting $h = x_{j+1} - x_j$ and solving for $f'(x_j)$ gives

$$f'(x_j) = \frac{f(x_{j+1}) - f(x_j)}{h} + \left(-\frac{f''(x_j)h}{2!} - \frac{f'''(x_j)h^2}{3!} - \cdots\right).$$
The terms in parentheses, $-\frac{f''(x_j)h}{2!} - \frac{f'''(x_j)h^2}{3!} - \cdots$, are called higher order terms of $h$. The higher order terms can be rewritten as

$$-\frac{f''(x_j)h}{2!} - \frac{f'''(x_j)h^2}{3!} - \cdots = h(\alpha + \epsilon(h)),$$

where $\alpha$ is some constant, and $\epsilon(h)$ is a function of $h$ that goes to zero as $h$ goes to 0. You can verify with some algebra that this is true. We use the abbreviation $O(h)$ for $h(\alpha + \epsilon(h))$, and in general, we use the abbreviation $O(h^p)$ to denote $h^p(\alpha + \epsilon(h))$.
Substituting $O(h)$ for the parenthesized terms gives

$$f'(x_j) = \frac{f(x_{j+1}) - f(x_j)}{h} + O(h).$$

Ignoring the $O(h)$ term yields the forward difference formula

$$f'(x_j) \approx \frac{f(x_{j+1}) - f(x_j)}{h}.$$
Here, $O(h)$ describes the accuracy of the forward difference formula for approximating derivatives. For an approximation that is $O(h^p)$, we say that $p$ is the order of the accuracy of the approximation. With few exceptions, higher order accuracy is better than lower order. To illustrate this point, assume $q < p$. Then as the spacing $h > 0$ goes to 0, $h^p$ goes to 0 faster than $h^q$. Therefore as $h$ goes to 0, an approximation of a value that is $O(h^p)$ gets closer to the true value faster than one that is $O(h^q)$.
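A quick numeric illustration of this claim, with a few illustrative values of h:

# halving h shrinks h^1 by a factor of 2 but h^2 by a factor of 4
for h in [0.1, 0.05, 0.025]:
    print(f"h = {h:7.4f}   h^1 = {h:9.6f}   h^2 = {h**2:11.8f}")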
By computing the Taylor series around $a = x_j$ at $x = x_{j-1}$ and again solving for $f'(x_j)$, we get the backward difference formula

$$f'(x_j) \approx \frac{f(x_j) - f(x_{j-1})}{h},$$

which is also $O(h)$. You should try to verify this result on your own.
Intuitively, the forward and backward difference formulas for the derivative at $x_j$ are just the slopes between the point at $x_j$ and the points $x_{j+1}$ and $x_{j-1}$, respectively.
We can construct an improved approximation of the derivative by clever manipulation of Taylor series terms taken at different points. To illustrate, we can compute the Taylor series around $a = x_j$ at both $x_{j+1}$ and $x_{j-1}$. Written out, these equations are

$$f(x_{j+1}) = f(x_j) + f'(x_j)h + \frac{1}{2}f''(x_j)h^2 + \frac{1}{6}f'''(x_j)h^3 + \cdots$$

and

$$f(x_{j-1}) = f(x_j) - f'(x_j)h + \frac{1}{2}f''(x_j)h^2 - \frac{1}{6}f'''(x_j)h^3 + \cdots.$$
Subtracting the second equation from the first gives

$$f(x_{j+1}) - f(x_{j-1}) = 2f'(x_j)h + \frac{1}{3}f'''(x_j)h^3 + \cdots,$$

which when solved for $f'(x_j)$ gives the central difference formula

$$f'(x_j) \approx \frac{f(x_{j+1}) - f(x_{j-1})}{2h}.$$
Because of how we subtracted the two equations, the $f''(x_j)h^2$ terms canceled out; therefore, the central difference formula is $O(h^2)$, even though it requires the same amount of computational effort as the forward and backward difference formulas! Thus the central difference formula gets an extra order of accuracy for free. In general, formulas that utilize symmetric points around $x_j$, for example $x_{j-1}$ and $x_{j+1}$, have better accuracy than asymmetric ones, such as the forward and backward difference formulas.
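The following sketch makes this comparison concrete by evaluating both formulas for f(x) = sin(x) at a single point; the test function and the point are our illustrative choices.

import numpy as np

f, f_prime, a = np.sin, np.cos, 1.0
for h in [0.1, 0.05, 0.025]:
    fwd = (f(a + h) - f(a))/h              # forward difference, O(h)
    cen = (f(a + h) - f(a - h))/(2*h)      # central difference, O(h^2)
    print(f"h = {h:6.3f}  forward error = {abs(fwd - f_prime(a)):.2e}"
          f"  central error = {abs(cen - f_prime(a)):.2e}")

Halving h roughly halves the forward difference error but divides the central difference error by four, consistent with O(h) and O(h^2).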
The following figure shows the forward difference (line joining $(x_j, y_j)$ and $(x_{j+1}, y_{j+1})$), backward difference (line joining $(x_j, y_j)$ and $(x_{j-1}, y_{j-1})$), and central difference (line joining $(x_{j-1}, y_{j-1})$ and $(x_{j+1}, y_{j+1})$) approximations of the derivative of a function $f$. As can be seen, the estimated slopes can differ significantly depending on the size of the step $h$ and the nature of the function.
TRY IT! Take the Taylor series of $f$ around $a = x_j$ and compute the series at $x = x_{j-2}, x_{j-1}, x_{j+1}, x_{j+2}$. Show that the resulting equations can be combined to form an approximation for $f'(x_j)$ that is $O(h^4)$.

Combining the four expansions with weights $1, -8, 8, -1$ gives

$$f(x_{j-2}) - 8f(x_{j-1}) + 8f(x_{j+1}) - f(x_{j+2}) = 12hf'(x_j) - \frac{48h^5 f'''''(x_j)}{120} + \cdots,$$

so

$$f'(x_j) \approx \frac{f(x_{j-2}) - 8f(x_{j-1}) + 8f(x_{j+1}) - f(x_{j+2})}{12h},$$

with error $O(h^4)$. This formula is a better approximation for the derivative at $x_j$ than the central difference formula, but requires twice as many calculations.
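As a check, the following sketch evaluates this formula for f(x) = sin(x) (our illustrative test function); halving h should divide the error by roughly 2^4 = 16.

import numpy as np

f, f_prime, a = np.sin, np.cos, 1.0
for h in [0.1, 0.05, 0.025]:
    approx = (f(a - 2*h) - 8*f(a - h) + 8*f(a + h) - f(a + 2*h))/(12*h)
    print(f"h = {h:6.3f}  error = {abs(approx - f_prime(a)):.2e}")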
TIP! Python has a command that can be used to compute finite differences directly: for a vector $f$, the command d = np.diff(f) produces an array d in which the entries are the differences of the adjacent elements in the initial array $f$. In other words, d[i] = f[i+1] - f[i].

WARNING! When using the command np.diff, the size of the output is one less than the size of the input since it needs two arguments to produce a difference.
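For example:

import numpy as np

f = np.array([1, 4, 9, 16, 25])
d = np.diff(f)
print(d)   # [3 5 7 9]; one element shorter than f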
EXAMPLE: Consider the function $f(x) = \cos(x)$. We know the derivative of $\cos(x)$ is $-\sin(x)$. Although in practice we may not know the underlying function we are finding the derivative for, we use this simple example to illustrate the aforementioned numerical differentiation methods and their accuracy. The following code computes the derivatives numerically.
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-poster')
%matplotlib inline

# step size
h = 0.1
# define grid
x = np.arange(0, 2*np.pi, h)
# compute function
y = np.cos(x)

# compute vector of forward differences
forward_diff = np.diff(y)/h
# compute corresponding grid
x_diff = x[:-1]
# compute exact solution
exact_solution = -np.sin(x_diff)

# Plot solution
plt.figure(figsize = (12, 8))
plt.plot(x_diff, forward_diff, '--', \
         label = 'Finite difference approximation')
plt.plot(x_diff, exact_solution, \
         label = 'Exact solution')
plt.legend()
plt.show()

# compute max error between numerical derivative and exact solution
max_error = max(abs(exact_solution - forward_diff))
print(max_error)
0.049984407218554114
As the above figure shows, there is a small offset between the two curves, which results from the numerical error in the evaluation of the numerical derivatives. The maximal error between the numerical and the exact derivative is on the order of 0.05 and is expected to decrease with the step size.
As illustrated in the previous example, the finite difference scheme contains a numerical error due to the approximation of the derivative. This error decreases as the discretization step shrinks, which is illustrated in the following example.

EXAMPLE: The following code computes the numerical derivative of $f(x) = \cos(x)$ using the forward difference formula for decreasing step sizes, $h$. It then plots the maximum error between the approximated derivative and the true derivative versus $h$, as shown in the generated figure.
# initial step size and number of refinements (assumed values)
h = 1
iterations = 20
# store the step sizes and corresponding max errors
step_size = []
max_error = []

for i in range(iterations):
    # halve the step size
    h /= 2
    # store this step size
    step_size.append(h)
    # compute new grid
    x = np.arange(0, 2 * np.pi, h)
    # compute function value at grid
    y = np.cos(x)
    # compute vector of forward differences
    forward_diff = np.diff(y)/h
    # compute corresponding grid
    x_diff = x[:-1]
    # compute exact solution
    exact_solution = -np.sin(x_diff)
    # store max error between numerical and exact derivative
    max_error.append(max(abs(exact_solution - forward_diff)))

# plot max error against step size on log-log axes
plt.figure(figsize = (12, 8))
plt.loglog(step_size, max_error, 'v')
plt.show()
The slope of the line in log-log space is 1; therefore, the error is proportional to $h^1$, which means that, as expected, the forward difference formula is $O(h)$.
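The slope can also be estimated directly from the data with a least-squares fit in log-log space; the following sketch reuses step_size and max_error from the example above.

# fit log(max_error) = p*log(step_size) + c; the slope p estimates the order of accuracy
p, c = np.polyfit(np.log(step_size), np.log(max_error), 1)
print(p)   # close to 1 for the forward difference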
20.3 Approximating Higher Order Derivatives
The same Taylor series machinery can be used to approximate higher order derivatives. Computing the Taylor series around $a = x_j$ at $x = x_{j-1}$ and $x = x_{j+1}$ gives

$$f(x_{j-1}) = f(x_j) - hf'(x_j) + \frac{h^2 f''(x_j)}{2} - \frac{h^3 f'''(x_j)}{6} + \cdots$$
and
$$f(x_{j+1}) = f(x_j) + hf'(x_j) + \frac{h^2 f''(x_j)}{2} + \frac{h^3 f'''(x_j)}{6} + \cdots.$$
Adding these two equations gives

$$f(x_{j-1}) + f(x_{j+1}) = 2f(x_j) + h^2 f''(x_j) + \frac{h^4 f''''(x_j)}{12} + \cdots,$$

which can be solved for the second derivative to give the central difference approximation

$$f''(x_j) \approx \frac{f(x_{j+1}) - 2f(x_j) + f(x_{j-1})}{h^2},$$

which is $O(h^2)$.
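A minimal sketch of this second derivative approximation, using f(x) = cos(x) as an illustrative test function (its exact second derivative is -cos(x)); the grid and step size are our choices.

import numpy as np

h = 0.1
x = np.arange(0, 2*np.pi, h)
y = np.cos(x)

# central difference approximation of f'' at the interior grid points
second_diff = (y[2:] - 2*y[1:-1] + y[:-2])/h**2
exact = -np.cos(x[1:-1])
print(max(abs(second_diff - exact)))   # small, and shrinks like h^2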
20.4 Numerical Differentiation with Noise
Measured data is rarely perfect: it usually contains noise that can severely distort numerical derivatives. To illustrate this point, we numerically compute the derivative of a simple cosine wave corrupted by a small sine wave. Consider the following two functions:

$$f(x) = \cos(x)$$

and

$$f_{\epsilon,\omega}(x) = \cos(x) + \epsilon \sin(\omega x),$$

where $0 < \epsilon \ll 1$ is a very small number and $\omega$ is a large number. When $\epsilon$ is small, it is clear that $f \simeq f_{\epsilon,\omega}$. To illustrate this point, we plot $f_{\epsilon,\omega}(x)$ for $\epsilon = 0.01$ and $\omega = 100$, and we can see it is very close to $f(x)$, as shown in the following figure.
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-poster')
%matplotlib inline

# grid; the spacing is an assumed value, fine enough to resolve the noise
x = np.arange(0, 2*np.pi, 0.01)
# noise parameters from the text: epsilon small, omega large
epsilon = 0.01
omega = 100

y = np.cos(x)
y_noise = y + epsilon*np.sin(omega*x)

# Plot solution
plt.figure(figsize = (12, 8))
plt.plot(x, y_noise, 'r-', \
         label = 'cos(x) + noise')
plt.plot(x, y, 'b-', \
         label = 'cos(x)')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
The derivatives of the two functions are

$$f'(x) = -\sin(x)$$

and

$$f'_{\epsilon,\omega}(x) = -\sin(x) + \epsilon\omega\cos(\omega x).$$
Since $\epsilon\omega$ may not be small when $\omega$ is large, the contribution of the noise to the derivative may not be small; here $\epsilon\omega = 0.01 \times 100 = 1$, so the noise contributes an error of order one to the derivative. As a result, the derivative (analytic and numerical) may not be usable. For instance, the following figure shows $f'(x)$ and $f'_{\epsilon,\omega}(x)$ for $\epsilon = 0.01$ and $\omega = 100$.
# compute the analytic derivatives on the same grid
y = -np.sin(x)
y_noise = -np.sin(x) + epsilon*omega*np.cos(omega*x)

# Plot solution
plt.figure(figsize = (12, 8))
plt.plot(x, y_noise, 'r-', \
         label = 'Derivative of cos(x) + noise')
plt.plot(x, y, 'b-', \
         label = 'Derivative of cos(x)')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
20.5 Summary and Problems
Summary
1. Because explicit differentiation of functions is sometimes cumbersome for engineering applications, numerical approaches can be preferable.
2. Numerical approximation of derivatives can be done using a grid on which the derivative is approximated by finite differences.
3. Finite differences approximate the derivative by ratios of differences in the function value over small intervals.
4. Finite difference schemes have different approximation orders depending on the method used.
5. There are issues with finite differences for approximation of derivatives when the data is noisy.
Problems
1. Write a function my_der_calc(f, a, b, N, option), with the output as [df, X], where f is a function object, a and b are scalars such that a < b, N is an integer bigger than 10, and option is one of the strings 'forward', 'backward', or 'central'. Let x be an array starting at a, ending at b, containing N evenly spaced elements, and let y be the array f(x). The output argument, df, should be the numerical derivatives computed for x and y according to the method defined by the input argument, option. The output argument X should be an array the same size as df containing the points in x for which df is valid. Specifically, the forward difference method "loses" the last point, the backward difference method loses the first point, and the central difference method loses the first and last points.
2. Write a function my_num_diff(f, a, b, n, option), with the output as [dy, X], where f is a function object. The function my_num_diff should compute the derivative of f numerically at n evenly spaced points starting at a and ending at b, according to the method defined by option. The input argument option is one of the following strings: 'forward', 'backward', 'central'. Note that for the forward and backward methods, the output argument, dy, should be a 1D array of length n - 1, and for the central difference method dy should be a 1D array of length n - 2. The function should also output a vector X that is the same size as dy and denotes the x-values for which dy is valid.
Test Cases:
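The book's test cases are not reproduced here; as an illustration, the following is one possible sketch of my_num_diff and a call that exercises it (our construction, not the book's reference solution).

import numpy as np

def my_num_diff(f, a, b, n, option):
    # sketch of Problem 2: derivative of f at n evenly spaced points on [a, b]
    x = np.linspace(a, b, n)
    y = f(x)
    if option == 'forward':
        dy, X = (y[1:] - y[:-1])/(x[1:] - x[:-1]), x[:-1]
    elif option == 'backward':
        dy, X = (y[1:] - y[:-1])/(x[1:] - x[:-1]), x[1:]
    else:   # 'central'
        dy, X = (y[2:] - y[:-2])/(x[2:] - x[:-2]), x[1:-1]
    return dy, X

dy, X = my_num_diff(np.sin, 0, 2*np.pi, 100, 'central')
print(max(abs(dy - np.cos(X))))   # small central difference error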
3. Write a function my_num_diff_w_smoothing(x, y, n), with the output [dy, X], where x and y are 1D numpy arrays of the same length, and n is a strictly positive integer. The function should first create a vector of "smoothed" y data points where y_smooth[i] = np.mean(y[i-n:i+n]). The function should then compute dy, the derivative of the smoothed y-vector, using the central difference method. The function should also output a 1D array X that is the same size as dy and denotes the x-values for which dy is valid.

Assume that the data contained in x is in ascending order with no duplicate entries. However, it is possible that the elements of x will not be evenly spaced. Note that the output dy will have 2n + 2 fewer points than y. Assume that the length of y is much bigger than 2n + 2.
Test Cases:
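Again the original test cases are not reproduced; the following is one possible sketch of my_num_diff_w_smoothing together with an illustrative call on noisy data (our construction, not the book's reference solution).

import numpy as np

def my_num_diff_w_smoothing(x, y, n):
    # smooth y with a running mean over the window y[i-n:i+n]
    y_smooth = np.array([np.mean(y[i-n:i+n]) for i in range(n, len(y) - n)])
    x_mid = x[n:len(y) - n]
    # central difference of the smoothed data; loses one more point at each end,
    # so dy has 2n + 2 fewer points than y
    dy = (y_smooth[2:] - y_smooth[:-2])/(x_mid[2:] - x_mid[:-2])
    X = x_mid[1:-1]
    return dy, X

x = np.linspace(0, 2*np.pi, 1000)
y = np.sin(x) + np.random.randn(len(x))/100
dy, X = my_num_diff_w_smoothing(x, y, 4)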
4. Use Taylor series to show the following approximations and their accuracy. $$
$$
© Copyright 2020.