
Root-Finding Methods

Tamas Kis | [email protected]


Tamas Kis
https://tamaskis.github.io

Copyright © 2022 Tamas Kis.

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/ or send a letter
to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
Contents

Contents
List of Algorithms

1 Fixed-Point Iteration
  1.1 Basic Theory
  1.2 Termination Conditions
  1.3 Algorithm
  1.4 Iterative Approaches in Engineering
  1.5 Root Finding Using Fixed-Point Iteration

2 Univariate Root Finding
  2.1 Newton's Method
  2.2 Secant Method
  2.3 Bisection Method

3 Multivariate Root Finding
  3.1 Newton's Method in n Dimensions
    3.1.1 Approximating the Jacobian
  3.2 Algorithm
  3.3 Solving Nonlinear Systems

References
List of Algorithms

Algorithm 1 (fixed_point_iteration): Fixed-point iteration for finding the fixed point of a univariate, scalar-valued function
Algorithm 2 (newtons_method): Newton's method for finding the root of a differentiable, univariate, scalar-valued function
Algorithm 3 (secant_method): Secant method for finding the root of a univariate, scalar-valued function
Algorithm 4 (bisection_method): Bisection method for finding the root of a univariate, scalar-valued function
Algorithm 5 (newtons_method_n): Newton's method for finding the root of a differentiable, multivariate, vector-valued function
1 Fixed-Point Iteration

1.1 Basic Theory


Consider a univariate function f(x). A fixed point of f(x), which we denote as c, satisfies

    f(c) = c

Essentially, at the fixed point, the value of the dependent variable is the same as the value of the independent variable. Another way to view fixed points is as the intersections of the curve y = f(x) with the curve y = x.
Consider the case where we can't explicitly solve for c. Let's assume we make an initial guess, c_0, and then iteratively generate a sequence of refined estimates for the true value of c:

    {c_0, c_1, c_2, ..., c_k, ...}

If this sequence of refined estimates for the fixed point converges to the true fixed point, c, then

    c = lim_{k→∞} c_k    (1.1)

The sequence of refined fixed point estimates (the c_k's) can be iteratively generated using the function, f(x), whose fixed point we are trying to find:

    c_{k+1} = f(c_k)    (1.2)

To show that this is valid, from Eqs. (1.1) and (1.2), and using the continuity of f to move the limit inside the function, we have [1, p. 60]

    c = lim_{k→∞} c_k = lim_{k→∞} f(c_{k−1}) = f( lim_{k→∞} c_{k−1} ) = f(c)

1.2 Termination Conditions


Given an initial guess, c_0, we can keep coming up with new estimates of the fixed point. To terminate the iterative procedure, it is easiest to use the absolute error, ε, defined as

    ε = |x_{k+1} − x_k|    (1.3)

Once ε is small enough, we say that the estimate of the fixed point has converged to the true fixed point, c, within some tolerance (which we denote as TOL). For all the algorithms discussed in this document, we assume a tolerance of

    TOL = 10^-10

unless otherwise specified. Note that a relative error, ε_r, is also often used in iterative algorithms such as fixed-point iteration:

    ε_r = |x_{k+1} − x_k| / |x_k|

The relative error does a better job of comparing iterates with respect to each other: depending on the problem, a difference of x_{k+1} − x_k = 100 could be considered very small, while for other problems that same difference is very large. However, using ε_r can lead to issues when |x_k| is 0 (or numerically very small), since it appears in the denominator. Additionally, when manually setting the tolerance, it is often more intuitive to set an absolute tolerance than a relative tolerance; an absolute tolerance gives a direct measure of the numerical precision of the result, while a relative tolerance provides an estimate of the number of correct digits.
If we predetermine that, at most, we can tolerate an error of TOL, then we will keep iterating Eq. (1.2) until ε < TOL. In some cases, the error may never decrease below TOL, or may take too long to do so. To account for this, we also define a maximum number of iterations, k_max, so that the algorithm does not keep iterating forever, or for too long a time [1, pp. 49–50]. For all the algorithms discussed in this document, we assume a maximum number of iterations of

    k_max = 200

unless otherwise specified.
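To make the two termination checks concrete, here is a minimal Python sketch (illustrative only; the function name and signature are assumptions, not from the text):

```python
def converged(x_next, x_curr, tol=1e-10, use_relative=False):
    """Convergence check via absolute error (Eq. 1.3) or relative error.

    Illustrative helper; not part of the reference text.
    """
    eps = abs(x_next - x_curr)  # absolute error, Eq. (1.3)
    if use_relative:
        # guard against |x_k| = 0 before dividing (see discussion above)
        if x_curr == 0:
            return eps < tol
        return eps / abs(x_curr) < tol
    return eps < tol
```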

1.3 Algorithm
Algorithm 1 is adapted from [1, pp. 60–61].

Algorithm 1: fixed_point_iteration
Fixed-point iteration for finding the fixed point of a univariate, scalar-valued function.

Inputs:
• f(x) - univariate, scalar-valued function (f : R → R)
• x_0 ∈ R - initial guess for fixed point
• TOL ∈ R - (OPTIONAL) tolerance (defaults to 10^-10)
• k_max ∈ Z - (OPTIONAL) maximum number of iterations (defaults to 200)

Procedure:
1. Default the tolerance to TOL = 10^-10 if not specified.
2. Default the maximum number of iterations to k_max = 200 if not specified.
3. Return the initial guess if it is a fixed point of f(x).

    if f(x_0) = x_0
        return c = x_0
    end

4. Fixed point estimate at the first iteration.

    x_curr = x_0

5. Initialize x_next.

    x_next = 0

6. Find the fixed point using fixed-point iteration.

    for k = 1 to k_max
        (a) Update the fixed point estimate.

            x_next = f(x_curr)

        (b) Terminate if converged.

            if |x_next − x_curr| < TOL
                break
            end

        (c) Store the updated fixed point estimate for the next iteration.

            x_curr = x_next
    end

7. Return the converged fixed point.

    return c = x_next

Outputs:
• c ∈ R - fixed point of f(x)
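As a concrete companion to the pseudocode, here is a minimal Python sketch of Algorithm 1 (an illustrative translation, not the author's MATLAB implementation):

```python
def fixed_point_iteration(f, x0, tol=1e-10, k_max=200):
    """Illustrative sketch of Algorithm 1.

    Finds a fixed point c of a univariate, scalar-valued f
    via the iteration c_{k+1} = f(c_k) (Eq. 1.2).
    """
    # return the initial guess if it is already a fixed point
    if f(x0) == x0:
        return x0
    x_curr = x0
    x_next = 0.0
    for _ in range(k_max):
        # update the fixed point estimate (Eq. 1.2)
        x_next = f(x_curr)
        # terminate once the absolute error (Eq. 1.3) is below tolerance
        if abs(x_next - x_curr) < tol:
            break
        x_curr = x_next
    return x_next
```

For example, fixed_point_iteration(lambda x: (x + 2/x)/2, 1.0) converges to the fixed point √2 ≈ 1.414213562, since c = (c + 2/c)/2 implies c² = 2.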

1.4 Iterative Approaches in Engineering


Fixed-point iteration is especially useful for solving many problems in engineering. Consider the case where we have two unknown quantities and two highly nonlinear equations that relate them to one another. Mathematically, we have two variables, x and y, and the following two functions:

    y = f(x)    (1.4)
    x = g(y)    (1.5)

In one function, you input x and get y, while in the other function, you input y and get x. Since f(x) and g(y) are nonlinear (as previously mentioned), we cannot obtain closed-form solutions for x and y.

Let's say we're primarily interested in the variable x, where the variable y mainly serves to place a constraint on x. We want to find the value x = c such that both equations are satisfied simultaneously (i.e. y = f(c) and c = g(y)). Then we can define a new function h(x) as the function composition h = g ∘ f by substituting Eq. (1.4) into Eq. (1.5):

    h(x) = g(f(x))

The solution to our problem, x = c, is just the fixed point of h(x); a code sketch of this pattern follows the footnote below. See Examples #4a and #4b on the "Examples" tab of https://www.mathworks.com/matlabcentral/fileexchange/86992-fixed-point-iteration-fixed_point_iteration for an implementation of this approach to a pipe flow problem¹.

¹ This example is adapted from my personal solutions to Problem 8.96 in [4, p. 476] using the Haaland equation [4, p. 431]. However, this fluid
mechanics text, in general, does not take a computational approach to such problems. Rather, it performs a “trial and error” procedure (including
hand calculations and reading values off of a chart), which essentially follows the same process as fixed-point iteration – an example of this can
be found in Example 8.7 [4, p. 444].
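A minimal sketch of the composition approach, with made-up stand-ins for the pipe-flow relations (the actual equations live in the linked example; f and g below are assumptions chosen only to illustrate the pattern):

```python
# Hypothetical relations standing in for Eqs. (1.4) and (1.5):
f = lambda x: (x + 1.0) ** 0.5   # assumed relation y = f(x)
g = lambda y: 2.0 / (1.0 + y)    # assumed relation x = g(y)
h = lambda x: g(f(x))            # composition h(x) = g(f(x))

# the solution x = c is the fixed point of h;
# reuses the fixed_point_iteration sketch from Section 1.3
c = fixed_point_iteration(h, 1.0)
y = f(c)  # with this y, both y = f(c) and c = g(y) hold simultaneously
```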

1.5 Root Finding Using Fixed-Point Iteration


Consider a function, f(x). To find a root of f(x), we need to find a value of x that satisfies

    f(x) = 0

Similarly, to find a fixed point of f(x), we need to find a value of x that satisfies

    f(x) = x

Therefore, it is simple to convert a root-finding problem to a fixed-point-finding problem:

Finding the root of f(x) is equivalent to finding the fixed point of g(x), where g(x) is defined as [1, p. 56]

    g(x) = x − f(x)
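As a quick illustration (reusing the fixed_point_iteration sketch from Section 1.3): for f(x) = x − cos(x), the transformation gives g(x) = x − f(x) = cos(x), whose fixed point is the root of f. Note that fixed-point iteration on g converges only when |g'(x)| < 1 near the fixed point, which holds here since |g'(x)| = |sin(x)| ≈ 0.67 at the root.

```python
import math

f = lambda x: x - math.cos(x)  # root-finding problem: f(x) = 0
g = lambda x: x - f(x)         # equivalent fixed-point problem: g(x) = cos(x)

c = fixed_point_iteration(g, 0.5)  # sketch from Section 1.3
# c ≈ 0.7390851332, and indeed f(c) ≈ 0
```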
2 Univariate Root Finding

2.1 Newton's Method


Newton's method is a technique used to find the root (based on an initial guess¹, x_0) of a differentiable, univariate function f(x). Consider the Taylor series expansion of f(x) about the point x = x_0:

    f(x) = f(x_0) + f'(x_0)(x − x_0) + (1/2) f''(x_0)(x − x_0)² + ···

Ignoring higher-order terms, we have

    f(x) ≈ f(x_0) + f'(x_0)(x − x_0)    (2.1)

Equation (2.1) essentially approximates f(x) as the tangent line to the curve y = f(x) at the point x = x_0. The x-intercept of this tangent line (i.e. its root or zero), x_1, can be found by setting the left-hand side of Eq. (2.1) equal to 0:

    0 = f'(x_0)(x_1 − x_0) + f(x_0)  →  x_1 = x_0 − f(x_0)/f'(x_0)
x_1 represents an updated estimate of the root of f(x), given an initial guess x_0. To keep refining our estimate, we can keep iterating through this procedure. Eq. (2.2) below finds the (k + 1)th iterate given the kth iterate:

    x_{k+1} = x_k − f(x_k)/f'(x_k)    (2.2)

The iteration can be terminated in the same manner as described in Section 1.2 [1, pp. 67–68].

All root-finding methods have cases where they can fail to converge, but Newton's method has a particular vulnerability: if at any point f'(x_k) = 0, Newton's method will result in an Inf solution (due to division by 0). To safeguard against this failure, we can perturb x_k by (100)(TOL)|x_k|. However, when x_k = 0, this results in no perturbation; in this case, we just perturb x_k by (100)(TOL).

¹ Often, a function f(x) will have multiple roots. Therefore, Newton's method typically finds the root closest to the initial guess x_0. However, this is not always the case; the algorithm depends heavily on the derivative of f(x), which, depending on its form, may cause it to converge on a root further from x_0.

Algorithm 2: newtons_method
Newton's method for finding the root of a differentiable, univariate, scalar-valued function.

Inputs:
• f(x) - differentiable, univariate, scalar-valued function (f : R → R)
• f'(x) - derivative of f(x) (f' : R → R)
• x_0 ∈ R - initial guess for root
• TOL ∈ R - (OPTIONAL) tolerance (defaults to 10^-10)
• k_max ∈ Z - (OPTIONAL) maximum number of iterations (defaults to 200)

Procedure:
1. Default the tolerance to TOL = 10^-10 if not input.
2. Default the maximum number of iterations to k_max = 200 if not input.
3. Return the initial guess if it is a root of f(x).

    if f(x_0) = 0
        return x* = x_0
    end

4. Root estimate at the first iteration.

    x_curr = x_0

5. Initialize x_next.

    x_next = 0

6. Find the root using Newton's method.

    for k = 1 to k_max
        (a) Evaluate the derivative at the current root estimate.

            f'_curr = f'(x_curr)

        (b) Perturb x_curr if f'_curr = 0, then re-evaluate the derivative (otherwise the update in step (c) would still divide by 0).

            if f'_curr = 0
                if x_curr ≠ 0
                    x_curr = x_curr(1 + (100)(TOL)|x_curr|)
                else
                    x_curr = (100)(TOL)
                end
                f'_curr = f'(x_curr)
            end

        (c) Update the root estimate.

            x_next = x_curr − f(x_curr)/f'_curr

        (d) Terminate if converged.

            if |x_next − x_curr| < TOL
                break
            end

        (e) Store the updated root estimate for the next iteration.

            x_curr = x_next
    end

7. Return the converged root.

    return x* = x_next

Outputs:
• x* ∈ R - root of f(x)
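A minimal Python sketch of Algorithm 2 (illustrative, not the author's MATLAB implementation):

```python
def newtons_method(f, df, x0, tol=1e-10, k_max=200):
    """Illustrative sketch of Algorithm 2.

    Finds a root of a differentiable, univariate, scalar-valued f,
    given its derivative df.
    """
    if f(x0) == 0:
        return x0
    x_curr = x0
    x_next = 0.0
    for _ in range(k_max):
        d_curr = df(x_curr)
        # safeguard: perturb x_curr if the derivative vanishes, then re-evaluate
        if d_curr == 0:
            x_curr = x_curr * (1 + 100 * tol * abs(x_curr)) if x_curr != 0 else 100 * tol
            d_curr = df(x_curr)
        # Newton update (Eq. 2.2)
        x_next = x_curr - f(x_curr) / d_curr
        if abs(x_next - x_curr) < tol:
            break
        x_curr = x_next
    return x_next
```

For example, newtons_method(lambda x: x**2 - 2, lambda x: 2*x, 1.0) returns √2 ≈ 1.414213562.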

2.2 Secant Method


Recall Newton's method for iteratively solving for the root of a differentiable function:

    x_{k+1} = x_k − f(x_k)/f'(x_k)    (2.3)

If we don't know f'(x) (for example, it could be very complicated and tedious to derive by hand), then we can instead approximate it using some numerical method. Specifically, for the secant method, we use the backward approximation of a derivative, given by Eq. (2.4) below:

    f'(x_k) ≈ [f(x_k) − f(x_{k−1})] / (x_k − x_{k−1})    (2.4)

Substituting Eq. (2.4) into Eq. (2.3),

    x_{k+1} = x_k − f(x_k) [ (x_k − x_{k−1}) / (f(x_k) − f(x_{k−1})) ]
            = [ x_k (f(x_k) − f(x_{k−1})) − (x_k − x_{k−1}) f(x_k) ] / [ f(x_k) − f(x_{k−1}) ]
            = [ x_k f(x_k) − x_k f(x_{k−1}) − x_k f(x_k) + x_{k−1} f(x_k) ] / [ f(x_k) − f(x_{k−1}) ]

which simplifies to

    x_{k+1} = [ x_{k−1} f(x_k) − x_k f(x_{k−1}) ] / [ f(x_k) − f(x_{k−1}) ]    (2.5)
Equation (2.5) iteratively defines the secant method, which can essentially be thought of as a finite difference approximation of Newton's method for finding the root of a univariate function. The implementation of the secant method is very similar to that of Newton's method, but there are some important items of note [1, pp. 71–72]:

• Like with Newton's method, we first have to make an initial guess x_0 for the root. Additionally, we need to set the root estimate at the first iteration (i.e. x_1) to a value slightly different from x_0; otherwise, the denominator of the right-hand side of Eq. (2.5) will be 0 (since f(x_1) − f(x_0) = f(x_0) − f(x_0) = 0 if x_1 = x_0) and x_2 will be undefined. We can think of this as "kick-starting" the algorithm.
• Function evaluations (i.e. evaluating f(x)) are typically the most expensive part of the solution procedure. Note that for the secant method, f(x_curr) at the kth iteration is identical to f(x_prev) at the (k + 1)th iteration. Therefore, it is prudent to save function evaluations for the subsequent iteration.
• Whereas Newton's method fails when f'(x_k) = 0, the secant method fails when f(x_k) = f(x_{k−1}). Similar to how we guarded against this divide-by-0 failure for Newton's method, we again perturb x_k if this situation arises.

Algorithm 3: secant_method
Secant method for finding the root of a univariate, scalar-valued function.

Inputs:
• f(x) - univariate, scalar-valued function (f : R → R)
• x_0 ∈ R - initial guess for root
• TOL ∈ R - (OPTIONAL) tolerance (defaults to 10^-10)
• k_max ∈ Z - (OPTIONAL) maximum number of iterations (defaults to 200)

Procedure:
1. Default the tolerance to TOL = 10^-10 if not input.
2. Default the maximum number of iterations to k_max = 200 if not input.
3. Return the initial guess if it is a root of f(x).

    if f(x_0) = 0
        return x* = x_0
    end

4. Root estimates at the first and second iterations.

    x_prev = x_0
    if x_0 ≠ 0
        x_curr = x_0(1 + (100)(TOL)|x_0|)
    else
        x_curr = (100)(TOL)
    end

5. Function evaluation at the first iteration.

    f_prev = f(x_0)

6. Initialize x_next.

    x_next = 0

7. Find the root using the secant method.

    for k = 2 to k_max
        (a) Function evaluation at the current iteration.

            f_curr = f(x_curr)

        (b) Perturb x_curr if f_curr = f_prev, then re-evaluate f_curr.

            if f_curr = f_prev
                if x_curr ≠ 0
                    x_curr = x_curr(1 + (100)(TOL)|x_curr|)
                else
                    x_curr = (100)(TOL)
                end
                f_curr = f(x_curr)
            end

        (c) Update the root estimate.

            x_next = (x_prev f_curr − x_curr f_prev)/(f_curr − f_prev)

        (d) Terminate if converged.

            if |x_next − x_curr| < TOL
                break
            end

        (e) Store the current and updated root estimates for the next iteration.

            x_prev = x_curr
            x_curr = x_next

        (f) Store the current function evaluation for the next iteration.

            f_prev = f_curr
    end

8. Return the converged root.

    return x* = x_next

Outputs:
• x* ∈ R - root of f(x)
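A minimal Python sketch of Algorithm 3 (illustrative, not the author's MATLAB implementation):

```python
def secant_method(f, x0, tol=1e-10, k_max=200):
    """Illustrative sketch of Algorithm 3."""
    if f(x0) == 0:
        return x0
    # "kick-start": second iterate slightly offset from x0
    x_prev = x0
    x_curr = x0 * (1 + 100 * tol * abs(x0)) if x0 != 0 else 100 * tol
    f_prev = f(x_prev)
    x_next = 0.0
    for _ in range(1, k_max):
        f_curr = f(x_curr)
        # safeguard against a zero denominator in Eq. (2.5)
        if f_curr == f_prev:
            x_curr = x_curr * (1 + 100 * tol * abs(x_curr)) if x_curr != 0 else 100 * tol
            f_curr = f(x_curr)
        # secant update (Eq. 2.5)
        x_next = (x_prev * f_curr - x_curr * f_prev) / (f_curr - f_prev)
        if abs(x_next - x_curr) < tol:
            break
        x_prev, x_curr = x_curr, x_next
        f_prev = f_curr  # reuse f(x_curr) as the next iteration's f(x_prev)
    return x_next
```

Note how the last line saves the current function evaluation, reflecting the cost-saving observation in the bullet list above.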

2.3 Bisection Method


The bisection method can be used to find the root of a univariate function f(x), with no restrictions on the differentiability of f. The basic idea behind the bisection method is to start with some interval [a, b] containing a root, iteratively "shrink" this interval until it is below some tolerance threshold, and then take the root to be the midpoint of this interval. The general procedure is as follows:

1. Make an initial guess for the interval [a, b] containing the root.
2. Assume the root, c, is the midpoint of this interval: c = (a + b)/2.
3. Evaluate f(a) and f(c).
   (a) If f(a) and f(c) have different signs (i.e. f(a)f(c) < 0), we know the true root, x*, is contained in the interval [a, c]. Therefore, we update our interval so that a remains the same, but b is updated to be c.
   (b) If f(a) and f(c) have the same sign (either both negative or both positive), we know the true root, x*, must be contained in the interval [c, b]. Therefore, we update our interval so that b remains the same, but a is updated to be c.
4. Repeating steps 2 and 3, the interval [a, b] will keep shrinking. Once the difference (b − a) is small enough, we say that the estimate of the root has converged to the true root, x*, within some tolerance, TOL.

In some cases, the difference (b − a) may never decrease below TOL, or may take too long to do so. Therefore, like with the previous methods, we also use the maximum number of iterations, k_max, as another termination condition [1, pp. 48–49].

Algorithm 4: bisection_method
Bisection method for finding the root of a univariate, scalar-valued function.

Inputs:
• f(x) - univariate, scalar-valued function (f : R → R)
• a ∈ R - lower bound of interval containing root
• b ∈ R - upper bound of interval containing root
• TOL ∈ R - (OPTIONAL) tolerance (defaults to 10^-10)
• k_max ∈ Z - (OPTIONAL) maximum number of iterations (defaults to 200)

Procedure:
1. Default the tolerance to TOL = 10^-10 if not input.
2. Default the maximum number of iterations to k_max = 200 if not input.
3. Root estimate at the first iteration.

    c = (a + b)/2

4. Return the root estimate at the first iteration if it is a root of f(x).

    if f(c) = 0
        return x* = c
    end

5. Function evaluations at the first iteration.

    f_a = f(a)
    f_c = f(c)

6. Find the root using the bisection method.

    for k = 1 to k_max
        (a) Update the interval.

            if f_c = 0
                break
            else if f_a f_c > 0
                a = c
                f_a = f_c
            else
                b = c
            end

        (b) Update the root estimate.

            c = (a + b)/2

        (c) Terminate if converged.

            if (b − a) < TOL
                break
            end

        (d) Function evaluation at the updated root estimate.

            f_c = f(c)
    end

7. Return the converged root.

    return x* = c

Outputs:
• x* ∈ R - root of f(x)
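A minimal Python sketch of Algorithm 4 (illustrative, not the author's MATLAB implementation; it assumes, as the text does, that [a, b] brackets a root, i.e. f(a) and f(b) have opposite signs):

```python
def bisection_method(f, a, b, tol=1e-10, k_max=200):
    """Illustrative sketch of Algorithm 4."""
    c = (a + b) / 2
    if f(c) == 0:
        return c
    f_a, f_c = f(a), f(c)
    for _ in range(k_max):
        if f_c == 0:
            break
        elif f_a * f_c > 0:
            # same sign: root lies in [c, b]
            a, f_a = c, f_c
        else:
            # opposite sign: root lies in [a, c]
            b = c
        c = (a + b) / 2  # new midpoint
        if (b - a) < tol:
            break
        f_c = f(c)
    return c
```

For example, bisection_method(lambda x: x**2 - 2, 0.0, 2.0) returns √2 ≈ 1.414213562.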
3 Multivariate Root Finding

3.1 Newton's Method in n Dimensions


Consider a multivariate, vector-valued function f : Rⁿ → Rⁿ. Our goal is to find the root, x*, of this function:

    f(x) = 0    (3.1)

In Section 2.1, we introduced Newton's method as an algorithm for finding the root of a scalar-valued, univariate function, f : R → R. Finding the root of f(x) is, by definition, solving the equation

    f(x) = 0

for x. Note the similarity of this equation to Eq. (3.1). We can extend Newton's method to the case of a multivariate, vector-valued function whose input and output dimensions are the same (i.e. same number of equations and unknowns). For the univariate case, we used the update equation

    x_{k+1} = x_k − f(x_k)/f'(x_k)

By analogy, in the multivariate, vector-valued case, this becomes

    x_{k+1} = x_k − J(x_k)^{-1} f(x_k)

where J is the Jacobian of f. However, in its implementation, we avoid computing the inverse of the Jacobian matrix. Instead, we solve the rearranged equation

    J(x_k)(x_{k+1} − x_k) = −f(x_k)

for the unknown x_{k+1} − x_k, and then find x_{k+1} accordingly. In two steps, this can be written as

    J(x_k) y_k = −f(x_k)    (solve for y_k)
    x_{k+1} = x_k + y_k                          (3.2)

For the multivariate case, we define the absolute error for the termination condition using the 2-norm:

    ε = ‖x_{k+1} − x_k‖

However, from Eq. (3.2), we also know that y_k = x_{k+1} − x_k. Thus, we can rewrite the absolute error as [1, pp. 638–641]

    ε = ‖y_k‖    (3.3)

3.1.1 Approximating the Jacobian


Note that Newton's method for multivariate, vector-valued functions requires the Jacobian, J(x_k). In many cases, it is difficult or time-consuming to calculate this Jacobian analytically. Instead, we can use the ijacobian (complex-step approximation), cjacobian (central difference approximation), or fjacobian (forward difference approximation) functions from the Numerical Differentiation Toolbox [3]:

    J(x_k) ≈ ijacobian(f, x_k)
    J(x_k) ≈ cjacobian(f, x_k)
    J(x_k) ≈ fjacobian(f, x_k)

The ijacobian function is the most accurate, providing a numerical approximation that is typically accurate to within double precision. However, there are a few functions with which special care must be taken in order to be compatible with the complex-step approximation; see Sections 3.3 and 3.4 of [2].
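To illustrate the complex-step idea behind ijacobian, here is a minimal NumPy sketch (illustrative only, not the toolbox implementation; it assumes f accepts complex-valued arrays and is real-analytic):

```python
import numpy as np

def complex_step_jacobian(f, x, h=1e-200):
    """Sketch of a complex-step Jacobian approximation (cf. ijacobian in [3]).

    For real-analytic f, Im(f(x + ih*e_j))/h approximates column j of J(x)
    to roughly machine precision, with no subtractive cancellation.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    J = np.zeros((n, n))  # f : R^n -> R^n, per the setting in this chapter
    for j in range(n):
        x_step = x.astype(complex)
        x_step[j] += 1j * h            # perturb along the j-th coordinate
        J[:, j] = np.imag(f(x_step)) / h
    return J
```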

3.2 Algorithm

Algorithm 5: newtons_method_n
Newton's method for finding the root of a differentiable, multivariate, vector-valued function.

Inputs:
• f(x) - multivariate, vector-valued function (f : Rⁿ → Rⁿ)
• J(x) - Jacobian of f(x)
• x_0 ∈ Rⁿ - initial guess for solution
• TOL ∈ R - (OPTIONAL) tolerance (defaults to 10^-10)
• k_max ∈ Z - (OPTIONAL) maximum number of iterations (defaults to 200)

Procedure:
1. Default the tolerance to TOL = 10^-10 if not input.
2. Default the maximum number of iterations to k_max = 200 if not input.
3. Return the initial guess if it is a root of f(x).

    if f(x_0) = 0_n
        return x* = x_0
    end

4. Root estimate at the first iteration.

    x_curr = x_0

5. Initialize x_next.

    x_next = 0_n

6. Find the root using Newton's method.

    for k = 1 to k_max
        (a) Solve the linear system below for y.

            J(x_curr) y = −f(x_curr)

        (b) Update the root estimate.

            x_next = x_curr + y

        (c) Terminate if converged (note that ‖y‖ = ‖x_next − x_curr‖ = ε, per Eq. (3.3)).

            if ‖y‖ < TOL
                break
            end

        (d) Store the updated root estimate for the next iteration.

            x_curr = x_next
    end

7. Return the converged root.

    return x* = x_next

Outputs:
• x* ∈ Rⁿ - root of f(x)
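A minimal NumPy sketch of Algorithm 5 (illustrative, not the author's MATLAB implementation):

```python
import numpy as np

def newtons_method_n(f, J, x0, tol=1e-10, k_max=200):
    """Illustrative sketch of Algorithm 5."""
    x_curr = np.asarray(x0, dtype=float)
    # return the initial guess if it is already a root
    if np.all(f(x_curr) == 0):
        return x_curr
    x_next = np.zeros_like(x_curr)
    for _ in range(k_max):
        # solve J(x_curr) y = -f(x_curr) instead of inverting the Jacobian
        y = np.linalg.solve(J(x_curr), -f(x_curr))
        x_next = x_curr + y
        # Eq. (3.3): the error is just the norm of the Newton step y
        if np.linalg.norm(y) < tol:
            break
        x_curr = x_next
    return x_next
```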

3.3 Solving Nonlinear Systems


Consider a system of n nonlinear equations in n unknowns:

    g_1(x_1, ..., x_n) = h_1(x_1, ..., x_n)
    g_2(x_1, ..., x_n) = h_2(x_1, ..., x_n)
    ⋮
    g_n(x_1, ..., x_n) = h_n(x_1, ..., x_n)

Let's rewrite the argument of each function in terms of the vector variable x ∈ Rⁿ, where

    x = [x_1, ..., x_n]^T

Additionally, let's move all the h equations to the left-hand side. Then we have

    g_1(x) − h_1(x) = 0
    g_2(x) − h_2(x) = 0
    ⋮
    g_n(x) − h_n(x) = 0

Let's define f_i(x) = g_i(x) − h_i(x). Then

    f_1(x) = 0
    f_2(x) = 0
    ⋮
    f_n(x) = 0

Defining f : Rⁿ → Rⁿ as the vector-valued function

    f(x) = [f_1(x), f_2(x), ..., f_n(x)]^T

we have thus converted this problem into the root-finding problem

    f(x) = 0

which can be solved using Newton's method (Algorithm 5).
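To make this concrete, here is a small illustrative system (invented for this example, not from the text), solved with the newtons_method_n sketch from Section 3.2:

```python
import numpy as np

# Example system (made up for illustration):
#   g1 = x1^2 + x2^2, h1 = 4  ->  f1(x) = x1^2 + x2^2 - 4 = 0
#   g2 = x1*x2,       h2 = 1  ->  f2(x) = x1*x2 - 1 = 0
f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0,
                        x[0]*x[1] - 1.0])

# analytical Jacobian of f
J = lambda x: np.array([[2*x[0], 2*x[1]],
                        [  x[1],   x[0]]])

x_star = newtons_method_n(f, J, x0=[2.0, 0.5])
# x_star ≈ [1.9319, 0.5176], and f(x_star) ≈ [0, 0]
```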


References

[1] Richard L. Burden and J. Douglas Faires. Numerical Analysis. 9th ed. Boston, MA: Brooks/Cole, Cengage Learning, 2011.
[2] Tamas Kis. Numerical Differentiation. 2021. URL: https://tamaskis.github.io/files/Numerical_Differentiation.pdf.
[3] Tamas Kis. Numerical Differentiation Toolbox. 2021. URL: https://github.com/tamaskis/Numerical_Differentiation_Toolbox-MATLAB.
[4] Bruce R. Munson et al. Fundamentals of Fluid Mechanics. 7th ed. Hoboken, NJ: John Wiley & Sons, 2013.
