1-Root Finding-Open Methods

The document discusses advanced numerical methods for root finding, emphasizing their importance in engineering applications. It covers various equations, error definitions, and iterative methods such as the Babylonian method for square roots and the Bisection method for solving equations. Additionally, it highlights the differences between closed and open root-finding methods, along with their advantages and disadvantages.


MEIE6004: Advanced Numerical Methods

Root Finding Methods

Nasser A. Al-Azri
Definition

Finding the root of an equation means solving for the unknown variable in the equation, e.g., finding x such that f(x) = 0.
Root finding is very important in engineering:
• When an engineering problem requires solving for an independent variable, rather than the dependent one, in a highly nonlinear equation.
• When solving an iterative equation.
Van der Waals Equation of State:

\left(P + a\left(\frac{n}{V}\right)^2\right)\left(\frac{V}{n} - b\right) = RT

P: Pressure, T: Temperature, V: Volume, n: number of moles, R: Gas constant, 8.314 J/(mol·K)


a and b are substance dependent constants.

Find the volume of 100 moles of water vapor at 100 kPa and 150 °C. For water vapor, a = 0.547 Pa·m⁶/mol² and b = 0.00003052 m³/mol.

Colebrook Equation for Finding the Friction Factor:

\frac{1}{\sqrt{f}} = -2\log_{10}\left(\frac{\epsilon}{3.7D} + \frac{2.51}{Re\,\sqrt{f}}\right)

f: friction factor, ϵ: roughness coefficient, Re: Reynolds number

Find the friction coefficient for a 5 cm-diameter pipe with roughness of 0.004mm and flow with Re=80,500.
Error Definition
Exact values can rarely be carried to full precision, due either to measurement limitations or to infinite calculation procedures.

• Round-off error: rounding a number to the nearest figure. In some calculations we may need to round up (take the next larger number) or round down (take the next smaller number).
• Truncation error: truncating an infinite calculation to give an approximate result.
Example:
The sine of a number can be calculated by using the infinite series:
\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!} - \cdots
If we calculate sin(5) using seven terms, the answer will be -0.93758405, which is off by about -0.02134 compared with the value from a calculator.
Programming Hint …
Code the sine function in an
efficient concise Matlab code.

\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!} - \cdots
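A minimal Matlab sketch is given below; the function name sine_series and the argument nterms are illustrative choices, not part of the slides. It builds each term from the previous one, so no factorials or powers are recomputed.

function s = sine_series(x, nterms)
% SINE_SERIES  Approximate sin(x) using the first nterms terms of the
% Maclaurin series x - x^3/3! + x^5/5! - ...
    term = x;                                  % first term of the series
    s = term;
    for k = 1:nterms-1
        % each term equals the previous one times -x^2/((2k)(2k+1))
        term = -term * x^2 / ((2*k) * (2*k + 1));
        s = s + term;
    end
end

For example, sine_series(5, 7) should return approximately -0.9376, consistent with the seven-term value quoted above.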
Expressing Error
The error of a number is the difference between the true value and the approximate
one.
True value = Approximate value + error

Absolute error = |True value − Approximate value|

The relative error is the percentage of the absolute error relative to the true value:

Relative error = \frac{True - Approx.}{True} \times 100%
Expressing Error

Two scales are used for measuring weight:


Scale A reads 45 kg for a 50-kg mass
Scale B reads 955 kg for a 1000-kg mass.

Which error expression is more suitable in comparing the two scales?


Error Expression in
Numerical Methods
The error of a number is the difference between the true value and the approximate one:

True value = Approximate value + error

Absolute error = |True value − Approximate value|

Relative error = \frac{True - Approx.}{True} \times 100%

In numerical methods, however, the true value is usually unknown, so the relative error is approximated from successive iterates, with the newest approximation taking the place of the true value:

E_r = \left|\frac{x_{new} - x_{old}}{x_{new}}\right| \times 100%
Finding the
Square-root of a
number
About 2000 years ago, the Babylonians developed an iterative square-root-finding method whose basis remained a mystery until recent centuries.
To find the square root of M, i.e., x = \sqrt{M}, one starts with an initial guess x_0 and then proceeds in an iterative manner using the iterative equation:

x_{i+1} = \frac{1}{2}\left(x_i + \frac{M}{x_i}\right)

Initially, x_0 can be taken as any number, preferably as close as possible to the exact solution in order to speed up convergence to the desired solution.
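A minimal Matlab sketch of this iteration is shown below; the function name baby_sqrt and its arguments are illustrative, not from the slides. It stops when the relative change between successive iterates falls below a tolerance given in percent, matching the stopping rule used in the example that follows.

function x = baby_sqrt(M, x0, tol_pct)
% BABY_SQRT  Babylonian iteration for sqrt(M), starting from x0.
% Stops when the relative change between iterates is <= tol_pct (%).
    x = x0;
    rel_err = Inf;
    while rel_err > tol_pct
        x_new = 0.5 * (x + M / x);                 % iterative equation
        rel_err = abs((x_new - x) / x_new) * 100;  % relative error, %
        x = x_new;
    end
end

For instance, baby_sqrt(326, 15, 0.001) should reproduce the hand calculation below, converging to about 18.05547.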
Finding the Square-root of a number

Using the Babylonian method, find the square root of 326 using hand calculations.
Use a reasonable initial guess, x0, and stop iterating when the relative error is less
than or equal to 0.001%.

Applying the Babylonian method using three different initial guesses:


x_0 = 15:

i    x_old      x_new      Rel. Er. (%)
0    15         18.36667   18.3303085
1    18.36667   18.05811   1.70870736
2    18.05811   18.05547   0.01460054
3    18.05547   18.05547   1.0659E-06
4    18.05547   18.05547   0

Using the calculator, the answer is: \sqrt{326} = 18.0554700


x_0 = 1:

i    x_old      x_new      Rel. Er. (%)
0    1          163.5      99.38838
1    163.5      82.74694   97.59038
2    82.74694   43.34333   90.91043
3    43.34333   25.43234   70.42607
4    25.43234   19.12533   32.97723
5    19.12533   18.08539   5.750153
6    18.08539   18.05549   0.165595
7    18.05549   18.05547   0.000137
8    18.05547   18.05547   9.4E-11
9    18.05547   18.05547   0

x_0 = 1000:

i    x_old      x_new      Rel. Er. (%)
0    1000       500.163    99.93482
1    500.163    250.4074   99.73971
2    250.4074   125.8546   98.96557
3    125.8546   64.22246   95.96669
4    64.22246   34.64928   85.35004
5    34.64928   22.02892   57.28995
6    22.02892   18.41382   19.63253
7    18.41382   18.05896   1.965051
8    18.05896   18.05547   0.019311
9    18.05547   18.05547   1.86E-06
10   18.05547   18.05547   1.97E-14
11   18.05547   18.05547   0
Types of root-finding methods

Bracketing (Closed) Methods:

In this type of method, we do not know the exact solution; however, we know an interval that includes at least one solution.
Example:
Find x at which f(x) = 0, given that x ∈ [0, 10]:
f(x) = 2x^2 - x - 15 = 0
Advantage: closed methods always converge.
Disadvantage: it is not always possible to have a known interval.
Open Methods:
In this type, we know neither the exact solution nor where it is located, so we always have to start with a guess.

Advantage: fast convergence, even when the first guess is far from the exact solution.
Disadvantage: they may diverge or miss the targeted solution when multiple solutions exist.
Example of a closed/bracketing method

Bisection Method: Algorithm


Given:
Function f(x) = 0, interval [x_L, x_U], and maximum acceptable relative error E_{r,max}

Step (1): make sure that the f(x) values at both bounds, x_L and x_U, have different signs.
Step (2): evaluate x_r = \frac{x_L + x_U}{2} and f(x_r).
Step (3): select the new interval so as to maintain the sign criterion in step (1).
Step (4): check the new relative error:

E_r = \left|\frac{x_{r,new} - x_{r,old}}{x_{r,new}}\right| \times 100%

Step (5): if E_r < E_{r,max}, stop; otherwise go back to step (2).
Bisection Method: Algorithm
Example: Solve

f(x) = 2x^2 - x - 15, where x ∈ [0, 10], such that E_r ≤ 10%

Let x_{L,0} = 0 and x_{U,0} = 10, hence x_r = \frac{0 + 10}{2} = 5

f(x_{L,0}) = -15, f(x_{U,0}) = 175 → f(x_{L,0}) · f(x_{U,0}) < 0, and f(x_{r,0}) = 30

Since f(x_{L,0}) · f(x_{r,0}) < 0, the new interval is [x_{L,1}, x_{U,1}] = [0, 5]:

f(x_{L,1}) = -15, f(x_{U,1}) = 30, x_r = \frac{0 + 5}{2} = 2.5 → f(x_{r,1}) = -5
Bisection Method: Algorithm
i   x_L      x_U     f(x_L)    f(x_U)    x_r      f(x_r)    E_r, %
0   0        10      -15       175       5        30        --
1   0        5       -15       30        2.5      -5        100
2   2.5      5       -5        30        3.75     9.375     33.33333
3   2.5      3.75    -5        9.375     3.125    1.40625   20
4   2.5      3.125   -5        1.40625   2.8125   -1.99219  11.11111
5   2.8125   3.125   -1.99219  1.40625   2.96875  -0.3418   5.263158

E_r = \left|\frac{x_{r,new} - x_{r,old}}{x_{r,new}}\right| \times 100%

The exact answer lies within x* ∈ [2.8125, 3.125]


Bisection Method: Algorithm
Convergence rate
In the bisection method, after each iteration the search domain (Δ) is halved relative to the previous iteration, i.e.,
Δ_{i+1} = 0.5 Δ_i
and hence, for example,
Δ_3 = 0.5 Δ_2 = 0.5 (0.5 Δ_1) = 0.5 (0.5 (0.5 Δ_0))
Hence,
Δ_i = 0.5^i Δ_0

How many iterations are needed to reach a final solution with an accuracy of ±0.001 if the initial search domain is x ∈ [-10, 10]?
Bisection Method: Algorithm

The targeted accuracy is within 0.001, hence Δ_n = 0.001.

Δ_0 = 10 - (-10) = 20

Δ_n = 0.5^n × 20 = 0.001 → 0.5^n = 0.001/20 → n ln(0.5) = ln(0.001/20)

n = \frac{\ln(0.001/20)}{\ln 0.5} = 14.29

Hence, 15 iterations are needed to reach a final solution within the targeted accuracy.
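As a quick check, the same count can be computed in one line of Matlab (variable name illustrative):

n = log(20 / 0.001) / log(2)   % = 14.29, so round up to 15 iterations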
Programming Task …

Code the bisection method in Matlab.
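A minimal Matlab sketch is given below; the function name bisection and its argument names are illustrative, not prescribed by the slides. It assumes f is passed as a function handle and that f(xL) and f(xU) have opposite signs.

function xr = bisection(f, xL, xU, Er_max)
% BISECTION  Root of f(x) = 0 in [xL, xU], stopping when the approximate
% relative error (in %) drops below Er_max.
    if f(xL) * f(xU) > 0
        error('f(xL) and f(xU) must have opposite signs.');
    end
    xr = (xL + xU) / 2;
    Er = Inf;
    while Er > Er_max
        if f(xL) * f(xr) < 0        % root lies in the lower half
            xU = xr;
        else                        % root lies in the upper half
            xL = xr;
        end
        xr_new = (xL + xU) / 2;
        Er = abs((xr_new - xr) / xr_new) * 100;
        xr = xr_new;
    end
end

Calling bisection(@(x) 2*x.^2 - x - 15, 0, 10, 10) should reproduce the worked example above, returning about 2.96875.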


Open Methods
(Chapter 6)

Open methods involve searching for the solution within an open domain, starting from an initial search point (or points). In this course, three open methods are covered:
• Simple Fixed-Point Iteration
• Newton’s (aka Newton-Raphson) Method
• Secant Method
Simple Fixed-Point Iteration
The simple fixed-point iteration method is based on the concept of using a fixed point in an iterative manner in order to get the solution of the given equation.
• The equation is solved or manipulated so as to put the variable on one side, and the equation is then solved iteratively.
• For a given function f(x) = 0, a fixed point is a point in the domain of a function g such that g(x) = x.
• After solving for the independent variable, the expression is used to guess the next iteration.
Simple Fixed-Point Iteration
Example (1): Solve

f(x) = x^2 - 2x + 3

Solving for x:

x = \frac{x^2 + 3}{2} → x_{i+1} = g(x_i) = \frac{x_i^2 + 3}{2}

Example (2): Solve

f(x) = \sin x = 0

Adding x to both sides:

\sin x + x = x → x_{i+1} = g(x_i) = \sin x_i + x_i
Simple Fixed-Point Iteration
Convergence: the idea of simple fixed-point iteration
f(x) = e^{-x} - x → x = e^{-x} → x_{i+1} = e^{-x_i} = g(x_i)
Let f_1(x) = x and f_2(x) = g(x).
Starting from x_0, x_1 is chosen as the point at which f_1(x_1) = f_2(x_0), and so on.
Convergence is ensured only if the absolute slope of f_2 is less than the slope of f_1, i.e., |g'(x)| < 1.

Exercise: Prove that the function f(x) = x^2 - 2x + 3 is convergent when using fixed-point iteration on the range 0 ≤ x ≤ 0.4.

f(x) = x^2 - 2x + 3

x = \frac{x^2 + 3}{2} → g'(x) = x, for 0 ≤ x ≤ 0.4

0 ≤ g'(x) ≤ 0.4 < 1, so |g'(x)| < 1
Simple Fixed-Point Iteration
Example: Find the root of f(x) = e^{-x} - x = 0

Solving for x:

x = e^{-x} → x_{i+1} = e^{-x_i}

With x_0 = 0:

x_1 = e^{0} = 1,  x_2 = e^{-1} = 0.367879

Further proceeding with the method yields the results shown in the following table:

i    x_i        x_{i+1} = g(x_i)   E_r, %
0    0          1                  100
1    1          0.367879           171.8282
2    0.367879   0.692201           46.85364
3    0.692201   0.500474           38.30915
4    0.500474   0.606244           17.44679
5    0.606244   0.545396           11.15662
6    0.545396   0.579612           5.903351
7    0.579612   0.560115           3.480867
8    0.560115   0.571143           1.930804
9    0.571143   0.564879           1.108868
10   0.564879   0.568429           0.624419
11   0.568429   0.566415           0.355568
12   0.566415   0.567557           0.201197
13   0.567557   0.566909           0.114256
14   0.566909   0.567276           0.064752
15   0.567276   0.567068           0.036739
16   0.567068   0.567186           0.020831
17   0.567186   0.567119           0.011816
18   0.567119   0.567157           0.006701
19   0.567157   0.567135           0.003800
20   0.567135   0.567148           0.002155
21   0.567148   0.567141           0.001222
22   0.567141   0.567145           0.000693
23   0.567145   0.567142           0.000393
24   0.567142   0.567144           0.000223
25   0.567144   0.567143           0.000126
26   0.567143   0.567143           7.17E-05
27   0.567143   0.567143           4.07E-05
Simple Fixed-Point Iteration
The following figure shows a graphical representation of how the value of x changes. You can think of the line y = x as the guide that leads to the next iteration, in an effort to meet at the point of intersection of y = x and e^{-x}.
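A minimal Matlab sketch of the method is shown below; the function name fixed_point and its arguments are illustrative, not from the slides. It assumes the rearranged iteration function g is supplied as a function handle.

function x = fixed_point(g, x0, Er_max, max_iter)
% FIXED_POINT  Simple fixed-point iteration x = g(x), starting from x0.
% Stops when the approximate relative error (%) drops below Er_max.
    x = x0;
    for i = 1:max_iter
        x_new = g(x);
        Er = abs((x_new - x) / x_new) * 100;   % approximate relative error, %
        x = x_new;
        if Er <= Er_max
            return
        end
    end
    warning('Maximum number of iterations reached.');
end

For the example above, fixed_point(@(x) exp(-x), 0, 1e-4, 100) should converge to about 0.567143.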
Newton-Raphson Method
Newton's method (also known as the Newton-Raphson method) is based on using the tangent line at a given point as a lead to its intersection with the x-axis.
From the figure:

f'(x_0) = \tan\theta = \frac{f(x_0)}{x_0 - x_1} → x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}

The same can be derived from the Taylor series.
Taylor series:

f(x_{i+1}) = f(x_i) + f'(x_i)(x_{i+1} - x_i) + R_1

When f(x_{i+1}) = 0:

x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}
Newton-Raphson Method
Example: Use the Newton-Raphson method to solve f(x) = e^{-x} - x = 0

f'(x) = -e^{-x} - 1 → x_{i+1} = x_i - \frac{e^{-x_i} - x_i}{-e^{-x_i} - 1}

With x_0 = 0:

x_1 = 0 - \frac{e^{0} - 0}{-e^{0} - 1} = 0 - \frac{1}{-2} = \frac{1}{2}

x_2 = \frac{1}{2} - \frac{e^{-1/2} - 1/2}{-e^{-1/2} - 1} = \frac{1}{2} - (-0.06631) = 0.566311

Proceeding with the same procedure yields:

i   x_i           f(x_i)        f'(x_i)        E_r, %
0   0             1             -2             --
1   0.5           0.10653066    -1.60653066    100
2   0.566311003   0.00130451    -1.56761551    11.709291
3   0.567143165   1.9648E-07    -1.56714336    0.14672871
4   0.56714329    4.4409E-15    -1.56714329    2.2106E-05
5   0.56714329    0             -1.56714329    5.0897E-13
6   0.56714329    0             -1.56714329    0
Newton-Raphson Method
Pitfalls of the Newton-Raphson Method:
This method performs poorly when:
• There is an inflection point (f''(x) = 0) in the vicinity of a root
• It oscillates around a local maximum or minimum
• It misses roots (when multiple solutions exist)
• f'(x) = 0

These pitfalls can be avoided by identifying an initial point that is as close as possible to the expected solution. Such a selection will avoid scenarios 1, 2, and 4. In order to get all possible solutions (pitfall 3), one can try different initial points that are close enough to the solutions. This can be done by visual inspection of the f(x)-versus-x plot or by using a heuristic root-finding method to get estimated solutions.
Programming Task …

Code the Newton-Raphson method in Matlab.
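A minimal Matlab sketch is given below; the function name newton_raphson and its arguments are illustrative, not from the slides. It assumes both f and its derivative are supplied as function handles.

function x = newton_raphson(f, dfdx, x0, Er_max, max_iter)
% NEWTON_RAPHSON  Newton's method x_{i+1} = x_i - f(x_i)/f'(x_i).
% Stops when the approximate relative error (%) drops below Er_max.
    x = x0;
    for i = 1:max_iter
        x_new = x - f(x) / dfdx(x);            % Newton update
        Er = abs((x_new - x) / x_new) * 100;   % approximate relative error, %
        x = x_new;
        if Er <= Er_max
            return
        end
    end
    warning('Maximum number of iterations reached.');
end

For the example above, newton_raphson(@(x) exp(-x) - x, @(x) -exp(-x) - 1, 0, 1e-6, 50) should converge to about 0.56714329 in a handful of iterations.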


Secant Method
The concept of this method is the same as that of the Newton-Raphson method; it is used when it is inconvenient to derive f'(x).
The first derivative can be approximated by using the Taylor series to approximate the function, truncating the third term and beyond.

f(x_{i+1}) = f(x_i) + f'(x_i)(x_{i+1} - x_i) + f''(x_i)\frac{(x_{i+1} - x_i)^2}{2!} + f'''(x_i)\frac{(x_{i+1} - x_i)^3}{3!} + \cdots

Truncating the third term and beyond:

f(x_{i+1}) ≅ f(x_i) + f'(x_i)(x_{i+1} - x_i)

f'(x_i) ≅ \frac{f(x_{i-1}) - f(x_i)}{x_{i-1} - x_i}

Substituting for f'(x) in Newton's method yields:

x_{i+1} = x_i - \frac{f(x_i)(x_{i-1} - x_i)}{f(x_{i-1}) - f(x_i)}

Note that two initial points are needed to start the iteration, x_{-1} and x_0.
Line Segment Approximation
• Understanding continuity
• When can a curve be approximated as a straight line?
• When can we assume that f'(x) ≡ \frac{Δf(x)}{Δx}? This is possible when Δx → 0:

\lim_{Δx → 0} \frac{Δf(x)}{Δx} = \frac{df(x)}{dx}

Try this:
f(x) = 3x^{3.5} + 5x + 1
f'(x) = 10.5x^{2.5} + 5
Compare:
f'(6) = 930.9071 with:

\frac{f(6.01) - f(5.99)}{0.02} = 930.90873 when Δx = 0.01

\frac{f(6.5) - f(5.5)}{1} = 934.92 when Δx = 0.5
Line Segment Approximation

How can we utilize this fact to avoid differentiating f(x)?

Let x_0, x_1 ∈ ℝ:

f'(x_1) ≅ \frac{f(x_1) - f(x_0)}{x_1 - x_0}

and hence, substituting in Newton's iterative equation:

x_2 = x_1 - f(x_1)\frac{x_1 - x_0}{f(x_1) - f(x_0)}
Secant Method
Example: Find the root of f(x) = e^{-x} - x with x_{-1} = 5 and x_0 = 1.

x_{-1} = 5 → f(x_{-1}) = -4.99326
x_0 = 1 → f(x_0) = -0.63212

x_1 = 1 + 0.63212 \frac{1 - 5}{-0.63212 + 4.99326} = 0.420225

Proceeding with the same iterations yields:

i   x_{i-1}        x_i           f(x_{i-1})   f(x_i)     x_{i+1}    E_r, %
0   5              1             -4.99326     -0.63212   0.420225   137.9679
1   1              0.42022467    -0.63212     0.236675   0.578165   27.31756
2   0.420224673    0.57816532    0.236675     -0.01724   0.567442   1.889701
3   0.57816532     0.56744236    -0.01724     -0.00047   0.567143   0.052837
4   0.567442356    0.56714269    -0.00047     9.33E-07   0.567143   0.000105
5   0.567142695    0.56714329    9.33E-07     -5E-11     0.567143   5.68E-09
6   0.56714329     0.56714329    -5E-11       0          0.567143   0
Secant Method
Improving the First-Derivative Approximation
A basic definition of the first derivative is:

f'(x) = \lim_{Δx → 0} \frac{f(x + Δx) - f(x)}{Δx}

or

f'(x) = \lim_{h → 0} \frac{f(x + h) - f(x - h)}{2h}

Hence, a better approximation of the derivative at point x can be obtained by evaluating the rate of change of the function between (x + h) and (x - h), where the value of h approaches zero.
Let ε = 0.001:

f'(x) ≅ \frac{f(x + ε) - f(x - ε)}{2ε}
Secant Method
Example: Find the analytical and approximate values of f'(5) for the function:

f(x) = 3x^{1.5} + \ln x - e^x

Using analytical differentiation:

f'(x) = 4.5x^{0.5} + \frac{1}{x} - e^x → f'(5) = -138.1508532

Using approximate differentiation with ε = 0.001:

f'(5) = \frac{f(5.001) - f(4.999)}{0.002} = -138.150878
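The same comparison can be made with two lines of Matlab (variable names illustrative):

f = @(x) 3*x.^1.5 + log(x) - exp(x);                     % log(x) is the natural logarithm in Matlab
dfdx_5 = (f(5 + 0.001) - f(5 - 0.001)) / (2 * 0.001)     % approximately -138.150878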
Secant Method
Example (cont'd):
Comparing the two results, the approximate derivative is accurate to within about 10^{-5}%, i.e., almost identical to the exact value.
If one needs to start with a single initial guess without having to evaluate the derivative f'(x), then the derivative approximation can replace the derivative in Newton's method, and the iterative equation becomes:

x_{i+1} = x_i - \frac{2ε f(x_i)}{f(x_i + ε) - f(x_i - ε)}

The value of ε can be selected to be a very small number, e.g., ε = 0.0001.
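A minimal Matlab sketch of this single-guess variant is given below; the function name newton_numdiff and its arguments are illustrative, not from the slides. This is the iterative equation applied in the Van der Waals example that follows.

function x = newton_numdiff(f, x0, eps_h, Er_max, max_iter)
% NEWTON_NUMDIFF  Newton iteration with a centered-difference derivative:
% x_{i+1} = x_i - 2*eps_h*f(x_i) / (f(x_i + eps_h) - f(x_i - eps_h)).
    x = x0;
    for i = 1:max_iter
        x_new = x - 2 * eps_h * f(x) / (f(x + eps_h) - f(x - eps_h));
        Er = abs((x_new - x) / x_new) * 100;   % approximate relative error, %
        x = x_new;
        if Er <= Er_max
            return
        end
    end
    warning('Maximum number of iterations reached.');
end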
Secant Method
Example: Using the Van der Waals equation of state, find the volume of 100 moles of water vapor (in liters) at 100 kPa and 150 °C. For water vapor, a = 553.6 L²·kPa/mol² and b = 0.03052 L/mol. What would be a good initial guess to start with?

\left(P + a\left(\frac{n}{V}\right)^2\right)\left(\frac{V}{n} - b\right) = RT

P: Pressure
T: Temperature
V: Volume
n: number of moles
R: Gas constant [8.314 J/(mol·K) = 8.314 L·kPa/(mol·K)]
a and b are substance-dependent constants.
Secant Method
Example (cont'd):
The function to be solved can be established as:

f(V) = \left(P + a\left(\frac{n}{V}\right)^2\right)\left(\frac{V}{n} - b\right) - RT = 0

Substituting the known constants and values:

f(V) = \left(100 + 553.6\left(\frac{100}{V}\right)^2\right)\left(\frac{V}{100} - 0.03052\right) - 8.314 × (150 + 273) = 0

Further simplification:

f(V) = \left(100 + \frac{5.536 × 10^6}{V^2}\right)\left(\frac{V}{100} - 0.03052\right) - 3516.822 = 0
Secant Method
Example (cont'd):
An initial guess can be obtained by treating the water vapor as an ideal gas, hence:

V_0 = \frac{nRT}{P} = \frac{100 × 8.314 × (150 + 273)}{100} = 3516.822 L

Using Newton's equation with the approximation of the first derivative:

i   V          V + ε      V − ε      f(V)     f(V + ε)   f(V − ε)   V_{i+1}    E_r, %
0   3516.822   3516.832   3516.812   -3.052   -3.042     -3.062     3519.874   0.086708
1   3519.874   3519.884   3519.864   0        0.01       -0.01      3519.874   0

Notice that in this example, ε = 0.01. The rapid convergence can be attributed to the nature of the function as well as the selection of the initial guess.
Programming Task …

Code the Secant method in Matlab.
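A minimal Matlab sketch is given below; the function name secant and its arguments are illustrative, not from the slides. It assumes two initial points are supplied, as the derivation above requires.

function x = secant(f, x_prev, x_curr, Er_max, max_iter)
% SECANT  Secant method starting from the two initial points x_prev, x_curr.
% Stops when the approximate relative error (%) drops below Er_max.
    x = x_curr;
    for i = 1:max_iter
        x_new = x_curr - f(x_curr) * (x_prev - x_curr) / (f(x_prev) - f(x_curr));
        Er = abs((x_new - x_curr) / x_new) * 100;  % approximate relative error, %
        x_prev = x_curr;
        x_curr = x_new;
        x = x_curr;
        if Er <= Er_max
            return
        end
    end
    warning('Maximum number of iterations reached.');
end

For the example above, secant(@(x) exp(-x) - x, 5, 1, 1e-6, 50) should converge to about 0.567143, matching the tabulated iterations.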


Soft Skills
How can you find the root of a univariate equation using Microsoft Excel?

Solve: f(x) = 3x^2 - 5x \ln x = 10
First, make sure that you have the Solver add-in:
It should appear in the Data menu under the "Analyze" group.

If it is not available, a blank space will be shown there.

To load the Solver:

File menu → click Options → click Add-ins → click Solver Add-in → click Go… → check Solver Add-in → click OK

Once loaded, the Solver appears in the Data menu under the "Analyze" group.
Once the Solver has been loaded:

1- Initiate a guess and calculate the function.
2- Go to the Solver and set:

Set Objective: → the cell you want to have the value zero

Value of: → the targeted value (zero in this case)

By changing variable cells: → the cell you want to be adjusted (the root you are looking for)

For the given example, Excel retrieved the solution x = 2.92917874058944, at which f(x) = -1.21361E-06, which is almost zero.
Summary
