
Numerical Methods

Solution of Non-Linear Equations

Jayandra Raj Shrestha


May 2021

Institute of Engineering,
Tribhuvan University, Nepal

Why Numerical Methods?

• Solving a mathematical model representing a real-world problem analytically may be very
hard or impossible
• We may need to find approximate numerical solutions ⇒ Numerical Methods
• Numerical Methods:
• Obtain numerical solutions to mathematical problems involving continuous variables
• By performing numerical manipulations rather than using analytical relations
• Applying mathematical principle(s) in tricky ways, usually iteratively, until an approximate
solution is found within some tolerable error bounds
• Easier than analytical techniques to implement as computer algorithms
• The same method may be applied across different problems of the same nature
• A branch of mathematics and computer science that deals with creating, analyzing, and
implementing algorithms for obtaining numerical solutions to problems involving
continuous variables.
• Problem Domains:
Natural Sciences and Engineering, Social Sciences, Medicine, Business, etc.
Non-linear Equations

• Any single-variable equation of the form f(x) = 0 can be classified as a non-linear equation if
we cannot identify it clearly as a linear equation. E.g., polynomial equations, transcendental
equations (equations composed of trigonometric functions, logarithmic functions, etc.)
• Solution ⇔ Root ⇔ Zero of the function ⇔ value(s) of the variable x which satisfy f(x) = 0
• Non-linear equations may have multiple or infinitely many roots, or they may not possess any
real root at all.
• Analytical solution of non-linear equations demands a different approach for each new form of
the problem, whereas the same numerical technique may be employed for any such problem.
• Numerical techniques generally employ some mathematical principle iteratively in some tricky
manner to obtain one root at a time, numerically.
• Numerical techniques may be more cumbersome than analytical techniques and produce only
approximate results, but may be the only alternative when analytical techniques are hard or
impossible to implement.
• Compared to analytical techniques, numerical techniques are far easier to implement as
computer algorithms.
Bisection Method

Iterative application of the Intermediate Value Theorem to numerically find a real root of a
non-linear equation.

Intermediate Value Theorem:
If a function f(x) is continuous on an interval (a, b) such that f(a) and f(b) are of
opposite signs, then there exists at least one real root in the interval (a, b).

[Figure: graph of f(x) = 1/x^3 + sin(x), with a root bracketed between a and b]
Intermediate Value Theorem

[Figures: the possible cases for a bracket (a, b)]
• Multiple roots may exist if f(a) and f(b) are of opposite signs and the function is
continuous in (a, b).
• Even if f(a) and f(b) are of opposite signs, the existence of a root cannot be guaranteed
in (a, b) if the function is not continuous in (a, b).
• No root, or an even number of roots, may exist in (a, b) if f(a) and f(b) are of the same
sign (both +ve or both -ve).
Bisection Method

Step 1: Determine the initial interval (a, b) which contains a root.

Step 2: Bisect the interval: c = (a + b)/2.

Step 3: If f(c) and f(a) are of the same sign, take (c, b) as the new interval; i.e., regard c
as the new a. Otherwise take (a, c) as the new interval; i.e., regard c as the new b.

Repeat Steps 2 and 3.

[Figures: successive bisections; each step halves the bracket containing the root]
When to stop?

For an accuracy of 3 decimal places:

Tolerable Error (ε) = 0.0005

Absolute Error:
|b − a| ≤ ε

Relative Error:
|b − a|/|a| ≤ ε   OR   |b − a|/|b| ≤ ε   OR   |b − a|/|c| ≤ ε

Absolute Error in Functional Value:
|f(c)| ≤ ε
No. of iterations (n)

Error Tolerance = ε
Initial Interval = (a, b)

Regarding the size of the interval (a, b) as the error:

Error at the nth iteration = |b − a| / 2^n

∴ |b − a| / 2^n ≤ ε   ⇒   2^n ≥ |b − a| / ε

log(2^n) ≥ log(|b − a| / ε)

n log 2 ≥ log(|b − a|) − log(ε)

∴ n ≥ [log(|b − a|) − log(ε)] / log 2
Example

Find a real root of x sin(x) + cos(x) = 0 correct to three decimals using the Bisection Method.

Solution:

Linear Search:

x    | 0 | 1    | 2    | 3     | 4     | 5     | 6     | 7
f(x) | 1 | 1.38 | 1.40 | -0.57 | -3.68 | -4.51 | -0.72 | 5.35

Since f(2) = +ve and f(3) = -ve, and the function seems to be continuous in any interval,
there is at least one real root between x = 2 and x = 3.

Thus, let the initial interval be (a, b) = (2, 3).

Successive computations are shown in the following table.

Calculation Table:

i  | a (+)  | b (-)  | c = (a+b)/2 | f(c)
1  | 2      | 3      | 2.5         | 0.69504
2  | 2.5    | 3      | 2.75        | 0.12527
3  | 2.75   | 3      | 2.875       | -0.20727
4  | 2.75   | 2.875  | 2.8125      | -0.03738
5  | 2.75   | 2.8125 | 2.7813      | 0.04475
6  | 2.7813 | 2.8125 | 2.7969      | 0.00391
7  | 2.7969 | 2.8125 | 2.8047      | -0.01668
8  | 2.7969 | 2.8047 | 2.8008      | -0.00637
9  | 2.7969 | 2.8008 | 2.7989      | -0.00135
10 | 2.7969 | 2.7989 | 2.7979      | 0.00128
11 | 2.7979 | 2.7989 | 2.7984      | -0.00004

∵ |f(2.7984)| < 0.0005
∴ root = 2.798
Rate of Convergence

ε0 = |b − a|

ε1 = |b − a| / 2 = ε0 / 2

ε2 = |b − a| / 2^2 = ε1 / 2

... ... ...

εn+1 = εn / 2

Linear relationship between εn and εn+1 ⇒ Linear Convergence!
Algorithm/Pseudocode for Bisection Method

Define: f(x)

Read: a, b, E
if f(a) × f(b) > 0 then
Print: Error
else
repeat
c ← (a + b)/2
if f(c) × f(a) < 0 then
b←c
else
a←c
end if
until |f(c)| ≤ E
Print: c
end if
Assignment

1. Find the +ve real root between 6 and 7 correct to 2 decimal places and a negative real
root correct to four decimal places of x sin(x) + cos(x) = 0 using the Bisection Method.
2. While finding a real root of a non-linear equation using the bisection method, if the
maximum tolerable absolute error is 0.0005 (accuracy of three decimal places) and the
initial starting interval used is 1 (i.e., |b − a| = 1 initially), what would be the
approximate number of iterations required to obtain the specified accuracy of the root?
What if the initial starting interval used is 0.1?
3. Use bisection method to find the cube root of 7 correct to 4 decimal places by taking the
initial bracket with an interval of 0.1. [Hint: x3 − 7 = 0]
4. (Practical) Develop a simple algorithm and program code in C/C++ to find a real root of
a non-linear equation using the Bisection Method for a given function from user input
values of the interval (a, b) and error tolerance (E). Check your program by changing your
equation and also for different initial intervals of the same equation.

Numerical Methods
Solution of Non-Linear Equations
→ Newton-Raphson Method

Jayandra Raj Shrestha


May 2021

Institute of Engineering,
Tribhuvan University, Nepal
Newton-Raphson Method

• Approximate the curve by a straight line – the tangent drawn on the curve at a point
very close to the root (x0)
• x0 = initial guess to the root
• Red line = Tangent drawn on f(x) at (x0, f(x0)); x1 = first approximation of the root
• Green line = Tangent drawn on f(x) at (x1, f(x1)); x2 = second approximation of the root
• Proceed similarly to obtain x3, x4, ... until xi+1 ≈ xi

[Figure: successive tangents; each x-intercept x1, x2, ... lies closer to the root]
Newton-Raphson Method

Slope of the tangent on f(x) at x0 = f'(x0)

Also, from the figure, slope = f(x0) / (x0 − x1)

or, f'(x0) = f(x0) / (x0 − x1)

∴ x1 = x0 − f(x0) / f'(x0)

Similarly, x2 = x1 − f(x1) / f'(x1)

In general, xi+1 = xi − f(xi) / f'(xi)

[Figure: tangent at (x0, f(x0)) meets the x-axis at x1]
Derivation via Taylor’s Series (Analytical approach)

Let x0 be an approximate root of f(x) = 0 and x1 = x0 + h be the corresponding exact root.

Hence, f(x0) ≈ 0 whereas f(x0 + h) = 0.

Expanding f(x0 + h) using Taylor's Series:

f(x0) + h f'(x0) + (h^2/2!) f''(x0) + (h^3/3!) f'''(x0) + ... = 0

Since h is a very small value, neglecting the terms containing higher powers of h:

f(x0) + h f'(x0) = 0

or, f(x0) + (x1 − x0) f'(x0) = 0   ⇒   x1 = x0 − f(x0) / f'(x0)

The value of x1 thus obtained may not represent the exact root as mentioned earlier, but may
be regarded as a better approximation than x0.

Better approximations can be achieved from:

x2 = x1 − f(x1)/f'(x1)    x3 = x2 − f(x2)/f'(x2)    ...    xi+1 = xi − f(xi)/f'(xi)
Limitations of NR method

• Requires the first derivative of the function to be determined analytically
• xi+1 → ∞ when f'(xi) → 0
• Oscillation or divergence near local minima
• Oscillation or divergence near an inflection point [f''(xi) = 0]
• Root jumping: resulting in the computation of an unintended root
Example

Find a real root of x sin(x) + cos(x) = 0 with an accuracy of 99.999% using the Newton
Raphson Method.

Solution:

Accuracy required = 99.999%
Tolerable error = (100 − 99.999)% = 0.001%
Tolerable relative error = 0.001/100 = 0.00001

f(x) = x sin(x) + cos(x)
f'(x) = x cos(x)

Newton Raphson iteration formula:
xi+1 = xi − f(xi)/f'(xi) = xi − [xi sin(xi) + cos(xi)] / [xi cos(xi)]

∵ f(2) = +ve and f(3) = -ve, there is a real root in the interval (2, 3).
∴ let the initial guess be x0 = 2.5

Calculation table:

i | xi       | xi+1     | Rel. Err. = |xi+1 − xi| / |xi+1|
0 | 2.5      | 2.847022 | 0.121889
1 | 2.847022 | 2.799175 | 0.017093
2 | 2.799175 | 2.798386 | 2.82 × 10^-4
3 | 2.798386 | 2.798386 | 0

∴ a root of the required accuracy = 2.798386
Rate of Convergence

Let α be a simple real root of f(x) = 0 such that f(α) = 0.

Let xn be an approximate root near α and xn+1 be the successive better approximation.

Thus, the Taylor series expansion of f(α) = f(xn + (α − xn)) gives:

f(xn) + (α − xn) f'(xn) + (1/2)(α − xn)^2 f''(xn) + ... = 0

Neglecting higher order terms and re-arranging:

−f(xn)/f'(xn) = (α − xn) + (1/2)(α − xn)^2 f''(xn)/f'(xn)

Combining the above with the Newton-Raphson iteration formula xn+1 = xn − f(xn)/f'(xn):

xn+1 − α = (1/2)(xn − α)^2 f''(xn)/f'(xn)

Setting εn = xn − α and εn+1 = xn+1 − α:

εn+1 ≈ (1/2) εn^2 f''(α)/f'(α)

Hence, the Newton-Raphson method has quadratic (second order) convergence.
Assignment

1. Rework with the given example taking x0 = 1 and explain about the result thus
obtained.
2. Determine a real root of ex + cos(x) − 5 = 0 correct to 6 decimals (absolute error
< 0.0000005) using the Newton-Raphson Method.
3. (Practical) Develop an algorithm and program in C/C++ to find a real root of
a polynomial equation of any degree given its coefficients. Hint: construct
different functions for the evaluation of the polynomial and its derivative at a given
point, given the coefficients. For an n-degree polynomial:

f(x) = a0 + Σ_{i=1}^{n} ai x^i        f'(x) = a1 + Σ_{i=2}^{n} i·ai·x^(i−1)

where a0 is the constant term and a1, a2, ..., an are the coefficients of
x, x^2, ..., x^n of the polynomial equation of degree n.
Numerical Methods
Solution of Non-Linear Equations
→ Method of False Position (Regula Falsi Method)
→ Secant Method

Jayandra Raj Shrestha


May 2021

Institute of Engineering,
Tribhuvan University, Nepal
Method of False Position (Regula Falsi Method)

• Determine an interval (a, b) containing a root using the Intermediate Value Theorem.
• Instead of bisecting the interval, join the points (a, f(a)) and (b, f(b)) by a secant line.
• Determine the point c, which is the point of intersection between the secant line and
the x-axis:

f(a)/(a − c) = f(b)/(b − c)   ⇒   c = [a·f(b) − b·f(a)] / [f(b) − f(a)]

• Determine the new interval as in the Bisection Method:
  If the sign of f(c) is the same as f(a): move a to c
  Otherwise: move b to c
• Repeat until f(c) ≈ 0.

[Figure: secant line through (a, f(a)) and (b, f(b)) cutting the x-axis at c]
Problem with False Position Method

• Despite deciding the new sub-interval using the Intermediate Value Theorem, it
can be observed that one of the brackets may remain fixed for most iterations, resulting in
slow convergence.
• The convergence rate can be improved by taking the latest two x-values to compute
the next approximation, completely avoiding the Intermediate Value Theorem – the
Secant method.
• This improves the convergence rate, however, at the cost of reliability.

Secant Method

• Approximate the curve by a straight line – a secant line drawn on the curve between
two points very close to the root, x0 and x1, which may or may not bracket the root.
• Red line = Secant drawn on f(x) between (x0, f(x0)) and (x1, f(x1));
x2 = first approximation of the root
• Purple line = Secant drawn on f(x) between (x1, f(x1)) and (x2, f(x2));
x3 = second approximation of the root
• Green line = Secant drawn on f(x) between (x2, f(x2)) and (x3, f(x3));
x4 = third approximation of the root
• Proceed until f(xi) ≈ 0

[Figure: successive secant lines closing in on the root]
Secant Method

Slope of the secant line = f(x0) / (x0 − x2)

Also, = f(x1) / (x1 − x2)

i.e., f(x0) / (x0 − x2) = f(x1) / (x1 − x2)

x1 f(x0) − x2 f(x0) = x0 f(x1) − x2 f(x1)

x2 [f(x1) − f(x0)] = x0 f(x1) − x1 f(x0)

∴ Given x0, x1:   x2 = [x0 f(x1) − x1 f(x0)] / [f(x1) − f(x0)]

[Figure: secant through (x0, f(x0)) and (x1, f(x1)) cutting the x-axis at x2]
Secant Method

Similarly,

x3 = [x1 f(x2) − x2 f(x1)] / [f(x2) − f(x1)]   and so on...

In general,

xi+1 = [xi−1 f(xi) − xi f(xi−1)] / [f(xi) − f(xi−1)]

or,

c = [a·f(b) − b·f(a)] / [f(b) − f(a)]   where a = xi−1, b = xi, c = xi+1
Example

Find a real root of e^x + sin(x) − 4 = 0 correct to 5 decimals using the Secant Method.

Here,
f(x) = e^x + sin(x) − 4
f(1) = −0.4402 (-ve)
f(2) = 4.2984 (+ve)

Let the initial interval be (x0, x1) = (a, b) = (1.5, 1.7).

Secant formula:
c = [a·f(b) − b·f(a)] / [f(b) − f(a)]

Calculation Table:

i | a [xi−1] | b [xi]   | c [xi+1] | f(c)
1 | 1.5      | 1.7      | 1.200093 | 0.252498
2 | 1.7      | 1.200093 | 1.143058 | 0.046251
3 | 1.200093 | 1.143058 | 1.130268 | 0.001013
4 | 1.143058 | 1.130268 | 1.129982 | 5 × 10^-6
5 | 1.130268 | 1.129982 | 1.12998  | −2 × 10^-6

∵ |f(1.12998)| < 0.000005
∴ a root of the required accuracy = 1.12998
Rate of Convergence

εn+1 ∝ εn^k

where k = (1 + √5)/2 = 1.618 (the Golden Ratio)

• Super-linear convergence
• Much faster than the Bisection Method (linear convergence)
• Almost comparable to the Newton-Raphson Method (quadratic convergence)
Assignments

1. Show that the Secant method can be visualized as an approximation of the


Newton-Raphson method and hence derive the Secant formula from the Newton-Raphson
iteration formula.
2. Find a real root of x3 − 3x2 + 2x − 2 = 0 correct to 5 decimals, starting the iteration
from initial guess values with unit interval.
a) False Position Method
b) Secant Method
3. (Practical) Write algorithm/pseudocode and program code in C/C++ to determine a real
root of a non-linear equation using:
a) Secant method.
b) False Position method
Check your program for the following equations and show the corresponding outputs:
• x3 − 3x2 + 2x − 2 = 0
• x sin(x) + cos(x) = 0
• x2 + 4x + 5 = 0
Numerical Methods
Solution of Non-Linear Equations
→ Fixed Point Iteration Method

Jayandra Raj Shrestha


May 2021

Institute of Engineering,
Tribhuvan University, Nepal
Fixed Point Iteration Method

Mechanism:
f(x) = 0
  ⇓
x = g(x)
  ⇓
Iteration formula: xi+1 = g(xi)

Example:
cos(x) + 3x − 2 = 0
  ⇓
x = [2 − cos(x)] / 3
  ⇓
Iteration formula: xi+1 = [2 − cos(xi)] / 3
Example 1

Find a real root of cos(x) + 3x − 2 = 0 correct to 5 decimal places using the fixed point
iteration method.

Solution:
f(x) = cos(x) + 3x − 2

Writing f(x) = 0 as x = g(x):
x = [2 − cos(x)] / 3

Iteration formula:
xi+1 = g(xi) = [2 − cos(xi)] / 3

∵ f(0) = -ve and f(1) = +ve, let x0 = 0.5

x1 = g(x0) = [2 − cos(x0)]/3 = 0.374139
x2 = g(x1) = [2 − cos(x1)]/3 = 0.356392
x3 = g(x2) = [2 − cos(x2)]/3 = 0.354279
x4 = g(x3) = [2 − cos(x3)]/3 = 0.354034
x5 = g(x4) = [2 − cos(x4)]/3 = 0.354006
x6 = g(x5) = [2 − cos(x5)]/3 = 0.354003

∵ |x6 − x5| < 0.000005
∴ root = 0.35400
Geometric Interpretation

Background:

• Where does the solution of f(x) = 0 lie in the x-y graph?
• At the intersection between y = f(x) and y = 0 (the x-axis)! But how?
• Because we visualized f(x) = 0 by replacing 0 with y.

In the fixed point iteration method:

• We transform f(x) = 0 to x = g(x)
• We cannot plot x = g(x) in the x-y graph. So, we take y = g(x) and plot it.
• How can we show the solution of f(x) = 0 or x = g(x) in the x-y graph using y = g(x)?
• At the intersection between y = x and y = g(x)! But how?
• Because we visualized f(x) = 0 or x = g(x) by replacing x with y.
Case I

Near the root: 0 < g'(x) < 1
Resulting in: Monotonic Convergence

[Figure: cobweb plot of y = x and y = g(x); the iterates x1 = g(x0), x2 = g(x1),
x3 = g(x2), ... step monotonically toward the fixed point of g(x)]
Case II

Near the root: g'(x) > 1
Resulting in: Monotonic Divergence

[Figure: cobweb plot; the iterates x1 = g(x0), x2 = g(x1), x3 = g(x2), ... step
monotonically away from the fixed point of g(x)]
Case III

Near the root: −1 < g'(x) < 0
Resulting in: Oscillating Convergence

[Figure: cobweb plot; the iterates x1 = g(x0), x2 = g(x1), x3 = g(x2), x4 = g(x3), ...
spiral inward around the fixed point of g(x)]
Case IV

Near the root: g'(x) < −1
Resulting in: Oscillating Divergence

[Figure: cobweb plot; the iterates x1 = g(x0), x2 = g(x1), x3 = g(x2), ... spiral
outward away from the fixed point of g(x)]
Convergence Properties

i) 0 < g'(x) < 1 ⇒ Monotonic Convergence
ii) g'(x) > 1 ⇒ Monotonic Divergence
iii) −1 < g'(x) < 0 ⇒ Oscillating/Spiral Convergence
iv) g'(x) < −1 ⇒ Oscillating/Spiral Divergence

From i) and ii):
g'(x) = +ve ⇒ Monotonic
From iii) and iv):
g'(x) = -ve ⇒ Oscillating / Spiral
From ii) and iv):
|g'(x)| > 1 ⇒ Divergent
From i) and iii):
|g'(x)| < 1 ⇒ Convergent
Example 2

Find a real root of x^3 + x^2 − 1 = 0 correct to 6 decimals using the fixed point method.

Solution:
f(x) = x^3 + x^2 − 1
f(0) = -ve and f(1) = +ve

x = g(x)              | g'(x)                      | g'(0) | g'(1)   | g'(0.5) | Useful?
x = (1 − x^2)^(1/3)   | −2x / [3(1 − x^2)^(2/3)]   | 0     | −∞      | −0.4038 | ✗
x = (1 − x^3)^(1/2)   | −3x^2 / [2(1 − x^3)^(1/2)] | 0     | −∞      | −0.4009 | ✗
x = x^3 + x^2 + x − 1 | 3x^2 + 2x + 1              | 1     | 6       | 2.75    | ✗
x = (x + 1)^(−1/2)    | −1 / [2(x + 1)^(3/2)]      | −0.5  | −0.1768 | −0.2722 | ✓

[Please complete this problem]
Finding the Square Root of a positive real number using basic math operations

Example: √5 = ?

Let x = √5
∴ x^2 = 5 ⇒ x^2 − 5 = 0 ⇒ f(x) = x^2 − 5

Writing f(x) = 0 as x = g(x):
x = 5/x ⇒ Not useful [here g'(x) = −5/x^2, so |g'(√5)| = 1 and the iteration does not converge]
2x = 5/x + x
x = (5/x + x)/2 ⇒ Standard form

Standard form for √N:
x = (N/x + x)/2

What about cube roots and higher roots?
Assignments

1. Complete the problem given in Example 2.
2. Find a real root of e^x tan(x) = 1 correct to five decimal places using the Fixed
Point Method.
3. Find the cube root of 10 using the fixed point iteration method. Note that the
iteration formula should consist of no mathematical operations other than the
basic math operations (addition, subtraction, multiplication, division).
4. Formulate a general iteration formula using the fixed point technique to find the mth
root of N and use it to compute the 5th root of 100 correct to 5 decimal places.
Numerical Methods
Solution of System of Linear Algebraic equations
→ Gauss Jordan Method

Jayandra Raj Shrestha


June 2021

Institute of Engineering,
Tribhuvan University, Nepal
Gauss Jordan Method

Given the Linear System:

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

In matrix form:

[a11 a12 a13] [x1]   [b1]
[a21 a22 a23] [x2] = [b2]
[a31 a32 a33] [x3]   [b3]

Procedure:
• Reduce the augmented coefficient matrix to diagonal form via the row equivalent technique:
  • eliminate the non-diagonal coefficients column-wise
  • by taking the column diagonal as pivot
  • and the row corresponding to the column diagonal as the pivotal row
• Obtain the solution vector from the diagonalized matrix
  • by dividing the constant term of each row by the row diagonal (pivot element)

Augmented coefficient matrix:      Diagonal Form:             Solution:
[a11 a12 a13 : b1]        [α11 0   0   : β1]       x1 = β1/α11
[a21 a22 a23 : b2]   ⇒    [0   α22 0   : β2]   ⇒   x2 = β2/α22
[a31 a32 a33 : b3]        [0   0   α33 : β3]       x3 = β3/α33

JRS\IOE
Column-wise Elimination

[a11 a12 a13 : b1]    R1 ⇒ Pivotal Row
[a21 a22 a23 : b2]    R2 ← R2 − (a21/a11) R1   [To eliminate a21]
[a31 a32 a33 : b3]    R3 ← R3 − (a31/a11) R1   [To eliminate a31]

  [a11 a12  a13  : b1 ]   R1 ← R1 − (a12/a22') R2   [To eliminate a12]
∼ [0   a22' a23' : b2']   R2 ⇒ Pivotal Row
  [0   a32' a33' : b3']   R3 ← R3 − (a32'/a22') R2   [To eliminate a32']

  [a11 0    a13' : b1' ]   R1 ← R1 − (a13'/a33'') R3   [To eliminate a13']
∼ [0   a22' a23' : b2' ]   R2 ← R2 − (a23'/a33'') R3   [To eliminate a23']
  [0   0    a33'': b3'']   R3 ⇒ Pivotal Row

  [a11 0    0    : b1'']                   x1 = b1''/a11
∼ [0   a22' 0    : b2'']   ⇒ Solution ⇒    x2 = b2''/a22'
  [0   0    a33'': b3'']                   x3 = b3''/a33''
In summary

Eliminate all ai,j where:
  j = 1, 2, ..., n
  i = 1, 2, ..., n
  i ≠ j
  n = No. of unknowns
using the row operation:
  Ri ← Ri − (ai,j / aj,j) Rj

In the row operation formula:
  ai,j ⇒ Element being eliminated
  Ri ⇒ Row corresponding to ai,j
  aj,j ⇒ Pivot element
  Rj ⇒ Pivotal Row

Finally, obtain the solution:
  xi = bi / ai,i   where i = 1, 2, ..., n

Limitations
• The pivot element aj,j should not be zero.
• When aj,j ≪ ai,j during the row operation, small round-offs may result in large errors.
Gauss Jordan Method: Example

Solve:
5x1 + 4x2 + 3x3 = 28
7x1 + 4x2 − x3 = 20
8x1 − 5x2 + 4x3 = 24

Augmented Coefficient Matrix:
[5  4  3 : 28]
[7  4 −1 : 20]
[8 −5  4 : 24]

R2 ← R2 − (7/5)R1
R3 ← R3 − (8/5)R1
∼ [5  4     3    : 28    ]
  [0 −8/5  −26/5 : −96/5 ]
  [0 −57/5 −4/5  : −104/5]

R1 ← R1 + (5/2)R2
R3 ← R3 − (57/8)R2
∼ [5  0   −10    : −20  ]
  [0 −8/5 −26/5  : −96/5]
  [0  0    145/4 : 116  ]

R1 ← R1 + (8/29)R3
R2 ← R2 + (104/725)R3
∼ [5  0   0     : 12    ]
  [0 −8/5 0     : −64/25]
  [0  0   145/4 : 116   ]

Solution:
x1 = 12/5 = 2.4
x2 = (−64/25) / (−8/5) = 1.6
x3 = 116 / (145/4) = 3.2
Assignments

1. Construct a system of linear equations in four unknowns of your own (using predetermined
values for the unknowns) and show its solution by the Gauss Jordan Method. Make sure that
the system so constructed has a unique solution (the coefficient matrix should be
non-singular).
2. Write an algorithm/pseudo-code and a program for solving a system of linear equations in
any number of unknowns using the Gauss-Jordan Method, taking care of error condition(s)
that may arise during the computation. Check your program for systems comprising two,
three, four, and five unknowns.
3. Find an expression for the total number of scalar calculations required to
solve a system of n unknowns using the Gauss-Jordan Method.
Numerical Methods
Solution of System of Linear Algebraic equations
→ Gauss Elimination Method

Jayandra Raj Shrestha


June 2021

Institute of Engineering,
Tribhuvan University, Nepal
Gauss Elimination Method

Given the Linear System:

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

In matrix form:

[a11 a12 a13] [x1]   [b1]
[a21 a22 a23] [x2] = [b2]
[a31 a32 a33] [x3]   [b3]

Procedure:
• Reduce the augmented coefficient matrix to upper triangular form via the row equivalent
technique:
  • column-wise, eliminate the elements below the diagonal
  • by taking the column diagonal as pivot
  • and the row corresponding to the column diagonal as the pivotal row
• Obtain the solution vector via back-substitution

Augmented coeff. matrix:      Upper triangular form:        Solution:
[a11 a12 a13 : b1]        [α11 α12 α13 : β1]       x3 = β3/α33
[a21 a22 a23 : b2]   ⇒    [0   α22 α23 : β2]   ⇒   x2 = (β2 − α23 x3)/α22
[a31 a32 a33 : b3]        [0   0   α33 : β3]       x1 = (β1 − α12 x2 − α13 x3)/α11
Column-wise Elimination

[a11 a12 a13 : b1]    R1 ⇒ Pivotal Row
[a21 a22 a23 : b2]    R2 ← R2 − (a21/a11) R1   [To eliminate a21]
[a31 a32 a33 : b3]    R3 ← R3 − (a31/a11) R1   [To eliminate a31]

  [a11 a12  a13  : b1 ]
∼ [0   a22' a23' : b2']   R2 ⇒ Pivotal Row
  [0   a32' a33' : b3']   R3 ← R3 − (a32'/a22') R2   [To eliminate a32']

  [a11 a12  a13  : b1 ]
∼ [0   a22' a23' : b2']
  [0   0    a33'': b3'']

Back-substitution:
x3 = b3''/a33''
x2 = (b2' − a23' x3)/a22'
x1 = (b1 − a12 x2 − a13 x3)/a11
⇒ Solution
In summary

Eliminate all ai,j where:
  j = 1, ..., (n−1)
  i = (j+1), ..., n
  n = No. of unknowns
using the row operation:
  Ri ← Ri − (ai,j / aj,j) Rj

In the row operation formula:
  ai,j ⇒ Element being eliminated
  Ri ⇒ Row corresponding to ai,j
  aj,j ⇒ Pivot element
  Rj ⇒ Pivotal Row

Perform back substitution for the solution:
  xn = bn / an,n
  xi = (bi − Σ_{j=i+1}^{n} ai,j xj) / ai,i   where i = n−1, n−2, ..., 1

Limitations
• The pivot element aj,j should not be zero.
• When aj,j ≪ ai,j during the row operation, small round-offs may result in large errors.
Gauss Elimination Method: Example

Solve:
5x1 + 4x2 + 3x3 = 28
7x1 + 4x2 − x3 = 20
8x1 − 5x2 + 4x3 = 24

Augmented Coefficient Matrix:
[5  4  3 : 28]
[7  4 −1 : 20]
[8 −5  4 : 24]

R2 ← R2 − (7/5)R1
R3 ← R3 − (8/5)R1
∼ [5  4     3    : 28    ]
  [0 −8/5  −26/5 : −96/5 ]
  [0 −57/5 −4/5  : −104/5]

R3 ← R3 − (57/8)R2
∼ [5  4    3     : 28   ]
  [0 −8/5 −26/5  : −96/5]
  [0  0    145/4 : 116  ]

Back Substitution:
x3 = 116 / (145/4) = 3.2
x2 = [−96/5 − (−26/5)(3.2)] / (−8/5) = 1.6
x1 = [28 − 4(1.6) − 3(3.2)] / 5 = 2.4

∴ (x1, x2, x3) = (2.4, 1.6, 3.2)
Pivoting

A strategy to choose a larger element as pivot such that it is:

• non-zero ⇒ to prevent division by zero
• larger than the elements being eliminated ⇒ to prevent large errors due to small round-offs

Partial Pivoting
Before eliminating the elements of a column:
• compare the pivot with the elements below it
• bring the numerically largest element into the pivot position
• by interchanging rows.
* Most common approach.

Complete Pivoting
Before eliminating the elements of a column:
• compare the pivot with the elements below as well as to the right of it
• bring the numerically largest element into the pivot position
• by interchanging rows and/or columns.
* While interchanging columns, the order of variables should also be tracked.
Assignments

1. Construct a system of linear equations in four unknowns of your own (using predetermined values
for the unknowns) and show its solution by the Gauss Elimination Method. Make sure that the
system so constructed has a unique solution (the coefficient matrix should be non-singular).
2. Write an algorithm/pseudo-code and a program for solving a system of linear equations in any
number of unknowns using the Gauss Elimination Method, taking care of error condition(s) that
may arise during the computation. Check your program for systems comprising two, three, four,
and five unknowns.
3. Find an expression for the total number of scalar calculations required to solve a
system of n unknowns using the Gauss Elimination Method.
Numerical Methods
Solution of System of Linear Algebraic equations
→ Matrix Inverse via Gauss-Jordan Method
→ Matrix Inverse via Gauss-Elimination Method

Jayandra Raj Shrestha


June 2021

Institute of Engineering,
Tribhuvan University, Nepal
Matrix Inverse via Gauss Jordan Method

Given a non-singular square matrix:

      [a11 a12 a13]
  A = [a21 a22 a23]
      [a31 a32 a33]

Let X = A⁻¹, with entries:

      [x11 x12 x13]
  X = [x21 x22 x23]
      [x31 x32 x33]

∴ AX = I, i.e.,

  [a11 a12 a13] [x11 x12 x13]   [1 0 0]
  [a21 a22 a23] [x21 x22 x23] = [0 1 0]
  [a31 a32 a33] [x31 x32 x33]   [0 0 1]

which can be written column by column as:

  A·[x11 x21 x31]ᵀ = [1 0 0]ᵀ
  A·[x12 x22 x32]ᵀ = [0 1 0]ᵀ
  A·[x13 x23 x33]ᵀ = [0 0 1]ᵀ
Procedure: each column system has its own augmented matrix [A : e_j]; combining the three right-hand sides gives

  [a11 a12 a13 : 1 0 0]
  [a21 a22 a23 : 0 1 0]
  [a31 a32 a33 : 0 0 1]

        ⇓  Gauss-Jordan reduction (eliminate below and above each pivot, then normalize)

  [1 0 0 : x11 x12 x13]
  [0 1 0 : x21 x22 x23]
  [0 0 1 : x31 x32 x33]

i.e., [A : I] is reduced to [I : A⁻¹].
Example
2 −3 −2
h i
Find the inverse of the following matrix using the Gauss Jordan Method: −1 −1 −3 .
3 −2 3
Solution:
 
2 −3 −2
A = −1 −1 −3
 
3 −2 3
 
x11 x12 x13
Let A−1 = X = x21 x22 x23 
 
x31 x32 x33

∴ AX = I, i.e.,

  [ 2 −3 −2] [x11 x12 x13]   [1 0 0]
  [−1 −1 −3] [x21 x22 x23] = [0 1 0]
  [ 3 −2  3] [x31 x32 x33]   [0 0 1]
 
2 −3 −2 : 1 0 0
Augmented coefficient matrix: −1 −1 −3 : 0 1 0
 
3 −2 3 : 0 0 1
 
2 −3 −2 : 1 0 0
R2 ← R2 + (1/2) ∗ R1
∼ 0 −5/2 −4 : 1/2 1 0
 
R3 ← R3 − (3/2) ∗ R1
0 5/2 6 : −3/2 0 1
 
2 0 14/5 : 2/5 −6/5 0
R1 ← R1 − (6/5) ∗ R2
∼ 0 −5/2 −4 : 1/2 1 0
 
R3 ← R3 + R2
0 0 2 : −1 1 1
 
2 0 0 : 9/5 −13/5 −7/5
R1 ← R1 − (7/5) ∗ R3
∼ 0 −5/2 0 : −3/2 3 2 
 
R2 ← R2 + (2) ∗ R3
0 0 2 : −1 1 1
   
R1 ← R1/2 1 0 0 : 9/10 −13/10 −7/10 9/10 −13/10 −7/10
−1
R2 ← (−2/5)R2 ∼ 0 1 0 : 3/5 −6/5 −4/5  ∴ A =  3/5 −6/5 −4/5 
   
R3 ← R3/2 0 0 1 : −1/2 1/2 1/2 −1/2 1/2 1/2

Matrix Inverse Using Gauss Elimination Method

  Thus, we have:
2 −3 −2 : 1 0 0     
2 −3 −2 x11 1
Augmented coefficient matrix: −1 −1 −3 : 0 1 0
 
0 −5/2 −4 x21  = 1/2
    
3 −2 3 : 0 0 1
0 0 2 x31 −1
 
2 −3 −2 : 1 0 0
R2 ← R2 + (1/2) ∗ R1 ∼ 0 −5/2 −4 : 1/2 1 0
      
2 −3 −2 x12 0
R3 ← R3 − (3/2) ∗ R1 0 5/2 6 : −3/2 0 1 0

−5/2 −4 x22  = 1
   

  0 0 2 x32 1
2 −3 −2 : 1 0 0
∼ 0 −5/2 −4 : 1/2 1 0
 
    
2 −3 −2 x13 0
R3 ← R3 + R2 0 0 2 : −1 1 1
0 −5/2 −4 x23  = 0
    
0 0 2 x33 1

Performing Back Substitutions on each system:

   
x31 = −1/2 x11 9/10
x21 = (1/2)−(−4)(x 31 )
= (1/2)+4(−1/2)
= 3/5 V x21  =  3/5 
   
(−5/2) (−5/2)
1−(−3)(x21 )−(−2)(x31 ) 1+(3)(3/5)+(2)(−1/2) x31 −1/2
x11 = 2 = 2 = 9/10
   
x32 = 1/2 x12 −13/10
x22 = 1−(−4)(x 32 ) 1+4(1/2) V x22  =  −6/5 
   
(−5/2) = (−5/2) = −6/5
x12 = 0−(−3)(x22 )−(−2)(x32 )
= (3)(−6/5)+(2)(1/2) = −13/10 x32 1/2
2 2
   
x33 = 1/2 x31 −7/10
x23 = 0−(−4)(x 33 ) 4(1/2) V x32  =  −4/5 
   
(−5/2) = (−5/2) = −4/5
x13 = 0−(−3)(x23 )−(−2)(x33 )
= (3)(−4/5)+(2)(1/2)
= −7/10 x33 1/2
2 2
 
9/10 −13/10 −7/10
∴ A−1 = X =  3/5 −6/5 −4/5 
 
−1/2 1/2 1/2
Assignments

1. Construct a 4 × 4 non-singular square matrix of your own and compute its inverse using:
a) Gauss Jordan Method
b) Gauss Elimination Method
also verify the correctness of your result.
2. Write algorithm/pseudo-code and program to find the inverse of a given square matrix of
any order using the Gauss-Jordan Method. Verify your program with second, third, fourth,
and fifth order matrices.

Numerical Methods
Solution of System of Linear Algebraic equations
å Factorization Method (LU Decomposition)

Jayandra Raj Shrestha


June 2021

Institute of Engineering,
Tribhuvan University, Nepal
Matrix Factorization

A square matrix A can be written as the product of a lower triangular matrix L and an upper triangular matrix U, taking either L or U as unit triangular, provided all the leading principal minors of A are non-zero, i.e.,

  a11 ≠ 0,   det[a11 a12; a21 a22] ≠ 0,   ...,   det[a11 ··· a1n; ··· ; an1 ··· ann] ≠ 0

     
Do-Little's Method (L = unit lower triangular, U = upper triangular):

  [a11 a12 a13]   [ 1   0   0] [u11 u12 u13]
  [a21 a22 a23] = [l21  1   0] [ 0  u22 u23]
  [a31 a32 a33]   [l31 l32  1] [ 0   0  u33]

Crout's Method (L = lower triangular, U = unit upper triangular):

  [a11 a12 a13]   [l11  0   0 ] [1 u12 u13]
  [a21 a22 a23] = [l21 l22  0 ] [0  1  u23]
  [a31 a32 a33]   [l31 l32 l33] [0  0   1 ]
Factorization via Do-Little’s method

 
a11 a12 a13
Suppose A = a21 a22 a23 
 
a31 a32 a33
Let A = LU , where:
   
1 0 0 u11 u12 u13
L = l21 1 0and U = 0 u22 u23 
   
l31 l32 1 0 0 u33
   
u11 u12 u13 a11 a12 a13
LU = A ⇒  l21 u11 l21 u12 + u22 l21 u13 + u23  = a21 a22 a23 
   
l31 u11 l31 u12 + l32 u22 l31 u13 + l32 u23 + u33 a31 a32 a33
Equating term by term:
u11 = a11 u12 = a12 u13 = a13
l21 = a21 /u11 u22 = a22 − l21 u12 u23 = a23 − l21 u13
l31 = a31 /u11 l32 = (a32 − l31 u12 )/u22 u33 = a33 − l31 u13 − l32 u23
Solution of linear system via Factorization

Procedure:
Given: A X = B
Factorize: A = L U
AX = B ⇒ LU X = B
Let, U X = Y
LU X = B ⇒ LY = B
Obtain Y from L Y = B via forward substitution
Substitute Y in U X = Y
Obtain X from U X = Y via backward substitution
Example
Solve:
LU = A
5x1 + 4x2 + 3x3 = 28
7x1 + 4x2 − x3 = 20
8x1 − 5x2 + 4x3 = 24 ⇓
The given system expressed as AX = B:  
u11 u12 u13
      
 l21 u11 l21 u12 + u22 l21 u13 + u23

5 4 3 x1 28 
A = 7

4 −1 X = x2  B = 20
     l31 u11 l31 u12 + l32 u22 l31 u13 + l32 u23 + u33
8 −5 4 x3 24
 
5 4 3
= 7 4 −1
 
Let A = LU, where:
    8 −5 4
1 0 0 u11 u12 u13
L = l21 1 0 U = 0 u22 u23 
   
l31 l32 1 0 0 u33

   
u11 u12 u13 5 4 3
 l21 u11 l21 u12 + u22 l21 u13 + u23 =
 7 4 −1
   
l31 u11 l31 u12 + l32 u22 l31 u13 + l32 u23 + u33 8 −5 4
Equating term by term:
u11 u12 u13
= a11 = a12 = a13
=5 =4 =3
l21 u22 u23
= a21 /u11 = a22 − l21 u12 = a23 − l21 u13
= 4 − (7/5)(4) = −1 − (7/5)(3)
= 7/5 = −8/5 = −26/5
l31 l32 u33
= a31 /u11 = (a32 − l31 u12 )/u22 = a33 − l31 u13 − l32 u23
= (−5 − (8/5)(4))/(−8/5) = 4 − (8/5)(3) − (57/8)(−26/5)
= 8/5 = 57/8 = 145/4
 
1 0 0 ∴ y1 = 28
∴ L = 7/5 1 0
 
y2 = 20 − (7/5)(y1 )
8/5 57/8 1
= 20 − (7/5)(28)
 
5 4 3 = −96/5
and U = 0 −8/5 −26/5
 
y3 = 24 − (8/5)(y1 ) − (57/8)(y2 )
0 0 145/4
= 24 − (8/5)(28) − (57/8)(−96/5)
∴ AX = B ⇒ LU X = B = 116
 
y1  
Let U X = Y , where Y = y2 
  28
Substituting Y = −96/5 in U X = Y
 
y3
116
then LU X = B ⇒ LY = B
         
1 0 0 y1 28 5 4 3 x1 28
0 −8/5 −26/5 x2  = −96/5
    
i.e., 7/5 1 0 y2  = 20
    
8/5 57/8 1 y3 24 0 0 145/4 x3 116
Back Substitution: Hence, the required solution is:
116  
∴ x3 = = 3.2 2.4
145/4
X = 1.6
 
−96/5 − (−26/5)(3.2)
x2 = = 1.6 3.2
−8/5
28 − (4)(1.6) − (3)(3.2)
x1 = = 2.4
5

Do-Little Decomposition of a 4 × 4 matrix

u11 u12 u13 u14


= a11 = a12 = a13 = a14
l21 u22 u23 u24
= a21 /u11 = a22 − l21 u12 = a23 − l21 u13 = a24 − l21 u14
l31 l32 u33 u34
= a31 /u11 = (a32 − l31 u12 )/u22 = a33 − l31 u13 − l32 u23 = a34 − l31 u14 − l32 u24
l41 l42 l43 u44
= a41 /u11 = (a42 − l41 u12 )/u22 = (a43 − l41 u13 − l42 u23 )/u33 = a44 − l41 u14 − l42 u24 − l43 u34

In general:

  U(i,j) = A(i,j) − Σ_{k=1}^{i−1} L(i,k)·U(k,j)

  L(i,j) = ( A(i,j) − Σ_{k=1}^{j−1} L(i,k)·U(k,j) ) / U(j,j)
Pseudo-code

for i = 1 to n
    for j = 1 to n
        if i ≤ j
            U(i,j) = A(i,j) − Σ_{k=1}^{i−1} L(i,k)·U(k,j)
            if i = j then L(i,j) = 1 else L(i,j) = 0
        else
            L(i,j) = ( A(i,j) − Σ_{k=1}^{j−1} L(i,k)·U(k,j) ) / U(j,j)
            U(i,j) = 0
        end if
    end for
end for
Assignments

1. Construct a system of linear equations in 4 unknowns of your own and solve it using the
LU factorization method.
2. Write algorithm / pseudo-code and program to solve a system of linear equations using
the LU factorization method using Do-Little decomposition.

Numerical Methods
Solution of System of Linear Algebraic equations
å Iterative Methods:
à Jacobi’s Method
à Gauss-Seidel Method

Jayandra Raj Shrestha


June 2021

Institute of Engineering,
Tribhuvan University, Nepal
Jacobi's Method

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

⇓ Transform

x1 = (b1 − a12 x2 − a13 x3)/a11
x2 = (b2 − a21 x1 − a23 x3)/a22
x3 = (b3 − a31 x1 − a32 x2)/a33

⇓ Iteration Formula

x1^(k+1) = (b1 − a12 x2^(k) − a13 x3^(k))/a11
x2^(k+1) = (b2 − a21 x1^(k) − a23 x3^(k))/a22
x3^(k+1) = (b3 − a31 x1^(k) − a32 x2^(k))/a33

In Summary:

x_i^(k+1) = ( b_i − Σ_{j=1, j≠i}^{n} a_ij·x_j^(k) ) / a_ii,   i = 1 to n,   k = iteration number

Condition sufficient for convergence (not necessary): a strictly diagonally dominant coefficient matrix,

|a_ii| > Σ_{j≠i} |a_ij|   for each row i,

e.g. |a11| > |a12| + |a13|,  |a22| > |a21| + |a23|,  |a33| > |a31| + |a32|.
Gauss-Seidel Method

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

⇓ Transform

x1 = (b1 − a12 x2 − a13 x3)/a11
x2 = (b2 − a21 x1 − a23 x3)/a22
x3 = (b3 − a31 x1 − a32 x2)/a33

⇓ Iteration Formula (always use the latest available values)

x1^(k+1) = (b1 − a12 x2^(k) − a13 x3^(k))/a11
x2^(k+1) = (b2 − a21 x1^(k+1) − a23 x3^(k))/a22
x3^(k+1) = (b3 − a31 x1^(k+1) − a32 x2^(k+1))/a33

In Summary:

x_i^(k+1) = ( b_i − Σ_{j=1}^{i−1} a_ij·x_j^(k+1) − Σ_{j=i+1}^{n} a_ij·x_j^(k) ) / a_ii,   i = 1 to n,   k = iteration number

Condition sufficient for convergence (not necessary): a strictly diagonally dominant coefficient matrix, |a_ii| > Σ_{j≠i} |a_ij| for each row i.
Example 1: Solve the following system using Jacobi’s method.
2x1 + 2x2 − x3 + 6x4 = 15 Let the initial guess values be
2x1 + 8x2 + 3x3 − x4 = 20 x1 = x2 = x3 = x4 = 0
2x1 + x2 + 7x3 − 2x4 = 25 Calculation Table:
k x1 (k) x2 (k) x3 (k) x4 (k) max. error
9x1 − x2 + 3x3 + 2x4 = 35
0 0 0 0 0 -
1 3.8889 2.5000 3.5714 2.5000 3.8889
Re-ordering for diagonal dominance: 2 2.4206 0.5010 2.8175 0.9656 1.9990
9x1 − x2 + 3x3 + 2x4 = 35 3 2.7908 0.9590 3.0841 1.9957 1.0301
4 2.5239 0.8952 3.2073 1.7641 0.2669
2x1 + 8x2 + 3x3 − x4 = 20 5 2.5272 0.8868 3.2265 1.8949 0.1308
2x1 + x2 + 7x3 − 2x4 = 25 6 2.4908 0.8951 3.2641 1.8998 0.0376
7 2.4781 0.8907 3.2747 1.9154 0.0156
2x1 + 2x2 − x3 + 6x4 = 15 8 2.4706 0.8919 3.2834 1.9229 0.0087
9 2.4662 0.8914 3.2875 1.9264 0.0044
10 2.4640 0.8914 3.2899 1.9287 0.0024
11 2.4627 0.8914 3.2911 1.9299 0.0013
12 2.4620 0.8914 3.2919 1.9305 0.0008
13 2.4616 0.8914 3.2922 1.9309 0.0004

Iteration Formula:

x1^(k+1) = (35 + x2^(k) − 3 x3^(k) − 2 x4^(k))/9
x2^(k+1) = (20 − 2 x1^(k) − 3 x3^(k) + x4^(k))/8
x3^(k+1) = (25 − 2 x1^(k) − x2^(k) + 2 x4^(k))/7
x4^(k+1) = (15 − 2 x1^(k) − 2 x2^(k) + x3^(k))/6

∴ The required solution is (x1, x2, x3, x4) ≈ (2.462, 0.891, 3.293, 1.931)
Example 2: Solve the following system using the Gauss-Seidel method.
2x1 + 2x2 − x3 + 6x4 = 15 Let the initial guess values be x2 = x3 = x4 = 0
2x1 + 8x2 + 3x3 − x4 = 20 Calculation Table:
2x1 + x2 + 7x3 − 2x4 = 25 k x1 (k) x2 (k) x3 (k) x4 (k) max. error
0 - 0 0 0 -
9x1 − x2 + 3x3 + 2x4 = 35 1 3.8889 1.5278 2.2421 1.0681 3.8889
2 3.0739 1.0243 2.8520 1.6093 0.8150
Re-ordering for diagonal dominance: 3 2.6944 0.9581 3.1245 1.8033 0.3795
4 2.5531 0.9155 3.2264 1.8815 0.1413
9x1 − x2 + 3x3 + 2x4 = 35
5 2.4970 0.9010 3.2669 1.9118 0.0561
2x1 + 8x2 + 3x3 − x4 = 20 6 2.4752 0.8951 3.2826 1.9237 0.0218
7 2.4667 0.8928 3.2887 1.9283 0.0085
2x1 + x2 + 7x3 − 2x4 = 25 8 2.4633 0.8920 3.2911 1.9301 0.0034
2x1 + 2x2 − x3 + 6x4 = 15 9 2.4621 0.8916 3.2921 1.9308 0.0012
10 2.4615 0.8914 3.2925 1.9311 0.0006
11 2.4613 0.8914 3.2926 1.9312 0.0002
Iteration Formula:
 .
(k) (k) (k)
x1 (k+1) = 35 + x2 − 3x3 − 2x4 9
   
x1 2.461
 .
(k+1) (k) (k)
x2 (k+1) = 20 − 2x1 − 3x3 + x4 8 x  0.891
 2 
∴ The required solution is   = 
 . 
(k+1) (k+1) (k)
x3 (k+1) = 25 − 2x1 − x2 + 2x4 7

x3  3.293
JRS\IOE (k+1)

(k+1) (k+1) (k+1)
. x4 1.931 4/5
x4 = 15 − 2x1 − 2x2 + x3 6
Assignments

1. Solve the following system, with an accuracy of 3 decimal places using:


a) Jacobi’s Method
b) Gauss-Seidel Method.

w + x + 2y + z = 5
6w + x − y + 2z = 4
2w + 5x − 4y + 6z = −5
w + 4x + 3y − z = 2

2. Write pseudo-code and a program in C/C++ for the Gauss-Seidel Method.

Numerical Methods
Linear Algebra
å Power Method

Jayandra Raj Shrestha


June 2021

Institute of Engineering,
Tribhuvan University, Nepal
Power Method

AX = λX
• A convenient iterative technique to compute the numerically largest (dominant) Eigen
value and the corresponding Eigen vector of a square matrix, as required in many
engineering problems
• Much suitable for machine computations (programming)

Mechanism

Given a square matrix A


Let X (0) = initial guess to the Eigen vector corresponding to the largest Eigen value
product scaling λ(1) = numerically largest value in Z (0)
AX (0) −−−−−−→ Z (0) −−−−−−→ λ(1) X (1) X (1) = Z (0) /λ(1)

product scaling λ(2) = numerically largest value in Z (1)


AX (1) −−−−−−→ Z (1) −−−−−−→ λ(2) X (2) X (2) = Z (1) /λ(2)

product scaling λ(3) = numerically largest value in Z (2)


AX (2) −−−−−−→ Z (2) −−−−−−→ λ(3) X (3) X (3) = Z (2) /λ(3)

··· ··· ···

product scaling λ(r+1) = numerically largest value in Z (r)


AX (r) −−−−−−→ Z (r) −−−−−−→ λ(r+1) X (r+1) X (r+1) = Z (r) /λ(r+1)
Repeat until X^(r+1) ≈ X^(r); then λ^(r+1) = dominant Eigen value and X^(r+1) = the corresponding Eigen vector.
Example
     
2 3 2 5.075 0.576
A = 3 4 1 AX (3) = 5.298 = 8.817 0.601 = λ(4) X (4)
     

2 1 7 8.817 1
 
1    
(0)
4.955 0.566
Let the initial guess vector be X = 1
 
AX (4) = 5.132 = 8.753 0.586 = λ(5) X (5)
   
1
    8.753 1
7 0.7    
AX (0) =  8  = 10 0.8 = λ(1) X (1)
    4.89 0.561
AX (5) = 5.042 = 8.718 0.578 = λ(6) X (6)
   
10 1
    8.718 1
5.8 0.63    
AX (1) = 6.3 = 9.2 0.685 = λ(2) X (2)
    4.856 0.558
AX (6) = 4.995 = 8.7 0.574 = λ(7) X (7)
   
9.2 1
    8.7 1
5.315 0.594    
AX (2) =  5.63  = 8.945 0.629 = λ(3) X (3)
    4.838 0.557
AX (7) =  4.97  = 8.69 0.572 = λ(8) X (8)
   
JRS\IOE
8.945 1 3/6
8.69 1
For an accuracy of 2 decimal places:
 
∵ max X (8) − X (7) < 0.005

∴ Dominant Eigen value ≈ λ(8) = 8.69

 
0.56
and Corresponding Eigen Vector ≈ X (8) = 0.57
 
1.00

In normal form, the Eigen vector is:


   
0.56 0.4375
1
√ 0.57 = 0.4453
   
0.562 + 0.572 + 12
1 0.7812

Computing the smallest Eigen Value

∵ AX = λX  =⇒  A⁻¹X = (1/λ)X

∴ If λ is an Eigen value of A, then 1/λ is an Eigen value of A⁻¹.
Conversely, if λ is an Eigen value of A⁻¹, then 1/λ is an Eigen value of A.

∴ smallest Eigen value of A = reciprocal of largest Eigen value of A⁻¹
  [with the same Eigen vector]

Assignment

1. Find the dominant and the smallest Eigen values and corresponding Eigen vectors of the
following matrix using the power method.
 
−2 3 2
 4 −2 1 
 
−1 2 −9

2. Write a high level algorithm, detailed pseudo-code, and program code in C/C++ for
computing the dominant Eigen value and corresponding Eigen vector of a square matrix
using the power method.

6/4/2021

Interpolation
• Interpolation is the technique of estimating the value of a
function for any intermediate value of the independent
variable.
Interpolation • The process of computing or finding the value of a function
for any value of the independent variable outside the given
range is called extrapolation.
B. D. Mulmi • Here interpolation denotes the method of computing the value
of the function 𝑦 = 𝑓 (𝑥) for any given value of the
independent variable x when a set of values of y = f (x) for
September, 2020 certain values of x are known or given.


Interpolation

1. Interpolation with equally spaced intervals
   a. Newton's Forward Interpolation Formula
   b. Newton's Backward Interpolation Formula
   Central Difference Interpolation Formulas:
   c. Stirling's Formula
   d. Bessel's Formula
2. Interpolation with unequally spaced intervals
   a. Lagrange's Interpolation Formula
   b. Newton's Divided Difference Formula

Example data set and which formula applies:

x   2    4    6    8    10   12   14
y   12   13   16   19   23   56   60

y(2.5) = ?    (Forward Interpolation)
y(14.5) = ?   (Backward Interpolation)
y(0.5) = ?    (Forward Interpolation)
y(2.2) = ?    (Forward Interpolation)
y(7.9) = ?    (Central Interpolation)

Interpolation with Equally Spaced Intervals

[Figure: curve y = f(x) with equally spaced nodes x0, x1 = x0 + h, x2 = x0 + 2h, ..., xn = x0 + nh and ordinates y0, y1, ..., yn]

a. Newton's Forward Interpolation Formula

y = y0 + p·Δy0 + p(p−1)/2!·Δ²y0 + p(p−1)(p−2)/3!·Δ³y0 + ··· + p(p−1)···(p−(n−1))/n!·Δⁿy0,
where p = (x − x0)/h

b. Newton's Backward Interpolation Formula

y = yn + p·∇yn + p(p+1)/2!·∇²yn + p(p+1)(p+2)/3!·∇³yn + ··· + p(p+1)···(p+(n−1))/n!·∇ⁿyn,
where p = (x − xn)/h

Finite Differences

1. Forward Differences   2. Backward Differences   3. Central Differences   4. Divided Differences

1. Forward Difference Table:

x    y    Δ     Δ²     Δ³     Δ⁴
x0   y0   Δy0   Δ²y0   Δ³y0   Δ⁴y0
x1   y1   Δy1   Δ²y1   Δ³y1
x2   y2   Δy2   Δ²y2
x3   y3   Δy3
x4   y4

where Δy0 = y1 − y0, Δy1 = y2 − y1, ...,
Δ²y0 = Δy1 − Δy0 = y2 − 2y1 + y0, and in general Δᵏy_i = Δᵏ⁻¹y_(i+1) − Δᵏ⁻¹y_i.
6/4/2021

2. Backward Difference Table:

x    y    ∇     ∇²     ∇³     ∇⁴
x0   y0
x1   y1   ∇y1
x2   y2   ∇y2   ∇²y2
x3   y3   ∇y3   ∇²y3   ∇³y3
x4   y4   ∇y4   ∇²y4   ∇³y4   ∇⁴y4

where ∇y1 = y1 − y0, ∇²y2 = ∇y2 − ∇y1, and so on.

Example: Construct the forward and backward difference table for the following data set.

x   0    1    2    3    4
y   −5   −6   −1   16   51

Newton's Forward Interpolation Formula — Proof

y = y0 + p·Δy0 + p(p−1)/2!·Δ²y0 + p(p−1)(p−2)/3!·Δ³y0 + ··· ,   p = (x − x0)/h

Assume the interpolating polynomial in the form

y = a0 + a1(x − x0) + a2(x − x0)(x − x1) + a3(x − x0)(x − x1)(x − x2) + ···
      + an(x − x0)(x − x1)···(x − x(n−1))   ---(1)

Put x = x0 in (1):  y0 = a0
Put x = x1 in (1):  y1 = y0 + a1(x1 − x0)  ⇒  a1 = (y1 − y0)/(x1 − x0) = Δy0/h
Put x = x2 in (1):  a2 = Δ²y0/(2!·h²),  and in general  ak = Δᵏy0/(k!·hᵏ)

Substituting the a's back into (1) and writing x = x0 + ph gives the formula above.

Example: Using the interpolation formula, find the polynomial which satisfies the following data, and also evaluate y(5).

x   0    1    2    3    4
y   −5   −6   −1   16   51

Solution: Here x0 = 0 and h = 1, so p = (x − x0)/h = x.
Substituting the forward differences into Newton's forward formula and simplifying:

f(x) = x³ − 2x − 5
f(5) = 5³ − 2·5 − 5 = 110

Example: Evaluate y(0.5) and y(5) using an appropriate interpolation formula from the following data.

x   0    1    2    3    4
y   −5   −6   −1   16   51

Solution: The point x = 0.5 lies near the beginning of the table, so we use Newton's forward interpolation formula. The forward difference table is:

x   y    Δ    Δ²   Δ³   Δ⁴
0   −5   −1   6    6    0
1   −6   5    12   6
2   −1   17   18
3   16   35
4   51

Here x0 = 0, h = x1 − x0 = 1, and x = 0.5, so p = (x − x0)/h = 0.5.
Substituting in Newton's forward interpolation formula:

∴ y(0.5) = −5.875   and   y(5) = 110

Exercise: From the following data, evaluate y(0.71) using an appropriate interpolation formula.

x   0.70     0.72     0.74     0.76     0.78
y   0.8423   0.8771   0.9131   0.9505   0.9893

Interpolation with Unequally Spaced Intervals

a) Lagrange's Interpolation Formula

If y = f(x) takes the values y0, y1, y2, ..., yn corresponding to x0, x1, x2, ..., xn, then

y = [(x−x1)(x−x2)···(x−xn)] / [(x0−x1)(x0−x2)···(x0−xn)] · y0
  + [(x−x0)(x−x2)···(x−xn)] / [(x1−x0)(x1−x2)···(x1−xn)] · y1
  + ···
  + [(x−x0)(x−x1)···(x−x(n−1))] / [(xn−x0)(xn−x1)···(xn−x(n−1))] · yn

Derivation: since there are n + 1 pairs of values of x and y, f(x) can be represented by a polynomial in x of degree n. Let this polynomial be of the form

y = a0(x−x1)(x−x2)···(x−xn) + a1(x−x0)(x−x2)···(x−xn) + ···
      + an(x−x0)(x−x1)···(x−x(n−1))   ------(1)

Putting x = x0, y = y0 in (1):

y0 = a0 (x0−x1)(x0−x2)···(x0−xn)
∴ a0 = y0 / [(x0−x1)(x0−x2)···(x0−xn)]

Similarly, putting x = x1, y = y1 in (1):

∴ a1 = y1 / [(x1−x0)(x1−x2)···(x1−xn)]

Proceeding the same way, we find a2, a3, ..., an; substituting them back gives Lagrange's interpolation formula.

Example: Given the values

x      5     7     11     13     17
f(x)   150   392   1452   2366   5202

evaluate f(9) using Lagrange's interpolation formula.

Solution: Here x = 9, (x0, x1, x2, x3, x4) = (5, 7, 11, 13, 17), and (y0, y1, y2, y3, y4) = (150, 392, 1452, 2366, 5202).

Using Lagrange's interpolation formula:

f(9) = [(9−7)(9−11)(9−13)(9−17)] / [(5−7)(5−11)(5−13)(5−17)] · 150
     + [(9−5)(9−11)(9−13)(9−17)] / [(7−5)(7−11)(7−13)(7−17)] · 392
     + [(9−5)(9−7)(9−13)(9−17)] / [(11−5)(11−7)(11−13)(11−17)] · 1452
     + [(9−5)(9−7)(9−11)(9−17)] / [(13−5)(13−7)(13−11)(13−17)] · 2366
     + [(9−5)(9−7)(9−11)(9−13)] / [(17−5)(17−7)(17−11)(17−13)] · 5202

     = −50/3 + 3136/15 + 3872/3 − 2366/3 + 578/5

∴ f(9) = 810

b. Newton's Divided Difference Interpolation Formula

y = y0 + (x−x0)[x0,x1] + (x−x0)(x−x1)[x0,x1,x2] + (x−x0)(x−x1)(x−x2)[x0,x1,x2,x3] + ···

Divided Difference Table

x    y     1st D.D.   2nd D.D.      3rd D.D.         4th D.D.
x0   y0
           [x0,x1]
x1   y1               [x0,x1,x2]
           [x1,x2]                  [x0,x1,x2,x3]
x2   y2               [x1,x2,x3]                     [x0,x1,x2,x3,x4]
           [x2,x3]                  [x1,x2,x3,x4]
x3   y3               [x2,x3,x4]
           [x3,x4]
x4   y4

where

[x0,x1] = (y1 − y0)/(x1 − x0),   [x1,x2] = (y2 − y1)/(x2 − x1)
[x0,x1,x2] = ([x1,x2] − [x0,x1])/(x2 − x0)
[x1,x2,x3] = ([x2,x3] − [x1,x2])/(x3 − x1)
[x0,x1,x2,x3,x4] = ([x1,x2,x3,x4] − [x0,x1,x2,x3])/(x4 − x0)

Example: Given the values

x      5     7     11     13     17
f(x)   150   392   1452   2366   5202

evaluate f(9) using Newton's Divided Difference interpolation formula.

Solution: Construct the divided difference table (do yourself). Here x = 9, and Newton's divided difference formula

y = y0 + (x−x0)[x0,x1] + (x−x0)(x−x1)[x0,x1,x2] + (x−x0)(x−x1)(x−x2)[x0,x1,x2,x3] + ···

gives f(9) = 810.

Exercise: Find the polynomial f(x) by using the interpolation formula which satisfies the same data, and hence find f(9).
[Answer: f(x) = x³ + x², so f(9) = 810.]

Interpolation with Equally Spaced Intervals: Central Difference Formulas

Central Difference Table: for a point x0 in the middle of the table, the entries used are Δy−1, Δy0, Δ²y−1, Δ³y−2, Δ³y−1, Δ⁴y−2, ... (the differences lying on or nearest to the horizontal line through x0).

c. Stirling's Formula

y = y0 + p·(Δy−1 + Δy0)/2 + (p²/2!)·Δ²y−1 + [p(p² − 1²)/3!]·(Δ³y−2 + Δ³y−1)/2
      + [p²(p² − 1²)/4!]·Δ⁴y−2 + ···

d. Bessel's Formula

y = y0 + p·Δy0 + [p(p−1)/2!]·(Δ²y−1 + Δ²y0)/2 + [(p − 1/2)·p(p−1)/3!]·Δ³y−1
      + [(p+1)p(p−1)(p−2)/4!]·(Δ⁴y−2 + Δ⁴y−1)/2 + ···

Formula Selection: with p = (x − x0)/h,

use Stirling's Formula when −1/4 ≤ p ≤ 1/4, and Bessel's Formula when 1/4 < p ≤ 3/4.

Example: From the following table evaluate y(0.635) and y(0.65) using a central difference interpolation formula.

x      0.60     0.62     0.64     0.66     0.68
f(x)   0.4952   0.5046   0.5133   0.5214   0.5287

Solution: Construct the central difference table (do yourself). Taking x0 = 0.64 and x = 0.635:

p = (x − x0)/h = −0.25

Since p lies between −1/4 and 1/4, Stirling's formula is appropriate. Substituting in Stirling's formula:

y(0.635) = 0.5112

Exercise
# Apply Stirling formula to evaluate log 337.50,
from the following data.
𝑥 310 320 330 340 350 360
log(𝑥) 2.4914 2.5052 2.5185 2.5315 2.5441 2.5563

5/12/2021

Least Squares Method


• The curve of best fit is that for which e’s are as
small as possible.
Curve Fitting: Least Squares • Principle: The sum of the squares of the errors is
a minimum.
Method • Let the set of data points be (xi,yi), i = 1,2,3, … …,
n and let the curve given y = f(x) be fitted to this
data. At x = xi, the experimental (or observed)
September, 2020 value of the ordinate is yi and the corresponding
value on the fitting curve is f(xi). If ei is the error
of approximation at x = xi, then we have:
ei = yi – f(xi) ---- (1)
1 2

If we write:

S = [y1 − f(x1)]² + [y2 − f(x2)]² + [y3 − f(x3)]² + ··· + [yn − f(xn)]²
  = e1² + e2² + e3² + ··· + en²   ----(2)

Curve fitting cases:
1. Linear Curve Fitting
2. Non-Linear Curve Fitting
   a) Exponential Function
   b) Power Function
   c) Polynomial of the nth degree

1. Linear Curve Fitting (Fitting a Straight Line)

Let y = f(x) = a + bx ----(1) be the straight line to be fitted to the given data (xi, yi), i = 1, 2, 3, ..., n. Then, corresponding to equation (2), we have

S = [y1 − (a + b·x1)]² + [y2 − (a + b·x2)]² + [y3 − (a + b·x3)]² + ··· + [yn − (a + b·xn)]²   ----(3)

Differentiate (3) w.r.t. a; for S to be minimum, ∂S/∂a = 0:

−2[y1 − (a + b·x1)] − 2[y2 − (a + b·x2)] − ··· − 2[yn − (a + b·xn)] = 0
⇒ Σ yi = n·a + b·Σ xi   ----(4)

Again, differentiate (3) w.r.t. b; for S to be minimum, ∂S/∂b = 0:

⇒ Σ xi·yi = a·Σ xi + b·Σ xi²   ----(5)

Equations (4) and (5) are known as the Normal Equations. Since xi and yi are known quantities, on solving (4) and (5) we can determine the constants a and b.

Example: Fit a curve of the form y = a + bx to the following data set.

x   2     3     4     5     6
y   3.2   4.4   6.5   8.62  10.8

Solution: Here n = 5 and the given curve is y = a + bx ---(1). The normal equations of (1) are

Σ y = n·a + b·Σ x
Σ xy = a·Σ x + b·Σ x²

The values of Σx, Σx², Σxy, Σy are calculated in the following table. (Note: the worked numbers below correspond to the data set x = 1, 2, 3, 4, 5 and y = 3, 4, 5, 6, 8, not to the table in the problem statement above.)

x     y     xy    x²
1     3     3     1
2     4     8     4
3     5     15    9
4     6     24    16
5     8     40    25
Σ15   Σ26   Σ90   Σ55

Substituting these values in the normal equations and solving, we get

a = 1.6,  b = 1.2

Hence, the required linear best-fit relation is y = 1.6 + 1.2x.

2. Non-Linear Curve Fitting

a) Exponential Function

Let the curve be y = a·e^(bx)   ---(1)

Taking log (base e) on both sides:
log y = log a + bx·log e
Y = A + bx   ---(2)   [∵ log e = 1]
where Y = log y and A = log a.

Normal Equations of (2):
Σ Y = n·A + b·Σ x
Σ xY = A·Σ x + b·Σ x²

Example: Determine the constants a and b by the method of least squares such that y = a·e^(bx) fits the following data.

x   2       4        6        8        10
y   4.077   11.084   30.128   81.897   222.62

Solution: Here n = 5. Taking log on both sides of y = a·e^(bx) gives Y = A + bx with Y = log y and A = log a. Solve the normal equations

Σ Y = n·A + b·Σ x
Σ xY = A·Σ x + b·Σ x²

for A and b, then recover a from A = log a.

b) Power Function

Let the curve be y = a·x^b.
Taking log on both sides: log y = log a + b·log x
Y = A + bX   ---(2)   where Y = log y, A = log a, X = log x.

Normal Equations of (2):
Σ Y = n·A + b·Σ X
Σ XY = A·Σ X + b·Σ X²

Example: Using the least squares method, fit a curve of the form y = a·x^b to the following data set.

x   1      2      3      4      5
y   0.50   2.00   4.50   8.00   12.50

Non – Linear Curve Fitting Example:


c) Polynomial of the nth degree
# Fit a second order polynomial to the data in
Let the second order polynomial be the following table.
y = a + bx +cx2 - (1) x 1 2 3 4
y 6 11 18 27
The Normal Equations are
𝑦 = 𝑎∗𝑛+𝑏 𝑥+𝑐 𝑥
#For the following set of data, fit a parabolic
𝑥𝑦 = 𝑎 𝑥+𝑏 𝑥 +𝑐 𝑥 curve using Least Squares Method and find y(2)
x 0.5 1 1.5 4.5 6.5 7.5
𝑥 𝑦=𝑎 𝑥 +𝑏 𝑥 +𝑐 𝑥 y 2.5 2.7 3.5 6.5 8.4 9.5

17 18

Example
For the following set of data, fit a parabolic curve using Least Squares Method and find y(2)
a) y=a+bx2 ------(1)
Solution: y=a+bX ---- (2) c) y=alogex +b (1)
Given data (n) = 6
Parabolic curve : y = a+bx+cx^2 -(1)
Normal equation of eqn (1)
Where, y=aX+b
𝑦 =𝑎∗𝑛+𝑏 𝑥+𝑐 𝑥
X = x2 X = logex
𝑥𝑦 = 𝑎 ∗ 𝑥+𝑏 𝑥 +𝑐 𝑥

𝑥 𝑦=𝑎 𝑥 +𝑏 𝑥 +𝑐 𝑥
b)y=a+b*1/x ------(1) d) y=ax+b+c/x (1)
xy =ax+b xy =ax2+bx+c
a = , b= ,c = Y = ax+b Y= ax2+bx+c
y(2) =
Where,
Where,
Y = xy
Y = xy
19 20
Cubic Spline Interpolation

B. D. Mulmi

Institute of Engineering

Feb, 2016
Linear Interpolation
Polynomial Interpolation
Cubic Spline Interpolation
Polynomial Interpolation is not always bad!
Given n+1 data points (xi , yi ) for i = 0 to n, develop n cubic
equations between (xi , yi ) and (xi+1 , yi+1 ) for i = 0 to n − 1, such
that the 1st and 2nd derivatives at common points are continuous.


 f0,1 (x) for x0 ≤ x ≤ x1
f1,2 (x) for x1 ≤ x ≤ x2

f (x) =

 ... ... ...
fn−1,n (x) for xn−1 ≤ x ≤ xn

Given by:

(x − xi+1 )3
 
Mi
fi,i+1 (x) = hi (x − xi+1 ) −
6 hi
(x − xi )3
 
Mi+1
− hi (x − xi ) −
6 hi
yi+1 (x − xi ) − yi (x − xi+1 )
+
hi

Where Mi = y''(xi), Mi+1 = y''(xi+1), and hi = xi+1 − xi.
To find the second derivatives (for i = 1 to n − 1):
Mi−1 (xi − xi−1 ) + 2Mi (xi+1 − xi−1 ) + Mi+1 (xi+1 − xi )
 
yi+1 − yi yi − yi−1
=6 −
xi+1 − xi xi − xi−1
or,
 
∆yi ∆yi−1
Mi−1 (hi−1 ) + 2Mi (hi−1 + hi ) + Mi+1 (hi ) = 6 −
hi hi−1

If n = 4:
 
∆y1 ∆y0
For i = 1 : M0 (h0 ) + 2M1 (h0 + h1 ) + M2 (h1 ) = 6 −
h1 h0
 
∆y2 ∆y1
For i = 2 : M1 (h1 ) + 2M2 (h1 + h2 ) + M3 (h2 ) = 6 −
h2 h1
 
∆y3 ∆y2
For i = 3 : M2 (h2 ) + 2M3 (h2 + h3 ) + M4 (h3 ) = 6 −
h3 h2
 
  M0
h0 2(h0 + h1 ) h1 0 0 
 M1 

 0 h1 2(h1 + h2 ) h2 0  M2 
 
0 0 h2 2(h2 + h3 ) h3  M3 
M4
   
∆y1 /h1 − ∆y0 /h0 [x1 , x2 ] − [x0 , x1 ]
= 6  ∆y2 /h2 − ∆y1 /h1  = 6  [x2 , x3 ] − [x1 , x2 ] 
∆y3 /h3 − ∆y2 /h2 [x3 , x4 ] − [x2 , x3 ]
If x is equally spaced:
6 2
Mi−1 + 4Mi + Mi+1 = ∆ yi−1
h2
If n = 4,
6 2
M0 + 4M1 + M2 = ∆ y0
h2
6
M1 + 4M2 + M3 = 2 ∆2 y1
h
6
M2 + 4M3 + M4 = 2 ∆2 y2
h
or,  
  M0  2 
1 4 1 0 0  M1  ∆ y0

 0 1 4 1 0 
 6  ∆ 2 y1 
 M2 =
 h2
0 0 1 4 1  M3  ∆ 2 y2
M4
In Natural Cubic Spline, M0 = Mn = 0
    2 
4 1 0 M1 ∆ y0
6
 1 4 1   M2  = 2  ∆ 2 y 1 
h
0 1 4 M3 ∆ 2 y2
Example

Using Cubic Spline Interpolation Technique, compute y(3), y(5),


and y(7) from the following data:

x 2 4 6 8 10
y 7 6 9 11 8

Solution:
Here, n = 4
and x is in equally-spaced interval i.e., h = 2
Let, M0 , M1 , ..., Mn be the 2nd derivatives at x = x0 , x1 , ..., xn
Thus, we have,
 
  M0  2 
1 4 1 0 0  M 1  ∆ y0
 0 1 4 1 0   M2  = 6  ∆ 2 y 1 
 
  h2
0 0 1 4 1  M3  ∆ 2 y2
M4
In Natural Cubic Spline, M0 = Mn = 0
So, the system of equations reduces to:
    2 
4 1 0 M1 ∆ y0
 1 4 1   M2  = 6  ∆ 2 y 1 
h2
0 1 4 M3 ∆ 2 y2
or,

4 1 0

M1
 
4
 x y ∆y ∆2 y
6
 1 4 1   M2  =  −1  2 7
4 -1
0 1 4 M3 −5
4 6 4
or, 3
     6 9 -1
4 1 0 M1 6.0
 1 4 1   M2  =  −1.5  2
8 11 -5
0 1 4 M3 −7.5
-3
10 8
On solving, we get,
M1 = 1.5804 M2 = −0.3214 M3 = −1.7946
Thus We now have,
i 0 1 2 3 4
x 2 4 6 8 10
y 7 6 9 11 8
M 0 1.5804 -0.3214 -1.7946 0
To compute y(5), i.e., to compute y at x=5:
Since x = 5 lies between x1 and x2 , we compute y(5) using:

(x − x2 )3
 
M1
f1,2 (x) = h(x − x2 ) −
6 h
(x − x1 )3
 
M2 y2 (x − x1 ) − y1 (x − x2 )
− h(x − x1 ) − +
6 h h

∴ f(5) = (1.5804/6)·[2(5 − 6) − (5 − 6)³/2]
       − (−0.3214/6)·[2(5 − 4) − (5 − 4)³/2]
       + [9(5 − 4) − 6(5 − 6)]/2
       = −0.3951 + 0.0804 + 7.5
       = 7.1853
6/14/2021

Numerical Integration

October, 2020

[Figure: area under the curve y = f(x) from a = x0 to b = xn = x0 + nh, with ordinates y0, y1, y2, y3, ..., yn at equal spacing h]

Numerical Integration Newton – Cote Quadrature Formula


1. Trapezoidal rule. Given a set of data points 𝑥𝑖, 𝑦𝑖 , 𝑖 =
2. Simpson’s 1/3 rule. 0, 1, 2, … , 𝑛 of a function 𝑦 = 𝑓 (𝑥), where
𝑓 (𝑥) is not explicitly known.
3. Simpson’s 3/8 rule.
Let 𝐼 = ∫ 𝑦𝑑𝑥 − (1)
4. Romberg's Method.
where 𝑦 = 𝑓(𝑥) takes the values
5. Gaussian Quadrature Formula. 𝑦0, 𝑦1, 𝑦2, … , 𝑦𝑛 for 𝑥0, 𝑥1, 𝑥2, … 𝑥𝑛. Let us divide
the interval (𝑎, 𝑏) into 𝑛 equal parts of width ℎ
so that 𝑎 = 𝑥0, 𝑥1 = 𝑥0 + ℎ,
𝑥2 = 𝑥0 + 2ℎ, … , 𝑥𝑛 = 𝑥0 + 𝑛ℎ = 𝑏
3 4

Newton – Cote Quadrature Formula Newton – Cote Quadrature Formula


Clearly, Since 𝑥 = 𝑥 + 𝑝ℎ, 𝑑𝑥 = ℎ𝑑𝑝 and the above integration
𝑥0 = 𝑎 and 𝑥𝑛 = 𝑥0 + 𝑛ℎ = 𝑏, the integral becomes becomes
𝑝(𝑝 − 1) 𝑝 𝑝 − 1 (𝑝 − 2)
𝐼=ℎ 𝑦 + 𝑝∆𝑦 + ∆ 𝑦 + ∆ 𝑦 + ⋯ 𝑑𝑝
𝐼= 𝑦𝑑𝑥 2! 3!
𝑛 𝑛 𝑛 1
= ℎ 𝑛𝑦 + ∆𝑦 + − ∆ 𝑦 ∗
Approximating 𝑦 by Newton’s forward interpolation formula 2 3 2 2!
𝑛 1
𝑝(𝑝 − 1) 𝑝 𝑝 − 1 (𝑝 − 2) + −𝑛 +𝑛 ∆ 𝑦 ∗ +⋯ 2
𝐼= 𝑦 + 𝑝∆𝑦 + ∆ 𝑦 + ∆ 𝑦 + ⋯ 𝑑𝑥 4 3!
2! 3!
The equation no (2) is a general equation for numerical
integration, also know as Newton – Cote quadrature
formula.

5 6
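Formula (2) can be sanity-checked by integrating each forward-difference term exactly with rational arithmetic and converting the differences back to ordinates (a sketch; the helper names `poly_mul`, `integrate_0_n`, `nc_weights` are our own). For n = 1, 2, 3 this reproduces the trapezoidal, Simpson's 1/3 and Simpson's 3/8 weights derived in the following sections:

```python
from fractions import Fraction as Fr
from math import comb, factorial

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    r = [Fr(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def integrate_0_n(p, n):
    """Exact integral of polynomial p over [0, n]."""
    return sum(c * Fr(n) ** (k + 1) / (k + 1) for k, c in enumerate(p))

def nc_weights(n):
    """Ordinate weights w_j with I ≈ h * Σ w_j y_j, from eqn (2):
    integrate p(p-1)...(p-k+1)/k! over [0, n] for k = 0..n, then expand
    each forward difference as Δ^k y0 = Σ_j (-1)^(k-j) C(k, j) y_j."""
    w = [Fr(0)] * (n + 1)
    poly = [Fr(1)]                               # running product p(p-1)...(p-k+1)
    for k in range(n + 1):
        ck = integrate_0_n(poly, n) / factorial(k)
        for j in range(k + 1):
            w[j] += ck * (-1) ** (k - j) * comb(k, j)
        poly = poly_mul(poly, [Fr(-k), Fr(1)])   # append the next factor (p - k)
    return w

print([str(w) for w in nc_weights(1)])   # ['1/2', '1/2']              -> trapezoidal
print([str(w) for w in nc_weights(2)])   # ['1/3', '4/3', '1/3']       -> Simpson's 1/3
print([str(w) for w in nc_weights(3)])   # ['3/8', '9/8', '9/8', '3/8'] -> Simpson's 3/8
```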

TRAPEZOIDAL RULE

Putting n = 1 in eqn (2) and taking the curve through (x0, y0) and (x1, y1) as a straight line, i.e. differences higher than the first become zero, we get

    I1 = ∫[x0, x1] y dx
       = h (y0 + ½Δy0)
       = h/2 (2y0 + y1 − y0)
       = h/2 (y0 + y1)
       = area of the trapezium

[Figure: one trapezium under y = f(x) between x0 and x1 with ordinates y0 and y1]

Again, for the next interval [x1, x2],

    I2 = ∫[x1, x2] y dx = h/2 (y1 + y2)

Similarly, for [x2, x3],

    I3 = h/2 (y2 + y3)

and for the last interval [x(n−1), xn],

    In = h/2 (y(n−1) + yn)

Combining all these terms, I = I1 + I2 + I3 + ⋯ + In:

    ∫[x0, xn] y dx = ∫[x0, x1] y dx + ∫[x1, x2] y dx + ⋯ + ∫[x(n−1), xn] y dx
                   = h/2 (y0 + y1) + h/2 (y1 + y2) + ⋯ + h/2 (y(n−1) + yn)
                   = h/2 [ (y0 + yn) + 2(y1 + y2 + y3 + ⋯ + y(n−1)) ]

This is called the composite Trapezoidal rule.
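The composite rule translates directly into code. A minimal Python sketch (the function name `trapezoidal` is our own), checked against the worked example ∫[0, 6] dx/(1 + x²) with n = 6:

```python
def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n equal sub-intervals of width h."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))   # y1 + ... + y(n-1)
    return h / 2 * (f(a) + f(b) + 2 * interior)

print(round(trapezoidal(lambda x: 1 / (1 + x * x), 0.0, 6.0, 6), 4))   # 1.4108
```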

SIMPSON'S 1/3 RULE

Putting n = 2 in eqn (2) and taking the curve through (x0, y0), (x1, y1) and (x2, y2) as a parabola, i.e. differences higher than the second become zero, we get

    I1 = ∫[x0, x2] y dx
       = 2h [ y0 + Δy0 + (1/6)Δ²y0 ]
       = 2h [ y0 + (y1 − y0) + (1/6)(y2 − 2y1 + y0) ]
       = h/3 (y0 + 4y1 + y2)

Again, for the next interval [x2, x4],

    I2 = ∫[x2, x4] y dx = h/3 (y2 + 4y3 + y4)

and similarly for the last interval [x(n−2), xn],

    I(n/2) = h/3 (y(n−2) + 4y(n−1) + yn)

Combining all these terms, I = I1 + I2 + ⋯ + I(n/2) (n even):

    ∫[x0, xn] y dx = h/3 [ (y0 + 4y1 + y2) + (y2 + 4y3 + y4) + ⋯ + (y(n−2) + 4y(n−1) + yn) ]
                   = h/3 [ (y0 + yn) + 4(y1 + y3 + ⋯ + y(n−1)) + 2(y2 + y4 + ⋯ + y(n−2)) ]
                   = h/3 [ (y0 + yn) + 4 × (sum of odd ordinates) + 2 × (sum of even ordinates) ]

This is called the composite Simpson's 1/3 rule.

SIMPSON'S 3/8 RULE

Putting n = 3 in eqn (2) and taking the curve through (xi, yi), i = 0, 1, 2, 3 as a polynomial of third order, i.e. differences higher than the third become zero, we get [do yourself]

    ∫[x0, xn] y dx = 3h/8 [ (y0 + yn) + 3(y1 + y2 + y4 + y5 + ⋯ + y(n−1)) + 2(y3 + y6 + ⋯ + y(n−3)) ]
                   = 3h/8 [ first + last + 3 × (ordinates whose index is not a multiple of 3)
                            + 2 × (ordinates whose index is a multiple of 3) ]

This is called Simpson's 3/8 rule (n must be a multiple of 3).
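Both composite Simpson rules can be sketched the same way (function names are our own); the checks use the worked example ∫[0, 6] dx/(1 + x²) with n = 6:

```python
def simpson_13(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    odd = sum(f(a + i * h) for i in range(1, n, 2))    # y1 + y3 + ...
    even = sum(f(a + i * h) for i in range(2, n, 2))   # y2 + y4 + ...
    return h / 3 * (f(a) + f(b) + 4 * odd + 2 * even)

def simpson_38(f, a, b, n):
    """Composite Simpson's 3/8 rule; n must be a multiple of 3."""
    if n % 3:
        raise ValueError("n must be a multiple of 3")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (2 if i % 3 == 0 else 3) * f(a + i * h)   # 2 at multiples of 3, else 3
    return 3 * h / 8 * s

f = lambda x: 1 / (1 + x * x)
print(round(simpson_13(f, 0.0, 6.0, 6), 4))   # 1.3662
print(round(simpson_38(f, 0.0, 6.0, 6), 4))   # 1.3571
```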
Example

# Evaluate
    I = ∫[0, 6] dx / (1 + x²)
by using i) the Trapezoidal rule, ii) Simpson's 1/3 rule, and iii) Simpson's 3/8 rule. [Take n = 6]

Solution:
Given a = 0 = x0, b = 6 = xn and n = 6, so
    h = (b − a)/n = 1

The values of y = 1/(1 + x²) are given below:

    x    0     1      2      3      4        5        6
    y    1     0.5    0.2    0.1    0.0588   0.0385   0.027
         y0    y1     y2     y3     y4       y5       y6

i) Trapezoidal rule:
    I = h/2 [ (y0 + y6) + 2(y1 + y2 + y3 + y4 + y5) ] = 1.4108
ii) Simpson's one-third rule: I = 1.3662
iii) Simpson's three-eighth rule: I = 1.3571

Exercise

a) Evaluate the integral
    I = ∫ sin x (1 + e^x) dx
using Simpson's 1/3 rule. [Take n = 6]

b) Evaluate the integral
    I = ∫[0, π] sin x dx = 2
using Simpson's 3/8 rule. [Take n = 6]

Exercise:

a) Derive the Newton–Cotes quadrature formula for integration and use it to obtain Simpson's 3/8 rule.
b) Evaluate
    I = ∫ e^x dx
by using i) the Trapezoidal rule, ii) Simpson's 1/3 rule, and iii) Simpson's 3/8 rule. [Take n = 6]
c) Evaluate the integral
    ∫ sin x dx
using Simpson's 1/3 rule. [Take n = 6]

Exercise

# Evaluate the integral
    I = ∫[0, 1] dx / (1 + x²)
using Simpson's one-third rule, taking h = 0.1. Hence find an approximate value of π correct to 3 decimal places.

Romberg's Method

As a test integral, note that
    I = ∫[0, 1] dx / (1 + x²) = [tan⁻¹ x] from 0 to 1 = tan⁻¹(1) − tan⁻¹(0) = π/4 = 0.7853…
so that π ≈ 4 × 0.7853 = 3.1412.

In Romberg's method, trapezoidal estimates computed with step sizes h, h/2, h/4 are arranged in a triangular array and combined column by column:

    1   I(h)
                    I(h, h/2)
    2   I(h/2)                    I(h, h/2, h/4)
                    I(h/2, h/4)
    3   I(h/4)

where each new entry is obtained from its two neighbours as
    I = (1/3)(4·I₂ − I₁)

Romberg's Method

Romberg's formula:
    I(h, h/2)      = (1/3)[ 4·I(h/2) − I(h) ]
    I(h/2, h/4)    = (1/3)[ 4·I(h/4) − I(h/2) ]
    I(h, h/2, h/4) = (1/3)[ 4·I(h/2, h/4) − I(h, h/2) ]
that is, in general,
    I = (1/3)(4·I₂ − I₁)

Romberg's Method: Example

Use Romberg's method to compute
    I = ∫[0, 1] dx / (1 + x²)
correct to 3 decimal places.

Solution:
We take h = 0.5, 0.25 and 0.125 successively and evaluate the integral using the Trapezoidal rule.

i) When h = 0.5, the values of y = 1/(1 + x²) are

    x    0     0.5    1
    y    1     0.8    0.5
         y0    y1     y2

By the Trapezoidal rule,
    I(h) = h/2 [ (y0 + y2) + 2y1 ]
    I(0.5) = 0.7750

ii) When h = 0.25, the values of y = 1/(1 + x²) are [do yourself]:
    I(h/2) = 0.7828

iii) When h = 0.125, the values of y = 1/(1 + x²) are [do yourself]:
    I(h/4) = 0.7848

Now, by Romberg's formula,
    I(h, h/2)      = (1/3)[ 4·I(h/2) − I(h) ]        = 0.7854
    I(h/2, h/4)    = (1/3)[ 4·I(h/4) − I(h/2) ]      = 0.7855
    I(h, h/2, h/4) = (1/3)[ 4·I(h/2, h/4) − I(h, h/2) ] = 0.7853

The table of these values is

    0.7750
              0.7854
    0.7828              0.7853
              0.7855
    0.7848

Hence, the value of the integral ∫[0, 1] dx/(1 + x²) ≈ 0.7853.
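The same scheme in code (a sketch; `romberg` and `trapezoidal` are our own names, and it follows the slides' simplified rule of combining every column by (4·I₂ − I₁)/3). Starting from h = 0.5, 0.25, 0.125 as in the example; the slides' 0.7853 differs in the last digit only because intermediate values were rounded to 4 decimals there:

```python
def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n equal sub-intervals."""
    h = (b - a) / n
    return h / 2 * (f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n)))

def romberg(f, a, b, ns=(2, 4, 8)):
    """Romberg scheme as in the slides: trapezoidal estimates for the given
    sub-interval counts (here h = 0.5, 0.25, 0.125 on [0, 1]), then each new
    column combines neighbours by (4 * I2 - I1) / 3."""
    col = [trapezoidal(f, a, b, n) for n in ns]
    while len(col) > 1:
        col = [(4 * col[i + 1] - col[i]) / 3 for i in range(len(col) - 1)]
    return col[0]

I = romberg(lambda x: 1 / (1 + x * x), 0.0, 1.0)
print(round(I, 4))        # 0.7854  (exact: pi/4 = 0.7854...)
print(round(4 * I, 4))    # 3.1416
```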

Exercise:

a) Evaluate the integral
    I = ∫ x / sin x dx
using Romberg's method, correct to 3 decimal places.

Gaussian Integration

Gauss integration assumes an approximation of the form
    I = ∫[−1, 1] f(x) dx ≈ Ig = Σ[i = 1..n] wi f(xi)
i.e., a weighted sum of function values at fixed nodes. The nodes xi and weights wi for the 2-point and 3-point rules are:

    n    i    wi     xi
    2    1    1      −1/√3
         2    1      +1/√3
    3    1    5/9    −√(3/5)
         2    8/9    0
         3    5/9    +√(3/5)
Gauss–Legendre Formula (n = 2)

The 2-point formula gives the exact value of the integral of f(x) over (−1, 1) for any polynomial f of degree up to 3.

Example
# Compute ∫[−1, 1] e^x dx using the 2-point Gauss–Legendre formula. (Exact value: I = e − e⁻¹ = 2.3504)

Solution:
f(x) = e^x. For the Gauss formula with n = 2 we have
    w1 = w2 = 1,  x1 = −1/√3,  x2 = 1/√3
    Ig = ∫[−1, 1] f(x) dx ≈ Σ wi f(xi)
       = w1 f(x1) + w2 f(x2)
       = 1·exp(−1/√3) + 1·exp(1/√3)
       = 2.3427
Hence, the computed value of ∫[−1, 1] e^x dx is 2.3427.

# Compute ∫[−1, 1] e^x dx using the Gaussian 3-point formula. (I = 2.35034)

Example
# Compute ∫[−1, 1] (3x + 8) dx using the Gaussian 2-point formula.

Solution:
f(x) = 3x + 8. For the Gauss 2-point formula,
    W1 = W2 = 1,  x1 = −1/√3,  x2 = 1/√3,  n = 2
    ∴ Ig = ∫[−1, 1] f(x) dx = Σ Wi f(xi)
         = w1 f(x1) + w2 f(x2)
         = (3·(−1/√3) + 8) + (3·(1/√3) + 8)
         = 6.2679 + 9.7321
         = 16.00

Calculated value: 16.0000
True value: 16.0000
Absolute error = |true value − calculated value| = 0.0000
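Both rules are just table lookups plus a weighted sum. A Python sketch (the names `GL` and `gauss` are our own) reproducing the three results above:

```python
from math import sqrt, exp

# Gauss-Legendre nodes and weights from the table (2-point and 3-point rules)
GL = {
    2: ([-1 / sqrt(3), 1 / sqrt(3)], [1.0, 1.0]),
    3: ([-sqrt(3 / 5), 0.0, sqrt(3 / 5)], [5 / 9, 8 / 9, 5 / 9]),
}

def gauss(f, n):
    """Approximate the integral of f over [-1, 1] with the n-point rule."""
    nodes, weights = GL[n]
    return sum(w * f(x) for x, w in zip(nodes, weights))

print(round(gauss(exp, 2), 4))                      # 2.3427
print(round(gauss(exp, 3), 5))                      # 2.35034
print(round(gauss(lambda x: 3 * x + 8, 2), 4))      # 16.0 (exact: degree 1 polynomial)
```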
Gaussian Integration: Changing the Limits of Integration

To apply the Gauss–Legendre formula on a general interval [a, b], substitute

    x = (b − a)/2 · z + (a + b)/2,  so that  dx = (b − a)/2 · dz

Then, with c = (b − a)/2, the integral becomes

    ∫[a, b] f(x) dx = c ∫[−1, 1] g(z) dz ≈ c Σ wi g(zi)

where g(z) = f(x(z)).

Example
# Compute the integral
    ∫[−2, 2] e^(x/2) dx
using the Gaussian 2-point formula.

Solution:
We first change the limits of integration from (−2, 2) to (−1, 1) by the transformation above.

Given a = −2, b = 2 and f(x) = e^(x/2), we have
    x = (b − a)/2 · z + (a + b)/2 = 2z + 0
    dx = 2 dz
    c = (b − a)/2 = 2
    g(z) = f(x(z)) = e^(2z/2) = e^z

For the Gaussian 2-point formula, n = 2, w1 = w2 = 1, z1 = −1/√3, z2 = 1/√3. Then the integral becomes

    Ig = c Σ wi g(zi)
       = c [ w1 g(z1) + w2 g(z2) ]
       = 2 [ 1·e^(−1/√3) + 1·e^(1/√3) ]
       = 4.6854
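The change of limits can be folded into a small helper (a sketch; `gauss2_ab` is our own name, and we read the example's integrand as e^(x/2), which reproduces 4.6854):

```python
from math import sqrt, exp

def gauss2_ab(f, a, b):
    """2-point Gauss-Legendre on [a, b] via x = (b-a)/2 * z + (a+b)/2."""
    c = (b - a) / 2          # scale factor, also the Jacobian dx/dz
    mid = (a + b) / 2
    z = 1 / sqrt(3)
    return c * (f(c * (-z) + mid) + f(c * z + mid))   # weights are both 1

# Worked example: ∫[-2, 2] e^(x/2) dx  (exact value 2(e - 1/e) ≈ 4.7008)
print(round(gauss2_ab(lambda x: exp(x / 2), -2.0, 2.0), 4))   # 4.6854
```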

Exercise

# Use the Gauss–Legendre 2-point formula to evaluate
    ∫ (x² + 1) dx

# Evaluate the following integral by using the Gaussian 3-point formula:
    ∫ (cos x + sin x) / (1 + x²) dx
6/4/2021

Numerical Differentiation

B. D. Mulmi
September, 2020

• Numerical differentiation deals with the following problem: given the function y = f(x), find one of its derivatives at the point x = x0.
• Here, the term "given" implies that we either have an algorithm for computing the function, or possess a set of discrete data points (xi, yi), i = 1, 2, …, n.
• In other words, we have a finite number of (x, y) data points or pairs from which we can compute the derivative.
• Numerical differentiation is thus a method to compute the derivatives of a function at some value of the independent variable x when the function f(x) is not explicitly known, but is known only for a set of arguments.

Two approaches are covered:
• Derivatives based on Newton's Forward Interpolation Formula
• Derivatives based on Newton's Backward Interpolation Formula
Derivatives Based on Newton's Forward Interpolation Formula

Suppose the function y = f(x) is known at (n + 1) equispaced points x0, x1, …, xn, where its values are y0, y1, …, yn respectively, i.e., yi = f(xi), i = 0, 1, …, n. Let xi = x0 + ih and p = (x − x0)/h, where h is the spacing.

Newton's forward interpolation formula is

    y = y0 + pΔy0 + p(p−1)/2! Δ²y0 + p(p−1)(p−2)/3! Δ³y0 + ⋯
        + p(p−1)(p−2)⋯(p−n+1)/n! Δⁿy0        — (1)

Expanding the products,

    y = y0 + pΔy0 + (p² − p)/2! Δ²y0 + (p³ − 3p² + 2p)/3! Δ³y0
        + (p⁴ − 6p³ + 11p² − 6p)/4! Δ⁴y0 + ⋯        — (2)

Differentiating eqn (2) w.r.t. p, we get

    dy/dp = Δy0 + (2p − 1)/2! Δ²y0 + (3p² − 6p + 2)/3! Δ³y0
            + (4p³ − 18p² + 22p − 6)/4! Δ⁴y0 + ⋯        — (3)

Since p = (x − x0)/h, we have dp/dx = 1/h. Now,

    dy/dx = (dy/dp)·(dp/dx)
          = (1/h) [ Δy0 + (2p − 1)/2! Δ²y0 + (3p² − 6p + 2)/3! Δ³y0 + ⋯ ]        — (4)

At x = x0, p = 0. Hence, putting p = 0 in eqn (4),

    (dy/dx)|x0 = (1/h) [ Δy0 − (1/2)Δ²y0 + (1/3)Δ³y0 − (1/4)Δ⁴y0 + (1/5)Δ⁵y0 − ⋯ ]        — (5)

Again, differentiating eqn (4) w.r.t. p,

    d/dp (dy/dx) = (1/h) [ Δ²y0 + (6p − 6)/3! Δ³y0 + (12p² − 36p + 22)/4! Δ⁴y0 + ⋯ ]

so that

    d²y/dx² = (dp/dx) · d/dp (dy/dx)
            = (1/h²) [ Δ²y0 + (p − 1)Δ³y0 + (6p² − 18p + 11)/12 Δ⁴y0 + ⋯ ]        — (6)

At x = x0, p = 0. Hence, putting p = 0,

    (d²y/dx²)|x0 = (1/h²) [ Δ²y0 − Δ³y0 + (11/12)Δ⁴y0 − (5/6)Δ⁵y0 + (137/180)Δ⁶y0 − ⋯ ]        — (7)

Similarly,
    (d³y/dx³)|x0 = ?????   [DO YOURSELF]

Derivatives Based on Newton's Backward Interpolation Formula

Newton's backward interpolation formula is

    y = yn + p∇yn + p(p+1)/2! ∇²yn + p(p+1)(p+2)/3! ∇³yn + ⋯
        + p(p+1)⋯(p+n−1)/n! ∇ⁿyn        — (1)

where p = (x − xn)/h.

# Derive the expressions for the first and second derivatives based on Newton's backward interpolation formula. [DO YOURSELF]
Cont'd…

Differentiating eqn (1) w.r.t. p and using dp/dx = 1/h, we get

    dy/dx = (1/h) [ ∇yn + (2p + 1)/2! ∇²yn + (3p² + 6p + 2)/3! ∇³yn
                    + (4p³ + 18p² + 22p + 6)/4! ∇⁴yn + ⋯ ]        — (2)

At x = xn, p = 0. Hence, putting p = 0,

    (dy/dx)|xn = (1/h) [ ∇yn + (1/2)∇²yn + (1/3)∇³yn + (1/4)∇⁴yn + ⋯ ]        — (3)

    (d²y/dx²)|xn = (1/h²) [ ∇²yn + ∇³yn + (11/12)∇⁴yn + (5/6)∇⁵yn + (137/180)∇⁶yn + ⋯ ]        — (4)

Example
# From the following table, find dy/dx and d²y/dx² at x = 1 and x = 4.2.

    x    1     2     3     4     5
    y    11    30    85    194   375

Solution:
Construct the difference table (h = 1). At x = 1, p = 0, so the forward-difference formulas give
    dy/dx = 7,  d²y/dx² = 18
At x = 4.2, the backward formula (2) with p = (4.2 − 5)/h = −0.8 gives
    dy/dx = 156.76
As a check, the tabulated values are generated by y = 3x³ − 2x + 10, for which y′ = 9x² − 2 gives y′(1) = 7 and y′(4.2) = 156.76, and y″ = 18x gives y″(1) = 18.
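The forward-difference formulas (5) and (7) are straightforward to apply from a table. A Python sketch (helper names are our own), checked against the table above, where y = 3x³ − 2x + 10:

```python
def forward_diffs(ys):
    """Columns of the forward-difference table: [y, Δy, Δ²y, ...]."""
    cols = [list(ys)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return cols

def dy_dx_at_x0(xs, ys):
    """Eqn (5): y'(x0) = (1/h)[Δy0 - Δ²y0/2 + Δ³y0/3 - Δ⁴y0/4 + ...]."""
    h = xs[1] - xs[0]
    d = forward_diffs(ys)
    return sum((-1) ** (k + 1) * d[k][0] / k for k in range(1, len(d))) / h

def d2y_dx2_at_x0(xs, ys):
    """Eqn (7), truncated: y''(x0) = (1/h²)[Δ²y0 - Δ³y0 + (11/12)Δ⁴y0 - ...]."""
    h = xs[1] - xs[0]
    d = forward_diffs(ys)
    coef = [1.0, -1.0, 11 / 12, -5 / 6, 137 / 180]
    s = sum(c * d[k + 2][0] for c, k in zip(coef, range(len(d) - 2)))
    return s / h ** 2

xs = [1, 2, 3, 4, 5]
ys = [11, 30, 85, 194, 375]          # y = 3x³ - 2x + 10
print(dy_dx_at_x0(xs, ys))            # 7.0   (exact: 9x² - 2 at x = 1)
print(d2y_dx2_at_x0(xs, ys))          # 18.0  (exact: 18x at x = 1)
```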

Example
# A slider in a machine moves along a fixed straight rod. Its distance x (m) along the rod is given in the following table for various values of the time t (seconds).

    t (sec)   1        2        3        4        5        6
    x (m)     0.0201   0.0844   0.3444   1.0100   2.3660   4.7719

Find the velocity and acceleration of the slider at time t = 6 sec.
    dx/dt = 3.0693 m/s
    d²x/dt² = 1.4777 m/s²

Exercise:
# A rod is rotating in a plane. The following table gives the angle θ (radians) through which the rod has turned for various values of the time t (seconds):

    t    0.0   0.2    0.4    0.6    0.8    1.0    1.2
    θ    0     0.12   0.49   1.12   2.02   3.20   4.67

Calculate the angular velocity and the angular acceleration of the rod at t = 0.2 and t = 1.0 seconds.
Maxima and Minima of a Tabulated Function

From calculus we know that if a function is differentiable, then its maximum and minimum values can be determined by equating the first derivative to zero and solving for the variable. The method extends to tabulated functions.

Consider Newton's forward interpolation formula:

    y = y0 + pΔy0 + p(p−1)/2! Δ²y0 + p(p−1)(p−2)/3! Δ³y0 + p(p−1)(p−2)(p−3)/4! Δ⁴y0 + ⋯        — (1)

Differentiating eqn (1) w.r.t. p,

    dy/dp = Δy0 + (2p − 1)/2 Δ²y0 + (3p² − 6p + 2)/6 Δ³y0 + ⋯        — (2)

For maxima and minima, dy/dp = 0. Equating the right-hand side of eqn (2) to zero and retaining terms only up to third differences, we get

    Δy0 + (2p − 1)/2 Δ²y0 + (3p² − 6p + 2)/6 Δ³y0 = 0

i.e.,

    (1/2)Δ³y0 · p² + (Δ²y0 − Δ³y0) · p + (Δy0 − (1/2)Δ²y0 + (1/3)Δ³y0) = 0

Solving this quadratic for p, and then x = x0 + ph, locates the stationary point.
Example
From the following table, for what value of x is y maximum? Also find this value of y.

    x    3        4        5        6        7        8
    y    0.205    0.240    0.259    0.262    0.250    0.224

Solution:
From the forward difference table, Δy0 = 0.035, Δ²y0 = −0.016, Δ³y0 = 0. Substituting into

    (1/2)Δ³y0 · p² + (Δ²y0 − Δ³y0) · p + (Δy0 − (1/2)Δ²y0 + (1/3)Δ³y0) = 0

and solving, p = 2.6875. Since p = (x − x0)/h with x0 = 3 and h = 1,
    x = x0 + ph = 5.6875

To find the maximum y, substitute p = 2.6875 into Newton's forward interpolation formula:
    y = y0 + pΔy0 + p(p−1)/2! Δ²y0 + p(p−1)(p−2)/3! Δ³y0 + ⋯
    y_max = 0.2628

Exercise:
Find x, correct to 4 decimal places, for which y is maximum for the following data.

    x    1.0   1.2     1.4     1.6     1.8
    y    0     0.128   0.544   1.298   2.44
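The steps of the worked example can be sketched in Python (variable names are our own; the linear fallback handles the case Δ³y0 = 0, as happens here):

```python
from math import sqrt

# Locate the maximum of the tabulated y by solving the quadratic in p,
# keeping differences up to third order (a sketch of the worked example).
xs = [3, 4, 5, 6, 7, 8]
ys = [0.205, 0.240, 0.259, 0.262, 0.250, 0.224]
h = xs[1] - xs[0]

d1 = ys[1] - ys[0]                          # Δy0  = 0.035
d2 = ys[2] - 2 * ys[1] + ys[0]              # Δ²y0 = -0.016
d3 = ys[3] - 3 * ys[2] + 3 * ys[1] - ys[0]  # Δ³y0 = 0 (to 3 dp)

# (Δ³y0/2) p² + (Δ²y0 - Δ³y0) p + (Δy0 - Δ²y0/2 + Δ³y0/3) = 0
A, B, C = d3 / 2, d2 - d3, d1 - d2 / 2 + d3 / 3
if abs(A) < 1e-12:                          # quadratic degenerates to linear
    p = -C / B
else:
    p = (-B + sqrt(B * B - 4 * A * C)) / (2 * A)

x_max = xs[0] + p * h
y_max = ys[0] + p * d1 + p * (p - 1) / 2 * d2 + p * (p - 1) * (p - 2) / 6 * d3
print(round(x_max, 4))                      # 5.6875
print(round(y_max, 4))                      # 0.2628
```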
