
Numerical Analysis

M.Sc. Structural Engineering

Lecture 1

Focus Areas
1. Non-linear solutions (root finding)
2. Systems of linear algebraic equations (related to equations and experimental data)
3. Curve fitting
4. Differentiation and integration
5. Ordinary differential equations
6. Partial differential equations (solution techniques for differential equations)
7. Special functions (Bessel functions and Legendre polynomials)

Why Numerical Methods?

- When analytical and exact solutions have limitations, we introduce numerical
  methods. Example: x^2/4 - sin x = 0 has no closed-form analytical solution, so a
  numerical method is used.
- A numerical method is a mathematical tool designed to solve numerical problems.

Factors in choosing a method:
- Number of initial guesses or starting points
- Rate of convergence
- Stability
- Accuracy and precision
- Breadth of application
- Special requirements
- Programming effort required
For Engineering

How it works:
1. Mathematical modelling: describe and predict the behavior of a system.
2. Applications:
   - Structural / mechanical analysis, design and behavior
   - Communication / power network simulation
   - Train and traffic network simulation
Error Analysis

Questions:
1. Discuss the significance of numerical methods in the field of science and
   engineering in the modern-day context.
2. Discuss the errors in numerical calculations with examples. Write the difference
   between absolute and relative error with appropriate examples.
3. Discuss error analysis and its different types with suitable examples.
Solution of Non-linear Equations

Methods discussed:

1. Bisection Method
   c = (a + b)/2

2. Secant Method
   x_{n+1} = x_n - (x_n - x_{n-1}) f(x_n) / (f(x_n) - f(x_{n-1}))

3. Newton-Raphson Method
   x_{n+1} = x_n - f(x_n) / f'(x_n)
Bisection Method

Convergence of Bisection Method

Questions
1. Show that the sequence of midpoints {c_n} converges to the root x = α.
2. Show that the convergence of the bisection method is linear.
3. Find the roots of:
   (a) x^3 - 9x + 1 = 0
   (b) 4 e^x sin x - 1 = 0
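To make Question 3 concrete, here is a minimal Python sketch of the bisection iteration c = (a + b)/2 described above. It is not part of the original notes; the function name `bisect`, the tolerance and the bracketing interval [2, 3] are illustrative choices.

```python
def bisect(f, a, b, tol=1e-6, max_iter=100):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2.0            # midpoint c = (a + b)/2
        fc = f(c)
        if fc == 0 or (b - a) / 2.0 < tol:
            return c
        if fa * fc < 0:              # root lies in [a, c]
            b, fb = c, fc
        else:                        # root lies in [c, b]
            a, fa = c, fc
    return (a + b) / 2.0

# Question 3(a): x^3 - 9x + 1 = 0 changes sign on [2, 3]
print(bisect(lambda x: x**3 - 9*x + 1, 2, 3))   # approximately 2.943
```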
Thank You
Lecture Note -2

M.sc. Structural Engineering


School of Engineering

By
Hem Raj Pandey
(Assistant Professor)
Pokhara University
Sunday, May 11, 2020


1 Secant Method

The secant method requires two initial guesses. If |f(p0)| <= |f(p1)|, the root is
taken to be closer to p0; otherwise it is closer to p1.

Algorithm

- Take initial values p0 and p1.
- Compute f1 = f(p0) and f2 = f(p1).
- Compute p2 = (p0 f2 - p1 f1) / (f2 - f1).
- Test for accuracy: if |p2 - p1| < eps, the root is p2; otherwise set p0 = p1,
  p1 = p2 and go to step 3.                          (Figure 1: Secant Method)
- Stop.
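A minimal Python sketch of the algorithm above may help when checking hand computations. It is not from the notes; the names `secant`, `tol` and `max_iter` are illustrative, and the stopping test here compares successive iterates.

```python
def secant(f, p0, p1, tol=1e-6, max_iter=50):
    f0, f1 = f(p0), f(p1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("f(p0) == f(p1); cannot continue")
        # p2 = (p0*f1 - p1*f0)/(f1 - f0), equivalent to p1 - f1*(p1 - p0)/(f1 - f0)
        p2 = (p0 * f1 - p1 * f0) / (f1 - f0)
        if abs(p2 - p1) < tol:       # accuracy test on successive iterates
            return p2
        p0, f0 = p1, f1              # shift the pair of iterates
        p1, f1 = p2, f(p2)
    return p1

# Example from Section 3: x^3 + x - 1 = 0, starting from 1 and 2
print(secant(lambda x: x**3 + x - 1, 1.0, 2.0))   # approximately 0.6823
```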

2 Convergence of the Secant Method

Definition 2.1 If e_{n+1} = k e_n^p, where k is a constant, then p is the order (rate)
of convergence of the method generating {x_n}; i.e. e_{n+1} ∝ e_n^p. Here x_n and
x_{n+1} are the approximations to the root x_r at the nth and (n+1)th iterations
respectively.

Now, the general formula of the secant method is

x_{n+1} = x_n - (x_n - x_{n-1}) f(x_n) / (f(x_n) - f(x_{n-1}))
        = (x_{n-1} f(x_n) - x_n f(x_{n-1})) / (f(x_n) - f(x_{n-1}))            (1)
Let x_r be the actual root of f(x) = 0 and e_n be the error of x_n, so that
x_n = x_r + e_n, x_{n-1} = x_r + e_{n-1}, x_{n+1} = x_r + e_{n+1}.
Putting these values of x_n, x_{n-1}, x_{n+1} in equation (1), we get

x_r + e_{n+1} = x_r + e_n - (e_n - e_{n-1}) f(x_n) / (f(x_n) - f(x_{n-1}))

e_{n+1} = (e_{n-1} f(x_n) - e_n f(x_{n-1})) / (f(x_n) - f(x_{n-1}))            (2)

By the Mean Value Theorem there exists a point x = R_n in [x_r, x_n]. Since x_r is a
root of f(x) = 0, using f(x_r) = 0 and x_n - x_r = e_n we get

f'(R_n) = (f(x_n) - f(x_r)) / (x_n - x_r) = (f(x_n) - 0) / e_n
∴ f(x_n) = e_n f'(R_n)

Similarly, we get

f(x_{n-1}) = e_{n-1} f'(R_{n-1})

So equation (2) reduces to

e_{n+1} = (e_{n-1} e_n f'(R_n) - e_{n-1} e_n f'(R_{n-1})) / (f(x_n) - f(x_{n-1}))
        = e_n e_{n-1} (f'(R_n) - f'(R_{n-1})) / (f(x_n) - f(x_{n-1}))

e_{n+1} ∝ e_n e_{n-1}                                                           (3)

If the order of convergence is p, then

e_n ∝ e_{n-1}^p                                                                 (4)
e_{n+1} ∝ e_n^p

So from equation (3) we get

e_n^p ∝ e_{n-1}^p e_{n-1},   i.e.   e_n^p ∝ e_{n-1}^{p+1}
e_n ∝ e_{n-1}^{(p+1)/p}                                                         (5)

From equations (4) and (5) we get

p = (p + 1)/p
p^2 - p - 1 = 0
p = (1 ± √5)/2

Since p is positive, p = 1.618. This shows that the order of convergence of the
secant method is 1.618, i.e. superlinear.

3 Examples

Solve x^3 + x - 1 = 0 by the secant method.
Solution
f(x) = x^3 + x - 1, f(1) = 1, f(2) = 9, so take a = 1, b = 2.

c = (a f(b) - b f(a)) / (f(b) - f(a))

So we get c = (1*9 - 2*1)/(9 - 1) = 0.875.
Now c is nearer to a and does not lie in (1, 2), so replace a by c and b by a:
new a = 0.875 and b = 1.
Then d = 0.7253. Since 0.7253 is nearer to 0.875 than to 1, the new a = 0.7253 and
the new b = 0.875.
Continuing, d1 = 0.6924, d2 = 0.6847, d3 = 0.6829, d4 = 0.6825, and the root correct
to three decimal places is 0.682.

4 Newton-Raphson Method

In the figure, the initial guess is x1, and the second value x2 is obtained by taking
the tangent line to f(x) at the point (x1, f(x1)) and finding the intersection of this
tangent line with the x-axis, and so on.
(Figure 2: Newton-Raphson Method)

The Taylor series expansion of f(x) about x1 is

f(x) = f(x1) + (x - x1) f'(x1) + (x - x1)^2/2! f''(x1) + (x - x1)^3/3! f'''(x1) + ...   (6)

If x2 is a solution of f(x) = 0 and x1 is a point near x2, then f(x2) = 0:

f(x1) + (x - x1) f'(x1) + (x - x1)^2/2! f''(x1) + (x - x1)^3/3! f'''(x1) + ... = 0

Considering only the first two terms of the series, an approximate solution satisfies

f(x1) + (x2 - x1) f'(x1) = 0
x2 = x1 - f(x1)/f'(x1)

Continuing by taking x3 as the next approximation to the solution of f(x) = 0, we get

x3 = x2 - f(x2)/f'(x2)

So the general formula of the Newton-Raphson method is

x_{n+1} = x_n - f(x_n)/f'(x_n)
4.1 Algorithm

- Choose a point x1 as an initial guess of the solution.
- Compute f(x1) and f'(x1). If f'(x1) ≠ 0, set x2 = x1 - f(x1)/f'(x1).
- Set x1 = x2 and repeat the previous step.
- Continue until the relative error is smaller than a specified value,
  |x_{n+1} - x_n| / |x_n| <= eps.
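The algorithm can be sketched in Python as follows; this is illustrative and not part of the notes. The derivative is supplied analytically, the names `newton`, `tol` and `max_iter` are assumptions, and the call at the bottom reproduces the worked example of Section 4.2.

```python
import math

def newton(f, df, x1, tol=1e-6, max_iter=50):
    x = x1
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0:                              # guard against division by zero
            raise ZeroDivisionError("f'(x) vanished at x = %g" % x)
        x_new = x - fx / dfx                      # x_{n+1} = x_n - f(x_n)/f'(x_n)
        if abs(x_new - x) <= tol * abs(x_new):    # relative error test
            return x_new
        x = x_new
    return x

# Section 4.2: 3x = cos(x) + 1, i.e. f(x) = 3x - cos(x) - 1, f'(x) = 3 + sin(x)
root = newton(lambda x: 3*x - math.cos(x) - 1, lambda x: 3 + math.sin(x), 0.4)
print(round(root, 4))    # 0.6071
```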

4.2 Example

Find the real root of 3x = cos x + 1 correct to four decimal places using the
Newton-Raphson method.
Solution:
Let

f(x)  = 3x - cos(x) - 1
f'(x) = 3 + sin(x)

Since f(0) = -2 < 0 and f(1) = 1.4597 > 0, the root lies between 0 and 1.
We know

x_{n+1} = x_n - f(x_n)/f'(x_n)
x1 = x0 - (3x0 - cos(x0) - 1)/(3 + sin(x0))

Taking the initial guess x0 = 0.4, we get x1 = 0.6127.

Step   x0       x1 = x0 - f(x0)/f'(x0)
1      0.4      0.6127
2      0.6127   0.6071
3      0.6071   0.6071

Hence the root is 0.6071.

5 Limitation of Newton-Raphson Method
1. Division by zero may occur when f 0 (xn ) is zero or very close to zero.

2. If the initial guess is too far away from the required root, the process may
converge to some other root.

5.1 Example

Solve 1/x - 2 = 0 by using the N-R method.
Solution:
Here,

f(x)  = 1/x - 2
f'(x) = -1/x^2

From the N-R formula

x_{n+1} = x_n - f(x_n)/f'(x_n)

which implies that

x_{n+1} = x_n - (1/x_n - 2)/(-1/x_n^2)
        = 2(x_n - x_n^2)                                                        (7)

1. Case I: Let the initial guess be x0 = 1.4. From equation (7),
x1 = 2(x0 − x20 )

= 2(1.4 − (1.4)2 )

= −1.12

And

x2 = 2(x1 − x21 )

= 2[(−1.12) − (−1.12)2 ]

= −4.7488

These results indicate that the N-R iteration diverges.

2. Case II: Let the initial guess be x0 = 1. From equation (7),

x1 = 2(x0 − x20 )

= 2(1 − 1)

= 0

And

x2 = 2(x1 − x21 )

= 2∗0

= 0

Thus the iteration converges to x = 0, which is not a solution; at x = 0 the
function is not even defined.

3. Case III: Let the initial guess be x0 = 0.4. From equation (7),
x1 = 2(x0 − x20 )

= 2(0.4 − (0.4)2 )

= 0.48

And

x2 = 2(x1 − x21 )

= 2[0.48 − (0.48)2 ]

= 0.4992

Therefore the iteration converges to the correct solution x = 0.5. This shows that if
the starting point is close enough to the true solution, the N-R method converges.

6 Convergence of Newton-Raphson Method

6.1 First Part

We have

x_{n+1} = x_n - f(x_n)/f'(x_n)

Writing this as the general iteration formula x_{n+1} = φ(x_n),

φ(x_n) = x_n - f(x_n)/f'(x_n)
φ(x)   = x - f(x)/f'(x)

The iteration method converges if |φ'(x)| < 1. Here,

φ'(x) = 1 - [f'(x) f'(x) - f(x) f''(x)] / (f'(x))^2
      = ([f'(x)]^2 - [f'(x)]^2 + f(x) f''(x)) / [f'(x)]^2
      = f(x) f''(x) / [f'(x)]^2

So the condition |φ'(x)| < 1 implies that

|f(x) f''(x)| < |f'(x)|^2

Hence the Newton-Raphson method converges in any interval where this condition holds.

6.2 Second Part

Let the function f(x) be defined on the interval [x_n, x_{n+1}], where x_n is the
current approximation to the root, and let f be twice differentiable on (x_n, x_{n+1}).
By Taylor's theorem there exists a point R in (x_n, x_{n+1}) such that

f(x_{n+1}) = f(x_n) + (x_{n+1} - x_n)/1! f'(x_n) + (x_{n+1} - x_n)^2/2! f''(R)      (8)

Let x_r be the actual root of f(x) = 0. Taking x_{n+1} = x_r, so that
f(x_{n+1}) = f(x_r) = 0, equation (8) gives

0 = f(x_n) + (x_r - x_n)/1! f'(x_n) + (x_r - x_n)^2/2! f''(R)                       (9)

The general formula of the Newton-Raphson method is

x_{n+1} = x_n - f(x_n)/f'(x_n),   i.e.   f(x_n) = (x_n - x_{n+1}) f'(x_n)

Substituting this value of f(x_n) in (9),

0 = (x_n - x_{n+1}) f'(x_n) + (x_r - x_n) f'(x_n) + (x_r - x_n)^2/2! f''(R)
0 = (x_r - x_{n+1}) f'(x_n) + (x_r - x_n)^2/2! f''(R)

Let e_n be the error in the estimate x_n of the root of f(x) = 0, so e_n = x_r - x_n
and e_{n+1} = x_r - x_{n+1}. Putting these values of e_n and e_{n+1} in the above
equation,

0 = e_{n+1} f'(x_n) + e_n^2/2! f''(R)
∴ e_{n+1} = C e_n^2,   where C = - f''(R) / (2! f'(x_n))

This shows that the order of convergence of the Newton-Raphson method is 2, so it is
said to have quadratic convergence.
6.3 Problems

i. If N is a real number, an iterative formula to find 1/N is
   x_{n+1} = x_n (2 - N x_n).
   Solution:
   Let x = 1/N, i.e. 1/x - N = 0. Taking f(x) = 1/x - N, so that f'(x) = -1/x^2,
   the Newton-Raphson formula gives

   x_{n+1} = x_n - f(x_n)/f'(x_n) = x_n - (1/x_n - N)/(-1/x_n^2)
           = x_n + x_n - N x_n^2 = 2 x_n - N x_n^2 = x_n (2 - N x_n)

ii. If N is a real number, an iterative formula to find √N is
    x_{n+1} = (1/2)(x_n + N/x_n)   (a quick numerical check is sketched after this list).

iii. If N is a real number, an iterative formula to find 1/√N is
     x_{n+1} = (1/2)(x_n + 1/(N x_n)).
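A quick numerical check of formula (ii), purely illustrative and assuming an arbitrary positive starting guess; the helper name `sqrt_newton` is not from the notes.

```python
def sqrt_newton(N, x=1.0, n_iter=8):
    # Iterate x_{n+1} = (x_n + N/x_n)/2, which converges to sqrt(N)
    for _ in range(n_iter):
        x = 0.5 * (x + N / x)
    return x

print(sqrt_newton(10.0))   # approximately 3.1623 (compare 10**0.5)
```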

7 Newton-Raphson Method for Multiple Roots

A multiple root corresponds to a point where the function is tangent to the x-axis.
Examples:

1. Double root:
   f(x) = (x - 3)(x - 1)(x - 1) = x^3 - 5x^2 + 7x - 3.

2. Triple root:
   f(x) = (x - 3)(x - 1)(x - 1)(x - 1) = x^4 - 6x^3 + 12x^2 - 10x + 3.

The modified formula for a root of multiplicity m is

x_{n+1} = x_n - m f(x_n)/f'(x_n)

with m = 2 for a double root and m = 3 for a triple root. If the initial guess x_0 is
sufficiently close to the root α, then the estimates

x_0 - m f(x_0)/f'(x_0),   x_0 - (m - 1) f'(x_0)/f''(x_0),   x_0 - (m - 2) f''(x_0)/f'''(x_0)

are nearly equal.
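A minimal sketch of the modified iteration, applied to the double-root example f(x) = (x - 3)(x - 1)^2 = x^3 - 5x^2 + 7x - 3 given above; the helper name `newton_multiple` and the tolerance are illustrative assumptions, not part of the notes.

```python
def newton_multiple(f, df, x0, m, tol=1e-6, max_iter=50):
    x = x0
    for _ in range(max_iter):
        d = df(x)
        if d == 0:                     # f'(x) vanishes at the multiple root itself
            break
        step = m * f(x) / d            # modified step m*f(x)/f'(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f  = lambda x: x**3 - 5*x**2 + 7*x - 3     # double root at x = 1, simple root at x = 3
df = lambda x: 3*x**2 - 10*x + 7
print(newton_multiple(f, df, 0.9, m=2))    # converges to approximately 1.0
```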

7.1 Example

Find the double root of the equation x^3 - x^2 - x + 1 = 0.

Solution

f(x)   = x^3 - x^2 - x + 1
f'(x)  = 3x^2 - 2x - 1
f''(x) = 6x - 2

The actual (double) root is x = 1.

Starting with x0 = 0.9, we get

f(x0) = f(0.9) = 0.019,   f'(x0) = f'(0.9) = -0.37,   f''(x0) = f''(0.9) = 3.4

So, with m = 2,

x0 - m f(x0)/f'(x0)         = 0.9 - 2 (0.019)/(-0.37)    = 1.0027
x0 - (m - 1) f'(x0)/f''(x0) = 0.9 - (2 - 1)(-0.37)/3.4   = 1.0088

The closeness of these values indicates that there is a double root near x = 1. For
the next approximation we choose x1 = 1.01:

x1 - m f(x1)/f'(x1)         = 1.01 - 2 × 0.0002/0.0403     = 1.0001
x1 - (m - 1) f'(x1)/f''(x1) = 1.01 - (2 - 1) × 0.0403/4.06 = 1.0001

The values obtained are equal. This shows that there is a double root at x ≈ 1.0001,
which is close to the actual root x = 1.

8 Newton-Raphson Method for a System of Two Non-linear Equations

Consider the system of non-linear equations

f(x, y) = 0
g(x, y) = 0                                                                   (10)

Let (x0, y0) be an initial approximation to the root of the system, and let
(x0 + h, y0 + k) be the root of the system (10). Then

f(x0 + h, y0 + k) = 0
g(x0 + h, y0 + k) = 0                                                         (11)

Assuming that f and g are differentiable, expanding (11) by Taylor's series:

f(x0 + h, y0 + k) = f0 + h ∂f/∂x0 + k ∂f/∂y0 + ··· = 0
g(x0 + h, y0 + k) = g0 + h ∂g/∂x0 + k ∂g/∂y0 + ··· = 0                        (12)

Neglecting the second- and higher-order terms in (12), we obtain

h ∂f/∂x0 + k ∂f/∂y0 = -f0
h ∂g/∂x0 + k ∂g/∂y0 = -g0                                                     (13)

where f0 = f(x0, y0), ∂f/∂x0 = (∂f/∂x) evaluated at (x0, y0), and so on.

The solution of (13) is unique if the Jacobian determinant

D = | ∂f/∂x0  ∂f/∂y0 |
    | ∂g/∂x0  ∂g/∂y0 |  ≠ 0.

Solving (13) for h and k (for example by Cramer's rule), the improved approximation is
x1 = x0 + h and y1 = y0 + k, and the process is repeated to the desired degree of
accuracy (until f(x, y) ≈ g(x, y) ≈ 0).
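One Newton step for such a 2×2 system can be sketched as below, solving (13) by Cramer's rule. The code is illustrative (not from the notes); the function name and argument layout are assumptions, and the call reproduces the hand computation of Example 8.1 that follows.

```python
import math

def newton_step_2d(f, g, fx, fy, gx, gy, x0, y0):
    # Cramer's rule for h*fx + k*fy = -f0, h*gx + k*gy = -g0
    D = fx(x0, y0) * gy(x0, y0) - fy(x0, y0) * gx(x0, y0)   # Jacobian determinant
    if D == 0:
        raise ZeroDivisionError("singular Jacobian")
    h = (-f(x0, y0) * gy(x0, y0) + g(x0, y0) * fy(x0, y0)) / D
    k = (-g(x0, y0) * fx(x0, y0) + f(x0, y0) * gx(x0, y0)) / D
    return x0 + h, y0 + k

# Example 8.1: x^2 - y^2 = 4, x^2 + y^2 = 16, starting from (2*sqrt(2), 2*sqrt(2))
f  = lambda x, y: x**2 - y**2 - 4
g  = lambda x, y: x**2 + y**2 - 16
fx = lambda x, y: 2*x
fy = lambda x, y: -2*y
gx = lambda x, y: 2*x
gy = lambda x, y: 2*y
x1, y1 = newton_step_2d(f, g, fx, fy, gx, gy, 2*math.sqrt(2), 2*math.sqrt(2))
print(x1, y1)    # about 3.182 and 2.475, as in the worked example
```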

8.1 Example

Solve
x^2 - y^2 = 4,   x^2 + y^2 = 16

Solution: To obtain the initial approximation we replace the first equation by y = x,
which implies 2x^2 = 16, i.e. x = 2√2.
Let x0 = 2√2, y0 = 2√2, and take (x0, y0) as the initial approximation to the root of
the system. We have

f = x^2 - y^2 - 4   ⇒  f0 = -4
g = x^2 + y^2 - 16  ⇒  g0 = 0

Differentiating partially, we obtain

∂f/∂x = 2x,   ∂f/∂y = -2y
∂g/∂x = 2x,   ∂g/∂y = 2y

so that

∂f/∂x0 = 2x0 = 4√2,   ∂f/∂y0 = -2y0 = -4√2
∂g/∂x0 = 2x0 = 4√2,   ∂g/∂y0 = 2y0 = 4√2

The system of linear equations (13) becomes

h (∂f/∂x0) + k (∂f/∂y0) = -f0  ⇒  h(4√2) - k(4√2) = 4  ⇒  h - k = 0.7072
h (∂g/∂x0) + k (∂g/∂y0) = -g0  ⇒  h(4√2) + k(4√2) = 0  ⇒  h + k = 0

Solving h - k = 0.7072 and h + k = 0, we get

h = 0.3536,   k = -0.3536.

The second approximation to the root is

x1 = x0 + h = 2√2 + 0.3536 = 3.1820
y1 = y0 + k = 2√2 - 0.3536 = 2.4748

9 Questions

- Solve f(x, y) = x^2 + y - 20x + 40 = 0, g(x, y) = x + y^2 - 20y + 20 = 0 by the
  N-R method.
- Solve x^2 + y^2 = 1, y = x^2 by the N-R method.
- Solve x^2 + y - 11 = 0, x + y^2 - 7 = 0 by the N-R method.
- Solve sin(xy) + x - y = 0, y cos(xy) + 1 = 0 by the N-R method.

Lecture Note -3

M.sc. Structural Engineering


School of Engineering

By
Hem Raj Pandey
(Assistant Professor)
Pokhara University
Sunday, May 18, 2020
Contents

1 Interpolation
  1.1 Introduction
  1.2 Error in Interpolation
  1.3 Difference Table
      1.3.1 Newton's Forward Difference Polynomial
      1.3.2 Newton's Backward Difference Polynomial
  1.4 Divided Difference
      1.4.1 Newton's Divided Difference Polynomial
      1.4.2 Lagrange's Polynomial Formula
  1.5 Inverse Interpolation
  1.6 Double Interpolation
  1.7 Cubic Splines
Chapter 1

Interpolation

1.1 Introduction

The method of constructing a function and estimating its values at non-tabular points
is called interpolation.

Figure 1.1: (a) linear, (b) quadratic and (c) cubic interpolating polynomials

The process of finding the value of y corresponding to any value x = xi between x0
and xn is called interpolation. That is, interpolation is a procedure for estimating
the value of a function at any intermediate value of the independent variable between
known data points.

Suppose the values of y = f(x) are given for a set of values of x:

x | x0  x1  x2  ···  xn
y | y0  y1  y2  ···  yn

Let Pn(x) be a polynomial with Pn(xk) = fk, k = 0, 1, 2, ..., n; it is called the
interpolating polynomial. Let the nth-order polynomial be
f(x) = a0 + a1 x + a2 x^2 + ··· + an x^n,
which passes through the (n + 1) data points (x0, f0), (x1, f1), (x2, f2), ..., (xn, fn).
So Pn(x) ≈ f(x).

Theorem 1. If x0, x1, ..., xn are distinct real numbers, then for arbitrary values
y0, y1, ..., yn there is a unique polynomial of degree at most n, say Pn(x), such that
Pn(xi) = yi, i = 0, 1, 2, ..., n.

1.2 Error in Interpolation

Theorem 2. Suppose that n ≥ 0 and f is a real-valued continuous function defined on
the closed interval [a, b], whose derivative of order (n + 1) exists and is continuous
on [a, b]. Let x ∈ [a, b]. Then there exists ξ = ξ(x) ∈ (a, b) such that

f(x) - Pn(x) = f^(n+1)(ξ)/(n + 1)!  Π_{n+1}(x)                                 (1.1)

where Π_{n+1}(x) = (x - x0)(x - x1)(x - x2) ··· (x - xn). Moreover,

|f(x) - Pn(x)| ≤ M_{n+1}/(n + 1)!  |Π_{n+1}(x)|

where M_{n+1} = max_{x∈[a,b]} |f^(n+1)(x)|.

Proof:
If x = xi, then f(xi) - Pn(xi) = 0 and Π_{n+1}(xi) = (xi - x0)(xi - x1) ··· (xi - xn) = 0,
so (1.1) is satisfied identically.
Consider x ≠ xi, x ∈ [a, b]. For such a value of x (kept fixed), define the function
φ(t) as follows:

φ(t) = f(t) - Pn(t) - [ (f(x) - Pn(x)) / Π_{n+1}(x) ] Π_{n+1}(t)

Then

φ(xi) = f(xi) - Pn(xi) - [ (f(x) - Pn(x)) / Π_{n+1}(x) ] Π_{n+1}(xi)
      = 0 - [ (f(x) - Pn(x)) / Π_{n+1}(x) ] · 0
      = 0

φ(x)  = f(x) - Pn(x) - [ (f(x) - Pn(x)) / Π_{n+1}(x) ] Π_{n+1}(x)
      = 0

Thus φ vanishes at the (n + 2) points x, x0, x1, ..., xn. Applying Rolle's theorem,
φ'(t) vanishes at (n + 1) distinct points, each of which lies between successive zeros
of φ. Again φ''(t) vanishes at n distinct points, each of which lies between successive
zeros of φ'. Continuing, φ^(n+1) vanishes at some ξ ∈ (a, b):

φ^(n+1)(ξ) = f^(n+1)(ξ) - Pn^(n+1)(ξ) - [ (f(x) - Pn(x)) / Π_{n+1}(x) ] (n + 1)!

Since Pn(t) is a polynomial of degree n, every derivative of order greater than n is
zero, and φ^(n+1)(ξ) = 0. Therefore

0 = f^(n+1)(ξ) - [ (f(x) - Pn(x)) / Π_{n+1}(x) ] (n + 1)!
f(x) - Pn(x) = f^(n+1)(ξ)/(n + 1)!  Π_{n+1}(x)

Taking absolute values on both sides,

|f(x) - Pn(x)| = |f^(n+1)(ξ)| / (n + 1)!  |Π_{n+1}(x)|

Since f^(n+1) is continuous on the closed interval [a, b], so is |f^(n+1)|; hence
|f^(n+1)| is bounded on [a, b] and attains its maximum M_{n+1} there, so that
|f^(n+1)(ξ)| ≤ M_{n+1}. Hence

|f(x) - Pn(x)| ≤ M_{n+1}/(n + 1)!  |Π_{n+1}(x)|

Example 1.2.1. If the function f(x) = sin x is approximated by a polynomial of degree
9 that interpolates f at 10 points in the interval [0, 1], how large is the error on
this interval?

Solution:
We have f(x) = sin x on [0, 1]. Using the interpolation error bound,

f(x) - P9(x) = f^(10)(ξ)/10!  Π_10(x)

where Π_10(x) = (x - x0)(x - x1) ··· (x - x9), x ∈ [0, 1]. Since x and all xi lie in
[0, 1], |x - xi| ≤ 1 and hence |Π_10(x)| ≤ 1; also max_{ξ∈(0,1)} |f^(10)(ξ)| ≤ 1. So

|f(x) - P9(x)| = |sin x - P9(x)| ≤ 1/10!

Hence the error is not greater than 1/10! on this interval.

1.3 Difference Table

x y(x) ∆y0 ∆2 y0 ∆3 y0
x0 y0
y1 − y0
x1 y1 y2 − 2y1 + y0
y2 − y1 y3 − 3y2 + 3y1 − y0
x2 y2 y3 − 2y2 + y1
y3 − y2
x3 y3

1.3.1 Newton's Forward Difference Polynomial

If x0, x1, ..., xn are a given set of observations with common difference h and
y0, y1, ..., yn are their corresponding values, then the approximating polynomial is

yp(x) = y0 + p Δy0 + p(p - 1)/2! Δ²y0 + p(p - 1)(p - 2)/3! Δ³y0
        + ··· + p(p - 1)(p - 2) ··· (p - n + 1)/n! Δⁿy0                        (1.2)

where p = (x - x0)/h.

Example 1.3.1. Find the number of men getting wages between Rs. 10 and Rs. 15 from
the following data:

wages (Rs.)   0-10  10-20  20-30  30-40
Frequency        9     30     35     42

Solution:
The cumulative frequency table is

wages less than (x)   10  20  30   40
No. of men (y)         9  39  74  116

The difference table is

x    y(x)   Δy   Δ²y  Δ³y
10      9
              30
20     39          5
              35         2
30     74          7
              42
40    116

Newton's forward interpolation formula is

yp(x) = y0 + p Δy0 + p(p - 1)/2! Δ²y0 + p(p - 1)(p - 2)/3! Δ³y0

To find the number of men with wages less than Rs. 15, take x0 = 10, y0 = 9, h = 10,
so x = x0 + ph and p = (x - 10)/10. When x = 15, p = 0.5:

yp(15) = 9 + 0.5(30) + 0.5(0.5 - 1)(5)/2! + 0.5(0.5 - 1)(0.5 - 2)(2)/3!
       = 24 - 0.625 + 0.125
       = 23.5

∴ The number of men with wages less than Rs. 15 is about 23.5, i.e. 24, while the
number of men with wages less than Rs. 10 is 9. Hence the number of men with wages
between Rs. 10 and Rs. 15 is 24 - 9 = 15.
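A small Python sketch of formula (1.2) for equally spaced data follows; it is illustrative (helper names are not from the notes) and reproduces the value 23.5 obtained above.

```python
from math import factorial

def forward_differences(y):
    """Return [y, Δy, Δ²y, ...] as a list of columns of the difference table."""
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

def newton_forward(x_vals, y_vals, x):
    h = x_vals[1] - x_vals[0]
    p = (x - x_vals[0]) / h
    table = forward_differences(y_vals)
    result, coeff = 0.0, 1.0
    for k in range(len(y_vals)):
        result += coeff * table[k][0] / factorial(k)   # term p(p-1)...(p-k+1)/k! Δ^k y0
        coeff *= (p - k)
    return result

# Example 1.3.1: cumulative frequencies at x = 10, 20, 30, 40
print(newton_forward([10, 20, 30, 40], [9, 39, 74, 116], 15))   # 23.5
```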

1.3.2 Newton's Backward Difference Polynomial

If x0, x1, ..., xn are a given set of observations with common difference h and
y0, y1, ..., yn are their corresponding values, then the approximating polynomial is

yp(x) = yn + p ∇yn + p(p + 1)/2! ∇²yn + p(p + 1)(p + 2)/3! ∇³yn
        + ··· + p(p + 1)(p + 2) ··· (p + n - 1)/n! ∇ⁿyn                        (1.3)

where p = (x - xn)/h.

1.4 Divided Difference

Consider the function y = f(x). The divided differences of orders 0, 1, 2, ..., n are
defined recursively by

y[x0, x1, x2, ..., xn] = ( y[x1, x2, ..., xn] - y[x0, x1, ..., x_{n-1}] ) / (xn - x0)   (1.4)

The divided difference table

x y(x) Ist order 2nd order 3rd order 4th order


x0 y0
y[x1 , x0 ]
x1 y1 y[x0 , x1 , x2 ]
y[x2 , x1 ] y[x0 , x1 , x2 , x3 ]
x2 y2 y[x1 , x2 , x3 ] y[x0 , x1 , x2 , x3 , x4 ]
y[x3 , x2 ] y[x1 , x2 , x3 , x4 ]
x3 y3 y[x2 , x3 , x4 ]
y[x4 , x3 ]
x4 y4

1.4.1 Newton's Divided Difference Polynomial

The polynomial Pn(x) is defined as

Pn(x) = y0 + (x - x0) y[x0, x1] + (x - x0)(x - x1) y[x0, x1, x2]
        + (x - x0)(x - x1)(x - x2) y[x0, x1, x2, x3]
        + ··· + (x - x0)(x - x1) ··· (x - x_{n-1}) y[x0, x1, x2, ..., xn]      (1.5)

where y[x0, x1] = (y1 - y0)/(x1 - x0), y[x0, x1, x2] = (y[x1, x2] - y[x0, x1])/(x2 - x0),
and so on.

Example 1.4.1. Using Newton's divided differences, estimate the value at x = 17 cm
from the following data:

x   0   10   20   30   40   50
y   0   50  150  300  500  750

Solution:
The divided difference table is

x    y(x)  1st order  2nd order  3rd order  4th order  5th order
 0     0
               5
10    50               0.25
              10                    0
20   150               0.25                    0
              15                    0                     0
30   300               0.25                    0
              20                    0
40   500               0.25
              25
50   750

By Newton's divided difference polynomial,

P5(x) = y0 + (x - x0) y[x0, x1] + (x - x0)(x - x1) y[x0, x1, x2]
        + (x - x0)(x - x1)(x - x2) y[x0, x1, x2, x3] + ···

P5(17) = 0 + (17 - 0)(5) + (17 - 0)(17 - 10)(0.25) + (17 - 0)(17 - 10)(17 - 20)(0) + 0
       = 114.75
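The divided-difference computation can be sketched in Python as follows (illustrative helper names, not part of the notes); it reproduces P5(17) = 114.75.

```python
def divided_differences(x, y):
    """Return the top row y[x0], y[x0,x1], y[x0,x1,x2], ... of the table, in place."""
    n = len(x)
    coef = list(y)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (x[i] - x[i - j])
    return coef

def newton_divided(x, y, t):
    coef = divided_differences(x, y)
    result, prod = 0.0, 1.0
    for k in range(len(x)):
        result += coef[k] * prod      # term y[x0,...,xk] * (t-x0)...(t-x_{k-1})
        prod *= (t - x[k])
    return result

xs = [0, 10, 20, 30, 40, 50]
ys = [0, 50, 150, 300, 500, 750]
print(newton_divided(xs, ys, 17))     # 114.75
```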

1.4.2 Lagrange's Polynomial Formula

If a function y = f(x) defined on an interval [a, b] takes the (n + 1) distinct values
y0, y1, y2, ..., yn corresponding to x0, x1, x2, ..., xn, then

Pn(x) = Σ_{i=0}^{n} Li(x) yi

where

Li(x) = [(x - x0)(x - x1) ··· (x - x_{i-1})(x - x_{i+1}) ··· (x - xn)]
        / [(xi - x0)(xi - x1) ··· (xi - x_{i-1})(xi - x_{i+1}) ··· (xi - xn)]
      = Π_{j=0, j≠i}^{n} (x - xj)/(xi - xj)

Example 1.4.2. Find the missing values in the following set of data by using
Lagrange's polynomial method.

x  -2   -1   0   1    6
y  -20   ?   2   ?   70

Solution:
Here x0 = -2, x1 = 0, x2 = 6 and y0 = -20, y1 = 2, y2 = 70, and the Lagrange
polynomial formula is

P3(x) = (x - x1)(x - x2)/[(x0 - x1)(x0 - x2)] y0 + (x - x0)(x - x2)/[(x1 - x0)(x1 - x2)] y1
        + (x - x0)(x - x1)/[(x2 - x0)(x2 - x1)] y2                             (1.6)

Putting x = -1 and x = 1 in equation (1.6):

P3(-1) = (-1 - 0)(-1 - 6)/[(-2 - 0)(-2 - 6)] (-20) + (-1 + 2)(-1 - 6)/[(0 + 2)(0 - 6)] (2)
         + (-1 + 2)(-1 - 0)/[(6 + 2)(6 - 0)] (70)
       = -35/4 + 7/6 - 35/24
       = -9.0417

P3(1)  = (1 - 0)(1 - 6)/[(-2 - 0)(-2 - 6)] (-20) + (1 + 2)(1 - 6)/[(0 + 2)(0 - 6)] (2)
         + (1 + 2)(1 - 0)/[(6 + 2)(6 - 0)] (70)
       = 25/4 + 5/2 + 35/8
       = 13.1250

Hence the missing values are -9.0417 and 13.1250.
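A direct transcription of the Li(x) product formula into Python, purely illustrative and not from the notes; it reproduces the missing values of Example 1.4.2.

```python
def lagrange(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)    # build the basis polynomial L_i(x)
        total += Li * yi
    return total

xs, ys = [-2, 0, 6], [-20, 2, 70]
print(round(lagrange(xs, ys, -1), 4), round(lagrange(xs, ys, 1), 4))   # -9.0417 13.125
```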

1.5 Inverse Interpolation

Given a set of values of x and y, the process of finding the value of x for a given
value of y is called inverse interpolation. Interchanging x and y in Lagrange's
interpolation formula, we get

Pn(y) = Σ_{i=0}^{n} Li(y) xi

where

Li(y) = Π_{j=0, j≠i}^{n} (y - yj)/(yi - yj)

Example 1.5.1. Find the value of x when y = 0.3 by applying inverse interpolation.

x  0.4     0.6     0.8
y  0.3683  0.3332  0.2897

Solution:
From Lagrange's inverse interpolation,

P3(y) = (y - y1)(y - y2)/[(y0 - y1)(y0 - y2)] x0 + (y - y0)(y - y2)/[(y1 - y0)(y1 - y2)] x1
        + (y - y0)(y - y1)/[(y2 - y0)(y2 - y1)] x2

Putting x0 = 0.4, x1 = 0.6, x2 = 0.8 and y0 = 0.3683, y1 = 0.3332, y2 = 0.2897, we get

P3(0.3) = (0.3 - 0.3332)(0.3 - 0.2897)/[(0.3683 - 0.3332)(0.3683 - 0.2897)] (0.4)
          + (0.3 - 0.3683)(0.3 - 0.2897)/[(0.3332 - 0.3683)(0.3332 - 0.2897)] (0.6)
          + (0.3 - 0.3683)(0.3 - 0.3332)/[(0.2897 - 0.3683)(0.2897 - 0.3332)] (0.8)
        = 0.7573

1.6 Double Interpolation

For a function of two or more variables the interpolation formulas become complicated,
but a simpler procedure is to interpolate with respect to the first variable keeping
the others constant, and then interpolate with respect to the second variable, and so
on. This procedure is called numerical double interpolation. We can use

- the method of linear interpolation, or
- Newton's forward difference formula.

Example 1.6.1. Tabulate the values of function z(x, y) = x2 + y 2 − y for x =


0, 1, 2, 3, 4 and y = 0, 1, 2, 3, 4. Using this table of values, compute z(2.5, 3.5) by
numerical double interpolation.

Solution:
The value of the function for the given values of x and y are given in following table:
y→
x↓
0 1 2 3 4
0 0 0 2 6 12
1 1 1 3 7 13
2 4 4 6 10 16
3 9 9 11 15 21
4 16 16 18 22 28

1. Method of linear interpolation.


we know the formulae of linear interpolation is

z(x1 ) − z(x0 )
z(x) = z(x0 ) + (x − x0 )
x1 − x0

we interpolate with respect to x keeping y constant. i.e x = 2.5, using linear


interpolation

y z
0 6.5
1 6.5
2 8.5
3 12.5
4 18.5

Again we interpolate with respect to y using linear interpolation for y = 3.5


we get,

z = (12.5 + 18.5)/2 = 15.5

So z(2.5, 3.5) ≈ 15.5. From the function z = x^2 + y^2 - y, the exact value is
z(2.5, 3.5) = 15, so the error is 0.5 (about 3.3 percent).

2. Newton’s forward difference formula.


let the constant variable say x. And keeping x = 2.5, y = 3.5 as the near centre
of the set and the corresponding values are x = 1, 2, 3 and y = 2, 3, 4. So, using
Newton's forward difference formula, we have

At x=1
y z ∆z ∆2 z
2 3
4
3 7 2
6
4 13

At x=2
y z ∆z ∆2 z
2 6
4
3 10 2
6
4 16

At x=3
y z ∆z ∆2 z
2 11
4
3 15 2
6
4 21

y−y0 3.5−2
Here, p = h
= 1
= 1.5

We obtain
p(p − 1) 2
z(1, 3.5) = z0 + p∆z0 + ∆ z0
2!
(1.5)(0.5)
= 3 + (1.5)(4) + (2)
2
= 9.75
(1.5)(0.5)
z(2, 3.5) = 6 + (1.5)(4) + (2)
2
= 12.75
(1.5)(0.5)
z(3, 3.5) = 11 + (1.5)(4) + (2)
2
= 17.75

Therefore, we arrive at the following result,

At y = 3.5
x    z      Δz   Δ²z
1     9.75
             3
2    12.75        2
             5
3    17.75

Now define p = (2.5 - 1)/1 = 1.5 and use Newton's formula again:

z(2.5, 3.5) = 9.75 + (1.5)(3) + (1.5)(0.5)/2 (2) = 15

From the functional relation, z(2.5, 3.5) = (2.5)^2 + (3.5)^2 - 3.5 = 15.
Hence there is no error in the interpolation.

Exercise

i. Tabulate the values of function z(x, y) = x2 + y 2 + y for x = 0, 1, 2, 3, 4 and


y = 0, 1, 2, 3, 4. Using this table of values, compute z(2.5, 1.5) by numerical
double interpolation.

ii. Tabulate the values of function z(x, y) = x2 + y 2 + y for x = 0, 1, 2, and


y = 0, 1, 2,. Using this table of values, compute z(0.5, 0.5) and z(1.5, 1.5) by
numerical double interpolation.

iii. Tabulate the values of the function z(x, y) = e^x sin y + y - 0.1 for
     x = 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5 and y = 0.1, 0.2, 0.3, 0.4, 0.5. Using this
     table of values, compute z(1.6, 0.33) by numerical double interpolation.

1.7 Cubic Splines

A spline is a piecewise polynomial used in place of a single high-order polynomial
fitted over the entire interval. A function S(x) is a spline of degree k on [a, b],
with a = x0 < x1 < x2 < ··· < xn = b, if

S(x) = S0(x),       x0 ≤ x ≤ x1
       S1(x),       x1 ≤ x ≤ x2
       ...
       S_{n-1}(x),  x_{n-1} ≤ x ≤ xn

where each Si(x) ∈ P_k, the set of polynomials of degree k.

Definition 1.7.1. A cubic spline is a piecewise cubic polynomial that is twice
continuously differentiable. Given the data points (xi, yi), i = 0, 1, 2, ..., n, we
determine a function S(x) such that

i. S(xi) = yi, i = 0, 1, 2, ..., n;

ii. S(x) is cubic in each subinterval (xi, x_{i+1}) for i = 0, 1, 2, ..., (n - 1); and

iii. S(x) ∈ C²[x0, xn].

Theorem 3. If x0 , x1 , x2 , · · · , xn are given set of observations with common differ-


ence h and let y0 , y1 , y2 , · · · , yn are their corresponding values where,

i. y = S(x) is linear polynomial outside the interval [x0 , xn ].

ii. S(x) is cubic polynomial in each subinterval.



iii. S 0 (x) and S 00 (x) are continuous at each points.

then,

1 (xi+1 − x)3 (x − xi )3
 
S(x) = Mi + Mi+1
hi 6 6
h2i h2i
   
(xi+1 − x) (x − xi )
+ yi − Mi + yi+1 − Mi+1
hi 6 hi 6

Where,

6
Mi−1 + 4Mi + Mi+1 = (yi−1 − 2yi + yi+1 ), i = 1, 2, 3, · · · , (n − 1)
h2

Proof:

Let us consider xi which are


arbitrarily placed, such that
xi+1 − xi = hi and S(x) is
cubic polynomial in each sub-
intervals, S 0 (x) and S 00 (x) are
continuous at each point of in-
terval (xi , xi+1 ). From the fig-
ure, the slope passing through
AB and BC are equal.i.e.

S 00 (x) − S 00 (xi ) S 00 (xi+1 ) − S 00 (xi ) Figure 1.2: Cubic spline interpolation


=
x − xi xi+1 − xi
1
S 00 (x) = S 00 (xi ) + [(x − xi )S 00 (xi+1 ) − (x − xi )s0 (xi )]
hi
1
= [(xi+1 − x)S 00 (xi ) + (x − xi )s00 (xi+1 )]
hi

1
S 00 (x) = [(xi+1 − x)Mi + (x − xi )Mi+1 ] (1.7)
hi
Where, S 00 (xi ) = Mi
Integrating (1.7) twice with respect to x , we get

1 (xi+1 − x)3 (x − xi )3
 
S(x) = Mi + Mi+1 + Ci (xi+1 − x) + di (x − xi ) (1.8)
hi 6 6

where Ci and di are constant. When S(xi ) = yi and S(xi+1 ) = yi+1 .


Then equation (1.8) changes as;
1 (xi+1 − xi )3
 
yi = Mi + Ci (xi+1 − xi )
hi 6
h2i
 
1
∴ Ci = yi − Mi
hi 6
and

1 (xi+1 − xi )3
 
yi+1 = Mi+1 + di (xi+1 − xi )
hi 6
h2i
 
1
∴ di = yi+1 − Mi+1
hi 6
Putting the value of Ci and di in (1.8) we get in interval (xi , xi+1 ).

1 (xi+1 − x)3 (x − xi )3
 
S(x) = Mi + Mi+1
hi 6 6
h2i
  
xi+1 − xi
+ yi − Mi
hi 6
h2i
  
x − xi
+ yi+1 − Mi+1 (1.9)
hi 6
This is cubic spline interpolation function. Where Mi0 s are constant.
Now, differentiating (1.9) with respect to x, we get

(xi+1 − x)2 (x − xi )2
 
0 1
S (x) = − Mi + Mi+1
hi 2 2
h2i h2i
   
1
− yi − Mi + yi+1 − Mi+1 (1.10)
hi 6 6
When x = xi , in equation (1.10), we get
hi 1
S 0 (xi , +) = − (2Mi + Mi+1 ) + (yi+1 − yi ) (1.11)
6 hi
When x = xi−1 , in interval (xi−1 , xi ). (we change equation (1.10) in interval
(xi−1 , xi ))
hi−1 1
S 0 (xi , −) = (2Mi + Mi−1 ) + (yi − yi−1 ) (1.12)
6 hi−1
Form the definition of continuity S 0 (xi , +) = S 0 (xi , −), which implies that

 
yi+1 − yi yi − yi−1
hi−1 Mi−1 + 2(hi−1 + hi )Mi + hi Mi+1 =6 − (1.13)
hi hi−1

where, i = 1, 2, 3, · · · , (n − 1)
Since, we have equally -spaced knots,i.e, xi = x0 + ih, i = 1, 2, 3, · · · , n, for the
equation (1.13) using hi = hi−1 = h, which implies that,
 
yi+1 − yi − yi + yi−1
hMi−1 + 4hMi + hMi+1 = 6
h
 
6 yi+1 − 2yi + yi−1
Mi−1 + 4Mi + Mi+1 = 2 , for i = 1, 2, 3, · · · , (n − 1) (1.14)
h h
Which is the required condition for finding the value of M1 , M2 , · · · , Mn−1 .

Example 1.7.1. Using cubic spline interpolation technique estimate y(1.5) and y 0 (3)
x 1 2 3 4
from the following data.
y 1 2 5 11

Solution:
Here, h = 1 and n = 3 the cubic spline of the function in the interval xi ≤ x ≤ xi+1
is

1 1
s(x) = Mi (xi+1 − xi )3 + Mi+1 (x − xi )3
6h 6h
   
yi h yi+1 h
+ − Mi (xi+1 − x) + − Mi+1 (x − xi )
h 6 h 6
where Mi0 s as
6
Mi−1 + 4Mi + Mi+1 = (yi−1 − 2yi + yi+1 )
h2
when i = 1 and using M0 = 0
6
M0 + 4M1 + M2 = (y
h2 0
− 2y)1 + y2 ) = 12,

4M1 + M2 = 12 (1.15)

6
When i = 2 and M3 = 0, M1 + 4M2 + M3 = (y
h2 1
− 2y2 + y3 ) = 18.

M1 + 4M2 = 18 (1.16)

Solving (1.15) and (1.16) we get, M1 = 2, M2 = 4. Now, cubic spline function in


1 ≤ x ≤ 2 at i = 0 is

1 1
S(x) = M0 (x1 − x)3 + M1 (x − x0 )3
6 6
y0 1 y1 1
( − M0 (x1 − x)) + ( − M1 )(x − x0 )
1 6 1 6

Putting x0 = 1, x1 = 2, y0 = 1, y1 = 2, M0 = 0, M1 = 2, and M2 = 4

1 1
S(x) = 0 + (2)(x − 1)3 + 1(2 − x) + (2 − (2))(x − 1)
6 6
1 3 5
= (x − 1) + 1(2 − x) + (x − 1)
3 3
1 3 2
= (x − 3x + 5x)
3

when i = 1 at 2 ≤ x ≤ 3 is

1 1
S(x) = M1 (x2 − x)3 + M2 (x − x1 )3
6 6
y1 1 y2 1
( − M1 (x2 − x)) + ( − M2 )(x − x1 )
1 6 1 6

Putting x1 = 2, x2 = 3, y2 = 5, y1 = 2, M0 = 0, M1 = 2, and M2 = 4

1 1 1 1
S(x) = (2)(3 − x)3 + (4)(x − 2)3 + (2 − (2))(3 − x) + (5 − (4))(x − 2)
6 6 6 6
1 3 2
= (x − 3x + 5x)
3

when i = 2 at 3 ≤ x ≤ 4 is

1
S(x) = (−2x3 + 24x2 − 76x + 81)
3

Hence the cubic spline function is



1
(x3 − 3x2 + 5x), 1 ≤ x ≤ 2


3




S(x) = 1 (x3 − 3x2 + 5x), 2 ≤ x ≤ 3
 3


 1 (−2x3 + 24x2 − 76x + 81), 3 ≤ x ≤ 4


3

Thus S(1.5) = (1/3)((1.5)^3 - 3(1.5)^2 + 5(1.5)) = 1.375 and S'(3) = 4.67.
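A sketch of the natural cubic spline (M0 = Mn = 0) for equally spaced knots, assuming numpy is available: it assembles the tridiagonal system (1.14), evaluates S(x) from (1.9), and reproduces S(1.5) = 1.375 from Example 1.7.1. The function name and interface are illustrative, not part of the notes.

```python
import numpy as np

def natural_cubic_spline(x, y, t):
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x) - 1
    h = x[1] - x[0]
    # Interior second derivatives M1..M_{n-1}: tridiagonal system with 4 on the diagonal
    A = 4.0 * np.eye(n - 1) + np.eye(n - 1, k=1) + np.eye(n - 1, k=-1)
    rhs = 6.0 / h**2 * (y[:-2] - 2*y[1:-1] + y[2:])
    M = np.zeros(n + 1)                       # natural end conditions M0 = Mn = 0
    M[1:n] = np.linalg.solve(A, rhs)
    # Locate the subinterval containing t and evaluate the cubic piece (1.9)
    i = min(max(np.searchsorted(x, t) - 1, 0), n - 1)
    a, b = x[i], x[i + 1]
    S = ((M[i]*(b - t)**3 + M[i+1]*(t - a)**3) / (6*h)
         + (y[i] - M[i]*h**2/6) * (b - t)/h
         + (y[i+1] - M[i+1]*h**2/6) * (t - a)/h)
    return S

xs, ys = [1, 2, 3, 4], [1, 2, 5, 11]
print(natural_cubic_spline(xs, ys, 1.5))      # 1.375, as in Example 1.7.1
```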

Example 1.7.2. Solve the boundary value problem governed by the differential equa-
tion.
4x 0 2
y 00 + 2
y + y=0
1+x 1 + x2
and bc’s y(0) = 1, y(2) = 0.2. Using cubic spline method at h = 1.

Solution:
Let y 00 (xi ) = S 00 (xi ) = Mi , So the given differential equation can be written as,

4xi 0 2
Mi = − y +
2 i
yi (1.17)
1 + xi 1 + x2i

Sinceh = 1, so divided the interval in [0, 1] and [1, 2] and we know for cubic spline.

h 1
S 0 (xi , +) = − (2Mi + Mi+1 ) + (yi+1 − yi ) (1.18)
6 h
h 1
S 0 (xi , −) = (2Mi + Mi−1 ) + (yi − yi−1 ) (1.19)
6 h
We get in interval [0, 1].
 
4xi h 1 2
Mi = − 2
− (2Mi + Mi+1 ) + (yi+1 − yi ) − yi (1.20)
(1 + xi ) 6 h 1 + x2i

and in interval [1, 2]


 
4xi h 1 2
Mi = − 2
(2Mi + Mi−1 ) + (yi − yi−1 ) − yi (1.21)
(1 + xi ) 6 h 1 + x2i

Taking h = 1.x0 = 0, x1 = 1, x2 = 2, [from interval [0, 2]]


Now, equation (1.20) and (1.21) reduced to

M0 = −2y0 (1.22)

and  
4x1 h 1 2
M1 = − 2
− (2M1 + M1 ) + (y2 − y2 ) − y1
(1 + x1 ) 6 h 1 + x21
1 1
M1 − M2 = y1 − 2y2 (1.23)
3 3
Again,  
4x1 h 1 2
M1 = − 2
(2M1 + M0 ) + (y1 − y0 ) − y1
(1 + x1 ) 6 h 1 + x21
5 1
M1 + M0 = −3y1 + 2y0 (1.24)
3 3
And (in interval [1, 2]) in (1.18)
 
4x2 h 1 2
M2 = − 2
(2M2 + M1 ) + (y2 − y1 ) − y2
(1 + x2 ) 6 h 1 + x22

23 4 8
M2 + M1 = −2y2 + y1 (1.25)
15 15 5

Applying the boundary condition, y0 = 1 and y2 = 0.2 for the equation from (1.22)
to (1.25) and we get

1 1 2 5 8
M0 = −2; M1 − M2 − y1 = − ; M1 + 3y1 = ;
3 3 3 3 3
4 23 8 2
M1 + M2 − y1 = − (1.26)
15 15 5 5
After solving all the equation in (1.26) we get,
184 58 96
M0 = −2, M1 = 295
= 0.6237, M2 = 295
= 0.1966, and y1 = 177
= 0.5423
The required solution is y1 = 0.5423.

Exercise

Using cubic splines, solve the boundary value problems:

(i)   d²y/dx² + y = 3x²,  bc's y(0) = 0, y(2) = 3.5, h = 1
(ii)  x d²y/dx² + y = 0,  bc's y(1) = 1, y(2) = 2,  h = 0.25
(iii) y'' = x + y,        bc's y(0) = 0, y(1) = 0,  h = 0.25
Lecture Note -4

M.sc. Structural Engineering


School of Engineering

By
Hem Raj Pandey
(Assistant Professor)
Pokhara University
Monday, May 25, 2020 07:08

1
Contents

1 Interpolation (Contd.)
  1.1 Double Interpolation
  1.2 Cubic Splines
  1.3 Least Square Method
      1.3.1 Curve fitting for the parabola y = a + bx + cx²
  1.4 First and Second Derivatives by Interpolation Polynomials
      1.4.1 Newton's Forward Difference Interpolation
      1.4.2 Newton's Backward Difference Interpolation
      1.4.3 Centre Difference Interpolation
Chapter 1

Interpolation Contd..

1.1 Double Interpolation


A function of two or more variables, the formula become complicated but a sim-
pler procedure is to interpolate with respect to the first variable keeping the others
constant and then interpolate with respect to the second variable and so on. This
method is numerical double interpolation. We can use

ˆ Method of linear interpolation.

ˆ Newton’s forward difference formula.

Example 1.1.1. Tabulate the values of function z(x, y) = x2 + y 2 − y for x =


0, 1, 2, 3, 4 and y = 0, 1, 2, 3, 4. Using this table of values, compute z(2.5, 3.5) by
numerical double interpolation.

Solution:
The value of the function for the given values of x and y are given in following table:
y→
x↓
0 1 2 3 4
0 0 0 2 6 12
1 1 1 3 7 13
2 4 4 6 10 16
3 9 9 11 15 21
4 16 16 18 22 28

3
1. Method of linear interpolation.
we know the formulae of linear interpolation is

z(x1 ) − z(x0 )
z(x) = z(x0 ) + (x − x0 )
x1 − x0

we interpolate with respect to x keeping y constant. i.e x = 2.5, using linear


interpolation
y z
0 6.5
1 6.5
2 8.5
3 12.5
4 18.5
Again we interpolate with respect to y using linear interpolation for y = 3.5
we get,

z = (12.5 + 18.5)/2 = 15.5

So z(2.5, 3.5) ≈ 15.5. From the function z = x^2 + y^2 - y, the exact value is
z(2.5, 3.5) = 15, so the error is 0.5 (about 3.3 percent).

2. Newton’s forward difference formula.


let the constant variable say x. And keeping x = 2.5, y = 3.5 as the near centre
of the set and the corresponding values are x = 1, 2, 3 and y = 2, 3, 4. So, using
Newton's forward difference formula, we have

At x=1
y z ∆z ∆2 z
2 3
4
3 7 2
6
4 13

4
At x=2
y z ∆z ∆2 z
2 6
4
3 10 2
6
4 16

At x=3
y z ∆z ∆2 z
2 11
4
3 15 2
6
4 21

y−y0 3.5−2
Here, p = h
= 1
= 1.5
We obtain

p(p − 1) 2
z(1, 3.5) = z0 + p∆z0 + ∆ z0
2!
(1.5)(0.5)
= 3 + (1.5)(4) + (2)
2
= 9.75
(1.5)(0.5)
z(2, 3.5) = 6 + (1.5)(4) + (2)
2
= 12.75
(1.5)(0.5)
z(3, 3.5) = 11 + (1.5)(4) + (2)
2
= 17.75

Therefore, we arrive at the following result,

5
At y=3.5
y z ∆z ∆2 z
1 9.75
3
2 12.75 2
5
3 17.75
Now define p = (2.5 - 1)/1 = 1.5 and use Newton's formula again:

z(2.5, 3.5) = 9.75 + (1.5)(3) + (1.5)(0.5)/2 (2) = 15
And from the functional relation, we get z(2.5, 3.5) = (2.5)2 + (3.5)2 − 3.5 = 15.
Hence no error in interpolation.

Exercise

i. Tabulate the values of function z(x, y) = x2 + y 2 + y for x = 0, 1, 2, 3, 4 and


y = 0, 1, 2, 3, 4. Using this table of values, compute z(2.5, 1.5) by numerical
double interpolation.

ii. Tabulate the values of function z(x, y) = x2 + y 2 + y for x = 0, 1, 2, and


y = 0, 1, 2,. Using this table of values, compute z(0.5, 0.5) and z(1.5, 1.5) by
numerical double interpolation.

iii. Tabulate the values of function z(x, y) = ex siny+y−0.1 for x = 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5
and y = 0.1, 0.2, 0.3, 0.4, 0.5, 0.5. Using this table of values, compute z(1.6, 0.33)
by numerical double interpolation.

1.2 Cubic Splines


A spline is a piecewise polynomial used in place of a single high-order polynomial
fitted over the entire interval. A function S(x) is a spline of degree k on [a, b],
with a = x0 < x1 < x2 < ··· < xn = b, where




 S0 (x), x0 ≤ x ≤ x1



S1 (x), x1 ≤ x ≤ x2

S(x) =
..
.,







xn−1 ≤ x ≤ xn

S (x),
n

where Si (x) ∈ P k , P k is a polynomial of order k.

Definition 1.2.1. A cubic spline is a piecewise cubic polynomial that is twice con-
tinuously differentiable. Given the data points (xi , yi ), i = 0, 1, 2, · · · n to determine
a function f (x) such that

i. Si (x) = yi , i = 0, 1, 2, · · · n

ii. S(x) is cubic in each subinterval (xi, x_{i+1}) for i = 0, 1, 2, ..., (n - 1), and

iii. S(x) ∈ C 2 [x0 , xn ] are satisfied.

Theorem 1. If x0 , x1 , x2 , · · · , xn are given set of observations with common differ-


ence h and let y0 , y1 , y2 , · · · , yn are their corresponding values where,

i. y = S(x) is linear polynomial outside the interval [x0 , xn ].

ii. S(x) is cubic polynomial in each subinterval.

iii. S 0 (x) and S 00 (x) are continuous at each points.

then,

1 (xi+1 − x)3 (x − xi )3
 
S(x) = Mi + Mi+1
hi 6 6
h2i h2i
   
(xi+1 − x) (x − xi )
+ yi − Mi + yi+1 − Mi+1
hi 6 hi 6
Where,
6
Mi−1 + 4Mi + Mi+1 = (yi−1 − 2yi + yi+1 ), i = 1, 2, 3, · · · , (n − 1)
h2
Proof:

7
Let us consider xi which are
arbitrarily placed, such that
xi+1 − xi = hi and S(x) is
cubic polynomial in each sub-
intervals, S 0 (x) and S 00 (x) are
continuous at each point of in-
terval (xi , xi+1 ). From the fig-
ure, the slope passing through
AB and BC are equal.i.e.
Figure 1.1: Cubic spline interpolation
S 00 (x) − S 00 (xi ) S 00 (xi+1 ) − S 00 (xi )
=
x − xi xi+1 − xi
1
S 00 (x) = S 00 (xi ) + [(x − xi )S 00 (xi+1 ) − (x − xi )s0 (xi )]
hi
1
= [(xi+1 − x)S 00 (xi ) + (x − xi )s00 (xi+1 )]
hi
1
S 00 (x) = [(xi+1 − x)Mi + (x − xi )Mi+1 ] (1.1)
hi
Where, S 00 (xi ) = Mi
Integrating (1.1) twice with respect to x , we get

1 (xi+1 − x)3 (x − xi )3
 
S(x) = Mi + Mi+1 + Ci (xi+1 − x) + di (x − xi ) (1.2)
hi 6 6

where Ci and di are constant. When S(xi ) = yi and S(xi+1 ) = yi+1 .


Then equation (1.2) changes as;

1 (xi+1 − xi )3
 
yi = Mi + Ci (xi+1 − xi )
hi 6

h2i
 
1
∴ Ci = yi − Mi
hi 6
and

1 (xi+1 − xi )3
 
yi+1 = Mi+1 + di (xi+1 − xi )
hi 6
h2i
 
1
∴ di = yi+1 − Mi+1
hi 6

8
Putting the value of Ci and di in (1.2) we get in interval (xi , xi+1 ).

1 (xi+1 − x)3 (x − xi )3
 
S(x) = Mi + Mi+1
hi 6 6
h2i
  
xi+1 − xi
+ yi − Mi
hi 6
h2i
  
x − xi
+ yi+1 − Mi+1 (1.3)
hi 6
This is cubic spline interpolation function. Where Mi0 s are constant.
Now, differentiating (1.3) with respect to x, we get

(xi+1 − x)2 (x − xi )2
 
0 1
S (x) = − Mi + Mi+1
hi 2 2
h2i h2i
   
1
− yi − Mi + yi+1 − Mi+1 (1.4)
hi 6 6
When x = xi , in equation (1.4), we get
hi 1
S 0 (xi , +) = − (2Mi + Mi+1 ) + (yi+1 − yi ) (1.5)
6 hi
When x = xi−1 , in interval (xi−1 , xi ). (we change equation (1.4) in interval (xi−1 , xi ))
hi−1 1
S 0 (xi , −) = (2Mi + Mi−1 ) + (yi − yi−1 ) (1.6)
6 hi−1
Form the definition of continuity S 0 (xi , +) = S 0 (xi , −), which implies that

 
yi+1 − yi yi − yi−1
hi−1 Mi−1 + 2(hi−1 + hi )Mi + hi Mi+1 =6 − (1.7)
hi hi−1
where, i = 1, 2, 3, · · · , (n − 1)
Since, we have equally -spaced knots,i.e, xi = x0 + ih, i = 1, 2, 3, · · · , n, for the
equation (1.7) using hi = hi−1 = h, which implies that,
 
yi+1 − yi − yi + yi−1
hMi−1 + 4hMi + hMi+1 = 6
h
 
6 yi+1 − 2yi + yi−1
Mi−1 + 4Mi + Mi+1 = 2 , for i = 1, 2, 3, · · · , (n − 1) (1.8)
h h
Which is the required condition for finding the value of M1 , M2 , · · · , Mn−1 .

Example 1.2.1. Using cubic spline interpolation technique estimate y(1.5) and y 0 (3)
x 1 2 3 4
from the following data.
y 1 2 5 11

9
Solution:
Here, h = 1 and n = 3 the cubic spline of the function in the interval xi ≤ x ≤ xi+1
is

1 1
s(x) = Mi (xi+1 − xi )3 + Mi+1 (x − xi )3
6h 6h
   
yi h yi+1 h
+ − Mi (xi+1 − x) + − Mi+1 (x − xi )
h 6 h 6

where Mi0 s as
6
Mi−1 + 4Mi + Mi+1 = (yi−1 − 2yi + yi+1 )
h2
when i = 1 and using M0 = 0
6
M0 + 4M1 + M2 = (y
h2 0
− 2y)1 + y2 ) = 12,

4M1 + M2 = 12 (1.9)

6
When i = 2 and M3 = 0, M1 + 4M2 + M3 = (y
h2 1
− 2y2 + y3 ) = 18.

M1 + 4M2 = 18 (1.10)

Solving (1.9) and (1.10) we get, M1 = 2, M2 = 4. Now, cubic spline function in


1 ≤ x ≤ 2 at i = 0 is

1 1
S(x) = M0 (x1 − x)3 + M1 (x − x0 )3
6 6
y0 1 y1 1
( − M0 (x1 − x)) + ( − M1 )(x − x0 )
1 6 1 6

Putting x0 = 1, x1 = 2, y0 = 1, y1 = 2, M0 = 0, M1 = 2, and M2 = 4

1 1
S(x) = 0 + (2)(x − 1)3 + 1(2 − x) + (2 − (2))(x − 1)
6 6
1 3 5
= (x − 1) + 1(2 − x) + (x − 1)
3 3
1 3 2
= (x − 3x + 5x)
3

when i = 1 at 2 ≤ x ≤ 3 is

1 1
S(x) = M1 (x2 − x)3 + M2 (x − x1 )3
6 6
y1 1 y2 1
( − M1 (x2 − x)) + ( − M2 )(x − x1 )
1 6 1 6

10
Putting x1 = 2, x2 = 3, y2 = 5, y1 = 2, M0 = 0, M1 = 2, and M2 = 4

1 1 1 1
S(x) = (2)(3 − x)3 + (4)(x − 2)3 + (2 − (2))(3 − x) + (5 − (4))(x − 2)
6 6 6 6
1 3
= (x − 3x2 + 5x)
3

when i = 2 at 3 ≤ x ≤ 4 is

1
S(x) = (−2x3 + 24x2 − 76x + 81)
3

Hence the cubic spline function is



1
(x3 − 3x2 + 5x), 1 ≤ x ≤ 2


3



S(x) = 1 (x3 − 3x2 + 5x), 2 ≤ x ≤ 3
 3


 1 (−2x3 + 24x2 − 76x + 81), 3 ≤ x ≤ 4


3

Thus S(1.5) = (1/3)((1.5)^3 - 3(1.5)^2 + 5(1.5)) = 1.375 and S'(3) = 4.67.

Example 1.2.2. Solve the boundary value problem governed by the differential equa-
tion.
4x 0 2
y 00 + 2
y + y=0
1+x 1 + x2
and bc’s y(0) = 1, y(2) = 0.2. Using cubic spline method at h = 1.

Solution:
Let y 00 (xi ) = S 00 (xi ) = Mi , So the given differential equation can be written as,

4xi 0 2
Mi = − y −
2 i
yi (1.11)
1 + xi 1 + x2i

Sinceh = 1, so divided the interval in [0, 1] and [1, 2] and we know for cubic spline.

h 1
S 0 (xi , +) = − (2Mi + Mi+1 ) + (yi+1 − yi ) (1.12)
6 h
h 1
S 0 (xi , −) = (2Mi + Mi−1 ) + (yi − yi−1 ) (1.13)
6 h
We get in interval [0, 1].
 
4xi h 1 2
Mi = − 2
− (2Mi + Mi+1 ) + (yi+1 − yi ) − yi (1.14)
(1 + xi ) 6 h 1 + x2i

11
and in interval [1, 2]
 
4xi h 1 2
Mi = − 2
(2Mi + Mi−1 ) + (yi − yi−1 ) − yi (1.15)
(1 + xi ) 6 h 1 + x2i

Taking h = 1.x0 = 0, x1 = 1, x2 = 2, [from interval [0, 2]]


Now, equation (1.14) and (1.15) reduced to

M0 = −2y0 (1.16)

and  
4x1 h 1 2
M1 = − 2
− (2M1 + M1 ) + (y2 − y2 ) − y1
(1 + x1 ) 6 h 1 + x21
1 1
M1 − M2 = y1 − 2y2 (1.17)
3 3
Again,  
4x1 h 1 2
M1 = − (2M1 + M0 ) + (y 1 − y 0 ) − y1
(1 + x21 ) 6 h 1 + x21
5 1
M1 + M0 = −3y1 + 2y0 (1.18)
3 3
And (in interval [1, 2]) in (1.12)
 
4x2 h 1 2
M2 = − 2
(2M2 + M1 ) + (y2 − y1 ) − y2
(1 + x2 ) 6 h 1 + x22
23 4 8
M2 + M1 = −2y2 + y1 (1.19)
15 15 5
Applying the boundary condition, y0 = 1 and y2 = 0.2 for the equation from (1.16)
to (1.19) and we get

1 1 2 5 8
M0 = −2; M1 − M2 − y1 = − ; M1 + 3y1 = ;
3 3 3 3 3
4 23 8 2
M1 + M2 − y1 = − (1.20)
15 15 5 5
After solving all the equation in (1.20) we get,
184 58 96
M0 = −2, M1 = 295
= 0.6237, M2 = 295
= 0.1966, and y1 = 177
= 0.5423
The required solution is y1 = 0.5423.

12
Exercise

Using cubic spline , solve the boundary value problem.

d2 y
(i) dx2
+ y = 3x2 , bc’s, y(0) = 0, y(2) = 3.5, h = 1

d y2
(ii) x dx2 + y = 0, bc’s, y(1) = 1, y(2) = 2, h = 0.25

(iii) y 00 = x + y, bc’s, y(0) = 0, y(1) = 0, h = 0.25

1.3 Least Square Method

Let the equation to fit the data be

y = a + bx

and let the error at the point (xi, yi) be

e = Σ (yi - a - b xi)                                                         (1.21)

and the sum of squared errors be

E = Σ (yi - a - b xi)²                                                        (1.22)

The technique of minimising the sum of the squares of the errors is known as the
least-squares method. Equation (1.22) is minimised when

∂E/∂a = -2 Σ (yi - a - b xi) = 0
∂E/∂b = -2 Σ (yi - a - b xi) xi = 0

which implies that

n a + (Σ xi) b = Σ yi                                                         (1.23)
(Σ xi) a + (Σ xi²) b = Σ xi yi                                                (1.24)
Thus the solution is

b = (n Σ xi yi - Σ xi Σ yi) / (n Σ xi² - (Σ xi)²)
a = (Σ xi² Σ yi - Σ xi yi Σ xi) / (n Σ xi² - (Σ xi)²)

If we put Sx = Σ xi, Sy = Σ yi, Sxy = Σ xi yi and Sxx = Σ xi², we get

a = (Sxx Sy - Sxy Sx) / (n Sxx - Sx²)
b = (n Sxy - Sx Sy) / (n Sxx - Sx²)

These are the values of a and b in the equation y = a + bx that give the best fit to
the n data points (xi, yi). The same relations can be used for other non-linear
equations after a suitable transformation:

- y = b x^m:        ln y = m ln x + ln b
- y = b e^{mx}:     ln y = m x + ln b
- y = b 10^{mx}:    log y = m x + log b
- y = 1/(m x + b):  1/y = m x + b
- y = m x/(b + x):  1/y = b/(m x) + 1/m
Example 1.3.1. Applying the method of least squares find an equation of the form
y = ax + bx2 that fits the following data:

x 1 2 3 4 5 6
y 2.6 5.4 8.7 12.1 16.0 20.0

Solution:
The required curve is y = a x + b x², which can be written as y/x = a + b x. Let
Y = y/x; then Y = a + b x and the corresponding data are:

x 1 2 3 4 5 6
Y 2.6 2.7 2.9 3.025 3.2 3.367
And the corresponding normal equations are
X X X
b (xi )2 + a xi = xi Y i

14
X X
b xi + na = Yi

We have,
x Y xY x2
1 2.6 2.6 1
2 2.7 5.4 4
3 2.9 8.7 9
4 3.025 12.1 16
5 3.2 16.0 25
6 3.367 20.2 36
Σxi = 21 ΣYi =17.792 Σxi Yi =65.0 Σ(xi )2 =91
Putting these value in normal equation, we get

91b + 21a = 65 (1.25)

21b + 6a = 17.792 (1.26)

solving (1.25) and (1.26) we get b = 0.15589, a = 2.41973. Hence, the required
equation of fit for the given data is

Y = 0.15589x + 2.41973

,
y = 0.15589x2 + 2.41973x

1.3.1 Curve fitting for parabola y = a + bx + cx2

The normal equation of parabola is

Σy = na + bΣx + cΣx2

Σxy = aΣx + bΣx2 + cΣx3

Σx2 y = aΣx2 + bΣx3 + cΣx4
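These normal equations can be assembled and solved numerically, for instance with numpy as sketched below; this is illustrative, not part of the notes, and the data of Problem 1 can be used to check a hand computation.

```python
import numpy as np

def fit_parabola(xs, ys):
    x = np.asarray(xs, float)
    y = np.asarray(ys, float)
    n = len(x)
    # Normal equations for y = a + b x + c x^2
    A = np.array([[n,            x.sum(),       (x**2).sum()],
                  [x.sum(),      (x**2).sum(),  (x**3).sum()],
                  [(x**2).sum(), (x**3).sum(),  (x**4).sum()]])
    rhs = np.array([y.sum(), (x * y).sum(), (x**2 * y).sum()])
    return np.linalg.solve(A, rhs)        # returns (a, b, c)

# Data of Problem 1 below
xs = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
ys = [1.1, 1.3, 1.6, 2.0, 2.7, 3.4, 4.1]
print(fit_parabola(xs, ys))
```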

Problem 1. State the normal equation for fitting a second degree parabola a+bx+cx2
to the given data. And used it to fit the following data:

x 1.0 1.5 2.0 2.5 3.0 3.5 4.0


y 1.1 1.3 1.6 2.0 2.7 3.4 4.1

15
1.4 First and second Derivative by interpolation
polynomials
3

1.4.1 Newton’s Forward difference Interpolation

Here,
p(p − 1) 2 p(p − 1) 3
y = y0 + p∆y0 + ∆ y0 + ∆ y0 + · · · (1.27)
2! 3!
x−x0
where p = h

Differentiating (1.27) with respect to p is


dy 2p − 1 2 3p2 − 6p + 2 3
= ∆y0 + ∆ y0 + ∆ y0 + · · · , (1.28)
dp 2! 3!
And
dp 1
= (1.29)
dx h
So, from equation (1.28) and (1.29) we get

dy dy dp
=
dx dp dx
3p2 − 6p + 2 3
 
1 2p − 1 2
= ∆y0 + ∆ y0 + ∆ y0 + · · · , (1.30)
h 2! 3!
dy
From (1.30) gives the value of dx
at any x which is not tabulated.
when x = x0 , then p = 0
   
dy 1 1 2 1 3
= ∆y0 − ∆ y0 + ∆ y0 − · · · , (1.31)
dx x=x0 h 2 3

Differentiating (1.30) with respect to x is

d2 y
 
d dy dp
=
dx2 dp dx dx
6p2 − 18p + 11 4
 
1 2 3
= 2 ∆ y0 + (p − 1)∆ y0 + ∆ y0 + · · · , (1.32)
h 12
when x = x0 , then p = 0
 2   
dy 1 2 3 11 4
= 2 ∆ y0 − ∆ y0 + ∆ y0 − · · · , (1.33)
dx2 x=x0 h 12
3
Hem Raj Pandey, Assistant Professor, Pokhara University

16
1.4.2 Newton’s Backward difference Interpolation
y = yn + p ∇yn + p(p + 1)/2! ∇²yn + p(p + 1)(p + 2)/3! ∇³yn + ···             (1.34)

where p = (x - xn)/h.

3p2 + 6p + 2 3
 
dy 1 2p + 1 2
= ∇yn + ∇ yn + ∇ yn + · · · (1.35)
dx h 2! 3!
when x = xn , then p = 0
   
dy 1 1 2 1 3 1 4
= ∇yn + ∇ yn + ∇ yn + ∇ yn + · · · , (1.36)
dx x=xn h 2 3 4

d2 y
 
d dy dp
2
=
dx dp dx dx
6p2 + 18p + 11 4
 
1 2 3
= 2 ∇ yn + (p + 1)∇ yn + ∇ yn + · · · , (1.37)
h 12

So,
d2 y
   
1 2 3 11 4
= 2 ∇ yn + ∇ yn + ∇ yn + · · · , (1.38)
dx2 x=xn h 12

1.4.3 Centre difference Interpolation

p p2
y = y0 + (∆y0 + ∆y−1 ) + ∆2 y−1
2 2
p3 − p 3 p4 − p 2 4
+ (∆ y−1 + ∆3 y−2 ) + ∆ y−2 + · · · (1.39)
12 24
x−x0
where p = h
, Differentiating (1.39) with respect to x,
 
dy 1 1 2
= (∆y0 + ∆y−1 ) + p∆ y−1
dx h 2
3p2 − 1 3
 
1 3 1 3 4
+ (∆ y−1 + ∆ y−2 ) + (2p − p)∆ y−2 +
h 12 2
 4 2

1 5p − 15p + 4 5 5
(∆ y−2 + ∆ y−3 ) + · · · (1.40)
h 240
   
dy 1 1 1 3 3 1 5 5
= (∆y0 + ∆y−1 ) − (∆ y−1 + ∆ y−2 ) + (∆ y−2 + ∆ y−3 ) + · · ·
dx x=x0 h 2 12 60
(1.41)

17
and

d2 y 1 h 2 p 3 3
i
= ∆ y−1 + (∆ y −1 + ∆ y−2 )
dx2 h2 2
6p2 + 1 4
 
1
+ ∆ y−2 +
h2 12
1 20p3 − 30p (∆5 y−2 + ∆5 y−3 )
   
+ ··· (1.42)
h2 5! 2

d2 y
   
1 2 1 4 1 6
= 2 ∆ y−1 − ∆ y−2 + ∆ y−3 − · · · (1.43)
dx2 x=x0 h 12 9
Example 1.4.1. A rod is rotating in a plane. The following table gives the angle θ
(in radians) through which the rod has turned for various values of the time t in
seconds.

t  0  0.2   0.4   0.6   0.8   1.0   1.2
θ  0  0.12  0.49  1.12  2.02  3.20  4.67

Calculate the angular velocity and the angular acceleration of the rod when t = 0.6
seconds.

Solution:
We know that the angular velocity is dθ/dt and the angular acceleration is d²θ/dt².
The difference table is
.
The difference table is

t θ ∆θ ∆2 θ ∆3 θ ∆4 θ ∆5 θ ∆6 θ
0 0
0.12
0.2 0.12 0.25
0.37 0.01
0.4 0.49 0.26 0
0.63 0.01 0
0.6 1.12 0.27 0 0
0.90 0.01 0
0.8 2.02 0.28 0
1.18 0.01
1.0 3.20 0.29
1.47
1.2 4.67

18
As the derivative are required near the middle of the table we use the centre difference
interpolation is

   
dθ 1 1 1 3 3 1 5 5
= (∆θ0 + ∆θ−1 ) − (∆ θ−1 + ∆ θ−2 ) + (∆ θ−2 + ∆ θ−3 ) + · · ·
dt t=t0 h 2 12 60
 2    (1.44)
dθ 1 1 1
= 2 ∆2 θ−1 − ∆4 θ−2 + ∆6 θ−3 − · · · (1.45)
dt2 t=t0 h 2 9
Here,
h = 0.2, t0 = 0.6, θ0 = 1.12, Δθ0 = 0.9, Δθ−1 = 0.63, Δ²θ−1 = 0.27, Δ³θ−1 = 0.01,
Δ³θ−2 = 0.01, and so on.
From, (1.44)
 
dθ 1 1 1
= [ (0.9 + 0.63) − (0.01 + 0.01) + 0] (1.46)
dt t=0.6 0.2 2 12
1
= [0.765 − 0.00166] (1.47)
0.2
= 3.8167 (1.48)

From (1.45)

d2 θ
 
1 1 1
= [0.27 − (0) + (0)] (1.49)
dt2 t=0.6 0.04 2 90
0.27
= (1.50)
0.04
= 6.75 (1.51)

Hence the required angular velocity is 3.8167 radian/sec and the angular acceleration is 6.75 radian/sec².
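The same computation can be checked with a short Python sketch that builds the forward-difference table and evaluates the leading terms of formulas (1.44) and (1.45); the helper names are illustrative and not part of the notes.

```python
def forward_differences(y):
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

t  = [0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2]
th = [0, 0.12, 0.49, 1.12, 2.02, 3.20, 4.67]
h, i0 = 0.2, t.index(0.6)                  # differentiate about t0 = 0.6
d = forward_differences(th)                # d[k][j] = Δ^k θ_j

velocity = (0.5 * (d[1][i0] + d[1][i0 - 1])
            - (d[3][i0 - 1] + d[3][i0 - 2]) / 12.0) / h
acceleration = (d[2][i0 - 1] - d[4][i0 - 2] / 12.0) / h**2
print(velocity, acceleration)              # ≈ 3.82 rad/s and 6.75 rad/s^2
```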

Problem 2. A slider in a machine moves along a fixed straight rod. Its distance x (in
cm) along the rod is given below for various values of the time t (in seconds). Find
the velocity and acceleration of the slider when t = 0.3 and t = 0.6.

t 0 0.1 0.2 0.3 0.4 0.5 0.6


x 30.13 31.62 32.87 33.64 33.95 33.81 33.24

Problem 3. Find the value of f 0 (8) and f 00 (9). From the following data using ap-
propriate interpolating formula.

19
x 4 5 7 10 11
f(x) 48 100 294 900 1210
Hint’s: Since, the value of x are not equally spaced, we use the interpolation
formula for unequal interval. we use Newton’s divided difference formulation.

4
Hem Raj Pandey, Assistant Professor, Pokhara University

20
Lecture Note -5

M.sc. Structural Engineering


School of Engineering

By
Hem Raj Pandey
(Assistant Professor)
Pokhara University
Monday, June 1, 2020

1
Contents

1 Numerical Integration
  1.1 First and Second Derivatives by Interpolation Polynomials
      1.1.1 Newton's Forward Difference Interpolation
      1.1.2 Newton's Backward Difference Interpolation
      1.1.3 Centre Difference Interpolation
  1.2 Newton-Cotes (Quadrature) Formula
  1.3 Error Analysis
      1.3.1 Error in the Trapezoidal Rule
      1.3.2 Error in Simpson's 1/3 Rule
      1.3.3 Error in Simpson's 3/8 Rule
  1.4 Euler-Maclaurin Formula
Chapter 1

Numerical Integration

1.1 First and second Derivative by interpolation


polynomials
1

1.1.1 Newton’s Forward difference Interpolation

Here,
p(p − 1) 2 p(p − 1) 3
y = y0 + p∆y0 + ∆ y0 + ∆ y0 + · · · (1.1)
2! 3!
x−x0
where p = h

Differentiating (1.1) with respect to p is

dy 2p − 1 2 3p2 − 6p + 2 3
= ∆y0 + ∆ y0 + ∆ y0 + · · · , (1.2)
dp 2! 3!

And
dp 1
= (1.3)
dx h
So, from equation (1.2) and (1.3) we get

dy dy dp
=
dx dp dx
3p2 − 6p + 2 3
 
1 2p − 1 2
= ∆y0 + ∆ y0 + ∆ y0 + · · · , (1.4)
h 2! 3!
1
Hem Raj Pandey, Assistant Professor, Pokhara University

3
dy
From (1.4) gives the value of dx
at any x which is not tabulated.
when x = x0 , then p = 0
   
dy 1 1 2 1 3
= ∆y0 − ∆ y0 + ∆ y0 − · · · , (1.5)
dx x=x0 h 2 3

Differentiating (1.4) with respect to x is

d2 y
 
d dy dp
2
=
dx dp dx dx
6p2 − 18p + 11 4
 
1 2 3
= 2 ∆ y0 + (p − 1)∆ y0 + ∆ y0 + · · · , (1.6)
h 12

when x = x0 , then p = 0
 2   
dy 1 2 3 11 4
= 2 ∆ y0 − ∆ y0 + ∆ y0 − · · · , (1.7)
dx2 x=x0 h 12

1.1.2 Newton’s Backward difference Interpolation


y = yn + p ∇yn + p(p + 1)/2! ∇²yn + p(p + 1)(p + 2)/3! ∇³yn + ···              (1.8)

where p = (x - xn)/h.

3p2 + 6p + 2 3
 
dy 1 2p + 1 2
= ∇yn + ∇ yn + ∇ yn + · · · (1.9)
dx h 2! 3!
when x = xn , then p = 0
   
dy 1 1 2 1 3 1 4
= ∇yn + ∇ yn + ∇ yn + ∇ yn + · · · , (1.10)
dx x=xn h 2 3 4

d2 y
 
d dy dp
2
=
dx dp dx dx
6p2 + 18p + 11 4
 
1 2 3
= 2 ∇ yn + (p + 1)∇ yn + ∇ yn + · · · , (1.11)
h 12

So,
d2 y
   
1 2 3 11 4
= 2 ∇ yn + ∇ yn + ∇ yn + · · · , (1.12)
dx2 x=xn h 12

1.1.3 Centre difference Interpolation

p p2
y = y0 + (∆y0 + ∆y−1 ) + ∆2 y−1
2 2
p3 − p 3 p4 − p 2 4
+ (∆ y−1 + ∆3 y−2 ) + ∆ y−2 + · · · (1.13)
12 24
x−x0
where p = h
, Differentiating (1.13) with respect to x,
 
dy 1 1 2
= (∆y0 + ∆y−1 ) + p∆ y−1
dx h 2
3p2 − 1 3
 
1 3 1 3 4
+ (∆ y−1 + ∆ y−2 ) + (2p − p)∆ y−2 +
h 12 2
 4 2

1 5p − 15p + 4 5 5
(∆ y−2 + ∆ y−3 ) + · · · (1.14)
h 240
   
dy 1 1 1 3 3 1 5 5
= (∆y0 + ∆y−1 ) − (∆ y−1 + ∆ y−2 ) + (∆ y−2 + ∆ y−3 ) + · · ·
dx x=x0 h 2 12 60
(1.15)
and

d2 y 1 h 2 p 3 3
i
= ∆ y−1 + (∆ y −1 + ∆ y−2 )
dx2 h2 2
6p2 + 1 4
 
1
+ ∆ y−2 +
h2 12
1 20p3 − 30p (∆5 y−2 + ∆5 y−3 )
   
+ ··· (1.16)
h2 5! 2

d2 y
   
1 2 1 4 1 6
= 2 ∆ y−1 − ∆ y−2 + ∆ y−3 − · · · (1.17)
dx2 x=x0 h 12 9
Example 1.1.1. A rod is rotating in a plane. The following table gives the angle θ (in radians) through which the rod has turned for various values of the time t in seconds.
t 0 0.2 0.4 0.6 0.8 1.0 1.2
θ 0 0.12 0.49 1.12 2.02 3.20 4.67
Calculate the angular velocity and the angular acceleration of the rod when t = 0.6 seconds.

Solution:
We know that the angular velocity is dθ/dt and the angular acceleration is d²θ/dt².

The difference table is

t θ ∆θ ∆2 θ ∆3 θ ∆4 θ ∆5 θ ∆6 θ
0 0
0.12
0.2 0.12 0.25
0.37 0.01
0.4 0.49 0.26 0
0.63 0.01 0
0.6 1.12 0.27 0 0
0.90 0.01 0
0.8 2.02 0.28 0
1.18 0.01
1.0 3.20 0.29
1.47
1.2 4.67

As the derivative are required near the middle of the table we use the centre difference
interpolation is

   
(dθ/dt)_{t=t0} = (1/h)[ (1/2)(∆θ0 + ∆θ−1) − (1/12)(∆³θ−1 + ∆³θ−2) + (1/60)(∆⁵θ−2 + ∆⁵θ−3) + ··· ]        (1.18)

(d²θ/dt²)_{t=t0} = (1/h²)[ ∆²θ−1 − (1/12)∆⁴θ−2 + (1/90)∆⁶θ−3 − ··· ]        (1.19)
Here,
h = 0.2, t0 = 0.6, θ0 = 1.12, ∆θ0 = 0.90, ∆θ−1 = 0.63, ∆²θ−1 = 0.27, ∆³θ−1 = 0.01, ∆³θ−2 = 0.01, and the higher-order differences are zero.
From (1.18),

(dθ/dt)_{t=0.6} = (1/0.2)[ (1/2)(0.90 + 0.63) − (1/12)(0.01 + 0.01) + 0 ]        (1.20)
              = (1/0.2)[ 0.765 − 0.00166 ]        (1.21)
              = 3.8167        (1.22)

From (1.19),

(d²θ/dt²)_{t=0.6} = (1/0.04)[ 0.27 − (1/12)(0) + (1/90)(0) ]        (1.23)
               = 0.27/0.04        (1.24)
               = 6.75        (1.25)

Hence the required angular velocity is 3.8167 radian/sec and the angular acceleration is 6.75 radian/sec².
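As a cross-check, the short Python sketch below (not part of the original note) rebuilds the forward-difference table from the tabulated data and evaluates formulas (1.18) and (1.19) at t = 0.6; the variable names and the small helper D are illustrative choices.

```python
# Sketch: angular velocity and acceleration from the difference table at t0 = 0.6.
t = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2]
theta = [0.0, 0.12, 0.49, 1.12, 2.02, 3.20, 4.67]
h = 0.2

# Forward-difference table: diff[k][i] = Delta^k theta_i
diff = [theta[:]]
while len(diff[-1]) > 1:
    prev = diff[-1]
    diff.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])

i0 = t.index(0.6)          # index of t0 (near the middle of the table)

def D(k, i):
    """Return Delta^k theta_i, or 0 if it falls outside the table."""
    return diff[k][i] if 0 <= i < len(diff[k]) else 0.0

# Formula (1.18): first derivative at t0
velocity = (1.0 / h) * (0.5 * (D(1, i0) + D(1, i0 - 1))
                        - (D(3, i0 - 1) + D(3, i0 - 2)) / 12.0
                        + (D(5, i0 - 2) + D(5, i0 - 3)) / 60.0)

# Formula (1.19): second derivative at t0
acceleration = (1.0 / h ** 2) * (D(2, i0 - 1)
                                 - D(4, i0 - 2) / 12.0
                                 + D(6, i0 - 3) / 90.0)

print(velocity, acceleration)   # about 3.8167 rad/s and 6.75 rad/s^2
```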

Problem 1. A slider in a machine moves along a fixed straight rod. Its distance x cm along the rod is given below for various values of the time t seconds. Find the velocity and acceleration of the slider when t = 0.3 and t = 0.6.

t 0 0.1 0.2 0.3 0.4 0.5 0.6


x 30.13 31.62 32.87 33.64 33.95 33.81 33.24

Problem 2. Find the values of f′(8) and f″(9) from the following data using an appropriate interpolating formula.

x 4 5 7 10 11
f(x) 48 100 294 900 1210
Hint: Since the values of x are not equally spaced, we use the interpolation formula for unequal intervals, i.e. Newton's divided difference formula.

1.2 Newton-Cotes (Quadrature) formula


Here,

∫_a^b f(x) dx ≃ ∫_a^b Pn(x) dx

Let us divide the interval [a, b] into n sub-intervals [x0, x1], [x1, x2], ..., [x_{n−1}, x_n], each of length h = (b − a)/n, so that x0 = a, x1 = x0 + h, x2 = x0 + 2h, ..., x_n = x0 + nh = b. Then,

∫_a^b f(x) dx = ∫_{x0}^{x0+nh} f(x) dx        (1.26)

Considering Newton's forward interpolation formula as

p(p − 1) 2 p(p − 1)(p − 2) 3


f (x) = y0 + p∆y0 + ∆ y0 + ∆ y0
2! 3!
p(p − 1)(p − 2)(p − 3) 4 p(p − 1)(p − 2)(p − 3)(p − 4) 5
+ ∆ y0 + ∆ y0
4! 5!
p(p − 1)(p − 2)(p − 3)(p − 4)(p − 5) 6
+ ∆ y0 + · · · (1.27)
6!

where x = x0 + ph,dx = hdp,when x = 0, p = 0 and x = x0 + nh, p = n. So, putting


x = x0 + ph in equation (1.26), we get

Z b Z n
f (x)dx = h f (x0 + ph)dp
a 0
Z n
p(p − 1) 2
=h y0 + p∆y0 + ∆ y0
0 2!
p(p − 1)(p − 2) 3 p(p − 1)(p − 2)(p − 3) 4 (1.28)
+ ∆ y0 + ∆ y0
3! 4!
p(p − 1)(p − 2)(p − 3)(p − 4) 5
+ ∆ y0
5! 
p(p − 1)(p − 2)(p − 3)(p − 4)(p − 5) 6
+ ∆ y0 + · · · dp
6!

Integration term by term and after substituting the limits, we get


Z b 
n n(2n − 3) 2
f (x)dx = nh y0 + ∆y0 + ∆ y0
a 2 12
 4 
n 3n3 11n2
n(n − 2) 32
5
− 2
+ 3
− 3n
+ ∆ y0 + ∆4 y0
24 24 (1.29)
n5 3 2
6
− 2n4 + 35n 4
− 50n
3
+ 12n 5
+ ∆ y0
120
n6 5 3 2
− 15n + 17n4 − 225n + 274n − 60n 6

7 6 4 3
+ ∆ y0 + · · · dp
720

The equation (1.29) is known as the Newton-Cotes general quadrature formula.

Now put n = 1 in equation (1.28) and taking the curve (xi , yi ); i = 0, 1 as poly-
nomial order of one.
we get from (1.28)

∫_{x0}^{x0+h} f(x) dx = h ∫_0^1 f(x0 + ph) dp = h( y0 + (1/2)∆y0 ) = (h/2)(y0 + y1)

and

∫_{x0+h}^{x0+2h} f(x) dx = h( y1 + (1/2)∆y1 ) = (h/2)(y1 + y2),

and so on up to the nth sub-interval, so that

∫_{x0}^{x0+nh} f(x) dx = (h/2)[ (y0 + yn) + 2(y1 + y2 + ··· + y_{n−1}) ]        (1.30)
The equation (1.30) is known as Trapezoidal rule.
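The composite rule (1.30) translates directly into code. The following is a minimal Python sketch, not taken from the note; the test integrand and interval are illustrative choices.

```python
# Sketch: composite trapezoidal rule over n sub-intervals of [a, b].
def trapezoidal(f, a, b, n):
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    return (h / 2.0) * (y[0] + y[n] + 2.0 * sum(y[1:n]))

# Illustration: integral of 1/(1+x^2) over [0, 1] (exact value pi/4 = 0.785398...)
print(trapezoidal(lambda x: 1.0 / (1.0 + x * x), 0.0, 1.0, 100))
```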
Put n = 2 in equation (1.28) and taking the curve (xi , yi ); i = 0, 1, 2 as polyno-
mial order of two.
we get from (1.28)

∫_{x0}^{x0+2h} f(x) dx = h ∫_0^2 f(x0 + ph) dp
                     = h ∫_0^2 [ y0 + p∆y0 + (p(p−1)/2!)∆²y0 ] dp
                     = h[ 2y0 + 2∆y0 + (1/3)∆²y0 ]
                     = (h/3)(y0 + 4y1 + y2)

and

∫_{x0+2h}^{x0+4h} f(x) dx = (h/3)(y2 + 4y3 + y4),  ...,  ∫_{x0+(n−2)h}^{x0+nh} f(x) dx = (h/3)(y_{n−2} + 4y_{n−1} + yn),

and so on up to the nth sub-interval, so that

∫_{x0}^{x0+nh} f(x) dx = (h/3)[ (y0 + yn) + 4(y1 + y3 + ··· + y_{n−1}) + 2(y2 + y4 + ··· + y_{n−2}) ]        (1.31)

The equation (1.31) is known as Simpson's 1/3 rule.
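A minimal Python sketch of the composite rule (1.31) is given below (again an illustration, not part of the note); it assumes an even number of sub-intervals.

```python
# Sketch: composite Simpson's 1/3 rule; n must be even.
def simpson_one_third(f, a, b, n):
    if n % 2 != 0:
        raise ValueError("Simpson's 1/3 rule needs an even number of sub-intervals")
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    odd = sum(y[1:n:2])    # y1 + y3 + ... + y_{n-1}, coefficient 4
    even = sum(y[2:n:2])   # y2 + y4 + ... + y_{n-2}, coefficient 2
    return (h / 3.0) * (y[0] + y[n] + 4.0 * odd + 2.0 * even)

print(simpson_one_third(lambda x: 1.0 / (1.0 + x * x), 0.0, 1.0, 10))  # ~0.785398
```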
Put n = 3 in equation (1.28) and taking the curve (xi , yi ); i = 0, 1, 2, 3 as poly-
nomial order of three.
we get from (1.28)

∫_{x0}^{x0+3h} f(x) dx = h ∫_0^3 f(x0 + ph) dp
                     = h ∫_0^3 [ y0 + p∆y0 + (p(p−1)/2!)∆²y0 + (p(p−1)(p−2)/3!)∆³y0 ] dp
                     = h[ 3y0 + (9/2)∆y0 + (9/4)∆²y0 + (3/8)∆³y0 ]
                     = (3h/8)(y0 + 3y1 + 3y2 + y3)

and

∫_{x0+3h}^{x0+6h} f(x) dx = (3h/8)(y3 + 3y4 + 3y5 + y6),  ...,  ∫_{x0+(n−3)h}^{x0+nh} f(x) dx = (3h/8)(y_{n−3} + 3y_{n−2} + 3y_{n−1} + yn),

and so on up to the nth sub-interval, so that

∫_{x0}^{x0+nh} f(x) dx = (3h/8)[ (y0 + yn) + 3(y1 + y2 + y4 + y5 + ··· + y_{n−1}) + 2(y3 + y6 + y9 + ··· + y_{n−3}) ]        (1.32)

The equation (1.32) is known as Simpson's 3/8 rule.
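Similarly, a hedged Python sketch of the composite rule (1.32); it assumes the number of sub-intervals is a multiple of 3, and the test integrand is illustrative.

```python
# Sketch: composite Simpson's 3/8 rule; n must be a multiple of 3.
def simpson_three_eighth(f, a, b, n):
    if n % 3 != 0:
        raise ValueError("Simpson's 3/8 rule needs n to be a multiple of 3")
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    not_mult3 = sum(y[i] for i in range(1, n) if i % 3 != 0)  # coefficient 3
    mult3 = sum(y[i] for i in range(3, n, 3))                 # coefficient 2
    return (3.0 * h / 8.0) * (y[0] + y[n] + 3.0 * not_mult3 + 2.0 * mult3)

print(simpson_three_eighth(lambda x: 1.0 / (1.0 + x * x), 0.0, 1.0, 9))
```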

1.3 Error Analysis

1.3.1 Error in Trapezoidal Rule

As n increases, the step size h = (b − a)/n approaches zero and the trapezoidal rule (T) approaches the exact value of ∫_a^b f(x) dx.
We know,
h
T = [(y0 + yn ) + 2(y1 + y2 + · · · + yn−1 )]
2

Let y = f (x) be the function in the interval [a, b] and T be the trapezoidal rule over
interval [a, b]. i.e

a = x0 < x0 + h < x0 + 2h < · · · < x0 + nh = b

and let error be E

Z b
E= y(x)dx − T
a
And the taylor’s series of function y(x) at x = x0 is
(x − x0 )2 00
y(x) = y(x0 ) + (x − x0 )y 0 (x0 ) + y (x0 ) + · · · (1.33)
2!
Applying x = x0 + h, y = y(x0 + h) = y1 in equation (1.33) we get,
h2 00
y1 = y0 + hy00 + y + ··· (1.34)
2! 0
So the area over the first strip by of the trapezoidal rule is as
h
A1 = [(y0 + y1 )]
2
h h2
= y0 + y0 + hy00 + y000 + · · ·

(1.35)
2 2!
h2 h3
= hy0 + y00 + y000 + · · ·
2 4
If y(x) is integrated over the interval [x0 , x0 + h] to (1.33)
Z x0 +h Z x0 +h
(x − x0 )2 00
y(x0 ) + (x − x0 )y 0 (x0 ) +
 
y(x)dx = y (x0 ) + · · · dx
x0 x0 2!
(1.36)
2 3
h 0 h 00
= hy0 + y0 + y0 + · · ·
2! 4
Subtracting (1.35) from (1.36) we get,
Z x0 +h
E1 = y(x)dx − A1
x0
 
1 1 3 00
= − h y0 + · · ·
6 4
1
= − h3 y000 + · · ·
12
Similarly error in [x1 , x2 ] is
Z x0 +2h
E2 = y(x)dx − A2
x0 +h
 
1 1 3 00
= − h y1 + · · ·
6 4
1
= − h3 y100 + · · ·
12
The total error is
Z x0 +nh
E = y(x)dx − T
x0
1 3 00
= − 00
h [y0 + y100 + · · · + yn−1 ]
12
00
Let f ∈ C 2 [a, b], then there exist c ∈ (a, b),if f 00 (c) be the maximum of |y000 |, · · · , |yn−1 |
then
nh3 00 (b − a)f 00 (c)h2
E=− f (c) = − = O(h2 )
12 12
Further if |f 00 (c)| ≤ M then
M (b − a)3
E≤
12n2
Hence, error in the trapezoidal rule is of the order h2 .

1.3.2 Error in Simpson’s− 31 Rule


As n increases, the step size h = (b − a)/(2n) approaches zero and Simpson's 1/3 rule (S) approaches the exact value of ∫_a^b f(x) dx.
We know,
h
S= [(y0 + yn ) + 2(y2 + y4 + · · · + yn−2 ) + 4(y1 + y3 + · · · yn−1 )]
3
Let y = f (x) be the function in the interval [a, b] and S be the Simpson’s− 31 rule
over interval [a, b]. i.e

a = x0 < x0 + h < x0 + 2h < · · · < x0 + 2nh = b

and let error be E

Z b
E= y(x)dx − S
a

And the taylor’s series of function y(x) at x = x0 is


(x − x0 )2 00
y(x) = y(x0 ) + (x − x0 )y 0 (x0 ) + y (x0 )
2! (1.37)
(x − x0 )3 000 (x − x0 )4 iv
+ y (x0 ) + y (x0 ) + · · ·
3! 4!
Applying x = x0 + h, y = y(x0 + h) = y1 in equation (1.37) we get,
h2 00 h3 000 h4 iv
y1 = y0 + hy00 + y0 + y0 + y0 + · · · (1.38)
2! 3! 4!
Applying x = x0 + 2h, y = y(x0 + 2h) = y2 in equation (1.37) we get,
h2 00 4h3 000 2h4 iv
y2 = y0 + 2hy00 + 2 y + y + y + ··· (1.39)
2! 0 3 0 3 0
So the area over the first strip of the Simpson’s− 31 rule (parabola) is as
h
A1 = [(y0 + 4y1 + y2 )]
3
h h2 h3 h4
= y0 + y0 + hy00 + y000 + y0000 + y0iv + · · ·
3 2! 3! 4!
2 3 4
 (1.40)
0 h 00 4h 000 2h iv
+ y0 + 2hy0 + 2 y0 + y + y + ···
2! 3 0 3 0
4h3 00 2h4 000 5h5 iv
= 2hy0 + 2h2 y00 + 2 y + y + y + ···
3 0 3 0 18 0
If y(x) is integrated over the interval [x0 , x0 + 2h] to (1.37)
Z x0 +2h Z x0 +2h 
(x − x0 )2 00
y(x)dx = y(x0 ) + (x − x0 )y 0 (x0 ) + y (x0 )
x0 x0 2!
(x − x0 )3 000 (x − x0 )4 iv

+ y (x0 ) + y (x0 ) + · · · dx (1.41)
3! 4!
4h3 00 2h4 000 4h5 iv
= 2hy0 + 2h2 y00 + y + y + y + ···
3 0 3 0 15 0
Subtracting (1.40) from (1.41) we get,
Z x0 +2h
E1 = y(x)dx − A1
x0
 
4 5 5 iv
= − h y0 + · · ·
15 18
1
= − h5 y0iv + · · ·
90
Similarly error in [x2 , x4 ] is
Z x0 +4h
E2 = y(x)dx − A2
x0 +2h
 
4 5 5 iv
= − h y1 + · · ·
15 18
1
= − h5 y1iv + · · ·
90
The total error is
Z x0 +2nh
E = y(x)dx − S
x0
1 5 iv
= − h [y0 + y1iv + · · · + yn−1
iv
]
90
Let f ∈ C 4 [a, b], then there exist c ∈ (a, b),if f iv (c) be the maximum of |y0iv |, · · · , |yn−1
iv
|
then
nh5 iv (b − a)f iv (c)h4
E=− f (c) = − = O(h4 )
90 180
Further if |f iv (c)| ≤ M then
M (b − a)5
E≤
2880n4

1.3.3 Error in Simpson’s− 83 Rule


As n increases, the step size h = (b − a)/(3n) approaches zero and Simpson's 3/8 rule (S) approaches the exact value of ∫_a^b f(x) dx.
We know,
3h
S= [(y0 + yn ) + 3(y1 + y2 + y4 + · · · + yn−1 ) + 2(y3 + y6 + y9 + · · · yn−3 )]
8
Let y = f (x) be the function in the interval [a, b] and S be the Simpson’s− 83 rule
over interval [a, b]. i.e

a = x0 < x0 + h < x0 + 2h < · · · < x0 + 3nh = b

and let error be E

Z b
E= y(x)dx − S
a
And the taylor’s series of function y(x) at x = x0 is

0(x − x0 )2 00
y(x) = y(x0 ) + (x − x0 )y (x0 ) + y (x0 )
2! (1.42)
(x − x0 )3 000 (x − x0 )4 iv
+ y (x0 ) + y (x0 ) + · · ·
3! 4!
Applying x = x0 + h, y = y(x0 + h) = y1 in equation (1.42) we get,
h2 00 h3 000 h4 iv
y1 = y0 + hy00 + y + y0 + y0 + · · · (1.43)
2! 0 3! 4!
Applying x = x0 + 2h, y = y(x0 + 2h) = y2 in equation (1.42) we get,
h2 00 4h3 000 2h4 iv
y2 = y0 + 2hy00 + 2 y + y + y + ··· (1.44)
2! 0 3 0 3 0
Applying x = x0 + 3h, y = y(x0 + 3h) = y3 in equation (1.42) we get,
9h2 00 9h3 000 27h4 iv
y3 = y0 + 3hy00 + y + y + y + ··· (1.45)
2 0 2 0 8 0
So the area over the first strip of the Simpson’s− 83 rule (Cubic curve) is as
3h
A1 = [(y0 + 3y1 + 3y2 + y3 )]
8 (1.46)
9h2 0 9h3 00 27h4 000 33h5 iv
= 3hy0 + y + y + y + y + ···
2 0 2 0 8 0 16 0
If y(x) is integrated over the interval [x0 , x0 + 3h] to (1.42)
Z x0 +3h Z x0 +3h 
(x − x0 )2 00
y(x)dx = y(x0 ) + (x − x0 )y 0 (x0 ) + y (x0 )
x0 x0 2!
(x − x0 )3 000 (x − x0 )4 iv

+ y (x0 ) + y (x0 ) + · · · dx (1.47)
3! 4!
9h2 0 9h3 00 27h4 000 81h5 iv
= 3hy0 + y + y + y + y + ···
2 0 2 0 8 0 40 0
Subtracting (1.46) from (1.47) we get,
Z x0 +3h
E1 = y(x)dx − A1
x0
 
81 33 5 iv
= − h y0 + · · ·
40 16
3
= − h5 y0iv + · · ·
80
Similarly error in [x3 , x6 ] is
Z x0 +6h
E2 = y(x)dx − A2
x0 +3h
 
81 33 5 iv
= − h y1 + · · ·
40 16
3
= − h5 y1iv + · · ·
80
The total error is
Z x0 +3nh
E = y(x)dx − S
x0
3 5 iv
= − h [y0 + y1iv + · · · + yn−1
iv
]
80
Let f ∈ C 4 [a, b], then there exist c ∈ (a, b),if f iv (c) be the maximum of |y0iv |, · · · , |yn−1
iv
|
then
3nh5 iv (b − a)f iv (c)h4
E=− f (c) = − = O(h4 )
80 80
Further if |f iv (c)| ≤ M then
E ≤ M(b − a)⁵ / (6480 n⁴)
Example 1.3.1. Evaluate ∫_0^6 dx/(1 + x²) using the trapezoidal, Simpson's 1/3, and Simpson's 3/8 rules.

Solution:
Here ∫_0^6 dx/(1 + x²), where f(x) = 1/(1 + x²). Taking h = 1, the values of f(x) are as follows:

x 0 1 2 3 4 5 6
y 1 0.5 0.2 0.1 0.0588 0.0385 0.027
Now,

T = (h/2)[ (y0 + y6) + 2(y1 + y2 + y3 + y4 + y5) ] = 1.4108

S_{1/3} = (h/3)[ (y0 + y6) + 4(y1 + y3 + y5) + 2(y2 + y4) ] = 1.3662

S_{3/8} = (3h/8)[ (y0 + y6) + 3(y1 + y2 + y4 + y5) + 2(y3) ] = 1.3571

Example 1.3.2. What should be the value of h so that the value of the integral ∫_1^5 log x dx is accurate up to five decimal places when using Simpson's 1/3 rule?

Solution:
We know that the error of Simpson's 1/3 rule is

E = −(b − a) f⁽ⁱᵛ⁾(c) h⁴ / 180.

Since y = log x, we have y′ = 1/x, y″ = −1/x², y‴ = 2/x³, y⁽ⁱᵛ⁾ = −6/x⁴. So

max_{1≤x≤5} |y⁽ⁱᵛ⁾(x)| = 6  and  min_{1≤x≤5} |y⁽ⁱᵛ⁾(x)| = 0.0096.

So, the error bounds are given by

(0.0096)(4)h⁴/180 < |E| < (6)(4)h⁴/180.

If the result is to be accurate up to five decimal places, then

(24/180) h⁴ < 10⁻⁵,

i.e. h⁴ < 0.000075, or h < 0.09.

1.4 Euler Maclaurin Formula


Definition 1.4.1. Preliminary terms

ˆ The shifting operator E is defined as the relation; E n yi = yi+n .

ˆ The relation between E and ∆ is as; ∆y0 = y1 − y0 = Ey0 − y0 = (E − 1)y0 .


which implies that, E = 1 + ∆.

ˆ The operator D is defined as; Dy(x) = d


dx
y(x).

ˆ The relation of D and E is defined with the Taylor’s series expansion, where

h2 D2 h3 D3
 
Ey(x) = 1 + hD + + + · · · y(x)
2! 3!

which implies that E = ehD

Consider a function f (x) = ∆F (x). where,

F (x) = ∆−1 f (x) (1.48)

As, we know that F (x1 ) − F (x0 ) = ∆f (x0 ) = f (x0 ) Similarly ,

F (x2 ) − F (x1 ) = f (x1 )

F (x3 ) − F (x2 ) = f (x2 )


..
.

F (xn ) − F (xn−1 ) = f (xn−1 )

Adding all these, we get


n−1
X
F (xn ) − F (x0 ) = f (xi ) (1.49)
i=0

where, x0 , x1 , · · · , xn are the (n + 1) equi - spaced values of x with difference h. From
equation (1.48)

F (x) = ∆−1 f (x) = (E + 1)−1 f (x)

= (ehD − 1)−1 f (x)


−1
h2 D2 h3 D3
 
= 1 + hD + + + · · · − 1 f (x)
2! 3!
−1
hD h2 D2

−1
= (hD) 1 + + + ··· f (x)
2! 3!
hD h2 D2 h4 D4
 
1 −1
= D 1− + − · · · f (x)
h 2 12 720

h3 000
Z
1 1 h
F (x) = f (x)dx − f (x) + f 0 (x) − f (x) + · · · (1.50)
h 2 12 720
Putting x = xn and x = x0 in (1.50) and then subtracting , we get
1 xn
Z
1
F (xn ) − F (x0 ) = f (x)dx − [f (xn ) − f (x0 )]
h x0 2
(1.51)
h h3 000
+ [f 0 (xn ) − f 0 (x0 )] − [f (xn ) − f 000 (x0 )] + · · ·
12 720
From (1.49) and (1.51) we have
n−1 Z xn
X 1 1
f (xi ) = f (x)dx − [f (xn ) − f (x0 )]
i=0
h x0 2
h 0 h3 000
+ [f (xn ) − f 0 (x0 )] − [f (xn ) − f 000 (x0 )] + · · ·
12 720
i.e,
Z xn n−1
1 X 1
f (x)dx = f (xi ) + [f (xn ) − f (x0 )]
h x0 i=0
2
h 0 h3 000
− [f (xn ) − f 0 (x0 )] + [f (xn ) − f 000 (x0 )] + · · ·
12 720
1
= [f (x0 ) + 2f (x1 ) + 2f (x2 ) + · · · + 2f (xn−1 ) + f (xn )]
2
h h3 000
− [f 0 (xn ) − f 0 (x0 )] + [f (xn ) − f 000 (x0 )] + · · ·
12 720
Hence,
Z x0 +nh
h
= [y0 + 2y1 + 2y2 + · · · + 2yn−1 + yn ]
x0 2
(1.52)
h2 0 0 h4 000 000 h6
− (yn − y0 ) + (yn − y0 ) + (ynv − y0v ) · · ·
12 720 30240
The equation (1.52) is called the Euler-Maclaurin formula for integration. where
the first expression on the right -hand side of equation (1.52) denotes the approximate
value of the integration obtained by using trapezoidal rule and the other expression
represent the successive corrections of this value.

Example 1.4.1. Evaluate I = ∫_0^{π/2} sin x dx using the Euler-Maclaurin formula.

Solution:
By Euler-maclaurin formula, we have
Z π
2 h
sinxdx = [y0 + 2y1 + 2y2 + · · · + 2yn−1 + yn ]
0 2
h2 h4 000
− (yn0 − y00 ) + (y − y0000 ) + · · ·
12 720 n
Here in interval [0, π2 ] we take h = π8 , so n = 4. Therefore,
Z π
2 π
sinxdx = [y0 + 2y1 + 2y2 + 2y3 + y4 ]
0 16
π2 0 0 π4
− 2
(y4 − y 0 ) + 4
(y4000 − y0000 ) + · · ·
(16) (12) (16) (720)

Here y = sin x, so y′ = cos x and y‴ = −cos x, giving y₀′ = 1, y₀‴ = −1, y₄′ = 0, y₄‴ = 0. So,


Z π
2 π π π 3π π
sinxdx = [0 + 2sin( ) + 2sin( ) + 2sin( ) + sin( )]
0 16 8 4 8 2
2 4
π π
− 2
(−1) + (1) + · · ·
(16) (12) (16)4 (720)
2
π
= [0 + 2(0.382683 + 0.707117 + 0.923879) + 1.000000] + · · ·
16
= 0.987119 + 0.012851 + 0.000033

= 1.000003 ' 1
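The following Python sketch (an illustration, not part of the note) reproduces Example 1.4.1: the trapezoidal sum plus the h² and h⁴ correction terms of formula (1.52), using the analytic derivatives of sin x at the end points.

```python
# Sketch: Euler-Maclaurin formula for sin(x) on [0, pi/2] with h = pi/8 (n = 4).
import math

a, b, n = 0.0, math.pi / 2.0, 4
h = (b - a) / n
y = [math.sin(a + i * h) for i in range(n + 1)]

# Trapezoidal part of (1.52)
trap = (h / 2.0) * (y[0] + y[n] + 2.0 * sum(y[1:n]))

# Correction terms use f'(x) = cos x and f'''(x) = -cos x at the end points
fp = lambda x: math.cos(x)
fppp = lambda x: -math.cos(x)
correction = -(h ** 2 / 12.0) * (fp(b) - fp(a)) + (h ** 4 / 720.0) * (fppp(b) - fppp(a))

print(trap + correction)   # about 1.0000, in agreement with the hand value above
```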

Lecture Note -6

M.sc. Structural Engineering


School of Engineering

By
Hem Raj Pandey
(Assistant Professor)
Pokhara University
Monday, June 7, 2020

Contents

1 Numerical Integration Contd 3


1.1 Numerical Double Integration . . . . . . . . . . . . . . . . . . . . . . 3
1.1.1 Trapezoidal Rule . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.2 Simpson's - 1/3 Rule . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Romberg Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.1 Romberg method for Trapezoidal Rule . . . . . . . . . . . . . 7
1.2.2 Romberg method by Simpson's - 1/3 Rule . . . . . . . . . . . . 7
1.3 Gaussian Quadrature Formula . . . . . . . . . . . . . . . . . . . . . . 10
1.3.1 Gauss - Legendre quadrature Formula . . . . . . . . . . . . . 11
Chapter 1

Numerical Integration Contd

1.1 Numerical Double Integration


The double integration is to evaluate and the integral form is as;
Z bZ d
I= f (x, y)dydx (1.1)
a c

This integral can be evaluated numerically by two successive integrations in x and y


directions respectively, taking into account one variable at a time.
We can use the trapezoidal and Simpson's 1/3 rules for double integration.

1.1.1 Trapezoidal Rule

For the evaluation of inner part of the integral in (1.1) by trapezoidal rule , we get

d−c b
Z
I= [f (x, c) + f (x, d)]dx (1.2)
2 a

Again using trapezoidal rule in (1.2) we get

(d − c)(b − a)
I= [f (a, c) + f (b, c) + f (a, d) + f (b, d)]dx (1.3)
4

And the general form of evaluating (1.1) using trapezoidal rule is

[Figure: (a), (b) grid of nodes for the two-dimensional trapezoidal rule]


hk
I= (sum of the values of f (x, y) at four corners of the region)
4
+ 2(sum of the values of f (x, y)at the remaining nodes of the region of integration)

+ 4(sum of the value off (x, y)at the internal nodes of the region of integration)
(1.4)

(b−a) (d−c)
Where, h = N
, and k = M

1.1.2 Simpson's - 1/3 Rule

For the evaluation of the inner part of the integral in (1.1) by Simpson's 1/3 rule, we get
Z d
h
S= [f (a, y) + 4f (a + h, y) + f (b, y)]dy (1.5)
3 c

where h = (b − a)/2 and k = (d − c)/2. Again using Simpson's 1/3 rule in (1.5) we get

(hk)
S= [{f (a, c) + f (a, d) + f (b, c) + f (b, d)} + 4{f (a, c + k)
9 (1.6)
+ f (a + h, c) + f (a + h, d) + f (b, c + k)} + 16f (a + h, c + k)]

And the general form of evaluating (1.1) using Simpson's 1/3 rule is

[Figure: (c), (d) grid of nodes for the two-dimensional Simpson's 1/3 rule]

Example 1.1.1. Taking h = k = 0.25, evaluate ∫_1^2 ∫_1^2 dx dy / (x + y) by the trapezoidal and Simpson's 1/3 rules.

Solution: Taking h = k = 0.25, the table of values of f(x, y) = 1/(x + y) is as follows:

y\x 1 1.25 1.50 1.75 2.0


1 0.5 0.444 0.4 0.3636 0.3333
1.25 0.444 0.4 0.3636 0.333 0.3076
1.50 0.4 0.3636 0.3333 .3076 0.2857
1.75 0.3636 0.333 0.3076 .2857 0.2666
2.0 0.3333 0.3076 0.2857 0.2666 0.25

Now, by using trapezoidal rule

Z 2 Z 2
dxdy (0.25)(0.25) 
= (0.5 + 0.3333 + 0.3333 + 0.25)
1 1 x+y 4
+ 2(0.3076 + 0.2857 + 0.2666 + 0.2666 + 0.2857 + 0.3076 + 0.3636

+ 0.4 + 0.444 + 0.444 + 0.4 + 0.3636)

+ 4(0.4 + 0.3636 + 0.3333 + 0.3636 + 0.3333+



+ 0.3076 + 0.3333 + 0.3076 + 0.2857)

= 0.0156[1.4166 + 8.27 + 12.112]

= 0.34005

Again, by using Simpson's 1/3 rule

Z 2 Z 2
dxdy (0.25)(0.25) 
= (0.5 + 0.3333 + 0.3333 + 0.25)
1 1 x+y 9
+ 2(0.4 + 0.2857 + 0.2857 + 0.4)

+ 4(0.444 + 0.3636 + 0.3076 + 0.2666

+ 0.2666 + 0.3076 + 0.3636 + 0.444 + 0.3333)

+ 8(0.3636 + 0.3636 + 0.3076 + 0.3076)



+ 16(0.4 + 0.333 + 0.2857 + 0.333)

= 0.0069[1.4166 + 2(1.3714) + 4(3.0966)

+ 8(1.3424) + 16(1.3523)]

= 0.3375
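A small Python sketch of the two-dimensional trapezoidal rule (1.4) used in this example is given below (illustrative, not part of the note); the corner, edge and interior weights are 1, 2 and 4 respectively.

```python
# Sketch: two-dimensional composite trapezoidal rule on [a, b] x [c, d].
def double_trapezoidal(f, a, b, c, d, N, M):
    h, k = (b - a) / N, (d - c) / M
    total = 0.0
    for i in range(N + 1):
        for j in range(M + 1):
            # weight 1 at corners, 2 on the edges, 4 at interior nodes
            w = (1 if i in (0, N) else 2) * (1 if j in (0, M) else 2)
            total += w * f(a + i * h, c + j * k)
    return (h * k / 4.0) * total

# Example 1.1.1: integral of 1/(x+y) over [1,2] x [1,2] with h = k = 0.25
print(double_trapezoidal(lambda x, y: 1.0 / (x + y), 1.0, 2.0, 1.0, 2.0, 4, 4))
# about 0.34, compare with the hand value 0.34005 obtained above
```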
Problem 1. Find the approximate value of the double integral ∫_0^1 ∫_0^1 sin(x + y) dx dy with the help of the trapezoidal and Simpson's 1/3 methods, taking h = 0.5 and k = 0.25.

Problem 2. Find the approximate value of the double integral ∫_1^2 ∫_1^2 dx dy / (1 + x² + y²) with the help of the trapezoidal and Simpson's 1/3 methods, taking h = 0.25 and k = 0.25.

Problem 3. Evaluate the double integral ∫_1^{1.5} ∫_1^{1.5} dx dy / (x² + y²)^{1/2}.

Problem 4. Evaluate the double integral ∫_0^1 ∫_0^1 dx dy / ((x + 3)(y + 4)).

1.2 Romberg Integration


Romberg integration technique is an iterative technique. In Romberg integration,
we use a numerical method with different spacing to improve the accuracy of the
method.
1.2.1 Romberg method for Trapezoidal Rule
step Length Value of I O(h2 ) Value of I O(h4 ) Value of I O(h6 ) Value of I O(h8 )
h I(h)
I(h, h2 )
h
2
I( h2 ) I(h, h2 , h4 )
I( h2 , h4 ) I(h, h2 , h4 , h8 )
h
4
I( h4 ) I( h2 , h4 , h8 )
I( h4 , h8 )
h
8
I( h8 )

Where,
h 1 h 
I(h, ) = 4I − I(h)
2 3 2
h h 1 h h 
I( , ) = 4I − I( )
2 4 3 4 2
..
.
h h 1 h h h 
I(h, , ) = 4I , − I(h, )
2 4 3 2 4 2
h h h 1 h h h h 
I( , , ) = 4I , − I( , )
2 4 8 3 4 8 2 4
h h h 1 h h h h h 
I(h, , , ) = 4I , , − I(h, , )
2 4 8 3 2 4 8 2 4

1.2.2 Romberg method by Simpson's 1/3 Rule

step Length Value of I O(h2 ) Value of I O(h4 ) Value of I O(h6 ) Value of I O(h8 )
h I(h)
I(h, h2 )
h
2
I( h2 ) I(h, h2 , h4 )
I( h2 , h4 ) I(h, h2 , h4 , h8 )
h
4
I( h4 ) I( h2 , h4 , h8 )
I( h4 , h8 )
h
8
I( h8 )

Where,
h 1 h 
I(h, ) = 16I − I(h)
2 15 2
h h 1 h h 
I( , ) = 16I − I( )
2 4 15 4 2
..
.
h h 1 h h h 
I(h, , ) = 16I , − I(h, )
2 4 15 2 4 2
h h h 1 h h h h 
I( , , ) = 16I , − I( , )
2 4 8 15 4 8 2 4
h h h 1 h h h h h 
I(h, , , ) = 16I , , − I(h, , )
2 4 8 15 2 4 8 2 4
Example 1.2.1. Compute the value of ∫_0^1 dx/(1 + x²) by using the trapezoidal and Simpson's 1/3 rules. Then use the Romberg method for a better approximation.

Solution: Here the integral is ∫_0^1 dx/(1 + x²), so that y = f(x) = 1/(1 + x²). Taking h = 0.5, 0.25, 0.125,
we get the value of f (x) as
x 0 0.125 0.25 0.375 0.5 0.625 0.75 0.875 1
y 1 0.9846 0.9411 0.8767 0.8 0.7191 0.64 0.5663 0.5

1. (A) Trapezoidal Rule


Taking h=0.5, we get,

h
I(h) = [y0 + y2 + 2y1 ]
2
0.5
= [1 + 0.5 + 2 × 0.8]
2
= 0.775

Taking h=0.25, we get,

h h
I( ) = [y0 + y4 + 2(y1 + y2 + y3 )]
2 2
0.25
= [1 + 0.5 + 2(0.9411 + 0.8 + 0.64)]
2
= 0.7827

Taking h=0.125, we get,

h h
I( ) = [y0 + y8 + 2(y1 + y2 + y3 + y4 + y5 + y6 + y7 )]
4 2
0.125
= [1 + 0.5 + 2(0.9846 + 0.9411 + 0.8767 + 0.8 + 0.7191 + 0.64 + 0.5663)]
2
= 0.7847
Using Romberg formula, we get

step Length Value of I O(h2 ) Value of I O(h4 ) Value of I O(h6 )


0.5 I(0.5)=0.775
I(0.5,0.25)=0.7852
0.25 I(0.25) =0.7827 I(0.5,0.25,0.125)=0.7854
I(0.25,0.125)=0.7853
0.125 I(0.125)=0.7847

Therefore, integral value using Romberg method is 0.7854.

(B) Simpson's 1/3 Rule.

Taking h=0.5, we get,

h
I(h) = [y0 + y2 + 4y1 ]
3
0.5
= [1 + 0.5 + 4 × 0.8]
3
= 0.7834

Taking h=0.25, we get,

h h
I( ) = [y0 + y4 + 4(y1 + y3 ) + 2y3 ]
2 3
0.25
= [1 + 0.5 + 4(0.9411 + 0.64) + 2 × 0.8]
3
= 0.7853

Taking h=0.125, we get,

h h
I( ) = [y0 + y8 + 4(y1 + y3 + y5 + y7 ) + 2(y2 + y4 + y6 )]
4 3
0.125
= [1 + 0.5 + 8(0.9846 + 0.8767 + 0.7191 + 0.5663) + 2(0.9411 + 0.8 + 0.64)]
3
= 0.7853

Using Romberg formula, we get

step Length Value of I O(h2 ) Value of I O(h4 ) Value of I O(h6 )


0.5 I(0.5)=0.7834
I(0.5,0.25)=0.7854
0.25 I(0.25) =0.7853 I(0.5,0.25,0.125)=0.7852
I(0.25,0.125)=0.7853
0.125 I(0.125)=0.7853

Therefore, integral value using Romberg method is 0.7852.
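The Romberg table above can also be generated by a short program. The sketch below (illustrative, not from the note) builds the trapezoidal column for h, h/2, h/4 starting from h = (b − a)/2 and then applies the recurrence I = (4·I_finer − I_coarser)/3 written in section 1.2.1.

```python
# Sketch: Romberg integration built on the trapezoidal rule.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    return (h / 2.0) * (f(a) + f(b) + 2.0 * sum(f(a + i * h) for i in range(1, n)))

def romberg(f, a, b, levels=3):
    # Trapezoidal values for n = 2, 4, 8, ... sub-intervals, i.e. h, h/2, h/4, ...
    col = [trapezoid(f, a, b, 2 ** (i + 1)) for i in range(levels)]
    while len(col) > 1:
        # each pass combines neighbouring entries: (4*finer - coarser)/3
        col = [(4.0 * col[i + 1] - col[i]) / 3.0 for i in range(len(col) - 1)]
    return col[0]

# Example 1.2.1: integral of 1/(1+x^2) on [0, 1]; the result is about 0.7854 (pi/4).
print(romberg(lambda x: 1.0 / (1.0 + x * x), 0.0, 1.0))
```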

1.3 Gaussian Quadrature Formula


Rb
In numerical integration, the value of integral a
f (x)dx can be evaluated by sub-
division of the interval [a, b] with unequal length for better accuracy.
I = ∫_a^b w(x) f(x) dx ≃ Σ_{i=0}^{n} λ_i f(x_i)        (1.7)

where x0, x1, x2, ..., xn are the (n + 1) node points in the interval [a, b] and the λ_i are the weights given to the values of the function f(x) at these node points.
We classify various Gaussian formulae based on w(x).

1. Gauss - Legendre quadrature, where w(x) = 1; −1 ≤ x ≤ 1.


2. Gauss - Chebyshev quadrature, where w(x) = (1 − x²)^{−1/2}; −1 ≤ x ≤ 1.

3. Gauss - Laguerre quadrature, where w(x) = e^{−x}; 0 ≤ x < ∞.

4. Gauss - Hermite quadrature, where w(x) = e^{−x²}; −∞ < x < ∞.
1.3.1 Gauss - Legendre quadrature Formula

From (1.7) with weight function w(x) = 1 , we get


Z b
f (x)dx = λ0 f (x0 ) + λ1 f (x1 ) + λ2 f (x2 ) + · · · + λn f (xn ) (1.8)
a

As we have interval [−1, 1]. Therefore limits [a, b] transform to [−1, 1]. Using linear
transformation. Let x = pt + q When, x = a, t = −1, ⇒ a = −p + q
when, x = b, t = 1, ⇒ b = p + q
b−a b+a
Solving we get, p = 2
;q = 2
. Thus the required transformation is

1 
x= (b − a)t + (b + a) (1.9)
2
The required integral will be
Z 1
f (x)dx = λ0 f (x0 ) + λ1 f (x1 ) + λ2 f (x2 ) + · · · + λn f (xn ) (1.10)
−1

Gaussian-One point rule

We get from (1.10) as Z 1


f (x)dx = λ0 f (x0 ) (1.11)
−1

Where λ0 6= 0, and has two unknowns λ0 , x0 where we have a function of {1, x}. And
Z 1
f (x) = 1; dx = 2
−1
Z 1
f (x) = x; xdx = 0 = λ0 x0
−1

λ0 6= 0, ⇒ x0 = 0. Therefore, λ0 = 2.

Z 1
f (x) = 2f (0) (1.12)
−1

Gaussian 2− point rule

We get from (1.10) as


Z 1
f (x)dx = λ0 f (x0 ) + λ1 f (x1 ) (1.13)
−1

where, λ0 6= 0, λ1 6= 0, and x0 6= x1 , and the unknowns are λ0 , x0 , λ1 , x1 where
we have a function of {1, x, x2 , x3 }. And
Z 1
f (x) = 1; dx = 2 = λ0 + λ1
−1
Z 1
f (x) = x; xdx = 0 = λ0 x0 + λ1 x1
−1
Z 1 (1.14)
2 2
f (x) = x ; x2 dx = = λ0 (x0 )2 + λ1 (x1 )2
−1 3
Z 1
f (x) = x3 ; x3 dx = 0 = λ0 (x0 )3 + λ1 (x1 )3
−1

Eliminating λ0 from (1.14) we get,

λ1 (x1 )3 − λ1 x1 (x0 )2 = 0

λ1 x1 (x1 − x0 )(x1 + x0 ) = 0

Since, λ1 6= 0, x0 6= x1 , which implies that x1 = −x0 .


So, λ0 = λ1 = 1 and x1 = − √13 and x2 = √1 .
3
Therefore, the two point Gaussian rule
is Z 1
1  1 
f (x)dx = f − √ + f √
−1 3 3
Gaussian 3− point rule

We get from (1.10) as


Z 1
f (x)dx = λ0 f (x0 ) + λ1 f (x1 ) + λ2 f (x2 ) (1.15)
−1

where, λ0 6= 0, λ1 6= 0,λ2 6= 0 and x0 6= x1 6= x2 , and the unknowns are

λ0 , x0 , λ1 , x1 , λ2 , x2 where we have a function of {1, x, x2 , x3 , x4 , x5 }. And
Z 1
f (x) = 1; dx = 2 = λ0 + λ1 + λ2
−1
Z 1
f (x) = x; xdx = 0 = λ0 x0 + λ1 x1 + λ2 x2
−1
Z 1
2 2
f (x) = x ; x2 dx = = λ0 (x0 )2 + λ1 (x1 )2 + λ2 (x2 )2
−1 3
Z 1 (1.16)
f (x) = x3 ; x3 dx = 0 = λ0 (x0 )3 + λ1 (x1 )3 + λ2 (x2 )3
−1
Z 1
2
f (x) = x4 ; x4 dx = = λ0 (x0 )4 + λ1 (x1 )4 + λ2 (x2 )4
−1 5
Z 1
f (x) = x5 ; x5 dx = 0 = λ0 (x0 )5 + λ1 (x1 )5 + λ2 (x2 )5
−1
Solving this system of equations (1.16) as in the two-point rule gives x0 = −√(3/5), x1 = 0, x2 = +√(3/5), λ0 = λ2 = 5/9, λ1 = 8/9. Therefore, the three-point Gaussian rule is

Z 1  r ! r !
1 3 3
f (x)dx = 5f − + 8f (0) + 5f (1.17)
−1 9 5 5

R1 Pn
Points (n + 1) Points (xi ) weight (λi ) −1
f (x)dx = i=0 λi f (xi )
1 0 2 2f(0)
± √13 = f − √13 + f √1
 
2 1 3
8
3 0 9   q  q 
1 3 3
9
5f − 5 + 8f (0) + 5f 5
q
± 35 5
9

Example 1.3.1. Discuss the Gaussian quadrature formula and compute the integral ∫_{−1}^{1} dx/(1 + x²) with the help of the Gauss - Legendre 1, 2, 3-point formulas. Compare the results with the exact value.

Solution:
1
Here,a = −1, b = 1, and f (x) = 1+x2

1. Gauss - Legendre 1− points formulas

Z 1
f (x)dx = 2f (0)
−1
Z 1
dx
= 2(1)
−1 1 + x2
=2

2. Gauss - Legendre 2− points formulas


Z 1    
1 1
f (x)dx = f − √ +f √
−1 3 3
Z 1
dx 1 1
2
= 2 +  2
−1 1 + x

1 + − √13 1 + √13
3
=
2
= 1.5

3. Gauss - Legendre 3− points formulas


Z 1  r 
5 3 8
f (x)dx = f − + f (0)
−1 9 5 9
r 
5 3
+ f
9 5
Z 1
dx 5 1
2
=  q 2
−1 1 + x 9
1 + − 35
8 5 1
+ (1) + q 2
9 9 3
1+ 5

114
=
72
= 1.58333

Exact solution is given by


∫_{−1}^{1} 1/(1 + x²) dx = tan⁻¹(x) |_{−1}^{1} = π/2 = 1.571
Hence, 3− points formula gives better approximation.

Example 1.3.2. Use the Gauss - Legendre 2-point formula to compute the approximate value of the integral ∫_1^2 √(1 + cos² x) dx.

Solution:
We have to convert the interval [1, 2] to [−1, 1] to apply Gauss-Legendre 2− point
formula. we use the following formula;

b−a b+a 1 3
x= t+ = t+
2 2 2 2

and when x = 1, ⇒ t = −1 and when x = 2 ⇒ t = 1, 2dx = dt. On substituting this


transformation in the integral, we have
s
Z 2 Z 1   
p 1 3 1
(1 + cos2 x)dx = 1 + cos2 t + dt
1 −1 2 2 2
Z 1 s   
1 1 3
= 1 + cos2 t + dt
2 −1 2 2

Now using Gauss-Legendre 2− point formula as follows


Z 1 s      
1 1 3 1 1
1 + cos 2 t+ dt = f − √ +f √
2 −1 2 2 3 3
s    
1 1 1 3
= 1 + cos 2 −√ +
2 2 3 2
s    
1 1 3
+ 1 + cos 2 √ +
2 3 2
1
= (1.023095667 + 1.060070203)
2
= 1.041582935
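A compact Python sketch of the Gauss - Legendre rules tabulated above, including the linear transformation (1.9) from [a, b] to [−1, 1], is given below; the dictionary of nodes and weights and the function names are illustrative choices, and the test integrand is that of Example 1.3.2.

```python
# Sketch: 1-, 2- and 3-point Gauss-Legendre quadrature on a general interval [a, b].
import math

GAUSS_RULES = {
    1: [(0.0, 2.0)],
    2: [(-1.0 / math.sqrt(3.0), 1.0), (1.0 / math.sqrt(3.0), 1.0)],
    3: [(-math.sqrt(3.0 / 5.0), 5.0 / 9.0), (0.0, 8.0 / 9.0),
        (math.sqrt(3.0 / 5.0), 5.0 / 9.0)],
}

def gauss_legendre(f, a, b, npts=2):
    """Integrate f over [a, b] with the npts-point Gauss-Legendre rule."""
    half_len, mid = (b - a) / 2.0, (b + a) / 2.0   # x = ((b-a)t + (b+a))/2
    return half_len * sum(w * f(half_len * t + mid) for t, w in GAUSS_RULES[npts])

f = lambda x: math.sqrt(1.0 + math.cos(x) ** 2)
print(gauss_legendre(f, 1.0, 2.0, npts=2))   # about 1.0416, as in Example 1.3.2
```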

Lecture Note -7

M.sc. Structural Engineering


School of Engineering

By
Hem Raj Pandey
(Assistant Professor)
Pokhara University
Monday, June 15, 2020
Contents

1 Solution of Linear Algebra 3


1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Gauss elimination method . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Pivoting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Tridiagonal Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.1 Banded Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.2 Thomas Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4 Gauss - Jordan method . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5 Factorisation/ LU Decomposition methods . . . . . . . . . . . . . . . 13
1.5.1 Doolittle Method . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.5.2 Crout Method . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.5.3 Cholesky Method . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6 Matrix inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Chapter 1

Solution of Linear Algebra

1.1 Introduction
In Science and Engineering we often encounter with a large number of simultaneous
linear equation as;

a11 x1 + a12 x2 + a13 x3 + · · · + a1n xn = b1

a21 x1 + a22 x2 + a23 x3 + · · · + a2n xn = b2

a31 x1 + a32 x2 + a33 x3 + · · · + a3n xn = b3

... ...

am1 x1 + am2 x2 + am3 x3 + · · · + amn xn = bn

Where, aij and bi are known values and xi are unknowns. Above system of equations
may also be written as
AX = B
     
a a12 a13 · · · a1n x b
 11   1  1
 a21 a22 a23 · · · a2n   x2   b2 
     
     
Where, A =  a31 a32 a33 · · · a3n , X =  x3 , B =  b3 
     
 ..   .. 
     
 
. . . . . . . . . . . . . . . . . . . . . . . . . . .
     
am1 am2 am3 · · · amn xn bn

We discuss two different methods to solve the system of equations.
1. Direct Methods

ˆ Gauss Elimination method.

ˆ Gauss Jordan mehod.

ˆ Factorisation/ LU Decomposition methods.

ˆ Matrix Inverse Method.

2. Iterative Methods

ˆ Jacobi Iterative method.

ˆ Gauss - Seidel method.

ˆ Successive Over-Relaxation (SOR) method.

Direct Method

In direct methods we have two special form of matrix;

1. A system of equation DX = B, where A = D and D be a diagonal matrix.


 
a x = b1
 11 1 
a22 x2 = b2 
 

 
··· ··· 
 

 
 
 an−1,n−1 xn−1 = bn−1 
 
ann xn = bn
This is diagonal system of equations and we get
bi
xi = , aii 6= 0, i = 1, 2, 3, · · · , n
aii

2. Taking U X = B, U is upper triangular matrix.

a11 x1 + a12 x2 + a13 x3 + · · · + a1n xn = b1

a22 x2 + a23 x3 + · · · + a2n xn = b2

a33 x3 + · · · + a3n xn = b3

... ...

an−1,n−1 xn−1 + an−1,n xn = bn−1

amn xn = bn
And we gets,
bn
xn =
ann
(bn−1 − an−1,n xn )
xn−1 =
an−1,n−1
···
Pn
b1 − j=2 a1,j xj
x1 =
an
This processes is called back substitution methods.

1.2 Gauss elimination method


In this method the unknowns are eliminated successively and the system is reduced to
an upper triangular system from which the unknowns are found by back substitution.
Consider the equations

a11 x1 + a12 x2 + a13 x3 = b1

a21 x1 + a22 x2 + a23 x3 = b2

a31 x1 + a32 x2 + a33 x3 = b3

and its augmented matrix form be


 
a a a : b1
 11 12 13 
 
a21 a22 a23 : b2 
 
a31 a32 a33 : b3
First step  
a11 a12 a13 : b1
 
 0 a022 a023 : b02 
 
 
0 0 0
0 a32 a33 : b3
Second step  
a a a : b1
 11 12 13 
 0 0 0
 0 a22 a23 : b2 

 
00 00
0 0 a33 : b3
And finally evaluate the unknowns from R3 and find the value of x1 , x2 , x3 by back
substitution.
1.2.1 Pivoting

The important case of the pivot being zero or close to zero. If the pivot is zero,
the entire process fails and if it is close to zero, round-off errors may occur. These
problem can be avoided by adopting a procedure called pivoting.
If a11 is either zero or very small compared to the other coefficients of the equation
then we find the largest available coefficient in the columns below the pivot equation
and then interchanging the two rows. In this way obtain a new pivot equation with
a non zero pivot. Such a process is called partial pivoting.

|aii | ≥ |aji |, j = i + 1, i + 2, · · · , n

In this case we search only the columns below for the largest element. If on the
other hand, we search both columns and rows for the largest element, the is called
complete pivoting.

|aii | ≥ |ajk |, j, k = i, i + 1, i + 2, · · · , n

It is obvious that complete pivoting involves more complexity in computations in


comparison to partial pivoting.

Example 1.2.1. Solve by Gauss elimination method.

10x − 7y + 3z + 5u = 6

−6x + 8y − z − 4u = 5

3x + y + 4z + 11u = 2

5x − 9y − 2z + 4u = 7

Solution
The augmented matrix for the given system is
 
10 −7 3 5 :6
 
−6 8 −1 −4
 
: 5
 
 
3 1 4 11 : 2
 
5 −9 −2 4 :7
Here R1 is a pivot row. Operating R2 → R2 + 0.6R1 , R3 → R3 − 0.3R1 , R4 →
R4 − 0.5R1  
10 −7 3 5 :6
 
0.8 −1 : 8.6
 
 0 3.8
 
 
 0 3.1 3.1 9.5 : 0.2
 
0 −5.5 −3.5 1.5 : 4
3.1 5.5
Here R2 is a pivot row. Operating R3 → R3 − 3.8 R2 , R4 → R4 + R
3.8 2
 
10 −7 3 5 :6
 
−1
 
 0 3.8 0.8 : 8.6 
 
2.447 10.315 : −6.815
 
0 0
 
0 0 −2.342 0.052 : 16.447

Here R3 is a pivot row. Operating R4 → R4 − 0.957R3


 
10 −7 3 5 :6
 
−1
 
 0 3.8 0.8 : 8.6 
 
 0 0 2.447 10.315 : −6.815
 
 
0 0 0 9.924 : 9.924

Therefore, back substitution yields; 9.924u = 9.924 and so u ≈ 1, 2.447z + 10.315u =


−6.815 and z = −6.999 ≈ −7, 3.8y +0.8z −u = 8.6 and y ≈ 4, 10x−7y +3z +5u = 6
and x ≈ 5.
Hence, the solution of the given system is x = 5, y = 4, z = −7, u = 1.
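A minimal Python sketch of Gauss elimination with partial pivoting and back substitution is given below (illustrative, not part of the note); it is applied to the system just solved by hand.

```python
# Sketch: Gauss elimination with partial pivoting, then back substitution.
def gauss_eliminate(A, b):
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n - 1):
        # partial pivoting: bring the largest |a_ik| in column k (rows k..n-1) to row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution on the upper triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

A = [[10, -7, 3, 5], [-6, 8, -1, -4], [3, 1, 4, 11], [5, -9, -2, 4]]
b = [6, 5, 2, 7]
print(gauss_eliminate(A, b))   # approximately [5.0, 4.0, -7.0, 1.0]
```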

1.3 Tridiagonal Matrix


Tri-diagonal system of linear equations contains non-zero elements only at diago-
nal, lower diagonal and upper diagonal of the matrix A. These systems have simple
structures, and therefore require less computational efforts. e.g.
 
7 2 0 0
 
3 5 −3 0 
 
A= 
0 4 6 −1
 
 
0 0 5 8
1.3.1 Banded Matrix

Definition 1.3.1. An n × n matrix is called band matrix if integers p and q with


1 < p, q < n exist with the property that aij = 0 whenever, p ≤ j − i or q ≤ i − j.
The p describes the number of diagonals above and including the main diagonal on
which non- zero entries may lies and q describes the number of diagonals below and
including the main diagonal on which non-zero entries may lies. The band width of
the matrix is w = p + q − 1.
 
7 2 1 0
 
3 5 −3 −2
 
e.g. A =  , is a band matrix with p = 3, q = 2, So the band
0 4 6 −1
 
 
0 0 5 8
width = 3 + 2 − 1 = 4.

1.3.2 Thomas Algorithm


The Thomas algorithm also known as Tridiagonal Matrix Algorithm (TDMA) will
be discussed. Let the tridiagonal matrix be defined as;
    
b c 0 0 ... 0 0 0 x d
 1 1  1   1 
a2 b2 c2 0 . . . 0 0 0   x 2   d2 
    
    
 0 a3 b 3 c 3 . . . 0 0 0   x 3   d3 
    
  ..   .. 
    
... ... ...
 .  =  .  (1.1)


. .   ..   .. 
    
.. ..
. . .  .   . 


    
    
 0 0 0 0 . . . an−1 bn−1 cn−1  xn−1  dn−1 
    
0 0 0 0 ... 0 an bn xn dn
This system of equation (1.1) can be written as follows;

ai xi−1 + bi xi + ci xi+1 = di , i = 1, 2, . . . , n (1.2)

where a1 = cn = 0. From equation (1.1) we can defined the variable xi in the form
of xi+1 as follows;
xi = Pi xi+1 + Qi (1.3)

xi−1 = Pi−1 xi + Qi−1 (1.4)

Putting the value of xi and xi−1 in equation (1.2) we get

 
ai Pi−1 xi + Qi−1 + bi xi + xi+1 ci = di

−ci di − ai Qi−1
xi = xi+1 + (1.5)
bi + ai Pi−1 bi + ai Pi−1
Comparing equation (1.3) and (1.5), we get

−ci di − ai Qi−1
Pi = and Qi = (1.6)
bi + ai Pi−1 bi + ai Pi−1
This recurrence relations can be used to compute the values of the constant P
and Q. We required the initial value of P0 and Q0 . Computing the equation (1.2)
for i = 1.
−c1 d1
x1 = x2 +
b1 b1
Here we have P0 = Q0 = 0. Now we will compute the values of variables xi using
back substitutions. Using constant cn = 0 in equation (1.6) we get Pn = 0 and using
equation (1.2) at i = n, we have

xn = Qn ,

and
xi = Pi xi+1 + Qi , i = n − 1, n − 2, . . . , 1

Algorithm

−c1 d1
1. P0 = Q0 = 0; and P1 = b1
; Q1 = b1
;

2. for i = 2, 3, ....n

−ci di − ai Qi−1
Pi = and Qi =
bi + ai Pi−1 bi + ai Pi−1
3. By using back substitution

xn = Qn

xi = Pi xi+1 + Qi i = n − 1, n − 2, . . . , 1

Example 1.3.1. Using Thomas algorithm to compute the solution of the following
system of linear equation.

x1 + x2 = 1

3x1 + 2x2 + x3 = 5

2x2 + 3x3 + x4 = 2

−2x3 − 3x4 = −5

Solution:
The associated matrix form for the tridiagonal system is as follows
    
1 1 0 0 x 1
   1  
    
3 2 1 0  x2   5 
   =  
    
0 2 3 1  x3   2 
    
0 0 −2 −3 x4 −5
The value of ai , bi , ci and di are given as

i ai bi ci di
1 0 1 1 1
2 3 2 1 5
3 2 3 1 2
4 -2 -3 0 -5

We can compute the constant P and Q from Thomas algorithm as P0 = Q0 = 0,


where

−ci di − ai Qi−1
Pi = and Qi =
bi + ai Pi−1 bi + ai Pi−1
These recurrence relations provide following results
P1 = −1 Q1 = 1

P2 = 1 Q2 = −2

P3 = −0.2 Q3 = 1.2

P4 = 0 Q4 = 1
Using xn = Qn and xi = Pi xi+1 + Qi i = n − 1, n − 2, . . . , 1 we get,

x4 = Q4 = 1

x3 = P3 x4 + Q3 = 1

x2 = P2 x3 + Q2 = −1

x1 = P1 x2 + Q1 = 2

The final solution is as follows;


x1 = 2, x2 = −1,x3 = 1, x4 = 1
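The Thomas algorithm above is easy to code. The following Python sketch (illustrative, not from the note) applies it to the tridiagonal system of Example 1.3.1.

```python
# Sketch: Thomas algorithm (TDMA) with the recurrences for P_i and Q_i given above.
def thomas(a, b, c, d):
    n = len(b)
    P, Q = [0.0] * n, [0.0] * n
    P[0] = -c[0] / b[0]
    Q[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] + a[i] * P[i - 1]
        P[i] = -c[i] / denom
        Q[i] = (d[i] - a[i] * Q[i - 1]) / denom
    x = [0.0] * n
    x[n - 1] = Q[n - 1]                 # c_n = 0 gives P_n = 0, so x_n = Q_n
    for i in range(n - 2, -1, -1):      # back substitution x_i = P_i x_{i+1} + Q_i
        x[i] = P[i] * x[i + 1] + Q[i]
    return x

a = [0, 3, 2, -2]      # sub-diagonal (a_1 = 0)
b = [1, 2, 3, -3]      # main diagonal
c = [1, 1, 1, 0]       # super-diagonal (c_n = 0)
d = [1, 5, 2, -5]
print(thomas(a, b, c, d))   # [2.0, -1.0, 1.0, 1.0]
```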

1.4 Gauss - Jordan method


In this method the elimination is performed not only in the equation below but also
in the equation above the pivotal row so that we get a diagonal matrix. In this way
we have the solution without further computation.

Procedure:

ˆ Write augmented matrix and reduce a11 to unity by suitable row operation.

ˆ Reduce all the elements of first column below the first row into zeros.

ˆ Reduce all the principal leading diagonal elements a11 , a22 , a33 , etc unity and
other elements are zero’s.

ˆ Stop process in step 3, if all the elements are zero except the last one on the
right. In that case the system is inconsistent and has no solutions.
i.e.  
a 0 0 ... 0 : b1
 11 
 0 a22 0 ... 0 b2 
 
 
 
0 0 a33 ... 0 : b3 
 .. .. 
 
...
 . . 
 
0 0 0 ... ann : bn
Example 1.4.1. Solve the system of equations

x + 2y + z − u = −2

2x + 3y − z + 2u = 7

x + y + 3z − 2u = −6

x+y+z+u=2

by using Gauss-Jordan method.

Solution
Here the system of equation and its augmented form is as;

    
x + 2y + z − u = −2 1 2 −1 1 x −2
    
2x + 3y − z + 2u = 7 3 −1 2  y   7 
    
2
and    =  
1 3 −2 z  −6
    
x + y + 3z − 2u = −6 1
    
x+y+z+u=2 1 1 1 1 u 2

Operating R2 → R2 − 2R1 , R3 → R3 − R1 , R4 → R4 − R1
    
1 2 1 −1 x −2
    
0 −1 −3 4  y   11 
    
   =  
0 −1 2 −1 z  −4
    
    
0 −1 0 2 u 4
Operating R4 → R4 − R2 , R3 → R3 − R2 , R1 → R1 + 2R2
    
1 0 −5 7 x 20
    
0 −1 −3 4  y   11 
    
   =  
5 −5 −15
    
0 0   z  
    
0 0 3 −2 u −7
1
Operating R2 → −R2 , R3 → R
5 3
    
1 0 −5 7 x 20
    
1 3 −4 y  −11
    
0
   =  
0 1 −1 z   −3 
    
0
    
0 0 3 −2 u −7

Operating R1 → R1 + 5R3 , R2 → R2 − 3R3 , R4 → R4 − 3R3


    
1 0 0 2 x 5
    
−1 y  −2
    
0 1 0
   =  
−1 z  −3
    
0 0 1
    
0 0 0 1 u 2
Operating R1 → R1 + 2R4 , R2 → R2 + R4 , R3 → R3 + R4
    
1 0 0 0 x 1
    
    
0 1 0 0 y   0 
   =  
0 z  −1
    
0 0 1
    
0 0 0 1 u 2
Therefore, the required solution is x = 1, y = 0, z = −1, and u = 2.

1.5 Factorisation/ LU Decomposition methods


If we need to solve several linear systems with the same coefficient matrix, it is tedious to repeat Gauss elimination for every right-hand side, especially when the right-hand side depends on the previous solution. To avoid such unnecessary computation we use the factorisation method.
In this method, the coefficient matrix A is factorized into the product of two
triangular matrices such that one matrix is lower triangular L and the other matrix
is upper triangular U , i.e.
A = LU
 
a a a · · · a1n
 11 12 13 
 a21 a22 a23 · · · a2n 
 
 
where, A =  a31 a32 a33 · · · a3n ,
 
 
 
. . . . . . . . . . . . . . . . . . . . . . .
 
an1 an2 an3 · · · ann
   
l11 0 . . . 0 u11 u12 . . . u1n
   
   
 l21 l22 . . . 0   0 u22 . . . u2n 
L= .
.. 
 and U = .

.. 

 .. .   .. . 
   
ln1 ln2 . . . lnn 0 0 . . . unn
are lower and upper triangular matrices, respectively. The matrices L and U have
to be completed, such that

  
l11 0 ... 0 u11 u12 . . . u1n
  
  
 l21 l22 . . . 0   0 u22 . . . u2n 
A = LU =  .

..   ..
 
.. 

 .. .  . . 
  
ln1 ln2 . . . lnn 0 0 . . . unn
  (1.7)
l u l11 u12 ... l11 u1n
 11 11 
 
 l21 u11 l21 u12 + l22 u22 . . . l21 u1n + l22 u2n 
= .. ..


 . . 
 
ln1 u11 ln2 u12 + ln2 u22 . . . ln1 u1n + ln2 u2n + · · · + lnn unn

After comparing the elements of both the matrices, we get the following relations;

li1 u1j + li2 u2j + · · · + lin unj = aij , 1 ≤ i, j ≤ n (1.8)

Where, 
 l = 0, j > i;
ij
 u = 0, i > j.
ij

After computing the matrices L and U . The system of equations is given by

AX = B

Changes to
LU X = B
Let U X = Y , then the above system reduces to LY = B.
The system LY = B is the lower triangular system. So, the vector Y can be easily
determined by using forward substitution. The vector X can be easily computed by
using back substitution from the following upper triangular system,

UX = Y

1.5.1 Doolittle Method

From equation (1.8) if we have

lii = 1; 1≤i≤n

which is called Doolittle method.

1.5.2 Crout Method

From equation (1.8) if we have

uii = 1; 1≤i≤n

which is called Crout method.

Example 1.5.1. Use Factorisation methods to calculate the solution of the following
system of linear equations;

3x1 − x2 + x3 = 1

2x1 + 3x2 + x3 = 4

3x1 + x2 − 2x3 = 6

Solution
First, we decompose the coefficient matrix A in the product of lower and upper
triangular matrices with diagonal elements in the lower triangular matrix as unity.

A = LU
    
3 −1 1 1 0 0 u11 u12 u13
    
1  = l21 1
    
2 3 0  0 u22 u23 
    
3 1 −2 l31 l32 1 0 0 u33
 
u u12 u13
 11 
= l21 u11
 
l21 u12 + u22 l21 u13 + u23 
 
l31 u11 l31 u12 + l32 u22 l31 u13 + l32 u23 + u33
After equating the terms on both sides, we obtain following set of equations

u11 = 3, u12 = −1, u13 = 1

l21 u11 = 2, l21 u12 + u22 = 3, l21 u13 + u23 = 1

l31 u11 = 3, l31 u12 + l32 u22 = 1, l31 u13 + l32 u23 + u33 = −2

The solution of this system produces the values of lij and uij as follows

First Row u11 = 3, u12 = −1, u13 = 1

First Column l21 = 23 , l31 = 1

11 1
Second Row u22 = 3
, u23 = 3

6
Second Column l32 = 11
,

−35
Third Column u33 = 11

So, we can easily write the coefficient matrix A in terms of the matrices L and U as
follows     
3 −1 1 1 0 0 3 −1 1
    
 2 11 1
1  = 3
  
2 3 1 0 0 3 3

    
6 35
3 1 −2 1 11
1 0 0 − 11
First, we solve LY = B by using forward substitution
    
1 0 0 y1 1
    
2
 3 1 0 y2  = 4
   
    
6
1 11 1 y3 6
10 35
The solution is, y1 = 1, y2 = 3
, y3 = 11
. Now ,we solve U X = Y by using backward
substitutions     
3 −1 1 x 1
   1  
0 11 1   x2  =  10
    
3 3    3 


0 0 − 35
11
x3 35
11

Solving this system of equations, finally the solution is given by

x1 = x2 = 1, x3 = −1 Ans
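A short Python sketch of the Doolittle decomposition with forward and back substitution is given below (an illustration, not part of the note), applied to the system of Example 1.5.1.

```python
# Sketch: Doolittle LU decomposition (l_ii = 1), then LY = b and UX = Y.
def doolittle_solve(A, b):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):                      # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):                  # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    # forward substitution LY = b
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    # back substitution UX = Y
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[3, -1, 1], [2, 3, 1], [3, 1, -2]]
b = [1, 4, 6]
print(doolittle_solve(A, b))   # approximately [1.0, 1.0, -1.0]
```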

1.5.3 Cholesky Method

In case of positive definite symmetric matrix A, there exists a unique decomposition


of matrix A, known as Cholesky decomposition.

A = LLT

where L is a lower triangular matrix and LT is its transpose. Therefore, the system
AX = B can be written as follows;

LLT X = B

Let Lᵀ X = Y; then LY = B. First, we compute the vector Y using forward substitution and then compute the vector X from the equation

Lᵀ X = Y

The matrix A can also be decomposed as A = U U T , where U is an upper triangular


matrix.

Example 1.5.2. Solve the following system of linear equations by using the Cholesky
method

3x1 − x2 + x3 = 2

−x1 + 3x2 + x3 = 6

x1 + x2 + 2x3 = 5

Hint: decompose A = LLᵀ, solve LY = B by forward substitution and then LᵀX = Y by back substitution; a code sketch is given below.
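A minimal Python sketch of the Cholesky solve, under the assumption that A is symmetric positive definite, is given below (illustrative, not from the note), with the system of Example 1.5.2 as the test case.

```python
# Sketch: Cholesky decomposition A = L L^T, then forward and back substitution.
import math

def cholesky_solve(A, b):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = math.sqrt(A[i][i] - s)      # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]     # below-diagonal entry
    y = [0.0] * n
    for i in range(n):                                # forward substitution LY = b
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                    # back substitution L^T X = Y
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

A = [[3, -1, 1], [-1, 3, 1], [1, 1, 2]]
b = [2, 6, 5]
print(cholesky_solve(A, b))   # approximately [1.0, 2.0, 1.0]
```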
1.6 Matrix inversion
The matrix inversion takes place when we take a square matrix. To find the inverse
of matrix we use the following relations ;

ˆ A−1 = adjA
|A|
where, |A| =
6 0.

ˆ By Gauss elimination method. Let us consider the relation AX = I where,


    
a a a x x2 x3 1 0 0
 11 12 13   1   
y2 y3  = 0 1 0
    
a21 a22 a23   y1
    
a31 a32 a33 z1 z2 z3 0 0 1

After applying the Gauss elimination method operation. The equation is equiv-
alent to
         
a a12 a13 x 1 a a12 a13 x 0
 11   1    11   2  
a23   y1  = 0 a21 a22 a23   y2  = 1
         
a21 a22
         
a31 a32 a33 z1 0 a31 a32 a33 z2 0
    
a a12 a13 x 0
 11   3  
a21 a22 a23   y3  = 0
    
    
a31 a32 a33 z3 1

ˆ Gauss Jordan method


The matrix operation will be

[A : I] ⇒ A−1 [A : I] ⇒ [I : A−1 ]

where  
a11 a12 a13 : 1 0 0
 
[A : I] = a21 a22 a23 : 0 1 0
 
 
a31 a32 a33 : 0 0 1
 
0 0 0
1 0 0 : a11 a12 a13
 
[I : A−1 ] = 0 1 0 : a021 a022 a023 
 
 
0 0 0
0 0 1 : a31 a32 a33
 
2 1 1
 
Example 1.6.1. Find the inverse of the matrix 3 2 3 by using
 
 
1 4 9

ˆ Gauss Elimination method

ˆ Gauss Jordan Method


Lecture Note -8

M.sc. Structural Engineering


School of Engineering

By
Hem Raj Pandey
(Assistant Professor)
Pokhara University
Monday, June 22, 2020

Contents

1 Solution of Linear Algebra Contd 3


1.1 Iterative Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.1 Gauss Jacobi Iterative Method . . . . . . . . . . . . . . . . . 4
1.1.2 Gauss - Seidal Method . . . . . . . . . . . . . . . . . . . . . . 7
1.1.3 Successive Over relaxation (SOR) Methods . . . . . . . . . . . 9
1.2 Eigenvalue and Eigenvector . . . . . . . . . . . . . . . . . . . . . . . 10
1.2.1 Power Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2.2 Inverse Power Method . . . . . . . . . . . . . . . . . . . . . . 13
1.2.3 Shifted Power Method . . . . . . . . . . . . . . . . . . . . . . 15

Chapter 1

Solution of Linear Algebra Contd

1.1 Iterative Method


The direct methods are easy to implement, but round-off error is significant in case
of large systems. The iterative procedures can be used for solution of such systems.
The iterative methods may require large numbers of iterations to produce the result
with higher accuracy. But, once the algorithms for these methods are implemented,
these iterations can be easily computed with the advent of high-speed computers.
For more accuracy and lesser computational work, direct and iterative methods
can be mixed up. First, we can apply the direct method to compute the solution and
then further improve this solution for more accuracy with an iterative procedure.

Condition of Convergence
A sufficient condition for convergence of the iterative method is that the system
of equations is diagonally dominant, that is, the coefficient matrix is diagonally
dominant. where,
n
X
|aii | ≥ |aij |, 1≤i≤n
j=1,j6=i

If the system is diagonally dominant, then the iteration converges for any initial
solution vector.

When to stop iteration


We stop the iteration procedure when the magnitudes of the differences between the
two successive iterates of all the variables are smaller than a given accuracy or error
tolerance or error bound .

Example 1.1.1. If we required two decimal places of accuracy, then we iterate until
(k+1) (k)
|xi − xi | < 0.005, for all i. If we require three decimal places of accuracy , then
(k+1) (k)
we iterate until |xi − xi | < 0.0005, for all i.


1.1.1 Gauss Jacobi Iterative Method


The linear system of n equations with n variables x1 , x2 , . . . , xn has the following
form.

a11 x1 + a12 x2 + a13 x3 + · · · + a1n xn = b1


a21 x1 + a22 x2 + a23 x3 + · · · + a2n xn = b2
a31 x1 + a32 x2 + a33 x3 + · · · + a3n xn = b3 (1.1)
.. .. .
. . = ..
an1 x1 + an2 x2 + an3 x3 + · · · + ann xn = bn

Which can be written as AX = B. And equation (1.1) can be rewritten as follows;

1
x1 = [b1 − (a12 x2 + a13 x3 + · · · + a1n xn )]
a11
1
x2 = [b2 − (a21 x1 + a23 x3 + · · · + a2n xn )]
a22
1
x3 = [b3 − (a31 x1 + a32 x2 + · · · + a3n xn )]
a33
.. .. ..
.= . .
1
xn = [bn − (an1 x1 + an2 x2 + an3 x3 + · · · + an,n−1 xn−1 )]
ann
Since initial approximation is required to compute the vector (x1 , x2 , . . . , xn ) and let
(0) (0) (0)
that approximation be [x1 , x2 , . . . , xn ]. We use these value in the above expres-
(1) (1) (1)
sions to get the next approximation [x1 , x2 , . . . , xn ] of the Jacobi method.

(1) 1 (0) (0)


x1 = [b1 − (a12 x2 + a13 x3 + · · · + a1n x(0)
n )]
a11
(1) 1 (0) (0)
x2 = [b2 − (a21 x1 + a23 x3 + · · · + a2n x(0)
n )]
a22
(1) 1 (0) (0)
x3 = [b3 − (a31 x1 + a32 x2 + · · · + a3n x(0)
n )]
a33
.. .. ..
.= . .
1 (0) (0) (0) (0)
x(1)
n = [bn − (an1 x1 + an2 x2 + an3 x3 + · · · + an,n−1 xn−1 )]
ann
The subscripts and superscripts denote variables and iterations respectively. Sim-
(1) (1) (1)
ilarly, the first approximation [x1 , x2 , . . . , xn ] is used to compute the second iter-
ation of jacobi method. The process is repeated till the desired accuracy is obtained.


The (k + 1)th iteration can be obtained as;

(k+1) 1 (k) (k)


x1 = [b1 − (a12 x2 + a13 x3 + · · · + a1n x(k)
n )]
a11
(k+1) 1 (k) (k)
x2 = [b2 − (a21 x1 + a23 x3 + · · · + a2n x(k)
n )]
a22
(k+1) 1 (k) (k)
x3 = [b3 − (a31 x1 + a32 x2 + · · · + a3n x(k)
n )]
a33
.. .. ..
.= . .
1 (k) (k) (k) (k)
x(k+1)
n = [bn − (an1 x1 + an2 x2 + an3 x3 + · · · + an,n−1 xn−1 )]
ann
where k = 0, 1, 2, 3, . . . , n. The above Jacobi iteration formula can be written as
follows;
 n 
(k+1) 1 X (k)
xi = bi − aij xj , 1 ≤ i ≤ n, k = 0, 1, 2, . . . (1.2)
aii j=1,j6=i

Example 1.1.2. Compute the solution of the following system of linear equations by using the Jacobi iteration method;

7x1 − 3x2 + 2x3 + x4 = 12


2x1 − 6x2 + x3 + 2x4 =6
x1 + x2 + 5x3 + 2x4 = 12
x1 + 3x2 − 2x3 + 8x4 =5

Solution:
(0) (0) (0) (0)
Consider the initial approximation x1 = x2 = x3 = x4 = 0 we rewrite the given
system as follows,
1
x1 = (12 + 3x2 − 2x3 − x4 )
7
1
x2 = − (6 − 2x1 − x3 − 2x4 )
6
1
x3 = (12 − x1 − x2 − 2x4 )
5
1
x4 = (5 − x1 − 3x2 + 2x3 )
8


Using the initial approximation, we get


(1) 1 (0) (0) (0)
x1 = (12 + 3x2 − 2x3 − x4 )
7
12
=
7
= 1.714286
(1) 1 (0) (0) (0)
x2 = − (6 − 2x1 − x3 − 2x4 )
6
= −1.00000
(1) 1 (0) (0) (0)
x3 = (12 − x1 − x2 − 2x4 )
5
= 2.400000
(1) 1 (0) (0) (0)
x4 = (5 − x1 − 3x2 + 2x3 )
8
= 0.62500
The next approximation is,
(2) 1 (1) (1) (1)
x1 = (12 + 3x2 − 2x3 − x4 )
7
= 0.51071
(2) 1 (1) (1) (1)
x2 = − (6 − 2x1 − x3 − 2x4 )
6
= 0.1797
(2) 1 (1) (1) (1)
x3 = (12 − x1 − x2 − 2x4 )
5
= 2.00714
(2) 1 (1) (1) (1)
x4 = (5 − x1 − 3x2 + 2x3 )
8
= 1.38571
Similarly in tabulated form as;
itr. x1 x2 x3 x4
1 1.7142 -1.0000 2.4000 0.62500
2 0.5107 0.1797 2.0071 1.3857
3 1.0198 -0.0333 1.7076 0.9955
4 1.0698 -0.0435 1.8044 0.9369
5 1.0461 -0.0303 1.8199 0.9587
6 1.0443 -0.0283 1.8133 0.9505
7 1.0468 -0.0294 1.8125 0.95042
Here, we find
(7) (6)
|x1 − x1 | = |1.0468 − 1.0443| = 0.0025
(7) (6)
|x2 − x2 | = | − 0.0294 + 0.0283| = 0.0011
(7) (6)
|x3 − x3 | = |1.8125 − 1.8133| = 0.0008
(7)
|x4 − x4 (6)| = |0.95042 − 0.9505| = 0.0008


Since the errors in magnitude are all less than 0.005, we stop the iteration; to this accuracy the solution is

x1 ≈ 1.047, x2 ≈ −0.029, x3 ≈ 1.813 and x4 ≈ 0.950.
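A minimal Python sketch of the Jacobi iteration (1.2) is given below (illustrative, not part of the note), applied to the system of Example 1.1.2 with the same stopping tolerance.

```python
# Sketch: Jacobi iteration; each sweep uses only the values from the previous sweep.
def jacobi(A, b, tol=0.005, max_iter=50):
    n = len(A)
    x = [0.0] * n                    # initial approximation x(0) = 0
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if all(abs(x_new[i] - x[i]) < tol for i in range(n)):
            return x_new
        x = x_new
    return x

A = [[7, -3, 2, 1], [2, -6, 1, 2], [1, 1, 5, 2], [1, 3, -2, 8]]
b = [12, 6, 12, 5]
print(jacobi(A, b))   # about [1.047, -0.029, 1.813, 0.950], as in the table above
```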

1.1.2 Gauss - Seidel Method


From equation (1.1) can be rewritten the equation as follows;

1
x1 = [b1 − (a12 x2 + a13 x3 + · · · + a1n xn )]
a11
1
x2 = [b2 − (a21 x1 + a23 x3 + · · · + a2n xn )]
a22
1
x3 = [b3 − (a31 x1 + a32 x2 + · · · + a3n xn )]
a33
.. .. ..
.= . .
1
xn = [bn − (an1 x1 + an2 x2 + an3 x3 + · · · + an,n−1 xn−1 )]
ann
(0) (0) (0)
let the initial approximation be [x1 , x2 , . . . , xn ]. In this method we use the
latest available values of the variables are used. So the k th iteration of the Gauss-
Seidal method is obtained as

(k+1) 1 (k) (k)


x1 = [b1 − (a12 x2 + a13 x3 + · · · + a1n x(k) n )]
a11
(k+1) 1 (k+1) (k)
x2 = [b2 − (a21 x1 + a23 x3 + · · · + a2n x(k)
n )]
a22
(k+1) 1 (k+1) (k+1)
x3 = [b3 − (a31 x1 + a32 x2 + · · · + a3n x(k)
n )]
a33
.. .. ..
.= . .
1 (k+1) (k+1) (k+1)
x(k+1)
n = [bn − (an1 x1 + an2 xk+1
2 + an3 x3 + · · · + an,n−1 xn−1 )]
ann
The above system can be written as follows;
 i−1
(k+1) 1 X (k+1)
xi = bi − aij xj
aii j=1
n  (1.3)
X (k)
− aij xj , 1 ≤ i ≤ n, k = 0, 1, 2, . . .
j=i+1

Example 1.1.3. Find the solution of the following system of equations up to three decimal places

45x1 + 2x2 + 3x3 = 58


−3x1 + 22x2 + 2x3 = 47
5x1 + x2 + 20x3 = 67


Solution:
(0) (0) (0)
Consider the initial approximation be x1 = x2 = x3 = 0. we get

(1) 1 (0) (0)


x1 = (58 − 2x2 − 3x3 )
45
1
= (58)
45
= 1.2888
(1) 1 (1) (0)
x2 = (47 + 3x1 − 2x3 )
22
= 2.31212
(1) 1 (1) (1)
x3 = (67 − 5x1 − x2 )
20
= 2.91217

For second iteration


(2) 1 (1) (1)
x1 = (58 − 2x2 − 3x3 )
45
= 0.9919
(2) 1 (1) (1)
x2 = (47 + 3x1 − 2x3 )
22
= 2.0068
(2) 1 (1) (1)
x3 = (67 − 5x1 − x2 )
20
= 3.00166

Similarly in tabulated form

iter x1 x2 x3
1 1.2888 2.3121 2.9121
2 0.9919 2.0068 3.0016
3 0.9995 1.9997 3.0001
4 1.0000 1.9999 3.0000

Here, we find
(4) (3)
|x1 − x1 | = |1.0000 − 0.9995| = 0.0004
(4) (3)
|x2 − x2 | = |1.9999 − 1.9997| = 0.0002
(4) (3)
|x3 − x3 | = |3.0000 − 3.0001| = 0.0001

Since the errors in magnitude are less than 0.0005, the solution is


x1 = 1, x2 = 2, and x3 = 3.
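A corresponding Python sketch of the Gauss-Seidel iteration (1.3) follows (illustrative, not part of the note); the only change from the Jacobi sketch is that each freshly computed component is used immediately within the same sweep.

```python
# Sketch: Gauss-Seidel iteration; x[i] is updated in place during each sweep.
def gauss_seidel(A, b, tol=0.0005, max_iter=100):
    n = len(A)
    x = [0.0] * n
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if all(abs(x[i] - x_old[i]) < tol for i in range(n)):
            break
    return x

A = [[45, 2, 3], [-3, 22, 2], [5, 1, 20]]
b = [58, 47, 67]
print(gauss_seidel(A, b))   # about [1.0, 2.0, 3.0]
```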


1.1.3 Successive Over relaxation (SOR) Methods


Iterative methods are frequently referred to as relaxation methods, since the iterative procedure can be viewed as relaxing the initial guess x(0) towards the exact value of x. The over-relaxation idea is very effective in accelerating the convergence rate of the Gauss-Jacobi and Gauss-Seidel methods; relaxation methods are used to accelerate the convergence of both.

[Figure 1.1: Relaxation Method]

If we use simultaneous displacement (as in the Jacobi iteration), then the relaxation method is as follows;
 n 
k+1 (k) ω X (k)
xi = (1 − ω)xi + bi − aij xj , 1 ≤ i ≤ n, k = 0, 1, 2, . . . (1.4)
aii j=1,j6=i

If the relaxation method for successive displacement (Gauss-seidel iteration).


Then the relaxation method is as follows;

k+1 (k) ω
xi = (1 − ω)xi + bi
aii
i−1
X n
X  (1.5)
(k+1) (k)
− aij xj − aij xj , 1 ≤ i ≤ n, k = 0, 1, 2, . . .
j=1 j=i+1

Both equation (1.4) and (1.5) are called Relaxation method and

ω = 1 gives the Gauss-Jacobi and Gauss-Seidel methods

0 < ω < 1: under-relaxation
1 < ω < 2: over-relaxation
ω ≥ 2.0: the iteration diverges

Since successive displacement converges faster than simultaneous displacement, the Successive Over-Relaxation (SOR) method is the one usually suggested for the computation of linear systems of equations.

Example 1.1.4. Solve the following system of equations by the SOR method with ω = 1.25;

27x1 + 6x2 − x3 = 85
6x1 + 15x2 − 2x3 = 72
x1 + x2 + 54x3 = 110

Solution:
(0) (0) (0)
Consider the initial approximation be x1 = x2 = x3 = 0. we get


Using the SOR method (1.5), the equations are rewritten as
\[
\begin{aligned}
x_1^{(1)} &= (1-\omega)x_1^{(0)} + \frac{\omega}{27}\big(85 - 6x_2^{(0)} + x_3^{(0)}\big)\\
x_2^{(1)} &= (1-\omega)x_2^{(0)} + \frac{\omega}{15}\big(72 - 6x_1^{(1)} + 2x_3^{(0)}\big)\\
x_3^{(1)} &= (1-\omega)x_3^{(0)} + \frac{\omega}{54}\big(110 - x_1^{(1)} - x_2^{(1)}\big)
\end{aligned}
\]
Using the initial approximation we get
\[
\begin{aligned}
x_1^{(1)} &= -0.25x_1^{(0)} + 0.046\big(85 - 6x_2^{(0)} + x_3^{(0)}\big) = 3.910\\
x_2^{(1)} &= -0.25x_2^{(0)} + 0.083\big(72 - 6x_1^{(1)} + 2x_3^{(0)}\big) = 4.029\\
x_3^{(1)} &= -0.25x_3^{(0)} + 0.023\big(110 - x_1^{(1)} - x_2^{(1)}\big) = 2.347
\end{aligned}
\]
The subsequent iterations, in tabulated form, are

itr. x1 x2 x3
1 3.910 4.029 2.347
2 1.929 4.398 1.798
3 2.297 4.031 1.935
4 2.312 4.138 1.898
5 2.277 4.123 1.908
6 2.291 4.121 1.905
7 2.287 4.123 1.906
8 2.288 4.122 1.906
9 2.288 4.122 1.906

Hence, the required solution is


x1 = 2.288, x2 = 4.122, and x3 = 1.906.
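As with Gauss-Seidel, the SOR update (1.5) is easy to mechanize. The following Python sketch is illustrative; the routine name and stopping tolerance are assumptions, not part of these notes.

```python
import numpy as np

def sor(A, b, x0, omega=1.25, tol=5e-4, max_iter=200):
    """Successive over-relaxation, equation (1.5)."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float).copy()
    n = len(b)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x, k + 1

A = [[27, 6, -1], [6, 15, -2], [1, 1, 54]]
b = [85, 72, 110]
print(sor(A, b, [0, 0, 0]))   # tends to [2.288, 4.122, 1.906]
```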

1.2 Eigenvalue and Eigenvector


Definition 1.2.1. Let A be a square matrix of dimension n × n. The scalar λ is said to be an eigenvalue of A if there exists a non-zero vector X of dimension n such that
\[
AX = \lambda X.
\]
The non-zero vector X is called the eigenvector corresponding to the eigenvalue λ. Thus, if λ is an eigenvalue of a matrix A,
then
\[
AX = \lambda X, \quad X \neq 0 \tag{1.6}
\]
\[
\text{or} \quad [A - \lambda I]X = 0. \tag{1.7}
\]
Equation (1.7) has a non-trivial solution if and only if A − λI is singular, that is,
\[
|A - \lambda I| = 0.
\]
Thus if
\[
A = \begin{pmatrix}
a_{11} & a_{12} & \dots & a_{1n}\\
a_{21} & a_{22} & \dots & a_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{n1} & a_{n2} & \dots & a_{nn}
\end{pmatrix},
\]
then
\[
|A - \lambda I_n| =
\begin{vmatrix}
a_{11}-\lambda & a_{12} & \dots & a_{1n}\\
a_{21} & a_{22}-\lambda & \dots & a_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{n1} & a_{n2} & \dots & a_{nn}-\lambda
\end{vmatrix}
= (-1)^n\lambda^n + k_1\lambda^{n-1} + k_2\lambda^{n-2} + \cdots + k_n = 0,
\]
Which is called the characteristic equation of the matrix A.

1.2.1 Power Method


The power method is one of the iterative approaches best suited to machine computation. With it, the numerically greatest eigenvalue and the corresponding eigenvector can be computed, which is useful in analyzing many engineering problems.

When do we stop the iteration


The iterations are stopped when all the magnitudes of the differences of the ratios
are less than the given error tolerance.
The choice of the initial approximation vector x0 is important. If no suitable
approximation is available, we can choose x0 with all its components as one unit,
that is, x0 = [1, 1, 1, ..., 1]^T. However, this initial approximation should not be orthogonal to the dominant eigenvector.

Definition 1.2.2. If A is a square matrix of size n and λ1, λ2, ..., λn are the eigenvalues of A satisfying |λ1| > |λ2| ≥ ... ≥ |λn|, then λ1 is called the dominant (absolutely largest) eigenvalue, and any eigenvector corresponding to λ1 is known as a dominant eigenvector.

Algorithm
1. Start

2. Define initial eigen vector X.

3. Calculate Y = AX.

4. Find the largest element in magnitude of matrix Y and assign it to λ.

5. Calculate the fresh value X = (1/λ) × Y.

6. If |λ_n − λ_{n−1}| > ε (the prescribed error tolerance), go to step 3.
   Else print λ_n as the dominant eigenvalue and X as the corresponding eigenvector.

7. End.
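A compact Python sketch of this algorithm follows. One deliberate deviation is noted here: the eigenvalue is estimated with the Rayleigh quotient x^T(Ax)/(x^T x) instead of the largest element of Y, because that estimate is insensitive to the sign ambiguity that arises when two components of the eigenvector tie in magnitude (as happens in Example 1.2.1 below). The routine name and tolerance are illustrative assumptions.

```python
import numpy as np

def power_method(A, x0, tol=1e-6, max_iter=200):
    """Dominant eigenvalue and eigenvector by the power method.
    The eigenvalue is estimated with the Rayleigh quotient, a slight
    variation on step 4 of the algorithm above."""
    A = np.asarray(A, float)
    x = np.asarray(x0, float)
    lam_old = np.inf
    for _ in range(max_iter):
        y = A @ x
        lam = (x @ y) / (x @ x)       # Rayleigh-quotient estimate of lambda
        x = y / np.max(np.abs(y))     # rescale so the largest entry has magnitude 1
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, x

A = [[15, -4, -3], [-10, 12, -6], [-20, 4, -2]]
lam, x = power_method(A, [1, 0, 0])
print(round(lam, 4), x)   # dominant eigenvalue 20, eigenvector ~ [1, -0.5, -1] up to sign
```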

Example 1.2.1. Find the largest eigenvalue and the corresponding eigenvector of the matrix
\[
A = \begin{pmatrix} 15 & -4 & -3\\ -10 & 12 & -6\\ -20 & 4 & -2 \end{pmatrix}.
\]
Solution:
Let the initial guess of the eigenvector corresponding to the largest eigenvalue be X = [1  0  0]^T.

Iteration first
\[
Y = AX^{(0)} = \begin{pmatrix} 15 & -4 & -3\\ -10 & 12 & -6\\ -20 & 4 & -2 \end{pmatrix}\begin{pmatrix}1\\0\\0\end{pmatrix} = \begin{pmatrix}15\\-10\\-20\end{pmatrix} = 20\begin{pmatrix}0.75\\-0.5\\-1\end{pmatrix} = \lambda^{(1)}X^{(1)}
\]
The first approximation to the eigenvalue is λ^{(1)} = 20 and the corresponding eigenvector is X^{(1)} = [0.75  −0.5  −1]^T. Repeating Y = AX^{(k)} = λ^{(k+1)}X^{(k+1)}, the subsequent iterations give

λ^{(2)} = 16.25,   X^{(2)} = [1  −0.4462  −0.9231]^T
λ^{(3)} = 19.9386, X^{(3)} = [0.9807  −0.4923  −1]^T
λ^{(4)} = 19.6797, X^{(4)} = [1  −0.4936  −0.9951]^T
λ^{(5)} = 19.9842, X^{(5)} = [0.9988  −0.4980  −1]^T
λ^{(6)} = 19.9740, X^{(6)} = [1  −0.4988  −0.9997]^T
λ^{(7)} = 19.9958, X^{(7)} = [0.9999  −0.4995  −1]^T
λ^{(8)} = 19.9965, X^{(8)} = [1  −0.4997  −1]^T

The eigenvalue estimates of the seventh and eighth iterations are almost the same. Hence the largest eigenvalue is λ = 19.9965 (approximately 20) and the corresponding eigenvector is X = [1  −0.4997  −1]^T.

1.2.2 Inverse Power Method


When the smallest (in absolute value) eigenvalue of a matrix A is distinct, its value can be found using a variation of the power method called the inverse power method. It is used to compute the smallest (in magnitude) eigenvalue of a given square matrix A, and it amounts to computing the largest (in magnitude) eigenvalue of the inverse matrix A^{-1}.

Theorem 1.2.1. Let λ_i be an eigenvalue of a matrix A. Then 1/λ_i is an eigenvalue of the matrix A^{-1}, and the eigenvector X_i of A^{-1} is the same as that of A, i.e.
\[
A^{-1}X_i = \frac{1}{\lambda_i}X_i.
\]
That is, 1/λ_i is an eigenvalue of A^{-1} with the same eigenvector X_i as the matrix A.

Example 1.2.2. Determine the smallest eigenvalue and the corresponding eigenvector of the matrix
\[
A = \begin{pmatrix} 10 & 6 & 7\\ 1 & 7 & -2\\ 2 & 2 & 2 \end{pmatrix}
\]
by using the inverse power method.

Solution:

The inverse of the matrix A is given by
\[
A^{-1} = \frac{1}{60}\begin{pmatrix} 18 & 2 & -61\\ -6 & 6 & 27\\ -12 & -8 & 64 \end{pmatrix}.
\]
To compute the largest eigenvalue of the matrix A^{-1}, let us start the iterations with the initial vector X^{(0)} = [1  1  1]^T.

Iteration first
\[
Y^{(0)} = A^{-1}X^{(0)} = \frac{1}{60}\begin{pmatrix} 18 & 2 & -61\\ -6 & 6 & 27\\ -12 & -8 & 64 \end{pmatrix}\begin{pmatrix}1\\1\\1\end{pmatrix} = \frac{1}{60}\begin{pmatrix}-41\\27\\44\end{pmatrix} = \frac{44}{60}\begin{pmatrix}-0.931818\\0.613636\\1\end{pmatrix}
\]
\[
\lambda^{(1)} = \frac{44}{60} \quad \text{and} \quad X^{(1)} = [-0.931818\ \ 0.613636\ \ 1]^T
\]
Iteration second
\[
Y^{(1)} = A^{-1}X^{(1)} = \frac{1}{60}\begin{pmatrix} 18 & 2 & -61\\ -6 & 6 & 27\\ -12 & -8 & 64 \end{pmatrix}\begin{pmatrix}-0.931818\\0.613636\\1\end{pmatrix} = \frac{76.545456}{60}\begin{pmatrix}-1\\0.473872\\0.918052\end{pmatrix}
\]
\[
\lambda^{(2)} = \frac{76.545456}{60} \quad \text{and} \quad X^{(2)} = [-1\ \ 0.473872\ \ 0.918052]^T
\]
The other iterations give
λ^{(3)} = 73.053444/60,  X^{(3)} = [−1.00000  0.460357  0.9166493]^T
λ^{(4)} = 72.994881/60,  X^{(4)} = [−1.00000  0.459096  0.9176354]^T
λ^{(5)} = 73.057564/60,  X^{(5)} = [−1.00000  0.458963  0.9178505]^T
λ^{(6)} = 73.070930/60,  X^{(6)} = [−1.00000  0.458948  0.9178856]^T
λ^{(7)} = 73.073082/60,  X^{(7)} = [−1.00000  0.458946  0.9178907]^T
λ^{(8)} = 73.073402/60,  X^{(8)} = [−1.00000  0.458945  0.9178918]^T
λ^{(9)} = 73.073441/60,  X^{(9)} = [−1.00000  0.458945  0.9178918]^T
λ^{(10)} = 73.073448/60, X^{(10)} = [−1.00000  0.458945  0.9178911]^T

The approximate value of the largest eigenvalue of A^{-1} is λ^{(10)} = 73.073448/60 = 1.2178908. Hence the smallest eigenvalue of A is 1/1.2178908 = 0.8210916775.
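A small Python sketch of the inverse power method follows. Rather than forming A^{-1} explicitly as above, it solves A y = x at each step, which realizes the same iteration numerically; the routine name and tolerance are illustrative assumptions.

```python
import numpy as np

def inverse_power_method(A, x0, tol=1e-8, max_iter=200):
    """Smallest-magnitude eigenvalue of A via the power method on A^{-1}."""
    A = np.asarray(A, float)
    x = np.asarray(x0, float)
    mu_old = np.inf
    for _ in range(max_iter):
        y = np.linalg.solve(A, x)       # y = A^{-1} x without forming the inverse
        mu = (x @ y) / (x @ x)          # Rayleigh estimate of the dominant eigenvalue of A^{-1}
        x = y / np.max(np.abs(y))
        if abs(mu - mu_old) < tol:
            break
        mu_old = mu
    return 1.0 / mu, x                  # smallest eigenvalue of A and its eigenvector

A = [[10, 6, 7], [1, 7, -2], [2, 2, 2]]
lam, x = inverse_power_method(A, [1, 1, 1])
print(lam)   # close to 0.8211, matching the hand computation above
```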

1.2.3 Shifted Power Method


The shifted power method is another variation of the power method. It is used to compute the eigenvalue farthest from, or nearest to, a given scalar k.

Theorem 1.2.2. Let λi be an eigenvalue of matrix A, then (λi − k) is an eigenvalue


of the matrix (A − kI) with the same eigenvector as that of matrix A. i.e.,

(A − kI)Xi = (λi − k)Xi

Eigenvalue farthest from a given scalar

To compute the eigenvalue of matrix A farthest from a given number k, we first find the largest (in magnitude) eigenvalue of the matrix (A − kI); that eigenvalue added to k is the desired eigenvalue of matrix A.
For example, let us assume the eigenvalues of a matrix A are −5, 2 and 8, and we want to compute the eigenvalue that is farthest from the scalar k = 5. The eigenvalues of the matrix (A − 5I) are −10, −3 and 3. The computational procedure is to compute the largest (in magnitude) eigenvalue of the matrix (A − 5I), i.e. −10, and then add the scalar 5 to it to get the desired eigenvalue, i.e. −10 + 5 = −5.

Eigenvalue nearest to a given scalar

To compute the eigenvalue of matrix A nearest to a number k, we first find the largest (in magnitude) eigenvalue of the matrix (A − kI)^{-1}; the reciprocal of that eigenvalue added to k is the desired eigenvalue of matrix A.
For example, let us assume the eigenvalues of a matrix A are −1, 4.5 and 7, and we want to compute the eigenvalue that is nearest to 4. We have:
Eigenvalues of matrix A: −1, 4.5 and 7.
Eigenvalues of matrix (A − 4I): −5, 0.5 and 3.
Eigenvalues of matrix (A − 4I)^{-1}: −1/5, 2 and 1/3.
The computational procedure is to compute the largest (in magnitude) eigenvalue of the matrix (A − 4I)^{-1}, i.e. 2. Its reciprocal 0.5 added to k (= 4) gives the desired eigenvalue 4.5 of matrix A.

Example 1.2.3. Determine the eigenvalue farthest from 4 for the matrix
\[
A = \begin{pmatrix} 2 & 6 & -3\\ 5 & 3 & -3\\ 5 & -4 & 4 \end{pmatrix}.
\]
Start the iteration with the initial vector X^{(0)} = [1  0  1]^T.

Solution:
To compute the eigenvalue of A farthest from 4, we find the largest (in magnitude) eigenvalue of the matrix A − 4I and then add the scalar 4 to it.
\[
A - 4I = \begin{pmatrix} -2 & 6 & -3\\ 5 & -1 & -3\\ 5 & -4 & 0 \end{pmatrix}
\]

Using the initial vector X (0) = [1 0 1]T for matrix A − 4I , the largest eigenvalue
of this matrix is computed as follows

Iteration first
\[
Y^{(0)} = (A-4I)X^{(0)} = \begin{pmatrix} -2 & 6 & -3\\ 5 & -1 & -3\\ 5 & -4 & 0 \end{pmatrix}\begin{pmatrix}1\\0\\1\end{pmatrix} = \begin{pmatrix}-5\\2\\5\end{pmatrix} = 5\begin{pmatrix}-1\\0.4\\1\end{pmatrix}
\]

λ(1) = 5 and X (1) = [−1 0.4 1]T

Other iterations are given by

λ(2) = 8.4 X (2) = [0.166667 − 1.0000 − 0.7857142]T


λ(3) = 4.833333 X (3) = [−0.822660 0.866995 1.00000]T
λ(4) = 7.980395 X (4) = [0.482099 − 1.000000 − 0.9500004]T
λ(5) = 6.410494 X (5) = [−0.641791 0.976601 1.00000]T
λ(6) = 7.185556 X (6) = [0.576599 − 1.00000 − 0.9902316]T
λ(7) = 6.882997 X (7) = [−0.607658 0.995742 1.00000]T
λ(8) = 7.034031 X (8) = [0.595643 − 1.00000 − 0.9981848]T
λ(9) = 6.978212 X (9) = [−0.601405 0.999219 1.000000]T
λ(10) = 7.006245 X (10) = [0.599198 − 1.000000 − 0.99966610]T

These iterations converge to the value 7 in magnitude. Since the elements of the eigenvectors alternate in sign, the eigenvalue is −7. Thus the largest (in magnitude) eigenvalue of the matrix (A − 4I) is −7, so the corresponding eigenvalue of A is −7 + 4 = −3. The eigenvalue λ = −3 of matrix A is the one farthest from the scalar 4.
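The shifted variant can reuse the same machinery on A − kI, as the following illustrative Python sketch shows (again with a Rayleigh-quotient estimate of the dominant eigenvalue of the shifted matrix; the helper name is an assumption).

```python
import numpy as np

def eigenvalue_farthest_from(A, k, x0, tol=1e-8, max_iter=500):
    """Eigenvalue of A farthest from the scalar k (shifted power method)."""
    B = np.asarray(A, float) - k * np.eye(len(A))   # shifted matrix A - kI
    x = np.asarray(x0, float)
    mu_old = np.inf
    for _ in range(max_iter):
        y = B @ x
        mu = (x @ y) / (x @ x)      # dominant eigenvalue estimate for A - kI
        x = y / np.max(np.abs(y))
        if abs(mu - mu_old) < tol:
            break
        mu_old = mu
    return mu + k, x                # shift back to an eigenvalue of A

A = [[2, 6, -3], [5, 3, -3], [5, -4, 4]]
lam, x = eigenvalue_farthest_from(A, 4, [1, 0, 1])
print(round(lam, 4))    # approaches -3, as in Example 1.2.3
```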


Lecture Note -9/10

M.sc. Structural Engineering


School of Engineering

By
Hem Raj Pandey
(Assistant Professor)
Pokhara University
Monday, June 29, 2020

Contents

1 Special Function 3
1.1 Legendre’s Equation and Legendre’s Function . . . . . . . . . . . . . 3
1.1.1 Legendre’s function of first and second kind . . . . . . . . . . 6
1.1.2 Rodrigue’s Formula . . . . . . . . . . . . . . . . . . . . . . . . 7
1.1.3 Legendre’s Polynomials . . . . . . . . . . . . . . . . . . . . . . 8
1.1.4 Generating Function . . . . . . . . . . . . . . . . . . . . . . . 10
1.1.5 Recurrence Relation . . . . . . . . . . . . . . . . . . . . . . . 10
1.2 Bessel’s Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2.1 Recurrence Relations . . . . . . . . . . . . . . . . . . . . . . . 13

2
Chapter 1

Special Function

1.1 Legendre’s Equation and Legendre’s Function


The differential equation
\[
(1-x^2)\frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + n(n+1)y = 0 \tag{1.1}
\]
is called Legendre's equation, and the solution of this differential equation is called Legendre's function. Equation (1.1) can be written as
\[
\frac{d}{dx}\Big[(1-x^2)\frac{dy}{dx}\Big] + n(n+1)y = 0.
\]
We assume a series solution of the form
\[
y = \sum_{r=0}^{\infty} a_r x^{k-r} = a_0x^k + a_1x^{k-1} + a_2x^{k-2} + \dots \tag{1.2}
\]

Differentiating equation (1.2) with respect to x,
\[
y' = \sum_{r=0}^{\infty} a_r(k-r)x^{k-r-1} \tag{1.3}
\]
\[
\text{and} \quad y'' = \sum_{r=0}^{\infty} a_r(k-r)(k-r-1)x^{k-r-2}. \tag{1.4}
\]
Putting the values of (1.2), (1.3) and (1.4) in equation (1.1),
\[
(1-x^2)\sum_{r=0}^{\infty} a_r(k-r)(k-r-1)x^{k-r-2} - 2x\sum_{r=0}^{\infty} a_r(k-r)x^{k-r-1} + n(n+1)\sum_{r=0}^{\infty} a_r x^{k-r} = 0
\]

3
Special Function Numerical Method

or ,
∞ 
X
ar (k − r)(k − r − 1)xk−r−2 − ar (k − r)(k − r − 1)xk−r
r=0
−2ar (k − r)xk−r + n(n + 1)ar xk−r = 0

or ,
∞ 
X 
k−r−2 k−r
ar (k − r)(k − r − 1)x − ar {(k − r)(k − r + 1) − n(n + 1)}x =0
r=0

Comparing the coefficient of x^k (r = 0):
\[
a_0[n(n+1) - k(k+1)] = 0.
\]
Since a_0 ≠ 0, n² + n − k² − k = 0, or (n − k)(n + k + 1) = 0, so
\[
k = n \quad \text{and} \quad k = -(n+1). \tag{1.5}
\]
Comparing the coefficient of x^{k−1} (r = 1):
\[
a_1[k(k-1) - n(n+1)] = 0, \quad \text{i.e.} \quad a_1[k^2 - k - n^2 - n] = a_1(k+n)(k-n-1) = 0.
\]
For the values of k in (1.5), (k + n)(k − n − 1) ≠ 0, which implies that a_1 = 0.
Again, comparing the coefficient of x^{k−r−2} and setting it equal to zero to obtain the recurrence relation between the coefficients, we get

ar (k − r)(k − r − 1) − ar+2 [(k − r − 2)(k − r − 1)


−n(n + 1)] = 0

(k − r)(k − r − 1)
∴ ar+2 = ar
(k − r − 1)(k − r − 2) − n(n + 1)
Here,
(k − r − 1)(k − r − 2) − n2 − n
= (k − r − 1)(k − r − 1 − 1) − n2 − n
= (k − r − 1)2 − (k − r − 1) − n2 − n
= [(k − r − 1 + n)(k − r − 1 − n) − (k − r − 1 + n)]
= (k − r − 1 + n)(k − r − n − 2)
(k − r)(k − r − 1)
∴, ar+2 = ar (1.6)
(k − r − 1 + n)(k − r − n − 2)
Now we are going to find the value of a0 , a1 , a2 , a3 , . . . , But a1 = a3 = a5 = a7 =
0 = . . . . For the value of equation (1.5) the two cases arises.

4
Hem Raj Pandey, Assistant Professor, Pokhara University
Special Function Numerical Method

Case-I. when k = n we get,

(n − r)(n − r − 1)
ar+2 = ar
(n − r − 1 + n)(n − r − n − 2)
(n − r)(n − r − 1)
= −ar
(r + 2)(2n − r − 1)

If r = 0

n(n − 1)
a2 = −a0
2(2n − 1)
at r = 2

(n − 2)(n − 3)
a4 = − a2
(2n − 3)4
n(n − 1)(n − 2)(n − 3)
=− a0
2.4(2n − 1)(2n − 3)
n(n − 1)(n − 2)(n − 3)(n − 4)(n − 5)
a6 = − a0
2.4.6(2n − 1)(2n − 3)(3n − 5)
..
.
n(n − 1)(n − 2)(n − 3)(n − 4), . . . , (n − 2r + 1)
a2r = (−1)r a0
2.4.6 . . . 2r(2n − 1)(2n − 3) . . . (2n − 2r + 1)
Putting the value in equation (1.2) we get

n(n − 1) n−2 n(n − 1)(n − 2)(n − 3) n−4
y = a0 x n − x + x
2(2n − 1) 2.4(2n − 1)(2n − 3)
 (1.7)
n(n − 1)(n − 2)(n − 3)(n − 4)(n − 5) n−6
− x + ...
2.4.6(2n − 1)(2n − 3)(3n − 5)

Case-II. when k = −(n + 1) we get,

(n + r + 1)(n + r + 2)
ar+2 = ar
(r + 2)(2n + r + 3)

for, r = 0, 2, 4, . . . , we get If r = 0

(n + 1)(n + 2)
a2 = a0
2(2n + 3)
at r = 2

5
Hem Raj Pandey, Assistant Professor, Pokhara University
Special Function Numerical Method

(n + 3)(n + 4)
a4 = a2
(2n + 5)4
(n + 1)(n + 2)(n + 3)(n + 4)
= a0
2.4(2n + 3)(2n + 5)
(n + 1)(n + 2)(n + 3)(n + 4)(n + 5)(n + 6)
a6 = a0
2.4.6(2n + 3)(2n + 5)(2n + 7)
..
.
(n + 1)(n + 2)(n + 3)(n + 4), . . . , (n + 2r)
a2r = a0
2.4.6 . . . 2r(2n + 3)(2n + 5)(2n + 7) . . . (n + 2r + 1)
Putting the value in equation (1.2) we get

(n + 1)(n + 2) −n−3 (n + 1)(n + 2)(n + 3)(n + 4) −n−5
y = a0 x−n−1 + x + x
2(2n + 3) 2.4(2n + 3)(2n + 5)

(n + 1)(n + 2)(n + 3)(n + 4)(n + 5)(n + 6) −n−7
+ x + ...
2.4.6(2n + 3)(2n + 5)(2n + 7)
(1.8)
Hence, the equation (1.7) and (1.8) are legendre’s function.

1.1.1 Legendre’s function of first and second kind


The solution of Legendre's equation
\[
(1-x^2)\frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + n(n+1)y = 0 \tag{1.9}
\]
at k = n is
\[
y = a_0\Big[x^n - \frac{n(n-1)}{2(2n-1)}x^{n-2} + \frac{n(n-1)(n-2)(n-3)}{2\cdot4(2n-1)(2n-3)}x^{n-4} - \frac{n(n-1)(n-2)(n-3)(n-4)(n-5)}{2\cdot4\cdot6(2n-1)(2n-3)(2n-5)}x^{n-6} + \dots\Big],
\]
where a0 is a constant and n is a positive integer. If a0 = 1·3·5···(2n−1)/n!, then the solution is
\[
y = P_n(x) = \frac{1\cdot3\cdot5\cdots(2n-1)}{n!}\Big[x^n - \frac{n(n-1)}{2(2n-1)}x^{n-2} + \frac{n(n-1)(n-2)(n-3)}{2\cdot4(2n-1)(2n-3)}x^{n-4} + \dots\Big],
\]
which is called Legendre's function of the first kind; for different values of n it gives the Legendre polynomials. At k = −(n+1), with a0 = n!/[1·3·5···(2n+1)], the solution is
\[
y = Q_n(x) = \frac{n!}{1\cdot3\cdot5\cdots(2n+1)}\Big[x^{-n-1} + \frac{(n+1)(n+2)}{2(2n+3)}x^{-n-3} + \frac{(n+1)(n+2)(n+3)(n+4)}{2\cdot4(2n+3)(2n+5)}x^{-n-5} + \dots\Big],
\]
which is called Legendre's function of the second kind.

6
Hem Raj Pandey, Assistant Professor, Pokhara University
Special Function Numerical Method

1.1.2 Rodrigue’s Formula


Theorem 1.1.1. For a positive integer n,
\[
P_n(x) = \frac{1}{2^n n!}\frac{d^n}{dx^n}(x^2-1)^n \tag{1.10}
\]
is called Rodrigue's formula.

Proof. Let
v = (x2 − 1)n (1.11)
Then
dv
= n(x2 − 1)n−1 (2x)
dx
Multiplying both sides by (x2 − 1), we get

dv
(x2 − 1) = 2n(x2 − 1)n x
dr
dv
(x2 − 1)
= 2nvx (1.12)
dx
Now, differentiating (1.12) , (n + 1) times by using Leibnitz’s theorem , we get

dn+2 v dn+1 v dn v
(x2 − 1) + ( n+1
C 1 )2x + ( n+1
C 2 )2
dxn+2 dxn+1 dxn
n+1 n

d v d v
= 2n x n+1 + (n+1 C1 ).1. n
dx dx

which implies that

2 dn+2 v n+1  dn+1 v


(x − 1) n+2 + 2x C1 − 1
dx dxn+1
n
d v
+2(n+1 C2 − nn+1 C1 ) n = 0
dx


dn+2 v dn+1 v dn v
(x2 − 1) + 2x − n(n + 1) =0 (1.13)
dxn+2 dxn+1 dxn
dn v
If we put dxn
= y, we get

d2 y dy
(x2 − 1) + 2x − n(n + 1)y = 0
dx2 dx
or
d2 y dy
(1 − x2 ) 2
− 2x + n(n + 1)y = 0
dx dx
dn v
This shows that y = dxn
is a solution of legendre’s equation. let us suppose,

Pn (x) = cvn (1.14)

7
Hem Raj Pandey, Assistant Professor, Pokhara University
Special Function Numerical Method

where c is a constant. For the value of c. Here,

v = (x2 − 1)n
= (x + 1)n (x − 1)n

so that
dn v n d
n
= (x + 1) (x − 1)n
dxn dxn
dn−1
+n C1 n(x + 1)n−1 n−1
(x − 1)n + . . .
dx
ndn
+ (x − 1) n
(x + 1)n = 0.
dx
[Dn (uv) = uDn v +n C1 DuDn−1 v +n C2 D2 uDn−2 v + · · · +n Cn Dn u.v]

If x = 1, then
dn v
n
= 2n n!
dx
Since all term disappear as (n − 1) is the factor of every term except first. From
equation (1.14) at x = 1, we get

Pn (1) = 1
1
C.2n .n! = 1, C=
2n n!
Putting the value of c in equation (1.14) we get,

1 dn v
Pn (x) =
2n n! dxn
1 dn 2
= n n
(x − 1)n
2 n! dx

1.1.3 Legendre’s Polynomials


From
1 dn 2
Pn (x) = (x − 1)n
2n n! dxn

we get P_0(x) = 1, P_1(x) = x, and
\[
P_2(x) = \frac{3x^2 - P_0(x)}{2}, \qquad x^2 = \frac{2P_2(x) + P_0(x)}{3},
\]
\[
P_3(x) = \frac{5x^3 - 3x}{2} = \frac{5x^3 - 3P_1(x)}{2}, \qquad x^3 = \frac{2P_3(x) + 3P_1(x)}{5},
\]
\[
P_4(x) = \frac{1}{8}\big(35x^4 - 30x^2 + 3\big), \qquad x^4 = \frac{8P_4(x) + 20P_2(x) + 7P_0(x)}{35}.
\]
Example 1.1.1. Prove that P_n(0) = 0 when n is odd.

Solution:
\[
P_n(x) = \frac{1\cdot3\cdot5\cdots(2n-1)}{n!}\Big[x^n - \frac{n(n-1)}{2(2n-1)}x^{n-2} + \dots\Big]
\]
If n = 2m + 1 is odd, every term of P_{2m+1}(x) contains an odd power of x (namely x^{2m+1}, x^{2m-1}, ..., x). Substituting x = 0 therefore gives
\[
P_{2m+1}(0) = 0,
\]
that is, P_n(0) = 0 whenever n is odd.

Example 1.1.2. Express f(x) = 4x³ + 6x² + 7x + 2 in terms of Legendre polynomials.

Solution:
Let
\[
\begin{aligned}
4x^3 + 6x^2 + 7x + 2 &= aP_3(x) + bP_2(x) + cP_1(x) + dP_0(x)\\
&= a\Big(\frac{5x^3 - 3x}{2}\Big) + b\Big(\frac{3x^2 - 1}{2}\Big) + c(x) + d(1)\\
&= \frac{5a}{2}x^3 + \frac{3b}{2}x^2 + \Big(c - \frac{3a}{2}\Big)x + \Big(d - \frac{b}{2}\Big)
\end{aligned} \tag{1.15}
\]
Equating coefficients, we get
\[
4 = \frac{5a}{2} \;\Rightarrow\; a = \frac{8}{5}, \qquad b = 4, \qquad c = \frac{47}{5}, \qquad d = 4.
\]
Thus
\[
4x^3 + 6x^2 + 7x + 2 = \frac{8}{5}P_3(x) + 4P_2(x) + \frac{47}{5}P_1(x) + 4P_0(x).
\]

1.1.4 Generating Function


The function which generates P_n(x), n = 0, 1, 2, 3, ..., is called the generating function for P_n(x). The generating function for P_n(x) is (1 − 2xz + z²)^{-1/2}.

Theorem 1.1.2. For a positive integer n,
\[
\sum_{n=0}^{\infty} z^n P_n(x) = (1 - 2xz + z^2)^{-1/2}.
\]
Proof. Expanding (1 − 2xz + z²)^{-1/2} binomially, we get

1 1 1 3 z2
(1 − 2xz + z 2 )− 2 = 1 + z(2x − z) + (2x − z)2
2 2 2 2!
1 3 5 z3 135 (2n − 1)z n
+ (2x − z)3 , . . . , ,..., (2x − z)n + . . .
2 2 2 3!  2
 2 2  2.n!
3 2 1 2 5 3 3
= 1 + xz + x − z + x − x z3
2 2 2 2

1.3. . . . (2n − 1) n n(n − 1) n−2
+ ··· + x − x
n! 2.(2n − 1)

n(n − 1)(n − 2)(n − 3)
+ ...
2.4(2n − 1)(2n − 3)
= P0 (x) + zP1 (x) + z 2 P2 (x) + z 3 P3 (x) + · · · + z n Pn (x) + . . .
X∞
= z n Pn (x)
0

1.1.5 Recurrence Relation


Theorem 1.1.3. For a positive integer n, prove that
\[
(2n+1)\,x\,P_n(x) = (n+1)P_{n+1}(x) + nP_{n-1}(x).
\]

Proof. Given,

1
X
z n Pn (x) = (1 − 2xz + z 2 )− 2 (1.16)
0

10
Hem Raj Pandey, Assistant Professor, Pokhara University
Special Function Numerical Method

Differentiating both sides with respect to z we get


1 3
X
(1 − 2xz + z 2 )− 2 (−2x + 2z) = nz n−1 Pn (x)
2
or ,
X x−z
nz n−1 Pn (x) = 3 (1.17)
(1 − 2xz + z 2 ) 2
Multiplying both sides by (1 − 2xz + z 2 )
x−z X
3 = nz n−1 (1 − 2xz + z 2 )Pn (x)
(1 − 2xz + z )2 2
X
nz n−1 Pn (x) − 2xnPn (x)z n + nPn (x)z n+1

=
X X X X
(x − z) Pn (x)z n = nPn (x)z n−1 − 2nxPn (x)z 2 + nPn (x)z n+1
1
X
Since, (1 − 2xz + z 2 )− 2 = Pn (x)z n
Equating the coefficient of z n on both sides
xPn (x) − Pn−1 (x) = (n + 1)Pn+1 (x) − 2xPn (x)n + (n − 1)Pn−1 (x)
xPn (x)(1 + 2n) = (n + 1)Pn+1 (x) + nPn−1 (x)
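The recurrence relation just proved gives a convenient way to evaluate the Legendre polynomials numerically. The short Python sketch below is an illustrative addition, not part of the original notes.

```python
def legendre_polynomials(n_max, x):
    """Values P_0(x), ..., P_{n_max}(x) via the recurrence
    (2n+1) x P_n = (n+1) P_{n+1} + n P_{n-1}."""
    P = [1.0, x]                       # P_0 and P_1
    for n in range(1, n_max):
        P.append(((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1))
    return P[:n_max + 1]

print(legendre_polynomials(4, 0.5))
# P_2(0.5) = -0.125, P_3(0.5) = -0.4375, P_4(0.5) = -0.2890625
```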

1.2 Bessel’s Function


The differential equation
\[
x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + (x^2 - n^2)y = 0 \tag{1.18}
\]
is called Bessel's equation, and its solutions are called Bessel's functions. Let
\[
y = \sum_{r=0}^{\infty} a_r x^{k+r} \tag{1.19}
\]
\[
y' = \sum_{r=0}^{\infty} a_r(k+r)x^{k+r-1} \tag{1.20}
\]
\[
y'' = \sum_{r=0}^{\infty} a_r(k+r)(k+r-1)x^{k+r-2} \tag{1.21}
\]

Substituting the value of (1.19),(1.20),(1.21) in equation (1.18) we get



X
2
x ar (k + r)(k + r − 1)xk+r−2
0

X
+x ar (k + r)xk+r−1
0

X
+ (x2 − n2 ) ar xk+r = 0
0

11
Hem Raj Pandey, Assistant Professor, Pokhara University
Special Function Numerical Method

or,

X
ar (k + r)(k + r − 1) + ar (k + r) − n2 ar xk+r + ar xk+r+2 = 0
 
(1.22)
0

Equation (1.22) is identical. So, the equation with various power of x is


For xk , (r = 0)

a0 (k 2 − n2 ) = 0
a0 6= 0, k 2 = n2 , k = ±n

For xk+1 , (r = 1)

(k + 1)2 − n2 a1 = 0Since,k = n(k + 1)2 6= n


 

∴, [(k + 1)2 − n2 ] 6= 0, a1 = 0.

To find the coefficient , comparing the coefficients of xk+r ,

ar [(k + r)2 − n2 ] + ar−2 = 0


ar−2
ar = − , (Since, a1 = a3 = a5 = 0)
(k + r)2 − n2
Put r = 2, r = 4, r = 6, . . . , r = a2m
a0 a0
a2 = − 2 2
=−
(n + 2) − n 2.2(n + 1)
a2 a2 a0
a4 = − 2 2
=− = 2
(n + 4) − n 2.4(n + 2) 2.4.2 (n + 1)(n + 2)
..
.
(−1)m a0
a2m =
2.4.6. . . . .2m.2m (n + 1)(n + 2) . . . (n + m)
Therefore, we get
x2 x4

n
y = x a0 1 − +
1.2.(n + 1) 2.4.22 (n + 1)(n + 2)
(−1)m a0

− ··· + + ...
2.4.6. . . . .2m.2m (n + 1)(n + 2) . . . (n + m)

X xn+2r (−1)r
= a0
0
2.4. . . . .2r.2r (n + 1)(n + 2) . . . (n + r)

X xn+2r (−1)r 1.2.3. . . . n
= a0
0
1.2.3. . . . .r.2r 1.2.3. . . . .(n + r)

X xn+2r (−1)r n!
= a0
0
22r r!(n + r)!

Since a0 is arbitrary, choose
\[
a_0 = \frac{1}{2^n\,\Gamma(n+1)}
\]
so that
\[
y = \sum_{r=0}^{\infty} \frac{(-1)^r x^{n+2r}\, n!}{2^n n!\, 2^{2r}\, r!\,(n+r)!} = \sum_{r=0}^{\infty} \frac{(-1)^r}{r!\,(n+r)!}\Big(\frac{x}{2}\Big)^{n+2r}.
\]
This is the Bessel function for k = n,
\[
J_n(x) = \sum_{r=0}^{\infty} \frac{(-1)^r}{\Gamma(r+1)\Gamma(n+r+1)}\Big(\frac{x}{2}\Big)^{n+2r} \tag{1.23}
\]
and when k = −n,
\[
J_{-n}(x) = \sum_{r=0}^{\infty} \frac{(-1)^r}{\Gamma(r+1)\Gamma(-n+r+1)}\Big(\frac{x}{2}\Big)^{-n+2r}. \tag{1.24}
\]
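For a non-negative integer n the series (1.23) converges rapidly and can be summed directly. The following Python sketch is illustrative; the number of retained terms is an arbitrary choice.

```python
import math

def bessel_j(n, x, terms=30):
    """Partial sum of the series (1.23) for J_n(x), with n a non-negative integer."""
    s = 0.0
    for r in range(terms):
        s += (-1)**r / (math.factorial(r) * math.gamma(n + r + 1)) * (x / 2.0)**(n + 2 * r)
    return s

print(bessel_j(0, 1.0))   # about 0.7652; bessel_j(1, 1.0) is about 0.4401
```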

1.2.1 Recurrence Relations


d
 n 
1. dx
x Jn (x) = xn Jn−1 (x)

Proof.
∞  n+2r
X (−1)r x
Jn (x) =
0
Γ(r + 1)Γ(n + r + 1) 2
∞  n+2r
n
X (−1)r xn x
x Jn (x) =
0
Γ(r + 1)Γ(n + r + 1) 2

d n  X (−1)r (2n + 2r)x2n+2r−1
x Jn (x) =
dx 0
Γ(r + 1)Γ(n + r + 1)2n+2r

X (−1)r xn xn+2r−1 (n + r)
=
0
Γ(r + 1)(n + r)!2n+2r−1
∞  n+2r−1
n
X (−1)r x
=x
0
Γ(r + 1)(n + r − 1)! 2
∞  n+2r−1
n
X (−1)r x
=x
0
Γ(r + 1)Γ(n + r) 2
= xn Jn−1 (x)

 −n
d
x Jn (x) = −x−n Jn+1 (x)

2. dx

13
Hem Raj Pandey, Assistant Professor, Pokhara University
Special Function Numerical Method

Proof.
∞  n+2r
X (−1)r x
Jn (x) =
0
Γ(r + 1)Γ(n + r + 1) 2
∞  n+2r
−n
X (−1)r x−n x
x Jn (x) =
0
Γ(r + 1)Γ(n + r + 1) 2

X x2r (−1)r
=
0
Γ(r + 1)Γ(n + r + 1)2n+2r

d  −n  X (−1)r 2rx2r−1
x Jn (x) =
dx 0
Γ(r + 1)Γ(n + r + 1)2n+2r

−n
X (−1)r−1 2rxn+2r−1
= −x
0
Γ(r + 1)Γ(n + r + 1)2n+2r

X (−1)r−1 rxn+2r−1
= −x−n
1
r!Γ(n + r + 1)2n+2r−1

X (−1)k xn+2k+1
= −x−n
k=0
k!Γ(n + k + 2)2n+2k+1
(r − 1 = k, 2r − 1 = 2k + 2 − 1 = 2k + 1)
∞  n+2r+1
X (−1)k x
= −x−n
k=0
Γ(k + 1)Γ(n + k + 2) 2
= −x−n Jn+1 (x)

3. xJn0 = xJn−1 − nJn

Proof.
d n
x Jn (x) = xn Jn−1

dx
n 0
orx Jn + nxn−1 Jn = xn Jn−1
Dividing byxn
n
Jn0 + Jn = Jn−1
x
xJn0 = xJn−1 − nJn

4. xJ_n' = nJ_n − xJ_{n+1}

14
Hem Raj Pandey, Assistant Professor, Pokhara University
Special Function Numerical Method

Proof.

d[x−n Jn ] = −x−n Jn+1


orx−n Jn0 − nx−n−1 Jn = −x−n Jn+1
Multiplying byxn
n
Jn0 − Jn = −Jn+1
x
xJn0 = nJn − xJn+1

5. 2nJn = x[Jn+1 + Jn−1 ]

Proof. From (3) and (4) , (3 − 4)

2nJn − xJn+1 − xJn−1 = 0


2nJn = x[Jn+1 + Jn−1 ]

Problem 1. Find J0 (x)

Solution
\[
J_n(x) = \sum_{r=0}^{\infty} \frac{(-1)^r}{\Gamma(r+1)\Gamma(r+n+1)}\Big(\frac{x}{2}\Big)^{n+2r}
\]
\[
J_0(x) = \sum_{r=0}^{\infty} \frac{(-1)^r}{\Gamma(r+1)\Gamma(r+1)}\Big(\frac{x}{2}\Big)^{2r}
= 1 - \Big(\frac{x}{2}\Big)^2 + \frac{1}{(2!)^2}\Big(\frac{x}{2}\Big)^4 - \frac{1}{(3!)^2}\Big(\frac{x}{2}\Big)^6 + \dots
\]

Problem 2. Find J1 (x).

Problem 3. Prove J−n (x) = (−1)n Jn (x).


Problem 4. Prove that \(J_{-1/2}(x) = \sqrt{\dfrac{2}{\pi x}}\,\cos x\).


Lecture Note -11

M.sc. Structural Engineering


School of Engineering

By
Hem Raj Pandey
(Assistant Professor)
Pokhara University

Hem Raj Pandey, Assistant Professor, Pokhara University


Contents

1 Ordinary Differential Equation 3


1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Picard’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Taylor’s Series Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Runge-Kutta Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2
CHAPTER 1

Ordinary Differential Equation

1.1 Introduction
In general ordinary differential equation of nth order is given by

dy d2 y dn y
 
φ x, y, , 2 , . . . , n = 0 (1.1)
dx dx dx

And the solution of (1.1) is in the form of

f (x, y, c1 , c2 , . . . , cn ) = 0

If particular values are given to the constants c1 , c2 , . . . , cn then the resulting solution is called
a particular solution.
Problems in which all the initial conditions are specified at the initial point only are called
initial value problem.

Example 1.1.1.
dy
= f (x, y) with , y(x0 ) = y0 .
dx
The problems are involving second and higher order differential equations. And we suggest
the conditions at two or more points, such problem are called boundary value problem.

Example 1.1.2.

d2 y dy
2
+ f (x) + g(x)y(x) = R(x), with y(x0 ) = a and y(xn ) = b
dx dx

3
4 Ordinary Differential Equation

Initial Value Problems

1. Picard’s Method

2. Taylor’s Series Method

3. Runge- Kutta First,Second, and Fourth Order Method

1.2 Picard’s Method


Consider the differential equation
\[
\frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0. \tag{1.2}
\]
Integrating (1.2) between the limits, we get
\[
y = y_0 + \int_{x_0}^{x} f(x, y)\,dx. \tag{1.3}
\]
Here the unknown function y appears under the integral sign; such an equation is called an integral equation, and it is solved by the method of successive approximations, in which the first approximation to y is obtained by putting y_0 for y on the RHS of (1.3):
\[
y_1 = y_0 + \int_{x_0}^{x} f(x, y_0)\,dx, \qquad
y_2 = y_0 + \int_{x_0}^{x} f(x, y_1)\,dx, \qquad
y_3 = y_0 + \int_{x_0}^{x} f(x, y_2)\,dx,
\]
and at the nth iteration
\[
y_n = y_0 + \int_{x_0}^{x} f(x, y_{n-1})\,dx.
\]

Hence this method gives a sequence of approximations y1 , y2 , . . . , yn each giving a better


result than the preceding one.

Example 1.2.1. Solve the equation y' = x + y² with y(0) = 1 by using Picard's method, and find y(0.1).


Solution
Here y_0 = 1 and x_0 = 0 are given, with f(x, y) = x + y². So
\[
y_1 = y_0 + \int_0^x \big(x + y_0^2\big)\,dx = 1 + \int_0^x (x+1)\,dx = 1 + x + \frac{x^2}{2}.
\]
Similarly,
\[
y_2 = y_0 + \int_0^x \Big[x + \Big(1 + x + \frac{x^2}{2}\Big)^2\Big]dx
= 1 + x + \frac{3}{2}x^2 + \frac{2}{3}x^3 + \frac{1}{4}x^4 + \frac{1}{20}x^5.
\]
At x = 0.1 the value of y is
\[
y(0.1) \approx 1 + 0.1 + \frac{3}{2}(0.1)^2 + \frac{2}{3}(0.1)^3 + \frac{1}{4}(0.1)^4 + \frac{1}{20}(0.1)^5 = 1.1156.
\]
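Because each Picard step is just an integration, the method is easy to reproduce with a computer algebra system. The sketch below uses sympy and is only illustrative; the helper name picard is an assumption.

```python
import sympy as sp

x, t = sp.symbols('x t')

def picard(f, x0, y0, n_iter):
    """Successive Picard approximations for dy/dx = f(x, y), y(x0) = y0."""
    y = sp.sympify(y0)
    for _ in range(n_iter):
        # y_{k+1}(x) = y0 + integral from x0 to x of f(t, y_k(t)) dt
        integrand = f(t, y.subs(x, t))
        y = y0 + sp.integrate(integrand, (t, x0, x))
    return sp.expand(y)

y2 = picard(lambda xx, yy: xx + yy**2, 0, 1, 2)
print(y2)               # 1 + x + 3*x**2/2 + 2*x**3/3 + x**4/4 + x**5/20
print(y2.subs(x, 0.1))  # about 1.1156, as computed by hand above
```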

1.3 Taylor’s Series Method


Taylor series method is the fundamental numerical method for the solution of the initial value
problem. Expanding y(x) in Taylor series about any point xi , we obtain
\[
y(x) = y(x_i) + (x-x_i)y'(x_i) + \frac{1}{2!}(x-x_i)^2y''(x_i) + \dots + \frac{1}{p!}(x-x_i)^p y^{(p)}(x_i)
\]

Differentiating this successively we get, y 0 , y 00 , y 000 , . . .

Example 1.3.1. Given the differential equation y'' − xy' − y = 0 with the conditions y(0) = 1 and y'(0) = 0, use Taylor's series method to determine the value of y(0.1).

Solution

Hem Raj Pandey, Assistant Professor, Pokhara University


6 Ordinary Differential Equation

We have, y(x) = 1 and y 0 (x) = 0, at x = 0. We know the differential equation as,

y 00 (x) = xy 0 (x) + y(x) (1.4)

Hence,
y 00 (0) = y(0) = 1

Successive differentiation of (1.4) gives,

\[
\begin{aligned}
y'''(x) &= xy''(x) + y'(x) + y'(x) = xy''(x) + 2y'(x), & y'''(0) &= 0,\\
y^{iv}(x) &= xy'''(x) + y''(x) + 2y''(x) = xy'''(x) + 3y''(x), & y^{iv}(0) &= 3,\\
y^{v}(x) &= xy^{iv}(x) + 4y'''(x), & y^{v}(0) &= 0,\\
y^{vi}(x) &= xy^{v}(x) + 5y^{iv}(x), & y^{vi}(0) &= 15.
\end{aligned}
\]

By Taylor's series about x = 0,
\[
y(x) = y(0) + xy'(0) + \frac{x^2}{2!}y''(0) + \frac{x^3}{3!}y'''(0) + \frac{x^4}{4!}y^{iv}(0) + \frac{x^5}{5!}y^{v}(0) + \dots
= 1 + \frac{x^2}{2!} + 3\frac{x^4}{4!} + 15\frac{x^6}{6!} + \dots
\]
Hence,
\[
y(0.1) = 1 + \frac{(0.1)^2}{2!} + 3\frac{(0.1)^4}{4!} + 15\frac{(0.1)^6}{6!} + \dots = 1 + 0.005 + 0.0000125 + \dots = 1.0050125
\]

1.4 Runge-Kutta Method


Two German mathematicians Carl Runge (1856–1927) and Wilhelm Kutta (1867–1944) de-
veloped more efficient methods in terms of accuracy to avoid the computation of higher order

Hem Raj Pandey, Assistant Professor, Pokhara University


7 Ordinary Differential Equation

derivations. These methods are well-known as Runge-Kutta methods. These methods agree
with Taylor’s series solution upto the term in hr where r differs from method to method and
is called the order of that method.
The Runge - kutta method is based on solution procedure of initial value problem in which
the initial conditions are known. Based on the order of differential equation there are different
Runge-kutta methods, which are commonly reffered to as, RK1, RK2, RK3, RK4 methods.

1. First order Runge-Kutta Method


Consider the differential equation
\[
\frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0. \tag{1.5}
\]
We know Euler's method
\[
y_1 = y_0 + hf(x_0, y_0) = y_0 + hy_0' \approx y(x_0 + h). \tag{1.6}
\]
Expanding by Taylor's series,
\[
y_1 = y(x_0 + h) = y_0 + hy_0' + \frac{h^2}{2}y_0'' + \dots \tag{1.7}
\]
Comparing (1.6) and (1.7), it follows that Euler's method agrees with the Taylor series solution up to the term in h. Hence Euler's method is the first-order Runge-Kutta method.

The Euler’s method is


yn+1 = yn + hf (xn , yn )

2. Second Order Runge-Kutta Method


The modified Euler’s method gives

h 
y1 = y0 + f (x0 , y0 ) + f (x0 + h, y1 ) (1.8)
2

Taking f0 = f (x0 , y0 ) and substituting y1 = y0 + hf0 the RHS of (1.8), we obtain

h 
y1 = y(x0 + h) = y0 + f0 + f (x0 + h, y0 + hf0 ) (1.9)
2

Hem Raj Pandey, Assistant Professor, Pokhara University


8 Ordinary Differential Equation

Expanding LHS by Taylor’s series, we get

h2 00 h3 000
y1 = y(x0 + h) = y0 + hy00 + y + y0 + . . . (1.10)
2! 0 3!

Expanding f (x0 + h, y0 + hf0 ) by Taylor’s series for a function of the variables,we obtain
   
∂f ∂f
+ O h2

f (x0 + h, y0 + hf0 ) = f (x0 , y0 ) + h + hf0
∂x 0 ∂y 0

So, (1.9) can be written as


     
1 2 ∂f 2 ∂f 3

y1 = y0 + hf0 + hf0 + h + h f0 +O h
2 ∂x 0 ∂y 0
h2
= y0 + hf0 + f00 + O(h3 )
 2  (1.11)
df ∂f ∂f
= +f where f = f (x, y)
dx ∂x ∂y
h2
= y0 + hy00 + y000 + O(h3 )
2!
Comparing (1.10) and (1.11), it follows that the modified Euler’s method agrees with
Taylor’s series solution up to the term in h2 . Hence the modified Euler’s method is the
Runge-Kutta method of second order.

The second-order Runge-Kutta formula is as follows:
\[
y_1 = y_0 + \tfrac{1}{2}\big(k_1 + k_2\big), \quad \text{where } k_1 = hf(x_0, y_0), \; k_2 = hf(x_0 + h, y_0 + k_1),
\]
which in general gives
\[
y_{n+1} = y_n + \tfrac{1}{2}\big(k_1 + k_2\big), \quad \text{where } k_1 = hf(x_n, y_n), \; k_2 = hf(x_n + h, y_n + k_1).
\]

3. Third Order Runge-Kutta Method

Hem Raj Pandey, Assistant Professor, Pokhara University


9 Ordinary Differential Equation

The third-order Runge-Kutta formula is
\[
y_{n+1} = y_n + \tfrac{1}{6}\big(k_1 + 4k_2 + k_3\big),
\]
where
\[
k_1 = hf(x_n, y_n), \quad k_2 = hf\big(x_n + \tfrac{h}{2},\, y_n + \tfrac{k_1}{2}\big), \quad k_3 = hf\big(x_n + h,\, y_n + 2k_2 - k_1\big).
\]

4. Fourth Order Runge-Kutta Method


It is one of the most widely used methods and is particularly suitable when the computation of higher derivatives is complicated. Consider the differential equation
\[
\frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0.
\]
Let h be the interval between equidistant values of x; then the increment in y is computed from the formulae
\[
y_{n+1} = y_n + \tfrac{1}{6}\big(k_1 + 2k_2 + 2k_3 + k_4\big), \tag{1.12}
\]
where
\[
k_1 = hf(x_n, y_n), \quad k_2 = hf\big(x_n + \tfrac{h}{2},\, y_n + \tfrac{k_1}{2}\big), \quad k_3 = hf\big(x_n + \tfrac{h}{2},\, y_n + \tfrac{k_2}{2}\big), \quad k_4 = hf(x_n + h,\, y_n + k_3).
\]
This method is commonly referred to simply as the Runge-Kutta method.
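A minimal Python sketch of the classical RK4 step is given below; the routine name and the check problem are illustrative. As a check, for dy/dx = −2xy² with y(0) = 1 the exact solution is 1/(1 + x²).

```python
def rk4(f, x0, y0, h, n_steps):
    """Classical fourth-order Runge-Kutta method for dy/dx = f(x, y)."""
    x, y = x0, y0
    for _ in range(n_steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# Two steps of size h = 0.2 take the solution from x = 0 to x = 0.4
print(rk4(lambda x, y: -2 * x * y**2, 0.0, 1.0, 0.2, 2))   # about 0.86205 (exact 1/1.16 = 0.86207)
```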

Example 1.4.1. Using Euler's method (the first-order RK method), find an approximate value of y corresponding to x = 2, given that dy/dx = x + 2y with y(1) = 1 and h = 0.1.

Solution:
Here, the first order differential equation is
dy
= x + 2y, y(1) = 1 (1.13)
dx
By Euler’s method, we get x0 = 1, y0 = 1, and f (x, y) = x + 2y and we have

yn+1 = yn + hf (xn , yn ) = yn + h(xn + 2yn )

Hem Raj Pandey, Assistant Professor, Pokhara University


10 Ordinary Differential Equation

Putting n = 0, 1, 2, 3, . . . 10 where h = 0.1 we get,

y(1.1) = y1 = y0 + h(x0 + 2y0 ) = 1.3

y2 = y1 + h(x1 + 2y1 ) = 1.67

y3 = y2 + h(x2 + 2y2 ) = 2.124

y4 = y3 + h(x3 + 2y3 ) = 2.6788


.. ..
.=.

y8 = 6.3746

y9 = 7.8296

y(2.0) = y10 = 9.5855


Hence, y(2) = 9.5855
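The same computation is a short loop in Python; the sketch below is illustrative and simply repeats the tabulated Euler steps.

```python
def euler(f, x0, y0, h, n_steps):
    """Explicit Euler method: y_{n+1} = y_n + h f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(n_steps):
        y += h * f(x, y)
        x += h
    return y

print(round(euler(lambda x, y: x + 2 * y, 1.0, 1.0, 0.1, 10), 4))   # 9.5855, matching the table above
```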

Example 1.4.2. Solve the initial value problem dy/dx = −2xy², y(0) = 1, with h = 0.2 on the interval [0, 0.4], using

(i) Heun’s method (Second order RK-method).

(ii) RK−4 method.

Solution:
The first order differential equation is

dy
= −2xy 2 , y(0) = 1
dx

(i) Heun’s method (Second order RK-method)


Here, f (x, y) = −2xy 2 , x0 = 0, y0 = 1, x1 = 0 + h = 0.2, x2 = 0.4,
We know y_{n+1} = y_n + ½(k_1 + k_2), where k_1 = hf(x_n, y_n) and k_2 = hf(x_n + h, y_n + k_1).
For, n = 0

Hem Raj Pandey, Assistant Professor, Pokhara University


11 Ordinary Differential Equation

k1 = hf (x0 , y0 ) = 0.2(−2x0 (y0 )2 )

= 0.2(−2 × 0 × 12 ) = 0

k2 = hf (x0 + h, y0 + k1 ) = 0.2(−2(x0 + h)(y0 + k1 )2 )

= −0.08
1
y1 = y0 + (−0.08)
2
y(0.2) = 1 − 0.04 = 0.96
For, n = 1

k1 = hf (x1 , y1 ) = 0.2(−2x1 (y1 )2 )

= 0.2(−2 × 0.2 × (0.96)2 ) = −0.07372

k2 = hf (x1 + h, y1 + k1 ) = 0.2(−2(x1 + h)(y1 + k1 )2 )

= −0.12567
1
y2 = y1 + (−0.07372 − 0.12567)
2
= 0.96 − 0.09969 = 0.860305

y(0.4) = 0.860305

(ii) RK−4 method


we know,
1
yn+1 = yn + (k1 + 2k2 + 2k3 + k4 )
6
where,

\[
k_1 = hf(x_n, y_n), \quad k_2 = hf\big(x_n + \tfrac{h}{2},\, y_n + \tfrac{k_1}{2}\big), \quad k_3 = hf\big(x_n + \tfrac{h}{2},\, y_n + \tfrac{k_2}{2}\big), \quad k_4 = hf(x_n + h,\, y_n + k_3)
\]

Hem Raj Pandey, Assistant Professor, Pokhara University


12 Ordinary Differential Equation

For, n = 0 , we have x0 = 0, y0 = 1

k1 = hf (x0 , y0 ) = 0
h k1
k2 = hf (x0 + , y0 + ) = −0.04
2 2
h k2
k3 = hf (x0 + , y0 + ) = −0.038416
2 2
k4 = −0.0739715
1
y1 = y(0.2) = y0 + (k1 + 2k2 + 2k3 + k4 )
6
1
= 1 + (0 − 0.08 − 0.076832 − 0.0739715)
6
= 0.9615328

y(0.2) = 0.9615328

For, n = 1 , we have x0 = 0, y1 = 0.9615328

k1 = hf (x1 , y1 ) = −0.07396
h k1
k2 = hf (x1 + , y1 + ) = −0.10257
2 2
h k2
k3 = hf (x2 + , y2 + ) = −0.09942
2 2
k4 = −0.118916
1
y2 = y(0.4) = y1 + (k1 + 2k2 + 2k3 + k4 )
6
1
= 0.96153 + (−0.07396 − 0.20515 − 0.19885 − 0.11891)
6
= 0.8620525

y(0.4) = 0.8620525

Problem 1. Given dy/dx = y − x with y(0) = 2, find y(0.1) and y(0.2) by using the RK method.

Problem 2. Solve dy/dx = x + y with y(0) = 1. Find y(0.4) taking h = 0.1 by the RK-4 method.


Lecture Note -12

M.sc. Structural Engineering


School of Engineering

By
Hem Raj Pandey
———————————
Assistant Professor
Pokhara University

Hem Raj Pandey, Assistant Professor, Pokhara University


Contents

1 Ordinary Differential Equation 3


1.1 Predictor-Corrector Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.1 Adams - Moulton Method . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.2 Adams- Milne’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2
CHAPTER 1

Ordinary Differential Equation

1.1 Predictor-Corrector Method


This method which required function values at tn , tn−1 , tn−2 , . . . , for the computation of func-
tional value at tn−1 . A predictor formula is used to predict the value of y at tn+1 and corrector
formula is used to improve the value of yn+1 .

Example 1.1.1. Consider a differential equation dy/dt = f(t, y) with initial condition y(t_0) = y_0. Using the simple Euler and modified Euler methods, we can write a simple predictor-corrector pair as
\[
P : y_{n+1}^{(0)} = y_n + hf(t_n, y_n), \qquad
C : y_{n+1}^{(1)} = y_n + \frac{h}{2}\Big[f(t_n, y_n) + f\big(t_{n+1}, y_{n+1}^{(0)}\big)\Big]. \tag{1.1}
\]

There are two methods to discuss:

1. Adams - Moulton Method.

2. Adams- Milne’s Method.

Let us consider the typical differential equation
\[
\frac{dy}{dt} = f(t, y), \qquad y(t_0) = y_0. \tag{1.2}
\]
On integrating between the limits t_n and t_{n+1}, we have
\[
\int_{t_n}^{t_{n+1}} \frac{dy}{dt}\,dt = \int_{t_n}^{t_{n+1}} f(t, y)\,dt, \qquad \text{i.e.} \qquad
y_{n+1} - y_n = \int_{t_n}^{t_{n+1}} f(t, y)\,dt. \tag{1.3}
\]

1.1.1 Adams - Moulton Method

This method is desired based on backward differences and we known that Newton backward
difference interpolation can be written as

s(s + 1) 2 s(s + 1)(s + 2) 3


f (t, y) = fn + s 5 fn + 5 fn + 5 fn
2! 3!
s(s + 1)(s + 2), · · · , (s + n − 1) n
+ ··· + 5 fn (1.4)
n!
t−tn
where, s = h
, t = tn + sh.
Substituting (1.4) in equation (1.3) , we obtain
Z tn+1 
s(s + 1) 2 s(s + 1)(s + 2) 3
yn+1 = yn + fn + s 5 fn + 5 fn + 5 fn
tn 2! 3!

s(s + 1)(s + 2)(s + 3) 4
+ 5 fn + . . . dt (1.5)
4!
Now by changing the variable of integration (from t to s), the limits of integration also changes
(from 0 to 1), so we get
Z 1
s(s + 1) 2 s(s + 1)(s + 2) 3
yn+1 = yn + h fn + s 5 fn + 5 fn + 5 fn
0 2! 3!

s(s + 1)(s + 2)(s + 3) 4
+ 5 fn + . . . ds (1.6)
4!
After integration the (1.6) changes to
 
1 5 2 3 3 251 4
yn+1 = yn + h fn + 5 fn + 5 fn + 5 fn + 5 fn (1.7)
2 12 8 720
Now substituting the differences such as

5fn = fn − fn−1

52 fn = fn − 2fn−1 + fn−2

53 fn = fn − 3fn−1 + 3fn−2 − fn−3

Hem Raj Pandey, Assistant Professor, Pokhara University


5 Ordinary Differential Equation

Equation (1.7) simplifies to
\[
y_{n+1} = y_n + \frac{h}{24}\big[55f_n - 59f_{n-1} + 37f_{n-2} - 9f_{n-3}\big] + \frac{251}{720}h\,\nabla^4 f_n. \tag{1.8}
\]
Alternately, it can be written as
\[
y_{n+1} = y_n + \frac{h}{24}\big[55y_n' - 59y_{n-1}' + 37y_{n-2}' - 9y_{n-3}'\big] + \frac{251}{720}h\,\nabla^4 y_n. \tag{1.9}
\]
This is known as the Adams-Bashforth formula and is used as the predictor formula.

To obtain the corrector formula, we use the Newton’s backward interpolation formula (1.4)
about fn+1 from fn . Thus, we obtain

Z 0
s(s + 1) 2 s(s + 1)(s + 2) 3
yn+1 = yn + h fn+1 + s 5 fn+1 + 5 fn+1 + 5 fn+1
−1 2! 3!

s(s + 1)(s + 2)(s + 3) 4
+ 5 fn+1 + . . . ds (1.10)
4!
After integrating, we get the corrector formula as
\[
y_{n+1} = y_n + h\Big[f_{n+1} - \frac{1}{2}\nabla f_{n+1} - \frac{1}{12}\nabla^2 f_{n+1} - \frac{1}{24}\nabla^3 f_{n+1} - \frac{19}{720}\nabla^4 f_{n+1}\Big]. \tag{1.11}
\]
Now substituting the differences such as

\[
\nabla f_{n+1} = f_{n+1} - f_n, \qquad \nabla^2 f_{n+1} = f_{n+1} - 2f_n + f_{n-1}, \qquad \nabla^3 f_{n+1} = f_{n+1} - 3f_n + 3f_{n-1} - f_{n-2},
\]


Equation (1.11) simplifies to
\[
y_{n+1} = y_n + \frac{h}{24}\big[9y_{n+1}' + 19y_n' - 5y_{n-1}' + y_{n-2}'\big] - \frac{19}{720}h\,\nabla^4 y_n. \tag{1.12}
\]
After neglecting the higher-order truncation error, the Adams-Moulton predictor-corrector method can be written as
\[
\begin{aligned}
P &: y_{n+1} = y_n + \frac{h}{24}\big[55y_n' - 59y_{n-1}' + 37y_{n-2}' - 9y_{n-3}'\big]\\
C &: y_{n+1} = y_n + \frac{h}{24}\big[9y_{n+1}' + 19y_n' - 5y_{n-1}' + y_{n-2}'\big]
\end{aligned} \tag{1.13}
\]
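A Python sketch of the predictor-corrector pair (1.13) is given below; the starting values are generated with RK4, as is done in the worked examples later in this chapter, and all names are illustrative assumptions.

```python
def adams_moulton_pc(f, t0, y0, h, n_steps):
    """Adams predictor-corrector (1.13); the first three steps use RK4 starters."""
    def rk4_step(t, y):
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2, y + k1 / 2)
        k3 = h * f(t + h / 2, y + k2 / 2)
        k4 = h * f(t + h, y + k3)
        return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

    ts, ys = [t0], [y0]
    for _ in range(min(3, n_steps)):                  # starting values
        ys.append(rk4_step(ts[-1], ys[-1]))
        ts.append(ts[-1] + h)
    for n in range(3, n_steps):
        fn = [f(ts[n - j], ys[n - j]) for j in range(4)]   # f_n, f_{n-1}, f_{n-2}, f_{n-3}
        yp = ys[n] + h / 24 * (55 * fn[0] - 59 * fn[1] + 37 * fn[2] - 9 * fn[3])   # predictor
        fp = f(ts[n] + h, yp)
        yc = ys[n] + h / 24 * (9 * fp + 19 * fn[0] - 5 * fn[1] + fn[2])            # corrector
        ys.append(yc)
        ts.append(ts[n] + h)
    return ts, ys

# Check against dy/dt = t - y^2, y(0) = 1, treated later in this chapter
ts, ys = adams_moulton_pc(lambda t, y: t - y * y, 0.0, 1.0, 0.1, 4)
print(round(ys[-1], 4))   # about 0.7785 at t = 0.4
```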

1.1.2 Adams- Milne’s Method

Consider the equispaced points t0 , t1 , t2 and t4 and integrating (1.2) between the limits t0 and
t4 , we get
Z t4 Z t4
dy
dt = f (t, y)dt
t0 dt t
Z 0t4 (1.14)
y4 − y0 = f (t, y)dt
t0

This method is desired based on farward differences and we known that Newton farward
difference interpolation can be written as

s(s − 1) s(s − 1)(s − 2)


f (t, y) = f0 + s 4 f0 + 42 f 0 + 43 f0 + · · · (1.15)
2! 3!
t−t0
where, s = h
, t = t0 + sh.
Substituting (1.15) in equation (1.14) , we obtain
t4  
s(s − 1) 2 s(s − 1)(s − 2) 3
Z
y4 = y0 + f0 + s 4 f0 + 4 f0 + 4 f0 + · · · dt (1.16)
t0 2! 3!

By changing the variable of integration (t to s) also change (0 to 4), we get


Z 4 
s(s − 1) 2 s(s − 1)(s − 2) 3
y4 = y0 + h f0 + s 4 f0 + 4 f0 + 4 f0 + ds (1.17)
0 2 6

which simplifies to
 
20 2 8 3 28 4
y4 = y0 + h 4f0 + s8 4 f0 + 4 f0 + 4 f0 + 4 f0 + · (1.18)
3 3 90

Neglecting higher order difference and substituting the difference such as,

4f0 = f1 − f0

42 f0 = f2 − 2f1 + f0

··· = ···

so equation (1.18) changes to
\[
y_4 = y_0 + \frac{4h}{3}\big[2f_1 - f_2 + 2f_3\big], \tag{1.19}
\]
which implies that
\[
y_4 = y_0 + \frac{4h}{3}\big[2y_1' - y_2' + 2y_3'\big]. \tag{1.20}
\]
This is called Milne's predictor formula. Similarly, integrating equation (1.14) from t_0 to t_2 (i.e. s = 0 to s = 2) and repeating the above steps, we get
\[
y_2 = y_0 + \frac{h}{3}\big[y_0' + 4y_1' + y_2'\big] - \frac{1}{90}h\,\Delta^4 y_0', \tag{1.21}
\]
which is known as Milne's corrector formula. In general,
\[
\begin{aligned}
P &: y_{n+1} = y_{n-3} + \frac{4h}{3}\big[2y_{n-2}' - y_{n-1}' + 2y_n'\big]\\
C &: y_{n+1} = y_{n-1} + \frac{h}{3}\big[y_{n-1}' + 4y_n' + y_{n+1}'\big]
\end{aligned} \tag{1.22}
\]
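The general formulas (1.22) translate into the following illustrative Python sketch, again with RK4 starters; it reproduces, to rounding, the Milne computation for dy/dt = ty + y², y(0) = 1, treated later in this chapter.

```python
def milne_pc(f, t0, y0, h, n_steps):
    """Milne's predictor-corrector (1.22); starting values come from RK4."""
    def rk4_step(t, y):
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2, y + k1 / 2)
        k3 = h * f(t + h / 2, y + k2 / 2)
        k4 = h * f(t + h, y + k3)
        return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

    ts, ys = [t0], [y0]
    for _ in range(min(3, n_steps)):
        ys.append(rk4_step(ts[-1], ys[-1]))
        ts.append(ts[-1] + h)
    for n in range(3, n_steps):
        d = [f(ts[k], ys[k]) for k in range(n - 2, n + 1)]            # y'_{n-2}, y'_{n-1}, y'_n
        yp = ys[n - 3] + 4 * h / 3 * (2 * d[0] - d[1] + 2 * d[2])     # predictor
        yc = ys[n - 1] + h / 3 * (d[1] + 4 * d[2] + f(ts[n] + h, yp)) # corrector
        ys.append(yc)
        ts.append(ts[n] + h)
    return ts, ys

ts, ys = milne_pc(lambda t, y: t * y + y * y, 0.0, 1.0, 0.1, 4)
print(round(ys[-1], 4))   # about 1.84 at t = 0.4
```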

Example 1.1.2. Using Adams- Milne’s Method , find y(4.5) given 5ty 0 + y 2 − 2 = 0 where,
y(4) = 1,y(4.1) = 1.00491, y(4.2) = 1.0097, y(4.3) = 1.0143, y(4.4) = 1.0187.

Solution
We have,
\[
\frac{dy}{dt} = \frac{2 - y^2}{5t}
\]
Taking t0 = 4, t1 = 4.1, t2 = 4.2, t3 = 4.3, t4 = 4.4, where given value are y0 , y1 , y2 , y3 , y4 and
we have to compute y5 . Then the starting value of Milne’s method are

t0 = 4.0,  y0 = 1.0000,  y0' = (2 − y0²)/(5t0) = (2 − 1)/(5 × 4) = 0.05
t1 = 4.1,  y1 = 1.0049,  y1' = 0.0485
t2 = 4.2,  y2 = 1.0097,  y2' = 0.0467
t3 = 4.3,  y3 = 1.0143,  y3' = 0.0452
t4 = 4.4,  y4 = 1.0187,  y4' = 0.0437

Since y_5 is required, we use Milne's predictor method (with h = 0.1):
\[
y_5^{(P)} = y_1 + \frac{4h}{3}\big(2y_2' - y_3' + 2y_4'\big).
\]
At t_5 = 4.5,
\[
y_5^{(P)} = 1.0049 + \frac{4 \times 0.1}{3}\big(2 \times 0.0467 - 0.0452 + 2 \times 0.0437\big) = 1.023,
\qquad
y_5' = \frac{2 - y_5^2}{5t_5} = 0.0424.
\]
Now using the corrector,
\[
y_5^{(C)} = y_3 + \frac{h}{3}\big(y_3' + 4y_4' + y_5'\big) = 1.0143 + \frac{0.1}{3}\big(0.0452 + 4 \times 0.0437 + 0.0424\big) = 1.023.
\]
Hence, y(4.5) = 1.023

Example 1.1.3. Using the RK-4 method, find y at t = 0.1, 0.2, 0.3, given that dy/dt = ty + y² with y(0) = 1. Continue the solution to t = 0.4 using Milne's method.

Solution:
we have
dy
= ty + y 2
dt
To find y(0.1)
Here, t0 = 0, y0 = 1, h = 0.1

k1 = hf (t0 , y0 ) = 0.1
h k1
k2 = hf (t0 + , y0 + ) = 0.1155
2 2
h k2
k3 = hf (t0 + , y0 + ) = 0.1172
2 2
k4 = hf (t0 + h, y0 + k3 ) = 0.13598
1
k = y0 + (k1 + 2k2 + 2k3 + k4 ) = 1.1169
6
Thus y(0.1) = 1.1169

Hem Raj Pandey, Assistant Professor, Pokhara University


9 Ordinary Differential Equation

To find y(0.2)
Here, t1 = 0.1, y1 = 1.1169, h = 0.1

k1 = hf (t1 , y1 ) = 0.1359
h k1
k2 = hf (t1 + , y1 + ) = 0.1581
2 2
h k2
k3 = hf (t1 + , y1 + ) = 0.1609
2 2
k4 = hf (t1 + h, y1 + k3 ) = 0.1888
1
k = y1 + (k1 + 2k2 + 2k3 + k4 ) = 1.2773
6
Thus y(0.2) = 1.2773

To find y(0.3)
Here, t2 = 0.2, y2 = 1.2773, h = 0.1

k1 = hf (t2 , y2 ) = 0.1887
h k1
k2 = hf (t2 + , y2 + ) = 0.2224
2 2
h k2
k3 = hf (t2 + , y2 + ) = 0.2275
2 2
k4 = hf (t2 + h, y2 + k3 ) = 0.2716
1
k = y1 + (k1 + 2k2 + 2k3 + k4 ) = 1.504
6
Thus y(0.3) = 1.504 Now starting value for Milne’s Method are

t0 = 0, y0 = 1 y00 = ty0 + y02 = 1

t1 = 0.1, y1 = 1.1169 y10 = 1.3591

t2 = 0.2, y2 = 1.2773 y20 = 1.8869

t3 = 0.3, y3 = 1.504 y30 = 2.7132


Using the Meline’s predictor methods as,

(P ) 4h 0
y4 = y0 + (2y1 − y20 + 2y30 ) (h = 0.1)
3
at t4 = 0.4

(P )
y4 = 1.8344, y40 = 4.0988

Hem Raj Pandey, Assistant Professor, Pokhara University


10 Ordinary Differential Equation

Now, use correctors

(C) h
y4 = y2 + (y20 + 4y30 + y40 )
3
0.1
= 1.2773 + (1.8869 + 4 × 2.7132 + 4.098) = 1.8386
3
Hence, y(0.4) = 1.8386
dy
Example 1.1.4. Solve the initial value problem dx
= t − y 2 , y(0) = 1 to find y(0.4) by
Adam’s method. Starting solutions required are to obtained using Runge- Kutta method of
order 4 using step value h = 0.1.

Solution:
we have
dy
= t − y2
dt
To find y(0.1)
Here, t0 = 0, y0 = 1, h = 0.1

k1 = hf (t0 , y0 ) = −0.1
h k1
k2 = hf (t0 + , y0 + ) = −0.08525
2 2
h k2
k3 = hf (t0 + , y0 + ) = −0.0867
2 2
k4 = hf (t0 + h, y0 + k3 ) = −0.07341
1
k = y0 + (k1 + 2k2 + 2k3 + k4 ) = 0.9117
6
Thus y(0.1) = 0.9117

To find y(0.2)
Here, t1 = 0.1, y1 = 0.9117, h = 0.1

k1 = hf (t1 , y1 ) = −0.0731
h k1
k2 = hf (t1 + , y1 + ) = −0.0616
2 2
h k2
k3 = hf (t1 + , y1 + ) = −0.0626
2 2
k4 = hf (t1 + h, y1 + k3 ) = −0.0521
1
k = y1 + (k1 + 2k2 + 2k3 + k4 ) = 0.8494
6

Hem Raj Pandey, Assistant Professor, Pokhara University


11 Ordinary Differential Equation

Thus y(0.2) = 0.8494

To find y(0.3)
Here, t2 = 0.2, y2 = 0.8494, h = 0.1

k1 = hf (t2 , y2 ) = −0.0521
h k1
k2 = hf (t2 + , y2 + ) = −0.0428
2 2
h k2
k3 = hf (t2 + , y2 + ) = −0.0436
2 2
k4 = hf (t2 + h, y2 + k3 ) = −0.0349
1
k = y1 + (k1 + 2k2 + 2k3 + k4 ) = 0.8061
6
Thus y(0.3) = 0.8061
Now starting value for Adam’s method are

t0 = 0, y0 = 1 y00 = t0 − y02 = −1

t1 = 0.1, y1 = 0.9117 y10 = −0.7312

t2 = 0.2, y2 = 0.8494 y20 = −0.5215

t3 = 0.3, y3 = 0.8061 y30 = −0.3498

we know that
 
h 0 0 0 0
P : yn+1 = yn + 55yn − 59yn−1 + 37yn−2 − 9yn−3
24
 
h 0 0 0 0
C : yn+1 = yn + 9yn+1 + 19yn − 5yn−1 + yn−2
24

Since, y4 is required to find. So, we use the Adam predictor methods as,
at t4 = 0.4
 
(P ) h 0 0 0 0
y4 = y3 + 55y3 − 59y2 + 37y1 − 9y0
24
 
(P ) 0.1
and, y4 = 0.8061 + 55 × (−0.3498) − 59 × (−0.5215) + 37 × (−0.7312) − 9 × (−1)
24
= 0.7789, , y40 = −0.2067

Hem Raj Pandey, Assistant Professor, Pokhara University


12 Ordinary Differential Equation

Using corrector, we have


 
(C) h 0 0 0 0
y4 = y3 + 9y4 + 19y3 − 5y2 + y1
24
 
0.1
= 0.8061 + 9 × (−0.2067) + 19 × (−0.3498) − 5 × (−0.5215) + 0.7312
24
= 0.7785

Hence, y(0.4)= 0.7785

Problem 1. Find y(2.0) if y(t) is the solution of dy/dt = ½(t + y), assuming y(0) = 2, y(0.5) = 2.636, y(1.0) = 3.595, y(1.5) = 4.968, using Milne's predictor-corrector method.

Problem 2. Find y(1.4) if y(t) is the solution of t²y' + ty = 1, assuming y(1) = 1, y(1.1) = 0.996, y(1.2) = 0.986, y(1.3) = 0.972, using the Adams-Moulton P-C method.


Lecture Note -13

M.sc. Structural Engineering


School of Engineering

By
Hem Raj Pandey
———————————
Assistant Professor
Pokhara University

Hem Raj Pandey, Assistant Professor, Pokhara University


Contents

1 Ordinary Differential Equation 3


1.1 Boundary Value Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.1 Finite Difference Method . . . . . . . . . . . . . . . . . . . . . . . . . . 4

2
CHAPTER 1

Ordinary Differential Equation

1.1 Boundary Value Problem


A general second order ordinary differential equation is given by

y 00 = f (x, y, y 0 ), x ∈ [a, b] (1.1)

Since the ordinary differential equation is of second order, two suitable conditions are required to obtain a unique solution of the problem. When the conditions are prescribed at the end points x = a and x = b, it is called a two-point boundary value problem.
Let the linear second order ordinary differential equation

a0 (x)y 00 + a1 (x)y 0 + a2 (x)y = d(x), x ∈ [a, b] (1.2)

or
y 00 + p(x)y 0 + q(x)y = r(x), x ∈ [a, b] (1.3)

This implies that a0 (x), a1 (x), a2 (x) and d(x), or p(x), q(x) and r(x) are continuous for all
x ∈ [a, b]. The boundary condition can be used in three ways;

1. Boundary condition of first kind

y(a) = A, y(b) = B

where x = a and x = b are end points.

3
4 Ordinary Differential Equation

2. Boundary conditions of second kind The normal derivative of y(x) is given at the end
points x = a and x = b.
y 0 (a) = A, y 0 (b) = B

3. Boundary conditions of third kind or mixed boundary conditions

a0 y(a) − a1 y 0 (a) = A, b0 y(b) + b1 y 0 (b) = B

where a0 , a1 , b0 , b1 , A and B are constants.

1.1.1 Finite Difference Method

The finite difference method is a numerical procedure which solves a differential equation by discretizing the continuous physical domain into a discrete finite difference grid and approximating the individual exact derivatives in the equation by algebraic finite difference approximations. The finite difference approximations are substituted into the differential equation to obtain an algebraic finite difference equation (FDE), and the resulting algebraic equation is solved for the dependent variable.
The finite difference approximations for derivatives of different orders at the grid points, which lead to an algebraic system, are obtained from Taylor expansions as follows.

First-order derivatives:
The Taylor series expansions of a function u(x) over a finite distance ∆x are
\[
u(x \pm \Delta x) = u(x) \pm \Delta x\,u'(x) + \frac{(\Delta x)^2}{2!}u''(x) \pm \frac{(\Delta x)^3}{3!}u'''(x) + \cdots \tag{1.4}
\]
Writing u(x) = u_i, equation (1.4) gives
\[
u_{i+1} = u_i + \Delta x\frac{\partial u}{\partial x} + \frac{(\Delta x)^2}{2!}\frac{\partial^2 u}{\partial x^2} + \frac{(\Delta x)^3}{3!}\frac{\partial^3 u}{\partial x^3} + \cdots \tag{1.5}
\]
\[
u_{i-1} = u_i - \Delta x\frac{\partial u}{\partial x} + \frac{(\Delta x)^2}{2!}\frac{\partial^2 u}{\partial x^2} - \frac{(\Delta x)^3}{3!}\frac{\partial^3 u}{\partial x^3} + \cdots \tag{1.6}
\]
Solving (1.5) and (1.6) for ∂u/∂x and discarding the truncation error gives the one-sided and central approximations
\[
\frac{\partial u}{\partial x} \simeq \frac{u_{i+1} - u_i}{\Delta x}\ (\text{forward}), \qquad
\frac{\partial u}{\partial x} \simeq \frac{u_i - u_{i-1}}{\Delta x}\ (\text{backward}), \qquad
\frac{\partial u}{\partial x} \simeq \frac{u_{i+1} - u_{i-1}}{2\Delta x}\ (\text{central}),
\]
and similarly in the y-direction
\[
\frac{\partial u}{\partial y} \simeq \frac{u_{j+1} - u_j}{\Delta y}, \qquad
\frac{\partial u}{\partial y} \simeq \frac{u_j - u_{j-1}}{\Delta y}, \qquad
\frac{\partial u}{\partial y} \simeq \frac{u_{j+1} - u_{j-1}}{2\Delta y}.
\]
For second-order derivatives, adding equations (1.5) and (1.6) gives the central difference scheme
\[
\frac{\partial^2 u}{\partial x^2} \simeq \frac{u_{i+1} - 2u_i + u_{i-1}}{(\Delta x)^2} \tag{1.7}
\]
and
\[
\frac{\partial^2 u}{\partial y^2} \simeq \frac{u_{j+1} - 2u_j + u_{j-1}}{(\Delta y)^2}.
\]
For higher derivatives (with uniform spacing h),
\[
u_i''' = \frac{1}{2h^3}\big(u_{i+2} - 2u_{i+1} + 2u_{i-1} - u_{i-2}\big), \qquad
u_i^{iv} = \frac{1}{h^4}\big(u_{i+2} - 4u_{i+1} + 6u_i - 4u_{i-1} + u_{i-2}\big).
\]
Example 1.1.1. Solve the equation y'' = x + y with the boundary conditions y(0) = y(1) = 0.

Solution:
We divide the interval (0, 1) into four sub-intervals so that h = 1/4, and the pivot points are at x_0 = 0, x_1 = 1/4, x_2 = 1/2, x_3 = 3/4 and x_4 = 1. The differential equation is approximated as
\[
\frac{1}{(\Delta x)^2}\big[y_{i+1} - 2y_i + y_{i-1}\big] = x_i + y_i.
\]
Here ∆x = 1/4, so
\[
16y_{i+1} - 33y_i + 16y_{i-1} = x_i, \qquad i = 1, 2, 3.
\]
Using y_0 = y_4 = 0, we get the system of equations
\[
\begin{aligned}
-33y_1 + 16y_2 &= \tfrac{1}{4}\\
16y_1 - 33y_2 + 16y_3 &= \tfrac{1}{2}\\
16y_2 - 33y_3 &= \tfrac{3}{4}
\end{aligned}
\]
Their solution gives
\[
y_1 = -0.03488, \quad y_2 = -0.05632, \quad y_3 = -0.05003.
\]
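The same 3 × 3 system can of course be assembled and solved directly; the following Python sketch is illustrative.

```python
import numpy as np

# Finite-difference solution of y'' = x + y, y(0) = y(1) = 0 with h = 1/4,
# i.e. the tridiagonal system assembled above.
x = np.array([0.25, 0.50, 0.75])
A = np.array([[-33.0, 16.0,  0.0],
              [ 16.0, -33.0, 16.0],
              [  0.0, 16.0, -33.0]])
b = x                                   # right-hand side x_i
y = np.linalg.solve(A, b)
print(y)                                # close to y1 = -0.03488, y2 = -0.05632, y3 = -0.05003
```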

Example 1.1.2. The deflection y of a fixed-fixed beam of length 3 meters is governed by the
differential equation
\[
\frac{d^4y}{dx^4} + 2y = \frac{x^2}{9} + \frac{2}{3}x + 4
\]
and the boundary conditions are given by y(0) = y 0 (0) = 0, and y(3) = y 0 (3) = 0. Find the
deflection at the points x = 1 and x = 2.

Solution:
Here, taking h = 1 and x0 = 0, x1 = 1, x2 = 2, x3 = 3 with corresponding y-values of
y0 , y1 , y2 , y3 and given y0 = y3 = 0, y00 = y30 = 0.
Using the finite difference approximation, we have
\[
y_i^{iv} = \frac{1}{h^4}\big[y_{i+2} - 4y_{i+1} + 6y_i - 4y_{i-1} + y_{i-2}\big],
\]
so the differential equation becomes
\[
\frac{1}{h^4}\big[y_{i+2} - 4y_{i+1} + 6y_i - 4y_{i-1} + y_{i-2}\big] + 2y_i = \frac{x_i^2}{9} + \frac{2}{3}x_i + 4, \qquad i = 1, 2.
\]

Hem Raj Pandey, Assistant Professor, Pokhara University


7 Ordinary Differential Equation

For i=1

1 2
y3 − 4y2 + 6y1 − 4y0 + y−1 = + +4
9 3
Using the boundary conditions y0 = y3 = 0, we get (1.8)
43
−4y2 + 8y1 + y−1 =
9
For i=2

4 4
y4 − 4y3 + 6y2 − 4y1 + y0 + 2y2 = + +4
9 3
Using the boundary conditions y0 = y3 = 0, we get (1.9)
52
y4 + 8y2 − 4y1 =
9
Since, y−1 and y4 are fictitious points. Using central difference approximations to the deriva-
tives, boundary conditions at the left and right ends of the beams, we have

y−1 − y1 y2 − y4
= 0, =0
2h 2h

Thus y−1 = y1 , and y4 = y2 , − − − − −(A)


Substituting (A) into equations (1.8) and (1.9), we get
\[
9y_1 - 4y_2 = \frac{43}{9}, \qquad 4y_1 - 9y_2 = -\frac{52}{9}.
\]
On solving , we get
y1 = 1.01709, y2 = 1.09402

Problem 1. The deflection of a cantilever beam is governed by the differential equation

\[
\frac{d^4y}{dx^4} + 81y = 729x^2,
\]
and the BC's are y(0) = y'(0) = 0 and y''(1) = y'''(1) = 0, indicating zero deflection and slope at the fixed end. The last two boundary conditions prescribe zero bending moment and shear force at the free end. Find the deflection at the pivotal points of the beam assuming h = 1/3.

Hem Raj Pandey, Assistant Professor, Pokhara University


8 Ordinary Differential Equation

Problem 2. Solve the boundary value problem governed by the equation

\[
\frac{d^2y}{dx^2} + y = 0,
\]
with the BC's y(0) + y'(0) = 2 and y(π/2) + y'(π/2) = −1, by dividing the interval into four subintervals.
