
Module on Numerical Analysis 2013EC

Hawassa University Daye Campus


Department of Mathematics

Module on Numerical Analysis II

Prepared by: Gedefaw Amsale (MSc)


Reviewed by: Ashebir Endale (MSc)

Edited by: Belsty Wale (MSc)

December, 2013EC
Table of Contents
Introduction
Chapter 1
  Introduction
  1.1 Finite Difference
  1.2 Forward Differences
  1.4 Backward Differences
  1.6 Interpolation with Equally Spaced Points
    1.6.1 Newton's Forward Interpolation Formula
  1.7 Interpolation with Unequally Spaced Points
    1.7.1 Lagrange's Interpolation Formula
    Newton's General Interpolation Formula with Divided Differences
Chapter 2: Revision on Numerical Integration
  2.1 Numerical Integration
    2.1.1 Newton-Cotes Quadrature Formula
    2.1.2 The Trapezoidal Rule
    2.1.3 Simpson's 1/3 Rule
Chapter 3: Least Squares Approximation
Chapter 4: Numerical Methods for Initial Value Problems
  Numerical solutions of ODE
  Taylor series method
  Euler method
  Modified Euler method
Introduction
Unlike other terms denoting mathematical disciplines, such as calculus or linear algebra, the exact
extent of the discipline called numerical analysis is not yet clearly defined. By numerical
analysis we mean the theory of constructive methods in mathematical analysis. By a constructive
method we mean a procedure that permits us to obtain the solution of a mathematical problem
with arbitrary precision in a finite number of steps that can be performed rationally (the number
of steps depends on the desired accuracy).

Numerical analysis is both a science and an art. As a science, it is concerned with the process by
which a mathematical problem can be solved by the operations of arithmetic. As an art,
numerical analysis is concerned with choosing the procedure which is best suited to the solution
of a particular problem. In general numerical analysis is the study of appropriate algorithms for
solving problems of mathematical analysis by means of arithmetic calculations.

Analytic methods have certain limitations in practical applications, so in many problems of applied
mathematics an exact solution is not attainable. In such cases numerical methods are very
important tools: they provide a practical means of calculating the solutions of problems to a desired
degree of accuracy. The use of high-speed digital computers for solving problems in engineering
design and scientific research has increased the demand for numerical methods.

Problems that are impossible to solve by classical methods, or that are too formidable for
solution by manual computation, can be resolved in a minimum of time using digital computers.
It is therefore essential to be familiar with the numerical methods used when programming
such problems on a computer.

The computer, however, is only as useful as the numerical methods it employs. If the method is
inherently inaccurate, the computer solution is worthless, no matter how efficiently the method is
organized and programmed. On the other hand, an accurate, efficient numerical method will still
produce poor results if it is poorly organized or poorly programmed. The proper utilization of
the computer to solve scientific problems requires the development of a program based on a numerical
method suitable to the problem.

Chapter 1
Introduction
Given the set of tabular values (x₀, y₀), (x₁, y₁), (x₂, y₂), …, (xₙ, yₙ) satisfying the relation y = f(x),
where the explicit nature of f(x) is not known, it is required to find a simpler function, say ϕ(x),
such that f(x) and ϕ(x) agree at the set of tabulated points. Such a process is called
interpolation. If ϕ(x) is a polynomial, then the process is called polynomial interpolation and
ϕ(x) is called the interpolating polynomial. Similarly, different types of interpolation arise
depending on whether ϕ(x) is a trigonometric function, an exponential function, etc.

Numerical interpolation approximates functions and we approximate functions for one or several
of the following reasons:

 A large number of important mathematical functions may be known only through tables
of their values.
 Some functions may be known to exist but are computationally too complex to manipulate
numerically.
 Some functions may be known, but the solution of the problem in which they appear may
not have an obvious mathematical expression to work with.
Some of the methods of interpolation that will be considered in this unit include Newton’s
Forward and Backward difference interpolation formulae, Newton’s divided difference
interpolation formula and the Lagrangian interpolation.

1.1 Finite Difference


Assume that we have a table of values (xᵢ, yᵢ), i = 0, 1, 2, …, n of any function y = f(x), the values
of x being equally spaced, i.e. xᵢ = x₀ + ih, i = 0, 1, 2, …, n. Suppose that we are required to recover
the values of f(x) for some intermediate values of x, or to obtain the derivative of f(x) for some
x in the range x₀ ≤ x ≤ xₙ. The methods for the solution of these problems are based on the
concept of the differences of a function, which we now proceed to define.

1.2 Forward Differences


If y₀, y₁, y₂, …, yₙ denote a set of values of y, then y₁ − y₀, y₂ − y₁, …, yₙ − yₙ₋₁ are called the
differences of y. Denoting these differences by Δy₀, Δy₁, Δy₂, …, Δyₙ₋₁ respectively, we have

Δy₀ = y₁ − y₀,  Δy₁ = y₂ − y₁,  …,  Δyₙ₋₁ = yₙ − yₙ₋₁,

where Δ is called the forward difference operator and Δy₀, Δy₁, Δy₂, … are called first order
forward differences. The differences of the first order forward differences are called second
order forward differences and are denoted by Δ²y₀, Δ²y₁, Δ²y₂, …. Similarly, one can define
3rd, 4th, …, nth order forward differences:

Δ²y₀ = Δy₁ − Δy₀ = y₂ − 2y₁ + y₀

Δ³y₀ = Δ²y₁ − Δ²y₀ = y₃ − 3y₂ + 3y₁ − y₀

Δ⁴y₀ = Δ³y₁ − Δ³y₀ = y₄ − 4y₃ + 6y₂ − 4y₁ + y₀

⋮

Δⁿy₀ = yₙ − ⁿC₁ yₙ₋₁ + ⁿC₂ yₙ₋₂ − … + (−1)ⁿ y₀

Forward Difference Table

x     y=f(x)   Δ        Δ²       Δ³       Δ⁴       Δ⁵
x₀    y₀
               Δy₀
x₁    y₁                Δ²y₀
               Δy₁               Δ³y₀
x₂    y₂                Δ²y₁              Δ⁴y₀
               Δy₂               Δ³y₁               Δ⁵y₀
x₃    y₃                Δ²y₂              Δ⁴y₁
               Δy₃               Δ³y₂
x₄    y₄                Δ²y₃
               Δy₄
x₅    y₅
Example

Given f(0) = 3, f(1) = 12, f(2) = 81, f(3) = 200, f(4) = 100, f(5) = 8, find

a. Δ⁵f(0)    b. Δ²f(2)    c. Δ⁴f(1)

Solution

x    y=f(x)   Δ       Δ²      Δ³      Δ⁴      Δ⁵
0    3
              9
1    12               60
              69              −10
2    81                50             −259
              119             −269             755
3    200              −219            496
              −100            227
4    100               8
              −92
5    8

From the table: Δ⁵f(0) = 755, Δ²f(2) = −219 and Δ⁴f(1) = 496.
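The same table can be produced quickly in MATLAB with repeated calls to diff; the short sketch below is an illustration (not part of the original module) that reproduces the columns above.

y  = [3 12 81 200 100 8];     % f(0), f(1), ..., f(5)
D1 = diff(y)    % [9 69 119 -100 -92]   first forward differences
D2 = diff(D1)   % [60 50 -219 8]
D3 = diff(D2)   % [-10 -269 227]
D4 = diff(D3)   % [-259 496]
D5 = diff(D4)   % 755
% Read off: Delta^5 f(0) = 755, Delta^2 f(2) = -219, Delta^4 f(1) = 496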

1.4 Backward Differences


The differences y₁ − y₀, y₂ − y₁, …, yₙ − yₙ₋₁ are called first backward differences when they are
denoted by ∇y₁, ∇y₂, ∇y₃, …, ∇yₙ respectively, so that

∇y₁ = y₁ − y₀,  ∇y₂ = y₂ − y₁,  …,  ∇yₙ = yₙ − yₙ₋₁,

where ∇ is called the backward difference operator. In a similar way, one can define backward
differences of higher orders. Thus we obtain:

∇²y₂ = ∇y₂ − ∇y₁ = y₂ − 2y₁ + y₀

∇³y₃ = ∇²y₃ − ∇²y₂ = y₃ − 3y₂ + 3y₁ − y₀

∇⁴y₄ = ∇³y₄ − ∇³y₃ = y₄ − 4y₃ + 6y₂ − 4y₁ + y₀

⋮

∇ⁿyₙ = ∇ⁿ⁻¹yₙ − ∇ⁿ⁻¹yₙ₋₁ = yₙ − ⁿC₁ yₙ₋₁ + ⁿC₂ yₙ₋₂ − … + (−1)ⁿ y₀

Backward Difference Table

x     y=f(x)   ∇        ∇²       ∇³       ∇⁴       ∇⁵
x₀    y₀
               ∇y₁
x₁    y₁                ∇²y₂
               ∇y₂               ∇³y₃
x₂    y₂                ∇²y₃              ∇⁴y₄
               ∇y₃               ∇³y₄               ∇⁵y₅
x₃    y₃                ∇²y₄              ∇⁴y₅
               ∇y₄               ∇³y₅
x₄    y₄                ∇²y₅
               ∇y₅
x₅    y₅

Example

Construct the backward difference table for y = log₁₀ x given below.

x    10       20       30       40       50
y    1.0000   1.3010   1.4771   1.6021   1.6990
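A quick MATLAB check of this exercise (my own sketch, using the rounded table values) gives the difference columns directly; the last entry of each column is the backward difference ∇ᵏyₙ.

y  = [1 1.3010 1.4771 1.6021 1.6990];   % y = log10(x) at x = 10, 20, 30, 40, 50
D1 = diff(y)    % [0.3010 0.1761 0.1250 0.0969]
D2 = diff(D1)   % [-0.1249 -0.0511 -0.0281]
D3 = diff(D2)   % [0.0738 0.0230]
D4 = diff(D3)   % -0.0508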

The shift Operator (E)

The operator E is called the shift (or displacement, or translation) operator. It performs the
operation of increasing the argument x by the interval of differencing h, so that

E f(x) = f(x+h) in the case of a continuous variable x, and

E yₓ = yₓ₊₁ in the case of a discrete variable.

Similarly, E² f(x) = f(x + 2h).

Powers of the operator (positive or negative) are defined in a similar manner:

Eⁿ f(x) = f(x + nh),   Eⁿ yₓ = yₓ₊ₙ.

1. Evaluate the following:

a. Δ tan⁻¹x
b. (Δ/E)(3x − 4)
c. Δ(eˣ log 2x)
d. Δ(x²/cos 2x)

Solution

a. Δ tan⁻¹x = tan⁻¹(x + h) − tan⁻¹x.

Let tan⁻¹(x + h) = A and tan⁻¹x = B. Then

tan(A − B) = (tan A − tan B)/(1 + tan A tan B) = (x + h − x)/(1 + x(x + h)) = h/(1 + x(x + h)),

so A − B = Δ tan⁻¹x = tan⁻¹( h/(1 + x(x + h)) ).

b. (Δ/E)(3x − 4) = E⁻¹( Δ(3x − 4) ) = E⁻¹( 3(x + h) − 4 − (3x − 4) ) = E⁻¹(3h) = 3h.

1.6 Interpolation with Equally Spaced points


1.6.1 Newton’s Forward Interpolation Formula
Let the function y = f(x) take the values y₀, y₁, …, yₙ corresponding to the values x₀, x₁, …, xₙ
of x. Let these values of x be equally spaced, so that xᵢ = x₀ + ih (i = 1, 2, 3, …). Assume y(x) is
a polynomial of the nth degree in x such that

y(x₀) = y₀,  y(x₁) = y₁,  …,  y(xₙ) = yₙ.

Let the polynomial representation of (x₀,y₀), (x₁,y₁), …, (xₙ,yₙ) be

y(x) = a₀ + a₁(x−x₀) + a₂(x−x₀)(x−x₁) + a₃(x−x₀)(x−x₁)(x−x₂) + … + aₙ(x−x₀)(x−x₁)…(x−xₙ₋₁)  ………(*)

Putting x = x₀ and solving for a₀ we get a₀ = y₀.

Putting x = x₁ and solving for a₁ we get

y(x₁) = a₀ + a₁(x₁ − x₀)  ⟹  y₁ = y₀ + a₁h  ⟹  a₁ = (y₁ − y₀)/h = Δy₀/h.

In a similar manner we get a₂ = Δ²y₀/(2!h²), a₃ = Δ³y₀/(3!h³), …, aₙ = Δⁿy₀/(n!hⁿ).

Substituting these values of the aᵢ in (*) we obtain

y(x) = y₀ + (Δy₀/h)(x−x₀) + (Δ²y₀/(2!h²))(x−x₀)(x−x₁) + (Δ³y₀/(3!h³))(x−x₀)(x−x₁)(x−x₂) + … + (Δⁿy₀/(n!hⁿ))(x−x₀)(x−x₁)…(x−xₙ₋₁).

Now let P = (x − x₀)/h. Then for any i,

(x − xᵢ)/h = (x − x₀ + x₀ − xᵢ)/h = (x − x₀)/h − (xᵢ − x₀)/h = P − i,

so (*) is written in the form

y(x) = y₀ + P Δy₀ + [P(P−1)/2!] Δ²y₀ + [P(P−1)(P−2)/3!] Δ³y₀ + … + [P(P−1)(P−2)⋯(P−n+1)/n!] Δⁿy₀,

which is called Newton's Forward Interpolation Formula (NFIF).

We use Newton's Forward Interpolation Formula for interpolation near the beginning of the table.

Example

a. Construct the difference table for f(x) = 1/x, x ∈ [2, 2.5], with h = 0.1.
b. Using the NFIF find  i. f(2.25)   ii. f(2.35)

Solution
a.

x     y=f(x)   Δ        Δ²       Δ³       Δ⁴       Δ⁵
2.0   0.500
               −0.024
2.1   0.476             0.003
               −0.021            −0.002
2.2   0.455             0.001              0.003
               −0.020            0.001              −0.005
2.3   0.435             0.002              −0.002
               −0.018            −0.001
2.4   0.417             0.001
               −0.017
2.5   0.400

b i. P = (x − x₀)/h with h = 0.1, x = 2.25 and x₀ = 2.2, so P = (2.25 − 2.2)/0.1 = 0.5.

f(x) = y(x) = y₀ + P Δy₀ + [P(P−1)/2!] Δ²y₀ + [P(P−1)(P−2)/3!] Δ³y₀ + …

⟹ y(2.25) = 0.455 + (0.5)(−0.020) + [0.5(0.5−1)/2!](0.002) + [0.5(0.5−1)(0.5−2)/3!](−0.001) + …

y(2.25) ≅ 0.4446875

The exact value is y(2.25) = 1/2.25 = 0.44444….
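The computation in part (b i) can be automated. The following MATLAB sketch (an illustration, not from the module) evaluates the NFIF starting from x₀ = 2.2 with the rounded table values:

y  = [0.500 0.476 0.455 0.435 0.417 0.400];   % table of 1/x for x = 2.0:0.1:2.5
h  = 0.1;  i0 = 3;                            % start the formula at x0 = 2.2
P  = (2.25 - 2.2)/h;
d  = y;  val = y(i0);  term = 1;
for k = 1:3                                   % differences up to order 3
    d    = diff(d);                           % k-th forward differences
    term = term*(P - (k-1))/k;                % P(P-1)...(P-k+1)/k!
    val  = val + term*d(i0);                  % add term * Delta^k y0
end
% val = 0.4446875, close to the exact value 1/2.25 = 0.44444...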

Example

Find the 4th degree polynomial which takes the following values.

x   0   2   4   6   8   10
y   1   2   1   10  9   ?

Solution

Here h = 2 and P = (x − 0)/2 = x/2. Applying the NFIF to the first five tabulated values gives

y(x) = 1 + 7x − (65/12)x² + (5/4)x³ − (1/12)x⁴.

In particular, the missing value is y(10) = −54.

Example

From the table below, estimate the number of students who obtained marks between 40 and 45.

Mark              30–40   40–50   50–60   60–70   70–80
No. of students   31      42      51      35      31

Solution

First we prepare the cumulative frequency table:

Mark less than (x)   40    50    60     70     80
No. of students      31    73    124    159    190

Then we prepare the forward difference table:

x    yₓ     Δyₓ    Δ²yₓ    Δ³yₓ    Δ⁴yₓ
40   31
            42
50   73             9
            51              −25
60   124            −16              37
            35               12
70   159            −4
            31
80   190

We shall find y₄₅, the number of students with marks less than 45.

Taking x₀ = 40, x = 45 we have P = (x − x₀)/h = 5/10 = 0.5.

Using the NFIF we get

y₄₅ = y₄₀ + P Δy₄₀ + [P(P−1)/2!] Δ²y₄₀ + [P(P−1)(P−2)/3!] Δ³y₄₀ + [P(P−1)(P−2)(P−3)/4!] Δ⁴y₄₀

y₄₅ = 31 + (0.5)(42) + [0.5(0.5−1)/2!](9) + [0.5(0.5−1)(0.5−2)/3!](−25) + [0.5(0.5−1)(0.5−2)(0.5−3)/4!](37)

y₄₅ ≈ 47.87

The number of students with marks less than 45 is therefore about 48, and the number of students
with marks less than 40 is 31. Hence the number of students getting marks between 40 and 45 is
48 − 31 = 17.

Newton’s Backward Interpolation Formula

To interpolate a value of f(x) near the end of the tabulated values we use the polynomial

y(x) = a₀ + a₁(x−xₙ) + a₂(x−xₙ)(x−xₙ₋₁) + a₃(x−xₙ)(x−xₙ₋₁)(x−xₙ₋₂) + … + aₙ(x−xₙ)(x−xₙ₋₁)…(x−x₁)  ………(**)

Now let us find a₀, a₁, …, aₙ.

Putting x = xₙ and solving for a₀ we get a₀ = yₙ.

Putting x = xₙ₋₁ and solving for a₁ we get

y(xₙ₋₁) = a₀ + a₁(xₙ₋₁ − xₙ)  ⟹  yₙ₋₁ = yₙ − a₁h

⟹  a₁ = (yₙ₋₁ − yₙ)/(xₙ₋₁ − xₙ) = (yₙ − yₙ₋₁)/(xₙ − xₙ₋₁) = ∇yₙ/h.

In a similar manner we get a₂ = ∇²yₙ/(2!h²), a₃ = ∇³yₙ/(3!h³), …, aₙ = ∇ⁿyₙ/(n!hⁿ).

Let P = (x − xₙ)/h. Then

(x − xₙ₋ᵢ)/h = (x − xₙ + xₙ − xₙ₋ᵢ)/h = (x − xₙ)/h + (xₙ − xₙ₋ᵢ)/h = P + i,

so (x−xₙ)/h = P, (x−xₙ)(x−xₙ₋₁)/h² = P(P+1), (x−xₙ)(x−xₙ₋₁)(x−xₙ₋₂)/h³ = P(P+1)(P+2), and so on.

Substituting these values of the aᵢ and of P + i in (**) in their respective places, we obtain

y(x) = yₙ + P ∇yₙ + [P(P+1)/2!] ∇²yₙ + [P(P+1)(P+2)/3!] ∇³yₙ + … + [P(P+1)⋯(P+n−1)/n!] ∇ⁿyₙ,

which is Newton's Backward Interpolation Formula (NBIF).

Example

The population of a town in the census years was as given below. Estimate the population for the year 1925.

Year, x                       1891   1901   1911   1921   1931
Population, y (thousands)     46     66     81     93     101

Solution

First we construct the backward difference table:

x      y     ∇      ∇²     ∇³     ∇⁴
1891   46
             20
1901   66           −5
             15             2
1911   81           −3            −3
             12             −1
1921   93           −4
             8
1931   101

P = (x − xₙ)/h = (1925 − 1931)/10 = −0.6

y(x) = yₙ + P ∇yₙ + [P(P+1)/2!] ∇²yₙ + [P(P+1)(P+2)/3!] ∇³yₙ + …

y(1925) = 101 + (−0.6)(8) + [(−0.6)(−0.6+1)/2!](−4) + [(−0.6)(−0.6+1)(−0.6+2)/3!](−1) + [(−0.6)(−0.6+1)(−0.6+2)(−0.6+3)/4!](−3)

y(1925) ≅ 96.8368 thousands ≅ 96,837
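The same estimate can be reproduced with a short MATLAB sketch (mine, not part of the module) that accumulates the NBIF terms:

y = [46 66 81 93 101];  h = 10;  P = (1925 - 1931)/h;   % P = -0.6
d = y;  val = y(end);  term = 1;
for k = 1:4
    d    = diff(d);                 % k-th differences; d(end) = nabla^k y_n
    term = term*(P + (k-1))/k;      % P(P+1)...(P+k-1)/k!
    val  = val + term*d(end);
end
% val = 96.8368, i.e. about 96,837 people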

Exercise

1. The table below gives the values of tan x for 0.10 ≤ x ≤ 0.30.

x          0.10     0.15     0.20     0.25     0.30
y = tan x  0.1003   0.1511   0.2027   0.2553   0.3093

Find  a. tan 0.12   b. tan 0.26   c. tan 0.50

2. Find the number of men getting wages between Rs. 10 and Rs. 15 from the following data:

Wages in Rs.   0–10   10–20   20–30   30–40
Frequency      9      30      35      42

1.7 Interpolation with Unequally spaced point


When the values of the argument are not equally spaced, we use the following two formulae for
interpolation.

1.7.1 Lagrange’s Interpolation Formula


Let y₀ = f(x₀), y₁ = f(x₁), …, yₙ = f(xₙ) be the (n+1) entries of a function y = f(x). Let P(x) be a
polynomial of degree n corresponding to the arguments x₀, x₁, x₂, …, xₙ, which can be written as

Pₙ(x) = A₀(x−x₁)(x−x₂)…(x−xₙ) + A₁(x−x₀)(x−x₂)…(x−xₙ) + … + Aₙ(x−x₀)(x−x₁)…(x−xₙ₋₁)  ………(1)

where A₀, A₁, …, Aₙ are constants to be determined.

The constants A₀, A₁, …, Aₙ are determined by requiring that the tabulated function y = f(x)
and the polynomial P(x) agree at the set of tabulated points.

Putting x = x₀, x₁, x₂, …, xₙ in (1) successively, we get the following.

For x = x₀:  y₀ = A₀(x₀−x₁)(x₀−x₂)…(x₀−xₙ),  that is  A₀ = y₀ / [(x₀−x₁)(x₀−x₂)…(x₀−xₙ)].

For x = x₁:  y₁ = A₁(x₁−x₀)(x₁−x₂)…(x₁−xₙ),  that is  A₁ = y₁ / [(x₁−x₀)(x₁−x₂)…(x₁−xₙ)].

Similarly, for x = xₙ:  yₙ = Aₙ(xₙ−x₀)(xₙ−x₁)…(xₙ−xₙ₋₁),  that is  Aₙ = yₙ / [(xₙ−x₀)(xₙ−x₁)…(xₙ−xₙ₋₁)].

Substituting the values of A₀, A₁, …, Aₙ in equation (1), we get

Pₙ(x) = [(x−x₁)(x−x₂)…(x−xₙ)] / [(x₀−x₁)(x₀−x₂)…(x₀−xₙ)] · y₀
      + [(x−x₀)(x−x₂)…(x−xₙ)] / [(x₁−x₀)(x₁−x₂)…(x₁−xₙ)] · y₁ + …
      + [(x−x₀)(x−x₁)…(x−xₙ₋₁)] / [(xₙ−x₀)(xₙ−x₁)…(xₙ−xₙ₋₁)] · yₙ  ……………(2)

This is called Lagrange's interpolation formula, and it can be written in the general form

Pₙ(x) = Σᵢ₌₀ⁿ Lᵢ(x) fᵢ,

where

Lᵢ(x) = Πⱼ₌₀,ⱼ≠ᵢⁿ (x−xⱼ)/(xᵢ−xⱼ)
      = [(x−x₀)…(x−xᵢ₋₁)(x−xᵢ₊₁)…(x−xₙ)] / [(xᵢ−x₀)(xᵢ−x₁)…(xᵢ−xᵢ₋₁)(xᵢ−xᵢ₊₁)…(xᵢ−xₙ)],  i = 0, 1, 2, …, n,

are individually polynomials of degree n in x and are called the Lagrange interpolation
coefficients.

Note: Lᵢ(xⱼ) = 1 if i = j and 0 if i ≠ j.

Note: Lagrange's interpolation formula can also be used to split a given function into
partial fractions. Dividing both sides of equation (2) by (x−x₀)(x−x₁)(x−x₂)…(x−xₙ) we get

f(x) / [(x−x₀)(x−x₁)(x−x₂)…(x−xₙ)] = y₀ / [(x₀−x₁)(x₀−x₂)…(x₀−xₙ)] · 1/(x−x₀)
 + y₁ / [(x₁−x₀)(x₁−x₂)…(x₁−xₙ)] · 1/(x−x₁) + …
 + yₙ / [(xₙ−x₀)(xₙ−x₁)(xₙ−x₂)…(xₙ−xₙ₋₁)] · 1/(x−xₙ).

Example

Use Lagrange's method of interpolation to find the unique polynomial P(x) of degree 2 such
that P(1) = 1, P(3) = 27, P(4) = 64.

Solution

x₀ = 1, x₁ = 3, x₂ = 4 and y₀ = 1, y₁ = 27, y₂ = 64.

P₂(x) = [(x−x₁)(x−x₂)]/[(x₀−x₁)(x₀−x₂)] y₀ + [(x−x₀)(x−x₂)]/[(x₁−x₀)(x₁−x₂)] y₁ + [(x−x₀)(x−x₁)]/[(x₂−x₀)(x₂−x₁)] y₂

P₂(x) = [(x−3)(x−4)]/[(1−3)(1−4)] (1) + [(x−1)(x−4)]/[(3−1)(3−4)] (27) + [(x−1)(x−3)]/[(4−1)(4−3)] (64)

⟹ P₂(x) = 8x² − 19x + 12.
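A minimal MATLAB sketch (illustrative only, not from the module) evaluates the same Lagrange polynomial at a point and recovers its coefficients with polyfit:

xi = [1 3 4];  yi = [1 27 64];  xq = 2;  val = 0;
for i = 1:numel(xi)
    L = 1;
    for j = [1:i-1, i+1:numel(xi)]
        L = L*(xq - xi(j))/(xi(i) - xi(j));   % Lagrange coefficient L_i(xq)
    end
    val = val + yi(i)*L;
end
% val = 6, and polyfit(xi, yi, 2) returns [8 -19 12], i.e. 8x^2 - 19x + 12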
Exercise

1. The percentage of criminals for different age groups is given below. Determine the
percentage of criminals under 35 years using Lagrange's formula.

Age              % of criminals
Under 25 years   52
Under 30 years   67.3
Under 40 years   84.1
Under 50 years   94.4

Answer: 77.405

2. Using Lagrange's formula, express the function

(x² + 6x − 1) / [(x² − 1)(x − 4)(x − 6)]

as a sum of partial fractions.
[Hint: tabulate the values of f(x) = x² + 6x − 1 for x = −1, 1, 4, 6.]

Solution

f(x) = x² + 6x − 1, x₀ = −1, x₁ = 1, x₂ = 4, x₃ = 6 and y₀ = −6, y₁ = 6, y₂ = 39, y₃ = 71.

(x² + 6x − 1) / [(x² − 1)(x − 4)(x − 6)] = (x² + 6x − 1) / [(x + 1)(x − 1)(x − 4)(x − 6)]

 = y₀/[(x₀−x₁)(x₀−x₂)(x₀−x₃)] · 1/(x−x₀) + y₁/[(x₁−x₀)(x₁−x₂)(x₁−x₃)] · 1/(x−x₁)
 + y₂/[(x₂−x₀)(x₂−x₁)(x₂−x₃)] · 1/(x−x₂) + y₃/[(x₃−x₀)(x₃−x₁)(x₃−x₂)] · 1/(x−x₃)

(x² + 6x − 1) / [(x + 1)(x − 1)(x − 4)(x − 6)] = (3/35)·1/(x+1) + (1/5)·1/(x−1) − (13/10)·1/(x−4) + (71/70)·1/(x−6).

Newton’s General Interpolation Formula with Divided Difference


Lagrange's interpolation formula has the disadvantage that if another interpolation point is
added, all the interpolation coefficients have to be recalculated.

This labor of recomputing the interpolation coefficients is saved by using Newton’s divided
differences interpolation formula.

Let the function y=f ( x ) take the values y 0 , y 1 , … , y n corresponding to the values x 0 , x 1 , … , x n
which are not equally spaced. The difference of the function values with respect to the difference
of the arguments is called divided differences.

[x₀, x₁] = (y₁ − y₀)/(x₁ − x₀),  [x₁, x₂] = (y₂ − y₁)/(x₂ − x₁),  …,  [xₙ₋₁, xₙ] = (yₙ − yₙ₋₁)/(xₙ − xₙ₋₁)

are called first order divided differences.

[xₙ₋₁, xₙ, xₙ₊₁] = ([xₙ, xₙ₊₁] − [xₙ₋₁, xₙ]) / (xₙ₊₁ − xₙ₋₁)

is called the second order divided difference for the arguments xₙ₋₁, xₙ, xₙ₊₁.

In general, the nth order divided difference for the arguments x₀, x₁, …, xₙ is given by

[x₀, x₁, …, xₙ] = ([x₁, x₂, …, xₙ] − [x₀, x₁, …, xₙ₋₁]) / (xₙ − x₀).

Properties of Divided Differences


1. Divided differences are symmetric with respect to the arguments i.e., independent of the
order of arguments.
i.e.,[ x 0 , x 1 ]=[ x 1 , x 0 ] , [ x n−1 , x n , x n +1 ]=[ x n+1 , x n−1 , x n ] = [ x n , x n+1 , x n−1 ].
2. The nth order divided differences of a polynomial of nth degree are constant

Newton’s Divided Difference Formula

Let y₀, y₁, …, yₙ be the values of y = f(x) corresponding to the arguments x₀, x₁, …, xₙ.

Then from the definition of divided differences, we have

[x, x₀] = (y − y₀)/(x − x₀)  ⟹  y = y₀ + (x − x₀)[x, x₀]  …………..(i)

[x, x₀, x₁] = ([x, x₀] − [x₀, x₁])/(x − x₁)  ⟹  [x, x₀] = [x₀, x₁] + (x − x₁)[x, x₀, x₁]

Substituting this value of [x, x₀] in (i), we get

y = y₀ + (x − x₀)[x₀, x₁] + (x − x₀)(x − x₁)[x, x₀, x₁]  …………….(ii)

[x, x₀, x₁, x₂] = ([x, x₀, x₁] − [x₀, x₁, x₂])/(x − x₂)  ⟹  [x, x₀, x₁] = [x₀, x₁, x₂] + (x − x₂)[x, x₀, x₁, x₂]

Substituting this value of [x, x₀, x₁] in (ii), we get

y = y₀ + (x − x₀)[x₀, x₁] + (x − x₀)(x − x₁)( [x₀, x₁, x₂] + (x − x₂)[x, x₀, x₁, x₂] )

y = y₀ + (x − x₀)[x₀, x₁] + (x − x₀)(x − x₁)[x₀, x₁, x₂] + (x − x₀)(x − x₁)(x − x₂)[x, x₀, x₁, x₂]

Proceeding in this manner, we get

y = y₀ + (x − x₀)[x₀, x₁] + (x − x₀)(x − x₁)[x₀, x₁, x₂] + (x − x₀)(x − x₁)(x − x₂)[x₀, x₁, x₂, x₃] + …
  + (x − x₀)(x − x₁)(x − x₂)…(x − xₙ₋₁)[x₀, x₁, …, xₙ] + (x − x₀)(x − x₁)(x − x₂)…(x − xₙ)[x, x₀, x₁, …, xₙ],

which is called Newton's general interpolation formula with divided differences (the last term is the remainder).

Table of Divided Differences

x    y     1st DD     2nd DD        3rd DD           4th DD              5th DD
x₀   y₀
           [x₀,x₁]
x₁   y₁               [x₀,x₁,x₂]
           [x₁,x₂]                  [x₀,x₁,x₂,x₃]
x₂   y₂               [x₁,x₂,x₃]                     [x₀,x₁,x₂,x₃,x₄]
           [x₂,x₃]                  [x₁,x₂,x₃,x₄]                        [x₀,x₁,x₂,x₃,x₄,x₅]
x₃   y₃               [x₂,x₃,x₄]                     [x₁,x₂,x₃,x₄,x₅]
           [x₃,x₄]                  [x₂,x₃,x₄,x₅]
x₄   y₄               [x₃,x₄,x₅]
           [x₄,x₅]
x₅   y₅
Example

Apply Newton's divided difference formula to find f(8) if

f(1) = 3, f(3) = 31, f(6) = 223, f(10) = 1011, f(11) = 1343.

Solution

x    y=f(x)   1st DD   2nd DD   3rd DD   4th DD
1    3
              14
3    31                10
              64                 1
6    223               19                0
              197                1
10   1011              27
              332
11   1343

(For instance, (31 − 3)/(3 − 1) = 14, (223 − 31)/(6 − 3) = 64, (64 − 14)/(6 − 1) = 10, (19 − 10)/(10 − 1) = 1, and so on.)

y = y₀ + (x−x₀)[x₀,x₁] + (x−x₀)(x−x₁)[x₀,x₁,x₂] + (x−x₀)(x−x₁)(x−x₂)[x₀,x₁,x₂,x₃] + (x−x₀)(x−x₁)(x−x₂)(x−x₃)[x₀,x₁,x₂,x₃,x₄]

y = 3 + (x−1)(14) + (x−1)(x−3)(10) + (x−1)(x−3)(x−6)(1) + (x−1)(x−3)(x−6)(x−10)(0)

y = 3 + (x−1)(14) + (x−1)(x−3)(10) + (x−1)(x−3)(x−6)

y(8) = 3 + (8−1)(14) + (8−1)(8−3)(10) + (8−1)(8−3)(8−6) = 521
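The divided difference table and the value f(8) can be checked with the following MATLAB sketch (an illustration, not part of the module):

x = [1 3 6 10 11];  y = [3 31 223 1011 1343];  n = numel(x);
D = zeros(n);  D(:,1) = y(:);            % column k holds the (k-1)th divided differences
for k = 2:n
    for i = 1:n-k+1
        D(i,k) = (D(i+1,k-1) - D(i,k-1))/(x(i+k-1) - x(i));
    end
end
xq = 8;  val = D(1,1);  w = 1;
for k = 2:n
    w   = w*(xq - x(k-1));               % (x-x0)(x-x1)...(x-x_{k-2})
    val = val + D(1,k)*w;
end
% D(1,2:5) = [14 10 1 0] and val = 521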

Exercise

From the following data estimate the number of persons having incomes between 2000 and 2500:

Income           Below 500   500–1000   1000–2000   2000–3000   3000–4000
No. of persons   6000        4250       3600        1500        650

Solution

First we prepare the cumulative frequency table:

Income less than (x)   500     1000     2000     3000     4000
No. of persons         6000    10250    13850    15350    16000

Next we draw the Newton divided difference table:

x      y        1st DD   2nd DD      3rd DD       4th DD
500    6000
                8.5
1000   10250             −0.003267
                3.6                   8.87×10⁻⁷
2000   13850             −0.00105                 −1.9×10⁻¹⁰
                1.5                   2.08×10⁻⁷
3000   15350             −0.000425
                0.65
4000   16000

y = y₀ + (x−x₀)[x₀,x₁] + (x−x₀)(x−x₁)[x₀,x₁,x₂] + (x−x₀)(x−x₁)(x−x₂)[x₀,x₁,x₂,x₃] + (x−x₀)(x−x₁)(x−x₂)(x−x₃)[x₀,x₁,x₂,x₃,x₄]

y(2500) = 6000 + (2000)(8.5) + (2000)(1500)(−0.003267) + (2000)(1500)(500)(8.87×10⁻⁷) + (2000)(1500)(500)(−500)(−1.9×10⁻¹⁰)

y(2500) ≈ 6000 + 17000 − 9800 + 1330 + 145 ≈ 14675

That means the number of persons having income less than 2500 is about 14675.

The number of persons having income less than 2000 is 13850.

Hence the number of persons having income between 2000 and 2500 is 14675 − 13850 = 825.
Chapter 2: Revision on Numerical Integration
 Chapter Objectives
 After studying this chapter, you should be able to:
 Derive the formula for numerical differentiation.
 Find the derivatives of functions using a numerical method.
 Derive a formula for the trapezoidal rule.
 Set an error bound for the trapezoidal rule.
 Solve definite integrals using the trapezoidal rule.
 Derive a formula for Simpson's rule.
 Set an error bound for Simpson's rule.
 Solve definite integrals using Simpson's rule.

2.1 Numerical Integration


As with numerical differentiation, we need to seek the help of numerical integration techniques for
the following reasons:

1. The function does not possess a closed form antiderivative.

2. A closed form antiderivative exists, but it is complex and difficult to use for
calculations.
3. Data for the variables are available only in the form of a table, and no mathematical relationship
between them is known, as is often the case with experimental data.

2.1.1. Newton-Cotes Quadrature Formula


Let y = f(x) be a function, where y takes the values y₀, y₁, y₂, …, yₙ at x₀, x₁, x₂, …, xₙ. We
want to find the value of

I = ∫ₐᵇ f(x) dx.

Let the interval of integration (a, b) be divided into n equal subintervals of width h = (b − a)/n,
so that x₀ = a, x₁ = x₀ + h, x₂ = x₀ + 2h, …, xₙ = x₀ + nh = b. Therefore

I = ∫ₐᵇ f(x) dx = ∫ from x₀ to x₀+nh of f(x) dx  …………………(i)

We can approximate f(x) by Newton's forward interpolation formula, which is given by

y(x) = y₀ + P Δy₀ + [P(P−1)/2!] Δ²y₀ + [P(P−1)(P−2)/3!] Δ³y₀ + … + [P(P−1)(P−2)⋯(P−n+1)/n!] Δⁿy₀,

where P = (x − x₀)/h  ⟹  dP = dx/h  ⟹  dx = h dP.

Therefore equation (i) becomes

I = ∫ₐᵇ f(x) dx = h ∫₀ⁿ [ y₀ + P Δy₀ + (P(P−1)/2!) Δ²y₀ + (P(P−1)(P−2)/3!) Δ³y₀ + … ] dP

I = nh [ y₀ + (n/2) Δy₀ + (n(2n−3)/12) Δ²y₀ + (n(n−2)²/24) Δ³y₀ + … up to (n+1) terms ]  …(ii)

This formula is called the Newton–Cotes quadrature formula.


2.1.2. The Trapezoidal Rule
The trapezoidal rule is the simplest practical numerical integration method. It is based on the
principle of finding the area of a trapezium: the curve y = f(x) is replaced by a straight line.
Typically, we approximate the area A under the curve y = f(x) between the ordinates at x₀ and x₁ by

A ≅ (h/2)(y₀ + y₁),

where y₀ = f(x₀), y₁ = f(x₁) and h is the distance between x₀ and x₁.

Derivation of the trapezoidal formula

Putting n = 1 in equation (ii) and taking the curve y = f(x) through (x₀, y₀) and (x₁, y₁) as a
polynomial of degree one, so that differences of order higher than one vanish, we get

∫ from x₀ to x₀+h of f(x) dx = h( y₀ + ½ Δy₀ ) = (h/2)( 2y₀ + (y₁ − y₀) ) = (h/2)( y₀ + y₁ ).

Similarly, for the next subintervals (x₀+h, x₀+2h), (x₀+2h, x₀+3h), …, we get

∫ from x₀+h to x₀+2h of f(x) dx = (h/2)(y₁ + y₂),  ∫ from x₀+2h to x₀+3h of f(x) dx = (h/2)(y₂ + y₃),  …,
∫ from x₀+(n−1)h to x₀+nh of f(x) dx = (h/2)(yₙ₋₁ + yₙ).

Adding the above integrals, we get

I = ∫ₐᵇ f(x) dx = (h/2)(y₀ + y₁) + (h/2)(y₁ + y₂) + (h/2)(y₂ + y₃) + … + (h/2)(yₙ₋₁ + yₙ)
  = (h/2)[ (y₀ + yₙ) + 2( y₁ + y₂ + y₃ + … + yₙ₋₁ ) ],

which is known as the trapezoidal rule.
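As a small sketch (not part of the module), the composite trapezoidal rule translates directly into MATLAB; the integrand chosen here is the one used in the worked example later in this chapter:

f = @(x) 1./(1 + x.^2);
a = 0;  b = 6;  n = 6;  h = (b - a)/n;
xk = a + (0:n)*h;  yk = f(xk);
I  = (h/2)*(yk(1) + 2*sum(yk(2:end-1)) + yk(end))   % I = 1.4108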

2.1.3. Simpson's 1/3 Rule

Putting n = 2 in equation (ii) and taking the curve y = f(x) through (x₀, y₀), (x₁, y₁) and (x₂, y₂)
as a polynomial of degree two, so that differences of order higher than two vanish, we get

∫ from x₀ to x₀+2h of f(x) dx = 2h( y₀ + Δy₀ + (1/6) Δ²y₀ ) = (2h/6)[ 6y₀ + 6(y₁ − y₀) + (y₂ − 2y₁ + y₀) ] = (h/3)( y₀ + 4y₁ + y₂ ).

Similarly,

∫ from x₀+2h to x₀+4h of f(x) dx = 2h( y₂ + Δy₂ + (1/6) Δ²y₂ ) = (h/3)( y₂ + 4y₃ + y₄ ),  …,
∫ from x₀+(n−2)h to x₀+nh of f(x) dx = 2h( yₙ₋₂ + Δyₙ₋₂ + (1/6) Δ²yₙ₋₂ ) = (h/3)( yₙ₋₂ + 4yₙ₋₁ + yₙ ).

Adding the above integrals, we get

I = ∫ₐᵇ f(x) dx ≅ (h/3)[ y₀ + 4( y₁ + y₃ + … + yₙ₋₁ ) + 2( y₂ + y₄ + … + yₙ₋₂ ) + yₙ ],

which is known as Simpson's one-third rule.

Note: to use Simpson's one-third rule, the given interval of integration must be divided into an
even number of subintervals.

2.1.4. Simpson's 3/8 Rule

Putting n = 3 in equation (ii) and taking the curve y = f(x) through (x₀, y₀), (x₁, y₁), (x₂, y₂)
and (x₃, y₃) as a polynomial of degree three, so that differences of order higher than three
vanish, we get

∫ from x₀ to x₀+3h of f(x) dx = 3h( y₀ + (3/2) Δy₀ + (3/4) Δ²y₀ + (1/8) Δ³y₀ )
 = (3h/8)[ 8y₀ + 12(y₁ − y₀) + 6(y₂ − 2y₁ + y₀) + (y₃ − 3y₂ + 3y₁ − y₀) ]
 = (3h/8)( y₀ + 3y₁ + 3y₂ + y₃ ).

Similarly,

∫ from x₀+3h to x₀+6h of f(x) dx = (3h/8)( y₃ + 3y₄ + 3y₅ + y₆ ),  …,
∫ from x₀+(n−3)h to x₀+nh of f(x) dx = (3h/8)( yₙ₋₃ + 3yₙ₋₂ + 3yₙ₋₁ + yₙ ).

Adding the above integrals, we get

I = ∫ₐᵇ f(x) dx ≅ (3h/8)[ (y₀ + yₙ) + 3( y₁ + y₂ + y₄ + y₅ + … + yₙ₋₂ + yₙ₋₁ ) + 2( y₃ + y₆ + … + yₙ₋₃ ) ],

which is known as Simpson's three-eighth rule.

Note: to use Simpson's three-eighth rule, the given interval of integration must be divided into
subintervals whose number n is a multiple of 3.

2.1.5. Boole's Rule

Putting n = 4 in equation (ii) and taking the curve y = f(x) through (x₀, y₀), (x₁, y₁),
(x₂, y₂), (x₃, y₃) and (x₄, y₄) as a polynomial of degree four, so that differences of order higher
than four vanish, we get (for n a multiple of 4)

I = ∫ₐᵇ f(x) dx ≅ (2h/45)[ 7(y₀ + yₙ) + 32( y₁ + y₃ + y₅ + … + yₙ₋₁ ) + 12( y₂ + y₆ + … + yₙ₋₂ ) + 14( y₄ + y₈ + … + yₙ₋₄ ) ].

2.1.6. Weddle's Rule

Putting n = 6 in equation (ii) and taking the curve y = f(x) through (x₀, y₀), (x₁, y₁),
(x₂, y₂), (x₃, y₃), (x₄, y₄), (x₅, y₅) and (x₆, y₆) as a polynomial of degree six, so that differences of
order higher than six vanish, we get (for six subintervals)

I = ∫ₐᵇ f(x) dx ≅ (3h/10)[ y₀ + 5y₁ + y₂ + 6y₃ + y₄ + 5y₅ + y₆ ].

(For n a multiple of 6 the pattern repeats panel by panel, the shared ordinates y₆, y₁₂, … receiving the coefficient 2.)

Example

Evaluate ∫₀⁶ dx/(1 + x²) by using

a. the trapezoidal rule
b. Simpson's 1/3 rule
c. Simpson's 3/8 rule
d. Boole's rule
e. Weddle's rule

Solution

Divide the interval (0, 6) into six parts, each of width h = 1. The values of f(x) = 1/(1 + x²) are
f(0) = 1, f(1) = 0.5, f(2) = 0.2, f(3) = 0.1, f(4) = 0.0588, f(5) = 0.0385 and f(6) = 0.027.

a. By the trapezoidal rule,

∫₀⁶ dx/(1 + x²) = (h/2)[ y₀ + 2(y₁ + y₂ + y₃ + y₄ + y₅) + y₆ ]
 = (1/2)[ 1 + 2(0.5 + 0.2 + 0.1 + 0.0588 + 0.0385) + 0.027 ] = 1.4108.

Or in MATLAB:

>> x=0:1:6;

>> y=1./(1+x.^2);

>> I=trapz(x,y)

I = 1.4108

b. By Simpson's 1/3 rule,

∫₀⁶ dx/(1 + x²) = (h/3)[ y₀ + 4(y₁ + y₃ + y₅) + 2(y₂ + y₄) + y₆ ]
 = (1/3)[ 1 + 4(0.5 + 0.1 + 0.0385) + 2(0.2 + 0.0588) + 0.027 ] = 1.3662.

c. By Simpson's 3/8 rule,

∫₀⁶ dx/(1 + x²) = (3h/8)[ (y₀ + y₆) + 3(y₁ + y₂ + y₄ + y₅) + 2(y₃) ]
 = (3/8)[ (1 + 0.027) + 3(0.5 + 0.2 + 0.0588 + 0.0385) + 2(0.1) ] = 1.3571.

d. By Boole's rule,

∫₀⁶ dx/(1 + x²) = (2h/45)[ 7(y₀ + y₆) + 32(y₁ + y₃ + y₅) + 12(y₂) + 14(y₄) ]
 = (2/45)[ 7(1 + 0.027) + 32(0.5 + 0.1 + 0.0385) + 12(0.2) + 14(0.0588) ] ≈ 1.3709.

e. By Weddle's rule,

∫₀⁶ dx/(1 + x²) = (3h/10)[ y₀ + 5y₁ + y₂ + 6y₃ + y₄ + 5y₅ + y₆ ]
 = (3/10)[ 1 + 5(0.5) + 0.2 + 6(0.1) + 0.0588 + 5(0.0385) + 0.027 ] = 1.3735.
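For reference, the remaining rules of this example can be checked numerically with a short MATLAB sketch (mine, not the module's); note that MATLAB indexing starts at 1, so y(1) corresponds to y₀:

x = 0:1:6;  y = 1./(1 + x.^2);  h = 1;
S13 = (h/3)*(y(1) + 4*(y(2)+y(4)+y(6)) + 2*(y(3)+y(5)) + y(7))      % about 1.3662
S38 = (3*h/8)*((y(1)+y(7)) + 3*(y(2)+y(3)+y(5)+y(6)) + 2*y(4))      % about 1.3571
W   = (3*h/10)*(y(1)+5*y(2)+y(3)+6*y(4)+y(5)+5*y(6)+y(7))           % about 1.3734
exact = atan(6)                                                     % about 1.4056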
a. Numerical double integration

The double integral I = ∫_c^d ∫_a^b f(x, y) dx dy is evaluated numerically by two successive integrations
in the x and y directions, considering one variable at a time. Repeated application of the trapezoidal rule
(or Simpson's rule) yields a formula for I.

1. Trapezoidal rule: Dividing the interval (a, b) into n equal subintervals each of length h,
and the interval (c, d) into m equal subintervals each of length k, we have
xᵢ = x₀ + ih, x₀ = a, xₙ = b,
yⱼ = y₀ + jk, y₀ = c, yₘ = d.
Using the trapezoidal rule in both directions, we get

I = (h/2) ∫_c^d [ f(x₀, y) + f(xₙ, y) + 2{ f(x₁, y) + f(x₂, y) + … + f(xₙ₋₁, y) } ] dy

 = (hk/4) [ (sum of f at the four corner nodes)
          + 2(sum of f at the remaining boundary nodes)
          + 4(sum of f at the interior nodes) ],                    (*)

where fᵢⱼ = f(xᵢ, yⱼ). The computational molecule of the method (*) for n = m = 1 and n = m = 2 can be
written as a grid of these weights (figure not reproduced here).

2. Simpson's rule: We divide the interval (a, b) into 2n equal subintervals each of length h
and the interval (c, d) into 2m equal subintervals each of length k. Then, applying
Simpson's rule in both directions, we get

I = ∫_c^d ∫_a^b f(x, y) dx dy ≅ (hk/9) Σⱼ₌₀²ᵐ Σᵢ₌₀²ⁿ wᵢ wⱼ f(xᵢ, yⱼ),

where wᵢ and wⱼ are the one-dimensional Simpson weights 1, 4, 2, 4, …, 2, 4, 1, h and k are the spacings
in the x and y directions respectively, and

h = (b − a)/(2n),  k = (d − c)/(2m),
xᵢ = x₀ + ih, i = 1, 2, …, 2n−1,
yⱼ = y₀ + jk, j = 1, 2, …, 2m−1,
x₀ = a, x₂ₙ = b, y₀ = c, y₂ₘ = d.

The computational molecule for n = m = 1 and n = m = 2 can be written similarly (figure not reproduced here).

Example
1. Using the trapezoidal rule, evaluate I = ∫₁² ∫₁² dx dy/(x + y), taking h = k = 0.25 so that n = m = 4.
2. Apply Simpson's rule to evaluate the integral I = ∫ from 2 to 2.6 ∫ from 4 to 4.4 of dx dy/(xy),
taking h = 0.2 and k = 0.3 so that there are two subintervals in each direction.

Solution

1. Since h = 0.25, along the x-axis we have the nodes 1, 1.25, 1.5, 1.75, 2, and since k = 0.25,
along the y-axis we also have 1, 1.25, 1.5, 1.75, 2. With f(x, y) = 1/(x + y) and hk/4 = 1/64,

I = (1/64)[ (sum of f at the four corners) + 2(sum of f at the twelve remaining boundary nodes) + 4(sum of f at the nine interior nodes) ]

I ≈ 0.3407.

2. Since h = 0.2, on the x-axis we have the nodes 4, 4.2, 4.4, and since k = 0.3, on the y-axis we
have 2, 2.3, 2.6. With f(x, y) = 1/(xy) and hk/9 = 0.06/9,

I = (hk/9)[ (f(4,2) + f(4,2.6) + f(4.4,2) + f(4.4,2.6)) + 4( f(4,2.3) + f(4.2,2) + f(4.2,2.6) + f(4.4,2.3) ) + 16 f(4.2,2.3) ]

I = ∫ from 2 to 2.6 ∫ from 4 to 4.4 of dx dy/(xy) ≅ 0.025.
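Example 2 is one Simpson panel in each direction, so the tensor-product weights make a compact MATLAB check (a sketch of mine, not from the module):

x = [4 4.2 4.4];  y = [2 2.3 2.6];  h = 0.2;  k = 0.3;
[X, Y] = meshgrid(x, y);  F = 1./(X.*Y);
w = [1 4 1];                          % one-dimensional Simpson weights
I = (h*k/9)*(w*F*w')                  % I = 0.0250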

Exercise

Evaluate the double integral

2 1
2 xy
∫∫ dxdy
1 0 ( 1+ x 2 )( 1+ y 2 )
Using

a. The trapezoidal rule with h=k =0.25


b. The Simpson’s rule with h=k =0.25

Solution

a.

>> x=0:0.25:1;

>> y=1:0.25:2;

>> [X,Y]=meshgrid(x,y);

>> F=2*X.*Y./((1+X.^2).*(1+Y.^2));   % element-wise operations; remember that X and Y are capital letters

>> I=trapz(y,trapz(x,F,2))

I = 0.3123

Chapter 3
Least Squares Approximation

This method of curve fitting was suggested early in the nineteenth century by the French
mathematician Adrien-Marie Legendre.
The method of least squares assumes that the best-fitting line is the curve for which the sum of
the squares of the vertical distances of the points (xᵢ, yᵢ) from the line is minimum.
The simplest example of a least-squares approximation is fitting a straight line to a set of paired
observations (x₁, y₁), (x₂, y₂), …, (xₙ, yₙ). The mathematical expression for the straight line is

y = a₀ + a x + e,

where e is the error of the fit.

Minimizing sum of squared errors


Sum of the squared errors:

φ(a₀, a) = Σₖ₌₁ⁿ (a₀ + a xₖ − yₖ)²

Necessary conditions for a minimum (from the partial derivatives of calculus):

∂φ/∂a₀ = 0 and ∂φ/∂a = 0,

which lead to the normal equations

Σₖ₌₁ⁿ 2(a₀ + a xₖ − yₖ) = 0
Σₖ₌₁ⁿ 2(a₀ + a xₖ − yₖ) xₖ = 0

Simplified notation

Let p = Σₖ xₖ, q = Σₖ yₖ, r = Σₖ xₖyₖ and s = Σₖ xₖ². This yields

a₀ n + a p = q
a₀ p + a s = r

or, in matrix form,

[ n  p ] [ a₀ ]   [ q ]
[ p  s ] [ a  ] = [ r ]

Solving this simple 2×2 system using Cramer's rule gives

a = (pq − nr)/(p² − ns),   a₀ = (pr − sq)/(p² − ns).

Matrix view of normal equation

It often happens that Ax=b has no solution. The usual reason is

The matrix has more rows than columns. There are more equations than unknowns.

We cannot always get the error e = b − Ax down to zero. When e is zero, x is an exact solution of
Ax = b. When the length of e is as small as possible, x̂ is a least-squares solution. Our goal in this
section is to compute x̂ and use it. These are real problems and they need an answer.

When Ax = b has no solution, multiply by Aᵀ and solve AᵀA x̂ = Aᵀb.

Minimizing the error

How do we make the error e = b − Ax as small as possible? This is an important question with a
beautiful answer. The best x (called x̂) can be found by geometry, algebra or calculus:
a 90° angle, projection using P, or setting the derivative of the error to zero.

Here we will use calculus only.

By calculus:

Suppose y = a₀ + a x is the line which fits best the data (x₁, y₁), (x₂, y₂), …, (xₙ, yₙ).

Most functions are minimized by calculus. Here the error function E to be minimized is a sum of
squares e₁² + e₂² + e₃² + … + eₙ²:

E = ‖Ax − b‖² = (a₀ + a x₁ − y₁)² + (a₀ + a x₂ − y₂)² + … + (a₀ + a xₙ − yₙ)²

∂E/∂a₀ = 2 Σᵢ₌₁ⁿ (a₀ + a xᵢ − yᵢ) = 0  ⟹  Σᵢ (a₀ + a xᵢ) = Σᵢ yᵢ

∂E/∂a = 2 Σᵢ₌₁ⁿ (a₀ + a xᵢ − yᵢ) xᵢ = 0  ⟹  Σᵢ (a₀ xᵢ + a xᵢ²) = Σᵢ xᵢyᵢ

Thus

[ n      Σxᵢ  ]
[ Σxᵢ    Σxᵢ² ] = AᵀA,  where A is the n×2 matrix whose i-th row is (1, xᵢ),

and

[ Σyᵢ   ]
[ Σxᵢyᵢ ] = Aᵀb.

The equations are identical with AᵀA x̂ = Aᵀb. The best a₀ and a are the components of x̂. The
equations from calculus are the same as the "normal equations" from linear algebra. These are
the key equations of least squares:

The partial derivatives of ‖Ax − b‖² are zero when AᵀA x̂ = Aᵀb.

Example: Suppose our data (xᵢ, yᵢ) consist of the pairs (−2, 4), (−1, 2), (0, 1), (2, 1) and (3, 1).

Find the least-squares line y = a₀ + a x.

Solution
x₁ = −2, x₂ = −1, x₃ = 0, x₄ = 2 and x₅ = 3,

p = Σₖ₌₁⁵ xₖ = −2 − 1 + 0 + 2 + 3 = 2.

y₁ = 4, y₂ = 2, y₃ = 1, y₄ = 1 and y₅ = 1,

q = Σₖ₌₁⁵ yₖ = 4 + 2 + 1 + 1 + 1 = 9.

r = Σₖ xₖyₖ = x₁y₁ + x₂y₂ + x₃y₃ + x₄y₄ + x₅y₅ = −8 − 2 + 0 + 2 + 3 = −5.

s = Σₖ xₖ² = x₁² + x₂² + x₃² + x₄² + x₅² = 4 + 1 + 0 + 4 + 9 = 18.

a = (pq − nr)/(p² − ns) = (2(9) − 5(−5)) / (2² − 18(5)) = 43/(−86) = −0.5

a₀ = (pr − sq)/(p² − ns) = (2(−5) − 18(9)) / (2² − 18(5)) = −172/(−86) = 2.

Therefore the line which fits the given data best is y = 2 − 0.5x.

Or, using matrices,

AᵀA = [ n, Σxᵢ ; Σxᵢ, Σxᵢ² ] = [ 5, 2 ; 2, 18 ],

Aᵀb = [ Σyᵢ ; Σxᵢyᵢ ] = [ 9 ; −5 ].

Let x̂ = [ a₀ ; a ]. Then

AᵀA x̂ = Aᵀb  ⟹  x̂ = [ a₀ ; a ] = (AᵀA)⁻¹ Aᵀb = [ 2 ; −0.5 ].

Therefore the best fitting straight line is y = 2 − 0.5x.
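In MATLAB the same fit is one line with the backslash operator; the sketch below (illustrative, not part of the module) sets up the design matrix explicitly:

x = [-2 -1 0 2 3]';  y = [4 2 1 1 1]';
A = [ones(size(x)) x];            % design matrix for y = a0 + a*x
coef = (A'*A)\(A'*y)              % normal equations; equivalently coef = A\y
% coef = [2; -0.5], i.e. y = 2 - 0.5x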

Curve fitting: Suppose we know that the relation between x and y is given by the quadratic law

y = a + bx + cx²,

so we want to fit a parabola y = a + bx + cx² to the data. Then our unknowns are a, b and c, and
they should satisfy the equations

yᵢ = a + b xᵢ + c xᵢ²,  i = 1, 2, 3, …, n.

In matrix form this is A [a; b; c] = y, where A is the n×3 matrix whose i-th row is (1, xᵢ, xᵢ²) and
y = (y₁, y₂, …, yₙ)ᵀ.

Example: Using the data (−2, 4), (−1, 2), (0, 1), (2, 1) and (3, 1), find the least-squares parabola
y = a + bx + cx².

Solution

A = [ 1 −2 4 ; 1 −1 1 ; 1 0 0 ; 1 2 4 ; 1 3 9 ],  y = (4, 2, 1, 1, 1)ᵀ,

AᵀA = [ 5 2 18 ; 2 18 26 ; 18 26 114 ],  Aᵀy = [ 9 ; −5 ; 31 ],

[ a ; b ; c ] = (AᵀA)⁻¹ (Aᵀy) ≈ [ 1.1169 ; −0.8052 ; 0.2792 ].

The best fitting parabola is

y = 1.1169 − 0.8052x + 0.2792x².
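The quadratic fit can be checked the same way (a sketch only); polyfit returns the same coefficients in descending powers:

x = [-2 -1 0 2 3]';  y = [4 2 1 1 1]';
A = [ones(size(x)) x x.^2];
abc = (A'*A)\(A'*y)               % approx [1.1169; -0.8052; 0.2792]
p   = polyfit(x, y, 2)            % approx [0.2792 -0.8052 1.1169]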

Orthogonal Polynomials
Discrete Least-Squares Approximation Problem
Given a set of m discrete data points (xᵢ, yᵢ), i = 1, 2, …, m, find the algebraic polynomial

Pₙ(x) = a₀ + a₁x + a₂x² + … + aₙxⁿ   (n < m)

such that the error E(a₀, a₁, …, aₙ) in the least-squares sense is minimized; that is,

E(a₀, a₁, …, aₙ) = Σᵢ₌₁ᵐ ( yᵢ − (a₀ + a₁xᵢ + a₂xᵢ² + … + aₙxᵢⁿ) )²  is minimum.

Here E(a₀, a₁, …, aₙ) is a function of the (n+1) variables a₀, a₁, …, aₙ.

Since E(a₀, a₁, …, aₙ) is a function of the variables a₀, a₁, …, aₙ, for this function to be a minimum
we must have

∂E/∂aᵢ = 0,  i = 0, 1, 2, …, n.

Now

∂E/∂a₀ = −2 Σᵢ₌₁ᵐ ( yᵢ − (a₀ + a₁xᵢ + a₂xᵢ² + … + aₙxᵢⁿ) )
∂E/∂a₁ = −2 Σᵢ₌₁ᵐ ( yᵢ − (a₀ + a₁xᵢ + a₂xᵢ² + … + aₙxᵢⁿ) ) xᵢ
⋮
∂E/∂aₙ = −2 Σᵢ₌₁ᵐ ( yᵢ − (a₀ + a₁xᵢ + a₂xᵢ² + … + aₙxᵢⁿ) ) xᵢⁿ

Setting these expressions to zero we have

a₀ Σ 1 + a₁ Σ xᵢ + a₂ Σ xᵢ² + ⋯ + aₙ Σ xᵢⁿ = Σ yᵢ
a₀ Σ xᵢ + a₁ Σ xᵢ² + a₂ Σ xᵢ³ + ⋯ + aₙ Σ xᵢⁿ⁺¹ = Σ xᵢyᵢ
⋮
a₀ Σ xᵢⁿ + a₁ Σ xᵢⁿ⁺¹ + a₂ Σ xᵢⁿ⁺² + ⋯ + aₙ Σ xᵢ²ⁿ = Σ xᵢⁿyᵢ

(all sums run from i = 1 to m).

Set

sₖ = Σᵢ₌₁ᵐ xᵢᵏ, k = 0, 1, 2, …, 2n  and  bₖ = Σᵢ₌₁ᵐ xᵢᵏ yᵢ, k = 0, 1, 2, …, n.

Using these notations, the above equations can be written as

s₀a₀ + s₁a₁ + … + sₙaₙ = b₀   (note that Σᵢ₌₁ᵐ xᵢ⁰ = s₀ = m)
s₁a₀ + s₂a₁ + … + sₙ₊₁aₙ = b₁
⋮
sₙa₀ + sₙ₊₁a₁ + … + s₂ₙaₙ = bₙ

This is a system of (n+1) equations in the (n+1) unknowns a₀, a₁, …, aₙ. These equations are
called the Normal Equations. This system can now be solved to obtain the (n+1) unknowns,
provided a solution to the system exists. The system can be written in the following matrix form:

[ s₀  s₁  ⋯  sₙ   ] [ a₀ ]   [ b₀ ]
[ s₁  s₂  ⋯  sₙ₊₁ ] [ a₁ ] = [ b₁ ]
[ ⋮   ⋮   ⋯  ⋮    ] [ ⋮  ]   [ ⋮  ]
[ sₙ  sₙ₊₁ ⋯ s₂ₙ  ] [ aₙ ]   [ bₙ ]

or Sa = b, where S is the matrix above, a = (a₀, a₁, …, aₙ)ᵀ and b = (b₀, b₁, …, bₙ)ᵀ.

Define A as the m×(n+1) matrix whose i-th row is (1, xᵢ, xᵢ², …, xᵢⁿ). Then the above system has the form

AᵀA a = b,  where b = Aᵀy with y = (y₁, …, yₘ)ᵀ.

In this case, the matrix S = AᵀA is symmetric and positive definite and is therefore nonsingular.

Thus, if the xᵢ are distinct, the equation Sa = b has a unique solution.

Least-Squares Approximation of a Function


We have described least-squares approximation to fit a set of discrete data. Here we describe
continuous least-squares approximation of a function f(x) by polynomials.
First, consider approximation by a polynomial with the monomial basis {1, x, x², …, xⁿ}.

Least-Squares Approximation of a Function Using Monomial Polynomials

Given a function f(x), continuous on [a, b], find a polynomial Pₙ(x) of degree at most n,

Pₙ(x) = a₀ + a₁x + a₂x² + … + aₙxⁿ,

such that the integral of the square of the error is minimized. That is,

E = ∫ₐᵇ ( f(x) − Pₙ(x) )² dx  is minimized.

The polynomial Pₙ(x) is called the Least-Squares Polynomial.

Since E is a function of a₀, a₁, …, aₙ, we denote this by E(a₀, a₁, …, aₙ).
For minimization, we must have

∂E/∂aᵢ = 0,  i = 0, 1, …, n.

As before, these conditions give rise to a system of (n+1) normal equations in the (n+1)
unknowns a₀, a₁, …, aₙ. Solution of these equations yields the unknowns.

Setting up the Normal Equations

Since

E = ∫ₐᵇ [ f(x) − (a₀ + a₁x + a₂x² + … + aₙxⁿ) ]² dx,

∂E/∂a₀ = −2 ∫ₐᵇ [ f(x) − (a₀ + a₁x + a₂x² + … + aₙxⁿ) ] dx
∂E/∂a₁ = −2 ∫ₐᵇ x [ f(x) − (a₀ + a₁x + a₂x² + … + aₙxⁿ) ] dx
⋮
∂E/∂aₙ = −2 ∫ₐᵇ xⁿ [ f(x) − (a₀ + a₁x + a₂x² + … + aₙxⁿ) ] dx.

Since ∂E/∂aᵢ = 0 for i = 0, 1, 2, …, n, we have

∫ₐᵇ f(x) dx    = a₀ ∫ₐᵇ dx + a₁ ∫ₐᵇ x dx + … + aₙ ∫ₐᵇ xⁿ dx
∫ₐᵇ x f(x) dx  = a₀ ∫ₐᵇ x dx + a₁ ∫ₐᵇ x² dx + … + aₙ ∫ₐᵇ xⁿ⁺¹ dx
⋮
∫ₐᵇ xⁿ f(x) dx = a₀ ∫ₐᵇ xⁿ dx + a₁ ∫ₐᵇ xⁿ⁺¹ dx + … + aₙ ∫ₐᵇ x²ⁿ dx.

Denote

sᵢ = ∫ₐᵇ xⁱ dx for i = 0, 1, 2, …, 2n  and  bᵢ = ∫ₐᵇ xⁱ f(x) dx for i = 0, 1, 2, …, n.

Then the above equations can be written as

s₀a₀ + s₁a₁ + s₂a₂ + … + sₙaₙ = b₀
s₁a₀ + s₂a₁ + s₃a₂ + … + sₙ₊₁aₙ = b₁
⋮
sₙa₀ + sₙ₊₁a₁ + sₙ₊₂a₂ + … + s₂ₙaₙ = bₙ,

or, in matrix form, Sa = b with the same structure as in the discrete case:

[ s₀  s₁  ⋯  sₙ   ] [ a₀ ]   [ b₀ ]
[ s₁  s₂  ⋯  sₙ₊₁ ] [ a₁ ] = [ b₁ ]
[ ⋮   ⋮   ⋯  ⋮    ] [ ⋮  ]   [ ⋮  ]
[ sₙ  sₙ₊₁ ⋯ s₂ₙ  ] [ aₙ ]   [ bₙ ]

Example
Find linear and quadratic least-squares approximations to f(x) = eˣ on [−1, 1].
Solution
Linear approximation: n = 1, P₁(x) = a₀ + a₁x.

s₀ = ∫₋₁¹ dx = 2,  s₁ = ∫₋₁¹ x dx = 0,  s₂ = ∫₋₁¹ x² dx = 2/3,
b₀ = ∫₋₁¹ f(x) dx = ∫₋₁¹ eˣ dx = e − 1/e = 2.3504,
b₁ = ∫₋₁¹ x f(x) dx = ∫₋₁¹ x eˣ dx = 2/e = 0.7358.

S = [ s₀ s₁ ; s₁ s₂ ] = [ 2 0 ; 0 2/3 ],  b = [ b₀ ; b₁ ] = [ 2.3504 ; 0.7358 ].

[ 2 0 ; 0 2/3 ] [ a₀ ; a₁ ] = [ 2.3504 ; 0.7358 ]  ⟹  a₀ = 1.1752 and a₁ = 1.1037.

Hence P₁(x) = 1.1752 + 1.1037x.

Quadratic fitting: n = 2, P₂(x) = a₀ + a₁x + a₂x².

s₀ = 2, s₁ = 0, s₂ = 2/3, s₃ = ∫₋₁¹ x³ dx = 0, s₄ = ∫₋₁¹ x⁴ dx = 2/5,
b₀ = 2.3504, b₁ = 0.7358, b₂ = ∫₋₁¹ x² eˣ dx = 0.8789.

[ 2 0 2/3 ; 0 2/3 0 ; 2/3 0 2/5 ] [ a₀ ; a₁ ; a₂ ] = [ 2.3504 ; 0.7358 ; 0.8789 ]

Hence [ a₀ ; a₁ ; a₂ ] ≈ [ 0.9963 ; 1.1037 ; 0.5368 ].

The quadratic least-squares polynomial is P₂(x) = 0.9963 + 1.1037x + 0.5368x².
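The normal equations of this example can also be formed numerically; the following MATLAB sketch (an illustration, assuming numerical quadrature is acceptable here) reproduces the quadratic coefficients:

s = zeros(1,5);  b = zeros(3,1);
for k = 0:4, s(k+1) = integral(@(x) x.^k, -1, 1); end          % s_k = integral of x^k
for k = 0:2, b(k+1) = integral(@(x) x.^k.*exp(x), -1, 1); end  % b_k = integral of x^k e^x
S = [s(1) s(2) s(3); s(2) s(3) s(4); s(3) s(4) s(5)];
a = S\b                          % approx [0.9963; 1.1037; 0.5368]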


Exercise
Find Linear and Quadratic least-squares approximations to f ( x )=x 2 +5 x+ 6 on [0 ,1].
Use of Orthogonal Polynomials in Least-Squares Approximations

Definition: The set of functions {ϕ₀, ϕ₁, …, ϕₙ} on [a, b] is called a set of orthogonal
functions with respect to a weight function w(x) if

∫ₐᵇ w(x) ϕⱼ(x) ϕᵢ(x) dx = 0 if i ≠ j, and = Cⱼ if i = j,

where Cⱼ is a real positive number.

Furthermore, if Cⱼ = 1 for j = 0, 1, …, n, then the orthogonal set is called an orthonormal set.
Using this property, least-squares computations can be made more numerically effective,
as shown below. Without any loss of generality, let us assume that w(x) = 1.
Given the set of orthogonal polynomials {Qᵢ(x)}ᵢ₌₀ⁿ, a polynomial Pₙ(x) of degree n can be
written as

Pₙ(x) = a₀Q₀(x) + a₁Q₁(x) + … + aₙQₙ(x), for some a₀, a₁, …, aₙ.

Finding the least-squares approximation of f(x) on [a, b] using orthogonal polynomials can then
be stated as follows.

Least-Squares Approximation of a Function Using Orthogonal Polynomials

Given f(x), continuous on [a, b], find a₀, a₁, …, aₙ using a polynomial of the form

Pₙ(x) = a₀Q₀(x) + a₁Q₁(x) + … + aₙQₙ(x),

where {Qₖ(x)}ₖ₌₀ⁿ is a given set of orthogonal polynomials on [a, b], such that the error function

E(a₀, a₁, …, aₙ) = ∫ₐᵇ ( f(x) − (a₀Q₀(x) + a₁Q₁(x) + … + aₙQₙ(x)) )² dx

is minimized.

As before, we set

∂E/∂aᵢ = 0,  i = 0, 1, …, n.

Now

∂E/∂a₀ = 0  ⟹  ∫ₐᵇ Q₀(x) f(x) dx = ∫ₐᵇ Q₀(x) ( a₀Q₀(x) + a₁Q₁(x) + … + aₙQₙ(x) ) dx.

Since {Qₖ(x)}ₖ₌₀ⁿ is an orthogonal set, we have

∫ₐᵇ Q₀²(x) dx = C₀  and  ∫ₐᵇ Q₀(x)Qᵢ(x) dx = 0 for i ≠ 0.

Applying the above orthogonality property, we see that

∫ₐᵇ Q₀(x) f(x) dx = C₀a₀,  that is,  a₀ = (1/C₀) ∫ₐᵇ Q₀(x) f(x) dx.

Similarly,

aₖ = (1/Cₖ) ∫ₐᵇ Qₖ(x) f(x) dx,  k = 0, 1, 2, …, n,  where  Cₖ = ∫ₐᵇ Qₖ²(x) dx.

Expressions for aₖ with a Weight Function w(x)

If the weight function w(x) is included, then aₖ is modified to

aₖ = (1/Cₖ) ∫ₐᵇ w(x) Qₖ(x) f(x) dx,  k = 0, 1, 2, …, n,  where  Cₖ = ∫ₐᵇ w(x) Qₖ²(x) dx.
Least-Squares Approximation Using Legendre Polynomials

Legendre's Equation and Legendre Functions

Qₙ(x) = 1/(2ⁿ n!) dⁿ/dxⁿ (x² − 1)ⁿ, for n = 0, 1, 2, 3, …, are called the Legendre polynomials.

Recall that the first few Legendre polynomials are

Q₀(x) = 1
Q₁(x) = x
Q₂(x) = (3x² − 1)/2
Q₃(x) = (5x³ − 3x)/2
Q₄(x) = (35x⁴ − 30x² + 3)/8
Q₅(x) = (63x⁵ − 70x³ + 15x)/8

To use the Legendre polynomials to approximate the least-squares solution of a function f(x), we
set w(x) = 1 and [a, b] = [−1, 1] and compute Cₖ and aₖ for k = 0, 1, …, n; that is,

Cₖ = ∫₋₁¹ Qₖ²(x) dx  and  aₖ = (1/Cₖ) ∫₋₁¹ f(x) Qₖ(x) dx.

The least-squares polynomial is then given by

Pₙ(x) = a₀Q₀(x) + a₁Q₁(x) + … + aₙQₙ(x).

A few Cₖ's are listed below:

C₀ = ∫₋₁¹ Q₀²(x) dx = ∫₋₁¹ 1 dx = 2
C₁ = ∫₋₁¹ Q₁²(x) dx = ∫₋₁¹ x² dx = 2/3
C₂ = ∫₋₁¹ Q₂²(x) dx = ∫₋₁¹ ((3x² − 1)/2)² dx = 2/5

and so on (in general Cₖ = 2/(2k + 1)).
Example
Find linear and quadratic least-squares approximations to f(x) = eˣ using Legendre polynomials.
Solution
Linear approximation: P₁(x) = a₀Q₀(x) + a₁Q₁(x), with Q₀(x) = 1, Q₁(x) = x.
Step 1. Compute

C₀ = ∫₋₁¹ Q₀²(x) dx = ∫₋₁¹ dx = 2,
C₁ = ∫₋₁¹ Q₁²(x) dx = ∫₋₁¹ x² dx = 2/3.

Step 2. Compute a₀ and a₁:

a₀ = (1/C₀) ∫₋₁¹ Q₀(x) f(x) dx = (1/2) ∫₋₁¹ eˣ dx = (1/2)(e − 1/e),
a₁ = (1/C₁) ∫₋₁¹ Q₁(x) f(x) dx = (3/2) ∫₋₁¹ x eˣ dx = 3/e.

P₁(x) = (1/2)(e − 1/e) + (3/e)x ≈ 1.1752 + 1.1037x.

Quadratic approximation: P₂(x) = a₀Q₀(x) + a₁Q₁(x) + a₂Q₂(x), with Q₂(x) = (3x² − 1)/2.

a₀ = (1/2)(e − 1/e) and a₁ = 3/e as before, and

C₂ = ∫₋₁¹ Q₂²(x) dx = (1/4) ∫₋₁¹ (3x² − 1)² dx = 2/5,
a₂ = (1/C₂) ∫₋₁¹ Q₂(x) f(x) dx = (5/2)·(1/2) ∫₋₁¹ (3x² − 1) eˣ dx = (5/2)(e − 7/e).

The quadratic least-squares polynomial is

P₂(x) = (1/2)(e − 1/e) + (3/e)x + (5/4)(e − 7/e)(3x² − 1) ≈ 0.9963 + 1.1037x + 0.5368x²,

in agreement with the monomial-basis result above.
Chebyshev polynomials: Another wonderful family of orthogonal polynomials

Definition: The set of polynomials defined by

Tₙ(x) = cos[n arccos x], n ≥ 0, on [−1, 1]

are called the Chebyshev polynomials.

To see that Tₙ(x) is a polynomial of degree n in our familiar form, we derive a recursive
relation by noting that
T₀(x) = 1 (the Chebyshev polynomial of degree zero),
T₁(x) = x (the Chebyshev polynomial of degree 1).
A Recursive Relation for Generating Chebyshev Polynomials
Substitute θ = arccos x. Then Tₙ(x) = cos(nθ), 0 ≤ θ ≤ π.

Tₙ₊₁(x) = cos(n+1)θ = cos nθ cos θ − sin nθ sin θ

Tₙ₋₁(x) = cos(n−1)θ = cos nθ cos θ + sin nθ sin θ

Adding the last two equations, we obtain
Tₙ₊₁(x) + Tₙ₋₁(x) = 2 cos nθ cos θ.
The right-hand side still does not look like a polynomial in x. Note that cos θ = x,
so
Tₙ₊₁(x) = 2 cos nθ cos θ − Tₙ₋₁(x) = 2x Tₙ(x) − Tₙ₋₁(x).
The above is a three-term recurrence relation to generate the Chebyshev polynomials.
Three-Term Recurrence Formula for Chebyshev Polynomials
T₀(x) = 1, T₁(x) = x

Tₙ₊₁(x) = 2x Tₙ(x) − Tₙ₋₁(x), n ≥ 1.

Using this recursive relation, the Chebyshev polynomials of the successive degrees can be
generated:
n = 1: T₂(x) = 2x T₁(x) − T₀(x) = 2x² − 1,
n = 2: T₃(x) = 2x T₂(x) − T₁(x) = 2x(2x² − 1) − x = 4x³ − 3x, and so on.
The orthogonality of the Chebyshev polynomials
We now show that the Chebyshev polynomials are orthogonal with respect to the weight function
w(x) = 1/√(1 − x²) on the interval [−1, 1].
To demonstrate the orthogonality of these polynomials, we show the following.

Orthogonal Property of the Chebyshev Polynomials

∫₋₁¹ Tₘ(x)Tₙ(x)/√(1 − x²) dx = 0 if m ≠ n,  and = π/2 if m = n ≥ 1  (and = π if m = n = 0).

First,

∫₋₁¹ Tₘ(x)Tₙ(x)/√(1 − x²) dx = ∫₋₁¹ cos(m arccos x) cos(n arccos x)/√(1 − x²) dx,  m ≠ n.

Since arccos x = θ, dθ = −dx/√(1 − x²), the above integral becomes

−∫ from π to 0 of cos mθ cos nθ dθ = ∫₀^π cos mθ cos nθ dθ.

Now cos mθ cos nθ can be written as ½[cos(m + n)θ + cos(m − n)θ], so

∫₀^π cos mθ cos nθ dθ = ½ ∫₀^π [cos(m + n)θ + cos(m − n)θ] dθ = 0.

Similarly, it can be shown [Exercise] that

∫₋₁¹ Tₙ²(x)/√(1 − x²) dx = π/2, for n ≥ 1.

The Least-Squares Approximation Using Chebyshev Polynomials

As before, the Chebyshev polynomials can be used to find least-squares approximations to a
function f(x), as stated below.
Set w(x) = 1/√(1 − x²), [a, b] = [−1, 1] and Qₖ(x) = Tₖ(x).
Then, using the orthogonality of the Chebyshev polynomials, it is easy to see that

C₀ = ∫₋₁¹ T₀²(x)/√(1 − x²) dx = π,
Cₖ = ∫₋₁¹ Tₖ²(x)/√(1 − x²) dx = π/2, k = 1, …, n.

Thus

a₀ = (1/π) ∫₋₁¹ f(x)/√(1 − x²) dx,
aᵢ = (2/π) ∫₋₁¹ f(x) Tᵢ(x)/√(1 − x²) dx, i = 1, …, n.

The least-squares approximating polynomial Pₙ(x) of f(x) using Chebyshev polynomials is
given by

Pₙ(x) = a₀T₀(x) + a₁T₁(x) + … + aₙTₙ(x),

with the coefficients a₀ and aᵢ as above.

Example

Find a linear least-squares approximation of f(x) = eˣ using Chebyshev polynomials.

Solution

P₁(x) = a₀T₀(x) + a₁T₁(x) = a₀ + a₁x,

where

a₀ = (1/π) ∫₋₁¹ f(x)/√(1 − x²) dx = (1/π) ∫₋₁¹ eˣ/√(1 − x²) dx = 1.2660,
a₁ = (2/π) ∫₋₁¹ f(x)T₁(x)/√(1 − x²) dx = (2/π) ∫₋₁¹ x eˣ/√(1 − x²) dx = 1.1303.

Thus P₁(x) = 1.2660 + 1.1303x.
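The two coefficients can be evaluated numerically; a small sketch (mine, not the module's) uses the substitution x = cos θ to remove the endpoint singularity of the weight:

a0 = (1/pi)*integral(@(t) exp(cos(t)), 0, pi)            % about 1.2661
a1 = (2/pi)*integral(@(t) exp(cos(t)).*cos(t), 0, pi)    % about 1.1303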


Chapter 4: Numerical methods for initial value problem

Numerical solutions of ODE


All numerical techniques for solving differential equations involve a series of estimates of y(x) starting from the
given conditions. There are two basic approaches that can be used to estimate the values of y(x):
one-step (single-step) methods and multistep methods. The methods for the solution of the IVP

y′ = f(x, y), y(x₀) = y₀

can be classified mainly into two types:

i. single step methods, and

ii. multistep methods.

We denote the numerical solution and the exact solution at x iby y iand y ( x i) respectively

Single step methods

In single step methods, we use information from only one preceding point; that is, to estimate the
value yᵢ we need only the condition at the previous point yᵢ₋₁.

Example:-Picard, Taylor’s series, Euler’s method, modified Euler’s method, Runge-Kutta’s


method etc.

Multi step methods

In multi-step methods, we use information at two or more previous steps to estimate a value of
y ( x).

Example :-Milne-Simpson methods, Adams-Bash forth method etc.

An ODE of the nth order is

f(x, y, y′, …, y⁽ⁿ⁾) = 0, or y⁽ⁿ⁾ = f(x, y, y′, …, y⁽ⁿ⁻¹⁾). …………….. (1)

The DE in equation (1) together with conditions given at a single point is called an initial value
problem (IVP), whereas the DE in equation (1) together with conditions given at two or more
points is called a boundary value problem.

Taylor series method


Taylor series method is the fundamental numerical method for the solution of the initial value
problem

y′ = f(x, y), y(x₀) = y₀.

Expanding y(x) in a Taylor series about any point xᵢ, we obtain

y(x) = y(xᵢ) + (x − xᵢ)y′(xᵢ) + (1/2!)(x − xᵢ)²y″(xᵢ) + … + (1/p!)(x − xᵢ)ᵖ y⁽ᵖ⁾(xᵢ) + (1/(p+1)!)(x − xᵢ)ᵖ⁺¹ y⁽ᵖ⁺¹⁾(xᵢ + θh),  (*)

where 0 < θ < 1, x ∈ [x₀, b] and b is the point up to which the solution is required.

We denote the numerical solution and the exact solution at xᵢ by yᵢ and y(xᵢ) respectively.

Now, consider the interval [xᵢ, xᵢ₊₁]. The length of the interval is h = xᵢ₊₁ − xᵢ.

Substituting x = xᵢ₊₁ in (*), we obtain

y(xᵢ₊₁) = y(xᵢ) + h y′(xᵢ) + (h²/2!) y″(xᵢ) + … + (hᵖ/p!) y⁽ᵖ⁾(xᵢ) + (hᵖ⁺¹/(p+1)!) y⁽ᵖ⁺¹⁾(xᵢ + θh).

Neglecting the error term, we obtain the Taylor series method

yᵢ₊₁ = yᵢ + h y′ᵢ + (h²/2!) y″ᵢ + … + (hᵖ/p!) y⁽ᵖ⁾ᵢ.

Note that the Taylor series method is an explicit single step method.

The truncation error is Tᵢ₊₁ = (hᵖ⁺¹/(p+1)!) y⁽ᵖ⁺¹⁾(xᵢ + θh), and the order of the method is p.

Example
Use Taylor's method to solve the equation y′ = x − y² with y(0) = 1, and find y(0.5).

Solution

y′(0) = 0 − y²(0) = 0 − 1² = −1.

y″(x) = 1 − 2y y′  ⟹  y″(0) = 1 − 2(1)(−1) = 3.

y‴(x) = −2((y′)² + y y″)  ⟹  y‴(0) = −2((−1)² + (1)(3)) = −2(1 + 3) = −8.

y⁗(x) = −2(3y′y″ + y y‴)  ⟹  y⁗(0) = −2(3(−1)(3) + (1)(−8)) = −2(−9 − 8) = 34.

y⁽⁵⁾(x) = −2(3(y″)² + 4y′y‴ + y y⁗)  ⟹  y⁽⁵⁾(0) = −2(27 + 32 + 34) = −186.

Hence

y(x) = 1 − x + (3/2)x² − (4/3)x³ + (17/12)x⁴ − (31/20)x⁵ + …

Now let us find the value of y(0.5):

y(0.5) = 1 − 0.5 + (3/2)(0.5)² − (4/3)(0.5)³ + (17/12)(0.5)⁴ − (31/20)(0.5)⁵

y(0.5) = 1 − 0.5 + 0.375 − 0.16667 + 0.08854 − 0.04844 ≈ 0.74844.

Exercise

1. Find the analytic solution of y ' =x 2 + y , y ( 0 )=1 and compare to Taylor’s series.
2. Use Taylor’s series method solve y ( 0.25 ) an d y ( 0.5 ) giventhat
' 2 3
y =x + y , y ( 0 )=1

Euler method
The method of Taylor’s series yields the solution of a DE in the form of a power series. Now
Euler’s method which gives the solution in the form of a set of tabulated values.

dy
=f ( x , y ) ⟹ dy=f ( x , y ) dx ⟹∫ dy=∫ f ( x , y ) dx
'
Consider y =f ( x , y ) ⟹
dx
y1 x1

In the interval ( x 0 , x 1 ) , ∫ dy=∫ f ( x , y ) dx becomes ∫ dy=∫∫ f ( x , y ) dx . Consider f ( x , y ) as a


y0 x0

y1 x1 y1 x1

constant that is f ( x , y )=f ( x 0 , y 0 ). Hence ∫ dy=∫∫ f ( x , y ) dx ⟹∫ dy =∫ ∫ f ( x 0 , y 0) dx


y0 x0 y0 x0

⟹ y 1− y 0=f ( x 0 , y 0 ) ( x 1−x 0 ) ⟹ y 1= y 0+ hf ( x 0 , y 0 )

Similarly in the interval ( x 1 , x 2 ) , y 2= y 1+ hf ( x1 , y 1) where ( x , y ) =f ( x 1 , y 1 ) .

Proceeding in this manner we obtain the general formula in the interval ( x n , x n +1)

y n+ 1= y n +hf ( x n , y n )

Remark:

Taylor series method cannot be applied to all problems as we need the higher order derivatives.
The higher order derivatives are obtained as

'
y =f ( x , y ) ,

'' ∂ f ∂ f dy
y = + =f + f f y ,
∂ x ∂ y dx x

' ''
y = (
∂ ∂ f ∂ f dy
+ + + ) (
∂ ∂ f ∂ f dy dy
∂ x ∂ x ∂ y dx ∂ y ∂ x ∂ y dx dx xx )2
=f +2 f f xy + f f yy+ f y ( f x +f f y ) ,etc.

The number of partial derivatives required, increases as the order of the derivative of y increases.
Therefore, we find that computation of higher order derivatives is very difficult. Hence, we need
suitable methods which do not require the computation of higher order derivatives.

Example:

1. Solve the initial value problem y y ' =x , y ( 0 )=1 ,using Euler method in 0 ≤ x ≤ 0.8 ,with h=0.2
andh=0.1. Compare the results with the exact solution at x=0.8 .

Solution:

' x
We have y =f ( x , y )=
y

hx i
Euler method gives y i+1 = y i+ hf ( xi , y i )= y i + .
yi

Initial condition gives x 0=0 , y 0=1 .


0.2 x i
When h=0.2 ,we get y i+1 = y i+
yi

We get the following results.

0.2 x 0
y ( x 1) = y ( 0.2 ) ≈ y 1= y 0+ =1.0.
y0

0.2 x 1
y ( x 2) = y ( 0.4 ) ≈ y 2= y 1 + =1.04 .
y1

0.2 x 2
y ( x 3 )= y ( 0.6 ) ≈ y 3= y 2 + =1.11692.
y2

0.2 x 3
y ( x 4 ) = y ( 0.8 ) ≈ y 4 = y 3 + =1.22436 . M
y3

0.1 x i
When h=0.1 ,we get y i+1 = y i+ .
yi

We get the following results.

0.1 x 0
y ( x 1) = y ( 0.1 ) ≈ y 1= y 0+ =1.0.
y0

0.1 x1
y ( x 2) = y ( 0.2 ) ≈ y 2= y 1+ =1.01 .
y1

0.1 x 2
y ( x 3 )= y ( 0.3 ) ≈ y 3= y 2 + =1.02980.
y2

0.1 x 3
y ( x 4 ) = y ( 0.4 ) ≈ y 4= y 3 + =1.05893 .
y3

0.1 x 4
y ( x 5 )= y ( 0.5 ) ≈ y 5= y 4 + =1.09670 .
y4

0.1 x 5
y ( x 6 )= y ( 0.6 ) ≈ y 6= y 5 + =1.14229 .
y5

0.1 x6
y ( x 7 )= y ( 0.7 ) ≈ y 7= y 6 + =1.19482 .
y6

0.1 x 7
y ( x 8 )= y ( 0.8 ) ≈ y 8= y 7 + =1.25341 .
y7
The exact solution is y= √ x 2 +1.

At x=0.8 , the exact value is y ( 0.8 )= √ 1.64=1.28062.

The magnitude of the errors in the solutions are the following:

h=0.2 :|1.28062−1.22436|=0.05626.

h=0.1 :|1.28062−1.25341|=0.02721.

2. Consider the IVP y ' =x ( y+1 ) , y ( 0 )=1. Compute y ( 0.2 )with h=0.1using
i. Euler method
ii. Taylor series method of order two,
2
x
iii. Fourth order Taylor series method. (Exact y=−1+ 2e 2 )

Modified Euler method


By Euler’s method, we have

x1

y 1= y 0 +∫ f ( x , y ) dx
x0

y 1 ≅ y 0 +hf (x 0 , y 0) in the interval (x 0 , x 1 )

Instead of approximating f (x , y ) by f (x 0 , y 0 ) now approximate the integral by means of


trapezoidal rule so, we have

h
y 1= y 0 +
2
( f ( x 0 , y 0 ) +f ( x1 , y 1))

h
In general we obtain the integral formula y 1
(n +1)
= y0+
2
( f ( x0 , y 0 ) + f ( x1 , y(n)1 ) )
(0 )
The iteration formula can be started by y 1 from Euler’s formula.

That is y 1 = y 0 +h ( f ( x 0 , y 0 ) )
(0 )

h
(n +1)
Therefore y k +1 = y k +
2
( k+1 ) ) is the ( n+1 ) approximation of y k +1.
f ( x k , y k ) + f ( x k+1 , y (n) th

Example
Find the solution y ' =x− y 2 at (0,1) and find y (0.1) correct to three decimal place accuracy with

h=0.1 Using modified Euler’s method.

Solution

Let us find y 1 = y 0 +h ( f ( x 0 , y 0 ) )
(0 )

h
That is y (01 )=1+0.1 ( 0−12 )=0.9 and then y 1
(n +1)
= y0+
2
( f ( x0 , y 0 ) + f ( x1 , y(n)1 ) )
0.1
( 1)
Take n=0 ⟹ y1 =1+ (−1+ 0.1−0.92 ) =0.9145
2

0.1
(2 )
Take n=1⟹ y 1 =1+ (−1+0.1−0.91452 )=0.9132
2

0.1
(3 )
Take n=2⟹ y 1 =1+ (−1+0.1−0.91322 )=0.9133
2

Now consider the error

|0.9133−0.9132|=0.0001 .That is 3 accurate decimal places.

Exercise

Determine the value of y when x=0.1∧h=0.05 given that y ( 0 )=1∧ y ' =x 2+ y using modified
Euler’s method.

Runge-Kutta’s method

Euler’s method is less efficient in practical problems since it requires h to be small for obtaining
reasonable accuracy.

Runge-Kutta’s methods are designed to give greater accuracy and they possess the advantage of
requiring only the function values at some selected points on the sub interval.

1. First order Runge-Kutta’s method


The Euler method is called F.O.R.K method.
y n+ 1= y n +hf ( x n , y n)
2. Second order Runge-Kutta’s method
The modified Euler’s method is called S.O.R.K method.
1
y n+ 1= y n + (k 0+ k 1) where k 0=hf (x n , y n) and k 1=hf (x n+1 , y n+1 )
2
The error is order 3(O (h3 )).
3. Third order Runge-Kutta’s method
1
y n+ 1= y n + ( k 0 +4 k 1 +k 2 ) where
6
k 0=hf (x n , y n)
h k
k 1=hf (x n + , y n + 0 )
2 2
¿
k 2=hf (x n +h , y n +k )
k ¿=hf (x n+ h , y n + k 0) is called 3rd .O.R.K method.
4. Fourth order Runge-Kutta’s method
The 4th O.R.K method is the most commonly used
1
y n+ 1= y n + ¿ where
6
k 0=hf (x n , y n)
h k0
k 1=hf (x n + , y n + )
2 2
k 2=hf (x n +h/ 2 , y n+ k 1 /2)
k 3=hf (x n +h , y n +k 2 )
The error of 4th O.R.K method is order 5.
Example
Solve y ' =x− y 2 with h=0.1 correct to four decimal place and find y (0.1) and y (0.2)
where y ( 0 )=1
Solution
We have h=0.1 and f ( x , y )=x− y 2
Using the 4th O.R.K method
Take n=0
1
y 1= y 0 + ¿
6
x 0=0 and y 0=1
k 0=hf ( x 0 , y 0 ) ⟹ k 0 =0.1 ( 0−1 )=−0.1

( h
k 1=hf x 0 + , y 0 +
2
k0
2 )
⟹ k 1=0.1 f ( 0+0.05,1−0.05 )

⟹ k 1=0.1 ( 0.05−0.95 ) =−0.08525


2
k 2=hf (x 0 +h/2 , y 0 +k 1 /2) ⟹ k 2=−0.08665668906
k 3=hf (x 0 +h , y 0 +k 2 ) ⟹ k 3=−0.0734
1
y 1=1+ ¿
6

Take n=1
h=0.1 , y 1 =0.9138 , x 1=0.1
1
y 2= y 1 + ¿
6
k 0=hf ( x 1 , y 1 )=−0.073503

( h
)
k0
k 1=hf x 1 + , y 1+ =−0.0619
2 2

k =hf ( x + , y + )=−0.0629
h k 1
2 1 1
2 2
k 3=hf ( x 1+ h , y 1 +k 2) =−0.0524
1
y 2=0.9138+ ¿
6
Take n=2
h=0.1 , y 2 =0.8501 , x 1=0.2
k 0=hf ( x 2 , y 2 )=−0.05227

( h
)
k0
k 1=hf x 2 + , y 2+ =−0.042895
2 2

k =hf ( x + , y + )=−0.050963
h k 1
2 2 2
2 2

k 3=hf ( x 2+ h , y 2 +k 2 )=−0.033865

1
y 3= y ( 0.2 )= y 2+ ¿
6
Exercise
Apply Runge-Kutta’s method approximate values of y for x=0.2 for h=0.1 given
that y ' =x + y 2 and y (0)=1

Multi -step methods (predictor-corrector method)


As we have seen above to solve a D.E over a single interval say from x=x n to x=x n+1 we require
information only at the beginning of the interval that is at x=x n.

Multistep methods are methods require function values at x n , x n−1 , x n−2 , … for the computation of
the function value at x n+1. A predictor formula is used to predict the value of y at x n+1 and the
corrector formula is used to improve the value of y n+ 1.

Adams -Bashforth method

dy
Given =f ( x , y) and y 0= y ( x 0), we compute
dx

y−1= y ( x 0−h ) , y−2= y ( x 0−2 h ) , y−3= y ( x 0−3 h ) by Taylor’s series or Euler’s method or Runge-
Kutta’s method.

Next we calculate

f −1=f ( x0 −h , y−1 ) , f −2 =f ( x 0−2 h , y−2 ) , f −3 =f ( x 0−3 h , y−3 )

Then to find y 1 we substitute Newton’s backward interpolation formula

n(n+1) 2 n(n+1)(n+2) 3 x−x 0


f ( x , y )=f 0+ n ∇ f 0 + ∇ f 0+ ∇ f 0 +… where n= and f 0=f (x 0 , y 0 )
2 3! h

x 0+h

in y 1= y 0 + ∫ f (x , y )dx … … … … … … … … … … … … … … … … … … … … … … ..(1)
x0

We know our problem was y ' =f (x , y ) so let us consider (x 0 , x 1 )

x1
n ( n+1 ) 2 n ( n+1 ) ( n+2 ) 3
y 1= y 0 +∫ [f 0 +n ∇ f 0 + ∇ f 0+ ∇ f 0 + …]dx
x0
2 3!

x−x 0 1
Sincen= ⟹ dn= dx ⟹ dx=hdn , for x= x0 , n=0 , for x=x 1 ,n=1
h h

1
n ( n+1 ) 2 n ( n+1 ) ( n+2 ) 3
y 1= y 0 + h∫ [f 0 +n ∇ f 0 + ∇ f 0+ ∇ f 0 + …] dn
0 2 3!

1 5 2 3 3 251 4
y 1= y 0 + h[1+ ∇+ ∇ + ∇ + ∇ + …]f 0
2 12 8 720
Neglecting fourth and higher order differences and expressing

2 3
∇ f 0 , ∇ f 0 and ∇ f 0 in terms of function values, we get

h
y 1= y 0 +
24
[ 55 f 0−59 f −1 +37 f −2−9 f −3 ] … … … … … … … … … … … … … … … … … …(2)

This is called Adams-Bashforth predictor formula.

p h
Hence y i+1 = y i+ [55 f i−59 f i−1 +37 f i−2 −9 f i−3 ]
24

is called Adams-Bash forth predictor formula, i=0,1,2 , …

Having found y 1, we find f 1=f ( x 0+ h , y 1 )

Then to find a better value of y 1 , we derive a corrector formula by substituting Newton’s


backward formula at f 1. That is

n ( n+1 ) 2 n ( n+1 ) ( n+2 ) 3


f ( x , y )=f 1+ n ∇ f 1+ ∇ f 1+ ∇ f 1 +… in equation (1)
2 3!

We get

x1
n ( n+1 ) 2 n ( n+ 1 )( n+2 ) 3
y 1= y 0 +∫ [f 1 +n ∇ f 1 + ∇ f 1+ ∇ f 1+ …]dx
x0
2 3!

[put x=x 1 +nh , dx=hdn ]

0
n ( n+1 ) 2 n ( n+1 )( n+ 2 ) 3
y 1= y 0 + h ∫ [ f 1 +n ∇ f 1 + ∇ f 1+ ∇ f 1 +…] dn
−1 2 3!

1 1 2 1 3 19 4
y 1= y 0 + h[1− ∇− ∇ − ∇ − ∇ −…] f 1
2 12 24 720

Neglecting fourth and higher order differences and expressing

2 3
∇ f 0 , ∇ f 0 and ∇ f 0 in terms of function values, we get

h
y 1= y 0 + [9 f 1+19 f 0−5 f −1 +f −2 ] ,where f 1=f (x 0+ h , y 1 )
24
Which is called Adams-Moulton corrector formula.

c h
Hence y i+1 = y i+ [9 f i+1 +19 f i −5 f i−1 +f i −2 ] is called Adams-moulton corrector formula for
24
i=0,1,2 , … .

Note

∇ f 0 =f 0 −f −1

∇ 2 f 0=f 0 −2 f −1 + f −2 and so on

Note

The values y−1 , y−2 and y−3 which are required on the right side of the predictor formula
obtained by means of the Taylor series or Euler’s method or Runge-Kutta’s method due to this
reason.

Example

Solve y ' =x− y 2 with h=0.1 correct to four decimal place and find y (0.4 ) using predictor-
corrector Adams-moulton for y (0)=1

Solution

Now we have y ' =x− y 2 with h=0.1

Using 4th .O.R.K method we have to find y ( 0.1 ) , y ( 0.2) and y (0.3)

x−3=0 , x−2=0.1, x−1=0.2 , x 0=0.3

x 0 0.1 0.2 0.3 0.4 …. 1

y 1 0.9138 0.8512 0.8070 ?

Hence by Adam-Bashforth method we have the following

p h
y i+1 = y i+ [55 f i−59 f i−1 +37 f i−2 −9 f i−3 ]
24

Take i=0
h
y 1p= y 0+
24
[ 55 f 0 −59 f −1 +37 f −2−9 f −3 ]

0.1
y 1p=0.8070+
24
[ 55 f ( 0.3,0.8070 )−59 f ( 0.2,0 .8512 ) +37 f ( 0.1,0 .9138 )−9 f ( 0,1 ) ]

p
y 1 =0.7796 and

c h p
y 1= y 0 + [9 f 1 +19 f 0−5 f −1 +f −2 ]
24

c 0.1
y 1=0.8070+ [9 f ( 0.4,0 .7796 ) +19 f ( 0.3,0 .8070 )−5 f ( 0.2,0.8512 ) + f (0.1,0 .9138)]
24

c
y 1=0.7793

Exercise

1. Find y (0.4 ) using Runge-Kutta’s methods of the above problem.


2. Given y ’=1+ y 2 , y ( 0 )=0 , find y (1) with h=0.2using predictor-corrector Adams-
moulton method.

Milne’s method
dy
Given =f ( x , y) and y ( x 0 )= y 0 ; to find an approximate value of y for x=x 0 +nh by
dx
Milne’s method , we proceed as follows:
The value y 0= y ( x 0 ) being given, we compute
y 1= y ( x 0 +h ) , y 2= y ( x 0+ 2 h ) , y 3 = y ( x 0 +3 h ) by Picard or Taylor’s series method.
Next we calculate
f 0=f ( x0 , y 0 ) , f 1=f ( x 0+ h , y 1) , f 2=f ( x 0 +2 h , y 2 ) , f 3=f ( x0 +3 h , y 3 )

Then to find y 4 = y ( x 0+ 4 h ) , we substitute Newton’s forward interpolation formula.

n (n−1) 2 n(n−1)( n−2) 3


f ( x , y )=f 0+ n ∆ f 0+ ∆ f 0+ ∆ f 0 +…
2 3!
x−x 0
where n= and f 0=f (x 0 , y 0 ).
h
Now let us consider in ( x 0 , x 4 )
y ’=f (x , y)
x4 x4

∫ dy=∫ f (x , y )dx
x0 x0

x4

y ( x 4 ) = y ( x 0 ) +∫ f (x , y )dx
x0

x4

y 4 = y 0 +∫ f (x , y )dx
x0

x i+1

y ( x i+1 )= y ( x j ) + ∫ f ( x , y )dx where j=i−3


xj

[ ]
x i+1
n( n−1) 2 n (n−1)(n−2) 3
y ( x i+1 )= y ( x j ) + ∫ f 0 +n ∆ f 0 + ∆ f 0+ ∆ f 0+ … dx
xj
2 3!

[ ]
4
n(n−1) 2 n (n−1)(n−2) 3
Hence y ( x 4 ) = y ( x 0 ) + h∫ f 0 +n ∆ f 0 + ∆ f 0+ ∆ f 0+ … dn
0 2 3!
20 2 8
y 4 = y 0 +h[ 4 f 0+ 8 ∆ f 0 + ∆ f 0 + ∆3 f 0 +… ]
3 3
20 2 8 3
y 4 ≅ y 0+ h[4 f 0 +8 ∆ f 0 + ∆ f 0+ ∆ f 0]
3 3
Neglecting 4th and higher order difference and expressing ∆ f 0 , ∆2 f 0 and ∆ 3 f 0 in terms of
the function value , we get.
4h
y4= y0+ [2 f 1−f 2 +2 f 3 ] Which is called a predictor.
3
Having found y 4 , we obtain a first approximation to
f 4=f (x 0 + 4 h , y 4 )
Then a better value of y 4 is found by Simpson’s rule as
h
y 4 = y 2 + [ f 2+ 4 f 3 +f 4 ] which is called a corrector.
3

Then an improved value of f 4 is computed and again the corrector is applied to find a still
better value of y 4 . We repeat this step until y 4 remains unchanged.

Once y 4 and f 4 are obtained to the desired accuracy , y 5=f (x 5 , y 5 ) is found from predictor as

h
y 5= y 3 + [2 f 2−f 3 +2 f 4 ] and f 5=f (x 5 , y 5 ) is calculated then a better approximation to the value
3
of y 5 is obtained from the corrector as

h
y 5= y 3 + [f 3 + 4 f 4 + f 5 ]
3
In general

4h
y 4 +i= y i + [2 f i +1−f i+2 +2 f i +3 ], for i=0,1,2 , … is called a predictor and
3

h
y 4 +i= y 2 +i+ [f i +2+ 4 f i+3 + f i+ 4 ], for i=0,1,2 , … is called a corrector
3

Example solve y ’=1+ y 2 with h=0.2 correct to four decimal place and find y (1) using Milne’s
predictor-corrector method when y (0)=0.

Solution
x 0=0 , x 1=0.2 , x2=0.4 , x3 =0.6 , x 4=0.8 and x 5=1
We know by Picard method
x
y n= y 0+∫ f ( x , y n−1 ) dx
0

f ( x , y )=1+ y 2
x
y 1= y 0 +∫ f ( x , y 0 ) dx=x
0

x x
x3
y 2= y 0 +∫ f ( x , y 1 ) dx=∫ (1+ ¿ x )dx= x+
2
¿
0 0 3

( )
x x 3 2 3 5 7
x x 2x x
y 3= y 0+∫ f ( x , y 2 ) dx=∫ (1+ ¿ x + ) d x=x + + + ¿
0 0 3 3 15 63
3
0.4
y ( 0.2 )=0.2 , y ( 0.4 ) =0.4+ =0.4213 , y ( 0.6 ) =0.6828
3

x 0 0.2 0.4 0.6 0.8 1

y 0 0.2 0.4213 0.6828

f 1=f ( x 1 , y 1 ) =1+0.22=1.04

f 2=f ( x 2 , y 2) =1+0.42132=1.177
2
f 3=f ( x 3 , y 3 )=1+0.6828 =1.4662
Predictor
4h
y4= y0+ ¿
3
0.8
y 4 =0+
3
[ 2 ( 1.04+1.4662 ) −1.177 ]=1.0227
2
f 4=f ( x 4 , y 4 ) =1+ 1.0227 =2.046
Corrector
h
y 4 = y 2 + [ f 2+ 4 f 3 +f 4 ]
3
0.2
y 4 =0.4213+
3
[ 1.177+ 4 ( 1.4662 )+ 2.046 ] =1.02715
2
f 4=f ( x 4 , y 4 ) =1+ 1.02715 =2.05504
Again corrector
0.2
y 4 =0.4213+
3
[ 1.177+ 4 ( 1.4662 )+ 2.05628 ]=1.027756
f 4=f ( x 4 , y 4 ) =1+ 1.0277562=2.05628
Again corrector
h
y 4 = y 2 + [ f 2+ 4 f 3 +f 4 ]
3
0.2
y 4 =0.4213+
3
[ 1.177+ 4 ( 1.4662 )+ 2.05628 ]=1.027838
The answer is y ( 0.8 )=1.0278
Using predictor
4h
y 5= y 1 + ¿
3
f 4=1+1.0278 2=2.0564
+0.8
y 5=0.2+ [ 5.0006 ] =1.5335
3
2
f 5=1+1.5335 =2.352
Using the corrector
h
y 5= y 3 + [ f +4 f 4 +f 5 ]=1.4857
3 3
f 5=f ( x 5 , y 5 )=1+1.48572=3.2074
Again the corrector
h
y 5= y 3 + [ f +4 f 4 +f 5 ]=1.543
3 3
f 5=f ( x 5 , y 5 )=1+1.5432=3.381
Again the corrector
h
y 5= y 3 + [ f +4 f 4 +f 5 ]=1.554
3 3
f 5=f ( x 5 , y 5 )=1+1.554 2=3.4149
Again the corrector
h
y 5= y 3 + [ f +4 f 4 +f 5 ]=1.556
3 3
The correct value of y ( 1 ) is 1.556

Higher-order equations and System

Reduction of second order equation to first order system

Let the second order initial value problem be given as:

a 0 ( x ) y ' ' ( x ) +a 1 ( x ) y ' ( x ) + a2 ( x ) y ( x )=r ( x ) , y ( x 0 )=b0 , y' ( x 0 )=b 1 … … … … (1)

We can reduce this second order initial value problem to a system of two first order equations.

Define u1= y . Then, we have the system

u'1= y' =u2 , u1 ( x 0 )= y ( x 0 )=b0 ,

1 1
u'2=u '1' = y' ' =
a0 (x )
[ r ( x )−a1 ( x ) y '( x )−a 1 ( x ) y ( x ) ]=
a0 (x )
[ r ( x )−a1 ( x ) u2 −a1 ( x ) u1 ] ,

u2 ( x 0 )=b 1= y ' (x 0 )

The system is given by

[][
u'1
u2'
=
f 2 ( x
u2
, u 1 , u 2 )][ ] [ ]
u (x ) b
, 1 0 = 0 … … … … … … … … ..(2)
u2 ( x 0 ) b 1

1
Where f 2 ( x , u1 , u2 ) =
a0 ( x )
[ r ( x )−a1 ( x ) u 2−a1 ( x ) u1 ]

In general, we may have a system as

[ ][ ][ ] [ ]
'
y1 f ( x , y 1 , y 2) y 1 (x 0) b 0
'
= 1 , = … … … … … … … … … … …(3)
y2 f 2 ( x , y 1 , y 2) y 2 (x 0) b1

In vector notation, denote

y=
[] [] []
y1
y2
f b
, f = 1 , b= 0
f2 b1
Then we can rewrite the system as

'
y =f ( x , y ) , y ( x 0 )=b … … … … … … … … … … .(4 )

Therefore, the methods derived for the solution of the first order initial value problem

dy
=f ( x , y ) , y ( x 0 ) = y 0 … … … … … … … … … … ..(5)
dx

can be used to solve the system of equations (3) or (4), that is, the second order initial
value problem (1), by writing the method in vector form.

Example: Reduce the following second order initial value problems into systems of first order
equations:

a. 2 y ' ' −5 y ' +6 y=3 x , y ( 0 ) =1 , y ' ( 0 )=2


b. x 2 y ' ' + ( 2 x +1 ) y' +3 y=6 , y ( 1 )=2 , y ' ( 1 )=0.5
Solution
a. Let u1= y . Then, we have the system
u'1=u 2 , u1 (0)=1
1
u'2= [ 3 x +5 y ' −6 y ]= 1 [ 3 x+5 u 2−6 u1 ] ,u2 ( 0 ) =2
2 2
The system may be written as

[][ ][ ] [ ]
'
u1 f ( x ,u 1 , u2 ) u1 (0) 1
= 1 , =
'
u2 f 2 ( x , u1 , u2 ) u2 (0) 2

1
Where f 1 ( x ,u 1 , u2 ) =u2 and f 2 ( x , u1 , u2 ) =
2
[3 x +5 u2−6 u 1 ]
b. Let u1= y . Then, we have the system
'
u1=u 2 , u1 (1)=2
1
'
u2 = 2
[ 6−( 2 x +1 ) y ' −3 y ]= 12 [ 6−( 2 x+ 1 ) u2 −3u 1 ] , u2 ( 1 )=0.5
x x
The system may be written as

[][
u'1
u2'
= 1 ,
][ ] [ ]
f ( x ,u 1 , u2 ) u1 (1)
= 2
f 2 ( x , u1 , u2 ) u2 (1) 0.5

1
2[
Where f 1 ( x ,u 1 , u2 ) =u2 and f 2 ( x , u1 , u2 ) = 6−( 2 x +1 ) u2−3 u1 ]
x

Exercise
Rewrite the initial-value problem for the system of 2 second-order differential
equations

{
'' ' '
y 1 + y 1−2 y 2+ 2 y 1− y 2=x ' '
'' '
, y 1 ( 0 )=1 , y 2 ( 0 )=−1, y 1 ( 0 )=2 , y 2 ( 0 )=3
y 2 −3 y 1 + y 1 +4 y 2=2 sin ⁡( x)

As an initial value problem for a system of 4 first order differential equations.

Solving a system of ODE-IVPS

1. Taylor Series Method

In vector format, we write the Taylor series method of order p as

' h2 ' ' hp (p)


y i+1 = y i+ h y i+ y i + …+ y i … … … … … … … … … … …(i)
2! p!

[ ]
k −1
d
f 1 ( x i , y 1 ,i , y 2, i )

[ ]
( k)
y d x k−1
Where y (ik )= 1, i
( k)
= k −1
y d
2, i
f 2 ( x i , y 1 ,i , y 2, i )
d x k−1

In component form we obtain

2 p
h h
( y 1 )i +1=( y 1 )i +h ( y ) + ( y'1' )i +…+ ( y (1p )) i … … … … … … … … … … …(ii)
'
1 i
2! p!

h2 '' hp
( y 2 )i +1=( y 2 )i +h ( y '2 )i + ( y 2 )i +…+ ( y (2p )) i … … … … … … … … … … …(iii)
2! p!

2. Euler’s method for solving the system is given by

{( y 1 )i+1= ( y 1 )i+ h ( y 1 )i=( y 1) i+ h f 1 ( x i , ( y 1 )i , ( y 2 )i)


'

( y 2 )i+1=( y 2 )i+ h ( y '2 )i=( y 2) i+ h f 2 ( x i , ( y 1 )i , ( y 2 )i)

3. Runge-Kutta’s Fourth Order Method


In vector format, we write the Runge-Kutta’s 4th order method as
1
y i+1 = y i+ ( k 1 +2 k 2 +2 k 3 +k 4 ) , i=0,1,2 , …
6

Where k 1= [ ] [ ] [ ] [ ]
k 11
k 21
k k k
, k 2= 12 , k 3= 13 , k 4 = 14
k 22 k 23 k 24

k n1 =h f n ( x i , ( y 1) i , ( y 2 )i ) , n=1,2.
( h
2
1
2
1
)
k n2 =h f n xi + , ( y 1 )i + k 11 , ( y 2 )i+ k 21 , n=1,2
2

( h
2
1
2
1
)
k n3 =h f n xi + , ( y1 )i + k 12 , ( y 2 )i + k 22 , n=1,2
2

k n 4=h f n ( x i+ h , ( y1 )i + k 13 , ( y 2 )i +k 23) ,n=1,2

In explicit form, we write the method as

1
( y 1 )i +1=( y 1)i + ( k 11+2 k 12 +2 k 13 +k 14 ) ,
6

1
( y 2 )i+1=( y 2)i + ( k 21+2 k 22+2 k 23 +k 24 )
6

If we denote y 1=u , y 2=v , then we can write the equations as

1
(u)i+1=(u)i+ ( k 11 +2 k 12+ 2 k 13+ k 14 ) ,
6

1
(v)i+1 =(v)i + ( k 21 + 2k 22+2 k 23 +k 24 )
6

Example

Compute approximations to y ( 0.4 ) and y '(0.4 ), for the initial value problem

y ' ' + 4 y=cosx , y ( 0 ) =1, y ' ( 0 )=0

Using (i) Taylor series method of 4th order, (ii) Runge-Kutta’s method of 4th order, with step
( 2 cos 2 x+ cost )
length h=0.2. If exact solution is given by y ( x ) = , find the magnitudes of the
3
errors.

Solution

Let y=u . Reducing the given second order equation to a system of first order equations, we
obtain

'
u =v ,u ( 0 ) =1

v' = y ' ' =cos x−4 y =cosx −4 u , v ( 0 ) =0

i. Taylor series method of 4th order gives


2 3 4
' h ' ' h ' ' ' h (4 ) ' '' ' '' (4)
ui +1=ui +h ui + ui + u i + u i =ui +0.2 ui +0.02u i +0.0013 ui +0.000067 ui
2! 3! 4!
h 2 ' ' h3 '' ' h4 (4 )
' ' '' ''' (4 )
vi +1=vi + h v + v i + v i + v i =v i+ 0.2 v i+ 0.02 v i +0.0013 v i +0.000067 v i
i
2! 3! 4!

We have u' =v , v ' =cosx−4 u , u' ' =v ' , v '' =−sinx−4 u' ,

' '' '' ''' '' (4) ' '' ( 4)


u =v , v =−cosx−4 u ,u =v , v =sinx−4 u' ' '

For i=0 :u 0=1 , v 0=0 , x 0=0.

u'0 =v 0=0 , v '0=1−4 u 0=1−4=−3 , u'0' =v '0=−3 , v ''0 =−4 u'0 =0 ,

' '' '' ' '' '' ( 4) ''' (4 ) ''


u0 =v 0 =0 , v 0 =−1−4 u 0 =−1+12=11 ,u 0 =v 0 =11, v 0 =−4 u0 =0

u ( 0.2 )=u 1=u0 +0.2 u'0 +0.02 u'0' +0.0013 u'0' ' +0.000067 u(4)
0 =0.940733

v ( 0.2 )=v 1=v 0+ 0.2 v '0 +0.02 v '0' +0.0013 v ''0 ' + 0.000067 v (40 )=−0.585333

For ¿ 1: u1=0.940733 , v 1=−0.585344 , x 1=0.2.

u'1=v 1=−0.585333 , v '1=cos ( 0.2 )−4 u1=−2.782865, u'1' =v'1 =−2.782865 ,

'' ' '' ' '' ' '' ''


v1 =−sin ( 0.2 ) −4 u 1=2.142663 ,u 1 =v 1 =2.142663 , v 1 =−cos ( 0.2 )−4 u1 =10.151393 ,

u(4) '' (4 ) '''


1 =v 1 =10.151393 , v 1 =sin ( 0.2 ) −4 u1 =−8.371983

' '' ''' (4 )


u ( 0.4 )=u2=u 1+ 0.2u1 +0.02 u1 +0.0013 u1 +0.000067 u1 =0.771543

' '' '' ' (4)


v ( 0.4 )=v 2=v 1 +0.2 v 1 +0.02 v 1 + 0.0013 v 1 + 0.000067 v 1 =−1.086281

The exact solutions are: u ( 0.2 )=0.940730 , v ( 0.2 )=−0.5854480 ,

u ( 0.4 )=0.940730 , v ( 0.4 ) =−1.086281,

The magnitudes of the errors in the solutions are

|u ( 0.2 )−u 1|=0.000003

|v ( 0.2 ) −v 1|=0.000115

|u ( 0.4 )−u2|=0.000052

|v ( 0.4 )−v 2|=0.000205


ii. Runge-Kutta’s 4th order method
For i=0 , we have x 0=0 , u0 =1, v 0=0 , f 1 ( u , v )=v , f 2 (u , v )=cosx−4 u .
k 11=h f 1 ( x 0 , u0 , v 0 ) =h f 1 ( 0,1,0 )=0
k 21=h f 2 ( x 0 , u0 , v 0 ) =h f 2 ( 0,1,0 )=0.2 ( 1−4 )=−0.6

( h
2
1
2
1
)
k 12=h f 1 x 0 + , u0 + k 11 , v 0 + k 21 =0.2 f 1 ( 0.1,1 ,−0.3 ) =0.2 (−0.3 )=−0.06
2

=h f ( x + , u + k )=0.2 f ( 0.1,1 ,−0.3) =−0.600999


h 1 1
k 22 2 0 0 11 , v 0 + k 21 2
2 2 2

( h
2
1
2
1
)
k 13=h f 1 x 0 + , u0 + k 12 , v 0 + k 22 =0.2 f 1 ( 0.1 , 0.97 ,−0.3004995 ) =0.2 (−0.3004995 )=−0.60100
2

( h
2
1
2
1
)
k 23 =h f 2 x 0 + ,u0 + k 12 , v 0 + k 22 =0.2 f 2 ( 0.1,0 .97 ,−0.3004995 )=0.2 ( cos ( 0.1 )−4 ( 0.97 ) )=−0.576999
2

k 14 =h f 1 ( x 0 +h , u0 +k 13 , v 0 +k 23) =0.2 f 1 ( 0.2,0 .9399,−0.576999 )=0.2 (−0.576999 )=−0.115400

k 24 =h f 2 ( x 0 +h , u0 +k 13 , v 0 +k 23) =0.2 f 2 ( 0.2,0 .9399 ,−0.576999 )=0.2 ( cos ( 0.2 )−4 ( 0.9399 ) )=−0.555907

1
u ( 0.2 )=u 1=u0 + ( k 11+2 k 12 +2 k 13 +k 14 ) =0.940733
6

1
v ( 0.2 )=v 1=v 0+ ( k 21 +2 k 22 +2 k 23+ k 24 ) =−0.585317
6

For i=1 , we have x 1=0.2 , u1=0.940733 , v 1=−0.585317.

k 11=h f 1 ( x1 , u1 , v 1 )=h f 1 ( 0.2,0 .940733 ,−0.585317 )=−0.117063

k 21=h f 2 ( x 1 , u1 , v 1 )=h f 2 ( 0.2,0 .940733 ,−0.585317 )=−0.556573

( h
2
1
2
1
)
k 12=h f 1 x 1 + , u1 + k 11 , v1 + k 21 =0.2 f 1( 0.3,0.882202 ,−0.863604)=−0.172721
2

( h
2
1
2
1
)
k 22=h f 2 x 1 + , u1 + k 11 , v 1+ k 21 =0.2 f 2 ( 0.3,0 .882202 ,−0.863604 )=−0.514694
2

( h
2
1
2
1
)
k 13 =h f 1 x 1 + , u1 + k 12 , v 1+ k 22 =0.2 f 1 ( 0.3,0 .854372 ,−0.842664 )=−0.168533
2

( h
2
1
2
1
)
k 23 =h f 2 x 1 + , u1 + k 12 , v 1+ k 22 =0.2 f 2 ( 0.3,0 .854372 ,−0.842664 )=−0.492430
2

k 14 =h f 1 ( x 1+ h ,u 1+ k 13 , v 1+ k 23 )=0.2 f 1 ( 0.4,0 .7722 ,−1.077747 ) =−0.215549


k 24 =h f 2 ( x 1+ h ,u 1+ k 13 , v 1+ k 23 )=0.2 f 2 ( 0.4,0 .7722 ,−1.077747 ) =−0.433548

1
u ( 0.4 )=u2=u 1+ ( k 11 +2 k 12+2 k 13+ k 14 )=0.771546
6

1
v ( 0.4 )=v 2=v 1 + ( k 21+ 2k 22+2 k 23 +k 24 )=−1.086045
6

The magnitudes of the errors in the solutions are

|u ( 0.2 )−u 1|=0.000003

|v ( 0.2 ) −v 1|=0.000131

|u ( 0.4 )−u2|=0.000055

|v ( 0.4 )−v 2|=0.000236


Numerical methods for Boundary value problems

The shooting method

This method requires good initial guesses for the slope and can be applied to both linear and non-
linear problems. Its main advantage is that it is easy to apply. Let us consider

y ' ' =p ( x ) y ' + q ( x ) y +r ( x ) , a ≤ x ≤ b , y ( a )=α , y ( b )=β

The main steps involved in the shooting method are

i. Transformation of the B.V.P in to I.V.P


ii. Solution of the I.V.P by any single step methods.
iii. Solution of the given B.V.P

4.1. The Linear Shooting Method

Consider the boundary value problems (BVPs) for the second order differential equation
of the form
y ' ' =f ( x , y , y ' ) , a≤ x ≤ b , y ( a )=α and y ( b )= β … … … … … … … … … (1)

Under what conditions a boundary value problems has a solution or a unique solution.

i. Existence and uniqueness:


Suppose that f is continuous on the set
D={( x , y , y ) ;a ≤ x ≤ b ,−∞< y <∞ ,−∞< y ' < ∞ } and the partial derivatives f y and
'

f y 'are also continuous on D . If

a. f y ( x , y , y ' ) >0 , ∀ ( x , y , y ' ) ∈ D , and

b. There exists a constant M such |f y ' ( x , y , y ' )|≤ M , ∀ ( x , y , y ' ) ∈D


Then the boundary value problems (1) has a unique solution.
Example:Consider the linear boundary value problem:
'' '
y =p ( x ) y + q ( x ) y +r ( x ) , a ≤ x ≤ b , y ( a )=α , y ( b )=β
Under what condition a linear BVP has a unique solution?
''
f ( x , y , y )= y = p ( x ) y +q ( x ) y+ r ( x ) , f y ( x , y , y ) =q ( x ) , f y ' ( x , y , y )= p( x)
' ' ' '
are

continuous on D if p ( x ) , q ( x )andr ( x )are continuous fora ≤ x ≤ b


a. f y ( x , y , y ' )= p ( x ) >0fora ≤ x ≤ b.
b. Since f y ' is continuous on [a , b], f y ' is bounded.

So if p ( x ) , q (x)andr (x ) are continuous for a ≤ x ≤ b and q ( x)>0 for a ≤ x ≤ b, then the


boundary value problem has a unique solution.

Exercise
Consider the following boundary value problems:
+sin ( y )=0,1≤ x ≤ 2 , y ( 1 )= y ( 2 )=0
'' −xy '
y +e
Determine if the boundary value problem has a unique solution.

2
d y
Example: solve the boundary value problem 2
−2 y=8 x (9−x) subject to
dx
the boundary condition y ( 0 )=0 , y ( 9 ) =0.
Solution
First change the given boundary value problem into systems of initial value
problem.
Guess y ' (0). That is y ' ( 0 )=4
dy
=z ( say )=f 1 ( x , y , z ) … … … … … … … … … … … … .(i)
dx
dz d2 y
= =2 y+ 8 x ( 9−x ) =f 2 (x , y , z )… … … … … … (ii)
dx d x2

Use Euler’s method with step size h=3

From equation(i) we have


y i+1 = y i+ f 1 ( x i , y i , z i ) h= y i+3 z i
From equation(ii) we have
z i+1=z i + f 2 ( x i , yi , z i ) h=z i +3[2 y i +8 x i ( 9−x i ) ]

For i=0

y 1= y 0 +3 z 0=0+3 ( 4 )=12

z 1=z 0 +3 [ 2 y 0+ 8 x 0 ( 9−x 0 ) ]=4

For i=1

y 2= y 1 +3 z 1=24

z 2=z 1+ 3 [ 2 y 1 +8 x 1 ( 9−x 1 ) ]=508

For i=2

y 3= y 2 +3 z 2=1548= y (9)≠ 0 . This is due to our wrong guess z ( 0 )=4

Suppose z ( 0 )=−24

y i+1 = y i+ f 1 ( x i , y i , z i ) h= y i+3 z i
z i+1=z i + f 2 ( x i , yi , z i ) h=z i +3[2 y i +8 x i ( 9−x i ) ]

For i=0

y 1= y 0 +3 z 0=−72

z 1=z 0 +3 [ 2 y 0+ 8 x 0 ( 9−x 0 ) ]=−24

For i=1

y 2= y 1 +3 z 1=−144

z 2=z 1+ 3 [ 2 y 1 +8 x 1 ( 9−x 1 ) ]=−24

For i=2

y 3= y 2 +3 z 2=−216= y (9)≠ 0

So for z ( 0 )=4 we get y ( 9 )=1548 and for z ( 0 )=−24 we get y ( 9 )=−216

Let p be between z ( 0 )=4 and z ( 0 )=−24 and q be between y ( 9 )=1548 and y ( 9 )=−216

So we have to find p which makes q=0

Consider a line passing through (−24 ,−216) and ( 4 , 1548).

Let ( p , q) be on the line then the equation of the line is given by:

q−q 0=
( q1 −q0
p1 −p 0 )
( p− p 0 ) ,where( p0 , q0 ) =(−24 ,−216 ) and( p1 , q1 )=(4 , 1548) .

p= p 0+
( p 1− p0
q 1−q 0 )
( q−q0 )

Since we need to find p when q=0

p=−24+ ( 1548+ 216 )


4 +24
( 0+216 ) =−20.57

dy
Therefore z ( 0 )= ( 0 )=−20.57
dx
dy
For z ( 0 )= ( 0 )=−20.57 by using Euler’s formula we get
dx

y 1= y ( 3 )=−20.57

y 2= y ( 6 ) =−123.42

y 3= y ( 9 ) =0.09

Solving Non- Linear BVPs by using Shooting method

Consider the boundary value problems (BVPs) for the second order differential equation of the
form
y =f ( x , y , y ) , a≤ x ≤ b , y ( a )=α , y ( b )=β
'' '

When f ( x , y , y ' )is not linear in y and y ' . Assume a given boundary value problem has a unique
solution y ( x ). We will approximate the solution y ( x ) by solving a sequence of initial value
problems.
Let y ' ' + p ( x ) y ' +q ( x ) y=r ( x ) , y ( a )=α , y ( b )=β … … … … … … … … .(¿) be BVP
We know how to solve IVP.
Can we convert (¿) as an IVP? Yes it is. By supplying initial condition on y ' we can convert (¿)
into IVP.
That is
y ' ' + p y ' + qy=r , y ( a )=α , y ' ( a )=? say δ is an IVP
Solve IVP to get the solution at y (b) by finding suitable slope( y ' (a)).
Let y ' ' + p ( x ) y ' +q ( x ) y=r ( x ) , a ≤ x ≤ b , y ( a )=α , y ( b )=β is BVP
The equivalent IVP is given by:

{ '
y ' =z , y ( a ) =α
z =r− pz−qy , z ( a )=? say δ
IVP can be solved using a favorite methods such as Euler’s method ,R.K method, Taylor series
method etc.
The interval [a , b] can be divided into n sub intervals with equal step size h . We discretize x we
get
x 0=a , x 1 , x 2 ,… , x n=b such that y ( x 0 )= y ( a ) =α , y ( x1 ; δ ) , y ( x 2 ; δ ) , … , y ( x n ; δ )= y (b ; δ )
y (b ; δ ) means y at b using δ .
The target is y ( b )= β
Compare y ( b ;δ )−β If | y ( b ; δ )−β|<ϵ we have solved the BVP.
Looking for δ such that y ( b ; δ )−β ≅ 0.
Find solution of ϕ ( x )= y ( b ; δ ) −β=0 .
How do we find the roots of ϕ ( x )=0 ?
Using secant or Newton Raphson method we can find the roots of ϕ ( x )=0.
For simplicity we use only secant method.
( δn−δ n −1 )
δ n+1=δ n − ϕ ( δn )
ϕ ( δ n )−ϕ ( δ n−1 )
We need two initial approximation δ 0 and δ 1
The two initial slopes δ 0 and δ 1 define two IVPs given as follows:
IVP1: y ' ' + p y ' + gy =r , y ( a )=α ; y ' ( a )=δ 0
IVP2: y ' ' + p y ' + gy =r , y ( a )=α ; y ' ( a )=δ 1
Solve IVP1 and IVP2 to get y (b ; δ 0 ) and y (b ; δ 1)
ϕ ( δ 0 )= y ( b ; δ 0 ) −β
ϕ ( δ 1 )= y ( b ; δ 1 )−β

Then one can compute δ 2 using secant formula.


Solve IVP3: y ' ' + p y ' + qy=r , y ( a )=α ; y ' ( a )=δ 2

For y (b ; δ 2 ) check | y ( b ; δ 2 )−β|< ϵ


If yes then the BVP is solved
If no refine δ via secant method.
Example
1
Solve y ' ' =6 y 2−x ; y ( 0 )=1 , y ( 1 )=5 andh=
3
Solution
2IVP:choose δ 0=1.2∧δ 1=1.5
IVP1: y ' ' =6 y 2−x , y ( 0 ) =1, y ' ( 0 )=1.2 … … … … … … … …(i )
IVP2: y ' ' =6 y 2−x , y ( 0 ) =1, y ' ( 0 )=1.5 … … … … … … … … …(ii)
We need to solve equations ( i )∧(ii)
Using Euler’s method for solving IVPs.
'' 2 ' ' '' 2
y =6 y −x ⟹ y =z ⟹ z = y =6 y −x

{
'
y =z , y ( 0 )=1
IVP1: ' 2
z =6 y −x , z ( 0 )=1.2
From IVP1 and using Euler’s formula we have
y n+ 1= y n +h z n
From IVP2 and using Euler’s formula we have
z n+1=z n +h (6 y 2n−x n )
Hence
1 2
For i=0 (we have x 0=0 , x 1= , x 2= ∧x3 =1 )
3 3
1
y n+ 1= y n +h z n ⟹ y 1= y 0 +h z 0=1+ ( 1.2 )=1.4
3
1
z n+1=z n +h ( 6 y n−x n ) ⟹ z 1=z 0 + h ( 6 y 0−x 0 ) =1.2+
2 2
( 6 ( 1 )2−0 )=3.2
3
For i=1
y 2= y 1 +h z 1=2.466

z 2=z 1+ h ( 6 y 1−x 1 ) =7.01


2

For i=2
y 3= y ( 1 )= y 2 +h z 2=4.7966= y ( b ; δ 0 )
Similarly for IVP2 we have

{
'
y =z , y ( 0 )=1
IVP2: ' 2
z =6 y −x , z ( 0 )=1.5
Using Euler’s formula we get
y n+ 1= y n +h z n and z n+1=z n +h ( 6 y 2n−x n )
1
y 1= y 0 + h z 0=1+ (1.5 )=1.5
3
1
z 1=z 0 +h ( 6 y 0 −x 0) =1.5+
2
( 6 ( 1 )2−0 ) =3.5
3
y 2= y 1 +h z 1=2.666

z 2=z 1+ h ( 6 y 21−x 1 ) =7.89


y 3= y ( 1 )= y 2 +h z 2=5.29= y (1; 1.5)
ϕ ( δ 0 )= y ( 1 ;1.2 )−5=4.7966−5=−0.2034
ϕ ( δ 1 )= y ( 1; 1.5 )−5=5.29−5=0.29
If we are not interested we can refine δ using secant method

( δn−δ n −1 )
δ n+1=δ n − ϕ ( δn )
ϕ ( δ n )−ϕ ( δ n−1 )
For n=1
( δ1 −δ 0 ) ( 1.5−1.2 )
δ 2=δ 1− ϕ ( δ 1) =1.5− ( 0.29 )=1.32
ϕ ( δ 1 ) −ϕ ( δ 0 ) 0.29−(−0.2034 )
Solve IVP3:
{ '
y ' =z , y ( 0 )=1
2
z =6 y −x , z ( 0 )=1.32=δ 2

y n+ 1= y n +h z n and z n+1=z n +h ( 6 y 2n−x n )

1 1
y 1= y 0 + h z 0=1+ (1.32 ) =1.44 , z1 =z 0+ h ( 6 y 0−x 0 ) =1.32+ ( 6 ( 1 ) −0 )=3.32
2 2
3 3

y 2=2.51 , z 2=7.3672

y 3=4.96= y ( b )= y (1 ; 1.32)

| y ( 1; 1.32 ) −5|=|4.96−5|=0.035
ϕ ( δ 2 )= y ( 1; 1.32 ) −5=−0.035

( δ 2−δ 1 ) ( 1.32−1.5 ) 0.18 0.18


δ 3=δ 2− ϕ ( δ 2) =1.32− (−0.035 )=1.32− (−0.035 )=1.32+ ( 0.035 )=1.338
ϕ ( δ2 ) −ϕ ( δ 1) −0.035−0.29 0.325 0.325

{
'
y =z , y ( 0 )=1
Solve IVP4:
z =6 y 2−x , z ( 0 )=1.338
'

y n+ 1= y n +h z n and z n+1=z n +h ( 6 y 2n−x n )

1
y 1= y 0 + h z 0=1+ (1.338 )=1.446
3
1
z 1=z 0 +h ( 6 y 0 −x 0) =1.338+ ( 6 )=3.338
2
3
1
y 2= y 1 +h z 1=1.446+ ( 3.338 )=2.558
3

z 2=z 1+ h ( 6 y 21−x 1 ) =3.338+


1
3( 1
)
6 (1.446 )2− =7.4087
3

1
y 3= y 2 +h z 2=2.558+ (7.4087 )=5.027= y ( 1 ) = y (1; 1.338)
3

ϕ ( δ 3 )= y ( 1; 1.338 )−5=5.027−5=0.027
( δ 3−δ 2 ) ( 1.338−1.32 )
ϕ ( δ¿¿ 4 )=δ 3− ϕ ( δ 3 )=1.338− ( 0.027 )=1.330 ¿
ϕ ( δ 3 ) −ϕ ( δ 2) 0.027+ 0.035

{
'
y =z , y ( 0 )=1
Solve IVP5: ' 2
z =6 y −x , z ( 0 )=1.330

y n+ 1= y n +h z n and z n+1=z n +h ( 6 y 2n−x n )

1
y 1= y 0 + h z 0=1+ (1.330 )=1.443
3
1
z 1=z 0 +h ( 6 y 20 −x 0) =1.330+ ( 6 )=3.330
3
1
y 2= y 1 +h z 1=1.443+ ( 3.330 ) =2.553
3

z 2=z 1+ h ( 6 y 1−x 1 ) =3.330+


2 1
3 ( 2 1
6 (1.443 ) − =7.383
3 )
1
y 3= y 2 +h z 2=2.553+ ( 7.383 )=5.014= y ( 1 ) = y (1; 1.330) ≅ 5
3

Hence y 0= y ( 0 )=1 , y 1= y ( 13 )=1.443 , y = y ( 23 )=2.553∧ y = y ( 1)=5


2 3

Exercise

Solve the following BVPs by using the shooting method

a. x 2 y ' ' −2 y+ x=0 , y ( 2 ) = y ( 3 )=0 , withh=0.25


b. y ' ' = y + x , y ( 0 )= y ( 1 )=0 , withh=0.2
'' 3 ' 1 1
c. y = y − y y , 1 ≤ x ≤2 , y ( 1 ) = , y ( 2 )=
2 3
'' 3 3 5
d. y =2 y −6 y−2 x , 1 ≤ x ≤2 , y ( 1 ) =2 , y ( 2 )=
2

Finite Difference Method

In finite difference methods, the derivatives in the differential equation are replaced with finite
difference approximations. As shown in Fig.below,
The domain of the solution [ a , b ] is divided into N subintervals of equal length h , that are
b−a
defined by ( N +1 ) points called grid points. The length of each sub-interval ¿ is then h= .
N
Points a and b are the end points and the rest of the points are interior points. The differential
equation is then written at each of the interior points of the domain. This results in a system of
linear algebraic equations when the differential equation is linear, or in a system of non-linear
algebraic equations when the differential equation is non-linear. The solution of the system is the
numerical solution of the differential equation.

Frequently, the central difference formulas are used in a finite difference methods since they give
better accuracy. Recall that for a function y (x ) that is given at points
( x 1 , y 1 ) , ( x 2 , y 2 ) , … , ( x i , y i ) ,… , ( x N +1 , y N +1 ) that are equally spaced the finite difference

Two point forward difference formula for first derivative

' y i+1− y i
y i= +O(h)
h

Two point backward difference formula for first derivative

' y i− y i−1
y i= +O( h)
h
Approximation of the first and the second derivatives at the interior points, with the central
difference formulas, are given by:

dy y i +1− y i−1 d 2 y y i−1−2 y i+ yi +1


= and 2
= 2 (1)
dx 2h dx h

Three point forward and backward difference formula for first derivative

The central formula evaluates the first derivative at a given point x i by using the points x i−1 and
x i+1. Consequently, for a function that is given by a discrete set of n points, the central difference

formula is useful only for interior points and not for the end points ( x 0∨x n ). An estimate for the
first derivative at the endpoints, with error of O ( h2 ), can be calculated with three point for ward
and backward difference formulas.

The three-point forward difference formula calculates the derivatives at point x i from the value at
the point and the next two points, x i+1 and x i+2. It is assumed that the points are equally spaced
such that h=x i+2−x i+1=x i +1−x i.

y ' ' ( x i) h2 y ' ' '(x i )h 3


y ( x i+1 )= y ( x i ) + y ' ( x i ) h+ + ..........................*
2! 3!

2 3
y ' ' (xi ) ( 2h ) y ' ' ' (xi ) ( 2h )
'
y ( x i+2 )= y ( x i ) + y ( x i ) 2 h+ + ..........................**
2! 3!

Multiplying * by 4 and subtracting ** we get

'
−3 y ( x i ) +4 y ( x i+1 ) − y ( xi +2 ) 2
y ( xi )= +O( h ) which is called three-point forward difference
2h
formula. The formula can be used for calculating the derivatives at the first point of a function
that is given by a discrete set of n points.

The three-point backward difference formula yields the derivative at points x i from the value of
the function at that point and at the previous two points, x i−1 and x i−2.

y '' ( x i) h 2 y ' ' '(x i )h3


'
y ( x i−1 ) = y ( xi ) − y ( x i ) h+ − ..........................***
2! 3!

y ' ' ( x i ) ( 2 h )2 3
y ' ' ' ( xi ) ( 2 h )
'
y ( x i−2 ) = y ( xi ) − y ( x i ) 2 h+ − ..........................****
2! 3!
Multiplying *** by 4 and subtracting **** we get

3 y ( x i )−4 y ( xi−1 ) + y ( x i−2 )


y' ( xi )= +O(h2)
2h

Finite difference solution of a linear two point BVP

The finite difference approximation for a linear second-order differential equation of the form:

d2 y dy
2
=f ( x ) + g ( x ) y =r (x) (2)
dx dx

y i−1−2 y i + y i +1 y i+1− y i
is 2
+ f ( xi ) + g ( x i ) y i=r ( x i ) (3)
h 2h

The process of converting the differential equation(2), into the algebraic form, equation(3) at
each point x i is called discretization. For a two point BVP, the value of the solution at the end
points, y 1 and y N+1 are known. Equation (3) is written N−1 times for

i=2 , … , N . This gives a system of N−1 linear equations for the unknowns y 2 , … , y N , that can be
solved numerically with any of the methods.

The approach for solving a non-linear ODE with the finite difference method is the same as that
used for solving a linear ODE. The only difference is that the resulting system of simultaneous
equations is non-linear.

Example: solve the following two-point BVP

d2 y
1. 2
= y + x , y ( 1 )=0 , y ' ( 0 )=1,when h=0.5 , h=0.25
dx
2
d y dy
2. 2
+2 x +5 y−cos ( 3 x )=0 for , 0 ≤ x ≤ π
dx dx

With the boundary conditions y (0)=1.5 and y (π )=0

d2 y dy
3. + x + y=2 x for , 0 ≤ x ≤1 , with y (0)=1 and y (1)=1
dx 2
dx
2
d y dy
4. −2 + y =0, with y (0)=0 and y ' ( 1 ) =5 , whenh=0.25
dx
2
dx
2
d y −0.2 x
5. −2 2
+ y=e ,With y ' ( 1 ) = y and y (0)=1 , divide the solution domain into 8 sub-
dx
intervals.

Solution
2
d y y i−1−2 y i + y i +1
1. 2
=¿ 2
= yi + x i
dx h
y i+1 + (−2−h ) y i + y i−1=h xi
2 2

' −3 y i + 4 y i+1− yi +2 '


y ( xi )= = yi
2h
Fori=0
−3 y 0 + 4 y 1− y 2
y '0= ⟹−3 y 0 + 4 y 1− y 2=1⟹−3 y 0+ 4 y 1=1
1
2( )
2

Since h=0.5 , n=2 meaning i=1

For i=1

y i+1 + (−2−h2 ) y i + y i−1=h2 xi becomes

y 2 + (−2−0.5 ) y 1+ y 0 =0.5 x1
2 2

9 1
y 2− y 1 + y 0= x 1
4 4

−9 1
y1 + y0 = x1
4 4

−9 y 1 +4 y 0=x 1

−9 y 1 +4 y 0=0.5

{
−3 y 0 +4 y 1=1
−9 y 1+ 4 y 0 =0.5

⟹ y 0=−1 and y 1=−0.5

2
d y −0.2 x
5. −2 2 + y=e with y ' ( 1 ) = y and y (0)=1
dx
1−0 1
N=8 means h= =
8 8

y i−1−2 y i+ yi +1 −0.2 xi
−2( 2
)+ y i=e
h

1
−2 y i−1+ ( h2 +4 ) y i−2 y i+1=h 2 e
−0.2 xi
since h=
8

( )
257 e−0.2 x i

−2 y i−1+ y −2 y i+1=
64 i 64

−0.2 x i
−128 y i−1+ 257 yi −128 y i+1=e

For i=1

−0.2 x1
−128 y 0 +257 y 1−128 y 2=e

257 y 1−128 y 2=128.97531 , for i=1

−128 y 1+ 257 y 2−128 y 3=0.95122942 , for i=2

−128 y 2+ 257 y 3−128 y 4=0.92774349 , for i=3

−128 y 3 +257 y 4−128 y 5 =0.90483742, for i=4

−128 y 4 +257 y 5−128 y 6 =0.8824969 , for i=5

−128 y 5 +257 y 6−128 y 7=0.86070798, for i=6

−128 y 6 +257 y 7−128 y 8=0.83945702, for i=7

By using 3 point backward difference formula, y ' ( 1 ) = y becomes

3 yi −4 y i−1 + y i−2
= yi
2h

For i=8 and h=1/8 we get

11 y 8−16 y 7 +4 y 6=0
16 4
y 8= y 7− y
11 11 6

Therefore we have the following system of linear equation

257 y 1−128 y 2=128.97531 , for i=1

−128 y 1+ 257 y 2−128 y 3=0.95122942 , for i=2

−128 y 2+ 257 y 3−128 y 4=0.92774349 , for i=3

−128 y 3 +257 y 4−128 y 5 =0.90483742, for i=4

−128 y 4 +257 y 5−128 y 6 =0.8824969 , for i=5

−128 y 5 +257 y 6−128 y 7=0.86070798, for i=6

−896 y 6 +779 y 7=9.23402722, for i=7

( )( )
y1 1.7049
y2 2.4155
y3 3.1376
y4 3.8769
=
y5 4.6395
y6 5.4314
y7 6.2590
7.1298
y8

Chapter 6

Eigen value problems

Consider the Eigen value problem


Ax=λx
Where ∈ R n ×n , λ ∈ R∨ λ ∈C , is the Eigen value of A∧x ∈ R nis the eigen vector corresponding to
the eigen value λ .
The Eigen values of a matrix A are given by the roots of the characteristic equation
|λI − A|=0.
Note: The Eigen value of a square matrix A can be zero but the Eigen vector of a square matrix
A never be zero.

Example

( )
0 0 −2
Given the matrix A= 1 2 1 then,
1 0 3
a. Find the Eigen values of A .
b. Find the Eigen vectors of A

| |
λ 0 2
λ−2 −1 =λ [ ( λ−2 )( λ−3 ) ] +2 ( λ−2 )=( λ−2 ) ( λ −3 λ+2 )
2
det( λI − A )= −1
−1 0 λ−3

a. det ( λI −A )=0⇒ ( λ−2 ) ( λ 2−3 λ+2 )=0 ⇒ λ 1=2 , λ2=2 ,∧ λ3=1


For λ=2

()
x
b. Let the corresponding vector be y
z

| |( ) ( ) | |( ) ( )
λ 0 2 x 0 2 0 2 x 0
−1 λ−2 −1 y = 0 ⇒ −1 0 −1 y = 0 ⇒ 2 x +2 z =0 ⇒ x=−z
−1 0 λ−3 z 0 −1 0 −1 z 0
Let x=s∧ y=t

()( )( )() ( ) ()
x s s 0 1 0
y = t = 0 + t =s 0 +t 1 ,t , s ∈ R
z −s −s 0 −1 0

( ) ()
1 0
0 ∧ 1 are the corresponding eigen vectors w.r.t λ=2
−1 0
For λ=1

()
x
Let the corresponding eigen vector be y
z

| |( ) ( ) | |( ) ( )
λ 0 2 x 0 1 0 2 x 0
−1 λ−2 −1 y = 0 ⇒ −1 −1 −1 y = 0
−1 0 λ−3 z 0 −1 0 −2 z 0
⇒ x +2 z=0 ⇒ x=−2 z∧−x− y −z=0 but x=−2 z ⇒−(−2 z )− y−z=0 ⇒ y =z

()( ) ( )
x −2 z −2
y = z =z 1 , z∈ R
z z 1
()
−2
1 is the corresponding eigen vector For λ=1
1

Properties of Eigen values

1. The sum of the Eigen values of a matrix is the sum of the elements of the principal (main)
diagonal.
2. The product of the eigen values of a square matrix A is equals to the determinant of A .
1
3. If λ is an eigen value of a square matrix A then is the eigen value of the inverse of
λ
matrix A( A−1)
4. If λ 1 , λ2 , … , λn are the Eigen value of a square matrix A then Am has the Eigen values
λ m1 , λ m2 , … , λ mn where m is positive integer.
Methods of finding Eigen values
There are several methods for finding the Eigen values of a general matrix or a
symmetric matrix.
Among those method we will see only the power method and the Householder's method
for finding the Eigen value of a matrix including the corresponding Eigen vector.
1. power method

The method for finding the largest Eigen value in magnitude and the corresponding Eigen vector
of the Eigen value problem Ax=λx is called the power method.

What is the importance of this method? The necessary and sufficient condition for convergence
of the
Gauss-Jacobi and Gauss-Seidel iteration methods is that the spectral radius of the iteration matrix
H is less than one unit, that is, ρ(H )<1 , where ρ(H )is the largest eigen value in magnitude of
H( ρ ( H )=max i=1|λ i| ). If we write the matrix formulations of the methods, then we know H.
n

We can now find the largest Eigen value in magnitude of H, which determines whether the
methods converge or not.

We assume that λ 1 , λ2 , … , λn are distinct eigen values such that |λ 1|>|λ 2|> …>|λ n|.
Consider Ax=b .
We have to splitting matrix A as A=D+ L+ U , where
[ ]
a11 ⋯ a1 n
A= ⋮ ⋱ ⋮
a n1 ⋯ a nn

[ ]
a11 ⋯ 0
D= ⋮ ⋱ ⋮ → diagonal ¿
0 ⋯ ann

[ ]
0 ⋯ 0
L= ⋮ ⋱ ⋮ → lower triangular ¿
an 1 ⋯ 0

[ ]
0 ⋯ a1 n
U = ⋮ ⋱ ⋮ →upper triangular ¿
0 ⋯ 0

Ax=b ⟹ ( D+ L+U ) x=b ⟹ Dx=−( L+ U ) x +b ⟹ x (n+1 )=−D−1 ( L+ U ) x ( n) + D−1 b


−1
⟹ x( n+1 )=H x ( n) +C , where H=−D ( L+U ) and C=D b
−1

Hence H=−D−1 ( L+U ) is called Jacobi’s iteration matrix.


Also from Ax=b ⟹ ( D+ L+U ) x=b ⟹ ( D+ L ) x =−Ux+b ⟹ x (n+1 )=−( D+ L )−1 U x (n )+ ( D+ L )−1 b
−1 −1
⟹ x( n+1 )=H x ( n) +C , where H=−( D+ L ) U and C=( D+ L ) b
Hence H=−( D+ L )−1 U is called Gauss-Seidel iteration matrix.
Let v1 , v 2 ,... , v n be the Eigen vectors corresponding to the Eigen values λ 1 , λ2 , … , λn respectively.
The method is applicable if a complete system of n linearly independent Eigen vectors exist,
even though some of the Eigen values λ 1 , λ2 , … , λn may not be distinct. The n linearly
independent Eigen vectors form an n-dimensional vector space. Any vector v in this space of
Eigen vectors v1 , v 2 ,... , v n can be written as a linear combination of these vectors. That is
V =c 1 v1 +c 2 v 2 +…+ cn v n.
Since Ax=λx we have:
A v 1= λ1 v 1 , A v 2=λ2 v 2 , … , A v n=λ n v n
⟹ AV =c 1 λ 1 v 1+ c2 λ2 v 2 +…+ c n λ n v n=λ 1 ¿
Now finding A2 V , A 3 V , … , A k V , A k+1 V

2
We have A V =λ c 1 v 1+ c 2
2
1
(
λ2 2
λ1 2 ( )
v + …+c n
λn 2
λ1 n
v . ( ) )
( ( ) ( ) )
3 3
λ2 λ
A3 V = λ31 c 1 v 1 +c 2 v 2 +…+c n n v n .
λ1 λ1

⋯ ⋯ ⋯

( ( ) ( ) )
k k
k k λ2 λn
A V = λ c1 v 1 +c 2
1 v 2 +…+ c n v .
λ1 λ1 n

( ( ) ( ) )
k+1 k +1
λ2 λn
Ak +1 V =λ1k+1 c 1 v 1+ c 2 v 2 +…+ c n vn .
λ1 λ1

( )
k+1
λn λn
Since |λ 1|>|λ 2|> …>|λ n|, −1< <1⟹ → 0 , as k → ∞
λ1 λ1

Therefore Ak V → λ k1 c1 v 1∧ A k+1 V → λ k+1


1 c 1 v 1 as k → ∞

k+1 k +1
A V λ1 c 1 v 1
lim k
= k =λ1
k→∞ A V λ 1 c1 v 1

A k+1 V r
λ 1=lim , r =1, 2 , 3 ,… , n
k →∞ Ak V r

Where the suffix r denotes the r th component of the vector.


Therefore, we obtain n ratios, all of them tending to the same value, which is the largest Eigen
value in magnitude,|λ 1| .
When do we stop the iteration?
The iterations are stopped when all the magnitudes of the differences of the ratios are less than
the given error tolerance.

Remark: The choice of the initial approximation vector v 0 is important. If no suitable


approximation is available
We can choose v 0 with all its components as one unit, that is, v 0=( 1,1,1 ,… , 1 )T .
However, this initial approximation to the vector should be non-orthogonal to v1 .
Remark: Faster convergence is obtained when |λ 2|≪|λ 1| .
As k → ∞ , premultiplication each time by A, may introduce round-off errors. In order to
keep the round-off errors under control, we normalize the vector before premultiplying by A.
The normalization that we use is to make the largest element in magnitude as unity. If we use
this normalization, a simple algorithm for the power method can be written as follows.
y k +1=A v k
y k+1
v k+1=
m k+1
Where mk+1 is the largest element in magnitude of y k +1. Now, the largest element in magnitude of
v k+1is one unit.

A k+1 V r ( y k+1 )r
Then λ 1=lim k can be written as λ 1=lim , r=1 , 2 ,3 ,… , n and v k+1is the required
k →∞ A Vr k →∞ ( v k )
r

Eigen vector.
Remark: It may be noted that as k → ∞ , mk+1 also gives |λ 1|.

Inverse Power Method


Inverse power method can give approximation to any eigenvalue. However, it is used usually to
find the smallest eigenvalue in magnitude and the corresponding eigenvector of a given matrix
A.

The eigenvectors are computed very accurately by this method. Further, the method is powerful
to calculate accurately the eigenvectors, when the eigenvalues are not well separated.

In this case, power method converges very slowly.


If λ is an eigenvalue of A , then 1/ λ is an eigenvalue of A−1 corresponding to the same
eigenvector.
The smallest eigenvalue in magnitude of A is the largest eigenvalue in magnitude of A−1.
Choose an arbitrary vector y 0(non-orthogonal to x). Applying the power method on A−1.

Example: Find the largest eigen values in modulus and the corresponding eigen vector of the
matrix.

a. A=
1 2
3 4 ( )

( )
25 1 2
b. A= 1 3 0
2 0 −4
Solution :Let the initial approximation to the eigen vector be v 0. Then the power method
is given by

{
y k+1 =A v k
y k+1
v k +1=
m k+1

Where mk+1 is the largest element in magnitude of y k +1. The dominant eigen value in
magnitude is given by
( y k+1 )r
λ 1=lim , r=1,2,3 , … , n and v k+1 is the required eigen vector.
k →∞ ( v k )
r

T
Let v 0=[ 1,1 ] . We have the following results.

[] y 1
[][ ]
y 1= A v 0= 3 ,m1=7 , v 1= 1 = 3 = 0.42857
7 m1 7 7 1

y = A v =[ ] m 5.28571 [ 5.28571 ] [ 1 ]
2.42857 y 1 2.42857
2 0.45946
2 1 , m =5.28571 , v = =
2 2 =
5.28571 2

y 3= A v 2= [2.459946
5.37838 ] ,m =5.37838 , v = =
3
y
3
3 1
[ 2.459946
m 5.37838 5.37838
3
] =[
1 ]
0.45729

y = A v =[ ] [ ]=[
1 ]
2.45729 y 1 4 2.45729 0.45744
4 3 , m =5.37187
4 , v = =
4
5.37187 m 5.37187 5.37187
4

[ ]
y 5= A v 4=
2.45744
5.37232
y5
, m5=5.37232 , v 5= =
1 2.45744
m5 5.37232 5.37232
= [
0.45743
1 ][ ]
y =A v =
6 [2.45743
5.37229 ]
5

Now, we find the ratios


( y k+1 )r
λ 1=lim , r=1,2(where r isthe components of y k+1∧v k )
k →∞ ( v k )
r

We obtain the ratio as


2.45743 5.37229
For r =1, =5.37225 ,forr =2 , =5.37229
0.45743 1
The magnitude of the error between the ratios is |5.37225−5.37229|=0.00004 <0.00005 . Hence,
the dominant Eigen value, correct to four decimal places is 5.37225

Householder Algorithm
In Householder’s method, A is reduced to the tridiagonal form by orthogonal transformations
representing reflections. This reduction is done in exactly n – 2 transformations.
The transformations are of the form
T n T T 2 2 2
P=I−2W W where W ∈ R , such that W =[ x 1 , x 2 , … , x n ] and W W =x 1+ x2 +…+ x n =1.
P is symmetric and orthogonal. The vectors W are constructed with the first (r −1) components
as zero, that is
T 2 2 2
W r =(0,0 , … ,0 , x r , xr +1 , … , x n ) with x r + xr +1 +…+ x n=1 . With this choice of W r , form the
matrices
T
Pr =I −2W r W r . The similarity transformation is given by
A Pr =Pr A Pr =P r A P r . Put A=A 1 and form successively
−c 1 T
Pr
Ar =Pr Ar −1 Pr , r =2,3 , … ,n−1
At the first transformation, we find xr ’ s such that we get zeros in the positions
(1 , 3) ,(1 , 4) , ... ,(1 ,n) and in theCorresponding positions in the first column.
In the second transformation, we find xr ’ s such that we get zeros in the positions
(2 , 4) ,(2 , 5), ... ,(2 , n) and in the
Corresponding positions in the second column. In (n – 2) transformations, A is reduced to the
tridiagonal form.
Note:
1. A square matrix of order n is said to be orthogonal matrix if and only if A At =I = At A
2. An n × nmatrix A is called tridiagonal matrix if a ij=0whenever |i− j|>1.
3.
For example, consider

[ ]
a 11 a12 a13 a14
a a22 a23 a24
A= 21
a 31 a32 a33 a34
a 41 a42 a43 a44

For the first transformation, choose


T T 2 2 2
W 2 =[0 , x 2 , x 3 , x 4 ] ⟹ x 2 + x 3+ x 4=1

We find S1= √ a 212+a 213 + a214

2
x 2=
1
2
1+ (
a12 sign ( a12 )
S1 )
a13 sign ( a12 )
x 3=
2 S1 x2
a 14 sign ( a12)
x4 =
2 S1 x2
This transformation produces two zeros in the first row and first column.
One more transformation produces zeros in the (2 , 4) and ( 4 , 2) positions.
Example
Find all the eigenvalues of the matrix

[ ]
1 2 −1
A= 2 1 2 Using the Householder method.
−1 2 1

Solution
T 2 2
Choose W 2 =[0 , x 2 , x 3 ] such that x 2+ x3 =1. The parameters in the first House-holder
transformation are obtained as follows:

S1= √ a 12+a 13 =√ 2 + (−1 ) =√ 5 ,


2 2 2 2

2
x 2=
1
2 (a 1
1+ 12 sign(a12) = 1+
S1 2
2
=
2+ √5
√5 2 √5 ) ( )
⟹ x 2=
2+ √ 5
2√5 √
√ √ √ )
(
x 3=
a13
sign ( a12 )=
−1
=
−1 √ √2 5
2 S1 x 2 2 S1 x2 2 5 2+ 5

[ ]
1 0 0
T
P2=I 3−2 W 2 W = 0 −2/ √ 5 1/ √ 5
2
0 1/ √ 5 2/ √ 5
The required Householder transformation is

[ ]
1 − √5 0
A2=P 2 A1 P2= − √5 −3/5 −6/5 where A1= A
0 −6/5 13/5
To find the eigen value of A
λ−1 √5
|
Let f n=det ( λI − An ) and det ( λI − A2 ) = √ 5 λ +3/5
0 6/5
0
6/5
|
λ−13 /5
f 0=1
f 1=λ−1

f 2= | λ−1
5
5
λ+3/ 5
2 2
=λ − λ−
5
28
5 |
f 3=
| λ−1
√5
0
√5
λ+3 /5
6 /5
0

λ−13/5
|
6/ 5 = λ −3 λ −6 λ+ 16=( λ−2 ) ( λ −λ−8 )
3 2 2

Since f 3=0 , for λ=2, λ=2 is the eigen value of matrix A . The other two eigen values lies in the
interval (−3 ,−2 ) and ( 3,4 ). So we can find the other two eigen values using Newton-Raphson
method. That is
f 3 ( λk ) λ3k −3 λ2k −6 λk +16
λ k+1= λk − =λ k − for k =0,1,2 , …
f '3 ( λk ) 3 λ2k −6 λ k −6
Consider the interval ( 3,4 )

3+4
λ 0= =3.5
2
After a certain iteration we get λ=3.372 .
Consider the interval (−3 ,−2 )
−3−2
λ 0= =−2.5
2
After a certain iteration we get λ=– 2.372 .
Example
Use the House-holder’s method to reduce the given matrix A into the tridiagonal form

[ ]
4 −1 −2 2
A= −1 4 −1 −2
−2 −1 4 −1
2 −2 −1 4
Solution
Let W 2=[0 , x2 , x3 , x 4 ]T where x 22+ x23 + x 24 =1

S1= √ a 212+a 213+ a214=√ (−1 ) + (−2 ) +22=3


2 2

2
x 2=
1
2 (
a sign(a 12 ) 1
1+ 12
S1
= 1+
2 3
=
) (
(−1)(−1) 2
3 )
x 2=
√ 2
3
a13 1
x 3= sign ( a12 )=
2 S1 x 2 √6
a14 −1
x4 = sign ( a12) =
2 S1 x2 √6
[ ]
1 0 0 0
0 −1/ 3 −2/T3 2 /3
So P2=I 4 −2W 2 W = 2
0 −2 /3 2/3 1 /3
0 2/3 1/3 2 /3
A2=P 2 A1 P2 where A1= A

[ ]
4 3 0 0
3 16 /3 2/3 1/3
A2=
0 2 /3 16/3 −1/3
0 1/3 −1/3 4/3

T 2 2
W 3 =[ 0,0 , x3 , x 4 ] where x 3+ x 4=1

S2= √ a 23+ a24 = √ ( 2/3 ) + ( 1/3 ) =


2 2 2 2 √5
3

2
x 3=
1
2 (
a sign(a23 ) 1
1+ 23
S2
= 1+
2
2/3
√ 5/ 3 ) (
⟹ x 3=
2+ √ 5
2 √5 )
sign ( a23 )= √
a24 2 √5
x4 =
2 S2 x3 2 √ 2+ √ 5

( )
T

W 3 = 0,0 , √ , √ √
2+ 5 2 5
2 √ 5 2 √ 2+ √ 5

P3=I 4 −2W 3 W T3
The required Householder transformation is

[ ]
4 0 0 3
−5
3 16/3 0
A3 =P 3 A2 P3= 3√5
0 −5/3 √ 5 16 /3 9/5
0 0 9 /5 12/5
Questions

1. The following data gives the velocity of a particle for 20


seconds at an interval of 5 seconds. Find the initial acceleration
using the entire data.

Time(sec) 0 5 10 15 20
Velocity(m/sec) 0 3 14 69 228
1. Compute f ' (0) and f ' ' (4 )for the data

x 0 1 2 3 4
f ( x) 1 2.718 7.381 20.086 54.598
6

2. Using trapezoidal rule, find∫ f ( x )dx from the following set of value of
0

x and f ( x).

x 0 1 2 3 4 5 6
f (x) 1.56 3.64 4.62 5.12 7.05 9.22 10.44
1

3. Using the Simpson’s 1/3 rule evaluate ∫ x e x dx taking four interval and
0

compare the result with the actual value.


1
1
4. Using the Simpson’s 3/8 rule evaluate ∫ 2
dx by dividing the range
0 1+ x

in to six equal part.


5. Find Lagrange interpolation polynomial for the function f ( x) having
f ( 4 ) =1 , f (6 )=3 , f ( 8 )=8 ,and f ( 10 )=16. Also compute f (7).
6. Find f (3), using Lagrange interpolation formula for the function f (x)

having f ( 1 ) =2, f ( 2 ) =11 , f ( 4 )=77

You might also like