This document discusses methods for solving algebraic and transcendental equations. It introduces various iterative methods for finding the roots or zeros of equations, including initial approximation, the method of false position, Newton-Raphson method, and general iteration method. It also covers direct and iterative methods for solving linear systems of algebraic equations, such as Gauss elimination, Gauss-Jordan, Gauss-Jacobi, and Gauss-Seidel methods. Finally, it discusses eigenvalue problems and the power method for finding eigenvalues and eigenvectors.


PUBLISHING FOR ONE WORLD
NEW AGE INTERNATIONAL (P) LIMITED, PUBLISHERS
4835/24, Ansari Road, Daryaganj, New Delhi - 110002
Visit us at www.newagepublishers.com

Preface

This book is based on the experience and the lecture notes of the
authors while teaching Numerical Analysis for almost four decades
at the Indian Institute of Technology, New Delhi.

This comprehensive textbook covers the material for a one-semester course on Numerical Methods of Anna University. The emphasis in the book is on the presentation of fundamentals and theoretical concepts in an intelligible and easy to understand manner. The book is written as a textbook rather than as a problem/guide book. The textbook offers a logical presentation of both the theory and techniques for problem solving, to motivate the students for the study and application of Numerical Methods. Examples and Problems in Exercises are used to explain each theoretical concept and the application of these concepts in problem solving. Answers for every problem and hints for difficult problems are provided to encourage the students towards self-learning.
The authors are highly grateful to Prof. M.K. Jain, who was
their teacher, colleague and co-author of their earlier books on
Numerical Analysis. With his approval, we have freely used the
material from our book, Numerical Methods for Scientific and
Engineering Computation, published by the same publishers.
This book is the outcome of the request of Mr. Saumya
Gupta, Managing Director, New Age International Publishers, for
writing a good book on Numerical Methods for Anna University. The
authors are thankful to him for following it up until the book is
complete.
The first author is thankful to Dr. Gokaraju Gangaraju,
President of the college, Prof. P.S. Raju, Director and Prof. Jandhyala
N. Murthy, Principal, Gokaraju Rangaraju Institute of Engineering and
Technology, Hyderabad for their encouragement during the
preparation of the manuscript.
The second author is thankful to the entire management of
Manav Rachna Educational Institutions, Faridabad and the Director-
Principal of Manav Rachna College of Engineering, Faridabad for
providing a congenial environment during the writing of this book.

S.R.K. Iyengar
R.K. Jain

Contents
Preface (v)

⦁ SOLUTION OF EQUATIONS AND EIGEN VALUE PROBLEMS 1–62


⦁ Solution of Algebraic and Transcendental Equations, 1
⦁ Introduction, 1


⦁ Initial Approximation for an Iterative Procedure, 4
⦁ Method of False Position, 6
⦁ Newton-Raphson Method, 11
⦁ General Iteration Method, 15
⦁ Convergence of Iteration Methods, 19
⦁ Linear System of Algebraic Equations, 25
⦁ Introduction, 25
⦁ Direct Methods, 26
⦁ Gauss Elimination Method, 28
⦁ Gauss-Jordan Method, 33
⦁ Inverse of a Matrix by Gauss-Jordan Method, 35
⦁ Iterative Methods, 41
⦁ Gauss-Jacobi Iteration Method, 41
⦁ Gauss-Seidel Iteration Method, 46
⦁ Eigen Value Problems, 52
⦁ Introduction, 52
⦁ Power Method, 53
⦁ Answers and Hints, 59

⦁ INTERPOLATION AND APPROXIMATION 63–108


⦁ Introduction, 63
⦁ Interpolation with Unevenly Spaced Points, 64
⦁ Lagrange Interpolation, 64
⦁ Newton’s Divided Difference Interpolation, 72
⦁ Interpolation with Evenly Spaced Points, 80
⦁ Newton’s Forward Difference Interpolation Formula, 89
⦁ Newton’s Backward Difference Interpolation Formula, 92
⦁ Spline Interpolation and Cubic Splines, 99
⦁ Answers and Hints, 108

⦁ NUMERICAL DIFFERENTIATION AND INTEGRATION 109–179


⦁ Introduction, 109
⦁ Numerical Differentiation, 109
⦁ Methods Based on Finite Differences, 109
⦁ Derivatives Using Newton’s Forward Difference Formula, 109
⦁ Derivatives Using Newton’s Backward Difference Formula, 117
⦁ Derivatives Using Newton’s Divided Difference Formula, 122
⦁ Numerical Integration, 128
⦁ Introduction, 128
⦁ Integration Rules Based on Uniform Mesh Spacing, 129

⦁ Trapezium Rule, 129
⦁ Simpson’s 1/3 Rule, 136
⦁ Simpson’s 3/8 Rule, 144
⦁ Romberg Method, 147
⦁ Integration Rules Based on Non-uniform Mesh Spacing, 159
⦁ Gauss-Legendre Integration Rules, 160
⦁ Evaluation of Double Integrals, 169
⦁ Evaluation of Double Integrals Using Trapezium Rule, 169
⦁ Evaluation of Double Integrals by Simpson’s Rule, 173
⦁ Answers and Hints, 177

⦁ INITIAL VALUE PROBLEMS FOR ORDINARY


DIFFERENTIAL EQUATIONS 180–240
⦁ Introduction, 180
⦁ Single Step and Multi Step Methods, 182
⦁ Taylor Series Method, 184
⦁ Modified Euler and Heun’s Methods, 192
⦁ Runge-Kutta Methods, 200
⦁ System of First Order Initial Value Problems, 207
⦁ Taylor Series Method, 208
⦁ Runge-Kutta Fourth Order Method, 208
⦁ Multi Step Methods and Predictor-Corrector Methods, 216
⦁ Predictor Methods (Adams-Bashforth Methods), 217
⦁ Corrector Methods, 221
⦁ Adams-Moulton Methods, 221

⦁ Milne-Simpson Methods, 224
⦁ Predictor-Corrector Methods, 225
⦁ Stability of Numerical Methods, 237
⦁ Answers and Hints, 238

⦁ BOUNDARY VALUE PROBLEMS IN


ORDINARY DIFFERENTIAL EQUATIONS AND
INITIAL & BOUNDARY VALUE PROBLEMS IN
PARTIAL DIFFERENTIAL EQUATIONS 241–309
⦁ Introduction, 241
⦁ Boundary Value Problems Governed by Second Order
Ordinary Differential Equations, 241
⦁ Classification of Linear Second Order Partial Differential Equations, 250
⦁ Finite Difference Methods for Laplace and Poisson Equations, 252
⦁ Finite Difference Method for Heat Conduction Equation, 274
⦁ Finite Difference Method for Wave Equation, 291


⦁ Answers and Hints, 308


are algebraic (polynomial) equations, and

xe2x – 1 = 0, cos x – xex = 0, tan x = x

are transcendental equations.


We assume that the function f(x) is continuous in the required interval. We define the following.

Root/zero A number α, for which f(α) = 0, is called a root of the equation f(x) = 0, or a zero of f(x). Geometrically, a root of an equation f(x) = 0 is the value of x at which the graph of the equation y = f(x) intersects the x-axis (see Fig. 1.1).

Fig. 1.1 ‘Root of f(x) = 0’


Simple root A number α is a simple root of f(x) = 0, if f(α) = 0 and f′(α) ≠ 0. Then, we can write f(x) as

f(x) = (x – α) g(x), g(α) ≠ 0. (1.3)

For example, since (x – 1) is a factor of f(x) = x3 + x – 2 = 0, we can write

f(x) = (x – 1)(x2 + x + 2) = (x – 1) g(x), g(1) ≠ 0.

Alternately, we find f(1) = 0, f′(x) = 3x2 + 1, f′(1) = 4 ≠ 0. Hence, x = 1 is a simple root of f(x) = x3 + x – 2 = 0.
Multiple root A number α is a multiple root, of multiplicity m, of f(x) = 0, if

f(α) = 0, f′(α) = 0, ..., f(m–1)(α) = 0, and f(m)(α) ≠ 0. (1.4)

Then, we can write f(x) as

f(x) = (x – α)m g(x), g(α) ≠ 0.

For example, consider the equation f(x) = x3 – 3x2 + 4 = 0. We find

f(2) = 8 – 12 + 4 = 0, f′(x) = 3x2 – 6x, f′(2) = 12 – 12 = 0,
f″(x) = 6x – 6, f″(2) = 6 ≠ 0.

Hence, x = 2 is a multiple root of multiplicity 2 (double root) of f(x) = x3 – 3x2 + 4 = 0. We can write f(x) = (x – 2)2 (x + 1) = (x – 2)2 g(x), g(2) = 3 ≠ 0.
In this chapter, we shall be considering the case of simple roots only.
Remark 1 A polynomial equation of degree n has exactly n roots, real or complex, simple or multiple, whereas a transcendental equation may have one root, an infinite number of roots, or no root.
We shall derive methods for finding only the real roots.
The methods for finding the roots are classified as (i) direct
methods, and (ii) iterative methods.
Direct methods These methods give the exact values of all the roots
in a finite number of steps (disregarding the round-off errors).
Therefore, for any direct method, we can give the total number of
operations (additions, subtractions, divisions and multiplications).
This number is called the operational count of the method.
For example, the roots of the quadratic equation ax2 + bx + c = 0, a ≠ 0, can be obtained using the method

x = [– b ± √(b2 – 4ac)] / (2a).

For this method, we can give the count of the total number of operations.
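As an illustration, the quadratic formula can be implemented directly; the fixed number of operations, independent of the input, is exactly what makes it a direct method. A minimal Python sketch (the function name is ours, and only the real-root case is handled):

```python
import math

def quadratic_roots(a, b, c):
    """Direct method: both real roots of ax^2 + bx + c = 0, a != 0."""
    disc = b * b - 4.0 * a * c          # the discriminant b^2 - 4ac
    if disc < 0:
        raise ValueError("roots are complex; this sketch handles real roots only")
    sq = math.sqrt(disc)
    return (-b + sq) / (2.0 * a), (-b - sq) / (2.0 * a)

# x^2 - 3x + 2 = 0 has roots 2 and 1:
print(quadratic_roots(1.0, -3.0, 2.0))  # (2.0, 1.0)
```

Counting the operations in the body (one multiplication count per run) gives the operational count of the method.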
There are direct methods for finding all the roots of cubic and fourth degree polynomials. However, these methods are difficult to use.
Direct methods for finding the roots of polynomial equations of degree
greater than 4 or transcendental equations are not available in literature.



Iterative methods These methods are based on the idea of successive approximations. We start with one or two initial approximations to the root and obtain a sequence of approximations x0, x1, ..., xk, ..., which in the limit as k → ∞, converge to the exact root α. An iterative method for finding a root of the equation f(x) = 0 can be obtained as

xk+1 = φ(xk), k = 0, 1, 2, ..... (1.5)

This method uses one initial approximation to the root x0. The sequence of approximations is given by

x1 = φ(x0), x2 = φ(x1), x3 = φ(x2), .....


The function φ is called an iteration function and x0 is called an initial approximation.

If a method uses two initial approximations x0, x1, to the root, then we can write the method as

xk+1 = φ(xk–1, xk), k = 1, 2, ..... (1.6)

Convergence of iterative methods The sequence of iterates, {xk}, is said to converge to the exact root α, if

lim(k→∞) xk = α, or lim(k→∞) | xk – α | = 0. (1.7)

The error of approximation at the kth iterate is defined as εk = xk – α. Then, we can write (1.7) as

lim(k→∞) | error of approximation | = lim(k→∞) | xk – α | = lim(k→∞) | εk | = 0.

Remark 2 Given one or two initial approximations to the root, we require a suitable iteration function φ for a given function f(x), such that the sequence of iterates, {xk}, converges to the exact root α. Further, we also require a suitable criterion to terminate the iteration.

Criterion to terminate iteration procedure Since we cannot perform an infinite number of iterations, we need a criterion to stop the iterations. We use one or both of the following criteria:

⦁ The equation f(x) = 0 is satisfied to a given accuracy, that is, f(xk) is bounded by an error tolerance ε:
| f(xk) | ≤ ε. (1.8)
⦁ The magnitude of the difference between two successive iterates is smaller than a given accuracy or error bound ε:
| xk+1 – xk | ≤ ε. (1.9)

Generally, we use the second criterion. In some very special problems, we require the use of both criteria.

For example, if we require two decimal place accuracy, then we iterate until | xk+1 – xk | < 0.005. If we require three decimal place accuracy, then we iterate until | xk+1 – xk | < 0.0005.

As we have seen earlier, we require a suitable iteration function and suitable initial approximation(s) to start the iteration procedure. In the next section, we give a method to find initial approximation(s).
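The generic iteration loop with the two stopping criteria (1.8) and (1.9) can be sketched as follows (a hedged Python sketch; the names `iterate`, `phi`, `eps` are ours, not the book's):

```python
def iterate(phi, f, x0, eps=0.0005, max_iter=100):
    """Iterate x_{k+1} = phi(x_k), stopping when criterion (1.8)
    |f(x_k)| <= eps or criterion (1.9) |x_{k+1} - x_k| <= eps holds."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(f(x_new)) <= eps or abs(x_new - x) <= eps:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

# Root of f(x) = x^3 - 5x + 1 in (0, 1), using the rewriting x = (x^3 + 1)/5:
root = iterate(lambda x: (x**3 + 1) / 5, lambda x: x**3 - 5*x + 1, x0=0.5)
print(root)   # close to 0.20164
```

With eps = 0.0005 the loop stops as soon as either criterion triggers; tightening eps trades more iterations for more correct digits.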


⦁ Initial Approximation for an Iterative Procedure


For polynomial equations, Descartes’ rule of signs gives the bound
for the number of positive and negative real roots.
⦁ We count the number of changes of signs in the coefficients of
Pn(x) for the equation f(x) = Pn(x) = 0. The number of positive roots
cannot exceed the number of changes of signs. For example, if there
are four changes in signs, then the equation may have four positive
roots or two positive roots or no positive root. If there are three
changes in signs, then the equation may have three positive roots
or definitely one positive root. (For polynomial equations with real
coefficients, complex roots occur in conjugate pairs.)


⦁ We write the equation f(– x) = Pn(– x) = 0, and count the number of
changes of signs in the coefficients of Pn(– x). The number of negative
roots cannot exceed the number of changes of signs. Again, if there are
four changes in signs, then the equation may have four negative
roots or two negative roots or no negative root. If there are three
changes in signs, then the equation may have three negative roots or
definitely one negative root.
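The sign count itself is mechanical; a small Python sketch (helper name ours):

```python
def sign_changes(coeffs):
    """Number of sign changes in a coefficient sequence; zeros are ignored."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# P(x) = 8x^3 - 12x^2 - 2x + 3 (cf. Example 1.1(i)):
print(sign_changes([8, -12, -2, 3]))     # 2 -> two or no positive roots
# P(-x) = -8x^3 - 12x^2 + 2x + 3:
print(sign_changes([-8, -12, 2, 3]))     # 1 -> exactly one negative root
```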
We use the following theorem of calculus to determine an
initial approximation. It is also called the intermediate value
theorem.
Theorem 1.1 If f(x) is continuous on some interval [a, b] and f(a)f(b)
< 0, then the equation f(x) = 0 has at least one real root or an odd
number of real roots in the interval (a, b).
This result is very simple to use. We set up a table of values
of f (x) for various values of x. Studying the changes in signs in the
values of f (x), we determine the intervals in which the roots lie. For
example, if f (1) and f (2) are of opposite signs, then there is a root in
the interval (1, 2).
Let us illustrate through the following examples.

Example 1.1 Determine the maximum number of positive and negative


roots and intervals of length one unit in which the real roots lie for the
following equations.
(i) 8x3 – 12x2 – 2x + 3 = 0 (ii) 3x3 – 2x2 – x – 5 = 0.
Solution
(i) Let f(x) = 8x3 – 12x2 – 2x + 3 = 0.
The number of changes in the signs of the coefficients (8, –
12, – 2, 3) is 2. Therefore, the equation has 2 or no positive roots.
Now, f(– x) = – 8x3 – 12x2 + 2x + 3. The number of changes in signs

23
in the coefficients (– 8, – 12, 2, 3) is 1. Therefore, the equation has
one negative root.
We have the following table of values for f(x), (Table 1.1).

Table 1.1. Values of f (x), Example 1.1(i ).

x       –2     –1     0     1     2     3
f(x)  –105    –15     3    –3    15   105

Since f(– 1) f(0) < 0, there is a root in the interval (– 1, 0),
f(0) f(1) < 0, there is a root in the interval (0, 1),
f(1) f(2) < 0, there is a root in the interval (1, 2).



Therefore, there are three real roots and the roots lie in the intervals (– 1, 0),
(0, 1), (1, 2).
(ii) Let f(x) = 3x3 – 2x2 – x – 5 = 0.
The number of changes in the signs of the coefficients (3, – 2, – 1, – 5) is 1. Therefore, the equation has one positive root. Now, f(– x) = – 3x3 – 2x2 + x – 5. The number of changes in signs in the coefficients (– 3, – 2, 1, – 5) is 2. Therefore, the equation has two negative or no negative roots.
We have the table of values for f (x), (Table 1.2).

Table 1.2. Values of f (x ), Example 1.1(ii ).

x –3 –2 –1 0 1 2 3

f(x) – 101 – 35 –9 –5 –5 9 55

From the table, we find that there is one real positive root in the
interval (1, 2). The equation has no negative real root.
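The tabulation procedure used in these examples — evaluate f at successive points and keep the unit intervals where the sign changes — can be sketched as follows (function name ours):

```python
def bracket_roots(f, a, b, step=1.0):
    """Return the sub-intervals [x, x + step] of [a, b] where f changes sign."""
    intervals = []
    x = a
    while x < b:
        if f(x) * f(x + step) < 0:
            intervals.append((x, x + step))
        x += step
    return intervals

# Example 1.1(i): f(x) = 8x^3 - 12x^2 - 2x + 3 on [-2, 3]
f = lambda x: 8*x**3 - 12*x**2 - 2*x + 3
print(bracket_roots(f, -2.0, 3.0))   # [(-1.0, 0.0), (0.0, 1.0), (1.0, 2.0)]
```

This reproduces Table 1.1: three brackets, one per real root.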
Example 1.2 Determine an interval of length one unit in which the negative real root, which is smallest in magnitude, lies for the equation 9x3 + 18x2 – 37x – 70 = 0.
Solution Let f(x) = 9x3 + 18x2 – 37x – 70 = 0. Since the smallest negative real root in magnitude is required, we form a table of values for x < 0, (Table 1.3).

Table 1.3. Values of f (x ), Example 1.2.

x –5 –4 –3 –2 –1 0

f(x) – 560 – 210 – 40 4 – 24 – 70

Since, f(– 2) f(– 1) < 0, the negative root of smallest magnitude lies in the interval (– 2, – 1).
Example 1.3 Locate the smallest positive root of the equations
(i) xex = cos x. (ii) tan x = 2x.
Solution
(i) Let f(x) = xex – cos x = 0. We have f(0) = – 1, f(1) = e – cos 1 = 2.718 – 0.540 =
2.178. Since,
f(0) f(1) < 0, there is a root in the interval (0, 1).
(ii) Let f(x) = tan x – 2x = 0. We have the following function values.
f(0) = 0, f(0.1) = – 0.0997, f(0.5) = – 0.4537,
f(1) = – 0.4426, f(1.1) = – 0.2352, f(1.2) = 0.1722.
Since, f(1.1) f(1.2) < 0, the root lies in the interval (1.1, 1.2).

Now, we present some iterative methods for finding a root of the given algebraic or transcendental equation.

25
We know from calculus, that in the neighborhood of a point on a curve, the
curve can be approximated by a straight line. For deriving numerical methods to
find a root of an equation


f(x) = 0, we approximate the curve in a sufficiently small interval which contains the root, by a straight line. That is, in the neighborhood of a root, we approximate

f(x) ≈ ax + b, a ≠ 0

where a and b are arbitrary parameters to be determined by prescribing two appropriate conditions on f(x) and/or its derivatives. Setting ax + b = 0, we get the next approximation to the root as x = – b/a. Different ways of approximating the curve by a straight line give different methods. These methods are also called chord methods. The method of false position (also called the regula-falsi method) and the Newton-Raphson method fall in this category of chord methods.

⦁ Method of False Position


The method is also called linear interpolation method or chord method or regula-falsi
method.

xk+1 = [xk(fk – fk–1) – (xk – xk–1) fk] / (fk – fk–1)
     = (xk–1 fk – xk fk–1) / (fk – fk–1), k = 1, 2, ... (1.11)

Therefore, starting with the initial interval (x0, x1), in which the root lies, we compute

x2 = (x0 f1 – x1 f0) / (f1 – f0).

Now, if f(x0) f(x2) < 0, then the root lies in the interval (x0, x2).
Otherwise, the root lies in the interval (x2, x1). The iteration is
continued using the interval in which the root lies, until
the required accuracy criterion given in Eq.(1.8) or Eq.(1.9) is satisfied.
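Putting the update (1.11) and the interval-selection rule together, the method can be sketched in Python as follows (function name ours; the stopping test is criterion (1.9)):

```python
def false_position(f, x0, x1, eps=0.0005, max_iter=100):
    """Method of false position, Eq. (1.11): replace the endpoint whose
    f-value has the same sign as f(x2), keeping the root bracketed."""
    f0, f1 = f(x0), f(x1)
    if f0 * f1 >= 0:
        raise ValueError("root is not bracketed by (x0, x1)")
    x_prev = x0
    for _ in range(max_iter):
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)
        if abs(x2 - x_prev) <= eps:
            return x2
        x_prev = x2
        if f0 * f(x2) < 0:          # root lies in (x0, x2)
            x1, f1 = x2, f(x2)
        else:                       # root lies in (x2, x1)
            x0, f0 = x2, f(x2)
    raise RuntimeError("no convergence within max_iter iterations")

# f(x) = x^3 - 3x + 1 has a root in (1, 2):
print(false_position(lambda x: x**3 - 3*x + 1, 1.0, 2.0))  # near 1.532
```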
Alternate derivation of the method
Let the root of the equation f(x) = 0, lie in the interval (xk–1, xk). Then, P(xk–1, fk–1), Q(xk, fk) are points on the curve y = f(x). Draw the chord joining the points P and Q (Figs. 1.2a, b). We


approximate the curve in this interval by the chord, that is, f(x) ≈ ax + b. The next approximation to the root is given by x = – b/a. Since the chord passes through the points P and Q, we get
fk–1 = axk–1 + b, and fk = axk + b.
Subtracting the two equations, we get

fk – fk–1 = a(xk – xk–1), or a = (fk – fk–1) / (xk – xk–1).

The second equation gives b = fk – axk. Hence, the next approximation x = – b/a leads to the formula given in Eq. (1.11).
In Fig. 1.2b, the right end point x1 is fixed and the left end point moves towards the required root. Therefore, in this case, in actual computations, the method behaves like

xk+1 = (xk f1 – x1 fk) / (f1 – fk), k = 1, 2, … (1.13)

Remark 5 The computational cost of the method is one evaluation of the function f(x) for each iteration.
Remark 6 We would like to know why the method is also called a
linear interpolation method. Graphically, a linear interpolation
polynomial describes a straight line or a chord. The linear
interpolation polynomial that fits the data (xk–1, fk–1), (xk, fk) is given
by


x  xk
f(x) =
xk1  xk

33
fk–1

34
+ x  xk1
xk  xk1

35
fk.

(We shall be discussing the concept of interpolation polynomials in Chapter 2). Setting f(x) = 0, we get

[(x – xk–1) fk – (x – xk) fk–1] / (xk – xk–1) = 0, or x(fk – fk–1) = xk–1 fk – xk fk–1

or x = xk+1 = (xk–1 fk – xk fk–1) / (fk – fk–1).

This gives the next approximation as given in Eq. (1.11).

x 0 1 2 3

f (x) 1 –1 3 19

x4 = (x0 f3 – x3 f0)/(f3 – f0) = [0 – 0.36364(1)]/(– 0.04283 – 1) = 0.34870, f(x4) = f(0.34870) = – 0.00370.

Since, f(0) f(0.3487) < 0, the root lies in the interval (0, 0.34870).

x5 = (x0 f4 – x4 f0)/(f4 – f0) = [0 – 0.3487(1)]/(– 0.00370 – 1) = 0.34741, f(x5) = f(0.34741) = – 0.00030.

Since, f(0) f(0.34741) < 0, the root lies in the interval (0, 0.34741).

x6 = (x0 f5 – x5 f0)/(f5 – f0) = [0 – 0.34741(1)]/(– 0.0003 – 1) = 0.347306.



Now,  x6 – x5  =  0.347306 – 0.34741   0.0001 < 0.0005.
The root has been computed correct to three decimal places.
The required root can betaken as x  x6 = 0.347306. We may also give
the result as 0.347, even though x6 is more accurate. Note that the left
end point x = 0 is fixed for all iterations.
Now, we compute the root in (1, 2). We have

x0 = 1, x1 = 2, f0 = f(x0) = f(1) = – 1, f1 = f(x1) = f(2) = 3.

x2 = (x0 f1 – x1 f0)/(f1 – f0) = [3 – 2(– 1)]/[3 – (– 1)] = 1.25, f(x2) = f(1.25) = – 0.796875.

Since, f(1.25) f(2) < 0, the root lies in the interval (1.25, 2). We use the formula given in Eq. (1.13).

x3 = (x2 f1 – x1 f2)/(f1 – f2) = [1.25(3) – 2(– 0.796875)]/[3 – (– 0.796875)] = 1.407407,
f(x3) = f(1.407407) = – 0.434437.

Since, f(1.407407) f(2) < 0, the root lies in the interval (1.407407, 2).

x4 = (x3 f1 – x1 f3)/(f1 – f3) = [1.407407(3) – 2(– 0.434437)]/[3 – (– 0.434437)] = 1.482367,
f(x4) = f(1.482367) = – 0.189730.

Since, f(1.482367) f(2) < 0, the root lies in the interval (1.482367, 2).

x5 = (x4 f1 – x1 f4)/(f1 – f4) = [1.482367(3) – 2(– 0.18973)]/[3 – (– 0.18973)] = 1.513156,
f(x5) = f(1.513156) = – 0.074884.

Since, f(1.513156) f(2) < 0, the root lies in the interval (1.513156, 2).

x6 = (x5 f1 – x1 f5)/(f1 – f5) = [1.513156(3) – 2(– 0.074884)]/[3 – (– 0.074884)] = 1.525012,
f(x6) = f(1.525012) = – 0.028374.

Since, f(1.525012) f(2) < 0, the root lies in the interval (1.525012, 2).

x7 = (x6 f1 – x1 f6)/(f1 – f6) = [1.525012(3) – 2(– 0.028374)]/[3 – (– 0.028374)] = 1.529462,
f(x7) = f(1.529462) = – 0.010586.

Since, f(1.529462) f(2) < 0, the root lies in the interval (1.529462, 2).

x8 = (x7 f1 – x1 f7)/(f1 – f7) = [1.529462(3) – 2(– 0.010586)]/[3 – (– 0.010586)] = 1.531116,
f(x8) = f(1.531116) = – 0.003928.

Since, f(1.531116) f(2) < 0, the root lies in the interval (1.531116, 2).

x9 = (x8 f1 – x1 f8)/(f1 – f8) = [1.531116(3) – 2(– 0.003928)]/[3 – (– 0.003928)] = 1.531729,
f(x9) = f(1.531729) = – 0.001454.

Since, f(1.531729) f(2) < 0, the root lies in the interval (1.531729, 2).

x10 = (x9 f1 – x1 f9)/(f1 – f9) = [1.531729(3) – 2(– 0.001454)]/[3 – (– 0.001454)] = 1.531956.

Now, | x10 – x9 | = | 1.531956 – 1.531729 | ≈ 0.000227 < 0.0005.

The root has been computed correct to three decimal places. The required root can be taken as x ≈ x10 = 1.531956. Note that the right end point x = 2 is fixed for all iterations.
f(x4) = f(0.49402) = 0.07079.
Since, f(0.49402) f(1) < 0, the root lies in the interval (0.49402, 1).

x5 = (x4 f1 – x1 f4)/(f1 – f4) = [0.49402(– 2.17798) – 1(0.07079)]/(– 2.17798 – 0.07079) = 0.50995,
f(x5) = f(0.50995) = 0.02360.

Since, f(0.50995) f(1) < 0, the root lies in the interval (0.50995, 1).

x6 = (x5 f1 – x1 f5)/(f1 – f5) = [0.50995(– 2.17798) – 1(0.0236)]/(– 2.17798 – 0.0236) = 0.51520,
f(x6) = f(0.51520) = 0.00776.

Since, f(0.51520) f(1) < 0, the root lies in the interval (0.51520, 1).

x7 = (x6 f1 – x1 f6)/(f1 – f6) = [0.5152(– 2.17798) – 1(0.00776)]/(– 2.17798 – 0.00776) = 0.51692.

Now, | x7 – x6 | = | 0.51692 – 0.51520 | ≈ 0.00172 < 0.005.

The root has been computed correct to two decimal places. The required root can be taken as x ≈ x7 = 0.51692. Note that the right end point x = 1 is fixed for all iterations.


Newton-Raphson Method

x1 = x0 – f(x0)/f′(x0), f′(x0) ≠ 0.

We repeat the procedure. The iteration method is defined as

xk+1 = xk – f(xk)/f′(xk), f′(xk) ≠ 0. (1.14)

This method is called the Newton-Raphson method or simply the Newton’s method.
The method is also called the tangent method.
Alternate derivation of the method
Let xk be an approximation to the root of the equation f(x) = 0. Let Δx be an increment in x such that xk + Δx is the exact root, that is f(xk + Δx) = 0.


Expanding in Taylor’s series about the point xk, we get

f(xk) + Δx f′(xk) + [(Δx)2/2!] f″(xk) + ... = 0. (1.15)

Neglecting the second and higher powers of Δx, we obtain

f(xk) + Δx f′(xk) ≈ 0, or Δx = – f(xk)/f′(xk).

Hence, we obtain the iteration method

xk+1 = xk + Δx = xk – f(xk)/f′(xk), f′(xk) ≠ 0, k = 0, 1, 2, ...
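Eq. (1.14) translates directly into code (a sketch; the names are ours, and the stopping test is criterion (1.9)):

```python
def newton(f, df, x0, eps=0.00005, max_iter=50):
    """Newton-Raphson, Eq. (1.14): x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0.0:
            raise ZeroDivisionError("f'(x_k) = 0; choose a different x0")
        x_new = x - f(x) / dfx
        if abs(x_new - x) <= eps:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

# Smallest positive root of x^3 - 5x + 1 = 0 (cf. Example 1.8 below):
print(newton(lambda x: x**3 - 5*x + 1, lambda x: 3*x**2 - 5, x0=0.5))
```

The cost per iteration is one evaluation of f and one of f′, against one evaluation of f for the method of false position.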

x1 = 2x0 – Nx0² = 2(0.05) – 17(0.05)² = 0.0575,
x2 = 2x1 – Nx1² = 2(0.0575) – 17(0.0575)² = 0.058794,
x3 = 2x2 – Nx2² = 2(0.058794) – 17(0.058794)² = 0.058823,
x4 = 2x3 – Nx3² = 2(0.058823) – 17(0.058823)² = 0.058823.

Since, | x4 – x3 | = 0, the iterations converge to the root. The required root is 0.058823.

(ii) With N = 17, and x0 = 0.15, we obtain the sequence of approximations

x1 = 2x0 – Nx0² = 2(0.15) – 17(0.15)² = – 0.0825.


x2 = 2x1 – Nx1² = 2(– 0.0825) – 17(– 0.0825)² = – 0.280706,
x3 = 2x2 – Nx2² = 2(– 0.280706) – 17(– 0.280706)² = – 1.900942,
x4 = 2x3 – Nx3² = 2(– 1.900942) – 17(– 1.900942)² = – 65.23275.

We find that xk → – ∞ as k increases. Therefore, the iterations diverge very fast. This shows the importance of choosing a proper initial approximation.
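The iterates above follow xk+1 = 2xk – Nxk², which is Newton's method applied to f(x) = 1/x – N (the derivation is not shown in this extract); it is division-free and, as a known property of this iteration, converges only when 0 < x0 < 2/N, which explains the two behaviours. A sketch (function name ours):

```python
def reciprocal(N, x0, max_iter=100):
    """Division-free iteration x_{k+1} = 2*x_k - N*x_k**2 for 1/N.
    Converges only when the initial guess satisfies 0 < x0 < 2/N."""
    x = x0
    for _ in range(max_iter):
        x_new = 2.0*x - N*x*x
        if abs(x_new) > 1e6:        # diverging, as with x0 = 0.15 above
            raise ArithmeticError("iteration diverges; pick x0 in (0, 2/N)")
        if abs(x_new - x) < 1e-12:
            return x_new
        x = x_new
    return x

print(reciprocal(17, 0.05))   # converges to 1/17 = 0.0588235...
```

For N = 17, the bound 2/N ≈ 0.1176 separates the convergent start 0.05 from the divergent start 0.15.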
Example 1.7 Derive the Newton’s method for finding the qth root of a
positive number N, N1/q, where N > 0, q > 0. Hence, compute 171/3
correct to four decimal places, assuming the initial approximation as x0
= 2.

x4 = (2x3³ + 17)/(3x3²) = [2(2.571332)³ + 17]/[3(2.571332)²] = 2.571282.

Now,  x4 – x3  =  2.571282 – 2.571332  = 0.00005.


We may take x ≈ 2.571282 as the required root correct to four decimal places.
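For the general statement of Example 1.7, Newton's method applied to f(x) = x^q – N gives xk+1 = [(q – 1)xk^q + N] / (q xk^(q–1)), which for q = 3, N = 17 is the iterate shown above. A sketch (function name ours):

```python
def qth_root(N, q, x0, eps=0.00005, max_iter=50):
    """Newton's method for f(x) = x**q - N, i.e. the q-th root of N:
    x_{k+1} = ((q - 1)*x_k**q + N) / (q * x_k**(q - 1))."""
    x = x0
    for _ in range(max_iter):
        x_new = ((q - 1) * x**q + N) / (q * x**(q - 1))
        if abs(x_new - x) <= eps:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

# 17^(1/3) with x0 = 2, as in Example 1.7:
print(qth_root(17, 3, 2.0))   # about 2.571282
```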
Example 1.8 Perform four iterations of the Newton’s method to find the smallest positive root of the equation f(x) = x3 – 5x + 1 = 0.
Solution We have f(0) = 1, f(1) = – 3. Since, f(0) f(1) < 0, the smallest positive root lies in the interval (0, 1). Applying the Newton’s method, we obtain

xk+1 = xk – (xk³ – 5xk + 1)/(3xk² – 5) = (2xk³ – 1)/(3xk² – 5), k = 0, 1, 2, ...


Let x0 = 0.5. We have the following results.

x1 = (2x0³ – 1)/(3x0² – 5) = [2(0.5)³ – 1]/[3(0.5)² – 5] = 0.176471,
x2 = (2x1³ – 1)/(3x1² – 5) = [2(0.176471)³ – 1]/[3(0.176471)² – 5] = 0.201568,
x3 = (2x2³ – 1)/(3x2² – 5) = [2(0.201568)³ – 1]/[3(0.201568)² – 5] = 0.201640,
x4 = (2x3³ – 1)/(3x3² – 5) = [2(0.201640)³ – 1]/[3(0.201640)² – 5] = 0.201640.

x2 = 11.631465 – (11.631465 log10 11.631465 – 12.34)/(log10 11.631465 + 0.434294) = 11.594870.

x3 = x2 – (x2 log10 x2 – 12.34)/(log10 x2 + 0.434294)
   = 11.59487 – (11.59487 log10 11.59487 – 12.34)/(log10 11.59487 + 0.434294) = 11.594854.

We have | x3 – x2 | = | 11.594854 – 11.594870 | = 0.000016.

We may take x ≈ 11.594854 as the root correct to four decimal places.


⦁ General Iteration Method


The method is also called the iteration method, the method of successive approximations, or the fixed point iteration method.
The first step in this method is to rewrite the given equation f(x) = 0 in an equivalent form as
x = φ(x). (1.16)
There are many ways of rewriting f(x) = 0 in this form.

For example, f(x) = x3 – 5x + 1 = 0, can be rewritten in the following forms.

x = (x3 + 1)/5, x = (5x – 1)1/3, x = √[(5x – 1)/x], etc. (1.17)
Now, finding a root of f(x) = 0 is the same as finding a number α such that α = φ(α), that is, a fixed point of φ(x). A fixed point of a function φ is a point α such that α = φ(α). This result is also called the fixed point theorem.
Using Eq. (1.16), the iteration method is written as
xk+1 = φ(xk), k = 0, 1, 2, ... (1.18)
The function φ(x) is called the iteration function. Starting with the initial approximation x0, we compute the next approximations as
x1 = φ(x0), x2 = φ(x1), x3 = φ(x2), ...
The stopping criterion is the same as used earlier. Since there are many ways of writing f(x) = 0 as x = φ(x), it is important to know whether all or at least one of these iteration methods converges.

Remark 10 Convergence of an iteration method xk+1 = φ(xk), k = 0, 1, 2, ..., depends on the choice of the iteration function φ(x), and a suitable initial approximation x0, to the root.
Consider again, the iteration methods given in Eq.(1.17), for finding a root of
the equation
f(x) = x3 – 5x + 1 = 0. The positive root lies in the interval (0, 1).

(i) xk+1 = (xk³ + 1)/5, k = 0, 1, 2, ... (1.19)

With x0 = 1, we get the sequence of approximations as

x1 = 0.4, x2 = 0.2128, x3 = 0.20193, x4 = 0.20165, x5 = 0.20164.

The method converges and x ≈ x5 = 0.20164 is taken as the required approximation to the root.

(ii) xk+1 = (5xk – 1)1/3, k = 0, 1, 2, ... (1.20)

With x0 = 1, we get the sequence of approximations as

x1 = 1.5874, x2 = 1.9072, x3 = 2.0437, x4 = 2.0968, ...

which does not converge to the root in (0, 1).

(iii) xk+1 = √[(5xk – 1)/xk], k = 0, 1, 2, ... (1.21)


With x0 = 1, we get the sequence of approximations as


x1 = 2.0, x2 = 2.1213, x3 = 2.1280, x4 = 2.1284,...
which does not converge to the root in (0, 1).
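Running the three rewritings (1.19)–(1.21) side by side shows this behaviour: only (1.19) settles on the root in (0, 1), while the other two drift towards the root near 2.1284 instead (a sketch; the loop count is arbitrary):

```python
forms = {
    "(1.19) x = (x^3 + 1)/5":       lambda x: (x**3 + 1) / 5,
    "(1.20) x = (5x - 1)^(1/3)":    lambda x: (5*x - 1) ** (1.0/3),
    "(1.21) x = sqrt((5x - 1)/x)":  lambda x: ((5*x - 1) / x) ** 0.5,
}

for name, phi in forms.items():
    x = 1.0                          # same starting point as in the text
    for _ in range(30):              # iterate x_{k+1} = phi(x_k)
        x = phi(x)
    print(name, "->", round(x, 5))
```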

Now, we derive the condition that the iteration function φ(x) should satisfy in order that the method converges.

Condition of convergence
The iteration method for finding a root of f(x) = 0, is written as
xk+1 = φ(xk), k = 0, 1, 2, ... (1.22)
Let α be the exact root. That is,
α = φ(α). (1.23)
We define the error of approximation at the kth iterate as εk = xk – α, k = 0, 1, 2, ...
Subtracting (1.23) from (1.22), we obtain
xk+1 – α = φ(xk) – φ(α)
         = (xk – α) φ′(tk) (using the mean value theorem) (1.24)
or εk+1 = φ′(tk) εk, xk < tk < α.

89
Setting k = k – 1, we get k =
(tk–1) k–1, xk–1 < tk–1 < .
Hence,

k+1 = (t k )(t k–1)  k–1.
Using (1.24) recursively, we get


k+1 = (tk)(tk–1) ... (t0) 0.

The initial error 0 is known


and is a constant. We have
 k+1  =  (tk)   (tk–1)  ...  (t0)   0 .
Let  (tk)   c, k = 0, 1, 2,…
Then,  k+1   ck+1  0 . (1.25)
For convergence, we require that  k+1   0 as k  .
This result is possible, if and only if c < 1. Therefore, the iteration
method (1.22) converges, if and only if
 (xk)   c < 1, k = 0, 1, 2, ...
or  (x)   c < 1, for all x in the interval (a, b). (1.26)
We can test this condition using x0, the initial approximation, before the computations are done.
Let us now check whether the methods (1.19), (1.20), (1.21) converge to a root in (0, 1) of the equation f(x) = x3 – 5x + 1 = 0.

x 3  1 , (x) = 3x 2 , and  (x)  = 3x 2


(i) We have (x) =

90
 1 for all x in

91
Hence,

5 5 5

92
0  x  1.

the method converges to a root in (0, 1).

SOLUTION OF EQUATIONS AND EIGEN VALUE PROBLEMS 17

(ii) We have φ(x) = (5x – 1)1/3, φ′(x) = 5/[3(5x – 1)2/3]. Now, | φ′(x) | < 1 when x is close to 1, and
| φ′(x) | > 1 in the other part of the interval. Convergence is not guaranteed.

(iii) We have φ(x) = [(5x – 1)/x]1/2, φ′(x) = 1/[2x3/2 (5x – 1)1/2]. Again, | φ′(x) | < 1 when x is close to 1, and | φ′(x) | > 1 in the other part of the interval. Convergence is not guaranteed.

Remark 11 Sometimes, it may not be possible to find a suitable iteration function φ(x) by manipulating the given function f(x). Then, we may use the following procedure. Write f(x) = 0 as x = x + λ f(x) = φ(x), where λ is a constant to be determined. Let x0 be an initial approximation contained in the interval in which the root lies. For convergence, we require
| φ′(x0) | = | 1 + λ f′(x0) | < 1. (1.27)
Simplifying, we find the interval in which λ lies. We choose a value for λ from this interval and compute the approximations. A judicious choice of a value in this interval may give faster convergence.
Example 1.10 Find the smallest positive root of the equation x3 – x – 10
= 0, using the general iteration method.
Solution We have
f(x) = x3 – x – 10, f(0) = – 10, f(1) = – 10,
f(2) = 8 – 2 – 10 = – 4, f(3) = 27 – 3 – 10 = 14.
Since, f(2) f(3) < 0, the smallest positive root lies in the interval (2, 3).
Write x3 = x + 10, and x = (x + 10)1/3 = (x). We define the iteration method as
xk+1 = (xk + 10)1/3.

We obtain φ′(x) = 1/[3(x + 10)2/3].

We find | φ′(x) | < 1 for all x in the interval (2, 3).
Hence, the iteration converges. Let x0 = 2.5. We obtain
the following results.
x1 = (12.5)1/3 = 2.3208, x2 = (12.3208)1/3 = 2.3097,

x3 = (12.3097)1/3 = 2.3090, x4 = (12.3090)1/3 = 2.3089.


Since,  x4 – x3  = 2.3089 – 2.3090  = 0.0001, we take the required
root as x  2.3089.
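The computation in Example 1.10 can be sketched in Python. This is a minimal illustration; the function names are ours, not from the text.

```python
# Fixed point iteration x_{k+1} = phi(x_k) for phi(x) = (x + 10)**(1/3),
# as used in Example 1.10 to find the root of x^3 - x - 10 = 0 in (2, 3).

def fixed_point(phi, x0, tol=1e-4, max_iter=50):
    """Iterate until successive iterates agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = fixed_point(lambda x: (x + 10.0) ** (1.0 / 3.0), 2.5)
print(round(root, 4))  # close to 2.3089
```

Starting from x0 = 2.5, the iterates reproduce the sequence 2.3208, 2.3097, 2.3090, ... of the example.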

Example 1.11 Find the smallest negative root in magnitude of the equation

3x4 + x3 + 12x + 4 = 0, using the method of successive approximations.


Solution We have
f(x) = 3x4 + x3 + 12x + 4, f(0) = 4, f(– 1) = 3 – 1 – 12 + 4 = – 6.
Since f(– 1) f(0) < 0, the smallest negative root in magnitude lies in the interval (– 1, 0).


Write the given equation as
x(3x3 + x2 + 12) + 4 = 0, or x = – 4/(3x3 + x2 + 12) = φ(x).
The iteration method is written as
xk+1 = – 4/(3xk3 + xk2 + 12).
We obtain φ′(x) = 4(9x2 + 2x)/(3x3 + x2 + 12)2.

We find  (x)  < 1 for all x in the interval (– 1, 0). Hence, the iteration
converges.
=  1 +  (9x2 + 8x + 4)  < 1
for all x  (– 1, 0). This condition is also to be satisfied at the initial approximation.
Setting x0
= – 0.5, we get

 (x0) | = | 1 +  f (x0)  =

99
9
1 <1
4

9 < 1 or – 8 <  < 0.


or –1<1+
4 9
Hence,  takes negative values. The interval for  depends on the initial
approximation
x0. Let us choose the value  = – 0.5. We obtain the iteration method as

xk+1 = xk – 0.5(3xk3 + 4xk2 + 4xk + 1) = – 0.5(3xk3 + 4xk2 + 2xk + 1) = φ(xk).
Starting with x0 = – 0.5, we obtain the following results.
x1 = φ(x0) = – 0.5(3x03 + 4x02 + 2x0 + 1)
= – 0.5[3(– 0.5)3 + 4(– 0.5)2 + 2(– 0.5) + 1] = – 0.3125.
x2 = φ(x1) = – 0.5(3x13 + 4x12 + 2x1 + 1)
= – 0.5[3(– 0.3125)3 + 4(– 0.3125)2 + 2(– 0.3125) + 1] = – 0.337036.
x3 = φ(x2) = – 0.5(3x23 + 4x22 + 2x2 + 1)
= – 0.5[3(– 0.337036)3 + 4(– 0.337036)2 + 2(– 0.337036) + 1] = – 0.332723.
x4 = φ(x3) = – 0.5(3x33 + 4x32 + 2x3 + 1)
= – 0.5[3(– 0.332723)3 + 4(– 0.332723)2 + 2(– 0.332723) + 1] = – 0.333435.
x5 = φ(x4) = – 0.5(3x43 + 4x42 + 2x4 + 1)
= – 0.5[3(– 0.333435)3 + 4(– 0.333435)2 + 2(– 0.333435) + 1] = – 0.333316.
Since | x5 – x4 | = | – 0.333316 + 0.333435 | = 0.000119 < 0.0005, the result is correct to three decimal places.
We can take the approximation as x ≈ x5 = – 0.333316. The exact root is x = – 1/3. We can verify that | φ′(xj) | < 1 for all j.
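The λ-based iteration computed above can be sketched as follows; a minimal illustration with our own function names, using λ = – 0.5 chosen from the interval (– 8/9, 0) derived in the example.

```python
# Method of successive approximations x_{k+1} = x_k + lam*f(x_k),
# with f(x) = 3x^3 + 4x^2 + 4x + 1 and lam = -0.5, starting at x0 = -0.5.

def relaxed_iteration(f, lam, x0, tol=5e-4, max_iter=100):
    """Iterate x <- x + lam*f(x) until successive iterates agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = x + lam * f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: 3*x**3 + 4*x**2 + 4*x + 1
root = relaxed_iteration(f, -0.5, -0.5)
print(round(root, 3))  # near the exact root -1/3
```

The iterates match those of the worked example (– 0.3125, – 0.337036, ...), terminating at x5 = – 0.333316.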
⦁ Convergence of the Iteration Methods
We now study the rate at which the iteration methods converge to the exact root,
if the initial approximation is sufficiently close to the desired root.
Define the error of approximation at the kth iterate as εk = xk – α, k = 0, 1, 2,...
Definition An iterative method is said to be of order p or has the rate of
convergence p, if p is

the largest positive real number for which there exists a finite constant C ≠ 0, such that
| εk+1 | ≤ C | εk |p. (1.28)
The constant C, which is independent of k, is called the asymptotic error constant and it depends on the derivatives of f(x) at x = α.
Let us now obtain the orders of the methods that were derived earlier.
Method of false position We have noted earlier (see Remark 4) that
if the root lies initially in the interval (x0, x1), then one of the end points
is fixed for all iterations. If the left end point x0 is fixed and the right
end point moves towards the required root, the method behaves like
(see Fig.1.2a)

xk+1 = (x0 fk – xk f0)/(fk – f0).
Substituting xk = εk + α, xk+1 = εk+1 + α, x0 = ε0 + α, we expand each term in Taylor's series and simplify using the fact that f(α) = 0. We obtain the error equation as
εk+1 = C ε0 εk, where C = f″(α)/[2 f′(α)].


Since 0 is finite and fixed, the error equation becomes


 k+1  =  C*   k  where C* = C0. (1.29)Hen
Method of successive approximations or fixed point iteration method
We have xk+1 = φ(xk), and α = φ(α). Subtracting, we get
xk+1 – α = φ(xk) – φ(α) = φ(α + xk – α) – φ(α)
= [φ(α) + (xk – α) φ′(α) + ...] – φ(α)
or εk+1 = εk φ′(α) + O(εk2). (1.30)
Therefore, the fixed point iteration method has order 1 or linear rate of convergence.
Newton-Raphson method The method is given by
xk+1 = xk – f(xk)/f′(xk), f′(xk) ≠ 0.
Substituting xk = εk + α, xk+1 = εk+1 + α, we obtain
εk+1 = εk – f(α + εk)/f′(α + εk).
Expanding f(α + εk) and f′(α + εk) in Taylor's series, and using f(α) = 0, we get
εk+1 = εk – [εk f′(α) + (εk2/2) f″(α) + ...]/[f′(α) + εk f″(α) + ...]
= εk – εk [1 + εk f″(α)/(2 f′(α)) + ...][1 + εk f″(α)/f′(α) + ...]–1
= εk – εk [1 + εk f″(α)/(2 f′(α)) – εk f″(α)/f′(α) + ...]
= [f″(α)/(2 f′(α))] εk2 + ...
Neglecting the terms containing εk3 and higher powers of εk, we get
εk+1 = C εk2, where C = f″(α)/[2 f′(α)],
and | εk+1 | = | C | | εk |2. (1.31)
Therefore, Newton's method has order 2 or quadratic rate of convergence.

Remark 12 What is the importance of defining the order or rate of


convergence of a method? Suppose that we are using Newton's method for computing a root of f(x) = 0. Let us assume that at a particular stage of iteration, the error in magnitude in computing the root is 10–1 = 0.1. We observe from (1.31) that in the next iteration, the error behaves like C(0.1)2 = C(10–2). That is, we may possibly get an accuracy of two decimal places. Because of the quadratic convergence of the method, we may possibly get an accuracy of four decimal places in the next iteration. However, it also depends on the value of C. From this discussion, we conclude that both fixed point iteration and regula-falsi methods converge slowly as they have only linear rate of convergence. Further, Newton's method converges at least twice as fast as the fixed point iteration and regula-falsi methods.
Remark 13 When does the Newton-Raphson method fail?
⦁ The method may fail when the initial approximation x0 is far away from the exact root α (see Example 1.6). However, if the root lies in a small interval (a, b) and x0 ∈ (a, b), then the method converges.
⦁ From Eq. (1.31), we note that if f′(α) → 0 and f″(x) is finite, then C → ∞ and the method may fail. That is, in this case, the graph of y = f(x) is almost parallel to the x-axis at the root α.
Remark 14 Let us have a re-look at the error equation. We have defined the error of approximation at the kth iterate as εk = xk – α, k = 0, 1, 2,... From xk+1 = φ(xk), k = 0, 1, 2,... and α = φ(α), we obtain (see Eq. (1.24))
xk+1 – α = φ(xk) – φ(α) = φ(α + εk) – φ(α)
= [φ(α) + εk φ′(α) + (εk2/2) φ″(α) + ...] – φ(α)
or εk+1 = a1 εk + a2 εk2 + ... (1.32)
where a1 = φ′(α), a2 = (1/2)φ″(α), etc.


The exact root satisfies the equation α = φ(α).
If a1 ≠ 0, that is, φ′(α) ≠ 0, then the method is of order 1 or has linear convergence. For the general iteration method, which is of first order, we have derived that the condition of convergence is | φ′(x) | < 1 for all x in the interval (a, b) in which the root lies. Note that in this method, | φ′(x) | ≠ 0 for all x in the neighborhood of the root α.
If a1 = φ′(α) = 0, and a2 = (1/2)φ″(α) ≠ 0, then from Eq. (1.32), the method is of order 2 or has quadratic convergence.
Let us verify this result for the Newton-Raphson method. For the Newton-Raphson method
xk+1 = xk – f(xk)/f′(xk), we have φ(x) = x – f(x)/f′(x).
Then, φ′(x) = 1 – {[f′(x)]2 – f(x) f″(x)}/[f′(x)]2 = f(x) f″(x)/[f′(x)]2
and φ′(α) = f(α) f″(α)/[f′(α)]2 = 0,
since f(α) = 0 and f′(α) ≠ 0 (α is a simple root).
When xk → α, f(xk) → 0, we have | φ′(xk) | < 1, k = 1, 2,... and → 0 as k → ∞.
Now, φ″(x) = (1/[f′(x)]3) [f′(x) {f′(x) f″(x) + f(x) f‴(x)} – 2 f(x) {f″(x)}2]
and φ″(α) = f″(α)/f′(α) ≠ 0.
Therefore, a2 ≠ 0 and the second order convergence of Newton's method is verified.
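The orders derived above can be observed numerically. In this sketch (our own names; the reference value of the root of f(x) = x3 – 5x + 1 in (0, 1) is assumed to be α ≈ 0.2016397), the fixed point form converges linearly while Newton's method converges quadratically.

```python
# Compare error decay of the fixed point iteration phi(x) = (x^3 + 1)/5
# with that of Newton's method, both applied to f(x) = x^3 - 5x + 1.

f  = lambda x: x**3 - 5*x + 1
df = lambda x: 3*x**2 - 5
ALPHA = 0.2016396757   # assumed reference root in (0, 1)

def errors(step, x0, n):
    """Return |x_k - ALPHA| for k = 0..n under x_{k+1} = step(x_k)."""
    xs = [x0]
    for _ in range(n):
        xs.append(step(xs[-1]))
    return [abs(x - ALPHA) for x in xs]

e_fixed  = errors(lambda x: (x**3 + 1) / 5.0, 0.5, 4)   # order 1
e_newton = errors(lambda x: x - f(x) / df(x), 0.5, 4)   # order 2
print(e_fixed[-1], e_newton[-1])  # Newton's final error is far smaller
```

After the same number of steps from the same starting point, Newton's error is several orders of magnitude below the fixed point error, as the analysis predicts.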

⦁ Define a (i) root, (ii) simple root and (iii) multiple root of an algebraic equation f(x) = 0.
Solution
⦁ A number α such that f(α) = 0 is called a root of f(x) = 0.
⦁ Let α be a root of f(x) = 0. If f(α) = 0 and f′(α) ≠ 0, then α is said to be a simple root. Then, we can write f(x) as
f(x) = (x – α) g(x), g(α) ≠ 0.
⦁ Let α be a root of f(x) = 0. If
f(α) = 0, f′(α) = 0, ..., f (m–1)(α) = 0, and f (m)(α) ≠ 0,
then α is said to be a multiple root of multiplicity m. Then, we can write f(x) as
f(x) = (x – α)m g(x), g(α) ≠ 0.
⦁ State the intermediate value theorem.
Solution If f(x) is continuous on some interval [a, b] and f (a)f (b) < 0, then
the equation
f(x) = 0 has at least one real root or an odd number of real roots in the
interval (a, b).
⦁ How can we find an initial approximation to the root of f (x) = 0 ?
Solution Using intermediate value theorem, we find an
interval (a, b) which contains the root of the equation f (x) =
0. This implies that f (a)f(b) < 0. Any point in this interval
(including the end points) can be taken as an initial

121
approximation to the root of f(x) = 0.
⦁ What is the Descartes’ rule of signs?
Solution Let f (x) = 0 be a polynomial equation Pn(x) = 0. We
count the number of changes of signs in the coefficients of f (x)
= Pn(x) = 0. The number of positive roots cannot exceed the
number of changes of signs in the coefficients of Pn(x). Now, we
write the equation f(– x) = Pn(– x) = 0, and count the number of
changes of signs in the coefficients of Pn(– x). The number of
negative roots cannot exceed the number of changes of signs in
the coefficients of this equation.
⦁ Define convergence of an iterative method.
Solution Using any iteration method, we obtain a sequence of iterates (approximations to the root of f(x) = 0), x1, x2, ..., xk, ... If
lim k→∞ xk = α, or lim k→∞ | xk – α | = 0,
where α is the exact root, then the method is said to be convergent.


⦁ What are the criteria used to terminate an iterative procedure?
Solution Let  be the prescribed error tolerance. We terminate
the iterations wheneither of the following criteria is satisfied.
(i)  f(xk)   . (ii)  xk+1 – xk   .Sometimes, we may use both the
⦁ Define the fixed point iteration method to obtain a root of f(x) = 0. When does the method converge?
Solution Let a root of f(x) = 0 lie in the interval (a, b). Let x0 be an initial approximation to the root. We write f(x) = 0 in an equivalent form as x = φ(x), and define the fixed point iteration method as xk+1 = φ(xk), k = 0, 1, 2, … Starting with x0, we obtain a sequence of approximations x1, x2, ..., xk, ... such that in the limit as k → ∞, xk → α. The method converges when | φ′(x) | < 1 for all x in the interval (a, b). We normally check this condition at x0.
⦁ Write the method of false position to obtain a root of f(x) = 0.
What is the computational cost of the method?
Solution Let a root of f(x) = 0 lie in the interval (a, b). Let
x0, x1 be two initial approxi- mations to the root in this
interval. The method of false position is defined by

xk+1 = (xk–1 fk – xk fk–1)/(fk – fk–1), k = 1, 2, ...
The computational cost of the method is one evaluation of f(x) per iteration.
⦁ What is the disadvantage of the method of false position?
Solution If the root lies initially in the interval (x0, x1), then one
of the end points is fixed for all iterations. For example, in
Fig.1.2a, the left end point x0 is fixed and the right end point
moves towards the required root. Therefore, in actual
computations, the method behaves like

xk+1 = (x0 fk – xk f0)/(fk – f0).
In Fig. 1.2b, the right end point x1 is fixed and the left end point moves towards the required root. Therefore, in this case, in actual computations, the method behaves like
xk+1 = (xk f1 – x1 fk)/(f1 – fk).

⦁ Write the Newton-Raphson method to obtain a root of f(x) = 0. What is the computational cost of the method?
Solution Let a root of f(x) = 0 lie in the interval (a, b). Let x0
be an initial approximation to the root in this interval. The
Newton-Raphson method to find this root is defined by


xk+1 = xk – f(xk)/f′(xk), f′(xk) ≠ 0, k = 0, 1, 2, ...

The computational cost of the method is one evaluation of f(x) and one evaluation of the derivative f′(x) per iteration.
⦁ Define the order (rate) of convergence of an iterative method
for finding the root of an equation f(x) = 0.

Solution Let  be the exact root of f (x) = 0. Define the error of


approximation at the kth iterate as k = xk – , k = 0, 1, 2,... An
iterative method is said to be of order p or has the rate of
convergence p, if p is the largest positive real number for
which there exists a finite constant C  0, such that
In the following problems, find the root as specified using the Newton-Raphson
method.
⦁ Find the smallest positive root of x4 – x = 10, correct to three decimal
places.

⦁ Find the root between 0 and 1 of x3 = 6x – 4, correct to two decimal places.


⦁ Find the real root of the equation 3x = cos x + 1. (A.U.
Nov./Dec. 2006)
⦁ Find a root of x log10 x – 1.2 = 0, correct to three decimal places.
(A.U.
Nov./
Dec.
2004)
⦁ Find the root of x = 2 sin x, near 1.9, correct to three decimal places.

⦁ (i) Write an iteration formula for finding √N, where N is a real number.
(A.U. Nov./Dec. 2006, A.U. Nov./Dec. 2003)
(ii) Hence, evaluate √142, correct to three decimal places.



⦁ (i) Write an iteration formula for finding the value of 1/N, where N is a real
number.
(ii) Hence, evaluate 1/26, correct to four decimal places.
⦁ Find the root of the equation sin x = 1 + x3, which lies in the
interval (– 2, – 1), correct to three decimal places.
⦁ Find the approximate root of xex = 3, correct to three decimal places.
In the following problems, find the root as specified using the iteration method/method of successive approximations/fixed point iteration method.
⦁ Find the smallest positive root of x2 – 5x + 1 = 0, correct to four decimal
places.

⦁ Find the smallest positive root of x5 – 64x + 30 = 0, correct to four decimal places.
⦁ Find the smallest negative root in magnitude of 3x3 – x + 1 = 0, correct to four decimal places.
⦁ Find the smallest positive root of x = e–x, correct to two decimal places.
⦁ Find the real root of the equation cos x = 3x – 1. (A.U.
Nov./Dec. 2006)
⦁ The equation x2 + ax + b = 0 has two real roots α and β. Show that the iteration method
(i) xk+1 = – (axk + b)/xk is convergent near x = α, if | α | > | β |,
(ii) xk+1 = – b/(xk + a) is convergent near x = α, if | α | < | β |.
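The behaviour claimed in the last exercise can be checked numerically. Here x2 – 5x + 6 = 0 (a = – 5, b = 6), with roots α = 3 and β = 2, is an assumed sample equation, not one from the text.

```python
# Check the two iteration forms for x^2 + ax + b = 0 with a = -5, b = 6.

def iterate(phi, x0, n=60):
    """Apply x <- phi(x) a fixed number of times."""
    x = x0
    for _ in range(n):
        x = phi(x)
    return x

a, b = -5.0, 6.0
# (i) x_{k+1} = -(a*x + b)/x converges near the root of larger magnitude (3)
r1 = iterate(lambda x: -(a*x + b)/x, 3.5)
# (ii) x_{k+1} = -b/(x + a) converges near the root of smaller magnitude (2)
r2 = iterate(lambda x: -b/(x + a), 1.8)
print(round(r1, 6), round(r2, 6))
```

For form (i), φ′(x) = b/x2 is 6/9 < 1 near 3 but 6/4 > 1 near 2; for form (ii) the situation is reversed, which is exactly the convergence condition | φ′(x) | < 1 derived earlier.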

1.2.1 Introduction
Consider a system of n linear algebraic equations in n unknowns

a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
... ... ... ...
an1x1 + an2x2 + ... + annxn = bn
where aij, i = 1, 2, ..., n, j = 1, 2, …, n, are the known coefficients, bi, i = 1, 2, …, n, are the known right hand side values and xi, i = 1, 2, …, n, are the unknowns to be determined.
In matrix notation we write the system as
Ax = b (1.33)

where
A = | a11  a12  …  a1n |,   x = | x1 |,   and b = | b1 |.
    | a21  a22  …  a2n |        | x2 |            | b2 |
    | ...............   |        | …  |            | …  |
    | an1  an2  …  ann |        | xn |            | bn |

The matrix [A | b], obtained by appending the column b to the matrix A, is called the augmented matrix. That is


[A | b] = | a11  a12  …  a1n  b1 |
          | a21  a22  …  a2n  b2 |
          | ...................   |
          | an1  an2  …  ann  bn |

We define the following.

⦁ The system of equations (1.33) is consistent (has at least one solution), if
rank (A) = rank [A | b] = r.
If r = n, then the system has a unique solution.
If r < n, then the system has an (n – r) parameter family of an infinite number of solutions.
⦁ The system of equations (1.33) is inconsistent (has no solution) if
rank (A) ≠ rank [A | b].
We assume that the given system is consistent.
The methods of solution of the linear algebraic system of equations (1.33) may be classified as direct and iterative methods.
⦁ Direct methods produce the exact solution after a finite number of steps (disregarding the round-off errors). In these methods, we can determine the total number of operations (additions, subtractions, divisions and multiplications). This number is called the operational count of the method.
⦁ Iterative methods are based on the idea of successive approximations. We start with an initial approximation to the solution vector x = x0, and obtain a sequence of approximate vectors x0, x1, ..., xk, ..., which in the limit as k → ∞, converge to the exact solution vector x.
Now, we derive some direct methods.

1.2.2 Direct Methods
If the system of equations has some special forms, then the solution is obtained directly. We consider two such special forms.
⦁ Let A be a diagonal matrix, A = D. That is, we consider the system of equations Dx = b as
a11x1 = b1
a22x2 = b2
... ... ... ... (1.34)
annxn = bn
This system is called a diagonal system of equations. Solving directly, we obtain
xi = bi/aii, aii ≠ 0, i = 1, 2, ..., n. (1.35)

⦁ Let A be an upper triangular matrix, A = U. That is, we consider the system of equations Ux = b as
a11x1 + a12x2 + ... + a1nxn = b1
a22x2 + ... + a2nxn = b2
... ... ... ... (1.36)
an–1,n–1 xn–1 + an–1,n xn = bn–1
annxn = bn
This system is called an upper triangular system of equations. Solving for the unknowns in the order xn, xn–1, ..., x1, we get
xn = bn/ann,
xn–1 = (bn–1 – an–1,n xn)/an–1,n–1,
... ... ... ...
x1 = [b1 – (a12x2 + a13x3 + ... + a1nxn)]/a11. (1.37)

The unknowns are obtained by back substitution and this procedure is called the back substitution method.
Therefore, when the given system of equations is one of the above two forms, the solution is obtained directly.
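The back substitution procedure (1.37) can be sketched as follows; a minimal illustration with plain Python lists, using as an assumed check the upper triangular system that Example 1.13 below arrives at.

```python
# Back substitution for an upper triangular system Ux = b:
# solve for x_n first, then x_{n-1}, ..., x_1.

def back_substitution(U, b):
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]   # requires U[i][i] != 0
    return x

# Triangular system obtained in Example 1.13 (entries as computed there):
U = [[10.0, -1.0, 2.0], [0.0, 10.1, -1.2], [0.0, 0.0, 19.98020]]
b = [4.0, 2.6, 5.37624]
x = back_substitution(U, b)
print([round(v, 5) for v in x])  # close to (0.37512, 0.28940, 0.26908)
```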
Before we derive some direct methods, we define elementary row operations that can be performed on the rows of a matrix.
Elementary row transformations (operations) The following operations on the rows of a matrix A are called the elementary row transformations (operations).
⦁ Interchange of any two rows. If we interchange the ith row with the jth row, then we usually denote the operation as Ri ↔ Rj.
⦁ Division/multiplication of any row by a non-zero number p. If the ith row is multiplied by p, then we usually denote this operation as pRi.
⦁ Adding/subtracting a scalar multiple of any row to any other row. If all the elements of the jth row are multiplied by a scalar p and added to the corresponding elements of the ith row, then we usually denote this operation as Ri ← Ri + pRj. Note the order in which the
operation Ri + pRj is written. The elements of the jth row remain unchanged and the
elements
of the ith row get changed.
These row operations change the form of A, but do not change the row-rank of A. The matrix B obtained after the elementary row operations is said to be row equivalent with A. In the context of the solution of the system of algebraic equations, the solution of the new system is identical with the solution of the original system.
The above elementary operations performed on the columns of A (column C in place of row R) are called elementary column transformations (operations). However, we shall be using only the elementary row operations.


In this section, we derive two direct methods for the solution of the given system of equations, namely, the Gauss elimination method and the Gauss-Jordan method.
1.2.2.1 Gauss Elimination Method
The method is based on the idea of reducing the given system of equations Ax = b to an upper triangular system of equations Ux = z, using elementary row operations. We know that these two systems are equivalent. That is, the solutions of both the systems are identical. This reduced system Ux = z is then solved by the back substitution method to obtain the solution vector x.
We illustrate the method using the 3 × 3 system
a11x1 + a12x2 + a13x3 = b1
a21x1 + a22x2 + a23x3 = b2 (1.38)
a31x1 + a32x2 + a33x3 = b3
We write the augmented matrix [A | b] and reduce it to the following form using Gauss elimination:

[A | b] → [U | z]

The augmented matrix of the system (1.38) is

| a11  a12  a13  b1 |
| a21  a22  a23  b2 |     (1.39)
| a31  a32  a33  b3 |
First stage of elimination
We assume a11 ≠ 0. This element a11 in the 1 × 1 position is called the first pivot. We use this pivot to reduce all the elements below this pivot in the first column to zeros. Multiply the first row in (1.39) by a21/a11 and a31/a11 respectively and subtract from the second and third rows. That is, we are performing the elementary row operations R2 – (a21/a11)R1 and R3 – (a31/a11)R1 respectively. We obtain the new augmented matrix as

| a11  a12     a13     b1    |
| 0    a22(1)  a23(1)  b2(1) |     (1.40)
| 0    a32(1)  a33(1)  b3(1) |

where a22(1) = a22 – (a21/a11) a12, a23(1) = a23 – (a21/a11) a13, b2(1) = b2 – (a21/a11) b1,
a32(1) = a32 – (a31/a11) a12, a33(1) = a33 – (a31/a11) a13, b3(1) = b3 – (a31/a11) b1.

Second stage of elimination
We assume a22(1) ≠ 0. This element a22(1) in the 2 × 2 position is called the second pivot. We use this pivot to reduce the element below this pivot in the second column to zero. Multiply the second row in (1.40) by a32(1)/a22(1) and subtract from the third row. That is, we are performing the elementary row operation R3 – (a32(1)/a22(1))R2. We obtain the new augmented matrix as

| a11  a12     a13     b1    |
| 0    a22(1)  a23(1)  b2(1) |     (1.41)
| 0    0       a33(2)  b3(2) |

where a33(2) = a33(1) – (a32(1)/a22(1)) a23(1), b3(2) = b3(1) – (a32(1)/a22(1)) b2(1).
The element a33(2) ≠ 0 is called the third pivot. This system is in the required upper triangular form [U | z]. The solution vector x is now obtained by back substitution.
From the third row, we get x3 = b3(2)/a33(2).
From the second row, we get x2 = (b2(1) – a23(1) x3)/a22(1).
From the first row, we get x1 = (b1 – a12x2 – a13x3)/a11.
In general, using a pivot, all the elements below that pivot in that column are made
zeros.
Alternately, at each stage of elimination, we may also make the
pivot as 1, by dividing that particular row by the pivot.
Remark 15 When does the Gauss elimination method as described
above fail? It fails when any one of the pivots is zero or it is a very
small number, as the elimination progresses. If a pivot is zero, then
division by it gives an overflow error, since division by zero is not defined.
If a pivot is a very small number, then division by it introduces large
round-off errors and the solution may contain large errors.
For example, we may have the system
2x2 + 5x3 = 7
7x1 + x2 – 2x3 = 6
2x1 + 3x2 + 8x3 = 13
in which the first pivot is zero.
Pivoting Procedures How do we avoid computational errors in Gauss
elimination? To avoid computational errors, we follow the procedure
of partial pivoting. In the first stage of elimination, the first column of
the augmented matrix is searched for the largest element in
magnitude and brought as the first pivot by interchanging the first
row of the augmented matrix (first equation) with the row
(equation) having the largest element in magnitude. In the second
stage of elimination, the second column is searched for the largest
element in magnitude among the n – 1 elements leaving the first
element, and this element is brought as the second pivot by
interchanging the second row of the augmented matrix with the later
row having the largest element in magnitude. This procedure is
continued until the upper triangular system is obtained. Therefore,
partial pivoting is done after every stage of elimination. There is
another procedure called complete pivoting. In this procedure, we
search the entire matrix A in the augmented matrix for the largest
element in magnitude and bring it as the first pivot.

30
NU
MERICAL METHODS

This requires not only an interchange of the rows, but also an


interchange of the positions of the variables. It is possible that the
position of a variable is changed a number of times during this
pivoting. We need to keep track of the positions of all the variables.
Hence, the procedure is computationally expensive and is not used
in any software.
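The partial pivoting procedure described above can be sketched as follows. This is an illustrative implementation (our own names, not the book's code), tested on the system of Example 1.13 below.

```python
# Gauss elimination with partial pivoting, followed by back substitution.

def gauss_solve(A, b):
    n = len(b)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n - 1):
        # partial pivoting: bring the largest |entry| in column k to row k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                   # back substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

A = [[1.0, 10.0, -1.0], [2.0, 3.0, 20.0], [10.0, -1.0, 2.0]]
b = [3.0, 7.0, 4.0]
sol = gauss_solve(A, b)
print([round(v, 5) for v in sol])  # close to (0.37512, 0.28940, 0.26908)
```

The pivot search reproduces the interchanges R1 ↔ R3 and R2 ↔ R3 performed by hand in Example 1.13.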
Remark 16 Gauss elimination method is a direct method. Therefore, it is possible to count the total number of operations, that is, additions, subtractions, divisions and multiplications. Without going into details, we mention that the total number of divisions and multiplications (division and multiplication take the same amount of computer time) is n(n2 + 3n – 1)/3. The total number of additions and subtractions (addition and subtraction take the same amount of computer time) is n(n – 1)(2n + 5)/6.

Remark 17 When the system of algebraic equations is large, how do we conclude that it is consistent or not, using the Gauss elimination method? A way of determining the consistency is from the final reduced form: if during elimination an entire row of the coefficient part reduces to zeros while the corresponding right hand side entry is non-zero, the system is inconsistent; if the right hand side entry is also zero, the system has an infinite number of solutions.

Example 1.13 Solve the system of equations
x1 + 10x2 – x3 = 3
2x1 + 3x2 + 20x3 = 7
10x1 – x2 + 2x3 = 4
using the Gauss elimination with partial pivoting.
Solution We have the augmented matrix as

rj 1 10 1 3 yj

j2 3 20 7j
L10 1 2 4


We perform the following elementary row transformations and do the eliminations.
R1 ↔ R3:
| 10  –1   2  4 |
|  2   3  20  7 |
|  1  10  –1  3 |
R2 – (R1/5), R3 – (R1/10):
| 10  –1     2     4   |
|  0   3.2  19.6   6.2 |
|  0  10.1  –1.2   2.6 |
R2 ↔ R3:
| 10  –1     2     4   |
|  0  10.1  –1.2   2.6 |
|  0   3.2  19.6   6.2 |
R3 – (3.2/10.1)R2:
| 10  –1     2          4       |
|  0  10.1  –1.2        2.6     |
|  0   0    19.98020    5.37624 |
Back substitution gives the solution.
Third equation gives x3 = 5.37624/19.98020 = 0.26908.
Second equation gives x2 = (1/10.1)(2.6 + 1.2x3) = (1/10.1)(2.6 + 1.2(0.26908)) = 0.28940.
First equation gives x1 = (1/10)(4 + x2 – 2x3) = (1/10)(4 + 0.2894 – 2(0.26908)) = 0.37512.

Example 1.14 Solve the system of equations


2x1 + x2 + x3 – 2x4 = – 10
4x1 + 2x3 + x4 = 8
3x1 + 2x2 + 2x3 = 7
x1 + 3x2 + 2x3 – x4 = – 5
using the Gauss elimination with partial pivoting.
Solution The augmented matrix is given by
| 2  1  1  –2  –10 |
| 4  0  2   1    8 |
| 3  2  2   0    7 |
| 1  3  2  –1   –5 |
We perform the following elementary row transformations and do the eliminations.
R1 ↔ R2:
| 4  0  2   1    8 |
| 2  1  1  –2  –10 |
| 3  2  2   0    7 |
| 1  3  2  –1   –5 |
R2 – (1/2)R1, R3 – (3/4)R1, R4 – (1/4)R1:


| 4  0  2     1      8  |
| 0  1  0    –5/2  –14  |
| 0  2  1/2  –3/4    1  |
| 0  3  3/2  –5/4   –7  |
R2 ↔ R4:
| 4  0  2     1      8  |
| 0  3  3/2  –5/4   –7  |
| 0  2  1/2  –3/4    1  |
| 0  1  0    –5/2  –14  |
R3 – (2/3)R2, R4 – (1/3)R2:
| 4  0  2      1        8    |
| 0  3  3/2   –5/4     –7    |
| 0  0  –1/2   1/12     17/3 |
| 0  0  –1/2  –25/12   –35/3 |
R4 – R3:
| 4  0  2      1        8    |
| 0  3  3/2   –5/4     –7    |
| 0  0  –1/2   1/12     17/3 |
| 0  0   0    –13/6    –52/3 |
Using back substitution, we obtain
x4 = (– 52/3)(– 6/13) = 8,
x3 = – 2[(17/3) – (1/12)x4] = – 2[(17/3) – (1/12)(8)] = – 10,
x2 = (1/3)[– 7 + (5/4)x4 – (3/2)x3] = (1/3)[– 7 + (5/4)(8) – (3/2)(– 10)] = 6,
x1 = (1/4)[8 – 2x3 – x4] = (1/4)[8 – 2(– 10) – 8] = 5.

Example 1.15 Solve the system of equations


3x1 + 3x2 + 4x3 = 20
2x1 + x2 + 3x3 = 13
x1 + x2 + 3x3 = 6
using the Gauss elimination method.
Solution Let us solve this problem by making the pivots as 1. The augmented matrix is given by
| 3  3  4  20 |
| 2  1  3  13 |
| 1  1  3   6 |

We perform the following elementary row transformations and do the eliminations.
R1/3:
| 1  1  4/3  20/3 |
| 2  1  3    13   |
| 1  1  3     6   |
R2 – 2R1, R3 – R1:
| 1   1  4/3   20/3 |
| 0  –1  1/3  –1/3  |
| 0   0  5/3  –2/3  |


Back substitution gives the solution as
x3 = (– 2/3)(3/5) = – 2/5,
x2 = 1/3 + (1/3)x3 = 1/3 + (1/3)(– 2/5) = 1/5,
x1 = 20/3 – x2 – (4/3)x3 = 20/3 – 1/5 + 8/15 = 7.

Example 1.16 Test the consistency of the following system of equations
x1 + 10x2 – x3 = 3
2x1 + 3x2 + 20x3 = 7

1.2.2.2 Gauss-Jordan Method
The given system of equations Ax = b is reduced, using elementary row operations, to the form

[A | b] → [I | X]

In this case, after the eliminations are completed, we obtain the augmented matrix for a 3 × 3 system as

| 1  0  0  d1 |
| 0  1  0  d2 |     (1.42)
| 0  0  1  d3 |
and the solution is xi = di, i = 1, 2, 3.


Elimination procedure The first step is same as in the Gauss elimination method, that is, we make the elements below the first pivot as zeros, using the elementary row transformations. From the second step onwards, we make the elements below and above the pivots as zeros using the elementary row transformations. Lastly, we divide each row by its pivot so that the final augmented matrix is of the form (1.42). Partial pivoting can also be used in the solution. We may also make the pivots as 1 before performing the elimination.
Let us illustrate the method.
Example 1.17 Solve the following system of equations
x1 + x2 + x3 = 1
4x1 + 3x2 – x3 = 6
3x1 + 5x2 + 3x3 = 4
using the Gauss-Jordan method (i) without partial pivoting, (ii) with partial
pivoting.
Solution We have the augmented matrix as

| 1  1   1  1 |
| 4  3  –1  6 |
| 3  5   3  4 |

⦁ We perform the following elementary row transformations and do the eliminations.
R2 – 4R1, R3 – 3R1:
| 1   1   1  1 |
| 0  –1  –5  2 |
| 0   2   0  1 |
R1 + R2, R3 + 2R2:
| 1   0   –4  3 |
| 0  –1   –5  2 |
| 0   0  –10  5 |
R1 – (4/10)R3, R2 – (5/10)R3:
| 1   0    0    1  |
| 0  –1    0  –1/2 |
| 0   0  –10    5  |
Now, making the pivots as 1 ((– R2), (R3/(– 10))), we get
| 1  0  0    1  |
| 0  1  0   1/2 |
| 0  0  1  –1/2 |
Therefore, the solution of the system is x1 = 1, x2 = 1/2, x3 = – 1/2.


⦁ We perform the following elementary row transformations and do the
elimination.

rj4 3 1 6 yj

296
rj1 3/4  1/4

297
yj
3/2

R1  R2 :
j1 1 1 1 j. R1/4 :
j1 1 1 1 j.
L3 5 3 4 L3 5 3 4

j
SOLUTION OF EQUATIONS AND EIGEN VALUE PROBLEMS 35

j
rj1 3/4  1/4

298
3/2 yj

R2 – R1, R3 – 3R1 :

299
0 1/4 5/4

L0 11/4 15/4

300
 1/2 .
 1/2

rj1 3/4  1/4 3/2 yj rj1 3/4  1/4 3/2 y


11/4 15/4 1 15/1
1 j
 2/11
 1/2 
L0 1/4 5/4
L0 1/4 5/4
1/2

j
R2  R3 : 0
j j
 1/2 . R2 /(11/4) : 0 .

j
rj 1 0  14/11

301
yj
18/11

R1 – (3/4) R2, R3 – (1/4)R2 :

302
0 1 15/11

j
j
L0 0 10/11

303
 2/11 .
 5/11

rj1 0  14/11

304
18/11 yj

R3/(10/11) :

305
0 1 15/11

L0 0 1

306
 2/11 .
 1/2

j
rj1 0 0 1 yj

L
j
R1 + (14/11) R3, R2 – (15/11)R3 : 0 1 0

0 0 1

307
1/2 .
 1/2

Therefore, the solution of the system is x1 = 1, x2 = 1/2, x3 = – 1/2.
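The elimination procedure used in this example can be sketched in code. The following Python fragment is ours, not from the text (the function name gauss_jordan_solve is illustrative): it reduces the augmented matrix [A | b] to the form (1.42) with partial pivoting and reads off the solution.

```python
# Sketch of the Gauss-Jordan method with partial pivoting.
# Reduces [A | b] to [I | d] so that the solution is x = d.

def gauss_jordan_solve(A, b):
    n = len(A)
    # Build the augmented matrix [A | b] with float entries.
    aug = [[float(x) for x in row] + [float(r)] for row, r in zip(A, b)]
    for k in range(n):
        # Partial pivoting: bring the largest |entry| in column k to row k.
        p = max(range(k, n), key=lambda i: abs(aug[i][k]))
        aug[k], aug[p] = aug[p], aug[k]
        # Make the pivot 1 by dividing the pivot row.
        pivot = aug[k][k]
        aug[k] = [x / pivot for x in aug[k]]
        # Eliminate column k in every OTHER row, above and below the pivot.
        for i in range(n):
            if i != k:
                m = aug[i][k]
                aug[i] = [x - m * y for x, y in zip(aug[i], aug[k])]
    return [row[-1] for row in aug]

# The system of Example 1.17.
x = gauss_jordan_solve([[1, 1, 1], [4, 3, -1], [3, 5, 3]], [1, 6, 4])
print(x)  # approximately [1.0, 0.5, -0.5]
```

The inner loop over all rows i ≠ k is exactly what distinguishes Gauss-Jordan from Gauss elimination, where only the rows below the pivot are modified.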


Remark 18  The Gauss-Jordan method looks very elegant as the solution is obtained directly. However, it is computationally more expensive than Gauss elimination. For large n, the total number of divisions and multiplications for the Gauss-Jordan method is almost 1.5 times that required for Gauss elimination. Hence, we do not normally use this method for the solution of a system of equations. The most important application of this method is to find the inverse of a non-singular matrix. We present this method in the following section.
1.2.2.3 Inverse of a Matrix by Gauss-Jordan Method

As given in Remark 18, the important application of the Gauss-Jordan method is to find the inverse of a non-singular matrix A. We start with the augmented matrix of A with the identity matrix I of the same order. When the Gauss-Jordan procedure is completed, we obtain

                 Gauss-Jordan method
        [A | I]  ------------------->  [I | A^(-1)]

since AA^(-1) = I.

Remark 19  Partial pivoting can also be done using the augmented matrix [A | I]. However, we cannot first interchange the rows of A and then find the inverse; then, we would be finding the inverse of a different matrix.
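This [A | I] → [I | A^(-1)] reduction can be sketched in a few lines of Python. The code below is ours, not the book's (the function name is illustrative), and assumes the input matrix is non-singular.

```python
# Sketch of matrix inversion by the Gauss-Jordan method:
# augment A with the identity and reduce to [I | A^(-1)].

def gauss_jordan_inverse(A):
    n = len(A)
    # Build the augmented matrix [A | I].
    aug = [[float(x) for x in row] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(A)]
    for k in range(n):
        # Partial pivoting on the augmented matrix, as in Remark 19.
        p = max(range(k, n), key=lambda i: abs(aug[i][k]))
        aug[k], aug[p] = aug[p], aug[k]
        pivot = aug[k][k]
        aug[k] = [x / pivot for x in aug[k]]
        # Zero out column k in all other rows.
        for i in range(n):
            if i != k:
                m = aug[i][k]
                aug[i] = [x - m * y for x, y in zip(aug[i], aug[k])]
    # The right half of the reduced matrix is the inverse.
    return [row[n:] for row in aug]

inv = gauss_jordan_inverse([[1, 1, 1], [4, 3, -1], [3, 5, 3]])
# inv is approximately [[7/5, 1/5, -2/5], [-3/2, 0, 1/2], [11/10, -1/5, -1/10]]
```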
Example 1.18  Find the inverse of the matrix

        [ 1   1   1 ]
        [ 4   3  -1 ]
        [ 3   5   3 ]

using the Gauss-Jordan method (i) without partial pivoting, and (ii) with partial
pivoting.
Solution  Consider the augmented matrix

        [ 1   1   1  |  1   0   0 ]
        [ 4   3  -1  |  0   1   0 ]
        [ 3   5   3  |  0   0   1 ]
(i) We perform the following elementary row transformations and do the eliminations.

R2 - 4R1, R3 - 3R1 :

        [ 1   1    1  |   1   0   0 ]
        [ 0  -1   -5  |  -4   1   0 ]
        [ 0   2    0  |  -3   0   1 ]

R1 + R2, R3 + 2R2 :

        [ 1   0    -4  |   -3   1   0 ]
        [ 0  -1    -5  |   -4   1   0 ]
        [ 0   0   -10  |  -11   2   1 ]

R1 - (4/10)R3, R2 - (5/10)R3 :

        [ 1   0     0  |   7/5   1/5  -2/5 ]
        [ 0  -1     0  |   3/2    0   -1/2 ]
        [ 0   0   -10  |  -11     2     1  ]

Now, making the pivots 1 (- R2, R3/(- 10)), we get

        [ 1   0   0  |   7/5    1/5  -2/5  ]
        [ 0   1   0  |  -3/2     0    1/2  ]
        [ 0   0   1  |  11/10  -1/5  -1/10 ]
(ii) We perform the following elementary row transformations and do the eliminations.

R1 ↔ R2 :

        [ 4   3  -1  |  0   1   0 ]
        [ 1   1   1  |  1   0   0 ]
        [ 3   5   3  |  0   0   1 ]

R1/4 :

        [ 1  3/4  -1/4  |  0  1/4  0 ]
        [ 1   1     1   |  1   0   0 ]
        [ 3   5     3   |  0   0   1 ]

R2 - R1, R3 - 3R1 :

        [ 1  3/4    -1/4  |  0   1/4  0 ]
        [ 0  1/4     5/4  |  1  -1/4  0 ]
        [ 0  11/4   15/4  |  0  -3/4  1 ]

R2 ↔ R3 :

        [ 1  3/4    -1/4  |  0   1/4  0 ]
        [ 0  11/4   15/4  |  0  -3/4  1 ]
        [ 0  1/4     5/4  |  1  -1/4  0 ]

R2/(11/4) :

        [ 1  3/4   -1/4   |  0   1/4    0   ]
        [ 0   1    15/11  |  0  -3/11  4/11 ]
        [ 0  1/4    5/4   |  1  -1/4    0   ]

R1 - (3/4)R2, R3 - (1/4)R2 :

        [ 1   0   -14/11  |  0   5/11  -3/11 ]
        [ 0   1    15/11  |  0  -3/11   4/11 ]
        [ 0   0    10/11  |  1  -2/11  -1/11 ]

R3/(10/11) :

        [ 1   0   -14/11  |    0     5/11  -3/11 ]
        [ 0   1    15/11  |    0    -3/11   4/11 ]
        [ 0   0      1    |  11/10  -1/5   -1/10 ]

R1 + (14/11)R3, R2 - (15/11)R3 :

        [ 1   0   0  |   7/5    1/5  -2/5  ]
        [ 0   1   0  |  -3/2     0    1/2  ]
        [ 0   0   1  |  11/10  -1/5  -1/10 ]

Therefore, the inverse of the matrix is given by

        [   7/5    1/5  -2/5  ]
        [  -3/2     0    1/2  ]
        [  11/10  -1/5  -1/10 ]

Example 1.19  Using the Gauss-Jordan method, find the inverse of

        [ 2   2   3 ]
        [ 2   1   1 ]          (A.U. Apr./May 2004)
        [ 1   3   5 ]

Solution  We have the following augmented matrix.

        [ 2   2   3  |  1   0   0 ]
        [ 2   1   1  |  0   1   0 ]
        [ 1   3   5  |  0   0   1 ]

We perform the following elementary row transformations and do the eliminations.

R1/2 :

        [ 1   1   3/2  |  1/2  0   0 ]
        [ 2   1    1   |   0   1   0 ]
        [ 1   3    5   |   0   0   1 ]

R2 - 2R1, R3 - R1 :

        [ 1   1    3/2  |   1/2  0   0 ]
        [ 0  -1   -2    |  -1    1   0 ]
        [ 0   2    7/2  |  -1/2  0   1 ]

R2 ↔ R3. Then, R2/2 :

        [ 1   1    3/2  |   1/2  0    0  ]
        [ 0   1    7/4  |  -1/4  0   1/2 ]
        [ 0  -1   -2    |  -1    1    0  ]

R1 - R2, R3 + R2 :

        [ 1   0   -1/4  |   3/4  0  -1/2 ]
        [ 0   1    7/4  |  -1/4  0   1/2 ]
        [ 0   0   -1/4  |  -5/4  1   1/2 ]

R3/(- 1/4) :

        [ 1   0   -1/4  |   3/4   0  -1/2 ]
        [ 0   1    7/4  |  -1/4   0   1/2 ]
        [ 0   0     1   |   5    -4  -2   ]

R1 + (1/4)R3, R2 - (7/4)R3 :

        [ 1   0   0  |   2  -1  -1 ]
        [ 0   1   0  |  -9   7   4 ]
        [ 0   0   1  |   5  -4  -2 ]

Therefore, the inverse of the given matrix is given by

        [  2  -1  -1 ]
        [ -9   7   4 ]
        [  5  -4  -2 ]

⦁ What is a direct method for solving a linear system of algebraic equations Ax = b?

Solution  Direct methods produce the solution in a finite number of steps. The number of operations, called the operational count, can be calculated.
⦁ What is an augmented matrix of the system of algebraic equations Ax = b?

Solution  The augmented matrix is denoted by [A | b], where A and b are the coefficient matrix and right hand side vector respectively. If A is an n × n matrix and b is an n × 1 vector, then the augmented matrix is of order n × (n + 1).
⦁ Define the rank of a matrix.

Solution  The number of linearly independent rows/columns of a matrix defines the row-rank/column-rank of that matrix. We note that row-rank = column-rank = rank.
⦁ Define consistency and inconsistency of a system of linear algebraic equations Ax = b.

Solution  Let the augmented matrix of the system be [A | b].

(i) The system of equations Ax = b is consistent (has at least one solution) if rank (A) = rank [A | b] = r. If r = n, then the system has a unique solution. If r < n, then the system has an (n - r) parameter family of infinite number of solutions.

(ii) The system of equations Ax = b is inconsistent (has no solution) if rank (A) ≠ rank [A | b].
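The rank test above can be carried out numerically, for example with NumPy's matrix_rank. The 2 × 2 singular system below is a made-up illustration (ours, not from the text) showing one consistent and one inconsistent right hand side.

```python
# Consistency check for Ax = b: compare rank(A) with rank([A | b]).
import numpy as np

A = np.array([[1.0, 1.0], [2.0, 2.0]])    # singular matrix, rank 1
b_consistent = np.array([[3.0], [6.0]])   # lies in the column space of A
b_inconsistent = np.array([[3.0], [7.0]]) # does not

def is_consistent(A, b):
    # Consistent iff augmenting b does not increase the rank.
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.hstack([A, b]))

print(is_consistent(A, b_consistent))    # True  (n - r = 1 parameter family)
print(is_consistent(A, b_inconsistent))  # False (no solution)
```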
⦁ Define elementary row transformations.

Solution  We define the following operations as elementary row transformations.

(i) Interchange of any two rows. If we interchange the ith row with the jth row, then we usually denote the operation as Ri ↔ Rj.

(ii) Division/multiplication of any row by a non-zero number p. If the ith row is multiplied by p, then we usually denote this operation as pRi.
