PUBLISHING FOR ONE WORLD
NEW AGE INTERNATIONAL (P) LIMITED, PUBLISHERS
4835/24, Ansari Road, Daryaganj, New Delhi - 110002
Visit us at www.newagepublishers.com
Preface
This book is based on the experience and the lecture notes of the
authors while teaching Numerical Analysis for almost four decades
at the Indian Institute of Technology, New Delhi.
This comprehensive textbook covers the material for a one-semester course on Numerical Methods of Anna University. The emphasis in the book is on the presentation of fundamentals and theoretical concepts in an intelligible and easy-to-understand manner. The book is written as a textbook rather than as a problem/guide book. The textbook offers a logical presentation of both the theory and techniques for problem solving, to motivate the students in the study and application of Numerical Methods. Examples and Problems in Exercises are used to explain each theoretical concept and the application of these concepts in problem solving. Answers for every problem and hints for difficult problems are provided to encourage the students towards self-learning.
The authors are highly grateful to Prof. M.K. Jain, who was
their teacher, colleague and co-author of their earlier books on
Numerical Analysis. With his approval, we have freely used the
material from our book, Numerical Methods for Scientific and
Engineering Computation, published by the same publishers.
This book is the outcome of the request of Mr. Saumya
Gupta, Managing Director, New Age International Publishers, for
writing a good book on Numerical Methods for Anna University. The
authors are thankful to him for following it up until the book is
complete.
The first author is thankful to Dr. Gokaraju Gangaraju,
President of the college, Prof. P.S. Raju, Director and Prof. Jandhyala
N. Murthy, Principal, Gokaraju Rangaraju Institute of Engineering and
Technology, Hyderabad for their encouragement during the
preparation of the manuscript.
The second author is thankful to the entire management of
Manav Rachna Educational Institutions, Faridabad and the Director-
Principal of Manav Rachna College of Engineering, Faridabad for
providing a congenial environment during the writing of this book.
S.R.K. Iyengar
R.K. Jain
Contents
Preface (v)
⦁ Initial Approximation for an Iterative Procedure, 4
⦁ Method of False Position, 6
⦁ Newton-Raphson Method, 11
⦁ General Iteration Method, 15
⦁ Convergence of Iteration Methods, 19
⦁ Linear System of Algebraic Equations, 25
⦁ Introduction, 25
⦁ Direct Methods, 26
⦁ Gauss Elimination Method, 28
⦁ Gauss-Jordan Method, 33
⦁ Inverse of a Matrix by Gauss-Jordan Method, 35
⦁ Iterative Methods, 41
⦁ Gauss-Jacobi Iteration Method, 41
⦁ Gauss-Seidel Iteration Method, 46
⦁ Eigen Value Problems, 52
⦁ Introduction, 52
⦁ Power Method, 53
⦁ Answers and Hints, 59
⦁ Trapezium Rule, 129
⦁ Simpson’s 1/3 Rule, 136
⦁ Simpson’s 3/8 Rule, 144
⦁ Romberg Method, 147
⦁ Integration Rules Based on Non-uniform Mesh Spacing, 159
⦁ Gauss-Legendre Integration Rules, 160
⦁ Evaluation of Double Integrals, 169
⦁ Evaluation of Double Integrals Using Trapezium Rule, 169
⦁ Evaluation of Double Integrals by Simpson’s Rule, 173
⦁ Answers and Hints, 177
⦁ Milne-Simpson Methods, 224
⦁ Predictor-Corrector Methods, 225
⦁ Stability of Numerical Methods, 237
⦁ Answers and Hints, 238
⦁ Answers and Hints, 308
xe^(2x) – 1 = 0, cos x – xe^x = 0, tan x = x
Fig. 1.1 ‘Root of f(x) = 0’
Hence, x = 2 is a multiple root of multiplicity 2 (double root) of f(x) = x^3 – 3x^2 + 4 = 0. We can write f(x) = (x – 2)^2 (x + 1) = (x – 2)^2 g(x), g(2) = 3 ≠ 0.
In this chapter, we shall be considering the case of simple roots only.
Remark 1 A polynomial equation of degree n has exactly n roots, real or complex, simple or multiple, whereas a transcendental equation may have one root, an infinite number of roots, or no root.
We shall derive methods for finding only the real roots.
The methods for finding the roots are classified as (i) direct
methods, and (ii) iterative methods.
Direct methods These methods give the exact values of all the roots
in a finite number of steps (disregarding the round-off errors).
Therefore, for any direct method, we can give the total number of
operations (additions, subtractions, divisions and multiplications).
This number is called the operational count of the method.
For example, the roots of the quadratic equation ax^2 + bx + c = 0, a ≠ 0, can be obtained using the formula

    x = [– b ± sqrt(b^2 – 4ac)] / (2a).

For this method, we can give the count of the total number of operations.
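As an illustration, this direct method is easily coded. The following minimal Python sketch is our own illustration, not from the text (the function name quadratic_roots and the example call are hypothetical); it assumes real coefficients and a non-negative discriminant.

import math

def quadratic_roots(a, b, c):
    """Roots of ax^2 + bx + c = 0 by the direct formula (assumes b^2 - 4ac >= 0)."""
    if a == 0:
        raise ValueError("a must be nonzero")
    disc = b * b - 4.0 * a * c      # discriminant b^2 - 4ac
    root = math.sqrt(disc)
    return (-b + root) / (2.0 * a), (-b - root) / (2.0 * a)

print(quadratic_roots(1.0, -3.0, 2.0))   # (2.0, 1.0)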
There are direct methods for finding all the roots of cubic and fourth degree polynomials. However, these methods are difficult to use.
Direct methods for finding the roots of polynomial equations of degree greater than 4, or of transcendental equations, are not available in the literature.
If a method uses one initial approximation to the root, we can write the method as xk+1 = φ(xk), k = 0, 1, 2, ..., starting with x0. The sequence of approximations x1, x2, ..., xk, ... is computed successively.
If a method uses two initial approximations x0, x1, to the root, then we can write the method as

    xk+1 = φ(xk–1, xk), k = 1, 2, ...    (1.6)

Convergence of iterative methods The sequence of iterates, {xk}, is said to converge to the exact root α, if

    lim (k → ∞) xk = α,  or  lim (k → ∞) | xk – α | = 0.    (1.7)
⦁ We write the equation f(– x) = Pn(– x) = 0, and count the number of
changes of signs in the coefficients of Pn(– x). The number of negative
roots cannot exceed the number of changes of signs. Again, if there are
four changes in signs, then the equation may have four negative
roots or two negative roots or no negative root. If there are three
changes in signs, then the equation may have three negative roots or
definitely one negative root.
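The sign-change count used in Descartes' rule is easy to automate. The short Python sketch below is only an illustration (the helper name sign_changes is ours); it is applied to P(x) = x^3 – 3x + 1, a polynomial used later in this chapter.

def sign_changes(coeffs):
    """Count sign changes in a list of polynomial coefficients (zero coefficients are skipped)."""
    signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# P(x) = x^3 - 3x + 1 (coefficients in decreasing powers of x)
p = [1, 0, -3, 1]
p_neg = [-1, 0, 3, 1]        # coefficients of P(-x)
print(sign_changes(p))       # bound on the number of positive roots
print(sign_changes(p_neg))   # bound on the number of negative roots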
We use the following theorem of calculus to determine an
initial approximation. It is also called the intermediate value
theorem.
Theorem 1.1 If f(x) is continuous on some interval [a, b] and f(a)f(b)
< 0, then the equation f(x) = 0 has at least one real root or an odd
number of real roots in the interval (a, b).
This result is very simple to use. We set up a table of values
of f (x) for various values of x. Studying the changes in signs in the
values of f (x), we determine the intervals in which the roots lie. For
example, if f (1) and f (2) are of opposite signs, then there is a root in
the interval (1, 2).
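Such a table of values can also be generated by a small script that reports every interval in which f(x) changes sign. The sketch below is ours, not the book's, and uses f(x) = x^3 – 3x + 1 purely as an illustration.

def f(x):
    # example function; replace with the f(x) under study
    return x**3 - 3*x + 1

xs = list(range(-3, 4))
vals = [f(x) for x in xs]
print(list(zip(xs, vals)))                     # the table of values of f(x)
for x0, x1, f0, f1 in zip(xs, xs[1:], vals, vals[1:]):
    if f0 * f1 < 0:
        print(f"a root lies in ({x0}, {x1})")  # sign change => root in (x0, x1)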
Let us illustrate through the following examples.
in the coefficients (– 8, – 12, 2, 3) is 1. Therefore, the equation has one negative root.
We have the following table of values for f(x), (Table 1.1).

x        – 3      – 2     – 1     0      1      2     3
f(x)     – 101    – 35    – 9     – 5    – 5    9     55

From the table, we find that there is one real positive root in the interval (1, 2). The equation has no negative real root.
Example 1.2 Determine an interval of length one unit, in which the negative real root, which is smallest in magnitude, lies for the equation 9x^3 + 18x^2 – 37x – 70 = 0.
Solution Let f(x) = 9x^3 + 18x^2 – 37x – 70 = 0. Since the smallest negative real root in magnitude is required, we form a table of values for x < 0, (Table 1.3).

x        – 5      – 4      – 3     – 2    – 1     0
f(x)     – 560    – 210    – 40    4      – 24    – 70

Since f(– 2) f(– 1) < 0, the negative root of smallest magnitude lies in the interval (– 2, – 1).
We know from calculus that, in the neighborhood of a point on a curve, the curve can be approximated by a straight line. For deriving numerical methods to find a root of an equation f(x) = 0, we approximate the curve in a small interval containing the root by a chord. This gives

    xk+1 = [xk (fk – fk–1) – (xk – xk–1) fk] / (fk – fk–1)
         = (xk–1 fk – xk fk–1) / (fk – fk–1), k = 1, 2, ...    (1.11)
Therefore, starting with the initial interval (x0, x1), in which the root lies, we
compute
    x2 = (x0 f1 – x1 f0)/(f1 – f0).
Now, if f(x0) f(x2) < 0, then the root lies in the interval (x0, x2).
Otherwise, the root lies in the interval (x2, x1). The iteration is
continued using the interval in which the root lies, until
the required accuracy criterion given in Eq.(1.8) or Eq.(1.9) is satisfied.
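A direct Python translation of this procedure might look as follows. This is only a sketch under our own choices of function name, tolerance and iteration limit; it is applied to f(x) = x^3 – 3x + 1 on (0, 1) as an illustration.

def false_position(f, x0, x1, tol=1e-6, max_iter=100):
    """Method of false position (regula falsi) on an interval (x0, x1) with f(x0)*f(x1) < 0."""
    f0, f1 = f(x0), f(x1)
    if f0 * f1 >= 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    for _ in range(max_iter):
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)   # Eq. (1.11)
        f2 = f(x2)
        if abs(f2) < tol:
            return x2
        if f0 * f2 < 0:          # root lies in (x0, x2)
            x1, f1 = x2, f2
        else:                    # root lies in (x2, x1)
            x0, f0 = x2, f2
    return x2

print(false_position(lambda x: x**3 - 3*x + 1, 0.0, 1.0))   # about 0.3473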
Alternate derivation of the method
Let the root of the equation f(x) = 0, lie in the interval (xk–1, xk).
Then, P(xk–1, fk–1), Q(xk, fk) are points on the curve f(x) = 0. Draw
the chord joining the points P and Q (Figs. 1.2a, b). We
approximate the curve in this interval by the chord, that is, f(x) ≈ ax + b. The next approximation to the root is given by x = – b/a. Since the chord passes through the points P and Q, we get
fk–1 = axk–1 + b, and fk = axk + b.
Subtracting the two equations, we get

    fk – fk–1 = a(xk – xk–1),  or  a = (fk – fk–1)/(xk – xk–1).

The second equation gives b = fk – a xk. Hence, the next approximation is given by

    x = – b/a = xk – fk/a = xk – [(xk – xk–1)/(fk – fk–1)] fk,

which is the same as the method given in Eq.(1.11).
In Fig.1.2b, the right end point x1 is fixed and the left end point moves towards the required root. Therefore, in this case, in actual computations, the method behaves like

    xk+1 = (xk f1 – x1 fk)/(f1 – fk), k = 1, 2, ...    (1.13)
Approximating f(x) by linear interpolation through the points (xk–1, fk–1) and (xk, fk), we may also write

    f(x) = [(x – xk)/(xk–1 – xk)] fk–1 + [(x – xk–1)/(xk – xk–1)] fk.

Setting f(x) = 0, we obtain

    [(x – xk–1) fk – (x – xk) fk–1]/(xk – xk–1) = 0,  or  x(fk – fk–1) = xk–1 fk – xk fk–1,

    or  x = xk+1 = (xk–1 fk – xk fk–1)/(fk – fk–1).
x        0      1      2      3
f(x)     1      – 1    3      19
x4 = (x0 f3 – x3 f0)/(f3 – f0) = [0 – 0.36364(1)]/(– 0.04283 – 1) = 0.34870,  f(x4) = f(0.34870) = – 0.00370.
Since f(0) f(0.34870) < 0, the root lies in the interval (0, 0.34870).
x5 = (x0 f4 – x4 f0)/(f4 – f0) = [0 – 0.34870(1)]/(– 0.00370 – 1) = 0.34741,  f(x5) = f(0.34741) = – 0.00030.
Since f(0) f(0.34741) < 0, the root lies in the interval (0, 0.34741).
x6 = (x0 f5 – x5 f0)/(f5 – f0) = [0 – 0.34741(1)]/(– 0.00030 – 1) = 0.347306.
We now find the root in the interval (1, 2). We have x0 = 1, x1 = 2, f0 = – 1, f1 = 3.
x2 = (x0 f1 – x1 f0)/(f1 – f0) = [1(3) – 2(– 1)]/[3 – (– 1)] = 1.25,  f(x2) = f(1.25) = – 0.796875.
Since f(1.25) f(2) < 0, the root lies in the interval (1.25, 2). We use the formula given in Eq.(1.13).
x3 = (x2 f1 – x1 f2)/(f1 – f2) = [1.25(3) – 2(– 0.796875)]/[3 – (– 0.796875)] = 1.407407,
x4 = [1.407407(3) – 2(– 0.434437)]/[3 – (– 0.434437)] = 1.482367,
x5 = [1.482367(3) – 2(– 0.189730)]/[3 – (– 0.189730)] = 1.513156,
x6 = [1.513156(3) – 2(– 0.074884)]/[3 – (– 0.074884)] = 1.525012,
x7 = [1.525012(3) – 2(– 0.028374)]/[3 – (– 0.028374)] = 1.529462,
x8 = [1.529462(3) – 2(– 0.010586)]/[3 – (– 0.010586)] = 1.531116,
x9 = [1.531116(3) – 2(– 0.003928)]/[3 – (– 0.003928)] = 1.531729,
x10 = [1.531729(3) – 2(– 0.001454)]/[3 – (– 0.001454)] = 1.531956.
The root has been computed correct to three decimal places. The required root can be taken as x ≈ x10 = 1.531956. Note that the right end point x = 2 is fixed for all iterations.
f(x4) = f(0.49402) = – 0.07079.
Since f(0.49402) f(1) < 0, the root lies in the interval (0.49402, 1).
x5 = (x4 f1 – x1 f4)/(f1 – f4) = [0.49402(2.17798) – 1(– 0.07079)]/[2.17798 – (– 0.07079)] = 0.50995,
x6 = [0.50995(2.17798) – 1(– 0.02360)]/[2.17798 – (– 0.02360)] = 0.51520.
Since f(0.51520) f(1) < 0, the root lies in the interval (0.51520, 1).
x7 = [0.51520(2.17798) – 1(– 0.00776)]/[2.17798 – (– 0.00776)] = 0.51692.
⦁ Newton-Raphson Method
Starting with the initial approximation x0, the first iterate is x1 = x0 – f(x0)/f′(x0), f′(x0) ≠ 0. In general, we obtain the iteration method

    xk+1 = xk – f(xk)/f′(xk),  f′(xk) ≠ 0.    (1.14)
This method is called the Newton-Raphson method or, simply, Newton’s method. It is also called the tangent method.
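A minimal Python sketch of iteration (1.14) is given below; the function name, tolerance and iteration limit are our own choices, not from the text. As an illustration it is applied to f(x) = x^3 – 5x + 1, the equation used in later examples.

def newton_raphson(f, df, x0, tol=1e-8, max_iter=50):
    """Newton-Raphson iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) vanished; choose another starting point")
        x_new = x - fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = newton_raphson(lambda x: x**3 - 5*x + 1, lambda x: 3*x**2 - 5, 0.5)
print(root)   # about 0.201640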
Alternate derivation of the method
Let xk be an approximation to the root of the equation f(x) = 0. Let Δx be an increment in x such that xk + Δx is the exact root, that is, f(xk + Δx) ≡ 0.
Expanding f(xk + Δx) in Taylor series and neglecting the second and higher powers of Δx, we obtain

    xk+1 = xk + Δx = xk – f(xk)/f′(xk),  f′(xk) ≠ 0, k = 0, 1, 2, ...
With N = 17 and x0 = 0.05, we obtain
x1 = 2x0 – Nx0^2 = 2(0.05) – 17(0.05)^2 = 0.0575,
x2 = 2x1 – Nx1^2 = 2(0.0575) – 17(0.0575)^2 = 0.058794,
x3 = 2x2 – Nx2^2 = 2(0.058794) – 17(0.058794)^2 = 0.058823,
x4 = 2x3 – Nx3^2 = 2(0.058823) – 17(0.058823)^2 = 0.058824.
However, if an iterate becomes negative, for example x1 = – 0.0825, then
x2 = 2x1 – Nx1^2 = 2(– 0.0825) – 17(– 0.0825)^2 = – 0.280706,
and the iterations diverge.
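The iteration xk+1 = 2xk – Nxk^2 is Newton's method applied to f(x) = (1/x) – N, so it computes 1/N using only multiplications and subtractions. The short sketch below is ours, not the book's; it reproduces the computation above for N = 17 and x0 = 0.05.

N = 17.0
x = 0.05                      # initial approximation to 1/17
for k in range(1, 5):
    x = 2.0 * x - N * x * x   # x_{k+1} = 2*x_k - N*x_k^2: no division needed
    print(k, round(x, 6))
# the iterates approach 1/17 = 0.058824; a starting value with |1 - N*x0| >= 1 diverges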
x4 = (2x3^3 + 17)/(3x3^2) = [2(2.571332)^3 + 17]/[3(2.571332)^2] = 2.571282.
    xk+1 = xk – (xk^3 – 5xk + 1)/(3xk^2 – 5) = (2xk^3 – 1)/(3xk^2 – 5), k = 0, 1, 2, ...

With x0 = 0.5, we obtain
x1 = (2x0^3 – 1)/(3x0^2 – 5) = [2(0.5)^3 – 1]/[3(0.5)^2 – 5] = 0.176471,
x2 = (2x1^3 – 1)/(3x1^2 – 5) = [2(0.176471)^3 – 1]/[3(0.176471)^2 – 5] = 0.201568,
x3 = (2x2^3 – 1)/(3x2^2 – 5) = [2(0.201568)^3 – 1]/[3(0.201568)^2 – 5] = 0.201640,
x4 = (2x3^3 – 1)/(3x3^2 – 5) = [2(0.201640)^3 – 1]/[3(0.201640)^2 – 5] = 0.201640.
x2 = 11.594870,
x3 = x2 – (x2 log10 x2 – 12.34)/(log10 x2 + 0.434294) = 11.594854.
Now, finding a root of f(x) = 0 is the same as finding a number α such that α = φ(α), that is, a fixed point of φ(x). A fixed point of a function φ is a point α such that α = φ(α). This result is also called the fixed point theorem.
Using Eq.(1.16), the iteration method is written as

    xk+1 = φ(xk), k = 0, 1, 2, ...    (1.18)

The function φ(x) is called the iteration function. Starting with the initial approximation x0, we compute the next approximations as
    x1 = φ(x0), x2 = φ(x1), x3 = φ(x2), ...
The stopping criterion is the same as used earlier. Since there are many ways of writing f(x) = 0 as x = φ(x), it is important to know whether all or at least one of these iteration methods converges.
Remark 10 Convergence of an iteration method xk+1 = φ(xk), k = 0, 1, 2, ..., depends on the choice of the iteration function φ(x), and a suitable initial approximation x0, to the root.
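The general fixed point iteration can be written as a small Python helper. The sketch below is our own illustration (the name fixed_point and the stopping rule are not from the text); it simply applies xk+1 = φ(xk) until two successive iterates agree to a tolerance, and is called with φ(x) = (x^3 + 1)/5 from Eq.(1.19).

def fixed_point(phi, x0, tol=1e-8, max_iter=100):
    """Iterate x_{k+1} = phi(x_k) starting from x0."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# phi(x) = (x^3 + 1)/5 for f(x) = x^3 - 5x + 1 = 0; root near 0.2016
print(fixed_point(lambda x: (x**3 + 1) / 5.0, 0.5))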
Consider again the iteration methods given in Eq.(1.17), for finding a root of the equation f(x) = x^3 – 5x + 1 = 0. The positive root lies in the interval (0, 1).
(i) xk+1 = (xk^3 + 1)/5, k = 0, 1, 2, ...    (1.19)
(ii) xk+1 = (5xk – 1)^(1/3), k = 0, 1, 2, ...    (1.20)
(iii) xk+1 = sqrt[(5xk – 1)/xk], k = 0, 1, 2, ...    (1.21)
Setting k = k – 1, we get εk = φ′(tk–1) εk–1, xk–1 < tk–1 < α.
Hence,    εk+1 = φ′(tk) φ′(tk–1) εk–1.
Using (1.24) recursively, we get
    εk+1 = φ′(tk) φ′(tk–1) ... φ′(t0) ε0.
90
1 for all x in
91
Hence,
5 5 5
92
0 x 1.
5
(ii) We have φ(x) = (5x – 1)^(1/3), φ′(x) = 5/[3(5x – 1)^(2/3)]. Now | φ′(x) | < 1 when x is close to 1, and | φ′(x) | > 1 in the other part of the interval. Convergence is not guaranteed.
(iii) We have φ(x) = sqrt[(5x – 1)/x], φ′(x) = 1/[2x^2 sqrt((5x – 1)/x)]. Again, | φ′(x) | < 1 only when x is close to 1, and convergence is not guaranteed over the whole interval.
We obtain φ′(x) = 1/[3(x + 10)^(2/3)].
We find | φ′(x) | < 1 for all x in the interval (2, 3). Hence, the iteration converges. Let x0 = 2.5. We obtain the following results.
x1 = (12.5)^(1/3) = 2.3208,  x2 = (12.3208)^(1/3) = 2.3097,
Example 1.11 Find the smallest negative root in magnitude of the equation
    xk+1 = – 4/(3xk^3 + xk^2 + 12) = φ(xk).

We obtain    φ′(x) = 4(9x^2 + 2x)/(3x^3 + x^2 + 12)^2.

We find | φ′(x) | < 1 for all x in the interval (– 1, 0). Hence, the iteration converges.
Here φ(x) = x – 0.5 f(x), and we require
    | φ′(x) | = | 1 – 0.5 f′(x) | = | 1 – 0.5(9x^2 + 8x + 4) | < 1
for all x in (– 1, 0). This condition is also to be satisfied at the initial approximation. Setting x0 = – 0.5, we get
    | φ′(x0) | = | 1 – 0.5(9/4) | = 1/8 < 1.
The last iteration gives
    x = – 0.5[3(– 0.332723)^3 + 4(– 0.332723)^2 + 2(– 0.332723) + 1] = – 0.333435.
the largest positive real number p for which there exists a finite constant C ≠ 0, such that

    | εk+1 | ≤ C | εk |^p.    (1.28)

The constant C, which is independent of k, is called the asymptotic error constant and it depends on the derivatives of f(x) at x = α.
Let us now obtain the orders of the methods that were derived earlier.
Method of false position We have noted earlier (see Remark 4) that
if the root lies initially in the interval (x0, x1), then one of the end points
is fixed for all iterations. If the left end point x0 is fixed and the right
end point moves towards the required root, the method behaves like
(see Fig.1.2a)
    xk+1 = (x0 fk – xk f0)/(fk – f0).
Substituting xk = α + εk, x0 = α + ε0, and expanding, we obtain the error equation

    εk+1 = C ε0 εk,  where  C = f″(α)/(2 f′(α)).

Since ε0 is finite and fixed, the method has order p = 1, that is, linear convergence.
or    εk+1 = εk φ′(α) + O(εk^2),

which shows that the general iteration method is of first order. For the Newton-Raphson method, expanding f(α + εk) and f′(α + εk) in Taylor series, we obtain

    εk+1 = εk – [εk f′(α) + (εk^2/2) f″(α) + ...] / [f′(α) + εk f″(α) + ...]

         = εk – εk [1 + (εk/2)(f″(α)/f′(α)) + ...] [1 + εk (f″(α)/f′(α)) + ...]^(–1)

         = εk – εk [1 + (εk/2)(f″(α)/f′(α)) + ...] [1 – εk (f″(α)/f′(α)) + ...]

         = (εk^2/2)(f″(α)/f′(α)) + ...

Therefore,    εk+1 = C εk^2,  where  C = f″(α)/(2 f′(α)).    (1.31)
We observe from (1.31) that, if in some iteration the error is about 0.1, then in the next iteration the error behaves like C(0.1)^2 = C(10^–2). That is, we may possibly get an accuracy of two decimal places. Because of the quadratic convergence of the method, we may possibly get an accuracy of four decimal places in the next iteration. However, it also depends on the value of C. From this discussion, we conclude that both the fixed point iteration and regula falsi methods converge slowly as they have only a linear rate of convergence. Further, Newton’s method converges at least twice as fast as the fixed point iteration and regula falsi methods.
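The order of convergence can also be estimated numerically from three successive errors, since p ≈ log | εk+1/εk | / log | εk/εk–1 |. The sketch below is our own illustration; it does this for Newton's method on f(x) = x^3 – 5x + 1 and should print values close to 2.

import math

def newton_iterates(f, df, x0, n):
    """Return the first n Newton iterates starting from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
    return xs

f = lambda x: x**3 - 5*x + 1
df = lambda x: 3*x**2 - 5
xs = newton_iterates(f, df, 0.5, 5)
alpha = xs[-1]                              # treat the last iterate as the root
errs = [abs(x - alpha) for x in xs[:-1]]
for e0, e1, e2 in zip(errs, errs[1:], errs[2:]):
    if e0 > 0 and e1 > 0 and e2 > 0:
        print(math.log(e2 / e1) / math.log(e1 / e0))   # estimated order p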
Remark 13 When does the Newton-Raphson method fail?
⦁ The method may fail when the initial approximation x0 is far away from the exact root (see Example 1.6). However, if the root lies in a small interval (a, b) and x0 ∈ (a, b), then the method converges.
⦁ From Eq.(1.31), we note that if f′(α) → 0 and f″(x) is finite, then C → ∞ and the method may fail. That is, in this case, the graph of y = f(x) is almost parallel to the x-axis at the root α.
Remark 14 Let us have a re-look at the error equation. We have defined the error of approximation at the kth iterate as εk = xk – α, k = 0, 1, 2, ... From xk+1 = φ(xk), k = 0, 1, 2, ... and α = φ(α), we obtain (see Eq.(1.24))

    xk+1 – α = φ(xk) – φ(α) = φ(α + εk) – φ(α)
             = [φ(α) + εk φ′(α) + (1/2) εk^2 φ″(α) + ...] – φ(α)

or    εk+1 = εk φ′(α) + (1/2) εk^2 φ″(α) + ...

For the Newton-Raphson method

    xk+1 = xk – f(xk)/f′(xk),  we have  φ(x) = x – f(x)/f′(x).

Then    φ′(x) = f(x) f″(x)/[f′(x)]^2,  and  φ′(α) = f(α) f″(α)/[f′(α)]^2 = 0,

since f(α) = 0 and f′(α) ≠ 0 (α is a simple root).
When xk → α, f(xk) → 0, we have | φ′(xk) | < 1, k = 1, 2, ..., and εk → 0 as k → ∞.
Now,    φ″(x) = (1/[f′(x)]^3) [f′(x) {f′(x) f″(x) + f(x) f‴(x)} – 2 f(x) {f″(x)}^2],

and    φ″(α) = f″(α)/f′(α) ≠ 0.
⦁ Define a (i) root, (ii) simple root and (iii) multiple root of an algebraic
equation f(x) = 0.
Solution
⦁ A number α, such that f(α) ≡ 0, is called a root of f(x) = 0.
⦁ Let α be a root of f(x) = 0. If f(α) ≡ 0 and f′(α) ≠ 0, then α is said to be a simple root. Then, we can write f(x) as
    f(x) = (x – α) g(x),  g(α) ≠ 0.
⦁ Let α be a root of f(x) = 0. If
    f(α) = 0, f′(α) = 0, ..., f^(m–1)(α) = 0, and f^(m)(α) ≠ 0,
then α is said to be a multiple root of multiplicity m. Then, we can write f(x) as
    f(x) = (x – α)^m g(x),  g(α) ≠ 0.
⦁ State the intermediate value theorem.
Solution If f(x) is continuous on some interval [a, b] and f (a)f (b) < 0, then
the equation
f(x) = 0 has at least one real root or an odd number of real roots in the
interval (a, b).
⦁ How can we find an initial approximation to the root of f (x) = 0 ?
Solution Using intermediate value theorem, we find an
interval (a, b) which contains the root of the equation f (x) =
0. This implies that f (a)f(b) < 0. Any point in this interval
(including the end points) can be taken as an initial
approximation to the root of f(x) = 0.
⦁ What is the Descartes’ rule of signs?
Solution Let f (x) = 0 be a polynomial equation Pn(x) = 0. We
count the number of changes of signs in the coefficients of f (x)
= Pn(x) = 0. The number of positive roots cannot exceed the
number of changes of signs in the coefficients of Pn(x). Now, we
write the equation f(– x) = Pn(– x) = 0, and count the number of
changes of signs in the coefficients of Pn(– x). The number of
negative roots cannot exceed the number of changes of signs in
the coefficients of this equation.
⦁ Define convergence of an iterative method.
Solution Using any iteration method, we obtain a sequence of iterates (approximations to the root of f(x) = 0), x1, x2, ..., xk, ... If

    lim (k → ∞) xk = α,  or  lim (k → ∞) | xk – α | = 0,

then the iteration method is said to be convergent.
Solution Let a root of f(x) = 0 lie in the interval (a, b). Let x0 be an initial approximation to the root. We write f(x) = 0 in an equivalent form as x = φ(x), and define the fixed point iteration method as xk+1 = φ(xk), k = 0, 1, 2, ... Starting with x0, we obtain a sequence of approximations x1, x2, ..., xk, ... such that in the limit as k → ∞, xk → α. The method converges when | φ′(x) | < 1 for all x in the interval (a, b). We normally check this condition at x0.
⦁ Write the method of false position to obtain a root of f(x) = 0. What is the computational cost of the method?
Solution Let a root of f(x) = 0 lie in the interval (a, b). Let x0, x1 be two initial approximations to the root in this interval. The method of false position is defined by

    xk+1 = (xk–1 fk – xk fk–1)/(fk – fk–1), k = 1, 2, ...

In Fig.1.2a, the left end point x0 is fixed and the right end point moves towards the required root. Therefore, in this case, in actual computations, the method behaves like

    xk+1 = (x0 fk – xk f0)/(fk – f0).

In Fig.1.2b, the right end point x1 is fixed and the left end point moves towards the required root. Therefore, in this case, in actual computations, the method behaves like

    xk+1 = (xk f1 – x1 fk)/(f1 – fk).

The method requires one evaluation of the function f(x) per iteration.
    xk+1 = xk – f(xk)/f′(xk),  f′(xk) ≠ 0, k = 0, 1, 2, ...,

where N is a real number.    (A.U. Nov./Dec. 2006, A.U. Nov./Dec. 2003)
⦁ Find the smallest negative root in magnitude of 3x^3 – x + 1 = 0, correct to four decimal places.
⦁ Find the smallest positive root of x = e^(–x), correct to two decimal places.
⦁ Find the real root of the equation cos x = 3x – 1.    (A.U. Nov./Dec. 2006)
⦁ The equation x^2 + ax + b = 0 has two real roots α and β. Show that the iteration method
(i) xk+1 = – (a xk + b)/xk is convergent near x = α, if | α | > | β |,
(ii) xk+1 = – b/(xk + a) is convergent near x = α, if | α | < | β |.
⦁ Introduction
Consider a system of n linear algebraic equations in n unknowns
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
... ... ... ...
an1 x1 + an2 x2 + ... + ann xn = bn
where aij, i = 1, 2, ..., n, j = 1, 2, ..., n, are the known coefficients, bi, i = 1, 2, ..., n, are the known right hand side values, and xi, i = 1, 2, ..., n, are the unknowns to be determined.
In matrix notation we write the system as
Ax = b (1.33)
where

A = [ a11  a12  ...  a1n
      a21  a22  ...  a2n
      ..................
      an1  an2  ...  ann ],    x = [ x1, x2, ..., xn ]^T,    b = [ b1, b2, ..., bn ]^T.
[A | b] = [ a11  a12  ...  a1n | b1
            a21  a22  ...  a2n | b2
            ........................
            an1  an2  ...  ann | bn ]
⦁ The system of equations (1.33) is inconsistent (has no solution) if
    rank (A) ≠ rank [A | b].
We assume that the given system is consistent.
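This rank test is easy to carry out with NumPy. The sketch below is our own illustration, not from the text; it checks the system that is solved by Gauss elimination later in this chapter.

import numpy as np

A = np.array([[1.0, 10.0, -1.0],
              [2.0, 3.0, 20.0],
              [10.0, -1.0, 2.0]])
b = np.array([3.0, 7.0, 4.0])

aug = np.column_stack([A, b])                    # the augmented matrix [A | b]
if np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug):
    print("rank(A) = rank([A | b]): the system is consistent")
else:
    print("rank(A) != rank([A | b]): the system is inconsistent")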
The methods of solution of the linear algebraic system of equations (1.33) may be classified as direct and iterative methods.
⦁ Direct methods produce the exact solution after a finite number of steps (disregarding the round-off errors). In these methods, we can determine the total number of operations (additions, subtractions, divisions and multiplications). This number is called the operational count of the method.
⦁ Iterative methods are based on the idea of successive approximations. We start with an initial approximation to the solution vector x = x0, and obtain a sequence of approximate vectors x0, x1, ..., xk, ..., which in the limit as k → ∞, converges to the exact solution vector x. Now, we derive some direct methods.
⦁ Direct Methods
If the system of equations has some special forms, then the solution is obtained directly. We consider two such special forms.
⦁ Let A be a diagonal matrix, A = D. That is, we consider the system of equations
Dx = b as
a11x1 = b1
a22x2 = b2
... ... ... ... (1.34)
ann xn = bn
This system is called a diagonal system of equations. Solving directly, we obtain
    xi = bi/aii,  aii ≠ 0, i = 1, 2, ..., n.    (1.35)
Similarly, if A is an upper triangular matrix, the system is solved by back substitution; the final step gives

    x1 = [ b1 – Σ (j = 2 to n) a1j xj ] / a11.    (1.37)
The Gauss elimination method reduces the augmented matrix [A | b] to the form [U | z], where U is an upper triangular matrix. The augmented matrix of the system (1.38) is

    [ a11  a12  a13 | b1
      a21  a22  a23 | b2
      a31  a32  a33 | b3 ]    (1.39)
First stage of elimination
    [ a11   a12       a13      | b1
      0     a22^(1)   a23^(1)  | b2^(1)
      0     a32^(1)   a33^(1)  | b3^(1) ]    (1.40)
where

a22^(1) = a22 – (a21/a11) a12,   a23^(1) = a23 – (a21/a11) a13,   b2^(1) = b2 – (a21/a11) b1,
a32^(1) = a32 – (a31/a11) a12,   a33^(1) = a33 – (a31/a11) a13,   b3^(1) = b3 – (a31/a11) b1.
In the second stage of elimination, we multiply the second row in (1.40) by a32^(1)/a22^(1) and subtract it from the third row. That is, we are performing the elementary row operation R3 – (a32^(1)/a22^(1)) R2. We obtain the new augmented matrix as
    [ a11   a12       a13      | b1
      0     a22^(1)   a23^(1)  | b2^(1)
      0     0         a33^(2)  | b3^(2) ]    (1.41)

where    a33^(2) = a33^(1) – (a32^(1)/a22^(1)) a23^(1),    b3^(2) = b3^(1) – (a32^(1)/a22^(1)) b2^(1).
The element a33^(2) ≠ 0 is called the third pivot. This system is in the required upper triangular form [U | z]. The solution vector x is now obtained by back substitution.
From the last row, we get x3 = b3^(2)/a33^(2).
From the second row, we get x2 = (b2^(1) – a23^(1) x3)/a22^(1).
From the first row, we get x1 = (b1 – a12 x2 – a13 x3)/a11.
In general, using a pivot, all the elements below that pivot in that column are made
zeros.
Alternately, at each stage of elimination, we may also make the
pivot as 1, by dividing that particular row by the pivot.
Remark 15 When does the Gauss elimination method as described
above fail? It fails when any one of the pivots is zero or it is a very
small number, as the elimination progresses. If a pivot is zero, then
division by it gives an overflow error, since division by zero is not defined.
If a pivot is a very small number, then division by it introduces large
round-off errors and the solution may contain large errors.
For example, we may have the system
2x2 + 5x3 = 7
7x1 + x2 – 2x3 = 6
2x1 + 3x2 + 8x3 = 13
in which the first pivot is zero.
Pivoting Procedures How do we avoid computational errors in Gauss
elimination? To avoid computational errors, we follow the procedure
of partial pivoting. In the first stage of elimination, the first column of
the augmented matrix is searched for the largest element in
magnitude and brought as the first pivot by interchanging the first
row of the augmented matrix (first equation) with the row
(equation) having the largest element in magnitude. In the second
stage of elimination, the second column is searched for the largest
element in magnitude among the n – 1 elements leaving the first
element, and this element is brought as the second pivot by
interchanging the second row of the augmented matrix with the later
row having the largest element in magnitude. This procedure is
continued until the upper triangular system is obtained. Therefore,
partial pivoting is done after every stage of elimination. There is
another procedure called complete pivoting. In this procedure, we
search the entire matrix A in the augmented matrix for the largest
element in magnitude and bring it as the first pivot.
202
subtractions, divisions and multiplications.Without going into details,
we mention that the total number of divisions and multiplications
(division and multiplication take the same amount of computer time)
is n (n2 + 3n – 1)/3. The total number of additions and subtractions
(addition and subtraction take the same amount ofcomputer time) is n
(n – 1)(2n + 5)/6.
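A compact Python sketch of Gauss elimination with partial pivoting and back substitution is given below. It is our own illustration (the function name, the singularity check and the NumPy usage are not from the text), applied to the system of the next example.

import numpy as np

def gauss_elimination(A, b):
    """Solve Ax = b by Gauss elimination with partial pivoting and back substitution."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        # partial pivoting: bring the largest magnitude element of column k to the pivot row
        p = k + int(np.argmax(np.abs(A[k:, k])))
        if A[p, k] == 0.0:
            raise ValueError("matrix is singular")
        A[[k, p]] = A[[p, k]]
        b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = [[1, 10, -1], [2, 3, 20], [10, -1, 2]]
b = [3, 7, 4]
print(gauss_elimination(A, b))   # approximately [0.37512, 0.28940, 0.26908]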
x1 + 10x2 – x3 = 3
2x1 + 3x2 + 20x3 = 7
10x1 – x2 + 2x3 = 4
using the Gauss elimination with partial pivoting.
Solution We have the augmented matrix as
    [  1    10   – 1 |  3
       2     3    20 |  7
      10   – 1     2 |  4 ]
R1 ↔ R3 :

    [ 10   – 1     2  |  4
       2     3    20  |  7
       1    10   – 1  |  3 ].    R2 – (R1/5), R3 – (R1/10) :

    [ 10   – 1      2    |  4
       0    3.2   19.6   |  6.2
       0   10.1   – 1.2  |  2.6 ].    R2 ↔ R3 :

    [ 10   – 1      2    |  4
       0   10.1   – 1.2  |  2.6
       0    3.2   19.6   |  6.2 ].    R3 – (3.2/10.1)R2 :

    [ 10   – 1      2         |  4
       0   10.1   – 1.2       |  2.6
       0    0     19.98020    |  5.37624 ].

Back substitution gives the solution.
Third equation gives    x3 = 5.37624/19.98020 = 0.26908.
Second equation gives   x2 = (1/10.1)(2.6 + 1.2x3) = (1/10.1)[2.6 + 1.2(0.26908)] = 0.28940.
First equation gives    x1 = (1/10)(4 + x2 – 2x3) = (1/10)[4 + 0.2894 – 2(0.26908)] = 0.37512.
    [ 2  1  1  2 | 10
      4  0  2  1 |  8
      3  2  2  0 |  7
      1  3  2  1 |  5 ].

We perform the following elementary row transformations and do the eliminations.
R1 ↔ R2. Then R2 – (1/2)R1, R3 – (3/4)R1, R4 – (1/4)R1, followed by R2 ↔ R4, elimination in the second column, and finally R4 – R3, which bring the system to upper triangular form. Back substitution then gives
    x4 = 8,
    x3 = – 2[17/3 – (1/12) x4] = – 2[17/3 – (1/12)(8)] = – 10,
    x2 = 6,
    x1 = (1/4)[8 – 2x3 – x4] = (1/4)[8 – 2(– 10) – 8] = 5.
    [ 3  3  4 | 20
      2  1  3 | 13
      1  1  3 |  6 ].    R1/3 :

    [ 1  1  4/3 | 20/3
      2  1   3  | 13
      1  1   3  |  6 ].    R2 – 2R1, R3 – R1 :

    [ 1    1    4/3  |  20/3
      0  – 1    1/3  | – 1/3
      0    0    5/3  | – 2/3 ].

Back substitution gives
    x3 = (– 2/3)/(5/3) = – 2/5,
    x2 = (1/3) x3 + 1/3 = (1/3)(– 2/5) + 1/3 = 1/5,
    x1 = 20/3 – x2 – (4/3) x3 = 20/3 – 1/5 – (4/3)(– 2/5) = 7.
x1 + 10x2 – x3 = 3
2x1 + 3x2 + 20x3 = 7
[I | X] = [ 1  0  0 | d1
            0  1  0 | d2
            0  0  1 | d3 ]    (1.42)
x1 + x2 + x3 = 1
4x1 + 3x2 – x3 = 6
3x1 + 5x2 + 3x3 = 4
using the Gauss-Jordan method (i) without partial pivoting, (ii) with partial
pivoting.
Solution We have the augmented matrix as
    [ 1  1    1 | 1
      4  3  – 1 | 6
      3  5    3 | 4 ]
R2 – 4R1, R3 – 3R1 :

    [ 1    1    1  |  1
      0  – 1  – 5  |  2
      0    2    0  |  1 ].    R1 + R2, R3 + 2R2 :

    [ 1    0    – 4   |  3
      0  – 1    – 5   |  2
      0    0   – 10   |  5 ].    R1 – (4/10)R3, R2 – (5/10)R3 :

    [ 1    0     0   |    1
      0  – 1     0   | – 1/2
      0    0  – 10   |    5 ].    R2/(– 1), R3/(– 10) :

    [ 1  0  0 |    1
      0  1  0 |   1/2
      0  0  1 | – 1/2 ].

Therefore, the solution is x1 = 1, x2 = 1/2, x3 = – 1/2.
R1 ↔ R2 :

    [ 4  3  – 1 | 6
      1  1    1 | 1
      3  5    3 | 4 ].    R1/4 :

    [ 1  3/4  – 1/4 | 3/2
      1   1     1   |  1
      3   5     3   |  4 ].    R2 – R1, R3 – 3R1 :

    [ 1   3/4   – 1/4  |   3/2
      0   1/4    5/4   | – 1/2
      0  11/4   15/4   | – 1/2 ].    R2 ↔ R3. Then, R2/(11/4) :

    [ 1   3/4   – 1/4  |   3/2
      0    1    15/11  | – 2/11
      0   1/4    5/4   |  – 1/2 ].    R1 – (3/4)R2, R3 – (1/4)R2 :

    [ 1   0   – 14/11  |  18/11
      0   1    15/11   | – 2/11
      0   0    10/11   | – 5/11 ].    R3/(10/11) :

    [ 1   0   – 14/11  |  18/11
      0   1    15/11   | – 2/11
      0   0      1     |  – 1/2 ].    R1 + (14/11)R3, R2 – (15/11)R3 :

    [ 1  0  0 |    1
      0  1  0 |   1/2
      0  0  1 | – 1/2 ].

Therefore, the solution is x1 = 1, x2 = 1/2, x3 = – 1/2.
since AA^(–1) = I.

Gauss-Jordan method:    [A | I] → [I | A^(–1)]

    [ 1  1    1
      4  3  – 1
      3  5    3 ]
using the Gauss-Jordan method (i) without partial pivoting, and (ii) with partial
pivoting.
Solution Consider the augmented matrix
rj1 1 1 1 0 0 yj
j4 3 1 0 1 0 . j
L3 5 3 0 0 1
⦁ We perform the following elementary row transformations and do the
eliminations.
⦁
We perform the following elementary row transformations and do the
eliminations.
R1 ↔ R2 :

    [ 4  3  – 1 | 0  1  0
      1  1    1 | 1  0  0
      3  5    3 | 0  0  1 ].    R1/4 :

    [ 1  3/4  – 1/4 | 0  1/4  0
      1   1     1   | 1   0   0
      3   5     3   | 0   0   1 ].    R2 – R1, R3 – 3R1 :

    [ 1   3/4   – 1/4  | 0    1/4   0
      0   1/4    5/4   | 1  – 1/4   0
      0  11/4   15/4   | 0  – 3/4   1 ].    R2 ↔ R3. Then, R2/(11/4) :

    [ 1   3/4   – 1/4  | 0    1/4    0
      0    1    15/11  | 0  – 3/11  4/11
      0   1/4    5/4   | 1  – 1/4    0 ].    R1 – (3/4)R2, R3 – (1/4)R2 :

    [ 1   0   – 14/11  | 0    5/11  – 3/11
      0   1    15/11   | 0  – 3/11   4/11
      0   0    10/11   | 1  – 2/11  – 1/11 ].    R3/(10/11) :

    [ 1   0   – 14/11  |   0      5/11  – 3/11
      0   1    15/11   |   0    – 3/11   4/11
      0   0      1     | 11/10   – 1/5  – 1/10 ].    R1 + (14/11)R3, R2 – (15/11)R3 :

    [ 1  0  0 |  7/5    1/5   – 2/5
      0  1  0 | – 3/2    0      1/2
      0  0  1 | 11/10  – 1/5  – 1/10 ].

Therefore,

    A^(–1) = [  7/5    1/5   – 2/5
              – 3/2     0      1/2
               11/10  – 1/5  – 1/10 ].
    [ 2  2  3
      2  1  1
      1  3  5 ]    (A.U. Apr./May 2004)

Consider the augmented matrix

    [ 2  2  3 | 1  0  0
      2  1  1 | 0  1  0
      1  3  5 | 0  0  1 ].
R1/2 :

    [ 1  1  3/2 | 1/2  0  0
      2  1   1  |  0   1  0
      1  3   5  |  0   0  1 ].    R2 – 2R1, R3 – R1 :

    [ 1    1    3/2  |   1/2   0  0
      0  – 1   – 2   |  – 1    1  0
      0    2    7/2  | – 1/2   0  1 ].    R2 ↔ R3. Then, R2/2 :

    [ 1    1    3/2  |   1/2   0    0
      0    1    7/4  | – 1/4   0   1/2
      0  – 1   – 2   |  – 1    1    0 ].    R1 – R2, R3 + R2 :

    [ 1   0   – 1/4  |   3/4   0  – 1/2
      0   1    7/4   | – 1/4   0    1/2
      0   0   – 1/4  | – 5/4   1    1/2 ].    R3/(– 1/4) :

    [ 1   0   – 1/4  |   3/4    0   – 1/2
      0   1    7/4   | – 1/4    0     1/2
      0   0     1    |    5   – 4    – 2 ].    R1 + (1/4)R3, R2 – (7/4)R3 :

    [ 1  0  0 |   2  – 1  – 1
      0  1  0 | – 9    7    4
      0  0  1 |   5  – 4  – 2 ].
Therefore, the inverse of the given matrix is given by

    A^(–1) = [   2  – 1  – 1
               – 9    7    4
                 5  – 4  – 2 ].
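The reduction [A | I] → [I | A^(–1)] can also be scripted. The sketch below is our own illustration (the function name and the pivoting details are not from the text); it reproduces the inverse obtained in this example.

import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by reducing [A | I] to [I | A^(-1)] using Gauss-Jordan elimination."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])
    for k in range(n):
        p = k + int(np.argmax(np.abs(aug[k:, k])))   # partial pivoting
        if aug[p, k] == 0.0:
            raise ValueError("matrix is singular")
        aug[[k, p]] = aug[[p, k]]
        aug[k] = aug[k] / aug[k, k]                  # make the pivot equal to 1
        for i in range(n):
            if i != k:
                aug[i] = aug[i] - aug[i, k] * aug[k] # eliminate column k in the other rows
    return aug[:, n:]

A = [[2, 2, 3], [2, 1, 1], [1, 3, 5]]
print(gauss_jordan_inverse(A))
# [[ 2. -1. -1.]
#  [-9.  7.  4.]
#  [ 5. -4. -2.]]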
The number of linearly independent rows/columns of a matrix defines the row-rank/column-rank of that matrix. We note that row-rank = column-rank = rank.
⦁ Define consistency and inconsistency of a system of linear algebraic equations Ax = b.
Solution Let the augmented matrix of the system be [A | b].