Numerical Solutions of Algebraic Equations: Direct Methods
Table of Contents
1. Learning outcomes:
2. Introduction:
Consider a system of $n$ linear equations in $n$ unknowns:
$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$$
$$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2$$
$$\vdots$$
$$a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n$$
where $a_{ij}$ $(i = 1, 2, \ldots, n;\ j = 1, 2, \ldots, n)$ are the known coefficients, $b_i$ $(i = 1, 2, \ldots, n)$ are the known values and $x_i$ $(i = 1, 2, \ldots, n)$ are the unknowns to be determined.
In matrix form the system can be written as
$$AX = b$$
where
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \quad X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}.$$
There are two types of numerical methods to solve the above system of equations:
(I) Direct Methods: In direct methods, such as the Gauss elimination method, the amount of computation needed to obtain the solution can be specified in advance.
The necessary and sufficient condition for the system $AX = b$ to have a solution is that the rank of the coefficient matrix $A$ is the same as the rank of the augmented matrix $[A \mid b]$.
$$AX = b \qquad (1)$$
Premultiplying both sides of (1) by $A^{-1}$, we get
$$A^{-1}AX = A^{-1}b,$$
$$X = A^{-1}b,$$
where
$$A^{-1} = \frac{1}{\det A}\,\operatorname{Adj} A,$$
so that
$$X = \frac{\operatorname{Adj} A}{\det A}\, b.$$
Cramer's Rule: For the system $AX = b$ defined above, the solution is given by
$$x_j = \frac{\det A_j}{\det A}, \qquad j = 1, 2, \ldots, n,$$
where $A_j$ is the matrix obtained from $A$ by replacing its $j$th column by the vector $b$.
Value Addition
Cramer's Rule is feasible only when n = 2, 3 or 4.
Example: Solve the following system of equations by the matrix inversion method:
$$3x + y + 2z = 3$$
$$2x - 3y - z = -3$$
$$x + 2y + z = 4$$
Solution: The system can be written as $AX = b$, where
$$A = \begin{pmatrix} 3 & 1 & 2 \\ 2 & -3 & -1 \\ 1 & 2 & 1 \end{pmatrix}, \quad X = \begin{pmatrix} x \\ y \\ z \end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix} 3 \\ -3 \\ 4 \end{pmatrix}.$$
Now
$$\det A = \begin{vmatrix} 3 & 1 & 2 \\ 2 & -3 & -1 \\ 1 & 2 & 1 \end{vmatrix} = 8$$
and
$$\operatorname{adj} A = \begin{pmatrix} -1 & 3 & 5 \\ -3 & 1 & 7 \\ 7 & -5 & -11 \end{pmatrix},$$
so that
$$A^{-1} = \frac{1}{\det A}\operatorname{adj} A = \frac{1}{8}\begin{pmatrix} -1 & 3 & 5 \\ -3 & 1 & 7 \\ 7 & -5 & -11 \end{pmatrix}.$$
Therefore
$$X = A^{-1}b = \frac{1}{8}\begin{pmatrix} -1 & 3 & 5 \\ -3 & 1 & 7 \\ 7 & -5 & -11 \end{pmatrix}\begin{pmatrix} 3 \\ -3 \\ 4 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}.$$
Thus,
$$x = 1,\ y = 2 \ \text{and}\ z = -1.$$
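The adjugate-based inversion used above translates directly into code. The following is a minimal sketch (the helper names `minor` and `adjugate` are ours, not from the text), checked against this example:

```python
import numpy as np

def minor(A, i, j):
    """Submatrix of A with row i and column j deleted."""
    return np.delete(np.delete(A, i, axis=0), j, axis=1)

def adjugate(A):
    """Transpose of the cofactor matrix of A."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor(A, i, j))
    return C.T

A = np.array([[3.0, 1.0, 2.0], [2.0, -3.0, -1.0], [1.0, 2.0, 1.0]])
b = np.array([3.0, -3.0, 4.0])
x = adjugate(A) @ b / np.linalg.det(A)   # → [1, 2, -1]
```

For large systems this is far more expensive than elimination, which is why the method is used only for small $n$.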
Example: Solve the following system of equations by Cramer's rule:
$$x - 2y + z = 2$$
$$3x - 6y - z = 1$$
$$3x - 3y - 2z = 3$$
Solution: The system can be written as $AX = b$ (1), where
$$A = \begin{pmatrix} 1 & -2 & 1 \\ 3 & -6 & -1 \\ 3 & -3 & -2 \end{pmatrix}, \quad X = \begin{pmatrix} x \\ y \\ z \end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix} 2 \\ 1 \\ 3 \end{pmatrix}.$$
Here
$$\det A = |A| = \begin{vmatrix} 1 & -2 & 1 \\ 3 & -6 & -1 \\ 3 & -3 & -2 \end{vmatrix} = 12.$$
Also
$$A_1 = \begin{pmatrix} 2 & -2 & 1 \\ 1 & -6 & -1 \\ 3 & -3 & -2 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 1 & 2 & 1 \\ 3 & 1 & -1 \\ 3 & 3 & -2 \end{pmatrix} \quad \text{and} \quad A_3 = \begin{pmatrix} 1 & -2 & 2 \\ 3 & -6 & 1 \\ 3 & -3 & 3 \end{pmatrix},$$
with
$$|A_1| = 35, \quad |A_2| = 13 \quad \text{and} \quad |A_3| = 15.$$
By Cramer's rule,
$$x = \frac{|A_1|}{|A|} = \frac{35}{12}, \quad y = \frac{|A_2|}{|A|} = \frac{13}{12} \quad \text{and} \quad z = \frac{|A_3|}{|A|} = \frac{15}{12}.$$
Thus,
$$x = \frac{35}{12}, \quad y = \frac{13}{12} \quad \text{and} \quad z = \frac{5}{4}.$$
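Cramer's rule is easy to code: replace one column of $A$ at a time by $b$ and take determinant ratios. A minimal sketch (the function name `cramer_solve` is ours), verified on this example:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: x_j = det(A_j) / det(A),
    where A_j is A with column j replaced by b."""
    det_A = np.linalg.det(A)
    n = len(b)
    x = np.empty(n)
    for j in range(n):
        Aj = A.copy()
        Aj[:, j] = b          # replace the j-th column by b
        x[j] = np.linalg.det(Aj) / det_A
    return x

A = np.array([[1.0, -2.0, 1.0], [3.0, -6.0, -1.0], [3.0, -3.0, -2.0]])
b = np.array([2.0, 1.0, 3.0])
x = cramer_solve(A, b)   # → [35/12, 13/12, 5/4]
```

Each determinant costs on the order of $n^3$ operations, so this approach is only sensible for very small systems, as the Value Addition above notes.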
In the LU decomposition method the coefficient matrix is factorized as
$$A = LU \qquad (1)$$
where $L$ is a lower triangular matrix and $U$ is an upper triangular matrix. Using the matrix multiplication rule to multiply the matrices $L$ and $U$ and comparing the elements of the resulting matrix with those of $A$, we obtain $n^2$ equations in the $n^2 + n$ unknown entries of $L$ and $U$; $n$ of the entries are therefore fixed in advance, commonly by taking the diagonal elements of $L$ (or of $U$) equal to 1. Now the system
$$AX = b \qquad (2)$$
becomes
$$LUX = b. \qquad (3)$$
Setting $UX = y$, we get
$$Ly = b. \qquad (4)$$
On solving equation (4) by forward substitution, we find the vector $y$; we then solve the system of equations
$$UX = y$$
by back substitution to obtain $x_1, x_2, \ldots, x_n$. Equivalently, from $UX = y$ and $Ly = b$ we have
$$y = L^{-1}b \quad \text{and} \quad x = U^{-1}y,$$
so that $A^{-1} = U^{-1}L^{-1}$.
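The factor-then-substitute procedure can be sketched as follows. This is a minimal illustration, assuming the Doolittle normalization (unit diagonal in $L$) and that no zero pivot arises; the function names are ours:

```python
import numpy as np

def lu_decompose(A):
    """Doolittle factorization A = L U with unit diagonal in L.
    No pivoting, so every leading principal minor of A must be nonzero."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):                      # row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):                  # column i of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

def lu_solve(L, U, b):
    """Forward substitution for L y = b, then back substitution for U x = y."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[2.0, 3.0, 1.0], [1.0, 2.0, 3.0], [3.0, 1.0, 2.0]])
b = np.array([9.0, 6.0, 8.0])
L, U = lu_decompose(A)
x = lu_solve(L, U, b)   # → [35/18, 29/18, 5/18]
```

Once $L$ and $U$ are known, solving the system for a new right-hand side $b$ costs only the two triangular substitutions.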
Example: Solve the following system of equations by the LU decomposition method:
$$2x + 3y + z = 9$$
$$x + 2y + 3z = 6$$
$$3x + y + 2z = 8$$
Solution: The system can be written as $AX = b$ (1), where
$$A = \begin{pmatrix} 2 & 3 & 1 \\ 1 & 2 & 3 \\ 3 & 1 & 2 \end{pmatrix}, \quad X = \begin{pmatrix} x \\ y \\ z \end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix} 9 \\ 6 \\ 8 \end{pmatrix}.$$
Now let
$$A = LU. \qquad (2)$$
Comparing the elements of $LU$ with those of $A$, we obtain
$$L = \begin{pmatrix} 1 & 0 & 0 \\ \tfrac12 & 1 & 0 \\ \tfrac32 & -7 & 1 \end{pmatrix} \quad \text{and} \quad U = \begin{pmatrix} 2 & 3 & 1 \\ 0 & \tfrac12 & \tfrac52 \\ 0 & 0 & 18 \end{pmatrix}.$$
Then $LUX = b$. Let
$$UX = Y, \qquad (3)$$
where $Y = (y_1, y_2, y_3)^T$. Then $LY = b$:
$$\begin{pmatrix} 1 & 0 & 0 \\ \tfrac12 & 1 & 0 \\ \tfrac32 & -7 & 1 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} 9 \\ 6 \\ 8 \end{pmatrix},$$
i.e.
$$y_1 = 9$$
$$\tfrac12 y_1 + y_2 = 6$$
$$\tfrac32 y_1 - 7y_2 + y_3 = 8.$$
By forward substitution,
$$y_1 = 9, \quad y_2 = \tfrac32 \quad \text{and} \quad y_3 = 5.$$
Now $UX = Y$:
$$\begin{pmatrix} 2 & 3 & 1 \\ 0 & \tfrac12 & \tfrac52 \\ 0 & 0 & 18 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 9 \\ \tfrac32 \\ 5 \end{pmatrix},$$
i.e.
$$2x + 3y + z = 9$$
$$\tfrac12 y + \tfrac52 z = \tfrac32$$
$$18z = 5.$$
By back substitution,
$$x = \frac{35}{18}, \quad y = \frac{29}{18} \quad \text{and} \quad z = \frac{5}{18}.$$
Thus,
$$x = \frac{35}{18}, \quad y = \frac{29}{18} \quad \text{and} \quad z = \frac{5}{18}.$$
Example: Solve the following system of equations by the LU decomposition method:
$$x + y + z = 1$$
$$4x + 3y - z = 6$$
$$3x + 5y + 3z = 4$$
Solution: The system can be written as $AX = b$ (1), where
$$A = \begin{pmatrix} 1 & 1 & 1 \\ 4 & 3 & -1 \\ 3 & 5 & 3 \end{pmatrix}, \quad X = \begin{pmatrix} x \\ y \\ z \end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix} 1 \\ 6 \\ 4 \end{pmatrix}.$$
Now let
$$A = LU. \qquad (2)$$
Comparing the elements of $LU$ with those of $A$ (here the last diagonal entry of $U$ is taken to be 1), we obtain
$$L = \begin{pmatrix} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 3 & -2 & -10 \end{pmatrix} \quad \text{and} \quad U = \begin{pmatrix} 1 & 1 & 1 \\ 0 & -1 & -5 \\ 0 & 0 & 1 \end{pmatrix}.$$
Then $LUX = b$. Let
$$UX = Y, \qquad (3)$$
where $Y = (y_1, y_2, y_3)^T$. Then $LY = b$:
$$y_1 = 1$$
$$4y_1 + y_2 = 6$$
$$3y_1 - 2y_2 - 10y_3 = 4.$$
By forward substitution,
$$y_1 = 1, \quad y_2 = 2 \quad \text{and} \quad y_3 = -\tfrac12.$$
Now $UX = Y$:
$$x + y + z = 1$$
$$-y - 5z = 2$$
$$z = -\tfrac12.$$
By back substitution,
$$x = 1, \quad y = \tfrac12 \quad \text{and} \quad z = -\tfrac12.$$
Thus,
$$x = 1, \quad y = \tfrac12 \quad \text{and} \quad z = -\tfrac12.$$
This method is also known as the square root method. If the coefficient matrix $A$ is symmetric and positive definite, then the matrix $A$ can be decomposed as
$$A = LL^T$$
where $L = (l_{ij})$, with $l_{ij} = 0$ if $i < j$. Then the system
$$AX = b \qquad (1)$$
becomes
$$LL^T X = b. \qquad (2)$$
Setting $L^T X = Y$, we first solve
$$LY = b \qquad (3)$$
for $Y$ by forward substitution, and then solve
$$L^T X = Y \qquad (4)$$
for $X$ by back substitution.
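The element-by-element comparison that produces $L$ can be sketched as follows; this is a minimal illustration assuming $A$ is symmetric positive definite (the function name is ours):

```python
import math

def cholesky(A):
    """Cholesky (square root) factorization A = L L^T for a symmetric
    positive definite matrix A; returns the lower triangular factor L."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

A = [[1.0, 2.0, 3.0], [2.0, 8.0, 22.0], [3.0, 22.0, 82.0]]
L = cholesky(A)   # → [[1,0,0], [2,2,0], [3,8,3]]
```

Because only one triangular factor is computed and stored, the method needs about half the work and storage of a general LU factorization.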
Example: Solve the following system of equations by the Cholesky method:
$$x + 2y + 3z = 5$$
$$2x + 8y + 22z = 6$$
$$3x + 22y + 82z = -10$$
Solution: The system can be written as $AX = b$ (1), where
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 8 & 22 \\ 3 & 22 & 82 \end{pmatrix}, \quad X = \begin{pmatrix} x \\ y \\ z \end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix} 5 \\ 6 \\ -10 \end{pmatrix}.$$
Now let
$$A = LL^T, \qquad (2)$$
where
$$L = \begin{pmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{pmatrix}.$$
Comparing elements:
$$l_{11}^2 = 1 \Rightarrow l_{11} = 1,$$
$$l_{11}l_{21} = 2 \Rightarrow l_{21} = 2,$$
$$l_{11}l_{31} = 3 \Rightarrow l_{31} = 3,$$
$$l_{21}^2 + l_{22}^2 = 8 \Rightarrow l_{22} = 2,$$
$$l_{31}l_{21} + l_{32}l_{22} = 22 \Rightarrow l_{32} = 8,$$
$$l_{31}^2 + l_{32}^2 + l_{33}^2 = 82 \Rightarrow l_{33} = 3.$$
Thus, we have
$$L = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 2 & 0 \\ 3 & 8 & 3 \end{pmatrix}.$$
Then $LL^T X = b$. Let
$$L^T X = Y, \qquad (3)$$
where $Y = (y_1, y_2, y_3)^T$. Then $LY = b$:
$$y_1 = 5$$
$$2y_1 + 2y_2 = 6$$
$$3y_1 + 8y_2 + 3y_3 = -10,$$
so that
$$y_1 = 5, \quad y_2 = -2 \quad \text{and} \quad y_3 = -3.$$
Now $L^T X = Y$:
$$\begin{pmatrix} 1 & 2 & 3 \\ 0 & 2 & 8 \\ 0 & 0 & 3 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 5 \\ -2 \\ -3 \end{pmatrix},$$
i.e.
$$x + 2y + 3z = 5$$
$$2y + 8z = -2$$
$$3z = -3.$$
By back substitution,
$$x = 2, \quad y = 3 \quad \text{and} \quad z = -1.$$
Thus, $x = 2$, $y = 3$ and $z = -1$.
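The factor found above is easy to check against a library routine; `numpy.linalg.cholesky` returns the same lower triangular $L$, and two triangular solves reproduce the solution (a sketch, not part of the original text):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0], [2.0, 8.0, 22.0], [3.0, 22.0, 82.0]])
b = np.array([5.0, 6.0, -10.0])

L = np.linalg.cholesky(A)      # lower triangular factor, A = L @ L.T
y = np.linalg.solve(L, b)      # forward substitution: L y = b
x = np.linalg.solve(L.T, y)    # back substitution: L^T x = y
# L → [[1,0,0], [2,2,0], [3,8,3]] and x → [2, 3, -1]
```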
Example: Solve the following system of equations by the Cholesky method:
$$4x + 2y + 14z = 14$$
$$2x + 17y - 5z = -101$$
$$14x - 5y + 83z = 155$$
Solution: The system can be written as $AX = b$ (1), where
$$A = \begin{pmatrix} 4 & 2 & 14 \\ 2 & 17 & -5 \\ 14 & -5 & 83 \end{pmatrix}, \quad X = \begin{pmatrix} x \\ y \\ z \end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix} 14 \\ -101 \\ 155 \end{pmatrix}.$$
Now let
$$A = LL^T, \qquad (2)$$
where
$$L = \begin{pmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{pmatrix}.$$
Comparing elements:
$$l_{11}^2 = 4 \Rightarrow l_{11} = 2,$$
$$l_{11}l_{21} = 2 \Rightarrow l_{21} = 1,$$
$$l_{11}l_{31} = 14 \Rightarrow l_{31} = 7,$$
$$l_{21}^2 + l_{22}^2 = 17 \Rightarrow l_{22} = 4,$$
$$l_{31}l_{21} + l_{32}l_{22} = -5 \Rightarrow l_{32} = -3,$$
$$l_{31}^2 + l_{32}^2 + l_{33}^2 = 83 \Rightarrow l_{33} = 5.$$
Thus, we have
$$L = \begin{pmatrix} 2 & 0 & 0 \\ 1 & 4 & 0 \\ 7 & -3 & 5 \end{pmatrix}.$$
Then $LL^T X = b$. Let
$$L^T X = Y, \qquad (3)$$
where $Y = (y_1, y_2, y_3)^T$. Then $LY = b$:
$$2y_1 = 14$$
$$y_1 + 4y_2 = -101$$
$$7y_1 - 3y_2 + 5y_3 = 155,$$
so that
$$y_1 = 7, \quad y_2 = -27 \quad \text{and} \quad y_3 = 5.$$
Now $L^T X = Y$:
$$\begin{pmatrix} 2 & 1 & 7 \\ 0 & 4 & -3 \\ 0 & 0 & 5 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 7 \\ -27 \\ 5 \end{pmatrix},$$
i.e.
$$2x + y + 7z = 7$$
$$4y - 3z = -27$$
$$5z = 5.$$
By back substitution,
$$x = 3, \quad y = -6 \quad \text{and} \quad z = 1.$$
Thus, $x = 3$, $y = -6$ and $z = 1$.
6. Pivoting:
In the Gauss elimination process it may happen that one of the pivot elements $a_{11}, a'_{22}, a''_{33}, \ldots$ vanishes or becomes very small compared to the other elements in that column. We then attempt to rearrange the remaining rows so as to obtain a non-vanishing pivot, or to avoid multiplication by a large number. This process is called pivoting.
Step 1: In the first stage of elimination, we search the first column for the element largest in magnitude and bring it into the pivot position by interchanging the first equation with the equation containing that element.
Step 2: In the second stage, we search the second column for the element largest in magnitude among the remaining $(n-1)$ elements (leaving out the first element) and bring it into the pivot position by interchanging the second equation with the equation containing that element.
This procedure is continued until the upper triangular matrix is obtained. In partial pivoting, the pivot at the $k$th stage is found by the following rule: choose $j$, the smallest integer for which
$$|a_{jk}^{(k)}| = \max |a_{ik}^{(k)}|, \qquad k \le i \le n,$$
and interchange rows $k$ and $j$.
In complete pivoting we search the whole remaining matrix for the element largest in magnitude and bring it into the pivot position. Complete pivoting requires not only an interchange of equations but also an interchange of the positions of the variables. In complete pivoting the following rule is used to find the pivot: choose $l$ and $m$ as the smallest integers for which
$$|a_{lm}^{(k)}| = \max |a_{ij}^{(k)}|, \qquad k \le i, j \le n.$$
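The partial-pivoting rule above amounts to an argmax over the current column followed by a row interchange. A minimal sketch (the helper name `partial_pivot` is ours, not from the text):

```python
def partial_pivot(A, b, k):
    """At stage k, find the smallest row index j >= k maximizing |A[j][k]|
    and interchange rows k and j of A (and of the right-hand side b)."""
    n = len(A)
    # max() returns the first index attaining the maximum, i.e. the smallest j.
    j = max(range(k, n), key=lambda i: abs(A[i][k]))
    if j != k:
        A[k], A[j] = A[j], A[k]
        b[k], b[j] = b[j], b[k]

A = [[1.0, 10.0, -1.0], [10.0, -1.0, 2.0], [2.0, 3.0, 20.0]]
b = [3.0, 4.0, 7.0]
partial_pivot(A, b, 0)   # brings the row with the largest |a_i1| to the top
# A[0] is now [10.0, -1.0, 2.0] and b[0] is 4.0
```

Complete pivoting would instead scan all of `A[k:][k:]` and also swap columns, which is why it must additionally track the reordering of the variables.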
From the previous methods, we have learnt that any system of linear algebraic equations can be solved by the use of determinants. But the method of solving a system of linear equations by determinants is not very practical, even with efficient methods for evaluating the determinants, because when the order of the determinant is large, the evaluation becomes tedious. Therefore, to avoid these unnecessary computations, mathematicians have tried to develop simpler and less time-consuming procedures, and various methods for solving systems of linear equations have been suggested. The Gauss elimination method is one of the most important methods for solving a system of linear equations.
The first equation is called the pivot equation, and the coefficient of $x_1$ in the first equation, i.e., $a_{11} \neq 0$, is called the pivot. Thus the first step gives the new system, in which $x_1$ has been eliminated from all equations but the first.
In the second step of the Gauss elimination method, we take the new second equation (which no longer contains $x_1$) as the pivot equation and use it to eliminate $x_2$ from the third, fourth, . . . , $n$th equations.
Thus, the new system of equations is of upper triangular form and can be solved by back substitution.
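The two-phase procedure just described (forward elimination to upper triangular form, then back substitution) can be sketched as follows. This is a minimal illustration without pivoting, so it assumes no zero pivot is encountered; the function name is ours, and the sample system is the example below after its row interchange:

```python
def gauss_eliminate(A, b):
    """Solve A x = b by Gauss elimination (no pivoting) plus back
    substitution. A is a list of row lists; A and b are modified in place."""
    n = len(b)
    # Forward elimination: zero out column k below pivot row k.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # multiplier for row i
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution on the upper triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[6.0, 2.0, 8.0], [3.0, 5.0, 2.0], [0.0, 8.0, 2.0]]
b = [26.0, 8.0, -7.0]
x = gauss_eliminate(A, b)   # → [4.0, -1.0, 0.5]
```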
Example: Solve the following system of equations by the Gauss elimination method:
$$8x_2 + 2x_3 = -7 \qquad (1)$$
$$3x_1 + 5x_2 + 2x_3 = 8 \qquad (2)$$
$$6x_1 + 2x_2 + 8x_3 = 26 \qquad (3)$$
Solution: Since the first equation does not contain $x_1$, we interchange equations (1) and (3):
$$6x_1 + 2x_2 + 8x_3 = 26 \qquad (4)$$
$$3x_1 + 5x_2 + 2x_3 = 8 \qquad (5)$$
$$8x_2 + 2x_3 = -7 \qquad (6)$$
On subtracting $\tfrac12$ times equation (4) from equation (5), we have
$$6x_1 + 2x_2 + 8x_3 = 26 \qquad (7)$$
$$4x_2 - 2x_3 = -5 \qquad (8)$$
$$8x_2 + 2x_3 = -7 \qquad (9)$$
On subtracting 2 times equation (8) from equation (9), we have
$$6x_1 + 2x_2 + 8x_3 = 26 \qquad (10)$$
$$4x_2 - 2x_3 = -5 \qquad (11)$$
$$6x_3 = 3 \qquad (12)$$
By back substitution,
$$x_3 = \tfrac12, \quad x_2 = -1 \quad \text{and} \quad x_1 = 4.$$
Thus,
$$x_1 = 4, \quad x_2 = -1 \quad \text{and} \quad x_3 = \tfrac12.$$
Example: Solve the following system of equations by the Gauss elimination method:
$$x_1 + \tfrac12 x_2 + \tfrac13 x_3 = 1 \qquad (1)$$
$$\tfrac12 x_1 + \tfrac13 x_2 + \tfrac14 x_3 = 0 \qquad (2)$$
$$\tfrac13 x_1 + \tfrac14 x_2 + \tfrac15 x_3 = 0 \qquad (3)$$
On subtracting $\tfrac12$ times equation (1) from equation (2), and $\tfrac13$ times equation (1) from equation (3), we have
$$x_1 + \tfrac12 x_2 + \tfrac13 x_3 = 1 \qquad (4)$$
$$\tfrac{1}{12} x_2 + \tfrac{1}{12} x_3 = -\tfrac12 \qquad (5)$$
$$\tfrac{1}{12} x_2 + \tfrac{4}{45} x_3 = -\tfrac13 \qquad (6)$$
On subtracting equation (5) from equation (6), we have
$$x_1 + \tfrac12 x_2 + \tfrac13 x_3 = 1 \qquad (7)$$
$$\tfrac{1}{12} x_2 + \tfrac{1}{12} x_3 = -\tfrac12 \qquad (8)$$
$$-\tfrac{1}{180} x_3 = \tfrac16 \qquad (9)$$
By back substitution,
$$x_1 = 9, \quad x_2 = -36 \quad \text{and} \quad x_3 = 30.$$
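The coefficient matrix of this example is the $3 \times 3$ Hilbert matrix, which is notoriously ill-conditioned, so it is instructive to repeat the elimination in exact rational arithmetic. A sketch using Python's standard `fractions` module (not part of the original text):

```python
from fractions import Fraction as F

# 3x3 Hilbert system from the example, in exact rational arithmetic.
A = [[F(1, i + j + 1) for j in range(3)] for i in range(3)]  # a_ij = 1/(i+j+1)
b = [F(1), F(0), F(0)]

n = 3
for k in range(n - 1):                 # forward elimination
    for i in range(k + 1, n):
        m = A[i][k] / A[k][k]
        for j in range(k, n):
            A[i][j] -= m * A[k][j]
        b[i] -= m * b[k]

x = [F(0)] * n                         # back substitution
for i in range(n - 1, -1, -1):
    s = sum(A[i][j] * x[j] for j in range(i + 1, n))
    x[i] = (b[i] - s) / A[i][i]

print(x)   # [Fraction(9, 1), Fraction(-36, 1), Fraction(30, 1)]
```

Because every intermediate quantity is a `Fraction`, no rounding error enters and the exact solution $(9, -36, 30)$ is recovered.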
Example: Solve the following system of equations by the Gauss elimination method:
$$10x_1 - 7x_2 + 3x_3 + 5x_4 = 6 \qquad (1)$$
$$-6x_1 + 8x_2 - x_3 - 4x_4 = 5 \qquad (2)$$
$$3x_1 + x_2 + 4x_3 + 11x_4 = 2 \qquad (3)$$
$$5x_1 - 9x_2 - 2x_3 + 4x_4 = 7 \qquad (4)$$
Proceeding exactly as before, three stages of elimination reduce the system to upper triangular form. On solving equations (13), (14), (15) and (16) by back substitution we have
$$x_4 = 1, \quad x_3 = -7, \quad x_2 = 4 \quad \text{and} \quad x_1 = 5.$$
Thus,
$$x_1 = 5, \quad x_2 = 4, \quad x_3 = -7 \quad \text{and} \quad x_4 = 1.$$
Example: Solve the following system of equations by the Gauss elimination method:
$$10x_1 - x_2 + 2x_3 = 4 \qquad (1)$$
$$x_1 + 10x_2 - x_3 = 3 \qquad (2)$$
$$2x_1 + 3x_2 + 20x_3 = 7 \qquad (3)$$
On subtracting $\tfrac{1}{10}$ times equation (1) from equation (2), and $\tfrac{2}{10}$ times equation (1) from equation (3), we have
$$10x_1 - x_2 + 2x_3 = 4 \qquad (4)$$
$$\tfrac{101}{10} x_2 - \tfrac{12}{10} x_3 = \tfrac{26}{10} \qquad (5)$$
$$\tfrac{32}{10} x_2 + \tfrac{196}{10} x_3 = \tfrac{62}{10} \qquad (6)$$
On subtracting $\tfrac{32}{101}$ times equation (5) from equation (6), we have
$$10x_1 - x_2 + 2x_3 = 4 \qquad (7)$$
$$\tfrac{101}{10} x_2 - \tfrac{12}{10} x_3 = \tfrac{26}{10} \qquad (8)$$
$$\tfrac{20180}{1010} x_3 = \tfrac{5430}{1010} \qquad (9)$$
By back substitution,
$$x_3 = \tfrac{5430}{20180} \approx 0.2691, \quad x_2 \approx 0.2894 \quad \text{and} \quad x_1 \approx 0.3751.$$
Example: Solve the following system of equations by the Gauss elimination method:
$$x_1 + 2x_2 + x_3 = 8 \qquad (1)$$
$$2x_1 + 3x_2 + 4x_3 = 20 \qquad (2)$$
$$4x_1 + 3x_2 + 2x_3 = 16 \qquad (3)$$
On subtracting 2 times equation (1) from equation (2), and 4 times equation (1) from equation (3), we have
$$x_1 + 2x_2 + x_3 = 8 \qquad (4)$$
$$-x_2 + 2x_3 = 4 \qquad (5)$$
$$-5x_2 - 2x_3 = -16 \qquad (6)$$
On subtracting 5 times equation (5) from equation (6), we have
$$x_1 + 2x_2 + x_3 = 8 \qquad (7)$$
$$-x_2 + 2x_3 = 4 \qquad (8)$$
$$-12x_3 = -36 \qquad (9)$$
This gives, by back substitution,
$$x_1 = 1, \quad x_2 = 2 \quad \text{and} \quad x_3 = 3.$$
Example 12: Find the inverse of the coefficient matrix of the given system of equations
$$x_1 + x_2 + x_3 = 1$$
$$4x_1 + 3x_2 - x_3 = 6$$
$$3x_1 + 5x_2 + 3x_3 = 4$$
using the Gauss elimination method with partial pivoting, and hence solve the system of equations.
Solution: The system can be written as
$$AX = b \qquad (1)$$
where
$$A = \begin{pmatrix} 1 & 1 & 1 \\ 4 & 3 & -1 \\ 3 & 5 & 3 \end{pmatrix}, \quad X = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix} 1 \\ 6 \\ 4 \end{pmatrix}.$$
We reduce the augmented matrix $[A \mid I]$ to $[I \mid A^{-1}]$:
$$[A \mid I] = \left(\begin{array}{ccc|ccc} 1 & 1 & 1 & 1 & 0 & 0 \\ 4 & 3 & -1 & 0 & 1 & 0 \\ 3 & 5 & 3 & 0 & 0 & 1 \end{array}\right)$$
$$\sim \left(\begin{array}{ccc|ccc} 4 & 3 & -1 & 0 & 1 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 \\ 3 & 5 & 3 & 0 & 0 & 1 \end{array}\right) \qquad [R_1 \leftrightarrow R_2]$$
$$\sim \left(\begin{array}{ccc|ccc} 1 & \tfrac34 & -\tfrac14 & 0 & \tfrac14 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 \\ 3 & 5 & 3 & 0 & 0 & 1 \end{array}\right) \qquad \left[R_1 \to \tfrac14 R_1\right]$$
$$\sim \left(\begin{array}{ccc|ccc} 1 & \tfrac34 & -\tfrac14 & 0 & \tfrac14 & 0 \\ 0 & \tfrac14 & \tfrac54 & 1 & -\tfrac14 & 0 \\ 0 & \tfrac{11}{4} & \tfrac{15}{4} & 0 & -\tfrac34 & 1 \end{array}\right) \qquad \left[R_2 \to R_2 - R_1,\ R_3 \to R_3 - 3R_1\right]$$
$$\sim \left(\begin{array}{ccc|ccc} 1 & \tfrac34 & -\tfrac14 & 0 & \tfrac14 & 0 \\ 0 & \tfrac{11}{4} & \tfrac{15}{4} & 0 & -\tfrac34 & 1 \\ 0 & \tfrac14 & \tfrac54 & 1 & -\tfrac14 & 0 \end{array}\right) \qquad [R_2 \leftrightarrow R_3]$$
$$\sim \left(\begin{array}{ccc|ccc} 1 & \tfrac34 & -\tfrac14 & 0 & \tfrac14 & 0 \\ 0 & 1 & \tfrac{15}{11} & 0 & -\tfrac{3}{11} & \tfrac{4}{11} \\ 0 & \tfrac14 & \tfrac54 & 1 & -\tfrac14 & 0 \end{array}\right) \qquad \left[R_2 \to \tfrac{4}{11} R_2\right]$$
$$\sim \left(\begin{array}{ccc|ccc} 1 & 0 & -\tfrac{14}{11} & 0 & \tfrac{5}{11} & -\tfrac{3}{11} \\ 0 & 1 & \tfrac{15}{11} & 0 & -\tfrac{3}{11} & \tfrac{4}{11} \\ 0 & 0 & \tfrac{10}{11} & 1 & -\tfrac{2}{11} & -\tfrac{1}{11} \end{array}\right) \qquad \left[R_1 \to R_1 - \tfrac34 R_2,\ R_3 \to R_3 - \tfrac14 R_2\right]$$
$$\sim \left(\begin{array}{ccc|ccc} 1 & 0 & -\tfrac{14}{11} & 0 & \tfrac{5}{11} & -\tfrac{3}{11} \\ 0 & 1 & \tfrac{15}{11} & 0 & -\tfrac{3}{11} & \tfrac{4}{11} \\ 0 & 0 & 1 & \tfrac{11}{10} & -\tfrac15 & -\tfrac{1}{10} \end{array}\right) \qquad \left[R_3 \to \tfrac{11}{10} R_3\right]$$
$$\sim \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & \tfrac75 & \tfrac15 & -\tfrac25 \\ 0 & 1 & 0 & -\tfrac32 & 0 & \tfrac12 \\ 0 & 0 & 1 & \tfrac{11}{10} & -\tfrac15 & -\tfrac{1}{10} \end{array}\right) \qquad \left[R_1 \to R_1 + \tfrac{14}{11} R_3,\ R_2 \to R_2 - \tfrac{15}{11} R_3\right]$$
Hence
$$A^{-1} = \begin{pmatrix} \tfrac75 & \tfrac15 & -\tfrac25 \\ -\tfrac32 & 0 & \tfrac12 \\ \tfrac{11}{10} & -\tfrac15 & -\tfrac{1}{10} \end{pmatrix}$$
and
$$X = A^{-1}b = \begin{pmatrix} \tfrac75 & \tfrac15 & -\tfrac25 \\ -\tfrac32 & 0 & \tfrac12 \\ \tfrac{11}{10} & -\tfrac15 & -\tfrac{1}{10} \end{pmatrix}\begin{pmatrix} 1 \\ 6 \\ 4 \end{pmatrix} = \begin{pmatrix} 1 \\ \tfrac12 \\ -\tfrac12 \end{pmatrix}.$$
Thus,
$$x_1 = 1, \quad x_2 = \tfrac12 \quad \text{and} \quad x_3 = -\tfrac12.$$
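The inverse computed above is easy to check numerically (a sketch using `numpy`, not part of the original text):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0], [4.0, 3.0, -1.0], [3.0, 5.0, 3.0]])
b = np.array([1.0, 6.0, 4.0])

A_inv = np.linalg.inv(A)
# A_inv → [[ 1.4,  0.2, -0.4],
#          [-1.5,  0.0,  0.5],
#          [ 1.1, -0.2, -0.1]]
x = A_inv @ b    # → [1, 0.5, -0.5]
```

In practice one would call `np.linalg.solve(A, b)` directly rather than form the inverse, but computing `A_inv` mirrors the hand calculation of the example.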
Amount of storage
Elimination of $x_1$: For eliminating $x_1$ from the $i$th equation, the factor $\dfrac{a_{i1}}{a_{11}}$ is computed once (one division). There are $(n-1)$ multiplications in the $(n-1)$ terms on the left side and 1 multiplication on the right side. Since $x_1$ must be eliminated from $(n-1)$ equations, the first stage requires
$$\bigl(1 + (n-1) + 1\bigr)(n-1) = (n+1)(n-1)$$
operations. Similarly, the elimination of $x_2$ requires $(n-2)n = (n-2)\bigl((n-2)+2\bigr)$ operations, and in general the elimination of $x_k$ requires
$$(n-k)(n-k+2)$$
operations. Thus, the total number of operations required to eliminate $x_1, x_2, x_3, \ldots, x_{n-1}$ is as follows:
$$\sum_{k=1}^{n-1} (n-k)(n-k+2) = \sum_{k=1}^{n-1} \left[(n-k)^2 + 2(n-k)\right]$$
$$= \sum_{k=1}^{n-1} \left(n^2 + k^2 - 2nk + 2n - 2k\right)$$
$$= n^2(n-1) + \frac{(n-1)n(2n-1)}{6} - 2n\,\frac{(n-1)n}{2} + 2n(n-1) - 2\,\frac{(n-1)n}{2}$$
$$= \frac{(n-1)n(2n-1)}{6} + n(n-1) = \frac{n(n-1)(2n+5)}{6}.$$
Hence, for large $n$,
$$\sum_{k=1}^{n-1}(n-k)(n-k+2) \approx \frac{n^3}{3}.$$
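The count can be confirmed by instrumenting the elimination loop: charge one operation per division and per multiplication, following the same convention as the derivation above, and compare with $n(n-1)(2n+5)/6$ (a sketch, not part of the original text):

```python
def elimination_op_count(n):
    """Count the multiplications/divisions performed by forward elimination
    on an n x n system: one division for the multiplier, (n - k)
    multiplications on the left side and one on the right side,
    per row, per stage k."""
    ops = 0
    for k in range(1, n):          # stage k eliminates x_k
        for _ in range(n - k):     # rows below the pivot
            ops += 1               # multiplier a_ik / a_kk
            ops += (n - k)         # left-side terms
            ops += 1               # right-side term
    return ops

for n in (3, 5, 10, 100):
    exact = n * (n - 1) * (2 * n + 5) // 6
    assert elimination_op_count(n) == exact

print(elimination_op_count(100), 100 ** 3 // 3)   # the count is close to n^3/3
```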
Note 1: Similarly, it can be shown that the Gauss-Jordan method requires about $\dfrac{n^3}{2}$ arithmetic operations. Hence, the Gauss elimination method is preferred to the Gauss-Jordan method for solving large systems of equations.
Note 2: In the LU decomposition method the total operations count is $\dfrac{n^3}{3}$, the same as in the Gauss elimination method.
Note 3: In the Cholesky method the total operations count is $\dfrac{n^3}{6}$.
Exercise:
(I)
$$4x + 5y = 7$$
$$12x + 14y = 18$$
(II)
$$5x + 9y + 2z = 24$$
$$9x + 4y + z = 25$$
$$2x + y + z = 11$$
(III)
$$4x + 6y + 8z = 0$$
$$6x + 34y + 52z = 160$$
$$8x + 52y + 129z = 452$$
(II)
$$4x_1 + x_2 = 1$$
$$x_1 + 4x_2 + x_3 = 0$$
$$x_2 + 4x_3 + x_4 = 0$$
$$x_3 + 4x_4 = 0$$
3. Solve the following systems of equations using the Gauss elimination method:
(I)
$$2x + 2y + 3z = 1$$
$$4x + 2y + 3z = 2$$
$$x + y + z = 3$$
(II)
$$2x_1 + x_2 + x_3 + 2x_4 = 2$$
$$4x_1 + 2x_3 + x_4 = 3$$
$$3x_1 + 2x_2 + 2x_3 = 1$$
$$x_1 + 3x_2 + 2x_3 = 4$$
(III)
$$4x_1 + x_2 + x_3 = 4$$
$$x_1 + 4x_2 + 2x_3 = 4$$
$$3x_1 + 2x_2 + 4x_3 = 6$$
(IV)
$$x_1 + x_2 + x_3 = 2$$
$$2x_1 + 3x_2 + 5x_3 = 3$$
$$3x_1 + 2x_2 + 3x_3 = 6$$
(II)
$$2x_1 + x_2 + 4x_3 + x_4 = 4$$
$$4x_1 + 3x_2 + 5x_3 + 2x_4 = 1$$
$$x_1 + x_2 + x_3 + x_4 = 1$$
$$x_1 + 3x_2 + 3x_3 + 2x_4 = 1$$
(III)
$$4x_1 + 2x_2 + 4x_3 = 10$$
$$2x_1 + 2x_2 + 3x_3 + 2x_4 = 18$$
$$4x_1 + 2x_2 + 6x_3 + 3x_4 = 30$$
$$2x_2 + 3x_3 + 9x_4 = 61$$
(IV)
$$10x_1 + 2x_2 + x_3 = 59$$
$$x_1 + 8x_2 + 2x_3 = 4$$
$$7x_1 + x_2 + 20x_3 = 5$$
Summary:
References: