3-Gauss Elimination Method

The document discusses the Gauss elimination method for solving systems of linear equations. There are two main steps: 1) The elimination process which converts the original system [A]{x}={b} into an upper triangular system [U]{x}={y} through a series of row operations. 2) Back substitution is then used to solve for the variables by substituting into the upper triangular system. The document analyzes the number of operations involved in Gauss elimination and discusses that more efficient methods exist such as LU decomposition.


CE 601: Numerical Methods

Lecture 4

Course Coordinator:
Dr. Suresh A. Kartha,
Associate Professor,
Department of Civil Engineering,
IIT Guwahati.
Gauss Elimination Method
• There are two processes in Gauss elimination:
  elimination and back-substitution.
• For a linear system [A]{x} = {b} with an n x n
  matrix [A] and n unknowns,
→ There are (n-1) sub-steps of elimination to
  create the system [U]{x} = {y}:

  a_ij^(k) = a_ij^(k-1) - l_ik * a_kj^(k-1)

  b_i^(k) = b_i^(k-1) - l_ik * b_k^(k-1)

  l_ik = a_ik^(k-1) / a_kk^(k-1)

  where i = k+1, k+2, ..., n;  j = k, k+1, k+2, ..., n;  and k = 1, 2, 3, ..., n-1.
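The elimination formulas above can be sketched in code. This is a minimal illustration, not the lecture's own program, and it assumes all pivots a_kk^(k-1) are nonzero (no pivoting):

```python
import numpy as np

def forward_eliminate(A, b):
    """Reduce [A]{x} = {b} to an upper triangular system [U]{x} = {y}."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):              # elimination sub-steps k = 1, ..., n-1
        for i in range(k + 1, n):       # rows below the pivot row
            lik = A[i, k] / A[k, k]     # l_ik = a_ik^(k-1) / a_kk^(k-1)
            A[i, k:] -= lik * A[k, k:]  # a_ij^(k) = a_ij^(k-1) - l_ik * a_kj^(k-1)
            b[i] -= lik * b[k]          # b_i^(k) = b_i^(k-1) - l_ik * b_k^(k-1)
    return A, b
```

For example, with A = [[2, 1], [4, 3]] and b = [3, 7], the single elimination step uses l_21 = 2 and returns U = [[2, 1], [0, 1]], y = [3, 1].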


• After performing (n-1) elimination steps to
  convert [A]{x} = {b} to the form [U]{x} = {y}, we need
  to perform back-substitution to evaluate the
  components of {x}.
• Substitution Process
• [U]{x} = {y}, where {y}^T = {b_1, b_2^(1), b_3^(2), ..., b_n^(n-1)}
• We start from the bottom:

  x_n = b_n^(n-1) / a_nn^(n-1)

  x_(n-1) = [b_(n-1)^(n-2) - a_(n-1)n^(n-2) * x_n] / a_(n-1)(n-1)^(n-2)

  x_(n-2) = [b_(n-2)^(n-3) - a_(n-2)(n-1)^(n-3) * x_(n-1) - a_(n-2)n^(n-3) * x_n] / a_(n-2)(n-2)^(n-3)

• Similarly, you get

  x_i = [b_i^(i-1) - a_i(i+1)^(i-1) * x_(i+1) - a_i(i+2)^(i-1) * x_(i+2) - ... - a_in^(i-1) * x_n] / a_ii^(i-1)

  i.e., x_i = [b_i^(i-1) - Σ_(j=i+1)^(n) a_ij^(i-1) * x_j] / a_ii^(i-1);  i = (n-1), (n-2), ..., 2, 1.
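The bottom-up substitution can be sketched as follows. This is an illustrative helper (assuming an upper triangular [U] with nonzero diagonal entries), not the lecture's own code:

```python
import numpy as np

def back_substitute(U, y):
    """Solve the upper triangular system [U]{x} = {y} from the bottom row up."""
    n = len(y)
    x = np.zeros(n)
    x[n - 1] = y[n - 1] / U[n - 1, n - 1]   # x_n = b_n^(n-1) / a_nn^(n-1)
    for i in range(n - 2, -1, -1):          # i = n-1, n-2, ..., 1
        # subtract the already-known terms a_ij^(i-1) * x_j for j = i+1, ..., n
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```

Continuing the earlier 2 x 2 example, back_substitute on U = [[2, 1], [0, 1]] and y = [3, 1] gives x = [1, 1].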
• Operations Involved in Gauss Elimination
• While using computational methods to solve linear systems,
  emphasis should be given to efficient ways of computing with
  the algorithms.
• The normal computing operations involved are
  - Addition
  - Subtraction
  - Multiplication
  - Division
• Let us hypothetically suggest that, on a computer, each of the
  above operations takes 't' milliseconds. So if we have a
  large number of the above operations, the computer will take
  more time to evaluate.
• In the case of the Gauss elimination method, let us see how many
  operations are involved in solving an n x n linear system.
• No. of operations involved in the elimination steps:
→ No. of elimination steps = (n-1)
→ No. of operations for the first elimination step (i.e., k = 1):
→ Evaluate the multiplication factors l_i1.
  There are (n-1) rows to be operated on in the first sub-step, so
  the multiplication factors are

  l_21 = a_21/a_11,  l_31 = a_31/a_11, ...,  l_i1 = a_i1/a_11, ...,  l_n1 = a_n1/a_11

  i.e., (n-1) operations (all divisions)
→ For evaluating a_ij^(1) = a_ij - l_i1 * a_1j:
  (2 operations) * (n-1) * (n-1)
→ For evaluating b_i^(1) = b_i - l_i1 * b_1:
  (2 operations) * (n-1)
• In the first elimination step, you have the following
  number of operations:

  (n-1) + 2(n-1)^2 + 2(n-1) = (n-1)(2n+1)

• Similarly, in the second elimination step (k = 2),
→ the no. of operations = (n-2)(2n-1),
→ for k = 3, the no. of operations = (n-3)(2n-3)
• In general, for any kth elimination step, we have
  no. of operations = (n-k)(2n-2k+3)
• Total no. of operations for elimination:

  Σ_(p=1)^(n-1) (n-p)(2n-2p+3) = (1/6) n (n-1)(4n+7)

                               = (2/3)n^3 + (1/2)n^2 - (7/6)n
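The closed form above can be checked against the per-step counts directly; a quick sketch (integer arithmetic keeps the 1/6 factor exact):

```python
def elimination_ops(n):
    """Sum the per-step counts (n-k)(2n-2k+3) over k = 1, ..., n-1."""
    return sum((n - k) * (2 * n - 2 * k + 3) for k in range(1, n))

def closed_form(n):
    # (2/3)n^3 + (1/2)n^2 - (7/6)n, written as (4n^3 + 3n^2 - 7n)/6
    return (4 * n**3 + 3 * n**2 - 7 * n) // 6

# the two expressions agree for every n
for n in (2, 3, 5, 10, 100):
    assert elimination_ops(n) == closed_form(n)
```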
• No. of operations involved in back-substitution. There
  are 'n' back-substitution steps, i.e., i = 1, 2, 3, ..., n:
→ First back-substitution step:

  x_n = b_n^(n-1) / a_nn^(n-1)   -> 1 operation

→ Second back-substitution step:

  x_(n-1) = [b_(n-1)^(n-2) - a_(n-1)n^(n-2) * x_n] / a_(n-1)(n-1)^(n-2)   -> 3 operations

→ Third back-substitution step:

  x_(n-2) = [b_(n-2)^(n-3) - a_(n-2)(n-1)^(n-3) * x_(n-1) - a_(n-2)n^(n-3) * x_n] / a_(n-2)(n-2)^(n-3)   -> 5 operations

→ In general, for any 'i', the ith back-substitution step takes (2i-1) operations.
• Total no. of operations for back-substitution:

  Σ_(i=1)^(n) (2i-1) = n^2

• Total no. of operations in the entire Gauss elimination
  process:

  (2/3)n^3 + (1/2)n^2 - (7/6)n + n^2 = (2/3)n^3 + (3/2)n^2 - (7/6)n
• E.g., if you have a 1000 x 1000 linear system, the total no. of operations
  = 0.6681655 x 10^9.
  If an operation (hypothetically) takes 0.1 milliseconds per
  operation, then the total time taken = 66817 seconds ≈ 18.6 hours.
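The figures above can be reproduced from the operation-count formula:

```python
def total_ops(n):
    # (2/3)n^3 + (3/2)n^2 - (7/6)n, written as (4n^3 + 9n^2 - 7n)/6
    return (4 * n**3 + 9 * n**2 - 7 * n) // 6

ops = total_ops(1000)     # 668,165,500, i.e. 0.6681655 x 10^9 operations
seconds = ops * 0.1e-3    # at a hypothetical 0.1 milliseconds per operation
hours = seconds / 3600    # roughly 18.6 hours
```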
• That is why efficient computer methods are recommended.
• The Gauss elimination method is the traditional form; however, it is not
  the most efficient method for solving a system of linear equations.
• There is another direct elimination method called the Gauss-Jordan
  elimination method (this I request you to refer to on your own).
• In the Gauss-Jordan method the principle is to convert [A]{x} = {b} to
  the form [I]{x} = {b'}, where [I] is the identity matrix, so that the
  transformed right-hand side {b'} is the solution {x}.
• The Gauss-Jordan method is computationally not efficient. You will see
  that the number of operations involved in Gauss-Jordan is n^3 + n^2 - n.
• In school days you have also studied matrix
  inverse methods and the corresponding
  determinants to solve linear systems. This
  method takes 2n^3 - 2n^2 + n arithmetic
  operations for the matrix inverse. Also, the
  multiplication [A]^(-1){b} further requires 2n^2 - n
  operations.
• Note: As discussed in one of the earlier lectures,
  numerical methods may generate errors in the
  solutions.
• The Gauss elimination method may also present
  errors in the solutions, such as round-off
  errors. One can use partial pivoting or scaled
  partial pivoting to reduce such errors.
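Partial pivoting can be added to the elimination sketch by swapping in, at each step k, the row whose entry in column k has the largest magnitude. This is an illustrative sketch, not code from the lecture:

```python
import numpy as np

def forward_eliminate_pp(A, b):
    """Forward elimination with partial pivoting (row interchanges)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # row with the largest pivot candidate
        if p != k:                            # interchange rows k and p
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            lik = A[i, k] / A[k, k]
            A[i, k:] -= lik * A[k, k:]
            b[i] -= lik * b[k]
    return A, b
```

For A = [[0, 2], [1, 1]] and b = [4, 3], plain elimination would divide by the zero pivot a_11; with the row interchange, the method returns U = [[1, 1], [0, 2]] and y = [3, 4] without trouble.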
• LU Decomposition
o We have discussed that a matrix can be factored, i.e., it
  can be given as a product of two different matrices:
  [A] = [B][C]
o There can be many possibilities for obtaining factor
  matrices.
o In a similar tone, one can also factorize [A] as the
  product of [L] and [U], i.e., [A] = [L][U], where [L] is a
  lower triangular and [U] is an upper triangular
  matrix:

  | a11 a12 ... a1n |   | l11  0  ...  0  |   | u11 u12 ... u1n |
  | a21 a22 ... a2n | = | l21 l22 ...  0  | * |  0  u22 ... u2n |   ....(4)
  | ... ... ... ... |   | ... ... ... ... |   | ... ... ... ... |
  | an1 an2 ... ann |   | ln1 ln2 ... lnn |   |  0   0  ... unn |

o In the representation (Eq. 4), if we specify the
  values of the diagonal elements of either [L] or
  [U], then the factorization will be unique.
o The LU decomposition methods of Doolittle and
  Crout work on these principles.
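A Doolittle-style factorization (fixing the diagonal of [L] to ones, as Eq. 4 permits) can be sketched as follows. This is an assumed illustration of the principle, not the lecture's own algorithm, and it assumes no pivoting is required:

```python
import numpy as np

def doolittle_lu(A):
    """Factor [A] = [L][U] with unit diagonal in [L] (Doolittle's choice)."""
    A = A.astype(float)
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for k in range(n):
        # row k of U uses the already-computed part of L
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        # column k of L uses the already-computed part of U
        L[k + 1:, k] = (A[k + 1:, k] - L[k + 1:, :k] @ U[:k, k]) / U[k, k]
    return L, U
```

For A = [[4, 3], [6, 3]], this gives L = [[1, 0], [1.5, 1]] and U = [[4, 3], [0, -1.5]], and multiplying [L][U] recovers [A].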
