
3. Numerical Solutions of Systems of Linear Equations

Consider a system of n linear equations in n unknowns:

$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$$
$$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2$$
$$\vdots$$
$$a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n$$

To find a unique solution, we must have n equations in n unknowns. In addition to that,

$$\begin{vmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{vmatrix} \neq 0$$

is needed.

We can also write this system of equations in matrix form, i.e. $Ax = b$, where $A$ is called the coefficient matrix and is given by
$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix}, \qquad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \qquad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$$
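As an illustration of the matrix form, the following sketch (assuming NumPy is available; the small 2×2 system is chosen only for illustration and reappears later as Example 1 of Cramer's rule) builds A and b as arrays and solves Ax = b with a library routine. It is a convenient way to check the hand computations in the examples that follow.

```python
import numpy as np

# Illustrative 2x2 system:
#   2x + y  = 3
#   4x + 3y = 7
A = np.array([[2.0, 1.0],
              [4.0, 3.0]])
b = np.array([3.0, 7.0])

print(np.linalg.det(A))        # 2.0 (nonzero, so a unique solution exists)
print(np.linalg.solve(A, b))   # [1. 1.], i.e. x = 1, y = 1
```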
3.1 Direct Methods of Solving Systems of
Linear Equations

3.1.1 Cramer’s Rule

This method was first developed by the Swiss mathematician Gabriel Cramer. To explain the method, let us consider a system of linear equations as follows.

$$Ax = b$$
$$\begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$$

Then the solution of the above system is given by Cramer's rule as:

$$x_1 = \frac{\begin{vmatrix} b_1 & a_{12} & \dots & a_{1n} \\ b_2 & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & & \vdots \\ b_n & a_{n2} & \dots & a_{nn} \end{vmatrix}}{|A|}; \quad x_2 = \frac{\begin{vmatrix} a_{11} & b_1 & \dots & a_{1n} \\ a_{21} & b_2 & \dots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & b_n & \dots & a_{nn} \end{vmatrix}}{|A|}; \quad \dots \quad x_n = \frac{\begin{vmatrix} a_{11} & a_{12} & \dots & b_1 \\ a_{21} & a_{22} & \dots & b_2 \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \dots & b_n \end{vmatrix}}{|A|}$$

where $|A| \neq 0$.
This method is quite general, but involves a lot
of labour when the number of equations exceeds
four. Therefore, Cramer’s rule is not suitable for
large systems of equations.

Example 1:
Solve the following equations by using Cramer's rule.
2𝑥 + 𝑦 = 3
4𝑥 + 3𝑦 = 7
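A minimal sketch of Cramer's rule applied to Example 1 (NumPy assumed for the determinants; the function name `cramer_solve` is only an illustrative choice):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule (practical only for small systems)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("det(A) = 0, so Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                       # replace the i-th column of A with b
        x[i] = np.linalg.det(Ai) / det_A   # x_i = det(Ai) / det(A)
    return x

# Example 1: 2x + y = 3, 4x + 3y = 7
print(cramer_solve([[2, 1], [4, 3]], [3, 7]))   # [1. 1.]
```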

Example 2:
Apply Cramer's rule to solve the following equations.
𝑥 + 3𝑦 + 2𝑧 = 15
𝑥 − 4𝑦 + 𝑧 = −4
2𝑥 + 3𝑦 + 𝑧 = 10

3.1.2 Gauss Elimination Method
$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \quad \to (1)$$
$$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \quad \to (2)$$
$$\vdots$$
$$a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n \quad \to (n)$$

We divide equation (1) by $a_{11}$ (assuming $a_{11} \neq 0$; otherwise interchanging the equation with another is necessary), multiply it by $a_{21}, a_{31}, \dots, a_{n1}$, and subtract the results from equations (2), (3), ..., (n) respectively.
Here $a_{11}$ is called the pivot. The 'pivot' or 'pivot element' is an element of the coefficient matrix that is used to make the elements above and below it zero.

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n = b_1 \quad \to (1)'$$

$$(2) - (1) \times \frac{a_{21}}{a_{11}} \;\Rightarrow\; \left(a_{22} - \frac{a_{12}a_{21}}{a_{11}}\right)x_2 + \left(a_{23} - \frac{a_{13}a_{21}}{a_{11}}\right)x_3 + \cdots + \left(a_{2n} - \frac{a_{1n}a_{21}}{a_{11}}\right)x_n = b_2 - \frac{b_1 a_{21}}{a_{11}} \quad \to (2)'$$

$$\vdots$$
$$(n) - (1) \times \frac{a_{n1}}{a_{11}} \;\Rightarrow\; \left(a_{n2} - \frac{a_{12}a_{n1}}{a_{11}}\right)x_2 + \left(a_{n3} - \frac{a_{13}a_{n1}}{a_{11}}\right)x_3 + \cdots + \left(a_{nn} - \frac{a_{1n}a_{n1}}{a_{11}}\right)x_n = b_n - \frac{b_1 a_{n1}}{a_{11}} \quad \to (n)'$$

Now we can write this as:

$$a_{11}^{(1)}x_1 + a_{12}^{(1)}x_2 + \cdots + a_{1n}^{(1)}x_n = b_1^{(1)} \quad \to (A)$$
$$a_{22}^{(1)}x_2 + \cdots + a_{2n}^{(1)}x_n = b_2^{(1)} \quad \to (B)$$
$$\vdots$$
$$a_{n2}^{(1)}x_2 + \cdots + a_{nn}^{(1)}x_n = b_n^{(1)} \quad \to (N)$$

Assuming $a_{22}^{(1)}$ is not zero, we use it as the next pivot. Otherwise we interchange the second equation with one of the equations $i = 3, 4, \dots, n$ whose coefficient of $x_2$ is nonzero, and then eliminate $x_2$ from those equations.
So, we have,
$$a_{11}^{(2)}x_1 + a_{12}^{(2)}x_2 + \cdots + a_{1n}^{(2)}x_n = b_1^{(2)}$$
$$a_{22}^{(2)}x_2 + \cdots + a_{2n}^{(2)}x_n = b_2^{(2)}$$
$$a_{33}^{(2)}x_3 + \cdots + a_{3n}^{(2)}x_n = b_3^{(2)}$$
$$\vdots$$
$$a_{n3}^{(2)}x_3 + \cdots + a_{nn}^{(2)}x_n = b_n^{(2)}$$

By continuing like this we get,
$$a_{11}^{(n-1)}x_1 + a_{12}^{(n-1)}x_2 + \cdots + a_{1n}^{(n-1)}x_n = b_1^{(n-1)} \quad \to (1)^{(n-1)}$$
$$a_{22}^{(n-1)}x_2 + \cdots + a_{2n}^{(n-1)}x_n = b_2^{(n-1)} \quad \to (2)^{(n-1)}$$
$$a_{33}^{(n-1)}x_3 + \cdots + a_{3n}^{(n-1)}x_n = b_3^{(n-1)} \quad \to (3)^{(n-1)}$$
$$\vdots$$
$$a_{nn}^{(n-1)}x_n = b_n^{(n-1)} \quad \to (n)^{(n-1)}$$

Therefore, from the last equation,

$$x_n = \frac{b_n^{(n-1)}}{a_{nn}^{(n-1)}}$$

Now, using back substitution, we can find $x_{n-1}, x_{n-2}, \dots, x_2, x_1$.
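A minimal sketch of this procedure (forward elimination with a row interchange when a pivot is zero, followed by back substitution); NumPy is assumed and the code is only an illustration of the steps described above:

```python
import numpy as np

def gauss_eliminate(A, b):
    """Solve Ax = b by Gauss elimination followed by back substitution."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)

    # Forward elimination: create zeros below each pivot A[k, k]
    for k in range(n - 1):
        if np.isclose(A[k, k], 0.0):
            # Zero pivot: interchange with a lower row whose entry in column k is nonzero
            swap = k + int(np.argmax(np.abs(A[k:, k])))
            if np.isclose(A[swap, k], 0.0):
                raise ValueError("matrix is singular")
            A[[k, swap]] = A[[swap, k]]
            b[[k, swap]] = b[[swap, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]        # multiplier a_ik / a_kk
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]

    # Back substitution on the resulting upper-triangular system
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Example 1 below: 2x + y + z = 10, 3x + 2y + 3z = 18, x + 4y + 9z = 16
print(gauss_eliminate([[2, 1, 1], [3, 2, 3], [1, 4, 9]], [10, 18, 16]))   # [ 7. -9.  5.]
```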

Example 1: Use Gauss elimination to solve,
2x+y+z=10
3x+2y+3z=18
x+4y+9z=16
Example 2: Use Gauss elimination to solve,
𝑥1 − 𝑥2 + 2𝑥3 − 𝑥4 = −8
2𝑥1 − 2𝑥2 + 3𝑥3 − 3𝑥4 = −20
𝑥1 + 𝑥2 + 𝑥3 = −2
𝑥1 − 𝑥2 + 4𝑥3 + 3𝑥4 = 4
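As a quick check of Example 2 with a library solver (NumPy assumed): this system is also a case where the zero-pivot interchange matters, since eliminating $x_1$ makes the second diagonal entry zero, so the second and third equations must be interchanged before continuing.

```python
import numpy as np

# Example 2 in matrix form
A = np.array([[1, -1, 2, -1],
              [2, -2, 3, -3],
              [1,  1, 1,  0],
              [1, -1, 4,  3]], dtype=float)
b = np.array([-8, -20, -2, 4], dtype=float)
print(np.linalg.solve(A, b))   # [-7.  3.  2.  2.]
```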

3.1.3 LU Factorization Method

The factorization is particularly useful when it has the form A = LU, where L is a unit lower triangular matrix and U is an upper triangular matrix. Gaussian elimination applied directly to an arbitrary linear system $Ax = b$ requires many operations to determine x. If A has already been factorized into the triangular form A = LU, then we can solve for x much more easily in a few steps.
First we let $y = Ux$. Then

$$Ax = b \;\Rightarrow\; (LU)x = b \;\Rightarrow\; L(Ux) = b \;\Rightarrow\; Ly = b$$

So we first solve the system $Ly = b$ for y. Once y is known, we can easily solve the system

$$Ux = y$$

to obtain the solution for x.

Now, we can solve 𝐴𝑥 = 𝑏 in two stages:
i. Solving the equation: 𝐿𝑦 = 𝑏 for y by
forward substitution and
ii. Solving the equation: 𝑈𝑥 = 𝑦 for x using y
by backward substitution.
The elements of L and U can be determined by
comparing the elements of the product of L and
U with those of A. This is done by assuming the
diagonal elements of L or U to be unity.
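A minimal sketch of this comparison process (Doolittle's scheme, with the diagonal of L set to unity as above); NumPy is assumed, no row interchanges are performed, and the demonstration uses the matrix of the worked example that follows:

```python
import numpy as np

def lu_doolittle(A):
    """Factorize A = LU with a unit diagonal on L (no row interchanges)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for i in range(n):
        # Row i of U:  u_ij = a_ij - sum_{k<i} l_ik * u_kj
        for j in range(i, n):
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        # Column i of L:  l_ji = (a_ji - sum_{k<i} l_jk * u_ki) / u_ii
        for j in range(i + 1, n):
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[4, 12,  8,  4],
              [1,  7, 18,  9],
              [2,  9, 20, 20],
              [3, 11, 15, 14]], dtype=float)
L, U = lu_doolittle(A)
print(L)
print(U)
print(np.allclose(L @ U, A))   # True
```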

Example: Find the lower triangular matrix and upper
triangular matrix of A.
$$A = \begin{pmatrix} 4 & 12 & 8 & 4 \\ 1 & 7 & 18 & 9 \\ 2 & 9 & 20 & 20 \\ 3 & 11 & 15 & 14 \end{pmatrix}$$
Initialize
$$L = \begin{pmatrix} 1 & 0 & 0 & 0 \\ l_{21} & 1 & 0 & 0 \\ l_{31} & l_{32} & 1 & 0 \\ l_{41} & l_{42} & l_{43} & 1 \end{pmatrix}, \qquad U = \begin{pmatrix} u_{11} & u_{12} & u_{13} & u_{14} \\ 0 & u_{22} & u_{23} & u_{24} \\ 0 & 0 & u_{33} & u_{34} \\ 0 & 0 & 0 & u_{44} \end{pmatrix}$$

$$A = LU$$
$$\begin{pmatrix} 4 & 12 & 8 & 4 \\ 1 & 7 & 18 & 9 \\ 2 & 9 & 20 & 20 \\ 3 & 11 & 15 & 14 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ l_{21} & 1 & 0 & 0 \\ l_{31} & l_{32} & 1 & 0 \\ l_{41} & l_{42} & l_{43} & 1 \end{pmatrix} \times \begin{pmatrix} u_{11} & u_{12} & u_{13} & u_{14} \\ 0 & u_{22} & u_{23} & u_{24} \\ 0 & 0 & u_{33} & u_{34} \\ 0 & 0 & 0 & u_{44} \end{pmatrix}$$

$$\begin{pmatrix} 4 & 12 & 8 & 4 \\ 1 & 7 & 18 & 9 \\ 2 & 9 & 20 & 20 \\ 3 & 11 & 15 & 14 \end{pmatrix} = \begin{pmatrix} u_{11} & u_{12} & u_{13} & u_{14} \\ l_{21}u_{11} & l_{21}u_{12} + u_{22} & l_{21}u_{13} + u_{23} & l_{21}u_{14} + u_{24} \\ l_{31}u_{11} & l_{31}u_{12} + l_{32}u_{22} & l_{31}u_{13} + l_{32}u_{23} + u_{33} & l_{31}u_{14} + l_{32}u_{24} + u_{34} \\ l_{41}u_{11} & l_{41}u_{12} + l_{42}u_{22} & l_{41}u_{13} + l_{42}u_{23} + l_{43}u_{33} & l_{41}u_{14} + l_{42}u_{24} + l_{43}u_{34} + u_{44} \end{pmatrix}$$

On comparison, we have the following relations:

$$u_{11} = 4, \quad u_{12} = 12, \quad u_{13} = 8, \quad u_{14} = 4$$
$$l_{21}u_{11} = 1 \;\Rightarrow\; l_{21} = 1/4$$
$$l_{31}u_{11} = 2 \;\Rightarrow\; l_{31} = 1/2$$

Continuing in this way, we can compute all the remaining terms.

$$\begin{pmatrix} 4 & 12 & 8 & 4 \\ 1 & 7 & 18 & 9 \\ 2 & 9 & 20 & 20 \\ 3 & 11 & 15 & 14 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1/4 & 1 & 0 & 0 \\ 1/2 & 3/4 & 1 & 0 \\ 3/4 & 1/2 & 1/4 & 1 \end{pmatrix} \times \begin{pmatrix} 4 & 12 & 8 & 4 \\ 0 & 4 & 16 & 8 \\ 0 & 0 & 4 & 12 \\ 0 & 0 & 0 & 4 \end{pmatrix}$$

Example 2:
Solve the following linear system by using LU
factorization method.
3𝑥 + 2𝑦 + 7𝑧 = 4
2𝑥 + 3𝑦 + 𝑧 = 5
3𝑥 + 4𝑦 + 𝑧 = 7
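A minimal sketch of the two-stage solve for Example 2 using SciPy's library routines (assumed available); note that `lu_factor` uses partial pivoting, so its internal factors may differ from a hand computation without row interchanges, but the solution is the same:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Example 2: 3x + 2y + 7z = 4, 2x + 3y + z = 5, 3x + 4y + z = 7
A = np.array([[3.0, 2.0, 7.0],
              [2.0, 3.0, 1.0],
              [3.0, 4.0, 1.0]])
b = np.array([4.0, 5.0, 7.0])

lu, piv = lu_factor(A)       # stage 1: factorize A
x = lu_solve((lu, piv), b)   # stage 2: forward then backward substitution
print(x)                     # [ 0.875  1.125 -0.125]
```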

Example 3:
Solve the following linear system by using LU
factorization method.
𝑥1 + 𝑥2 + 3𝑥4 = 4
2𝑥1 + 𝑥2 − 𝑥3 + 3𝑥4 = 1
3𝑥1 − 𝑥2 − 𝑥3 + 2𝑥4 = −3
−𝑥1 + 2𝑥2 + 3𝑥3 − 𝑥4 = 4

3.2 Iterative Methods of Solving Systems of Linear Equations
3.2.1 Jacobi Method

This iterative technique was developed by Carl Gustav Jacob Jacobi (1804-1851), and the method makes two assumptions.
i. The system of linear equations has a unique solution (i.e. the system has 'n' equations, 'n' unknowns and $|A| \neq 0$).

ii. The coefficient matrix A has no zeros on its
main diagonal. If any of the diagonal entries
𝑎11 , 𝑎22 , … , 𝑎𝑛𝑛 are zero, then rows or
columns must be interchanged to obtain a
coefficient matrix that has nonzero entries on
the main diagonal.

To obtain the Jacobi method, solve the first


equation for 𝑥1 , the 2nd equation for 𝑥2 and so
on, as follows.

$$x_1 = (b_1 - a_{12}x_2 - a_{13}x_3 - \cdots - a_{1n}x_n) / a_{11}$$
$$x_2 = (b_2 - a_{21}x_1 - a_{23}x_3 - \cdots - a_{2n}x_n) / a_{22}$$
$$\vdots$$
$$x_n = (b_n - a_{n1}x_1 - a_{n2}x_2 - \cdots - a_{n,n-1}x_{n-1}) / a_{nn}$$

Then make an initial approximation of the solution, $(x_1^{(0)}, x_2^{(0)}, x_3^{(0)}, \dots, x_n^{(0)})$, and substitute these values of $x_i^{(0)}$ into the right-hand side of the rewritten equations to obtain the first approximation. You can then substitute the first approximation's $x_i$ values into the equations again to get the second approximation. This procedure is repeated until the iterates converge to the actual solution.
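A minimal sketch of the iteration (NumPy assumed; the tolerance and iteration limit are illustrative choices, not prescribed values), applied to Example 1 below:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-8, max_iter=100):
    """Jacobi iteration for Ax = b (A must have nonzero diagonal entries)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(len(b)) if x0 is None else np.asarray(x0, dtype=float)
    D = np.diag(A)            # diagonal entries a_ii
    R = A - np.diagflat(D)    # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D      # every component uses only the previous iterate
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Example 1 below: 20x + y - 2z = 17, 3x + 20y - z = -18, 2x - 3y + 20z = 25
A = [[20, 1, -2], [3, 20, -1], [2, -3, 20]]
b = [17, -18, 25]
print(jacobi(A, b))   # approximately [ 1. -1.  1.]
```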

Example 1:
Use Jacobi’s method to solve the equations,
20x + y - 2z = 17
3x + 20y - z = -18
2x - 3y + 20z = 25
Example 2:
5x - 2y + 3z = -1
-3x + 9y + z = 2
2x - y - 7z = 3

3.2.2 Gauss-Seidel Method
• This is a modification of Jacobi's method.
• Carl F. Gauss (1777-1855) and Philipp L. Seidel (1821-1896) developed this method.
• This method is easier than Jacobi's method because it usually requires fewer iterations.
• This method generally gives more accurate results than the Jacobi method for the same number of iterations.
• With Jacobi's method, the values of 𝑥𝑖 obtained in the nth approximation remain unchanged until the entire (n+1)th approximation has been calculated.

• With the Gauss-Seidel method, on the other
hand, you use the new values of 𝑥𝑖 as soon as
they are known.
• That is, once you have determined 𝑥1 from the 1st equation, its value is then used in the 2nd equation to obtain the new 𝑥2 .
• Similarly, the new 𝑥1 and 𝑥2 are used in the 3rd equation to obtain the new 𝑥3 .
• This procedure is demonstrated in the sketch below and in the following examples.
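A minimal sketch of the update (NumPy assumed; the tolerance and iteration limit are illustrative). The only change from the Jacobi sketch is that each new $x_i$ is used immediately within the same sweep:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-8, max_iter=100):
    """Gauss-Seidel iteration for Ax = b (A must have nonzero diagonal entries)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated components x[0..i-1] and old components x[i+1..n-1]
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Example 1 below: 5x - 2y + 3z = -1, -3x + 9y + z = 2, 2x - y - 7z = 3
print(gauss_seidel([[5, -2, 3], [-3, 9, 1], [2, -1, -7]], [-1, 2, 3]))
# approximately [ 0.186  0.331 -0.423]
```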

Example 1:
By using Gauss-Seidel method solve the following
systems of linear equations.
5x-2y+3z=-1
-3x+9y+z=2
2x-y-7z=3
Example 2:
Apply Jacobi's and Gauss-Seidel methods to solve the following system.
x-5y=-4
7x-y=6

Condition for convergence (Strictly
Diagonally Dominant Matrix)

If A is strictly diagonally dominant, then for any choice of $x^{(0)}$, both the Jacobi and Gauss-Seidel methods give sequences $\{x^{(k)}\}_{k=1}^{\infty}$ that converge to the unique solution of $Ax = b$.

$$\sum_{\substack{j=1 \\ j \neq i}}^{n} \left| \frac{a_{ij}}{a_{ii}} \right| < 1, \qquad i = 1, 2, 3, \dots, n$$

i.e.

$$|a_{11}| > |a_{12}| + |a_{13}| + \cdots + |a_{1n}|$$
$$|a_{22}| > |a_{21}| + |a_{23}| + \cdots + |a_{2n}|$$
$$\vdots$$
$$|a_{nn}| > |a_{n1}| + |a_{n2}| + \cdots + |a_{n,n-1}|$$

Note:
• This is only a sufficient condition, not a necessary condition.
• This condition is called strict diagonal dominance.
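A minimal sketch of a check for this condition (NumPy assumed; the function name is only an illustrative choice), applied here to the coefficient matrix of the Jacobi example above:

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| (j != i) for every row i."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag_sums = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

# Coefficient matrix of 20x + y - 2z = 17, 3x + 20y - z = -18, 2x - 3y + 20z = 25
print(is_strictly_diagonally_dominant([[20, 1, -2],
                                       [3, 20, -1],
                                       [2, -3, 20]]))   # True
```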

Example:
Which of the following systems of linear equations has a strictly diagonally dominant coefficient matrix?
i. 3x - y = -4
   2x + 5y = 2

ii. 4x + 2y - z = -1
    x + 2z = -4
    3x - 5y + z = 3

