
SOLUTION OF LINEAR SYSTEM OF EQUATIONS

• In root finding, we solved for the value of x that satisfies f(x) = 0.

• Now, we are searching for the values x1, x2, ..., xn that simultaneously satisfy a set of n
equations:

  f1(x1, x2, ..., xn) = 0
  f2(x1, x2, ..., xn) = 0
  .
  .
  .
  fn(x1, x2, ..., xn) = 0

• When such a system of algebraic equations is linear, it can be written in the following open
form:

  a11 x1 + a12 x2 + ... + a1n xn = b1
  a21 x1 + a22 x2 + ... + a2n xn = b2
  .
  .
  .
  an1 x1 + an2 x2 + ... + ann xn = bn

• where the a's and b's are constants.
Mathematical Background
• [A] is the shorthand notation for a matrix, and aij designates the element at the i'th row
and j'th column of this matrix:

  [A] = [ a11  a12  ...  a1m ]
        [ a21  a22  ...  a2m ]
        [ ...               ]
        [ an1  an2  ...  anm ]

• A matrix with n rows and m columns is referred to as an "n by m" matrix.

• Matrices with n = 1 row are called row vectors.

• Matrices with m = 1 column are called column vectors.
Mathematical Background, cont.
• For vectors, one subscript is sufficient. For matrices, we need two subscripts.
• Matrices where n=m are called square matrices.
• The diagonal consisting of elements aii (a11, a22,…, ann) of the square matrix [A] is
termed the main diagonal.
• A square matrix with aij= aji is called a symmetric matrix.
  [A] = [  5   0  −1 ]        [B] = [ 1  2 ]
        [  0   2   4 ]              [ 2  0 ]
        [ −1   4   7 ]

• A square matrix with all elements off the main diagonal equal to zero is called a
diagonal matrix.

  [A] = [ a11   0   ...   0  ]
        [  0   a22  ...   0  ]   ,  aij = 0 if i ≠ j, for i, j = 1, 2, ..., n
        [ ...                ]
        [  0    0   ...  ann ]
Mathematical Background, cont.
• The diagonal matrix with all elements on the main diagonal equal to
1 is called the identity matrix:

  [I] = [ 1  0  ...  0 ]
        [ 0  1  ...  0 ]
        [ ...          ]
        [ 0  0  ...  1 ]

• The matrix with all elements below the main diagonal equal to zero
is called an upper triangular matrix:

  aij = 0 if i > j, for i, j = 1, 2, ..., n
Mathematical Background, cont.
• The matrix with all elements above the main diagonal equal to zero
is called a lower triangular matrix:

  aij = 0 if i < j, for i, j = 1, 2, ..., n

• Addition of two matrices: [C] = [A] + [B], where cij = aij + bij

• Subtraction of two matrices: [C] = [A] − [B], where cij = aij − bij

• Multiplication of a matrix by a scalar g: [C] = g[A], where cij = g aij
Mathematical Background, cont.
• The product of two matrices, [C]m×n = [A]m×p [B]p×n, is carried out as:

  cij = Σ (k = 1 to p) aik bkj ,   i = 1, 2, ..., m,  j = 1, 2, ..., n

• (the i'th row elements of [A] are multiplied with the j'th column elements of [B] and summed)
• The number of columns of [A] has to be equal to the number of rows of [B].
• Example
• If , calculate

• Solution

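A small sketch of the row-by-column product rule in Python may help here; the matrices A and B below are illustrative values chosen for this note, not taken from the slide.

```python
# Illustration of the matrix product rule c_ij = sum_k a_ik * b_kj.
# A and B are example (hypothetical) matrices.

def matmul(A, B):
    """Multiply an (m x p) matrix A by a (p x n) matrix B."""
    m, p, n = len(A), len(B), len(B[0])
    assert all(len(row) == p for row in A), "columns of A must equal rows of B"
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):            # i'th row of [A]
        for j in range(n):        # j'th column of [B]
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(p))
    return C

A = [[3.0, 1.0],
     [8.0, 6.0],
     [0.0, 4.0]]    # 3 x 2
B = [[5.0, 9.0],
     [7.0, 2.0]]    # 2 x 2

print(matmul(A, B))  # 3 x 2 result
```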
Mathematical Background, cont.
• Matrix division is not a defined matrix operation, therefore a
matrix cannot be divided by another. If a matrix [A] is square
and nonsingular, the inverse of [A] is defined by:

  [A][A]^-1 = [A]^-1[A] = [I]

• Thus, multiplication of a matrix by the inverse of another is
analogous to division.
• The inverse of a 2×2 matrix can be obtained as:

  [A]^-1 = 1/(a11 a22 − a12 a21) [  a22  −a12 ]
                                 [ −a21   a11 ]

• The transpose of a matrix is defined as:
  [B] = [A]^T, where bij = aji
• (Therefore, the transpose of a symmetric matrix is the matrix itself.)
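A minimal sketch of the 2×2 inverse formula above, applied to the symmetric matrix [B] shown earlier:

```python
# 2x2 inverse by the closed-form formula:
# [A]^-1 = 1/(a11*a22 - a12*a21) * [[ a22, -a12], [-a21, a11]]

def inverse_2x2(A):
    a11, a12 = A[0]
    a21, a22 = A[1]
    det = a11 * a22 - a12 * a21
    if det == 0.0:
        raise ValueError("matrix is singular, no inverse exists")
    return [[ a22 / det, -a12 / det],
            [-a21 / det,  a11 / det]]

A = [[1.0, 2.0],
     [2.0, 0.0]]          # the symmetric 2x2 matrix [B] from an earlier slide
Ainv = inverse_2x2(A)
print(Ainv)               # multiplying A by Ainv recovers the identity matrix
```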
Mathematical Background, cont.
• A column vector can be converted to a row vector by the
transpose operation.

• Appending the columns of two given matrices is called
matrix augmentation. The augmentation of [A]3×3 with [I]3×3 is:

  [ a11  a12  a13 | 1  0  0 ]
  [ a21  a22  a23 | 0  1  0 ]
  [ a31  a32  a33 | 0  0  1 ]
Mathematical Background, cont.
• A system of linear algebraic equations can be represented in
matrix form:

  [ a11  a12  ...  a1n ] { x1 }   { b1 }
  [ a21  a22  ...  a2n ] { x2 } = { b2 }
  [ ...                ] { ...}   { ...}
  [ an1  an2  ...  ann ] { xn }   { bn }

• or, [A]{X} = {B}
Mathematical Background, cont.
• The solution of this equation system can be obtained after
calculating the inverse of [A], as follows:

  [A]{X} = {B}
  [A]^-1 [A]{X} = [A]^-1 {B}
  {X} = [A]^-1 {B}

• Therefore, the equation system is solved and the unknowns/variables
are obtained.
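A short NumPy sketch of {X} = [A]^-1{B}, using the 3×3 system that appears later in Example 9.5; note that in practice np.linalg.solve is usually preferred over forming the inverse explicitly.

```python
import numpy as np

# The 3x3 system from Example 9.5 later in these slides.
A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
B = np.array([7.85, -19.3, 71.4])

X_inverse = np.linalg.inv(A) @ B    # {X} = [A]^-1 {B}, as on the slide
X_solve   = np.linalg.solve(A, B)   # usually preferred: no explicit inverse

print(X_inverse)   # both give approximately [3.0, -2.5, 7.0]
print(X_solve)
```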
Non-Computer Methods For the Solution of
Linear System of Equations
• For small linear equation systems (n≤3) we can use the graphical
method, Cramer’s rule and the elimination of unknowns.
• The Graphical Method
• Let's consider a system of equations with n = 2:

  a11 x1 + a12 x2 = b1
  a21 x1 + a22 x2 = b2

• Solve both equations for x2:

  x2 = −(a11/a12) x1 + b1/a12
  x2 = −(a21/a22) x1 + b2/a22

• Thus, we have transformed the solution of this equation system into
finding the intersection point of two lines.
Example 9.1
• Use the graphical method to solve

  3x1 + 2x2 = 18
  −x1 + 2x2 = 2

• Solution
• Plotting both equations as lines in the (x1, x2) plane, the intersection point is observed
at x1 = 4 and x2 = 3.
• We can verify the solution by substituting it into the equation system:

  3(4) + 2(3) = 18 ✔   −(4) + 2(3) = 2 ✔

• (The 1st equation is satisfied along one line and the 2nd equation along the other;
at the intersection point, both equations are satisfied simultaneously.)
Example 1
• Consider the following equation system with
three equations (n = 3):

  3x1 + 5x2 − 3x3 = 3
  x1 − 2x2 − x3 = −5
  2x1 + 4x2 − 7x3 = 10

• This system has a 3D graphical representation,
with the solution corresponding to the
intersection point of the 3 planes:

  x3 = (1/3)(−3 + 3x1 + 5x2)
  x3 = 5 + x1 − 2x2
  x3 = (2/7)(−5 + x1 + 2x2)

• (Systems with n > 3 have no graphical representation!)
Non-Computer Methods For the Solution of
Linear System of Equations, cont.
• A linear equation system may not have a unique solution, or may have no
solution at all.
• These cases are shown graphically for a system of n = 2 equations:

  No solution (singular): there is no (x1, x2) that satisfies both equations.
  Infinite number of solutions (singular): for any x1 there is an x2 that satisfies both equations.
  Ill-conditioned (almost singular): the intersection point where the solution lies is difficult to obtain precisely.
Non-Computer Methods For the Solution of
Linear System of Equations, cont.
• Determinants and the Cramer’s Rule
• This method depends on calculating the determinants of matrices.
• Let's consider [A]{X} = {B}, where [A] is a 3×3 coefficient matrix.
• The determinant of [A] is written as:

  |A| = | a11  a12  a13 |
        | a21  a22  a23 |
        | a31  a32  a33 |

• The determinant is a scalar quantity. For an n = 2 matrix it can be calculated
as:

  |A| = | a11  a12 | = a11 a22 − a12 a21
        | a21  a22 |
Non-Computer Methods For the Solution of
Linear System of Equations, cont.
• For an n = 3 matrix, the determinant can be calculated as:

  |A| = | a11  a12  a13 |
        | a21  a22  a23 |
        | a31  a32  a33 |

      = a11 | a22  a23 | − a12 | a21  a23 | + a13 | a21  a22 |
            | a32  a33 |       | a31  a33 |       | a31  a32 |

      = a11 (a22 a33 − a23 a32) − a12 (a21 a33 − a23 a31) + a13 (a21 a32 − a22 a31)

• The 2×2 determinants are the minor determinants, or minors, of the
3×3 determinant.
Example 9.2
• Compute determinants of the singular and ill-conditioned systems
presented in the graphical approach

• The "no solution" case:

  −(1/2) x1 + x2 = 1
  −(1/2) x1 + x2 = 1/2

  D = | −1/2  1 | = (−1/2)(1) − (1)(−1/2) = 0
      | −1/2  1 |
Example 9.2, cont.

• The "infinite number of solutions" case:

  −(1/2) x1 + x2 = 1
  −x1 + 2x2 = 2

  D = | −1/2  1 | = (−1/2)(2) − (1)(−1) = 0
      | −1    2 |

• The ill-conditioned case:

  −(2.3/5) x1 + x2 = 1.1
  −(1/2) x1 + x2 = 1

  D = | −2.3/5  1 | = (−2.3/5)(1) − (1)(−1/2) = 0.04
      | −1/2    1 |
Non-Computer Methods For the Solution of
Linear System of Equations, cont.
• Singular systems have coefficient matrix with its determinant equal to zero.
• The ill-conditioned systems have determinants close to zero (when scaled).

• Cramer’s Rule
• We can solve the following 3×3 equation system

  [ a11  a12  a13 ] { x1 }   { b1 }
  [ a21  a22  a23 ] { x2 } = { b2 } ,   [A]{X} = {B}
  [ a31  a32  a33 ] { x3 }   { b3 }

• as follows:

       | b1  a12  a13 |        | a11  b1  a13 |        | a11  a12  b1 |
       | b2  a22  a23 |        | a21  b2  a23 |        | a21  a22  b2 |
       | b3  a32  a33 |        | a31  b3  a33 |        | a31  a32  b3 |
  x1 = ----------------- , x2 = ----------------- and x3 = -----------------
             |A|                      |A|                       |A|
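A minimal Python sketch of Cramer's rule, assuming NumPy for the determinants; the 2×2 values at the bottom are just a small illustration.

```python
import numpy as np

def cramer(A, b):
    """Solve [A]{x} = {b} by Cramer's rule (practical only for small n)."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    D = np.linalg.det(A)
    if abs(D) < 1e-12:
        raise ValueError("determinant is (near) zero: singular system")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b              # replace the i'th column by {b}
        x[i] = np.linalg.det(Ai) / D
    return x

# Small 2x2 illustration (the system of Example 9.1):
print(cramer([[3.0, 2.0], [-1.0, 2.0]], [18.0, 2.0]))   # -> [4. 3.]
```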
Example 9.3
• Use Cramer's rule to solve

  0.3x1 + 0.52x2 + x3 = −0.01
  0.5x1 + x2 + 1.9x3 = 0.67
  0.1x1 + 0.3x2 + 0.5x3 = −0.44

• Solution

  |A| = | 0.3  0.52  1   | = 0.3 | 1    1.9 | − 0.52 | 0.5  1.9 | + 1 | 0.5  1   | = −0.0022
        | 0.5  1     1.9 |       | 0.3  0.5 |        | 0.1  0.5 |     | 0.1  0.3 |
        | 0.1  0.3   0.5 |

       | −0.01  0.52  1   |
       | 0.67   1     1.9 |
       | −0.44  0.3   0.5 |     0.03278
  x1 = --------------------- = --------- = −14.9
               |A|             −0.0022

       | 0.3  −0.01  1   |
       | 0.5   0.67  1.9 |
       | 0.1  −0.44  0.5 |     0.0649
  x2 = --------------------- = --------- = −29.5
               |A|             −0.0022

       | 0.3  0.52  −0.01 |
       | 0.5  1      0.67 |
       | 0.1  0.3   −0.44 |    −0.04356
  x3 = --------------------- = --------- = 19.8
               |A|             −0.0022

• (This method is impractical for large systems of equations (n > 3),
since the calculation of determinants is time consuming.)
Non-Computer Methods For the Solution of
Linear System of Equations, cont.
• The Elimination of Unknowns
• Let's consider the equation system:

  a11 x1 + a12 x2 = b1
  a21 x1 + a22 x2 = b2

• The basic strategy of this method is to multiply the equations by
constants so that one of the unknowns is eliminated when the
equations are combined.
• For example, multiply the first equation by a21 and the second equation
by a11:

  a21 a11 x1 + a21 a12 x2 = a21 b1
  a11 a21 x1 + a11 a22 x2 = a11 b2

• Subtracting one from the other, we have:

  (a11 a22 − a21 a12) x2 = a11 b2 − a21 b1   →   x2 = (a11 b2 − a21 b1) / (a11 a22 − a21 a12)
Non-Computer Methods For the Solution of
Linear System of Equations, cont.
• This result can be substituted into either of the two equations to
obtain:

  x1 = (a22 b1 − a12 b2) / (a11 a22 − a21 a12)

• We would obtain the same results with Cramer's rule:

       | b1  a12 |                              | a11  b1 |
       | b2  a22 |    a22 b1 − a12 b2           | a21  b2 |    a11 b2 − a21 b1
  x1 = ----------- = -------------------- , x2 = ----------- = --------------------
       | a11 a12 |    a11 a22 − a21 a12         | a11 a12 |    a11 a22 − a21 a12
       | a21 a22 |                              | a21 a22 |
Example 9.4
• Solve

  3x1 + 2x2 = 18
  −x1 + 2x2 = 2

• by using the elimination of unknowns.

• Solution
• Multiply the second equation by 3:  −3x1 + 6x2 = 6
• Sum the equations:  8x2 = 24  →  x2 = 3
• Use the first equation to solve for x1:  3x1 = 18 − 2(3) = 12  →  x1 = 4
Computer Methods
Naive Gauss Elimination
• The method is used to solve the linear system of algebraic
equations:

  a11 x1 + a12 x2 + ... + a1n xn = b1
  a21 x1 + a22 x2 + ... + a2n xn = b2
  ...
  an1 x1 + an2 x2 + ... + ann xn = bn

• This technique consists of two phases: first the forward elimination of
unknowns, and second the solution through back substitution.
Naive Gauss Elimination,
Forward elimination of unknowns
• Let's multiply the first equation by (a21/a11):

  a21 x1 + (a21/a11) a12 x2 + (a21/a11) a13 x3 + ... + (a21/a11) a1n xn = (a21/a11) b1

• Then subtract this equation from the second:

  (a22 − (a21/a11) a12) x2 + (a23 − (a21/a11) a13) x3 + ... + (a2n − (a21/a11) a1n) xn = b2 − (a21/a11) b1

• or

  a'22 x2 + a'23 x3 + ... + a'2n xn = b'2

  where the prime indicates that the element has been modified once.

• The procedure is repeated by multiplying the first equation by
(a31/a11) and then subtracting it from the third equation.
• Repeating the same procedure for all equations gives:

  a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
           a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
           a'32 x2 + a'33 x3 + ... + a'3n xn = b'3
           .
           .
           .
           a'n2 x2 + a'n3 x3 + ... + a'nn xn = b'n
Naive Gauss Elimination,
Forward elimination of unknowns, cont.

• Now we continue the elimination
for the second unknown, x2:

  a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
           a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
                     a''33 x3 + ... + a''3n xn = b''3
                     .
                     .
                     .
                     a''n3 x3 + ... + a''nn xn = b''n

• The double prime means that
the element has been modified
twice.
Naive Gauss Elimination,
Forward elimination of unknowns, cont.
• When we continue with the
remaining pivot equations, we
have:

  a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
           a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
                     a''33 x3 + ... + a''3n xn = b''3
                     .
                     .
                     .
                               ann^(n−1) xn = bn^(n−1)

• Now the system has been
transformed to upper triangular
form.
Naive Gauss Elimination,
Back substitution
• The last one of these equations can be solved to give:

  xn = bn^(n−1) / ann^(n−1)

• The result can be back substituted into the (n−1)th equation to solve
for xn−1.

• In general, we can write:

  xi = ( bi^(i−1) − Σ (j = i+1 to n) aij^(i−1) xj ) / aii^(i−1) ,   for i = n, n−1, ..., 1
Naive Gauss Elimination,
Back substitution
• The method is called "naive" since it does not account for the
possibility that the pivot terms in the forward elimination
phase might be equal to zero (division by a pivot element that is
equal to zero causes a division-by-zero error during the
forward elimination!).
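A minimal Python sketch of the two phases described above (forward elimination followed by back substitution); it uses no pivoting, so it fails if a zero pivot is encountered.

```python
def naive_gauss(A, b):
    """Solve [A]{x} = {b} by naive Gauss elimination (no pivoting).

    A minimal sketch: it divides by zero if any pivot A[k][k] is zero.
    """
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]

    # Forward elimination: zero out the entries below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]

    # Back substitution: solve for x_n, then x_(n-1), ..., x_1.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```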
Example 9.5
• Use Gauss elimination to solve

  3x1 − 0.1x2 − 0.2x3 = 7.85
  0.1x1 + 7x2 − 0.3x3 = −19.3
  0.3x1 − 0.2x2 + 10x3 = 71.4

• Solution
• Multiply the first equation by 0.1/3 and 0.3/3 and subtract it from the
second and third equations, respectively:

  3x1 − 0.1x2 − 0.2x3 = 7.85
        7.00333x2 − 0.293333x3 = −19.5617
       −0.190000x2 + 10.0200x3 = 70.6150

• Multiply the second equation by (−0.190000/7.00333) and subtract it from the
third equation:

  3x1 − 0.1x2 − 0.2x3 = 7.85
        7.00333x2 − 0.293333x3 = −19.5617
                     10.0120x3 = 70.0843

• We can now solve for the unknowns by using back substitution:

  x3 = 70.0843 / 10.0120 = 7.00003
  x2 = (−19.5617 + 0.293333 × 7.00003) / 7.00333 = −2.50001
  x1 = (7.85 + 0.1 × (−2.50001) + 0.2 × 7.00003) / 3 = 3.00000

• (The results can be verified by substitution into the original set of equations.)
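Assuming the naive_gauss sketch inserted earlier (not part of the original slides), the hand computation of Example 9.5 can be reproduced directly:

```python
A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]

print(naive_gauss(A, b))
# approximately [3.0, -2.5, 7.0]; the slide's 6-significant-figure
# hand values are 3.00000, -2.50001 and 7.00003
```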
Pitfalls of Elimination Methods
• Division by zero: During elimination and back substitution it is possible that a
division by zero can occur, e.g.

  [ 0  1  2 ] { x1 }   { 1 }
  [ 2  3  5 ] { x2 } = { 2 } ,   the pivot element in the first row is equal to zero!
  [ 5  0  4 ] { x3 }   { 0 }

• Round-off errors: A large number of arithmetic operations are carried out, so
round-off errors can accumulate to a large magnitude. To reduce these errors we can use more
significant figures in hand calculations; in computer programs, we can use
double precision variables.

• Ill-conditioned systems: For these systems, small changes in the coefficients result in
large variations in the results. Round-off errors induce small changes in the
coefficients, which can lead to large solution errors for ill-conditioned systems.
Example 9.6

• Consider the ill-conditioned system

  x1 + 2x2 = 10
  1.1x1 + 2x2 = 10.4

• The solution of this system (e.g. by Cramer's rule) is:

  x1 = (2 × 10 − 2 × 10.4) / (1 × 2 − 2 × 1.1) = 4
  x2 = (1 × 10.4 − 1.1 × 10) / (1 × 2 − 2 × 1.1) = 3

• If we change the coefficient of x1 in the
second equation from 1.1 to 1.05, we
have:

  x1 = (2 × 10 − 2 × 10.4) / (1 × 2 − 2 × 1.05) = 8
  x2 = (1 × 10.4 − 1.05 × 10) / (1 × 2 − 2 × 1.05) = 1
Example 9.7, Effect of Scaling on the
Determinant
• Let's consider the same ill-conditioned system:

  x1 + 2x2 = 10
  1.1x1 + 2x2 = 10.4

• The determinant is: D = 1 × 2 − 2 × 1.1 = −0.2
• Let's multiply both sides of each equation in the system by 10 (the solution of the
system won't change!):

  10x1 + 20x2 = 100
  11x1 + 20x2 = 104

• Now the determinant is: D = 10 × 20 − 20 × 11 = −20
• A large determinant does not necessarily mean that the system is well-conditioned;
therefore, it is difficult to specify how close to zero the determinant must be to
indicate ill-conditioning.
• A way to scale the coefficients of a matrix is to divide each equation by the largest
of its coefficients, so that the maximum coefficient in any row becomes 1. Then, the
determinant of the scaled matrix can give an idea of the condition of this matrix.
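A small NumPy sketch of this scaling idea, applied to the ill-conditioned system above; scaled_det is an illustrative helper name, not from the slides.

```python
import numpy as np

def scaled_det(A):
    """Determinant after dividing each row by its largest-magnitude coefficient."""
    A = np.asarray(A, float)
    scale = np.abs(A).max(axis=1, keepdims=True)
    return np.linalg.det(A / scale)

A = np.array([[1.0, 2.0],
              [1.1, 2.0]])          # the ill-conditioned system above
print(np.linalg.det(A))             # -0.2
print(np.linalg.det(10 * A))        # -20: the raw magnitude alone is not informative
print(scaled_det(A))                # -0.05: close to zero -> hints at ill-conditioning
```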
Example 9.8
• First, scale and then calculate the determinants of the following systems of
equations:
• and

• Solution
• The scaling of the first system of equations gives:
• , (an indication of a well-conditioned system)

• The scaling of the second system of equations gives:

• , (an indication of an ill-conditioned system)
Determinant Evaluation by Gauss Elimination
• Calculating determinants by cofactor-expansion-based methods
is numerically expensive and impractical for large systems.
• Fortunately, determinants can be calculated by the Gauss
elimination method, since the value of the determinant does not
change during the forward elimination process.
• Then, we use the fact that a triangular matrix has a determinant
equal to the product of its diagonal elements, e.g.

  D = | a11  a12  a13 |
      |  0   a22  a23 | = a11 | a22  a23 | − a12 | 0  a23 | + a13 | 0  a22 |
      |  0    0   a33 |       |  0   a33 |       | 0  a33 |       | 0   0  |

  D = a11 a22 a33
Example 2
• Calculate the determinant of [A] by the Gauss elimination method:

        [ 0.8  0.4  0.2  0.2  0.6 ]
        [ 0.1  0.3  0.8  1.   0.4 ]
  [A] = [ 1.   0.3  0.4  0.4  0.7 ]
        [ 0.9  0.8  0.1  0.8  0.  ]
        [ 0.5  0.6  0.5  0.9  0.5 ]

• Solution
• The forward elimination gives:

  [ 0.8  0.4   0.2    0.2        0.6      ]
  [ 0    0.25  0.775  0.975      0.325    ]
  [ 0    0     0.77   0.93       0.21     ]
  [ 0    0     0      0.671429  −0.8      ]
  [ 0    0     0      0          0.182398 ]

  |A| = 0.8 × 0.25 × 0.77 × 0.671429 × 0.182398 = 0.018860
Improving the Solutions by Pivoting
• As mentioned before, a division-by-zero error occurs when a pivot element is
zero.
• Problems may also arise when a pivot term is small, i.e. close to zero.
• Therefore, the largest element (in magnitude) below the pivot is determined, and
the pivot row and this row are switched. This is called partial pivoting.
• If columns as well as rows are searched for the largest element, it is called
complete pivoting.
• Complete pivoting changes the order of the variable vector {X} by switching
columns. One would have to track this change in the ordering of the variables in the
calculations; therefore, it is not commonly used.
• Pivoting minimizes round-off errors, and it avoids division-by-zero
errors.
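A minimal Python sketch of Gauss elimination with partial pivoting: before eliminating column k, the row with the largest-magnitude entry in that column (at or below the pivot row) is swapped into the pivot position.

```python
def gauss_partial_pivot(A, b):
    """Solve [A]{x} = {b} by Gauss elimination with partial pivoting (sketch)."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]

    for k in range(n - 1):
        # Partial pivoting: pick the largest |A[i][k]| for i >= k as the pivot row.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        # Forward elimination below the pivot.
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]

    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```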
Example 9.9
• Use the "naive" Gauss elimination to solve

  0.0003x1 + 3.0000x2 = 2.0001
  1.0000x1 + 1.0000x2 = 1.0000

• Then, repeat the computation with partial pivoting.
• Solution
• The system is not ill-conditioned; after scaling we have:

  0.0001x1 + 1.0000x2 = 0.6667
  1.0000x1 + 1.0000x2 = 1.0000

  D = 0.0001 × 1 − 1 × 1 = −0.9999

• Solution without pivoting:
• Multiply the first equation by 1/(0.0003) and subtract it from the second:

  −9999x2 = −6666   →   x2 = 2/3

• Substituting this result into the first equation and solving for x1, we have:

  x1 = (2.0001 − 3x2) / 0.0003

• Using 3 significant figures in the computation:  x2 = 0.667,  x1 = (2.0001 − 3 × 0.667) / 0.0003 = −3
• Using 4 significant figures in the computation:  x2 = 0.6667,  x1 = (2.0001 − 3 × 0.6667) / 0.0003 = 0
• (The true solution is x1 = 1/3 and x2 = 2/3.)
Example 9.9, cont.
• Solution with pivoting:
• The second equation has the largest pivot element, so let's switch the first
and second equations:

  1.0000x1 + 1.0000x2 = 1.0000
  0.0003x1 + 3.0000x2 = 2.0001

• Elimination and substitution gives:

  2.9997x2 = 1.9998   →   x2 = 1.9998 / 2.9997 = 2/3

• Substituting this result into the first equation and solving for x1, we have:

  x1 = 1 − x2

• Using 3 significant figures in the computation:  x2 = 0.667,  x1 = 1 − 0.667 = 0.333
• Using 4 significant figures in the computation:  x2 = 0.6667,  x1 = 1 − 0.6667 = 0.3333

Thus, the pivoting strategy gives much better results!
Example 3
• Solve the following linear algebraic equation system by Gauss elimination with
partial pivoting:

  [ −58.6245   60.7389   −3.32716   43.0436 ] { x1 }   {  66.3975 }
  [ −98.8004   35.6892   63.0640   −41.9285 ] { x2 }   { −91.6913 }
  [ −92.0571  −48.0621   89.7516    63.2268 ] { x3 } = {  28.0886 }
  [ −51.4705   62.6057   26.1655   −49.7956 ] { x4 }   {  44.8099 }
Example 3, cont.
• Switch the 1st and 2nd rows (pivoting), since the 2nd row has the largest
first-column element in magnitude.
• Forward elimination of x1.
• Switch the 2nd and 3rd rows (pivoting).
• Forward elimination of x2.
• No switching is required before eliminating x3, since the pivot term is
already the largest (in magnitude).
• Forward elimination of x3.
• Back substitution then gives:

  x1 = 5.97332,  x2 = 5.09664,  x3 = 7.03448,  x4 = 3.02997
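For comparison, the same system can be run through the gauss_partial_pivot sketch inserted earlier (that function is an assumption of this note, not part of the slides); the coefficient signs are as reconstructed above.

```python
A = [[-58.6245,  60.7389,  -3.32716,  43.0436],
     [-98.8004,  35.6892,   63.0640, -41.9285],
     [-92.0571, -48.0621,   89.7516,  63.2268],
     [-51.4705,  62.6057,   26.1655, -49.7956]]
b = [66.3975, -91.6913, 28.0886, 44.8099]

print(gauss_partial_pivot(A, b))
# approximately [5.97332, 5.09664, 7.03448, 3.02997], as on the slide
```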
Example 4 (determinant with partial pivoting)
• Calculate the determinant of matrix [A] using Gauss elimination with partial pivoting.

• While the matrix is converted to upper-triangular form, row changes are
made (partial pivoting). The determinant of [A] is then calculated from the diagonal
elements u11, u22, ..., unn of the resulting upper triangular matrix as:

  det[A] = (−1)^s × u11 × u22 × ... × unn

• where s is the number of row changes.
Example 4, cont.
• The row-change counter increases as the elimination proceeds: s = 1 after the
first switch, s = 2 after the second, and s = 3 after the third (final) switch.

• The determinant is:

  det[A] = (−1)^3 × (−8.51793) × (−8.96842) × 8.63889 × (−10.7374) = 7086.10
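A minimal Python sketch of the determinant computed this way: eliminate with partial pivoting, count the row swaps s, and multiply the diagonal product by (−1)^s.

```python
def det_gauss_pivot(A):
    """Determinant via Gauss elimination with partial pivoting (sketch)."""
    A = [row[:] for row in A]
    n = len(A)
    swaps = 0                      # s = number of row changes

    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0.0:
            return 0.0             # entire column below the pivot is zero: singular
        if p != k:
            A[k], A[p] = A[p], A[k]
            swaps += 1
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]

    det = (-1.0) ** swaps
    for k in range(n):
        det *= A[k][k]             # product of the diagonal of the triangular matrix
    return det
```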
Example 5
• Three masses are suspended vertically by a series of identical
springs, where mass 1 is at the top and mass 3 is at the bottom.
If g = 9.81 m/s², m1 = 2 kg, m2 = 3 kg, m3 = 2.5 kg and each spring
constant k is 10 kg/s², solve for the vertical displacements x1, x2 and x3.

(Figure: masses m1, m2, m3 hanging in series on springs k1, k2, k3,
with downward displacements x1, x2, x3.)
Example 5, cont.
• Let's write the force equilibrium for each mass:

  k1 x1 = (m1 + m2 + m3) g
  k2 (x2 − x1) = (m2 + m3) g
  k3 (x3 − x2) = m3 g

• In matrix form:

  [  k1    0    0  ] { x1 }   { (m1 + m2 + m3) g }
  [ −k2   k2    0  ] { x2 } = { (m2 + m3) g      }
  [  0   −k3   k3  ] { x3 }   { m3 g             }

• For the given values of the parameters, we have:

  [  10    0    0 ] { x1 }   { 73.575 }
  [ −10   10    0 ] { x2 } = { 53.955 }
  [  0   −10   10 ] { x3 }   { 24.525 }

• Using Gauss elimination, we obtain:

  x1 = 7.3575 m,  x2 = 12.753 m  and  x3 = 15.2055 m
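The same system can be checked quickly with NumPy; np.linalg.solve performs an elimination-based (LU) solve internally, so it reproduces the Gauss elimination result above.

```python
import numpy as np

k = 10.0                        # spring constant, kg/s^2
g = 9.81                        # gravitational acceleration, m/s^2
m1, m2, m3 = 2.0, 3.0, 2.5      # masses, kg

A = np.array([[ k,   0.0, 0.0],
              [-k,   k,   0.0],
              [ 0.0, -k,  k  ]])
b = np.array([(m1 + m2 + m3) * g,
              (m2 + m3) * g,
              m3 * g])

print(np.linalg.solve(A, b))    # approximately [ 7.3575  12.753  15.2055]
```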
Gauss-Jordan Method
• This method is a variation of the Gauss elimination method. The
major difference is that when an unknown is eliminated, it is
eliminated from all equations, not only from the ones below the pivot.
The method is illustrated in the following example.
• Example
• Solve the following system of linear algebraic equations by using the Gauss-
Jordan method:

  3x1 − 0.1x2 − 0.2x3 = 7.85
  0.1x1 + 7x2 − 0.3x3 = −19.3
  0.3x1 − 0.2x2 + 10x3 = 71.4

• Use six significant figures in the computations.
Gauss-Jordan Method, cont.
• Solution
• The augmented form of this system is:

  [ 3     −0.1    −0.2  |   7.85 ]
  [ 0.1    7      −0.3  | −19.3  ]
  [ 0.3   −0.2    10    |  71.4  ]

• Normalize the first row by dividing it by the pivot element (3):

  [ 1     −0.0333333  −0.0666667 |   2.61667 ]
  [ 0.1    7          −0.3       | −19.3     ]
  [ 0.3   −0.2        10         |  71.4     ]

• Forward elimination of x1:

  [ 1     −0.0333333  −0.0666667 |   2.61667 ]
  [ 0      7.00333    −0.293333  | −19.5617  ]
  [ 0     −0.190000   10.0200    |  70.6150  ]
Gauss-Jordan Method, cont.
• Normalize the second row by dividing it by 7.00333:

  [ 1   −0.0333333  −0.0666667 |   2.61667 ]
  [ 0    1          −0.0418848 |  −2.79320 ]
  [ 0   −0.190000   10.0200    |  70.6150  ]

• Eliminate x2 from the first and third equations:

  [ 1    0   −0.0680629 |   2.52356 ]
  [ 0    1   −0.0418848 |  −2.79320 ]
  [ 0    0   10.0120    |  70.0843  ]

• Normalize the third row by dividing it by 10.0120:

  [ 1    0   −0.0680629 |   2.52356 ]
  [ 0    1   −0.0418848 |  −2.79320 ]
  [ 0    0    1         |   7.00003 ]

• Eliminate x3 from the first and second equations:

  [ 1    0    0 |   3.00000 ]
  [ 0    1    0 |  −2.50001 ]
  [ 0    0    1 |   7.00003 ]

  (Identity matrix)   (Solution)
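A minimal Python sketch of the Gauss-Jordan procedure on the augmented matrix (no pivoting, as in the example above): normalize each pivot row, then eliminate that unknown from all other rows.

```python
def gauss_jordan(A, b):
    """Solve [A]{x} = {b} by Gauss-Jordan elimination (sketch, no pivoting)."""
    n = len(b)
    # Build the augmented matrix [A | b].
    aug = [row[:] + [bi] for row, bi in zip(A, b)]

    for k in range(n):
        # Normalize the pivot row so the pivot becomes 1.
        pivot = aug[k][k]
        aug[k] = [v / pivot for v in aug[k]]
        # Eliminate x_k from all other rows (above and below the pivot).
        for i in range(n):
            if i != k:
                factor = aug[i][k]
                aug[i] = [vi - factor * vk for vi, vk in zip(aug[i], aug[k])]

    return [row[-1] for row in aug]   # the last column now holds the solution

A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
print(gauss_jordan(A, b))   # approximately [3.0, -2.5, 7.0]
```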
