
Systems of Linear Equations

Last time, we found that solving equations such as Poisson’s equation or Laplace’s equation on a grid is equivalent to solving a system of linear equations.

There are many other examples where systems of linear equations appear, such as eigenvalue problems. In this lecture, we look into different approaches to solving systems of linear equations (SLE’s).

SLE:
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n &= b_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + \cdots + a_{3n}x_n &= b_3 \\
&\;\;\vdots \\
a_{m1}x_1 + a_{m2}x_2 + a_{m3}x_3 + \cdots + a_{mn}x_n &= b_m
\end{aligned}
\]
with n unknowns $x_j$ and m equations; the $a_{ij}$ and $b_i$ are known.

Winter Semester 2006/7 Computational Physics I Lecture 8 1


SLE’s
If m < n, there is not a unique solution for the x’s (underconstrained). If m > n, the system can be overconstrained, and usually the job is to find the best set {x} to represent the set of equations. We will consider here square matrices, m = n, with det A ≠ 0, in which case the equations are linearly independent and there is a unique solution.

Matrix representation:
\[
A\vec{x} = \vec{b}, \qquad
A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix},
\qquad
\vec{b} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}
\]

Winter Semester 2006/7 Computational Physics I Lecture 8 2


SLE’s
Note that if the matrix is triangular (here, lower triangular), then the solution is easy:
\[
\begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}
\]
\[
a_{11}x_1 = b_1 \;\Rightarrow\; x_1 = b_1/a_{11}, \qquad
a_{21}x_1 + a_{22}x_2 = b_2 \;\Rightarrow\; x_2 = \frac{b_2 - a_{21}b_1/a_{11}}{a_{22}}, \qquad \ldots
\]
The goal of the ‘Gaussian elimination’ method is to bring A into (upper) triangular form.
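The solution process written out above is just forward substitution. Here is a minimal Fortran sketch of it (not from the lecture; the matrix values and variable names are purely illustrative):

! Minimal sketch: forward substitution for a lower-triangular system A x = b.
! Assumes all diagonal elements a(i,i) are nonzero; values are illustrative.
program forward_substitution
   implicit none
   integer, parameter :: n = 3
   double precision :: a(n,n), b(n), x(n)
   integer :: i, j

   a = 0.d0                          ! example lower-triangular matrix
   a(1,1) = 2.d0
   a(2,1) = 1.d0;  a(2,2) = 3.d0
   a(3,1) = 1.d0;  a(3,2) = 1.d0;  a(3,3) = 4.d0
   b = (/ 2.d0, 4.d0, 6.d0 /)        ! chosen so that x = (1,1,1)

   do i = 1, n                       ! x(i) = (b(i) - sum_{j<i} a(i,j)*x(j)) / a(i,i)
      x(i) = b(i)
      do j = 1, i-1
         x(i) = x(i) - a(i,j)*x(j)
      enddo
      x(i) = x(i)/a(i,i)
   enddo

   print *, 'x =', x                 ! expect 1, 1, 1
end program forward_substitution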

Winter Semester 2006/7 Computational Physics I Lecture 8 3


SLE’s
\[
A\vec{x} = \vec{b}:\qquad
\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}
\]
Note:
1. Interchanging two rows of A and the same rows of b gives the same set of equations.
2. The solution is not changed if a row is replaced with a linear combination of itself and another row, as long as the b’s are treated in the same way. E.g.,
\[
a_{ij} \to k_i a_{ij} + k_{i'} a_{i'j}, \quad j = 1,\ldots,n, \qquad
b_i \to k_i b_i + k_{i'} b_{i'}.
\]
3. Interchanging two columns of A gives the same result if the corresponding two elements of the vector x are interchanged at the same time.
Winter Semester 2006/7 Computational Physics I Lecture 8 4
Gauss Elimination Method
Define
\[
l_{i1} = \frac{a_{i1}}{a_{11}}.
\]
Transform A by subtracting $l_{i1}a_1^t$ from row $i$, where $a_1^t$ is the transpose of row 1:
\[
\begin{pmatrix} a_1^t \\ a_2^t \\ \vdots \\ a_n^t \end{pmatrix}
\to
\begin{pmatrix} a_1^t \\ a_2^t - l_{21}a_1^t \\ \vdots \\ a_n^t - l_{n1}a_1^t \end{pmatrix}
\quad\text{or}\quad A^{(1)} = L_1 A, \quad\text{where}
\]
\[
L_1 = \begin{pmatrix} 1 & & & \\ -l_{21} & 1 & & 0 \\ -l_{31} & & \ddots & \\ -l_{n1} & 0 & & 1 \end{pmatrix}
\]
is the Frobenius matrix.
Winter Semester 2006/7 Computational Physics I Lecture 8 5
Gauss Elimination Method
The result is
\[
A^{(1)} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a^{(1)}_{22} & \cdots & a^{(1)}_{2n} \\ \vdots & \vdots & & \vdots \\ 0 & a^{(1)}_{n2} & \cdots & a^{(1)}_{nn} \end{pmatrix}.
\]
Now subtract $l_{i2} = a^{(1)}_{i2}/a^{(1)}_{22}$ times the second row from rows $3\ldots n$:
\[
A^{(2)} = L_2 A^{(1)} = L_2 L_1 A, \qquad
L_2 = \begin{pmatrix} 1 & 0 & & & \\ 0 & 1 & & & \\ 0 & -l_{32} & 1 & & \\ \vdots & \vdots & 0 & \ddots & \\ 0 & -l_{n2} & 0 & 0 & 1 \end{pmatrix}.
\]
Winter Semester 2006/7 Computational Physics I Lecture 8 6


Gauss Elimination Method
Now we have:
\[
A^{(2)} = \begin{pmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ 0 & a^{(1)}_{22} & a^{(1)}_{23} & \cdots & a^{(1)}_{2n} \\ 0 & 0 & a^{(2)}_{33} & \cdots & a^{(2)}_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & a^{(2)}_{n3} & \cdots & a^{(2)}_{nn} \end{pmatrix}.
\]
Keep going until we have an upper triangular matrix ($n-1$ steps):
\[
A^{(n-1)} = L_{n-1}L_{n-2}\cdots L_1 A = U =
\begin{pmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ 0 & u_{22} & u_{23} & \cdots & u_{2n} \\ 0 & 0 & u_{33} & \cdots & u_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & u_{nn} \end{pmatrix},
\]
with $U\vec{x} = \vec{c}$ and $\vec{c} = L_{n-1}L_{n-2}\cdots L_1\vec{b}$.
Winter Semester 2006/7 Computational Physics I Lecture 8 7
Gauss Elimination Method
Note that:
\[
L_1 = \begin{pmatrix} 1 & & & \\ -l_{21} & 1 & & 0 \\ -l_{31} & & \ddots & \\ -l_{n1} & 0 & & 1 \end{pmatrix}
\qquad\Longrightarrow\qquad
L_1^{-1} = \begin{pmatrix} 1 & & & \\ l_{21} & 1 & & 0 \\ l_{31} & & \ddots & \\ l_{n1} & 0 & & 1 \end{pmatrix},
\]
and similarly for the other $L_i$, so
\[
L_{n-1}L_{n-2}\cdots L_1 A = U \quad\text{implies}\quad A = L_1^{-1}L_2^{-1}\cdots L_{n-1}^{-1}U = LU,
\]
with
\[
L = L_1^{-1}L_2^{-1}\cdots L_{n-1}^{-1} =
\begin{pmatrix} 1 & & & & \\ l_{21} & 1 & & & \\ l_{31} & l_{32} & 1 & & \\ \vdots & \vdots & l_{43} & \ddots & \\ l_{n1} & l_{n2} & \cdots & l_{n,n-1} & 1 \end{pmatrix}.
\]
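Inverting each $L_i$ only flips the signs of the multipliers below the diagonal, and the product of the inverses simply collects all multipliers in the lower triangle. A minimal numerical sketch of the first statement for a 3×3 case (the multiplier values are only illustrative):

! Sketch: the inverse of a Frobenius matrix is obtained by flipping
! the sign of the multipliers below the diagonal.
program check_frobenius_inverse
   implicit none
   double precision :: L1(3,3), L1inv(3,3), prod(3,3)
   double precision :: l21, l31
   integer :: i

   l21 = 0.25d0                          ! illustrative multipliers
   l31 = 0.50d0

   L1 = 0.d0; L1inv = 0.d0
   do i = 1, 3
      L1(i,i) = 1.d0; L1inv(i,i) = 1.d0
   enddo
   L1(2,1) = -l21;   L1(3,1) = -l31      ! L1 subtracts l_i1 * row 1
   L1inv(2,1) = l21; L1inv(3,1) = l31    ! sign-flipped candidate inverse

   prod = matmul(L1, L1inv)
   print *, 'L1 * L1inv (should be the unit matrix):'
   do i = 1, 3
      print '(3f8.4)', prod(i,:)
   enddo
end program check_frobenius_inverse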
Winter Semester 2006/7 Computational Physics I Lecture 8 8
Gauss Elimination Method
i.e., the matrix A has been decomposed (LU decomposition) into the product of a lower triangular and an upper triangular matrix. The solution is now easy:
\[
A\vec{x} = LU\vec{x} = \vec{b}.
\]
First solve $L\vec{y} = \vec{b}$, then $U\vec{x} = \vec{y}$:
\[
y_1 = b_1, \qquad y_2 = b_2 - l_{21}y_1, \qquad \ldots
\]
then, going from the bottom,
\[
x_n = \frac{y_n}{u_{nn}}, \qquad
x_{n-1} = \frac{y_{n-1} - u_{n-1,n}x_n}{u_{n-1,n-1}}, \qquad \ldots
\]
Winter Semester 2006/7 Computational Physics I Lecture 8 9


Gauss Elimination Method

Take a concrete example:
\[
\begin{pmatrix} 4 & 1 & 1 & 1 \\ 1 & 4 & 1 & 1 \\ 1 & 1 & 4 & 1 \\ 1 & 1 & 1 & 4 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}
=
\begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}
\]
Here is some code:
* Loop over the rows. The index i is the row which is not touched in this step
      Do i=1,n-1
*
* For step i, we modify all rows j>i
        Do j=i+1,n
*
* Loop over the column elements in this row. Perform the linear transformation
* on matrix A. Keep the upper triangular elements in A as we go. Also build
* up the lower triangular matrix L at the same time.
          Lji=A(j,i)/A(i,i)
          Do k=1,n
            A(j,k)=A(j,k)-Lji*A(i,k)
          Enddo
          L(j,i)=Lji
        Enddo
      Enddo
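The fragment above is not a complete program. A self-contained sketch for the 4×4 example (illustrative only; it should reproduce the matrices A(3) and L(3) printed below):

! Sketch: LU decomposition of the 4x4 example by Gauss elimination,
! keeping U in A and collecting the multipliers in L.
program lu_example
   implicit none
   integer, parameter :: n = 4
   double precision :: A(n,n), L(n,n), Lji
   integer :: i, j, k

   A = 1.d0                       ! example matrix: 1 everywhere ...
   do i = 1, n
      A(i,i) = 4.d0               ! ... and 4 on the diagonal
      L(:,i) = 0.d0
      L(i,i) = 1.d0               ! unit diagonal of L
   enddo

   do i = 1, n-1                  ! i labels the pivot row
      do j = i+1, n               ! rows below the pivot row
         Lji = A(j,i)/A(i,i)
         do k = 1, n
            A(j,k) = A(j,k) - Lji*A(i,k)
         enddo
         L(j,i) = Lji
      enddo
   enddo

   print *, 'U (stored in A):'
   do i = 1, n
      print '(4f10.5)', A(i,:)
   enddo
   print *, 'L:'
   do i = 1, n
      print '(4f10.5)', L(i,:)
   enddo
end program lu_example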
Winter Semester 2006/7 Computational Physics I Lecture 8 10
Gauss Elimination Method

A(1) =  4.00000 1.00000 1.00000 1.00000      L(1) =  1.00000 0.00000 0.00000 0.00000
        0.00000 3.75000 0.75000 0.75000              0.25000 1.00000 0.00000 0.00000
        0.00000 0.75000 3.75000 0.75000              0.25000 0.00000 1.00000 0.00000
        0.00000 0.75000 0.75000 3.75000              0.25000 0.00000 0.00000 1.00000

A(2) =  4.00000 1.00000 1.00000 1.00000      L(2) =  1.00000 0.00000 0.00000 0.00000
        0.00000 3.75000 0.75000 0.75000              0.25000 1.00000 0.00000 0.00000
        0.00000 0.00000 3.60000 0.60000              0.25000 0.20000 1.00000 0.00000
        0.00000 0.00000 0.60000 3.60000              0.25000 0.20000 0.00000 1.00000

A(3) =  4.00000 1.00000 1.00000 1.00000      L(3) =  1.00000 0.00000 0.00000 0.00000
        0.00000 3.75000 0.75000 0.75000              0.25000 1.00000 0.00000 0.00000
        0.00000 0.00000 3.60000 0.60000              0.25000 0.20000 1.00000 0.00000
        0.00000 0.00000 0.00000 3.50000              0.25000 0.20000 0.16667 1.00000
Winter Semester 2006/7 Computational Physics I Lecture 8 11


Gauss Elimination Method
* Now build up the solution vector
* First the vector y
      Do i=1,n
        y(i)=b(i)
        Do j=1,i-1
          y(i)=y(i)-L(i,j)*y(j)
        Enddo
      Enddo
*
* and now x
*
      Do i=n,1,-1
        x(i)=y(i)
        Do j=i+1,n
          x(i)=x(i)-A(i,j)*x(j)
        Enddo
        x(i)=x(i)/A(i,i)
      Enddo

The result for the example above:
y = (1.00, 0.75, 0.60, 0.50),   x = (0.14, 0.14, 0.14, 0.14)

It’s good practice to check the result: is Ax = b? Is A = LU?
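A minimal sketch of such a check, assuming the original matrix has been kept in a separate array A0 before the elimination (the names are illustrative; U is the upper triangle left in A):

! Sketch: verify the decomposition and the solution.
! A0 is the original matrix, L and U the factors, x the solution, b the rhs.
subroutine check_lu(n, A0, L, U, x, b)
   implicit none
   integer, intent(in) :: n
   double precision, intent(in) :: A0(n,n), L(n,n), U(n,n), x(n), b(n)
   ! both residuals should be zero up to rounding
   print *, 'max |A x - b| =', maxval(abs(matmul(A0, x) - b))
   print *, 'max |L U - A| =', maxval(abs(matmul(L, U) - A0))
end subroutine check_lu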

Winter Semester 2006/7 Computational Physics I Lecture 8 12


Gauss-Jordan Elimination
The Gauss-Jordan method reduces the matrix A directly to the unit matrix in a single pass. As a first try, use rule 2 from p. 4 to recast the equations:
\[
a_{1j} \to a'_{1j} = \frac{a_{1j}}{a_{11}}, \qquad j = 1,2,3,4.
\]
Then take the following linear combination:
\[
a'_{ij} = a_{ij} + k_i a'_{1j} = a_{ij} + k_i\frac{a_{1j}}{a_{11}}, \qquad i = 2,3,4,
\]
and choose $k_i$ such that
\[
a'_{i1} = a_{i1} + k_i a'_{11} = a_{i1} + k_i = 0, \qquad\text{or}\quad k_i = -a_{i1}, \qquad i = 2,3,4.
\]
Winter Semester 2006/7 Computational Physics I Lecture 8 13


Gauss-Jordan Elimination
Work with a 4×4 matrix to be concrete:
\[
\begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{pmatrix}
\]
After the first step, our matrix looks like this:
\[
\begin{pmatrix}
1 & \frac{a_{12}}{a_{11}} & \frac{a_{13}}{a_{11}} & \frac{a_{14}}{a_{11}} \\
0 & a_{22} - a_{21}\frac{a_{12}}{a_{11}} & a_{23} - a_{21}\frac{a_{13}}{a_{11}} & a_{24} - a_{21}\frac{a_{14}}{a_{11}} \\
0 & a_{32} - a_{31}\frac{a_{12}}{a_{11}} & a_{33} - a_{31}\frac{a_{13}}{a_{11}} & a_{34} - a_{31}\frac{a_{14}}{a_{11}} \\
0 & a_{42} - a_{41}\frac{a_{12}}{a_{11}} & a_{43} - a_{41}\frac{a_{13}}{a_{11}} & a_{44} - a_{41}\frac{a_{14}}{a_{11}}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}
=
\begin{pmatrix}
\frac{b_1}{a_{11}} \\
b_2 - a_{21}\frac{b_1}{a_{11}} \\
b_3 - a_{31}\frac{b_1}{a_{11}} \\
b_4 - a_{41}\frac{b_1}{a_{11}}
\end{pmatrix}
\]

Now we move on and make the second column look like the unit
matrix, and so on.
Winter Semester 2006/7 Computational Physics I Lecture 8 14
Gauss-Jordan Elimination
1. $a'_{2j} = \dfrac{a_{2j}}{a_{22}}$
2. $a'_{ij} = a_{ij} - a_{i2}\dfrac{a_{2j}}{a_{22}}, \qquad i = 1,3,4$
3. $b'_i = b_i - a_{i2}\dfrac{b_2}{a_{22}}, \qquad i = 1,3,4$

Note that the first column is not affected:
\[
a'_{21} = \frac{a_{21}}{a_{22}} = 0 \quad (\text{since } a_{21} = 0),
\qquad
a'_{i1} = a_{i1} - a_{i2}\frac{a_{21}}{a_{22}} = a_{i1} - a_{i2}\cdot 0 = a_{i1}, \qquad i = 1,3,4.
\]
Winter Semester 2006/7 Computational Physics I Lecture 8 15


Gauss-Jordan Elimination
After these two sets of transformations, we have
\[
\begin{pmatrix} 1 & 0 & a'_{13} & a'_{14} \\ 0 & 1 & a'_{23} & a'_{24} \\ 0 & 0 & a'_{33} & a'_{34} \\ 0 & 0 & a'_{43} & a'_{44} \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}
=
\begin{pmatrix} b'_1 \\ b'_2 \\ b'_3 \\ b'_4 \end{pmatrix}
\]
Keep going until we have
\[
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}
=
\begin{pmatrix} b^{(4)}_1 \\ b^{(4)}_2 \\ b^{(4)}_3 \\ b^{(4)}_4 \end{pmatrix}
\quad\Longrightarrow\quad
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}
=
\begin{pmatrix} b^{(4)}_1 \\ b^{(4)}_2 \\ b^{(4)}_3 \\ b^{(4)}_4 \end{pmatrix}
\]
Winter Semester 2006/7 Computational Physics I Lecture 8 16


Gauss-Jordan Elimination
The algorithm looks like this (a minimal code sketch is given below):
1. Loop over the n columns. We want to turn the columns one at a time into the corresponding columns of the unit matrix.
2. For column k, we find the linear transformation on the rows which gives the desired result (1 in row k, 0 in the other rows):
   a) Loop over the n rows and make the diagonal element a 1. For i = k, do this by dividing every element A(i,j) by A(k,k), where j is the column index and i is the row index. Also divide b(i) by A(k,k).
   b) For i ≠ k, make the element in column k a 0. We do this by subtracting A(i,k)*A(k,j)/A(k,k) from A(i,j). For b(i), we subtract A(i,k)*b(k)/A(k,k).
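A minimal Fortran sketch of these steps (no pivoting yet; the variable names are illustrative):

! Sketch: Gauss-Jordan elimination without pivoting.
! Turns A into the unit matrix and b into the solution x.
subroutine gauss_jordan(n, A, b)
   implicit none
   integer, intent(in) :: n
   double precision, intent(inout) :: A(n,n), b(n)
   double precision :: pivot, factor
   integer :: i, j, k

   do k = 1, n                        ! loop over the columns
      pivot = A(k,k)                  ! assumed nonzero (no pivoting here)
      do j = 1, n                     ! make the diagonal element a 1
         A(k,j) = A(k,j)/pivot
      enddo
      b(k) = b(k)/pivot
      do i = 1, n                     ! make the other elements in column k a 0
         if (i == k) cycle
         factor = A(i,k)
         do j = 1, n
            A(i,j) = A(i,j) - factor*A(k,j)
         enddo
         b(i) = b(i) - factor*b(k)
      enddo
   enddo
end subroutine gauss_jordan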

Test matrices:
\[
A_1 = \begin{pmatrix} 4 & 1 & 1 & 1 \\ 1 & 4 & 1 & 1 \\ 1 & 1 & 4 & 1 \\ 1 & 1 & 1 & 4 \end{pmatrix}, \quad
b_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}, \qquad
A_2 = \begin{pmatrix} 1 & \tfrac{1}{2} & \tfrac{1}{3} & \tfrac{1}{4} \\ \tfrac{1}{2} & \tfrac{1}{3} & \tfrac{1}{4} & \tfrac{1}{5} \\ \tfrac{1}{3} & \tfrac{1}{4} & \tfrac{1}{5} & \tfrac{1}{6} \\ \tfrac{1}{4} & \tfrac{1}{5} & \tfrac{1}{6} & \tfrac{1}{7} \end{pmatrix}, \quad
b_2 = \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}
\]
Winter Semester 2006/7 Computational Physics I Lecture 8 17


Gauss-Jordan Elimination
Start with A1 (the last column is b1):
    4.00  1.00  1.00  1.00 |  1.00
    1.00  4.00  1.00  1.00 |  1.00
    1.00  1.00  4.00  1.00 |  1.00
    1.00  1.00  1.00  4.00 |  1.00

First iteration:
    1.00  0.25  0.25  0.25 |  0.25
    0.00  3.75  0.75  0.75 |  0.75
    0.00  0.75  3.75  0.75 |  0.75
    0.00  0.75  0.75  3.75 |  0.75

Second iteration:
    1.00  0.00  0.20  0.20 |  0.20
    0.00  1.00  0.20  0.20 |  0.20
    0.00  0.00  3.60  0.60 |  0.60
    0.00  0.00  0.60  3.60 |  0.60

Third iteration:
    1.00  0.00  0.00  0.17 |  0.17
    0.00  1.00  0.00  0.17 |  0.17
    0.00  0.00  1.00  0.17 |  0.17
    0.00  0.00  0.00  3.50 |  0.50

Fourth iteration:
    1.00  0.00  0.00  0.00 |  0.14
    0.00  1.00  0.00  0.00 |  0.14
    0.00  0.00  1.00  0.00 |  0.14
    0.00  0.00  0.00  1.00 |  0.14
Winter Semester 2006/7 Computational Physics I Lecture 8 18
Gauss-Jordan Elimination

We can use the same transformations to get the inverse of A:
\[
AA^{-1} = E.
\]
Now suppose the transformation of A to E consists of the following steps:
\[
T_4 T_3 T_2 T_1 A = E.
\]
Apply this to the equation above:
\[
T_4 T_3 T_2 T_1 A A^{-1} = T_4 T_3 T_2 T_1 E,
\]
or
\[
E A^{-1} = A^{-1} = T_4 T_3 T_2 T_1 E.
\]
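In code, this means carrying the unit matrix E along and applying every transformation to it as well; at the end E holds A⁻¹. A minimal sketch (names illustrative, no pivoting):

! Sketch: invert A by applying the Gauss-Jordan transformations
! simultaneously to A and to the unit matrix E; Ainv ends up holding A^-1.
subroutine gauss_jordan_inverse(n, A, Ainv)
   implicit none
   integer, intent(in) :: n
   double precision, intent(inout) :: A(n,n)
   double precision, intent(out) :: Ainv(n,n)
   double precision :: pivot, factor
   integer :: i, k

   Ainv = 0.d0
   do i = 1, n
      Ainv(i,i) = 1.d0                 ! start from the unit matrix E
   enddo

   do k = 1, n
      pivot = A(k,k)                   ! assumed nonzero (no pivoting)
      A(k,:) = A(k,:)/pivot
      Ainv(k,:) = Ainv(k,:)/pivot
      do i = 1, n
         if (i == k) cycle
         factor = A(i,k)
         A(i,:) = A(i,:) - factor*A(k,:)
         Ainv(i,:) = Ainv(i,:) - factor*Ainv(k,:)
      enddo
   enddo
end subroutine gauss_jordan_inverse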

Winter Semester 2006/7 Computational Physics I Lecture 8 19


Gauss-Jordan Elimination
Let’s try it on the previous example.

Start from the unit matrix:
    1.00000  0.00000  0.00000  0.00000
    0.00000  1.00000  0.00000  0.00000
    0.00000  0.00000  1.00000  0.00000
    0.00000  0.00000  0.00000  1.00000

After T1:
    0.25000  0.00000  0.00000  0.00000
    -.25000  1.00000  0.00000  0.00000
    -.25000  0.00000  1.00000  0.00000
    -.25000  0.00000  0.00000  1.00000

After T2:
    0.26667  -.06667  0.00000  0.00000
    -.06667  0.26667  0.00000  0.00000
    -.20000  -.20000  1.00000  0.00000
    -.20000  -.20000  0.00000  1.00000

After T3:
    0.27778  -.05556  -.05556  0.00000
    -.05556  0.27778  -.05556  0.00000
    -.05556  -.05556  0.27778  0.00000
    -.16667  -.16667  -.16667  1.00000

After T4:
    0.28571  -.04762  -.04762  -.04762
    -.04762  0.28571  -.04762  -.04762
    -.04762  -.04762  0.28571  -.04762
    -.04762  -.04762  -.04762  0.28571
Winter Semester 2006/7 Computational Physics I Lecture 8 20
Gauss-Jordan Elimination
With A2 (the last column is b2):
    1.00000  0.50000  0.33333  0.25000 |    1.00
    0.50000  0.33333  0.25000  0.20000 |    1.00
    0.33333  0.25000  0.20000  0.16667 |    1.00
    0.25000  0.20000  0.16667  0.14286 |    1.00

First iteration:
    1.00000  0.50000  0.33333  0.25000 |    1.00
    0.00000  0.08333  0.08333  0.07500 |    0.50
    0.00000  0.08333  0.08889  0.08333 |    0.67
    0.00000  0.07500  0.08333  0.08036 |    0.75

Second iteration:
    1.00000  0.00000  -.16667  -.20000 |   -2.00
    0.00000  1.00000  1.00000  0.90000 |    6.00
    0.00000  0.00000  0.00556  0.00833 |    0.17
    0.00000  0.00000  0.00833  0.01286 |    0.30

Third iteration:
    1.00000  0.00000  0.00000  0.05000 |    3.00
    0.00000  1.00000  0.00000  -.60000 |  -24.00
    0.00000  0.00000  1.00000  1.50000 |   30.00
    0.00000  0.00000  0.00000  0.00036 |    0.05

Fourth iteration (start to see rounding issues):
    1.00000  0.00000  0.00000  0.00000 |   -4.00
    0.00000  1.00000  0.00000  0.00000 |   59.99
    0.00000  0.00000  1.00000  0.00000 | -179.99
    0.00000  0.00000  0.00000  1.00000 |  139.99
Winter Semester 2006/7 Computational Physics I Lecture 8 21
Gauss-Jordan Elimination
As a check, we calculate the result for x for different initial matrices. Use matrices of the form A2,
\[
a_{ij} = \frac{1}{i+j-1},
\]
and try different-size square matrices. This should give us back our initial vector b (all components equal to 1). We find that numerical problems start at around n = m = 10.

Single precision calculation. All values should be 1:

0.971252441 0.977539062 0.981124878 0.983856201 0.985870361


0.987503052 0.988098145 0.989746094 0.99067688 0.991607666

Double precision calculation. All values are =1.
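A self-contained sketch of this test (illustrative only): build a_ij = 1/(i+j-1) with b_i = 1, solve by Gauss-Jordan, and recompute Ax. In double precision the printed values should come out essentially 1; changing the declarations to single precision (real) should show deviations of the kind listed above.

! Sketch: build the n x n test matrix a_ij = 1/(i+j-1) with b_i = 1,
! solve by Gauss-Jordan (no pivoting) and recompute A x to compare with b.
program hilbert_test
   implicit none
   integer, parameter :: n = 10
   double precision :: A(n,n), A0(n,n), b(n), check(n), factor
   integer :: i, j, k

   do i = 1, n                           ! build the test matrix and rhs
      do j = 1, n
         A(i,j) = 1.d0/dble(i + j - 1)
      enddo
      b(i) = 1.d0
   enddo
   A0 = A                                ! keep a copy of the original matrix

   do k = 1, n                           ! Gauss-Jordan elimination, no pivoting
      factor = A(k,k)
      A(k,:) = A(k,:)/factor
      b(k)   = b(k)/factor
      do i = 1, n
         if (i == k) cycle
         factor = A(i,k)
         A(i,:) = A(i,:) - factor*A(k,:)
         b(i)   = b(i) - factor*b(k)
      enddo
   enddo                                 ! b now holds the solution x

   check = matmul(A0, b)                 ! A x should reproduce the original b
   print *, 'A x (all values should be 1):'
   print '(5f14.9)', check
end program hilbert_test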

Winter Semester 2006/7 Computational Physics I Lecture 8 22


Gauss-Jordan Elimination
Check 100x100 matrix double precision
0.999999987 0.999999987 0.999999988 0.999999988 0.999999988 0.999999989
0.999999989 0.999999989 0.999999989 0.99999999 0.99999999 0.99999999
0.99999999 0.999999991 0.999999991 0.999999991 0.999999991 0.999999991
0.999999992 0.999999992 0.999999992 0.999999992 0.999999992 0.999999992
0.999999993 0.999999993 0.999999993 0.999999993 0.999999993 0.999999993
0.999999993 0.999999993 0.999999993 0.999999994 0.999999994 0.999999994
0.999999994 0.999999994 0.999999994 0.999999994 0.999999994 0.999999994
0.999999994 0.999999994 0.999999995 0.999999995 0.999999995 0.999999995
0.999999995 0.999999995 0.999999995 0.999999995 0.999999995 0.999999995
0.999999995 0.999999995 0.999999995 0.999999995 0.999999995 0.999999995
0.999999996 0.999999996 0.999999996 0.999999996 0.999999996 0.999999996
0.999999996 0.999999996 0.999999996 0.999999996 0.999999996 0.999999996
0.999999996 0.999999996 0.999999996 0.999999996 0.999999996 0.999999996
0.999999996 0.999999996 0.999999996 0.999999996 0.999999996 0.999999996
0.999999996 0.999999996 0.999999997 0.999999997 0.999999997 0.999999997
0.999999997 0.999999997 0.999999997 0.999999997 0.999999997 0.999999997
0.999999997 0.999999997 0.999999997 0.999999997

Looks pretty good. However, there are cases where this simple
Gauss-Jordan technique does not work so well.
Winter Semester 2006/7 Computational Physics I Lecture 8 23
Pivoting

What if the diagonal element which we divide by is very small or zero? (The element we divide by is called the pivot.) This obviously causes problems. It is resolved by a technique called pivoting (exchanging rows and columns).

Partial pivoting - exchange rows in the region which has not yet been transformed to the unit matrix.

Full pivoting - exchange rows and columns.

These tricks are allowed according to the rules we stated on pages 4 and 5.
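A classic two-equation illustration of the problem (not from the lecture): for eps·x1 + x2 = 1 and x1 + x2 = 2 with a tiny eps, the exact solution is x1 ≈ x2 ≈ 1, but eliminating with the tiny pivot in single precision destroys x1, while swapping the rows first gives the right answer:

! Sketch: the effect of a tiny pivot, single precision.
! System:  eps*x1 + x2 = 1,   x1 + x2 = 2,   exact solution x1, x2 ~ 1.
program tiny_pivot
   implicit none
   real :: eps, x1, x2, m
   eps = 1.0e-8

   ! No pivoting: eliminate x1 using the tiny pivot eps
   m  = 1.0/eps                       ! multiplier for row 2
   x2 = (2.0 - m*1.0)/(1.0 - m*1.0)   ! 2-m and 1-m both round to -m
   x1 = (1.0 - x2)/eps                ! comes out 0 instead of ~1
   print *, 'without pivoting: x1, x2 =', x1, x2

   ! With partial pivoting: swap the rows, the pivot is 1
   m  = eps/1.0
   x2 = (1.0 - m*2.0)/(1.0 - m*1.0)
   x1 = (2.0 - x2)/1.0
   print *, 'with pivoting:    x1, x2 =', x1, x2
end program tiny_pivot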

Winter Semester 2006/7 Computational Physics I Lecture 8 24


Partial Pivoting
The algorithm looks like this (a minimal code sketch is given below):
1. Loop over the n columns. We want to turn the columns one at a time into the corresponding columns of the unit matrix. Use the index k to specify which column element we want to turn into a 1.
2. For each value of k, we loop over the rows i ≥ k and look for the maximum (absolute) value of A(i,k). The row i which gives this maximum is swapped with row k. Use an index vector to keep track of the permutations: Ind(i) = row which is to be considered as the ith row.
3. Find the linear transformation:
   a) Loop over the n rows and make the diagonal element a 1. For Ind(i) = k, do this by dividing every element A(Ind(i),j) by A(Ind(i),k), where j is the column index and Ind(i) is the row index. Also divide b(Ind(i)) by A(Ind(i),k).
   b) For Ind(i) ≠ k, make the element in column k a 0. We do this by subtracting A(Ind(i),k)*A(Ind(k),j)/A(Ind(k),k) from A(Ind(i),j). For b(Ind(i)), we subtract A(Ind(i),k)*b(Ind(k))/A(Ind(k),k).
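A minimal sketch of these steps with an index vector Ind (names are illustrative):

! Sketch: Gauss-Jordan elimination with partial pivoting,
! using an index vector Ind to record the row permutation.
subroutine gauss_jordan_pivot(n, A, b)
   implicit none
   integer, intent(in) :: n
   double precision, intent(inout) :: A(n,n), b(n)
   integer :: Ind(n), i, k, imax, itmp
   double precision :: pivot, factor

   do i = 1, n
      Ind(i) = i                       ! initial ordering
   enddo

   do k = 1, n
      ! find, among rows not yet used as pivot rows, the largest |A(.,k)|
      imax = k
      do i = k+1, n
         if (abs(A(Ind(i),k)) > abs(A(Ind(imax),k))) imax = i
      enddo
      itmp = Ind(k); Ind(k) = Ind(imax); Ind(imax) = itmp   ! swap via the index vector

      pivot = A(Ind(k),k)
      A(Ind(k),:) = A(Ind(k),:)/pivot  ! make the diagonal element a 1
      b(Ind(k)) = b(Ind(k))/pivot
      do i = 1, n                      ! make the other elements in column k a 0
         if (i == k) cycle
         factor = A(Ind(i),k)
         A(Ind(i),:) = A(Ind(i),:) - factor*A(Ind(k),:)
         b(Ind(i)) = b(Ind(i)) - factor*b(Ind(k))
      enddo
   enddo
   ! solution: x(k) = b(Ind(k))
end subroutine gauss_jordan_pivot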

Winter Semester 2006/7 Computational Physics I Lecture 8 25


Partial Pivoting
Example (A2 again, the last column is b2):
    1.00000  0.50000  0.33333  0.25000 |    1.00
    0.50000  0.33333  0.25000  0.20000 |    1.00
    0.33333  0.25000  0.20000  0.16667 |    1.00
    0.25000  0.20000  0.16667  0.14286 |    1.00

First iteration:
    1.00000  0.50000  0.33333  0.25000 |    1.00
    0.00000  0.08333  0.08333  0.07500 |    0.50
    0.00000  0.08333  0.08889  0.08333 |    0.67
    0.00000  0.07500  0.08333  0.08036 |    0.75

Second iteration:
    1.00000  0.00000  -.16667  -.20000 |   -2.00
    0.00000  1.00000  1.00000  0.90000 |    6.00
    0.00000  0.00000  0.00556  0.00833 |    0.17
    0.00000  0.00000  0.00833  0.01286 |    0.30

Third iteration:
    1.00000  0.00000  0.00000  0.05714 |    4.00
    0.00000  1.00000  0.00000  -.64286 |  -30.00
    0.00000  0.00000  0.00000  -.00024 |   -0.03
    0.00000  0.00000  1.00000  1.54286 |   36.00

Fourth iteration:
    1.00000  0.00000  0.00000  0.00000 |   -4.00
    0.00000  1.00000  0.00000  0.00000 |   60.00
    0.00000  0.00000  0.00000  1.00000 |  139.99
    0.00000  0.00000  1.00000  0.00000 | -179.99
Winter Semester 2006/7 Computational Physics I Lecture 8 26
Exercises

1. Solve for x in Ax = b using the LU decomposition method, with
\[
a_{ij} = \frac{1}{i+j+1}, \qquad b_j = j.
\]

2. Do it again with the Gauss-Jordan method

3. (more difficult) Now try
\[
a_{ij} = i - j \quad \forall\, i,j, \qquad b_j = j.
\]
You will need to use pivoting.

Winter Semester 2006/7 Computational Physics I Lecture 8 27
