L6 LinearAlgebraicEqns
NUMERICAL METHODS
AX = B   (matrix equation)
Multiply both sides by the inverse of A:
    A^-1 A X = A^-1 B
    X = A^-1 B
3x1 + 2x2 = 18
-x1 + 2x2 = 2
Solving each equation for x2:
x2 = -(3/2)x1 + 9
x2 = (1/2)x1 + 1
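The two lines intersect at the solution of the system. A minimal sketch in Python (function name illustrative) that solves a 2x2 system by the determinant formula:

```python
def solve_2x2(a11, a12, b1, a21, a22, b2):
    """Solve a 2x2 linear system via the determinant (Cramer-style)."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("singular system: no unique solution")
    x1 = (b1 * a22 - a12 * b2) / det
    x2 = (a11 * b2 - b1 * a21) / det
    return x1, x2

# 3x1 + 2x2 = 18,  -x1 + 2x2 = 2
x1, x2 = solve_2x2(3, 2, 18, -1, 2, 2)
print(x1, x2)  # -> 4.0 3.0
```

The two lines above therefore cross at (4, 3).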
Singular systems:
• a) The two equations represent parallel lines. There is no solution.
• b) The two lines are coincident. There are infinitely many solutions.
ill-conditioned system
• c) The lines are nearly parallel, and it is difficult to identify the exact point at which they intersect.
Determinants, Cramer’s Rule
Given a second-order matrix A, the determinant D is defined as follows:

    D = | a11  a12 | = a11 a22 - a12 a21
        | a21  a22 |

For example:

    A1 = | 1    1.9 | = 1(0.5) - 1.9(0.3) = -0.07
         | 0.3  0.5 |

    A2 = | 0.5  1.9 | = 0.5(0.5) - 1.9(0.1) = 0.06
         | 0.1  0.5 |

    A3 = | 0.5  1   | = 0.5(0.3) - 1(0.1) = 0.05
         | 0.1  0.3 |
Example: Cramer’s Rule
For a third-order system, the determinant of the coefficient matrix is expanded along the first row:

    D = | a11  a12  a13 |       | a22  a23 |       | a21  a23 |       | a21  a22 |
        | a21  a22  a23 | = a11 | a32  a33 | - a12 | a31  a33 | + a13 | a31  a32 |
        | a31  a32  a33 |

For the coefficient matrix

    | 0.3  0.52  1   |
    | 0.5  1     1.9 |
    | 0.1  0.3   0.5 |

the three 2x2 minors are the determinants evaluated above, A1 = -0.07, A2 = 0.06, A3 = 0.05, so

    D = 0.3(-0.07) - 0.52(0.06) + 1(0.05) = -0.0022
Example: Cramer’s Rule (cont.)
To find x2, replace the second column of the coefficient matrix with the right-hand-side vector and divide the resulting determinant by D:

         | 0.3  -0.01  1   |
         | 0.5   0.67  1.9 |
         | 0.1  -0.44  0.5 |     0.0649
    x2 = ---------------------- = ------- = -29.5
               -0.0022           -0.0022
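A sketch of Cramer's rule in Python. The full system is assumed (from the coefficient matrix and the replaced column shown on these slides) to be 0.3x1 + 0.52x2 + x3 = -0.01, 0.5x1 + x2 + 1.9x3 = 0.67, 0.1x1 + 0.3x2 + 0.5x3 = -0.44; function names are illustrative:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer3(a, b):
    """Cramer's rule: x_i = det(A with column i replaced by b) / det(A)."""
    d = det3(a)
    xs = []
    for col in range(3):
        ai = [row[:] for row in a]
        for r in range(3):
            ai[r][col] = b[r]
        xs.append(det3(ai) / d)
    return xs

A = [[0.3, 0.52, 1.0], [0.5, 1.0, 1.9], [0.1, 0.3, 0.5]]
b = [-0.01, 0.67, -0.44]
x = cramer3(A, b)
print(round(x[1], 1))  # x2 -> -29.5
```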
Forward elimination:
Multiply equation (1) by (0.1)/3 and subtract the result from equation (2) to eliminate x1 from equation (2).
Back substitution:
Obtain the value of x3 from equation (9):
    x3 = 70.0843 / 10.0200
    x3 = 7.00003        (10)
Example: Gauss Elimination
    x1 = 3.0        (12)
Pivoting
2. Partial pivoting:
Solution: x1 = x2 = 1
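The elimination, pivoting, and back-substitution steps can be sketched in Python. The 3x3 system is assumed to be the running example 3x1 - 0.1x2 - 0.2x3 = 7.85, 0.1x1 + 7x2 - 0.3x3 = -19.3, 0.3x1 - 0.2x2 + 10x3 = 71.4, inferred from the multipliers and right-hand sides quoted on these slides:

```python
def gauss_solve(a, b):
    """Gauss elimination with partial pivoting, then back substitution."""
    n = len(a)
    a = [row[:] for row in a]   # work on copies
    b = b[:]
    for k in range(n - 1):
        # partial pivoting: bring the largest pivot magnitude to row k
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]       # multiplier factor
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):      # back substitution
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

A = [[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
x = gauss_solve(A, b)
print([round(v, 4) for v in x])  # -> [3.0, -2.5, 7.0]
```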
Gauss-Jordan Method
• X1 = 3.000441181
• X2 = -2.48809640
Example of Gauss-Jordan Method
• X3 = 7.85(-0.0100779) - 19.3(0.00269816) + 71.4(0.0998801)
• X3 = 7.00025314
• Solution:
    X1 = 3.000441181
    X2 = -2.48809640
    X3 = 7.00025314
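A minimal Gauss-Jordan sketch in Python: reduce the augmented matrix [A | I] until A becomes the identity, so the right half becomes A^-1; then X3 is the third row of A^-1 times B, as on the slide. The coefficients of A are assumed from the running example (3x1 - 0.1x2 - 0.2x3 = 7.85, ...); function names are illustrative:

```python
def gauss_jordan_inverse(a):
    """Invert A by Gauss-Jordan elimination on the augmented matrix [A | I]."""
    n = len(a)
    aug = [list(a[i]) + [1.0 if j == i else 0.0 for j in range(n)]
           for i in range(n)]
    for k in range(n):
        piv = aug[k][k]                      # normalize the pivot row
        aug[k] = [v / piv for v in aug[k]]
        for i in range(n):                   # clear column k in all other rows
            if i != k:
                m = aug[i][k]
                aug[i] = [v - m * w for v, w in zip(aug[i], aug[k])]
    return [row[n:] for row in aug]

A = [[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
Ainv = gauss_jordan_inverse(A)
x3 = sum(Ainv[2][j] * b[j] for j in range(3))
print(round(x3, 4))  # -> 7.0
```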
Gauss-Jordan Method
    4x1 + 2x2 + 5x3 = 6
    2x1 + x2 + x3 = 5
    3x1 + 2x2 + x3 = 4
    AX = B
What if you have to solve the linear system several times, with changing B vectors?
• Gaussian elimination then becomes inefficient, because the elimination work on A is repeated for every new B.
• Instead, use a method that separates the transformations on A from the transformations on B.
LU Decomposition
1. Construct L and U (forward elimination). For each pivot k = 1, ..., n-1, transform the equations below it:

    E_i <- E_i - ( a_ik^(k) / a_kk^(k) ) E_k ,   i = k+1, ..., n

Multiplier factors
• Keep track of the multiplier factors used during forward elimination and populate L with these factors:

    l_ii = 1
    l_ik = a_ik^(k) / a_kk^(k)
2. LU substitution step: solve L D = B for D by forward substitution, using the lower triangular matrix L:

    d_1 = b_1 / l_11
    d_i = ( b_i - Σ_{j=1}^{i-1} l_ij d_j ) / l_ii ,   i = 2, ..., n

3. Solve for X using back substitution:

    x_n = d_n / u_nn
    x_i = ( d_i - Σ_{j=i+1}^{n} u_ij x_j ) / u_ii ,   i = n-1, n-2, ..., 1
Example: LU Factorization or Decomposition
Forward substitution with L gives:
    y1 = 7.85
    0.0333333 y1 + y2 = -19.3
    0.1 y1 - 0.02713 y2 + y3 = 71.4
Example: The Substitution Steps
    x1 = 3,   x2 = -2.5,   x3 = 7
Example:
Solve the linear system by the LU decomposition method:
    x1 + x2 + 3x4 = 4
    2x1 + x2 - x3 + x4 = 1
    3x1 - x2 - x3 + 2x4 = -3
    -x1 + 2x2 + 3x3 - x4 = 4
Answer: x1 = -1, x2 = 2, x3 = 0, x4 = 1
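The factorization and the two substitution steps can be sketched as follows (Doolittle form with l_ii = 1 and no pivoting; function names are illustrative). Applied to the 4x4 exercise above, it reproduces the stated answer:

```python
def lu_solve(a, b):
    """Doolittle LU decomposition (no pivoting) followed by the
    forward-substitution step L d = b and back-substitution step U x = d."""
    n = len(a)
    u = [row[:] for row in a]
    l = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = u[i][k] / u[k][k]       # multiplier factor, stored in L
            l[i][k] = m
            for j in range(k, n):
                u[i][j] -= m * u[k][j]
    d = [0.0] * n                       # forward substitution: L d = b
    for i in range(n):
        d[i] = b[i] - sum(l[i][j] * d[j] for j in range(i))
    x = [0.0] * n                       # back substitution: U x = d
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - sum(u[i][j] * x[j] for j in range(i + 1, n))) / u[i][i]
    return x

A = [[1, 1, 0, 3], [2, 1, -1, 1], [3, -1, -1, 2], [-1, 2, 3, -1]]
b = [4, 1, -3, 4]
x = lu_solve(A, b)
print([round(v, 6) for v in x])  # -> [-1.0, 2.0, 0.0, 1.0]
```

Once L and U are stored, a new B vector costs only the two substitution sweeps, which is the point of the method.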
Factorization with MATLAB
In MATLAB, [L, U, P] = lu(A) returns the LU factors together with the row-permutation matrix P from partial pivoting, so that P*A = L*U.
If A is nonsingular,
    A A^-1 = I
To find A^-1 from AX = B:
• Set B to each of the unit vectors in the identity matrix in turn; each solve yields the corresponding column of A^-1.
    I = [ b^(1)  b^(2)  ...  b^(n) ] = | 1  0  ...  0 |
                                       | 0  1  ...  0 |
                                       | 0  0  ...  1 |   (n x n)
    A = | 1  1  0 |
        | 2  1  1 |
        | 3  1  1 |
The solution is given in class.
Answer:
    A^-1 = | 0  0.2  0.2 |
           | 1  0.2  0.2 |
           | 1  0.8  0.2 |
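A minimal sketch of this column-by-column approach: solve A x = e_i for each unit vector e_i and collect the results as the columns of A^-1. It is demonstrated on a small illustrative 2x2 matrix (not the slide's example); function names are illustrative:

```python
def solve(a, b):
    """Solve A x = b by Gauss elimination with partial pivoting."""
    n = len(a)
    a = [row[:] for row in a]
    b = b[:]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p], b[k], b[p] = a[p], a[k], b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x

def inverse(a):
    """Build A^-1 one column at a time: column i solves A x = e_i."""
    n = len(a)
    cols = [solve(a, [1.0 if r == i else 0.0 for r in range(n)])
            for i in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]  # transpose

# illustrative example: A = [[2, 1], [1, 1]] has inverse [[1, -1], [-1, 2]]
Ainv = inverse([[2.0, 1.0], [1.0, 1.0]])
print(Ainv)  # -> [[1.0, -1.0], [-1.0, 2.0]]
```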
Iterative Methods for Linear Algebraic Equations
Iterative Methods
Solve each equation for the unknown on its diagonal:

    x_i = b_i / a_ii - Σ_{j=1, j≠i}^{n} ( a_ij / a_ii ) x_j ,   i = 1, 2, ..., n
3. Generate a sequence of approximate solution vectors by computing

    X^(k+1) = T X^(k) + C ,   k: iteration number

or, componentwise (Jacobi iteration),

    x_i^(k+1) = ( b_i - Σ_{j=1, j≠i}^{n} a_ij x_j^(k) ) / a_ii ,   i = 1, 2, ..., n
4. Terminate the procedure when

    ε_a < ε_s

or the maximum number of iterations is reached.
How to define ε_a:

    ε_a = || x^(k) - x^(k-1) || / || x^(k) ||

Vector norms:

    || x ||_∞ = max_{1≤i≤n} | x_i |            (maximum magnitude norm)
    || x ||_1 = Σ_{i=1}^{n} | x_i |
    || x ||_p = ( Σ_{i=1}^{n} | x_i |^p )^(1/p)
    || x ||_2 = ( Σ_{i=1}^{n} x_i^2 )^(1/2)     (Euclidean norm)

The distance between two vectors is defined as the norm of the difference of the vectors.
For y = [ y1  y2  ...  yn ]^T:

    || x - y ||_∞ = max_{1≤i≤n} | x_i - y_i |
    || x - y ||_2 = ( Σ_{i=1}^{n} ( x_i - y_i )^2 )^(1/2)
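These norms translate directly into Python (function names are illustrative):

```python
def norm_inf(x):
    """Maximum magnitude norm: max |x_i|."""
    return max(abs(v) for v in x)

def norm_2(x):
    """Euclidean norm: (sum of x_i^2)^(1/2)."""
    return sum(v * v for v in x) ** 0.5

def distance_inf(x, y):
    """Distance between two vectors: the max-norm of their difference."""
    return norm_inf([a - b for a, b in zip(x, y)])

print(norm_inf([3, -4]))             # -> 4
print(norm_2([3, -4]))               # -> 5.0
print(distance_inf([1, 2], [4, 6]))  # -> 4
```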
Solve the linear system with the Jacobi iterative method:

    4x1 - x2 + x3 = 7
    4x1 - 8x2 + x3 = -21
    -2x1 + x2 + 5x3 = 15

    x^(0) = [ 1  2  2 ]^T

Carry out four iterations.
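Assuming the signs of the system as reconstructed above (its exact solution is x = (2, 4, 3)), a minimal Jacobi sketch in Python:

```python
def jacobi(a, b, x0, iterations):
    """Jacobi method: every x_i at iteration k+1 uses only iteration-k values."""
    n = len(b)
    x = x0[:]
    for _ in range(iterations):
        x = [(b[i] - sum(a[i][j] * x[j] for j in range(n) if j != i)) / a[i][i]
             for i in range(n)]
    return x

A = [[4.0, -1.0, 1.0], [4.0, -8.0, 1.0], [-2.0, 1.0, 5.0]]
b = [7.0, -21.0, 15.0]
x = jacobi(A, b, [1.0, 2.0, 2.0], 4)
print([round(v, 4) for v in x])  # -> [1.9906, 3.9766, 3.0]
```

After only four iterations the iterates are already close to (2, 4, 3).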
Gauss-Seidel Method
Unlike Jacobi, each new value is used as soon as it becomes available:

    x_i^(k+1) = ( b_i - Σ_{j=1}^{i-1} a_ij x_j^(k+1) - Σ_{j=i+1}^{n} a_ij x_j^(k) ) / a_ii ,   i = 1, 2, ..., n
Solve the linear system with the Gauss-Seidel iterative method:

    4x1 - x2 + x3 = 7
    4x1 - 8x2 + x3 = -21
    -2x1 + x2 + 5x3 = 15

    x^(0) = [ 1  2  2 ]^T

Iterate until ε_a ≤ 0.001. Use the maximum magnitude norm to calculate ε_a.
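A Gauss-Seidel sketch with the stopping criterion above (signs of the system assumed as reconstructed; function names are illustrative):

```python
def gauss_seidel(a, b, x0, tol, max_iter=100):
    """Gauss-Seidel: new values are used as soon as they are computed.
    Stops when eps_a = ||x_new - x_old||_inf / ||x_new||_inf <= tol."""
    n = len(b)
    x = x0[:]
    for _ in range(max_iter):
        old = x[:]
        for i in range(n):
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / a[i][i]
        eps = max(abs(x[i] - old[i]) for i in range(n)) / max(abs(v) for v in x)
        if eps <= tol:
            break
    return x

A = [[4.0, -1.0, 1.0], [4.0, -8.0, 1.0], [-2.0, 1.0, 5.0]]
b = [7.0, -21.0, 15.0]
x = gauss_seidel(A, b, [1.0, 2.0, 2.0], 0.001)
print([round(v, 2) for v in x])  # -> [2.0, 4.0, 3.0]
```

Because the new values are reused immediately, Gauss-Seidel typically reaches the tolerance in fewer iterations than Jacobi on the same system.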
Convergence criterion
Diagonally Dominant
For each equation i:

    | a_ii | > Σ_{j=1, j≠i}^{n} | a_ij |

Note that this is not a necessary condition, i.e. the system may still converge even if A is not diagonally dominant.
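The criterion is a one-line check in Python; here it is applied to the coefficient matrix of the Jacobi/Gauss-Seidel example (signs assumed as reconstructed above) and to an illustrative non-dominant matrix:

```python
def is_diagonally_dominant(a):
    """Check |a_ii| > sum of |a_ij| over j != i, for every row i."""
    n = len(a)
    return all(abs(a[i][i]) > sum(abs(a[i][j]) for j in range(n) if j != i)
               for i in range(n))

print(is_diagonally_dominant([[4, -1, 1], [4, -8, 1], [-2, 1, 5]]))  # -> True
print(is_diagonally_dominant([[1, 2], [3, 1]]))                      # -> False
```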
[Figure: (a) a convergent and (b) a divergent iteration pattern]
Improvement of Convergence Using Relaxation
    0 < w < 2
• For choices of w with 0 < w < 1, the procedures are called under-relaxation methods.
• For choices of w with 1 < w, the procedures are called over-relaxation methods. These methods are abbreviated SOR, for Successive Over-Relaxation.
Substituting w into the Gauss-Seidel iterative equation:

    x_i^(k+1) = (1 - w) x_i^(k) + ( w / a_ii ) ( b_i - Σ_{j=1}^{i-1} a_ij x_j^(k+1) - Σ_{j=i+1}^{n} a_ij x_j^(k) ) ,   i = 1, 2, ..., n
Use the SOR method with w = 1.25 to solve the linear system

    4x1 + 3x2 = 24
    3x1 + 4x2 - x3 = 30
    -x2 + 4x3 = -24

    x^(0) = [ 1  1  1 ]^T
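An SOR sketch for this exercise (signs of the system assumed as reconstructed above, where the exact solution is x = (3, 4, -5); function names are illustrative):

```python
def sor(a, b, x0, w, tol=1e-7, max_iter=200):
    """SOR: a weighted blend of the old value and the Gauss-Seidel update."""
    n = len(b)
    x = x0[:]
    for _ in range(max_iter):
        old = x[:]
        for i in range(n):
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - w) * x[i] + w * (b[i] - s) / a[i][i]
        if max(abs(x[i] - old[i]) for i in range(n)) <= tol:
            break
    return x

A = [[4.0, 3.0, 0.0], [3.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [24.0, 30.0, -24.0]
x = sor(A, b, [1.0, 1.0, 1.0], 1.25)
print([round(v, 4) for v in x])  # -> [3.0, 4.0, -5.0]
```

Setting w = 1 recovers plain Gauss-Seidel, so the same routine can be used to compare the two convergence rates.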