
Matrices in Data Science

The document provides an introduction to linear algebra concepts including vectors, matrices, and their properties. Vectors are defined as one-dimensional arrays that can represent both magnitude and direction. Matrices are two-dimensional arrays of numbers that can be added, multiplied, and have properties like determinants, traces, and inverses calculated. The rank of a matrix is also introduced as the largest number of linearly independent columns.


LINEAR ALGEBRA

Dr. A. Kannan
(Retd. Professor, Department of Information Science and Technology, CEG Campus, Anna University, Chennai-25)
Senior Professor,
School of Computer Science and Engineering,
VIT, Vellore-632014.

01/11/2024 1
LINEAR ALGEBRA
• Linear algebra is the study of vectors, linear functions, and linear transformations.
VECTORS

• Vector definition
• Computer science: a vector is a one-dimensional array of ordered real-valued scalars.
• Mathematics: a vector is a quantity possessing both magnitude and direction, represented by an arrow whose direction indicates the vector's direction and whose length is proportional to its magnitude.
VECTORS
Vector: a one-dimensional array of numbers

Examples:
row vector [1 4 2],  column vector [2; 1]

Identity (standard basis) vectors in R^4:
e1 = [1 0 0 0]^T, e2 = [0 1 0 0]^T, e3 = [0 0 1 0]^T, e4 = [0 0 0 1]^T
VECTORS
• Vectors are written in column form or in row form
– Denoted by bold-font lower-case letters
VECTORS
• A general vector x with n elements, x = (x1, x2, ..., xn), lies in the n-dimensional space R^n
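The vector definitions above map directly onto NumPy arrays; a minimal sketch (NumPy is an assumption of this example, not part of the original deck):

```python
import numpy as np

# A vector: a one-dimensional array of ordered real-valued scalars
x = np.array([1.0, 4.0, 2.0])   # a vector with n = 3 elements, i.e. in R^3
row = x.reshape(1, -1)          # row form, shape (1, 3)
col = x.reshape(-1, 1)          # column form, shape (3, 1)

# The standard basis ("identity") vectors e1..e4 in R^4 are the columns of I4
e = np.eye(4)
print(e[:, 0])                  # e1 = [1. 0. 0. 0.]
```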
MATRICES
Matrix: a two-dimensional array of numbers

Examples:
zero matrix [0 0 0; 0 0 0],  identity matrix [1 0; 0 1]

diagonal: [1 0 0 0; 0 4 0 0; 0 0 0 0; 0 0 0 6]
tridiagonal: [1 2 0 0; 3 4 1 0; 0 1 4 1; 0 0 2 1]
MATRICES
Examples:
symmetric: [2 -1 1; -1 0 5; 1 5 4]
upper triangular: [1 2 1 3; 0 4 1 0; 0 0 4 1; 0 0 0 1]
Determinant of a Matrix

Defined for square matrices only.

Example (cofactor expansion along the first column):

    [ 2  3 -1]
det [ 1  0  5] = 2·det[0 5; 5 4] - 1·det[3 -1; 5 4] - 1·det[3 -1; 0 5]
    [-1  5  4]
              = 2(-25) - 1(12 + 5) - 1(15 - 0) = -82
CS 404/504, Fall 2021

Matrices

• A matrix is a rectangular array of real-valued scalars arranged in m horizontal rows and n vertical columns
  - Each element a_ij belongs to the ith row and jth column
  - The elements are denoted a_ij, A_ij, or A[i, j]
• For a matrix A with m rows and n columns, the size (dimension) is m×n
  - Matrices are denoted by bold-font upper-case letters
Matrices

• Addition or subtraction: (A ± B)_ij = A_ij ± B_ij
• Scalar multiplication: (cA)_ij = c·A_ij
• Matrix multiplication: (AB)_ij = A_i1·B_1j + A_i2·B_2j + ... + A_in·B_nj
  - Defined only if the number of columns of the left matrix is the same as the number of rows of the right matrix
  - Note that, in general, AB ≠ BA
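These elementwise and product rules are easy to check numerically; a short sketch assuming NumPy (not part of the original slides):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

S = A + B      # (A + B)_ij = A_ij + B_ij (same size required)
C = 3 * A      # (cA)_ij = c * A_ij
P = A @ B      # (AB)_ij = sum_k A_ik * B_kj (inner dimensions must match)

# Matrix multiplication is not commutative in general
print(np.allclose(A @ B, B @ A))   # False for this A and B
```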
Matrices

• Transpose of a matrix A, denoted A^T, has the rows and columns exchanged: (A^T)_ij = A_ji
  - Some properties: (A^T)^T = A, (A + B)^T = A^T + B^T, (AB)^T = B^T·A^T
• Square matrix: has the same number of rows and columns
• Identity matrix (I_n): has ones on the main diagonal, and zeros elsewhere
  - E.g.: identity matrix of size 3×3: I_3 = [1 0 0; 0 1 0; 0 0 1]
Matrices

• Determinant of a matrix, denoted by det(A) or |A|, is a real-valued scalar encoding certain properties of the matrix
  - E.g., for a matrix of size 2×2: det([a b; c d]) = ad - bc
  - For larger-size matrices the determinant is calculated by cofactor expansion, e.g. along row i: det(A) = Σ_j (-1)^(i+j)·A_ij·M_ij
  - In the above, M_ij is a minor of the matrix, obtained by removing the row and column associated with the indices i and j
• Trace of a matrix is the sum of all diagonal elements: tr(A) = Σ_i A_ii
• A matrix for which A = A^T is called a symmetric matrix
Matrices

• Elementwise multiplication of two matrices A and B is called the Hadamard product or elementwise product
  - The math notation is A ⊙ B, where (A ⊙ B)_ij = A_ij·B_ij
Matrix-Vector Products

• Consider a matrix A of size m×n and a vector x of length n
• The matrix can be written in terms of its row vectors (e.g., a1^T is the first row)
• The matrix-vector product Ax is a column vector of length m, whose ith element is the dot product a_i^T·x
• Note the size: (m×n)·(n×1) = (m×1)
Matrix-Matrix Products

• To multiply two matrices A of size n×k and B of size k×m
• We can consider the matrix-matrix product AB as dot products of rows in A and columns in B: (AB)_ij = a_i^T·b_j
• Size: (n×k)·(k×m) = (n×m)
Linear Dependence

• Consider the following matrix: B = [4 -2; 2 -1]
• Notice that for the two columns b1 = (4, 2) and b2 = (-2, -1), we can write b1 = -2·b2
  - This means that the two columns are linearly dependent
• The weighted sum c1·b1 + c2·b2 is referred to as a linear combination of the vectors b1 and b2
  - In this case, a linear combination of the two vectors exists for which b1 + 2·b2 = 0
• A collection of vectors is linearly dependent if there exist coefficients c1, ..., ck, not all equal to zero, so that c1·v1 + ... + ck·vk = 0
• If there is no linear dependence, the vectors are linearly independent
Matrix Rank

• For an m×n matrix, the rank of the matrix is the largest number of linearly independent columns
• The matrix B = [4 -2; 2 -1] from the previous example has rank(B) = 1, since its two columns are linearly dependent
• A matrix with two linearly independent columns has rank 2
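The rank can be computed numerically; a sketch assuming NumPy, using a 2×2 matrix with proportional columns as in the example above (the third column added to C below is purely illustrative):

```python
import numpy as np

# Columns (4, 2) and (-2, -1) satisfy b1 = -2*b2, so the rank is 1
B = np.array([[4.0, -2.0], [2.0, -1.0]])
print(np.linalg.matrix_rank(B))   # 1

# Appending a column that is not a linear combination raises the rank to 2
C = np.array([[4.0, -2.0, 1.0], [2.0, -1.0, 0.0]])
print(np.linalg.matrix_rank(C))   # 2
```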
Inverse of a Matrix

• For a square n×n matrix A with rank n, A⁻¹ is its inverse matrix if their product is an identity matrix I: A·A⁻¹ = A⁻¹·A = I
• Properties of inverse matrices: (A⁻¹)⁻¹ = A, (AB)⁻¹ = B⁻¹·A⁻¹
• If det(A) = 0 (i.e., rank(A) < n), then the inverse does not exist
  - A matrix that is not invertible is called a singular matrix
• Note that finding an inverse of a large matrix is computationally expensive
  - In addition, it can lead to numerical instability
• If the inverse of a matrix is equal to its transpose, the matrix is said to be an orthogonal matrix: A⁻¹ = A^T
Pseudo-Inverse of a Matrix

• Pseudo-inverse A⁺ of a matrix A
  - Also known as the Moore-Penrose pseudo-inverse
• For matrices that are not square, the inverse does not exist
  - Therefore, a pseudo-inverse is used
• If A has more rows than columns (and full column rank), then the pseudo-inverse is A⁺ = (A^T·A)⁻¹·A^T, and A⁺·A = I
• If A has more columns than rows (and full row rank), then the pseudo-inverse is A⁺ = A^T·(A·A^T)⁻¹, and A·A⁺ = I
  - E.g., for a matrix A with dimension m×n, a pseudo-inverse can be found of size n×m, so that the product is an identity matrix
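A sketch of the pseudo-inverse, assuming NumPy; the tall matrix A below is a made-up example, not from the slides:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # more rows than columns (3 x 2)

A_pinv = np.linalg.pinv(A)            # Moore-Penrose pseudo-inverse, 2 x 3
print(np.allclose(A_pinv @ A, np.eye(2)))   # True: a left inverse

# With full column rank, pinv(A) equals (A^T A)^{-1} A^T
left_inv = np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(A_pinv, left_inv))        # True
```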
Adding and Multiplying Matrices

The addition of two matrices A and B:
* Defined only if they have the same size
* C = A + B  means  c_ij = a_ij + b_ij for all i, j

Multiplication of two matrices A (n×m) and B (p×q):
* The product C = AB is defined only if m = p
* C = AB  means  c_ij = Σ_{k=1..m} a_ik·b_kj for all i, j
Systems of Linear Equations
• Definition
A linear equation in n variables x1, x2, x3, …, xn
has the form a1 x1 + a2 x2 + a3 x3 + … + an xn = b
where the coefficients a1, a2, a3, …, an and b are
real numbers.

Solutions for a System of Linear Equations

Figure 1.1 - Unique solution: x + 3y = 9, -2x + y = -4. The lines intersect at (3, 2). Unique solution: x = 3, y = 2.
Figure 1.2 - No solution: -2x + y = 3, -4x + 2y = 2. The lines are parallel; there is no point of intersection. No solutions.
Figure 1.3 - Many solutions: 4x - 2y = 6, 6x - 3y = 9. Both equations have the same graph; any point on the graph is a solution. Many solutions.
Systems of Linear Equations

A system of linear equations can be presented in different forms.

Standard form:
2x1 + 4x2 - 3x3 = 3
2.5x1 - x2 + 3x3 = 5
x1 - 6x3 = 7

Matrix form:
[2    4  -3][x1]   [3]
[2.5 -1   3][x2] = [5]
[1    0  -6][x3]   [7]
Solutions of Linear Equations

x1 = 1, x2 = 2 is a solution to the following equations:
x1 + x2 = 3
x1 + 2x2 = 5
Solutions of Linear Equations

• A set of equations is inconsistent if there exists no solution to the system of equations:
x1 + 2x2 = 3
2x1 + 4x2 = 5
These equations are inconsistent.
Solutions of Linear Equations

• Some systems of equations may have an infinite number of solutions:
x1 + 2x2 = 3
2x1 + 4x2 = 6
x1 = a, x2 = 0.5(3 - a) is a solution for all a.
Graphical Solution of Systems of Linear Equations

x1 + x2 = 3
x1 + 2x2 = 5
The lines intersect at the solution x1 = 1, x2 = 2.
Cramer's Rule is Not Practical

Cramer's rule can be used to solve the system:
x1 = det[3 1; 5 2] / det[1 1; 1 2] = 1,   x2 = det[1 3; 1 5] / det[1 1; 1 2] = 2

Cramer's rule is not practical for large systems:
• To solve an N×N system requires (N+1)(N-1)N! multiplications.
• To solve a 30×30 system, 2.38×10^35 multiplications are needed.
• It can be used if the determinants are computed in an efficient way.
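For comparison, library solvers use LU factorization with pivoting (on the order of N^3 operations) rather than Cramer's rule. A sketch assuming NumPy, on the same 2×2 system:

```python
import numpy as np

# x1 + x2 = 3, x1 + 2*x2 = 5  (the system solved by Cramer's rule above)
A = np.array([[1.0, 1.0], [1.0, 2.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)   # LU factorization with partial pivoting
print(x)                    # [1. 2.]
```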
Naive Gaussian Elimination

• The method consists of two steps:
  – Forward Elimination: the system is reduced to upper triangular form. A sequence of elementary operations is used.
  – Backward Substitution: solve the system starting from the last variable.

[a11 a12 a13][x1]   [b1]       [a11 a12  a13 ][x1]   [b1 ]
[a21 a22 a23][x2] = [b2]  →   [0   a22' a23'][x2] = [b2']
[a31 a32 a33][x3]   [b3]       [0   0    a33'][x3]   [b3']
Elementary Row Operations

• Adding a multiple of one row to another

• Multiply any row by a non-zero constant

Example: Forward Elimination

[ 6  -2  2    4][x1]   [ 16]
[12  -8  6   10][x2] = [ 26]
[ 3 -13  9    3][x3]   [-19]
[-6   4  1  -18][x4]   [-34]

Part 1: Forward Elimination
Step 1: Eliminate x1 from equations 2, 3, 4
[6  -2   2    4][x1]   [ 16]
[0  -4   2    2][x2] = [ -6]
[0 -12   8    1][x3]   [-27]
[0   2   3  -14][x4]   [-18]
Example: Forward Elimination

Step 2: Eliminate x2 from equations 3, 4
[6 -2  2    4][x1]   [ 16]
[0 -4  2    2][x2] = [ -6]
[0  0  2   -5][x3]   [ -9]
[0  0  4  -13][x4]   [-21]

Step 3: Eliminate x3 from equation 4
[6 -2  2   4][x1]   [16]
[0 -4  2   2][x2] = [-6]
[0  0  2  -5][x3]   [-9]
[0  0  0  -3][x4]   [-3]
Example: Forward Elimination

Summary of the Forward Elimination:
[ 6  -2  2    4][x1]   [ 16]        [6 -2  2   4][x1]   [16]
[12  -8  6   10][x2] = [ 26]   →   [0 -4  2   2][x2] = [-6]
[ 3 -13  9    3][x3]   [-19]        [0  0  2  -5][x3]   [-9]
[-6   4  1  -18][x4]   [-34]        [0  0  0  -3][x4]   [-3]
Example: Backward Substitution

[6 -2  2   4][x1]   [16]
[0 -4  2   2][x2] = [-6]
[0  0  2  -5][x3]   [-9]
[0  0  0  -3][x4]   [-3]

Solve for x4, then solve for x3, ..., then solve for x1:
x4 = -3/-3 = 1
x3 = (-9 + 5x4)/2 = (-9 + 5)/2 = -2
x2 = (-6 - 2x3 - 2x4)/(-4) = (-6 + 4 - 2)/(-4) = 1
x1 = (16 + 2x2 - 2x3 - 4x4)/6 = (16 + 2 + 4 - 4)/6 = 3
Forward Elimination

To eliminate x1 (for 2 ≤ i ≤ n):
a_ij ← a_ij - (a_i1/a_11)·a_1j   (1 ≤ j ≤ n)
b_i  ← b_i - (a_i1/a_11)·b_1

To eliminate x2 (for 3 ≤ i ≤ n):
a_ij ← a_ij - (a_i2/a_22)·a_2j   (2 ≤ j ≤ n)
b_i  ← b_i - (a_i2/a_22)·b_2
Forward Elimination

In general, to eliminate xk (for k+1 ≤ i ≤ n):
a_ij ← a_ij - (a_ik/a_kk)·a_kj   (k ≤ j ≤ n)
b_i  ← b_i - (a_ik/a_kk)·b_k

Continue until x_{n-1} is eliminated.
Backward Substitution

x_n = b_n / a_nn
x_{n-1} = (b_{n-1} - a_{n-1,n}·x_n) / a_{n-1,n-1}
x_{n-2} = (b_{n-2} - a_{n-2,n}·x_n - a_{n-2,n-1}·x_{n-1}) / a_{n-2,n-2}
...
x_i = (b_i - Σ_{j=i+1..n} a_ij·x_j) / a_ii
Naive Gaussian Elimination

o The method consists of two steps:
  o Forward Elimination: the system is reduced to upper triangular form. A sequence of elementary operations is used.
  o Backward Substitution: solve the system starting from the last variable. Solve for xn, xn-1, ..., x1.

[a11 a12 a13][x1]   [b1]       [a11 a12  a13 ][x1]   [b1 ]
[a21 a22 a23][x2] = [b2]  →   [0   a22' a23'][x2] = [b2']
[a31 a32 a33][x3]   [b3]       [0   0    a33'][x3]   [b3']
Example 1

Solve using Naive Gaussian Elimination.
Part 1: Forward Elimination. Step 1: Eliminate x1 from equations 2, 3
x1 + 2x2 + 3x3 = 8      eq1 unchanged (pivot equation)
2x1 + 3x2 + 2x3 = 10    eq2 ← eq2 - (2/1)·eq1
3x1 + x2 + 2x3 = 7      eq3 ← eq3 - (3/1)·eq1

x1 + 2x2 + 3x3 = 8
      -x2 - 4x3 = -6
    -5x2 - 7x3 = -17
Example 1

Part 1: Forward Elimination. Step 2: Eliminate x2 from equation 3
x1 + 2x2 + 3x3 = 8     eq1 unchanged
      -x2 - 4x3 = -6    eq2 unchanged (pivot equation)
    -5x2 - 7x3 = -17    eq3 ← eq3 - (-5/-1)·eq2

x1 + 2x2 + 3x3 = 8
      -x2 - 4x3 = -6
           13x3 = 13
Example 1

Backward Substitution:
x3 = b3/a33 = 13/13 = 1
x2 = (b2 - a23·x3)/a22 = (-6 + 4x3)/(-1) = 2
x1 = (b1 - a12·x2 - a13·x3)/a11 = (8 - 2x2 - 3x3)/1 = 1

The solution is (x1, x2, x3) = (1, 2, 1).
Determinant

The elementary row operations do not affect the determinant.

Example:
A = [1 2 3; 2 3 2; 3 1 2]  → (elementary operations) →  A' = [1 2 3; 0 -1 -4; 0 0 13]
det(A) = det(A') = 1·(-1)·13 = -13
How Many Solutions Does a System of Equations AX = B Have?

• Unique solution: det(A) ≠ 0; the reduced matrix has no zero rows.
• No solution: det(A) = 0; the reduced matrix has one or more zero rows, with corresponding B elements ≠ 0.
• Infinite solutions: det(A) = 0; the reduced matrix has one or more zero rows, with corresponding B elements = 0.
Examples

Unique solution:
[1 2]X = [1]   reduced:  [1  2]X = [ 1]   solution: X = [0; 0.5]
[3 4]    [2]             [0 -2]    [-1]

No solution:
[1 2]X = [2]   reduced:  [1 2]X = [ 2]   0 = -1 is impossible; no solution
[2 4]    [3]             [0 0]    [-1]

Infinite # of solutions:
[1 2]X = [2]   reduced:  [1 2]X = [2]   X = [2 - 2a; a] for any a
[2 4]    [4]             [0 0]    [0]
Pseudo-Code: Forward Elimination
Do k = 1 to n-1
Do i = k+1 to n
factor = ai,k / ak,k
Do j = k+1 to n
ai,j = ai,j – factor * ak,j
End Do
bi = bi – factor * bk
End Do
End Do
Pseudo-Code: Back Substitution
xn = bn / an,n
Do i = n-1 downto 1
sum = bi
Do j = i+1 to n
sum = sum – ai,j * xj
End Do
xi = sum / ai,i
End Do
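The forward-elimination and back-substitution pseudo-code above translates directly to Python; a minimal sketch assuming NumPy, checked against the 4×4 example worked earlier (solution x = (3, 1, -2, 1)):

```python
import numpy as np

def naive_gauss(a, b):
    """Naive Gaussian elimination: no pivoting, so it fails if a
    zero (or tiny) pivot a[k, k] is encountered."""
    a = np.array(a, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    # Forward elimination: reduce to upper triangular form
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = a[i, k] / a[k, k]
            a[i, k:] -= factor * a[k, k:]
            b[i] -= factor * b[k]
    # Backward substitution: solve from the last variable upward
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - a[i, i + 1:] @ x[i + 1:]) / a[i, i]
    return x

A = [[6, -2, 2, 4], [12, -8, 6, 10], [3, -13, 9, 3], [-6, 4, 1, -18]]
b = [16, 26, -19, -34]
x = naive_gauss(A, b)
print(x)   # [ 3.  1. -2.  1.]
```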
Matrix Inversion

Activity: Does the following matrix have an inverse?

[ 2  1  1]     [2  1  1]
[ 4 -6  0]  ≡  [0 -8 -2]
[-2  7  2]     [0  0  1]

The reduced matrix has no zero rows, so det(A) ≠ 0. Therefore, the inverse exists!
Matrix Inversion

How to find the inverse of a matrix?

Use the property: A·A⁻¹ = I

Treat A⁻¹ as an unknown matrix and solve for it.
Matrix Inversion

Let us first treat each column of A⁻¹ separately:

[ 2  1  1][x11]   [1]     [ 2  1  1][x12]   [0]     [ 2  1  1][x13]   [0]
[ 4 -6  0][x21] = [0]     [ 4 -6  0][x22] = [1]     [ 4 -6  0][x23] = [0]
[-2  7  2][x31]   [0]     [-2  7  2][x32]   [0]     [-2  7  2][x33]   [1]
Matrix Inversion

A more efficient way is called the Gauss-Jordan method: augment A with the identity matrix and perform row manipulations until the left side becomes the identity matrix.

[A | I]:
[ 2  1  1 | 1 0 0]
[ 4 -6  0 | 0 1 0]
[-2  7  2 | 0 0 1]

Eliminate below the first pivot:
[2  1  1 |  1 0 0]
[0 -8 -2 | -2 1 0]
[0  8  3 |  1 0 1]

Eliminate below the second pivot:
[2  1  1 |  1 0 0]
[0 -8 -2 | -2 1 0]
[0  0  1 | -1 1 1]

Eliminate above the third pivot:
[2  1  0 |  2 -1 -1]
[0 -8  0 | -4  3  2]
[0  0  1 | -1  1  1]

Normalize the second row:
[2  1  0 |   2   -1   -1]
[0  1  0 | 1/2 -3/8 -1/4]
[0  0  1 |  -1    1    1]

Eliminate above the second pivot:
[2  0  0 | 3/2 -5/8 -3/4]
[0  1  0 | 1/2 -3/8 -1/4]
[0  0  1 |  -1    1    1]

Normalize the first row, giving [I | A⁻¹]:
[1  0  0 | 3/4 -5/16 -3/8]
[0  1  0 | 1/2  -3/8 -1/4]
[0  0  1 |  -1     1    1]

Then the right side is A⁻¹.
Matrix Inversion

Finally, test if this is correct:

[ 2  1  1]   [3/4 -5/16 -3/8]   [1 0 0]
[ 4 -6  0] · [1/2  -3/8 -1/4] = [0 1 0]
[-2  7  2]   [ -1     1    1]   [0 0 1]

Activity: Verify that this is indeed true.
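The Gauss-Jordan result can also be verified numerically; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])

# The inverse obtained above by the Gauss-Jordan method
A_inv = np.array([[3/4, -5/16, -3/8],
                  [1/2, -3/8, -1/4],
                  [-1.0, 1.0, 1.0]])

print(np.allclose(A @ A_inv, np.eye(3)))     # True
print(np.allclose(np.linalg.inv(A), A_inv))  # True
```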
Rank
Rank enables one to relate matrices to vectors, and vice versa.

Definition
Let A be an m  n matrix. The rows of A may be viewed as row
vectors r1, …, rm, and the columns as column vectors c1, …, cn.
Each row vector will have n components, and each column vector
will have m components. The row vectors will span a subspace of R^n called the row space of A, and the column vectors will span a subspace of R^m called the column space of A.
Example 1

Consider the matrix
A = [1 2 -1 2; 3 4 1 6; 5 4 1 0]

(1) The row vectors of A are
r1 = (1, 2, -1, 2), r2 = (3, 4, 1, 6), r3 = (5, 4, 1, 0)
These vectors span a subspace of R^4 called the row space of A.

(2) The column vectors of A are
c1 = (1, 3, 5), c2 = (2, 4, 4), c3 = (-1, 1, 1), c4 = (2, 6, 0)
These vectors span a subspace of R^3 called the column space of A.
Theorem 1
The row space and the column space of a matrix A have the
same dimension.

Definition
The dimension of the row space and the column space of a matrix
A is called the rank of A. The rank of A is denoted rank(A).

Example 2
Determine the rank of the matrix
1 2 3 
A  0 1 2
2 5 8 
 
Solution
The third row of A is a linear combination of the first two rows:
(2, 5, 8) = 2(1, 2, 3) + (0, 1, 2)
Hence the three rows of A are linearly dependent.
The rank of A must be less than 3. Since (1, 2, 3) is not a scalar
multiple of (0, 1, 2), these two vectors are linearly independent.
These vectors form a basis for the row space of A.
Thus rank(A) = 2.

Theorem 2
The nonzero row vectors of a matrix A that is in reduced
echelon form are a basis for the row space of A. The rank of A is
the number of nonzero row vectors.

Example 3

Find the rank of the matrix
A = [1 2 0 0; 0 0 1 0; 0 0 0 1; 0 0 0 0]

This matrix is in reduced echelon form. There are three nonzero row vectors, namely (1, 2, 0, 0), (0, 0, 1, 0), and (0, 0, 0, 1). According to the previous theorem, these three vectors form a basis for the row space of A.
Rank(A) = 3.
Theorem 3
Let A and B be row equivalent matrices. Then A and B have the same row space, and rank(A) = rank(B).

Theorem 4

Let E be a reduced echelon form of a matrix A. The nonzero row


vectors of E form a basis for the row space of A. The rank of A
is the number of nonzero row vectors in E.

Example 4

Find a basis for the row space of the following matrix A, and determine its rank.
A = [1 2 3; 2 5 4; 1 1 5]

Solution
Use elementary row operations to find a reduced echelon form of the matrix A. We get
[1 2 3]    [1  2  3]    [1 0  7]
[2 5 4] ≈ [0  1 -2] ≈ [0 1 -2]
[1 1 5]    [0 -1  2]    [0 0  0]

The two vectors (1, 0, 7), (0, 1, -2) form a basis for the row space of A. Rank(A) = 2.
Orthonormal Vectors and Projections
Definition
A set of vectors in a vector space V is said to be an orthogonal
set if every pair of vectors in the set is orthogonal. The set is said
to be an orthonormal set if it is orthogonal and each vector is a
unit vector.

Example 1

Show that the set {(1, 0, 0), (0, 3/5, 4/5), (0, 4/5, -3/5)} is an orthonormal set.

Solution
(1) Orthogonal:
(1, 0, 0)·(0, 3/5, 4/5) = 0
(1, 0, 0)·(0, 4/5, -3/5) = 0
(0, 3/5, 4/5)·(0, 4/5, -3/5) = 12/25 - 12/25 = 0

(2) Unit vectors:
||(1, 0, 0)|| = √(1² + 0² + 0²) = 1
||(0, 3/5, 4/5)|| = √(0² + (3/5)² + (4/5)²) = 1
||(0, 4/5, -3/5)|| = √(0² + (4/5)² + (-3/5)²) = 1

Thus the set is an orthonormal set.
Theorem 5
An orthogonal set of nonzero vectors in a vector space is linearly independent.

Proof: Let {v1, ..., vm} be an orthogonal set of nonzero vectors in a vector space V. Let us examine the identity
c1·v1 + c2·v2 + ... + cm·vm = 0
Let vi be the ith vector of the orthogonal set. Take the dot product of each side of the equation with vi and use the properties of the dot product. We get
(c1·v1 + c2·v2 + ... + cm·vm)·vi = 0·vi
c1(v1·vi) + c2(v2·vi) + ... + cm(vm·vi) = 0
Since the vectors v1, ..., vm are mutually orthogonal, vj·vi = 0 unless j = i. Thus
ci(vi·vi) = 0
Since vi is nonzero, vi·vi ≠ 0. Thus ci = 0.
Letting i = 1, ..., m, we get c1 = 0, ..., cm = 0, proving that the vectors are linearly independent.
Definition
A basis that is an orthogonal set is said to be an orthogonal
basis.
A basis that is an orthonormal set is said to be an orthonormal
basis.
Standard Bases
• R2: {(1, 0), (0, 1)}
• R3: {(1, 0, 0), (0, 1, 0), (0, 0, 1)} orthonormal bases
• Rn: {(1, …, 0), …, (0, …, 1)}

Theorem 6
Let {u1, ..., un} be an orthonormal basis for a vector space V. Let v be a vector in V. Then v can be written as a linear combination of these basis vectors as follows:
v = (v·u1)u1 + (v·u2)u2 + ... + (v·un)un
Example 2
The following vectors u1, u2, and u3 form an orthonormal basis for R^3. Express the vector v = (7, -5, 10) as a linear combination of these vectors.
u1 = (1, 0, 0), u2 = (0, 3/5, 4/5), u3 = (0, 4/5, -3/5)

Solution
v·u1 = (7, -5, 10)·(1, 0, 0) = 7
v·u2 = (7, -5, 10)·(0, 3/5, 4/5) = -3 + 8 = 5
v·u3 = (7, -5, 10)·(0, 4/5, -3/5) = -4 - 6 = -10

Thus v = 7u1 + 5u2 - 10u3.
Projection of One Vector onto Another Vector

Let v and u be vectors in R^n with angle a (0 ≤ a ≤ π) between them.
Figure 4.17: OA is the projection of v onto u.
|OA| = |OB|·cos a = ||v||·cos a = ||v||·(v·u)/(||v||·||u||) = (v·u)/||u||
OA = ((v·u)/||u||)·(u/||u||) = ((v·u)/(u·u))·u
Note: if a > π/2 then (v·u)/(u·u) < 0.
So we define proj_u v = ((v·u)/(u·u))·u.
Definition
The projection of a vector v onto a nonzero vector u in R^n is denoted proj_u v and is defined by (Figure 4.18)
proj_u v = ((v·u)/(u·u))·u
Example 3
Determine the projection of the vector v = (6, 7) onto the vector u = (1, 4).

Solution
v·u = (6, 7)·(1, 4) = 6 + 28 = 34
u·u = (1, 4)·(1, 4) = 1 + 16 = 17
Thus
proj_u v = ((v·u)/(u·u))·u = (34/17)(1, 4) = (2, 8)
The projection of v onto u is (2, 8).
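The projection formula is one line of code; a sketch assuming NumPy, reproducing the example above:

```python
import numpy as np

def proj(v, u):
    """Projection of v onto a nonzero vector u: ((v.u)/(u.u)) * u."""
    return (np.dot(v, u) / np.dot(u, u)) * u

v = np.array([6.0, 7.0])
u = np.array([1.0, 4.0])
print(proj(v, u))   # [2. 8.]
```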
Theorem 7
The Gram-Schmidt Orthogonalization Process
Let {v1, ..., vn} be a basis for a vector space V. The set of vectors {u1, ..., un} defined as follows is orthogonal. To obtain an orthonormal basis for V, normalize each of the vectors u1, ..., un.
u1 = v1
u2 = v2 - proj_u1 v2
u3 = v3 - proj_u1 v3 - proj_u2 v3
...
un = vn - proj_u1 vn - ... - proj_u(n-1) vn
Example 4
The set {(1, 2, 0, 3), (4, 0, 5, 8), (8, 1, 5, 6)} is linearly independent in R^4. The vectors form a basis for a three-dimensional subspace V of R^4. Construct an orthonormal basis for V.

Solution
Let v1 = (1, 2, 0, 3), v2 = (4, 0, 5, 8), v3 = (8, 1, 5, 6).
Use the Gram-Schmidt process to construct an orthogonal set {u1, u2, u3} from these vectors.
Let u1 = v1 = (1, 2, 0, 3)
Let u2 = v2 - proj_u1 v2 = v2 - ((v2·u1)/(u1·u1))·u1 = (2, -4, 5, 2)
Let u3 = v3 - proj_u1 v3 - proj_u2 v3
       = v3 - ((v3·u1)/(u1·u1))·u1 - ((v3·u2)/(u2·u2))·u2 = (4, 1, 0, -2)
The set {(1, 2, 0, 3), (2, -4, 5, 2), (4, 1, 0, -2)} is an orthogonal basis for V.
Normalize them to get an orthonormal basis:
||(1, 2, 0, 3)|| = √(1² + 2² + 0² + 3²) = √14
||(2, -4, 5, 2)|| = √(2² + (-4)² + 5² + 2²) = 7
||(4, 1, 0, -2)|| = √(4² + 1² + 0² + (-2)²) = √21
Orthonormal basis for V:
{(1/√14, 2/√14, 0, 3/√14), (2/7, -4/7, 5/7, 2/7), (4/√21, 1/√21, 0, -2/√21)}
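The Gram-Schmidt process above can be implemented in a few lines; a sketch assuming NumPy, checked on the basis from Example 4:

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthonormal basis for span(vectors) via Gram-Schmidt."""
    basis = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for q in basis:
            u -= np.dot(u, q) * q          # subtract the projection onto q
        basis.append(u / np.linalg.norm(u))
    return np.array(basis)

V = [(1, 2, 0, 3), (4, 0, 5, 8), (8, 1, 5, 6)]
Q = gram_schmidt(V)

# Rows of Q are orthonormal, so Q Q^T is the 3x3 identity
print(np.allclose(Q @ Q.T, np.eye(3)))                 # True
print(np.allclose(Q[1], np.array([2, -4, 5, 2]) / 7))  # u2 normalized
```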
Problems with Naive Gaussian Elimination

o The Naive Gaussian Elimination may fail for very simple cases (the pivoting element is zero):
[0 1][x1] = [1]
[1 1][x2]   [2]

o A very small pivoting element may result in serious computation errors:
[10^-10  1][x1] = [1]
[1       1][x2]   [2]
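The small-pivot failure can be reproduced by hand in floating point; a sketch assuming NumPy and IEEE double precision, using 1e-20 (rather than the slide's 10^-10) so the rounding loss is total:

```python
import numpy as np

# Naive elimination on [[1e-20, 1], [1, 1]], b = [1, 2], worked by hand:
factor = 1.0 / 1e-20            # enormous multiplier from the tiny pivot
a22 = 1.0 - factor * 1.0        # -1e20: the original 1 is lost to rounding
b2 = 2.0 - factor * 1.0         # -1e20: the original 2 is lost to rounding
x2 = b2 / a22                   # 1.0
x1 = (1.0 - 1.0 * x2) / 1e-20   # 0.0, but the true x1 is ~1
print(x1, x2)                   # 0.0 1.0

# A library solver uses partial pivoting (row exchange) and gets it right
A = np.array([[1e-20, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])
print(np.linalg.solve(A, b))    # ~[1. 1.]
```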
Example 2

Solve the following system using Gaussian Elimination with Scaled Partial Pivoting:

[1 -1 2 1][x1]   [ 1]
[3  2 1 4][x2] = [ 1]
[5  8 6 3][x3]   [ 1]
[4  2 5 3][x4]   [-1]
Example 2

Initialization step: find the largest element in magnitude in each row (disregarding sign) to form the scale vector.

[1 -1 2 1][x1]   [ 1]
[3  2 1 4][x2] = [ 1]
[5  8 6 3][x3]   [ 1]
[4  2 5 3][x4]   [-1]

Scale vector S = [2 4 8 5]
Index vector L = [1 2 3 4]
Why Index Vector?
• Index vectors are used because it is much
easier to exchange a single index element
compared to exchanging the values of a
complete row.
• In practical problems with very large N,
exchanging the contents of rows may not be
practical.

Example 2

Forward Elimination, Step 1: eliminate x1. Selection of the pivot equation:

S = [2 4 8 5], L = [1 2 3 4]

Ratios |a_li,1|/S_li for i = 1, 2, 3, 4: {1/2, 3/4, 5/8, 4/5}; the maximum corresponds to l4.
Equation 4 is the first pivot equation. Exchange l4 and l1:
L = [4 2 3 1]
Example 2

Forward Elimination, Step 1: eliminate x1. Update A and B (equation 4 is the first pivot equation):

[0 -1.5  0.75  0.25][x1]   [1.25]
[0  0.5 -2.75  1.75][x2] = [1.75]
[0  5.5 -0.25 -0.75][x3]   [2.25]
[4  2    5     3   ][x4]   [-1  ]
Example 2

Forward Elimination, Step 2: eliminate x2. Selection of the second pivot equation:

[0 -1.5  0.75  0.25][x1]   [1.25]
[0  0.5 -2.75  1.75][x2] = [1.75]
[0  5.5 -0.25 -0.75][x3]   [2.25]
[4  2    5     3   ][x4]   [-1  ]

S = [2 4 8 5], L = [4 2 3 1]
Ratios |a_li,2|/S_li for i = 2, 3, 4: {0.5/4, 5.5/8, 1.5/2}; the maximum corresponds to row 1, so
L = [4 1 3 2]
Example 2

Forward Elimination after Step 2 (the third pivot equation is the row with -2.5):

[0 -1.5  0.75  0.25  ][x1]   [1.25  ]
[0  0   -2.5   1.8333][x2] = [2.1667]
[0  0    2.5   0.1667][x3]   [6.8333]
[4  2    5     3     ][x4]   [-1    ]

L = [4 1 2 3]

Step 3: eliminate x3:
[0 -1.5  0.75  0.25  ][x1]   [1.25  ]
[0  0   -2.5   1.8333][x2] = [2.1667]
[0  0    0     2     ][x3]   [9     ]
[4  2    5     3     ][x4]   [-1    ]
Example 2

Backward Substitution (L = [4 1 2 3]):

[0 -1.5  0.75  0.25  ][x1]   [1.25  ]
[0  0   -2.5   1.8333][x2] = [2.1667]
[0  0    0     2     ][x3]   [9     ]
[4  2    5     3     ][x4]   [-1    ]

x4 = 9/2 = 4.5
x3 = (2.1667 - 1.8333·x4)/(-2.5) = 2.4333
x2 = (1.25 - 0.25·x4 - 0.75·x3)/(-1.5) = 1.1333
x1 = (-1 - 3·x4 - 5·x3 - 2·x2)/4 = -7.2333
Example 3

Solve the following system using Gaussian Elimination with Scaled Partial Pivoting:

[1 -1 2 1][x1]   [ 1]
[3  2 1 4][x2] = [ 1]
[5 -8 6 3][x3]   [ 1]
[4  2 5 3][x4]   [-1]
Example 3

Initialization step:

[1 -1 2 1][x1]   [ 1]
[3  2 1 4][x2] = [ 1]
[5 -8 6 3][x3]   [ 1]
[4  2 5 3][x4]   [-1]

Scale vector S = [2 4 8 5]
Index vector L = [1 2 3 4]
Example 3

Forward Elimination, Step 1: eliminate x1. Selection of the pivot equation:

S = [2 4 8 5], L = [1 2 3 4]
Ratios |a_li,1|/S_li for i = 1, 2, 3, 4: {1/2, 3/4, 5/8, 4/5}; the maximum corresponds to l4.
Equation 4 is the first pivot equation. Exchange l4 and l1:
L = [4 2 3 1]
Example 3

Forward Elimination, Step 1: eliminate x1. Update A and B:

[0  -1.5  0.75  0.25][x1]   [1.25]
[0   0.5 -2.75  1.75][x2] = [1.75]
[0 -10.5 -0.25 -0.75][x3]   [2.25]
[4   2    5     3   ][x4]   [-1  ]
Example 3

Forward Elimination, Step 2: eliminate x2. Selection of the second pivot equation:

S = [2 4 8 5], L = [4 2 3 1]
Ratios |a_li,2|/S_li for i = 2, 3, 4: {0.5/4, 10.5/8, 1.5/2}; the maximum corresponds to row 3, so
L = [4 3 2 1]
Example 3

Forward Elimination, Step 2: eliminate x2. Updating A and B (L = [4 3 2 1]):

[0   0     0.7857  0.3571][x1]   [0.9286]
[0   0    -2.7619  1.7143][x2] = [1.8571]
[0 -10.5  -0.25   -0.75  ][x3]   [2.25  ]
[4   2     5       3     ][x4]   [-1    ]
Example 3

Forward Elimination, Step 3: eliminate x3. Selection of the third pivot equation:

S = [2 4 8 5], L = [4 3 2 1]
Ratios |a_li,3|/S_li for i = 3, 4: {2.7619/4, 0.7857/2}; the maximum corresponds to the current row, so L = [4 3 2 1] is unchanged.
Example 3

Forward Elimination, Step 3: eliminate x3 (L = [4 3 2 1]):

[0   0     0      0.8448][x1]   [1.4569]
[0   0    -2.7619 1.7143][x2] = [1.8571]
[0 -10.5  -0.25  -0.75  ][x3]   [2.25  ]
[4   2     5      3     ][x4]   [-1    ]
Example 3

Backward Substitution (L = [4 3 2 1]):

x4 = b_l4 / a_l4,4 = 1.4569/0.8448 = 1.7245
x3 = (b_l3 - a_l3,4·x4)/a_l3,3 = (1.8571 - 1.7143·x4)/(-2.7619) = 0.3980
x2 = (b_l2 - a_l2,4·x4 - a_l2,3·x3)/a_l2,2 = (2.25 + 0.75·x4 + 0.25·x3)/(-10.5) = -0.3469
x1 = (b_l1 - a_l1,4·x4 - a_l1,3·x3 - a_l1,2·x2)/a_l1,1 = (-1 - 3·x4 - 5·x3 - 2·x2)/4 = -1.8673
How Do We Know If a Solution is Good or Not?

Given AX = B, X is a solution if AX - B = 0.
Compute the residual vector R = AX - B.
Due to rounding error, R may not be exactly zero.
The solution is acceptable if max_i |r_i| ≤ ε.
How Good is the Solution?

[ 1  -1   2   1 ] [x1]   [  1 ]                  [x1]   [ -1.8673 ]
[ 3   2   1   4 ] [x2] = [  1 ]    solution:     [x2] = [ -0.3469 ]
[ 5  -8   6   3 ] [x3]   [  1 ]                  [x3]   [  0.3980 ]
[ 4   2   5   3 ] [x4]   [ -1 ]                  [x4]   [  1.7245 ]

Residuals: R = [ 0.005  0.002  0.003  0.001 ]^T
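The residual check above can be sketched directly. This is a small illustration, not part of the lecture's code; the exact residual magnitudes depend on how many digits of the solution are kept, so the check only asserts that the largest residual is below the tolerance.

```python
# Residual check R = AX - B for Example 3, using the solution rounded
# to four decimals as on the slide (a sketch; tolerance eps is a
# chosen example value, not from the lecture).

A = [[1, -1, 2, 1], [3, 2, 1, 4], [5, -8, 6, 3], [4, 2, 5, 3]]
B = [1, 1, 1, -1]
X = [-1.8673, -0.3469, 0.3980, 1.7245]   # rounded solution

# residual vector computed with the ORIGINAL A and B, as the remarks advise
R = [sum(A[i][j] * X[j] for j in range(4)) - B[i] for i in range(4)]

eps = 0.01
acceptable = max(abs(r) for r in R) <= eps
```

With the four-decimal solution, every residual component is well below 0.01, so the solution is accepted.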
Remarks:
• We use an index vector to avoid physically moving the rows, which
  may not be practical for large problems.
• If we reorder the equations according to the final value of the index
  vector, we obtain a triangular form.
• The scale vector is formed by taking the maximum magnitude in
  each row.
• The scale vector is computed once and does not change.
• The original matrices A and B are used in checking the residuals.
Tridiagonal Systems

Tridiagonal Systems:
• The non-zero elements are on the main diagonal, the superdiagonal,
  and the subdiagonal.
• a(i,j) = 0 if |i - j| > 1

[ 5  1  0  0  0 ] [x1]   [ b1 ]
[ 3  4  1  0  0 ] [x2]   [ b2 ]
[ 0  2  6  2  0 ] [x3] = [ b3 ]
[ 0  0  1  4  1 ] [x4]   [ b4 ]
[ 0  0  0  1  6 ] [x5]   [ b5 ]
Tridiagonal Systems
• Occur in many applications.
• Need less storage (4n-2 values, compared to n^2+n for the general case).
• Selection of pivoting rows is unnecessary (under some conditions).
• Efficiently solved by Gaussian elimination.
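The storage counts above can be checked with a quick calculation: a tridiagonal system stores the diagonal d (n values), subdiagonal a (n-1), superdiagonal c (n-1), and right-hand side b (n), for 4n-2 values total, versus n^2 for a full A plus n for B. A small illustrative sketch (n = 1000 is just an example size):

```python
# Storage comparison for a tridiagonal vs. a general n x n system.
n = 1000
tridiagonal_storage = 4 * n - 2   # d (n) + a (n-1) + c (n-1) + b (n)
full_storage = n * n + n          # full matrix A plus right-hand side B
ratio = full_storage / tridiagonal_storage
```

For n = 1000 this is 3998 values instead of 1,001,000 -- a savings factor of about 250.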
Algorithm to Solve Tridiagonal Systems

• Based on naive Gaussian elimination.
• As in previous Gaussian elimination algorithms:
  - Forward elimination step
  - Backward substitution step
• Elements in the superdiagonal are not affected.
• Elements in the main diagonal and in B need updating.
Tridiagonal System

All the a elements become zeros; the d and b elements need updating;
the c elements are not updated.

[ d1  c1               ] [x1]   [ b1 ]      [ d1  c1               ] [x1]   [ b1  ]
[ a1  d2  c2           ] [x2]   [ b2 ]      [     d2' c2           ] [x2]   [ b2' ]
[     a2  d3  .        ] [x3] = [ b3 ]  =>  [         d3' .        ] [x3] = [ b3' ]
[         .   .  c(n-1)] [ . ]  [ .  ]      [             . c(n-1) ] [ . ]  [ .   ]
[           a(n-1)  dn ] [xn]   [ bn ]      [                  dn' ] [xn]   [ bn' ]
Diagonal Dominance

A matrix A is diagonally dominant if

              n
  |a(i,i)| >  SUM  |a(i,j)|     for 1 <= i <= n
             j=1,
             j!=i

The magnitude of each diagonal element is larger than the sum of the
magnitudes of the off-diagonal elements in the corresponding row.
Diagonal Dominance

Examples:

[ 3  0   1 ]              [ 3  0  1 ]
[ 1  6   1 ]              [ 2  3  2 ]
[ 1  2  -5 ]              [ 1  2  1 ]

Diagonally dominant       Not diagonally dominant
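The definition above translates directly into a check. A minimal sketch (the function name `is_diagonally_dominant` is illustrative), applied to the two example matrices:

```python
# Strict diagonal dominance check: |a(i,i)| > sum of |a(i,j)|, j != i,
# for every row i.

def is_diagonally_dominant(A):
    n = len(A)
    return all(
        abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
        for i in range(n)
    )

M1 = [[3, 0, 1], [1, 6, 1], [1, 2, -5]]   # dominant: 3>1, 6>2, 5>3
M2 = [[3, 0, 1], [2, 3, 2], [1, 2, 1]]    # not: row 2 has |3| < |2| + |2|
```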
Diagonally Dominant Tridiagonal System

• A tridiagonal system is diagonally dominant if

  |d(i)| > |c(i)| + |a(i-1)|     for 1 <= i <= n   (taking a(0) = c(n) = 0)

• Forward elimination preserves diagonal dominance.
Solving a Tridiagonal System

Forward Elimination:
  d(i) = d(i) - (a(i-1) / d(i-1)) c(i-1)
  b(i) = b(i) - (a(i-1) / d(i-1)) b(i-1)        for 2 <= i <= n

Backward Substitution:
  x(n) = b(n) / d(n)
  x(i) = (b(i) - c(i) x(i+1)) / d(i)            for i = n-1, n-2, ..., 1
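The recurrences above can be sketched as a short routine (this is the classic Thomas algorithm; the function name `solve_tridiagonal` is just a label for this sketch). It stores only the four vectors d, a, c, b and overwrites d and b, as in the formulation above.

```python
# Tridiagonal solve using the forward-elimination / backward-substitution
# recurrences above: d = main diagonal, a = subdiagonal, c = superdiagonal,
# b = right-hand side (a sketch, not the lecture's code).

def solve_tridiagonal(d, a, c, b):
    n = len(d)
    d, b = d[:], b[:]                  # keep the caller's arrays intact
    # forward elimination, 2 <= i <= n
    for i in range(1, n):
        m = a[i - 1] / d[i - 1]
        d[i] -= m * c[i - 1]
        b[i] -= m * b[i - 1]
    # backward substitution
    x = [0.0] * n
    x[n - 1] = b[n - 1] / d[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x

# The example solved on the following slides: solution is [2, 1, 1, 1]
x = solve_tridiagonal([5, 5, 5, 5], [1, 1, 1], [2, 2, 2], [12, 9, 8, 6])
```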
Example
Solve

[ 5  2  0  0 ] [x1]   [ 12 ]
[ 1  5  2  0 ] [x2] = [  9 ]
[ 0  1  5  2 ] [x3]   [  8 ]
[ 0  0  1  5 ] [x4]   [  6 ]

D = [5 5 5 5],  A = [1 1 1],  C = [2 2 2],  B = [12 9 8 6]

Forward Elimination:
  d(i) = d(i) - (a(i-1)/d(i-1)) c(i-1),   b(i) = b(i) - (a(i-1)/d(i-1)) b(i-1)   for 2 <= i <= 4

Backward Substitution:
  x4 = b4 / d4,   x(i) = (b(i) - c(i) x(i+1)) / d(i)   for i = 3, 2, 1
Example

D = [5 5 5 5],  A = [1 1 1],  C = [2 2 2],  B = [12 9 8 6]

Forward Elimination:
  d2 = d2 - (a1/d1) c1 = 5 - (1/5)(2) = 4.6
  b2 = b2 - (a1/d1) b1 = 9 - (1/5)(12) = 6.6
  d3 = d3 - (a2/d2) c2 = 5 - (1/4.6)(2) = 4.5652
  b3 = b3 - (a2/d2) b2 = 8 - (1/4.6)(6.6) = 6.5652
  d4 = d4 - (a3/d3) c3 = 5 - (1/4.5652)(2) = 4.5619
  b4 = b4 - (a3/d3) b3 = 6 - (1/4.5652)(6.5652) = 4.5619
Example
Backward Substitution

• After the forward elimination:

  D^T = [ 5  4.6  4.5652  4.5619 ],   B^T = [ 12  6.6  6.5652  4.5619 ]

• Backward substitution:

  x4 = b4 / d4 = 4.5619 / 4.5619 = 1
  x3 = (b3 - c3 x4) / d3 = (6.5652 - 2(1)) / 4.5652 = 1
  x2 = (b2 - c2 x3) / d2 = (6.6 - 2(1)) / 4.6 = 1
  x1 = (b1 - c1 x2) / d1 = (12 - 2(1)) / 5 = 2
Gauss-Jordan Method

• The method reduces the general system of equations AX = B to
  IX = B', where I is an identity matrix and B' holds the solution.
• Only forward elimination is done; no backward substitution is needed.
• It has the same problems as naive Gaussian elimination and can be
  modified to do scaled partial pivoting.
• It takes about 50% more operations than the naive Gaussian method.
Gauss-Jordan Method
Example

[ 2  -2   2 ] [x1]   [ 0 ]
[ 4   2  -1 ] [x2] = [ 7 ]
[ 2  -2   4 ] [x3]   [ 2 ]

Step 1: eliminate x1 from equations 2 and 3

  eq1 <- eq1 / 2
  eq2 <- eq2 - (4/1) eq1
  eq3 <- eq3 - (2/1) eq1

[ 1  -1   1 ] [x1]   [ 0 ]
[ 0   6  -5 ] [x2] = [ 7 ]
[ 0   0   2 ] [x3]   [ 2 ]
Gauss-Jordan Method
Example

[ 1  -1   1 ] [x1]   [ 0 ]
[ 0   6  -5 ] [x2] = [ 7 ]
[ 0   0   2 ] [x3]   [ 2 ]

Step 2: eliminate x2 from equations 1 and 3

  eq2 <- eq2 / 6
  eq1 <- eq1 - (-1/1) eq2
  eq3 <- eq3 - (0/1) eq2

[ 1  0   0.1667 ] [x1]   [ 1.1667 ]
[ 0  1  -0.8333 ] [x2] = [ 1.1667 ]
[ 0  0   2      ] [x3]   [ 2      ]
Gauss-Jordan Method
Example

[ 1  0   0.1667 ] [x1]   [ 1.1667 ]
[ 0  1  -0.8333 ] [x2] = [ 1.1667 ]
[ 0  0   2      ] [x3]   [ 2      ]

Step 3: eliminate x3 from equations 1 and 2

  eq3 <- eq3 / 2
  eq1 <- eq1 - (0.1667/1) eq3
  eq2 <- eq2 - (-0.8333/1) eq3

[ 1  0  0 ] [x1]   [ 1 ]
[ 0  1  0 ] [x2] = [ 2 ]
[ 0  0  1 ] [x3]   [ 1 ]
Gauss-Jordan Method
Example

[ 2  -2   2 ] [x1]   [ 0 ]
[ 4   2  -1 ] [x2] = [ 7 ]
[ 2  -2   4 ] [x3]   [ 2 ]

is transformed to

[ 1  0  0 ] [x1]   [ 1 ]
[ 0  1  0 ] [x2] = [ 2 ]      so the solution is x = [ 1  2  1 ]^T
[ 0  0  1 ] [x3]   [ 1 ]
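The three steps of the worked example can be sketched as a short routine. This is a minimal illustration of Gauss-Jordan reduction without pivoting (the function name `gauss_jordan` is just a label); each pass normalizes the pivot row and then clears the pivot column in all other rows, so no backward substitution is needed.

```python
# Gauss-Jordan reduction of [A | b] to [I | x], no pivoting (a sketch).

def gauss_jordan(A, b):
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]   # augmented matrix [A | b]
    for k in range(n):
        pivot = M[k][k]
        M[k] = [v / pivot for v in M[k]]       # normalize the pivot row
        for i in range(n):                     # clear column k in ALL other
            if i != k:                         # rows, above and below
                m = M[i][k]
                M[i] = [M[i][j] - m * M[k][j] for j in range(n + 1)]
    return [M[i][n] for i in range(n)]         # last column is the solution

x = gauss_jordan([[2, -2, 2], [4, 2, -1], [2, -2, 4]], [0, 7, 2])
```

On the example above this reproduces x = [1, 2, 1].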
CONCLUSIONS
• LINEAR ALGEBRA
• VECTORS
• LINEAR SYSTEMS
• LINEAR EQUATIONS
• MATRICES
• SOLVING LINEAR EQUATIONS
• MATRIX INVERSION
