MODULE - 05-Matrix Decomposition Probability

This document covers the fundamental concepts of matrix decomposition, including least squares for curve fitting, eigenvalues, eigenvectors, and their applications in linear algebra. It provides detailed examples and methods for solving problems related to these topics, emphasizing their importance in simplifying complex matrix operations. Additionally, it includes practice questions to reinforce understanding of the material presented.


Probability and Vector Spaces

Department of Mathematics
Jain Global campus, Jakkasandra Post, Kanakapura Taluk, Ramanagara District -562112

MODULE 5:
Matrix Decomposition

Department of Mathematics
FET-JAIN (Deemed-to-be University)
Table of Contents

• Aim
• Introduction
• Objective
• Curve Fitting-Least Squares
• Eigen Values and Eigen Vectors
• Eigen Value Decomposition
• Singular Value Decomposition
• Reference Links
Aim

To equip students with the fundamental concepts of matrix decomposition so that
they can simplify complex matrix operations by performing them on the decomposed
matrices rather than on the original matrix itself, which is helpful in
simplifying data, removing noise, and improving algorithm results.
Objective
a. Discuss the least squares method for curve fitting.
b. Define eigenvalues and eigenvectors of a matrix.
c. Describe examples of eigenvalue decomposition.
d. Discuss the singular value decomposition method.
e. Describe examples of singular value decomposition.


Introduction

The importance of linear algebra for applications has risen in direct proportion to
the increase in computing power, with each new generation of hardware and software
triggering a demand for even greater capabilities. Computer science is thus intricately
linked with linear algebra through the explosive growth of parallel processing and large-
scale computations.
Least Squares (Curve Fitting)

Working rule: Quadratic fit


Curve: y = a₂x² + a₁x + a₀

Step 1. Form B = [y₁, y₂, …, yₙ]ᵀ, the n × 3 matrix A whose i-th row is (xᵢ², xᵢ, 1), and X = [a₂, a₁, a₀]ᵀ.
Step 2. Solve the normal system: 𝐴𝑇 𝐴𝑋 = 𝐴𝑇 𝐵, for finding X by Gauss Jordan reduction.
Least Squares (Curve Fitting)

Working rule: Linear Fit

Curve: y = a₁x + a₀

Step 1. Form B = [y₁, y₂, …, yₙ]ᵀ, the n × 2 matrix A whose i-th row is (xᵢ, 1), and X = [a₁, a₀]ᵀ.
Step 2. Solve the normal system: 𝐴𝑇 𝐴𝑋 = 𝐴𝑇 𝐵, for finding X by Gauss Jordan
reduction.
Least Squares (Curve Fitting)
Ex 1. In the manufacturing of product X, the amount of the compound beta present
in the product is controlled by the amount of independent alpha used in the process.
In manufacturing a gallon of X, the amount of alpha used and the amount of beta
present are recorded. The following data were obtained:
Alpha used x (ounces/gallon):   3    4    5    6    7    8    9    10   11   12
Beta present y (ounces/gallon): 4.5  5.5  5.7  6.6  7.0  7.7  8.5  8.7  9.5  9.7
Find an equation of the least square line for the data.
Use the equation obtained to predict the number of ounces beta present in a gallon
of product X if 30 ounces of alpha are used per gallon.
Solution:
To fit a curve of the form 𝑦 = 𝑎1 𝑥 + 𝑎0

B = [4.5, 5.5, 5.7, 6.6, 7.0, 7.7, 8.5, 8.7, 9.5, 9.7]ᵀ,
A = the 10 × 2 matrix with rows (3, 1), (4, 1), …, (12, 1),
X = [a₁, a₀]ᵀ.

AᵀA = [ 645  75 ]        AᵀB = [ 598.6 ]
      [  75  10 ]              [  73.4 ]

AᵀAX = AᵀB:

[ 645  75 ] [ a₁ ]   [ 598.6 ]
[  75  10 ] [ a₀ ] = [  73.4 ]

[ a₁ ]   [ 645  75 ]⁻¹ [ 598.6 ]   [  0.0121  −0.0909 ] [ 598.6 ]   [ 0.5830 ]
[ a₀ ] = [  75  10 ]   [  73.4 ] = [ −0.0909   0.7818 ] [  73.4 ] = [ 2.9673 ]

Hence Y = 0.5830x + 2.9673, and at x = 30:
Y = 0.5830(30) + 2.9673 ≈ 20.458 ounces of beta per gallon.
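As a quick numerical check (not part of the original slides), the normal system of Ex 1 can be solved with NumPy, assuming NumPy is available:

```python
import numpy as np

# Data from Ex 1: alpha used (x) and beta present (y), in ounces/gallon
x = np.arange(3.0, 13.0)            # 3, 4, ..., 12
y = np.array([4.5, 5.5, 5.7, 6.6, 7.0, 7.7, 8.5, 8.7, 9.5, 9.7])

# Step 1: A has rows (x_i, 1); B is the column of y-values
A = np.column_stack([x, np.ones_like(x)])
B = y

# Step 2: solve the normal system A^T A X = A^T B
a1, a0 = np.linalg.solve(A.T @ A, A.T @ B)
print(round(a1, 4), round(a0, 4))    # 0.583 2.9673
print(round(a1 * 30 + a0, 3))        # 20.458 ounces at x = 30
```

Evaluating with unrounded coefficients gives ≈ 20.458; hand computations that round the coefficients first land very slightly lower.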
Least Squares (Curve Fitting)
Ex 2. The following data shows atmospheric pollutant levels yᵢ (relative to an EPA
standard) at half-hour intervals tᵢ. Fit a least squares quadratic polynomial to the data.
tᵢ:  1      1.5    2      2.5    3      3.5    4      4.5    5
yᵢ: −0.15   0.24   0.68   1.04   1.21   1.15   0.86   0.41  −0.08
Solution: To fit a curve of the form y = a₂t² + a₁t + a₀:
Let B = [−0.15, 0.24, 0.68, 1.04, 1.21, 1.15, 0.86, 0.41, −0.08]ᵀ, let A be the
9 × 3 matrix whose i-th row is (tᵢ², tᵢ, 1):

A = [  1     1    1 ]
    [  2.25  1.5  1 ]
    [  4     2    1 ]
    [  6.25  2.5  1 ]
    [  9     3    1 ]
    [ 12.25  3.5  1 ]
    [ 16     4    1 ]
    [ 20.25  4.5  1 ]
    [ 25     5    1 ]

and X = [a₂, a₁, a₀]ᵀ. Then

AᵀA = [ 1583.25  378  96 ]        AᵀB = [ 54.65 ]
      [  378      96  27 ]              [ 16.71 ]
      [   96      27   9 ]              [  5.36 ]

Solve AᵀAX = AᵀB, i.e. X = (AᵀA)⁻¹ AᵀB:

X = [  0.0519  −0.3117   0.3810 ] [ 54.65 ]   [ −0.3274 ]
    [ −0.3117   1.9368  −2.4857 ] [ 16.71 ] = [  2.0067 ]
    [  0.3810  −2.4857   3.5048 ] [  5.36 ]   [ −1.9317 ]

Hence the least squares quadratic is y = −0.3274 t² + 2.0067 t − 1.9317.
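This NumPy sketch (not in the original slides) re-solves the normal system of Ex 2; tiny differences from hand-rounded intermediate values are expected:

```python
import numpy as np

# Data from Ex 2: pollutant level y_i at time t_i (half-hour steps)
t = np.arange(1.0, 5.5, 0.5)        # 1, 1.5, ..., 5
y = np.array([-0.15, 0.24, 0.68, 1.04, 1.21, 1.15, 0.86, 0.41, -0.08])

# A has rows (t_i^2, t_i, 1) for the model y = a2*t^2 + a1*t + a0
A = np.column_stack([t**2, t, np.ones_like(t)])

# Normal equations A^T A X = A^T B
a2, a1, a0 = np.linalg.solve(A.T @ A, A.T @ y)
print(np.round([a2, a1, a0], 4))    # [-0.3274  2.0067 -1.9317]
```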


Questions for practice

1. The distributor of a new car has obtained the following data:

Number of weeks after introduction of the car:  1    2    3    4    5    6    7    8    9    10
Gross receipts per week (millions of dollars):  0.8  0.5  3.2  4.3  4.0  5.1  4.3  3.8  1.2  0.8

Let x denote the gross receipts per week (in millions of dollars) t weeks after the
introduction of the car.
a. Find a least squares quadratic polynomial for the given data.
b. Use the equation obtained to estimate the gross receipts 12 weeks after the introduction of the car.
Questions for practice

2. A steel producer gathers the following data:

Year:                               1997  1998  1999  2000  2001  2002
Annual sales (millions of dollars): 1.2   2.3   3.2   3.6   3.8   5.1

Represent the years 1997-2002 as x = 0, 1, 2, 3, 4, 5 respectively, and let y denote
the annual sales.
a. Find the least squares line relating x and y.
b. Use the obtained equation to estimate the annual sales in 2006.
Eigenvalues and Eigenvectors
Definition:
An eigenvector of an 𝑛 × 𝑛 matrix A is a nonzero vector x such that 𝐴𝑥 = λx for some
scalar λ. A scalar λ is called an eigenvalue of A if there is a nontrivial solution x of 𝐴𝑥 =
λx. Such an x is called an eigenvector corresponding to λ.
Definition:
Let A be a square matrix. The scalar equation det(A − λI) = 0 is called the characteristic
equation of A.

Example 1: Let A = [ 1  6 ], u = [  6 ] and v = [  3 ]. Are u and v eigenvectors of A?
                   [ 5  2 ]      [ −5 ]         [ −2 ]
Solution:
Au = [ 1  6 ] [  6 ] = [  6 − 30 ] = [ −24 ] = −4 [  6 ] = −4u
     [ 5  2 ] [ −5 ]   [ 30 − 10 ]   [  20 ]      [ −5 ]

Av = [ 1  6 ] [  3 ] = [  3 − 12 ] = [ −9 ] ≠ λ [  3 ]
     [ 5  2 ] [ −2 ]   [ 15 −  4 ]   [ 11 ]     [ −2 ]

Thus, u is an eigenvector corresponding to the eigenvalue −4, but v is not an eigenvector
of A, because Av is not a multiple of v.
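The same check can be run numerically; this NumPy snippet (not part of the original slides) tests whether Au and Av are scalar multiples of u and v:

```python
import numpy as np

A = np.array([[1, 6],
              [5, 2]])
u = np.array([6, -5])
v = np.array([3, -2])

# u is an eigenvector: A @ u equals -4 * u
print(A @ u)                 # [-24  20]
assert np.array_equal(A @ u, -4 * u)

# v is not: Av = [-9, 11] is not parallel to [3, -2]
Av = A @ v
print(Av)                    # [-9 11]
# 2-D cross-product test: a nonzero value means Av is not a multiple of v
assert Av[0] * v[1] - Av[1] * v[0] != 0
```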
Example 2: Show that 7 is an eigenvalue of the matrix A = [ 1  6 ] and find the
                                                          [ 5  2 ]
corresponding eigenvectors.
Solution:
The scalar 7 is an eigenvalue of A iff the equation Ax = 7x has a nontrivial solution, i.e.
Ax − 7x = 0 or (A − 7I)x = 0 → (1)

To solve this homogeneous equation, form the matrix A − 7I = [ −6   6 ].
                                                             [  5  −5 ]
The columns of A − 7I are obviously linearly dependent (multiples of each other), so
equation (1) has a nontrivial solution. Thus 7 is an eigenvalue of A.
To find the corresponding eigenvectors, use row operations on the augmented matrix:

[ −6   6  0 ]   with R₁ → −(1/6)R₁, R₂ → (1/5)R₂ and then R₂ → R₂ − R₁, giving
[  5  −5  0 ]

[ 1  −1  0 ]    which gives x₁ = x₂, so the general solution is X = [ x₁ ] = x₂ [ 1 ].
[ 0   0  0 ]                                                        [ x₂ ]      [ 1 ]

Each vector of this form with x₂ ≠ 0 is an eigenvector corresponding to λ = 7.
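Both claims of Example 2 are easy to verify numerically; a short sketch (not in the original slides), assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 6.0],
              [5.0, 2.0]])

# 7 is an eigenvalue iff det(A - 7I) = 0
print(np.linalg.det(A - 7 * np.eye(2)))   # 0.0 (up to floating point)

# any nonzero multiple of [1, 1] is a corresponding eigenvector
x = np.array([1.0, 1.0])
print(A @ x)                              # [7. 7.] = 7 * x
```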
Note:
i) Although the row reduction was used in the above example to find the eigenvector, it
cannot be used to find eigenvalues.
ii) An echelon form of a matrix A usually does not display the eigenvalues of A.
iii) Thus, λ is an eigenvalue of A if and only if the equation (A − λI)x = 0 →(∗) has a
nontrivial solution.
iv) The set of all solutions of (∗) is just the null space of the matrix A − λI. So the set is
a subspace of Rⁿ and is called the eigenspace of A corresponding to the eigenvalue λ.
Diagonalization of Matrix

The diagonalization Theorem: An 𝑛 × 𝑛 matrix A is diagonalizable if and only if A has


n linearly independent eigenvectors.
In fact A = PDP⁻¹, with D a diagonal matrix, iff the columns of P are n linearly
independent eigenvectors of A. In this case the diagonal entries of D are the eigenvalues of
A that correspond, respectively, to the eigenvectors in P.

Theorem: An 𝑛 × 𝑛 matrix with n distinct eigenvalues is diagonalizable.


Theorem: Let A be an n × n matrix whose distinct eigenvalues are λ₁, λ₂, …, λₚ.
a) For 1 ≤ k ≤ p, the dimension of the eigenspace for λₖ is less than or equal to the
multiplicity of the eigenvalue λₖ.
b) The matrix A is diagonalizable iff the sum of the dimensions of the distinct
eigenspaces equals n, and this happens iff the dimension of the eigenspace for each λₖ
equals the multiplicity of λₖ.
c) If A is diagonalizable and 𝔙ₖ is a basis for the eigenspace corresponding to λₖ for each
k, then the total collection of vectors in the sets 𝔙₁, …, 𝔙ₚ forms an eigenvector basis for
Rⁿ.

Modal Matrix:
Consider a square matrix A of order 3 × 3. Let λ₁, λ₂, λ₃ be its eigenvalues with
corresponding eigenvectors X₁, X₂, X₃; then P = (X₁, X₂, X₃) is called the modal matrix

and D = [ λ₁  0   0  ]
        [ 0   λ₂  0  ]  is the corresponding diagonal matrix.
        [ 0   0   λ₃ ]
Power of Matrix A:
Since D = P⁻¹AP, consider D² = D·D:
D² = (P⁻¹AP)(P⁻¹AP)
   = P⁻¹A(PP⁻¹)AP
   = P⁻¹AIAP
   = P⁻¹A²P
so PD²P⁻¹ = PP⁻¹A²PP⁻¹, i.e. PD²P⁻¹ = A².

In general, Aⁿ = PDⁿP⁻¹, where Dⁿ = [ λ₁ⁿ  0    0  ]
                                    [ 0    λ₂ⁿ  0  ]
                                    [ 0    0    λ₃ⁿ ]
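The identity Aⁿ = PDⁿP⁻¹ can be illustrated numerically; this sketch (not part of the original slides) uses an arbitrary diagonalizable 2 × 2 matrix of my choosing, not one of the slide examples:

```python
import numpy as np

# A small diagonalizable matrix (eigenvalues 5 and 2), chosen only for
# illustration of A^n = P D^n P^(-1)
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

evals, P = np.linalg.eig(A)        # columns of P are eigenvectors
D = np.diag(evals)

# D^n is cheap because D is diagonal
n = 4
An = P @ np.linalg.matrix_power(D, n) @ np.linalg.inv(P)
assert np.allclose(An, np.linalg.matrix_power(A, n))
```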
3. Find all the eigenvalues and the corresponding eigenvectors of the matrix

[ −1  3 ]
[ −2  4 ]

Solution:
The characteristic equation of A is |A − λI| = 0:

| −1−λ   3  |
| −2    4−λ | = 0

(−1 − λ)(4 − λ) + 6 = 0
−4 − 4λ + λ + λ² + 6 = 0
λ² − 3λ + 2 = 0
λ = 1, 2

Now the system of equations is
(−1 − λ)x + 3y = 0
−2x + (4 − λ)y = 0

Case (i): Let λ = 1 and the system of equations becomes
−2x + 3y = 0
−2x + 3y = 0
Both equations are the same, so we have only one independent equation:
2x = 3y, i.e. x/3 = y/2.
X₁ = [3  2]ᵀ

Case (ii): Let λ = 2 and the system of equations becomes
−3x + 3y = 0
−2x + 2y = 0
Again there is only one independent equation:
3x = 3y, i.e. x/1 = y/1.
X₂ = [1  1]ᵀ
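Both eigenpairs from the 2 × 2 example can be confirmed in a couple of lines; a NumPy check (not in the original slides):

```python
import numpy as np

A = np.array([[-1, 3],
              [-2, 4]])
X1 = np.array([3, 2])    # eigenvector found for lambda = 1
X2 = np.array([1, 1])    # eigenvector found for lambda = 2

assert np.array_equal(A @ X1, 1 * X1)
assert np.array_equal(A @ X2, 2 * X2)
print("eigenpairs verified")
```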
Problems
2. Find all the eigenvalues and the corresponding eigenvectors of the matrix

[  8  −6   2 ]
[ −6   7  −4 ]
[  2  −4   3 ]

Solution:
The characteristic equation of A is |A − λI| = 0:

| 8−λ   −6    2  |
| −6    7−λ  −4  | = 0
|  2    −4   3−λ |

On expanding, we have λ³ − 18λ² + 45λ = 0.
After solving, we get λ = 0, 3, 15 as the eigenvalues.
Now the system of equations is
(8 − λ)x − 6y + 2z = 0
−6x + (7 − λ)y − 4z = 0
2x − 4y + (3 − λ)z = 0

Case (i): Let λ = 0 and the system of equations becomes
8x − 6y + 2z = 0 —(i)
−6x + 7y − 4z = 0 —(ii)
2x − 4y + 3z = 0 —(iii)
Applying the rule of cross multiplication to (i) and (ii):

   x            −y           z
--------  =  --------  =  --------
| −6  2 |    |  8  2 |    |  8 −6 |
|  7 −4 |    | −6 −4 |    | −6  7 |

x/10 = −y/(−20) = z/20, or x/1 = y/2 = z/2.
∴ (x, y, z) is proportional to (1, 2, 2), and we can write x = k, y = 2k, z = 2k,
where k is arbitrary.
∴ The eigenvector for λ = 0 is X₁ = [1  2  2]ᵀ.

Case (ii): Let λ = 3 and the system of equations becomes
5x − 6y + 2z = 0 —(iv)
−6x + 4y − 4z = 0 —(v)
2x − 4y + 0z = 0 —(vi)
Applying the rule of cross multiplication to (iv) and (v):

   x            −y           z
--------  =  --------  =  --------
| −6  2 |    |  5  2 |    |  5 −6 |
|  4 −4 |    | −6 −4 |    | −6  4 |

x/16 = −y/(−8) = z/(−16), or x/2 = y/1 = z/(−2).
∴ The eigenvector for λ = 3 is X₂ = [2  1  −2]ᵀ.

Case (iii): Let λ = 15 and the system of equations becomes
−7x − 6y + 2z = 0 —(vii)
−6x − 8y − 4z = 0 —(viii)
2x − 4y − 12z = 0 —(ix)
Applying the rule of cross multiplication to (vii) and (viii):

   x            −y           z
--------  =  --------  =  --------
| −6  2 |    | −7  2 |    | −7 −6 |
| −8 −4 |    | −6 −4 |    | −6 −8 |

x/40 = −y/40 = z/20, or x/2 = y/(−2) = z/1.
∴ The eigenvector for λ = 15 is X₃ = [2  −2  1]ᵀ.


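All three eigenpairs of the 3 × 3 example can be checked numerically; this NumPy snippet (not part of the original slides) verifies AX = λX for each pair and the eigenvalues 0, 3, 15:

```python
import numpy as np

A = np.array([[ 8, -6,  2],
              [-6,  7, -4],
              [ 2, -4,  3]])

# eigenpairs obtained above by cross multiplication
pairs = [(0,  np.array([1, 2, 2])),
         (3,  np.array([2, 1, -2])),
         (15, np.array([2, -2, 1]))]

for lam, X in pairs:
    assert np.array_equal(A @ X, lam * X)

# the eigenvalues also come out of the characteristic polynomial
assert np.allclose(sorted(np.linalg.eigvals(A)), [0, 3, 15])
print("all three eigenpairs check out")
```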
11 − 4 − 7
1.2 Reduce the matrix A =  7 − 2 − 5 into a diagonal matrix. Also find A5 .
10 − 4 − 6
Solution: The characteristic equation of A is A − I = 0
11 −  −4 −7 
 7 (−2 − ) − 5  = 0

 10 −4 (−6 − )
⇒ (11 −  ) [( −2 − )(−6 − ) − 20] + 4[7(−6 − ) + 50] − 7[−28 − 10(−2 − )] = 0
⇒ (11 − )[2 + 8 − 8] + 4[8 − 7] − 7[10 − 8] = 0 Case (ii): Let  = 1 and the corresponding equations are
⇒ 11 + 88 − 88 −  − 8 + 8 + 32 − 28 − 70 + 56 = 0
2 3 2
10x − 4 y − 7 z = 0
⇒ 3 − 32 − 2 = 0 7 x − 3 y − 5z = 0
 = 0, 1, 2. 10x − 4 y − 7 z = 0
Now consider [ A − I ] [ X ] = [0] . x −y z x y z
= = or = =
−1 −1 − 2 1 −1 2
(11 − ) x − 4 y − 7 z = 0
X 2 = (1 − 1 2)' is the eigen vector corresponding to  = 1 .
7 x + (−2 − ) y − 5 z = 0
10x − 4 y + (−6 − ) z = 0
Case (iii): Let  = 2 and the corresponding equations are
Case (i): Let  = 0 and the corresponding equations are 9x − 4 y − 7z = 0
7 x − 4 y − 5z = 0
11x − 4 y − 7 z = 0
10x − 4 y − 8 z = 0
7 x − 2 y − 5z = 0
x −y z x y z
10x − 4 y − 6 z = 0 = = or = =
−8 4 −8 2 1 2
x −y z
= = or = =
x y z X 3 = (2 1 2)' is the Eigen vector corresponding to  = 2 .
6 −6 6 1 1 1
X 1 = (1 1 1)' is the eigen vector corresponding to  = 0 .
1 1 2
Hence the modal matrix P = X 1 X2 X 3 = 1 − 1 1




1 2 2
We have P = 1(−2 − 2) − 1(2 − 1) + 2(2 + 1) = 1
− 4 2 3 Now to find out A5, consider the following

AdjP =  − 1 0 1  we have An = PD n P −1 .
 3 − 1 − 2
− 4 2 3 A5 = PD 5 P −1 and D 5 = Diag(05 15 25 ) = Diag(0 1 32) .
P −1 = ( AdjP) =  − 1 0 1 
1
P
 3 − 1 − 2 1 1 2 0 0 0  − 4 2 3
Hence A = 1 − 1 1  0 1 0 
5  −1 0
 1 
1 2 2 0 0 32  3 − 1 − 2
Diagonalization of A is given by P −1 AP :
0 1 64 − 4 2 3
  
= 0 − 1 32  − 1 0 1 
− 4 2 3 11 − 4 − 7  1 1 2
 0 2 64  3 − 1 − 2
−1
Now P AP =  − 1 0 1   7 − 2 − 5
 
1 − 1 1 
 
 3 − 1 − 2 10 − 4 − 6 1 2 2 191 − 64 − 127
=  97 − 32 − 65 
0 0 0 1 1 2
 190 − 64 − 126
= − 1 0 1  1 − 1 1 
  .
 6 − 2 − 4 1 2 2
0 0 0 
= 0 1 0 = D
0 0 2
P −1 AP = D = Diag(0 1 2) .
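The A⁵ computation is easy to double-check with NumPy; this sketch (not in the original slides) rebuilds A⁵ from the modal matrix and compares it with direct matrix exponentiation:

```python
import numpy as np

A = np.array([[11, -4, -7],
              [ 7, -2, -5],
              [10, -4, -6]])
P = np.array([[1,  1, 2],
              [1, -1, 1],
              [1,  2, 2]])
D5 = np.diag([0, 1, 32])          # D^5, since D = diag(0, 1, 2)

A5 = P @ D5 @ np.linalg.inv(P)
print(np.round(A5).astype(int))   # rows: [191 -64 -127], [97 -32 -65], [190 -64 -126]
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```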
Eigenvalue Decomposition (Orthogonal Diagonalization)

Definition:
A matrix A is said to be orthogonally diagonalizable if there are an orthogonal matrix P
(with P⁻¹ = Pᵀ) and a diagonal matrix D such that A = PDPᵀ = PDP⁻¹.

Theorem:
An n × n matrix A is orthogonally diagonalizable iff A is a symmetric matrix.
Example 1: If possible, orthogonally diagonalize the matrix

A = [  6  −2  −1 ]
    [ −2   6  −1 ]
    [ −1  −1   5 ]

Solution:
The characteristic equation is |A − λI| = 0:

| 6−λ   −2   −1  |
| −2    6−λ  −1  | = 0
| −1    −1   5−λ |

which implies λ³ − 17λ² + 90λ − 144 = 0; the eigenvalues are λ = 8, 6, 3.
For λ = 8 we have [A − 8I, 0] = [ −2  −2  −1  0 ]   After row transformations we get
                                [ −2  −2  −1  0 ]
                                [ −1  −1  −3  0 ]

[ 1  1  0  0 ]
[ 0  0  1  0 ]   which implies x₁ + x₂ = 0 and x₃ = 0, so X = x₂[−1  1  0]ᵀ; then v₁ = [−1  1  0]ᵀ.
[ 0  0  0  0 ]

For λ = 6 we have [A − 6I, 0] = [  0  −2  −1  0 ]   Row reduction gives x₁ = x₂ and
                                [ −2   0  −1  0 ]   x₃ = −2x₁, so the eigenspace is spanned
                                [ −1  −1  −1  0 ]   by v₂ = [−1  −1  2]ᵀ.

For λ = 3 we have [A − 3I, 0] = [  3  −2  −1  0 ]   Row reduction gives x₁ = x₂ = x₃,
                                [ −2   3  −1  0 ]   so v₃ = [1  1  1]ᵀ.
                                [ −1  −1   2  0 ]
Clearly the vectors v₁, v₂, v₃ form a basis for R³. The set {v₁, v₂, v₃} is linearly
independent, and it is easy to see that it is an orthogonal set. P will be more useful if
its columns are orthonormal; since a nonzero multiple of an eigenvector is still an
eigenvector, we can normalize v₁, v₂, v₃ to produce unit eigenvectors.

Then u₁ = v₁/‖v₁‖ = [−1/√2  1/√2  0]ᵀ, u₂ = v₂/‖v₂‖ = [−1/√6  −1/√6  2/√6]ᵀ and
u₃ = v₃/‖v₃‖ = [1/√3  1/√3  1/√3]ᵀ.

Let P = [ −1/√2  −1/√6  1/√3 ]            [ 8  0  0 ]
        [  1/√2  −1/√6  1/√3 ]   and D =  [ 0  6  0 ]
        [   0     2/√6  1/√3 ]            [ 0  0  3 ]

Then A = PDP⁻¹ as usual. But this time P is a square matrix with orthonormal
columns, so P is an orthogonal matrix and P⁻¹ is simply Pᵀ.
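As a numerical sanity check (not part of the original slides), NumPy confirms both that P is orthogonal and that A = PDPᵀ:

```python
import numpy as np

A = np.array([[ 6, -2, -1],
              [-2,  6, -1],
              [-1, -1,  5]])

# columns: normalized eigenvectors for lambda = 8, 6, 3
P = np.column_stack([np.array([-1,  1, 0]) / np.sqrt(2),
                     np.array([-1, -1, 2]) / np.sqrt(6),
                     np.array([ 1,  1, 1]) / np.sqrt(3)])
D = np.diag([8, 6, 3])

assert np.allclose(P.T @ P, np.eye(3))   # orthonormal columns, so P^(-1) = P^T
assert np.allclose(P @ D @ P.T, A)       # A = P D P^T
print("orthogonal diagonalization verified")
```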
Questions for practice
1. Orthogonally diagonalize the matrix

A = [  3  −2  4 ]
    [ −2   6  2 ]
    [  4   2  3 ]

whose characteristic equation is −λ³ + 12λ² − 21λ − 98 = −(λ − 7)²(λ + 2) = 0.

2. Diagonalize the matrix

A = [ 1  6  1 ]
    [ 1  2  0 ]
    [ 0  0  3 ]

(This matrix is not symmetric, so it cannot be orthogonally diagonalized, but it has
three distinct eigenvalues and is therefore diagonalizable.)
Singular Value Decomposition
Singular values:
The square roots of the eigenvalues of the symmetric matrix AᵀA (or AAᵀ) are called the
singular values of the matrix A.
Singular Vectors:
The eigenvectors of AᵀA corresponding to the singular values of A are called singular
vectors.
The singular vectors of AᵀA are called right singular vectors, and the singular vectors of
AAᵀ are called left singular vectors.

Definition of SVD:
Let A be any m × n matrix; then A can be decomposed into a product of three matrices,
A(m×n) = U(m×m) Σ(m×n) Vᵀ(n×n),
called the singular value decomposition of A, where U and V are orthogonal (unitary)
matrices and Σ is a rectangular diagonal matrix whose diagonal entries are the singular
values.
Important points on SVD:
The SVD produces orthonormal bases of v's and u's for the four fundamental subspaces.
Using those bases, A becomes a diagonal matrix Σ, with Avᵢ = σᵢuᵢ, where σᵢ is the i-th
singular value.
The two-bases diagonalization A = UΣVᵀ often has more information than A = PDP⁻¹.
UΣVᵀ separates A into rank-1 matrices:
A = σ₁u₁v₁ᵀ + σ₂u₂v₂ᵀ + ⋯ + σᵣuᵣvᵣᵀ, with σ₁u₁v₁ᵀ the largest term.
u₁, u₂, …, uᵣ is an orthonormal basis for the column space.
u(r+1), …, u(m) is an orthonormal basis for the left null space N(Aᵀ).
v₁, v₂, …, vᵣ is an orthonormal basis for the row space.
v(r+1), …, v(n) is an orthonormal basis for the null space N(A).
Working Rule
Step-1: Compute ATA or AAT.
Step-2: Find the eigen values of ATA or AAT.
Step-3: Compute the square root of non-zero eigen values of above step called singular
values of A.
Step-4: Construct a singular matrix Σ of order same as A having singular values on the
diagonal in decreasing order.
Step-5: Find the eigen vector of ATA or AAT and construct a orthogonal matrix V or U
consisting of orthonormal eigen vectors of ATA or AAT.
Step-6: Construct the vectors uᵢ = Avᵢ/σᵢ (or vᵢ = Aᵀuᵢ/σᵢ) and the orthogonal matrix U
(or V) respectively.
Step-7: Finally, A = UΣVᵀ gives the singular value decomposition of the matrix A.
Example: Find the singular value decomposition of the matrix

A = [ −3   1 ]
    [  6  −2 ]
    [  6  −2 ]

Solution:
AᵀA = [  81  −27 ]   has eigenvalues λ = 90 and λ = 0, so the singular values are
      [ −27    9 ]   σ₁ = √90 = 3√10 and σ₂ = 0.

The normalized eigenvectors of AᵀA are v₁ = (1/√10)[−3  1]ᵀ (for λ = 90) and
v₂ = (1/√10)[1  3]ᵀ (for λ = 0), so the matrix of right singular vectors is

V = (1/√10) [ −3  1 ]
            [  1  3 ]

Then u₁ = Av₁/σ₁ = (1/3)[1  −2  −2]ᵀ, and we extend {u₁} to an orthonormal basis of R³,
e.g. with u₂ = (1/√2)[0  1  −1]ᵀ and u₃ = (1/(3√2))[4  1  1]ᵀ.
Hence the orthogonal matrix U = (u₁, u₂, u₃), the matrix of left singular vectors, is
given by

U = [  1/3    0      4/(3√2) ]           [ 3√10  0 ]
    [ −2/3   1/√2    1/(3√2) ]  with Σ = [  0    0 ]  and A = UΣVᵀ.
    [ −2/3  −1/√2    1/(3√2) ]           [  0    0 ]
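NumPy computes the same decomposition directly; this check (not in the original slides) confirms σ₁ = 3√10 and that UΣVᵀ reassembles A. Note that `np.linalg.svd` may return singular vectors with opposite signs to a hand computation, but the product is the same:

```python
import numpy as np

A = np.array([[-3.0,  1.0],
              [ 6.0, -2.0],
              [ 6.0, -2.0]])

U, s, Vt = np.linalg.svd(A)       # full SVD: U is 3x3, Vt is 2x2
print(s)                          # [9.4868... 0.], i.e. sigma_1 = 3*sqrt(10)
assert np.isclose(s[0], 3 * np.sqrt(10)) and np.isclose(s[1], 0)

# rebuild A = U Sigma V^T with a 3x2 rectangular diagonal Sigma
Sigma = np.zeros((3, 2))
Sigma[0, 0], Sigma[1, 1] = s
assert np.allclose(U @ Sigma @ Vt, A)
```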
Questions for practice

1. Find the singular value decomposition of the matrix A = [ 1  1  1 ]
                                                           [ 1  1  1 ]

2. Find the singular value decomposition of the matrix

A = [  1  −1 ]
    [ −2   2 ]
    [  2  −2 ]
Summary

Outcomes:

a. Discuss the concept of curve fitting in the least squares sense and its applications in
engineering.

b. Describe the importance of diagonalization in tackling engineering problems.
