Linear Algebra for Machine Learning
01 What is a Matrix?
02 Matrix Operations
03 Eigen Values
04 Eigen Vectors

A matrix is a rectangular arrangement of numbers in rows and columns. For example, the matrix below has m = 3 rows and n = 3 columns:

[ 5  2  9 ]
[ 4  6  7 ]
[ 1  3  8 ]
In Python, we use a list of lists in a numpy array to declare a matrix. As seen in the example, we have declared the matrices A and B with dimensions 2x2.
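The original slide's screenshot is not recoverable, so the element values below are placeholders; only the 2x2 shape matches the text:

```python
import numpy as np

# Declare a matrix as a list of lists inside np.array
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A.shape)  # (2, 2)
```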
[ 5  2  2 ]   [ 4  5  4 ]   [ 9  7  6 ]
[ 4  6  7 ] + [ 2  3  2 ] = [ 6  9  9 ]
[ 1  3  2 ]   [ 1  2  4 ]   [ 2  5  6 ]

In the addition operation, the corresponding elements of the two matrices are added to form the resultant matrix, as shown in the example.
[ 5  2  2 ]   [ 4  5  4 ]   [ 1 -3 -2 ]
[ 4  6  7 ] - [ 2  3  2 ] = [ 2  3  5 ]
[ 1  3  2 ]   [ 1  2  4 ]   [ 0  1 -2 ]

In the subtraction operation, the corresponding elements of the second matrix are subtracted from those of the first.
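The addition and subtraction above can be checked with numpy's element-wise operators:

```python
import numpy as np

A = np.array([[5, 2, 2],
              [4, 6, 7],
              [1, 3, 2]])
B = np.array([[4, 5, 4],
              [2, 3, 2],
              [1, 2, 4]])

# Element-wise addition and subtraction
add = A + B
sub = A - B

print(add)  # [[9 7 6] [6 9 9] [2 5 6]]
print(sub)  # [[ 1 -3 -2] [ 2  3  5] [ 0  1 -2]]
```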
    [ 5  2  9 ]   [ 10   4  18 ]
2 x [ 4  6  7 ] = [  8  12  14 ]
    [ 1  3  8 ]   [  2   6  16 ]

Multiplying a matrix by a scalar yields a resultant matrix in which each element is multiplied by the scalar.
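In numpy, scalar multiplication is a single expression:

```python
import numpy as np

A = np.array([[5, 2, 9],
              [4, 6, 7],
              [1, 3, 8]])

# Each element of A is multiplied by the scalar 2
scaled = 2 * A
print(scaled)  # [[10  4 18] [ 8 12 14] [ 2  6 16]]
```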
[ 5  2  9 ]   [ 1 ]
[ 4  6  7 ] x [ 2 ]
[ 1  3  8 ]   [ 4 ]

The dimensions of the first matrix are 3x3 and those of the second matrix are 3x1. Therefore, the dimensions of the resultant matrix will be 3x1.
[ 5  2  9 ]   [ 1 ]   [ X ]
[ 4  6  7 ] x [ 2 ] = [ Y ]
[ 1  3  8 ]   [ 4 ]   [ Z ]

To find the elements of the resultant matrix, we multiply the rows of the first matrix by the column of the second matrix.
[ 5  2  9 ]   [ 1 ]   [ 45 ]
[ 4  6  7 ] x [ 2 ] = [ 44 ]
[ 1  3  8 ]   [ 4 ]   [ 39 ]
In this example, we have used the numpy.dot() method to multiply the two
matrices.
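A sketch of the numpy.dot() call for the 3x3 by 3x1 example above:

```python
import numpy as np

A = np.array([[5, 2, 9],
              [4, 6, 7],
              [1, 3, 8]])
v = np.array([[1],
              [2],
              [4]])

# Each row of A is multiplied by the column of v
result = np.dot(A, v)
print(result)  # [[45] [44] [39]]
```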
[ 5  2  9 ]   [ 1  2 ]   [ x1  x2 ]
[ 4  6  7 ] x [ 2  1 ] = [ y1  y2 ]
[ 1  3  8 ]   [ 1  2 ]   [ z1  z2 ]

y1 = (4 x 1) + (6 x 2) + (7 x 1) = 4 + 12 + 7 = 23
y2 = (4 x 2) + (6 x 1) + (7 x 2) = 8 + 6 + 14 = 28
z1 = (1 x 1) + (3 x 2) + (8 x 1) = 1 + 6 + 8 = 15

Filling in the remaining elements the same way:

[ 5  2  9 ]   [ 1  2 ]   [ 18  30 ]
[ 4  6  7 ] x [ 2  1 ] = [ 23  28 ]
[ 1  3  8 ]   [ 1  2 ]   [ 15  21 ]
There is a limitation with matrix multiplication: the number of columns of the first matrix must always equal the number of rows of the second matrix. For example, if the dimensions of the above matrices were 3x3 and 2x2, the multiplication would not have been possible.
In this example, we have used the numpy.dot() method to multiply the two
matrices.
[ 5  2  9 ]   [ 1  2  1 ]   [ 47  30  ? ]
[ 4  6  7 ] x [ 3  1  2 ] = [  ?   ?  ? ]
[ 1  3  8 ]   [ 4  2  1 ]   [  ?   ?  ? ]
Similarly, you can find the rest of the elements of the resultant matrix.
[ 5  2  9 ]   [ 1  2  1 ]   [ 47  30  18 ]
[ 4  6  7 ] x [ 3  1  2 ] = [ 50  28  23 ]
[ 1  3  8 ]   [ 4  2  1 ]   [ 42  21  15 ]
In this example, we have used the numpy.dot() method to multiply the two
matrices.
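A sketch of the numpy.dot() call for the 3x3 by 3x3 example above:

```python
import numpy as np

A = np.array([[5, 2, 9],
              [4, 6, 7],
              [1, 3, 8]])
B = np.array([[1, 2, 1],
              [3, 1, 2],
              [4, 2, 1]])

product = np.dot(A, B)
print(product)  # [[47 30 18] [50 28 23] [42 21 15]]
```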
a = [ 3  2  8 ]
b = [ 2  9  7 ]

        | i  j  k |
a x b = | 3  2  8 |
        | 2  9  7 |

        | 2  8 |     | 3  8 |     | 3  2 |
a x b = | 9  7 | i - | 2  7 | j + | 2  9 | k

      = (14 - 72)i - (21 - 16)j + (27 - 4)k
      = -58i - 5j + 23k
In this example, we have used the numpy.cross() method to find the cross product of the two vectors.
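A sketch of the numpy.cross() call for the vectors above:

```python
import numpy as np

a = np.array([3, 2, 8])
b = np.array([2, 9, 7])

# Cross product of two 3-dimensional vectors
c = np.cross(a, b)
print(c)  # [-58  -5  23]
```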
[ 1  2 ] T
[ 1  1 ]   =  [ 1  1  0 ]
[ 0  2 ]      [ 2  1  2 ]
In the transpose of a matrix, the columns and rows are interchanged, as shown in the example.
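In numpy, the transpose is available as the `.T` attribute:

```python
import numpy as np

A = np.array([[1, 2],
              [1, 1],
              [0, 2]])

# Rows and columns are interchanged
print(A.T)  # [[1 1 0] [2 1 2]]
```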
    [ 5  2  9 ]
A = [ 4  6  7 ] ,  find A^-1
    [ 1  3  8 ]

To understand the inverse of a matrix, there are several concepts that you must be familiar with, like the determinant of a matrix, the adjoint of a matrix, etc.
    [ 5  2  9 ]
A = [ 4  6  7 ]
    [ 1  3  8 ]

We can follow the Laplace expansion method to find the determinant of the matrix. It helps us in finding the inverse of the matrix.
For a 2x2 matrix:

[ a  b ]
[ c  d ] ,  |Matrix| = a x d - b x c
[ 1  4 ]
[ 6  9 ] ,  |Matrix| = 1 x 9 - 4 x 6 = 9 - 24 = -15
For a 3x3 matrix:

[ a  b  c ]
[ d  e  f ] ,  |Matrix| = a(e x i - f x h) - b(d x i - f x g) + c(d x h - e x g)
[ g  h  i ]
[ 2  2 -3 ]
[ 1  2  2 ]
[ 1  1  2 ]

|Matrix| = 2(2 x 2 - 2 x 1) - 2(1 x 2 - 2 x 1) - 3(1 x 1 - 2 x 1)
         = 2(2) - 2(0) - 3(-1)
         = 4 - 0 + 3
         = 7
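The determinant above can be checked with numpy (np.linalg.det returns a float, so we round it):

```python
import numpy as np

A = np.array([[2, 2, -3],
              [1, 2,  2],
              [1, 1,  2]])

det = np.linalg.det(A)
print(round(det))  # 7
```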
    [ 5  2  9 ]
A = [ 4  6  7 ]
    [ 1  3  8 ]

To find the adjoint of a given matrix, we have to find the transpose of the cofactors of the elements in the given matrix. It is also known as the adjugate of a matrix.
The minors (m11, m12, …, mij) are the determinants of the square submatrices formed after eliminating the ith row and jth column of the matrix. We use the minors to calculate the cofactors of the given matrix.
m11 = 48 - 21 = 27        m22 = 40 - 9 = 31
m12 = 32 - 7 = 25         m23 = 15 - 2 = 13
                          m33 = 30 - 8 = 22

                     [  27  -25    6 ]            [  27   11  -40 ]
Cofactor matrix C =  [  11   31  -13 ] ,  Adj A = [ -25   31    1 ]  (the transpose of C)
                     [ -40    1   22 ]            [   6  -13   22 ]
Let's take a look at the formula to find the inverse of the matrix.
    [ 2  1 ]
A = [ 4  3 ]

The formula to calculate the inverse of a matrix is as follows:

A^-1 = (1/|A|) . Adj A
Now, to calculate the inverse, we will find the determinant and the adjoint of the given matrix.

|A| = 2 x 3 - 1 x 4 = 2
The adjoint of the given matrix is:

[  3  -1 ]
[ -4   2 ]
A^-1 = (1/2) [  3  -1 ]  =  [ 3/2  -1/2 ]
             [ -4   2 ]     [ -2     1  ]
To verify the result, we can use the inverse property, which states that:

A A^-1 = I, where I is the identity matrix.

[ 2  1 ] [ 3/2  -1/2 ]   [ 1  0 ]
[ 4  3 ] [ -2     1  ] = [ 0  1 ]
Since we get the identity matrix when we compute A A^-1, we have calculated the inverse of the matrix successfully. We can follow the same formula for a 3x3 matrix as well, where the adjoint and the determinant determine the inverse of the given matrix.
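The inverse and the verification step can be reproduced with numpy:

```python
import numpy as np

A = np.array([[2, 1],
              [4, 3]])

A_inv = np.linalg.inv(A)
print(A_inv)      # [[ 1.5 -0.5] [-2.   1. ]]

# Verify the inverse property: A @ A_inv should be the
# identity matrix (up to floating-point error)
print(A @ A_inv)
```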
[ 1  2  2 ]      [ 1  2  2 ]
[ 0  1  4 ]  ->  [ 0  1  4 ]
[ 2  4  4 ]      [ 0  0  0 ]
We can use the Gaussian elimination method, where we interchange, multiply, and add rows to reach the row echelon form. Here, we have used the following operation to get the row echelon form:
1. R3 = R3 - 2R1
Since there are two non-zero rows, the rank of the matrix is 2.
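The rank can be checked with numpy (matrix_rank uses a singular value decomposition internally rather than Gaussian elimination, but it gives the same answer):

```python
import numpy as np

A = np.array([[1, 2, 2],
              [0, 1, 4],
              [2, 4, 4]])

rank = np.linalg.matrix_rank(A)
print(rank)  # 2
```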
A characteristic polynomial is used to derive the Eigen values of a given matrix. The characteristic polynomial of a matrix A is given by the function below:

f(λ) = |A - λI|

Here, f(λ) is the characteristic polynomial of the square matrix A, and I is the identity matrix. The whole objective of using the characteristic polynomial is to find the Eigen values of the given matrix.
Eigen values are the roots of the characteristic polynomial of the matrix A. Let’s understand
this with a simple example.
    [ 5  2 ]
A = [ 2  1 ] ,  f(λ) = λ^2 - 6λ + 1

Now, to find the Eigen values, we will calculate the roots of the characteristic polynomial:

λ^2 - 6λ + 1 = 0

Using the quadratic formula, we get the roots of the equation as 3 + 2√2 and 3 - 2√2.

Therefore, the Eigen values for the given matrix are 3 + 2√2 and 3 - 2√2.
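The Eigen values can be checked numerically with numpy:

```python
import numpy as np

A = np.array([[5, 2],
              [2, 1]])

eigenvalues = np.sort(np.linalg.eigvals(A))
# Should match 3 - 2*sqrt(2) and 3 + 2*sqrt(2)
print(eigenvalues)
```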
[ 5  2 ] [ X ]       [ X ]
[ 2  1 ] [ Y ]  =  λ [ Y ]
Solving the above equation for both values of λ gives us the Eigen vectors of the given matrix.
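In numpy, np.linalg.eig returns both the Eigen values and the Eigen vectors (one vector per column), and each pair satisfies A v = λ v:

```python
import numpy as np

A = np.array([[5, 2],
              [2, 1]])

values, vectors = np.linalg.eig(A)

# Each column of `vectors` is an Eigen vector of A
for i in range(len(values)):
    v = vectors[:, i]
    print(np.allclose(A @ v, values[i] * v))  # True
```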
Collinearity is a concept in statistics that refers to a linear relationship between the predictor variables (independent variables) in a regression model. If the independent variables are correlated, they won't be able to independently predict the value of the dependent variable in the regression model.
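As a minimal sketch with synthetic data (the variables x1 and x2 are hypothetical, not from the original material), a correlation coefficient close to 1 between two predictors signals collinearity:

```python
import numpy as np

rng = np.random.default_rng(0)

# x2 is almost an exact linear function of x1, so the
# two predictors are collinear
x1 = rng.normal(size=100)
x2 = 2 * x1 + rng.normal(scale=0.01, size=100)

r = np.corrcoef(x1, x2)[0, 1]
print(r)  # very close to 1
```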