
Linear Algebra

(Preliminaries, Inner product, Outer product, determinant and trace, matrix inverse, eigenvalues, eigenvectors)

Notation and preliminaries: column
vector and its transpose
 A d-dimensional column vector x and its transpose xt can
be written as below, where all components can take on
real values:
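x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_d \end{pmatrix}, \qquad x^t = (x_1, x_2, \ldots, x_d)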

Matrix and its transpose
 We denote an n × d (rectangular) matrix M and its d × n
transpose Mt as:
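M = \begin{pmatrix} m_{11} & m_{12} & \cdots & m_{1d} \\ m_{21} & m_{22} & \cdots & m_{2d} \\ \vdots & \vdots & \ddots & \vdots \\ m_{n1} & m_{n2} & \cdots & m_{nd} \end{pmatrix}, \qquad M^t = \begin{pmatrix} m_{11} & m_{21} & \cdots & m_{n1} \\ m_{12} & m_{22} & \cdots & m_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ m_{1d} & m_{2d} & \cdots & m_{nd} \end{pmatrix}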

In other words, the jith entry of Mt is the ijth entry of M.

Square Matrix

 A square (d × d) matrix is called symmetric if its entries obey mij = mji
 It is called skew-symmetric (or anti-symmetric) if its entries obey mij = −mji
 An identity matrix I is a d × d (square) matrix whose diagonal entries are 1 and all other entries are 0.
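For instance, for d = 2 (with illustrative entries chosen here to satisfy the definitions):

\underbrace{\begin{pmatrix} 1 & 3 \\ 3 & 2 \end{pmatrix}}_{\text{symmetric}}, \qquad \underbrace{\begin{pmatrix} 0 & 3 \\ -3 & 0 \end{pmatrix}}_{\text{skew-symmetric}}, \qquad I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}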

Multiplying a matrix by a vector
 We can multiply a vector by a matrix, Mx = y, as follows:
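\begin{pmatrix} m_{11} & m_{12} & \cdots & m_{1d} \\ m_{21} & m_{22} & \cdots & m_{2d} \\ \vdots & \vdots & \ddots & \vdots \\ m_{n1} & m_{n2} & \cdots & m_{nd} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_d \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \qquad y_i = \sum_{j=1}^{d} m_{ij} x_j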

Inner product
 The inner product of two vectors x and y having the same
dimensionality is denoted as xty and yields a scalar:
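x^t y = \sum_{i=1}^{d} x_i y_i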

It is also called the scalar product or dot product, and is denoted x • y.

Inner product.. continued
 The Euclidean norm or length of the vector is:
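\|x\| = \sqrt{x^t x} = \left( \sum_{i=1}^{d} x_i^2 \right)^{1/2}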

If ||x|| =1, the vector is called normalized.


The angle between two d-dimensional vectors x and y obeys:
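\cos\theta = \frac{x^t y}{\|x\| \, \|y\|}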

Note:
 The inner product is a measure of the collinearity of two vectors, and hence a natural indication of their similarity
 If xty = 0, then the vectors are orthogonal
 If |xty| = ||x|| ||y||, the vectors are collinear
 We say a set of vectors {x1, x2, . . . , xn} is linearly independent if no vector in
the set can be written as a linear combination of any of the others

Example
 Example 1: Find the inner product of x = (2, −3)t and y = (5, 1)t.
 Solution: We are given x = (2, −3)t and y = (5, 1)t. The inner product is

x • y = xty = (2)(5) + (−3)(1) = 10 − 3 = 7
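A minimal NumPy sketch of the same computation, assuming NumPy is available:

import numpy as np

x = np.array([2, -3])
y = np.array([5, 1])

# Inner (dot) product: x . y = sum_i x_i * y_i
print(np.dot(x, y))   # 7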

Outer product
 The outer product (matrix) of two vectors x and y gives a matrix as
below:

M = x y^t = \begin{pmatrix} x_1 y_1 & x_1 y_2 & \cdots & x_1 y_n \\ x_2 y_1 & x_2 y_2 & \cdots & x_2 y_n \\ \vdots & \vdots & \ddots & \vdots \\ x_d y_1 & x_d y_2 & \cdots & x_d y_n \end{pmatrix}, \qquad M_{ij} = x_i y_j
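A minimal NumPy sketch, with illustrative values for x and y (y is given a different dimension to show the general d × n shape):

import numpy as np

x = np.array([2, -3])     # d = 2
y = np.array([5, 1, 4])   # n = 3
M = np.outer(x, y)        # d x n matrix with M[i, j] = x[i] * y[j]
print(M)
# [[ 10   2   8]
#  [-15  -3 -12]]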

Determinant of a Matrix

 The determinant of a d × d (square) matrix is a scalar, denoted |M|, that reveals properties of the matrix. For instance, if we consider the columns of M as vectors and these vectors are not linearly independent, then the determinant vanishes.

 If M is itself a scalar (i.e., a 1 × 1 matrix M), then |M| = M.

 If M is a 2 × 2 matrix, then |M| = m11m22 − m21m12.

Determinant of a Matrix.. continued
The determinant of a general square matrix can be computed by a method called expansion by minors, which leads to a recursive definition. If M is our d × d matrix, we define Mi|j to be the (d − 1) × (d − 1) matrix obtained by deleting the ith row and the jth column of M.

Given the determinants |Mi|1| of the minors formed from the first column, we can compute the determinant of M by expanding along that column:

|M| = \sum_{i=1}^{d} (-1)^{i+1} \, m_{i1} \, |M_{i|1}|

Example 2: Finding determinant of a matrix
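As an illustration (the matrix below is an arbitrary choice), expansion along the first column gives:

\begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{vmatrix} = 1 \begin{vmatrix} 5 & 6 \\ 8 & 10 \end{vmatrix} - 4 \begin{vmatrix} 2 & 3 \\ 8 & 10 \end{vmatrix} + 7 \begin{vmatrix} 2 & 3 \\ 5 & 6 \end{vmatrix} = 1(2) - 4(-4) + 7(-3) = -3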


Trace
 The trace of a d × d (square) matrix, denoted tr[M], is the sum of its
diagonal elements.
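\mathrm{tr}[M] = \sum_{i=1}^{d} m_{ii}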

Both the determinant and trace of a matrix are invariant with respect to
rotations of the coordinate system.

Matrix inversion
 The determinant must be non-zero for the inverse of a matrix to exist
 The inverse of a d × d matrix M, denoted M−1, is the d × d matrix
such that: MM−1 = I
 We call the scalar Cij = (−1)^(i+j) |Mi|j| the cofactor of the i, j entry of M
 Mi|j is the (d − 1) × (d − 1) matrix formed by deleting the ith row and
jth column of M
 The adjoint of M, written Adj[M], is the matrix whose i, j entry is the
j, i cofactor of M
 The inverse of a matrix M can be written as:
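M^{-1} = \frac{\mathrm{Adj}[M]}{|M|}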

Examples: Finding inverse of a matrix

 Example 3: Finding inverse of 2×2 matrix


 Example 4: Finding inverse of 3×3 matrix
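For a general 2 × 2 matrix, the cofactor construction above reduces to the familiar formula (the numeric matrix is an illustrative choice):

M = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \;\Rightarrow\; M^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}, \quad \text{e.g.} \quad \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}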

Eigenvectors and Eigenvalues
 Given a d × d matrix M, a very important class of linear equations is of the form:
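Mx = \lambda x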

for scalar λ, which can be rewritten as: Mx=λIx. Thus, we have:
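(M - \lambda I)\, x = \mathbf{0}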

where I is the identity matrix and 0 is the zero vector. A nonzero solution x exists only when the matrix M − λI is singular, i.e., when |M − λI| = 0.

 The solution vector x = ei and corresponding scalar λ = λi are called an eigenvector and its associated eigenvalue.
 There are d (possibly non-distinct) solution vectors {e1, e2, . . . , ed}
each with an associated eigenvalue {λ1, λ2, . . . , λd}.
Thus, multiple solutions are possible
Also, under multiplication by M the eigenvectors are changed only in
magnitude — not direction
Understanding the significance of Eigenvectors and Eigenvalues

 Let us consider a square in 2-d space as below, and focus on three vectors as shown (colored red, yellow, and green).

Understanding the significance of Eigenvectors and Eigenvalues:
Scaling

 Suppose we scale the square by a factor of 2 along the y-axis, as shown below:

Understanding the significance of Eigenvectors
and Eigenvalues
 Notice that the red vector has the same scale and direction after the linear transformation. The green vector changes in scale but still has the same direction. The yellow vector, however, changes in both scale and direction: its angle with the x-axis increases.
 If we look closely, the directions of all vectors other than the red and green ones change. Hence we can say the red and green vectors are special: they are characteristic of this linear transformation.
 These vectors are called eigenvectors of this linear transformation, and their change in scale due to the transformation is called their eigenvalue. For the red vector the eigenvalue is 1, since its scale is the same before and after the transformation; for the green vector the eigenvalue is 2, since it is scaled up by a factor of 2.
 A vector has both magnitude and direction. The only way to change the magnitude of a vector without changing its direction is to multiply it by a scalar, but multiplication by a matrix generally changes both. The eigenvectors of a matrix M are the special vectors whose direction is preserved, and the corresponding scalars λ are called eigenvalues. This is expressed by the equation Mx = λx.
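A minimal NumPy sketch of this scaling example; the assignment of the colored vectors to specific directions is an assumption for illustration (red along x, green along y, yellow along the diagonal):

import numpy as np

# Scale by a factor of 2 along the y-axis.
M = np.array([[1.0, 0.0],
              [0.0, 2.0]])

red    = np.array([1.0, 0.0])   # assumed along the x-axis
green  = np.array([0.0, 1.0])   # assumed along the y-axis
yellow = np.array([1.0, 1.0])   # assumed along the diagonal

print(M @ red)     # [1. 0.] -> unchanged: eigenvector with eigenvalue 1
print(M @ green)   # [0. 2.] -> same direction, doubled: eigenvector with eigenvalue 2
print(M @ yellow)  # [1. 2.] -> direction changed: not an eigenvector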
Understanding the significance of Eigenvectors and Eigenvalues:
Shearing

 Let’s have a look at another linear transformation, where we shear the square along the x-axis.

Here the red vector is the eigenvector, and its eigenvalue is 1.
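For instance, a unit shear along the x-axis (an illustrative choice of shear factor) is

S = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad S \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}

so a vector along the x-axis keeps its direction and scale (eigenvalue 1), while every other direction is tilted.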


Understanding the significance of Eigenvectors and Eigenvalues:
Rotation

 Let us rotate the square by 90 degrees clockwise.

Here, there are no real eigenvectors: every vector changes direction under the rotation.

Understanding the significance of Eigenvectors and Eigenvalues:
Rotation

 Let us rotate the square by 180 degrees clockwise.

Here all vectors, including the three colored ones, are eigenvectors with an eigenvalue of −1, since rotating by 180 degrees maps every vector x to −x.

Example: Finding eigenvector and
eigenvalue

 Example 5: Finding eigenvector and eigenvalue


 Example 6: Finding eigenvector and eigenvalue
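A minimal NumPy sketch for finding eigenvalues and eigenvectors numerically (the matrix is an illustrative stand-in for those in Examples 5 and 6):

import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of vecs are the eigenvectors e_i; vals[i] is the matching eigenvalue.
vals, vecs = np.linalg.eig(M)
print(vals)   # eigenvalues 3 and 1 (in some order)
print(vecs)   # eigenvectors proportional to (1, 1) and (1, -1)

# Verify M e = lambda e for each pair:
for lam, e in zip(vals, vecs.T):
    assert np.allclose(M @ e, lam * e)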

Additional Reading
 https://towardsdatascience.com/eigenvectors-and-eigenvalues-all-you-need-to-know-df92780c591f
 https://towardsdatascience.com/understanding-singular-value-decomposition-and-its-application-in-data-science-388a54be95d

