
Linear Algebra

Lecture slides for Chapter 2 of Deep Learning


About this chapter

• Not a comprehensive survey of all of linear algebra

• Focused on the subset most relevant to deep learning

(Goodfellow 2016)
Scalars
• A scalar is a single number

• Integers, real numbers, rational numbers, etc.

• We denote it with italic font: a, n, x

Vectors
• A vector is a 1-D array of numbers:

• Can be real, binary, integer, etc.

• Example notation for type and size: x ∈ ℝ^n

Matrices

• A matrix is a 2-D array of numbers, indexed first by row and then by column

• Example notation for type and shape: A ∈ ℝ^(m×n)

Tensors
• A tensor is an array of numbers that may have:

• zero dimensions, and be a scalar

• one dimension, and be a vector

• two dimensions, and be a matrix

• or more dimensions.

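As a concrete illustration (a NumPy sketch added here, not part of the original slides), the number of axes of an array tells you which kind of tensor it is:

```python
import numpy as np

# Tensors of increasing dimensionality (number of axes).
scalar = np.array(5.0)            # 0-D: a scalar
vector = np.array([1.0, 2.0])     # 1-D: a vector
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])   # 2-D: a matrix
tensor3 = np.zeros((2, 3, 4))     # 3-D: a higher-order tensor

print(scalar.ndim, vector.ndim, matrix.ndim, tensor3.ndim)  # 0 1 2 3
```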
Matrix Transpose

• The transpose mirrors a matrix across its main diagonal: (A^T)_(i,j) = A_(j,i)
Matrix (Dot) Product

• C = AB: an (m×n) matrix A times an (n×p) matrix B gives an (m×p) matrix C

• The inner dimensions must match: C_(i,j) = Σ_k A_(i,k) B_(k,j)
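A quick NumPy sketch of the shape rule (my addition; the matrices are made up): the (m×n)-times-(n×p) product gives an (m×p) result, and each entry is a sum over the shared dimension:

```python
import numpy as np

m, n, p = 2, 3, 4
A = np.arange(m * n).reshape(m, n).astype(float)   # shape (2, 3)
B = np.arange(n * p).reshape(n, p).astype(float)   # shape (3, 4)

C = A @ B                                          # shape (2, 4)

# Entry (i, j) is the dot product of row i of A with column j of B.
i, j = 1, 2
entry = sum(A[i, k] * B[k, j] for k in range(n))
print(C.shape, np.isclose(C[i, j], entry))
```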
Identity Matrix

• Multiplying by the identity matrix leaves a vector unchanged: I_n x = x
Systems of Equations

Ax = b expands to A_(1,:) · x = b_1, A_(2,:) · x = b_2, …, A_(m,:) · x = b_m
Solving Systems of
Equations
• A linear system of equations can have:

• No solution

• Many solutions

• Exactly one solution: this means multiplication by the matrix is an invertible function

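One way to tell the three cases apart numerically is the classical rank criterion (the Rouché–Capelli test — my addition, not stated on the slide); a hedged sketch using NumPy's rank computation:

```python
import numpy as np

def classify_system(A, b):
    """Classify Ax = b as having 'unique', 'none', or 'many' solutions."""
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    n_unknowns = A.shape[1]
    if rank_Ab > rank_A:
        return "none"      # b lies outside the column space of A
    if rank_A == n_unknowns:
        return "unique"    # multiplication by A is an invertible function
    return "many"          # a whole subspace of solutions exists

A = np.array([[1.0, 0.0], [0.0, 1.0]])
print(classify_system(A, np.array([1.0, 2.0])))      # unique

A_dep = np.array([[1.0, 2.0], [2.0, 4.0]])           # rows are linearly dependent
print(classify_system(A_dep, np.array([1.0, 2.0])))  # many
print(classify_system(A_dep, np.array([1.0, 3.0])))  # none
```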
Matrix Inversion
• Matrix inverse: A^(-1) A = I

• Solving a system using an inverse: x = A^(-1) b

• Numerically unstable, but useful for abstract analysis

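In code, the abstract recipe x = A^(-1) b works, but a direct solver is preferred in practice, which is one reason the slide calls the inverse numerically unstable (a NumPy sketch; the matrices are made up):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# Textbook route: form the inverse explicitly, then multiply.
x_inv = np.linalg.inv(A) @ b

# Preferred route: solve the system directly (cheaper, better conditioned).
x_solve = np.linalg.solve(A, b)

print(np.allclose(x_inv, x_solve), np.allclose(A @ x_solve, b))  # True True
```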
Invertibility

• Matrix can’t be inverted if…

• More rows than columns

• More columns than rows

• Redundant rows/columns (“linearly dependent”, “low rank”)

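The “low rank” case can be checked numerically (a sketch; the example matrix and the use of np.linalg.matrix_rank are my additions):

```python
import numpy as np

# A square matrix with a redundant row: row 2 = 2 * row 1.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

rank = np.linalg.matrix_rank(A)
print(rank)                      # 1, i.e. less than 2: A is singular

# Attempting to invert a singular matrix raises LinAlgError.
try:
    np.linalg.inv(A)
    invertible = True
except np.linalg.LinAlgError:
    invertible = False
print(invertible)                # False
```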
Norms
• Functions that measure how “large” a vector is

• Similar to a distance between zero and the point represented by the vector

Norms
• Lp norm: ||x||_p = (Σ_i |x_i|^p)^(1/p)

• Most popular norm: L2 norm, p=2: ||x||_2 = sqrt(Σ_i x_i^2)

• L1 norm, p=1: ||x||_1 = Σ_i |x_i|

• Max norm, infinite p: ||x||_∞ = max_i |x_i|

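The three special cases can be computed directly (a sketch; the ord argument follows NumPy's documented convention for np.linalg.norm, and the vector is made up):

```python
import numpy as np

x = np.array([3.0, -4.0])

l2   = np.linalg.norm(x)               # default: L2 norm, sqrt(9 + 16) = 5
l1   = np.linalg.norm(x, ord=1)        # |3| + |-4| = 7
lmax = np.linalg.norm(x, ord=np.inf)   # max(|3|, |-4|) = 4

# General Lp norm straight from the definition, for comparison (p = 2).
p = 2
lp = np.sum(np.abs(x) ** p) ** (1.0 / p)

print(l2, l1, lmax)  # 5.0 7.0 4.0
```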
Special Matrices and
Vectors
• Unit vector: a vector with unit L2 norm, ||x||_2 = 1

• Symmetric matrix: A = A^T

• Orthogonal matrix: A^T A = A A^T = I, so A^(-1) = A^T

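A rotation matrix is a handy concrete orthogonal matrix (a sketch of my own; the angle is arbitrary):

```python
import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # 2-D rotation: orthogonal

# Orthogonality: Q^T Q = I, so the transpose is the inverse.
print(np.allclose(Q.T @ Q, np.eye(2)))        # True
print(np.allclose(Q.T, np.linalg.inv(Q)))     # True

# Orthogonal maps preserve the L2 norm of any vector.
v = np.array([1.0, 2.0])
print(np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v)))  # True
```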
Eigendecomposition

• Eigenvector and eigenvalue: Av = λv

• Eigendecomposition of a diagonalizable matrix: A = V diag(λ) V^(-1)

• Every real symmetric matrix has a real, orthogonal eigendecomposition: A = Q Λ Q^T

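For a real symmetric matrix, NumPy's eigh returns real eigenvalues and orthonormal eigenvectors, so the decomposition A = Q Λ Q^T can be checked directly (a sketch; the matrix is made up):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # real symmetric

eigvals, Q = np.linalg.eigh(A)      # eigh: for symmetric/Hermitian matrices

# Each column of Q is an eigenvector: A v = lambda v.
v, lam = Q[:, 0], eigvals[0]
print(np.allclose(A @ v, lam * v))                  # True

# Reconstruct A = Q diag(lambda) Q^T.
print(np.allclose(Q @ np.diag(eigvals) @ Q.T, A))   # True
```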
Effect of Eigenvalues

• (figure) Multiplication by A stretches space by λ_i along the direction of each eigenvector v_i
Singular Value
Decomposition

• A = U D V^T, with U and V orthogonal and D diagonal (the singular values)

• Similar to eigendecomposition

• More general; the matrix need not be square

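Unlike eigendecomposition, the SVD exists for any shape of matrix (a sketch; np.linalg.svd returns U, the singular values, and V^T):

```python
import numpy as np

A = np.arange(6, dtype=float).reshape(2, 3)   # non-square: 2 x 3

U, s, Vt = np.linalg.svd(A)                   # s holds the singular values

# Rebuild the 2 x 3 diagonal matrix D from the singular values.
D = np.zeros_like(A)
D[:len(s), :len(s)] = np.diag(s)

print(np.allclose(U @ D @ Vt, A))             # True
```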
Moore-Penrose
Pseudoinverse

• If the equation Ax = b has:

• Exactly one solution: this is the same as the inverse.

• No solution: this gives us the solution with the smallest error ||Ax − b||.

• Many solutions: this gives us the solution with the smallest norm of x.

Computing the
Pseudoinverse

The SVD allows the computation of the pseudoinverse:

A^+ = V D^+ U^T, where D^+ takes the reciprocal of the non-zero entries of D, then transposes

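The recipe above translates directly into code, and can be checked against NumPy's built-in np.linalg.pinv (a sketch; the matrix and the 1e-12 zero threshold are my choices):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])                 # 3 x 2: more rows than columns

U, s, Vt = np.linalg.svd(A)

# D+: reciprocal of the non-zero singular values, placed in a transposed shape.
D_plus = np.zeros((A.shape[1], A.shape[0]))      # 2 x 3
for i, si in enumerate(s):
    if si > 1e-12:                               # reciprocal of non-zero entries only
        D_plus[i, i] = 1.0 / si

A_plus = Vt.T @ D_plus @ U.T                     # A+ = V D+ U^T
print(np.allclose(A_plus, np.linalg.pinv(A)))    # True
```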
Trace

Tr(A) = Σ_i A_(i,i), the sum of the diagonal entries
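A small sketch of the trace, along with the standard invariance-under-cyclic-permutation identity Tr(AB) = Tr(BA) (my addition; the matrices are made up):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(np.trace(A))                                   # 1 + 4 = 5.0
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))  # True
```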
Learning linear algebra

• Do a lot of practice problems

• Start out with lots of summation signs and indexing into individual entries

• Eventually you will be able to mostly use matrix and vector product notation quickly and easily

