
M0-1: Linear Algebra 101

Modern Control: Theory and Design (2023)

Ding Zhao

Associate Professor
College of Engineering
School of Computer Science

Carnegie Mellon University

Ding Zhao (CMU) M0-1:Linear Algebra 101 1 / 21


Notation

An m × n matrix: m rows and n columns

m = 1 : Row matrix
n = 1 : Column matrix
m = n : Square matrix
m = n = 1 : Scalar

Matrix Transpose

Consider an m × n matrix A = [aij ], i = 1, 2, . . . , m and j = 1, 2, . . . , n.

The transpose of A is an n × m matrix A^T = [aji ].

Example: [0 4; 7 0; 3 1]^T = [0 7 3; 4 0 1]

If A = A^T : A is said to be symmetric
If A = −A^T : A is said to be skew-symmetric
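As a quick sketch (the matrices below are chosen only for illustration, using NumPy), the transpose example and the symmetry definitions look like this:

```python
import numpy as np

# The 3 x 2 matrix from the slide's transpose example.
A = np.array([[0, 4],
              [7, 0],
              [3, 1]])
print(A.T)                       # 2 x 3 transpose: rows become columns

# Symmetry checks on two illustrative matrices.
S = np.array([[1, 2], [2, 5]])   # S = S^T  -> symmetric
K = np.array([[0, 3], [-3, 0]])  # K = -K^T -> skew-symmetric
print(np.array_equal(S, S.T))    # True
print(np.array_equal(K, -K.T))   # True
```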

Conjugate and Associate Matrix

Conjugate matrix
The conjugate of A, written Ā, is the matrix formed by replacing every element of A by its
complex conjugate. Thus Ā = [āij ].
If all elements of A are real, then Ā = A
If all elements are purely imaginary, then Ā = −A

Associate matrix
The associate matrix of A is the conjugate transpose of A. The order of these two operations
is immaterial.
Ā^T = A ⇒ A: Hermitian
Ā^T = −A ⇒ A: Skew-Hermitian
For real matrices, symmetric and Hermitian mean the same.
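A minimal NumPy sketch of these definitions (the complex matrices are illustrative, not from the slide):

```python
import numpy as np

# Elementwise conjugate and conjugate transpose (the "associate" matrix).
A = np.array([[1 + 2j, 3 + 0j],
              [0 - 1j, 4 + 0j]])
print(np.conj(A))        # A-bar: every element conjugated
print(A.conj().T)        # conjugate transpose; order of the two ops is immaterial

# A Hermitian example: equal to its own conjugate transpose.
H = np.array([[2 + 0j, 1 - 1j],
              [1 + 1j, 3 + 0j]])
print(np.allclose(H, H.conj().T))   # True -> Hermitian
```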

Matrix Addition and Subtraction

Matrix addition and subtraction are performed on an element-by-element basis. That is, if
A = [aij ] and B = [bij ] are both m × n matrices, then A + B = C and A − B = D indicate
that the matrices C = [cij ] and D = [dij ] are also m × n matrices whose elements are given
by cij = aij + bij and dij = aij − bij for i = 1, 2, . . . , m and j = 1, 2, . . . , n.
   
Example: [0 4; 7 0; 3 1] + [1 2; 2 3; 0 4] = [1 6; 9 3; 3 5]
Properties:

A + B = B + A Commutative
(A + B) + C = A + (B + C) Associative
(A + B)^T = A^T + B^T
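The addition properties can be spot-checked numerically; a sketch with the slide's example matrices plus one arbitrary third matrix:

```python
import numpy as np

A = np.array([[0, 4], [7, 0], [3, 1]])
B = np.array([[1, 2], [2, 3], [0, 4]])
C = np.array([[5, 1], [0, 2], [4, 4]])   # arbitrary, for the associativity check

print(np.array_equal(A + B, B + A))               # commutative
print(np.array_equal((A + B) + C, A + (B + C)))   # associative
print(np.array_equal((A + B).T, A.T + B.T))       # transpose distributes over sums
```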

Scalar Multiplication

Multiplication of a matrix A = [aij ] by an arbitrary scalar α ∈ F amounts to multiplying
every element in A by α. That is, αA = Aα = [αaij ].
   
Example: (−2) [1 6; 9 3; 6 0] = [−2 −12; −18 −6; −12 0]

Properties:

(α + β)A = αA + βA
(αβ)A = (α)(βA)
α(A + B) = αA + αB

Matrix Multiplication

Consider an m × n matrix A = [aij ] and a p × q matrix B = [bij ]. The product C = AB is only
defined when A has the same number of columns as B has rows, i.e., when n = p. The elements of
C = [cij ] are then computed according to cij = Σ_{k=1}^n aik bkj
Example:
      
[2 3; 4 5] [1 3 5; 2 4 8] = [2(1)+3(2) 2(3)+3(4) 2(5)+3(8); 4(1)+5(2) 4(3)+5(4) 4(5)+5(8)] = [8 18 34; 14 32 60]
Properties:
(AB)C = A(BC) = ABC
α(AB) = (αA)B = A(αB), where α is a scalar
A(B + C) = AB + AC, (A + B)C = AC + BC
(AB)^T = B^T A^T
AB ̸= BA in general: matrix multiplication is not commutative, which makes matrices behave differently from scalars
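A sketch of the worked product above, plus a non-commutativity check on two arbitrary square matrices (chosen for illustration):

```python
import numpy as np

# The 2x2 times 2x3 product from the slide.
A = np.array([[2, 3], [4, 5]])
B = np.array([[1, 3, 5], [2, 4, 8]])
print(A @ B)   # rows of A dotted with columns of B

# Non-commutativity: swap the factors and the product changes.
X = np.array([[1, 2], [3, 4]])
Y = np.array([[0, 1], [1, 0]])
print(np.array_equal(X @ Y, Y @ X))           # False
print(np.array_equal((X @ Y).T, Y.T @ X.T))   # True: (AB)^T = B^T A^T
```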

Matrix Inverse
In order to learn the similarity transformation, we need to review the definition of the matrix inverse.
If matrix A is square, and (square) matrix B satisfies

BA = AB = I

then B is called the inverse of A and is denoted B = A^−1. For the inverse to exist, A must
have a nonzero determinant, i.e., A must be non-singular. When this is true, A has a unique
inverse given by

A^−1 = C^T / |A|

where C is the matrix formed by the cofactors Cij . The matrix C^T is called the adjoint
matrix, Adj(A). Thus the inverse of a nonsingular matrix is

A^−1 = Adj(A)/|A|
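The adjoint-over-determinant formula can be coded directly; this is an illustrative (and deliberately inefficient) sketch, with `np.linalg.inv` as the practical route in NumPy:

```python
import numpy as np

def adjugate_inverse(A):
    """Compute A^-1 = Adj(A)/|A| via the cofactor matrix C."""
    n = A.shape[0]
    C = np.zeros((n, n))                      # cofactor matrix
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j, then take the determinant.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / np.linalg.det(A)             # adjoint / determinant

A = np.array([[2.0, 4.0, 1.0],
              [3.0, 0.0, 2.0],
              [2.0, 0.0, 3.0]])
print(np.allclose(adjugate_inverse(A), np.linalg.inv(A)))   # True
```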

Dot Product

Consider two vectors a = [a1 , a2 , ..., an ] and b = [b1 , b2 , ..., bn ]. The dot product of the
two vectors is defined as: a · b = a^T b = Σ_{i=1}^n ai bi = a1 b1 + a2 b2 + · · · + an bn

Example: [1, 2, −5] [4, −3, −1]^T = (1 × 4) + (2 × −3) + (−5 × −1) = 4 − 6 + 5 = 3

The inner product generalizes the dot product (defined on Euclidean spaces) to vector spaces of
any dimension. An inner product space is a vector space V over a field F equipped with a map

⟨·, ·⟩ : V × V → F

that satisfies three properties: conjugate symmetry (⟨x, y⟩ equals the complex conjugate of
⟨y, x⟩), linearity in the first argument ⟨ax, y⟩ = a⟨x, y⟩, and positive-definiteness
(⟨x, x⟩ ≥ 0, with equality iff x = 0).
Usually denoted as (X , F , ⟨·, ·⟩)
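The dot-product example above, in two equivalent NumPy spellings:

```python
import numpy as np

a = np.array([1, 2, -5])
b = np.array([4, -3, -1])
print(np.dot(a, b))   # (1)(4) + (2)(-3) + (-5)(-1) = 3
print(a @ b)          # same result via the matmul operator
```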

Angle between Vectors and Orthogonal

Inspired by the geometric picture, the angle between two vectors is defined as:

∠ (x, y) = arccos( x^T y / (√(x^T x) √(y^T y)) )

For x, y ∈ V , we say that x and y are orthogonal if ∠ (x, y) = 90◦ , i.e.,

x^T y = ∥x∥∥y∥ cos(90◦ ) = 0
A set of vectors {x1 , . . . , xn } is called orthonormal if

xi^T xj = 0, ∀ i ̸= j
xi^T xi = 1, 1 ⩽ i ⩽ n

p-norm

x = [x1 , . . . , xn ]^T

∥x∥2 = (Σ_{i=1}^n |xi|^2)^{1/2} = √(x^T x)   Euclidean norm (distance)
∥x∥p = (Σ_{i=1}^n |xi|^p)^{1/p}
∥x∥1 = Σ_{i=1}^n |xi|
∥x∥∞ = max_i |xi|
You may interpret the norm as the generalization of the absolute value to linear spaces. It is an
important concept because it is usually used as a measure of magnitude, which we will use
extensively to describe the behavior of a system, e.g., stability (cf. Hilbert space).

Null Matrix and Unit Matrix

Null Matrix
The null matrix 0 is one that has all of its elements equal to zero. The null matrix is, however,
not unique, because the numbers of rows and columns it possesses can be any finite positive
integers. Whenever necessary, a null matrix of size m × n is denoted by 0mn .
A+0=0+A=A
0A = A0 = 0
Note: AB = 0 does not imply that either A or B is a null matrix.
Identity Matrix
The identity or unit matrix I is a square matrix with ones on its diagonal (the i = j positions)
and zeros everywhere else. When necessary, an n × n unit matrix shall be
denoted by In .
If A is m × n, then Im A = A and AIn = A

Eigenvalues and Eigenvectors

Definition
Consider a square matrix A
An eigenvector of A is a non-null vector v ̸= 0 for which there exists an eigenvalue λ (real, or
complex in general) such that

Av = λv
Some basic properties:

An eigenvector has at most one eigenvalue
If v is an eigenvector, then so is av for every scalar a ̸= 0 (and λ itself could be 0)
A normalized eigenvector is defined as v/∥v∥2

Matrix Inverse Properties
(A^−1)^−1 = A, i.e., the inverse of the inverse is the original matrix (assuming A is invertible)
(AB)^−1 = B^−1 A^−1 (assuming A, B are invertible)
(A^T)^−1 = (A^−1)^T (assuming A is invertible)
I^−1 = I
(αA)^−1 = (1/α)A^−1 (assuming A invertible, α ̸= 0)
A block-diagonal matrix is inverted block by block:

diag(A1 , A2 , . . . , An )^−1 = diag(A1^−1 , A2^−1 , . . . , An^−1 )

For a 2 × 2 matrix:

A^−1 = [a b; c d]^−1 = (1/det A) [d −b; −c a] = (1/(ad − bc)) [d −b; −c a]

How do we compute the inverse of high-dimensional matrices?
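The 2 × 2 formula and the block-diagonal rule can both be verified numerically; a sketch with illustrative values:

```python
import numpy as np

# 2x2 inverse via the cofactor formula, checked against np.linalg.inv.
a, b, c, d = 1.0, 2.0, 3.0, 4.0
A = np.array([[a, b], [c, d]])
A_inv = np.array([[d, -b], [-c, a]]) / (a * d - b * c)
print(np.allclose(A_inv, np.linalg.inv(A)))   # True

# Block-diagonal: invert each diagonal block independently.
A1 = np.array([[2.0]])
A2 = np.array([[1.0, 1.0], [0.0, 1.0]])
Z12, Z21 = np.zeros((1, 2)), np.zeros((2, 1))
M = np.block([[A1, Z12], [Z21, A2]])
M_inv = np.block([[np.linalg.inv(A1), Z12], [Z21, np.linalg.inv(A2)]])
print(np.allclose(M_inv, np.linalg.inv(M)))   # True
```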
Determinant of Matrix

Minors: An n × n matrix A contains n² elements aij . Each of these has associated with it a
unique scalar, called a minor Mij . The minor Mpq is the determinant of the (n − 1) × (n − 1)
matrix formed from A by crossing out the p-th row and q-th column.

Cofactors Each element apq of A has a cofactor Cpq , which differs from Mpq at most by a
sign change. Cofactors are sometimes called signed minors for this reason and are given by
Cpq = (−1)p+q Mpq .

Determinants by Laplace Expansion: If A is an n × n matrix, any arbitrary row k can be
selected and |A| is then given by |A| = Σ_{j=1}^n akj Ckj . Similarly, Laplace expansion can be
carried out with respect to any arbitrary column l, to obtain |A| = Σ_{i=1}^n ail Cil . Laplace
expansion reduces the evaluation of an n × n determinant down to the evaluation of a string
of (n − 1) × (n − 1) determinants, namely, the cofactors.
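Laplace expansion translates directly into a recursive sketch (for teaching only: its cost is exponential, so practical codes use LU factorization instead):

```python
import numpy as np

def det_laplace(A):
    """Determinant by Laplace expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor: delete row 0 and column j; cofactor sign is (-1)^(0+j).
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_laplace(minor)
    return total

A = np.array([[2.0, 4.0, 1.0],
              [3.0, 0.0, 2.0],
              [2.0, 0.0, 3.0]])
print(det_laplace(A))     # -20.0
print(np.linalg.det(A))   # ~ -20.0, same value up to rounding
```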

Determinant of Matrix
 
Example: Calculate the determinant of A = [2 4 1; 3 0 2; 2 0 3]. (Portrait: Laplace, 1749-1827)

Recall that |a b; c d| = ad − bc. Three of its minors are

M12 = |3 2; 2 3| = 5, M22 = |2 1; 2 3| = 4, and M32 = |2 1; 3 2| = 1

The associated cofactors are

C12 = (−1)^3 · 5 = −5, C22 = (−1)^4 · 4 = 4, C32 = (−1)^5 · 1 = −1

Using Laplace expansion with respect to column 2 gives |A| = 4C12 + 0 · C22 + 0 · C32 = 4(−5) = −20

Properties of Determinant

Given A, B ∈ C^(n×n) : det(A) ̸= 0 ⇐⇒ A is nonsingular (invertible) ⇐⇒ its rows and
columns are linearly independent

Properties:
det(αA) = α^n det(A)
det(A^T) = det(A)
det(I) = 1
det(AB) = det(A) det(B)
det(A−1 ) = 1/ det(A)
If A is a triangular matrix, det A = ∏ diag(A), the product of its diagonal entries (what if it is a diagonal matrix?)
Note: that is why |Ep,q (α)| = 1
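These properties can be spot-checked on random matrices; a sketch (the random seed and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
alpha = 2.0

print(np.isclose(np.linalg.det(alpha * A), alpha**3 * np.linalg.det(A)))      # det(aA) = a^n det(A)
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))                       # det(A^T) = det(A)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # det(AB) = det(A)det(B)

# Triangular matrix: determinant is the product of the diagonal entries.
T = np.triu(A)
print(np.isclose(np.linalg.det(T), np.prod(np.diag(T))))                      # True
```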

Elementary Operation

How do we compute the inverse? We can use Gaussian elimination. First we need to know the
three basic operations on a matrix, called elementary operations:
1 Row switching: The interchange of two rows (or of two columns).
2 Row multiplication: The multiplication of every element in a given row (or column) by a
scalar α.
3 Row addition: The multiplication of the elements of a given row (or column) by a scalar
α, and adding the result to another row (column). The original row (column) is unaltered.
We will mainly use row operations in this course.
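The three elementary row operations can be sketched as small NumPy helpers (each returns a modified copy; names are illustrative):

```python
import numpy as np

def row_switch(A, i, j):
    """Interchange rows i and j."""
    A = A.copy(); A[[i, j]] = A[[j, i]]; return A

def row_scale(A, i, alpha):
    """Multiply every element of row i by the scalar alpha."""
    A = A.copy(); A[i] = alpha * A[i]; return A

def row_add(A, src, dst, alpha):
    """Add alpha times row src to row dst; row src is unaltered."""
    A = A.copy(); A[dst] = A[dst] + alpha * A[src]; return A

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(row_switch(A, 0, 1))      # rows interchanged
print(row_scale(A, 0, 2.0))     # first row doubled
print(row_add(A, 0, 1, -3.0))   # row 1 <- row 1 - 3*row 0, zeroing the 3
```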

Gaussian Elimination

Highlights of Gaussian Elimination: (Portrait: Gauss, 1777-1855)


Use elementary row operations to reduce the augmented matrix to a form such that
1 The first nonzero entry of each row should be to the right of the first nonzero
entry of the preceding row. Simply put, the coefficient part (corresponding to A) of the
augmented matrix should form an n × n upper triangular matrix.
2 Any zero row should be at the bottom of the matrix.

Gaussian elimination uses this row-reduced (echelon) form of the augmented matrix to compactly
solve a given system of linear equations.
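The whole procedure can be sketched in a few lines: forward-eliminate the augmented matrix [A | b] (with partial pivoting, a standard stabilizing refinement not on the slide), then back-substitute:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])  # [A | b]
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))   # pick the largest pivot below row k
        M[[k, p]] = M[[p, k]]                 # row switch
        for i in range(k + 1, n):
            M[i] -= (M[i, k] / M[k, k]) * M[k]   # row addition: zero column k
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # back substitution on the echelon form
        x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gauss_solve(A, b))   # matches np.linalg.solve(A, b)
```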

Field
Let F be a set with at least 2 elements, assume F has 2 operations:
“+”: F × F → F (addition) and “·” : F × F → F (multiplication). F is called a field iff:
A0 : ∀α, β ∈ F , ∃α + β ∈ F ⇒ Closure under Addition
A1 : ∀α, β ∈ F , α + β = β + α ⇒ Commutativity
A2 : ∀α, β, γ ∈ F , (α + β) + γ = α + (β + γ) ⇒ Associativity
A3 : ∃0 ∈ F , ∀α ∈ F , α + 0 = α ⇒ Neutral
A4 : ∀α ∈ F , ∃(−α) ∈ F , α + (−α) = 0 ⇒ Inverse

M0 : ∀α, β ∈ F , ∃α· β ∈ F ⇒ Closure under Multiplication
M1 : ∀α, β ∈ F , α· β = β· α ⇒ Commutativity
M2 : ∀α, β, γ ∈ F , (α· β)· γ = α· (β· γ) ⇒ Associativity
M3 : ∃1 ∈ F , ∀α ∈ F , α· 1 = α ⇒ Neutral
M4 : ∀α ̸= 0, ∃α^−1 ∈ F , α· α^−1 = 1 ⇒ Inverse

D : ∀α, β, γ ∈ F , α· (β + γ) = α· β + α· γ ⇒ Distributivity
Vector Space
Vector spaces are also called linear spaces. Let F be a field, and let V be a set that has an
"addition" operation "+": V × V → V . V is called a vector space over F iff:
A0 : ∀x, y ∈ V , ∃x + y ∈ V Closure under Addition
A1 : ∀x, y ∈ V , x + y = y + x Commutativity
A2 : ∀x, y, z ∈ V , (x + y) + z = x + (y + z) Associativity
A3 : ∃0 ∈ V , ∀x ∈ V , x + 0 = x Neutral
A4 : ∀x ∈ V , ∃(−x) ∈ V , x + (−x) = 0 Inverse

SM0 : ∀α ∈ F , ∀x ∈ V , ∃α· x ∈ V Closure under Scalar Multiplication
SM1 : ∀α, β ∈ F , ∀x ∈ V , (α· β)x = α(β· x) Scalar Associativity
SM2 : ∀α ∈ F , ∀x, y ∈ V , α(x + y) = αx + αy Scalar-Vector Distributivity
SM3 : ∀α, β ∈ F , ∀x ∈ V , (α + β)x = αx + βx Vector-Scalar Distributivity
SM4 : ∀x ∈ V , 1· x = x Neutral

Usually denoted as (X , F ) or (V , F )