M0-1 After Class
Ding Zhao
Associate Professor
College of Engineering
School of Computer Science
m = 1 : Row matrix
n = 1 : Column matrix
m = n : Square matrix
m = n = 1 : Scalar
Example: [0 4; 7 0; 3 1]^T = [0 7 3; 4 0 1]
if A = A^T : A is said to be symmetric
if A = −A^T : A is said to be skew-symmetric
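A quick NumPy sketch of the transpose and the two symmetry checks (NumPy and the helper names are our choice here, not part of the notes):

```python
import numpy as np

# The 3x2 matrix from the transpose example above
A = np.array([[0, 4],
              [7, 0],
              [3, 1]])
print(A.T)  # the 2x3 transpose [[0, 7, 3], [4, 0, 1]]

# Symmetry checks (helper names are ours)
def is_symmetric(M):
    return np.array_equal(M, M.T)

def is_skew_symmetric(M):
    return np.array_equal(M, -M.T)

S = np.array([[1, 2], [2, 3]])    # S = S^T
K = np.array([[0, 5], [-5, 0]])   # K = -K^T
```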
Conjugate matrix
The conjugate of A, written Ā, is the matrix formed by replacing every element in A by its
complex conjugate. Thus Ā = [āij].
If all elements of A are real, then Ā = A
If all elements are purely imaginary, then Ā = −A
Associate matrix
The associate matrix of A is the conjugate transpose of A. The order of these two operations
is immaterial.
Ā^T = A ⇒ A: Hermitian
Ā^T = −A ⇒ A: Skew-Hermitian
For real matrices, symmetric and Hermitian mean the same.
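The conjugate and conjugate-transpose operations can be sketched in NumPy (an illustration, not from the notes):

```python
import numpy as np

A = np.array([[1 + 2j, 3 + 0j],
              [0 - 1j, 4 + 0j]])
A_bar = A.conj()       # conjugate matrix: elementwise complex conjugate
A_assoc = A.conj().T   # associate matrix: conjugate transpose
A_assoc2 = A.T.conj()  # order of the two operations is immaterial

# A Hermitian example: equal to its own conjugate transpose
H = np.array([[2 + 0j, 1 - 1j],
              [1 + 1j, 3 + 0j]])
```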
Matrix addition and subtraction are performed on an element-by-element basis. That is, if
A = [aij ] and B = [bij ] are both m × n matrices, then A + B = C and A − B = D indicate
that the matrices C = [cij ] and D = [dij ] are also m × n matrices whose elements are given
by cij = aij + bij and dij = aij − bij for i = 1, 2, . . . , m and j = 1, 2, . . . , n.
Example: [0 4; 7 0; 3 1] + [1 2; 2 3; 0 4] = [1 6; 9 3; 3 5]
Properties:
A + B = B + A Commutative
(A + B) + C = A + (B + C) Associative
(A + B)T = AT + BT
Properties:
(α + β)A = αA + βA
(αβ)A = (α)(βA)
α(A + B) = αA + αB
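The addition example and the properties above can be checked numerically (a NumPy sketch; variable names are ours):

```python
import numpy as np

A = np.array([[0, 4], [7, 0], [3, 1]])
B = np.array([[1, 2], [2, 3], [0, 4]])

C = A + B   # elementwise sum, as in the example above
print(C)    # [[1 6] [9 3] [3 5]]

# Commutativity, transpose of a sum, and scalar distributivity
alpha, beta = 2, 3
same_sum = np.array_equal(A + B, B + A)
same_trans = np.array_equal((A + B).T, A.T + B.T)
same_dist = np.array_equal((alpha + beta) * A, alpha * A + beta * A)
```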
Consider an m × n matrix A = [aij] and a p × q matrix B = [bij]. This product is only defined
when A has the same number of columns as B has rows, i.e., when n = p. The elements of
C = [cij] are then computed according to cij = Σₖ₌₁ⁿ aik bkj
Example:
[2 3; 4 5] [1 3 5; 2 4 8] = [2(1)+3(2) 2(3)+3(4) 2(5)+3(8); 4(1)+5(2) 4(3)+5(4) 4(5)+5(8)] = [8 18 34; 14 32 60]
Properties:
(AB)C = A(BC) = ABC
α(AB) = (αA)B = A(αB), where α is a scalar
A(B + C) = AB + AC, (A + B)C = AC + BC
(AB)T = BT AT
AB ≠ BA in general Not commutative; this makes vectors/matrices different from scalars
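The multiplication example and two of the properties, sketched in NumPy (our illustration, not from the notes):

```python
import numpy as np

A = np.array([[2, 3], [4, 5]])        # 2 x 2
B = np.array([[1, 3, 5], [2, 4, 8]])  # 2 x 3, so n = p = 2 and AB is 2 x 3
C = A @ B
print(C)  # [[ 8 18 34] [14 32 60]]

# (AB)^T = B^T A^T
transpose_rule = np.array_equal((A @ B).T, B.T @ A.T)

# Non-commutativity: even for square matrices, AB and BA generally differ
P = np.array([[0, 1], [0, 0]])
Q = np.array([[0, 0], [1, 0]])
print(np.array_equal(P @ Q, Q @ P))  # False
```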
If, for a square matrix A, there exists a matrix B such that
BA = AB = I
then B is called the inverse of A and is denoted as B = A−1 . For the inverse to exist, A must
have a nonzero determinant, i.e., A must be non-singular. When this is true, A has a unique
inverse given by
A−1 = C^T / |A|
where C is the matrix formed by the cofactors Cij . The matrix CT is called the adjoint
matrix, Adj(A) . Thus the inverse of a nonsingular matrix is
A−1 = Adj(A)/|A|
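The cofactor formula can be implemented directly for small matrices. This is a sketch of A−1 = Adj(A)/|A| (the function name is ours; in practice one would call np.linalg.inv or an LU solver instead):

```python
import numpy as np

def adjugate_inverse(A):
    """Inverse via A^{-1} = Adj(A)/|A| (cofactor formula; illustrative only)."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for p in range(n):
        for q in range(n):
            # minor M_pq: delete row p and column q, take the determinant
            minor = np.delete(np.delete(A, p, axis=0), q, axis=1)
            C[p, q] = (-1) ** (p + q) * np.linalg.det(minor)
    return C.T / np.linalg.det(A)   # C^T is the adjoint matrix Adj(A)

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
print(adjugate_inverse(A))   # matches np.linalg.inv(A)
```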
The inner product generalizes the dot product (defined in Euclidean spaces) to vector spaces of
any dimension. An inner product space is a vector space V over a field F equipped with a map
⟨·, ·⟩ : V × V → F
that satisfies three properties: conjugate symmetry (⟨x, y⟩ is the complex conjugate of ⟨y, x⟩),
linearity in the first argument ⟨ax, y⟩ = a⟨x, y⟩, and positive-definiteness (⟨x, x⟩ ≥ 0).
Usually denoted as (X , F , ⟨·, ·⟩)
Inspired from the geometric space, the angle between two vectors is defined as:
∠(x, y) = arccos( x^T y / (√(x^T x) √(y^T y)) )
For x, y ∈ V , we say that x and y are orthogonal if ∠(x, y) = 90°, i.e.,
x^T y = √(x^T x) √(y^T y) cos(90°) = 0
A set of vectors {x1 , . . . , xn } is called orthonormal if
xi^T xj = 0, ∀ i ≠ j
xi^T xi = 1, 1 ⩽ i ⩽ n
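The angle formula and the orthogonality test can be sketched in NumPy (the helper name is ours):

```python
import numpy as np

def angle_deg(x, y):
    # arccos( x^T y / (sqrt(x^T x) sqrt(y^T y)) ), returned in degrees
    c = (x @ y) / (np.sqrt(x @ x) * np.sqrt(y @ y))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

x = np.array([1.0, 0.0])
y = np.array([0.0, 2.0])
print(angle_deg(x, y))  # 90.0, so x and y are orthogonal
print(x @ y)            # 0.0

# {e1, e2} is an orthonormal set
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
```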
For a vector x = [x1 , . . . , xn ]^T :
||x||2 = (Σᵢ₌₁ⁿ |xi|²)^(1/2) = √(x^T x)  Euclidean norm - Distance
||x||p = (Σᵢ₌₁ⁿ |xi|^p)^(1/p)
||x||1 = Σᵢ₌₁ⁿ |xi|
||x||∞ = maxᵢ |xi|
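The common norms above are available directly in NumPy (a quick illustration; NumPy is our choice, not part of the notes):

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])
print(np.linalg.norm(x, 2))       # 5.0  (Euclidean: sqrt(9 + 16))
print(np.linalg.norm(x, 1))       # 7.0  (sum of absolute values)
print(np.linalg.norm(x, np.inf))  # 4.0  (largest absolute entry)
print(np.sqrt(x @ x))             # 5.0  (same as the 2-norm)
```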
You may interpret the norm as the generalized linear-space version of the absolute value. It is an
important concept because it is usually used as a measure of magnitude, which we will use
extensively to describe the behaviors of a system, e.g., stability.
Hilbert Space
Null Matrix
The null matrix 0 is one that has all its elements equal to zero. The null matrix is however,
not unique because the numbers of rows and columns it possesses can be any finite positive
integers. Whenever necessary, a null matrix of size m × n is denoted by 0mn .
A+0=0+A=A
0A = A0 = 0
Note: AB = 0 does not imply that either A or B is a null matrix.
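The note above is worth checking with a concrete pair (our example, not from the notes):

```python
import numpy as np

# AB = 0 even though neither A nor B is the null matrix
A = np.array([[1, 0], [0, 0]])
B = np.array([[0, 0], [0, 1]])
print(A @ B)  # [[0 0] [0 0]]
```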
Identity Matrix
The identity or unit matrix I is a square matrix with ones on its diagonal (the i = j positions)
and zeros everywhere else. When necessary, an n × n unit matrix shall be denoted by In .
If A is m × n, then Im A = A and A In = A
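A quick NumPy check of the two identity products for a non-square A (our sketch):

```python
import numpy as np

A = np.array([[0, 4], [7, 0], [3, 1]])              # m x n = 3 x 2
left = np.array_equal(np.eye(3, dtype=int) @ A, A)  # I_m A = A
right = np.array_equal(A @ np.eye(2, dtype=int), A)  # A I_n = A
print(left, right)  # True True
```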
Definition
Consider a square matrix A
An eigenvector for A is a non-null vector v ̸= 0 for which there exists an eigenvalue λ ∈ R
such that
Av = λv
Some basic properties:
An eigenvector has at most one eigenvalue
If v is an eigenvector, then so is av, ∀ scalar a ̸= 0 but λ could be 0
A normalized eigenvector is defined as v / ||v||2
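The defining relation Av = λv and the scaling property can be verified numerically (a NumPy sketch; the matrix is our example):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
vals, vecs = np.linalg.eig(A)   # columns of vecs are unit-norm eigenvectors
v, lam = vecs[:, 0], vals[0]
print(np.allclose(A @ v, lam * v))              # A v = lambda v
print(np.allclose(A @ (5 * v), lam * (5 * v)))  # a*v is also an eigenvector
print(np.isclose(np.linalg.norm(v), 1.0))       # eig returns normalized vectors
```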
Minors An n × n matrix A contains n² elements aij . Each of these has associated with it a
unique scalar, called a minor Mij . The minor Mpq is the determinant of the (n − 1) × (n − 1)
matrix formed from A by crossing out the p-th row and q-th column.
Cofactors Each element apq of A has a cofactor Cpq , which differs from Mpq at most by a
sign change. Cofactors are sometimes called signed minors for this reason and are given by
Cpq = (−1)p+q Mpq .
Using Laplace expansion with respect to column 2 gives |A| = 4C12 = −20
Properties:
det(αA) = αn det(A)
det(AT ) = det(A)
det(I) = 1
det(AB) = det(A) det(B)
det(A−1 ) = 1/ det(A)
If A is a triangular matrix, det A = ∏ diag(A), the product of its diagonal entries. (What if it is a diagonal matrix?)
Note: that is why |Ep,q (α)| = 1
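The determinant properties above can be spot-checked in NumPy (our examples, not from the notes):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])

prod_rule = np.isclose(np.linalg.det(A @ B),
                       np.linalg.det(A) * np.linalg.det(B))
trans_rule = np.isclose(np.linalg.det(A.T), np.linalg.det(A))
inv_rule = np.isclose(np.linalg.det(np.linalg.inv(A)),
                      1.0 / np.linalg.det(A))

# Triangular matrix: determinant is the product of the diagonal
T = np.array([[2.0, 5.0], [0.0, 3.0]])
tri_rule = np.isclose(np.linalg.det(T), np.prod(np.diag(T)))
print(prod_rule, trans_rule, inv_rule, tri_rule)  # True True True True
```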
How to compute the inverse? We can use Gaussian Elimination. First need to know the three
basic operations on a matrix, called elementary operations:
1 Row switching: The interchange of two rows (or of two columns).
2 Row multiplication: The multiplication of every element in a given row (or column) by a
scalar α.
3 Row addition: The multiplication of the elements of a given row (or column) by a scalar
α, and adding the result to another row (column). The original row (column) is unaltered.
We will mainly use row operation in this course.
Gaussian elimination uses the row reduced form of the augmented matrix to compactly solve a
given system of linear equations (echelon form).
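The three elementary row operations can be combined into a small Gauss-Jordan routine. This is a sketch for small, well-conditioned systems (the function name is ours; it does full pivoting by row swaps only and no degeneracy checks):

```python
import numpy as np

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row echelon form
    using the three elementary row operations."""
    M = aug.astype(float).copy()
    n = M.shape[0]
    for i in range(n):
        # row switching: bring the largest available pivot into position
        pivot = i + np.argmax(np.abs(M[i:, i]))
        M[[i, pivot]] = M[[pivot, i]]
        # row multiplication: scale the pivot row so the pivot is 1
        M[i] /= M[i, i]
        # row addition: eliminate the pivot column from all other rows
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]
    return M

# Solve x + 2y = 5, 3x + 4y = 11  ->  x = 1, y = 2
aug = np.array([[1.0, 2.0, 5.0],
                [3.0, 4.0, 11.0]])
print(gauss_jordan(aug)[:, -1])  # [1. 2.]
```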
D : α, β, γ ∈ F , α· (β + γ) = α· β + α· γ ⇒ Distributivity
Ding Zhao (CMU) M0-1:Linear Algebra 101 20 / 21
Vector Space
Vector spaces → Linear spaces
Let F be a field and let V be a set that has an “addition” operation “+”: V × V → V . V is called a vector space over F iff:
A0 : ∀x, y ∈ V , ∃x + y ∈ V Closure under Addition
A1 : ∀x, y ∈ V , x + y = y + x Commutativity
A2 : ∀x, y, z ∈ V , (x + y) + z = x + (y + z) Associativity
A3 : ∅ ∈ V , ∀x ∈ V , x + ∅ = x Neutral
A4 : ∀x ∈ V , ∃(−x) ∈ V , x + (−x) = ∅ Inverse
Usually denoted as (X , F ) or (V , F )