A21 NYC Notes
LINEAR ALGEBRA
AND
VECTOR GEOMETRY
Hadi Bigdely
Riccardo Catalano
Véronique Godin
Mariah Hamel
Christopher Turner
Contents
1 Vectors
1.1 Basic Vectors
1.1.1 Introduction to Vectors
1.1.2 Algebraic Vectors
1.1.3 Operations on Vectors
1.1.4 Parallel Vectors
1.1.5 Length of a Vector
1.1.6 Geometry Using Vectors
1.2 Dot Product
1.2.1 Dot Product and Angles Between Vectors
1.2.2 Properties of the Dot Product
1.2.3 Geometric Proof Using the Dot Product
1.3 Projections
1.4 Cross Product
1.4.1 Definition of the Cross Product
1.4.2 Direction of the Cross Product
1.4.3 Length of the Cross Product
1.4.4 Properties of the Cross Product
1.5 Triple Scalar Product
1.5.1 Definition of the Triple Scalar Product and its Algebraic Properties
1.5.2 Geometric Applications of the Triple Scalar Product
2 Lines & Planes in R3
2.1 Lines in R3
2.1.1 Review of Lines in R2
2.1.2 Generalizing from R2 to R3
2.2 Planes in R3
2.2.1 General Form Equation of the Plane
2.2.2 Vector Form Equation of the Plane
2.3 Relative Positions of Lines & Planes in R3
2.3.1 Relative Position of Two Lines
2.3.2 Relative Position of a Line and a Plane
2.3.3 Relative Position of Two Planes
2.4 Shortest Distances and Closest Points
2.4.1 Distance Point-to-Line
2.4.2 Distance Line-to-Line
2.4.3 Distance Point-to-Plane
2.4.4 Distance Line-to-Plane
2.4.5 Distance Plane-to-Plane
2.5 Intersections of Lines & Planes in R3
2.5.1 Point of Intersection of Two Lines
2.5.2 Point of Intersection of a Line and a Plane
2.5.3 Line of Intersection of Two Planes
3 Linear Systems
3.1 Systems of Linear Equations
3.2 Solving Linear Systems
3.2.1 Elementary Row Operations
3.2.2 Echelon Forms
3.2.3 Linear Systems with No Solutions or Infinitely Many Solutions
3.3 The Number of Solutions of a Linear System
3.4 Homogeneous Linear Systems
4 Matrix Algebra
4.1 Notation
4.2 Definition of Operations on Matrices
4.2.1 Definition of the Sum and the Scalar Multiplication of Matrices
4.2.2 Definition of the Product of Matrices
4.3 Properties of Matrix Algebra
4.3.1 Matrix Multiplication is NOT Commutative
4.3.2 Standard Properties of Matrix Operations
4.3.3 Powers of a Square Matrix
4.4 The Transpose of a Matrix
4.4.1 Definition and Properties
4.4.2 Symmetric Matrices and Other Special Types of Square Matrices
5 Invertibility of Matrices
5.1 Introduction
5.2 The Inverse Matrix
5.3 Elementary Matrices
5.4 Finding A−1
5.5 Linear Systems as Matrix Equations
5.6 The Invertibility Theorem
6 Determinants
6.1 Cofactor Expansion and Determinants
6.1.1 Cofactor Expansion
6.1.2 Determinants
6.1.3 A Technique for Evaluating the Determinant of (ONLY) 3 × 3 Matrices
6.2 Evaluating Determinants Using Row Reduction
6.3 Properties of Determinants
6.3.1 Properties of Determinants (Part 1)
6.3.2 Determinant of Elementary Matrices
6.3.3 Properties of Determinants (Part 2)
6.4 Cramer's Rule
6.5 Vector Products and Determinants
These Notes
This set of notes is a collaboration between your teachers at Marianopolis College. Welcome! We’re excited to
have you here.
These notes are in skeleton form. This means that the notes are only partially complete. Your teacher will
discuss with you how you will fill in these notes.
If you spot an error or a typo in these notes, please first check the current version at
https://drive.google.com/file/d/1nbSIqhMlKUSpsWH-CbSMJXVFX0XqU6tF/view?usp=sharing
to see if the error has been fixed. If it hasn't, please report it to your teacher. Your help will be greatly
appreciated.
Chapter 1
Vectors
Contents
1.1 Basic Vectors
1.1.1 Introduction to Vectors
1.1.2 Algebraic Vectors
1.1.3 Operations on Vectors
1.1.4 Parallel Vectors
1.1.5 Length of a Vector
1.1.6 Geometry Using Vectors
1.2 Dot Product
1.2.1 Dot Product and Angles Between Vectors
1.2.2 Properties of the Dot Product
1.2.3 Geometric Proof Using the Dot Product
1.3 Projections
1.4 Cross Product
1.4.1 Definition of the Cross Product
1.4.2 Direction of the Cross Product
1.4.3 Length of the Cross Product
1.4.4 Properties of the Cross Product
1.5 Triple Scalar Product
1.5.1 Definition of the Triple Scalar Product and its Algebraic Properties
1.5.2 Geometric Applications of the Triple Scalar Product
[Figure: the vector ~v drawn on a coordinate grid.]
You might have seen vectors before in physics, representing force, displacement, velocity or acceleration. Many of our definitions will come from our intuition of displacement, and in two dimensions we can think of ~v as ⟨∆x, ∆y⟩.
Example 1.1.2. Consider the following two vectors and think of them as displacement.
[Figure: the vectors ~v and w~ drawn on a grid, followed by two blank grids for sketching.]
3. n is called the of ~v
[Figure: two blank coordinate grids for sketching.]
(c) (3, 2, 4)
(d) (−3, 5, 4)
[Figure: two 3D coordinate systems for plotting.]
−~v =
Geometrically, we are creating a vector with the same length as ~v , but in the opposite direction.
k~v =
Geometrically, we are stretching the vector (k > 1), shrinking the vector (0 < k < 1), stretching its
inverse (k < −1) or shrinking its inverse (−1 < k < 0).
~v + w~ =
Geometrically, we can picture the sum either by placing the vectors “tip to tail” or by placing them “tail
to tail” and building a parallelogram.
[Figures: (a) Tip to tail. (b) Parallelogram rule.]
~v − w~ =
[Figure: ~v, w~ and the difference ~v − w~.]
Example 1.1.4. (a) ⟨1, 2, 3⟩ + 4⟨−8, 5, 6⟩ =
(b) Assume that we are given the following two vectors ~v and w~. Sketch the following vectors.
[Figure: the vectors ~v and w~.]
(a) ~v + 2w~   (b) w~ − 3~v
3. Additive identity:
~0 + ~u = ~u + ~0 = ~u
4. Existence of additive inverse: For any ~u, there exists a −~u such that
~u + (−~u) = ~0
All these operations are done component by component (or componentwise), and so these properties are inherited from the corresponding properties of real numbers.
2. For any two points we let −−→PQ be the vector from P to Q.
[Figure: the points P and Q.]
4. The vector ~v = −−→OP drawn from O to P is said to be in standard position.
Example 1.1.5. (a) Simplify −−→AB + −−→BC.
(b) Write −−→BA in terms of −−→AB.
−−→OP =
−−→OQ =
−−→PQ =
What do you notice about the vector −−→OP and the point P ?
Example 1.1.6. (a) Assume P (1, 3, 2, −5) and Q(2, −5, 3, 10). Find −−→PQ.
(b) Assume that P (1, 0, 5) and −−→PQ = ⟨−3, 5, −13⟩. Find the coordinates for Q.
Takeaway 1
~0 is parallel to
(a) ~v = ⟨−2, 3⟩
[Figure: ~v sketched on a coordinate grid.]
(b) w~ = ⟨4, −5, 6⟩
[Figure: 3D coordinate system for sketching w~.]
1. ‖k~v‖ =
2. If ~v is not the zero vector then (1/‖~v‖) ~v is
Proof.
(b) Find the point B three units away from P between P and Q.
[Figure: quadrilateral ABCD.]
ABCD is a parallelogram whenever −−→AB = −−→DC and −−→AD = −−→BC.
Example 1.1.12. Consider the quadrilateral with vertices P (1, 2), Q(6, 4), R(2, 5) and S(5, 1).
[Figure: the vectors ~a, ~b, ~v and w~.]
Example 1.1.14 (Similar triangles). Let ABC be a triangle. Let M and N be the midpoints of AB and BC,
respectively. Show that the segment M N is parallel to the segment AC but half as long.
Proposition 1.2.1. Let ~v and w~ be vectors in R2 or R3. Let θ be the angle between them. Then
~v · w~ =
[Figure: the vectors ~v and w~ with the angle θ between them.]
We will prove this proposition on page 25, after studying the properties of ~v · w~.
Extra. Proposition 1.2.1 is stated only in R2 and R3 because we do not have any intuition for angles in higher dimensions. However, we can use the formula of Takeaway 3 to define the angle between vectors in any dimension. A theorem, called the Cauchy-Schwarz inequality, tells us that for non-zero vectors ~v and w~,

−1 ≤ (~v · w~) / (‖~v‖ ‖w~‖) ≤ 1

and so

θ = arccos( (~v · w~) / (‖~v‖ ‖w~‖) )

is always defined.
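As a quick supplementary illustration (our addition, not part of the skeleton notes), here is a minimal NumPy sketch of this angle formula; the helper name angle_between is ours.

```python
import numpy as np

def angle_between(v, w):
    """Angle between two nonzero vectors in R^n, via arccos(v.w / (|v||w|))."""
    v, w = np.asarray(v, float), np.asarray(w, float)
    cos_theta = v @ w / (np.linalg.norm(v) * np.linalg.norm(w))
    # Clip to [-1, 1] to guard against tiny floating-point overshoot;
    # Cauchy-Schwarz guarantees the exact value lies in this interval.
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

print(np.degrees(angle_between([1, 0], [1, 1])))  # 45.0
```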
case #1. ~v · w~ > 0
case #2. ~v · w~ < 0
case #3. ~v · w~ = 0
~v · w~ =
We then write ~v ⊥ w~.
1. Commutativity ~u · ~v = ~v · ~u
2. Distributivity ~u · (~v + w~) = ~u · ~v + ~u · w~
Proof.
(6~v + w~) · (2~v − 5w~)
Takeaway 4
(a~v + bw~) · (c~x + d~y) =
Example 1.2.5. Assume that ~v and w~ are unit vectors and that they form an angle of 45°. Using the dot product, find the length of ~v + 2w~.
Proof that ~v · w~ = ‖~v‖ ‖w~‖ cos(θ).
1.3 Projections
Goal
In Definition 1.3.1, we will call ~a the projection of ~v onto d~, and ~b the orthogonal component of ~v perpendicular to d~. But before we can make that definition precise, we need the following theorem, which states that these vectors ~a and ~b exist and are unique for any pair of vectors ~v and d~.
Theorem 1.3.1. Take any vector ~v in Rn and non-zero vector d~.
• ~a is parallel to d~
• ~b is perpendicular to d~
• ~v = ~a + ~b
2. In fact
~a = ~b =
We will derive these formulas for ~a and ~b on page 29, but we can give them a name right now.
projd~(~v ) =
orthd~(~v ) =
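The formulas above are left blank to be filled in class. As a supplementary sketch (our addition), the code below uses the standard formulas proj_d(~v) = (~v · d~ / d~ · d~) d~ and orth_d(~v) = ~v − proj_d(~v), to be compared with what you derive on page 29.

```python
import numpy as np

def proj(v, d):
    """Projection of v onto a nonzero vector d: (v.d / d.d) d (standard formula)."""
    v, d = np.asarray(v, float), np.asarray(d, float)
    return (v @ d) / (d @ d) * d

def orth(v, d):
    """Component of v perpendicular to d: v - proj_d(v)."""
    return np.asarray(v, float) - proj(v, d)

# The vectors of Example 1.3.2 below: a + b = v, with a // d and b perpendicular to d.
v, d = np.array([-2.0, 6.0]), np.array([2.0, -1.0])
a, b = proj(v, d), orth(v, d)
print(a, b, np.allclose(a + b, v), np.isclose(b @ d, 0.0))
```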
Example 1.3.1. For each of these pairs of vectors ~v and d~, sketch proj d~(~v) and orth d~(~v) on the same picture so that proj d~(~v), orth d~(~v) and ~v form a right triangle.
[Figure: three pairs of vectors ~v and d~.]
Example 1.3.2. Let ~v = ⟨−2, 6⟩ and w~ = ⟨2, −1⟩. Write ~v as ~a + ~b where ~a is parallel to w~ and ~b is orthogonal to w~. Give a precise sketch of all involved vectors.
[Grid for sketching.]
Example 1.3.3. Find the coordinates of the point R in the following picture.
[Figure: the point P (1, 3, 2) above the line L through A(−2, 3, 2) and B(6, −1, 6), with R on L.]
Remark. In Example 1.3.3, R is the point on the line L which is closest to P . In the next section of the course,
we will be using projections to discuss distances and closest points on lines and on planes.
(a) Evaluate
~v × w~ =
Takeaway 5
Then ~v × w~ is            to both ~v and w~.
Proof. ~v is perpendicular to ~v × w~.
Example 1.4.2. Find all unit vectors perpendicular to both ~a = ⟨1, 2, 5⟩ and ~b = ⟨3, 0, 1⟩.
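A NumPy check of this example (our addition, not part of the notes): the two unit vectors are ±(~a × ~b)/‖~a × ~b‖.

```python
import numpy as np

a, b = np.array([1.0, 2.0, 5.0]), np.array([3.0, 0.0, 1.0])
n = np.cross(a, b)            # perpendicular to both a and b
u = n / np.linalg.norm(n)     # normalize to unit length
print(u, -u)                  # the two unit vectors
print(np.isclose(u @ a, 0.0), np.isclose(u @ b, 0.0))  # both dot products vanish
```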
Step 1. Put the palm and the fingers of your right hand in the
direction of the first vector ~u.
Example 1.4.3. Let ı̂ = ⟨1, 0, 0⟩, ̂ = ⟨0, 1, 0⟩ and k̂ = ⟨0, 0, 1⟩. Use the right-hand rule to sketch both ı̂ × ̂ and ̂ × ı̂. Check your work using algebra.
Takeaway 6
The vectors ~v × w~ and w~ × ~v are in opposite directions but they have the same length. So
w~ × ~v =
1. ‖~v × w~‖ =
2. ‖~v × w~‖ =
Example 1.4.4. Find the area of the triangle with vertices A(1, 2, 0), B(2, 7, 1) and C(4, 5, 2).
To prove Proposition 1.4.2, we will use Lagrange's identity, which states that for any ~v and w~ ∈ R3,

‖~v × w~‖² = ‖~v‖² ‖w~‖² − (~v · w~)²    (1.1)

This can be proven by expressing both sides in terms of the components of ~v and w~.
Corollary. Let ~v and w~ be in R3.
~v × w~ = ~0 ⇐⇒
1. Anticommutativity: w~ × ~v =
2. Distributivity on both sides:
~u × (~v + w~) =
(~u + ~v) × w~ =
4. Multiplication by zero: ~v × ~0 = ~0 × ~v =
5. Parallel vectors: ~v × w~ = ~0 ⇐⇒ ~v // w~
6. Perpendicularity: ~v × w~ is perpendicular to both ~v and w~
Note. One property that you might expect is NOT true for the cross product. The cross product is NOT
associative. In general,
~u × (~v × w~) ≠ (~u × ~v) × w~
as you will prove on the exercise sheet.
Example 1.4.6. Simplify ~a · [(~b + ~c) × (~c + ~a)]
Takeaway 7
We can expand cross products, but we have to be careful not to switch the order of the vectors, as ~v × w~ ≠ w~ × ~v. More precisely,
1.5.1 Definition of the Triple Scalar Product and its Algebraic Properties
~u · (~v × w~) = −~v · (~u × w~) = ~v · (w~ × ~u) = · · ·
(a) (3w~) · [(2~v) × ~u]
(b) (2~v + w~) · [(~u + ~v) × w~]
[Figure: a parallelepiped with edge vectors ~u, ~v and w~.]
with adjacent edges ~u, ~v and w~ is   Volume =
Remark. When computing the volume of a parallelepiped, any order of the vectors ~u, ~v and w~ in |~u · (~v × w~)| will give the same result because of Takeaway 8.
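A minimal NumPy sketch of this volume formula (our addition; the edge vectors are illustrative values, not from the notes):

```python
import numpy as np

def parallelepiped_volume(u, v, w):
    """Volume = |u . (v x w)|, the absolute value of the triple scalar product."""
    return abs(np.dot(u, np.cross(v, w)))

u, v, w = np.array([1., 0., 0.]), np.array([1., 2., 0.]), np.array([1., 1., 3.])
# Any ordering of the three edge vectors gives the same volume.
print(parallelepiped_volume(u, v, w), parallelepiped_volume(w, u, v))  # 6.0 6.0
```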
[Figure: a parallelepiped with labeled vertices A(1, 2, 0), B(1, 5, 1), C(−3, 3, 1), D(5, 3, 6), E and F.]
1. ~v1 , ~v2 , · · · , ~vk are called colinear if they all lie on the same
2. ~v1 , ~v2 , · · · , ~vk are called coplanar if they all lie on the same
Proposition 1.5.2. 1. ~v1 , ~v2 , · · · , ~vk are colinear if and only if they are (pairwise)
Proof of 2b.
[Figures: several configurations of points A1, . . . , A4 and B1, . . . , B4 illustrating colinear and coplanar sets of points.]
Chapter 2
Lines & Planes in R3
Contents
2.1 Lines in R3
2.1.1 Review of Lines in R2
2.1.2 Generalizing from R2 to R3
2.2 Planes in R3
2.2.1 General Form Equation of the Plane
2.2.2 Vector Form Equation of the Plane
2.3 Relative Positions of Lines & Planes in R3
2.3.1 Relative Position of Two Lines
2.3.2 Relative Position of a Line and a Plane
2.3.3 Relative Position of Two Planes
2.4 Shortest Distances and Closest Points
2.4.1 Distance Point-to-Line
2.4.2 Distance Line-to-Line
2.4.3 Distance Point-to-Plane
2.4.4 Distance Line-to-Plane
2.4.5 Distance Plane-to-Plane
2.5 Intersections of Lines & Planes in R3
2.5.1 Point of Intersection of Two Lines
2.5.2 Point of Intersection of a Line and a Plane
2.5.3 Line of Intersection of Two Planes
2.1 Lines in R3
Goal
To describe a line in three-dimensional space using a vector equation, parametric equations, or symmetric
equations.
To achieve this goal, we’ll first revisit what we know about lines in two-dimensional space.
[Figure: a line through (x1, y1) and (x2, y2), with y-intercept (0, b), rise ∆y and run ∆x.]
m =
Note: To give the equation of a line, it is sufficient to have two things:
Example 2.1.1. Consider the equation x+y = 0. All the points that satisfy this equation form what geometric
object...
(a) ... in R2 ? Name and draw the object. (b) ... in R3 ? Name and draw the object.
[Figure: a 2D coordinate grid and a 3D coordinate system for drawing the objects.]
Takeaway
The equation ax + by + c = 0 does NOT yield a line in R3; it yields a plane. (We will soon see why ax + by + cz + d = 0 is the general equation of a plane in R3.)
(2) We still need a point on the line, but this point, e.g. (x1, y1), can be expressed in terms of its position vector, e.g. ⟨x1, y1⟩.
[Figure: the line L in the plane, with the position vector −−→OP of a point P on L.]
Remark. We can describe the position vector of every point on L as the sum of the position vector of P and a scalar multiple of the direction vector d~.
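A small Python sketch of this remark (our addition; the point p and direction d are illustrative values only):

```python
import numpy as np

# Vector equation of a line: r(t) = p + t*d (point p, direction vector d).
p = np.array([3.0, 1.0, -2.0])   # a point on the line (assumed for illustration)
d = np.array([2.0, -1.0, 0.0])   # a direction vector (assumed for illustration)
for t in (-1.0, 0.0, 1.0, 2.0):
    print(t, p + t * d)          # points on the line as the parameter t varies
```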
(b) the parametric equations for L (c) the symmetric form equations for L
Takeaway
In R2 , a line must hit at least one of the axes. But in R3 , a line might not hit any of the axes at all!!
(g) Give a vector equation for the line L2 that is parallel to L and passes through the point C(π, e, −1).
2.2 Planes in R3
Goal
[Figure: a plane in R3 with normal vector ~n, a point P on it, direction vectors d~1 and d~2, and position vector −−→OP.]
A plane is a two-dimensional, infinite, flat surface.
Notes:
• the general form equation of a plane is also called the general equation, the standard equation, the normal
equation, and the implicit equation of a plane.
Example 2.2.2. Give a general form equation of the plane that is perpendicular to the line
L : x = 2 + t, y = 4, z = 4 − 5t;  t ∈ R,
and passes through the point (1, −3, 4).
Example 2.2.3. Find a general form equation of the plane which is parallel to the xz−plane and passes
through the point (2, 5, −6).
[Figure: a plane through the point P with direction vector d~1.]
Remark. Notice the similarity to the line equation ⟨x, y, z⟩ = ⟨x0, y0, z0⟩ + t⟨d1, d2, d3⟩, t ∈ R
Example 2.2.4. Let π : ⟨x, y, z⟩ = ⟨2, 3, −1⟩ + s⟨1, −1, 3⟩ + t⟨2, 0, 1⟩; s, t ∈ R. Give a general form equation for π.
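A NumPy sketch for this example (our addition, not part of the notes): a normal to the plane is the cross product of its two direction vectors.

```python
import numpy as np

p0 = np.array([2.0, 3.0, -1.0])                          # point on the plane
d1, d2 = np.array([1.0, -1.0, 3.0]), np.array([2.0, 0.0, 1.0])
n = np.cross(d1, d2)          # a normal vector <a, b, c>
d = -n @ p0                   # chosen so a*x + b*y + c*z + d = 0 holds at p0
print(n, d)                   # coefficients of a general form equation
```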
To determine the relative positions between lines and planes in three-dimensional space given their
algebraic descriptions (equations).
We are going to use (mostly) vector methods to determine the relative position of lines and planes. It
should be noted that one can often use the number of solutions in the intersection of these geometric objects
to arrive at the same conclusions.
[Figures: two lines with points P1, P2 and direction vectors d~1, d~2 in four configurations: parallel distinct (or just "parallel"); parallel coincident (or just "coincident"); intersecting at a single point (or just "intersecting"); skew (non-parallel, non-intersecting).]
Example 2.3.1. Determine the relationship between the lines L1 : ⟨x, y, z⟩ = ⟨3, 1, −2⟩ + t⟨2, −1, 0⟩, t ∈ R, and L2 : 3 − x = (z − 1)/2; y = 2.
Is d~1 // d~2 ?
• yes: Is −−−→P1P2 // d~1 // d~2 ?
  yes: Lines are            .  no: Lines are            .
• no: Is −−−→P1P2 · (d~1 × d~2) = 0 ?
  yes: Lines are            .  no: Lines are            .
[Figures: a line with direction d~ and a plane π with normal ~n in three configurations: parallel disjoint (or just "parallel"); parallel, line contained in plane (or just "line contained in plane"); intersecting at a single point (or just "intersecting").]
Example 2.3.2. Determine the relationship between the line L : x = 1 + t, y = 3 + t, z = 2 + 2t; t ∈ R, and the plane π : x + y − z = 2.
Is d~ · ~n = 0 ?
• yes: Is the point P of L on π ?
  yes:            no:            
• no:            
[Figures: two planes with normals ~n1, ~n2 in three configurations: parallel distinct (or just "parallel"); parallel coincident (or just "coincident"); intersecting at a line (or just "intersecting").]
Example 2.3.3. Determine the relationship between the planes π1 : −2x + y − 6z = 4 and π2 : x − (1/2)y + 3z + 2 = 0.
Is ~n1 // ~n2 ?
• yes: Is a point of π1 on π2 ?
  yes:            no:            
• no:            
1) To determine the shortest distance between geometric objects in three-dimensional space, and
2) To determine the closest point on a line or plane to a given point which is not on the line or plane.
When we say the “distance” between two objects, we will always mean the minimum distance between
them. In Rn , this will always be a perpendicular distance.
Distance Formula:
[Figure: a point A and a line L through P with direction d~.]
Closest Point:
[Figure: a point A, a line L through P with direction d~, and the closest point Q on L.]
Example 2.4.1. Consider the point A(2, 1, 1) and the line L : x = −1 + t, y = 8 + 2t, z = 8 + t; t ∈ R.
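A supplementary sketch for this example (our addition): one standard cross-product formula gives dist(A, L) = ‖−−→PA × d~‖ / ‖d~‖, where P is any point on L.

```python
import numpy as np

def point_line_distance(A, P, d):
    """Distance from point A to the line through P with direction d:
    || PA x d || / || d || (a standard cross-product formula)."""
    PA = np.asarray(A, float) - np.asarray(P, float)
    return np.linalg.norm(np.cross(PA, d)) / np.linalg.norm(d)

# Example 2.4.1: A(2, 1, 1), line through P(-1, 8, 8) with direction <1, 2, 1>.
print(point_line_distance([2, 1, 1], [-1, 8, 8], [1, 2, 1]))
```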
2. If the lines are parallel, choose a point P on L1 and do point-to-line with P and L2 .
[Figures: parallel lines L1 and L2 with a point P on L1; skew lines with point P1 on L1 and direction d~1.]
dist(L1, L2) =
Note. The distance between skew lines may also be found by dist(L1, L2) =            .
Example 2.4.2. Find the shortest distance between the lines L1 : x = 4, y = −1 + t, z = 3 − 2t; t ∈ R, and L2 : (5 − x)/3 = 8 − y = (z − 17)/4.
Distance Formula:
[Figure: a point A above the plane π with normal ~n.]
Closest Point:
[Figure: a point A, the plane π with normal ~n, and the closest point Q on π.]
Example 2.4.3. Consider the point A(6, −10, 23) and the plane π : x − 4y + 8z + 13 = 0.
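A quick check for this example (our addition): the standard formula dist(A, π) = |ax0 + by0 + cz0 + d| / ‖⟨a, b, c⟩‖.

```python
import numpy as np

def point_plane_distance(A, n, d):
    """Distance from point A to the plane n . x + d = 0:
    |n . A + d| / ||n|| (a standard formula)."""
    n = np.asarray(n, float)
    return abs(n @ np.asarray(A, float) + d) / np.linalg.norm(n)

# Example 2.4.3: A(6, -10, 23) and the plane x - 4y + 8z + 13 = 0.
print(point_plane_distance([6, -10, 23], [1, -4, 8], 13))  # 27.0
```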
2. If the line is parallel to the plane, choose a point on the line and do point-to-plane.
[Figure: a line L parallel to the plane π, with a point P on L.]
dist(L, π) =
2. If the planes are parallel, choose a point P on π2 and do point-to-plane with P and π1
[Figure: parallel planes π1 and π2 with a point P on π2.]
dist(π2, π1) =
To determine the point of intersection of two lines or of a line and a plane, and to determine the line of
intersection of two planes in R3 .
As is often the case, there is more than one approach to finding intersections of lines and planes. The
section below outlines some possible solutions.
To find the point of intersection of two lines we may use the following procedure:
4. Plug the solution into either line to find the point of intersection.
Example 2.5.1. Use vector methods to show that the lines intersect at a point. Then find their point of
intersection.
L1 : ⟨x, y, z⟩ = ⟨7, −3, −2⟩ + t⟨2, −1, −1⟩; t ∈ R      L2 : x − 1 = y − 6 = −z − 1
To find the point of intersection of a line and a plane we may use the following procedure:
1. Write the line in parametric equation form and the plane in general equation form.
2. Plug the parametric equations of the line into the equation of the plane and solve for the parameter, t.
[Figure: a line L crossing the plane π at a point.]
Since we know that two non-parallel (non-coincident) planes in R3 intersect at a line, we can find L,
their line of intersection, by:
Example 2.5.3. Consider the planes
π1 : x + y − z = 2
π2 : x + y + z = 1
Use vector methods to show that the planes intersect at a line. Then find their line of intersection using:
(i) Method 1
(ii) Method 2
Recall the planes:
π1 : x + y − z = 2
π2 : x + y + z = 1
Recall Method 2:
Note. In the next unit on Linear Systems, we will learn to generalize the algebraic approach. This will allow us
to deal with intersections both in higher dimensions (more variables), and of more objects (more equations).
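A NumPy sketch for Example 2.5.3 (our addition, not part of the notes): the direction of the line of intersection is ~n1 × ~n2, and a point on the line comes from fixing one coordinate and solving the two plane equations.

```python
import numpy as np

n1, n2 = np.array([1.0, 1.0, -1.0]), np.array([1.0, 1.0, 1.0])
d = np.cross(n1, n2)            # direction of the line of intersection
# With z = 0: x + y = 2 and x + y = 1 have no common solution, so set y = 0 instead:
# x - z = 2 and x + z = 1 give x = 3/2, z = -1/2.
p = np.array([1.5, 0.0, -0.5])  # a point lying on both planes
print(d, p)                     # line: <x, y, z> = p + t*d
```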
Chapter 3
Linear Systems
Contents
3.1 Systems of Linear Equations
3.2 Solving Linear Systems
3.2.1 Elementary Row Operations
3.2.2 Echelon Forms
3.2.3 Linear Systems with No Solutions or Infinitely Many Solutions
3.3 The Number of Solutions of a Linear System
3.4 Homogeneous Linear Systems
Goal
a1 x1 + a2 x2 + · · · + an xn = b
R2
R3
R4
Remark. In a linear equation, the power on each variable must be 1, and we can't multiply any variables together.
Remark. The coefficient aij refers to the coefficient of the variable in the equation.
3x + 2y = 1
x − y = 2
Geometric meaning: To find the point of intersection between the two lines.
Algebraic meaning: To find all point(s) (x, y) which satisfy both equations.
3x + 2y = 1
x−y =2
Goal
To develop a systematic approach to solve linear systems with many variables and many equations.
=
=
=
=
=
=
=
=
Observe:
I. II. III.
More vocabulary:
the matrix
Aaug = [ a11 a12 ··· a1n | b1 ]
       [ a21 a22 ··· a2n | b2 ]
       [  ⋮   ⋮        ⋮ |  ⋮ ]
       [ am1 am2 ··· amn | bm ]
The matrix
A = [ a11 a12 ··· a1n ]
    [ a21 a22 ··· a2n ]
    [  ⋮   ⋮        ⋮ ]
    [ am1 am2 ··· amn ]
2x − 3y + 4z = 1
x − y − z = −1
−x + 2y − z = −2
(b) Write down the linear system that corresponds to the augmented matrix
[ 3 2  0 1 | 5 ]
[ 2 1 −1 π | 4 ]
[ 0 0  4 2 | 1 ]
2x − 3y + 4z = 1
x − y − z = −1
−x + 2y − z = −2
Takeaway 9
Elementary row operations performed on an augmented matrix do not change the solution set of the
corresponding linear system.
or .
1. all rows of zeros are at the bottom (provided there are rows of zeros).
2. in each nonzero row, the first nonzero entry (called the            ) is in a column to the left of any leading entries below.
[ ⋆ ⋆ ⋆ ⋆ ⋆ ⋆ ]
[ 0 ⋆ ⋆ ⋆ ⋆ ⋆ ]      where ⋆ =
[ 0 0 0 ⋆ ⋆ ⋆ ]
[ 0 0 0 0 0 ⋆ ]
(4) Each column containing a leading 1 has zeros above and below.
[ 1 0 ⋆ ⋆ 0 0 ⋆ ]
[ 0 1 ⋆ ⋆ 0 0 ⋆ ]      where ⋆ =
[ 0 0 0 0 1 0 ⋆ ]
[ 0 0 0 0 0 1 ⋆ ]
Example 3.2.4. Determine if each matrix is in row echelon form, reduced row echelon form or neither form.
[Note: Any matrix, augmented or not, can be put into REF or RREF.]
" # " # " #
1 2 1 1 2 1 1 0 1
0 2 0 0 1 0 0 1 0
−1 4 1 −1 4 1 1 0 1
0 0 0 0 1 0 0 1 0
0 1 0 0 0 0 0 0 0
0 0 0 1
1 4 0 1 1 4 0 0
0 0 3 1
0 0 1 0 0 0 1 0
0 2 1 1
0 0 0 1 0 0 0 1
4 3 2 −5
Remark. Generally, solving a linear system by Gaussian elimination and back substitution means following
a specific algorithm until the augmented matrix reaches row echelon form. This allows you to solve for the
‘last’ variable, which you can substitute to solve for the other variables.
Solving a linear system using Gauss-Jordan elimination means using the algorithm to reduce the aug-
mented matrix to reduced row echelon form.
Using the three elementary row operations, we can use the following procedure to reach row echelon
form or reduced row echelon form.
Step 1: Find the leftmost nonzero column. If the entry at the top of this column is zero, then use a row swap to place a nonzero entry in this position.
Step 2: Scale the first row so the leftmost nonzero entry is a 1, called a leading 1 .
Step 3: Create zeros in all positions beneath the leading 1 by adding or subtracting suitable multiples of
the row containing the leading 1.
Step 4: If the matrix is in row echelon form, then go to step 5. If it is not in REF, then cover (or ignore)
the first row and repeat steps 1, 2 and 3 with the remaining matrix.
Step 5: Using the rightmost leading 1, create zeros above it by adding or subtracting suitable multiples of
the row containing this leading 1.
Step 6: If the matrix is now in reduced row echelon form, you’re done! If it is not in RREF, then cover
(or ignore) the row that was just used and repeat step 5 on the remaining matrix.
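A Python sketch of this procedure (our addition, not part of the notes; the helper name rref is ours). One small deviation: it always swaps in the largest available pivot (partial pivoting) for numerical stability, rather than swapping only when the top entry is zero.

```python
import numpy as np

def rref(M, tol=1e-12):
    """Reduce a matrix to reduced row echelon form by Gauss-Jordan elimination."""
    A = np.array(M, dtype=float)
    rows, cols = A.shape
    r = 0                                         # current pivot row
    for c in range(cols):                         # scan columns left to right
        p = r + np.argmax(np.abs(A[r:, c]))       # choose a usable pivot row
        if abs(A[p, c]) < tol:
            continue                              # no pivot in this column
        A[[r, p]] = A[[p, r]]                     # Step 1: row swap
        A[r] /= A[r, c]                           # Step 2: scale to a leading 1
        for i in range(rows):                     # Steps 3 and 5: clear the column
            if i != r:
                A[i] -= A[i, c] * A[r]
        r += 1
        if r == rows:
            break
    return A

aug = [[2, -3, 4, 1], [1, -1, -1, -1], [-1, 2, -1, -2]]  # the augmented matrix above
print(rref(aug))
```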
Example 3.2.5. This is how the Gauss-Jordan algorithm will look as you proceed through the steps using
elementary row operations.
[A chain of star-pattern matrices related by ∼, showing the shape of the matrix at each stage: a leading 1 is created in the first row, zeros are created below it, the process repeats on the remaining rows until the matrix is in REF, and finally zeros are created above each leading 1, rightmost first, until the matrix is in RREF.]
Definition 3.2.3. (Row equivalent) We say that matrices A and B are if one
can be obtained from the other using a sequence of elementary row operations. We write .
Theorem 3.2.1. The reduced row echelon form of a matrix is unique. We can, therefore, write RREF (A) to
denote the reduced row echelon form of the matrix A.
Each of the linear systems we’ve discussed using elementary row operations has a single solution. There are
two other situations we must discuss.
Example 3.2.7. In R2 , we know that the lines y = 3x + 2 and y = 3x − 1 are parallel because they have the
same slope. Further, we know they don’t intersect since they have different y-intercepts. Let’s see how we can
recognize this in the row reduction process:
y = 3x + 2        ⇐⇒        −3x + y = 2
y = 3x − 1                  −3x + y = −1
Example 3.2.8. We’ve seen that in R3 , the intersection of any two non-parallel planes is a line. We know that
the planes π1 : x + y − z = 2 and π2 : x + y + z = 1 aren’t parallel since their normal vectors aren’t parallel.
We can use row reduction to find an equation for the line of intersection.
Remark. The convention is to call any variable that isn't associated with a leading entry a free variable. We assign a parameter to each free variable.
Example 3.2.9. Describe the solution to each linear system using parameter(s).
" #
1 −1 1
1.
0 0 0
1 0 −1 0 3
2. 0 0 0 1 2
0 0 0 0 0
Example 3.2.10. Solve the linear system
x + y + z = −1
x + 2y + 3z = −4
When a linear system has infinitely many solutions, the                       is the parametric (or vector) equation that describes all possible solutions.
Remark. In the previous example, a particular solution is any point on the line of intersection. Just pick one
by choosing any values you like for each of the parameters!
Example 3.2.11. Solve the following linear system by reducing the augmented matrix to reduced row echelon
form. Give the general solution and three different particular solutions.
5x3 + 10x4 + 15x6 = 5
2x1 + 6x2 + 8x4 + 4x5 + 18x6 = 6
Goal
To determine the number of solutions to a linear system by investigating the number of leading entries
in the REF or RREF of the coefficient matrix and the augmented matrix of the linear system.
has at least one solution. If the system has no solutions, we say that the system is
on the or the
Theorem 3.3.1.                       augmented matrices represent linear systems that have the same solution set and so have the same number of solutions.
Remark. We will see the proof of Theorem 3.3.1 in Section 5.6.
A = [ 1 0  2 0 ]        C = [ 1 2 3 ]
    [ 0 1 −1 0 ]            [ 0 1 2 ]
                            [ 0 1 3 ]

B = [ 1 0 2 0 ]         D = [ 1 2 3 0 5 ]
    [ 0 0 1 2 ]             [ 0 1 7 0 1 ]
    [ 0 0 0 4 ]             [ 0 0 0 1 3 ]
The row echelon form or reduced row echelon form of the augmented matrix Aaug of a linear system
with coefficient matrix A has one of three forms.
I. The last column of the augmented matrix's REF or RREF has a leading entry. For example, you may find a matrix that looks like
[ ⋆ ⋆ ⋆ | ⋆ ]
[ 0 ⋆ ⋆ | ⋆ ]      where
[ 0 0 0 | ⋆ ]
Notice: A linear system has no solutions if and only if the reduced augmented matrix has more
leading entries than the reduced coefficient matrix.
II. Every column of the matrix contains a leading entry, except the last column. For example, you may find a matrix that looks like
[ ⋆ ⋆ ⋆ | ⋆ ]
[ 0 ⋆ ⋆ | ⋆ ]      where
[ 0 0 ⋆ | ⋆ ]
Notice: A linear system has a unique solution if and only if the reduced coefficient matrix has the
same number of leading entries as variables (of the linear system).
III. At least two columns (one of which must be the last column) do not contain leading entries. For example, you may find a matrix that looks like
[ ⋆ ⋆ ⋆ ⋆ | ⋆ ]
[ 0 ⋆ ⋆ ⋆ | ⋆ ]      where
[ 0 0 0 ⋆ | ⋆ ]
Notice: A consistent linear system has infinitely many solutions if and only if there are free
variables.
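A supplementary NumPy sketch (our addition) that classifies a system by comparing the number of leading entries, measured here as matrix ranks; this criterion is often called the Rouché-Capelli theorem, and the helper name classify is ours.

```python
import numpy as np

def classify(A, b):
    """Classify the linear system Ax = b by comparing ranks, mirroring cases I-III."""
    A = np.asarray(A, float)
    aug = np.column_stack([A, b])
    rA, rAug, n = np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug), A.shape[1]
    if rA < rAug:
        return "no solutions"            # case I: leading entry in the last column
    if rA == n:
        return "unique solution"         # case II: leading entry in every variable column
    return "infinitely many solutions"   # case III: free variables remain

print(classify([[1, 1], [1, 1]], [2, 1]))   # parallel lines: no solutions
print(classify([[1, 1], [1, -1]], [2, 0]))  # unique solution
print(classify([[1, 1], [2, 2]], [2, 4]))   # infinitely many solutions
```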
Theorem 3.3.2. Suppose that Aaug is the augmented matrix of a consistent linear system in n variables. Then
Example 3.3.2. For each part, consider the linear system whose augmented matrix reduces to the given
matrix. How many solutions does the linear system have?
(a) [ 3 1 2 | 4 ]
    [ 0 1 3 | 7 ]
    [ 0 0 0 | 2 ]

(b) [ 7  2 | 4 ]
    [ 0 −2 | 3 ]

(c) [ 1 0 0 3 | 1 ]
    [ 0 0 1 2 | 3 ]
    [ 0 0 0 0 | 0 ]

(d) [ 1 3 2 −2 | 1 ]
    [ 0 1 2 −3 | 5 ]
    [ 0 0 1  1 | 6 ]
    [ 0 0 0  1 | 4 ]
    [ 0 0 0  0 | 0 ]
Example 3.3.3. Consider the linear system whose augmented matrix reduces to
[ 1 1   1   |   1    ]
[ 0 1   k   |   1    ]
[ 0 0 k − 1 | k² − k ]
Observe:
have
(i) a unique solution?
(ii) infinitely many solutions?
(iii) no solutions?
Strategy: Inspired by the previous example:
• Use elementary row operations to reduce Aaug to look like row echelon form. [Note: going to RREF will
require too much algebra!]
• Investigate the special values of α where the potential leading entries are equal to zero.
We showed that the augmented matrix of the linear system is row equivalent to
[ 1  −2    3 |  1 ]
[ 0 α + 2 −1 |  1 ]
[ 0   0    α | 2α ]
In each case, describe the intersection geometrically, and draw a picture representing a possible configuration
of the three planes.
(i) α ≠ −2 and α ≠ 0
(ii) α = 0
(iii) α = −2
Goal
Example 3.4.1. Planes that pass through the origin in R3 . Consider the four planes
π1 : 3x − 2y + z = 0
π2 : x + y + 2z = 0
π3 : 6x − y + z = 0
π4 : x + y − z = 0
We notice that satisfies each of these equations and that the four planes are non-parallel.
Therefore, it must be that the planes either intersect at the single point            or their common intersection is            .
Definition 3.4.2. (Trivial solution) Consider the homogeneous linear system in n variables
Takeaway 10
The trivial solution is a solution to every homogeneous linear system. Therefore, every homogeneous
system is consistent and has either
Theorem 3.4.1. If a homogeneous linear system has n unknowns (variables) and the reduced row echelon form
of its coefficient matrix has r leading entries (rank r), then the system must have
free variables.
x1 + x2 + x3 + x4 = 0

x1 − 2x2 + 3x4 = 1
−3x1 + 6x2 + x3 − 4x4 = −2
2x1 − 4x2 − x3 + 2x4 = 1
is
[ x1 ]   [ 1 ]   [ 2 ]
[ x2 ] = [ 0 ] + [ 1 ] t,   t ∈ R
[ x3 ]   [ 1 ]   [ 0 ]
[ x4 ]   [ 0 ]   [ 0 ]
x1 − 2x2 + 3x4 = 0
−3x1 + 6x2 + x3 − 4x4 = 0
2x1 − 4x2 − x3 + 2x4 = 0
has only the trivial solution, then how many solutions does the linear system
ax + by + cz = 3
dx + ey + f z = 7
gx + hy + jz = 11
have?
Chapter 4
Matrix Algebra
Contents
4.1 Notation
4.2 Definition of Operations on Matrices
4.2.1 Definition of the Sum and the Scalar Multiplication of Matrices
4.2.2 Definition of the Product of Matrices
4.3 Properties of Matrix Algebra
4.3.1 Matrix Multiplication is NOT Commutative
4.3.2 Standard Properties of Matrix Operations
4.3.3 Powers of a Square Matrix
4.4 The Transpose of a Matrix
4.4.1 Definition and Properties
4.4.2 Symmetric Matrices and Other Special Types of Square Matrices
4.1 Notation
Definition 4.1.1. 1. A matrix A is a rectangular array of real numbers.
3. The i, j-entry aij of a matrix A is the number in the            row and the            column.
[ a11 a12 ··· a1j ··· a1n ]
[  ⋮   ⋮       ⋮       ⋮  ]
[ ai1 ai2 ··· aij ··· ain ]
[  ⋮   ⋮       ⋮       ⋮  ]
We write A =            .
4. Two matrices A = [aij ]m×n and B = [bij ]r×s are equal if both
Example 4.1.2. Find all value(s) of x for which the following pairs of matrices are equal.
" # " #
1 x 1 −3
(a) A = 2 and B =
x 4 9 4
" # " #
1 4 −1 x2 4 x 0
(b) C = and D =
3 1 0 3 x2 0 0
Addition A+B =
Subtraction A−B =
Scalar multiplication kA =
Example 4.2.1. Let A = [ 2  3 ] , B = [ 7 −2 ]  and C = [  1 3 ] . Evaluate the following.
                       [ −4 12 ]      [ 1  9 ]          [  2 3 ]
                                                        [ −1 6 ]
(a) A − 2B=
(b) 3A + C =
Definition 4.2.2. For any positive integers m and n, the zero matrix of size m × n is
0m×n =
" #
1 3 5
Example 4.2.2. Evaluate + 02×3 =
2 4 6
Takeaway
Remark. When the size of 0m×n is obvious from context, we often write just 0.
AB =
Remark. Since the number of columns of A and the number of rows of B are equal (to k), ~vi and w~j are vectors in the same Rk, so ~vi · w~j exists and AB is defined. If the number of columns of A is NOT equal to the number of rows of B, then AB                    .
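A quick NumPy illustration of this size condition (our addition), using the matrices A and B of Example 4.2.3 below:

```python
import numpy as np

A = np.array([[1, 3], [-2, 4]])            # 2 x 2
B = np.array([[1, 2, -2], [0, 1, 3]])      # 2 x 3
print(A @ B)                               # defined: (2x2)(2x3) gives a 2x3 matrix
try:
    B @ A                                  # (2x3)(2x2): 3 != 2, so not defined
except ValueError as e:
    print("BA is not defined:", e)
```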
Example 4.2.3. Let A = [ 1  3 ] , B = [ 1 2 −2 ]  and C = [  5 1 ] . Evaluate
                       [ −2 4 ]       [ 0 1  3 ]          [  1 3 ]
                                                          [ −1 1 ]
(a) AB =
(b) BA =
(c) BC =
Example 4.2.4. Evaluate [ 3 1 2 ] [  1 ]  =
                                  [  1 ]
                                  [ −1 ]
A (m × k)  B (k × n) = AB (m × n)
Definition 4.2.5 (Identity matrix). For any positive integer n, the identity matrix of size n is the n×n matrix
In =
" #
2 5 1
Example 4.2.7. Let A = . Evaluate AI3 and I2 A.
4 3 0
Takeaway 11
Remark. In the algebra of matrices, In plays the same role as in R. It is the “multiplicative identity”.
When the size of In is obvious from the context, we often write I.
Takeaway 12
In general AB ≠ BA for matrices A and B, even when both AB and BA are defined.
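A quick check of this takeaway (our addition; the matrices are illustrative values only):

```python
import numpy as np

A = np.array([[1, 1], [0, 1]])
B = np.array([[1, 0], [1, 1]])
print(A @ B)   # [[2, 1], [1, 1]]
print(B @ A)   # [[1, 1], [1, 2]]  -- different from A @ B
```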
Definition 4.3.1 (Commuting matrices). Let A and B be matrices. A and B are said to commute if
.
Proof.
" #
1 3
Example 4.3.2. Find all matrices that commute with A = .
2 8
Proposition 4.3.2. Let A, B and C be matrices and let d and e be any real numbers. The following properties
hold as long as the sizes of the matrices ensure that the operations are defined.
(A + B)C =
(b) AB + 3A
Takeaway 13
1. When expanding a product of matrices, matrices that start on the left are kept on the left and
matrices that start on the right are kept on the right.
2. If a matrix is on the left of ALL the terms in an expression it can be factored on the left. If a
matrix is on the right of all terms in an expression then it can be factored on the right.
In this last example, we canceled the 2 in 2X by multiplying by 2⁻¹ = 1/2. We cannot (always) cancel like that if the coefficient in front of X is a matrix, as we will see in the next example.
" # " # " #
0 1 1 1 5 −5
Example 4.3.6. Let A = . Show that both B = and C = are solutions to
0 3 4 −2 4 −2
" #
4 −2
AX = (4.1)
12 −6
Hence to solve for X in equation (4.1), we cannot simply cancel A algebraically. We will see in the next chapter the necessary conditions to make the cancelling of matrices possible. Here is another example of unexpected behaviour in matrix algebra.
Example 4.3.7. Evaluate [ 1  0  3 ] [ −3  6 ]  =
                        [ −3 1 −8 ] [ −1  2 ]
                                    [  1 −2 ]
Takeaway 14
AB = AC ⇏ B = C.
Remark. Since matrix multiplication is associative, we can omit parentheses when we multiply many matrices, as in ABCD or in AAAAA.
Definition 4.3.2 (Power of a matrix). Let A be a square matrix and let r be a non-negative integer. The
r-th power of A is
Ar =
A0 =
" #
1 2
Example 4.3.8. Let A = .
0 1
(a) Evaluate A2 , A3 , A4 .
Note that getting a formula An for a more complicated matrix A is an advanced problem that you could
learn to solve in a second linear algebra course.
Proposition 4.3.3. For any square matrix A and any positive integers r and s,
1. Ar As =
2. (Ar )s =
Proof.
Proposition 4.4.1 (Properties of the transpose). Let A and B be matrices and let k be any real number. The
following hold whenever A and B have compatible sizes.
1. (AT )T =
2. (A + B)T =
3. (AB)T =
4. (kA)T =
(a) (ABC)T
(b) (I + AT )T
Takeaway 15
1. (A1 A2 · · · Am )T =
2. I T = I
1. A is symmetric if .
2. A is antisymmetric or skew-symmetric if .
" #
x 3
Example 4.4.4. Find the values of x, y and z so that A = is skew-symmetric.
y z
Example 4.4.5. Assume that A and B are symmetric matrices that commute. Show that AB is also symmetric.
• The matrix A is symmetric if the upper and lower entries are mirrored. The diagonal entries are free.
• The matrix A is antisymmetric if the upper and lower entries are mirrored with an extra “-” and if the
diagonal entries are zero.
1. A is diagonal if .
2. A is upper triangular if .
3. A is lower triangular if .
(a) Is 03×3 diagonal? (b) Is 03×3 upper triangular? (c) Is 03×3 lower triangular?
Chapter 5
Invertibility of Matrices
Contents
5.1 Introduction
5.2 The Inverse Matrix
5.3 Elementary Matrices
5.4 Finding A−1
5.5 Linear Systems as Matrix Equations
5.6 The Invertibility Theorem
5.1 Introduction
Goal
To define canceling out in the context of matrix algebra and to learn how and under what conditions we
can do so.
ab = 0
=⇒
=⇒
=⇒ b=0
This hinges on the fact that if a ≠ 0, then there exists a number a−1 such that
a−1 a = (1/a) a = 1.
The number a−1 is called the reciprocal or the multiplicative inverse of a. Since every nonzero real number has an inverse, we can cancel out a on both sides of the equality and obtain b = 0.
For matrices, the above is not true. Suppose we have two matrices A and B such that AB = 0 and A ≠ 0 (the zero matrix); then it is not necessarily true that B = 0. See Example 4.3.7 in Section 4.3.2. The reason for this is that even if A ≠ 0, there does not always exist a matrix “A−1” that cancels out A.
If such a matrix B exists, we say that A is invertible (or non-singular). If no such matrix B exists, we say
that A is singular (or non-invertible).
Proof.
Notation. If A and B are n × n matrices and AB = BA = I, since the inverse is unique we write B = A−1
(read A inverse).
" #
1 2
Example 5.2.1. (a) Consider the matrix A = . Check that A is invertible by applying the definition
0 1
" #
1 −2
with B = .
0 1
" #
1 2
(b) Let A = . Show that A is singular.
0 0
Remark. It can be proven that if A and B are n × n matrices and AB = I, then necessarily BA = I also. From
here on, you may assume this to be true and only show one of the two multiplications.
" #
1 2
Example 5.2.2. (a) Let A = . Find A−1 if possible.
3 4
" #
4 k
(b) Find all real values k so that matrix is singular.
k 9
Let's now revisit the idea of canceling out a matrix A. When we are solving a real number equation of the form ax = b, where a ≠ 0, we cancel out a as follows:
ax = b
a−1ax = a−1b
x = a−1b
Now that we've defined the matrix A−1, and now that we have a method for computing it (when it exists) in the case of 2 × 2 matrices, we can solve certain matrix equations in a similar fashion.
" A # " B #
3 −2 5 2
X= .
2 2 −1 1
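A NumPy sketch of this cancellation idea, using the matrices A and B of this equation (our addition, not part of the notes):

```python
import numpy as np

A = np.array([[3.0, -2.0], [2.0, 2.0]])
B = np.array([[5.0, 2.0], [-1.0, 1.0]])
X = np.linalg.inv(A) @ B        # X = A^{-1} B; np.linalg.solve(A, B) is preferred in practice
print(X)
print(np.allclose(A @ X, B))    # True: X really satisfies AX = B
```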
We will see more examples of matrix equations later. Let’s first list some useful properties.
Theorem 5.2.3 (Properties of Invertible Matrices). If A and B are invertible n × n matrices and k is a
non-zero real number, then the following properties are true.
Proof.
Example 5.2.4. (a) Solve for the matrix X in the following equation:
" #
−1 −1 2
(I + 2X) =
4 5
ABCᵀDBAᵀC = ABᵀ
Remark. (i) Property (d) above can be extended to the product of three matrices. Let A, B and C be
invertible n × n matrices, then ABC is invertible and (ABC)−1 = C −1 B −1 A−1 .
Definition 5.3.1. An n × n matrix E is called an elementary matrix if it can be obtained from the identity
matrix I by applying one single elementary row operation.
Example 5.3.1. The matrices E, F and G below are examples of elementary matrices.
E = [ −3 0 ] ,  F = [ 1 0 0 ] ,  G = [ 1 0 0 ]
    [  0 1 ]        [ 0 0 1 ]        [ 0 1 0 ]
                    [ 0 1 0 ]        [ 0 3 1 ]
Each of the matrices E, F and G can be obtained from I by applying one single elementary row operation.

i)  [ 1 0 ] ∼ [ −3 0 ]      ii)  [ 1 0 0 ] ∼ [ 1 0 0 ]      iii)  [ 1 0 0 ] ∼ [ 1 0 0 ]
    [ 0 1 ]   [  0 1 ]           [ 0 1 0 ]   [ 0 0 1 ]            [ 0 1 0 ]   [ 0 1 0 ]
       I         E               [ 0 0 1 ]   [ 0 1 0 ]            [ 0 0 1 ]   [ 0 3 1 ]
                                     I           F                    I           G
Notice also that for each elementary matrix, we can find the inverse elementary row operation that will convert
it back to the identity matrix.
" E # " I #
−3 0 1 0
i) ∼
0 1 0 1
F I
1 0 0 1 0 0
(ii) 0 0 1 ∼ 0 1 0
0 1 0 0 0 1
G I
1 0 0 1 0 0
iii) 0 1 0 ∼ 0 1 0
0 3 1 0 0 1
Takeaway
Every elementary row operation has an inverse operation that undoes what was done by the original.
Here are inverse operations for the three types of EROs.
• is inverse of Ri ↔ Rj .
• is inverse of Ri + kRj → Ri .
Proposition 5.3.1. An elementary matrix allows us to convert an elementary row operation into a matrix
multiplication.
" # " #
2 3 0 2 3 0
Example 5.3.2. Consider the matrices A = and B = .
5 1 2 1 −5 2
We can verify that B can be obtained by applying the elementary row operation
to the matrix A.
Now consider the elementary matrix obtained by applying this same elementary row
operation to the identity matrix I.
Proposition 5.3.2. An elementary matrix is invertible and its inverse is also an elementary matrix.
Proof.
Example 5.3.3. Consider the elementary matrix
E = [ 1 0 0 ]
    [ 0 1 0 ]
    [ 0 3 1 ]
obtained by applying the operation R3 + 3R2 → R3 to the identity matrix I. Use the inverse elementary operation to find the inverse F of E. Check directly from the definition of inverses that F is indeed the inverse of E.
Recall: The matrices A and B are said to be row-equivalent (we write A ∼ B) if there exists a sequence of
elementary row operations O1 , O2 , . . . , Ok that turns A into B. In other words, we have
A ∼(O1) A1 ∼(O2) . . . ∼(Ok−1) Ak−1 ∼(Ok) B
Proposition 5.3.3. If A ∼ B, then there exists a sequence of elementary matrices E1 , E2 , . . . , Ek such that
Ek · · · E1 A = B.
Proof.
Proof.
" # " #
2 1 1 0 10 −4
Example 5.3.4. Consider the matrices A = and B = .
0 5 −2 2 6 −1
We are now able to give a condition which is sufficient to guarantee that an n × n matrix A is invertible.
Proof.
Note. It will be proven later in this chapter that the converse of the above statement is also true, i.e., if A is
invertible, then A ∼ I.
Remark. The above theorem gives us a procedure to find A−1 . In fact, since A−1 = Ek · · · E1 , we can write
Ek · · · E1 I = A−1 . But this means that
I ∼ A1 ∼ . . . ∼ Ak−1 ∼ A−1
O1 O2 Ok−1 Ok
Takeaway
The same sequence of elementary row operations that turns A into I will turn I into A−1 . This allows
us to describe a general method to find A−1 for any n × n, if it exists, in the next section.
• If A ∼ I, then A is invertible.
• The same sequence O1 , O2 , . . . , Ok of elementary row operations that turns A into I will turn I into A−1 .
• Create an n × 2n augmented matrix [ A | I ].
• Use row-reduction (Gauss-Jordan elimination) to find the reduced row echelon form.
• If the first n columns contain the identity matrix I, then the last n columns contain A−1 , i.e.,
[ A | I ] ∼ . . . ∼ [ I | A−1 ].
If the first n columns do not contain the identity, then A−1 does not exist and A is singular.
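A Python sketch of this method (our addition; the helper name inverse_via_gauss_jordan is ours, and real code would simply call np.linalg.inv):

```python
import numpy as np

def inverse_via_gauss_jordan(A, tol=1e-12):
    """Row-reduce [A | I]; if the left block becomes I, the right block is A^{-1}."""
    A = np.asarray(A, float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])           # the n x 2n augmented matrix [A | I]
    for c in range(n):
        p = c + np.argmax(np.abs(M[c:, c])) # pick a usable pivot row
        if abs(M[p, c]) < tol:
            raise ValueError("A is singular")
        M[[c, p]] = M[[p, c]]               # swap it into place
        M[c] /= M[c, c]                     # scale to a leading 1
        for i in range(n):
            if i != c:
                M[i] -= M[i, c] * M[c]      # clear the rest of the column
    return M[:, n:]                         # the right block is A^{-1}

A = np.array([[1.0, 2.0], [3.0, 4.0]])      # the matrix of Example 5.2.2 (a)
print(inverse_via_gauss_jordan(A))          # [[-2, 1], [1.5, -0.5]]
```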
Since A is not a square matrix, there is no matrix A−1 that would allow us to cancel out A on either side.
Instead, let’s carry out the multiplication on the left and equate both sides.
       AX                 B
[ 2x − y + z  ]   =   [ 5 ]
[ x + 7y − 5z ]       [ 2 ]
     2×1               2×1
In order for these 2 × 1 matrices to be equal, the following equations must be satisfied.
Clearly, any set of values of x, y and z that satisfies this linear system also satisfies the original matrix equation
AX = B and vice versa. In other words, solving the linear system or solving the matrix equation will yield the
same solution set.
Takeaway
Any matrix equation can be expressed as a linear system, and any linear system can be written as a
matrix equation.
From here on, we will sometimes write AX = B when referring to the corresponding linear system, and AX = 0 when referring to the corresponding homogeneous linear system.
Theorem 5.5.1. A system of linear equations has zero, one or infinitely many solutions. There are no other
possibilities.
Proof.
Notation. In some contexts, the linear system AX = B may be written as A~x = ~b. In fact, since X is an n × 1 column matrix, it can be seen as the vector ~x = ⟨x1, x2, · · · , xn⟩ in Rn, and since B is an m × 1 column matrix, it can be seen as the vector ~b = ⟨b1, b2, . . . , bm⟩ ∈ Rm.
As we saw in section 4.2, when A is invertible, we can solve the matrix equation (and therefore the linear
system) by multiplying either side of the equation by A−1 .
AX = B
A−1AX = A−1B
X = A−1B
Example 5.5.1. Consider the linear system below. Rewrite the system as a matrix equation and use A−1 to
find the general solution if possible.
x + y = 2
2x − y = 4
Remark. (i) The above method yields one set of values given by the computation X = A−1 B. This means
that the corresponding linear system has exactly one solution.
(ii) When the linear system is homogeneous, i.e., of the form AX = 0, then the unique solution is given by
X = A−1 0 = 0. In other words, the system has only the trivial solution.
Conversely, we will prove that if the linear system AX = B has only one solution, then the matrix A must be
invertible. Equivalently, if an n × n matrix A is singular, then the linear system AX = B must have infinitely
many solutions or no solution at all.
x − 2y = b1
−2x + 4y = b2
(a) Write the system as a matrix equation AX = B and show that A is singular.
(b) Find conditions on b1 and b2 for the system to have i) infinitely many solutions, ii) no solution.
We can summarize the above by saying that “A is invertible” is true if and only if “AX = B has exactly
one solution” is also true. In such a case, we say that the two statements are equivalent. We are now ready to
list several statements that are equivalent to the statement “A is invertible”.
Theorem 5.6.1 (Fundamental Theorem of Invertible Matrices). If A is an n × n matrix, then the following statements are equivalent, i.e., for a given matrix, they are either all true or all false.
(a) A is invertible.
(b) The linear system AX = B has exactly one solution for any B.
(c) The homogeneous linear system AX = 0 has only the trivial solution.
Proof. We will prove the equivalence by establishing the following chain of implications
Chapter 6
Determinants
Contents
6.1 Cofactor Expansion and Determinants
6.1.1 Cofactor Expansion
6.1.2 Determinants
6.1.3 A Technique for Evaluating the Determinant of (ONLY) 3 × 3 Matrices
6.2 Evaluating Determinants Using Row Reduction
6.3 Properties of Determinants
6.3.1 Properties of Determinants (Part 1)
6.3.2 Determinant of Elementary Matrices
6.3.3 Properties of Determinants (Part 2)
6.4 Cramer's Rule
6.5 Vector Products and Determinants
h i
Definition 6.1.1. Let A = a be a 1 × 1 matrix. Then .
Definition 6.1.3. (Cofactor) The cofactor of the entry aij, denoted by Cij, is defined as

\[
C_{ij} = (-1)^{i+j} M_{ij},
\]

where the minor Mij is the determinant of the matrix obtained from A by deleting its i-th row and its j-th column.

Example 6.1.1. Let $A = \begin{bmatrix} 1 & -2 \\ 3 & 2 \end{bmatrix}$. Find the cofactors C11 , C12 and C22 .
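For instance, one possible computation: C11 = (−1)^{1+1} M11 = 2, C12 = (−1)^{1+2} M12 = −3, and C22 = (−1)^{2+2} M22 = 1.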
Remark. Let A = [aij ]. For a given aij , the sign of (−1)i+j (which is used in the definition of Cij ) can be obtained from the following matrix:

\[
\begin{bmatrix}
+ & - & + & - & \cdots \\
- & + & - & + & \cdots \\
\vdots & \vdots & & & \ddots
\end{bmatrix}
\]

Indeed,

\[
C_{ij} =
\begin{cases}
M_{ij}, & i + j \text{ is even} \\
-M_{ij}, & i + j \text{ is odd.}
\end{cases}
\]
Definition 6.1.4. (Cofactor Expansion) Let A = [aij ] be an n × n matrix. The cofactor expansion along the i-th row is

\[
a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in},
\]

and the cofactor expansion along the j-th column is

\[
a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj}.
\]

Example 6.1.2. Let $A = \begin{bmatrix} 1 & 4 \\ -1 & 2 \end{bmatrix}$. Evaluate the cofactor expansion along each row and each column.
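For instance: the cofactors are C11 = 2, C12 = 1, C21 = −4, and C22 = 1, so the four expansions are 1(2) + 4(1) = 6 (first row), (−1)(−4) + 2(1) = 6 (second row), 1(2) + (−1)(−4) = 6 (first column), and 4(1) + 2(1) = 6 (second column). All four agree, as the next theorem guarantees.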
Remark. Although the cofactor of an entry and the cofactor expansion along a row or a column are both scalars,
they are different.
Theorem 6.1.1. Let A be an n × n matrix. The cofactor expansions along any row or any column are all
equal.
6.1.2 Determinants
Definition 6.1.5. (Determinant) Let A be an n × n matrix. Then the determinant of A, denoted by det(A) or |A|, is the cofactor expansion along any row or any column (by Theorem 6.1.1, they are all equal).
Example 6.1.3. As we computed in Example 6.1.2, $\begin{vmatrix} 1 & 4 \\ -1 & 2 \end{vmatrix} = 6$.
Theorem 6.1.2. $\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$.
Proof. Let's compute the cofactor expansion along the first row: aC11 + bC12 = a(d) + b(−c) = ad − bc.
Example 6.1.5. Evaluate $\begin{vmatrix} -1 & 2 & -1 & 3 \\ 1 & 1 & 4 & 2 \\ 1 & 0 & 0 & 2 \\ 3 & 0 & 2 & 5 \end{vmatrix}$.
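For instance, expanding along the third row (which has two zero entries) keeps the work short:

\[
1 \cdot (+1)\begin{vmatrix} 2 & -1 & 3 \\ 1 & 4 & 2 \\ 0 & 2 & 5 \end{vmatrix} + 2 \cdot (-1)\begin{vmatrix} -1 & 2 & -1 \\ 1 & 1 & 4 \\ 3 & 0 & 2 \end{vmatrix} = 1(43) - 2(21) = 1.
\]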
Remark. If An×n has a zero row or a zero column then det(A) = 0. (Why?)
Theorem 6.1.3. The determinant of a triangular matrix (upper or lower) or a diagonal matrix is the product of the entries on its main diagonal. So we have det(A) = a11 a22 · · · ann.
Example 6.1.6. Evaluate $\begin{vmatrix} 2 & -1 & 3 \\ 0 & 5 & 1 \\ 0 & 0 & 4 \end{vmatrix}$.
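Since the matrix is upper triangular, the determinant is simply the product of the diagonal entries: 2 · 5 · 4 = 40.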
Theorem 6.1.4. If A is a square matrix, then det(AT ) = det(A).
Proof. Since the rows of AT are the columns of A, evaluating the cofactor expansion along the first row of AT equals evaluating the cofactor expansion along the first column of A.
6.1.3 A Technique for Evaluating the Determinant of (ONLY) 3 × 3 Matrices
For a 3 × 3 matrix, the determinant can also be computed by adding the products along the left-to-right diagonals and subtracting the products along the right-to-left diagonals:

\[
\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = aei + bfg + cdh - ceg - afh - bdi.
\]
Example 6.1.7. Compute the determinant $\begin{vmatrix} 2 & 3 & -1 \\ 5 & 1 & 0 \\ 6 & 2 & 4 \end{vmatrix}$.
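Using the 3 × 3 technique, for instance: (2)(1)(4) + (3)(0)(6) + (−1)(5)(2) − (−1)(1)(6) − (2)(0)(2) − (3)(5)(4) = 8 + 0 − 10 + 6 − 0 − 60 = −56.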
6.2 Evaluating Determinants Using Row Reduction
Remark. Any REF or RREF square matrix is upper triangular. Therefore its determinant equals the product
of the entries on the main diagonal.
Theorem 6.2.1.
(a) If B is a matrix that results when we interchange any two rows (columns) of A, then det(B) = − det(A).
(b) If B results when a single row (column) of A is multiplied by a scalar k, then det(B) = k det(A).
(c) If B results when we add a multiple of a row (column) to another row (column) of A, then det(B) = det(A).
Example 6.2.1. Use row reduction to evaluate $\begin{vmatrix} 1 & 3 & -1 \\ 2 & 1 & 3 \\ -1 & 0 & 1 \end{vmatrix}$.
Method
1. Apply row operations to find an REF and keep track of the changes in the determinant.
2. Since any REF matrix is upper triangular, its determinant can be calculated by multiplying its
diagonal entries.
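For instance, applying this method to Example 6.2.1: the operations R2 − 2R1 → R2 and R3 + R1 → R3 do not change the determinant, giving $\begin{vmatrix} 1 & 3 & -1 \\ 0 & -5 & 5 \\ 0 & 3 & 0 \end{vmatrix}$; then R3 + (3/5)R2 → R3 produces an upper triangular matrix with diagonal entries 1, −5, 3, so the determinant is (1)(−5)(3) = −15.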
6.3 Properties of Determinants
6.3.1 Properties of Determinants (Part 1)
Proof.
(b) Assume that row i, Ri and row j, Rj are identical. Apply the row operation −Ri + Rj → Rj so that row
j of the new matrix becomes zero. Since this row operation does not change the determinant and we have
a zero row, det(A) = 0.
(c) We prove this part for a 3 × 3 matrix A; the general case is similar.

\[
\det(kA) = \begin{vmatrix} ka_{11} & ka_{12} & ka_{13} \\ ka_{21} & ka_{22} & ka_{23} \\ ka_{31} & ka_{32} & ka_{33} \end{vmatrix}
= k \cdot k \cdot k \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = k^3 \det(A),
\]

where we factored k out of each of the three rows using Theorem 6.2.1(b).
Proof.
(a)
(b)
(c)
Proof. Let R be the RREF of A. We know that there are elementary matrices E1 , . . . , Er such that

Er . . . E2 E1 A = R

so that det(R) = det(Er ) · · · det(E2 ) det(E1 ) det(A).
(=⇒):
(⇐=):
(b) If A is invertible, then det(A−1 ) = 1/det(A).
• Proof of (b): AA−1 = I, so det(A) det(A−1 ) = det(AA−1 ) = det(I) = 1, and therefore det(A−1 ) = 1/det(A).
Example 6.3.4. Let $A = \begin{bmatrix} 8 & 3 & 1 \\ 2 & 1 & x \\ 6 & 3 & 1 \end{bmatrix}$. For which values of x is the matrix A invertible?
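One possible computation: expanding det(A) along the first column gives det(A) = 8(1 − 3x) − 2(3 − 3) + 6(3x − 1) = 2 − 6x, so A is invertible exactly when 2 − 6x ≠ 0, i.e., when x ≠ 1/3.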
6.4 Cramer's Rule
Goal
To use determinants to solve a linear system with n equations and n unknowns when the coefficient
matrix is invertible.
If A is invertible, the solution of A~x = ~b is given by

\[
x_i = \frac{\det(A_i(\vec b))}{\det(A)}, \qquad i = 1, \ldots, n,
\]

where Ai (~b) is the matrix formed by replacing the i-th column of A by ~b:

\[
A_i(\vec b) = \begin{bmatrix}
a_{11} & \cdots & b_1 & \cdots & a_{1n} \\
\vdots & & \vdots & & \vdots \\
a_{n1} & \cdots & b_n & \cdots & a_{nn}
\end{bmatrix}.
\]
Example 6.4.1. Use Cramer's Rule to solve the linear system

\[
\begin{cases}
x_1 + 2x_2 = 2 \\
-x_1 + 4x_2 = 1.
\end{cases}
\]
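For instance: det(A) = $\begin{vmatrix} 1 & 2 \\ -1 & 4 \end{vmatrix} = 6$, det(A1 (~b)) = $\begin{vmatrix} 2 & 2 \\ 1 & 4 \end{vmatrix} = 6$, and det(A2 (~b)) = $\begin{vmatrix} 1 & 2 \\ -1 & 1 \end{vmatrix} = 3$, so x1 = 6/6 = 1 and x2 = 3/6 = 1/2. (Check: 1 + 2(1/2) = 2 and −1 + 4(1/2) = 1.)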
6.5 Vector Products and Determinants
Goal
To compute the cross product of two vectors and also the triple scalar product of three vectors using
determinants.
Recall that

\[
\vec v \times \vec w = \langle v_2w_3 - v_3w_2,\; v_3w_1 - v_1w_3,\; v_1w_2 - v_2w_1 \rangle.
\]

This formula is easier to remember as a cofactor expansion along the first row of a symbolic determinant:

\[
\vec v \times \vec w = \begin{vmatrix} \hat\imath & \hat\jmath & \hat k \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{vmatrix}.
\]
Example 6.5.1. Use the determinant method to calculate ~u × ~v if ~u = ⟨1, −2, 3⟩ and ~v = ⟨2, 1, 4⟩.
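A sketch of the computation:

\[
\vec u \times \vec v = \begin{vmatrix} \hat\imath & \hat\jmath & \hat k \\ 1 & -2 & 3 \\ 2 & 1 & 4 \end{vmatrix}
= \hat\imath\begin{vmatrix} -2 & 3 \\ 1 & 4 \end{vmatrix} - \hat\jmath\begin{vmatrix} 1 & 3 \\ 2 & 4 \end{vmatrix} + \hat k\begin{vmatrix} 1 & -2 \\ 2 & 1 \end{vmatrix}
= \langle -11, 2, 5 \rangle.
\]

(As a check, ⟨−11, 2, 5⟩ · ⟨1, −2, 3⟩ = −11 − 4 + 15 = 0 and ⟨−11, 2, 5⟩ · ⟨2, 1, 4⟩ = −22 + 2 + 20 = 0.)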
Example 6.5.2. Use properties of determinants to prove the given properties of the cross product.
Linear Combinations, Spans, and Linear Independence
Contents
7.1 Linear Combinations
7.2 Spans
7.3 Linear Independence
7.4 Basis and Dimension of a Span
7.5 Subspaces
7.1 Linear Combinations
Goal
To introduce linear combinations and to use a linear system to determine whether or not a vector may
be expressed as a linear combination of a given collection of vectors.
Definition 7.1.1. 1. A linear combination of vectors ~v1 , ~v2 , . . . , ~vk ∈ Rn is any sum of the form

c1~v1 + c2~v2 + · · · + ck~vk , where c1 , c2 , . . . , ck ∈ R.

2. If for a vector ~w ∈ Rn there exist real numbers c1 , c2 , . . . , ck such that

~w = c1~v1 + c2~v2 + · · · + ck~vk ,

then we say that ~w can be expressed as a linear combination of ~v1 , ~v2 , . . . , ~vk .
Example 7.1.1. Express the vector ⟨2, −5, 4⟩ as a linear combination of the vectors ı̂, ̂, and k̂.
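The computation is immediate: ⟨2, −5, 4⟩ = 2⟨1, 0, 0⟩ − 5⟨0, 1, 0⟩ + 4⟨0, 0, 1⟩ = 2ı̂ − 5̂ + 4k̂.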
Takeaway 16
Any vector ⟨a, b, c⟩ ∈ R3 can be written as a linear combination of the standard unit vectors ı̂, ̂, and k̂:
⟨a, b, c⟩ = a ı̂ + b ̂ + c k̂.
Note: this way of writing vectors is very commonly used in Physics, and this expression is unique.
Example 7.1.2. Consider the vector $\vec w = \begin{bmatrix} 1 \\ 4 \\ 6 \\ -4 \end{bmatrix}$. In each of the cases below, determine if ~w can be expressed as a linear combination of the given vectors or not.

(a) $\vec v_1 = \begin{bmatrix} 2 \\ -1 \\ -3 \\ 1 \end{bmatrix}$, $\vec v_2 = \begin{bmatrix} 1 \\ -2 \\ -4 \\ 2 \end{bmatrix}$, $\vec v_3 = \begin{bmatrix} 0 \\ 3 \\ 5 \\ -3 \end{bmatrix}$
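A sketch of one solution: row reducing the augmented matrix $[\vec v_1\ \vec v_2\ \vec v_3 \mid \vec w]$ shows the system is consistent; for instance a1 = 0, a2 = 1, a3 = 2 works, since ~v2 + 2~v3 = ⟨1, −2, −4, 2⟩ + ⟨0, 6, 10, −6⟩ = ⟨1, 4, 6, −4⟩ = ~w. So ~w can be expressed as a linear combination of ~v1 , ~v2 , ~v3 .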
(b) $\vec v_1 = \begin{bmatrix} 2 \\ -1 \\ -3 \\ 1 \end{bmatrix}$
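Here ~w = a1~v1 forces 2a1 = 1 (first entry) and −a1 = 4 (second entry), which is impossible, so ~w is not a linear combination of ~v1 .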
Takeaway 17
To determine whether a vector ~w can be expressed as a linear combination of the vectors ~v1 , ~v2 , . . . , ~vk , we try to solve the vector equation ~w = a1~v1 + a2~v2 + · · · + ak~vk . This is a linear system in the unknowns a1 , . . . , ak with augmented matrix

Aaug = [~v1 ~v2 · · · ~vk | ~w].
7.2 Spans
Goal
To define the span of a collection of vectors in Rn algebraically and to understand what this definition
means geometrically in R3 .
Definition 7.2.1. 1. The span of a collection of vectors ~v1 , ~v2 , . . . , ~vk ∈ Rn , denoted span{~v1 , ~v2 , . . . , ~vk }, is the set of all of their linear combinations:

span{~v1 , ~v2 , . . . , ~vk } = {c1~v1 + c2~v2 + · · · + ck~vk | c1 , c2 , . . . , ck ∈ R}.

2. If S = span{~v1 , ~v2 , . . . , ~vk }, the vectors ~v1 , ~v2 , . . . , ~vk are called generators of S, or are said to span (generate) S.
Example 7.2.1. Let $\vec v_1 = \begin{bmatrix} 2 \\ -1 \\ -3 \\ 1 \end{bmatrix}$, $\vec v_2 = \begin{bmatrix} 1 \\ -2 \\ -4 \\ 2 \end{bmatrix}$, $\vec v_3 = \begin{bmatrix} 0 \\ 3 \\ 5 \\ -3 \end{bmatrix}$, and $\vec w = \begin{bmatrix} 1 \\ 4 \\ 6 \\ -4 \end{bmatrix}$. Determine if ~w ∈ span{~v1 , ~v2 , ~v3 }.
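These are the same vectors as in Example 7.1.2(a), where one checks that ~w = ~v2 + 2~v3 . Since ~w is a linear combination of ~v1 , ~v2 , ~v3 , we have ~w ∈ span{~v1 , ~v2 , ~v3 }.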
Example 7.2.2. Let $\vec v_1 = \begin{bmatrix} 2 \\ -1 \\ -3 \\ 1 \end{bmatrix}$ and $\vec w = \begin{bmatrix} 1 \\ 4 \\ 6 \\ -4 \end{bmatrix}$. Determine if ~w ∈ span{~v1 }.
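As in Example 7.1.2(b), the equation ~w = a1~v1 has no solution, so ~w ∉ span{~v1 }; geometrically, ~w does not lie on the line through the origin with direction ~v1 .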
Example 7.2.4. In each of the following situations, give a geometric description of the span of the given
collection of vectors.
(a) span{ ~0 }
(b) span{~u}, ~u ≠ ~0
(c) span{~u, ~v }, ~u ∦ ~v
[Figures: sketches of each collection of vectors on x, y, z axes in R3 .]
(d) span{~u, ~v , ~w}
case 1: ~u ∥ ~v ∥ ~w
[Figures: sketches in R3 of ~u, ~v , ~w for this case and for the remaining cases (coplanar, and not coplanar).]
Remark. In the previous two examples, we saw that to generate a line L through the origin one vector is
sufficient, but that L can be generated with more vectors (if they are all parallel). We also saw that a plane
π can be generated with two vectors, but can also be generated with more vectors (if they are all coplanar).
In the next section on linear dependence and independence, we will generalize these types of relationships to
larger collections of vectors in higher dimensions.
In R3 , the span of a collection of vectors is:
• only the origin if all of the vectors are ~0,
• a line through the origin if the vectors are all parallel (and not all ~0),
• a plane through the origin if the vectors are coplanar but not all parallel,
• all of R3 if the vectors are not coplanar.
Example 7.2.6. If ~v1 = ⟨2, −1, 3⟩, ~v2 = ⟨−1, 1, −1⟩, and ~v3 = ⟨5, −1, 9⟩, describe span{~v1 , ~v2 , ~v3 } geometrically by giving an equation for the geometric object that represents the span.
Method I: Geometric
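A sketch of the geometric approach: ~v1 ∦ ~v2 , and

\[
\vec v_1 \times \vec v_2 = \langle (-1)(-1) - (3)(1),\; (3)(-1) - (2)(-1),\; (2)(1) - (-1)(-1) \rangle = \langle -2, -1, 1 \rangle,
\]

and ~v3 · ⟨−2, −1, 1⟩ = −10 + 1 + 9 = 0, so all three vectors lie in the plane through the origin with normal ⟨−2, −1, 1⟩. Hence span{~v1 , ~v2 , ~v3 } is the plane −2x − y + z = 0, i.e., 2x + y − z = 0.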
Remark. If spans are just points, lines, planes, etc., why bother studying them? We are familiar with these geometric objects from our home space of R3 . Spans, however, are defined in terms of linear combinations: scalar multiplication and vector addition, which are defined in Rn for all n. Reinterpreting our familiar geometries in terms of spans lets us work with geometric objects in unfamiliar higher dimensions. The previous method (geometric) depends on the cross product, and so works only in R3 . The next method (algebraic) works in any dimension.
Example 7.2.6 continued: If ~v1 = ⟨2, −1, 3⟩, ~v2 = ⟨−1, 1, −1⟩, and ~v3 = ⟨5, −1, 9⟩, describe span{~v1 , ~v2 , ~v3 } geometrically by giving an equation for the geometric object that represents the span.
Method II: Algebraic
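A sketch of the algebraic approach: ⟨x, y, z⟩ ∈ span{~v1 , ~v2 , ~v3 } exactly when the system with augmented matrix

\[
\left[\begin{array}{ccc|c} 2 & -1 & 5 & x \\ -1 & 1 & -1 & y \\ 3 & -1 & 9 & z \end{array}\right]
\]

is consistent. After reordering the rows and eliminating, the last row becomes $[\,0\ 0\ 0 \mid z - 2x - y\,]$, so the system is consistent exactly when 2x + y − z = 0. This is the same plane found with the geometric method.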
7.3 Linear Independence
Goal
To define linear dependence and independence, and to determine if a collection of vectors in Rn is linearly
dependent or independent.
Definition 7.3.1. The vectors ~v1 , ~v2 , . . . , ~vk ∈ Rn are linearly dependent if the equation

c1~v1 + c2~v2 + · · · + ck~vk = ~0

(also called the dependence equation of the vectors ~v1 , ~v2 , . . . , ~vk ) has a solution in which not all of the ci are zero; such a solution gives a dependence relation. If the only solution is c1 = c2 = · · · = ck = 0, the vectors are linearly independent.
Example 7.3.1. In each of the cases below, determine if the collection of vectors is linearly dependent or
independent. If the set is linearly dependent, give a dependence relation.
(a) {~v1 , ~v2 } = {⟨1, −2, 0, −1⟩, ⟨0, 1, 1, 3⟩}
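A sketch: c1 ⟨1, −2, 0, −1⟩ + c2 ⟨0, 1, 1, 3⟩ = ~0 forces c1 = 0 (first entry) and then c2 = 0 (third entry), so the set is linearly independent.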
(b) {~v1 , ~v2 , ~v3 } = {⟨1, −1, 1⟩, ⟨2, −1, −1⟩, ⟨−4, 1, 5⟩}
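A sketch: solving c1~v1 + c2~v2 + c3~v3 = ~0 gives nontrivial solutions, e.g., c1 = −2, c2 = 3, c3 = 1. Indeed, −2⟨1, −1, 1⟩ + 3⟨2, −1, −1⟩ + ⟨−4, 1, 5⟩ = ⟨0, 0, 0⟩, so the set is linearly dependent with dependence relation −2~v1 + 3~v2 + ~v3 = ~0.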
To decide, we solve a homogeneous linear system, H, with equations coming from

c1~v1 + c2~v2 + · · · + ck~vk = ~0

in the variables c1 , c2 , . . . , ck . Its coefficient matrix is

A = [~v1 ~v2 · · · ~vk ].
Theorem 7.3.1. The set {~v1 , ~v2 , . . . , ~vk } is linearly dependent if and only if at least one of the vectors in the set is in the span of the remaining vectors.
Proof.
Corollary. {~v1 , ~v2 , . . . , ~vk } is linearly independent if and only if no vector in the set is in the span of the other
vectors.
A rephrasing of Corollary 7.3 is that {~v1 , ~v2 , . . . , ~vk } is linearly independent if and only if there are no redundant vectors in the set.
Said another way, {~v1 , ~v2 , . . . , ~vk } is linearly independent if and only if removing any of the generators changes (shrinks) the span.
Conversely, {~v1 , ~v2 , . . . , ~vk } is linearly dependent if and only if it is possible to remove at least one of the vectors without changing the span.
Example 7.3.2. Based on the diagrams below, decide if each of the following sets of vectors is linearly dependent or independent.
(a) {~v , ~w} is linearly
(b) {~v , ~w} is linearly
(c) {~u, ~v , ~w} is linearly
[Figures: sketches of the vectors ~v , ~w (and ~u in (c)).]
Proof.
Set | Conditions for Linear Independence | Span of the Set when Independent
{~u} | ~u ≠ ~0 | a line through the origin
{~u, ~v } | ~u ∦ ~v | a plane through the origin
{~u, ~v , ~w} | ~u, ~v , ~w not coplanar | all of R3
Example 7.3.3. Verify using the triple scalar product that the set of vectors from part (b) of Example 7.3.1 is linearly dependent: {~v1 = ⟨1, −1, 1⟩, ~v2 = ⟨2, −1, −1⟩, ~v3 = ⟨−4, 1, 5⟩}.
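A sketch: ~v2 × ~v3 = ⟨(−1)(5) − (−1)(1), (−1)(−4) − (2)(5), (2)(1) − (−1)(−4)⟩ = ⟨−4, −6, −2⟩, so ~v1 · (~v2 × ~v3 ) = (1)(−4) + (−1)(−6) + (1)(−2) = 0. The triple scalar product is zero, so the three vectors are coplanar, and the set is linearly dependent.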
7.4 Basis and Dimension of a Span
In Sections ?? and ??, spans of sets of vectors in R3 were completely described geometrically. These are either the origin, a line through the origin, a plane through the origin, or all of R3 .
In this section, we will generalize that work to find similar descriptions for spans of sets of vectors in Rn for
any n. In particular, we will learn how to describe the span of a set of vectors in Rn with as few vectors as
possible.
Definition 7.4.1 (Basis). Let S be a span. A collection B = {~v1 , . . . , ~vk } of vectors in S is a basis for S if:
(i) B spans S. This means that S = span B = {all linear combinations of ~v1 , . . . , ~vk }.
(ii) B is linearly independent.
In this definition, condition (i) ensures that B has enough generators to construct all of S. Condition (ii) ensures that B has no redundant vectors, meaning that removing any of the generators would yield a different and smaller span.
Example 7.4.1. Show that A = {⟨1, 0⟩, ⟨0, 1⟩} is a basis for R2 .
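A sketch: any ⟨a, b⟩ ∈ R2 can be written as a⟨1, 0⟩ + b⟨0, 1⟩, so A spans R2 , and a⟨1, 0⟩ + b⟨0, 1⟩ = ⟨0, 0⟩ forces a = b = 0, so A is linearly independent. Hence A is a basis for R2 .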
More generally, the set E = {~e1 , ~e2 , . . . , ~en }, where ~ei has a 1 in its i-th entry and 0 everywhere else, is a basis for Rn . E is called the standard basis of Rn .
Example 7.4.3. Consider $A = \left\{ \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ -1 \\ -1 \end{bmatrix}, \begin{bmatrix} 4 \\ -3 \\ 1 \end{bmatrix} \right\}$. Consider S = span A.

(c) Show that $B = \left\{ \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ -1 \\ -1 \end{bmatrix} \right\}$ is a basis for $S = \operatorname{span}\left\{ \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ -1 \\ -1 \end{bmatrix}, \begin{bmatrix} 4 \\ -3 \\ 1 \end{bmatrix} \right\}$.
In the previous example, we spent a lot of energy showing that removing ⟨4, −3, 1⟩ from the set of generators would not change the span. To avoid repeating this argument for every example, let us recap what we have learned.
Takeaway 23
Consider A = {~v1 , · · · , ~vk } and assume that ~vi can be written as a linear combination of the other vectors of A. Then removing ~vi does not change the span: span A = span(A \ {~vi }).
Using this takeaway, we may remove any vector of A that can be written in terms of other vectors in the
set (that we are keeping). Looking at the start of this last example, we find the reduction
\[
\left[\begin{array}{ccc|c} 1 & 2 & 4 & 0 \\ -1 & -1 & -3 & 0 \\ 1 & -1 & 1 & 0 \end{array}\right] \Longrightarrow \left[\begin{array}{ccc|c} 1 & 0 & 2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right].
\]
We saw that the third column of the reduced matrix tells us that ~v3 = 2~v1 + ~v2 , i.e., ⟨4, −3, 1⟩ = 2⟨1, −1, 1⟩ + ⟨2, −1, −1⟩.
Step 1. Form the coefficient matrix of the homogeneous system H whose columns are ~v1 , ~v2 , · · · , ~vk , and row reduce it.
Step 2. Let B contain only the vectors of ~v1 , ~v2 , · · · , ~vk whose columns correspond to pivots (leading entries) in either reduced matrix REF (H) or RREF (H). This B is a basis for S.
Step 3. Each non-pivot column in the reduced matrix RREF (H) gives the coefficients necessary to write the corresponding vector as a linear combination of the elements of B.
Let us explain this last point a bit more. The reduction needed to write a redundant vector in terms of the
vectors of the basis is included as part of the big reduction.
You simply need to ignore the columns of the other redundant vectors. In general, elementary row operations
preserve any linear dependency between the columns.
Example 7.4.4. Let $S = \operatorname{span}\left\{ \begin{bmatrix} 1 \\ 4 \\ -2 \\ 3 \end{bmatrix}, \begin{bmatrix} -4 \\ -16 \\ 8 \\ -12 \end{bmatrix}, \begin{bmatrix} 2 \\ 0 \\ 1 \\ -7 \end{bmatrix}, \begin{bmatrix} 1 \\ -12 \\ 8 \\ 1 \end{bmatrix} \right\} = \operatorname{span}\{\vec v_1, \vec v_2, \vec v_3, \vec v_4\}$.

(a) Find a basis B for S.
(b) Write each vector of {~v1 , ~v2 , ~v3 , ~v4 } as a linear combination of the basis B.
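A sketch (assuming the vectors as displayed above): row reducing [~v1 ~v2 ~v3 ~v4 ] produces pivots in columns 1, 3 and 4, with

\[
RREF = \begin{bmatrix} 1 & -4 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix},
\]

so B = {~v1 , ~v3 , ~v4 } is a basis for S, and the non-pivot column gives ~v2 = −4~v1 (while ~v1 , ~v3 , ~v4 are themselves elements of B).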
As the last example shows, a span does not usually have a unique basis. In fact, any non-trivial span has infinitely many different bases.
Although bases themselves are not unique, once you have fixed a basis B for your span S, any vector of S
will be expressed in a unique way as a linear combination of the elements of B.
~w = a1~v1 + a2~v2 + · · · + ak~vk .
• Uniqueness of a1 , · · · , ak :
Theorem 7.4.2. Let S be a span and let B1 = {~v1 , ~v2 , · · · , ~vk } and B2 = {~w1 , ~w2 , · · · , ~wr } be two bases for S. Then k = r.
Proof. We will show that k ≥ r. A similar argument shows that r ≥ k and hence that r = k as claimed.
We just showed that the number of elements in a basis does not depend on the chosen basis but only on the span itself. This allows for the following definition.
Definition 7.4.2 (Dimension). The dimension of a span S is the number of elements in any (and hence every) basis for S.
The dimension of a span gives its geometry. Following Takeaway ?? on page ??, we get the following correspondence between the dimension of a span and its geometric characteristics.
Example 7.4.7. Consider $S = \operatorname{span}\left\{ \begin{bmatrix} 1 \\ 3 \\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ 4 \\ 3 \end{bmatrix}, \begin{bmatrix} -1 \\ 6 \\ 7 \end{bmatrix} \right\}$.
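A sketch (assuming the vectors as displayed above): $\begin{vmatrix} 1 & 2 & -1 \\ 3 & 4 & 6 \\ 1 & 3 & 7 \end{vmatrix} = -25 \neq 0$, so the three vectors are not coplanar. They therefore form a basis for S, dim S = 3, and S is all of R3 .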
7.5 Subspaces
Goal
7.5.1 Vector Spaces
Definition 7.5.1 (Vector space). A vector space is a non-empty set V with two operations:
1. addition, ~u + ~v , so that the sum of any two elements of V is again an element of V ;
2. scalar multiplication, k~u, so that any scalar multiple of an element of V is again an element of V .
These operations must satisfy certain rules, for example the addition must be commutative.
Example 7.5.1. (a) Rn is a vector space with the standard vector operations.
(b) Let F(R) denote the set of real functions with domain R. F(R) is a vector space with the usual addition of functions and multiplication of functions by real numbers.
Because all of these examples have addition and scalar multiplication, everything that we have covered in this chapter can be applied in these settings. Hence you could talk about a linear combination of functions or the dimension of a span of polynomials. It turns out that this generalization from the familiar vectors in Rn to “functions as vectors” is extremely useful. We will save all of that fun for Linear 2.
7.5.2 Subspaces of Rn
Definition 7.5.2 (Subspaces of Rn ). A subset S of Rn is called a subspace if the following conditions are true.
1. (zero element) ~0 is in S.
2. (closed under addition) If ~u and ~v are in S, then ~u + ~v is in S.
3. (closed under scalar multiplication) If ~u is in S and k ∈ R, then k~u is in S.
Example 7.5.2. Show that the following sets are subspaces by proving the three properties or show that they
are not subspaces by finding a specific counter-example to one of the three properties.
(a) V = {⟨x, y, z⟩ ∈ R3 | x + y + z = 3}
(b) W = {⟨x, y, z⟩ ∈ R3 | x = 2y}
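A sketch: V is not a subspace, since ~0 = ⟨0, 0, 0⟩ fails 0 + 0 + 0 = 3, so the zero element is not in V . W is a subspace: ⟨0, 0, 0⟩ ∈ W ; if ⟨2a, a, b⟩ and ⟨2c, c, d⟩ are in W , then their sum ⟨2(a + c), a + c, b + d⟩ is in W ; and k⟨2a, a, b⟩ = ⟨2(ka), ka, kb⟩ ∈ W .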
S = span A = {a1~v1 + a2~v2 + · · · + am~vm | a1 , a2 , · · · , am ∈ R}
Since subspaces and spans are the same objects, subspaces also have bases and dimensions. By Theorem
??, given a basis B for a subspace V of Rn , every element of V can be written in a unique way as a linear
combination of the elements of a basis B of V .
Proposition ?? gives some insight on subspaces. We now know that subspaces are exactly the spans: the origin, lines through the origin, planes through the origin, . . .
In the next example, we can use that insight to determine whether a given subset is a subspace.
is a subspace of R2 . If it is, prove all three properties. If it is not, find a specific counter-example for one of the three properties.
Note that using our knowledge of spans, we can now show that a subset of Rn is a subspace without proving
all three properties. In future examples, however, we may force you to show the three properties.
Show that V is a subspace of R4 , find a basis for it and give a geometric description of V .
The following proposition generalizes the work done in the last example.
Proposition. The solution set of a homogeneous linear system in n variables is a subspace of Rn .
Proof. When using the techniques of Chapter 3 to solve a linear system, the solution set S is written as

\[
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \vec p + s_1\vec w_1 + \cdots + s_k\vec w_k, \qquad s_1, \ldots, s_k \in \mathbb{R}.
\]
When solving a homogeneous system, we get ~p = ~0 and a set of generators {~w1 , · · · , ~wk } (one from each free variable).
This set of generators is automatically linearly independent. To see this, consider a linear combination

s1 ~w1 + · · · + sj ~wj + · · · + sk ~wk = ~0.
To show that sj must be zero, look at the entry corresponding to its free variable, say xi . The generator ~wj has a 1 in this entry, while every other generator has a 0 there. Comparing the xi -entries of both sides gives

\[
x_i = s_1 \cdot 0 + \cdots + s_j \cdot 1 + \cdots + s_k \cdot 0 = s_j = 0.
\]
Takeaway
When using the method of Chapter 3 to solve a homogeneous linear system, you get
~x = s1 ~w1 + · · · + sk ~wk , s1 , . . . , sk ∈ R,
with one generator ~wi for each free variable, and the set {~w1 , . . . , ~wk } is automatically a basis for the solution set.