
LINEAR ALGEBRA

AND

VECTOR GEOMETRY

Hadi Bigdely
Riccardo Catalano
Véronique Godin
Mariah Hamel
Christopher Turner
Contents

1 Vectors (p. 9)
   1.1 Basic Vectors (p. 10)
      1.1.1 Introduction to Vectors (p. 10)
      1.1.2 Algebraic Vectors (p. 11)
      1.1.3 Operations on Vectors (p. 12)
      1.1.4 Parallel Vectors (p. 15)
      1.1.5 Length of a Vector (p. 16)
      1.1.6 Geometry Using Vectors (p. 19)
   1.2 Dot Product (p. 22)
      1.2.1 Dot Product and Angles Between Vectors (p. 22)
      1.2.2 Properties of the Dot Product (p. 24)
      1.2.3 Geometric Proof Using the Dot Product (p. 26)
   1.3 Projections (p. 27)
   1.4 Cross Product (p. 31)
      1.4.1 Definition of the Cross Product (p. 31)
      1.4.2 Direction of the Cross Product (p. 32)
      1.4.3 Length of the Cross Product (p. 33)
      1.4.4 Properties of the Cross Product (p. 35)
   1.5 Triple Scalar Product (p. 37)
      1.5.1 Definition of the Triple Scalar Product and its Algebraic Properties (p. 37)
      1.5.2 Geometric Applications of the Triple Scalar Product (p. 38)

2 Lines & Planes in R^3 (p. 45)
   2.1 Lines in R^3 (p. 46)
      2.1.1 Review of Lines in R^2 (p. 46)
      2.1.2 Generalizing from R^2 to R^3 (p. 47)
   2.2 Planes in R^3 (p. 50)
      2.2.1 General Form Equation of the Plane (p. 50)
      2.2.2 Vector Form Equation of the Plane (p. 52)
   2.3 Relative Positions of Lines & Planes in R^3 (p. 53)
      2.3.1 Relative Position of Two Lines (p. 53)
      2.3.2 Relative Position of a Line and a Plane (p. 55)
      2.3.3 Relative Position of Two Planes (p. 57)
   2.4 Shortest Distances and Closest Points (p. 59)
      2.4.1 Distance Point-to-Line (p. 59)
      2.4.2 Distance Line-to-Line (p. 61)
      2.4.3 Distance Point-to-Plane (p. 62)
      2.4.4 Distance Line-to-Plane (p. 64)
      2.4.5 Distance Plane-to-Plane (p. 64)
   2.5 Intersections of Lines & Planes in R^3 (p. 65)
      2.5.1 Point of Intersection of Two Lines (p. 65)
      2.5.2 Point of Intersection of a Line and a Plane (p. 66)
      2.5.3 Line of Intersection of Two Planes (p. 67)

3 Linear Systems (p. 69)
   3.1 Systems of Linear Equations (p. 70)
   3.2 Solving Linear Systems (p. 72)
      3.2.1 Elementary Row Operations (p. 72)
      3.2.2 Echelon Forms (p. 76)
      3.2.3 Linear Systems with No Solutions or Infinitely Many Solutions (p. 81)
   3.3 The Number of Solutions of a Linear System (p. 86)
   3.4 Homogeneous Linear Systems (p. 93)

4 Matrix Algebra (p. 97)
   4.1 Notation (p. 98)
   4.2 Definition of Operations on Matrices (p. 99)
      4.2.1 Definition of the Sum and the Scalar Multiplication of Matrices (p. 99)
      4.2.2 Definition of the Product of Matrices (p. 100)
   4.3 Properties of Matrix Algebra (p. 102)
      4.3.1 Matrix Multiplication is NOT Commutative (p. 102)
      4.3.2 Standard Properties of Matrix Operations (p. 104)
      4.3.3 Powers of a Square Matrix (p. 107)
   4.4 The Transpose of a Matrix (p. 109)
      4.4.1 Definition and Properties (p. 109)
      4.4.2 Symmetric Matrices and Other Special Types of Square Matrices (p. 111)

5 Invertibility of Matrices (p. 113)
   5.1 Introduction (p. 114)
   5.2 The Inverse Matrix (p. 115)
   5.3 Elementary Matrices (p. 121)
   5.4 Finding A^{-1} (p. 127)
   5.5 Linear Systems as Matrix Equations (p. 129)
   5.6 The Invertibility Theorem (p. 132)

6 Determinants (p. 135)
   6.1 Cofactor Expansion and Determinants (p. 136)
      6.1.1 Cofactor Expansion (p. 136)
      6.1.2 Determinants (p. 138)
      6.1.3 A Technique for Evaluating the Determinant of (ONLY) 3 × 3 Matrices (p. 141)
   6.2 Evaluating Determinants Using Row Reduction (p. 142)
   6.3 Properties of Determinants (p. 144)
      6.3.1 Properties of Determinants (Part 1) (p. 144)
      6.3.2 Determinant of Elementary Matrices (p. 145)
      6.3.3 Properties of Determinants (Part 2) (p. 147)
   6.4 Cramer's Rule (p. 150)
   6.5 Vector Products and Determinants (p. 151)

7 Linear Combinations, Spans, & Independence (p. 153)
   7.1 Linear Combinations (p. 154)
      7.1.1 Algebra of Linear Combinations (p. 154)
   7.2 Spans (p. 157)
      7.2.1 Spans Viewed Algebraically (p. 157)
      7.2.2 Spans Viewed Geometrically in R^3 (p. 159)
   7.3 Linear Independence (p. 163)
Preface

These Notes
This set of notes is a collaboration between your teachers at Marianopolis College. Welcome! We’re excited to
have you here.
These notes are in skeleton form. This means that the notes are only partially complete. Your teacher will
discuss with you how you will fill in these notes.

Errors and typos


If you find a typo or suspect an error in the notes, check the up-to-date version of the document at

https://drive.google.com/file/d/1nbSIqhMlKUSpsWH-CbSMJXVFX0XqU6tF/view?usp=sharing

to see if the error has been fixed. If it hasn’t, please report it to your teacher. Your help will be greatly
appreciated.

Chapter 1

Vectors

Contents
   1.1 Basic Vectors (p. 10)
      1.1.1 Introduction to Vectors (p. 10)
      1.1.2 Algebraic Vectors (p. 11)
      1.1.3 Operations on Vectors (p. 12)
      1.1.4 Parallel Vectors (p. 15)
      1.1.5 Length of a Vector (p. 16)
      1.1.6 Geometry Using Vectors (p. 19)
   1.2 Dot Product (p. 22)
      1.2.1 Dot Product and Angles Between Vectors (p. 22)
      1.2.2 Properties of the Dot Product (p. 24)
      1.2.3 Geometric Proof Using the Dot Product (p. 26)
   1.3 Projections (p. 27)
   1.4 Cross Product (p. 31)
      1.4.1 Definition of the Cross Product (p. 31)
      1.4.2 Direction of the Cross Product (p. 32)
      1.4.3 Length of the Cross Product (p. 33)
      1.4.4 Properties of the Cross Product (p. 35)
   1.5 Triple Scalar Product (p. 37)
      1.5.1 Definition of the Triple Scalar Product and its Algebraic Properties (p. 37)
      1.5.2 Geometric Applications of the Triple Scalar Product (p. 38)

1.1 Basic Vectors


Goal

To introduce vectors algebraically and geometrically.

1.1.1 Introduction to Vectors


Definition 1.1.1. A vector is an arrow which has ________. Vectors do not have a fixed position.

Example 1.1.1. Draw three vectors equal to ~v.

[coordinate grid for sketching, with ~v drawn]

You might have seen vectors before in physics as representing either force, displacement, velocity or acceleration. Many of our definitions will come from our intuition of displacement, and in two dimensions we can think of ~v as ⟨∆x, ∆y⟩.

Example 1.1.2. Consider the following two vectors ~v and ~w and think of them as displacements.

[figure: grid showing ~v and ~w]

(a) What should ~v + ~w mean? Draw it.        (b) What should 2~v mean? Draw it.

[two grids for sketching]

1.1.2 Algebraic Vectors


Definition 1.1.2. 1. A vector ~v in R^n is an ________

2. v1, v2, · · · , vn are called the ________ of ~v.

3. n is called the ________ of ~v.

Notation. 1. We will be using round brackets (a1, a2, · · · , an) for points.

2. We will be using angle brackets ⟨a1, a2, · · · , an⟩ or square-bracketed columns

   [ a1 ]
   [ a2 ]
   [ ⋮  ]
   [ an ]

   for vectors.

Example 1.1.3. Sketch the following points or vectors in R^2 and R^3.

(a) ~u = ⟨4, 3⟩        (b) ~v = ⟨−5, 3⟩

[two 2D grids for sketching]

(c) (3, 2, 4)          (d) ⟨−3, 5, 4⟩

[two 3D coordinate systems for sketching]

1.1.3 Operations on Vectors


Definition 1.1.3. The zero vector of R^n is ~0 = ________.
We will represent the zero vector ~0 as a point.

Definition 1.1.4. Let ~v = ⟨v1, v2, · · · , vn⟩ and ~w = ⟨w1, w2, · · · , wn⟩ be in R^n. Let k be any real number.

1. The inverse −~v of ~v is

   −~v = ________

   Geometrically, we are creating a vector with the same length as ~v, but in the opposite direction.

   [figure: ~v]

2. The scalar multiple k~v of ~v is

   k~v = ________

   Geometrically, we are stretching the vector (k > 1), shrinking the vector (0 < k < 1), stretching its inverse (k < −1) or shrinking its inverse (−1 < k < 0).

   [figure: ~v]

3. The sum of ~v and ~w is

   ~v + ~w = ________

   Geometrically, we can picture the sum either by placing the vectors "tip to tail" or by placing them "tail to tail" and building a parallelogram.

   [figures: (a) tip to tail; (b) parallelogram rule]

4. The difference of ~v and ~w is

   ~v − ~w = ________

   Geometrically, we can again see it in two different ways.

   [figures: two ways to picture ~v − ~w]

Example 1.1.4. (a) ⟨1, 2, 3⟩ + 4 ⟨−8, 5, 6⟩ =

(b) Assume that we are given the following two vectors ~v and ~w. Sketch the following vectors.

[figure: ~v and ~w]

(a) ~v + 2~w        (b) ~w − 3~v

Proposition 1.1.1 (Properties of vector operations). For all ~u, ~v and ~w in R^n and a, b ∈ R:

1. Commutativity of vector addition:

~u + ~v = ~v + ~u

2. Associativity of vector addition:

(~u + ~v) + ~w = ~u + (~v + ~w)

3. Additive identity:
~0 + ~u = ~u + ~0 = ~u

4. Existence of additive inverse: For any ~u, there exists a −~u such that

~u + (−~u) = ~0

5. Associativity of scalar multiplication:


a(b~u) = (ab)~u

6. Distributivity of scalar sums:


(a + b)~u = a~u + b~u

7. Distributivity of vector sums:


a(~u + ~v ) = a~u + a~v

8. Scalar multiplication identity:


1~u = ~u

All these operations are done component per component (or componentwise) and so these properties are
inherited from properties of real numbers.
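Because everything is componentwise, these properties are easy to sanity-check numerically. Here is a minimal sketch in NumPy (a library choice of this editor, not part of the course notes) verifying a few of them on random vectors in R^4:

import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.integers(-5, 6, size=(3, 4)).astype(float)  # three vectors in R^4
a, b = 2.0, -3.0

# Properties 1 and 2: commutativity and associativity of addition
assert np.allclose(u + v, v + u)
assert np.allclose((u + v) + w, u + (v + w))

# Properties 6 and 7: distributivity of scalar sums and vector sums
assert np.allclose((a + b) * u, a * u + b * u)
assert np.allclose(a * (u + v), a * u + a * v)

print("componentwise properties verified")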

Definition 1.1.5. 1. The origin O in R^n is the point ________.

2. For any two points P and Q, we let →PQ be the vector from P to Q.

   [figure: points P and Q with the vector →PQ]

3. For any point P, the position vector of P is the vector ________.

4. The vector ~v = →OP drawn from O to P is said to be in standard position.

Example 1.1.5. (a) Simplify →AB + →BC.

(b) Write →BA in terms of →AB.

Formula. Given coordinates P(a1, a2, · · · , an) and Q(b1, b2, · · · , bn) for the points,

→OP =

→OQ =

→PQ =

What do you notice about vector →OP and point P?

Example 1.1.6. (a) Assume P(1, 3, 2, −5) and Q(2, −5, 3, 10). Find →PQ.

(b) Assume that P(1, 0, 5) and →PQ = ⟨−3, 5, −13⟩. Find the coordinates for Q.

1.1.4 Parallel Vectors

In R^2, we say that two lines are parallel if they have the same slope, i.e. the same direction. This concept works for vectors in R^n as well: two vectors ~v and ~w in R^n are said to be parallel if they have the same (or opposite) direction. We will then write ~v // ~w.

Example 1.1.7. Give two different vectors parallel to ~v.

[figure: ~v]

Here's how to make this precise using algebra.

Definition 1.1.6. Two vectors ~v and ~w are parallel if there is a constant k in R so that ________

Example 1.1.8. Are the following pairs of vectors parallel?

(a) ~v = ⟨15, −9, 27⟩ and ~w = ⟨−10, 6, −18⟩.

(b) ~v = ⟨2, −4, 5⟩ and ~w = ⟨1, −2, −5⟩.

(c) ~v = ⟨1, 2, −1, 3⟩ and ~w = ⟨0, 0, 0, 0⟩.

Takeaway 1

~0 is parallel to ________

1.1.5 Length of a Vector

Example 1.1.9. Find the length of the following vectors.

(a) ~v = ⟨−2, 3⟩

[2D grid showing ~v]

(b) ~w = ⟨4, −5, 6⟩

[3D coordinate system for sketching ~w]

Definition 1.1.7. Let ~v = ⟨v1, v2, · · · , vn⟩ be any vector in R^n.

1. The length, magnitude or norm of ~v is ________.

2. The vector ~v is a unit vector if ________.

Proposition 1.1.2. Let ~v be any vector in R^n and k be any real number.

1. ‖k~v‖ =

2. If ~v is not the zero vector, then (1/‖~v‖) ~v is ________

Proof.

Example 1.1.10. Let ~v = ⟨1, 2, 5⟩ and ~w = ⟨−3, 5, −2⟩.

(a) Evaluate ‖~v − 2~w‖.

(b) Find a unit vector in the same direction as ~v.

(c) Find a vector of length 12 in the opposite direction of ~w.

Takeaway 2: Resizing a vector ~v

For any non-zero vector ~v and positive real number k:

1. ________ is a vector in the same direction as ~v but k times as long as ~v.

2. ________ is a vector in the same direction as ~v but of length k.


Example 1.1.11. Take P(1, 1, 2) and Q(8, 15, −12).

(a) Find the point A two-sevenths of the way from P to Q.

(b) Find the point B three units away from P between P and Q.

1.1.6 Geometry Using Vectors.


Definition 1.1.8. A quadrilateral ABCD is a parallelogram if its opposite sides are parallel and have the
same length.

Note. Using vector notation, the quadrilateral ABCD

[figure: quadrilateral with vertices A, B, C, D]

is a parallelogram whenever →AB = →DC and →AD = →BC.

Example 1.1.12. Consider the quadrilateral with vertices P (1, 2), Q(6, 4), R(2, 5) and S(5, 1).

(a) Show that this quadrilateral is a parallelogram.

(b) Find the length of its diagonals.



Example 1.1.13. Consider the parallelogram with sides ~v and ~w. Express the diagonals ~a and ~b of this parallelogram in terms of ~v and ~w.

[figure: parallelogram with sides ~v, ~w and diagonals ~a, ~b]

Example 1.1.14 (Similar triangles). Let ABC be a triangle. Let M and N be the midpoints of AB and BC,
respectively. Show that the segment M N is parallel to the segment AC but half as long.

1.2 Dot Product


Goal

To study angles between vectors algebraically.

1.2.1 Dot Product and Angles Between Vectors.


Definition 1.2.1. Let ~v = ⟨v1, v2, · · · , vn⟩ and ~w = ⟨w1, w2, · · · , wn⟩ be vectors in R^n. Then the dot product ~v · ~w of ~v and ~w is

~v · ~w =

Example 1.2.1. Evaluate ⟨1, 3, −5⟩ · ⟨2, 0, 6⟩.

Proposition 1.2.1. Let ~v and ~w be vectors in R^2 or R^3. Let θ be the angle between them. Then

~v · ~w =

[figure: ~v and ~w with the angle θ between them]

We will prove this proposition on page 25 after studying the properties of ~v · ~w.

Example 1.2.2. Find the angle between ~v = ⟨1, 2, 0⟩ and ~w = ⟨2, 0, 1⟩ to the nearest degree.

Takeaway 3: Angle between ~v and ~w

The angle between two vectors ~v and ~w is θ = ________

Extra. Proposition 1.2.1 is stated only in R^2 and R^3 because we do not have any intuition for angles in higher dimensions. However, we can use the formula of Takeaway 3 to define the angle between vectors in any dimension. A theorem, called the Cauchy-Schwarz inequality, tells us that for non-zero vectors ~v and ~w,

−1 ≤ (~v · ~w) / (‖~v‖ ‖~w‖) ≤ 1

and so

θ = arccos( (~v · ~w) / (‖~v‖ ‖~w‖) )

is always defined.
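A small numerical sketch of this formula (NumPy assumed; the clip guards against floating-point values landing just outside [−1, 1]):

import numpy as np

def angle_between(v, w):
    # angle between two non-zero vectors, in radians
    c = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
    return np.arccos(np.clip(c, -1.0, 1.0))

v = np.array([1.0, 2.0, 0.0])
w = np.array([2.0, 0.0, 1.0])
print(np.degrees(angle_between(v, w)))   # the angle from Example 1.2.2, in degrees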

From Proposition 1.2.1,

~v · ~w =

Since ‖~v‖ and ‖~w‖ are positive numbers, ~v · ~w and cos(θ) will have the same sign.

case #1. ~v · ~w > 0

case #2. ~v · ~w < 0

case #3. ~v · ~w = 0

Definition 1.2.2. Two vectors ~v and ~w in R^n are called perpendicular or orthogonal if

~v · ~w = ________

We then write ~v ⊥ ~w.

Example 1.2.3. For which value(s) of k are ~v = ⟨1, 2⟩ and ~w = ⟨k, k²⟩ perpendicular?

1.2.2 Properties of the Dot Product


Proposition 1.2.2. Let ~u, ~v and ~w be vectors in R^n and let k be a real number.

1. Commutativity: ~u · ~v = ~v · ~u

2. Distributivity: ~u · (~v + ~w) = ~u · ~v + ~u · ~w

3. Associativity with scalar multiplication: k(~u · ~v) = (k~u) · ~v = ~u · (k~v)

4. Relationship to the norm: ~v · ~v = ‖~v‖²

Proof.

Example 1.2.4. Assume that ~v has length 6, that ~w has length 5 and that ~v · ~w = −5. Evaluate

(6~v + ~w) · (2~v − 5~w)

Justify each step.



Takeaway 4

We can expand expressions with dot products.

(a~v + b~w) · (c~x + d~y) =

Example 1.2.5. Assume that ~v and ~w are unit vectors and that they form an angle of 45°. Using the dot product, find the length of ~v + 2~w.

We are now ready to prove Proposition 1.2.1.

Proof that ~v · ~w = ‖~v‖ ‖~w‖ cos(θ).

1.2.3 Geometric Proof Using the Dot Product


Definition 1.2.3. A rhombus is a parallelogram whose sides have ________.

Example 1.2.6. Show that the diagonals of a rhombus intersect perpendicularly.



1.3 Projections

Goal

To decompose a given vector ~v into

~v = ~a + ~b

where ~a is parallel to a given non-zero vector ~d and ~b is perpendicular to ~d.

[figure: ~v decomposed into ~a along ~d and ~b perpendicular to ~d]

In Definition 1.3.1, we will call ~a the projection of ~v onto ~d, and ~b the orthogonal component of ~v perpendicular to ~d. But before we can make that definition precise, we need the following theorem that states that these vectors ~a and ~b exist and are unique for any pair of vectors ~v and ~d.

Theorem 1.3.1. Take any vector ~v in R^n and non-zero vector ~d.

1. There are unique vectors ~a and ~b so that

   • ~a is parallel to ~d
   • ~b is perpendicular to ~d
   • ~v = ~a + ~b

2. In fact

   ~a = ________        ~b = ________

We will derive these formulas for ~a and ~b on page 29, but we can give them a name right now.

Definition 1.3.1. 1. The projection of ~v onto (or along) ~d is

   proj_~d(~v) =

2. The orthogonal component of ~v perpendicular to ~d is

   orth_~d(~v) =
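A sketch of both maps in NumPy, assuming the standard formulas proj_~d(~v) = ((~v · ~d)/(~d · ~d)) ~d and orth_~d(~v) = ~v − proj_~d(~v) (which is what the blanks above get filled with in class):

import numpy as np

def proj(v, d):
    # projection of v onto the non-zero vector d
    return (np.dot(v, d) / np.dot(d, d)) * d

def orth(v, d):
    # component of v perpendicular to d
    return v - proj(v, d)

v = np.array([-2.0, 6.0])
d = np.array([2.0, -1.0])                # the vectors of Example 1.3.2
a, b = proj(v, d), orth(v, d)
print(a, b)
print(np.dot(b, d))                      # 0: b is perpendicular to d
print(np.allclose(a + b, v))             # True: v = a + b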

Example 1.3.1. For each of these pairs of vectors ~v and ~d, sketch proj_~d(~v) and orth_~d(~v) on the same picture, so that proj_~d(~v), orth_~d(~v) and ~v form a right triangle.

[figures (a), (b), (c): three pairs of vectors ~v and ~d]

Example 1.3.2. Let ~v = ⟨−2, 6⟩ and ~w = ⟨2, −1⟩. Write ~v as ~a + ~b where ~a is parallel to ~w and ~b is orthogonal to ~w. Give a precise sketch of all involved vectors.

[grid for sketching]

Proof. Derivation of the formula for the projection of Theorem 1.3.1



Example 1.3.3. Find the coordinates of the point R in the following picture.

[figure: line L through A(−2, 3, 2) and B(6, −1, 6), with R the point of L at the foot of the perpendicular from P(1, 3, 2)]

Remark. In Example 1.3.3, R is the point on the line L which is closest to P . In the next section of the course,
we will be using projections to discuss distances and closest points on lines and on planes.

1.4 Cross Product

Goal

To find a direction that is perpendicular to two given vectors ~v and ~w in R^3, and hence, to the whole plane containing ~v and ~w.

[figure: ~v and ~w spanning a plane]

1.4.1 Definition of the Cross Product

Definition 1.4.1. Let ~v = ⟨v1, v2, v3⟩ and ~w = ⟨w1, w2, w3⟩ be vectors in R^3. The cross product ~v × ~w of ~v and ~w is

~v × ~w = ⟨v1, v2, v3⟩ × ⟨w1, w2, w3⟩ =

Example 1.4.1. Let ~v = ⟨1, 2, 4⟩ and ~w = ⟨3, 5, −6⟩.

(a) Evaluate ~v × ~w =

(b) Find the angle between ~v and ~v × ~w.

(c) Find the angle between ~w and ~v × ~w.

Takeaway 5

~v × ~w is perpendicular to both ~v and ~w.
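A quick numerical check of this takeaway (NumPy's np.cross computes the component formula of Definition 1.4.1):

import numpy as np

v = np.array([1.0, 2.0, 4.0])
w = np.array([3.0, 5.0, -6.0])          # the vectors of Example 1.4.1

c = np.cross(v, w)                      # the cross product v × w
print(c)
print(np.dot(c, v), np.dot(c, w))       # both 0: c is perpendicular to v and w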

1.4.2 Direction of the Cross Product


Proposition 1.4.1. Let ~v = ⟨v1, v2, v3⟩ and ~w = ⟨w1, w2, w3⟩ be in R^3.

Then ~v × ~w is ________ to both ~v and ~w.

Proof. ~v is perpendicular to ~v × ~w.

Example 1.4.2. Find all unit vectors perpendicular to both ~a = ⟨1, 2, 5⟩ and ~b = ⟨3, 0, 1⟩.

For two (non-parallel) vectors ~v, ~w in R^3, there are two opposite directions perpendicular to both ~v and ~w. In which of them would ~v × ~w point? The direction is given by the "right-hand" rule.

Right-hand rule. To get the direction of ~u × ~v ,

Step 1. Put the palm and the fingers of your right hand in the
direction of the first vector ~u.

Step 2. Bend your fingers in the direction of the second vector ~v .

Step 3. Your thumb gives you the direction of ~u × ~v .


Example 1.4.3. Let î = ⟨1, 0, 0⟩, ĵ = ⟨0, 1, 0⟩ and k̂ = ⟨0, 0, 1⟩. Use the right-hand rule to sketch both î × ĵ and ĵ × î. Check your work using algebra.

[3D coordinate system for sketching]

Takeaway 6

The vectors ~v × ~w and ~w × ~v are in opposite directions but they have the same length. So

~w × ~v = ________

1.4.3 Length of the Cross Product


Proposition 1.4.2. Let ~v and ~w be vectors in R^3.

1. ‖~v × ~w‖ =

2. ‖~v × ~w‖ =

Example 1.4.4. Find the area of the triangle with vertices A(1, 2, 0), B(2, 7, 1) and C(4, 5, 2).
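A numerical check of this kind of computation (a sketch; the area formula (1/2)‖→AB × →AC‖ is the standard application of Proposition 1.4.2):

import numpy as np

A = np.array([1.0, 2.0, 0.0])
B = np.array([2.0, 7.0, 1.0])
C = np.array([4.0, 5.0, 2.0])

# The triangle's area is half the area of the parallelogram spanned by AB and AC.
area = 0.5 * np.linalg.norm(np.cross(B - A, C - A))
print(area)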

To prove Proposition 1.4.2, we will use Lagrange's identity, which states that for any ~v and ~w ∈ R^3,

‖~v × ~w‖² = ‖~v‖² ‖~w‖² − (~v · ~w)²        (1.1)

This can be proven by expressing both sides in terms of the components of ~v and ~w.

Proof of Proposition 1.4.2.

Corollary. Let ~v and ~w be in R^3.

~v × ~w = ~0 ⟺ ________

Proof of the Corollary.

1.4.4 Properties of the Cross Product


Proposition 1.4.3. Let ~u, ~v and ~w ∈ R^3 and let k ∈ R.

1. Anticommutativity: ~w × ~v =

2. Distributivity on both sides:
   ~u × (~v + ~w) =
   (~u + ~v) × ~w =

3. Associativity with scalar multiplication: k(~v × ~w) =

4. Multiplication by zero: ~v × ~0 = ~0 × ~v =

5. Parallel vectors: ~v × ~w = ~0 ⟺ ~v // ~w

6. Perpendicularity: ~v × ~w is perpendicular to both ~v and ~w

Note. One property that you might expect is NOT true for the cross product. The cross product is NOT associative. In general,

~u × (~v × ~w) ≠ (~u × ~v) × ~w

as you will prove on the exercise sheet.

Example 1.4.5. Assume that ~v × ~w = ⟨1, 5, −1⟩. Evaluate (2~v + 4~w) × ~w.

Example 1.4.6. Simplify ~a · [(~b + ~c) × (~c + ~a)].

Takeaway 7

We can expand cross products, but we have to be careful not to switch the order of the vectors, as ~v × ~w ≠ ~w × ~v. More precisely,

(~a + ~b) × (~c + ~d) =

1.5 Triple Scalar Product

Goal

To determine algebraically whether three given vectors ~u, ~v and ~w in R^3 lie on the same plane or not.

[figure: ~u, ~v and ~w in R^3]

1.5.1 Definition of the Triple Scalar Product and its Algebraic Properties

Definition 1.5.1. Let ~u, ~v and ~w be vectors in R^3. The triple scalar product of ~u, ~v and ~w is ________

Example 1.5.1. Let ~u = ⟨1, 5, 2⟩, ~v = ⟨3, −1, 1⟩ and ~w = ⟨−2, 0, 3⟩. Compute ~u · (~v × ~w) and ~w · (~v × ~u). What do you notice?

Takeaway 8: Property of the triple scalar product

When we switch any two vectors in ~u · (~v × ~w), we flip the sign. Hence

~u · (~v × ~w) = −~v · (~u × ~w) = ~v · (~w × ~u) = · · ·

Example 1.5.2. Assume that ~u · (~v × ~w) = 8 and compute the following.

(a) (3~w) · [(2~v) × ~u]

(b) (2~v + ~w) · [(~u + ~v) × ~w]

1.5.2 Geometric Applications of the Triple Scalar Product

Definition 1.5.2. A parallelepiped is a solid formed by six parallelograms.

Proposition 1.5.1. The volume of a parallelepiped

[figure: parallelepiped with adjacent edges ~u, ~v, ~w]

with adjacent edges ~u, ~v and ~w is Volume = ________

Remark. When computing the volume of a parallelepiped, any order of the vectors ~u, ~v and ~w in |~u · (~v × ~w)| will give the same result because of Takeaway 8.
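A sketch of the volume computation, using |~u · (~v × ~w)| (the formula Proposition 1.5.1 is stating):

import numpy as np

def triple_scalar(u, v, w):
    # triple scalar product u · (v × w)
    return np.dot(u, np.cross(v, w))

u = np.array([1.0, 5.0, 2.0])
v = np.array([3.0, -1.0, 1.0])
w = np.array([-2.0, 0.0, 3.0])           # the vectors of Example 1.5.1

print(abs(triple_scalar(u, v, w)))       # volume of the parallelepiped on u, v, w
# Swapping two vectors only flips the sign (Takeaway 8):
print(triple_scalar(u, v, w), triple_scalar(w, v, u))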

Example 1.5.3. Consider the following parallelepiped.

[figure: parallelepiped with vertices A(1, 2, 0), B(1, 5, 1), C(−3, 3, 1), D(5, 3, 6), and two further vertices E and F]

(a) Find its volume.

(b) Find the coordinates of the vertices E and F.

Proof of Proposition 1.5.1.



Definition 1.5.3. Let ~v1, ~v2, · · · , ~vk be vectors in R^n.

1. ~v1, ~v2, · · · , ~vk are called colinear if they all lie on the same ________

2. ~v1, ~v2, · · · , ~vk are called coplanar if they all lie on the same ________

Example 1.5.4. Draw four coplanar vectors in R^3 and 3 colinear vectors in R^2.

Proposition 1.5.2. 1. ~v1, ~v2, · · · , ~vk are colinear if and only if they are (pairwise) ________

2. Let ~v1, ~v2 and ~v3 be vectors in R^3.

   (a) ~v1 and ~v2 are colinear ⟺

   (b) ~v1, ~v2 and ~v3 are coplanar ⟺

Proof of 2b.

Example 1.5.5. Are the vectors ~u = ⟨1, 5, 3⟩, ~v = ⟨2, 5, 1⟩ and ~w = ⟨3, 10, 4⟩ coplanar?
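A numerical sketch of the coplanarity test (three vectors in R^3 are coplanar exactly when their triple scalar product is 0, which is the criterion Proposition 1.5.2(2b) is after):

import numpy as np

u = np.array([1.0, 5.0, 3.0])
v = np.array([2.0, 5.0, 1.0])
w = np.array([3.0, 10.0, 4.0])

t = np.dot(u, np.cross(v, w))            # triple scalar product u · (v × w)
print(t, "coplanar" if np.isclose(t, 0) else "not coplanar")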

Definition 1.5.4. Let A1, A2, · · · , An be points in R^n.

1. A1, A2, · · · , An are called colinear if they lie on the same line.

   [figure: colinear points A1, ..., A4 versus non-colinear points B1, ..., B4]

2. A1, A2, · · · , An are called coplanar if they lie on the same plane.

   [figure: coplanar points A1, ..., A4 versus non-coplanar points B1, ..., B4]

Proposition 1.5.3. 1. A1, A2, · · · , An are colinear if and only if ________.

   [figure: points A1, ..., A4 on a line]

2. A1, A2, · · · , An are coplanar if and only if ________.

   [figure: points A1, ..., A4 on a plane]
Chapter 2

Lines & Planes in R^3

Contents
   2.1 Lines in R^3 (p. 46)
      2.1.1 Review of Lines in R^2 (p. 46)
      2.1.2 Generalizing from R^2 to R^3 (p. 47)
   2.2 Planes in R^3 (p. 50)
      2.2.1 General Form Equation of the Plane (p. 50)
      2.2.2 Vector Form Equation of the Plane (p. 52)
   2.3 Relative Positions of Lines & Planes in R^3 (p. 53)
      2.3.1 Relative Position of Two Lines (p. 53)
      2.3.2 Relative Position of a Line and a Plane (p. 55)
      2.3.3 Relative Position of Two Planes (p. 57)
   2.4 Shortest Distances and Closest Points (p. 59)
      2.4.1 Distance Point-to-Line (p. 59)
      2.4.2 Distance Line-to-Line (p. 61)
      2.4.3 Distance Point-to-Plane (p. 62)
      2.4.4 Distance Line-to-Plane (p. 64)
      2.4.5 Distance Plane-to-Plane (p. 64)
   2.5 Intersections of Lines & Planes in R^3 (p. 65)
      2.5.1 Point of Intersection of Two Lines (p. 65)
      2.5.2 Point of Intersection of a Line and a Plane (p. 66)
      2.5.3 Line of Intersection of Two Planes (p. 67)

2.1 Lines in R^3

Goal

To describe a line in three-dimensional space using a vector equation, parametric equations, or symmetric
equations.

To achieve this goal, we’ll first revisit what we know about lines in two-dimensional space.

2.1.1 Review of Lines in R2


Any line in R^2 can be written using the general form equation: ________

All non-vertical lines can also be written in slope-intercept form: ________

(this form is also sometimes called function form)

where m is the ________ and (0, b) is the ________.

[figure: line through (x1, y1) and (x2, y2), with slope m = ∆y/∆x and y-intercept (0, b)]

Note: To give the equation of a line, it is sufficient to have two things:

1. a point on the line

2. the slope of the line

Example 2.1.1. Consider the equation x + y = 0. All the points that satisfy this equation form what geometric object...

(a) ... in R^2? Name and draw the object.        (b) ... in R^3? Name and draw the object.

[2D grid and 3D coordinate system for sketching]

Takeaway

The equation ax + by + c = 0 does NOT yield a line in R^3; it yields a plane. (We will soon see why ax + by + cz + d = 0 is the general equation of a plane in R^3.)

2.1.2 Generalizing from R^2 to R^3

So what kind of equation DOES yield a line in R^3? The concept of slope m = ∆y/∆x as the ratio of changes in two variables is not well defined in three-dimensional space, since there may be changes in all three variables: ∆x, ∆y, and ∆z. If we want to find the equation of a line in R^3, we need to rethink the geometry of the line using something that does generalize to three dimensions: vectors.

Remember we used two things to give the equation of the line in R^2: (1) the slope of the line, and (2) a point on the line. We can replace both with vectors.

(1) Let ~d = ⟨∆x, ∆y⟩. Then:

   • ~d captures the same information as the slope m = ∆y/∆x
   • ~d is parallel to the line.

(2) We still need a point on the line, but this point, e.g. (x1, y1), can be expressed in terms of its position vector, e.g. ⟨x1, y1⟩.

[figure: line through (x1, y1) and (x2, y2) with direction vector ~d]

Derivation of the Equation(s) of a Line in R^3

Let L be a line in R^3. Given point P(x0, y0, z0) on L and vector ~d = ⟨d1, d2, d3⟩ parallel to L, derive:

(a) the vector equation for L

[figure: line L through P with direction ~d and position vector →OP]

Remark. We can describe the position vector of every point on L as the sum of the position vector of P and a scalar multiple of the direction vector ~d.

(b) the parametric equations for L        (c) the symmetric form equations for L

Example 2.1.2. The line L : ⟨x, y, z⟩ = ⟨−3, 6, 1⟩ + t ⟨−1, 0, 2⟩; t ∈ R is given.

(a) Write L in parametric equation form and symmetric equations form.

(b) Identify a point on L and a vector parallel to L.

(c) Give two more points on L.

(d) Give two more different vector equations for L.


Recall: L : ⟨x, y, z⟩ = ⟨−3, 6, 1⟩ + t ⟨−1, 0, 2⟩; t ∈ R

(e) Determine if points A(−2, 6, −1) and B(−7, 6, 8) are on L.

(f) Find all intercepts of L.

Takeaway

In R^2, a line must hit at least one of the axes. But in R^3, a line might not hit any of the axes at all!!

(g) Give a vector equation for the line L2 that is parallel to L and passes through the point C(π, e, −1).

2.2 Planes in R^3

Goal

To describe a plane in three-dimensional space using a general equation or a vector equation.

[figure: plane through P with normal vector ~n, direction vectors ~d1 and ~d2, and position vector →OP]

A plane is a two-dimensional, infinite, flat surface.

2.2.1 General Form Equation of the Plane

Let π be a plane in R^3. Given point P(x0, y0, z0) in π and vector ~n = ⟨a, b, c⟩ perpendicular to π, derive the general form equation for π.

Derivation:

[figure: plane containing P with normal ~n]
2.2. PLANES IN R3 CHAPTER 2. LINES & PLANES IN R3

Notes:

• ~n is called the normal vector.

• the general form equation of a plane is also called the general equation, the standard equation, the normal
equation, and the implicit equation of a plane.
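As a sketch of where the general form comes from: expanding ~n · (⟨x, y, z⟩ − →OP) = 0 (the equation the derivation above builds) produces ax + by + cz + d = 0. In SymPy, with made-up numbers for P and ~n:

import sympy as sp

x, y, z = sp.symbols("x y z")
P = sp.Matrix([1, 2, 3])                 # a hypothetical point in the plane
n = sp.Matrix([4, -1, 2])                # a hypothetical normal vector
X = sp.Matrix([x, y, z])

# n · (<x, y, z> - OP) = 0 expands to the general form
print(sp.Eq(sp.expand(n.dot(X - P)), 0))   # 4*x - y + 2*z - 8 = 0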

Example 2.2.1. Let π : x + y − 2z + 3 = 0. Find a vector perpendicular to π and a point in π.

Example 2.2.2. Give a general form equation of the plane that is perpendicular to the line

L : x = 2 + t,  y = 4,  z = 4 − 5t;  t ∈ R,

and passes through the point (1, −3, 4).

Example 2.2.3. Find a general form equation of the plane which is parallel to the xz−plane and passes
through the point (2, 5, −6).

2.2.2 Vector Form Equation of the Plane

Let π be a plane in R^3. Given point P(x0, y0, z0) in π and non-parallel vectors ~d1 and ~d2 in π, derive the vector form equation for π.

Derivation:

[figure: plane containing P with direction vectors ~d1 and ~d2]

Remark. Notice the similarity to the line equation ⟨x, y, z⟩ = ⟨x0, y0, z0⟩ + t ⟨d1, d2, d3⟩, t ∈ R.

Example 2.2.4. Let π : ⟨x, y, z⟩ = ⟨2, 3, −1⟩ + s ⟨1, −1, 3⟩ + t ⟨2, 0, 1⟩; s, t ∈ R. Give a general form equation for π.

Example 2.2.5. Let π : x + y + 2z = 3. Give a vector form equation for π.

2.3 Relative Positions of Lines & Planes in R^3


Goal

To determine the relative positions between lines and planes in three-dimensional space given their
algebraic descriptions (equations).

We are going to use (mostly) vector methods to determine the relative position of lines and planes. It
should be noted that one can often use the number of solutions in the intersection of these geometric objects
to arrive at the same conclusions.

2.3.1 Relative Position of Two Lines

[figure] parallel distinct (or just "parallel")

[figure] parallel coincident (or just "coincident")

[figure] intersecting at a single point (or just "intersecting")

[figure] skew (non-parallel, non-intersecting)

Example 2.3.1. Determine the relationship between the lines L1 : ⟨x, y, z⟩ = ⟨3, 1, −2⟩ + t ⟨2, −1, 0⟩, t ∈ R, and L2 : 3 − x = (z − 1)/2; y = 2.

Decision Tree for Two Lines

Is ~d1 // ~d2?

• yes: Lines are ________ or ________. Then ask: is →P1P2 // ~d1 // ~d2?
   - yes: Lines are ________.
   - no: Lines are ________.

• no: Lines are ________ or ________. Then ask: is →P1P2 · (~d1 × ~d2) = 0?
   - yes: Lines are ________.
   - no: Lines are ________.
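A sketch of this decision tree in code (NumPy; parallelism is tested here with the cross product, one of several equivalent checks):

import numpy as np

def classify_lines(P1, d1, P2, d2):
    # relative position of the lines P1 + t*d1 and P2 + s*d2 in R^3
    P1, d1, P2, d2 = map(np.asarray, (P1, d1, P2, d2))
    u = P2 - P1
    if np.allclose(np.cross(d1, d2), 0):             # d1 // d2
        return "coincident" if np.allclose(np.cross(u, d1), 0) else "parallel distinct"
    # non-parallel lines intersect exactly when P1P2, d1, d2 are coplanar
    if np.isclose(np.dot(u, np.cross(d1, d2)), 0):
        return "intersecting"
    return "skew"

# Example 2.3.1, with L2 rewritten as <x, y, z> = <3, 2, 1> + s<-1, 0, 2>
print(classify_lines([3, 1, -2], [2, -1, 0], [3, 2, 1], [-1, 0, 2]))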



2.3.2 Relative Position of a Line and a Plane

[figure] parallel disjoint (or just "parallel")

[figure] parallel, line contained in plane (or just "line contained in plane")

[figure] intersecting at a single point (or just "intersecting")


Example 2.3.2. Determine the relationship between the line L : x = 1 + t,  y = 3 + t,  z = 2 + 2t;  t ∈ R and the plane π : x + y − z = 2.

Decision Tree for a Line and a Plane

Is ~d · ~n = 0?

• yes: Line ________ or ________ the plane. Then ask: does P satisfy the equation of π?
   - yes: Line is ________ the plane.
   - no: Line is ________ the plane.

• no: Line ________ the plane.
2.3. RELATIVE POSITIONS OF LINES & PLANES IN R3 CHAPTER 2. LINES & PLANES IN R3

2.3.3 Relative Position of Two Planes

[figure] parallel distinct (or just "parallel")

[figure] parallel coincident (or just "coincident")

[figure] intersecting at a line (or just "intersecting")

Example 2.3.3. Determine the relationship between the planes π1 : −2x + y − 6z = 4 and π2 : x − (1/2)y + 3z + 2 = 0.

Decision Tree for Two Planes

Is ~n1 // ~n2?

• yes: Planes are ________ or ________. Then ask: are the plane equations multiples of each other?
   - yes: Planes are ________.
   - no: Planes are ________.

• no: Planes are ________.

2.4 Shortest Distances and Closest Points


Goal

1) To determine the shortest distance between geometric objects in three-dimensional space, and
2) To determine the closest point on a line or plane to a given point which is not on the line or plane.

When we say the "distance" between two objects, we will always mean the minimum distance between them. In R^n, this will always be a perpendicular distance.

2.4.1 Distance Point-to-Line

Given a point A and a line L, we will find a formula for the minimum distance from A to L, dist_{A,L}, and we will also see how to find point Q, the point on L that is closest to A.

Distance Formula:

[figure: line L through P with direction ~d, and a point A off the line]

Closest Point:

[figure: the same picture with Q, the foot of the perpendicular from A to L]
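A sketch of both computations (the closest point comes from projecting →PA onto ~d, and the distance is ‖→QA‖; this mirrors the steps of Example 2.4.1 below):

import numpy as np

def closest_point_on_line(A, P, d):
    # point on the line P + t*d that is closest to A
    A, P, d = map(np.asarray, (A, P, d))
    t = np.dot(A - P, d) / np.dot(d, d)  # projection coefficient
    return P + t * d

A = np.array([2.0, 1.0, 1.0])
P = np.array([-1.0, 8.0, 8.0])           # a point on L (Example 2.4.1)
d = np.array([1.0, 2.0, 1.0])            # direction of L

Q = closest_point_on_line(A, P, d)
print(Q, np.linalg.norm(A - Q))          # closest point, then minimum distance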


Example 2.4.1. Consider the point A(2, 1, 1) and the line L : x = −1 + t,  y = 8 + 2t,  z = 8 + t;  t ∈ R.

(a) Show that A is not on L.

(b) Find the point Q on L that is closest to A.

(c) Find the minimum distance from A to L by

   (i) finding ‖→QA‖

   (ii) using the distance formula.

2.4.2 Distance Line-to-Line

1. If the lines are intersecting or coincident, the shortest distance between them is dist_{L1,L2} = ________.

   [figures: intersecting lines; coincident lines]

2. If the lines are parallel, choose a point P on L1 and do point-to-line with P and L2.

   [figure: parallel lines]        dist_{L1,L2} = ________

3. If the lines are skew...

   [figure: skew lines L1, L2 with points P1, P2 and directions ~d1, ~d2]

Note. The distance between skew lines may also be found by dist_{L1,L2} = ________.

Example 2.4.2. Find the shortest distance between the lines L1 : x = 4,  y = −1 + t,  z = 3 − 2t;  t ∈ R and L2 : (5 − x)/3 = 8 − y = (z − 17)/4.

2.4.3 Distance Point-to-Plane

Given a point A(x0, y0, z0) and a plane π : ax + by + cz + d = 0, we will find a formula for dist_{A,π}, the minimum distance from A to π, and we will also see how to find point Q, the point on π that is closest to A.

Distance Formula:

[figure: point A above the plane π, with normal ~n]

Closest Point:

[figure: the same picture with Q, the foot of the perpendicular from A to π]
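A sketch using the standard point-to-plane formula |ax0 + by0 + cz0 + d| / ‖~n‖ (the formula this section derives):

import numpy as np

def dist_point_plane(A, n, d):
    # distance from point A to the plane n · <x, y, z> + d = 0
    A, n = np.asarray(A), np.asarray(n)
    return abs(np.dot(n, A) + d) / np.linalg.norm(n)

# Example 2.4.3: A(6, -10, 23) and π: x - 4y + 8z + 13 = 0
print(dist_point_plane([6, -10, 23], [1, -4, 8], 13))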

Example 2.4.3. Consider the point A(6, −10, 23) and the plane π : x − 4y + 8z + 13 = 0.

(a) Show that A is not in π.

(b) Find the point Q in π that is closest to A.

(c) Find the minimum distance from A to π by

   (i) finding ‖→QA‖

   (ii) using the distance formula.

2.4.4 Distance Line-to-Plane

1. If the line is contained in the plane or if the line intersects the plane, the shortest distance between them is dist_{L,π} = ________.

   [figures: line in plane; line intersecting plane]

2. If the line is parallel to the plane, choose a point on the line and do point-to-plane.

   [figure: line parallel to plane]        dist_{L,π} = ________

2.4.5 Distance Plane-to-Plane

1. If the planes are coincident or intersecting, the shortest distance between them is dist_{π1,π2} = ________.

   [figures: coincident planes; intersecting planes]

2. If the planes are parallel, choose a point P on π2 and do point-to-plane with P and π1.

   [figure: parallel planes]        dist_{π2,π1} = ________

2.5 Intersections of Lines & Planes in R^3


Goal

To determine the point of intersection of two lines or of a line and a plane, and to determine the line of
intersection of two planes in R3 .

As is often the case, there is more than one approach to finding intersections of lines and planes. The
section below outlines some possible solutions.

2.5.1 Point of Intersection of Two Lines

Method

To find the point of intersection of two lines we may use the following procedure:

1. Write both lines in parametric equation form using different parameters (e.g. t1 & t2, or s & t). Set the corresponding equations equal.

2. Pick two of the equations and solve the system using high school methods.

3. Test the solution from (2) in the last equation.

4. Plug the solution into either line to find the point of intersection.

[figure: lines L1 and L2 crossing at a point]

Example 2.5.1. Use vector methods to show that the lines intersect at a point. Then find their point of intersection.

L1 : ⟨x, y, z⟩ = ⟨7, −3, −2⟩ + t ⟨2, −1, −1⟩; t ∈ R        L2 : x − 1 = y − 6 = −z − 1

2.5.2 Point of Intersection of a Line and a Plane

Method

To find the point of intersection of a line and a plane we may use the following procedure:

1. Write the line in parametric equation form and the plane in general equation form.

2. Plug the parametric equations of the line into the equation of the plane and solve for the parameter, t.

3. Plug t back into the line to find the point of intersection.

[figure: line L crossing the plane π at a point]

Example 2.5.2. Use vector methods to show that

L : ⟨x, y, z⟩ = ⟨2, −3, −3⟩ + t ⟨−1, 1, −3⟩; t ∈ R and π : x + 2y − z = −9

intersect at a point. Then find their point of intersection.

2.5.3 Line of Intersection of Two Planes

Method

Since we know that two non-parallel (non-coincident) planes in R^3 intersect at a line, we can find L, their line of intersection, by:

1. Method 1: Vector Geometry

   (i) Find the direction vector of L.
       Since L is contained in both planes, its direction vector ~d is perpendicular to both normal vectors. So we may take ~d = ~n1 × ~n2.

   (ii) Find a point on L.
       Any line in R^3 must intersect at least one of the coordinate planes x = 0, y = 0, and z = 0. We can use this fact and some high school algebra to find a point on L.

2. Method 2: Algebra

   (i) Use elimination to reduce the number of variables.

   (ii) Let one variable be "free" and represent its value by a parameter, t.

   (iii) Use substitution to rewrite all the variables in terms only of constants and the parameter t.

[figure: planes π1 and π2 meeting in a line]

Example 2.5.3. Consider the planes

π1 : x + y − z = 2
π2 : x + y + z = 1

Use vector methods to show that the planes intersect at a line. Then find their line of intersection using:

(i) Method 1

(ii) Method 2

Note. In the next unit on Linear Systems, we will learn to generalize the algebraic approach. This will allow us
to deal with intersections both in higher dimensions (more variables), and of more objects (more equations).
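A sketch of Method 1 in code (direction from ~n1 × ~n2, then one point found by fixing a coordinate and solving the remaining 2 × 2 system):

import numpy as np

n1, b1 = np.array([1.0, 1.0, -1.0]), 2.0     # π1: x + y - z = 2
n2, b2 = np.array([1.0, 1.0, 1.0]), 1.0      # π2: x + y + z = 1

d = np.cross(n1, n2)                         # direction of the intersection line
print("direction:", d)                       # <2, -2, 0>

# Find one point on L by setting y = 0 and solving for x and z.
# (Setting z = 0 fails for this pair, since x and y have identical coefficients.)
A = np.array([[n1[0], n1[2]], [n2[0], n2[2]]])
x, z = np.linalg.solve(A, np.array([b1, b2]))
print("point:", (x, 0.0, z))                 # (1.5, 0, -0.5)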
Chapter 3

Linear Systems

Contents
   3.1 Systems of Linear Equations (p. 70)
   3.2 Solving Linear Systems (p. 72)
      3.2.1 Elementary Row Operations (p. 72)
      3.2.2 Echelon Forms (p. 76)
      3.2.3 Linear Systems with No Solutions or Infinitely Many Solutions (p. 81)
   3.3 The Number of Solutions of a Linear System (p. 86)
   3.4 Homogeneous Linear Systems (p. 93)

3.1 Systems of Linear Equations

Goal

To introduce vocabulary for the basics of linear systems.

Definition 3.1.1. (Linear equation) A linear equation in R^n is an equation of the form

a1 x1 + a2 x2 + · · · + an xn = b

where ________ are variables, ________ are real numbers called coefficients, and ________ is a constant.

Example 3.1.1. We can quickly identify linear equations.

        Linear          Not linear
R^2
R^3
R^4

Remark. In a linear equation, the power on each variable must be a 1 and we can’t multiply any variables
together.

Definition 3.1.2. (Linear system) A set

a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
    ⋮        ⋮              ⋮        ⋮
am1 x1 + am2 x2 + · · · + amn xn = bm

of ________ linear equations, each in ________ variables, is called a linear system.

Remark. The coefficient aij refers to the coefficient of the ________ variable in the ________ equation.

To solve a linear system means to find all the points ________ which solve all m equations.

Example 3.1.2. Solve the linear system

3x + 2y = 1
 x −  y = 2

Geometric meaning: To find the point of intersection between the two lines.

[figure: the lines 3x + 2y = 1 and x − y = 2]

Algebraic meaning: To find all point(s) (x, y) which satisfy both equations.

3.2 Solving Linear Systems

Goal

To develop a systematic approach to solve linear systems with many variables and many equations.

3.2.1 Elementary Row Operations


Example 3.2.1.

3x + 2y = 1
 x −  y = 2

[workspace: four blank stages for transforming the system step by step]

Observe:

I.        II.        III.

More vocabulary:

Given a system of linear equations

a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
    ⋮        ⋮              ⋮        ⋮
am1 x1 + am2 x2 + · · · + amn xn = bm

the matrix

          [ a11  a12  · · ·  a1n | b1 ]
Aaug  =   [ a21  a22  · · ·  a2n | b2 ]
          [  ⋮    ⋮             ⋮ |  ⋮ ]
          [ am1  am2  · · ·  amn | bm ]

is called the ________ of the linear system.

The matrix

      [ a11  a12  · · ·  a1n ]
A  =  [ a21  a22  · · ·  a2n ]
      [  ⋮    ⋮             ⋮ ]
      [ am1  am2  · · ·  amn ]

is called the ________ of the linear system.

Example 3.2.2. (a) Consider the linear system

 2x − 3y + 4z = 1
  x −  y −  z = −1
 −x + 2y −  z = −2

Write down each of the following:

(i) Coefficient matrix        (ii) Augmented matrix

(b) Write down the linear system that corresponds to the augmented matrix

[ 3  2   0  1 | 5 ]
[ 2  1  −1  π | 4 ]
[ 0  0   4  2 | 1 ]

Example 3.2.3. Find the intersection of the three planes

π1 : 2x − 3y + 4z = 1
π2 : x − y − z = −1
π3 : −x + 2y − z = −2

Using our strategy from R^2:

 2x − 3y + 4z = 1
  x −  y −  z = −1
 −x + 2y −  z = −2

Elementary row operations:

I) Interchange two rows

II) Scale a row by a nonzero scalar

III) Add a scalar multiple of one row to another row

Takeaway 9

Elementary row operations performed on an augmented matrix do not change the solution set of the
corresponding linear system.

Example 3.2.3 continued...

We use elementary row operations to solve the linear system

 2x − 3y + 4z = 1
  x −  y −  z = −1
 −x + 2y −  z = −2

Definition 3.2.1. (Row reduction) ________ is a method of transforming a matrix into a 'nicer' form using the ________.

3.2.2 Echelon Forms

Given the augmented matrix of a linear system, we can reduce the matrix using elementary row operations, which preserves the solution set to the linear system. The goal is to reach ________ or ________.

Definition 3.2.2. (REF and RREF) A matrix is in ________ if

1. all rows of zeros are at the bottom (provided there are rows of zeros).

2. in each nonzero row, the first nonzero entry (called the ________) is in a column to the left of any leading entries below.

Matrices in REF look like

[ ■  ⋆  ⋆  ⋆  ⋆  ⋆  ⋆ ]
[ 0  ■  ⋆  ⋆  ⋆  ⋆  ⋆ ]        where ■ = ________
[ 0  0  0  ■  ⋆  ⋆  ⋆ ]              ⋆ = ________
[ 0  0  0  0  0  ■  ⋆ ]

A matrix is in ________ if (1) and (2) hold and

(3) The leading entry in each row is a 1.

(4) Each column containing a leading 1 has zeros above and below.

Matrices in RREF look like

[ 1  0  ⋆  ⋆  0  0  ⋆ ]
[ 0  1  ⋆  ⋆  0  0  ⋆ ]        where ⋆ = ________
[ 0  0  0  0  1  0  ⋆ ]
[ 0  0  0  0  0  1  ⋆ ]

Example 3.2.4. Determine if each matrix is in row echelon form, reduced row echelon form or neither form. [Note: Any matrix, augmented or not, can be put into REF or RREF.]

[ 1 2 1 ]    [ 1 2 1 ]    [ 1 0 1 ]
[ 0 2 0 ]    [ 0 1 0 ]    [ 0 1 0 ]

[ −1 4 1 ]   [ −1 4 1 ]   [ 1 0 1 ]
[  0 0 0 ]   [  0 1 0 ]   [ 0 1 0 ]
[  0 1 0 ]   [  0 0 0 ]   [ 0 0 0 ]

[ 1 4 0 1 ]  [ 1 4 0 0 ]  [ 0 0 0  1 ]
[ 0 0 1 0 ]  [ 0 0 1 0 ]  [ 0 0 3  1 ]
[ 0 0 0 1 ]  [ 0 0 0 1 ]  [ 0 2 1  1 ]
                          [ 4 3 2 −5 ]

Remark. Generally, solving a linear system by Gaussian elimination and back substitution means following
a specific algorithm until the augmented matrix reaches row echelon form. This allows you to solve for the
‘last’ variable, which you can substitute to solve for the other variables.
Solving a linear system using Gauss-Jordan elimination means using the algorithm to reduce the aug-
mented matrix to reduced row echelon form.

Method: The Gauss-Jordan Algorithm.

Using the three elementary row operations, we can use the following procedure to reach row echelon
form or reduced row echelon form.

Step 1: Find the leftmost nonzero column. If the entry at the top of this column is zero, then use a row
swap to place a non zero entry in this position.

Step 2: Scale the first row so the leftmost nonzero entry is a 1, called a leading 1.

Step 3: Create zeros in all positions beneath the leading 1 by adding or subtracting suitable multiples of
the row containing the leading 1.

Step 4: If the matrix is in row echelon form, then go to step 5. If it is not in REF, then cover (or ignore)
the first row and repeat steps 1, 2 and 3 with the remaining matrix.

Step 5: Using the rightmost leading 1, create zeros above it by adding or subtracting suitable multiples of
the row containing this leading 1.

Step 6: If the matrix is now in reduced row echelon form, you’re done! If it is not in RREF, then cover
(or ignore) the row that was just used and repeat step 5 on the remaining matrix.
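These steps are exactly what computer algebra systems implement. As a sketch, SymPy's rref (a tool choice of this editor, not part of the notes) performs Gauss-Jordan elimination and reports the pivot columns:

import sympy as sp

# Augmented matrix of the system in Example 3.2.6, later in this section
Aaug = sp.Matrix([
    [0,  2,  1,  8],
    [2,  3,  1,  5],
    [1, -1, -2, -5],
])

rref, pivot_cols = Aaug.rref()           # reduced row echelon form
sp.pprint(rref)
print("pivot columns:", pivot_cols)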

Example 3.2.5. This is how the Gauss-Jordan algorithm will look as you proceed through the steps using elementary row operations.

[schematic: a chain of row-equivalent matrices (∼) in which leading 1s are created from left to right, with zeros produced below each leading 1 (Steps 1-4), and then zeros are produced above each leading 1 from right to left (Steps 5-6), ending in RREF]

Definition 3.2.3. (Row equivalent) We say that matrices A and B are ________ if one can be obtained from the other using a sequence of elementary row operations. We write ________.

Theorem 3.2.1. The reduced row echelon form of a matrix is unique. We can, therefore, write RREF(A) to denote the reduced row echelon form of the matrix A.

Remark. If A ∼ B, then RREF(A) = RREF(B).

Example 3.2.6. Solve the system

        2x2 +  x3 =  8
 2x1 + 3x2 +  x3 =  5
  x1 −  x2 − 2x3 = −5

(a) Using row reduction and back substitution

(b) By reducing the augmented matrix to reduced row echelon form

Example 3.2.6 continued...



3.2.3 Linear Systems with No Solutions or Infinitely Many Solutions.

Each of the linear systems we’ve discussed using elementary row operations has a single solution. There are
two other situations we must discuss.

Linear systems with no solutions:

Example 3.2.7. In R^2, we know that the lines y = 3x + 2 and y = 3x − 1 are parallel because they have the same slope. Further, we know they don't intersect since they have different y-intercepts. Let's see how we can recognize this in the row reduction process:

y = 3x + 2        ⟺        −3x + y = 2
y = 3x − 1                 −3x + y = −1

Linear systems with infinitely many solutions:

Example 3.2.8. We've seen that in R^3, the intersection of any two non-parallel planes is a line. We know that the planes π1 : x + y − z = 2 and π2 : x + y + z = 1 aren't parallel since their normal vectors aren't parallel. We can use row reduction to find an equation for the line of intersection.

Remark. The convention is to call any variable that isn't associated with a leading entry a free variable. We assign a parameter to each free variable.

Example 3.2.9. Describe the solution to each linear system using parameter(s).

" #
1 −1 1
1.
0 0 0

1 0 −1 0 3
 

2. 0 0 0 1 2
 
0 0 0 0 0

Example 3.2.10. Solve the linear system

    x +  y +  z = −1
    x + 2y + 3z = −4

Definition 3.2.4. (General and particular solutions) If a linear system has infinitely many
solutions, the general solution is the parametric (or vector) equation that describes all possible
solutions. A particular solution is one numerical solution.

Remark. In the previous example, a particular solution is any point on the line of intersection. Just pick one
by choosing any values you like for each of the parameters!

Example 3.2.11. Solve the following linear system by reducing the augmented matrix to reduced row echelon
form. Give the general solution and three different particular solutions.

     x1 + 3x2 − 2x3        + 2x5        =  0
    2x1 + 6x2 − 5x3 − 2x4 + 4x5 − 3x6  = −1
                5x3 + 10x4       + 15x6 =  5
    2x1 + 6x2        + 8x4 + 4x5 + 18x6 =  6

Example 3.2.11 continued...



Example 3.2.11 continued...



3.3 The Number of Solutions of a Linear System

Goal

To determine the number of solutions to a linear system by investigating the number of leading entries
in the REF or RREF of the coefficient matrix and the augmented matrix of the linear system.

Definition 3.3.1. (Consistent) We say that a linear system is consistent if the system
has at least one solution. If the system has no solutions, we say that the system is inconsistent.

Theorem 3.3.1. Every linear system has either no solutions, exactly one solution, or infinitely many solutions.

To determine the number of solutions that a linear system has, we rely on the number of leading entries

in the REF or the RREF of the coefficient matrix and of the augmented matrix. We are able to do this because row equivalent

augmented matrices represent linear systems that have the same solution set and so have the same number of

solutions.
Remark. We will see the proof of Theorem 3.3.1 in Section 5.6.

Definition 3.3.2. (Rank) The rank of a matrix A is the number of leading entries

in its row echelon or reduced row echelon form. We write rank(A).

Example 3.3.1. Find the rank of each matrix.

    A = [ 1 0  2 0 ]        C = [ 1 2 3 ]
        [ 0 1 −1 0 ]            [ 0 1 2 ]
                                [ 0 1 3 ]

    B = [ 1 0 2 0 ]         D = [ 1 2 3 0 5 ]
        [ 0 0 1 2 ]             [ 0 1 7 0 1 ]
        [ 0 0 0 4 ]             [ 0 0 0 1 3 ]
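For checking answers, a computer algebra system counts exactly these leading entries. A sketch assuming the SymPy library is available (not part of the original notes):

    from sympy import Matrix

    A = Matrix([[1, 0, 2, 0], [0, 1, -1, 0]])
    C = Matrix([[1, 2, 3], [0, 1, 2], [0, 1, 3]])
    print(A.rank(), C.rank())   # 2 and 3; B and D can be checked the same way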

Method: The number of solutions of a linear system.

The row echelon form or reduced row echelon form of the augmented matrix Aaug of a linear system
with coefficient matrix A has one of three forms.

I. The last column of the augmented matrix's REF or RREF has a leading entry. For example, you
may find a matrix that looks like

    [ ■ ★ ★ ★ ★ ]
    [ 0 ■ ★ ★ ★ ]      where ■ denotes a leading entry and ★ an arbitrary entry
    [ 0 0 0 0 ■ ]

In this case the linear system has no solutions.

Notice: A linear system has no solutions if and only if the reduced augmented matrix has more
leading entries than the reduced coefficient matrix.

II. Every column of the matrix contains a leading entry, except the last column. For example, you
may find a matrix that looks like

    [ ■ ★ ★ ★ ]
    [ 0 ■ ★ ★ ]      where ■ denotes a leading entry and ★ an arbitrary entry
    [ 0 0 ■ ★ ]

In this case the linear system has a unique solution.

Notice: A linear system has a unique solution if and only if the reduced coefficient matrix has the
same number of leading entries as variables (of the linear system).

III. At least two columns (one of which must be the last column) do not contain leading entries. For
example, you may find a matrix that looks like

    [ ■ ★ ★ ★ ★ ]
    [ 0 ■ ★ ★ ★ ]      where ■ denotes a leading entry and ★ an arbitrary entry
    [ 0 0 0 ■ ★ ]

In this case the linear system has infinitely many solutions.

Notice: A consistent linear system has infinitely many solutions if and only if there are free
variables.

Theorem 3.3.2. Suppose that Aaug is the augmented matrix of a consistent linear system in n variables. Then
the general solution of the system has n − rank(Aaug) free variables (parameters).
Example 3.3.2. For each part, consider the linear system whose augmented matrix reduces to the given
matrix. How many solutions does the linear system have?

(a) [ 3 1 2 4 ]
    [ 0 1 3 7 ]
    [ 0 0 0 2 ]

(b) [ 7  2 4 ]
    [ 0 −2 3 ]

(c) [ 1 0 0 3 1 ]
    [ 0 0 1 2 3 ]
    [ 0 0 0 0 0 ]

(d) [ 1 3 2 −2 1 ]
    [ 0 1 2 −3 5 ]
    [ 0 0 1  1 6 ]
    [ 0 0 0  1 4 ]
    [ 0 0 0  0 0 ]

Example 3.3.3. Consider the linear system whose augmented matrix reduces to

    [ 1 1   1       1    ]
    [ 0 1   k       1    ]
    [ 0 0 k − 1   k² − k ]

Which value(s) of k affect the number of solutions to the linear system?

Observe:
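One possible observation (for reference): the potential third leading entry is k − 1. If k ≠ 1, the third row determines z uniquely and back substitution gives exactly one solution. If k = 1, then k² − k = 0 as well, so the third row becomes [ 0 0 0 | 0 ] and the system has infinitely many solutions. Thus k = 1 is the only value that changes the number of solutions.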

Example 3.3.4. For which values of α does the linear system

      x −  2y + 3z = 1
      x +  αy + 2z = 2
    −2x + α²y − 4z = 3α − 4

have
(i) a unique solution?
(ii) infinitely many solutions?
(iii) no solutions?
Strategy: Inspired by the previous example:
• Use elementary row operations to reduce Aaug to look like row echelon form. [Note: going to RREF will
require too much algebra!]
• Investigate the special values of α where the potential leading entries are equal to zero.

Example 3.3.4 continued...



Example 3.3.5. Recall the linear system from Example 3.3.4

      x −  2y + 3z = 1
      x +  αy + 2z = 2
    −2x + α²y − 4z = 3α − 4

We showed that the augmented matrix of the linear system is row equivalent to

    [ 1   −2     3    1  ]
    [ 0  α + 2  −1    1  ]
    [ 0    0     α   2α  ]

In each case, describe the intersection geometrically, and draw a picture representing a possible configuration
of the three planes.
(i) α ≠ −2 and α ≠ 0

(ii) α = 0

(iii) α = −2

3.4 Homogeneous Linear Systems

Goal

To understand linear systems whose constant vector is the zero vector.

Definition 3.4.1. (Homogeneous) A system of linear equations is called homogeneous

if the constant term in each equation is equal to zero.

Example 3.4.1. Planes that pass through the origin in R3 . Consider the four planes

π1 : 3x − 2y + z = 0
π2 : x + y + 2z = 0
π3 : 6x − y + z = 0
π4 : x + y − z = 0
We notice that (0, 0, 0) satisfies each of these equations and that the four planes are non-parallel.

Therefore, it must be that the planes either intersect at the single point (0, 0, 0) or their common

intersection is a line through the origin.

Definition 3.4.2. (Trivial solution) Consider the homogeneous linear system in n variables

    a11 x1 + a12 x2 + · · · + a1n xn = 0
    a21 x1 + a22 x2 + · · · + a2n xn = 0
       ..       ..               ..    ..
    am1 x1 + am2 x2 + · · · + amn xn = 0

The solution x1 = x2 = · · · = xn = 0 is called the trivial solution.

Takeaway 10

The trivial solution is a solution to every homogeneous linear system. Therefore, every homogeneous
system is consistent and has either

• only the trivial solution or • infinitely many solutions

Theorem 3.4.1. If a homogeneous linear system has n unknowns (variables) and the reduced row echelon form
of its coefficient matrix has r leading entries (rank r), then the system must have

n − r free variables.

Example 3.4.2. The system

    x1 + x2 + x3 − x4 = 0
    x1 + x2 + x3 + x4 = 0

has coefficient matrix

    A = [ 1 1 1 −1 ]
        [ 1 1 1  1 ]

The system has n = 4 unknowns and r = 2 leading entries. Therefore, the

homogeneous system must have n − r = 2 free variables.

Example 3.4.3. Given that the general solution to the linear system

     x1 − 2x2       + 3x4 =  1
    −3x1 + 6x2 + x3 − 4x4 = −2
     2x1 − 4x2 − x3 + 2x4 =  1

is

    [ x1 ]   [ 1 ]   [ 2 ]
    [ x2 ] = [ 0 ] + [ 1 ] t ,   t ∈ R
    [ x3 ]   [ 1 ]   [ 0 ]
    [ x4 ]   [ 0 ]   [ 0 ]

find the solution to the associated homogeneous linear system

     x1 − 2x2       + 3x4 = 0
    −3x1 + 6x2 + x3 − 4x4 = 0
     2x1 − 4x2 − x3 + 2x4 = 0
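One way to read off the answer (for reference; it uses the standard fact that every solution of a consistent system AX = B is a particular solution plus a solution of the associated homogeneous system AX = 0): the homogeneous solutions are exactly the t-part of the general solution above,

    [ x1 ]   [ 2 ]
    [ x2 ] = [ 1 ] t ,   t ∈ R.
    [ x3 ]   [ 0 ]
    [ x4 ]   [ 0 ]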

Example 3.4.3 continued...



Example 3.4.4. If the linear system

    ax + by + cz = 0
    dx + ey + f z = 0
    gx + hy + jz = 0

has only the trivial solution, then how many solutions does the linear system

    ax + by + cz = 3
    dx + ey + f z = 7
    gx + hy + jz = 11

have?
Chapter 4

Matrix Algebra

Contents
4.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.2 Definition of Operations on Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.2.1 Definition of the Sum and the Scalar Multiplication of Matrices . . . . . . . . . . . . . . 99
4.2.2 Definition of the Product of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
4.3 Properties of Matrix Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.3.1 Matrix Multiplication is NOT Commutative . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.3.2 Standard Properties of Matrix Operations . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4.3.3 Powers of a Square Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.4 The Transpose of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.4.1 Definition and Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.4.2 Symmetric Matrices and Other Special Types of Square Matrices . . . . . . . . . . . . . 111

4.1 Notation
Definition 4.1.1. 1. A matrix A is a rectangular array of real numbers.

2. The size of a matrix with m rows and n columns is given as m × n (say “m by n”).

3. The i, j-entry aij of a matrix A is the number in the i-th row and the j-th column.

    [ a11 a12 · · · a1j · · · a1n ]
    [  ..  ..        ..        ..  ]
    [ ai1 ai2 · · · aij · · · ain ]
    [  ..  ..        ..        ..  ]
    [ am1 am2 · · · amj · · · amn ]

We write A = [aij ]m×n .

4. Two matrices A = [aij ]m×n and B = [bij ]r×s are equal if both

(i) A and B have the same size AND

(ii) for all i ≤ m and j ≤ n, aij = bij .

Example 4.1.1. Write the matrix A = [i + j]2×3 explicitly.

Example 4.1.2. Find all value(s) of x for which the following pairs of matrices are equal.
" # " #
1 x 1 −3
(a) A = 2 and B =
x 4 9 4

" # " #
1 4 −1 x2 4 x 0
(b) C = and D =
3 1 0 3 x2 0 0

4.2 Definition of Arithmetic Operations on Matrices


Goal

To define arithmetic operations on matrices inspired by vector operations and algebra on R.

4.2.1 Definition of the Sum and the Scalar Multiplication of Matrices


Definition 4.2.1. Let A = [aij ]m×n and B = [bij ]m×n be matrices of the same size and let k be any real
number.
                          In index notation              In words

Addition                  A + B = [aij + bij ]m×n        add corresponding entries

Subtraction               A − B = [aij − bij ]m×n        subtract corresponding entries

Scalar multiplication     kA = [kaij ]m×n                multiply every entry by k

Example 4.2.1. Let

    A = [  2  3 ],   B = [ 7 −2 ],   C = [  1 3 ].
        [ −4 12 ]        [ 1  9 ]        [  2 3 ]
                                         [ −1 6 ]

Evaluate the following.

(a) A − 2B=

(b) 3A + C =

Definition 4.2.2. For any positive integers m and n, the zero matrix of size m × n is

    0m×n = the m × n matrix all of whose entries are 0.

" #
1 3 5
Example 4.2.2. Evaluate + 02×3 =
2 4 6

Takeaway

For any m × n matrix A, we have that 0m×n + A = A + 0m×n = A.

Remark. When the size of 0m×n is obvious from context, we often write just 0.

4.2.2 Definition of the Product of Matrices


Definition 4.2.3 (Product of matrices). Let A be an m × k matrix with rows ~v1 , . . . , ~vm and let B be a k × n matrix with columns w~ 1 , . . . , w~ n . The product AB of A and B is the m × n matrix

    AB = [ ~vi · w~ j ]m×n ,

i.e., the i, j-entry of AB is the dot product of the i-th row of A with the j-th column of B.

Remark. Since the number of columns of A and the number of rows of B are equal (to k), ~vi and w~ j are vectors
in the same Rk , so ~vi · w~ j exists and AB is defined. If the number of columns of A is NOT equal to the number
of rows of B, then AB is not defined.

Example 4.2.3. Let

    A = [  1 3 ],   B = [ 1 2 −2 ],   C = [  5 1 ].
        [ −2 4 ]        [ 0 1  3 ]        [  1 3 ]
                                          [ −1 1 ]

Evaluate

(a) AB =

(b) BA =

(c) BC =

Example 4.2.4. Evaluate

    [ 3 1 2 ] [  1 ] =
              [  1 ]
              [ −1 ]

Let's look again at the sizes of the matrices involved in a product:

    A (m × k) · B (k × n) = AB (m × n)

The inner sizes must agree, and the outer sizes give the size of the product.

Example 4.2.5. Let A be an m × n matrix. When is A2 = AA defined?

Definition 4.2.4 (Square matrix). An m × n matrix A is square if m = n.

Example 4.2.6. Give a square matrix A and compute A2 .

Definition 4.2.5 (Identity matrix). For any positive integer n, the identity matrix of size n is the n × n matrix

    In = the matrix with 1s on the main diagonal and 0s everywhere else.

" #
2 5 1
Example 4.2.7. Let A = . Evaluate AI3 and I2 A.
4 3 0

Takeaway 11

For any m × n matrix A, AIn = Im A = A.

Remark. In the algebra of matrices, In plays the same role as 1 in R. It is the “multiplicative identity”.
When the size of In is obvious from the context, we often write I.

4.3 Properties of Matrix Algebra


Goal

To study the operations on matrices abstractly.

4.3.1 Matrix Multiplication is NOT Commutative


The algebra of matrices is more complicated than the algebra of real numbers. The next example will illustrate
this.
" # " # " #
1 2 2 0 1 0 1
Example 4.3.1. Let A = ,B= and C = .
0 0 1 0 0 1 0

(a) Compute AB and BA.

(b) Compute AC and CA.

Takeaway 12

In general AB ≠ BA for matrices A and B, even when both AB and BA are defined.

Definition 4.3.1 (Commuting matrices). Let A and B be matrices. A and B are said to commute if
AB = BA.

Proposition 4.3.1. If A and B commute then A and B must be square matrices of the same size.

Proof.

" #
1 3
Example 4.3.2. Find all matrices that commute with A = .
2 8

4.3.2 Standard Properties of Matrix Operations


Goal

To manipulate matrix operations abstractly without using specific entries.

Proposition 4.3.2. Let A, B and C be matrices and let d and e be any real numbers. The following properties
hold as long as the sizes of the matrices ensure that the operations are defined.

1. Commutativity of matrix addition : A+B =B+A

2. Associativity of matrix addition : A + (B + C) = (A + B) + C

Associativity of matrix multiplication : A(BC) = (AB)C

Associativity with scalars : d(eA) = (de)A


d(AB) = (dA)B = A(dB)

3. Distributivity over matrix addition :    A(B + C) = AB + AC

                                            (A + B)C = AC + BC

Distributivity with scalars : d(A + B) = dA + dB


(d + e)A = dA + eA

4. Properties of the zero matrix :      0 + A = A

                                        0A = 0 and A0 = 0
   Properties of the zero scalar :      0A = 0

5. Properties of the identity matrix :  IA = A

                                        AI = A
   Properties of the one scalar :       1A = A

Example 4.3.3. Expand (A + B)2 and simplify.

Example 4.3.4. Factor the following expressions.

(a) ABC 2 − ACDBC

(b) AB + 3A

Takeaway 13

1. When expanding a product of matrices, matrices that start on the left are kept on the left and
matrices that start on the right are kept on the right.

2. If a matrix is on the left of ALL the terms in an expression it can be factored on the left. If a
matrix is on the right of all terms in an expression then it can be factored on the right.

Example 4.3.5. Solve for X in the following equation


" # " #!
1 1 1 −1
3X − =5 X+
1 1 −1 2

In this last example, we canceled the 2 in 2X by multiplying by 2^(−1) = 1/2. We cannot (always) cancel like
that if the coefficient in front of X is a matrix, as we will see in the next example.
" # " # " #
0 1 1 1 5 −5
Example 4.3.6. Let A = . Show that both B = and C = are solutions to
0 3 4 −2 4 −2
" #
4 −2
AX = (4.1)
12 −6

Hence to solve for X in equation (4.1), we cannot simply cancel A algebraically. We will see in the next
chapter the necessary conditions that make the cancelling of matrices possible. Here is another example of
unexpected behaviour in matrix algebra.

Example 4.3.7. Evaluate

    [  1 0  3 ] [ −3  6 ] =
    [ −3 1 −8 ] [ −1  2 ]
                [  1 −2 ]

Takeaway 14

1. We cannot always cancel matrices from an equation:

       AB = AC   does NOT imply   B = C.

2. There are non-zero matrices A and B so that AB = 0.

4.3.3 Powers of a Square Matrix

Remark. Since matrix multiplication is associative, we can omit parentheses when we multiply many matrices,
as in ABCD or in AAAAA.

Definition 4.3.2 (Power of a matrix). Let A be a square matrix and let r be a non-negative integer. The
r-th power of A is

    A^r = A A · · · A   (r factors),

    A^0 = I.

" #
1 2
Example 4.3.8. Let A = .
0 1

(a) Evaluate A2 , A3 , A4 .

(b) Give a formula for An .

Note that getting a formula for A^n for a more complicated matrix A is an advanced problem that you could
learn to solve in a second linear algebra course.
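A quick numerical check of the pattern, assuming the NumPy library is available (this code is a sketch and not part of the original notes):

    import numpy as np

    A = np.array([[1, 2],
                  [0, 1]])
    for n in range(1, 5):
        # each power should equal [[1, 2n], [0, 1]]
        print(np.linalg.matrix_power(A, n))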

Proposition 4.3.3. For any square matrix A and any positive integers r and s,

1. A^r A^s = A^(r+s)

2. (A^r)^s = A^(rs)

Proof.

4.4 The Transpose of a Matrix


4.4.1 Definition and Properties
Definition 4.4.1 (Transpose of a matrix). Let A = [aij ]m×n be an m × n matrix. The transpose AT of A is
the n × m matrix

    AT = [aji ]n×m .

Hence, the rows of AT are the columns of A and vice versa.
" #
2 4 8
Example 4.4.1. Let A = . Find AT .
3 5 −2

Proposition 4.4.1 (Properties of the transpose). Let A and B be matrices and let k be any real number. The
following hold whenever A and B have compatible sizes.

1. (AT )T = A

2. (A + B)T = AT + B T

3. (AB)T = B T AT    (note the reversed order)

4. (kA)T = kAT

Proof of property (3) of the transpose.



Example 4.4.2. Expand and simplify.

(a) (ABC)T

(b) (I + AT )T

(c) (2AT B − I)T (A + B)T

Takeaway 15

1. (A1 A2 · · · Am )T = Am T · · · A2 T A1 T

2. I T = I

4.4.2 Symmetric Matrices and Other Special Types of Square Matrices


Definition 4.4.2 (Symmetric and antisymmetric matrices). Let A be a square matrix.

1. A is symmetric if AT = A.

2. A is antisymmetric or skew-symmetric if AT = −A.

Example 4.4.3. Is A = [ 0  2  3 ] symmetric? Is it antisymmetric?
                      [ 2  2 −1 ]
                      [ 3 −1 −8 ]

" #
x 3
Example 4.4.4. Find the values of x, y and z so that A = is skew-symmetric.
y z

Example 4.4.5. Assume that A and B are symmetric matrices that commute. Show that AB is also symmetric.
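One possible argument for this example (a sketch, for reference): by property (3) of the transpose and then the hypotheses,

    (AB)T = B T AT = BA = AB,

where the second equality uses that A and B are symmetric and the third uses that they commute. Since (AB)T = AB, the product AB is symmetric.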

The entries of an n × n matrix A can be divided into three blocks: the entries above the main diagonal (aij with i < j), the entries on the main diagonal (aii ), and the entries below the main diagonal (aij with i > j).

    A = [ a11 a12 a13 · · · a1n ]
        [ a21 a22 a23 · · · a2n ]
        [ a31 a32 a33 · · · a3n ]
        [  ..           ..   ..  ]
        [ an1 an2 an3 · · · ann ]

• The matrix A is symmetric if the upper and lower entries are mirrored. The diagonal entries are free.

• The matrix A is antisymmetric if the upper and lower entries are mirrored with an extra “-” and if the
diagonal entries are zero.

Definition 4.4.3. Let A be a square matrix.

1. A is diagonal if every entry off the main diagonal is zero (aij = 0 whenever i ≠ j).

2. A is upper triangular if every entry below the main diagonal is zero (aij = 0 whenever i > j).

3. A is lower triangular if every entry above the main diagonal is zero (aij = 0 whenever i < j).

Example 4.4.6. Consider 03×3 = [ 0 0 0 ]
                               [ 0 0 0 ]
                               [ 0 0 0 ]

(a) Is 03×3 diagonal?  (b) Is 03×3 upper triangular?  (c) Is 03×3 lower triangular?
Chapter 5

Invertibility of Matrices

Contents
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
5.2 The Inverse Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
5.3 Elementary Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
5.4 Finding A−1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
5.5 Linear Systems as Matrix Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
5.6 The Invertibility Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132


5.1 Introduction
Goal

To define canceling out in the context of matrix algebra and to learn how and under what conditions we
can do so.

Let a and b be two real numbers. We know that if ab = 0 and a ≠ 0, then b = 0.


In fact, we have

    ab = 0
    =⇒ a−1 (ab) = a−1 · 0
    =⇒ (a−1 a)b = 0
    =⇒ b = 0

This hinges on the fact that if a ≠ 0, then there exists a number a−1 such that

    a−1 a = (1/a) a = 1.
The number a−1 is called the reciprocal or the multiplicative inverse of a. Since every nonzero real number
has an inverse, then we can cancel out a on both sides of the equality and obtain b = 0.
For matrices, the above is not true. Suppose we have two matrices A and B such that AB = 0 and A ≠ 0
(the zero matrix); then it is not necessarily true that B = 0. See Example 4.3.7 in Section 4.3.2. The reason for
this is that even if A ≠ 0, there does not always exist a matrix “A−1 ” that cancels out A.

5.2 The Inverse Matrix


In section 4.2.2 we saw that the matrix I is the identity element when it comes to matrix multiplication, i.e.,
the matrix I plays a similar role in matrix algebra as the number 1 for real numbers. In fact, we have AI = A
and IA = A. It is thus reasonable to describe canceling out a matrix A as the operation of multiplying it by a
matrix “A−1 ” so that A−1 A = I and AA−1 = I.

Definition 5.2.1. If A is an n × n matrix, an inverse of A is an n × n matrix B such that AB = BA = I.

If such a matrix B exists, we say that A is invertible (or non-singular). If no such matrix B exists, we say
that A is singular (or non-invertible).

Theorem 5.2.1. If A is an invertible matrix, then its inverse is unique.

Proof.

Notation. If A and B are n × n matrices and AB = BA = I, since the inverse is unique we write B = A−1
(read A inverse).

" #
1 2
Example 5.2.1. (a) Consider the matrix A = . Check that A is invertible by applying the definition
0 1
" #
1 −2
with B = .
0 1

" #
1 2
(b) Let A = . Show that A is singular.
0 0

Remark. It can be proven that if A and B are n × n matrices and AB = I, then necessarily BA = I also. From
here on, you may assume this to be true and only show one of the two multiplications.

For 2 × 2 matrices, we have the following rule.


" #
a b
Theorem 5.2.2. The matrix A = is invertible if and only if ad − bc 6= 0, in which case the inverse
c d
is given by the formula
1
" #
−1 d −b
A = .
ad − bc −c a

Proof. See exercise sheet.



" #
1 2
Example 5.2.2. (a) Let A = . Find A−1 if possible.
3 4

" #
4 k
(b) Find all real values k so that matrix is singular.
k 9

Let's now revisit the idea of canceling out a matrix A. When we are solving a real number equation of the
form ax = b, where a ≠ 0, we cancel out a as follows,

    ax = b
    a−1 ax = a−1 b
    x = a−1 b

Now that we’ve defined matrix A−1 and now that we have a method for computing it (when it exists) in the
case of 2 × 2 matrices, we can solve certain matrix equations in a similar fashion.

Example 5.2.3. Solve for the matrix X in the following equation.

" A # " B #
3 −2 5 2
X= .
2 2 −1 1

We will see more examples of matrix equations later. Let’s first list some useful properties.

Theorem 5.2.3 (Properties of Invertible Matrices). If A and B are invertible n × n matrices and k is a
non-zero real number, then the following properties are true.

1. A−1 is invertible and (A−1 )−1 = A, i.e., the inverse of A−1 is A.

2. kA is invertible and (kA)−1 = (1/k) A−1 , (k ≠ 0).

3. AT is invertible and (AT )−1 = (A−1 )T .

4. AB is invertible and (AB)−1 = B −1 A−1 .

Proof.

Example 5.2.4. (a) Solve for the matrix X in the following equation:
" #
−1 −1 2
(I + 2X) =
4 5

(b) Assuming all matrices are n × n and invertible, isolate matrix D.

ABC T DBAT C = AB T

Remark. (i) Property (4) above can be extended to the product of three matrices. Let A, B and C be
invertible n × n matrices; then ABC is invertible and (ABC)−1 = C −1 B −1 A−1 .

Proof. See exercise sheet.

(ii) Similarly, if A1 , A2 , . . . , Ak are invertible n × n matrices, then A1 A2 · · · Ak is invertible and

    (A1 A2 · · · Ak )^(−1) = Ak^(−1) · · · A2^(−1) A1^(−1)

5.3 Elementary Matrices


Goal

To learn about elementary matrices in order to develop a method to determine whether an n × n


matrix A is invertible and to find its inverse.

Definition 5.3.1. An n × n matrix E is called an elementary matrix if it can be obtained from the identity
matrix I by applying one single elementary row operation.

Example 5.3.1. The matrices E, F and G below are examples of elementary matrices.

    E = [ −3 0 ],   F = [ 1 0 0 ],   G = [ 1 0 0 ]
        [  0 1 ]        [ 0 0 1 ]        [ 0 1 0 ]
                        [ 0 1 0 ]        [ 0 3 1 ]

Each of the matrices E, F and G can be obtained from I by applying one single elementary row operation.

    (i)   I ∼ E   via   −3R1 → R1
    (ii)  I ∼ F   via   R2 ↔ R3
    (iii) I ∼ G   via   R3 + 3R2 → R3

Notice also that for each elementary matrix, we can find the inverse elementary row operation that will convert
it back to the identity matrix.

    (i)   E ∼ I   via   (−1/3)R1 → R1
    (ii)  F ∼ I   via   R2 ↔ R3
    (iii) G ∼ I   via   R3 − 3R2 → R3

Takeaway

Every elementary row operation has an inverse operation that undoes what was done by the original.
Here are inverse operations for the three types of EROs.

• (1/k)Ri → Ri is the inverse elementary row operation of kRi → Ri where k ≠ 0.

• Ri ↔ Rj is the inverse of Ri ↔ Rj (the swap undoes itself).

• Ri − kRj → Ri is the inverse of Ri + kRj → Ri .

Proposition 5.3.1. An elementary matrix allows us to convert an elementary row operation into a matrix
multiplication.

" # " #
2 3 0 2 3 0
Example 5.3.2. Consider the matrices A = and B = .
5 1 2 1 −5 2

We can verify that B can be obtained by applying the elementary row operation R2 − 2R1 → R2
to the matrix A.

Now consider the elementary matrix E obtained by applying this same elementary row
operation to the identity matrix I.

Proposition 5.3.2. An elementary matrix is invertible and its inverse is also an elementary matrix.

Proof.

Example 5.3.3. Consider the elementary matrix E = [ 1 0 0 ] obtained by applying the operation
                                                  [ 0 1 0 ]
                                                  [ 0 3 1 ]
R3 + 3R2 → R3 to the identity matrix I. Use the inverse elementary operation to find the inverse F of E.
Check directly from the definition of inverses that F is indeed the inverse of E.

Recall: The matrices A and B are said to be row-equivalent (we write A ∼ B) if there exists a sequence of
elementary row operations O1 , O2 , . . . , Ok that turns A into B. In other words, we have

    A ∼ A1 ∼ · · · ∼ Ak−1 ∼ B

where step i is performed by the operation Oi .

Proposition 5.3.3. If A ∼ B, then there exists a sequence of elementary matrices E1 , E2 , . . . , Ek such that
Ek · · · E1 A = B.

Proof.

Proposition 5.3.4. If A ∼ B, then there exists an invertible matrix P such that P A = B.

Proof.

" # " #
2 1 1 0 10 −4
Example 5.3.4. Consider the matrices A = and B = .
0 5 −2 2 6 −1

a) Find the elementary matrices E, F and G such that GF EA = B.

b) Find an invertible matrix P such that P A = B.



We are now able to give a condition which is sufficient to guarantee that an n × n matrix A is invertible.

Theorem 5.3.5. If A ∼ I, then A is invertible.

Proof.

Note. It will be proven later in this chapter that the converse of the above statement is also true, i.e., if A is
invertible, then A ∼ I.

Remark. The above theorem gives us a procedure to find A−1 . In fact, since A−1 = Ek · · · E1 , we can write
Ek · · · E1 I = A−1 . But this means that

    I ∼ A1 ∼ · · · ∼ Ak−1 ∼ A−1

where O1 , O2 , . . . , Ok are the corresponding EROs.

Takeaway

The same sequence of elementary row operations that turns A into I will turn I into A−1 . This allows
us to describe, in the next section, a general method to find A−1 for any n × n matrix A, if it exists.

5.4 Finding A−1


Recall:

• If A ∼ I, then A is invertible.

• The same sequence O1 , O2 , . . . , Ok of elementary row operations that turns A into I will turn I into A−1 .

This allows us to write the following general method to find A−1 .

Method: Finding A−1 for an n × n matrix A

• Create the n × 2n augmented matrix [ A | I ].

• Use row-reduction (Gauss-Jordan elimination) to find the reduced row echelon form.

• If the first n columns contain the identity matrix I, then the last n columns contain A−1 , i.e.,

      [ A | I ] ∼ · · · ∼ [ I | A−1 ].

If the first n columns do not contain the identity, then A−1 does not exist and A is singular.
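A sketch of this method in software, assuming the SymPy library is available (this code is not part of the original notes); it uses the matrix of Example 5.4.1(a) below:

    from sympy import Matrix, eye

    A = Matrix([[-1, 0, 1],
                [1, 1, 0],
                [3, 1, -1]])
    aug = A.row_join(eye(3))   # the n x 2n matrix [A | I]
    R, _ = aug.rref()          # Gauss-Jordan elimination
    print(R[:, :3])            # the identity block, if A is invertible
    print(R[:, 3:])            # ... in which case these columns form the inverse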

Example 5.4.1. (a) If possible, find the inverse of A = [ −1 0  1 ].
                                                        [  1 1  0 ]
                                                        [  3 1 −1 ]

(b) If possible, find the inverse of A = [ −1 0 1 ].
                                         [  1 1 0 ]
                                         [  3 4 1 ]

5.5 Linear Systems as Matrix Equations


Consider the following matrix equation:

         A            X       B
    [ 2 −1  1 ]     [ x ]   [ 5 ]
    [ 1  7 −5 ]     [ y ] = [ 2 ]
       2×3          [ z ]    2×1
                     3×1

Since A is not a square matrix, there is no matrix A−1 that would allow us to cancel out A on either side.
Instead, let's carry out the multiplication on the left and equate both sides.

          AX               B
    [ 2x −  y +  z ]     [ 5 ]
    [  x + 7y − 5z ]  =  [ 2 ]
          2×1             2×1

In order for these 2 × 1 matrices to be equal, the following equations must be satisfied:

    2x −  y +  z = 5
     x + 7y − 5z = 2
Clearly, any set of values of x, y and z that satisfies this linear system also satisfies the original matrix equation
AX = B and vice versa. In other words, solving the linear system or solving the matrix equation will yield the
same solution set.

Takeaway

Any matrix equation can be expressed as a linear system, and any linear system can be written as a
matrix equation.

In general, a linear system with m equations and n unknowns,


    a11 x1 + a12 x2 + · · · + a1n xn = b1
    a21 x1 + a22 x2 + · · · + a2n xn = b2
       ..       ..               ..    ..
    am1 x1 + am2 x2 + · · · + amn xn = bm

can be written as a matrix equation of the form AX = B, where

    A = [ a11 a12 · · · a1n ]
        [ a21 a22 · · · a2n ]      is the m × n coefficient matrix of the linear system,
        [  ..  ..        ..  ]
        [ am1 am2 · · · amn ]

    X = [ x1 ]   is the n × 1 column matrix of unknowns, and   B = [ b1 ]   is the m × 1 column
        [ x2 ]                                                     [ b2 ]   matrix of constants.
        [ .. ]                                                     [ .. ]
        [ xn ]                                                     [ bm ]

From here on, we will sometimes write AX = B when referring to the corresponding linear system and AX = 0
if the system is homogeneous.

Theorem 5.5.1. A system of linear equations has zero, one or infinitely many solutions. There are no other
possibilities.

Proof.

Notation. In some contexts, the linear system AX = B may be written as A~x = ~b. In fact, since X is an n × 1
column matrix, it can be seen as the vector ~x = hx1 , x2 , · · · , xn i in Rn , and since B is an m × 1 column matrix,
it can be seen as the vector ~b = hb1 , b2 , . . . , bm i ∈ Rm .

As we saw in Section 5.2, when A is invertible, we can solve the matrix equation (and therefore the linear
system) by multiplying either side of the equation by A−1 .

    AX = B
    A−1 AX = A−1 B
    X = A−1 B

Example 5.5.1. Consider the linear system below. Rewrite the system as a matrix equation and use A−1 to
find the general solution if possible.

x+y =2
2x − y = 4
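A numerical check of this example, assuming NumPy is available (a sketch, not part of the original notes):

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [2.0, -1.0]])
    B = np.array([2.0, 4.0])
    # np.linalg.solve is preferred in practice over computing inv(A) @ B
    print(np.linalg.solve(A, B))   # [2. 0.], i.e. x = 2, y = 0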

Remark. (i) The above method yields one set of values given by the computation X = A−1 B. This means
that the corresponding linear system has exactly one solution.

(ii) When the linear system is homogeneous, i.e., of the form AX = 0, then the unique solution is given by
X = A−1 0 = 0. In other words, the system has only the trivial solution.

5.6 The Invertibility Theorem


Takeaway: From section 5.5

When A is an n × n invertible matrix, then

• the linear system AX = B has the unique solution X = A−1 B.

• the homogeneous linear system AX = 0 has only the trivial solution.

Conversely, we will prove that if the linear system AX = B has only one solution, then the matrix A must be
invertible. Equivalently, if an n × n matrix A is singular, then the linear system AX = B must have infinitely
many solutions or no solution at all.

Example 5.6.1. Consider the following linear system

x − 2y = b1
−2x + 4y = b2

(a) Write the system as a matrix equation AX = B and show that A is singular.

(b) Find conditions on b1 and b2 for the system to have i) infinitely many solutions, ii) no solution.

We can summarize the above by saying that “A is invertible” is true if and only if “AX = B has exactly
one solution” is also true. In such a case, we say that the two statements are equivalent. We are now ready to
list several statements that are equivalent to the statement “A is invertible”.

Theorem 5.6.1 (Fundamental Theorem of Invertible Matrices). If A is an n × n matrix, then the following
statements are equivalent, i.e., for a given matrix, they are all true or false.

(a) A is invertible.

(b) The linear system AX = B has exactly one solution for any B.

(c) The homogeneous linear system AX = 0 has only the trivial solution.

(d) The rank of A is n.

(e) The RREF of A is the identity matrix I (i.e., A ∼ I)

Proof. We will prove the equivalence by establishing the following chain of implications

(a) =⇒ (b) =⇒ (c) =⇒ (d) =⇒ (e) =⇒ (a)



Proof of theorem 5.6.1 (Continued)


Chapter 6

Determinants

Contents
6.1 Cofactor Expansion and Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
6.1.1 Cofactor Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
6.1.2 Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
6.1.3 A Technique for Evaluating the Determinant of (ONLY) 3 × 3 Matrices . . . . . . . . . 141
6.2 Evaluating Determinants Using Row Reduction . . . . . . . . . . . . . . . . . . . . . 142
6.3 Properties of Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
6.3.1 Properties of Determinants (Part 1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
6.3.2 Determinant of Elementary Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
6.3.3 Properties of Determinants (Part 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
6.4 Cramer’s Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
6.5 Vector Products and Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151


6.1 Cofactor Expansion and Determinants


" #
a b
Recall that is invertible if and only if ad − bc 6= 0. So we could determine the invertibility of a 2 × 2
c d
matrix by computing a numerical value. In this chapter, our goal is to generalize this result to larger matrices.
The determinant is a number that can be computed from the entries of a square matrix. In this chapter,
we will define the determinant of a square matrix and will learn how to compute it. We will also study the
relation between the determinant of a matrix An×n and the invertibility of An×n .

6.1.1 Cofactor Expansion


Notation. For a given n × n matrix A the determinant of A is denoted by det(A) or |A|.

Definition 6.1.1. Let A = [a] be a 1 × 1 matrix. Then det(A) = a.

To define the determinant of larger matrices, we need some definitions.


Definition 6.1.2. (Minor) Let A = [aij ] be an n × n matrix with n ≥ 2. Then the (i, j)-minor of A,
denoted by Mij is defined to be the determinant of the (n − 1) × (n − 1) submatrix of A obtained by removing
the i-th row and j-th column of A.
 
    Mij = det of the submatrix obtained from

        [ a11 . . . a1j . . . a1n ]
        [  ..        ..        ..  ]
        [ ai1 . . . aij . . . ain ]
        [  ..        ..        ..  ]
        [ an1 . . . anj . . . ann ]

    by deleting row i and column j.

Definition 6.1.3. (Cofactor) The cofactor of the entry aij , denoted by Cij , is defined as Cij = (−1)^(i+j) Mij .

" #
1 −2
Example 6.1.1. Let A = . Find the cofactors C11 , C12 and C22 .
3 2

Remark. Let A = [aij ]. For a given aij , the sign of (−1)^(i+j) (which is used in the definition of Cij ) can be
obtained from the following checkerboard pattern:

    [ + − + − . . . ]
    [ − + − + . . . ]
    [ ..  ..        ]

Indeed,

    Cij =  Mij   if i + j is even,
    Cij = −Mij   if i + j is odd.

Definition 6.1.4. (Cofactor expansion) Let A be an n × n matrix.

• The cofactor expansion along the i-th row is

      ai1 Ci1 + ai2 Ci2 + · · · + ain Cin

  where ai1 , . . . , ain are the entries of the i-th row of A.

• The cofactor expansion along the j-th column is

      a1j C1j + a2j C2j + · · · + anj Cnj

  where a1j , . . . , anj are the entries of the j-th column of A.
" #
1 4
Example 6.1.2. Let A = . Evaluate the cofactor expansion along each row and each column.
−1 2

Remark. Although the cofactor of an entry and the cofactor expansion along a row or a column are both scalars,
they are different.

Theorem 6.1.1. Let A be an n × n matrix. The cofactor expansions along any row or any column are all
equal.
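The cofactor expansion translates directly into a recursive program. The sketch below (plain Python; not part of the original notes, and practical only for small matrices) expands along the first row:

    def det(A):
        # determinant of a square matrix (list of rows) by cofactor expansion
        n = len(A)
        if n == 1:
            return A[0][0]
        total = 0
        for j in range(n):
            # minor: delete row 1 and column j+1
            minor = [row[:j] + row[j + 1:] for row in A[1:]]
            # cofactor sign (-1)^(1 + (j+1)) simplifies to (-1)^j
            total += (-1) ** j * A[0][j] * det(minor)
        return total

    print(det([[1, 4], [-1, 2]]))   # 6, matching Example 6.1.3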

6.1.2 Determinants
Definition 6.1.5. (Determinant) Let A be an n × n matrix. Then the determinant of A, denoted by det(A)
or |A|, is the cofactor expansion along any row or any column.

Example 6.1.3. As we computed in Example 6.1.2,

    |  1 4 |
    | −1 2 | = 6.

Theorem 6.1.2.  | a b |
                | c d | = ad − bc.

Proof. Let’s compute the cofactor expansion along the first row

Example 6.1.4. Evaluate det(A) if A = [ 2 −1 3 ].
                                      [ 1  4 2 ]
                                      [ 0  2 5 ]
Let’s choose the third row or the first column (why?)

Remark. (Smart choice) Pick a row or column with many zeros.

Example 6.1.5. Evaluate

    | −1 2 −1 3 |
    |  1 1  4 2 |
    |  1 0  0 2 |
    |  3 0  2 5 |

Remark. If An×n has a zero row or a zero column then det(A) = 0. (Why?)

Theorem 6.1.3. The determinant of a triangular matrix (upper or lower) or a diagonal matrix is the product
of the entries on its main diagonal. So we have

    | a11 a12 · · · a1n |     | a11  0  · · ·  0  |     | a11         0  |
    |  0  a22 · · · a2n |  =  | a21 a22 · · ·  0  |  =  |     a22        |  = a11 a22 · · · ann
    |         ..    ..  |     |  ..  ..   ..   .. |     |         ..     |
    |  0   0  · · · ann |     | an1 an2 · · · ann |     |  0         ann |
Proof. Let’s prove the theorem for the upper triangular case in a 3 × 3 matrix.

Example 6.1.6. Evaluate | 2 −1 3 |
                        | 0  5 1 |
                        | 0  0 4 |

Theorem 6.1.4. Let A be a square matrix. Then det(AT ) = det(A).

Proof. Since the rows of AT are the columns of A, evaluating the cofactor expansion along the first row of AT
equals evaluating the cofactor expansion along the first column of A.

Remark. det(In×n ) = 1. (Why?)



6.1.3 A Technique for Evaluating the Determinant of (ONLY) 3 × 3 Matrices


Copy the first two columns of the matrix after it; then add the products along the three “southeast” diagonals and subtract the products along the three “northeast” diagonals:

    | a b c | a b
    | d e f | d e        det = aei + bfg + cdh − gec − hfa − idb
    | g h i | g h

Example 6.1.7. Compute the determinant | 2 3 −1 |
                                       | 5 1  0 |
                                       | 6 2  4 |
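For reference, here is how the arithmetic works out with this rule (a worked check, not part of the original notes): with a = 2, b = 3, c = −1, d = 5, e = 1, f = 0, g = 6, h = 2, i = 4,

    det = aei + bfg + cdh − gec − hfa − idb
        = (2)(1)(4) + (3)(0)(6) + (−1)(5)(2) − (6)(1)(−1) − (2)(0)(2) − (4)(5)(3)
        = 8 + 0 − 10 + 6 − 0 − 60
        = −56.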

6.2 Evaluating Determinants Using Row Reduction


Goal

To evaluate determinants using row reduction.

Remark. Any REF or RREF square matrix is upper triangular. Therefore its determinant equals the product
of the entries on the main diagonal.

Theorem 6.2.1.

(a) If B is a matrix that results when we interchange any two rows (columns) of A, then det(B) = − det(A).

As an example for 3 × 3 matrices, we have,

    | a21 a22 a23 |       | a11 a12 a13 |
    | a11 a12 a13 | = −   | a21 a22 a23 |
    | a31 a32 a33 |       | a31 a32 a33 |

(b) If B results when we multiply a row (column) of A by k, then det(B) = k det(A).

As an example for 3 × 3 matrices, we have,

    | a11  a12  a13  |       | a11 a12 a13 |
    | ka21 ka22 ka23 | = k   | a21 a22 a23 |
    | a31  a32  a33  |       | a31 a32 a33 |

(c) If B results when we add a multiple of a row (column) to another row (column) of A, then det(B) = det(A).

As an example for 3 × 3 matrices, we have,

    | a11         a12         a13        |     | a11 a12 a13 |
    | a21 + ka11  a22 + ka12  a23 + ka13 |  =  | a21 a22 a23 |
    | a31         a32         a33        |     | a31 a32 a33 |

Example 6.2.1. Evaluate the following determinant using row operations.

    |  1 3 −1 |
    |  2 1  3 |
    | −1 0  1 |

Method

To find the determinant using row operations:

1. Apply row operations to find an REF and keep track of the changes in the determinants.

2. Since any REF matrix is upper triangular, its determinant can be calculated by multiplying its
diagonal entries.
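For reference, a worked version of Example 6.2.1 using this method (filled in here rather than left blank):

    |  1 3 −1 |     | 1  3 −1 |     | 1  3 −1 |
    |  2 1  3 |  =  | 0 −5  5 |  =  | 0 −5  5 |  =  (1)(−5)(3) = −15,
    | −1 0  1 |     | 0  3  0 |     | 0  0  3 |

using R2 − 2R1 → R2 and R3 + R1 → R3 , then R3 + (3/5)R2 → R3 ; none of these operations changes the determinant.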

6.3 Properties of Determinants


6.3.1 Properties of Determinants (Part 1)
Theorem 6.3.1. Let A be an n × n matrix.

(a) If A has a zero row (column) then det(A) = 0.

(b) If A has two identical rows (columns) then det(A) = 0.

(c) det(kAn×n ) = k^n det(A) where k is a scalar.

Proof.

(a) Apply cofactor expansion along the zero row (column).

(b) Assume that row i, Ri and row j, Rj are identical. Apply the row operation −Ri + Rj → Rj so that row
j of the new matrix becomes zero. Since this row operation does not change the determinant and we have
a zero row, det(A) = 0.

(c) We prove this part for a 3 × 3 matrix A; the general case is similar.

                | ka11 ka12 ka13 |        | a11 a12 a13 |
    det(kA) =   | ka21 ka22 ka23 |  = k³  | a21 a22 a23 |  = k³ det(A),
                | ka31 ka32 ka33 |        | a31 a32 a33 |

factoring k out of each of the three rows.

Example 6.3.1. Evaluate the following determinant.


−1 2 3 4
0 3 5 6
4 2 −3 4
−2 4 7 8

6.3.2 Determinant of Elementary Matrices


Goal

To calculate determinant of elementary matrices.

Theorem 6.3.2. Let E be an n × n elementary matrix.

(a) If E results from interchanging two rows of In then det(E) = −1.

(b) If E results from multiplying one row of In by k then det(E) = k.

(c) If E results from adding a multiple of a row of In to another row then det(E) = 1.

Proof.

(a)

(b)

(c)

Lemma 6.3.3. Let B be n × n and E be n × n elementary. Then det(EB) = det(E) det(B).

Theorem 6.3.4. A square matrix A is invertible if and only if det(A) ≠ 0.

Proof. Let R be the RREF of A. We know that there are elementary matrices E1 , . . . , Er such that

Er . . . E2 E1 A = R

Therefore by the previous Lemma,

    det(R) = det(Er ) · · · det(E2 ) det(E1 ) det(A)

(=⇒):

(⇐=):

6.3.3 Properties of Determinants (Part 2)


Theorem 6.3.5.

(a) If A and B are n × n matrices then det(AB) = det(A) det(B).

(b) If A is invertible, then det(A−1 ) = 1 / det(A).

(c) det(AT ) = det(A).

• Proof of (a): We consider two cases, 1) A is invertible and 2) A is not invertible.


Case 1: Suppose that A is invertible. Then by the Fundamental Theorem of Invertible Matrices, A =
E1 E2 . . . Ek where E1 ,. . . ,Ek are elementary. Now

Case 2: Suppose that A is not invertible. Then AB is not invertible either.



• Proof of (b):

• Proof of (c): Look at Theorem 6.1.4.

Example 6.3.2. If det(A3×3 ) = 5 and |B3×3 | = −2, evaluate det((2A)−1 (5B)T ).

Example 6.3.3. If A is 4 × 4 and A3 − 3A = 0, find the value(s) of det(A).



Example 6.3.4. Let A = [ 8 3 1 ]. For which values of x is the matrix A invertible?
                       [ 2 1 x ]
                       [ 6 3 1 ]

6.4 Cramer’s Rule


Goal

To use determinants to solve a linear system with n equations and n unknowns when the coefficient
matrix is invertible.

Theorem 6.4.1. (Cramer’s Rule)


Let An×n be an invertible matrix. Then the unique solution of the linear system A~x = ~b is given by

    xi = det(Ai (~b)) / det(A) ,   i = 1, . . . , n

where Ai (~b) is the matrix formed by replacing the i-th column of A by ~b. So Ai (~b) is

    [ a11 . . . b1 . . . a1n ]
    [  ..       ..        ..  ]
    [ an1 . . . bn . . . ann ]
                ↑
    i-th column of A is replaced by ~b

Example 6.4.1. Use Cramer’s Rule to solve the system

     x1 + 2x2 = 2
    −x1 + 4x2 = 1
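Cramer's Rule is short enough to state as code. A sketch assuming the SymPy library is available (not part of the original notes; the function name cramer is ours):

    from sympy import Matrix

    def cramer(A, b):
        d = A.det()
        assert d != 0, "Cramer's Rule needs an invertible coefficient matrix"
        xs = []
        for i in range(A.cols):
            Ai = A.copy()
            Ai[:, i] = b              # replace the i-th column of A by b
            xs.append(Ai.det() / d)
        return xs

    print(cramer(Matrix([[1, 2], [-1, 4]]), Matrix([2, 1])))   # [1, 1/2]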

6.5 Vector Products and Determinants


Goal

To compute the cross product of two vectors and also the triple scalar product of three vectors using
determinants.

Recall that (Definition 1.4.1) given ~v = < v1 , v2 , v3 > and w~ = < w1 , w2 , w3 >, we had

    ~v × w~ = < v2 w3 − v3 w2 , v3 w1 − v1 w3 , v1 w2 − v2 w1 >

If we write this vector as (v2 w3 − v3 w2 ) ı̂ − (v1 w3 − v3 w1 ) ̂ + (v1 w2 − v2 w1 ) k̂, then we see that

                | ı̂   ̂   k̂  |
    ~v × w~ =   | v1  v2  v3 |
                | w1  w2  w3 |
Example 6.5.1. Use the determinant method to calculate ~u × ~v if ~u =< 1, −2, 3 > and ~v =< 2, 1, 4 >.

Example 6.5.2. Use properties of determinants to prove the given properties of the cross product.

(a) ~u × ~u = 0 (b) ~u × ~v = −(~v × ~u)



Reminder. Let ~u, ~v and w~ be vectors in R3 . The triple scalar product of ~u, ~v and w~ is ~u · (~v × w~ ).

Theorem 6.5.1. Given ~u = < u1 , u2 , u3 >, ~v = < v1 , v2 , v3 > and w~ = < w1 , w2 , w3 >, the triple scalar product
of ~u, ~v and w~ can be calculated by

                    | u1  u2  u3 |
    ~u · (~v × w~ ) = | v1  v2  v3 |
                    | w1  w2  w3 |

Example 6.5.3. Let ~u = < 1, 2, −1 >, ~v = < −2, 3, 1 > and w~ = < 0, 4, 2 >.

(a) Evaluate ~u · (~v × w~ ).

(b) Interpret your answer in part (a) geometrically.


Chapter 7

Linear Combinations, Spans, &


Independence


7.1 Linear Combinations


Goal

To introduce linear combinations and to use a linear system to determine whether or not a vector may
be expressed as a linear combination of a given collection of vectors.

7.1.1 Algebra of Linear Combinations

Definition 7.1.1. 1. A linear combination of vectors ~v1 , ~v2 , . . . , ~vk ∈ Rn is any sum of the form

    a1 ~v1 + a2 ~v2 + · · · + ak ~vk

where the ai are real numbers (scalars).

2. If for a vector w~ ∈ Rn there exist real numbers c1 , c2 , . . . , ck such that w~ = c1 ~v1 + c2 ~v2 + · · · + ck ~vk ,

then we say that w~ is a linear combination of the vectors ~v1 , ~v2 , . . . , ~vk

(or sometimes more directly, w~ is a linear combination of ~v1 , ~v2 , . . . , ~vk ).

Example 7.1.1. Express the vector h2, −5, 4i as a linear combination of the vectors ı̂, ̂, and k̂.

Takeaway 16

Any vector ha, b, ci ∈ R3 can be written as a linear combination of the standard unit vectors ı̂, ̂, and k̂:

    ha, b, ci = a ı̂ + b ̂ + c k̂

Note: this way of writing vectors is very commonly used in Physics, and this expression is unique.

Example 7.1.2. Consider the vector w~ = h1, 4, 6, −4i. In each of the cases below, determine if w~ can be expressed
as a linear combination of the given vectors or not.

(a) ~v1 = h2, −1, −3, 1i ,  ~v2 = h1, −2, −4, 2i ,  ~v3 = h0, 3, 5, −3i

(b) ~v1 = h2, −1, −3, 1i

Takeaway 17

Given vectors w~ and ~v1 , ~v2 , . . . , ~vk ∈ Rn , w~ may or may not be expressible as a linear combination of
~v1 , ~v2 , . . . , ~vk . And if it is expressible, that expression may or may not be unique.

Remark. When we try to determine if a given vector w~ may be expressed as a linear combination of

vectors ~v1 , ~v2 , . . . , ~vk , we try to solve the vector equation w~ = a1~v1 + a2~v2 + · · · + ak~vk

for the unknowns a1 , a2 , . . . , ak . This is equivalent to solving

a linear system with the same unknowns.

The augmented matrix Aaug for the linear system is of size n × (k + 1)

and the entries of its columns match the vectors ~v1 , . . . , ~vk and w~ :

    Aaug = [ ~v1 ~v2 · · · ~vk | w~ ]
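The remark above can be carried out mechanically. A sketch assuming SymPy is available (not part of the original notes), using the vectors of Example 7.1.2(a):

    from sympy import Matrix, linsolve, symbols

    v1 = Matrix([2, -1, -3, 1])
    v2 = Matrix([1, -2, -4, 2])
    v3 = Matrix([0, 3, 5, -3])
    w = Matrix([1, 4, 6, -4])

    a1, a2, a3 = symbols("a1 a2 a3")
    A = v1.row_join(v2).row_join(v3)       # columns are v1, v2, v3
    # an empty solution set would mean w is NOT a linear combination
    print(linsolve((A, w), [a1, a2, a3]))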

7.2 Spans
Goal

To define the span of a collection of vectors in Rn algebraically and to understand what this definition
means geometrically in R3 .

7.2.1 Spans Viewed Algebraically

Definition 7.2.1. 1. The span of a collection of vectors ~v1 , ~v2 , . . . , ~vk ∈ Rn , span{~v1 , ~v2 , . . . , ~vk }, is the set

of all linear combinations of ~v1 , ~v2 , . . . , ~vk . Symbolically, we

can write span{~v1 , ~v2 , . . . , ~vk } = { a1~v1 + a2~v2 + · · · + ak~vk : a1 , a2 , . . . , ak ∈ R }

2. If S = span{~v1 , ~v2 , . . . , ~vk }, the vectors ~v1 , ~v2 , . . . , ~vk are called generators or

just a generating set for S, and ~v1 , ~v2 , . . . , ~vk are said to span (or generate) S.

Example 7.2.1. Let ~v1 = h2, −1, −3, 1i , ~v2 = h1, −2, −4, 2i , ~v3 = h0, 3, 5, −3i , and w~ = h1, 4, 6, −4i.
Determine if w~ ∈ span{~v1 , ~v2 , ~v3 }.

Example 7.2.2. Let ~v1 = h2, −1, −3, 1i and w~ = h1, 4, 6, −4i. Determine if w~ ∈ span{~v1 }.

Takeaway 18: Rephrasing an old question with new vocabulary

Asking “is w~ ∈ span{~v1 , ~v2 , . . . , ~vk }?” is the same thing as asking “can w~ be expressed as a linear
combination of ~v1 , ~v2 , . . . , ~vk ?”, which is something we explored in the last section.

Example 7.2.3. Show that the zero vector is in every span in Rn .



7.2.2 Spans Viewed Geometrically in R3


To interpret spans geometrically, we will draw all vectors in standard position (i.e. starting at the origin)

Example 7.2.4. In each of the following situations, give a geometric description of the span of the given
collection of vectors.
(a) span{ ~0 }

(b) span{~u}, ~u ≠ ~0    [diagram: a single nonzero vector ~u drawn from the origin in an xyz-frame]

(c) span{~u, ~v }, ~u ∦ ~v    [diagram: two non-parallel vectors ~u and ~v drawn from the origin in an xyz-frame]

Example 7.2.5. What are the geometric possibilities for span{~u, ~v , w~ } for three nonzero vectors ~u, ~v and w~
in R3 ?

case 1: ~u // ~v // w~    [diagram: three parallel vectors through the origin]

case 2: ~u, ~v and w~ are coplanar but not all parallel    [diagrams: three vectors lying in a common plane]

case 3: ~u, ~v and w~ are non-coplanar    [diagram: three vectors not lying in any common plane]
Remark. In the previous two examples, we saw that to generate a line L through the origin one vector is
sufficient, but that L can be generated with more vectors (if they are all parallel). We also saw that a plane
π can be generated with two vectors, but can also be generated with more vectors (if they are all coplanar).
In the next section on linear dependence and independence, we will generalize these types of relationships to
larger collections of vectors in higher dimensions.

Takeaway 19: Spans in R3

Geometrically in R3 , span{~v1 , ~v2 , . . . , ~vk } is:

• the point of the origin if all of the ~vi are ~0,

• a line through the origin if at least one ~vi is nonzero,

and if all of the nonzero ~vi are parallel,

• a plane through the origin if the ~vi are not all parallel,

and if all of the ~vi are coplanar,

• all of R3 if the ~vi are non-coplanar.

Example 7.2.6. If ~v1 = h2, −1, 3i, ~v2 = h−1, 1, −1i, and ~v3 = h5, −1, 9i, describe span{~v1 , ~v2 , ~v3 } geometrically
by giving an equation for the geometric object that represents the span.

Method I: Geometric

Remark. If spans are just points, lines, planes, etc., why bother studying them? We are familiar with these ge-
ometric objects from our home space of R3 . Spans, however, are defined in terms of linear combinations: scalar
multiplication and vector addition which are defined in Rn for all n. Reinterpreting our familiar geometries
in terms of spans lets us work with geometric objects in unfamiliar higher dimensions. The previous method
(geometric) depends on the cross-product, and so works only in R3 . The next method (algebraic) works in any
dimension.

Example 7.2.6 continued: If ~v1 = h2, −1, 3i, ~v2 = h−1, 1, −1i, and ~v3 = h5, −1, 9i, describe span{~v1 , ~v2 , ~v3 }
geometrically by giving an equation for the geometric object that represents the span.
Method II: Algebraic

7.3 Linear Independence


Goal

To define linear dependence and independence, and to determine if a collection of vectors in Rn is linearly
dependent or independent.

Definition 7.3.1. A collection of vectors {~v1 , ~v2 , . . . , ~vk } in Rn is called

1. linearly dependent if the equation a1~v1 + a2~v2 + · · · + ak~vk = ~0

(also called the dependence equation of the vectors ~v1 , ~v2 , . . . , ~vk ) has a solution

where at least one of the scalars a1 , a2 , . . . , ak is nonzero.

The dependence equation written with a nonzero solution is called

a dependence relation for the vectors.

2. linearly independent if it is not linearly dependent.

Example 7.3.1. In each of the cases below, determine if the collection of vectors is linearly dependent or
independent. If the set is linearly dependent, give a dependence relation.
n o
(a) {~v1 , ~v2 } = h1, −2, 0, −1i, h0, 1, 1, 3i

n o
(b) {~v1 , ~v2 , ~v3 } = h1, −1, 1i, h2, −1, −1i, h−4, 1, 5i

Takeaway 20: Homogenous Systems and Linear Dependence

The dependence equation a1~v1 + a2~v2 + · · · + ak~vk = ~0 for vectors in Rn is equivalent to

a homogeneous linear system, H, with n equations

in the k variables a1 , a2 , . . . , ak .

If A is the coefficient matrix of H, then A is an n × k matrix whose columns match the vectors:

    A = [ ~v1 ~v2 · · · ~vk ]

Since homogeneous systems are always consistent (why? the trivial solution a1 = · · · = ak = 0

always works), there are only two possibilities:

(1) H has a unique solution, the trivial solution,

in which case {~v1 , ~v2 , . . . , ~vk } is linearly independent, or

(2) H has infinitely many solutions, in which case

{~v1 , ~v2 , . . . , ~vk } is linearly dependent.



Theorem 7.3.1. The set {~v1 , ~v2 , . . . , ~vk } is linearly dependent if and only if at least one of the vectors in the
set is in the span of the remaining vectors.

Proof.

Corollary. {~v1 , ~v2 } is linearly dependent if and only if ~v1 // ~v2 .

Corollary. {~v1 , ~v2 , . . . , ~vk } is linearly independent if and only if no vector in the set is in the span of the other
vectors.

Takeaway 21: Interpretation of Linear Independence

A rephrasing of Corollary 7.3 is that {~v1 , ~v2 , . . . , ~vk } is linearly independent if and only if there are

no redundant generators in span{~v1 , ~v2 , . . . , ~vk }; every vector

is essential in generating the span.

Said another way, {~v1 , ~v2 , . . . , ~vk } is linearly independent if and only if removing any of the generators

changes (shrinks) span{~v1 , ~v2 , . . . , ~vk }.

Conversely, {~v1 , ~v2 , . . . , ~vk } is linearly dependent if and only if it is possible to remove at least one of

the generators without changing span{~v1 , ~v2 , . . . , ~vk }.



Example 7.3.2. Based on the diagrams below, decide if each of the following sets of vectors is linearly
dependent or independent.

(a) {~v , w~ } is linearly ____________    [diagram of ~v and w~ ]

(b) {~v , w~ } is linearly ____________    [diagram of ~v and w~ ]

(c) {~u, ~v , w~ } is linearly ____________    [diagram of ~u, ~v and w~ ]

Proposition 7.3.2. A collection of k vectors in Rn is linearly dependent if k > n.

Proof.

Takeaway 22: Summary for R3

Set Conditions for Linear Independence Span of the Set when Independent

{~u}

{~u, ~v }

{~u, ~v , w}
~

{~v1 , ~v2 , . . . , ~vk }, k ≥ 4

Example 7.3.3. Verify using the triple scalar product that the set of vectors from part (b) of Example 7.3.1
is linearly dependent: { ~v1 = h1, −1, 1i, ~v2 = h2, −1, −1i, ~v3 = h−4, 1, 5i }.

7.4 Basis and Dimension of a Span


Goal

To define the concepts of basis and dimension for any span.

In Sections 7.2 and 7.3, spans of sets of vectors in R3 were completely described geometrically. These are
either the origin itself, lines through the origin, planes through the origin, or all of R3 .

In this section, we will generalize that work to find similar descriptions for spans of sets of vectors in Rn for
any n. In particular, we will learn how to describe the span of a set of vectors in Rn with as few vectors as
possible.

7.4.1 Basis of a Span


Definition 7.4.1 (Basis). Let S be a span in Rn . Let B = {~v1 , ~v2 , · · · , ~vk } be a set of vectors in Rn . We call
B a basis for S if both of the following conditions are true.

(i) B spans S.

This means that S = span B = { a1~v1 + a2~v2 + · · · + ak~vk | a1 , a2 , . . . , ak ∈ R }.

(ii) B is linearly independent.

This means that the only solution to a1~v1 + a2~v2 + · · · + ak~vk = ~0 is a1 = a2 = · · · = ak = 0.

In this definition, condition (i) ensures that B has enough generators to construct all of S. Condition (ii) ensures that B has no redundant vectors, meaning that removing any of the generators would yield a different and smaller span.

Example 7.4.1. Show that A = {h1, 0i, h0, 1i} is a basis for R2 .

Example 7.4.2. Similarly

    [ 1 ]   [ 0 ]           [ 0 ]
    [ 0 ]   [ 1 ]           [ 0 ]
    [ ⋮ ] , [ ⋮ ] , · · · , [ ⋮ ]
    [ 0 ]   [ 0 ]           [ 1 ]

is a basis for Rn .

Definition 7.4.2. The set

    [ 1 ]   [ 0 ]           [ 0 ]
    [ 0 ]   [ 1 ]           [ 0 ]
    [ ⋮ ] , [ ⋮ ] , · · · , [ ⋮ ]
    [ 0 ]   [ 0 ]           [ 1 ]

is called the standard basis of Rn .
     
 1
 2 4 
Example 7.4.3. Consider A = −1 , −1 , −3 . Consider S = span A.
     
 1

−1 1 

(a) Is A a basis for S?

(b) Describe S geometrically using the method of Section ??.



         
 1
 2   1
 2 4 

(c) Show that B = −1 , −1 is a basis for S = span −1 , −1 , −3 .
         
 1

−1 
  1

−1 1 

Cond (ii) B is linearly independent

Cond (i) span B = S



In the previous example, we spent a lot of energy showing that removing h4, −3, 1i from the set of generators
would not change the span. To avoid repeating this argument for every example, let us recap what we have
learned.

Takeaway 23

Consider A = {~v1 , · · · , ~vk } and assume that ~vi can be written as a linear combination of

    ~v1 , · · · , ~vi−1 , ~vi+1 , · · · , ~vk ;

then span{~v1 , · · · , ~vi−1 , ~vi , ~vi+1 , · · · , ~vk } = span{~v1 , · · · , ~vi−1 , ~vi+1 , · · · , ~vk }.

Using this takeaway, we may remove any vector of A that can be written in terms of other vectors in the
set (that we are keeping). Looking at the start of this last example, we find the reduction

    [  1   2   4   0 ]          [ 1  0  2  0 ]
    [ −1  −1  −3   0 ]   =⇒    [ 0  1  1  0 ]
    [  1  −1   1   0 ]          [ 0  0  0  0 ]

We saw that

    h4, −3, 1i = 2 h1, −1, 1i + 1 h2, −1, −1i.
In general, we can write any of the vectors corresponding to a non-pivot column in terms of the vectors corresponding to pivot columns in the reduced matrix.

Method: To find a basis for S = span {~v1 , ~v2 , · · · , ~vk }.

Step 1. Reduce the matrix H = [ ~v1  ~v2  · · ·  ~vk ], whose columns are the given vectors, to REF or RREF.

Step 2. Let B contain only the vectors of ~v1 , ~v2 , · · · , ~vk whose columns correspond to pivots in either reduced matrix REF (H) or RREF (H). This B is a basis for S.

Step 3. Each non-pivot column in the reduced matrix RREF (H) gives the coefficients necessary to write the corresponding vector as a linear combination of the elements of B.

Let us explain this last point a bit more. The reduction needed to write a redundant vector in terms of the vectors of the basis is included as part of the big reduction: you simply need to ignore the columns of the other redundant vectors. In general, elementary row operations preserve any linear dependency between the columns.
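Here is a minimal sketch of this method in Python with sympy (an assumption; any system with row reduction works), applied to Example 7.4.3: the pivot columns select the basis, and each non-pivot column of RREF (H) stores the coefficients of the corresponding redundant vector.

    from sympy import Matrix

    # the generators <1,-1,1>, <2,-1,-1>, <4,-3,1> as columns
    H = Matrix([[1, 2, 4], [-1, -1, -3], [1, -1, 1]])
    R, pivots = H.rref()

    print(pivots)    # (0, 1): B = {v1, v2} is a basis for S
    print(R.col(2))  # [2, 1, 0]^T: the redundant vector is v3 = 2*v1 + 1*v2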

 1 2 1 
       
 −4 
 4  −16 0 −12
 
Example 7.4.4. Let S = span   ,  , ,  = span {~
v1 , ~v2 , ~v3 , ~v4 }
       
−2  8  1  8 
 
3 1
 

−12
−7

(a) Find a basis B for S.

(b) Write each vector of {~v1 , ~v2 , ~v3 , ~v4 } as a linear combination of the basis B.

(c) Find a different basis for S.

As the last example shows, a span does not usually have a unique basis. In fact, any non-trivial span will have infinitely many different bases.

Example 7.4.5. Give three different bases for R2 . No justification necessary.



Although bases themselves are not unique, once you have fixed a basis B for your span S, any vector of S
will be expressed in a unique way as a linear combination of the elements of B.

Theorem 7.4.1. Let B = {~v1 , · · · , ~vk } be a basis for S. Let ~w be any element of S. There are unique real numbers a1 , · · · , ak so that

    ~w = a1~v1 + a2~v2 + · · · + ak~vk .

Proof. • There are such real numbers a1 , · · · , ak .

• Uniqueness of a1 , · · · , ak .
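As a numeric sanity check of this theorem (a Python/numpy sketch, same software assumption), the coordinates a1 , . . . , ak can be found by solving the linear system whose coefficient columns are the basis vectors. Below, ~w = h4, −3, 1i is expressed in the basis B = { h1, −1, 1i, h2, −1, −1i } of Example 7.4.3.

    import numpy as np

    B = np.array([[1, 2], [-1, -1], [1, -1]], dtype=float)  # basis vectors as columns
    w = np.array([4, -3, 1], dtype=float)

    # B a = w is overdetermined but consistent because w lies in S, so least
    # squares recovers the exact, unique coordinate vector
    a, *_ = np.linalg.lstsq(B, w, rcond=None)
    print(a)   # [2. 1.]  ->  w = 2*v1 + 1*v2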

7.4.2 The Dimension of a Span


As we have seen in the previous section, a span does not have a unique basis. In fact, any set of two non-parallel vectors in R2 gives a basis for R2 . Similarly, any three non-coplanar vectors in R3 form a basis for R3 . Notice that even though the bases are not unique, they all have the same number of elements.

Theorem 7.4.2. Let S be a span and let B1 = {~v1 , ~v2 , · · · , ~vk } and B2 = {~w1 , ~w2 , · · · , ~wr } be two bases for S. Then k = r.

Proof. We will show that k ≥ r. A similar argument shows that r ≥ k and hence that r = k as claimed.

We just showed that the number of elements in a basis does not depend on the chosen basis but only on
the span itself. This allows for the following definition.

Definition 7.4.3 (Dimension). Let S be a span. The dimension of S, denoted dim S, is the number of elements in any basis for S.

Example 7.4.6. Rn has dimension n.

The dimension of a span gives its geometry. Following Takeaway ?? on page ??, we get the following
correspondence between the dimension of a span and its geometric characteristics.

Dimension    Sketch    Geometry

0                      the origin (a single point)
1                      a line through the origin
2                      a plane through the origin
3                      all of R3

     
 2
 1 −1 
Example 7.4.7. Consider S = span 3 , 4 ,  6  .
     
 1

3 7 

(a) Find the dimension of S.

(b) Describe S geometrically.

(c) Find an implicit equation for S.
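A computational sketch for this example (Python/numpy, same software assumption): the rank of the matrix of generators gives the dimension, and since the dimension turns out to be 2, a normal vector for the plane, and from it an implicit equation, comes from a cross product.

    import numpy as np

    v1, v2, v3 = np.array([2, 3, 1]), np.array([1, 4, 3]), np.array([-1, 6, 7])

    A = np.column_stack([v1, v2, v3])
    print(np.linalg.matrix_rank(A))   # 2 -> S is a plane through the origin

    n = np.cross(v1, v2)              # [5, -5, 5], a normal vector for S
    print(np.dot(n, v3))              # 0: v3 also satisfies x - y + z = 0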



7.5 Subspaces
Goal

Define the concept of a subspace and link subspaces and spans.

7.5.1 Vector Spaces


Recall that a subset is a set included inside another set. Similarly, a subspace is a space included inside another space. Before getting into the idea of subspaces, we will take a moment to discuss what a space is.

Definition 7.5.1 (Vector space). A vector space is a non-empty set V with two operations:

1. addition of vectors “+”, so that ~v + ~w is in V for all ~v and ~w in V ;

2. scalar multiplication, so that k~v is in V for all ~v in V and all k ∈ R.

These operations must satisfy certain rules; for example, the addition must be commutative.

Example 7.5.1. (a) Rn is a vector space with the standard vector operations.

(b) Consider the set

    P = { a0 + a1 x + · · · + an xn | a0 , a1 , · · · , an ∈ R }

of polynomials. Then P is a vector space with the usual addition of polynomials and multiplication by a real number.

(c) Consider the set

    F(R) = { f : R → R }

of real functions with domain R. F(R) is a vector space with the usual addition of functions and multiplication of functions by real numbers.

Because all of these examples have addition and scalar multiplication, everything that we have covered in this chapter could be applied to these settings. Hence you could talk about a linear combination of functions or the dimension of a span of polynomials. It turns out that this generalization from the familiar vectors in Rn to “functions as vectors” is extremely useful. We will save all of that fun for Linear 2.

7.5.2 Subspaces of Rn
Definition 7.5.2 (Subspaces of Rn ). A subset S of Rn is called a subspace if the following conditions are true.

1. (zero element) ~0 is in S.

2. (closure under addition) For any ~v and ~w in S, ~v + ~w is in S.

3. (closure under scalar multiplication) For any ~v in S and any k ∈ R, k~v is in S.



Example 7.5.2. Show that the following sets are subspaces by proving the three properties or show that they
are not subspaces by finding a specific counter-example to one of the three properties.

(a) V = { hx, y, zi ∈ R3 | x + y + z = 3 }

(b) W = { hx, y, zi ∈ R3 | x = 2y }
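A small numeric probe (a Python sketch, same software assumption; spot checks, not a proof) can flag a failing axiom quickly: for V the zero vector already violates the defining equation, while the spot checks for W all pass, consistent with W actually being a subspace.

    import numpy as np

    in_V = lambda p: p[0] + p[1] + p[2] == 3
    in_W = lambda p: p[0] == 2 * p[1]

    print(in_V(np.zeros(3)))   # False: the zero element is not in V

    p, q = np.array([2, 1, 0]), np.array([4, 2, 5])
    print(in_W(p), in_W(q), in_W(p + q), in_W(3 * p))   # all True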


7.5.3 Subspaces and Spans


We are now ready to show that subspaces and spans are actually the same objects.

Proposition 7.5.1. 1. For any set A of vectors in Rn , span A is a subspace of Rn .

2. For any subspace V of Rn , there is a set A of vectors so that V = span A.

Proof. 1. Let A = {~v1 , ~v2 , · · · , ~vm } be a subset of Rn . We will show that

    S = span A = { a1~v1 + a2~v2 + · · · + am~vm | a1 , a2 , · · · , am ∈ R }

is a subspace by proving all three properties.

2. Take a subspace V in Rn . Here is a sketch of how to build a set A so that span A = V .

Since subspaces and spans are the same objects, subspaces also have bases and dimensions. By Theorem
??, given a basis B for a subspace V of Rn , every element of V can be written in a unique way as a linear
combination of the elements of a basis B of V .

Proposition ?? gives some insight on subspaces. We now know that subspaces are the origin, lines through the origin, planes through the origin, . . .

In the next example, we can use that insight to determine whether a given subset is a subspace.

Example 7.5.3. Determine whether

    S = { hx, yi ∈ R2 | xy = 0 }

is a subspace of R2 . If it is, prove all three properties. If it is not, find a specific counter-example to one of the three properties.

Note that using our knowledge of spans, we can now show that a subset of Rn is a subspace without proving
all three properties. In future examples, however, we may force you to show the three properties.

Example 7.5.4. Consider

V = {hw, x, y, zi |2x − y + z = 0 and w = x + y + z} .

Show that V is a subspace of R4 , find a basis for it and give a geometric description of V .
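Here is a sketch of the computation in Python with sympy (same software assumption): writing the two defining conditions as a homogeneous linear system in the variables (w, x, y, z), a basis for V is exactly a basis for the null space of the coefficient matrix.

    from sympy import Matrix

    # w - x - y - z = 0  and  2x - y + z = 0, variables ordered (w, x, y, z)
    M = Matrix([[1, -1, -1, -1],
                [0,  2, -1,  1]])

    basis = M.nullspace()
    print(len(basis))  # 2: dim V = 2, a plane through the origin in R^4
    print(basis)       # e.g. [3/2, 1/2, 1, 0]^T and [1/2, -1/2, 0, 1]^T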

The following proposition generalizes the work done in the last example.

Proposition 7.5.2. The solution set of a homogeneous linear system in n variables is a subspace of Rn .

Proof. When using the techniques of Chapter 3 to solve a homogeneous linear system, the solution set S is written as

    S = { ~x = s1 ~w1 + s2 ~w2 + · · · + sk ~wk | s1 , . . . , sk ∈ R } = span{ ~w1 , . . . , ~wk }.

When solving a homogeneous system, we get a set of generators {~w1 , · · · , ~wk } (one from each free variable).
This set of generators is automatically linearly independent. To see this, consider a linear combination

    s1 ~w1 + · · · + sj ~wj + · · · + sk ~wk = ~0.

To show that sj needs to be zero, look at the row of its corresponding free variable, say xi .

In that row, ~wj has entry 1 while every other generator has entry 0, so comparing the i-th entries of both sides gives

    xi = s1 · 0 + · · · + sj · 1 + · · · + sk · 0 = sj = 0.

Hence sj = 0. Similarly, s1 = · · · = sj = · · · = sk = 0, so {~w1 , · · · , ~wk } is linearly independent and hence a basis for S.

Takeaway

When using the method of Chapter 3 to solve a homogeneous linear system, you get

    ~x = s1 ~w1 + s2 ~w2 + · · · + sk ~wk .

The vectors ~w1 , . . . , ~wk are automatically linearly independent and so

    B = { ~w1 , ~w2 , . . . , ~wk }

is a basis for the solution set.
