
A Comprehensive Summary on Introductory Linear Algebra

mathvault.ca

Table of Contents

1 Matrix Terminologies
2 Operations on Matrices
   2.1 Scalar Multiplication
   2.2 Addition
   2.3 Multiplication
   2.4 Transposition
   2.5 Trace
3 Elementary Row Operations
4 Row Reduction on Matrices
   4.1 Preliminaries
   4.2 Procedure
   4.3 Comparison of Different Forms
5 System of Linear Equations
   5.1 Procedure for Solving a System
   5.2 Facts
6 Determinant
   6.1 Definition
   6.2 Facts
   6.3 Properties on Matrix Rows and Columns
   6.4 Properties on Matrices
7 Inverse
   7.1 Invertibility
   7.2 Procedures for Finding Inverses
       7.2.1 Terminologies
       7.2.2 Procedures
   7.3 Properties on Matrices
8 Elementary Matrices
   8.1 Definition
   8.2 Facts
9 Diagonalization
   9.1 Definitions
   9.2 Characteristic Polynomial
   9.3 Properties of Eigenvectors
   9.4 Diagonalizable Matrices
       9.4.1 Definition and Procedure
       9.4.2 Properties of Diagonalizable Matrices
10 Basis and Related Topics
   10.1 Definitions
   10.2 Facts
   10.3 Procedure for Basis Extraction
   10.4 Equivalences
11 Subspaces
   11.1 Definition
   11.2 Examples of Subspace
   11.3 Standard Subspaces
       11.3.1 Definitions
       11.3.2 Bases of Standard Subspaces
12 Operations on Vectors
   12.1 Preliminaries
   12.2 Length
   12.3 Dot Product
       12.3.1 Definition and Facts
       12.3.2 Properties
   12.4 Cross Product
       12.4.1 Definition and Facts
       12.4.2 Properties
   12.5 Projection-Related Operations
13 2D/3D Vector Geometry
   13.1 Equations
       13.1.1 Lines
       13.1.2 Planes
   13.2 Point vs. Point
   13.3 Point vs. Line
   13.4 Point vs. Plane
   13.5 Line vs. Line
   13.6 Line vs. Plane
   13.7 Plane vs. Plane
14 Matrix Transformation
   14.1 Preliminaries
   14.2 Standard Matrix Transformations in 2D

1 Matrix Terminologies

• Diagonal matrix: A matrix whose non-zero entries are found in the main diagonal only.
• Identity matrix: A diagonal n × n matrix with 1 across the main diagonal. Usually denoted by I.
• Upper-triangular matrix: A matrix whose non-zero entries are found at or above the main diagonal only.
• Lower-triangular matrix: A matrix whose non-zero entries are found at or below the main diagonal only.

2 Operations on Matrices

2.1 Scalar Multiplication

• Preserves the dimension of the matrix.
• Scalar division can be defined similarly, i.e., A/k := (1/k)A.
• k₁(k₂A) = (k₁k₂)A

2.2 Addition

• Requires two matrices of the same dimension.
• Preserves the dimension of the matrices.
• A + B = B + A
• (A + B) + C = A + (B + C)
• k(A + B) = kA + kB

2.3 Multiplication

• Requires two matrices with matching "inner dimensions".
• Produces a matrix with the corresponding "outer dimensions" (i.e., m × n times n × p → m × p).
• The ij entry of AB results from dot-multiplying the ith row of A with the jth column of B.
• (AB)C = A(BC), but AB ≠ BA in general.
• (kA)B = k(AB) = A(kB)
• A(B + C) = AB + AC, (A + B)C = AC + BC

2.4 Transposition

• Reverses the dimension of the matrix.
• (Aᵀ)ᵢⱼ = Aⱼᵢ
• The ith row of Aᵀ corresponds to the ith column of A.
• The jth column of Aᵀ corresponds to the jth row of A.
• (Aᵀ)ᵀ = A, (kA)ᵀ = k(Aᵀ)
• (A + B)ᵀ = Aᵀ + Bᵀ, (AB)ᵀ = BᵀAᵀ

2.5 Trace

Given an n × n matrix A, the trace of A — or Tr(A) for short — is the sum of all the entries in A's main diagonal.

• Tr(kA) = k Tr(A)
• Tr(A + B) = Tr(A) + Tr(B)
• Tr(Aᵀ) = Tr(A), Tr(AB) = Tr(BA)
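As an added aside (not from the original summary), the following Python sketch — assuming NumPy is available — spot-checks several of the identities above numerically:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [5.0, 2.0]])
C = np.array([[2.0, 0.0], [1.0, 1.0]])
k = 3.0

# Scalar multiplication distributes over addition: k(A + B) = kA + kB
assert np.allclose(k * (A + B), k * A + k * B)

# Multiplication is associative but not commutative in general
assert np.allclose((A @ B) @ C, A @ (B @ C))
print("AB == BA?", np.allclose(A @ B, B @ A))   # False in general

# Transposition reverses products: (AB)^T = B^T A^T
assert np.allclose((A @ B).T, B.T @ A.T)

# Trace is invariant under cyclic permutation: Tr(AB) = Tr(BA)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```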
3 Elementary Row Operations

The three elementary row operations are:

• Row Multiplication (Rᵢ → kRᵢ, k ≠ 0)
• Row Swapping (Rᵢ ↔ Rⱼ)
• Row Absorption (Rᵢ → Rᵢ + kRⱼ)

Note that each elementary row operation can be reversed by another elementary row operation of the same type.

4 Row Reduction on Matrices

4.1 Preliminaries

At the most fundamental level, to perform row reduction on a matrix is to alternate between the following two processes:

1. Finding a leading number (i.e., a small, non-zero number) in a column — whenever applicable.
2. Nullifying all other entries in the same column.

By running through all the columns this way from left to right, the final matrix — also known as the reduced matrix — can then be obtained.

4.2 Procedure

To search for the ith leading number in a column, check the column from the ith entry onwards:

• If all said entries are zero, search for the ith leading number in the next column instead.
• If not, use elementary row operations to:
   • Create a leading number in the ith entry of the column.
   • Reduce all entries beneath it to zero.
  Then proceed to search for the (i + 1)th leading number in the next column.

Once all the columns are handled, the matrix will be in a ladder form, where:

• Dividing each non-zero row by its leading number puts the matrix into a row-echelon form.
• From there, further reducing all non-leading-one entries in each column to zero puts the matrix into the reduced row-echelon form.

4.3 Comparison of Different Forms

• A ladder form is similar to a row-echelon form, except that a non-zero row need not start with 1.
• In a row-echelon form, all entries beneath a leading one are 0. In a reduced row-echelon form, all entries above and beneath a leading one are 0.
• While a matrix can have several ladder forms and row-echelon forms, it has only one reduced row-echelon form.
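As an added illustration, the procedure can be carried out mechanically. Here is a minimal SymPy sketch (assuming SymPy is available), where rref() returns the reduced row-echelon form together with the indices of the leading (pivot) columns:

```python
from sympy import Matrix

A = Matrix([
    [2, 4, -2],
    [1, 2,  3],
    [3, 6,  1],
])

# rref() returns the reduced row-echelon form and the pivot columns
R, pivots = A.rref()
print(R)        # Matrix([[1, 2, 0], [0, 0, 1], [0, 0, 0]])
print(pivots)   # (0, 2) — leading ones in the 1st and 3rd columns
```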


5 System of Linear Equations

5.1 Procedure for Solving a System

To solve a system of linear equations with m equations and n variables using matrices, proceed as follows:

1. Convert the equations into an augmented matrix — with the m × n coefficient matrix on the left and the m × 1 constant matrix on the right.

2. Reduce the augmented matrix into a ladder form, or — if needed — a row-echelon form or a reduced row-echelon form. Once there, three scenarios ensue:

   • If an inconsistent row — a row with zero everywhere except the last entry — pops up during the row reduction process, then the original system is unsolvable.
   • If the reduced matrix has n leading numbers and no inconsistent row exists, then the system has a unique solution.
   • If the reduced matrix has fewer than n leading numbers and no inconsistent row exists, then the system has infinitely many solutions, in which case:
      • When converted back into equation form, the system will have fewer than n leading variables.
      • By turning the non-leading variables into parameters and applying back substitution, we can then find the general solution to the system — along with the basic vectors that generate the solution set.

5.2 Facts

• For an n-variable system with infinitely many solutions:

  # of parameters = # of non-leading variables = n − # of leading variables

• A homogeneous system — a system whose constant terms are all zero — has at least the trivial solution.
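An added sketch of the procedure using SymPy: row-reduce the augmented matrix and read off the unique solution (the system here is illustrative):

```python
from sympy import Matrix, symbols, linsolve

x, y, z = symbols("x y z")

# System: x + y + z = 6,  2y + 5z = -4,  2x + 5y - z = 27
aug = Matrix([
    [1, 1,  1,  6],
    [0, 2,  5, -4],
    [2, 5, -1, 27],
])

print(aug.rref()[0])              # reduced row-echelon form: 3 leading ones
print(linsolve(aug, (x, y, z)))   # {(5, 3, -2)} — a unique solution
```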
6 Determinant

6.1 Definition

Given an n × n matrix A, the determinant of A — or det(A) for short — is a scalar quantity which can be defined recursively:

• For a 2 × 2 matrix:

  det [a b; c d] := ad − bc

• For an n × n matrix A, Caᵢⱼ — the cofactor of the ij entry of A — is defined to be the signed determinant of the matrix resulting from removing the ith row and the jth column of A.

• For a general n × n matrix A (with n ≥ 3), the determinant can be defined as the cofactor expansion along the first row of A:

  det(A) := a₁₁Ca₁₁ + · · · + a₁ₙCa₁ₙ

6.2 Facts

Given an n × n matrix A, det(A) can be obtained by cofactor-expanding along any row or any column of A. As a result:

• If A has a row of zeros or a column of zeros, then det(A) = 0.
• If A is upper or lower-triangular, then det(A) is the product of its main-diagonal entries.

6.3 Properties on Matrix Rows and Columns

In what follows:

• All matrices presented are assumed to be n × n matrices.
• Rᵢ and Rⱼ are assumed to be n-entry row vectors.
• Cᵢ and Cⱼ are assumed to be n-entry column vectors.

Row/Column Multiplication — scaling a single row or column scales the determinant:

  det(…, kRᵢ, …) = k det(…, Rᵢ, …)
  det(… kCᵢ …) = k det(… Cᵢ …)

Row/Column Addition — the determinant is additive in any single row or column:

  det(…, Rᵢ + Rⱼ, …) = det(…, Rᵢ, …) + det(…, Rⱼ, …)
  det(… Cᵢ + Cⱼ …) = det(… Cᵢ …) + det(… Cⱼ …)

Row/Column Swapping — swapping two rows or two columns flips the sign of the determinant:

  det(…, Rⱼ, …, Rᵢ, …) = − det(…, Rᵢ, …, Rⱼ, …)
  det(… Cⱼ … Cᵢ …) = − det(… Cᵢ … Cⱼ …)

Row/Column Absorption — adding a multiple of one row (or column) to another leaves the determinant unchanged:

  det(…, Rᵢ, …, Rⱼ + kRᵢ, …) = det(…, Rᵢ, …, Rⱼ, …)
  det(… Cᵢ … Cⱼ + kCᵢ …) = det(… Cᵢ … Cⱼ …)

6.4 Properties on Matrices

Given an n × n matrix A:

• If A has a duplicate row or a duplicate column, then det(A) = 0.
• det(kA) = kⁿ det(A)
• det(Aᵀ) = det(A)
• det(A⁻¹) = 1 / det(A) (provided that A is invertible)
• det(Adj(A)) = det(A)ⁿ⁻¹

In addition, if B is also an n × n matrix, then:

  det(AB) = det(A) det(B)

In particular:

  det(Aᵐ) = det(A)ᵐ (where m ∈ ℕ)
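The recursive definition translates directly into code. The following illustrative Python function (added here; fine for small matrices, though cofactor expansion costs O(n!) and row-reduction-based methods are far faster) computes det(A) by expanding along the first row:

```python
def det(A):
    """Determinant via cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    if n == 2:  # base case: det [a b; c d] = ad - bc
        return A[0][0] * A[1][1] - A[0][1] * A[1][0]
    total = 0
    for j in range(n):
        # Minor: remove the first row and the (j+1)th column
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        # The cofactor carries the alternating sign (-1)^j
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[2, 0, 1], [1, 3, -1], [0, 5, 4]]))  # 39
```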
7 Inverse

7.1 Invertibility

Given an n × n matrix A and the n × n identity matrix I, A is said to be invertible if and only if there is an n × n matrix B such that:

  AB = I and BA = I

In which case, since B is the only matrix with such properties, it is referred to as the inverse of A — or A⁻¹ for short.

The following claims are all equivalent:

• A is invertible.
• det(A) ≠ 0
• The equation Ax = 0 (where x and 0 are n-entry column vectors denoting the variable vector and the zero vector, respectively) has only the trivial solution.
• The reduced row-echelon form of A is I.
• The equation Ax = b has a unique solution for each n-entry column vector b.
• The rows of A are linearly independent.
• The columns of A are linearly independent.

7.2 Procedures for Finding Inverses

7.2.1 Terminologies

Given an n × n matrix A:

• The cofactor matrix of A is the n × n matrix whose ij entry is the cofactor of aᵢⱼ.
• Adj(A), the adjoint of A, is the transpose of the cofactor matrix of A.

7.2.2 Procedures

Given an invertible n × n matrix A, two methods for finding A⁻¹ exist:

• Adjoint method: Since A Adj(A) = det(A)I and det(A) ≠ 0, A⁻¹ can be determined using the following formula:

  A⁻¹ = Adj(A) / det(A)

  In particular, in the case of a 2 × 2 matrix:

  [a b; c d]⁻¹ = [d −b; −c a] / (ad − bc)

• Row reduction method: By writing A and I alongside each other and carrying out row reduction until A is reduced to the identity matrix, the original I will be reduced to A⁻¹ as well:

  [A | I] —(row reduction)→ [I | A⁻¹]

7.3 Properties on Matrices

Given an invertible n × n matrix A:

• (A⁻¹)⁻¹ = A
• (kA)⁻¹ = A⁻¹/k (where k ≠ 0)
• (Aᵀ)⁻¹ = (A⁻¹)ᵀ

In addition, if B is also an invertible n × n matrix, then:

  (AB)⁻¹ = B⁻¹A⁻¹

In particular:

  (Aᵐ)⁻¹ = (A⁻¹)ᵐ (where m ∈ ℕ)
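As an added check, the two methods can be compared with SymPy, whose adjugate() computes the adjoint used by the adjoint method:

```python
from sympy import Matrix, eye

A = Matrix([[2, 1], [5, 3]])

# Adjoint method: A^{-1} = Adj(A) / det(A)
inv_adj = A.adjugate() / A.det()

# Row reduction method: reduce [A | I] to [I | A^{-1}]
aug = A.row_join(eye(2))
inv_rr = aug.rref()[0][:, 2:]

assert inv_adj == inv_rr == Matrix([[3, -1], [-5, 2]])
```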
8 Elementary Matrices

8.1 Definition

An n × n elementary matrix is a matrix obtainable by performing one elementary row operation on the n × n identity matrix I.

As a result, three types of elementary matrices exist:

• Those resulting from row multiplication
• Those resulting from row swapping
• Those resulting from row absorption

8.2 Facts

Given a matrix A with n rows:

• Performing an elementary row operation on A is equivalent to left-multiplying A by the n × n elementary matrix associated with the said operation.
• Since each elementary row operation is reversible, each elementary matrix is invertible.

In particular, if A is an n × n invertible matrix, then:

• A⁻¹ can be conceived as the series of elementary row operations leading A to I (i.e., A⁻¹ = Eₙ · · · E₁).
• Similarly, A can be conceived as the series of elementary row operations leading I to A (i.e., A = E₁⁻¹ · · · Eₙ⁻¹).

More schematically:

  A —(E₁, …, Eₙ)→ I
  I —(Eₙ⁻¹, …, E₁⁻¹)→ A
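An added NumPy illustration of the left-multiplication fact, using the elementary matrix for the row absorption R₂ → R₂ − 3R₁:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

# Elementary matrix for R2 -> R2 - 3*R1 (obtained by applying that op to I)
E = np.array([[1.0, 0.0], [-3.0, 1.0]])

print(E @ A)             # [[1, 2], [0, -2]] — same as performing the row op on A
print(np.linalg.inv(E))  # [[1, 0], [3, 1]] — the reverse absorption R2 -> R2 + 3*R1
```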
9 Diagonalization

9.1 Definitions

Given an n × n matrix A:

• A number λ is called an eigenvalue of A if and only if the equation Ax = λx has a non-zero solution. In which case:
   • The set of all the solutions is called the eigenspace of A with eigenvalue λ.
   • Each non-zero solution is called an eigenvector of A with eigenvalue λ.

9.2 Characteristic Polynomial

Given an n × n matrix A, the following claims are equivalent:

• λ is an eigenvalue of A.
• The equation Ax = λx has a non-zero solution.
• The equation (λI − A)x = 0 has a non-zero solution.
• det(λI − A) = 0

In other words:

• If we define det(xI − A) as the characteristic polynomial of A, then its roots are precisely the eigenvalues of A.
• Once an eigenvalue λ is determined, its eigenspace and basic eigenvectors can also be found by solving the equation (λI − A)x = 0.

9.3 Properties of Eigenvectors

• Since eigenspaces of distinct eigenvalues are — apart from the zero vector — disjoint from each other, it follows that every eigenvector is associated with a unique eigenvalue.
• The collection of all basic eigenvectors (from the distinct eigenspaces) forms a linearly independent set.
• As a result, an n × n matrix can have at most n basic eigenvectors.

9.4 Diagonalizable Matrices

9.4.1 Definition and Procedure

Given an n × n matrix A, A is said to be diagonalizable if and only if there exist an n × n diagonal matrix D and an n × n invertible matrix P such that:

  P⁻¹AP = D

In fact, it can be shown that:

  A is diagonalizable ⇐⇒ A has n linearly independent eigenvectors.

For example:

• If A has n eigenvalues, then it is automatically diagonalizable.
• If A has fewer than n eigenvalues, but nevertheless possesses n basic eigenvectors, then it is still diagonalizable.

In which case, if we let:

• v₁, …, vₙ be the n basic eigenvectors associated with the eigenvalues λ₁, …, λₙ, respectively,
• P be the n × n matrix [v₁ · · · vₙ],
• D be the n × n diagonal matrix with λ₁, …, λₙ in the main diagonal,

then it would follow that:

  P⁻¹AP = D or, equivalently, A = PDP⁻¹

9.4.2 Properties of Diagonalizable Matrices

Given an n × n diagonalizable matrix A with eigenvalues λ₁, …, λₙ (with repeating multiplicities):

• Aᵐ = PDᵐP⁻¹ (where m ∈ ℕ)
  (Note: This formula can be used to compute any power of A quickly.)
• Tr(A) = λ₁ + · · · + λₙ
• det(A) = λ₁ × · · · × λₙ
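An added SymPy sketch of the diagonalization procedure and the power formula (the matrix is illustrative):

```python
from sympy import Matrix

A = Matrix([[4, 1], [2, 3]])   # eigenvalues 2 and 5

P, D = A.diagonalize()         # columns of P are basic eigenvectors
print(D)                       # e.g. Matrix([[2, 0], [0, 5]])

# Verify A = P D P^{-1}, then compute A^3 via the power formula
assert A == P * D * P.inv()
assert A**3 == P * D**3 * P.inv()
```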


10 Basis and Related Topics

10.1 Definitions

Given a set of vectors v, v₁, …, vₙ from a vector space V:

• A linear combination of v₁, …, vₙ := a vector of the form k₁v₁ + · · · + kₙvₙ (for some numbers k₁, …, kₙ).
• Span(v₁, …, vₙ) := the set of all linear combinations of v₁, …, vₙ.
  (In other words, to show that v is in the span of v₁, …, vₙ is to show that v can be expressed as a linear combination of v₁, …, vₙ.)
• v₁, …, vₙ is a spanning set of V (equiv., v₁, …, vₙ span V) :⇐⇒ Span(v₁, …, vₙ) = V.
• v₁, …, vₙ are linearly independent :⇐⇒ the equation x₁v₁ + · · · + xₙvₙ = 0 has only the trivial solution.
  (In other words, the zero vector in V can be expressed as a linear combination of v₁, …, vₙ in a unique way.)
• v₁, …, vₙ is a basis of V :⇐⇒ v₁, …, vₙ span V and are linearly independent.

10.2 Facts

Given a series of vectors v₁, …, vₙ from a vector space V:

• v₁, …, vₙ are linearly dependent ⇐⇒ one of the vectors vᵢ can be expressed as a linear combination of the other vectors.
• v₁, …, vₙ is a basis of V =⇒ every vector in V can be expressed as a linear combination of v₁, …, vₙ in a unique way.

In general, given two sets A and B:

  If A is a linearly independent set in V and B is a spanning set of V, then |A| ≤ |B|.

In particular:

• If A and B are both bases of V, then |A| = |B|.
• In other words, any two bases of V have the same number of vectors. This number is known as the dimension of V — or dim(V) for short.

As a result, given a series of vectors v₁, …, vₙ in V, we have that:

• n < dim(V) =⇒ v₁, …, vₙ do not span V.
• n > dim(V) =⇒ v₁, …, vₙ are not linearly independent.
• n = dim(V) and v₁, …, vₙ are linearly independent =⇒ v₁, …, vₙ is a basis of V.

10.3 Procedure for Basis Extraction

Given a series of vectors v₁, …, vₙ from a vector space V, a basis of Span(v₁, …, vₙ) can be determined as follows:

1. Create the matrix whose rows are v₁, …, vₙ.
2. Reduce the matrix to a ladder form (equiv., a row-echelon form or a reduced row-echelon form). Once there:
   • The non-zero rows of the reduced matrix form a basis of Span(v₁, …, vₙ).
   • The number of those non-zero rows is the dimension of Span(v₁, …, vₙ).

10.4 Equivalences

Given a series of vectors v, v₁, …, vₙ, u₁, …, uₘ from a vector space V:

• v is a linear combination of v₁, …, vₙ ⇐⇒ the augmented matrix [v₁ · · · vₙ | v] is solvable.
• v₁, …, vₙ are linearly independent ⇐⇒ the matrix [v₁ · · · vₙ] can be reduced to a ladder form (equiv., a row-echelon form or a reduced row-echelon form) with n leading numbers.
• Span(u₁, …, uₘ) = Span(v₁, …, vₙ) ⇐⇒ each uᵢ can be expressed as a linear combination of v₁, …, vₙ, and each vᵢ can be expressed as a linear combination of u₁, …, uₘ.
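An added SymPy sketch of the basis-extraction procedure — stack the vectors as rows, row-reduce, and keep the non-zero rows:

```python
from sympy import Matrix

v1, v2, v3 = [1, 2, 3], [2, 4, 6], [1, 0, 1]   # v2 = 2*v1, so dim = 2

M = Matrix([v1, v2, v3])       # one vector per row
R, _ = M.rref()

basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]
print(basis)        # [Matrix([[1, 0, 1]]), Matrix([[0, 1, 1]])]
print(len(basis))   # 2 = dim(Span(v1, v2, v3))
```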
11 Subspaces

11.1 Definition

Given a subset S of a vector space V, S is called a subspace of V if and only if all of the following three conditions hold:

1. 0_V ∈ S.
2. For all v₁, v₂ ∈ S, v₁ + v₂ ∈ S.
3. For all v ∈ S and each number k, kv ∈ S.

11.2 Examples of Subspace

Given a vector space V and a series of vectors v₁, …, vₙ in V, some examples of subspace include:

• The trivial subspaces (i.e., {0_V} and V)
• Span(v₁, …, vₙ)
• A line through the origin (in ℝ² or ℝ³)
• A plane through the origin (in ℝ³)

11.3 Standard Subspaces

11.3.1 Definitions

Given an m × n matrix A:

• The row space of A — or Row(A) for short — is the span of the rows of A.
• The column space of A — or Col(A) for short — is the span of the columns of A.
• The null space of A — or Null(A) for short — is the set of all solutions to the homogeneous system Ax = 0.
   • In particular, if A is an n × n matrix and λ is an eigenvalue of A, then the eigenspace of A (with eigenvalue λ) = Null(λI − A).

11.3.2 Bases of Standard Subspaces

Given an m × n matrix A, when A is reduced to a ladder form (equiv., a row-echelon form or a reduced row-echelon form) A′:

• The non-zero rows of A′ form a basis of Row(A).
• The columns of A corresponding to the leading columns of A′ form a basis of Col(A).
• The basic vectors in the general solution of the augmented matrix [A′ | 0] form a basis of Null(A), where:

  # of basic vectors = dim(Null(A)) := Nullity(A)

Since in A′ the number of non-zero rows is equal to the number of leading numbers, we have that:

  dim(Row(A)) = dim(Col(A)) := Rank(A)

Furthermore, since in the homogeneous system associated with A′ the numbers of leading variables and non-leading variables add up to n, we also have that:

  Rank(A) + Nullity(A) = n
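An added SymPy sketch of the standard subspaces and the Rank–Nullity identity:

```python
from sympy import Matrix

A = Matrix([[1, 2, 1], [2, 4, 2]])   # a 2 x 3 matrix with rank 1

rank = A.rank()
null_basis = A.nullspace()           # basic vectors of Null(A)

print(rank)              # 1
print(len(null_basis))   # 2 = Nullity(A)
assert rank + len(null_basis) == A.cols   # Rank(A) + Nullity(A) = n
```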
12 Operations on Vectors

12.1 Preliminaries

Given a number k and two vectors u = (u₁, …, uₙ) and v = (v₁, …, vₙ) in ℝⁿ:

• u + v := (u₁ + v₁, …, uₙ + vₙ)
• kv := (kv₁, …, kvₙ)
• 0 := (0, …, 0) (n entries)

In the case where u and v are non-zero vectors:

  u and v are parallel :⇐⇒ u = kv for some number k.

12.2 Length

Given a vector v = (v₁, …, vₙ) in ℝⁿ, the length of v — or |v| for short — is defined as follows:

  |v| := √(v₁² + · · · + vₙ²)

Note that:

• |v| = 0 ⇐⇒ v = 0
• |kv| = |k||v|

12.3 Dot Product

12.3.1 Definition and Facts

Given two vectors u = (u₁, …, uₙ) and v = (v₁, …, vₙ) in ℝⁿ:

  u · v := u₁v₁ + · · · + uₙvₙ

In the case where u and v are non-zero vectors in ℝ³ (or in ℝ²), we have that:

  u · v = |u| |v| cos θ   (θ := the angle between u and v)

From which it follows that:

• u · v = 0 ⇐⇒ u and v are perpendicular.
• u · v > 0 ⇐⇒ u and v form an acute angle.
• u · v < 0 ⇐⇒ u and v form an obtuse angle.

12.3.2 Properties

Given a number k and three vectors u, v, w in ℝⁿ, we have that:

• u · u = |u|²
• u · 0 = 0 · u = 0
• u · v = v · u
• (ku) · v = k(u · v) = u · (kv)
• u · (v + w) = u · v + u · w
• (u + v) · w = u · w + v · w

12.4 Cross Product

12.4.1 Definition and Facts

Given two vectors u = (u₁, u₂, u₃) and v = (v₁, v₂, v₃) in ℝ³:

  u × v := ( det[u₂ u₃; v₂ v₃], −det[u₁ u₃; v₁ v₃], det[u₁ u₂; v₁ v₂] )

In the case where u and v are non-zero vectors, we have that:

  |u × v| = |u| |v| sin θ   (θ := the angle between u and v)

From which it follows that:

  u × v = 0 ⇐⇒ u and v are parallel.

In the case where u and v are non-parallel vectors:

• u × v is a vector perpendicular to both u and v.
• |u × v| is the area of the parallelogram spanned by u and v.

12.4.2 Properties

Given a number k and three vectors u, v, w in ℝ³, we have that:

• 0 × v = v × 0 = 0
• v × v = 0
• u × v = −(v × u)
• (ku) × v = k(u × v) = u × (kv)
• u × (v + w) = u × v + u × w
• (u + v) × w = u × w + v × w
• (u × v) · u = 0, (u × v) · v = 0
• u · (v × w) = det[u; v; w] (the 3 × 3 matrix with rows u, v, w)

12.5 Projection-Related Operations

Given a vector v and a non-zero directional vector d in ℝ³ (or in ℝ²):

• proj_d v := the projection of v onto d
• oproj_d v := the orthogonal projection of v onto d
• refl_d v := the reflection of v about d

(The original includes a figure showing v, d and the three derived vectors proj_d v, oproj_d v and refl_d v.)

In addition:

• proj_d v = ((v · d)/(d · d)) d
• oproj_d v = v − proj_d v (since proj_d v + oproj_d v = v)
• refl_d v = 2 proj_d v − v (since v + refl_d v = 2 proj_d v)
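An added NumPy sketch of the dot product, cross product, and projection formulas above:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 4.0])
d = np.array([1.0, 1.0, 0.0])

# Angle from u . v = |u||v| cos(theta)
cos_theta = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# The cross product is perpendicular to both factors
w = np.cross(u, v)
assert np.isclose(w @ u, 0) and np.isclose(w @ v, 0)

# Projection, orthogonal projection, and reflection of v about d
proj  = (v @ d) / (d @ d) * d
oproj = v - proj
refl  = 2 * proj - v
assert np.allclose(proj + oproj, v)
print(proj)   # [1.5, 1.5, 0.0]
```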
13 2D/3D Vector Geometry

13.1 Equations

13.1.1 Lines

Given a point P = (x₀, y₀, z₀) and a directional vector d = (dx, dy, dz) in ℝ³, the line spanned by d passing through P can be expressed in various forms:

• Point-Direction Form: P + dt (t being a scalar parameter)
• Component Form: x = x₀ + dx t, y = y₀ + dy t, z = z₀ + dz t
• Symmetric Form: (x − x₀)/dx = (y − y₀)/dy = (z − z₀)/dz (provided that dx, dy, dz ≠ 0)

Note that each of the forms above has an analogue in ℝ² as well.

13.1.2 Planes

Given a point P = (x₀, y₀, z₀), two non-parallel directional vectors d₁, d₂ and an associated normal vector n in ℝ³, the plane spanned by d₁ and d₂ passing through P can be expressed in various forms:

• Point-Direction Form: P + d₁s + d₂t (s, t being scalar parameters)
• Point-Normal Form: (X − P) · n = 0 (X := (x, y, z) being the variable vector)
• Standard Form: ax + by + cz = k (where (a, b, c) = n and k = ax₀ + by₀ + cz₀)

13.2 Point vs. Point

Given two points P and Q in ℝ² (or in ℝ³):

• The vector PQ := Q − P.
• The distance between P and Q is |PQ|.

13.3 Point vs. Line

Given a point Q and a line ℓ : P + dt in ℝ² (or ℝ³):

  Q is on ℓ :⇐⇒ Q = P + dt for some number t.

In the case where Q is not on ℓ, the distance between Q and ℓ can be determined in three ways:

• Orthogonal Projection Approach
  1. Compute v := oproj_d QP. Once there:
     • |v| gives the distance between Q and ℓ.
     • Q + v gives the point on ℓ closest to Q.
  (The original includes a figure showing ℓ through P with direction d, the point Q off the line, and Q + v as the foot of the perpendicular.)

• Dot Product Approach
  1. Let X = P + dt be the point on ℓ where the shortest distance occurs.
  2. By plugging X (in the above form) into the equation QX · d = 0, we can solve for the missing value of t — and hence determine the coordinates of X as well.
  3. Once there, |QX| gives the distance between Q and ℓ.

• Cross Product Approach
  The distance between Q and ℓ is calculated as the height of the parallelogram spanned by QP and d:

    distance = |QP × d| / |d|   (area of the parallelogram divided by the length of its base)
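An added NumPy sketch comparing the orthogonal projection and cross product approaches on a concrete point and line:

```python
import numpy as np

P = np.array([0.0, 0.0, 0.0])   # point on the line
d = np.array([1.0, 0.0, 0.0])   # direction of the line (the x-axis)
Q = np.array([2.0, 3.0, 4.0])   # point off the line

# Orthogonal projection approach: v = oproj_d(QP), distance = |v|
QP = P - Q
v = QP - (QP @ d) / (d @ d) * d
dist_proj = np.linalg.norm(v)

# Cross product approach: |QP x d| / |d|
dist_cross = np.linalg.norm(np.cross(QP, d)) / np.linalg.norm(d)

assert np.isclose(dist_proj, dist_cross)
print(dist_proj)   # 5.0 — distance from Q to the x-axis
```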


13.4 Point vs. Plane

Given a point Q = (x₁, y₁, z₁) and a plane P in ℝ³:

• If P is in the point-direction form P + d₁s + d₂t:
  Q is on P :⇐⇒ Q = P + d₁s + d₂t for some numbers s and t.
• If P is in the standard form ax + by + cz = k:
  Q is on P :⇐⇒ ax₁ + by₁ + cz₁ = k.

In the case where Q is not on P, the distance between Q and P can be determined in three ways:

• Projection Approach
  1. Compute v := proj_n QP. Note that:
     • If P is in point-direction form, the cross product of the directional vectors can be used as n.
     • If P is in standard form, then any point on the plane can be used as P.
  2. Once there:
     • |v| gives the distance between Q and P.
     • Q + v gives the point on P closest to Q.
  (The original includes a figure showing the plane P with normal n, the point Q off the plane, and Q + v as the foot of the perpendicular.)

• Dot Product Approach (for P : P + d₁s + d₂t)
  1. Let X = P + d₁s + d₂t be the point on P where the shortest distance occurs.
  2. By plugging X (in the above form) into the system
       QX · d₁ = 0
       QX · d₂ = 0
     we can solve for the missing values of s and t — and hence determine the coordinates of X as well.
  3. Once there, |QX| gives the distance between Q and P.

• Intersection Point Approach (for P in standard form)
  1. Let X be the point on P closest to Q. Since QX is parallel to n, X must be of the form Q + nt for some number t.
  2. By plugging X (in the above form) into the equation of P, we can solve for the missing value of t — and hence determine the coordinates of X as well.
  3. Once there, |QX| gives the distance between Q and P.
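An added NumPy sketch of the projection approach, for a plane given in standard form:

```python
import numpy as np

# Plane: 2x + y - 2z = 3, so n = (2, 1, -2); P = (0, 3, 0) lies on it
n = np.array([2.0, 1.0, -2.0])
P = np.array([0.0, 3.0, 0.0])
Q = np.array([1.0, 1.0, 5.0])

# v = proj_n(QP); |v| is the distance, Q + v the closest point on the plane
QP = P - Q
v = (QP @ n) / (n @ n) * n
print(np.linalg.norm(v))   # 10/3 ≈ 3.33 — distance from Q to the plane
print(Q + v)               # closest point; it satisfies 2x + y - 2z = 3
```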


13.5 Line vs. Line

In ℝ², a pair of lines falls into exactly one of the following categories:

• Parallel intersecting (i.e., overlapping)
• Parallel non-intersecting
• Non-parallel intersecting

In contrast, a pair of lines in ℝ³ falls into exactly one of the following four categories:

• Parallel intersecting (i.e., overlapping)
• Parallel non-intersecting
• Non-parallel intersecting
• Non-parallel non-intersecting (i.e., skew)

Given two lines ℓ₁ : P₁ + d₁s and ℓ₂ : P₂ + d₂t:

• ℓ₁ and ℓ₂ are parallel ⇐⇒ d₁ and d₂ are parallel.
• ℓ₁ and ℓ₂ are intersecting ⇐⇒ the equation P₁ + d₁s = P₂ + d₂t is solvable for some numbers s and t.
  (In the case where such an (s, t) pair exists and is unique, the coordinates of the intersection point can be determined by, say, back-substituting the value of s into P₁ + d₁s.)

In the case where ℓ₁ and ℓ₂ don't intersect, the distance between them can be determined as follows:

• If ℓ₁ and ℓ₂ are parallel, then the distance between them is simply the distance between ℓ₁ and any point on ℓ₂.
• If ℓ₁ and ℓ₂ are non-parallel, then (see the sketch after this section):
  1. The shortest distance must occur between a point X₁ = P₁ + d₁s on ℓ₁ and a point X₂ = P₂ + d₂t on ℓ₂.
  2. By plugging X₁ and X₂ (in the above forms) into the system
       X₁X₂ · d₁ = 0
       X₁X₂ · d₂ = 0
     we can solve for the missing values of s and t — and hence determine the coordinates of X₁ and X₂ as well.
  3. Once there, the distance between ℓ₁ and ℓ₂ is simply |X₁X₂|.

13.6 Line vs. Plane

In ℝ³, a line ℓ and a plane P must fall into exactly one of the following categories:

• Parallel intersecting (i.e., overlapping)
• Parallel non-intersecting
• Non-parallel intersecting

In what follows, we assume that a plane is always converted into standard form for easier analysis. More specifically, if ℓ is in the form (x(t), y(t), z(t)) with directional vector d and P is in the form ax + by + cz = k with n = (a, b, c), then:

• ℓ and P are parallel ⇐⇒ d · n = 0
• ℓ and P are intersecting ⇐⇒ the equation ax(t) + by(t) + cz(t) = k is solvable for some t. Moreover:
   • If the equation holds for all t, then ℓ and P are overlapping.
   • If the equation holds for a single t, then ℓ and P intersect at a unique point — whose coordinates can be determined by back-substituting the value of t into ℓ : (x(t), y(t), z(t)).

In the case where ℓ and P are non-intersecting (hence parallel), the distance between ℓ and P is simply the distance between P and any point on ℓ.
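An added SymPy sketch of the non-parallel (skew) case from 13.5: solve the two perpendicularity equations for s and t, then measure |X₁X₂|:

```python
from sympy import Matrix, symbols, solve

s, t = symbols("s t")

P1, d1 = Matrix([0, 0, 0]), Matrix([1, 0, 0])   # l1: the x-axis
P2, d2 = Matrix([0, 1, 2]), Matrix([0, 1, 0])   # l2: skew to l1

X1 = P1 + d1 * s
X2 = P2 + d2 * t
X1X2 = X2 - X1

# Shortest distance: X1X2 is perpendicular to both directions
sol = solve([X1X2.dot(d1), X1X2.dot(d2)], [s, t])
print(X1X2.subs(sol).norm())   # 2 — shortest distance between the lines
```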


13.7 Plane vs. Plane

In ℝ³, any pair of planes must fall into exactly one of the following categories:

• Parallel intersecting (i.e., overlapping)
• Parallel non-intersecting
• Non-parallel intersecting

In what follows, we assume that a plane is always converted into standard form for easier analysis. More specifically, given P₁ : a₁x + b₁y + c₁z = k₁ and P₂ : a₂x + b₂y + c₂z = k₂:

• P₁ and P₂ are parallel ⇐⇒ the normal vectors (a₁, b₁, c₁) and (a₂, b₂, c₂) are parallel (i.e., scalar multiples of each other).
• P₁ and P₂ are intersecting ⇐⇒ the system
    a₁x + b₁y + c₁z = k₁
    a₂x + b₂y + c₂z = k₂
  is solvable for some x, y and z. In which case:
   • If the solution set is generated by one parameter, then P₁ and P₂ intersect at a line.
   • If not, then P₁ and P₂ are overlapping.

In the case where P₁ and P₂ are non-intersecting (hence parallel), the distance between P₁ and P₂ is simply the distance between P₁ and any point on P₂.

14 Matrix Transformation

14.1 Preliminaries

A function f from ℝⁿ to ℝᵐ is a matrix transformation if and only if there exists an m × n matrix M such that:

  f(v) = Mv

In which case, f is also a linear transformation in the sense that:

• f(v₁ + v₂) = f(v₁) + f(v₂) for all n-entry vectors v₁ and v₂.
• f(kv) = kf(v) for all numbers k and n-entry vectors v.

In fact, any function from ℝⁿ to ℝᵐ with these two properties must be a matrix transformation as well.

14.2 Standard Matrix Transformations in 2D

If f is a linear transformation on 2D vectors, then f must be the matrix transformation induced by the 2 × 2 matrix whose columns are the images of the standard basis vectors, i.e., [f((1, 0)) f((0, 1))].

The following matrices operate on 2D vectors based on the line ℓ : y = mx:

• Projection (onto ℓ):
  Matrix: (1/(1 + m²)) [1 m; m m²]
  Eigenvectors: (1, m) with λ = 1; (m, −1) with λ = 0

• Orthogonal Projection (onto ℓ):
  Matrix: (1/(1 + m²)) [m² −m; −m 1]
  Eigenvectors: (1, m) with λ = 0; (m, −1) with λ = 1

• Reflection (about ℓ):
  Matrix: (1/(1 + m²)) [1 − m² 2m; 2m m² − 1]
  Eigenvectors: (1, m) with λ = 1; (m, −1) with λ = −1

The following matrix operates on 2D vectors by applying a counter-clockwise rotation with angle θ:

  [cos θ −sin θ; sin θ cos θ]
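An added NumPy sketch checking the reflection matrix about ℓ : y = mx against its stated eigenvectors, along with a 90° rotation:

```python
import numpy as np

m = 2.0   # reflection about the line y = 2x
R = np.array([[1 - m**2, 2 * m],
              [2 * m, m**2 - 1]]) / (1 + m**2)

# (1, m) lies on the line: reflected to itself (eigenvalue 1)
assert np.allclose(R @ np.array([1, m]), np.array([1, m]))

# (m, -1) is perpendicular to the line: flipped (eigenvalue -1)
assert np.allclose(R @ np.array([m, -1]), -np.array([m, -1]))

# A counter-clockwise rotation by 90 degrees sends (1, 0) to (0, 1)
theta = np.pi / 2
Rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
print(Rot @ np.array([1.0, 0.0]))   # [~0, 1]
```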
