3. Vector Spaces
(Sections 7.4-7.5 of Kreyszig)
Nguyễn Anh Tú
[email protected]
Contents
1 Vector Spaces
2 Subspaces and Spanning Sets
3 Linear Independence
4 Basis and Dimension
5 Row Space, Column Space, and Rank
6 Coordinates and Change of Basis
Vector spaces
Introduction
• Many physical quantities, such as area, length, mass, and
temperature, are completely determined by their magnitude.
Such quantities are called scalars.
• Other physical quantities are determined by both a magnitude
and a direction, e.g. forces, velocities, electromagnetic
fields, ... These quantities are called vectors.
• The goal of this section is to study vector spaces and their
fundamental properties:
1. Linear dependence/independence.
2. Spanning sets.
• Equal vectors have the same length and direction but may
have different starting points.
What is a Vector?
A vector space V is a set of objects, called vectors, on which two
operations are defined: vector addition and multiplication by scalars.
These operations are required to satisfy the usual algebraic axioms:
addition is commutative and associative, there is a zero vector 0 and
every u has a negative −u, and scalar multiplication satisfies
c(u + v) = cu + cv, (c + d)u = cu + du, c(du) = (cd)u, and 1u = u.
Properties
For any vector u in V and any scalar c,
1. 0u = 0, where 0 on the right is the zero vector of V.
2. c0 = 0, where 0 is the zero vector of V.
3. −u = (−1)u.
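For instance, property 1 follows directly from the axioms: 0u = (0 + 0)u = 0u + 0u, and adding −(0u) to both sides gives 0 = 0u. Properties 2 and 3 are proved by similar one-line arguments.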
Vector Spaces
Example: Three-dimensional vector space
Let V be the set of all arrows (directed line segments) in
three-dimensional space, with two arrows regarded as equal if
they have the same length and point in the same direction.
Define addition by the parallelogram rule and for each v in V ,
define cv to be the arrow whose length is |c| times the length
of v , pointing in the same direction as v if c > 0 and
otherwise pointing in the opposite direction.
Show that V is a vector space. This space is a common model
in physical problems for various forces.
Vector Spaces
Spaces of Matrices
The set of all m × n matrices, with matrix addition and
multiplication of a matrix by a real number (scalar
multiplication), is a vector space (verify). We denote this
vector space by Mmn.
Vector Spaces
Spaces of Polynomials
Let Pn denote the set of all polynomials of degree at most n
(together with the zero polynomial),
p(x) = a0 + a1 x + a2 x^2 + ... + an x^n.
With the usual addition of polynomials and multiplication of a
polynomial by a scalar, Pn is a vector space (verify).
Definition
A subspace of a vector space V is a subset H of V that has
three properties:
a. The zero vector of V is in H.
b. H is closed under vector addition. That is, for each u and v
in H, the sum u + v is in H.
c. H is closed under multiplication by scalars. That is, for each
u in H and each scalar c, the vector cu is in H.
Example
The set
H = {(x, y, 0) : x, y ∈ R}
is a subspace of V = R^3: it contains the zero vector (0, 0, 0), and it
is closed under addition and scalar multiplication, since
(x1, y1, 0) + (x2, y2, 0) = (x1 + x2, y1 + y2, 0) and c(x, y, 0) = (cx, cy, 0)
again have third component 0.
Subspaces
Example
The set consisting of only the zero vector in a vector space V
is a subspace of V, called the zero subspace and written as {0}.
Example
Let P be the set of all polynomials with real coefficients, with
operations in P defined as for functions. Then P is a subspace
of the space of all real-valued functions defined on R. Also, for
each n > 0, Pn is a subspace of P.
Subspaces
Example
A line in R2 not containing the origin is not a subspace of R2 .
A plane in R3 not containing the origin is not a subspace of
R3 .
Subspaces
Example
Which of the given subsets of the vector space P2 are subspaces?
(a) The set of all polynomials a2 t^2 + a1 t + a0 with a1 = 0 and a0 = 0.
(b) The set of all polynomials a2 t^2 + a1 t + a0 with a1 = 2a0.
(c) The set of all polynomials a2 t^2 + a1 t + a0 with a2 + a1 + a0 = 2.
Exercises
Let W be the set of all 3 × 3 matrices of the form
[ a  0  b ]
[ 0  c  0 ]
[ d  0  e ]
Show that W is a subspace of M33.
Exercises
Which of the following subsets of the vector space Mnn are
subspaces?
(a) The set of all n × n symmetric matrices.
(b) The set of all n × n diagonal matrices.
(c) The set of all n × n invertible matrices.
Example
If A is an m × n matrix, then the homogeneous system of m
equations in n unknowns with coefficient matrix A can be
written as
Ax = 0,
where x is a vector in R^n and 0 is the zero vector. Show that
the set W of all solutions is a subspace of R^n.
W is called the solution space of the homogeneous system, or
the null space of the matrix A.
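As a computational aside (a minimal sketch, assuming NumPy and SciPy are available; the matrix A below is just a made-up example, not from the text):

import numpy as np
from scipy.linalg import null_space

# A hypothetical coefficient matrix of a homogeneous system Ax = 0
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

N = null_space(A)              # columns of N form a basis for the null space of A
print(N.shape[1])              # dimension of the solution space (nullity of A)
print(np.allclose(A @ N, 0))   # True: every basis vector satisfies Ax = 0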
Linear combination
Definition
Let v1, v2, ..., vk be vectors in a vector space V. A vector v in
V is a linear combination of v1, v2, ..., vk if
v = a1 v1 + a2 v2 + ... + ak vk
for some scalars a1, a2, ..., ak.
Example
If v1 = [1 0 1]^T and v2 = [0 1 1]^T, then every vector of the form
w = [a b a+b]^T is a linear combination of v1, v2, since w = a v1 + b v2.
A Subspace Spanned by a Set
Definition
Let S = {v1, v2, ..., vk} be a set of vectors in a vector space V.
The span of S, written span S, is the set of all linear combinations
of v1, v2, ..., vk.
Theorem
For any set S = {v1, v2, ..., vk} of vectors in V, span S
is a subspace of V.
Example
Consider the set S of 2 × 3 matrices given by
S = { [1 0 0; 0 0 0], [0 1 0; 0 0 0], [0 0 0; 0 1 0], [0 0 0; 0 0 1] }.
Then span S is the set of all 2 × 3 matrices of the form
[ a  b  0 ]
[ 0  c  d ]
with a, b, c, d ∈ R.
Example
Let S = {t^2, t, 1}. Then span S is the subspace of P2 consisting of
all polynomials of the form a2 t^2 + a1 t + a0; in fact, span S = P2.
Example
Let
S = { [1 0; 0 0], [0 0; 0 1] }.
Then span S is the subspace of all 2 × 2 diagonal matrices.
A Subspace Spanned by a Set
Example
Let n o
H = (a − 3b, b − a, a, b)T : a, b ∈ R
Example
For what value(s) of a will y be in the subspace of R3 spanned
by v1 , v2 , v3 , if
v1 = [1 −1 −2]^T, v2 = [5 −4 −7]^T, v3 = [−3 1 0]^T, and y = [−4 3 a]^T.
Answer: a = 5.
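This can be double-checked numerically (a sketch assuming NumPy): y lies in span{v1, v2, v3} exactly when appending y to the matrix [v1 v2 v3] does not increase its rank.

import numpy as np

V = np.column_stack([[1., -1., -2.],    # v1
                     [5., -4., -7.],    # v2
                     [-3., 1., 0.]])    # v3

for a in (5.0, 0.0):
    y = np.array([-4., 3., a])
    in_span = np.linalg.matrix_rank(V) == np.linalg.matrix_rank(np.column_stack([V, y]))
    print(a, in_span)   # True only for a = 5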
A Subspace Spanned by a Set
Example
In P2, let v = t^2 + t + 2 and let v1, v2, v3, v4 be given polynomials
in P2. To decide whether v belongs to span{v1, v2, v3, v4}, we set
a1 v1 + a2 v2 + a3 v3 + a4 v4 = v
and compare coefficients of like powers of t; this leads to a linear
system in a1, a2, a3, a4, one of whose equations is
2a1 + a2 + 5a3 − a4 = 1.
Example
Let V be the vector space R^3. Let
v1 = [1 2 1]^T, v2 = [1 0 2]^T, v3 = [1 1 0]^T.
Do v1, v2, v3 span V?
Solution:
Pick any vector
v = [a b c]^T ∈ V
and look for scalars a1, a2, a3 with a1 v1 + a2 v2 + a3 v3 = v.
A Subspace Spanned by a Set
Solution (Cont.):
This leads to the linear system
a1 + a2 + a3 = a
2a1 + a3 = b
a1 + 2a2 = c
A solution is (verify)
a1 = (−2a + 2b + c)/3,   a2 = (a − b + c)/3,   a3 = (4a − b − 2c)/3.
Since this system has a solution for every choice of a, b, c, every v in R^3
lies in span{v1, v2, v3}; hence v1, v2, v3 span R^3.
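As a quick numerical sanity check (a sketch assuming NumPy), the closed-form solution above agrees with np.linalg.solve for an arbitrary right-hand side:

import numpy as np

A = np.array([[1., 1., 1.],
              [2., 0., 1.],
              [1., 2., 0.]])        # coefficient matrix of the system above

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal(3)    # an arbitrary target vector (a, b, c)

coeffs = np.linalg.solve(A, np.array([a, b, c]))
formula = np.array([(-2*a + 2*b + c) / 3, (a - b + c) / 3, (4*a - b - 2*c) / 3])
print(np.allclose(coeffs, formula))  # True: the formulas solve the system for every (a, b, c)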
A Subspace Spanned by a Set
Example
Explain why the set S is not a spanning set for the vector
space V .
(a) S = {t^3, t^2, t}, V = P3
(b) S = { [1 0]^T, [0 0]^T }, V = R^2
(c) S = { [1 0; 0 1], [0 1; 1 0] }, V = M22
Section 3
Linear Independence
Linear Independence
Definition
The vectors v1, v2, ..., vk in a vector space V are said to be
linearly dependent if there exist constants a1, a2, ..., ak, not all
zero, such that
a1 v1 + a2 v2 + ... + ak vk = 0.
Otherwise, v1, v2, ..., vk are said to be linearly independent.
Example
v1 = [1 2 1]^T, v2 = [1 0 2]^T, v3 = [2 2 3]^T
are linearly dependent since v1 + v2 − v3 = 0.
Example
v1 = [1 0]^T, v2 = [0 2]^T
are linearly independent since a v1 + b v2 = 0 iff a = b = 0.
Linear Independence
Example
Determine whether the vectors
v1 = [3 2 1]^T, v2 = [1 2 0]^T, v3 = [−1 2 −1]^T
are linearly independent.
Solution
Forming the equation
a1 v1 + a2 v2 + a3 v3 = 0,
i.e.
a1 [3 2 1]^T + a2 [1 2 0]^T + a3 [−1 2 −1]^T = [0 0 0]^T.
Linear Independence
We obtain the homogeneous system
3a1 + a2 − a3 = 0
2a1 + 2a2 + 2a3 = 0
a1 − a3 = 0,
which has nontrivial solutions, e.g. (a1, a2, a3) = (1, −2, 1).
Hence v1, v2, v3 are linearly dependent.
Example
Determine whether the vectors
v1 = [1 0 1]^T, v2 = [0 1 1]^T, v3 = [1 1 1]^T
are linearly independent.
Solution
Forming the equation a1 v1 + a2 v2 + a3 v3 = 0, i.e.
a1 [1 0 1]^T + a2 [0 1 1]^T + a3 [1 1 1]^T = [0 0 0]^T,
we obtain a homogeneous system whose only solution is
a1 = a2 = a3 = 0 (verify). Hence v1, v2, v3 are linearly independent.
Linear Independence
Example
Are the vectors
v1 = [2 1; 0 1], v2 = [1 2; 1 0], v3 = [0 −3; −2 1]
in M22 linearly independent?
Solution:
Setting up the equation:
a1 [2 1; 0 1] + a2 [1 2; 1 0] + a3 [0 −3; −2 1] = [0 0; 0 0],
Linear Independence
we obtain
[ 2a1 + a2     a1 + 2a2 − 3a3 ]   [ 0  0 ]
[ a2 − 2a3     a1 + a3        ] = [ 0  0 ].
Solving the resulting homogeneous linear system for the aj, we row reduce:
[ 2  1  0 | 0 ]                  [ 1  0  1 | 0 ]
[ 1  2 −3 | 0 ]  row operations  [ 0  1 −2 | 0 ]
[ 0  1 −2 | 0 ]  ------------->  [ 0  0  0 | 0 ]
[ 1  0  1 | 0 ]                  [ 0  0  0 | 0 ]
The solutions are (a1, a2, a3)^T = k(−1, 2, 1)^T for arbitrary k. Taking k ≠ 0
gives the nontrivial relation −v1 + 2v2 + v3 = 0, so v1, v2, v3 are linearly dependent.
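The same conclusion can be reached by flattening each matrix into a vector of R^4 and comparing ranks (a sketch assuming NumPy):

import numpy as np

v1 = np.array([[2., 1.], [0., 1.]])
v2 = np.array([[1., 2.], [1., 0.]])
v3 = np.array([[0., -3.], [-2., 1.]])

# Stack the flattened matrices as the columns of a 4 x 3 matrix
M = np.column_stack([v.reshape(-1) for v in (v1, v2, v3)])
print(np.linalg.matrix_rank(M))          # 2 < 3, so v1, v2, v3 are linearly dependent
print(np.allclose(-v1 + 2*v2 + v3, 0))   # True: the dependence relation found above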
Example
Are the vectors
v1 = t 2 + t + 2, v2 = 2t 2 + t, v3 = 3t 2 + 2t + 2
in P2 linearly independent?
Answer: The given vectors are linearly dependent; indeed v1 + v2 = v3.
Linear Independence
Theorem
Let S = {v1 , v2 , ..., vn } be a set of n vectors in Rn . Let A be
the matrix whose columns (rows) are the elements of S. Then
S is linearly independent if and only if det(A) ≠ 0.
Proof:
We prove the result for column vectors: the vectors in S are linearly
independent if and only if the homogeneous system Ax = 0 has only the
trivial solution x = 0, and this holds if and only if A is invertible,
that is, det(A) ≠ 0.
Example
Is S = { [1 2 3]^T, [0 1 2]^T, [3 0 −1]^T } a linearly independent set
of vectors in R^3?
Solution
We form the matrix A whose columns are the vectors in S:
A = [ 1  0  3 ]
    [ 2  1  0 ]
    [ 3  2 −1 ]
and compute det(A) = 2 ≠ 0.
So S is linearly independent.
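The determinant test is also easy to run numerically (a sketch assuming NumPy):

import numpy as np

A = np.array([[1., 0., 3.],
              [2., 1., 0.],
              [3., 2., -1.]])   # columns are the vectors of S

d = np.linalg.det(A)
print(d)                         # approximately 2.0
print(not np.isclose(d, 0.0))    # True, so S is linearly independent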
Linear Independence
Theorem
Let S1 and S2 be finite subsets of a vector space and let S1 be
a subset of S2 . Then the following statements are true:
(a) If S1 is linearly dependent, so is S2 .
(b) If S2 is linearly independent, so is S1 .
Proof of (a): Write S1 = {v1, ..., vk} and S2 = {v1, ..., vk, vk+1, ..., vm}.
If S1 is linearly dependent, there are constants a1, ..., ak, not all zero,
with a1 v1 + ... + ak vk = 0. Then
a1 v1 + a2 v2 + ... + ak vk + 0vk+1 + ... + 0vm = 0.
Since not all the coefficients in the equation above are zero,
we conclude that S2 is linearly dependent. Statement (b) is the
contrapositive of (a).
Remarks
• The set S = {0} consisting of only the zero vector 0 is linearly
dependent.
From this it follows that if S is any set of vectors that
contains 0, then S must be linearly dependent.
• A set of vectors consisting of a single nonzero vector is
linearly independent.
• If v1 , v2 , ..., vk are vectors in a vector space V and for
some i 6= j, vi = vj , then v1 , v2 , ..., vk are linearly
dependent.
Linear Independence
Theorem
The nonzero vectors v1 , v2 , ..., vk in a vector space V are
linearly dependent if and only if one of the vectors vj (j ≥ 2) is
a linear combination of the preceding vectors v1 , v2 , ..., vj−1 .
Example
v1 = [1 2 −1]^T, v2 = [1 −2 1]^T, v3 = [−3 2 −1]^T, v4 = [2 0 0]^T
v1 + v2 + 0v3 − v4 = 0
so v1 , v2 , v3 , and v4 are linearly dependent. We then have
v4 = v1 + v2 + 0v3 .
Section 4
Basis and Dimension
Definition
A set S = {v1, v2, ..., vn} of vectors in a vector space V is called a
basis for V if S is linearly independent and span S = V.
Example
Let V = R^3. The vectors
v1 = [1 0 0]^T, v2 = [0 1 0]^T, v3 = [0 0 1]^T
form a basis for R^3 (verify).
Example
Generally, the natural basis or standard basis for Rn is denoted
by
{e1 , e2 , ..., en }
where ei denotes the vector whose i-th component is 1 and whose other
components are 0:
ei = [0 ... 0 1 0 ... 0]^T, with the 1 in position i.
Basis and dimensions
Example
Show that the set
S = {t^2 + 1, t − 1, 2t + 2}
is a basis for P2.
To show that S spans P2, write an arbitrary vector at^2 + bt + c of P2 as
at^2 + bt + c = a1 (t^2 + 1) + a2 (t − 1) + a3 (2t + 2).
Basis and dimensions
We find
a1 = a,   a2 = (a + b − c)/2,   a3 = (c + b − a)/4.
Hence S spans V.
To show that S is linearly independent, we solve
a1 (t^2 + 1) + a2 (t − 1) + a3 (2t + 2) = 0.
Taking a = b = c = 0 in the formulas above gives a1 = a2 = a3 = 0 as the
only solution, so S is linearly independent and is therefore a basis for P2.
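Identifying a2 t^2 + a1 t + a0 with the coordinate vector (a2, a1, a0)^T in R^3, the whole argument can be checked numerically (a sketch assuming NumPy; the test values a = 1, b = 2, c = 3 are arbitrary):

import numpy as np

# Coefficient vectors (coefficient of t^2, coefficient of t, constant term)
P = np.column_stack([[1., 0., 1.],     # t^2 + 1
                     [0., 1., -1.],    # t - 1
                     [0., 2., 2.]])    # 2t + 2

print(np.linalg.matrix_rank(P))        # 3, so S is linearly independent and spans P2

a, b, c = 1., 2., 3.
coeffs = np.linalg.solve(P, np.array([a, b, c]))
print(np.allclose(coeffs, [a, (a + b - c) / 2, (c + b - a) / 4]))   # True: matches the formulas above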
Example
Show that the set
S = {v1, v2, v3, v4},
where
v1 = [1 0 1 0]^T, v2 = [0 1 −1 2]^T, v3 = [0 2 2 1]^T, v4 = [1 0 0 1]^T,
is a basis for R^4.
Basis and dimensions
Example
Find a basis for the subspace V of P2 consisting of all vectors
of the form at^2 + bt + c, where c = a − b.
Hint:
S = {t^2 + 1, t − 1}.
Remarks
A vector space V is called finite-dimensional if there is a finite
subset of V that is a basis for V . If there is no such finite
subset of V , then V is called infinite-dimensional.
Basis and dimensions
Theorem
If S = {v1 , v2 , .., vn } is a basis for a vector space V , then every
vector in V can be written in one and only one way as a linear
combination of the vectors in S.
Proof: Suppose
v = a1 v1 + a2 v2 + ... + an vn  and  v = b1 v1 + b2 v2 + ... + bn vn.
Subtracting, this implies
(a1 − b1) v1 + (a2 − b2) v2 + ... + (an − bn) vn = 0.
Since S is linearly independent, all the coefficients must be zero, so
aj = bj for every j.
Example
Find a basis for Col A = span{v1, v2, ..., v5}, where
A = [v1 v2 · · · v5] = [ 1  4  0  2  0 ]
                       [ 0  0  1 −1  0 ]
                       [ 0  0  0  0  2 ]
                       [ 0  0  0  0  0 ]
Finding a basis
Solution
Each nonpivot column of A is a linear combination of the pivot
columns. In fact, v2 = 4v1 , v4 = 2v1 − v3 .
Let
S = {v1, v3, v5} = { [1 0 0 0]^T, [0 1 0 0]^T, [0 0 2 0]^T }.
These are the pivot columns of A; they are linearly independent and every
column of A is a linear combination of them, so S is a basis for Col A.
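The pivot columns can also be read off mechanically from the reduced row echelon form (a sketch assuming SymPy; exact arithmetic avoids floating-point rounding in the row reduction):

import sympy as sp

A = sp.Matrix([[1, 4, 0, 2, 0],
               [0, 0, 1, -1, 0],
               [0, 0, 0, 0, 2],
               [0, 0, 0, 0, 0]])

R, pivots = A.rref()
print(pivots)                       # (0, 2, 4): columns 1, 3 and 5 are the pivot columns
basis = [A[:, j] for j in pivots]   # these columns of A form a basis for Col A
print(basis)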
Theorem
Let S = {v1, v2, ..., vn} be a basis for a vector space V, and let
T = {w1, w2, ..., wm} be a set of vectors in V.
• If T is linearly independent, then m ≤ n.
• If T spans V, then m ≥ n.
Corollary
If S = {v1, v2, ..., vn} and T = {w1, w2, ..., wm} are bases for a
vector space V, then n = m.
Dimensions
Definition
The dimension of a nonzero vector space V is the number of
vectors in a basis for V , denoted by dim V .
We also define the dimension of the trivial vector space {0} to
be zero.
Example
• S = {t^2, t, 1} is a basis for P2, so dim P2 = 3.
• dim Rn = n.
• dim Mm,n = mn.
For a set of exactly dim V vectors, only one of the two
conditions for being a basis needs to be verified. That is:
Theorem
Let V be an n-dimensional vector space.
• If S = {v1 , v2 , ..., vn } is a linearly independent set of
vectors in V, then S is a basis for V .
• If S = {v1 , v2 , ..., vn } spans V , then S is a basis for V .
Basis
Definition
Let S be a set of vectors in a vector space V . A subset T of S
is called a maximal independent subset of S if T is a linearly
independent set of vectors that is not properly contained in
any other linearly independent subset of S.
Example
Let
S = {v1, v2, v3} = { [1 0]^T, [0 2]^T, [1 2]^T }.
Then maximal independent subsets of S are {v1 , v2 }, {v2 , v3 },
and {v1 , v3 }.
Theorem
Let S be a finite subset of the vector space V that spans V . A
maximal independent subset T of S is a basis for V .
Section 5
Row Space, Column Space, and Rank
Definition
The row (column) space of an m × n matrix A is the subspace of R^n (R^m)
spanned by the rows (columns) of A. The dimension of the row (column)
space of A is called the row (column) rank of A.
Row Space and column space
Theorem
If A and B are two m × n row (column) equivalent matrices,
then the row (column) spaces of A and B are equal.
As a consequence, if A and B are row equivalent, then row
rank A = row rank B; and if A and B are column equivalent,
then column rank A = column rank B.
Example
Find the row space, the null space, and the column space of
the matrix
A = [ −2  −5   8  0 −17 ]
    [  1   3  −5  1   5 ]
    [  3  11 −19  7   1 ]
    [  1   7 −13  5 −13 ]
Row Space and column space
Solution: We have
A ∼ B = [ 1  3 −5  1   5 ]
        [ 0  1 −2  2  −7 ]
        [ 0  0  0 −4  20 ]
        [ 0  0  0  0   0 ]
The first three (nonzero) rows of B form a basis for the row space of A
(as well as for the row space of B). Thus, a basis for Row A is
{ [1 3 −5 1 5], [0 1 −2 2 −7], [0 0 0 −4 20] }.
The pivot columns of B are columns 1, 2 and 4, so the corresponding columns
of A form a basis for Col A, and solving Ax = 0 from the echelon form gives
a basis of two vectors for the null space of A.
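These bases can be produced mechanically as well (a sketch assuming SymPy): the nonzero rows of the reduced row echelon form give a (possibly different-looking) basis for Row A, and Matrix.nullspace() returns a basis for the null space.

import sympy as sp

A = sp.Matrix([[-2, -5, 8, 0, -17],
               [1, 3, -5, 1, 5],
               [3, 11, -19, 7, 1],
               [1, 7, -13, 5, -13]])

R, pivots = A.rref()
print([list(R.row(i)) for i in range(len(pivots))])   # nonzero rows of R: a basis for Row A
print(pivots)                                          # (0, 1, 3): columns 1, 2, 4 of A give a basis for Col A
print(len(A.nullspace()))                              # 2: the nullity of A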
Definition
The nullity of A is the dimension of the null space of A, that
is, the dimension of the solution space of Ax = 0.
Thus, in the previous example, nullity A = 2 and
row rank A = column rank A = 3.
Theorem
Let A be an m × n matrix. Then
• The row rank and column rank of A are equal. They equal
the number of pivot columns of the echelon form of A.
• The nullity of A equals the number of non-pivot columns
of the echelon form of A.
• It follows that rank A + nullity A = n.
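A quick numerical illustration of rank A + nullity A = n (a sketch assuming NumPy and SciPy; the matrix here is random, not one of the examples below):

import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(4, 6)).astype(float)   # an arbitrary 4 x 6 matrix

rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]
print(rank, nullity, rank + nullity == A.shape[1])    # rank + nullity always equals n = 6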
Rank-Nullity
Example
Let
A = [ 3 −1  2 ]
    [ 2  1  3 ]
    [ 7  1  8 ]
We have
A ∼ B = [ 1  0  1 ]
        [ 0  1  1 ]
        [ 0  0  0 ]
Thus nullity A = 1, rank A = 2, and nullity A + rank A = 3 = number of
columns of A.
Rank-Nullity
Example
Let
A = [ 1  1  4  1  2 ]
    [ 0  1  2  1  1 ]
    [ 0  0  0  1  2 ]
    [ 1 −1  0  0  2 ]
    [ 2  1  6  0  1 ]
We have
A ∼ B = [ 1  0  2  0  1 ]
        [ 0  1  2  0 −1 ]
        [ 0  0  0  1  2 ]
        [ 0  0  0  0  0 ]
        [ 0  0  0  0  0 ]
Thus, nullity A=2, rank A = 3 and nullity A + rank A = 5 =
number of columns of A.
Rank and Invertibility
Suppose A is a square matrix of size n. Since rank A is the
number of pivot columns of A, it follows that rank A = n if
and only if A is invertible. Thus,
Corollary
Let A be an n × n matrix. The following statements are equivalent:
(a) A is invertible.
(b) rank A = n.
(c) det(A) ≠ 0.
(d) The homogeneous system Ax = 0 has only the trivial solution.
(e) The linear system Ax = b has a unique solution for every
vector b ∈ R^n.
Section 6
Coordinates and Change of Basis
Definition
Let V be an n-dimensional vector space with ordered basis
S = {v1, v2, ..., vn}.
By the theorem above, every v in V can be written in exactly one way as
v = a1 v1 + a2 v2 + ... + an vn.
We define
[v]_S = [a1 a2 ... an]^T
and call [v]_S ∈ R^n the coordinate vector of v with respect to
the basis S.
Coordinates
Example
Consider the vector space R3 and let S = {v1 , v2 , v3 } be the
basis for R3 , where
v1 = [1 1 0]^T, v2 = [2 0 1]^T, v3 = [0 1 2]^T.
If v = [1 1 −5]^T, compute [v]_S.
Coordinates
Solution
To find [v]_S, we need to find constants a1, a2, a3 such that
a1 v1 + a2 v2 + a3 v3 = v. Row reducing the augmented matrix of the
linear system,
[ 1  2  0 |  1 ]
[ 1  0  1 |  1 ]
[ 0  1  2 | −5 ]
we find a1 = 3, a2 = −1, a3 = −2, so [v]_S = [3 −1 −2]^T.
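Numerically, [v]_S is just the solution of a linear system whose coefficient matrix has the basis vectors as columns (a sketch assuming NumPy):

import numpy as np

S = np.column_stack([[1., 1., 0.],    # v1
                     [2., 0., 1.],    # v2
                     [0., 1., 2.]])   # v3
v = np.array([1., 1., -5.])

coords = np.linalg.solve(S, v)
print(coords)    # [ 3. -1. -2.], i.e. [v]_S = [3 -1 -2]^T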
Definition
Let V and W be real vector spaces. A bijection L from V to W is called
an isomorphism if, for all v, w in V and all scalars c,
(a) L(v + w) = L(v) + L(w),
(b) L(cv) = cL(v).
If there is an isomorphism from V to W , we say that V is
isomorphic to W .
Theorem
(a) Two finite-dimensional vector spaces are isomorphic if and
only if their dimensions are equal.
(b) If V is an n−dimensional real vector space, then V is
isomorphic to Rn .
Changes of bases - Transition matrices
Let S = {v1, v2, ..., vn} and T = {w1, w2, ..., wn} be two ordered bases
for the n-dimensional vector space V. Let v be a vector in V and let
[v]_T = [c1 c2 ... cn]^T,
so that v = c1 w1 + c2 w2 + ... + cn wn. Since the coordinate map is linear,
[v]_S = c1 [w1]_S + c2 [w2]_S + ... + cn [wn]_S.
Let PT→S denote the n × n matrix whose j-th column is [wj]_S, called the
transition matrix from the T-basis to the S-basis. Therefore,
[v]_S = PT→S [v]_T.
Changes of bases - Transition matrices
Example
Let T = {w1 , w2 }, S = {v1 , v2 }, where
w1 = [−9 1]^T, w2 = [−5 −1]^T, v1 = [1 −4]^T, v2 = [3 −5]^T.
Find the transition matrix PT→S from the T-basis to the S-basis.
Solution:
The columns of PT→S are [w1]_S = [x1 x2]^T and [w2]_S = [y1 y2]^T, i.e.
the solutions of x1 v1 + x2 v2 = w1 and y1 v1 + y2 v2 = w2. We can solve
both systems simultaneously:
[v1 v2 | w1 w2] = [  1  3 | −9 −5 ]  ∼  [ 1  0 |  6  4 ]
                  [ −4 −5 |  1 −1 ]     [ 0  1 | −5 −3 ]
Thus
[w1]_S = [x1 x2]^T = [6 −5]^T,   [w2]_S = [y1 y2]^T = [4 −3]^T,
and
PT→S = [  6  4 ]
       [ −5 −3 ]
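Since the columns of PT→S are the S-coordinates of w1 and w2, PT→S is the solution X of the matrix equation [v1 v2] X = [w1 w2], which can be computed in one call (a sketch assuming NumPy):

import numpy as np

V = np.array([[1., 3.],
              [-4., -5.]])    # columns v1, v2
W = np.array([[-9., -5.],
              [1., -1.]])     # columns w1, w2

P = np.linalg.solve(V, W)     # transition matrix P_{T->S}
print(P)                      # [[ 6.  4.]
                              #  [-5. -3.]]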
Changes of bases - Transition matrices
Example
Let
T = {w1 , w2 , w3 } , S = {v1 , v2 , v3 } ,
where
w1 = [6 3 3]^T, w2 = [4 −1 3]^T, w3 = [5 5 2]^T,
v1 = [2 0 1]^T, v2 = [1 2 0]^T, v3 = [1 1 1]^T.
Find the transition matrix PT →S from the T-basis to the
S-basis.
Changes of bases - Transition matrices
To find [wj]_S, j = 1, 2, 3, we solve the three systems simultaneously:
[v1 v2 v3 | w1 w2 w3] = [ 2  1  1 | 6  4  5 ]  ∼  [ 1  0  0 | 2  2  1 ]
                        [ 0  2  1 | 3 −1  5 ]     [ 0  1  0 | 1 −1  2 ]
                        [ 1  0  1 | 3  3  2 ]     [ 0  0  1 | 1  1  1 ]
Hence
PT→S = [ 2  2  1 ]
       [ 1 −1  2 ]
       [ 1  1  1 ]
Q: Verify that [v]_S = PT→S [v]_T for a vector v of your choice.
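The verification asked for above can be carried out numerically for an arbitrary v (a sketch assuming NumPy; the coordinate vector [v]_T used for the test is made up):

import numpy as np

V = np.array([[2., 1., 1.],
              [0., 2., 1.],
              [1., 0., 1.]])    # columns v1, v2, v3
W = np.array([[6., 4., 5.],
              [3., -1., 5.],
              [3., 3., 2.]])    # columns w1, w2, w3

P = np.linalg.solve(V, W)       # P_{T->S}, as computed above

v_T = np.array([1., -2., 3.])   # arbitrary coordinates of some v with respect to T
v = W @ v_T                     # the vector v itself
v_S = np.linalg.solve(V, v)     # its coordinates with respect to S
print(np.allclose(v_S, P @ v_T))   # True: [v]_S = P_{T->S} [v]_T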
Exercises