Chapter 4
Vector Spaces
4.1 Spaces of Polynomials
4.1.1 Polynomial Vector Space
Definition: Polynomial Space and the Standard Basis of Polynomial Space
The collection of polynomials of degree at most n is denoted Pn. The collection of polynomials {1, x, x^2, ..., x^n} is called the monomial basis (also referred to as the standard basis) for Pn.
Property                                                                              Name
There is a polynomial o(x) such that o(x) + s(x) = s(x) for all polynomials s(x)      Existence of zero
For each s(x) there exists a polynomial −s(x) such that s(x) + (−s(x)) = o(x)         Existence of inverse
4.1.2 Linear Combinations, Spans, and Linear Dependence/Independence
Definition: Linear Combination and Span of Polynomials
Let B = {p1(x), ..., pk(x)} be a set of polynomials of degree at most n. A linear combination of these polynomials is an expression of the form
t1 p1(x) + · · · + tk pk(x)
where t1, ..., tk ∈ R. The span of B is defined as
Span(B) = {t1 p1(x) + · · · + tk pk(x) | t1, ..., tk ∈ R},
the set of all linear combinations of the polynomials in B.
Definition: Linear Dependence and Independence of Polynomials
The set B = {p1(x), ..., pk(x)} is said to be linearly independent if the only solution to the equation
t1 p1(x) + · · · + tk pk(x) = 0
is t1 = · · · = tk = 0. Otherwise, there is a solution in which not all ti are zero, and B is said to be linearly dependent.
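In practice, checking independence comes down to comparing coefficients: write the combination t1 p1(x) + · · · + tk pk(x) in the monomial basis and ask whether the resulting linear system forces every ti to be zero. The following SymPy sketch illustrates this for one particular set of polynomials in P2 (the set {1 + x, 1 − x, x^2} is just an illustration, not taken from these notes).

import sympy as sp

x = sp.symbols('x')
t1, t2, t3 = sp.symbols('t1 t2 t3')

# An illustrative set of polynomials of degree at most 2.
p1, p2, p3 = 1 + x, 1 - x, x**2

# Form t1*p1 + t2*p2 + t3*p3 and compare the coefficients of 1, x, x^2 with zero.
combo = sp.expand(t1*p1 + t2*p2 + t3*p3)
eqs = [sp.Eq(combo.coeff(x, k), 0) for k in range(3)]

# Only the trivial solution means the set is linearly independent.
print(sp.solve(eqs, [t1, t2, t3]))   # {t1: 0, t2: 0, t3: 0}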
4.2 Vector Spaces
4.2.1 Vector Spaces
Definition: Vector Spaces
We refer the reader to the appendix for the list of axioms that a set equipped with an addition ⊕ and a scalar multiplication ⊙ must satisfy in order to be called a vector space. Some standard examples of vector spaces include the following.
2. The space M(m, n) of all m × n matrices, equipped with matrix addition and scalar multiplication.
4. The space of functions F(a, b) = {f | f : (a, b) ⊆ R → R}, equipped with function addition and scalar multiplication.
Example 1: Consider the space R2 with addition defined to be standard vector addition ⊕ = +, and scalar
multiplication defined by k ⊙ (x, y) = (ky, kx). By finding a counterexample, show that this is not a vector space.
Example 2: Consider the space R with addition defined to be x ⊕ y = x^2 + y + 1 and scalar multiplication defined by standard multiplication ⊙ = ·. By finding a counterexample, show that this is not a vector space.
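A quick way to hunt for a counterexample is to test individual axioms on concrete numbers. The short Python sketch below (the sample values are arbitrary choices, not prescribed by the notes) checks the identity 1 ⊙ v = v for Example 1 and commutativity of ⊕ for Example 2; either failure alone shows the space is not a vector space.

# Example 1: R^2 with k ⊙ (x, y) = (ky, kx).
def smul(k, v):
    x, y = v
    return (k * y, k * x)

v = (1.0, 2.0)
print(smul(1, v), v)   # (2.0, 1.0) vs (1.0, 2.0): 1 ⊙ v ≠ v, so that axiom fails.

# Example 2: R with x ⊕ y = x^2 + y + 1.
def add(x, y):
    return x**2 + y + 1

print(add(2.0, 3.0), add(3.0, 2.0))   # 8.0 vs 12.0: ⊕ is not even commutative.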
Example 3: Consider R+ = (0, ∞), the space of positive real numbers. Define addition on this space to be x ⊕ y = xy and scalar multiplication to be s ⊙ x = x^s. Prove that this is a vector space.
For instance, axiom V6 (closure under scalar multiplication) holds because x^s > 0 whenever x > 0 and s ∈ R, so s ⊙ x ∈ R+.
We will call the space R+ equipped with x ⊕ y = xy and s ⊙ x = x^s the Exponential Space and denote it E.
Theorem: Inverse and Zero Properties of Vector Spaces
1. 0 ⊙ x = 0 for all x ∈ V (here the 0 on the right is the zero vector of V)
2. (−1) ⊙ x = −x for all x ∈ V, that is, (−1) ⊙ x is the additive inverse of x
3. t ⊙ 0 = 0 for all t ∈ R
Example 4: Demonstrate the prior theorem holds for E, the exponential space.
Example 5: Let V = {(a, b) | a ∈ R, b ∈ R+}. Define addition in this space to be (a, b) ⊕ (c, d) = (ad + bc, bd) and scalar multiplication to be t ⊙ (a, b) = (t a b^(t−1), b^t). Given this is a vector space, determine the zero vector and the additive inverse of any vector using the prior theorem.
4.2.2 Subspaces
Definition: Subspaces
A subspace U of a vector space V is a subset U ⊆ V that is itself a vector space under the same addition and scalar multiplication as V. Consequently, since U ⊆ V, you may think of a subspace as a smaller vector space sitting within another vector space. In practice, one verifies that U is a subspace by checking that it is non-empty and closed under the addition and scalar multiplication of V.
Example 7: Prove that the set U = {A ∈ M (2, 2) | a11 + a22 = 0} is a subspace of M (2, 2).
4.3 Bases and Dimensions
4.3.1 Linear Combinations, Spans and Bases
Theorem: Spanning Sets as Subspaces
If {v1, ..., vk} is a set of vectors in a vector space V and S is the set of all possible linear combinations of these vectors, then S is a subspace of V.
Definition: Span and Spanning Sets
If S is the subspace of the vector space V consisting of all linear combinations of the vectors v1, ..., vk ∈ V, then S is called the subspace spanned by B = {v1, ..., vk}, and we say that the set B spans S. The set B is called a spanning set for the subspace S. We denote S = Span({v1, ..., vk}) = Span(B).
Definition: Linear Dependence and Independence in a Vector Space
If B = {v1, ..., vk} is a set of vectors in a vector space V, then B is said to be linearly independent if the only solution to the equation
(t1 ⊙ v1) ⊕ · · · ⊕ (tk ⊙ vk) = 0
is t1 = · · · = tk = 0; otherwise, there is a non-trivial solution and B is said to be linearly dependent.
Theorem: Unique Representation
Let B = {v1, ..., vn} be a spanning set for a vector space V. Then every vector in V can be expressed in a unique way as a linear combination of the vectors of B if and only if the set B is linearly independent.
Definition: Basis
A set B of vectors in a vector space V is a basis for V if it is a linearly independent spanning set for V.
Example 1: Prove that the set
B = { [  1  2 ]   [ 0  1 ]   [ 2  5 ]
      [ −1  1 ] , [ 3  1 ] , [ 1  3 ] }
is not a basis for the subspace Span(B).
Note: As a reminder, if we don't specify the operations or vector space, you can assume they are the standard operations on the standard space. Hint: 2A + B − C = O, where A, B, and C denote the three matrices above, in order.
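To see where the hint comes from, one can check numerically that 2A + B − C is the zero matrix; this is a non-trivial linear combination equal to O, so the set is linearly dependent and therefore cannot be a basis for Span(B). A NumPy sketch of this check:

import numpy as np

A = np.array([[1, 2], [-1, 1]])
B = np.array([[0, 1], [3, 1]])
C = np.array([[2, 5], [1, 3]])

# A non-trivial dependence relation: 2A + B - C should be the zero matrix.
print(2 * A + B - C)

# Equivalently, flatten the matrices into vectors in R^4 and compare ranks:
# rank 2 with 3 vectors confirms linear dependence.
M = np.column_stack([A.flatten(), B.flatten(), C.flatten()])
print(np.linalg.matrix_rank(M))   # 2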
4.3.2 Determining a Basis of a Subspace
Note: Finding a Basis
Finding a basis of a subspace can be quite difficult. The technique is to first (somehow) determine a spanning set, then either reduce it to, or prove that it already is, a linearly independent set. Finding the spanning set is the creative step, while turning a spanning set into a basis is quite procedural. One technique is to identify the span with the range (column space) of some matrix, and then extract a basis using our earlier results on finding a basis for the columnspace of that matrix.
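As a small illustration of the columnspace technique (the vectors below are arbitrary stand-ins rather than an example from these notes): place the spanning vectors as the columns of a matrix, row reduce, and keep the pivot columns.

import sympy as sp

# An illustrative spanning set in R^3; the third vector is the sum of the first two.
v1 = sp.Matrix([1, 0, 1])
v2 = sp.Matrix([0, 1, 1])
v3 = sp.Matrix([1, 1, 2])

M = sp.Matrix.hstack(v1, v2, v3)

# The pivot columns of M form a basis for Span({v1, v2, v3}).
_, pivots = M.rref()
basis = [M.col(j) for j in pivots]
print(pivots)   # (0, 1)
print(basis)    # the first two vectors form a basis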
Example 2: Determine a basis for the subspace S = {p(x) ∈ P2 | p(1) = 0} of P2 . Hint: Every element in this
space can be written as p(x) = (x − 1)(ax + b) for some constants a and b.
4.3.3 Dimension
Definition: Dimension
If B = {v1 , ..., vn } and C = {u1 , ..., uk } are both bases of a vector space V, then k = n. If a vector space
V has a basis with n vectors, then we say that the dimension of V is n and write dim(V) = n. If a vector
space V does not have a basis with finitely many elements, then V is called infinite-dimensional. The
dimension of the trivial vector space V = {0} is defined to be 0.
Example 4: Determine the dimension of the vector space Span(T ) of the previous example.
3. If dim(V) = n, then a set of n vectors in V is a spanning set for V if and only if it is linearly independent.
Example 5: Let
C = { [ 1  1 ]   [ −2 −1 ]   [ 1  0 ]
      [ 0  1 ] , [  1  1 ] , [ 1  1 ] }.
Extend C to a basis for M(2, 2). Hint: We've worked with M(2, 2) and know that dim(M(2, 2)) = 4. Thus, you need only find an additional element not in the span of these three matrices.
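One way to carry out this kind of extension computationally is to flatten each 2 × 2 matrix into a vector in R^4 and compare ranks: a candidate fourth matrix works precisely when it raises the rank from 3 to 4. A NumPy sketch (the candidate tried below is just an illustrative guess):

import numpy as np

C1 = np.array([[1, 1], [0, 1]])
C2 = np.array([[-2, -1], [1, 1]])
C3 = np.array([[1, 0], [1, 1]])

cols = [M.flatten() for M in (C1, C2, C3)]
print(np.linalg.matrix_rank(np.column_stack(cols)))   # 3: the three matrices are independent

# Try a candidate fourth matrix, here the unit matrix with a 1 in the (1,1) entry.
E11 = np.array([[1, 0], [0, 0]])
cols4 = np.column_stack(cols + [E11.flatten()])
print(np.linalg.matrix_rank(cols4))   # 4: {C1, C2, C3, E11} is a basis for M(2, 2)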
4.4 Coordinates with Respect to a Basis
4.4.1 Bases
Definition: Coordinate Vectors
Suppose that B = {v1, ..., vn} is a basis for the vector space V. If x ∈ V with
x = (x1 ⊙ v1) ⊕ (x2 ⊙ v2) ⊕ · · · ⊕ (xn ⊙ vn),
then the coordinate vector of x with respect to the basis B is
[x]B = (x1  x2  · · ·  xn)^T ∈ Rn.
Example 2: The collection B = {1, x, 1 + x^2} is a basis of P2. Find the B-coordinates of p(x) = 2 + x + 3x^2.
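Finding the B-coordinates here amounts to solving c1 · 1 + c2 · x + c3 · (1 + x^2) = 2 + x + 3x^2 by comparing coefficients. A SymPy sketch of that computation:

import sympy as sp

x = sp.symbols('x')
c1, c2, c3 = sp.symbols('c1 c2 c3')

basis = [sp.Integer(1), x, 1 + x**2]
p = 2 + x + 3*x**2

# Compare coefficients of 1, x, x^2 in c1*1 + c2*x + c3*(1 + x^2) - p.
combo = sp.expand(c1*basis[0] + c2*basis[1] + c3*basis[2] - p)
eqs = [sp.Eq(combo.coeff(x, k), 0) for k in range(3)]
print(sp.solve(eqs, [c1, c2, c3]))   # {c1: -1, c2: 1, c3: 3}, so [p]B = (-1, 1, 3)^T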
4.4.2 Change-of-Basis
Theorem: Linearity of Basis Representations
Let B be a basis for a finite-dimensional vector space V. Then, for any x, y ∈ V and s, t ∈ R we have
[(s ⊙ x) ⊕ (t ⊙ y)]B = s[x]B + t[y]B.
This means that the function gB : V → Rn given by gB(v) = [v]B is a linear function. In the event that V = Rn, we should therefore be able to find a representing matrix [gB].
Consider a general vector space V with two bases B and C = {w1, ..., wn}. Any x ∈ V can be written as
x = (x1 ⊙ w1) ⊕ · · · ⊕ (xn ⊙ wn).
That is, [x]C = (x1  x2  · · ·  xn)^T. Taking B-coordinates and using the linearity of basis representations gives
[x]B = x1[w1]B + · · · + xn[wn]B = ( [w1]B  · · ·  [wn]B ) [x]C.
Let B and C = {w1, ..., wn} both be bases for a vector space V. The matrix
P = ( [w1]B  · · ·  [wn]B )
is called the change of coordinates matrix from C-coordinates to B-coordinates, and it satisfies the change of coordinates equation
[x]B = P[x]C.
Often the emphatic notation P_B^C is used.
Let B and C both be bases for a finite-dimensional vector space V. Let P be the change of coordinates
matrix from C-coordinates to B-coordinates. Then, P is invertible and P −1 is the change of coordinates
matrix from B-coordinates to C-coordinates.
Note: Efficiently Obtaining P_B^C
Example 1: Earlier we considered the bases B = { (1, −1)^T, (1, 1)^T } and E = { (1, 0)^T, (0, 1)^T } for R2. The change of basis matrix from E to B is
P_B^E = ( [(1, 0)^T]B   [(0, 1)^T]B ).
Demonstrate the prior note by setting up the systems of equations for [(1, 0)^T]B and [(0, 1)^T]B and solving them. Demonstrate that the change of basis is consistent with Example 1 by computing P_B^E a⃗, where a⃗ = (3  −2)^T.
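A numerical sketch of this example: each column of P_B^E is obtained by solving the linear system that expresses a standard basis vector in terms of B, and the resulting matrix is then applied to a⃗ = (3, −2)^T.

import numpy as np

# The vectors of B as the columns of a matrix.
Bmat = np.column_stack([[1, -1], [1, 1]])

# [(1,0)^T]_B and [(0,1)^T]_B solve Bmat @ c = e1 and Bmat @ c = e2; doing both at
# once amounts to computing Bmat^{-1}.
P_BE = np.linalg.solve(Bmat, np.eye(2))
print(P_BE)            # [[0.5 -0.5], [0.5 0.5]]

a = np.array([3.0, -2.0])
print(P_BE @ a)        # [2.5  0.5] = [a]_B

# Sanity check: 2.5*(1, -1) + 0.5*(1, 1) reproduces a.
print(2.5 * np.array([1, -1]) + 0.5 * np.array([1, 1]))   # [ 3. -2.]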
Example 2: In this example we introduce another way to obtain a change of basis matrix from the standard basis (specifically) to another basis. Let C = {1, x, x^2} be the standard basis of P2 and let B = {1 − x^2, x, −2 + x^2}. Find the change of coordinates matrix from B to C. Then, use the inverse to find the change of coordinates matrix from C to B. Do you feel this is conceptually simpler?
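A sketch of the computation suggested here: writing each polynomial of B in the standard basis C = {1, x, x^2} gives the columns of the change of coordinates matrix from B-coordinates to C-coordinates, and inverting it gives the matrix from C-coordinates to B-coordinates.

import sympy as sp

# Columns: C-coordinates of 1 - x^2, x, and -2 + x^2.
P_CB = sp.Matrix([[ 1, 0, -2],
                  [ 0, 1,  0],
                  [-1, 0,  1]])

# Change of coordinates matrix from C-coordinates to B-coordinates.
P_BC = P_CB.inv()
print(P_BC)

# Check with an arbitrary test polynomial p(x) = 3 + x, i.e. [p]C = (3, 1, 0)^T.
p_C = sp.Matrix([3, 1, 0])
p_B = P_BC * p_C
print(p_B)            # the B-coordinates of p
print(P_CB * p_B)     # recovers (3, 1, 0)^T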
4.5 General Linear Mappings
4.5.1 General Linearity Conditions
Definition: Linear Mappings
If V and W are vector spaces over R, a function L : V → W is a linear mapping if it satisfies the linearity
properties
Property                                   Name
L(x ⊕V y) = L(x) ⊕W L(y)                   Preserves addition
L(t ⊙V x) = t ⊙W L(x)                      Preserves scalar multiplication
for all x, y ∈ V and t ∈ R, where ⊕V and ⊙V are the operations of addition and scalar multiplication on V, and ⊕W and ⊙W are the operations of addition and scalar multiplication on W. If V = W, then L may be called a linear operator.
Example 1: Let L : M(2, 2) → P2 be defined by L(A) = a21 + (a12 + a22)x + a11 x^2. Prove that L is a linear mapping.
Example 2: Define L : E → R by L(x) = ln(x). Prove that L is a linear function. Note: The function is most
certainly not linear if it is mapping L : R+ → R as usually done in calculus!
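The point is that linearity must be checked against the operations of E, not the usual ones. A quick numerical spot-check (the sample values are arbitrary):

import numpy as np

def E_add(x, y):      # addition in the exponential space E
    return x * y

def E_smul(s, x):     # scalar multiplication in E
    return x ** s

x, y, s = 2.0, 5.0, 3.0

# L(x ⊕ y) = L(x) + L(y), since ln(xy) = ln(x) + ln(y).
print(np.isclose(np.log(E_add(x, y)), np.log(x) + np.log(y)))   # True

# L(s ⊙ x) = s L(x), since ln(x^s) = s ln(x).
print(np.isclose(np.log(E_smul(s, x)), s * np.log(x)))          # True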
4.5.2 The General Rank-Nullity Theorem
Definition: The Range and Nullspace/Kernel of a General Linear Mapping
Let V and W be vector spaces over R. The range of a linear mapping L : V → W is defined to be the set
Range(L) = {L(x) ∈ W | x ∈ V}
The nullspace (or kernel) of L is the set of all vectors in V whose image under L is the zero vector 0W.
We write
Null(L) = {x ∈ V | L(x) = 0W }
Let V and W be vector spaces and let L : V → W be a linear mapping. Then, Null(L) is a subspace of V
and Range(L) is a subspace of W.
Example 3: Consider the mapping L : M(2, 2) → P2 from an earlier example given by L(A) = a21 + (a12 + a22)x + a11 x^2. Determine whether 1 + x + x^2 ∈ Range(L), and if it is, determine a matrix A such that L(A) = 1 + x + x^2. Afterwards, determine the nullspace of L.
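A SymPy sketch of the two computations in this example: matching the coefficients of 1 + x + x^2 against L(A) to produce one suitable matrix A, and then solving L(A) = 0 to describe the nullspace.

import sympy as sp

x = sp.symbols('x')
a11, a12, a21, a22 = sp.symbols('a11 a12 a21 a22')

L_A = a21 + (a12 + a22)*x + a11*x**2

# Range membership: match coefficients of 1, x, x^2 against 1 + x + x^2.
target = 1 + x + x**2
eqs = [sp.Eq(sp.expand(L_A - target).coeff(x, k), 0) for k in range(3)]
print(sp.solve(eqs, [a11, a12, a21, a22]))
# a11 = 1, a21 = 1, a12 = 1 - a22; for instance A = [[1, 1], [1, 0]] works.

# Nullspace: L(A) = 0 forces a11 = 0, a21 = 0 and a12 = -a22.
eqs0 = [sp.Eq(sp.expand(L_A).coeff(x, k), 0) for k in range(3)]
print(sp.solve(eqs0, [a11, a12, a21, a22]))
# so Null(L) consists of the matrices [[0, t], [0, -t]], t in R.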
Let V and W be vector spaces and let L : V → W be a linear mapping. Then, L(0V ) = 0W .
Example 4: Demonstrate the prior theorem is true using L : E → R given by L(x) = ln(x) of Example 2.
Example 5: Determine a basis for the range and nullspace of the linear mapping L : P1 → R3 defined by
L(a + bx) = (0, 0, a − 2b)^T.
Definition: Rank and Nullity
Let V and W be vector spaces over R. The rank of a linear mapping L : V → W is the dimension of the range of L, that is, rank(L) = dim(Range(L)). The nullity of L is the dimension of the nullspace of L, that is, nullity(L) = dim(Null(L)).
Theorem: The Rank-Nullity Theorem
Let V and W be vector spaces over R with dim(V) = n, and let L : V → W be a linear mapping. Then,
rank(L) + nullity(L) = n.
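For the mapping of Example 5, identify a + bx with the coefficient vector (a, b)^T; then L is represented by a 3 × 2 matrix, and the rank and nullity can be read off directly. A SymPy sketch verifying that they add up to dim(P1) = 2:

import sympy as sp

# L(a + bx) = (0, 0, a - 2b)^T acting on coefficient vectors (a, b)^T.
M = sp.Matrix([[0,  0],
               [0,  0],
               [1, -2]])

rank = M.rank()
nullity = M.cols - rank
print(rank, nullity)    # 1 1, and 1 + 1 = 2 = dim(P1)

print(M.columnspace())  # basis for the range: the single vector (0, 0, 1)^T
print(M.nullspace())    # basis for the nullspace: (2, 1)^T, i.e. the polynomial 2 + x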
4.6 Matrix of a Linear Mapping
4.6.1 The Matrix of L with Respect to the Basis B
Note: The Representing Matrix of a Mapping in Another Basis
In this subsection we are concerned with the representing matrix of L with respect to another basis B. Specifically, we want everything stated in the language of B with nothing in the language of the standard basis. That is, for an input [⃗x]B the output is [L(⃗x)]B, and we seek a matrix A such that [L(⃗x)]B = A[⃗x]B, which we will call [L]B.
Let B = {⃗v1, ..., ⃗vn} be a basis for Rn and let L : Rn → Rn be a linear operator. Then, for any ⃗x ∈ Rn, we can write ⃗x = b1⃗v1 + · · · + bn⃗vn. Thus by linearity,
L(⃗x) = b1 L(⃗v1) + · · · + bn L(⃗vn),
and taking B-coordinates of both sides gives
[L(⃗x)]B = b1[L(⃗v1)]B + · · · + bn[L(⃗vn)]B = ( [L(⃗v1)]B  · · ·  [L(⃗vn)]B ) [⃗x]B.
Let V be a vector space. Suppose that B = {v1 , ..., vn } is any basis for V and that L : V → V is a linear
operator. Define the matrix of the linear operator L with respect to the basis B to be the matrix
[L]B = ( [L(v1)]B  · · ·  [L(vn)]B )
where we have for any x ∈ V, [L(x)]B = [L]B [x]B .
Example 1: Let L : R2 → R2 be given by L(x1, x2) = (x2, x1) and let B = { (1, −1)^T, (1, 1)^T }. Determine [L]B.
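A numerical sketch of this computation using the definition directly: apply L to each basis vector, solve for the B-coordinates of the result, and place those coordinate vectors as the columns of [L]B.

import numpy as np

def L(v):                        # L(x1, x2) = (x2, x1)
    return np.array([v[1], v[0]])

v1 = np.array([1.0, -1.0])
v2 = np.array([1.0, 1.0])
P = np.column_stack([v1, v2])    # the basis vectors as columns

# Columns of [L]_B are the B-coordinates of L(v1) and L(v2).
LB = np.column_stack([np.linalg.solve(P, L(v1)), np.linalg.solve(P, L(v2))])
print(LB)   # [[-1. 0.], [0. 1.]]: L flips v1 and fixes v2.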
4.6.2 Change of Coordinates and Linear Mappings
Note: Another Way to Obtain [L]B
Using the prior note, you may extend the logic to find the matrix representation of the linear mapping if the input is in basis B and the output is in basis C. This is simply [L]_C^B = P_C^S [L] P_S^B, where S denotes the standard basis, which is very useful.
Example 2: Let L be the linear mapping with
[L] = A = [  2  1  3 ]
          [ −1  2  2 ]
          [ −2  3  1 ]
and let B = { (1, 1, 0)^T, (1, 1, 1)^T, (0, 1, 1)^T }. Determine [L]B.
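A sketch of this computation using the formula from the note with C = B: if P = P_S^B has the vectors of B as its columns, then [L]B = P^{-1} A P.

import numpy as np

A = np.array([[ 2, 1, 3],
              [-1, 2, 2],
              [-2, 3, 1]], dtype=float)

# Columns of P are the vectors of B written in the standard basis, so P = P_S^B.
P = np.column_stack([[1, 1, 0], [1, 1, 1], [0, 1, 1]]).astype(float)

LB = np.linalg.inv(P) @ A @ P
print(np.round(LB, 10))

# Sanity check: [L(x)]_B = [L]_B [x]_B for an arbitrary x.
x = np.array([1.0, 2.0, -1.0])
print(np.allclose(np.linalg.solve(P, A @ x), LB @ np.linalg.solve(P, x)))   # True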