
Chapter 4

Vector Spaces

4.1 Spaces of Polynomials
4.1.1 Polynomial Vector Space
Definition: Polynomial Space and the Standard Basis of Polynomial Space

The collection of polynomials of degree at most n is denoted Pn. The set {1, x, x^2, ..., x^n} is called the
monomial basis (i.e., the standard basis) for Pn.

Definition: Algebra of Polynomials

Let p, q ∈ Pn. If p(x) = a_n x^n + · · · + a_1 x + a_0, q(x) = b_n x^n + · · · + b_1 x + b_0, and t is a scalar, then


polynomial addition is defined as

p(x) + q(x) = (a_n + b_n)x^n + · · · + (a_1 + b_1)x + (a_0 + b_0)


and scalar multiplication is defined as

tp(x) = (t a_n)x^n + · · · + (t a_1)x + (t a_0)
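
Note (Python sketch): Since a polynomial in Pn is determined by its n + 1 coefficients, both operations above are just componentwise operations on coefficient vectors. A minimal sketch using numpy, with the coefficient order [a_0, ..., a_n] as our own convention:

import numpy as np

# Represent p(x) = a_0 + a_1 x + ... + a_n x^n by the array [a_0, a_1, ..., a_n].
p = np.array([1.0, 2.0, 3.0])    # p(x) = 1 + 2x + 3x^2 in P2
q = np.array([4.0, 0.0, -1.0])   # q(x) = 4 - x^2 in P2

print(p + q)      # polynomial addition: coefficients (a_i + b_i)
print(2.5 * p)    # scalar multiplication: coefficients (t a_i)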

Theorem: Properties of Polynomials

Let p(x), q(x) and r(x) be polynomials in Pn and let s, t ∈ R. Then...

Closure under addition: p(x) + q(x) is a polynomial of degree at most n.

Commutativity of addition: p(x) + q(x) = q(x) + p(x).

Associativity of addition: (p(x) + q(x)) + r(x) = p(x) + (q(x) + r(x)).

Existence of zero: there is a polynomial o(x) such that o(x) + s(x) = s(x) for every polynomial s(x).

Existence of inverse: for each s(x) there exists a −s(x) such that s(x) + (−s(x)) = o(x).

Closure under scalar multiplication: tp(x) is a polynomial of degree at most n.

Associativity of scalar multiplication: s(tp(x)) = (st)p(x).

Distributivity over scalar addition: (s + t)p(x) = sp(x) + tp(x).

Distributivity over polynomial addition: t(p(x) + q(x)) = tp(x) + tq(x).

Scalar identity: 1 · s(x) = s(x) for all s(x).

4.1.2 Linear Combinations, Spans, and Linear Dependence/Independence
Definition: Linear Combination of Polynomials

A linear combination of polynomials p_1(x), ..., p_n(x) is given by a sum of the form

t_1 p_1(x) + · · · + t_n p_n(x)

for scalars t_1, ..., t_n ∈ R.

Definition: Span of Polynomials

Let B = {p_1(x), ..., p_k(x)} be a set of polynomials of degree at most n. Then the span of B is defined as

Span(B) = {t_1 p_1(x) + · · · + t_k p_k(x) | t_1, ..., t_k ∈ R}

Example 1: Determine if p(x) = 1 + 2x + 3x^2 is in the span of B = {1 + x, 1 + x^2, 1 + x + x^2}. If so, write
p(x) as a linear combination of the polynomials in B.
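
Note (Python sketch): Equating coefficients of 1, x, and x^2 turns the question into a 3 × 3 linear system. A sketch of that computation using numpy:

import numpy as np

# Columns: coefficient vectors (order 1, x, x^2) of 1 + x, 1 + x^2, 1 + x + x^2.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])   # coefficients of p(x) = 1 + 2x + 3x^2

t = np.linalg.solve(A, b)       # A is invertible here, so p is in Span(B)
print(t)                        # [-2, -1, 4]: p = -2(1 + x) - (1 + x^2) + 4(1 + x + x^2)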

Definition: Linear Dependence and Independence of Polynomials

The set B = {p_1(x), ..., p_k(x)} is said to be linearly independent if the only solution to the equation

t_1 p_1(x) + · · · + t_k p_k(x) = 0

is t_1 = · · · = t_k = 0. Otherwise, there is a solution in which not all t_i are zero, and B is said to be linearly
dependent.

Example 2: Determine if the set B = {2 − x^2, 3x, −2 + x + x^2} is linearly dependent or independent.
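
Note (Python sketch): One way to check, assuming numpy: put the coefficient vectors in the columns of a matrix; for a square matrix, a zero determinant signals dependence (use the rank for non-square cases).

import numpy as np

# Columns: coefficient vectors (order 1, x, x^2) of 2 - x^2, 3x, -2 + x + x^2.
M = np.array([[ 2.0, 0.0, -2.0],
              [ 0.0, 3.0,  1.0],
              [-1.0, 0.0,  1.0]])

print(np.linalg.det(M))   # 0.0, so B is linearly dependent
# Indeed 3(2 - x^2) - (3x) + 3(-2 + x + x^2) = 0 is a non-trivial relation.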

4.2 Vector Spaces
4.2.1 Vector Spaces
Definition: Vector Spaces

We refer the reader to the appendix for the requirements of an algebraic space to be called a Vector Space.

Theorem: Examples of Vector Spaces

The following are vector spaces:

1. The space Rn equipped with vector addition and scalar multiplication.

2. The space M (m, n) of all m × n matrices equipped with matrix addition and scalar multiplication.

3. The space Pn equipped with polynomial addition and scalar multiplication.

4. The space of functions F(a, b) = {f |f : (a, b) ⊆ R → R} equipped with function addition and scalar
multiplication.

Example 1: Consider the space R2 with addition defined to be standard vector addition ⊕ = +, and scalar
multiplication defined by k ⊙ (x, y) = (ky, kx). By finding a counterexample, show that this is not a vector space.

Example 2: Consider the space R with addition defined by x ⊕ y = x^2 + y + 1 and scalar multiplication
defined by standard multiplication ⊙ = ·. By finding a counterexample, show that this is not a vector space.

Example 3: Consider R+ = (0, ∞), the space of positive real numbers. Define addition on this space to be
x ⊕ y = xy and scalar multiplication to be s ⊙ x = x^s. Prove that this is a vector space.

V1: Closure under addition

V2: Commutativity of addition

V3: Associativity of addition

V4: Existence of zero

V5: Additive inverse

V6: Closure under scalar multiplication

V7: Associativity of scalar multiplication

V8: Distributivity of scalar addition

V9: Distributivity of addition

V10: 1 is the scalar identity
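
Note (Python sketch): Before writing the proof, the axioms can be spot-checked numerically. A sketch using numpy; the candidate zero vector 1 and inverse 1/x used in V4 and V5 are assumptions the written proof must justify.

import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.uniform(0.5, 5.0, size=3)   # elements of R+ = (0, infinity)
s, t = rng.normal(size=2)                 # scalars

add = lambda a, b: a * b                  # x (+) y = xy
smul = lambda c, a: a ** c                # s (.) x = x^s

assert np.isclose(add(x, y), add(y, x))                             # V2
assert np.isclose(add(add(x, y), z), add(x, add(y, z)))             # V3
assert np.isclose(add(x, 1.0), x)                                   # V4: candidate zero is 1
assert np.isclose(add(x, 1.0 / x), 1.0)                             # V5: candidate inverse is 1/x
assert np.isclose(smul(s, smul(t, x)), smul(s * t, x))              # V7: (x^t)^s = x^(st)
assert np.isclose(smul(s + t, x), add(smul(s, x), smul(t, x)))      # V8: x^(s+t) = x^s x^t
assert np.isclose(smul(t, add(x, y)), add(smul(t, x), smul(t, y ))) # V9: (xy)^t = x^t y^t
assert np.isclose(smul(1.0, x), x)                                  # V10: x^1 = x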

Definition: Exponential Space

We will call the space R+ equipped with x ⊕ y = xy and s ⊙ x = x^s the Exponential Space and denote it E.

Theorem: Inverse and Zero Properties of Vector Spaces

Let V be a vector space. Then...

1. 0 ⊙ x = 0 for all x ∈ V (the scalar 0 times any vector is the zero vector)

2. (−1) ⊙ x = −x for all x ∈ V

3. t ⊙ 0 = 0 for all t ∈ R (here 0 denotes the zero vector)

Example 4: Demonstrate the prior theorem holds for E, the exponential space.

Example 5: Let V = {(a, b) | a ∈ R, b ∈ R+}. Define addition in this space to be (a, b) ⊕ (c, d) = (ad + bc, bd)
and scalar multiplication to be t ⊙ (a, b) = (t a b^(t−1), b^t). Given this is a vector space, determine the zero vector and
the additive inverse of any vector using the prior theorem.
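
Note (sketch of the computation): One possible route, using the theorem directly: by part 1 the zero vector is 0 ⊙ (a, b) = (0 · a · b^(−1), b^0) = (0, 1), and by part 2 the additive inverse is (−1) ⊙ (a, b) = (−a b^(−2), b^(−1)). One can check directly that (a, b) ⊕ (0, 1) = (a, b) and (a, b) ⊕ (−a b^(−2), b^(−1)) = (0, 1).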

4.2.2 Subspaces
Definition: Subspaces

We refer the reader to the appendix for the definition of a subspace.

Theorem: Subspaces are (“Smaller”) Vector Spaces

A subspace of a vector space is a vector space in its own right. Consequently, since U ⊆ V, you may think
of a subspace as a smaller vector space sitting inside another vector space.

Example 6: Let U = {p(x) ∈ P3 | p(3) = 0}. Show that U is a subspace of P3 .

Example 7: Prove that the set U = {A ∈ M (2, 2) | a_11 + a_22 = 0} is a subspace of M (2, 2).
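
Note (Python sketch): A numeric spot-check of the two closure conditions in Example 7, assuming numpy; the subspace test itself still needs the written proof.

import numpy as np

rng = np.random.default_rng(1)

def random_trace_zero():
    # An element of U: choose a11, a12, a21 freely and force a22 = -a11.
    a11, a12, a21 = rng.normal(size=3)
    return np.array([[a11, a12], [a21, -a11]])

A, B = random_trace_zero(), random_trace_zero()
t = rng.normal()
assert np.isclose(np.trace(A + B), 0.0)   # closed under addition
assert np.isclose(np.trace(t * A), 0.0)   # closed under scalar multiplication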

4.3 Bases and Dimensions
4.3.1 Linear Combinations, Spans and Bases
Theorem: Spanning Sets as Subspaces

If {v1 , ..., vk } is a set of vectors in a vector space V and S is the set of all possible linear combinations of
these vectors,

S = Span({v1 , ..., vk }) = {(t1 ⊙ v1 ) ⊕ · · · ⊕ (tk ⊙ vk ) | t1 , ..., tk ∈ R}


then S is a subspace of V.

Definition: Spanning Sets of a Vector Space

If S is a subspace of the vector space V consisting of all linear combinations of vectors v1 , ..., vk ∈ V, then S
is called the subspace spanned by B = {v1 , ..., vk }, and we say that the set B spans S. The set B is called
a spanning set for the subspace S. We denote S = Span({v1 , ..., vk }) = Span(B).

Definition: Linear Dependence and Independence

If B = {v_1, ..., v_k} is a set of vectors in a vector space V, then B is said to be linearly independent if the
only solution to the equation
(t_1 ⊙ v_1) ⊕ · · · ⊕ (t_k ⊙ v_k) = 0
is t_1 = · · · = t_k = 0; otherwise, there is a non-trivial solution and B is said to be linearly dependent.

Theorem: Unique Representation Theorem

Let B = {v1 , ..., vn } be a spanning set for a vector space V. Then every vector in V can be expressed in a
unique way as a linear combination of the vectors of B if and only if the set B is linearly independent.

Definition: Basis of a Vector Space

A set B of vectors in a vector space V is a basis if it is a linearly independent spanning set for V.
     
Example 1: Prove that the set

B = { [  1  2 ] , [ 0  1 ] , [ 2  5 ] }
      [ −1  1 ]   [ 3  1 ]   [ 1  3 ]

is not a basis for the subspace Span(B). Note: As a reminder, if we don't specify the operations or vector space,
you can assume they are the standard operations on the standard space. Hint: labeling the three matrices A, B, C
in the order listed, 2A + B − C = O.
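
Note (Python sketch): The hinted relation is easy to verify numerically, assuming numpy:

import numpy as np

A = np.array([[1, 2], [-1, 1]])
B = np.array([[0, 1], [3, 1]])
C = np.array([[2, 5], [1, 3]])

print(2 * A + B - C)   # the zero matrix: a non-trivial dependence relation, so B is not a basis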

4.3.2 Determining a Basis of a Subspace
Note: Finding a Basis

Finding a basis of a subspace can be quite difficult. The technique is to first (somehow) determine a spanning
set, then reduce it to, or prove that it is, a linearly independent set. Finding the spanning set is the creative
step, while turning a spanning set into a basis is quite procedural. One technique is to identify the span
with the column space of a matrix, then read off a basis using our results on bases of the column space of
that matrix.

Example 2: Determine a basis for the subspace S = {p(x) ∈ P2 | p(1) = 0} of P2. Hint: Every element of this
space can be written as p(x) = (x − 1)(ax + b) for some constants a and b.

Example 3: Determine a basis for Span(T ) where T = {1 + x − 2x^2, 2 − x + x^2, 1 − 2x + 3x^2, 1 + 5x + 3x^2}.
Hint: To speed things up, you are given the fact that

[  1  2  1  1 ]   RREF   [ 1  0 −1  0 ]
[  1 −1 −2  5 ]    ∼     [ 0  1  1  0 ]
[ −2  1  3  3 ]          [ 0  0  0  1 ]
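
Note (Python sketch): The hinted row reduction can be reproduced with sympy; the pivot columns identify which of the four polynomials form a basis of Span(T ).

import sympy as sp

# Columns: coefficient vectors (order 1, x, x^2) of the four polynomials in T.
M = sp.Matrix([[ 1,  2,  1, 1],
               [ 1, -1, -2, 5],
               [-2,  1,  3, 3]])

R, pivots = M.rref()
print(R)        # matches the given RREF
print(pivots)   # (0, 1, 3): the 1st, 2nd, and 4th polynomials form a basis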

4.3.3 Dimension
Definition: Dimension
If B = {v_1, ..., v_n} and C = {u_1, ..., u_k} are both bases of a vector space V, then k = n (so the following
is well-defined). If a vector space V has a basis with n vectors, then we say that the dimension of V is n and
write dim(V) = n. If a vector space V does not have a basis with finitely many elements, then V is called
infinite-dimensional. The dimension of the trivial vector space V = {0} is defined to be 0.

Example 4: Determine the dimension of the vector space Span(T ) of the previous example.

4.3.4 Extending a Linearly Independent Subset to a Basis


Theorem: Dimension and Linear Independency

Let V be an n-dimensional vector space. Then

1. A set of more than n vectors in V must be linearly dependent.

2. A set of fewer than n vectors cannot span V.

3. A set with n elements of V is a spanning set for V if and only if it is linearly independent.
     
Example 5: Let

C = { [ 1  1 ] , [ −2 −1 ] , [ 1  0 ] }
      [ 0  1 ]   [  1  1 ]   [ 1  1 ]

Extend C to a basis for M (2, 2). Hint: We've worked with M (2, 2) and know that dim(M (2, 2)) = 4. Thus, you
need only find one additional element not in the span of these three matrices.
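
Note (Python sketch): A rank computation confirms a candidate fourth element, assuming numpy. The matrix E11 below is one hypothetical choice; any matrix outside Span(C) would do.

import numpy as np

# Flatten each 2x2 matrix to a vector in R^4 (row-major: a11, a12, a21, a22).
M = np.array([[ 1,  1, 0, 1],    # first matrix in C
              [-2, -1, 1, 1],    # second
              [ 1,  0, 1, 1],    # third
              [ 1,  0, 0, 0]])   # candidate fourth element: E11

print(np.linalg.matrix_rank(M))  # 4, so C together with E11 is a basis of M(2, 2)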

4.4 Coordinates with Respect to a Basis
4.4.1 Bases
Definition: Coordinate Vectors
Suppose that B = {v_1, ..., v_n} is a basis for the vector space V. If x ∈ V with

x = (x_1 ⊙ v_1) ⊕ (x_2 ⊙ v_2) ⊕ · · · ⊕ (x_n ⊙ v_n)

then the coordinate vector of x with respect to the basis B is

        [ x_1 ]
[x]_B = [ x_2 ]
        [  ⋮  ]
        [ x_n ]

Example 1: Consider the bases of R2 (you don't need to check this)

B = {v_1, v_2} = {(1, −1)^T, (1, 1)^T}   and   E = {e_1, e_2} = {(1, 0)^T, (0, 1)^T}

Find [a]_B and [a]_E where a = (3, −2)^T.

Example 2: The collection B = {1, x, 1 + x^2} is a basis of P2. Find the B-coordinates of p(x) = 2 + x + 3x^2.
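
Note (Python sketch): Both coordinate computations reduce to solving a linear system whose columns are the basis vectors; a sketch using numpy:

import numpy as np

# Example 1: columns of A are the vectors of B.
A = np.array([[ 1.0, 1.0],
              [-1.0, 1.0]])
a = np.array([3.0, -2.0])
print(np.linalg.solve(A, a))   # [a]_B = (5/2, 1/2); [a]_E is just (3, -2)

# Example 2: columns are coefficient vectors (order 1, x, x^2) of 1, x, 1 + x^2.
P = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
p = np.array([2.0, 1.0, 3.0])
print(np.linalg.solve(P, p))   # [p]_B = (-1, 1, 3)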

4.4.2 Change-of-Basis
Theorem: Linearity of Basis Representations

Let B be a basis for a finite dimensional vector space V. Then, for any x, y ∈ V and s, t ∈ R we have

[(s ⊙ x) ⊕ (t ⊙ y)]B = s[x]B + t[y]B

Note: Matrix Representation of Coordinate Representations

This means that the function gB : V → Rn given by gB (v) = [v]B is a linear function. In the event that
V = Rn this means that we should be able to find a representing matrix [gB ].

Note: Development of the Change-of-Basis Matrix

Consider a general vector space V with two bases B and C = {w1 , ..., wn }.

Let x ∈ V and write x as a linear combination of the vectors in C,

x = (x_1 ⊙ w_1) ⊕ · · · ⊕ (x_n ⊙ w_n)

That is, [x]_C = (x_1, x_2, ..., x_n)^T. Taking B-coordinates gives

[x]_B = [(x_1 ⊙ w_1) ⊕ · · · ⊕ (x_n ⊙ w_n)]_B
      = x_1[w_1]_B + · · · + x_n[w_n]_B
      = [ [w_1]_B · · · [w_n]_B ] (x_1, ..., x_n)^T
      = [ [w_1]_B · · · [w_n]_B ] [x]_C

Theorem: Change-of-Basis Matrix

Let B and C = {w_1, ..., w_n} both be bases for a vector space V. The matrix

P = [ [w_1]_B · · · [w_n]_B ]

is called the change of coordinates matrix from C-coordinates to B-coordinates and satisfies

[x]_B = P [x]_C

which is called the change of coordinates equation. Often the emphatic notation P_B^C is used.

Theorem: Inverting the Matrix Reverses the Change-of-Basis, i.e. (P_B^C)^(−1) = P_C^B

Let B and C both be bases for a finite-dimensional vector space V. Let P be the change of coordinates
matrix from C-coordinates to B-coordinates. Then, P is invertible and P^(−1) is the change of coordinates
matrix from B-coordinates to C-coordinates.

Note: Efficiently Obtaining P_B^C

Let B = {v_1, v_2, ..., v_n}. One will note from prior examples that forming [x]_B requires solving a
system of linear equations. Hence, when forming P_B^C = [ [w_1]_B · · · [w_n]_B ] you need to solve n systems
of linear equations, namely (t_1 ⊙ v_1) ⊕ · · · ⊕ (t_n ⊙ v_n) = w_k for every 1 ≤ k ≤ n. However, the left-hand
side (i.e. the coefficient matrix) is always the same. Hence, you may efficiently solve for P_B^C by solving the
multi-augmented system

                                   RREF
[ v_1 · · · v_n | w_1 · · · w_n ]   ∼    [ I | P_B^C ]

       
Example 1: Earlier we considered the bases B = {(1, −1)^T, (1, 1)^T} and E = {(1, 0)^T, (0, 1)^T} for R2. The
change of basis matrix from E to B is P_B^E = [ [(1, 0)^T]_B  [(0, 1)^T]_B ]. Demonstrate the prior note by setting up the
systems of equations for [(1, 0)^T]_B and [(0, 1)^T]_B and solving them. Demonstrate the change-of-basis is consistent with
Example 1 of the previous subsection by computing P_B^E a where a = (3, −2)^T.
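
Note (Python sketch): A sketch of the multi-augmented computation using sympy:

import sympy as sp

# Multi-augmented system [ v1 v2 | e1 e2 ] for B = {(1, -1), (1, 1)} and E.
M = sp.Matrix([[ 1, 1, 1, 0],
               [-1, 1, 0, 1]])

R, _ = M.rref()
P = R[:, 2:]                    # right block after row reduction is P_B^E
print(P)                        # Matrix([[1/2, -1/2], [1/2, 1/2]])
print(P * sp.Matrix([3, -2]))   # [a]_B = (5/2, 1/2), consistent with the earlier Example 1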


Example 2: In this example we introduce another way to obtain a change of basis matrix from the standard
basis (specifically) to another basis. Let C = {1, x, x^2} be the standard basis of P2 and let B = {1 − x^2, x, −2 + x^2}.
Find the change of coordinates matrix from B to C. Then, use the inverse to find the change of coordinates matrix
from C to B. Do you feel this is conceptually simpler?
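
Note (Python sketch): Reading off coefficients of the B-polynomials gives the matrix from B to C immediately, and inverting it gives the matrix from C to B. A sketch using sympy:

import sympy as sp

# Columns: C-coordinates (order 1, x, x^2) of 1 - x^2, x, -2 + x^2.
P_CB = sp.Matrix([[ 1, 0, -2],
                  [ 0, 1,  0],
                  [-1, 0,  1]])

P_BC = P_CB.inv()   # change of coordinates matrix from C to B
print(P_BC)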

4.5 General Linear Mappings
4.5.1 General Linearity Conditions
Definition: Linear Mappings

If V and W are vector spaces over R, a function L : V → W is a linear mapping if it satisfies the linearity
properties

Additive Linearity: L(x ⊕_V y) = L(x) ⊕_W L(y)

Scalar Linearity: L(t ⊙_V x) = t ⊙_W L(x)

for all x, y ∈ V and t ∈ R, where ⊕_V and ⊙_V are the operations of addition and scalar multiplication on V,
and ⊕_W and ⊙_W are the operations of addition and scalar multiplication on W. If V = W, then
L may be called a linear operator.

Example 1: Let L : M (2, 2) → P2 be defined by L(A) = a_21 + (a_12 + a_22)x + a_11 x^2. Prove that L is a linear
mapping.

Example 2: Define L : E → R by L(x) = ln(x). Prove that L is a linear function. Note: The function is most
certainly not linear when viewed as a map L : R+ → R with the usual operations, as in calculus!
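
Note (Python sketch): The two linearity properties follow from the logarithm laws ln(xy) = ln x + ln y and ln(x^t) = t ln x; a numeric spot-check assuming numpy:

import numpy as np

rng = np.random.default_rng(2)
x, y = rng.uniform(0.5, 5.0, size=2)   # elements of E = (0, infinity)
t = rng.normal()

L = np.log
assert np.isclose(L(x * y), L(x) + L(y))   # L(x (+) y) = L(x) + L(y)
assert np.isclose(L(x ** t), t * L(x))     # L(t (.) x) = t L(x)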

4.5.2 The General Rank-Nullity Theorem
Definition: The Range and Nullspace/Kernel of a General Linear Mapping

Let V and W be vector spaces over R. The range of a linear mapping L : V → W is defined to be the set

Range(L) = {L(x) ∈ W | x ∈ V}

The nullspace (or kernel) of L is the set of all vectors in V whose image under L is the zero vector 0_W.
We write

Null(L) = {x ∈ V | L(x) = 0_W}

Theorem: The Nullspace and Range are Subspaces

Let V and W be vector spaces and let L : V → W be a linear mapping. Then, Null(L) is a subspace of V
and Range(L) is a subspace of W.

Example 3: Consider the mapping L : M (2, 2) → P2 from an earlier example given by L(A) = a_21 +
(a_12 + a_22)x + a_11 x^2. Determine whether 1 + x + x^2 ∈ Range(L), and if so, determine a matrix A such that
L(A) = 1 + x + x^2. Afterwards, determine the nullspace of L.

Theorem: Linear Mappings Fix the Zero Vector

Let V and W be vector spaces and let L : V → W be a linear mapping. Then, L(0_V) = 0_W.

Example 4: Demonstrate the prior theorem is true using L : E → R given by L(x) = ln(x) of Example 2.

Example 5: Determine a basis for the range and nullspace of the linear mapping L : P1 → R3 defined by

L(a + bx) = (0, 0, a − 2b)^T

Definition: The Rank of a General Linear Mapping

Let V and W be vector spaces over R. The rank of a linear mapping L : V → W is the dimension of the
range of L, that is, rank(L) = dim(Range(L)).

Definition: The Nullity of a General Linear Mapping

Let V and W be vector spaces over R. The nullity of a linear mapping L : V → W is the dimension of
the nullspace of L, that is, nullity(L) = dim(Null(L)).

Theorem: The Rank-Nullity Theorem for a General Linear Mapping

Let V and W be vector spaces over R with dim(V) = n, and let L : V → W be a linear mapping. Then,

rank(L) + nullity(L) = n

Example 6: Confirm the Rank-Nullity Theorem in the previous example.
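
Note (Python sketch): Relative to the basis {1, x} of P1 and the standard basis of R3, the mapping of Example 5 has a matrix whose columns are L(1) and L(x), so the confirmation reduces to a rank computation (assuming numpy):

import numpy as np

# Columns: L(1) = (0, 0, 1) and L(x) = (0, 0, -2).
A = np.array([[0.0,  0.0],
              [0.0,  0.0],
              [1.0, -2.0]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank
print(rank, nullity, rank + nullity)   # 1 1 2, and dim(P1) = 2 as the theorem requires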

4.6 Matrix of a Linear Mapping
4.6.1 The Matrix of L with Respect to the Basis B
Note: The Representing Matrix of a Mapping in Another Basis

In this subsection we are concerned with the representing matrix of L in another basis B. Specifically, we
want everything stated in the language of B with nothing in the language of the standard basis. That is, for
an input [x]_B the output is [L(x)]_B, and we seek a matrix A such that [L(x)]_B = A[x]_B, which we will call [L]_B.

Let B = {v_1, ..., v_n} be a basis for Rn and let L : Rn → Rn be a linear operator. Then, for any x ∈ Rn, we
can write x = b_1 v_1 + · · · + b_n v_n. Thus by linearity,

L(x) = L(b_1 v_1 + · · · + b_n v_n) = b_1 L(v_1) + · · · + b_n L(v_n)

Representing everything in B-coordinates we obtain

[L(x)]_B = [b_1 L(v_1) + · · · + b_n L(v_n)]_B
         = b_1[L(v_1)]_B + · · · + b_n[L(v_n)]_B
         = [ [L(v_1)]_B · · · [L(v_n)]_B ] (b_1, ..., b_n)^T
         = [L]_B [x]_B

Theorem: The Representing Matrix of L in the Basis B

Let V be a vector space. Suppose that B = {v_1, ..., v_n} is any basis for V and that L : V → V is a linear
operator. Define the matrix of the linear operator L with respect to the basis B to be the matrix

[L]_B = [ [L(v_1)]_B · · · [L(v_n)]_B ]

where we have, for any x ∈ V, [L(x)]_B = [L]_B [x]_B.
   
Example 1: Let L : R2 → R2 be given by L(x_1, x_2) = (x_2, x_1) and let B = {(1, −1)^T, (1, 1)^T}. Determine
[L]_B.
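
Note (Python sketch): The columns of [L]_B are the B-coordinates of L(v_1) and L(v_2), obtained by solving one linear system per column (equivalently B^(−1)[L]B, anticipating the next subsection). A sketch using numpy:

import numpy as np

B = np.column_stack([[1.0, -1.0], [1.0, 1.0]])   # basis vectors as columns
L = np.array([[0.0, 1.0],
              [1.0, 0.0]])                       # standard matrix of L(x1, x2) = (x2, x1)

LB = np.linalg.solve(B, L @ B)   # solves B X = L B, i.e. X = B^(-1) L B
print(LB)                        # [[-1, 0], [0, 1]]: L reflects v1 and fixes v2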

4.6.2 Change of Coordinates and Linear Mappings
Note: Another Way to Obtain [L]_B

There is another way to obtain [L]_B which is quite simple (in theory). Specifically, let S denote the standard
basis and let B denote the basis we wish to work in. Let P = P_S^B be the change of basis matrix from B to S.
Then we may construct

[L]_B = P_B^S [L] P_S^B, i.e. [L]_B = P^(−1) [L] P

where [L] is just the regular matrix representation of L with respect to the standard basis (as done in prior
chapters). The logic follows by composition: convert the input from B-coordinates to standard coordinates,
apply [L], then convert the output back to B-coordinates.

Note: Linear Mapping with Different Bases

Using the prior note, you may extend the logic to find the matrix representation of a linear mapping whose
input is in basis B and whose output is in basis C. This is simply [L]_C^B = P_C^S [L] P_S^B, which is very useful.

       
Example 2: Let L be the linear mapping with

[L] = A = [  2  1  3 ]
          [ −1  2  2 ]
          [ −2  3  1 ]

and let B = {(1, 1, 0)^T, (1, 1, 1)^T, (0, 1, 1)^T}. Determine [L]_B.
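
Note (Python sketch): A sketch of the computation using sympy and the formula from the prior note:

import sympy as sp

A = sp.Matrix([[ 2, 1, 3],
               [-1, 2, 2],
               [-2, 3, 1]])
P = sp.Matrix([[1, 1, 0],
               [1, 1, 1],
               [0, 1, 1]])   # columns: the vectors of B

LB = P.inv() * A * P         # [L]_B = P^(-1) [L] P
print(LB)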

