Lecture 4
Vector Spaces
4.1 Vectors in Rn
4.2 Vector Spaces
4.3 Subspaces of Vector Spaces
4.4 Spanning Sets and Linear Independence
4.5 Basis and Dimension
4.6 Rank of a Matrix and Systems of Linear Equations
4.7 Coordinates and Change of Basis
§ Addition:
u + v = (u1 + v1, u2 + v2, …, un + vn)

4.1 Vectors in Rn
§ An ordered n-tuple:
a sequence of n real numbers (x1, x2, …, xn)
§ n-space: Rn
the set of all ordered n-tuples
§ Ex:
n = 1: R1 = 1-space = the set of all real numbers
n = 2: R2 = 2-space = the set of all ordered pairs of real numbers (x1, x2)
n = 3: R3 = 3-space = the set of all ordered triples of real numbers (x1, x2, x3)
n = 4: R4 = 4-space = the set of all ordered quadruples of real numbers (x1, x2, x3, x4)
§ Notes:
(1) An n-tuple (x1, x2, …, xn) can be viewed as a point in Rn
with the xi's as its coordinates.
(2) An n-tuple (x1, x2, …, xn) can be viewed as a vector
x = (x1, x2, …, xn) in Rn with the xi's as its components.
§ Equal:
u = v if and only if u1 = v1, u2 = v2, …, un = vn
§ Notes:
The sum of two vectors and the scalar multiple of a vector
in Rn are called the standard operations in Rn.
§ Negative:
-u = (-u1, -u2, -u3, …, -un)
§ Difference:
u - v = (u1 - v1, u2 - v2, u3 - v3, …, un - vn)
§ Zero vector:
0 = (0, 0, …, 0)
§ Notes:
(1) The zero vector 0 in Rn is called the additive identity in Rn.
(2) The vector –v is called the additive inverse of v.
(b) 3(x + w) = 2u - v + x
    3x + 3w = 2u - v + x
    3x - x = 2u - v - 3w
    2x = 2u - v - 3w
    x = u - (1/2)v - (3/2)w
      = (2, -1, 5, 0) + (-2, -3/2, -1/2, 1/2) + (9, -3, 0, -9/2)
      = (9, -11/2, 9/2, -4)
§ Thm 4.2: (Properties of additive identity and additive inverse)
Let v be a vector in Rn and c be a scalar. Then the following properties are true.
(1) The additive identity is unique. That is, if u+v=v, then u = 0
(2) The additive inverse of v is unique. That is, if v+u=0, then u = –v
(3) 0v=0
(4) c0=0
(5) If cv=0, then c=0 or v=0
(6) –(– v) = v
§ Linear combination:
The vector x is called a linear combination of v1, v2, …, vn
if it can be expressed in the form
x = c1v1 + c2v2 + … + cnvn,  where c1, c2, …, cn are scalars.
§ Ex 6:
Given x = (-1, -2, -2), u = (0, 1, 4), v = (-1, 1, 2), and
w = (3, 1, 2) in R3, show that x is a linear combination of u, v, and w.
That is, we need to find a, b, and c such that x = au+bv+cw.
Sol:
    -b + 3c = -1
a + b + c = -2
4a + 2b + 2c = -2
⇒ a = 1, b = -2, c = -1.  Thus x = u - 2v - w.
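The system in Ex 6 can be spot-checked numerically. A minimal sketch using NumPy (the tool choice is an assumption; the lecture solves it by hand):

```python
import numpy as np

# Columns of M are u, v, w from Ex 6; solving M @ (a, b, c) = x
# reproduces the hand computation a = 1, b = -2, c = -1.
M = np.column_stack([(0, 1, 4), (-1, 1, 2), (3, 1, 2)])
x = np.array([-1.0, -2.0, -2.0])

a, b, c = np.linalg.solve(M, x)
print([int(round(v)) for v in (a, b, c)])  # [1, -2, -1]
```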
§ Notes:
A vector u = (u1, u2, …, un) in Rn can be viewed as a 1×n row
matrix [u1 u2 … un] or as an n×1 column matrix [u1; u2; …; un].
Addition:
(1) u+v is in V
(2) u+v=v+u
(3) u+(v+w)=(u+v)+w
(4) V has an element, denoted by 0, such that for every u in V
we have u+0=u. This element 0 is called the zero vector.
(5) For every u in V, there is a vector in V denoted by –u
such that u+(–u)=0
Scalar multiplication:
(6) cu is in V.
(7) c(u + v) = cu + cv
(8) (c + d )u = cu + du
(9) c(du) = (cd)u
(10) 1(u) = u
§ Notes:
(1) A vector space consists of four entities:
a set of vectors, a set of scalars, and two operations.
V: nonempty set of vectors
c: scalar, a real number (it can also be a complex number)
+(u, v) = u + v: addition
•(c, u) = cu: scalar multiplication
(V, +, •) is called a vector space.
(2) Matrix space: V = Mm×n (the set of all m×n matrices with real entries)
Ex (m = n = 2):
[u11 u12; u21 u22] + [v11 v12; v21 v22] = [u11+v11 u12+v12; u21+v21 u22+v22]   (vector addition)
k[u11 u12; u21 u22] = [ku11 ku12; ku21 ku22]   (scalar multiplication)
(3) Polynomial space: V = Pn(x)
§ Ex 8:
V = R2 = the set of all ordered pairs of real numbers
We define new operations on V as follows.
§ Ex: Subspaces of R3
(1) {0}, where 0 = (0, 0, 0)
(2) Lines through the origin
(3) Planes through the origin
(4) R3
§ Ex 2: (A subspace of M2×2)
Let W be the set of all 2×2 symmetric matrices. Show that
W is a subspace of the vector space M2×2, with the standard
operations of matrix addition and scalar multiplication.
Sol:
W ⊆ M2×2, and M2×2 is a vector space.
Let A1, A2 ∈ W (A1T = A1, A2T = A2).
A1 ∈ W, A2 ∈ W ⇒ (A1 + A2)T = A1T + A2T = A1 + A2   (so A1 + A2 ∈ W)
k ∈ R, A ∈ W ⇒ (kA)T = kAT = kA   (so kA ∈ W)
∴ W is a subspace of M2×2.
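A numerical illustration of the closure argument (not a proof; NumPy is an assumed tool and the two matrices are arbitrary picks):

```python
import numpy as np

# Two arbitrary symmetric matrices in W
A1 = np.array([[1.0, 2.0], [2.0, 3.0]])
A2 = np.array([[0.0, -1.0], [-1.0, 5.0]])

S = A1 + A2   # closed under addition: still symmetric
K = 4 * A1    # closed under scalar multiplication: still symmetric
print(np.array_equal(S, S.T), np.array_equal(K, K.T))  # True True
```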
§ Ex 3: (The set of singular matrices is not a subspace of M2×2)
Let W be the set of singular matrices of order 2. Show that
W is not a subspace of M2×2 with the standard operations.
Sol:
A = [1 0; 0 0] ∈ W,  B = [0 0; 0 1] ∈ W
∴ A + B = [1 0; 0 1] ∉ W   (A + B is nonsingular)
∴ W is not a subspace of M2×2.
Sol:
Let u = (1, 1) ∈ W.
∵ (-1)u = (-1)(1, 1) = (-1, -1) ∉ W
(not closed under scalar multiplication)
∴ W is not a subspace of R2.
§ Ex 6: (Determining subspaces of R2)
Which of the following two subsets is a subspace of R2?
(a) The set of points on the line given by x+2y=0.
(b) The set of points on the line given by x+2y=1.
Sol:
(a) W = {(x, y) | x + 2y = 0} = {(-2t, t) | t ∈ R}
Let v1 = (-2t1, t1) ∈ W and v2 = (-2t2, t2) ∈ W. Then
v1 + v2 = (-2(t1 + t2), t1 + t2) ∈ W and kv1 = (-2(kt1), kt1) ∈ W.
∴ W is a subspace of R2.
(b) Let v = (1, 0) ∈ W.
∵ (-1)v = (-1, 0) ∉ W   (since -1 + 2·0 ≠ 1)
∴ W is not a subspace of R2.
§ Ex 8: (Determining subspaces of R3)
Which of the following subsets is a subspace of R3?
(a) W = {(x1, x2, 1) | x1, x2 ∈ R}
(b) W = {(x1, x1 + x3, x3) | x1, x3 ∈ R}
Sol:
(a) Let v = (0, 0, 1) ∈ W.
⇒ (-1)v = (0, 0, -1) ∉ W
∴ W is not a subspace of R3.
(b) Let v = (v1, v1 + v3, v3) ∈ W and u = (u1, u1 + u3, u3) ∈ W.
∵ v + u = (v1 + u1, (v1 + u1) + (v3 + u3), v3 + u3) ∈ W
kv = (kv1, (kv1) + (kv3), kv3) ∈ W
∴ W is a subspace of R3.
(a) ⇒ c1 - c3 = 1
       2c1 + c2 = 1
       3c1 + 2c2 + c3 = 1

⇒ [1 0 -1 | 1; 2 1 0 | 1; 3 2 1 | 1] →(Gauss-Jordan Elimination) [1 0 -1 | 1; 0 1 2 | -1; 0 0 0 | 0]
⇒ c1 = 1 + t, c2 = -1 - 2t, c3 = t
35
(b) w = c1v1 + c2v2 + c3v3, where w = (1, -2, 2)

⇒ [1 0 -1 | 1; 2 1 0 | -2; 3 2 1 | 2] →(Gauss-Jordan Elimination) [1 0 -1 | 1; 0 1 2 | -4; 0 0 0 | 7]

The last row gives 0 = 7, so the system is inconsistent.
⇒ w ≠ c1v1 + c2v2 + c3v3 for any scalars ci
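Both parts reduce to a consistency check, which can be phrased as a rank comparison. A sketch in NumPy (an assumed tool; `matrix_rank` stands in for the Gauss-Jordan step, and v1, v2, v3 and the right-hand sides are read off the systems above):

```python
import numpy as np

# Columns of A are v1, v2, v3 (the coefficient matrix above).
A = np.column_stack([(1, 2, 3), (0, 1, 2), (-1, 0, 1)])
wa = np.array([1, 1, 1])    # part (a): consistent
wb = np.array([1, -2, 2])   # part (b): inconsistent

def in_span(A, w):
    # w is a linear combination of the columns of A iff
    # appending w as a column does not raise the rank.
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, w]))

print(in_span(A, wa), in_span(A, wb))  # True False
```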
§ The span of a set: span(S)
If S = {v1, v2, …, vk} is a set of vectors in a vector space V,
then the span of S is the set of all linear combinations of
the vectors in S:
span(S) = {c1v1 + c2v2 + … + ckvk | ci ∈ R}
§ Notes:
(1) span(∅) = {0}   (by convention)
(2) S ⊆ span(S)
(3) If S1, S2 ⊆ V and S1 ⊆ S2, then span(S1) ⊆ span(S2).
§ Ex 5: (A spanning set for R3)
Show that the set S = {(1, 2, 3), (0, 1, 2), (-2, 0, 1)} spans R3.
Sol:
We must determine whether an arbitrary vector u = (u1, u2, u3)
in R3 can be written as a linear combination of v1, v2, and v3.
u ∈ R3 ⇒ u = c1v1 + c2v2 + c3v3
⇒ c1 - 2c3 = u1
  2c1 + c2 = u2
  3c1 + 2c2 + c3 = u3
The problem thus reduces to determining whether this system
is consistent for all values of u1 , u2 , and u3 .
∵ |A| = |1 0 -2; 2 1 0; 3 2 1| = -1 ≠ 0
⇒ span(S) = R3
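The determinant test above can be confirmed numerically (NumPy assumed; the lecture computes the determinant by hand):

```python
import numpy as np

# Coefficient matrix with v1, v2, v3 as columns (from Ex 5)
A = np.array([[1, 0, -2],
              [2, 1,  0],
              [3, 2,  1]])
d = np.linalg.det(A)
print(int(round(d)))  # -1 (nonzero, so S spans R^3)
```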
§ Thm 4.6: (Properties of Span(S))
c1 + 2c2 = 0
c1 + 5c2 + c3 = 0
-2c1 - c2 + c3 = 0

⇒ [1 2 0 | 0; 1 5 1 | 0; -2 -1 1 | 0] →(G.J.E.) [1 0 -2/3 | 0; 0 1 1/3 | 0; 0 0 0 | 0]

This system has infinitely many solutions.
(i.e., this system has nontrivial solutions.)
S is linearly dependent. (Ex: c1 = 2, c2 = -1, c3 = 3)
c1[2 1; 0 1] + c2[3 0; 2 1] + c3[1 0; 2 0] = [0 0; 0 0]

⇒ 2c1 + 3c2 + c3 = 0
   c1            = 0
        2c2 + 2c3 = 0
   c1 + c2       = 0

⇒ [2 3 1 | 0; 1 0 0 | 0; 0 2 2 | 0; 1 1 0 | 0] →(Gauss-Jordan Elimination) [1 0 0 | 0; 0 1 0 | 0; 0 0 1 | 0; 0 0 0 | 0]

Only the trivial solution c1 = c2 = c3 = 0 exists, so
S is linearly independent.
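The same test can be run by flattening each matrix into a vector of R^4, so independence becomes a rank question. A sketch (NumPy assumed):

```python
import numpy as np

# Flatten each 2x2 matrix to a vector in R^4; the set is linearly
# independent iff the resulting 4x3 matrix has full column rank.
mats = [np.array([[2, 1], [0, 1]]),
        np.array([[3, 0], [2, 1]]),
        np.array([[1, 0], [2, 0]])]
M = np.column_stack([m.flatten() for m in mats])

r = np.linalg.matrix_rank(M)
print(r, r == len(mats))  # 3 True -> linearly independent
```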
Pf:
(⇒) ∵ S is linearly dependent,
there exist scalars with c1v1 + c2v2 + … + ckvk = 0
and ci ≠ 0 for some i. Then
vi = -(c1/ci)v1 - … - (ci-1/ci)vi-1 - (ci+1/ci)vi+1 - … - (ck/ci)vk
(⇐)
Let vi = d1v1 + … + di-1vi-1 + di+1vi+1 + … + dkvk
⇒ d1v1 + … + di-1vi-1 - vi + di+1vi+1 + … + dkvk = 0
⇒ S is linearly dependent
§ Notes:
(1) ∅ (the empty set) is a basis for {0}   (by convention)
(2) the standard basis for R3:
{i, j, k} i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)
(3) the standard basis for Rn:
{e1, e2, …, en} e1=(1,0,…,0), e2=(0,1,…,0), en=(0,0,…,1)
Ex: R4 {(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)}
(4) the standard basis for the m×n matrix space:
{Eij | 1 ≤ i ≤ m, 1 ≤ j ≤ n}, where Eij has entry aij = 1
and all other entries zero.
Ex (2×2 matrix space): {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}
(5) the standard basis for Pn(x):
{1, x, x2, …, xn}
Ex: P3(x) {1, x, x2, x3}
∵ S is linearly independent
⇒ c1 = b1, c2 = b2, …, cn = bn   (i.e., the representation is unique)
§ Thm 4.9: (Bases and linear dependence)
If S = {v1 , v 2 , !, v n } is a basis for a vector space V, then every
set containing more than n vectors in V is linearly dependent.
Pf:
Let S1 = {u1, u2, …, um}, m > n.
∵ span(S) = V, each ui ∈ V can be written
u1 = c11v1 + c21v2 + … + cn1vn
u2 = c12v1 + c22v2 + … + cn2vn
⋮
um = c1mv1 + c2mv2 + … + cnmvn
Let k1u1 + k2u2 + … + kmum = 0
⇒ d1v1 + d2v2 + … + dnvn = 0   (where di = ci1k1 + ci2k2 + … + cimkm)
∵ S is L.I. ⇒ di = 0 for all i, i.e.
c11k1 + c12k2 + … + c1mkm = 0
c21k1 + c22k2 + … + c2mkm = 0
⋮
cn1k1 + cn2k2 + … + cnmkm = 0
∵ Known: if a homogeneous system has fewer equations
than variables, then it must have infinitely many solutions.
m > n ⇒ k1u1 + k2u2 + … + kmum = 0 has nontrivial solutions
⇒ S1 is linearly dependent
§ Thm 4.10: (Number of vectors in a basis)
If a vector space V has one basis with n vectors, then every
basis for V has n vectors. (All bases for a finite-dimensional
vector space have the same number of vectors.)
Pf:
Let S = {v1, v2, …, vn} and S' = {u1, u2, …, um}
be two bases for a vector space V.
S is a basis and S' is L.I. ⇒ (by Thm 4.9) n ≥ m
S is L.I. and S' is a basis ⇒ (by Thm 4.9) n ≤ m
⇒ n = m
§ Finite dimensional:
A vector space V is called finite dimensional
if it has a basis consisting of a finite number of elements.
§ Infinite dimensional:
If a vector space V is not finite dimensional,
then it is called infinite dimensional.
§ Dimension:
The dimension of a finite dimensional vector space V is
defined to be the number of vectors in a basis for V.
V: a vector space, S: a basis for V
dim(V) = #(S) (the number of vectors in S)
§ Notes:
(1) dim({0}) = 0 = #(∅)
[Figure: for a space with dim(V) = n, the bases lie in the overlap of the generating sets and the linearly independent sets]
§ Ex:
(1) Vector space Rn:    basis {e1, e2, …, en},  dim(Rn) = n
(2) Vector space Mm×n:  basis {Eij | i = 1, …, m, j = 1, …, n},  dim(Mm×n) = mn
(3) Vector space Pn(x): basis {1, x, x2, …, xn},  dim(Pn(x)) = n + 1
(4) Vector space P(x):  basis {1, x, x2, …},  dim(P(x)) = ∞
§ Ex 9: (Finding the dimension of a subspace)
(a) W = {(d, c-d, c): c and d are real numbers}
(b) W = {(2b, b, 0): b is a real number}
Sol: (Note: find a set of L.I. vectors that spans the subspace.)
(a) (d, c-d, c) = c(0, 1, 1) + d(1, -1, 0)
⇒ S = {(0, 1, 1), (1, -1, 0)}   (S is L.I. and S spans W)
⇒ S is a basis for W
⇒ dim(W) = #(S) = 2
(b) ∵ (2b, b, 0) = b(2, 1, 0)
⇒ S = {(2, 1, 0)} spans W and S is L.I.
⇒ S is a basis for W
⇒ dim(W) = #(S) = 1
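dim(W) can be cross-checked as the rank of a matrix whose rows span W. A sketch (NumPy assumed):

```python
import numpy as np

# Part (a): W is spanned by (0, 1, 1) and (1, -1, 0).
Wa = np.array([[0, 1, 1], [1, -1, 0]])
# Part (b): W is spanned by (2, 1, 0).
Wb = np.array([[2, 1, 0]])

ra = np.linalg.matrix_rank(Wa)
rb = np.linalg.matrix_rank(Wb)
print(ra, rb)  # 2 1, matching dim(W) in (a) and (b)
```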
[Figure: when dim(V) = n, the bases are exactly the sets that are both generating and linearly independent]
Recall:
§ Row vectors: the row vectors of A

A = [a11 a12 … a1n; a21 a22 … a2n; ⋮; am1 am2 … amn] = [A(1); A(2); ⋮; A(m)]

where A(i) = [ai1, ai2, …, ain] is the ith row vector of A.
§ Compare notations:
Here      In some textbooks
RS(A)     C(AT)
CS(A)     C(A)
NS(A)     N(A)
NS(AT)    N(AT)
4.6 Rank of a Matrix and Systems of Linear Equations
§ Recall: An m×n matrix A is row equivalent to an m×n matrix B if
B can be obtained from A by applying a finite number
of elementary row operations.
§ Thm 4.12: (Row-equivalent matrices have the same row space)
If an m×n matrix A is row equivalent to an m×n matrix B,
then the row space of A is equal to the row space of B.
§ Thm 4.13: (Basis for the row space of a matrix)
If a matrix A is row equivalent to a matrix B in row-echelon
form, then the nonzero row vectors of B form a basis for the
row space of A.
§ Ex 2: (Finding a basis for a row space)
Find a basis for the row space of

A = [ 1  3  1  3
      0  1  1  0
     -3  0  6 -1
      3  4 -2  1
      2  0 -4  2 ]

Sol:
A →(Gaussian Elimination) B = [ 1  3  1  3    (w1)
                                0  1  1  0    (w2)
                                0  0  0  1    (w3)
                                0  0  0  0
                                0  0  0  0 ]
(the columns of A are a1, a2, a3, a4 and the columns of B are b1, b2, b3, b4)
A basis for RS(A) = {the nonzero row vectors of B}
= {w1, w2, w3} = {(1, 3, 1, 3), (0, 1, 1, 0), (0, 0, 0, 1)}
§ Notes:
(1) b3 = -2b1 + b2 ⇒ a3 = -2a1 + a2
(2) {b1, b2, b4} is L.I. ⇒ {a1, a2, a4} is L.I.
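Note (1) can be verified directly on A from Ex 2 (NumPy assumed):

```python
import numpy as np

# A from Ex 2; the dependency found among the columns of B
# (b3 = -2*b1 + b2) carries over to the columns of A.
A = np.array([[ 1, 3,  1,  3],
              [ 0, 1,  1,  0],
              [-3, 0,  6, -1],
              [ 3, 4, -2,  1],
              [ 2, 0, -4,  2]])
a1, a2, a3, a4 = A.T
print(np.array_equal(a3, -2*a1 + a2))  # True
```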
§ Ex 3: (Finding a basis for a subspace)
Find a basis for the subspace of R3 spanned by
S = {v1, v2, v3} = {(-1, 2, 5), (3, 0, 3), (5, 1, 8)}
Sol:
A = [-1 2 5   (v1)     →(G.E.)   B = [1 -2 -5   (w1)
      3 0 3   (v2)                    0  1  3   (w2)
      5 1 8 ] (v3)                    0  0  0 ]
⇒ a basis for span(S) = {w1, w2} = {(1, -2, -5), (0, 1, 3)}
[AT reduced by Gaussian elimination; the nonzero rows of the result are w1, w2, w3]
∵ CS(A) = RS(AT)
∴ a basis for CS(A)
= a basis for RS(AT)
= {the nonzero row vectors of the reduced matrix}
= {w1, w2, w3}
= {(1, 0, -3, 3, 2)T, (0, 1, 9, -5, -6)T, (0, 0, 1, -1, -1)T}   (a basis for the column space of A)
§ Note: This basis is not a subset of {c1, c2, c3, c4}.
§ Rank:
The dimension of the row (or column) space of a matrix A
is called the rank of A and is denoted by rank(A).
§ Nullity:
nullity(A) = dim(NS(A))
§ Notes:
(1) rank(A): The number of leading variables in the solution of Ax=0.
(The number of nonzero rows in the row-echelon form of A)
(2) nullity (A): The number of free variables in the solution of Ax = 0.
§ Notes:
If A is an m×n matrix and rank(A) = r, then:

Fundamental subspace    Dimension
RS(A) = CS(AT)          r
CS(A) = RS(AT)          r
NS(A)                   n - r
NS(AT)                  m - r
§ Ex 7: (Rank and nullity of a matrix)
Let the column vectors of the matrix A be denoted by a1, a2, a3,
a4, and a5.

A = [ 1  0 -2  1   0
      0 -1 -3  1   3
     -2 -1  1 -1   3
      0  3  9  0 -12 ]
     a1  a2 a3 a4  a5
(a) Find the rank and nullity of A.
(b) Find a subset of the column vectors of A that forms a basis for
the column space of A .
(c) If possible, write the third column of A as a linear combination
of the first two columns.
A = [1 0 -2 1 0; 0 -1 -3 1 3; -2 -1 1 -1 3; 0 3 9 0 -12]   (columns a1 … a5)
→(G.J.E.) B = [1 0 -2 0 1; 0 1 3 0 -4; 0 0 0 1 -1; 0 0 0 0 0]   (columns b1 … b5)
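Part (a) can be cross-checked numerically; rank + nullity must equal the number of columns (NumPy assumed):

```python
import numpy as np

A = np.array([[ 1,  0, -2,  1,   0],
              [ 0, -1, -3,  1,   3],
              [-2, -1,  1, -1,   3],
              [ 0,  3,  9,  0, -12]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank   # rank-nullity: rank + nullity = n = 5
print(rank, nullity)  # 3 2
```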
Sol:
[A | b] = [1 0 -2 1 | 5; 3 1 -5 0 | 8; 1 2 0 -5 | -9]
→(G.J.E.) [1 0 -2 1 | 5; 0 1 1 -3 | -7; 0 0 0 0 | 0]
(x3 = s and x4 = t are free variables)
⇒ x = [x1; x2; x3; x4] = [2s - t + 5; -s + 3t - 7; s + 0t + 0; 0s + t + 0]
    = s[2; -1; 1; 0] + t[-1; 3; 0; 1] + [5; -7; 0; 0]
    = su1 + tu2 + xp

i.e. xp = [5; -7; 0; 0] is a particular solution vector of Ax = b.
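A spot check that x = s·u1 + t·u2 + xp really solves Ax = b for arbitrary s and t (NumPy assumed; A and b are read off the augmented matrix above):

```python
import numpy as np

A = np.array([[1, 0, -2,  1],
              [3, 1, -5,  0],
              [1, 2,  0, -5]])
b = np.array([5, 8, -9])
u1 = np.array([2, -1, 1, 0])   # homogeneous solution: A @ u1 = 0
u2 = np.array([-1, 3, 0, 1])   # homogeneous solution: A @ u2 = 0
xp = np.array([5, -7, 0, 0])   # particular solution:  A @ xp = b

for s, t in [(0, 0), (1, -2), (3.5, 0.25)]:
    x = s*u1 + t*u2 + xp
    assert np.allclose(A @ x, b)
print("all checks passed")
```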
where A(j) = [a1j; a2j; …; amj] is the jth column vector of A.
So b = Ax = x1A(1) + x2A(2) + … + xnA(n), which is a linear
combination of the column vectors of A.
Hence, Ax = b is consistent if and only if b is a linear combination
of the columns of A. That is, the system is consistent if and only if
b is in the subspace of Rm spanned by the columns of A.
§ Note:
If rank([A | b]) = rank(A),
then the system Ax = b is consistent.
x1 + x2 - x3 = -1
x1 + x3 = 3
3 x1 + 2 x2 - x3 = 1
Sol:
A = [1 1 -1; 1 0 1; 3 2 -1] →(G.J.E.) [1 0 1; 0 1 -2; 0 0 0]
[A | b] = [1 1 -1 | -1; 1 0 1 | 3; 3 2 -1 | 1] →(G.J.E.) [1 0 1 | 3; 0 1 -2 | -4; 0 0 0 | 0]
§ Check:
rank(A) = rank([A | b]) = 2, so the system is consistent.
§ Summary of equivalent conditions for square matrices:
If A is an n×n matrix, then the following conditions are equivalent.
(1) A is invertible
(2) Ax = b has a unique solution for any n×1 matrix b.
(3) Ax = 0 has only the trivial solution
(4) rank(A) = n
(5) The n row vectors of A are linearly independent.
(6) The n column vectors of A are linearly independent.
∴ [x]S = [-2; 1; 3]
   c1 + 2c3 = 1
⇒ -c2 + 3c3 = 2        i.e. [1 0 2; 0 -1 3; 1 2 -5][c1; c2; c3] = [1; 2; -1]
   c1 + 2c2 - 5c3 = -1

⇒ [1 0 2 | 1; 0 -1 3 | 2; 1 2 -5 | -1] →(G.J.E.) [1 0 0 | 5; 0 1 0 | -8; 0 0 1 | -2]
⇒ [x]B' = [5; -8; -2]
Let v ∈ V with [v]B' = [k1; k2].
⇒ v = k1u1' + k2u2'
    = k1(au1 + bu2) + k2(cu1 + du2)
    = (k1a + k2c)u1 + (k1b + k2d)u2
So the coordinates of v relative to the basis B = {u1, u2}
are k1a + k2c and k1b + k2d. That is,
⇒ [v]B = [k1a + k2c; k1b + k2d] = [a c; b d][k1; k2] = [[u1']B [u2']B] [v]B'
§ Transition matrix from B' to B:
Let B = {u1, u2, …, un} and B' = {u1', u2', …, un'} be two bases
for a vector space V.
If [v]B is the coordinate matrix of v relative to B and
[v]B' is the coordinate matrix of v relative to B',
then [v]B = P[v]B' = [[u1']B, [u2']B, …, [un']B] [v]B'
where P = [[u1']B, [u2']B, …, [un']B]
§ Remark: The bases B and B' play similar roles. Therefore, we can
exchange B and B' and we have
§ Thm 4.19: (The inverse of a transition matrix)
§ Notes:
B = {u1, u2, …, un}, B' = {u1', u2', …, un'}
[v]B = [[u1']B, [u2']B, …, [un']B] [v]B' = P[v]B'
[v]B' = [[u1]B', [u2]B', …, [un]B'] [v]B = P⁻¹[v]B
§ Thm 4.20: (Transition matrix from B to B')
Let B = {v1, v2, …, vn} and B' = {u1, u2, …, un} be two bases
for Rn. Then the transition matrix P from B' to B can be found
by using Gauss-Jordan elimination on the n×2n matrix [B ⋮ B']
as follows:
[B ⋮ B'] → [In ⋮ P]
(c)
[B' ⋮ B] = [-1 2 ⋮ -3 4; 2 -2 ⋮ 2 -2] →(G.J.E.) [1 0 ⋮ -1 2; 0 1 ⋮ -2 3] = [I ⋮ P⁻¹]
∴ P⁻¹ = [-1 2; -2 3]   (the transition matrix from B to B')
§ Check:
PP⁻¹ = [3 -2; 2 -1][-1 2; -2 3] = [1 0; 0 1] = I2
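The transition matrices in part (c) can be recomputed by solving linear systems instead of row reducing by hand. A sketch (NumPy assumed; the basis vectors are taken as columns, as on the slides):

```python
import numpy as np

Bmat = np.array([[-3, 4], [2, -2]])   # basis B as columns
Bp   = np.array([[-1, 2], [2, -2]])   # basis B' as columns

# Reducing [B' : B] to [I : P^-1] is equivalent to solving B' X = B
P_inv = np.linalg.solve(Bp, Bmat)
P     = np.linalg.solve(Bmat, Bp)
print(np.round(P_inv))                    # [[-1.  2.] [-2.  3.]]
print(np.allclose(P @ P_inv, np.eye(2)))  # True
```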
§ Ex 6: (Coordinate representation in P3(x))
(a) Find the coordinate matrix of p = 3x3-2x2+4 relative to the
standard basis S = {1, x, x2, x3} in P3(x).
(b) Find the coordinate matrix of p = 3x3-2x2+4 relative to the
basis S = {1, 1+x, 1+ x2, 1+ x3} in P3(x).
Sol:
(a) p = (4)(1) + (0)(x) + (-2)(x2) + (3)(x3) ⇒ [p]S = [4; 0; -2; 3]
(b) p = (3)(1) + (0)(1 + x) + (-2)(1 + x2) + (3)(1 + x3) ⇒ [p]S = [3; 0; -2; 3]
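Part (b) amounts to a change of basis in coefficient space: each basis polynomial becomes a column of coefficients relative to {1, x, x2, x3}. A sketch (NumPy assumed):

```python
import numpy as np

# Columns: coefficients of 1, 1+x, 1+x^2, 1+x^3 in the standard basis
S = np.array([[1, 1, 1, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
p = np.array([4, 0, -2, 3])   # p = 3x^3 - 2x^2 + 4 in the standard basis

coords = np.linalg.solve(S, p)
print(coords)  # [ 3.  0. -2.  3.]
```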