Lecture 4

Vector Spaces
4.1 Vectors in Rn
4.2 Vector Spaces
4.3 Subspaces of Vector Spaces
4.4 Spanning Sets and Linear Independence
4.5 Basis and Dimension
4.6 Rank of a Matrix and Systems of Linear Equations
4.7 Coordinates and Change of Basis

4.0 Vectors as we learned them in high school

n An oriented interval (directed segment) of a straight line:

§ Used to represent speeds of motion, forces, ….

§ Addition: place the tail of v at the head of u; then u + v runs from the tail of u to the head of v (the triangle rule).
4.1 Vectors in Rn
n An ordered n-tuple:
a sequence of n real numbers (x1, x2, …, xn)

§ n-space: Rn
the set of all ordered n-tuples

We call a sequence of n real numbers (x1, x2, …, xn) a vector of Rn.

n Ex:
n = 1: R1 = 1-space = the set of all real numbers
n = 2: R2 = 2-space = the set of all ordered pairs of real numbers (x1, x2)
n = 3: R3 = 3-space = the set of all ordered triples of real numbers (x1, x2, x3)
n = 4: R4 = 4-space = the set of all ordered quadruples of real numbers (x1, x2, x3, x4)
n Notes:
(1) An n-tuple (x1, x2, …, xn) can be viewed as a point in Rn
with the xi's as its coordinates.
(2) An n-tuple (x1, x2, …, xn) can be viewed as a vector
x = (x1, x2, …, xn) in Rn with the xi's as its components.
§ Ex:
u = (u1, u2, …, un), v = (v1, v2, …, vn) (two vectors in Rn)

§ Equal:
u = v if and only if u1 = v1, u2 = v2, …, un = vn

§ Vector addition (the sum of u and v):
u + v = (u1 + v1, u2 + v2, …, un + vn)

§ Scalar multiplication (the scalar multiple of u by c):
cu = (cu1, cu2, …, cun)

§ Notes:
The sum of two vectors and the scalar multiple of a vector
in Rn are called the standard operations in Rn.
§ Negative:
- u = (-u1 ,-u2 ,-u3 ,...,-un )
§ Difference:
u - v = (u1 - v1 , u2 - v2 , u3 - v3 ,..., un - vn )

§ Zero vector:
0 = (0, 0, ..., 0)

§ Notes:
(1) The zero vector 0 in Rn is called the additive identity in Rn.
(2) The vector –v is called the additive inverse of v.

n Thm 4.1: (Properties of vector addition and scalar multiplication)

Let u, v, and w be vectors in Rn, and let c and d be scalars.
(1) u+v is a vector in Rn
(2) u+v = v+u
(3) (u+v)+w = u+(v+w)
(4) u+0 = u
(5) u+(–u) = 0
(6) cu is a vector in Rn
(7) c(u+v) = cu+cv
(8) (c+d)u = cu+du
(9) c(du) = (cd)u
(10) 1(u) = u
n Ex 5: (Vector operations in R4)
Let u = (2, -1, 5, 0), v = (4, 3, 1, -1), and w = (-6, 2, 0, 3) be
vectors in R4.
Find x in each of the following equations.
(a) x = 2u - (v + 3w)
(b) 3(x + w) = 2u - v + x
Sol: (a) x = 2u - (v + 3w)
= 2u - v - 3w
= (4, -2, 10, 0) - (4, 3, 1, -1) - (-18, 6, 0, 9)
= (4 - 4 + 18, -2 - 3 - 6, 10 - 1 - 0, 0 + 1 - 9)
= (18, -11, 9, -8).

(b) 3(x + w) = 2u - v + x
3x + 3w = 2u - v + x
3x - x = 2u - v - 3w
2x = 2u - v - 3w
x = u - (1/2)v - (3/2)w
= (2, -1, 5, 0) + (-2, -3/2, -1/2, 1/2) + (9, -3, 0, -9/2)
= (9, -11/2, 9/2, -4)
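As a quick numerical check of Ex 5 (a sketch using NumPy, not part of the original lecture):

```python
import numpy as np

# Vectors from Ex 5
u = np.array([2, -1, 5, 0])
v = np.array([4, 3, 1, -1])
w = np.array([-6, 2, 0, 3])

# (a) x = 2u - (v + 3w)
x_a = 2*u - (v + 3*w)

# (b) 3(x + w) = 2u - v + x  simplifies to  2x = 2u - v - 3w
x_b = (2*u - v - 3*w) / 2

print(x_a)  # x_a is (18, -11, 9, -8)
print(x_b)  # x_b is (9, -11/2, 9/2, -4)
```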
n Thm 4.2: (Properties of additive identity and additive inverse)
Let v be a vector in Rn and c be a scalar. Then the following are true.
(1) The additive identity is unique. That is, if u+v=v, then u = 0
(2) The additive inverse of v is unique. That is, if v+u=0, then u = –v
(3) 0v=0
(4) c0=0
(5) If cv=0, then c=0 or v=0
(6) –(– v) = v


n Linear combination:
The vector x is called a linear combination of v1, v2, …, vn
if it can be expressed in the form
x = c1v1 + c2v2 + … + cnvn, where c1, c2, …, cn are scalars.
§ Ex 6:
Given x = (–1, –2, –2), u = (0, 1, 4), v = (–1, 1, 2), and
w = (3, 1, 2) in R3, show that x is a linear combination of u, v, and w.
That is, we need to find a, b, and c such that x = au + bv + cw.
Sol:
     – b + 3c = –1
 a + b +  c = –2
4a + 2b + 2c = –2
⇒ a = 1, b = –2, c = –1. Thus x = u – 2v – w.
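The 3×3 system in Ex 6 can also be solved numerically (a NumPy sketch; the coefficient matrix below simply has u, v, w as its columns):

```python
import numpy as np

# Columns are u, v, w from Ex 6; solve A @ [a, b, c] = x
A = np.column_stack(([0, 1, 4], [-1, 1, 2], [3, 1, 2]))
x = np.array([-1, -2, -2])

a, b, c = np.linalg.solve(A, x)
print(a, b, c)  # a ≈ 1, b ≈ -2, c ≈ -1
```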
§ Notes:
A vector u = (u1, u2, …, un) in Rn can be viewed as:

a 1×n row matrix (row vector): u = [u1, u2, …, un]

or an n×1 column matrix (column vector):
    [u1]
u = [u2]
    [⋮ ]
    [un]

§ Observation: The matrix operations of addition and scalar
multiplication give the same results as the corresponding
vector operations.

Vector addition: u + v = (u1, …, un) + (v1, …, vn) = (u1 + v1, …, un + vn)
Scalar multiplication: cu = c(u1, …, un) = (cu1, …, cun)

Row matrix addition: u + v = [u1, …, un] + [v1, …, vn] = [u1 + v1, …, un + vn]
Row matrix scalar multiplication: cu = c[u1, …, un] = [cu1, …, cun]

Column matrix addition: u + v = [u1; …; un] + [v1; …; vn] = [u1 + v1; …; un + vn]
Column matrix scalar multiplication: cu = c[u1; …; un] = [cu1; …; cun]
(a semicolon separates the stacked entries of a column matrix)
4.2 Vector Spaces
Definition
Let V be a set on which two operations (addition and scalar
multiplication) are defined. If the following axioms are satisfied
for every u, v, and w in V and all scalars (real numbers) c and d,
then V is called a vector space and the elements of V are called vectors.

Addition:
(1) u+v is in V
(2) u+v=v+u
(3) u+(v+w)=(u+v)+w
(4) V has an element, denoted by 0, such that for every u in V,
we have u+0=u. This element 0 is called the zero vector.
(5) For every u in V, there is a vector in V denoted by –u
such that u+(–u)=0

Scalar multiplication:
(6) cu is in V.
(7) c(u + v) = cu + cv
(8) (c + d )u = cu + du

(9) c(du) = (cd )u

(10) 1(u) = u
n Notes:
(1) A vector space consists of four entities:
a set of vectors, a set of scalars, and two operations
V: a nonempty set of vectors
c: a scalar, i.e. a real number (it might also be a complex number)
+(u, v) = u + v: addition
•(c, u) = cu: scalar multiplication
(V, +, •) is called a vector space

(2) V = {0}: the zero vector space


n Examples of vector spaces:

(1) n-tuple space: Rn
(u1, u2, …, un) + (v1, v2, …, vn) = (u1 + v1, u2 + v2, …, un + vn)   vector addition
k(u1, u2, …, un) = (ku1, ku2, …, kun)   scalar multiplication

(2) Matrix space: V = Mm×n (the set of all m×n matrices with real entries)
Ex (m = n = 2):
[u11 u12] + [v11 v12] = [u11+v11 u12+v12]   vector addition
[u21 u22]   [v21 v22]   [u21+v21 u22+v22]

k[u11 u12] = [ku11 ku12]   scalar multiplication
 [u21 u22]   [ku21 ku22]

(3) Polynomial space: V = Pn(x) (the set of all polynomials of degree at most n)
Given two polynomials
p(x) = a0 + a1x + … + anxn
q(x) = b0 + b1x + … + bnxn
Define:
p(x) + q(x) = (a0 + b0) + (a1 + b1)x + … + (an + bn)xn
kp(x) = ka0 + ka1x + … + kanxn

(4) Function space: V = C(–∞, ∞) (the set of all real-valued
continuous functions defined on the entire real line)
(f + g)(x) = f(x) + g(x)
(kf)(x) = kf(x)

§ Thm 4.3: (Properties of scalar multiplication)


Let v be any element of a vector space V, and let c be any
scalar. Then the following properties are true.
(1) 0v = 0
(2) c0 = 0
(3) If cv = 0, then c = 0 or v = 0
(4) (-1)v = - v
n Notes: To show that a set is not a vector space, you need
only find one axiom that is not satisfied.
§ Ex 6: The set V of all integers is not a vector space.
Pf: 1 ∈ V and the scalar 1/2 ∈ R,
but (1/2)(1) = 1/2 ∉ V
(V is not closed under scalar multiplication)

§ Ex 7: The set V of all second-degree polynomials is not a vector space.

Pf: Let p(x) = x2 and q(x) = –x2 + x + 1
⇒ p(x) + q(x) = x + 1 ∉ V
(V is not closed under vector addition)


n Ex 8:
V = R2 = the set of all ordered pairs of real numbers.
We define new operations on V as follows.

vector addition: (u1, u2) + (v1, v2) = (u1 + v1, u2 + v2)

scalar multiplication: c(u1, u2) = (cu1, 0)

Verify that V is not a vector space.
Sol:
1(1, 1) = (1, 0) ≠ (1, 1), so axiom (10) fails.
∴ the set V (together with the two given operations) is
not a vector space.
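The failed axiom of Ex 8 is easy to exhibit in code (a small sketch; the helper functions `add` and `scal` are hypothetical names encoding the two operations defined above):

```python
# Operations from Ex 8: ordinary addition, but a "broken" scalar multiplication
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scal(c, u):
    return (c * u[0], 0)  # the second component is always discarded

u = (1, 1)
# Axiom (10) requires 1*u == u, but here it fails:
print(scal(1, u))       # (1, 0)
print(scal(1, u) == u)  # False
```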
4.3 Subspaces of Vector Spaces
n Subspace:
Let (V, +, •) be a vector space and let W be a nonempty subset:
W ≠ ∅ and W ⊆ V.
If (W, +, •) is itself a vector space (under the same operations of
addition and scalar multiplication defined in V),
then W is a subspace of V.
§ Trivial subspaces:
Every vector space V has at least two subspaces.
(1) The zero vector space {0} is a subspace of V.
(2) V is a subspace of V.


n Thm 4.4: (Test for a subspace)


If W is a nonempty subset of a vector space V, then W is
a subspace of V if and only if the following conditions hold.

(1) If u and v are in W, then u+v is in W.


(2) If u is in W and c is any scalar, then cu is in W.
n Ex: Subspaces of R2
(1) {0}, where 0 = (0, 0)
(2) Lines through the origin
(3) R2

n Ex: Subspaces of R3
(1) {0}, where 0 = (0, 0, 0)
(2) Lines through the origin
(3) Planes through the origin
(4) R3


§ Ex 2: (A subspace of M2×2)
Let W be the set of all 2×2 symmetric matrices. Show that
W is a subspace of the vector space M2×2, with the standard
operations of matrix addition and scalar multiplication.
Sol:
W ⊆ M2×2, and M2×2 is a vector space.
Let A1, A2 ∈ W (so A1^T = A1 and A2^T = A2).
A1 ∈ W, A2 ∈ W ⇒ (A1 + A2)^T = A1^T + A2^T = A1 + A2   (so A1 + A2 ∈ W)
k ∈ R, A ∈ W ⇒ (kA)^T = kA^T = kA   (so kA ∈ W)
∴ W is a subspace of M2×2.
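A numerical illustration of the closure argument in Ex 2 (a NumPy sketch; the two symmetric matrices are arbitrary examples):

```python
import numpy as np

# Two arbitrary symmetric 2x2 matrices
A1 = np.array([[1, 2], [2, 5]])
A2 = np.array([[0, -3], [-3, 4]])

S = A1 + A2   # sum of symmetric matrices
K = 7 * A1    # scalar multiple of a symmetric matrix

# Both results are again symmetric, as the proof predicts
print(np.array_equal(S, S.T), np.array_equal(K, K.T))  # True True
```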
n Ex 3: (The set of singular matrices is not a subspace of M2×2)
Let W be the set of singular matrices of order 2. Show that
W is not a subspace of M2×2 with the standard operations.

Sol:
A = [1 0] ∈ W,   B = [0 0] ∈ W
    [0 0]            [0 1]

but A + B = [1 0] ∉ W (A + B is nonsingular)
            [0 1]
∴ W is not a subspace of M2×2.


§ Ex 4: (The set of first-quadrant vectors is not a subspace of R2)

Show that W = {(x1, x2) : x1 ≥ 0 and x2 ≥ 0}, with the standard
operations, is not a subspace of R2.

Sol:
Let u = (1, 1) ∈ W.
(–1)u = (–1)(1, 1) = (–1, –1) ∉ W
(W is not closed under scalar multiplication)

∴ W is not a subspace of R2.
§ Ex 6: (Determining subspaces of R2)
Which of the following two subsets is a subspace of R2?
(a) The set of points on the line given by x + 2y = 0.
(b) The set of points on the line given by x + 2y = 1.
Sol:
(a) W = {(x, y) | x + 2y = 0} = {(–2t, t) | t ∈ R}
Let v1 = (–2t1, t1) ∈ W and v2 = (–2t2, t2) ∈ W.

v1 + v2 = (–2(t1 + t2), t1 + t2) ∈ W   (closed under addition)

kv1 = (–2(kt1), kt1) ∈ W   (closed under scalar multiplication)

∴ W is a subspace of R2.

(b) W = {(x, y) | x + 2y = 1}   (Note: the zero vector is not on the line)

Let v = (1, 0) ∈ W.

(–1)v = (–1, 0) ∉ W
∴ W is not a subspace of R2.
§ Ex 8: (Determining subspaces of R3)
Which of the following subsets is a subspace of R3?
(a) W = {(x1, x2, 1) | x1, x2 ∈ R}
(b) W = {(x1, x1 + x3, x3) | x1, x3 ∈ R}
Sol:
(a) Let v = (0, 0, 1) ∈ W.
⇒ (–1)v = (0, 0, –1) ∉ W
∴ W is not a subspace of R3.
(b) Let v = (v1, v1 + v3, v3) ∈ W and u = (u1, u1 + u3, u3) ∈ W.
v + u = (v1 + u1, (v1 + u1) + (v3 + u3), v3 + u3) ∈ W
kv = (kv1, (kv1) + (kv3), kv3) ∈ W
∴ W is a subspace of R3.


§ Thm 4.5: (The intersection of two subspaces is a subspace)


If V and W are both subspaces of a vector space U,
then the intersection of V and W (denoted by V ∩ W )
is also a subspace of U.

§ Homework: Verify the above with examples from the geometry
of the plane and of space.
4.4 Spanning Sets and Linear Independence
§ Linear combination: (recall)
A vector v in a vector space V is called a linear combination of
the vectors u1, u2, …, uk in V if v can be written in the form

v = c1u1 + c2u2 + … + ckuk,   where c1, c2, …, ck are scalars.


§ Ex 2-3: (Finding a linear combination)

v1 = (1, 2, 3),  v2 = (0, 1, 2),  v3 = (–1, 0, 1)
Prove: (a) w = (1, 1, 1) is a linear combination of v1, v2, v3;
(b) w = (1, –2, 2) is not a linear combination of v1, v2, v3.
Sol:
(a) w = c1v1 + c2v2 + c3v3, where w = (1, 1, 1)

(1, 1, 1) = c1(1, 2, 3) + c2(0, 1, 2) + c3(–1, 0, 1)
          = (c1 – c3, 2c1 + c2, 3c1 + 2c2 + c3)

   c1       –  c3 = 1
⇒ 2c1 +  c2      = 1
  3c1 + 2c2 + c3 = 1

  [1 0 -1 | 1]                                [1 0 -1 |  1]
⇒ [2 1  0 | 1]  --Gauss-Jordan Elimination→   [0 1  2 | -1]
  [3 2  1 | 1]                                [0 0  0 |  0]

⇒ c1 = 1 + t,  c2 = –1 – 2t,  c3 = t
(this system has infinitely many solutions)

Taking t = 1:
⇒ w = 2v1 – 3v2 + v3

(b) w = c1v1 + c2v2 + c3v3, where w = (1, –2, 2)

  [1 0 -1 |  1]                                [1 0 -1 |  1]
⇒ [2 1  0 | -2]  --Gauss-Jordan Elimination→   [0 1  2 | -4]
  [3 2  1 |  2]                                [0 0  0 |  7]

⇒ this system has no solution (since 0 ≠ 7)
⇒ w ≠ c1v1 + c2v2 + c3v3 for any scalars c1, c2, c3.
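Both parts of Ex 2-3 can be checked by comparing ranks (a NumPy sketch; a system is consistent exactly when appending the right-hand side does not raise the rank, as Section 4.6 makes precise):

```python
import numpy as np

# Columns are v1, v2, v3 from Ex 2-3
A = np.column_stack(([1, 2, 3], [0, 1, 2], [-1, 0, 1]))
wa = np.array([1, 1, 1])    # part (a)
wb = np.array([1, -2, 2])   # part (b)

def is_combination(A, w):
    # w is a linear combination of the columns of A exactly
    # when appending w does not increase the rank
    return np.linalg.matrix_rank(np.column_stack((A, w))) == np.linalg.matrix_rank(A)

print(is_combination(A, wa))  # True
print(is_combination(A, wb))  # False
```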
§ The span of a set: span(S)
If S = {v1, v2, …, vk} is a set of vectors in a vector space V,
then the span of S is the set of all linear combinations of
the vectors in S:
span(S) = {c1v1 + c2v2 + … + ckvk | ci ∈ R}
(the set of all linear combinations of vectors in S)

§ a spanning set of a vector space:


If every vector in a given vector space V can be written as a
linear combination of vectors in a given set S, then S is
called a spanning set of the vector space V.
In other words, S is a spanning set of V if and only if
Span(S) = V.


§ Notes: In case S is a spanning set of V we use the following notation:

span(S) = V
⇒ S spans (generates) V;
V is spanned (generated) by S;
S is a spanning set of V.

§ Notes:
(1) span(∅) = {0}   (by convention)
(2) S ⊆ span(S)
(3) If S1, S2 ⊆ V and S1 ⊆ S2, then span(S1) ⊆ span(S2).
§ Ex 5: (A spanning set for R3)
Show that the set S = {(1, 2, 3), (0, 1, 2), (–2, 0, 1)} spans R3.
Sol:
We must determine whether an arbitrary vector u = (u1, u2, u3)
in R3 can be written as a linear combination of v1, v2, and v3.
u ∈ R3 ⇒ u = c1v1 + c2v2 + c3v3
⇒  c1       – 2c3 = u1
   2c1 +  c2      = u2
   3c1 + 2c2 + c3 = u3
The problem thus reduces to determining whether this system
is consistent for all values of u1, u2, and u3.

       |1 0 -2|
Since  |2 1  0| ≠ 0,
       |3 2  1|

⇒ Ax = b has exactly one solution for every u.
⇒ span(S) = R3
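The determinant test in Ex 5 is easy to reproduce numerically (a NumPy sketch):

```python
import numpy as np

# Columns are the vectors of S from Ex 5
A = np.column_stack(([1, 2, 3], [0, 1, 2], [-2, 0, 1]))

d = np.linalg.det(A)
print(round(d, 6))  # -1.0: nonzero, so S spans R^3
```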
§ Thm 4.6: (Properties of Span(S))

If S={v1, v2,…, vk} is a set of vectors in a vector space V, then

(a) span (S) is a subspace of V.

(b) Every subspace of V that contains S must contain span (S).


That is, span (S) is the smallest subspace of V that contains S.


§ Linear Independence (L.I.) and Linear Dependence (L.D.):

Let S = {v1, v2, …, vk} be a set of vectors in a vector space V.
We consider the equation c1v1 + c2v2 + … + ckvk = 0, where
c1, c2, …, ck are unknowns.
(1) The equation obviously has the trivial solution c1 = c2 = … = ck = 0.

(2) If the equation has only the trivial solution (c1 = c2 = … = ck = 0),
then S is called linearly independent.
(3) If the equation has a nontrivial solution (i.e., not all ci are zero),
then S is called linearly dependent.
§ Notes:
(1) ∅ is linearly independent   (by convention)
(2) 0 ∈ S ⇒ S is linearly dependent.
(3) v ≠ 0 ⇒ {v} is linearly independent.
(4) If S1 ⊆ S2, then:
S1 is linearly dependent ⇒ S2 is linearly dependent;
S2 is linearly independent ⇒ S1 is linearly independent.


§ Ex 8: (Testing for linear independence)

Determine whether the following set of vectors in R3 is L.I. or L.D.
S = {v1, v2, v3} = {(1, 2, 3), (0, 1, 2), (–2, 0, 1)}
Sol:
                              c1       – 2c3 = 0
c1v1 + c2v2 + c3v3 = 0  ⇒    2c1 +  c2      = 0
                             3c1 + 2c2 + c3 = 0

  [1 0 -2 | 0]                                [1 0 0 | 0]
⇒ [2 1  0 | 0]  --Gauss-Jordan Elimination→   [0 1 0 | 0]
  [3 2  1 | 0]                                [0 0 1 | 0]

⇒ c1 = c2 = c3 = 0 (only the trivial solution)
⇒ S is linearly independent.
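In coordinates, independence can be tested by checking whether the matrix with the vectors as columns has full column rank (a NumPy sketch applied to Ex 8):

```python
import numpy as np

# Columns are the vectors of S from Ex 8
A = np.column_stack(([1, 2, 3], [0, 1, 2], [-2, 0, 1]))

# Full column rank <=> Ac = 0 has only the trivial solution <=> S is L.I.
independent = np.linalg.matrix_rank(A) == A.shape[1]
print(independent)  # True
```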
n Ex 9: (Testing for linear independence)
Determine whether the following set of vectors in P2 is L.I. or L.D.
S = {v1, v2, v3} = {1+x – 2x2 , 2+5x – x2 , x+x2}
Sol:
c1v1+c2v2+c3v3 = 0
i.e. c1(1+x – 2x2) + c2(2+5x – x2) + c3(x+x2) = 0+0x+0x2

   c1 + 2c2      = 0      [ 1  2 0 | 0]                       [1 2 0 | 0]
   c1 + 5c2 + c3 = 0  ⇒   [ 1  5 1 | 0]  --Gaussian Elim.→    [0 3 1 | 0]
 –2c1 –  c2 + c3 = 0      [-2 -1 1 | 0]                       [0 0 0 | 0]

This system has infinitely many solutions.
(i.e., this system has nontrivial solutions.)
∴ S is linearly dependent. (Ex: c1 = 2, c2 = –1, c3 = 3)


n Ex 10: (Testing for linear independence)

Determine whether the following set of vectors in the 2×2
matrix space is L.I. or L.D.
S = {v1, v2, v3} = { [2 1]   [3 0]   [1 0] }
                   { [0 1] , [2 1] , [2 0] }
Sol:
c1v1+c2v2+c3v3 = 0

c1 [2 1] + c2 [3 0] + c3 [1 0] = [0 0]
   [0 1]      [2 1]      [2 0]   [0 0]

⇒ 2c1 + 3c2 +  c3 = 0
   c1             = 0
         2c2 + 2c3 = 0
   c1 +  c2       = 0

  [2 3 1 | 0]                               [1 0 0 | 0]
  [1 0 0 | 0]  --Gauss-Jordan Elimination→  [0 1 0 | 0]
  [0 2 2 | 0]                               [0 0 1 | 0]
  [1 1 0 | 0]                               [0 0 0 | 0]

⇒ c1 = c2 = c3 = 0 (this system has only the trivial solution)
∴ S is linearly independent.


n Thm 4.7: (A property of linearly dependent sets)


A set S = {v1,v2,…,vk}, k > 1, is linearly dependent if and
only if at least one of the vectors vj in S can be written as
a linear combination of the other vectors in S.

Pf:
(⇒) Suppose S is linearly dependent:
c1v1 + c2v2 + … + ckvk = 0 with ci ≠ 0 for some i.
⇒ vi = –(c1/ci)v1 – … – (ci-1/ci)vi-1 – (ci+1/ci)vi+1 – … – (ck/ci)vk

(⇐) Suppose vi = d1v1+…+di-1vi-1+di+1vi+1+…+dkvk.
⇒ d1v1+…+di-1vi-1 – vi + di+1vi+1+…+dkvk = 0

⇒ c1=d1, …, ci-1=di-1, ci= –1, ci+1=di+1, …, ck=dk (a nontrivial solution)

⇒ S is linearly dependent.

n Corollary to Theorem 4.7:


Two vectors u and v in a vector space V are linearly dependent
if and only if one is a scalar multiple of the other. That is, u = kv,
for some scalar k.


4.5 Basis and Dimension

n Basis:
V: a vector space
S = {v1, v2, …, vn} ⊂ V

(a) S spans V (i.e., span(S) = V)
(b) S is linearly independent
If S satisfies both (a) and (b), then S is called a basis for V.

(Figure: among the subsets of V, the bases are exactly the generating sets that are also linearly independent sets.)

§ Notes:
(1) ∅ (the empty set) is a basis for {0}   (by convention)
(2) The standard basis for R3:
{i, j, k}, where i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)
(3) The standard basis for Rn:
{e1, e2, …, en}, where e1=(1,0,…,0), e2=(0,1,…,0), …, en=(0,0,…,1)
Ex: R4: {(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)}
(4) The standard basis for the m×n matrix space:
{Eij | 1 ≤ i ≤ m, 1 ≤ j ≤ n}, where Eij has entry aij = 1
and all other entries zero.
Ex: 2×2 matrix space: { [1 0]  [0 1]  [0 0]  [0 0] }
                      { [0 0], [0 0], [1 0], [0 1] }
(5) The standard basis for Pn(x):
{1, x, x2, …, xn}
Ex: P3(x): {1, x, x2, x3}


n Thm 4.8: (Uniqueness of basis representation)


If S = {v1, v2, …, vn} is a basis for a vector space V, then every
vector in V can be written in one and only one way as a linear
combination of vectors in S.
Pf:
S is a basis ⇒ 1. span(S) = V, and 2. S is linearly independent.
Since span(S) = V, let v = c1v1+c2v2+…+cnvn
and also v = b1v1+b2v2+…+bnvn.
⇒ 0 = (c1–b1)v1+(c2–b2)v2+…+(cn–bn)vn

Since S is linearly independent,
⇒ c1 = b1, c2 = b2, …, cn = bn (i.e., the representation is unique).
n Thm 4.9: (Bases and linear dependence)
If S = {v1, v2, …, vn} is a basis for a vector space V, then every
set containing more than n vectors in V is linearly dependent.

Pf:
Let S1 = {u1, u2, …, um}, m > n.
Since span(S) = V, each ui ∈ V can be written
u1 = c11v1 + c21v2 + … + cn1vn
u2 = c12v1 + c22v2 + … + cn2vn
⋮
um = c1mv1 + c2mv2 + … + cnmvn

Let k1u1+k2u2+…+kmum = 0.
⇒ d1v1+d2v2+…+dnvn = 0, where di = ci1k1+ci2k2+…+cimkm.
Since S is L.I.,
⇒ di = 0 for all i, i.e.
c11k1 + c12k2 + … + c1mkm = 0
c21k1 + c22k2 + … + c2mkm = 0
⋮
cn1k1 + cn2k2 + … + cnmkm = 0
Known: if a homogeneous system has fewer equations than
variables, then it must have infinitely many solutions.
m > n ⇒ k1u1+k2u2+…+kmum = 0 has a nontrivial solution
⇒ S1 is linearly dependent.
n Thm 4.10: (Number of vectors in a basis)
If a vector space V has one basis with n vectors, then every
basis for V has n vectors. (All bases for a finite-dimensional
vector space have the same number of vectors.)
Pf:
Let S = {v1, v2, …, vn} and S' = {u1, u2, …, um} be two bases
for a vector space V.
S is a basis and S' is L.I. ⇒ (Thm 4.9) n ≥ m
S' is a basis and S is L.I. ⇒ (Thm 4.9) n ≤ m
⇒ n = m


n Finite dimensional:
A vector space V is called finite dimensional
if it has a basis consisting of a finite number of elements.
n Infinite dimensional:
If a vector space V is not finite dimensional,
then it is called infinite dimensional.
n Dimension:
The dimension of a finite dimensional vector space V is
defined to be the number of vectors in a basis for V.
V: a vector space S: a basis for V
dim(V) = #(S) (the number of vectors in S)
§ Notes: Suppose dim(V) = n.
(1) dim({0}) = 0 = #(∅)
(2) For a subset S of V:
S a generating set ⇒ #(S) ≥ n
S a L.I. set ⇒ #(S) ≤ n
S a basis ⇒ #(S) = n
(3) If W is a subspace of V, then dim(W) ≤ n.


n Ex:
(1) Vector space Rn - basis {e1 , e2 , … , en}
- dim(Rn) = n
(2) Vector space Mm×n - basis {Eij | i = 1,…, m , j = 1,…, n}
- dim(Mm×n)=mn
(3) Vector space Pn(x) - basis {1, x, x2, … , xn}
- dim(Pn(x)) = n+1
(4) Vector space P(x) - basis {1, x, x2, …}
- dim(P(x)) = ∞
n Ex 9: (Finding the dimension of a subspace)
(a) W={(d, c–d, c): c and d are real numbers}
(b) W={(2b, b, 0): b is a real number}
Sol: (Note: Find a set of L.I. vectors that spans the subspace)
(a) (d, c– d, c) = c(0, 1, 1) + d(1, – 1, 0)
=> S = {(0, 1, 1) , (1, – 1, 0)}(S is L.I. and S spans W)
=> S is a basis for W
=> dim(W) = #(S) = 2
(b) ! (2b, b,0 ) = b(2,1,0 )
=> S = {(2, 1, 0)} spans W and S is L.I.
=> S is a basis for W
=> dim(W) = #(S) = 1
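The dimensions found in Ex 9 can be confirmed by taking the rank of a matrix whose rows span each subspace (a NumPy sketch):

```python
import numpy as np

# (a) W = {(d, c-d, c)} is spanned by (0,1,1) and (1,-1,0)
Wa = np.array([[0, 1, 1], [1, -1, 0]])
# (b) W = {(2b, b, 0)} is spanned by (2,1,0)
Wb = np.array([[2, 1, 0]])

print(np.linalg.matrix_rank(Wa))  # 2 = dim(W) in (a)
print(np.linalg.matrix_rank(Wb))  # 1 = dim(W) in (b)
```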


§ Ex 11: (Finding the dimension of a subspace)


Let W be the subspace of all symmetric matrices in M2x2.
What is the dimension of W?
Sol:
W = { [a b] : a, b, c ∈ R }
    { [b c] }

[a b] = a [1 0] + b [0 1] + c [0 0]
[b c]     [0 0]     [1 0]     [0 1]

⇒ S = { [1 0]  [0 1]  [0 0] } spans W and S is L.I.
      { [0 0], [1 0], [0 1] }
⇒ S is a basis for W ⇒ dim(W) = #(S) = 3
n Thm 4.11: (Basis tests in an n-dimensional space)
Let V be a vector space of dimension n.
(1) If S = {v1, v2, …, vn} is a linearly independent set of
vectors in V, then S is a basis for V.
(2) If S = {v1, v2, …, vn} spans V, then S is a basis for V.

(Figure: when dim(V) = n, a set with #(S) > n can only be a generating set, a set with #(S) < n can only be linearly independent, and a basis must have #(S) = n.)


Recall:
§ Row vectors: for an m×n matrix A = [aij], the row vectors of A are
A_(1) = [a11, a12, …, a1n]
A_(2) = [a21, a22, …, a2n]
⋮
A_(m) = [am1, am2, …, amn]

§ Column vectors: the column vectors of A are
A^(1) = [a11; a21; …; am1],  A^(2) = [a12; a22; …; am2],  …,  A^(n) = [a1n; a2n; …; amn]
(a semicolon separates the stacked entries of a column), so that
A = [A^(1) | A^(2) | … | A^(n)].
Four Fundamental subspaces
Let A be an m×n matrix.
§ Row space:
The row space of A is the subspace of Rn spanned by
the row vectors of A.
RS(A) = {α1A_(1) + α2A_(2) + … + αmA_(m) | α1, α2, …, αm ∈ R}
§ Column space:
The column space of A is the subspace of Rm spanned by
the column vectors of A.

CS(A) = {β1A^(1) + β2A^(2) + … + βnA^(n) | β1, β2, …, βn ∈ R}

n Null space:
The null space of A is the set of all solutions of Ax = 0, and
it is a subspace of Rn.
NS(A) = {x ∈ Rn | Ax = 0}




§ Left Null space:
The left null space of A is the null space of AT. It is the set of
all solutions of ATy=0 and is a subspace of Rm.
NS( AT ) = {y ∈ R m | AT y = 0}

§ Compare notations
Here      In some textbooks
RS(A) C(AT)
CS(A) C(A)
NS(A) N(A)
NS(AT) N(AT)
4.6 Rank of a Matrix and Systems of Linear Equations
n Recall: An m×n matrix A is row equivalent to an m×n matrix B if
B can be obtained from A by applying a finite number
of elementary row operations.
n Thm 4.12: (Row-equivalent matrices have the same row space)
If an m×n matrix A is row equivalent to an m×n matrix B,

then the row space of A is equal to the row space of B.


§ Note
(1) The row space of a matrix is not changed by elementary
row operations.
RS(r(A)) = RS(A) r: elementary row operations
(2) Elementary row operations can change the column space.


n Thm 4.13: (Basis for the row space of a matrix)


If a matrix A is row equivalent to a matrix B in row-echelon

form, then the nonzero row vectors of B form a basis for the

row space of A.
§ Ex 2: (Finding a basis for a row space)
Find a basis for the row space of

    [ 1  3  1  3]
    [ 0  1  1  0]
A = [-3  0  6 -1]
    [ 3  4 -2  1]
    [ 2  0 -4 -2]

Sol:
    [ 1  3  1  3]                        [1 3 1 3]  w1
    [ 0  1  1  0]                        [0 1 1 0]  w2
A = [-3  0  6 -1]  --Gaussian Elim.→  B =[0 0 0 1]  w3
    [ 3  4 -2  1]                        [0 0 0 0]
    [ 2  0 -4 -2]                        [0 0 0 0]
(columns of A: a1 a2 a3 a4; columns of B: b1 b2 b3 b4)


A basis for RS(A) = {the nonzero row vectors of B} (Thm 4.13)
= {w1, w2, w3} = {(1, 3, 1, 3), (0, 1, 1, 0), (0, 0, 0, 1)}

n Notes:
(1) b3 = –2b1 + b2 ⇒ a3 = –2a1 + a2
(2) {b1, b2, b4} is L.I. ⇒ {a1, a2, a4} is L.I.
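Ex 2 can be reproduced with exact row reduction (a sketch assuming SymPy is available; `rref` returns the reduced row-echelon form together with the pivot columns):

```python
from sympy import Matrix

A = Matrix([[1, 3, 1, 3],
            [0, 1, 1, 0],
            [-3, 0, 6, -1],
            [3, 4, -2, 1],
            [2, 0, -4, -2]])

R, pivots = A.rref()
# The nonzero rows of the RREF form a basis for RS(A)
basis = [list(R.row(i)) for i in range(A.rows) if any(R.row(i))]
print(basis)
print(pivots)  # (0, 1, 3): pivot columns, so {a1, a2, a4} is a basis for CS(A)
```

Note that the RREF yields a different (but equally valid) basis for the row space than the row-echelon form B used above.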
n Ex 3: (Finding a basis for a subspace)
Find a basis for the subspace of R3 spanned by
S = {v1, v2, v3} = {(-1, 2, 5), (3, 0, 3), (5, 1, 8)}
Sol:
    [-1 2 5] v1                        [1 -2 -5] w1
A = [ 3 0 3] v2  --Gaussian Elim.→  B =[0  1  3] w2
    [ 5 1 8] v3                        [0  0  0]

A basis for span({v1, v2, v3})
= a basis for RS(A)
= {the nonzero row vectors of B} (Thm 4.13)
= {w1, w2}
= {(1, –2, –5), (0, 1, 3)}


n Ex 4: (Finding a basis for the column space of a matrix)

Find a basis for the column space of the matrix A given in Ex 2.

    [ 1  3  1  3]
    [ 0  1  1  0]
A = [-3  0  6 -1]
    [ 3  4 -2  1]
    [ 2  0 -4 -2]

Sol. 1st method:
     [1 0 -3  3  2]                        [1 0 -3  3  2] w1
AT = [3 1  0  4  0]  --Gaussian Elim.→  B =[0 1  9 -5 -6] w2
     [1 1  6 -2 -4]                        [0 0  1 -1 -1] w3
     [3 0 -1  1 -2]                        [0 0  0  0  0]

Since CS(A) = RS(AT),
∴ a basis for CS(A)
= a basis for RS(AT)
= {the nonzero row vectors of B}
= {w1, w2, w3}
= {(1, 0, -3, 3, 2)T, (0, 1, 9, -5, -6)T, (0, 0, 1, -1, -1)T}
(a basis for the column space of A, written as column vectors)

§ Note: This basis is not a subset of {c1, c2, c3, c4}.


n Sol. 2nd method:

    [ 1  3  1  3]                        [1 3 1 3]
    [ 0  1  1  0]                        [0 1 1 0]
A = [-3  0  6 -1]  --Gaussian Elim.→  B =[0 0 0 1]
    [ 3  4 -2  1]                        [0 0 0 0]
    [ 2  0 -4 -2]                        [0 0 0 0]
(columns of A: c1 c2 c3 c4; columns of B: v1 v2 v3 v4)

The leading 1s lie in columns 1, 2, and 4
⇒ {v1, v2, v4} is a basis for CS(B)

and {c1, c2, c4} is a basis for CS(A).
§ Notes:
(1) This basis is a subset of {c1, c2, c3, c4}.
(2) v3 = –2v1 + v2, thus c3 = –2c1 + c2.
n Thm 4.14: (Solutions of a homogeneous system)
If A is an m×n matrix, then the set of all solutions of the
homogeneous system of linear equations Ax = 0 is a subspace
of Rn called the nullspace of A: NS(A) = {x ∈ Rn | Ax = 0}
Pf:
NS(A) ⊆ Rn
NS(A) ≠ ∅ (since A0 = 0)
Let x1, x2 ∈ NS(A) (i.e., Ax1 = 0 and Ax2 = 0).
Then (1) A(x1 + x2) = Ax1 + Ax2 = 0 + 0 = 0   (closed under addition)
(2) A(cx1) = c(Ax1) = c(0) = 0   (closed under scalar multiplication)
Thus NS(A) is a subspace of Rn.
§ Notes: The nullspace of A is also called the solution space of
the homogeneous system Ax = 0.

§ Ex 6: (Finding the solution space of a homogeneous system)
Find the nullspace of the matrix

    [1 2 -2 1]
A = [3 6 -5 4]
    [1 2  0 3]

Sol: The nullspace of A is the solution space of Ax = 0.

    [1 2 -2 1]                          [1 2 0 3]
A = [3 6 -5 4]  --Gauss-Jordan Elim.→   [0 0 1 1]
    [1 2  0 3]                          [0 0 0 0]
(x2 and x4 are free variables)
⇒ x1 = –2s – 3t, x2 = s, x3 = –t, x4 = t

    [x1]   [-2s - 3t]     [-2]     [-3]
x = [x2] = [    s   ] = s [ 1] + t [ 0] = sv1 + tv2
    [x3]   [   -t   ]     [ 0]     [-1]
    [x4]   [    t   ]     [ 0]     [ 1]

⇒ NS(A) = {sv1 + tv2 | s, t ∈ R}
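A quick numerical confirmation of Ex 6 (a NumPy sketch): both basis vectors of the solution space really satisfy Ax = 0.

```python
import numpy as np

A = np.array([[1, 2, -2, 1],
              [3, 6, -5, 4],
              [1, 2, 0, 3]])

v1 = np.array([-2, 1, 0, 0])
v2 = np.array([-3, 0, -1, 1])

print(A @ v1)  # [0 0 0]
print(A @ v2)  # [0 0 0]
```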
n Thm 4.15: (Row and column space have equal dimensions)
If A is an m×n matrix, then the row space and the column
space of A have the same dimension.
dim(RS(A)) = dim(CS(A))

§ Rank:
The dimension of the row (or column) space of a matrix A
is called the rank of A and is denoted by rank(A).

rank(A) = dim(RS(A)) = dim(CS(A))


§ Nullity:

The dimension of the nullspace of A is called the nullity of A.

nullity(A) = dim(NS(A))

§ Note: rank(AT) = rank(A)

Pf: rank(AT) = dim(RS(AT)) = dim(CS(A)) = rank(A)


n Thm 4.16: (Dimension of the solution space)
If A is an m×n matrix of rank r, then the dimension of
the solution space of Ax = 0 is n – r. That is
n = rank(A) + nullity(A)

n Notes:
(1) rank(A): The number of leading variables in the solution of Ax=0.
(The number of nonzero rows in the row-echelon form of A)
(2) nullity (A): The number of free variables in the solution of Ax = 0.
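The rank-nullity relation n = rank(A) + nullity(A) can be observed directly (a NumPy sketch reusing the matrix from Ex 6, where n = 4):

```python
import numpy as np

A = np.array([[1, 2, -2, 1],
              [3, 6, -5, 4],
              [1, 2, 0, 3]])

n = A.shape[1]                     # number of columns
rank = np.linalg.matrix_rank(A)    # dim RS(A) = dim CS(A)
nullity = n - rank                 # dim NS(A), by Thm 4.16

print(rank, nullity)  # 2 2
```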


n Notes:
If A is an m×n matrix and rank(A) = r, then:

Fundamental subspace    Dimension
RS(A) = CS(AT)          r
CS(A) = RS(AT)          r
NS(A)                   n – r
NS(AT)                  m – r
n Ex 7: (Rank and nullity of a matrix)
Let the column vectors of the matrix A be denoted by a1, a2, a3,
a4, and a5.

    [ 1  0 -2  1   0]
A = [ 0 -1 -3  1   3]
    [-2 -1  1 -1   3]
    [ 0  3  9  0 -12]
     a1 a2 a3 a4  a5
(a) Find the rank and nullity of A.
(b) Find a subset of the column vectors of A that forms a basis for
the column space of A.
(c) If possible, write the third column of A as a linear combination
of the first two columns.

Sol: Let B be the reduced row-echelon form of A.

    [ 1  0 -2  1   0]        [1 0 -2 0  1]
A = [ 0 -1 -3  1   3]    B = [0 1  3 0 -4]
    [-2 -1  1 -1   3]        [0 0  0 1 -1]
    [ 0  3  9  0 -12]        [0 0  0 0  0]
(columns of A: a1 … a5; columns of B: b1 … b5)

(a) rank(A) = 3 (the number of nonzero rows in B)

nullity(A) = n – rank(A) = 5 – 3 = 2
(b) The leading 1s lie in columns 1, 2, and 4
⇒ {b1, b2, b4} is a basis for CS(B)
and {a1, a2, a4} is a basis for CS(A), where
a1 = (1, 0, -2, 0)T,  a2 = (0, -1, -1, 3)T,  a4 = (1, 1, -1, 0)T.

(c) b3 = –2b1 + 3b2 ⇒ a3 = –2a1 + 3a2


§ Thm 4.17: (Solutions of a nonhomogeneous linear system)


If xp is a particular solution of the nonhomogeneous system
Ax = b, then every solution of this system can be written in
the form x = xp + xh , where xh is a solution of the corresponding
homogeneous system Ax = 0.
Pf: Let x be any solution of Ax = b.
⇒ A(x – xp) = Ax – Axp = b – b = 0
⇒ (x – xp) is a solution of Ax = 0.
Let xh = x – xp
⇒ x = xp + xh
n Ex 8: (Finding the solution set of a nonhomogeneous system)
Find the set of all solution vectors of the system of linear equations.
x1 - 2 x3 + x4 = 5
3 x1 + x2 - 5 x3 = 8
x1 + 2 x2 - 5 x4 = -9

Sol:

[1 0 -2  1 |  5]                        [1 0 -2  1 |  5]
[3 1 -5  0 |  8]  --Gauss-Jordan E.→    [0 1  1 -3 | -7]
[1 2  0 -5 | -9]                        [0 0  0  0 |  0]
(x3 = s and x4 = t are free variables)

    [x1]   [ 2s -  t + 5]     [ 2]     [-1]   [ 5]
x = [x2] = [-s + 3t - 7 ] = s [-1] + t [ 3] + [-7]
    [x3]   [ s          ]     [ 1]     [ 0]   [ 0]
    [x4]   [           t]     [ 0]     [ 1]   [ 0]

= su1 + tu2 + xp

i.e. xp = (5, -7, 0, 0)T is a particular solution vector of Ax = b, and

xh = su1 + tu2 is a solution of Ax = 0
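Thm 4.17 and Ex 8 can be verified numerically (a NumPy sketch): xp solves Ax = b, u1 and u2 solve Ax = 0, so xp + s·u1 + t·u2 solves Ax = b for any s and t.

```python
import numpy as np

A = np.array([[1, 0, -2, 1],
              [3, 1, -5, 0],
              [1, 2, 0, -5]])
b = np.array([5, 8, -9])

xp = np.array([5, -7, 0, 0])   # particular solution
u1 = np.array([2, -1, 1, 0])   # homogeneous solutions
u2 = np.array([-1, 3, 0, 1])

print(A @ xp)                  # equals b
print(A @ u1, A @ u2)          # [0 0 0] [0 0 0]
print(A @ (xp + 2*u1 - 3*u2))  # equals b again (s = 2, t = -3)
```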


n Thm 4.18: (Solution of a system of linear equations)
The system of linear equations Ax = b is consistent if and only
if b is in the column space of A.
Pf:
Let

    [a11 a12 … a1n]       [x1]         [b1]
A = [a21 a22 … a2n],  x = [x2],   b =  [b2]
    [ ⋮   ⋮     ⋮ ]       [⋮ ]         [⋮ ]
    [am1 am2 … amn]       [xn]         [bm]

be the coefficient matrix, the column matrix of unknowns,
and the right-hand side, respectively, of the system Ax = b.
Then

     [a11x1 + a12x2 + … + a1nxn]       [a11]      [a12]           [a1n]
Ax = [a21x1 + a22x2 + … + a2nxn] = x1 [a21] + x2 [a22] + … + xn [a2n]
     [           ⋮            ]       [ ⋮ ]      [ ⋮ ]           [ ⋮ ]
     [am1x1 + am2x2 + … + amnxn]       [am1]      [am2]           [amn]

So b = Ax = x1A^(1) + x2A^(2) + … + xnA^(n), which is a linear
combination of the column vectors of A.
Hence, Ax = b is consistent if and only if b is a linear combination
of the columns of A. That is, the system is consistent if and only if
b is in the subspace of Rm spanned by the columns of A.
§ Note:
The system Ax = b is consistent if and only if rank([A|b]) = rank(A).

n Ex 9: (Consistency of a system of linear equations)
Determine whether the following system is consistent.
 x1 +  x2 – x3 = –1
 x1       + x3 =  3
3x1 + 2x2 – x3 =  1

Sol:
    [1 1 -1]                  [1 0  1]
A = [1 0  1]  --G.J. Elim.→   [0 1 -2]
    [3 2 -1]                  [0 0  0]

          [1 1 -1 | -1]                  [1 0  1 |  3]
[A | b] = [1 0  1 |  3]  --G.J. Elim.→   [0 1 -2 | -4]
          [3 2 -1 |  1]                  [0 0  0 |  0]
(columns of [A|b]: c1 c2 c3 b; columns of the reduced matrix: w1 w2 w3 v)

Since v = 3w1 – 4w2,
⇒ b = 3c1 – 4c2 + 0c3 (b is in the column space of A)

⇒ The system of linear equations is consistent.

§ Check:
rank(A) = rank([A|b]) = 2
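The consistency check of Ex 9 in NumPy (a sketch of the rank test from the note above):

```python
import numpy as np

A = np.array([[1, 1, -1],
              [1, 0, 1],
              [3, 2, -1]])
b = np.array([-1, 3, 1])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack((A, b)))

consistent = rank_A == rank_Ab
print(rank_A, rank_Ab, consistent)  # 2 2 True
```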
n Summary of equivalent conditions for square matrices:
If A is an n×n matrix, then the following conditions are equivalent.
(1) A is invertible.
(2) Ax = b has a unique solution for any n×1 matrix b.
(3) Ax = 0 has only the trivial solution.
(4) rank(A) = n.
(5) The n row vectors of A are linearly independent.
(6) The n column vectors of A are linearly independent.
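Conditions (2) and (4) are straightforward to illustrate numerically; a sketch with NumPy (the 2×2 matrix below is a hypothetical example, not from the text):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # invertible, so rank(A) = n = 2

# Condition (4): full rank
assert np.linalg.matrix_rank(A) == 2
# Condition (2): Ax = b has a unique solution for any b
x = np.linalg.solve(A, np.array([3.0, 2.0]))
print(x)   # [1. 1.]
```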


4.7 Coordinates and Change of Basis


n Coordinate representation relative to a basis
Let B = {v1, v2, …, vn} be an ordered basis for a vector space V
and let x be a vector in V such that
x = c1 v1 + c2 v 2 + ! + cn v n .
The scalars c1, c2, …, cn are called the coordinates of x relative
to the basis B. The coordinate matrix (or coordinate vector)
of x relative to B is the column matrix in Rn whose components
are the coordinates of x.
            ⎡ c1 ⎤
            ⎢ c2 ⎥
    [x]B =  ⎢ ⋮  ⎥
            ⎣ cn ⎦
§ Ex 1: (Coordinates and components in Rn)
Find the coordinate matrix of x = (–2, 1, 3) in R3
relative to the standard basis
    S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.
Sol:
∵ x = (−2, 1, 3) = −2(1, 0, 0) + 1(0, 1, 0) + 3(0, 0, 1),

            ⎡ −2 ⎤
∴ [x]S =   ⎢  1 ⎥.
            ⎣  3 ⎦


§ Ex 3: (Finding a coordinate matrix relative to a nonstandard basis)

Find the coordinate matrix of x = (1, 2, –1) in R3
relative to the (nonstandard) basis
    B' = {u1, u2, u3} = {(1, 0, 1), (0, –1, 2), (2, 3, –5)}.
Sol:
    x = c1u1 + c2u2 + c3u3
⇒  (1, 2, −1) = c1(1, 0, 1) + c2(0, −1, 2) + c3(2, 3, −5)

    c1        + 2c3 =  1         ⎡ 1   0   2 ⎤ ⎡ c1 ⎤   ⎡  1 ⎤
⇒      − c2 + 3c3 =  2   i.e.  ⎢ 0  −1   3 ⎥ ⎢ c2 ⎥ = ⎢  2 ⎥
    c1 + 2c2 − 5c3 = −1          ⎣ 1   2  −5 ⎦ ⎣ c3 ⎦   ⎣ −1 ⎦

    ⎡ 1   0   2   1 ⎤  G.J.E.  ⎡ 1  0  0   5 ⎤             ⎡  5 ⎤
⇒  ⎢ 0  −1   3   2 ⎥  ─────→  ⎢ 0  1  0  −8 ⎥ ⇒ [x]B' =  ⎢ −8 ⎥
    ⎣ 1   2  −5  −1 ⎦          ⎣ 0  0  1  −2 ⎦             ⎣ −2 ⎦
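Finding coordinates relative to a nonstandard basis is just solving a linear system whose coefficient columns are the basis vectors. A sketch with NumPy, using the data of Ex 3:

```python
import numpy as np

# Basis vectors of B' as the columns of a matrix
Bp = np.array([[1,  0,  2],
               [0, -1,  3],
               [1,  2, -5]], dtype=float)
x = np.array([1, 2, -1], dtype=float)

# The coordinates c satisfy Bp @ c = x
c = np.linalg.solve(Bp, x)
print(c)   # [ 5. -8. -2.]
```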


n Change of basis problem:
Given the coordinates of a vector relative to one basis B, find
the coordinates of the same vector relative to another basis B'.
§ Ex: (Change of basis)
Consider two bases for a vector space V:
    B = {u1, u2},  B' = {u1', u2'}.

               ⎡ a ⎤             ⎡ c ⎤
If  [u1']B  =  ⎣ b ⎦,  [u2']B =  ⎣ d ⎦,
i.e., u1' = au1 + bu2,  u2' = cu1 + du2


                      ⎡ k1 ⎤
Let v ∈ V,   [v]B' =  ⎣ k2 ⎦
⇒ v = k1u1' + k2u2'
     = k1(au1 + bu2) + k2(cu1 + du2)
     = (k1a + k2c)u1 + (k1b + k2d)u2
So, the coordinates of v relative to the basis B = {u1, u2}
are k1a + k2c and k1b + k2d. That is,

           ⎡ k1a + k2c ⎤   ⎡ a  c ⎤ ⎡ k1 ⎤
⇒ [v]B =  ⎣ k1b + k2d ⎦ = ⎣ b  d ⎦ ⎣ k2 ⎦
        = [ [u1']B  [u2']B ] [v]B'
n Transition matrix from B' to B:
Let B = {u1, u2, ..., un} and B' = {u1', u2', ..., un'} be two bases
for a vector space V.
If [v]B is the coordinate matrix of v relative to B and
[v]B' is the coordinate matrix of v relative to B',
then
    [v]B = P [v]B' = [ [u1']B, [u2']B, ..., [un']B ] [v]B',
where
    P = [ [u1']B, [u2']B, ..., [un']B ]
is called the transition matrix from B' to B.


n Remark: The bases B and B' play symmetric roles, so exchanging
B and B' gives the following theorem.
n Thm 4.19: (The inverse of a transition matrix)

If P is the transition matrix from a basis B' to a basis B in Rn,
then
(1) P is invertible.
(2) The transition matrix from B to B' is P⁻¹.

n Notes:
    B = {u1, u2, ..., un},  B' = {u1', u2', ..., un'}
    [v]B  = [ [u1']B,  [u2']B,  ..., [un']B ] [v]B'  = P [v]B'
    [v]B' = [ [u1]B', [u2]B', ..., [un]B' ] [v]B   = P⁻¹ [v]B
n Thm 4.20: (Transition matrix from B' to B)
Let B = {v1, v2, ..., vn} and B' = {u1, u2, ..., un} be two bases
for Rn. Then the transition matrix P from B' to B can be found
by using Gauss-Jordan elimination on the n×2n matrix [B ⋮ B']
as follows.

    [B ⋮ B']  ─G.J.E.→  [In ⋮ P]


§ Ex 5: (Finding a transition matrix)

B = {(–3, 2), (4, –2)} and B' = {(–1, 2), (2, –2)} are two bases for R2.

(a) Find the transition matrix from B' to B.

                 ⎡ 1 ⎤
(b) Let [v]B' =  ⎣ 2 ⎦; find [v]B.

(c) Find the transition matrix from B to B'.
Sol:
(a)
    ⎡ −3   4  ⋮  −1   2 ⎤  G.J.E.  ⎡ 1  0  ⋮  3  −2 ⎤
    ⎣  2  −2  ⋮   2  −2 ⎦  ─────→  ⎣ 0  1  ⋮  2  −1 ⎦
           B        B'                  I       P

          ⎡ 3  −2 ⎤
    ∴ P = ⎣ 2  −1 ⎦   (the transition matrix from B' to B)

(b)
            ⎡ 1 ⎤                     ⎡ 3  −2 ⎤ ⎡ 1 ⎤   ⎡ −1 ⎤
    [v]B' = ⎣ 2 ⎦ ⇒ [v]B = P [v]B' =  ⎣ 2  −1 ⎦ ⎣ 2 ⎦ = ⎣  0 ⎦

Check:
            ⎡ 1 ⎤
    [v]B' = ⎣ 2 ⎦ ⇒ v = (1)(−1, 2) + (2)(2, −2) = (3, −2)
           ⎡ −1 ⎤
    [v]B = ⎣  0 ⎦ ⇒ v = (−1)(−3, 2) + (0)(4, −2) = (3, −2)


(c)
    ⎡ −1   2  ⋮  −3   4 ⎤  G.J.E.  ⎡ 1  0  ⋮  −1  2 ⎤
    ⎣  2  −2  ⋮   2  −2 ⎦  ─────→  ⎣ 0  1  ⋮  −2  3 ⎦
           B'       B                   I      P⁻¹

            ⎡ −1  2 ⎤
    ∴ P⁻¹ = ⎣ −2  3 ⎦   (the transition matrix from B to B')

n Check:
            ⎡ 3  −2 ⎤ ⎡ −1  2 ⎤   ⎡ 1  0 ⎤
    P P⁻¹ = ⎣ 2  −1 ⎦ ⎣ −2  3 ⎦ = ⎣ 0  1 ⎦ = I2
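Both transition matrices of Ex 5 can be recovered numerically by solving matrix equations: BP = B' gives P and B'Q = B gives P⁻¹ (a sketch; basis vectors are stored as columns):

```python
import numpy as np

B  = np.array([[-3, 4], [2, -2]], dtype=float)   # columns: (-3,2), (4,-2)
Bp = np.array([[-1, 2], [2, -2]], dtype=float)   # columns: (-1,2), (2,-2)

P = np.linalg.solve(B, Bp)       # transition matrix from B' to B
P_inv = np.linalg.solve(Bp, B)   # transition matrix from B to B'

print(P)           # [[ 3. -2.]  [ 2. -1.]]
print(P @ P_inv)   # the 2x2 identity, as Thm 4.19 predicts
```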
§ Ex 6: (Coordinate representation in P3(x))
(a) Find the coordinate matrix of p = 3x³ − 2x² + 4 relative to the
    standard basis S = {1, x, x², x³} in P3(x).
(b) Find the coordinate matrix of p = 3x³ − 2x² + 4 relative to the
    basis B = {1, 1+x, 1+x², 1+x³} in P3(x).
Sol:
                                                          ⎡  4 ⎤
                                                          ⎢  0 ⎥
(a) p = (4)(1) + (0)(x) + (−2)(x²) + (3)(x³)  ⇒  [p]S =  ⎢ −2 ⎥
                                                          ⎣  3 ⎦
                                                                 ⎡  3 ⎤
                                                                 ⎢  0 ⎥
(b) p = (3)(1) + (0)(1+x) + (−2)(1+x²) + (3)(1+x³)  ⇒  [p]B =   ⎢ −2 ⎥
                                                                 ⎣  3 ⎦
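Part (b) can be phrased as the same kind of linear solve as Ex 3: put the coefficient vectors of the basis polynomials (ordered by increasing power of x) as columns. A sketch:

```python
import numpy as np

# Columns: coefficient vectors of 1, 1+x, 1+x^2, 1+x^3
# (rows correspond to the powers 1, x, x^2, x^3)
M = np.array([[1, 1, 1, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
# p = 3x^3 - 2x^2 + 4 as a coefficient vector
p = np.array([4, 0, -2, 3], dtype=float)

coords = np.linalg.solve(M, p)
print(coords)   # [ 3.  0. -2.  3.]
```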


§ Ex: (Coordinate representation in M2×2)

                                   ⎡ 5  6 ⎤
Find the coordinate matrix of x =  ⎣ 7  8 ⎦  relative to
the standard basis of M2×2:
    B = { ⎡ 1  0 ⎤   ⎡ 0  1 ⎤   ⎡ 0  0 ⎤   ⎡ 0  0 ⎤ }
          ⎣ 0  0 ⎦ , ⎣ 0  0 ⎦ , ⎣ 1  0 ⎦ , ⎣ 0  1 ⎦
Sol:
        ⎡ 5  6 ⎤     ⎡ 1  0 ⎤     ⎡ 0  1 ⎤     ⎡ 0  0 ⎤     ⎡ 0  0 ⎤
    x = ⎣ 7  8 ⎦ = 5 ⎣ 0  0 ⎦ + 6 ⎣ 0  0 ⎦ + 7 ⎣ 1  0 ⎦ + 8 ⎣ 0  1 ⎦

              ⎡ 5 ⎤
              ⎢ 6 ⎥
    ⇒ [x]B =  ⎢ 7 ⎥
              ⎣ 8 ⎦
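Relative to the standard basis of M2×2, the coordinates of a matrix are simply its entries read row by row, so no solving is needed; a sketch:

```python
import numpy as np

X = np.array([[5, 6],
              [7, 8]])
# Coordinates relative to the standard basis of M2x2:
# just the entries in row-major order
coords = X.flatten()
print(coords)   # [5 6 7 8]
```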
