
Lecture 3.

Isomorphism
September 2023

The Dimension Theorem


The dimension theorem is one of the most useful results in all of linear algebra.
THEOREM 3.1 (Dimension Theorem) Let f : V → W be any linear transformation
and assume that Ker f and Im f are both finite dimensional. Then V is also finite dimensional
and
dim V = dim(Ker f ) + dim(Im f )
Proof. Every vector in Im f = f (V ) has the form f (v) for some v in V . Hence let
{f (b1 ), f (b2 ), . . . , f (bk )} be a basis of Im f , where the bi lie in V . Let {e1 , e2 , . . . , er } be
any basis of Ker f . Then dim(Im f ) = k and dim(Ker f ) = r, so it suffices to show that
B = {e1 , . . . , er , b1 , . . . , bk } is a basis of V .
1. B spans V . If v lies in V , then f (v) lies in Im f , so
f (v) = t1 f (b1 ) + t2 f (b2 ) + · · · + tk f (bk ), ti ∈ R.
This implies that v − t1 b1 − t2 b2 − · · · − tk bk lies in Ker f and so is a linear combination
of e1 , e2 , . . . , er . Hence v is a linear combination of the vectors in B.
2. B is linearly independent. Suppose that ti and sj in R satisfy
t1 e1 + · · · + tr er + s1 b1 + · · · + sk bk = 0. (1)
Applying f gives s1 f (b1 ) + s2 f (b2 ) + · · · + sk f (bk ) = 0 (because f (ei ) = 0 for each i).
Hence the independence of {f (b1 ), f (b2 ), . . . , f (bk )} yields s1 = · · · = sk = 0. But then
(1) becomes
t1 e1 + · · · + tr er = 0
so t1 = · · · = tr = 0 by the independence of {e1 , e2 , . . . , er }. This proves that B is linearly
independent.
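For a matrix transformation f (v) = Av, the theorem is the familiar rank–nullity identity: the rank of A plus the dimension of its null space equals the number of columns of A. A minimal numerical sketch with SymPy, using an illustrative matrix that is not from this lecture:

```python
from sympy import Matrix

# Illustrative matrix; it induces f : R^3 -> R^4 via f(v) = Av
A = Matrix([[1, 2, 0],
            [0, 1, 1],
            [1, 3, 1],
            [2, 5, 1]])

dim_V   = A.cols                  # dim V = number of columns of A
dim_im  = A.rank()                # dim(Im f) = rank of A
dim_ker = len(A.nullspace())      # dim(Ker f) = size of a null-space basis
print(dim_V == dim_ker + dim_im)  # True, as the Dimension Theorem predicts
```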
EXAMPLE 3.2 Find bases of the kernel and the image of the linear transformation
f : R3 → R4 induced by the matrix A, i.e., f (v) = Av. Compute the rank and nullity of f .
Determine whether f is one-to-one or onto. Find f −1 (b) if

A = \begin{pmatrix} 2 & 1 & 0 \\ 1 & -1 & 3 \\ 1 & 2 & -3 \\ 0 & 3 & -6 \end{pmatrix}; \qquad b = \begin{pmatrix} 1 \\ 2 \\ -1 \\ -3 \end{pmatrix}.
Solution. To obtain the kernel, we find the general solution of the homogeneous system
Ax = 0. To find f −1 (b), we find the general solution of the non-homogeneous system
Ax = b. So we can do both computations simultaneously:

\begin{pmatrix} 2 & 1 & 0 \\ 1 & -1 & 3 \\ 1 & 2 & -3 \\ 0 & 3 & -6 \end{pmatrix} x = \begin{pmatrix} 1 \\ 2 \\ -1 \\ -3 \end{pmatrix} \;\Longrightarrow\; x = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} + t \begin{pmatrix} -1 \\ 2 \\ 1 \end{pmatrix}, \quad t ∈ R.
Therefore,
Ker f = span{(−1, 2, 1)T },    f −1 (b) = (0, 1, 1)T + Ker f.
A basis of Ker f consists of just one vector (−1, 2, 1)T , and dim Ker f = 1.
By the Dimension Theorem,
dim(Im f ) = dim(R3 ) − dim(Ker f ) = 3 − 1 = 2
is the rank of f . By construction of A,
Im f = span{(2, 1, 1, 0)T , (1, −1, 2, 3)T , (0, 3, −3, −6)T }.
However, these three vectors do not form a basis of Im f : they cannot be linearly
independent, since dim(Im f ) = 2. Keeping any two of them that are linearly independent
gives a basis. For example,
v1 = (2, 1, 1, 0)T , v2 = (1, −1, 2, 3)T
is a basis of Im f . The transformation f is neither one-to-one nor onto, since Ker f ≠ {0}
and Im f ≠ R4 .
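The computation in Example 3.2 can be checked with SymPy (not part of the lecture itself); a minimal sketch with A and b as above:

```python
from sympy import Matrix

A = Matrix([[2, 1, 0],
            [1, -1, 3],
            [1, 2, -3],
            [0, 3, -6]])
b = Matrix([1, 2, -1, -3])

print(A.nullspace())                 # basis of Ker f: one vector proportional to (-1, 2, 1)^T
print(A.rank(), A.cols - A.rank())   # 2 1: rank and nullity of f

# General solution of Ax = b; setting the free parameter to 0 gives one preimage of b
sol, params = A.gauss_jordan_solve(b)
x0 = sol.subs({p: 0 for p in params})
print(A * x0 == b)                   # True; the full preimage is f^{-1}(b) = x0 + Ker f
```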
Isomorphisms
Often two vector spaces can consist of quite different types of vectors but, on closer
examination, turn out to be the same underlying space displayed in different symbols.
DEFINITION 3.3 A linear transformation f : V → W is called an isomorphism if
it is both onto and one-to-one. The vector spaces V and W are said to be isomorphic if
there exists an isomorphism f : V → W , and we write V ≅ W when this is the case.
EXAMPLE 3.4 Isomorphic spaces can “look” quite different. For example, M2×2 ≅ P3
because the transformation f : M2×2 → P3 given by
f \left( \begin{pmatrix} a & b \\ c & d \end{pmatrix} \right) = a + bx + cx^2 + dx^3
is an isomorphism.
An isomorphism f : V → W induces a pairing
v ↔ f (v)
between vectors v in V and vectors f (v) in W that preserves vector addition and scalar
multiplication. Hence, as far as their vector space properties are concerned, the spaces V
and W are identical except for notation.
The following theorem gives a very useful characterization of isomorphisms: They are
the linear transformations that preserve bases.
THEOREM 3.5 (Criterion of Isomorphism) If V and W are finite dimensional
spaces, the following conditions are equivalent for a linear transformation f : V → W .
1. f is an isomorphism.
2. If {e1 , e2 , . . . , en } is any basis of V , then {f (e1 ), f (e2 ), . . . , f (en )} is a basis of W .
3. There exists a basis {e1 , e2 , . . . , en } of V such that {f (e1 ), f (e2 ), . . . , f (en )} is a basis
of W .

Proof. (1) ⇒ (2). Let {e1 , . . . , en } be a basis of V . If

t1 f (e1 ) + · · · + tn f (en ) = 0

with ti in R, then f (t1 e1 + · · · + tn en ) = 0, so t1 e1 + · · · + tn en = 0 (because Ker f = {0}).


But then each ti = 0 by the independence of the ei , so {f (e1 ), . . . , f (en )} is independent.
To show that it spans W , choose w in W . Because f is onto, w = f (v) for some v in V ,
so write v = t1 e1 + · · · + tn en . Then w = f (v) = t1 f (e1 ) + · · · + tn f (en ), proving that
{f (e1 ), . . . , f (en )} spans W.
(2) ⇒ (3). This is because V has a basis.
(3) ⇒ (1). If f (v) = 0, write v = v1 e1 + · · · + vn en where each vi is in R. Then

0 = f (v) = v1 f (e1 ) + · · · + vn f (en ),

so v1 = · · · = vn = 0 by (3). Hence v = 0, so ker f = {0} and f is one-to-one. To show


that f is onto, let w be any vector in W . By (3) there exist w1 , . . . , wn in R such that

w = w1 f (e1 ) + · · · + wn f (en ) = f (w1 e1 + · · · + wn en ).

Thus f is onto.
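For a matrix transformation f (v) = Av on Rn with the standard basis, condition 3 says that f is an isomorphism exactly when the columns of A (the images f (e1 ), . . . , f (en )) form a basis of Rn, i.e. when A has full rank. A small numerical check with an illustrative matrix:

```python
import numpy as np

# Columns of A are f(e1), f(e2) for an illustrative f : R^2 -> R^2
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# The columns form a basis of R^2 iff A has full rank
print(np.linalg.matrix_rank(A) == 2)   # True, so this f is an isomorphism
```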
The following theorem shows that two vector spaces V and W have the same dimension
if and only if they are isomorphic.
THEOREM 3.6 (Isomorphism Theorem) If V and W are finite dimensional vector
spaces, then V ≅ W if and only if dim V = dim W .
Proof. ⇒ If V ≅ W , then there exists an isomorphism f : V → W . Since V is finite
dimensional, let {e1 , . . . , en } be a basis of V . Then {f (e1 ), . . . , f (en )} is a basis of W by
Theorem 3.5, so dim W = n = dim V .
⇐ Let V and W be vector spaces of dimension n, and suppose that {e1 , . . . , en } and
{b1 , . . . , bn } are bases of V and W , respectively. Theorem 1.9 asserts that there exists a
linear transformation f : V → W such that f (ei ) = bi for each i = 1, 2, . . . , n. Then
{f (e1 ), . . . , f (en )} is evidently a basis of W , so f is an isomorphism by Theorem 3.5.
Furthermore, the action of f is prescribed by

f (r1 e1 + · · · + rn en ) = r1 b1 + · · · + rn bn ,

so isomorphisms between spaces of equal dimension can be easily defined as soon as bases
are known.
COROLLARY 3.7 If V is a vector space and dim V = n, then V is isomorphic to Rn .
If V is a vector space of dimension n, note that there are important explicit isomorphisms
V → Rn . Fix a basis B = {v1 , v2 , . . . , vn } of V and write {e1 , e2 , . . . , en } for the standard
basis of Rn . There is a unique linear transformation f : V → Rn given by

f (x_1 v_1 + \cdots + x_n v_n) = x_1 e_1 + \cdots + x_n e_n = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix},

where each xi is in R. Moreover, f (vi ) = ei for each i, so f is an isomorphism by Theorem
3.5, called the coordinate isomorphism corresponding to the basis B.
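When V = Rn with a non-standard basis B, computing the coordinate isomorphism amounts to solving a linear system: if the basis vectors are the columns of a matrix P, the coordinate vector of v is the solution x of P x = v. A minimal NumPy sketch with an illustrative basis that is not taken from the text:

```python
import numpy as np

# Columns of P are the basis vectors v1, v2, v3 of an illustrative basis B of R^3
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
v = np.array([2.0, 3.0, 4.0])

# Coordinates x satisfy x1*v1 + x2*v2 + x3*v3 = v, i.e. P @ x = v
x = np.linalg.solve(P, v)
print(x)                          # coordinate vector of v relative to B
print(np.allclose(P @ x, v))      # True: v is recovered from its coordinates
```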
Composition and Inverse Linear Transformations
THEOREM 3.8 Let f : V → W and g : W → U be linear transformations.
1. The composition g ◦ f is again a linear transformation.
2. Suppose f and g are represented by matrices Af and Ag relative to bases E, E′, and
E′′ of V , W , and U , respectively. Then the matrix corresponding to the composition
g ◦ f is equal to the product of matrices corresponding to f and g in the appropriate
order:
Ag◦f = Ag · Af .
Proof. 1) Both f and g preserve addition and scalar multiplication, hence so does the
composition g ◦ f ; that is, g ◦ f is linear.
2) Take an arbitrary vector v ∈ V . It is mapped to the vector w ∈ W such that
w = Af v. The vector w, in turn, is mapped to the vector u ∈ U such that u = Ag w.
Therefore, eventually, the vector v is transformed to the vector u by the formula

u = Ag w = Ag (Af v) = (Ag Af )v.
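A quick numerical check of part 2, with illustrative matrices not taken from the text: applying f and then g to a vector gives the same result as applying the single product matrix Ag Af .

```python
import numpy as np

Af = np.array([[1, 2],
               [0, 1],
               [3, 0]])           # represents f : R^2 -> R^3
Ag = np.array([[1, 0, 2],
               [0, 1, 1]])        # represents g : R^3 -> R^2

v = np.array([1, -1])
# g(f(v)) computed in two steps vs. with the product matrix Ag @ Af
print(np.allclose(Ag @ (Af @ v), (Ag @ Af) @ v))   # True
```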

THEOREM 3.9 Let V and W be finite-dimensional vector spaces. The following


conditions are equivalent for a linear transformation f : V → W .
1. f is an isomorphism.
2. There exists a linear transformation g : W → V such that g ◦ f = 1V and f ◦ g = 1W .
Moreover, in this case g is also an isomorphism and is uniquely determined by f : If
w ∈ W is written as w = f (v), then g(w) = v.
Proof. (1) ⇒ (2). If B = {e1 , . . . , en } is a basis of V , then D = {f (e1 ), . . . , f (en )} is
a basis of W by Theorem 3.5. Hence, define a linear transformation g : W → V by

g[f (ei )] = ei for each i. (2)

Since ei = 1V (ei ), this gives g ◦ f = 1V by Proposition 1.7. But applying f gives


f [g[f (ei )]] = f (ei ) for each i, so f ◦ g = 1W (again by Proposition 1.7).
(2) ⇒ (1). If f (v) = f (v1 ), then g[f (v)] = g[f (v1 )]. Because gf = 1V by (2), this
reads v = v1 ; that is, f is one-to-one. Given w in W , the fact that f ◦ g = 1W means that
w = f [g(w)], so f is onto.
Finally, g is uniquely determined by the condition g ◦ f = 1V because this condition
implies (2) and g is an isomorphism because it carries the basis D to B. As to the last
assertion, given w in W , write w = r1 f (e1 ) + · · · + rn f (en ). Then w = f (v), where
v = r1 e1 + · · · + rn en . Then g(w) = v by (2).
DEFINITION 3.10 Given an isomorphism f : V → W , the unique isomorphism
g : W → V satisfying g ◦ f = 1V and f ◦ g = 1W is called the inverse of f and is denoted
by f −1 .
Hence f : V → W and f −1 : W → V are related by the fundamental identities:

f −1 [f (v)] = v for all v ∈ V and f [f −1 (w)] = w for all w ∈ W.

In other words, each of f and f −1 reverses the action of the other. In particular, equation
(2) in the proof of Theorem 3.9 shows how to define f −1 using the image of a basis under
the isomorphism f .
THEOREM 3.11 Let f : V → W be an invertible linear transformation (isomorphism)
represented by a matrix A relative to bases E, E′ of V , W , respectively. Then the matrix
corresponding to the inverse of f is A−1 .
Proof. Since f −1 ◦ f = 1V , by Theorem 3.8 we have A_{f^{-1}} A_f = I. Thus A_{f^{-1}} = A_f^{-1} = A^{-1} .
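This can be checked numerically for a concrete invertible matrix (an illustrative choice, not from the text): the matrix of f −1 is the matrix inverse, and the two fundamental identities of Definition 3.10 hold.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])        # matrix of an illustrative isomorphism f : R^2 -> R^2
A_inv = np.linalg.inv(A)          # matrix of f^{-1}, as in Theorem 3.11

w = np.array([3.0, 5.0])
print(np.allclose(A_inv @ (A @ w), w))   # f^{-1}[f(w)] = w
print(np.allclose(A @ (A_inv @ w), w))   # f[f^{-1}(w)] = w
```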

EXAMPLE 3.12 Define f : Pn → R by f [p(x)] = the sum of all the coefficients of p(x).
(a) Prove that f is linear.
(b) Use the dimension theorem to show that dim(Ker f ) = n.
(c) Conclude that {x − 1, x2 − 1, . . . , xn − 1} is a basis of ker f .
Solution. (a) The sum of the coefficients of p(x) + q(x) is the sum of the coefficients of
p(x) plus that of q(x), and the coefficients of rp(x) are r times those of p(x), so f is linear.
(b) Since f (1) = 1, f is onto R, so dim(Im f ) = 1 and dim(Ker f ) = dim Pn − dim(Im f ) =
(n + 1) − 1 = n.
(c) B = {x − 1, x2 − 1, . . . , xn − 1} consists of n polynomials in Ker f (each has coefficient
sum 1 − 1 = 0) and is independent (distinct degrees). Hence B is a basis of Ker f by (b).
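Since the sum of the coefficients of p(x) equals p(1), parts (b) and (c) can be spot-checked with SymPy for a fixed n (n = 3 below is an illustrative choice, not from the text):

```python
from sympy import symbols

x = symbols('x')
n = 3                                       # illustrative choice of n

# f(p) = p(1) = sum of the coefficients of p
basis = [x**k - 1 for k in range(1, n + 1)]
print([p.subs(x, 1) for p in basis])        # [0, 0, 0]: each x^k - 1 lies in Ker f
print((1 + x).subs(x, 1))                   # 2 != 0, so f is onto R and dim(Im f) = 1
```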
Exercises

1. Find bases of the kernel and the image of the linear transformation f : R4 → R3
induced by the matrix A, i.e., f (v) = Av. Compute the rank and nullity of f . Determine
whether f is one-to-one or onto. Find f −1 (b) if

A = \begin{pmatrix} 2 & 1 & -1 & 3 \\ 1 & 0 & 3 & 1 \\ 1 & 1 & -4 & 2 \end{pmatrix}; \qquad b = \begin{pmatrix} 4 \\ 2 \\ 2 \end{pmatrix}.

Answer. Ker f = span{(−3, 7, 1, 0)T , (1, 1, 0, −1)T }, Im f = span{(1, 0, 1)T , (0, 1, −1)T },
rank f = 2, dim(Ker f ) = 2, f −1 (b) = (1, −1, 0, 1)T + Ker f . Neither one-to-one nor onto.

2. Find bases of the kernel and the image of the linear transformation f : R3 → R4
induced by the matrix A, i.e., f (v) = Av. Compute the rank and nullity of f . Determine
whether f is one-to-one or onto. Find f −1 (b) if

A = \begin{pmatrix} 4 & 5 & 9 \\ 3 & 2 & 7 \\ -1 & 3 & 2 \\ 7 & 7 & 6 \end{pmatrix}; \qquad b = \begin{pmatrix} 5 \\ 4 \\ 3 \\ -1 \end{pmatrix}.

Answer. Ker f = {0}, Im f = span{(4, 3, −1, 7)T , (5, 2, 3, 7)T , (9, 7, 2, 6)T }, rank f =
3, dim(Ker f ) = 0, f −1 (b) = (−1, 0, 1)T . The transformation is one-to-one but not onto.
