Lecture 6
Amit Tripathi
Definition
An isomorphism of the vector spaces V and W is a bijective linear
transformation T : V → W . When the map defining the isomorphism is
clear from the context, we write
V ≅ W.
(a) Id : R2 → R2 is an isomorphism.
(b) T : R2 → R2 , T (x1 , x2 ) = (x1 , x1 + x2 ) is an isomorphism.
(c) Is every non-zero linear transformation R → R an isomorphism?
(d) Is every non-zero linear transformation R2 → R2 an isomorphism?
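As a quick numerical sketch (not part of the lecture; it assumes numpy): the map in (b) is multiplication by the matrix whose columns are T(e1) and T(e2), and a linear map R2 → R2 is an isomorphism exactly when this matrix is invertible.

    import numpy as np

    # Matrix of T(x1, x2) = (x1, x1 + x2) with respect to the standard basis:
    # the columns are T(e1) = (1, 1) and T(e2) = (0, 1).
    A = np.array([[1.0, 0.0],
                  [1.0, 1.0]])

    print(np.linalg.det(A))    # 1.0, nonzero, so T is bijective
    print(np.linalg.inv(A))    # [[ 1. 0.] [-1. 1.]], the matrix of T^{-1}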
Remark
It is clear from the rank-nullity theorem that if T : V → W is an
isomorphism, then dim(V ) = dim(W ).
Definition
Let W and W ′ be two subspaces of V such that W ∩ W ′ = {0}. Then
W + W ′ is also written as W ⊕ W ′ .
(a) Let V = R2 and let W be the subspace spanned by the vector (1, 0)
and W ′ be the subspace spanned by the vector (0, 1). Then
V = W ⊕ W ′.
(b) If V = W ⊕ W ′ , then any vector α ∈ V can be written uniquely as
α = β + β′
where β ∈ W and β ′ ∈ W ′ .

Now let T : V → W be a linear transformation, let {α1 , · · · , αn } be an
ordered basis of V and let {β1 , · · · , βm } be an ordered basis of W . For
each j we can write
T (αj ) = a1j β1 + a2j β2 + · · · + amj βm ,
where aij ∈ R.
Thus, if we know the mn scalars {aij } and the bases {αi }, {βj }, then we
have determined T . We define an m × n matrix A as follows:
In other words,

    A = [ a11  a12  · · ·  a1n ]
        [ a21  a22  · · ·  a2n ]
        [  ·    ·   · · ·   ·  ]
        [ am1  am2  · · ·  amn ]
Writing v = c1 α1 + · · · + cn αn ∈ V and w = T (v ), we have
T (v ) = T (c1 α1 + · · · + cn αn )
= c1 T (α1 ) + · · · + cn T (αn )
= c1 (a11 β1 + a21 β2 + · · · + am1 βm ) + · · ·
+ cn (a1n β1 + a2n β2 + · · · + amn βm )
= (a11 c1 + a12 c2 + · · · + a1n cn ) β1 + · · ·
+ (am1 c1 + am2 c2 + · · · + amn cn ) βm
= w.
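To make this recipe concrete, here is a small sketch (assuming numpy) with a sample map of my own choosing, T : R2 → R3, T(x, y) = (x, x + y, y), and standard bases; the jth column of A is the coordinate vector of T(αj).

    import numpy as np

    def T(v):
        # a sample linear map R^2 -> R^3, chosen for illustration
        x, y = v
        return np.array([x, x + y, y])

    # The jth column of A is the coordinate vector of T(alpha_j)
    # with respect to the standard basis of R^3.
    A = np.column_stack([T(np.array([1.0, 0.0])),
                         T(np.array([0.0, 1.0]))])

    v = np.array([3.0, -2.0])
    print(np.allclose(A @ v, T(v)))   # True: [T(v)] = A [v]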
Theorem
Let V be an n-dimensional vector space and let W be an m-dimensional
vector space. Let B be an ordered basis of V and C be an ordered basis of
W . For each linear transformation T : V → W , there is an m × n matrix
A such that
[T (α)]C = A[α]B , for each vector α ∈ V .
Here the jth column of A is the coordinate vector [T (αj )]C ; that is, the
(i, j)th entry aij is determined by T (αj ) = a1j β1 + · · · + amj βm .
Note that the matrix corresponding to T depends upon the chosen bases
of both the vector spaces V and W .
An important case is when V = W i.e., when T is a linear operator.
In this case, we can take C = B. We shall then denote the matrix of T
with respect to the basis B by [T ]B , and the linear transformation can
be described in terms of matrices as
[T (α)]B = [T ]B [α]B .
Remark
It is important to keep in mind that the matrix representation of a linear
transformation is always with respect to chosen bases for the domain and
the codomain vector spaces.
Example
Let V = R2 and let T : V → V be the linear transformation defined as
T (x1 , x2 ) = (x1 , 0), the projection onto the first coordinate. Let
B = {e1 , e2 } be the standard ordered basis. Then T (e1 ) = e1 = 1 · e1 + 0 · e2
and T (e2 ) = 0 = 0 · e1 + 0 · e2 . Thus, we get
    [T ]B = [ 1  0 ]
            [ 0  0 ]
Example
Now, let C = {e1′ , e2′ } be another ordered basis where e1′ = (1, 1) and
e2′ = (1, 0).
Since T (e1 ) = (1, 0) = 0 · e1′ + 1 · e2′ and T (e2 ) = (0, 0), we get
    [T ]B,C = [ 0  0 ]
              [ 1  0 ]
where the subscripts B and C emphasize that the matrix representation
of T depends upon the chosen bases.
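Finding the coordinates of T(ej) in the basis C amounts to solving a linear system. A sketch for the example above (assuming numpy; T is the projection and C = {(1, 1), (1, 0)}):

    import numpy as np

    C = np.column_stack([(1.0, 1.0), (1.0, 0.0)])   # basis vectors of C as columns

    def T(v):
        return np.array([v[0], 0.0])                # the projection (x1, x2) -> (x1, 0)

    # jth column of [T]_{B,C}: the coordinates X of T(e_j) in C, i.e. solve C X = T(e_j).
    cols = [np.linalg.solve(C, T(e)) for e in np.eye(2)]
    print(np.column_stack(cols))                    # [[0. 0.]
                                                    #  [1. 0.]]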
Example
Let V be a finite dimensional vector space and let B = {α1 , · · · , αn } be a
basis of V . Let T : V → V be the identity operator i.e. T (α) = α for all
α ∈ V . Then
T (α1 ) = α1 = 1 · α1 + 0 · α2 + · · · + 0 · αn
T (α2 ) = α2 = 0 · α1 + 1 · α2 + · · · + 0 · αn
⋮
T (αn ) = αn = 0 · α1 + 0 · α2 + · · · + 1 · αn
It follows that the matrix of T with respect to any arbitrary basis B is the
identity matrix.
Example
    Rθ : R2 → R2 ,   [ x ]      [ cos(θ)  −sin(θ) ] [ x ]
                     [ y ]  ↦   [ sin(θ)   cos(θ) ] [ y ]
In other words, Rθ (x, y ) = (x cos(θ) − y sin(θ), x sin(θ) + y cos(θ)).
As R-vector spaces, R2 ≅ C, via the identification (x, y ) ↔ x + iy .
How does Rθ look over C?
             Rθ
     R2 ─────────→ R2
      │             │
      ≅             ≅
      ↓             ↓
      C ─────────→ C
             ?
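A numerical hint at the answer (my own check, not stated on the slide): under the identification (x, y) ↔ x + iy, the rotation Rθ acts as multiplication by e^{iθ}.

    import numpy as np

    theta = 0.7                             # an arbitrary sample angle
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    v = np.array([2.0, 3.0])
    z = v[0] + 1j * v[1]                    # identify (x, y) with x + iy

    w = R @ v                               # rotate in R^2
    # Rotating in R^2 matches multiplying by e^{i*theta} in C:
    print(np.isclose(w[0] + 1j * w[1], np.exp(1j * theta) * z))   # True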
Let T : V → W and S : W → Z be linear transformations, and let B, C, D
be ordered bases of V , W , Z respectively:

              T            S
    (V, B) ─────→ (W, C) ─────→ (Z, D)
Let A be the matrix of T with respect to the basis B and C. Let B be the
matrix of S with respect to the basis C and D.
               S ◦ T
    (V, B) ──────────→ (Z, D)
[S ◦ T ]B,D = BA.
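A quick sketch of this formula with random matrices of compatible sizes (assuming numpy):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 3))   # matrix of T : V -> W (dim V = 3, dim W = 4)
    B = rng.standard_normal((2, 4))   # matrix of S : W -> Z (dim Z = 2)

    c = rng.standard_normal(3)        # a coordinate vector [v]_B
    # Applying T, then S, agrees with multiplying once by the product BA:
    print(np.allclose(B @ (A @ c), (B @ A) @ c))   # True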
Remark
Let V be a finite dimensional vector space and let B be a basis of V . A
linear operator
T :V →V
is an isomorphism if and only if the associated matrix [T ]B is invertible. In
fact, we have
[T −1 ]B = ([T ]B )−1 .
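For instance (a sketch, using the rotation matrices from the earlier example): the inverse of the rotation by θ is the rotation by −θ, and inverting the matrix gives exactly that.

    import numpy as np

    def R(t):
        # the rotation matrix R_theta from the earlier example
        return np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]])

    theta = 0.7   # an arbitrary sample angle
    # The inverse operator of "rotate by theta" is "rotate by -theta":
    print(np.allclose(np.linalg.inv(R(theta)), R(-theta)))   # True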
Conversely, every m × n matrix A defines a linear transformation
TA : Rn → Rm , TA (v ) = Av ,
and the matrix of TA with respect to the standard bases is A itself.
Let Hom(V , W ) denote the set of all the linear transformations from V to
W.
Let T , T ′ ∈ Hom(V , W ) and let c ∈ R. We define T + T ′ and cT as
follows: for any v ∈ V , set
(T + T ′ )(v ) = T (v ) + T ′ (v )
(cT )(v ) = c(T (v )).
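Under the matrix representation, these operations become entrywise matrix addition and scaling. A minimal sketch (my own sample matrices, assuming numpy):

    import numpy as np

    A1 = np.array([[1.0, 2.0], [0.0, 1.0]])   # matrix of some T  (sample values)
    A2 = np.array([[0.0, 1.0], [1.0, 0.0]])   # matrix of some T'
    c = 3.0
    v = np.array([1.0, -1.0])

    # (T + T')(v) = T(v) + T'(v)  corresponds to adding the matrices:
    print(np.allclose((A1 + A2) @ v, A1 @ v + A2 @ v))   # True
    # (cT)(v) = c T(v)  corresponds to scaling the matrix:
    print(np.allclose((c * A1) @ v, c * (A1 @ v)))       # True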
Definition
A linear operator is a linear transformation from a vector space to itself.
Definition
Let T : V → V be a linear operator. Suppose there is a scalar c and a
nonzero vector v ∈ V such that
T (v ) = cv .
Then c is called an eigenvalue of T , and v is called an eigenvector of T
corresponding to c.
Example
Define T : R3 → R3 as T (v1 , v2 , v3 ) = (2v3 , v1 + v3 , 2v1 ). The
eigenvalue equation T (v ) = cv amounts to the system
2v3 = cv1
v1 + v3 = cv2
2v1 = cv3 .
Solving this system, we get the following possibilities:
(a) v1 = v2 = v3 = 0.
(b) c = 0, v1 = v3 = 0 and v2 arbitrary.
(c) c = 2 and v1 = v2 = v3 .
(d) c = −2, v1 = −v3 and v2 = 0.
The first possibility gives v = 0, which is not allowed. With eigenvalue
c = 0, one possible eigenvector is (0, 1, 0); with c = 2, one possible
eigenvector is (1, 1, 1); and with c = −2, one possible eigenvector is
(1, 0, −1).
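A cross-check of this example (my sketch, assuming numpy): the matrix of T with respect to the standard basis is A below, and np.linalg.eig recovers the eigenvalues −2, 0, 2.

    import numpy as np

    # Matrix of T(v1, v2, v3) = (2*v3, v1 + v3, 2*v1) in the standard basis.
    A = np.array([[0.0, 0.0, 2.0],
                  [1.0, 0.0, 1.0],
                  [2.0, 0.0, 0.0]])

    vals, vecs = np.linalg.eig(A)
    print(np.sort(vals.real))   # [-2.  0.  2.]
    # Each column of vecs is an eigenvector for the corresponding eigenvalue.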
Theorem
A scalar λ is an eigenvalue of T if and only if T − λI is not invertible.
Equivalently, if A is a matrix representation of T with respect to some
basis B, then
λ is an eigenvalue ⇐⇒ |A − λI | = 0.
Proof.
Any eigenvalue λ corresponds to some eigenvector v ≠ 0. So we have
Av = λv . This is equivalent to saying that the homogeneous system
(A − λI )X = 0
has a non-zero solution, which happens if and only if A − λI is not
invertible, i.e. |A − λI | = 0.
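The determinant criterion in code, for the same matrix A as in the previous example (a sketch):

    import numpy as np

    A = np.array([[0.0, 0.0, 2.0],
                  [1.0, 0.0, 1.0],
                  [2.0, 0.0, 0.0]])

    for lam in (-2.0, 0.0, 1.0, 2.0):
        d = np.linalg.det(A - lam * np.eye(3))
        print(lam, np.isclose(d, 0.0))   # True exactly for the eigenvalues -2, 0, 2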
Theorem
Let T : V → V be a linear operator and let λ1 , · · · , λn be distinct
eigenvalues of T . Let vi ∈ V be an eigenvector corresponding to λi for
i = 1, · · · , n. Then the set {v1 , · · · , vn } is linearly independent.
Proof.
We argue by contradiction. Suppose v1 , · · · , vn are linearly dependent. Since any
eigenvector is by definition non-zero, we get that the set {v1 } is linearly
independent. Let m be the first integer ≥ 1 such that the set {v1 , · · · , vm }
is linearly independent but the set {v1 , · · · , vm+1 } is linearly dependent.
Let
c1 v1 + · · · + cm vm + cm+1 vm+1 = 0,
be a non-trivial linear relation; note that cm+1 ≠ 0, for otherwise this
would be a non-trivial relation among the independent set {v1 , · · · , vm }.
Applying T on both sides, we get
c1 λ1 v1 + · · · + cm λm vm + cm+1 λm+1 vm+1 = 0.
Multiplying the first equation by λm+1 and subtracting from the second,
we get
c1 (λ1 − λm+1 )v1 + · · · + cm (λm − λm+1 )vm = 0.
Since the set {v1 , · · · , vm } is linearly independent and λi − λm+1 ≠ 0 for
i = 1, · · · , m, we conclude that
c1 = · · · = cm = 0.
But then the relation reduces to cm+1 vm+1 = 0, and since vm+1 ≠ 0 we get
cm+1 = 0, a contradiction. Hence {v1 , · · · , vn } is linearly independent.
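A sketch verifying the theorem on the earlier example: putting the eigenvectors for the distinct eigenvalues −2, 0, 2 as columns of a matrix gives full rank.

    import numpy as np

    # Eigenvectors found earlier, for the eigenvalues -2, 0, 2 respectively.
    V = np.column_stack([(1.0, 0.0, -1.0),
                         (0.0, 1.0, 0.0),
                         (1.0, 1.0, 1.0)])
    print(np.linalg.matrix_rank(V))   # 3, so the three eigenvectors are independent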