Linear Transformations and Their Matrices: Without Coordinates (No Matrix)
Example 1: Projection
We can describe a projection onto a line in the plane as a transformation
T : R2 −→ R2 .
The rule for this mapping is that every vector v is projected onto a vector T (v)
on the line of the projection. Projection is a linear transformation.
Definition of linear
A transformation T is linear if:
T (v + w) = T (v) + T (w)
and
T (cv) = cT (v)
for all vectors v and w and for all scalars c. Equivalently,
T (cv + dw) = cT (v) + dT (w)
for all vectors v and w and scalars c and d. It’s worth noticing that T (0) = 0:
taking c = 0 in T (cv) = cT (v) gives T (0) = T (0 · v) = 0 · T (v) = 0.
Example 2: Rotation by 45◦
This transformation T : R2 −→ R2 takes an input vector v and outputs the
vector T (v) that comes from rotating v counterclockwise by 45◦ about the
origin. Note that we can describe this and see that it’s linear without using
any coordinates.
Example 3: T (v) = Av
Given a matrix A, define T (v) = Av. This is a linear transformation:
A(v + w) = A(v) + A(w)
and
A(cv) = cA(v).
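As a quick numerical sanity check (a sketch, not part of the notes; the matrix and vectors below are arbitrary choices), we can verify both linearity conditions for a particular A:

```python
# Check T(v + w) = T(v) + T(w) and T(cv) = cT(v) for T(v) = Av.

def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[2.0, 1.0],
     [0.0, 3.0]]                  # any fixed matrix works
v, w, c = [1.0, 2.0], [4.0, -1.0], 2.5

# T(v + w) = T(v) + T(w)
lhs = matvec(A, [vi + wi for vi, wi in zip(v, w)])
rhs = [a + b for a, b in zip(matvec(A, v), matvec(A, w))]
assert lhs == rhs

# T(cv) = c T(v)
assert matvec(A, [c * vi for vi in v]) == [c * t for t in matvec(A, v)]
```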
Example 4
Suppose

    A = [ 1   0 ]
        [ 0  −1 ].

How would we describe the transformation T (v) = Av geometrically?
When we multiply A by a vector v in R2 , the x component of the vector
is unchanged and the sign of the y component of the vector is reversed. The
transformation v ↦ Av reflects the xy-plane across the x axis.
Example 5
How could we find a linear transformation T : R3 −→ R2 that takes three-
dimensional space to two-dimensional space? Choose any 2 by 3 matrix A and
define T (v) = Av.
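A minimal sketch of this, using a hypothetical 2 by 3 matrix that simply drops the z-coordinate:

```python
# A 2-by-3 matrix A sends vectors in R^3 to vectors in R^2.

def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]         # example choice: forget the z-coordinate

v = [3.0, 4.0, 5.0]           # a vector in R^3
print(matvec(A, v))           # a vector in R^2: [3.0, 4.0]
```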
Describing T (v)
How much information do we need about T to determine T (v) for all v? If
we know how T transforms a single vector v1 , we can use the fact that T is a
linear transformation to calculate T (cv1 ) for any scalar c. If we know T (v1 )
and T (v2 ) for two independent vectors v1 and v2 , we can predict how T will
transform any vector cv1 + dv2 in the plane spanned by v1 and v2 . If we wish to
know T (v) for all vectors v in Rn , we just need to know T (v1 ), T (v2 ), ..., T (vn )
for any basis v1 , v2 , ..., vn of the input space. This is because any v in the input
space can be written as a linear combination of basis vectors, and we know that
T is linear:
v = c1 v1 + c2 v2 + · · · + cn vn
T (v) = c1 T (v1 ) + c2 T (v2 ) + · · · + cn T (vn ).
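A small sketch of this idea in R2, with hypothetical values for T (v1 ) and T (v2 ): once we express v in the basis, linearity determines T (v).

```python
# If we know T(v1) and T(v2) for a basis v1, v2 of R^2,
# we can recover T(v) for any v = c1 v1 + c2 v2.

v1, v2 = [1.0, 0.0], [1.0, 1.0]       # a (non-standard) basis of R^2
Tv1, Tv2 = [2.0, 0.0], [3.0, 3.0]     # hypothetical known outputs of T

# Express v = [5, 2] in the basis: v = c1 v1 + c2 v2.
v = [5.0, 2.0]
c2 = v[1] / v2[1]                     # second coordinate comes only from v2
c1 = v[0] - c2 * v2[0]
assert [c1 * a + c2 * b for a, b in zip(v1, v2)] == v

# Linearity gives T(v) = c1 T(v1) + c2 T(v2).
Tv = [c1 * a + c2 * b for a, b in zip(Tv1, Tv2)]
print(Tv)                             # [12.0, 6.0]
```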
For example, return to the projection of Example 1 and choose a basis v1 on
the line of the projection and v2 perpendicular to it. Then T (v1 ) = v1 and
T (v2 ) = 0, so

    T (c1 v1 + c2 v2 ) = c1 v1 + 0

and the matrix of the projection transformation, in this basis, is just

    A = [ 1  0 ]
        [ 0  0 ],

because

    Av = [ 1  0 ] [ c1 ]  =  [ c1 ]
         [ 0  0 ] [ c2 ]     [ 0  ].
This is a nice matrix! If our chosen basis consists of eigenvectors then the
matrix of the transformation will be the diagonal matrix Λ with eigenvalues
on the diagonal.
To see how important the choice of basis is, let’s use the standard basis for
the linear transformation that projects the plane onto a line at a 45◦ angle. If
we choose

    v1 = w1 = [ 1 ]   and   v2 = w2 = [ 0 ] ,
              [ 0 ]                   [ 1 ]

we get the projection matrix

    P = a aT / (aT a) = [ 1/2  1/2 ]
                        [ 1/2  1/2 ],

where a is a vector in the direction of the line. We can check by graphing that
this is the correct matrix, but calculating P directly is more difficult for this
basis than it was with a basis of eigenvectors.
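A short sketch of this computation: build P from the formula a aT /(aT a) with a = (1, 1), then check that a and the perpendicular direction (1, −1) are eigenvectors with eigenvalues 1 and 0.

```python
# Projection matrix onto the 45-degree line, P = a a^T / (a^T a).

a = [1.0, 1.0]
ata = sum(x * x for x in a)                    # a^T a = 2
P = [[x * y / ata for y in a] for x in a]      # outer product / a^T a
assert P == [[0.5, 0.5], [0.5, 0.5]]

def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in A]

assert matvec(P, [1.0, 1.0]) == [1.0, 1.0]     # on the line: eigenvalue 1
assert matvec(P, [1.0, -1.0]) == [0.0, 0.0]    # perpendicular: eigenvalue 0
```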
Example 6: T = d/dx
Let T be the transformation that takes the derivative of its input:

    T (c1 + c2 x + c3 x2 ) = c2 + 2c3 x.

T is linear: the derivative of a sum is the sum of the derivatives, and the
derivative of cv is c times the derivative of v.
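A sketch of the derivative as matrix multiplication, assuming we represent the input in the basis 1, x, x2 and the output in the basis 1, x (so polynomials become coefficient vectors):

```python
# Represent c1 + c2*x + c3*x^2 by its coefficient vector (c1, c2, c3).
# In the bases 1, x, x^2 (input) and 1, x (output), taking the
# derivative is multiplication by a 2-by-3 matrix A.

def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[0.0, 1.0, 0.0],
     [0.0, 0.0, 2.0]]     # d/dx: c1 + c2 x + c3 x^2  ->  c2 + 2 c3 x

p = [5.0, 3.0, 4.0]       # p(x) = 5 + 3x + 4x^2
print(matvec(A, p))       # [3.0, 8.0], i.e. p'(x) = 3 + 8x
```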
Conclusion
For any linear transformation T we can find a matrix A so that T (v) = Av.
If the transformation is invertible, the inverse transformation has the matrix
A−1 . The product of two transformations T1 : v �→ A1 v and T2 : w �→ A2 w
corresponds to the product A2 A1 of their matrices: applying T1 and then T2
sends v to A2 A1 v. This is where matrix multiplication came from!
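A sketch of this correspondence, composing the reflection of Example 4 with a hypothetical 90◦ rotation:

```python
# Composing T1(v) = A1 v with T2(w) = A2 w is the same as
# multiplying by the product A2 A1 (apply A1 first, then A2).

def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A1 = [[1.0, 0.0], [0.0, -1.0]]    # reflect across the x axis (Example 4)
A2 = [[0.0, -1.0], [1.0, 0.0]]    # rotate 90 degrees counterclockwise

v = [2.0, 3.0]
step_by_step = matvec(A2, matvec(A1, v))   # apply T1, then T2
one_matrix = matvec(matmul(A2, A1), v)     # multiply by A2 A1 once
assert step_by_step == one_matrix
print(one_matrix)                          # [3.0, 2.0]
```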
MIT OpenCourseWare
http://ocw.mit.edu
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.