Linear Algebra I
UNIT I
VECTORS
Certain physical quantities, such as mass, area, density, volume, etc., that possess only
magnitude are called scalars. On the other hand, there are physical quantities, such as
force, displacement, velocity, acceleration, etc., that have both magnitude and direction.
Such quantities are called vectors.
The concept of a vector is essential for the whole course. It provides the foundation and
geometric motivation for everything that follows. Hence the properties of vectors, both
algebraic and geometric, will be discussed in this unit.
We know that, once a unit length is selected, a number x can be used to represent a point
on a line. A pair of numbers (i.e. a couple of numbers) (x, y) can be used to represent a
point in the plane. A triple of numbers (x, y, z) can be used to represent a point in space.
The following pictures illustrate these representations:
We can say that a single number represents a point in 1-space (A), a couple represents a
point in 2-space (B) and a triple represents a point in 3-space (C).
Although we cannot draw a picture to go further, we can say that a quadruple of numbers
(x, y, z, w) or (x1, x2, x3, x4) represents a point in 4-space.
Example 1.1.1 The space we live in can be considered as a 3 space. After selecting an
origin and a coordinate system, we can describe the position of a point
(body, particle, etc.) by 3 coordinates. We can extend this space to a 4
dimensional space, with a fourth coordinate, for example, time. If you
select the origin of the time axis as the birth of Christ, how do we
describe a body with negative time co-ordinate? What if the birth of
the earth is taken as the origin of time?
If A = (a1, a2, …, an) and B = (b1, b2, …, bn) are points in the same space, and if c is a
real number, then
i. A and B are equal (or represent the same point) if a1 = b1, a2 = b2, …, and an = bn.
ii. A + B, A – B and cA are defined to be the points whose coordinates are (a1 + b1,
a2 + b2, …, an + bn), (a1 – b1, a2 – b2, …, an – bn) and (ca1, ca2, …, can), respectively.
Example 1.1.2 1) Let A = (1,2), B = (-3,4) , then A+B=(-2,6), A-B = (4,-2), -3A=(-3,-6)
2) Let X = (1, 0, π, 4), Y = (2, 4,-2π,-6), then 2X+Y = (4, 4, 0, 2) and
X-(1/2) Y = (0,-2, 2π, 7).
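The componentwise operations defined above can be sketched in Python; the helper names add, sub and scale are our own, not from the text:

```python
# Componentwise operations on points/vectors given as tuples of any dimension.
def add(A, B):
    return tuple(a + b for a, b in zip(A, B))

def sub(A, B):
    return tuple(a - b for a, b in zip(A, B))

def scale(c, A):
    return tuple(c * a for a in A)

# Reproducing Example 1.1.2(1): A = (1, 2), B = (-3, 4)
print(add((1, 2), (-3, 4)))    # (-2, 6)
print(sub((1, 2), (-3, 4)))    # (4, -2)
print(scale(-3, (1, 2)))       # (-3, -6)
```

The same functions work unchanged for triples, quadruples, or n-tuples.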
Every pair of distinct points A and B in ℝⁿ determines a directed line segment with initial
point at A and terminal point at B. We call such a directed line segment a vector and
denote it by AB (read "vector AB"). The length of the line segment is the magnitude of the vector. Although
the segment AA has zero length, and strictly speaking, no direction, it is convenient to view it as a
vector. It is called a zero or a null vector. It is often denoted by O.
Notice that the definition of equality of two vectors does not require that the vectors have
the same initial and terminal points. Rather it suggests that we can move vectors freely
provided we make no change in magnitude and direction.
Activity 1.2.1
Let (a1, a2) be the coordinate representation of A
and let (b1, b2) be that of B. Let P be the point
(b1 – a1, b2 – a2). If O is the origin, is OP in the
direction of AB?
Is the length of OP equal to the length of AB?
Is OP = AB?
If your answer for the above questions is yes, then we can conclude that any vector V =
AB in the plane is equivalent to a vector OP with initial point at the origin. This is the only vector
of this kind whose initial point is the origin, and P = B – A, so that OP = AB. Moreover, V
= OP is uniquely determined by its terminal point P. If P = (x, y), then we shall write V =
(x, y) and refer to it as the coordinate representation of V relative to the chosen
coordinate system. In view of this, we shall call (x, y) either a point or a vector,
depending on the interpretation which we have in mind. So if V = AB, then we can write
V = B – A. In view of this, two vectors AB and CD are equal (or equivalent) if B – A =
D – C.
Example 1.2.1: If P = (1, 3), Q = (-1, 0), R = (0, -1) and S = (-2, -4), then
Q – P = (-2, -3) = S – R, so the vectors PQ and RS are equal.
As numbers can be added, subtracted and multiplied, vectors can be combined in the
following ways.
Let A = (a1, a2) and B = (b1, b2) be vectors and t be a real number. The sum
A + B = (a1 + b1, a2 + b2)
The difference A – B = (a1 - b1, a2 – b2)
The scalar multiple tA = (t a1, t a2)
The geometric interpretation of the above vector operations is that A + B is a vector
obtained by placing the initial point of B on the terminal point of A.
If t > 0, then tA is a vector in the same direction as A. What about if t < 0? Then A and tA are said
to have opposite directions (see Figure a for t > 0 and Figure b for t < 0).
We can extend the above notions to vectors in ℝⁿ, but the geometric interpretations for n
> 3 are difficult. Hence we focus on algebraic aspects of vectors.
If A = (a1, a2, …, an) and B = (b1, b2, …, bn) are vectors in ℝⁿ and if t is any real number,
then A + B = (a1 + b1, a2 + b2, …, an + bn), A – B = (a1 – b1, a2 – b2, …, an – bn) and
tA = (ta1, ta2, …, tan).
Activity 1.2.2: 1) Let A = (6, -2, 4). Find two vectors C and D which are parallel to A.
Are C and D also parallel to each other?
2) Let P = (3,7), Q = (-4,2), R = (5,1), S = (-16,-14).
Is the vector PQ parallel to the vector RS?
Using the above definitions and applying the associative and commutative properties of
real numbers, one can prove the following theorem.
Theorem 1.2.1 Let A, B and C be any members of ℝⁿ, and let m and n be any real
numbers. Then
a) (A + B) + C = A + (B + C) (associativity)
b) A + O = O + A = A
c) A + B = B + A (commutativity)
d) A + (–A) = O
e) m(A + B) = mA + mB (distributive property)
f) (m + n)A = mA + nA
g) (mn)A = m(nA)
Proof c)
A + B = (a1 + b1, a2 + b2, …, an + bn)
= (b1 + a1, b2 + a2, …, bn + an) (since addition of real numbers is commutative)
= B + A
Example 1.2.3
A boat captain wants to travel due south at 40 knots.
If the current is moving northwest at 16 knots, in
what direction and with what magnitude should he work the
engine?
Exercise 1.2.1:
1. Given three vectors A = (1, 1, 1), B = (-1, 2, 3) and C = (0, 3, 4), find
a. A+B c. A+B – C
b. 2A – B d. A – 3B + 10C
2. Determine whether scalars x and y can be found to satisfy the vector equations
a. (2, 1, 0) = x(-2, 0, 2) + y(1, 1, 1)
b. (-3, 1, 2) = x(-2, 0, 2) + y(1, 1, 1)
The distance between two points (x1, y1, z1) and (x2, y2, z2) is given by:
d = √((x2 – x1)² + (y2 – y1)² + (z2 – z1)²)
1.3 Scalar product and norm of vector, orthogonal projection, and direction cosines
Let A = (a1, a2, …, an) and B = (b1, b2, …, bn) be two vectors. The scalar product of A and
B is the number A.B defined by
A.B = a1b1 + a2b2 + … + anbn
Note: The scalar product is also called a dot product or inner product.
Example 1.3.1
The scalar product satisfies many of the laws that hold for real numbers. The basic ones
are:
a) A.B = B.A
b) (cA).B = c(A.B) = A.(cB)
c) (A + B).C = A.C + B.C
Example 1.3.2: Given A = (3, 2,-1) and B = (2,0, 3), and C = (1,-1,1), then
a. A.B = B.A = 3
b. 2(A.B) = 6
c. (A+B).C = A.C + B.C = 5
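The scalar product and the laws above can be checked numerically; dot is our own helper name for the sketch:

```python
# Sketch of the scalar (dot) product A.B = a1*b1 + ... + an*bn.
def dot(A, B):
    return sum(a * b for a, b in zip(A, B))

# The vectors of Example 1.3.2
A, B, C = (3, 2, -1), (2, 0, 3), (1, -1, 1)

print(dot(A, B))                          # 3, and dot(B, A) gives the same
AplusB = tuple(a + b for a, b in zip(A, B))
print(dot(AplusB, C))                     # 5
print(dot(A, C) + dot(B, C))              # 5, confirming (A+B).C = A.C + B.C
```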
Activity 1.3.2: Find A.A, B.B, and C.C. Are they all positive?
The length, or norm, or magnitude of a vector A = (a1, a2, …, an), denoted by ||A||, can be
expressed in terms of the scalar product. By definition,
A.A = a1² + a2² + … + an²
and
||A|| = √(A.A)
Any non-zero vector can be fully represented by providing its magnitude and a unit
vector along its direction. Let u be a unit vector in the direction of A. Then
A = ||A|| u, that is, u = A/||A||.
Example 1.3.5: Given a vector A = (1, 1, 1). Find a unit vector in the direction of A.
Solution: ||A|| = √(1² + 1² + 1²) = √3, then the unit vector in the direction of A is:
u = A/||A|| = (1/√3, 1/√3, 1/√3)
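The norm and unit-vector computations above can be sketched as follows; norm and unit are our own helper names:

```python
import math

# ||A|| = sqrt(A.A) and the unit vector u = A/||A||.
def norm(A):
    return math.sqrt(sum(a * a for a in A))

def unit(A):
    n = norm(A)
    return tuple(a / n for a in A)

A = (1, 1, 1)
print(norm(A))   # sqrt(3), approximately 1.732
print(unit(A))   # (1/sqrt(3), 1/sqrt(3), 1/sqrt(3))
```

A quick sanity check: the unit vector of any non-zero A has norm 1.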
Activity 1.3.3: 1. Given three vectors A = (1, 1, 1), B = (-1, 2, 3) and C = (0, 3, 4), find
the unit vector in the direction of A+B – C.
2. The vectors i = (1, 0, 0), j = (0, 1, 0) and k = (0, 0, 1) are unit vectors in the
direction of positive x, y and z axis, respectively. Find a unit vector in the
direction of A = (-1, 2, 3).
Let A, B be two vectors in ℝⁿ. We define the distance between A and B to be
d(A, B) = ||A – B||.
Let θ be the angle between two non-zero vectors A and B. Then, by the law of cosines,
||A – B||² = ||A||² + ||B||² – 2||A|| ||B|| cos θ.
On the other hand,
||A – B||² = (A – B).(A – B) = ||A||² – 2A.B + ||B||² (Why?)
After cancellation, we get
A.B = ||A|| ||B|| cos θ, that is, cos θ = (A.B) / (||A|| ||B||).
Activity 1.3.4: Given two non-zero vectors A and B, how do you find the angle between
them? Take, for example, A = (2, -1, 2), B = (1, -1, 0) and find the
angle between them.
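The recipe of Activity 1.3.4 can be sketched directly from cos θ = (A.B)/(||A|| ||B||); the function name angle is our own:

```python
import math

# Angle between two non-zero vectors via cos(theta) = A.B / (||A|| ||B||).
def angle(A, B):
    dot = sum(a * b for a, b in zip(A, B))
    na = math.sqrt(sum(a * a for a in A))
    nb = math.sqrt(sum(b * b for b in B))
    return math.acos(dot / (na * nb))

# The vectors of Activity 1.3.4: A.B = 3, ||A|| = 3, ||B|| = sqrt(2)
theta = angle((2, -1, 2), (1, -1, 0))
print(math.degrees(theta))   # 45.0
```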
Two non-zero vectors are said to be orthogonal (perpendicular) if the angle between
them is π/2.
Note: Two non-zero vectors A and B are said to be orthogonal (Perpendicular) if A.B= 0.
Theorem 1.3.2: Let A and B be vectors. Then
a) |A.B| ≤ ||A|| ||B|| (Schwarz inequality)
b) ||A + B|| ≤ ||A|| + ||B|| (Triangle inequality)
Proof a) If one of A or B is a zero vector, then both sides of the inequality are equal
to 0. Suppose both A and B are non-zero.
From cos θ = (A.B)/(||A|| ||B||) and |cos θ| ≤ 1, we get |A.B| ≤ ||A|| ||B||.
Remark: The inequalities of Theorem 1.3.2 hold true also for any vectors A and B in ℝⁿ.
The orthogonal projection of B on the line containing A is the vector
P = ((A.B)/(A.A)) A, or P = ((A.B)/||A||²) A (why?)
That is,
1. P = cA for some scalar c, so P lies on the line containing A.
2. B – P is orthogonal (perpendicular) to A.
Activity 1.3.4: One application of projections of vector arises in the definition of the
work done by a force on a moving body. Find another application.
cos α = a1/||A||, cos β = a2/||A||, cos γ = a3/||A||,
where the direction angles α, β and γ are the angles that the vector A = (a1, a2, a3) makes with the
positive x, y, and z-axes, respectively.
Remark: The vector (cos α, cos β, cos γ) is a unit vector in the direction of A.
Example 1.3.7: Let u = (1, -2, 3). Find the direction cosines of u.
Solution: Since ||u|| = √(1 + 4 + 9) = √14, we get cos α = 1/√14, cos β = –2/√14 and cos γ = 3/√14.
Activity 1.3.5: Is cos²α + cos²β + cos²γ = 1?
The second type of product of two vectors is the cross product. Unlike the dot product,
the cross product of two vectors is a vector.
Definition 1.4.1 The cross product (or vector product) A x B of two vectors
A = (a1, a2, a3) and B = (b1, b2, b3) is defined by
A x B = (a2 b3 – a3 b2, a3 b1 – a1 b3, a1 b2 - a2 b1)
Note that the cross product is defined only for vectors in ℝ³.
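Definition 1.4.1 translates directly into code; the sketch below (with our own helper name cross) also checks the orthogonality property proved later in this section:

```python
# Cross product of Definition 1.4.1, for 3-tuples only.
def cross(A, B):
    a1, a2, a3 = A
    b1, b2, b3 = B
    return (a2 * b3 - a3 * b2,
            a3 * b1 - a1 * b3,
            a1 * b2 - a2 * b1)

A, B = (1, -1, 1), (2, 0, -1)
AxB = cross(A, B)
print(AxB)   # (1, 3, 2)

# A x B is orthogonal to both A and B: both dot products are 0.
print(sum(x * y for x, y in zip(AxB, A)))   # 0
print(sum(x * y for x, y in zip(AxB, B)))   # 0
```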
Proof: The following is the proof for 1, 2 and 8. The rest are left as an exercise.
1) From the definition of cross product,
A x B = (a2 b3 – a3 b2, a3 b1 – a1 b3, a1 b2 – a2 b1)
For B x A, interchange A and B to obtain
B x A = (b2 a3 – b3 a2, b3 a1 - b1 a3, b1 a2 - b2 a1)
= – (a2 b3 – a3 b2, a3 b1 – a1 b3, a1 b2 – a2 b1)
= - (A x B)
2) A x A = (a2 a3 – a3 a2, a3 a1 - a1 a3, a1 a2 - a2 a1)
= (0, 0, 0)
8) Setting C = A in 5) yields
A . (A x B) = B . (A x A)
= B.0 (why?)
=0
By setting C = B in 5),
B .(A x B) = A . (B x B)
= A.0=0
This shows that for non zero vectors A and B, the cross product A x B is orthogonal to
both A and B.
Activity 1.4.2: Are the usual commutative and associative laws valid? i.e. for any
vectors A, B and C in ℝ³, is A x B = B x A?
Is A x (B x C) = (A x B) x C?
From 4) of theorem 1.4.1, we derive an important formula for the norm of the cross
product:
||A x B||² = ||A||² ||B||² – (A.B)², that is, ||A x B|| = ||A|| ||B|| sin θ,
where θ is the angle between A and B.
Activity 1.4.3:
- For the unit vectors i, j and k, find i x j, j x k and k x i. What is i x i?
- If A and B are parallel, what is A x B?
Exercise 1.4.1:
1. Find a unit vector perpendicular to both A = (2,-3,1) and B = (1,2,-4).
2. Prove that (A – B)x(A + B) = 2(AxB).
Let u and v be vectors and consider the parallelogram that the two vectors make. Then
the area of the parallelogram is ||u x v||.
The direction of u x v is at a right angle to the parallelogram, following the right-hand rule.
To find the volume of the parallelepiped spanned by three vectors u, v, and w, we find
the triple product:
|u . (v x w)| = Volume
Example 1.5.1: 1. Find the area of the parallelogram which is formed by the two vectors
u= (1, 3, 2) and v= (-2, 1, 3).
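The area in Example 1.5.1 is ||u x v||, which can be checked numerically; cross is our own helper name:

```python
import math

# Area of the parallelogram formed by u and v, as ||u x v||.
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

u, v = (1, 3, 2), (-2, 1, 3)
w = cross(u, v)
print(w)                                   # (7, -7, 7)
area = math.sqrt(sum(c * c for c in w))
print(area)                                # 7*sqrt(3), approximately 12.124
```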
Exercise: Find the area of the triangle having vertices at u = (3, -2, -1),
Note that if (x, y, z) is on the line through A = (a1, a2, a3) in the direction of B = (b1, b2, b3), then
(x, y, z) = (a1, a2, a3) + t(b1, b2, b3) for some real number t.
Example 1.6.1 Find equation of a line through P1 = (0, 1, 2) and P2 = (-1, 1, 1).
Solution: We need a point A on the line and a vector B parallel to the vector
formed by two point of the line.
Take A = P1 and B = P2 – P1. Then
A + t B = (0, 1, 2) + t (-1, 0, -1)
(x, y, z) = (0, 1, 2) + t (-1, 0, -1) is equation of the line. By giving
distinct values for t we will obtain distinct points on the line. Find some
of the points.
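Generating points on the line of Example 1.6.1 is a one-line computation; the helper name point is our own:

```python
# The line (x, y, z) = (0, 1, 2) + t(-1, 0, -1) of Example 1.6.1.
A = (0, 1, 2)    # point on the line
B = (-1, 0, -1)  # direction vector P2 - P1

def point(t):
    return tuple(a + t * b for a, b in zip(A, B))

for t in (0, 1, -1, 2):
    print(t, point(t))
# t = 0 gives P1 = (0, 1, 2) and t = 1 gives P2 = (-1, 1, 1),
# so the line does pass through both given points.
```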
Note: The equation of a line passing through points A and B is given by:
P = A + t(B – A) or P = (1 – t)A + tB.
Exercise 1.6.1: Let the line L1 pass through the points (5,1,7) and (6,0,8), and the line
L2 pass through the points (3,1,3) and . Find the value of
Activity 1.6.3: 1) Find the parametric equation of a line that contains (2, -1, 1) and is
be parallel if B1 and B2 are parallel. That is the vectors P1 – Q1 and P2 – Q2 are parallel for
the minimum of the lengths of all vectors with initial point the origin and terminal point
According to our definition, if P is any point on the plane through P0 and perpendicular to
N, then (P – P0).N = 0, or P.N = P0.N.
Example 1.6.2: Find an equation of the plane that contains point (-2, 4, 5) and that is
normal to (7, 0, -6).
Solution: The equation of the plane is given by 7(x+2)+0(y-4)-6(z-5) = 0 or 7x- 6z = -44
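The point-normal form N.(P – P0) = 0 of Example 1.6.2 is easy to verify by direct computation; on_plane is our own helper name:

```python
# The plane of Example 1.6.2: normal N = (7, 0, -6) through P0 = (-2, 4, 5),
# i.e. 7x - 6z = -44.
N = (7, 0, -6)
P0 = (-2, 4, 5)

def on_plane(P):
    # P is on the plane exactly when N.(P - P0) = 0.
    return sum(n * (p - p0) for n, p, p0 in zip(N, P, P0)) == 0

print(on_plane((-2, 4, 5)))   # True: P0 itself lies on the plane
print(on_plane((4, 0, 12)))   # True: 7*4 - 6*12 = -44
print(on_plane((0, 0, 0)))    # False
```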
Activity 1.6.6:
1. A plane passes through (-1, 2, 3) and is perpendicular to the y-axis. What is
the equation?
Exercise 1.6.2: Find the equation of the plane passing through the three points
P1 = (2,1,1), P2 = (3,-1,1), P3 = (4,1,-1).
Let Q be a point outside a plane through P normal to N. We define the distance from point
Q to the plane as follows. Let Po be the point of intersection of the line through Q, in the
direction of N, and the plane. The distance d from Q to the plane is the distance between
Q and Po. However, Q – Po is parallel to N.
Hence d = ||Q – Po|| = |(Q – P).N| / ||N||.
c) (-2, 2, 2) f)
6. Let ,
a) Show that each u1, u2, u3 is orthogonal to the other two and that each is
a unit vector
b) Find the projection of E1 on each of u1, u2, u3
c) Find the projection of A = (a1, a2, a3) on u1.
7. In the following cases compute (A x B).C
a) A = (1, 2, 0) B = (-3, 1, 0), C = (4, 9, -3)
b) A = (-3, 1, -2) B = (2, 0, 4), C = (1, 1, 1)
8. Prove that two non-zero vectors A and B are perpendicular if and only if
||A + tB|| ≥ ||A|| for every number t.
9. If A + B + C = O, show that A x B = B x C = C x A.
10. Find a formula for the area of a parallelogram whose vertices, in order, are
P, Q, R & S.
16. Find the intersection point of the lines {p:p = (-2, 4, 6) + t(1, 2, 3)} and
{p:p = (-2, 1, 6) + t(3, 2, 1)}. Find equation of the plane containing the two
lines.
17. Find all points of intersection of the line {p:p = t(1, -3, 6)} and the plane
{p:x + 3y + z = 2}
18. Prove that if B.N = 0 and A is on the plane {p:(p – po).N = 0} then the entire
line {p:p = A + tB} lies in the plane.
19. Find a line through the point po = (-5, 2, 1) and normal to the plane
{(x, y, z): x = y}
20. Find a line through (xo, yo, zo) and normal to the plane {(x, y, z): ax + by + cz = d}
by:
22. Let be the line x = 1 + 2t, y = -1 + 3t, z = -5 + 7t. Find the two
UNIT II
VECTOR SPACES
Definition 2.1.1: Let K be a set of numbers. We shall say that K is a field if it satisfies the
following conditions:
a) If x, y are elements of K then x + y and xy are also elements of K.
b) If x is an element of K, then –x is also an element of K.
Furthermore, if x ≠ 0, then x⁻¹ is also an element of K.
c) 0 and 1 are elements of K.
Example 2.1.1: The set of all real numbers ℝ and the set of all complex numbers ℂ are
fields.
Activity 2.1.1: Are ℤ (the set of all integers) and ℚ (the set of all rational numbers)
fields?
Remark: The essential thing about a field is that its elements can be added and
multiplied and the results are also elements of the field. Moreover, every
element can be divided by a non-zero element.
Definition 2.1.2: A vector space V over a field K is a set of objects which can be added
and can be multiplied by elements of K. It satisfies the following
properties.
V1) For any u, v ∈ V and a ∈ K, we have u + v ∈ V and au ∈ V.
V2) For any u, v, w ∈ V,
(u + v) + w = u + (v + w)
V3) There is an element of V, denoted by O (called the zero element), such that
O + u = u + O = u for all elements u of V.
V4) For each u ∈ V, there exists –u ∈ V such that
u + (–u) = O
V5) For any u, v ∈ V and a ∈ K, a(u + v) = au + av.
V6) For any u ∈ V and a, b ∈ K, (ab)u = a(bu).
V7) For any u ∈ V and a, b ∈ K, (a + b)u = au + bu.
V8) For all u ∈ V, 1.u = u.
Activity 2.1.2:What is the name given for each of the above properties?
Other properties of a vector space can be deduced from the above eight properties. For
example, the property 0u = O can be proved as :
0u + u = 0u + 1.u (by V8)
= (0 + 1) u (by V7)
= 1. u
=u
By adding –u to both sides of 0u + u = u, we get 0u = O.
The algebraic properties of elements of an arbitrary vector space are very similar to those
of elements of ℝ², ℝ³, or ℝⁿ. Consequently, we call the elements of a vector space vectors.
Definition 2.3.1: Suppose V is a vector space over K and W is a subset of V. If, under the
addition and scalar multiplication that are defined on V, W is also a vector space, then we
call W a subspace of V.
Using this definition and the axioms of a vector space, we can easily prove the following:
A subset W of a vector space V is a subspace of V if:
i) W is closed under addition. That is, if u, w ∈ W, then u + w ∈ W.
ii) W is closed under scalar multiplication. That is, if u ∈ W and a ∈ K, then au ∈ W.
iii) W contains the additive identity O.
Then, as W ⊆ V, properties V1 – V8 are satisfied for the elements of W.
Hence W itself is a vector space over K. We call W a subspace of V.
Let H = {(x, y) ∈ ℝ²: x + 4y = 0}. For a ∈ ℝ and u = (x, y) ∈ H, we have au = (ax, ay) and
ax + 4(ay) = a(x + 4y) = a.0 = 0.
Hence au ∈ H. Now, the element O of ℝ² is (0, 0), and 0 + 4(0) = 0. Hence O = (0, 0) is in H.
Therefore H is a subspace of ℝ².
Activity 2.3.1: Take any vector A in ℝ³. Let W be the set of all vectors B in ℝ³ with
B.A = 0. Discuss whether W is a subspace of ℝ³ or not.
Definition 2.3.2: Let v1, v2, …, vn be elements of a vector space V over K. Let x1,
x2, …, xn be elements of K. Then an expression of the form x1v1 + x2v2 + … + xnvn is
called a linear combination of v1, v2, …, vn.
Example 2.3.2: The sum 2(3, 1) + 4(-1, 2) +(1, 0) is a linear combination of (3, 1), (-1, 2)
and (1, 0). As this sum is equal to (3, 10), we say that (3, 10) is a linear combination of
the three ordered pairs.
Activity 2.3.2:
i) Take two elements v1 and v2 of ℝ³. Let W be the set of all linear
combinations of v1 and v2. Show that W is a subspace of ℝ³. W is called the
subspace generated by v1 and v2.
ii) Generalize i) by showing that the set W generated by elements
v1, v2, …, vn of a vector space V is a subspace.
Definition 2.4.1: Let V be a vector space over k. Elements v 1, v2, …, vn of V are said to
be linearly independent if and only if the following condition is satisfied:
whenever a1, a2, …, an are in k such that a1v1 + a2v2 + … + anvn = 0, then ai = 0 for all
i = 1, 2, …, n.
If the above condition does not hold, the vectors are called linearly dependent. In other
words, v1, v2, …, vn are linearly dependent if and only if there are numbers a1, a2, …, an,
not all zero, such that a1v1 + a2v2 + … + anvn = 0.
Example 2.4.1: Consider v1 = (1, -1,1) , v2 = (2, 0, -1) and v3 = (2, -2, 2)
i) a1v1 + a2v2 = a1 (1, -1, 1) + a2 (2, 0, -1) = (a1 + 2a2, -a1, a1 – a2)
a1v1 + a2v2 = 0 ⇒ a1 + 2a2 = 0, -a1 = 0 and a1 – a2 = 0
⇒ a1 = 0 and a2 = 0
Hence v1 & v2 are linearly independent.
ii) a1v1 + a2v3 = a1 (1, -1, 1) + a2 (2, -2, 2)
= (a1 + 2a2, -a1 – 2a2 , a1 +2 a2)
a1v1 + a2v3 = 0 ⇒ a1 + 2a2 = 0, -a1 – 2a2 = 0 and a1 + 2a2 = 0
⇒ a1 = -2a2
Take a1 = 2 and a2 = -1, we get 2(1, -1, 1) + (-1) (2, -2, 2) = 0.
As the constants are not all equal to zero, v1 and v3 are linearly dependent.
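The checks of Example 2.4.1 can also be done mechanically: vectors are linearly independent exactly when the matrix having them as rows has rank equal to the number of vectors. A sketch using NumPy:

```python
import numpy as np

# The vectors of Example 2.4.1.
v1, v2, v3 = (1, -1, 1), (2, 0, -1), (2, -2, 2)

# rank 2 for two vectors -> independent
print(np.linalg.matrix_rank(np.array([v1, v2])))   # 2

# rank 1 for two vectors -> dependent (indeed v3 = 2*v1)
print(np.linalg.matrix_rank(np.array([v1, v3])))   # 1
```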
Activity 2.4.1: Show that v1, v2 and v3 are also linearly dependent.
Remark: If vectors are linearly dependent, at least one of them can be written as a linear
combination of the others.
Activity 2.4.2: Show that (1, 0, 0, …, 0), (0, 1, 0, …, 0), …, (0, 0, 0, …, 1) are linearly
independent vectors in ℝⁿ.
Definition 2.5.1: If elements e1, e2, …, en of a vector space V are linearly independent
and generate V, then the set B = {e1, e2, …, en} is called a basis of V.
We shall also say that the elements e1, e2, …, en constitute or form a
basis of V.
Example 2.5.1:
1) Show that e1 = (0, -1) and e2 = (2, 1) form a basis of ℝ².
Solution: we have to show that
i) e1 and e2 are linearly independent
ii) They generate ℝ², i.e. every element (x, y) of ℝ² can be written as a
linear combination of e1 and e2.
i) a1e1 + a2e2 = O ⇒ a1(0, -1) + a2(2, 1) = (0, 0)
⇒ 2a2 = 0 and –a1 + a2 = 0
⇒ a2 = 0 and a1 = 0
Hence e1 and e2 are linearly independent.
ii) (x, y) = a1e1 + a2e2 ⇒ (x, y) = (0, -a1) + (2a2, a2)
⇒ x = 2a2 and y = –a1 + a2
⇒ a2 = x/2 and a1 = a2 – y = x/2 – y … (*)
Therefore, given any (x, y), we can find a1 and a2 given by (*), and (x, y) can be
written as a linear combination of e1 and e2 as
(x, y) = (x/2 – y)e1 + (x/2)e2.
The vectors E1 = (1, 0, 0), E2 = (0, 1, 0), E3 = (0, 0, 1) are linearly independent and every
element (x, y, z) of ℝ³ can be written as
(x, y, z) = x(1, 0, 0) + y(0, 1, 0) + z(0, 0, 1)
= xE1 + yE2 + zE3
Hence {E1, E2, E3} is a basis of ℝ³.
Note that the set of elements E1 = (1, 0, 0, …, 0), E2 = (0, 1, 0, …, 0), …, En = (0, 0, 0, …, 1)
is a basis of ℝⁿ. It is called the standard basis.
Example 2.5.2
1) In 1) of Example 2.5.1, the coordinate vector of (4, 3) with respect to the basis
{(0, -1), (2, 1)} is (-1, 2). But with respect to the standard basis it is (4, 3).
Find the coordinates of (4, 3) in some other basis of ℝ².
2) Consider the set V of all polynomial functions f: ℝ → ℝ which are of degree less
than or equal to 2.
Every element of V has the form f(x) = bx² + cx + d, where b, c, d ∈ ℝ.
V is a vector space over ℝ (show).
Clearly, e1 = x², e2 = x and e3 = 1 are in V, and a1e1 + a2e2 + a3e3 = O
implies a1 = a2 = a3 = 0, so e1, e2, e3 are linearly independent and form a basis of V.
E = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} and B = {(-1, 1, 0), (-2, 0, 2), (1, 1, 1)} are bases of ℝ³ and
each has three elements. Can you find a basis of ℝ³ having two elements? Four elements?
The main result of this section is that any two bases of a vector space have the same
number of elements. To prove this, we use the following theorem.
Theorem 2.5.1: Let V be a vector space over the field K. Let {v 1, v2,…,vn} be a basis of
V. If w1, w2,…,wm are elements of V, where m > n, then w 1, w2, …, wm
are linearly dependent.
Proof (reading assignment)
Theorem 2.5.2: Let V be a vector space and suppose that one basis B has n elements, and
another basis W has m elements. Then m = n.
Remarks: 1. If V = {O}, then V doesn't have a basis, and we shall say that dim V is
zero.
2. The zero vector space or a vector space which has a basis consisting of
a finite number of elements, is called finite dimensional. Other vector
spaces are called infinite dimensional.
Example 2.5.3:
1) ℝ³ over ℝ has dimension 3. In general, ℝⁿ over ℝ has dimension n.
2) ℝ over ℝ has dimension 1. In fact, {1} is a basis of ℝ, because a.1 = a and
any number a has a unique expression a = a.1.
Definition 2.5.3: The set of elements {v1, v2, …,vn}of a vector space V is said to be a
maximal set of linearly independent elements if v1, v2, …,vn are
linearly independent and if given any element w of V, the elements
w,v1, v2, …, vn are linearly dependent.
Example 2.5.4: In ℝ³, {(1, 0, 0), (0, 1, 1), (0, 2, 1)} is a maximal set of linearly
independent elements.
We now give criteria which allow us to tell when elements of a vector space constitute a
basis.
Theorem 2.5.3: Let V be a vector space and {v1, v2, …,vn}be a maximal set of linearly
independent elements of V. Then {v1, v2, …,vn}is a basis of V.
Theorem 2.5.4: Let dim V = n, and let v1, v2, …, vn be linearly independent elements of V.
Then {v1, v2, …, vn} is a basis of V.
Proof: According to Theorem 2.5.1, {v1, v2, …, vn} is a maximal set of linearly
independent elements of V.
Hence it is a basis by Theorem 2.5.3.
Let V be a vector space over the field K. Let U, W be subspaces of V. We define the
sum of U and W to be the subset of V consisting of all sums u + w with u ∈ U and
w ∈ W. We denote this sum by U + W, and it is a subspace of V. Indeed, if u1 + w1 and
u2 + w2 are elements of U + W, then
(u1 + w1) + (u2 + w2) = (u1 + u2) + (w1 + w2) ∈ U + W.
If c ∈ K, then c(u + w) = cu + cw ∈ U + W.
Definition 2.6.1: A vector space V is a direct sum of U and W if for every element v in
V there exist unique elements u ∈ U and w ∈ W such that v = u + w.
Theorem 2.6.1: Let V be a vector space over the field K, and let U, W be subspaces. If
U + W = V, and if U ∩ W = {O}, then V is the direct sum of U and W.
Proof: Exercise
Note: When V is the direct sum of subspaces U, W we write V = U ⊕ W.
Theorem 2.6.2: Let V be a finite dimensional vector space over the field K. Let W be a
subspace. Then there exists a subspace U such that V is the direct sum of W and U.
Proof: Exercise
Theorem 2.6.3: If V is a finite dimensional vector space over the field K, and is the
direct sum of subspaces U, W then
dim V = dim U + dim W
Proof: Exercise
Remark: We can also define V as a direct sum of more than two subspaces. Let W1,
W2, …, Wr be subspaces of V. We shall say that V is their direct sum if every element v
of V can be expressed in a unique way as a sum
v = w1 + w2 + … + wr
with wi in Wi.
Suppose now that U, W are arbitrary vector spaces over the field K (i.e. not necessarily
subspaces of some vector space). We let U × W be the set of all pairs (u, w) whose first
component is an element u of U and whose second component is an element w of W.
We define the addition of such pairs componentwise, namely, if (u1, w1) and (u2, w2)
are in U × W, we define (u1, w1) + (u2, w2) = (u1 + u2, w1 + w2).
Exercise:
1. Let, , , and .
Exercise 2.1
1. Let k be the set of all numbers which can be written in the form a + b√2, where a,
b are rational numbers. Show that k is a field.
2. Show that the following sets form subspaces
a. The set of all (x, y) in ℝ² such that x = y
b. The set of all (x, y) in ℝ² such that x – y = 0
c. The set of all (x, y, z) in ℝ³ such that x + y = 3z
d. The set of all (x, y, z) in ℝ³ such that x = y and z = 2y
3. If U and W are subspaces of a vector space V, show that U ∩ W and U + W are
subspaces.
4. Decide whether the following vectors are linearly independent or not (over ℝ)
a) (, 0) and (0, 1)
b) (-1, 1, 0) and (0, 1, 2)
c) (0, 1, 1), (0, 2, 1), and (1, 5, 3)
5. Find the coordinates of X with respect to the vectors A, B and C
a. X = (1, 0, 0), A = (1, 1, 1), B = (-1, 1, 0), C = (1, 0, -1)
b. X = (1, 1, 1) , A = (0, 1, -1), B = (1, 1, 0), C = (1, 0, 2)
6. Prove: The vectors (a, b) and (c, d) in the plane are linearly dependent if and only
if ad – bc = 0
7. Find a basis and the dimension of the subspace of ℝ⁴ generated by
{(1, -4, -2, 1), (1, -3, -1, 2), (3, -8, -2, 7)}.
8. Let W be the space generated by the polynomials x3 + 3x2 – x + 4, and
2x3 + x2 – 7x – 7. Find a basis and the dimension of W.
9. Let V = {(a, b, c, d) ∈ ℝ⁴: b – 2c + d = 0}
W = {(a, b, c, d) ∈ ℝ⁴: a = d, b = 2c}
Find a basis and dimension of
a) V b) W c) V ∩ W
10. What is the dimension of the space of 2 x 2 matrices? Give a basis for this
space. Answer the same question for the space of n x m matrices.
11. Find the dimensions of the following
a) The space of n x n matrices all of whose elements are 0 except possibly
the diagonal elements.
b) The space of n x n upper triangular matrices
c) The space of n x n symmetric matrices
d) The space of n x n diagonal matrices
12. Let V be a subspace of ℝ³. What are the possible dimensions for V? Show that if
V ≠ ℝ³, then either V = {O}, or V is a straight line passing through the origin, or V
is a plane passing through the origin.
UNIT III
MATRICES
The concept of matrices has had its origin in various types of linear problems, the most
important of which concerns the nature of solutions of any given system of linear
equations. Matrices are also useful in organizing and manipulating large amounts of data.
Today, the subject of matrices is one of the most important and powerful tools in
Mathematics, which has found applications in a very large number of disciplines such as
Engineering, Business, Economics, Statistics, etc.
The order of a matrix is the number of rows and columns it has. When we say a matrix
is a 3 by 4 matrix, we are saying that it has 3 rows and 4 columns. The rows are always
mentioned first and the columns second. This means that a 3 × 4 matrix does not have the
same order as a 4 × 3 matrix. It must be noted that even though an m × n matrix contains
mn elements, the entire matrix should be considered as a single entity. In keeping with
this point of view, matrices are denoted by single capital letters such as A, B, C and so
on.
Remark: By the size of a matrix or the dimension of a matrix we mean the order of the
matrix.
Solution: Since A has 2 rows and 3 columns, we say A has order 2 × 3, where the number
of rows is specified first. The element 6 is in the position a23 (read a two three) because it
is in row 2 and column 3.
Solution: , the element in the second row and third column, is 1 and , the element
in the third row and second column, is 7. What is the size of this matrix?
Activity 3.1.1: 1. Suppose A is a 5x7 matrix, then
a. A has 7 rows. (True/False)
b. aij is an element of A for i = 6 and j = 4. (True/False)
c. For what values of i and j is aij an element of A?
2. Suppose and
An m × n matrix A is often denoted by the symbol A = [aij]m×n, or more simply [aij]. This notation
merely indicates what type of symbols we are using to denote the general entry.
Solution: Since the number of rows is specified first, this matrix has four rows and
five columns.
Definition 3.1.2: Two matrices A and B are said to be equal, written A = B, if they are of
the same order and if all corresponding entries are equal.
Activity 3.1.3: Find the values of x, y, z and w which satisfy the matrix equation
a.
b.
3.2. Types of matrices: Square, identity, scalar, diagonal, triangular, symmetric, and
skew symmetric matrices
Certain types of matrices, which play important roles in matrix theory, are now
considered.
Row Matrix: A matrix that has exactly one row is called a row matrix. For example, a
matrix of the form [a1 a2 … an] is a row matrix of order 1 × n.
Column Matrix: A matrix consisting of a single column is called a column matrix.
Zero or Null Matrix: A matrix whose entries are all 0 is called a zero or null matrix. It
is denoted by 0.
Square Matrix: An m × n matrix is said to be a square matrix of order n if m = n. That
is, if it has the same number of columns as rows.
Note: The sum of the entries on the main diagonal of a square matrix A of order n is
called the trace of A.
Triangular Matrix: A square matrix is said to be an upper (lower) triangular matrix if all
entries below (above) the main diagonal are zeros.
Diagonal Matrix: A square matrix is said to be diagonal if each of the entries not falling
on the main diagonal is zero. Thus a square matrix A = [aij] is diagonal if aij = 0 for i ≠ j.
Activity 3.2.2: What about aij for i = j?
Scalar matrix: A diagonal matrix all of whose diagonal elements are equal is called
a scalar matrix.
Identity Matrix or Unit Matrix: A square matrix is said to be identity matrix or unit
matrix if all its main diagonal entries are 1’s and all other entries are 0’s. In other words,
a diagonal matrix all of whose main diagonal elements are equal to 1 is called an identity or
unit matrix. An identity matrix of order n is denoted by In or more simply by I.
Addition of matrices: Let A and B be two matrices of the same order. Then the addition
of A and B, denoted by A + B, is the matrix obtained by adding
corresponding entries of A and B. Thus, if
A = [aij] and B = [bij], then A + B = [aij + bij].
Remark: Notice that we can add two matrices if and only if they are of the same order. If
they are, we say they are conformable for addition. Also, the order of the sum of two
matrices is the same as that of the two original matrices.
A= B= C=
Find, if possible. a) A + B b) B + C
If A is any matrix, the negative of A, denoted by –A, is the matrix obtained by replacing
each entry in A by its negative. For example, if
then
Note: The zero matrix plays the same role in matrix addition as the number zero does in
addition of numbers.
Subtraction of Matrices: Let A and B be two matrices of the same order. Then by
A – B, we mean A + (-B). In other words, to find A – B we subtract each entry of
B from the corresponding entry of A.
Then
Solution: and
Solving gives x = 2, y = -4
Multiplication of Matrices
While the operations of matrix addition and scalar multiplication are fairly
straightforward, the product AB of matrices A and B can be defined under the condition
that the number of columns of A must be equal to the number of rows of B. If the number
of columns in the matrix A equals the number of rows in the matrix B, we say that the
matrices are conformable for the product AB.
Because of wide use of matrix multiplication in application problems, it is important that
we learn it well. Therefore, we will try to learn the process in a step by step manner. We
first begin by finding a product of a row matrix and a column matrix.
If A = [2 3 4] is a row matrix and B is the column matrix with entries a, b, c, then
AB = [2a + 3b + 4c]
Note that AB is a 1 × 1 matrix, and its only entry is 2a + 3b + 4c.
Solution: AB = =
Note: In order for a product of a row matrix and a column matrix to exist, the number of
entries in the row matrix must be the same as the number of entries in the column matrix.
Example 3.3.5: Here is an application: Suppose you sell 3 T-shirts at $10 each, 4 hats at
$15 each, and 1 pair of shorts at $20. Then your total revenue is
3(10) + 4(15) + 1(20) = $110, the product of the row matrix of
quantities and the column matrix of prices.
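The arithmetic in this application is exactly a row-times-column product; a short sketch (the helper name `row_times_col` is illustrative):

```python
# Row matrix (quantities) times column matrix (prices): a 1 x 1 "matrix",
# returned here as a plain number.
def row_times_col(row, col):
    assert len(row) == len(col), "entry counts must match"
    return sum(r * c for r, c in zip(row, col))

quantities = [3, 4, 1]   # T-shirts, hats, shorts
prices = [10, 15, 20]    # dollars each
print(row_times_col(quantities, prices))  # 110
```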
Solution: We already know how to multiply a row matrix by a column matrix. To find
the product AB, in this example, we will multiply the row matrix A by
both the first and the second columns of matrix B, resulting in a 1 × 2 matrix.
R= S= T=
Find 2RS – 3ST.
Activity 3.3.4: 1. If a matrix A is 3 × 5 and the product AB is 3 × 7, then what is the order
of B?
2. How many rows does X have if XY is a 2 × 6 matrix?
Remark: The definition refers to the product AB, in that order: A is the left factor, called
the pre-factor, and B is the right factor, called the post-factor.
Solution: Since the number of columns of A is equal to the number of rows of B, the
product AB = C is defined. Since A is 2 × 2 and B is 2 × 4, the product AB
will be 2 × 4.
The entry c11 is obtained by summing the products of each entry in row 1
of A by the corresponding entry in column 1 of B, that is,
C11 = (1)(-2) + (-4)(2) = -10. Similarly, for C21 we use the entries in
row 2 of A and those in column 1 of B, that is, C21 = (5)(-2) + (3)(2) = -4.
Also, C12 = (1)(4) + (-4)(7) = -24
C13 = (1)(1) + (-4)(3) = -11
C14 = (1)(6) + (-4)(8) = -26
C22 = (5)(4) + (3)(7) = 41
C23 = (5)(1) + (3)(3) = 14
C24 = (5)(6) + (3)(8) = 54
Thus
Observe that the product BA is not defined since the number of columns of B is not equal
to the number of rows of A. This shows that matrix multiplication is not commutative.
That is, for any two matrices A and B, it is usually the case that AB ≠ BA (even if both
products are defined).
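The row-by-column rule can be sketched in Python. The entries of A and B below are inferred from the worked products in the solution above (C21 = (5)(-2) + (3)(2), and so on), so treat them as a reconstruction:

```python
def mat_mul(A, B):
    # AB is defined only when (columns of A) == (rows of B).
    if len(A[0]) != len(B):
        raise ValueError("not conformable for the product AB")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, -4], [5, 3]]                # 2 x 2
B = [[-2, 4, 1, 6], [2, 7, 3, 8]]    # 2 x 4
print(mat_mul(A, B))  # [[-10, -24, -11, -26], [-4, 41, 14, 54]]
# mat_mul(B, A) would raise ValueError: the product BA is not defined.
```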
, . Thus, .
i) iii)
ii) iv)
2) Let , ,
then . But .
In this chapter, we will be using matrices to solve linear systems. Later, we will be asked
to express linear systems as the matrix equation AX = B, where A, X, and B are
matrices. The matrix A is called the coefficient matrix.
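The matrix form AX = B can be illustrated with a small sketch; the 2 × 2 system below is our own illustrative example, not one from the text:

```python
def mat_vec(A, x):
    # Compute the product AX for a coefficient matrix A and column vector X.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2], [3, 4]]   # coefficient matrix of: x + 2y = 5, 3x + 4y = 11
X = [1, 2]             # candidate solution (x, y)
B = [5, 11]            # right-hand sides
print(mat_vec(A, X) == B)  # True: X satisfies AX = B
```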
Example 3.3.10: Verify that the system of two linear equations with two unknowns:
A = X = and B =
AX = =
If AX = B, then
=
If two matrices are equal, then their corresponding entries are equal.
Therefore, it follows that
If A, B and C are any matrices, and if I is an identity matrix, then the following hold,
whenever the dimensions of the matrices are such that the products are defined.
Remark: For real numbers, a multiplied by itself n times can be written as aⁿ.
Similarly, a square matrix A multiplied by itself n times can be written as
Aⁿ. Therefore, A² means AA, A³ means AAA and so on.
Exercise:
1. If and
2. Let
of order 3?
Transpose of a matrix
Definition 3.3.1: Let A be an m × n matrix. The transpose of A, denoted by Aᵗ, is
the n × m matrix obtained from A by interchanging the rows and columns of A. Thus the
first row of A is the first column of Aᵗ, the second row of A is the second column of Aᵗ,
and so on.
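The definition translates directly into code (a sketch; the helper name `transpose` is ours):

```python
def transpose(A):
    # Row i of A becomes column i of the transpose.
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2, 3], [4, 5, 6]]            # a 2 x 3 matrix
print(transpose(A))                    # [[1, 4], [2, 5], [3, 6]], a 3 x 2 matrix
print(transpose(transpose(A)) == A)    # True: transposing twice returns A
```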
So A is skew-symmetric.
Exercise
1. a) Form a 4 by 5 matrix, B, such that bij = i*j, where * represents
multiplication.
b) What is BT? c) Is B symmetric? Why or why not?
i) ii) iii)
3. Let . Is it symmetric?
We say that two matrices are row equivalent if one is obtained from the other by a finite
sequence of elementary row operations.
It is important to note that row operations are reversible. If two rows are interchanged,
they can be returned to their original positions by another interchange. If a row is scaled
by a nonzero constant c, then multiplying the new row by 1/c produces the original row.
Finally, consider a replacement operation involving two rows, say rows i and j, and
suppose c times row i is added to row j to produce a new row j. To “reverse” this
operation, add – c times row i to the new row j and obtain the original row j.
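The reversibility argument above can be checked concretely (helper names are illustrative):

```python
def scale_row(A, i, c):
    # Row operation R_i -> c * R_i (c must be nonzero to be reversible).
    return [[c * x for x in row] if k == i else row[:] for k, row in enumerate(A)]

def add_multiple(A, i, j, c):
    # Row operation R_j -> R_j + c * R_i.
    return [[x + c * y for x, y in zip(row, A[i])] if k == j else row[:]
            for k, row in enumerate(A)]

A = [[1, 2], [3, 4]]
B = add_multiple(A, 0, 1, 2)               # R2 -> R2 + 2 R1
print(add_multiple(B, 0, 1, -2) == A)      # True: reversed by R2 -> R2 - 2 R1
print(scale_row(scale_row(A, 1, 2), 1, 1 / 2) == A)  # True: reversed by 1/c
```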
Example 3.4.1: Find the elementary row operation that transforms the first matrix in to
the second, and then find the reverse row operation that transforms the second matrix in
to the first.
Solution: R2 → ½R2
Reverse operation: R2 → 2R2
Activity 3.4.1: Find the elementary row operation that transforms the first matrix in to
the second, and then find the reverse row operation that transforms the second matrix in
to the first.
a) b)
In the definition that follows, a nonzero row (or column) in a matrix means a row (or
column) that contains at least one nonzero entry; a leading entry of a row refers to the
leftmost nonzero entry (in a nonzero row).
Definition 3.5.1: A matrix is in echelon form (or row echelon form) if it has the
Following three properties:
1) All nonzero rows are above any rows of all zeros.
2) Each leading entry of a row is in a column to the right of the leading entry
of the row above it.
3) All entries in a column below a leading entry are zero.
If a matrix in echelon form satisfies the following additional conditions, then it is in
reduced echelon form (or row reduced echelon form):
4) The leading entry in each nonzero row is 1.
5) Each leading 1 is the only nonzero entry in its column.
Example 3.5.1: The following matrices are in row echelon form, in fact the second
matrix is in row reduced echelon form
Definition 3.5.2: (i) A matrix which is in row echelon form is called an echelon matrix.
(ii) A matrix which is in row reduced echelon form is called a reduced
echelon matrix.
Note: 1) Each matrix is row equivalent to one and only one row reduced echelon matrix.
But a matrix can be row equivalent to more than one echelon matrix.
2) If matrix A is row equivalent to an echelon matrix U, we call U an echelon
form of A. If U is in reduced echelon form, we call U the reduced echelon
form of A.
Activity 3.5.1: Determine which of the following matrices are in row reduced echelon
form and which others are in row echelon form (but not in reduced echelon form)
a) b) c)
d) e)
Example 3.6.1: Find the rank of each of the matrices given in the above activity.
Solution: a) has rank 2; b) has rank 1; c, d, and e have rank 3.
Activity 3.6.1: Find the row reduced echelon form of each of the following matrices and
determine the rank.
a) b)
c) d)
In this section we will present certain systematic methods for solving system of linear
equations.
Definition 3.7.1: A linear equation in the variables x1, x2, . . ., xn over the real field ℝ
is an equation that can be written in the form
a1x1 + a2x2 + … + anxn = b (1)
where b and the coefficients a1, a2, . . ., an are given real numbers.
(2)
and
(3)
augmented matrix.
Are the coefficient matrix and the augmented matrix of a homogeneous linear system
equal? Why?
A solution of a linear system in n unknowns is an n-tuple (s1, s2, . . ., sn) of
real numbers that makes each of the equations in the system a true statement when si is
substituted for xi, i = 1, 2, . . ., n. The set of all possible solutions is called the solution set
of the linear system. We say that two linear systems are equivalent if they have the same
solution set.
Activity 3.7.1: 1) Give the coefficient matrix and the augmented matrix of the linear
system
The activity given above illustrates the following general fact about linear systems.
A system of linear equations has either
1. no solution, or
2. exactly one solution, or
3. infinitely many solutions.
We say that a linear system is consistent if it has either one solution or infinitely many
solutions; a system is inconsistent if it has no solution.
Activity 3.7.2:
1. The homogeneous linear system AX = O is consistent for any matrix A.
Explain, why?
2. Consider a linear system of two equations in two unknowns, and give a geometric
interpretation if the system has
i) no solution ii) exactly one solution iii) infinitely many solutions
Do the same for a linear system of three equations in three unknowns.
Activity 3.7.3: Why do these three operations not change the solution set of the system?
We illustrate this technique by using the following example.
Solution: We perform the elimination procedure with and without matrix notation of the
system. For each step we put the resulting system and its augmented matrix side by side
for comparison:
We keep x1 in the first equation and eliminate it from the other equations. For this replace
the third equation by the sum of itself and two times equation 1.
R3 R3 + 2R1
R3 R3 – 3R2
Now we eliminate the –2x3 term from equation 2. For this we use x3 in equation 3.
R2 R2 +2R3
Again by using the x3 term in equation 3, we eliminate the –x3 term in equation 1.
R1 R1+R3
So we have an equivalent system (to the original system) that is easier to solve.
R1 R1 – 3R2
Thus the system has only one solution, namely (5, -2, -3). To
verify that (5, -2, -3) is a solution, substitute these values into the left side of the original
system, and compute:
5 + 3(-2) - (-3) = 5 – 6 + 3 = 2
-2 - 2(-3) = -2 + 6 = 4
-2(5) - 3(-2) - 3(-3) = -10 + 6 + 9 = 5
It is a solution, as it satisfies all the equations in the given system (3).
Let us see how elementary row operations on the augmented matrix of a given linear
system can be used to determine a solution of the system. Suppose a system of linear
equations is changed to a new one via row operations on its augmented matrix. By
considering each type of row operation it is easy to see that any solution of the original
system remains a solution of the new system.
Conversely, since the original system can be produced via row operations on the new
system, each solution of the new system is also a solution of the original system. From
this we have the following important property.
If the augmented matrices of two linear systems are row equivalent, then the two
systems have the same solution set.
Thus to solve a linear system by elimination we first perform appropriate row operations
on the augmented matrix of the system to obtain the augmented matrix of an equivalent
linear system which is easier to solve and use back substitution on the resulting new
system. This method can also be used to answer questions about existence and
uniqueness of a solution whenever there is no need to solve the system completely.
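The elimination-plus-back-substitution procedure can be sketched in Python. The coefficients below are read off from the verification step of the worked example (x1 + 3x2 - x3 = 2, x2 - 2x3 = 4, -2x1 - 3x2 - 3x3 = 5), so treat them as a reconstruction:

```python
def solve(A, b):
    # Forward elimination then back substitution; a sketch that assumes
    # nonzero pivots appear without row interchanges (true for this system).
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for i in range(n):
        for j in range(i + 1, n):
            f = M[j][i] / M[i][i]
            M[j] = [x - f * y for x, y in zip(M[j], M[i])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][k] * x[k] for k in range(i + 1, n))) / M[i][i]
    return x

A = [[1, 3, -1], [0, 1, -2], [-2, -3, -3]]
b = [2, 4, 5]
print(solve(A, b))  # [5.0, -2.0, -3.0]
```

The first elimination pass performs exactly the replacement R3 → R3 + 2R1 used in the text.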
Solution: The augmented matrix is A =
1  1  1 |  3
1  5  5 |  2
2  1  1 |  1
Let us perform a finite sequence of elementary row operations on the augmented matrix.
(*)
But the last equation is never true. That is, there are no values of the unknowns
that satisfy the new system (*). Since (*) and the original linear system have the
same solution set, the original system is inconsistent (has no solution).
Let us find an echelon form of the augmented matrix first. From this we
where the parameter is any real number. There are infinitely many solutions, for
example,
x= , y = 0, z = 3
x = 0, y = 1, z = 3 and so on.
In vector form the general solution of the given system is of the form
where . What does this represent in 3-space?
Remark: A system of linear equation is consistent if and only if the ranks of the
coefficient matrix and the augmented matrix are equal.
Exercise 3.1:
1. Find the solution set of the following system:
a. c.
b. d.
UNIT IV
Determinants
In this case, the straight bars do NOT mean absolute value; they represent the
determinant of the matrix. We will see some of the uses of the determinant in the
subsequent sections. For now, let's find out how to compute the determinant of a matrix
so that we can use it later.
For a 2 × 2 matrix,
| a  b |
| c  d |  =  ad – bc.
That is, the determinant of a 2 × 2 matrix is obtained by taking the product of the entries
in the main diagonal and subtracting from it the product of the entries in the other
diagonal.
To define the determinant of a square matrix A of order n(n > 2), we need the concepts of
the minor and the cofactor of an element.
Let |A| be a determinant of order n. The minor of aij is the determinant of order n – 1
that is left by deleting the ith row and the jth column. It is denoted by Mij. The cofactor
of aij, denoted by Cij, is defined by Cij = (-1)^(i+j) Mij.
Example 4.1.1: Evaluate the cofactor of each of the entries of the matrix:
Solution: C11 = -1, C21 = 1 , C31 = -1, C12 = 8, C13 = -5, C22 = -6, C32 = 2, C23 = 3, C33 = -1
Activity 4.1.1: Evaluate the cofactor of each of the entries of the given matrices:
a. b.
Definition 4.1.3 :( Determinant of order n): If A is a square matrix of order n (n >2), then
its determinant may be calculated by multiplying the entries of any row (or column) by
their cofactors and summing the resulting products. That is, expanding along the ith row,
det A = ai1Ci1 + ai2Ci2 + … + ainCin
or, expanding along the jth column,
det A = a1jC1j + a2jC2j + … + anjCnj.
Remark: It is a fact that determinant of a matrix is unique and does not depend on the
row or column chosen for its evaluation.
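Cofactor expansion lends itself to a short recursive sketch (expanding along the first row; function name is ours):

```python
def det(A):
    # Cofactor expansion along the first row: sum of a_1j * C_1j.
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 1, column j
        total += (-1) ** j * A[0][j] * det(minor)         # (-1)**j encodes (-1)^(1+j)
    return total

print(det([[1, 2], [3, 4]]))                   # -2  (= ad - bc)
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))  # 24  (product of diagonal entries)
```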
Solution: Choose a given row or column. Let us arbitrarily select the first row. Then
= 22
If we had expanded along the first column, then
= 22, as before
= 54 – 94 + 13 = -27
a. c.
b. d.
The following diagram, called the Sarrus diagram, enables us to write the value of a
determinant of order 3 very conveniently. This
technique does not work for determinants of
higher order.
Working Rule: Make the Sarrus’ diagram by
repeating the first two columns of the
determinant as shown below. Then multiply the elements joined by arrows. Assign the
positive sign to an expression if it is formed by a downward arrow and negative sign to
an expression if it is formed by an upward arrow.
Value: a11a22a33 + a12a23a31 + a13a21a32 – a31a22a13 – a32a23a11 – a33a21a12
Exercise:
Property 1: The value of a determinant remains unchanged if rows are changed into
columns and columns into rows. That is, |A| = |Aᵗ|.
Property 2: If any two rows (or columns) of a determinant are interchanged, the value of
the determinant so obtained is the negative of the value of the original determinant. That
is, .
Property 5: If to the elements of a row (or column) of a determinant are added k times
the elements of another row (or column), the value of the determinant so obtained is
equal to the value of the original determinant. That is,
Property 6: If each element of a row (or column) of a determinant is the sum of two
elements, the determinant can be expressed as the sum of two determinants. That is,
= 24 - 36 = -12
= (Property 5)
Activity 4.2.1: Evaluate the following determinants by using the properties listed above:
a) b) c)
Example 4.2.4: Let A and B be 3 × 3 matrices with det A = 2 and det B = -3.
Find det (2ABᵗ).
Solution: det (2ABᵗ) = 2³ (det A)(det Bᵗ) = 8(2)(-3) = -48, since det B = det Bᵗ.
Definition 4.3.1: Let A = (aij) be a square matrix of order n and let Cij be the cofactor of
aij. Then the adjoint of A, denoted by adj A, is defined as the transpose of the cofactor
matrix (Cij).
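The definition, restricted to the 3 × 3 case for brevity, can be sketched as follows (the matrix A below is our own illustrative example):

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cofactor(A, i, j):
    # C_ij = (-1)^(i+j) * M_ij, where the minor deletes row i and column j.
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]
    return (-1) ** (i + j) * det2(minor)

def adjoint(A):
    # adj A = transpose of the cofactor matrix: entry (i, j) is C_ji.
    return [[cofactor(A, j, i) for j in range(3)] for i in range(3)]

A = [[2, 1, 0], [1, 2, 1], [0, 1, 2]]
print(adjoint(A))  # [[3, -2, 1], [-2, 4, -2], [1, -2, 3]]
```

For this A, multiplying A by adj A gives (det A)·I with det A = 4, the identity used below to compute inverses.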
Solution: We have C11=-3, C12=6, C13= -3, C21=5, C22=-10, C23=5, C31=2,C32=-4, C33=2.
Thus, .
Hence =
Definition 4.3.2: Let A be a square matrix of order n. Then a square matrix B of order
n, if it exists, is called an inverse of A if AB = BA = In. A matrix A having an inverse is
called an invertible matrix. It may easily be seen that if a matrix A is invertible, its
inverse is unique. The inverse of an invertible matrix A is denoted by A-1.
Does every square matrix possess an inverse? To answer this let us consider the matrix
We thus see that there cannot be any matrix B for which AB and BA both are equal to I2.
Therefore A is not invertible. Hence, we conclude that a square matrix may fail to have
an inverse. However, if A is a square matrix such that det A ≠ 0, then A is invertible and
A-1 = (1/det A) adj A.
Solution:
or
Note: If A is an invertible n × n matrix, then AA-1 = In and det A-1 = 1/det A, where
det A ≠ 0.
2. The inverse of the inverse is the original matrix itself, i.e. (A-1)-1 = A.
3. The inverse of the transpose of a matrix is the transpose of its inverse, i.e.,
(Aᵗ)-1 = (A-1)ᵗ.
4. If A and B are two invertible matrices of the same order, then AB is also
invertible and moreover, (AB)-1 = B-1A-1.
. To find adj A, let Cij denote the cofactor of aij, the element in the ith row
and jth column of |A|. Thus C11 = 3, C12 = -1, C13 = -6, C21 = 2, C22 = 2, C23 = -4, C31 = -9, C32 = -5 and C33 = 26.
. Hence
3. If then . (True/False)
4.4. Cramer’s rule for solving system of linear equations (homogeneous and non
homogeneous)
Now let e1, e2, . . ., en be the columns of the n × n identity matrix I and let Ii(x) be the
matrix obtained from I by replacing column i by x. For an n × n matrix A with det A ≠ 0,
the system AX = b then has the unique solution given by xi = det Ai(b)/det A,
i = 1, 2, . . ., n, where Ai(b) is the matrix obtained from A by replacing column i by b.
This method for finding the solutions of n linear equations in n unknowns is known as
Cramer's Rule.
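Cramer's Rule translates directly into code for the 3 × 3 case; the system below is our own illustrative example, not one from the text:

```python
def det3(M):
    # Determinant of a 3 x 3 matrix by expansion along the first row.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def cramer(A, b):
    # x_i = det(A_i(b)) / det A, where A_i(b) replaces column i of A by b.
    d = det3(A)
    assert d != 0, "Cramer's rule requires det A != 0"
    xs = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]
        xs.append(det3(Ai) / d)
    return xs

A = [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
b = [6, 5, 3]
print(cramer(A, b))  # [1.0, 2.0, 3.0]
```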
Example 4.4.1: Solve the following system of linear equations by Cramer’s Rule.
where
By Cramer’s Rule,
and
Example 4.4.2: Solve the following system of linear equations by Cramer’s Rule.
where
By Cramer’s Rule,
, and
a) 2z + 3 = y + 3x b) –a + 3b – 2c = 7
x – 3z = 2y + 1 3a + 3c = -3
3y + z = 2-2x 2a + b + 2c = -1
For the non-homogeneous system AX = B, if det A = 0, then Cramer's rule does
not give any information whether or not the system has a solution. However, in the case
of a homogeneous system we have the following useful theorem.
solution.
Solution: det A = 0
or
Let us consider a square matrix A of order n (i.e. n rows and n columns). If k is the least
number of rows and columns which must be deleted in order to obtain a non-vanishing
determinant, then the order of the highest ordered non-vanishing determinant in A is
r = n – k, and this number is defined as the rank of A.
Solution: | A | = -1(24 – 25) + 2(18 – 20) – 3(15 – 16) = 0. So R (A) < 3 … (1)
Example 4.5.5: The 3 × 4 matrix A = has a row that is a constant multiple
of another row (i.e. R2 = 2R1). This matrix possesses four square
submatrices of order 3: , , , .
The determinant of each of these matrices is zero, because in each case the second row
is a constant multiple of the first row. Thus the rank of the matrix A cannot be equal to 3.
That is, rank of A < 3. However, it is easy to find a 2 × 2 submatrix of A whose
For each eigenvalue λ, the corresponding eigenvector is found by substituting λ back into
the equation (A – λI)X = 0.
eigenvectors of A.
Solution:
For λ = 7: (A – 7I2)X = 0
Hence, any vector of the type , where is any real number, is an eigenvector
Hence, any vector of the type , where is any real number, is an eigenvector
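For a 2 × 2 matrix, the eigenvalues are the roots of the characteristic equation λ² − (tr A)λ + det A = 0, so they can be sketched with the quadratic formula. The matrix below is our own illustration (not the one in the example), and the sketch assumes real eigenvalues:

```python
import math

def eig2(A):
    # Roots of det(A - lambda*I) = lambda^2 - (tr A) lambda + det A = 0.
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = math.sqrt(tr * tr - 4 * det)   # assumes real eigenvalues
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

A = [[4, 2], [1, 3]]
print(eig2(A))  # [2.0, 5.0]
# Check: with v = (1, -1), A v = (2, -2) = 2 v, so v is an eigenvector for 2.
```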
eigenvectors of A.
(3 – λ)(3 – λ)(5 – λ) – 4(5 – λ) = 0
[(3 – λ)² – 4](5 – λ) = 0
(λ² – 6λ + 5)(λ – 5) = 0
(λ – 1)(λ – 5)² = 0
So, the eigenvalues of A are λ = 1 and λ = 5.
To find the corresponding eigenvectors, we substitute the values of λ in the equation
(A – λI)X = 0.
Exercise:
Find the eigenvalues and the corresponding eigenvectors of the matrices:
a) b)
c) d)
Proof:
and
.
Therefore,
.
Since , .
Thus,
are not orthogonal. We can obtain two orthonormal eigenvectors via the Gram-Schmidt
process. The orthogonal eigenvectors are
Thus,
and .
Note: For a set of vectors u1, u2, . . ., un, we can find a set of orthogonal vectors
v1, v2, . . ., vn via the Gram-Schmidt process: v1 = u1 and, for k = 2, . . ., n,
vk = uk – [(uk·v1)/(v1·v1)]v1 – … – [(uk·vk-1)/(vk-1·vk-1)]vk-1.
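The process above can be sketched directly (the input vectors are our own illustration; the output is orthogonal but not normalized):

```python
def gram_schmidt(vectors):
    # Subtract from each u_k its projections onto the earlier v_j's.
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    ortho = []
    for u in vectors:
        w = list(u)
        for v in ortho:
            c = dot(u, v) / dot(v, v)
            w = [wi - c * vi for wi, vi in zip(w, v)]
        ortho.append(w)
    return ortho

W = gram_schmidt([[1, 1, 0], [1, 0, 1]])
print(W)  # [[1, 1, 0], [0.5, -0.5, 1.0]] -- the two vectors are orthogonal
```

Dividing each output vector by its length would give an orthonormal set.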
UNIT V
LINEAR TRANSFORMATIONS
Activity 5.1.1: Recall the meaning and properties of a function. Also try to recall
related concepts like domain, range, one-to-one, onto,
composition, and inverse.
Recall that a function (mapping) consists of the following:
i) a set X, each of whose elements is mapped;
ii) a set Y, to which each element of X is mapped;
iii) a rule (correspondence) f, which associates with each element x of X a single
element f(x) of Y.
Activity 5.1.2:
1. Let A = [2, ∞) and B = [-4, ∞). Define a function
f: A → B by f(x) = x² – 4x.
The following definition will enable us to give a complete answer for this question.
Definition 5.1.1: Let V and W be vector spaces over the same field K. A function
T: V → W is called a linear transformation (or a linear mapping)
of V into W if it satisfies the following conditions:
i) T(u + v) = T(u) + T(v) ∀ u, v ∈ V
ii) T(αu) = αT(u) ∀ α ∈ K and u ∈ V
Note: 1. Using condition (ii) of the definition, one can show that T(Ov) = Ow,
where Ov and Ow are the zero vectors in V and W respectively.
T(Ov) = T(0·u) (because 0·u = Ov for any u ∈ V)
= 0·T(u) (by (ii), 0 ∈ K (the zero element in the field K))
= Ow (Why?)
This proves that a linear mapping maps the zero vector to the zero vector.
2) The two conditions in the definition are equivalent to
T(αu + βv) = αT(u) + βT(v) ∀ α, β ∈ K and u, v ∈ V.
Let us prove that a function T from a vector space V into W over the same field K is a
linear transformation iff T(α1v1 + α2v2) = α1T(v1) + α2T(v2) for any α1, α2 ∈ K and for
any v1, v2 ∈ V.
Now let us see some examples of a linear transformation (or a linear mapping).
Example 5.1.1: Let V be a vector space over the field K. Then the mapping
I: V → V given by I(v) = v ∀ v ∈ V is a linear transformation. To prove
this let u, v ∈ V and α ∈ K. Then u + v ∈ V and αu ∈ V as V is a vector
space. Since I(x) = x ∀ x ∈ V, we have
i) I(u + v) = u + v and I(u) + I(v) = u + v.
Thus I(u + v) = I(u) + I(v).
ii) I(αu) = αu and αI(u) = αu.
Thus I(αu) = αI(u).
Therefore I is a linear transformation. We call I the identity transformation.
Example 5.1.2: Let T be a mapping from a vector space V over a field K into itself
given by T(v) = Ov ∀ v ∈ V, where Ov is the zero vector in V.
Then T is a linear transformation. (Verify!) We call this linear
transformation the zero transformation.
Example 5.1.3: Let V be the vector space of all differentiable real valued functions
of a real variable on an open interval (a, b). Then the mapping D: V → V
given by D(f) = f′ (where f′ is the derivative of f) is a linear
transformation. This can be easily verified by using the properties of the
derivative.
Remark: To show that a mapping T from a vector space V into W over the same
field K is not a linear transformation, it suffices to show that there exist two
vectors v1, v2 ∈ V such that T(v1 + v2) ≠ T(v1) + T(v2), or that there exist a scalar
α ∈ K and a vector v ∈ V such that T(αv) ≠ αT(v).
Activity 5.1.3:
1. Show that the mapping T: ℝ³ → ℝ² defined by T(x, y, z) = (x - y, x - z) is a linear
transformation.
2. Is the mapping L: ℝ³ → ℝ² defined by L(a, b, c) = (|a|, 0) a linear
transformation? Justify your answer!
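The map in Activity 5.1.3(1) can be probed numerically; `looks_linear` is a hypothetical helper, and a passing check on sample vectors is evidence, not a proof of linearity:

```python
def T(v):
    # The map of Activity 5.1.3(1): T(x, y, z) = (x - y, x - z).
    x, y, z = v
    return (x - y, x - z)

def looks_linear(T, u, v, a, b):
    # Check T(a u + b v) == a T(u) + b T(v) for one choice of vectors and scalars.
    lhs = T(tuple(a * ui + b * vi for ui, vi in zip(u, v)))
    rhs = tuple(a * x + b * y for x, y in zip(T(u), T(v)))
    return lhs == rhs

print(looks_linear(T, (1, 2, 3), (4, 5, 6), 2, -3))  # True
```

A single failing check, by contrast, does prove a map is not linear, as the remark above observes.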
Let us add one more example. Recall that vectors v1, v2, . . ., vn in a vector space V
over a field K are linearly independent iff α1v1 + α2v2 + … + αnvn = Ov (where α1, α2,
…, αn ∈ K) implies α1 = α2 = … = αn = 0.
Example 5.1.7: Let T be a linear transformation from a vector space V into W over the
same field K. Prove that the vectors v1, v2, v3, …, vn ∈ V are linearly
independent if T(v1), T(v2), T(v3), …, T(vn) are linearly independent
vectors in W.
Solution: Suppose T(v1), T(v2), T(v3), …, T(vn) are linearly independent vectors in W,
where v1, v2, v3, …, vn are vectors in V and T: V → W is a linear
transformation. To prove that v1, v2, v3, …, vn are linearly independent,
let α1, α2, α3, …, αn ∈ K such that
α1v1 + α2v2 + α3v3 + … + αnvn = Ov.
Then T(α1v1 + α2v2 + α3v3 + … + αnvn) = T(Ov).
So α1T(v1) + α2T(v2) + α3T(v3) + … + αnT(vn) = Ow.
But T(v1), T(v2), T(v3), …, T(vn) are linearly independent.
Hence α1 = α2 = α3 = … = αn = 0. Thus we have shown that
α1v1 + α2v2 + α3v3 + … + αnvn = Ov implies
α1 = α2 = α3 = … = αn = 0.
Consequently v1, v2, v3, …, vn are linearly independent.
Exercise
1. Determine whether or not each of the following mappings is linear transformation.
a) T: ℝ² → ℝ² given by T(x, y) = (x + y, x)
b) T: ℝ² → ℝ given by T(x, y) = xy
c) T: ℝ³ → ℝ² given by T(x1, x2, x3) = (1 + x1, x2)
d) L: ℝ³ → ℝ² given by L(x, y, z) = (z + x, y)
e) L: ℝ² → ℝ² given by L(p, q) = (p³, q³)
2. Let M2 denote the vector space of 2 × 2 matrices over ℝ.
Let T: M2 → M2 be given by
We now state two other basic properties in the following theorem. The proof is left for
you as an exercise.
Theorem 5.1.1: Let T be a linear transformation from a vector space V into W over the
same field K. Then i) T(-v) = -T(v) ∀ v ∈ V
ii) T(v1 - v2) = T(v1) - T(v2) ∀ v1, v2 ∈ V
Our next theorem asserts that a linear transformation from a given finite dimensional
vector space V in to any vector space W is completely determined by its values on the
elements of a given basis of V.
Theorem 5.1.2: Let V and W be vector spaces over the field K. Let {v1, v2, v3, …, vn} be
a basis of V. If {w1, w2, w3, …, wn} is a set of arbitrary vectors in W, then
there exists a unique linear transformation F: V → W such that
F(vj) = wj for j = 1, 2, …, n.
To prove the theorem, we need to
a) define a function F from V into W such that F(vi) = wi for all i = 1, 2, 3, … n.
b) show F is a linear transformation
c) show that F is unique.
Proof: Let V and W be vector spaces over the field K. Let {v1, v2, v3, …, vn} be a basis
of V and {w1, w2, w3, …, wn} be any set of n vectors in W.
Since {v1, v2, v3, …, vn} is a basis of V, for any v ∈ V there exist unique scalars
a1, a2, a3, …, an ∈ K such that
v = a1v1 + a2v2 + … + anvn. Define F: V → W by F(v) = a1w1 + a2w2 + … + anwn.
Then x = x1v1 + x2v2 + … + xnvn and y = y1v1 + y2v2 + … + ynvn for some unique
scalars x1, x2, …, xn, y1, y2, …, yn ∈ K.
(i) F(x + y) = F((x1 + y1)v1 + (x2 + y2)v2 + … + (xn + yn)vn)
= (x1 + y1)w1 + (x2 + y2)w2 + … + (xn + yn)wn by definition of F
= (x1w1 + x2w2 + … + xnwn) + (y1w1 + y2w2 + … + ynwn)
= F(x) + F(y)
F(x + y) = F(x) + F(y) ∀ x, y ∈ V
ii) F(αx) = F(αx1v1 + αx2v2 + … + αxnvn)
= αx1w1 + αx2w2 + … + αxnwn by definition of F
= α(x1w1 + x2w2 + … + xnwn) = αF(x) by definition of F
Let x be any vector in V. Then x = x1v1 + x2v2 + … + xnvn for some unique scalars x1, x2, x3,
…, xn in K.
Thus G(x) = G(x1v1 + x2v2 + … + xnvn)
= x1G(v1) + x2G(v2) + … + xnG(vn) as G is a linear transformation
= x1w1 + x2w2 + … + xnwn since G(vi) = wi
= F(x) by definition of F.
Since G(x) = F(x) for any x ∈ V, we conclude that G = F.
This proves that F is unique. With this we complete the proof of the theorem.
Remark: 1) The vectors w1, w2, w3, …, wn in Theorem 5.1.2 are completely arbitrary;
they may be linearly dependent, independent, or they may even be equal to
each other. But the number of these vectors in W must equal the
number of basis vectors of V.
2) In determining the linear transformation from V into W, the assumption that
{v1, v2, …, vn} is a basis of V is essential.
Example 5.1.8:
a) Is there a linear transformation T from ℝ² into ℝ² such that T(2, 3) = (4, 5) and
T(1, 0) = (0, 0)?
b) How many linear transformations satisfying the given conditions do we have?
Solution: a) Yes. The two vectors (2, 3) and (1, 0) are linearly independent and hence
they form a basis for ℝ². Thus, according to Theorem 5.1.2, there is a unique
linear transformation from ℝ² into ℝ² such that T(2, 3) = (4, 5) and
T(1, 0) = (0, 0).
b) As verified above, we have only one linear transformation satisfying
the given conditions.
Thus (x, y) =
Therefore (x, y) 2
Observe that the image of any vector (a, b) ∈ ℝ² under the linear transformation of
the example above is . So the image of ℝ² under T is the line through (0, 0) with
direction vector .
Activity 5.1.4:
i) Let A = {(x, y) | x² + y² = 1}. Find the image of A under T, i.e. T[A].
ii) Describe the set containing all elements in ℝ² whose image is (0, 0).
Thus we have
Activity 5.1.5:
1 a) Find a linear transformation T: ℝ² → ℝ² such that T(1, 0) = (1, 1) and
T(0, 1) = (-1, 2).
b) Prove that T maps the square with vertices (0, 0), (1, 0), (1, 1) and
(0, 1) onto a parallelogram.
2. a) Is there a linear transformation T: ℝ³ → ℝ³ such that
T(0, 1, 2) = (3, 1, 2) and T(1, 1, 1) = (2, 2, 2)?
b) If your answer in (a) is yes,
(i) find T.
(ii) is it unique? Why?
,
L(x, y, z) =
L(x, y, z) =
L(1, 1,1) = .
satisfy the requirements of the question in the above example? Replace (0, 0, 1) by (1, 0,
0) in the solution of the above example and find a linear transformation
L: ℝ³ → ℝ² such that L(1, -1, 1) = (1, 0) and L(1, 1, 1) = (0, 1). Do the same by replacing
(0, 0) by (1, 1) and (0, 0, 1) by (1, 0, -1).
Exercise:
In this section we will discuss in detail two important sets related to a linear
transformation T: V → W, where V and W are vector spaces over the same field K. One
of them is a subset of V and the other is a subset of W. Is there an element v in V such
that T(v) = Ow, the zero vector in W? We know that T(Ov) = Ow, so there is at least one
element v in V such that T(v) = Ow. Hence we have a non-empty subset
U = {u ∈ V | T(u) = Ow} of V. It is a subspace of V because
i) Ov ∈ U as T(Ov) = Ow
ii) If u1, u2 ∈ U then T(u1) = T(u2) = Ow.
So T(u1 + u2) = T(u1) + T(u2) as T is a linear transformation
= Ow + Ow
= Ow
Thus u1 + u2 ∈ U for any u1, u2 ∈ U.
iii) If u ∈ U and α ∈ K, then T(u) = Ow.
T(αu) = αT(u) as T is a linear transformation
= α·Ow
= Ow
So αu ∈ U for any α ∈ K and for any u ∈ U.
From (i), (ii) and (iii) it follows that U is a subspace of V.
On the other hand we know that, the range of T is a subset of W. It is also a subspace of
W(verify!).
This section is concerned with these two special subspaces. Let us begin with the
following definition.
Theorem 5.2.1: Let T be a linear transformation from a vector space V into W over the
same field K. Then (a) Ker T is a subspace of V
(b) Im T is a subspace of W.
Proof: a) It is already proved.
b) Clearly Im T is a subset of W.
i) Since T(Ov) = Ow, Ow ∈ Im T.
ii) Let w1, w2 ∈ Im T. Then there exist u1, u2 ∈ V such that
T(u1) = w1 and T(u2) = w2. Since V is a vector space, u1 + u2 ∈ V. Moreover
T(u1 + u2) = T(u1) + T(u2) = w1 + w2. Thus w1 + w2 ∈ Im T as there exists a
vector v ∈ V such that T(v) = w1 + w2 (v = u1 + u2).
So we have w1 + w2 ∈ Im T ∀ w1, w2 ∈ Im T.
iii) Let α ∈ K and w ∈ Im T. Then there exists v ∈ V such that T(v) = w
as w ∈ Im T. Since V is a vector space over K, αv ∈ V. Moreover
T(αv) = αT(v) = αw. Hence αw ∈ Im T. From (i), (ii) and (iii) it
follows that Im T is a subspace of W.
=
and
=
=
=
Therefore G is a linear transformation.
b) ker
=
=
=
So ker G is the subspace of ℝ³ generated by (1, -1, -1). List at least four elements of ker G.
=
=
=
=
=
Thus Im G is the subspace of ℝ² generated by (1, 0) and (0, 1).
That is, Im G = ℝ² (why?). Observe that dim (ker G) = 1 and dim (Im G) = 2.
Thus
ii) Since ker T = {(0, 0)} (contains only the zero vector), T is
one-to-one by theorem 4.3.2. But it is not onto. Why?
Notice that whenever ker T ≠ {Ov}, we can conclude that T is not one-to-one.
In the theorem we have proved that if the kernel of a linear transformation T contains
only the zero vector, i.e. T is one-to-one, then T maps linearly independent vectors to
linearly independent vectors.
The next theorem relates the dimensions of the kernel and image of a linear transformation
L: V → W with the dimension of V. Before proving it, let us have the following
definition.
Definition 5.2.2: Let L be a linear transformation from a vector space V in to W over the
field K.
(a) The dimension of the Kernel (the null space) of L is called the nullity
of L.
(b) The dimension of the Image (the range) of L is called the rank of L.
Theorem 5.2.4: (Rank-nullity theorem) Let V and W be vector spaces over the same
field K. Let L: V → W be a linear transformation. If V is a finite
dimensional vector space, then dim V = nullity of L + rank of L,
i.e. dim V = dim (ker L) + dim (Im L).
Proof: Since V is a finite dimensional vector space, it is obvious that ker L and
Im L = L(V) are finite dimensional. Moreover dim (ker L), dim (Im L) ≤ dim V.
(Verify)
Let {u1, u2, …, up} and {w1, w2, …, wq} be bases of ker L and Im L respectively
(p, q ≤ dim V). Then there exist v1, v2, …, vq ∈ V such that L(vi) = wi for i = 1, 2,
3, …, q as wi ∈ Im L.
Claim: β = {u1, u2, …, up, v1, v2, …, vq} is a basis of V.
Now we show that
i) β generates V
ii) β is linearly independent.
i) Let v ∈ V. Then L(v) ∈ Im L and hence there exist unique scalars b1, b2, …, bq in
K such that L(v) = b1w1 + b2w2 + … + bqwq.
So L(v) = b1L(v1) + b2L(v2) + … + bqL(vq) as L(vi) = wi for i = 1, 2, …, q.
ii) Suppose α1u1 + α2u2 + … + αpup + r1v1 + r2v2 + … + rqvq = Ov ……… (1)
where α1, α2, …, αp, r1, r2, …, rq ∈ K.
Then L(α1u1 + α2u2 + … + αpup + r1v1 + r2v2 + … + rqvq) = L(Ov) = Ow.
So we have α1L(u1) + α2L(u2) + … + αpL(up) + r1L(v1) + r2L(v2) + … + rqL(vq) = Ow.
r1w1 + r2w2 + … + rqwq = Ow, since L(uj) = Ow and
L(vi) = wi for j = 1, 2, …, p and i = 1, 2, …, q.
r1 = r2 = … = rq = 0, since {w1, w2, …, wq} is a basis of Im L.
α1u1 + α2u2 + … + αpup = Ov (replace r1, r2, …, rq by 0 in (1)).
α1 = α2 = … = αp = 0, since {u1, u2, …, up} is a basis of ker L.
Thus we have shown that
α1u1 + α2u2 + … + αpup + r1v1 + r2v2 + … + rqvq = Ov implies
α1 = α2 = … = αp = r1 = r2 = … = rq = 0.
That is, β is a linearly independent set in V. From (i) and (ii) it follows that
β = {u1, u2, …, up, v1, v2, …, vq} is a basis of V.
Hence dim V = p + q = dim (ker L) + dim (Im L).
Therefore dim V = nullity of L + rank of L.
(x – y + z + w, x + 2z – w, x + y + 3z – 3w) = (0,0,0)
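The condition displayed above reads as the kernel equation of the map L(x, y, z, w) = (x − y + z + w, x + 2z − w, x + y + 3z − 3w) from ℝ⁴ to ℝ³. Assuming that reading, the rank-nullity theorem can be checked numerically; `matrix_rank` gives dim(Im L):

```python
import numpy as np

# Standard matrix of L(x, y, z, w) = (x - y + z + w, x + 2z - w, x + y + 3z - 3w)
A = np.array([[1, -1, 1, 1],
              [1,  0, 2, -1],
              [1,  1, 3, -3]])

rank = np.linalg.matrix_rank(A)   # dim(Im L)
nullity = A.shape[1] - rank       # dim(Ker L) = dim V - rank
print(rank, nullity)              # 2 2, and 2 + 2 = 4 = dim(R^4)
```

Here the third row is the sum of twice the second minus the first, so the rank is 2 and the nullity is 2, in agreement with the theorem.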
Notation: We denote the set of all linear transformations from a vector space V into W
over the field K by L(V, W). While using the notation L(V, W), it should
be noticed that V and W are vector spaces over the same field. Let T, S ∈
L(V, W) and α ∈ K. Then T and S are linear transformations from V into
W and hence they are functions from V into W. Thus
i) the sum of T and S, T + S, is a function from V into W defined by
(T + S)(v) = T(v) + S(v) for all v ∈ V;
ii) the scalar multiple of T by α, αT, is a function from V into W
defined by (αT)(v) = α(T(v)) for all v ∈ V.
We now state a theorem that asserts that T + S and αT are linear transformations from V into
W for any T, S ∈ L(V, W) and α ∈ K, and that L(V, W) is a vector space with addition and
multiplication by scalars defined as above.
We shall prove the first two assertions and leave the last one as an exercise.
Theorem 5.3.1: Let V and W be vector spaces over the field K. Let T and S be linear
transformations from V into W and α ∈ K. Then
i) the function T + S is a linear transformation from V into W,
i.e. T + S ∈ L(V, W).
ii) the function αT is a linear transformation from V into W,
i.e. αT ∈ L(V, W).
iii) L(V, W), the set of all linear transformations from V into W with
respect to the operations of vector addition and scalar multiplication
defined by (T + S)(v) = T(v) + S(v) and (αT)(v) = αT(v) for all v ∈ V,
is a vector space over K.
Note: In the above theorem, we have asserted that L(V, W) is a vector space over K.
With this we can consider every linear transformation in L(V, W) as a vector. The
zero vector in this vector space will be the zero transformation that sends every
vector of V to the zero vector in W.
Theorem 5.3.2: Let V, W and Z be vector spaces over the field K. Let T and S be linear
transformations from V into W and from W into Z respectively. Then
the composite function S∘T defined by (S∘T)(v) = S(T(v)) for all v ∈ V
is a linear transformation.
Proof: Let T and S be as in the hypothesis of the theorem. Let v1, v2 ∈ V and r ∈ K.
Then (S∘T)(v1 + v2) = S(T(v1 + v2))
= S(T(v1) + T(v2)) why?
= S(T(v1)) + S(T(v2)) why?
= (S∘T)(v1) + (S∘T)(v2)
and (S∘T)(rv1) = S(T(rv1))
= S(rT(v1)) because T is a linear transformation
= rS(T(v1)) why?
= r(S∘T)(v1)
Therefore S∘T is a linear transformation.
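For transformations between coordinate spaces, composition has a concrete matrix counterpart: if A is the matrix of T and B is the matrix of S, then BA is the matrix of S∘T. A minimal sketch (the two maps below are arbitrary choices for illustration):

```python
import numpy as np

# T: R^2 -> R^3 and S: R^3 -> R^2, given by their matrices
A = np.array([[1, 2], [0, 1], [3, 0]])   # matrix of T
B = np.array([[1, 0, 1], [0, 2, 0]])     # matrix of S

v = np.array([4, -1])
# (S o T)(v) computed two ways: step by step, and via the single matrix BA
step_by_step = B @ (A @ v)
via_product = (B @ A) @ v
print(step_by_step, via_product)  # both [14 -2]
```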
Notation: For the sake of brevity we shall simply denote the composition S∘T of S and T
by ST.
Activity 5.3.1:
1) Give two linear transformations S and T from ℝ³ into itself such that ST ≠ TS.
From this you may conclude that composition of linear transformations is not
commutative.
2) Is composition of linear transformations associative? Justify your answer!
3) Let U, V and W be vector spaces over the field K.
Let S, S' ∈ L(U, V) and T, T' ∈ L(V, W).
Verify each of the following.
i) T(S + S') = TS + TS'
ii) (T + T')S = TS + T'S
iii) α(TS) = (αT)S = T(αS) for all α ∈ K
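For activity 1), one possible pair on ℝ³ (an assumed choice, by no means the only one): let S project onto the xy-plane and let T cyclically permute the coordinates. In matrix form:

```python
import numpy as np

# S projects (x, y, z) to (x, y, 0); T sends (x, y, z) to (z, x, y)
S = np.diag([1, 1, 0])
T = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])

ST = S @ T   # matrix of the composition S o T
TS = T @ S   # matrix of T o S
print(np.array_equal(ST, TS))  # False: composition is not commutative
```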
Example 5.3.2: Let T be a linear operator on a vector space V over the field F.
i) If T² = O, the zero mapping, then what can you say about the
relation of the range of T to the kernel of T?
ii) Give an example of a linear operator T on ℝ² such that
T² = O but T ≠ O.
Solution: i) Method 1: T² = O ⇒ T²(v) = O(v) for all v ∈ V
⇒ T(T(v)) = Ov, the zero vector in V
⇒ T(v) ∈ Ker T for all v ∈ V.
But T(v) is an arbitrary element of the range of T for any v ∈ V.
Therefore Range of T ⊆ Ker T.
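For part ii), one operator that works (an assumed choice) is T(x, y) = (y, 0): it is not the zero map, yet applying it twice kills every vector. In matrix form:

```python
import numpy as np

# T(x, y) = (y, 0) on R^2
A = np.array([[0, 1],
              [0, 0]])

print(np.any(A != 0))                           # True: T is not the zero map
print(np.array_equal(A @ A, np.zeros((2, 2))))  # True: T^2 = O
```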
Activity 5.3.2:
1. Let T and S be linear operators on ℝ². Does TS = O imply either T = O or S = O?
Explain! Give a counter example if your answer is No.
2. Let V be a finite dimensional vector space over the field F and T be a linear
operator on V. Suppose that rank(T²) = rank(T). Prove that the range and null
space of T intersect only in the zero vector.
Now let us discuss the inverse of a linear transformation. Recall that a function T from
V into W is called invertible if there exists a function S from W into V such that ST is
the identity function on V and TS is the identity function on W. If T is invertible the
function S is unique; it is denoted by T⁻¹ and is called the inverse of T. Thus
T⁻¹(w) = v ⟺ T(v) = w, whenever T⁻¹ exists. Furthermore, we know that T is invertible iff
T is one-to-one and onto.
Theorem 5.3.3: Let V and W be vector spaces over the field F and let T be a linear
transformation from V into W. If T is invertible then the inverse
function T⁻¹ is a linear transformation.
Proof: Suppose T: V → W is invertible. Then there exists a unique function
T⁻¹: W → V such that T⁻¹(w) = v ⟺ T(v) = w for all w ∈ W. Moreover
T is one-to-one and onto. We need to show that T⁻¹ is linear. Let w1, w2 ∈ W and α ∈ F.
Then there exist unique vectors v1, v2 ∈ V such that T(v1) = w1 and T(v2) = w2,
as T is one-to-one and onto. So T⁻¹(w1) = v1 and T⁻¹(w2) = v2.
T(v1 + v2) = T(v1) + T(v2) = w1 + w2, because T is linear.
Thus T⁻¹(w1 + w2) = v1 + v2 = T⁻¹(w1) + T⁻¹(w2) for any w1, w2 ∈ W.
Since T(αv1) = αT(v1) = αw1, we get T⁻¹(αw1) = αv1 = αT⁻¹(w1). Therefore T⁻¹ is a linear
transformation.
Ker T = {(x, y, z) ∈ ℝ³ : T(x, y, z) = (0, 0, 0)} = {(0, 0, 0)}.
Therefore T is one-to-one.
Moreover, from the rank-nullity theorem, dim(ℝ³) = dim(Ker T) + dim(Im T), so
3 = 0 + dim(Im T) and dim(Im T) = 3. Since Im T is a subspace of ℝ³
and dim(Im T) = 3, we have Im T = ℝ³. Hence T is onto as Im T = ℝ³.
Therefore T is invertible as it is one-to-one and onto.
To find T⁻¹, let T(x, y, z) = (u, v, w).
Activity 5.3.3:
Exercise:
1. Let T and S be linear operators on ℝ² defined by T(x, y) = (y, x) and S(a, b) = (a, 0).
i) How do you describe T and S geometrically?
ii) Give rules like the ones defining T and S for each of the linear
transformations S – 2T, ST, TS, T², S².
2. Let T be the linear operator on ℝ³ defined by
T(x1, x2, x3) = (3x1, x1 – x2, 2x1 + x2 + x3). Is T invertible? If so, find a rule for T⁻¹.
3. For the linear operator T of exercise 2, show that (T² – I)(T – 3I) = O
(I the identity mapping and O the zero mapping).
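Exercise 3 can be checked numerically with the standard matrix of T from exercise 2 (a verification sketch, not a proof):

```python
import numpy as np

# Standard matrix of T(x1, x2, x3) = (3x1, x1 - x2, 2x1 + x2 + x3)
A = np.array([[3, 0, 0],
              [1, -1, 0],
              [2, 1, 1]])
I = np.eye(3)

# (T^2 - I)(T - 3I) as a matrix product
P = (A @ A - I) @ (A - 3 * I)
print(P)  # the 3x3 zero matrix
```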
4. Let T be a linear transformation from ℝ³ into ℝ² and let U be a linear transformation
from ℝ² into ℝ³. Prove that the linear transformation UT is not
invertible.
5. Let L: ℝ³ → ℝ³ be a linear transformation. Show that L is invertible and find L⁻¹ for:
a)
b)
6. a) Let S: V → V be a linear operator such that S² – S + I = O (where I is the identity
mapping on V and O is the zero mapping on V). Show that S⁻¹ exists and is equal
to I – S.
b) Let L be a linear operator on a vector space V, and assume that L³(v) = Ov for all
v ∈ V. Show that I – L is invertible.
7. Let F and G be linear operators on a vector space V over the set of real numbers.
Assume that FG = GF. Show that
i) (F + G)² = F² + 2FG + G²
ii) (F + G)(F – G) = F² – G²
In this section we shall investigate the strong relationship that exists between linear
transformations and matrices.
b) Let . Then
and .
Solution:
i)
ii) The transformation deforms the given square as if the top of the square were
pushed to the right while the base is held fixed (a shear; see the figure below).
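The deformation just described is a shear. Since the matrix from the original example is not reproduced here, the sketch below assumes the shear matrix with entries [[1, 1], [0, 1]]; it fixes the base of the unit square and pushes the top edge to the right:

```python
import numpy as np

# Assumed horizontal shear matrix (the example's own matrix is not shown above)
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Corners of the unit square, one per column
square = np.array([[0.0, 1.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0, 1.0]])

sheared = A @ square
print(sheared)  # base corners (y = 0) unchanged; top corners shifted right by 1
```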
Activity 5.4.2:
1. Let A be a matrix. What must a and b be in order to define the linear transformation
associated with A?
3. Let
In view of Definition 5.4.1 we can study a system of linear equations with the help of the linear
transformation associated with the coefficient matrix of the system. Consider the system
AX = b where A is an m × n real matrix. Then AX = b iff TA(X) = b, where TA is the
linear transformation associated with the matrix A. Thus the system AX = b has a solution iff
b is in the range of TA. If there is exactly one element X whose image is b under TA
then the system has exactly one solution. But if b has more than one pre-image
under TA then the system has more than one solution. If there is no X such that
TA(X) = b (i.e. b is not in the range of TA), then the system has no solution.
Activity 5.4.3:
Prove that if Ker TA = {0}, then the system AX = b has at most one solution.
Further, if b is the zero column vector in ℝᵐ then the homogeneous system AX = 0 has
at least one solution. What is this solution?
The solution set of AX = 0 is the kernel of TA. So the solution set of AX = 0 is a subspace
of ℝⁿ. Suppose dim(Ker TA) = k and {v1, v2, …, vk} is a basis of Ker TA. Then any
solution v of AX = 0 can be expressed as
v = c1v1 + c2v2 + … + ckvk, where c1, c2, …, ck are scalars.
Now let X0 be one particular solution of the non-homogeneous system AX = b and w be
any solution of AX = b.
Then AX0 = b and Aw = b, so we have A(w – X0) = Aw – AX0 = b – b = 0. This in turn implies
w – X0 ∈ Ker TA and hence w – X0 = c1v1 + c2v2 + … + ckvk. Thus
w = X0 + c1v1 + c2v2 + … + ckvk, where c1, c2, …, ck are scalars.
Therefore if X0 is one particular solution of the non-homogeneous system AX = b, then
every solution w of AX = b is given by w = X0 + c1v1 + c2v2 + … + ckvk, where
{v1, v2, …, vk} is a basis of Ker TA and c1, c2, …, ck are scalars.
This is called the general solution of the non-homogeneous system AX = b.
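The decomposition "particular solution plus kernel element" can be sketched with numpy; the system below is an assumed example whose coefficient matrix has rank 1, so its kernel is two-dimensional:

```python
import numpy as np

A = np.array([[1., 2., 1.],
              [2., 4., 2.]])   # rank 1, so Ker TA has dimension 3 - 1 = 2
b = np.array([3., 6.])

# One particular solution X0 (least-squares gives an exact one here,
# since b lies in the range of TA)
x0, *_ = np.linalg.lstsq(A, b, rcond=None)

# An orthonormal basis of Ker TA from the SVD of A: the rows of Vt
# beyond the rank span the null space
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[np.linalg.matrix_rank(A):]

# Any X0 + c1 v1 + c2 v2 is again a solution of AX = b
w = x0 + 2.0 * null_basis[0] - 1.5 * null_basis[1]
print(np.allclose(A @ w, b))  # True
```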
Example 5.4.2: Let TA be the linear transformation associated with the matrix A.
i) Find Ker TA.
ii) Is b ∈ Im TA?
iii) Is there more than one X whose image under TA is b?
iv) Describe the solution set of AX = b.
Solution: is given by
i)
So
Thus =
= =b
as .
Exercise:
i) ii)
iii) iv)
In the previous subsection we have seen that associated to any given matrix A there
is a linear transformation TA defined by TA(X) = AX. In this subsection we
shall see the reverse process, that is, to find a matrix associated to a given linear
transformation from a finite dimensional vector space V into a finite dimensional vector
space W over the same field K.
Before going further let us recall the coordinates of an element v of a finite
dimensional vector space V with respect to a given ordered basis of V. What do we
mean by an ordered basis? Suppose β = {v1, v2, …, vn} is an ordered basis for a finite
dimensional vector space V over a field K and v is in V. The coordinates of v relative to
the basis β (or the β-coordinates of v) are the scalars c1, c2, …, cn in K such that
v = c1v1 + c2v2 + … + cnvn.
Example 5.4.3: (i) The coordinates of (x, y, z) relative to the standard basis
{(1, 0, 0), (0, 1, 0), (0, 0, 1)} of ℝ³ are simply x, y and z, since
(x, y, z) = x(1, 0, 0) + y(0, 1, 0) + z(0, 0, 1).
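For a non-standard basis, the coordinates are found by solving a linear system. A sketch using the assumed basis {(1, 1), (1, −1)} of ℝ²:

```python
import numpy as np

# Basis vectors of B = {(1, 1), (1, -1)} as the columns of P
P = np.array([[1., 1.],
              [1., -1.]])
v = np.array([3., 1.])

# The B-coordinates c of v satisfy P c = v
c = np.linalg.solve(P, v)
print(c)  # [2. 1.], since (3, 1) = 2(1, 1) + 1(1, -1)
```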
Activity 5.4.4:
of
where and .
and
Let V be an n-dimensional vector space over the field K and let W be an m-dimensional
vector space over K. Let β = {v1, v2, …, vn} and γ = {w1, w2, …, wm} be ordered bases of V
and W respectively. Suppose T: V → W is a linear transformation.
…. (2)
That is
The matrix M in (2) is called a matrix representation of T, or the matrix for T relative
to the bases β and γ. If β and γ are the standard bases of V and W respectively, we call
the matrix M in (2) the standard matrix for the linear transformation T.
Our next task is to examine how the matrix M in (2) determines the linear transformation
T. If x = x1v1 + x2v2 + … + xnvn is a vector in V, then the coordinate vector of x relative to
β is [x]β = (x1, x2, …, xn), and T(x) = T(x1v1 + x2v2 + … + xnvn) = x1T(v1) + x2T(v2) + … + xnT(vn) ….. (3)
Using the basis γ in W, we can rewrite (3) in terms of coordinate vectors relative to γ
as [T(x)]γ = x1[T(v1)]γ + x2[T(v2)]γ + … + xn[T(vn)]γ … (4)
Further, the vector equation (4) can be written as the matrix equation
[T(x)]γ = M[x]β …. (5)
Thus if [x]β is the coordinate vector of x relative to β, then the equation in (5) shows that
M[x]β is the coordinate vector of the vector T(x) relative to γ.
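Equation (5) can be illustrated in code. With standard bases, coordinate vectors are just the vectors themselves, so the sketch below (using the assumed map T(a, b) = (a, b, a + 2b)) builds M column by column from the images of the basis vectors and checks that M[x]β = [T(x)]γ:

```python
import numpy as np

# T: R^2 -> R^3, T(a, b) = (a, b, a + 2b), with standard bases on both sides
def T(v):
    a, b = v
    return np.array([a, b, a + 2 * b])

# The j-th column of M is T applied to the j-th basis vector
basis_V = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
M = np.column_stack([T(vj) for vj in basis_V])

x = np.array([3.0, -1.0])
print(np.allclose(M @ x, T(x)))  # True: equation (5)
```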
Note: In the case when W is the same as V and the basis γ is the same as β, the matrix M
in (2) is called the matrix for T relative to β and is denoted by [T]β.
Activity 5.4.5: Using equation (3), verify equations (4) and (5).
Example 5.4.6: Let β = {b1, b2} be a basis for a vector space V over the set of real
numbers. Find T(3b1 – 4b2), where T is a linear transformation
from V into V whose matrix relative to β is
Solution: Let x = 3b1 – 4b2. Then the coordinate vector of x relative to β is [x]β = (3, –4), and
Exercise:
1. Let F: ℝ³ → ℝ² be defined by F(x, y, z) = (z – x, x + y). Find the matrix
associated with F with respect to the standard bases of ℝ³ and ℝ².
2. Let T: ℝ² → ℝ³ be defined by T(a, b) = (a, b, a + 2b). Find the matrix of T relative to
the bases B1 = {(1, 1), (2, 0)} and B2 = {(1, 1, 1), (1, 1, 0), (0, 1, 1)}.
Definition 5.5.1:
Let T: V → V be a linear operator on a vector space V over a field K. An eigenvalue of T
is a scalar λ in K such that there is a non-zero vector v in V with T(v) = λv. If λ is an
eigenvalue of T, then any non-zero vector v satisfying T(v) = λv is called an eigenvector
of T corresponding to λ.
Activity 5.5.1: Let T: V → V be a linear operator with Ker T ≠ {0}. Prove that every non-
zero vector in Ker T is an eigenvector of T with eigenvalue 0.
Example 5.5.1:
a) Let id: V → V be the identity operator.
Every non-zero vector v in V is an eigenvector of id with eigenvalue 1, since
id(v) = v = 1v.
b) Let T: ℝ² → ℝ² be the linear operator which rotates each vector of ℝ² by an angle of
90°.
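Geometrically, a 90° rotation sends no non-zero vector to a real multiple of itself, so T has no real eigenvalue; its characteristic polynomial λ² + 1 has only the complex roots ±i. A quick numerical check on the rotation matrix:

```python
import numpy as np

# Matrix of rotation of R^2 by 90 degrees
R = np.array([[0., -1.],
              [1., 0.]])

eigs = np.linalg.eigvals(R)
print(eigs)                            # purely imaginary eigenvalues +-i
print(np.all(np.abs(eigs.imag) > 0))   # True: no real eigenvalue
```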