Linear Algebra (IAS) 1979-2006 Solved

UPSC Civil Services Main 1979 - Mathematics

Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

December 27, 2009

Question 1(a) State and prove Cayley-Hamilton Theorem.

Solution. See 1987, question 5(a).

Question 1(b) Reduce the quadratic expression x² + 2y² + 2z² + 2xy + 2xz to the canonical form.

Solution. Completing the squares: given form = (x + y + z)² + (y − z)². Put X = x + y + z, Y = y − z, Z = z to get the canonical form X² + Y². The expression is positive semi-definite.
Alternate solution: See 1981 question 1(b) for an alternate method of canonicalization.
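Note: as a quick machine check (an addition, not part of the original exam solution), sympy confirms the completed-square identity:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
Q = x**2 + 2*y**2 + 2*z**2 + 2*x*y + 2*x*z
canonical = (x + y + z)**2 + (y - z)**2   # the claimed canonical form
print(sp.expand(Q - canonical) == 0)      # True: the identity holds
```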

Question 2(a) Find the elements p, q, r such that the product BA of the matrices
$$A = \begin{pmatrix} 1 & 2 & 1 \\ 4 & 1 & 2 \\ -10 & 2 & 4 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 0 & 0 \\ p & 1 & 0 \\ q & r & 1 \end{pmatrix}$$
is of the form
$$BA = \begin{pmatrix} a_1 & b_1 & c_1 \\ 0 & b_2 & c_2 \\ 0 & 0 & c_3 \end{pmatrix}$$
Hence solve the set of equations Ax = y, where x is the column vector (x1, x2, x3) and y is the column vector (0, 8, −4).

Solution.
$$BA = \begin{pmatrix} 1 & 2 & 1 \\ p+4 & 2p+1 & p+2 \\ q+4r-10 & 2q+r+2 & q+2r+4 \end{pmatrix} = \begin{pmatrix} a_1 & b_1 & c_1 \\ 0 & b_2 & c_2 \\ 0 & 0 & c_3 \end{pmatrix}$$
Thus p + 4 = 0, q + 4r − 10 = 0, 2q + r + 2 = 0 ⇒ p = −4, r = 22/7, q = −18/7.
Now solving Ax = y is the same as solving BAx = By, because |B| ≠ 0.
$$\begin{pmatrix} 1 & 2 & 1 \\ 0 & -7 & -2 \\ 0 & 0 & \frac{54}{7} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ -4 & 1 & 0 \\ -\frac{18}{7} & \frac{22}{7} & 1 \end{pmatrix}\begin{pmatrix} 0 \\ 8 \\ -4 \end{pmatrix} = \begin{pmatrix} 0 \\ 8 \\ \frac{148}{7} \end{pmatrix}$$
Thus (54/7)x3 = 148/7 ⇒ x3 = 74/27. Then −7x2 − 2x3 = 8 ⇒ −7x2 = 2x3 + 8 = 364/27, so x2 = −52/27. Finally x1 + 2x2 + x3 = 0 ⇒ x1 = 104/27 − 74/27 = 10/9.
Thus x1 = 10/9, x2 = −52/27, x3 = 74/27 is the required solution.
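Note: an added numerical cross-check of this solution, using numpy on the same A and y:

```python
import numpy as np

A = np.array([[1, 2, 1],
              [4, 1, 2],
              [-10, 2, 4]], dtype=float)
y = np.array([0, 8, -4], dtype=float)
x = np.linalg.solve(A, y)
print(x, np.allclose(x, [10/9, -52/27, 74/27]))  # True
```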

Question 3(a) If S and T are subspaces of a finite dimensional vector space, then show
that
dim(S + T ) = dim S + dim T − dim(S ∩ T )

Solution. See 1988, question 1(b).

Question 3(b) Determine the value of a for which the following system of equations:

x1 + x2 + x3 = 2
x1 + 2x2 + x3 = −2
x1 + x2 + (a − 5)x3 = a

has (1) a unique solution (2) no solution.

Solution. $\begin{vmatrix} 1 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & a-5 \end{vmatrix} = (2(a-5) - 1) - ((a-5) - 1) + (1 - 2) = a - 6$

1. If a − 6 ≠ 0, i.e. a ≠ 6, the system has a unique solution.

2. If a = 6, the system is inconsistent, as the third equation becomes x1 + x2 + x3 = 6, which is inconsistent with the first. So there is no solution.

Paper II
Question 4(a) Prove that any two finite dimensional vector spaces of the same dimension
are isomorphic.
Solution. See 1987 question 4(b).

Question 4(b) Define the dual space of a finite dimensional vector space V and show that
it has the same dimension as V.
Solution. Let V ∗ = {f : V −→ R, f a linear transformation}. Then V ∗ is a vector space
for the usual pointwise addition and scalar multiplication of functions: for all v ∈ V and all
α ∈ R, (f + g)(v) = f (v) + g(v), (αf )(v) = αf (v).
Let v1, ..., vn be a basis for V. Define n linear functionals v1*, ..., vn* by vi*(vj) = δij, so that $v_i^*\left(\sum_{j=1}^n \alpha_j v_j\right) = \sum_{j=1}^n \alpha_j v_i^*(v_j) = \alpha_i$.
Then v1*, ..., vn* are linearly independent: $\sum_{i=1}^n \alpha_i v_i^* = 0 \Rightarrow \left(\sum_{i=1}^n \alpha_i v_i^*\right)(v_j) = \alpha_j = 0$ for 1 ≤ j ≤ n.
v1*, ..., vn* generate V*: if f ∈ V*, then $f = \sum_{i=1}^n f(v_i) v_i^*$. Clearly $\left(\sum_{i=1}^n f(v_i) v_i^*\right)(v_j) = f(v_j)$, so the two sides agree on v1, ..., vn, and hence by linearity on all of V.
Thus v1*, ..., vn* is a basis of V*, so dim V* = dim V. V* is called the dual of V.

Question 4(c) Show that every finite dimensional inner product space V over the field of
complex numbers has an orthonormal basis.
Solution. Let w1 , . . . , wn be a basis of V. We will convert it into an orthonormal basis of
V by the Gram-Schmidt orthonormalization process.
Starting with i = 1, define
$$v_i = w_i - \sum_{j=1}^{i-1} \frac{\langle w_i, v_j \rangle}{\|v_j\|^2}\, v_j$$
Each vi is non-zero, as otherwise wi could be written as a linear combination of the wj, j < i, but w1, ..., wn are linearly independent.
Now we can prove by induction on i that ⟨vi, vj⟩ = 0 for all j < i; this is enough because ⟨vi, vj⟩ is the conjugate of ⟨vj, vi⟩. Suppose it is true for all k < i. Then
$$\langle v_i, v_j \rangle = \langle w_i, v_j \rangle - \sum_{m=1}^{i-1} \frac{\langle w_i, v_m \rangle}{\|v_m\|^2}\langle v_m, v_j \rangle = \langle w_i, v_j \rangle - \frac{\langle w_i, v_j \rangle}{\|v_j\|^2}\langle v_j, v_j \rangle = 0$$
Thus v1, ..., vn are mutually orthogonal. They are linearly independent, as $\sum_{i=1}^n a_i v_i = 0 \Rightarrow \left\langle \sum_{i=1}^n a_i v_i, v_j \right\rangle = a_j\|v_j\|^2 = 0 \Rightarrow a_j = 0$ for all 1 ≤ j ≤ n. Replacing vi by vi/||vi|| gives us an orthonormal basis of V.
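Note: the process above translates directly into code. The following is a minimal added sketch (not from the original solution) that orthonormalizes the rows of a matrix with numpy:

```python
import numpy as np

def gram_schmidt(W):
    """Orthonormalize the rows of W (assumed linearly independent):
    subtract projections onto earlier vectors, then normalize."""
    V = []
    for w in W.astype(complex):
        v = w - sum(np.vdot(u, w) * u for u in V)  # each u is already unit length
        V.append(v / np.linalg.norm(v))
    return np.array(V)

B = gram_schmidt(np.array([[1.0, 1, 0], [1, 0, 1], [0, 1, 1]]))
print(np.allclose(B @ B.conj().T, np.eye(3)))  # True: rows are orthonormal
```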

Question 5(a) Define the rank and nullity of a linear transformation. If V and W are
finite dimensional vector spaces over a field, and T is a linear transformation of V into W,
prove that
rank T + nullity T = dim V
Solution. See 1998 question 3(a).

Question 5(b) Define a positive definite form. State and prove a necessary and sufficient
condition for a quadratic form to be positive definite.

Solution. See 1992 question 2(c).

Question 5(c) Show that the mapping T : R³ → R³ defined by T(x, y, z) = (x − y + 2z, 2x + y, −x − 2y + 2z) is a linear transformation. Find its nullity.

Solution.

T (ax + by) = T (ax1 + by1 , ax2 + by2 , ax3 + by3 )


= (ax1 + by1 − ax2 − by2 + 2ax3 + 2by3 , 2ax1 + 2by1 + ax2 + by2 ,
−ax1 − by1 − 2ax2 − 2by2 + 2ax3 + 2by3 )
= aT (x1 , x2 , x3 ) + bT (y1 , y2 , y3 )

Thus T is a linear transformation.
If (x, y, z) is in the null space of T, then x − y + 2z = 0, 2x + y = 0, −x − 2y + 2z = 0, which gives y = −2x, z = −3x/2. Thus the null space is {(x, −2x, −3x/2) | x ∈ R} = {x(2, −4, −3) | x ∈ R}.
Thus nullity T = 1.
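Note (added check): writing T as a matrix in the standard basis lets numpy confirm the nullity:

```python
import numpy as np

# Matrix of T; rows are the three output coordinates.
T = np.array([[1, -1, 2],
              [2, 1, 0],
              [-1, -2, 2]])
print(3 - np.linalg.matrix_rank(T))       # nullity = dim V - rank = 1
print(T @ np.array([2, -4, -3]))          # the claimed null vector maps to 0
```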

UPSC Civil Services Main 1980 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

December 27, 2009

Question 1(a) Define the rank of a matrix. Prove that a system of equations Ax = b is
consistent if and only if rank (A, b) = rank A, where (A, b) is the augmented matrix of the
system.

Solution. See 1987 question 3(a).


 
Question 1(b) Verify the Cayley-Hamilton theorem for the matrix $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$, and hence find A⁻¹.

Solution. The Cayley-Hamilton theorem states that every matrix A satisfies its characteristic equation |xI − A| = 0. In the current problem,
$$|xI - A| = \begin{vmatrix} x-2 & -1 \\ -1 & x-2 \end{vmatrix} = x^2 - 4x + 4 - 1 = x^2 - 4x + 3$$
Thus we need to show that A² − 4A + 3I = 0. Now
$$A^2 = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} = \begin{pmatrix} 5 & 4 \\ 4 & 5 \end{pmatrix}, \quad A^2 - 4A + 3I = \begin{pmatrix} 5 & 4 \\ 4 & 5 \end{pmatrix} - 4\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} + 3\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$
verifying the Cayley-Hamilton theorem.
A² − 4A + 3I = 0 ⇒ A(A − 4I) = −3I ⇒ A⁻¹ = −(1/3)(A − 4I), so
$$A^{-1} = -\frac{1}{3}\left[\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} - \begin{pmatrix} 4 & 0 \\ 0 & 4 \end{pmatrix}\right] = \begin{pmatrix} \frac{2}{3} & -\frac{1}{3} \\ -\frac{1}{3} & \frac{2}{3} \end{pmatrix}$$
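Note: an added numpy check of both the verification and the resulting inverse:

```python
import numpy as np

A = np.array([[2, 1], [1, 2]])
I = np.eye(2)
print(np.allclose(A @ A - 4*A + 3*I, 0))             # A satisfies x^2 - 4x + 3
print(np.allclose(-(A - 4*I)/3, np.linalg.inv(A)))   # inverse from the theorem
```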

Question 2(a) Prove that if P is any non-singular matrix of order n, then the matrices
P−1 AP and A have the same characteristic polynomial.

Solution. The characteristic polynomial of P−1 AP is |xI−P−1 AP| = |xP−1 P−P−1 AP| =
|P−1 ||xI − A||P| = |xI − A| which is the characteristic polynomial of A.

 
Question 2(b) Find the eigenvalues and eigenvectors of the matrix $A = \begin{pmatrix} 3 & 4 \\ 4 & -3 \end{pmatrix}$.

Solution. The characteristic equation of A is $\begin{vmatrix} 3-\lambda & 4 \\ 4 & -3-\lambda \end{vmatrix} = 0 \Rightarrow -(9 - \lambda^2) - 16 = 0 \Rightarrow \lambda^2 - 25 = 0 \Rightarrow \lambda = 5, -5$.
If (x1, x2) is an eigenvector for λ = 5, then $\begin{pmatrix} -2 & 4 \\ 4 & -8 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 0 \Rightarrow 2x_1 - 4x_2 = 0 \Rightarrow x_1 = 2x_2$. Thus (2x, x), x ∈ R, x ≠ 0 gives all eigenvectors for λ = 5; in particular, we can take (2, 1) as an eigenvector for λ = 5.
If (x1, x2) is an eigenvector for λ = −5, then $\begin{pmatrix} 8 & 4 \\ 4 & 2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 0 \Rightarrow 4x_1 + 2x_2 = 0 \Rightarrow x_2 = -2x_1$. Thus (x, −2x), x ∈ R, x ≠ 0 gives all eigenvectors for λ = −5; in particular, we can take (1, −2) as an eigenvector for λ = −5.
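Note (added): numpy reproduces these eigenvalues, and its eigenvectors are scalar multiples of (2, 1) and (1, −2):

```python
import numpy as np

A = np.array([[3.0, 4], [4, -3]])
w, V = np.linalg.eig(A)
print(np.sort(w))                        # [-5.  5.]
for lam, v in zip(w, V.T):
    print(np.allclose(A @ v, lam * v))   # True for each pair
```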
Question 3(a) Find a basis for the vector space V = {p(x) | p(x) = a0 + a1x + a2x²} and its dimension.
Solution. Let f1 = 1, f2 = x, f3 = x². Then f1, f2, f3 are linearly independent, because α1f1 + α2f2 + α3f3 = 0 ⇒ α1 + α2x + α3x² = 0 (the zero polynomial) ⇒ α1 = α2 = α3 = 0.
f1, f2, f3 generate V because p(x) = a0 + a1x + a2x² = a0f1 + a1f2 + a2f3 for any p(x) ∈ V.
Thus {f1, f2, f3} is a basis for V and its dimension is 3.
Question 3(b) Find the values of the parameter λ for which the system of equations
x + y + 4z = 1
x + 2y − 2z = 1
λx + y + z = 1
will have (i) a unique solution (ii) no solution.

Solution. The system will have the unique solution given by $\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 1 & 1 & 4 \\ 1 & 2 & -2 \\ \lambda & 1 & 1 \end{pmatrix}^{-1}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$ if
$$\begin{vmatrix} 1 & 1 & 4 \\ 1 & 2 & -2 \\ \lambda & 1 & 1 \end{vmatrix} = 1(2 + 2) + 4(1 - 2\lambda) - 1(1 + 2\lambda) \neq 0$$
Thus 4 + 4 − 8λ − 1 − 2λ ≠ 0 ⇒ λ ≠ 7/10.
When λ = 7/10, the system is
x + y + 4z = 1
x + 2y − 2z = 1
7x + 10y + 10z = 10
This system has no solution, as it is inconsistent: 4(x + y + 4z) + 3(x + 2y − 2z) = 7x + 10y + 10z = 7, but the third equation says that 7x + 10y + 10z = 10. Thus there is a unique solution if λ ≠ 7/10, and no solution if λ = 7/10.

Paper II

Question 3(c) If V is a finite dimensional inner product space and M is a subspace of V, then show that each vector x ∈ V can be uniquely expressed as x = y + z, where y ∈ M and z ∈ M⊥, the orthogonal complement of M.

Solution. Let v1, ..., vm be any orthonormal basis of M, where m = dim M. Given x ∈ V, let $y = \sum_{i=1}^m \langle x, v_i \rangle v_i$, and z = x − y. Clearly y ∈ M, and x = y + z. Now
$$\langle z, v_i \rangle = \langle x, v_i \rangle - \langle y, v_i \rangle = \langle x, v_i \rangle - \sum_{j=1}^m \langle x, v_j \rangle \langle v_j, v_i \rangle = \langle x, v_i \rangle - \langle x, v_i \rangle = 0$$
So ⟨z, vi⟩ = 0 for i = 1, ..., m, hence ⟨z, m⟩ = 0 for every m ∈ M, so z ∈ M⊥.
Now if x = y′ + z′ is another such representation, then y − y′ = z′ − z. But y − y′ ∈ M and z′ − z ∈ M⊥, so ⟨y − y′, z′ − z⟩ = 0 ⇒ ⟨y − y′, y − y′⟩ = 0 ⇒ ||y − y′|| = 0 ⇒ y − y′ = 0 ⇒ z′ − z = 0. Thus y = y′, z = z′ and the representation is unique.

Question 3(d) Find one characteristic value and corresponding characteristic vector for
the operators T on R3 defined as

1. T is a reflection on the plane x = z.

2. T is a projection on the plane z = 0.

3. T (x, y, z) = (3x + y + z, 2y + z, z).

Solution.

1. T (x, y, z) = (z, y, x) because the midpoint of (x, y, z) and (z, y, x) lies on the plane
x = z. T (1, 0, 0) = (0, 0, 1), T (0, 1, 0) = (0, 1, 0), T (0, 0, 1) = (1, 0, 0). Thus it is clear
that 1 is an eigenvalue, and (0, 1, 0) is a corresponding eigenvector.

2. T (1, 0, 0) = (1, 0, 0), T (0, 1, 0) = (0, 1, 0), T (0, 0, 1) = (0, 0, 0). Clearly 1 is an eigen-
value with (1, 0, 0) or (0, 1, 0) as eigenvectors.

3. T (1, 0, 0) = (3, 0, 0), T (0, 1, 0) = (1, 2, 0), T (0, 0, 1) = (1, 1, 1). Clearly (1, 0, 0) is an
eigenvector, corresponding to the eigenvalue 3.

UPSC Civil Services Main 1981 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

December 27, 2009

Question 1(a) State and prove the Cayley-Hamilton theorem and verify it for the matrix $A = \begin{pmatrix} 2 & 3 \\ 3 & 5 \end{pmatrix}$. Use the result to determine A⁻¹.

Solution. See 1987 question 5(a) for the Cayley-Hamilton theorem.
The characteristic equation of A is $\begin{vmatrix} x-2 & -3 \\ -3 & x-5 \end{vmatrix} = 0$, or (x − 2)(x − 5) − 9 = 0 ⇒ x² − 7x + 1 = 0. The Cayley-Hamilton theorem implies that A² − 7A + I = 0.
$$A^2 = \begin{pmatrix} 2 & 3 \\ 3 & 5 \end{pmatrix}\begin{pmatrix} 2 & 3 \\ 3 & 5 \end{pmatrix} = \begin{pmatrix} 13 & 21 \\ 21 & 34 \end{pmatrix}$$
Now
$$A^2 - 7A + I = \begin{pmatrix} 13 & 21 \\ 21 & 34 \end{pmatrix} - 7\begin{pmatrix} 2 & 3 \\ 3 & 5 \end{pmatrix} + \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$
so the theorem is verified. A² − 7A + I = 0 ⇒ A(7I − A) = I ⇒ A⁻¹ = 7I − A. Thus
$$A^{-1} = \begin{pmatrix} 7 & 0 \\ 0 & 7 \end{pmatrix} - \begin{pmatrix} 2 & 3 \\ 3 & 5 \end{pmatrix} = \begin{pmatrix} 5 & -3 \\ -3 & 2 \end{pmatrix}$$

Question 1(b) Let Q be the quadratic form
$$Q = 5x_1^2 + 5x_2^2 + 2x_3^2 + 8x_1x_2 + 4x_1x_3 + 4x_2x_3$$
By using an orthogonal change of variables, reduce Q to a form without the cross terms, i.e. without terms of the form aᵢⱼxᵢxⱼ, i ≠ j.

Solution. The matrix of the given quadratic form Q is $A = \begin{pmatrix} 5 & 4 & 2 \\ 4 & 5 & 2 \\ 2 & 2 & 2 \end{pmatrix}$.

The characteristic polynomial of A is
$$\begin{vmatrix} 5-\lambda & 4 & 2 \\ 4 & 5-\lambda & 2 \\ 2 & 2 & 2-\lambda \end{vmatrix} = 0$$
Expanding along the first row,
$$(5-\lambda)[(5-\lambda)(2-\lambda) - 4] - 4[4(2-\lambda) - 4] + 2[8 - 2(5-\lambda)] = 0$$
$$\Rightarrow (5-\lambda)(\lambda-1)(\lambda-6) + 16(\lambda-1) + 4(\lambda-1) = 0 \Rightarrow (\lambda-1)^2(\lambda-10) = 0$$
i.e. λ³ − 12λ² + 21λ − 10 = 0. Thus the eigenvalues are λ = 1, 1, 10. Let (x1, x2, x3) be an eigenvector for λ = 10; then
$$\begin{pmatrix} -5 & 4 & 2 \\ 4 & -5 & 2 \\ 2 & 2 & -8 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 0 \Rightarrow \begin{cases} -5x_1 + 4x_2 + 2x_3 = 0 & \text{(i)} \\ 4x_1 - 5x_2 + 2x_3 = 0 & \text{(ii)} \\ 2x_1 + 2x_2 - 8x_3 = 0 & \text{(iii)} \end{cases}$$
Subtracting (ii) from (i), we get −9x1 + 9x2 = 0 ⇒ x1 = x2, and then (iii) gives x1 = 2x3. Thus taking x3 = 1, we get (2, 2, 1) as an eigenvector for λ = 10.
Let (x1, x2, x3) be an eigenvector for λ = 1; then
$$\begin{pmatrix} 4 & 4 & 2 \\ 4 & 4 & 2 \\ 2 & 2 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 0 \Rightarrow 2x_1 + 2x_2 + x_3 = 0$$
Take x3 = 0, x1 = 1 ⇒ x2 = −1 to get (1, −1, 0) as an eigenvector for λ = 1. Take x1 = x2 = 1 ⇒ x3 = −4 to get (1, 1, −4) as another eigenvector for λ = 1, orthogonal to the first.
Thus
$$O = \begin{pmatrix} \frac{2}{3} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{18}} \\ \frac{2}{3} & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{18}} \\ \frac{1}{3} & 0 & -\frac{4}{\sqrt{18}} \end{pmatrix}$$
is an orthogonal matrix such that $O'AO = O^{-1}AO = \begin{pmatrix} 10 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$. If (X1, X2, X3) are new variables, then
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = O\begin{pmatrix} X_1 \\ X_2 \\ X_3 \end{pmatrix}$$
takes Q(x1, x2, x3) to 10X1² + X2² + X3².
Note: If the orthogonal transformation were not required for the diagonalization, we could do it easily by completing squares:

$$\begin{aligned} 5x_1^2 + 5x_2^2 + 2x_3^2 + 8x_1x_2 + 4x_1x_3 + 4x_2x_3 &= 5\left[x_1^2 + \tfrac{8}{5}x_1x_2 + \tfrac{4}{5}x_1x_3\right] + 5x_2^2 + 2x_3^2 + 4x_2x_3 \\ &= 5\left[x_1 + \tfrac{4}{5}x_2 + \tfrac{2}{5}x_3\right]^2 + \left(5 - \tfrac{16}{5}\right)x_2^2 + \left(2 - \tfrac{4}{5}\right)x_3^2 + \left(4 - \tfrac{16}{5}\right)x_2x_3 \\ &= 5\left[x_1 + \tfrac{4}{5}x_2 + \tfrac{2}{5}x_3\right]^2 + \tfrac{9}{5}\left(x_2^2 + \tfrac{4}{9}x_2x_3\right) + \tfrac{6}{5}x_3^2 \\ &= 5\left[x_1 + \tfrac{4}{5}x_2 + \tfrac{2}{5}x_3\right]^2 + \tfrac{9}{5}\left[x_2 + \tfrac{2}{9}x_3\right]^2 + \tfrac{10}{9}x_3^2 \\ &= 5X^2 + \tfrac{9}{5}Y^2 + \tfrac{10}{9}Z^2 \end{aligned}$$
where X = x1 + (4/5)x2 + (2/5)x3, Y = x2 + (2/9)x3, Z = x3; equivalently x3 = Z, x2 = Y − (2/9)Z, x1 = X − (4/5)Y − (2/9)Z.
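Note: as an added cross-check, numpy's symmetric eigendecomposition reproduces the diagonal form:

```python
import numpy as np

A = np.array([[5.0, 4, 2], [4, 5, 2], [2, 2, 2]])
w, O = np.linalg.eigh(A)   # symmetric matrix: orthonormal eigenvectors
print(np.round(w, 10))                        # eigenvalues 1, 1, 10
print(np.allclose(O.T @ A @ O, np.diag(w)))   # O'AO is diagonal
```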

Question 2(a) Define a vector space. Show that the set V of all real-valued functions on
[0, 1] is a vector space over the set of real numbers with respect to the addition and scalar
multiplication of functions.

Solution. See 1984 question 4(a).

Question 2(b) If zero is a root of the characteristic equation of a matrix A, show that the
corresponding linear transformation cannot be one to one.

Solution. If zero is a root of |A − λI| = 0, the characteristic equation of A, then 0 is an eigenvalue of A, so there is a non-zero eigenvector x such that Ax = 0; thus A is not one-to-one.

Question 2(c) Show that a linear transformation T from a Euclidean space V to V is


orthogonal if and only if the matrix corresponding to it with respect to any orthonormal basis
is orthogonal.

Solution. T : V −→ V is said to be orthogonal if hT(u), T(v)i = hu, vi for any u, v ∈ V.


Lemma 1. T is orthogonal iff T takes an orthonormal basis to an orthonormal basis.
Proof: Let {v1 , v2 , . . . , vn } be an orthonormal basis. Then

1. hT(vi ), T(vj )i = hvi , vj i = 0 if i 6= j

2. hT(vi ), T(vi )i = hvi , vi i = 1

3. If $\sum_{i=1}^n \alpha_i T(v_i) = 0$, then $\left\langle \sum_{i=1}^n \alpha_i T(v_i), T(v_j) \right\rangle = \alpha_j = 0$ for all j, so the T(vi) are linearly independent.

Thus T(v1), ..., T(vn) form an orthonormal basis.
Conversely, let {v1, ..., vn} be an orthonormal basis such that T(v1), ..., T(vn) is also an orthonormal basis of V. Let $v = \sum_{i=1}^n \alpha_i v_i$, $w = \sum_{i=1}^n \beta_i v_i$; then $\langle v, w \rangle = \sum_{i=1}^n \alpha_i \beta_i$ and $\langle T(v), T(w) \rangle = \left\langle \sum_{i=1}^n \alpha_i T(v_i), \sum_{i=1}^n \beta_i T(v_i) \right\rangle = \sum_{i=1}^n \alpha_i \beta_i$. Thus ⟨T(v), T(w)⟩ = ⟨v, w⟩, so T is orthogonal.
Lemma 2. Let T* be defined by ⟨T(v), w⟩ = ⟨v, T*(w)⟩. Then T* is a linear transformation, and T is orthogonal iff T*T = TT* = I.
Proof: The fact that T* is a linear transformation can be easily checked. If T is orthogonal, then ⟨v, T*T(w)⟩ = ⟨T(v), T(w)⟩ = ⟨v, w⟩, so T*T = I. From this and the fact that T is 1-1, it follows that TT* = I.
Lemma 3. If the matrix of T w.r.t. the orthonormal basis {v1, v2, ..., vn} is A = (aij), then the matrix of T* is the transpose, i.e. (aji).
Proof: $T(v_i) = \sum_{j=1}^n a_{ij}v_j$. Let $T^*(v_i) = \sum_{j=1}^n b_{ij}v_j$. Now $b_{ij} = \langle T^*(v_i), v_j \rangle = \langle v_i, T(v_j) \rangle = \left\langle v_i, \sum_{k=1}^n a_{jk}v_k \right\rangle = a_{ji}$. Since TT* = I, we get A′A = AA′ = I, so A is orthogonal. The converse is also obvious now.

Question 3(a) Investigate for what values of λ and µ does the following system of equations

x+y+z = 6
x + 2y + 3z = 10
x + 2y + λz = µ

have (1) a unique solution (2) no solution (3) an infinite number of solutions?

Solution.
1. A unique solution exists when $\begin{vmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 2 & \lambda \end{vmatrix} \neq 0$, whatever μ may be. Thus 2λ − 6 − (λ − 3) ≠ 0 ⇒ λ ≠ 3. Thus for all λ ≠ 3 and for all μ we have a unique solution, given by
$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 2 & \lambda \end{pmatrix}^{-1}\begin{pmatrix} 6 \\ 10 \\ \mu \end{pmatrix}$$
2. A unique solution does not exist if λ = 3. If μ ≠ 10, then the second and third equations are inconsistent. Thus if λ = 3, μ ≠ 10, the system has no solution.
3. If λ = 3, μ = 10, then the system is x + y + z = 6, x + 2y + 3z = 10. The coefficient matrix is of rank 2, so the space of solutions is one dimensional. y + 2z = 4 ⇒ y = 4 − 2z, and thus x = 2 + z. The space of solutions is (2 + z, 4 − 2z, z) for z ∈ R.

Question 3(b) Let (xi , yi ), i = 1, . . . , n be n points in the plane, no two of them having the
same abscissa. Find a polynomial f (x) of degree n − 1 which takes the value f (xi ) = yi , 1 ≤
i ≤ n.

Solution. Let f(x) = a0 + a1x + ... + a_{n−1}xⁿ⁻¹. We want to determine a0, ..., a_{n−1} such that
$$A\begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \end{pmatrix} = \begin{pmatrix} 1 & x_1 & \ldots & x_1^{n-1} \\ 1 & x_2 & \ldots & x_2^{n-1} \\ \vdots & & & \vdots \\ 1 & x_n & \ldots & x_n^{n-1} \end{pmatrix}\begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}$$
This is possible as |A| ≠ 0 (A is a Vandermonde matrix and x1, ..., xn are distinct), so
$$\begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \end{pmatrix} = A^{-1}\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}$$
Note: We can also use Lagrange's interpolation formula from numerical analysis, giving
$$f(x) = \sum_{i=1}^n y_i \frac{\prod_{j \neq i}(x - x_j)}{\prod_{j \neq i}(x_i - x_j)}$$
The two methods give the same polynomial, which is unique.
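Note: a minimal added sketch of Lagrange's formula in Python; the sample points are illustrative only:

```python
def lagrange(points, x):
    """Evaluate the degree n-1 interpolating polynomial at x,
    using Lagrange's formula above."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

pts = [(0, 1), (1, 3), (2, 11)]            # fits f(x) = 3x^2 - x + 1
print([lagrange(pts, x) for x, _ in pts])  # reproduces the y values
```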

Paper II
 
Question 4(a) Find a set of three orthonormal eigenvectors for the matrix $A = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 4 & \sqrt{3} \\ 0 & \sqrt{3} & 6 \end{pmatrix}$.

Solution. The characteristic equation of A is
$$|A - \lambda I| = \begin{vmatrix} 3-\lambda & 0 & 0 \\ 0 & 4-\lambda & \sqrt{3} \\ 0 & \sqrt{3} & 6-\lambda \end{vmatrix} = 0$$
Thus (3 − λ)[(4 − λ)(6 − λ) − 3] = 0, so λ = 3 or λ² − 10λ + 21 = 0. Thus the eigenvalues of A are 3, 3, 7.
Let (x1, x2, x3) be an eigenvector for λ = 7. Then
$$\begin{pmatrix} -4 & 0 & 0 \\ 0 & -3 & \sqrt{3} \\ 0 & \sqrt{3} & -1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 0$$
Thus −4x1 = 0, −3x2 + √3 x3 = 0, √3 x2 − x3 = 0, so x1 = 0, x3 = √3 x2 with x2 ≠ 0 gives any eigenvector for λ = 7. Take x2 = 1 to get (0, 1, √3), and normalize it to get (0, 1/2, √3/2).

Let (x1, x2, x3) be an eigenvector for λ = 3. Then
$$\begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & \sqrt{3} \\ 0 & \sqrt{3} & 3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 0$$
Thus x2 + √3 x3 = 0, so (x1, −√3 x3, x3) with x1, x3 ∈ R gives any eigenvector for λ = 3. We can take x1 = 1, x3 = 0, and x1 = 0, x3 = 1 to get (1, 0, 0), (0, −√3, 1) as eigenvectors for λ = 3; these are orthogonal and therefore span the eigenspace of λ = 3. Orthonormal vectors are (1, 0, 0), (0, −√3/2, 1/2).
Thus the required orthonormal vectors are (0, 1/2, √3/2), (1, 0, 0), (0, −√3/2, 1/2). In fact
$$\begin{pmatrix} 0 & \frac{1}{2} & \frac{\sqrt{3}}{2} \\ 1 & 0 & 0 \\ 0 & -\frac{\sqrt{3}}{2} & \frac{1}{2} \end{pmatrix}\begin{pmatrix} 3 & 0 & 0 \\ 0 & 4 & \sqrt{3} \\ 0 & \sqrt{3} & 6 \end{pmatrix}\begin{pmatrix} 0 & 1 & 0 \\ \frac{1}{2} & 0 & -\frac{\sqrt{3}}{2} \\ \frac{\sqrt{3}}{2} & 0 & \frac{1}{2} \end{pmatrix} = \begin{pmatrix} 7 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{pmatrix}$$

Question 4(b) Show that if x′Ax and x′Bx are two quadratic forms, one of which is positive definite, and A, B are symmetric matrices, then they can be expressed as linear combinations of squares by an appropriate linear transformation.

Solution. Let B be positive definite. Then there exists a real non-singular matrix H such that H′BH = Iₙ, the unit matrix of order n. A is real symmetric, so H′AH is real symmetric. There exists a real orthogonal matrix K such that K′H′AHK is a diagonal matrix, i.e.
$$K'H'AHK = \begin{pmatrix} \lambda_1 & 0 & \ldots & 0 \\ 0 & \lambda_2 & \ldots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \ldots & \lambda_n \end{pmatrix}$$
where λ1, ..., λn are the eigenvalues of H′AH.
Now K′H′BHK = K′IₙK = Iₙ. Then the substitution
$$\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = HK\begin{pmatrix} X_1 \\ \vdots \\ X_n \end{pmatrix}$$
diagonalizes A, B simultaneously:
$$x'Ax = \lambda_1X_1^2 + \ldots + \lambda_nX_n^2, \qquad x'Bx = X_1^2 + \ldots + X_n^2$$
Note that λ1, ..., λn are the roots of |A − λB| = 0, because $|H'||A - \lambda B||H| = |H'AH - \lambda H'BH| = |H'AH - \lambda I_n| = \prod_{i=1}^n (\lambda_i - \lambda)$.

UPSC Civil Services Main 1982 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

December 27, 2009

Question 1(a) Let V be a vector space. If dim V = n with n > 0, prove that

1. any set of n linearly independent vectors is a basis of V.

2. V cannot be generated by fewer than n vectors.

Solution. From 1983 question 1(a) we get that any two bases of V have n elements.

1. Let v1 , . . . , vn be n linearly independent vectors in V. Then v1 , . . . , vn generate V —


if v ∈ V is such that v is not a linear combination of v1 , . . . , vn , then v, v1 , . . . , vn are
linearly independent, so dim V > n which is not true. Thus v1 , . . . , vn is a basis of V
— here we have used the technique used to complete any linearly independent set to
a basis.

2. V cannot be generated by fewer than n vectors, because then it will have a basis
consisting of less than n elements, which contradicts the fact that dim V = n.

Question 1(b) Define a linear transformation. Prove that both the range and the kernel of
a linear transformation are vector spaces.

Solution. Let V and W be two vector spaces. A mapping T : V −→ W is said to be a


linear transformation if

1. T(v1 + v2 ) = T(v1 ) + T(v2 ).

2. T(αv) = αT(v) for any α ∈ R, v ∈ V.

Range of T = T(V); kernel of T = {v | T(v) = 0}. If w1, w2 ∈ T(V), then w1 = T(v1), w2 = T(v2) for some v1, v2 ∈ V, and αw1 + βw2 = αT(v1) + βT(v2) = T(αv1 + βv2). But αv1 + βv2 ∈ V, so αw1 + βw2 ∈ T(V); thus T(V) is a subspace of W. Note that T(V) ≠ ∅, since 0 ∈ T(V), so T(V) is a vector space.
If v1, v2 ∈ kernel T, then T(v1) = 0, T(v2) = 0. Now T(αv1 + βv2) = αT(v1) + βT(v2) = 0 ⇒ αv1 + βv2 ∈ kernel T. Thus kernel T is a subspace; kernel T ≠ ∅ since 0 ∈ kernel T, so kernel T is a vector space.

Question 2(a) Reduce the matrix
$$\begin{pmatrix} 2 & 3 & -1 & 0 \\ 1 & -1 & 2 & 0 \\ 1 & 2 & -1 & 0 \end{pmatrix}$$
to row echelon form.

Solution. Let the given matrix be called A.
Operation R1 − R2: $A \sim \begin{pmatrix} 1 & 4 & -3 & 0 \\ 1 & -1 & 2 & 0 \\ 1 & 2 & -1 & 0 \end{pmatrix}$
Operations R2 − R1, R3 − R1: $A \sim \begin{pmatrix} 1 & 4 & -3 & 0 \\ 0 & -5 & 5 & 0 \\ 0 & -2 & 2 & 0 \end{pmatrix}$
Operations −(1/5)R2, −(1/2)R3: $A \sim \begin{pmatrix} 1 & 4 & -3 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 1 & -1 & 0 \end{pmatrix}$
Operation R3 − R2: $A \sim \begin{pmatrix} 1 & 4 & -3 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$
Operation R1 − 4R2: $A \sim \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$
Thus rank A = 2 and the row echelon form is $\begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$.
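Note (added): sympy's rref reproduces the reduced form and the rank:

```python
import sympy as sp

A = sp.Matrix([[2, 3, -1, 0],
               [1, -1, 2, 0],
               [1, 2, -1, 0]])
R, pivots = A.rref()   # reduced row echelon form
print(R)               # matches the form derived above
print(len(pivots))     # rank = 2
```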

Question 2(b) If V is a vector space of dimension n and T is a linear transformation on


V of rank r, prove that T has nullity n − r.

Solution. See 1998 question 3(a).

Question 2(c) Show that the system of equations

3x + y − 5z = −1
x − 2y + z = −5
x + 5y − 7z = 2

is inconsistent.

Solution. From the first two equations, (3x + y − 5z) − 2(x − 2y + z) = −1 − 2(−5) = 9, i.e. x + 5y − 7z = 9. But this is inconsistent with the third equation, hence the overall system is inconsistent.

Question 3(a) Prove that the trace of a matrix is equal to the sum of its characteristic
roots.

Solution. The characteristic polynomial of A is |λI − A| = λⁿ + p1λⁿ⁻¹ + p2λⁿ⁻² + ... + pₙ. The sum of the roots of |λI − A| = 0 is −p1, and expanding the determinant shows −p1 = a11 + a22 + ... + aₙₙ = tr A. Thus the trace of A equals the sum of the eigenvalues of A.

Question 3(b) If A, B are two non-singular matrices of the same order, prove that AB
and BA have the same eigenvalues.

Solution. See 1995 question 2(b).


 
Question 3(c) Find the eigenvalues and eigenvectors of the matrix $A = \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}$.

Solution. The characteristic equation of A is (cos θ − λ)(−cos θ − λ) − sin²θ = 0 ⇒ λ² − 1 = 0 ⇒ λ = ±1.
If (x1, x2) is an eigenvector for λ = 1, then
$$\begin{pmatrix} \cos\theta - 1 & \sin\theta \\ \sin\theta & -\cos\theta - 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 0$$
Thus x1(cos θ − 1) + x2 sin θ = 0, x1 sin θ + x2(−cos θ − 1) = 0. We can take x1 = 1 + cos θ, x2 = sin θ.
Similarly, if (x1, x2) is an eigenvector for λ = −1, then
$$\begin{pmatrix} \cos\theta + 1 & \sin\theta \\ \sin\theta & -\cos\theta + 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 0$$
Thus x1(cos θ + 1) + x2 sin θ = 0, x1 sin θ + x2(−cos θ + 1) = 0. We can take x1 = 1 − cos θ, x2 = −sin θ as an eigenvector.

Paper II

Question 4(a) If V is finite dimensional and if W is a subspace of V, then show that W


is finite dimensional and dim W ≤ dim V.

Solution. If W = {0} then dim W = 0 ≤ dim V. If W ≠ {0}, let v1 ∈ W, v1 ≠ 0. Let W1 be the space spanned by v1; then W1 is of dimension 1. If W1 = W, then dim W = 1 ≤ dim V.
If W1 ≠ W, then there exists v2 ∈ W, v2 ∉ W1. v1, v2 are linearly independent: if av1 + bv2 = 0 with b ≠ 0, then v2 = −(a/b)v1 ⇒ v2 ∈ W1, which is not true; hence b = 0 ⇒ a = 0. Now let W2 be the space spanned by v1, v2; then W2 is of dimension 2. If W2 = W, then dim W = 2 ≤ dim V.
We continue the same reasoning, but this process must stop after at most r steps where r ≤ n, otherwise we would have found n + 1 linearly independent vectors in V, which is not possible. After r steps we have v1, ..., vᵣ, which are linearly independent and span W. Thus dim W ≤ dim V, and W is finite dimensional.

Question 5(a) State and prove the Cayley-Hamilton Theorem when the eigenvalues are all
different.

Solution. See 1987 question 5(a).

Question 5(b) When are two real symmetric matrices said to be congruent? Is congruence
an equivalence relation? Also show how you can find the signature of A.

Solution. Two matrices A, B are said to be congruent if there exists a nonsingular matrix P such that P′AP = B.
Congruence is an equivalence relation:
• Reflexive: A ≡ A, since A = I′AI, where I is the unit matrix.
• Symmetric: A ≡ B ⇒ P′AP = B ⇒ A = (P⁻¹)′BP⁻¹ ⇒ B ≡ A.
• Transitive: A ≡ B, B ≡ C ⇒ A ≡ C; indeed P′AP = B, Q′BQ = C ⇒ Q′P′APQ = C ⇒ A ≡ C, because PQ is nonsingular as both P, Q are nonsingular.
Given a symmetric matrix A, we first prove that there exists a nonsingular matrix P such that P′AP = diag[α1, α2, ..., αᵣ, 0, ..., 0], where r is the rank of A.
We prove this by induction on the order n of the matrix A. If n = 1, there is nothing to prove. Assume the result is true for all matrices of order < n.
Step 1. We first ensure that a11 ≠ 0. If it is 0, but some other aₖₖ ≠ 0, we interchange the k-th row with the first row and the k-th column with the first column, to get B = P′AP, where b11 = aₖₖ ≠ 0. Note that P is the elementary matrix E₁ₖ (see 1983 question 2(a)), and is hence nonsingular and symmetric, so B is symmetric.
If all aᵢᵢ are 0, but some aᵢⱼ ≠ 0, we add the j-th row to the i-th row and the j-th column to the i-th column by multiplying A by Eᵢⱼ(1) and its transpose, to get B = Eᵢⱼ(1)AEᵢⱼ(1)′; now bᵢᵢ = aᵢⱼ + aⱼᵢ ≠ 0. B is still symmetric, and we can shift bᵢᵢ to the leading place as above. (If all aᵢⱼ = 0, we stop.)
Thus we may start with a11 ≠ 0. We subtract a₁ₖ/a11 times the first row from the k-th row and a₁ₖ/a11 times the first column from the k-th column, by forming B = Eₖ₁(−a₁ₖ/a11) A Eₖ₁(−a₁ₖ/a11)′. Repeating this for all k, 2 ≤ k ≤ n, we get $P_1'AP_1 = \begin{pmatrix} a_{11} & 0 \\ 0 & A_1 \end{pmatrix}$, where A1 is (n − 1) × (n − 1) and P1 is nonsingular. Now by induction there exists an (n − 1) × (n − 1) matrix P2 such that P2′A1P2 = diag[β2, ..., βᵣ, 0, ..., 0], with rank A1 = rank A − 1. Now set $P = P_1\begin{pmatrix} 1 & 0 \\ 0 & P_2 \end{pmatrix}$ to get the result.
Now that we have P′AP = diag[α1, α2, ..., αᵣ, 0, ..., 0], let us assume that α1, ..., αₛ are positive and the rest are negative. Let αᵢ = βᵢ² for 1 ≤ i ≤ s, and −αⱼ = βⱼ² for s + 1 ≤ j ≤ r. Set Q = diag[β1⁻¹, ..., βᵣ⁻¹, 1, ..., 1]. Then x′Q′P′APQx = x1² + ... + xₛ² − xₛ₊₁² − ... − xᵣ². Thus we can find the signature of A by counting the numbers of positive and negative squares on the RHS.

Question 5(c) Derive a set of necessary and sufficient conditions that the real quadratic form $\sum_{j=1}^3\sum_{i=1}^3 a_{ij}x_ix_j$ be positive definite.
Is 4x² + 9y² + 2z² + 8yz + 6zx + 6xy positive definite?

Solution. For the first part, see 1992 question 2(c).
$$\begin{aligned} Q(x, y, z) &= 4x^2 + 9y^2 + 2z^2 + 8yz + 6zx + 6xy \\ &= \left(2x + \tfrac{3}{2}y + \tfrac{3}{2}z\right)^2 + 9y^2 + 2z^2 + 8yz - \tfrac{9}{2}yz - \tfrac{9}{4}y^2 - \tfrac{9}{4}z^2 \\ &= \left(2x + \tfrac{3}{2}y + \tfrac{3}{2}z\right)^2 + \tfrac{27}{4}y^2 - \tfrac{1}{4}z^2 + \tfrac{7}{2}yz \\ &= \left(2x + \tfrac{3}{2}y + \tfrac{3}{2}z\right)^2 + \tfrac{27}{4}\left(y^2 + \tfrac{14}{27}yz\right) - \tfrac{1}{4}z^2 \\ &= \left(2x + \tfrac{3}{2}y + \tfrac{3}{2}z\right)^2 + \tfrac{27}{4}\left(y + \tfrac{7}{27}z\right)^2 - \tfrac{1}{4}z^2 - \tfrac{49}{108}z^2 \end{aligned}$$
So set X = 2x + (3/2)y + (3/2)z, Y = y + (7/27)z, Z = z; then Q(x, y, z) is transformed to X² + (27/4)Y² − (76/108)Z². Hence Q(x, y, z) is not positive definite.
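Note: an added sympy check of this conclusion, using the symmetric matrix of Q:

```python
import sympy as sp

# Symmetric matrix of Q = 4x^2 + 9y^2 + 2z^2 + 8yz + 6zx + 6xy
A = sp.Matrix([[4, 3, 3],
               [3, 9, 4],
               [3, 4, 2]])
print(A.is_positive_definite)                 # False
print([A[:k, :k].det() for k in (1, 2, 3)])   # minors 4, 27, -19: the test fails
```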

UPSC Civil Services Main 1983 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

December 27, 2009

Question 1(a) Let V be a finitely generated vector space. Show that V has a finite basis
and any two bases of V have the same number of vectors.
Solution. Let {v1, ..., vₘ} be a generating set for V; we assume vᵢ ≠ 0, 1 ≤ i ≤ m. If {v1, ..., vₘ} is linearly independent, then it is a basis of V. Otherwise, there exists a vₖ that depends linearly on {vᵢ | 1 ≤ i ≤ m, i ≠ k}. This latter set is also a generating set, and we rename it {u1, ..., uₘ₋₁}. We now apply the same reasoning to it: either it is linearly independent and hence a basis, or we can drop an element from it and it still remains a generating set. In a finite number of steps, we reach {x1, ..., xᵣ} ⊆ {v1, ..., vₘ} such that {x1, ..., xᵣ} is linearly independent and a generating set; thus {x1, ..., xᵣ} is a basis of V.
Note: An alternative approach leading to the same result is to pick a maximal linearly independent subset of {v1, ..., vₘ}. There are only 2ᵐ subsets, so we can do this in a finite number of steps (in the above procedure we dropped dependent elements one at a time to reach a maximal linearly independent subset). Now, to be a basis, the maximal linearly independent subset S = {x1, ..., xᵣ} ⊆ {v1, ..., vₘ} needs to generate V. But this is immediate, as for each vᵢ, either vᵢ ∈ S or S ∪ {vᵢ} is linearly dependent; in the latter case $\sum_{j=1}^r a_jx_j + bv_i = 0$, where not all aⱼ, b are 0. Now if b = 0 then $\sum_{j=1}^r a_jx_j = 0$ ⇒ aⱼ = 0 for 1 ≤ j ≤ r, as S is linearly independent, contradicting that not all aⱼ, b are 0. So b ≠ 0, hence vᵢ is a linear combination of S; hence S generates V and is a basis.
Any two bases have the same number of elements: Let {v1, ..., vₘ} and {w1, ..., wₙ} be two bases of V. Assume wlog that m ≤ n. Since w1 ∈ V, w1 is generated by the basis {v1, ..., vₘ}, thus $w_1 = \sum_{j=1}^m a_jv_j$. There must be at least one non-zero aₖ, as w1 ≠ 0. Now the set {vᵢ | 1 ≤ i ≤ m, i ≠ k} ∪ {w1} generates the set {v1, ..., vₘ} (since $v_k = \frac{1}{a_k}w_1 - \sum_{j=1, j\neq k}^m \frac{a_j}{a_k}v_j$) and hence generates V.
Now we have $w_2 = \sum_{i=1, i\neq k}^m a_iv_i + bw_1$. At least one of the aᵢ ≠ 0, otherwise we would have a linear relation between w1 and w2, but these are linearly independent. We replace vᵢ by w2,
and the result is also a generating set as above. Continuing, after m steps we get a subset {w1, ..., wₘ} which is a generating set. Now if n > m, we would have $w_n = \sum_{i=1}^m a_iw_i$, but this is not possible as the wᵢ form a basis and are thus linearly independent. Hence n = m, and the two bases have an equal number of elements.

Question 1(b) Let V be the vector space of polynomials of degree ≤ 3. Determine whether the following vectors of V are linearly dependent or independent: u = t³ − 3t² + 5t + 1, v = t³ − t² + 8t + 2, w = 2t³ − 4t² + 9t + 5.

Solution. Let au + bv + cw = 0. Then

a + b + 2c = 0 (1)
−3a − b − 4c = 0 (2)
5a + 8b + 9c = 0 (3)
a + 2b + 5c = 0 (4)

From (4) - (1) we get b + 3c = 0. Substituting b = −3c in (2), c = −3a ⇒ b = 9a. Now from
(1), a + 9a − 6a = 0 ⇒ a = 0 ⇒ b = c = 0. Thus au + bv + cw = 0 ⇒ a = b = c = 0, so the
vectors are linearly independent.

Question 1(c) For any linear transformation T : V1 → V2 prove that

rank T ≤ min(dim V1 , dim V2 )

Solution. By definition, rank T = dim T(V1). Clearly T(V1) is a subspace of V2, and therefore dim T(V1) ≤ dim V2. Let v1, ..., vn be a basis of V1; then T(V1) is generated by T(v1), ..., T(vn): if w ∈ T(V1), then there exists v ∈ V1 such that T(v) = w. But $v = \sum_{i=1}^n a_iv_i$, aᵢ ∈ R, therefore $w = T(v) = \sum_{i=1}^n a_iT(v_i)$, so {T(v1), ..., T(vn)} is a generating system for T(V1), whence dim T(V1) ≤ n. Thus rank T = dim T(V1) ≤ min(dim V1, dim V2).

Question 2(a) Show that every non-singular matrix can be expressed as a product of ele-
mentary matrices.

Solution. We first list all the elementary matrices:

1. Eᵢⱼ = the matrix obtained by interchanging the i-th and j-th rows (or the i-th and j-th columns) of the unit matrix. For example, if n = 4, then
$$E_{23} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
2. Ei (α) is the matrix obtained by multiplying the i-th row of the unit matrix by α =
the matrix obtained by multiplying the i-th column of the unit matrix by α.

3. Eij (β) = the matrix obtained by adding β times the j-th row to the i-th row of the
unit matrix.

4. (Eij (β))0 = transpose of Eij (β) = the matrix obtained by adding β times the j-th
column to the i-th column of the unit matrix.
All elementary matrices are non-singular. In fact |Eij | = −1, |Ei (α)| = α, |Eij (β)| =
|(Eij (β))0 | = 1.
We now prove the result.
(1) Let C = AB. Then any elementary row transformation of AB is equivalent to subjecting A to the same row transformation. Let $A = \begin{pmatrix} R_1 \\ \vdots \\ R_m \end{pmatrix}_{m \times n}$ and $B = \begin{pmatrix} C_1 & \ldots & C_p \end{pmatrix}_{n \times p}$. Then
$$AB = \begin{pmatrix} R_1C_1 & \ldots & R_1C_p \\ \vdots & & \vdots \\ R_mC_1 & \ldots & R_mC_p \end{pmatrix}_{m \times p}$$
Thus if any elementary row transformation, i.e. (i) interchanging two rows, (ii) multiplying a row by a scalar, (iii) adding a scalar multiple of one row to another, is carried out on A, the same is carried out on AB, and vice versa. Similarly, any column transformation on B is equivalent to the same column transformation on AB.
(2) Multiplying by an elementary matrix Eij , Ei (α), Eij (β) on the left is the same as per-
forming the corresponding elementary row operation on the matrix. Multiplying the matrix
by an elementary matrix to the right is equal to subjecting the matrix to the correspond-
ing column transformation. We write A = IA. Now interchanging the i-th and j-th row
of A is equivalent to doing the same on I in IA (result (1) above), which is the same as
Eij A. Similar results hold for the other two row transformations. Writing A as AI gives
the corresponding result for column transformations.
(3) We now prove that if A is a matrix of rank r > 0, then there exist P, Q, products of elementary matrices, such that $PAQ = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}$, where Iᵣ is the unit matrix of order r. Since A ≠ 0, A has at least one non-zero element, say aᵢⱼ. By interchanging the i-th row with the first row and the j-th column with the first column, we get a new matrix B = (bᵢⱼ) such that b11 ≠ 0. This simply means that there exist elementary matrices P1, Q1 such that P1AQ1 = B. We multiply P1AQ1 by P2 = E1(b11⁻¹) to obtain
$$P_2P_1AQ_1 = C = \begin{pmatrix} 1 & * & \ldots & * \\ * & & & \\ \vdots & & * & \\ * & & & \end{pmatrix}$$
Subtracting suitable multiples of the first row from the remaining rows of C, and suitable multiples of the first column from the remaining columns, we get a new matrix D of the form $\begin{pmatrix} 1 & 0 \\ 0 & A^* \end{pmatrix}$. Thus we have proved that there exist P*, Q*, products of elementary matrices, such that $P^*AQ^* = \begin{pmatrix} 1 & 0 \\ 0 & A^* \end{pmatrix}$. We carry on the same process on A* without affecting the first row and column, and in r steps we get $P^{**}AQ^{**} = \begin{pmatrix} I_r & 0 \\ 0 & E \end{pmatrix}$, where P**, Q** are products of elementary matrices. Note that E = 0 because rank A = r.
Now if A is nonsingular, then P**AQ** = I. Inverting the elementary matrices (the inverse of an elementary matrix is elementary), we get that A is a product of elementary matrices.

Question 2(b) Reduce the matrix A to its normal form, and hence or otherwise determine its rank.
$$A = \begin{pmatrix} 0 & 1 & 2 & 1 \\ 1 & 2 & 3 & 2 \\ 3 & 1 & 1 & 3 \end{pmatrix}$$
Solution. Interchange of R1 and R2: $A \sim \begin{pmatrix} 1 & 2 & 3 & 2 \\ 0 & 1 & 2 & 1 \\ 3 & 1 & 1 & 3 \end{pmatrix}$
R3 − 3R1: $A \sim \begin{pmatrix} 1 & 2 & 3 & 2 \\ 0 & 1 & 2 & 1 \\ 0 & -5 & -8 & -3 \end{pmatrix}$
R3 + 5R2: $A \sim \begin{pmatrix} 1 & 2 & 3 & 2 \\ 0 & 1 & 2 & 1 \\ 0 & 0 & 2 & 2 \end{pmatrix}$
(1/2)R3: $A \sim \begin{pmatrix} 1 & 2 & 3 & 2 \\ 0 & 1 & 2 & 1 \\ 0 & 0 & 1 & 1 \end{pmatrix}$
C2 − 2C1, C3 − 3C1, C4 − 2C1: $A \sim \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 2 & 1 \\ 0 & 0 & 1 & 1 \end{pmatrix}$
C3 − 2C2, C4 − C2: $A \sim \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix}$. C4 − C3: $A \sim \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}$
Thus we have P (3 × 3) and Q (4 × 4), both products of elementary matrices, such that $PAQ = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}$, which is the normal form of A. Clearly the rank of A is 3.
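Note (added): numpy confirms the rank directly:

```python
import numpy as np

A = np.array([[0, 1, 2, 1],
              [1, 2, 3, 2],
              [3, 1, 1, 3]])
print(np.linalg.matrix_rank(A))  # 3, agreeing with the normal form
```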

Question 2(c) Show that the equations
x + y + z = 3
3x − 5y + 2z = 8
5x − 3y + 4z = 14
are consistent and solve them.

Solution. The coefficient matrix $A = \begin{pmatrix} 1 & 1 & 1 \\ 3 & -5 & 2 \\ 5 & -3 & 4 \end{pmatrix}$.
det A = 1(−20 + 6) − 1(12 − 10) + 1(−9 + 25) = 0, thus rank A < 3. Actually rank A = 2, since $\begin{vmatrix} 1 & 1 \\ 3 & -5 \end{vmatrix} \neq 0$.
The augmented matrix $B = \begin{pmatrix} 1 & 1 & 1 & 3 \\ 3 & -5 & 2 & 8 \\ 5 & -3 & 4 & 14 \end{pmatrix}$.
R2 − 3R1, R3 − 5R1: $B \sim \begin{pmatrix} 1 & 1 & 1 & 3 \\ 0 & -8 & -1 & -1 \\ 0 & -8 & -1 & -1 \end{pmatrix}$
R3 − R2: $B \sim \begin{pmatrix} 1 & 1 & 1 & 3 \\ 0 & -8 & -1 & -1 \\ 0 & 0 & 0 & 0 \end{pmatrix}$
Thus rank B = 2, because $\begin{vmatrix} 1 & 1 \\ 0 & -8 \end{vmatrix} \neq 0$.
Since rank A = rank B = 2, the system is consistent, and the space of solutions has dimension 1.
Now x + y = 3 − z, 3x − 5y = 8 − 2z; subtracting the second from 3 times the first, we get 8y = 1 − z ⇒ y = (1 − z)/8, and x = 3 − z − (1 − z)/8 = (23 − 7z)/8. Thus the solutions are given by ((23 − 7z)/8, (1 − z)/8, z), z ∈ R.

Question 3(a) Prove that a square matrix satisfies its characteristic equation. Use this result to find the inverse of
$$A = \begin{pmatrix} 0 & 1 & 2 \\ 1 & 2 & 3 \\ 3 & 1 & 1 \end{pmatrix}$$
Solution. The first part is the Cayley-Hamilton theorem, see 1987 question 5(a).
The characteristic equation of A is
$$|A - \lambda I| = \begin{vmatrix} -\lambda & 1 & 2 \\ 1 & 2-\lambda & 3 \\ 3 & 1 & 1-\lambda \end{vmatrix} = 0$$
$$\Rightarrow -\lambda(\lambda^2 - 3\lambda + 2 - 3) - (1 - \lambda - 9) + 2(1 - 6 + 3\lambda) = 0 \Rightarrow \lambda^3 - 3\lambda^2 - 8\lambda + 2 = 0$$
Thus A³ − 3A² − 8A + 2I = 0 ⇒ A(A² − 3A − 8I) = −2I, or A⁻¹ = −(1/2)(A² − 3A − 8I).
$$A^2 = \begin{pmatrix} 0 & 1 & 2 \\ 1 & 2 & 3 \\ 3 & 1 & 1 \end{pmatrix}\begin{pmatrix} 0 & 1 & 2 \\ 1 & 2 & 3 \\ 3 & 1 & 1 \end{pmatrix} = \begin{pmatrix} 7 & 4 & 5 \\ 11 & 8 & 11 \\ 4 & 6 & 10 \end{pmatrix}$$
$$\therefore A^{-1} = -\frac{1}{2}\left[\begin{pmatrix} 7 & 4 & 5 \\ 11 & 8 & 11 \\ 4 & 6 & 10 \end{pmatrix} - \begin{pmatrix} 0 & 3 & 6 \\ 3 & 6 & 9 \\ 9 & 3 & 3 \end{pmatrix} - \begin{pmatrix} 8 & 0 & 0 \\ 0 & 8 & 0 \\ 0 & 0 & 8 \end{pmatrix}\right] = \frac{1}{2}\begin{pmatrix} 1 & -1 & 1 \\ -8 & 6 & -2 \\ 5 & -3 & 1 \end{pmatrix}$$
Note: In this case, we were required to use this method to find the inverse. An alternate method of finding the inverse by performing elementary row and column operations is shown in 1985 question 1(c).
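Note: an added numpy check that A satisfies its characteristic equation and that the resulting inverse is correct:

```python
import numpy as np

A = np.array([[0.0, 1, 2], [1, 2, 3], [3, 1, 1]])
I = np.eye(3)
A2 = A @ A
print(np.allclose(A @ A2 - 3*A2 - 8*A + 2*I, 0))           # theorem verified
print(np.allclose(-(A2 - 3*A - 8*I)/2, np.linalg.inv(A)))  # inverse matches
```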
Question 3(b) Find the eigenvalues and eigenvectors of
$$A = \begin{pmatrix} 8 & -6 & 2 \\ -6 & 7 & -4 \\ 2 & -4 & 3 \end{pmatrix}$$
Solution.
$$|A - xI| = \begin{vmatrix} 8-x & -6 & 2 \\ -6 & 7-x & -4 \\ 2 & -4 & 3-x \end{vmatrix} = 0$$
$$\Rightarrow (8-x)(x^2 - 10x + 21 - 16) + 6(6x - 18 + 8) + 2(24 - 14 + 2x) = 0 \Rightarrow x^3 - 18x^2 + 45x = 0$$
Thus the eigenvalues are 0, 3, 15.
If (x1, x2, x3) is an eigenvector for the eigenvalue 0, then $\begin{pmatrix} 8 & -6 & 2 \\ -6 & 7 & -4 \\ 2 & -4 & 3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 0$. Thus 8x1 − 6x2 + 2x3 = 0, −6x1 + 7x2 − 4x3 = 0, 2x1 − 4x2 + 3x3 = 0 ⇒ x1 = (1/2)x3, x2 = x3. Thus (1, 2, 2) is an eigenvector for 0; in general (x/2, x, x), x ≠ 0, is an eigenvector for 0.
If (x1, x2, x3) is an eigenvector for the eigenvalue 3, then $\begin{pmatrix} 5 & -6 & 2 \\ -6 & 4 & -4 \\ 2 & -4 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 0$. Thus 5x1 − 6x2 + 2x3 = 0, −6x1 + 4x2 − 4x3 = 0, 2x1 − 4x2 = 0 ⇒ x1 = 2x2, x3 = −2x2. Thus (2, 1, −2) is an eigenvector for 3; in general (2x, x, −2x), x ≠ 0, is an eigenvector for 3.
If (x1, x2, x3) is an eigenvector for the eigenvalue 15, then $\begin{pmatrix} -7 & -6 & 2 \\ -6 & -8 & -4 \\ 2 & -4 & -12 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 0$. Thus −7x1 − 6x2 + 2x3 = 0, −6x1 − 8x2 − 4x3 = 0, 2x1 − 4x2 − 12x3 = 0 ⇒ x1 = 2x3, x2 = −2x3. Thus (2, −2, 1) is an eigenvector for 15; in general (2x, −2x, x), x ≠ 0, is an eigenvector for 15.
Question 3(c) Show that the eigenvalues of an upper or lower triangular matrix are just
the diagonal elements of the matrix.
Solution. Let A = (aᵢⱼ) with aᵢⱼ = 0 for i > j, i.e. A is upper triangular. Then
$$|xI - A| = (x - a_{11})(x - a_{22})\cdots(x - a_{nn})$$
showing that |xI − A| = 0 ⇒ x = a11, a22, ..., aₙₙ. Thus the eigenvalues of A are a11, a22, ..., aₙₙ.
Similarly for a lower triangular matrix.
Paper II
Question 4(a) Prove that a necessary and sufficient condition that a linear transformation A on a unitary space be Hermitian is that ⟨Ax, x⟩ is real for all x.
Solution. A unitary space is an old name for an inner product space. Let V be an inner product space over C, and let ⟨Av, v⟩ be real for all v ∈ V. Since
$$\langle A(v + w), v + w \rangle = \langle Av, v \rangle + \langle Aw, w \rangle + \langle Av, w \rangle + \langle Aw, v \rangle$$
⟨Av, w⟩ + ⟨Aw, v⟩ is real (because ⟨A(v + w), v + w⟩ − ⟨Av, v⟩ − ⟨Aw, w⟩ is real). Hence
$$\langle Av, w \rangle + \langle Aw, v \rangle = \langle w, Av \rangle + \langle v, Aw \rangle \qquad (1)$$
because z real ⇒ z = z̄.
Also,
$$\langle A(v + iw), v + iw \rangle = \langle Av, v \rangle + \langle A(iw), iw \rangle - i\langle Av, w \rangle + i\langle Aw, v \rangle$$
thus −i⟨Av, w⟩ + i⟨Aw, v⟩ is real. Taking conjugates,
$$-i\langle Av, w \rangle + i\langle Aw, v \rangle = i\langle w, Av \rangle - i\langle v, Aw \rangle \qquad (2)$$
Multiplying (1) by i and adding to (2), we get 2i⟨Aw, v⟩ = 2i⟨w, Av⟩, i.e. ⟨Aw, v⟩ = ⟨w, Av⟩ for all v, w. Thus A = A*, so A is Hermitian.
Conversely, if ⟨Aw, v⟩ = ⟨w, Av⟩ for all v, w, then ⟨Av, v⟩ = ⟨v, Av⟩, which is the conjugate of ⟨Av, v⟩, so ⟨Av, v⟩ is real.

Question 4(b) If A is a linear transformation on an n-dimensional vector space, then prove
that

1. rank A = rank A′.

2. nullity A = n − rank A.

Solution. We know that rank A = r if A has a minor of order r different from 0 and all minors of order > r are 0. Since the minors of A′ are the transposes of the minors of A, and a determinant is unchanged by transposition, rank A = rank A′.
For the second part, see 1998 question 3(a).

Question 4(c) Show that a real symmetric matrix A is positive definite if and only if there
exists a real non-singular matrix P such that A = PP0 .

Solution. If A = PP′, then x′Ax = x′PP′x = (P′x)′(P′x), a sum of squares, which is > 0 for x ≠ 0 because P′x ≠ 0 as P is non-singular. So A is positive definite.
Conversely: Let x1, x2, ..., xn be a basis of Rⁿ. We use it to construct a new basis e1, e2, ..., en which satisfies eᵢ′Aeⱼ = δᵢⱼ, as follows:
$$e_1 = \frac{x_1}{\sqrt{x_1'Ax_1}}, \qquad y_i = x_i - \sum_{j=1}^{i-1}(x_i'Ae_j)e_j, \qquad e_i = \frac{y_i}{\sqrt{y_i'Ay_i}} \quad (i = 2, \ldots, n)$$
e1, e2, ..., en are linearly independent: if $\sum_{i=1}^n a_ie_i = 0$, take the largest i such that aᵢ ≠ 0; this allows us to express xᵢ in terms of the other basis vectors, which is not possible. Inductively we can also verify that eᵢ′Aeᵢ = 1 and eᵢ′Aeⱼ = 0 if i ≠ j. This is the Gram-Schmidt orthonormalization process: we exploit the fact that a positive definite matrix A gives rise to an inner product ⟨x, y⟩ = x′Ay.
Now consider the matrix Q = (e1 e2 ... en). If B = Q′AQ then bᵢⱼ = eᵢ′Aeⱼ, thus B = Iₙ. Since Q consists of linearly independent columns, it is invertible, and thus A = (Q′)⁻¹Q⁻¹. Setting P = (Q′)⁻¹, we have A = PP′.

Question 5(a) If S is a skew symmetric matrix of order n and if I + S is non-singular,


then prove that A = (I − S)(I + S)−1 is an orthogonal matrix of order n.

Solution. See 1999, question 2(b).

Question 5(b) Under what circumstances will the real n × n matrix
$$A = \begin{pmatrix} x & a & a & \ldots & a \\ a & x & a & \ldots & a \\ a & a & x & \ldots & a \\ \vdots & & & \ddots & \vdots \\ a & a & a & \ldots & x \end{pmatrix}$$
be (1) positive semidefinite (2) positive definite?

Solution. The eigenvalues of the given matrix can be computed as follows. In |A − λI| = 0, subtract the first column from each of the other columns, and then replace the first row by the sum of all the rows:
$$\begin{vmatrix} x-\lambda & a-x+\lambda & \ldots & a-x+\lambda \\ a & x-\lambda-a & \ldots & 0 \\ \vdots & & \ddots & \vdots \\ a & 0 & \ldots & x-\lambda-a \end{vmatrix} = 0 \Rightarrow \begin{vmatrix} x-\lambda+(n-1)a & 0 & \ldots & 0 \\ a & x-\lambda-a & \ldots & 0 \\ \vdots & & \ddots & \vdots \\ a & 0 & \ldots & x-\lambda-a \end{vmatrix} = 0$$
$$\Rightarrow (x - \lambda + (n-1)a)(x - \lambda - a)^{n-1} = 0$$
Thus the eigenvalues are x − a (repeated n − 1 times) and x + (n − 1)a. For positive definiteness, every λ > 0, i.e. x > a and x > −(n − 1)a. If a > 0, this reduces to x > a; if a ≤ 0, this reduces to x > −(n − 1)a.
For positive semidefiniteness, λ ≥ 0. By the same reasoning, if a > 0, then x ≥ a; otherwise x ≥ −(n − 1)a.
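Note (added): a numerical spot-check of the eigenvalue formula, for the sample values n = 4, x = 2, a = 0.5:

```python
import numpy as np

n, x, a = 4, 2.0, 0.5
A = (x - a) * np.eye(n) + a * np.ones((n, n))
print(np.round(np.linalg.eigvalsh(A), 10))  # x - a three times, x + (n-1)a once
```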

UPSC Civil Services Main 1984 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

December 27, 2009

Question 1(a) If W1 , W2 are finite dimensional subspaces of a vector space V, then show
that W1 + W2 is finite dimensional and dim W1 + dim W2 = dim(W1 + W2 ) + dim(W1 ∩ W2 ).

Solution. See 1988, question 1(b).

Question 1(b) If A and B are n-rowed square non-zero matrices such that AB = 0, then
show that both A and B are singular. If both A and B are singular, and AB = 0, does it
imply that BA = 0? Justify your answer.

Solution. If A were non-singular, then A⁻¹AB = 0 ⇒ B = 0. Thus A is singular, and similarly B is singular.
AB = 0 does not imply that BA = 0. Let $A = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}, B = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$. Then $AB = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$, but $BA = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \neq 0$.

Question 2(a) Show that row-equivalent matrices have the same rank.

Solution. See 1986 question 3(b).

Question 2(b) A linear transformation T on a vector space V with finite basis α1, α2, ..., αn is non-singular if and only if the vectors T(α1), T(α2), ..., T(αn) are linearly independent in V. When this is the case, show that T has an inverse T⁻¹ with TT⁻¹ = T⁻¹T = I.

Solution. If T(α1), T(α2), ..., T(αn) are linearly independent, then T is one-one: let v ∈ V. Then $v = \sum_{i=1}^n a_i\alpha_i$, $T(v) = \sum_{i=1}^n a_iT(\alpha_i)$. If T(v) = 0, then $\sum_{i=1}^n a_iT(\alpha_i) = 0$ ⇒ aᵢ = 0, 1 ≤ i ≤ n, because T(α1), ..., T(αn) are linearly independent. Thus T(v) = 0 ⇒ v = 0, so T is one-one.
T is onto: dim T(V) = dim V = n.
Thus T is invertible; in fact T⁻¹(T(αᵢ)) = αᵢ.
T⁻¹ is a linear transformation: let T⁻¹(v) = u, T⁻¹(w) = x. Then T(u) = v, T(x) = w. Let T⁻¹(av + bw) = z; then T(z) = av + bw = aT(u) + bT(x) = T(au + bx) ⇒ z = au + bx. Thus T⁻¹(av + bw) = aT⁻¹(v) + bT⁻¹(w), so T⁻¹ is linear. It is obvious that TT⁻¹ = T⁻¹T = I, as this is true for the basis elements by definition and extends to all vectors by linearity.
Conversely, if T is non-singular, then $a_1T(\alpha_1) + a_2T(\alpha_2) + \ldots + a_nT(\alpha_n) = 0 \Rightarrow T\left(\sum_{i=1}^n a_i\alpha_i\right) = 0 \Rightarrow \sum_{i=1}^n a_i\alpha_i = 0$ ⇒ aᵢ = 0, 1 ≤ i ≤ n, because α1, α2, ..., αn are linearly independent. Thus T(α1), T(α2), ..., T(αn) are linearly independent.

Question 2(c) Solve the following system of equations:


3x1 + 2x2 + 2x3 − 5x4 = 8
2x1 + 5x2 + 5x3 − 18x4 = 9
4x1 − x2 − x3 + 8x4 = 7

Solution. Let the coefficient matrix be
$$A = \begin{pmatrix} 3 & 2 & 2 & -5 \\ 2 & 5 & 5 & -18 \\ 4 & -1 & -1 & 8 \end{pmatrix}$$
Doubling R1 and subtracting R2 + R3, we get
$$A \sim \begin{pmatrix} 0 & 0 & 0 & 0 \\ 2 & 5 & 5 & -18 \\ 4 & -1 & -1 & 8 \end{pmatrix}$$
Thus the rank of A is 2.
The augmented matrix
$$B = \begin{pmatrix} 3 & 2 & 2 & -5 & 8 \\ 2 & 5 & 5 & -18 & 9 \\ 4 & -1 & -1 & 8 & 7 \end{pmatrix} \sim \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 2 & 5 & 5 & -18 & 9 \\ 4 & -1 & -1 & 8 & 7 \end{pmatrix}$$
Thus the rank of B is also 2, so the system is consistent.
Since the rank of A is 2, the space of solutions has dimension 4 − 2 = 2. Adding twice the third equation to the first we get 11x1 + 11x4 = 22 ⇒ x1 = 2 − x4. Substituting this in the third equation, we get x2 = 1 − x3 + 4x4. Thus the required solution system is (2 − x4, 1 − x3 + 4x4, x3, x4), where x3, x4 take any values in R.

Question 3(a) Let V and W be vector spaces over the field F , and let T be a linear trans-
formation from V to W. If V is finite dimensional show that rank T + nullity T = dim V.

Solution. See question 1(a) from 1992.

Question 3(b) Let A be a square matrix and T be non-singular. Let Ã = T⁻¹AT. Show that
1. A and Ã have the same eigenvalues.
2. tr A = tr Ã.
3. If x is an eigenvector of A corresponding to an eigenvalue, then T⁻¹x is an eigenvector of Ã corresponding to the same eigenvalue.

Solution.
1. The eigenvalues of A are roots of |xI − A| = 0. The eigenvalues of Ã are roots of 0 = |xI − T⁻¹AT| = |T⁻¹xIT − T⁻¹AT| = |T⁻¹||xI − A||T| = |xI − A|, so the eigenvalues are the same.
2. tr AB = tr BA, so tr T⁻¹AT = tr ATT⁻¹ = tr A.
3. If Ax = λx, then T⁻¹AT(T⁻¹x) = T⁻¹Ax = T⁻¹(λx) = λT⁻¹x.

Question 3(c) A 3 × 3 matrix has the eigenvalues 6, 2, −1. The corresponding eigenvectors are (2, 3, −2), (9, 5, 4), (4, 4, −1). Find the matrix.

Solution. Let $P = \begin{pmatrix} 2 & 9 & 4 \\ 3 & 5 & 4 \\ -2 & 4 & -1 \end{pmatrix}$ (eigenvectors as columns), and let A be the required matrix. Then AP = P diag(6, 2, −1), therefore A = P diag(6, 2, −1) P⁻¹. A simple calculation gives $P^{-1} = \begin{pmatrix} -21 & 25 & 16 \\ -5 & 6 & 4 \\ 22 & -26 & -17 \end{pmatrix}$; note that |P| = 1. Now
$$P\begin{pmatrix} 6 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -1 \end{pmatrix} = \begin{pmatrix} 12 & 18 & -4 \\ 18 & 10 & -4 \\ -12 & 8 & 1 \end{pmatrix}$$
Thus
$$A = \begin{pmatrix} 12 & 18 & -4 \\ 18 & 10 & -4 \\ -12 & 8 & 1 \end{pmatrix}\begin{pmatrix} -21 & 25 & 16 \\ -5 & 6 & 4 \\ 22 & -26 & -17 \end{pmatrix} = \begin{pmatrix} -430 & 512 & 332 \\ -516 & 614 & 396 \\ 234 & -278 & -177 \end{pmatrix}$$
A longer way would be to set $A = \begin{pmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ z_1 & z_2 & z_3 \end{pmatrix}$; then AP = P diag(6, 2, −1) yields three systems of linear equations which have to be solved.
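Note (added check): numpy reconstructs A from the same eigendata:

```python
import numpy as np

P = np.array([[2.0, 9, 4], [3, 5, 4], [-2, 4, -1]])  # eigenvectors as columns
D = np.diag([6.0, 2, -1])
A = P @ D @ np.linalg.inv(P)
print(np.round(A))                 # [[-430 512 332] [-516 614 396] [234 -278 -177]]
print(np.allclose(A @ P, P @ D))   # AP = PD confirms all three eigenpairs
```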

Paper II

Question 4(a) Let V be the set of all functions from a non-empty set X into a field K. For any functions f, g ∈ V and any scalar k ∈ K, let f + g and kf be the functions in V defined by (f + g)(x) = f(x) + g(x), (kf)(x) = kf(x) for every x ∈ X. Prove that V is a vector space over K.

Solution. V = {f | f : X → K}.
1. f, g ∈ V ⇒ (f + g)(x) = f(x) + g(x) ⇒ f + g : X → K ⇒ f + g ∈ V.
2. The zero function, namely 0(x) = 0 for all x ∈ X, is the additive identity of V, i.e. f + 0 = 0 + f = f for all f ∈ V.
3. f ∈ V ⇒ −f ∈ V, where (−f)(x) = −f(x) and f + (−f) = 0 = (−f) + f.
4. (f + g) + h = f + (g + h) for every f, g, h ∈ V.
5. If f ∈ V, k ∈ K, then (kf)(x) = kf(x) for all x ∈ X, so kf ∈ V, and k(f + g) = kf + kg.
6. If k, k′ ∈ K, f ∈ V, then k(k′f) = (kk′)f.
7. If 1 ∈ K is the multiplicative identity, then 1f = f for every f.
8. k, k′ ∈ K, f ∈ V ⇒ (k + k′)f(x) = kf(x) + k′f(x) ⇒ (k + k′)f = kf + k′f.
Thus V is a vector space over K.

Question 4(b) Find the eigenvalues and a basis for each eigenspace of the matrix
$$A = \begin{pmatrix} 1 & -3 & 3 \\ 3 & -5 & 3 \\ 6 & -6 & 4 \end{pmatrix}$$

Solution. The characteristic equation of A is
$$|A - \lambda I| = \begin{vmatrix} 1-\lambda & -3 & 3 \\ 3 & -5-\lambda & 3 \\ 6 & -6 & 4-\lambda \end{vmatrix} = 0$$
which, on expanding, simplifies to λ³ − 12λ − 16 = 0, i.e. (λ − 4)(λ + 2)² = 0.
Thus λ = 4, −2, −2. Let (x1, x2, x3) be an eigenvector for λ = 4:
$$\begin{pmatrix} -3 & -3 & 3 \\ 3 & -9 & 3 \\ 6 & -6 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 0$$
Thus −3x1 − 3x2 + 3x3 = 0, 3x1 − 9x2 + 3x3 = 0, 6x1 − 6x2 = 0 ⇒ x1 = x2, x3 = 2x1. We can take (1, 1, 2) as an eigenvector corresponding to λ = 4.
Let (x1, x2, x3) be an eigenvector for λ = −2:
$$\begin{pmatrix} 3 & -3 & 3 \\ 3 & -3 & 3 \\ 6 & -6 & 6 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 0$$
Thus x1 − x2 + x3 = 0 ⇒ x2 = x1 + x3. (1, 1, 0), (0, 1, 1) can be taken as eigenvectors for λ = −2.
Clearly (1, 1, 2) is a basis for the eigenspace for λ = 4, and (1, 1, 0), (0, 1, 1) is a basis for the eigenspace for λ = −2.
Question 4(c) Let a vector space V have finite dimension, let W be a subspace of V, and let W⁰ be the annihilator of W. Prove that dim W + dim W⁰ = dim V.

Solution. Let dim V = n, dim W = m, W ⊆ V. Let {v1, ..., vn} be a basis of V so chosen that {v1, ..., vm} is a basis of W. Let {v1*, ..., vn*} be the dual basis of V*, i.e. vᵢ*(vⱼ) = δᵢⱼ. We shall show that W⁰ has {v*ₘ₊₁, ..., vₙ*} as a basis.
By definition of the dual basis, vⱼ*(vᵢ) = 0 when 1 ≤ i ≤ m and m + 1 ≤ j ≤ n. Since the vⱼ*, m + 1 ≤ j ≤ n, annihilate the basis of W, it follows that vⱼ*(w) = 0 for all w ∈ W. Thus {v*ₘ₊₁, ..., vₙ*} ⊆ W⁰, and these are linearly independent, being a subset of a linearly independent set.
Let f ∈ W⁰; then $f = \sum_{i=1}^n a_iv_i^*$. We shall show that aᵢ = 0 for 1 ≤ i ≤ m, so that f is a linear combination of {v*ₘ₊₁, ..., vₙ*}. By definition of W⁰, f(v1) = 0, ..., f(vm) = 0, therefore $\left(\sum_{i=1}^n a_iv_i^*\right)(v_j) = \sum_{i=1}^n a_i\delta_{ij} = a_j = 0$ when 1 ≤ j ≤ m. Thus {v*ₘ₊₁, ..., vₙ*} is a basis of W⁰, hence dim W⁰ = n − m, and so dim W + dim W⁰ = n = dim V.
Question 5(a) Prove that every matrix satisfies its characteristic equation.
Solution. See 1987 question 3(a).
Question 5(b) Find a necessary and sufficient condition that the real quadratic form $\sum_{i=1}^n\sum_{j=1}^n a_{ij}x_ix_j$ be positive definite.

Solution. See 1991 question 1(c) and 1992 question 1(c).


Question 5(c) Prove that the rank of the product of two matrices cannot exceed the rank
of either of them.
Solution. See 1987 question 1(b).

UPSC Civil Services Main 1985 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

December 27, 2009

Question 1(a) If W1 and W2 are finite dimensional subspaces of a vector space V, then
show that W1 + W2 is finite dimensional and

dim W1 + dim W2 = dim(W1 + W2 ) + dim(W1 ∩ W2 )

Solution. See 1988 question 1(b).


       
Question 1(b) Let $M_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, M_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, M_3 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, M_4 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$. Prove that the set {M1, M2, M3, M4} forms a basis of the vector space of 2 × 2 matrices.

Solution. See 2006 question 1(a).




Question 1(c) Find the inverse of the matrix $A = \begin{pmatrix} 1 & 3 & 3 \\ 1 & 4 & 3 \\ 1 & 3 & 4 \end{pmatrix}$.

Solution.
$$\begin{pmatrix} 1 & 3 & 3 \\ 1 & 4 & 3 \\ 1 & 3 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}A$$
Using operations R2 − R1, R3 − R1, we get
$$\begin{pmatrix} 1 & 3 & 3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix}A$$
Now with operation R1 − 3(R2 + R3) we get
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 7 & -3 & -3 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix}A$$
Thus the inverse of A is $\begin{pmatrix} 7 & -3 & -3 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix}$.

Question 2(a) If T : V −→ W is a linear transformation from an n-dimensional vector


space V to a vector space W, then prove that rank(T ) + nullity(T ) = n.

Solution. See 1992 question 1(a).

Question 2(b) Consider the basis S = {v1 , v2 , v3 } of R3 , where v1 = (1, 1, 1), v2 =


(1, 1, 0), v3 = (1, 0, 0). Express (2, −3, 5) in terms of v1 , v2 , v3 . Let T : R3 −→ R2 be
defined as T (v1 ) = (1, 0), T (v2 ) = (2, −1), T (v3 ) = (4, 3). Find T (2, −3, 5).

Solution. Let (2, −3, 5) = a(1, 1, 1) + b(1, 1, 0) + c(1, 0, 0). Then a + b + c = 2, a + b =


−3, a = 5 ⇒ a = 5, b = −8, c = 5. Thus (2, −3, 5) = 5v1 − 8v2 + 5v3 .
T (2, −3, 5) = 5T (v1 ) − 8T (v2 ) + 5T (v3 ) = 5(1, 0) − 8(2, −1) + 5(4, 3) = (9, 23).
 
Question 2(c) Reduce the following matrix into echelon form: $\begin{pmatrix} 6 & 3 & -4 \\ -4 & 1 & -6 \\ 1 & 2 & -5 \end{pmatrix}$.

Solution. $A = \begin{pmatrix} 6 & 3 & -4 \\ -4 & 1 & -6 \\ 1 & 2 & -5 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & -5 \\ -4 & 1 & -6 \\ 6 & 3 & -4 \end{pmatrix}$ by exchanging R1 and R3.
Now R2 + 4R1, R3 − 6R1: $A \sim \begin{pmatrix} 1 & 2 & -5 \\ 0 & 9 & -26 \\ 0 & -9 & 26 \end{pmatrix}$
R3 + R2: $A \sim \begin{pmatrix} 1 & 2 & -5 \\ 0 & 9 & -26 \\ 0 & 0 & 0 \end{pmatrix}$
Multiply R2 by 1/9 to get $A \sim \begin{pmatrix} 1 & 2 & -5 \\ 0 & 1 & -\frac{26}{9} \\ 0 & 0 & 0 \end{pmatrix}$
Now R1 − 2R2: $A \sim \begin{pmatrix} 1 & 0 & \frac{7}{9} \\ 0 & 1 & -\frac{26}{9} \\ 0 & 0 & 0 \end{pmatrix}$, which is the required form.

Question 3(a) Show that if λ is an eigenvalue of a matrix A, then λⁿ is an eigenvalue of Aⁿ, where n is a positive integer.
Solution. If x is an eigenvector for λ, then Aⁿx = Aⁿ⁻¹Ax = λAⁿ⁻¹x. Repeating this process, we get Aⁿx = λⁿx, which is the result.
Question 3(b) Determine if the vectors (1, −2, 1), (2, 1, −1), (7, −4, 1) are linearly indepen-
dent in R3 .
Solution. If possible, let a(1, −2, 1) + b(2, 1, −1) + c(7, −4, 1) = 0. Then a + 2b + 7c =
0, −2a + b − 4c = 0, a − b + c = 0. Adding the last two we get a = −3c, and from the third
we then get b = −2c. These values satisfy the first equation also, hence letting c = −1 we
get 3(1, −2, 1) + 2(2, 1, −1) − (7, −4, 1) = 0. Thus the vectors are linearly dependent.
Question 3(c) Solve
2x1 + 3x2 + x3 = 9 (1)
x1 + 2x2 + 3x3 = 6 (2)
3x1 + x2 + 2x3 = 8 (3)

Solution. 2(2) − (1) ⇒ x2 + 5x3 = 3 ⇒ x2 = 3 − 5x3. Substituting x2 in (2), x1 = 7x3. Now substituting x1, x2 in (3), we get 21x3 + 3 − 5x3 + 2x3 = 8 ⇒ x3 = 5/18, x2 = 29/18, x1 = 35/18, which is the required solution.
(Using Cramer's rule would have been lengthy.)
Paper II
Question 4(a) Let V be the vector space of all functions from R into R. Let Ve be the
subset of all even functions f, f (−x) = f (x), and Vo be the subset of all odd functions
f, f (−x) = −f (x). Prove that
1. Ve and Vo are subspaces of V
2. Ve + Vo = V
3. Ve ∩ Vo = {0}
Solution.
1. Let f, g ∈ Ve; then αf + βg ∈ Ve for all α, β ∈ R, because (αf + βg)(−x) = αf(−x) + βg(−x) = αf(x) + βg(x) = (αf + βg)(x). Thus Ve is a subspace of V. Similarly, if f, g ∈ Vo, then αf + βg ∈ Vo for all α, β ∈ R, because (αf + βg)(−x) = αf(−x) + βg(−x) = −αf(x) − βg(x) = −(αf + βg)(x). Thus Vo is a subspace of V.
2. Let f(x) ∈ V. Define F(x) = (f(x) + f(−x))/2, G(x) = (f(x) − f(−x))/2. Then F(−x) = F(x) ⇒ F ∈ Ve, G(−x) = −G(x) ⇒ G ∈ Vo, and f(x) = F(x) + G(x). Thus Ve + Vo = V.
3. If f ∈ Ve ∩ Vo, then f(−x) = f(x) since f ∈ Ve, and f(−x) = −f(x) since f ∈ Vo. Thus 2f(−x) = 0 for all x ∈ R, so f = 0 ⇒ Ve ∩ Vo = {0}.

Question 4(b) Find the dimension and a basis of the solution space S of the system
x1 + 2x2 + 2x3 − x4 + 3x5 = 0
x1 + 2x2 + 3x3 + x4 + x5 = 0
3x1 + 6x2 + 8x3 + x4 + 5x5 = 0

Solution.
$$A = \begin{pmatrix} 1 & 2 & 2 & -1 & 3 \\ 1 & 2 & 3 & 1 & 1 \\ 3 & 6 & 8 & 1 & 5 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & 2 & -1 & 3 \\ 1 & 2 & 3 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$$
by performing R3 − R1 − 2R2.
Thus rank A < 3. Actually rank A = 2, because if A = (C1, C2, C3, C4, C5), where the Cᵢ are columns, then C1 and C3 are linearly independent.
Adding the first two equations, we get 4x5 = −2x1 − 4x2 − 5x3. Subtracting 3 times the second from the first, we get 4x4 = −2x1 − 4x2 − 7x3. From these we can see that X1 = (2, 0, 0, −1, −1), X2 = (0, 1, 0, −1, −1), X3 = (0, 0, 4, −7, −5) are three independent solutions. Since rank A = 2, the dimension of the solution space S is 5 − 2 = 3, and {X1, X2, X3} is a basis.

Question 4(c) Let W1 and W2 be subspaces of a finite dimensional vector space V. Prove
that (W1 + W2 )0 = W10 ∩ W20 .

Solution. Let V ∗ be the dual of V i.e. V ∗ = {f | f : V −→ R, f linear}. Then


W 0 = {f | f ∈ V ∗ , ∀w ∈ W.f (w) = 0}. W 0 is a vector subspace of V ∗ of dimension
dim V − dim W.
If W1 ⊆ W2 , then W20 ⊆ W10 , because if f ∈ W20 , f (w) = 0∀w ∈ W2 , and therefore
f (w) = 0∀w ∈ W1 , so f ∈ W10 .
Now W1 ⊆ W1 + W2 and W2 ⊆ W1 + W2 , so (W1 + W2 )0 ⊆ W10 and (W1 + W2 )0 ⊆ W20 ,
thus (W1 + W2 )0 ⊆ W10 ∩ W20 .
Conversely, if f ∈ W10 ∩ W20 , then f (w1 ) = 0, f (w2 ) = 0 for all w1 ∈ W1 , w2 ∈ W2 . Now
any w ∈ W1 + W2 is of the form w = w1 + w2 , so f (w) = f (w1 ) + f (w2 ) = 0, because f is
linear. Thus f ∈ (W1 + W2 )0 .
Thus (W1 + W2 )0 = W10 ∩ W20 .
 
1 1+i 2i
Question 5(a) Let H = 1 − i 4 2 − 3i. Find P so that P0 HP is diagonal. Find
−2i 2 + 3i 7
the signature of H.

Solution.      
1 1+i 2i 1 0 0 1 0 0
1 − i 4 2 − 3i = 0 1 0 H 0 1 0
−2i 2 + 3i 7 0 0 1 0 0 1

4
Subtracting (1 − i)R1 from R2 , and adding 2iR1 to R3 , we get
     
1 1 + i 2i 1 0 0 1 0 0
0 2 −5i = −1 + i 1 0 H 0 1 0
0 5i 3 2i 0 1 0 0 1

Subtracting (1 + i)C1 from C2 , and adding −2iC1 to C3 , we get


     
1 0 0 1 0 0 1 −1 − i −2i
0 2 −5i = −1 + i 1 0 H 0 1 0 
0 5i 3 2i 0 1 0 0 1

Subtracting 25 iR2 from R3 , and adding 52 iC2 to C3 we get

1 −1 − i 52 − 92 i
     
1 0 0 1 0 0
0 2 0  = −1 + i 1 0 H 0 5
1 2
i 
19 5 9 5
0 0 −2 2
+ 2i −2i 1 0 0 1

1 −1 + i 52 + 29 i
 

Thus P = 0 1 − 52 i 
0 0 1
Index = Number of positive entries = 2. Signature = Number of positive entries - Number
of negative entries = 1.

Question 5(b) Prove that every matrix is a root of its characteristic polynomial.

Solution. This is the Cayley Hamilton theorem, proved in Question 5(a), 1987.

Question 5(c) If B = AP, where P is nonsingular and A orthogonal, show that PB−1 is
orthogonal.

Solution. B−1 = P−1 A−1 , so PB−1 = PP−1 A−1 = A−1 . Now (A−1 )0 A−1 = (AA0 )−1 = I.
Similarly A−1 (A−1 )0 = (A0 A)−1 = I, so PB−1 is orthogonal.

5
UPSC Civil Services Main 1986 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

December 27, 2009

Question 1(a) If A, B, C are three n × n matrices, show that A(BC) = (AB)C. Show by
an example that matrix multiplication is non-commutative.

Solution. Let A = (aij P),nB = (bij ), C = (cij ). Let


Pn BC = (βij ), AB = (αij ). Then the ij-th
element
P of P the RHS = k=1 αik ckj . But αik = l=1 ail blk , so the ij-th element of the RHS
= nk=1 nl=1 ail blk ckj . Pn Pn Pn
Similarly, the ij-th element of the LHS = l=1 ail βlj = l=1 ail k=1 blk ckj . Thus
A(BC) = (AB)C.        
1 1 0 0 1 1 0 0 0 1
Let A = ,B = , then AB = = . But BA =
  0 1  0 1 0 1 0 1 0 1
0 0 1 1 0 0
= . Thus AB 6= BA.
0 1 0 1 0 1

Question 1(b) Examine the correctness or otherwise of the following statements:

1. The division law is not valid in matrix algebra.

2. If A, B are square matrices each of order n, and I is the corresponding unit matrix,
then the equation
AB − BA = I
can never hold.

Solution.
       
1 0 0 0 0 0 0 0
1. True. Let A = ,B = ,C = . Then AB = = AC, but
0 0 0 1 0 0 0 0
B 6= C.

1
2. True. We have proved in 1987 question 5(c) that AB and BA have the same eigen-
values. Trace of AB − BA = trace of AB - trace of BA = sum of the eigenvalues of
AB - sum of the eigenvalues of BA = 0. But trace of In = n, thus AB − BA = I can
never hold.

Question 1(c) Find a 3 × 3 matrix X such that


   
1 2 −2 1 0 1
−1 3 0  X = 0 1 0
0 −2 1 0 1 1

1 2 −2
Solution. |A| = −1 3 0 = 3 − 2(−1) − 2(2) = 1, so A is non-singular. Hence
 0 −2 1  
1 0 1 3 2 6
−1
X=A 0 1 0. Asimple calculation gives A−1 = 1 1 2. Thus
0 1 1 2 2 5
    
3 2 6 1 0 1 3 8 9
X = 1 1 2 0 1 0 = 1 3 3
2 2 5 0 1 1 2 7 7

Question 2(a) If M, N are two subspaces of a vector space S, then show that their dimen-
sions satisfy
dim M + dim N = dim (M ∩ N ) + dim (M + N )

Solution. See 1998 question 1(b).

Question 2(b) Find a maximal linearly independent subsystem of the system of vectors
v1 = (2, −2, −4), v2 = (1, 9, 3), v3 = (−2, −4, 1), v4 = (3, 7, −1).

Solution. v1 , v2 are linearly independent because av1 +bv2 = (2a+b, −2a+9b, −4a+3b) =
0 ⇒ a = b = 0.
v3 is dependent on v1 , v2 . If v3 = av1 + bv2 , then (2a + b, −2a + 9b, −4a + 3b) =
7 6
(−2, −4, 1) ⇒ a = − 10 , b = − 10 .
Similarly v4 is dependent on v1 , v2 . If v4 = av1 + bv2 , then (2a + b, −2a + 9b, −4a + 3b) =
(3, 7, −1) ⇒ a = b = 1.
Thus the maximally linearly independent set is {v1 , v2 }.

2
Question 2(c) Show that the system of equations

4x + y − 2z + w = 3
x − 2y − z + 2w = 2
2x + 5y − w = −1
3x + 3y − z − 3w = 1

although consistent is not uniquely solvable. Determine a general solution using x as a


parameter.
 
4 1 −2 1
1 −2 −1 2 
Solution. The coefficient matrix A =  .
2 5 0 −1
3 3 −1 −3
 
4 1 −2 1 3
1 −2 −1 2 2
The augmented matrix B =  .
2 5 0 −1 −1
3 3 −1 −3 1
Add R2 to R4 , and subtract R1 , to get
   
4 1 −2 1 4 1 −2 1 3
1 −2 −1 2  1 −2 −1 2 2
A∼ ,B ∼  
2 5 0 −1 2 5 0 −1 −1
0 0 0 0 0 0 0 0 0

1 −2 1
Since −2 −1 2 = 1 − 16 + 5 6= 0, it follows that rank A = rank B = 3, so the system
5 0 −1
is consistent. Since rank A = 3, the space of solutions is of dimension 1.
Subtracting the second equation from the fourth, we get 2x + 5y − 5w = −1. But
2x + 5y − w = −1, so w − 5w = 0 ⇒ w = 0.
Now y − 2z = 3 − 4x, −2y − z = 2 − x ⇒ −5z = 8 − 9x ⇒ z = 9x−8 5
. Now y =
−2x−1 −2x−1 9x−8
3 − 4x + 2 9x−8
5
= 5
. Thus the space of solutions is (x, 5
, 5
, 0). The system does
not have a unique solution.

Question 3(a) Show that every square matrix satisfies its characteristic equation. Using
this result or otherwiseshow that if
 
1 0 2
A = 0 −1 1
0 1 0

then A4 − 2A3 − 2A2 + 6A − 2I = A, where I is the 3 × 3 identity matrix.

3
Solution. The first part is the Cayley Hamilton theorem. See 1987 Question 5(a).
The characteristic equation of A is |A − xI| = 0, thus
1−x 0 2
0 −1 − x 1 = (1 − x)(x2 + x − 1) = −x3 + 2x − 1 = 0
0 1 −x
By the Cayley Hamilton Theorem, A3 − 2A + I = 0.
Thus A4 = 2A2 − A, and 2A3 = 4A − 2I. Hence A4 − 2A3 − 2A2 + 6A − 2I =
2A2 − A − 4A + 2I − 2A2 + 6A − 2I = A as required.

Question 3(b) 1. Show that a square matrix is singular if and only if at least one of its
eigenvalues is 0.
2. The rank of an n×n matrix A remains unchanged if it is premultiplied or postmultiplied
by a nonsingular matrix, and that rank(XAX−1 ) = rank(A).
Solution.
1. The characteristic polynomial of A is |A−xI|. Putting x = 0, we see that the constant
term in the characteristic polynomial is |A|. Thus if A has 0 as an eigenvalue iff 0 is
a root of the characteristic polynomial iff |A| = 0.
 
R1
2. Let A =  ... , where each Ri is 1×n, i.e. A is m×n. Now rank(A) is the dimension
 
Rm
of the row space of A, i.e. the space generated by  R1 , . . . , Rm . Let P = (pij ) be 
an
p11 R1 + p12 R2 + . . . + p1m Rm
 p21 R1 + p22 R2 + . . . + p2m Rm 
m × m nonsingular matrix. Then B = PA =  .
 
..
 . 
pm1 R1 + pm2 R2 + . . . + pmm Rm
Thus the rows of PA ⊂ the row space of A, being linear combinations of rows of
A. Writing A = P−1 B, we get that the row space of A ⊂ the row space of B, so
rank(A) = rank(B).
Let Q be non-singular n × n, and C = AQ. It can be proved as above that the column
space of A = the column space of C, thus rank(A) = rank(C).
Now by using the above results, rank(XAX−1 ) = rank(XA) = rank(A).

Paper II
Question 4(a) If V1 and V2 are subspaces of a vector space V, then show that dim(V1 +V2 ) =
dim(V1 ) + dim(V2 ) − dim(V1 ∩ V2 ).
Solution. See 1998, question 1(b).

4
Question 4(b) Let V and W be vector spaces over the same field F and dim V = n. Let
{e1 , . . . , en } be a basis of V. Show that a map f : {e1 , . . . , en } −→ W, can be uniquely
extended to a linear transformation T : V −→ W whose restriction to the given basis is f
i.e. T (ei ) = f (ei ).
Pn Pn
Solution.
Pn If v =Pn i=1 a i e i , define T (v) = i=1 ai f (ei ). Clearly T (ei ) = f (ei ). If
v = i=1 ai ei , w = i=1 bi ei , then
n
X
T (αv + βw) = T ( (αai + βbi )ei )
i=1
n
X
= (αai + βbi )f (ei )
i=1
n
X n
X
= α ai f (ei ) + β bi f (ei )
i=1 i=1
= αT (v) + βT (w)

Thus T is a linear transformation.


PnIf U is any other linear transformation
Pn satisfying
Pn U (ei ) P= f (ei ), then P
n
for any v =
n
a e
i=1 i i , by linearity, T (v) = T ( a e
i=1 i i ) = a
i=1 i T (e i ) = a
i=1 i f (e i ) = i=1 ai U (ei ) =
U (v). Since this is true for every v, we have T = U .

Question 5(a) 1. If A and B are two linear transformations and if A−1 and B −1 exist,
show that (AB)−1 exists and (AB)−1 = B −1 A−1 .

2. Prove that similar matrices have the same characteristic polynomial and hence the
same eigenvalues.

3. Prove that the eigenvalues of a Hermitian matrix are real.

Solution.

1. Clearly (AB)(B −1 A−1 ) = AIA−1 = AA−1 = I, (B −1 A−1 )(AB) = B −1 A−1 AB =


B −1 B = I. Thus AB is invertible and its inverse is B −1 A−1 .

2. If B = P−1 AP then |λI−B| = |λP−1 P−P−1 AP| = |P−1 ||λI−A||P| = |λI−A|. Thus
A and B have the same characteristic polynomial and therefore the same eigenvalues.

3. See 1993 question 2(c).

5
1
Question 5(b) Reduce 2x2 + 4xy + 5y 2 + 4x + 13y − 4
= 0 to canonical form.

Solution.
1
LHS = 2(x + y + 1)2 − 2y 2 − 2 + 5y 2 + 9y −
4
9
= 2(x + y + 1)2 + 3(y 2 + 3y) −
4
3 27 9
= 2(x + y + 1)2 + 3(y + )2 − −
2 4 4
3
= 2X 2 + 3Y 2 − 9 where X = x + y + 1, Y = y +
2
X2 Y2
2X 2 + 3Y 2 − 9 = 0 ⇒ 9/2
+ 3
= 1. Thus the given equation is an ellipse.
 
0 1 1
Question 5(c) Find the reciprocal of the matrix T = 1 0 1. Then show that the
  1 1 0
b+c c−a b−a
transform of the matrix A = 2 c − b c + a a − b by T i.e. TAT−1 is a diagonal matrix.
1

b−c a−c a+b


Determine the eigenvalues of the matrix A.

Solution. |T| = −1(−1) + 1(1) = 2. So


   
A11 A21 A31 −1 1 1
1 1
T−1 = A12 A22 A32  =  1 −1 1 
2 2
A13 A23 A33 1 1 −1

Here Aij denotes the cofactor of aij . Now

    
0 1 1 b+c c−a b−a −1 1 1
1 1
TAT−1 = 1 0 1 c − b c + a a − b  1 −1 1 
2 2
1 1 0 b−c a−c a+b 1 1 −1
  
0 2a 2a −1 1 1
1
= 2b 0 2b   1 −1 1 
4
2c 2c 0 1 1 −1
   
4a 0 0 a 0 0
1
= 0 4b 0  = 0 b 0
4
0 0 4c 0 0 c

Thus TAT−1 is diagonal. Now the eigenvalues of A and TAT−1 are the same, so the
eigenvalues of A are a, b, c.

6
UPSC Civil Services Main 1987 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

December 27, 2009

 
7 −3
Question 1(a) 1. Find all the matrices which commute with the matrix .
5 −2
2. Prove that the product of two n × n symmetric matrices is a symmetric matrix if and
only if the matrices commute.

Solution.

1.      
a b 7 −3 7 −3 a b
=
c d 5 −2 5 −2 c d

⇒ 7a + 5b = 7a − 3c (i)
−3a − 2b = 7b − 3d (ii)
7c + 5d = 5a − 2c (iii)
−3c − 2d = 5b − 2d (iv)
(i) and (iv) ⇒ 5b = −3c. From (ii) we get d = a + 3b, and from (iii) we get the
same
  − 9c = 5a + 15b = 5d, or d = a + 3b. Thus the required matrices are
thing: 5a
a b
, a, b arbitrary.
− 35 b a + 3b

2. Given A0 = A, B0 = B. Suppose AB = BA, then (AB)0 = B0 A0 = BA = AB ⇒ AB


is symmetric. Let AB be symmetric. Then AB = (AB)0 = B0 A0 = BA, so A and B
commute. Thus AB is symmetric ⇔ AB = BA when A0 = A, B0 = B.

1
Question 1(b) Show that the rank of the product of two square matrices A, B each of order
n satisfies the inequality
rA + rB − n ≤ rAB ≤ min(rA , rB )
where rC stands for the rank of C, a square matrix.
 
G
Solution. There exists a non-singular matrix P such that PA = , where G is a
  0
G
rA × n matrix of rank rA . Now PAB = B has at most rA non-zero rows obtained on
0
multiplying rA non-zero rows of G with B. Thus rPAB , which is the same as rank rAB as P
is non-singular, ≤ rA . 
Similarly there exists a non-singular matrix Q such
 that BQ = H 0 , where H is a
n × rB matrix of rank rB . Now ABQ = A H 0 has at most rB non-zero, columns, so
rABQ ≤ rB . Now rABQ = rAB as |Q| = 6 0, so rAB ≤ rB , hence rAB ≤ min(rA , rB ).
Let S(A) denote the space generated by the vectors r1 , . . . , rn where ri is the ith row
of A, then dim(S(A)) = rA , similarly dim(S(B)) = rB . Let S denote the space generated
by the rows of A and B. Clearly dim(S) ≤ dim(S(A)) + dim(S(B)) = rA + rB . Clearly
S(A + B) ⊆ S. Therefore rA+B ≤ dim(S) ≤ rA + rB .    
IrA 0 −1 IrA 0
Now there exist non-singular matrices P, Q such that PAQ = or A = P Q−1 .
    0 0 0 0
−1 0 0 −1 −1 IrA 0
Let C = P Q . Then A + C = P Q−1 = P−1 Q−1 , so A + C
0 In−rA 0 In−rA
is nonsingular.
Now rank B = rank((A + C)B) ≤ rank(AB) + rank(CB). But rank(CB) ≤ rank(C) =
n−rA . Thus rB ≤ rAB +n−rA ⇒ rA +rB −n ≤ rAB . Hence rA +rB −n ≤ rAB ≤ min(rA , rB ).

Question 1(c) If 1 ≤ a ≤ 5, find the rank of the matrix


 
1 1 1 1
1 3 −2 a 
A= 2 2a − 2 −a − 2 3a − 1

3 a+2 −3 2a + 1

1 0 0 0
1 2 −3 a−1
Solution. |A| = by carrying out the operations C2 −C1 , C3 −
2 2a − 4 −a − 4 3a − 3
3 a−1 −6 2a − 2
2 −3 1 0 0 1
C1 , C4 − C1 . Thus |A| = (a − 1) 2a − 4 −a − 4 3 = (a − 1) 2a − 10 −a + 5 3 =
a−1 −6 2 a−5 0 2
2
(a − 1)(a − 5) .

2
Thus |A| =6 0 when a 6= 1, a 6= 5. So for 1 < a < 5, rank A = 4.
If a = 5,
 
1 1 1 1
1 3 −2 5 
A =  2 8 −7 14

3 7 −3 11
 
1 0 0 0
1 2 −3 4 
=   (C2 − C1 , C3 − C1 , C4 − C1 )
2 6 −9 12
3 4 −6 8
 
1 0 0 0
0 2 −3 4 
=   (R2 − R1 , R3 − 2R1 , R4 − 3R1 )
0 6 −9 12
0 4 −6 8
 
1 0 0 0
0 2 −3 4
= 
0
 (R3 − 3R2 , R4 − 2R2 )
0 0 0
0 0 0 0

1 0
which has rank 2, as 6= 0, showing that rank of A is 2 when a = 5.
0 2
If a = 1,
 
1 1 1 1
1 3 −2 1
A =  
2 0 −3 2
3 3 −3 3
 
1 0 0 0
1 2 −3 0
= 
2 −2 −5
 (C2 − C1 , C3 − C1 , C4 − C1 )
0
3 0 −6 0

1 0 0
which has rank 3 since 1 2 −3 6= 0, showing that rank of A is 3 when a = 1.
2 −2 −5

Question 2(a) If the eigenvalues of a matrix A are λj , j = 1, 2, . . . n and if f (x) is a


polynomial in x, show that the eigenvalues of the polynomial f (A) are f (λj ), j = 1, 2, . . . n.

3
Solution. Let xr be an eigenvector of λr . Then Ak xr = Ak−1 (Axr ) = λr Ak−1 xr = . . . =
λkr xr . Thus the eigenvalues of Ak are λkj , j = 1, 2, . . . , n.
Let f (x) = a0 + a1 x + . . . + am xm . Then (a0 I + a1 A + . . . + am Am )xr = (a0 + a1 λr +
. . . + am λmr )xr = f (λr )xr . Thus the eigenvalues of f (A) are f (λj ), j = 1, 2, . . . n.

Question 2(b) If A is skew-symmetric, then show that (I − A)(I + A)−1 , where I is the
corresponding identity matrix, is orthogonal.
0 ab
 
Hence construct an orthogonal matrix if A = .
− ab 0
−1
Solution. For
 theaorthogonality
  − A)(I
of (I
a
+ A) , see question 2(a)
 of 1999.

1 −b 1 b −1 b b −a
I−A= a , and I + A = ⇒ (I + A) = a2 +b2 .
b
1 − ab 1 a b !
b2 −a2 −2ab
    2 
−1 1 b −a b −a 1 b − a2 −2ab a2 +b2 a2 +b2
Thus (I−A)(I+A) = a2 +b2 = a2 +b2 = b2 −a2 ,
a b a b 2ab b 2 − a2 2ab
a2 +b2 a2 +b2
which is the required orthogonal matrix.

Question 2(c) 1. If A and B are arbitrary square matrices of which A is non-singular,


show that AB and BA have the same characteristic polynomial.
2. Show that a real matrix A is orthogonal if and only if |Ax| = |x| for all x.

Solution.
1. BA = A−1 ABA. Thus the characteristic polynomial of BA is |xI−BA| = |xA−1 A−
A−1 ABA| = |A−1 ||xI − AB||A| = |xI − AB| which is the characteristic polynomial
of AB.
√ √
2. If A is orthogonal, i.e. A0 A = I, then |Ax| = x0 A0 Ax = x0 x = |x|.
Conversely |Ax| = |x| ⇒ x0 A0 Ax = x0 x ⇒ x0 (A0 A − I)x = 0 for all x. Thus
A0 A − I = 0, so A is orthogonal.
Note that is A = (aij ) is symmetric, and ni,j=1 aij xi xj = 0 for all x, then choose x = ei to
P
get e0i Aei = aii = 0, and choose x = ei +ej to get 0 = x0 Ax = aii +2aij +ajj = 2aij ⇒ aij = 0.
(Here ei is the i-th unit vector.) Thus A = 0.

Question 3(a) Show that a necessary and sufficient condition for a system of linear equa-
tions to be consistent is that the rank of the coefficient matrix is equal to the rank of the
augmented matrix. Hence show that the system
x + 2y + 5z + 9 = 0
x − y + 3z − 2 = 0
3x − 6y − z − 25 = 0
is consistent and has a unique solution. Determine this solution.

4
Solution. Let the system be Ax = b where A is m × n, x is n × 1 and b is m × 1. Let
rank A = r. A = [c1 , c2 , . . . , cn ] where each cj is an m × 1 column. We can assume without
loss of generality that c1 , c2 , . . . , cr are linearly independent, r = rank A. The system is now
x1 c1 + x2 c2 + . . . + xn cn = b
, where x0 = (x1 , . . . , xn ). Suppose rank([A b]) = r. This means that out of n + 1 columns,
exactly r are independent. But by assumption, c1 , c2 , . . . , cr are linearly independent, there-
fore these vectors form a basis for the column space of [A b]. Consequently there exist
α1 , . . . , αr such that α1 c1 + α2 c2 + . . . + αr cr = b. This gives us the required solution
{α1 , . . . , αr , 0, . . . , 0} to the linear system.
Conversely, let the system be consistent. Let A = [c1 , c2 , . . . , cn ] as before, with c1 , c2 , . . . , cr
linearly independent, r = rank A. Since the column space of A, i.e. the space generated
by c1 , c2 , . . . , cn has dimension r, each cj for r + 1 ≤ j ≤ n is linearly dependent on
c1 , c2 , . . . , cr . Since there exist α1 , . . . , αn such that α1 c1 + α2 c2 + . . . + αn cn = b, b is a
linear combination of c1 , c2 , . . . , cn . But each cj for r + 1 ≤ j ≤ n is a linear combination of
c1 , c2 , . . . , cr , therefore b is a linear combination of c1 , c2 , . . . , cr . Thus the space generated
by {c1 , c2 , . . . , cn , b} also has dimension
 r, sorank([A b]) = r = rank A.
1 2 5
The coefficient matrix A = 1 −1 3 . |A| = 24 6= 0, so rank A = 3. The aug-
 3 −6  −1
1 2 5 −9 1 2 5
mented matrix B = 1 −1 3  2  has rank ≤ 3, but since 1 −1 3 6= 0, it has
3 −6 −1 25 3 −6 −1
rank 3. Thus the given system is consistent.
Subtracting the second equation from the first we get 3y + 2z + 11 = 0. Subtracting 3
times the second equation from the third, we get 3y + 10z + 19 = 0. Clearly z = −1, y =
−3 ⇒ x = 2. Thus (2, −3, −1) is the unique solution. In fact the only solution of the system
is      
x −9 2
y  = A−1  2  = −3
z 25 −1

Question 3(b) In an n-dimensional vector space the system of vectors xj , j = 1, . . . , r are


linearly independent and can be expressed linearly in terms of the vectors yk , k = 1, . . . , s.
Show that r ≤ s.
Find a maximal linearly independent subsystem of the linear forms
f1 = x + 2y + z + 3t
f2 = 4x − y − 5z − 6t
f3 = x − 3y − 4z − 7t
f4 = 2x + y − z

5
Solution. Let W be the subspace spanned by yk , k = 1, . . . , s. Then dim W ≤ s. Since
xj ∈ W, j = 1, . . . , r because xj is a linear combination of yk , k = 1, . . . , s, and xj , j =
1, . . . , r are linearly independent, dim W ≥ r ⇒ r ≤ s.
Clearly f1 and f4 are linearly independent. f2 is linearly expressible in terms of f1 and f4
because f2 = af1 + bf4 ⇒ a + 2b = 4, 2a + b = −1, a − b = 5, 3a = −6 ⇒ a = −2, b = 3 satisfy
all four, hence f2 = −2f1 + 3f4 . Similarly f3 = − 37 f1 + 53 f4 . Thus {f1 , f4 } is a maximally
independent subsystem.

Paper II

Question 4(a) Let T : V −→ W be a linear transformation. If V is finite dimensional,


show that
rank T + nullity T = dim V

Solution. See question 1(a) of 1992.

Question 4(b) Prove that two finite dimensional vector spaces V, W over the same field F
are isomorphic if they are of the same dimension n.

Solution. Let dim V = dim W = n. Let v1 , . . . , vn be a basis of V,Pand w1 , . . . , wn be a


W. Define T : V −→ W by T (vi ) = wi and if v ∈ V, v = ni=1 ai vi , ai ∈ R then
basis of P
T (v) = ni=1 ai T (vi ). Then

1. T is a linear transformation. If v = ni=1 ai vi , u = ni=1 bi vi then


P P

n
X
T (αv + βu) = T ( (αai + βbi )T (vi )
i=1
n
X
= (αai + βbi )T (vi )
i=1
n
X n
X
= α ai T (vi ) + β bi T (vi )
i=1 i=1
= αT (v) + βT (u)

T is 1-1. Let T (v) = 0, where v = ni=1 ai vi . Then 0 = T (v) = ni=1 ai T (vi ) =


P P
2. P
n
i=1 ai wi ⇒ ai = 0, i = 1, . . . , n, because w1 , . . . , wn are linearly independent. Thus
T (v) = 0 ⇒ v = 0.

3. T is onto. If w ∈ W and w = ni=0 bi wi , then T (v) = w where v = ni=1 bi vi .


P P

6
Note: The converse of 4(b) is also true i.e. if T : V −→ W is an isomorphism i.e. V, W are
isomorphic, then dim V = dim W.
Let v1 , . . . , vn be a basis of V. Then {w1 = T P is a basis of W.
(v1 ), . . . , wn = T (vn )}P
n n
w1 , . . . , wn are linearly independent. If i=0 bi wi = 0, then i=0 bi T vi = 0 ⇒
T ( ni=0 bi vi ) = 0 ⇒ ni=0 bi vi = 0 ⇒ bi = 0 for 1 ≤ i ≤ n, because v1 , . . . , vn are linearly
P P
independent.
w1 , . . . , wn generate W.P If w ∈ W, then there exists a P v ∈ V such that
Pn T (v) = w,
n n
because T is onto. Let v = i=0 bi vi , then w = T (v) = T ( i=0 bi vi ) = i=0 bi T (vi ) =
P n
i=0 bi wi .

Question 5(a) Prove that every square matrix is the root of its characteristic polynomial.

Solution. This is the Cayley Hamilton Theorem. Let A be a matrix of order n. Let

|A − xI| = ao + a1 x + . . . + an xn

Then we wish to show that

ao I + a1 A + . . . + an A n = 0

Suppose the adjoint of A − xI is B0 + B1 x + . . . + Bn−1 xn−1 , where Bi are matrices of order


n. Then by definition of the adjoint,

(A − xI)(B0 + B1 x + . . . + Bn−1 xn−1 ) = |A − xI|I

Substituting for |A − xI| the expression ao + a1 x + . . . + an xn and equating coefficients


of like powers, we get

AB0 = ao I
AB1 − B0 = a1 I
AB2 − B1 = a2 I
...
ABn−1 − Bn−2 = an−1 I
−Bn−1 = an I

Multiplying these equations successively by I, A, A2 , . . . , An on the left and adding, we get


0 = ao I + a1 A + . . . + an An , which was to be proved.

7
Question 5(b) Determine the eigenvalues and the corresponding eigenvectors of
 
2 2 1
A = 1 3 1
1 2 2

Solution.
λ − 2 −2 −1
|λI − A| = −1 λ − 3 −1 = 0
−1 −2 λ − 2

⇒ (λ − 2)2 (λ − 3) − 2(λ − 2) + 2(−λ + 2) − 2 − 2 − (λ − 3) = 0


⇒ (λ2 − 4λ + 4)(λ − 3) − 5λ + 7 = 0
⇒ λ3 − 7λ2 + 11λ − 5 = (λ − 1)(λ2 − 6λ + 5) = 0
⇒ λ = 1, 5, 1

Let (x1 , x2 , x3 ) be an eigenvector for λ = 5. Then


  
3 −2 −1 x1
−1 2 −1 x2  = 0
−1 −2 3 x3

Thus 3x1 − 2x2 − x3 = 0, −x1 + 2x2 − x3 = 0, −x1 − 2x2 + 3x3 = 0 ⇒ x1 = x2 = x3 . Thus


(1, 1, 1) is an eigenvector for λ = 5. In fact (x, x, x) with x 6= 0 are eigenvectors for λ = 5.
Let (x1 , x2 , x3 ) be an eigenvector for λ = 1. Then
  
−1 −2 −1 x1
−1 −2 −1 x2  = 0
−1 −2 −1 x3

Thus x1 + 2x2 + x3 = 0. We can take x1 = (1, 0, −1) and x2 = (0, 1, −2) as eigenvectors for
λ = 1. These are linearly independent, and all eigenvectors for λ = 1 are linear combinations
of x1 , x2 .    
1 1 0 5 0 0
Let P = 1 0 1 . Then P−1 AP = 0 1 0.
1 −1 −2 0 0 1

Question 5(c) Let A and B be n square matrices over F . Show that AB and BA have
the same eigenvalues.

Solution. If A is non-singular, then

BA = A−1 ABA ⇒ |xI − BA| = |xA−1 A − A−1 ABA| = |A−1 ||xI − AB||A| = |xI − AB|

8
Thus the characteristic polynomials of AB and BA are the same, so they have the same
eigenvalues.
If A is singular,
 then let rank(A) = r. Then there  existP, Q non-singular such that
Ir 0 I 0
PAQ = . Now PABP−1 = PAQQ−1 BP−1 = r Q−1 BP−1 . Let Q−1 BP−1 =
 0 0 0 0
B1 B2
, where B1 is r × r, B2 is r × n − r, B3 is n − r × r, B4 is n − r × n − r. Then
B3 B4     
−1 Ir 0 B1 B2 B1 B2
PABP = = , so the characteristic roots of AB are the
0 0 B3 B4 0 0
same as those of B1 , along with 0 repeated n − r times.
   
−1 B 1 B 2 Ir 0 B 1 0
Now Q−1 BAQ = Q−1 BP PAQ = = so the character-
B3 B4 0 0 B3 0
istic roots of BA are the same as those of B1 , along with 0 repeated n − r times. Thus BA
and AB have the same characteristic roots.

9
UPSC Civil Services Main 1988 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

December 27, 2009

Question 1(a) Show that a linear transformation of a vector space Vm of dimension m


into a vector space Vn of dimension n over the same field can be represented as a matrix.
If T is a linear transformation of V2 into V4 such that T(3, 1) = (4, 1, 2, 1) and T(−1, 2) =
(3, 0, −2, 1), then find the matrix of T.

Solution. Let vi , i = 1, . . . , m be a basis of Vm and wj , j = 1, . . . , n be a basis of Vn . If


n
X
T(vi ) = aji wj , i = 1, . . . , m
j=1

then T corresponds to the n × m matrix A whose (i, j)’th entry is aij . In fact (v1 , . . . , vm ) =
(w1 , . . . , wn )A.
It can be easily seen that
2 1
e1 = (1, 0) = (3, 1) − (−1, 2)
7 7
1 3
e2 = (0, 1) = (3, 1) + (−1, 2)
7 7

1
and therefore
2 1
(4, 1, 2, 1) − (3, 0, −2, 1)
T(e1 ) =
7 7
1
= (5, 2, 6, 1)
7
1 ∗
= (5e + 2e∗2 + 6e∗3 + e∗4 )
7 1
1 3
T(e2 ) = (4, 1, 2, 1) + (3, 0, −2, 1)
7 7
1
= (13, 1, −4, 4)
7
1
= (13e∗1 + e∗2 − 4e∗3 + 4e∗4 )
7
 
5 13
2 1 
Thus T corresponds to the matrix 17 
6 −4 w.r.t. the standard basis.

7 4

Question 1(b) If M, N are finite dimensional subspaces of V, then show that dim(M +
N ) = dim M + dim N − dim(M ∩ N ).

Solution. Let {u1 , u2 , . . . , ur } be a basis of M ∩ N where dim(M ∩ N ) = r. Complete


{u1 , u2 , . . . , ur } to a basis {u1 , u2 , . . . , ur , v1 , . . . , vm } of M, where dim M = m + r. Com-

B
plete {u1 , u2 , . . . , ur } to a basis {u1 , u2 , . . . , ur , w1 , . . . , wn } of N , where dim N = n+r. We
shall show that = {u1 , u2 , . . . , ur , v1 , . . . , vm , w1 , . . . , wn } is a basis of M + N , proving
the result.
If u ∈ M + N , then u = v + w for some v ∈ M, w ∈ N . Since B B is a superset of the

written as a linear combination of elements of


We now show that the set
B
bases of M, N , v, w can be written as linear combination of elements of

B . Thus B
is linearly independent. If possible let
generates M + N .
⇒ u can be

n
X m
X r
X
αi vi + βi w i + γi ui = 0
i=1 i=1 i=1
Pn Pm Pr Pn
Since
Pn i=1 αi v i = − i=1 β i w i − i=1 γi u i it follows that i=1 αi vi ∈ N .P Therefore
n r
means that ni=1 αi vi −
P P
α
Pri=1 i iv ∈ M ∩ N ⇒ α
i=1 i i v = η
i=1 i i u for ηi ∈ R. This
i=1 ηi ui = 0. But {u1 , u2 , . . . , ur , v1 , . . . , vm } are linearly independent, so αi = 0, 1 ≤
i ≤ n. Similarly we can show that βi = 0, 1 ≤ i ≤ m. Then the linear indepen-
dence of {u1 , u2 , . . . , ur } shows that γi = 0, 1 ≤ i ≤ r. Thus the vectors in
early independent and form a basis of M + N , showing that the dimension of M + N is
B
are lin-

m + n + r = (m + r) + (n + r) − r, which completes the proof.

2
Question 1(c) Determine a basis of the subspace spanned by the vectors v1 = (1, 2, 3), v2 =
(2, 1, −1), v3 = (1, −1, −4), v4 = (4, 2, −2).
Solution. v1 , v2 are linearly independent because if αv1 + βv2 = 0 then α + 2β =
0, 2α + β = 0, 3α − β = 0 ⇒ α = β = 0. If v3 = αv1 + βv2 , then the three linear equations
α + 2β = 1, 2α + β = −1, 3α − β = −4 should be consistent — clearly α = −1, β = 1 satisfy
all three, showing v3 = v2 − v1 . Again suppose v4 = αv1 + βv2 , then the three linear
equations α + 2β = 4, 2α + β = 2, 3α − β = −2 should be consistent — clearly α = 0, β = 2
satisfy all three, showing v4 = 2v2 .
Hence {v1 , v2 } is a basis for the vector space generated by {v1 , v2 , v3 , v4 }.
 
a1 b
Question 2(a) Show that it is impossible for S = , b 6= 0 to have identical eigen-
b a2
values.
 
0 λ1 0
Solution. We know given S symmetric ∃O orthogonal so that O SO = , where
0 λ2
λ1 , λ
2 are eigenvalues
 of S. If λ1 = λ2 , then we have S = O0 −1 (λI)O−1 = λ(OO0 )−1 = λI ⇒
λ 0
S= . Thus if b 6= 0, S cannot have identical eigenvalues.
0 λ

Question 2(b) Prove that the eigenvalues of a Hermitian matrix are all real and the eigen-
values of a skew-Hermitian matrix are either zero or pure imaginary.
Solution. See question 2(a), year 1998.

Question 2(c) If x0 Ax > 0 for all x 6= 0, A symmetric, then for all y 6= 0 y0 A−1 y > 0.
If λ is the largest eigenvalue of A, then
x0 Ax
λ = sup
x∈Rn x0 x
x6=0
.
Solution. Clearly A = A0 A−1 A ∴ x0 Ax = x0 A0 A−1 Ax = y0 A−1 y where y = Ax for any
x ∈ Rn , x 6= 0. Since |A| 6= 0, any vector y can be written as Ax, by taking x = A−1 y.
Thus x0 Ax > 0 ⇒ y0 A−1 y > 0 for all y 6= 0.  
λ1 0 . . . 0
 0 λ2 . . . 0 
0
0
Let M = supx∈Rn xxAx . Let O be an orthogonal matrix such that O AO = .. .
 
0x  ..
x6=0 . .
0 0 . . . λn
Let 0 6= x = Oy, then x x = y O Oy = y y. Now x Ax = y O AOy = i λi yi2 ≤ λy0 y
0 0 0 0 0 0 0
P
0 0
where λ is the largest eigenvalue of A. Thus λ ≥ xyAx 0y = xxAx 0 x , so λ ≥ M . On the other
0
hand, if x 6= 0 is an eigenvector corresponding to λ, then x Ax = λx0 x ⇒ λ = xxAx
0
0x ≤ M .

Thus λ = M as required.

3
Question 3(a) By converting A to an echelon matrix, determine its rank, where
 
0 0 1 2 8 9
0 0 4 6 5 3
 
A= 0 2 3 1 4 7 
0 3 0 9 3 7
0 0 5 7 3 1

Solution. Consider  
0 0 0 0 0
0 0 2 3 0
 
0
1 4 3 0 5
A =
2

 6 1 9 7
8 5 4 3 3
9 3 7 7 1
Interchange the first row with the third, then third with fourth, fourth with fifth and fifth
with sixth to get  
1 4 3 0 5
0 0 2 3 0
 
0
2 6 1 9 7
A ∼ 8 5

 4 3 3
9 3 7 7 1
0 0 0 0 0
Now perform R3 − 2R1 , R4 − 8R1 , R5 − 9R1 to get
 
1 4 3 0 5
0 0 2 3 0 
 
0
0 −2 −5 9 −3 
A ∼ 0 −27 −20 3 −37

 
0 −33 −20 7 −44
0 0 0 0 0

Interchange the second and the third row, and perform − 12 R2 , 21 R3 to get
 
1 4 3 0 5
5
0 1
 2
− 92 3 
2 
0 0 3
0 1 0 
A ∼  2 
 0 −27 −20 3 −37 

0 −33 −20 7 −44
0 0 0 0 0

4
Perform R4 + 27R2 , R5 + 33R2 to get
 
1 4 3 0 5
5
0
 1 2
− 92 3
2


0 3
0 0 1 0
A ∼ 95
2 
0
 0 2
− 237
2
7 
2 
125
0 0 2
− 283
2
11 
2
0 0 0 0 0
95 125
Operation R4 − 2
R3 , R5 − 2
R3
yields
 
1 4 3 0 5
5
0
 1 2
− 92 

3
2
0 3
0 0 1 0
A ∼ 2 
0
 0 0 − 759
4
7 
2 
0 0 0 − 941
4
11 
2
0 0 0 0 0
4
Now multiply R4 with − 759
 
1 4 3 0 5
5
0
 1 2
− 92 3

2

0 3
0 0 1 0 
A ∼ 2 
14 
0
 0 0 1 − 759 
0 0 0 − 941
4
11 
2
0 0 0 0 0
941
Performing R5 + 4
R4 results in
 
1 4 3 0 5
5
0
 1 2
− 29 3
2


0 3
0 0 1 0 
A ∼ 2
14

0
 0 0 1 − 759  
11
0 0 0 0 2
− 941×7
1882

0 0 0 0 0

which can be converted to  


1 4 3 0 5
5
0
 1 2
− 92 3
2


0 3
0 0 1 0 
A ∼ 2 
14 
0
 0 0 1 − 759 
0 0 0 0 1 
0 0 0 0 0
which is an echelon matrix. Its rank is clearly 5, so the rank of A = 5.

5
Question 3(b) Given AB = AC does it follow that B = C? Can you provide a counterex-
ample?

Solution. It does not follow that B = C.


   
1 0 0 1
A= , B= ⇒ AB = 0
0 0 0 0
C = 0 ⇒ AC = 0, but B 6= C.
 
0 −1 0
Question 3(c) Find a nonsingular matrix which diagonalizes A = −1 −1 1 , B =
  0 1 0
2 1 −2
1 2 −2 simultaneusly. Find the diagonal form of A.
−2 −2 3

Solution.
−2λ −1 − λ 2λ −2λ −1 − λ 0
|A − λB| = 0 ⇒ −1 − λ −1 − 2λ 1 + 2λ = 0 ⇒ −1 + λ 0 0 =0
2λ 1 + 2λ −3λ 2λ 1 + 2λ −λ

Thus λ = 0, 1, −1. This shows that the matrices are diagonalizable simultaneously.
We now determine x1 , x2 , x3 such that (A − λB)xi = 0, i = 1, 2, 3. For λ = 0, let
x1 0 = (x1 , x2 , x3 ) be such that (A − λB)x1 = 0. Thus
  
0 −1 0 x1
−1 −1 1 x2  = 0
0 1 0 x3

Thus −x2 = 0, −x1 − x2 + x3 = 0, x2 = 0. Thus x1 0 = (1, 0, 1).


For λ = 1, let x2 0 = (x1 , x2 , x3 ) be such that (A − λB)x2 = 0. Thus
  
−2 −2 2 x1
−2 −3 3  x2  = 0
2 3 −3 x3

Thus −2x1 −2x2 +2x3 = 0, −2x1 −3x2 +3x3 = 0, 2x1 +3x2 −3x3 = 0 ⇒ x2 −x3 = 0 ⇒ x1 = 0.
Thus we may take x2 0 = (0, 1, 1).
For λ = −1, let x3 0 = (x1 , x2 , x3 ) be such that (A − λB)x3 = 0. Thus
  
2 0 −2 x1
0 1 −1   x2  = 0
−2 −1 3 x3

6
Thus 2x1 − 2x3 = 0, x2 − x3 = 0, −2x1 − x2 + 3x3 = 0 ⇒ x1 = x2 = x3 . Thus we may take
x3 0 = (1, 1, 1).
 
1 0 1
Let P = 0 1 1 so that
1 1 1
        
1 0 1 0 −1 0 1 0 1 1 0 1 0 −1 −1 0 0 0
0
P AP = 0 1 1
  −1 −1 1   0 1 1 = 0
  1 1 0 0 −1 = 0 1 0 
1 1 1 0 1 0 1 1 1 1 1 1 0 1 1 0 0 −1

7
UPSC Civil Services Main 1989 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

December 16, 2007

 
3 1 −1
Question 1(a) Find a basis for the null space of the matrix A = .
0 1 2

Solution. A is a linear transformation from R3 to R2 defined by A(e1 ) = 3e∗1 , A(e2 ) =


e∗1 + e∗2 , A(e3 ) = −e∗1 + 2e∗2 , where e1 , e2 , e3 is the standard basis of R3 and e∗1 , e∗2 is the
standard basis of R2 . Thus A(a, b, c) = e∗1 (3a + b − c) + e∗2 (b + 2c). Consequently, (a, b, c) ∈
null space of A if and only if 3a + b − c = 0, b + 2c = 0 ⇒ b = −2c, a = c. Thus null space
of A is {(c, −2c, c) | c ∈ R}. Note that rank A = 2, so the null space has dimension 1.
A basis for the null space is (1, −2, 1), any other multiple of this can also be regarded as a
basis.

Question 1(b) If W is a subspace of a finite dimensional vector space V then prove that
dim V/W = dim V − dim W.

Solution. Let v1 , . . . , vr be a basis of W, dim W = r. Let vr+1 , . . . , vn be n−r vectors in V


so chosen that v1 , . . . , vn is a basis of V, dim V = n. We will show that vi + W, r + 1 ≤ i ≤ n
is a basis of V/W ⇒ dim V/W = n − r.

1
First we show linear independence:
n
X
αi (vi + W) = 0
i=r+1
Xn
⇒ αi vi + W = 0 + W
i=r+1
Xn
⇒ αi vi ∈ W
i=r+1
Xn r
X
⇒ αi vi = −αi vi (say)
i=r+1 i=1
Xn
⇒ αi vi = 0
i=1
⇒ αi = 0, 1 ≤ i ≤ n (vi are linearly independent.)

Thus vi + W, r + 1 ≤ i ≤ n are linearly independent.


Pn
PnIf v + W is anyPnelement of V/W, Pnthen v = i=1 αi vi as v ∈ V. Therefore v + W =
i=1 αi vi +W = i=1 αi (vi +W) = i=r+1 αi (vi +W) because v1 +W = . . . = vr +W = W.
Thus vi + W, r + 1 ≤ i ≤ n generate V/W. Hence dim V/W = n − r = dim V − dim W

Question 1(c) Show that all vectors (x1 , x2 , x3 , x4 ) in the vector space V4 (R) which obey
x4 −x3 = x2 −x1 form a subspace V. Show further that V is spanned by ξ1 = (1, 0, 0, −1), ξ2 =
(0, 1, 0, 1), ξ3 = (0, 0, 1, 1).

Solution. If y = (y1 , y2 , y3 , y4 ), z = (z1 , z2 , z3 , z4 ) ∈ V then αy + βz = (a1 , a2 , a3 , a4 ) ∈ V


because

a4 − a3 = (αy4 + βz4 ) − (αy3 + βz3 )


= α(y4 − y3 ) + β(z4 − z3 )
= α(y2 − y1 ) + β(z2 − z1 ) ∵ y4 − y3 = y2 − y1 , z4 − z3 = z2 − z1
= a2 − a1

Thus V is a subspace of V4 (R). Note that V = 6 ∅.


Clearly ξ1 , ξ2 , ξ3 are linearly independent ⇒ dim V ≥ 3. But V =
6 V4 (R) because
(1, 0, 0, 0) 6∈ V ∴ dim V < 4 ⇒ dim V = 3.
Hence ξ1 , ξ2 , ξ3 is a basis of V and therefore span V.

Question 2(a) Let P be a real skew-symmetric matrix and I the corresponding unit matrix.
Show that I − P is non-singular. Also show that Q = (I + P)(I − P)−1 is orthogonal.

2
Solution. We have proved (question 2(a), year 1998) that the eigenvalues of a skew-
Hermitian and therefore of a skew-symmetric matrix are zero or pure imaginary. This means
|I − P| =
6 0 because 1 cannot be an eigenvalue of P.
Q Q = [(I − P)−1 ]0 (I + P)0 (I + P)(I − P)−1 = (I + P)−1 (I − P)(I + P)(I − P)−1 . But
0

(I − P)(I + P) = I − P2 = (I + P)(I − P), therefore Q0 Q = I. Similarly QQ0 = I ⇒ Q is


orthogonal.
Related Results:

1. If S is skew-Hermitian, then A = (I+S)(I−S)−1 is unitary. Conversely, if A is unitary,


then A can be written as A = (I + S)(I − S)−1 for some skew-Hermitian matrix S
provided −1 is not an eigenvalue of A.
Proof:
0 0 0 0
A = ((I − S)−1 )0 (I + S) = (I − S )−1 (I + S )
= (I + S)−1 (I − S)
0
∴ AA = (I + S)(I − S)−1 (I + S)−1 (I − S)
= (I + S)(I + S)−1 (I − S)−1 (I − S) = I
∵ (I − S)−1 (I + S)−1 = (I − S2 )−1 = (I + S)−1 (I − S)−1
0
Similarly A A = I, so A is unitary.
Now A(I − S) = I + S ⇒ A − I = (A + I)S ⇒ S = (A + I)−1 (A − I). It can be checked
as above that S is skew-Hermitian. Note that |A + I| =
6 0.

2. If H is Hermitian, then A = (H + iI)−1 (H − iI) is unitary and every unitary matrix


can be thus represented provided it does not have −1 as its eigenvalue.

3. If S is real, S0 = −S and S2 = −I, then S is orthogonal and of even order, and there
exist non-null vectors x, y such that x0 x = y0 y = 1, x0 y = 0, Sx + y = 0, Sy = x.
Proof: S0 S = −SS = I, so S is orthogonal, |S| =
6 0 ⇒ S is of even order.
Choose y such that y0 y = 1. Then y0 Sy = (y0 Sy)0 = y0 S0 y = −y0 Sy ⇒ y0 Sy = 0.
Set x = Sy, then y0 x = 0, Sx + y = 0. In addition, x0 x = y0 S0 Sy = y0 y = 1.

Question 2(b) Show that an n × n matrix A is similar to a diagonal matrix if and only if
the set of eigenvectors of A includes a set of n linearly independent vectors.

Solution. See question 2(c) of 1998.

Question 2(c) Let r1 , r2 be distinct eigenvalues of a matrix A and let ξi be an eigenvector


0
corresponding to ri , i = 1, 2. If A is Hermitian, show that ξ1 ξ2 = 0.

Solution. See question 2(c) of 1993.

3

1 1 , and B =
Question
 3(a) Find the roots of the equation |xA − B| = 0 where A = 1 4
0 3 . Use the result to show that the real quadratic forms F = x2 + 2x x + 4x2 , G = 6x x
3 0 1 1 2 2 1 2
can be simultaneously reduced by a non-singular linear substitution to y12 + y22 , y12 − 3y22 .

x x−3
Solution. |xA − B| = = 4x2 − (x − 3)2 ⇒ ±2x = x − 3 ⇒ x = −3, 1.
x − 3 4x
Let x1 = (x1 , x2 ) be a row vector such that (A − B) xx12 = 0.


  
1 −2 x1
= 0 ⇒ x1 − 2x2 = 0
−2 4 x2

We take x1 = 2, x2 = 1, so x1 = (2, 1).


Let x2 = (x1 , x2 ) be a row vector such that (−3A − B) xx12 = 0.


  
−3 −6 x1
= 0 ⇒ x1 + 2x2 = 0
−6 −12 x2

We take x1 = −2, x2 =1, so  x2 = (−2,1).


x1 Ax01 = (2, 1) 11 14 21 =(2, 1) 36 = 12.
x2 Ax02 = (−2, 1) 11 14 −2 1 = (−2, 1) −12 = 4.
0
Note that x1 Ax2 = 0. 
x1 Bx01 = (2, 1) 03 30 21 =(2, 1) 36 = 12.


x2 Bx02 = (−2, 1) 03 30 −2 1 = (−2, 1) −63


= −12.
0
Note that x1 Bx2 = 0.
√1 0
Thus if P = [x1 0 , x2 0 ], then P0 AP = 12 0 , and P0 BP = 12 0
 
0 4 0 −12 . Let Q = 12
0 12
,
then Q0 P0 APQ = 10 01 and Q0 P0 BPQ = 10 −3
 0

as desired. Thus the required non-singular
linear transformation is PQ.
−1
− tan 2θ tan 2θ
   
cos θ − sin θ 1 1
Question 3(b) Show that = .
sin θ cos θ tan 2θ 1 − tan 2θ 1

Solution.
− tan 2θ cos2 2θ − sin 2θ cos 2θ
  
1
R.H.S = θ
tan 2 1 sin 2θ cos 2θ cos2 2θ
cos2 2θ − sin2 2θ −2 sin 2θ cos 2θ
 
= = L.H.S
2 sin 2θ cos 2θ − sin2 2θ + cos2 2θ

4
 
0 1
Question 3(c) Verify the Cayley-Hamilton theorem for A = .
−2 3

−λ 1
Solution. The characteristic equation for A is = 0 ⇒ −3λ + λ2 + 2 = 0
−2 3 − λ
2
Thus according
     theorem A − 3A + 2I = 0.
to the Cayley-Hamilton
0 1 0 1 −2 3
A2 = =
 −2
 3  −2 3  −6 7  
−2 3 0 1 1 0 0 0
−3 +2 =
−6 7 −2 3 0 1 0 0  
0 1
Thus the Cayley Hamilton theorem is verified for A = .
−2 3

5
UPSC Civil Services Main 1990 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

June 14, 2007

1 Linear Algebra
Question 1(a) State any definition of the determinant of an n×n matrix and show that the
determinant function is multiplicative i.e. det AB = det A det B for any two n × n matrices
A, B. You may assume the matrices to be real.
Solution. Let π be a permutation of 1, . . . , n. Define sign(π) as follows: count the number
of pairs of numbers that need to be interchanged to get to π from the identity permutation.
If this is even, the sign is 1, and if it is odd, the sign is −1. Now if Π is the set of all
permutations of 1, . . . , n, define
X Y
det A = sign(π) aiπ(i)
π∈Π i

where aij are the elements of A.


Note that the det A is n-linear i.e. if we perform any row or column operation on A the
determinant is unchanged. Also, if any two rows are swapped, the sign of the determinant
changes. These are simple consequences of the above definition.
Consider the 2n × 2n matrix

a11 a12 . . . a1n 0 . . . 0

 a21 a22 . . . a2n 0 . . . 0 
 . .. .. .. .. 
 .
 . . . . . 

a a . . . ann 0 . . . 0 
 
P =  n1 n2
 −1 0 . . . 0 b11 . . . b1n 

 0 −1 . . . 0 b21 . . . b2n 
 
 . .. .. .. .. 
 .. . . . . 
0 0 . . . −1 bn1 . . . bnn

1
Then det P = det A det B, because if for any permutation π, π(i) > n for i ≤ n, then
the corresponding element of the sum is 0 as aiπ(i) = 0. Thus π(i) ≤ n if i ≤ n, and
consequently π(j) > n if j > n. So each permutation consists of a permutation of 1, . . . , n
and a permutation of n + 1, . . . , 2n, consequently we can factor the sum, to get det P =
det A det B.
Now we perform a series of column operations to P — add b11 C1 + . . . + bn1 Cn to Cn+1 ,
to get 
a11 a12 . . . a1n c11 0 . . . 0

 a21 a22 . . . a2n c21 0 . . . 0 
 . .. .. .. .. 
 .
 . . . . . 

an1 an2 . . . ann cn1 0 . . . 0 
 
 −1 0 . . . 0 0 b12 . . . b1n 
 
 0 −1 . . . 0 0 b22 . . . b2n 
 
 . .. .. .. .. .. 
 .. . . . . . 
0 0 . . . −1 0 bn2 . . . bnn
where C = AB = (cij ). Similarly add b12 C1 + . . . + bn2 Cn to Cn+2 , . . ., b1n C1 + . . . + bnn Cn
to C2n to get  
 A  C
 −1 0 . . . 0 
 
 0 −1 . . . 0  
  0
 ...  
0 0 . . . −1
We can now verify that det P = det C. Any permutation π that leads to a non-zero term
in the determinant sum must have π(j) = j − n for j > n, thus piπ(i) = −1, i > n. Also
π(j) > n for j ≤ n, so any such π can be written as a permutation of 1, . . . , n followed by a
series of swaps of the i-th number with the (n + i)-th number, which is n + i. Also sign(π)
is the same as the sign of the corresponding permutation π 0 of 1, . . . , n — we first do π 0 by
exchanges and then additionally swap the i-th element with the (i + n)-th element, for each
i ≤ n. Now if n is even, this involves an even number of additional swaps, and multiply by
(−1)n corresponding to piπ(i) for i > n, otherwise we get an odd number of additional swaps,
flipping the sign, but we still multiply by (−1)n = −1.
Thus det P = det C = det A det B.

Question 1(b) Prove Laplace’s formula for simulataneous expansion of the determinant by
the first row and column; that given an (n+1)×(n+1) matrix in the block form M = αγ D β

,
where α is a scalar, β is a 1 × n matrix (a row vector), γ is a n × 1 matrix (a column vector),
and D is an n × n matrix, then det M = α det D − βD0 γ 0 , where D0 is the matrix of cofactors
of D and βD0 γ 0 stands for the matrix product of size 1 × 1.

Solution. Let M = (aij ), 1 ≤ i, j ≤ n + 1. Thus α = a11 , β = (a12 . . . a1,n+1 ),

2
   
a21 a22 . . . a2,n+1
γ =  ...  and D =  ... ..
.
   
.
an+1,1 an+1,2 . . . an+1,n+1
det M = a11 |A11 | − a12 |A12 | + . . . + (−1)n a1,n+1 |A1,n+1 | where Aij is the minor corre-
sponding to aij (formed
Pby deleting the i-th row and j-th column of A). Clearly D = A11 ,
n+1
so det M = α det D − j=2 (−1)j a1j det A1j . Now

a21 a22 ... a2,j−1 a2,j+1 ... a2,n+1


a31 a32 ... a3,j−1 a3,j+1 ... a3,n+1
|A1j | = .. .. .. .. ..
. . . . .
an+1,1 an+1,2 . . . an+1,j−1 an+1,j+1 . . . an+1,n+1

Let Bij be the minor of aij in D. Expanding |A1j | in terms of the first column, we get

|A1j | = a21 |B2j | − a31 |B3j | + . . . + (−1)n+1 an+1,1 |Bn+1,1 |

n+1 X
X n+1
det M = α det D − a1j ai1 |Bij |(−1)i (−1)j
j=2 i=2
 
a21
= α det D − (a12 a13 . . . a1,n+1 )(cij )  ... 
 
an+1,1
= α det D − βD0 γ

where cij = (−1)i+j |Bij |, thus D0 = (cij ) is the matrix of cofactors of D.

Question 1(c) For M as in 1(b), if D is invertible, show that det M = det D(α − βD−1 γ).

Solution. If D is invertible, then DD0 = D0 D = (det D)I ⇒ D0 = D−1 det D. So


det M = α det D − βD0 γ = α det D − βD−1 det Dγ = det D(α − βD−1 γ).

Question 2(a) Write the definition of the characteristic polynomial, eigenvalues and eigen-
vectors of a square matrix. Also say briefly something about the importance and/or applica-
tions of these notions.

Solution. Let A be an n × n real or complex matrix. The polynomial |xIn − A| is called


the characteristic polynomial of A. The roots of this polynomial are called the eigenvalues
of A. If λ is an eigenvalue of A, then all the non-zero vectors x such that Ax = λx are
called eigenvectors of A corresponding to λ.
Many problems in mathematics and other sciences require finding eigenvalues and eigen-
vectors of an operator.

3
• Eigenvalues can be used to find a very simple matrix for an operator — either diagonal
or a block diagonal form. This can be used to compute powers of matrices quickly.

• If one wishes to solve a linear differential system like x0 = Ax, or study the local
properties of a nonlinear system, finding the diagonal form of the matrix can give us
a decoupled form of the system, allowing us to find the solution or understand its
qualitative behavior, like its stability and oscillatory behavior.

• The calculation of Google’s Pagerank is essentially the computation of the principal


eigenvector (corresponding to the eigenvalue with the largest absolute value) of a very
large matrix (the adjacency matrix of the web graph) — this is used to find the relative
importance of documents on the World Wide Web. Similar calculations are used to
compute the stationary distribution of a Markov system.

• In mechanics, the eigenvectors of the inertia tensor are used to define the principal
axes of a rigid body, which are important in analyzing the rotation of the rigid body.

• Eigenvalues can be used to compute low rank approximations to matrices, which help
in reducing the dimensionality of various problems. This is used in statistics and
operations research to explain a large number of observables in terms of a few hidden
variables + noise.

• Eigenvalues can help us determine the form of a quadric or higher dimensional surface
— see the relevant section in year 1999.

• In quantum mechanics, states are represented by unit vectors, while observable quan-
tities (like position and energy) are represented by Hermitian matrices. The basic
problem in any quantum system is the determination of the eigenvalues and eigenvec-
tors of the energy matrix. The eigenvalues are the observed values of the observable
quantity, and discreteness of the eigenvalues leads to the quantization of the observed
values.

Question 2(b) Show that a Hermitian matrix possesses a set of eigenvectors which form
an orthonormal basis. State briefly how or why a general n × n complex matrix may fail to
possess n linearly independent eigenvectors.

Solution. Let H be Hermitian, and λ1 , . . . , λn its eigenvalues, not necessarily distinct. Let
x1 with norm 1 be an eigenvector corresponding to λ1 . Then there exists (from a result
analogous to the result used in question 3(a), year 1995) a unitary matrix U such that x1 is
its first column. Therefore
 
−1 0 λ1 L
U1 HU1 = U1 HU1 =
0 H1

4
0
where H1 is (n − 1) × (n − 1) and L is (n − 1) × 1. Since U1 HU1 is Hermitian, it follows
that L = 0. Consequently  
0 λ1 0
U1 HU1 =
0 H1
Now H1 is Hermitian with eigenvalues λ2 , . . . , λn . Repeating the above argument, we find
U∗2 an (n − 1) × (n − 1) unitary matrix such that
 
0 ∗ λ 2 0
U∗2 H1 U2 =
0 H2
 
1 0
If U2 = then U2 is unitary, and
0 U∗2
 
λ1 0 0
0 0
U2 U1 HU1 U2 =  0 λ2 0 
0 0 H2
Repeating this process or by induction, we can get U unitary such that
 
λ1 · · · 0
0  .. .. 
U HU =  . .
0 ··· λn

If U = [C1 , C2 , . . . , Cn ], then C1 , C2 , . . . , Cn are eigenvectors of H and form an orthonormal


system.
A complex matrix A would fail to have n eigenvectors which are linearly independent
if A is not diagonalizable i.e. we cannot find P such that P−1 AP is a diagonal matrix.
For example if A = 10 1c , c 6= 0, and x1, x2 are two independent eigenvectors of A, then
−1 1 0 1 0

P = [x1 x2 ] would lead to P AP = 0 1 ⇒ A = 0 1 which is false.

Question 2(c) Define the minimal polynomial and show that a complex matrix is diago-
nalizable (i.e. conjugate to a diagonal matrix) if and only if the minimal polynomial has no
repeated root.

Solution. Given a complex n × n complex matrix A, if f (x) is a nonzero polynomial with


complex coefficients of least degree such that f (A) = 0, then f (x) is called the minimal
polynomial of A. The Cayley-Hamilton therem tells us that any n × n complex matrix A
satisfies the degree n polynomial equation |A − xI| = 0, so the minimal polynomial exists
and is of degree ≤ n.
A complex n × n matrix can be thought of as a linear transformation from Cn to Cn . Let
T : V −→ V, dim V Q = n. Let the minimal polynomial of T be p(x), having distinct roots
c1 , . . . , ck , so p(x) = kj=1 (x − cj ). We shall show that T is diagonalizable.
If k = 1, then the minimal polynomial is x−c, thus T−cI = 0, so T = cI is diagonalizable.
So assume k > 1.

5
k
x−ci
Q
Consider the polymonials pj = cj −ci
. Clearly pj (ci ) = 0 for i 6= j, and pi (ci ) = 1. This
i=1
i6=j
implies that the polynomials p1 , . . . , pk are linearly independent, and each one is of degree
k − 1 < k. Thus these form a basis of the space Pof polynomials of degree ≤ k − 1. Thus
k
given any polynomial g of degree ≤ k − 1, g = i=1 αi pi , where αi = g(ci ). In particular,
1 = ki=1 pi , x = ki=1 ci pi . Thus
P P

k
X k
X
I= pi (T), T = ci pi (T)
i=1 i=1

Moreover pi (T)pj (T) = 0, i 6= j because pi (x)pj (x) is divisible by the minimal polynomial
of T. Also pj (T) 6= 0, 1 ≤ j ≤ k, because the degree of pj is less than k, the degree of the
minimal polynomial of T.
Set Vi = pi (T)V, then V = I(V) = ki=1 pi (T)V = V1 + . . . + Vk . We shall now show that
P
Vi = Vci , the eigenspace of T with respect to ci .
v ∈ Vi ⇒ v = pi (T)w for some w ∈ V. Since (x−ci )pi is divisible by p, (T−ci I)v = 0, so
Tv = ci v so v ∈ Vci . Conversely, if v ∈ Vci , then Tv = ci v, or (T−ci I)v = 0 ⇒ pj (T)v = 0
for j 6= i. Since
Pv = pi (T)v + . . . + pk (T)v, we get v = pi (T)v ⇒ v ∈ Vi .
Thus V = ki=1 Vci so V has a basis consisting of eigenvectors, so T is diagonalizable.
Conversely let T be diagonalizable, then we shall show that the minimal polynomial of
T has distinct roots. Let  
λ1 0 . . . 0
 0 λ2 . . . 0 
−1
P TP =  ..
 
.. 
. .
0 0 . . . λn
and out of λ1 , . . . , λn , let λ1 , . . . , λk be distinct. Let g(x) = (x − λ1 ) . . . (x − λk ). Then
v ∈ V ⇒ v = v1 + . . . + vk where vi ∈ Vλi , the eigenspace of λi . Thus g(T)(v) = 0, so
g(T) = 0. Thus g(x) is divisible by the minimal polynomial of T. Since g(x) has all distinct
roots, it immediately follows that the minimal polynomial also has all distinct roots.

a b is expressible in the form LDU, where
Question 3(a) Show  that a 2 × 2 matrix M = c d 
L has the form α1 01 , D is diagonal and U has the form 10 β1 If and only if either a 6= 0
or a = b = c = 0. Also show that when a 6= 0 the factorization M = LDU is unique.

Solution. Given M = ac db , suppose M = LDU = α1 01 a01 a02 10 β1 = aa11α a02 01 β1 =


     
a1 a1 β

a1 α a1 αβ+a2 . Thus M = LDU ⇒ a1 = a, a1 β = b, a1 α = c, a αβ + a = d. Thus if a = 0,
 010  1 β 2
0 0 1 0
then b = c = 0 and d = a2 , In this case, M = 0 d = α 1 0 d 0 1 whatever α, β may
be, i.e. M can be represented as LDU in infinitely many ways.
If a 6= 0, then a1 = a, β = ab , α = ac , a2 = d − bca are uniquely determined. Thus
 a 0  b
M = ac db = 1ac 01 0 d− bca 10 1a and has a unique representation.


6
   
Conversely, if M = 00 d0 , i.e. a = b = c = 0, then M = α1 01 00 d0 10 β1 for any
a 0  b 
α, β ∈ R. If M = ac db , a 6= 0, then M = LDU with L = 1ac 01 , D = 0 d− bca , U = 10 1a
 

as shown above.

Question 3(b) Suppose a real matrix has eigenvalue λ, possibly complex. Show that there
exists a real eigenvector for λ if and only if λ is real.

Solution. If λ is real, then the n × n matrix A − λI defines a linear transformation from


Rn to Rn . Since |A − λI| = 0, the rows are linearly dependent, so there exists x ∈ Rn , x 6= 0
such that (A − λI)x = 0 ⇒ Ax = λx. Thus there exists a real eigenvector for λ.
Conversely, suppose Ax = λx, x ∈ Rn , x 6= 0. then λx = Ax = Ax = λx = λx ⇒
(λ − λ)x = 0 ⇒ λ − λ = 0 ∵ x 6= 0 ⇒ λ = λ i.e. λ is real.

n
Question 3(c)  If a 2×2 matrix
 A has order n, i.e. A = I2 , then show that A is conjugate
cos θ sin θ
to the matrix where θ = 2πm for some integer m.
− sin θ cos θ n

Solution. Note: A has to be real, otherwise the result  is false: if α1 , α2 are two distinct
α1 0
n-th roots of unitysuch that α1 6= α2 , then A = 0 α2 has order n, but A is not conjugate
cos θ sin θ
to whose eigenvalues are complex conjugates of each other.
− sin θ cos θ
An = I ⇒ eigenvalues of A are n-th roots of unity. If A has repeated eigenvalues, then
these can be 1 or −1, because eigenvalues of real matrices are complex conjugates of each
other, so the repeated eigenvalues must be real, and they also must be roots of 1.
−1

1 c .
Case 1: A has eigenvalues 1, 1. There exists P non-singular such that P AP = 0 1
Now (P−1 AP)n = 10 nc = P−1An P = P−1 I2 P = I2 , so nc = 0 ⇒ c = 0. Thus A is

1
cos θ sin θ
conjugate to I2 = , θ = 2πn .
− sin θ cos θ n

Case 2: A has eigenvalues −1, −1. There exists P non-singular such that P−1 AP =
−1 c −1 n −1 nc
 1 nc
 n
0 −1 . Now P A P = 0 −1 or 0 1 , according as nis odd or even.  But A = I2 ,
cos θ sin θ
therefore n is even and c = 0. Thus A is conjugate to −I2 = , θ = 2πm ,m =
− sin θ cos θ n
n
2
.
Case 3: A hasdistinct eigenvalues λ1 , λ2 . Then λ1 = λ2 . If λ1 = cos θ + i sin θ, with
cos θ sin θ cos θ − λ sin θ
θ = 2πm , set B = . The eigenvalues of B are roots of =
n − sin θ cos θ − sin θ cos θ − λ
0 ⇒ λ = cos θ ± i sin  θ. Since A and B have the same eigenvalues λ1 , λ2 distinct, both are
λ1 0
conjugate to 0 λ2 and are therefore conjugate to each other.

7
UPSC Civil Services Main 1991 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

June 14, 2007

Question 1(a) Let V(R) be the real vector space of all 2×3 matrices with real entries. Find
a basis of V(R). What is the dimension of V(R).
     
1 0 0 0 1 0 0 0 1
Solution. Let A1 = , A2 = , A3 =
  0 0 0  0 0 0  0 0 0
0 0 0 0 0 0 0 0 0
and B1 = , B2 = , B3 = . Clearly Ai , Bi , i = 1, 2, 3 ∈
1 0 0 0 1 0 0 0 1
V(R). These generate V(R) because
 
a1 a2 a3
A= = a1 A1 + a2 A2 + a3 A3 + b1 B1 + b2 B2 + b3 B3
b1 b2 b3

for any arbitrary element A ∈ V(R).


 They  are linearly independent because if the RHS in the above equation was equal to
0 0 0
, then ai = 0, bi = 0 for i = 1, 2, 3. Thus Ai , Bi , i = 1, 2, 3 is a basis for V(R) and
0 0 0
the dimension of V(R) is 6.

Question 1(b) Let C be the field of complex numbers and let T be the function from C3 to
C3 defined by

T(x1 , x2 , x3 ) = (x1 − x2 + 2x3 , 2x1 + x2 , −x1 − 2x2 + 2x3 )

1. Verify that T is a linear transformation.

2. If (a, b, c) ∈ C3 , what are the conditions on a, b, c so that (a, b, c) is in the range of T?


What is the rank of T?

1
3. What are the conditions on a, b, c so that (a, b, c) is in the null space of T? What is
the nullity of T?

Solution. T(e1 ) = (1, 2, −1), T(e2 ) = (−1, 1, −2), T(e3 ) = (2, 0, 2). Clearly T(e1 ) and
T(e3 ) are linearly independent. If

(−1, 1, 2) = α(1, 2, −1) + β(2, 0, 2)

then α+2β = −1, 2α = 1, −α+2β = −2, so α = 12 , β = − 43 , so T(e2 ) is a linear combination


of T(e1 ) and T(e3 ). Thus rank of T is 2, nullity of T is 1.
If (a, b, c) is in the range of T, then (a, b, c) = α(1, 2, −1) + β(2, 0, 2). Thus α + 2β =
a− b
a, 2α = b, −α + 2β = c. From the first two equations, α = 2b , β = 2 2 . The equations would
be consistent if − 2b + a − 2b = c, or a = b + c. So the condition for (a, b, c) to belong to the
range of T is a = b + c.
If (a, b, c) ∈ null space of T, then a − b + 2c = 0, 2a + b = 0, −a − 2b + 2c = 0. Thus
3a + 2c = 0, so a = − 2c 3
, b = 4c
3
. Thus the conditions for (a, b, c) to belong to the null space of
T are 3a + 2c = 0, 3b = 4c. Thus the null space consists of the vectors {(− 2c , 4c , c) | c ∈ R},
3 3
showing that the nullity of T is 1.

Question 1(c) If A = 1 2
1 3 , express A6 − 4A5 + 8A4 − 12A3 + 14A2 as a linear polynomial
in A.

Solution. Characteristic polynomial of A is 1−λ 2 2


1 3−λ = (λ − 3)(λ − 1) − 2 = λ − 4λ + 1.
By the Cayley Hamilton theorem, A2 − 4A + I = 0. Dividing the given polynomial by
A2 − 4A + I, we have

A6 − 4A5 + 8A4 − 12A3 + 14A2


= A4 (A2 − 4A + I) + 7A4 − 12A3 + 14A2
= (A4 + 7A2 )(A2 − 4A + I) + 16A3 + 7A2
= (A4 + 7A2 + 16A)(A2 − 4A + I) + 71A2 − 16A
= (A4 + 7A2 + 16A + 71I)(A2 − 4A + I) + 268A − 71I

Since A2 − 4A + I = 0, A6 − 4A5 + 8A4 − 12A3 + 14A2 = 268A − 71I.

Question 2(a) Let T : R2 −→ R2 be a linear transformation defined by T(x1 , x2 ) =


(−x2 , x1 ).

1. What is the matrix of T in the standard basis of R2 ?

2. What is the matrix of T in the ordered basis B = {α1 , α2 } where α1 = (1, 2), α2 =
(1, −1)?

2
Solution.  T(e1 ) = (0, 1) = e2 , T(e2 ) = (−1, 0) = −e1. Thus (T(e1 ), T(e2 )) =
(e1 e2 ) 01 −1 0 −1
0 . So the matrix of T in the standard basis is 1 0 .
T(α1 ) = (−2, 1), T(α2 ) = (1, 1). If (a, b) = xα1 + yα2 , then x + y = a, 2x − y = b, so
x = a+b
3
, y = 2a−b
3
. This shows that
T(α1 ) = (−2, 1) = − 13 α1 − 35 α2
T(α2 ) = (1, 1) = 23 α1 + 13 α2
 1 2
−3 3
Thus (T(α1 ) T(α2 )) = (α1 α2 ) 5 1 . Consequently the matrix of T in the ordered
 1 2 − 3 3
−3 3
basis B is .
− 53 31

Question 2(b)
 Determine
 a non-singular matrix P such that P0 AP is a diagonal matrix,
0 1 2
where A = 1
 0 3. Is the matrix congruent to a diagonal matrix? Justify your answer.
2 3 0
Solution. The quadratic form associated with A is Q(x, y, z) = 2xy + 4xz + 6yz. Let
x = X, y = X + Y, z = Z (thus X = x, Y = y − x, Z = z). Then
Q(X, Y, Z) = 2X 2 + 2XY + 4XZ + 6XZ + 6Y Z
= 2X 2 + 2XY + 10XZ + 6Y Z
Y 5 2 Y 2 25 2
= 2(X + + Z) − − Z +YZ
2 2 2 2
Y 5 2 1
= 2(X + + Z) − (Y − Z)2 − 12Z 2
2 2 2
Put
Y 5 x y 5z
ξ = X+ + Z= + +
2 2 2 2 2
η = Y − Z = −x + y − z
ζ = Z=z
   1 1 5 −1   
1 − 21 −3
 
x 2 2 2
ξ ξ
y  = −1 1 −1 η  = 1 1 −2 η 
2
z 0 0 1 ζ 0 0 1 ζ
Q(x, y, z) transforms to 2ξ 2 − 21 η 2 − 12ζ 2 . Thus
 
2 0 0
P0 AP = 0 − 21 0 
0 0 −12
1 − 12 −3
 

with P = 1 21 −2 Clearly A is congruent to a diagonal matrix as shown above.


0 0 1

3
Question 2(c) Reduce the matrix
 
1 3 4 −5
−2 −5 −10 16 
 
5 9 33 −68
4 7 30 −78

to echelon form by elementary row transformations.

Solution. Let the given matrix be A. Operations R2 + 2R1 , R3 − 5R1 , R4 − 4R1 ⇒


 
1 3 4 −5
0 1 −2 6 
A≈ 0 −6 13

−43
0 −5 14 −58

Operations R3 + 6R2 , R4 + 5R2 ⇒


 
1 3 4 −5
0 1 −2 6 
A≈ 
0 0 1 −7 
0 0 4 −28

Operations R4 − 4R3 ⇒  
1 3 4 −5
0 1 −2 6 
A≈ 
0 0 1 −7
0 0 0 0
Operation R1 − 3R2 ⇒  
1 0 10 −23
0 1 −2 6 
A≈ 
0 0 1 −7 
0 0 0 0
Operations R1 − 10R3 , R2 + 2R3 ⇒
 
1 0 0 47
0 1 0 −8
A≈ 
0 0 1 −7
0 0 0 0

which is the required row echelon form. The rank of A is 3.

4
Question 3(a) U is an n-rowed unitary matrix such that |I − U| = 6 0, show that the matrix
H defined by iH = (I + U)(I − U)−1 is Hermitian. If eiα1 , . . . , eiαn are the eigenvalues of U
then cot α21 , . . . , cot α2n are eigenvalues of H.
Solution.
(iH)(I − U) = (I + U)
0 0 0
⇒ (I − U )(iH) = (I + U )

0 0 0 0
Substituting I = U U, we have from the second equation that U (U − I)(iH) = U (U + I).
0 0 0
So (iH) = −iH = −(I + U)(I − U)−1 = −iH, so H = H, thus H is Hermitian.
If an eigenvalue of a nonsingular matrix A is λ, then λ−1 is an eigenvalue of A−1 ∵ Ax =
λx ⇒ λ−1 x = A−1 x, note that λ 6= 0 ∵ |A| =6 0. Thus the eigenvalues of H are
1 1 + eiαj
,1 ≤ j ≤ n
i 1 − eiαj
eiαj /2 + e−iαj /2
= −i −iαj /2 ,1 ≤ j ≤ n
e − eiαj /2
eiαj /2 +e−iαj /2
2
= ,1 ≤ j ≤ n
e−iαj /2 −eiαj /2
2i
cot αj
= ,1 ≤ j ≤ n
2

Question 3(b) Let A be an n × n matrix with distinct eigenvalues λ1 , . . . , λn . Show that if


A is non-singular then there exist 2n matrices X such that X2 = A. What happens in case
A is a singular matrix?
Solution. There exists √P non-singular
√ such that P−1 AP = diagonal[λ1 , . . . , λn ].
Let Y1 = diagonal[ λ1 , . . . , λn ], and let X = PYP−1 . Then X2 = PYP−1 PYP−1 =
−1
PY2 P = A. Thus any of the 2n matrices √ formed
√ by choosing a sign for each of the
−1
diagonal entries from X = P diagonal[± λ1 , . . . , ± λn ] P has the same property (note
that they are all distinct).
If one of the eigenvalues is zero, the number of matrices X would become 2n−1 , since we
would have one less choice.
Question 3(c) Show that a real quadratic x0 Ax is positive definite if and only if there exists
a non-singular matrix B such that A = B0 B.
Solution. If A = B0 B, then x0 Ax = x0 B0 Bx = X0 X, where X = Bx. Now if x 6= 0, then
Bx 6= 0, as B is nonsingular, and 0 is not its eigenvalue. Thus x0 Ax = X0 X > 0, so x0 Ax
is positive definite.
Conversely, see the result used in the solution of question 2(c), year 1992.

5
UPSC Civil Services Main 1992 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

June 14, 2007

Question 1(a) Let U and V be vector spaces over a field K and let V be of finite dimension.
Let T : V −→ U be a linear transformation, prove that dim V = dim T(V) + dim nullity T.

Solution. See question 3(a), year 1998.

Question 1(b) Let S = {(x, y, z) | x + y + z = 0, x, y, z ∈ R}. Prove that S is a subspace


of R3 . Find a basis of S.

Solution. S = 6 ∅ because (0, 0, 0) ∈ S. If (x1 , y1 , z1 ), (x2 , y2 , z2 ) ∈ S then α1 (x1 , y1 , z1 ) +


α2 (x2 , y2 , z2 ) ∈ S because (α1 x1 + α2 x2 ) + (α1 y1 + α2 y2 ) + (α1 z1 + α2 z2 ) = α1 (x1 + y1 + z1 ) +
α2 (x2 + y2 + z2 ) = 0. Thus S is a subspace of R3 .
Clearly (1, 0, −1), (1, −1, 0) ∈ S and are linearly independent. Thus dim S ≥ 2. However
(1, 1, 1) 6∈ S, so S = 6 R3 . Thus dim S = 2 and {(1, 0, −1), (1, −1, 0)} is a basis for S.

Question 1(c) Which of the following are linear transformations?

1. T : R −→ R2 defined by T(x) = (2x, −x).

2. T : R2 −→ R3 defined by T(x, y) = (xy, y, x).

3. T : R2 −→ R3 defined by T(x, y) = (x + y, y, x).

4. T : R −→ R2 defined by T(x) = (1, −1).

Solution.

1
1.

T(αx + βy) = (2αx + 2βy, −αx − βy)


= (2αx, −αx) + (2βy, −βy)
= αT(x) + βT(y)

Thus T is a linear transformation.

2. T(2(1, 1)) = T(2, 2) = (4, 2, 2) 6= 2T(1, 1) = 2(1, 1, 1) Thus T is not a linear transfor-
mation.

3.

T(α(x1 , y1 ) + β(x2 + y2 )) = T(αx1 + βx2 , αy1 + βy2 )


= (αx1 + βx2 + αy1 + βy2 , αy1 + βy2 , αx1 + βx2 )
= α(x1 + y1 , y1 , x1 ) + β(x2 + y2 , y2 , x2 )
= αT(x1 , y1 ) + βT(x2 , y2 )

Thus T is a linear transformation.

4. T(2(0, 0)) = T(0, 0) = (1, −1) 6= 2T(0, 0) Thus T is not a linear transformation.

Question 2(a) Let T : M2,1 −→ M2,3 be a linear transformation defined by (with the usual
notation)        
1 2 1 3 1 6 1 0
T = ,T =
0 4 1 5 1 0 0 2
 
x
Find T .
y

Solution.
       
x 1 1 1
= x −y +y
y 0 0 1
       
x 2 1 3 6 1 0 2x + 4y x 3x − 3y
T = (x − y) +y =
y 4 1 5 0 0 2 4x − 4y x − y 5x − 3y

2
Question 2(b) For what values of η do the following equations

x+y+z = 1
x + 2y + 4z = η
x + 4y + 10z = η 2

have a solution? Solve them in each case.


1 1 1 
Solution. Since the determinant of the coefficient matrix 1 2 4 is 0, the system has to
1 4 10
be consistent to be solvable.
Clearly x + 4y + 10z = 3(x + 2y + 4z) − 2(x + y + z). Thus for the system to be consistent
we must have η 2 = 3η − 2, or η = 1, 2.
If η = 1, then x + y + z = 1, x + 2y + 4z = 1 so y + 3z = 0, or y = −3z, x = 1 + 2z. Thus
the space of solutions is {(1 + 2z, −3z, z) | z ∈ R}. Note that the rank of the coefficient
matrix is 2, and consequently the space of solutions is one dimensional.
If η = 2, then x + y + z = 1, x + 2y + 4z = 2, so y + 3z = 1 or y = 1 − 3z, hence x = 2z.
Consequently, the space of solutions is {(2z, 1 − 3z, z) | z ∈ R}.

Question 2(c) Prove that a necessary and sufficient condition of a real quadratic form
x0 Ax to be positive definite is that the leading principal minors of A are all positive.

Solution. Let all the principal minors be positive. We have to prove that the quadratic
form is positive definite. We prove the result by induction.
If n = 1, then a11 x2 > 0 ⇔ a11 > 0. Suppose as induction hypothesis the result is true
B B1 
for n = m. Let S = B01 k be a matrix of a quadratic form in m + 1 variables, where
B is m × m, B1 is m × 1 and k is a single element. Since all principle minors of B are
leading principal minors of S, and are hence positive, the induction hypothesis gives that B
is positive definite. This means that there exists a non-singular m × m matrix P such that
P0 BP = Im (We shall prove this presently). Let C be an m-rowed column to be determined
soon. Then
 0 
P0 BP P0 BC + P0 B1
   
P 0 B B1 P C
=
C0 1 B0 1 k 0 1 C0 B0 P + B01 P C0 BC + C0 B1 + B0 1 C + k

Let C be so chosen that BC + B1 = 0, or C = −B−1 B1 . Then


 0     0 
P 0 B B1 P C P BP 0
=
C0 1 B0 1 k 0 1 0 B0 1 C + k

Taking determinants, we get |P0 ||S||P| = B0 1 C + k, because P0 BP = Im , and B0 1 C + k is


0 0 2
a single element. Since |S|
 > 0,itfollows that
 B 1 C + k > 0, so let B 1 C + k = α . Then
P C Im 0
Q0 SQ = Im+1 with Q = . Thus the quadratic forms of S and Im+1 take
0 1 0 α−1
the same values. Hence S is positive definite, so the condition is sufficient.

3
The condition is necessary - Since x0 Ax is positive definite, there is a non-singular matrix
P such that P0 AP = I ⇒ |A||P|2 = 1 ⇒ |A| > 0.
Let 1 ≤ r < n. Let xr+1 = . . . = xn = 0, then we obtain a quadratic form in r variables
which is positive definite. Clearly the determinant of this quadratic form is the r×r principal
minor of A which shows the result.
Proof of the result used: Let A be positive definite, then there exists a non-singular P
such that P0 AP = I.
We will prove this by induction. If n = 1, then the form corresponding to A is a11 x2 and

a11 > 0, so that P = ( a11 ).
Take  
1 −a−1 11 a12 0 ... 0
0 
P1 =  ..
 

. (n − 1) × (n − 1) 
0
then  
a11 0 a13 ... a1n
 0 
 
P01 AP1 =  a13
 

 .. 
 . (n − 1) × (n − 1) 
a1n
Repeating this process, we get a non-singular Q such that
 
a11 0 ... 0
Q0 AQ =  ...
 
(n − 1) × (n − 1) 
0

Given the (n − 1) × (n − 1) matrix on the lower right, we get by induction P∗ s.t. P∗ 0 ((n −
1) × (n − 1) matrix)P∗ is diagonal. Thus ∃P, |P| 6= 0, P0 AP = [α1 , . . . , αn ] say. Take R =
√ √
diagonal[ α1 , . . . , αn ], then R0 P0 APR = In .

2 1

Question 3(a) State the Cayley-Hamilton theorem and use it to find the inverse of 4 3 .

Solution. Let A be an n × n matrix. If |λI − A| = λn + a1 λn−1 + . . . + an = 0 is the


characteristic equation of A, then the Cayley-Hamilton theorem says that An + a1 An−1 +
. . . + an I = 0 i.e. a matrix satisfies its characteristic
 equation.
The characteristic equation of A = 4 3 is2 1

2−λ 1
= λ2 − 5λ + 2 = 0
4 3−λ

4
By the Cayley-Hamilton theorem, A2 − 5A + 2I = 0, so A(A − 5I) = −2I, thus A−1 =
− 12 (A − 5I). Thus
  3
− 21
   
−1 1 2 1 5 0
A =− − = 2
2 4 3 0 5 −2 1

Question 3(b) Transform the following into diagonal form


x2 + 2xy, 8x2 − 4xy + 5y 2
and give the transformation employed.
8 −2
 
Solution. Let A = 11 10 , B = −2 5

1 − 8λ 1 + 2λ
Let 0 = |A − λB| = = −5λ + 40λ2 − 4λ2 − 4λ − 1
1 + 2λ −5λ

Thus 36λ2 − 9λ − 1 = 0, so λ = 9± 81+144 72
= 13 , − 12
1
. 
x1 1 5 5
Let (x1 , x2 ) be the vector such that (A − λB) x2 = 0 with λ = 3 . Thus − 3 x1 + 3 x2 =
0 ⇒ x1 = x2 . We take x1 = 11 so that (A − λB)x1 = 0 with λ = 31 . Similarly, if (x1 , x2 ) is


the vector such that  (A − λB) xx12 = 0 with λ = − 12 1


, then 53 x1 + 56 x2 = 0, so 2x1 + x2 = 0.
1
We take x2 = −2 .
Now
x01 Ax1 = ( 1 1 ) 11 10 11 = ( 1 1 ) 21 = 3
  

x02 Ax2 = ( 1 −2 ) 11 10 −2 = ( 1 −2 ) −1
 1  
1 = −3
x01 Ax2 = ( 1 1 ) 11 10 −2 = ( 1 1 ) −1
 1
 
1 =0
If P = (x1 x2 ), then P0 AP = 30 −3
 
0
, thus x2 + 2xy ≈ 3X 2 − 3Y 2 by P = 11 −2 1
.
Similarly
x01 Bx1 = ( 1 1 ) −2 8 −2
 1 6

5 1 = (1 1) 3 = 9
x02 Bx2 = ( 1 −2 ) −2 8 −2
 1  12

= ( 1 −2 ) −12 = 36
0 8 −2
5
 1−2 12

x1 Bx2 = ( 1 1 ) −2 5 −2 = ( 1 1 ) −12 = 0

Thus P0 BP = 90 36 0 , so 8x2 − 4xy + 5y 2 is transformed to 9X 2 + 36Y 2 by X = P x


  
Y y

Question 3(c) Prove that the characteristic roots of a Hermitian matrix are all real, and
the characteristic roots of a skew Hermitian matrix are all zero or pure imaginary.
Solution. For Hermitian matrices, see question 2(c), year 1995.
0 0
If H is skew-Hermitian, then iH is Hermitian, because (iH) = iH = −iH = iH as
0
H = −H . Thus the eigenvalues of iH are real. Therefore the eigenvalues of H are −ix
where x ∈ R. So they must be 0 (if x = 0) or pure imaginary.

5
UPSC Civil Services Main 1993 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

June 14, 2007

Question 1(a) Show that the set S = {(1, 0, 0), (1, 1, 0), (1, 1, 1), (0, 1, 0)} spans the vector
space R3 but is not a basis set.

Solution. The vectors (1, 0, 0), (0, 1, 0), (1, 1, 1) are linearly independent, because α(1, 0, 0)+
β(1, 1, 0) + γ(1, 1, 1) = 0 ⇒ α + γ = 0, β + γ = 0, γ = 0 ⇒ α = β = γ = 0.
Thus (1, 0, 0), (1, 1, 0), (1, 1, 1) is a basis of R3 , as dimR R3 = 3.
Any set containng a basis spans the space, so S spans R3 , but it is not a basis because
the four vectors are not linearly independent, in fact (1, 1, 0) = (1, 0, 0) + (0, 1, 0).

Question 1(b) Define rank and nullity of a linear transformation. If V is a finite dimen-
sional vector space and T is a linear operator on V such that rank T2 = rank T, then prove
that the null space of T is equal to the null space of T2 , and the intersection of the range
space and null space of T is the zero subspace of V.

Solution. The dimension of the image space T(V) is called rank of T. The dimension of
the vector space kernel of T = {v | T(v) = 0} is called the nullity of T.
Now v ∈ null space of T ⇒ T(v) = 0 ⇒ T2 (v) = 0 ⇒ v ∈ null space of T2 . Thus null
space of T ⊆ null space of T2 . But we are given that rank T = rank T2 , so therefore nullity
of T = nullity of T2 , because of the nullity theorem — rank T + nullity T = dim V. Thus
null space of T = null space of T2 .
Finally if v ∈ range of T, and v ∈ null space of T, then v = T(w) for some w ∈ V. Now

T2 (w) = T(v) ⇒ w ∈ null space of T2


⇒ w ∈ null space of T
⇒ 0 = T(w) = v

Thus range of T∩ null space of T = {0}.

1
2
Question 1(c) If the  matrix of a linear operator T on R relative to the standard basis
1 1
{(1, 0), (0, 1)} is 1 1 , find the matrix of T relative to the basis B = {(1, 1), (−1, 1)}.

Solution. Let v1 = (1, 1), v2 = (−1, 1). Then T(v1 ) = (11) 11 11 = (2, 2) = 2v1 .
T(v2 ) = (−11) 11 11 = (0, 0) = 0. So (T(v1 ), T(v2 ) = (v1 ) v2 ) 20 00 , so the matrix of T
relative to the basis B is 20 00 .

A−1
   
A 0 0
Question 2(a) Prove that the inverse of is where A, C are
B C −C−1 BA−1 C−1
 
1 0 0 0
1 1 0 0
nonsingular matrices. Hence find the inverse of 
1 1 1 0.

1 1 1 1

Solution.
A−1
    
A 0 0 I 0
−1 = = Identity matrix.
B C −C−1 BA C−1  BA−1 − BA−1 I 
A−1
  
0 A 0 I 0
−1 −1 −1 = −1 −1 = Identity matrix, which shows
−C BA C B C −C B + C B I
the result.
B = 11 11  . Then A−1 = C−1 = −1 and C−1 BA−1 =
1 0
  1 0

Let A = C =  1 1 and
 1
1 0 1 1 1 0 1 0 0 1 = 0 1 Thus
−1 1 1 1 −1 1 = −1 1 0 1 0 0

 −1  
1 0 0 0 1 0 0 0
1 1 0 0
  −1 1 0 0

1 =
 0 −1 1

1 1 0 0
1 1 1 1 0 0 −1 1

Question 2(b) If A is an orthogonal matrix with the property that −1 is not an eigenvalue,
then show that A = (I − S)(I + S)−1 for some skew symmetric matrix S.

Solution. We want S skew symmetric such that A(I + S) = I − S i.e. A + AS = I − S


or AS + S = I − A or (I + A)S = I − A. Let S = (I + A)−1 (I − A), note that I + A is
invertible because if |I + A| = 0, then −1 will be an eigenvalue of A.
Note that the two factors of S commute, because (I+A)(I−A) = I−A2 = (I−A)(I+A),
so (I − A)(I + A)−1 = (I + A)−1 (I − A).

2
Now
S0 = (I − A)0 ((I + A)−1 )0
= (I − A0 )(I + A0 )−1
= (AA0 − A0 )(A0 A + A0 )−1
−1
= (A − I)A0 A0 (A + I)−1
= −(I − A)(I + A)−1
= −(I + A)−1 (I − A)
= −S
Thus S is skew symmetric, so A = (I − S)(I + S)−1 where S = (I + A)−1 (I − A)

Question 2(c) Show that any two eigenvectors corresponding to distinct eigenvalues of (i)
Hermitian matrices (ii) unitary matrices are orthogonal.
Solution. We first prove that the eigenvalues of a Hermitian matrix, and therefore of a
symmetric matrix, are real.
Let H be Hermitian, and λ be one of its eigenvalues. Let x 6= 0 be an eigenvector
0 0
corresponding to λ. Thus Hx = λx, so x0 Hx = x0 λx. But (x0 Hx) = (x0 Hx)0 = x0 H x =
0
x0 Hx, because H = H. Note that (x0 Hx)0 = x0 Hx, since it is a single element, therefore
0
x0 Hx is real. Similarly x0 x 6= 0 is real, so λ = xxHx 0 x is real.

Let H be Hermitian, Hx1 = λ1 x1 , Hx2 = λ2 x2 with λ1 6= λ2 . Clearly x02 Hx1 = λ1 x02 x1 ,


0 0
x01 Hx2 = λ2 x01 x2 . But (x02 Hx1 )0 = x01 H x2 = λ1 x01 x2 . So λ2 x01 x2 = x01 Hx2 = x01 H x2 =
0
λ1 x01 x2 because H = H. Since λ1 6= λ2 , x01 x2 = 0, so x1 , x2 are orthogonal.
Let U be unitary, Ux1 = λ1 x1 , Ux2 = λ2 x2 , where λ1 , λ2 are distinct eigenvalues of
0 0
U with corresponding eigenvectors x1 , x2 . Thus x02 U Ux1 = λ2 x02 λ1 x1 . Since U U = I,
λ2 x02 λ1 x1 = x02 x1 , so (1 − λ2 λ1 )(x02 x1 ) = 0. But 1 − λ2 λ1 = λ2 λ2 − λ2 λ1 = λ2 (λ2 − λ1 ) 6= 01 .
Thus x02 x1 = 0, so x1 , x2 are orthogonal.

Question 3(a) A matrix B of order n is of the form λA, where λ is a scalar and A has 1
everywhere except the diagonal, which has µ. Find λ, µ so that B may be orthogonal.
 
µ 1 ... 1
 1 µ . . . 1
Solution. A =  . . .
. B = λA. Thus
... 
1 1 ... µ
  
λµ λ . . . λ λµ λ . . . λ
 λ λµ . . . λ   λ λµ . . . λ 
B0 B =  . . .
  = BB0 = B2
...  . . . ... 
λ λ . . . λµ λ λ . . . λµ
1
We used here the fact that all eigenvalues of a unitary matrix have modulus 1. If Ux = λx, then
0 0
x U = λx0 . Thus x0 U Ux = λλx0 x, so x0 x = λλx0 x. Now x0 x 6= 0, so λλ = 1.
0

3
Clearly each diagonal element of BB0 is λ2 µ2 + (n − 1)λ2 , and each nondiagonal element is
2λ2 µ + (n − 2)λ2 . Thus B will be orthogonal if 2λ2 µ + (n − 2)λ2 = 0, λ2 µ2 + (n − 1)λ2 = 1.
Since λ 6= 0, µ = 2−n
2
= 1 − n2 , and λ2 = (1− n )12 +n−1 = 1
n2
= n42 , thus λ = ± n2 .
2 1−n+ 4
+n−1

Question 3(b) Find the rank of the matrix


 
1 −1 3 6
A = 1 3 −3 −4
5 3 3 11

by reducing it to its normal form.

Solution.  
    1 0 0 0
1 −1 3 6 1 0 0 0 1 0 0
A = 1 3 −3 −4 = 0 1 0 A 
0

0 1 0
5 3 3 11 0 0 1
0 0 0 1
Operation C2 + C1 , C3 − 3C1 , C4 − 6C1 ⇒
 
    1 1 −3 −6
1 0 0 0 1 0 0 0
1 4 −6 −10 = 0 1 0 A  1 0 0
0 0 1 0
5 8 −12 −19 0 0 1
0 0 0 1

Operation R2 − R1 ⇒
 
    1 1 −3 −6
1 0 0 0 1 0 0 0
0 4 −6 −10 = −1 1 0 A  1 0 0
0 0 1 0
5 8 −12 −19 0 0 1
0 0 0 1

Operation R3 − 2R2 ⇒
 
  1
  1 −3 −6
1 0 0 0 1 0 0 0
0 4 −6 −10 = −1 1 0 A  1 0 0
0 0 1 0
5 0 0 1 2 −2 1
0 0 0 1

Interchanging C3 and C4 we get


 
  1
  1 −6 −3
1 0 0 0 1 0 0 0
0 4 −10 −6 = −1 1 0 A  1 0 0
0 0 0 1
5 0 1 0 2 −2 1
0 0 1 0

4
R3 − 5R1 ⇒
 
   1  1 −6 −3
1 0 0 0 1 0 0 0
0 4 −10 −6 = −1 1 0 A  1 0 0
0 0 0 1
0 0 1 0 −3 −2 1
0 0 1 0

Operation 41 R2 ⇒
 
    1 1 −6 −3
1 0 0 0 1 0 0
0 1 − 5
0 1 0 0
2
− 32  = − 14 41 0 A 
0

0 0 1
0 0 1 0 −3 −2 1
0 0 1 0

Operation C3 + 52 C2 , C4 + 32 C2 ⇒

1 − 27 − 23
 
    1
1 0 0 0 1 0 0
0 1 0 0 = − 1 1 0 A 
0 1 52 3 
2 
4 4 0 0 0 1 
0 0 1 0 −3 −2 1
0 0 1 0
   
1 0 0 0 1 0 0
Thus the normal form of A is 0 1 0 0 so rank A = 3. P = − 14 14 0 and
0 0 1 0 −3 −2 1
7 3
 
1 1 −2 −2
0 1 5 3 
Q= 2 2  and PAQ is the normal form.
0 0 0 1 
0 0 1 0

Question 3(c) Determine the following form as definite, semidefinite or indefinite

2x21 + 2x22 + 3x23 − 4x2 x3 − 4x3 x1 + 2x1 x2

Solution. Completing the squares of the given form (say Q(x1 , x2 , x3 )):
1 3
Q(x1 , x2 , x3 ) = 2(x1 + x2 − x3 )2 + x22 + x23 − 2x2 x3
2 2
1 1
= 2(x1 + x2 − x3 )2 + (x3 − x2 )2 + x22
2 2
Thus Q can be written as the sum of 3 squares with positive coefficients, so it is positive
definite.

5
UPSC Civil Services Main 1994 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

June 14, 2007

Question 1(a) Show that f1 (t) = 1, f2 (t) = t − 2, f3 (t) = (t − 2)2 forms a basis of P3 =
{Space of polynomials of degree ≤ 2}. Express 3t2 −5t+4 as a linear combination of f1 , f2 , f3 .

Solution. If α1 f1 + α2 f2 + α3 f3 ≡ 0, then α3 being the coefficient of t2 is equal to 0. Then


coefficient of t is α2 so it must be 0, hence α1 = 0. Thus f1 , f2 , f3 are linearly independent.
Since {1, t, t2 } is a basis for P3 , its dimension is 3, hence f1 , f2 , f3 is a basis of P3 .
Now by Taylor’s expansion p(t) = 3t2 − 5t + 4 = p(2) + p0 (2)(t − 2) + p”(2) 2!
(t − 2)2 =
6 + 7(t − 2) + 3(t − 2)2 = 6f1 + 7f2 + 3f3 .

Question 1(b) Let T : R4 −→ R3 be defined by

T(a, b, c, d) = (a − b + c + d, a + 2c − d, a + b + 3c − 3d), a, b, c, d ∈ R

Verify that rank(T) + nullity(T) = dim(V4 (R).

Solution. Let

T(1, 0, 0, 0) = (1, 1, 1) = v1
T(0, 1, 0, 0) = (−1, 0, 1) = v2
T(0, 0, 1, 0) = (1, 2, 3) = v3
T(0, 0, 0, 1) = (1, −1, −3) = v4

T(R4 ) is generated by v1 , v2 , v3 , v4 and therefore a maximal independent subset of v1 , v2 , v3 , v4


will form a basis of T(R4 ). v1 , v2 are linearly independent because if αv1 + βv2 = 0, then
α − β = 0, α = 0 so α = β = 0.
v3 is dependent on v1 , v2 , because if v3 = αv1 + βv2 , then α − β = 1, α = 2, α + β =
3 ⇒ α = 2, β = 1 ∴ v3 = 2v1 + v2 .

1
v4 is dependent on v1 , v2 , because if v4 = αv1 + βv2 , then α − β = 1, α = −1, α + β =
−3 ⇒ α = −1, β = −2 ∴ v4 = −v1 − 2v2 .
Thus v1 , v2 is a basis of T(R4 ), so rank T = 2.
Now (a, b, c, d) ∈ ker T ⇔ a − b + c + d = 0, a + 2c − d = 0, a + b + 3c − 3d = 0
Choosing particular values of a, b, c, d, we see that (1, 2, 0, 1), (−1, 1, 1, 1) ∈ ker T and are
linearly independent, so dim ker T ≥ 2. But (1, 2, 0, 1), (−1, 1, 1, 1) generate ker T, because
if (a, b, c, d) ∈ ker T, and (a, b, c, d) = α(1, 2, 0, 1) + β(−1, 1, 1, 1), then α − β = a, 2α + β =
b, β = c, α + β = d, so α = a + c, β = c and these satisfy the remaining equations 2α + β =
b, α + β = d, because (a, b, c, d) ∈ ker T and therefore a − b + c + d = 0, a + 2c − d = 0. Thus
(a, b, c, d) = (a + c)(1, 2, 0, 1) + c(−1, 1, 1, 1), so dim ker T = nullity T = 2
Hence rank T + nullity T = 4 = dim(R4 ), as required.

Question 1(c) If T is an operator on R3 whose basis is B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
such that  
0 1 1
[T : B] =  1 0 −1
−1 −1 0
find a matrix of T w.r.t. a basis B1 = {(0, 1, −1), (1, −1, 1), (−1, 1, 0)}.

Solution. The basis B is the standard basis, hence the representation of B1 in this basis is
as given. (Note that if B were some other basis, we would write B1 in that basis, and then
continue as below.) Let v1 = (0, 1, −1), v2 = (1, −1, 1), v3 = (−1, 1, 0). Then

T(v1 ) = (0, 1, −1) = v1


T(v2 ) = (0, 0, 0) = 0
T(v3 ) = (1, −1, 0) = −v3

Thus  
1 0 0
[T : B1 ] = 0 0 0 
0 0 −1
Note: The main idea behind the above solution is to express T(vi ) = ni=0 αii vi . Now
P
we solve for αii to get the matrix for T in the new basis.
An alternative is to compute P−1 [T : B]P, where P is given by [v1 , . . . , vn ] = [e1 , . . . , en ]P
or P = [v1 0 , . . . , vn 0 ]. Show that this is true.

Question 2(a) If A = haij i is an n × n matrix such that aii = n, aij = r if i 6= j, show that

[A − (n − r)I][A − (n − r + nr)I] = 0

Hence find the inverse of the n × n matrix B = hbij i where bii = 1, bij = ρ, i 6= j and
1
ρ 6= 1, ρ 6= 1−n .

2
Solution. Let C = A − (n − r)I, then every entry of C is r. Let D = A − (n − r + nr)I =
C − nrI. Thus CD = C2 − nrC. Each entry of C2 is nr2 , which is the same as each entry
of nrC, so CD = 0 as required.
The given equation implies

A[A − (2n − 2r + nr)I] = −(n − r)(n − r − nr)I

Let A = nB, where r = ρn. Thus A satisfies the conditions for the equation to hold, so
substituting A and r in the above equation

nB[nB − (2n − 2nρ + n2 ρ)I] = −(n − nρ)(n − nρ − n2 ρ)I


B[B − (2 − 2ρ + nρ)I] = −(1 − ρ)(1 − ρ − nρ)I
1
B−1 = − [B − (2 − 2ρ + nρ)I]
(1 − ρ)(1 − ρ − nρ)
1−2ρ+nρ
Thus the diagonal elements of B−1 are all (1−ρ)(1−ρ−nρ)
, while the off-diagonal elements are
2−3ρ+nρ
all (1−ρ)(1−ρ−nρ) .

Question 2(b) Prove that the eigenvectors corresponding to distinct eigenvalues of a square
matrix are linearly independent.

Solution. Let x1 , x2 , . . . , xr be eigenvectors corresponding to distinct eigenvalues λ1 , . . . , λr


of a matrix A.Q Let a1 x1 + . . . + ar xr = 0, ai ∈ R. We shall show that ai = 0, 1 ≤ Q i ≤ r.
Let L1 = ri=2 (A − λi I). Note that the factors of L1 commute. Thus L1 x2 = ri=3 (A −
λi I)(A − λ2 I)x2 = 0 because Ax2 = λ2 x2 . Similarly L1 x3 = . . . = L1 xr = 0. Moreover
L1 x1 = (λ1 − λ2 ) . . . (λ1 − λr )x1 .
Consequently

0 = L1 (a1 x1 + . . . + ar xr )
= a1 L 1 x 1
= a1 (λ1 − λ2 ) . . . (λ1 − λr )x1

λ1 − λi 6= 0, 2 ≤ i ≤ r, and Q
x1 6= 0 so a1 = 0.
Similarly taking Li = rj=1 (A − λj I), we show that ai = 0 for 1 ≤ i ≤ r. Thus
i6=j
x1 , x2 , . . . , xr are linearly independent.

Question 2(c) Determine the eigenvalues and eigenvectors of the matrix


 
3 1 4
A = 0 2 6
0 0 5

3
Solution. The characteristic polynomial of A is |λI − A| = (λ − 3)(λ − 2)(λ − 5).1 Thus
the eigenvalues of A are 3, 2, 5.
If x = (x1 , x2 , x3 ) is an eigenvector corresponding to λ = 3 then
  
0 1 4 x1
(A − 3I)x = 0 −1 6
   x2  = 0
0 0 2 x3

Thus x2 + 4x3 = 0, −x2 + 6x3 = 0, 2x3 = 0, take x1 = 1, x2 = 0 to get (1, 0, 0) as an


eigenvector for λ = 3. All the eigenvectors are (x1 , 0, 0), x1 6= 0.
If x = (x1 , x2 , x3 ) is an eigenvector corresponding to λ = 2 then
  
1 1 4 x1
(A − 2I)x = 0 0 6
   x2  = 0
0 0 3 x3

Thus x1 + x2 + 4x3 = 0, 6x3 = 0, 3x3 = 0, take x1 = 1 to get (1, −1, 0) as an eigenvector for
λ = 2. All the eigenvectors are (x1 , −x1 , 0), x1 6= 0.
If x = (x1 , x2 , x3 ) is an eigenvector corresponding to λ = 5 then
  
−2 1 4 x1
(A − 5I)x =  0 −3 6   x2  = 0
0 0 0 x3

Thus −2x1 + x2 + 4x3 = 0, −3x2 + 6x3 = 0, take x3 = 1 to get (3, 2, 1) as an eigenvector for
λ = 5. All eigenvectors are (3x3 , 2x3 , x3 ), x3 6= 0.

Question 3(a) Show that the matrix congruent to a skew symmetric matrix is skew sym-
metric. Use the result to prove that the determinant of a skew symmetric matrix of even
order is the square of a rational function of its elements.

Solution. Let B = P0 AP, be congruent to A, and A0 = −A. Then B0 = P0 A0 P =


−P0 AP = −B, so B is skew symmetric.
We prove the second result by induction on m, where n = 2m  is the order of the skew
0 a 2
symmetric matrix under consideration. If m = 1 then A = −a 0 , |A| = a , so the result is
true for m = 1.
Assume by induction that the result is true for all skew symmetric matrices of even order
< 2m. If A ≡ 0, there is nothing to prove. Otherwise there exists at least one non-zero
element aij . Changing the first row with row j, we move aij in the first row. Changing
1
Note that the determinant of an upper diagonal or a lower diagonal matrix is just the product of the
elements on the main diagonal.

4
column 1 and column j, we get −aij in the symmetric position. Now by multiplying the new
matrix by suitable elementary matrices on the left and right, we get
 
0 aij ∗ ∗ ... ∗
−aij 0 ∗ ∗ ... ∗
0
 
P AP =  ∗ ∗ 

 ... A2m−2 
∗ ∗
Now we can find P∗ a product of elementary matrices such that
 
0 aij 0 0 ... 0
−aij 0 0 0 ... 0
0
P∗ P0 APP∗ = 
 
 0 0 

 ... A2m−2 
0 0
Thus det A = determinant of a skew symmetric matrix of order 2 × determinant of a
skew symmetric matrix of order 2(m − 1). The induction hypothesis now gives the result.

Question 3(b) Find the rank of the matrix

c −b a0
 
0
 −c 0 a b0 
A=  b −a 0 c0 

−a0 −b0 −c0 0

where aa0 + bb0 + cc0 = 0, a, b, c positive integers.

Solution. The rank of a skew symmetric matrix is always even2 . Since 0 c


−c 0 = c2 >
0, rank A ≥ 2.
If c0 6= 0, then
0 c −b a0
cc0 0 ac0 b0 c0
−c0 |A| =
b −a 0 c0
0 0 0
−a −b −c 0
Adding b0 R3 − aR4 to R2 , all the entries of the second row become 0, so −c0 |A| = 0 ⇒
|A| = 0 ⇒ rank A < 4 ⇒ rank A = 2.
2
We can prove this by induction on the order of the skew symmetric matrix S. It is true if S is a 1 × 1
 be the 0-matrix, thus has rank 0. Now given an (n + 1) × (n + 1) matrix S, we can
matrix, since it must
C b
write it as −b 0
0 , where C is skew symmetric, and hence has even rank by the induction hypothesis. If b
is linearly dependent on
 the columns of C, then by a series of elementary operations on S, we can transform
it into P0 SP = 0C0 00 , so rank S = rank P0 SP = rank C. If b is linearly independent of the columns of
C b 0

C, then b’ is linearly independent of the rows of C, so rank S = rank −b 0
0 = rank[C b] + 1(∵ [−b 0] is
independent of the rows of [C b]) = rank C + 2, which is also even.

5
If c0 = 0,
0 c −b a0 a
c 0 a b0 a
a|A| =
b −a 0 0
0 0
−a a −b a 0 0
Adding −b0 R3 to R4 , we see that the fourth row has all 0’s, hence rank A = 2 as before.
Alternate solution:
c −b a0
 
0
 −c 0 a b0 
−a|A| =   b −a 0 c0 

aa0 ab0 ac0 0


Add −c0 R2 and b0 R3 to R4 , then all entries of the last row become 0. So rank A < 4, and
by the reasoning above, rank A ≥ 2, rank A 6= 3 so rank A = 2.

Question 3(c) Reduce the following symmetric matrix to a diagonal form and interpret the
results in terms of quadratic forms.
 
3 2 −1
A= 2 2 3 
−1 3 1

Solution.
x
(x y z)A y
z
= 3x2 + 2y 2 + z 2 + 4xy − 2xz + 6yz
2 1 2 2 22
= 3(x + y − z)2 + y 2 + z 2 + yz
3 3 3 3 3
2 1 2 2 11 2 117 2
= 3(x + y − z) + (y + z) − z
3 3 3 2 6
2 117 2
= 3X 2 + Y 2 − Z
3 6
where X = x − 23 y − 13 z, Y = y + 112
z, Z = z. This implies z = Z, y = Y − 11 2
Z, x =
2 11 1 2
X + 3 (Y − 2 Z) − 3Z = X + 3 Y − 4Z.
1 32 −4
 
3 0 0
Then if P = 0 1 − 11 2
, we have P0 AP = 0 2
3
0 .
0 0 1 0 0 − 117 6
The quadratic form associated with A is indefinite as it takes both positive and negative
values. Note that x0 Ax and x0 P0 APx0 take the same values.

6
UPSC Civil Services Main 1995 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

June 14, 2007

1 Linear Algebra
Question 1(a) Let T(x1 , x2 , x3 ) = (3x1 + x3 , −2x1 + x2 , −x1 + 2x2 + 4x3 ) be a linear trans-
formation on R3 . What is the matrix of T w.r.t. the standard basis? What is a basis of the
range space of T? What is a basis of the null space of T?

Solution.

T(e1 ) = T(1, 0, 0) = (3, −2, −1) = 3e1 − 2e2 − e3


T(e2 ) = T(0, 1, 0) = (0, 1, 2) = e2 + 2e3
T(e3 ) = T(0, 0, 1) = (1, 0, 4) = e1 + 4e3
 
3 0 1
T ⇐⇒ A = −2 1 0
−1 2 4

Clearly T(e2 ), T(e3 ) are linearly independent. If (3, −2, −1) = α(0, 1, 2) + β(1, 0, 4), then
β = 3, α = −2, but 2α + 4β 6= −1, so T(e1 ), T(e2 ), T(e3 ) are linearly independent. Thus
(3, −2, −1), (0, 1, 2), (1, 0, 4) is a basis of the range space of T.
Note that T(x1 , x2 , x3 ) = 0 ⇔ x1 = x2 = x3 = 0, so the null space of T is {0}, and the
empty set is a basis. Note that the matrix of T is nonsingular, so T(e1 ), T(e2 ), T(e3 ) are
linearly independent.

Question 1(b) Let A be a square matrix of order n. Prove that Ax = b has a solution
⇔ b ∈ Rn is orthogonal to all solutions y of the system A0 y = 0.

1
Solution. If x is a solution of Ax = b and y is a solution of A0 y = 0, then b0 y = x0 A0 y = 0,
thus b is orthogonal to y.
Conversely, suppose b0 y = 0 for all y ∈ Rn which is a solution of A0 y = 0. Let
W = A(Rn ) = the range space of A, and W ⊥ its orthogonal complement. If A0 y = 0 then
x0 A0 y = 0 ⇒ (Ax)0 y = 0 for every x ∈ Rn ⇒ y ∈ W ⊥ . Conversely y ∈ W ⊥ ⇒ ∀x ∈
Rn .(Ax)0 y = 0 ⇒ x0 A0 y = 0 ⇒ A0 y = 0. Thus W ⊥ = {y | A0 y = 0}. Now b0 y = 0 for all
y ∈ W ⊥ , so b ∈ W ⇒ b = Ax for some x ∈ Rn ⇒ Ax = b is solvable.
Question 1(c) Define a similar matrix and prove that two similar matrices have the same
characteristic equation. Write down a matrix having 1, 2, 3 as eigenvalues. Is such a matrix
unique?
Solution. Two matrices A, B are said to be similar if there exists a matrix P such that
B = P−1 AP. If A, B are similar, say B = P−1 AP, then characteristic polynomial of B is
|λI − B| = |λI − P−1 AP| = |P−1 λIP − P−1 AP| = |P−1 ||λI − A||P| = |λI − A|. (Note that
|X||Y| = |XY|.) Thus the characteristic polynomial of B is the same as that of A.
1 0 0
Clearly the matrix A = 0 2 0 has eigenvalues 1,2,3. Such a matrix is not unique, for
0 0 3
1 1 0
example B = 0 2 0 has the same eigenvalues, but B 6= A.
0 0 3

Question 2(a) Show that  


5 −6 −6
A = −1 4 2
3 −6 −4
is diagonalizable and hence determine A5 .
Solution.
|A − λI| = 0
5 − λ −6 −6
⇒ −1 4 − λ 2 = 0
3 −6 −4 − λ
⇒ (5 − λ)[(4 − λ)(−4 − λ) + 12] + 6[4 + λ − 6] − 6[6 − 3(4 − λ)] = 0
⇒ (5 − λ)[λ2 − 4] + 6[λ − 2 − 3λ + 6] = 0
⇒ −λ3 + 5λ2 + 4λ − 20 − 12λ + 24 = 0
⇒ λ3 − 5λ2 + 8λ − 4 = 0
Thus λ = 1, 2, 2.
If (x1 , x2 , x3 ) is an eigenvector for λ = 1, then
  
4 −6 −6 x1
−1 3 2   x2  = 0
3 −6 −5 x3
⇒ 4x1 − 6x2 − 6x3 = 0
−x1 + 3x2 + 2x3 = 0
3x1 − 6x2 − 5x3 = 0

2
Thus x1 = x3 , x3 = −3x2 , so (−3, 1, −3) is an eigenvector for λ = 1.
If (x1 , x2 , x3 ) is an eigenvector for λ = 2, then
  
3 −6 −6 x1
−1 2 2  x2  = 0
3 −6 −6 x3
⇒ 3x1 − 6x2 − 6x3 = 0
−x1 + 2x2 + 2x3 = 0
3x1 − 6x2 − 6x3 = 0

Thus x1 − 2x2 − 2x3 = 0, so taking x1 = 0, x2 = 1, (0, 1, −1) is an eigenvector for λ = 2.


Taking x1 = 4, x2 = 1, (4, 1, 1) is another eigenvector for λ = 2, and these two are linearly
independent.   
−3 0 4 2 −4 −4
Let P =  1 1 1. A simple calculation shows that P−1 = 12 −4 9 7 .
−3 −11  2 −3 −3
1 0 0
−1
Clearly P AP = 0 2 0.

0 0 2  
1 0 0
Now P−1 A5 P = (P−1 AP)5 = 0 32 0 .
0 0 32
 
1 0 0
A5 = P 0 32 0  P−1
0 0 32
   
−3 0 4 1 0 0 2 −4 −4
1
= 1 1 1   0 32 0  −4 9 7
2
−3 −1 1 0 0 32 2 −3 −3
  
−3 0 128 2 −4 −4
1
= 1 32 32   −4 9 7
2
−3 −32 32 2 −3 −3
 
125 −186 −186
= −31 94 62 
93 −186 −154

Note: Another way of computing A5 is given below. This uses the characteristic poly-
nomial of A : A3 = 5A2 − 8A + 4I and not the diagonal form, so it will not be permissible
here.

3
A5 = A2 (5A2 − 8A + 4I)
= 5A(5A2 − 8A + 4I) − 8(5A2 − 8A + 4I) + 4A2
= 25(5A2 − 8A + 4I) − 76A2 + 84A − 32I
= 49A2 − 116A + 68I

Now calculate A2 and substitute.

Question 2(b) Let A and B be matrices of order n. If I − AB is invertible, then I − BA


is also invertible and
(I − BA)−1 = I + B(I − AB)−1 A
Show that AB and BA have the same characteristic values.

Solution.
(I + B(I − AB)−1 A)(I − BA)
= I − BA + B(I − AB)−1 A − B(I − AB)−1 ABA
= [I + B(I − AB)−1 A] − B[I + (I − AB)−1 AB]A (1)
−1 −1 −1
Now (I − AB) (I − AB) = (I − AB) − (I − AB) AB = I
∴ (I − AB)−1 = I + (I − AB)−1 AB
Substituting in (1) (I + B(I − AB)−1 A)(I − BA)
= I + B(I − AB)−1 A − B(I − AB)−1 A = I

Thus I − BA is invertible and (I − BA)−1 = I + B(I − AB)−1 A as desired.


We shall show that λI − AB is invertible if and only if λI − BA is invertible. This means
that if λ is an eigenvalue of AB, then |λI − AB| = 0 ⇒ |λI − BA| = 0 so λ is an eigenvalue
of BA.
If λI − AB is invertible, then

(I + B(λI − AB)−1 A)(λI − BA)


= λI − BA + λB(λI − AB)−1 A − B(λI − AB)−1 ABA
= λ[I + B(λI − AB)−1 A] − B[I + (λI − AB)−1 AB]A (2)
Now (λI − AB)−1 (λI − AB) = λ(λI − AB)−1 − (λI − AB)−1 AB = I
∴ λ(λI − AB)−1 = I + (λI − AB)−1 AB
Substituting in (2) (I + B(λI − AB)−1 A)(λI − BA)
= λI + λB(λI − AB)−1 A − λB(λI − AB)−1 A = λI

Thus λI − BA is invertible if λI − AB is invertible. The converse is obvious as the situation


is symmetric, thus AB and BA have the same eigenvalues.
We give another simple proof of the fact that AB and BA have the same eigenvalues.
1. Let 0 be an eigenvalue of AB. This means that AB is singular, i.e. 0 = |AB| =
|A||B| = |BA|, so BA is singular, hence 0 is an eigenvalue of BA.

4
2. Let λ 6= 0 be an eigenvalue of AB and let x 6= 0 be an eigenvector corresponding to
λ, i.e. ABx = λx. Let y = Bx. Then y 6= 0, because Ay = ABx = λx 6= 0 as λ 6= 0.
Now BAy = BABx = B(ABx) = λBx = λy. Thus λ is an eigenvalue of BA.

Question 2(c) Let a, b ∈ C, |b| = 1 and let H be a Hermitian matrix. Show that the
eigenvalues of aI + bH lie on a straight line in the complex plane.

Solution. Let t be as eigenvalue of H, which has to be real because H is Hermitian. Clearly


a + tb is an eigenvalue of aI + bH. Conversely, if λ is an eigenvalue of aI + bH, then λ−ab
(note b 6= 0 as |b| = 1) is an eigenvalue of H.
Clearly a + tb lies on the straight line joining points a and a + b:

z = (1 − x)a + x(b − a), x ∈ R

For the sake of completeness, we prove that the eigenvalues of a Hermitian matrix H are
real. Let z 6= 0 be an eigenvector corresponding to the eigenvalue t.

Hz = tz
⇒ z0 Hz = tz0 z
0
⇒ z0 Hz = tz0 z
0
But z0 Hz = z0 H z = z0 Hz = tz0 z
⇒ tz0 z = tz0 z
⇒t = t ∵ z0 z 6= 0

Question 3(a) Let A be a symmetric matrix. Show that A is positive definite if and only
if its eigenvalues are all positive.

Solution. A is real symmetric so all eigenvalues of A are real. Let λ1 , λ2 , . . . , λn be


eigenvalues of A, not necessarily distinct. Let x1 be an eigenvector corresponding to λ1 .
Since λ1 and A are real,
p x1 is also real. Replacing x1 if necessary by µx1 , µ suitable, we can
assume that ||x1 || = x01 x1 = 1.
Let P1 be an orthogonal matrix with x1 as its first column. Such a P1 exists, as will be
shown at the end of this result. Clearly the first column of the matrix P1 −1 AP1 is equal
λ
to P1 −1 Ax = λ1 P1 −1 x = 00 , because P1 −1 x is the first column of P1 −1 P = I. Thus
0
P1 −1 AP1 = λ01 B = P01 AP1 where B is (n − 1) × (n − 1) symmetric. Since P01 AP1 is
L


symmetric, it follows that P1 −1 AP1 = P01 AP1 = λ01 B0 . Induction now  gives that there
λ2 0 . . . 0
exists an (n − 1) × (n − 1) orthogonal matrix Q such that Q0 BQ = . . . 
0 0 . . . λn

5
where λ2 , λ3 , . . . , λn are eigenvalues of B. Let P2 = 01 Q 0

, then P2 is orthogonal and
P02 P01 AP1 P2 = diagonal[λ 1 , . . . , λ n ]. Set P = P P
1 2 . . . P 0
n and (y1 , . . . , yn )P = x then
,
n
x0 Ax = y0 P0 APy = i=0 λ2i yi2 .
P
0
Pn 2 2
Since P is non-singular, quadratic forms x Ax and i=0 λi yi assume the same values.
Hence A is positive definite if and only if ni=0 λ2i yi2 is positive definite if and only if λi > 0
P
for all i. p
Result used: If x1 is a real vector such that ||x1 || = x01 x1 = 1 then there exists an
orthogonal matrix with x1 as its first column.
Proof: We have to find real column vectors x2 , . . . , xn such that ||xi || = 1, 2 ≤ i ≤ n
and x2 , . . . , xn is an orthonormal system i.e. x0i xj = 0, i 6= j. Consider the single equation
x01 x = 0, where x is a column vector to be determined. This equation has a non-zero solution,
in fact the space of solutions is of dimension n − 1, the rank of the coefficient matrix being
1. If y2 is a solution, we take x2 = ||yy22 || so that x01 x2 = 0.
We now consider the two equations x01 x = 0, x02 x = 0. Again the number of unknowns
is more than the number of equations, so there is a solution, say y3 , and take x3 = ||yy33 || to
get x1 , x2 , x3 mutually orthogonal.
Proceeding in this manner, if we consider n − 1 equations x01 x = 0, . . . , x0n−1 x = 0, these
will have a nonzero solution yn , so we set xn = ||yynn || . Clearly x1 , x2 , . . . , xn is an orthonormal
system, and therefore P = [x1 , . . . , xn ] is an orthogonal matrix having x1 as a first column.

Question 3(b) Let A and B be square matrices of order n, show that AB − BA can never
be equal to the identity matrix.

Solution. Let A = haij i and B = hbij i. Then

tr AB = Sum of diagonal elements of AB


Xn Xn n X
X n
= aik bki = bki aik = tr BA
i=1 k=1 k=1 i=1

Thus tr(AB − BA) = tr AB − tr BA = 0. But the trace of the identity matrix is n, thus
AB − BA can never be equal to the identity matrix.
n
X
Question 3(c) Let A = haij i, 1 ≤ i, j ≤ n. If |aij | < |aii |, then the eigenvalues of A lie
j=1
i6=j
in the disc n
X
|λ − aii | ≤ |aij |
j=1
i6=j

6
Solution. See the solution to question 2(c), year 1997. We showed that if |λ − aii | >
n
X
|aij | then |λI − A| =
6 0, so λ is not an eigenvalue of A. Thus if λ is an eigenvalue, then
j=1
i6=j
n
X
|λ − aii | ≤ |aij |, so λ lies in the disc described in the question.
j=1
i6=j

7
UPSC Civil Services Main 1996 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

June 14, 2007

Question 1(a) In R4 let W1 be the space generated by {(1, 1, 0, −1), (2, 4, 6, 0)} and let W2
be space generated by {(−1, −2, −2, 2), (4, 6, 4, −6), (1, 3, 4, −3)}. Find a basis for the space
W1 + W2 .

Solution. Let v1 = (1, 1, 0, −1), v2 = (2, 4, 6, 0), v3 = (−1, −2, −2, 2), v4 = (4, 6, 4, −6), v5 =
(1, 3, 4, −3). Since w ∈ W1 + W2 can be written as w = w1 + w2 , and w1 = α1 v1 + α2 v2
and w2 = α3 v3 + α4 v4 + α5 v5 , it follows that w is a linear combination of vi ⇒ W1 + W2
is generated by {vi , 1 ≤ i ≤ 5}. Thus a maximal independent subset of {vi , 1 ≤ i ≤ 5} will
be a basis of W1 + W2 .
Clearly v1 and v2 are linearly independent. If possible, let v3 = λ1 v1 + λ2 v2 , then the
four equations

λ1 + 2λ2 = −1
λ1 + 4λ2 = −2
0λ1 + 6λ2 = −2
−λ1 + 0λ2 = 2

should be consistent and provide us λ1 , λ2 . Clearly the third and fourth equations give us
λ1 = −2, λ2 = − 31 which do not satisfy the first two equations. Thus v1 , v2 , v3 are linearly
independent.
If possible let v4 = λ1 v1 + λ2 v2 + λ3 v3 . Then

λ1 + 2λ2 − λ3 =4 (1a)
λ1 + 4λ2 − 2λ3 =6 (1b)
0λ1 + 6λ2 − 2λ3 =4 (1c)
−λ1 + 0λ2 + 2λ3 = −6 (1d)

1
Adding (1b) and (1d) we get 4λ2 = 0, so λ2 = 0. Solving (1a) and (1b) we get λ3 = −2, λ1 =
2. These values satisfy all the four equations, so v4 = 2v1 − 2v3 .
If possible let v5 = λ1 v1 + λ2 v2 + λ3 v3 . Then

λ1 + 2λ2 − λ3 =1 (2a)
λ1 + 4λ2 − 2λ3 =3 (2b)
0λ1 + 6λ2 − 2λ3 =4 (2c)
−λ1 + 0λ2 + 2λ3 = −3 (2d)

Adding (2b) and (2d) we get 4λ2 = 0, so λ2 = 0. (2c) then gives us λ3 = −2, and
(2a) now gives λ1 = −1, which satisfies all equations. Thus v5 = −v1 − 2v3 . Hence
{(1, 1, 0, −1), (2, 4, 6, 0), (−1, −2, −2, 2)} is a basis of W1 + W2 .

Question 1(b) Let V be a finite dimensional vector space and v ∈ V, v 6= 0. Show that
there exists a linear functional f on V such that f (v) 6= 0.

Solution. Complete v to a basis of V, say {v1 = v, v2 , . . . , vn }, where dim V = n. Define


n
X n
X
f (vj ) = δ1j and f ( aj v j ) = aj f (vj ).
j=1 j=1
functional over V, and f (v) = f (v1 ) = 1. Note that f (vj ) = 0, j > 1
Clearly f is a linear P
and if any w ∈ V, w = i ai vi , f (w) = a1 .

Question 1(c) Let V = R3 , v1 , v2 , v3 be a basis of V. Let T : V −→ V be such that


T(vi ) = v1 + v2 + v3 , 1 ≤ i ≤ 3. By writing the matrix of T w.r.t. another basis show that
the matrices    
1 1 1 3 0 0
A = 1 1 1 and B = 0 0 0
1 1 1 0 0 0
are similar.

Solution. Clearly A is the matrix of T w.r.t. the basis v1 , v2 , v3 . Note that

[T(v1 ), T(v2 ), T(v3 )] = (v1 , v2 , v3 )A

Let

w1 = v1 + v 2 + v 3
w2 = v1 − v2
w3 = v2 − v3
⇒ T(w1 ) = 3w1 , T(w2 ) = T(w3 ) = 0

We now show that w1 , w2 , w3 is a basis for V, i.e. these are linearly independent.

2
Let αw1 + βw2 + γw3 = 0, then (α + β)v1 + (α − β + γ)v2 + (α − γ)v3 = 0. But
v1 , v2 , v3 are linearly independent, therefore α + β = 0, α − β + γ = 0, α − γ = 0 ⇒ α =
β = γ = 0 ⇒ w1 , w2 , w3 are linearly independent.
The matrix of T w.r.t. the basis w1 , w2 , w3 is clearly B. Note that the choice of
w1 , w2 , w3 is suggested by the shape of B.
6 0 then B = P−1 AP, so A and B are similar.
If (w1 , w2 , w3 ) = (v1 , v2 , v3 )P, |P| =

Question 2(a) Let V = R3 and T : V −→ V be a linear map defined by

T(x, y, z) = (x + z, −2x + y, −x + 2y + z)

What is the matrix of T w.r.t. the basis (1, 0, 1), (−1, 1, 1), (0, 1, 1)? Using this matrix write
down the matrix of T with respect to the basis (0, 1, 2), (−1, 1, 1), (0, 1, 1).

Solution. Let v1 = (1, 0, 1), v2 = (−1, 1, 1), v3 = (0, 1, 1). T(x, y, z) = (x+z, −2x+y, −x+
2y+z) = αv1 +βv2 +γv3 , say. This means α−β = x+z, β+γ = −2x+y, α+β+γ = −x+2y+
z. This implies α = x + y + z, β = y, γ = −2x. Thus T(x, y, z) = (x + y + z)v1 + yv2 − 2xv3 .
Hence  
2 1 2
[T(v1 ) T(v2 ) T(v3 )] = [v1 v2 v3 ]  0 1 1
−2 2 0
Let w1 = (0, 1, 2), w2 = (−1, 1, 1), w3 = (0, 1, 1). Then
 
1 0 0
[w1 w2 w3 ] = [v1 v2 v3 ] 1 1 0
0 0 1

Hence

[T(w1 ) T(w2 ) T(w3 )] = [T(v1 ) T(v2 ) T(v3 )]P


= [v1 v2 v3 ]AP
= [w1 w2 w3 ]P−1 AP

where    
2 1 2 1 0 0
A =  0 1 1 , P = 1 1 0
−2 2 0 0 0 1
Thus the matrix of T w.r.t. basis w1 , w2 , w3 is
     
1 0 0 2 1 2 1 0 0 3 1 2
−1
P AP = −1  1 0   0 1 1 1 1 0 = −2 0 −1
0 0 1 −2 2 0 0 0 1 0 2 0

3
Question 2(b) Let V and W be finite dimensional vector spaces such that dim V ≥ dim W.
Show that there is always a linear map of V onto W.
Solution. Let w1 , w2 , . . . , wm be a basis of W, and v1 , v2 , . . . , vn be a basis of V, n ≥ m.
Define
T(vi ) = wi , i = 1, 2, . . . , m
T(vi ) = 0, i = m + 1, . . . , n
and for any v ∈ V, v = i=1 αi vi , T(v) = m
Pn P
i=1 αi T(vi ). Pm
Clearly
Pm T : V
Pm−→ W is linear. T is onto, since if w ∈ W, w = i=1 ai wi , then
T( i=1 ai vi ) = i=1 ai T(vi ) = w, proving the result.

Question 2(c) Solve by Cramer’s rule


x + y − 2z = 1
2x − 7z = 3
x+y−z = 5
Solution.
1 1 −2 −1 −1 −2
D = 2 0 −7 = −5 −7 −7 = −2
1 1 −1 0 0 −1
˛ ˛ ˛ ˛
˛1
˛
˛
1 −2˛˛˛ ˛1
˛
˛
1 −2˛˛˛
˛3
˛
˛
0 −7˛˛˛ ˛3
˛
˛
0 −7˛˛˛
˛5
˛
1 −1˛˛ ˛4
˛
0 1 ˛˛ −31 31
x= D
= D
= −2
= 2
˛ ˛ ˛ ˛
˛1
˛
˛
1 −2˛˛˛ ˛1
˛
˛
0 0 ˛˛˛
˛2
˛
˛
3 −7˛˛˛ ˛2
˛
˛
1 −3˛˛˛
˛1
˛
5 −1˛˛ ˛1
˛
4 1 ˛˛ 13
y= D
= D
= −2
= − 13
2
˛ ˛ ˛ ˛
˛1 1 1˛˛˛ ˛1 1 1˛˛˛
˛ ˛
˛ ˛
˛2 0 3˛˛˛ ˛2 0 3˛˛˛
˛ ˛
˛ ˛
˛1 1 5˛˛ ˛0 0 4˛˛
˛ ˛
−8
z= D
= D
= −2
=4

Question 3(a) Find the inverse of the matrix


 
0 1 0 0
0 0 1 0
A= 0 0

0 1
1 0 0 0
by computing its characteristic polynomial.

4
Solution. The characteristic polynomial of A is

−λ 1 0 0
0 −λ 1 0
|A − λI| =
0 0 −λ 1
1 0 0 −λ
= −λ[−λ3 ] − 1[1] = λ4 − 1 = 0

Thus by the Cayley-Hamilton theorem, A4 = I, so A−1 = A3 .


    
0 1 0 0 0 1 0 0 0 0 1 0
 0 0 1 0 0 0
  1 0 0 0 0 1
A2 = 0 0 0 1 0 0
= 
0 1 1 0 0 0
1 0 0 0 1 0 0 0 0 1 0 0
    
0 0 1 0 0 1 0 0 0 0 0 1
 0 0 0 1 0 0 1 0 1 0 0 0
 = A−1
A3 = 
1
 =
0 0 0 0 0 0 1 0 1 0 0
0 1 0 0 1 0 0 0 0 0 1 0

Question 3(b) If A and B are n × n matrices such that AB = BA, show that AB and
BA have a common characteristic vector.

Solution. Let λ be any eigenvalue of A and let Vλ be the eigenspace of A corresponding to λ.


We show that B(Vλ ) ⊆ Vλ . Let v ∈ Vλ , then A(Bv) = B(Av) = B(λv) = λBv ⇒ Bv ∈ Vλ .
Consider B∗ : Vλ −→ Vλ such that B∗ (v) = B(v) — note that B∗ is a restriction of B
to Vλ and we have already shown that B(Vλ ) ⊆ Vλ .
Let µ be an eigenvalue of B∗ , then µ is also an eigenvalue of B (because a basis of Vλ can be
∗ C

extended to a basis of V, and in this basis B = B0 D for some matrices C, D). Let v ∈ Vλ
be an eigenvector of B∗ corresponding to µ, by definition v 6= 0. Then Bv = B∗ v = µv.
Thus A and B have a common eigenvector v, note that Av = λv as v ∈ Vλ .

Question 3(c) Reduce to canonical form the orthogonal matrix


2
− 23 13

3
O =  32 13 − 23 
1 2 2
3 3 3

Solution. Before solving this particular problem, we present a general discussion about
orthogonal matrices. An orthogonal matrix satisfies O0 O = I, so its determinant is 1 or -1,
here we focus on the case where |O| = 1. If λ is an eigenvalue of O and x a corresponding
eigenvector, then |λ|2 x0 x = (Ox)0 Ox = x0 O0 Ox = x0 x, so |λ| = 1. Since the characteristic

5
polynomial has real coefficients, the eigenvalues must be real or in complex conjugate pairs.
Thus for a matrix of order 3, at least one eigenvalue is real, and must be 1 or -1. Since
|O| = 1, one real value must be 1, and the three possibilities are {1, 1, 1}, {1, −1, −1} and
{1, eiθ , e−iθ }. √
Here we consider the third case, as the given matrix has 1 and 13 ± i 2 3 2 as eigenvalues,
proved later.
Let Z = X1 + iX2 be an eigenvector corresponding to the eigenvalue eiθ . Let X3 be
the eigenvector corresponding to the eigenvalue 1. Since Z and X3 correspond to different
eigenvalues, these are orthogonal, i.e. Z0 X3 = (X01 + iX02 )X3 = 0 ⇒ X01 X3 = 0, X02 X3 = 0.
Note that X1 , X2 , X3 are real vectors. Since OZ = eiθ Z = (cos θ + i sin θ)(X1 + iX2 ).
Equating real and imaginary parts we get

OX1 = X1 cos θ − X2 sin θ


OX2 = X1 sin θ + X2 cos θ
0 0
∴ X1 O OX1 = (X01 cos θ − X02 sin θ)(X1 cos θ − X2 sin θ)
⇒ X01 X1 = X01 X1 cos2 θ − X02 X1 cos θ sin θ − X01 X2 sin θ cos θ + X02 X2 sin2 θ
⇒ 0 = X01 X1 sin2 θ − X02 X2 sin2 θ + 2X01 X2 cos θ sin θ
⇒ 0 = X01 X1 sin θ − X02 X2 sin θ + 2X01 X2 cos θ (1)

(Note that sin θ 6= 0 since we are considering the case where eiθ is complex.) Similarly

X02 O0 OX1 = (X01 sin θ + X02 cos θ)(X1 cos θ − X2 sin θ)


⇒ X02 X1 = X01 X1 sin θ cos θ − X01 X2 sin2 θ − X02 X2 sin θ cos θ + X02 X1 cos2 θ
⇒ 0 = X01 X1 cos θ − X01 X2 cos θ − 2X01 X2 sin θ (2)

Multiplying (1) by sin θ and (2) by cos θ and adding, we get X01 X1 − X02 X2 = 0 or X01 X1 =
X02 X2 , so from (2), X1 X2 = 0, i.e. X1 , X2 are orthogonal.
Thus X1 , X2 , X3 are mutually orthogonal. We can assume that X01 X1 = X02 X2 = 1,
replacing Z by λZ, λ ∈ R if necessary. Similarly we can take X03 X3 = 1. Let P = [X1 X2 X3 ]
so that P0 P = I. Now

O[X1 X2 X3 ] = [X1 cos θ − X2 sin θ, X1 sin θ + X2 cos θ, X3 ]


 
cos θ sin θ 0
= [X1 X2 X3 ] − sin θ cos θ 0
0 0 1
 
cos θ sin θ 0
−1 0
⇒ P OP = P OP =  − sin θ cos θ 0
0 0 1

which is the canonical form of O when the eigenvalues are 1, eiθ , e−iθ .

6
Solution of given problem.
2
− 23 13

3
O =  32 13 − 32 
1 2 2
3 3 3
2 2 1
−λ −3 2 − 3λ −2 1
3
2 1
3 1
|O − λI| = 3 3
−λ − 23 = 2 1 − 3λ −2
1 2 2 27
3 3 3
−λ 1 2 2 − 3λ
1
= [(2 − 3λ)2 (1 − 3λ) + 4(2 − 3λ) + 1(3 + 3λ) + 2(6 − 6λ)]
27
1
= − [27λ3 − 45λ2 + 45λ − 27]
27
1
= − [(λ − 1)(3λ2 − 2λ − 3)]
3

Thus λ = 1, 31 ± i 2 3 2 are eigenvalues of O. √
2 2
Thus the canonical form of O is derived from above, where cos θ = 13 , sin θ = 3
:

1 2 2
 
3√ 3
0
− 2 2 1
0
3 3
0 0 1

The matrix P can be determined as follows (this is not needed for this problem, but is
given for completeness):

1. Let (x1 , x2 , x3 ) be an eigenvector for λ = 1, then


1 2 1
− x1 − x2 + x3 = 0
3 3 3
2 2 2
x1 − x2 − x3 = 0
3 3 3
1 2 1
x1 + x2 − x3 = 0
3 3 3
Thus x2 = 0, x1 − x3 = 0, so we can take (1, 0, 1) as an eigenvector.

2. The vectors X1 , X2 in the above discussion are determined by the requirements

OX1 = X1 cos θ − X2 sin θ


OX2 = X1 sin θ + X2 cos θ

7

2 2
where cos θ = 13 , sin θ = 3
. This gives us the following equations

2x11 − 2x12 + x13 = x11 − x21 2 2 (3)

2x11 + x12 − 2x13 = x12 − x22 2 2 (4)

x11 + 2x12 + 2x13 = x13 − x23 2 2 (5)

2x21 − 2x22 + x23 = x11 2 2 + x21 (6)

2x21 + x22 − 2x23 = x12 2 2 + x22 (7)

x21 + 2x22 + 2x23 = x13 2 2 + x23 (8)

Adding the √ last 3 equations, we get 2x21 √ = x11 + x12 + x13 . Subtracting equation (6)
from (8), 2x22 = x13 − x11 , and from (7) 2x23 = x11 − x12 + x13 . Substituting these
in the first 3 equations and simplifying, we get x11 = −x13 . Setting x11 = 0, x12 = 1,
we get (0, 1, 0), ( √12 , 0, − √12 ) as a possible solution for X1 , X2 .

Putting these together we get


0 √12 1
 

P = 1 0 0
0 − √12 1

2 2
 1 
3√ 3
0
We can now verify that OP = P − 2 3 2 1
3
0
0 0 1

8
UPSC Civil Services Main 1997 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

June 14, 2007

1 Linear Algebra
Question 1(a) Let V be the vector space of polynomials over R. Find a basis and the
dimension of W ⊆ V spanned by

v1 = t3 − 2t2 + 4t + 1
v2 = 2t3 − 3t2 + 9t − 1
v3 = t3 + 6t − 5
v4 = 2t3 − 5t2 + 7t + 5

Solution. v1 and v2 are linearly independent, because if αv1 + βv2 = 0, then α + 2β =


0, −2α − 3β = 0, 4α + 9β = 0, α − β = 0 ⇒ α = β = 0.
v3 depends linearly on v1 , v2 — if αv1 + βv2 = v3 , then α + 2β = 1, −2α − 3β =
0, 4α + 9β = 6, α − β = −5 ⇒ α = −3, β = 2 which satisfy all the equations. Thus
v3 = −3v1 + 2v2 .
v4 depends linearly on v1 , v2 — if αv1 + βv2 = v4 , then α + 2β = 2, −2α − 3β =
−5, 4α + 9β = 7, α − β = 5 ⇒ α = 4, β = −1 which satisfy all the equations. Thus
v4 = 4v1 − v2 .
Thus dimR W = 2 and v1 , v2 is a basis of W.

Question 1(b) Verify that T(x1 , x2 ) = (x1 + x2 , x1 − x2 , x2 ) is a linear transformation from


R2 to R3 . Find its range, rank, null space and nullity.

1
Solution. Let x = (x1 , x2 ), y = (y1 , y2 ). Then
T(αx + βy) = T(αx1 + βy1 , αx2 + βy2 )
= (αx1 + βy1 + αx2 + βy2 , αx1 + βy1 − αx2 − βy2 , αx2 + βy2 )
= (α(x1 + x2 ), α(x1 − x2 ), αx2 ) + (β(y1 + y2 ), β(y1 − y2 ), βy2 )
= αT(x1 , x2 ) + βT(y1 , y2 )
Thus T is linear.

T(e1 ) = T(1, 0) = (1, 1, 0)


T(e2 ) = T(0, 1) = (1, −1, 1)
Clearly T(e1 ), T(e2 ) are linearly independent. Since T (R2 ) is generated by T(e1 ) and T(e2 ),
the rank of T is 2.
The range of T = {αT(e1 ) + βT(e2 ), α, β ∈ R}
= {α(1, 1, 0) + β(1, −1, 1)}
= {(α + β, α − β, β) | α, β ∈ R}
To find the null space of T, if T(x1 , x2 ) = (0, 0, 0), then x1 + x2 = 0, x1 − x2 = 0, x2 = 0,
so x1 = x2 = 0. Thus the null space of T is {0}, and nullity T = 0.

Question 1(c) Let V be the space of 2×2 matrices over R. Determine whether the matrices
A, B, C ∈ V are dependent where
     
1 2 3 −1 1 −5
A= B= C=
3 1 2 2 −4 0
Solution. If αA + βB + γC = 0, then
α + 3β + γ = 0 (1)
2α − β − 5γ = 0 (2)
3α + 2β − 4γ = 0 (3)
α + 2β = 0 (4)
From (4), we get α = −2β. This, together with (3) gives γ = −β. These satisfy (1) and
(2) also, so taking β = 1, α = −2, γ = −1 gives us −2A + B − C = 0. Thus A, B, C are
dependent.

Question 2(a) Let A be an n × n matrix such that each diagonal entry is µ and each
off-diagonal entry is 1. If B = λA is orthogonal, determine λ, µ.

Solution. Clearly A is symmetric. Let A = (aij ). B0 B = BB0 = λ2 A2 = I =⇒


P n 2
k=1 λ aik akj = δij
Taking i = j = 1, we get λ2 (µ2 +n−1) = 1 Taking i = 1, j = 2, we get λ2 (2µ+n−2) = 0.
Thus µ = −(n−2)/2 and λ2 [(n−2)2 /4+n−1] = 1. Simplifying, λ2 [n2 −4n+4+4n−4]/4 = 1,
which means λ2 = n42 , or λ = ± n2 .

2
Question 2(b) Show that  
2 −1 0
A =  −1 2 0 
2 2 3
is diagonalizable over R. Find P such that P−1 AP is diagonal and hence find A25 .

Solution. Characteristic equation of A is

2 − x −1 0
−1 2 − x 0 = 0
2 2 3−x
⇒ (2 − x)(2 − x)(3 − x) + 1(−(3 − x)) = 0
(3 − x)(4 − 4x + x2 − 1) = 0

Thus the eigenvalues are 3, 3, 1.


Let (x1 , x2 , x3 ) be an eigenvector for λ = 1.
  
1 −1 0 x1
 −1 1 0   x2  = 0
2 2 2 x3

Thus x1 − x2 = 0, −x1 + x2 = 0, 2x1 + 2x2 + 2x3 = 0. Take x1 = 1, then x2 = 1, x3 = −2, so


(1, 1, −2) is an eigenvector with eigenvalue 1.
Let (x1 , x2 , x3 ) be an eigenvector for λ = 3.
  
−1 −1 0 x1
 −1 −1 0   x2  = 0
2 2 0 x3

Thus x1 + x2 = 0. Take x1 = 1, x3 = 0, then x2 = −1, so (1, −1, 0) is an eigenvector


with eigenvalue 3. Take x1 = 0, x3 = 1, then x2 = 0, so (0, 0, 1) is also an eigenvector for
eigenvalue 3.     
1 1 0 1 0 0 1 0 0
Let P =  1 −1 0  then AP = P  0 3 0  or P−1 AP =  0 3 0 
−2 0 1  0  0 3  0 0 3 
1 0 0 1 0 0
−1 25 −1
Now P A P = (P AP) = 25  0 3 25
0  25
. Thus A = P 0 3  25
0  P−1
25
 1 1 0 0 3 0 0 325
2 2
0
−1 1 1
|P| = −2, so P = 
2
− 2 0 .
1 1 1

3
1 1
   
1 1 0 1 0 0 2 2
0
A25 =  1 −1 0   0 3 25
0  1
2
− 12 0 
−2 0 1 0 0 325 1 1 1
   1 1 
1 325 0 2 2
0
=  1 −325 0   12 − 21 0 
−2 0 325 1 1 1
 1+325 1−325

2 2
0
25 1+325
=  1−32 2
0 
−1 + 325 −1 + 325 325

Question 2(c) Let A = [aij ] be a square matrix of order n such that |aij | ≤ M . Let λ be
an eigenvalue of A, show that |λ| ≤ nM .

Solution. We first prove the following:


X n
Lemma: If A = [aij ] and |aij | ≤ aii then |A| =
6 0.
j=1
i6=j
If |A| = 0 then there exist x1 , . . . , xn ∈ C not all zero such that

a11 x1 + a12 x2 + . . . + a1n xn = 0


...
ai1 x1 + ai2 x2 + . . . + ain xn = 0
...
an1 x1 + an2 x2 + . . . + ann xn = 0

x
Let |xi | = max(|x1 |, |x2 |, . . . , |xn |), so | xji | ≤ 1 for all j.

x1 x2 xn
0 = aii − (−ai1 − ai2 − . . . − ain )
xi xi xi
x1 x2 xn
≥ |aii | − ai1 + ai2 + . . . + ain
xi xi xi
≥ |aii | − |ai1 | − |ai2 | − . . . − |ain |
n
X
which contradicts the premise |aij | ≤ aii Thus |A| =
6 0.
j=1
i6=j

4
n
X
Now the lemma tells us that if |λ − aii | > |aij | then |λI − A| =
6 0, so λ is not an
j=1
i6=j
n
X
eigenvalue of A. Thus |λ| ≤ |λ − aii | + |aii | ≤ |aij | ≤ nM as desired.
j=1

Question 3(a) Define a positive definite matrix and show that a positive definite matrix is
always non-singular. Show that the converse is not always true.

Solution. Let A be an n × n real symmetric matrix. A is said to be positive definite if


the associated quadtratic form
 
x1
  x2 
x1 x2 . . . xn A 
. . .  > 0

xn

for all (x1 , x2 , . . . , xn ) 6= (0, 0, . . . , 0) in Rn .


If |A| = 0 then rank A < n, which means that columns of A i.e. c1 , c2 , . . . , cn are
linearly dependent i.e. there exist real numbers x1 , x2 , . . . , xn not all zero such that
   
x1 x1
 x2    x2 
x1 c1 + x2 c2 + . . . + xn cn = A  . . . = 0 =⇒ x1 x2 . . . xn A . . . = 0
  

xn xn

where (x1 , x2 , . . . , xn ) 6= (0, 0, . . . , 0), which means that A is not positive definite. Thus A
is positive definite =⇒ |A| = 6 0.
The converse is not true. Take
 
1 0 0
A = 0 1 0 
0 0 −1

then |A| = −1, but  


 0
0 0 1 A 0 = −1

1
so A is not positive definite.

5
Question 3(b) Find the eigenvalues and their corresponding eigenvectors for the matrix
 
6 −2 2
−2 3 −1
2 −1 3

Solution. The characteristic equation for A is

0 = |A − xI|
 
6 − x −2 2
=  −2 3 − x −1 
2 −1 3 − x
= (6 − x)((3 − x)2 − 1) + 2(−6 + 2x + 2) + 2(2 − 6 + 2x)
= (6 − x)(9 − 6x + x2 ) − 6 + x − 8 + 4x − 8 + 4x
0 = x3 − 12x2 + 36x − 32
= (x − 2)(x2 − 10x + 16)

Thus the eigenvalues are 2, 2, 8.


Let (x1 , x2 , x3 ) be an eigenvector for λ = 2.
  
4 −2 2 x1
−2 1 −1  x2  = 0
2 −1 1 x3
Thus 4x1 − 2x2 + 2x3 = 0, −2x1 + x2 − x3 = 0, 2x1 − x2 + x3 = 0. Take x1 = 1, x2 = 0, then
x3 = −2, so (1, 0, −2) is an eigenvector with eigenvalue 2. Take x1 = 0, x2 = 1, then x3 = 1,
so (0, 1, 1) is an eigenvector with eigenvalue 2.
Let (x1 , x2 , x3 ) be an eigenvector for λ = 8.
  
−2 −2 2 x1
−2 −5 −1  x2  = 0
2 −1 −5 x3
Thus −2x1 − 2x2 + 2x3 = 0, −2x1 − 5x2 − x3 = 0, 2x1 − x2 − 5x3 = 0. From the last two, we
get x2 + x3 = 0, and from the first we get x1 = 2x3 . Take x3 = 1, then x2 = −1, x1 = 2, so
(2, −1, 1) is an eigenvector with eigenvalue 8.

Question 3(c) Find P invertible such that P reduces Q(x, y, z) = 2xy + 2yz + 2zx to its
canonical form.

Solution. The matrix of Q(x, y, z) is


 
0 1 1
A = 1 0 1
1 1 0

6
which has all diagonal entries 0, so we cannot complete squares right away.
     
0 1 1 1 0 0 1 0 0
1 0 1 = 0 1 0 A 0 1 0
1 1 0 0 0 1 0 0 1

Add the second row to the first and the second column to the first.
     
2 1 2 1 1 0 1 0 0
1 0 1 = 0 1 0 A 1 1 0
2 1 0 0 0 1 0 0 1

Subtract 12 R1 from R2 and 1


C
2 1
from C2 .

1 − 12 0
     
2 0 2 1 1 0
0 − 21 0 = − 12 21 0 A 1 12 0
2 0 0 0 0 1 0 0 1

Subtract R1 from R3 and C1 from C3 .

1 − 12 −1
     
2 0 0 1 1 0
0 − 1 0  = − 1 1 0 A 1 1 −1
2 2 2 2
0 0 −2 −1 −1 1 0 0 1

1 − 12 −1
   
2 0 0
Thus P = 1 21 −1 and P0 AP = 0 − 12 0 
0 0 1 0 0 −2
2 1 2 2
So Q(x, y, z) −→ 2X − 2 Y − 2Z .
Alternative Solution. Let x = X, y = X + Y, z = Z

Q(x, y, z) = 2X 2 + 2XY + 2ZX + 2ZY + 2ZX


= 2[X 2 + XY + 2ZX + ZY ]
Y Y2
= 2[(X + + Z)2 − − Z 2]
2 4
Put ξ = X + Y /2 + Z, η = Y, ζ = Z, so X = ξ − η/2 − ζ, Y = η, Z = ζ.

1 − 12 −1 1 − 21 −1
           
x 1 0 0 X 1 0 0 ξ ξ
y  = 1 1 0  Y  = 1 1 0 0 1 0  η  = 1 12 −1 η 
z 0 0 1 Z 0 0 1 0 0 1 ζ 0 0 1 ζ

1 − 21 −1
 

Thus Q(x, y, z) −→ 2ξ 2 − η 2 /2 − 2ζ 2 , and P = 1 12 −1 as before. Note that we put


0 0 1
x = X, y = X + Y, z = Z to create one square term to complete the squares.

7
UPSC Civil Services Main 1998 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

June 14, 2007

Question 1(a) Given two linearly independent vectors (1, 0, 1, 0) and (0, −1, 1, 0) of R4 ,
find a basis of R4 which includes them.

Solution. Let v1 = (1, 0, 1, 0), v2 = (0, −1, 1, 0). Clearly these are linearly independent.
Let e1 , e2 , e3 , e4 be the standard basis. Then v1 , v2 , e1 , e2 , e3 , e4 generate R4 . We have to
find four vectors out of these which are linearly independent and include v1 , v2 .
If αv1 + βv2 + γe1 = 0, then α + γ = 0, −α = 0, α + β = 0 ⇒ α = β = γ = 0. Therefore
v1 , v2 , e1 are linearly independent.
We now show that v1 , v2 , e1 , e4 are linearly independent. Let αv1 + βv2 + γe1 + δe4 = 0
then δ = 0, and therefore α = β = γ = 0 because v1 , v2 , e1 are linearly independent.
Thus v1 , v2 , e1 , e4 is a basis of R4 .
Note that e2 = v1 − v2 − e1 , e3 = v1 − e1 .

Question 1(b) If V is a finite dimensional vector space over R and if f and g are two
linear transformations from V to R such that f (v) = 0 implies g(v) = 0, then prove that
g = λf for some λ ∈ R.

Solution. If g = 0, take λ = 0, so g(v) = 0 = 0f (v) for all v ∈ V.


If g 6= 0, then f 6= 0. Thus ∃v ∈ V such that f (v) 6= 0 ⇒ ∃w ∈ V such that f (w) = 1
v
(Note that f ( f (v) ) = 1).
Thus V/ ker f w R, or dim(ker f ) = n − 1. Similarly ker g has dimension n − 1. In fact,
ker f = ker g ∵ ker f ⊆ ker g and dim(ker f ) = dim(ker g). Let {v2 , . . . , vn } be a basis of
ker f and extend it to {v1 , v2 , . . . , vn } a basis of V. Then g = λf with λ = g(v1 )/f (v1 ) ∵
if v = α1 v1 + . . . + αn vn , then g(v) = α1 g(v1 ) = α1 λf (v1 ) = λf (v).

1
Question 1(c) Let T : R3 −→ R3 be defined by T(x1 , x2 , x3 ) = (x2 , x3 , −cx1 − bx2 − ax3 )
where a, b, c are fixed real numbers. Show that T is a linear transformation of R3 and that
A3 + aA2 + bA + cI = 0 where A is the matrix of T w.r.t. the standard basis of R3 .
Solution. Let x = (x1 , x2 , x3 ), y = (y1 , y2 , y3 ). Then
T(αx + βy) = (αx2 + βy2 , αx3 + βx3 , −c(αx1 + βy1 ) − b(αx2 + βy2 ) − a(αx3 + βy3 ))
= α(x2 , x3 , −cx1 − bx2 − ax3 ) + β(y2 , y3 , −cy1 − by2 − ay3 )
= αT(x) + βT(y)
Thus T is linear.
Clearly
T(1, 0, 0) = (0, 0, −c)
T(0, 1, 0) = (1, 0, −b)
T(0, 0, 1) = (0, 1, −a)
 
0 1 0
A= 0 0 1 
−c −b −a
The characteristic equation of A is |A − λI| = 0.
−λ 1 0
0 λ 1 = 0
−c −b −a − λ
−λ2 (a + λ) − bλ − c = 0
λ3 + aλ2 + bλ + c = 0
Now by the Cayley-Hamilton theorem A3 + aA2 + bA + cI = 0.
Question 2(a) If A and B are two matrices of order 2 × 2 such that A is skew-Hermitian
and AB = B then show that B = 0.
Solution. We first of all prove that eigenvalues of skew-Hermitian matrices are 0 or pure
0
imaginary. Let A be skew-Hermitian, i.e. A = −A and let λ be its characteristic root. If
x is an eigenvector of λ, then
Ax = λx
⇒ x0 λx = x0 Ax
0
= −x0 A x
0
= −Ax x
= −λx0 x
Thus λ = −λ ∵ x0 x 6= 0, showing that the real part of λ is 0.
Now if B 6= 0 and c1 , c2 are the columns of B, then c1 6= 0 or c2 6= 0. AB = B means
that Ac1 = c1 and Ac2 = c2 . Since either c1 6= 0 or c2 6= 0, 1 must be an eigenvalue of A,
which is not possible. Hence c1 = 0 and c2 = 0, which means B = 0.

2
Question 2(b) If T is a complex matrix of order 2 × 2 such that tr T = tr T2 = 0, then
show that T2 = 0.

Solution. Let λ1 , λ2 be the eigenvalues of T, then λ21 , λ22 are the eigenvalues of T2 . Given
that

tr T = λ1 + λ2 = 0
tr T2 = λ21 + λ22 = 0

0 = λ21 + λ22 = λ21 + (−λ1 )2 ⇒ λ1 = 0 and from λ1 + λ2 = 0 we get λ1 = λ2 = 0. The


characteristic equation of T is (x − λ1 )(x − λ2 ) = 0, or x2 = 0. By Cayley-Hamilton
theorem, we immediately get T2 = 0.

Question 2(c) Prove that a necessary and sufficient condition for an n × n real matrix A
to be similar to a diagonal matrix is that the set of characteristic vectors of A includes a set
of n linearly independent vectors.

Solution.
Necessity: By hypothesis there exists a nonsingular matrix P such that
 
λ1 0 . . . 0
 0 λ2 . . . 0 
P−1 AP = D =   ... ... ... ... 

0 0 . . . λn

Let P = [c1 , c2 , . . . , cn ], where each ci is an n-row column vector.

A[c1 , c2 , . . . , cn ] = [c1 , c2 , . . . , cn ]D = [λ1 c1 , λ2 c2 , . . . , λn cn ]

so Aci = λi ci for i = 1, . . . , n. Thus c1 , c2 , . . . , cn are characteristic vectors of A corre-


sponding to the eigenvalues λ1 , . . . , λn . Since P is nonsingular, c1 , c2 , . . . , cn are linearly
independent. Thus the set of characteristic vectors of A includes a set of n linearly indepen-
dent vectors.
Sufficiency: Let c1 , c2 , . . . , cn be n linearly independent eigenvectors of A corresponding
to eigenvalues λ1 , . . . , λn . Thus Aci = λi ci for i = 1, . . . , n. Let P = [c1 , c2 , . . . , cn ], then
P is nonsingular (otherwise 0 is an eigenvalue of P, so ∃x = (x1 , . . . , xn ) 6= 0 such that
Px = 0 ⇒ x1 c1 + . . . + xn cn = 0 ⇒ c1 , c2 , . . . , cn are not linearly independent.). Clearly
 
λ1 0 . . . 0
 0 λ2 . . . 0 
P−1 AP = D =   ... ... ... ... 

0 0 . . . λn

3
Question 3(a) Let A be a m × n matrix. Show that the sum of the rank and nullity of A
is n.

Solution. The matrix A can be regarded as a linear transformation A : F n −→ F m where


F is the field to which the entries of A belong, and the bases for F n , F m are standard bases.
Let T : V −→ W be a linear transformation, where dim(V) = n, dim(W) = m. We shall
show that dim(T(V)) + dim(kernel T) = n.
Take vn−r+1 , . . . , vn to be any basis of kernel T, where dim(kernel T) = r. Complete it
to a basis v1 , . . . , vn−r+1 . . . , vn of V. We shall show that T(v1 ), . . . , T(vn−r ) are linearly
independent and generate T(V), thus dim(T(V)) = n − r.
If w ∈ T(V), then ∃v ∈ V such that T(v) = w. If v = α1 v1 + . . . + αn vn , αi ∈ F, then
w = T(v) = α1 T(v1 ) + . . . + αn−r T(vn−r ) because T(vi ) = 0 for i > n − r. Thus T(V) is
generated by T(v1 ), . . . , T(vn−r ).
If α1 T(v1 ) + . . . + αn−r T(vn−r ) = 0, then T(α1 v1 + . . . + αn−r vn−r ) = 0. This implies
α1 v1 +. . .+αn−r vn−r ∈ kernel T ⇒ α1 v1 +. . .+αn−r vn−r = αn−r+1 vn−r+1 +. . .+αn vn . But
v1 , . . . , vn are linearly independent, so αi = 0 for i = 1, . . . , n. Hence T(v1 ), . . . , T(vn−r ) are
linearly independent, so they form a basis for T(V). Thus dim(T(V)) + dim(kernel T) = n.

Question 3(b) Find all real 2 × 2 matrices A with real eigenvalues which satisfy AA0 = I.

Solution. Since AA0 = I, |A| = ±1. If |A| = 1, then


    
a b a c 1 0
=
c d b d 0 1

so a2 + b2 = 1, c2 + d2 = 1, ac + bd = 0, ad − bc = 1. Let a = cos θ, b = sin θ. Then


c cos θ + d sin θ = 0 c cos θ sin θ + d sin2 θ = 0
⇒ ⇒ d = cos θ, c = − sin θ
−c sin θ + d cos θ = 1 −c sin θ cos θ + d cos2 θ = cos θ
 
cos θ sin θ
Thus A = , θ is real.
− sin θ cos θ
Now the eigenvalues of A are given by
cos θ − λ sin θ
|A − λI| = =0
− sin θ cos θ − λ

So (cos θ − λ)2 + sin2 θ = 0, or λ2 − 2λ cos θ + 1 = 0. Thus



2 cos θ ± 4 cos2 θ − 4
λ= = cos θ ± i sin θ
2
Since the eigenvalues of A are real, sin θ = 0, so cos θ = ±1. Thus
   
1 0 −1 0
A= ,
0 1 0 −1

4
 
0 1
If |A| = −1, J = , then |JA| = 1. Also JA(JA)0 = JAA0 J0 = JJ0 = I. Thus
1 0
 
cos θ sin θ
JA =
− sin θ cos θ
      
−1 cos θ sin θ 0 1 cos θ sin θ − sin θ cos θ
A=J = =
− sin θ cos θ 1 0 − sin θ cos θ cos θ sin θ
Now the eigenvalues of A are given by

λ + sin θ − cos θ
0 = |λI − A| = = λ2 − sin2 θ − cos2 θ = λ2 − 1
− cos θ λ − sin θ

Hence λ = ±1, so the eigenvalues are always real. Thus the possible values of A are
     
1 0 −1 0 − sin θ cos θ
, , for all real θ
0 1 0 −1 cos θ sin θ

Question 3(c) Reduce to diagonal matrix by rational congruent transformation the sym-
metric matrix  
1 2 −1
A= 2 0 3 
−1 3 1

Solution. The corresponding quadratic form is

x2 + z 2 + 4xy − 2xz + 6yz


= (x + 2y − z)2 − 4y 2 + 10yz
5 25
= (x + 2y − z)2 − 4(y − z)2 + z 2
4 4
25
= X 2 − 4Y 2 + Z 2
4
where X = x + 2y − z, Y = y − 5z/4, Z = z. From this we get z = Z, y = Y + 5Z/4, x =
X − 2Y − 32 Z. Thus

1 −2 − 32
     
1 0 0 1 0 0 1 2 −1
 0 −4 0  =  −2 1 5 
0  2 0 3  0 1 4
0 0 25 4
− 23 54 1 −1 3 1 0 0 1

5
UPSC Civil Services Main 1999 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

June 14, 2007

Question 1(a) Let V be the vector space of functions from R to R. Show that f, g, h ∈ V
are linearly independent where f (t) = e2t , g(t) = t2 and h(t) = t.

Solution. Let a, b, c ∈ R and let ae2t + bt2 + ct = 0 for all t. Setting t = 0 shows that
a = 0. From t = 1 we get b + c = 0, and t = −1 gives b − c = 0, hence b = c = 0. Thus
f, g, h are linearly independent.

Question 1(b) If the matrix  linear transformation T on V2 (R) with respect to the
 of the
1 1
basis B = {(1, 0), (0, 1)} is , then what is the matrix of T with respect to the ordered
1 1
basis B1 = {(1, 1), (1, −1)}.

Solution. T(e1 ) = e1 + e2 , T(e2 ) = e1 + e2 . Let v1 = (1, 1) = e1 + e2 , v2 = (1, −1) = e1 −


e2 . Then T(v1 ) = T((1, 1)) = (2, 2) = 2e1 + 2e2 = 2v1 . T(v2 ) = T((1, −1)) =(0, 0) =  0.
2 0
Thus the matrix of T with respect to the ordered basis B1 = {(1, 1), (1, −1)} is .
0 0
 
4 2 2
Question 1(c) Diagonalize the matrix A = 2 4 2.
2 2 4

1
Solution. The characteristic equation is
4−x 2 2
0 = 2 4−x 2
2 2 4−x
= (4 − x)((4 − x)2 − 4) + 2(4 − 8 + 2x) − 2(8 − 2x − 4)
= (4 − x)(12 − 8x + x2 ) − 8 + 4x − 8 + 4x
= 48 − 32x + 4x2 − 12x + 8x2 − x3 − 16 + 8x
= −(x3 − 12x2 + 36x − 32)
= −(x − 2)(x2 − 10x + 16) ∵ 2 is a root
= −(x − 2)(x − 2)(x − 8)
The characteristic roots are 2, 2, 8.
If (x1 , x2 , x3 ) is an eigenvector for λ = 8, then
    
−4 2 2 x1 0
 2 −4 2   x2  =  0 
2 2 −4 x3 0
Thus x1 = x2 = x3 , so (1, 1, 1) is an eigenvector for λ = 8.
Similarly for λ = 2,     
2 2 2 x1 0
 2 2 2   x2  =  0 
2 2 2 x3 0
Thus x1 + x2 + x3 = 0. Take x1 = 1, x2 = 0, so (1, 0, −1) is an eigenvector. Take x1 = 0, x2 =
1, so (0, 1, −1)
 is an eigenvector
 for λ = 2. These
 eigenvectors
 are linearly independent.
1 1 0 8 0 0
Thus if P =  1 0 1 , then P−1 AP =  0 2 0 
1 −1 −1   0 0 2
8 0 0
To check, verify that AP = P 0 2 0 

0 0 2
   
1 0 0 i
Question 2(a) Test for congruency the matrices A = and B = . Prove
0 −1 −i 0
that A2n = B2m = I where m, n are positive integers.
Solution. A and B are not congruent, because A is symmetric and B is not. If A ≡ B
then ∃P non-singular such that P0 AP = B which implies that B should be symmetric.
    
2 1 0 1 0 1 0
A = =
0 −1 0 −1 0 1
    
0 i 0 i 1 0
B2 = =
−i 0 −i 0 0 1

2
Hence A2n = (A2 )n = I, and B2m = (B2 )m = I.

Question 2(b) If A is a skew symmetric matrix of order n then prove that (I−A)(I+A)−1
is orthogonal.

Solution.

O = (I − A)(I + A)−1
OO0 = (I − A)(I + A)−1 ((I + A)−1 )0 (I − A)0
= (I − A)(I + A)−1 (I − A)−1 (I + A) as A0 = −A
= (I − A)[(I − A)(I + A)]−1 (I + A)
= (I − A)[I − A2 )]−1 (I + A)
= (I − A)[(I + A)(I − A)]−1 (I + A)
= (I − A)(I − A)−1 (I + A)−1 (I + A)
= I

Similarly it can be shown that O0 O = I. Hence O is orthogonal.

Question 2(c) Test for positive definiteness the quadratic form 2x2 + y 2 + 2z 2 + 2xy − 2zx.

Solution. The given form is


1
2
(4x2 + 3y 2 + 4z 2 + 4xy − 4zx)
= 21 ((2x + y − z)2 + y 2 + 3z 2 + 2yz)
= 21 ((2x + y − z)2 + (y + z)2 + 2z 2 )
Now 2x + y − z = 0, y + z = 0, z = 0 implies x = y = z = 0. Hence the form is positive
definite.

Question 2(d) Reduce the equation

x2 + y 2 + z 2 − 2xy − 2yz + 2zx + x − y − 2z + 6 = 0

into canonical form and determine the nature of the quadric.

Solution. Consider x2 + y 2 + z 2 − 2xy − 2yz + 2zx. Its matrix is


 
1 −1 1
S =  −1 1 −1 
1 −1 1

Its characteristic equation is

1 − λ −1 1
−1 1 − λ −1 =0
1 −1 1 − λ

3
or λ3 − 3λ2 = 0. Thus λ = 0, 0, 3.
We next determine the characteristic vectors. For λ = 0, we get
    
1 −1 1 x1 0
 −1 1 −1   x2  =  0 
1 −1 1 x3 0

Thus x1 − x2 + x3 = 0. Take (1, 1, 0) and (−1, 1, 2) as orthogonal characteristic vectors


corresponding to λ = 0.
For λ = 3, we get     
−2 −1 1 x1 0
 −1 −2 −1   x2  =  0 
1 −1 −2 x3 0
This yields x1 = −x2 = x3 . Take (1, −1, 1) as the characteristic vector for λ = 3.
Thus if  1 
√ √1 − √1
 3 2 6
O =  − √13 √12 √16 

√1 0 √2
3 6
 
3 0 0
Then O0 SO =  0 0 0 .
0 0 0
Let    
X x
O  Y  =  y 
Z z
or
X Y Z
x = √ +√ −√
3 2 6
X Y Z
y = −√ + √ + √
3 2 6
X 2Z
z = √ +√
3 6
Thus the given equation can be transformed to
X Y
3X 2 + √ + √ − √Z6 + √X3 − √Y2 − √Z6 − 2X
√ − 4Z
√ +6=0
3 2
2
√ 3 6
⇒ 3X − Z 6 + 6 = 0
q √
⇒ 32 X 2 = Z − 6

√ 2
q
2
Shifting the origin to (0, 0, 6), we get X = 3
Z, showing that the equation is a parabolic
cylinder.

4
1 Reduction of Quadrics
For the sake of completeness, we give the complete theoretical discussion for the above
question.
Let

F (x, y, z) = ax2 + by 2 + cz 2 + 2f yz + 2gzx + 2hxy + 2ux + 2vy + 2wz + d = 0

It can be expressed in matrix form as


  
a h g u x
 h b f v  y
  
x y z 1 
 g
=0
f c w  z 
u v w d 1

Let the 4 × 4 matrix be Q.


Step I. Consider
    
 a h g x  x
x y z  h b f  y  = x y z S y 
g f c z z

1. Find the characteristic roots of S — λ1 , λ2 , λ3 .

2. Find characteristic vectors v1 , v2 , v2 corresponding to λ1 , λ2 , λ3 which are orthogonal.


These on normalization give us
 
O = ||vv11 || ||vv22 || ||vv33 ||

Thus we get O0 SO =diagonal(λ1 , λ2 , λ3 ). Let


   
X x
O  Y  =  y 
Z z

This gives us three equations expressing x, y, z in term of X, Y, Z. Substituting in F , we get

F (X, Y, Z) = λ1 X 2 + λ2 Y 2 + λ3 Z 2 + 2U X + 2V Y + 2W Z + d = 0

(Note that d is unaffected.)


  
X x
Note 1 Since O is orthogonal, the transformation O  Y  =  y  is just a rotation
Z z
of the axes, and therefore the nature of the quadric is unaffected.

5
Step II. We now consider 3 possibilities (ρ is rank of the matrix):
1. ρ(S) = 3 ⇒ λ1 λ2 λ3 6= 0. Shift the origin to (− λU1 , − λV2 , − W
λ3
), i.e. x = X + λU1 , y =
Y + λV2 , z = Z + W
λ3
. (Actually we are just completing the squares.) F gets transformed
to
λ1 x2 + λ2 y 2 + λ3 z 2 + d2 = 0
2. ρ(S) = 2. One characteristic root, say λ3 = 0. Shift the origin to (− λU1 , − λV2 , 0), and F
gets transformed to
λ1 x2 + λ2 y 2 + 2w2 z + d2 = 0
3. ρ(S) = 1. Two characteristic roots, say λ2 = λ3 = 0. Shift the origin to (− λU1 , 0, 0),
and F gets transformed to
λ1 x2 + 2v2 y + 2w2 z + d2 = 0

Note that ρ(S) = 0 ⇒ λ1 = λ2 = λ3 = 0. Then F (x, y, z) is no longer a quadric, it is a


plane.
Step III. Observe that ρ(S) ≤ ρ(Q) ≤ 4, ρ(S) ≤ 3.
1. Let ρ(Q) = 4, ρ(S) = 3. As shown above, F (x, y, z) = 0 is transformed to
λ1 x2 + λ2 y 2 + λ3 z 2 + d2 = 0
|Q| = λ1 λ2 λ3 d2 ⇒ d2 = |Q||S|
. Thus the quadric is λ1 x2 + λ2 y 2 + λ3 z 2 = − |Q|
|S|
, which
is a central quadric i.e. a quadric surface with a center, e.g., a sphere, ellipsoid, or
hyperboloid, depending upon the signs and magnitudes of the eigenvalues. If the right
hand side has positive sign (maybe by multiplying the equation with -1), then look at
the signs of the coefficients of the l.h.s. If all are positive, it is an ellipsoid, further
if all are equal, it is a sphere. If 1 or 2 are negative, it is a hyperboloid. If all 3 are
negative, the surface is the empty set.
2. ρ(Q) = 3, ρ(S) = 3. |Q| = λ1 λ2 λ3 d2 = 0 ⇒ d2 = 0, because λ1 λ2 λ3 6= 0. Thus the
quadric becomes λ1 x2 + λ2 y 2 + λ3 z 2 = 0, which is a cone.
3. ρ(Q) = 4, ρ(S) = 2
F (x, y, z) = λ1 x2 + λ2 y 2 + 2w2 z + d2 = 0
 
λ1 0 0 0
 0 λ2 0 0 
Q=  0 0 0 w2 

0 0 w 2 d2
d2
|Q| = −λ1 λ2 w22 . Since ρ(Q) = 4, w2 6= 0. Shifting the origin to (0, 0, − 2w 2
) we get

F (x, y, z) = λ1 x2 + λ2 y 2 + 2w2 z = 0
where w22 = −|Q|/λ1 λ2 . The surface is a paraboloid.

6
4. ρ(Q) = 3, ρ(S) = 2

F (x, y, z) = λ1 x2 + λ2 y 2 + 2w2 z + d2 = 0

ρ(Q) = 3 ⇒ |Q| = −λ1 λ2 w22 ⇒ w2 = 0. Since


 
λ1 0 0 0
 0 λ2 0 0 
Q=  0 0

0 0 
0 0 0 d2

and ρ(Q) = 3, d2 6= 0. Thus

F (x, y, z) = λ1 x2 + λ2 y 2 + d2 = 0

The quadric is a hyperbolic or elliptic cylinder.

5. ρ(Q) = 2, ρ(S) = 2  
λ1 0 0 0
 0 λ2 0 0 
Q=
 0 0

0 0 
0 0 0 d2
ρ(Q) = 2 ⇒ d2 = 0, and F (x, y, z) = λ1 x2 + λ2 y 2 = 0. The quadric is a pair of distinct
planes or a point, if λ1 = λ2 6= 0.

6. ρ(Q) = 4, ρ(S) = 1

F (x, y, z) = λ1 x2 + 2v2 y + 2w2 z + d2 = 0


 
λ1 0 0 0
 0 0 0 v2 
Q=  0 0 0 w2 

0 v2 w2 d2
which shows that ρ(Q) = 4 is not possible.

7. ρ(Q) = 3, ρ(S) = 1

F (x, y, z) = λ1 x2 + 2v2 y + 2w2 z + d2 = 0

ρ(Q) = 3 ⇒ both v2 and w2 cannot be 0. Suppose v2 6= 0. Shift the origin to


d2
(0, − 2v2
, 0).
F (x, y, z) = λ1 x2 + 2v2 y + 2w2 z = 0

7
Rotate the axes by

x = X
v2 w2
y = p 2 2
Y −p 2 Z
v2 + w2 v2 + w22
w2 v
z = p 2 Y + p 2 Z
v2 + w22 v22 + w22
q
2
F (x, y, z) = λ1 X + 2v3 Y = 0 v3 = v22 + w22
Thus the quadric is a parabolic cylinder.

8. ρ(Q) = 2, ρ(S) = 1 ρ(Q) = 2 ⇒ v2 = w2 = 0, d2 6= 0.

F (x, y, z) = λ1 X 2 + d2 = 0
The quadric is two parallel planes.

9. ρ(Q) = 1, ρ(S) = 1 The quadric immediately reduces to F (x, y, z) = λ1 X 2 = 0, so it


represents two coincident planes x = 0.
ρ(Q) ρ(S) Surface Canonical form
4 3 central quadric λ1 x + λ2 y 2 + λ3 z 2 = − |Q|
2
|S|
2 2 2
3 3 cone λ1 x + λ2 y + λ3 z = 0
4 2 paraboloid λ1 x + λ2 y 2 + 2w2 z = 0, w22 = − λ|Q|
2
1 λ2
3 2 elliptic or hyperbolic cylinder λ1 x2 + λ2 y 2 + d2 = 0
pair of distinct planes if λ1 λ2 < 0
2 2 λ 1 x2 + λ 2 y 2 = 0
point if λ1 λ2 > 0
4 1 Not possible p
3 1 parabolic cylinder λ1 X 2 + 2v3 Y = 0, v3 = v22 + w22
2 1 pair of parallel planes λ1 X 2 + d2 = 0
1 1 Two coincident planes λ1 X 2 = 0

8
UPSC Civil Services Main 2000 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

June 14, 2007

Question 1(a) Let V be a vector space over R and let

T = {(x, y) | x, y ∈ V}

Define (x, y) + (x1 , y1 ) = (x + x1 , y + y1 ) in T and (α + iβ)(x, y) = (αx − βy, βx + αy)


for every α, β ∈ R. Show that T is a vector space over C.

Solution.

1. v1 , v2 ∈ T ⇒ v1 + v2 ∈ T

2. (0, 0) is the additive identity where 0 is the zero vector in V.

3. If (x, y) ∈ T , then (−x, −y) ∈ T , and (x, y) + (−x, −y) = (0, 0)

4. Clearly v1 +v2 = v2 +v1 and (v1 +v2 )+v3 = v1 +(v2 +v3 ) as addition is commutative
and associative in V.

5. z ∈ C, v ∈ T ⇒ zv ∈ T

6. 1v = (1 + i0)(x, y) = (x, y)

7.

(α + iβ)((x1 , y1 ) + (x2 , y2 ))
= (α(x1 + x2 ) − β(y1 + y2 ), β(x1 + x2 ) + α(y1 + y2 ))
= (αx1 − βy1 , βx1 + αy1 ) + (αx2 − βy2 , βx2 + αy2 )
= (α + iβ)(x1 , y1 ) + (α + iβ)(x2 , y2 )

1
8.

((α + iβ)(γ + iδ))(x, y)


= (αγ − βδ + i(αδ + βγ))(x, y)
= ((αγ − βδ)x − (αδ + βγ)y, (αγ − βδ)y + (αδ + βγ)x)
= (α(γx − δy) − β(δx + γy), β(γx − δy) + α(δx + γy))
= (α + iβ)((γx − δy), (δx + γy))
= (α + iβ)((γ + iδ)(x, y))

Thus T is a vector space over C.

Question 1(b) Show that if λ is a characteristic root of a non-singular matrix A, then λ−1
is a characteristic root of A−1 .

Solution.
Av = λv v 6= 0
−1 −1
⇒ A Av = λA v
⇒ A−1 v = λ−1 v
Thus λ−1 is a characteristic root of A−1 .

Question 2(a) Prove that a real symmetric matrix A is positive definite


 if and only if
1 2 3
A = BB0 for some non-singular B. Show also that A = 2 5 7  is positive definite
3 7 11
0 0
and find B such that A = BB . (Here B is the transpose of B.)

Solution. If A = BB0 for some non-singular B, then x0 Ax = x0 BB0 x where x 6= 0 is a


column vector. Since |B| =6 0, B0 x 6= 0 =⇒ x0 B.(B0 x) is the sum on n squares, at least one
of which is non-zero. Thus x0 Ax > 0 whenever x 6= 0, showing that A is positive definite.
Conversely, if A is positive definite, then ∃P non-singular such that P0 AP = In . Thus
A = P0−1 P−1 . Letting B = P0−1 we get A = BB0 as required.
The existence of P can be found by induction on n. Let
 
a11 a12 . . . a1n
 a21 a22 . . . a2n 
A= 
 ... 
an1 an2 . . . ann

Define
1 − aa11 . . . − aa1n
 12

11
 0 1 ... 0 
Q= 
 ... 
0 0 ... 1

2
 
0 a11 0
Then Q is non-singular, and Q AQ = , where S is (n − 1) × (n − 1) positive
0 S
definite. Let Q∗ be a (n − 1) ×  (n − 1) non-singular
 matrix such that Q∗ 0 SQ∗ is diagonal,
1 0
by induction. Then let Q1 = , and let P = Q1 Q. Then P0 AP is diagonal
0 Q∗
(b11 , b22 , . . . , bnn ). Let B = diagonal ( √b111 , . . . , √b1nn ). Then B0 P0 APB = In .
The quadratic form Q(x, y, z) associated with the given matrix A is given by
  
 1 2 3 x
x y z  2 5 7   y  = x2 + 5y 2 + 11z 2 + 4xy + 6xz + 14yz
3 7 11 z

Completing the squares we get Q(x, y, z) = (x + 2y + 3z)2 + (y + z)2 + z 2 , so A is positive


definite, as z = 0, y + z = 0, x + 2y + 3z = 0 =⇒ x = y = z = 0.
If B is a 3 × 3 matrix such that
   
x x + 2y + 3z
B0  y  =  y+z 
z z

then x0 BB0 x = Q = x0 Ax, so A = BB0 as A and BB0 are both symmetric. Clearly
 
1 0 0
B=  2 1 0 
3 1 1

and it can easily be verified that A = BB0 .

Question 2(b) Prove that a system Ax = B of non-homogeneous equations in n unknowns


has a unique solution provided the coefficient matrix is non-singular.

Solution. If A is non-singular, then the system is consistent because the rank of the
coefficient matrix A = n = rank of the n × n + 1 augmented matrix (A, B). If x1 , x2 are
two solutions, then
Ax1 = B = Ax2
=⇒ A(x1 − x2 ) = 0
=⇒ A−1 A(x1 − x2 ) = 0
=⇒ x1 = x2
Thus the unique solution is given by the column vector x = A−1 B.

3
Question 2(c) Prove that two similar matrices have the same characteristic roots. Is the
converse true? Justify your claim.

Solution. Let B = P−1 AP then characteristic polynomial of B is |λI − B| = |λI −


P−1 AP| = |P−1 λIP − P−1 AP| = |P−1 ||λI − A||P| = |λI − A|. (Note that |X||Y| = |XY|.)
Thus the characteristic polynomial of B is the same as that of A, so both A and B have the
same characteristic roots.
The converse is not true. Let
   
1 1 1 0
A= , B=
0 1 0 1

Then A and B have the same characteristic polynomial (λ − 1)2 and thus the same charac-
teristic roots. But B can never be similar to A because P−1 BP = B whatever P may be.

4
UPSC Civil Services Main 2001 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

June 14, 2007

Question 1(a) Show that the vectors (1, 0, −1), (0, −3, 2) and (1, 2, 1) form a basis of the
vector space R3 (R).
Solution. Since dimR (R3 ) = 3, it is enough to prove that these are linearly independent.
If possible, let
a(1, 0, −1) + b(0, −3, 2) + c(1, 2, 1) = 0
This implies
a + c = 0, −3b + 2c = 0, −a + 2b + c = 0
Solving for c, c + 34 c + c = 0, so c = 0, hence a = b = 0. (Note that if these linearly
independent vectors were not a basis, they could be completed into one, but in R3 any four
vectors are linearly dependent, so this is a maximal linearly independent set, hence it is a
basis.
Alternate Solution. Since dim(R3 ) = 3, to show that (1, 0, −1), (0, −3, 2) and (1, 2, 1)
form a basis it is enough to show that these vectors generate R3 . In fact, given (x1 , x2 , x3 ),
we can always find a, b, c s.t. (x1 , x2 , x3 ) = a(1, 0, −1) + b(0, −3, 2) + c(1, 2, 1) as follows:
a + c = x1 , −3b + 2c = x2 , −a + 2b + c = x3 . Thus (c − x1 ) + 2(2c − x2 )/3 + c = x3 ,
so c + 34 c + c = x1 + 23 x2 + x3 . Thus c = 3x1 +2x 10
2 +3x3
, a = x1 − c = 7x1 −2x10
2 −3x3
, and
2c−x2 x1 −x2 +x3
b= 3 = 5
.
|A|
Question 1(b) If λ is a characteristic root of a non-singular matrix A, then prove that λ
is a characteristic root of Adj A.
Solution. If µ is a characteristic root of A, then aµ is a characteristic root of aA for a
constant a, because if Av = µv, v 6= 0 a vector, then aAv = aµv. Hence the result.
If λ is the characteristic root of A, |A| 6= 0, then λ 6= 0, and λ−1 is a characteristic root
of A−1 , because Av = λv =⇒ A−1 v = λ−1 v.
Since Adj A = A−1 |A|, it follows that |A| λ
is a characteristic root of Adj A.

1
 
1 0 0
Question 2(a) If A =  1 0 1  show that for all integers n ≥ 3, An = An−2 + A2 − I.
0 1 0
50
Hence determine A .

Solution. Characteristic equation of A is


λ−1 0 0
1 λ 1 =0
0 1 λ

or (λ−1)(λ2 −1) = λ3 −λ2 −λ+1 = 0. From the Cayley-Hamilton theorem, A3 −A2 −A+I =
0 ⇒ A3 = A + A2 − I. Thus the result is true for n = 3. Suppose the theorem is true for
n = m i.e. Am = Am−2 + A2 − I. We shall prove it for m + 1.

Am+1 = Am A
= (Am−2 + A2 − I)A
= Am−1 + A3 − A
= Am−1 + A2 + A − A − I
= Am−1 + A2 − I

The result follows by induction.


Let n = 2m. Using successively An = An−2 + A2 − I, we get A2m = mA2 − (m − 1)I.
Now     
1 0 0 1 0 0 1 0 0
A2 =  1 0 1  1 0 1  =  1 1 0 
0 1 0 0 1 0 1 0 1
so

A50 = 25A

2
− 24I   
25 0 0 24 0 0
=  25 25 0  −  0 24 0 
25 0 25 0 0 24
 
1 0 0
=  25 1 0 
25 0 1

Question 2(b) When is a square matrix A said to be congruent to a square matrix B?


Prove that every matrix congruent to a skew-symmetric matrix is skew-symmetric.

Solution. A ≡ B if ∃P nonsingular, s.t. P0 AP = B. If S0 = −S then (P0 SP)0 = P0 S0 P =


−(P0 SP), so P0 SP is also skew-symmetric.

2
Question
 2(c) Determine
 the orthogonal matrix P such that P−1 AP is diagonal where
7 4 −4
A=  4 −8 −1 .
−4 −1 −8

Solution. The characteristic equation is

λ − 7 −4 4
−4 λ + 8 1 = 0
4 1 λ+8
(λ − 7)((λ + 8)2 − 1) + 4(−4 − 4λ − 32) + 4(−4 − 4λ − 32) = 0
λ3 + 9λ2 − 81λ − 729 = 0
(λ + 9)(λ2 − 81) = 0

Thus λ = 9, −9, −9.

1. λ = 9. If (x1 , x2 , x3 ) is the eigenvector corresponding to λ = 9, we get

2x1 − 4x2 + 4x3 = 0


−4x1 + 17x2 + x3 = 0
4x1 + x2 + 17x3 = 0

From the second and third we get 18x2 +18x3 = 0. Take x2 = 1. Then x3 = −1, x1 = 4,
so (4, 1, −1) is an eigenvector for λ = 9.

2. λ = −9. If (x1 , x2 , x3 ) is the eigenvector corresponding to λ = −9, we get

−16x1 − 4x2 + 4x3 = 0


−4x1 − x2 + x3 = 0
4x1 + x2 − x3 = 0

There is only one equation 4x1 + x2 − x3 = 0. Take x1 = 0, x2 = 1, then x3 = 1,


so (0, 1, 1) is an eigenvector. Take x1 = −1, x2 = 2, then x3 = −2, so (−1, 2, −2) is
another eigenvector. These two are orthogonal to each other and are eigenvectors for
λ = −9. Note that to make the second vector orthogonal to the first, we needed to
ensure x2 = −x3 , then the equation suggested values for x1 , x2 .

Let  
0 − 13 √4
18
P= √1 2 √1
 
2 3 18 
√1 − 32 − √118
2

3
Clearly P0 P = I, since the columns of P are mutually orthogonal unit vectors.
 Moreover from

−9 0 0
Ax = xλ for the eigenvalues and eigenvectors, it follows that AP = P  0 −9 0 .
  0 0 9
−9 0 0
−1
Thus P AP =  0 −9 0 , which is diagonal as required.
0 0 9

Question 2(d) Show that the real quadratic form

Φ = n(x21 + x22 + . . . + x2n ) − (x1 + x2 + . . . + xn )2

in n variables is positive semi-definite.

Solution. Consider the expression

E = (X − x1 )2 + . . . + (X − xn )2
= nX 2 − 2X(x1 + . . . + xn ) + (x21 + x22 + . . . + x2n )

Clearly E being the sum of squares is non-negative, i.e. E ≥ 0. Let

(x1 + x2 + . . . + xn ) (x21 + x22 + . . . + x2n )


A= B=
n n
Then E = n(X 2 − 2AX + B) = n((X − A)2 + B − A2 ). When X = A, E = n(B − A2 ) = Φ,
and since E ≥ 0, Φ ≥ 0.
If x1 = x2 = . . . = xn = 1, then Φ = 0 showing that Φ is actually positive semi-definite.
Alternate solution. By Cauchy’s inequality
n
! n ! n
!2
X X X
a2i b2i ≥ ai b i
i=1 i=1 i=1

Setting b1 = b2 = . . . = bn = 1, we get
n
! n
!2
X X
n a2i − ai ≥0
i=1 i=1

showing that Φ is positive semi-definite.

4
UPSC Civil Services Main 2002 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

June 14, 2007

Question 1(a) Show that the mapping T : R3 −→ R3 where T(a, b, c) = (a − b, b − c, a + c)


is linear and non-singular.

Solution.

T(λa, λb, λc) = (λ(a − b), λ(b − c), λ(a + c))


= λ(a − b, b − c, a + c)
= λT(a, b, c)

T(a, b, c) + T(x, y, z) = (a − b, b − c, a + c) + (x − y, y − z, x + z)
= (a − b + x − y, b − c + y − z, a + c + x + z)
= T(a + x, b + y, c + z)

Thus T is linear.
Now we show that

T(1, 0, 0) = (1, 0, 1)
T(0, 1, 0) = (−1, 1, 0)
T(0, 0, 1) = (0, −1, 1)

are linearly independent.

a1 (1, 0, 1) + a2 (−1, 1, 0) + a3 (0, −1, 1) = 0


⇒ a1 − a2 = 0, a2 − a3 = 0, a1 + a3 = 0
⇒ a1 = a2 = a3 = 0

1
Thus (1, 0, 1), (−1, 1, 0), (0, −1, 1) are linearly independent.
Since (1, 0, 0), (0, 1, 0), (0, 0, 1) generate R3 , (1, 0, 1), (−1, 1, 0), (0, −1, 1) generate T(R3 ),
hence dim(T(R3 )) = 3. Thus T is non-singular.
Alternatively,

T(a, b, c) = (0, 0, 0) ⇐⇒ a − b = 0, b − c = 0, a + c = 0 =⇒ a = b = c = 0

Thus T is 1-1, therefore it is onto, which shows it is nonsingular.

Question 1(b) Prove that a square matrix A is non-singular if and only if the constant
term in its characteristic polynomial is different from 0.

Solution. Let  
a11 ... a1n
 a21 ... a2n 
A=
. . .

... ...
an1 ... ann
Then characteristic polynomial of A = Det(xI − A). I = n × n unit matrix. Clearly
n
X
Det(xI − A) = xn − aii xn−1 + . . . + (−1)n DetA
i=1

Thus A is nonsingular iff the constant term in the characteristic polynomial of A 6= 0.

Question 2(a) Let T : R5 −→ R5 be a linear mapping given by T (a, b, c, d, e) = (b − d, d +


e, b, 2d + e, b + e). Obtain bases for its null space and range space.

Solution. Clearly
T (1, 0, 0, 0, 0) = (0, 0, 0, 0, 0)
T (0, 1, 0, 0, 0) = (1, 0, 1, 0, 1)
T (0, 0, 1, 0, 0) = (0, 0, 0, 0, 0)
T (0, 0, 0, 1, 0) = (−1, 1, 0, 2, 0)
T (0, 0, 0, 0, 1) = (0, 1, 0, 1, 1)
are generators of the range space of T . In fact, if v1 = (1, 0, 1, 0, 1), v2 = (−1, 1, 0, 2, 0), v3 =
(0, 1, 0, 1, 1) then v1 , v2 , v3 generate T (R5 ). We now show that v1 , v2 , v3 are linearly inde-
pendent. Let α1 v1 +α2 v2 +α3 v3 = 0. Then α1 −α2 = 0, α2 +α3 = 0, α1 = 0 ⇒ α2 = α3 = 0.
Thus v1 , v2 , v3 are linearly independent over R ⇒ T (R5 ) is of dimension 3 with basis
v 1 , v2 , v3 .
Thus the null space is of dimension 2, because dim(null space) + dim(range space) =
dim(given vector space = R5 ) = 5. Since e1 = (1, 0, 0, 0, 0) and e3 = (0, 0, 1, 0, 0) belong to
the null space of T , and both are linearly independent over R, e1 , e3 is a basis of the null
space of T .

2
Question 2(b) Let A be a 3 × 3 real symmetric matrix with eigenvalues 0, 0, 5. If the
corresponding eigenvectors are (2, 0, 1), (2, 1, 1), (1, 0, −2) then find the matrix A.
     
2 2 1 0 0 0 0 0 0
Solution. Let P = 0 1 0 , then P−1 AP = 0 0 0, so A = P 0 0 0 P−1 .
1 1 −2 2 0 0 5 0 0 5
1

5
−1 5
−1
A simple calculation shows that P = 0 1  0 , therefore
1
5
0 − 25
 2
−1 15
    
2 2 1 0 0 0 5
1 0 −2
A = 0 1 0  0 0 0  0 1 0 = 0 0 0 
1
1 1 −2 0 0 5 5
0 − 25 −2 0 4
 
1 0 −2
Thus  0 0 0  is the required symmetric matrix with 0, 0, 5 as eigenvalues.
−2 0 4

Question 2(c) Solve the following system of linear equations:

x1 − 2x2 − 3x3 + 4x4 = −1


−x1 + 3x2 + 5x3 − 5x4 − 2x5 = 0
2x1 + x2 − 2x3 + 3x4 − 4x5 = 17

Solution. There are three equations in 5 unknowns, therefore the rank of the coefficient
1 −2 −3
matrix ≤ 3. Since −1 3 5 = 1(−6 − 5) + 2(2 − 10) + (−3)(−1 − 6) = −6, the rank
−2 1 −2
of the coefficient matrix is 3. Using Cramer’s rule we solve the system

x1 − 2x2 − 3x3 = −1 − 4x4 (1)


−x1 + 3x2 + 5x3 = 5x4 + 2x5 (2)
2x1 + x2 − 2x3 = 17 − 3x4 + 4x5 (3)

3
−1 − 4x4 −2 −3
1
x1 = − 5x4 + 2x5 3 5
6
17 − 3x4 + 4x5 1 −2
1
= − [(−1 − 4x4 )(−11) − 3(5x4 + 2x5 − 51 + 9x4 − 12x5 ) + 2(−10x4 − 4x5 − 85 + 15x4 − 20x5 )]
6
1
= − [−6 + 44x4 − 42x4 + 10x4 + 30x5 − 48x5 ]
6
= 1 − 2x4 + 3x5

1 −1 − 4x4 −3
1
x2 = − −1 5x4 + 2x5 5
6
2 17 − 3x4 + 4x5 −2
1
= − [−10x4 − 4x5 − 85 + 15x4 − 20x5 − 8 − 32x4 + 51 − 9x4 + 12x5 + 30x4 + 12x5 ]
6
1
= − [−42 − 6x4 ]
6
= 7 + x4

1 −2 −1 − 4x4
1
x3 = − −1 3 5x4 + 2x5
6
2 1 17 − 3x4 + 4x5
1
= − [51 − 9x4 + 12x5 − 5x4 − 2x5 − 34 + 6x4 − 8x5 − 20x4 − 8x5 + 7 + 28x4 ]
6
1
= − [24 − 6x5 ]
6
= −4 + x5

The solution space is (1 − 2x4 + 3x5 , 7 + x4 , −4 + x5 , x4 , x5 ), where x4 , x5 ∈ R (arbitrarily).


Note that the vector space of solutions is of dimension 2.
Alternate Method.

x2 + 2x3 = −1 + x4 + 2x5 adding (1) and (2) (4)


7x2 + 8x3 = 17 + 7x4 + 8x5 adding 2×(2) and (3) (5)
6x3 = −24 + 6x5 using 7×(4) - (5) (6)
x2 = 7 + x4 from (4) and (6) (7)
x1 = 1 − 2x4 + 3x5 using (1), (6), (7) (8)

The solution space is as shown above.

4
Question 2(d) Use Cayley-Hamilton theorem to find the inverse of the following matrix
 
0 1 2
A= 1  2 3
3 1 1

Solution. Characteristic polynomial is given by |xI − A| = 0, where I is the 3 × 3 unit


matrix.
x −1 −2
−1 x − 2 −3 = 0
−3 −1 x − 1
x[x2 − 3x + 2 − 3] + 1[−x + 1 − 9] − 2[1 + 3x − 6] = 0
x3 − 3x2 − 8x + 2 = 0

By Cayley-Hamilton theorem, A3 − 3A2 − 8A + 2I = 0, or A(A2 − 3A − 8I) = −2I. Thus


1
A−1 = − (A2 − 3A − 8I)
2      
7 4 5 0 3 6 8 0 0
1
= − 11 8 11 − 3 6 9 − 0 8 0
2
4 6 10 9 3 3 0 0 8
 
−1 1 −1
1
= −  8 −6 2 
2
−5 3 −1
 1 1 1

2
− 2 2
= −4 3 −1
5
2
− 23 12

Check A−1 A = I.

5
UPSC Civil Services Main 2003 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

September 16, 2007

Question 1(a) Let S be any non-empty subset of a vector space V over the field F . Show
that the set {a1 x1 + . . . + an xn | a1 , . . . , an ∈ F, x1 , . . . xn ∈ S, n ∈ N} is the subspace
generated by S.

Solution. Let W be the subset mentioned above. Let w1 , w2 ∈ W and a, b ∈ F . Then


w1 = a1 x1 + . . . ar xr , where a1 , . . . , ar ∈ F, x1 , . . . xr ∈ S and w2 = b1 y1 + . . . bs ys where
b1 , . . . bs ∈ F, y1 , . . . , ys ∈ S. Now αw1 +βw2 = c1 z1 +. . .+cr+s zr+s , where ci = αai , 1 ≤ i ≤
r, cj+r = βbj , 1 ≤ j ≤ s, and zi = xi , 1 ≤ i ≤ r, zj+r = yj , 1 ≤ j ≤ s. Clearly cj ∈ F, zj ∈ S
for 1 ≤ j ≤ r + s, showing that for any w1 , w2 ∈ W, α, β ∈ F, αw1 + βw2 ∈ W, moreover
W= 6 ∅ as S ⊆ W and S = 6 ∅. Thus W is a subspace of V.

2 1 1
Question 1(b) If A = 0 1 0, then find the matrix represented by 2A10 − 10A9 +
1 1 2
14A − 6A − 3A + 15A − 21A4 + 9A3 + A − I.
8 7 6 5

Solution. The characteristic equation of A is

2−x 1 1
|A − xI| = 0 1−x 0 = (2 − x)2 (1 − x) − (1 − x) = 0
1 1 2−x

or (1 − x)(4 − 4x + x2 ) − 1 + x = 3 − 7x + 5x2 − x3 = 0, or x3 − 5x2 + 7x − 3 = 0. By the

1
Cayley-Hamilton theorem, we get A3 − 5A2 + 7A − 3I = 0. Now

2A10 − 10A9 + 14A8 − 6A7 − 3A6 + 15A5 − 21A4 + 9A3 + A − I


= 2A7 [A3 − 5A2 + 7A − 3I] − 3A3 [A3 − 5A2 + 7A − 3I] + A − I
= 2A7 0 − 3A 3
 0 + A−I
1 1 1
= A − I = 0 0 0
1 1 1

which is the required value.

Question 2(a) Prove that the eigenvectors corresponding to distinct eigenvalues of a square
matrix are linearly independent.

Solution. Let x1 , x2 , . . . , xk be eigenvectors corresponding to distinct eigenvalues λ1 , . . . , λk


of the square matrix A.
We will show that if any subset of the vectors x1 , x2 , . . . , xk is linearly dependent, then
we can find a smaller set that is also linearly dependent — but this leads to a contradiction
as the eigenvectors are all non-zero.
Suppose, without loss of generality, that x1 , x2 , . . . , xr are linearly dependent. Thus there
exist α1 , . . . αr ∈ R, not all zero, such that

α 1 x1 + . . . + α r x r = 0 (1)

Thus A(α1 x1 + . . . + αr xr ) = 0 ⇒ α1 λ1 x1 + . . . + αr λr xr = 0. Multiplying (1) by λ1 and


subtracting, we have α2 (λ2 − λ1 )x2 + . . . + αr (λr − λ1 )xr = 0. Now αi 6= 0 ⇒ αi (λi − λ1 ) 6=
0, so not all αi (λi − λ1 ) can be zero, so we have a smaller set x2 , . . . , xr which is also
linearly dependent. This leads us to the contradiction mentioned above, hence the vectors
x1 , x2 , . . . , xk must be linearly independent.

Question 2(b) If H is a Hermitian matrix, then show that (H + iI)−1 (H − iI) is a unitary
matrix. Also show that every unitary matrix A can be written in this form provided 1 is not
an eigenvalue of A.

Solution. See related results of 1989, question 2(b).


 
6 −2 2
Question 2(c) If A = −2
 3 −1 then find a diagonal matrix D and a matrix B
2 −1 3
such that A = BDB where B0
0
denotes the transpose of B.

2
 
 x1
Solution. Let Q(x1 , x2 , x3 ) = x1 x2 x3 A x2  be the quadratic form associated with

x3
A. Then

Q(x1 , x2 , x3 ) = 6x21 + 3x22 + 3x23 − 4x1 x2 + 4x1 x3 − 2x2 x3


1 1 7 7 2
= 6[x1 − x2 + x3 ]2 + x22 + x23 − x2 x3
3 3 3 3 3
1 1 2 7 1 2 16 2
= 6[x1 − x2 + x3 ] + [x2 − x3 ] + x3
3 3 3 7 7
 
6 0 0
Let X1 = x1 − 31 x2 + 31 x3 , X2 = x2 − 71 x3 , X3 = x3 and D = 0 73 0 . Then
0 0 16 7
     
x1 x1 X1
0 
  
x1 x2 x3 A x2 = x1 x2 x3 BDB x2 = X1 X2 X3 D X2 
  
x3 x3 X3

1 − 31 31
        
X1 x1 x1 6 0 0
1  
where X2 = 0 1 − 7
   x2 = B x2 . Thus A = BDB where D = 0 73
0  0  0
16
X3 0 0 1 x3 x3 0 0 7
1 0 0
1
and B = − 3 1 0

1
3
− 17 1

Question 2(d) Reduce the quadratic form given below to canonical form and find its rank
and signature:
x2 + 4y 2 + 9z 2 + u2 − 12yx + 6zx − 4zy − 2xu − 6zu

Solution. Let

Q(x, y, z, u) = x2 + 4y 2 + 9z 2 + u2 − 12yx + 6zx − 4zy − 2xu − 6zu


= (x − 6y + 3z − u)2 − 32y 2 + 32yz − 12yu
3
= (x − 6y + 3z − u)2 − 32(y 2 − yz + yu)
8
1 3 2
= (x − 6y + 3z − u) − 32(y − z + u) + 8z 2 + 18u2 − 24uz
2
2 4
1 3 2 3
= (x − 6y + 3z − u) − 32(y − z + u) + 8(z − u)2
2
2 4 2

3
Put

X = x − 6y + 3z − u
1 3
Y = y− z+ u
2 4
3
Z = z− u
2
U = u
2
√ that ∗Q(x,√y, z, u)∗ is transformed∗2 to X∗2 − 32Y
so 2
+ 8Z 2 . We now put X ∗ = X, Y ∗ =
32Y, Z = 8Z, U = U to get X − Y + Z ∗2 as the canonical form of Q(x, y, z, u).
Rank of Q(x, y, z, u) = 3 = rank of the associated matrix. Signature of Q(x, y, z, u) =
number of positive squares - number of negative squares = 2 − 1 = 1.

4
UPSC Civil Services Main 2004 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

September 13, 2007

Question 1(a) Let S be the space generated by the vectors {(0, 2, 6), (3, 1, 6), (4, −2, −2)}.
What is the dimension of S? Find a basis for S.

Solution. (0, 2, 6), (3, 1, 6) are linearly independent, because α(0, 2, 6) + β(3, 1, 6) = 0 ⇒
3β = 0, 2α + β = 0 ⇒ α = β = 0. Thus dim S ≥ 2.
If possible let (4, −2, −2) = α(0, 2, 6)+β(3, 1, 6), then 4 = 3β, −2 = 2α+β, −2 = 6α+6β
should be consistent. Clearly β = 34 , α = 12 (−2 − 43 ) = − 53 from the first two equations, and
these values satisfy the third. Thus (4, −2, −2) is a linear combination of (0, 2, 6) and (3, 1, 6).
Hence dim S = 2 and {(0, 2, 6), (3, 1, 6)} is a basis of S, being a maximal linearly inde-
pendent subset of a generating system.

Question 1(b) Show that f : R3 −→ R where f (x, y, z) = 3x + y − z is a linear transfor-


mation. What is the dimension of the kernel? Find a basis for the kernel.

Solution.

f (α(x1 , y1 , z1 ) + β(x2 , y2 , z2 )) = f (αx1 + βx2 , αy1 + βy2 , αz1 + βz2 )


= 3(αx1 + βx2 ) + αy1 + βy2 − (αz1 + βz2 )
= α(3x1 + y1 − z1 ) + β(3x2 + y2 − z2 )
= αf (x1 , y1 , z1 ) + βf (x2 , y2 , z2 )

Thus f is a linear transformation.


Easy solution for this particular example. Clearly (1, 0, 0) does not belong to the
kernel, therefore the dimension of the kernel is ≤ 2. A simple look at f shows that (0, 1, 1)
and (1, −1, 2) belong to the kernel and are linearly independent, thus the dimension of the
kernel is 2 and {(0, 1, 1), (1, −1, 2)} is a basis for the kernel.

1
General solution. Clearly f : R3 −→ R is onto, thus the dimension of the range of
f is 1. From question 3(a) of 1998, dimension of nullity of f + dimension of range of f =
dimension of domain of f , so the dimension of the nullity of f = 2. Given this, we can pick
a basis for the kernel by looking at the given transformation.

Question 2(a) Show that T the linear transformation from R3 to R4 represented by the
matrix  
1 3 0
 0 1 −2
 
2 1 1
−1 1 2
is one to one. Find a basis for its image.

Solution. {e1 , e2 , e3 } be the standard basis for R3 . Then

T(e1 ) = (1, 0, 2, −1) = v1


T(e2 ) = (3, 1, 1, 1) = v2
T(e3 ) = (0, −2, 1, 2) = v3

By linearity, if T(a, b, c) = av1 + bv2 + cv3 = 0, then a + 3b = 0, b − 2c = 0, 2a + b + c =


0, −a + b + 2c = 0 ⇒ a = b = c = 0. Thus T is one-one. Also {v1 , v2 , v3 } forms a basis for
the image, since {e1 , e2 , e3 } generates R3 , and {v1 , v2 , v3 } is a linearly independent set.

Question 2(b) Verify whether the following system of equations is consistent:

x + 3z = 5
−2x + 5y − z = 0
−x + 4y + z = 4

Solution. The first equation gives x = 5 − 3z, the second now gives 5y = z + 10 − 6z = 10 −
5z ⇒ y = 2−z. Putting these values in the third equation we get 4 = −5+3z+8−4z+z = 3,
hence the given system is inconsistent.
   
1 0 3 1 0 3 5
Alternative. Let A = −2 5 −1 be the coefficient matrix and B = −2 5 −1 0
−1 4 1 −1 4 1 4
be the augmented matrix, then it can be shown that rank A = 2 and rank B = 3, which
implies that the system is inconsistent. For consistency the ranks should be equal. This
procedure will be longer in this particular case.

2
 
1 1
Question 2(c) Find the characteristic polynomial of the matrix A = . Hence find
−1 3
A−1 and A6 .
x − 1 −1
Solution. The characteristic polynomial of A is given by |xI − A| = =
1 x−3
(x − 1)(x − 3) + 1 = x2 − 4x + 4.
2
The Cayley-Hamilton theorem states that A satisfies its characteristic equation −
 i.e. A
−3 1
4A + 4I = 0 ⇒ (A − 4I)A = A(A − 4I) = −4I. Thus A−1 = − A−4I = − 14 =
3
4 −1 −1
− 41

4
1 1
4 4
From A2 − 4A + 4I = 0 we get

A2 = 4A − 4I
A3 = 4A2 − 4A = 4(4A − 4I) − 4A = 12A − 16I
A6 = (12A − 16I)2 = 144A2 − 384A + 256I = 144(4A − 4I) − 384A + 256I
 
−128 192
= 192A − 320I =
−192 256

Question 2(d) Define a positive definite quadratic form. Reduce the quadratic form x21 +
x23 + 2x1 x2 + 2x2 x3 to canonical form. Is this quadratic form positive definite?

Solution. If Q(x1 , . . . , xn ) = ni=1 aij xi xj , aij = aji is a quadratic form in n variables with
P
j=1
aij ∈ R, thenPit is said to be positive definite if Q(α1 , . . . , αn ) > 0 whenever αi ∈ R, i =
1, . . . , n and i αi2 > 0.
Let the qiven be Q(x1 , x2 , x3 ). Then

Q(x1 , x2 , x3 ) = x21 + x23 + 2x1 x2 + 2x2 x3


= (x1 + x2 )2 + x23 + 2x2 x3 − x22
= (x1 + x2 )2 + (x2 + x3 )2 − 2x22

Let X1 = x1 + x2 , X2 = x2 , X3 = x2 + x3 i.e.
    
x1 1 −1 0 X1
x2  = 0 1 0 X2 
x3 0 −1 1 X3

then Q(x1 , x2 , x3 ) is transformed to X12 − 2X22 + X32 . Since Q(x1 , x2 , x3 ) and the transformed
quadratic form assume the same values, Q(x1 , x2 , x3 ) is an√indefinite form. The canonical
form of Q(x1 , x2 , x3 ) is Z12 − Z22 + Z32 where Z1 = X1 , Z2 = 2X2 , Z3 = X3 .

3
UPSC Civil Services Main 2005 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

September 13, 2007

Question 1(a) Find the values of k for which the vectors (1, 1, 1, 1), (1, 3, −2, k), (2, 2k −
2, −k − 2, 3k − 1) and (3, k − 2, −3, 2k + 1) are linearly independent in R4 .

Solution. The given vectors are linearly independent if the matrix


 
1 1 1 1
1
 3 −2 k  
2 2k − 2 −k − 2 3k − 1
3 k+2 −3 2k + 1

is non-singular.
Now
1 1 1 1 1 0 0 0
2 −3 k−1
1 3 −2 k 1 2 −3 k−1
= = 2k − 4 −k − 4 3k − 3
2 2k − 2 −k − 2 3k − 1 2 2k − 4 −k − 4 3k − 3
k−1 −6 2k − 2
3 k+2 −3 2k + 1 3 k−1 −6 2k − 2

2 −3 k−1
= 2k − 4 −k − 4 3k − 3 = (k − 5)[−9k + 9 + (k − 1)(k + 4)] 6= 0
k−5 0 0

Clearly (k − 5)[−9k + 9 + (k − 1)(k + 4)] = 0 ⇔ k = 1, 5. Thus the vectors are linearly


independent when k 6= 1, 5.

1
Question 1(b) Let V be the vector space of polynomials in x of degree ≤ n over R. Prove
that the set {1, x, x2 , . . . , xn } is a basis for V. Extend this so that it becomes a basis for the
set of all polynomials in x.

Solution. {1, x, x2 , . . . , xn } are linearly independent over R — If a0 + a1 x + . . . + an xn = 0


where ai ∈ R, 0 ≤ i ≤ n, then we must have ai = 0 for every i because the non-zero
polynomial a0 +a1 x+. . .+an xn can have at most n roots in R whereas a0 +a1 x+. . .+an xn = 0
for every x ∈ R.
Every polynomial in x of degree ≤ n is clearly a linear combination of 1, x, x2 , . . . , xn

S
with coefficients from R. Thus {1, x, x2 , . . . , xn } is a basis for V.
We shall show that = {1, x, x2 , . . . , xn , xn+1 , . . .} is a basis for the space of all polyno-
mials.
S
(i) Linear Independence: Let {xi1 , . . . , xir } be a finite subset of . Let n = max{i1 , . . . , ir },

S
then {xi1 , . . . , xir } being a subset of the linearly independent set {1, x, x2 , . . . , xn } is linearly
independent, which shows the linear independence of .

S S
(ii) Let f be any polynomial. If degree of f is m, then f is a linear combination of
{1, x, x2 , . . . , xm }, which is a subset of . Thus is a basis of W, the space of all
polynomials over R.
3
Question 2(a) Let T bea linear transformation on R whose matrix relative to the standard
3
2 1 −1
basis of R is 1 2 2 . Find the matrix of T relative to the basis

3 3 4
B
= {(1, 1, 1), (1, 1, 0), (0, 1, 1)}.

Solution.  Let the vectors


  of the given
 basis be v1 , v2 , v3 . (T(v1 ), T(v2 ), T(v3 )) =
2 1 −1 1 1 0 2 3 0
1 2 2  1 1 1 =  5 3 4.
3 3 4 1 0 1 10 6 7
If (a, b, c) = αv1 + βv2 + γv3 , then α + β = a, α + β + γ = b, α + γ = c therefore
α = a − b + c, β = b − c, γ = b − a. Consequently
T(v1 ) = 7v1 − 5v2 + 3v3
T(v1 ) = 6v1 − 3v2 + 0v3
T(v1 ) = 3v1 − 3v2 + 4v3

B
 
7 6 3
This shows that the matrix of T with respect to given basis is −5 −3 −3
3 0 4

Question 2(b) If S is a skew-Hermitian matrix, then show that A = (I + S)(I − S)−1 is a


unitary matrix. Show that a unitary matrix A can be expressed in the above form provided
−1 is not an eigenvalue of A.

Solution. See related results of question 2(a) year 1989.

2
Question 2(c) Reduce the quadratic form

6x21 + 3x22 + 3x23 − 4x1 x2 − 4x2 x3 + 4x1 x3

to a sum of squares. Also find the corresponding linear transformation, index and signature.

Solution.

Q(x1 , x2 , x3 ) = 6x21 + 3x22 + 3x23 − 4x1 x2 − 4x2 x3 + 4x1 x3


2 2 1 1 2
= 6[x21 − x1 x2 + x1 x3 + x22 + x23 − x2 x3 ]
3 3 9 9 9
7 2 7 2 8
+ x2 + x3 − x2 x3
3 3 3
1 1 2 7 4 33
= 6[x1 − x2 + x3 ] + [x2 − x3 ]2 + x23
3 3 3 7 21
Put X1 = x1 − 13 x2 + 31 x3 , X2 = x2 − 47 x3 , X3 = x3 , so that
   1
1 3 − 17
 
x1 X1
x2  = 0 1 4  X2  (1)
7
x3 0 0 1 X3

and Q(x1 , x2 , x3 ) is transformed to 6X12 + 37 X22 + 33 2


X .
21 3
Let

√1 Z1
 
 
X1 q 6
 3 
X2  =  7 Z2 
 q 
X3 7
Z
11 3

then Q(x1 , x2 , x3 ) is transformed to Z12 + Z22 + Z32 , which is its canonical form. Thus
Q(x1 , x2 , x3 ) is positive definite. The Index of Q(x1 , x2 , x3 ) = Number of positive squares in
its canonical form = 3. The signature of Q(x1 , x2 , x3 ) = Number of positive squares - the
number of negative squares in its canonical form = 3.
The required linear transformation which transforms Q(x1 , x2 , x3 ) to sums of squares is
given by (1), and the linear transformation which transforms it to its canonical form is given
by
 √1
 
   1 1 0 0  
x1 1 3 −7  6 q  Z1
3
 x2  =  0 1 4   0 7
0  Z2 
7  q 
x3 0 0 1 0 0 7 Z3
11

3
UPSC Civil Services Main 2006 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh

December 16, 2007

Question 1(a) Let V be a vector space of all 2 × 2 matrices over the field F. Prove that V
has dimension 4 by exhibiting a basis for V.
   
Solution. Let M1 = 10 00 , M2 = 00 10 , M3 = 01 00 , M4 = 00 01 . We will show that
{M1 , M2 , M3 , M4 } is a basis of V over F. 
{M1 , M2 , M3 , M4 } generate V. Let A = ac db ∈ V. Then A = aM1 + bM2 + cM3 +
dM4 , where a, b, c, d ∈F. Thus {M1 , M2 , M3 , M4 } is a set of generators for V over F.
{M1 , M2 , M3 , M4 } are linearly independent over F. If aM1 + bM2 + cM3 + dM4 =

a b = 0 for a, b, c, d ∈F, then clearly a = b = c = d = 0, showing that {M , M , M , M }
c d 1 2 3 4
are linearly independent over F.
Hence {M1 , M2 , M3 , M4 } is a basis of V over F and dim V = 4.

1 3

Question 1(b) State the Cayley-Hamilton theorem and using it find the inverse of 2 4 .

Solution. Let A be an n × n matrix and let In be the n × n identity matrix. Then the n-
degree polynomial |xIn −A| is called the charateristic polynomial of A. The Cayley-Hamilton
theorem states that every matrix is a root of its characteristic polynomial:

if |xIn − A| = xn + a1 xn−1 + . . . + an
then An + a1 An−1 + . . . + an In = 0

|xIn − A| = 0 is called the characteristic equation of A.


−3
Let A = 12 34 . The characteristic equation of A is 0 = x−1−2 x−4 = (x − 1)(x − 4) − 6 =
x2 − 5x − 2.
By the Cayley-Hamilton Theorem, A2 −5A−2I2 = 0 ⇒ A(A−5I2 ) = (A−5I2 )A = 2I2 .
−2 3 
Thus A is invertible and A−1 = 21 (A − 5I2 ), so A−1 = 21 12 34 − 50 05 = 1 −21
  
2

1
B
Question 2(a) If T : R2 −→ R2 is defined by T(x, y) = (2x − 3y, x + y), compute the
matrix of T with respect to the basis = {(1, 2), (2, 3)}.

Solution. It is obvious that T : R2 −→ R2 is a linear transformation. Clearly

T(v1 ) = T(1, 2) = (−4, 3)


T(v2 ) = T(2, 3) = (−5, 5)

Let (a, b) = αv1 +βv2 , where a, b, α, β ∈ R, then α+2β = a, 2α+3β = b ⇒ α = 2b−3a, β =


2a − b. Thus = 18v1 − 11v2 , T(v2 ) = 25v1 − 15v1 , so (v1 , v2 )T = (T(v
T(v1 )   1 ), T(v2 ))=
B

18 25 18 25
(v1 , v2 ) . Thus the matrix of T with respect to the basis is
−11 −15 −11 −15
 
3 −2 0 −1
0 2 2 1
Question 2(b) Using elementary row operations, find the rank of A =  
1 −2 −3 −2
0 1 2 1

Solution. Operations R1 − 2R3 , R2 − R4 give


 
1 2 6 3
0 1 0 0
A∼ 1 −2 −3 −2

0 1 2 1

Operation R3 − R1 gives  
1 2 6 3
0 1 0 0
A∼ 
0 −4 −9 −5
0 1 2 1
Operations R3 + 4R2 , R4 − R2 ⇒
 
1 2 6 3
0 1 0 0
A∼ 
0 0 −9 −5
0 0 2 1

R4 + 92 R3 ⇒  
1 2 6 3
0 1 0 0 
A∼ 
0 0 −9 −5 
0 0 0 − 91
Clearly |A| = 1 ⇒ rank A = 4.

2
Question 2(c) Investigate for what values of λ and µ the equations

x+y+z = 6
x + 2y + 3z = 10
x + 2y + λz = µ

have (1) no solution (2) a unique solution (3) infinitely many solutions.

Solution.
 (2) The  equations will have a unique solution for all values of µ if the coefficient
1 1 1 1 1 1 1 0 0
matrix 1 2 3 is non-singular. i.e. 1 2 3 = 1 1
  2 = λ − 1 − 2 6= 0 i.e. λ 6= 3.
1 2 λ 1 2 λ 1 1 λ−1
Thus for λ 6= 3 and for all µ we have a unique solution which can be obtained by Cramer’s
rule or otherwise.
(1) If λ = 3, µ 6= 10 then the system is inconsistent and we have no solution.
(3) If λ = 3, µ = 10, the system will have infinitely many solutions obtained by solving
x + y = 6 − z, x + 2y = 10 − 3z ⇒ x = 2 + z, y = 4 − 2z, z is any real number.

Question 2(d) Find the quadratic form q(x, y) corresponding to the symmetric matrix
 
5 −3
A=
−3 8

Is this quadratic form positive definite? Justify your answer.

Solution. The quadratic form is


  
 5 −3 x
q(x, y) = x y
−3 8 y
= 5x2 − 6xy + 8y 2
6 8
= 5[x2 − xy + y 2 ]
5 5
3 2 31 2
= 5[(x − y) + y ]
5 25
6 (0, 0), (x, y) ∈ R2 . Thus q(x, y) is positive definite. In
Clearly q(x, y) > 0 for all (x, y) =
3
fact, q(x, y) = 0 ⇒ x − 5 y = 0, y = 0 ⇒ x = y = 0.

You might also like