Linear Algebra (IAS) 1979-2006 Solved
Linear Algebra (IAS) 1979-2006 Solved
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(b) Reduce the quadratic expression x2 + 2y 2 + 2z 2 + 2xy + 2xz to the canonical
form.
Question 2(a) Find the elements p, q, r such that the product BA of the matrices
1 2 1 1 0 0
A = 4 1 2 , B = p 1 0
−10 2 4 q r 1
is of the form
a1 b 1 c 1
BA = 0 b2 c2
0 0 c3
Hence solve the set of equations Ax = y, where x is the column vector (x1 , x2 , x3 ), and y is
the column vector (0, 8, −4).
1
Solution.
1 2 1 a1 b 1 c 1
BA = p + 4 2p + 1 p + 2 = 0 b2 c 2
q + 4r − 10 2q + r + 2 q + 2r + 4 0 0 c3
Thus p + 4 = 0, q + 4r − 10 = 0, 2q + r + 2 = 0 ⇒ p = −4, r = 22
7
, q = − 18
7
.
Now solving Ax = y is the same as solving BAx = By because |B| = 6 0.
1 2 1 x1 1 0 0 0 0
0 −7 −2 x2 = −4 1 0 8 = 8
54
0 0 7
x3 − 18
7
22
7
1 −4 148
7
Thus 54 x = 148
7 3 7
⇒ x3 = 27 74
. −7x2 − 2x3 = 8 ⇒ −7x2 = 2x3 + 8 = 364
27
, so x2 = − 52
27
.
104 74
x1 + 2x2 + x3 = 0 ⇒ x1 = 27 − 27 = 10
9
.
10 52 74
Thus x1 = 9 , x2 = − 27 , x3 = 27 is the required solution.
Question 3(a) If S and T are subspaces of a finite dimensional vector space, then show
that
dim(S + T ) = dim S + dim T − dim(S ∩ T )
Question 3(b) Determine the value of a for which the following system of equations:
x1 + x2 + x3 = 2
x1 + 2x2 + x3 = −2
x1 + x2 + (a − 5)x3 = a
1 1 1
Solution. 1 2 3 = 2a − 10 − 3 − a + 5 + 3 − 1 = a − 6
1 1 a−5
2
Paper II
Question 4(a) Prove that any two finite dimensional vector spaces of the same dimension
are isomorphic.
Solution. See 1987 question 4(b).
Question 4(b) Define the dual space of a finite dimensional vector space V and show that
it has the same dimension as V.
Solution. Let V ∗ = {f : V −→ R, f a linear transformation}. Then V ∗ is a vector space
for the usual pointwise addition and scalar multiplication of functions: for all v ∈ V and all
α ∈ R, (f + g)(v) = f (v) + g(v), (αf )(v) = αf (v).
Let P a basis for V. Define n linear functionals v1∗ , . . . , vn∗ by vi∗ (vj ) = δij ,
v1 , . . . , vn be P
and vi∗ ( j=1 αj vj ) = nj=1 αj vi∗ (vj ) = αi .
n
Then v1∗ , . . . , vn∗ are linearly independent — ni=1 αi vi∗ = 0 ⇒ ( ni=1 αi vi∗ )(vj ) = αj =
P P
0, 1 ≤ j ≤ n. Pn Pn
∗ ∗ ∗ ∗ ∗ ∗
Pn v1 , . . . , v n generate V — if f ∈ V , then f = i=1 f (v i )vi . Clearly ( i=1 f (vi )vi )(vj ) =
∗
i=1 f (vi )vi (vj ) = f (vj ), so the two sides agree on v1 , . . . , vn , and hence by linearity on all
of V.
Thus v1∗ , . . . , vn∗ is a basis of V ∗ , so dim V ∗ = dim V . V ∗ is called the dual of V.
Question 4(c) Show that every finite dimensional inner product space V over the field of
complex numbers has an orthonormal basis.
Solution. Let w1 , . . . , wn be a basis of V. We will convert it into an orthonormal basis of
V by the Gram-Schmidt orthonormalization process.
Starting with i = 1, define
i−1
X hwi , vj i
v i = wi − vj
j=1
||vj ||2
Question 5(a) Define the rank and nullity of a linear transformation. If V and W are
finite dimensional vector spaces over a field, and T is a linear transformation of V into W,
prove that
rank T + nullity T = dim V
Solution. See 1998 question 3(a).
3
Question 5(b) Define a positive definite form. State and prove a necessary and sufficient
condition for a quadratic form to be positive definite.
Solution.
4
UPSC Civil Services Main 1980 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) Define the rank of a matrix. Prove that a system of equations Ax = b is
consistent if and only if rank (A, b) = rank A, where (A, b) is the augmented matrix of the
system.
Solution. The Cayley Hamilton theorem is — Every matrix A satisfies its characteristic
x − 2 −1
equation |xI − A| = 0. In the current problem, |xI − A| = = x2 − 4x + 4 − 1 =
−1 x − 2
2 2 2 2 1 2 1
x − 4x + 3. Thus we need to show that A − 4A + 3I = 0. Now A = =
1 2 1 2
5 4 2 5 4 2 1 1 0 0 0
, so A − 4A + 3I = −4 +3 = , verifying the Cayley
4 5 4 5 1 2 0 1 0 0
Hamilton Theorem.
2 1 1 0
A2 − 4A + 3I = 0 ⇒ A(A − 4I) = −3I ⇒ A−1 = − 3 (A − 4I) = − 3 (
1 1
−4 )=
2 1 2 0 1
− 13
3 .
− 13 23
Question 2(a) Prove that if P is any non-singular matrix of order n, then the matrices
P−1 AP and A have the same characteristic polynomial.
Solution. The characteristic polynomial of P−1 AP is |xI−P−1 AP| = |xP−1 P−P−1 AP| =
|P−1 ||xI − A||P| = |xI − A| which is the characteristic polynomial of A.
1
3 4
Question 2(b) Find the eigenvalues and eigenvectors of the matrix A = .
4 −3
3 4 3−λ 4
Solution. The characteristic equation of A = is = 0 ⇒ −(9 −
4 −3 4 −3 − λ
λ2 ) − 16 = 0 ⇒ λ2 − 25 = 0 ⇒ λ = 5, −5.
−2 4 x1
If (x1 , x2 ) is an eigenvector for λ = 5, then = 0 ⇒ 2x1 − 4x2 = 0 ⇒
4 −8 x2
x1 = 2x2 . Thus (2x, x), x ∈ R, x 6= 0 gives all eigenvectors for λ = 5, in particular, we can
take (2, 1) as an eigenvector for λ = 5.
8 4 x1
If (x1 , x2 ) is an eigenvector for λ = −5, then = 0 ⇒ 4x1 + 2x2 = 0 ⇒ x2 =
4 2 x2
−2x1 . Thus (x, −2x), x ∈ R, x 6= 0 gives all eigenvectors for λ = −5, in particular, we can
take (1, −2) as an eigenvector for λ = −5.
Question 3(a) Find a basis for the vector space V = {p(x) | p(x) = a0 + a1 x + a2 x2 } and
its dimension.
Solution. Let f1 = 1, f2 = x, f3 = x2 , then f1 , f2 , f3 are linearly independent, because
α1 f1 + α2 f2 + α3 f3 = 0 ⇒ α1 + α2 x + α3 x2 = 0(zero polynomial) ⇒ α1 = α2 = α3 = 0.
f1 , f2 , f3 generate V because p(x) = a0 + a1 x + a2 x2 = a0 f1 + a1 f2 + a2 f3 for any p(x) ∈ V.
Thus {f1 , f2 , f3 } is a basis for V and its dimension is 3.
Question 3(b) Find the values of the parameter λ for which the system of equations
x + y + 4z = 1
x + 2y − 2z = 1
λx + y + z = 1
will have (i) unique solution (ii) no solution.
−1
1 1 4 1
Solution. The system will have the unique solution given by 1 2 −2
1 if
λ 1 1 1
1 1 4
7
1 2 −2 = 1(2 + 2) + 4(1 − 2λ) − 1(1 + 2λ) 6= 0. Thus 4 + 4 − 8λ − 1 − 2λ 6= 0 ⇒ λ 6= 10 .
λ 1 1
7
When λ = 10 , the system is
x + y + 4z = 1
x + 2y − 2z = 1
7x + 10y + 10z = 10
This system has no solution as it is inconsistent: 4(x+y+4z)+3(x+2y−2z) = 7x+10y+10z =
7, but the third equation says that 7x + 10y + 10z = 10. Thus there is a unique solution if
7 7
λ 6= 10 , and no solution if λ = 10 .
2
Paper II
Question 3(d) Find one characteristic value and corresponding characteristic vector for
the operators T on R3 defined as
Solution.
1. T (x, y, z) = (z, y, x) because the midpoint of (x, y, z) and (z, y, x) lies on the plane
x = z. T (1, 0, 0) = (0, 0, 1), T (0, 1, 0) = (0, 1, 0), T (0, 0, 1) = (1, 0, 0). Thus it is clear
that 1 is an eigenvalue, and (0, 1, 0) is a corresponding eigenvector.
2. T (1, 0, 0) = (1, 0, 0), T (0, 1, 0) = (0, 1, 0), T (0, 0, 1) = (0, 0, 0). Clearly 1 is an eigen-
value with (1, 0, 0) or (0, 1, 0) as eigenvectors.
3. T (1, 0, 0) = (3, 0, 0), T (0, 1, 0) = (1, 2, 0), T (0, 0, 1) = (1, 1, 1). Clearly (1, 0, 0) is an
eigenvector, corresponding to the eigenvalue 3.
3
UPSC Civil Services Main 1981 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question
1(a) State and prove the Cayley Hamilton theorem and verify it for the matrix
2 3
A= . Use the result to determine A−1 .
3 5
Solution. See 1987 question 5(a) for the Cayley Hamilton theorem.
x − 2 −3
The characteristic equation of A is = 0, or (x − 2)(x − 5) − 9 = 0 ⇒
−3 x − 5
x2 − 7x +1 = 0. The
Cayley
Hamilton
2
theorem implies that A − 7A + I = 0.
2 3 2 3 13 21
A2 = = .
3 5 3 5 21 34
2 13 21 2 3 1 0 0 0
Now A − 7A + I = −7 + =
21 34 3 5 0 1 0 0
2 −1
So thetheorem
is verified.
A − 7A + I = 0 ⇒ (A − 7I)A = −I ⇒ A = 7I − A. Thus
1 0 2 3 5 −3
A−1 = 7 − = .
0 1 3 5 −3 2
By using an orthogonal change of variables reduce Q to a form without the cross terms i.e.
with terms of the form aij xi xj , i 6= j.
5 4 2
Solution. The matrix of the qiven quadratic form Q is A = 4 5 2.
2 2 2
1
The characteristic polynomial of A is
5−λ 4 2
4 5−λ 2 =0
2 2 2−λ
⇒ (5 − λ)(5 − λ)(2 − λ) − 4(5 − λ) − 4(8 − 4λ) + 16 + 16 − 4(5 − λ) = 0
⇒ (λ2 − 10λ + 25)(2 − λ) − 20 + 4λ − 32 + 16λ + 12 + 4λ = 0
⇒ −λ3 + 12λ2 + λ(−25 + 4 + 16 + 4 − 20) + 50 − 20 − 32 + 12 = 0
⇒ λ3 − 12λ2 + 21λ − 10 = 0
Thus the eigenvalues are λ = 1, 1, 10. Let (x1 , x2 , x3 ) be an eigenvector for λ = 10, then
−5 4 2 x1 −5x1 + 4x2 + 2x3 = 0 (i)
4 −5 2 x2 = 0 ⇒ 4x1 − 5x2 + 2x3 = 0 (ii)
2 2 −8 x3 2x1 + 2x2 − 8x3 = 0 (iii)
Subtracting (ii) from (i), we get −9x1 + 9x2 = 0 ⇒ x1 = x2 ⇒ x1 = 2x3 . Thus taking
x3 = 1, we get (2, 2, 1) as an eigenvector for λ = 10.
Let (x1 , x2 , x3 ) be an eigenvector for λ = 1, then
4 4 2 x1
4 4 2 x2 = 0 ⇒ 4x1 + 4x2 + 2x3 = 0
2x1 + 2x2 + x3 = 0
2 2 1 x3
2
could do it easily by completing squares:
Question 2(a) Define a vector space. Show that the set V of all real-valued functions on
[0, 1] is a vector space over the set of real numbers with respect to the addition and scalar
multiplication of functions.
Question 2(b) If zero is a root of the characteristic equation of a matrix A, show that the
corresponding linear transformation cannot be one to one.
3. If ni=1 αi T(vi ) = 0, then h ni=1 αi T(vi ), vj i = αj = 0 for all j, so T(vi ) are linearly
P P
independent.
3
Thus T(v1 ), . . . , T(vn ) form an orthonormal basis. Pn
Pn Conversely, let T(v 1 ), . .
Pn. , T(v n ) be an orthonormal basisPn of V. Let v =
Pn i=1 αi vi , w =
Pi=1 βi vi , then hv, wi = i=1 αi βi and hT(v), T(w)i = h i=1 αi T(vi ), i=1 βi T(vi )i =
n
i=1 αi βi . Thus hT(v), T(w)i = hv, wi , so T is orthogonal.
Lemma 2. Let T∗ be defined by hT(v), wi = hv, T∗ (w)i . Then T∗ is a linear trans-
formation, and T is orthogonal iff T∗ T = TT∗ = I.
Proof: The fact that T∗ is a linear transformation can be easily checked. If T is
orthogonal, then hv, T∗ T(w)i = hT(v), T(w)i = hv, wi , so T∗ T = I. From this and the
fact that T is 1-1, it follows that TT∗ = I.
Lemma 3. If the matrix of T w.r.t. the orthonormal basis {v1 , v2 , . . . , vn } is A = (aij ),
then the matrix of T∗ P is the transpose, i.e. (aji ). P
n ∗ n ∗
Proof: T(vi )P= j=1 aij vj . Let T (vi ) = j=1 bij vj . Now bij = hT (vi ), vj i =
hvi , T(vj )i = hvi , nk=1 ajk vk i = aji . Since TT∗ = I, A0 A = AA0 = I, so A is orthogonal.
The converse is also obvious now.
Question 3(a) Investigate for what values of λ and µ does the following system of equations
x+y+z = 6
x + 2y + 3z = 10
x + 2y + λz = µ
have (1) a unique solution (2) no solution (3) an infinite number of solutions?
Solution.
1 1 1
1. A unique solution exists when 1 2 3 6= 0, whatever µ may be. Thus 2λ−6−(λ−3) 6=
1 2 λ
0 ⇒ λ
6
= 3. Thus for
−1 all λ 6 = 3 and for all µ we have a unique solution given by
x 1 1 1 6
y = 1 2 3 10
z 1 2 λ µ
2. A unique solution does not exist if λ = 3. If µ 6= 10, then the second and third
equations are inconsistent. Thus if λ = 3, µ 6= 10, the system has no solution.
3. If λ = 3, µ = 10, then the system is x+y+z = 6, x+2y+3z = 10. The coefficient matrix
is of rank 2, so the space of solutions is one dimensional. y + 2z = 4 ⇒ y = 4 − 2z,
and thus x = 2 + z. The space of solutions is (2 + z, 4 − 2z, z) for z ∈ R.
4
Question 3(b) Let (xi , yi ), i = 1, . . . , n be n points in the plane, no two of them having the
same abscissa. Find a polynomial f (x) of degree n − 1 which takes the value f (xi ) = yi , 1 ≤
i ≤ n.
Paper II
3 0 √0
Question 4(a) Find a set of three orthonormal eigenvectors for the matrix A = 0 √4 3
0 3 6
3−λ 0 √0
|A − λI = 0 −λ
4√ 3 =0
0 3 6−λ
5
Let (x1 , x2 , x3 ) be an eigenvector for λ = 3. Then
0 0 √0 x1
0 1 3 x2 = 0
√
0 3 3 x3
√ √
Thus x2 + 3x3 = 0. Thus (x1 , − 3x3 , x3 ) with x1 , x3 ∈ R gives any √ eigenvector for λ = 3.
We can take x1 = 1, x3 = 0, and x1 = 0, x3 = 1 to get (1, 0, 0), (0, − 3, 1) as eigenvectors for
λ = 3 — these are orthogonal √ and therefore span the the eigenspace of λ = 3. Orthonormal
3 1
vectors are (1, 0, 0), (0, − 2 , 2 ). √ √
Thus the required orthonormal vectors are (0, 12 , 23 ), (1, 0, 0), (0, − 23 , 21 ).
In fact
√
1 3 0 0√ 1
0 2√ 2
3 0 √0 7 0 0
0 − 3 1 0 4
√ 3 √12 − 23 0 = 0 3 0
2 2
0 3 6 3 1 0 0 3
1 0 0 2 2
0
Question 4(b) Show that if A = X0 AX and B = X0 BX are two quadratic forms one of
which is positive definite and A, B are symmetric matrices, then they can be expressed as
linear combinations of squares by an appropriate linear transformation.
Solution. Let B be positive definite. Then there exists an orthogonal real non-singular
matrix H such that H0 BH = In , the unit matrix of order n. A is real-symmetric ⇒ H0 AH is
real symmetric. There exists
K a real orthogonal
matrix such that K0 H0 AHK is a diagonal
λ1 0 . . . 0
0 λ2 . . . 0
matrix i.e. K0 H0 AHK = .. 0
.. where λ1 , . . . , λn are the eigenvalues of H AH.
..
. . .
0 0 . . . λn
x1 X1
0 0 0 .. ..
Now K H BHK = K In K = In . Then . = HK . diagonalizes A, B simulta-
xn Xn
neously.
x1 x1
. . . xn A ... = λ1 X12 + . . . + λn Xn2 . . . xn B ... = X12 + . . . + Xn2
x1 x1
xn xn
6
UPSC Civil Services Main 1982 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) Let V be a vector space. If dim V = n with n > 0, prove that
Solution. From 1983 question 1(a) we get that any two bases of V have n elements.
2. V cannot be generated by fewer than n vectors, because then it will have a basis
consisting of less than n elements, which contradicts the fact that dim V = n.
Question 1(b) Define a linear transformation. Prove that both the range and the kernel of
a linear transformation are vector spaces.
1
Range of T = T(V), kernel of T = {v | T(v) = 0}. If w1 , w2 ∈ T(V), then w1 =
T(v1 ), w2 = T(v2 ) for some v1 , v2 ∈ V, αw1 + βw2 = αT(v1 ) + βT(v2 ) = T(αv1 + βv2 ).
But αv1 + βv2 ∈ V ∴ αw1 + βw2 ∈ T(V), thus T(V) is a subspace of W. Note that
T(V) 6= ∅ ∵ 0 ∈ T(V) so T(V) is a vector space.
If v1 , v2 ∈ kernel T then T(v1 ) = 0, T(v2 ) = 0. Now T(αv1 + βv2 ) = αT(v1 ) +
βT(v2 ) = 0 ⇒ αv1 + βv2 ∈ kernel T. Thus kernel T is a subspace. kernel T 6= ∅, bf 0 ∈
kernel T so kernel T is a vector space.
2
Question 2(c) Show that the system of equations
3x + y − 5z = −1
x − 2y + z = −5
x + 5y − 7z = 2
is inconsistent.
Solution. From the first two equations, (3x + y − 5z) − 2(x − 2y + z) = −1 − 2(−5) = 9 ⇒
x + 5y − 7z = 9. But this is inconsistent with the third equation, hence the overall system
in inconsistent.
Question 3(a) Prove that the trace of a matrix is equal to the sum of its characteristic
roots.
Question 3(b) If A, B are two non-singular matrices of the same order, prove that AB
and BA have the same eigenvalues.
Thus x1 (cos θ−1)+x2 sin θ = 0, x1 sin θ+x2 (− cos θ−1) = 0. We can take x1 = 1+cos θ, x2 =
sin θ.
Similarly if (x1 , x2 ) is an eigenvector for λ = −1, then
cos θ + 1 sin θ x1
=0
sin θ − cos θ + 1 x2
Thus x1 (cos θ+1)+x2 sin θ = 0, x1 sin θ+x2 (− cos θ+1) = 0. We can take x1 = 1−cos θ, x2 =
− sin θ as an eigenvector.
3
Paper II
Question 5(a) State and prove the Cayley-Hamilton Theorem when the eigenvalues are all
different.
Question 5(b) When are two real symmetric matrices said to be congruent? Is congruence
an equivalence relation? Also show how you can find the signature of A.
Solution. Two matrices A, B are said to be congruent to each other if there exists a
nonsingular matrix P such that P0 AP = B.
Congruence is an equivalence relation:
• Reflexive: A ≡ A ∵ A = I0 AI, I is the unit matrix.
• Transitive: A ≡ B, B ≡ C ⇒ A ≡ C — P0 AP = B, Q0 BQ = C ⇒ Q0 P0 APQ =
C ⇒ A ≡ C because PQ is nonsingular as both P, Q are nonsingular.
Given a symmetric matrix A, we first prove that there exists a nonsingular matrix P
such that P0 AP = diagonal[α1 , α2 , . . . , αr , 0, . . . , 0] where r is the rank of A.
We will prove this by induction on the order n of the matrix A. If n = 1, there is nothing
to prove. Assume that the result is true for all matrices of order < n.
Step 1. We first ensure that we have a11 6= 0. If it is 0, but some other akk 6= 0, we
interchange the k-th row with the first row and the k-th column with the first column, to
get B = P0 AP, where b11 = akk 6= 0. Note that P is the elementary matrix E1k (see 1983
question 2(a)), and is hence nonsingular and symmetric, so B is symmetric.
If all aii are 0, but some aij 6= 0. We add the j-th row to the i-th row and the j-the column
to the i-th column by multiplying A by Eij (1) and its transpose, to get B = Eij (1)AEij (1)0
4
— now bii = aij + aji 6= 0. B is still symmetric, and we can shift bii to the leading place as
above.
(Note that if all aij = 0, we stop.)
Thus we start with a11 6= 0. We subtract aa1k 11
times the first row from the k-th row and
a1k
a11
times the first column from the k-th column, by performing B = Ek1 (− aa1k11
)AEk1 (− aa1k
11
)0
a11 0
Repeating this for all k, 2 ≤ k ≤ n, we get P01 AP1 = , where A1 is n − 1 ×
0 A1
0
n − 1 and P1 is nonsingular. Now by induction, ∃P2 , n − 1 × n − 1 such that P2 AP2 =
1 0
diagonal[β2 , . . . , βr , 0, . . . , 0], rank A1 = rank A − 1. Now set P = P1 to get the
0 P2
result.
Now that we have P0 AP = diagonal[α1 , α2 , . . . , αr , 0, . . . , 0], let us assume that α1 , . . . , αs
are positive, the rest are negative. Then let αi = βi2 , 1 ≤ i ≤ s, −αj = βj2 , s + 1 ≤ j ≤ r. Set
Q = diagonal[β1−1 , . . . , βr−1 , 1, . . . , 1]. Then x0 Q0 P0 APQx = x21 + . . . + x2s − x2s+1 − x2r . Thus
we can find the signature of A by looking at the number of positive and negative squares of
the RHS.
Question
P 5(c)P Derive a set of necessary and sufficient conditions that the real quadratic
form 3j=1 3i=1 aij xi xj be positive definite.
Is 4x2 + 9y 2 + 2z 2 + 8yz + 6zx + 6xy positive definite?
So set X = 2x + 32 y + 23 z, Y = y − 277
z, Z = z, then Q(x, y, z) is transformed to X 2 + 27
4
Y2−
76
108
Z 2 . Hence Q(x, y, z) is is not positive definite.
5
UPSC Civil Services Main 1983 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) Let V be a finitely generated vector space. Show that V has a finite basis
and any two bases of V have the same number of vectors.
Solution. Let {v1 , . . . , vm } be a generating set for V, we assume that vi 6= 0, 1 ≤ i ≤ m.
If {v1 , . . . , vm } is linearly independent, then it is a basis of V. Otherwise, there exists a
vk that depends linearly on {vi | 1 ≤ i ≤ m, i 6= k}. This latter set is also a generating
set, and we rename it {u1 , . . . , um−1 }. We now apply the same reasoning to it — either it
is linearly independent and hence a basis, or we can drop an element from it and it still
remains a generating set. In a finite number of steps, we reach {x1 , . . . , xr } ⊆ {v1 , . . . , vm }
such that {x1 , . . . , xr } is linearly independent and a generating set, thus {x1 , . . . , xr } is a
basis of V.
Note: An alternative approach leading to the same result is to pick the maximal linearly
independent subset of {v1 , . . . , vm }. There are only 2m such subsets, so we can do so in a
finite number of steps (in the above procedure we dropped the dependent elements one at
a time to reach the maximal linearly independent subset). Now to be a basis, the maximal
linearly independent subset S = {x1 , . . . , xr } ⊆ {v1 , . . . , vm } needs to generate V. But this
is immediate, as for each vi , either vi ∈ S or S ∪ {vi } is linearly Pdependent — in that case
P r r
j=1 aj xj + bvi = 0, but not all aj , b are 0. Now if b = 0 then j=1 aj xj = 0 ⇒ aj = 0 for
1 ≤ j ≤ r, as S is linearly independent, and this contradicts the statement that not all aj , b
are 0. So b 6= 0, hence vi is a linear combination of S, hence S generates V and is a basis.
Any two bases have the same number of elements: Let {v1 , . . . , vm } and {w1 , . . . , wn }
be two bases of V. Assume wlogP that m ≤ n. Now since w1 ∈ V, w1 is generated by the
m
basis {v1 , . . . , vm }, thus w1 = j=1 aj vj . There must be at least one non-zero ak , as
w1 6= 0. NowP the set {vi | 1 ≤ i ≤ m, i 6= k} ∪ {w1 } generates the set {v1 , . . . , vm } (since
aj
vk = a1k w1 − m j=1,j6=k aP
k
vj ) and hence generates V.
Now we have w2 = m i=1,i6=k ai vi + bw1 . At least one of the ai 6= 0, otherwise we have a
linear equation between w1 and w2 , but these are linearly independent. We replace vi by w2 ,
1
and the result is also a generating set as above. Continuing, after m steps, we
P get a subset
{w1 , . . . , wm } which is a generating set. Now if n > m, we would have wn = m i=0 ai wi , but
this is not possible as the wi were a basis, and thus linearly independent. Hence n = m, and
the two bases have equal number of elements.
Question 1(b) Let V be the vector space of polynomials of degree ≤ 3. Determine whether
the following vectors of V are linearly dependent or independent: u = t3 − 3t2 + 5t + 1, v =
t3 − t2 + 8t + 2, w = 2t3 − 4t2 + 9t + 5.
a + b + 2c = 0 (1)
−3a − b − 4c = 0 (2)
5a + 8b + 9c = 0 (3)
a + 2b + 5c = 0 (4)
From (4) - (1) we get b + 3c = 0. Substituting b = −3c in (2), c = −3a ⇒ b = 9a. Now from
(1), a + 9a − 6a = 0 ⇒ a = 0 ⇒ b = c = 0. Thus au + bv + cw = 0 ⇒ a = b = c = 0, so the
vectors are linearly independent.
Question 2(a) Show that every non-singular matrix can be expressed as a product of ele-
mentary matrices.
1. Eij = the matrix obtained by interchanging the i-th and j-th rows (or the i-th and
j-th columns of the unit matrix. For example, if n = 4, then
1 0 0 0
0 0 1 0
E23 =
0 1 0 0
0 0 0 1
2
2. Ei (α) is the matrix obtained by multiplying the i-th row of the unit matrix by α =
the matrix obtained by multiplying the i-th column of the unit matrix by α.
3. Eij (β) = the matrix obtained by adding β times the j-th row to the i-th row of the
unit matrix.
4. (Eij (β))0 = transpose of Eij (β) = the matrix obtained by adding β times the j-th
column to the i-th column of the unit matrix.
All elementary matrices are non-singular. In fact |Eij | = −1, |Ei (α)| = α, |Eij (β)| =
|(Eij (β))0 | = 1.
We now prove the result.
(1) Let C = AB. Then any elementary row transformation
on AB is equivalent to sub-
R1
..
jecting A to the same row transformation. Let A = . and B = C1 . . . Cp n×p .
Rm m×n
R1 C1 . . . R1 Cp
.. ..
Then AB = . . . Thus if any elementary row transformation i.e. (i)
Rm C1 . . . Rm Cp m×p
Intercanging two rows (ii) Multiplying a row by a scalar (iii) Adding a scalar multiple of a
row to another row, is carried out on A, the same will be carried out on AB and vice versa.
Similarly any column transformation on B is equivalent to the same column transformation
on AB.
(2) Multiplying by an elementary matrix Eij , Ei (α), Eij (β) on the left is the same as per-
forming the corresponding elementary row operation on the matrix. Multiplying the matrix
by an elementary matrix to the right is equal to subjecting the matrix to the correspond-
ing column transformation. We write A = IA. Now interchanging the i-th and j-th row
of A is equivalent to doing the same on I in IA (result (1) above), which is the same as
Eij A. Similar results hold for the other two row transformations. Writing A as AI gives
the corresponding result for column transformations.
(3) We now prove that if A is a matrix
of rank r > 0, then there exist P, Q products of
Ir 0
elementary matrices such that PAQ = where Ir is the unit matrix of order r. Since
0 0
A 6= 0, A has at least one non-zero element, say aij . By interchanging the i-th row with
the first row and the j-th column with the first column, we get a new matrix Bij = (bij )
such that b11 6= 0. This simply means that there exist elementary matrices P1 , Q1 such
−1
that P1 AQ1 =
B. We multiply P1 AQ1 by P2 = E1 (b11 ) to obtain P2 P1 AQ1 = C =
1 ∗ ... ∗
∗
. Subtracting suitable multiples of the first row from the remaining rows of
..
. ∗
∗
C and suitable multiples of the first column from the remaining columns, we get the new
3
1 0 ... 0
0
matrix D of the form .. . Thus we have proved that there exist P∗ , Q∗ products
. ∗
A
0
1 0 ... 0
0
∗ ∗
of elementary matrices such that P AQ = .. . We carry on the same process
. ∗
A
0
∗ ∗∗ ∗∗ Ir 0
on A without affecting the first row and column, and in r steps we get P AQ = ,
0 E
where P∗∗ , Q∗∗ are products of elementary matrices. Note that E = 0 because rank A = r.
Now if A is nonsingular, then P∗∗ AQ∗∗ = I. Inverting the elementary matrices (the
inverse of an elementary matrix is elementary), we get that A is a product of elementary
matrices.
Question 2(b) Reduce the matrix A to its normal form, and hence or otherwise determine
its rank.
0 1 2 1
A = 1 2 3 2
3 1 1 3
1 2 3 2
Solution. Interchange of R1 and R2 ⇒ A ∼ 0 1 2 1
3 1 1 3
1 2 3 2
R3 − 3R1 ⇒ A ∼ 0 1 2 1
0 −5 −8 −3
1 2 3 2
R3 + 5R2 ⇒ A ∼ 0 1 2 1
0 0 2 2
1 2 3 2
− 21 R3 ⇒ A ∼ 0 1 2 1
0 0 −1 −1
1 2 3 2
R3 + R2 ⇒ A ∼ 0 1 2 1
0 1 1 0
1 2 3 2
Interchanging C2 , C4 , A ∼ 0 1 2 1
0 0 1 1
1 0 0 0
C2 − 2C1 , C3 − 3C1 , C4 − 2C1 ⇒ A ∼ 0 1 2 1
0 0 1 1
4
1 0 0 0 1 0 0 0
C3 − 2C2 , C4 − C2 ⇒ A ∼ 0 1 0 0. C4 − C3 ⇒ A ∼ 0 1 0 0
0 0 1 1 0 0 1 0
we have P(3× 3) and Q(4 × 4) both products of elementary matrices such that
Thus
1 0 0 0
PAQ = 0 1 0 0, which is the normal form of A. Clearly the rank of A is 3.
0 0 1 0
Question 3(a) Prove that a square matrix satisfies its characteristic equation. Use this
result to find the inverse of
0 1 2
A = 1 2 3
3 1 1
5
Solution. The first part is the Cayley-Hamilton theorem, see 1987 question 3(a).
The characteristic equation of A is
−λ 1 2
|A − λI| = 1 2 − λ 3 = 0
3 1 1−λ
⇒ −λ(λ2 − 3λ + 2 − 3) − (1 − λ − 9) + 2(1 − 6 + 3λ) = 0
⇒ λ3 − 3λ2 − 8λ + 2 = 0
Note: In this case, we were required to use this method to find the inverse. An alternate
method of finding the inverse by performing elementary row and column operations is shown
in 1985 question 1(c).
Question 3(b) Find the eigenvalues and eigenvectors of
8 −6 2
A = −6 7 −4
2 −4 3
Solution.
8 − x −6 2
|A − xI| = −6 7 − x −4 = 0
2 −4 3 − x
⇒ (8 − x)(x2 − 10x + 21 − 16) + 6(6x − 18 + 8) + 2(24 − 14 + 2x) = 0
⇒ −x3 + 18x2 − 85x + 40 + 36x − 60 + 20 + 4x = 0
⇒ x3 − 18x2 + 45x = 0
6
5 −6 2 x1
If (x1 , x2 , x3 ) is an eigenvector for the eigenvalue 3, then −6 4 −4
x2 = 0.
2 −4 0 x3
Thus 5x1 − 6x2 + 2x3 = 0, −6x1 + 4x2 − 4x3 = 0, 2x1 − 4x2 = 0 ⇒ x1 = 2x2 , x3 = −2x2 .
Thus (2, 1, −2) is an eigenvector for 3, in general (2x, x, −2x), x 6= 0 is an eigenvector
for 3.
−7 −6 2 x1
If (x1 , x2 , x3 ) is an eigenvector for the eigenvalue 15, then −6 −8 −4
x2 = 0.
2 −4 −12 x3
Thus −7x1 −6x2 +2x3 = 0, −6x1 −8x2 −4x3 = 0, 2x1 −4x2 −12x3 = 0 ⇒ x1 = 2x3 , x2 = −2x3 .
Thus (2, −2, 1) is an eigenvector for 15, in general (2x, −2x, x), x 6= 0 is an eigenvector for
15.
Question 3(c) Show that the eigenvalues of an upper or lower triangular matrix are just
the diagonal elements of the matrix.
Solution. Let A = (aij ), such that aij = 0 for i < j, i.e. A is upper triangular. Now
|xI − A| = (x − a11 )(x − a22 ) . . . (x − ann )
showing that |xI − A| = 0 ⇒ x = a11 , a22 , . . . , ann . Thus the eigenvalues of A are
a11 , a22 , . . . , ann .
Similarly for a lower triangular matrix.
Paper II
Question 4(a) Prove that a necessary and sufficient condition that a linear transformation
A on a unitary space is Hermitian is that hAx, xi is real for all x.
Solution. A unitary space is an old name for an inner product space. Let V be an inner
product space over C, and hAv, vi be real for all v ∈ V. Then since
hA(v + w), v + wi = hAv, vi + hAw, wi + hAv, wi + hAw, vi
hAv, wi +hAw, vi is real (because hA(v + w), v + wi −hAv, vi −hAw, wi is real). Hence
hAv, wi + hAw, vi = hw, Avi + hv, Awi (1)
because z real ⇒ z = z.
Also,
hA(v + iw), v + iwi = hAv, vi + hA(iw), iwi − ihAv, wi + ihAw, vi
thus −ihAv, wi + ihAw, vi is real. Thus
−ihAv, wi + ihAw, vi = −ihw, Avi + ihv, Awi (2)
Multiplying (1) by i and adding to (2), we get
2ihAw, vi = 2ihv, Awi
∗
Thus A = A , so A is Hermitian.
Conversely, if hAw, vi = hv, Awi , then hAv, vi = hv, Avi = hAv, vi ⇒ hAv, vi is
real.
7
Question 4(b) If A is a linear transformation on an n-dimensional vector space, then prove
that
1. rank A = rank A0 .
2. nullity A = n − rank A.
Solution. We know that rank A = r if A has a minor of order r different from 0, and all
minors of order > r are 0. Thus rank A = rank A0 .
For the second part, see 1998 question 3(a).
Question 4(c) Show that a real symmetric matrix A is positive definite if and only if there
exists a real non-singular matrix P such that A = PP0 .
8
Question 5(b) Under what circumstances will the real n × n matrix
x a a ... a
a x a . . . a
A= a a x . . . a
...
a a a ... x
x−λ a a ... a
a x−λ a ... a
a a x − λ ... a = 0
...
a a a ... x − λ
x − λ a − x + λ a − x + λ ... a − x + λ
a x−λ−a 0 ... 0
⇒ a 0 x − λ − a ... 0 = 0
...
a 0 0 ... x − λ − a
x − λ + (n − 1)a 0 0 ... 0
a x−λ−a 0 ... 0
⇒ a 0 x − λ − a ... 0 = 0
...
a 0 0 ... x − λ − a
⇒ (x − λ + (n − 1)a)(x − λ − a)n−1 = 0
Thus the eigenvalues are x − a (repeated n − 1 times) and x + (n − 1)a. For positive definite,
λ > 0 ⇒ x > a, x > (n − 1)(−a). If a > 0, this reduces to x > a, if a ≤ 0, this reduces to
x > (n − 1)(−a).
For positive semi-definite, λ ≥ 0. By the same reasoning, if a > 0, then x ≥ a, otherwise
x ≥ (n − 1)(−a).
9
UPSC Civil Services Main 1984 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) If W1 , W2 are finite dimensional subspaces of a vector space V, then show
that W1 + W2 is finite dimensional and dim W1 + dim W2 = dim(W1 + W2 ) + dim(W1 ∩ W2 ).
Question 1(b) If A and B are n-rowed square non-zero matrices such that AB = 0, then
show that both A and B are singular. If both A and B are singular, and AB = 0, does it
imply that BA = 0? Justify your answer.
Question 2(a) Show that row-equivalent matrices have the same rank.
1
Solution. If T (α1 P ), T (α2 ), . . . , T (αn ) Pare linearly independent, then T P is one-one: Let
n n n
v ∈ V. Then v = a α
i=1 i i , T (v) = a
i=1 i T (α i ). If T (v) = 0, then i=1 ai T (αi ) =
0 ⇒ ai = 0, 1 ≤ i ≤ n, because T (α1 ), T (α2 ), . . . , T (αn ) are linearly independent. Thus
T (v) = 0 ⇒ v = 0, so T is one-one.
T is onto: dim T (v) = dim V = n.
Thus T is invertible, in fact T −1 (T (αi )) = αi .
T −1 is a linear transformation: Let T −1 (v) = u, T−1 (w) = x. Then T (u) = v, T (x) = w.
Let T −1 (av + bw) = z, then T (z) = av + bw = aT (u) + bT (x) = T (au + bx) ⇒ z = au + bx.
Thus T −1 (av + bw) = aT −1 (v) + bT −1 (w), so T −1 is linear. It is obvious that T T −1 =
T −1 T = I, as this is true for the basis elements by definition, and extends to all vectors by
linearity. Pn
Conversely
Pn if T is non-singular, then a 1 T (α1 )+a 2 T (α 2 )+. . .+a n T (α n ) = 0 ⇒ T ( i=1 ai αi ) =
0 ⇒ i=1 ai αi = 0 ⇒ ai = 0, 1 ≤ i ≤ n because α1 , α2 , . . . , αn are linearly independent.
Thus T (α1 ), T (α2 ), . . . , T (αn ) are linearly independent.
2
Question 3(a) Let V and W be vector spaces over the field F , and let T be a linear trans-
formation from V to W. If V is finite dimensional show that rank T + nullity T = dim V.
2. tr A = tr A.
e
Solution.
1. The eigenvalues of A are roots of |xI − A| = 0. The eigenvalue of A
e are roots of
−1 −1 −1 −1
0 = |xI − T AT| = |T xIT − T AT| = |T ||xI − A||T| = |xI − A|, so the
eigenvalues are the same.
2. tr AB = tr BA, so tr T−1 AT = tr ATT−1 = tr A.
3. If Ax = λx then T−1 AT(T−1 x) = T−1 (λx) = λT−1 x.
Question 3(c) A 3 × 3 matrix has the eigenvalues 6, 2, -1. The corresponding eigenvectors
are (2, 3, −2), (9, 5, 4), (4, 4, −1). Find the matrix.
2 9 4
Solution. Let P = 3 5 4 , and let A be the required matrix. Then AP =
−2 4 −1
6 0 0 6 0 0
P 0 2 0 , therefore A = P 0 2 0 P−1 . A simple calculation gives P−1 =
0 0 −1 0 0 −1
−21 25 16 2 9 4 6 0 0 12 18 −4
−5 6 4 , note that |P| = 1. Now 3 5 4 0 2 0 = 18 10 −4.
22 −26 −17 −2 4 −1
0 0 −1 −12 8 1
12 18 −4 −21 25 16 −430 512 352
Thus A = 18 10 −4 −5 6 4 = 516 620 396
−12 8 1 22 −26 −17 234 226 −173
x1 x2 x3 6 0 0
A longer way would be to set A = y1 y2 y3 . Then AP = P 0 2 0 . This
z1 z2 z3 0 0 −1
yields three systems of linear equations which have to be solved.
3
Paper II
Question 4(a) Let V be the set of all functions from a non-empty set into a field K. For
any function f, g ∈ V and any scalar k ∈ K, let f + g and kf be functions in V defined by
(f + g)(x) = f (x) + g(x), (kf )(x) = kf (x) for every x ∈ X. Prove that V is a vector space
over K.
Solution. V = {f | f : X −→ K}.
2. The zero function namely 0(x) = 0∀x ∈ X is the additive identity of V i.e. f + 0 =
0 + f = f ∀f ∈ V.
4. (f + g) + h = f + (g + h) for every f, g, h ∈ V.
Question 4(b) Find the eigenvalues and basis for each eigenspace of the matrix
1 −3 3
A = 3 −5 3
6 −6 4
.
|A − λI| = 0
1−λ −3 3
⇒ 3 −5 − λ 3 = 0
6 −6 4−λ
⇒ (1 − λ)(λ + 5)(λ − 4) + 18(1 − λ) + 3(12 − 3λ − 18) + 3(−18 + 30 + 6λ) = 0
⇒ (1 − λ)(λ2 + λ − 20) + 18 − 18λ − 9λ − 18 + 36 + 18λ = 0
⇒ λ2 + λ − 20 − λ3 − λ2 + 20λ − 9λ + 36 = 0
⇒ λ3 − 12λ − 16 = 0
4
Thus λ = −2, 4, −2. Let (x1 , x2 , x3 ) be an eigenvector for λ = 4.
−3 −3 3 x1
3 −9 3 x2 = 0
6 −6 0 x3
Thus −3x1 − 3x2 + 3x3 = 0, 3x1 − 9x2 + 3x3 = 0, 6x1 − 6x2 = 0 ⇒ x1 = x2 , x3 = 2x1 . We
can take (1, 1, 2) as an eigenvector corresponding to λ = 4.
Let (x1 , x2 , x3 ) be an eigenvector for λ = −2.
3 −3 3 x1
3 −3 3 x2 = 0
6 −6 6 x3
Thus 3x1 − 3x2 + 3x3 = 0 ⇒ x2 = x1 + x3 . (1, 1, 0), (0, 1, 1) can be taken as eigenvectors for
λ = −2.
Clearly (1, 1, 2) is a basis for the eigenspace for λ = 4. (1, 1, 0), (0, 1, 1) is a basis for the
eigenspace for λ = −2.
Question 4(c) Let a vector space V have finite dimension and let W be a subspace of V
and W 0 the annihilator of W. Prove that dim W + dim W 0 = dim V.
Solution. Let dim V = n, dim W = m, W ⊆ V. Let {v1 , . . . , vn } be a basis of V so chosen
that {v1 , . . . , vm } is a basis of W. Let {v1∗ , . . . , vn∗ } be the dual basis of V ∗ i.e. vi∗ (vj ) = δij .
∗
We shall show that W 0 has {vm+1 , . . . , vn∗ } as a basis.
By definition of the dual basis vi∗ (vj ) = 0 when 1 ≤ i ≤ m and m + 1 ≤ j ≤ n. Since
∗
vj , m + 1 ≤ j ≤ n annihilate the basis of W, it follows that vj (w) = 0 for all w ∈ W. Thus
∗
{vm+1 , . . . , vn∗ } ⊆ W 0 , and are linearly independent, being a subset of a linearly independent
set.
Let f ∈ W 0 , then f = ni=1 ai vi∗ . We shall show that ai = 0 for 1 ≤ i ≤ m, thus f
P
∗ ∗ 0
is a linear P combination of {vP m+1 , . . . , vn }. By definition of W , f (v1 ) = 0, . . . , f (vm ) = 0,
n ∗ n ∗
therefore ( i=1 ai vi )(vj ) = i=1 ai δij = aj = 0 when 1 ≤ j ≤ m. Thus {vm+1 , . . . , vn∗ } is a
basis of W 0 , hence dim W 0 = n − m, hence dim W + dim W 0 = n = dim V.
Question 5(a) Prove that every matrix satisfies its characteristic equation.
Solution. See 1987 question 3(a).
Question 5(b) Find a necessary and sufficient condition that the real quadratic form
Xn X
n
aij xi xj be a positive definite form.
i=1 j=1
5
UPSC Civil Services Main 1985 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) If W1 and W2 are finite dimensional subspaces of a vector space V, then
show that W1 + W2 is finite dimensional and
Solution.
1 3 3 1 0 0
1 4 3 = 0 1 0 A
1 3 4 0 0 1
Using operation R2 − R1 , R3 − R1 , we get
1 3 3 1 0 0
0 1 0 = −1 1 0 A
0 0 1 −1 0 1
1
Now with operation R1 − 3(R2 + R3 ) we get
1 0 0 7 −3 −3
0 1 0 = −1 1 0 A
0 0 1 −1 0 1
7 −3 −3
Thus the inverse of A is −1 1 0 .
−1 0 1
Now R1 − 2R2 ⇒ A ∼ 0 1 − 26 9
which is the required form.
0 0 0
2
Question 3(a) Show that if λ is an eigenvalue of matrix A, then λn is an eigenvalue of
An , where n is a positive integer.
Solution. If x is an eigenvector for λ, then An x = An−1 Ax = λAn−1 x. Repeating this
process, we get the result.
Question 3(b) Determine if the vectors (1, −2, 1), (2, 1, −1), (7, −4, 1) are linearly indepen-
dent in R3 .
Solution. If possible, let a(1, −2, 1) + b(2, 1, −1) + c(7, −4, 1) = 0. Then a + 2b + 7c =
0, −2a + b − 4c = 0, a − b + c = 0. Adding the last two we get a = −3c, and from the third
we then get b = −2c. These values satisfy the first equation also, hence letting c = −1 we
get 3(1, −2, 1) + 2(2, 1, −1) − (7, −4, 1) = 0. Thus the vectors are linearly dependent.
Question 3(c) Solve
2x1 + 3x2 + x3 = 9 (1)
x1 + 2x2 + 3x3 = 6 (2)
3x1 + x2 + 2x3 = 8 (3)
Solution. 2(2) − (1) ⇒ x2 + 5x3 = 3 ⇒ x2 = 3 − 5x3 . Substituting x2 in (2), x1 = 7x3 .
5
Now substituting x1 , x2 in (3), we get 21x3 + 3 − 5x3 + 2x3 = 8 ⇒ x3 = 18 , x2 = 29
18
, x1 = 35
18
,
which is the required solution.
(Using Cramer’s rule would have been lengthy.)
Paper II
Question 4(a) Let V be the vector space of all functions from R into R. Let Ve be the
subset of all even functions f, f (−x) = f (x), and Vo be the subset of all odd functions
f, f (−x) = −f (x). Prove that
1. Ve and Vo are subspaces of V
2. Ve + Vo = V
3. Ve ∩ Vo = {0}
Solution.
1. Let f, g ∈ Ve , then αf + βg ∈ Ve for all α, β ∈ R, because (αf + βg)(−x) = αf (−x) +
βg(−x) = αf (x) + βg(x) = (αf + βg)(x), thus Ve is a subspace of V. Similarly, if
f, g ∈ Vo , then αf + βg ∈ Vo for all α, β ∈ R, because (αf + βg)(−x) = αf (−x) +
βg(−x) = −αf (x) − βg(x) = −(αf + βg)(x), thus Ve is a subspace of V.
2. Let f (x) ∈ V. Define F (x) = f (x)+f
2
(−x)
, G(x) = f (x)−f
2
(−x)
. Then F (−x) = F (x) ⇒
F ∈ Ve , G(x) = −G(x) ⇒ G ∈ Vo and f (x) = F (x) + G(x). Thus Ve + Vo = V.
3. If f ∈ Ve ∩ Vo , then f (−x) = f (x) ∵ f ∈ Ve , f (−x) = −f (x) ∵ f ∈ Vo . Thus
2f (−x) = 0 for all x ∈ R, so f = 0 ⇒ Ve ∩ Vo = {0}.
3
Question 4(b) Find the dimension and basis of the solution space S of the system
Solution.
1 2 2 −1 3 1 2 2 −1 3
A = 1 2 3 1 1 ∼ 1 2 3 1 1
3 6 8 1 5 0 0 0 0 0
by performing R3 − R1 − 2R2 .
Thus rank A < 3. Actually rank A = 2, because if A = (C1 , C2 , C3 , C4 , C5 ), where Ci are
columns, then C1 and C3 are linearly independent.
Adding the first two equations, we get 4x5 = −2x1 − 4x2 − 5x3 . Subtracting 3 times
the second from the first, we get 4x4 = −2x1 − 4x2 − 7x3 . From these we can see that
X1 = (2, 0, 0, −1, −1), X3 = (0, 1, 0, −1, −1), X3 = (0, 0, 4, −5, −7) are three independent
solutions. Since rank A = 2, the dimension of the solution space S is 3, and {X1 , X2 , X3 }
is its basis.
Question 4(c) Let W1 and W2 be subspaces of a finite dimensional vector space V. Prove
that (W1 + W2 )0 = W10 ∩ W20 .
Solution.
1 1+i 2i 1 0 0 1 0 0
1 − i 4 2 − 3i = 0 1 0 H 0 1 0
−2i 2 + 3i 7 0 0 1 0 0 1
4
Subtracting (1 − i)R1 from R2 , and adding 2iR1 to R3 , we get
1 1 + i 2i 1 0 0 1 0 0
0 2 −5i = −1 + i 1 0 H 0 1 0
0 5i 3 2i 0 1 0 0 1
1 −1 − i 52 − 92 i
1 0 0 1 0 0
0 2 0 = −1 + i 1 0 H 0 5
1 2
i
19 5 9 5
0 0 −2 2
+ 2i −2i 1 0 0 1
1 −1 + i 52 + 29 i
Thus P = 0 1 − 52 i
0 0 1
Index = Number of positive entries = 2. Signature = Number of positive entries - Number
of negative entries = 1.
Question 5(b) Prove that every matrix is a root of its characteristic polynomial.
Solution. This is the Cayley Hamilton theorem, proved in Question 5(a), 1987.
Question 5(c) If B = AP, where P is nonsingular and A orthogonal, show that PB−1 is
orthogonal.
Solution. B−1 = P−1 A−1 , so PB−1 = PP−1 A−1 = A−1 . Now (A−1 )0 A−1 = (AA0 )−1 = I.
Similarly A−1 (A−1 )0 = (A0 A)−1 = I, so PB−1 is orthogonal.
5
UPSC Civil Services Main 1986 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) If A, B, C are three n × n matrices, show that A(BC) = (AB)C. Show by
an example that matrix multiplication is non-commutative.
2. If A, B are square matrices each of order n, and I is the corresponding unit matrix,
then the equation
AB − BA = I
can never hold.
Solution.
1 0 0 0 0 0 0 0
1. True. Let A = ,B = ,C = . Then AB = = AC, but
0 0 0 1 0 0 0 0
B 6= C.
1
2. True. We have proved in 1987 question 5(c) that AB and BA have the same eigen-
values. Trace of AB − BA = trace of AB - trace of BA = sum of the eigenvalues of
AB - sum of the eigenvalues of BA = 0. But trace of In = n, thus AB − BA = I can
never hold.
1 2 −2
Solution. |A| = −1 3 0 = 3 − 2(−1) − 2(2) = 1, so A is non-singular. Hence
0 −2 1
1 0 1 3 2 6
−1
X=A 0 1 0. Asimple calculation gives A−1 = 1 1 2. Thus
0 1 1 2 2 5
3 2 6 1 0 1 3 8 9
X = 1 1 2 0 1 0 = 1 3 3
2 2 5 0 1 1 2 7 7
Question 2(a) If M, N are two subspaces of a vector space S, then show that their dimen-
sions satisfy
dim M + dim N = dim (M ∩ N ) + dim (M + N )
Question 2(b) Find a maximal linearly independent subsystem of the system of vectors
v1 = (2, −2, −4), v2 = (1, 9, 3), v3 = (−2, −4, 1), v4 = (3, 7, −1).
Solution. v1 , v2 are linearly independent because av1 +bv2 = (2a+b, −2a+9b, −4a+3b) =
0 ⇒ a = b = 0.
v3 is dependent on v1 , v2 . If v3 = av1 + bv2 , then (2a + b, −2a + 9b, −4a + 3b) =
7 6
(−2, −4, 1) ⇒ a = − 10 , b = − 10 .
Similarly v4 is dependent on v1 , v2 . If v4 = av1 + bv2 , then (2a + b, −2a + 9b, −4a + 3b) =
(3, 7, −1) ⇒ a = b = 1.
Thus the maximally linearly independent set is {v1 , v2 }.
2
Question 2(c) Show that the system of equations
4x + y − 2z + w = 3
x − 2y − z + 2w = 2
2x + 5y − w = −1
3x + 3y − z − 3w = 1
1 −2 1
Since −2 −1 2 = 1 − 16 + 5 6= 0, it follows that rank A = rank B = 3, so the system
5 0 −1
is consistent. Since rank A = 3, the space of solutions is of dimension 1.
Subtracting the second equation from the fourth, we get 2x + 5y − 5w = −1. But
2x + 5y − w = −1, so w − 5w = 0 ⇒ w = 0.
Now y − 2z = 3 − 4x, −2y − z = 2 − x ⇒ −5z = 8 − 9x ⇒ z = 9x−8 5
. Now y =
−2x−1 −2x−1 9x−8
3 − 4x + 2 9x−8
5
= 5
. Thus the space of solutions is (x, 5
, 5
, 0). The system does
not have a unique solution.
Question 3(a) Show that every square matrix satisfies its characteristic equation. Using
this result or otherwiseshow that if
1 0 2
A = 0 −1 1
0 1 0
3
Solution. The first part is the Cayley Hamilton theorem. See 1987 Question 5(a).
The characteristic equation of A is |A − xI| = 0, thus
1−x 0 2
0 −1 − x 1 = (1 − x)(x2 + x − 1) = −x3 + 2x − 1 = 0
0 1 −x
By the Cayley Hamilton Theorem, A3 − 2A + I = 0.
Thus A4 = 2A2 − A, and 2A3 = 4A − 2I. Hence A4 − 2A3 − 2A2 + 6A − 2I =
2A2 − A − 4A + 2I − 2A2 + 6A − 2I = A as required.
Question 3(b) 1. Show that a square matrix is singular if and only if at least one of its
eigenvalues is 0.
2. The rank of an n×n matrix A remains unchanged if it is premultiplied or postmultiplied
by a nonsingular matrix, and that rank(XAX−1 ) = rank(A).
Solution.
1. The characteristic polynomial of A is |A−xI|. Putting x = 0, we see that the constant
term in the characteristic polynomial is |A|. Thus if A has 0 as an eigenvalue iff 0 is
a root of the characteristic polynomial iff |A| = 0.
R1
2. Let A = ... , where each Ri is 1×n, i.e. A is m×n. Now rank(A) is the dimension
Rm
of the row space of A, i.e. the space generated by R1 , . . . , Rm . Let P = (pij ) be
an
p11 R1 + p12 R2 + . . . + p1m Rm
p21 R1 + p22 R2 + . . . + p2m Rm
m × m nonsingular matrix. Then B = PA = .
..
.
pm1 R1 + pm2 R2 + . . . + pmm Rm
Thus the rows of PA ⊂ the row space of A, being linear combinations of rows of
A. Writing A = P−1 B, we get that the row space of A ⊂ the row space of B, so
rank(A) = rank(B).
Let Q be non-singular n × n, and C = AQ. It can be proved as above that the column
space of A = the column space of C, thus rank(A) = rank(C).
Now by using the above results, rank(XAX−1 ) = rank(XA) = rank(A).
Paper II
Question 4(a) If V1 and V2 are subspaces of a vector space V, then show that dim(V1 +V2 ) =
dim(V1 ) + dim(V2 ) − dim(V1 ∩ V2 ).
Solution. See 1998, question 1(b).
4
Question 4(b) Let V and W be vector spaces over the same field F and dim V = n. Let
{e1 , . . . , en } be a basis of V. Show that a map f : {e1 , . . . , en } −→ W, can be uniquely
extended to a linear transformation T : V −→ W whose restriction to the given basis is f
i.e. T (ei ) = f (ei ).
Pn Pn
Solution.
Pn If v =Pn i=1 a i e i , define T (v) = i=1 ai f (ei ). Clearly T (ei ) = f (ei ). If
v = i=1 ai ei , w = i=1 bi ei , then
n
X
T (αv + βw) = T ( (αai + βbi )ei )
i=1
n
X
= (αai + βbi )f (ei )
i=1
n
X n
X
= α ai f (ei ) + β bi f (ei )
i=1 i=1
= αT (v) + βT (w)
Question 5(a) 1. If A and B are two linear transformations and if A−1 and B −1 exist,
show that (AB)−1 exists and (AB)−1 = B −1 A−1 .
2. Prove that similar matrices have the same characteristic polynomial and hence the
same eigenvalues.
Solution.
2. If B = P−1 AP then |λI−B| = |λP−1 P−P−1 AP| = |P−1 ||λI−A||P| = |λI−A|. Thus
A and B have the same characteristic polynomial and therefore the same eigenvalues.
5
1
Question 5(b) Reduce 2x2 + 4xy + 5y 2 + 4x + 13y − 4
= 0 to canonical form.
Solution.
1
LHS = 2(x + y + 1)2 − 2y 2 − 2 + 5y 2 + 9y −
4
9
= 2(x + y + 1)2 + 3(y 2 + 3y) −
4
3 27 9
= 2(x + y + 1)2 + 3(y + )2 − −
2 4 4
3
= 2X 2 + 3Y 2 − 9 where X = x + y + 1, Y = y +
2
X2 Y2
2X 2 + 3Y 2 − 9 = 0 ⇒ 9/2
+ 3
= 1. Thus the given equation is an ellipse.
0 1 1
Question 5(c) Find the reciprocal of the matrix T = 1 0 1. Then show that the
1 1 0
b+c c−a b−a
transform of the matrix A = 2 c − b c + a a − b by T i.e. TAT−1 is a diagonal matrix.
1
0 1 1 b+c c−a b−a −1 1 1
1 1
TAT−1 = 1 0 1 c − b c + a a − b 1 −1 1
2 2
1 1 0 b−c a−c a+b 1 1 −1
0 2a 2a −1 1 1
1
= 2b 0 2b 1 −1 1
4
2c 2c 0 1 1 −1
4a 0 0 a 0 0
1
= 0 4b 0 = 0 b 0
4
0 0 4c 0 0 c
Thus TAT−1 is diagonal. Now the eigenvalues of A and TAT−1 are the same, so the
eigenvalues of A are a, b, c.
6
UPSC Civil Services Main 1987 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
7 −3
Question 1(a) 1. Find all the matrices which commute with the matrix .
5 −2
2. Prove that the product of two n × n symmetric matrices is a symmetric matrix if and
only if the matrices commute.
Solution.
1.
a b 7 −3 7 −3 a b
=
c d 5 −2 5 −2 c d
⇒ 7a + 5b = 7a − 3c (i)
−3a − 2b = 7b − 3d (ii)
7c + 5d = 5a − 2c (iii)
−3c − 2d = 5b − 2d (iv)
(i) and (iv) ⇒ 5b = −3c. From (ii) we get d = a + 3b, and from (iii) we get the
same
− 9c = 5a + 15b = 5d, or d = a + 3b. Thus the required matrices are
thing: 5a
a b
, a, b arbitrary.
− 35 b a + 3b
1
Question 1(b) Show that the rank of the product of two square matrices A, B each of order
n satisfies the inequality
rA + rB − n ≤ rAB ≤ min(rA , rB )
where rC stands for the rank of C, a square matrix.
G
Solution. There exists a non-singular matrix P such that PA = , where G is a
0
G
rA × n matrix of rank rA . Now PAB = B has at most rA non-zero rows obtained on
0
multiplying rA non-zero rows of G with B. Thus rPAB , which is the same as rank rAB as P
is non-singular, ≤ rA .
Similarly there exists a non-singular matrix Q such
that BQ = H 0 , where H is a
n × rB matrix of rank rB . Now ABQ = A H 0 has at most rB non-zero, columns, so
rABQ ≤ rB . Now rABQ = rAB as |Q| = 6 0, so rAB ≤ rB , hence rAB ≤ min(rA , rB ).
Let S(A) denote the space generated by the vectors r1 , . . . , rn where ri is the ith row
of A, then dim(S(A)) = rA , similarly dim(S(B)) = rB . Let S denote the space generated
by the rows of A and B. Clearly dim(S) ≤ dim(S(A)) + dim(S(B)) = rA + rB . Clearly
S(A + B) ⊆ S. Therefore rA+B ≤ dim(S) ≤ rA + rB .
IrA 0 −1 IrA 0
Now there exist non-singular matrices P, Q such that PAQ = or A = P Q−1 .
0 0 0 0
−1 0 0 −1 −1 IrA 0
Let C = P Q . Then A + C = P Q−1 = P−1 Q−1 , so A + C
0 In−rA 0 In−rA
is nonsingular.
Now rank B = rank((A + C)B) ≤ rank(AB) + rank(CB). But rank(CB) ≤ rank(C) =
n−rA . Thus rB ≤ rAB +n−rA ⇒ rA +rB −n ≤ rAB . Hence rA +rB −n ≤ rAB ≤ min(rA , rB ).
3 a+2 −3 2a + 1
1 0 0 0
1 2 −3 a−1
Solution. |A| = by carrying out the operations C2 −C1 , C3 −
2 2a − 4 −a − 4 3a − 3
3 a−1 −6 2a − 2
2 −3 1 0 0 1
C1 , C4 − C1 . Thus |A| = (a − 1) 2a − 4 −a − 4 3 = (a − 1) 2a − 10 −a + 5 3 =
a−1 −6 2 a−5 0 2
2
(a − 1)(a − 5) .
2
Thus |A| =6 0 when a 6= 1, a 6= 5. So for 1 < a < 5, rank A = 4.
If a = 5,
1 1 1 1
1 3 −2 5
A = 2 8 −7 14
3 7 −3 11
1 0 0 0
1 2 −3 4
= (C2 − C1 , C3 − C1 , C4 − C1 )
2 6 −9 12
3 4 −6 8
1 0 0 0
0 2 −3 4
= (R2 − R1 , R3 − 2R1 , R4 − 3R1 )
0 6 −9 12
0 4 −6 8
1 0 0 0
0 2 −3 4
=
0
(R3 − 3R2 , R4 − 2R2 )
0 0 0
0 0 0 0
1 0
which has rank 2, as 6= 0, showing that rank of A is 2 when a = 5.
0 2
If a = 1,
1 1 1 1
1 3 −2 1
A =
2 0 −3 2
3 3 −3 3
1 0 0 0
1 2 −3 0
=
2 −2 −5
(C2 − C1 , C3 − C1 , C4 − C1 )
0
3 0 −6 0
1 0 0
which has rank 3 since 1 2 −3 6= 0, showing that rank of A is 3 when a = 1.
2 −2 −5
3
Solution. Let xr be an eigenvector of λr . Then Ak xr = Ak−1 (Axr ) = λr Ak−1 xr = . . . =
λkr xr . Thus the eigenvalues of Ak are λkj , j = 1, 2, . . . , n.
Let f (x) = a0 + a1 x + . . . + am xm . Then (a0 I + a1 A + . . . + am Am )xr = (a0 + a1 λr +
. . . + am λmr )xr = f (λr )xr . Thus the eigenvalues of f (A) are f (λj ), j = 1, 2, . . . n.
Question 2(b) If A is skew-symmetric, then show that (I − A)(I + A)−1 , where I is the
corresponding identity matrix, is orthogonal.
0 ab
Hence construct an orthogonal matrix if A = .
− ab 0
−1
Solution. For
theaorthogonality
− A)(I
of (I
a
+ A) , see question 2(a)
of 1999.
1 −b 1 b −1 b b −a
I−A= a , and I + A = ⇒ (I + A) = a2 +b2 .
b
1 − ab 1 a b !
b2 −a2 −2ab
2
−1 1 b −a b −a 1 b − a2 −2ab a2 +b2 a2 +b2
Thus (I−A)(I+A) = a2 +b2 = a2 +b2 = b2 −a2 ,
a b a b 2ab b 2 − a2 2ab
a2 +b2 a2 +b2
which is the required orthogonal matrix.
Solution.
1. BA = A−1 ABA. Thus the characteristic polynomial of BA is |xI−BA| = |xA−1 A−
A−1 ABA| = |A−1 ||xI − AB||A| = |xI − AB| which is the characteristic polynomial
of AB.
√ √
2. If A is orthogonal, i.e. A0 A = I, then |Ax| = x0 A0 Ax = x0 x = |x|.
Conversely |Ax| = |x| ⇒ x0 A0 Ax = x0 x ⇒ x0 (A0 A − I)x = 0 for all x. Thus
A0 A − I = 0, so A is orthogonal.
Note that is A = (aij ) is symmetric, and ni,j=1 aij xi xj = 0 for all x, then choose x = ei to
P
get e0i Aei = aii = 0, and choose x = ei +ej to get 0 = x0 Ax = aii +2aij +ajj = 2aij ⇒ aij = 0.
(Here ei is the i-th unit vector.) Thus A = 0.
Question 3(a) Show that a necessary and sufficient condition for a system of linear equa-
tions to be consistent is that the rank of the coefficient matrix is equal to the rank of the
augmented matrix. Hence show that the system
x + 2y + 5z + 9 = 0
x − y + 3z − 2 = 0
3x − 6y − z − 25 = 0
is consistent and has a unique solution. Determine this solution.
4
Solution. Let the system be Ax = b where A is m × n, x is n × 1 and b is m × 1. Let
rank A = r. A = [c1 , c2 , . . . , cn ] where each cj is an m × 1 column. We can assume without
loss of generality that c1 , c2 , . . . , cr are linearly independent, r = rank A. The system is now
x1 c1 + x2 c2 + . . . + xn cn = b
, where x0 = (x1 , . . . , xn ). Suppose rank([A b]) = r. This means that out of n + 1 columns,
exactly r are independent. But by assumption, c1 , c2 , . . . , cr are linearly independent, there-
fore these vectors form a basis for the column space of [A b]. Consequently there exist
α1 , . . . , αr such that α1 c1 + α2 c2 + . . . + αr cr = b. This gives us the required solution
{α1 , . . . , αr , 0, . . . , 0} to the linear system.
Conversely, let the system be consistent. Let A = [c1 , c2 , . . . , cn ] as before, with c1 , c2 , . . . , cr
linearly independent, r = rank A. Since the column space of A, i.e. the space generated
by c1 , c2 , . . . , cn has dimension r, each cj for r + 1 ≤ j ≤ n is linearly dependent on
c1 , c2 , . . . , cr . Since there exist α1 , . . . , αn such that α1 c1 + α2 c2 + . . . + αn cn = b, b is a
linear combination of c1 , c2 , . . . , cn . But each cj for r + 1 ≤ j ≤ n is a linear combination of
c1 , c2 , . . . , cr , therefore b is a linear combination of c1 , c2 , . . . , cr . Thus the space generated
by {c1 , c2 , . . . , cn , b} also has dimension
r, sorank([A b]) = r = rank A.
1 2 5
The coefficient matrix A = 1 −1 3 . |A| = 24 6= 0, so rank A = 3. The aug-
3 −6 −1
1 2 5 −9 1 2 5
mented matrix B = 1 −1 3 2 has rank ≤ 3, but since 1 −1 3 6= 0, it has
3 −6 −1 25 3 −6 −1
rank 3. Thus the given system is consistent.
Subtracting the second equation from the first we get 3y + 2z + 11 = 0. Subtracting 3
times the second equation from the third, we get 3y + 10z + 19 = 0. Clearly z = −1, y =
−3 ⇒ x = 2. Thus (2, −3, −1) is the unique solution. In fact the only solution of the system
is
x −9 2
y = A−1 2 = −3
z 25 −1
5
Solution. Let W be the subspace spanned by yk , k = 1, . . . , s. Then dim W ≤ s. Since
xj ∈ W, j = 1, . . . , r because xj is a linear combination of yk , k = 1, . . . , s, and xj , j =
1, . . . , r are linearly independent, dim W ≥ r ⇒ r ≤ s.
Clearly f1 and f4 are linearly independent. f2 is linearly expressible in terms of f1 and f4
because f2 = af1 + bf4 ⇒ a + 2b = 4, 2a + b = −1, a − b = 5, 3a = −6 ⇒ a = −2, b = 3 satisfy
all four, hence f2 = −2f1 + 3f4 . Similarly f3 = − 37 f1 + 53 f4 . Thus {f1 , f4 } is a maximally
independent subsystem.
Paper II
Question 4(b) Prove that two finite dimensional vector spaces V, W over the same field F
are isomorphic if they are of the same dimension n.
n
X
T (αv + βu) = T ( (αai + βbi )T (vi )
i=1
n
X
= (αai + βbi )T (vi )
i=1
n
X n
X
= α ai T (vi ) + β bi T (vi )
i=1 i=1
= αT (v) + βT (u)
6
Note: The converse of 4(b) is also true i.e. if T : V −→ W is an isomorphism i.e. V, W are
isomorphic, then dim V = dim W.
Let v1 , . . . , vn be a basis of V. Then {w1 = T P is a basis of W.
(v1 ), . . . , wn = T (vn )}P
n n
w1 , . . . , wn are linearly independent. If i=0 bi wi = 0, then i=0 bi T vi = 0 ⇒
T ( ni=0 bi vi ) = 0 ⇒ ni=0 bi vi = 0 ⇒ bi = 0 for 1 ≤ i ≤ n, because v1 , . . . , vn are linearly
P P
independent.
w1 , . . . , wn generate W.P If w ∈ W, then there exists a P v ∈ V such that
Pn T (v) = w,
n n
because T is onto. Let v = i=0 bi vi , then w = T (v) = T ( i=0 bi vi ) = i=0 bi T (vi ) =
P n
i=0 bi wi .
Question 5(a) Prove that every square matrix is the root of its characteristic polynomial.
Solution. This is the Cayley Hamilton Theorem. Let A be a matrix of order n. Let
|A − xI| = ao + a1 x + . . . + an xn
ao I + a1 A + . . . + an A n = 0
AB0 = ao I
AB1 − B0 = a1 I
AB2 − B1 = a2 I
...
ABn−1 − Bn−2 = an−1 I
−Bn−1 = an I
7
Question 5(b) Determine the eigenvalues and the corresponding eigenvectors of
2 2 1
A = 1 3 1
1 2 2
Solution.
λ − 2 −2 −1
|λI − A| = −1 λ − 3 −1 = 0
−1 −2 λ − 2
Thus x1 + 2x2 + x3 = 0. We can take x1 = (1, 0, −1) and x2 = (0, 1, −2) as eigenvectors for
λ = 1. These are linearly independent, and all eigenvectors for λ = 1 are linear combinations
of x1 , x2 .
1 1 0 5 0 0
Let P = 1 0 1 . Then P−1 AP = 0 1 0.
1 −1 −2 0 0 1
Question 5(c) Let A and B be n square matrices over F . Show that AB and BA have
the same eigenvalues.
BA = A−1 ABA ⇒ |xI − BA| = |xA−1 A − A−1 ABA| = |A−1 ||xI − AB||A| = |xI − AB|
8
Thus the characteristic polynomials of AB and BA are the same, so they have the same
eigenvalues.
If A is singular,
then let rank(A) = r. Then there existP, Q non-singular such that
Ir 0 I 0
PAQ = . Now PABP−1 = PAQQ−1 BP−1 = r Q−1 BP−1 . Let Q−1 BP−1 =
0 0 0 0
B1 B2
, where B1 is r × r, B2 is r × n − r, B3 is n − r × r, B4 is n − r × n − r. Then
B3 B4
−1 Ir 0 B1 B2 B1 B2
PABP = = , so the characteristic roots of AB are the
0 0 B3 B4 0 0
same as those of B1 , along with 0 repeated n − r times.
−1 B 1 B 2 Ir 0 B 1 0
Now Q−1 BAQ = Q−1 BP PAQ = = so the character-
B3 B4 0 0 B3 0
istic roots of BA are the same as those of B1 , along with 0 repeated n − r times. Thus BA
and AB have the same characteristic roots.
9
UPSC Civil Services Main 1988 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
then T corresponds to the n × m matrix A whose (i, j)’th entry is aij . In fact (v1 , . . . , vm ) =
(w1 , . . . , wn )A.
It can be easily seen that
2 1
e1 = (1, 0) = (3, 1) − (−1, 2)
7 7
1 3
e2 = (0, 1) = (3, 1) + (−1, 2)
7 7
1
and therefore
2 1
(4, 1, 2, 1) − (3, 0, −2, 1)
T(e1 ) =
7 7
1
= (5, 2, 6, 1)
7
1 ∗
= (5e + 2e∗2 + 6e∗3 + e∗4 )
7 1
1 3
T(e2 ) = (4, 1, 2, 1) + (3, 0, −2, 1)
7 7
1
= (13, 1, −4, 4)
7
1
= (13e∗1 + e∗2 − 4e∗3 + 4e∗4 )
7
5 13
2 1
Thus T corresponds to the matrix 17
6 −4 w.r.t. the standard basis.
7 4
Question 1(b) If M, N are finite dimensional subspaces of V, then show that dim(M +
N ) = dim M + dim N − dim(M ∩ N ).
B
plete {u1 , u2 , . . . , ur } to a basis {u1 , u2 , . . . , ur , w1 , . . . , wn } of N , where dim N = n+r. We
shall show that = {u1 , u2 , . . . , ur , v1 , . . . , vm , w1 , . . . , wn } is a basis of M + N , proving
the result.
If u ∈ M + N , then u = v + w for some v ∈ M, w ∈ N . Since B B is a superset of the
B . Thus B
is linearly independent. If possible let
generates M + N .
⇒ u can be
n
X m
X r
X
αi vi + βi w i + γi ui = 0
i=1 i=1 i=1
Pn Pm Pr Pn
Since
Pn i=1 αi v i = − i=1 β i w i − i=1 γi u i it follows that i=1 αi vi ∈ N .P Therefore
n r
means that ni=1 αi vi −
P P
α
Pri=1 i iv ∈ M ∩ N ⇒ α
i=1 i i v = η
i=1 i i u for ηi ∈ R. This
i=1 ηi ui = 0. But {u1 , u2 , . . . , ur , v1 , . . . , vm } are linearly independent, so αi = 0, 1 ≤
i ≤ n. Similarly we can show that βi = 0, 1 ≤ i ≤ m. Then the linear indepen-
dence of {u1 , u2 , . . . , ur } shows that γi = 0, 1 ≤ i ≤ r. Thus the vectors in
early independent and form a basis of M + N , showing that the dimension of M + N is
B
are lin-
2
Question 1(c) Determine a basis of the subspace spanned by the vectors v1 = (1, 2, 3), v2 =
(2, 1, −1), v3 = (1, −1, −4), v4 = (4, 2, −2).
Solution. v1 , v2 are linearly independent because if αv1 + βv2 = 0 then α + 2β =
0, 2α + β = 0, 3α − β = 0 ⇒ α = β = 0. If v3 = αv1 + βv2 , then the three linear equations
α + 2β = 1, 2α + β = −1, 3α − β = −4 should be consistent — clearly α = −1, β = 1 satisfy
all three, showing v3 = v2 − v1 . Again suppose v4 = αv1 + βv2 , then the three linear
equations α + 2β = 4, 2α + β = 2, 3α − β = −2 should be consistent — clearly α = 0, β = 2
satisfy all three, showing v4 = 2v2 .
Hence {v1 , v2 } is a basis for the vector space generated by {v1 , v2 , v3 , v4 }.
a1 b
Question 2(a) Show that it is impossible for S = , b 6= 0 to have identical eigen-
b a2
values.
0 λ1 0
Solution. We know given S symmetric ∃O orthogonal so that O SO = , where
0 λ2
λ1 , λ
2 are eigenvalues
of S. If λ1 = λ2 , then we have S = O0 −1 (λI)O−1 = λ(OO0 )−1 = λI ⇒
λ 0
S= . Thus if b 6= 0, S cannot have identical eigenvalues.
0 λ
Question 2(b) Prove that the eigenvalues of a Hermitian matrix are all real and the eigen-
values of a skew-Hermitian matrix are either zero or pure imaginary.
Solution. See question 2(a), year 1998.
Question 2(c) If x0 Ax > 0 for all x 6= 0, A symmetric, then for all y 6= 0 y0 A−1 y > 0.
If λ is the largest eigenvalue of A, then
x0 Ax
λ = sup
x∈Rn x0 x
x6=0
.
Solution. Clearly A = A0 A−1 A ∴ x0 Ax = x0 A0 A−1 Ax = y0 A−1 y where y = Ax for any
x ∈ Rn , x 6= 0. Since |A| 6= 0, any vector y can be written as Ax, by taking x = A−1 y.
Thus x0 Ax > 0 ⇒ y0 A−1 y > 0 for all y 6= 0.
λ1 0 . . . 0
0 λ2 . . . 0
0
0
Let M = supx∈Rn xxAx . Let O be an orthogonal matrix such that O AO = .. .
0x ..
x6=0 . .
0 0 . . . λn
Let 0 6= x = Oy, then x x = y O Oy = y y. Now x Ax = y O AOy = i λi yi2 ≤ λy0 y
0 0 0 0 0 0 0
P
0 0
where λ is the largest eigenvalue of A. Thus λ ≥ xyAx 0y = xxAx 0 x , so λ ≥ M . On the other
0
hand, if x 6= 0 is an eigenvector corresponding to λ, then x Ax = λx0 x ⇒ λ = xxAx
0
0x ≤ M .
Thus λ = M as required.
3
Question 3(a) By converting A to an echelon matrix, determine its rank, where
0 0 1 2 8 9
0 0 4 6 5 3
A= 0 2 3 1 4 7
0 3 0 9 3 7
0 0 5 7 3 1
Solution. Consider
0 0 0 0 0
0 0 2 3 0
0
1 4 3 0 5
A =
2
6 1 9 7
8 5 4 3 3
9 3 7 7 1
Interchange the first row with the third, then third with fourth, fourth with fifth and fifth
with sixth to get
1 4 3 0 5
0 0 2 3 0
0
2 6 1 9 7
A ∼ 8 5
4 3 3
9 3 7 7 1
0 0 0 0 0
Now perform R3 − 2R1 , R4 − 8R1 , R5 − 9R1 to get
1 4 3 0 5
0 0 2 3 0
0
0 −2 −5 9 −3
A ∼ 0 −27 −20 3 −37
0 −33 −20 7 −44
0 0 0 0 0
Interchange the second and the third row, and perform − 12 R2 , 21 R3 to get
1 4 3 0 5
5
0 1
2
− 92 3
2
0 0 3
0 1 0
A ∼ 2
0 −27 −20 3 −37
0 −33 −20 7 −44
0 0 0 0 0
4
Perform R4 + 27R2 , R5 + 33R2 to get
1 4 3 0 5
5
0
1 2
− 92 3
2
0 3
0 0 1 0
A ∼ 95
2
0
0 2
− 237
2
7
2
125
0 0 2
− 283
2
11
2
0 0 0 0 0
95 125
Operation R4 − 2
R3 , R5 − 2
R3
yields
1 4 3 0 5
5
0
1 2
− 92
3
2
0 3
0 0 1 0
A ∼ 2
0
0 0 − 759
4
7
2
0 0 0 − 941
4
11
2
0 0 0 0 0
4
Now multiply R4 with − 759
1 4 3 0 5
5
0
1 2
− 92 3
2
0 3
0 0 1 0
A ∼ 2
14
0
0 0 1 − 759
0 0 0 − 941
4
11
2
0 0 0 0 0
941
Performing R5 + 4
R4 results in
1 4 3 0 5
5
0
1 2
− 29 3
2
0 3
0 0 1 0
A ∼ 2
14
0
0 0 1 − 759
11
0 0 0 0 2
− 941×7
1882
0 0 0 0 0
5
Question 3(b) Given AB = AC does it follow that B = C? Can you provide a counterex-
ample?
Solution.
−2λ −1 − λ 2λ −2λ −1 − λ 0
|A − λB| = 0 ⇒ −1 − λ −1 − 2λ 1 + 2λ = 0 ⇒ −1 + λ 0 0 =0
2λ 1 + 2λ −3λ 2λ 1 + 2λ −λ
Thus λ = 0, 1, −1. This shows that the matrices are diagonalizable simultaneously.
We now determine x1 , x2 , x3 such that (A − λB)xi = 0, i = 1, 2, 3. For λ = 0, let
x1 0 = (x1 , x2 , x3 ) be such that (A − λB)x1 = 0. Thus
0 −1 0 x1
−1 −1 1 x2 = 0
0 1 0 x3
Thus −2x1 −2x2 +2x3 = 0, −2x1 −3x2 +3x3 = 0, 2x1 +3x2 −3x3 = 0 ⇒ x2 −x3 = 0 ⇒ x1 = 0.
Thus we may take x2 0 = (0, 1, 1).
For λ = −1, let x3 0 = (x1 , x2 , x3 ) be such that (A − λB)x3 = 0. Thus
2 0 −2 x1
0 1 −1 x2 = 0
−2 −1 3 x3
6
Thus 2x1 − 2x3 = 0, x2 − x3 = 0, −2x1 − x2 + 3x3 = 0 ⇒ x1 = x2 = x3 . Thus we may take
x3 0 = (1, 1, 1).
1 0 1
Let P = 0 1 1 so that
1 1 1
1 0 1 0 −1 0 1 0 1 1 0 1 0 −1 −1 0 0 0
0
P AP = 0 1 1
−1 −1 1 0 1 1 = 0
1 1 0 0 −1 = 0 1 0
1 1 1 0 1 0 1 1 1 1 1 1 0 1 1 0 0 −1
7
UPSC Civil Services Main 1989 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
3 1 −1
Question 1(a) Find a basis for the null space of the matrix A = .
0 1 2
Question 1(b) If W is a subspace of a finite dimensional vector space V then prove that
dim V/W = dim V − dim W.
1
First we show linear independence:
n
X
αi (vi + W) = 0
i=r+1
Xn
⇒ αi vi + W = 0 + W
i=r+1
Xn
⇒ αi vi ∈ W
i=r+1
Xn r
X
⇒ αi vi = −αi vi (say)
i=r+1 i=1
Xn
⇒ αi vi = 0
i=1
⇒ αi = 0, 1 ≤ i ≤ n (vi are linearly independent.)
Question 1(c) Show that all vectors (x1 , x2 , x3 , x4 ) in the vector space V4 (R) which obey
x4 −x3 = x2 −x1 form a subspace V. Show further that V is spanned by ξ1 = (1, 0, 0, −1), ξ2 =
(0, 1, 0, 1), ξ3 = (0, 0, 1, 1).
Question 2(a) Let P be a real skew-symmetric matrix and I the corresponding unit matrix.
Show that I − P is non-singular. Also show that Q = (I + P)(I − P)−1 is orthogonal.
2
Solution. We have proved (question 2(a), year 1998) that the eigenvalues of a skew-
Hermitian and therefore of a skew-symmetric matrix are zero or pure imaginary. This means
|I − P| =
6 0 because 1 cannot be an eigenvalue of P.
Q Q = [(I − P)−1 ]0 (I + P)0 (I + P)(I − P)−1 = (I + P)−1 (I − P)(I + P)(I − P)−1 . But
0
3. If S is real, S0 = −S and S2 = −I, then S is orthogonal and of even order, and there
exist non-null vectors x, y such that x0 x = y0 y = 1, x0 y = 0, Sx + y = 0, Sy = x.
Proof: S0 S = −SS = I, so S is orthogonal, |S| =
6 0 ⇒ S is of even order.
Choose y such that y0 y = 1. Then y0 Sy = (y0 Sy)0 = y0 S0 y = −y0 Sy ⇒ y0 Sy = 0.
Set x = Sy, then y0 x = 0, Sx + y = 0. In addition, x0 x = y0 S0 Sy = y0 y = 1.
Question 2(b) Show that an n × n matrix A is similar to a diagonal matrix if and only if
the set of eigenvectors of A includes a set of n linearly independent vectors.
3
1 1 , and B =
Question
3(a) Find the roots of the equation |xA − B| = 0 where A = 1 4
0 3 . Use the result to show that the real quadratic forms F = x2 + 2x x + 4x2 , G = 6x x
3 0 1 1 2 2 1 2
can be simultaneously reduced by a non-singular linear substitution to y12 + y22 , y12 − 3y22 .
x x−3
Solution. |xA − B| = = 4x2 − (x − 3)2 ⇒ ±2x = x − 3 ⇒ x = −3, 1.
x − 3 4x
Let x1 = (x1 , x2 ) be a row vector such that (A − B) xx12 = 0.
1 −2 x1
= 0 ⇒ x1 − 2x2 = 0
−2 4 x2
−3 −6 x1
= 0 ⇒ x1 + 2x2 = 0
−6 −12 x2
Solution.
− tan 2θ cos2 2θ − sin 2θ cos 2θ
1
R.H.S = θ
tan 2 1 sin 2θ cos 2θ cos2 2θ
cos2 2θ − sin2 2θ −2 sin 2θ cos 2θ
= = L.H.S
2 sin 2θ cos 2θ − sin2 2θ + cos2 2θ
4
0 1
Question 3(c) Verify the Cayley-Hamilton theorem for A = .
−2 3
−λ 1
Solution. The characteristic equation for A is = 0 ⇒ −3λ + λ2 + 2 = 0
−2 3 − λ
2
Thus according
theorem A − 3A + 2I = 0.
to the Cayley-Hamilton
0 1 0 1 −2 3
A2 = =
−2
3 −2 3 −6 7
−2 3 0 1 1 0 0 0
−3 +2 =
−6 7 −2 3 0 1 0 0
0 1
Thus the Cayley Hamilton theorem is verified for A = .
−2 3
5
UPSC Civil Services Main 1990 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
1 Linear Algebra
Question 1(a) State any definition of the determinant of an n×n matrix and show that the
determinant function is multiplicative i.e. det AB = det A det B for any two n × n matrices
A, B. You may assume the matrices to be real.
Solution. Let π be a permutation of 1, . . . , n. Define sign(π) as follows: count the number
of pairs of numbers that need to be interchanged to get to π from the identity permutation.
If this is even, the sign is 1, and if it is odd, the sign is −1. Now if Π is the set of all
permutations of 1, . . . , n, define
X Y
det A = sign(π) aiπ(i)
π∈Π i
1
Then det P = det A det B, because if for any permutation π, π(i) > n for i ≤ n, then
the corresponding element of the sum is 0 as aiπ(i) = 0. Thus π(i) ≤ n if i ≤ n, and
consequently π(j) > n if j > n. So each permutation consists of a permutation of 1, . . . , n
and a permutation of n + 1, . . . , 2n, consequently we can factor the sum, to get det P =
det A det B.
Now we perform a series of column operations to P — add b11 C1 + . . . + bn1 Cn to Cn+1 ,
to get
a11 a12 . . . a1n c11 0 . . . 0
a21 a22 . . . a2n c21 0 . . . 0
. .. .. .. ..
.
. . . . .
an1 an2 . . . ann cn1 0 . . . 0
−1 0 . . . 0 0 b12 . . . b1n
0 −1 . . . 0 0 b22 . . . b2n
. .. .. .. .. ..
.. . . . . .
0 0 . . . −1 0 bn2 . . . bnn
where C = AB = (cij ). Similarly add b12 C1 + . . . + bn2 Cn to Cn+2 , . . ., b1n C1 + . . . + bnn Cn
to C2n to get
A C
−1 0 . . . 0
0 −1 . . . 0
0
...
0 0 . . . −1
We can now verify that det P = det C. Any permutation π that leads to a non-zero term
in the determinant sum must have π(j) = j − n for j > n, thus piπ(i) = −1, i > n. Also
π(j) > n for j ≤ n, so any such π can be written as a permutation of 1, . . . , n followed by a
series of swaps of the i-th number with the (n + i)-th number, which is n + i. Also sign(π)
is the same as the sign of the corresponding permutation π 0 of 1, . . . , n — we first do π 0 by
exchanges and then additionally swap the i-th element with the (i + n)-th element, for each
i ≤ n. Now if n is even, this involves an even number of additional swaps, and multiply by
(−1)n corresponding to piπ(i) for i > n, otherwise we get an odd number of additional swaps,
flipping the sign, but we still multiply by (−1)n = −1.
Thus det P = det C = det A det B.
Question 1(b) Prove Laplace’s formula for simulataneous expansion of the determinant by
the first row and column; that given an (n+1)×(n+1) matrix in the block form M = αγ D β
,
where α is a scalar, β is a 1 × n matrix (a row vector), γ is a n × 1 matrix (a column vector),
and D is an n × n matrix, then det M = α det D − βD0 γ 0 , where D0 is the matrix of cofactors
of D and βD0 γ 0 stands for the matrix product of size 1 × 1.
2
a21 a22 . . . a2,n+1
γ = ... and D = ... ..
.
.
an+1,1 an+1,2 . . . an+1,n+1
det M = a11 |A11 | − a12 |A12 | + . . . + (−1)n a1,n+1 |A1,n+1 | where Aij is the minor corre-
sponding to aij (formed
Pby deleting the i-th row and j-th column of A). Clearly D = A11 ,
n+1
so det M = α det D − j=2 (−1)j a1j det A1j . Now
Let Bij be the minor of aij in D. Expanding |A1j | in terms of the first column, we get
n+1 X
X n+1
det M = α det D − a1j ai1 |Bij |(−1)i (−1)j
j=2 i=2
a21
= α det D − (a12 a13 . . . a1,n+1 )(cij ) ...
an+1,1
= α det D − βD0 γ
Question 1(c) For M as in 1(b), if D is invertible, show that det M = det D(α − βD−1 γ).
Question 2(a) Write the definition of the characteristic polynomial, eigenvalues and eigen-
vectors of a square matrix. Also say briefly something about the importance and/or applica-
tions of these notions.
3
• Eigenvalues can be used to find a very simple matrix for an operator — either diagonal
or a block diagonal form. This can be used to compute powers of matrices quickly.
• If one wishes to solve a linear differential system like x0 = Ax, or study the local
properties of a nonlinear system, finding the diagonal form of the matrix can give us
a decoupled form of the system, allowing us to find the solution or understand its
qualitative behavior, like its stability and oscillatory behavior.
• In mechanics, the eigenvectors of the inertia tensor are used to define the principal
axes of a rigid body, which are important in analyzing the rotation of the rigid body.
• Eigenvalues can be used to compute low rank approximations to matrices, which help
in reducing the dimensionality of various problems. This is used in statistics and
operations research to explain a large number of observables in terms of a few hidden
variables + noise.
• Eigenvalues can help us determine the form of a quadric or higher dimensional surface
— see the relevant section in year 1999.
• In quantum mechanics, states are represented by unit vectors, while observable quan-
tities (like position and energy) are represented by Hermitian matrices. The basic
problem in any quantum system is the determination of the eigenvalues and eigenvec-
tors of the energy matrix. The eigenvalues are the observed values of the observable
quantity, and discreteness of the eigenvalues leads to the quantization of the observed
values.
Question 2(b) Show that a Hermitian matrix possesses a set of eigenvectors which form
an orthonormal basis. State briefly how or why a general n × n complex matrix may fail to
possess n linearly independent eigenvectors.
Solution. Let H be Hermitian, and λ1 , . . . , λn its eigenvalues, not necessarily distinct. Let
x1 with norm 1 be an eigenvector corresponding to λ1 . Then there exists (from a result
analogous to the result used in question 3(a), year 1995) a unitary matrix U such that x1 is
its first column. Therefore
−1 0 λ1 L
U1 HU1 = U1 HU1 =
0 H1
4
0
where H1 is (n − 1) × (n − 1) and L is (n − 1) × 1. Since U1 HU1 is Hermitian, it follows
that L = 0. Consequently
0 λ1 0
U1 HU1 =
0 H1
Now H1 is Hermitian with eigenvalues λ2 , . . . , λn . Repeating the above argument, we find
U∗2 an (n − 1) × (n − 1) unitary matrix such that
0 ∗ λ 2 0
U∗2 H1 U2 =
0 H2
1 0
If U2 = then U2 is unitary, and
0 U∗2
λ1 0 0
0 0
U2 U1 HU1 U2 = 0 λ2 0
0 0 H2
Repeating this process or by induction, we can get U unitary such that
λ1 · · · 0
0 .. ..
U HU = . .
0 ··· λn
Question 2(c) Define the minimal polynomial and show that a complex matrix is diago-
nalizable (i.e. conjugate to a diagonal matrix) if and only if the minimal polynomial has no
repeated root.
5
k
x−ci
Q
Consider the polymonials pj = cj −ci
. Clearly pj (ci ) = 0 for i 6= j, and pi (ci ) = 1. This
i=1
i6=j
implies that the polynomials p1 , . . . , pk are linearly independent, and each one is of degree
k − 1 < k. Thus these form a basis of the space Pof polynomials of degree ≤ k − 1. Thus
k
given any polynomial g of degree ≤ k − 1, g = i=1 αi pi , where αi = g(ci ). In particular,
1 = ki=1 pi , x = ki=1 ci pi . Thus
P P
k
X k
X
I= pi (T), T = ci pi (T)
i=1 i=1
Moreover pi (T)pj (T) = 0, i 6= j because pi (x)pj (x) is divisible by the minimal polynomial
of T. Also pj (T) 6= 0, 1 ≤ j ≤ k, because the degree of pj is less than k, the degree of the
minimal polynomial of T.
Set Vi = pi (T)V, then V = I(V) = ki=1 pi (T)V = V1 + . . . + Vk . We shall now show that
P
Vi = Vci , the eigenspace of T with respect to ci .
v ∈ Vi ⇒ v = pi (T)w for some w ∈ V. Since (x−ci )pi is divisible by p, (T−ci I)v = 0, so
Tv = ci v so v ∈ Vci . Conversely, if v ∈ Vci , then Tv = ci v, or (T−ci I)v = 0 ⇒ pj (T)v = 0
for j 6= i. Since
Pv = pi (T)v + . . . + pk (T)v, we get v = pi (T)v ⇒ v ∈ Vi .
Thus V = ki=1 Vci so V has a basis consisting of eigenvectors, so T is diagonalizable.
Conversely let T be diagonalizable, then we shall show that the minimal polynomial of
T has distinct roots. Let
λ1 0 . . . 0
0 λ2 . . . 0
−1
P TP = ..
..
. .
0 0 . . . λn
and out of λ1 , . . . , λn , let λ1 , . . . , λk be distinct. Let g(x) = (x − λ1 ) . . . (x − λk ). Then
v ∈ V ⇒ v = v1 + . . . + vk where vi ∈ Vλi , the eigenspace of λi . Thus g(T)(v) = 0, so
g(T) = 0. Thus g(x) is divisible by the minimal polynomial of T. Since g(x) has all distinct
roots, it immediately follows that the minimal polynomial also has all distinct roots.
a b is expressible in the form LDU, where
Question 3(a) Show that a 2 × 2 matrix M = c d
L has the form α1 01 , D is diagonal and U has the form 10 β1 If and only if either a 6= 0
or a = b = c = 0. Also show that when a 6= 0 the factorization M = LDU is unique.
6
Conversely, if M = 00 d0 , i.e. a = b = c = 0, then M = α1 01 00 d0 10 β1 for any
a 0 b
α, β ∈ R. If M = ac db , a 6= 0, then M = LDU with L = 1ac 01 , D = 0 d− bca , U = 10 1a
as shown above.
Question 3(b) Suppose a real matrix has eigenvalue λ, possibly complex. Show that there
exists a real eigenvector for λ if and only if λ is real.
n
Question 3(c) If a 2×2 matrix
A has order n, i.e. A = I2 , then show that A is conjugate
cos θ sin θ
to the matrix where θ = 2πm for some integer m.
− sin θ cos θ n
Solution. Note: A has to be real, otherwise the result is false: if α1 , α2 are two distinct
α1 0
n-th roots of unitysuch that α1 6= α2 , then A = 0 α2 has order n, but A is not conjugate
cos θ sin θ
to whose eigenvalues are complex conjugates of each other.
− sin θ cos θ
An = I ⇒ eigenvalues of A are n-th roots of unity. If A has repeated eigenvalues, then
these can be 1 or −1, because eigenvalues of real matrices are complex conjugates of each
other, so the repeated eigenvalues must be real, and they also must be roots of 1.
−1
1 c .
Case 1: A has eigenvalues 1, 1. There exists P non-singular such that P AP = 0 1
Now (P−1 AP)n = 10 nc = P−1An P = P−1 I2 P = I2 , so nc = 0 ⇒ c = 0. Thus A is
1
cos θ sin θ
conjugate to I2 = , θ = 2πn .
− sin θ cos θ n
Case 2: A has eigenvalues −1, −1. There exists P non-singular such that P−1 AP =
−1 c −1 n −1 nc
1 nc
n
0 −1 . Now P A P = 0 −1 or 0 1 , according as nis odd or even. But A = I2 ,
cos θ sin θ
therefore n is even and c = 0. Thus A is conjugate to −I2 = , θ = 2πm ,m =
− sin θ cos θ n
n
2
.
Case 3: A hasdistinct eigenvalues λ1 , λ2 . Then λ1 = λ2 . If λ1 = cos θ + i sin θ, with
cos θ sin θ cos θ − λ sin θ
θ = 2πm , set B = . The eigenvalues of B are roots of =
n − sin θ cos θ − sin θ cos θ − λ
0 ⇒ λ = cos θ ± i sin θ. Since A and B have the same eigenvalues λ1 , λ2 distinct, both are
λ1 0
conjugate to 0 λ2 and are therefore conjugate to each other.
7
UPSC Civil Services Main 1991 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) Let V(R) be the real vector space of all 2×3 matrices with real entries. Find
a basis of V(R). What is the dimension of V(R).
1 0 0 0 1 0 0 0 1
Solution. Let A1 = , A2 = , A3 =
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
and B1 = , B2 = , B3 = . Clearly Ai , Bi , i = 1, 2, 3 ∈
1 0 0 0 1 0 0 0 1
V(R). These generate V(R) because
a1 a2 a3
A= = a1 A1 + a2 A2 + a3 A3 + b1 B1 + b2 B2 + b3 B3
b1 b2 b3
Question 1(b) Let C be the field of complex numbers and let T be the function from C3 to
C3 defined by
1
3. What are the conditions on a, b, c so that (a, b, c) is in the null space of T? What is
the nullity of T?
Solution. T(e1 ) = (1, 2, −1), T(e2 ) = (−1, 1, −2), T(e3 ) = (2, 0, 2). Clearly T(e1 ) and
T(e3 ) are linearly independent. If
2. What is the matrix of T in the ordered basis B = {α1 , α2 } where α1 = (1, 2), α2 =
(1, −1)?
2
Solution. T(e1 ) = (0, 1) = e2 , T(e2 ) = (−1, 0) = −e1. Thus (T(e1 ), T(e2 )) =
(e1 e2 ) 01 −1 0 −1
0 . So the matrix of T in the standard basis is 1 0 .
T(α1 ) = (−2, 1), T(α2 ) = (1, 1). If (a, b) = xα1 + yα2 , then x + y = a, 2x − y = b, so
x = a+b
3
, y = 2a−b
3
. This shows that
T(α1 ) = (−2, 1) = − 13 α1 − 35 α2
T(α2 ) = (1, 1) = 23 α1 + 13 α2
1 2
−3 3
Thus (T(α1 ) T(α2 )) = (α1 α2 ) 5 1 . Consequently the matrix of T in the ordered
1 2 − 3 3
−3 3
basis B is .
− 53 31
Question 2(b)
Determine
a non-singular matrix P such that P0 AP is a diagonal matrix,
0 1 2
where A = 1
0 3. Is the matrix congruent to a diagonal matrix? Justify your answer.
2 3 0
Solution. The quadratic form associated with A is Q(x, y, z) = 2xy + 4xz + 6yz. Let
x = X, y = X + Y, z = Z (thus X = x, Y = y − x, Z = z). Then
Q(X, Y, Z) = 2X 2 + 2XY + 4XZ + 6XZ + 6Y Z
= 2X 2 + 2XY + 10XZ + 6Y Z
Y 5 2 Y 2 25 2
= 2(X + + Z) − − Z +YZ
2 2 2 2
Y 5 2 1
= 2(X + + Z) − (Y − Z)2 − 12Z 2
2 2 2
Put
Y 5 x y 5z
ξ = X+ + Z= + +
2 2 2 2 2
η = Y − Z = −x + y − z
ζ = Z=z
1 1 5 −1
1 − 21 −3
x 2 2 2
ξ ξ
y = −1 1 −1 η = 1 1 −2 η
2
z 0 0 1 ζ 0 0 1 ζ
Q(x, y, z) transforms to 2ξ 2 − 21 η 2 − 12ζ 2 . Thus
2 0 0
P0 AP = 0 − 21 0
0 0 −12
1 − 12 −3
3
Question 2(c) Reduce the matrix
1 3 4 −5
−2 −5 −10 16
5 9 33 −68
4 7 30 −78
Operations R4 − 4R3 ⇒
1 3 4 −5
0 1 −2 6
A≈
0 0 1 −7
0 0 0 0
Operation R1 − 3R2 ⇒
1 0 10 −23
0 1 −2 6
A≈
0 0 1 −7
0 0 0 0
Operations R1 − 10R3 , R2 + 2R3 ⇒
1 0 0 47
0 1 0 −8
A≈
0 0 1 −7
0 0 0 0
4
Question 3(a) U is an n-rowed unitary matrix such that |I − U| = 6 0, show that the matrix
H defined by iH = (I + U)(I − U)−1 is Hermitian. If eiα1 , . . . , eiαn are the eigenvalues of U
then cot α21 , . . . , cot α2n are eigenvalues of H.
Solution.
(iH)(I − U) = (I + U)
0 0 0
⇒ (I − U )(iH) = (I + U )
0 0 0 0
Substituting I = U U, we have from the second equation that U (U − I)(iH) = U (U + I).
0 0 0
So (iH) = −iH = −(I + U)(I − U)−1 = −iH, so H = H, thus H is Hermitian.
If an eigenvalue of a nonsingular matrix A is λ, then λ−1 is an eigenvalue of A−1 ∵ Ax =
λx ⇒ λ−1 x = A−1 x, note that λ 6= 0 ∵ |A| =6 0. Thus the eigenvalues of H are
1 1 + eiαj
,1 ≤ j ≤ n
i 1 − eiαj
eiαj /2 + e−iαj /2
= −i −iαj /2 ,1 ≤ j ≤ n
e − eiαj /2
eiαj /2 +e−iαj /2
2
= ,1 ≤ j ≤ n
e−iαj /2 −eiαj /2
2i
cot αj
= ,1 ≤ j ≤ n
2
5
UPSC Civil Services Main 1992 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) Let U and V be vector spaces over a field K and let V be of finite dimension.
Let T : V −→ U be a linear transformation, prove that dim V = dim T(V) + dim nullity T.
Solution.
1
1.
2. T(2(1, 1)) = T(2, 2) = (4, 2, 2) 6= 2T(1, 1) = 2(1, 1, 1) Thus T is not a linear transfor-
mation.
3.
4. T(2(0, 0)) = T(0, 0) = (1, −1) 6= 2T(0, 0) Thus T is not a linear transformation.
Question 2(a) Let T : M2,1 −→ M2,3 be a linear transformation defined by (with the usual
notation)
1 2 1 3 1 6 1 0
T = ,T =
0 4 1 5 1 0 0 2
x
Find T .
y
Solution.
x 1 1 1
= x −y +y
y 0 0 1
x 2 1 3 6 1 0 2x + 4y x 3x − 3y
T = (x − y) +y =
y 4 1 5 0 0 2 4x − 4y x − y 5x − 3y
2
Question 2(b) For what values of η do the following equations
x+y+z = 1
x + 2y + 4z = η
x + 4y + 10z = η 2
Question 2(c) Prove that a necessary and sufficient condition of a real quadratic form
x0 Ax to be positive definite is that the leading principal minors of A are all positive.
Solution. Let all the principal minors be positive. We have to prove that the quadratic
form is positive definite. We prove the result by induction.
If n = 1, then a11 x2 > 0 ⇔ a11 > 0. Suppose as induction hypothesis the result is true
B B1
for n = m. Let S = B01 k be a matrix of a quadratic form in m + 1 variables, where
B is m × m, B1 is m × 1 and k is a single element. Since all principle minors of B are
leading principal minors of S, and are hence positive, the induction hypothesis gives that B
is positive definite. This means that there exists a non-singular m × m matrix P such that
P0 BP = Im (We shall prove this presently). Let C be an m-rowed column to be determined
soon. Then
0
P0 BP P0 BC + P0 B1
P 0 B B1 P C
=
C0 1 B0 1 k 0 1 C0 B0 P + B01 P C0 BC + C0 B1 + B0 1 C + k
3
The condition is necessary - Since x0 Ax is positive definite, there is a non-singular matrix
P such that P0 AP = I ⇒ |A||P|2 = 1 ⇒ |A| > 0.
Let 1 ≤ r < n. Let xr+1 = . . . = xn = 0, then we obtain a quadratic form in r variables
which is positive definite. Clearly the determinant of this quadratic form is the r×r principal
minor of A which shows the result.
Proof of the result used: Let A be positive definite, then there exists a non-singular P
such that P0 AP = I.
We will prove this by induction. If n = 1, then the form corresponding to A is a11 x2 and
√
a11 > 0, so that P = ( a11 ).
Take
1 −a−1 11 a12 0 ... 0
0
P1 = ..
. (n − 1) × (n − 1)
0
then
a11 0 a13 ... a1n
0
P01 AP1 = a13
..
. (n − 1) × (n − 1)
a1n
Repeating this process, we get a non-singular Q such that
a11 0 ... 0
Q0 AQ = ...
(n − 1) × (n − 1)
0
Given the (n − 1) × (n − 1) matrix on the lower right, we get by induction P∗ s.t. P∗ 0 ((n −
1) × (n − 1) matrix)P∗ is diagonal. Thus ∃P, |P| 6= 0, P0 AP = [α1 , . . . , αn ] say. Take R =
√ √
diagonal[ α1 , . . . , αn ], then R0 P0 APR = In .
2 1
Question 3(a) State the Cayley-Hamilton theorem and use it to find the inverse of 4 3 .
2−λ 1
= λ2 − 5λ + 2 = 0
4 3−λ
4
By the Cayley-Hamilton theorem, A2 − 5A + 2I = 0, so A(A − 5I) = −2I, thus A−1 =
− 12 (A − 5I). Thus
3
− 21
−1 1 2 1 5 0
A =− − = 2
2 4 3 0 5 −2 1
1 − 8λ 1 + 2λ
Let 0 = |A − λB| = = −5λ + 40λ2 − 4λ2 − 4λ − 1
1 + 2λ −5λ
√
Thus 36λ2 − 9λ − 1 = 0, so λ = 9± 81+144 72
= 13 , − 12
1
.
x1 1 5 5
Let (x1 , x2 ) be the vector such that (A − λB) x2 = 0 with λ = 3 . Thus − 3 x1 + 3 x2 =
0 ⇒ x1 = x2 . We take x1 = 11 so that (A − λB)x1 = 0 with λ = 31 . Similarly, if (x1 , x2 ) is
x02 Ax2 = ( 1 −2 ) 11 10 −2 = ( 1 −2 ) −1
1
1 = −3
x01 Ax2 = ( 1 1 ) 11 10 −2 = ( 1 1 ) −1
1
1 =0
If P = (x1 x2 ), then P0 AP = 30 −3
0
, thus x2 + 2xy ≈ 3X 2 − 3Y 2 by P = 11 −2 1
.
Similarly
x01 Bx1 = ( 1 1 ) −2 8 −2
1 6
5 1 = (1 1) 3 = 9
x02 Bx2 = ( 1 −2 ) −2 8 −2
1 12
= ( 1 −2 ) −12 = 36
0 8 −2
5
1−2 12
x1 Bx2 = ( 1 1 ) −2 5 −2 = ( 1 1 ) −12 = 0
Question 3(c) Prove that the characteristic roots of a Hermitian matrix are all real, and
the characteristic roots of a skew Hermitian matrix are all zero or pure imaginary.
Solution. For Hermitian matrices, see question 2(c), year 1995.
0 0
If H is skew-Hermitian, then iH is Hermitian, because (iH) = iH = −iH = iH as
0
H = −H . Thus the eigenvalues of iH are real. Therefore the eigenvalues of H are −ix
where x ∈ R. So they must be 0 (if x = 0) or pure imaginary.
5
UPSC Civil Services Main 1993 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) Show that the set S = {(1, 0, 0), (1, 1, 0), (1, 1, 1), (0, 1, 0)} spans the vector
space R3 but is not a basis set.
Solution. The vectors (1, 0, 0), (0, 1, 0), (1, 1, 1) are linearly independent, because α(1, 0, 0)+
β(1, 1, 0) + γ(1, 1, 1) = 0 ⇒ α + γ = 0, β + γ = 0, γ = 0 ⇒ α = β = γ = 0.
Thus (1, 0, 0), (1, 1, 0), (1, 1, 1) is a basis of R3 , as dimR R3 = 3.
Any set containng a basis spans the space, so S spans R3 , but it is not a basis because
the four vectors are not linearly independent, in fact (1, 1, 0) = (1, 0, 0) + (0, 1, 0).
Question 1(b) Define rank and nullity of a linear transformation. If V is a finite dimen-
sional vector space and T is a linear operator on V such that rank T2 = rank T, then prove
that the null space of T is equal to the null space of T2 , and the intersection of the range
space and null space of T is the zero subspace of V.
Solution. The dimension of the image space T(V) is called rank of T. The dimension of
the vector space kernel of T = {v | T(v) = 0} is called the nullity of T.
Now v ∈ null space of T ⇒ T(v) = 0 ⇒ T2 (v) = 0 ⇒ v ∈ null space of T2 . Thus null
space of T ⊆ null space of T2 . But we are given that rank T = rank T2 , so therefore nullity
of T = nullity of T2 , because of the nullity theorem — rank T + nullity T = dim V. Thus
null space of T = null space of T2 .
Finally if v ∈ range of T, and v ∈ null space of T, then v = T(w) for some w ∈ V. Now
1
2
Question 1(c) If the matrix of a linear operator T on R relative to the standard basis
1 1
{(1, 0), (0, 1)} is 1 1 , find the matrix of T relative to the basis B = {(1, 1), (−1, 1)}.
Solution. Let v1 = (1, 1), v2 = (−1, 1). Then T(v1 ) = (11) 11 11 = (2, 2) = 2v1 .
T(v2 ) = (−11) 11 11 = (0, 0) = 0. So (T(v1 ), T(v2 ) = (v1 ) v2 ) 20 00 , so the matrix of T
relative to the basis B is 20 00 .
A−1
A 0 0
Question 2(a) Prove that the inverse of is where A, C are
B C −C−1 BA−1 C−1
1 0 0 0
1 1 0 0
nonsingular matrices. Hence find the inverse of
1 1 1 0.
1 1 1 1
Solution.
A−1
A 0 0 I 0
−1 = = Identity matrix.
B C −C−1 BA C−1 BA−1 − BA−1 I
A−1
0 A 0 I 0
−1 −1 −1 = −1 −1 = Identity matrix, which shows
−C BA C B C −C B + C B I
the result.
B = 11 11 . Then A−1 = C−1 = −1 and C−1 BA−1 =
1 0
1 0
Let A = C = 1 1 and
1
1 0 1 1 1 0 1 0 0 1 = 0 1 Thus
−1 1 1 1 −1 1 = −1 1 0 1 0 0
−1
1 0 0 0 1 0 0 0
1 1 0 0
−1 1 0 0
1 =
0 −1 1
1 1 0 0
1 1 1 1 0 0 −1 1
Question 2(b) If A is an orthogonal matrix with the property that −1 is not an eigenvalue,
then show that A = (I − S)(I + S)−1 for some skew symmetric matrix S.
2
Now
S0 = (I − A)0 ((I + A)−1 )0
= (I − A0 )(I + A0 )−1
= (AA0 − A0 )(A0 A + A0 )−1
−1
= (A − I)A0 A0 (A + I)−1
= −(I − A)(I + A)−1
= −(I + A)−1 (I − A)
= −S
Thus S is skew symmetric, so A = (I − S)(I + S)−1 where S = (I + A)−1 (I − A)
Question 2(c) Show that any two eigenvectors corresponding to distinct eigenvalues of (i)
Hermitian matrices (ii) unitary matrices are orthogonal.
Solution. We first prove that the eigenvalues of a Hermitian matrix, and therefore of a
symmetric matrix, are real.
Let H be Hermitian, and λ be one of its eigenvalues. Let x 6= 0 be an eigenvector
0 0
corresponding to λ. Thus Hx = λx, so x0 Hx = x0 λx. But (x0 Hx) = (x0 Hx)0 = x0 H x =
0
x0 Hx, because H = H. Note that (x0 Hx)0 = x0 Hx, since it is a single element, therefore
0
x0 Hx is real. Similarly x0 x 6= 0 is real, so λ = xxHx 0 x is real.
Question 3(a) A matrix B of order n is of the form λA, where λ is a scalar and A has 1
everywhere except the diagonal, which has µ. Find λ, µ so that B may be orthogonal.
µ 1 ... 1
1 µ . . . 1
Solution. A = . . .
. B = λA. Thus
...
1 1 ... µ
λµ λ . . . λ λµ λ . . . λ
λ λµ . . . λ λ λµ . . . λ
B0 B = . . .
= BB0 = B2
... . . . ...
λ λ . . . λµ λ λ . . . λµ
1
We used here the fact that all eigenvalues of a unitary matrix have modulus 1. If Ux = λx, then
0 0
x U = λx0 . Thus x0 U Ux = λλx0 x, so x0 x = λλx0 x. Now x0 x 6= 0, so λλ = 1.
0
3
Clearly each diagonal element of BB0 is λ2 µ2 + (n − 1)λ2 , and each nondiagonal element is
2λ2 µ + (n − 2)λ2 . Thus B will be orthogonal if 2λ2 µ + (n − 2)λ2 = 0, λ2 µ2 + (n − 1)λ2 = 1.
Since λ 6= 0, µ = 2−n
2
= 1 − n2 , and λ2 = (1− n )12 +n−1 = 1
n2
= n42 , thus λ = ± n2 .
2 1−n+ 4
+n−1
Solution.
1 0 0 0
1 −1 3 6 1 0 0 0 1 0 0
A = 1 3 −3 −4 = 0 1 0 A
0
0 1 0
5 3 3 11 0 0 1
0 0 0 1
Operation C2 + C1 , C3 − 3C1 , C4 − 6C1 ⇒
1 1 −3 −6
1 0 0 0 1 0 0 0
1 4 −6 −10 = 0 1 0 A 1 0 0
0 0 1 0
5 8 −12 −19 0 0 1
0 0 0 1
Operation R2 − R1 ⇒
1 1 −3 −6
1 0 0 0 1 0 0 0
0 4 −6 −10 = −1 1 0 A 1 0 0
0 0 1 0
5 8 −12 −19 0 0 1
0 0 0 1
Operation R3 − 2R2 ⇒
1
1 −3 −6
1 0 0 0 1 0 0 0
0 4 −6 −10 = −1 1 0 A 1 0 0
0 0 1 0
5 0 0 1 2 −2 1
0 0 0 1
4
R3 − 5R1 ⇒
1 1 −6 −3
1 0 0 0 1 0 0 0
0 4 −10 −6 = −1 1 0 A 1 0 0
0 0 0 1
0 0 1 0 −3 −2 1
0 0 1 0
Operation 41 R2 ⇒
1 1 −6 −3
1 0 0 0 1 0 0
0 1 − 5
0 1 0 0
2
− 32 = − 14 41 0 A
0
0 0 1
0 0 1 0 −3 −2 1
0 0 1 0
Operation C3 + 52 C2 , C4 + 32 C2 ⇒
1 − 27 − 23
1
1 0 0 0 1 0 0
0 1 0 0 = − 1 1 0 A
0 1 52 3
2
4 4 0 0 0 1
0 0 1 0 −3 −2 1
0 0 1 0
1 0 0 0 1 0 0
Thus the normal form of A is 0 1 0 0 so rank A = 3. P = − 14 14 0 and
0 0 1 0 −3 −2 1
7 3
1 1 −2 −2
0 1 5 3
Q= 2 2 and PAQ is the normal form.
0 0 0 1
0 0 1 0
Solution. Completing the squares of the given form (say Q(x1 , x2 , x3 )):
1 3
Q(x1 , x2 , x3 ) = 2(x1 + x2 − x3 )2 + x22 + x23 − 2x2 x3
2 2
1 1
= 2(x1 + x2 − x3 )2 + (x3 − x2 )2 + x22
2 2
Thus Q can be written as the sum of 3 squares with positive coefficients, so it is positive
definite.
5
UPSC Civil Services Main 1994 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) Show that f1 (t) = 1, f2 (t) = t − 2, f3 (t) = (t − 2)2 forms a basis of P3 =
{Space of polynomials of degree ≤ 2}. Express 3t2 −5t+4 as a linear combination of f1 , f2 , f3 .
T(a, b, c, d) = (a − b + c + d, a + 2c − d, a + b + 3c − 3d), a, b, c, d ∈ R
Solution. Let
T(1, 0, 0, 0) = (1, 1, 1) = v1
T(0, 1, 0, 0) = (−1, 0, 1) = v2
T(0, 0, 1, 0) = (1, 2, 3) = v3
T(0, 0, 0, 1) = (1, −1, −3) = v4
1
v4 is dependent on v1 , v2 , because if v4 = αv1 + βv2 , then α − β = 1, α = −1, α + β =
−3 ⇒ α = −1, β = −2 ∴ v4 = −v1 − 2v2 .
Thus v1 , v2 is a basis of T(R4 ), so rank T = 2.
Now (a, b, c, d) ∈ ker T ⇔ a − b + c + d = 0, a + 2c − d = 0, a + b + 3c − 3d = 0
Choosing particular values of a, b, c, d, we see that (1, 2, 0, 1), (−1, 1, 1, 1) ∈ ker T and are
linearly independent, so dim ker T ≥ 2. But (1, 2, 0, 1), (−1, 1, 1, 1) generate ker T, because
if (a, b, c, d) ∈ ker T, and (a, b, c, d) = α(1, 2, 0, 1) + β(−1, 1, 1, 1), then α − β = a, 2α + β =
b, β = c, α + β = d, so α = a + c, β = c and these satisfy the remaining equations 2α + β =
b, α + β = d, because (a, b, c, d) ∈ ker T and therefore a − b + c + d = 0, a + 2c − d = 0. Thus
(a, b, c, d) = (a + c)(1, 2, 0, 1) + c(−1, 1, 1, 1), so dim ker T = nullity T = 2
Hence rank T + nullity T = 4 = dim(R4 ), as required.
Question 1(c) If T is an operator on R3 whose basis is B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
such that
0 1 1
[T : B] = 1 0 −1
−1 −1 0
find a matrix of T w.r.t. a basis B1 = {(0, 1, −1), (1, −1, 1), (−1, 1, 0)}.
Solution. The basis B is the standard basis, hence the representation of B1 in this basis is
as given. (Note that if B were some other basis, we would write B1 in that basis, and then
continue as below.) Let v1 = (0, 1, −1), v2 = (1, −1, 1), v3 = (−1, 1, 0). Then
Thus
1 0 0
[T : B1 ] = 0 0 0
0 0 −1
Note: The main idea behind the above solution is to express T(vi ) = ni=0 αii vi . Now
P
we solve for αii to get the matrix for T in the new basis.
An alternative is to compute P−1 [T : B]P, where P is given by [v1 , . . . , vn ] = [e1 , . . . , en ]P
or P = [v1 0 , . . . , vn 0 ]. Show that this is true.
Question 2(a) If A = haij i is an n × n matrix such that aii = n, aij = r if i 6= j, show that
[A − (n − r)I][A − (n − r + nr)I] = 0
Hence find the inverse of the n × n matrix B = hbij i where bii = 1, bij = ρ, i 6= j and
1
ρ 6= 1, ρ 6= 1−n .
2
Solution. Let C = A − (n − r)I, then every entry of C is r. Let D = A − (n − r + nr)I =
C − nrI. Thus CD = C2 − nrC. Each entry of C2 is nr2 , which is the same as each entry
of nrC, so CD = 0 as required.
The given equation implies
Let A = nB, where r = ρn. Thus A satisfies the conditions for the equation to hold, so
substituting A and r in the above equation
Question 2(b) Prove that the eigenvectors corresponding to distinct eigenvalues of a square
matrix are linearly independent.
0 = L1 (a1 x1 + . . . + ar xr )
= a1 L 1 x 1
= a1 (λ1 − λ2 ) . . . (λ1 − λr )x1
λ1 − λi 6= 0, 2 ≤ i ≤ r, and Q
x1 6= 0 so a1 = 0.
Similarly taking Li = rj=1 (A − λj I), we show that ai = 0 for 1 ≤ i ≤ r. Thus
i6=j
x1 , x2 , . . . , xr are linearly independent.
3
Solution. The characteristic polynomial of A is |λI − A| = (λ − 3)(λ − 2)(λ − 5).1 Thus
the eigenvalues of A are 3, 2, 5.
If x = (x1 , x2 , x3 ) is an eigenvector corresponding to λ = 3 then
0 1 4 x1
(A − 3I)x = 0 −1 6
x2 = 0
0 0 2 x3
Thus x1 + x2 + 4x3 = 0, 6x3 = 0, 3x3 = 0, take x1 = 1 to get (1, −1, 0) as an eigenvector for
λ = 2. All the eigenvectors are (x1 , −x1 , 0), x1 6= 0.
If x = (x1 , x2 , x3 ) is an eigenvector corresponding to λ = 5 then
−2 1 4 x1
(A − 5I)x = 0 −3 6 x2 = 0
0 0 0 x3
Thus −2x1 + x2 + 4x3 = 0, −3x2 + 6x3 = 0, take x3 = 1 to get (3, 2, 1) as an eigenvector for
λ = 5. All eigenvectors are (3x3 , 2x3 , x3 ), x3 6= 0.
Question 3(a) Show that the matrix congruent to a skew symmetric matrix is skew sym-
metric. Use the result to prove that the determinant of a skew symmetric matrix of even
order is the square of a rational function of its elements.
4
column 1 and column j, we get −aij in the symmetric position. Now by multiplying the new
matrix by suitable elementary matrices on the left and right, we get
0 aij ∗ ∗ ... ∗
−aij 0 ∗ ∗ ... ∗
0
P AP = ∗ ∗
... A2m−2
∗ ∗
Now we can find P∗ a product of elementary matrices such that
0 aij 0 0 ... 0
−aij 0 0 0 ... 0
0
P∗ P0 APP∗ =
0 0
... A2m−2
0 0
Thus det A = determinant of a skew symmetric matrix of order 2 × determinant of a
skew symmetric matrix of order 2(m − 1). The induction hypothesis now gives the result.
c −b a0
0
−c 0 a b0
A= b −a 0 c0
5
If c0 = 0,
0 c −b a0 a
c 0 a b0 a
a|A| =
b −a 0 0
0 0
−a a −b a 0 0
Adding −b0 R3 to R4 , we see that the fourth row has all 0’s, hence rank A = 2 as before.
Alternate solution:
c −b a0
0
−c 0 a b0
−a|A| = b −a 0 c0
Question 3(c) Reduce the following symmetric matrix to a diagonal form and interpret the
results in terms of quadratic forms.
3 2 −1
A= 2 2 3
−1 3 1
Solution.
x
(x y z)A y
z
= 3x2 + 2y 2 + z 2 + 4xy − 2xz + 6yz
2 1 2 2 22
= 3(x + y − z)2 + y 2 + z 2 + yz
3 3 3 3 3
2 1 2 2 11 2 117 2
= 3(x + y − z) + (y + z) − z
3 3 3 2 6
2 117 2
= 3X 2 + Y 2 − Z
3 6
where X = x − 23 y − 13 z, Y = y + 112
z, Z = z. This implies z = Z, y = Y − 11 2
Z, x =
2 11 1 2
X + 3 (Y − 2 Z) − 3Z = X + 3 Y − 4Z.
1 32 −4
3 0 0
Then if P = 0 1 − 11 2
, we have P0 AP = 0 2
3
0 .
0 0 1 0 0 − 117 6
The quadratic form associated with A is indefinite as it takes both positive and negative
values. Note that x0 Ax and x0 P0 APx0 take the same values.
6
UPSC Civil Services Main 1995 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
1 Linear Algebra
Question 1(a) Let T(x1 , x2 , x3 ) = (3x1 + x3 , −2x1 + x2 , −x1 + 2x2 + 4x3 ) be a linear trans-
formation on R3 . What is the matrix of T w.r.t. the standard basis? What is a basis of the
range space of T? What is a basis of the null space of T?
Solution.
Clearly T(e2 ), T(e3 ) are linearly independent. If (3, −2, −1) = α(0, 1, 2) + β(1, 0, 4), then
β = 3, α = −2, but 2α + 4β 6= −1, so T(e1 ), T(e2 ), T(e3 ) are linearly independent. Thus
(3, −2, −1), (0, 1, 2), (1, 0, 4) is a basis of the range space of T.
Note that T(x1 , x2 , x3 ) = 0 ⇔ x1 = x2 = x3 = 0, so the null space of T is {0}, and the
empty set is a basis. Note that the matrix of T is nonsingular, so T(e1 ), T(e2 ), T(e3 ) are
linearly independent.
Question 1(b) Let A be a square matrix of order n. Prove that Ax = b has a solution
⇔ b ∈ Rn is orthogonal to all solutions y of the system A0 y = 0.
1
Solution. If x is a solution of Ax = b and y is a solution of A0 y = 0, then b0 y = x0 A0 y = 0,
thus b is orthogonal to y.
Conversely, suppose b0 y = 0 for all y ∈ Rn which is a solution of A0 y = 0. Let
W = A(Rn ) = the range space of A, and W ⊥ its orthogonal complement. If A0 y = 0 then
x0 A0 y = 0 ⇒ (Ax)0 y = 0 for every x ∈ Rn ⇒ y ∈ W ⊥ . Conversely y ∈ W ⊥ ⇒ ∀x ∈
Rn .(Ax)0 y = 0 ⇒ x0 A0 y = 0 ⇒ A0 y = 0. Thus W ⊥ = {y | A0 y = 0}. Now b0 y = 0 for all
y ∈ W ⊥ , so b ∈ W ⇒ b = Ax for some x ∈ Rn ⇒ Ax = b is solvable.
Question 1(c) Define a similar matrix and prove that two similar matrices have the same
characteristic equation. Write down a matrix having 1, 2, 3 as eigenvalues. Is such a matrix
unique?
Solution. Two matrices A, B are said to be similar if there exists a matrix P such that
B = P−1 AP. If A, B are similar, say B = P−1 AP, then characteristic polynomial of B is
|λI − B| = |λI − P−1 AP| = |P−1 λIP − P−1 AP| = |P−1 ||λI − A||P| = |λI − A|. (Note that
|X||Y| = |XY|.) Thus the characteristic polynomial of B is the same as that of A.
1 0 0
Clearly the matrix A = 0 2 0 has eigenvalues 1,2,3. Such a matrix is not unique, for
0 0 3
1 1 0
example B = 0 2 0 has the same eigenvalues, but B 6= A.
0 0 3
2
Thus x1 = x3 , x3 = −3x2 , so (−3, 1, −3) is an eigenvector for λ = 1.
If (x1 , x2 , x3 ) is an eigenvector for λ = 2, then
3 −6 −6 x1
−1 2 2 x2 = 0
3 −6 −6 x3
⇒ 3x1 − 6x2 − 6x3 = 0
−x1 + 2x2 + 2x3 = 0
3x1 − 6x2 − 6x3 = 0
Note: Another way of computing A5 is given below. This uses the characteristic poly-
nomial of A : A3 = 5A2 − 8A + 4I and not the diagonal form, so it will not be permissible
here.
3
A5 = A2 (5A2 − 8A + 4I)
= 5A(5A2 − 8A + 4I) − 8(5A2 − 8A + 4I) + 4A2
= 25(5A2 − 8A + 4I) − 76A2 + 84A − 32I
= 49A2 − 116A + 68I
Solution.
(I + B(I − AB)−1 A)(I − BA)
= I − BA + B(I − AB)−1 A − B(I − AB)−1 ABA
= [I + B(I − AB)−1 A] − B[I + (I − AB)−1 AB]A (1)
−1 −1 −1
Now (I − AB) (I − AB) = (I − AB) − (I − AB) AB = I
∴ (I − AB)−1 = I + (I − AB)−1 AB
Substituting in (1) (I + B(I − AB)−1 A)(I − BA)
= I + B(I − AB)−1 A − B(I − AB)−1 A = I
4
2. Let λ 6= 0 be an eigenvalue of AB and let x 6= 0 be an eigenvector corresponding to
λ, i.e. ABx = λx. Let y = Bx. Then y 6= 0, because Ay = ABx = λx 6= 0 as λ 6= 0.
Now BAy = BABx = B(ABx) = λBx = λy. Thus λ is an eigenvalue of BA.
Question 2(c) Let a, b ∈ C, |b| = 1 and let H be a Hermitian matrix. Show that the
eigenvalues of aI + bH lie on a straight line in the complex plane.
For the sake of completeness, we prove that the eigenvalues of a Hermitian matrix H are
real. Let z 6= 0 be an eigenvector corresponding to the eigenvalue t.
Hz = tz
⇒ z0 Hz = tz0 z
0
⇒ z0 Hz = tz0 z
0
But z0 Hz = z0 H z = z0 Hz = tz0 z
⇒ tz0 z = tz0 z
⇒t = t ∵ z0 z 6= 0
Question 3(a) Let A be a symmetric matrix. Show that A is positive definite if and only
if its eigenvalues are all positive.
symmetric, it follows that P1 −1 AP1 = P01 AP1 = λ01 B0 . Induction now gives that there
λ2 0 . . . 0
exists an (n − 1) × (n − 1) orthogonal matrix Q such that Q0 BQ = . . .
0 0 . . . λn
5
where λ2 , λ3 , . . . , λn are eigenvalues of B. Let P2 = 01 Q 0
, then P2 is orthogonal and
P02 P01 AP1 P2 = diagonal[λ 1 , . . . , λ n ]. Set P = P P
1 2 . . . P 0
n and (y1 , . . . , yn )P = x then
,
n
x0 Ax = y0 P0 APy = i=0 λ2i yi2 .
P
0
Pn 2 2
Since P is non-singular, quadratic forms x Ax and i=0 λi yi assume the same values.
Hence A is positive definite if and only if ni=0 λ2i yi2 is positive definite if and only if λi > 0
P
for all i. p
Result used: If x1 is a real vector such that ||x1 || = x01 x1 = 1 then there exists an
orthogonal matrix with x1 as its first column.
Proof: We have to find real column vectors x2 , . . . , xn such that ||xi || = 1, 2 ≤ i ≤ n
and x2 , . . . , xn is an orthonormal system i.e. x0i xj = 0, i 6= j. Consider the single equation
x01 x = 0, where x is a column vector to be determined. This equation has a non-zero solution,
in fact the space of solutions is of dimension n − 1, the rank of the coefficient matrix being
1. If y2 is a solution, we take x2 = ||yy22 || so that x01 x2 = 0.
We now consider the two equations x01 x = 0, x02 x = 0. Again the number of unknowns
is more than the number of equations, so there is a solution, say y3 , and take x3 = ||yy33 || to
get x1 , x2 , x3 mutually orthogonal.
Proceeding in this manner, if we consider n − 1 equations x01 x = 0, . . . , x0n−1 x = 0, these
will have a nonzero solution yn , so we set xn = ||yynn || . Clearly x1 , x2 , . . . , xn is an orthonormal
system, and therefore P = [x1 , . . . , xn ] is an orthogonal matrix having x1 as a first column.
Question 3(b) Let A and B be square matrices of order n, show that AB − BA can never
be equal to the identity matrix.
Thus tr(AB − BA) = tr AB − tr BA = 0. But the trace of the identity matrix is n, thus
AB − BA can never be equal to the identity matrix.
n
X
Question 3(c) Let A = haij i, 1 ≤ i, j ≤ n. If |aij | < |aii |, then the eigenvalues of A lie
j=1
i6=j
in the disc n
X
|λ − aii | ≤ |aij |
j=1
i6=j
6
Solution. See the solution to question 2(c), year 1997. We showed that if |λ − aii | >
n
X
|aij | then |λI − A| =
6 0, so λ is not an eigenvalue of A. Thus if λ is an eigenvalue, then
j=1
i6=j
n
X
|λ − aii | ≤ |aij |, so λ lies in the disc described in the question.
j=1
i6=j
7
UPSC Civil Services Main 1996 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) In R4 let W1 be the space generated by {(1, 1, 0, −1), (2, 4, 6, 0)} and let W2
be space generated by {(−1, −2, −2, 2), (4, 6, 4, −6), (1, 3, 4, −3)}. Find a basis for the space
W1 + W2 .
Solution. Let v1 = (1, 1, 0, −1), v2 = (2, 4, 6, 0), v3 = (−1, −2, −2, 2), v4 = (4, 6, 4, −6), v5 =
(1, 3, 4, −3). Since w ∈ W1 + W2 can be written as w = w1 + w2 , and w1 = α1 v1 + α2 v2
and w2 = α3 v3 + α4 v4 + α5 v5 , it follows that w is a linear combination of vi ⇒ W1 + W2
is generated by {vi , 1 ≤ i ≤ 5}. Thus a maximal independent subset of {vi , 1 ≤ i ≤ 5} will
be a basis of W1 + W2 .
Clearly v1 and v2 are linearly independent. If possible, let v3 = λ1 v1 + λ2 v2 , then the
four equations
λ1 + 2λ2 = −1
λ1 + 4λ2 = −2
0λ1 + 6λ2 = −2
−λ1 + 0λ2 = 2
should be consistent and provide us λ1 , λ2 . Clearly the third and fourth equations give us
λ1 = −2, λ2 = − 31 which do not satisfy the first two equations. Thus v1 , v2 , v3 are linearly
independent.
If possible let v4 = λ1 v1 + λ2 v2 + λ3 v3 . Then
λ1 + 2λ2 − λ3 =4 (1a)
λ1 + 4λ2 − 2λ3 =6 (1b)
0λ1 + 6λ2 − 2λ3 =4 (1c)
−λ1 + 0λ2 + 2λ3 = −6 (1d)
1
Adding (1b) and (1d) we get 4λ2 = 0, so λ2 = 0. Solving (1a) and (1b) we get λ3 = −2, λ1 =
2. These values satisfy all the four equations, so v4 = 2v1 − 2v3 .
If possible let v5 = λ1 v1 + λ2 v2 + λ3 v3 . Then
λ1 + 2λ2 − λ3 =1 (2a)
λ1 + 4λ2 − 2λ3 =3 (2b)
0λ1 + 6λ2 − 2λ3 =4 (2c)
−λ1 + 0λ2 + 2λ3 = −3 (2d)
Adding (2b) and (2d) we get 4λ2 = 0, so λ2 = 0. (2c) then gives us λ3 = −2, and
(2a) now gives λ1 = −1, which satisfies all equations. Thus v5 = −v1 − 2v3 . Hence
{(1, 1, 0, −1), (2, 4, 6, 0), (−1, −2, −2, 2)} is a basis of W1 + W2 .
Question 1(b) Let V be a finite dimensional vector space and v ∈ V, v 6= 0. Show that
there exists a linear functional f on V such that f (v) 6= 0.
Let
w1 = v1 + v 2 + v 3
w2 = v1 − v2
w3 = v2 − v3
⇒ T(w1 ) = 3w1 , T(w2 ) = T(w3 ) = 0
We now show that w1 , w2 , w3 is a basis for V, i.e. these are linearly independent.
2
Let αw1 + βw2 + γw3 = 0, then (α + β)v1 + (α − β + γ)v2 + (α − γ)v3 = 0. But
v1 , v2 , v3 are linearly independent, therefore α + β = 0, α − β + γ = 0, α − γ = 0 ⇒ α =
β = γ = 0 ⇒ w1 , w2 , w3 are linearly independent.
The matrix of T w.r.t. the basis w1 , w2 , w3 is clearly B. Note that the choice of
w1 , w2 , w3 is suggested by the shape of B.
6 0 then B = P−1 AP, so A and B are similar.
If (w1 , w2 , w3 ) = (v1 , v2 , v3 )P, |P| =
T(x, y, z) = (x + z, −2x + y, −x + 2y + z)
What is the matrix of T w.r.t. the basis (1, 0, 1), (−1, 1, 1), (0, 1, 1)? Using this matrix write
down the matrix of T with respect to the basis (0, 1, 2), (−1, 1, 1), (0, 1, 1).
Solution. Let v1 = (1, 0, 1), v2 = (−1, 1, 1), v3 = (0, 1, 1). T(x, y, z) = (x+z, −2x+y, −x+
2y+z) = αv1 +βv2 +γv3 , say. This means α−β = x+z, β+γ = −2x+y, α+β+γ = −x+2y+
z. This implies α = x + y + z, β = y, γ = −2x. Thus T(x, y, z) = (x + y + z)v1 + yv2 − 2xv3 .
Hence
2 1 2
[T(v1 ) T(v2 ) T(v3 )] = [v1 v2 v3 ] 0 1 1
−2 2 0
Let w1 = (0, 1, 2), w2 = (−1, 1, 1), w3 = (0, 1, 1). Then
1 0 0
[w1 w2 w3 ] = [v1 v2 v3 ] 1 1 0
0 0 1
Hence
where
2 1 2 1 0 0
A = 0 1 1 , P = 1 1 0
−2 2 0 0 0 1
Thus the matrix of T w.r.t. basis w1 , w2 , w3 is
1 0 0 2 1 2 1 0 0 3 1 2
−1
P AP = −1 1 0 0 1 1 1 1 0 = −2 0 −1
0 0 1 −2 2 0 0 0 1 0 2 0
3
Question 2(b) Let V and W be finite dimensional vector spaces such that dim V ≥ dim W.
Show that there is always a linear map of V onto W.
Solution. Let w1 , w2 , . . . , wm be a basis of W, and v1 , v2 , . . . , vn be a basis of V, n ≥ m.
Define
T(vi ) = wi , i = 1, 2, . . . , m
T(vi ) = 0, i = m + 1, . . . , n
and for any v ∈ V, v = i=1 αi vi , T(v) = m
Pn P
i=1 αi T(vi ). Pm
Clearly
Pm T : V
Pm−→ W is linear. T is onto, since if w ∈ W, w = i=1 ai wi , then
T( i=1 ai vi ) = i=1 ai T(vi ) = w, proving the result.
4
Solution. The characteristic polynomial of A is
−λ 1 0 0
0 −λ 1 0
|A − λI| =
0 0 −λ 1
1 0 0 −λ
= −λ[−λ3 ] − 1[1] = λ4 − 1 = 0
Question 3(b) If A and B are n × n matrices such that AB = BA, show that AB and
BA have a common characteristic vector.
Solution. Before solving this particular problem, we present a general discussion about
orthogonal matrices. An orthogonal matrix satisfies O0 O = I, so its determinant is 1 or -1,
here we focus on the case where |O| = 1. If λ is an eigenvalue of O and x a corresponding
eigenvector, then |λ|2 x0 x = (Ox)0 Ox = x0 O0 Ox = x0 x, so |λ| = 1. Since the characteristic
5
polynomial has real coefficients, the eigenvalues must be real or in complex conjugate pairs.
Thus for a matrix of order 3, at least one eigenvalue is real, and must be 1 or -1. Since
|O| = 1, one real value must be 1, and the three possibilities are {1, 1, 1}, {1, −1, −1} and
{1, eiθ , e−iθ }. √
Here we consider the third case, as the given matrix has 1 and 13 ± i 2 3 2 as eigenvalues,
proved later.
Let Z = X1 + iX2 be an eigenvector corresponding to the eigenvalue eiθ . Let X3 be
the eigenvector corresponding to the eigenvalue 1. Since Z and X3 correspond to different
eigenvalues, these are orthogonal, i.e. Z0 X3 = (X01 + iX02 )X3 = 0 ⇒ X01 X3 = 0, X02 X3 = 0.
Note that X1 , X2 , X3 are real vectors. Since OZ = eiθ Z = (cos θ + i sin θ)(X1 + iX2 ).
Equating real and imaginary parts we get
(Note that sin θ 6= 0 since we are considering the case where eiθ is complex.) Similarly
Multiplying (1) by sin θ and (2) by cos θ and adding, we get X01 X1 − X02 X2 = 0 or X01 X1 =
X02 X2 , so from (2), X1 X2 = 0, i.e. X1 , X2 are orthogonal.
Thus X1 , X2 , X3 are mutually orthogonal. We can assume that X01 X1 = X02 X2 = 1,
replacing Z by λZ, λ ∈ R if necessary. Similarly we can take X03 X3 = 1. Let P = [X1 X2 X3 ]
so that P0 P = I. Now
which is the canonical form of O when the eigenvalues are 1, eiθ , e−iθ .
6
Solution of given problem.
2
− 23 13
3
O = 32 13 − 32
1 2 2
3 3 3
2 2 1
−λ −3 2 − 3λ −2 1
3
2 1
3 1
|O − λI| = 3 3
−λ − 23 = 2 1 − 3λ −2
1 2 2 27
3 3 3
−λ 1 2 2 − 3λ
1
= [(2 − 3λ)2 (1 − 3λ) + 4(2 − 3λ) + 1(3 + 3λ) + 2(6 − 6λ)]
27
1
= − [27λ3 − 45λ2 + 45λ − 27]
27
1
= − [(λ − 1)(3λ2 − 2λ − 3)]
3
√
Thus λ = 1, 31 ± i 2 3 2 are eigenvalues of O. √
2 2
Thus the canonical form of O is derived from above, where cos θ = 13 , sin θ = 3
:
√
1 2 2
3√ 3
0
− 2 2 1
0
3 3
0 0 1
The matrix P can be determined as follows (this is not needed for this problem, but is
given for completeness):
7
√
2 2
where cos θ = 13 , sin θ = 3
. This gives us the following equations
√
2x11 − 2x12 + x13 = x11 − x21 2 2 (3)
√
2x11 + x12 − 2x13 = x12 − x22 2 2 (4)
√
x11 + 2x12 + 2x13 = x13 − x23 2 2 (5)
√
2x21 − 2x22 + x23 = x11 2 2 + x21 (6)
√
2x21 + x22 − 2x23 = x12 2 2 + x22 (7)
√
x21 + 2x22 + 2x23 = x13 2 2 + x23 (8)
√
Adding the √ last 3 equations, we get 2x21 √ = x11 + x12 + x13 . Subtracting equation (6)
from (8), 2x22 = x13 − x11 , and from (7) 2x23 = x11 − x12 + x13 . Substituting these
in the first 3 equations and simplifying, we get x11 = −x13 . Setting x11 = 0, x12 = 1,
we get (0, 1, 0), ( √12 , 0, − √12 ) as a possible solution for X1 , X2 .
P = 1 0 0
0 − √12 1
√
2 2
1
3√ 3
0
We can now verify that OP = P − 2 3 2 1
3
0
0 0 1
8
UPSC Civil Services Main 1997 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
1 Linear Algebra
Question 1(a) Let V be the vector space of polynomials over R. Find a basis and the
dimension of W ⊆ V spanned by
v1 = t3 − 2t2 + 4t + 1
v2 = 2t3 − 3t2 + 9t − 1
v3 = t3 + 6t − 5
v4 = 2t3 − 5t2 + 7t + 5
1
Solution. Let x = (x1 , x2 ), y = (y1 , y2 ). Then
T(αx + βy) = T(αx1 + βy1 , αx2 + βy2 )
= (αx1 + βy1 + αx2 + βy2 , αx1 + βy1 − αx2 − βy2 , αx2 + βy2 )
= (α(x1 + x2 ), α(x1 − x2 ), αx2 ) + (β(y1 + y2 ), β(y1 − y2 ), βy2 )
= αT(x1 , x2 ) + βT(y1 , y2 )
Thus T is linear.
Question 1(c) Let V be the space of 2×2 matrices over R. Determine whether the matrices
A, B, C ∈ V are dependent where
1 2 3 −1 1 −5
A= B= C=
3 1 2 2 −4 0
Solution. If αA + βB + γC = 0, then
α + 3β + γ = 0 (1)
2α − β − 5γ = 0 (2)
3α + 2β − 4γ = 0 (3)
α + 2β = 0 (4)
From (4), we get α = −2β. This, together with (3) gives γ = −β. These satisfy (1) and
(2) also, so taking β = 1, α = −2, γ = −1 gives us −2A + B − C = 0. Thus A, B, C are
dependent.
Question 2(a) Let A be an n × n matrix such that each diagonal entry is µ and each
off-diagonal entry is 1. If B = λA is orthogonal, determine λ, µ.
2
Question 2(b) Show that
2 −1 0
A = −1 2 0
2 2 3
is diagonalizable over R. Find P such that P−1 AP is diagonal and hence find A25 .
2 − x −1 0
−1 2 − x 0 = 0
2 2 3−x
⇒ (2 − x)(2 − x)(3 − x) + 1(−(3 − x)) = 0
(3 − x)(4 − 4x + x2 − 1) = 0
3
1 1
1 1 0 1 0 0 2 2
0
A25 = 1 −1 0 0 3 25
0 1
2
− 12 0
−2 0 1 0 0 325 1 1 1
1 1
1 325 0 2 2
0
= 1 −325 0 12 − 21 0
−2 0 325 1 1 1
1+325 1−325
2 2
0
25 1+325
= 1−32 2
0
−1 + 325 −1 + 325 325
Question 2(c) Let A = [aij ] be a square matrix of order n such that |aij | ≤ M . Let λ be
an eigenvalue of A, show that |λ| ≤ nM .
x
Let |xi | = max(|x1 |, |x2 |, . . . , |xn |), so | xji | ≤ 1 for all j.
x1 x2 xn
0 = aii − (−ai1 − ai2 − . . . − ain )
xi xi xi
x1 x2 xn
≥ |aii | − ai1 + ai2 + . . . + ain
xi xi xi
≥ |aii | − |ai1 | − |ai2 | − . . . − |ain |
n
X
which contradicts the premise |aij | ≤ aii Thus |A| =
6 0.
j=1
i6=j
4
n
X
Now the lemma tells us that if |λ − aii | > |aij | then |λI − A| =
6 0, so λ is not an
j=1
i6=j
n
X
eigenvalue of A. Thus |λ| ≤ |λ − aii | + |aii | ≤ |aij | ≤ nM as desired.
j=1
Question 3(a) Define a positive definite matrix and show that a positive definite matrix is
always non-singular. Show that the converse is not always true.
xn
xn xn
where (x1 , x2 , . . . , xn ) 6= (0, 0, . . . , 0), which means that A is not positive definite. Thus A
is positive definite =⇒ |A| = 6 0.
The converse is not true. Take
1 0 0
A = 0 1 0
0 0 −1
5
Question 3(b) Find the eigenvalues and their corresponding eigenvectors for the matrix
6 −2 2
−2 3 −1
2 −1 3
0 = |A − xI|
6 − x −2 2
= −2 3 − x −1
2 −1 3 − x
= (6 − x)((3 − x)2 − 1) + 2(−6 + 2x + 2) + 2(2 − 6 + 2x)
= (6 − x)(9 − 6x + x2 ) − 6 + x − 8 + 4x − 8 + 4x
0 = x3 − 12x2 + 36x − 32
= (x − 2)(x2 − 10x + 16)
Question 3(c) Find P invertible such that P reduces Q(x, y, z) = 2xy + 2yz + 2zx to its
canonical form.
6
which has all diagonal entries 0, so we cannot complete squares right away.
0 1 1 1 0 0 1 0 0
1 0 1 = 0 1 0 A 0 1 0
1 1 0 0 0 1 0 0 1
Add the second row to the first and the second column to the first.
2 1 2 1 1 0 1 0 0
1 0 1 = 0 1 0 A 1 1 0
2 1 0 0 0 1 0 0 1
1 − 12 0
2 0 2 1 1 0
0 − 21 0 = − 12 21 0 A 1 12 0
2 0 0 0 0 1 0 0 1
1 − 12 −1
2 0 0 1 1 0
0 − 1 0 = − 1 1 0 A 1 1 −1
2 2 2 2
0 0 −2 −1 −1 1 0 0 1
1 − 12 −1
2 0 0
Thus P = 1 21 −1 and P0 AP = 0 − 12 0
0 0 1 0 0 −2
2 1 2 2
So Q(x, y, z) −→ 2X − 2 Y − 2Z .
Alternative Solution. Let x = X, y = X + Y, z = Z
1 − 12 −1 1 − 21 −1
x 1 0 0 X 1 0 0 ξ ξ
y = 1 1 0 Y = 1 1 0 0 1 0 η = 1 12 −1 η
z 0 0 1 Z 0 0 1 0 0 1 ζ 0 0 1 ζ
1 − 21 −1
7
UPSC Civil Services Main 1998 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) Given two linearly independent vectors (1, 0, 1, 0) and (0, −1, 1, 0) of R4 ,
find a basis of R4 which includes them.
Solution. Let v1 = (1, 0, 1, 0), v2 = (0, −1, 1, 0). Clearly these are linearly independent.
Let e1 , e2 , e3 , e4 be the standard basis. Then v1 , v2 , e1 , e2 , e3 , e4 generate R4 . We have to
find four vectors out of these which are linearly independent and include v1 , v2 .
If αv1 + βv2 + γe1 = 0, then α + γ = 0, −α = 0, α + β = 0 ⇒ α = β = γ = 0. Therefore
v1 , v2 , e1 are linearly independent.
We now show that v1 , v2 , e1 , e4 are linearly independent. Let αv1 + βv2 + γe1 + δe4 = 0
then δ = 0, and therefore α = β = γ = 0 because v1 , v2 , e1 are linearly independent.
Thus v1 , v2 , e1 , e4 is a basis of R4 .
Note that e2 = v1 − v2 − e1 , e3 = v1 − e1 .
Question 1(b) If V is a finite dimensional vector space over R and if f and g are two
linear transformations from V to R such that f (v) = 0 implies g(v) = 0, then prove that
g = λf for some λ ∈ R.
1
Question 1(c) Let T : R3 −→ R3 be defined by T(x1 , x2 , x3 ) = (x2 , x3 , −cx1 − bx2 − ax3 )
where a, b, c are fixed real numbers. Show that T is a linear transformation of R3 and that
A3 + aA2 + bA + cI = 0 where A is the matrix of T w.r.t. the standard basis of R3 .
Solution. Let x = (x1 , x2 , x3 ), y = (y1 , y2 , y3 ). Then
T(αx + βy) = (αx2 + βy2 , αx3 + βx3 , −c(αx1 + βy1 ) − b(αx2 + βy2 ) − a(αx3 + βy3 ))
= α(x2 , x3 , −cx1 − bx2 − ax3 ) + β(y2 , y3 , −cy1 − by2 − ay3 )
= αT(x) + βT(y)
Thus T is linear.
Clearly
T(1, 0, 0) = (0, 0, −c)
T(0, 1, 0) = (1, 0, −b)
T(0, 0, 1) = (0, 1, −a)
0 1 0
A= 0 0 1
−c −b −a
The characteristic equation of A is |A − λI| = 0.
−λ 1 0
0 λ 1 = 0
−c −b −a − λ
−λ2 (a + λ) − bλ − c = 0
λ3 + aλ2 + bλ + c = 0
Now by the Cayley-Hamilton theorem A3 + aA2 + bA + cI = 0.
Question 2(a) If A and B are two matrices of order 2 × 2 such that A is skew-Hermitian
and AB = B then show that B = 0.
Solution. We first of all prove that eigenvalues of skew-Hermitian matrices are 0 or pure
0
imaginary. Let A be skew-Hermitian, i.e. A = −A and let λ be its characteristic root. If
x is an eigenvector of λ, then
Ax = λx
⇒ x0 λx = x0 Ax
0
= −x0 A x
0
= −Ax x
= −λx0 x
Thus λ = −λ ∵ x0 x 6= 0, showing that the real part of λ is 0.
Now if B 6= 0 and c1 , c2 are the columns of B, then c1 6= 0 or c2 6= 0. AB = B means
that Ac1 = c1 and Ac2 = c2 . Since either c1 6= 0 or c2 6= 0, 1 must be an eigenvalue of A,
which is not possible. Hence c1 = 0 and c2 = 0, which means B = 0.
2
Question 2(b) If T is a complex matrix of order 2 × 2 such that tr T = tr T2 = 0, then
show that T2 = 0.
Solution. Let λ1 , λ2 be the eigenvalues of T, then λ21 , λ22 are the eigenvalues of T2 . Given
that
tr T = λ1 + λ2 = 0
tr T2 = λ21 + λ22 = 0
Question 2(c) Prove that a necessary and sufficient condition for an n × n real matrix A
to be similar to a diagonal matrix is that the set of characteristic vectors of A includes a set
of n linearly independent vectors.
Solution.
Necessity: By hypothesis there exists a nonsingular matrix P such that
λ1 0 . . . 0
0 λ2 . . . 0
P−1 AP = D = ... ... ... ...
0 0 . . . λn
0 0 . . . λn
3
Question 3(a) Let A be a m × n matrix. Show that the sum of the rank and nullity of A
is n.
Question 3(b) Find all real 2 × 2 matrices A with real eigenvalues which satisfy AA0 = I.
4
Solution. Since AA′ = I, |A|² = 1, so |A| = ±1.
If |A| = 1, then A is a proper orthogonal matrix, so
$$A = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}$$
for some real θ. Its characteristic equation is λ² − 2λ cos θ + 1 = 0, which has real roots only when cos²θ = 1, i.e. A = ±I.
If |A| = −1, let $J = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$; then |JA| = 1. Also JA(JA)′ = JAA′J′ = JJ′ = I. Thus
$$JA = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}$$
$$A = J^{-1}\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} = \begin{pmatrix} -\sin\theta & \cos\theta \\ \cos\theta & \sin\theta \end{pmatrix}$$
Now the eigenvalues of A are given by
$$0 = |\lambda I - A| = \begin{vmatrix} \lambda+\sin\theta & -\cos\theta \\ -\cos\theta & \lambda-\sin\theta \end{vmatrix} = \lambda^2 - \sin^2\theta - \cos^2\theta = \lambda^2 - 1$$
Hence λ = ±1, so the eigenvalues are always real. Thus the possible values of A are
$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\quad \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix},\quad \begin{pmatrix} -\sin\theta & \cos\theta \\ \cos\theta & \sin\theta \end{pmatrix}\ \text{for all real } \theta$$
Question 3(c) Reduce to diagonal matrix by rational congruent transformation the sym-
metric matrix
$$A = \begin{pmatrix} 1 & 2 & -1 \\ 2 & 0 & 3 \\ -1 & 3 & 1 \end{pmatrix}$$
Solution.
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & -4 & 0 \\ 0 & 0 & \frac{25}{4} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -\frac32 & \frac54 & 1 \end{pmatrix}\begin{pmatrix} 1 & 2 & -1 \\ 2 & 0 & 3 \\ -1 & 3 & 1 \end{pmatrix}\begin{pmatrix} 1 & -2 & -\frac32 \\ 0 & 1 & \frac54 \\ 0 & 0 & 1 \end{pmatrix}$$
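The reduction can be verified numerically (a sketch, assuming numpy), with E the lower triangular factor found above:

    import numpy as np

    A = np.array([[1, 2, -1],
                  [2, 0, 3],
                  [-1, 3, 1]], dtype=float)
    E = np.array([[1, 0, 0],
                  [-2, 1, 0],
                  [-1.5, 1.25, 1]])
    assert np.allclose(E @ A @ E.T, np.diag([1, -4, 6.25]))  # diag(1, -4, 25/4)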
UPSC Civil Services Main 1999 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) Let V be the vector space of functions from R to R. Show that f, g, h ∈ V
are linearly independent where f (t) = e2t , g(t) = t2 and h(t) = t.
Solution. Let a, b, c ∈ R and let ae2t + bt2 + ct = 0 for all t. Setting t = 0 shows that
a = 0. From t = 1 we get b + c = 0, and t = −1 gives b − c = 0, hence b = c = 0. Thus
f, g, h are linearly independent.
Question 1(b) If the matrix of the linear transformation T on V2(R) with respect to the basis B = {(1, 0), (0, 1)} is $\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$, then what is the matrix of T with respect to the ordered basis B1 = {(1, 1), (1, −1)}?

Solution. T(1, 1) = (2, 2) = 2(1, 1) + 0(1, −1) and T(1, −1) = (0, 0), so the matrix of T with respect to B1 is $\begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix}$.
Solution. The characteristic equation is
$$0 = \begin{vmatrix} 4-x & 2 & 2 \\ 2 & 4-x & 2 \\ 2 & 2 & 4-x \end{vmatrix}$$
= (4 − x)((4 − x)² − 4) + 2(4 − 8 + 2x) − 2(8 − 2x − 4)
= (4 − x)(12 − 8x + x²) − 8 + 4x − 8 + 4x
= 48 − 32x + 4x² − 12x + 8x² − x³ − 16 + 8x
= −(x³ − 12x² + 36x − 32)
= −(x − 2)(x² − 10x + 16)   ∵ 2 is a root
= −(x − 2)(x − 2)(x − 8)
The characteristic roots are 2, 2, 8.
If (x1, x2, x3) is an eigenvector for λ = 8, then
$$\begin{pmatrix} -4 & 2 & 2 \\ 2 & -4 & 2 \\ 2 & 2 & -4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
Thus x1 = x2 = x3, so (1, 1, 1) is an eigenvector for λ = 8.
Similarly for λ = 2,
$$\begin{pmatrix} 2 & 2 & 2 \\ 2 & 2 & 2 \\ 2 & 2 & 2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
Thus x1 + x2 + x3 = 0. Take x1 = 1, x2 = 0, so (1, 0, −1) is an eigenvector; take x1 = 0, x2 = 1, so (0, 1, −1) is an eigenvector for λ = 2. These three eigenvectors are linearly independent.
Thus if
$$P = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 1 & -1 & -1 \end{pmatrix}, \qquad\text{then}\qquad P^{-1}AP = \begin{pmatrix} 8 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}$$
To check, verify that AP = P diag(8, 2, 2).
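A quick numerical confirmation of the diagonalization (assuming numpy):

    import numpy as np

    A = np.array([[4, 2, 2],
                  [2, 4, 2],
                  [2, 2, 4]], dtype=float)
    P = np.array([[1, 1, 0],
                  [1, 0, 1],
                  [1, -1, -1]], dtype=float)
    assert np.allclose(np.linalg.inv(P) @ A @ P, np.diag([8, 2, 2]))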
Question 2(a) Test for congruency the matrices
$$A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}$$
Prove that A²ⁿ = B²ᵐ = I where m, n are positive integers.
Solution. A and B are not congruent, because A is symmetric and B is not. If A ≡ B
then ∃P non-singular such that P0 AP = B which implies that B should be symmetric.
$$A^2 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad B^2 = \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}\begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
Hence A2n = (A2 )n = I, and B2m = (B2 )m = I.
Question 2(b) If A is a skew symmetric matrix of order n, then prove that (I − A)(I + A)⁻¹ is orthogonal.
Solution. Note first that I + A is non-singular: the eigenvalues of a real skew-symmetric matrix are 0 or purely imaginary, so −1 is not an eigenvalue of A. Let O = (I − A)(I + A)⁻¹. Then
OO′ = (I − A)(I + A)⁻¹((I + A)⁻¹)′(I − A)′
 = (I − A)(I + A)⁻¹(I − A)⁻¹(I + A)   (as A′ = −A)
 = (I − A)[(I − A)(I + A)]⁻¹(I + A)
 = (I − A)[I − A²]⁻¹(I + A)
 = (I − A)[(I + A)(I − A)]⁻¹(I + A)
 = (I − A)(I − A)⁻¹(I + A)⁻¹(I + A)
 = I
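A numerical sketch of this result (assuming numpy), with an arbitrary 3 × 3 skew-symmetric matrix:

    import numpy as np

    A = np.array([[0, 2, -1],
                  [-2, 0, 3],
                  [1, -3, 0]], dtype=float)   # A' = -A
    I = np.eye(3)
    O = (I - A) @ np.linalg.inv(I + A)
    assert np.allclose(O @ O.T, I)             # O is orthogonal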
Question 2(c) Test for positive definiteness the quadratic form 2x2 + y 2 + 2z 2 + 2xy − 2zx.
$$\begin{vmatrix} 1-\lambda & -1 & 1 \\ -1 & 1-\lambda & -1 \\ 1 & -1 & 1-\lambda \end{vmatrix} = 0$$
or λ³ − 3λ² = 0. Thus λ = 0, 0, 3.
We next determine the characteristic vectors. For λ = 0, we get
$$\begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \\ 1 & -1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
Shifting the origin to (0, 0, √6), we get X² = √(2/3) Z, showing that the equation is a parabolic cylinder.
1 Reduction of Quadrics
For the sake of completeness, we give the complete theoretical discussion for the above
question.
Let
F (X, Y, Z) = λ1 X 2 + λ2 Y 2 + λ3 Z 2 + 2U X + 2V Y + 2W Z + d = 0
Step II. We now consider 3 possibilities (ρ is rank of the matrix):
1. ρ(S) = 3 ⇒ λ1λ2λ3 ≠ 0. Shift the origin to (−U/λ1, −V/λ2, −W/λ3), i.e. x = X + U/λ1, y = Y + V/λ2, z = Z + W/λ3. (Actually we are just completing the squares.) F gets transformed to
$$\lambda_1 x^2 + \lambda_2 y^2 + \lambda_3 z^2 + d_2 = 0$$
2. ρ(S) = 2. One characteristic root, say λ3 = 0. Shift the origin to (−U/λ1, −V/λ2, 0), and F gets transformed to
$$\lambda_1 x^2 + \lambda_2 y^2 + 2w_2 z + d_2 = 0$$
3. ρ(S) = 1. Two characteristic roots, say λ2 = λ3 = 0. Shift the origin to (−U/λ1, 0, 0), and F gets transformed to
$$\lambda_1 x^2 + 2v_2 y + 2w_2 z + d_2 = 0$$
If ρ(Q) = 4 (with ρ(S) = 2), the matrix of the quadric is
$$Q = \begin{pmatrix} \lambda_1 & 0 & 0 & 0 \\ 0 & \lambda_2 & 0 & 0 \\ 0 & 0 & 0 & w_2 \\ 0 & 0 & w_2 & d_2 \end{pmatrix}$$
so |Q| = −λ1λ2w2². Since ρ(Q) = 4, w2 ≠ 0. Shifting the origin to (0, 0, −d2/2w2) we get
$$F(x, y, z) = \lambda_1 x^2 + \lambda_2 y^2 + 2w_2 z = 0$$
where w2² = −|Q|/λ1λ2. The surface is a paraboloid.
4. ρ(Q) = 3, ρ(S) = 2. Here F(x, y, z) = λ1x² + λ2y² + 2w2z + d2 = 0, and ρ(Q) = 3 forces |Q| = −λ1λ2w2² = 0, so w2 = 0 and d2 ≠ 0. Thus
$$F(x, y, z) = \lambda_1 x^2 + \lambda_2 y^2 + d_2 = 0$$
an elliptic or hyperbolic cylinder.
5. ρ(Q) = 2, ρ(S) = 2
$$Q = \begin{pmatrix} \lambda_1 & 0 & 0 & 0 \\ 0 & \lambda_2 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & d_2 \end{pmatrix}$$
ρ(Q) = 2 ⇒ d2 = 0, and F(x, y, z) = λ1x² + λ2y² = 0. The quadric is a pair of distinct planes if λ1λ2 < 0, or a single straight line (the z-axis) if λ1λ2 > 0.
6. ρ(Q) = 4, ρ(S) = 1
$$Q = \begin{pmatrix} \lambda_1 & 0 & 0 & 0 \\ 0 & 0 & 0 & v_2 \\ 0 & 0 & 0 & w_2 \\ 0 & v_2 & w_2 & d_2 \end{pmatrix}$$
Rows 2 and 3 are proportional, so |Q| = 0, which shows that ρ(Q) = 4 is not possible.
7. ρ(Q) = 3, ρ(S) = 1. Rotate the axes by
$$x = X,\qquad y = \frac{v_2}{\sqrt{v_2^2 + w_2^2}}\,Y - \frac{w_2}{\sqrt{v_2^2 + w_2^2}}\,Z,\qquad z = \frac{w_2}{\sqrt{v_2^2 + w_2^2}}\,Y + \frac{v_2}{\sqrt{v_2^2 + w_2^2}}\,Z$$
Then, after a further shift of origin along Y which absorbs the constant term,
$$F(x, y, z) = \lambda_1 X^2 + 2v_3 Y = 0,\qquad v_3 = \sqrt{v_2^2 + w_2^2}$$
Thus the quadric is a parabolic cylinder.
8. ρ(Q) = 2, ρ(S) = 1 (so v2 = w2 = 0, d2 ≠ 0):
$$F(x, y, z) = \lambda_1 X^2 + d_2 = 0$$
The quadric is two parallel planes.
UPSC Civil Services Main 2000 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
T = {(x, y) | x, y ∈ V}
Solution.
1. v1 , v2 ∈ T ⇒ v1 + v2 ∈ T
4. Clearly v1 +v2 = v2 +v1 and (v1 +v2 )+v3 = v1 +(v2 +v3 ) as addition is commutative
and associative in V.
5. z ∈ C, v ∈ T ⇒ zv ∈ T
6. 1v = (1 + i0)(x, y) = (x, y)
7.
(α + iβ)((x1 , y1 ) + (x2 , y2 ))
= (α(x1 + x2 ) − β(y1 + y2 ), β(x1 + x2 ) + α(y1 + y2 ))
= (αx1 − βy1 , βx1 + αy1 ) + (αx2 − βy2 , βx2 + αy2 )
= (α + iβ)(x1 , y1 ) + (α + iβ)(x2 , y2 )
8.
Question 1(b) Show that if λ is a characteristic root of a non-singular matrix A, then λ−1
is a characteristic root of A−1 .
Solution. Av = λv with v ≠ 0, and λ ≠ 0 since A is non-singular. Then
$$A^{-1}Av = \lambda A^{-1}v \;\Rightarrow\; A^{-1}v = \lambda^{-1}v$$
Thus λ−1 is a characteristic root of A−1 .
Define
$$Q = \begin{pmatrix} 1 & -\frac{a_{12}}{a_{11}} & \cdots & -\frac{a_{1n}}{a_{11}} \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$$
Then Q is non-singular, and
$$Q'AQ = \begin{pmatrix} a_{11} & 0 \\ 0 & S \end{pmatrix}$$
where S is (n − 1) × (n − 1) positive definite. By induction, let Q* be an (n − 1) × (n − 1) non-singular matrix such that Q*′SQ* is diagonal. Then let
$$Q_1 = \begin{pmatrix} 1 & 0 \\ 0 & Q^* \end{pmatrix}$$
and let P = QQ1. Then P′AP is diagonal, say diag(b11, b22, …, bnn). Let B = diag(1/√b11, …, 1/√bnn). Then B′P′APB = In.
The quadratic form Q(x, y, z) associated with the given matrix A is given by
$$\begin{pmatrix} x & y & z \end{pmatrix}\begin{pmatrix} 1 & 2 & 3 \\ 2 & 5 & 7 \\ 3 & 7 & 11 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = x^2 + 5y^2 + 11z^2 + 4xy + 6xz + 14yz$$
Completing squares, Q = (x + 2y + 3z)² + (y + z)² + z². Writing this as x′BB′x, we get x′BB′x = Q = x′Ax, so A = BB′, as A and BB′ are both symmetric. Clearly
$$B = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & 1 & 1 \end{pmatrix}$$
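The factorization can be checked numerically (a sketch, assuming numpy):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [2, 5, 7],
                  [3, 7, 11]])
    B = np.array([[1, 0, 0],
                  [2, 1, 0],
                  [3, 1, 1]])
    assert (B @ B.T == A).all()   # A = BB'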
Solution. If A is non-singular, then the system is consistent because the rank of the coefficient matrix A = n = rank of the n × (n + 1) augmented matrix (A, B). If x1, x2 are
two solutions, then
Ax1 = B = Ax2
=⇒ A(x1 − x2 ) = 0
=⇒ A−1 A(x1 − x2 ) = 0
=⇒ x1 = x2
Thus the unique solution is given by the column vector x = A−1 B.
Question 2(c) Prove that two similar matrices have the same characteristic roots. Is the
converse true? Justify your claim.
Solution. If B = P⁻¹AP, then |λI − B| = |P⁻¹(λI − A)P| = |P⁻¹||λI − A||P| = |λI − A|, so similar matrices have the same characteristic polynomial and hence the same characteristic roots.
The converse is not true. Take $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ and $B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$. Then A and B have the same characteristic polynomial (λ − 1)² and thus the same characteristic roots. But B can never be similar to A because P⁻¹BP = B whatever P may be.
UPSC Civil Services Main 2001 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) Show that the vectors (1, 0, −1), (0, −3, 2) and (1, 2, 1) form a basis of the
vector space R3 (R).
Solution. Since dimR (R3 ) = 3, it is enough to prove that these are linearly independent.
If possible, let
a(1, 0, −1) + b(0, −3, 2) + c(1, 2, 1) = 0
This implies
a + c = 0, −3b + 2c = 0, −a + 2b + c = 0
Solving for c: c + (4/3)c + c = 0, so c = 0, hence a = b = 0. (Note that if these linearly independent vectors were not a basis, they could be completed into one; but in R³ any four vectors are linearly dependent, so this is a maximal linearly independent set, hence it is a basis.)
Alternate Solution. Since dim(R³) = 3, to show that (1, 0, −1), (0, −3, 2) and (1, 2, 1) form a basis it is enough to show that these vectors generate R³. In fact, given (x1, x2, x3), we can always find a, b, c s.t. (x1, x2, x3) = a(1, 0, −1) + b(0, −3, 2) + c(1, 2, 1) as follows: a + c = x1, −3b + 2c = x2, −a + 2b + c = x3. Thus (c − x1) + 2(2c − x2)/3 + c = x3, so c + (4/3)c + c = x1 + (2/3)x2 + x3. Thus
$$c = \frac{3x_1 + 2x_2 + 3x_3}{10},\qquad a = x_1 - c = \frac{7x_1 - 2x_2 - 3x_3}{10},\qquad b = \frac{2c - x_2}{3} = \frac{x_1 - x_2 + x_3}{5}$$
Question 1(b) If λ is a characteristic root of a non-singular matrix A, then prove that |A|/λ is a characteristic root of Adj A.
Solution. If µ is a characteristic root of A, then aµ is a characteristic root of aA for a
constant a, because if Av = µv, v 6= 0 a vector, then aAv = aµv. Hence the result.
If λ is a characteristic root of A and |A| ≠ 0, then λ ≠ 0, and λ⁻¹ is a characteristic root of A⁻¹, because Av = λv ⟹ A⁻¹v = λ⁻¹v.
Since Adj A = |A|A⁻¹, it follows that |A|/λ is a characteristic root of Adj A.
Question 2(a) If
$$A = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$$
show that for all integers n ≥ 3, Aⁿ = Aⁿ⁻² + A² − I. Hence determine A⁵⁰.
Solution. The characteristic equation of A is |λI − A| = (λ − 1)(λ² − 1) = λ³ − λ² − λ + 1 = 0. From the Cayley-Hamilton theorem, A³ − A² − A + I = 0 ⇒ A³ = A + A² − I. Thus the result is true for n = 3. Suppose the result is true for n = m, i.e. Aᵐ = Aᵐ⁻² + A² − I. We shall prove it for m + 1.
Am+1 = Am A
= (Am−2 + A2 − I)A
= Am−1 + A3 − A
= Am−1 + A2 + A − A − I
= Am−1 + A2 − I
Applying the relation repeatedly,
$$A^{50} = A^{48} + (A^2 - I) = A^{46} + 2(A^2 - I) = \cdots = A^2 + 24(A^2 - I) = 25A^2 - 24I$$
$$= \begin{pmatrix} 25 & 0 & 0 \\ 25 & 25 & 0 \\ 25 & 0 & 25 \end{pmatrix} - \begin{pmatrix} 24 & 0 & 0 \\ 0 & 24 & 0 \\ 0 & 0 & 24 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 25 & 1 & 0 \\ 25 & 0 & 1 \end{pmatrix}$$
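A short numerical check of A⁵⁰ = 25A² − 24I (assuming numpy):

    import numpy as np

    A = np.array([[1, 0, 0],
                  [1, 0, 1],
                  [0, 1, 0]])
    lhs = np.linalg.matrix_power(A, 50)
    rhs = 25 * (A @ A) - 24 * np.eye(3, dtype=int)
    assert (lhs == rhs).all()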
Question 2(c) Determine the orthogonal matrix P such that P⁻¹AP is diagonal, where
$$A = \begin{pmatrix} 7 & 4 & -4 \\ 4 & -8 & -1 \\ -4 & -1 & -8 \end{pmatrix}$$
Solution. The characteristic equation is
$$\begin{vmatrix} \lambda-7 & -4 & 4 \\ -4 & \lambda+8 & 1 \\ 4 & 1 & \lambda+8 \end{vmatrix} = 0$$
(λ − 7)((λ + 8)² − 1) + 4(−4 − 4λ − 32) + 4(−4 − 4λ − 32) = 0
λ³ + 9λ² − 81λ − 729 = 0
(λ + 9)(λ² − 81) = (λ + 9)²(λ − 9) = 0
so the characteristic roots are −9, −9, 9.
If (x1, x2, x3) is an eigenvector for λ = 9, then 2x1 − 4x2 + 4x3 = 0, −4x1 + 17x2 + x3 = 0, 4x1 + x2 + 17x3 = 0. From the second and third we get 18x2 + 18x3 = 0. Take x2 = 1. Then x3 = −1, x1 = 4, so (4, 1, −1) is an eigenvector for λ = 9. Similarly (0, 1, 1) and (−1, 2, −2) are mutually orthogonal eigenvectors for λ = −9.
Normalizing these eigenvectors, let
$$P = \begin{pmatrix} 0 & -\frac13 & \frac{4}{\sqrt{18}} \\ \frac{1}{\sqrt2} & \frac23 & \frac{1}{\sqrt{18}} \\ \frac{1}{\sqrt2} & -\frac23 & -\frac{1}{\sqrt{18}} \end{pmatrix}$$
Clearly P0 P = I, since the columns of P are mutually orthogonal unit vectors.
Moreover, from Ax = λx for the eigenvalues and eigenvectors it follows that
$$AP = P\begin{pmatrix} -9 & 0 & 0 \\ 0 & -9 & 0 \\ 0 & 0 & 9 \end{pmatrix}$$
Thus
$$P^{-1}AP = \begin{pmatrix} -9 & 0 & 0 \\ 0 & -9 & 0 \\ 0 & 0 & 9 \end{pmatrix}$$
which is diagonal, as required.
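Numerically (assuming numpy), the columns of P are indeed orthonormal eigenvectors and P′AP is the stated diagonal matrix:

    import numpy as np

    A = np.array([[7, 4, -4],
                  [4, -8, -1],
                  [-4, -1, -8]], dtype=float)
    s, t = np.sqrt(2), np.sqrt(18)
    P = np.array([[0, -1/3, 4/t],
                  [1/s, 2/3, 1/t],
                  [1/s, -2/3, -1/t]])
    assert np.allclose(P.T @ P, np.eye(3))                 # P is orthogonal
    assert np.allclose(P.T @ A @ P, np.diag([-9, -9, 9]))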
$$E = (X - x_1)^2 + \ldots + (X - x_n)^2 = nX^2 - 2X(x_1 + \ldots + x_n) + (x_1^2 + x_2^2 + \ldots + x_n^2)$$
Setting b1 = b2 = … = bn = 1, we get
$$n\Big(\sum_{i=1}^n a_i^2\Big) - \Big(\sum_{i=1}^n a_i\Big)^2 \ge 0$$
UPSC Civil Services Main 2002 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Solution.
T(a, b, c) + T(x, y, z) = (a − b, b − c, a + c) + (x − y, y − z, x + z) = (a − b + x − y, b − c + y − z, a + c + x + z) = T(a + x, b + y, c + z)
and similarly T(α(a, b, c)) = (αa − αb, αb − αc, αa + αc) = αT(a, b, c). Thus T is linear.
Now we show that
T(1, 0, 0) = (1, 0, 1)
T(0, 1, 0) = (−1, 1, 0)
T(0, 0, 1) = (0, −1, 1)
Since the matrix with these as columns has determinant 2 ≠ 0, the vectors (1, 0, 1), (−1, 1, 0), (0, −1, 1) are linearly independent.
Since (1, 0, 0), (0, 1, 0), (0, 0, 1) generate R3 , (1, 0, 1), (−1, 1, 0), (0, −1, 1) generate T(R3 ),
hence dim(T(R3 )) = 3. Thus T is non-singular.
Alternatively,
T(a, b, c) = (0, 0, 0) ⇐⇒ a − b = 0, b − c = 0, a + c = 0 =⇒ a = b = c = 0
Question 1(b) Prove that a square matrix A is non-singular if and only if the constant
term in its characteristic polynomial is different from 0.
Solution. Let
$$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ a_{21} & \cdots & a_{2n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix}$$
Then the characteristic polynomial of A is Det(xI − A), where I is the n × n unit matrix. Clearly
$$\mathrm{Det}(xI - A) = x^n - \Big(\sum_{i=1}^n a_{ii}\Big)x^{n-1} + \ldots + (-1)^n\,\mathrm{Det}\,A$$
The constant term, obtained by setting x = 0, is (−1)ⁿ Det A, which is different from 0 if and only if Det A ≠ 0, i.e. if and only if A is non-singular.
Solution. Clearly
T (1, 0, 0, 0, 0) = (0, 0, 0, 0, 0)
T (0, 1, 0, 0, 0) = (1, 0, 1, 0, 1)
T (0, 0, 1, 0, 0) = (0, 0, 0, 0, 0)
T (0, 0, 0, 1, 0) = (−1, 1, 0, 2, 0)
T (0, 0, 0, 0, 1) = (0, 1, 0, 1, 1)
are generators of the range space of T . In fact, if v1 = (1, 0, 1, 0, 1), v2 = (−1, 1, 0, 2, 0), v3 =
(0, 1, 0, 1, 1) then v1 , v2 , v3 generate T (R5 ). We now show that v1 , v2 , v3 are linearly inde-
pendent. Let α1 v1 +α2 v2 +α3 v3 = 0. Then α1 −α2 = 0, α2 +α3 = 0, α1 = 0 ⇒ α2 = α3 = 0.
Thus v1 , v2 , v3 are linearly independent over R ⇒ T (R5 ) is of dimension 3 with basis
v 1 , v2 , v3 .
Thus the null space is of dimension 2, because dim(null space) + dim(range space) =
dim(given vector space = R5 ) = 5. Since e1 = (1, 0, 0, 0, 0) and e3 = (0, 0, 1, 0, 0) belong to
the null space of T , and both are linearly independent over R, e1 , e3 is a basis of the null
space of T .
Question 2(b) Let A be a 3 × 3 real symmetric matrix with eigenvalues 0, 0, 5. If the
corresponding eigenvectors are (2, 0, 1), (2, 1, 1), (1, 0, −2) then find the matrix A.
Solution. Let
$$P = \begin{pmatrix} 2 & 2 & 1 \\ 0 & 1 & 0 \\ 1 & 1 & -2 \end{pmatrix}$$
then P⁻¹AP = diag(0, 0, 5), so A = P diag(0, 0, 5) P⁻¹. A simple calculation shows that
$$P^{-1} = \begin{pmatrix} \frac25 & -1 & \frac15 \\ 0 & 1 & 0 \\ \frac15 & 0 & -\frac25 \end{pmatrix}$$
therefore
$$A = P\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 5 \end{pmatrix}P^{-1} = \begin{pmatrix} 1 & 0 & -2 \\ 0 & 0 & 0 \\ -2 & 0 & 4 \end{pmatrix}$$
Thus $\begin{pmatrix} 1 & 0 & -2 \\ 0 & 0 & 0 \\ -2 & 0 & 4 \end{pmatrix}$ is the required symmetric matrix with 0, 0, 5 as eigenvalues.
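One can verify numerically (assuming numpy) that this A has the prescribed eigenvalues and eigenvectors:

    import numpy as np

    A = np.array([[1, 0, -2],
                  [0, 0, 0],
                  [-2, 0, 4]], dtype=float)
    assert np.allclose(A @ [2, 0, 1], 0)          # eigenvalue 0
    assert np.allclose(A @ [2, 1, 1], 0)          # eigenvalue 0
    v = np.array([1, 0, -2], dtype=float)
    assert np.allclose(A @ v, 5 * v)              # eigenvalue 5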
Solution. There are three equations in 5 unknowns, therefore the rank of the coefficient matrix ≤ 3. Since
$$\begin{vmatrix} 1 & -2 & -3 \\ -1 & 3 & 5 \\ 2 & 1 & -2 \end{vmatrix} = 1(-6 - 5) + 2(2 - 10) - 3(-1 - 6) = -6$$
the rank of the coefficient matrix is 3. Using Cramer's rule we solve the system for x1, x2, x3 in terms of x4, x5:
$$x_1 = -\frac16\begin{vmatrix} -1-4x_4 & -2 & -3 \\ 5x_4+2x_5 & 3 & 5 \\ 17-3x_4+4x_5 & 1 & -2 \end{vmatrix}$$
= −(1/6)[(−1 − 4x4)(−11) − 3(5x4 + 2x5 − 51 + 9x4 − 12x5) + 2(−10x4 − 4x5 − 85 + 15x4 − 20x5)]
= −(1/6)[−6 + 44x4 − 42x4 + 10x4 + 30x5 − 48x5]
= 1 − 2x4 + 3x5
$$x_2 = -\frac16\begin{vmatrix} 1 & -1-4x_4 & -3 \\ -1 & 5x_4+2x_5 & 5 \\ 2 & 17-3x_4+4x_5 & -2 \end{vmatrix}$$
= −(1/6)[−10x4 − 4x5 − 85 + 15x4 − 20x5 − 8 − 32x4 + 51 − 9x4 + 12x5 + 30x4 + 12x5]
= −(1/6)[−42 − 6x4]
= 7 + x4
$$x_3 = -\frac16\begin{vmatrix} 1 & -2 & -1-4x_4 \\ -1 & 3 & 5x_4+2x_5 \\ 2 & 1 & 17-3x_4+4x_5 \end{vmatrix}$$
= −(1/6)[51 − 9x4 + 12x5 − 5x4 − 2x5 − 34 + 6x4 − 8x5 − 20x4 − 8x5 + 7 + 28x4]
= −(1/6)[24 − 6x5]
= −4 + x5
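The parametric solution can be spot-checked numerically (a sketch, assuming numpy, with the coefficient matrix and right-hand sides read off from the determinants above):

    import numpy as np

    M = np.array([[1, -2, -3],
                  [-1, 3, 5],
                  [2, 1, -2]], dtype=float)
    for x4, x5 in [(0, 0), (1, -2), (3, 5)]:      # arbitrary parameter values
        b = np.array([-1 - 4*x4, 5*x4 + 2*x5, 17 - 3*x4 + 4*x5])
        x = np.array([1 - 2*x4 + 3*x5, 7 + x4, -4 + x5])
        assert np.allclose(M @ x, b)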
Question 2(d) Use the Cayley-Hamilton theorem to find the inverse of the following matrix:
$$A = \begin{pmatrix} 0 & 1 & 2 \\ 1 & 2 & 3 \\ 3 & 1 & 1 \end{pmatrix}$$
Check A−1 A = I.
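Since the computation is not carried out here, the following sketch (assuming numpy) performs it: the characteristic polynomial of A works out to x³ − 3x² − 8x + 2, so by Cayley-Hamilton A(A² − 3A − 8I) = −2I:

    import numpy as np

    A = np.array([[0, 1, 2],
                  [1, 2, 3],
                  [3, 1, 1]], dtype=float)
    # |xI - A| = x^3 - 3x^2 - 8x + 2 (trace 3, sum of principal 2x2 minors -8, det -2)
    A_inv = -(A @ A - 3*A - 8*np.eye(3)) / 2
    assert np.allclose(A_inv @ A, np.eye(3))      # check A^{-1} A = I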
UPSC Civil Services Main 2003 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) Let S be any non-empty subset of a vector space V over the field F . Show
that the set {a1 x1 + . . . + an xn | a1 , . . . , an ∈ F, x1 , . . . xn ∈ S, n ∈ N} is the subspace
generated by S.
$$|A - xI| = \begin{vmatrix} 2-x & 1 & 1 \\ 0 & 1-x & 0 \\ 1 & 1 & 2-x \end{vmatrix} = (2-x)^2(1-x) - (1-x) = 0$$
i.e. x³ − 5x² + 7x − 3 = 0. By the Cayley-Hamilton theorem, we get A³ − 5A² + 7A − 3I = 0. Now A(A² − 5A + 7I) = 3I, so
$$A^{-1} = \frac13(A^2 - 5A + 7I) = \frac13\begin{pmatrix} 2 & -1 & -1 \\ 0 & 3 & 0 \\ -1 & -1 & 2 \end{pmatrix}$$
Question 2(a) Prove that the eigenvectors corresponding to distinct eigenvalues of a square
matrix are linearly independent.
α 1 x1 + . . . + α r x r = 0 (1)
Question 2(b) If H is a Hermitian matrix, then show that (H + iI)−1 (H − iI) is a unitary
matrix. Also show that every unitary matrix A can be written in this form provided 1 is not
an eigenvalue of A.
Solution. Let Q(x1, x2, x3) = (x1 x2 x3) A (x1 x2 x3)′ be the quadratic form associated with A. Then, completing squares,
$$\begin{pmatrix} X_1 \\ X_2 \\ X_3 \end{pmatrix} = \begin{pmatrix} 1 & -\frac13 & \frac13 \\ 0 & 1 & -\frac17 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = B'\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$$
Thus A = BDB′, where
$$D = \begin{pmatrix} 6 & 0 & 0 \\ 0 & \frac73 & 0 \\ 0 & 0 & \frac{16}{7} \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 0 & 0 \\ -\frac13 & 1 & 0 \\ \frac13 & -\frac17 & 1 \end{pmatrix}$$
Question 2(d) Reduce the quadratic form given below to canonical form and find its rank
and signature:
x2 + 4y 2 + 9z 2 + u2 − 12yx + 6zx − 4zy − 2xu − 6zu
Solution. Completing squares successively, put
X = x − 6y + 3z − u
Y = y − (1/2)z + (3/16)u
Z = z − (3/8)u
U = u
so that Q(x, y, z, u) is transformed to X² − 32Y² + 8Z². We now put X* = X, Y* = √32 Y, Z* = √8 Z, U* = U to get X*² − Y*² + Z*² as the canonical form of Q(x, y, z, u).
Rank of Q(x, y, z, u) = 3 = rank of the associated matrix. Signature of Q(x, y, z, u) = number of positive squares − number of negative squares = 2 − 1 = 1.
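As a numerical cross-check (a sketch, assuming numpy), the symmetric matrix associated with Q should have two positive, one negative and one zero eigenvalue, matching rank 3 and signature 1:

    import numpy as np

    A = np.array([[1, -6, 3, -1],
                  [-6, 4, -2, 0],
                  [3, -2, 9, -3],
                  [-1, 0, -3, 1]], dtype=float)
    ev = np.linalg.eigvalsh(A)
    assert (np.abs(ev) < 1e-8).sum() == 1                      # rank 3
    assert (ev > 1e-8).sum() == 2 and (ev < -1e-8).sum() == 1  # signature 2 - 1 = 1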
UPSC Civil Services Main 2004 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) Let S be the space generated by the vectors {(0, 2, 6), (3, 1, 6), (4, −2, −2)}.
What is the dimension of S? Find a basis for S.
Solution. (0, 2, 6), (3, 1, 6) are linearly independent, because α(0, 2, 6) + β(3, 1, 6) = 0 ⇒
3β = 0, 2α + β = 0 ⇒ α = β = 0. Thus dim S ≥ 2.
If possible let (4, −2, −2) = α(0, 2, 6) + β(3, 1, 6); then 4 = 3β, −2 = 2α + β, −2 = 6α + 6β should be consistent. Clearly β = 4/3, α = (1/2)(−2 − 4/3) = −5/3 from the first two equations, and these values satisfy the third. Thus (4, −2, −2) is a linear combination of (0, 2, 6) and (3, 1, 6).
Hence dim S = 2 and {(0, 2, 6), (3, 1, 6)} is a basis of S, being a maximal linearly inde-
pendent subset of a generating system.
Solution.
General solution. Clearly f : R3 −→ R is onto, thus the dimension of the range of
f is 1. From question 3(a) of 1998, dimension of nullity of f + dimension of range of f =
dimension of domain of f , so the dimension of the nullity of f = 2. Given this, we can pick
a basis for the kernel by looking at the given transformation.
Question 2(a) Show that the linear transformation T from R³ to R⁴ represented by the matrix
$$\begin{pmatrix} 1 & 3 & 0 \\ 0 & 1 & -2 \\ 2 & 1 & 1 \\ -1 & 1 & 2 \end{pmatrix}$$
is one to one. Find a basis for its image.
x + 3z = 5
−2x + 5y − z = 0
−x + 4y + z = 4
Solution. The first equation gives x = 5 − 3z; the second now gives 5y = 2x + z = 10 − 6z + z = 10 − 5z ⇒ y = 2 − z. Putting these values in the third equation we get −x + 4y + z = −5 + 3z + 8 − 4z + z = 3 ≠ 4, hence the given system is inconsistent.
Alternative. Let
$$A = \begin{pmatrix} 1 & 0 & 3 \\ -2 & 5 & -1 \\ -1 & 4 & 1 \end{pmatrix}$$
be the coefficient matrix and
$$B = \begin{pmatrix} 1 & 0 & 3 & 5 \\ -2 & 5 & -1 & 0 \\ -1 & 4 & 1 & 4 \end{pmatrix}$$
the augmented matrix; then it can be shown that rank A = 2 and rank B = 3, which implies that the system is inconsistent. For consistency the ranks should be equal. This procedure is longer in this particular case.
Question 2(c) Find the characteristic polynomial of the matrix
$$A = \begin{pmatrix} 1 & 1 \\ -1 & 3 \end{pmatrix}$$
Hence find A⁻¹ and A⁶.
Solution. The characteristic polynomial of A is given by
$$|xI - A| = \begin{vmatrix} x-1 & -1 \\ 1 & x-3 \end{vmatrix} = (x-1)(x-3) + 1 = x^2 - 4x + 4$$
The Cayley-Hamilton theorem states that A satisfies its characteristic equation, i.e. A² − 4A + 4I = 0 ⇒ (A − 4I)A = A(A − 4I) = −4I. Thus
$$A^{-1} = -\frac{A - 4I}{4} = -\frac14\begin{pmatrix} -3 & 1 \\ -1 & -1 \end{pmatrix} = \begin{pmatrix} \frac34 & -\frac14 \\ \frac14 & \frac14 \end{pmatrix}$$
From A2 − 4A + 4I = 0 we get
A2 = 4A − 4I
A3 = 4A2 − 4A = 4(4A − 4I) − 4A = 12A − 16I
$$A^6 = (12A - 16I)^2 = 144A^2 - 384A + 256I = 144(4A - 4I) - 384A + 256I = 192A - 320I = \begin{pmatrix} -128 & 192 \\ -192 & 256 \end{pmatrix}$$
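A quick numerical check of both results (assuming numpy):

    import numpy as np

    A = np.array([[1, 1],
                  [-1, 3]], dtype=float)
    A_inv = np.array([[0.75, -0.25],
                      [0.25, 0.25]])
    assert np.allclose(A @ A_inv, np.eye(2))
    assert np.allclose(np.linalg.matrix_power(A, 6), 192*A - 320*np.eye(2))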
Question 2(d) Define a positive definite quadratic form. Reduce the quadratic form x21 +
x23 + 2x1 x2 + 2x2 x3 to canonical form. Is this quadratic form positive definite?
Solution. If $Q(x_1, \ldots, x_n) = \sum_{i,j=1}^n a_{ij}x_ix_j$, with aᵢⱼ = aⱼᵢ, is a quadratic form in n variables with aᵢⱼ ∈ R, then it is said to be positive definite if Q(α1, …, αn) > 0 whenever αᵢ ∈ R, i = 1, …, n and Σᵢ αᵢ² > 0.
Let the given form be Q(x1, x2, x3) = x1² + x3² + 2x1x2 + 2x2x3.
Let X1 = x1 + x2, X2 = x2, X3 = x2 + x3, i.e.
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{pmatrix}\begin{pmatrix} X_1 \\ X_2 \\ X_3 \end{pmatrix}$$
then Q(x1, x2, x3) is transformed to X1² − 2X2² + X3². Since Q(x1, x2, x3) and the transformed quadratic form assume the same values, Q(x1, x2, x3) is an indefinite form, hence not positive definite. The canonical form of Q(x1, x2, x3) is Z1² − Z2² + Z3², where Z1 = X1, Z2 = √2 X2, Z3 = X3.
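Numerically (a sketch, assuming numpy), the matrix of Q has eigenvalues of both signs, confirming that the form is indefinite:

    import numpy as np

    A = np.array([[1, 1, 0],
                  [1, 0, 1],
                  [0, 1, 1]], dtype=float)   # matrix of x1^2 + x3^2 + 2x1x2 + 2x2x3
    ev = np.linalg.eigvalsh(A)
    assert ev.min() < 0 < ev.max()            # indefinite, hence not positive definite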
UPSC Civil Services Main 2005 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) Find the values of k for which the vectors (1, 1, 1, 1), (1, 3, −2, k), (2, 2k − 2, −k − 2, 3k − 1) and (3, k + 2, −3, 2k + 1) are linearly independent in R⁴.

Solution. The vectors are linearly independent if and only if the matrix having them as rows is non-singular. Now
$$\begin{vmatrix} 1 & 1 & 1 & 1 \\ 1 & 3 & -2 & k \\ 2 & 2k-2 & -k-2 & 3k-1 \\ 3 & k+2 & -3 & 2k+1 \end{vmatrix} = \begin{vmatrix} 1 & 0 & 0 & 0 \\ 1 & 2 & -3 & k-1 \\ 2 & 2k-4 & -k-4 & 3k-3 \\ 3 & k-1 & -6 & 2k-2 \end{vmatrix} = \begin{vmatrix} 2 & -3 & k-1 \\ 2k-4 & -k-4 & 3k-3 \\ k-1 & -6 & 2k-2 \end{vmatrix}$$
$$= \begin{vmatrix} 2 & -3 & k-1 \\ 2k-4 & -k-4 & 3k-3 \\ k-5 & 0 & 0 \end{vmatrix} = (k-5)[-9k+9+(k-1)(k+4)] = (k-1)(k-5)^2$$
This is nonzero exactly when k ≠ 1 and k ≠ 5, so the vectors are linearly independent for all such k.
Question 1(b) Let V be the vector space of polynomials in x of degree ≤ n over R. Prove
that the set {1, x, x2 , . . . , xn } is a basis for V. Extend this so that it becomes a basis for the
set of all polynomials in x.
Solution. Every polynomial of degree ≤ n is a linear combination of 1, x, x², …, xⁿ with coefficients from R, and such a combination is the zero polynomial only when all its coefficients vanish. Thus {1, x, x², …, xⁿ} is a basis for V.
We shall show that S = {1, x, x², …, xⁿ, xⁿ⁺¹, …} is a basis for the space of all polynomials.
(i) Linear independence: Let {x^{i₁}, …, x^{i_r}} be a finite subset of S. Let n = max{i₁, …, i_r}; then {x^{i₁}, …, x^{i_r}}, being a subset of the linearly independent set {1, x, x², …, xⁿ}, is linearly independent, which shows the linear independence of S.
(ii) Let f be any polynomial. If the degree of f is m, then f is a linear combination of {1, x, x², …, x^m}, which is a subset of S. Thus S is a basis of W, the space of all polynomials over R.
Question 2(a) Let T be a linear transformation on R³ whose matrix relative to the standard basis of R³ is
$$\begin{pmatrix} 2 & 1 & -1 \\ 1 & 2 & 2 \\ 3 & 3 & 4 \end{pmatrix}$$
Find the matrix of T relative to the basis B = {(1, 1, 1), (1, 1, 0), (0, 1, 1)}.
Solution. Let P be the matrix whose columns are the vectors of B:
$$P = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}$$
Computing P⁻¹AP shows that the matrix of T with respect to the given basis is
$$\begin{pmatrix} 7 & 6 & 3 \\ -5 & -3 & -3 \\ 3 & 0 & 4 \end{pmatrix}$$
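The stated matrix can be verified numerically (assuming numpy):

    import numpy as np

    A = np.array([[2, 1, -1],
                  [1, 2, 2],
                  [3, 3, 4]], dtype=float)
    P = np.array([[1, 1, 0],
                  [1, 1, 1],
                  [1, 0, 1]], dtype=float)   # columns: (1,1,1), (1,1,0), (0,1,1)
    M = np.linalg.inv(P) @ A @ P
    assert np.allclose(M, [[7, 6, 3], [-5, -3, -3], [3, 0, 4]])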
Question 2(c) Reduce the quadratic form
to a sum of squares. Also find the corresponding linear transformation, index and signature.
Solution. Put
$$X_1 = \frac{1}{\sqrt6}\,Z_1,\qquad X_2 = \sqrt{\tfrac37}\,Z_2,\qquad X_3 = \sqrt{\tfrac{7}{11}}\,Z_3$$
then Q(x1 , x2 , x3 ) is transformed to Z12 + Z22 + Z32 , which is its canonical form. Thus
Q(x1 , x2 , x3 ) is positive definite. The Index of Q(x1 , x2 , x3 ) = Number of positive squares in
its canonical form = 3. The signature of Q(x1 , x2 , x3 ) = Number of positive squares - the
number of negative squares in its canonical form = 3.
The required linear transformation which transforms Q(x1 , x2 , x3 ) to sums of squares is
given by (1), and the linear transformation which transforms it to its canonical form is given
by
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 1 & \frac13 & -\frac76 \\ 0 & 1 & \frac47 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} \frac{1}{\sqrt6} & 0 & 0 \\ 0 & \sqrt{\frac37} & 0 \\ 0 & 0 & \sqrt{\frac{7}{11}} \end{pmatrix}\begin{pmatrix} Z_1 \\ Z_2 \\ Z_3 \end{pmatrix}$$
UPSC Civil Services Main 2006 - Mathematics
Linear Algebra
Sunder Lal
Retired Professor of Mathematics
Panjab University
Chandigarh
Question 1(a) Let V be a vector space of all 2 × 2 matrices over the field F. Prove that V
has dimension 4 by exhibiting a basis for V.
Solution. Let
$$M_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},\ M_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},\ M_3 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},\ M_4 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$$
We will show that {M1, M2, M3, M4} is a basis of V over F.
{M1, M2, M3, M4} generate V: if $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in V$, then A = aM1 + bM2 + cM3 + dM4, where a, b, c, d ∈ F. Thus {M1, M2, M3, M4} is a set of generators for V over F.
{M1, M2, M3, M4} are linearly independent over F: if $aM_1 + bM_2 + cM_3 + dM_4 = \begin{pmatrix} a & b \\ c & d \end{pmatrix} = 0$ for a, b, c, d ∈ F, then clearly a = b = c = d = 0.
Hence {M1, M2, M3, M4} is a basis of V over F and dim V = 4.
Question 1(b) State the Cayley-Hamilton theorem and use it to find the inverse of $\begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}$.
Solution. Let A be an n × n matrix and let In be the n × n identity matrix. Then the degree-n polynomial |xIn − A| is called the characteristic polynomial of A. The Cayley-Hamilton theorem states that every matrix is a root of its characteristic polynomial:
if |xIn − A| = xⁿ + a1xⁿ⁻¹ + … + an, then Aⁿ + a1Aⁿ⁻¹ + … + anIn = 0.
Here A = $\begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}$ has characteristic polynomial |xI2 − A| = (x − 1)(x − 4) − 6 = x² − 5x − 2, so A² − 5A − 2I = 0 ⇒ A(A − 5I) = 2I. Thus
$$A^{-1} = \frac12(A - 5I) = \frac12\begin{pmatrix} -4 & 3 \\ 2 & -1 \end{pmatrix} = \begin{pmatrix} -2 & \frac32 \\ 1 & -\frac12 \end{pmatrix}$$
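A numerical check of the inverse just obtained (assuming numpy):

    import numpy as np

    A = np.array([[1, 3],
                  [2, 4]], dtype=float)
    A_inv = (A - 5*np.eye(2)) / 2             # from A^2 - 5A - 2I = 0
    assert np.allclose(A @ A_inv, np.eye(2))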
Question 2(a) If T : R² −→ R² is defined by T(x, y) = (2x − 3y, x + y), compute the matrix of T with respect to the basis B = {(1, 2), (2, 3)}.

Solution. T(1, 2) = (−4, 3) = 18(1, 2) − 11(2, 3) and T(2, 3) = (−5, 5) = 25(1, 2) − 15(2, 3), so the matrix of T with respect to B is $\begin{pmatrix} 18 & 25 \\ -11 & -15 \end{pmatrix}$.
Operation R3 − R1 gives
$$A \sim \begin{pmatrix} 1 & 2 & 6 & 3 \\ 0 & 1 & 0 & 0 \\ 0 & -4 & -9 & -5 \\ 0 & 1 & 2 & 1 \end{pmatrix}$$
Operations R3 + 4R2, R4 − R2 give
$$A \sim \begin{pmatrix} 1 & 2 & 6 & 3 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -9 & -5 \\ 0 & 0 & 2 & 1 \end{pmatrix}$$
and R4 + (2/9)R3 gives
$$A \sim \begin{pmatrix} 1 & 2 & 6 & 3 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -9 & -5 \\ 0 & 0 & 0 & -\frac19 \end{pmatrix}$$
Clearly |A| = 1 · 1 · (−9) · (−1/9) = 1 ≠ 0 ⇒ rank A = 4.
Question 2(c) Investigate for what values of λ and µ the equations
x+y+z = 6
x + 2y + 3z = 10
x + 2y + λz = µ
have (1) no solution (2) a unique solution (3) infinitely many solutions.
Solution.
(2) The equations will have a unique solution for all values of µ if the coefficient matrix $\begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 2 & \lambda \end{pmatrix}$ is non-singular, i.e.
$$\begin{vmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 2 & \lambda \end{vmatrix} = \begin{vmatrix} 1 & 0 & 0 \\ 1 & 1 & 2 \\ 1 & 1 & \lambda-1 \end{vmatrix} = \lambda - 1 - 2 \ne 0,\quad \text{i.e. } \lambda \ne 3.$$
Thus for λ 6= 3 and for all µ we have a unique solution which can be obtained by Cramer’s
rule or otherwise.
(1) If λ = 3, µ 6= 10 then the system is inconsistent and we have no solution.
(3) If λ = 3, µ = 10, the system will have infinitely many solutions obtained by solving
x + y = 6 − z, x + 2y = 10 − 3z ⇒ x = 2 + z, y = 4 − 2z, z is any real number.
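The three cases can be confirmed numerically by comparing ranks (a sketch, assuming numpy):

    import numpy as np

    def ranks(lam, mu):
        A = np.array([[1, 1, 1],
                      [1, 2, 3],
                      [1, 2, lam]], dtype=float)
        B = np.column_stack([A, [6, 10, mu]])
        return np.linalg.matrix_rank(A), np.linalg.matrix_rank(B)

    assert ranks(3, 5) == (2, 3)     # inconsistent: no solution
    assert ranks(4, 5) == (3, 3)     # unique solution
    assert ranks(3, 10) == (2, 2)    # rank < number of unknowns: infinitely many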
Question 2(d) Find the quadratic form q(x, y) corresponding to the symmetric matrix
$$A = \begin{pmatrix} 5 & -3 \\ -3 & 8 \end{pmatrix}$$
Solution. q(x, y) = (x y) A (x y)′ = 5x² − 6xy + 8y².