
MTH 102: Linear Algebra

Department of Mathematics and Statistics, Indian Institute of Technology Kanpur

Problem Set 2

Problems marked (T) are for discussion in the tutorial sessions.

1. (T) A square matrix P is called a permutation matrix if each row and column of P contains
exactly one 1 and the rest of the entries are 0. Determine all the 3 × 3 permutation matrices. Now, let

A = \begin{bmatrix} 0 & 0 & 6 \\ 1 & 2 & 3 \\ 0 & 4 & 5 \end{bmatrix}.

Write down the permutation matrix P such that P A is upper triangular. Which permutation
matrices P1 and P2 make P1 A P2 lower triangular?

Solution: To permute the rows of A, left multiply by the permutation matrix

P = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}, yielding P A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{bmatrix}.

Right multiplication by P2 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix} permutes the columns of A, yielding A P2 = \begin{bmatrix} 6 & 0 & 0 \\ 3 & 2 & 1 \\ 5 & 4 & 0 \end{bmatrix}.

Finally, rearrange the rows of A P2 by left multiplying by

P1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}, resulting in P1 A P2 = \begin{bmatrix} 6 & 0 & 0 \\ 5 & 4 & 0 \\ 3 & 2 & 1 \end{bmatrix}.
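A quick numerical sanity check of these permutation products (not part of the original solution; a minimal sketch assuming NumPy is available):

```python
import numpy as np

A = np.array([[0, 0, 6],
              [1, 2, 3],
              [0, 4, 5]])

# P reorders the rows of A as (row 2, row 3, row 1), giving an upper triangular matrix.
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])

# P2 reverses the columns of A; P1 then swaps rows 2 and 3 of A P2.
P2 = np.array([[0, 0, 1],
               [0, 1, 0],
               [1, 0, 0]])
P1 = np.array([[1, 0, 0],
               [0, 0, 1],
               [0, 1, 0]])

assert np.array_equal(P @ A, np.triu(P @ A))              # P A is upper triangular
assert np.array_equal(P1 @ A @ P2, np.tril(P1 @ A @ P2))  # P1 A P2 is lower triangular
```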

2. (T) Decide whether the following pairs of matrices are row-equivalent:

(a) \begin{bmatrix} 1 & 2 \\ 4 & 8 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 1 & 2 \end{bmatrix}

Solution: No; the first matrix is singular while the second is nonsingular, so they cannot be row-equivalent.

(b) \begin{bmatrix} 1 & 0 & 2 \\ 3 & -1 & 1 \\ 5 & -1 & 5 \end{bmatrix}, \begin{bmatrix} 1 & 0 & 2 \\ 0 & 2 & 10 \\ 2 & 0 & 4 \end{bmatrix}

Solution: Show that their RREFs (row reduced echelon forms) are the same. Equivalently, the first matrix can be transformed into the second by elementary row operations:

\begin{bmatrix} 1 & 0 & 2 \\ 3 & -1 & 1 \\ 5 & -1 & 5 \end{bmatrix}
\xrightarrow{R_2 \leftrightarrow R_3}
\begin{bmatrix} 1 & 0 & 2 \\ 5 & -1 & 5 \\ 3 & -1 & 1 \end{bmatrix}
\xrightarrow[R_3 \leftarrow R_3 - R_1]{R_2 \leftarrow R_2 - 5R_1}
\begin{bmatrix} 1 & 0 & 2 \\ 0 & -1 & -5 \\ 2 & -1 & -1 \end{bmatrix}
\xrightarrow[R_2 \leftarrow -2R_2]{R_3 \leftarrow R_3 - R_2}
\begin{bmatrix} 1 & 0 & 2 \\ 0 & 2 & 10 \\ 2 & 0 & 4 \end{bmatrix}.

(c) \begin{bmatrix} 2 & 1 & -1 \\ 1 & 1 & 0 \\ 4 & 3 & -1 \end{bmatrix}, \begin{bmatrix} 1 & 0 & 2 \\ 0 & 2 & 10 \end{bmatrix}

Solution: No; two matrices must have the same size in order to be row-equivalent.
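For part (b), the row equivalence can also be checked by comparing RREFs symbolically (a sketch assuming SymPy is available):

```python
from sympy import Matrix

M1 = Matrix([[1, 0, 2], [3, -1, 1], [5, -1, 5]])
M2 = Matrix([[1, 0, 2], [0, 2, 10], [2, 0, 4]])

# Matrices of the same size are row-equivalent iff their RREFs coincide.
rref1, _ = M1.rref()
rref2, _ = M2.rref()
print(rref1)          # Matrix([[1, 0, 2], [0, 1, 5], [0, 0, 0]])
assert rref1 == rref2
```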
2

3. Supply two examples each and explain their geometrical meaning.


(a) Two linear equations in two variables with exactly one solution.
Solution: x + y = 2, x − y = 0. They represent two lines in R^2 intersecting at a point.
(b) Two linear equations in two variables with infinitely many solutions.
Solution: x + y = 2, 2x + 2y = 4. They represent the same line in R^2.
(c) Two linear equations in two variables with no solutions.
Solution: x + y = 2, 2x + 2y = 1. They represent two parallel lines in R^2 with no intersection.
(d) Three linear equations in two variables with exactly one solution.
Solution: x + y = 2, x − y = 0, 2x − y = 1. They represent three lines in R^2 with a single point in common.
(e) Three linear equations in two variables with no solutions.
Solution: x + y = 2, x − y = 0, 2x + 2y = 1. They represent three lines in R^2 with no point in common.
4. Suppose that x and y are two distinct solutions of the system Ax = b. Prove that there are
infinitely many solutions to this system, by showing that λx + (1 − λ)y is also a solution, for
each λ ∈ R. Do you have a geometric interpretation?
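A small numerical illustration of the statement (not a proof; a sketch assuming NumPy, using a hypothetical singular system for which two distinct solutions are known). Geometrically, the solution set then contains the whole line through x and y.

```python
import numpy as np

# A singular system with two known, distinct solutions x and y (hypothetical example).
A = np.array([[1.0, 1.0], [2.0, 2.0]])
b = np.array([2.0, 4.0])
x = np.array([1.0, 1.0])   # A @ x == b
y = np.array([2.0, 0.0])   # A @ y == b

for lam in np.linspace(-2.0, 3.0, 11):
    z = lam * x + (1 - lam) * y
    assert np.allclose(A @ z, b)   # every point on the line through x and y is a solution
```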
5. Give examples of three matrices of size 3 × 4 that are in Row Reduced Echelon Form (RREF)
but are different from the ones given in Question 6. Give examples of matrices that are not in
RREF, specifying the reasons.
6. Is it possible to have
RREF([A|b]) = \begin{bmatrix} 1 & * & * & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix},
\begin{bmatrix} 0 & 1 & * & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix} or
\begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}?
Give reasons for your answer.
7. Let B be a square invertible matrix. Then, prove that the systems Ax = b and BAx = Bb are
row-equivalent.

Solution:
As B is invertible, there exist elementary matrices E_i such that B = E_1 E_2 · · · E_k. Thus, the
system BAx = Bb is obtained from Ax = b by k elementary row operations.
Conversely, as B^{-1} = E_k^{-1} E_{k-1}^{-1} · · · E_1^{-1} and the inverses of elementary matrices are also
elementary matrices, we obtain Ax = b from BAx = Bb by k elementary row operations. Thus, the
two systems are row-equivalent.

8. (T) Suppose Ax = b and Cx = b have the same solutions for every b. Is it true that A = C?

Solution: First, as Ax = b and Cx = b have the same solutions for every b, A and C have the same
shape, that is, the same number of rows and columns, as well as the same null space (take b = 0).
Now, if we take b to be the first column of A, then x = [ 1 0 . . . 0 ]^t solves Ax = b and
therefore also solves Cx = b, which in turn implies that the first columns of A and C are the same.
The same argument holds for the other columns as well. Thus A = C.

9. Find the coefficients a, b, c, d so that the graph of y = ax^3 + bx^2 + cx + d passes through
(1, 2), (−1, 6), (2, 3) and (0, 1).

Solution:
Since the curve passes through (0, 1), we get d = 1. The remaining three conditions give
a + b + c = 1, −a + b − c = 5 and 4a + 2b + c = 1. Apply Gaussian elimination.
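A quick check of the resulting coefficients (a sketch assuming NumPy; the system below encodes all four interpolation conditions, including d = 1):

```python
import numpy as np

# One row per interpolation condition; columns correspond to the unknowns a, b, c, d.
# Rows encode the points (1, 2), (-1, 6), (2, 3) and (0, 1) via a*x**3 + b*x**2 + c*x + d = y.
M = np.array([[ 1.0,  1.0,  1.0, 1.0],
              [-1.0,  1.0, -1.0, 1.0],
              [ 8.0,  4.0,  2.0, 1.0],
              [ 0.0,  0.0,  0.0, 1.0]])
rhs = np.array([2.0, 6.0, 3.0, 1.0])

a, b, c, d = np.linalg.solve(M, rhs)
print(a, b, c, d)   # expected: a = -1, b = 3, c = -1, d = 1
```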

10. Find matrices A and B with the given property or explain why you cannot:

(a) The only solution to Ax = [ 1 2 3 ]^t is x = [ 0 1 ]^t.
Solution: Any A satisfying the given equation has to be a 3 × 2 matrix. The linear system
has a unique solution when the rank of A is 2 and [ 1 2 3 ]^t ∈ {Ax : x ∈ R^2}. Among many
possibilities, one such A is
\begin{bmatrix} 1 & 1 \\ 0 & 2 \\ 0 & 3 \end{bmatrix}.

(b) The only solution to Bx = [ 0 1 ]^t is x = [ 1 2 3 ]^t.
Solution: Any B satisfying the given equation has to be a 2 × 3 matrix, which implies that
the null space of B, N(B), cannot be trivial. Hence we either have infinitely many solutions
(when [ 0 1 ]^t lies in the column space of B) or no solution. Thus, finding a B that
exhibits a unique solution is not possible.

11. Apply Gauss elimination to solve the system 2x + y + 2z = 3, 3x − y + 4z = 7 and 4x + 3y + 6z = 5.

Solution:
     
\begin{bmatrix} 2 & 1 & 2 & 3 \\ 3 & -1 & 4 & 7 \\ 4 & 3 & 6 & 5 \end{bmatrix}
\xrightarrow{R_2 \leftarrow R_2 - (3/2)R_1}
\begin{bmatrix} 2 & 1 & 2 & 3 \\ 0 & -5/2 & 1 & 5/2 \\ 4 & 3 & 6 & 5 \end{bmatrix}
\xrightarrow{R_3 \leftarrow R_3 - 2R_1}
\begin{bmatrix} 2 & 1 & 2 & 3 \\ 0 & -5/2 & 1 & 5/2 \\ 0 & 1 & 2 & -1 \end{bmatrix}
\xrightarrow{R_3 \leftarrow R_3 + (2/5)R_2}
\begin{bmatrix} 2 & 1 & 2 & 3 \\ 0 & -5/2 & 1 & 5/2 \\ 0 & 0 & 12/5 & 0 \end{bmatrix}

We can thus obtain the solution to the given linear system by solving the equivalent system

2x + y + 2z = 3
(−5/2)y + z = 5/2
(12/5)z = 0

The solution is x = 2, y = −1 and z = 0.
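The back substitution can be checked numerically (a minimal sketch assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 1.0, 2.0],
              [3.0, -1.0, 4.0],
              [4.0, 3.0, 6.0]])
b = np.array([3.0, 7.0, 5.0])

# Compare the hand-computed solution with a library solve.
assert np.allclose(np.linalg.solve(A, b), [2.0, -1.0, 0.0])
```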


 
12. (T) Using the Gauss-Jordan method, find the inverse of \begin{bmatrix} 1 & 2 & 2 \\ 2 & 1 & 2 \\ 2 & 2 & 1 \end{bmatrix}.

Solution:
   
\left[\begin{array}{ccc|ccc} 1 & 2 & 2 & 1 & 0 & 0 \\ 2 & 1 & 2 & 0 & 1 & 0 \\ 2 & 2 & 1 & 0 & 0 & 1 \end{array}\right]
\xrightarrow[R_3 \leftarrow R_3 - 2R_1]{R_2 \leftarrow R_2 - 2R_1}
\left[\begin{array}{ccc|ccc} 1 & 2 & 2 & 1 & 0 & 0 \\ 0 & -3 & -2 & -2 & 1 & 0 \\ 0 & -2 & -3 & -2 & 0 & 1 \end{array}\right]
\xrightarrow{R_3 \leftarrow R_3 - (2/3)R_2}
\left[\begin{array}{ccc|ccc} 1 & 2 & 2 & 1 & 0 & 0 \\ 0 & -3 & -2 & -2 & 1 & 0 \\ 0 & 0 & -5/3 & -2/3 & -2/3 & 1 \end{array}\right]
\xrightarrow{R_3 \leftarrow (-3/5)R_3}
\left[\begin{array}{ccc|ccc} 1 & 2 & 2 & 1 & 0 & 0 \\ 0 & -3 & -2 & -2 & 1 & 0 \\ 0 & 0 & 1 & 2/5 & 2/5 & -3/5 \end{array}\right]
\xrightarrow[R_1 \leftarrow R_1 - 2R_3]{R_2 \leftarrow R_2 + 2R_3}
\left[\begin{array}{ccc|ccc} 1 & 2 & 0 & 1/5 & -4/5 & 6/5 \\ 0 & -3 & 0 & -6/5 & 9/5 & -6/5 \\ 0 & 0 & 1 & 2/5 & 2/5 & -3/5 \end{array}\right]
\xrightarrow{R_2 \leftarrow (-1/3)R_2}
\left[\begin{array}{ccc|ccc} 1 & 2 & 0 & 1/5 & -4/5 & 6/5 \\ 0 & 1 & 0 & 2/5 & -3/5 & 2/5 \\ 0 & 0 & 1 & 2/5 & 2/5 & -3/5 \end{array}\right]
\xrightarrow{R_1 \leftarrow R_1 - 2R_2}
\left[\begin{array}{ccc|ccc} 1 & 0 & 0 & -3/5 & 2/5 & 2/5 \\ 0 & 1 & 0 & 2/5 & -3/5 & 2/5 \\ 0 & 0 & 1 & 2/5 & 2/5 & -3/5 \end{array}\right].

Thus, the inverse is \begin{bmatrix} -3/5 & 2/5 & 2/5 \\ 2/5 & -3/5 & 2/5 \\ 2/5 & 2/5 & -3/5 \end{bmatrix}.
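A quick numerical confirmation of the computed inverse (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, 2.0, 2.0],
              [2.0, 1.0, 2.0],
              [2.0, 2.0, 1.0]])
A_inv = np.array([[-3.0,  2.0,  2.0],
                  [ 2.0, -3.0,  2.0],
                  [ 2.0,  2.0, -3.0]]) / 5.0

assert np.allclose(A @ A_inv, np.eye(3))      # A times the claimed inverse is the identity
assert np.allclose(np.linalg.inv(A), A_inv)   # agrees with a library inverse
```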

13. (T) Let B be an n × n real skew-symmetric matrix. Show that I − B is nonsingular.

Solution:
Suppose, if possible, that I − B is singular. Then the system (I − B)x = 0 has a non-trivial solution,
say x_0 ≠ 0. Hence, Bx_0 = x_0. Also, x_0^t B x_0 ∈ R and hence

x_0^t B x_0 = (x_0^t B x_0)^t = x_0^t B^t x_0 = −x_0^t B x_0.

Thus, x_0^t B x_0 = 0. But 0 = x_0^t B x_0 = x_0^t (B x_0) = x_0^t x_0, and hence x_0 = 0, a contradiction.
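A numerical illustration with a random example (not a proof; a sketch assuming NumPy): for any real matrix R, B = R − R^t is skew-symmetric, and I − B comes out nonsingular.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal((5, 5))
B = R - R.T                     # a real skew-symmetric matrix: B.T == -B

# I - B has nonzero determinant, i.e. it is nonsingular, consistent with the proof above.
assert abs(np.linalg.det(np.eye(5) - B)) > 1e-10
```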

14. (T) For two n × n matrices A and B, show that det(AB) = det(A)det(B).

Solution: First, suppose that A is singular. Then AB is singular as well. We therefore have
det(A) = 0 and det(AB) = 0, which gives det(AB) = 0 = det(A)det(B).

Now we assume that A is non-singular. Recall that, for a non-singular square matrix, the
reduced row echelon form is the identity matrix, I. In other words, there exist elementary
matrices Es , Es−1 , . . . , E2 , E1 such that

A = Es Es−1 . . . E2 E1 I.

We therefore have
AB = Es Es−1 . . . E2 E1 B.
Thus the problem of showing det(AB) = det(A)det(B) reduces to showing
(a) det(Eij B) = det(Eij)det(B).
(b) det(Ei(c)B) = det(Ei(c))det(B).
(c) det(Eij(c)B) = det(Eij(c))det(B).
Now, for the proof, we use the defining properties of the determinant discussed in class. We have
(a) det(Eij B) = −det(B) (property 2) = det(Eij)det(B) (using det(Eij) = −det(I) = −1, which follows
from properties 2 and 1).
(b) det(Ei(c)B) = c det(B) (property 3a) = det(Ei(c))det(B) (using det(Ei(c)) = c det(I) = c, which
follows from properties 3a and 1).
(c) det(Eij(c)B) = det(B) (property 6) = det(Eij(c))det(B) (using det(Eij(c)) = 1, which follows
from applying property 6 to the identity matrix).
The result now follows:

det(AB) = det(Es Es−1 . . . E2 E1 B) = det(Es )det(Es−1 . . . E2 E1 B) = · · ·


= det(Es )det(Es−1 ) . . . det(E2 )det(E1 B)
= det(Es )det(Es−1 ) . . . det(E2 )det(E1 )det(B)
= det(Es )det(Es−1 ) . . . det(E2 E1 )det(B) = · · ·
= det(Es Es−1 . . . E2 E1 )det(B) = det(A)det(B).
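A numerical spot check of the product rule with random matrices (a sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) agrees with det(A) * det(B) up to floating-point error.
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
```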

15. For an n × n matrix A = [a_{ij}], prove that det(A) = det(A^T).

Solution:
Recall that a square matrix can be reduced to an upper triangular form using elementary row
operations. In other words, there exist elementary matrices E_s, E_{s-1}, . . . , E_2, E_1 such that

E_s E_{s-1} · · · E_2 E_1 A = U,

where U is an upper triangular matrix and each E_k, 1 ≤ k ≤ s, is either an elementary matrix
E_{ij} or E_{ij}(c) for some i, j and c. Writing E = E_s E_{s-1} · · · E_2 E_1, we therefore have

EA = U

with det(E) = det(E^T) = ±1. We also have det(U) = det(U^T) for the upper triangular matrix U. Thus

det(A) = det(U)/det(E),

and

A^T E^T = U^T

yields

det(A^T) = det(U^T)/det(E^T) = det(U)/det(E) = det(A).
16. Let A be an n × n matrix. Prove that

(a) If A^2 = 0 then A is singular.

Solution: 0 = det(A^2) = (det A)^2 ⇒ det A = 0.
(b) If A^2 = A and A ≠ I, then A is singular.
Solution: If A were nonsingular, it would be invertible, and then A = A^{-1}A^2 = A^{-1}A = I,
contradicting A ≠ I.

17. Consider the system Ax = b and let RREF([A|b]) be one of the matrices in Question 6. Now,
recall the matrices A_j, for 1 ≤ j ≤ 3 (defined to state Cramer's rule for solving a linear
system), that are obtained by replacing the j-th column of A by b. Then, we see that the above
system has NO solution even though det(A) = 0 = det(A_j) for 1 ≤ j ≤ 3.

18. (T) Let A be an invertible square matrix with integer entries. Show that A^{-1} has integer entries
if and only if det(A) = ±1.

Solution: If det(A) = ±1, then the entries of A^{-1} are integers, as A^{-1} = (1/det(A)) C^T, where C is
the matrix of co-factors. Conversely, if the entries of A^{-1} are integers, then det(A^{-1}) ∈ Z. As
det(A) ∈ Z and det(A) det(A^{-1}) = 1, we must have det(A) = ±1.

19. Let A be an n × n matrix. Prove that the following statements are equivalent:

(a) det(A) ≠ 0.
(b) A is invertible.
(c) The homogeneous system Ax = 0 has only the trivial solution.
(d) The row-reduced echelon form of A is In .
(e) A is a product of elementary matrices.
(f) The system Ax = b has a unique solution for every b.
(g) The system Ax = b is consistent for every b.

19a =⇒ 19b
Whenever det(A) ≠ 0, A^{-1} = C^t / det(A), where C is the co-factor matrix.
19b =⇒ 19a
As A is invertible, AA^{-1} = I_n and hence det(A) det(A^{-1}) = det(I_n) = 1. Hence, det(A) ≠ 0.
19b =⇒ 19c
As A is invertible, A^{-1}A = I_n. Let x_0 be a solution of the homogeneous system Ax = 0. Then,

x_0 = I_n x_0 = (A^{-1}A)x_0 = A^{-1}(Ax_0) = A^{-1} 0 = 0.



Thus, 0 is the only solution of the homogeneous system Ax = 0.


19c =⇒ 19d
Let x^t = [x_1, x_2, . . . , x_n]. As 0 is the only solution of the linear system Ax = 0, the final
equations obtained after row reduction are x_1 = 0, x_2 = 0, . . . , x_n = 0. These equations can be
rewritten as

1 · x_1 + 0 · x_2 + 0 · x_3 + · · · + 0 · x_n = 0
0 · x_1 + 1 · x_2 + 0 · x_3 + · · · + 0 · x_n = 0
0 · x_1 + 0 · x_2 + 1 · x_3 + · · · + 0 · x_n = 0
                  ⋮
0 · x_1 + 0 · x_2 + 0 · x_3 + · · · + 1 · x_n = 0.

That is, the final homogeneous system is given by I_n x = 0. Or, equivalently, the row-reduced
echelon form of the augmented matrix [A 0] is [I_n 0]. That is, the row-reduced echelon form of A is I_n.
19d =⇒ 19e
Suppose that the row-reduced echelon form of A is I_n. Then there exist elementary matrices
E_1, E_2, . . . , E_k such that

E_1 E_2 · · · E_k A = I_n.                  (1)

Now, the matrix E_j^{-1} is an elementary matrix and is the inverse of E_j for 1 ≤ j ≤ k. Therefore,
successively multiplying Equation (1) on the left by E_1^{-1}, E_2^{-1}, . . . , E_k^{-1}, we get

A = E_k^{-1} E_{k-1}^{-1} · · · E_2^{-1} E_1^{-1}

and thus A is a product of elementary matrices.


19e =⇒ 19b
Suppose A = E_1 E_2 · · · E_k, where the E_i's are elementary matrices. As elementary matrices
are invertible and the product of invertible matrices is also invertible, we get the required result.
19b =⇒ 19f
Observe that x_0 = A^{-1}b is the unique solution of the system Ax = b.
19f =⇒ 19g
The system Ax = b has a solution and hence, by definition, the system is consistent.
19g =⇒ 19b
For 1 ≤ i ≤ n, define e_i = (0, . . . , 0, 1, 0, . . . , 0)^t, with the 1 in the i-th position, and consider
the linear system Ax = e_i. By assumption, this system has a solution, say x_i, for each i, 1 ≤ i ≤ n.
Define a matrix B = [x_1, x_2, . . . , x_n]; that is, the i-th column of B is the solution of the system
Ax = e_i. Then

AB = A[x_1, x_2, . . . , x_n] = [Ax_1, Ax_2, . . . , Ax_n] = [e_1, e_2, . . . , e_n] = I_n.

Therefore, the matrix A is invertible.

20. Let A be an n × n matrix. Then prove that det(A) = 0 if and only if the system Ax = 0 has a
non-trivial solution.

21. Let A be an n × n matrix. Then, the two statements given below cannot hold together.

(a) The system Ax = b has a unique solution for every b.


(b) The system Ax = 0 has a non-trivial solution.

22. (T) A real square matrix A is said to be orthogonal if A^T A = AA^T = I. Show that if A is
orthogonal then det(A) = ±1.

Solution: Since AA^T = I, we have det(AA^T) = det(I) = 1. Now,

1 = det(AA^T) = det(A)det(A^T) = det(A)det(A) = (det(A))^2.

Thus, det(A) = ±1.

23. Let A = [a_{ij}] be an invertible matrix and let B = [p^{i-j} a_{ij}], for some p ≠ 0. Find the inverse of
B and also find det(B).

Solution: Note that B = DAD^{-1}, where D is the diagonal matrix with d_{ii} = p^i. Hence,
B^{-1} = DA^{-1}D^{-1}. Also,

det(B) = det(DAD^{-1}) = det(D) · det(A) · det(D^{-1}) = det(A).
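A numerical check of the identity B = DAD^{-1} and of both claims (a sketch assuming NumPy, reusing the invertible matrix from Problem 12 and the arbitrary choice p = 2):

```python
import numpy as np

# Reuse the invertible matrix from Problem 12 and pick p = 2 (any nonzero p works).
A = np.array([[1.0, 2.0, 2.0],
              [2.0, 1.0, 2.0],
              [2.0, 2.0, 1.0]])
n, p = 3, 2.0

i = np.arange(1, n + 1)
B = (p ** (i[:, None] - i[None, :])) * A   # B[i, j] = p**(i - j) * A[i, j]
D = np.diag(p ** i)                        # D = diag(p, p**2, ..., p**n)

assert np.allclose(B, D @ A @ np.linalg.inv(D))                                # B = D A D^{-1}
assert np.allclose(np.linalg.inv(B), D @ np.linalg.inv(A) @ np.linalg.inv(D))  # B^{-1} = D A^{-1} D^{-1}
assert np.isclose(np.linalg.det(B), np.linalg.det(A))                          # det(B) = det(A)
```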

24. Suppose the 4 × 4 matrix M has 4 equal rows all containing a, b, c, d. We know that det(M) = 0.
The problem is to find, by any method,

det(I + M) = \begin{vmatrix} 1+a & b & c & d \\ a & 1+b & c & d \\ a & b & 1+c & d \\ a & b & c & 1+d \end{vmatrix}.

Solution: Subtracting row 1 from rows 2, 3 and 4, we get

det(I + M) = \begin{vmatrix} 1+a & b & c & d \\ -1 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ -1 & 0 & 0 & 1 \end{vmatrix}.

Now, adding columns 2, 3 and 4 to column 1, we get

det(I + M) = \begin{vmatrix} 1+a+b+c+d & b & c & d \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{vmatrix}.

Thus, det(I + M) = 1 + a + b + c + d.
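A symbolic verification of the formula (a sketch assuming SymPy is available):

```python
from sympy import Matrix, eye, simplify, symbols

a, b, c, d = symbols('a b c d')

# M has four equal rows (a, b, c, d).
row = Matrix([[a, b, c, d]])
M = Matrix.vstack(row, row, row, row)

det_val = (eye(4) + M).det()
assert simplify(det_val - (1 + a + b + c + d)) == 0
```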

25. (T) For a complex matrix A = [a_{ij}], let \bar{A} = [\bar{a}_{ij}] and A^* = \bar{A}^T. Show that det(\bar{A}) = det(A^*) = \overline{det(A)}.
Therefore, if A is Hermitian (that is, A^* = A), then its determinant is real.

Solution: The first equality follows directly from the fact that a matrix and its transpose have the
same determinant (Problem 15):

det(\bar{A}) = det((\bar{A})^T) = det(A^*).

The equality det(\bar{A}) = \overline{det(A)} follows from the determinant formula:

det(\bar{A}) = \sum_{\sigma \in S_n} (\operatorname{sgn} \sigma)\, \overline{a_{1\sigma(1)}}\, \overline{a_{2\sigma(2)}} \cdots \overline{a_{n\sigma(n)}}
            = \overline{\sum_{\sigma \in S_n} (\operatorname{sgn} \sigma)\, a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)}} = \overline{det(A)}.
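A numerical spot check with a random complex matrix and with a Hermitian one built from it (a sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# det of the entrywise conjugate equals the conjugate of det(A).
assert np.isclose(np.linalg.det(np.conj(A)), np.conj(np.linalg.det(A)))

# For the Hermitian matrix H = A + A*, the determinant is (numerically) real.
H = A + A.conj().T
assert abs(np.linalg.det(H).imag) < 1e-9
```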

26. The numbers 1375, 1287, 4191 and 5731 are all divisible by 11. Prove that 11 also divides the
determinant of the matrix
\begin{bmatrix} 1 & 1 & 4 & 5 \\ 3 & 2 & 1 & 7 \\ 7 & 8 & 9 & 3 \\ 5 & 7 & 1 & 1 \end{bmatrix}.

Solution: Adding 1000 × Row_1, 100 × Row_2 and 10 × Row_3 to Row_4 (which does not change the
determinant), we have

\begin{vmatrix} 1 & 1 & 4 & 5 \\ 3 & 2 & 1 & 7 \\ 7 & 8 & 9 & 3 \\ 5 & 7 & 1 & 1 \end{vmatrix}
= \begin{vmatrix} 1 & 1 & 4 & 5 \\ 3 & 2 & 1 & 7 \\ 7 & 8 & 9 & 3 \\ 1375 & 1287 & 4191 & 5731 \end{vmatrix}.

Since every entry of the new Row_4 is divisible by 11, we may factor 11 out of that row; the remaining
matrix has integer entries, so the determinant is divisible by 11.
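An exact check of the divisibility claim (a sketch assuming SymPy, which computes the determinant over the integers):

```python
from sympy import Matrix

M = Matrix([[1, 1, 4, 5],
            [3, 2, 1, 7],
            [7, 8, 9, 3],
            [5, 7, 1, 1]])

det_M = M.det()
print(det_M, det_M % 11)   # the integer determinant and its remainder modulo 11
assert det_M % 11 == 0
```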


27. Compute the determinant of \begin{bmatrix} 1 & x_1 & x_1^2 & x_1^3 & x_1^4 \\ 1 & x_2 & x_2^2 & x_2^3 & x_2^4 \\ 1 & x_3 & x_3^2 & x_3^3 & x_3^4 \\ 1 & x_4 & x_4^2 & x_4^3 & x_4^4 \\ 1 & x_5 & x_5^2 & x_5^3 & x_5^4 \end{bmatrix}.

Solution: We give the solution for the general case. Let

A_n = \begin{bmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \end{bmatrix}.

If n = 2, det(A_2) = x_2 − x_1. We will prove that

det(A_n) = \prod_{i<j} (x_j − x_i).

Assume the result for n − 1 and define

F(x) = \begin{vmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n-1} & x_{n-1}^2 & \cdots & x_{n-1}^{n-1} \\ 1 & x & x^2 & \cdots & x^{n-1} \end{vmatrix}.

Then F is a polynomial of degree n − 1 with roots x_1, x_2, . . . , x_{n-1}. So, F(x) = c \prod_{i=1}^{n-1} (x − x_i),
where c is the coefficient of x^{n-1}, which (by expanding the determinant along the last row) is
det(A_{n-1}). Therefore,

F(x) = det(A_{n-1}) \prod_{i=1}^{n-1} (x − x_i).

The result follows for n as

det(A_n) = F(x_n) = det(A_{n-1}) \prod_{i=1}^{n-1} (x_n − x_i).

In particular, the determinant asked for in the problem is \prod_{1 ≤ i < j ≤ 5} (x_j − x_i).
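A numerical check of the product formula for n = 5 (a sketch assuming NumPy; the x_i values below are arbitrary distinct sample points):

```python
import numpy as np
from itertools import combinations

x = np.array([0.5, 1.0, 2.0, 3.0, 7.0])   # arbitrary distinct sample points

# Rows of V are [1, x_i, x_i**2, x_i**3, x_i**4], matching the matrix in the problem.
V = np.vander(x, increasing=True)

expected = np.prod([x[j] - x[i] for i, j in combinations(range(len(x)), 2)])
assert np.isclose(np.linalg.det(V), expected)
```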
