Linear Algebra Tutorial Sheets 1

This document contains tutorial sheets for a mathematics course. It covers the following topics: matrix addition, scalar multiplication, transposition, matrix multiplication, elementary row operations, matrices as linear maps, GEM and GJEM. It contains problems on these topics, including finding inverses of matrices, computing products of matrices, determining linear independence of vectors, computing ranks of matrices, and properties of Markov and stochastic matrices.

Department of Mathematics

Indian Institute of Technology, Bombay

MA 106 : Mathematics II Tutorial Sheet No.1


Autumn 2016 AR

Topics: Matrix addition, Scalar multiplication, Transposition, Matrix multiplication, Elementary row operations, matrices as linear maps, GEM and GJEM.

1. Show that every square matrix A can be written as S + T in a unique way, where S is
symmetric and T is skew-symmetric.
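A quick numerical check of this decomposition (a sketch using NumPy, which the sheet does not assume; the sample matrix A is arbitrary):

```python
import numpy as np

# Unique decomposition A = S + T with S symmetric, T skew-symmetric:
# S = (A + A^T)/2 and T = (A - A^T)/2.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
S = (A + A.T) / 2
T = (A - A.T) / 2

assert np.allclose(S, S.T)    # S is symmetric
assert np.allclose(T, -T.T)   # T is skew-symmetric
assert np.allclose(S + T, A)  # the two parts recover A
```

Uniqueness follows because adding any other symmetric/skew-symmetric pair would force their difference to be both symmetric and skew-symmetric, hence zero.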

2. A linear combination of matrices A and B of the same size is an expression of the form P = aA + bB, where a, b are arbitrary scalars. Show that for square matrices A, B the following are true: (i) If both are symmetric, then so is P. (ii) If both are skew-symmetric, then so is P. (iii) If both are upper triangular, then so is P.

3. Let A and B be symmetric matrices of the same size. Show that AB is symmetric if and only if AB = BA.

4. Consider the matrix

   [  1 ]
   [ −1 ]

   as a linear map R → R². Show that its range is a line through 0. Similarly, show that

   [  1 −1 ]
   [ −1  2 ]
   [  0  1 ]

   as a linear map from R² to R³ has its range as a plane through 0. Find its equation.

5. Consider the matrices:

   [ 1 1 ]   [ −1 0 ]   [ 2  0 ]   [ 1 1 ]   [ 1 1 ]   [ 0 1 ]
   [ 0 1 ] , [ −1 1 ] , [ 0 −3 ] , [ 0 0 ] , [ 1 1 ] , [ 0 0 ] .

   Determine the images of (i) the unit square {0 ≤ x ≤ 1, 0 ≤ y ≤ 1}, (ii) the unit circle {x² + y² = 1} and (iii) the unit disc {x² + y² ≤ 1} under the above matrices viewed as linear maps R² → R².

6. Find the inverses of the following matrices using elementary row operations:

   [ 1 3 −2 ]   [ 1 −x e^x ]   [ cos x  −sin x ]
   [ 2 5 −7 ] , [ 0  1  x  ] , [ sin x   cos x ] .
   [ 0 1 −4 ]   [ 0  0  1  ]

7. Compute the last row of the inverse of the following matrices:

        [  1 0 1 ]
   (i)  [  8 1 0 ] .
        [ −7 3 1 ]

        [  2  0 −1  4 ]
   (ii) [  5  1  0  1 ]
        [  0  1  3 −2 ] .
        [ −8 −1  2  1 ]
8. A Markov or stochastic matrix is an n × n matrix [a_ij] such that a_ij ≥ 0 and each row sums to 1, i.e. Σ_{j=1}^{n} a_ij = 1 for every i. Prove that the product of two Markov matrices is again a Markov matrix.
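A numerical illustration of the claim (a NumPy sketch, not part of the sheet; the two sample matrices are arbitrary row-stochastic matrices):

```python
import numpy as np

# Two 3x3 Markov matrices: nonnegative entries, each row sums to 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.4, 0.4, 0.2]])
Q = np.array([[0.2, 0.2, 0.6],
              [0.3, 0.3, 0.4],
              [0.0, 0.5, 0.5]])

# Row i of P @ Q is a convex combination of the rows of Q,
# so it is again nonnegative and sums to 1.
R = P @ Q

assert np.all(R >= 0)
assert np.allclose(R.sum(axis=1), 1.0)
```

The comment in the code is exactly the idea of the proof: each row of the product is a convex combination of stochastic rows.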

 
9. Let

   Π = [  3 −1  4 ]
       [ −1  5 −9 ]
       [  2 −6  5 ] .

   Compute the products. (Note the patterns.)

   (i) Π [1 0 0]ᵀ, Π [0 1 0]ᵀ, Π [0 0 1]ᵀ, [1 0 0] Π, [0 1 0] Π, [0 0 1] Π.

   (ii) [ λ 0 0 ]        [ λ 0 0 ]   [ 0 1 0 ]        [ 0 1 0 ]
        [ 0 µ 0 ] Π ,  Π [ 0 µ 0 ] , [ 1 0 0 ] Π ,  Π [ 1 0 0 ] .
        [ 0 0 ν ]        [ 0 0 ν ]   [ 0 0 1 ]        [ 0 0 1 ]

   (iii) [ 1 λ 0 ]        [ 1 λ 0 ]   [ 1 0 0 ]        [ 1 0 0 ]
         [ 0 1 0 ] Π ,  Π [ 0 1 0 ] , [ 0 1 0 ] Π ,  Π [ 0 1 0 ] .
         [ 0 0 1 ]        [ 0 0 1 ]   [ µ 0 1 ]        [ µ 0 1 ]

10. Let A be a square matrix. Prove that there is a set of elementary matrices E1, E2, ..., EN such that EN · · · E2 E1 A is either the identity matrix or its bottom row is zero.

11. List all possibilities for the reduced row echelon matrices of order 4 × 4 having exactly one pivot. Count the number of free parameters (degrees of freedom) in each case. For example, one possibility is

    [ 0 1 * * ]
    [ 0 0 0 0 ]
    [ 0 0 0 0 ]
    [ 0 0 0 0 ]

    wherein there are 2 degrees of freedom. Repeat for 0, 2, 3 and 4 pivots.

 
12. Let

    A = [ 1 1 0 ]
        [ 0 1 2 ]
        [ 0 0 1 ] .

    Verify that (A − I)³ = [0] and so the inverse is A² − 3A + 3I. Compute the same and verify by multiplying.
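The verification can be mirrored numerically (a NumPy sketch, outside the sheet itself):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])
I = np.eye(3)
N = A - I  # nilpotent part

# (A - I)^3 = 0, and expanding (N + I)(N^2 - 3N + 3I)... gives
# A (A^2 - 3A + 3I) = A^3 - 3A^2 + 3A = I since (A - I)^3 = 0.
assert np.allclose(np.linalg.matrix_power(N, 3), np.zeros((3, 3)))

Ainv = A @ A - 3 * A + 3 * I
assert np.allclose(A @ Ainv, I)
assert np.allclose(Ainv @ A, I)
```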

   
13. Let

    X = [ λ 1 ]
        [ 0 λ ] .

    (i) Show that

    X^n = [ λ^n   n λ^(n−1) ]
          [ 0     λ^n       ] ,

    for all λ ∈ R, n ≥ 1.
    (ii) If, as per the standard convention, we let X⁰ = I (even when X = [0]), then show that

    e^X = e^λ [ 1 1 ]
              [ 0 1 ] .

    (iii) Show that (i) also holds for n = 0 and for negative integers n when λ ≠ 0.
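A spot-check of the power formula, including a negative exponent as in (iii) (NumPy sketch; the values of λ and n are arbitrary):

```python
import numpy as np

lam, n = 2.0, 5
X = np.array([[lam, 1.0],
              [0.0, lam]])

# Claimed closed form for X^n.
expected = np.array([[lam**n, n * lam**(n - 1)],
                     [0.0,    lam**n]])
assert np.allclose(np.linalg.matrix_power(X, n), expected)

# Negative powers (matrix_power inverts first); valid since lam != 0.
m = -3
expected_neg = np.array([[lam**m, m * lam**(m - 1)],
                         [0.0,    lam**m]])
assert np.allclose(np.linalg.matrix_power(X, m), expected_neg)
```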

MA 106 : Mathematics II Tutorial Sheet No.2
Autumn 2016 AR

Topics: Rⁿ, subspaces, linear independence, rank of a matrix, solvability of linear systems using rank.

1. Suppose that the state of land use in a city area in 2003 was

   1 (Residential) 30 percent
   2 (Commercial) 20 percent
   3 (Industrial) 50 percent

   Estimate the states of land use in 2008, 2013, 2018, assuming that the transitional probabilities for 5-year intervals are given by the stochastic matrix

   [ 0.8 0.1 0.1 ]
   [ 0.1 0.7 0.2 ]
   [ 0.0 0.1 0.9 ] ,

   where the (i, j)th entry is the probability for the ith type to change to the jth type. (E.g. 0.2 is the probability for commercially used land to become industrial in a 5-year interval.)
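Since the (i, j) entry is the probability of type i becoming type j, the state row vector is multiplied by the matrix on the right at each 5-year step. A NumPy sketch of the iteration (not part of the sheet):

```python
import numpy as np

P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.7, 0.2],
              [0.0, 0.1, 0.9]])

# 2003 state: residential, commercial, industrial (in percent).
state = np.array([30.0, 20.0, 50.0])

for year in (2008, 2013, 2018):
    state = state @ P  # one 5-year transition
    print(year, np.round(state, 2))

# One step from 2003 gives the 2008 state [26, 22, 52].
```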

2. Find whether the following sets of vectors are linearly dependent or independent:
(i) [1, −1, 1], [1, 1, −1], [−1, 1, 1], [0, 1, 0].
(ii) [1, 9, 9, 8], [2, 0, 0, 3], [2, 0, 0, 8].
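Linear (in)dependence can be confirmed via the rank of the matrix whose rows are the given vectors (NumPy sketch, outside the sheet):

```python
import numpy as np

# (i) Four vectors in R^3 can never be independent; rank < 4 confirms dependence.
A = np.array([[1, -1, 1],
              [1, 1, -1],
              [-1, 1, 1],
              [0, 1, 0]])
assert np.linalg.matrix_rank(A) < 4  # linearly dependent

# (ii) Three vectors in R^4: full rank 3 means they are independent.
B = np.array([[1, 9, 9, 8],
              [2, 0, 0, 3],
              [2, 0, 0, 8]])
assert np.linalg.matrix_rank(B) == 3  # linearly independent
```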

3. Find the ranks of the following matrices:

   (i)  [  8 −4 ]        (ii) [ m n ]                  (iii) [ 0 8 −1 ]
        [ −2  1 ]             [ n m ]  (m² ≠ n²)             [ 1 2  0 ]
        [  6 −3 ]             [ p p ]                        [ 0 0  3 ]
                                                             [ 0 4  5 ]

4. Solve the following systems of linear equations in the unknowns x1, ..., x5 by GEM:

   (i)  2x3 − 2x4 + x5 = 2
        2x2 − 8x3 + 14x4 − 5x5 = 2
        x2 + 3x3 + x5 = α

   (ii) 2x1 − 2x2 + x3 + x4 = 1
        −2x2 + x3 + 7x4 = 0
        3x1 − x2 + 4x3 − 2x4 = −2

5. Determine the equilibrium solution (D1 = S1, D2 = S2) of the two-commodity market with linear model (D, S, P) = (demand, supply, price):

   D1 = 40 − 2P1 − P2,    S1 = 4P1 − P2 + 4
   D2 = 16 + 5P1 − 2P2,   S2 = 3P2 − 4.
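Setting D1 = S1 and D2 = S2 and collecting the price terms reduces the model to a 2 × 2 linear system, which a solver can check (NumPy sketch, not part of the sheet):

```python
import numpy as np

# 40 - 2P1 - P2 = 4P1 - P2 + 4   ->   6*P1 + 0*P2 = 36
# 16 + 5P1 - 2P2 = 3P2 - 4       ->  -5*P1 + 5*P2 = 20
A = np.array([[6.0, 0.0],
              [-5.0, 5.0]])
b = np.array([36.0, 20.0])

P1, P2 = np.linalg.solve(A, b)
assert np.allclose([P1, P2], [6.0, 10.0])  # equilibrium prices

# Sanity check: demand equals supply at these prices.
assert np.isclose(40 - 2 * P1 - P2, 4 * P1 - P2 + 4)
assert np.isclose(16 + 5 * P1 - 2 * P2, 3 * P2 - 4)
```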

6. For the following linear systems, find solvability by comparing the ranks of the coefficient matrix and the augmented matrix. Write down a basis for the solutions of the associated homogeneous systems and hence describe the general solution of each of the systems.

   (i)  −2x4 + x5 = 2
        2x2 − 2x3 + 14x4 − x5 = 2
        2x2 + 3x3 + 13x4 + x5 = 3

   (ii) 2x1 − 2x2 + x3 + x4 = 1
        −2x2 + x3 − x4 = 2
        x1 + x2 + 2x3 − x4 = −2

7. Is the given set of vectors a vector space?


(i) All vectors [v1 , v2 , v3 ]T in R3 such that 3v1 − 2v2 + v3 = 0, 4v1 + 5v2 = 0. (ii) All
vectors in R2 with components less than 1 in absolute value.

8. For a < b, consider the system of equations:

x + y + z = 1
ax + by + 2z = 3
a2 x + b2 y + 4z = 9.

Find the pairs (a, b) for which the system has infinitely many solutions.

9. Show that the row space of a matrix does not change under row operations. Show that the dimension of the column space is unchanged by row operations.
   Proof:
   (i) The new rows are linear combinations of the previous rows and vice versa.
   (ii) Suppose that C1, ..., Cn are the columns of a matrix. If an ERO is applied through a matrix E, then the new columns are EC1, ..., ECn. If Cj1, ..., Cjr are linearly independent then so are ECj1, ..., ECjr and vice versa, due to the invertibility of E.

Department of Mathematics
Indian Institute of Technology, Bombay

MA 106 : Mathematics II Tutorial Sheet No.3


Autumn 2016 AR

Topics: Determinants, ranks by determinants, Adjoints, Inverses, Cramer’s rule.

1. Find the rank by determinants. Verify by row reduction.

   (i)  [  0 2 −3 ]       (ii) [  4  3 ]       (iii) [ −2 −√3 −√2 ]
        [  2 0  5 ]            [ −8 −6 ]             [ −1   0   1 ]
        [ −3 5  0 ]            [ 16 12 ]             [  2  √3  √2 ]

2. Find the values of β for which Cramer’s rule is applicable. For the remaining value(s)
of β, find the number of solutions.

x + 2y + 3z = 20
x + 3y + z = 13
x + 6y + βz = β.
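Cramer's rule itself is a short computation: replace each column of the coefficient matrix by the right-hand side and divide determinants. A NumPy sketch (not part of the sheet; the chosen β = 2 is an arbitrary value where the rule applies, since the coefficient determinant works out to β + 5):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule; requires det(A) != 0."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b  # replace column j by the right-hand side
        x[j] = np.linalg.det(Aj) / d
    return x

beta = 2.0
A = np.array([[1.0, 2.0, 3.0],
              [1.0, 3.0, 1.0],
              [1.0, 6.0, beta]])
b = np.array([20.0, 13.0, beta])

assert np.isclose(np.linalg.det(A), beta + 5)  # rule applicable iff beta != -5
assert np.allclose(cramer_solve(A, b), np.linalg.solve(A, b))
```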

3. Find whether the following set of vectors is linearly dependent or independent:

{ai + bj + ck, bi + cj + ak, ci + aj + bk}.

4. Consider the system of equations

   x + λz = λ − 1
   x + λy = λ + 1
   λx + y + 3z = 2λ − 1 (or 1 − 2λ).

   Find the values of λ for which Cramer's rule can be used. For the remaining values of λ, discuss the solvability of the linear system.

5. Find the matrices of minors, cofactors and the adjoint of the following matrices:

   [ a b ]   [ 0 9 5 ]   [  2 √3 √2 ]   [ 0.8 0.1 0.1 ]   [ 3 2  1 0 ]
   [ c d ] , [ 2 0 0 ] , [ −1  0  1 ] , [ 0.1 0.7 0.2 ] , [ 0 1  0 1 ]
             [ 0 2 0 ]   [  2 √3 √2 ]   [ 0.0 0.1 0.9 ]   [ 0 2 −1 0 ]
                                                          [ 0 0  0 1 ] .


6. In the previous problem verify that A Adj(A) = Adj(A) A = (det A) I. Hence compute the inverses in the valid cases.

7. Solve by Cramer’s rule and verify by Gauss elimination.

5x − 3y = 37
−2x + 7y = −38.

8. Solve by Cramer’s rule and verify by Gauss elimination.

x + 2y + 3z = 20
7x + 3y + z = 13
x + 6y + 2z = 0.

 
9. Invert the matrix

   H = [ 1    1/2  1/3 ]
       [ 1/2  1/3  1/4 ]
       [ 1/3  1/4  1/5 ] .
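This H is the 3 × 3 Hilbert matrix, whose inverse famously has integer entries. A NumPy check (not part of the sheet):

```python
import numpy as np

H = np.array([[1,   1/2, 1/3],
              [1/2, 1/3, 1/4],
              [1/3, 1/4, 1/5]])
Hinv = np.linalg.inv(H)

# Known exact inverse of the 3x3 Hilbert matrix.
expected = np.array([[  9,  -36,   30],
                     [-36,  192, -180],
                     [ 30, -180,  180]])

assert np.allclose(Hinv, expected)
assert np.allclose(H @ Hinv, np.eye(3))
```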

10. (Vandermonde determinant)

    (a) Prove that

        det [ 1   1   1  ]
            [ a   b   c  ]  =  (b − a)(c − a)(c − b).
            [ a²  b²  c² ]

    (b) Prove an analogous formula for n × n.
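A numerical spot-check of part (a) (NumPy sketch; the three distinct sample values are arbitrary):

```python
import numpy as np

a, b, c = 2.0, 5.0, -3.0  # any distinct values
V = np.array([[1,    1,    1],
              [a,    b,    c],
              [a**2, b**2, c**2]])

# det V should equal the product of pairwise differences.
assert np.isclose(np.linalg.det(V), (b - a) * (c - a) * (c - b))
```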

11. (Wronskian) Let f1, f2, ..., fn be functions over some interval (a, b). Their Wronskian is another function on (a, b) defined by a determinant involving the given functions and their derivatives up to order n − 1:

                          | f1         f2         ···  fn         |
                          | f1′        f2′        ···  fn′        |
    W_{f1,f2,...,fn}(x) = | ...        ...             ...        | .
                          | f1^(n−1)   f2^(n−1)   ···  fn^(n−1)   |

    Prove that if c1 f1 + c2 f2 + · · · + cn fn = 0 holds over the interval (a, b) for some constants c1, c2, ..., cn and W_{f1,f2,...,fn}(x0) ≠ 0 at some x0, then c1 = c2 = · · · = cn = 0. In other words, nonvanishing of W_{f1,f2,...,fn} at a single point establishes linear independence of f1, f2, ..., fn on (a, b).
    Caution: The converse is false. W ≡ 0 does not imply that f1, f2, ..., fn are linearly dependent on (a, b), though one can prove the existence of a subinterval of (a, b) on which linear dependence holds.
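For two functions the Wronskian is just f1 f2′ − f2 f1′. A small sketch for f1 = sin, f2 = cos, whose derivatives are written out explicitly (NumPy used only for the trig functions):

```python
import numpy as np

def wronskian_sin_cos(x):
    # W(x) = f1(x) f2'(x) - f2(x) f1'(x)
    #      = sin(x) * (-sin(x)) - cos(x) * cos(x) = -1 for every x.
    return np.sin(x) * (-np.sin(x)) - np.cos(x) * np.cos(x)

x0 = 0.7  # any sample point
assert np.isclose(wronskian_sin_cos(x0), -1.0)
# W(x0) != 0 at one point already forces sin and cos to be
# linearly independent on any interval containing x0.
```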

Department of Mathematics
Indian Institute of Technology, Bombay

MA 106 : Mathematics II Tutorial Sheet No.4


Autumn 2016 AR

Topics: Expansion in a basis, orthogonal sets, orthonormal basis, Gram-Schmidt process, Bessel's inequality.

1. (Resolution into orthogonal components) Let u be a nonzero vector and v be any other vector. Let w = v − (⟨v, u⟩/‖u‖²) u as in the Gram-Schmidt process. Then show that

   v = (⟨v, u⟩/‖u‖²) u + w

   is a resolution of v into two components: one parallel to u and the other orthogonal to u.
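A numerical illustration of the resolution (NumPy sketch; u and v are arbitrary sample vectors):

```python
import numpy as np

u = np.array([2.0, -2.0, 1.0])
v = np.array([3.0, 1.0, 4.0])

coeff = np.dot(v, u) / np.dot(u, u)  # <v, u> / ||u||^2
parallel = coeff * u                 # component along u
w = v - parallel                     # Gram-Schmidt residual

assert np.isclose(np.dot(w, u), 0.0)  # w is orthogonal to u
assert np.allclose(parallel + w, v)   # the two components recover v
```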

     


 2 1 2  
      
2. Verify that the set of vectors −2 , 2 ,  1  is an orthogonal set in R3 . Is it
     

       
−2 

 1 
2
 
1
 
a basis? If yes, express 1 as a linear combination of these vectors and verify that
 
 
1
Bessel’s inequaliy is an equality.

3. Orthogonalize the following set of row-vectors in R⁴.

   {[1, 1, 1, 1], [1, 1, −1, −1], [1, 1, 0, 0], [−1, 1, −1, 1]}

   Do you get an orthogonal basis?
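The Gram-Schmidt process can be sketched directly (NumPy, not part of the sheet; vectors whose residual is zero are dropped, which is exactly how a dependent input fails to produce a basis):

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Orthogonalize an ordered list of vectors; zero residuals are dropped."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for b in basis:
            w -= (np.dot(w, b) / np.dot(b, b)) * b  # subtract projection
        if np.linalg.norm(w) > tol:
            basis.append(w)
    return basis

vecs = [[1, 1, 1, 1], [1, 1, -1, -1], [1, 1, 0, 0], [-1, 1, -1, 1]]
basis = gram_schmidt(vecs)

# The surviving vectors are pairwise orthogonal.
for i in range(len(basis)):
    for j in range(i + 1, len(basis)):
        assert abs(np.dot(basis[i], basis[j])) < 1e-9

print(len(basis), "orthogonal vectors survive")  # fewer than 4 means no basis
```

Here the third input vector is the average of the first two, so its residual vanishes and only three orthogonal vectors survive.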

4. Orthogonalize the following ordered set of row-vectors in R⁴.

   {[1, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1], [0, 1, 1, 0], [0, 1, 0, 1], [0, 0, 1, 1]}

   Do you get an orthogonal basis? Does [−2, −1, 1, 2] belong to the linear span? Use Bessel's inequality.

5. For the following linear homogeneous systems, write down orthogonal bases for the solution spaces.

   (i)  −2x4 + x5 = 0
        2x2 − 2x3 + 14x4 − x5 = 0
        2x2 + 3x3 + 13x4 + x5 = 0

   (ii) 2x1 − 2x2 + x3 + x4 = 0
        −2x2 + x3 − x4 = 0
        x1 + x2 + 2x3 − x4 = 0

   (This is a variant of problem 6 in Sheet No. 2.)

6. For the following linear homogeneous systems, write down orthogonal bases for the solution spaces.

   (i)  2x3 − 2x4 + x5 = 0
        2x2 − 8x3 + 14x4 − 5x5 = 0
        x2 + 3x3 + x5 = 0

   (ii) 2x1 − 2x2 + x3 + x4 = 0
        −2x2 + x3 + 7x4 = 0
        3x1 − x2 + 4x3 − 2x4 = 0

7. Find whether the following sets of vectors are linearly dependent or independent by orthogonalizing them:
   (i) [1, −1, 1], [1, 1, −1], [−1, 1, 1], [0, 1, 0].
   (ii) [2, 0, 0, 3], [2, 0, 0, 8], [2, 0, 1, 3].

8. Orthonormalize the ordered set in C⁵:

   {[1, ı, 0, 0, 0], [0, 1, ı, 0, 0], [0, 0, 1, ı, 0], [0, 0, 0, 1, ı]}

   relative to the unitary inner product ⟨v, w⟩ = w* v = Σ_{j=1}^{5} vj w̄j.
   Use Bessel's inequality to find whether [1, ı, 1, ı, 1] is in the (complex) linear span of the above set or not.

9. Let S = {v1, v2, ..., vk} be any ordered set in Rⁿ and T = {w1, w2, ..., wk} be the set resulting from the Gram-Schmidt process applied to S. Prove that for j = 1, 2, ..., k, the linear span of {v1, v2, ..., vj} equals that of {w1, w2, ..., wj}. Conclude that wj is orthogonal to v1, ..., vj−1.
   Hint: Use induction on j.

10. Let {p, q, r} be a linearly independent ordered set in R³. Let {x(= p), y, z} be the orthogonal set resulting from the Gram-Schmidt process. Show that z must be a scalar multiple of p × q.

 
11. For two nonzero vectors v, w ∈ Rⁿ, we define the angle between them by

    θ = cos⁻¹( ⟨v, w⟩ / (‖v‖ ‖w‖) ).

    Note that by the Cauchy–Schwarz inequality, ⟨v, w⟩/(‖v‖ ‖w‖) ∈ [−1, 1], so θ ∈ [0, π]. Moreover, v ⊥ w (orthogonal) if and only if θ = π/2, as expected.
    Show that
    (i) θ = 0 if and only if w = (‖w‖/‖v‖) v, a positive scalar multiple of v (parallel).
    (ii) θ = π if and only if w = −(‖w‖/‖v‖) v, a negative scalar multiple of v (anti-parallel).
    (iii) ‖v + w‖² = ‖v‖² + ‖w‖² + 2 ‖v‖ ‖w‖ cos θ.
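A numerical sanity check of the identity in (iii) (NumPy sketch; v and w are arbitrary sample vectors):

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
w = np.array([-2.0, 1.0, 3.0])

nv, nw = np.linalg.norm(v), np.linalg.norm(w)
theta = np.arccos(np.dot(v, w) / (nv * nw))  # angle between v and w

# ||v + w||^2 = ||v||^2 + ||w||^2 + 2 ||v|| ||w|| cos(theta)
lhs = np.linalg.norm(v + w) ** 2
rhs = nv**2 + nw**2 + 2 * nv * nw * np.cos(theta)
assert np.isclose(lhs, rhs)
```

The identity is immediate once ‖v‖ ‖w‖ cos θ is recognized as ⟨v, w⟩; the code just makes that substitution visible.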
