
MASSACHUSETTS INSTITUTE OF TECHNOLOGY

Department of Electrical Engineering and Computer Science

6.241: Dynamic Systems—Fall 2003

Homework 1 Solutions

Exercise 1.1 a) Given that A1 and A4 are square matrices, we know that A is square:
\[
A = \begin{bmatrix} A_1 & A_2 \\ 0 & A_4 \end{bmatrix}
  = \begin{bmatrix} A_1 & A_2 \\ 0 & I \end{bmatrix}
    \begin{bmatrix} I & 0 \\ 0 & A_4 \end{bmatrix}.
\]
Note that
\[
\det \begin{bmatrix} I & 0 \\ 0 & A_4 \end{bmatrix} = \det(I)\det(A_4) = \det(A_4),
\]
which can be verified by recursively expanding along the principal minors. Also, by elementary row operations, we have
\[
\det \begin{bmatrix} A_1 & A_2 \\ 0 & I \end{bmatrix}
= \det \begin{bmatrix} A_1 & 0 \\ 0 & I \end{bmatrix} = \det(A_1).
\]
Thus, finally using the fact that det(AB) = det(A) det(B), we have
\[
\det(A) = \det(A_1)\det(A_4).
\]
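This identity is easy to sanity-check numerically; a minimal NumPy sketch (the block sizes and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# Square diagonal blocks A1 (3x3) and A4 (2x2), arbitrary off-diagonal A2.
A1 = rng.standard_normal((3, 3))
A2 = rng.standard_normal((3, 2))
A4 = rng.standard_normal((2, 2))
A = np.block([[A1, A2], [np.zeros((2, 3)), A4]])

# For block upper-triangular A, det(A) = det(A1) * det(A4).
assert np.isclose(np.linalg.det(A), np.linalg.det(A1) * np.linalg.det(A4))
```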

b) Assume A_1^{-1} and A_4^{-1} exist. Then
\[
A A^{-1} = \begin{bmatrix} A_1 & A_2 \\ 0 & A_4 \end{bmatrix}
           \begin{bmatrix} B_1 & B_2 \\ B_3 & B_4 \end{bmatrix}
         = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix},
\]
which yields four matrix equations:

1. A_1 B_1 + A_2 B_3 = I,

2. A_1 B_2 + A_2 B_4 = 0,

3. A_4 B_3 = 0,

4. A_4 B_4 = I.

From Eqn (4), B_4 = A_4^{-1}, with which Eqn (2) yields B_2 = -A_1^{-1} A_2 A_4^{-1}. Also, from Eqn (3), B_3 = 0, with which Eqn (1) gives B_1 = A_1^{-1}. Therefore,
\[
A^{-1} = \begin{bmatrix} A_1^{-1} & -A_1^{-1} A_2 A_4^{-1} \\ 0 & A_4^{-1} \end{bmatrix}.
\]
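The block formula above can be verified against a direct product (a NumPy sketch with arbitrary dimensions; random Gaussian blocks are almost surely invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
A1 = rng.standard_normal((3, 3))
A2 = rng.standard_normal((3, 2))
A4 = rng.standard_normal((2, 2))
A = np.block([[A1, A2], [np.zeros((2, 3)), A4]])

A1i = np.linalg.inv(A1)
A4i = np.linalg.inv(A4)
# Inverse of a block upper-triangular matrix, assembled from the derived blocks.
Ainv = np.block([[A1i, -A1i @ A2 @ A4i], [np.zeros((2, 3)), A4i]])

assert np.allclose(A @ Ainv, np.eye(5))
```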

Exercise 1.2 a)
\[
\begin{bmatrix} 0 & I \\ I & 0 \end{bmatrix}
\begin{bmatrix} A_1 & A_2 \\ A_3 & A_4 \end{bmatrix}
= \begin{bmatrix} A_3 & A_4 \\ A_1 & A_2 \end{bmatrix}
\]

b) Let us find
\[
B = \begin{bmatrix} B_1 & B_2 \\ B_3 & B_4 \end{bmatrix}
\]
such that
\[
BA = \begin{bmatrix} A_1 & A_2 \\ 0 & A_4 - A_3 A_1^{-1} A_2 \end{bmatrix}.
\]
The above equation implies four equations for the submatrices:

1. B_1 A_1 + B_2 A_3 = A_1,

2. B_1 A_2 + B_2 A_4 = A_2,

3. B_3 A_1 + B_4 A_3 = 0,

4. B_3 A_2 + B_4 A_4 = A_4 - A_3 A_1^{-1} A_2.

The first two equations yield B_1 = I and B_2 = 0. Express B_3 from the third equation as B_3 = -B_4 A_3 A_1^{-1} and plug it into the fourth. After gathering the terms we get B_4 (A_4 - A_3 A_1^{-1} A_2) = A_4 - A_3 A_1^{-1} A_2, which turns into an identity if we set B_4 = I. Therefore
\[
B = \begin{bmatrix} I & 0 \\ -A_3 A_1^{-1} & I \end{bmatrix}.
\]

c) Using linear operations on rows we see that det(B) = 1. Then
\[
\det(A) = \det(B)\det(A) = \det(BA) = \det(A_1)\,\det\!\left(A_4 - A_3 A_1^{-1} A_2\right).
\]
Note that A_4 - A_3 A_1^{-1} A_2 does not have to be invertible for this proof.
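This Schur-complement determinant formula checks out numerically; a NumPy sketch with arbitrary block sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
A1 = rng.standard_normal((3, 3))
A2 = rng.standard_normal((3, 2))
A3 = rng.standard_normal((2, 3))
A4 = rng.standard_normal((2, 2))
A = np.block([[A1, A2], [A3, A4]])

# det(A) = det(A1) * det(A4 - A3 A1^{-1} A2), requiring only A1 invertible.
schur = A4 - A3 @ np.linalg.inv(A1) @ A2
assert np.isclose(np.linalg.det(A), np.linalg.det(A1) * np.linalg.det(schur))
```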

Exercise 1.3 a) We have to prove that det(I − AB) = det(I − BA), where A is p × q and B is q × p.

Proof: Since I − AB and I − BA are square,
\[
\det(I - BA) = \det \begin{bmatrix} I & 0 \\ B & I - BA \end{bmatrix}
= \det \left( \begin{bmatrix} I & A \\ B & I \end{bmatrix}
              \begin{bmatrix} I & -A \\ 0 & I \end{bmatrix} \right),
\]
yet, from Exercise 1.1, we have
\[
\det \begin{bmatrix} I & -A \\ 0 & I \end{bmatrix} = \det(I)\det(I) = 1.
\]
Thus,
\[
\det(I - BA) = \det \begin{bmatrix} I & A \\ B & I \end{bmatrix}.
\]
Now,
\[
\det \begin{bmatrix} I & A \\ B & I \end{bmatrix}
= \det \begin{bmatrix} I - AB & 0 \\ B & I \end{bmatrix}
= \det(I - AB).
\]

Therefore
\[
\det(I - BA) = \det(I - AB).
\]
Note that (I − BA) is a q × q matrix while (I − AB) is a p × p matrix. Thus, when one wants to compute the determinant of (I − AB) or (I − BA), one can compare p and q and work with whichever of the products AB or BA is smaller.
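A quick numerical check of this determinant identity, with deliberately unequal p and q (a NumPy sketch; the dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
p, q = 4, 2
A = rng.standard_normal((p, q))
B = rng.standard_normal((q, p))

# I - AB is 4x4 while I - BA is 2x2, yet their determinants agree,
# so the 2x2 computation is the cheaper route.
assert np.isclose(np.linalg.det(np.eye(p) - A @ B),
                  np.linalg.det(np.eye(q) - B @ A))
```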
b) We have to show that (I − AB)^{-1} A = A(I − BA)^{-1}.
Proof: Assume that (I − BA)^{-1} and (I − AB)^{-1} exist. Then,
\[
\begin{aligned}
A = A \cdot I &= A(I - BA)(I - BA)^{-1} \\
&= (A - ABA)(I - BA)^{-1} \\
&= (I - AB)A(I - BA)^{-1} \\
\Rightarrow \ (I - AB)^{-1}A &= A(I - BA)^{-1}.
\end{aligned}
\]
This completes the proof.
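The push-through identity of part (b) can be sketched numerically as follows (scaling the random factors keeps both I − AB and I − BA comfortably invertible; all sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
p, q = 4, 2
A = rng.standard_normal((p, q)) * 0.1  # small scale keeps the inverses well defined
B = rng.standard_normal((q, p)) * 0.1

# (I - AB)^{-1} A  ==  A (I - BA)^{-1}
lhs = np.linalg.inv(np.eye(p) - A @ B) @ A
rhs = A @ np.linalg.inv(np.eye(q) - B @ A)
assert np.allclose(lhs, rhs)
```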


c) We have to show that (A + BCD)^{-1} = A^{-1} − A^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1}.
Proof: We verify that the claimed expression, multiplied by (A + BCD), gives I:
\[
\begin{aligned}
\left(A^{-1} - A^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1}\right)(A + BCD)
&= (I + A^{-1}BCD) - A^{-1}B(C^{-1} + DA^{-1}B)^{-1}D(I + A^{-1}BCD) \\
&= (I + A^{-1}BCD) - A^{-1}B\left(C^{-1}(I + CDA^{-1}B)\right)^{-1}D(I + A^{-1}BCD) \\
&= (I + A^{-1}BCD) - A^{-1}B(I + CDA^{-1}B)^{-1}CD(I + A^{-1}BCD) \\
&= (I + A^{-1}BCD) - A^{-1}BCD(I + A^{-1}BCD)^{-1}(I + A^{-1}BCD) \\
&= I,
\end{aligned}
\]
where we have used from part (b) that (I + CDA^{-1}B)^{-1}CD = CD(I + A^{-1}BCD)^{-1}. Then we have to show that computing (I − ab^T)^{-1} only requires the inversion of a scalar.
Proof: In the lemma proved above, let
\[
A = I, \quad B = -a, \quad C = I, \quad D = b^T,
\]
then
\[
(I - ab^T)^{-1} = I + a(1 - b^T a)^{-1} b^T = I + \frac{ab^T}{1 - b^T a}.
\]
Note that b^T a is a scalar.
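A NumPy sketch of this rank-one (Sherman–Morrison-type) special case; the dimension and seed are arbitrary, and the formula assumes b^T a ≠ 1:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
a = rng.standard_normal(n)
b = rng.standard_normal(n)
bta = b @ a  # the scalar b^T a; the formula requires bta != 1

M = np.eye(n) - np.outer(a, b)
# Only the scalar 1 - b^T a is inverted, never a full matrix.
Minv = np.eye(n) + np.outer(a, b) / (1.0 - bta)
assert np.allclose(M @ Minv, np.eye(n))
```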

Exercise 1.4 a) First define all the spaces (with A ∈ C^{m×n} and A' its conjugate transpose):
\[
\begin{aligned}
R(A) &= \{ y \in C^m \mid y = Ax \ \text{for some} \ x \in C^n \} \\
R^\perp(A) &= \{ z \in C^m \mid y'z = z'y = 0, \ \forall y \in R(A) \} \\
R(A') &= \{ p \in C^n \mid p = A'v \ \text{for some} \ v \in C^m \} \\
N(A) &= \{ x \in C^n \mid Ax = 0 \} \\
N(A') &= \{ q \in C^m \mid A'q = 0 \}
\end{aligned}
\]

i) Prove that R^⊥(A) = N(A').
Proof: Let
\[
\begin{aligned}
z \in R^\perp(A) &\Rightarrow y'z = 0 \ \forall y \in R(A) \\
&\Rightarrow x'A'z = 0 \ \forall x \in C^n \\
&\Rightarrow A'z = 0 \Rightarrow z \in N(A') \\
&\Rightarrow R^\perp(A) \subset N(A').
\end{aligned}
\]
Now let
\[
\begin{aligned}
q \in N(A') &\Rightarrow A'q = 0 \\
&\Rightarrow x'A'q = 0 \ \forall x \in C^n \\
&\Rightarrow y'q = 0 \ \forall y \in R(A) \\
&\Rightarrow q \in R^\perp(A) \\
&\Rightarrow N(A') \subset R^\perp(A).
\end{aligned}
\]
Therefore
\[
R^\perp(A) = N(A').
\]
ii) Prove that N^⊥(A) = R(A').
Proof: From i) we know that N(A) = R^⊥(A') by switching A with A'. That implies that
\[
N^\perp(A) = \{R^\perp(A')\}^\perp = R(A').
\]
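For a real matrix (where A' = A^T), the identity R^⊥(A) = N(A') can be illustrated via the SVD: the left singular vectors beyond rank(A) span N(A^T) and are orthogonal to everything in R(A). A NumPy sketch (sizes, seed, and tolerance are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 3))

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))
N = U[:, r:]  # orthonormal basis of N(A^T)

# N really lies in the null space of A^T ...
assert np.allclose(A.T @ N, 0)
# ... and is orthogonal to an arbitrary element Ax of R(A).
x = rng.standard_normal(3)
assert np.allclose(N.T @ (A @ x), 0)
```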
b) Show that rank(A) + rank(B) − n ≤ rank(AB) ≤ min{rank(A), rank(B)}.
Proof: i) Show that rank(AB) ≤ min{rank(A), rank(B)}. It can be proved as follows:
Each column of AB is a combination of the columns of A, which implies that R(AB) ⊆ R(A) →
rank(AB) ≤ rank(A).
Each row of AB is a combination of the rows of B → rowspace (AB) ⊆ rowspace (B), but the
dimension of rowspace = dimension of column space = rank, so that rank(AB) ≤ rank(B).
Therefore,

rank(AB) ≤ min{rank(A), rank(B)}.


ii) Show that rank(A) + rank(B) − n ≤ rank(AB).
Let
\[
\begin{aligned}
r_B &= \operatorname{rank}(B) \\
r_A &= \operatorname{rank}(A) \\
K_B &= \dim(N(B')) = \text{nullity of } B' = n - r_B \\
K_A &= \text{nullity of } A = n - r_A,
\end{aligned}
\]
where A ∈ C^{m×n}, B ∈ C^{n×p}.
Now, let {v_1, ..., v_{r_B}} be a basis of R(B), and add n − r_B linearly independent vectors {w_1, ..., w_{n−r_B}} so that {v_1, ..., v_{r_B}, w_1, ..., w_{n−r_B}} spans all of C^n.
Let
\[
M = \begin{bmatrix} v_1 \mid v_2 \mid \cdots \mid v_{r_B} \mid w_1 \mid \cdots \mid w_{n-r_B} \end{bmatrix}
  = \begin{bmatrix} V & W \end{bmatrix}.
\]
Suppose x ∈ C^n; then x = Mα for some α ∈ C^n.

1. R(A) = R(AM) = R([AV | AW]).
Proof: i) Let x ∈ R(A) → Ay = x for some y ∈ C^n. Yet y can be written as a linear combination of the basis vectors of C^n, so y = Mα for some α ∈ C^n. Then Ay = AMα = x → x ∈ R(AM) → R(A) ⊂ R(AM).
ii) Let x ∈ R(AM) → AMy = x for some y ∈ C^n, but My = z ∈ C^n → Az = x → x ∈ R(A) → R(AM) ⊂ R(A).
Therefore, R(A) = R(AM) = R([AV | AW]).
2. R(AB) = R(AV).
Proof: i) Let x ∈ R(AV) → AVy = x for some y ∈ C^{r_B}. Yet Vy = Bα for some α ∈ C^p, since the columns of V and B span the same space. That implies AVy = ABα = x → x ∈ R(AB) → R(AV) ⊂ R(AB).
ii) Let x ∈ R(AB) → (AB)y = x for some y ∈ C^p. Yet, again, By = Vθ for some θ ∈ C^{r_B} → ABy = AVθ = x → x ∈ R(AV) → R(AB) ⊂ R(AV).
Therefore, R(AV) = R(AB).
Using fact 1, we see that the number of linearly independent columns of A is at most the number of linearly independent columns of AV plus the number of linearly independent columns of AW, which means that
\[
\operatorname{rank}(A) \le \operatorname{rank}(AV) + \operatorname{rank}(AW).
\]
Using fact 2, we see that
\[
\operatorname{rank}(AV) = \operatorname{rank}(AB) \ \Rightarrow \ \operatorname{rank}(A) \le \operatorname{rank}(AB) + \operatorname{rank}(AW),
\]
yet there are only n − r_B columns in AW. Thus,
\[
\begin{aligned}
&\operatorname{rank}(AW) \le n - r_B \\
&\Rightarrow \operatorname{rank}(A) - \operatorname{rank}(AB) \le \operatorname{rank}(AW) \le n - r_B \\
&\Rightarrow r_A - (n - r_B) \le r_{AB}.
\end{aligned}
\]
This completes the proof.
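Both sides of the sandwich rank(A) + rank(B) − n ≤ rank(AB) ≤ min{rank(A), rank(B)} can be checked numerically; a NumPy sketch with arbitrarily chosen m, n, p:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, p = 5, 4, 6
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))

rA = np.linalg.matrix_rank(A)
rB = np.linalg.matrix_rank(B)
rAB = np.linalg.matrix_rank(A @ B)

# Sylvester's rank inequality together with the product rank bound.
assert rA + rB - n <= rAB <= min(rA, rB)
```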

Exercise 1.8 Let X = {g(x) = α_0 + α_1 x + α_2 x^2 + ... + α_M x^M | α_i ∈ C}.

a) We have to show that the set B = {1, x, ..., x^M} is a basis for X.
Proof:
1. First, let's show that the elements of B are linearly independent. No element of B can be written as a linear combination of the others; a nonzero polynomial of degree at most M has at most M roots, so it cannot vanish for all x. More formally,
\[
c_0(1) + c_1(x) + \cdots + c_M(x^M) = 0 \ \forall x \iff c_i = 0 \ \forall i.
\]
Thus, the elements of B are linearly independent.

2. Then, let's show that the elements of B span the space X. Every polynomial of order less than or equal to M looks like
\[
p(x) = \sum_{i=0}^{M} \alpha_i x^i
\]
for some set of α_i's. Therefore, {1, x, ..., x^M} spans X.

b) T : X → X and T(g(x)) = (d/dx) g(x).

1. Show that T is linear.
Proof:
\[
T(ag_1(x) + bg_2(x)) = \frac{d}{dx}\left(ag_1(x) + bg_2(x)\right)
= a\frac{d}{dx}g_1 + b\frac{d}{dx}g_2
= aT(g_1) + bT(g_2).
\]
Thus, T is linear.

2. g(x) = α_0 + α_1 x + α_2 x^2 + ... + α_M x^M, so
\[
T(g(x)) = \alpha_1 + 2\alpha_2 x + \cdots + M\alpha_M x^{M-1}.
\]

Thus it can be written as follows:
\[
\begin{bmatrix}
0 & 1 & 0 & 0 & \cdots & 0 \\
0 & 0 & 2 & 0 & \cdots & 0 \\
0 & 0 & 0 & 3 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & 0 & \cdots & M \\
0 & 0 & 0 & 0 & \cdots & 0
\end{bmatrix}
\begin{bmatrix}
\alpha_0 \\ \alpha_1 \\ \alpha_2 \\ \alpha_3 \\ \vdots \\ \alpha_M
\end{bmatrix}
=
\begin{bmatrix}
\alpha_1 \\ 2\alpha_2 \\ 3\alpha_3 \\ \vdots \\ M\alpha_M \\ 0
\end{bmatrix}.
\]
The big matrix, M, is a matrix representation of T with respect to the basis B. The column vector on the left is the representation of g(x) with respect to B. The column vector on the right is T(g) with respect to the basis B.
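The matrix above can be built and exercised directly; a NumPy sketch (M = 4 and the example coefficients are arbitrary choices):

```python
import numpy as np

M = 4  # polynomial degree, an arbitrary choice for illustration
# Matrix of T = d/dx in the basis {1, x, ..., x^M}: entry (i, i+1) equals i+1.
D = np.diag(np.arange(1.0, M + 1), k=1)

# g(x) = 1 + 2x + 3x^2 + 0x^3 + 5x^4, coefficients in increasing-power order.
g = np.array([1.0, 2.0, 3.0, 0.0, 5.0])
dg = D @ g  # coefficients of g'(x) = 2 + 6x + 0x^2 + 20x^3

assert np.allclose(dg, [2.0, 6.0, 0.0, 20.0, 0.0])
```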

3. Since the matrix M is upper triangular with zeros along the diagonal (in fact M is Hessenberg), the eigenvalues are all 0:
\[
\lambda_i = 0 \quad \forall i = 1, \ldots, M+1.
\]

4. An eigenvector of M for λ_1 = 0 must satisfy MV_1 = λ_1 V_1 = 0;
\[
V_1 = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
\]
is one eigenvector. Since the λ_i's are not distinct, the eigenvectors are not necessarily independent. Thus, in order to compute the M others, one uses the generalized eigenvector formula.
