
Problems and Solutions

in
Matrix Calculus

by
Willi-Hans Steeb
International School for Scientific Computing
at
University of Johannesburg, South Africa
Preface
The manuscript supplies a collection of problems in introductory and advanced matrix calculus.

Prescribed book:
“Problems and Solutions in Introductory and Advanced Matrix Calculus”,
2nd edition
by
Willi-Hans Steeb and Yorick Hardy
World Scientific Publishing, Singapore 2016

Contents

Notation

1 Basic Operations
2 Linear Equations
3 Determinants and Traces
4 Eigenvalues and Eigenvectors
5 Commutators and Anticommutators
6 Decomposition of Matrices
7 Functions of Matrices
8 Linear Differential Equations
9 Kronecker Product
10 Norms and Scalar Products
11 Groups and Matrices
12 Lie Algebras and Matrices
13 Graphs and Matrices
14 Hadamard Product
15 Differentiation
16 Integration
17 Numerical Methods
18 Miscellaneous

Bibliography
Index
Notation

:= is defined as
∈ belongs to (a set)

∉ does not belong to (a set)
∩ intersection of sets
∪ union of sets
∅ empty set
N set of natural numbers
Z set of integers
Q set of rational numbers
R set of real numbers
R+ set of nonnegative real numbers
C set of complex numbers
Rn n-dimensional Euclidean space
space of column vectors with n real components
Cn n-dimensional complex linear space
space of column vectors with n complex components
H Hilbert space
i √−1
ℜz real part of the complex number z
ℑz imaginary part of the complex number z
|z| modulus of complex number z
|x + iy| = (x² + y²)^{1/2}, x, y ∈ R
T ⊂S subset T of set S
S∩T the intersection of the sets S and T
S∪T the union of the sets S and T
f (S) image of set S under mapping f
f ◦g composition of two mappings (f ◦ g)(x) = f (g(x))
x column vector in Cn
xT transpose of x (row vector)
0 zero (column) vector
‖ · ‖ norm
x · y ≡ x∗ y scalar product (inner product) in Cn
x×y vector product in R3
A, B, C m × n matrices
det(A) determinant of a square matrix A
tr(A) trace of a square matrix A
rank(A) rank of matrix A
AT transpose of matrix A

Ā conjugate of matrix A
A∗ conjugate transpose of matrix A
A† conjugate transpose of matrix A
(notation used in physics)
A−1 inverse of square matrix A (if it exists)
In n × n unit matrix
I unit operator
0n n × n zero matrix
AB matrix product of m × n matrix A
and n × p matrix B
A•B Hadamard product (entry-wise product)
of m × n matrices A and B
[A, B] := AB − BA commutator for square matrices A and B
[A, B]+ := AB + BA anticommutator for square matrices A and B
A⊗B Kronecker product of matrices A and B
A⊕B Direct sum of matrices A and B
δjk Kronecker delta with δjk = 1 for j = k
and δjk = 0 for j ≠ k
λ eigenvalue
ε real parameter
t time variable
Ĥ Hamilton operator

The Pauli spin matrices are used extensively in the book. They are given by
$$\sigma_x := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y := \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_z := \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$
In some cases we will also use σ1, σ2 and σ3 to denote σx, σy and σz.

Chapter 1

Basic Operations

Problem 1. Let x be a column vector in Rn and x ≠ 0. Let
$$A = \frac{xx^T}{x^T x}$$
where T denotes the transpose, i.e. x^T is a row vector. Calculate A².

Problem 2. Consider the 8 × 8 Hadamard matrix
$$H = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 \\
1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1
\end{pmatrix}.$$
(i) Do the 8 column vectors in the matrix H form a basis in R8? Prove or disprove.
(ii) Calculate HH^T, where T denotes transpose. Compare the results from
(i) and (ii) and discuss.

Problem 3. Show that any 2 × 2 complex matrix has a unique representation of the form
$$a_0 I_2 + ia_1\sigma_1 + ia_2\sigma_2 + ia_3\sigma_3$$
for some a0, a1, a2, a3 ∈ C, where I2 is the 2 × 2 identity matrix and σ1, σ2, σ3 are the Pauli spin matrices
$$\sigma_1 := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2 := \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_3 := \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$

Problem 4. Let A, B be n × n matrices such that ABAB = 0n . Can we


conclude that BABA = 0n ?

Problem 5. A square matrix A over C is called skew-hermitian if A =


−A∗ . Show that such a matrix is normal, i.e., we have AA∗ = A∗ A.

Problem 6. Let A be an n × n skew-hermitian matrix over C, i.e. A∗ =


−A. Let U be an n × n unitary matrix, i.e., U ∗ = U −1 . Show that
B := U ∗ AU is a skew-hermitian matrix.

Problem 7. Let A, X, Y be n × n matrices. Assume that


XA = In , AY = In
where In is the n × n unit matrix. Show that X = Y .

Problem 8. Let A, B be n × n matrices. Assume that A is nonsingular,


i.e. A−1 exists. Show that if BA = 0n , then B = 0n .

Problem 9. Let A, B be n × n matrices and


A + B = In , AB = 0n .
Show that A2 = A and B 2 = B.

Problem 10. Consider the normalized vectors in R2
$$\begin{pmatrix} \cos(\theta_1) \\ \sin(\theta_1) \end{pmatrix}, \qquad \begin{pmatrix} \cos(\theta_2) \\ \sin(\theta_2) \end{pmatrix}.$$
Find the condition on θ1 and θ2 such that
$$\begin{pmatrix} \cos(\theta_1) \\ \sin(\theta_1) \end{pmatrix} + \begin{pmatrix} \cos(\theta_2) \\ \sin(\theta_2) \end{pmatrix}$$
is normalized. A vector x ∈ Rn is called normalized if ‖x‖ = 1, where ‖ · ‖
denotes the Euclidean norm.

Problem 11. Let
$$A := xx^T + yy^T \qquad (1)$$
where
$$x = \begin{pmatrix} \cos(\theta) \\ \sin(\theta) \end{pmatrix}, \qquad y = \begin{pmatrix} \sin(\theta) \\ -\cos(\theta) \end{pmatrix}$$
and θ ∈ R. Find x^T x, y^T y, x^T y, y^T x. Find the matrix A.

Problem 12. Find a 2 × 2 matrix A over R such that
$$A\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad A\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$

Problem 13. Consider the 2 × 2 matrix over the complex numbers
$$\Pi(\mathbf{n}) := \frac{1}{2}\left( I_2 + \sum_{j=1}^{3} n_j\sigma_j \right)$$
where n := (n1, n2, n3) (nj ∈ R) is a unit vector, i.e., n1² + n2² + n3² = 1.
Here σ1, σ2, σ3 are the Pauli matrices
$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
and I2 is the 2 × 2 unit matrix.
(i) Describe the property of Π(n), i.e., find Π∗(n), tr(Π(n)) and Π²(n),
where tr denotes the trace. The trace is the sum of the diagonal elements
of a square matrix.
(ii) Find the vector
$$\Pi(\mathbf{n})\begin{pmatrix} e^{i\phi}\cos(\theta) \\ \sin(\theta) \end{pmatrix}.$$
Discuss.

Problem 14. Let
$$\mathbf{x} = \begin{pmatrix} e^{i\phi}\cos(\theta) \\ \sin(\theta) \end{pmatrix}$$
where φ, θ ∈ R.
(i) Find the matrix ρ := xx∗ .
(ii) Find trρ.
(iii) Find ρ2 .

Problem 15. Consider the vector space R4 . Find all pairwise orthogonal
vectors (column vectors) x1 , . . . , xp , where the entries of the column vectors
can only be +1 or −1. Calculate the matrix
$$\sum_{j=1}^{p} x_j x_j^T$$

and find the eigenvalues and eigenvectors of this matrix.

Problem 16. Let
$$A = \begin{pmatrix} 2 & 2 & -2 \\ 2 & 2 & -2 \\ -2 & -2 & 6 \end{pmatrix}.$$
(i) Let X be an m × n matrix. The column rank of X is the maximum
number of linearly independent columns. The row rank is the maximum
number of linearly independent rows. The row rank and the column rank
of X are equal (called the rank of X). Find the rank of A and denote it by
k.
(ii) Locate a k × k submatrix of A having rank k.
(iii) Find 3×3 permutation matrices P and Q such that in the matrix P AQ
the submatrix from (ii) is in the upper left portion of A.

Problem 17. Find 2 × 2 matrices A, B such that AB = 0n and BA 6= 0n .

Problem 18. Let A be an m × n matrix and B be a p × q matrix. Then


the direct sum of A and B, denoted by A ⊕ B, is the (m + p) × (n + q)
matrix defined by
$$A \oplus B := \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}.$$
Let A1 , A2 be m × m matrices and B1 , B2 be n × n matrices. Calculate
(A1 ⊕ B1 )(A2 ⊕ B2 ).

Problem 19. Let A be an n × n matrix over R. Find all matrices that


satisfy the equation AT A = 0n .

Problem 20. Let π be a permutation on { 1, 2, . . . , n }. The matrix Pπ


for which pi∗ = eπ(i)∗ is called the permutation matrix associated with π,
where pi∗ is the ith row of Pπ and eij = 1 if i = j and 0 otherwise. Let
π = (3 2 4 1). Find Pπ .
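A brief illustrative sketch (Python with NumPy; the helper name is ours, not the book's) that builds P_π row by row, with row i equal to the standard basis row e_{π(i)}:

import numpy as np

def permutation_matrix(pi):
    # row i of P_pi is the standard basis row vector e_{pi(i)} (pi is 1-based)
    n = len(pi)
    P = np.zeros((n, n), dtype=int)
    for i, j in enumerate(pi):
        P[i, j - 1] = 1
    return P

print(permutation_matrix((3, 2, 4, 1)))
# every row and every column contains exactly one 1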

Problem 21. A matrix A for which Ap = 0n , where p is a positive integer,


is called nilpotent. If p is the least positive integer for which Ap = 0n then
A is said to be nilpotent of index p. Find all 2 × 2 matrices over the real
numbers which are nilpotent with p = 2, i.e. A2 = 02 .

Problem 22. A square matrix is called idempotent if A2 = A. Find all


2 × 2 matrices over the real numbers which are idempotent and aij 6= 0 for
i, j = 1, 2.

Problem 23. A square matrix A such that A2 = In is called involutory.


Find all 2 × 2 matrices over the real numbers which are involutory. Assume
that aij 6= 0 for i, j = 1, 2.

Problem 24. Show that an n × n matrix A is involutory if and only if


(In − A)(In + A) = 0n .

Problem 25. Let A be an n × n symmetric matrix over R. Let P be an


arbitrary n × n matrix over R. Show that P T AP is symmetric.

Problem 26. Let A be an n × n skew-symmetric matrix over R, i.e.


AT = −A. Let P be an arbitrary n × n matrix over R. Show that P T AP
is skew-symmetric.

Problem 27. Let A be an m × n matrix. The column rank of A is the


maximum number of linearly independent columns. The row rank is the
maximum number of linearly independent rows. The row rank and the
column rank of A are equal (called the rank of A). Find the rank of the
4 × 4 matrix
$$A = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \end{pmatrix}.$$

Problem 28. Let A be an invertible n × n matrix over C and B be an


n × n matrix over C. We define the n × n matrix

D := A−1 BA.

Calculate Dn , where n = 2, 3, . . ..

Problem 29. A Cartan matrix A is a square matrix whose elements aij


satisfy the following conditions:
1. aij is an integer, one of { −3, −2, −1, 0, 2 }
2. ajj = 2 for all diagonal elements of A
3. aij ≤ 0 off of the diagonal
4. aij = 0 iff aji = 0
5. There exists an invertible diagonal matrix D such that DAD−1 gives a
symmetric and positive definite quadratic form.

Give a 2 × 2 non-diagonal Cartan matrix.



Problem 30. Let A, B, C, D be n × n matrices over R. Assume that


AB T and CDT are symmetric and ADT − BC T = In , where T denotes
transpose. Show that
AT D − C T B = I n .

Problem 31. Let n be a positive integer. Let An be the (2n+1)×(2n+1)


skew-symmetric matrix for which each entry in the first n subdiagonals
below the main diagonal is 1 and each of the remaining entries below the
main diagonal is −1. Give A1 and A2 . Find the rank of An .

Problem 32. A vector u = (u1 , u2 , . . . , un ) is called a probability vector


if the components are nonnegative and their sum is 1. Is the vector
u = (1/2, 0, 1/4, 1/4)
a probability vector? Can the vector
v = (2, 3, 5, 1, 0)
be “normalized” so that we obtain a probability vector?

Problem 33. An n × n matrix P = (pij ) is called a stochastic matrix if


each of its rows is a probability vector, i.e., if each entry of P is nonnegative
and the sum of the entries in each row is 1. Let A and B be two stochastic
n × n matrices. Is the matrix product AB also a stochastic matrix?

Problem 34. The numerical range, also known as the field of values, of
an n × n matrix A over the complex numbers, is defined as
F (A) := { z∗ Az : kzk = 1, z ∈ Cn }.
Find the numerical range for the 2 × 2 matrix
$$B = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.$$
Find the numerical range for the 2 × 2 matrix
$$C = \begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}.$$
The Toeplitz-Hausdorff convexity theorem tells us that the numerical range
of a square matrix is a convex compact subset of the complex plane.

Problem 35. Let A be an n × n matrix over C. The field of values of A


is defined as the set
F (A) := { z∗ Az : z ∈ Cn , z∗ z = 1 }.
Let α ∈ R and
$$A = \begin{pmatrix}
\alpha & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & \alpha & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & \alpha & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & \alpha & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & \alpha & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & \alpha & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & \alpha & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & \alpha & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \alpha
\end{pmatrix}.$$
(i) Show that the set F (A) lies on the real axis.
(ii) Show that
|z∗ Az| ≤ α + 16.

Problem 36. Let A be an n × n matrix over C and F (A) the field of


values. Let U be an n × n unitary matrix.
(i) Show that F (U ∗ AU ) = F (A).
(ii) Apply the theorem to the two matrices
$$A_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad A_2 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
which are unitarily equivalent.

Problem 37. Can one find a unitary matrix U such that


   
$$U^*\begin{pmatrix} 0 & c \\ d & 0 \end{pmatrix}U = \begin{pmatrix} 0 & ce^{i\theta} \\ de^{-i\theta} & 0 \end{pmatrix}$$
where c, d ∈ C and θ ∈ R ?

Problem 38. An n2 × n matrix J is called a selection matrix such that


J T is the n × n2 matrix
[E11 E22 . . . Enn ]
where Eii is the n × n matrix of zeros except for a 1 in the (i, i)th position.
(i) Find J for n = 2 and calculate J T J.
(ii) Calculate J T J for arbitrary n.

Problem 39. Consider a symmetric matrix A over R
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{12} & a_{22} & a_{23} & a_{24} \\ a_{13} & a_{23} & a_{33} & a_{34} \\ a_{14} & a_{24} & a_{34} & a_{44} \end{pmatrix}$$
and the orthonormal basis (so-called Bell basis)
$$x_+ = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix}, \quad x_- = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ 0 \\ -1 \end{pmatrix}, \quad y_+ = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 \\ 1 \\ 1 \\ 0 \end{pmatrix}, \quad y_- = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 \\ 1 \\ -1 \\ 0 \end{pmatrix}.$$
The Bell basis forms an orthonormal basis in R4. Let Ã denote the matrix
A in the Bell basis. What is the condition on the entries aij such that the
matrix A is diagonal in the Bell basis?

Problem 40. Let A be an m × n matrix over C. The Moore-Penrose


pseudoinverse matrix A+ is the unique n × m matrix which satisfies
AA+ A = A
A+ AA+ = A+
(AA+ )∗ = AA+
(A+ A)∗ = A+ A.
We also have that
x = A+ b (1)
is the shortest length least square solution to the problem
Ax = b. (2)
(i) Show that if (A∗ A)−1 exists, then A+ = (A∗ A)−1 A∗ .
(ii) Let
$$A = \begin{pmatrix} 1 & 3 \\ 2 & 4 \\ 3 & 5 \end{pmatrix}.$$
Find the Moore-Penrose matrix inverse A⁺ of A.
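A small numerical check of part (i) for this matrix (a sketch in Python with NumPy, which is not part of the original text; it compares (A^T A)^{-1} A^T with NumPy's built-in pseudoinverse and with the four defining conditions):

import numpy as np

A = np.array([[1., 3.], [2., 4.], [3., 5.]])
Aplus = np.linalg.inv(A.T @ A) @ A.T          # valid here since A has full column rank
print(np.allclose(Aplus, np.linalg.pinv(A)))
print(np.allclose(A @ Aplus @ A, A))          # A A+ A = A
print(np.allclose(Aplus @ A @ Aplus, Aplus))  # A+ A A+ = A+
print(np.allclose((A @ Aplus).T, A @ Aplus))  # (A A+)* = A A+
print(np.allclose((Aplus @ A).T, Aplus @ A))  # (A+ A)* = A+ A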

Problem 41. A Hadamard matrix is an n × n matrix H with entries


in { −1, +1 } such that any two distinct rows or columns of H have inner
product 0. Construct a 4 × 4 Hadamard matrix starting from the column
vector
x1 = (1 1 1 1)T .

Problem 42. A binary Hadamard matrix is an n × n matrix M (where n


is even) with entries in { 0, 1 } such that any two distinct rows or columns
of M have Hamming distance n/2. The Hamming distance between two


vectors is the number of entries at which they differ. Find a 4 × 4 binary
Hadamard matrix.

Problem 43. Let x be a normalized column vector in Rn , i.e. xT x = 1.


A matrix T is called a Householder matrix if

T := In − 2xxT .

Calculate T 2 .

Problem 44. An n × n matrix P is a projection matrix if

P ∗ = P, P 2 = P.

(i) Let P1 and P2 be projection matrices. Is P1 + P2 a projection matrix?


(ii) Let P1 and P2 be projection matrices. Is P1 P2 a projection matrix?
(iii) Let P be a projection matrix. Is In − P a projection matrix? Calculate
P (In − P ).
(iv) Is
$$P = \frac{1}{3}\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}$$
a projection matrix?

Problem 45. Let
$$\mathbf{a} = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}$$

be vectors in R3 . Let × denote the vector product.


(i) Show that we can find a 3 × 3 matrix S(a) such that

a × b = S(a)b.

(ii) Express the Jacobi identity

a × (b × c) + c × (a × b) + b × (c × a) = 0

using the matrices S(a), S(b) and S(c).

Problem 46. Let s (spin quantum number)
$$s \in \left\{ \frac{1}{2}, 1, \frac{3}{2}, 2, \frac{5}{2}, \ldots \right\}.$$
Given a fixed s. The indices j, k run over s, s − 1, s − 2, . . . , −s + 1, −s.
Consider the (2s + 1) unit vectors (standard basis)
$$e_{s,s} = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad e_{s,s-1} = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad \ldots, \quad e_{s,-s} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}.$$
Obviously the vectors have (2s + 1) components. The (2s + 1) × (2s + 1) matrices s+ and s− are defined as
$$s_+ e_{s,m} := \hbar\sqrt{(s-m)(s+m+1)}\, e_{s,m+1}, \qquad m = s-1, s-2, \ldots, -s$$
$$s_- e_{s,m} := \hbar\sqrt{(s+m)(s-m+1)}\, e_{s,m-1}, \qquad m = s, s-1, \ldots, -s+1$$
and s₊ e_{s,s} = 0, s₋ e_{s,−s} = 0, where ℏ is the Planck constant. We have
$$s_+ := \frac{1}{2}(s_x + is_y), \qquad s_- := \frac{1}{2}(s_x - is_y).$$
Thus sx = s₊ + s₋, sy = −i(s₊ − s₋).
(i) Find the matrix representation of s₊ and s₋.
(ii) The (2s + 1) × (2s + 1) matrix sz is defined as (eigenvalue equation)
$$s_z e_{s,m} := m\hbar\, e_{s,m}, \qquad m = s, s-1, \ldots, -s.$$
Let s := (sx, sy, sz). Find the (2s + 1) × (2s + 1) matrix
$$\mathbf{s}^2 := s_x^2 + s_y^2 + s_z^2.$$
(iii) Calculate the expectation values
$$e_{s,s}^* s_+ e_{s,s}, \qquad e_{s,s}^* s_- e_{s,s}, \qquad e_{s,s}^* s_z e_{s,s}.$$
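A minimal sketch (Python with NumPy, with ℏ set to 1 for simplicity; the function name is ours, not the book's) that builds s₊, s₋ and sz directly from the defining relations above:

import numpy as np

def spin_matrices(s, hbar=1.0):
    # basis order: e_{s,s}, e_{s,s-1}, ..., e_{s,-s}
    dim = int(round(2 * s)) + 1
    m = np.array([s - k for k in range(dim)])        # m = s, s-1, ..., -s
    sp = np.zeros((dim, dim))
    sm = np.zeros((dim, dim))
    for k in range(dim):
        if k > 0:   # s+ e_{s,m} = hbar*sqrt((s-m)(s+m+1)) e_{s,m+1}
            sp[k - 1, k] = hbar * np.sqrt((s - m[k]) * (s + m[k] + 1))
        if k < dim - 1:   # s- e_{s,m} = hbar*sqrt((s+m)(s-m+1)) e_{s,m-1}
            sm[k + 1, k] = hbar * np.sqrt((s + m[k]) * (s - m[k] + 1))
    sz = hbar * np.diag(m)
    return sp, sm, sz

sp, sm, sz = spin_matrices(1/2)
print(sp)   # [[0, 1], [0, 0]] for s = 1/2
print(sm)   # [[0, 0], [1, 0]]
print(sz)   # diag(1/2, -1/2)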

Problem 47. The Fibonacci numbers are defined by the recurrence rela-
tion (linear difference equation of second order with constant coefficients)
sn+2 = sn+1 + sn
where n = 0, 1, . . . and s0 = 0, s1 = 1. Write this recurrence relation in
matrix form. Find s6 , s5 , and s4 .
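One possible matrix form (a sketch in Python with NumPy, assuming the usual companion matrix of the recurrence) is (s_{n+1}, s_n)^T = M (s_n, s_{n-1})^T with M = [[1, 1], [1, 0]]:

import numpy as np

M = np.array([[1, 1], [1, 0]], dtype=object)
v = np.array([1, 0], dtype=object)      # (s_1, s_0)
seq = [0, 1]
for n in range(2, 7):
    v = M @ v                           # v becomes (s_n, s_{n-1})
    seq.append(int(v[0]))
print(seq[4], seq[5], seq[6])           # s_4 = 3, s_5 = 5, s_6 = 8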

Problem 48. (i) Find four unit (column) vectors x1, x2, x3, x4 in R3 such that
$$x_j^T x_k = \frac{4}{3}\delta_{jk} - \frac{1}{3} = \begin{cases} 1 & \text{for } j = k \\ -1/3 & \text{for } j \neq k. \end{cases}$$
Give a geometric interpretation.
(ii) Calculate the sum
$$\sum_{j=1}^{4} x_j.$$
(iii) Calculate the sum
$$\sum_{j=1}^{4} x_j x_j^T.$$

Problem 49. Assume that


A = A1 + iA2
is a nonsingular n × n matrix, where A1 and A2 are real n × n matrices.
Assume that A1 is also nonsingular. Find the inverse of A using the inverse
of A1 .

Problem 50. Let A and B be n × n matrices over R. Assume that


A 6= B, A3 = B 3 and A2 B = B 2 A. Is A2 + B 2 invertible?

Problem 51. Let A be a positive definite n × n matrix over R. Let x ∈ Rn.
Show that A + xxT is also positive definite.

Problem 52. Let A, B be n × n matrices over C. The matrix A is called


similar to the matrix B if there is an n × n invertible matrix S such that
A = S −1 BS.
If A is similar to B, then B is also similar to A, since B = SAS −1 .
(i) Consider the two matrices
$$A = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$
Are the matrices similar?
(ii) Consider the two matrices
$$C = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad D = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$
Are the matrices similar?

Problem 53. Normalize the vector in R2
$$\mathbf{v} = \begin{pmatrix} \sqrt{1 + \sin(\alpha)} \\ \sqrt{1 - \sin(\alpha)} \end{pmatrix}.$$
Then find a normalized vector in R2 which is orthonormal to this vector.


Chapter 2

Linear Equations

Problem 1. Let
$$A = \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}, \qquad b = \begin{pmatrix} 1 \\ 5 \end{pmatrix}.$$

Find the solutions of the system of linear equations Ax = b.

Problem 2. Let
$$A = \begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix}, \qquad b = \begin{pmatrix} 3 \\ \alpha \end{pmatrix}$$

where α ∈ R. What is the condition on α so that there is a solution of the


equation Ax = b?

Problem 3. (i) Find all solutions of the system of linear equations


    
$$\begin{pmatrix} \cos(\theta) & -\sin(\theta) \\ -\sin(\theta) & -\cos(\theta) \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \qquad \theta \in \mathbb{R}.$$

(ii) What type of equation is this?

Problem 4. Let A ∈ Rn×n and x, b ∈ Rn . Consider the linear equation


Ax = b. Show that it can be written as x = T x, i.e., find T x.

Problem 5. If the system of linear equations Ax = b admits no solution


we call the equations inconsistent. If there is a solution, the equations are


called consistent. Let Ax = b be a system of m linear equations in n


unknowns and suppose that the rank of A is m. Show that in this case
Ax = b is consistent.

Problem 6. Show that the curve fitting problem

j 0 1 2 3 4
tj −1.0 −0.5 0.0 0.5 1.0
yj 1.0 0.5 0.0 0.5 2.0

by a quadratic polynomial of the form

p(t) = a2 t2 + a1 t + a0

leads to an overdetermined linear system.

Problem 7. Consider the overdetermined linear system Ax = b. Find an x̂ such that
$$\|A\hat{x} - b\|_2 = \min_x \|Ax - b\|_2 \equiv \min_x \|r(x)\|_2$$
with the residual vector r(x) := b − Ax, where ‖ · ‖₂ denotes the Euclidean norm.

Problem 8. Consider the overdetermined linear system Ax = b with
$$A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \\ 1 & 4 \\ 1 & 5 \\ 1 & 6 \\ 1 & 7 \\ 1 & 8 \\ 1 & 9 \\ 1 & 10 \end{pmatrix}, \qquad x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \qquad b = \begin{pmatrix} 444 \\ 458 \\ 478 \\ 493 \\ 506 \\ 516 \\ 523 \\ 531 \\ 543 \\ 571 \end{pmatrix}.$$

Solve this linear system in the least squares sense (see previous problem)
by the normal equations method.
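A sketch of the normal equations method for this data (Python with NumPy; the cross-check against lstsq is our addition, not part of the original text):

import numpy as np

t = np.arange(1, 11)
A = np.column_stack((np.ones(10), t))
b = np.array([444., 458., 478., 493., 506., 516., 523., 531., 543., 571.])
x = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations: A^T A x = A^T b
print(x)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))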

Problem 9. An underdetermined linear system is either inconsistent or


has infinitely many solutions. Consider the underdetermined linear system

Hx = y
where H is an n × m matrix with m > n and
$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix}, \qquad y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}.$$
We assume that Hx = y has infinitely many solutions. Let P be the n × m matrix
$$P = \begin{pmatrix} 1 & 0 & \ldots & 0 & 0 & \ldots & 0 \\ 0 & 1 & \ldots & 0 & 0 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & 0 & \ldots & 1 & 0 & \ldots & 0 \end{pmatrix}.$$
We define x̂ := P x. Find
$$\min_x \|Px - y\|_2^2$$
subject to the constraint ‖Hx − y‖₂² = 0. We assume that (λH^T H + P^T P)^{-1} exists for all λ > 0. Apply the Lagrange multiplier method.

Problem 10. Show that solving the system of nonlinear equations with
the unknowns x1 , x2 , x3 , x4

$$(x_1 - 1)^2 + (x_2 - 2)^2 + x_3^2 = a^2(x_4 - b_1)^2$$
$$(x_1 - 2)^2 + x_2^2 + (x_3 - 2)^2 = a^2(x_4 - b_2)^2$$
$$(x_1 - 1)^2 + (x_2 - 1)^2 + (x_3 - 1)^2 = a^2(x_4 - b_3)^2$$
$$(x_1 - 2)^2 + (x_2 - 1)^2 + x_3^2 = a^2(x_4 - b_4)^2$$

leads to a linear underdetermined system. Solve this system with respect


to x1 , x2 and x3 .

Problem 11. Let A be an m × n matrix over R. We define

NA := { x ∈ Rn : Ax = 0 }.

NA is called the kernel of A and

ν(A) := dim(NA )

is called the nullity of A. If NA only contains the zero vector, then ν(A) = 0.
(i) Let
$$A = \begin{pmatrix} 1 & 2 & -1 \\ 2 & -1 & 3 \end{pmatrix}.$$
Find NA and ν(A).
(ii) Let
$$A = \begin{pmatrix} 2 & -1 & 3 \\ 4 & -2 & 6 \\ -6 & 3 & -9 \end{pmatrix}.$$
Find NA and ν(A).

Problem 12. Let V be a vector space over a field F. Let W be a subspace


of V . We define an equivalence relation ∼ on V by stating that v1 ∼ v2 if
v1 − v2 ∈ W . The quotient space V /W is the set of equivalence classes [v]
where v1 − v2 ∈ W . Thus we can say that v1 is equivalent to v2 modulo W
if v1 = v2 + w for some w ∈ W. Let
$$V = \mathbb{R}^2 = \left\{ \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} : x_1, x_2 \in \mathbb{R} \right\}$$
and the subspace
$$W = \left\{ \begin{pmatrix} x_1 \\ 0 \end{pmatrix} : x_1 \in \mathbb{R} \right\}.$$
(i) Is
$$\begin{pmatrix} 3 \\ 0 \end{pmatrix} \sim \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \begin{pmatrix} 4 \\ 1 \end{pmatrix} \sim \begin{pmatrix} -3 \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} 3 \\ 0 \end{pmatrix} \sim \begin{pmatrix} 4 \\ 1 \end{pmatrix} ?$$

(ii) Give the quotient space V /W .

Problem 13. (i) Let x1 , x2 , x3 ∈ Z. Find all solutions of the system of


linear equations

7x1 + 5x2 − 5x3 = 8


17x1 + 10x2 − 15x3 = −42.

(ii) Find all positive solutions.

Problem 14. Consider the inhomogeneous linear integral equation


$$\int_0^1 (\alpha_1(x)\beta_1(y) + \alpha_2(x)\beta_2(y))\varphi(y)\,dy + f(x) = \varphi(x) \qquad (1)$$

for the unknown function ϕ, f (x) = x and


√ √
α1 (x) = x, α2 (x) = x, β1 (y) = y, β2 (y) = y.

Thus α1 and α2 are continuous in [0, 1] and likewise for β1 and β2 . We


define
$$B_1 := \int_0^1 \beta_1(y)\varphi(y)\,dy, \qquad B_2 := \int_0^1 \beta_2(y)\varphi(y)\,dy$$
and
$$a_{\mu\nu} := \int_0^1 \beta_\mu(y)\alpha_\nu(y)\,dy, \qquad b_\mu := \int_0^1 \beta_\mu(y)f(y)\,dy$$
where µ, ν = 1, 2. Show that the integral equation can be cast into a system
of linear equations for B1 and B2 . Solve this system of linear equations and
thus find a solution of the integral equation.
Chapter 3

Determinants and Traces

Problem 1. Consider the 2 × 2 matrix
$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$

Can we find an invertible 2 × 2 matrix Q such that Q−1 AQ is a diagonal


matrix?

Problem 2. Let A be a 2 × 2 matrix over R. Assume that trA = 0 and


trA2 = 0. Can we conclude that A is the 2 × 2 zero matrix?

Problem 3. Consider the (n − 1) × (n − 1) matrix
$$A = \begin{pmatrix} 3 & 1 & 1 & 1 & \ldots & 1 \\ 1 & 4 & 1 & 1 & \ldots & 1 \\ 1 & 1 & 5 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 6 & \ldots & 1 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & 1 & 1 & \ldots & n+1 \end{pmatrix}.$$

Let Dn be the determinant of this matrix. Is the sequence { Dn /n! }


bounded?

Problem 4. For an integer n ≥ 3, let θ := 2π/n. Find the determinant


of the n × n matrix A + In , where In is the n × n identity matrix and the
matrix A = (ajk ) has the entries ajk = cos(jθ + kθ) for all j, k = 1, 2, . . . , n.


Problem 5. Let α, β, γ, δ be real numbers.
(i) Is the matrix
$$U = e^{i\alpha}\begin{pmatrix} e^{-i\beta/2} & 0 \\ 0 & e^{i\beta/2} \end{pmatrix}\begin{pmatrix} \cos(\gamma/2) & -\sin(\gamma/2) \\ \sin(\gamma/2) & \cos(\gamma/2) \end{pmatrix}\begin{pmatrix} e^{-i\delta/2} & 0 \\ 0 & e^{i\delta/2} \end{pmatrix}$$
unitary?
(ii) What is the determinant of U?

Problem 6. Let A and B be two n × n matrices over C. If there exists


a non-singular n × n matrix X such that

A = XBX −1

then A and B are said to be similar matrices. Show that the spectra
(eigenvalues) of two similar matrices are equal.

Problem 7. Let U be the n × n unitary matrix
$$U := \begin{pmatrix} 0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & 1 \\ 1 & 0 & 0 & \ldots & 0 \end{pmatrix}$$
and V be the n × n unitary diagonal matrix (ζ ∈ C)
$$V := \begin{pmatrix} 1 & 0 & 0 & \ldots & 0 \\ 0 & \zeta & 0 & \ldots & 0 \\ 0 & 0 & \zeta^2 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & \zeta^{n-1} \end{pmatrix}$$
where ζⁿ = 1. Then the set of matrices
$$\{\, U^j V^k : j, k = 0, 1, 2, \ldots, n-1 \,\}$$
provide a basis in the Hilbert space for all n × n matrices with the scalar product
$$\langle A, B \rangle := \frac{1}{n}\operatorname{tr}(AB^*)$$
for n × n matrices A and B. Write down the basis for n = 2.

Problem 8. Let A and B be n × n matrices over C. Show that the


matrices AB and BA have the same set of eigenvalues.

Problem 9. An n × n circulant matrix C is given by
$$C := \begin{pmatrix} c_0 & c_1 & c_2 & \ldots & c_{n-1} \\ c_{n-1} & c_0 & c_1 & \ldots & c_{n-2} \\ c_{n-2} & c_{n-1} & c_0 & \ldots & c_{n-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ c_1 & c_2 & c_3 & \ldots & c_0 \end{pmatrix}.$$
For example, the matrix
$$P := \begin{pmatrix} 0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & 1 \\ 1 & 0 & 0 & \ldots & 0 \end{pmatrix}$$
is a circulant matrix. It is also called the n × n primary permutation matrix.
(i) Let C and P be the matrices given above. Let
$$f(\lambda) = c_0 + c_1\lambda + \cdots + c_{n-1}\lambda^{n-1}.$$
Show that C = f(P).
(ii) Show that C is a normal matrix, that is, C*C = CC*.
(iii) Show that the eigenvalues of C are f(ω^k), k = 0, 1, . . . , n − 1, where ω
is the nth primitive root of unity.
(iv) Show that
$$\det(C) = f(\omega^0)f(\omega^1)\cdots f(\omega^{n-1}).$$
(v) Show that F*CF is a diagonal matrix, where F is the unitary matrix
with (j, k)-entry equal to
$$\frac{1}{\sqrt{n}}\,\omega^{(j-1)(k-1)}, \qquad j, k = 1, \ldots, n.$$
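A numerical illustration of (iii)–(v) (a sketch in Python with NumPy; the example coefficients c0, . . . , c3 are ours):

import numpy as np

n = 4
c = np.array([1.0, 2.0, 3.0, 4.0])                        # c_0, ..., c_{n-1}
C = np.array([[c[(k - j) % n] for k in range(n)] for j in range(n)])   # circulant matrix
omega = np.exp(2j * np.pi / n)
f = lambda lam: sum(c[k] * lam**k for k in range(n))
evals = np.array([f(omega**j) for j in range(n)])         # f(omega^j)
F = np.array([[omega**(j * k) for k in range(n)] for j in range(n)]) / np.sqrt(n)
print(np.allclose(F.conj().T @ C @ F, np.diag(evals)))    # F* C F is diagonal
print(np.allclose(np.linalg.det(C), np.prod(evals)))      # det(C) = f(1) f(omega) ... f(omega^{n-1})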

Problem 10. An n × n matrix A is called reducible if there is a permutation matrix P such that
$$P^T A P = \begin{pmatrix} B & C \\ 0 & D \end{pmatrix}$$
where B and D are square matrices of order at least 1. An n × n matrix
A is called irreducible if it is not reducible. Show that the n × n primary
permutation matrix
$$A := \begin{pmatrix} 0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & 1 \\ 1 & 0 & 0 & \ldots & 0 \end{pmatrix}$$
is irreducible.

Problem 11. We define a linear bijection, h, between R4 and H(2), the


set of complex 2 × 2 hermitian matrices, by
$$(t, x, y, z) \to \begin{pmatrix} t + x & y - iz \\ y + iz & t - x \end{pmatrix}.$$

We denote the matrix on the right-hand side by H.


(i) Show that the matrix can be written as a linear combination of the Pauli
spin matrices and the identity matrix I2 .
(ii) Find the inverse map.
(iii) Calculate the determinant of 2 × 2 hermitian matrix H. Discuss.

Problem 12. Let A be an n × n invertible matrix over C. Assume that


A can be written as A = B + iB where B has only real coefficients. Show
that B −1 exists and
1
A−1 = (B −1 − iB −1 ).
2

Problem 13. Let A be an invertible matrix. Assume that A = A−1 .


What are the possible values for det(A)?

Problem 14. Let A be a skew-symmetric matrix over R, i.e. AT = −A


and of order 2n − 1. Show that det(A) = 0.

Problem 15. Show that if A is hermitian, i.e. A∗ = A then det(A) is a


real number.

Problem 16. Let A, B, and C be n × n matrices. Calculate


 
$$\det\begin{pmatrix} A & 0_n \\ C & B \end{pmatrix}$$

where 0n is the n × n zero matrix.

Problem 17. Let A, B be 2 × 2 matrices over R. Let H := A + iB.


Express det H as a sum of determinants.

Problem 18. Let A, B be 2 × 2 matrices over R. Let H := A + iB.


Assume that H is hermitian. Show that

det(H) = det(A) − det(B).



Problem 19. Let A, B, C, D be n × n matrices. Assume that DC = CD, i.e. C and D commute, and det D ≠ 0. Consider the (2n) × (2n) matrix
$$M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}.$$
Show that
$$\det(M) = \det(AD - BC). \qquad (1)$$
We know that
$$\det\begin{pmatrix} U & 0_n \\ X & Y \end{pmatrix} = \det(U)\det(Y) \qquad (2)$$
and
$$\det\begin{pmatrix} U & V \\ 0_n & Y \end{pmatrix} = \det(U)\det(Y) \qquad (3)$$
where U, V, X, Y are n × n matrices and 0n is the n × n zero matrix.

Problem 20. Let A, B be n × n matrices. We have the identity
$$\det\begin{pmatrix} A & B \\ B & A \end{pmatrix} \equiv \det(A + B)\det(A - B).$$
Use this identity to calculate the determinant of the left-hand side using the right-hand side, where
$$A = \begin{pmatrix} 2 & 3 \\ 1 & 7 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 2 \\ 4 & 6 \end{pmatrix}.$$

Problem 21. Let A, B, C, D be n × n matrices. Assume that D is invertible. Consider the (2n) × (2n) matrix
$$M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}.$$
Show that
$$\det(M) = \det(AD - BD^{-1}CD). \qquad (1)$$

Problem 22. Let A, B be n×n positive definite (and therefore hermitian)


matrices. Show that
tr(AB) > 0.

Problem 23. Let P0 (x) = 1, P1 (x) = α1 − x and

Pk (x) = (αk − x)Pk−1 (x) − βk−1 Pk−2 (x), k = 2, 3, . . .



where βj , j = 1, 2, . . . are positive numbers. Find a k × k matrix Ak such


that
Pk (x) = det(Ak ).

Problem 24. Let
$$A = \begin{pmatrix} \dfrac{1}{x_1 + y_1} & \dfrac{1}{x_1 + y_2} \\[1ex] \dfrac{1}{x_2 + y_1} & \dfrac{1}{x_2 + y_2} \end{pmatrix}$$
where we assume that xi + yj ≠ 0 for i, j = 1, 2. Show that
$$\det(A) = \frac{(x_1 - x_2)(y_1 - y_2)}{(x_1 + y_1)(x_1 + y_2)(x_2 + y_1)(x_2 + y_2)}.$$

Problem 25. For a 3 × 3 matrix we can use the rule of Sarrus to calculate the determinant (for higher dimensions there is no such thing). Let
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$$
Write the first two columns again to the right of the matrix to obtain
$$\begin{pmatrix} a_{11} & a_{12} & a_{13} & | & a_{11} & a_{12} \\ a_{21} & a_{22} & a_{23} & | & a_{21} & a_{22} \\ a_{31} & a_{32} & a_{33} & | & a_{31} & a_{32} \end{pmatrix}.$$
Now look at the diagonals. The products of the diagonals sloping down to the right have a plus sign, the ones sloping up to the left have a negative sign. This leads to the determinant
$$\det(A) = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{31}a_{22}a_{13} - a_{32}a_{23}a_{11} - a_{33}a_{21}a_{12}.$$
Use this rule to calculate the determinant of the rotational matrix
$$R = \begin{pmatrix} \cos(\theta) & 0 & -\sin(\theta) \\ 0 & 1 & 0 \\ \sin(\theta) & 0 & \cos(\theta) \end{pmatrix}.$$
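The rule, written out as a small function (a sketch in Python with NumPy; illustrative only):

import numpy as np

def sarrus(a):
    # determinant of a 3 x 3 matrix via the rule of Sarrus
    return (a[0,0]*a[1,1]*a[2,2] + a[0,1]*a[1,2]*a[2,0] + a[0,2]*a[1,0]*a[2,1]
            - a[2,0]*a[1,1]*a[0,2] - a[2,1]*a[1,2]*a[0,0] - a[2,2]*a[1,0]*a[0,1])

theta = 0.3
R = np.array([[np.cos(theta), 0.0, -np.sin(theta)],
              [0.0,           1.0,  0.0],
              [np.sin(theta), 0.0,  np.cos(theta)]])
print(sarrus(R))                               # 1.0, as expected for a rotation matrix
print(np.isclose(sarrus(R), np.linalg.det(R)))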

Problem 26. Let A, S be n × n matrices. Assume that S is invertible


and assume that
S −1 AS = ρS
where ρ ≠ 0. Show that A is invertible.

Problem 27. The determinant of an n × n circulant matrix is given by
$$\det\begin{pmatrix} a_1 & a_2 & a_3 & \ldots & a_n \\ a_n & a_1 & a_2 & \ldots & a_{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_3 & a_4 & a_5 & \ldots & a_2 \\ a_2 & a_3 & a_4 & \ldots & a_1 \end{pmatrix} = (-1)^{n-1}\prod_{j=0}^{n-1}\left(\sum_{k=1}^{n}\zeta^{jk}a_k\right) \qquad (1)$$
where ζ := exp(2πi/n). Find the determinant of the circulant n × n matrix
$$\begin{pmatrix} 1 & 4 & 9 & \ldots & n^2 \\ n^2 & 1 & 4 & \ldots & (n-1)^2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 9 & 16 & 25 & \ldots & 4 \\ 4 & 9 & 16 & \ldots & 1 \end{pmatrix}$$
using equation (1).

Problem 28. Let A be a nonzero 2 × 2 matrix over R. Let B1 , B2 , B3 ,


B4 be 2 × 2 matrices over R and assume that
det(A + Bj ) = det(A) + det(Bj ) for j = 1, 2, 3, 4.
Show that there exist real numbers c1, c2, c3, c4, not all zero, such that
$$c_1 B_1 + c_2 B_2 + c_3 B_3 + c_4 B_4 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}. \qquad (1)$$

Problem 29. Let A, B be n × n matrices. Show that


tr((A + B)(A − B)) = tr(A2 ) − tr(B 2 ). (1)

Problem 30. An n × n matrix Q is orthogonal if Q is real and
$$Q^T Q = Q Q^T = I_n$$
i.e. Q^{-1} = Q^T.
(i) Find the determinant of an orthogonal matrix.
(ii) Let u, v be two vectors in R3 and u × v denotes the vector product of u and v
$$\mathbf{u} \times \mathbf{v} := \begin{pmatrix} u_2v_3 - u_3v_2 \\ u_3v_1 - u_1v_3 \\ u_1v_2 - u_2v_1 \end{pmatrix}.$$
Let Q be a 3 × 3 orthogonal matrix. Calculate
$$(Q\mathbf{u}) \times (Q\mathbf{v}).$$

Problem 31. Calculate the determinant of the n × n matrix
$$A = \begin{pmatrix} 1 & 1 & 1 & 1 & \ldots & 1 & 1 \\ 1 & 0 & 1 & 1 & \ldots & 1 & 1 \\ 1 & 1 & 0 & 1 & \ldots & 1 & 1 \\ 1 & 1 & 1 & 0 & \ldots & 1 & 1 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & 1 & 1 & 1 & \ldots & 0 & 1 \\ 1 & 1 & 1 & 1 & \ldots & 1 & 0 \end{pmatrix}.$$

Problem 32. Find the determinant of the matrix
$$A = \begin{pmatrix} 0 & 1 & 1 & 1 & \ldots & 1 & 1 \\ 1 & 0 & 1 & 1 & \ldots & 1 & 1 \\ 1 & 1 & 0 & 1 & \ldots & 1 & 1 \\ 1 & 1 & 1 & 0 & \ldots & 1 & 1 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & 1 & 1 & 1 & \ldots & 0 & 1 \\ 1 & 1 & 1 & 1 & \ldots & 1 & 0 \end{pmatrix}.$$

Problem 33. Let A be a 2 × 2 matrix over R
$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$$
with det(A) ≠ 0. Is (A^T)^{-1} = (A^{-1})^T ?

Problem 34. Let A be an invertible n × n matrix. Let c = 2. Can we


find an invertible matrix S such that
SAS −1 = cA.

Problem 35. Let σj (j = 1, 2, 3) be one of the Pauli spin matrices. Let


M be an 2 × 2 matrix such that M ∗ σj M = σj . Show that det(M M ∗ ) = 1.

Problem 36. Let A be a 2 × 2 skew-symmetric matrix over R. Then


det(I2 − A) = 1 + det(A) ≥ 1. Can we conclude for a 3 × 3 skew-symmetric
matrix B over R that
det(I3 + A) = 1 + det(A) ?

Problem 37. Consider the symmetric 4 × 4 matrices
$$A = \frac{1}{2}\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}, \qquad B = \frac{1}{2}\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

with trace equal to 1. Find the determinant of A and B. Find the rank of
A and B. Can one find a permutation matrix P such that P AP T = B?

Problem 38. Find all 2 × 2 matrices over C such that

tr(A2 ) = (tr(A))2 .

Problem 39. Let n ≥ 2 and A be an n × n matrix over C. The determinant of A can be calculated utilizing the traces of A, A², . . . , Aⁿ as
$$\det(A) = \sum_{k_1,k_2,\ldots,k_n} \prod_{\ell=1}^{n} (-1)^{k_\ell+1}\,\frac{(\operatorname{tr}(A^\ell))^{k_\ell}}{k_\ell!\,\ell^{k_\ell}}$$
where the sum runs over the sets of nonnegative integers (k1, . . . , kn) satisfying the linear Diophantine equation
$$\sum_{\ell=1}^{n} \ell\, k_\ell = n.$$
(i) Apply it to a 2 × 2 matrix A.
(ii) Give an implementation with SymbolicC++.
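The book asks for a SymbolicC++ implementation; as a hedged alternative, here is a purely numerical sketch of the same trace formula in Python with NumPy (the test matrix is our own choice):

import numpy as np
from itertools import product
from math import factorial

def det_from_traces(A):
    # determinant from tr(A), tr(A^2), ..., tr(A^n) via the formula above
    n = A.shape[0]
    traces = [np.trace(np.linalg.matrix_power(A, ell)) for ell in range(1, n + 1)]
    total = 0.0
    for ks in product(*(range(n // ell + 1) for ell in range(1, n + 1))):
        if sum(ell * k for ell, k in zip(range(1, n + 1), ks)) != n:
            continue
        term = 1.0
        for ell, k in zip(range(1, n + 1), ks):
            term *= (-1) ** (k + 1) * traces[ell - 1] ** k / (factorial(k) * ell ** k)
        total += term
    return total

A = np.array([[1., 2., 0.], [3., -1., 4.], [0., 2., 5.]])
print(det_from_traces(A), np.linalg.det(A))   # both give -43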
Chapter 4

Eigenvalues and
Eigenvectors

Problem 1. (i) Find the eigenvalues and normalized eigenvectors of the rotational matrix
$$A = \begin{pmatrix} \sin(\theta) & \cos(\theta) \\ -\cos(\theta) & \sin(\theta) \end{pmatrix}.$$
(ii) Are the eigenvectors orthogonal to each other?

Problem 2. (i) An n×n matrix A such that A2 = A is called idempotent.


What can be said about the eigenvalues of such a matrix?
(ii) An n × n matrix A for which Ap = 0n , where p is a positive integer, is
called nilpotent. What can be said about the eigenvalues of such a matrix?
(iii) An n × n matrix A such that A2 = In is called involutory. What can
be said about the eigenvalues of such a matrix?

Problem 3. Let x be a nonzero column vector in Rn . Then xxT is an


n × n matrix and xT x is a real number. Show that xT x is an eigenvalue of
xxT and x is the corresponding eigenvector.

Problem 4. Let A be an n×n matrix over C. Show that the eigenvectors


corresponding to distinct eigenvalues are linearly independent.


Problem 5. Let A be an n × n matrix over C. The spectral radius of the


matrix A is the non-negative number defined by
$$\rho(A) := \max\{\, |\lambda_j(A)| : 1 \le j \le n \,\}$$
where λj(A) are the eigenvalues of A. We define the norm of A as
$$\|A\| := \sup_{\|x\|=1} \|Ax\|$$
where ‖Ax‖ denotes the Euclidean norm of the vector Ax. Show that ρ(A) ≤ ‖A‖.

Problem 6. Let A be an n × n hermitian matrix, i.e., A = A∗ . Assume


that all n eigenvalues are different. Then the normalized eigenvectors { vj :
j = 1, 2, . . . , n } form an orthonormal basis in Cn . Consider
β := (Ax − µx, Ax − νx) ≡ (Ax − µx)∗ (Ax − νx)
where ( , ) denotes the scalar product in Cn and µ, ν are real constants with
µ < ν. Show that if no eigenvalue lies between µ and ν, then β ≥ 0.

Problem 7. Let A be an arbitrary n × n matrix over C. Let
$$H := \frac{A + A^*}{2}, \qquad S := \frac{A - A^*}{2i}.$$
Let λ be an eigenvalue of A and x be the corresponding normalized eigen-
vector (column vector).
(i) Show that
λ = x∗ Hx + ix∗ Sx.
(ii) Show that the real part λr of the eigenvalue λ is given by λr = x∗ Hx
and the imaginary part λi is given by λi = x∗ Sx.

Problem 8. Let A = (ajk) be a normal nonsymmetric 3 × 3 matrix over the real numbers. Show that
$$\mathbf{a} = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} a_{23} - a_{32} \\ a_{31} - a_{13} \\ a_{12} - a_{21} \end{pmatrix}$$
is an eigenvector of A.

Problem 9. Let λ1, λ2 and λ3 be the eigenvalues of the matrix
$$A = \begin{pmatrix} 0 & 1 & 2 \\ 0 & 0 & 1 \\ 2 & 2 & 1 \end{pmatrix}.$$
Find λ1² + λ2² + λ3² without calculating the eigenvalues of A or A².

Problem 10. Find all solutions of the linear equation
$$\begin{pmatrix} \cos(\theta) & -\sin(\theta) \\ -\sin(\theta) & -\cos(\theta) \end{pmatrix}\mathbf{x} = \mathbf{x}, \qquad \theta \in \mathbb{R} \qquad (1)$$
with the condition that x ∈ R2 and x^T x = 1, i.e., the vector x must be normalized. What type of equation is (1)?

Problem 11. Consider the column vectors u and v in Rn
$$\mathbf{u} = \begin{pmatrix} \cos(\theta) \\ \cos(2\theta) \\ \vdots \\ \cos(n\theta) \end{pmatrix}, \qquad \mathbf{v} = \begin{pmatrix} \sin(\theta) \\ \sin(2\theta) \\ \vdots \\ \sin(n\theta) \end{pmatrix}$$
where n ≥ 3 and θ = 2π/n.
(i) Calculate u^T u + v^T v.
(ii) Calculate u^T u − v^T v + 2i u^T v.
(iii) Calculate the matrix A = uu^T − vv^T, Au and Av. Give an interpretation of the results.

Problem 12. Let A be an n × n matrix over C. We define
$$r_j := \sum_{\substack{k=1 \\ k \neq j}}^{n} |a_{jk}|, \qquad j = 1, 2, \ldots, n.$$
(i) Show that each eigenvalue λ of A satisfies at least one of the following inequalities
$$|\lambda - a_{jj}| \le r_j, \qquad j = 1, 2, \ldots, n.$$
In other words show that all eigenvalues of A can be found in the union of disks
$$\{\, z : |z - a_{jj}| \le r_j,\ j = 1, 2, \ldots, n \,\}.$$
This is Geršgorin's disk theorem.
(ii) Apply this theorem to the matrix
$$A = \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}.$$
(iii) Apply this theorem to the matrix
$$B = \begin{pmatrix} 1 & 2 & 3 \\ 3 & 4 & 9 \\ 1 & 1 & 1 \end{pmatrix}.$$
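A quick numerical check of the theorem for the matrix B (a sketch in Python with NumPy; illustrative only):

import numpy as np

def gershgorin_disks(A):
    # list of (centre a_jj, radius r_j = sum of |a_jk| over k != j)
    n = A.shape[0]
    return [(A[j, j], sum(abs(A[j, k]) for k in range(n) if k != j)) for j in range(n)]

B = np.array([[1., 2., 3.], [3., 4., 9.], [1., 1., 1.]])
disks = gershgorin_disks(B)
print(disks)                                       # [(1.0, 5.0), (4.0, 12.0), (1.0, 2.0)]
print(all(any(abs(lam - c) <= r + 1e-12 for c, r in disks)
          for lam in np.linalg.eigvals(B)))        # every eigenvalue lies in some disk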

Problem 13. Let A be an n × n matrix over C. Let f be an entire


function, i.e., an analytic function on the whole complex plane, for example
exp(z), sin(z), cos(z). An infinite series expansion for f (A) is not generally
useful for computing f (A). Using the Cayley-Hamilton theorem we can
write
f (A) = an−1 An−1 + an−2 An−2 + · · · + a2 A2 + a1 A + a0 In (1)
where the complex numbers a0 , a1 , . . . , an−1 are determined as follows:
Let
r(λ) := an−1 λn−1 + an−2 λn−2 + · · · + a2 λ2 + a1 λ + a0
which is the right-hand side of (1) with Aj replaced by λj , where j =
0, 1, . . . , n − 1.
For each distinct eigenvalue λj of the matrix A, we consider the equation
f (λj ) = r(λj ). (2)
If λj is an eigenvalue of multiplicity k, for k > 1, then we consider also the following equations
$$f'(\lambda)\big|_{\lambda=\lambda_j} = r'(\lambda)\big|_{\lambda=\lambda_j}$$
$$f''(\lambda)\big|_{\lambda=\lambda_j} = r''(\lambda)\big|_{\lambda=\lambda_j}$$
$$\cdots = \cdots$$
$$f^{(k-1)}(\lambda)\big|_{\lambda=\lambda_j} = r^{(k-1)}(\lambda)\big|_{\lambda=\lambda_j}.$$
Apply this technique to find exp(A) with
$$A = \begin{pmatrix} c & c \\ c & c \end{pmatrix}, \qquad c \in \mathbb{R},\ c \neq 0.$$
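For this A the eigenvalues 0 and 2c are distinct, so exp(A) = a0 I2 + a1 A with a0 = e⁰ = 1 and a1 = (e^{2c} − 1)/(2c). A small verification (a Python sketch, assuming SciPy is available for expm; not part of the original text):

import numpy as np
from scipy.linalg import expm

c = 0.7
A = np.array([[c, c], [c, c]])
a0 = 1.0                                  # from f(0) = r(0)
a1 = (np.exp(2 * c) - 1.0) / (2 * c)      # from f(2c) = r(2c)
print(np.allclose(a0 * np.eye(2) + a1 * A, expm(A)))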

Problem 14. (i) Use the method given above to calculate exp(iK), where the hermitian 2 × 2 matrix K is given by
$$K = \begin{pmatrix} a & b \\ \bar{b} & c \end{pmatrix}, \qquad a, c \in \mathbb{R},\ b \in \mathbb{C}.$$
(ii) Find the condition on a, b and c such that
$$e^{iK} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.$$

Problem 15. Let A be a normal matrix over C, i.e. A∗ A = AA∗ . Show


that if x is an eigenvector of A with eigenvalue λ, then x is an eigenvector
of A∗ with eigenvalue λ.

Problem 16. Show that an n × n matrix A is singular if and only if at


least one eigenvalue is 0.

Problem 17. Let A be an invertible n × n matrix. Show that if x is an


eigenvector of A with eigenvalue λ, then x is an eigenvector of A−1 with
eigenvalue λ−1 .

Problem 18. Let A be an n × n matrix over R. Show that A and AT


have the same eigenvalues.

Problem 19. Let A be a symmetric matrix over R. Since A is symmetric


over R there exists a set of orthonormal eigenvectors v1 , v2 , . . . , vn which
form an orthonormal basis. Let x ∈ Rn be a reasonably good approximation
to an eigenvector, say v1 . Calculate

$$R := \frac{x^T A x}{x^T x}.$$
The quotient is called Rayleigh quotient. Discuss.

Problem 20. Let A be an n × n real symmetric matrix and

Q(x) := xT Ax.

The following statements hold (maximum principle)


1) λ1 = maxkxk=1 Q(x) = Q(x1 ) is the largest eigenvalue of the matrix A
and x1 is the eigenvector corresponding to eigenvalue λ1 .
2) (inductive statement). Let λk = max Q(x) subject to the constraints
a) xT xj = 0, j = 1, 2, . . . , k − 1.
b) kxk = 1.
c) Then λk = Q(xk ) is the kth eigenvalue of A, λ1 ≥ λ2 ≥ . . . ≥ λk and xk
is the corresponding eigenvectors of A.
Apply the maximum principle to the matrix
$$A = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}.$$

Problem 21. Let A be an n × n matrix. An n × n matrix can have at


most n linearly independent eigenvectors. Now assume that A has n + 1
eigenvectors (at least one must be linearly dependent) such that any n of
them are linearly independent. Show that A is a scalar multiple of the
identity matrix In .

Problem 22. An n × n stochastic matrix P satisfies the following condi-


tions:
$$p_{ij} \ge 0 \quad \text{for all } i, j = 1, 2, \ldots, n$$
and
$$\sum_{i=1}^{n} p_{ij} = 1 \quad \text{for all } j = 1, 2, \ldots, n.$$

Show that a stochastic matrix always has at least one eigenvalue equal to
one.

Problem 23. Let A be an n × n matrix over C. Assume that A is


hermitian and unitary. What can be said about the eigenvalues of A?

Problem 24. Consider the (n + 1) × (n + 1) matrix
$$A = \begin{pmatrix} 0 & \mathbf{s}^* \\ \mathbf{r} & 0_{n\times n} \end{pmatrix}$$
where r and s are n × 1 vectors with complex entries, s* denoting the conjugate transpose of s. Find det(A − λI_{n+1}), i.e. find the characteristic polynomial.

Problem 25. The matrix difference equation

p(t + 1) = M p(t), t = 0, 1, 2, . . .

with the column vector (vector of probabilities)

p(t) = (p1 (t), p2 (t), . . . , pn (t))T

and the n × n matrix


$$M = \begin{pmatrix} (1-w) & 0.5w & 0 & \ldots & 0.5w \\ 0.5w & (1-w) & 0.5w & \ldots & 0 \\ 0 & 0.5w & (1-w) & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0.5w & 0 & 0 & \ldots & (1-w) \end{pmatrix}$$

plays a role in random walk in one dimension. M is called the transition matrix and w denotes the probability w ∈ [0, 1] that at a given time step the particle jumps to either of its nearest neighbor sites; the probability that the particle does not jump either to the right or left is (1 − w). The matrix M is of the type known as circulant matrix. Such an n × n matrix is of the form
$$C = \begin{pmatrix} c_0 & c_1 & c_2 & \ldots & c_{n-1} \\ c_{n-1} & c_0 & c_1 & \ldots & c_{n-2} \\ c_{n-2} & c_{n-1} & c_0 & \ldots & c_{n-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ c_1 & c_2 & c_3 & \ldots & c_0 \end{pmatrix}$$
with the normalized eigenvectors
$$\mathbf{e}_j = \frac{1}{\sqrt{n}}\begin{pmatrix} 1 \\ e^{2\pi i j/n} \\ \vdots \\ e^{2(n-1)\pi i j/n} \end{pmatrix}$$
for j = 1, 2, . . . , n.
(i) Use this result to find the eigenvalues of the matrix C.
(ii) Use (i) to find the eigenvalues of the matrix M .
(iii) Use (ii) to find p(t) (t = 0, 1, 2, . . .), where we expand the initial dis-
tribution vector p(0) in terms of the eigenvectors
$$\mathbf{p}(0) = \sum_{k=1}^{n} a_k \mathbf{e}_k$$
with
$$\sum_{j=1}^{n} p_j(0) = 1.$$

(iv) Assume that
$$\mathbf{p}(0) = \frac{1}{n}\begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}.$$
Give the time evolution of p(0).

Problem 26. Let U be a unitary matrix and x an eigenvector of U with


the corresponding eigenvalue λ, i.e.

U x = λx.

(i) Show that U ∗ x = λx.


(ii) Let λ, µ be distinct eigenvalues of a unitary matrix U with the corre-
sponding eigenvectors x and y, i.e.

U x = λx, U y = µy.

Show that x∗ y = 0.

Problem 27. Let H, H0 , V be n × n matrices over C and H = H0 + V .


Let z ∈ C and assume that z is chosen so that (H0 −zIn )−1 and (H −zIn )−1
exist. Show that

(H − zIn )−1 = (H0 − zIn )−1 − (H0 − zIn )−1 V (H − zIn )−1 .

This is called the second resolvent identity.

Problem 28. Let u be a nonzero column vector in Rn . Consider the


n × n matrix
A = uuT − uT uIn .
Is u an eigenvector of this matrix? If so what is the eigenvalue?

Problem 29. An n × n matrix A is called a Hadamard matrix if each


entry of A is 1 or −1 and if the rows or columns of A are orthogonal, i.e.,

AAT = nIn or AT A = nIn .

Note that AA^T = nIn and A^T A = nIn are equivalent. Hadamard matrices Hn of order 2^n can be generated recursively by defining
$$H_1 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad H_n = \begin{pmatrix} H_{n-1} & H_{n-1} \\ H_{n-1} & -H_{n-1} \end{pmatrix}$$
for n ≥ 2. Show that the eigenvalues of Hn are given by +2^{n/2} and −2^{n/2}, each of multiplicity 2^{n−1}.
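The recursion and the eigenvalue claim can be checked numerically (a sketch in Python with NumPy; not part of the original text):

import numpy as np

def hadamard(n):
    # H_n of order 2^n from the recursion above
    H = np.array([[1, 1], [1, -1]])
    for _ in range(n - 1):
        H = np.block([[H, H], [H, -H]])
    return H

H3 = hadamard(3)                              # order 2^3 = 8
print(np.allclose(H3 @ H3.T, 8 * np.eye(8)))
print(np.round(np.linalg.eigvalsh(H3), 6))    # four times -2^{3/2}, four times +2^{3/2}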

Problem 30. Let U be an n × n unitary matrix. Then U can be written


as
U = V diag(λ1 , λ2 , . . . , λn )V ∗
where λ1 , λ2 , . . . , λn are the eigenvalues of U and V is an n × n unitary
matrix. Let
$$U = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$
Find the decomposition for U given above.

Problem 31. An n × n matrix A over the complex numbers is called


positive semidefinite (written as A ≥ 0), if

x∗ Ax ≥ 0 for all x ∈ Cn .

Show that for every A ≥ 0, there exists a unique B ≥ 0 so that B 2 = A.



Problem 32. An n × n matrix A over the complex numbers is said to


be normal if it commutes with its conjugate transpose A∗ A = AA∗ . The
matrix A can be written
$$A = \sum_{j=1}^{n} \lambda_j E_j$$
where λj ∈ C are the eigenvalues of A and Ej are n × n matrices satisfying
$$E_j^2 = E_j = E_j^*, \qquad E_j E_k = 0_n \ \text{ if } j \neq k, \qquad \sum_{j=1}^{n} E_j = I_n.$$
Let
$$A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$
Find the decomposition of A given above.

Problem 33. Let A be an n × n matrix over R. Assume that A−1 exists.


Let u, v ∈ Rn , where u, v are considered as column vectors.
(i) Show that if
vT A−1 u = −1
then A + uvT is not invertible.
(ii) Assume that v^T A^{-1} u ≠ −1. Show that
$$(A + \mathbf{u}\mathbf{v}^T)^{-1} = A^{-1} - \frac{A^{-1}\mathbf{u}\mathbf{v}^T A^{-1}}{1 + \mathbf{v}^T A^{-1}\mathbf{u}}.$$

Problem 34. The Denman-Beavers iteration for the square root of an n × n matrix A with no eigenvalues on R⁻ is
$$Y_{k+1} = \frac{1}{2}(Y_k + Z_k^{-1}), \qquad Z_{k+1} = \frac{1}{2}(Z_k + Y_k^{-1})$$
with k = 0, 1, 2, . . . and Z0 = In and Y0 = A. The iteration has the properties that
$$\lim_{k\to\infty} Y_k = A^{1/2}, \qquad \lim_{k\to\infty} Z_k = A^{-1/2}$$
and, for all k,
$$Y_k = AZ_k, \qquad Y_k Z_k = Z_k Y_k, \qquad Y_{k+1} = \frac{1}{2}(Y_k + AY_k^{-1}).$$
(i) Can the Denman-Beavers iteration be applied to the matrix
$$A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} ?$$
(ii) Find Y1 and Z1.
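A short numerical run of the iteration for this A (a sketch in Python with NumPy; the number of iterations is an arbitrary choice of ours):

import numpy as np

A = np.array([[1., 1.], [1., 2.]])        # positive definite, so no eigenvalues on R^-
Y, Z = A.copy(), np.eye(2)
print(0.5 * (Y + np.linalg.inv(Z)))       # Y_1
print(0.5 * (Z + np.linalg.inv(Y)))       # Z_1
for _ in range(15):
    # both updates use the previous Y_k and Z_k (tuple assignment)
    Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))
print(np.allclose(Y @ Y, A))              # Y -> A^{1/2}
print(np.allclose(Z, np.linalg.inv(Y)))   # Z -> A^{-1/2}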

Problem 35. Let
$$A = \begin{pmatrix} 3 & 2 \\ 4 & 3 \end{pmatrix}$$
and I2 be the 2 × 2 identity matrix. For j ≥ 1, let dj be the greatest common divisor of the entries of A^j − I2. Show that
$$\lim_{j\to\infty} d_j = \infty.$$

Hint. Use the eigenvalues of A and the characteristic polynomial.

Problem 36. (i) Consider the polynomial


p(x) = x2 − sx + d, s, d ∈ C.
Find a 2 × 2 matrix A such that its characteristic polynomial is p.
(ii) Consider the polynomial
q(x) = −x3 + sx2 − qx + d, s, q, d ∈ C.
Find a 3 × 3 matrix B such that its characteristic polynomial is q.

Problem 37. Calculate the eigenvalues of the 4 × 4 matrix
$$A = \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 1 & 0 & 0 & -1 \end{pmatrix}$$
by calculating the eigenvalues of A².

Problem 38. Find all 4 × 4 permutation matrices with the eigenvalues


+1, −1, +i, −i.

Problem 39. Let A be an n × n matrix over R. Let J be the n × n


matrix with 1’s in the counter diagonal and 0’s otherwise. Assume that
tr(A) = 0, tr(JA) = 0.
What can be said about the eigenvalues of such a matrix?

Problem 40. Let α, β, γ ∈ R. Find the eigenvalues and normalized eigenvectors of the 4 × 4 matrix
$$\begin{pmatrix} 0 & \cos(\alpha) & \cos(\beta) & \cos(\gamma) \\ \cos(\alpha) & 0 & 0 & 0 \\ \cos(\beta) & 0 & 0 & 0 \\ \cos(\gamma) & 0 & 0 & 0 \end{pmatrix}.$$

Problem 41. Let α ∈ R. Find the eigenvalues and eigenvectors of the 4 × 4 matrix
$$\begin{pmatrix} \cosh(\alpha) & 0 & 0 & \sinh(\alpha) \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ \sinh(\alpha) & 0 & 0 & \cosh(\alpha) \end{pmatrix}.$$

Problem 42. Consider the nonnormal matrix
$$A = \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}.$$
The eigenvalues are 1 and 3. Find the normalized eigenvectors of A and show that they are linearly independent, but not orthonormal.

Problem 43. Find the eigenvalues of the matrices
$$A_3 = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}, \quad A_4 = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{pmatrix}, \quad A_5 = \begin{pmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 \end{pmatrix}.$$
Extend to n dimensions.

Problem 44. Let x, y ∈ R. Find the eigenvalues and eigenvectors of the matrix
$$M = \begin{pmatrix} x+y & z_1 & z_2 \\ \bar{z}_1 & -x+y & z_3 \\ \bar{z}_2 & \bar{z}_3 & -2y \end{pmatrix}$$
with trace equal to 0.

Problem 45. Let A, B be hermitian matrices. Consider the eigenvalue problem
$$A\mathbf{v}_j = \lambda_j B\mathbf{v}_j, \qquad j = 1, \ldots, n.$$
Expand the eigenvector vj with respect to an orthonormal basis e1, e2, . . . , en, i.e.
$$\mathbf{v}_j = \sum_{k=1}^{n} c_{kj}\mathbf{e}_k.$$
Show that
$$\sum_{k=1}^{n} A_{\ell k} c_{kj} = \lambda_j \sum_{k=1}^{n} B_{\ell k} c_{kj}, \qquad \ell = 1, \ldots, n$$
where A_{kℓ} := e_k^* A e_ℓ, B_{kℓ} := e_k^* B e_ℓ.



Problem 46. Find the eigenvalues and normalized eigenvectors of the 4 × 4 matrix
$$A = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

Problem 47. Let H be a hermitian n×n matrix. Consider the eigenvalue


problem Hv = λv.
(i) Find the eigenvalues of H + iIn and H − iIn .
(ii) Since H is hermitian, the matrices H + iIn and H − iIn are invertible.
Find (H + iIn )v. Find (H − iIn )(H + iIn )−1 v. Discuss.

Problem 48. The matrix
$$A(\alpha) = \begin{pmatrix} \cos(\alpha) & -\sin(\alpha) \\ \sin(\alpha) & \cos(\alpha) \end{pmatrix}$$
admits the eigenvalues λ₊ = e^{iα} and λ₋ = e^{−iα} with the corresponding normalized eigenvectors
$$\mathbf{v}_+ = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -i \end{pmatrix}, \qquad \mathbf{v}_- = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ i \end{pmatrix}.$$
The star product A(α) ⋆ A(α) is given by
$$A(\alpha) \star A(\alpha) = \begin{pmatrix} \cos(\alpha) & 0 & 0 & -\sin(\alpha) \\ 0 & \cos(\alpha) & -\sin(\alpha) & 0 \\ 0 & \sin(\alpha) & \cos(\alpha) & 0 \\ \sin(\alpha) & 0 & 0 & \cos(\alpha) \end{pmatrix}.$$
Find the eigenvalues and normalized eigenvectors of A(α) ⋆ A(α).

Problem 49. Let x1, x2, x3 ∈ R. Find the eigenvalues of the 2 × 2 matrix
$$\begin{pmatrix} x_3 & x_1 + ix_2 \\ x_1 - ix_2 & -x_3 \end{pmatrix}.$$

Problem 50. The Cartan matrix for the Lie algebra g2 is given by
$$A = \begin{pmatrix} 2 & -1 \\ -3 & 2 \end{pmatrix}.$$

Is the matrix nonnormal? Show that the matrix is invertible. Find the
inverse. Find the eigenvalues and normalized eigenvectors of A.

Problem 51. Consider the matrices
$$\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}, \qquad \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 3 & 6 \end{pmatrix}, \qquad \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \\ 1 & 3 & 6 & 10 \\ 1 & 4 & 10 & 20 \end{pmatrix}.$$
Find the eigenvalues.

Problem 52. (i) Let ℓ > 0. Find the eigenvalues of the matrix
$$\begin{pmatrix} \cos(x/\ell) & \ell\sin(x/\ell) \\ -(1/\ell)\sin(x/\ell) & \cos(x/\ell) \end{pmatrix}.$$
(ii) Let ℓ > 0. Find the eigenvalues of the matrix
$$\begin{pmatrix} \cosh(x/\ell) & \ell\sinh(x/\ell) \\ (1/\ell)\sinh(x/\ell) & \cos(x/\ell) \end{pmatrix}.$$

Problem 53. Let a, b, c, d, e ∈ R. Find the eigenvalues of the 4 × 4 matrix
$$\begin{pmatrix} a & b & c & d \\ b & 0 & e & 0 \\ c & e & 0 & 0 \\ d & 0 & 0 & 0 \end{pmatrix}.$$

Problem 54. Find the eigenvalues and eigenvectors of the 4 × 4 matrix
$$\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \end{pmatrix}.$$
Is the matrix unitary?

Problem 55. Consider the 2 × 2 matrix over R
$$A = \frac{1}{4}\begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}.$$
Find the eigenvalues and normalized eigenvectors of A. Find A², A³, Aⁿ. Find
$$\lim_{n\to\infty} A^n$$
applying the spectral theorem.

Problem 56. Find the eigenvalues and eigenvectors of the staircase matrices
$$\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}, \qquad \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}, \qquad \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{pmatrix}.$$
Extend to n-dimensions.

Problem 57. Consider the 3 × 3 matrix
$$A = \begin{pmatrix} \sqrt{3}\,e^{i\pi/4}/2 & 0 & 1 \\ 0 & i & e^{i\pi/24}/2 \\ 1 & e^{i\pi/24}/2 & ie^{i\pi/12} \end{pmatrix}.$$
The matrix is not hermitian, but A = A^T. Find H = AA^* and the eigenvalues of H.

Problem 58. Let α, β ∈ R. Find the eigenvalues and normalized eigenvectors of the 3 × 3 matrix
$$\begin{pmatrix} \alpha+\beta & 0 & \alpha \\ 0 & \alpha+\beta & 0 \\ \alpha & 0 & \alpha+\beta \end{pmatrix}.$$

Problem 59. Let x1, x2 ∈ R. Show that the eigenvalues of the 2 × 2 matrix
$$\begin{pmatrix} 1 + x_1^2 & -x_1x_2 \\ -x_1x_2 & 1 + x_2^2 \end{pmatrix}$$
are given by λ1 = 1 + x1² + x2² and λ2 = 1. What curve in the plane is described by
$$\det\begin{pmatrix} 1 + x_1^2 & -x_1x_2 \\ -x_1x_2 & 1 + x_2^2 \end{pmatrix} = 0 ?$$

Problem 60. Consider the skew-symmetric 3 × 3 matrix over R
$$A = \begin{pmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{pmatrix}$$
where a1, a2, a3 ∈ R. Find the eigenvalues of A. Let 0₃ be the 3 × 3 zero matrix and A1, A2, A3 be 3 × 3 skew-symmetric matrices over R. Find the eigenvalues of the 9 × 9 matrix
$$B = \begin{pmatrix} 0_3 & -A_3 & A_2 \\ A_3 & 0_3 & -A_1 \\ -A_2 & A_1 & 0_3 \end{pmatrix}.$$

Problem 61. Find the inverse matrices of
$$\begin{pmatrix} 1 & \alpha_1 & 0 \\ 0 & 1 & \alpha_2 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \begin{pmatrix} 1 & \alpha_1 & 0 & 0 \\ 0 & 1 & \alpha_2 & 0 \\ 0 & 0 & 1 & \alpha_3 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Extend to n-dimensions.

Problem 62. Let a, b ∈ R. Find the eigenvalues and normalized eigenvectors of the 3 × 3 matrix
$$M = \begin{pmatrix} 0 & 0 & -a \\ 0 & 0 & b \\ -a & b & 0 \end{pmatrix}.$$

Problem 63. Find the eigenvalues of the unitary 2 × 2 matrix
$$U = \begin{pmatrix} 0 & 1 \\ i & 0 \end{pmatrix}.$$

Problem 64. Find all 4 × 4 permutation matrices with the eigenvalues


+1, −1, +i, −i.
Chapter 5

Commutators and
Anticommutators

Problem 1. Let A, B be n × n matrices. Assume that [A, B] = 0n and


[A, B]+ = 0n . What can be said about AB and BA?

Problem 2. Let A and B be symmetric n × n matrices over R. Show


that AB is symmetric if and only if A and B commute.

Problem 3. Let A and B be n × n matrices over C. Show that A and B


commute if and only if A − cIn and B − cIn commute over every c ∈ C.

Problem 4. Consider the matrices
$$h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad e = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad f = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.$$
Find a nonzero 2 × 2 matrix A such that
$$[A, e] = 0_n, \qquad [A, f] = 0_n, \qquad [A, h] = 0_n.$$

Problem 5. Can one find 2 × 2 matrices A and B such that

$$[A^2, B^2] = 0_n$$
while
$$[A, B] \neq 0_n ?$$

Problem 6. Let A, B, C, D be n × n matrices over R. Assume that


AB T and CDT are symmetric and ADT − BC T = In , where T denotes
transpose. Show that
AT D − C T B = I n .

Problem 7. Let A, B, H be n × n matrices over C such that


[A, H] = 0n , [B, H] = 0n .
Find [[A, B], H].

Problem 8. Let A, B be n × n matrices. Assume that A is invertible.


Assume that [A, B] = 0n . Can we conclude that [A−1 , B] = 0n ?

Problem 9. Let A and B be n × n hermitian matrices. Suppose that


A2 = In , B 2 = In (1)
and
[A, B]+ ≡ AB + BA = 0n (2)
where 0n is the n × n zero matrix. Let x ∈ Cn be normalized, i.e., kxk = 1.
Here x is considered as a column vector.
(i) Show that
(x∗ Ax)2 + (x∗ Bx)2 ≤ 1. (3)
(ii) Give an example for the matrices A and B.

Problem 10. Let A and B be n × n hermitian matrices. Suppose that


A2 = A, B2 = B (1)
and
[A, B]+ ≡ AB + BA = 0n (2)
n
where 0n is the n × n zero matrix. Let x ∈ C be normalized, i.e., kxk = 1.
Here x is considered as a column vector. Show that
(x∗ Ax)2 + (x∗ Bx)2 ≤ 1. (3)

Problem 11. Let A, B be skew-hermitian matrices over C, i.e. A∗ = −A,


B ∗ = −B. Is the commutator of A and B again skew-hermitian?

Problem 12. Let A, B be n × n matrices over C. Let S be an invertible n × n matrix over C with
$$\tilde{A} = S^{-1}AS, \qquad \tilde{B} = S^{-1}BS.$$
Show that
$$[\tilde{A}, \tilde{B}] = S^{-1}[A, B]S.$$

Problem 13. Can we find n × n matrices A, B over C such that

[A, B] = In (1)

where In denotes the identity matrix?

Problem 14. Can we find 2 × 2 matrices A and B of the form
$$A = \begin{pmatrix} 0 & a_{12} \\ a_{21} & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & b_{12} \\ b_{21} & 0 \end{pmatrix}$$
and singular (i.e. det A = 0 and det B = 0) such that [A, B]₊ = I₂.

Problem 15. Let A be an n × n hermitian matrix over C. Assume


that the eigenvalues of A, λ1 , λ2 , . . . , λn are nondegenerate and that the
normalized eigenvectors vj (j = 1, 2, . . . , n) of A form an orthonormal basis
in Cn . Let B be an n × n matrix over C. Assume that [A, B] = 0n , i.e., A
and B commute. Show that

vk∗ Bvj = 0 for k 6= j. (1)

Problem 16. Let A, B be hermitian n × n matrices. Assume they have


the same set of eigenvectors

Avj = λj vj , Bvj = µj vj , j = 1, 2, . . . , n

and that the normalized eigenvectors form an orthonormal basis in Cn .


Show that
[A, B] = 0n . (1)

Problem 17. Let A, B be n × n matrices. Then we have the expansion
$$e^A B e^{-A} = B + [A, B] + \frac{1}{2!}[A, [A, B]] + \frac{1}{3!}[A, [A, [A, B]]] + \cdots$$
(i) Assume that [A, B] = A. Calculate e^A B e^{-A}.
(ii) Assume that [A, B] = B. Calculate e^A B e^{-A}.

Problem 18. Let A be an arbitrary n × n matrix over C with tr(A) = 0.


Show that A can be written as commutator, i.e., there are n × n matrices
X and Y such that A = [X, Y ].

Problem 19. (i) Let A, B be n × n matrices over C with [A, B] = 0n .


Calculate
[A + cIn , B + cIn ]
where c ∈ C and In is the n × n identity matrix.
(ii) Let x be an eigenvector of the n × n matrix A with eigenvalue λ. Show
that x is also an eigenvector of A + cIn , where c ∈ C.

Problem 20. Let A, B, C be n × n matrices. Show that

eA [B, C]e−A ≡ [eA Be−A , eA Ce−A ].


Chapter 6

Decomposition of
Matrices

Problem 1. Find the LU-decomposition of the 3 × 3 matrix
$$A = \begin{pmatrix} 3 & 6 & -9 \\ 2 & 5 & -3 \\ -4 & 1 & 10 \end{pmatrix}.$$
The triangular matrices L and U are not uniquely determined by the matrix equation A = LU. These two matrices together contain n² + n unknown elements. Thus when comparing elements on the left- and right-hand side of A = LU we have n² equations and n² + n unknowns. We require a further n conditions to uniquely determine the matrices. There are three additional sets of n conditions that are commonly used. These are Doolittle's method with ℓjj = 1, j = 1, 2, . . . , n; Choleski's method with ℓjj = ujj, j = 1, 2, . . . , n; Crout's method with ujj = 1, j = 1, 2, . . . , n. Apply Crout's method.
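A sketch of Crout's method (Python with NumPy, no pivoting; illustrative rather than production code):

import numpy as np

def crout_lu(A):
    # A = L U with Crout's convention u_jj = 1
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):
        for i in range(j, n):                        # column j of L
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        for i in range(j + 1, n):                    # row j of U (unit diagonal)
            U[j, i] = (A[j, i] - L[j, :j] @ U[:j, i]) / L[j, j]
    return L, U

A = np.array([[3., 6., -9.], [2., 5., -3.], [-4., 1., 10.]])
L, U = crout_lu(A)
print(L)                          # [[3,0,0],[2,1,0],[-4,9,-29]]
print(U)                          # [[1,2,-3],[0,1,3],[0,0,1]]
print(np.allclose(L @ U, A))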

Problem 2. Find the QR-decomposition of the 3 × 3 matrix
$$A = \begin{pmatrix} 2 & 1 & 3 \\ -1 & 0 & 7 \\ 0 & -1 & -1 \end{pmatrix}.$$

Problem 3. Consider a non-singular square matrix A over C, i.e. A^{-1} exists. The polar decomposition theorem states that A can be written as A = UP, where U is a unitary matrix and P is a hermitian positive definite matrix. Show that A has a unique polar decomposition.

Problem 4. Let A be an arbitrary m × n matrix over R, i.e., A ∈ Rm×n .


Then A can be written as
A = U ΣV T

where U is an m × m orthogonal matrix, V is an n × n orthogonal matrix,


Σ is an m × n diagonal matrix with nonnegative entries and the superscript
T
denotes the transpose. This is called the singular value decomposition.
An algorithm to find the singular value decomposition is as follows.

1) Find the eigenvalues λj (j = 1, 2, . . . , n) of the n × n matrix AT A. Ar-


range the eigenvalues λ1 , λ2 , . . . , λn in descending order.

2) Find the number of nonzero eigenvalues of the matrix AT A. We call this


number r.

3) Find the orthogonal eigenvectors vj of the matrix AT A corresponding


to the obtained eigenvalues, and arrange them in the same order to form
the column-vectors of the n × n matrix V .

4) Form an m × n diagonal matrix Σ, placing on its leading diagonal the square roots σj := √λj of the p = min(m, n) first eigenvalues of the matrix A^T A found in 1), in descending order.

5) Find the first r column vectors of the m × m matrix U

1
uj = Avj , j = 1, 2, . . . , r.
σj

6) Add to the matrix U the rest of the m − r vectors using the Gram-
Schmidt orthogonalization process.

We have
Avj = σj uj , AT uj = σj vj

and therefore
AT Avj = σj2 vj , AAT uj = σj2 uj .

Apply the algorithm to the matrix
$$A = \begin{pmatrix} 0.96 & 1.72 \\ 2.28 & 0.96 \end{pmatrix}.$$
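The algorithm, followed step by step for this 2 × 2 example (a sketch in Python with NumPy; A has full rank here, so the Gram-Schmidt completion of step 6 is not needed):

import numpy as np

A = np.array([[0.96, 1.72], [2.28, 0.96]])

# steps 1-3: eigen-decomposition of A^T A gives V and the eigenvalues
evals, V = np.linalg.eigh(A.T @ A)           # ascending order
idx = np.argsort(evals)[::-1]                # reorder to descending
evals, V = evals[idx], V[:, idx]
sigma = np.sqrt(evals)                       # step 4: singular values (3 and 1 here)

# step 5: u_j = A v_j / sigma_j
U = A @ V / sigma
Sigma = np.diag(sigma)

print(sigma)
print(np.allclose(U @ Sigma @ V.T, A))       # A = U Sigma V^T
print(np.allclose(U.T @ U, np.eye(2)), np.allclose(V.T @ V, np.eye(2)))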

Problem 5. Find the singular value decomposition A = U ΣV T of the


matrix (row vector) A = (2 1 − 2).

Problem 6. Any unitary 2n × 2n matrix U can be decomposed as


   
U1 0 C S U3 0
U=
0 U2 −S C 0 U4

where U1 , U2 , U3 , U4 are 2n−1 × 2n−1 unitary matrices and C and S are the
2n−1 × 2n−1 diagonal matrices

C = diag(cos(α1), cos(α2), . . . , cos(α_{2^n/2}))
S = diag(sin(α1), sin(α2), . . . , sin(α_{2^n/2}))

where αj ∈ R. This decomposition is called cosine-sine decomposition.

Consider the unitary 2 × 2 matrix


 
0 i
U= .
−i 0

Show that U can be written as


   
U = ( u1 0 ; 0 u2 ) ( cos α  sin α ; −sin α  cos α ) ( u3 0 ; 0 u4 )

where α ∈ R and u1 , u2 , u3 , u4 ∈ U (1) (i.e., u1 , u2 , u3 , u4 are complex num-


bers with length 1). Find α, u1 , u2 , u3 , u4 .

Problem 7. (i) Find the cosine-sine decomposition of the unitary matrix


 
0 1
U= .
1 0

(ii) Use the result from (i) to find a 2 × 2 hermitian matrix K such that
U = exp(iK).

Problem 8. (i) Find the cosine-sine decomposition of the unitary matrix


(Hadamard matrix)  
U = (1/√2) ( 1 1 ; 1 −1 ).

Problem 9. For any n × n matrix A there exists an n × n unitary matrix


(U ∗ = U −1 ) such that
U ∗ AU = T (1)

where T is an n × n matrix in upper triangular form. Equation (1) is called


a Schur decomposition. The diagonal elements of T are the eigenvalues of
A. Note that such a decomposition is not unique. An iterative algorithm
to find a Schur decomposition for an n × n matrix is as follows.

It generates at each step matrices Uk and Tk (k = 1, 2, . . . , n − 1) with


the properties: each Uk is unitary, and each Tk has only zeros below its
main diagonal in its first k columns. Tn−1 is in upper triangular form, and
U = U1 U2 · · · Un−1 is the unitary matrix that transforms A into Tn−1 . We
set T0 = A. The kth step in the iteration is as follows.

Step 1. Denote as Ak the (n − k + 1) × (n − k + 1) submatrix in the lower


right portion of Tk−1 .
Step 2. Determine an eigenvalue and the corresponding normalized eigen-
vector for Ak .
Step 3. Construct a unitary matrix Nk which has as its first column the
normalized eigenvector found in step 2.
Step 4. For k = 1, set U1 = N1 , for k > 1, set
 
Ik−1 0
Uk =
0 Nk

where Ik−1 is the (k − 1) × (k − 1) identity matrix.


Step 5. Calculate Tk = Uk∗ Tk−1 Uk .

Apply the algorithm to the matrix


 
1 0 1
A = 0 1 0.
1 0 1

Problem 10. Let A be an n × n matrix over C. Then there exists an


n × n unitary matrix Q, such that

Q∗ AQ = D + N

where D = diag(λ1 , λ2 , . . . , λn ) is the diagonal matrix composed of the


eigenvalues of A and N is a strictly upper triangular matrix (i.e., N has
zero entries on the diagonal). The matrix Q is said to provide a Schur
decomposition of A.

Let    
A = ( 3 8 ; −2 3 ),    Q = (1/√5) ( 2i 1 ; −1 −2i ).

Show that Q provides a Schur decomposition of A.

Problem 11. We say that a matrix is upper triangular if all their entries
below the main diagonal are 0, and that it is strictly upper triangular if in
addition all the entries on the main diagonal are equal to 1. Any invertible
real n × n matrix A can be written as the product of three real n × n
matrices
A = ODN
where N is strictly upper triangular, D is diagonal with positive entries, and
O is orthogonal. This is known as the Iwasawa decomposition of the matrix
A. The decomposition is unique. In other words, if A = O′D′N′,
where O′, D′ and N′ are orthogonal, diagonal with positive entries and
strictly upper triangular, respectively, then O′ = O, D′ = D and N′ = N.
Find the Iwasawa decomposition of the matrix
 
0 1
A= .
1 2

Problem 12. Consider the matrix


 
a b
M=
c d
where a, b, c, d ∈ C and ad − bc = 1. Thus M is an element of the Lie group
SL(2, C). The Iwasawa decomposition is given by
( a b ; c d ) = ( α β ; −β α ) ( δ^{−1/2} 0 ; 0 δ^{1/2} ) ( 1 η ; 0 1 )
where α, β, η ∈ C and δ ∈ R+ . Find α, β, δ and η.

Problem 13. Let A be a unitary n × n matrix. Let P be an invertible


n × n matrix. Let B := AP . Show that P B −1 is unitary.

Problem 14. Show that every 2 × 2 matrix A of determinant 1 is the


product of three elementary matrices. This means that matrix A can be
written as
( a11 a12 ; a21 a22 ) = ( 1 x ; 0 1 ) ( 1 0 ; y 1 ) ( 1 z ; 0 1 ).    (1)

Problem 15. Almost any 2 × 2 matrix A can be factored (Gaussian


decomposition) as
     
( a11 a12 ; a21 a22 ) = ( 1 α ; 0 1 ) ( λ 0 ; 0 µ ) ( 1 0 ; β 1 ).

Find the decomposition of the matrix


 
1 1
A= .
1 1

Problem 16. Let A be an n × n matrix over R. Consider the LU -


decomposition A = LU , where L is a unit lower triangular matrix and
U is an upper triangular matrix. The LDU -decomposition is defined as
A = LDU , where L is unit lower triangular, D is diagonal and U is unit
upper triangular. Let
 
2 4 −2
A= 4 9 −3  .
−2 −3 7

Find the LDU -decomposition via the LU -decomposition.

Problem 17. Let U be an n × n unitary matrix. The matrix U can


always be diagonalized by a unitary matrix V such that
U = V diag(e^{iθ1}, . . . , e^{iθn}) V^*

where eiθj , θj ∈ [0, 2π) are the eigenvalues of U . Let


 
0 1
U= .
1 0

Thus the eigenvalues are 1 and −1. Find the unitary matrix V such that
   
U = ( 0 1 ; 1 0 ) = V ( 1 0 ; 0 −1 ) V^*.
Chapter 7

Functions of Matrices

Problem 1. Let A be an n × n matrix over C with A2 = rA, where r ∈ C


and r 6= 0.
(i) Calculate ezA , where z ∈ C.
(ii) Let U(z) = e^{zA}. Let z′ ∈ C. Calculate U(z)U(z′).

Problem 2. Let A be an n × n matrix over C. We define sin(A) as



sin(A) := Σ_{j=0}^{∞} ((−1)^j/(2j + 1)!) A^{2j+1}.

Can we find a 2 × 2 matrix B over the real numbers R such that


 
1 4
sin(B) = ? (1)
0 1

Problem 3. Consider the unitary matrix


 
0 1
U= .
1 0
Can we find an α ∈ R such that U = exp(αA), where
 
0 1
A= ?
−1 0

Problem 4. Let A be an n × n matrix over C. Assume that A2 = cIn ,


where c ∈ R.


(i) Calculate exp(A).


(ii) Apply the result to the 2 × 2 matrix (z 6= 0)
 
0 z
B= .
−z 0
Thus B is skew-symmetric, i.e., B^T = −B.

Problem 5. Let H be a hermitian matrix, i.e., H = H ∗ . It is known


that U := eiH is a unitary matrix. Let
 
a b
H= , a ∈ R, b ∈ C
b a

with b 6= 0.
(i) Calculate eiH using the normalized eigenvectors of H to construct a
unitary matrix V such that V ∗ HV is a diagonal matrix.
(ii) Specify a, b such that we find the unitary matrix
 
0 1
U= .
1 0

Problem 6. It is known that any n×n unitary matrix U can be written as


U = exp(iK), where K is a hermitian matrix. Assume that det(U ) = −1.
What can be said about the trace of K?

Problem 7. The MacLaurin series for arctan(z) is defined as



z3 z5 z7 X (−1)j z 2j+1
arctan(z) = z − + − + ··· =
3 5 7 j=0
2j + 1

which converges for all complex values of z having absolute value less than
1, i.e., |z| < 1. Let A be an n × n matrix. Thus the series expansion

arctan(A) = A − A³/3 + A⁵/5 − A⁷/7 + · · · = Σ_{j=0}^{∞} (−1)^j A^{2j+1}/(2j + 1)

is well-defined for A if all eigenvalues λ of A satisfy |λ| < 1. Let


 
1 1 1
A= .
2 1 1

Does arctan(A) exist?



Problem 8. For every positive definite matrix A, there is a unique posi-


tive definite matrix Q such that Q2 = A. The matrix Q is called the square
root of A. Can we find the square root of the matrix
 
B = (1/2) ( 5 3 ; 3 5 ) ?

Problem 9. Let A, B be n × n matrices over C. Assume that

[A, [A, B]] = [B, [A, B]] = 0n . (1)

Show that
e^{A+B} = e^A e^B e^{−[A,B]/2}    (2a)
e^{A+B} = e^B e^A e^{+[A,B]/2}.    (2b)
Use the technique of parameter differentiation, i.e. consider the matrix-
valued function
f(ε) := e^{εA} e^{εB}
where ε is a real parameter. Then take the derivative of f with respect to
ε.

Problem 10. Let


     
J^+ := ( 0 1 ; 0 0 ),    J^- := ( 0 0 ; 1 0 ),    J3 := (1/2) ( 1 0 ; 0 −1 ).

(i) Let ε ∈ R. Find
e^{εJ^+},    e^{εJ^-},    e^{ε(J^+ + J^-)}.
(ii) Let r ∈ R. Show that
e^{r(J^+ + J^-)} ≡ e^{J^- tanh(r)} e^{2J3 ln(cosh(r))} e^{J^+ tanh(r)}.

Problem 11. Let A, B, C2 , ..., Cm , ... be n × n matrices over C. The


Zassenhaus formula is given by

exp(A + B) = exp(A) exp(B) exp(C2 ) · · · exp(Cm ) · · ·

The left-hand side is called the disentangled form and the right-hand side
is called the undisentangled form. Find C2 , C3 , . . . , using the comparison
method. In the comparison method the disentangled and undisentangled
forms are expanded in terms of an ordering scalar α and matrix coefficients
of equal powers of α are compared. From

exp(α(A + B)) = exp(αA) exp(αB) exp(α2 C2 ) exp(α3 C3 ) · · ·



we obtain
Σ_{k=0}^{∞} (α^k/k!) (A + B)^k = Σ_{r0,r1,r2,r3,...=0}^{∞} (α^{r0+r1+2r2+3r3+···}/(r0! r1! r2! r3! · · ·)) A^{r0} B^{r1} C2^{r2} C3^{r3} · · ·

(i) Find C2 and C3 .


(ii) Assume that [A, [A, B]] = 0n and [B, [A, B]] = 0n . What conclusion
can we draw for the Zassenhaus formula?

Problem 12. Calculating exp(A) we can also use the Cayley-Hamilton


theorem and the Putzer method. The Putzer method is as follows. Using
the Cayley-Hamilton theorem we can write

f (A) = an−1 An−1 + an−2 An−2 + · · · + a2 A2 + a1 A + a0 In (1)

where the complex numbers a0 , a1 , . . . , an−1 are determined as follows:


Let
r(λ) := an−1 λn−1 + an−2 λn−2 + · · · + a2 λ2 + a1 λ + a0
which is the right-hand side of (1) with Aj replaced by λj (j = 0, 1, . . . , n −
1). For each distinct eigenvalue λj of the matrix A, we consider the equation

f (λj ) = r(λj ). (2)

If λj is an eigenvalue of multiplicity k, for k > 1, then we consider also the


following equations

f′(λ)|_{λ=λj} = r′(λ)|_{λ=λj},  · · · ,  f^{(k−1)}(λ)|_{λ=λj} = r^{(k−1)}(λ)|_{λ=λj}.

Calculate exp(A) with  


2 −1
A=
−1 2
with the Putzer method.
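A numerical illustration of this procedure in C++ (a sketch, not part of the original problem; the series check at the end is added here only as a sanity test): the eigenvalues of A are λ1 = 1 and λ2 = 3, the two equations e^{λj} = a1 λj + a0 are solved for a0, a1, and the matrix a1 A + a0 I2 is compared with a truncated power series for exp(A).

#include <iostream>
#include <cmath>
using namespace std;

int main() {
    double A[2][2] = {{2, -1}, {-1, 2}};
    double l1 = 1.0, l2 = 3.0;                  // eigenvalues of A

    // Solve e^{l1} = a1*l1 + a0, e^{l2} = a1*l2 + a0 for a0, a1
    double a1 = (exp(l2) - exp(l1)) / (l2 - l1);
    double a0 = exp(l1) - a1 * l1;

    double E[2][2];                              // exp(A) = a1*A + a0*I2
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            E[i][j] = a1 * A[i][j] + a0 * (i == j ? 1.0 : 0.0);

    // Sanity check: truncated power series sum_{k=0}^{30} A^k / k!
    double S[2][2] = {{1, 0}, {0, 1}}, P[2][2] = {{1, 0}, {0, 1}};
    double fact = 1.0;
    for (int k = 1; k <= 30; ++k) {
        double Q[2][2] = {{0, 0}, {0, 0}};
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j)
                for (int m = 0; m < 2; ++m)
                    Q[i][j] += P[i][m] * A[m][j];   // P <- P*A gives A^k
        fact *= k;
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j) {
                P[i][j] = Q[i][j];
                S[i][j] += P[i][j] / fact;
            }
    }

    cout << "Putzer: " << E[0][0] << " " << E[0][1] << " " << E[1][0] << " " << E[1][1] << "\n";
    cout << "Series: " << S[0][0] << " " << S[0][1] << " " << S[1][0] << " " << S[1][1] << "\n";
    return 0;
}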

Problem 13. Any unitary matrix U can be written as U = exp(iK),


where K is hermitian. Apply the method of the previous problem to find
K for the Hadamard matrix
 
U_H = (1/√2) ( 1 1 ; 1 −1 ).

Problem 14. Let A, B be n × n matrices and t ∈ R. Show that

e^{t(A+B)} − e^{tA} e^{tB} = (t²/2)(BA − AB) + higher order terms in t.    (1)

Problem 15. Let K be an n × n hermitian matrix. Show that

U := exp(iK)

is a unitary matrix.

Problem 16. Let  


2 3
A= .
7 −2
Calculate det eA .

Problem 17. Let A be an n × n matrix over C. Assume that A2 = cIn ,


where c ∈ R. Calculate exp(A).

Problem 18. Let A be an n × n matrix with A3 = −A and µ ∈ R.


Calculate exp(µA).

Problem 19. Let X be an n × n matrix over C. Assume that X 2 = In .


Let Y be an arbitrary n × n matrix over C. Let z ∈ C.
(i) Calculate exp(zX)Y exp(−zX) using the Baker-Campbell-Hausdorff re-
lation
e^{zX} Y e^{−zX} = Y + z[X, Y] + (z²/2!)[X, [X, Y]] + (z³/3!)[X, [X, [X, Y]]] + · · · .
(ii) Calculate exp(zX)Y exp(−zX) by first calculating exp(zX) and exp(−zX)
and then doing the matrix multiplication. Compare the two methods.

Problem 20. We consider the principal logarithm of a matrix A ∈ Cn×n


with no eigenvalues on R− (the closed negative real axis). This logarithm
is denoted by log A and is the unique matrix B such that exp(B) = A and
the eigenvalues of B have imaginary parts lying strictly between −π and
π. For A ∈ Cn×n with no eigenvalues on R− we have the following integral
representation
log(s(A − In) + In) = ∫_0^s (A − In)(t(A − In) + In)^{−1} dt.

Thus with s = 1 we obtain


log A = ∫_0^1 (A − In)(t(A − In) + In)^{−1} dt

where In is the n × n identity matrix. Let A = xIn with x a positive real


number. Calculate log A.
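A quick numerical check of this representation for A = xIn (a sketch under the stated assumption x > 0; the value x = 2.5 and the number of subintervals are chosen here only for illustration): the integrand reduces to the scalar (x − 1)/(t(x − 1) + 1) times In, so a trapezoidal rule should reproduce ln(x).

#include <iostream>
#include <cmath>
using namespace std;

int main() {
    double x = 2.5;                 // A = x * In with x > 0
    int N = 100000;                 // subintervals of the trapezoidal rule
    double h = 1.0 / N, sum = 0.0;

    // The integrand (A - In)(t(A - In) + In)^{-1} reduces to the scalar
    // (x - 1)/(t(x - 1) + 1) for A = x*In.
    for (int k = 0; k <= N; ++k) {
        double t = k * h;
        double f = (x - 1.0) / (t * (x - 1.0) + 1.0);
        sum += (k == 0 || k == N) ? 0.5 * f : f;
    }
    sum *= h;

    cout << "integral = " << sum << ",  log(x) = " << log(x) << "\n";
    return 0;
}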

Problem 21. Let A be a real or complex n×n matrix with no eigenvalues


on R− (the closed negative real axis). Then there exists a unique matrix
X such that
1) eX = A
2) the eigenvalues of X lie in the strip { z : −π < =(z) < π }. We refer to
X as the principal logarithm of A and write X = log A. Similarly, there is
a unique matrix S such that
1) S 2 = A
2) the eigenvalues of S lie in the open halfplane: 0 < <(z). We refer to S
as the principal square root of A and write S = A1/2 .
If the matrix A is real then its principal logarithm and principal square
root are also real.
The open halfplane associated with z = ρeiθ is the set of complex numbers
w = ζeiφ such that −π/2 < φ − θ < π/2.
Suppose that A = BC has no eigenvalues on R− and
1. BC = CB
2. every eigenvalue of B lies in the open halfplane of the corresponding
eigenvalue of A1/2 (or, equivalently, the same condition holds for C).

Show that log(A) = log(B) + log(C).

Problem 22. Let K be a hermitian matrix. Then U := exp(iK) is a


unitary matrix. A method to find the hermitian matrix K from the unitary
matrix U is to consider the principal logarithm of a matrix A ∈ Cn×n with
no eigenvalues on R− (the closed negative real axis). This logarithm is
denoted by log A and is the unique matrix B such that exp(B) = A and
the eigenvalues of B have imaginary parts lying strictly between −π and
π. For A ∈ Cn×n with no eigenvalues on R− we have the following integral
representation
log(s(A − In) + In) = ∫_0^s (A − In)(t(A − In) + In)^{−1} dt.

Thus with s = 1 we obtain


log(A) = ∫_0^1 (A − In)(t(A − In) + In)^{−1} dt

where In is the n × n identity matrix. Find log U of the unitary matrix


 
U = (1/√2) ( 1 −1 ; 1 1 ).

First test whether the method can be applied.



Problem 23. Let x0, x1, . . . , x_{2^n−1} be an orthonormal basis in C^{2^n}. We
define
U := (1/√(2^n)) Σ_{j=0}^{2^n−1} Σ_{k=0}^{2^n−1} e^{−i2πkj/2^n} x_k x_j^*.    (1)
Show that U is unitary. In other words show that U U^* = I_{2^n}, using the
completeness relation
I_{2^n} = Σ_{j=0}^{2^n−1} x_j x_j^*.
Thus I_{2^n} is the 2^n × 2^n unit matrix.

Problem 24. Consider the unitary matrix


 
0 1
U= .
1 0
Show that we can find a unitary matrix V such that V 2 = U . Thus V
would be the square root of U . What are the eigenvalues of V ?

Problem 25. Let A be an n × n matrix. Let ω, µ ∈ R. Assume that


ketA k ≤ M eωt , t≥0
and µ > ω. Then we have
(µIn − A)^{−1} ≡ ∫_0^∞ e^{−µt} e^{tA} dt.    (1)

Calculate the left and right-hand side of (1) for the matrix
 
0 1
A= .
1 0

Problem 26. The Fréchet derivative of a matrix function f : C^{n×n} → C^{n×n} at a


point X ∈ Cn×n is a linear mapping LX : Cn×n → Cn×n such that for all
Y ∈ Cn×n
f (X + Y ) − f (X) − LX (Y ) = o(kY k).
Calculate the Fréchet derivative of f (X) = X 2 .

Problem 27. Find the square root of the positive definite 2 × 2 matrix
 
1 1
.
1 2
Chapter 8

Linear Differential
Equations

Problem 1. Solve the initial value problem of the linear differential


equation
dx/dt = 2x + sin(t).

Problem 2. Solve the initial value problem of dx/dt = Ax, where


 
0 1
A= .
1 0

Problem 3. Solve the initial value problem of dx/dt = Ax, where


 
a c
A= , a, b, c ∈ R.
0 b

Problem 4. Show that the n-th order differential equation

d^n x/dt^n = c0 x + c1 dx/dt + · · · + c_{n−1} d^{n−1}x/dt^{n−1},    cj ∈ R
can be written as a system of first order differential equations.


Problem 5. Let A, X, F be n × n matrices. Assume that the matrix


elements of X and F are differentiable functions of t. Consider the initial-
value linear matrix differential equation with an inhomogeneous part

dX(t)/dt = AX(t) + F(t),    X(t0) = C.
Find the solution of this matrix differential equation.

Problem 6. Let A, B, C, Y be n × n matrices. We know that

AY + Y B = C

can be written as

((In ⊗ A) + (B T ⊗ In ))vec(Y ) = vec(C)

where ⊗ denotes the Kronecker product. The vec operation is defined as

vecY := (y11 , . . . , yn1 , y12 , . . . , yn2 , . . . , y1n , . . . , ynn )T .

Apply the vec operation to the matrix differential equation

(d/dt) X(t) = AX(t) + X(t)B
where A, B are n × n matrices and the initial matrix X(t = 0) ≡ X(0) is
given. Find the solution of this differential equation.

Problem 7. The motion of a charge q in an electromagnetic field is given


by
m dv/dt = q(E + v × B)    (1)
where m denotes the mass and v the velocity. Assume that
   
E1 B1
E =  E2  , B =  B2  (2)
E3 B3

are constant fields. Find the solution of the initial value problem.

Problem 8. Consider a system of linear ordinary differential equations


with periodic coefficients
dx/dt = A(t)x    (1)

where A(t) is an n × n matrix of periodic functions with a period T . From


Floquet theory we know that any fundamental n × n matrix Φ(t), which is
defined as a nonsingular matrix satisfying the matrix differential equation
dΦ(t)/dt = A(t)Φ(t)
can be expressed as
Φ(t) = P (t) exp(tR). (2)
Here P (t) is a nonsingular n × n matrix of periodic functions with the
same period T , and R, a constant matrix, whose eigenvalues are called the
characteristic exponents of the periodic system (1). Let
y = P −1 (t)x.
Show that y satisfies the system of linear differential equations with con-
stant coefficients
dy/dt = Ry.

Problem 9. Consider the autonomous system of nonlinear first order


ordinary differential equations
dx1/dt = a(x2 − x1) = f1(x1, x2, x3)
dx2/dt = (c − a)x1 + cx2 − x1x3 = f2(x1, x2, x3)
dx3/dt = −bx3 + x1x2 = f3(x1, x2, x3)
where a > 0, b > 0 and c are real constants with 2c > a.
(i) The fixed points are defined as the solutions of the system of equations
f1 (x∗1 , x∗2 , x∗3 ) = a(x∗2 − x∗1 ) = 0
f2 (x∗1 , x∗2 , x∗3 ) = (c − a)x∗1 + cx∗2 − x∗1 x∗3 = 0
f3 (x∗1 , x∗2 , x∗3 ) = −bx∗3 + x∗1 x∗2 = 0 .
Find the fixed points. Obviously (0, 0, 0) is a fixed point.
(ii) The linearized equation (or variational equation) is given by
   
dy1 /dt y1
 dy2 /dt  = A  y2 
dy3 /dt y3
where the 3 × 3 matrix A is given by
 
∂f1 /∂x1 ∂f1 /∂x2 ∂f1 /∂x3
Ax=x∗ =  ∂f2 /∂x1 ∂f2 /∂x2 ∂f2 /∂x3 
∂f3 /∂x1 ∂f3 /∂x2 ∂f3 /∂x3 x=x∗

where x = x∗ indicates to insert one of the fixed points into A. Calculate


A and insert the first fixed point (0, 0, 0). Calculate the eigenvalues of A. If
all eigenvalues have negative real part then the fixed point is stable. Thus
study the stability of the fixed point.
Chapter 9

Kronecker Product

Problem 1. (i) Let


   
x = (1/√2) ( 1 ; 1 ),    y = (1/√2) ( 1 ; −1 ).

Thus {x, y } forms an orthonormal basis in C2 (Hadamard basis). Calculate

x ⊗ x, x ⊗ y, y ⊗ x, y⊗y

and interpret the result.

Problem 2. Consider the Pauli matrices


   
0 1 1 0
σ1 := , σ3 := .
1 0 0 −1
Find σ1 ⊗ σ3 and σ3 ⊗ σ1 . Is σ1 ⊗ σ3 = σ3 ⊗ σ1 ?

Problem 3. Every 4 × 4 unitary matrix U can be written as

U = (U1 ⊗ U2 ) exp(i(ασ1 ⊗ σ1 + βσ2 ⊗ σ2 + γσ3 ⊗ σ3 ))(U3 ⊗ U4 )

where Uj ∈ U (2) (j = 1, 2, 3, 4) and α, β, γ ∈ R. Calculate

exp(i(ασ1 ⊗ σ1 + βσ2 ⊗ σ2 + γσ3 ⊗ σ3 )).

Problem 4. Find an orthonormal basis given by hermitian matrices in


the Hilbert space H of 4 × 4 matrices over C. The scalar product in the


Hilbert space H is given by

hA, Bi := tr(AB ∗ ), A, B ∈ H.

Hint. Start with hermitian 2 × 2 matrices and then use the Kronecker
product.

Problem 5. Consider the 4 × 4 matrices


0 0 0 1
 
0 0 1 0
α1 =   = σ1 ⊗ σ1
0 1 0 0
1 0 0 0
0 0 0 −i
 
0 0 i 0 
α2 =   = σ1 ⊗ σ2
0 −i 0 0
i 0 0 0
0 0 1 0
 
 0 0 0 −1 
α3 =   = σ1 ⊗ σ3 .
1 0 0 0
0 −1 0 0

Let a = (a1 , a2 , a3 ), b = (b1 , b2 , b3 ), c = (c1 , c2 , c3 ), d = (d1 , d2 , d3 ) be


elements in R3 and

a · α := a1 α1 + a2 α2 + a3 α3 .

Calculate the traces

tr((a · α)(b · α)), tr((a · α)(b · α)(c · α)(d · α)).

Problem 6. Given the orthonormal basis


 iφ   
e cos(θ) − sin(θ)
x1 = , x2 =
sin θ e−iφ cos θ

in the vector space C2 . Use this orthonormal basis to find an orthonormal


basis in C4 .

Problem 7. Let A be an m × m matrix and B be an n × n matrix. The


underlying field is C. Let Im , In be the m × m and n × n unit matrix,
respectively.
(i) Show that tr(A ⊗ B) = tr(A)tr(B).
(ii) Show that tr(A ⊗ In + Im ⊗ B) = ntr(A) + mtr(B).

Problem 8. Let A be an arbitrary n × n matrix over C. Show that

exp(A ⊗ In ) ≡ exp(A) ⊗ In . (1)

Problem 9. Let A, B be arbitrary n × n matrices over C. Let In be the


n × n unit matrix. Show that

exp(A ⊗ In + In ⊗ B) ≡ exp(A) ⊗ exp(B).

Problem 10. Let A and B be arbitrary n × n matrices over C. Prove or


disprove the equation
eA⊗B = eA ⊗ eB .

Problem 11. Let A be an m × m matrix and B be an n × n matrix. The


underlying field is C. The eigenvalues and eigenvectors of A are given by
λ1 , λ2 , . . . , λm and u1 , u2 , . . . , um . The eigenvalues and eigenvectors of
B are given by µ1, µ2, . . . , µn and v1, v2, . . . , vn. Let ε1, ε2 and ε3 be
real parameters. Find the eigenvalues and eigenvectors of the matrix
ε1 A ⊗ B + ε2 A ⊗ In + ε3 Im ⊗ B.

Problem 12. Let A, B be n × n matrices over C. A scalar product can


be defined as
hA, Bi := tr(AB ∗ ).
The scalar product implies a norm

kAk2 = hA, Ai = tr(AA∗ ).

This norm is called the Hilbert-Schmidt norm.

(i) Consider the Dirac matrices

1 0 0 0 0 0 0 1
   
0 1 0 0   0 0 1 0
γ0 :=  , γ1 :=  .
0 0 −1 0 0 −1 0 0
0 0 0 −1 −1 0 0 0

Calculate hγ0 , γ1 i.
(ii) Let U be a unitary n × n matrix. Find hU A, U Bi.
(iii) Let C, D be m × m matrices over C. Find hA ⊗ C, B ⊗ Di.

Problem 13. Let T be the 4 × 4 matrix


 
T := I2 ⊗ I2 + Σ_{j=1}^{3} tj σj ⊗ σj

where σj , j = 1, 2, 3 are the Pauli spin matrices and −1 ≤ tj ≤ +1,


j = 1, 2, 3. Find T 2 .

Problem 14. Let U be a 2 × 2 unitary matrix and I2 be the 2 × 2 identity


matrix. Is the 4 × 4 matrix
   iα 
0 0 e 0
V = ⊗U + ⊗ I2 , α∈R
0 1 0 0

unitary?

Problem 15. Let


 
x1
x= , x1 x∗1 + x2 x∗2 = 1
x2

be an arbitrary normalized vector in C2 . Can we construct a 4 × 4 unitary


matrix U such that
       
x1 1 x1 x1
U ⊗ = ⊗ ? (1)
x2 0 x2 x2

Prove or disprove this equation.

Problem 16. Let Aj (j = 1, 2, . . . , k) be matrices of size mj × nj . We


introduce the notation

⊗_{j=1}^{k} Aj = (⊗_{j=1}^{k−1} Aj) ⊗ Ak = A1 ⊗ A2 ⊗ · · · ⊗ Ak.

Consider the binary matrices


       
1 0 0 0 0 1 0 0
J00 = , J10 = , J01 = , J11 = .
0 0 1 0 0 0 0 1

(i) Calculate
⊗_{j=1}^{k} (J00 + J01 + J11)
for k = 1, k = 2, k = 3 and k = 8. Give an interpretation of the result when
each entry in the matrix represents a pixel (1 for black and 0 for white).
This means we use the Kronecker product for representing images.

(ii) Calculate
 
(⊗_{j=1}^{k} (J00 + J01 + J10 + J11)) ⊗ ( 0 1 ; 1 0 )

for k = 2 and give an interpretation as an image, i.e., each entry 0 is


identified with a black pixel and an entry 1 with a white pixel. Discuss the
case for arbitrary k.

Problem 17. Consider the Pauli spin matrices σ = (σ1 , σ2 , σ3 ). Let q,


r, s, t be unit vectors in R3 . We define

Q := q · σ, R := r · σ, S := s · σ, T := t · σ

where q · σ := q1 σ1 + q2 σ2 + q3 σ3 . Calculate

(Q ⊗ S + R ⊗ S + R ⊗ T − Q ⊗ T )2 .

Express the result using commutators.

Problem 18. Let A and X be n × n matrices over C. Assume that

[X, A] = 0n .

Calculate the commutator [X ⊗ In + In ⊗ X, A ⊗ A].

Problem 19. A square matrix is called a stochastic matrix if each entry


is nonnegative and the sum of the entries in each row is 1. Let A, B be
n × n stochastic matrices. Is A ⊗ B a stochastic matrix?

Problem 20. Let X be an m × m and Y be an n × n matrix. The direct


sum is the (m + n) × (m + n) matrix
 
X 0
X ⊕Y = .
0 Y

Let A be an n × n matrix, B be an m × m matrix and C be an p × p matrix.


Then we have the identity

(A ⊕ B) ⊗ C ≡ (A ⊗ C) ⊕ (B ⊗ C).

Is
A ⊗ (B ⊕ C) = (A ⊗ B) ⊕ (A ⊗ C)
true?

Problem 21. Let A, B be 2 × 2 matrices, C a 3 × 3 matrix and D a 1 × 1


matrix. Find the condition on these matrices such that

A⊗B =C ⊕D

where ⊕ denotes the direct sum. We assume that D is nonzero.

Problem 22. With each m × n matrix Y we associate the column vector


vecY of length m × n defined by

vec(Y ) := (y11 , . . . , ym1 , y12 , . . . , ym2 , . . . , y1n , . . . , ymn )T .

Let A be an m × n matrix, B an p × q matrix, and C an m × q matrix. Let


X be an unknown n × p matrix. Show that the matrix equation

AXB = C

is equivalent to the system of qm equations in np unknowns given by

(B T ⊗ A)vec(X) = vec(C).

that is, vec(AXB) = (B T ⊗ A)vecX.
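The identity vec(AXB) = (B^T ⊗ A)vec(X) can also be checked numerically. The following C++ sketch is illustrative only (the matrices A, X, B below are arbitrary test data, not taken from the text); it builds the Kronecker product, forms vec by stacking columns, and compares the two sides.

#include <iostream>
#include <vector>
using namespace std;
typedef vector<vector<double>> Mat;

Mat mul(const Mat& A, const Mat& B) {           // ordinary matrix product
    size_t m = A.size(), p = B.size(), q = B[0].size();
    Mat C(m, vector<double>(q, 0.0));
    for (size_t i = 0; i < m; ++i)
        for (size_t k = 0; k < p; ++k)
            for (size_t j = 0; j < q; ++j)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

Mat kron(const Mat& A, const Mat& B) {          // Kronecker product A ⊗ B
    size_t m = A.size(), n = A[0].size(), p = B.size(), q = B[0].size();
    Mat K(m*p, vector<double>(n*q, 0.0));
    for (size_t i = 0; i < m; ++i)
        for (size_t j = 0; j < n; ++j)
            for (size_t r = 0; r < p; ++r)
                for (size_t s = 0; s < q; ++s)
                    K[i*p + r][j*q + s] = A[i][j] * B[r][s];
    return K;
}

vector<double> vec(const Mat& Y) {              // stack the columns of Y
    vector<double> v;
    for (size_t j = 0; j < Y[0].size(); ++j)
        for (size_t i = 0; i < Y.size(); ++i)
            v.push_back(Y[i][j]);
    return v;
}

Mat transpose(const Mat& A) {
    Mat T(A[0].size(), vector<double>(A.size()));
    for (size_t i = 0; i < A.size(); ++i)
        for (size_t j = 0; j < A[0].size(); ++j)
            T[j][i] = A[i][j];
    return T;
}

int main() {
    Mat A = {{1, 2, 0}, {3, -1, 4}};            // 2 x 3
    Mat X = {{1, 0}, {2, 1}, {-1, 3}};          // 3 x 2
    Mat B = {{2, 1}, {0, 1}};                   // 2 x 2

    vector<double> lhs = vec(mul(mul(A, X), B));        // vec(AXB)
    Mat K = kron(transpose(B), A);                      // B^T ⊗ A
    vector<double> vx = vec(X);
    vector<double> rhs(K.size(), 0.0);                  // (B^T ⊗ A) vec(X)
    for (size_t i = 0; i < K.size(); ++i)
        for (size_t j = 0; j < K[0].size(); ++j)
            rhs[i] += K[i][j] * vx[j];

    for (size_t i = 0; i < lhs.size(); ++i)
        cout << lhs[i] << " " << rhs[i] << "\n";        // the two columns agree
    return 0;
}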

Problem 23. Let A, B, D be n × n matrices and In the n × n identity


matrix. Use the result from the problem above to prove that

AX + XB = D

can be written as

((In ⊗ A) + (B T ⊗ In ))vecX = vec(D). (1)

Problem 24. Let A be an n × n matrix and Im be the m × m identity


matrix. Show that
sin(A ⊗ Im ) ≡ sin(A) ⊗ Im . (1)

Problem 25. Let A be an n × n matrix and B be an m × m matrix. Is

sin(A ⊗ Im + In ⊗ B) ≡ (sin(A)) ⊗ (cos(B)) + (cos(A)) ⊗ (sin(B))? (1)

Prove or disprove.

Problem 26. Let σ1 , σ2 , σ3 be the Pauli spin matrices.


(i) Find

R1x (α) := exp(−iα(σ1 ⊗ I2 )), R1y (α) := exp(−iα(σ2 ⊗ I2 ))



where α ∈ R and I2 denotes the 2 × 2 unit matrix.


(ii) Consider the special case R1x (α = π/2) and R1y (α = π/4). Calculate
R1x (π/2)R1y (π/4). Discuss.

Problem 27. Let x, y ∈ R2 . Find a 4 × 4 matrix A (flip operator) such


that
A(x ⊗ y) = y ⊗ x.

Problem 28. Let σ1 , σ2 and σ3 be the Pauli spin matrices. We define


σ+ := σ1 + iσ2 and σ− := σ1 − iσ2 . Let
 
c_k^* := σ3 ⊗ σ3 ⊗ · · · ⊗ σ3 ⊗ (1/2)σ+ ⊗ I2 ⊗ I2 ⊗ · · · ⊗ I2

where σ+ is on the kth position and we have N − 1 Kronecker products.


Thus c∗k is a 2N × 2N matrix.
(i) Find ck .
(ii) Find the anticommutators [ck , cj ]+ and [c∗k , cj ]+ .
(iii) Find ck ck and c∗k c∗k .

Problem 29. Using the definitions from the previous problem we define
s_{−,j} := (1/2)(σ_{x,j} − iσ_{y,j}) = (1/2)σ_{−,j},    s_{+,j} := (1/2)(σ_{x,j} + iσ_{y,j}) = (1/2)σ_{+,j}
and
c1 = s_{−,1}
cj = exp(iπ Σ_{ℓ=1}^{j−1} s_{+,ℓ} s_{−,ℓ}) s_{−,j}    for j = 2, 3, . . .

(i) Find c∗j .


(ii) Find the inverse transformation.
(iii) Calculate c∗j cj .

Problem 30. Let A, B, C, D be symmetric n × n matrices over R.


Assume that these matrices commute with each other. Consider the 4n×4n
matrix
A B C D
 
 −B A D −C 
H= .
−C −D A B
−D C −B A
(i) Calculate HH T and express the result using the Kronecker product.

(ii) Assume that A2 + B 2 + C 2 + D2 = 4nIn .

Problem 31. Can the 4 × 4 matrix


1 0 0 1
 
0 1 1 0 
C=
0 1 −1 0

1 0 0 −1

be written as the Kronecker product of two 2 × 2 matrices A and B, i.e.


C = A ⊗ B?

Problem 32. Let A, B, C be n × n matrices. Assume that

[A, B] = 0n , [A, C] = 0n .

Let

X := In ⊗ A + A ⊗ In , Y := In ⊗ B + B ⊗ In + A ⊗ C.

Calculate the commutator [X, Y ].

Problem 33. Let x, y, z ∈ Rn . We define a wedge product

x ∧ y := x ⊗ y − y ⊗ x.

Show that
(x ∧ y) ∧ z + (z ∧ x) ∧ y + (y ∧ z) ∧ x = 0. (1)

Problem 34. Let V and W be the unitary matrices

V = exp(i(π/4)σ1 ) ⊗ exp(i(π/4)σ1 )
W = exp(i(π/4)σ2 ) ⊗ exp(i(π/4)σ2 ).

Calculate
V ∗ (σ3 ⊗ σ3 )V, W ∗ (σ3 ⊗ σ3 )W.
Chapter 10

Norms and Scalar


Products

Problem 1. Consider the vector (v ∈ C4 )

i

 1 
v= .
−1
−i

Find the Euclidean norm and then normalize the vector.

Problem 2. Consider the 4 × 4 matrix (Hamilton operator)


Ĥ = (ħω/2)(σ1 ⊗ σ1 − σ2 ⊗ σ2)

where ω is the frequency and ~ is the Planck constant divided by 2π. Find
the norm of Ĥ, i.e.,

kĤk := max_{kxk=1} kĤxk,    x ∈ C4

applying two different methods. In the first method apply the Lagrange
multiplier method, where the constraint is kxk = 1. In the second method
we calculate Ĥ ∗ Ĥ and find the square root of the largest eigenvalue. This
is then kĤk. Note that Ĥ ∗ Ĥ is positive semi-definite.


Problem 3. Let A be an n × n matrix over R. The spectral norm is


kAk2 := max_{x≠0} kAxk2 / kxk2 .

It can be shown that kAk2 can also be calculated as


kAk2 = √(largest eigenvalue of A^T A).

Note that the eigenvalues of AT A are real and nonnegative. Let


 
2 5
A= .
1 3

Calculate kAk2 using this method.

Problem 4. Consider the vectors


     
1 1 1
x1 =  1  , x2 =  −1  , x3 =  −1 
1 1 −1

in R3 .
(i) Show that the vectors are linearly independent.
(ii) Apply the Gram-Schmidt orthonormalization process to these vectors.
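A C++ sketch of the Gram-Schmidt process applied to these three vectors may be useful (an illustration only; the array layout and the output format are choices made here): each vector has its projections onto the previously orthonormalized vectors subtracted and is then normalized.

#include <iostream>
#include <cmath>
using namespace std;

int main() {
    // The three vectors of the problem, stored as rows
    double v[3][3] = {{1, 1, 1}, {1, -1, 1}, {1, -1, -1}};
    double u[3][3];

    // Classical Gram-Schmidt followed by normalization
    for (int k = 0; k < 3; ++k) {
        for (int i = 0; i < 3; ++i) u[k][i] = v[k][i];
        for (int j = 0; j < k; ++j) {
            double dot = 0.0;
            for (int i = 0; i < 3; ++i) dot += v[k][i] * u[j][i];
            for (int i = 0; i < 3; ++i) u[k][i] -= dot * u[j][i];  // u[j] already has norm 1
        }
        double nrm = 0.0;
        for (int i = 0; i < 3; ++i) nrm += u[k][i] * u[k][i];
        nrm = sqrt(nrm);
        for (int i = 0; i < 3; ++i) u[k][i] /= nrm;
    }

    for (int k = 0; k < 3; ++k)
        cout << u[k][0] << " " << u[k][1] << " " << u[k][2] << "\n";
    return 0;
}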

Problem 5. Let { vj : j = 1, 2, . . . , r} be an orthogonal set of vectors


in Rn with r ≤ n. Show that
k Σ_{j=1}^{r} vj k² = Σ_{j=1}^{r} kvj k².

Problem 6. Consider the 2 × 2 matrix over C


 
a11 a12
A= .
a21 a22

Find the norm of A implied by the scalar product


√(hA, Ai) = √(tr(AA^*)).

Problem 7. Let A, B be n × n matrices over C. A scalar product is


given by
hA, Bi = tr(AB ∗ ).

Let U be a unitary n × n matrix, i.e. we have U −1 = U ∗ .


(i) Calculate hU, U i. Then find the norm implied by the scalar product.
(ii) Calculate
kU k := max_{kxk=1} kU xk.

Problem 8. (i) Let { xj : j = 1, 2, . . . , n } be an orthonormal basis in


Cn . Let { yj : j = 1, 2, . . . , n } be another orthonormal basis in Cn . Show
that
(Ujk ) := (x∗j yk )
is a unitary matrix, where x∗j yk is the scalar product of the vectors xj and
yk . This means showing that U U ∗ = In .
(ii) Consider the bases in C2
   
x1 = (1/√2) ( 1 ; 1 ),    x2 = (1/√2) ( 1 ; −1 )
and
y1 = (1/√2) ( 1 ; i ),    y2 = (1/√2) ( 1 ; −i ).
Use these bases to construct the corresponding 2 × 2 unitary matrix.
Problem 9. Find the norm kAk = √(tr(A^*A)) of the skew-hermitian
matrix  
i 2+i
A=
−2 + i 3i
without calculating A∗ .

Problem 10. Consider the Hilbert space H of the 2 × 2 matrices over


the complex numbers with the scalar product

hA, Bi := tr(AB ∗ ), A, B ∈ H.

Show that the rescaled Pauli matrices µj = (1/√2) σj, j = 1, 2, 3,
µ1 = (1/√2) ( 0 1 ; 1 0 ),    µ2 = (1/√2) ( 0 −i ; i 0 ),    µ3 = (1/√2) ( 1 0 ; 0 −1 ),
plus the rescaled 2 × 2 identity matrix
µ0 = (1/√2) ( 1 0 ; 0 1 )

form an orthonormal basis in the Hilbert space H.



Problem 11. Let A and B be 2 × 2 diagonal matrices over R. Assume


that
tr(AAT ) = tr(BB T )
and
max_{kxk=1} kAxk = max_{kxk=1} kBxk.

Can we conclude that A = B?

Problem 12. Let A be an n × n matrix over C. Let k.k be a subordinate


matrix norm for which kIn k = 1. Assume that kAk < 1.
(i) Show that the matrix (In − A) is nonsingular.
(ii) Show that
k(In − A)−1 k ≤ (1 − kAk)−1 .

Problem 13. Let A be an n × n matrix. Assume that kAk < 1. Show


that
k(In − A)^{−1} − In k ≤ kAk / (1 − kAk).

Problem 14. Let A be an n × n nonsingular matrix and B an n × n


matrix. Assume that kA−1 Bk < 1.
(i) Show that A − B is nonsingular.
(ii) Show that

kA^{−1} − (A − B)^{−1}k / kA^{−1}k ≤ kA^{−1}Bk / (1 − kA^{−1}Bk).

Problem 15. Let A be an invertible n × n matrix over R. Consider the


linear system Ax = b. The condition number of A is defined as

Cond(A) := kAk kA−1 k.

Find the condition number for the matrix


 
1 0.9999
A=
0.9999 1

for the infinity norm, 1-norm and 2-norm.

Problem 16. Let A, B be n × n matrices over R and t ∈ R. Let k k be


a matrix norm. Show that

ketA etB − In k ≤ exp(|t|(kAk + kBk)) − 1.



Problem 17. Let A1 , A2 , . . . , Ap be m × m matrices over C. Then we


have the inequality
 2
p p
X 2 X
k exp( Aj ) − (eA1 /n · · · eAp /n )n k ≤  kAj k
j=1
n j=1
 
p
n+2 X
× exp  kAj k
n j=1

and
Xp
lim (eA1 /n eA2 /n · · · eAp /n )n = exp( Aj ).
n→∞
j=1

Let p = 2. Find the estimate for the 2 × 2 matrices


   
0 1 1 0
A1 = , A2 = .
1 0 0 2
Chapter 11

Groups and Matrices

Problem 1. Find the group generated by


   
0 1 1 0
A= , B= .
1 0 0 −1

Problem 2. (i) Show that the set of matrices


   √   √ 
1 0 −1/2 − 3/2
√ −1 −1/2
√ 3/2
E= , C3 = , C3 =
0 1 3/2 −1/2 − 3/2 −1/2
   √   √ 
1 0 −1/2
√ − 3/2 −1/2 3/2
σ1 = , σ2 = , σ3 = √
0 −1 − 3/2 1/2 3/2 1/2
form a group G under matrix multiplication, where C3−1 is the inverse
matrix of C3 .
(ii) Find the determinant of all these matrices. Does the set of numbers

{ det(E), det(C3 ), det(C3−1 ), det(σ1 ), det(σ2 ), det(σ3 ) }

form a group under multiplication.


(iii) Find two proper subgroups.
(iv) Find the right coset decomposition. Find the left coset decomposition.
We obtain the right coset decomposition as follows: Let G be a finite group
of order g having a proper subgroup H of order h. Take some element g2 of
G which does not belong to the subgroup H, and make a right coset Hg2 .
If H and Hg2 do not exhaust the group G, take some element g3 of G which


is not an element of H and Hg2 , and make a right coset Hg3 . Continue
making right cosets Hgj in this way. If G is a finite group, all elements
of G will be exhausted in a finite number of steps and we obtain the right
coset decomposition.

Problem 3. We know that the set of matrices


   √   √ 
E = ( 1 0 ; 0 1 ),    C3 = ( −1/2 −√3/2 ; √3/2 −1/2 ),    C3^{−1} = ( −1/2 √3/2 ; −√3/2 −1/2 )
σ1 = ( 1 0 ; 0 −1 ),    σ2 = ( −1/2 −√3/2 ; −√3/2 1/2 ),    σ3 = ( −1/2 √3/2 ; √3/2 1/2 )
forms a group G under matrix multiplication, where C3−1 is the inverse
matrix of C3 . The set of matrices (3 × 3 permutation matrices)
   
1 0 0 0 0 1
I = P0 =  0 1 0  , P1 =  1 0 0  ,
0 0 1 0 1 0
   
0 1 0 1 0 0
P2 =  0 0 1  , P3 =  0 0 1  ,
1 0 0 0 1 0
   
0 0 1 0 1 0
P4 =  0 1 0  , P5 =  1 0 0 
1 0 0 0 0 1
also forms a group G under matrix multiplication. Are the two groups
isomorphic? A homomorphism which is 1 − 1 and onto is an isomorphism.

Problem 4. (i) Show that the matrices


   
1 0 0 0 0 1
A = 0 1 0, B = 0 1 0,
0 0 1 1 0 0
   
−1 0 0 0 0 −1
C =  0 −1 0  , D =  0 −1 0 
0 0 −1 −1 0 0
form a group under matrix multiplication.
(ii) Show that the matrices
1 0 0 0 0 0 1 0
   
0 1 0 0 0 1 0 0
X= , Y = ,
0 0 1 0 1 0 0 0
0 0 0 1 0 0 0 1

−1 0 0 0 0 0 −1 0
   
 0 −1 0 0   0 −1 0 0 
V = , W =
0 0 −1 0 −1 0 0 0

0 0 0 −1 0 0 0 −1
form a group under matrix multiplication.
(iii) Show that the two groups (so-called Vierergruppe) are isomorphic.

Problem 5. (i) Let x ∈ R. Show that the 2 × 2 matrices


 
1 x
A(x) =
0 1
form a group under matrix multiplication.
(ii) Is the group commutative?
(iii) Find a group that is isomorphic to this group.

Problem 6. Let a, b, c, d ∈ Z. Show that the 2 × 2 matrices


 
a b
A=
c d
with ad − bc = 1 form a group under matrix multiplication.

Problem 7. The Lie group SU (2) is defined by


SU (2) := { U 2 × 2 matrix : U U ∗ = I2 , det U = 1 }.
Let (3-sphere)
S 3 := { (x1 , x2 , x3 , x4 ) ∈ R4 : x21 + x22 + x23 + x24 = 1 }.
Show that SU (2) can be identified as a real manifold with the 3-sphere S 3 .

Problem 8. Let σ1 , σ2 , σ3 be the Pauli spin matrices. Let


U (α, β, γ) = e−iασ3 /2 e−iβσ2 /2 e−iγσ3 /2
where α, β, γ are the three Euler angles with the range 0 ≤ α < 2π,
0 ≤ β ≤ π and 0 ≤ γ < 2π. Show that
U(α, β, γ) = ( e^{−iα/2} cos(β/2) e^{−iγ/2}   −e^{−iα/2} sin(β/2) e^{iγ/2} ; e^{iα/2} sin(β/2) e^{−iγ/2}   e^{iα/2} cos(β/2) e^{iγ/2} ).    (1)

Problem 9. The Heisenberg group is the set of upper 3 × 3 matrices of


the form  
1 a c
H = 0 1 b
0 0 1

where a, b, c can be taken from some (arbitrary) commutative ring.


(i) Find the inverse of H.
(ii) Given two elements x, y of a group G, we define the commutator of x
and y, denoted by [x, y] to be the element x−1 y −1 xy. If a, b, c are integers
(in the ring Z of the integers) we obtain the discrete Heisenberg group H3 .
It has two generators
   
1 1 0 1 0 0
x = 0 1 0, y = 0 1 1.
0 0 1 0 0 1

Find
z = xyx−1 y −1 .
Show that xz = zx and yz = zy, i.e., z is the generator of the center of H3 .
(iii) The derived subgroup (or commutator subgroup) of a group G is the
subgroup [G, G] generated by the set of commutators of every pair of ele-
ments of G. Find [G, G] for the Heisenberg group.
(iv) Let  
0 a c
A = 0 0 b
0 0 0
and a, b, c ∈ R. Find exp(A).
(v) The Heisenberg group is a simple connected Lie group whose Lie algebra
consists of matrices  
0 a c
L = 0 0 b.
0 0 0
Find the commutators [L, L′] and [[L, L′], L′], where [L, L′] := LL′ − L′L.

Problem 10. Define

M : R3 → V := { a · σ : a ∈ R3 } ⊂ { 2 × 2 complex matrices }
a → M (a) = a · σ = a1 σ1 + a2 σ2 + a3 σ3 .

This is a linear bijection between R3 and V . Each U ∈ SU (2) determines


a linear map S(U ) on R3 by

M (S(U )a) = U −1 M (a)U.

The right-hand side is clearly linear in a. Show that U −1 M (a)U is in V ,


that is, of the form M (b).

Problem 11. A topological group G is both a group and a topological


space, the two structures are related by the requirement that the maps

x 7→ x−1 (of G onto G) and (x, y) 7→ xy (of G × G onto G) are continuous.


G × G is given by the product topology.
(i) Given a topological group G, define the maps

φ(x) := xax−1

and
ψ(x) := xax−1 a−1 ≡ [x, a].
How are the iterates of the maps φ and ψ related?
(ii) Consider G = SO(2) and
   
cos(α) − sin(α) 0 1
x= , a=
sin(α) cos(α) −1 0

with x, a ∈ SO(2). Calculate φ and ψ. Discuss.

Problem 12. Show that the matrices


   
1 1 1 −1
,
0 1 0 1

are conjugate in SL(2, C) but not in SL(2, R) (the real matrices in SL(2, C)).

Problem 13. (i) Let G be a finite set of real n × n matrices { Aj },


1 ≤ j ≤ r, which forms a group under matrix multiplication. Suppose that
tr(Σ_{j=1}^{r} Aj) = Σ_{j=1}^{r} tr(Aj) = 0

where tr denotes the trace. Show that


Σ_{j=1}^{r} Aj = 0n.

(ii) Show that the 2 × 2 matrices


     
1 0 ω 0 ω2 0
B1 = , B2 = , B3 =
0 1 0 ω2 0 ω
     
0 1 0 ω 0 ω2
B4 = , B5 = , B6 =
1 0 ω2 0 ω 0
form a group under matrix multiplication, where

ω := exp(2πi/3).

(iii) Show that


6
X
tr(Bj ) = 0.
j=1

Problem 14. The unitary matrices are elements of the Lie group U (n).
The corresponding Lie algebra u(n) is the set of matrices with the condition

X ∗ = −X.

An important subgroup of U (n) is the Lie group SU (n) with the condition
that det U = 1. The unitary matrices
   
(1/√2) ( 1 1 ; 1 −1 ),    ( 0 1 ; 1 0 )

are not elements of the Lie group SU (2) since the determinants of these
unitary matrices are −1. The corresponding Lie algebra su(n) of the Lie
group SU (n) are the n × n matrices given by

X ∗ = −X, tr(X) = 0.

Let σ1 , σ2 , σ3 be the Pauli spin matrices. Then any unitary matrix in U (2)
can be represented by

U (α, β, γ, δ) = eiαI2 e−iβσ3 /2 e−iγσ2 /2 e−iδσ3 /2

where 0 ≤ α < 2π, 0 ≤ β < 2π, 0 ≤ γ ≤ π and 0 ≤ δ < 2π. Calculate the
right-hand side.

Problem 15. Given an orthonormal basis (column vectors) in CN de-


noted by
x0 , x1 , . . . , xN −1 .
(i) Show that
U := Σ_{k=0}^{N−2} x_k x_{k+1}^* + x_{N−1} x_0^*
is a unitary matrix.
(ii) Find tr(U ).
(iii) Find U N .
(iv) Does U depend on the chosen basis? Prove or disprove.
Hint. Consider N = 2, the standard basis (1, 0)T , (0, 1)T and the basis
(1/√2)(1, 1)^T, (1/√2)(1, −1)^T.
(v) Show that the set
{ U, U 2 , . . . , U N }

forms a commutative group (abelian group) under matrix multiplication.


The set is a subgroup of the group of all permutation matrices.
(vi) Assume that the set given above is the standard basis. Show that the
matrix U is given by

0 1 0 ... 0
 
0 0 1 ... 0
. . . .. .. 
U = . . .
. . . . ..
0 0 0 ... 1
1 0 0 ... 0

Problem 16. (i) Let

M := (1/√2) ( 1 i 0 0 ; 0 0 i 1 ; 0 0 i −1 ; 1 −i 0 0 ).

Is the matrix M unitary?


(ii) Let    
U_H := (1/√2) ( 1 1 ; 1 −1 ),    U_S := ( 1 0 ; 0 i )
and
1 0 0 0
 
0 0 0 1
UCN OT 2 = .
0 0 1 0
0 1 0 0
Show that the matrix M can be written as

M = UCN OT 2 (I2 ⊗ UH )(US ⊗ US ).

(iii) Let SO(4) be the special orthogonal Lie group. Let SU (2) be the
special unitary Lie group. Show that for every real orthogonal matrix U ∈
SO(4), the matrix M U M −1 is the Kronecker product of two 2-dimensional
special unitary matrices, i.e.,

M U M −1 ∈ SU (2) ⊗ SU (2).

Problem 17. Sometimes we parametrize the group elements of the three


parameter group SO(3) in terms of the Euler angles ψ, θ, φ

A(ψ, θ, φ) =
 
cos(φ) cos θ cos ψ − sin φ sin ψ − cos(φ) cos θ sin ψ − sin φ cos ψ cos φ sin θ
 sin(φ) cos θ cos ψ + cos φ sin ψ − sin(φ) cos θ sin ψ + cos φ cos ψ sin φ sin θ 
− sin(θ) cos(ψ) sin θ sin ψ cos(θ)
with the parameters falling in the intervals

−π ≤ ψ < π, 0 ≤ θ ≤ π, −π ≤ φ < π.

Describe the shortcomings this parametrization suffers.

Problem 18. The octonion algebra O is an 8-dimensional non-associative


algebra. It is defined in terms of the basis elements eµ (µ = 0, 1, . . . , 7) and
their multiplication table. e0 is the unit element. We use greek indices
(µ, ν, . . .) to include the 0 and latin indices (i, j, k, . . .) when we exclude the
0. We define
êk := e4+k for k = 1, 2, 3.
The multiplication rules among the basis elements of octonions eµ are given
by
e_i e_j = −δ_{ij} e_0 + Σ_{k=1}^{3} ε_{ijk} e_k,    i, j, k = 1, 2, 3    (1)
and
−e_4 e_i = e_i e_4 = ê_i,    e_4 ê_i = −ê_i e_4 = e_i,    e_4 e_4 = −e_0
ê_i ê_j = −δ_{ij} e_0 − Σ_{k=1}^{3} ε_{ijk} e_k,    i, j, k = 1, 2, 3
−ê_j e_i = e_i ê_j = −δ_{ij} e_4 − Σ_{k=1}^{3} ε_{ijk} ê_k,    i, j, k = 1, 2, 3
where δ_{ij} is the Kronecker delta and ε_{ijk} is +1 if (ijk) is an even permutation
of (123), −1 if (ijk) is an odd permutation of (123) and 0 otherwise.
We can formally summarize the multiplications as
e_µ e_ν = g_{µν} e_0 + Σ_{k=1}^{7} γ_{µν}^k e_k
where
g_{µν} = diag(1, −1, −1, −1, −1, −1, −1, −1),    γ_{ij}^k = −γ_{ji}^k

with µ, ν = 0, 1, . . . , 7, and i, j, k = 1, 2, . . . , 7.
(i) Show that the set { e0 , e1 , e2 , e3 } is a closed associative subalgebra.
(ii) Show that the octonian algebra O is non-associative.

Problem 19. Consider the set


    
1 0 0 1
e= , a= .
0 1 1 0
Then under matrix multiplication we have a group. Consider the set

{ e ⊗ e, e ⊗ a, a ⊗ e, a ⊗ a }.

Does this set form a group under matrix multiplication, where ⊗ denotes
the Kronecker product?

Problem 20. Let  


0 1
J := .
−1 0
(i) Find all 2 × 2 matrices A over R such that

AT JA = J.

(ii) Do these 2 × 2 matrices form a group under matrix multiplication?

Problem 21. Let J be the 2n × 2n matrix


 
0n In
J :=
−In 0n
where In is the n × n identity matrix and 0n is the n × n zero matrix. Show
that the 2n × 2n matrices A satisfying

AT JA = J

form a group under matrix multiplication. This group is called the sym-
plectic group Sp(2n).

Problem 22. We consider the following subgroups of the Lie group


SL(2, R). Let
  
K := { ( cos(θ) −sin(θ) ; sin(θ) cos(θ) ) : θ ∈ [0, 2π) }
A := { ( r^{1/2} 0 ; 0 r^{−1/2} ) : r > 0 }
N := { ( 1 t ; 0 1 ) : t ∈ R }.

It can be shown that any matrix m ∈ SL(2, R) can be written in a unique


way as the product m = kan with k ∈ K, a ∈ A and n ∈ N . This decompo-
sition is called Iwasawa decomposition and has a natural generalization to

SL(n, R), n ≥ 3. The notation of the subgroups comes from the fact that
K is a compact subgroup, A is an abelian subgroup and N is a nilpotent
subgroup of SL(2, R). Find the Iwasawa decomposition of the matrix
( √2 1 ; 1 √2 ).

Problem 23. Let GL(m, C) be the general linear group over C. This
Lie group consists of all nonsingular m × m matrices. Let G be a Lie
subgroup of GL(m, C). Suppose u1 , u2 , . . . , un is a coordinate system on
G in some neighborhood of Im , the m × m identity matrix, and that
X(u1 , u2 , . . . , um ) is a point in this neighborhood. The matrix dX of dif-
ferential one-forms contains n linearly independent differential one-forms
since the n-dimensional Lie group G is smoothly embedded in GL(m, C).
Consider the matrix of differential one forms
Ω := X −1 dX, X ∈ G.
The matrix Ω of differential one forms contains n-linearly independent ones.
(i) Let A be any fixed element of G. The left-translation by A is given by
X → AX.
−1
Show that Ω = X dX is left-invariant.
(ii) Show that
dΩ + Ω ∧ Ω = 0
where ∧ denotes the exterior product for matrices, i.e. we have matrix
multiplication together with the exterior product. The exterior product is
linear and satisfies
duj ∧ duk = −duk ∧ duj .
Therefore duj ∧ duj = 0 for j = 1, 2, . . . , n. The exterior product is also
associative.
(iii) Find dX −1 using XX −1 = Im .

Problem 24. Consider GL(m, R) and a Lie subgroup of it. We interpret


each element X of G as a linear transformation on the vector space Rm of
row vectors v = (v1 , v2 , . . . , vn ). Thus
v → w = vX.
Show that dw = wΩ.

Problem 25. Consider the Lie group SO(2) consisting of the matrices
 
cos(u) − sin(u)
X= .
sin(u) cos(u)

Calculate dX and X −1 dX.

Problem 26. Let n be the dimension of the Lie group G. Since the vector
space of differential one-forms at the identity element is an n-dimensional
vector space, there are exactly n linearly independent left invariant differ-
ential one-forms in G. Let σ1 , σ2 , . . . , σn be such a system. Consider the
Lie group   
u1 u2
G := : u1 , u2 ∈ R, u1 > 0 .
0 1
Let  
u1 u2
X= .
0 1
(i) Find X −1 and X −1 dX. Calculate the left-invariant differential one-
forms. Calculate the left-invariant volume element.
(ii) Find the right-invariant forms.

Problem 27. Consider the Lie group consisting of the matrices


 
u1 u2
X= , u1 , u2 ∈ R, u1 > 0.
0 u1

Calculate X −1 and X −1 dX. Find the left-invariant differential one-forms


and the left-invariant volume element.

Problem 28. Find the group generated by the permutation matrix

0 0 0 1
 
1 0 0 0
P =
0 0 1 0

0 1 0 0

under matrix multiplication.

Problem 29. Find the group generated by the two permutation matrices

1 0 0 0 0 0 0 1
   
0 0 1 0 0 1 0 0
P1 =  , P2 =  .
0 1 0 0 0 0 1 0
0 0 0 1 1 0 0 0
Chapter 12

Lie Algebras and Matrices

Problem 1. Consider the n × n matrices Eij having 1 in the (i, j) posi-


tion and 0 elsewhere, where i, j = 1, 2, . . . , n. Calculate the commutator.
Discuss.

Problem 2. Show that the matrices


     
0 1 1 0 0 0
x= , h= , y=
0 0 0 −1 1 0

are the generators for a Lie algebra.

Problem 3. Consider the matrices


     
1 0 0 0 0 0 0 0 1
h1 =  0 0 0  , h2 =  0 1 0, h3 =  0 0 0
0 0 1 0 0 0 1 0 0

   
0 1 0 0 0 0
e = 0 0 0, f = 1 0 1.
0 1 0 0 0 0

Show that the matrices form a basis of a Lie algebra.

Problem 4. Let A, B be n × n matrices over C. Calculate tr([A, B]).


Discuss.


Problem 5. An n × n matrix X over C is skew-hermitian if X ∗ = −X.


Show that the commutator of two skew-hermitian matrices is again skew-
hermitian. Discuss.

Problem 6. The Lie algebra su(m) consists of all m × m matrices X over


C with the conditions X ∗ = −X (i.e. X is skew-hermitian) and trX = 0.
Note that exp(X) is a unitary matrix. Find a basis for su(3).

Problem 7. Any fixed element X of a Lie algebra L defines a linear


transformation

ad(X) : Z → [X, Z] for any Z ∈ L.

Show that for any K ∈ L we have

[ad(Y ), ad(Z)]K = ad([Y, Z])K.

The linear mapping ad gives a representation of the Lie algebra known as


adjoint representation.

Problem 8. There is only one non-commutative Lie algebra L of dimen-


sion 2. If x, y are the generators (basis in L), then

[x, y] = x.

(i) Find the adjoint representation of this Lie algebra. Let v, w be two
elements of a Lie algebra. Then we define

adv(w) := [v, w]

and wadv := [v, w].


(ii) The Killing form is defined by

κ(x, y) := tr(adx ady)

for all x, y ∈ L. Find the Killing form.

Problem 9. Consider the Lie algebra L = s`(2, F) with charF 6= 2. Take


as the standard basis for L the three matrices
     
0 1 0 0 1 0
x= , y= , h= .
0 0 1 0 0 −1

(i) Find the multiplication table, i.e. the commutators.


(ii) Find the adjoint representation of L with the ordered basis { x h y }.

(iii) Show that L is simple. If L has no ideals except itself and 0, and if
moreover [L, L] 6= 0, we call L simple. A subspace I of a Lie algebra L is
called an ideal of L if x ∈ L, y ∈ I together imply [x, y] ∈ I.

Problem 10. Consider the Lie algebra gl2 (R). The matrices
       
1 0 0 1 0 0 0 0
e1 = , e2 = , e3 = , e4 =
0 0 0 0 1 0 0 1

form a basis of gl2 (R). Find the adjoint representation.

Problem 11. Let { e, f } with


   
0 1 −1 0
e= , f=
0 0 0 0

a basis for a Lie algebra. We have [e, f ] = e. Is { e ⊗ I2 , f ⊗ I2 } a basis of


a Lie algebra? Here I2 denotes the 2 × 2 unit matrix and ⊗ the Kronecker
product.

Problem 12. Let { e, f } with


   
0 1 −1 0
e= , f=
0 0 0 0

a basis for a Lie algebra. We have [e, f ] = e. Is { e ⊗ e, e ⊗ f, f ⊗ e, f ⊗ f }


a basis of a Lie algebra?

Problem 13. The elements (generators) Z1 , Z2 , . . . , Zr of an r-dimensional


Lie algebra satisfy the conditions
[Zµ, Zν] = Σ_{τ=1}^{r} c_{µν}^τ Zτ

with cτµν = −cτνµ , where the cτµν ’s are called the structure constants. Let A
be an arbitrary linear combination of the elements
A = Σ_{µ=1}^{r} aµ Zµ.

Suppose that X is some other linear combination such that


X = Σ_{ν=1}^{r} bν Zν

and
[A, X] = ρX.
This equation has the form of an eigenvalue equation, where ρ is the cor-
responding eigenvalue and X the corresponding eigenvector. Assume that
the Lie algebra is represented by matrices. Find the secular equation for
the eigenvalues ρ.

Problem 14. Let cτσλ be the structure constants of a Lie algebra. We


define
g_{σλ} = g_{λσ} = Σ_{ρ=1}^{r} Σ_{τ=1}^{r} c_{σρ}^τ c_{λτ}^ρ
and
g^{σλ} g_{σλ} = δ_{σλ}.
A Lie algebra L is called semisimple if and only if det |g_{σλ}| ≠ 0. We assume
in the following that the Lie algebra is semisimple. We define
C := Σ_{ρ=1}^{r} Σ_{σ=1}^{r} g^{ρσ} Xρ Xσ.

The operator C is called Casimir operator. Let Xτ be an element of the


Lie algebra L. Calculate the commutator [C, Xτ ].

Problem 15. Show that the matrices


     
0 0 0 0 0 1 0 −1 0
Jx =  0 0 −1  , Jy =  0 0 0  , Jz =  1 0 0
0 1 0 −1 0 0 0 0 0
form generators of a Lie algebra. Is the Lie algebra simple? A Lie algebra
is simple if it contains no ideals other than L and 0 .

Problem 16. The roots of a semisimple Lie algebra are the Lie algebra
weights occurring in its adjoint representation. The set of roots forms the
root system, and is completely determined by the semisimple Lie algebra.
Consider the semisimple Lie algebra s`(2, R) with the generators
     
1 0 0 1 0 0
H= , X= , Y = .
0 −1 0 0 1 0
Find the roots.

Problem 17. The Lie algbra sl(2, R) is spanned by the matrices


     
1 0 0 1 0 0
h= , e= , f= .
0 −1 0 0 1 0

(i) Find the commutators [h, e], [h, f ] and [e, f ].


(ii) Consider
C = (1/2)h² + ef + fe.
Find C. Calculate the commutators [C, h], [C, e], [C, f ]. Show that C can
be written in the form
C = (1/2)h² + h + 2fe.
(iii) Consider the vector  
1
v= .
0
Calculate hv, ev, f v and Cv. Give an interpretation.

Problem 18. Let L be a finite dimensional Lie algebra. Let C ∞ (S 1 ) be


the set of all infinitely differentiable functions, where S 1 is the unit circle
manifold. In the product space L ⊗ C ∞ (S 1 ) we define the Lie bracket
(g1 , g2 ∈ L and f1 , f2 ∈ C ∞ (S 1 ))

[g1 ⊗ f1 , g2 ⊗ f2 ] := [g1 , g2 ] ⊗ (f1 f2 ).

Calculate

[g1 ⊗f1 , [g2 ⊗f2 , g3 ⊗f3 ]]+[g3 ⊗f3 , [g1 ⊗f1 , g2 ⊗f2 ]]+[g2 ⊗f2 , [g3 ⊗f3 , g1 ⊗f1 ]].

Problem 19. A basis for the Lie algebra su(N ), for odd N , may be built
from two unitary unimodular N × N matrices

g = diag(1, ω, ω², . . . , ω^{N−1})
h = ( 0 1 0 . . . 0 ; 0 0 1 . . . 0 ; . . . ; 0 0 0 . . . 1 ; 1 0 0 . . . 0 )
i.e. h is the cyclic shift matrix with 1's on the superdiagonal and a single 1 in the lower left corner,

where ω is a primitive N th root of unity, i.e. with period not smaller than
N , here taken to be exp(4πi/N ). We obviously have

hg = ωgh. (1)

(i) Find g N and hN .


(ii) Find tr(g).
(iii) Let m = (m1 , m2 ), n = (n1 , n2 ) and define

m × n := m1 n2 − m2 n1

where m1 = 0, 1, . . . , N − 1 and m2 = 0, 1, . . . , N − 1. The complete set of


unitary unimodular N × N matrices

J_{m1,m2} := ω^{m1 m2/2} g^{m1} h^{m2}

suffice to span the Lie algebra su(N ), where J0,0 = IN . Find J ∗ .


(iv) Calculate Jm Jn .
(v) Find the commutator [Jm , Jn ].

Problem 20. Consider the 2 × 2 matrices


   
0 1 0 0
A= , B=
0 0 0 1

with the commutator [A, B] = A. Thus we have a basis of a two-dimesnional


non-abelian Lie algebra. Do the three 8 × 8 matrices

V 1 = A ⊗ B ⊗ I2 , V2 = A ⊗ I2 ⊗ B, V3 = I2 ⊗ A ⊗ B

form a basis of Lie algebra under the commutator?

Problem 21. Let α ∈ R. Find the Lie algebra generated by the 2 × 2


matrices
   
0 cosh(α) 0 sinh(α)
A(α) = , B(α) = .
sinh(α) 0 cosh(α) 0
Chapter 13

Graphs and Matrices

Problem 1. A walk of length k in a digraph is a succession of k arcs


joining two vertices. A trail is a walk in which all the arcs (but not neces-
sarily all the vertices) are distinct. A path is a walk in which all the arcs
and all the vertices are distinct. Show that the number of walks of length k
from vertex i to vertex j in a digraph D with n vertices is given by the ijth
element of the matrix Ak , where A is the adjacency matrix of the digraph.

Problem 2. Consider a digraph. The out-degree of a vertex v is the


number of arcs incident from v and the in-degree of a vertex V is the num-
ber of arcs incident to v. Loops count as one of each.

Determine the in-degree and the out-degree of each vertex in the digraph
given by the adjacency matrix

0 1 0 0 0
 
0 0 1 0 0
A = 1 0 0 0 1
 
0 0 1 0 0
 
0 0 0 1 0

and hence determine if it is an Eulerian graph. Display the digraph and


determine an Eulerian trail.
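The in- and out-degrees, as well as the matrix B = A + A² + A³ + A⁴ used in the following problems, can be obtained with a short C++ sketch; it is an illustration only and uses the adjacency matrix given above.

#include <iostream>
using namespace std;

int main() {
    const int n = 5;
    int A[n][n] = {{0,1,0,0,0},
                   {0,0,1,0,0},
                   {1,0,0,0,1},
                   {0,0,1,0,0},
                   {0,0,0,1,0}};

    // out-degree = row sum, in-degree = column sum of the adjacency matrix
    for (int i = 0; i < n; ++i) {
        int outd = 0, ind = 0;
        for (int j = 0; j < n; ++j) { outd += A[i][j]; ind += A[j][i]; }
        cout << "vertex " << i + 1 << ": out-degree " << outd
             << ", in-degree " << ind << "\n";
    }

    // B = A + A^2 + A^3 + A^4; a nonzero (i,j) entry of A^k counts the
    // walks of length k from vertex i to vertex j
    long long P[n][n], B[n][n], Q[n][n];
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) { P[i][j] = A[i][j]; B[i][j] = A[i][j]; }
    for (int k = 2; k <= 4; ++k) {
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) {
                Q[i][j] = 0;
                for (int m = 0; m < n; ++m) Q[i][j] += P[i][m] * A[m][j];
            }
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) { P[i][j] = Q[i][j]; B[i][j] += P[i][j]; }
    }

    cout << "B = A + A^2 + A^3 + A^4:\n";
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) cout << B[i][j] << " ";
        cout << "\n";
    }
    return 0;
}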

Problem 3. A digraph is strongly connected if there is a path between


every pair of vertices. Show that if A is the adjacency matrix of a digraph


D with n vertices and B is the matrix

B = A + A2 + A3 + · · · + An−1

then D is strongly connected iff each non-diagonal element of B is greater


than 0.

Problem 4. Write down the adjacency matrix A for the digraph shown.
Calculate the matrices A2 , A3 and A4 . Consequently find the number of
walks of length 1, 2, 3 and 4 from w to u. Is there a walk of length 1, 2, 3,
or 4 from u to w? Find the matrix B = A + A2 + A3 + A4 for the digraph
and hence conclude whether it is strongly connected. This means finding
out whether all off diagonal elements are nonzero.
Chapter 14

Hadamard Product

Problem 1. Let A and B be m × n matrices. The Hadamard product


A • B is defined as the m × n matrix
A • B := (a_{ij} b_{ij}).
(i) Let    
0 1 3 4
A= , B= .
1 0 7 1
Calculate A • B.
(ii) Let C, D be m × n matrices. Show that
rank(A • B) ≤ (rankA)(rankB).

Problem 2. Let
   
a1 a2 b1 b2
A= , B=
a2 a3 b2 b3
be symmetric matrices over R. The Hadamard product A • B is defined as
 
a1 b1 a2 b2
A • B := .
a2 b2 a3 b3
Assume that A and B are positive definite. Show that A • B is positive
definite using the trace and determinant.

Problem 3. Let A be an n × n matrix over C. The spectral radius ρ(A)


is the radius of the smallest circle in the complex plane that contains all


its eigenvalues. Every characteristic polynomial has at least one root. For
any two n × n matrices A = (aij ) and B = (bij ), the Hadamard product of
A and B is the n × n matrix

A • B := (aij bij ).

Let A, B be nonnegative matrices. Then

ρ(A • B) ≤ ρ(A)ρ(B).

Apply this inequality to the nonnegative matrices


   
1/4 0 1 1
A= , B= .
0 3/4 1 1

Problem 4. Let A, B be m × n matrices. The Hadamard product of A


and B is defined by the m × n matrix

A • B := (aij bij ).

We consider the case m = n. There exists an n2 × n selection matrix J


such that
A • B = J T (A ⊗ B)J
where J T is defined as the n × n2 matrix

[E11 E22 . . . Enn ]

with Eii the n × n matrix of zeros except for a 1 in the (i, i)th position.
Prove this identity for the special case n = 2.
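For n = 2 the identity can also be checked numerically. The following C++ sketch is illustrative only (the entries of A and B are arbitrary test values); it forms A ⊗ B, the 2 × 4 matrix J^T = [E11 E22] and the product J^T(A ⊗ B)J, and compares the result with the entrywise product.

#include <iostream>
using namespace std;

int main() {
    double A[2][2] = {{1, 2}, {3, 4}};
    double B[2][2] = {{5, 6}, {7, 8}};

    // Kronecker product K = A ⊗ B (4 x 4)
    double K[4][4];
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int r = 0; r < 2; ++r)
                for (int s = 0; s < 2; ++s)
                    K[2*i + r][2*j + s] = A[i][j] * B[r][s];

    // J^T = [E11 E22] is the 2 x 4 matrix {{1,0,0,0},{0,0,0,1}}
    double Jt[2][4] = {{1, 0, 0, 0}, {0, 0, 0, 1}};

    // C = J^T (A ⊗ B) J, using J_{qj} = (J^T)_{jq}
    double C[2][2] = {{0, 0}, {0, 0}};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int p = 0; p < 4; ++p)
                for (int q = 0; q < 4; ++q)
                    C[i][j] += Jt[i][p] * K[p][q] * Jt[j][q];

    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            cout << "C[" << i << "][" << j << "] = " << C[i][j]
                 << "   entrywise product = " << A[i][j]*B[i][j] << "\n";
    return 0;
}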
Chapter 15

Differentiation

Problem 1. Let Q and P be n × n symmetric matrices over R, i.e.,


Q = QT and P = P T . Assume that P −1 exists. Find the maximum of the
function f : Rn → R
f (x) = xT Qx
subject to xT P x = 1. Use the Lagrange multiplier method.

Chapter 16

Integration

Problem 1. Let A be an n × n positive definite matrix over R, i.e.


xT Ax > 0 for all x ∈ Rn . Calculate
∫_{R^n} exp(−x^T A x) dx.

Problem 2. Let V be an N × N unitary matrix, i.e. V V ∗ = IN . The


eigenvalues of V lie on the unit circle; that is, they may be expressed in the
form exp(iθn ), θn ∈ R. A function f (V ) = f (θ1 , . . . , θN ) is called a class
function if f is symmetric in all its variables. Weyl gave an explicit formula
for averaging class functions over the circular unitary ensemble
∫_{U(N)} f(V) dV = (1/((2π)^N N!)) ∫_0^{2π} · · · ∫_0^{2π} f(θ1, . . . , θN) Π_{1≤j<k≤N} |e^{iθ_j} − e^{iθ_k}|² dθ1 · · · dθN.

Thus we integrate the function f (V ) over U (N ) by parametrizing the group


by the θi and using Weyl’s formula to convert the integral into an N -fold
integral over the θi . By definition the Haar measure dV is invariant under
V → Ũ V Ũ^*, where Ũ is any N × N unitary matrix. The matrix V can
always be diagonalized by a unitary matrix, i.e.
V = W diag(e^{iθ1}, . . . , e^{iθN}) W^*


where W is an N × N unitary matrix. Thus the integral over V can be


written as an integral over the matrix elements of W and the eigenphases θn .
Since the measure is invariant under unitary transformations, the integral
over the matrix elements of U can be evaluated straightforwardly, leaving
the integral over the eigenphases. Show that for f a class function we have
∫_{U(N)} f(V) dV = (1/(2π)^N) ∫_0^{2π} · · · ∫_0^{2π} f(θ1, . . . , θN) det(e^{iθ_n(n−m)}) dθ1 · · · dθN.
Chapter 17

Numerical Methods

Problem 1. Let A be an invertible n × n matrix over R. Consider the


system of linear equations Ax = b or
Σ_{j=1}^{n} a_{ij} xj = bi,    i = 1, 2, . . . , n.

Let A = C − R. This is called a splitting of the matrix A and R is the


defect matrix of the splitting. Consider the iteration
Cx(k+1) = Rx(k) + b, k = 0, 1, 2, . . . .
Let
       
4 −1 0 4 0 0 3 0
A =  −1 4 −1  , C = 0 4 0, b = 2, x(0) = 0.
0 −2 4 0 0 4 2 0
The iteration converges if ρ(C −1 R) < 1, where ρ(C −1 R) denotes the spec-
tral radius of C −1 R. Show that ρ(C −1 R) < 1. Perform the iteration.
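A C++ sketch of this splitting iteration (illustrative only; the fixed iteration count and the output are choices made here): since C is diagonal, the system Cx^{(k+1)} = Rx^{(k)} + b is solved simply by dividing by the diagonal entries of C.

#include <iostream>
using namespace std;

int main() {
    const int n = 3;
    double A[n][n] = {{4, -1, 0}, {-1, 4, -1}, {0, -2, 4}};
    double C[n][n] = {{4, 0, 0}, {0, 4, 0}, {0, 0, 4}};
    double b[n] = {3, 2, 2};
    double x[n] = {0, 0, 0};                 // x^{(0)}

    // R = C - A  (defect matrix of the splitting)
    double R[n][n];
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            R[i][j] = C[i][j] - A[i][j];

    // Iterate C x^{(k+1)} = R x^{(k)} + b
    for (int k = 0; k < 50; ++k) {
        double xn[n];
        for (int i = 0; i < n; ++i) {
            double s = b[i];
            for (int j = 0; j < n; ++j) s += R[i][j] * x[j];
            xn[i] = s / C[i][i];             // C is diagonal
        }
        for (int i = 0; i < n; ++i) x[i] = xn[i];
    }

    cout << x[0] << " " << x[1] << " " << x[2] << "\n";   // approximate solution of Ax = b
    return 0;
}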

Problem 2. Let A be an n × n matrix over R and let b ∈ Rn . Consider


the linear equation Ax = b. Assume that ajj 6= 0 for j = 1, 2, . . . , n. We
define the diagonal matrix D = diag(ajj ). Then the linear equation Ax = b
can be written as
x = Bx + c
with B := −D−1 (A − D), c := D−1 b. The Jacobi method for the solution
of the linear equation Ax = b is given by
x(k+1) = Bx(k) + c, k = 0, 1, . . .


where x(0) is any initial vector in Rn . The sequence converges if
\rho(B) := \max_{j=1,\ldots,n} |\lambda_j(B)| < 1

where ρ(B) is the spectral radius of B. Let


 
A = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix}.

(i) Show that the Jacobi method can be applied to this matrix.
(ii) Find the solution of the linear equation with b = (1 1 1)T .
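A short C++ sketch of the Jacobi iteration in its component-wise form x_i^{(k+1)} = (b_i − Σ_{j≠i} a_{ij} x_j^{(k)})/a_{ii}; the tolerance and the iteration bound are arbitrary choices. For this matrix ρ(B) = 1/√2 < 1, so the iterates converge.

// jacobi.cpp
#include <cmath>
#include <iostream>

int main()
{
    const int n = 3;
    const double A[n][n] = {{2,1,0},{1,2,1},{0,1,2}};
    const double b[n] = {1,1,1};
    double x[n] = {0,0,0};
    for(int k=0;k<200;k++)
    {
        double xn[n];
        for(int i=0;i<n;i++)
        {
            double s = b[i];
            for(int j=0;j<n;j++) if(j!=i) s -= A[i][j]*x[j];
            xn[i] = s/A[i][i];                   // Jacobi update
        }
        double diff = 0.0;
        for(int i=0;i<n;i++) { diff += std::fabs(xn[i]-x[i]); x[i] = xn[i]; }
        if(diff < 1e-12) break;
    }
    std::cout << x[0] << " " << x[1] << " " << x[2] << std::endl;
    return 0;
}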

Problem 3. Let A be an n × n matrix over R. The (p, q) Padé approxi-


mation to exp(A) is defined by

Rpq (A) := (Dpq (A))−1 Npq (A)

where
N_{pq}(A) = \sum_{j=0}^{p} \frac{(p+q-j)!\,p!}{(p+q)!\,j!\,(p-j)!} A^j,
\qquad
D_{pq}(A) = \sum_{j=0}^{q} \frac{(p+q-j)!\,q!}{(p+q)!\,j!\,(q-j)!} (-A)^j.

Nonsingularity of Dpq (A) is assured if p and q are large enough or if the


eigenvalues of A are negative. Find the Padé approximation for the matrix
 
A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}

and p = q = 2. Compare with the exact solution.
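A C++ sketch for p = q = 2 and this A. The coefficients of N_{22} and D_{22} are generated from the factorial formula, and the 2 × 2 inverse of D_{22} is written out explicitly; the last output line prints cosh(1) and sinh(1) only for comparison with the entries of exp(A).

// pade22.cpp
#include <cmath>
#include <iostream>

double fact(int m) { double f = 1.0; for(int i=2;i<=m;i++) f *= i; return f; }

int main()
{
    const int p = 2, q = 2;
    const double A[2][2] = {{0,1},{1,0}};
    double A2[2][2];                                       // A^2
    for(int i=0;i<2;i++) for(int j=0;j<2;j++)
    { A2[i][j] = 0.0; for(int k=0;k<2;k++) A2[i][j] += A[i][k]*A[k][j]; }
    double cN[3], cD[3];
    for(int j=0;j<=2;j++)
    {
        cN[j] = fact(p+q-j)*fact(p)/(fact(p+q)*fact(j)*fact(p-j));
        cD[j] = fact(p+q-j)*fact(q)/(fact(p+q)*fact(j)*fact(q-j));
    }
    double N[2][2], D[2][2];
    for(int i=0;i<2;i++) for(int j=0;j<2;j++)
    {
        double id = (i==j) ? 1.0 : 0.0;
        N[i][j] = cN[0]*id + cN[1]*A[i][j] + cN[2]*A2[i][j];
        D[i][j] = cD[0]*id - cD[1]*A[i][j] + cD[2]*A2[i][j];   // (-A)^j alternates the sign
    }
    double det = D[0][0]*D[1][1] - D[0][1]*D[1][0];        // R = D^{-1} N
    double Di[2][2] = {{ D[1][1]/det, -D[0][1]/det},{-D[1][0]/det, D[0][0]/det}};
    double R[2][2];
    for(int i=0;i<2;i++) for(int j=0;j<2;j++)
    { R[i][j] = 0.0; for(int k=0;k<2;k++) R[i][j] += Di[i][k]*N[k][j]; }
    std::cout << R[0][0] << " " << R[0][1] << "\n" << R[1][0] << " " << R[1][1] << std::endl;
    std::cout << std::cosh(1.0) << " " << std::sinh(1.0) << std::endl;
    return 0;
}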

Problem 4. Let A be an n × n matrix. We define the j − k approximant


of exp(A) by
f_{j,k}(A) := \left( \sum_{\ell=0}^{k} \frac{1}{\ell!} \left( \frac{A}{j} \right)^{\ell} \right)^{j}. \qquad (1)

We have the inequality
\| e^A - f_{j,k}(A) \| \le \frac{1}{j^k (k+1)!} \|A\|^{k+1} e^{\|A\|} \qquad (2)

and f_{j,k}(A) converges to e^A, i.e.
\lim_{j\to\infty} f_{j,k}(A) = \lim_{k\to\infty} f_{j,k}(A) = e^A.

Let
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.
Find f2,2 (A) and eA . Calculate the right-hand side of the inequality (2).

Problem 5. The Denman-Beavers iteration for the square root of an n×n


matrix A with no eigenvalues on R^- is
Y_{k+1} = \frac{1}{2}(Y_k + Z_k^{-1}), \qquad Z_{k+1} = \frac{1}{2}(Z_k + Y_k^{-1})
with k = 0, 1, 2, . . . and Z0 = In and Y0 = A. The iteration has the
properties that

\lim_{k\to\infty} Y_k = A^{1/2}, \qquad \lim_{k\to\infty} Z_k = A^{-1/2}

and, for all k,

Y_k = A Z_k, \qquad Y_k Z_k = Z_k Y_k, \qquad Y_{k+1} = \frac{1}{2}(Y_k + A Y_k^{-1}).
(i) Can the Denman-Beavers iteration be applied to the matrix
 
A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}\;?

(ii) Find Y1 and Z1 .
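The matrix in (i) is real symmetric with positive eigenvalues (3 ± √5)/2, so it has no eigenvalues on R^- and the iteration applies. A C++ sketch (20 steps, no stopping test) which also prints Y_1 for part (ii):

// denman_beavers.cpp
#include <iostream>

void inv2(const double M[2][2], double R[2][2])
{
    double det = M[0][0]*M[1][1] - M[0][1]*M[1][0];
    R[0][0] =  M[1][1]/det; R[0][1] = -M[0][1]/det;
    R[1][0] = -M[1][0]/det; R[1][1] =  M[0][0]/det;
}

int main()
{
    double Y[2][2] = {{1,1},{1,2}};              // Y_0 = A
    double Z[2][2] = {{1,0},{0,1}};              // Z_0 = I_2
    for(int k=0;k<20;k++)
    {
        double Yi[2][2], Zi[2][2], Yn[2][2], Zn[2][2];
        inv2(Y,Yi); inv2(Z,Zi);
        for(int i=0;i<2;i++) for(int j=0;j<2;j++)
        {
            Yn[i][j] = 0.5*(Y[i][j] + Zi[i][j]);
            Zn[i][j] = 0.5*(Z[i][j] + Yi[i][j]);
        }
        for(int i=0;i<2;i++) for(int j=0;j<2;j++) { Y[i][j] = Yn[i][j]; Z[i][j] = Zn[i][j]; }
        if(k == 0)                               // Y_1 for part (ii); Z_1 analogously
            std::cout << "Y1: " << Y[0][0] << " " << Y[0][1] << " "
                      << Y[1][0] << " " << Y[1][1] << std::endl;
    }
    double YY[2][2];                             // Y Y should reproduce A = [[1,1],[1,2]]
    for(int i=0;i<2;i++) for(int j=0;j<2;j++)
    { YY[i][j] = 0.0; for(int l=0;l<2;l++) YY[i][j] += Y[i][l]*Y[l][j]; }
    std::cout << YY[0][0] << " " << YY[0][1] << " " << YY[1][0] << " " << YY[1][1] << std::endl;
    return 0;
}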

Problem 6. Write a C++ program that implements Gauss elimination


to solve linear equations. Apply it to the system
    
\begin{pmatrix} 1 & 1 & 1 \\ 8 & 4 & 2 \\ 27 & 9 & 3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 5 \\ 14 \end{pmatrix}.
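One possible C++ sketch of Gauss elimination (partial pivoting is added although the problem does not prescribe it) with the augmented matrix of this system hard-coded; the exact solution is (1/3, 1/2, 1/6).

// gauss.cpp
#include <algorithm>
#include <cmath>
#include <iostream>

int main()
{
    const int n = 3;
    double a[n][n+1] = { {1,1,1,1}, {8,4,2,5}, {27,9,3,14} };   // augmented matrix [A|b]
    for(int k=0;k<n;k++)
    {
        int piv = k;                             // partial pivoting
        for(int i=k+1;i<n;i++)
            if(std::fabs(a[i][k]) > std::fabs(a[piv][k])) piv = i;
        for(int j=k;j<=n;j++) std::swap(a[k][j],a[piv][j]);
        for(int i=k+1;i<n;i++)                   // elimination step
        {
            double m = a[i][k]/a[k][k];
            for(int j=k;j<=n;j++) a[i][j] -= m*a[k][j];
        }
    }
    double x[n];                                 // back substitution
    for(int i=n-1;i>=0;i--)
    {
        double s = a[i][n];
        for(int j=i+1;j<n;j++) s -= a[i][j]*x[j];
        x[i] = s/a[i][i];
    }
    std::cout << x[0] << " " << x[1] << " " << x[2] << std::endl;
    return 0;
}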

Problem 7. Let A be an n × n symmetric matrix over R. Since A is sym-


metric over R there exists a set of orthonormal eigenvectors v1 , v2 , . . . , vn
which form an orthonormal basis in Rn . Let x ∈ Rn be a reasonably good
approximation to an eigenvector, say v1 . Calculate

R := \frac{x^T A x}{x^T x}.
The quotient is called Rayleigh quotient. Discuss.
Numerical Methods 103

Problem 8. Let A be an invertible n × n matrix over R. Consider the


linear system Ax = b. The condition number of A is defined as

Cond(A) := kAk kA−1 k.

Find the condition number for the matrix


 
A = \begin{pmatrix} 1 & 0.9999 \\ 0.9999 & 1 \end{pmatrix}

for the infinity norm, 1-norm and 2-norm.

Problem 9. The collocation polynomial p(x) for unequally-spaced argu-


ments x0 , x1 , . . . , xn can be found by the determinant method
\det \begin{pmatrix} p(x) & 1 & x & x^2 & \ldots & x^n \\ y_0 & 1 & x_0 & x_0^2 & \ldots & x_0^n \\ y_1 & 1 & x_1 & x_1^2 & \ldots & x_1^n \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ y_n & 1 & x_n & x_n^2 & \ldots & x_n^n \end{pmatrix} = 0

where p(xk ) = yk for k = 0, 1, . . . , n. Apply it to (n = 2)

p(x0 = 0) = 1 = y0 , p(x1 = 1/2) = 9/4 = y1 , p(x2 = 1) = 4 = y2 .

Supplementary Problems

Problem 1. Consider the hermitian 3 × 3 matrix


 
A = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}.

Find A2 and A3 . We know that

\mathrm{tr}(A) = \lambda_1 + \lambda_2 + \lambda_3, \quad \mathrm{tr}(A^2) = \lambda_1^2 + \lambda_2^2 + \lambda_3^2, \quad \mathrm{tr}(A^3) = \lambda_1^3 + \lambda_2^3 + \lambda_3^3.

Use Newton’s method to solve this system of three equations to find the
eigenvalues of A.
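For this A one finds tr(A) = 0, tr(A^2) = 6, tr(A^3) = 6. A C++ sketch of Newton's method for the resulting system of three equations; the starting values (2.5, −0.5, −1.5) are an arbitrary choice with pairwise different entries, and since A has a repeated eigenvalue the Jacobian becomes ill-conditioned near the solution, so a residual test and a guard on the determinant are included.

// newton_traces.cpp
#include <cmath>
#include <iostream>

int main()
{
    const double t1 = 0.0, t2 = 6.0, t3 = 6.0;   // traces of A, A^2, A^3
    double l[3] = {2.5,-0.5,-1.5};               // starting guess
    for(int k=0;k<100;k++)
    {
        double g[3] = { l[0]+l[1]+l[2]-t1,
                        l[0]*l[0]+l[1]*l[1]+l[2]*l[2]-t2,
                        l[0]*l[0]*l[0]+l[1]*l[1]*l[1]+l[2]*l[2]*l[2]-t3 };
        double res = std::fabs(g[0])+std::fabs(g[1])+std::fabs(g[2]);
        if(res < 1e-10) break;
        double J[3][3];                          // Jacobian of the three power sums
        for(int j=0;j<3;j++)
        { J[0][j] = 1.0; J[1][j] = 2.0*l[j]; J[2][j] = 3.0*l[j]*l[j]; }
        double det = J[0][0]*(J[1][1]*J[2][2]-J[1][2]*J[2][1])
                   - J[0][1]*(J[1][0]*J[2][2]-J[1][2]*J[2][0])
                   + J[0][2]*(J[1][0]*J[2][1]-J[1][1]*J[2][0]);
        if(std::fabs(det) < 1e-14) break;        // Jacobian (nearly) singular
        double d[3];                             // solve J d = -g by Cramer's rule
        for(int c=0;c<3;c++)
        {
            double M[3][3];
            for(int i=0;i<3;i++) for(int j=0;j<3;j++) M[i][j] = J[i][j];
            for(int i=0;i<3;i++) M[i][c] = -g[i];
            d[c] = ( M[0][0]*(M[1][1]*M[2][2]-M[1][2]*M[2][1])
                   - M[0][1]*(M[1][0]*M[2][2]-M[1][2]*M[2][0])
                   + M[0][2]*(M[1][0]*M[2][1]-M[1][1]*M[2][0]) )/det;
        }
        for(int j=0;j<3;j++) l[j] += d[j];
    }
    std::cout << l[0] << " " << l[1] << " " << l[2] << std::endl;
    return 0;
}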

Problem 2. (i) Given an m × n matrix over R. Write a C++ program


that finds the maximum value in each row and then the minimum value of
these values.

(ii) Given an m × n matrix over R. Write a C++ program that finds the
minimum value in each row and then the maximum value of these values.
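A minimal C++ sketch for part (i); the 3 × 4 example matrix is an arbitrary choice, and part (ii) is obtained by interchanging the roles of maximum and minimum.

// rowmax_min.cpp
#include <iostream>

int main()
{
    const int m = 3, n = 4;
    double A[m][n] = { {1,7,2,4}, {3,5,8,6}, {9,2,4,1} };
    double minofmax = 0.0;
    for(int i=0;i<m;i++)
    {
        double rmax = A[i][0];                   // maximum of row i
        for(int j=1;j<n;j++) if(A[i][j] > rmax) rmax = A[i][j];
        if(i==0 || rmax < minofmax) minofmax = rmax;
    }
    std::cout << minofmax << std::endl;
    return 0;
}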

Problem 3. Given an m × n matrix over C. Find the elements with


the largest absolute values and store the entries (j, k) (j = 0, 1, . . . , m − 1,
k = 0, 1, . . . , n − 1) which contain the elements with the largest absolute
value.

Problem 4. Let A be an n × n matrix over C. Then any eigenvalue of A


satisfies the inequality
|\lambda| \le \max_{1\le j\le n} \sum_{k=1}^{n} |a_{jk}|.

Write a C++ program that calculates the right-hand side of the inequality
for a given matrix. Apply the complex class of STL. Apply it to the matrix

A = \begin{pmatrix} i & 0 & 0 & i \\ 0 & 2i & 2i & 0 \\ 0 & 3i & 3i & 0 \\ 4i & 0 & 0 & 4i \end{pmatrix}.
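A C++ sketch using std::complex<double> from the STL; for the matrix above it prints the bound 8.

// rowsumbound.cpp
#include <complex>
#include <iostream>

using std::complex;

int main()
{
    const int n = 4;
    const complex<double> I(0.0,1.0);
    complex<double> A[n][n] = { { I, 0.0, 0.0, I },
                                { 0.0, 2.0*I, 2.0*I, 0.0 },
                                { 0.0, 3.0*I, 3.0*I, 0.0 },
                                { 4.0*I, 0.0, 0.0, 4.0*I } };
    double bound = 0.0;
    for(int j=0;j<n;j++)
    {
        double s = 0.0;                          // row sum of absolute values
        for(int k=0;k<n;k++) s += std::abs(A[j][k]);
        if(s > bound) bound = s;
    }
    std::cout << bound << std::endl;
    return 0;
}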

Problem 5. The Leverrier’s method finds the characteristic polynomial


of an n × n matrix. Find the characteristic polynomial for
   
A \otimes B, \qquad A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}

using this method. How are the coefficients ci of the polynomial related to
the eigenvalues? The eigenvalues of A ⊗ B are given by 0 (twice) and ±2.

Problem 6. Consider an n × n permutation matrix P . Obviously +1 is


always an eigenvalue since the column vector with all n entries equal to +1 is
an eigenvector. Apply a brute force method and give a C++ implementation
to figure out whether −1 is an eigenvalue. We run over all column vectors
v of length n, where the entries can only be +1 or −1, where of course the
cases with all entries +1 or all entries −1 can be omitted. Thus the number
of column vectors we have to run through are 2n − 2. The condition then
to be checked is P v = −v. If this holds we have an eigenvalue −1 with the
corresponding eigenvector v.
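A C++ sketch of this brute force test. The 4 × 4 permutation matrix used here is only an example (it represents a 4-cycle, for which a vector with entries ±1 satisfying P v = −v does exist).

// minuseig.cpp
#include <iostream>

int main()
{
    const int n = 4;
    int P[n][n] = { {0,1,0,0}, {0,0,0,1}, {1,0,0,0}, {0,0,1,0} };
    bool found = false;
    for(long c=1; c < (1L<<n)-1; c++)            // 2^n - 2 sign patterns
    {
        int v[n];
        for(int j=0;j<n;j++) v[j] = ((c>>j)&1) ? -1 : 1;
        bool ok = true;
        for(int i=0;i<n && ok;i++)
        {
            int s = 0;
            for(int j=0;j<n;j++) s += P[i][j]*v[j];
            if(s != -v[i]) ok = false;           // check (P v)_i = -v_i
        }
        if(ok)
        {
            found = true;
            for(int j=0;j<n;j++) std::cout << v[j] << " ";
            std::cout << std::endl;
            break;
        }
    }
    if(!found) std::cout << "-1 not found by this search" << std::endl;
    return 0;
}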

Problem 7. Consider an n × n symmetric tridiagonal matrix A over R. Let
f_n(\lambda) := \det(A - \lambda I_n) and

f_k(\lambda) = \det \begin{pmatrix} \alpha_1-\lambda & \beta_1 & 0 & \cdots & 0 \\ \beta_1 & \alpha_2-\lambda & \beta_2 & \cdots & 0 \\ 0 & \beta_2 & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \alpha_{k-1}-\lambda & \beta_{k-1} \\ 0 & \cdots & 0 & \beta_{k-1} & \alpha_k-\lambda \end{pmatrix}

for k = 1, 2, . . . , n and f0 (λ) = 1, f−1 (λ) = 0. Then


f_k(\lambda) = (\alpha_k - \lambda) f_{k-1}(\lambda) - \beta_{k-1}^2 f_{k-2}(\lambda)

for k = 2, 3, . . . , n. Find f_4(\lambda) for the 4 \times 4 matrix
\begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & \sqrt{2} & 0 \\ 0 & \sqrt{2} & 0 & \sqrt{3} \\ 0 & 0 & \sqrt{3} & 0 \end{pmatrix}.
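Reading the entries as α1 = · · · = α4 = 0, β1 = 1, β2 = √2, β3 = √3, a C++ sketch which evaluates f_4 at a given λ via the recursion; the evaluation point λ = 1/2 is arbitrary, and the second output line is the closed form λ^4 − 6λ^2 + 3 obtained by carrying out the recursion by hand, printed only for comparison.

// tridiag_charpoly.cpp
#include <cmath>
#include <iostream>

int main()
{
    const int n = 4;
    double alpha[n+1] = {0,0,0,0,0};                         // alpha[1..n]
    double beta[n] = {0, std::sqrt(1.0), std::sqrt(2.0), std::sqrt(3.0)}; // beta[1..n-1]
    double lambda = 0.5;                                     // evaluation point
    double fkm2 = 1.0;                                       // f_0
    double fkm1 = alpha[1] - lambda;                         // f_1
    double fk = fkm1;
    for(int k=2;k<=n;k++)
    {
        fk = (alpha[k]-lambda)*fkm1 - beta[k-1]*beta[k-1]*fkm2;
        fkm2 = fkm1; fkm1 = fk;
    }
    std::cout << "f_4(" << lambda << ") = " << fk << std::endl;
    std::cout << lambda*lambda*lambda*lambda - 6.0*lambda*lambda + 3.0 << std::endl;
    return 0;
}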

Problem 8. The power method is the simplest algorithm for computing


eigenvectors and eigenvalues. Consider the vector space Rn with the Eu-
clidean norm kxk of a vector x ∈ Rn . The iteration is as follows: Given a
nonsingular n × n matrix M and a vector x0 with kx0 k = 1. One defines

x_{t+1} = \frac{M x_t}{\| M x_t \|}, \qquad t = 0, 1, \ldots

This defines a dynamical system on the sphere S n−1 . Since M is invertible


we have
x_t = \frac{M^{-1} x_{t+1}}{\| M^{-1} x_{t+1} \|}, \qquad t = 0, 1, \ldots
(i) Apply the power method to the nonnormal matrix
   
A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \quad \text{and} \quad x_0 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.

(ii) Apply the power method to the Bell matrix

B = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 1 & 0 & 0 & -1 \end{pmatrix} \quad \text{and} \quad x_0 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}.

(iii) Consider the 3 × 3 symmetric matrix over R


 
A = \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix}.
Find the largest eigenvalue and the corresponding eigenvector using the
power method. Start from the vector
 
v = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.
Note that
A v = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \qquad A^2 v = \begin{pmatrix} 2 \\ -2 \\ 2 \end{pmatrix}
and the largest eigenvalue is \lambda = 2 + \sqrt{2} with the corresponding eigenvector
(1 \;\; -\sqrt{2} \;\; 1)^T.
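A C++ sketch of the power method for part (iii), starting from v/‖v‖ and using the Rayleigh quotient x_t^T A x_t as the eigenvalue estimate; the number of iterations is an arbitrary choice.

// power_method.cpp
#include <cmath>
#include <iostream>

int main()
{
    const int n = 3;
    double A[n][n] = { {2,-1,0}, {-1,2,-1}, {0,-1,2} };
    double x[n] = {1,1,1};
    double norm = std::sqrt(3.0);
    for(int i=0;i<n;i++) x[i] /= norm;           // normalized starting vector
    double rayleigh = 0.0;
    for(int t=0;t<100;t++)
    {
        double y[n];
        for(int i=0;i<n;i++)
        { y[i] = 0.0; for(int j=0;j<n;j++) y[i] += A[i][j]*x[j]; }
        norm = 0.0;
        for(int i=0;i<n;i++) norm += y[i]*y[i];
        norm = std::sqrt(norm);
        for(int i=0;i<n;i++) x[i] = y[i]/norm;   // x_{t+1} = A x_t / ||A x_t||
        rayleigh = 0.0;                          // eigenvalue estimate x^T A x
        for(int i=0;i<n;i++) for(int j=0;j<n;j++) rayleigh += x[i]*A[i][j]*x[j];
    }
    std::cout << "eigenvalue approximation " << rayleigh << std::endl;
    std::cout << x[0] << " " << x[1] << " " << x[2] << std::endl;
    return 0;
}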

Problem 9. Let A be an n × n matrix over R. Then we have the Taylor


expansion
∞ ∞
X (−1)k X (−1)k
sin(A) := A2k+1 , cos(A) := A2k .
(2k + 1)! (2k)!
k=0 k=0

To calculate sin(A) and cos(A) from a truncated Taylor series approxima-


tion is only worthwhile near the origin. We can use the repeated application
of the double angle formula
\cos(2A) \equiv 2\cos^2(A) - I_n, \qquad \sin(2A) \equiv 2\sin(A)\cos(A).
We can find sin(A) and cos(A) of a matrix A from a suitably truncated
Taylor series approximation as follows
S_0 = Taylor approximation to sin(A/2^k), \qquad C_0 = Taylor approximation to cos(A/2^k)
and the recursion
S_j = 2 S_{j-1} C_{j-1}, \qquad C_j = 2 C_{j-1}^2 - I_n
where j = 1, 2, \ldots. Here k is a positive integer chosen so that, say, \|A\|_\infty \approx 2^k.
Apply this recursion to calculate sine and cosine of the 2 \times 2 matrix
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}.
Use k = 2.
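A C++ sketch of this scaling-and-doubling procedure for the given A with k = 2; the number of Taylor terms (eight) is an arbitrary choice. Since A has eigenvalues 3 and 1, the (1,1) entries of sin(A) and cos(A) equal (sin 3 + sin 1)/2 and (cos 3 + cos 1)/2, which are printed for comparison.

// sincos_doubling.cpp
#include <cmath>
#include <iostream>

typedef double M[2][2];

void mul(const M a, const M b, M c)
{
    M t;                                          // temporary, so c may alias a or b
    for(int i=0;i<2;i++) for(int j=0;j<2;j++)
    { t[i][j] = 0.0; for(int l=0;l<2;l++) t[i][j] += a[i][l]*b[l][j]; }
    for(int i=0;i<2;i++) for(int j=0;j<2;j++) c[i][j] = t[i][j];
}

int main()
{
    const int k = 2, terms = 8;
    M B = { {2.0/4.0, 1.0/4.0}, {1.0/4.0, 2.0/4.0} };   // A/2^k
    M S = {{0,0},{0,0}}, C = {{0,0},{0,0}};
    M P = {{1,0},{0,1}};                          // powers B^m
    double f = 1.0;                               // m!
    for(int m=0;m<terms;m++)
    {
        if(m > 0) f *= m;
        double coef = ((m/2)%2 ? -1.0 : 1.0)/f;   // (-1)^{floor(m/2)}/m!
        for(int i=0;i<2;i++) for(int j=0;j<2;j++)
        {
            if(m%2 == 0) C[i][j] += coef*P[i][j]; // even powers -> cosine series
            else         S[i][j] += coef*P[i][j]; // odd powers  -> sine series
        }
        mul(P,B,P);
    }
    for(int step=0; step<k; step++)               // double the argument k times
    {
        M S2, C2;
        mul(S,C,S2); mul(C,C,C2);
        for(int i=0;i<2;i++) for(int j=0;j<2;j++)
        {
            S[i][j] = 2.0*S2[i][j];
            C[i][j] = 2.0*C2[i][j] - ((i==j) ? 1.0 : 0.0);
        }
    }
    std::cout << "sin(A)_{11} = " << S[0][0] << "  exact " << 0.5*(std::sin(3.0)+std::sin(1.0)) << std::endl;
    std::cout << "cos(A)_{11} = " << C[0][0] << "  exact " << 0.5*(std::cos(3.0)+std::cos(1.0)) << std::endl;
    return 0;
}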
Chapter 18

Miscellaneous

Problem 1. Can one find a unitary matrix U such that
U^* \begin{pmatrix} 0 & c \\ d & 0 \end{pmatrix} U = \begin{pmatrix} 0 & c\,e^{i\theta} \\ d\,e^{-i\theta} & 0 \end{pmatrix}
where c, d ∈ C and θ ∈ R ?

Problem 2. Let
0 0 1 0 1 i 0 0
   
 0 0 0 1 1 0 0 −i 1 
J = , U=√  .
−1 0 0 0 2 i 1 0 0
0 −1 0 0 0 0 1 −i
Find U ∗ U . Show that U ∗ JU is a diagonal matrix.

Problem 3. Given four points xi , xj , xk , x` (pairwise different) in R2 .


One can define their cross-ratio
r_{ijk\ell} := \frac{|x_i - x_j|\,|x_k - x_\ell|}{|x_i - x_\ell|\,|x_k - x_j|}.
Show that the cross-ratios are invariant under conformal transformations.

Problem 4. Consider
0 1 0 0 1
   
1 0 1 0 1 1
A= , v0 =   .
0 1 0 1 2 1
0 0 1 0 1


Find

v0T Av0 v1T Av1 v T v1


v1 = Av0 − v0 , v2 = Av1 − v1 − 1T v0 .
v0T v0 T
v1 v1 v0 v0

Are the vectors v0 , v1 , v2 linearly independent?

Problem 5. Let A, B be invertible n × n matrices. We define A ◦ B :=


AB − A−1 B −1 . Find matrices A, B such that A ◦ B is invertible.

Problem 6. Let A be an n × n matrix. Let


 
A In 0n
B =  In A In 
0n In A

where 0n is the n × n zero matrix. Calculate B 2 and B 3 .

Problem 7. Let a and b be integers with a > b > 0. Find the rank of the 4 × 4
matrix
a a b b
 
a b a b
M (a, b) =  .
b a b a
b b a a

Problem 8. Let 0 ≤ θ < π/4. Note that sec(x) := 1/ cos(x). Consider


the 2 × 2 matrix
 
sec(2θ) −i tan(2θ)
A(θ) = .
i tan(2θ) sec(2θ)

Show that the matrix is hermitian and the determinant is equal to 1. Show
that the matrix is not unitary.

Problem 9. Show that the inverse of the 3 × 3 matrix


 
1 0 1
A = 0 1 0
0 0 1

is given by
 
1 0 −1
A−1 = 0 1 0 .
0 0 1
Miscellaneous 109

Problem 10. Let B be the Bell matrix


1 0 0 1
 
1  0 1 −1 0
B=√  .
2 0 1 1 0
−1 0 0 1

(i) Find B −1 and B T .


(ii) Show that
1
(I2 ⊗ B)(B ⊗ I2 )(I2 ⊗ B) ≡ √ (I2 ⊗ B 2 + B 2 ⊗ I2 ).
2

Problem 11. Consider the two normalized vectors


1 1
   
0  0 
0  0 
   
1 0
  1  0 

√  , √ 

0  0 

2 2
0  0 
   
0 0
   
1 −1

in the Hilbert space R8 . Show that the vectors obtained by applying

(σ1 ⊗ I2 ⊗ I2 ), (I2 ⊗ σ1 ⊗ I2 ), (I2 ⊗ I2 ⊗ σ1 )

together with the two original ones form an orthonormal basis in R8 .

Problem 12. Let φ ∈ R. Consider the n × n matrix


0 1
 
 0 1 
0 1
 
 
H=  .. .
 . 

 1
eiφ 0

(i) Show that the matrix is unitary.


(ii) Find the eigenvalues of H.
(iii) Consider the n × n diagonal matrix

G = diag(1, ω, ω 2 , · · · ω n−1 )

where ω := exp(i2π/n). Find ωGH − HG.


110 Problems and Solutions

Problem 13. Let In be the n × n identity matrix and 0n be the n × n


zero matrix. Find the eigenvalues and eigenvectors of the 2n × 2n matrices
   
0n In 0n In
A= , B= .
−In 0n In 0n

Problem 14. Let aj ∈ R with j = 1, 2, 3. Consider the 4 × 4 matrices


0 a1 0 0 0 a1 0 0
   
1 a 0 a2 0  1  −a1 0 a2 0 
A=  1 , B= .
2 0 a 0 a 2i 0 −a 0 a

2 3 2 3
0 0 a3 0 0 0 −a3 0
Find the spectrum of A and B. Find the spectrum of [A, B].

Problem 15. Consider the permutation matrix


0 1 0 0
 
0 0 0 1
P = .
1 0 0 0
0 0 1 0

(i) Show that P 4 = I4 .


(ii) Using this information find the eigenvalues.

Problem 16. Consider the skew-symmetric 3 × 3 matrix over R


 
0 −a3 a2
A =  a3 0 −a1 
−a2 a1 0

where a1 , a2 , a3 ∈ R. Find the eigenvalues. Let 03 be the 3 × 3 zero


matrix. Let A1 , A2 , A3 be skew-symmetric 3 × 3 matrices over R. Find the
eigenvalues of the 9 × 9 matrix
 
03 −A3 A2
B =  A3 03 −A1  .
−A2 A1 03

Problem 17. (i) Find the eigenvalues and eigenvectors of the 4×4 matrix
a11 a12 0 0
 
 0 0 a23 a24 
A= .
0 0 a33 a34
a41 a42 0 0

(ii) Find the eigenvalues and eigenvectors of the 4 × 4 permutation matrix

0 1 0 0
 
0 0 0 1
A= .
0 0 1 0
1 0 0 0

Problem 18. Let z ∈ C and A, B, C be n × n matrices over C. Calculate


the commutator

[In ⊗ A + A ⊗ ezC , e−zC ⊗ B + B ⊗ In ].

The commutator plays a role for Hopf algebras.

Problem 19. Consider the 4 × 4 matrix


cosh(α) 0 0 sinh(α)
 
 − sin(β) sinh(α) cos(β) 0 − sin(β) cosh(α) 
A(α, β, γ) =  .
sin(γ) cos(β) sinh(α) sin(γ) sin(β) cos(γ) sin(γ) cos(β) cosh(α)
cos(γ) cos(β) sinh(α) cos(γ) sin(β) − sin(γ) cos(γ) cos(β) cosh(α)

(i) Is each column a normalized vector in R4 ?


(ii) Calculate the scalar product between the column vectors. Discuss.

Problem 20. Consider the 2 × 2 matrices


   
r t 1 1 1
S= , R= √ .
t r 2 1 −1

Calculate RSRT . Discuss.

Problem 21. Let φ1 , φ2 ∈ R. Consider the vector in R3


 
cos(φ1 ) cos(φ2 )
v(φ1 , φ2 ) =  sin(φ2 ) cos(φ1 ) 
sin(φ1 )

(i) Find the 3 × 3 matrix v(φ1 , φ2 )vT (φ1 , φ2 ). What type of matrix do we
have?
(ii) Find the eigenvalues of the 3 × 3 matrix v(φ1 , φ2 )vT (φ1 , φ2 ). Compare
with vT (φ1 , φ2 )v(φ1 , φ2 ).

Problem 22. Can any skew-hermitian matrix K be written as K = iH,


where H is a hermitian matrix?
112 Problems and Solutions

Problem 23. Can one find a (column) vector in R2 such that vvT is an
invertible 2 × 2 matrix?

Problem 24. (i) Consider the normalized vectors in R4

1 0 1 0
       
1 0 1 1 1  0  1  1 
v1 = √   , v2 = √   , v3 = √  , v4 = √  .
2 1 2 1 2 −1 2 0
0 0 0 −1

(i) Do the vectors form a basis in R4 ?


(ii) Find the 4×4 matrix v1 v1∗ +v2 v2∗ +v3 v3∗ +v4 v4∗ and then the eigenvalues.
(iii) Find the 4 × 4 matrix v1 v2∗ + v2 v3∗ + v3 v4∗ + v4 v1∗ and then the eigen-
values.
(iv) Consider the normalized vectors in R4

1 0 0 1
       
1 1 1 1 1 0 1 0
w1 = √   , w2 = √   , w3 = √   , w4 = √   .
2 0 2 1 2 1 2 0
0 0 1 1

(v) Do the vectors form a basis in R4 ?


(vi) Find the 4 × 4 matrix w1 w1∗ + w2 w2∗ + w3 w3∗ + w4 w4∗ and then the
eigenvalues.
(vii) Find the 4 × 4 matrix w1 w2∗ + w2 w3∗ + w3 w4∗ + w4 w1∗ and then the
eigenvalues.

Problem 25. Let


   
z1 w1
z= , w=
z2 w2

be elements of C2 . Solve the equation z∗ w = w∗ z.

Problem 26. Let S be an invertible n × n matrix. Find the inverse of


the 2n × 2n matrix
0n S −1
 

S 0n
where 0n is the n × n zero matrix.

Problem 27. Consider the normalized vectors


  √  √ 
0 3/2 3/2
v1 = , v2 = , v3 =
1 1/2 −1/2

and the vector  


w1
w=
w2
in the Hilbert space C2 . Show that
3 3
X X 3
|vj∗ w|2 = |vj∗ w|2 = kwk2 .
j=1 j=1
2

Problem 28. Find the determinant of the 4 × 4 matrices


a11 a12 0 0 1 0 a13 a14
   
 a21 a22 0 0   0 1 a23 a24 
,  .
a31 a32 1 0 0 0 a33 a34

a41 a42 0 1 0 0 a43 a44

Problem 29. Let R be an nonsingular n × n matrix over C. Let A be


an n × n matrix over C of rank one.
(i) Show that the matrix R+A is nonsingular if and only if tr(R−1 A) ≠ −1.
(ii) Show that in this case we have
(R + A)−1 = R−1 − (1 + tr(R−1 A))−1 R−1 AR−1 .
(iii) Simplify to the case that R = In .

Problem 30. Find the determinant of the matrices


1 1/2 1/3 1/4
   
  1 1/2 1/3
1 1/2 1/2 1/3 1/4 1/5 
,  1/2 1/3 1/4  ,  .

1/2 1/3 1/3 1/4 1/5 1/6
1/3 1/4 1/5
1/4 1/5 1/6 1/7
Extend to n × n matrices. Then consider the limit n → ∞.

Problem 31. Consider the 2 × 2 matrix


 
cosh(α) sinh(α)
A(α) = .
− sinh(α) − cosh(α)

Find the maxima and minima of the function f (α) = tr(A2 (α))−(tr(A(α)))2 .

Problem 32. Find the eigenvalues and eigenvectors of the matrices


 
  1 1/2 1/3
1 1/2
,  1/2 1/3 1/4  .
1/2 1/3
1/3 1/4 1/5

Extend to the n × n case.

Problem 33. Find the eigenvalues of the 4 × 4 matrix


2 −2 0 0
 
 −1 4 −1 0 
A= .
0 −2 2 −2
0 0 −1 4

Problem 34. Let φk ∈ R. Consider the matrices


0 eiφ1 0 0
 
 eiφ2 0 0 0 
A(φ1 , φ2 , φ3 , φ4 ) =  ,
0 0 0 eiφ3
0 0 eiφ4 0
0 0 eiφ5 0
 
iφ6
 0 0 0 e 
B(φ5 , φ6 , φ7 , φ8 ) =  iφ7
e 0 0 0

iφ8
0 e 0 0
and A(φ1 , φ2 , φ3 , φ4 )B(φ5 , φ6 , φ7 , φ8 ). Find the eigenvalues of these matri-
ces.

Problem 35. Consider the 4 × 4 orthogonal matrices


cos(θ) sin(θ) 0 0 1 0 0 0
   
 − sin(θ) cos(θ) 0 0   0 cos(θ) sin(θ) 0 
V12 (θ) =   , V23 (θ) = 
0 0 1 0 0 − sin(θ) cos(θ) 0

0 0 0 1 0 0 0 1
1 0 0 0
 
0 1 0 0 
V34 (θ) =  .
0 0 cos(θ) sin(θ)
0 0 − sin(θ) cos(θ)
Find the eigenvalues of V (θ) = V12 (θ)V23 (θ)V34 (θ).

Problem 36. We know that any n×n unitary matrix has only eigenvalues
λ with |λ| = 1. Assume that a given n × n matrix has only eigenvalues with
|λ| = 1. Can we conclude that the matrix is unitary?

Problem 37. Consider the 4 × 4 matrices over R


a11 a12 0 0 a11 a12 0 a14
   
 a12 a11 a12 0   a12 a11 a12 0 
A1 =   , A2 =  .
0 a12 a11 a12 0 a12 a11 a12
0 0 a12 a11 a14 0 a12 a11

Find the eigenvalues of A1 and A2 .

Problem 38. Find the eigenvalues and normalized eigenvectors of the


hermitian 3 × 3 matrix
 
1 0 v1
H= 0 2 v2 
v1∗ v2∗ 3

with 1 , 2 , 3 ∈ R and v1 , v2 ∈ C.

Problem 39. Consider the Bell matrix


1 0 0 1
 
1 0 1 1 0 
B=√ 
2 0 1 −1 0

1 0 0 −1

which is a unitary matrix. Each column vector of the matrix is a fully en-
tangled state. Are the normalized eigenvectors of B also fully entangled
states?

Problem 40. Let x ∈ R. Is the 4 × 4 matrix


cos(x) 0 sin(x) 0
 
0 cos(x) 0 sin(x) 
A(x) = 

− sin(x) 0 cos(x) 0

0 − sin(x) 0 cos(x)

an orthogonal matrix?

Problem 41. Let z ∈ C. Can one find a 4 × 4 permutation matrix P


such that    
z 0 z z 0 0
P 0 z 0PT = 0 z z?
z 0 z 0 z z

Problem 42. (i) Consider the 3 × 3 permutation matrix


 
0 1 0
C = 0 0 1.
1 0 0

Find the condition on a 3 × 3 matrix A such that CAC T = A. Note that


C T = C −1 .

(ii) Consider the 4 × 4 permutation matrix

0 1 0 0
 
0 0 1 0
D= .
0 0 0 1
1 0 0 0

Find the condition on a 4 × 4 matrix B such that DBDT = B. Note that


D^T = D^{-1}.

Problem 43. (i) Let x ∈ R. Show that the 4 × 4 matrix

cos(x) 0 − sin(x) 0
 
 0 cos(x) 0 − sin(x) 
A(x) = 
sin(x) 0 cos(x) 0

0 sin(x) 0 cos(x)
is invertible. Find the inverse. Do these matrices form a group under
matrix multiplication?
(ii) Let x ∈ R. Show that the matrix

cosh(x) 0 sinh(x) 0
 
0 cosh(x) 0 sinh(x) 
B(x) = 

sinh(x) 0 cosh(x) 0

0 sinh(x) 0 cosh(x)
is invertible. Find the inverse. Do these matrices form a group under
matrix multiplication.

Problem 44. Let α, β ∈ R. Do the 3 × 3 matrices


  
cos(α) − sin(α) 0 1 0 0
A(α, β) =  sin(α) cos(α) 0   0 cosh(β) sinh(β) 
0 0 1 0 sinh(β) cosh(β)

form a group under matrix multiplication? For α = β = 0 we have the


identity matrix.

Problem 45. Is the invertible matrix


0 0 0 1
 
0 1 0 0
U =
0 0 1 0

1 0 0 0

an element of the Lie group SO(4)? The matrix is unitary and we have
UT = U.
Miscellaneous 117

Problem 46. Let


     
0 1 0 0 1 1 0
J+ := , J− := , J3 :=
0 0 1 0 2 0 −1

and ε ∈ R. Find e^{εJ_+}, e^{εJ_-}, e^{ε(J_+ + J_-)}. Let r ∈ R. Show that
e^{r(J_+ + J_-)} \equiv e^{J_- \tanh(r)}\, e^{2 J_3 \ln(\cosh(r))}\, e^{J_+ \tanh(r)}.

Problem 47. Let tj ∈ R for j = 1, 2, 3, 4. Find the eigenvalues and


eigenvectors of
0 t1 0 t4 eiφ
 
 t1 0 t2 0 
Ĥ =  .
0 t2 0 t3
t4 e−iφ 0 t3 0

Problem 48. (i) Study the eigenvalue problem for the symmetric matri-
ces over R

2 −1 0 −1
   
2 −1 −1
 −1 2 −1 0 
A3 =  −1 2 −1  , A4 =  .
0 −1 2 −1
−1 −1 2
−1 0 −1 2

Extend to n dimensions

2 −1 0 ... 0 −1
 
 −1 2 −1 ... 0 0 
 0 −1 2 ... 0 0 
 
A=
 ... .. .. .. .. ..  .
 . . . . . 

 0 0 0 ... 2 −1 
−1 0 0 ... −1 2

Problem 49. Find the eigenvalues and eigenvectors of the 6 × 6 matrix

b11 0 0 0 0 b16
 
 0 b22 0 0 0 0 
 0 0 b33 0 0 0 
 
B= .
 0 0 0 b44 0 0 
0 0 0 0 b55 0
 
b61 0 0 0 0 b66
118 Problems and Solutions

Problem 50. Find the eigenvalues and eigenvectors of 4 × 4 matrix

1 1 1 z
 
1 1 1 z
A(z) =  .
1 1 1 z
z̄ z̄ z̄ 1

Problem 51. Find the eigenvalues of the matrices

a11 a12 a13 a14


   
  a11 a12 a13
a11 a12  a21 a22 a23 0 
,  a21 a22 0 , .
a21 0 a31 a32 0 0

a31 0 0
a41 0 0 0

Problem 52. Let A, B be n × n matrices over C. Consider the map

τ (A, B) := A ⊗ B − A ⊗ In − In ⊗ B.

Find the commutator [τ (A, B), τ (B, A)].

Problem 53. Let A be an n × n matrix over C. Consider the matrices

B12 = A ⊗ A ⊗ In ⊗ In , B13 = A ⊗ In ⊗ A ⊗ In , B14 = A ⊗ In ⊗ In ⊗ A,

B23 = In ⊗ A ⊗ A ⊗ In , B24 = In ⊗ A ⊗ In ⊗ A, B34 = In ⊗ In ⊗ A ⊗ A.

Find the commutators [Bjk , B`m ].

Problem 54. We know that for any n × n matrix A over C the matrix
exp(A) is invertible with the inverse exp(−A). What about cos(A) and
cosh(A)?

Problem 55. Let


   
1 1 0 1
A= , B= .
1 −1 1 0

(i) Find A ⊗ B, B ⊗ A.
(ii) Find tr(A ⊗ B), tr(B ⊗ A). Find det(A ⊗ B), det(B ⊗ A).
(iii) Find the eigenvalues of A and B.
(iv) Find the eigenvalues of A ⊗ B and B ⊗ A.
(v) Find rank(A), rank(B) and rank(A ⊗ B).
Miscellaneous 119

Problem 56. Consider the hermitian 4 × 4 matrix


0 0 0 1
 
0 0 0 1
A= .
0 0 0 1
1 1 1 0

Show that the rank of the matrix is 2. The trace and determinant are equal
to√0 and thus two of the eigenvalues are 0. The other two eigenvalues are
± 3.

Problem 57. Let A, B be n × n matrices. Let

X := A⊗In ⊗In +In ⊗A⊗In +In ⊗In ⊗A, Y := B⊗In ⊗In +In ⊗B⊗In +In ⊗In ⊗B.

Find the commutator [X, Y ].

Problem 58. Let


   
0 1 1 0
A= , B= .
0 0 0 0

Then [A, B] = −A. Let

∆A := B ⊗ A + A ⊗ I2 , ∆B := B ⊗ B.

Find the commutator [∆A, ∆B].

Problem 59. (i) Let A, B be n × n matrices with [A, B] = 0n . Find the


commutators
[A ⊗ A, B ⊗ B], [A ⊗ B, B ⊗ A].
Find the anti-commutators

[A ⊗ A, B ⊗ B]+ , [A ⊗ B, B ⊗ A]+ .

(ii) Let A, B be n × n matrices with [A, B]+ = 0n . Find the commutators

[A ⊗ A, B ⊗ B], [A ⊗ B, B ⊗ A].

Find the anti-commutators

[A ⊗ A, B ⊗ B]+ , [A ⊗ B, B ⊗ A]+ .

(iii) Let A, B be n × n matrices. We define

∆(A) := A ⊗ B + B ⊗ A, ∆(B) := B ⊗ B − A ⊗ A.

Find the commutator [∆(A), ∆(B)] and anticommutator [∆(A), ∆(B)]+ .

Problem 60. (i) Let α, β ∈ C and


 
α β
M (α, β) = .
0 α

Calculate exp(M (α, β)).


(ii) Let α, β ∈ C. Consider the 2 × 2 matrix
 
α β
N (α, β) = .
0 −α

Calculate exp(N (α, β)).

Problem 61. Consider the six 3 × 3 permutation matrices. Which two


of the matrices generate all the other ones?

Problem 62. Find the eigenvalues and eigenvectors of the matrices

1 0 0 0 1
 
 
1 0 1 0 1 0 1 0 
0 1 0 , 0

0 1 0 0 .

1 0 −1 0 1 0 −1 0
 
1 0 0 0 −1

Extend to the general case n odd.

Problem 63. Let A be a real or complex n×n matrix with no eigenvalues


on R− (the closed negative real axis). Then there exists a unique matrix
X such that
1) eX = A
2) the eigenvalues of X lie in the strip { z : −π < =(z) < π }. We refer to
X as the principal logarithm of A and write X = log(A). Similarly, there
is a unique matrix S such that
1) S 2 = A
2) the eigenvalues of S lie in the open halfplane: 0 < <(z). We refer to S
as the principal square root of A and write S = A1/2 .
If the matrix A is real then its principal logarithm and principal square
root are also real.
The open halfplane associated with z = ρeiθ is the set of complex numbers
w = ζeiφ such that −π/2 < φ − θ < π/2.
Suppose that A = BC has no eigenvalues on R− and
1. BC = CB

2. every eigenvalue of B lies in the open halfplane of the corresponding


eigenvalue of A1/2 (or, equivalently, the same condition holds for C).

Show that log(A) = log(B) + log(C).

Problem 64. Let a, b ∈ R. Find the eigenvalues and eigenvectors of the


4 × 4 matrix
a 0 0 b
 
0 a b 0
M = .
0 b a 0
b 0 0 a

n
Problem 65. Let n ≥ 2. Consider the Hilbert space H = C2 . Let A, B
be nonzero n × n hermitian matrices and In the identity matrix. Consider
the Hamilton operator Ĥ in this Hilbert space

Ĥ = A ⊗ In + In ⊗ B + A ⊗ B

in this Hilbert space, where  ∈ R. Let λ1 , . . . , λn be the eigenvalues of A


and let µ1 , . . . , µn be the eigenvalues of B. Then the eigenvalues of Ĥ are
given by
λj + µk + λj µk .
Let u1 , . . . , un be the eigenvectors of A and v1 , . . . , vn be the eigenvectors
of B. Then the eigenvectors of Ĥ are given by

uj ⊗ vk .

Thus all the eigenvectors of this Hamilton operator are not entangled. Con-
sider now the Hamilton operator

K̂ = A ⊗ In + In ⊗ B + B ⊗ A.

Can we find hermitian matrices B, A such that the eigenvectors of K̂ cannot


be written as a product state, i.e. they are entangled? Note that A ⊗ B
and B ⊗ A have the same eigenvalues with the eigenvectors uj ⊗ vk and
vj ⊗ uk , respectively.

Problem 66. Let A be an m × n matrix over C and B be a s × t matrix


over C. Show that

A ⊗ B = vec−1
ms×nt (LA,s×t (vecs×t (B)))

where
n
X
LA,s×t := (In ⊗ It ⊗ A ⊗ Is ) ej,n ⊗ It ⊗ ej,n ⊗ Is .
j=1
122 Problems and Solutions

Problem 67. Consider the 2×2 hermitian matrices A and B with A ≠ B


with the eigenvalues λ1 , λ2 ; µ1 , µ2 ; and the corresponding normalized eigen-
vectors u1 , u2 ; v1 , v2 , respectively. Form from the normalized eigenvectors
the 2 × 2 matrix  ∗
u1 v1 u∗1 v2

.
u∗2 v1 u∗2 v2
Is this matrix unitary? Find the eigenvalues of this matrix and the corre-
sponding normalized eigenvectors of the 2 × 2 matrix. How are the eigen-
values and eigenvectors linked to the eigenvalues and eigenvectors of A
and B?

Problem 68. Let α, β ∈ R. Are the 4 × 4 matrices

eiα cosh(β) 0 0 sinh β


 
0 e−iα cosh β sinh β 0
U =
 
0 sinh β eiα cosh β 0

−iα
sinh β 0 0 e cosh β

0 eiα cosh β −eiα sinh β 0


 
 −e−iα cosh β 0 0 e−iα sinh β 
V =  iα
e sinh β 0 0 −eiα cosh β

0 −e−iα sinh β e−iα cosh β 0
unitary?

Problem 69. Find the conditions on 1 , 2 , 3 ∈ R such that the 4 × 4


matrix
0 1 0 0
 
1 1 2 3 
A(1 , 2 , 3 ) = 
0 2 −1 0

0 3 0 −1
is invertible.

Problem 70. Given a normal 5 × 5 matrix which provides the character-


istic equation
−λ5 + 4λ3 − 3λ = 0
√ √
with the eigenvalues λ1 = − 3, λ2 = −1, λ3 = 0, λ4 = 1, λ5 = 3 and the
corresponding normalized eigenvectors
√ √  √
3/6 1/2 1/ 3 1/2 3/6
      
 −1/2  −1/2   0√   1/2   1/2
 √   √ 
 
v1 =  1/ 3  , v2 =  0  , v3 =  −1/ 3  , v4 =  0  , v5 =  1/ 3  .
     
−1/2 1/2 0√ −1/2 √1/2
         

3/6 −1/2 1/ 3 −1/2 3/6

Apply the spectral theorem and show that the matrix is given by
0 1 0 0 0
 
X5 1 0 1 0 0
A= λj vj vj∗ =  0 1 0 1 0  .
 
0 0 1 0 1
j=1
 
0 0 0 1 0

Problem 71. Let α ∈ R. Consider the 2 × 2 matrix


 
cos(α) − sin(α)
A(α) = .
− sin(α) − cos(α)
(i) Find the matrices

dA(α)
X= , B(α) = exp(αX).
dα α=0
Compare A(α) and B(α). Discuss.
(ii) All computer algebra programs (except one) provide the correct eigen-
values +1 and −1 and then the corresponding eigenvectors
   
1 1
, .
(1 + cos(α))/ sin(α) −(1 − cos(α))/ sin(α)
Discuss. Then find the correct eigenvectors.

Problem 72. Let A be an m × n matrix and B a p × q matrix. Show


that
A ⊗ B = (A ⊗ Ip )diag(B, B, . . . , B).

Problem 73. Let z ∈ C. Find the eigenvalues and eigenvectors of the


3 × 3 matrix  
0 z z̄
A =  z̄ 0 z  .
z z̄ 0
Discuss the dependence of the eigenvalues on z. The matrix is hermitian.
Thus the eigenvalues must be real and since tr(A) = 0 we have λ1 +λ2 +λ3 =
0. Set z = reiφ .

Problem 74. (i) Let α ∈ R. Find the eigenvalues and eigenvectors of the
symmetric 3 × 3 and 4 × 4 matrices, respectively
α −1 0 0
   
α −1 0
 −1 α −1 0 
A3 (α) =  −1 α −1  , A4 (α) =  .
0 −1 α −1
0 −1 α
0 0 −1 α

Extend to n dimensions.
(ii) Let α ∈ R. Find the eigenvalues and eigenvectors of the symmetric
4 × 4 matrix, respectively

α −1 0 −1
 
 −1 α −1 0 
B4 (α) =  .
0 −1 α −1
−1 0 −1 α

Extend to n dimensions.

Problem 75. Let a, b ∈ R. Consider the 2 × 2 matrix


 
a b
K= .
b a

Find exp(iK). Use the result to find a, b such that


 
0 1
exp(iK) = .
1 0

Problem 76. Let


0 0 0 0
 
   
0 1 0 0 0 0 1 0
S+ = , S− = , X := S+ ⊗S− +S− ⊗S+ =  .
0 0 1 0 0 1 0 0
0 0 0 0

Find R(γ) = exp(γX), where γ ∈ R. Is the matrix unitary?

Problem 77. Let z ∈ C and z ≠ 0.


(i) Do the 2 × 2 matrices
   
z 0 0 z
,
0 z −1 z −1 0

form a group under matrix multiplication?


(ii) Do the 3 × 3 matrices
   
z 0 0 0 0 z
0 1 0 ,  0 1 0
−1
0 0 z z −1 0 0

form a group under matrix multiplication?


Miscellaneous 125

Problem 78. The (1 + 1) Poincaré Lie algebra iso(1, 1) is generated by


one boost generator K and the translation generators along the light-cone
P+ and P− . The commutators are
[K, P+ ] = 2P+ , [K, P− ] = −2P− , [P+ , P− ] = 0.
Can one find 2 × 2 matrices K, P+ , P− which satisfy these commutation
relations?

Problem 79. Let  ∈ R. Is the matrix


 
1− 1+
T () =
−(1 + ) 1 − 
invertible for all ?

Problem 80. Show that the 2n × 2n matrix


 
1 In In
U=√
2 iIn −iIn
invertible. Find the inverse.

Problem 81. Consider the map f : C2 7→ R3


 
  sin(2θ) cos(φ)
cos(θ)
7→  sin(2θ) sin(φ)  .
eiφ sin(θ)
cos(2θ)
(i) Consider the map for the special case θ = 0, φ = 0.
(ii) Consider the map for the special case θ = π/4, φ = π/4.

Problem 82. Let A, B be n × n hermitian matrices. Show that A ⊗ In +


In ⊗ B is also a hermitian matrix. Apply
(A⊗In +In ⊗B)∗ = (A⊗In )∗ +(In ⊗B)∗ = A∗ ⊗In∗ +In∗ ⊗B ∗ = A⊗In +In ⊗B.

Problem 83. (i) Find the eigenvalues of the 4 × 4 matrix


0 0 0 a14
 
 1 0 0 a24 
A= .
0 1 0 a34
0 0 1 a44
(ii) Find the eigenvalues and eigenvectors of the 4 × 4 matrices
0 a12 a13 a14 0 a12 a13 a14
   
 −a12 0 a23 a24   −a12 0 a23 a24 
,  .
a13 a23 0 a34 −a13 −a23 0 a34

a14 a24 −a34 0 −a14 −a24 −a34 0
126 Problems and Solutions

Problem 84. A classical 3 × 3 matrix representation of the algebra


iso(1, 1) is given by
     
0 0 0 0 0 0 0 0 0
K =  0 0 −2  , P+ =  1 0 0, P− =  1 0 0.
0 −2 0 −1 0 0 1 0 0

Find the commutators and anticommutators.

Problem 85.

Problem 86. (i) Let A be an invertible n × n matrix over C. Assume we


know the eigenvalues and eigenvectors of A. What can be said about the
eigenvalues and eigenvectors of A + A−1 ?
(ii) Apply the result from (i) to the 5 × 5 permutation matrix

0 0 0 0 1
 
1 0 0 0 0
A = 0 1 0 0 0.
 
0 0 1 0 0
 
0 0 0 1 0

Problem 87. Let n ≥ 2 and j, k = 1, . . . , n. Let Ejk be the n × n


elementary matrices with 1 at the position jk and 0 otherwise. Find the
eigenvalues of
Xn
T = (Ejk ⊗ Ekj − Ekj ⊗ Ejk ).
k<j,j=1

Problem 88. Consider the 2 × 2 matrices


   
1 1 1 0
J= , K= .
1 1 0 0

Using the Kronecker product we form the 16 × 16 matrices


1 1
U1 = √ (J ⊗ I2 ⊗ I2 ⊗ I2 ), U2 = √ (I2 ⊗ J ⊗ I2 ⊗ I2 ),
2 2
1 1
U3 = √ (I2 ⊗ I2 ⊗ J ⊗ I2 ), U4 = √ (I2 ⊗ I2 ⊗ I2 ⊗ J)
2 2
and
√ √ √
V12 = 2(K⊗K⊗I2 ⊗I2 ), V13 = 2(K⊗I2 ⊗K⊗I2 ), V14 = 2(K⊗I2 ⊗I2 ⊗K),
√ √ √
V23 = 2(I2 ⊗K⊗K⊗I2 ), V24 = 2(I2 ⊗K⊗I2 ⊗K), V34 = 2(I2 ⊗I2 ⊗K⊗K).
Find the 16×16 matrices Uj Vk` Uj and Vk` Uj Vk` for j = 1, 2, 3, 4, k = 1, 2, 3
and ` > k. Find all the commutators between the 16 × 16 matrices.

Problem 89. Let a, b ∈ R and a ≠ 0. Show that


 0        0
x a b x x 1/a −b/a x
= ⇔ = .
1 0 1 1 1 0 1 1

Problem 90. Let a, b, c ∈ R3 . Show that a·(b×c) ≡ b·(c×a) ≡ c·(a×b),


where · denotes the scalar product and × the vector product.

Problem 91. Let A be an n × n matrix over C. Let u, v ∈ Cn considered


as column vectors. Show that v∗ Au = u∗ A∗ v.

Problem 92. Let A be an n × n matrix over C and x ∈ Cn . Show that


1
<(x∗ Ax) ≡ x∗ (A + A∗ )x
2
where < denotes the real part of a complex number.

Problem 93. Let ω be the solutions of the quadratic equation ω 2 +ω+1 =


0. Consider the normal and invertible matrix M
   
0 1 0 0 0 1
M (ω) =  0 0 ω  ⇒ M −1 (ω) =  1 0 0  .
1 0 0 0 ω̄ 0
Let Ejk (j, k = 1, 2, 3) be the nine 3 × 3 elementary matrices. Calculate the
9 × 9 matrix
X3
T = (Ejk ⊗ M nj −nk )
j,k=1

where n1 − n2 = 2, n1 − n3 = 1, n2 − n3 = −1 and M 0 = I3 .

Problem 94. Find the 6 × 6 matrix


   
  0 1 0   0 0 1  
a a12 a a14 a a16
I3 ⊗ 11 + 0 0 1 ⊗ 13 + 1 0 0 ⊗ 15 .
a16 a11 a12 a13 a14 a15
1 0 0 0 1 0

Problem 95. Find all 4 × 4 matrices Y such that


       
0 1 0 1 0 1 0 1
Y ⊗ = ⊗ Y.
1 0 1 0 1 0 1 0
128 Problems and Solutions

Problem 96. Let M be a 4 × 4 matrix over R. Can M be written as

M =A⊗B+C ⊗D

where A, B, C, D are 2 × 2 matrices over R? Note that

tr(M ) = tr(A)tr(B) + tr(C)tr(D).

Problem 97. (i) Let x ∈ R. Find the determinant and the inverse of the
matrix  x 
e cos(x) ex sin(x)
.
−e−x sin(x) e−x cos(x)
(ii) Let α ∈ R. Find the determinant of the matrices
   
cos(α) sin(α) cos(α) i sin(α)
A(α) = , B(α) = .
− sin(α) cos(α) i sin(α) cos(α)

(iii) Let α ∈ R. Find the determinant of the matrices


   
cosh(α) sinh(α) cosh(α) i sinh(α)
C(α) = , D(α) = .
sinh(α) cosh(α) −i sinh(α) cosh(α)

Problem 98. Consider a symmetric matrix over R. We impose the


following conditions. The diagonal elements are all zero. The non-diagonal
elements can only be +1 or −1. Show that such a matrix can only have
integer values as eigenvalues. An example would be

0 1 1 −1
 
 1 0 1 −1 
1 1 0 −1
 
−1 −1 −1 0

with eigenvalues 3 and −1 (three times).

Problem 99. Let A, B be real symmetric and block tridiagonal 4 × 4


matrices
a11 a12 0 0 b11 b12 0 0
   
a a a 0  b12 b22 b23 0 
A =  12 22 23
, B =  .
 
0 a23 a33 a34 0 b23 b33 b34
0 0 a34 a44 0 0 b34 b44

Assume that B is positive definite. Solve the eigenvalue problem Av =


λBv.
Miscellaneous 129

Problem 100. Find the determinant and eigenvalues of the matrices


0 0 0 a14
   
  0 0 a13
0 a12  1 0 0 a24 
A2 = , A3 =  1 0 a23  , A4 =  .
1 a22 0 1 0 a34
0 1 a33
0 0 1 a44

Extend to the n-dimensional case. Note that det(A2 ) = −a12 , det(A3 ) =


a13 , det(A4 ) = −a14 . For An we find det(An ) = (−1)n+1 a1n .

Problem 101. Find the eigenvalues of the nonnormal matrices


1 0 0 1
   
  1 0 1
1 1 0 2 2 0
A2 = , A3 =  0 2 0  , A4 =  .

2 2 0 3 3 0
3 0 3
4 0 0 4
Extend to the n×n case. Owing to the structure of the matrices 0 is always
an eigenvalue and the multiplicity depends on n. For A2 we find λ = 0 and
λ = 3.

Problem 102. Let λ, u be an eigenvalue and normalized eigenvector of


the n × n matrix A, respectively. Let µ, v be an eigenvalue and normalized
eigenvector of the n × n matrix B, respectively. Find an eigenvalue and
normalized eigenvector of A ⊗ In ⊗ B.

Problem 103. Let A, B be n×n matrices over C. Consider the eigenvalue


equations Au = λu, Bv = µv. Show that (identity)

(A ⊗ B − λIn ⊗ µIn )(u ⊗ v) = ((A − λIn ) ⊗ B + A ⊗ (B − µIn ))(u ⊗ v).

Problem 104. Show that the condition on a11 , a12 , b11 , b12 such that
a11 0 0 a12 1 1
    
 0 b11 b12 0   1  1
  = λ 
0 b12 b11 0 1 1

a12 0 0 a11 1 1

(i.e. we have an eigenvalue equation of the matrix) is given by λ = a11 +


a12 = b11 + b12 .

Problem 105. Let A, B, C, D n × n matrices over C. Assume that


[A, C] = 0n and [B, D] = 0n . Find the commutators

[A ⊗ In , C ⊗ D], [In ⊗ B, C ⊗ D], [A ⊗ B, C ⊗ D].


130 Problems and Solutions

Problem 106. Let A, B be n × n matrices. What is the condition on A and


B so that the commutator

[A ⊗ In + In ⊗ B + A ⊗ B, B ⊗ In + In ⊗ A + B ⊗ A]

vanishes?

Problem 107. Given the 4 × 4 matrices


0 1 0 0 0 0 0 0
   
0 0 0 0 0 0 1 0
A= , B=
0 0 0 1 0 0 0 0

0 0 0 0 α 0 0 0

where α ∈ R. Show that A2 = 04 , B 2 = 04 and [A, B]+ = (A + B)2 .

Problem 108. Let A be a nonnormal matrix. Show that tr(AA∗ ) =


tr(A∗ A). Show that det(AA∗ ) = det(A∗ A).

Problem 109. Let A, B be n × n matrices over C. Calculate the com-


mutators of the three matrices

A ⊗ B ⊗ In , In ⊗ A ⊗ B, A ⊗ In ⊗ B.

Assume that A, B satisfy [A, B] = A, i.e. A, B form a basis of a noncom-


mutative Lie algebra. Discuss the commutators found above from a Lie
algebra point of view.

Problem 110. Let j, k ∈ {1, 2, 3} and Ejk be the (nine) elementary


matrices. Find the eigenvalues and eigenvectors of the 9 × 9 matrix
3 X
X 3
Q= (Ejk ⊗ Ekj ).
j=1 k=1

Does Q satisfy the braid-like relation

(I2 ⊗ Q)(Q ⊗ I2 )(I2 ⊗ Q) = (Q ⊗ I2 )(I2 ⊗ Q)(Q ⊗ I2 ) ?

Problem 111. All real symmetric matrices are diagonalizable. Show


that not all complex symmetric matrices are diagonalizable.

Problem 112. Find all 2 × 2 matrices T1 , T2 such that

T1 = [T2 , [T2 , T1 ]], T2 = [T1 , [T1 , T2 ]].


Miscellaneous 131

Problem 113. Let α ∈ R.


(i) Consider the 2 × 2 matrix

e−α
 

A(α) = .
e−α eα

Calculate dA(α)/dα, X = dA(α)/dα|α=0 and B(α) = eαX . Compare A(α)


and B(α). Discuss.
(ii) Consider the 2 × 2 matrix

e−α
 α 
e
C(α) = .
−eα e−α

Calculate dC(α)/dα, Y = dC(α)/dα|α=0 and D(α) = eαY . Compare C(α)


and D(α). Discuss.

Problem 114. Find all nonzero 2 × 2 matrices H, A such that

[H, A] = A.

Then find [H ⊗ H, A ⊗ A].

Problem 115. Consider the 3 × 3 real symmetric matric A and the


normalized vector v
   
0 1 1 1
1
A = 1 0 1, v = √ 1.
1 1 0 3 1

Find
µ1 = vT Av, µ2 = vT A2 v, µ3 = vT A3 v.
Can the matrix A be uniquely reconstructed from µ1 , µ2 , µ3 ? It can be
assumed that A is real symmetric.

Problem 116. Consider the 3 × 3 real symmetric matrix A and the


orthonormal basis {v1 , v2 , v3 } in R3
       
0 1 1 1 0 1
1 1
A =  1 0 1  , v1 = √  0  , v2 =  1  , v3 = √  0  .
1 1 0 2 1 0 2 −1

Calculate
µ = v1∗ Av2 + v2∗ Av3 + v3∗ Av1 .
Discuss.
132 Problems and Solutions

Problem 117. Let Ejk (j, k = 1, . . . , n) be the n×n elementary matrices,


i.e. Ejk is the matrix with 1 at the j-th row and the k-th column and 0
otherwise. Let n = 3. Find E12 E23 E31 . Find E12 ⊗ E23 ⊗ E31 .

Problem 118. Let f ∈ L2 (R). The Fourier transform is given by

Problem 119. Consider the 5 × 5 matrix


1 1 1 1 1
 
2 2 2 2 2
A = 3 3 3 3 3.
 
4 4 4 4 4
 
5 5 5 5 5
Find the norm
\|A\| = \max_{\|x\|=1} \|Ax\|.

(i) Apply the Lagrange multiplier method.


(ii) Calculate AAT and then find the square root of the largest eigenvalue
of AAT . This is then the norm of A.
(iii) Is the matrix A normal? Find the rank of A and AAT .

Problem 120. Find all 2×2 invertible matrices S over R with det(S) = 1
such that
S \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix} S \quad \Longleftrightarrow \quad S \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix} S^{-1} = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}.
Thus we have to solve the three equations

s21 = 0, s11 + s12 = s22 , s11 s22 = 1.

Problem 121. Consider the standard basis in C6


1 0 0
     
0 1 0
0 0 0
     
e0 =   , e1 =   , . . . e5 =   .
0 0 0
0 0 0
     
0 0 1
Can one find a unitary matrix U such that

U e0 = e1 , U e1 = e2 , ... U e4 = e5 , U e5 = e0 ?

If so find the inverse of U .

Problem 122. (i) Do the three vectors


     
1 1 1
1  1  1  
v1 = √ −1  , v2 = √ 1 , v3 = √ 1
2 0 6 −2 3 1

form an orthonormal basis in R3 ?


(ii) Is the 3 × 3 matrix
(v1 v2 v3 )

an orthonormal matrix?

Problem 123. Let A be an n × n matrix over C and B be an m × m


matrix over C. Show that

kA ⊗ BkF = kAkF · kBkF

where k.kF denotes the Frobenius norm.

Problem 124. Let Ejk be the elementary matrices with 1 at position


(entry) (j, k) and 0 otherwise. Let n = 3. Find E12 E23 E31 .

Problem 125. Let A, B be 2 × 2 matrices with det(A) = 1, det(B) = 1.


Let ? be the star product. Show that

det(A ? B) = 1.

Problem 126. Let A, B be 2 × 2 hermitian matrices over C. Assume


that
tr(A) = tr(B), tr(A2 ) = tr(B 2 ).

Are the eigenvalues of A and B the same?

Problem 127. Let A, B be n × n matrices. Show that


Z ∞ Z ∞ Z ∞
A+B α1 A
e = dα1 e δ(1 − α1 ) + dα1 dα2 eα1 A Beα2 A δ(1 − α1 − α2 )
0 0 0
Z ∞Z ∞Z ∞
+ eα1 A Beα2 A Beα3 A δ(1 − α1 − α2 − α3 ) + · · ·
0 0 0
134 Problems and Solutions

Problem 128. Consider a vector a in C4 and the corresponding 2 × 2


matrix A via the vec−1 operator

a1

 
a  a1 a3
a= 2 ⇒
a3 a2 a4
a4

and analogously
b1
 
 
 b2  b1 b3
b=  ⇒ .
b3 b2 b4
b4
Show that
a∗ b = tr(A∗ B).

Problem 129. Find n × n matrices A, B such that

k[A, B] − In k → min

where k.k denotes the norm and [, ] denotes the commutator.

Problem 130. Let

D = {(x1 , x2 , x3 ) : x21 + x22 + x23 = 1, x1 ≥ 0, x2 ≥ 0, x3 ≥ 0}.

Let v be a normalized vector in R3 with nonnegative entries and A be a


3 × 3 matrix over R with strictly positive entries. Show that the map f : D → D,
f(v) = \frac{Av}{\|Av\|},
has a fixed point, i.e. there is a v_0 such that
\frac{A v_0}{\|A v_0\|} = v_0.

Problem 131. Let A be an n × n matrix over R. The matrix A is called


symmetric if A = AT . Let B be an n × n matrix over R. If B is symmetric
about the northeast-to-southwest diagonal then B is called persymmetric.
Let J be the n × n counter identity matrix. Note that J T = J and J 2 = In .
Then the persymmetry condition can be written as
J B J = B^T.

(i) Show that the power Ak of a symmetric persymmetric matrix over R is


again symmetric persymmetric.
(ii) Show that the Kronecker product of two symmetric persymmetric ma-
trices X and Y is again symmetric persymmetric.

Problem 132. Find all 2 × 2 matrices A, B that satisfy

ABA = BAB and A ⊗ B ⊗ A = B ⊗ A ⊗ B.

Problem 133. Find all 2 × 2 matrices S over R such that SS T = I2 .

Problem 134. Show that the group S4 has five inequivalent irreducible
representations, namely two 1-dimensional representations, one 2-dimensional
representation and two 3-dimensional representations.

Problem 135. Let Rij denote the generators of an SO(n) rotation in the
xi − xj plane of the n-dimensional Euclidean space. Give an n-dimensional
matrix representation of these generators and use it to derive the Lie algebra
so(n) of the compact Lie group SO(n).

Problem 136. Consider the vectors in R2 and R3 , respectively


 
  u1
v1
v= , u =  u2  .
v2
u3

Find the conditions such that

u ⊗ v = v ⊗ u.

Find solutions to these conditions.

Problem 137. Consider the 4 × 4 matrices


   
0 0 T 0 1
X= ⊗ I2 ⇒ X = ⊗ I2
1 0 0 0
       
1 0 0 0 1 0 0 1
Y = ⊗ ⇒ YT = ⊗ .
0 −1 1 0 0 −1 0 0
Find the anti-commutators

[X, X]+ , [Y, Y ]+ , [X, X T ]+ , [Y, Y T ]+ , [X, Y ]+ , [X, Y T ]+ .


136 Problems and Solutions

Problem 138. Show that the square roots of the 2 × 2 unit matrix I2
are given by I2 and
     
1 0 −1 0 −1 0
S S −1 , S S −1 , S S −1
0 −1 0 1 0 −1

where S is an arbitrary invertible matrix.

Problem 139. Let S1 , S2 , S3 be the spin- 12 matrices. Show that

[S1 ⊗ S2 , S2 ⊗ S3 ] = 04 , [S2 ⊗ S3 , S3 ⊗ S1 ] = 04 , [S3 ⊗ S1 , S1 ⊗ S2 ] = 04 .


Bibliography

Aldous J. M. and Wilson R. J.


Graphs and Applications: An Introductory Approach
Springer Verlag (2000)

Bronson, R.
Matrix Operations
Schaum’s Outlines, McGraw-Hill (1989)

Fuhrmann, P. A.
A Polynomial Approach to Linear Algebra
Springer-Verlag, New York (1996)

Golub, G. H. and Van Loan C. F.


Matrix Computations, Third Edition,
Johns Hopkins University Press (1996)

Grossman S. I.
Elementary Linear Algebra, Third Edition
Wadsworth Publishing, Belmont (1987)

Horn R. A. and Johnson C. R.


Topics in Matrix Analysis
Cambridge University Press (1999)

Johnson D. L.
Presentation of Groups
Cambridge University Press (1976)

Kedlaya K. S., Poonen B. and Vakil R.


The William Lowell Putnam Mathematical Competition 1985–2000,
The Mathematical Association of America (2002)

138
Bibliography 139

Lang S.
Linear Algebra
Addison-Wesley, Reading (1968)

Miller W.
Symmetry Groups and Their Applications
Academic Press, New York (1972)

Schneider H. and Barker G. P.


Matrices and Linear Algebra,
Dover Publications, New York (1989)

Steeb W.-H.
Matrix Calculus and Kronecker Product with Applications and C++ Pro-
grams
World Scientific Publishing, Singapore (1997)

Steeb W.-H.
Continuous Symmetries, Lie Algebras, Differential Equations and Com-
puter Algebra
World Scientific Publishing, Singapore (1996)

Steeb W.-H.
Hilbert Spaces, Wavelets, Generalized Functions and Quantum Mechanics
Kluwer Academic Publishers, Dordrecht (1998)

Steeb W.-H.
Problems and Solutions in Theoretical and Mathematical Physics,
Second Edition, Volume I: Introductory Level
World Scientific Publishing, Singapore (2003)

Steeb W.-H.
Problems and Solutions in Theoretical and Mathematical Physics,
Second Edition, Volume II: Advanced Level
World Scientific Publishing, Singapore (2003)

Steeb W.-H., Hardy Y., Hardy A. and Stoop R.


Problems and Solutions in Scientific Computing with C++ and Java Sim-
ulations
World Scientific Publishing, Singapore (2004)
140 Bibliography

Van Loan, C. F.
Introduction to Scientific Computing: A Matrix-Vector Approach Using
MATLAB,
Second Edition
Prentice Hall (1999)

Wybourne B. G.
Classical Groups for Physicists
John Wiley, New York (1974)
Index

iso(1, 1), 126 Dirac matrices, 65


Direct sum, 4, 67, 68
Abelian group, 82 Disentangled form, 54
Adjoint representation, 88 Doolittle’s method, 46
Double angle formula, 106
Baker-Campbell-Hausdorff relation,
56 Elementary matrices, 130
Bell basis, 8 Entire function, 30
Bell matrix, 109, 115 Equivalence relation, 16
Bijection, 21, 79 Euler angles, 78, 82
Binary Hadamard matrix, 8 Exterior product, 85
Brute force method, 104
Fibonacci numbers, 10
Cartan matrix, 5 Field of values, 6
Casimir operator, 90 Fixed points, 61
Cayley-Hamilton theorem, 30 Flip operator, 69
Characteristic exponents, 61 Floquet theory, 61
Choleski’s method, 46 Fréchet derivative, 58
Circulant matrix, 20, 24, 32
Class function, 98 Gaussian decomposition, 50
Collocation polynomial, 103 Gers̆gorin disk theorem, 29
Column rank, 4, 5 Gram-Schmidt orthonormalization pro-
Commutative group, 82 cess, 72
Commutator, 79
Comparison method, 54 Hadamard matrix, 1, 8, 34
Completeness relation, 58 Hadamard product, 95, 96
Condition number, 74, 103 Hamming distance, 9
Conformal transformation, 107 Heisenberg group, 78
Coset decomposition, 76 Hilbert-Schmidt norm, 65
Cosine-sine decomposition, 48 Householder matrix, 9
Cross-ratio, 107
Crout’s method, 46 Ideal, 89
Curve fitting problem, 14 Idempotent, 4, 27
Involutory, 5, 27
Denman-Beavers iteration, 35, 102 Irreducible, 20
Diophatine equation, 26 Isomorphism, 77

141
142 Index

Iwasawa decomposition, 50, 84 Rank, 4


Rayleigh quotient, 31, 102
Jacobi identity, 9 Reducible, 20
Jacobi method, 100 Residual vector, 14
Resolvent identity, 34
Kernel, 15 Roots, 90
Killing form, 88 Rotational matrix, 23
Row rank, 4, 5
Lagrange multiplier method, 15, 71, Rule of Sarrus, 23
97
Left-translation, 85 Scalar product, 19, 63, 65
Linearized equation, 61 Schur decomposition, 49
Selection matrix, 7, 96
MacLaurin series, 53 Semisimple, 90
Maximum principle, 31 Similar, 11
Moore-Penrose pseudoinverse matrix, Similar matrices, 19
8 Simple, 89, 90
Motion of a charge, 60 Singular value decomposition, 47
Skew-hermitian, 2, 88
Nilpotent, 4, 27 Spectral norm, 72
Norm, 28, 65, 71 Spectral radius, 28, 95, 100, 101
Normal, 2, 30, 35 Spin quantum number, 9
Normal matrix, 20 Splitting, 100
Normalized, 2 Square root, 54, 102
Nullity, 15 Stochastic matrix, 6, 32, 67
Numerical range, 6 Structure constants, 89
Symplectic group, 84
Octonion algebra, 83
Orthogonal, 24 Taylor expansion, 106
Technique of parameter differentia-
Padé approximation, 101
tion, 54
Pauli spin matrices, 2, 3, 68
Toeplitz-Hausdorff convexity theorem,
Permutation matrices, 77
6
Permutation matrix, 4
Topological group, 79
Polar decomposition theorem, 46
Transition matrix, 32
Positive semidefinite, 34
Power method, 105 Undisentangled form, 54
Primary permutation matrix, 20 Unitary matrix, 2
Principal logarithm, 56, 57, 120
Principal square root, 57, 120 vec operation, 60
Probability vector, 6 Vector product, 24
Projection matrix, 9 Vierergruppe, 78
Putzer method, 55
Wedge product, 70
Quotient space, 16 Weyl’s formula, 98
Index 143

Zassenhaus formula, 54
