Problems
in
Matrix Calculus
by
Willi-Hans Steeb
International School for Scientific Computing
at
University of Johannesburg, South Africa
Preface
The manuscript supplies a collection of problems in introductory and advanced matrix calculus.
Prescribed book:
“Problems and Solutions in Introductory and Advanced Matrix Calculus”,
2nd edition
by
Willi-Hans Steeb and Yorick Hardy
World Scientific Publishing, Singapore 2016
Contents
Notation
1 Basic Operations
2 Linear Equations
6 Decomposition of Matrices
7 Functions of Matrices
9 Kronecker Product
14 Hadamard Product
15 Differentiation
16 Integration
18 Miscellaneous
Bibliography
Index
Notation
:= is defined as
∈ belongs to (a set)
∉ does not belong to (a set)
∩ intersection of sets
∪ union of sets
∅ empty set
N set of natural numbers
Z set of integers
Q set of rational numbers
R set of real numbers
R+ set of nonnegative real numbers
C set of complex numbers
Rn n-dimensional Euclidean space
space of column vectors with n real components
Cn n-dimensional complex linear space
space of column vectors with n complex components
H Hilbert space
i √−1
ℜz real part of the complex number z
ℑz imaginary part of the complex number z
|z| modulus of complex number z
|x + iy| = (x^2 + y^2)^{1/2}, x, y ∈ R
T ⊂S subset T of set S
S∩T the intersection of the sets S and T
S∪T the union of the sets S and T
f (S) image of set S under mapping f
f ◦g composition of two mappings (f ◦ g)(x) = f (g(x))
x column vector in Cn
xT transpose of x (row vector)
0 zero (column) vector
‖·‖ norm
x · y ≡ x∗ y scalar product (inner product) in Cn
x×y vector product in R3
A, B, C m × n matrices
det(A) determinant of a square matrix A
tr(A) trace of a square matrix A
rank(A) rank of matrix A
AT transpose of matrix A
Ā conjugate of matrix A
A∗ conjugate transpose of matrix A
A† conjugate transpose of matrix A
(notation used in physics)
A−1 inverse of square matrix A (if it exists)
In n × n unit matrix
I unit operator
0n n × n zero matrix
AB matrix product of m × n matrix A
and n × p matrix B
A•B Hadamard product (entry-wise product)
of m × n matrices A and B
[A, B] := AB − BA commutator for square matrices A and B
[A, B]+ := AB + BA anticommutator for square matrices A and B
A⊗B Kronecker product of matrices A and B
A⊕B Direct sum of matrices A and B
δjk Kronecker delta with δjk = 1 for j = k
and δjk = 0 for j ≠ k
λ eigenvalue
ε real parameter
t time variable
Ĥ Hamilton operator
The Pauli spin matrices are used extensively in the book. They are given
by
σ_x := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad σ_y := \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad σ_z := \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
In some cases we will also use σ1 , σ2 and σ3 to denote σx , σy and σz .
Chapter 1
Basic Operations
A := \frac{x x^T}{x^T x}
where T denotes the transpose, i.e. x^T is a row vector. Calculate A^2.
H = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 \\
1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1
\end{pmatrix}.
where
x = \begin{pmatrix} \cos(θ) \\ \sin(θ) \end{pmatrix}, \quad y = \begin{pmatrix} \sin(θ) \\ -\cos(θ) \end{pmatrix}
and θ ∈ R. Find x^T x, y^T y, x^T y, y^T x. Find the matrix A.
Problem 15. Consider the vector space R4 . Find all pairwise orthogonal
vectors (column vectors) x1 , . . . , xp , where the entries of the column vectors
can only be +1 or −1. Calculate the matrix
\sum_{j=1}^{p} x_j x_j^T
D := A−1 BA.
Calculate Dn , where n = 2, 3, . . ..
Problem 34. The numerical range, also known as the field of values, of
an n × n matrix A over the complex numbers, is defined as
F(A) := { z^* A z : ‖z‖ = 1, z ∈ C^n }.
Find the numerical range for the 2 × 2 matrix
B = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.
Find the numerical range for the 2 × 2 matrix
C = \begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}.
The Toeplitz-Hausdorff convexity theorem tells us that the numerical range
of a square matrix is a convex compact subset of the complex plane.
Let α ∈ R and
A = \begin{pmatrix}
α & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & α & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & α & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & α & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & α & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & α & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & α & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & α & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & α
\end{pmatrix}.
(i) Show that the set F (A) lies on the real axis.
(ii) Show that
|z∗ Az| ≤ α + 16.
T := In − 2xxT .
Calculate T 2 .
P ∗ = P, P 2 = P.
a × b = S(a)b.
a × (b × c) + c × (a × b) + b × (c × a) = 0
Problem 47. The Fibonacci numbers are defined by the recurrence rela-
tion (linear difference equation of second order with constant coefficients)
s_{n+2} = s_{n+1} + s_n
where n = 0, 1, . . . and s0 = 0, s1 = 1. Write this recurrence relation in
matrix form. Find s6 , s5 , and s4 .
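The recurrence can also be iterated numerically. The C++ sketch below is an added illustration: it uses the companion matrix M = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} acting on the pair (s_{n+1}, s_n)^T as one possible matrix form and prints s_4, s_5, s_6.

// Sketch: iterate the Fibonacci recurrence in matrix form.
// Assumption: the companion matrix M = [[1,1],[1,0]] acts on (s_{n+1}, s_n)^T.
#include <iostream>

int main() {
    long long M[2][2] = {{1, 1}, {1, 0}};   // companion matrix of s_{n+2} = s_{n+1} + s_n
    long long v[2] = {1, 0};                // (s_1, s_0)
    for (int n = 0; n <= 6; ++n) {
        if (n >= 4) std::cout << "s_" << n << " = " << v[1] << '\n';
        long long w0 = M[0][0] * v[0] + M[0][1] * v[1];   // s_{n+2}
        long long w1 = M[1][0] * v[0] + M[1][1] * v[1];   // s_{n+1}
        v[0] = w0; v[1] = w1;
    }
    return 0;
}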
Chapter 2
Linear Equations
Problem 1. Let
A = \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}, \quad b = \begin{pmatrix} 1 \\ 5 \end{pmatrix}.
Problem 2. Let
A = \begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix}, \quad b = \begin{pmatrix} 3 \\ α \end{pmatrix}
j 0 1 2 3 4
tj −1.0 −0.5 0.0 0.5 1.0
yj 1.0 0.5 0.0 0.5 2.0
p(t) = a_2 t^2 + a_1 t + a_0
A = \begin{pmatrix}
1 & 1 \\ 1 & 2 \\ 1 & 3 \\ 1 & 4 \\ 1 & 5 \\ 1 & 6 \\ 1 & 7 \\ 1 & 8 \\ 1 & 9 \\ 1 & 10
\end{pmatrix}, \quad
x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad
b = \begin{pmatrix}
444 \\ 458 \\ 478 \\ 493 \\ 506 \\ 516 \\ 523 \\ 531 \\ 543 \\ 571
\end{pmatrix}.
Solve this linear system in the least squares sense (see previous problem)
by the normal equations method.
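As an added, hedged illustration of the normal equations method, the following C++ sketch forms A^T A and A^T b for the 10 × 2 system above and solves the resulting 2 × 2 system by Cramer's rule; it is not part of the original problem set.

// Sketch: least squares via the normal equations A^T A x = A^T b
// for the 10x2 system above; the 2x2 normal system is solved by Cramer's rule.
#include <iostream>

int main() {
    const int m = 10;
    double A[m][2], b[m] = {444, 458, 478, 493, 506, 516, 523, 531, 543, 571};
    for (int i = 0; i < m; ++i) { A[i][0] = 1.0; A[i][1] = i + 1.0; }

    double AtA[2][2] = {{0, 0}, {0, 0}}, Atb[2] = {0, 0};
    for (int i = 0; i < m; ++i)
        for (int r = 0; r < 2; ++r) {
            Atb[r] += A[i][r] * b[i];
            for (int c = 0; c < 2; ++c) AtA[r][c] += A[i][r] * A[i][c];
        }
    double det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0];
    double x1 = (Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det;
    double x2 = (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det;
    std::cout << "x1 = " << x1 << ", x2 = " << x2 << '\n';
    return 0;
}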
Hx = y
Problem 10. Show that solving the system of nonlinear equations with
the unknowns x1 , x2 , x3 , x4
NA := { x ∈ Rn : Ax = 0 }.
ν(A) := dim(NA )
is called the nullity of A. If NA only contains the zero vector, then ν(A) = 0.
(i) Let
A = \begin{pmatrix} 1 & 2 & -1 \\ 2 & -1 & 3 \end{pmatrix}.
Find NA and ν(A).
(ii) Let
A = \begin{pmatrix} 2 & -1 & 3 \\ 4 & -2 & 6 \\ -6 & 3 & -9 \end{pmatrix}.
Find NA and ν(A).
and
a_{μν} := \int_0^1 β_μ(y) α_ν(y)\,dy, \quad b_μ := \int_0^1 β_μ(y) f(y)\,dy
where µ, ν = 1, 2. Show that the integral equation can be cast into a system
of linear equations for B1 and B2 . Solve this system of linear equations and
thus find a solution of the integral equation.
Chapter 3
Determinants and Traces
A = \begin{pmatrix}
3 & 1 & 1 & 1 & \cdots & 1 \\
1 & 4 & 1 & 1 & \cdots & 1 \\
1 & 1 & 5 & 1 & \cdots & 1 \\
1 & 1 & 1 & 6 & \cdots & 1 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
1 & 1 & 1 & 1 & \cdots & n+1
\end{pmatrix}.
unitary?
(ii) What is the determinant of U?
A = XBX −1
then A and B are said to be similar matrices. Show that the spectra
(eigenvalues) of two similar matrices are equal.
U := \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
1 & 0 & 0 & \cdots & 0
\end{pmatrix}
V := \begin{pmatrix}
1 & 0 & 0 & \cdots & 0 \\
0 & ζ & 0 & \cdots & 0 \\
0 & 0 & ζ^2 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & ζ^{n-1}
\end{pmatrix}
{ U^j V^k : j, k = 0, 1, 2, . . . , n − 1 }
provide a basis in the Hilbert space for all n × n matrices with the scalar
product
⟨A, B⟩ := \frac{1}{n} tr(AB^*)
for n × n matrices A and B. Write down the basis for n = 2.
is irreducible.
Show that
det(M ) = det(AD − BC). (1)
We know that
\det\begin{pmatrix} U & 0_n \\ X & Y \end{pmatrix} = \det(U)\det(Y)   (2)
and
\det\begin{pmatrix} U & V \\ 0_n & Y \end{pmatrix} = \det(U)\det(Y)   (3)
where U , V , X, Y are n × n matrices and 0n is the n × n zero matrix.
Use this identity to calculate the determinant of the left-hand side using
the right-hand side, where
A = \begin{pmatrix} 2 & 3 \\ 1 & 7 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 2 \\ 4 & 6 \end{pmatrix}.
Show that
det(M ) = det(AD − BD−1 CD). (1)
\det(A) = \frac{(x_1 - x_2)(y_1 - y_2)}{(x_1 + y_1)(x_1 + y_2)(x_2 + y_1)(x_2 + y_2)}.
Problem 25. For a 3×3 matrix we can use the rule of Sarrus to calculate
the determinant (for higher dimensions there is no such thing). Let
\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.
Write the first two columns again to the right of the matrix to obtain
\begin{array}{ccc|cc}
a_{11} & a_{12} & a_{13} & a_{11} & a_{12} \\
a_{21} & a_{22} & a_{23} & a_{21} & a_{22} \\
a_{31} & a_{32} & a_{33} & a_{31} & a_{32}
\end{array}
Now look at the diagonals. The products along the diagonals sloping down to the right carry a plus sign; the products along the diagonals sloping up to the right carry a minus sign. This leads to the determinant
\det(A) = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{31}a_{22}a_{13} - a_{32}a_{23}a_{11} - a_{33}a_{21}a_{12}.
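A small C++ sketch of the rule of Sarrus, added here as an illustration; the 3 × 3 matrix used is an arbitrary choice, not one from the text.

// Sketch: determinant of a 3x3 matrix by the rule of Sarrus.
#include <iostream>

double sarrus(const double a[3][3]) {
    // products along diagonals sloping down to the right (plus sign)
    double plus  = a[0][0]*a[1][1]*a[2][2] + a[0][1]*a[1][2]*a[2][0] + a[0][2]*a[1][0]*a[2][1];
    // products along diagonals sloping up to the right (minus sign)
    double minus = a[2][0]*a[1][1]*a[0][2] + a[2][1]*a[1][2]*a[0][0] + a[2][2]*a[1][0]*a[0][1];
    return plus - minus;
}

int main() {
    double a[3][3] = {{1, 2, 3}, {4, 5, 6}, {7, 8, 10}};   // illustrative matrix
    std::cout << "det = " << sarrus(a) << '\n';            // prints -3
    return 0;
}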
a2 a3 a4 . . . a1
where ζ := exp(2πi/n). Find the determinant of the circulant n × n matrix
\begin{pmatrix}
1 & 4 & 9 & \cdots & n^2 \\
n^2 & 1 & 4 & \cdots & (n-1)^2 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
9 & 16 & 25 & \cdots & 4 \\
4 & 9 & 16 & \cdots & 1
\end{pmatrix}
using equation (1).
A = \frac{1}{2}\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}, \quad
B = \frac{1}{2}\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
with trace equal to 1. Find the determinant of A and B. Find the rank of
A and B. Can one find a permutation matrix P such that P AP T = B?
tr(A2 ) = (tr(A))2 .
where the sum runs over the sets of nonnegative integers (k_1, . . . , k_n) satisfying the linear Diophantine equation
\sum_{\ell=1}^{n} \ell k_\ell = n.
Chapter 4
Eigenvalues and Eigenvectors
where ‖Ax‖ denotes the Euclidean norm of the vector Ax. Show that ρ(A) ≤ ‖A‖.
(i) Show that each eigenvalue λ of A satisfies at least one of the following
inequalities
|λ − ajj | ≤ rj , j = 1, 2, . . . , n.
In other words show that all eigenvalues of A can be found in the union of
disks
{ z : |z − ajj | ≤ rj , j = 1, 2, . . . , n }
This is Geršgorin's disk theorem.
(ii) Apply this theorem to the matrix
A = \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}.
(iii) Apply this theorem to the matrix
B = \begin{pmatrix} 1 & 2 & 3 \\ 3 & 4 & 9 \\ 1 & 1 & 1 \end{pmatrix}.
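The C++ sketch below (an added illustration) prints the Geršgorin disks for the matrix B above, taking r_j to be the sum of the off-diagonal moduli in row j as in the usual statement of the theorem; it computes only the disks, not the eigenvalues.

// Sketch: Gershgorin disks (centre, radius) for the 3x3 matrix B above.
#include <cmath>
#include <iostream>

int main() {
    double B[3][3] = {{1, 2, 3}, {3, 4, 9}, {1, 1, 1}};
    for (int j = 0; j < 3; ++j) {
        double r = 0.0;                        // r_j = sum over k != j of |b_{jk}|
        for (int k = 0; k < 3; ++k)
            if (k != j) r += std::fabs(B[j][k]);
        std::cout << "disk " << j + 1 << ": centre " << B[j][j]
                  << ", radius " << r << '\n';
    }
    return 0;
}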
Problem 14. (i) Use the method given above to calculate exp(iK), where
the hermitian 2 × 2 matrix K is given by
K = \begin{pmatrix} a & b \\ \bar{b} & c \end{pmatrix}, \quad a, c ∈ R, \; b ∈ C.
(ii) Find the condition on a, b and c such that
e^{iK} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.
R := \frac{x^T A x}{x^T x}.
The quotient is called Rayleigh quotient. Discuss.
Q(x) := xT Ax.
and
\sum_{i=1}^{n} p_{ij} = 1 \quad \text{for all } j = 1, 2, \ldots, n.
Show that a stochastic matrix always has at least one eigenvalue equal to
one.
A = \begin{pmatrix} 0 & s^* \\ r & 0_{n×n} \end{pmatrix}
p(t + 1) = M p(t), t = 0, 1, 2, . . .
is of the form
C = \begin{pmatrix}
c_0 & c_1 & c_2 & \cdots & c_{n-1} \\
c_{n-1} & c_0 & c_1 & \cdots & c_{n-2} \\
c_{n-2} & c_{n-1} & c_0 & \cdots & c_{n-3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
c_1 & c_2 & c_3 & \cdots & c_0
\end{pmatrix}
with the normalized eigenvectors
e_j = \frac{1}{\sqrt{n}} \begin{pmatrix} 1 \\ e^{2\pi i j/n} \\ \vdots \\ e^{2(n-1)\pi i j/n} \end{pmatrix}
for j = 1, 2, . . . , n.
(i) Use this result to find the eigenvalues of the matrix C.
(ii) Use (i) to find the eigenvalues of the matrix M .
(iii) Use (ii) to find p(t) (t = 0, 1, 2, . . .), where we expand the initial dis-
tribution vector p(0) in terms of the eigenvectors
p(0) = \sum_{k=1}^{n} a_k e_k
with
\sum_{j=1}^{n} p_j(0) = 1.
U x = λx.
U x = λx, U y = µy.
Show that x∗ y = 0.
(H − zIn )−1 = (H0 − zIn )−1 − (H0 − zIn )−1 V (H − zIn )−1 .
Note that AAT = nIn and AT A = nIn are equivalent. Hadamard matrices
Hn of order 2n can be generated recursively by defining
H_1 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad H_n = \begin{pmatrix} H_{n-1} & H_{n-1} \\ H_{n-1} & -H_{n-1} \end{pmatrix}
for n ≥ 2. Show that the eigenvalues of H_n are given by +2^{n/2} and -2^{n/2}, each of multiplicity 2^{n-1}.
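The doubling recursion is easy to realize directly. The following C++ sketch (added illustration) builds H_3, an 8 × 8 Hadamard matrix, starting from the 1 × 1 matrix (1).

// Sketch: build the Hadamard matrix H_n of order 2^n by the doubling recursion.
#include <iostream>
#include <vector>

int main() {
    const int n = 3;                                    // build H_3 (8 x 8)
    std::vector<std::vector<int>> H = {{1}};            // start from the 1x1 matrix (1)
    for (int step = 0; step < n; ++step) {
        int d = static_cast<int>(H.size());
        std::vector<std::vector<int>> H2(2 * d, std::vector<int>(2 * d));
        for (int i = 0; i < d; ++i)
            for (int j = 0; j < d; ++j) {
                H2[i][j]         = H[i][j];             // top-left block     H_{k-1}
                H2[i][j + d]     = H[i][j];             // top-right block    H_{k-1}
                H2[i + d][j]     = H[i][j];             // bottom-left block  H_{k-1}
                H2[i + d][j + d] = -H[i][j];            // bottom-right block -H_{k-1}
            }
        H.swap(H2);
    }
    for (const auto& row : H) {
        for (int x : row) std::cout << (x > 0 ? " 1" : "-1") << ' ';
        std::cout << '\n';
    }
    return 0;
}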
x∗ Ax ≥ 0 for all x ∈ Cn .
Let
A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
Find the decomposition of A given above.
Show that
\sum_{k=1}^{n} A_{\ell k} c_{kj} = λ_j \sum_{k=1}^{n} B_{\ell k} c_{kj}, \quad \ell = 1, \ldots, n
Problem 50. The Cartan matrix for the Lie algebra g2 is given by
A = \begin{pmatrix} 2 & -1 \\ -3 & 2 \end{pmatrix}.
Is the matrix nonnormal? Show that the matrix is invertible. Find the
inverse. Find the eigenvalues and normalized eigenvectors of A.
Problem 52. (i) Let ` > 0. Find the eigenvalues of the matrix
\begin{pmatrix} \cos(x/\ell) & \ell\sin(x/\ell) \\ -(1/\ell)\sin(x/\ell) & \cos(x/\ell) \end{pmatrix}.
(ii) Let ` > 0. Find the eigenvalues of the matrix
\begin{pmatrix} \cosh(x/\ell) & \ell\sinh(x/\ell) \\ (1/\ell)\sinh(x/\ell) & \cosh(x/\ell) \end{pmatrix}.
Problem 56. Find the eigenvalues and eigenvectors of the staircase ma-
trices
\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}, \quad
\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}, \quad
\begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{pmatrix}.
Extend to n-dimensions.
The matrix is not hermitian, but A = A^T. Find H = AA^* and the eigenvalues of H.
Extend to n-dimensions.
Chapter 5
Commutators and Anticommutators
[A^2, B^2] = 0_n
while
[A, B] ≠ 0_n ?
Show that
[\tilde{A}, \tilde{B}] = S^{-1}[A, B]S.
[A, B] = In (1)
and singular (i.e. det A = 0 and det B = 0) such that [A, B]+ = I2 .
Avj = λj vj , Bvj = µj vj , j = 1, 2, . . . , n
Chapter 6
Decomposition of Matrices
4) Form an m × n diagonal matrix Σ by placing on its leading diagonal the square roots σ_j := \sqrt{λ_j} of the p = min(m, n) first eigenvalues of the matrix A^T A found in 1), in descending order.
u_j = \frac{1}{σ_j} A v_j, \quad j = 1, 2, \ldots, r.
6) Add to the matrix U the rest of the m − r vectors using the Gram-
Schmidt orthogonalization process.
We have
A v_j = σ_j u_j, \quad A^T u_j = σ_j v_j
and therefore
A^T A v_j = σ_j^2 v_j, \quad A A^T u_j = σ_j^2 u_j.
where U_1, U_2, U_3, U_4 are 2^{n-1} × 2^{n-1} unitary matrices and C and S are the 2^{n-1} × 2^{n-1} diagonal matrices
(ii) Use the result from (i) to find a 2 × 2 hermitian matrix K such that
U = exp(iK).
Q∗ AQ = D + N
Let
A = \begin{pmatrix} 3 & 8 \\ -2 & 3 \end{pmatrix}, \quad Q = \frac{1}{\sqrt{5}}\begin{pmatrix} 2i & 1 \\ -1 & -2i \end{pmatrix}.
Problem 11. We say that a matrix is upper triangular if all its entries below the main diagonal are 0, and that it is strictly upper triangular if in addition all the entries on the main diagonal are equal to 1. Any invertible
real n × n matrix A can be written as the product of three real n × n
matrices
A = ODN
where N is strictly upper triangular, D is diagonal with positive entries, and
O is orthogonal. This is known as the Iwasawa decomposition of the matrix
A. The decomposition is unique. In other words, if A = O'D'N', where O', D' and N' are orthogonal, diagonal with positive entries and strictly upper triangular, respectively, then O' = O, D' = D and N' = N.
Find the Iwasawa decomposition of the matrix
A = \begin{pmatrix} 0 & 1 \\ 1 & 2 \end{pmatrix}.
Thus the eigenvalues are 1 and −1. Find the unitary matrix V such that
U = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = V \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} V^*.
Chapter 7
Functions of Matrices
with b ≠ 0.
(i) Calculate eiH using the normalized eigenvectors of H to construct a
unitary matrix V such that V ∗ HV is a diagonal matrix.
(ii) Specify a, b such that we find the unitary matrix
U = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
which converges for all complex values of z having absolute value less than
1, i.e., |z| < 1. Let A be an n × n matrix. Thus the series expansion
\arctan(A) = A - \frac{A^3}{3} + \frac{A^5}{5} - \frac{A^7}{7} + \cdots = \sum_{j=0}^{\infty} \frac{(-1)^j A^{2j+1}}{2j+1}
Show that
e^{A+B} = e^A e^B e^{-\frac{1}{2}[A,B]}   (2a)
e^{A+B} = e^B e^A e^{+\frac{1}{2}[A,B]}.   (2b)
Use the technique of parameter differentiation, i.e. consider the matrix-valued function
f(ε) := e^{εA} e^{εB}
where ε is a real parameter. Then take the derivative of f with respect to ε.
The left-hand side is called the disentangled form and the right-hand side
is called the undisentangled form. Find C2 , C3 , . . . , using the comparison
method. In the comparison method the disentangled and undisentangled
forms are expanded in terms of an ordering scalar α and matrix coefficients
of equal powers of α are compared. From
we obtain
\sum_{k=0}^{\infty} \frac{α^k}{k!}(A+B)^k = \sum_{r_0,r_1,r_2,r_3,\ldots=0}^{\infty} \frac{α^{r_0+r_1+2r_2+3r_3+\cdots}}{r_0!\,r_1!\,r_2!\,r_3!\cdots} A^{r_0} B^{r_1} C_2^{r_2} C_3^{r_3} \cdots
e^{t(A+B)} - e^{tA}e^{tB} = \frac{t^2}{2}(BA - AB) + \text{higher order terms in } t.   (1)
U := exp(iK)
is a unitary matrix.
Problem 23. Let x_0, x_1, \ldots, x_{2^n-1} be an orthonormal basis in C^{2^n}. We define
U := \frac{1}{\sqrt{2^n}} \sum_{j=0}^{2^n-1} \sum_{k=0}^{2^n-1} e^{-i2\pi kj/2^n} x_k x_j^*.   (1)
Show that U is unitary. In other words show that UU^* = I_{2^n}, using the completeness relation
I_{2^n} = \sum_{j=0}^{2^n-1} x_j x_j^*.
Thus I_{2^n} is the 2^n × 2^n unit matrix.
Calculate the left and right-hand side of (1) for the matrix
A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
Problem 27. Find the square root of the positive definite 2 × 2 matrix
\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}.
Chapter 8
Linear Differential Equations
\frac{d^n x}{dt^n} = c_0 x + c_1 \frac{dx}{dt} + \cdots + c_{n-1} \frac{d^{n-1}x}{dt^{n-1}}, \quad c_j ∈ R
can be written as a system of first order differential equations.
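One standard way to carry out this reduction, sketched here with the companion-matrix form (not necessarily the form intended in the text), introduces the new variables x_1 = x, x_2 = dx/dt, \ldots, x_n = d^{n-1}x/dt^{n-1}:

% Sketch: companion-matrix (first-order) form of the n-th order equation above,
% with x_1 = x, x_2 = dx/dt, ..., x_n = d^{n-1}x/dt^{n-1}.
\frac{d}{dt}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{pmatrix}
=
\begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
c_0 & c_1 & c_2 & \cdots & c_{n-1}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{pmatrix}.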
\frac{dX(t)}{dt} = AX(t) + F(t), \quad X(t_0) = C.
Find the solution of this matrix differential equation.
AY + Y B = C
can be written as
\frac{d}{dt}X(t) = AX(t) + X(t)B
where A, B are n × n matrices and the initial matrix X(t = 0) ≡ X(0) is
given. Find the solution of this differential equation.
are constant fields. Find the solution of the initial value problem.
Chapter 9
Kronecker Product
x ⊗ x, x ⊗ y, y ⊗ x, y⊗y
⟨A, B⟩ := tr(AB^*), \quad A, B ∈ H.
Hint. Start with hermitian 2 × 2 matrices and then use the Kronecker
product.
a · α := a1 α1 + a2 α2 + a3 α3 .
ε_1 A ⊗ B + ε_2 A ⊗ I_n + ε_3 I_m ⊗ B.
γ_0 := \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}, \quad
γ_1 := \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ -1 & 0 & 0 & 0 \end{pmatrix}.
Calculate ⟨γ_0, γ_1⟩.
(ii) Let U be a unitary n × n matrix. Find ⟨UA, UB⟩.
(iii) Let C, D be m × m matrices over C. Find ⟨A ⊗ C, B ⊗ D⟩.
unitary?
\bigotimes_{j=1}^{k} A_j = \left(\bigotimes_{j=1}^{k-1} A_j\right) ⊗ A_k = A_1 ⊗ A_2 ⊗ \cdots ⊗ A_k.
(i) Calculate
\bigotimes_{j=1}^{k} (J_{00} + J_{01} + J_{11})
for k = 1, k = 2, k = 3 and k = 8. Give an interpretation of the result when
each entry in the matrix represents a pixel (1 for black and 0 for white).
This means we use the Kronecker product for representing images.
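A C++ sketch of part (i), added as an illustration. It assumes, since the definitions come from an earlier problem not shown in this excerpt, that J_{ab} is the 2 × 2 matrix with a single 1 in row a, column b, so that J_{00} + J_{01} + J_{11} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}; the k-fold Kronecker power is printed as a black/white pixel pattern.

// Sketch: k-fold Kronecker power of M = J00 + J01 + J11, printed as pixels.
// Assumption: J_ab is the 2x2 matrix with a 1 in row a, column b and 0 elsewhere,
// so M = [[1,1],[0,1]].
#include <iostream>
#include <vector>

using Mat = std::vector<std::vector<int>>;

Mat kron(const Mat& A, const Mat& B) {
    int ar = static_cast<int>(A.size()), ac = static_cast<int>(A[0].size());
    int br = static_cast<int>(B.size()), bc = static_cast<int>(B[0].size());
    Mat C(ar * br, std::vector<int>(ac * bc));
    for (int i = 0; i < ar; ++i)
        for (int j = 0; j < ac; ++j)
            for (int p = 0; p < br; ++p)
                for (int q = 0; q < bc; ++q)
                    C[i * br + p][j * bc + q] = A[i][j] * B[p][q];
    return C;
}

int main() {
    Mat M = {{1, 1}, {0, 1}};      // assumed J00 + J01 + J11
    Mat K = M;
    int k = 3;                     // k-fold Kronecker product
    for (int step = 1; step < k; ++step) K = kron(K, M);
    for (const auto& row : K) {    // 1 -> black pixel '#', 0 -> white pixel '.'
        for (int x : row) std::cout << (x ? '#' : '.');
        std::cout << '\n';
    }
    return 0;
}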
(ii) Calculate
\left(\bigotimes_{j=1}^{k} (J_{00} + J_{01} + J_{10} + J_{11})\right) ⊗ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
Q := q · σ, R := r · σ, S := s · σ, T := t · σ
where q · σ := q1 σ1 + q2 σ2 + q3 σ3 . Calculate
(Q ⊗ S + R ⊗ S + R ⊗ T − Q ⊗ T )2 .
[X, A] = 0n .
(A ⊕ B) ⊗ C ≡ (A ⊗ C) ⊕ (B ⊗ C).
Is
A ⊗ (B ⊕ C) = (A ⊗ B) ⊕ (A ⊗ C)
true?
A⊗B =C ⊕D
AXB = C
(B T ⊗ A)vec(X) = vec(C).
AX + XB = D
can be written as
(I_n ⊗ A + B^T ⊗ I_n)\,vec(X) = vec(D).
Prove or disprove.
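A small numerical check of the vec identity vec(AXB) = (B^T ⊗ A) vec(X) for 2 × 2 matrices, added here as an illustration; the matrices used are arbitrary choices and vec stacks the columns of a matrix.

// Sketch: check (B^T kron A) vec(X) = vec(A X B) for 2x2 matrices.
// vec(X) stacks the columns of X.
#include <iostream>

int main() {
    double A[2][2] = {{1, 2}, {3, 4}};     // illustrative matrices
    double B[2][2] = {{0, 1}, {5, 2}};
    double X[2][2] = {{2, -1}, {1, 3}};

    // right-hand side: compute A X B and read off vec(A X B)
    double C[2][2] = {}, AXB[2][2] = {};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k) C[i][j] += A[i][k] * X[k][j];
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k) AXB[i][j] += C[i][k] * B[k][j];

    // left-hand side: (B^T kron A) has entry B[q][p]*A[i][j] at row p*2+i, column q*2+j
    double vecX[4], lhs[4] = {};
    for (int q = 0; q < 2; ++q)
        for (int j = 0; j < 2; ++j) vecX[q * 2 + j] = X[j][q];
    for (int p = 0; p < 2; ++p)
        for (int i = 0; i < 2; ++i)
            for (int q = 0; q < 2; ++q)
                for (int j = 0; j < 2; ++j)
                    lhs[p * 2 + i] += B[q][p] * A[i][j] * vecX[q * 2 + j];

    for (int p = 0; p < 2; ++p)
        for (int i = 0; i < 2; ++i)
            std::cout << lhs[p * 2 + i] << "  vs  " << AXB[i][p] << '\n';
    return 0;
}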
Problem 29. Using the definitions from the previous problem we define
s_{-,j} := \frac{1}{2}(σ_{x,j} - iσ_{y,j}) = \frac{1}{2}σ_{-,j}, \quad s_{+,j} := \frac{1}{2}(σ_{x,j} + iσ_{y,j}) = \frac{1}{2}σ_{+,j}
and
c_1 = s_{-,1}, \quad c_j = \exp\left(i\pi \sum_{\ell=1}^{j-1} s_{+,\ell} s_{-,\ell}\right) s_{-,j} \quad \text{for } j = 2, 3, \ldots
[A, B] = 0n , [A, C] = 0n .
Let
X := In ⊗ A + A ⊗ In , Y := In ⊗ B + B ⊗ In + A ⊗ C.
x ∧ y := x ⊗ y − y ⊗ x.
Show that
(x ∧ y) ∧ z + (z ∧ x) ∧ y + (y ∧ z) ∧ x = 0. (1)
V = exp(i(π/4)σ1 ) ⊗ exp(i(π/4)σ1 )
W = exp(i(π/4)σ2 ) ⊗ exp(i(π/4)σ2 ).
Calculate
V ∗ (σ3 ⊗ σ3 )V, W ∗ (σ3 ⊗ σ3 )W.
Chapter 10
v = \begin{pmatrix} i \\ 1 \\ -1 \\ -i \end{pmatrix}.
Ĥ = \frac{\hbar ω}{2}(σ_1 ⊗ σ_1 - σ_2 ⊗ σ_2)
where ω is the frequency and ℏ is the Planck constant divided by 2π. Find
the norm of Ĥ, i.e.,
applying two different methods. In the first method apply the Lagrange
multiplier method, where the constraint is ‖x‖ = 1. In the second method
we calculate Ĥ^*Ĥ and find the square root of the largest eigenvalue. This
is then ‖Ĥ‖. Note that Ĥ^*Ĥ is positive semi-definite.
in R3 .
(i) Show that the vectors are linearly independent.
(ii) Apply the Gram-Schmidt orthonormalization process to these vectors.
⟨A, B⟩ := tr(AB^*), \quad A, B ∈ H.
and
\lim_{n→∞} \left(e^{A_1/n} e^{A_2/n} \cdots e^{A_p/n}\right)^n = \exp\left(\sum_{j=1}^{p} A_j\right).
Chapter 11
Groups and Matrices
is not an element of H and Hg2 , and make a right coset Hg3 . Continue
making right cosets Hgj in this way. If G is a finite group, all elements
of G will be exhausted in a finite number of steps and we obtain the right
coset decomposition.
V = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}, \quad
W = \begin{pmatrix} 0 & 0 & -1 & 0 \\ 0 & -1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}
form a group under matrix multiplication.
(iii) Show that the two groups (so-called Vierergruppe) are isomorphic.
Find
z = xyx−1 y −1 .
Show that xz = zx and yz = zy, i.e., z is the generator of the center of H3 .
(iii) The derived subgroup (or commutator subgroup) of a group G is the
subgroup [G, G] generated by the set of commutators of every pair of ele-
ments of G. Find [G, G] for the Heisenberg group.
(iv) Let
A = \begin{pmatrix} 0 & a & c \\ 0 & 0 & b \\ 0 & 0 & 0 \end{pmatrix}
and a, b, c ∈ R. Find exp(A).
(v) The Heisenberg group is a simply connected Lie group whose Lie algebra consists of matrices
L = \begin{pmatrix} 0 & a & c \\ 0 & 0 & b \\ 0 & 0 & 0 \end{pmatrix}.
Find the commutators [L, L0 ] and [[L, L0 ], L0 ], where [L, L0 ] := LL0 − L0 L.
M : R3 → V := { a · σ : a ∈ R3 } ⊂ { 2 × 2 complex matrices }
a → M (a) = a · σ = a1 σ1 + a2 σ2 + a3 σ3 .
φ(x) := xax−1
and
ψ(x) := xax−1 a−1 ≡ [x, a].
How are the iterates of the maps φ and ψ related?
(ii) Consider G = SO(2) and
x = \begin{pmatrix} \cos(α) & -\sin(α) \\ \sin(α) & \cos(α) \end{pmatrix}, \quad a = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}
are conjugate in SL(2, C) but not in SL(2, R) (the real matrices in SL(2, C)).
tr\left(\sum_{j=1}^{r} A_j\right) = \sum_{j=1}^{r} tr(A_j) = 0
ω := exp(2πi/3).
Problem 14. The unitary matrices are elements of the Lie group U (n).
The corresponding Lie algebra u(n) is the set of matrices with the condition
X ∗ = −X.
An important subgroup of U (n) is the Lie group SU (n) with the condition
that det U = 1. The unitary matrices
\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
are not elements of the Lie group SU(2) since the determinants of these
unitary matrices are −1. The corresponding Lie algebra su(n) of the Lie
group SU (n) are the n × n matrices given by
X ∗ = −X, tr(X) = 0.
Let σ1 , σ2 , σ3 be the Pauli spin matrices. Then any unitary matrix in U (2)
can be represented by
where 0 ≤ α < 2π, 0 ≤ β < 2π, 0 ≤ γ ≤ π and 0 ≤ δ < 2π. Calculate the
right-hand side.
U = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
1 & 0 & 0 & \cdots & 0
\end{pmatrix}
M := \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & i & 0 & 0 \\ 0 & 0 & i & 1 \\ 0 & 0 & i & -1 \\ 1 & -i & 0 & 0 \end{pmatrix}.
(iii) Let SO(4) be the special orthogonal Lie group. Let SU (2) be the
special unitary Lie group. Show that for every real orthogonal matrix U ∈
SO(4), the matrix M U M −1 is the Kronecker product of two 2-dimensional
special unitary matrices, i.e.,
M U M −1 ∈ SU (2) ⊗ SU (2).
A(ψ, θ, φ) = \begin{pmatrix}
\cos(φ)\cos(θ)\cos(ψ) - \sin(φ)\sin(ψ) & -\cos(φ)\cos(θ)\sin(ψ) - \sin(φ)\cos(ψ) & \cos(φ)\sin(θ) \\
\sin(φ)\cos(θ)\cos(ψ) + \cos(φ)\sin(ψ) & -\sin(φ)\cos(θ)\sin(ψ) + \cos(φ)\cos(ψ) & \sin(φ)\sin(θ) \\
-\sin(θ)\cos(ψ) & \sin(θ)\sin(ψ) & \cos(θ)
\end{pmatrix}
with the parameters falling in the intervals
−π ≤ ψ < π, 0 ≤ θ ≤ π, −π ≤ φ < π.
and
where δ_{ij} is the Kronecker delta and ε_{ijk} is +1 if (ijk) is an even permutation of (123), -1 if (ijk) is an odd permutation of (123) and 0 otherwise.
We can formally summarize the multiplications as
e_μ e_ν = g_{μν} e_0 + \sum_{k=1}^{7} γ^k_{μν} e_k
where
g_{μν} = diag(1, -1, -1, -1, -1, -1, -1, -1), \quad γ^k_{ij} = -γ^k_{ji}
with μ, ν = 0, 1, . . . , 7, and i, j, k = 1, 2, . . . , 7.
(i) Show that the set { e0 , e1 , e2 , e3 } is a closed associative subalgebra.
(ii) Show that the octonion algebra O is non-associative.
{ e ⊗ e, e ⊗ a, a ⊗ e, a ⊗ a }.
Does this set form a group under matrix multiplication, where ⊗ denotes
the Kronecker product?
AT JA = J.
AT JA = J
form a group under matrix multiplication. This group is called the sym-
plectic group Sp(2n).
SL(n, R), n ≥ 3. The notation of the subgroups comes from the fact that
K is a compact subgroup, A is an abelian subgroup and N is a nilpotent
subgroup of SL(2, R). Find the Iwasawa decomposition of the matrix
\begin{pmatrix} \sqrt{2} & 1 \\ 1 & \sqrt{2} \end{pmatrix}.
Problem 23. Let GL(m, C) be the general linear group over C. This
Lie group consists of all nonsingular m × m matrices. Let G be a Lie
subgroup of GL(m, C). Suppose u1 , u2 , . . . , un is a coordinate system on
G in some neighborhood of Im , the m × m identity matrix, and that
X(u_1, u_2, \ldots, u_n) is a point in this neighborhood. The matrix dX of dif-
ferential one-forms contains n linearly independent differential one-forms
since the n-dimensional Lie group G is smoothly embedded in GL(m, C).
Consider the matrix of differential one-forms
Ω := X^{-1} dX, \quad X ∈ G.
The matrix Ω of differential one-forms contains n linearly independent ones.
(i) Let A be any fixed element of G. The left-translation by A is given by
X → AX.
Show that Ω = X^{-1} dX is left-invariant.
(ii) Show that
dΩ + Ω ∧ Ω = 0
where ∧ denotes the exterior product for matrices, i.e. we have matrix
multiplication together with the exterior product. The exterior product is
linear and satisfies
duj ∧ duk = −duk ∧ duj .
Therefore duj ∧ duj = 0 for j = 1, 2, . . . , n. The exterior product is also
associative.
(iii) Find dX −1 using XX −1 = Im .
Problem 25. Consider the Lie group SO(2) consisting of the matrices
X = \begin{pmatrix} \cos(u) & -\sin(u) \\ \sin(u) & \cos(u) \end{pmatrix}.
Problem 26. Let n be the dimension of the Lie group G. Since the vector
space of differential one-forms at the identity element is an n-dimensional
vector space, there are exactly n linearly independent left invariant differ-
ential one-forms in G. Let σ1 , σ2 , . . . , σn be such a system. Consider the
Lie group
G := \left\{ \begin{pmatrix} u_1 & u_2 \\ 0 & 1 \end{pmatrix} : u_1, u_2 ∈ R,\; u_1 > 0 \right\}.
Let
X = \begin{pmatrix} u_1 & u_2 \\ 0 & 1 \end{pmatrix}.
(i) Find X −1 and X −1 dX. Calculate the left-invariant differential one-
forms. Calculate the left-invariant volume element.
(ii) Find the right-invariant forms.
P = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}
Problem 29. Find the group generated by the two permutation matrices
P_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad
P_2 = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}.
Chapter 12
e = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \quad
f = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}.
[x, y] = x.
(i) Find the adjoint representation of this Lie algebra. Let v, w be two
elements of a Lie algebra. Then we define
adv(w) := [v, w]
(iii) Show that L is simple. If L has no ideals except itself and 0, and if
moreover [L, L] 6= 0, we call L simple. A subspace I of a Lie algebra L is
called an ideal of L if x ∈ L, y ∈ I together imply [x, y] ∈ I.
Problem 10. Consider the Lie algebra gl2 (R). The matrices
e_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad e_3 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad e_4 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}
with cτµν = −cτνµ , where the cτµν ’s are called the structure constants. Let A
be an arbitrary linear combination of the elements
A = \sum_{μ=1}^{r} a_μ Z_μ.
and
[A, X] = ρX.
This equation has the form of an eigenvalue equation, where ρ is the cor-
responding eigenvalue and X the corresponding eigenvector. Assume that
the Lie algebra is represented by matrices. Find the secular equation for
the eigenvalues ρ.
and
g σλ gσλ = δσλ .
A Lie algebra L is called semisimple if and only if det |g_{σλ}| ≠ 0. We assume
in the following that the Lie algebra is semisimple. We define
in the following that the Lie algebra is semisimple. We define
C := \sum_{ρ=1}^{r} \sum_{σ=1}^{r} g^{ρσ} X_ρ X_σ.
Problem 16. The roots of a semisimple Lie algebra are the Lie algebra
weights occurring in its adjoint representation. The set of roots forms the
root system, and is completely determined by the semisimple Lie algebra.
Consider the semisimple Lie algebra sl(2, R) with the generators
H = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad X = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad Y = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.
Find the roots.
Calculate
[g1 ⊗f1 , [g2 ⊗f2 , g3 ⊗f3 ]]+[g3 ⊗f3 , [g1 ⊗f1 , g2 ⊗f2 ]]+[g2 ⊗f2 , [g3 ⊗f3 , g1 ⊗f1 ]].
Problem 19. A basis for the Lie algebra su(N ), for odd N , may be built
from two unitary unimodular N × N matrices
g = \begin{pmatrix}
1 & 0 & 0 & \cdots & 0 \\
0 & ω & 0 & \cdots & 0 \\
0 & 0 & ω^2 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & ω^{N-1}
\end{pmatrix}, \quad
h = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
1 & 0 & 0 & \cdots & 0
\end{pmatrix}
where ω is a primitive N th root of unity, i.e. with period not smaller than
N , here taken to be exp(4πi/N ). We obviously have
hg = ωgh. (1)
m × n := m1 n2 − m2 n1
V_1 = A ⊗ B ⊗ I_2, \quad V_2 = A ⊗ I_2 ⊗ B, \quad V_3 = I_2 ⊗ A ⊗ B
Determine the in-degree and the out-degree of each vertex in the digraph
given by the adjacency matrix
A = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix}
B = A + A^2 + A^3 + \cdots + A^{n-1}
Problem 4. Write down the adjacency matrix A for the digraph shown.
Calculate the matrices A2 , A3 and A4 . Consequently find the number of
walks of length 1, 2, 3 and 4 from w to u. Is there a walk of length 1, 2, 3,
or 4 from u to w? Find the matrix B = A + A2 + A3 + A4 for the digraph
and hence conclude whether it is strongly connected. This means finding out whether all off-diagonal elements of B are nonzero.
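Since the digraph of this problem appears only in the original figure, the C++ sketch below uses the 5 × 5 adjacency matrix of the in-/out-degree problem above as a stand-in; it accumulates B = A + A^2 + A^3 + A^4 and tests whether every off-diagonal entry of B is nonzero.

// Sketch: powers of an adjacency matrix and the strong-connectivity test
// via B = A + A^2 + A^3 + A^4.  The 5x5 matrix A of the earlier problem
// is used as a stand-in for the digraph shown in the figure.
#include <iostream>

const int n = 5;

void mult(const int X[n][n], const int Y[n][n], int Z[n][n]) {
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            Z[i][j] = 0;
            for (int k = 0; k < n; ++k) Z[i][j] += X[i][k] * Y[k][j];
        }
}

int main() {
    int A[n][n] = {{0,1,0,0,0},{0,0,1,0,0},{1,0,0,0,1},{0,0,1,0,0},{0,0,0,1,0}};
    int P[n][n], Q[n][n], B[n][n];
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) { P[i][j] = A[i][j]; B[i][j] = A[i][j]; }
    for (int power = 2; power <= 4; ++power) {         // accumulate A^2, A^3, A^4
        mult(P, A, Q);
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) { P[i][j] = Q[i][j]; B[i][j] += Q[i][j]; }
    }
    bool strongly = true;                              // all off-diagonal entries nonzero?
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            if (i != j && B[i][j] == 0) strongly = false;
    std::cout << (strongly ? "strongly connected (by this test)\n"
                           : "not strongly connected (by this test)\n");
    return 0;
}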
Chapter 14
Hadamard Product
Problem 2. Let
A = \begin{pmatrix} a_1 & a_2 \\ a_2 & a_3 \end{pmatrix}, \quad B = \begin{pmatrix} b_1 & b_2 \\ b_2 & b_3 \end{pmatrix}
be symmetric matrices over R. The Hadamard product A • B is defined as
A • B := \begin{pmatrix} a_1 b_1 & a_2 b_2 \\ a_2 b_2 & a_3 b_3 \end{pmatrix}.
Assume that A and B are positive definite. Show that A • B is positive
definite using the trace and determinant.
its eigenvalues. Every characteristic polynomial has at least one root. For
any two n × n matrices A = (aij ) and B = (bij ), the Hadamard product of
A and B is the n × n matrix
A • B := (aij bij ).
ρ(A • B) ≤ ρ(A)ρ(B).
A • B := (aij bij ).
with Eii the n × n matrix of zeros except for a 1 in the (i, i)th position.
Prove this identity for the special case n = 2.
Chapter 15
Differentiation
Chapter 16
Integration
Chapter 17
Numerical Methods
(i) Show that the Jacobi method can be applied to this matrix.
(ii) Find the solution of the linear equation with b = (1 1 1)T .
where
N_{pq}(A) = \sum_{j=0}^{p} \frac{(p+q-j)!\,p!}{(p+q)!\,j!\,(p-j)!} A^j
D_{pq}(A) = \sum_{j=0}^{q} \frac{(p+q-j)!\,q!}{(p+q)!\,j!\,(q-j)!} (-A)^j.
Let
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.
Find f2,2 (A) and eA . Calculate the right-hand side of the inequality (2).
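The C++ sketch below (added illustration) evaluates N_{2,2}(A) and D_{2,2}(A) for the nilpotent matrix A above and forms the approximant; it assumes f_{pq}(A) = [D_{pq}(A)]^{-1} N_{pq}(A), since the definition of f_{pq} is not shown in this excerpt.

// Sketch: the (2,2) Pade-type approximant of exp(A) for A = [[0,1],[0,0]].
// Assumption: f_pq(A) = D_pq(A)^{-1} N_pq(A); the definition of f_pq is
// not reproduced in this excerpt.
#include <iostream>

double fact(int k) { double f = 1.0; for (int i = 2; i <= k; ++i) f *= i; return f; }

int main() {
    const int p = 2, q = 2;
    double A[2][2] = {{0, 1}, {0, 0}};
    double Ap[2][2] = {{1, 0}, {0, 1}};          // running power A^j, start with I
    double N[2][2] = {}, D[2][2] = {};

    for (int j = 0; j <= p; ++j) {               // here p = q, so one loop serves both sums
        double cN = fact(p + q - j) * fact(p) / (fact(p + q) * fact(j) * fact(p - j));
        double cD = fact(p + q - j) * fact(q) / (fact(p + q) * fact(j) * fact(q - j));
        double sign = (j % 2 == 0) ? 1.0 : -1.0; // (-A)^j = (-1)^j A^j
        for (int r = 0; r < 2; ++r)
            for (int c = 0; c < 2; ++c) {
                N[r][c] += cN * Ap[r][c];
                D[r][c] += cD * sign * Ap[r][c];
            }
        double T[2][2] = {};                     // next power A^{j+1}
        for (int r = 0; r < 2; ++r)
            for (int c = 0; c < 2; ++c)
                for (int k = 0; k < 2; ++k) T[r][c] += Ap[r][k] * A[k][c];
        for (int r = 0; r < 2; ++r)
            for (int c = 0; c < 2; ++c) Ap[r][c] = T[r][c];
    }

    double det = D[0][0] * D[1][1] - D[0][1] * D[1][0];
    double Dinv[2][2] = {{ D[1][1] / det, -D[0][1] / det},
                         {-D[1][0] / det,  D[0][0] / det}};
    double F[2][2] = {};
    for (int r = 0; r < 2; ++r)
        for (int c = 0; c < 2; ++c)
            for (int k = 0; k < 2; ++k) F[r][c] += Dinv[r][k] * N[k][c];
    std::cout << F[0][0] << ' ' << F[0][1] << '\n' << F[1][0] << ' ' << F[1][1] << '\n';
    return 0;
}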
Y_{k+1} = \frac{1}{2}(Y_k + Z_k^{-1}), \quad Z_{k+1} = \frac{1}{2}(Z_k + Y_k^{-1})
with k = 0, 1, 2, \ldots and Z_0 = I_n and Y_0 = A. The iteration has the properties that
Y_k = A Z_k, \quad Y_k Z_k = Z_k Y_k, \quad Y_{k+1} = \frac{1}{2}(Y_k + A Y_k^{-1}).
(i) Can the Denman-Beavers iteration be applied to the matrix
A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}?
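A C++ sketch of the Denman–Beavers iteration for the 2 × 2 matrix A above, added as an illustration; a handful of iterations already approximate the square root well (Y_k → A^{1/2}, Z_k → A^{-1/2}).

// Sketch: Denman-Beavers iteration for the square root of A = [[1,1],[1,2]].
#include <iostream>

void inv2(const double M[2][2], double R[2][2]) {         // inverse of a 2x2 matrix
    double det = M[0][0] * M[1][1] - M[0][1] * M[1][0];
    R[0][0] =  M[1][1] / det;  R[0][1] = -M[0][1] / det;
    R[1][0] = -M[1][0] / det;  R[1][1] =  M[0][0] / det;
}

int main() {
    double Y[2][2] = {{1, 1}, {1, 2}};                    // Y_0 = A
    double Z[2][2] = {{1, 0}, {0, 1}};                    // Z_0 = I
    for (int k = 0; k < 10; ++k) {
        double Zi[2][2], Yi[2][2], Yn[2][2], Zn[2][2];
        inv2(Z, Zi);
        inv2(Y, Yi);
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j) {
                Yn[i][j] = 0.5 * (Y[i][j] + Zi[i][j]);    // Y_{k+1} = (Y_k + Z_k^{-1})/2
                Zn[i][j] = 0.5 * (Z[i][j] + Yi[i][j]);    // Z_{k+1} = (Z_k + Y_k^{-1})/2
            }
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j) { Y[i][j] = Yn[i][j]; Z[i][j] = Zn[i][j]; }
    }
    std::cout << "approx sqrt(A):\n"
              << Y[0][0] << ' ' << Y[0][1] << '\n'
              << Y[1][0] << ' ' << Y[1][1] << '\n';       // Y_k -> A^{1/2}
    return 0;
}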
R := \frac{x^T A x}{x^T x}.
The quotient is called Rayleigh quotient. Discuss.
Supplementary Problems
Use Newton’s method to solve this system of three equations to find the
eigenvalues of A.
(ii) Given an m × n matrix over R. Write a C++ program that finds the
minimum value in each row and then the maximum value of these values.
Write a C++ program that calculates the right-hand side of the inequality
for a given matrix. Apply the complex class of STL. Apply it to the matrix
A = \begin{pmatrix} i & 0 & 0 & i \\ 0 & 2i & 2i & 0 \\ 0 & 3i & 3i & 0 \\ 4i & 0 & 0 & 4i \end{pmatrix}.
using this method. How are the coefficients c_i of the polynomial related to the eigenvalues? The eigenvalues of A ⊗ B are given by 0 (twice) and ±\sqrt{2}.
f_k(λ) = \det\begin{pmatrix}
α_1 - λ & β_1 & 0 & \cdots & 0 \\
β_1 & α_2 - λ & β_2 & \cdots & 0 \\
0 & β_2 & \ddots & \ddots & \vdots \\
\vdots & \vdots & \ddots & α_{k-1} - λ & β_{k-1} \\
0 & \cdots & 0 & β_{k-1} & α_k - λ
\end{pmatrix}
x_{t+1} = \frac{M x_t}{\|M x_t\|}, \quad t = 0, 1, \ldots
B = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 1 & 0 & 0 & -1 \end{pmatrix} \quad \text{and} \quad x_0 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}.
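A C++ sketch (added illustration) that simply carries out the iteration x_{t+1} = Bx_t/‖Bx_t‖ for the matrix B and starting vector x_0 above; whether, and to what, the sequence converges is the point of the exercise, so the code only prints the iterate after a fixed number of steps.

// Sketch: power iteration x_{t+1} = B x_t / ||B x_t|| for the matrix B above.
#include <cmath>
#include <iostream>

int main() {
    const double s = 1.0 / std::sqrt(2.0);
    double B[4][4] = {{ s, 0, 0,  s},
                      { 0, s, s,  0},
                      { 0, s, -s, 0},
                      { s, 0, 0, -s}};
    double x[4] = {1, 0, 0, 0};                      // x_0
    for (int t = 0; t < 50; ++t) {
        double y[4] = {}, norm = 0.0;
        for (int i = 0; i < 4; ++i) {
            for (int j = 0; j < 4; ++j) y[i] += B[i][j] * x[j];
            norm += y[i] * y[i];
        }
        norm = std::sqrt(norm);
        for (int i = 0; i < 4; ++i) x[i] = y[i] / norm;
    }
    std::cout << "x_50 = (" << x[0] << ", " << x[1] << ", "
              << x[2] << ", " << x[3] << ")\n";
    return 0;
}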
Chapter 18
Miscellaneous
Problem 2. Let
J = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{pmatrix}, \quad
U = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & i & 0 & 0 \\ 0 & 0 & -i & 1 \\ i & 1 & 0 & 0 \\ 0 & 0 & 1 & -i \end{pmatrix}.
Find U ∗ U . Show that U ∗ JU is a diagonal matrix.
Problem 4. Consider
A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}, \quad
v_0 = \frac{1}{2}\begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}.
Find
Problem 7. Let a > b > 0 be integers. Find the rank of the 4 × 4
matrix
M(a, b) = \begin{pmatrix} a & a & b & b \\ a & b & a & b \\ b & a & b & a \\ b & b & a & a \end{pmatrix}.
Show that the matrix is hermitian and the determinant is equal to 1. Show
that the matrix is not unitary.
is given by
A^{-1} = \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
G = diag(1, ω, ω^2, \ldots, ω^{n-1})
Problem 17. (i) Find the eigenvalues and eigenvectors of the 4×4 matrix
A = \begin{pmatrix} a_{11} & a_{12} & 0 & 0 \\ 0 & 0 & a_{23} & a_{24} \\ 0 & 0 & a_{33} & a_{34} \\ a_{41} & a_{42} & 0 & 0 \end{pmatrix}.
A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}.
(i) Find the 3 × 3 matrix v(φ1 , φ2 )vT (φ1 , φ2 ). What type of matrix do we
have?
(ii) Find the eigenvalues of the 3 × 3 matrix v(φ1 , φ2 )vT (φ1 , φ2 ). Compare
with vT (φ1 , φ2 )v(φ1 , φ2 ).
Problem 23. Can one find a (column) vector in R2 such that vvT is an
invertible 2 × 2 matrix?
v_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \quad
v_2 = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 \\ 1 \\ 1 \\ 0 \end{pmatrix}, \quad
v_3 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ -1 \\ 0 \end{pmatrix}, \quad
v_4 = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 \\ 1 \\ 0 \\ -1 \end{pmatrix}.
w_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \quad
w_2 = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 \\ 1 \\ 1 \\ 0 \end{pmatrix}, \quad
w_3 = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 \\ 0 \\ 1 \\ 1 \end{pmatrix}, \quad
w_4 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix}.
S 0n
where 0n is the n × n zero matrix.
Find the maxima and minima of the function f (α) = tr(A2 (α))−(tr(A(α)))2 .
Problem 36. We know that any n×n unitary matrix has only eigenvalues
λ with |λ| = 1. Assume that a given n × n matrix has only eigenvalues with
|λ| = 1. Can we conclude that the matrix is unitary?
with ε_1, ε_2, ε_3 ∈ R and v_1, v_2 ∈ C.
which is a unitary matrix. Each column vector of the matrix is a fully entangled state. Are the normalized eigenvectors of B also fully entangled states?
an orthogonal matrix?
D = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{pmatrix}.
A(x) = \begin{pmatrix} \cos(x) & 0 & -\sin(x) & 0 \\ 0 & \cos(x) & 0 & -\sin(x) \\ \sin(x) & 0 & \cos(x) & 0 \\ 0 & \sin(x) & 0 & \cos(x) \end{pmatrix}
is invertible. Find the inverse. Do these matrices form a group under
matrix multiplication?
(ii) Let x ∈ R. Show that the matrix
B(x) = \begin{pmatrix} \cosh(x) & 0 & \sinh(x) & 0 \\ 0 & \cosh(x) & 0 & \sinh(x) \\ \sinh(x) & 0 & \cosh(x) & 0 \\ 0 & \sinh(x) & 0 & \cosh(x) \end{pmatrix}
is invertible. Find the inverse. Do these matrices form a group under matrix multiplication?
an element of the Lie group SO(4)? The matrix is unitary and we have
UT = U.
Problem 48. (i) Study the eigenvalue problem for the symmetric matri-
ces over R
A_3 = \begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{pmatrix}, \quad
A_4 = \begin{pmatrix} 2 & -1 & 0 & -1 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & -1 \\ -1 & 0 & -1 & 2 \end{pmatrix}.
Extend to n dimensions
A = \begin{pmatrix}
2 & -1 & 0 & \cdots & 0 & -1 \\
-1 & 2 & -1 & \cdots & 0 & 0 \\
0 & -1 & 2 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 2 & -1 \\
-1 & 0 & 0 & \cdots & -1 & 2
\end{pmatrix}.
B = \begin{pmatrix}
b_{11} & 0 & 0 & 0 & 0 & b_{16} \\
0 & b_{22} & 0 & 0 & 0 & 0 \\
0 & 0 & b_{33} & 0 & 0 & 0 \\
0 & 0 & 0 & b_{44} & 0 & 0 \\
0 & 0 & 0 & 0 & b_{55} & 0 \\
b_{61} & 0 & 0 & 0 & 0 & b_{66}
\end{pmatrix}.
A(z) = \begin{pmatrix} 1 & 1 & 1 & z \\ 1 & 1 & 1 & z \\ 1 & 1 & 1 & z \\ \bar{z} & \bar{z} & \bar{z} & 1 \end{pmatrix}.
τ (A, B) := A ⊗ B − A ⊗ In − In ⊗ B.
Problem 54. We know that for any n × n matrix A over C the matrix
exp(A) is invertible with the inverse exp(−A). What about cos(A) and
cosh(A)?
(i) Find A ⊗ B, B ⊗ A.
(ii) Find tr(A ⊗ B), tr(B ⊗ A). Find det(A ⊗ B), det(B ⊗ A).
(iii) Find the eigenvalues of A and B.
(iv) Find the eigenvalues of A ⊗ B and B ⊗ A.
(v) Find rank(A), rank(B) and rank(A ⊗ B).
Show that the rank of the matrix is 2. The trace and determinant are equal to 0 and thus two of the eigenvalues are 0. The other two eigenvalues are ±\sqrt{3}.
X := A⊗In ⊗In +In ⊗A⊗In +In ⊗In ⊗A, Y := B⊗In ⊗In +In ⊗B⊗In +In ⊗In ⊗B.
∆A := B ⊗ A + A ⊗ I2 , ∆B := B ⊗ B.
[A ⊗ A, B ⊗ B]+ , [A ⊗ B, B ⊗ A]+ .
[A ⊗ A, B ⊗ B], [A ⊗ B, B ⊗ A].
[A ⊗ A, B ⊗ B]+ , [A ⊗ B, B ⊗ A]+ .
∆(A) := A ⊗ B + B ⊗ A, ∆(B) := B ⊗ B − A ⊗ A.
\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & -1 \end{pmatrix}, \quad
\begin{pmatrix} 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & -1 & 0 \\ 1 & 0 & 0 & 0 & -1 \end{pmatrix}.
n
Problem 65. Let n ≥ 2. Consider the Hilbert space H = C2 . Let A, B
be nonzero n × n hermitian matrices and In the identity matrix. Consider
the Hamilton operator Ĥ in this Hilbert space
Ĥ = A ⊗ In + In ⊗ B + A ⊗ B
uj ⊗ vk .
Thus none of the eigenvectors of this Hamilton operator are entangled. Consider now the Hamilton operator
K̂ = A ⊗ In + In ⊗ B + B ⊗ A.
A ⊗ B = vec^{-1}_{ms×nt}(L_{A,s×t}(vec_{s×t}(B)))
where
L_{A,s×t} := (I_n ⊗ I_t ⊗ A ⊗ I_s) \sum_{j=1}^{n} e_{j,n} ⊗ I_t ⊗ e_{j,n} ⊗ I_s.
Apply the spectral theorem and show that the matrix is given by
A = \sum_{j=1}^{5} λ_j v_j v_j^* = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix}.
Problem 74. (i) Let α ∈ R. Find the eigenvalues and eigenvectors of the
symmetric 3 × 3 and 4 × 4 matrices, respectively
A_3(α) = \begin{pmatrix} α & -1 & 0 \\ -1 & α & -1 \\ 0 & -1 & α \end{pmatrix}, \quad
A_4(α) = \begin{pmatrix} α & -1 & 0 & 0 \\ -1 & α & -1 & 0 \\ 0 & -1 & α & -1 \\ 0 & 0 & -1 & α \end{pmatrix}.
Extend to n dimensions.
(ii) Let α ∈ R. Find the eigenvalues and eigenvectors of the symmetric
4 × 4 matrix, respectively
B_4(α) = \begin{pmatrix} α & -1 & 0 & -1 \\ -1 & α & -1 & 0 \\ 0 & -1 & α & -1 \\ -1 & 0 & -1 & α \end{pmatrix}.
Extend to n dimensions.
Problem 85.
A = \begin{pmatrix} 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix}.
where n1 − n2 = 2, n1 − n3 = 1, n2 − n3 = −1 and M 0 = I3 .
M =A⊗B+C ⊗D
Problem 97. (i) Let x ∈ R. Find the determinant and the inverse of the
matrix
\begin{pmatrix} e^x\cos(x) & e^x\sin(x) \\ -e^{-x}\sin(x) & e^{-x}\cos(x) \end{pmatrix}.
(ii) Let α ∈ R. Find the determinant of the matrices
A(α) = \begin{pmatrix} \cos(α) & \sin(α) \\ -\sin(α) & \cos(α) \end{pmatrix}, \quad B(α) = \begin{pmatrix} \cos(α) & i\sin(α) \\ i\sin(α) & \cos(α) \end{pmatrix}.
\begin{pmatrix} 0 & 1 & 1 & -1 \\ 1 & 0 & 1 & -1 \\ 1 & 1 & 0 & -1 \\ -1 & -1 & -1 & 0 \end{pmatrix}
Problem 104. Show that the condition on a11 , a12 , b11 , b12 such that
\begin{pmatrix} a_{11} & 0 & 0 & a_{12} \\ 0 & b_{11} & b_{12} & 0 \\ 0 & b_{12} & b_{11} & 0 \\ a_{12} & 0 & 0 & a_{11} \end{pmatrix}
\begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}
= λ \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}
[A ⊗ In + In ⊗ B + A ⊗ B, B ⊗ In + In ⊗ A + B ⊗ A]
vanishes?
A ⊗ B ⊗ In , In ⊗ A ⊗ B, A ⊗ In ⊗ B.
e−α
eα
A(α) = .
e−α eα
e−α
α
e
C(α) = .
−eα e−α
[H, A] = A.
Find
µ1 = vT Av, µ2 = vT A2 v, µ3 = vT A3 v.
Can the matrix A be uniquely reconstructed from µ1 , µ2 , µ3 ? It can be
assumed that A is real symmetric.
Calculate
µ = v1∗ Av2 + v2∗ Av3 + v3∗ Av1 .
Discuss.
Problem 120. Find all 2×2 invertible matrices S over R with det(S) = 1
such that
S\begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}S, \qquad S\begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}S^{-1} = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}.
Thus we have to solve the three equations
U e0 = e1 , U e1 = e2 , ... U e4 = e5 , U e5 = e0 ?
an orthonormal matrix?
det(A ? B) = 1.
a = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \end{pmatrix} ⇒ A = \begin{pmatrix} a_1 & a_3 \\ a_2 & a_4 \end{pmatrix}
and analogously
b = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{pmatrix} ⇒ B = \begin{pmatrix} b_1 & b_3 \\ b_2 & b_4 \end{pmatrix}.
Show that
a∗ b = tr(A∗ B).
‖[A, B] - I_n‖ → min
JAJ = AT .
Problem 134. Show that the group S4 has five inequivalent irreducible
representations, namely two 1-dimensional representations, one 2-dimensional
representation and two 3-dimensional representations.
Problem 135. Let Rij denote the generators of an SO(n) rotation in the
xi − xj plane of the n-dimensional Euclidean space. Give an n-dimensional
matrix representation of these generators and use it to derive the Lie algebra
so(n) of the compact Lie group SO(n).
u ⊗ v = v ⊗ u.
Problem 138. Show that the square roots of the 2 × 2 unit matrix I2
are given by I2 and
S\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}S^{-1}, \quad S\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}S^{-1}, \quad S\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}S^{-1}
Bibliography

Bronson, R.
Matrix Operations
Schaum’s Outlines, McGraw-Hill (1989)
Fuhrmann, P. A.
A Polynomial Approach to Linear Algebra
Springer-Verlag, New York (1996)
Grossman, S. I.
Elementary Linear Algebra, Third Edition
Wadsworth Publishing, Belmont (1987)
Johnson, D. L.
Presentation of Groups
Cambridge University Press (1976)
Lang, S.
Linear Algebra
Addison-Wesley, Reading (1968)
Miller, W.
Symmetry Groups and Their Applications
Academic Press, New York (1972)
Steeb, W.-H.
Matrix Calculus and Kronecker Product with Applications and C++ Programs
World Scientific Publishing, Singapore (1997)
Steeb, W.-H.
Continuous Symmetries, Lie Algebras, Differential Equations and Computer Algebra
World Scientific Publishing, Singapore (1996)
Steeb, W.-H.
Hilbert Spaces, Wavelets, Generalized Functions and Quantum Mechanics
Kluwer Academic Publishers, Dordrecht (1998)
Steeb, W.-H.
Problems and Solutions in Theoretical and Mathematical Physics,
Second Edition, Volume I: Introductory Level
World Scientific Publishing, Singapore (2003)
Steeb, W.-H.
Problems and Solutions in Theoretical and Mathematical Physics,
Second Edition, Volume II: Advanced Level
World Scientific Publishing, Singapore (2003)
Van Loan, C. F.
Introduction to Scientific Computing: A Matrix-Vector Approach Using MATLAB, Second Edition
Prentice Hall (1999)
Wybourne, B. G.
Classical Groups for Physicists
John Wiley, New York (1974)
Index
Zassenhaus formula, 54