UNIT 7
MATRICES-I
Structure
7.1 INTRODUCTION
In the previous unit, you have studied concepts related to vector spaces. In
this unit, you will learn basic concepts related to matrices most of which you
may already know from your UG physics. Still we have presented these
concepts in some detail as these are used extensively in physics courses.
In Sec. 7.2, we introduce matrices that arise when we are solving systems of
linear equations. We define matrices and then explain multiplication of
matrices. Then in Sec. 7.3, we define different types of matrices that you may
already be familiar with such as symmetric matrix, hermitian adjoint,
orthogonal and unitary matrices. You will learn about the transpose and
inverse of a matrix as well. Next, we deal with matrix algebra in Sec. 7.4 and
explain elementary operations on matrices. In Sec. 7.5, we discuss
determinants and their properties along with proofs of some important results.
Finally, in Sec. 7.6, we discuss linear operators and matrices. In the next unit,
we continue the discussion on matrices and solve eigenvalue problems for
different types of matrices with examples.
Expected Learning Outcomes
After studying this unit, you should be able to:
define real and complex matrices, determine their sum, difference,
products with a number and another matrix;
Block 2 Vector Spaces, Matrices and Tensors
obtain the complex conjugate, transpose and inverse of a matrix;
define symmetric, hermitian, orthogonal and unitary matrices and function
of a matrix;
solve problems on commutator algebra and partitioning of matrices;
define a determinant and determine its value;
use the properties of determinants to solve problems; and
establish the connection between linear operators and matrices.
d x1 + c x2 = f
But the convention to write the coefficients as a11, a12 ,... etc. has many
advantages as we shall see.
Now we ask the question: what can we say about the solutions for the
unknown variables x1 and x2?

We try to eliminate x2 from Eqs. (7.1a and b): multiply the first equation by
a22 and the second by a12, and subtract. This gives:

x1 = (a22 c1 − a12 c2) / (a11 a22 − a12 a21) = b11 c1 + b12 c2   (7.2d)

where

b11 = a22 / (a11 a22 − a12 a21)

and

b12 = − a12 / (a11 a22 − a12 a21)   (7.2e)
Similarly, we can calculate the other unknown x2 as:

x2 = (a11 c2 − a21 c1) / (a11 a22 − a12 a21) = b21 c1 + b22 c2   (7.2f)

where

b21 = − a21 / (a11 a22 − a12 a21)

and

b22 = a11 / (a11 a22 − a12 a21)   (7.2g)
Eqs. (7.1a and b) and their solutions are written in the convenient matrix
notation as:

[ a11  a12 ] [ x1 ]   [ c1 ]
[ a21  a22 ] [ x2 ] = [ c2 ],        (7.3a)

[ x1 ]   [ b11  b12 ] [ c1 ]
[ x2 ] = [ b21  b22 ] [ c2 ]         (7.3b)
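The solution formulas (7.2d)–(7.2g) are easy to check numerically. The sketch below (a quick illustration with made-up coefficients, not part of the unit proper) builds the b-coefficients and compares the result with numpy's solver:

```python
import numpy as np

# Made-up coefficients for the system (7.1a, b):
#   a11 x1 + a12 x2 = c1,   a21 x1 + a22 x2 = c2
a11, a12, a21, a22 = 2.0, 1.0, 1.0, 3.0
c1, c2 = 5.0, 10.0

det = a11 * a22 - a12 * a21        # common denominator in Eqs. (7.2d-g)

# b-coefficients from Eqs. (7.2e) and (7.2g)
b11, b12 = a22 / det, -a12 / det
b21, b22 = -a21 / det, a11 / det

x1 = b11 * c1 + b12 * c2           # Eq. (7.2d)
x2 = b21 * c1 + b22 * c2           # Eq. (7.2f)

# Cross-check against the matrix form (7.3a)
A = np.array([[a11, a12], [a21, a22]])
c = np.array([c1, c2])
assert np.allclose([x1, x2], np.linalg.solve(A, c))
print(x1, x2)  # prints 1.0 3.0
```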
The matrix having all elements equal to zero is called the zero matrix. Two
matrices A = [aij] and B = [bij] are said to be equal if and only if:

aij = bij for all i and j

If A = [aij] and B = [bij] are two matrices of the same order, say, n × m
matrices, then their sum (A + B) and difference (A − B) are the matrices
C = [cij] and D = [dij], respectively, such that:

cij = aij + bij   and   dij = aij − bij
A + (B + C) = (A + B) + C   (7.8e)
And, if there are, say, four variables y1 to y4 in terms of which the x's can be
written as:

x1 = b11 y1 + ... + b14 y4   (7.11a)
Actually, the equations for the variables z1, z2, x1, x2, x3 and y1, ..., y4 given
above in this section are matrix equations too: first, as the product of 2 × 3
and 3 × 1 matrices to give the 2 × 1 matrix of z's:

[ z1 ]   [ a11  a12  a13 ] [ x1 ]
[ z2 ] = [ a21  a22  a23 ] [ x2 ]        (7.14a)
                           [ x3 ]
and then as a product of the 3 × 4 matrix of b's with a 4 × 1 matrix to give the
3 × 1 matrix of the x's:

[ x1 ]   [ b11  b12  b13  b14 ] [ y1 ]
[ x2 ] = [ b21  b22  b23  b24 ] [ y2 ]   (7.14b)
[ x3 ]   [ b31  b32  b33  b34 ] [ y3 ]
                                [ y4 ]
If we substitute the matrix for the x's from Eq. (7.14b) in Eq. (7.14a), then we
get:

[ z1 ]   [ a11  a12  a13 ] [ b11  b12  b13  b14 ] [ y1 ]
[ z2 ] = [ a21  a22  a23 ] [ b21  b22  b23  b24 ] [ y2 ]   (7.15a)
                           [ b31  b32  b33  b34 ] [ y3 ]
                                                  [ y4 ]

         [ c11  c12  c13  c14 ] [ y1 ]
       = [ c21  c22  c23  c24 ] [ y2 ]   (7.15b)
                                [ y3 ]
                                [ y4 ]
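Eqs. (7.14a)–(7.15b) say that substituting one linear relation into another amounts to multiplying the matrices. A short numpy check (random illustrative entries, not from the unit):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(2, 3)).astype(float)  # 2x3 matrix of a's
B = rng.integers(-3, 4, size=(3, 4)).astype(float)  # 3x4 matrix of b's
y = rng.integers(-3, 4, size=(4, 1)).astype(float)  # 4x1 column of y's

x = B @ y        # Eq. (7.14b): the x's from the y's
z = A @ x        # Eq. (7.14a): the z's from the x's

C = A @ B        # the 2x4 matrix of c's in Eq. (7.15b)
assert np.allclose(z, C @ y)
```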
Now that we have defined matrices and explained elementary operations on
them, we will discuss various types of matrices in the next section.
Sometimes it is useful to also denote the size of the identity matrix. Then we
write it as 1n. The identity matrix, when multiplied with any square matrix of
the same size, does not change that matrix:

A 1 = A,   1 A = A   (7.17d)
With these preliminary definitions, we discuss the transpose of a matrix and
the symmetric matrix in the next section.
7.3.1 The Transpose of a Matrix and Symmetric Matrix
We have already defined real and complex matrices, as well as the column
and row vector matrices.
The transpose AT of an n × m (real or complex) matrix A is the m × n matrix in
which the rows of A become the columns of AT:

(AT)ij = Aji   (7.18)
The interesting fact about the transpose of a matrix is that if we take the
transpose of the product of two matrices, then the result is the product of the
transposes of the two matrices in reverse order:

C = AB,   CT = BT AT   (7.20a)

This is so because, for example, the (i, j)-element of C is

Cij = Σ(k) Aik Bkj   (7.20b)

therefore,

(CT)ji = Cij = Σ(k) Aik Bkj = Σ(k) Bkj Aik = Σ(k) (BT)jk (AT)ki = (BT AT)ji
                                                                 (7.20c)
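Eq. (7.20a) is easy to verify numerically; for rectangular matrices the reversed order is in fact forced by the shapes alone. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
C = A @ B                      # C is 3x2

# Eq. (7.20a): the transpose of a product reverses the order
assert np.allclose(C.T, B.T @ A.T)
# Note that A.T @ B.T would be (4x3)(2x4): the shapes do not even match.
```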
A symmetric matrix is a matrix which is equal to its transpose:

AT = A   (symmetric)   (7.21a)

On the other hand, if the transpose is equal to its negative, then the matrix is
called anti-symmetric:

AT = − A   (anti-symmetric)   (7.21b)

REMEMBER: A symmetric or anti-symmetric matrix is a square matrix.
7.3.2 Complex Conjugate and Hermitian Adjoint
A matrix which is equal to its Hermitian adjoint (the complex conjugate of its
transpose) is called Hermitian:

A = A†   (7.24)
The inverse A−1 of a square matrix A satisfies:

A A−1 = A−1 A = 1   (7.25)

It is not necessary that the inverse matrix exists for every square matrix A.
Just as the inverse of a real or complex number exists if and only if the
number is not zero, the inverse of a matrix exists if and only if its determinant
|A| is not zero. You are familiar with determinants from school and UG
courses. However, we will explain the concept in Sec. 7.5.
For example, consider the matrix

A = [ 1  1 ]
    [ 1  1 ]

Assuming that

A−1 = [ a  b ]
      [ c  d ]

exists, we would need

A A−1 = [ 1  1 ] [ a  b ]   [ 1  0 ]
        [ 1  1 ] [ c  d ] = [ 0  1 ]

But the first column requires a + c = 1 and a + c = 0 at the same time, which
is impossible. So this A has no inverse; indeed, its determinant is zero.
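This failure can be seen numerically as well: the determinant of the matrix above vanishes, and numpy refuses to invert it (a small illustration):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
print(np.linalg.det(A))        # 0.0: the inverse cannot exist

try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print("no inverse:", err)  # numpy reports a singular matrix
```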
A real matrix R is called orthogonal if:

R RT = RT R = 1   (7.26)
Therefore, the inverse of an orthogonal matrix is always equal to its
transpose.
Similarly, a complex matrix U is called unitary if:

U U† = U† U = 1   (7.27)
A polynomial in a square matrix A defines a function of the matrix:

M = a0 1 + a1 A + ... + ar A^r   (7.28a)
We can also define convergent series in powers of the matrix A. The most
important and useful of these convergent series is the exponential:
exp A = 1 + A + (1/2!) A^2 + ... + (1/r!) A^r + ...   (7.28c)
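The series (7.28c) can be summed term by term. The helper below (an illustrative sketch, not a library routine) accumulates A^r/r! and is checked on a nilpotent matrix, for which the series terminates exactly, and on a diagonal matrix:

```python
import numpy as np

def exp_series(A, terms=30):
    """Partial sum 1 + A + A^2/2! + ... of Eq. (7.28c)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for r in range(1, terms):
        term = term @ A / r        # now term = A^r / r!
        result = result + term
    return result

N = np.array([[0.0, 1.0],
              [0.0, 0.0]])         # nilpotent: N @ N = 0
assert np.allclose(exp_series(N), np.eye(2) + N)   # exp N = 1 + N exactly

D = np.diag([1.0, 2.0])
assert np.allclose(exp_series(D), np.diag(np.exp([1.0, 2.0])))
```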
The discussion so far was essentially a revision of what you have studied in
your UG courses. We will now discuss matrix algebra.
A(BC) = (AB)C   (7.29)
SAQ 1
a) What is the dimension of the real and complex vector spaces L(n) when
the matrices are all (i) real matrices and (ii) complex matrices? Construct a
basis for these two cases.

b) Show by giving an example of 2 × 2 Hermitian matrices A and B that their
product is not Hermitian.
we can interchange the first and second rows and obtain another matrix:

I12 = [ 0  1  0 ]
      [ 1  0  0 ]
      [ 0  0  1 ]

Note that the matrix I12 is the unit matrix with its first and second rows (or
equivalently the first and second columns) interchanged.
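A matrix such as I12 can be built and tested directly; note that python indexes rows from 0, so "first and second rows" are indices 0 and 1 in this sketch:

```python
import numpy as np

def interchange(n, i, j):
    """Unit matrix of size n with rows i and j interchanged (0-based)."""
    m = np.eye(n)
    m[[i, j]] = m[[j, i]]
    return m

I12 = interchange(3, 0, 1)
A = np.arange(9.0).reshape(3, 3)

# Pre-multiplication interchanges the rows ...
assert np.allclose(I12 @ A, A[[1, 0, 2]])
# ... post-multiplication interchanges the corresponding columns
assert np.allclose(A @ I12, A[:, [1, 0, 2]])
# interchanging twice restores the unit matrix
assert np.allclose(I12 @ I12, np.eye(3))
```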
You can try the following SAQ to consolidate this concept.
SAQ 2
Construct a matrix which will interchange the i-th and j-th rows when
pre-multiplied to a square matrix of size n.
Here a number times the elements of the second row is added to the first
row. You can see that this operation can also be effected by pre-multiplication
by a matrix:
SAQ 3
We can build larger matrices out of smaller matrices of appropriate sizes. For
example, from two matrices of sizes 2 × 2 we can create 2 × 4, 4 × 2 or 4 × 4
matrices. From
we can create,
where lines inside this matrix are drawn just to show the partitioning.
Let A and F be partitioned matrices

A = [ B  C ]        F = [ G  H ]
    [ D  E ],           [ J  K ]

where the column sizes of the blocks of A match the row sizes of the blocks
of F. Splitting the sum over k at k = m1 (the width of block B),

(AF)ij = Σ(k = 1 to m) Aik Fkj = Σ(k = 1 to m1) Aik Fkj + Σ(k = m1+1 to m) Aik Fkj

so the product can be computed block by block:

AF = [ B  C ] [ G  H ]   [ BG + CJ   BH + CK ]
     [ D  E ] [ J  K ] = [ DG + EJ   DH + EK ]
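numpy can assemble partitioned matrices with np.block, which makes the block-multiplication rule above easy to check on random blocks of compatible sizes (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
B, C = rng.standard_normal((2, 2)), rng.standard_normal((2, 3))
D, E = rng.standard_normal((3, 2)), rng.standard_normal((3, 3))
G, H = rng.standard_normal((2, 4)), rng.standard_normal((2, 1))
J, K = rng.standard_normal((3, 4)), rng.standard_normal((3, 1))

A = np.block([[B, C], [D, E]])   # (2+3) x (2+3)
F = np.block([[G, H], [J, K]])   # (2+3) x (4+1)

# blocks multiply exactly like matrix elements:
AF_blocks = np.block([[B @ G + C @ J, B @ H + C @ K],
                      [D @ G + E @ J, D @ H + E @ K]])
assert np.allclose(A @ F, AF_blocks)
```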
A matrix whose non-zero elements are confined to blocks straddling the
diagonal, such as

A = [ a11    ...  a1m1     0               ...  0         ]
    [ ...                                                 ]
    [ an11   ...  an1m1    0               ...  0         ]
    [ 0      ...  0        a(n1+1)(m1+1)   ...  a(n1+1)m  ]
    [ ...                                                 ]
    [ 0      ...  0        an(m1+1)        ...  anm       ]

is written as:

A = [ B  0 ]
    [ 0  C ]

where

B = [ a11   ...  a1m1  ]        C = [ a(n1+1)(m1+1)  ...  a(n1+1)m ]
    [ ...              ]            [ ...                          ]
    [ an11  ...  an1m1 ],           [ an(m1+1)       ...  anm      ]
7.5.1 Permutations

You know that n distinct objects (n ≥ 2) can be arranged in a line in many
ways. We can denote these objects by any symbols, but it is convenient to
denote them by numbers, like 1, 2, ..., n. You know that there are factorial n

n! = n (n − 1) ... 3 . 2 . 1

ways of arranging them, because in the first (leftmost) place we can put any of
the n objects, and for each choice we can put any of the remaining n − 1
objects in the second place. This makes n(n − 1) different ways to fill the
first two places. Next, the object for the third place can be chosen in n − 2
possible ways for every choice of the first two places. And so on, till we have
arranged all the n objects in a row.
A permutation can be arrived at by starting from the original order 1, 2, ..., n
and making a number of interchanges. Note that in Eq. (7.33), each element i
of the set S is replaced by the corresponding P(i), i = 1, 2, ..., n. For example,
the permutation (3, 2, 1) is described by the function P(1) = 3, P(2) = 2,
P(3) = 1.

A simple way to find how many interchanges are required is to count the
number of 'crosses' in the lines joining the two copies of the set with elements
1, 2, 3, 4 as shown in the diagram below:

[Diagram: the elements 1, 2, 3, 4 in one column joined by straight lines to
their images in a second column; the lines cross at two points.]
There are two crosses. Therefore, this is an even permutation. One can arrive
at the same permutation in a number of ways, but the number of interchanges,
if it is even, remains even, and if it is odd, remains odd. For example,

1, 2, 3, 4 → 1, 4, 3, 2   (one interchange: 2 ↔ 4)

or

1, 2, 3, 4 → 1, 4, 3, 2   (three interchanges: 4 ↔ 3, 4 ↔ 2, 3 ↔ 2)

[Diagram: the permutation 1, 4, 3, 2 drawn as crossing lines; the lines cross
at three points.]
For any n, half the permutations are even and the remaining half odd. This is
so because for every permutation there is another which differs from it by a
single interchange.
Now, work through the following Example. But first a word about the notation
we use.
Notation: The sign of a permutation P is denoted by (−1)^P, which is +1 for
even P and −1 for odd P.
Example 7.1
Write down all the 4! = 24 permutations of four objects and classify them as
even and odd.
We omit commas and brackets for simplicity of writing. The sign is put before
the permutation:

+1234, +1342, +1423, −1324, −1243, −1432
−2134, −2341, −2413, +2314, +2143, +2431
−3214, −3142, −3421, +3124, +3241, +3412
−4231, −4312, −4123, +4321, +4213, +4132
Since a permutation P is a one-one onto mapping of the set {1, 2, ..., n}, the
consecutive application of two such permutations P1 and P2 is also a
permutation:

P1 P2 (i) = P1(P2(i))   (7.34)

The identity mapping I, which does not change the order 1, 2, ..., n, is such
that

I P = P I = P   (7.35)
Moreover, the inverse mapping P−1 exists, which simply reverses the effect
of P:

P:   1, 2, ..., n → P(1), P(2), ..., P(n)
P−1: P(1), P(2), ..., P(n) → 1, 2, ..., n

You can see that since P−1 retraces each step backwards, the even or odd
nature of P and P−1 is the same.
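The 'crosses' counting rule and the parity of the inverse can both be coded in a few lines (a small illustrative sketch; permutations are written as tuples of 1, ..., n):

```python
def sign(perm):
    """(-1)**P by counting 'crosses' (inversions) in the diagram."""
    n = len(perm)
    crosses = sum(1 for a in range(n) for b in range(a + 1, n)
                  if perm[a] > perm[b])
    return -1 if crosses % 2 else +1

def inverse(perm):
    """The permutation that retraces P backwards."""
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p - 1] = i + 1
    return tuple(inv)

assert sign((1, 2, 3, 4)) == +1      # identity: even
assert sign((1, 4, 3, 2)) == -1      # one interchange 2 <-> 4: odd
assert sign((2, 1, 4, 3)) == +1      # two crosses: even
p = (3, 1, 4, 2)
assert sign(p) == sign(inverse(p))   # P and P^-1 have the same parity
```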
The determinant of an n × n matrix A = [aij] is defined as the sum over all
permutations P:

|A| = Σ(P) (−1)^P a1P(1) a2P(2) ... anP(n)

Note from the definition that the determinant has the following features:

1. The determinant is a homogeneous polynomial of degree n in the
elements, with each element appearing only as a first power.

2. There are n! terms in the polynomial. Each term is a product of n matrix
elements with the row indices fixed and the column indices corresponding to a
permutation of 1, 2, ..., n. The terms have coefficients ±1, depending on the
even or odd nature of P.
A convenient way to evaluate a determinant is by the rule called Laplace
expansion. You should know this rule but we give it here for ready reference.
It goes like this:
Laplace expansion by the first row:
1. Take the first element a11 of the first row. Multiply it by the determinant of
the (n − 1) × (n − 1) matrix which is left after deleting the first row and the
first column on which a11 falls.

2. Take the second element a12 of the first row. Multiply it similarly by the
determinant of the matrix left after deleting the first row and the second
column on which a12 falls. Multiply further by −1 and add to the term with
factor a11.

3. Take the third element a13 of the first row. Multiply it similarly by the
determinant of the matrix left after deleting the first row and the third column
on which a13 falls. Multiply by +1 and add to the previous terms.

4. Continue in this way, till the end of the row, with alternating +1 and −1
factors.

5. In this process the n × n determinant is reduced to n terms containing
(n − 1) × (n − 1) determinants, which, in turn, can be reduced to
(n − 2) × (n − 2) determinants, and so on until there are no determinants
left to evaluate.
Thus, for example,

| a11  a12  a13 |
| a21  a22  a23 | = a11 (a22 a33 − a23 a32) − a12 (a21 a33 − a23 a31)
| a31  a32  a33 |                           + a13 (a21 a32 − a22 a31)
It is not necessary to expand only by the first row. We can choose any fixed
row or a fixed column and continue as above with the following rule in mind:
the sign of the term whose coefficient is aij is determined by (−1)^(i + j).
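The expansion steps above translate directly into a short recursive routine (an illustrative sketch, far too slow for large n), checked here against numpy:

```python
import numpy as np

def det_laplace(A):
    """Determinant by Laplace expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # delete the first row and the j-th column -> the minor of a_1j
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_laplace(minor)
    return total

A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 1.0],
              [1.0, 2.0, 2.0]])
assert np.isclose(det_laplace(A), np.linalg.det(A))
```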
Most of these properties are consequences of the definition. Only the proof of
|AB| = |A| |B| is non-trivial.

To see that |AT| = |A|, we can re-arrange the factors in each term so that the
first, second, etc. row indices of AT appear in their natural order. Then the
column indices 1, 2, ..., n get arranged according to the inverse permutation:

|A| = Σ(P) (−1)^P a1P(1) a2P(2) ... anP(n)
    = Σ(P) (−1)^P aP−1(1)1 aP−1(2)2 ... aP−1(n)n
    = Σ(P−1) (−1)^(P−1) (AT)1P−1(1) (AT)2P−1(2) ... (AT)nP−1(n)
    = |AT|

The last step follows because the even or odd nature of P and P−1 is the
same.
If two rows of the matrix are interchanged, then each term in the expression
for the determinant needs exactly one extra interchange to bring it back to the
original order. That is responsible for the extra minus sign. The same applies
to a column interchange, because by taking the transpose the process is
equivalent to a row exchange. A matrix with two identical rows or two identical
columns has zero determinant, because then |A| = −|A|.

Also, on adding (a multiple of) one row to another row, the expression for the
determinant becomes equal to the sum of two expressions. One of these is
the original determinant, and the other is the determinant of a matrix with two
identical (or proportional) rows, which is zero.
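These properties are easy to confirm numerically on a random matrix (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

# transpose leaves the determinant unchanged
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))

# interchanging two rows flips the sign
assert np.isclose(np.linalg.det(A[[1, 0, 2, 3]]), -np.linalg.det(A))

# adding a multiple of one row to another changes nothing
C = A.copy()
C[0] += 2.5 * C[1]
assert np.isclose(np.linalg.det(C), np.linalg.det(A))

# product rule |AB| = |A||B|
B = rng.standard_normal((4, 4))
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```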
We now come to block matrices. Let a block diagonal (r + s) × (r + s) matrix C
with elements Cij be given, with two blocks as:

C = [ A  0 ]
    [ 0  B ]

where A is r × r and B is s × s. Then

|C| = Σ(P) (−1)^P c1P(1) ... crP(r) c(r+1)P(r+1) ... c(r+s)P(r+s)

A term is non-zero only if P maps the indices 1, ..., r among themselves and
the indices r + 1, ..., r + s among themselves. Such a P splits into
independent permutations P1 (of 1, ..., r) and P2 (of r + 1, ..., r + s), and the
sign (−1)^P is the product of the two signs. So,

|C| = Σ(P1) Σ(P2) (−1)^P1 (−1)^P2 a1P1(1) ... arP1(r) b1P2(1) ... bsP2(s)
    = |A| |B|
SAQ 4

Show by similar arguments that the determinants of 2n × 2n matrices D1 and
D2 of the form

D1 = [ A  0 ]        D2 = [ A  C ]
     [ C  B ],            [ 0  B ]

are |D1| = |D2| = |A| |B|.
Proof of |AB| = |A| |B|

Let A and B be two n × n matrices, and 1 the unit matrix of size n. The proof
depends on the identity:

[ 1  A ] [  A  0 ]   [  0  AB ]
[ 0  1 ] [ −1  B ] = [ −1  B  ]   (7.37)

The matrix pre-multiplying on the left side of the equation simply replaces rows
of the matrix with linear combinations of other rows, so that the determinant of
the matrix with A and B on the diagonal does not change. Therefore,

|  A  0 |   |  0  AB |
| −1  B | = | −1  B  |

This means that

|A| |B| = (−1)^n |−1n| |AB| = (−1)^n (−1)^n |AB| = (−1)^(2n) |AB| = |AB|
Proof of the Laplace Expansion

We have explained the Laplace expansion by the first row as an example
earlier in this section.

It is sufficient to prove it for the first row, because any other row can be
brought to the position of the first row by interchanges, with a corresponding
change of sign. Also, as the determinant remains unchanged on taking the
transpose of the matrix, the Laplace expansion can be done by any column
as well.

Let

|A| = Σ(P) (−1)^P a1P(1) a2P(2) ... anP(n)

We collect the terms into groups:

1. all those (n − 1)! permutations P1 for which P(1) = 1. These are just the
permutations of 2, 3, ..., n, so that (−1)^P = (−1)^P1;

2. all those (n − 1)! permutations P2 for which P(1) = 2. These are essentially
the permutations of 1, 3, ..., n, after the interchange of 1 and 2, so that
(−1)^P = −(−1)^P2;

3. and so on for P3, P4, etc.

Collecting the factor a1j from group j, the remaining sum is (−1)^(1 + j) times
the determinant of the matrix obtained by deleting the first row and the j-th
column. This combination is called the cofactor ã1j of a1j:

ã1j = (−1)^(1 + j) | a21  ...  a2(j−1)   a2(j+1)  ...  a2n |
                   | ...                                   |
                   | an1  ...  an(j−1)   an(j+1)  ...  ann |   (7.38)
The Laplace expansion by the i-th row can be written as:

|A| = Σ(j = 1 to n) aij ãij   (7.39a)
SAQ 5

Show that if the cofactors are taken along a different row (or column), the
sum vanishes:

Σ(j = 1 to n) aij ãkj = 0 for k ≠ i,   and   Σ(i = 1 to n) aij ãik = 0 for k ≠ j

Hint: The expression in the first of these cases looks like the determinant of a
matrix in which the k-th row is identical to the i-th row. But the determinant of
a matrix with two identical rows is zero. A similar argument holds for the
second case.
Then the above equations for all i and k can be written as:

δik = Σ(j = 1 to n) aij ãkj / |A| = (1/|A|) (A ÃT)ik

where Ã = [ãij] is the matrix of cofactors. Since δik are the elements of the
unit matrix, the inverse of the matrix can be defined as

(A−1)ik = (1/|A|) ãki   (7.43)

so that A A−1 = 1   (7.44)
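Eq. (7.43) gives a complete (if slow) recipe for the inverse. A minimal sketch building the cofactor matrix and checking Eq. (7.44):

```python
import numpy as np

def cofactor_inverse(A):
    """(A^-1)_ik = cofactor a~_ki / |A|, as in Eq. (7.43)."""
    n = A.shape[0]
    cof = np.empty((n, n))
    for i in range(n):
        for k in range(n):
            minor = np.delete(np.delete(A, i, axis=0), k, axis=1)
            cof[i, k] = (-1) ** (i + k) * np.linalg.det(minor)
    return cof.T / np.linalg.det(A)   # the transpose supplies a~_ki

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
assert np.allclose(cofactor_inverse(A) @ A, np.eye(2))    # Eq. (7.44)
assert np.allclose(cofactor_inverse(A), np.linalg.inv(A))
```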
SAQ 6
Prove that A−1 A = 1. [Hint: Use Eq. (7.39b).]
then the elements t ij of the matrix are real numbers and it is a real matrix. If
U and V are complex spaces then the elements t ij of the matrix are complex
numbers and it is a complex matrix.
As T is a linear operator, it is sufficient to define it on the basis vectors,
because its action on any other vector u ∈ U is then known:

Tu = T( Σ(i = 1 to n) xi ei ) = Σ(i = 1 to n) xi T ei = Σ(i = 1 to n) Σ(j = 1 to m) xi tij fj

If Tu = v = Σ(j = 1 to m) yj fj, then comparing the coefficients of the basis
vectors fj on the two sides above, we obtain:

yj = Σ(i = 1 to n) xi tij
You can see that this is a matrix equation where the row vector

Y = [y1, ..., ym]

is obtained by multiplying the row vector

X = [x1, ..., xn]

on the right by the n × m matrix T with elements tij:

Y = X T,   yj = Σ(i = 1 to n) xi tij,   j = 1, 2, ..., m   (7.45)
We can also write this equation in terms of column vectors. Taking the
transpose of this equation, we get:

YT = TT XT   (7.46)

or

[ y1 ]   [ t11  t21  ...  tn1 ] [ x1 ]
[ y2 ] = [ t12  t22  ...  tn2 ] [ x2 ]   (7.47)
[ ...]   [ ...                ] [ ...]
[ ym ]   [ t1m  t2m  ...  tnm ] [ xn ]
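The row-vector convention of Eq. (7.45) and its column-vector transpose (7.46) can be compared directly (illustrative sketch with random entries):

```python
import numpy as np

rng = np.random.default_rng(5)
T = rng.standard_normal((3, 4))      # n x m matrix of elements t_ij
X = rng.standard_normal((1, 3))      # row vector of coordinates x_i

Y = X @ T                            # Eq. (7.45): Y = XT
assert np.allclose(Y.T, T.T @ X.T)   # Eq. (7.46): Y^T = T^T X^T
```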
Let S: V → W be another linear operator, where W is an r-dimensional space
with basis g1, ..., gr. Then

(ST) ei = S(T ei) = S( Σ(j = 1 to m) tij fj ) = Σ(j = 1 to m) tij S fj,   i = 1, 2, ..., n

and

S fj = Σ(k = 1 to r) sjk gk,   j = 1, 2, ..., m

Then

(ST) ei = Σ(j = 1 to m) Σ(k = 1 to r) tij sjk gk,   i = 1, 2, ..., n

This shows that the matrix [pik] corresponding to the product ST is related to
the matrices of T and S as:

pik = Σ(j = 1 to m) tij sjk
This shows that the matrix [ST] corresponding to the linear operator
ST: U → W, where T: U → V and S: V → W, is given by the product of the
matrices in reverse order:

[ST] = [T] [S],

or,

[ p11  p12  ...  p1r ]   [ t11  t12  ...  t1m ] [ s11  s12  ...  s1r ]
[ p21  p22  ...  p2r ] = [ t21  t22  ...  t2m ] [ s21  s22  ...  s2r ]
[ ...                ]   [ ...                ] [ ...                ]
[ pn1  pn2  ...  pnr ]   [ tn1  tn2  ...  tnm ] [ sm1  sm2  ...  smr ]

To bring S and T into the same order, we can take the transpose and write:

[ST]T = [S]T [T]T
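The reverse-order rule [ST] = [T][S] is a consequence of the row-vector convention; a quick numerical check (illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(4)
T_mat = rng.standard_normal((3, 4))   # T : U (dim 3) -> V (dim 4)
S_mat = rng.standard_normal((4, 2))   # S : V (dim 4) -> W (dim 2)
X = rng.standard_normal((1, 3))       # row vector of coordinates in U

step_by_step = (X @ T_mat) @ S_mat    # apply T first, then S
ST_mat = T_mat @ S_mat                # [ST] = [T][S], reverse order
assert np.allclose(step_by_step, X @ ST_mat)
```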
We now end this unit and summarise its contents.
7.7 SUMMARY
In this unit, we have covered the following concepts:
Elementary concepts related to matrices including their definition,
elementary algebraic operations on matrices such as their sum,
difference, multiplication by a number and multiplication of matrices
(Sec. 7.2).
Various types of matrices such as real, complex, symmetric, Hermitian
adjoint, unitary and orthogonal matrices, evaluation of the transpose,
complex conjugate and inverse of a matrix, and function of matrices
(Sec. 7.3).
Elementary operations in matrix algebra and partitioning of matrices
(Sec. 7.4).
Permutations, definition of determinants, evaluation and properties of
determinants (Sec. 7.5).
Linear operators and matrices and two linear operators (Sec. 7.6).
Show that C C* = C* C, where C* is the complex conjugate of C.
Calculate AB and show that it is also a matrix of the same form. This form
is called an upper triangular matrix, in which all elements below the diagonal
are zero. What should d, e, f be so that AB = 1? Will it make BA = 1 too?
Find the condition that AB = BA.
12. The first Pauli matrix is

σ1 = [ 0  1 ]
     [ 1  0 ]

Calculate

U1 = exp (iθσ1) = 1 + iθσ1 + (iθσ1)^2/2! + ...

For real θ, show that U1 is unitary and has determinant 1.
13. Calculate exp (iθσ2) and exp (iθσ3) for the other Pauli matrices.
14. Find the determinants of 2n × 2n matrices D3 and D4 of the form

D3 = [ 0  A ]        D4 = [ A  0 ]
     [ B  C ],            [ B  C ]

Hint: For D3, the permutations that contribute are those in which 1, ..., n are
mapped into n + 1, ..., 2n (and therefore n + 1, ..., 2n are mapped into
1, ..., n). We can start such permutations by first interchanging 1 with n + 1,
2 with n + 2, and so on, which gives a factor of (−1)^n, followed by
independent permutations of the two sets of indices among themselves.
15. Velocity addition in special relativity: The Lorentz transformation
between two inertial frames, one of which (X′) moves with relative velocity
v with respect to X, is given by the matrix equation:

X′ = L(v) X

with

X′ = [ t′ ]        X = [ t ]
     [ x′ ],           [ x ]

and

L(v) = 1/√(1 − v²/c²) [  1    −v/c² ]
                      [ −v     1    ]

Show that

L(w) L(v) = L(u),   u = (v + w)/(1 + vw/c²)
Hint: Use the identity

(1 − a²)(1 − b²) = 1 − a² − b² + a²b² = (1 + ab)² − (a + b)²
7.9 SOLUTIONS AND ANSWERS
Self-Assessment Questions
1. a) The vector space of all n × n real matrices is of dimension n² because
any real matrix

T = [ t11  t12  ...  t1n ]
    [ t21  t22  ...  t2n ]
    [ ...                ]
    [ tn1  tn2  ...  tnn ]

can be written as T = Σ(j,k) tjk ejk, where ejk is the matrix with 1 at the (j, k)
position and zeros elsewhere; the n² matrices ejk form a basis. The space of
n × n complex matrices has dimension n² as a complex vector space (with the
same basis ejk), and dimension 2n² as a real vector space, with basis ejk and
i ejk:

T = Σ(j,k) [ rjk ejk + sjk (i ejk) ]

where rjk and sjk are the real and imaginary parts of tjk.
b) We choose the Hermitian Pauli matrices, say A = σ1 and B = σ2:

A = [ 0  1 ]        B = [ 0  −i ]
    [ 1  0 ],           [ i   0 ]

Then their product

AB = [ i   0 ]
     [ 0  −i ]

is not Hermitian because (AB)† = −AB ≠ AB.
2. (i) Take the unit matrix of size n; (ii) put the ii and jj elements on the
diagonal equal to zero; (iii) place the ij and ji elements equal to 1. This is the
required matrix:

       [ 1                      ]
       [    1                   ]
       [      ...               ]
Iij =  [        0  ...  1       ]   ← row i
       [        ...             ]
       [        1  ...  0       ]   ← row j
       [                  ...   ]
       [                     1  ]
3. Post-multiplying B = [bij] by a lower triangular matrix of 1's,

[ b11  b12  b13 ] [ 1  0  0 ]   [ b11 + b12 + b13   b12 + b13   b13 ]
[ b21  b22  b23 ] [ 1  1  0 ] = [ b21 + b22 + b23   b22 + b23   b23 ]
[ b31  b32  b33 ] [ 1  1  1 ]   [ b31 + b32 + b33   b32 + b33   b33 ]

This is how the first column can add a linear combination of the second
and third columns, and the second column a multiple of the third column.
Suppose we want to change the third column by adding a multiple λ of the
first column; then we choose

[ b11  b12  b13 ] [ 1  0  λ ]   [ b11  b12  b13 + λb11 ]
[ b21  b22  b23 ] [ 0  1  0 ] = [ b21  b22  b23 + λb21 ]
[ b31  b32  b33 ] [ 0  0  1 ]   [ b31  b32  b33 + λb31 ]

Similarly for others. The rule is that to add a linear combination of other
columns to some column, you post-multiply by a unit matrix with additional
entries added at the appropriate places.
4. We show the result for D1, whose elements are dij with i and j running
from 1 to 2n. We are given that

dij = aij, if i ≤ n and j ≤ n,
dij = 0, if i ≤ n and j > n,
dij = cij, if i > n and j ≤ n,
dij = bij, if i > n and j > n.

In

|D1| = Σ(P) (−1)^P d1P(1) d2P(2) ... dnP(n) d(n+1)P(n+1) ... d(2n)P(2n)

P runs over the (2n)! permutations. From the above it is clear that when
P maps the set {1, 2, ..., n} into any set of indices which contains an index
from n + 1, ..., 2n, those terms are zero. So P must permute {1, ..., n} and
{n + 1, ..., 2n} separately, the c-elements never appear, and

|D1| = |A| |B|

The case for D2 is similar.
5. The ‘Hint’ given is actually the solution.
6. If we expand a determinant by the j-th column,

|A| = Σ(i = 1 to n) aij ãij for any fixed j

where ãij is the cofactor of the element aij. Now if we take a matrix in which
the j-th and k-th columns are identical, then

Σ(i = 1 to n) aij ãik = determinant of a matrix with two columns identical
                      = 0 for j ≠ k.

Therefore,

Σ(i = 1 to n) aij ãik = |A| δjk

We realize that ãik is the same as the ki-element of the matrix ÃT made from
the cofactors. If |A| ≠ 0, then dividing by |A| we get:

(1/|A|) Σ(i = 1 to n) (ÃT)ki aij = δkj

so that, with (A−1)ki = (1/|A|) (ÃT)ki, this is exactly A−1 A = 1.
Terminal Questions

2. If the real and imaginary parts of a Hermitian matrix are written as separate
matrices:

H = [hij],   hij = aij + i bij

then as

hij* = aij − i bij = hji = aji + i bji

we must have

aij = aji,   bij = −bji

Therefore, the matrix A = [aij] is real symmetric and B = [bij] is real
anti-symmetric.
3. C = [  x  y ]        C* = [  x*  y* ]
       [ −y  x ],            [ −y*  x* ]

therefore,

C C* = [ x x* − y y*       x y* + y x* ]
       [ −(x y* + y x*)    x x* − y y* ]

and

C* C = [ x* x − y* y       x* y + y* x ]
       [ −(x* y + y* x)    x* x − y* y ]

which are of the same form as C or C*.
4. There are infinitely many. These three are simple ones:

[ 0  1 ]    [ 1  1 ]    [ 1  0 ]
[ 1  0 ],   [ 0  1 ],   [ 0  1 ]
5. [ 1  1 ] [ 0   1 ]   [ 0   1 ] [ 1  1 ]   [ 1  0 ]
   [ 1  0 ] [ 1  −1 ] = [ 1  −1 ] [ 1  0 ] = [ 0  1 ]

i.e., the inverse of [ 1  1 ; 1  0 ] is [ 0  1 ; 1  −1 ].
6. R = [ cos θ  −sin θ ]        RT = [  cos θ  sin θ ]
       [ sin θ   cos θ ],            [ −sin θ  cos θ ]

Therefore,

RT R = [ 1  0 ]
       [ 0  1 ]

Let a 2 × 2 matrix A be

A = [ a  b ]
    [ c  d ]

then

AT = [ a  c ]
     [ b  d ]

and

AT A = [ a  c ] [ a  b ]   [ a² + c²   ab + cd ]   [ 1  0 ]
       [ b  d ] [ c  d ] = [ ab + cd   b² + d² ] = [ 0  1 ]

Therefore, a² + c² = 1, b² + d² = 1, ab + cd = 0; that is, the columns
x = (a, c) and y = (b, d) satisfy

x · x = 1,   y · y = 1,   x · y = 0

This shows that any orthogonal matrix has the same form as R.
7. If AT = A−1, then because |AT| = |A|,

1 = |AT A| = |AT| |A| = |A|²

Therefore, |A| = ±1.
8. Let A and B be two Hermitian matrices. Then their sum is also a Hermitian
matrix:

((A + B)†)ij = ((A + B)ji)* = (Aji)* + (Bji)* = (A†)ij + (B†)ij
             = Aij + Bij = (A + B)ij
For a general 2 × 2 matrix

A = [ a  b ]
    [ c  d ]

to be Hermitian,

A† = [ a*  c* ]   [ a  b ]
     [ b*  d* ] = [ c  d ] = A

or, a = a*, d = d*, b = c*, c = b*. This means that a and d are real, and b and
c are complex conjugates of each other for any Hermitian matrix. Writing
b = e − if with e, f real,

A = [ a        e − if ]
    [ e + if   d      ]

  = a [ 1  0 ] + d [ 0  0 ] + e [ 0  1 ] + f [ 0  −i ]
      [ 0  0 ]     [ 0  1 ]     [ 1  0 ]     [ i   0 ]

so that the four matrices

[ 1  0 ]    [ 0  0 ]    [ 0  1 ]    [ 0  −i ]
[ 0  0 ],   [ 0  1 ],   [ 1  0 ],   [ i   0 ]

form a basis. The standard choice for a basis is the unit matrix and the three
Pauli matrices σi, i = 1, 2, 3:

1 = [ 1  0 ]    σ1 = [ 0  1 ]    σ2 = [ 0  −i ]    σ3 = [ 1   0 ]
    [ 0  1 ],        [ 1  0 ],        [ i   0 ],        [ 0  −1 ]
10. By writing out the products,

[A, B]C + B[A, C] = ABC − BAC + BAC − BCA = [A, BC]

A[B, C] + [A, C]B = ABC − ACB + ACB − CAB = [AB, C]

Further, when [A, C] = 0 and [B, D] = 0,

A{B, C}D − C{A, D}B = ABCD + ACBD − CADB − CDAB = [AB, CD]

since the middle terms cancel: ACBD − CADB = CABD − CABD
= CA(BD − DB) = 0.
11. Given

A = [ 1  a  b ]        B = [ 1  d  e ]
    [ 0  1  c ],           [ 0  1  f ]
    [ 0  0  1 ]            [ 0  0  1 ]

we have:

AB = [ 1  a  b ] [ 1  d  e ]   [ 1   a + d   e + af + b ]
     [ 0  1  c ] [ 0  1  f ] = [ 0   1       f + c      ]
     [ 0  0  1 ] [ 0  0  1 ]   [ 0   0       1          ]
12. exp (iθσ1) = cos θ 1 + i sin θ σ1 = [ cos θ    i sin θ ]
                                        [ i sin θ  cos θ   ]
13. σ2 = [ 0  −i ]
         [ i   0 ],     σ2² = 1,

Therefore, all even powers of σ2 are 1 and all odd powers are σ2 itself.
Thus, using i² = −1, i³ = −i, i⁴ = 1 etc.,

exp (iθσ2) = 1 + iθσ2 − (θ²/2!) 1 − i (θ³/3!) σ2 + ...
           = (1 − θ²/2! + ...) 1 + i (θ − θ³/3! + ...) σ2
           = cos θ 1 + i sin θ σ2

Similarly, all even powers of σ3 are 1 and all odd powers are σ3 itself, so
that

exp (iθσ3) = cos θ 1 + i sin θ σ3
14. This question is similar to SAQ 4, and the only extra trick required is given
in the hint with the question.
15. L(w) = 1/√(1 − w²/c²) [  1    −w/c² ]
                          [ −w     1    ]

L(v) = 1/√(1 − v²/c²) [  1    −v/c² ]
                      [ −v     1    ]

Therefore,

L(w) L(v) = 1/(√(1 − w²/c²) √(1 − v²/c²)) [  1    −w/c² ] [  1    −v/c² ]
                                          [ −w     1    ] [ −v     1    ]

          = 1/(√(1 − w²/c²) √(1 − v²/c²)) [  1 + vw/c²    −(w + v)/c² ]
                                          [ −(w + v)       1 + vw/c²  ]

If we define V = (w + v)/(1 + wv/c²), then

L(w) L(v) = (1 + wv/c²)/(√(1 − w²/c²) √(1 − v²/c²)) [  1    −V/c² ]
                                                    [ −V     1    ]

Using the hint identity with a = v/c and b = w/c,

(1 − v²/c²)(1 − w²/c²) = (1 + vw/c²)² − ((v + w)/c)²
                       = (1 + vw/c²)² (1 − V²/c²)

so that (1 + wv/c²)/√((1 − w²/c²)(1 − v²/c²)) = 1/√(1 − V²/c²). Therefore,

L(w) L(v) = 1/√(1 − V²/c²) [  1    −V/c² ]
                           [ −V     1    ] = L(V)

which is the Lorentz transformation with the added velocity V.
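The closure of boosts under matrix multiplication can also be verified numerically, using the sign convention for L(v) adopted above (units with c = 1 and illustrative velocities):

```python
import numpy as np

def lorentz(v, c=1.0):
    """Boost matrix L(v) acting on the column (t, x)."""
    gamma = 1.0 / np.sqrt(1.0 - v**2 / c**2)
    return gamma * np.array([[1.0, -v / c**2],
                             [-v,  1.0]])

v, w = 0.5, 0.6                        # in units of c
u = (v + w) / (1.0 + v * w)            # relativistic velocity addition
assert np.allclose(lorentz(w) @ lorentz(v), lorentz(u))
```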