
Physics 125c

Course Notes
Density Matrix Formalism
Solutions to Problems
040520 Frank Porter

Exercises
1. Show that any linear operator in an n-dimensional Euclidean space
may be expressed as an n-term dyad. Show that this may be extended
to an infinite-dimensional Euclidean space.
Solution: Consider operator A in n-dimensional Euclidean space,
which may be expressed as a matrix in a given basis:
    A = \sum_{i,j=1}^{n} a_{ij}\, |e_i\rangle\langle e_j|
      = \sum_{i=1}^{n} |e_i\rangle \left( \sum_{j=1}^{n} a_{ij} \langle e_j| \right).   (1)

This is in the form of an n-term dyad A = \sum_{i=1}^{n} |\alpha_i\rangle\langle\beta_i|, with |\alpha_i\rangle = |e_i\rangle and \langle\beta_i| = \sum_{j=1}^{n} a_{ij}\langle e_j|.

An arbitrary vector |\psi\rangle in an infinite-dimensional Euclidean space may be expanded in a countable basis according to:

    |\psi\rangle = \sum_{i=1}^{\infty} \psi_i |e_i\rangle.   (2)

Another way to say this is that the basis is complete, with completeness relation:

    I = \sum_i |e_i\rangle\langle e_i|.   (3)

An arbitrary linear operator can thus be defined in terms of its actions on the basis vectors:

    A = IAI = \sum_{i,j} |e_i\rangle\langle e_i| A |e_j\rangle\langle e_j|.   (4)

The remainder proceeds as for the finite-dimensional case.
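As a quick cross-check of the finite-dimensional construction, here is a small NumPy sketch (names and dimensions chosen for illustration) that builds the dyads |e_i\rangle\langle\beta_i| from the rows of a random matrix A and confirms that their sum reproduces A:

    import numpy as np

    n = 4
    rng = np.random.default_rng(0)
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))   # arbitrary operator

    e = np.eye(n)                              # orthonormal basis vectors |e_i>
    # n-term dyad: |alpha_i> = |e_i>, <beta_i| = sum_j a_ij <e_j|  (row i of A)
    dyads = [np.outer(e[i], A[i, :]) for i in range(n)]
    assert np.allclose(sum(dyads), A)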

2. Suppose we have a system with total angular momentum 1. Pick a basis corresponding to the three eigenvectors of the z-component of angular momentum, J_z, with eigenvalues +1, 0, -1, respectively. We are given an ensemble described by density matrix:

    \rho = \frac{1}{4}\begin{pmatrix} 2 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}.

(a) Is \rho a permissible density matrix? Give your reasoning. For the remainder of this problem, assume that it is permissible. Does it describe a pure or mixed state? Give your reasoning.
Solution: Clearly \rho is hermitian. It is also trace one. This is almost sufficient for \rho to be a valid density matrix. We can see this by noting that, given a hermitian matrix, we can make a transformation of basis to one in which \rho is diagonal. Such a transformation preserves the trace. In this diagonal basis, \rho is of the form:

    \rho = a|e_1\rangle\langle e_1| + b|e_2\rangle\langle e_2| + c|e_3\rangle\langle e_3|,

where a, b, c are real numbers such that a + b + c = 1. This is clearly in the form of a density operator. Another way of arguing this is to consider the n-term dyad representation for a hermitian matrix.
However, we must also have that \rho is positive, in the sense that a, b, c cannot be negative. Otherwise, we would interpret some probabilities as negative. There are various ways to check this. For example, we can check that the expectation value of \rho with respect to any state is not negative. Thus, let an arbitrary state be |\psi\rangle = (\alpha, \beta, \gamma). Then

    \langle\psi|\rho|\psi\rangle = \frac{1}{4}\left[ 2|\alpha|^2 + |\beta|^2 + |\gamma|^2 + 2\Re(\alpha^*\beta) + 2\Re(\alpha^*\gamma) \right].   (5)

This quantity can never be negative, by virtue of the relation:

    |x|^2 + |y|^2 + 2\Re(x^*y) = |x + y|^2 \geq 0.

Therefore \rho is a valid density operator.
To determine whether \rho is a pure or mixed state, we consider:

    \mathrm{Tr}(\rho^2) = \frac{1}{16}(6 + 2 + 2) = \frac{5}{8}.   (6)

This is not equal to one, so \rho is a mixed state. Alternatively, one can show explicitly that \rho^2 \neq \rho.
(b) Given the ensemble described by \rho, what is the average value of J_z?
Solution: We are working in a diagonal basis for J_z:

    J_z = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix}.

The average value of J_z is:

    \langle J_z\rangle = \mathrm{Tr}(\rho J_z) = \frac{1}{4}(2 + 0 - 1) = \frac{1}{4}.

(c) What is the spread (standard deviation) in measured values of J_z?
Answer: We'll need the average value of J_z^2 for this:

    \langle J_z^2\rangle = \mathrm{Tr}(\rho J_z^2) = \frac{1}{4}(2 + 0 + 1) = \frac{3}{4}.

Then:

    \Delta J_z = \sqrt{\langle J_z^2\rangle - \langle J_z\rangle^2} = \frac{\sqrt{11}}{4}.
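The numbers above are easy to confirm numerically; a minimal NumPy sketch (illustrative, not part of the derivation) checking positivity, purity, the mean of J_z, and its spread:

    import numpy as np

    rho = np.array([[2, 1, 1],
                    [1, 1, 0],
                    [1, 0, 1]]) / 4.0
    Jz = np.diag([1.0, 0.0, -1.0])

    print(np.linalg.eigvalsh(rho))            # all eigenvalues >= 0: permissible
    print(np.trace(rho @ rho))                # 0.625 = 5/8 < 1: mixed state
    mean_Jz = np.trace(rho @ Jz)              # 1/4
    mean_Jz2 = np.trace(rho @ Jz @ Jz)        # 3/4
    print(mean_Jz, np.sqrt(mean_Jz2 - mean_Jz**2))   # 0.25, sqrt(11)/4 ~ 0.829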

3. Prove the first theorem in section ??.


Solution: The theorem we wish to prove is:
Theorem: Let P_1, P_2 be two primitive Hermitian idempotents (i.e., rays, or pure states, with P^\dagger = P, P^2 = P, and \mathrm{Tr}P = 1). Then:

    1 \geq \mathrm{Tr}(P_1P_2) \geq 0.   (7)

If \mathrm{Tr}(P_1P_2) = 1, then P_2 = P_1. If \mathrm{Tr}(P_1P_2) = 0, then P_1P_2 = 0 (vectors in ray 1 are orthogonal to vectors in ray 2).
First, suppose P_1 = P_2. Then \mathrm{Tr}(P_1P_2) = \mathrm{Tr}(P_1^2) = \mathrm{Tr}(P_1) = 1. If P_1P_2 = 0, then \mathrm{Tr}(P_1P_2) = \mathrm{Tr}(0) = 0.
More generally, expand P_1 and P_2 with respect to an orthonormal basis \{|e_i\rangle\}:

    P_1 = \sum_{i,j} a_{ij} |e_i\rangle\langle e_j|   (8)
    P_2 = \sum_{i,j} b_{ij} |e_i\rangle\langle e_j|   (9)
    P_1P_2 = \sum_{i,j,k} a_{ik} b_{kj} |e_i\rangle\langle e_j|.   (10)

We know from the discussion on pages 11-12 in the Density Matrix note that we can work in a basis in which a_{ij} = \delta_{ij}\delta_{i1}. In this basis,

    P_1P_2 = |e_1\rangle \sum_i b_{1i} \langle e_i|.   (11)

The trace, which is invariant under choice of basis, is

    \mathrm{Tr}(P_1P_2) = b_{11}.   (12)

We are almost there, but we need to show that b_{11} > 0 if P_1P_2 \neq 0. A simple way to see this is to notice that P_2 is the outer product of a vector with itself, hence b_{11} \geq 0, with b_{11} = 0 if and only if P_1P_2 = 0 (since b_{1i} = b_{i1} = 0 for all i if b_{11} = 0). Finally, b_{11} < 1 if P_1 \neq P_2.
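For a numerical illustration of the bound, note that for P_i = |\psi_i\rangle\langle\psi_i| one has \mathrm{Tr}(P_1P_2) = |\langle\psi_1|\psi_2\rangle|^2; the following sketch (random states, NumPy, names illustrative) simply samples this quantity and confirms it stays in [0, 1]:

    import numpy as np

    rng = np.random.default_rng(1)

    def random_pure_projector(n):
        psi = rng.normal(size=n) + 1j * rng.normal(size=n)
        psi /= np.linalg.norm(psi)
        return np.outer(psi, psi.conj())          # P = |psi><psi|

    for _ in range(1000):
        P1, P2 = random_pure_projector(5), random_pure_projector(5)
        t = np.trace(P1 @ P2).real
        assert -1e-12 <= t <= 1 + 1e-12           # 0 <= Tr(P1 P2) <= 1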
4. Prove the von Neumann mixing theorem.
Solution: The mixing theorem states that, given two distinct ensembles \rho_1 \neq \rho_2, a number 0 < \theta < 1, and a mixed ensemble \rho = \theta\rho_1 + (1-\theta)\rho_2, then

    s(\rho) > \theta s(\rho_1) + (1-\theta) s(\rho_2).   (13)

Let us begin by proving the following:
Lemma: Let

    x = \sum_{i=1}^{n} \theta_i x_i,   (14)

where 1 > x_i > 0 and 0 < \theta_i < 1 for all i, and \sum_{i=1}^{n} \theta_i = 1. Then

    -x\ln x > -\sum_{i=1}^{n} \theta_i x_i \ln x_i.   (15)

Proof: This follows because -x\ln x is a concave function of x. Its second derivative is

    \frac{d^2}{dx^2}(-x\ln x) = \frac{d}{dx}(-\ln x - 1) = -1/x.   (16)

For 1 > x > 0 this is always negative; -x\ln x is a concave function for 1 > x > 0. Hence, any point on a straight line between two points on the curve of this function lies below the curve. The lemma is for a linear combination of n points on the curve of -x\ln x. Here, x is a weighted average of the points x_i. The function -x\ln x evaluated at this weighted-average point is to be compared with the weighted average of the values of the function at the n points x_1, x_2, \ldots, x_n. Again, the function evaluated at the linear combination is a point on the curve, and the weighted average of the function over the n points must lie below that point on the curve. The region of possible values of the weighted average of the function is the polygon joining neighboring points on the curve, and the first and last points. See Fig. 1.
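The lemma itself is easy to test numerically; a short sketch (random weights and points, NumPy) comparing -x\ln x at the weighted average with the weighted average of -x_i\ln x_i:

    import numpy as np

    rng = np.random.default_rng(2)

    def f(x):
        return -x * np.log(x)                     # the concave function -x ln x

    for _ in range(1000):
        x = rng.uniform(0.01, 0.99, size=6)       # points x_i in (0, 1)
        w = rng.uniform(size=6)
        w /= w.sum()                              # weights theta_i summing to one
        assert f(np.dot(w, x)) >= np.dot(w, f(x)) - 1e-12   # Jensen's inequality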
Now we must see how our present problem can be put in the form where this lemma may be applied. Consider the spectral decompositions of \rho_1, \rho_2:

    \rho_1 = \sum_i a_i P_i = \sum_i a_i |e_i\rangle\langle e_i|   (17)
    \rho_2 = \sum_i b_i Q_i = \sum_i b_i |f_i\rangle\langle f_i|,   (18)

where the decompositions have been padded with complete sets of one-dimensional projections. That is, some of the a_i's and b_i's may be zero. The idea is that the sets \{|e_i\rangle\} and \{|f_i\rangle\} form complete orthonormal bases. Note that we cannot have P_i = Q_i in general. Then we have:

    \rho = \theta\rho_1 + (1-\theta)\rho_2   (19)
         = \sum_i \left[ \theta a_i |e_i\rangle\langle e_i| + (1-\theta) b_i |f_i\rangle\langle f_i| \right]   (20)
         = \sum_i c_i |g_i\rangle\langle g_i|,   (21)

where we have defined another complete orthonormal basis, \{|g_i\rangle\}, corresponding to a basis in which \rho is diagonal.

[Figure 1: Illustration showing that the weighted average of a concave function is smaller than the function evaluated at the weighted-average point. The allowed values of the ordered pairs (\langle x\rangle, \langle f(x)\rangle) lie in the polygon.]

We may expand the \{|e_i\rangle\} and \{|f_i\rangle\} bases in terms of the \{|g_i\rangle\} basis. For example, let

    |e_i\rangle = \sum_j A_{ij} |g_j\rangle,   (22)

where A = \{A_{ij}\} is a unitary matrix. The inverse transformation is

    |g_i\rangle = \sum_j A^*_{ji} |e_j\rangle.   (23)

Also,

    \langle e_i| = \left( \sum_j A_{ij} |g_j\rangle \right)^\dagger = \sum_j A^*_{ij} \langle g_j|,   (24)

and hence,

    |e_i\rangle\langle e_i| = \sum_j \sum_k A_{ij} A^*_{ik}\, |g_j\rangle\langle g_k|.   (25)

Similarly, we define matrix B such that

    |f_i\rangle\langle f_i| = \sum_j \sum_k B_{ij} B^*_{ik}\, |g_j\rangle\langle g_k|.   (26)

Substituting Eqns. 25 and 26 into Eqn. 20:

    \rho = \sum_i \sum_{j,k} \left[ \theta a_i A_{ij} A^*_{ik} + (1-\theta) b_i B_{ij} B^*_{ik} \right] |g_j\rangle\langle g_k|.   (27)

Thus, the numbers c_\ell are:

    c_\ell = \langle g_\ell|\rho|g_\ell\rangle
           = \sum_i \sum_{j,k} \left[ \theta a_i A_{ij} A^*_{ik} + (1-\theta) b_i B_{ij} B^*_{ik} \right] \delta_{\ell j}\delta_{\ell k}   (28)
           = \sum_i \left[ \theta |A_{i\ell}|^2 a_i + (1-\theta) |B_{i\ell}|^2 b_i \right].   (29)

The entropy for density matrix \rho is:

    s(\rho) = -\sum_i c_i \ln c_i   (30)
            = -\sum_i \left\{ \sum_j \left[ \theta|A_{ji}|^2 a_j + (1-\theta)|B_{ji}|^2 b_j \right]
              \ln \sum_j \left[ \theta|A_{ji}|^2 a_j + (1-\theta)|B_{ji}|^2 b_j \right] \right\}.

Note that c_i is of the form

    c_i = \sum_j \left( \lambda^{(a)}_{ij} a_j + \lambda^{(b)}_{ij} b_j \right),   (31)

where

    \lambda^{(a)}_{ij} = \theta|A_{ji}|^2,   (32)
    \lambda^{(b)}_{ij} = (1-\theta)|B_{ji}|^2.   (33)

Furthermore,

    \sum_j \left( \lambda^{(a)}_{ij} + \lambda^{(b)}_{ij} \right) = 1.   (34)

Thus, according to the lemma (some of the c_i's might be zero; there is an equality, 0 = 0, in such cases),

    -c_i \ln c_i > -\sum_j \left[ \lambda^{(a)}_{ij} a_j \ln a_j + \lambda^{(b)}_{ij} b_j \ln b_j \right].   (35)

Finally, we sum the above inequality over i, using \sum_i \lambda^{(a)}_{ij} = \theta and \sum_i \lambda^{(b)}_{ij} = 1-\theta (unitarity of A and B):

    s(\rho) = -\sum_i c_i \ln c_i
            > -\sum_i \sum_j \left[ \lambda^{(a)}_{ij} a_j \ln a_j + \lambda^{(b)}_{ij} b_j \ln b_j \right]   (36)
            = -\sum_j \left[ \theta\, a_j \ln a_j + (1-\theta)\, b_j \ln b_j \right]
            = \theta s(\rho_1) + (1-\theta) s(\rho_2).   (37)

This completes the proof.
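A numerical sanity check of (13) with randomly generated density matrices (a sketch; the helper functions are illustrative, and the entropy is evaluated from eigenvalues):

    import numpy as np

    rng = np.random.default_rng(3)

    def random_density_matrix(n):
        X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        rho = X @ X.conj().T                      # Hermitian, positive semidefinite
        return rho / np.trace(rho).real

    def entropy(rho):
        lam = np.linalg.eigvalsh(rho)
        lam = lam[lam > 1e-12]                    # use the convention 0 ln 0 = 0
        return float(-np.sum(lam * np.log(lam)))

    rho1, rho2 = random_density_matrix(4), random_density_matrix(4)
    for theta in (0.1, 0.3, 0.5, 0.7, 0.9):
        rho = theta * rho1 + (1 - theta) * rho2
        assert entropy(rho) > theta * entropy(rho1) + (1 - theta) * entropy(rho2)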


5. Show that an arbitrary linear operator on a product space H = H_1 \otimes H_2 may be expressed as a linear combination of operators of the form Z = X \otimes Y.
Solution: We are given an arbitrary linear operator A on H = H_1 \otimes H_2. We wish to show that there exists a decomposition of the form:

    A = \sum_i A_i Z_i = \sum_i A_i\, X_i \otimes Y_i,   (38)

where the A_i are coefficients, the X_i are operators on H_1, and the Y_i are operators on H_2.
Let \{f_i : i = 1, 2, \ldots\} be an orthonormal basis in H_1 and \{g_i : i = 1, 2, \ldots\} be an orthonormal basis in H_2. Then we may obtain an orthonormal basis for H composed of vectors of the form:

    e_{ij} = f_i \otimes g_j,   i = 1, 2, \ldots;\; j = 1, 2, \ldots.   (39)

It is readily checked that \{e_{ij}\} is, in fact, an orthonormal basis for H. Expand A with respect to basis \{e_{ij}\}:


    A = \sum_{i,j,m,n} A_{ij,mn} |e_{ij}\rangle\langle e_{mn}|   (40)
      = \sum_{i,j,m,n} A_{ij,mn}\, |f_i\rangle\otimes|g_j\rangle\; \langle f_m|\otimes\langle g_n|   (41)
      = \sum_{i,j,m,n} A_{ij,mn}\, |f_i\rangle\langle f_m| \otimes |g_j\rangle\langle g_n|   (42)
      = \sum_k A_k\, X_k \otimes Y_k,   (43)

where k is a relabeling for (i, j, m, n).
The only step above which requires further comment is setting:

    |f_i\rangle\otimes|g_j\rangle\; \langle f_m|\otimes\langle g_n| = |f_i\rangle\langle f_m| \otimes |g_j\rangle\langle g_n|.   (44)

One way to check this is as follows. Pick our bases to be in the form:

    (f_i)_k = \delta_{ik},   (45)
    (g_j)_\ell = \delta_{j\ell}.   (46)

Then

    \left( |f_i\rangle\otimes|g_j\rangle\; \langle f_m|\otimes\langle g_n| \right)_{k\ell,pq} = \delta_{ik}\delta_{j\ell}\delta_{mp}\delta_{nq},   (47)

and

    \left( |f_i\rangle\langle f_m| \otimes |g_j\rangle\langle g_n| \right)_{k\ell,pq} = \delta_{ik}\delta_{j\ell}\delta_{mp}\delta_{nq}.   (48)
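Identity (44) can also be verified directly with Kronecker products; a brief NumPy sketch with arbitrary (non-basis) vectors, which covers the general case by linearity:

    import numpy as np

    rng = np.random.default_rng(4)
    d1, d2 = 3, 2
    f = rng.normal(size=d1) + 1j * rng.normal(size=d1)    # vectors in H1
    m = rng.normal(size=d1) + 1j * rng.normal(size=d1)
    g = rng.normal(size=d2) + 1j * rng.normal(size=d2)    # vectors in H2
    nn = rng.normal(size=d2) + 1j * rng.normal(size=d2)

    # (|f> (x) |g>)(<m| (x) <n|)   versus   |f><m| (x) |g><n|
    lhs = np.outer(np.kron(f, g), np.kron(m, nn).conj())
    rhs = np.kron(np.outer(f, m.conj()), np.outer(g, nn.conj()))
    assert np.allclose(lhs, rhs)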
6. Let us try to improve our understanding of the discussions on the density matrix formalism, and the connections with information or entropy that we have made. Thus, we consider a simple two-state system. Let \rho be any general density matrix operating on the two-dimensional Hilbert space of this system.
(a) Calculate the entropy, s = -\mathrm{Tr}(\rho\ln\rho), corresponding to this density matrix. Express your result in terms of a single real parameter. Make sure the interpretation of this parameter is clear, as well as its range.
Solution: Density matrix \rho is Hermitian, hence diagonal in some basis. Work in such a basis. In this basis, \rho has the form:

    \rho = \begin{pmatrix} \theta & 0 \\ 0 & 1-\theta \end{pmatrix},   (49)

where 0 \leq \theta \leq 1 is the probability that the system is in state 1.
We have a pure state if and only if either \theta = 1 or \theta = 0. The entropy is

    s = -\theta\ln\theta - (1-\theta)\ln(1-\theta).   (50)

(b) Make a graph of the entropy as a function of the parameter. What is the entropy for a pure state? Interpret your graph in terms of knowledge about a system taken from an ensemble with density matrix \rho.
Solution:

[Figure 2: The entropy s as a function of \theta.]


The entropy for a pure state, with \theta = 1 or \theta = 0, is zero. The entropy increases as the state becomes less pure, reaching its maximum of \ln 2 when the probability of being in either state is 1/2, reflecting minimal knowledge about the state.
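The curve in Figure 2 is easy to regenerate; a minimal NumPy sketch of s(\theta) (plotting left as an optional line):

    import numpy as np

    theta = np.linspace(1e-6, 1 - 1e-6, 201)
    s = -theta * np.log(theta) - (1 - theta) * np.log(1 - theta)
    print(s.max(), np.log(2))        # maximum entropy ln 2 ~ 0.693, reached at theta = 1/2
    # import matplotlib.pyplot as plt; plt.plot(theta, s); plt.xlabel("theta"); plt.show()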
(c) Consider a system with ensemble \rho a mixture of two ensembles \rho_1, \rho_2:

    \rho = \theta\rho_1 + (1-\theta)\rho_2,   0 \leq \theta \leq 1.   (51)

As an example, suppose

    \rho_1 = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}   and   \rho_2 = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix},   (52)

in some basis. Prove that von Neumann's mixing theorem holds for this example:

    s(\rho) \geq \theta s(\rho_1) + (1-\theta)s(\rho_2),   (53)

with equality iff \theta = 0 or \theta = 1.
Solution: The entropy of ensemble \rho_1 is:

    s(\rho_1) = -\mathrm{Tr}(\rho_1\ln\rho_1) = -\frac{1}{2}\ln\frac{1}{2} - \frac{1}{2}\ln\frac{1}{2} = \ln 2 = 0.6931.   (54)

It may be noticed that \rho_2^2 = \rho_2, hence ensemble 2 is a pure state, with entropy s(\rho_2) = 0. Next, we need the entropy of the combined ensemble:

    \rho = \theta\rho_1 + (1-\theta)\rho_2 = \frac{1}{2}\begin{pmatrix} 1 & 1-\theta \\ 1-\theta & 1 \end{pmatrix}.   (55)

To compute the entropy, it is convenient to determine the eigenvalues; they are 1-\theta/2 and \theta/2. Note that they are in the range from zero to one, as they must be. The entropy is

    s(\rho) = -\left(1-\frac{\theta}{2}\right)\ln\left(1-\frac{\theta}{2}\right) - \frac{\theta}{2}\ln\frac{\theta}{2}.   (56)

We must compare s(\rho) with

    \theta s(\rho_1) + (1-\theta)s(\rho_2) = \theta\ln 2.   (57)

It is readily checked that equality holds for \theta = 1 or \theta = 0. For the case 0 < \theta < 1, take the difference of the two expressions:

    s(\rho) - [\theta s(\rho_1) + (1-\theta)s(\rho_2)]
      = -\left(1-\frac{\theta}{2}\right)\ln\left(1-\frac{\theta}{2}\right) - \frac{\theta}{2}\ln\frac{\theta}{2} - \theta\ln 2
      = -\ln\left[ \left(1-\frac{\theta}{2}\right)^{1-\theta/2} \left(\frac{\theta}{2}\right)^{\theta/2} 2^{\theta} \right].   (58)

This must be larger than zero if the mixing theorem is correct. This is equivalent to asking whether

    \left(1-\frac{\theta}{2}\right)^{1-\theta/2} \left(\frac{\theta}{2}\right)^{\theta/2} 2^{\theta}   (59)

is less than 1. This expression may be rewritten as

    \left(1-\frac{\theta}{2}\right)^{1-\theta/2} (2\theta)^{\theta/2}.   (60)

It must be less than one. To check, let's find its stationary point, by setting its derivative with respect to \theta equal to 0:

    0 = \frac{d}{d\theta}\left[ \left(1-\frac{\theta}{2}\right)^{1-\theta/2} (2\theta)^{\theta/2} \right]
      = \frac{d}{d\theta}\exp\left[ \left(1-\frac{\theta}{2}\right)\ln\left(1-\frac{\theta}{2}\right) + \frac{\theta}{2}\ln(2\theta) \right]
      \propto -\frac{1}{2}\ln\left(1-\frac{\theta}{2}\right) - \frac{1}{2} + \frac{1}{2}\ln(2\theta) + \frac{1}{2}
      = \frac{1}{2}\left[ \ln(2\theta) - \ln\left(1-\frac{\theta}{2}\right) \right].   (61)

Thus, the only stationary point occurs at 2\theta = 1-\theta/2, that is, \theta = 2/5. At this value of \theta, s(\rho) = 0.500, while \theta s(\rho_1) + (1-\theta)s(\rho_2) = (2/5)\ln 2 = 0.277. Since expression (60) equals one at the endpoints \theta = 0 and \theta = 1 and equals 0.8 < 1 at its only stationary point, it is less than one throughout 0 < \theta < 1. The theorem holds.
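A direct numerical check of this example over the whole range of \theta (a sketch; the entropy is computed from the eigenvalues rather than a matrix logarithm):

    import numpy as np

    def entropy(rho):
        lam = np.linalg.eigvalsh(rho)
        lam = lam[lam > 1e-12]
        return float(-np.sum(lam * np.log(lam)))

    rho1 = 0.5 * np.eye(2)
    rho2 = 0.5 * np.ones((2, 2))
    for theta in np.linspace(0.01, 0.99, 99):
        rho = theta * rho1 + (1 - theta) * rho2
        diff = entropy(rho) - theta * np.log(2)   # s(rho2) = 0
        assert diff > 0                           # strict away from the endpoints
    # the difference is largest at theta = 2/5, where s(rho) ~ 0.500 and theta*ln2 ~ 0.277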
7. Consider an N-dimensional Hilbert space. We define the real vector space, O, of Hermitian operators on this Hilbert space. We define a scalar product on this vector space according to:

    (x, y) = \mathrm{Tr}(xy),   x, y \in O.   (62)

Consider a basis \{B_i\} of orthonormal operators in O. The set of density operators is a subset of this vector space, and we may expand an arbitrary density matrix as:

    \rho = \sum_i B_i\,\mathrm{Tr}(B_i\rho) = \sum_i B_i \langle B_i\rangle.   (63)

By measuring the average values for the basis operators, we can thus determine the expansion coefficients for \rho.
(a) How many such measurements are required to completely determine \rho?
Solution: The question is, how many independent basis operators are there in O? An arbitrary N \times N complex matrix is described by 2N^2 real parameters. The requirement of Hermiticity provides the independent constraint equations:

    \Re(H_{ij}) = \Re(H_{ji}),   i < j,   (64)
    \Im(H_{ij}) = -\Im(H_{ji}),   i \leq j.   (65)

This is N + 2[N(N-1)/2] = N^2 equations, leaving N^2 free parameters. Thus, O is an N^2-dimensional vector space. But to completely determine the density matrix, we have one further constraint, that \mathrm{Tr}\rho = 1. Thus, it takes N^2 - 1 measurements to completely determine \rho.
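As a concrete illustration of the counting for N = 2 (a sketch, using the orthonormal basis \{1, \sigma_x, \sigma_y, \sigma_z\}/\sqrt{2} under the scalar product (62)): the identity coefficient is fixed by \mathrm{Tr}\rho = 1, leaving N^2 - 1 = 3 numbers to be measured.

    import numpy as np

    I2 = np.eye(2, dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    basis = [M / np.sqrt(2) for M in (I2, sx, sy, sz)]    # orthonormal under Tr(xy)

    rho = np.array([[0.7, 0.2 - 0.1j],
                    [0.2 + 0.1j, 0.3]])                   # some valid density matrix
    coeffs = [np.trace(B @ rho) for B in basis]           # <B_i> = Tr(B_i rho)
    assert np.allclose(sum(c * B for c, B in zip(coeffs, basis)), rho)
    # coeffs[0] = Tr(rho)/sqrt(2) is fixed by normalization; only three are independent.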
(b) If \rho is known to be a pure state, how many measurements are required?
Solution: We note that a complex vector in N dimensions is completely specified by 2N real parameters. However, one parameter is an arbitrary phase, and another parameter is eaten by the normalization constraint. Thus, it takes 2(N-1) parameters to completely specify a pure state.
If \rho is a pure state, then \rho^2 = \rho. How many additional constraints over the result in part (a) does this imply? Let's try to get a more intuitive understanding by attacking this issue from a slightly different perspective. Ask, instead, how many parameters it takes to build an arbitrary density matrix as a mixture of pure states. Our response will be to add pure states into the mixture one at a time, counting parameters as we go, until we cannot add any more.
It takes 2(N-1) parameters to define the first pure state in our mixture. The second pure state must be a distinct state; that is, it must be drawn from an (N-1)-dimensional subspace. Thus the second pure state requires 2(N-2) parameters to define. There will also be another parameter required to specify the relative probabilities of the first and second states, but we'll count up these probabilities later. The third pure state requires 2(N-3) parameters, and so forth, stopping at 2 \cdot 1 parameters for the (N-1)st pure state. Thus, it takes

    2\sum_{k=1}^{N-1} k = N(N-1)   (66)

parameters to define all the pure states in the arbitrary mixture.
There can be a total of N pure states making up a mixture (the Nth one required no additional parameters in the count we just made). It takes N-1 parameters to specify the relative probabilities of these N components in the mixture. Thus, the total number of parameters required is:

    N(N-1) + (N-1) = N^2 - 1.   (67)

Notice that this is just the result we obtained in part (a).


8. Two scientists (they happen to be twins, named Oivil and Livio, but never mind...) decide to do the following experiment: They set up a light source, which emits two photons at a time, back-to-back in the laboratory frame. The ensemble is given by:

    \rho = \frac{1}{2}\left( |LL\rangle\langle LL| + |RR\rangle\langle RR| \right),   (68)

where L refers to left-handed polarization, and R refers to right-handed polarization. Thus, |LR\rangle would refer to a state in which photon number 1 (defined as the photon which is aimed at scientist Oivil, say) is left-handed, and photon number 2 (the photon aimed at scientist Livio) is right-handed.
These scientists (one of whom is of a diabolical bent) decide to play a
game with Nature: Oivil (of course) stays in the lab, while Livio treks
to a point a light-year away. The light source is turned on and emits
two photons, one directed toward each scientist. Oivil soon measures
the polarization of his photon; it is left-handed. He quickly makes a
note that his brother is going to see a left-handed photon, sometime
after next Christmas.
Christmas has come and gone, and finally Livio sees his photon, and
measures its polarization. He sends a message back to his brother Oivil,
who learns in yet another year what he knew all along: Livio's photon
was left-handed.
Oivil then has a sneaky idea. He secretly changes the apparatus, without telling his forlorn brother. Now the ensemble is:

    \rho = \frac{1}{2}\left( |LL\rangle + |RR\rangle \right)\left( \langle LL| + \langle RR| \right).   (69)

He causes another pair of photons to be emitted with this new apparatus, and repeats the experiment. The result is identical to the first
experiment.
(a) Was Oivil just lucky, or will he get the right answer every time, for each apparatus? Demonstrate your answer explicitly, in the density matrix formalism.
Solution: Yup, he'll get it right, every time, in either case. Let's first define a basis so that we can see how it all works with explicit matrices:

    |LL\rangle = \begin{pmatrix}1\\0\\0\\0\end{pmatrix},\quad
    |LR\rangle = \begin{pmatrix}0\\1\\0\\0\end{pmatrix},\quad
    |RL\rangle = \begin{pmatrix}0\\0\\1\\0\end{pmatrix},\quad
    |RR\rangle = \begin{pmatrix}0\\0\\0\\1\end{pmatrix}.   (70)

In this basis the density matrix for the first apparatus is:

    \rho = \frac{1}{2}\left( |LL\rangle\langle LL| + |RR\rangle\langle RR| \right)
         = \frac{1}{2}\begin{pmatrix}1\\0\\0\\0\end{pmatrix}(1\ 0\ 0\ 0) + \frac{1}{2}\begin{pmatrix}0\\0\\0\\1\end{pmatrix}(0\ 0\ 0\ 1)
         = \frac{1}{2}\begin{pmatrix}1&0&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&1\end{pmatrix}.   (71)

Since \mathrm{Tr}(\rho^2) = 1/2, we know that this is a mixed state.


Now, Oivil observes that his photon is left-handed. His left-handed projection operator is

    P_L = \begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix},   (72)

so once he has made his measurement, the state has collapsed to:

    P_L\,\rho\,P_L = \frac{1}{2}\begin{pmatrix}1&0&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}.   (73)

This corresponds (up to normalization) to a pure |LL\rangle state, hence Livio will observe left-handed polarization.
For the second apparatus, the density matrix is

    \rho = \frac{1}{2}\left( |LL\rangle + |RR\rangle \right)\left( \langle LL| + \langle RR| \right)
         = \frac{1}{2}\begin{pmatrix}1&0&0&1\\0&0&0&0\\0&0&0&0\\1&0&0&1\end{pmatrix}.   (74)

Since \mathrm{Tr}(\rho^2) = 1, we know that this is a pure state. Applying the left-handed projection for Oivil's photon, we again obtain:

    P_L\,\rho\,P_L = \frac{1}{2}\begin{pmatrix}1&0&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}.   (75)

Again, Livio will observe left-handed polarization.
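Both statements in part (a) are easy to reproduce numerically; a compact NumPy sketch using the basis ordering |LL\rangle, |LR\rangle, |RL\rangle, |RR\rangle of (70):

    import numpy as np

    LL, LR, RL, RR = np.eye(4)                    # basis kets, ordering of eq. (70)
    rho_mixed = 0.5 * (np.outer(LL, LL) + np.outer(RR, RR))
    psi = (LL + RR) / np.sqrt(2)
    rho_pure = np.outer(psi, psi)

    PL_oivil = np.diag([1.0, 1.0, 0.0, 0.0])      # photon 1 (Oivil's) left-handed

    for rho in (rho_mixed, rho_pure):
        print(np.trace(rho @ rho))                # 0.5 for the first apparatus, 1.0 for the second
        collapsed = PL_oivil @ rho @ PL_oivil
        collapsed /= np.trace(collapsed)          # renormalize after the measurement
        print(np.allclose(collapsed, np.outer(LL, LL)))   # True: Livio must see left-handed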


(b) What is the probability that Livio will observe a left-handed photon, or a right-handed photon, for each apparatus? Is there a
problem with causality here? How can Oivil know what Livio is
going to see, long before he sees it? Discuss! Feel free to modify
the experiment to illustrate any points you wish to make.
Solution: Livio's left-handed projection operator is

    P_L(\mathrm{Livio}) = \begin{pmatrix}1&0&0&0\\0&0&0&0\\0&0&1&0\\0&0&0&0\end{pmatrix}.   (76)

The probability that Livio will observe a left-handed photon for the first apparatus is:

    \langle P_L(\mathrm{Livio})\rangle = \mathrm{Tr}\left[ \rho\, P_L(\mathrm{Livio}) \right] = 1/2.   (77)

The same result is obtained for the second apparatus.
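The 1/2 in (77) can be checked the same way for both apparatuses; a short self-contained numerical sketch:

    import numpy as np

    LL, LR, RL, RR = np.eye(4)
    rho_mixed = 0.5 * (np.outer(LL, LL) + np.outer(RR, RR))
    psi = (LL + RR) / np.sqrt(2)
    rho_pure = np.outer(psi, psi)

    PL_livio = np.diag([1.0, 0.0, 1.0, 0.0])      # photon 2 (Livio's) left-handed
    print(np.trace(rho_mixed @ PL_livio).real,    # 0.5
          np.trace(rho_pure @ PL_livio).real)     # 0.5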


Here is my take on the philosophical issue (beware!):
If causality means propagation of information faster than the speed of light, then the answer is no, causality is not violated. Oivil has not propagated any information to Livio at superluminal velocities. Livio made his own observation on the state of the system. Notice that the statistics of Livio's observations are unaltered; independent of what Oivil does, he will still see left-handed photons 50% of the time. If this were not the case, then there would be a problem, since Oivil could exploit this to propagate a message to Livio long after the photons are emitted.
However, people widely interpret (and write flashy headlines about) this sort of effect as a kind of "action at a distance": By measuring the state of his photon, Oivil instantly "kicks" Livio's far-off photon into a particular state (without being usable for the propagation of information, since Oivil can't tell Livio about it any faster than the speed of light). Note that this philosophical dilemma is not silly: The wave function for Livio's photon has both left- and right-handed components; how could a measurement of Oivil's photon pick which component Livio will see? Because of this, quantum mechanics is often labelled "non-local".
On the other hand, this philosophical perspective may be avoided (ignored): It may be suggested that it doesn't make sense to talk this way about the wave function of Livio's photon, since the specification of the wave function involves also Oivil's photon. Oivil is merely making a measurement of the state of the two-photon system, by observing the polarization of one of the photons, and knowing the coherence of the system. He doesn't need to make two measurements to know both polarizations; they are completely correlated. Nothing is causing anything else to happen at faster than light speed. We might take the (deterministic?) point of view that it was already determined at production which polarization Livio would see for a particular photon; we just don't know what it will be unless Oivil makes his measurement. There appears to be no way of falsifying this point of view, as stated. However, taking this point of view leads to the further philosophical question of how the pre-determined information is encoded: is the photon propagating towards Livio somehow carrying the information that it is going to be measured as left-handed? This conclusion seems hard to avoid. It leads to the notion of "hidden variables", and there are theories of this sort, which are testable.
We know that our quantum mechanical foundations are compatible with special relativity, hence with the notion of causality that it implies.
As Feynman remarked several years ago in a seminar I arranged concerning EPR, the substantive question to be asking is, "Do the predictions of quantum mechanics agree with experiment?" So far the answer is a resounding "yes". Indeed, we often rely heavily on this quantum coherence in carrying out other research activities. Current experiments to measure CP violation in B^0 decays crucially depend on it, for example.
9. Let us consider the application of the density matrix formalism to the problem of a spin-1/2 particle (such as an electron) in a static external magnetic field. In general, a particle with spin may carry a magnetic moment, oriented along the spin direction (by symmetry). For spin-1/2, we have that the magnetic moment (operator) is thus of the form:

    \mu = \frac{1}{2}\gamma\sigma,   (78)

where \sigma are the Pauli matrices, the 1/2 is by convention, and \gamma is a constant, giving the strength of the moment, called the gyromagnetic ratio. The term in the Hamiltonian for such a magnetic moment in an external magnetic field B is just:

    H = -\mu\cdot B.   (79)

Our spin-1/2 particle may have some spin orientation, or polarization vector, given by:

    P = \langle\sigma\rangle.   (80)

Drawing from our classical intuition, we might expect that in the external magnetic field the polarization vector will exhibit a precession about the field direction. Let us investigate this.

Recall that the expectation value of an operator may be computed from the density matrix according to:

    \langle A\rangle = \mathrm{Tr}(\rho A).   (81)

Furthermore, recall that the time evolution of the density matrix is given by:

    i\frac{\partial\rho}{\partial t} = [H(t), \rho(t)].   (82)

What is the time evolution, dP/dt, of the polarization vector? Express your answer as simply as you can (more credit will be given for right answers that are more physically transparent than for right answers which are not). Note that we make no assumption concerning the purity of the state.
Solution: Let us consider the ith component of the polarization:

    i\frac{dP_i}{dt} = i\frac{d\langle\sigma_i\rangle}{dt}   (83)
                     = i\frac{\partial}{\partial t}\mathrm{Tr}(\rho\sigma_i)   (84)
                     = i\,\mathrm{Tr}\!\left( \frac{\partial\rho}{\partial t}\sigma_i \right)   (85)
                     = \mathrm{Tr}\left( [H, \rho]\sigma_i \right)   (86)
                     = \mathrm{Tr}\left( \rho[\sigma_i, H] \right)   (87)
                     = -\frac{\gamma}{2}\sum_{j=1}^{3} B_j\,\mathrm{Tr}\left( \rho[\sigma_i, \sigma_j] \right).   (88)

To proceed further, we need the density matrix for a state with polarization P. Since \rho is hermitian, it must be of the form:

    \rho = a(1 + b\cdot\sigma).   (89)

But its trace must be one, so a = 1/2. Finally, to get the right polarization vector, we must have b = P.
Thus, we have

    i\frac{dP_i}{dt} = -\frac{\gamma}{4}\sum_{j=1}^{3} B_j\left\{ \mathrm{Tr}[\sigma_i, \sigma_j] + \sum_{k=1}^{3} P_k\,\mathrm{Tr}\left( [\sigma_i, \sigma_j]\sigma_k \right) \right\}.   (90)

Now [\sigma_i, \sigma_j] = 2i\epsilon_{ijk}\sigma_k, which is traceless. Further, \mathrm{Tr}\left( [\sigma_i, \sigma_j]\sigma_k \right) = 4i\epsilon_{ijk}. This gives the result:

    \frac{dP_i}{dt} = -\gamma\sum_{j=1}^{3}\sum_{k=1}^{3} \epsilon_{ijk} B_j P_k.   (91)

This may be re-expressed in the vector form:

    \frac{dP}{dt} = \gamma\, P\times B.   (92)
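Equation (92) can be cross-checked by integrating the von Neumann equation (82) directly for \rho = (1 + P\cdot\sigma)/2; a short sketch (the values of \gamma, B, the initial polarization, and the step size are arbitrary choices here):

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    sigma = [sx, sy, sz]

    gamma = 1.3
    B = np.array([0.0, 0.0, 2.0])                          # field along z (arbitrary numbers)
    H = -0.5 * gamma * sum(B[i] * sigma[i] for i in range(3))   # H = -mu . B

    P0 = np.array([1.0, 0.0, 0.5])                         # initial polarization
    rho = 0.5 * (np.eye(2) + sum(P0[i] * sigma[i] for i in range(3)))

    dt, nsteps = 1e-4, 50000
    for _ in range(nsteps):                                # Euler steps of i drho/dt = [H, rho]
        rho = rho - 1j * dt * (H @ rho - rho @ H)

    P_num = np.array([np.trace(rho @ s).real for s in sigma])
    phi = gamma * np.linalg.norm(B) * dt * nsteps          # predicted precession angle
    P_pred = np.array([ np.cos(phi) * P0[0] + np.sin(phi) * P0[1],
                       -np.sin(phi) * P0[0] + np.cos(phi) * P0[1],
                        P0[2]])
    print(np.max(np.abs(P_num - P_pred)))                  # small: precession about B at rate gamma|B|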

10. Let us consider a system of N spin-1/2 particles (see the previous problem) per unit volume in thermal equilibrium, in our external magnetic field B. Recall that the canonical distribution is:

    \rho = \frac{e^{-H/T}}{Z},   (93)

with partition function:

    Z = \mathrm{Tr}\left( e^{-H/T} \right).   (94)

Such a system of particles will tend to orient along the magnetic field, resulting in a bulk magnetization (having units of magnetic moment per unit volume), M.
(a) Give an expression for this magnetization (don't work too hard to evaluate).
Solution: Let us orient our coordinate system so that the z-axis is along the magnetic field direction. Then M_x = 0, M_y = 0, and:

    M_z = \frac{1}{2}N\gamma\langle\sigma_z\rangle   (95)
        = \frac{N\gamma}{2Z}\,\mathrm{Tr}\left[ e^{-H/T}\sigma_z \right],   (96)

where H = -\gamma B_z\sigma_z/2.
(b) What is the magnetization in the high-temperature limit, to lowest non-trivial order (this I want you to evaluate as completely as you can!)?

Solution: In the high-temperature limit, we'll discard terms of order higher than 1/T in the expansion of the exponential: e^{-H/T} \approx 1 - H/T = 1 + \gamma B_z\sigma_z/2T. Thus,

    M_z = \frac{N\gamma}{2Z}\,\mathrm{Tr}\left[ (1 + \gamma B_z\sigma_z/2T)\,\sigma_z \right]   (97)
        = \frac{N\gamma^2 B_z}{2ZT}.   (98)

Furthermore,

    Z = \mathrm{Tr}\,e^{-H/T}   (99)
      = 2 + O(1/T^2).   (100)

And we have the result:

    M_z = N\gamma^2 B_z/4T.   (101)

This is referred to as the Curie Law (for magnetization of a system of spin-1/2 particles).
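For this two-level system the trace in (96) can also be done exactly, giving M_z = (N\gamma/2)\tanh(\gamma B_z/2T), whose leading high-temperature term reproduces (101); a brief numerical sketch (arbitrary units) comparing the exact result with the Curie law:

    import numpy as np

    N, gamma, Bz = 1.0, 1.1, 0.7                   # arbitrary units
    sz = np.diag([1.0, -1.0])
    H = -0.5 * gamma * Bz * sz

    for T in (1.0, 10.0, 100.0):
        w = np.diag(np.exp(-np.diag(H) / T))       # e^{-H/T}; H is diagonal here
        rho = w / np.trace(w)                      # canonical ensemble, eq. (93)
        Mz = 0.5 * N * gamma * np.trace(rho @ sz)
        print(T, Mz,
              0.5 * N * gamma * np.tanh(gamma * Bz / (2 * T)),   # exact two-level result
              N * gamma**2 * Bz / (4 * T))                        # Curie law, eq. (101)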
