Queen’s University Belfast

School of Mathematics and Physics

PHY3012 SOLID STATE PHYSICS

A T Paxton, November 2012


Books

The primary textbook for this course is


H Ibach and H Lüth, Solid State Physics, Springer, 4th Edition, 2009

Additionally, I recommend
J R Hook and H E Hall, Solid State Physics, Wiley, 2nd Edition, 2000
S Elliott, The Physics and Chemistry of Solids, Wiley, 1998
C Kittel, Introduction to Solid State Physics, Wiley, 8th Edition, 2005

Slightly more advanced texts that I recommend are


M P Marder, Condensed Matter Physics, Wiley, 2000
N Ashcroft and N D Mermin, Solid State Physics, Holt-Sanders, 1976
Contents

0. Lagrangian and Hamiltonian mechanics 2


1. Lattice vibrations and phonons 4
1.1 Vibrations in the monatomic lattice 4
1.2 Notes on section 1.1 9
1.3 Boundary conditions and density of states in three dimensions 11
1.4 A useful identity 16
1.5 Vibrations in the diatomic lattice 16
1.6 Phonons 20
1.6.1 The simple harmonic oscillator 21
1.6.2 Quantisation of the lattice waves 24
1.6.3 The Einstein model 30
1.6.4 The Debye model 32
1.7 Particle–particle interactions—phonon scattering 34
1.8 Anharmonic effects 37
1.8.1 Phonon–phonon scattering 38
2. Electrons 39
2.1 The hydrogen molecule 40
2.1.1 The H2+ molecular ion 40
2.1.2 The hydrogen molecule 42
2.1.2.1 The molecular orbital approach 42
2.1.2.2 The valence bond approach 44
2.1.2.3 Molecular orbital and valence bond pictures compared 47
2.2 Free electron gas 50
2.3 Effects of the crystal lattice 58
2.4 The tight binding picture 60
2.4.1 Case study—Si and other semiconductors 66
2.4.2 Case study—transition metals 68
2.5 Magnetism 70
2.5.1 The localised picture 71
2.5.2 The itinerant picture 73
2.5.3 Pauli paramagnetism 75
2.5.4 The free electron band model of ferromagnetism and Stoner enhancement 77
3. Thermal and electrical conductivity—transport 81
3.1 Scattering 81
3.2 Electrical conduction in semiconductors 83
3.2.1 Fermi level and number of carriers in a semiconductor 86
3.2.2 Conductivity of a semiconductor 89
3.2.3 Comparison between a metal and a non degenerate semiconductor 91
3.3 Electron dynamics in metals 92
3.3.1 Wavepackets 92
3.3.2 Velocity of free electrons 94
3.3.3 Equation of motion of a wavepacket 95
3.4 Scattering using the “kinetic method”—relaxation time 100
3.4.1 Relaxation time 100
3.4.2 The kinetic method 101
3.4.3 Mean free path 103
3.4.4 Thermal conductivity 103
3.4.5 The role of Umklapp processes in phonon thermal conductivity 106
3.5 Boltzmann formulation of electrical conductivity in metals 108
3.5.1 Derivation of the Boltzmann equation 110
3.5.2 Solution of the linearised Boltzmann equation 113
3.6 The Wiedemann–Franz law 116
3.7 Plausibility argument for the linearised Boltzmann equation 117
3.8 Thermoelectric effects 118
3.8.1 Estimate of the thermopower in metals and semiconductors 120
3.8.2 The Peltier effect 121
3.9 Onsager relations 122
There follows a list of the most commonly used quantities, with occasional comparison
with the usage in the textbooks, Ibach and Lüth (IL), Kittel (K), Hook and Hall (HH)
and Elliott (E). I will always put vectors in bold type face, and their magnitude in normal
typeface, e.g., p = |p|. The complex conjugate of a complex number is indicated by a bar
over it, for example, ψ̄.

a spacing between lattice planes, lattice constant, acceleration


ai basis vectors of the direct lattice a, b, c (HH, E)
a, a+ destruction and creation operators for phonons
A force constant matrix; cubic coefficient in heat capacity; exchange integral
bi basis vectors of the reciprocal lattice gi (IL), a∗ , b∗ , c∗ (HH, E)
B magnetic induction
c speed of sound
C spring constant, heat capacity
Ce heat capacity due to electrons
ce heat capacity per unit volume due to electrons
Cph heat capacity due to phonons
cph heat capacity per unit volume due to phonons
e charge on the electron is −e
e base of the natural logarithms
E eigenvalue, especially of an electron state E (IL),  (K), ε (HH), E (E)
EF Fermi energy
Ec energy at conduction band edge
Ev energy at valence band edge
Eg semiconductor energy gap
E electric field E (IL), E (HH, K, E)
f (E), f (k), fFD Fermi–Dirac function
F force
g reciprocal lattice vector G (IL, K, E), Ghkl (HH)
ge electron g-factor
h̄ reduced Planck constant h̄ = h/2π
h hopping integral
H hamiltonian, magnetic field
i √−1
i, j, l labels of atoms
I Stoner parameter
Je electric current density
JQ heat current density
k wavevector, vector in reciprocal space
kB Boltzmann constant
kF Fermi wavevector
L Lagrangian, length
ℓ, m, n direction cosines
m mass of the electron, magnetic moment per atom / µB
m = n↑ − n↓
m∗ effective mass
M mass of an atom
n, m, p integers
n(ω), n(E) density of states D (K), V · D (IL), g (E, HH)
ns (E) density of states per spin per atom
ne number of electrons per unit volume, electron density
nh number of holes per unit volume, hole density
n↑ , n↓ number of up or down spin electrons per atom
Nc number of unit cells in the crystal
Nat number of atoms in the crystal
Ne number of electrons in the crystal
Ns , Nd number of s, d-electrons per atom
p momentum
q̃k polarisation vector (vector amplitude)
q, qk position coordinate; normal mode coordinate
q scattering vector −K (IL)
r a vector in direct space
rj , R a vector denoting the crystallographic position of atom j r (IL)
δrj the vector displacement of atom j from its equilibrium position u (IL, K, E, HH)
S overlap integral, entropy
S thermopower
t time
T absolute temperature
TF Fermi temperature
U internal energy E (HH)
vF Fermi velocity
vd drift velocity
v, vg group velocity
V volume
W potential energy, d-band width
α index, expansion coefficient, variational parameter, spin function
β spin function
γ linear coefficient in heat capacity
δij Kronecker delta
ε0 permittivity of free space (ε0 µ0 = 1/c²)
θD Debye temperature
θE Einstein temperature
κ thermal conductivity
λ wavelength, mean free path
µ chemical potential, carrier mobility
µ0 permeability of free space (4π × 10−7 N A−2 )
µB Bohr magneton
µe electron moment ≈ µB
Π Peltier coefficient
σ conductivity
τ relaxation time
χ magnetic susceptibility
ψ, φ, ϕ wavefunction
ω angular frequency
ωD Debye frequency
ωE Einstein frequency
PHY3012 Solid State Physics Page 1

Introduction

We are concerned with the physics of matter when it is in the solid, usually crystalline,
state. You are faced with the complex problem of describing and understanding the
properties of an assembly of some 10²³–10²⁴ atoms, closely bonded together. The subject
can be divided into two subdisciplines.
1. The nature of the bonding (mechanical properties etc.)
2. The physics of solids (spectroscopy, energy bands, transport properties etc.)

We will deal largely with the second. Much of the physics of solids can be understood
by regarding the solid as matter containing certain elementary excitations. These inter-
act with externally applied fields (including electromagnetic radiation) as well as each
other to produce the observable phenomena which we measure. Examples of elementary
excitations are
electrons and holes
phonons
plasmons
magnons and spin waves

We will study electrons and phonons. Generally we will describe the mechanics of a single
particle at a time. Many particle physics is usually beyond the scope of an undergraduate
physics programme.

0. Lagrangian and Hamiltonian mechanics

A body with mass M moves in one dimension. The position coordinate at time t is
denoted q. By Newton’s second law, we have
    M d²q/dt² ≡ M q̈ = F(q)
where F (q) is the force exerted on the body when it is at position q. We take it that this
force arises from the fact that the body is moving in some potential W . For example,
here is a simple harmonic potential, W ∝ q 2 :
potential energy W

The force is, therefore,


dW
F (q) = −
dq
and the kinetic energy is

    K = ½Mv² = ½M(dq/dt)² ≡ ½M q̇²
where v is the velocity of the body.

We define a function called the Lagrangian as

L(q̇, q) = K − W (0.1)

the difference between the kinetic and potential energies. If we differentiate L with respect
to q̇ we get

    ∂L/∂q̇ = d/dq̇ (½M q̇²) = M q̇                            (0.2)
from which it follows that
    d/dt (∂L/∂q̇) = M q̈     (mass × acceleration)

On the other hand differentiating L with respect to q you find

    ∂L/∂q = −dW/dq = F     (force)

Clearly, then, Newton’s second law (force = mass × acceleration) can be written in the
form
    d/dt (∂L/∂q̇) − ∂L/∂q = 0
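This form of Newton's second law can be checked numerically; here is a minimal sketch (not part of the original notes; the values of M, C and A are arbitrary illustrations) for the harmonic potential W = ½Cq², whose Euler–Lagrange equation reduces to M q̈ = −Cq with solution q(t) = A cos(√(C/M) t):

```python
# Sketch (not from the notes): for W = (1/2) C q^2 the Euler-Lagrange
# equation d/dt(dL/dq') - dL/dq = 0 reduces to M q'' = -C q, solved by
# q(t) = A cos(w0 t) with w0 = sqrt(C/M).  Check the residual M q'' + C q
# by central finite differences.
import numpy as np

M, C, A = 2.0, 3.0, 1.0          # arbitrary illustrative values
w0 = np.sqrt(C / M)              # natural frequency of the oscillator

t = np.linspace(0.0, 10.0, 100001)
dt = t[1] - t[0]
q = A * np.cos(w0 * t)

# second time derivative on the interior grid points
qddot = (q[2:] - 2.0 * q[1:-1] + q[:-2]) / dt**2

# Euler-Lagrange / Newton residual: should vanish up to discretisation error
residual = M * qddot + C * q[1:-1]
max_residual = np.max(np.abs(residual))
```

The residual is limited only by the finite-difference step, confirming that the Lagrangian recipe reproduces force = mass × acceleration.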
Whereas the Lagrangian is an energy functional that allows us to deduce the equation of
motion, the Hamiltonian allows us to write down the total energy in terms of q and q̇,

H =K +W

Normally, though, we write


    K = p²/2M
where p is the momentum “conjugate” to q, and p = M v = M q̇ is equivalent to writing

    p = ∂L/∂q̇

from equation (0.2). In order to go over to quantum mechanics we do just two things.
1. Make the substitution

    p −→ −ih̄ ∂/∂q
from which appears the commutation relation

[q, p] = ih̄

The conjugate observables p and q are known as complementary in quantum mechanics.
2. The resulting Hamiltonian operator

    H = p²/2M + W
      = −(h̄²/2M) ∂²/∂q² + W

is then put into the time dependent Schrödinger equation


    ih̄ ∂ψ/∂t = Hψ
which is the equation of motion of a quantum particle described by the wavefunction
ψ. Stationary states of the particle are described by the time independent Schrödinger
equation
Hψ = Eψ
and ψ is then an eigenfunction of H having the eigenvalue E.

1. Lattice vibrations and phonons

Phonons are quantised lattice vibrations. We will pursue the following steps.
1. Set up the problem of lattice waves using classical, Newtonian mechanics and the
elementary properties of waves.
2. Solve the problem for a simple model. This is a typical approach in solid state
physics.
3. Describe the important consequences of crystal periodicity.
4. Quantise the waves.

1.1 Vibrations in the monatomic lattice

Vibrations of the atoms in a crystal take the form of waves, standing or travelling, whose
amplitude is the largest displacement of an atom from its equilibrium position. In addition
to its amplitude a wave is characterised by its wavevector, k, and angular frequency,
ω = 2π × frequency.

The atoms’ official, crystallographic positions are denoted rj . j runs from 1 to Nc , where
Nc is the number of unit cells in the crystal; and to make the notation simple, we consider
the case where there is only one atom per unit cell. The atomic displacements are denoted
δrj (‘δ’ meaning ‘a small change in. . . ’) and can be written

    δrj = (1/√Nc) q̃k e^{i(k·rj−ωt)}                        (1.1.1)

q̃k is called the polarisation vector or vector amplitude; 1/√Nc is only for normalisation.
The wavelength is

    λ = 2π/k
The angular frequency is a function of the wavelength, thus

ω = ω(k)

and it is essentially this “dispersion relation” we shall be looking for; t is the time.

These waves are excited by temperature or externally applied fields or forces. The crystal
has lowest potential energy when all the δrj are zero. We call this energy

W = W (r1 , r2 , . . . , rNc )

and then expand the potential energy of the crystal with lattice waves in a Taylor series,

    Upot = W + Σj (∂W/∂rj) δrj + ½ Σij (∂²W/∂ri∂rj) δri δrj + …        (1.1.2)

where the partial derivatives are evaluated at the equilibrium positions of the atoms.†
Now the first term can be taken as zero, and this defines our zero of energy as the
potential energy of the crystal at rest. The second term is also zero because the crystal is
in equilibrium with respect to small atomic displacements. If we truncate the Taylor series
after the second order terms we are said to be working within the harmonic approximation.

Let us write down the hamiltonian in this approximation,

H = kinetic energy + potential energy


      = Σj pj²/2M + ½ Σij Aij δri δrj                      (1.1.3)

Aij = ∂²W/∂ri∂rj is called the force constant matrix. It is the change in potential energy when
atom i is moved to ri + δri and atom j is simultaneously moved to rj + δrj. M is the
mass of each atom. (Note, Aij really should have further indices indicating the directions
in which atoms i and j are moved. A typical matrix element is Aix,jy for example. I can
leave these out for simplicity and anyway we’ll only end up dealing with one-dimensional
cases.)

The force constant matrices can only be determined if we have a complete knowledge of
the bonding in the solid.

Next, we want the equation of motion for atom j. In classical, hamiltonian mechanics,
we have
    dpj/dt = −∂H/∂rj = M d²δrj/dt²

rate of change of momentum = force acting upon atom j = mass × acceleration.
Differentiating equation (1.1.3) we get

    M d²δrj/dt² = −∂H/∂rj = −Σi Aij δri

We can easily differentiate equation (1.1.1) twice with respect to time:

    d²δrj/dt² = −ω² (1/√Nc) q̃k e^{i(k·rj−ωt)}
† As a matter of notation, when I write, say,

    Σi (∂W/∂ri) δri

I really mean

    Σiα (∂W/∂riα) δriα

and α = 1, 2, 3 (or x, y, z) is an index to each component of the vector.



We now insert equation (1.1.1) for δrj and equate the two right hand sides:
    Σi Aij (1/√Nc) q̃k e^{i(k·ri−ωt)} = M ω² (1/√Nc) q̃k e^{i(k·rj−ωt)}

Cancel e^{−iωt}/√Nc from both sides and our equation of motion for atom j is, finally,

    M ω² q̃k = Σi Aij q̃k e^{ik·(rj−ri)}                    (1.1.4)

I have left in the q̃k on either side to emphasise that there is an equation of motion for
each of the three directions of polarisation of the wave. If there were more than one atom
per unit cell, then there would be an equation like (1.1.4) for each atom type.

In general we can’t solve this to get the dispersion relation unless we know the force
constants, which is a hard problem. But we can invoke a simple model for the crystal’s
force constants for which we can solve the equation of motion. Consider a cubic crystal
with one atom per unit cell. We also consider those ‘modes’ of vibration whose wavevector
is perpendicular to a simple lattice plane and for which the displacement is either parallel
or perpendicular to the wavevector. Under these circumstances whole planes of atoms
move in unison and the problem becomes confined to one dimension (see Kittel, chapter 5).

[Diagram: successive atomic planes with displacements δr1, δr2, δr3, …, spacing a: in a longitudinal wave the displacements are parallel to k; in a transverse wave they are perpendicular to k.]

We only need to solve equation (1.1.4) for one of the Nc atoms in the crystal since they
are equivalent by translational symmetry. Of course, as you can see in the diagram, the
displacement of an atom is not the same as one of its neighbours at any one time, but
they differ only in their phase. If a is the lattice constant then
rj+1 = rj + a.

Inserting this into equation (1.1.1), I immediately get

    δrj+1 = δrj e^{ika}                                    (1.1.5)

which is a mathematical statement of the same assertion.

The simple model that we can solve requires us to suppose that there are only forces
between neighbouring planes of atoms. This is a lousy approximation, but it will serve to
illustrate much of the essential physics of lattice waves. It’s as if the atoms were connected
by springs with a spring constant (stiffness) C:

[Diagram: planes j−1, j, j+1, j+2 joined by springs of stiffness C, with displacements δrj and δrj+1 marked.]

In this diagram, each solid dot represents a whole plane of atoms.

Considering only neighbouring planes, there will be contributions to the potential energy,
proportional to the difference in displacement squared, from each spring.
    Upot^{j,j+1} = C (δrj − δrj+1)²

We sum this over all planes and divide by two (otherwise we’ve counted each plane twice)

    Upot = ½ C Σj (δrj − δrj+1)²
         = ½ C Σj (δrj² + δrj+1² − 2 δrj δrj+1)
         = C Σj (δrj δrj − δrj δrj+1)

You can see how I got from the second to the third line by noticing that the first two
terms in the second line are identical—a sum of squares of all the δrj . Now compare the
above equation with equation (1.1.3) which states, in one dimension,

    Upot = ½ Σij Aij δri δrj

You see that we can identify all the Aij for our simple model:

Aii = 2C
Ai,i+1 = Ai,i−1 = −C
all other Aij = 0

so we can solve the equation of motion (1.1.4) for longitudinal modes

    M ωL² = Σi Aij e^{ik(rj−ri)}

There are only three non zero terms on the right hand side; k is parallel to r, and let us
write the distance between neighbouring atoms as rj − ri = a, the lattice constant. We
get

    M ωL² = C(2 − e^{ika} − e^{−ika})

and therefore by elementary trigonometry, and remembering that e^{iθ} = cos θ + i sin θ,

    ωL = 2 √(C/M) |sin ½ka|

This is the dispersion relation for longitudinal waves in our model crystal. The vertical
bars denote “absolute value,” and we don’t expect ω to be negative.

Transverse waves will have exactly the same solution but with a different, presumably
smaller, spring constant C′:

    ωT = 2 √(C′/M) |sin ½ka|

We can sketch our dispersion relation, roughly like this:

[Sketch: ωL(k) and ωT(k) for −π/a ≤ k ≤ π/a; both branches rise from zero at k = 0 to maxima of 2√(C/M) (longitudinal) and 2√(C′/M) (transverse) at the zone boundaries k = ±π/a.]
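The dispersion relation just derived is easy to explore numerically. A minimal sketch (not part of the notes; C, M and a are arbitrary illustrative values) evaluating ωL(k) = 2√(C/M)|sin ½ka| over the first Brillouin zone:

```python
# Sketch: monatomic-chain dispersion w_L(k) = 2 sqrt(C/M) |sin(ka/2)|.
import numpy as np

C, M, a = 1.0, 1.0, 1.0          # arbitrary spring constant, mass, spacing

def omega_L(k):
    """Longitudinal branch of the nearest-neighbour chain."""
    return 2.0 * np.sqrt(C / M) * np.abs(np.sin(0.5 * k * a))

k = np.linspace(-np.pi / a, np.pi / a, 2001)   # first Brillouin zone
w = omega_L(k)

# the maximum frequency occurs at the zone boundaries k = ±π/a
w_max = w.max()

# long-wavelength limit: omega/k tends to the sound velocity sqrt(C/M)·a
k_small = 1e-6
v_sound = omega_L(k_small) / k_small
```

Evaluating this reproduces the zone-boundary maximum 2√(C/M) and the long-wavelength sound velocity √(C/M)a, the slope discussed in the notes below.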

1.2 Notes on section 1.1

Here are some very important points.

1. As k → 0 (in the long wavelength limit) we have

    ωT = √(C′/M) ka
    ωL = √(C/M) ka

The quantity dω/dk is called the group velocity of the wave and for long waves these
are the velocities of sound waves in the crystal. The slope of the dispersion is related
to the elastic constants of the crystal. This is why you measure elastic constants
in single crystals by measuring the velocities of longitudinal and transverse sound
waves.

2. As k → π/a the velocity tends to zero. We say that “at the Brillouin zone boundary”
the lattice waves are standing waves; they have been “Bragg reflected.”

3. I have only plotted ω(k) in the range of k between −π/a and π/a. If k is between
−π/a and π/a it is said to be “within the first Brillouin zone.” There is no physical
significance to k-values outside this range. For if I consider eq (1.1.1),

    δrj = (1/√Nc) q̃k e^{i(nka−ωt)}

where rj = na, and add to k an amount p × 2π/a where p is an integer, I get

    δrj = (1/√Nc) q̃k e^{i(n(k+2πp/a)a−ωt)}
        = (1/√Nc) q̃k e^{i(nka−ωt)} e^{i2πnp}

I have simply multiplied by one! (e^{i2πnp} = 1, since n and p are integers.) I get no new displacements, no new
physics. It is important to understand this point. The keen-eyed will have noted
that I’ve assumed that q̃_{k+2πp/a} = q̃_k; we can prove this but not until section 1.6.2
when we do the Fourier transformation of the displacements.

Another way to see this is to remark that eq (1.1.1) is a wave only defined at atom
positions. The atomic displacement is not a continuous function of position in the
usual sense. (See, for example, fig. 5, chapter 4 in Kittel; or p. 222 in Elliott.)

Both waves carry the same information about the atom displacements. Only wave-
lengths longer than 2a are needed to represent the motion.

4. Only lattice waves with wavevectors in the first Brillouin zone are physically mean-
ingful. Are all values of k in this range allowed? No. There is a finite number that
depends on the size of the crystal, but not on the boundary conditions. What does
this mean?

Suppose our crystal has length L in the direction of k and L = Nat a with Nat = Nc
the number of planes. We are essentially considering a one dimensional line of atoms.

What shall we say about the displacements at the surfaces of the crystal, i.e., atoms 1
and Nat ? We could clamp them (i.e., set δr1 = δrNat = 0) rather like the boundary
conditions on the wavefunctions in an infinite potential well (i.e., ψ(r1 ) = ψ(rNat ) =
0).

Or, we could impose cyclic, or periodic, or Born–von Kármán boundary conditions;


i.e., set

    δr1 = δrNat    (not necessarily zero)
According to eq (1.1.5)

    δrj+1 = δrj e^{ika}

therefore

    δrj+2 = δrj+1 e^{ika} = δrj e^{i2ka}

and if I keep this up, we get

    δrNat = δr1 e^{iNat ka} = δr1 e^{ikL}
          = δr1    (according to our boundary conditions)

Thus

    e^{ikL} = 1

which is only true if

    kL = 2πn                                               (1.1.6)

where n is any integer, and so the allowed values of k are those for which

    k = 2πn/L

lying between −π/a and π/a.

They form a dense set of points separated by an interval 2π/L which decreases as
the size of the crystal increases. But there is a finite number of allowed values of k
(allowed states).

We can write down the number of states per unit k from (1.1.6)

    dn/dk = L/2π

and the total number of states is this times the allowed range of k,

    (dn/dk) × (2π/a) = (L/2π) × (2π/a) = L/a = Nat
The number of allowed states is equal to the number of atoms in the line.

Very often we want to know the number of states in a range of frequency, or the
density of states,
    n(ω) = dn/dω = (dn/dk) × (dk/dω)
         = (L/2π) × 1/(dω/dk)

dω/dk is the group velocity and is only known once we know the dispersion relations.
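The counting argument above can be checked directly; a small sketch (not in the notes; the value of Nat is an arbitrary choice) enumerating the allowed k = 2πn/L in the first Brillouin zone:

```python
# Sketch: with periodic boundary conditions the allowed wavevectors
# k = 2*pi*n/L lying in (-pi/a, pi/a] number exactly Nat.
import numpy as np

a = 1.0
Nat = 100                         # arbitrary number of atoms in the line
L = Nat * a

# -pi/a < 2*pi*n/L <= pi/a  is equivalent to  -Nat/2 < n <= Nat/2
n = np.arange(-Nat // 2 + 1, Nat // 2 + 1)
k_allowed = 2.0 * np.pi * n / L

num_states = len(k_allowed)             # equals Nat
spacing = k_allowed[1] - k_allowed[0]   # equals 2*pi/L, i.e. dn/dk = L/2pi
```

The number of allowed states comes out equal to the number of atoms, and the mesh spacing is 2π/L, exactly as stated above.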

1.3 Boundary conditions and density of states in three dimensions

Density of states is a very important concept in solid state physics for all elementary
excitations as is the evaluation of the number of states, so let’s do the thing again, now
in 3-D.

What do we imply by imposing periodic boundary conditions? In general we deal with a


piece of crystal of some shape, bounded by a number of crystal surfaces. But bulk prop-
erties must be independent of the details of the surface and its environment. Therefore

I can surely concentrate upon a part of the whole crystal, whose shape may as well be
the shape of the crystallographic unit cell, buried deep inside the specimen. Any bulk
property cannot depend upon the choice of boundary conditions that I impose on this
“embedded” crystal as long as it is of macroscopic dimensions. Also these bulk properties
cannot depend upon details of the specimen surfaces as long as these are sufficiently far
from the “embedded” crystal.

Therefore suppose the “embedded” crystal contains Nc unit cells and has the same shape
as a primitive unit cell. If the lattice vectors are a1 , a2 and a3 , the crystal dimensions are

N1 a1 × N2 a2 × N3 a3

Now, atoms separated by any lattice vector n1 a1 + n2 a2 + n3 a3 are indistinguishable by


translational symmetry in terms of any measurable physical quantity. This means that
the displacements arising from a lattice wave can differ by, at most, a phase factor as we
have seen in eq (1.1.5).

This is true also for electron wavefunctions. If ψk (r) is the wavefunction of an electron at
r, with wavevector k, we must have

    ψk(r + n1 a1) = ψk(r) e^{ik·n1 a1}
    ψk(r + n2 a2) = ψk(r) e^{ik·n2 a2}
    ψk(r + n3 a3) = ψk(r) e^{ik·n3 a3}

(NB, this is the only way to ensure that

|ψk (r + n1 a1 )|2 = |ψk (r)|2 etc.

which is the observable probability density.)

Periodic boundary conditions require that the displacements or wavefunctions are equal
at opposite faces of the “embedded” crystal:

    ψk(r + N1 a1) = ψk(r) = e^{iN1 k·a1} ψk(r)
    ψk(r + N2 a2) = ψk(r) = e^{iN2 k·a2} ψk(r)
    ψk(r + N3 a3) = ψk(r) = e^{iN3 k·a3} ψk(r)

We see that the exponentials must equal unity so that



    N1 k·a1 = 2πm1,   N2 k·a2 = 2πm2,   N3 k·a3 = 2πm3      or, compactly,   k·aj = 2π Σi (mi/Ni) δij

m1 , m2 and m3 being integers. (On the right is just a neat compact way of writing the
three equations on the left.) Now remember the definition of the reciprocal lattice vectors,
b1 , b2 and b3 ,
bi · aj = 2πδij

in which δij is the “Kronecker delta” (equal to one if i = j and zero otherwise).

To satisfy the three conditions, k must be some fraction of a reciprocal lattice vector
    k = (m1/N1) b1 + (m2/N2) b2 + (m3/N3) b3
You can see that this does indeed meet the three conditions—take the scalar product of
k with any of the direct lattice vectors:
    k·aj = 2π Σi (mi/Ni) δij = 2π mj/Nj

Thus the allowed values of k form a fine mesh of points in the reciprocal lattice.

But we also know that k must remain within the first Brillouin zone and this has the
same volume in k-space as the parallelepiped formed from the vectors b1 , b2 and b3 .

Hence all the allowed values of m1 , m2 and m3 are given by the conditions

0 < m1 ≤ N1
0 < m2 ≤ N2
0 < m3 ≤ N3

Therefore the number of allowed states is

N1 N2 N3 = Nc

the number of unit cells in the crystal. We already got this result in the 1-D case.

Let us denote the volume of the unit cell by vc . The volume of the “embedded” crystal is

V = Nc vc

and the volume of the first Brillouin zone is


    (2π)³/vc

If there are Nc k vectors allowed in the first Brillouin zone the “volume per state” in
k-space is

    (1/Nc) × (2π)³/vc = (2π)³/V

The “spottiness of k-space” is such that the allowed states are a mesh of points each
occupying a volume

    (2π)³/V

(Compare this with the result ∆k = 2π/L that we got in 1D.)

The number of allowed states having wavevector magnitude smaller than some kmax is the
number of states enclosed by a sphere of radius kmax .

That is,
    (number of allowed states with wavevector less than kmax)
        = (volume of sphere) × (number of states per unit volume of k-space)

    Nst = (4π/3) kmax³ × V/(2π)³
        = (1/6π²) V kmax³                                  (1.3.1)
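Equation (1.3.1) can be checked by brute-force counting; here is a sketch (not from the notes; the cubic crystal and all numerical values are illustrative assumptions) comparing the number of allowed mesh points inside a sphere with V kmax³/6π²:

```python
# Sketch: count allowed k-points (spacing 2*pi/L per direction, cubic
# crystal of volume V = L^3) inside a sphere of radius k_max, and compare
# with eq (1.3.1): N_st = V k_max^3 / (6 pi^2).
import numpy as np

L = 100.0                        # illustrative crystal edge; V = L^3
V = L**3
dk = 2.0 * np.pi / L             # mesh spacing in each direction
k_max = 1.0

m_max = int(k_max / dk) + 1
m = np.arange(-m_max, m_max + 1)
kx, ky, kz = np.meshgrid(m * dk, m * dk, m * dk, indexing='ij')
count = int(np.sum(kx**2 + ky**2 + kz**2 < k_max**2))

formula = V * k_max**3 / (6.0 * np.pi**2)
rel_err = abs(count - formula) / formula   # shrinks as L grows
```

The relative discrepancy comes from mesh points straddling the sphere's surface and vanishes as the crystal (and hence the fineness of the k-mesh) grows.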

We define the density of states for lattice waves in terms of the number of states in a
narrow range of frequencies dω about ω.

This number is, clearly, inversely proportional to the slope of the dispersion curve. Now,

    n(ω) ≡ dNst/dω

so that the total number of states with wavevector less than kmax is

    Nst = ∫₀^{ωmax} n(ω) dω,   where ωmax = ω(kmax)

Using our expression for Nst, we get

    n(ω) = dNst/dω = (dNst/dk) × (dk/dω)
         = (1/2π²) V k² × 1/(dω/dk)                        (1.3.2)

and dω/dk is the group velocity.

This definition is appropriate as long as the dispersion relation ω(k) is the same for all
directions of k, so that ω depends only on the magnitude of the wavevector: ω(k) = ω(|k|).
Generally the k-vectors corresponding to a given frequency form a non spherical surface
in k-space. If we denote this constant energy surface by Sω , then the general expression
for the density of states is

    n(ω) = V/(2π)³ ∫ dSω/vg

where the group velocity is

    vg = ∇k ω(k)

You will find a simple derivation in Kittel, chapter 5; or Elliott, page 231. See also Ibach
and Lüth, section 5.1. We will only need eq (1.3.2).

1.4 A useful identity

The use of periodic boundary conditions and Brillouin zones allows us to prove a useful
identity. If we consider two lattice waves or electron wavevectors, k and k0 and make a
sum over all unit cells we find

    Σ_{j=1}^{Nc} e^{i(k−k′)·rj} = Nc   if k = k′
                                = 0    if k ≠ k′

that is, = Nc δ_{kk′}.

The proof is fairly easy in one dimension. If k = k′

    Σ_{j=1}^{Nc} e^{i(k−k′)·rj} = Σ_{j=1}^{Nc} 1 = Nc

If k ≠ k′ we write rj = ja (a is the lattice spacing)

    k = 2πn/L;   k′ = 2πn′/L;   k − k′ = 2π(n − n′)/L = 2πm/(Nc a)

Then

    Σ_{j=1}^{Nc} e^{i(k−k′)·rj} = Σ_{j=1}^{Nc} e^{i2πmj/Nc}
                                = Σ_{j=1}^{Nc} (e^{i2πm/Nc})^j
                                = e^{i2πm/Nc} × (1 − e^{i2πm})/(1 − e^{i2πm/Nc})
                                = 0

because e^{i2πm} = 1. We have used the identity

    Σ_{j=1}^{N} x^j = x (1 − x^N)/(1 − x)
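The identity is easy to verify numerically for a finite chain; a sketch (not in the notes; the chain length Nc and the mode indices are arbitrary choices):

```python
# Sketch: verify sum_{j=1}^{Nc} e^{i(k-k')·r_j} = Nc * delta_{kk'} for a
# 1D chain with r_j = j*a and allowed wavevectors k = 2*pi*n/(Nc*a).
import numpy as np

Nc, a = 16, 1.0                   # arbitrary chain length and spacing
r = np.arange(1, Nc + 1) * a

def lattice_sum(n, nprime):
    """Sum e^{i(k - k')r_j} over the chain, k = 2*pi*n/(Nc*a) etc."""
    k = 2.0 * np.pi * n / (Nc * a)
    kp = 2.0 * np.pi * nprime / (Nc * a)
    return np.sum(np.exp(1j * (k - kp) * r))

same = lattice_sum(3, 3)          # k = k'  -> Nc
different = lattice_sum(3, 7)     # k != k' -> 0
```

With equal mode indices the sum gives Nc; with unequal indices the phases sweep out full turns of the unit circle and cancel.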

1.5 Vibrations in the diatomic lattice

We get some new physics if our crystal has two types of atom. Let us extend our simple
model to two atoms per primitive unit cell with masses M1 and M2

[Diagram: chain of alternating masses M1 and M2 with displacements δr1, δr2, δr3, … marked.]

The mass of an atom in plane j is M1 if j is odd, and M2 if j is even. Longitudinal mode


displacements are

    δrj = (1/√Nc) q̃1 e^{i(kja−ωt)}   (j odd)
    δrj = (1/√Nc) q̃2 e^{i(kja−ωt)}   (j even)
Here, q̃1 and q̃2 are parallel vectors, so we write them as scalars and drop the subscript k,
but the amplitudes will be different at the different types of atomic planes. We will do the
problem this time in terms of forces (rather than potential energy) just to be different.
The force Fj on the plane of atoms j is proportional to the stretch of the bond. We have
to be careful about the sign:

[Diagram: planes j−1, j, j+1, with bond 1 to the left of plane j and bond 2 to the right; positive force Fj points to the right.]

Bond 2: if δrj+1 − δrj > 0 the bond is stretched, Fj > 0;
        if δrj+1 − δrj < 0 the bond is compressed, Fj < 0.
Bond 1: if δrj−1 − δrj < 0 the bond is stretched, Fj < 0;
        if δrj−1 − δrj > 0 the bond is compressed, Fj > 0.
Using these rules, after some algebra, we see that

    Fj = C [(δrj+1 − δrj) + (δrj−1 − δrj)]
       = (1/√Nc) C (q̃2 e^{ika} + q̃2 e^{−ika} − 2q̃1) e^{ikja} e^{−iωt}   (j odd)
       = (1/√Nc) C (q̃1 e^{ika} + q̃1 e^{−ika} − 2q̃2) e^{ikja} e^{−iωt}   (j even)

The equation of motion is


    Fj = Mj d²δrj/dt²

and using e^{ika} + e^{−ika} = 2 cos ka we get two simultaneous equations,

    2C (q̃2 cos ka − q̃1) = −M1 ω² q̃1
    2C (q̃1 cos ka − q̃2) = −M2 ω² q̃2

The solution to these is

    ω² = (2C/M1M2) [ ½(M1 + M2) ± √( ¼(M1 − M2)² + M1M2 cos² ka ) ]      (1.5.1)

Of course, if M2 = M1 = M the problem reverts to our original one having just one type
of atom, and equation (1.5.1) becomes

    ω² = (2C/M)(1 ± cos ka)

There are two solutions corresponding to the plus and minus signs:

    ω = √(C/M) × 2|sin ½ka|   (minus sign)
    ω = √(C/M) × 2|cos ½ka|   (plus sign)

Why are there now twice as many solutions? Let us sketch them as before.

[Sketch, extended zone: the branches 2√(C/M)|sin ½ka| and 2√(C/M)|cos ½ka| plotted for −π/a ≤ k ≤ π/a; they cross at the value √(2C/M) at k = ±π/2a, and the maximum 2√(C/M) occurs at k = 0 (cosine branch) and k = ±π/a (sine branch).]

We can do one of two things:


1. Discard the cosine solutions. In fact, they merely duplicate the sine solutions. (It’s
like saying you can take either the real or the imaginary part of equation (1.1.1) for
the displacements.) This is called the extended zone scheme.
2. Recognise that in the two atom unit cell our lattice constant is really 2a, so the
Brillouin zone is half as big. Then we keep both curves, but only within the first
Brillouin zone; i.e., k is restricted to the range −π/2a to π/2a. This is called the
reduced zone scheme.

When the masses are different, then we get the following dispersion curves in the reduced
zone, by sketching equations (1.5.1) with M1 > M2 .

[Sketch, reduced zone scheme with M1 > M2: the acoustic branch rises from zero at k = 0 to √(2C/M1) at k = ±π/2a; the optical branch falls from √(2C(M1+M2)/(M1M2)) at k = 0 to √(2C/M2) at k = ±π/2a, leaving a frequency gap between √(2C/M1) and √(2C/M2).]

The dotted lines show the case M1 = M2 . Notice how having different masses opens up a
gap in the allowed values of ω. This is entirely analogous to the band gap in the electronic
states in a semiconductor; the origin, though, is quite different—obviously all electrons
have the same mass.
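The gap can be confirmed directly from equation (1.5.1); a numerical sketch (not part of the notes; C, M1 and M2 are arbitrary values with M1 > M2) evaluating both branches at the zone boundary k = π/2a:

```python
# Sketch: the two branches of eq (1.5.1),
#   w^2 = (2C/M1M2) [ (M1+M2)/2 ± sqrt( (M1-M2)^2/4 + M1*M2*cos^2 ka ) ],
# meet the zone boundary k = pi/2a at sqrt(2C/M1) and sqrt(2C/M2),
# leaving a frequency gap whenever M1 != M2.
import numpy as np

C, M1, M2, a = 1.0, 2.0, 1.0, 1.0      # arbitrary values with M1 > M2

def omega(k, sign):
    """One branch of eq (1.5.1); sign = -1 acoustic, +1 optical."""
    root = np.sqrt(0.25 * (M1 - M2)**2 + M1 * M2 * np.cos(k * a)**2)
    return np.sqrt((2.0 * C / (M1 * M2)) * (0.5 * (M1 + M2) + sign * root))

k_zb = np.pi / (2.0 * a)               # zone boundary of the doubled cell

w_acoustic_top = omega(k_zb, -1.0)     # top of the acoustic branch
w_optical_bottom = omega(k_zb, +1.0)   # bottom of the optical branch
gap = w_optical_bottom - w_acoustic_top
```

At k = π/2a the cosine term vanishes and the two branches reduce to √(2C/M1) and √(2C/M2); the gap closes only when M1 = M2.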

Each line in the ω(k) dispersion relation is called a band. A three dimensional crystal has
three bands having ω → 0 as k → 0, one longitudinal and two transverse modes, and these
are called acoustic mode bands. This is because as k → 0, i.e., in the long wavelength
limit, they have a constant velocity and become the longitudinal and transverse sound
waves.

In general, there are 3nc bands, where nc is the number of atoms in the primitive unit
cell. The remaining 3nc − 3 bands have ω ≠ 0 at k = 0, similar to the upper band
in the diagram. These are called optical mode bands because at k = 0 they couple to
electromagnetic radiation.

The interaction of light with transverse optical modes can be illustrated in a dispersion
diagram (or band structure) as follows.

[Diagram: ω vs k. The photon line ω = ck crosses the nearly flat TO mode branch; coupling mixes them into two polariton branches separated by a forbidden frequency gap.]

In this figure the k axis is magnified: the speed of light c being what it is, the slope of the
dispersion of light is much, much steeper than the slope of the acoustic lattice waves. At
this scale the acoustic branch would appear horizontal, just as in the previous diagram
the light branch would have appeared vertical.

The dispersion of light is linear: ω = ck. In the absence of interaction we get the thin
lines. But the lattice waves and light waves behave like two coupled harmonic oscillators
(the coupling arises from the interaction between the electromagnetic field and the dipole
moment of the oscillating ions) so we get solutions (the thick lines) as if they were two
coupled pendulums.

The elementary excitation produced is called a polariton. See how the lattice waves
strongly modify the dispersion of light leading to a forbidden gap where the light is
strongly attenuated when passing through the solid. This effect can be observed in what’s
called ‘Raman spectroscopy.’
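The two-coupled-oscillator picture can be made concrete with a toy calculation. The sketch below (all numbers illustrative; a proper polariton treatment couples the squared frequencies, so this linear-in-frequency 2×2 model is only a cartoon) diagonalises the coupled system and exhibits the avoided crossing responsible for the forbidden gap.

```python
import numpy as np

# Toy avoided-crossing model for the polariton: a photon-like branch c*k and a
# flat TO-phonon branch omega_TO, coupled by a constant g.  All numbers are
# illustrative, not taken from the notes.
omega_TO = 1.0   # TO mode frequency (arbitrary units)
g = 0.15         # coupling strength (hypothetical)
c = 10.0         # slope of the light line, same arbitrary units

def polariton_branches(k):
    """Eigenfrequencies of the 2x2 coupled system at wavevector k (ascending)."""
    H = np.array([[c * k, g],
                  [g, omega_TO]])
    return np.linalg.eigvalsh(H)

# At the crossing point c*k = omega_TO the two branches repel by 2*g:
lower, upper = polariton_branches(omega_TO / c)
splitting = upper - lower
```

At the crossing the gap is 2g; far from it the branches revert to the uncoupled light line and TO frequency, just as for two coupled pendulums.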

1.6 Phonons

Let us remark that features common to the elementary excitations in solids such as for-
bidden gaps, a density of states, spottiness of k-space have emerged as a consequence of a
classical, Newtonian treatment of lattice waves. You may have thought these only arose
in a quantum theory.

We do need to use quantum mechanics if we want to understand how lattice waves in-
teract with, in particular, electrons. For example, the so-called electron–phonon coupling
is responsible for superconductivity in metals, and “polarons” in insulators (these are
distortions of the ionic lattice around electrons in the conduction band). It also acts to
limit the electrical conductivity of metals.

So what are phonons? Quantum mechanics can turn waves into particles and vice versa
(the wave–particle duality, if you like). When we quantise lattice waves, we get particles
called phonons. Here are the steps we need to take.
1. Transform the displacements δrj into normal modes.
2. Find the hamiltonian for these modes.
3. See at once that it is the hamiltonian for a set of independent (i.e., uncoupled) simple
harmonic oscillators.
4. All we need then is the solution of one of the standard problems in quantum me-
chanics: the simple harmonic oscillator.

In view of step 4, since we will not wish to break step at that point, we will now digress
to solve the quantum mechanical simple harmonic oscillator.

1.6.1 The simple harmonic oscillator

The simple harmonic oscillator is one of the standard problems in quantum mechanics.
The classical hamiltonian is
H = (1/2M) p² + (1/2) M ω² q²

To “quantise” we replace p with −ih̄ d/dq which results in the “commutation relation”

[q, p] = qp − pq = ih̄

The time independent Schrödinger equation is

Hψn (q) = En ψn (q)

with solutions for n = 0, 1, 2 . . . and n = 0 being the ground state. We have to solve
 
(−(h̄²/2M) d²/dq² + (1/2) M ω² q²) ψn(q) = En ψn(q)

Let us define a dimensionless coordinate to simplify the formulas,

x = √(Mω/h̄) q

Then the Schrödinger equation is

(1/2) h̄ω (−d²/dx² + x²) ψn(x) = En ψn(x)

How do we solve this? Well, note that


 
−d²/dx² + x²

is a difference of squares like

(−A² + B²) = (−A + B)(A + B)

so let’s try writing

(−d/dx + x)(d/dx + x) = −d²/dx² + x(d/dx) − (d/dx)x + x²
                      = −d²/dx² + x² − 1

An extra “minus one” appears as a consequence of the commutation relation, and hence
    
(1/2) h̄ω (−d²/dx² + x²) ψn(x) = (1/2) h̄ω (−d/dx + x)(d/dx + x) ψn(x) + (1/2) h̄ω ψn(x)
                              = h̄ω (a⁺a + 1/2) ψn(x)

having defined

a⁺ = (1/√2)(−d/dx + x)

a = (1/√2)(d/dx + x)

We easily see that

aa⁺ − a⁺a = 1

or

[a, a⁺] = 1

Let us examine the properties of the operators a and a+ . (Note that a+ is pronounced
“a dagger” and in some texts is denoted a† or a∗ ). We have three ways to write the
hamiltonian now:
H = −(h̄²/2M) d²/dq² + (1/2) M ω² q²

  = (1/2) h̄ω (−d²/dx² + x²)

  = (a⁺a + 1/2) h̄ω
2
So

Ha = (a⁺aa + (1/2)a) h̄ω
   = ((aa⁺ − 1)a + (1/2)a) h̄ω
   = (aa⁺a − (1/2)a) h̄ω
   = a(a⁺a − 1/2) h̄ω
   = a(H − h̄ω)
In the second line, we have used the commutation relation. Operating on an eigenfunction
ψn we get
H (aψn ) = a (H − h̄ω) ψn
= (En − h̄ω) (aψn )
since Hψn = En ψn . Hence aψn is an eigenfunction with the energy lowered by h̄ω. This
amount of energy is called a quantum, and a is called the quantum destruction operator.
Similarly you can see that

H(a⁺ψn) = (En + h̄ω)(a⁺ψn)
and a+ is an operator that creates a quantum of energy.

There cannot be negative eigenvalues (since the hamiltonian is a sum of squares of her-
mitian operators). There must exist a ground state ψ0 with energy E0 > 0. Applying the

destruction operator to this state we get

H (aψ0 ) = (E0 − h̄ω) (aψ0 )

which implies that aψ0 is an eigenfunction with energy lower than E0 . Since we have
defined E0 as the lowest energy, to avoid this paradox it must be true that

aψ0 = 0

Therefore

Hψ0 = (a⁺a + 1/2) h̄ω ψ0
    = (1/2) h̄ω ψ0
So the ground state has energy (1/2)h̄ω. This is called the zero point energy.

Note, in passing, that since

H = (1/2M) p² + (1/2) M ω² q²

then the expectation value of the energy in the ground state is

(1/2) h̄ω = (1/2M) ⟨p²⟩ + (1/2) M ω² ⟨q²⟩

Now if two non-negative numbers a and b satisfy a + b = c then ab ≤ (1/4)c². Cancelling the (1/2)’s we see that

M ω² ⟨q²⟩ · (1/M) ⟨p²⟩ ≤ (1/4)(h̄ω)²

that is,

√⟨q²⟩ √⟨p²⟩ ≤ (1/2) h̄

The left hand side is the root mean square position times the r.m.s. momentum and is smaller than or equal to (1/2)h̄. But by the uncertainty principle

∆q∆p ≥ (1/2) h̄

so (1/2)h̄ω is the lowest energy the ground state can have without violating the uncertainty principle.

What are the energy levels of the harmonic oscillator? We know two things:
1. The lowest energy level has energy E0 = (1/2)h̄ω.
2. The operator a+ operating on any eigenfunction ψn with energy En results in the
eigenfunction ψn+1 having the next highest energy En+1 . Successive energy levels are
separated by the quantum of energy h̄ω.

These levels are illustrated below. The diagram shows the classical potential energy (1/2)h̄ωx² in which the oscillator moves (in units of h̄ω). On the right are the energy levels, also in units of h̄ω.

[Figure: the parabolic potential (1/2)x² with the equally spaced levels E0 = 1/2, E1 = 3/2, E2 = 5/2, E3 = 7/2 (in units of h̄ω) marked on the right.]

Clearly

En = (n + 1/2) h̄ω

and the solution to the Schrödinger equation is

Hψn = (a⁺a + 1/2) h̄ω ψn
    = (n + 1/2) h̄ω ψn

You can see that ψn is an eigenfunction of the operator a⁺a having eigenvalue n,

a⁺a ψn = n ψn

a⁺a is called the number operator because it counts the number of quanta of energy possessed by the oscillator when it’s in the nth excited state.
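The ladder-operator algebra above can be checked numerically by representing a and a⁺ as truncated matrices in the number-state basis, using the standard matrix elements a|n⟩ = √n |n−1⟩ (this sketch is an illustration, not part of the original notes):

```python
import numpy as np

# Truncated matrix representation of the ladder operators in the number basis
# |0>, |1>, ..., |N-1>, using a|n> = sqrt(n)|n-1>.
N = 12
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # destruction operator
adag = a.T                                   # creation operator (real matrix)

# Hamiltonian in units of hbar*omega: H = a'a + 1/2, diagonal in this basis
H = adag @ a + 0.5 * np.eye(N)
levels = np.diag(H)                          # E_n = n + 1/2

# Commutator [a, a'] = 1 holds exactly except in the last, truncated row
comm = a @ adag - adag @ a
```

The diagonal of H reproduces the spectrum (n + 1/2)h̄ω, and the commutator is the identity everywhere the truncation does not intrude.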

1.6.2 Quantisation of the lattice waves

Now we come back to the solid state problem of the quantisation of the lattice waves.
The mathematics is quite complicated. We will confine ourselves to the one-dimensional situation with one atom per unit cell. First we define normal modes. To begin with, I write

δr_j = (1/√N_c) Σ_k q_k e^{ikr_j}     (1.6.1)

Note that

q_k = q̃_k e^{iωt}

from eq (1.1.1), that is, the amplitude times the phase factor e^{iωt}. What are these quantities, q_k? Multiply (1.6.1) on both sides by e^{−ik′r_j}/√N_c and sum each side over all atoms,

(1/√N_c) Σ_j δr_j e^{−ik′r_j} = (1/N_c) Σ_k Σ_j q_k e^{i(k−k′)r_j}
                              = Σ_k q_k δ_{kk′}
                              = q_{k′}

because in the second line we can use our nifty identity from section 1.4, viz.,
Σ_j e^{i(k−k′)r_j} = N_c δ_{kk′}

So, we have
q_k = (1/√N_c) Σ_j δr_j e^{−ikr_j}     (1.6.2)

Note the symmetry between (1.6.1) and (1.6.2). qk is the discrete Fourier transform of
the displacements, δrj ; it is called a normal mode of wavevector k. You can easily see that

q−k = q̄k
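Equations (1.6.1) and (1.6.2) form a discrete Fourier transform pair, and the relation q−k = q̄k for real displacements can be verified numerically. A toy sketch (hypothetical chain parameters, not part of the notes):

```python
import numpy as np

# Toy 1-D chain: Nc atoms at r_j = j*a with random real displacements dr_j.
# Normal modes q_k as in eq (1.6.2), with the Nc allowed values
# k = 2*pi*n/(Nc*a) from periodic boundary conditions.
rng = np.random.default_rng(0)
Nc, a = 8, 1.0
r = np.arange(Nc) * a
dr = rng.standard_normal(Nc)              # real displacements

ks = 2 * np.pi * np.arange(Nc) / (Nc * a)
q = np.array([np.sum(dr * np.exp(-1j * k * r)) for k in ks]) / np.sqrt(Nc)

# Inverse transform, eq (1.6.1), recovers the displacements exactly:
dr_back = np.array([np.sum(q * np.exp(1j * ks * rj)) for rj in r]) / np.sqrt(Nc)

# For real dr, q_{-k} equals conj(q_k); here -k is equivalent to 2*pi/a - k:
conj_pairs = [np.conj(q[n]) - q[(-n) % Nc] for n in range(Nc)]
```

The round trip through (1.6.2) and back through (1.6.1) is exact, which is what lets us trade the N_c displacements for N_c normal modes with no loss of information.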

Now I want to write the hamiltonian in terms of normal modes rather than displacements.
In terms of displacements,
H = Σ_j (1/2) M (d(δr_j)/dt)² + (1/2) Σ_ij δr_i A_ij δr_j     (1.6.3)

and Aij is the force constant matrix. We just need to insert (1.6.1) into this to get it in
terms of the qk . We’ll do the potential energy (second term in (1.6.3)) first.

(1/2) Σ_ij δr_i A_ij δr_j = (1/2) Σ_ij ((1/√N_c) Σ_k q_k e^{ikr_i}) A_ij ((1/√N_c) Σ_k′ q_k′ e^{ik′r_j})
                          = (1/2N_c) Σ_kk′ q_k q_k′ Σ_ij e^{ikr_i} A_ij e^{ik′r_j}
                          = (1/2N_c) Σ_kk′ q_k q_k′ Σ_ij (e^{ik(r_i−r_j)} A_ij) e^{i(k+k′)r_j}

This will simplify nicely if we are careful. Take the sum over i and j and put the sum over j first:

Σ_j e^{i(k+k′)r_j} { Σ_i e^{ik(r_i−r_j)} A_ij }

The term in braces is the same for any atom labelled j because by translational symmetry
all the atoms are identical (there’s only one atom per unit cell in our simple case here).
So we can define a quantity
λ_k = Σ_i e^{ik(r_i−r_j)} A_ij = M ω_k²

which is the equation of motion for any atom j; compare with eq (1.1.4) in section 1.1.
We call λk the dynamical matrix; it’s actually a diagonal matrix: one entry for each k.
By transforming to normal modes, we have diagonalised the force constant matrix.

Then, using our identity from section 1.4,


Σ_j e^{i(k+k′)r_j} λ_k = N_c λ_k δ_{−k,k′}

so the potential energy in terms of normal mode coordinates is simply

(1/2) Σ_ij δr_i A_ij δr_j = (1/2N_c) Σ_kk′ q_k q_k′ N_c λ_k δ_{k′,−k}
                          = (1/2) Σ_k λ_k q_k q_{−k}

See how the Kronecker delta δ_{k′,−k} picks out from the double sum over k and k′ only those terms for which k′ = −k. We have indeed diagonalised A_ij.

The kinetic energy is easier. Using again our identity from section 1.4,
Σ_j δṙ_j δṙ_j = (1/N_c) Σ_j Σ_kk′ q̇_k q̇_k′ e^{i(k+k′)r_j}
             = Σ_k q̇_k q̇_{−k}

Therefore in terms of normal coordinates the hamiltonian is


H = Σ_k ((1/2) M q̇_k q̇_{−k} + (1/2) λ_k q_k q_{−k})

To find the momentum conjugate to q, I write the Lagrangian as, equation (0.1),

L = K − W
  = Σ_k ((1/2) M q̇_k q̇_{−k} − (1/2) λ_k q_k q_{−k})

and define
p_k = ∂L/∂q̇_k = M q̇_{−k}

Finally I have

H = (1/2) Σ_k ((1/M) p_k² + λ_k q_k²)

Doesn’t that look a lot neater than eq (1.6.3)?

When you compare this with the hamiltonian for a single simple harmonic oscillator you
will see that we have recast the classical Newtonian hamiltonian of the complex motions
of the atoms making up lattice waves into a sum of hamiltonians of simple harmonic
oscillators. Here are two very important points.
1. The separation of the vibration into independent normal modes is permitted only
because we are working in the harmonic approximation. But no other approximation
is invoked, for example there is no restriction on the nature of the interatomic forces,
such as the ball and spring models we used earlier.
2. Because the hamiltonian is a sum over independent simple harmonic oscillators, we
can immediately quantise the lattice waves.

So, we can write down at once the hamiltonian in second quantisation describing Nc
simple harmonic oscillators in each band,

H = Σ_k h̄ω_k (a⁺_k a_k + 1/2)

This is exactly the same as the hamiltonian we had for the simple harmonic oscillator
previously. The only difference is that instead of just one oscillator having a (single mode
of) natural frequency ω, there are Nc oscillators each having a natural frequency ωk so we
have to label it, and its creation and destruction operators also, with a label k. Remember
that for the simple harmonic oscillator we said that a+ a was the “number operator” whose
eigenvalue is the number of quanta of energy possessed by the oscillator when it is in the nth
excited state. Similarly here we say that for each normal mode of vibration, labelled k,


n̄_k = ⟨a⁺_k a_k⟩

is the average, or expectation value of the number operator and is the number of quanta,
or phonons that are excited in the k th mode. This is the proper way to understand what
a phonon is. It is an elementary excitation in the vibrations of the atoms in the solid.
The more a particular mode is excited, say by heating the crystal, the more quanta, or
phonons, there are possessed by each of the modes, labelled k.

So the expectation value of the total energy is

⟨H⟩ = Σ_k h̄ω_k (n̄_k + 1/2)

and we say that there are nk phonons present in the crystal having wavenumber k and
frequency ωk . These are of course related through the dispersion relation, ω = ω(k).

Now,
Σ_k (1/2) h̄ω_k

is the zero point energy of the crystal. The zero point energy cannot do work. However
it does contribute to the cohesive energy of the solid, and can be observed by X-ray
scattering.

As the temperature of a crystal is raised, more phonons are created, and phonons of higher
frequency are excited. Remember there is a maximum k and hence a maximum ωk and
hence a maximum energy h̄ωk that a phonon can have. At a certain temperature, called
the Debye temperature, all allowed modes contain lots of phonons.

We need to know how many phonons there are of energy h̄ωk as a function of temperature.
Phonons are distributed under Bose–Einstein statistics because they are bosons—firstly since their creation and destruction operators commute rather than anticommute as do, say, electron operators, secondly because they possess integral spin, viz., zero. Therefore we know at once that the number of phonons in state k is

n̄_k = 1/(e^{h̄ω_k/k_B T} − 1)

where kB is the Boltzmann constant (1.38 × 10−23 J K−1 or 8.62 × 10−5 eV K−1 ) and T
is the absolute temperature.
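As a quick numerical illustration of the Bose–Einstein occupation (the 1 THz mode frequency below is an arbitrary choice, of the right order for an acoustic phonon, not a value from the notes):

```python
import numpy as np

kB = 1.380649e-23       # J/K, Boltzmann constant
hbar = 1.054571817e-34  # J s

def n_BE(omega, T):
    """Mean phonon number in a mode of angular frequency omega at temperature T."""
    x = hbar * omega / (kB * T)
    return 1.0 / np.expm1(x)     # expm1 keeps precision when x is small

omega = 2 * np.pi * 1e12        # an illustrative ~1 THz mode

# High-temperature (classical) limit: n -> kB*T/(hbar*omega) >> 1
T_hot = 3000.0
classical = kB * T_hot / (hbar * omega)
```

At high temperature the occupation sits within a percent or two of the classical value k_B T/h̄ω, while at 1 K it is utterly negligible: exactly the freezing-out that drives the heat capacity to zero.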

So as we heat the crystal, we put energy into the lattice vibrations so that the internal
energy as a function of temperature is

U = Σ_k h̄ω_k (1/2 + n̄_k)
  = Σ_k h̄ω_k (1/2 + 1/(e^{h̄ω_k/k_B T} − 1))     (1.6.4)

This is a useful formula. We can use it to work out the heat capacity and thermal
conductivity due to phonons among other things.

But it’s not easy to do the summation even if we know the dispersion relation. Let’s take
the high temperature limit. If h̄ω_k/k_B T ≪ 1 then, because e^x ≈ 1 + x for small x, we have

n̄_k = k_B T/(h̄ω_k) ≫ 1     (lots of phonons in each mode)     (1.6.5)

and so

U = Σ_k h̄ω_k (1/2 + k_B T/(h̄ω_k))     at high temperature

and leaving out the zero point energy we have

U = Σ_k k_B T
  = k_B T Σ_k 1

so we need to know the number of terms in the sum over k, so we can add all the “ones” and multiply by k_B T. In other words, how many allowed values of k are there? You see now why we had to study this carefully in sections 1.2 and 1.3.

In a crystal containing Nat atoms and one atom per unit cell there are three branches or
bands—one longitudinal and two transverse—and we therefore get

U = 3Nat kB T

which is the energy you would get from the equipartition theorem: each degree of freedom contributes (1/2)k_B T of potential and (1/2)k_B T of kinetic energy.

The heat capacity (due to phonons) at constant volume is hence

Cph = dU/dT = 3Nat kB   J K⁻¹
            = 3R   J mol⁻¹ K⁻¹

in which R is the gas constant, 8.314 J mol−1 K−1 . In fact the formula

C = 3R

is the law of Dulong and Petit, which you have just derived from first principles and which
describes the heat capacity per mole of a large number of solids at high temperature.
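As a numeric sanity check of the Dulong–Petit value:

```python
# Dulong-Petit molar heat capacity from the fundamental constants.
kB = 1.380649e-23    # J/K, Boltzmann constant
NA = 6.02214076e23   # 1/mol, Avogadro constant
R = kB * NA          # gas constant, J/(mol K)
C_molar = 3 * R      # about 24.9 J/(mol K)
```

This is the roughly 25 J mol⁻¹ K⁻¹ observed for a large number of solids at high temperature.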

Unfortunately this does not hold at low temperature. Below the Debye temperature
quantum effects appear because not all the modes are excited. At low temperature we
are stuck; we cannot do the summation (1.6.4) for a general dispersion relation, ω(k). As
is usual in solid state physics, we proceed by making a model; in fact two models which
are now described.

1.6.3 The Einstein model

Here we suppose that there is no dispersion: all atoms simply vibrate with a single
frequency, ω = ωE . Then the total energy due to phonons is
 
U = 3Nat h̄ω (1/2 + 1/(e^{h̄ω/k_B T} − 1))

there’s no summation over k any more; instead it is the energy at ω = ωE multiplied by the number of allowed modes, 3Nat. Again we omit the zero point energy and

U = 3Nat h̄ω/(e^{h̄ω/k_B T} − 1)
so the heat capacity is
Cph = dU/dT = 3Nat kB (h̄ω/k_B T)² e^{h̄ω/k_B T}/(e^{h̄ω/k_B T} − 1)²

Let us examine the high and low temperature limits. First we define
x = h̄ω/k_B T

so that

Cph = 3Nat kB x²eˣ/(eˣ − 1)²

As T → ∞, x → 0 and Cph → 0/0; as T → 0, x → ∞ and Cph → ∞/∞: both are indeterminate forms.

To work out “indeterminate forms” we use L’Hôpital’s rule; that is differentiate numerator
and denominator with respect to x until the ratio becomes determinate:

x²eˣ/(eˣ − 1)² → (2xeˣ + x²eˣ)/(2(eˣ − 1)eˣ) → (2 + 2x)/(2eˣ) → 1/eˣ
so

lim (T → ∞) Cph = 3Nat kB

which is the law of Dulong and Petit. Also


lim (T → 0) Cph = 3Nat kB lim (x → ∞) x²eˣ/(eˣ − 1)²
               = 3Nat kB lim (x → ∞) x²e⁻ˣ
               = 0

so, for the heat capacity as a function of temperature we have this:

[Figure: Einstein heat capacity versus temperature, rising from zero at T = 0 to the Dulong–Petit value 3Nat kB at high temperature.]

and Cph goes to zero exponentially with the temperature. The Einstein model resolves a problem that classically Cph should be 3Nat kB at all temperatures, but is observed to go to zero as the temperature is reduced. This was one of the triumphs of the “old quantum theory.”

However, experimentally Cph does not go to zero exponentially but as T −3 . See figure 9,
chapter 5 in Kittel.
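The Einstein heat capacity is easy to evaluate numerically; a sketch in dimensionless form (the Einstein temperature θE = 300 K is an arbitrary illustrative choice, not data from the notes):

```python
import numpy as np

def c_einstein(T, theta_E):
    """Einstein heat capacity per atom, in units of 3*kB."""
    x = theta_E / T
    return x**2 * np.exp(x) / np.expm1(x)**2

theta_E = 300.0                       # illustrative Einstein temperature (K)

high_T = c_einstein(3000.0, theta_E)  # approaches the Dulong-Petit value 1
low_T = c_einstein(15.0, theta_E)     # exponentially small
```

The curve rises monotonically from an exponentially small value at low temperature to the Dulong–Petit limit, but its low-temperature fall is faster than the observed T³ behaviour.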

1.6.4 The Debye model

The Debye model fixes up the low temperature behaviour. Remember that in the Einstein
model we use the approximation that the crystal has just one frequency of vibration, ωE ,
so the dispersion curve is just a horizontal line, ω(k) = ωE = constant. In the Debye
model we take the next level of approximation and use a linear dispersion (like light in a vacuum—usually this behaviour is called non-dispersive), namely ω(k) = ck, which is the dispersion for low frequency acoustic waves, and c is the velocity of sound.

We now introduce a very important concept. The internal energy, U , is


U = ∫ h̄ω (1/(e^{h̄ω/k_B T} − 1)) g(ω) dω
that is, the energy of a phonon mode with frequency ω times the number of phonons of
that frequency at temperature T times the number of states in the interval dω about ω,
integrated over all values of ω. Be sure you understand this.

Now, there is a cut-off in the values of k and ω. In the isotropic case we are considering
there is a maximum value of k, kmax , and the number of modes with wavevector less than
kmax is, equation (1.3.1),
(1/6π²) V kmax³ = Nat

(remember Nat is the number of atoms in the crystal of volume V ; we are allowing just one atom per unit cell for simplicity). So

kmax³ = 6π² Nat/V

and therefore the cut-off frequency is (using our linear dispersion)

ωD³ = (c kmax)³ = 6π² c³ Nat/V
The density of states from equation (1.3.2) is
g(ω) = (1/2π²) V k² (1/vg)
     = (1/2π²) V ω²/c³
since ω = ck and the group velocity is vg = dω/dk = c, and so the internal energy is
U = 3 ∫₀^{ωD} (1/2π²) V (ω²/c³) h̄ω/(e^{h̄ω/k_B T} − 1) dω
  = (3V h̄/2π²c³) ∫₀^{ωD} ω³/(e^{h̄ω/k_B T} − 1) dω

where we have included a factor of 3 because there are three allowed modes for each k: one
longitudinal and two transverse; for simplicity we take just one velocity c for all modes.

To get the heat capacity we need to differentiate with respect to T . T only appears in
the integrand and the limit ωD does not depend on T . Indeed ωD defines the Debye
temperature θD if we write

kB θD = h̄ωD     (1.6.6)

from which

θD = (h̄c/kB)(6π² Nat/V)^{1/3}
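As an aside, plugging illustrative numbers into this formula (the sound velocity and atomic density below are assumptions of the right order for a simple solid, not data from the notes) gives a Debye temperature of a few hundred kelvin, as observed for many crystals:

```python
import numpy as np

kB = 1.380649e-23       # J/K
hbar = 1.054571817e-34  # J s

def debye_temperature(c, n_density):
    """theta_D = (hbar*c/kB) * (6*pi^2 * Nat/V)**(1/3)."""
    return hbar * c / kB * (6 * np.pi**2 * n_density) ** (1.0 / 3.0)

# Assumed, order-of-magnitude inputs (hypothetical values):
c_sound = 4000.0        # m/s, a typical sound velocity
n_at = 6e28             # atoms per m^3, a typical atomic density

theta = debye_temperature(c_sound, n_at)   # a few hundred kelvin
```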
Now

(d/dT) [ω³/(e^{h̄ω/k_B T} − 1)] = ω³ e^{h̄ω/k_B T} h̄ω/((e^{h̄ω/k_B T} − 1)² kB T²)

and so

Cph = (3V h̄²/2π²c³kB T²) ∫₀^{ωD} ω⁴ e^{h̄ω/k_B T}/(e^{h̄ω/k_B T} − 1)² dω
We simplify the integral by making the substitution

x = h̄ω/k_B T ,     dω = (kB T/h̄) dx

so that

Cph = (3V h̄²/2π²c³kB T²) ∫₀^{h̄ωD/k_B T} (kB T/h̄)⁴ x⁴eˣ/(eˣ − 1)² (kB T/h̄) dx
    = 9Nat kB (T/θD)³ ∫₀^{θD/T} x⁴eˣ/(eˣ − 1)² dx

You can easily see this by gathering terms and using

Nat/θD³ = V kB³/(6π²h̄³c³)

from our definition of the Debye temperature (1.6.6).

It is not at all easy to see that as T → ∞ the heat capacity tends to the Law of Dulong
and Petit. The best way is to evaluate the function numerically. You can see it plotted
in Kittel, p124, where you can also see demonstrated that at low temperature the heat
capacity becomes

Cph = (12π⁴/5) Nat kB (T/θD)³

This is the important Debye “T 3 Law” which says that at low temperature the heat
capacity of crystals is proportional to T 3 . This, rather than the Einstein exponential
drop to zero, is what is observed.
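Both limits of the Debye result can be checked numerically. The sketch below (illustrative θD, crude midpoint-rule quadrature, not from the notes) recovers the Dulong–Petit value at high temperature and, at low temperature, the T³ coefficient 3∫₀^∞ x⁴eˣ/(eˣ − 1)² dx = 4π⁴/5 per atom in units of 3kB (equivalently 12π⁴/5 in units of kB):

```python
import numpy as np

def c_debye(T, theta_D, npts=20000):
    """Debye heat capacity per atom, in units of 3*kB (midpoint-rule integral)."""
    xD = theta_D / T
    x = (np.arange(npts) + 0.5) * xD / npts       # midpoints avoid x = 0
    f = x**4 * np.exp(x) / np.expm1(x)**2
    return 3.0 * (T / theta_D)**3 * np.sum(f) * xD / npts

theta_D = 400.0                        # illustrative Debye temperature (K)

high = c_debye(4000.0, theta_D)        # -> 1, the Dulong-Petit limit
low = c_debye(4.0, theta_D)            # deep in the T^3 regime
t3_coeff = low / (4.0 / theta_D)**3    # -> 4*pi^4/5
```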

The fact that the heat capacity falls to zero at low temperature is a quantum mechanical effect. The Einstein model has just one frequency ωE . When the temperature falls below about θE = h̄ωE /kB then the probability of that frequency being excited becomes exponentially small, as predicted by the Bose–Einstein distribution. The Debye model fixes this since the ω = ck dispersion allows modes that have frequencies that approach zero. There is a simple way to understand the T³ Law. At high temperature all modes up to frequency ωD are excited and these are enclosed within a sphere in k-space of radius kD = ωD /c. At some low temperature T we suppose that only modes with energies h̄ω = h̄ck less than kB T will be excited. These occupy a sphere of radius k so instead of all 3Nat modes being excited, only a fraction k³/kD³, the ratio of the volumes of these spheres, is excited. Hence there are
3Nat (k/kD)³ = 3Nat (T/θD)³
excited modes and so the internal energy 3Nat kB T is reduced by the same factor to
U ≈ 3Nat kB T (T/θD)³
and hence the heat capacity is
Cph ≈ 12Nat kB (T/θD)³

The main reason the numerical factor doesn’t agree with the proper limit (i.e., 12π⁴/5 ≠ 12) is the assumption that the internal energy is found by multiplying the high temperature value by the fraction of modes excited. This is because the remaining low temperature modes will not behave classically, each carrying an energy of kB T.

1.7 Particle–particle interactions—phonon scattering

We observe elementary excitations such as phonons through their interactions with other,
test particles in scattering experiments. (See Ibach and Lüth, chapter 3.)

According to de Broglie, quantum wave–particle duality asserts that particles/waves have an energy
E = h̄ω = hν
and a momentum
p = h̄k
Crystal periodicity introduces an important feature as we have seen, that we can replace the wavevector k by k + g if g is a vector of the reciprocal lattice (g = Σ_i m_i b_i) without any detectable change in the physics. Thus the momentum of a phonon is undetermined to within an additive amount h̄g.

h̄k is called the crystal momentum or pseudomomentum. The physical momentum is


Σ_j M (d/dt)δr_j

and this is zero for all phonons except those at k = 0 whose displacements describe a
uniform displacement of the lattice.

When particles interact, energy and momentum are conserved. The simplest is a three
particle interaction, e.g., an electron scattered by a phonon.

[Figure: three-particle scattering diagrams for phonon absorption and phonon emission.]

Here a straight line represents an electron and a zig-zag line represents a phonon (see
Kittel, chapter 4, figure 1).

Energy conservation requires

E_k′ + h̄ω_k = E_k″     (absorption)

E_k′ = h̄ω_k + E_k″     (emission)

Consider just the phonon emission (creation) process. Energy conservation would imply (writing the electron energies as h̄ω_k′ and h̄ω_k″)

h̄ω_k′ = h̄ω_k″ + h̄ω_k

and momentum conservation requires

k = k′ − k″ ≡ q

q is called the scattering vector or momentum transferred to the scatterer—i.e., the crystal. Note well that Ibach and Lüth in their chapter 3 use the opposite sign, so their K is our −q, but this is very unusual in scattering physics.

If k′ and k″ are such that their difference q lies within the first Brillouin zone then the wavevector of the phonon created is

k = q

For example:

[Figure: a Normal process; the scattering vector q lies inside the first Brillouin zone.]

This is called a Normal scattering process, or N process. However if the difference q


is larger than the dimension of the Brillouin zone this implies that the emitted phonon
wavevector lies outside the first Brillouin zone which is not permitted in the theory. But we
can always reduce this wavevector by subtracting any number of reciprocal lattice vectors
until the remaining wavevector does lie inside the first zone. The phonon wavevector
resulting is not q but q − g and the momentum balance is written

k′ − k″ = q = k + g

or

k = q − g

For example:

[Figure: an Umklapp process; q lies outside the first Brillouin zone and is folded back to k = q − g.]
This is called an Umklapp process or U process. The scattering vector (momentum trans-
fer) is larger than in the Normal case by an amount h̄g but the emitted phonon has the
same wavevector as before. We have not violated momentum conservation; instead we

say that the lattice can always provide a momentum ±h̄g in a scattering process. You
can think of an Umklapp process as the creation or destruction of a phonon accompanied
by a Bragg reflection—the phonon with wavevector q is reflected to q − g by the lattice
planes whose normal is g.
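In one dimension the folding of q back into the first zone is a one-line calculation; a minimal sketch (hypothetical lattice constant, not from the notes):

```python
import numpy as np

# 1-D reduction of a scattering vector q into the first Brillouin zone
# [-pi/a, pi/a); the integer n counts reciprocal lattice vectors g = 2*pi/a
# supplied by the crystal (n = 0: Normal process, n != 0: Umklapp).
a = 1.0
g = 2 * np.pi / a

def reduce_to_first_zone(q):
    """Return (k, n) with q = k + n*g and k in [-pi/a, pi/a)."""
    n = int(np.floor(q / g + 0.5))
    return q - n * g, n

k_N, n_N = reduce_to_first_zone(0.4 * np.pi)   # Normal: q already inside
k_U, n_U = reduce_to_first_zone(1.3 * np.pi)   # Umklapp: folded back by one g
```

The Umklapp case returns a phonon wavevector inside the zone plus a count of the lattice momentum h̄g absorbed by the crystal.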

What we said about electron scattering applies equally well to the scattering of other
particles. We need to look at some typical energies and wavelengths in experimentally
produced beams (see Ibach and Lüth, figure 3.9).

                     energy (eV)                         λ = 2π/k (Å)
Phonons              ∼ h̄ωD = kB θD ≈ 0.025              ∞ → ∼10
                     (for θD = room temperature)
Electrons            10–1000                             4–0.5
Photons              1000–100 000                        20–0.1
Thermal neutrons     0.01–1                              3–0.4

All these beams have wavelengths on the order of the lattice spacing and can be used in
the study of solids. But only neutrons have sufficiently low energy that the energy change
due to phonon scattering can be detected. Hence thermal neutrons are used to measure
phonon dispersion curves (see Ibach and Lüth, figure 4.4).

1.8 Anharmonic effects

In equation (1.1.2) we expanded the potential energy in a Taylor series up to second


order in the displacements of the atoms from their “official” positions. This is called the
harmonic approximation. As we saw when we quantised the lattice waves, a consequence
is that the phonons behave as independent simple harmonic oscillators—they cannot
interact or scatter against each other. This is precisely true for photons: they live forever if they are not scattered by matter; hence the 4 K cosmic background radiation.

Errors arising from the harmonic approximation are,


1. No thermal expansivity. Atoms are vibrating in a quadratic potential so at all temperatures their mean position is the same.

As temperature increases for an oscillator in an anharmonic potential well the mean


position increases since the occupation of higher energy excited states increases. For
details of how to use this to calculate the free energy as a function of T , and to estimate
the thermal expansion coefficient, see Ibach and Lüth, section 5.5. It is notable that
some crystals in some (usually low) temperature ranges shrink upon heating (at least in
one of more of the unit cell lattice vector directions). This negative thermal expansion,
or rather zero thermal expansion, is exploited by ceramic materials used in cooker hobs.
2. In the harmonic approximation the heat capacities at constant volume, Cv, and constant pressure, Cp, are identically equal. While they are very different in gases (remember for one mole of ideal gas Cp − Cv = R) they are anyway almost equal in a solid.
3. In the harmonic approximation the elastic constants are independent of temperature,
volume or pressure.

1.8.1 Phonon–phonon scattering

You may not be surprised that after including third order terms in the potential energy
the second quantised hamiltonian has extra terms such as

a⁺_k a_k′ a_k″ δ_{k+k′+k″,g}

Thus phonon scattering is permitted. The above expression implies that phonons of momentum h̄k′ and h̄k″ are destroyed while creating a phonon of momentum h̄k. The Kronecker delta assures momentum conservation since it is zero unless k + k′ + k″ = g.

If g is zero this is a Normal process.



2. Electrons

In what sense is an electron an elementary excitation? In low energy physics electrons


are not created or destroyed. However, we will see that electrons in the ground state of
a crystal belong to a Fermi sea of electrons and almost all of these are frozen into their
states by the Pauli exclusion principle. They only therefore become interesting in terms
of phenomena when those nearest to the surface of the sea, the Fermi surface, “break
away” into excited states, creating thereby an electron–hole pair. It is these elementary
excitations that we study when we probe the solid with lasers.

In condensed matter and molecular physics, or again if you like, low energy physics, we always distinguish core and valence electrons. The latter, which originate from the outer
shell of the atomic structure, are those which take part in bonding and the response to
perturbations. The lower-lying core electrons remain bound to the nucleus barely aware
that their parent atoms have participated in the formation of a molecule or solid. The
bonding that holds these atoms together is a consequence of the motions of the valence
electrons which is described by the Schrödinger equation. In this the potential takes
two parts; the first is that due to the combined Coulomb force of the nucleus and core
electrons, oddly enough called the external potential. The other term is the Coulomb
interaction between all the valence electrons themselves. If we can we like to simplify this
last term by postulating that each electron moves in an average electric field due to all the
other valence electrons. This is called the one electron approximation and allows us to
write down a single, independent Schrödinger equation for each electron. Why should this
work? Why do electrons appear to be “independent?” The electron–electron interaction is
strong; why can we ignore it? We often approximate the electron gas in a solid, especially
a metal, as jellium; that is a uniformly dense electron gas of ne = Ne /V electrons per
unit volume moving in a fixed, uniform positive background charge intended to mimic
the nuclei and core electron charge. As each electron moves it keeps away from the other
electrons because of both the Coulomb repulsion and the Pauli exclusion—this is called
correlation and exchange. So at the microscopic level the electron gas is not uniform, each
electron digs itself an exchange and correlation hole, so as to expose a patch of the bare
background positive charge. You can show that the amount of positive charge exposed is
exactly minus the charge on the electron. So as it moves, the electron drags with it its
exchange and correlation hole, and this object or quasiparticle is overall neutral and so does
not interact strongly with other quasiparticles. So when we pretend that we are dealing
with independent electrons, we are actually dealing with these quasiparticles; they may
have an “effective mass” different from m, but they do each obey an independent particle
Schrödinger equation. The theory behind all this is called density functional theory and
has revolutionised quantitative calculations in condensed matter and molecular physics.
Oddly enough you won’t find it mentioned in textbooks; the exception being the book by
Marder.

At each nucleus an electron sees an infinitely deep potential energy well. So why do
we assert that the free electron model amounts to a reasonable model for the electrons
in a solid? How can we imagine that a uniform positive background is in any sense a
plausible representation of the periodic array of potential wells? The answer lies in the

core electrons. These occupy a set of sharp energy levels in the Coulomb potential, rather
like the energy levels of an infinite square well. These are occupied according to the Aufbau
rule and are not available to the valence electrons because of the Pauli principle. In this
way the presence of the core electrons provides a repulsive potential that, in so called
sp-bonded solids at least, practically cancels the attractive potential from the nucleus.
The result is that the electrons see only a weak, sometimes even repulsive, potential at
the atomic cores. This is the subject of pseudopotential theory and again you don’t often
find it in textbooks although it provides the foundation for some of the most spectacular
theoretical predictions in the last 20 years. The free electron approximation is therefore for
many purposes a good one. We will see that it is a problem that can be solved; historically
this was done in the very early days of the new quantum mechanics by Arnold Sommerfeld
in the late 1920’s and the free electron gas is hence often called the “Sommerfeld model.”
Again historically, this was of vast importance; as with Einstein’s theory of the heat
capacity 20 years earlier it cleared up an outstanding inconsistency between observation
and classical physics. In the one case the question was, why does the heat capacity deviate
from the Dulong and Petit law at low temperature? In the other, why do the electrons
not contribute a further ½kB T of energy from each degree of freedom to the heat capacity?
In other words, in a monovalent metal why is the high temperature heat capacity 3kB per
atom and not (9/2)kB ?

2.1 The hydrogen molecule

We are now going to make a wide digression to discuss the hydrogen molecule. This is not
really solid state physics. On the other hand it is the starting point for the description of
the quantum mechanics of the chemical bond; moreover it is fundamental to the under-
standing of magnetism as it introduces the electron property of spin, the Pauli principle,
exchange and correlation, Hartree and Hartree–Fock approximations and configuration
interaction.

2.1.1 The H2+ molecular ion

We start with the H2+ molecular ion because this is one of the rare quantum mechanical
problems that can be solved exactly, although the mathematics is complicated. The
molecule and its electron are sketched as follows to define the distances ra = |r − Ra | and
rb = |r − Rb | between the two protons, a at position Ra and b at position Rb , and the
electron at position r.

In this section 2.1 we will work in atomic units of length, a0 = 1, so distances are in units
of the Bohr radius, a0 = 4πε0 h̄²/me². At a bond length R = 2 the two lowest energy
solutions of the Schrödinger equation look like this,

compared with a linear combination of atomic orbitals (LCAO)—the dotted line

ψ(1σg ) = e−αra + e−αrb (2.1.1a)


ψ(1σu ) = e−αra − e−αrb (2.1.1b)

If α = 1 each term is the exact 1s orbital solution to the Schrödinger equation for the
hydrogen atom and it’s their linear combination that is shown in the dotted line in the
figure. We introduce α as a “variational parameter.” Note that the solution to the
Schrödinger equation is drawn in the figure but I have not derived the wavefunction or
even given you its mathematical form (the maths is just too difficult). The dotted line is
the LCAO with α = 1. If I did a variational calculation, using (2.1.1) as trial functions
and varying α to minimise the energy, then the result would be indistinguishable by eye
from the exact result. This is the power of the variational method: we arrive at a very
precise result but still in the simple mathematical form of the LCAO.

It’s not too hard to normalise (2.1.1), although you need a three dimensional inte-
gral using cylindrical symmetry. Having done that, we can write (2.1.1) in terms of
normalised molecular orbitals,
ug = [(2π/α³)(1 + S)]^(−1/2) (e−αra + e−αrb)                       (2.1.2a)

uu = [(2π/α³)(1 − S)]^(−1/2) (e−αra − e−αrb)                       (2.1.2b)

where

S = (α³/π) ∫ e−αra e−αrb dr

S, of course, depends on the distance between the nuclei R and in fact you can do the
integral to get

S = e−αR (1 + αR + α²R²/3)
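The closed form for S can be checked numerically. A minimal sketch (the quadrature scheme and grid sizes here are illustrative choices, not part of the notes): place the nuclei at z = ±R/2 on the z-axis and integrate over a cylindrical grid.

```python
import math

def overlap_numeric(alpha, R, zmax=20.0, rhomax=20.0, n=300):
    """Midpoint-rule estimate of S = (alpha^3/pi) * integral of
    exp(-alpha*(ra + rb)) dV, with the nuclei at z = +/- R/2."""
    dz, drho = 2.0 * zmax / n, rhomax / n
    total = 0.0
    for i in range(n):
        z = -zmax + (i + 0.5) * dz
        for j in range(n):
            rho = (j + 0.5) * drho
            ra = math.hypot(rho, z - R / 2.0)
            rb = math.hypot(rho, z + R / 2.0)
            # cylindrical volume element: 2*pi*rho drho dz
            total += math.exp(-alpha * (ra + rb)) * 2.0 * math.pi * rho * drho * dz
    return alpha ** 3 / math.pi * total

def overlap_exact(alpha, R):
    x = alpha * R
    return math.exp(-x) * (1.0 + x + x * x / 3.0)

# at the bond length R = 2 (atomic units of length) with alpha = 1
print(overlap_numeric(1.0, 2.0), overlap_exact(1.0, 2.0))
```

At α = 1 and R = 2 both give S ≈ 0.586: over half the atomic charge clouds overlap at the bond length.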
Note that the electron probability density, or charge density in the H2+ ion is

ug² = [1/(2(1 + S))] (α³/π) (e−2αra + 2e−α(ra+rb) + e−2αrb)

This is a superposition of two reduced atomic charge densities, each of amount 1/(2(1 + S)),
and a bonding charge of amount S/(1 + S). Note that the effect of bonding is to remove
charge from the atomic orbitals and pile it up between the atoms. This is an operational
definition of a chemical bond.

2.1.2 The H2 molecule

Now we come to study the H2 molecule, which is indeed the H2+ molecular ion with an
added electron. This is really a many electron problem; and indeed it’s the simplest such
problem and hence worth spending some time on. Very often when wondering how to
proceed in a complicated question of electron interactions, the great solid state physicist
Volker Heine would say, “let us go back to the hydrogen molecule. . . ”

There are two distinguished approaches to solving approximately the Schrödinger equation
for the hydrogen molecule. These are the molecular orbital method of Hund and Mulliken
and the valence bond method of Heitler and London.

2.1.2.1 The molecular orbital approach

Hund and Hartree independently introduced the notion of a self consistent field. The
solution to the Schrödinger equation must be one in which which each electron moves in
the electrostatic field of the nuclei plus the average or mean field of the other electron(s).
The solution of the Schrödinger equation is self consistent if the wavefunction squared
is a solution of the Poisson equation for the mean electrostatic field that appears in the
Schrödinger equation.

The wavefunctions ug and uu (2.1.2) of the H2+ molecular ion are self consistent in the
sense that they are symmetric or antisymmetric in a plane bisecting the line joining the
nuclei and hence, when squared, will reproduce correctly the symmetry of the electrostatic
potential.

So it makes sense to start with these; and since we may put two electrons into each state
(having opposite spins) we could form the product wavefunction
ug(s1 r1) ug(s2 r2)

which is a solution of the sum of two independent H2+ hamiltonians for electrons 1 and 2.

Here, r1 and r2 are the positions of the electrons and s1 and s2 their spins. Since the
spins must be of opposite sign we write this as

ug (1)α(1)ug (2)β(2) (2.1.3)

using the shorthand r1 → 1, s1 → 1 etc. The spin functions α and β are defined as
follows. An electron spin s is either plus or minus one (in units of ½h̄) and the function
α(i) ≡ α(si) is defined to be one if electron i has spin +½h̄ and zero if it has spin −½h̄;
conversely β(i) ≡ β(si) is zero if electron i has spin +½h̄ and one if it has spin −½h̄. In
short,
α(1) = 1 α(−1) = 0 β(1) = 0 β(−1) = 1
The wavefunction (2.1.3) would correspond to a self consistent field in the Hartree approx-
imation. However this wavefunction does not respect the antisymmetry of fermions with
respect to exchange of position and spin coordinates. If we want to combine functions
like ug (1)α(1) and ug (2)β(2) into products and guarantee antisymmetry, this can be done
by forming Slater determinants of these. Thereby we replace (2.1.3) by

        | ug(1)α(1)   ug(2)α(2) |
(1/√2)  |                       | = ug(1)ug(2)Ξ1                   (2.1.4)
        | ug(1)β(1)   ug(2)β(2) |

where

Ξ1 = (1/√2) [α(1)β(2) − β(1)α(2)]
is the singlet spin function. The wavefunction is a singlet because the total spin is zero.
Equation (2.1.4) is the self consistent field solution in the Hartree–Fock approximation.

We will write the atomic s-orbitals for electrons 1 and 2 alternatively at nuclei a and b
(including the variational parameter α) as
ψa(r1) = a(1) = √(α³/π) e−α|r1−Ra|

ψb(r2) = b(2) = √(α³/π) e−α|r2−Rb|

ψa(r2) = a(2) = √(α³/π) e−α|r2−Ra|

ψb(r1) = b(1) = √(α³/π) e−α|r1−Rb|                                 (2.1.5)
π
We can then see that the spatial part of the wavefunction in the molecular orbital is
ug(1)ug(2) = [1/(2(1 + S))] (a(1) + b(1)) (a(2) + b(2))

           = [1/(2(1 + S))] [a(1)a(2) + b(1)b(2) + a(1)b(2) + b(1)a(2)]       (2.1.6)

where the first two terms in the bracket form the “ionic function” and the last two the
“H–L function”.
In the Hartree–Fock molecular orbital the electron spends half its time in an “ionic func-
tion,” i.e., having both electrons associated with the same atom. The wavefunction is

said to be uncorrelated: the electrons don’t seem to mind being attached to the same
nucleus. One might want to question this assumption.

2.1.2.2 The valence bond approach

Let us carry through the Heitler–London theory from the beginning. The geometry of the
problem is as in the figure. It looks more complicated than it is. The nuclei (protons) are
labelled a and b and are placed at positions Ra and Rb with respect to the origin of some
cartesian coordinate system. Similarly electrons labelled 1 and 2 are at positions r1 and
r2 .

The hamiltonian for the system of electrons and fixed nuclei is:
   
H = (−(h̄²/2m)∇1² − (1/4πε0) e²/|r1 − Ra|) + (−(h̄²/2m)∇2² − (1/4πε0) e²/|r2 − Rb|)

    + (1/4πε0) (−e²/|r1 − Rb| − e²/|r2 − Ra| + e²/|r1 − r2| + e²/R)

  ≡ Ha + Hb + Vint

1. The hamiltonian has electron (but not nucleus, because they’re fixed) kinetic energy,
electron-nucleus, electron-electron and nucleus-nucleus terms.
2. Ha and Hb , the quantities in large parentheses, are the hamiltonians for isolated
hydrogen atoms labelled a and b.

We now use first-order perturbation theory for Vint . We will only need to know the
wavefunctions of the unperturbed system; namely the eigenfunctions of Ha + Hb which
are products of the eigenfunctions of the free hydrogen atoms, equations (2.1.5), such as
a(1)b(2) and a(2)b(1). Let us for now simply require that

|Ψ(r1 , r2 )|2 = |Ψ(r2 , r1 )|2



as our fermion condition and include the spin functions later. Then the wavefunction of
the unperturbed system must be
 
Ψ(r1, r2) = (1/√2) [a(1)b(2) ± a(2)b(1)]

Our best estimate of the perturbation energy as a function of R is


E(R) = [∫∫ dr1 dr2 Ψ̄ Vint Ψ] / [∫∫ dr1 dr2 Ψ̄ Ψ]

The bar indicates complex conjugate. But the wavefunctions in this case are real. Now
using

V̂int = (1/4πε0) (−e²/|r1 − Rb| − e²/|r2 − Ra| + e²/|r1 − r2| + e²/R)
we get two solutions corresponding to the plus and minus signs respectively:

E+(R) = (C + A)/(1 + S²)     and     E−(R) = (C − A)/(1 − S²)

where
C = ∫∫ dr1 dr2 ā(1)a(1) (1/4πε0) (−e²/|r1 − Rb| − e²/|r2 − Ra| + e²/|r1 − r2| + e²/R) b̄(2)b(2)

A = ∫∫ dr1 dr2 ā(1)b(1) (1/4πε0) (−e²/|r1 − Rb| − e²/|r2 − Ra| + e²/|r1 − r2| + e²/R) b̄(2)a(2)

S² = ∫∫ dr1 dr2 ā(1)b(1) b̄(2)a(2)

The binding energy as a function of R is shown here.



1. The picture is superficially the same as in the molecular orbital theory. There is an
attractive “bonding” solution and a repulsive solution—called “antibonding” in MO
theory.
2. Neglecting the overlap S 2 (i.e., to first order in S), the “bonding–antibonding split-
ting” is a consequence of the appearance of the integral A which is negative and
much larger in magnitude than C.

C is easily interpreted: it is the interaction due to V̂int between the charge clouds ψa²(r1)
and ψb²(r2) on the unperturbed atoms. C is called the “Coulomb integral.” Note that
there is a self interaction (often called the “Hartree energy”) and interactions with the
two nuclei.

The integral A which leads to the bonding is similarly interpreted as the interaction
between the “exchange charge densities” ψ̄a(r1)ψb(r1) and ψ̄b(r2)ψa(r2). This is a wholly
quantum mechanical effect arising from the indistinguishability principle.

Actually the condition Ψ²(r1 r2) = Ψ²(r2 r1) is not the whole story. The Pauli exclusion
principle states that the wavefunction for fermions is antisymmetric under exchange of
position and spin coordinates. That is,

Ψ(s1 r1 s2 r2 ) = −Ψ(s2 r2 s1 r1 )

This is a more restrictive requirement. To satisfy the Pauli principle, we must have for
the wavefunction corresponding to E+ (R)
  
Ψ(r1 s1 r2 s2) = [a(1)b(2) + b(2)a(1)] [α(1)β(2) − α(2)β(1)]          (2.1.7)

and for that corresponding to E− (R)


 
  
Ψ(r1 s1 r2 s2) = [a(1)b(2) − b(2)a(1)] × { α(1)α(2),  β(1)β(2),  α(1)β(2) + α(2)β(1) }     (2.1.8)

Note that the symmetric orbital function is multiplied by an antisymmetric spin function
and vice versa. Spin factors with a single term e.g., α(1)α(2) indicate that electrons 1
and 2 both have the same spin (+1)—the total spin for these states is ±1 in units of h̄.
Two term spin factors are interpreted as mixtures of states in which the two electrons
have opposite spins and the total spin is zero. Equation (2.1.7) is called a “singlet” state.
Equation (2.1.8) has three degenerate levels having total spin −1, 0, +1 in units of h̄, but
whose degeneracy is lifted in a magnetic field. It is called a “triplet” state.

The essential conclusion from all this is that the symmetric orbital a(1)b(2) + b(2)a(1)
must have an antisymmetric spin function in which the spins have opposite signs and
the antisymmetric orbital a(1)b(2) − b(2)a(1) must be occupied by electrons with parallel
spins. Only hydrogen atoms whose electrons have opposite spins will form a molecule.
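The antisymmetry of (2.1.7) and (2.1.8) can be verified mechanically. A sketch with stand-in orbitals in one dimension (the functional forms of a and b below are arbitrary illustrative choices; only the exchange symmetry matters):

```python
import math

def alpha_s(s):   # spin-up projector: 1 if s = +1, else 0
    return 1.0 if s == +1 else 0.0

def beta_s(s):    # spin-down projector: 1 if s = -1, else 0
    return 1.0 if s == -1 else 0.0

# arbitrary 1s-like stand-ins for a(i) and b(i), centred at 0 and 2
def a(r): return math.exp(-abs(r - 0.0))
def b(r): return math.exp(-abs(r - 2.0))

def psi_singlet(r1, s1, r2, s2):
    # (a(1)b(2) + a(2)b(1)) * (alpha(1)beta(2) - alpha(2)beta(1)), eq. (2.1.7)
    return (a(r1) * b(r2) + a(r2) * b(r1)) * (alpha_s(s1) * beta_s(s2) - alpha_s(s2) * beta_s(s1))

def psi_triplet0(r1, s1, r2, s2):
    # (a(1)b(2) - a(2)b(1)) * (alpha(1)beta(2) + alpha(2)beta(1)), the Sz = 0 line of (2.1.8)
    return (a(r1) * b(r2) - a(r2) * b(r1)) * (alpha_s(s1) * beta_s(s2) + alpha_s(s2) * beta_s(s1))

# exchanging (r1, s1) <-> (r2, s2) flips the sign of both wavefunctions
for psi in (psi_singlet, psi_triplet0):
    for s1 in (+1, -1):
        for s2 in (+1, -1):
            assert abs(psi(0.3, s1, 1.7, s2) + psi(1.7, s2, 0.3, s1)) < 1e-12
```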

In this description of the H2 molecule, the “exchange energy” A is identified as the origin
of the bonding. This is a quantum mechanical effect. The general definition of the
exchange energy is half the energy difference between antiparallel and parallel spin states
of a quantum mechanical two particle system,

Ax ≡ ½ (E↑↓ − E↑↑).

You could show that for the Heitler–London H2 molecule, neglecting S 2 , the exchange
energy is

½ (E− − E+)

   = ∫∫ dr1 dr2 ā(1)b̄(2) (1/4πε0) (−e²/|r1 − Rb| − e²/|r2 − Ra| + e²/|r1 − r2|) a(2)b(1)

   ≡ AH.

AH is called the “Heisenberg exchange energy.” A positive exchange energy favours
antisymmetric or antiferromagnetic spin alignments. Ferromagnetism occurs when the
exchange coupling is negative. More on that later when we discuss magnetism.

2.1.2.3 Molecular orbital and valence bond pictures compared

I want to discuss the contrast between the Heitler–London and molecular orbital pictures
of the H2 molecule. We have called a(1) the wavefunction of electron 1 near the isolated
hydrogen nucleus a and similarly b(2). Of course, when the molecule is formed from its
atoms, electron 1 may become associated with hydrogen nucleus b and vice versa in which
case we will need a(2) and b(1) defined in equations (2.1.5). In the molecular orbital
picture of Hund and Mulliken, we begin by recalling the bonding molecular orbital of
the H2+ molecular ion, equation (2.1.2a). We could construct the product or “Hartree”
wavefunction
ug (1) α(1) ug (2) β(2)
by putting electrons 1 and 2 into a bonding orbital, having opposite spins, but this
wavefunction does not have the fermion antisymmetry property. A better wavefunction
is the Slater determinant,

        | ug(1) α(1)   ug(2) α(2) |
(1/√2)  |                         | = ug(1) ug(2) Ξ1,
        | ug(1) β(1)   ug(2) β(2) |

where

Ξ1 = (1/√2) [α(1) β(2) − β(1) α(2)]
is the singlet spin function; that is, the total spin is zero. The point is that the MO
picture amounts to the Hartree–Fock approximation since the wavefunction is a single
Slater determinant.

In contrast, the Heitler–London or valence bond wavefunction for the H2 singlet ground
state is
[ a(1) b(2) + a(2) b(1) ] Ξ1 , (2.1.9)
and the first excited, triplet, states are

[ a(1) b(2) − a(2) b(1) ] × { α(1) α(2),   α(1) β(2) + β(1) α(2),   β(1) β(2) }.

The physical picture is that if electron 1 is associated largely with nucleus a, then elec-
tron 2 will be found near nucleus b, and vice versa. So instead of the electrons sharing
a bonding MO, they are keeping well apart in a correlated state. Mathematically, this
is reflected in the fact that (2.1.9) cannot be written as a single determinant. Indeed,
consider the four determinants

        | a(1) χa(1)   b(1) χb(1) |
(1/√2)  |                         |
        | a(2) χa(2)   b(2) χb(2) |

arising when (i ) χa and χb are both α; (ii ) both β; (iii ) α, β and (iv ) β, α respectively.
You will find that (2.1.9) is the sum of determinants (iii ) and (iv ). The triplet states
which are degenerate in the absence of a magnetic field are, respectively: determinant (i);
the difference of (iii ) and (iv ); and (ii ).

So, the valence bond wavefunction is correlated, having the right properties upon disso-
ciation that each nucleus ends up with one electron of a different spin. This is always
the best starting point for studying a correlated system, and the wavefunction cannot be
written as a single Slater determinant.

The molecular orbital ug (1) ug (2) can be expanded as in (2.1.6) to give

ug(1) ug(2) = [1/(2(1 + S))] [a(1) b(2) + b(1) a(2) + a(1) a(2) + b(1) b(2)]      (2.1.10)

so the electron spends equal amounts of time in the “covalent” state as in the “ionic” state
and there’s no guarantee that on dissociation each nucleus will get one electron: there’s an
equal probability that both electrons will end up on the same nucleus. One way to “fix up”
the MO picture is to include some amount of the antibonding MO determinant uu (1) uu (2)
from (2.1.2b) in a variational calculation. This is called configuration interaction.

If we depart from hydrogen-like orbitals e−αr and think of these as unknown functions
with the properties that there are hopping integrals

h = ⟨a(1)|H|b(1)⟩ = ⟨a(2)|H|b(2)⟩ < 0

overlap integrals

S = ⟨a(1)|b(1)⟩ = ⟨a(2)|b(2)⟩

and a Coulomb repulsion when two electrons occupy the same site
 
U = ⟨a(1)a(2)| (1/4πε0) e²/|r1 − r2| |a(1)a(2)⟩

  = ⟨b(1)b(2)| (1/4πε0) e²/|r1 − r2| |b(1)b(2)⟩

then solving the 2 × 2 secular equations between the bonding and antibonding molecular
orbitals of equations (2.1.2) ug (1) ug (2) and uu (1) uu (2), as in equation (2.1.10), one finds
that the ground state spatial wavefunction is
Ψ(r1, r2) = [1/√(2(1 + S²))] [ (a(1)b(2) + b(1)a(2)) − (2h/U)·(1 − S²)/(1 + S²)·(a(1)a(2) + b(1)b(2)) ]

where the first bracketed term is the “H–L function” and the second the “ionic function,”

as long as U ≫ |h|. This results in a tight binding model (section 2.4) in which, by
choosing the parameters h and U, we can go from the uncorrelated MO picture to the
correlated VB picture.
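The crossover between the two pictures can be illustrated with the simplest two-site model. This is a sketch, not the notes' derivation: overlap is neglected (S = 0), and the 2 × 2 Hamiltonian [[0, 2h], [2h, U]] in the singlet basis {covalent, ionic} is the standard Hubbard-dimer form with the matrix elements asserted rather than derived here.

```python
import math

def ionic_weight(h, U):
    """Weight of the ionic configuration a(1)a(2) + b(1)b(2) in the ground
    state of H = [[0, 2h], [2h, U]], basis = (covalent, ionic), S = 0."""
    E = 0.5 * (U - math.sqrt(U * U + 16.0 * h * h))  # lower eigenvalue
    r = E / (2.0 * h)   # c_ionic / c_covalent from the first secular equation
    return r * r / (1.0 + r * r)

h = -1.0
print(ionic_weight(h, 0.0))    # U = 0: weight 0.5, the uncorrelated MO limit
print(ionic_weight(h, 100.0))  # U >> |h|: weight -> 0, the correlated VB limit
```

For U = 0 the ground state is exactly the MO product (2.1.10); as U/|h| grows the ionic terms are squeezed out and the Heitler–London form is recovered.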

The most important point to take away is that the MO picture is a one electron ap-
proximation. The valence bond description on the other hand is a two electron, or
many electron theory. The way to tell is that in the former, all integrals are over the
single electron coordinate of one or other of the electrons. In the many electron ap-
proach the wavefunction is a function of both or all the electron coordinates and integrals
are many-dimensional. This is of course much harder and we try always to cast our
physics into the one particle description; a typical instance being the physics of semi-
conductors, energy bands and so on. But there are some realms of physics in which the
many particle treatment is essential. In solid state physics the outstanding example is
superconductivity—there is simply no single particle picture for this. And hence, reluc-
tantly, I cannot include superconductivity in this lecture course. On the whole, magnetism
also falls into this category of phenomena. However, we will study this; and where we
can, use the one electron approximation.

A clear way to see the difference between the one and two electron pictures is the following
diagram, taken from figure 8.3 in Ibach and Lüth. The two energy level diagrams for the
H2 molecule seem to look the same. However in the MO description these are single
particle levels and, like in the Aufbau principle in chemistry you populate these by adding
one electron after the other. In this way the Pauli principle is not fundamental—you
build it in by hand as you populate the one electron levels. In contrast energy levels in
the many body picture are energies of the combined system of all the electrons. Either
both electrons are in one or the other state. In fact it looks at first sight as if the Pauli
principle is being violated in the right hand diagram. Do be sure you understand this
vital difference between the one electron picture which we use every day, but which is a
simplified abstraction, and the many electron picture which is actually the right physics.
We have discussed in the introduction to this section 2, why and under what circumstances
we are permitted to view the electronic structure problem in a single particle picture.

2.2 Free electron gas

Back to solids, we now describe in detail the “free electron gas,” within the Sommerfeld
model. We imagine a cubical box of side L containing Ne electrons, neutralised by a
uniform background of positive charge. The potential energy is constant everywhere and
we apply periodic boundary conditions at the walls of the box. The Schrödinger equation,
if the electrons are independent, is
 
−(h̄²/2m) (∂²/∂x² + ∂²/∂y² + ∂²/∂z²) ψk(r) = Ek ψk(r)

where m is the mass of the electron, and we have labelled the solutions with quantum
numbers
k = (kx , ky , kz )
The solutions are the plane waves

ψk(r) = (1/√V) eik·r = (1/√V) eikx x eiky y eikz z

and

E(k) = (h̄²/2m)(kx² + ky² + kz²) = h̄²k²/2m                         (2.2.1)

Periodic boundary conditions as discussed in section 1.3 place restrictions on the allowed
values of the wavevector k, namely


k = (2π/L)(n1, n2, n3)        n1, n2, n3 = 0, ±1, ±2, . . .

so, in contrast to the lattice waves, there’s an infinite number of allowed states, although
they are discretely spaced out so that the density of points in k-space is V /(2π)3 points
per unit volume of reciprocal space—the same as for lattice waves. Our picture is similar
to the problem of lattice waves. We have an isotropic dispersion relation, Ek = E(k)
which is, in this case, a parabola:

[Figure: the free electron parabola E(k), with EF and kF marked.]

and a spotty k-space:

[Figure: the grid of allowed k-points, with the Fermi sphere of radius kF.]

I have drawn a sphere in k-space for the following reason. Electrons are fermions. By the
Pauli principle, each state can be occupied by at most two electrons of opposite spin. If
there is a total of Ne electrons then the ground state at 0◦ K has all states in a sphere
of radius kF occupied and all other states unoccupied. The sphere in k-space separating
these states is called the Fermi sphere † and kF is the Fermi wavevector. We have

Ne = (Volume inside Fermi sphere) ÷ (Volume per state) × 2 for spins

   = (4π/3)kF³ × V/(2π)³ × 2

   = kF³V/3π²                                                      (2.2.2)

that is,
kF = (3π² ne)^(1/3)
† In a crystal with anisotropic dispersion, this is called the Fermi surface.
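Equation (2.2.2) can be checked by brute force: count the allowed k-points (2π/L)(n1, n2, n3) inside a sphere of radius kF. A sketch in units where 2π/L = 1 (the sphere radius m = kF L/2π = 12 is an arbitrary illustrative choice):

```python
import math

def count_states(m):
    """Twice (for spin) the number of integer triples n with |n| <= m,
    i.e. k-points inside a Fermi sphere of radius kF = 2*pi*m/L."""
    count = 0
    rng = range(-m, m + 1)
    for n1 in rng:
        for n2 in rng:
            for n3 in rng:
                if n1 * n1 + n2 * n2 + n3 * n3 <= m * m:
                    count += 1
    return 2 * count

def formula(m):
    # Ne = kF^3 V / (3 pi^2) with kF = 2*pi*m/L and V = L^3
    return 8.0 * math.pi * m ** 3 / 3.0

print(count_states(12), formula(12))  # the two agree to about a percent
```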

where
ne = Ne/V
is the electron density (number of electrons per unit volume). The lowest energy states
with k increasing from 0 to kF are occupied. The energy of the highest occupied state is
called the Fermi energy:

EF = h̄²kF²/2m

   = (h̄²/2m) (3π² Ne/V)^(2/3)                                      (2.2.3)

Now we can work out the density of states and hence the kinetic energy of the electron
gas. The number of states with wavevector less than some k is, from eq (2.2.2),
k³V/3π²

and from eq (2.2.1)

k³ = (2m/h̄²)^(3/2) E^(3/2)                                         (2.2.4)

and so the number of states with energy less than some E is

N(E) = (V/3π²) (2m/h̄²)^(3/2) E^(3/2)
The density of states is the derivative of this with respect to energy, exactly as we saw in
the case of lattice waves,

n(E) = dN/dE

     = (2m/h̄²)^(3/2) (V/2π²) √E                                    (2.2.5)

     = (3/2) N(E)/E

(In fact you can obtain the same result using the dispersion relation (2.2.1) and the general
relation (1.3.2)) and so the density of states at the Fermi level, a quantity that we will
use a lot in these notes is
n(EF) = (3/2) Ne/EF                                                (2.2.6)
(Be careful because Ibach and Lüth use D(E) to denote the density of states per unit
volume.)
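Equations (2.2.5) and (2.2.6) can be checked by numerical integration. A sketch in reduced units (setting Ne = EF = 1 is an arbitrary normalisation): integrating n(E) up to EF recovers Ne, and the mean energy comes out as (3/5)EF, anticipating the kinetic energy result derived below.

```python
Ne, EF, steps = 1.0, 1.0, 100000

def n_of_E(E):
    # n(E) = (3/2) Ne sqrt(E) / EF^(3/2): the free electron form, normalised
    # so that n(EF) = (3/2) Ne / EF as in eq. (2.2.6)
    return 1.5 * Ne * E ** 0.5 / EF ** 1.5

dE = EF / steps
grid = [(i + 0.5) * dE for i in range(steps)]     # midpoint rule
total = sum(n_of_E(E) * dE for E in grid)                 # should equal Ne
mean_E = sum(E * n_of_E(E) * dE for E in grid) / total    # should equal (3/5) EF

print(total, mean_E, n_of_E(EF))
```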

We plot the density of states either in this way. . .



[Figure: g(E) ∝ √E, with the occupied states below EF and the unoccupied states above.]

. . . or like this, to emphasise how the states are filled up, like a bath, to the Fermi level:

[Figure: the E ∝ k² dispersion filled up to EF and kF, beside g(E) ∝ √E filled up to EF and g(EF).]

(I’m sorry, I’ve used g(E) rather than n(E) for the density of states.)

The internal energy at 0◦ K is obtained as we did in the Debye model for phonons:
∫occupied states (energy × density of states)
It is purely kinetic energy because the potential energy is zero. The kinetic energy per

electron is therefore
Ekin = (1/Ne) ∫0^EF E n(E) dE

     = (V/Ne) (2m/h̄²)^(3/2) (1/2π²) (2/5) EF^(5/2)

But from eqns (2.2.4) and (2.2.2) we have

V/Ne = 3π²/kF³ = 3π² (h̄²/2m)^(3/2) EF^(−3/2).

Nearly everything cancels and we get

Ekin = (3/5) EF

An electron at the Fermi level has a kinetic energy equal to EF even at 0◦ K. Its group
velocity is

vg = d(E/h̄)/dk = h̄kF/m                                            (2.2.7)

When you work this out for a typical metal, you find a speed of order 10^6 ms−1—about
a hundredth of the speed of light in a vacuum. Even at 0◦ K the electrons are really zipping about.
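Plugging numbers into (2.2.2), (2.2.3) and (2.2.7): a sketch for sodium, taking ne ≈ 2.65 × 10^28 m−3 (one valence electron per atom; this density is an assumed illustrative value, not quoted in the notes):

```python
import math

hbar = 1.054571817e-34   # J s
m_e  = 9.1093837015e-31  # kg
k_B  = 1.380649e-23      # J / K
eV   = 1.602176634e-19   # J

n_e = 2.65e28            # valence electrons per m^3 (sodium, assumed)

k_F = (3.0 * math.pi ** 2 * n_e) ** (1.0 / 3.0)   # eq. (2.2.2)
E_F = hbar ** 2 * k_F ** 2 / (2.0 * m_e)          # eq. (2.2.3)
T_F = E_F / k_B                                   # Fermi temperature
v_F = hbar * k_F / m_e                            # eq. (2.2.7)

print(f"kF = {k_F:.3e} 1/m")
print(f"EF = {E_F / eV:.2f} eV, TF = {T_F:.3e} K")
print(f"vF = {v_F:.3e} m/s")
```

This reproduces the EF ≈ 3.2 eV and T/TF ≈ 0.008 entries for Na in the table below, and a Fermi velocity of about 10^6 ms−1.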

Away from 0◦ K, the electron states are occupied according to Fermi–Dirac statistics. The
probability that a state of energy E will be occupied at temperature T is

f(E) = 1/(e^((E−µ)/kB T) + 1)

µ is called the chemical potential of the electrons and must be chosen in such a way that
the total number of electrons is Ne :

Ne = ∫−∞^∞ n(E) f(E) dE

At 0◦ K, µ = EF. The Fermi function is close to a step function at all reasonable
temperatures (see fig 3, chap 6, Kittel).

At non zero temperature some states below the Fermi level are unoccupied and some
above become occupied. However this “smearing” is only over a small fraction of the
energy range. If we define a Fermi temperature from

kB TF = EF

the smearing is in a range of width kB T such that

kB T/EF = T/TF

which is small since typical values of EF calculated from equations (2.2.2) and (2.2.3) are
as shown in this table
valence EF /eV (expt.) T /TF at room temp.
Na 1 3.2 (2.8) 0.008
Mg 2 7.1 (7.6) 0.004
Al 3 11.6 (11.8) 0.002

whereas kB T at room temperature is only 1/40 eV (it’s useful to remember this “rule of
thumb”). The electron gas is said to be highly degenerate (entartet in German); only a
tiny fraction (a few thousandths) of the electrons can be involved in thermal or transport
processes. The rest are “frozen in” by the Pauli principle. This explains why the electron
contribution to the heat capacity is small. Classically you’d expect each electron to
supply an internal energy of (3/2)kB T to the crystal ((1/2)kB T of kinetic energy for each degree
of freedom; the potential energy is zero) so the heat capacity should be 3R per mole from

the phonons plus (3/2)R from the electrons, which is not in accord with the observations
of Dulong and Petit. But because of quantum mechanics (viz., the Pauli principle) at
some temperature T only a fraction of the electrons, about T /TF of them, can be excited
to above the Fermi level. So you’d expect the heat capacity due to the electrons to be
roughly
Ce ≈ (3/2) Ne kB (T/TF)
An exact calculation (see Kittel, chapter 6, or Ibach and Lüth, section 6.4) results in
Ce = (π²/3) kB² n(EF) T
Not surprisingly this depends on the density of states at the Fermi level, which is, us-
ing (2.2.6),
n(EF) = (3/2) Ne/EF
Hence,
Ce = (π²/2) Ne kB² T/EF
In terms of TF = EF /kB , therefore,
Ce = (π²/2) Ne kB (T/TF)                                           (2.2.8)
So our initial estimate was out by a factor of π 2 /3 but we did get the linear dependence
on temperature.
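As a numerical illustration of (2.2.8), a sketch for one mole of a monovalent metal, taking the free electron EF = 3.2 eV for Na from the table above:

```python
import math

R   = 8.314462618        # J / (mol K): gas constant, i.e. NA * kB
k_B = 1.380649e-23       # J / K
eV  = 1.602176634e-19    # J

E_F = 3.2 * eV           # free electron value for Na (from the table)
T_F = E_F / k_B
T   = 300.0

# eq. (2.2.8) per mole of a monovalent metal: Ce = (pi^2/2) R T / TF
C_e = 0.5 * math.pi ** 2 * R * T / T_F
print(f"TF = {T_F:.3e} K, Ce = {C_e:.4f} J/(mol K)")
print(f"Ce / 3R = {C_e / (3.0 * R):.4f}")
```

The electronic term at room temperature is only about one percent of the Dulong–Petit value 3R.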

Thus at low temperature in metals the heat capacity has the form

C = γT + AT 3 (2.2.9)

with the first term coming from the electrons, and the second from the phonons.

This can be confirmed from figure 9, chapter 6, Kittel in which equation (2.2.9) is shown
plotted for potassium.

From equation (2.2.2) we have


kF = (3π² ne)^(1/3)
and for a given metal we know ne , the number of valence electrons per unit volume and
hence we can predict the constant
γ = (π²/2) Ne kB/TF = π² Ne m kB²/(kF² h̄²)
and compare it with experiment, γexp . If we write
γexp = π² Ne m* kB²/(kF² h̄²)
we may call m∗ the “thermal effective mass” of the real electrons and the ratio m∗ /m will
tell us to what extent the electrons in a metal deviate from the free electron gas. In so
called normal metals the ratio is pretty close to one.
metal valence m∗ /m
Cu 1 1.38
Ag 1 1.0
Mg 2 1.33
Ca 2 1.9
Zn 2 0.85
Cd 2 0.75
Hg 2 1.88
Al 3 1.48
Ga 3 0.58
In 3 1.37
Sn 4 1.37
Pb 4 2

The extent to which a metal is “normal” can also be seen by comparing its actual, calcu-
lated density of states to the free electron square root form. This is illustrated below.

2.3 Effects of the crystal lattice

We replace the smeared out positive background in the Sommerfeld model with nuclear
point charges at stationary lattice positions (i.e., we neglect electron–phonon coupling).

Each nucleus is associated with a number of core electrons so that the deep attractive
potential due to the nuclei is nearly cancelled by a repulsive potential due to the presence
of the core electrons.

In sp-bonded metals (e.g., Na, Li, Mg, Al, Ga, In) valence electrons see a weak, sometimes
even repulsive, potential at the crystal lattice sites. How does this affect the free electron
gas picture?

Crystal periodicity imposes two conditions upon the electron wavefunctions.


1. Bloch’s theorem:

ψk (r) = eik·r uk (r) (2.3.1)

This states that the free electron plane wave solutions are modulated by a function
uk (r) which has the periodicity of the crystal lattice,

uk (r + rj ) = uk (r)
if rj is a direct space lattice vector. That means rj = Σi mi ai. We have already
described Bloch’s theorem indirectly in section 1.3. Essentially, quantum mechanics
does not require the wavefunction to have the symmetry of the lattice but it does
require |ψk (r)|2 to.
2. The wavefunction and energy eigenvalues are periodic in k-space,

ψk+g (r) = ψk (r)


E(k + g) = E(k)

This is a consequence of the periodicity of the crystal electrostatic potential and the
properties of Fourier series. For a complete derivation of these results, 1 and 2, see
Elliott, section 5.2.1.

As with phonons this means that we need only deal with wavevectors that lie in the first
Brillouin zone. In contrast with the phonon case, there is no maximum wavevector for
electrons. We can make the following construction.

Here I have cut loose segments of the parabola that lie outside the first Brillouin zone
and translated them into the first zone by the appropriate multiple of ±2π/a. In this
reduced zone scheme we have energy bands and density of states for the free electron gas
in a crystal, the so called “empty lattice.”

Now, what is the effect of turning on the weak crystal potential?



The electrons are Bragg reflected and form standing waves. A gap of “forbidden” energies
opens up at the Brillouin zone boundary. In the reduced zone scheme, it’s rendered like
this:

If a gap opens up throughout the Brillouin zone then the density of states may look like
this:

and we see the origin of the difference between metals, semiconductors and insulators, at
least in as much as the independent electron picture is valid.

2.4 The tight binding picture

We can see how energy bands are built up, maybe more readily than in the “nearly free
electron approximation,” by imagining a crystal being created by bringing together atoms
from infinity. Or, imagine a crystal with a huge lattice constant that we proceed to reduce
to its observed value. Suppose each atom has just one valence s-electron (e.g., H or Na).

We will write the electron wavefunction in the crystal as

ψk(r) = (1/√N) Σi eik·ri ϕs(r − ri)                                (2.4.1)

where ϕs (r − ri ) is an s-orbital centred at an atom labelled i at position vector ri . The


phase factor guarantees that Bloch’s theorem is satisfied since if rj is another atomic site,
then

ψk(r + rj) = (1/√N) eik·rj Σi′ eik·ri′ ϕs(r − ri′)

           = eik·rj ψk(r)

if ri′ = ri − rj. If ψk(r) is an eigenstate of the Schrödinger equation

Hψk (r) = E(k)ψk (r)

then its eigenvalue will be

E(k) = ∫ ψ̄k(r) H ψk(r) dr / ∫ ψ̄k(r) ψk(r) dr
Let us write the hamiltonian as

H = −(h̄²/2m)∇² + V(r)
and approximate the electrostatic potential energy as a sum of overlapping atomic-like
potentials, one at each atomic site:
V(r) = Σl v(r − rl)

Now we have
∫ ψ̄k(r) H ψk(r) dr = (1/N) Σi Σj eik·(ri−rj) ×
        ∫ ϕ̄s(r − rj) [−(h̄²/2m)∇² + Σl v(r − rl)] ϕs(r − ri) dr        (2.4.2)

and

∫ ψ̄k(r) ψk(r) dr = (1/N) Σi Σj eik·(ri−rj) ∫ ϕ̄s(r − rj) ϕs(r − ri) dr        (2.4.3)

N is a normalisation constant. This could form the basis of a precise solution to the
one-electron problem, but in the tight binding approximation we do three things.
1. Neglect three centre integrals in (2.4.2), i.e., those for which ri 6= rj 6= rl .
2. Neglect overlap integrals (2.4.3), except those for which ri = rj (those integrals are
equal to one by virtue of the choice of normalisation).
3. We don’t even attempt to calculate the remaining integrals. Instead we treat them
as adjustable parameters that are non-zero only between nearest neighbours. This
is exactly analogous to the model we made for lattice waves in section 1.1 (i.e., we
chose to model the interatomic forces with some “spring constants.”)

We then find

E(k) = Es + ∫ ϕ̄s(r) [ Σ_{ri≠0} v(r − ri) ] ϕs(r) dr + Σ_{ri≠0} e^{ik·ri} ∫ ϕ̄s(r) v(r) ϕs(r − ri) dr        (2.4.4)
Here, Es is the free-atomic energy level, because the ri = rj = rl term of equation (2.4.2)
is the eigenvalue in the isolated atom. We usually neglect the second term in equation (2.4.4)
which is called the crystal field energy; this effectively causes a shift in the orbital energy
Es due to the potential at neighbouring atoms—it is important in ligand field theory, the
chemistry of transition metal ions. So this all simplifies nicely to

E(k) = Es + Σ_{R≠0} e^{ik·R} h(R)        (2.4.5)

where the sum is over all position vectors R = ri that are not zero and h(R) is the transfer
integral, or bond integral, or hopping integral. In this simple s-band model, we write

h(R) = h = ssσ

We can now make a bandstructure. Consider the simple cubic lattice; if only nearest
neighbour bonds are allowed (again, rather like in our spring model for the lattice waves)
then there are six vectors in the sum (2.4.5): [±a, 0, 0], [0, ±a, 0], [0, 0, ±a], so

E(k) = Es + 2ssσ (cos kx a + cos ky a + cos kz a)

This function can be plotted along lines of high symmetry in the first Brillouin zone. By
convention in the simple cubic lattice the reciprocal lattice points are given the symbols
Γ for (0, 0, 0), R for (π/a)(1, 1, 1) and X for (π/a)(1, 0, 0).
[Figure: the s-band (E − Es)/|ssσ| plotted along R–Γ–X, with the density of states n(E) alongside; the band spans −6|ssσ| to +6|ssσ|.]

To compare with the free electron dispersion, I have included the curve for Ek = h̄²k²/2m,
equation (2.2.1), as a dotted line.
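This dispersion is simple enough to evaluate numerically. A minimal sketch, taking ssσ = −1 eV purely as an illustrative value (the notes treat it as an adjustable parameter):

```python
import numpy as np

sss = -1.0   # ssσ in eV: illustrative only; in tight binding it is an adjustable parameter
a = 1.0      # lattice constant (arbitrary units)
Es = 0.0     # free-atomic s level, taken as the energy zero

def E_s_band(k):
    """s-band energy on the simple cubic lattice with six nearest neighbours."""
    kx, ky, kz = k
    return Es + 2.0 * sss * (np.cos(kx * a) + np.cos(ky * a) + np.cos(kz * a))

E_Gamma = E_s_band((0.0, 0.0, 0.0))                   # band bottom, Es + 6 ssσ
E_X = E_s_band((np.pi / a, 0.0, 0.0))
E_R = E_s_band((np.pi / a, np.pi / a, np.pi / a))     # band top, Es − 6 ssσ
```

The bandwidth comes out as 12|ssσ|: the strength of the hopping sets the width of the band.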

Now let’s make bands out of p-orbitals. Instead of s-orbitals at each site we will put three
p-orbitals:

px py pz
ϕx (r − R) ϕy (r − R) ϕz (r − R)

The expansion of the wavefunction (2.4.1) is now


ψk(r) = (1/√N) Σ_{α=x,y,z} Σ_R cα e^{ik·R} ϕα(r − R)

in which cα are three expansion coefficients to be determined. We put this into the
Schrödinger equation
Hψk (r) = E(k)ψk (r)
Using the same approximations as before viz., 1, 2 and 3 on p 61, the Schrödinger equation
becomes a secular equation,
|(Ep − E(k)) δαα′ + Tαα′| = 0
where

Tαα′ = Σ_{R≠0} e^{ik·R} ∫ ϕ̄α(r) v(r) ϕα′(r − R) dr

and the integral is again a bond integral, h(R).

Note that α here is an index meaning x, y or z, i.e., it labels one of the p-orbitals. We
now have new bond integrals (compare equation (2.4.5)) which depend on the orientation
of the orbitals. The two new fundamental integrals are ppσ and ppπ, which form so called
σ-bonds and π-bonds. For a general connecting vector R the bond integral will be some
combination of these,

h(R) = ppσ cos²θ + ppπ sin²θ

If ℓ, m, n are direction cosines of R then


∫ ϕ̄x v ϕx dr = ℓ² ppσ + (1 − ℓ²) ppπ
∫ ϕ̄x v ϕy dr = ℓm (ppσ − ppπ)
∫ ϕ̄x v ϕz dr = ℓn (ppσ − ppπ)

and so on, by permuting x, y and z. These are called Slater–Koster transformations.
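The Slater–Koster transformations above can be collected into a compact 3 × 3 matrix, h_{αα′} = d_α d_{α′} ppσ + (δ_{αα′} − d_α d_{α′}) ppπ, with d the unit vector along R. A sketch with illustrative values of ppσ and ppπ:

```python
import numpy as np

pps, ppp = 3.0, -1.0   # ppσ, ppπ in eV: illustrative values only

def pp_block(R):
    """3x3 block of p-p bond integrals for connecting vector R,
    built from the direction cosines l, m, n of R."""
    d = np.asarray(R, dtype=float)
    d = d / np.linalg.norm(d)
    P = np.outer(d, d)                  # P[a, b] = d_a d_b
    return pps * P + ppp * (np.eye(3) - P)

H_x = pp_block([1, 0, 0])    # bond along x: pure ppσ for x, pure ppπ for y and z
H_xy = pp_block([1, 1, 0])   # mixed bond: off-diagonal lm(ppσ - ppπ) terms appear
```

The matrix form reproduces the three integrals quoted above term by term, and extends them to any pair α, α′.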

Normally the task would be to diagonalise the secular matrix, but in the simple cubic
lattice Tαα′ is already diagonal,

Tαα′ = 0 for α ≠ α′

and
Txx = 2ppσ cos kx a + 2ppπ (cos ky a + cos kz a)
Tyy = 2ppσ cos ky a + 2ppπ (cos kz a + cos kx a)
Tzz = 2ppσ cos kz a + 2ppπ (cos kx a + cos ky a)

In the (100) direction, k = (k, 0, 0), we have three bands,

E(k) = Ep + { 2ppσ cos ka + 4ppπ
            { 2ppπ cos ka + 2(ppσ + ppπ)
            { 2ppπ cos ka + 2(ppσ + ppπ)

     ≈ Ep + { 2ppσ cos ka
            { 2ppσ
            { 2ppσ

after neglecting ppπ, which I have plotted below.

[Figure: the three p-bands (E − Ep)/|ppσ| plotted along R–Γ–X, with the density of states n(E); two of the bands are degenerate along Γ–X.]
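The diagonal entries Txx, Tyy, Tzz can be checked numerically. A sketch with illustrative ppσ and ppπ values, confirming the degeneracy of the yy and zz bands along (100):

```python
import numpy as np

pps, ppp = 3.0, -1.0   # illustrative ppσ, ppπ (eV)
a = 1.0

def p_bands(k):
    """The three p-band energies (relative to Ep) on the simple cubic lattice."""
    kx, ky, kz = k
    Txx = 2*pps*np.cos(kx*a) + 2*ppp*(np.cos(ky*a) + np.cos(kz*a))
    Tyy = 2*pps*np.cos(ky*a) + 2*ppp*(np.cos(kz*a) + np.cos(kx*a))
    Tzz = 2*pps*np.cos(kz*a) + 2*ppp*(np.cos(kx*a) + np.cos(ky*a))
    return Txx, Tyy, Tzz

Txx, Tyy, Tzz = p_bands((0.5 * np.pi / a, 0.0, 0.0))   # halfway along Gamma-X
```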

Finally, we can make bands with d-orbitals. We make linear combinations of five d-Bloch
sums (as in equation (2.4.1)) and get a 5 × 5 secular equation,

|Dαα′ − E(k)δαα′| = 0

with

Dαα′ = Eα δαα′ + Σ_{R≠0} e^{ik·R} ∫ ϕ̄α(r) v(r) ϕα′(r − R) dr

where

α = xy, yz, xz, x² − y², 3z² − r²

This reduces to three fundamental bond integrals, ddσ, ddπ and ddδ.

Naturally you may wish to make bands out of s and p orbitals, or other combinations.
The complete set of 10 fundamental bond integrals is illustrated below.

2.4.1 Case study—Si and other semiconductors

Homopolar semiconductors (e.g., diamond, Si, Ge) and heteropolar semiconductors (e.g.,
GaAs, ZnTe, SiC) occur in the diamond or zincblende crystal structures and thus have
two atoms per primitive unit cell. They are sp-bonded compounds so in the simplest
approximation we need one s and three p-orbitals on each atom, making an 8 × 8 secular
equation involving the fundamental bond integrals

ssσ, spσ(psσ), ppσ, ppπ

If suitable values are empirically chosen one can construct energy bands easily with a
computer:

GaAs:  ssσ = −1.69 eV,  spσ = 2.373 eV,  psσ = 2.057 eV,
       ppσ = 3.508 eV,  ppπ = −0.9625 eV,
       εs(Ga) = −3.19 eV,  εs(As) = −8.21 eV,  εp(Ga) = 3.35 eV,  εp(As) = 1.16 eV

Si:    ssσ = −1.9375 eV,  spσ = 1.745 eV,
       ppσ = 3.05 eV,  ppπ = −1.075 eV,
       εs = −5.25 eV,  εp = 1.2 eV

[Figure: tight binding bandstructures and densities of states (states/eV) of GaAs (left) and Si (right), plotted along L–Γ–X.]

Points to note in these bandstructures are


1. There are optical gaps in both Si and GaAs; GaAs also has a polar gap.
2. At some high symmetry points in the Brillouin zone the eigenstates are purely of
either s or p-character.
3. GaAs has a direct optical gap. In real Si the gap is indirect from Γ to a point along
Γ − X nearest to X. Accurate bands for Si using the density functional theory are
shown below on p 68, where you can see the indirect gap clearly.

Actually the density functional theory in the so called “local density approximation” is
not very good at getting bandstructures right. A better method (now abandoned in
modern electronic structure theory) is the empirical pseudopotential. Here are bands for
Ge calculated using it, as well as bands from the tight binding approximation to compare
with the free electrons, or “empty lattice.”

A mathematically equivalent approach in the tight binding method for semiconductors
is to make Bloch sums from linear combinations of s and p-orbitals which have a
special chemical significance. Before constructing the secular matrix we can define four
sp³ hybrids, each one pointing along one of the four covalent bonds in the diamond cubic
structure:
φ1 = ½ (ϕs + ϕx + ϕy + ϕz)        [111]
φ2 = ½ (ϕs + ϕx − ϕy − ϕz)        [11̄1̄]
φ3 = ½ (ϕs − ϕx + ϕy − ϕz)        [1̄11̄]
φ4 = ½ (ϕs − ϕx − ϕy + ϕz)        [1̄1̄1]
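A quick check that the four hybrids above are orthonormal, assuming the atomic orbitals ϕs, ϕx, ϕy, ϕz are themselves orthonormal:

```python
import numpy as np

# Rows are the coefficients of (ϕs, ϕx, ϕy, ϕz) in the hybrids φ1..φ4
hybrids = 0.5 * np.array([
    [1,  1,  1,  1],   # φ1, along [111]
    [1,  1, -1, -1],   # φ2, along [1-1-1]
    [1, -1,  1, -1],   # φ3, along [-11-1]
    [1, -1, -1,  1],   # φ4, along [-1-11]
], dtype=float)

# Overlap matrix of the hybrids: the identity if the orbitals are orthonormal
overlap = hybrids @ hybrids.T
```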

Next I’ll show you a more accurate calculation of the energy bands of Si, using the “local
density approximation” to “density functional theory.”
[Figure: LDA energy bands of Si (energy in Ry) along high symmetry lines K–Γ–X–W–L–Γ, with the density of states in states/Ry/unit cell.]
2.4.2 Case study—transition metals

While we’re looking at bandstructures, here’s niobium, also calculated using the local
density approximation to density functional theory.

[Figure: LDA energy bands of Nb (energy in Ry) along Γ–H–N–P–Γ–N and H–P, with the density of states n(E) in states/Ry/atom.]

Transition metals are, approximately, sd-bonded. You may think of the s-electrons as
occupying a free electron, parabolic band. The d-bands are flat, or narrow, reflecting
the localised nature of d-orbitals. A sensible approximation to the density of states is a
rectangle of Nd occupied states superimposed on a free electron √E density
of states occupied by approximately one electron.

So Ns is roughly fixed and Nd increases across the transition series from one in Sc, Y
and La to ten in the “noble metals” Cu, Ag and Au. Inasmuch as you can distinguish
between s and d electrons these have different rôles to play in the properties of transition
metals: s-electrons are mobile and are responsible for transport (thermal and electrical
conductivity). d-electrons are responsible for cohesion (as we’ll see in the Friedel model to
follow); their density of states is much greater so they dominate the electronic contribution
to the heat capacity, and also the optical properties. Noble metals have a full d-band.

The colours of the noble metals can be explained through the onset of interband (i.e.,
d → s) absorption transitions; the onset depends on the energy between the top of
the d-band and the Fermi level. In Cu and Au this is much less than in Ag, so Cu and
Au can absorb in the green and their reflected light contains more red and yellow.

The Friedel model is the simplest possible description of the properties of transition met-
als. Only d-electrons are included and they are supposed to have a density of states that
is constant between the top and bottom of the band:

The width of the band is W and hence the height of the density of states is 10/W so that
the total number of electrons is 10. A transition metal is characterised by its number of
d-electrons which determines the position of the Fermi energy with respect to the centre
of the band,

Nd = ∫_{−W/2}^{EF} n(E) dE = ∫_{−W/2}^{EF} (10/W) dE = (10/W) (EF + ½W)
The internal energy, as usual, is

U = ∫_{−W/2}^{EF} n(E) E dE = ∫_{−W/2}^{EF} (10/W) E dE = (5/W) (EF² − ¼W²)

Putting these two formulas together, we get

U = (1/20) W Nd (Nd − 10)

This is a parabola,

and this is exactly the trend seen in cohesive energy, sublimation energy and melting
point along all three transition metal series in the periodic table.
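The parabola can be checked directly against the two integrals above; a minimal sketch, with energies in units of the bandwidth W:

```python
def friedel_U(Nd, W=1.0):
    """Band energy of the rectangular d-band (Friedel) model: U = (W/20) Nd (Nd - 10)."""
    return W / 20.0 * Nd * (Nd - 10.0)

def friedel_U_from_integral(Nd, W=1.0):
    """The same energy evaluated from U = (5/W)(EF^2 - W^2/4),
    with EF fixed by Nd = (10/W)(EF + W/2)."""
    EF = W * (Nd / 10.0 - 0.5)
    return 5.0 / W * (EF**2 - W**2 / 4.0)

U_half = friedel_U(5.0)   # deepest point of the parabola, at the half-filled band
```

The energy vanishes for the empty and the full band and is deepest at Nd = 5, the trend quoted in the text for the middle of the transition series.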

2.5 Magnetism

According to the Bohr–van Leeuwen theorem, in classical mechanics a system of charged


particles moving in a constant magnetic field and in thermal equilibrium has zero magnetic
moment. Niels Bohr made this discovery as a PhD student in 1912. This means that all
magnetic phenomena in matter—diamagnetism, paramagnetism and ferromagnetism—
must have their origins in quantum mechanical behaviour. Indeed magnetism arises from
the intrinsic spin that is carried by elementary particles, namely electrons. This is a kind
of angular momentum very loosely analogous to the angular momentum of a spinning
solid sphere having some radius r. But since elementary particles are point objects they
cannot be “spinning” in any classical sense. So the existence of the magnetic moment of an
electron is essentially quantum mechanical in origin. What it is that motivates electrons
collectively to align their spins in condensed matter has no explanation in classical physics.
So that also calls for a quantum mechanical explanation, and to get to the bottom of this,
it is best to start with the simplest system; or to put it another way, “let us go back to
the hydrogen molecule. . . ”

2.5.1 The localised picture

In the treatment of the Heitler–London H2 molecule a negative exchange energy favours


the singlet state with antiparallel spins. In fact this is a fairly general result. If we write
Ax = ½ (E↑↓ − E↑↑) = ½ (E1 − E3), the energy levels of the H2 molecule can be written

E = ½ (E1 + E3) − κ ½ (E1 − E3),    κ = ±1
  = ½ (E1 + E3) − κAx.        (2.5.1)

Now, you remember from your quantum mechanics that the eigenvalue of the spin operator
Ŝ squared is S² = S(S + 1), in units of h̄². For the electron, having spin one-half,
s_i² = s_i(s_i + 1) = 3/4. The total spin operator for the two electron system is Ŝ = ŝ1 + ŝ2,
and the eigenvalue of the total spin squared is

S² = s1² + s2² + 2ŝ1 · ŝ2 = 3/2 + 2ŝ1 · ŝ2 = S(S + 1).

Therefore if we write down the operator

(½ + 2ŝ1 · ŝ2)

we can see immediately that it has an eigenvalue [S(S + 1) − 1], which is +1 for S = 1,
the triplet state, and −1 for S = 0, the singlet state. It therefore has the same properties
as the number κ in equation (2.5.1) and can be substituted into that equation to give
E = ½ (E1 + E3) − (½ + 2ŝ1 · ŝ2) ½ (E1 − E3)
  = ½ (E1 + E3) − Ax (½ + 2ŝ1 · ŝ2)
  = E0 − 2Ax ŝ1 · ŝ2,        (2.5.2)

where E0 = (E1 + 3E3)/4 is the average energy of all four spin states of the system.

Equation (2.5.2) is nothing else except a rewriting of the equation we originally got for
the Heitler–London H2 molecule:

E± = E1,3 = (C ± A) / (1 ± S²)

which has the singlet and triplet solutions that we found. But it exposes nicely the shifts
away from the mean that occur as a result of exchange splitting. In fact equation (2.5.2)
can be readily generalised to the many-electron case to give the “Heisenberg exchange
hamiltonian:”

Hex = E0 − 2 Σ_{i<j} Aij ŝi · ŝj.

Here, the electrons are labelled i, and Aij are exchange coupling integrals. This model
is the starting point for the treatment of “localised” magnetism, the Ising model, and
so on. You can see that this is essentially a picture of spins localised on atomic sites.
In fact it is built in to the Heitler–London wavefunction—equation (2.1.9)—that while
electron 1 is associated with atom 1 then electron 2 is associated with atom 2, or while
electron 1 is associated with atom 2 then electron 2 is associated with atom 1; that is to
say a highly correlated state.

Unfortunately, the Heisenberg hamiltonian is not suitable for describing ferromagnetism


in metals. In fact, whereas A < 0 in the Heitler–London H2 molecule—which has the
antiferromagnetic ground state—you can see that you will need A > 0 to stabilise the
ferromagnetic state. It turns out that invariably A is negative when calculated between
one-electron states on neighbouring atomic sites, and it is difficult to see how this picture
can describe ferromagnetism.

Actually, A is positive when calculated between states on the same atom. The same
method we used for H2 applied to He, results in A > 0. Whereas in the Heitler–London
H2 molecule, A < 0 and the ground state energy is C + A (to first order) requiring the
spin function to be antisymmetric and hence a singlet; in He, A > 0 and the first excited
state energy is C − A and the wavefunction calls for the triplet spin function. This result
is the basis for the well-known Hund’s rule which states that in an unfilled atomic shell,
the spins will align parallel, as far as possible, to maximise the benefit from the exchange
energy gain. (The ground state of He is, of course, a singlet as two electrons occupy a 1s
orbital.)

Our problem, in α-Fe for example, is: given we can understand why the spins align within
the atom, why do they also couple from atomic site to site collectively to align parallel?

Another way to see why the antiferromagnetic state is expected between two atomic sites,
is that an electron will prefer to hop to a neighbouring atom if, when it gets there, it finds
the electron already there has opposite spin (then it is not repelled by the Pauli principle).
So there is a greater amplitude for hopping between atomic sites whose electrons are
aligned antiparallel. This greater amplitude smears out the position of the electron and
lowers its kinetic energy according to the Heisenberg uncertainty principle. Stronger
hopping also leads to stronger covalent bonding as is clear from the usual tight binding
or molecular orbital picture. You can expect there always to be a competition between
ferromagnetism and covalency.

All these considerations have got us so far: we don’t yet have a model that can describe
magnetism in metals, but we have convinced ourselves that exchange, the quantity we
have described at such length up till now, is at the bottom of magnetic phenomena. So
where do we go from here to get a description of the ferromagnetic state?

2.5.2 The itinerant picture

How does the spin alignment propagate itself throughout a crystal? The mechanism
that gives rise to localised spins in f -metals (rare-earths) is not overlap of the atomic
f -electron states because that is very small. Instead, the picture is one in which the
moment in the atomic f -shell polarises the electron gas in its neighbourhood and this
propagates the exchange interaction to the next atom. It is called “indirect exchange.”
So here the important exchange integrals are between the localised atomic f -states and
the free-electron like s and p band states. This is an appropriate description when the
electrons carrying the moment are in completely localised states, but is still not what
we want for transition metals where we know that the d-electrons are in approximately
one-electron bands—they are “itinerant.”

So let us first ask the following important question: can the free-electron gas become
spontaneously spin polarised at any density? In other words, what is the ground state in
the following figure, (a) or (b)?

The figure shows, back-to-back, the free-electron densities of states for plus spin (↑) and
minus spin (↓) electrons. If we transfer ½m spin-down electrons into the spin-up band (i.e.,
flip the spins of ½m electrons), there will be m more spin-up than spin-down electrons—a
magnetic moment of µB m, where µB = eh̄/2m is the Bohr magneton.
gain exchange energy as a result of the lowering due to exchange, but the process will
cost kinetic energy since higher energy band states need to be occupied. For the case of
Jellium, both contributions can be worked out in the Hartree–Fock approximation and it
turns out that only at very low densities does the exchange term win out, and the Jellium
becomes ferromagnetic. The critical density is so low, however, that it is only realised in
very few metals (for example Cs) but at these low densities correlation effects that have
been ignored become overwhelmingly important.

The d-electrons in transition metals are neither fully localised as are f -electrons nor free-
electron like as are the s and p band states. But we need to treat the d-electrons in a
band, or itinerant picture. As in the previous paragraph we have to think of the energy
bands as two degenerate sets, spin-up and spin-down, which can split under the action
of a magnetic or exchange field. The “exchange field” is that which comes from within
the crystal. This brings us, then, to the “Stoner model” of itinerant ferromagnetism that
draws on the ideas given up to now. We write n for the number of electrons per atom—this
is between zero and ten in the d-band. As in the previous paragraph, we imagine flipping
the spins of ½m electrons. The magnetic moment per atom (in units of µB) is then
m = n↑ − n↓
where

n↑ = ½ (n + m)        (2.5.3a)

is the number of up spins per atom and

n↓ = ½ (n − m)        (2.5.3b)

the number of down spins per atom. The easiest way to calculate the change in the kinetic
energy is to consider a rectangular, i.e., constant, density of states ns (E) = ns per atom
per spin as we did in the Friedel model in section 2.4.2. (NB, we use ns (E) rather than
n(E) because ns (E) is the density of states per atom per spin—in the unpolarised electron
gas, of course, ns(E) = ½ n(E)/Nat.)

We need to calculate

∫^{EF−∆E} E ns(E) dE + ∫^{EF+∆E} E ns(E) dE − 2 ∫^{EF} E ns(E) dE

which is simply

ns (∆E)² = ¼ m² / ns(EF)
and this is also approximately true if ns (E) does not vary too much in the range of energy
splitting 2∆E around EF , the Fermi level.

This is an energy penalty; what about the energy gain from exchange? Here we postulate
a repulsive energy between pairs of unlike spins, In↑n↓, where I represents roughly the
exchange energy penalty when unlike spin electrons find themselves, say, together in the
d-shell of the same atom. It cannot necessarily be written as an exchange integral like
Ax: as we have seen these are usually negative. Within this model, it is easy to work out
the energy gain from flipping ½m spins: I ½(n + m) ½(n − m) − I ½n ½n = −¼Im². This is an
energy gain, so the total (kinetic plus exchange) energy change on flipping ½m spins is

¼ m²/ns(EF) − ¼ Im² = ¼ (m²/ns(EF)) [1 − I ns(EF)]

which leaves a nett energy gain if

Ins (EF ) > 1 (2.5.4)

which is called the Stoner criterion for ferromagnetism. Note that the Stoner criterion
calls for a large I and a large density of states at the Fermi level. We see here the compe-
tition between magnetism and covalency. Central bcc transition metals characteristically
have the Fermi level at a minimum in the density of states separating occupied bonding
from unoccupied antibonding states—a typical covalent picture, and non magnetic. α-Fe,
on the other hand, has the Fermi level at a peak in the density of states and is stabilised
in the bcc structure not by covalency but by the exchange energy—a typical picture of
ferromagnetism.
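The energy balance just derived is easy to tabulate; a sketch with purely illustrative numbers for I and ns(EF):

```python
def delta_E(m, ns_EF, I):
    """Total (kinetic plus exchange) energy change on flipping m/2 spins:
    (1/4) m^2 / ns(EF) * (1 - I ns(EF))."""
    return 0.25 * m**2 / ns_EF * (1.0 - I * ns_EF)

# Stoner criterion I ns(EF) > 1: a moment then lowers the total energy
dE_magnetic = delta_E(1.0, ns_EF=2.0, I=1.0)       # I ns(EF) = 2 > 1
dE_nonmagnetic = delta_E(1.0, ns_EF=0.5, I=1.0)    # I ns(EF) = 0.5 < 1
```

Within this quadratic model any m lowers the energy once the criterion is met; it is the variation of the density of states away from EF, and the higher order terms kept in section 2.5.4, that fix the actual size of the moment.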

2.5.3 Pauli paramagnetism

We have seen that the free electron gas is not magnetic. But what happens to it when
we apply a magnetic field? Under a uniform magnetic induction of magnitude B = µ0H
(µ0 = 4π × 10⁻⁷ JA⁻²m⁻¹) an electron’s energy will be raised or lowered by an amount
±½ geµB B (where ge = 2.0023 . . . is the electron g-factor) depending on whether it is
aligned parallel or antiparallel to the field. In what follows I will always use ge = 2. So
the two free electron parabolas for the up and down spins are shifted up and down the
energy axis so that they are displaced by an amount geµB B. Since the Fermi energy
is the same for both spins, there is now a difference in the numbers of spin up and spin down
electrons given, approximately, by ½ n(EF) geµB B. It is approximate because the density
of states area shown below is not quite rectangular.

So the magnetic moment is

m = µe µB B n(EF) = µB² B n(EF)

where we have taken the magnetic moment of the electron as µe = ½ geµB ≈ µB. In that
case the magnetic moment per unit volume, the magnetisation, is

M = m/V = (1/V) µB² B n(EF)

The magnetic susceptibility is, by definition, the (linear) response of the electron gas to
an applied magnetic field, H = B/µ0 , and we neglect the change in magnetic field due to
the magnetisation of the material,

χp = µ0 dM/dB = (1/V) µ0 µB² n(EF)        (2.5.5)
   = (3/2) µ0 µB² Ne / (V EF)

known as the Pauli paramagnetic susceptibility. (We have used equation (2.2.6).) You
note that this is independent of temperature, which is obvious because I have done a
zero degree Kelvin calculation. But in fact the paramagnetic susceptibility in metals
is observed to be independent of temperature and this is because the electron gas is
degenerate—it’s the same story as for the heat capacity: only electrons within kB T of the
Fermi surface can flip their spins. Indeed if the gas were not degenerate (that is, if the
Pauli principle did not apply) then at some temperature T in a magnetic induction B the
probability that the electron will be found parallel to the field rather than antiparallel
will be about µB B/kB T and as there are Ne/V electrons per unit volume each with a
magnetic moment µB the total magnetisation will be
Mclassical = Ne µB² B / (V kB T)
leading to a classical estimate of the susceptibility of Ne µ0 µ2B /V kB T . This is the well
known result for a classical paramagnetic gas.† We could use the same qualitative argu-
ment as we used for the heat capacity due to electrons and say that only a fraction T /TF
will take part in the magnetisation. Then we will get
χ ≈ Ne µ0 µB² / (V kB TF)
which is temperature independent and apart from the factor 3/2 is the same as our
quantum mechanical result if we recall that EF = kB TF .

In addition, all condensed matter has a diamagnetic response to a magnetic field due to
the electrons’ orbital motions and Lenz’s law, and if this is added (with opposite sign) we
obtain the total magnetic susceptibility of the free electron gas,
 
χ = µ0 µB² (1/V) n(EF) [1 − (1/3) (m/m*)²]
where m∗ is the electronic effective mass which we will come to in section 3. In the free
electron gas, m∗ = m and so

χ = µ0 µB² (2m/h̄²) ne (3π² ne)^{−2/3}
  = µ0 µB² (1/V) Ne/EF

in which ne is the number of electrons per unit volume (see equation (2.2.3)). So you
notice that the magnetic susceptibility in the free electron approximation depends only
on the electron density and fundamental constants.
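As a rough numerical check on this, one can put in the electron density of a simple metal; the value ne ≈ 2.65 × 10²⁸ m⁻³ used here (roughly that of sodium) is an assumed input, not a number from the notes:

```python
import math

hbar = 1.054571817e-34    # J s
m_e = 9.1093837015e-31    # kg
mu_0 = 4e-7 * math.pi     # J A^-2 m^-1
mu_B = 9.2740100783e-24   # J T^-1
eV = 1.602176634e-19      # J

n_e = 2.65e28             # m^-3, roughly the electron density of Na (assumed)

# Free electron Fermi energy, EF = (hbar^2 / 2m)(3 pi^2 n_e)^(2/3)
E_F = hbar**2 / (2.0 * m_e) * (3.0 * math.pi**2 * n_e) ** (2.0 / 3.0)

# chi = mu_0 mu_B^2 n_e / EF: a dimensionless number of order 1e-6
chi = mu_0 * mu_B**2 * n_e / E_F
```

The Fermi energy comes out at about 3.2 eV and χ of order 10⁻⁶, the typical tiny size of the Pauli susceptibility in simple metals.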

2.5.4 The free electron band model of ferromagnetism and Stoner enhancement

We now go back and look at applying the Stoner model to the free electron gas, rather
than the rectangular band model. This is quite tricky but it will expose some very
fundamental principles in magnetism of solids.

† Note that this is not in contradiction to the Bohr–van Leeuwen theorem, which refers to a gas of moving
charged particles. A gas of magnetic dipoles will line up with the field in thermal equilibrium.

We have seen that in the Stoner model an
electron prefers to be together with electrons of the same spin. The physical reason for
this is that the exchange and correlation energy is lower because the two electrons more
naturally remain apart because of the Pauli principle and so their electrostatic repulsion
is on average smaller than the case of two otherwise identical unlike-spin electrons. Of
course, this is the origin of Hund’s first rule.

And so, the electrostatic potential seen by an electron is more attractive at an atomic site,
or in an energy band, if that site or band is already occupied by other like-spin electrons.
In the Stoner model we assume that the energy shift is proportional to that number of
electrons, the proportionality constant being the Stoner parameter I. So the energy of
an up spin electron is lowered with respect to the free electron value E(k) = h̄2 k 2 /2m by
an amount proportional to n↑ , remembering that n↑ and n↓ are the number of spin up or
spin down electrons per atom,

E↑(k) = E(k) − In↑ − µB B
      = E(k) − ½ I(n + m) − µB B
      = [E(k) − ½ In] − ½ [Im + 2µB B]
      = Ē(k) − ½ ∆E

We have used equations (2.5.3) and in addition to the effect of the internal field on the
energy levels, I have included the effect of an externally applied magnetic field, B. Strictly
speaking, the shift is −½ geµB B but, again, I am taking the g-factor for the electron to be
exactly two. A similar argument applies to the spin down electrons, so we have

E↑(k) = Ē(k) − ½ ∆E
E↓(k) = Ē(k) + ½ ∆E

with

∆E = Im + 2µB B        (2.5.6)
Now the states at temperature T are occupied in accordance with the Fermi–Dirac statis-
tics,
f(E) = 1 / (e^{(E−EF)/kB T} + 1)
and so we can write the magnetic moment per atom in units of µB as

m = n↑ − n↓ = (1/Nat) Σ_k [f↑(E(k)) − f↓(E(k))]
  = (1/Nat) Σ_k [ 1/(e^{(Ē(k) − ½∆E − EF)/kB T} + 1) − 1/(e^{(Ē(k) + ½∆E − EF)/kB T} + 1) ]

This is an example of a self consistent problem. The moment depends on the shifts in the
energy levels; but those shifts depend on the moments! In this rare case the self consistent
problem, as exposed by m appearing on both sides of the above equation, can be solved
in closed form.
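The self-consistency can also be seen numerically. Keeping only the small-m expansion derived below, the condition has the shape m = am − bm³ with a = I ns(EF) and b > 0; a fixed-point iteration with illustrative numbers:

```python
a, b = 1.2, 0.5    # illustrative: a = I*ns(EF) > 1, so a moment should develop

m = 0.1            # initial guess for the moment
for _ in range(200):
    m = a * m - b * m**3    # feed the moment back into the level shifts

m_analytic = ((a - 1.0) / b) ** 0.5   # non-trivial root of m = a m - b m^3
```

For a < 1 the same iteration collapses to m = 0: the paramagnetic state is then the only solution.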

Now we can see by making a Taylor expansion of the Fermi–Dirac function to third order
that

f(E − ½∆E − EF) − f(E + ½∆E − EF) = −∆E f′(E) − (1/3) (½∆E)³ f‴(E)

Using this we now find, in the case B = 0,

m = −Im (1/Nat) Σ_k (df/dE) − (1/24) I³m³ (1/Nat) Σ_k (d³f/dE³)        (2.5.7)

in which the first sum is negative and the second positive.

We divide through by m and we note that the last term has got to be greater than
zero. Therefore in order for there to be a non zero magnetic moment in the absence of a
magnetic field we must have

−1 − (I/Nat) Σ_k f′ > 0
I will tell you without proof that at 0 K (Ibach and Lüth, section 8.4)

(1/Nat) Σ_k f′ = −ns(EF)

(ns (E), not n(E), because the sum is only over one set of spins). So we come back again
to the Stoner criterion (2.5.4)
Ins (EF ) > 1
for there to be a spontaneous spin polarisation of the electron gas.

However, the density of states of the free electron gas is not large enough to meet this
criterion, and the so called “normal metals” are not ferromagnetic. Nevertheless we can
pursue this argument to find their magnetic susceptibility. First we put back the applied
magnetic field which splits the energy levels according to (2.5.6), and we neglect the third
order term in (2.5.7) to get the magnetic moment, in units of µB, of the free electron gas
as a function of applied magnetic field,

m = (Im + 2µB B) ns(EF)

which leads to

m = 2µB B ns(EF) / (1 − I ns(EF))
The total magnetic moment per unit volume, the magnetisation, M, is

M = (1/V) Nat m µB = (Nat/V) µB² B · 2ns(EF) / (1 − I ns(EF))

and so the susceptibility, χ, is

χ = µ0 dM/dB = µ0 µB² (Nat/V) · 2ns(EF) / (1 − I ns(EF))

χ = χp / (1 − I ns(EF))        (2.5.8)

Here I have used 2ns(E) = n(E)/Nat, and our earlier result (2.5.5) for the Pauli susceptibility.
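Equation (2.5.8) is a one-liner; a sketch of the enhancement factor, using an illustrative value of I ns(EF) (Pd, for instance, is known to sit close below the critical value, though 0.9 here is not a fitted number):

```python
def stoner_chi(chi_p, I_ns):
    """Exchange-enhanced susceptibility, chi = chi_p / (1 - I ns(EF)).
    Only meaningful for I ns(EF) < 1; at 1 the susceptibility diverges."""
    if I_ns >= 1.0:
        raise ValueError("I ns(EF) >= 1: the Stoner criterion is met")
    return chi_p / (1.0 - I_ns)

enhancement = stoner_chi(1.0, 0.9)   # ten-fold enhancement over the Pauli value
```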

So the result of employing the Stoner parameter, which represents atomic electron–
electron interactions, is threefold.
1. We can predict ferromagnetism in metals with both a large value of I and a large
density of states at the Fermi level. Only three pure metals meet the Stoner criterion,
namely Co, Fe and Ni—all transition metals with a large d-electron derived density
of states.
2. In the normal, free electron like, metals the electron–electron interaction causes an
enhancement of the one electron approximation Pauli susceptibility: from (2.5.8),
χ > χp.
3. In addition, you note from (2.5.8) that as I ns(EF) increases towards the value one,
the denominator vanishes and the susceptibility diverges to infinity; this is a sign of
the phase transition to ferromagnetism—the magnetisation is huge, independent of
any applied magnetic field.

3. Thermal and electrical conductivity—transport

In sections 1 and 2, we examined the stationary states of the Schrödinger equation for
phonons and electrons respectively. Here, in section 3 we study transport; that is, the
flow of electrons and phonons under the effects of uniform electric fields and temperature
gradients.

3.1 Scattering

We will see that ultimately electrical and thermal conductivity are limited by the scat-
tering of carriers. In this section we will only consider electron scattering. We’ll come to
phonons later. In general, an electron in an occupied Bloch state (or strictly wavepacket)
of wavevector k is scattered into an empty state of wavevector k0 . What matters is the
probability per unit time that this event happens. This is extraordinarily difficult either
to measure or to calculate, but formally it’s given by Fermi’s Golden Rule (actually due
to Dirac)
wk→k′ = nk′ |∫ ψ̄k′(r) U(r) ψk(r) dr|²        (3.1.1)

Here U(r) is the “scattering potential energy”—it is that term in the hamiltonian that
couples the states ψk(r) and ψk′(r); nk′ is the “density of final states.”

Also, formally, you may note that the cross section for this event is

σ = (1/I0) wk→k′

where I0 is the flux of initial states.

The only thing we may need to know from this is that

wk→k′ = wk′→k        (3.1.2)

which follows from its definition (3.1.1). This is a kind of “microscopic reversibility.”

We can separate scattering events into elastic, where the electron suffers no change in
energy eigenvalue, and inelastic, where energy and momentum are delivered to the scatterer.

Examples of electron scatterers are


1. Lattice defects, e.g., impurities; point, line and planar defects
point—vacancies, interstitials
line—dislocations
planar—grain boundaries, stacking faults, hetero-interfaces, surfaces

Defects scatter elastically, i.e.,

    E(k′) = E(k)

2. Phonons: the vibrating atoms make the scattering potential time dependent and the
scattering is inelastic. We have already studied these processes in section 1.7. In a
process such as

the matrix element is proportional to

    ∫ [ūk′(r) e−ik′·r] [uk(r) eik·r] [U0 eiK·r] dr = U0 ∫ ūk′(r) uk(r) eiq·r eiK·r dr

where the first bracket is the final Bloch state, the second the initial Bloch state, and the
third the scattering potential U(r) = U0 eiK·r. Here q is the scattering vector k − k′, and
K is the phonon’s wavevector. Because the
uk (r) are periodic (Bloch’s theorem) they can be expanded in a Fourier series and the
integral vanishes by the property of Fourier integrals unless

    K + q = g

a reciprocal lattice vector, which is the conservation law of crystal momentum. If g ≠ 0
this is an Umklapp process.
3. Other electrons. (See Ibach and Lüth, fig 9.3 and accompanying text). In an event
like this,

The incoming electron in state k1 is above, but within kB T of, the Fermi level and
suppose it scatters an electron below EF . Then by energy conservation,

E(k1 ) + E(k2 ) = E(k3 ) + E(k4 )

or, in shorthand,
E1 + E2 = E3 + E4 (3.1.3)

with
    E1 > EF ,    E3 > EF ,    E4 > EF ,    E2 < EF

Then from (3.1.3)
    E1 + E2 > 2EF

and rearranging this,
    |E2 − EF | < E1 − EF

That means electron 2 is yet closer to the Fermi surface than is electron 1—the incoming
one. Again, from (3.1.3), since E3 − EF , E4 − EF and E1 − EF are all positive while
E2 − EF is negative,

    (E3 − EF ) + (E4 − EF ) = (E1 − EF ) + (E2 − EF )
                            < (E1 − EF ) + |E2 − EF |

which implies
E3 − EF < E1 − EF + |E2 − EF |
and
E4 − EF < E1 − EF + |E2 − EF |
so electrons 3 and 4 are also within ∼ kB T of the Fermi surface. So how does the scattering
rate depend on temperature? We take it that electron 1—the incoming one—is thermally
activated (i.e., not a fast, “hot” electron) so it’s within kB T of EF . So electron 2 must
be chosen from those states within an energy width kB T below the Fermi surface; this
multiplies the rate by a factor T /TF . But the choice of electron 3 must also be from
states within kB T of EF , introducing another factor T /TF . The energy and momentum of
electron 4 are now fully determined, so no other factor of T /TF arises and the scattering
rate in the degenerate electron gas is reduced by the Pauli principle by a factor
    (T /TF )² ∼ 10⁻⁶

It is therefore very hard to detect electron–electron scattering—it’s a rare event and we
do not expect it to limit transport processes significantly in metals.
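The size of this suppression is easy to put in numbers. A minimal sketch, assuming a
typical Fermi temperature of T_F ≈ 10^5 K for a simple metal (a representative value,
not one taken from the text):

```python
# Pauli suppression of electron-electron scattering: the rate is reduced
# by roughly (T/T_F)^2. T_F = 1e5 K is an assumed, typical Fermi
# temperature for a simple metal.
T = 300.0        # ambient temperature, K
T_F = 1.0e5      # assumed Fermi temperature, K

suppression = (T / T_F) ** 2
print(f"(T/T_F)^2 = {suppression:.0e}")  # of order 1e-5 to 1e-6, depending on the metal
```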

3.2 Electrical conduction in semiconductors

We all know that a semiconductor is an insulator with a small enough band gap that at
ambient or slightly elevated temperatures electron–hole pairs are generated that act as
carriers. In an intrinsic semiconductor, we have

nh = ne

which is the number of carriers per unit volume. We also know that extrinsic doping
can introduce additional carriers and can pin the Fermi level at either donor or acceptor
levels. In an intrinsic semiconductor, the thermodynamic chemical potential of electrons,
or “Fermi level” is at mid gap. The carriers in semiconductors are non degenerate, they
exist in an energy range around kB T of the band edges and so are not subject to Pauli
principle restrictions in their excitations.

Note that the intrinsic carrier concentration in Si at 300◦ K is

    ni = nh = ne = 1.6 × 10^10 cm−3

while the atomic density is 5 × 10^22 atoms per cm³, so only one atom in about 10^12 is
“ionised.” In heavily doped semiconductors, the carrier concentration is about 10^18 cm−3 .

The above figure shows that semiconductor bands can be understood close to the band
edges as free electron parabolas,

    E(k) = h̄²k²/2m∗                (3.2.1)

where the effective mass, m∗ , accounts for the variation in curvature, with respect to the
free electron bands. We will use me and mh for electron and hole effective masses. Typical
values are
Si Ge GaAs
me /m 0.98 1.64 0.07
mh /m 0.45 (heavy) 0.28 (heavy) 0.45 (heavy)
0.16 (light) 0.04 (light) 0.08 (light)

We cannot plot bands in three dimensions, but we can draw constant energy surfaces. In
one dimension a certain energy will intersect a band at a point.
In two dimensions you would get a line. If a band was parabolic and isotropic—i.e., m∗
is the same for all directions in the Brillouin zone—then a constant energy surface would
be a sphere, for instance the Fermi sphere if the energy in question is EF . Below I show
constant energy surfaces for electron carriers just above the conduction band edge; they
are ellipsoids because the effective mass is not the same in all directions of k. However
the symmetry of the crystal is observed. You can tell from the figure on p 84 as well
as from the figure below, which shows constant energy surfaces in reciprocal space near
the conduction band minima, that both Ge and Si are indirect gap semiconductors. The
valence band edge is at k = (000) in both but the conduction band edge is at π/a(111)
(“the L-point”) in Ge and near 2π/a(100) (“the X-point”) in Si.

For an isotropic band, you can see from equation (3.2.1) that

    m∗ = h̄² (d²E/dk²)⁻¹
that is, inversely proportional to the curvature of the band. Hence “heavy” holes occupy
the flatter of the two valence bands. If the band is not isotropic then the effective mass
is a matrix m∗ij and the reciprocal effective mass tensor is

    (1/m∗)ij = (1/h̄²) d²E/dki dkj

where k = (k1 , k2 , k3 ). Such a 3 × 3 symmetric tensor can always be diagonalised by the
appropriate choice of principal axes, so in general there are three effective masses,

    (1/m∗)ij = diag( 1/m∗x , 1/m∗y , 1/m∗z )

For simplicity we will treat semiconductor bands as isotropic.
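The diagonalisation step can be illustrated numerically. A sketch with numpy, using a
made-up symmetric inverse-mass tensor (the numbers are purely illustrative, not those of
any real semiconductor):

```python
import numpy as np

# Hypothetical reciprocal effective mass tensor (1/m*)_ij in units of 1/m;
# it is symmetric, as the equality of mixed partial derivatives guarantees.
inv_m = np.array([[1.0, 0.2, 0.0],
                  [0.2, 1.0, 0.0],
                  [0.0, 0.0, 2.0]])

# Diagonalising gives the principal values 1/m_x*, 1/m_y*, 1/m_z*;
# the eigenvectors are the principal axes.
inv_masses, axes = np.linalg.eigh(inv_m)
masses = 1.0 / inv_masses
print("principal masses:", masses)   # the three values 1/0.8, 1/1.2 and 1/2.0
```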

3.2.1 Fermi level and number of carriers in a semiconductor

At finite temperature we expect to find electron–hole pair generation like this:

with q = k − k′ and K + q = g.

We need to use the Fermi–Dirac distribution function f (E) to calculate the density of
carriers in each band. The number of electrons per unit volume in the conduction band
will be the integral of the density of states per unit volume (see section 2.2) times the
occupation probability,

    ne = (1/V ) ∫ n(E) f (E) dE
If we assume that the conduction band is parabolic then we need the free electron gas
density of states (2.2.5), per unit volume, measured from the bottom of the band,
    (1/V ) n(E) = (1/2π²) (2me /h̄²)^{3/2} (E − Ec )^{1/2}

with me the effective mass in the conduction band and the energy taken relative to the
bottom of the conduction band Ec . At normal temperatures, and a few tenths of an
electron volt (eV) above the Fermi level, the Fermi–Dirac distribution is indistinguishable
numerically from the simpler classical Maxwell–Boltzmann distribution,

    1/(e^{(E−µ)/kB T} + 1)  −→  e^{−(E−µ)/kB T}
so we have

    ne = (1/V ) ∫ n(E) f (E) dE

       = ∫_{Ec}^{∞} (2me /h̄²)^{3/2} (1/2π²) (E − Ec )^{1/2} e^{−(E−µ)/kB T} dE

       = (1/2π²h̄³) (2me )^{3/2} e^{µ/kB T} ∫_{Ec}^{∞} (E − Ec )^{1/2} e^{−E/kB T} dE

This integral can be done† and we get

    ne = (2/h³) (2πme kB T )^{3/2} e^{−(Ec −µ)/kB T}

electrons per unit volume. Here, h = 2πh̄ is Planck’s original constant. It is interesting
that it appears in this way more often in statistical mechanics than in quantum mechanics.
The density of holes in the valence band is found in exactly the same way by integrating
the density of states downwards from the top of the valence band,

    nh = (2/h³) (2πmh kB T )^{3/2} e^{−(µ−Ev )/kB T}
† By substitution, u = E − Ec , du = dE;

    I ≡ ∫_{Ec}^{∞} (E − Ec )^{1/2} e^{−E/kB T} dE

      = ∫_{0}^{∞} u^{1/2} e^{−(u+Ec )/kB T} du

      = e^{−Ec /kB T} ∫_{0}^{∞} u^{1/2} e^{−u/kB T} du

      = e^{−Ec /kB T} (√π/2) (kB T )^{3/2}

(Gradshteyn and Ryzhik, art. 3.371)

holes per unit volume, where mh is the valence band effective mass and Ev is the top of
the valence band. We can define
    Ac = (2/h³) (2πme kB T )^{3/2}

    Av = (2/h³) (2πmh kB T )^{3/2}

as effective conduction and valence band densities of states. Unlike the actual density of
states these are temperature dependent, and may be thought of as the density of states
if all the carriers were located in energy at the band edges. Then simply,

    ne = Ac e^{−(Ec −µ)/kB T}

    nh = Av e^{(Ev −µ)/kB T}

and by multiplying,

    ne nh = Ac Av e^{−Eg /kB T}
where Eg = Ec − Ev is the band gap energy. Note that the product ne nh depends only
on temperature, effective masses and the band gap energy so must be the same for all
doping levels and dopant types (n-, or p-type) because it doesn’t depend on the Fermi
level. We conclude that the product ne nh is constant for a given semiconductor and a
given temperature independent of the dopant concentrations or the Fermi level. We have
seen that in an intrinsic semiconductor,

ne = nh = ni

hence
ne nh = n2i (3.2.2)
is true for all semiconductors, doped or not. If we know the donor and acceptor concen-
trations, ND and NA and assume they are all ionised, then charge neutrality requires

ne + NA = nh + ND (3.2.3)

and (3.2.2) and (3.2.3) are sufficient to compute the carrier concentration.
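As a worked example, substituting (3.2.2) into (3.2.3) gives a quadratic for ne. A sketch
for n-type Si, where the donor concentration is an assumed value and the intrinsic
concentration is the figure quoted earlier in this section:

```python
import math

# Solve the mass-action law n_e * n_h = n_i^2 (3.2.2) together with
# charge neutrality n_e + N_A = n_h + N_D (3.2.3). Substituting
# n_h = n_i^2 / n_e gives a quadratic whose positive root is taken.
n_i = 1.6e10     # intrinsic carrier concentration of Si, cm^-3 (from the text)
N_D = 1.0e15     # assumed donor concentration, cm^-3
N_A = 0.0        # assumed acceptor concentration, cm^-3

dN = N_D - N_A
n_e = 0.5 * (dN + math.sqrt(dN ** 2 + 4.0 * n_i ** 2))
n_h = n_i ** 2 / n_e
print(f"n_e = {n_e:.3e} cm^-3, n_h = {n_h:.3e} cm^-3")
```

Since N_D ≫ n_i here, n_e ≈ N_D, and the holes are suppressed to n_i²/N_D, some ten
orders of magnitude below the electron concentration.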

We might also take the ratio,

    nh /ne = (Av /Ac ) e^{(Ec +Ev −2µ)/kB T}

from which the chemical potential or “Fermi level” is

    µ = (1/2)(Ev + Ec ) − (1/2) kB T ln(nh /ne ) − (3/4) kB T ln(me /mh )

In an intrinsic semiconductor the “Fermi level” is mid gap to within a small correction due
to the ratio of the effective masses. If the dopants are all donors (n-type) then we have, if
they are all ionised,

    ne = ND
hence

    ND = Ac e^{−(Ec −µ)/kB T}

so that

    Ec − µ = kB T ln(Ac /ND ) ≈ 0

In Si at room temperature, ND = 10^19 cm−3 and Ac = 2.48 × 10^19 (m∗/m)^{3/2} cm−3 , so
that the “Fermi level” µ is pinned at the conduction band edge. Similarly in p-type
semiconductor

    µ − Ev = kB T ln(Av /NA ) ≈ 0
and the “Fermi level” is pinned just above the valence band edge.
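The size of Ac, and hence of Ec − µ, is easy to check numerically. A sketch assuming
m∗ = m (the free electron mass), which is roughly right for Si:

```python
import math

# Effective conduction band density of states
# A_c = (2/h^3)(2 pi m k_B T)^(3/2) at T = 300 K with m* = m assumed,
# then the Fermi level position E_c - mu = k_B T ln(A_c / N_D).
h = 6.626e-34        # Planck constant, J s
kB = 1.381e-23       # Boltzmann constant, J K^-1
m = 9.109e-31        # free electron mass, kg (m* = m assumed)
T = 300.0

A_c = (2.0 / h ** 3) * (2.0 * math.pi * m * kB * T) ** 1.5   # m^-3
A_c_cm3 = A_c * 1.0e-6                                       # convert to cm^-3

N_D = 1.0e19                   # donor concentration, cm^-3 (from the text)
kT_eV = kB * T / 1.602e-19     # k_B T in eV
gap = kT_eV * math.log(A_c_cm3 / N_D)
print(f"A_c = {A_c_cm3:.2e} cm^-3, E_c - mu = {gap:.3f} eV")
```

This reproduces Ac ≈ 2.5 × 10^19 cm−3 and puts the Fermi level only about kBT below
the conduction band edge, i.e., pinned there for all practical purposes.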

3.2.2 Conductivity of a semiconductor

Later, we will consider the velocity of an electron in the absence of scattering, and we call
this the ballistic velocity. Elastic collisions serve to randomise the direction of travel so
the average velocity in equilibrium is zero. If an electric field is applied then the electron
will acquire a small non zero average amount we will call the drift velocity. We assume
each collision completely randomises the velocity, but in between collisions the electron is
accelerated in the direction of the field. We are interested in separating out that part of
the motion, due to the field, that is superimposed upon the motion in zero field. Suppose
a collision occurs at t = 0, then after a further time t the electron will have travelled a
distance

    d = (1/2) a t²

in the direction of the field, where a is the acceleration. The average drift velocity is
obtained by taking the average value of d and dividing by the average value of the time t
between scattering events,

    vd = ⟨d⟩/⟨t⟩ = (1/2) a ⟨t²⟩/⟨t⟩
Later, in section 3.4.2, we will discuss how to calculate these averages under the assump-
tion of a “Poisson distribution” of random collisions. We will say that if a particle is
chosen at random at t = 0 then the probability density (probability per unit time) for the
time of the next collision event is

    P (t) = (1/τ ) e^{−t/τ}

where the time constant τ serves to make the exponent dimensionless. The factor 1/τ is
present to provide normalisation as you can easily show by integration,

    ∫_{0}^{∞} (1/τ ) e^{−t/τ} dt = 1
so the probability of a collision happening some time after t = 0 is one. Now, by further
integrations you can find that the average value of t is

    ⟨t⟩ = ∫_{0}^{∞} t P (t) dt = τ

and the average value of t² is

    ⟨t²⟩ = ∫_{0}^{∞} t² P (t) dt = 2τ²

And so we find that the drift velocity is

    vd = (1/2) a ⟨t²⟩/⟨t⟩ = aτ

If the magnitude of the electric field is E and q is the charge of the carrier (q = ±e) then
in classical mechanics, that is, using Newton’s second law,

    m∗ a = qE

and

    vd = qτ E/m∗

We call τ the “relaxation time;” 1/τ is the scattering rate or number of collisions per
second. τ is the mean time between collisions. We then write

    vd = µE

where µ = qτ /m∗ is the carrier mobility. The current density is related to the electric
field by Ohm’s law

    Je = σE

where σ is the conductivity. If there are nc carriers per unit volume, of charge q then

    Je = nc q vd = nc q µE

hence

    σ = nc qµ = nc q²τ /m∗                (3.2.4)

If both holes and electrons contribute to the current, then

    σ = e (ne µe + nh µh )

Here is a table of some semiconductor mobilities.

                         Si      Ge      GaAs
    µe (m² V−1 s−1 )     0.14    0.39    0.65
    µh (m² V−1 s−1 )     0.05    0.19    0.04
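These mobilities make σ = nc qµ easy to evaluate. A sketch for n-type Si, using the
electron mobility from the table and an assumed carrier density:

```python
# sigma = n q mu for electrons in Si, using mu_e from the table above.
e = 1.602e-19      # elementary charge, C
mu_e = 0.14        # electron mobility in Si, m^2 V^-1 s^-1 (table above)
n_e = 1.0e21       # assumed carrier density, m^-3 (i.e. 1e15 cm^-3)

sigma = n_e * e * mu_e
print(f"sigma = {sigma:.1f} S/m")   # about 22 S/m
```

For comparison, a metal has σ of order 10^7 S/m, which shows how much the small
carrier density limits the conductivity of a semiconductor.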

3.2.3 Comparison between a metal and a non degenerate semiconductor

Here I want to make it clear when and how we distinguish between a degenerate and a
non degenerate electron gas.

3.2.3.1 A metal

          valence    EF (eV)    n (cm−3 )
    Al    3          11.7       0.77 × 10^24
    Na    1          3.2        0.41 × 10^24

The effective mass approximation is not usually valid because we’re dealing with
electrons at the top of the band.

3.2.3.2 A non degenerate semiconductor

The effective mass approximation is very good because we are dealing with electrons
at the bottom of the band.
            ni at room temperature (cm−3 )
    Si      1.6 × 10^10
    Ge      2.5 × 10^13
    GaAs    1.2 × 10^7

3.3 Electron dynamics in metals

Equation (3.2.4) is equivalent to the Drude formula and we will see that it can also be
applied in metals in which the electron gas is degenerate and so only a tiny fraction of the
total number of electrons contribute to the conductivity. Nevertheless, quite surprisingly,
the total electron density n will still appear as in (3.2.4). But we must proceed with
proper caution.

3.3.1 Wavepackets

Firstly, an electron in a Bloch state is spread out over the whole crystal. A simple way
to localise it is to form a wavepacket. We do it like this. We have already established in
equation (2.3.1) that the Bloch state is a solution of the time independent Schrödinger
equation in the crystal potential,

ψk (r) = eik·r uk (r) (2.3.1)

By the rule of superposition of solutions of a linear differential equation, any linear com-
bination of these is also a solution and we take a sum over all k modulated by a Gaussian
function to build a collection of waves centred about a wavevector k0 . This is just how a
collection of water waves is assembled when a stone is dropped into a still pond, allowing
a single wavepacket to propagate away from the centre. Considering just a single band,
we construct a combination of Bloch states,

    ψ(r) = Σk uk(r) e^{ik·r} e^{−α(k−k0)²}

         = e^{ik0·r} Σk uk(r) e^{−α(k−k0)²} e^{i(k−k0)·r}                (3.3.1)

and to see how it evolves with time we multiply each term by the usual phase factor of
the stationary solution of the time dependent Schrödinger equation,

    e^{−iE(k)t/h̄}

Now we want the energy of the wavepacket in terms of the eigenvalue at k0 . So we
make a Taylor expansion of E(k) about k0 ,

    E(k) = E(k0) + (k − k0) · (dE/dk)|k=k0

Here dE/dk is a shorthand for the vector

    ∇k E = x̂ ∂E/∂kx + ŷ ∂E/∂ky + ẑ ∂E/∂kz

with x̂, ŷ and ẑ the cartesian unit vectors. We now have


    ψ(r, t) = e^{ik0·r} Σk uk(r) e^{i(k−k0)·r} e^{−α(k−k0)²} e^{−i[E(k0) + (k−k0)·∇kE] t/h̄}

            = e^{i[k0·r − E(k0)t/h̄]} Σk uk(r) e^{−α(k−k0)²} e^{i(k−k0)·[r − h̄⁻¹(∇kE)t]}        (3.3.2)

Compare this carefully with (3.3.1) which is the wavepacket at t = 0. They have similar
form; the main difference is, in (3.3.1) we have the term

    e^{i(k−k0)·r}

while at time t we now have the term

    e^{i(k−k0)·[r − h̄⁻¹(∇kE)t]}

Clearly the centre of the wavepacket has shifted in a time t from r = 0, say, to r =
h̄⁻¹(∇k E)t. Therefore the group velocity of the wavepacket is

    v(k0) = (1/h̄) ∇k E(k0) = (1/h̄) (dE/dk)|k=k0                (3.3.3)

We can claim that this is the velocity of the electron in the state k0 in the crystal under
the assumptions that
1. The Taylor expansion is sufficient.
2. The Bloch functions uk (r) do not change significantly as the wavepacket moves.

This figure, taken from Ibach and Lüth, shows a graph of a wavepacket evolving in time.
You note by comparing the factors preceding the summation signs in (3.3.1) and (3.3.2)
that the shape of the wavepacket does not remain constant in time. Indeed it spreads out.
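The group velocity formula (3.3.3) is simple to check numerically for a model band. A
sketch using a one dimensional tight binding dispersion E(k) = −2t cos ka, which is an
illustrative choice, not a band from the text:

```python
import math

hbar = 1.0        # reduced units for this illustration
t, a = 1.0, 1.0   # hopping energy and lattice constant (arbitrary values)

def E(k):
    # model one-dimensional band
    return -2.0 * t * math.cos(k * a)

def v_group(k, dk=1.0e-6):
    # v = (1/hbar) dE/dk, estimated by a central finite difference
    return (E(k + dk) - E(k - dk)) / (2.0 * dk * hbar)

# compare with the analytic derivative v = (2 t a / hbar) sin(ka)
k = math.pi / 3.0
print(v_group(k), 2.0 * t * a * math.sin(k * a) / hbar)
```

The two numbers agree to the accuracy of the finite difference; note that v vanishes at
the zone boundary k = π/a, where the band is flat.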

Recalling Hamiltonian mechanics, section 0, velocity is

    q̇ = ∂H/∂p

Since E(k) is the expectation value of the hamiltonian H and p = h̄k, equation (3.3.3)
is consistent with classical mechanics. This is an example of the Ehrenfest theorem of
quantum mechanics which states that the expectation values of observables in quantum
mechanics obey the same equations of motion, that is follow the same trajectories, as the
corresponding observables in classical mechanics.

3.3.2 Velocity of free electrons

According to equation (2.2.7), for free electrons whose dispersion is

    E(k) = h̄²k²/2m

the group velocity is

    v = h̄k/m

(By the way, the phase velocity is h̄k/2m—half the group velocity). At the Fermi surface,
the velocity is

    vF = h̄kF /m

called the Fermi velocity. As we remarked earlier, even at 0◦ K electrons at the Fermi
surface are travelling at speeds of order one hundredth of the speed of light in vacuo.

If, on the other hand, we treated the electrons as classical particles, then according to the
theorem of equipartition the kinetic energy would be

    (3/2) kB T = (1/2) m vT²                (3.3.4)

hence the “thermal velocity” is

    vT = √(3kB T /m) ≈ 10^5 ms−1 at 300◦ K

In a simple metal like Al, we would have

    vF = h̄kF /m = (h̄/m) √(2mEF )/h̄ = √(2EF /m) = 2 × 10^6 ms−1 at 0◦ K
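The last number is easy to reproduce. A quick check using the Fermi energy of Al from
the metals table in section 3.2.3:

```python
import math

# v_F = sqrt(2 E_F / m) for Al, with E_F = 11.7 eV.
m = 9.109e-31                 # electron mass, kg
E_F = 11.7 * 1.602e-19        # Fermi energy, J
v_F = math.sqrt(2.0 * E_F / m)
print(f"v_F = {v_F:.2e} m/s")   # about 2e6 m/s
```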

which is equivalent to a kinetic energy

    EF = (1/2) m vF²                (3.3.5)

In a semiconductor, we think of the conduction band as a free electron band occupied up
to ∼ kB T , that is, non degenerate, so the carrier velocity is (noting that (1/2)mv² = (3/2)kB T )

    (h̄/m∗) (3π² ne )^{1/3} ≈ √(3kB T /m∗)

so the thermal and quantum velocities are about the same.

Of course an electron doesn’t travel very far at about the Fermi velocity before a collision;
so equally important is its drift velocity.

3.3.3 Equation of motion of a wavepacket

Applying an electric field E to a metal, we expect the wavevector k0 of the wavepacket
to change with time although we will exclude the possibility that it makes a transition to
a higher energy band. An electron will move with velocity v under the influence of the
field which, using equation (3.3.3), does work on the electron of an amount

    −eE · v = −(e/h̄) E · ∇k E(k)

per unit time, which we may equate to the rate of change of electron energy,

    dE/dt = (dk/dt) · ∇k E(k)        (i.e., by the chain rule)

We therefore have

    −(e/h̄) E · ∇k E = (dk/dt) · ∇k E                (3.3.6)
h̄ dt
which has the form
v1 · a = v2 · a
and one is tempted to conclude that by “cancelling” the vectors a on both sides,

    v1 = v2

but this may be wrong because

    v1 = v2 + b

is also a solution as long as b · a = 0. But don’t worry, we will say that (3.3.6) is
consistent with the equation of motion

    dk/dt = −(e/h̄) E                (3.3.7)

and we appeal to the Ehrenfest theorem in that we identify

    h̄ dk/dt = −eE

(rate of change of momentum = force) which is Newton’s second law with momentum
identified as h̄k. In this way we are making a semi classical approach to transport here.

Now for some fun. According to equation (3.3.7) when you apply an electric field the
electron’s wavevector changes, moving parallel to the field. The velocity is not necessarily
parallel to the field. In fact equation (3.3.3) is

    v = (1/h̄) ∇k E                (3.3.8)

the velocity changes in a direction that follows the steepest slope of the dispersion relation,
the energy bands. Remember that k is the wavevector in reciprocal space but it’s also
the propagation direction in real space. This diagram serves to illustrate the point.

In the very simplest case, say a band along the kx -direction and a field applied in the
x-direction, we have,

an electron initially having k = 0 has its wavevector increased by the field along x and as
it moves the slope of the dispersion relation increases and its velocity increases as

    vx = (1/h̄) dE/dk
after the point of maximum slope it slows down until at X its velocity is zero. The
electron reappears at the left and moves back towards k = 0 along a band of negative
slope, hence it reverses its velocity. When the wavevector goes back to k = 0 the electron
has returned to where it started! You may be able to see this better in the extended zone
scheme.

This is really odd. It would seem that by applying a DC field, you can get an AC current.
However, look at this more closely.
1. What is the period of this so called Bloch oscillation? The rate of change of k is

       h̄ dk/dt = −eE

so by integration

       k − k0 = −(e/h̄) E t

where k0 is the integration constant or wavevector at t = 0. In a single cycle, k
changes by 2π/a so the period T must be

       T = (2π/a)(h̄/eE)

The field must be small so that the energy bands aren’t tilted so much that the
electron tunnels across the bands, and so we must have a potential difference applied
much less than the band width. Suppose the potential difference is one volt applied
across one centimeter of metal.

       E = 1 V cm−1 = 100 V m−1

       T = (2π/(0.4 × 10^−9)) × (1.05 × 10^−34)/(1.6 × 10^−19 × 100) ≈ 10^−7 s

(I’ve used the lattice constant of Al.) The Fermi velocity is 2 × 10^6 m s−1 so the
electron travels some tens of centimetres before being Bragg reflected and reversing
its direction. It’s bound to be scattered before then. The best hope of observing the
phenomenon is within a sub band in a semiconductor conduction band in a two
dimensional electron gas (see Elliott, p. 717).
2. This is a treatment of a single electron in an otherwise unoccupied band. If there
are many electrons (i.e., the Fermi level falls somewhere in the band) the whole
sea of electrons is sloshing about collectively. If the band is full there will be as
many electrons going one way as the other. Hence a metal must possess a partially
occupied band. Let’s put this formally. The current density is −e times the number
of electrons per unit volume times the velocity of each electron (see section 3.2.2).
In unit volume the density of allowed k values is 1/(2π)3 . Each band contributes the
following amount to the current density,
    J = (2/(2π)³) ∫BZ f (k)(−ev) dk                (3.3.9)

The factor two is for spin degeneracy; f (k) is the Fermi–Dirac function expressed as
a function of k rather than E,

f (k) = f (E(k))

In a full band, even under an applied electric field, there can be no current. For every
state k having velocity v(k) there is a state −k having velocity

v(−k) = −v(k)

by the symmetry of the energy bands

E(−k) = E(k)

3. The one dimensional case is unrealistic. As we have seen the electrons follow the
coupled equations (3.3.7) and (3.3.8)

       dk/dt = −(e/h̄) E

       v = (1/h̄) ∇k E

so the electron doesn’t travel in general in the direction of the field. Do note that
the above two equations are probably the most important of section 3. Be sure you
understand them.
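The Bloch oscillation estimate in point 1 above is a one-line calculation:

```python
import math

# T = (2 pi / a)(hbar / e E) with a = 0.4 nm and E = 100 V/m as in the text.
hbar = 1.055e-34     # J s
e = 1.602e-19        # C
a = 0.4e-9           # lattice constant of Al, m
E_field = 100.0      # V/m

T = (2.0 * math.pi / a) * hbar / (e * E_field)
d = 2.0e6 * T        # distance travelled at the Fermi velocity, m
print(f"T = {T:.1e} s, distance = {d:.2f} m")
```

The electron would have to survive of order 10^−7 s and travel tens of centimetres
without a collision, which is why the oscillation is never seen in an ordinary metal.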

Once we’ve described the scattering time in metals we can come back and put this all
together into a formula for the electrical conductivity. But let us now pursue the semi
classical approach a bit further to obtain another statement of the effective mass. Recall
that in section 3.2 we wrote

    (1/m∗)ij = (1/h̄²) d²E/dki dkj        (i, j = x, y, z)

where k = (k1 , k2 , k3 ). Furthermore in section 3.3.3, we made the following balance of
energy

    dE/dt = −(e/h̄) E · ∇k E(k)
Now the acceleration of a wavepacket, by differentiating (3.3.8) with respect to t, is

    dv/dt = (1/h̄) (d/dt) ∇k E(k) = (1/h̄) ∇k (dE(k)/dt) = −(e/h̄²) ∇k [E · ∇k E]

If I write the components of the vector v in full, I get, for constant uniform electric field
E,

    v = (vx , vy , vz ) = (v1 , v2 , v3 )

    dv1 /dt = −(e/h̄²) [ (∂²E/∂k1²) E1 + (∂²E/∂k1 ∂k2 ) E2 + (∂²E/∂k1 ∂k3 ) E3 ]

    dv2 /dt = −(e/h̄²) [ (∂²E/∂k2 ∂k1 ) E1 + (∂²E/∂k2²) E2 + (∂²E/∂k2 ∂k3 ) E3 ]

    dv3 /dt = −(e/h̄²) [ (∂²E/∂k3 ∂k1 ) E1 + (∂²E/∂k3 ∂k2 ) E2 + (∂²E/∂k3²) E3 ]

which can be assembled into a single formula,

    dvi /dt = −(e/h̄²) Σj (∂²E/∂ki ∂kj ) Ej        (i, j = 1, 2, 3)

This compares with the classical equation for the acceleration of a point charge −e,

    dvi /dt = −(e/m) Ei

in which the velocity is parallel to the electric field. In the case of the solid state
wavepacket, the situation is more general—a component of the electric field Ej causes
an acceleration in the i-direction dvi /dt,

    dvi /dt = −e Σj (1/m∗)ij Ej

so (1/m∗)ij is a tensor with components

    (1/h̄²) ∂²E/∂ki ∂kj

The direction of acceleration, as we’ve said before, is not necessarily in the direction
of the field—it depends on the wavevector of the wavepacket and on the details of the
energy band structure. Even under a uniform electric field the wavepacket keeps changing
direction as its wavevector changes; see p 96.

And this is still in the absence of scattering. . .

3.4 Scattering using the “kinetic method”—the relaxation time approximation

Ultimately we want to understand transport coefficients, especially the thermal and elec-
trical conductivity. We can’t follow an electron or phonon through all the many scattering
events as it drifts under the influence of a temperature gradient or electric field (or both).
One approach is to borrow concepts such as mean free path and collision rate from the
kinetic theory of gases.

3.4.1 Relaxation time

Random events are often described using the Poisson distribution of elementary statistics
and probability. A particle suffers collisions on average at a rate of 1/τ so τ is the mean
time between collisions. So what is the probability that it survives a time t without a
collision? If the rate is 1/τ then the probability of suffering a collision in a time interval
dt is

    dt/τ

So the probability of surviving in this interval is

    Ps (dt) = 1 − dt/τ

Then the probability of surviving a total time of t + dt is

    Ps (t + dt) = Ps (t) × Ps (dt)

assuming the two probabilities are independent. Then

    Ps (t + dt) = Ps (t) (1 − dt/τ )

or, rearranging,

    [Ps (t + dt) − Ps (t)] / dt = −(1/τ ) Ps (t)

which means

    dPs /dt = −(1/τ ) Ps

The probability of surviving thus decays in time as minus the scattering rate times the
probability of surviving thus far. The solution for Ps (t) is

    Ps (t) = e^{−t/τ}                (3.4.1)

as you can check by differentiation. The normalised probability density as indeed we saw
in section 3.2.2 is

    P (t) = (1/τ ) Ps (t)
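The exponential survival law and its moments can be checked by simulation. A minimal
Monte Carlo sketch using only Python’s standard library:

```python
import random

# Draw waiting times from P(t) = (1/tau) exp(-t/tau) and verify the
# averages used in section 3.2.2: <t> = tau and <t^2> = 2 tau^2.
random.seed(1)
tau = 2.0
N = 200_000
samples = [random.expovariate(1.0 / tau) for _ in range(N)]

mean_t = sum(samples) / N
mean_t2 = sum(t * t for t in samples) / N
print(mean_t, mean_t2)   # close to tau = 2 and 2 tau^2 = 8
```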

3.4.2 The kinetic method

So we have that the collisions of the electrons in a metal follow a Poisson distribution
having the properties that the probability density for collisions is

    P (t) = (1/τ ) Ps (t) = (1/τ ) e^{−t/τ}                (3.4.2)

where Ps (t) is the probability that a particle chosen at random at time t = 0 will first
scatter after a time t. We also have that

    dPs /dt = −(1/τ ) Ps

Now we focus upon an electron having ballistic velocity v, in a uniform electric field E.
The electron takes energy from the field of an amount

    δE = −ev · E t                (3.4.3)

in a time t. Be careful of the sign. If v is in the opposite direction to the field then
v · E < 0 and the electron speeds up because it’s negatively charged. Since by convention
e > 0, in this case δE is positive. (If v is parallel to E then the electron slows down so it
gives up some kinetic energy to the field and then δE < 0.) This extra energy causes the
electron to drift; it acquires a small increment to its otherwise random direction of flight.
We write this increment in a formal sort of a way as

    δv ≡ δvd = (δv/δE) δE

How much energy can the electron gather from the field before it scatters? It’s a matter
of summing up over increments of time dt, the increment of energy times the (positive)
increment in survival probability, −dPs . You have seen in section 3.2.2 that the time, t,
averaged over P (t) is

    ⟨t⟩ = ∫_{0}^{∞} P (t) t dt = τ

which you can do by partial integration using (3.4.2). v and E are assumed independent
of t, so we can write

    ⟨δE⟩ = ∫_{0}^{∞} δE (−dPs /dt) dt

         = −ev · E ∫_{0}^{∞} (1/τ ) e^{−t/τ} t dt

         = −ev · E ∫_{0}^{∞} P (t) t dt

         = −ev · E τ

This is just a very long winded way of taking the average of δE in (3.4.3). The drift
velocity is obtained, again rather formally, from

    vd = ⟨δE⟩/(δE/δv) = −eτ (v · E) (dv/dE)
dE
You may want to brood over this for a while. The drift velocity of the electron is its
charge, times its relaxation time, times the scalar product of its ballistic velocity with the
field, times the rate of change of v with energy. This latter concerns the way in which
you change the velocity as you change its energy. You’ve seen in section 3.3.3 how the
velocity changes in a complex way with the eigenvalue through

    v = (1/h̄) dE/dk                (3.3.8)
so by changing the electron’s energy, you change both its speed and its direction. (See
p 96.) So the complex nature of the energy bands is accounted for in this formula for the
drift velocity.

The electrical current density is of opposite sign to the drift velocity because by convention
current is the flow of positive charge.

    J = −ne e vd = ne e²τ (v · E) (dv/dE)

In the case of isotropic bands,

    J = ne e²τ v (dv/dE) E = σE

the last equality being Ohm’s Law. We can relate the energy to velocity in the free electron
gas, since we have from (3.3.8)

    v = h̄k/m

So

    E ≡ E(k) = h̄²k²/2m = (1/2) mv²

This is just the same as the classical formula for kinetic energy, but m here may be the
effective mass. From this,

    dv/dE = 1/mv

and we end up with the Drude formula for the conductivity,

    σ = ne e²τ /m                (3.4.4)

3.4.3 Mean free path

From equation (3.4.4) it looks as though the conductivity doesn’t depend on the ballistic
velocity v, which is fast—some electrons are travelling at speeds near the Fermi velocity.
However it does, because it is the density of scatterers that is most relevant and the
relaxation time then depends on v. In fact the average distance between scatterers, or
rather the average distance an electron or phonon travels between collisions, is called the
mean free path λ defined such that

    λ = τv

so that the conductivity in terms of mean free path is

    σ = ne e²λ/mv                (3.4.5)
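Equations (3.4.4) and (3.4.5) can be inverted to estimate τ and λ from a measured
conductivity. A sketch for copper, using standard handbook values that are assumptions
here, not figures from the text:

```python
# tau = sigma m / (n e^2) from the Drude formula, then lambda = v_F tau.
e = 1.602e-19       # elementary charge, C
m = 9.109e-31       # electron mass, kg
n = 8.5e28          # conduction electron density of Cu, m^-3 (assumed)
sigma = 5.9e7       # room-temperature conductivity of Cu, S/m (assumed)
v_F = 1.6e6         # Fermi velocity of Cu, m/s (assumed)

tau = sigma * m / (n * e ** 2)
lam = v_F * tau
print(f"tau = {tau:.1e} s, lambda = {lam:.1e} m")
```

This gives τ of a few times 10^−14 s and a mean free path of tens of nanometres, i.e.,
some hundred lattice spacings between collisions.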

3.4.4 Thermal conductivity

Now let’s use the kinetic method to look at thermal conductivity in metals due to electrons.
We consider a long bar of metal, hot at one end and cold at the other, in the case where
the temperature gradient ∇T is uniform. In one dimension, if the x-direction is chosen
to be along the length of the bar then the general formula for the heat flux

    JQ = −κ∇T

becomes

    JQ = −κ dT /dx

and κ is called the thermal conductivity (J s−1 m−1 K−1 ).

We assume that at any point in the bar where the temperature is T (x) the electrons
possess internal energy U (T ) which is the internal energy they would possess in a bar of
the same metal at uniform temperature T . Now these electrons are travelling in random
directions in between collisions at about the Fermi velocity v and their relaxation time
is τ . Now consider a point at position x along the bar. Electrons arrive from the left at
velocity vx = v having left their last collision at a point x − dx = x − λx where λx is the
mean free path along x. They carry energy to the point x and their energy is the energy
of electrons in the metal at the temperature of the bar at the point x − λx . Electrons also
arrive from the right having left their last collision at x + λx = x + dx and they bring
energy to the point x associated with the temperature in the bar at x + dx.

Electrons arrive from the left carrying energy

dU
U− dx
dx

(assuming dx = λx is small enough to take just the linear term in the Taylor expansion of
U (x)). Since v doesn’t depend on temperature (it’s the Fermi velocity, not the thermal
velocity—see section 3.3.2) half of the total number of electrons arrive from the left
bringing a flux of heat equal to
 
(1/2) ne vx [U − (dU/dx) dx]

where ne is the number of electrons per unit volume. The flux from the right is
 
(1/2) ne vx [U + (dU/dx) dx]

Therefore the total flux from left to right is the difference of these two,

JQ = −ne vx (dU/dx) dx

but dx = λx = vx τ and using the chain rule of differentiation

dU/dx = (dU/dT)(dT/dx)

we get
 
JQ = −ne vx² τ (dU/dT)(dT/dx)
   = κ (−dT/dx)

Now, ne dU/dT is the electron contribution to the heat capacity per unit volume, ce . Also,
to generalise to three dimensions we merely have to use

vx² = (1/3) v²    (3.4.6)

(that is, v² = vx² + vy² + vz² = 3vx²) so we get for the thermal conductivity

κ = (1/3) v² ce τ
  = (1/3) v λ ce

a nice simple formula like (3.4.5).

We could do exactly the same exercise in the case that the carriers of thermal energy
were phonons. The result we would get is

κph = (1/3) vph λph cph    (3.4.7)

Suppose that the mean free paths are the same for phonons as for electrons; which are the
dominant heat carriers in metals? You know that the heat capacity Cph due to phonons is
some hundred times larger than that due to electrons Ce (see section 2.2), but if you work
it out you’ll find the phonon velocity is typically 1000 smaller than the Fermi velocity.
Hence, in metals, electrons are the dominant carriers of heat. In insulators electrons are
largely immobile and phonon thermal conductivity dominates. Diamond has an especially
high thermal conductivity. Here are some selected values of κ at room temperature.

κ (Wm−1 K−1 )
Copper 397
Aluminium 240
Titanium 22
Iron 78
Silicon 150
Diamond 1000–5000†
Alumina 26
Glass 1
† The higher values are for isotopically pure diamond.
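Putting the ratios quoted above together with equation (3.4.7) gives a quick comparison of phonon and electron heat transport in a metal. A minimal sketch, assuming equal mean free paths as in the text; the factors of 100 and 1/1000 are the order-of-magnitude estimates quoted above, not precise values.

```python
# Order-of-magnitude ratio kappa_ph/kappa_e = (c_ph/c_e)(v_ph/v_e)(lam_ph/lam_e),
# using the rough factors quoted in the text for a metal.
c_ratio = 100.0    # phonon heat capacity ~100x the electron heat capacity
v_ratio = 1.0e-3   # sound velocity ~1000x smaller than the Fermi velocity
lam_ratio = 1.0    # assume equal mean free paths

kappa_ratio = c_ratio * v_ratio * lam_ratio
print(f"kappa_ph / kappa_e ~ {kappa_ratio}")  # ~0.1: electrons dominate
```

So even with the much larger phonon heat capacity, the phonon contribution is roughly ten times smaller, which is why electrons carry the heat in metals.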

3.4.5 The role of Umklapp processes in phonon thermal conductivity

If thermal conductivity is phonon dominated (e.g., in insulators) then thermal energy is
dissipated by the transfer of momentum. There needs to be a mechanism that allows
the total phonon wavevector to change so that a region at high temperature can come to
equilibrium at a lower temperature. Suppose at some point in the solid there is a total
momentum flux,
Jph = Σk Σj h̄ k nj(k)

where j labels the phonon bands, acoustic and optic, and nj (k) is the Bose–Einstein
distribution for wavevector k and band j. If all the collisions are such that

k + k′ = k″

then the total momentum cannot change with time because the sum of the wavevectors
k + k′ − k″ = 0 is unchanging in time. Once such a flux as Jph has been established,
even in a vanishingly small temperature gradient, the flow will continue for ever. The
thermal conductivity will be effectively infinite in the absence of other scattering (e.g.,
from defects).†

To allow phonons to come to equilibrium requires Umklapp processes in which

k + k′ = k″ + g

with g ≠ 0.

† The same argument can be applied to an infinitely long tube containing gas. But as long as the tube is
finite then equilibrium can be established by gas molecules reversing their directions when they bounce
off the closed ends of the tube.
PHY3012 Solid State Physics, Section 3.4.5 Page 107

Two phonons carrying momentum to the right disappear and produce a phonon carrying
momentum to the left.

At temperatures well below the Debye temperature Umklapp processes are exponentially
unlikely. This is called “freezing out of U-processes.” Hence you can believe that the
relaxation time increases like
τ ∼ exp(θD/T)
and the conductivity increases like

κ ∼ exp(γθD/T)

where γ is a factor less than one. This is seen in the thermal conductivity of lithium
fluoride crystals. From 100 K down to about 10 K the conductivity increases exponentially.
It doesn't increase further because the mean free path becomes as big as the crystal and
phonons scatter at the surfaces. Hence at T < 10 K the conductivity decreases again but
now depends on the size of the crystal, and increases like T³ as does the heat capacity.

At temperatures above θD we have


nj(k) −→ kB T/(h̄ ωj(k))

as we have seen earlier in section 1.6.2, equation (1.6.5). So the phonon density increases
linearly with temperature. The conductivity is then seen to decrease with temperature
according to
κ ∼ 1/T^α
where α is between one and two. The following figure shows these features.
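The high-temperature limit quoted above from equation (1.6.5) is easy to check numerically: with x = h̄ω/kB T small, the Bose–Einstein occupation 1/(e^x − 1) approaches kB T/h̄ω = 1/x. A minimal sketch; the value of x is an arbitrary illustrative choice.

```python
import math

# Check the high-T limit of the Bose-Einstein distribution, n = 1/(exp(x)-1) -> 1/x,
# for x = hbar*omega/(kB*T) << 1, i.e. temperatures well above the Debye temperature.
x = 0.01                        # illustrative small value of hbar*omega/(kB*T)
n_exact = 1.0 / math.expm1(x)   # expm1 is accurate for small x
n_limit = 1.0 / x

print(n_exact, n_limit)  # agree to better than 1 per cent
```

This is the sense in which the phonon density grows linearly with temperature above θD.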

3.5 Boltzmann formulation of electrical conductivity in metals

We have seen that, in the absence of scattering, all electrons under a small constant
applied electric field E in a metal change their wavevectors according to equation (3.3.7)
dk/dt = −(1/h̄) eE    (3.3.7)
In one dimension, the electron wavevectors move like a train over some hills.

In three dimensions, the whole “Fermi sea” (all electrons enclosed within the Fermi sur-
face) is displaced such that after a time δt, according to equation (3.3.7) all wavevectors
are shifted by an amount
δk = −(1/h̄) e E δt

This doesn’t continue for ever because the electrons are scattered. Scattering reverses the
change in k by acting to return the distribution to its equilibrium. In the steady state
the rate of change of k due to scattering is equal and opposite to that due to the field. To
estimate the rate due to scattering, we suppose that the rate of approach to equilibrium
is proportional to the deviation from equilibrium,

dk/dt = −δk/τ    (3.5.1)
the proportionality constant being the scattering rate 1/τ . This is the same argument
that we used in section 3.4.1 leading to equation (3.4.1), and equating the two rates (3.3.7)
and (3.5.1) we get
δk = −(1/h̄) e τ E

which represents a rigid displacement of the Fermi surface like this.

Let us suppose we have established a steady state using an applied electric field so that
the electron probability distribution function is not the Fermi–Dirac function fFD but
some stationary, non equilibrium distribution fstat . If I now switch off the electric field

then the distribution will relax towards fFD at a rate proportional to its deviation from
equilibrium,
∂f(r, k, t)/∂t = −(f(r, k, t) − fFD)/τ    (3.5.2)
where in general the occupation probability depends on k and on position and on time.
The solution to (3.5.2) is
f − fFD = (fstat − fFD) e^(−t/τ)    (3.5.3)
which you should compare with equation (3.4.1).

The Boltzmann equation is a general formula for f (r, k, t). Recall how we did thermal
conductivity in section 3.4.4. At time t the occupancy of a wavepacket at r of wavevector
k is the same as it was at r − vδt when its wavevector was
k − (dk/dt) δt
a short time ago. The thinking behind the Boltzmann equation, which we will now derive,
is this. An electron wavepacket of wavevector k is either occupied or unoccupied, and
it remains either occupied or unoccupied as it propagates under the influence of applied
fields (electric, magnetic or gradients of temperature) unless it becomes scattered into a
state of a different k. First we find the formula for the occupation probability as a function
of time in the absence of scattering; then we add a term to account for scattering, and
we try to solve the equation in the relaxation time approximation. We will do this slowly
and carefully, starting with some background mathematics of Taylor series.

3.5.1 Derivation of the Boltzmann equation

If f (x) is a function of x, then to first order its Taylor expansion is

f(x + h) = f(x) + h df/dx
and it’s understood that the derivative is evaluated at x, not x + h or anywhere else. This
rule holds throughout what follows. If h = dx is infinitesimal then

f(x + dx) = f(x) + (df/dx) dx
If f (x, k) depends on two variables x and k then

f(x + dx, k + dk) = f(x, k) + (∂f/∂x) dx + (∂f/∂k) dk    (3.5.4)
where now these are partial derivatives: other variables being held constant. If I make x
and k vectors, r and k, then a function f (r) now depends on three variables, the three
components of r = xx̂ + yŷ + zẑ. So instead of

∂f/∂x

I need to put
∂f/∂r
which is a vector having three components. It’s given the symbol

∇f = (∂f/∂x) x̂ + (∂f/∂y) ŷ + (∂f/∂z) ẑ

To make it clear that the partial derivatives are taken with respect to the components of
r we will write ∇ as ∇r. Then when we also need to take partial derivatives with respect
to the components of k = kx x̂ + ky ŷ + kz ẑ we can use the symbol ∇k.

We now use f (r, k, t) to represent the probability distribution function that we have
already described at equation (3.5.2). It depends on r, k and time, t; hence seven variables.
But I can still do a Taylor expansion, as in (3.5.4), but now I’ll get seven partial derivative
terms,

f(r + dr, k + dk, t + dt) = f(r, k, t) + ∇r f · dr + ∇k f · dk + (∂f/∂t) dt    (3.5.5)
and here ∇r and ∇k each gather up three terms into one using the scalar product. In
this way, since dr = dxx̂ + dyŷ + dzẑ,

∇r f · dr = (∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂z) dz

and similarly for the components of k in the ∇k terms:

∇k f · dk = (∂f/∂kx) dkx + (∂f/∂ky) dky + (∂f/∂kz) dkz

We have
ṙ = dr/dt = v, the velocity, and k̇ = dk/dt
so putting dr = ṙ dt and dk = k̇ dt into (3.5.5), changing the sign of dt so that we are
looking backwards in time, and rearranging, I get

f(r, k, t) − f(r − dr, k − dk, t − dt) = (∂f/∂t) dt + ∇r f · ṙ dt + ∇k f · k̇ dt    (3.5.6)
Now we argue that this is zero in the absence of scattering because the probability of
occupancy does not change if the particle arrives at r having wavevector k at time t,
unscattered since a time t − dt in the past when it was at r − dr having wavevector
k − dk. (Its position and wavevector are changing, of course, under the action of our
applied fields.) To include the possibility that the particle was indeed scattered in the
interval dt, we equate the difference at the left hand side of (3.5.6) not to zero, but to
 
(∂f/∂t)coll. dt

which is the rate of change of the occupation probability due to collisions, multiplied by
the time interval, dt, and so equating this to the right hand side of (3.5.6) we get the
generalised Boltzmann transport equation,

 
(∂f/∂t)coll. = ∂f/∂t + ∇r f · ṙ + ∇k f · k̇    (3.5.7)

which in fact applies to any particle, including phonons, if the appropriate distribution
function (Fermi–Dirac, Bose–Einstein or Maxwell–Boltzmann) is used for f .
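The collisionless statement behind (3.5.6) can be checked numerically in one dimension: if a wavepacket's occupancy is simply carried along unchanged, so that f(x, k, t) = g(x − vt, k − k̇t), then ∂f/∂t + v ∂f/∂x + k̇ ∂f/∂k = 0. A minimal sketch; the Gaussian profile and the values of v and k̇ are arbitrary illustrative choices.

```python
import math

# Verify the 1D collisionless Boltzmann equation by finite differences:
# for f(x, k, t) = g(x - v t, k - kdot t) the combination
# df/dt + v df/dx + kdot df/dk vanishes identically.
v, kdot = 1.0, -0.5   # arbitrary velocity and rate of change of wavevector

def f(x, k, t):
    return math.exp(-(x - v * t)**2 - (k - kdot * t)**2)

x0, k0, t0, h = 0.3, -0.2, 0.1, 1e-5
dfdt = (f(x0, k0, t0 + h) - f(x0, k0, t0 - h)) / (2 * h)
dfdx = (f(x0 + h, k0, t0) - f(x0 - h, k0, t0)) / (2 * h)
dfdk = (f(x0, k0 + h, t0) - f(x0, k0 - h, t0)) / (2 * h)

residual = dfdt + v * dfdx + kdot * dfdk
print(f"residual = {residual:.2e}")  # zero to finite-difference accuracy
```

Any non-zero right hand side must therefore come from the collision term, which is exactly how (3.5.7) is organised.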

If we use our formulas (3.3.7) and (3.3.8) for v and k̇ for electron wavepackets, then we
get
(∂f/∂t)coll. = ∂f/∂t + ∇r f · (1/h̄)∇k E + ∇k f · (−(e/h̄) E)    (3.5.8)

while we omit the effects of magnetic fields for which we would require the Lorentz force.
Since  
(∂f(r, k, t)/∂t)coll.

represents scattering transitions between states it must be related to wk→k′ in equation (3.1.1). In fact,
(∂f(r, k, t)/∂t)coll. = (1/(2π)³) ∫BZ {f(r, k′, t) wk′→k [1 − f(r, k, t)]
                                     − f(r, k, t) wk→k′ [1 − f(r, k′, t)]} dk′
                      = (1/(2π)³) ∫BZ wk→k′ [f(r, k′, t) − f(r, k, t)] dk′    (3.5.9)

since wk→k′ = wk′→k (see equation (3.1.2)). We read this equation as follows. In the
first line we have all the possible scattering events into the state k; that is, the transition
probability wk′→k times the probability that the initial state k′ is occupied times the
probability that the final state k is unoccupied. This is integrated over all possible initial
states k′. In the second line we have all the possible transitions out of state k, with a minus
sign, and the sum of these two gives the total rate of change of the occupancy of state k
due to scattering. The purpose of writing this down is not to frighten you, but to close
the loop that I started with equation (3.1.1) at the beginning of section 3. It emphasises
that the rate of change of f due to collisions is not a completely unknown quantity; it
can at least in principle be calculated using perturbation methods in quantum mechanics.
Because of equation (3.1.2) it simplifies nicely and comes down to knowing the function
f (r, k, t) which is the subject of the generalised equation (3.5.7) and the probability
densities wk→k′ for all possible scattering events. It is these latter quantities that must
be calculated using the advanced, many particle quantum mechanics of electron–phonon
and electron–electron interactions.

3.5.2 Solution of the linearised Boltzmann equation

We’d like to solve the Boltzmann equation for f , because then we’d know the positions
and wavevectors of all the particles as a function of time—or at least the occupation
probabilities, which is all you ever get in quantum mechanics. In the steady state f is
unchanging in time, i.e.,
∂f/∂t = 0
and we treat a spatially homogeneous system, which means there are no variations in
chemical composition, or different materials separated by interfaces or junctions, in which
case ∇r f = 0; indeed nothing depends on r.† If we are interested in electrons in a metal
in a uniform electric field, E , then we have
k̇ = −(e/h̄) E
Finally we make the relaxation time approximation to the rate of scattering, as in equation (3.5.3), so that

(∂f/∂t)coll. = −(f − fFD)/τ
and (3.5.7) becomes
f(k) = fFD + (e/h̄) τ(k) E · ∇k f    (3.5.10)
allowing that each wavepacket has its own relaxation time, depending on its wavevector
k. This is a differential equation in f and its derivatives with respect to the components
of k.

In the form (3.5.8) with equation (3.5.9) for the collision, or drift term, the Boltzmann
equation is said to be an integro-differential equation since f appears both under an
integral sign and as a differential. If we make the relaxation time approximation then
we obtain (3.5.10) which is a linear differential equation since f appears on the left, and
again on the right differentiated with respect to the components of k. To continue to
make progress we make a further approximation, namely to linearise the right hand side.
This is reasonable in the case of small fields where you’d expect the response to be linear,
for example as in Ohm’s Law. We can then replace rk f by rk fFD in equation (3.5.10)
amounting to a first order Taylor expansion of f (k) in which case the formula remains
correct to order E 2 . Thus the linearised Boltzmann equation is

f(k) = fFD + (e/h̄) τ(k) E · ∇k fFD    (3.5.11)
† This demands also that there are no temperature gradients, since f depends implicitly on T just as do
the Fermi–Dirac and Bose–Einstein distributions. If the system were homogeneous except for a uniform
temperature gradient in the x-direction, say, T = T0 + T′x (T0 and T′ constants) and f were the Fermi–
Dirac function you can probably work out what is (d/dx) fFD and this term could then be retained.

We now recall equation (3.3.9) for the current density from section 3.3.3,
J = (2/(2π)³) ∫BZ f(k)(−ev) dk    (3.3.9)

and we consider the case of an isotropic metal so that J is parallel to v, and we will take
both to be in the x-direction. We use equation (3.3.8)

vx = (1/h̄) ∂E/∂kx

and
∂fFD/∂kx = (∂E/∂kx)(∂fFD/∂E)
Noting that integrating the first term in (3.5.11) yields zero by symmetry, we now get
Jx = −(2/(2π)³) e² Ex ∫BZ τ(k) vx²(k) (∂fFD/∂E) dk

In the limit of zero temperature

∂fFD/∂E = −δ(E − EF)

which is (minus) a “Dirac delta function.”

This is non zero only at the Fermi surface—only electrons within kB T of EF will contribute
to the integral. The volume element dk is therefore an element of Fermi surface with
thickness
dk⊥ = dE/|∇k E|
and so
dk = dSF dk⊥ = (1/(h̄ v(k))) dE dSF

The term dE(∂f /∂E) reduces the volume integral over the Brillouin zone to a surface
integral over the Fermi surface. Hence the conductivity is

σ = Jx/Ex
  = (2/(2π)³) (e²/h̄) ∮E=EF (vx²/v(k)) τ(k) dSF    (3.5.12)

In the special case of a parabolic, isotropic, free-electron-like metal, this will reduce back
to the Drude formula! Because in this instance we have

vx²(kF) = (1/3) v²(kF)
and
v(kF) ≡ vF = h̄ kF/m∗
and the integral

∮E=EF dSF = 4π kF²

is the area of the Fermi sphere. But according to equation (2.2.2) the Fermi wavevector
depends on the total number of electrons per unit volume,

Ne/V = ne = kF³/(3π²)    (2.2.2)
Putting all these back into equation (3.5.12) we get, for a parabolic, isotropic, free-electron-like metal,

σ = (e²/(4π³ h̄)) (1/3) (h̄ kF/m∗) τ(kF) 4π kF²
  = (e² τ(kF)/m∗) kF³/(3π²)
  = ne e² τ(kF)/m∗
Only electrons at the Fermi surface contribute; not all of them as in the classical picture.
Hence the relevant relaxation time is that of the electrons having energy EF . But the
total electron density ne appears as in the classical Drude formula because it determines
the radius kF of the Fermi sphere.
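The algebra of this reduction is easy to check numerically: the Fermi-surface form of (3.5.12), evaluated for a parabolic band with vx² = v²/3, vF = h̄kF/m∗ and Fermi-sphere area 4πkF², must equal the Drude form ne e²τ/m∗ with ne = kF³/3π². A short sketch; the values of kF and τ are arbitrary illustrative inputs.

```python
import math

# Check that the Fermi-surface formula (3.5.12) reproduces the Drude result
# n_e e^2 tau / m* for a parabolic, isotropic band.  k_F and tau are
# illustrative numbers of typical metallic size, not taken from the notes.
hbar = 1.054571817e-34
e = 1.602176634e-19
m_star = 9.1093837015e-31
k_F = 1.36e10          # typical metallic Fermi wavevector (m^-1)
tau = 2.5e-14          # relaxation time at the Fermi surface (s)

v_F = hbar * k_F / m_star
# surface-integral form: (2/(2 pi)^3)(e^2/hbar)(v_F/3) tau * 4 pi k_F^2
sigma_surface = (2 / (2 * math.pi)**3) * (e**2 / hbar) * (v_F / 3) * tau \
                * 4 * math.pi * k_F**2
# Drude form with n_e = k_F^3 / (3 pi^2), equation (2.2.2)
n_e = k_F**3 / (3 * math.pi**2)
sigma_drude = n_e * e**2 * tau / m_star

print(sigma_surface, sigma_drude)  # identical up to rounding
```

Both routes give the same conductivity, of order 10⁷–10⁸ S/m for these inputs, as expected for a good metal.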

3.6 The Wiedemann–Franz Law

We now have formulas for the electrical and thermal conductivities of the quantum free
electron gas in the relaxation time approximation.

σ = ne e²τ/m = ne e²λ/(mv)

κ = (1/3) ce v λ

We take it that the mean free paths are the same for both so they cancel when we take
the ratio (but see Hook and Hall, section 3.3.4)

κ/σ = (1/3) ce m v²/(ne e²) = (1/(3e²)) m v² (ce/ne)

However as we saw in section 2.2


ce/ne = (π²/2) kB² T/EF    (2.2.8)

and we can identify v as the Fermi velocity, as in equation (2.2.7),

(1/2) m v² = EF
Hence

κ/σT = (π²/3) (kB/e)² = 2.43 × 10⁻⁸ WΩK⁻²
depends only on fundamental constants. This relation between σ and κ is called the
Wiedemann–Franz Law and has been known from observation for well over 100 years.

Interestingly this ratio is nearly the same in classical physics which severely overestimates
the heat capacity but underestimates the velocity. Using the wrong results

ce ≠ (3/2) ne kB

and

(1/2) m v² ≠ (3/2) kB T
you get
κ/σT = (3/2) (kB/e)² = 1.11 × 10⁻⁸ WΩK⁻²
Typical measured values at 0 °C are

κ/σT (WΩK−2 )
Cu 2.22 × 10−8
Al 2.14 × 10−8
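Both Lorenz numbers follow directly from the fundamental constants, so they are easy to evaluate and compare with the measured values in the table:

```python
import math

# The quantum (Sommerfeld) and classical Lorenz numbers kappa/(sigma*T)
# derived in the text, from kB and e alone.
kB = 1.380649e-23       # Boltzmann constant (J/K)
e = 1.602176634e-19     # electron charge (C)

L_quantum = (math.pi**2 / 3) * (kB / e)**2
L_classical = 1.5 * (kB / e)**2

print(f"{L_quantum:.3e}  {L_classical:.3e}")  # ~2.44e-8 and ~1.11e-8 W Ohm K^-2
```

The measured values for Cu and Al sit close to the quantum result, as the text notes.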

It is not surprising that people believed the classical point of view of Drude before
Sommerfeld's explanation, although of course the measured heat capacity can only be
explained from the Pauli principle.

3.7 Plausibility argument for the linearised Boltzmann equation

We seek the probability of occupation of a wavepacket of (central) wavevector k and
position r (subject of course to the uncertainty principle) in a gas of independent fermions
subject to the Pauli principle, and under the influence of a weak, uniform electric field E .

We expect this to have the form

f (r, k) = fFD (k) + corrections

Now we make two observations about these corrections.


1. In a degenerate electron gas only particles near the Fermi surface contribute to trans-
port. So there must be a factor that is zero above and below EF , for example a
gaussian centred at EF . Even better, use (see the figure on page 114)

−∂fFD/∂E

2. Electrons are negatively charged so they will be accelerated against the field: those
travelling with the field will be slowed down and those travelling against the field
will speed up. The wavepacket having wavevector k and velocity v(k) will have its
occupation number changed according to whether v(k) is parallel or antiparallel to E ;
so we expect the corrections to contain a factor

−ev(k) · E

We surmise therefore that


 
corrections ∝ (−∂fFD/∂E) (−e v(k) · E)

Since probability is a number this must be dimensionless, so we need to multiply by
something with the dimension of time. Call this τ(k). Then we get

f(r, k) = fFD(k) + τ(k) e v(k) · E (∂fFD/∂E)

Now, using the chain rule we have,


∂fFD/∂E = (∂fFD/∂k) × (∂k/∂E) = ∇k fFD × (∇k E)⁻¹
and remembering equation (3.3.8), v(k) = h̄⁻¹ ∇k E, we can "cancel out" the velocity
v(k) to obtain the linearised Boltzmann equation

f = fFD + (e/h̄) τ(k) E · ∇k fFD
and τ (k) is the relaxation time.

3.8 Thermoelectric effects

You have spotted that if thermal and electrical conductivity are both mediated by elec-
trons then a temperature gradient may induce an electric current and an electric field
may produce a flow of heat. The linearised Boltzmann equation always predicts that the
flux of heat or current is linearly proportional to the field or temperature gradient. Hence
if both gradients are present the proportionality must be expressible as two equations,

Je = LEE E + LET ∇T    (3.8.1)

JQ = LTE E + LTT ∇T    (3.8.2)

and we will assume, for simplicity, that the fluxes and fields are parallel as in cubic
symmetry crystals. We can then take all quantities as scalars, pointing, say, along the
x-direction.

Suppose we measure electrical conductivity while holding the specimen at a constant
temperature. Then
∇T −→ dT/dx = 0
and by Ohm’s law
Je = σE
which identifies LEE as the electrical conductivity,

LEE = σ (3.8.3)

Now suppose we measure thermal conductivity while preventing any current by insulating
the specimen electrically. We measure
dT
JQ = −κ
dx
but we cannot identify κ with minus the coefficient LT T . Instead we must write the zero
current condition as, from (3.8.1),

0 = LEE E + LET dT/dx

and hence
E = −(LET/LEE) dT/dx

The heat flow generates an electric field inside the metal. If we now put this result
into (3.8.2) we have
JQ = −LTE (LET/LEE) dT/dx + LTT dT/dx

which identifies the thermal conductivity as

κ = −LTT + LTE LET/LEE    (3.8.4)

What’s happening here is that an electric field is generated to prevent the flow of current
and this field in turn opposes the carriers of heat—reducing the thermal conductivity
from our expected value given by −LT T .

Apparently then a metal in a uniform temperature gradient develops an electric field of
amount

E = S dT/dx    (3.8.5)

where
S = −LET/LEE    (3.8.6)

is called the absolute thermoelectric power or thermopower. Suppose we build a circuit
like this.

The junctions between the two metals A and B are kept at temperatures T1 and T2 . A
voltmeter is inserted in metal B which is found to be at some intermediate temperature
T0 . By elementary electrostatics the voltage measured is given by the line integral of the

electric field around the circuit, and using (3.8.5)


V = −∮ E dx = −∮ S (dT/dx) dx
  = ∫_T0^T1 SB dT + ∫_T1^T2 SA dT + ∫_T2^T0 SB dT
  = ∫_T1^T2 (SA − SB) dT
  = ∫_T1^T2 SA(T) dT − ∫_T1^T2 SB(T) dT

and in the last line I’ve indicated explicitly that the thermopower is temperature depen-
dent. This experiment then measures the difference in thermopower between metals A
and B integrated between the two temperatures T1 and T2 .
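As a worked example, in the simple case where SA − SB happens to be independent of temperature the loop integral reduces to V = (SA − SB)(T2 − T1). A sketch with arbitrary illustrative thermopower values (not the properties of any particular metal pair):

```python
# Thermocouple voltage for temperature-independent thermopowers:
# V = (S_A - S_B)(T2 - T1).  The S values are illustrative only.
S_A = -1.5e-6   # thermopower of metal A (V/K)
S_B = -0.3e-6   # thermopower of metal B (V/K)
T1, T2 = 273.0, 373.0   # junction temperatures (K)

V = (S_A - S_B) * (T2 - T1)
print(f"V = {V * 1e6:.0f} microvolts")  # -120 microvolts across 100 K
```

Note that only the difference SA − SB is measurable this way; the absolute thermopower of a single metal has to be obtained by other means.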

Similar to electrostatic potential, we can define the absolute thermoelectric force of a
metal as

Θ(T) = ∫_0^T S(T) dT
The phenomenon we have described is called the Seebeck effect and is the basis upon
which a thermocouple is used in the measurement of temperature.

3.8.1 Estimate of the thermopower in metals and semiconductors

If a temperature gradient sets up an electric field we have, in one dimension, equation (3.8.5),

E = S dT/dx    (3.8.5)
The drift velocity of electrons due to the temperature gradient is obtained using just the
same argument as we used to obtain the thermal conductivity in section 3.4.4, (see also
Ashcroft and Mermin, p 24)
vQ = (1/2) [v(x − vτ) − v(x + vτ)]    (3.8.7)
   = −τ v dv/dx
   = −τ (d/dx) ((1/2) v²)
and in three dimensions we get the factor 1/3 as in equation (3.4.6) so, using the chain rule,

vQ = −(1/6) τ (d(v²)/dT) (dT/dx)    (3.8.8)
The drift velocity due to the field is (see section 3.2.2)
ve = −eEτ/m    (3.8.9)

and the nett current is zero so writing vQ +ve = 0 using (3.8.8) and (3.8.9) and substituting
in (3.8.5) we must have
 
S = −(1/3e) (d/dT) ((1/2) m v²)
  = −Ce/(3Ne e)
  = −(π²/6) (kB/e) (T/TF)
  ≈ −0.003 kB/e    (3.8.10a)
  ≈ −0.3 µV K⁻¹    (3.8.10b)

In the third line I have used equation (2.2.8), taking (1/2)mv² to be the internal energy
per electron of the electron gas so that its temperature derivative is the heat capacity
per electron. Note that the velocities vQ and ve are the slow drift velocities here while
v in (3.8.7) is the fast Fermi velocity in metals and thermal velocity in semiconductors.
You can see that in metals the thermopower increases linearly with temperature; taking
T /TF = 0.002 for Aluminium (section 2.2) the thermopower of a metal is seen to be rather
small and negative.
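Equation (3.8.10a) is easy to evaluate; with T/TF = 0.002 as quoted for aluminium:

```python
import math

# Metallic thermopower estimate S = -(pi^2/6)(kB/e)(T/TF), equation (3.8.10a),
# with T/TF = 0.002 as quoted for aluminium at room temperature.
kB = 1.380649e-23       # Boltzmann constant (J/K)
e = 1.602176634e-19     # electron charge (C)

S = -(math.pi**2 / 6) * (kB / e) * 0.002
print(f"S = {S * 1e6:.2f} microvolts per kelvin")  # about -0.3 uV/K
```
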

Equation (3.8.10b) furnishes us with an estimate of the thermopower of a metal. You will
find in a semiconductor textbook that the thermopower is
 
S = (kB/e) (2 + (Ec − EF)/kB T)
  ≈ (2 + 22) kB/e    (3.8.11)

for n-type and  


S = (kB/e) (2 − (EF − Ev)/kB T)
for p-type semiconductors: a “metallic part” plus a much larger part due to the effect of
temperature on Fermi level and carrier density. Note that the thermopower is negative for
positive carriers (holes) so like the Hall effect the thermopower tells what type of carrier is
the dominant one. The estimate in (3.8.11) is based on the band gap of Si being 1.1 eV and
the fact that kB T at room temperature is about 1/40 eV. Comparing (3.8.11) with (3.8.10a),
the thermopower in a semiconductor or insulator is typically many thousands of times
greater than in a metal. It also has the opposite sign in the case of electron carriers.
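Putting numbers into (3.8.11) and comparing with the metallic estimate (3.8.10a) makes the "many thousands of times" claim concrete; the value of kB/e and the factor of 22 for silicon are as quoted above.

```python
# n-type semiconductor thermopower (3.8.11) versus the metallic estimate
# (3.8.10a), using (Ec - EF)/(kB T) ~ 22 for Si and |S_metal| ~ 0.003 kB/e.
kB_over_e = 8.617e-5                 # kB/e in volts per kelvin

S_semiconductor = kB_over_e * (2 + 22)   # ~2 mV/K
S_metal = 0.003 * kB_over_e              # ~0.3 uV/K in magnitude
ratio = S_semiconductor / S_metal

print(f"S_sc ~ {S_semiconductor * 1e3:.1f} mV/K, ratio ~ {ratio:.0f}")
```

The ratio comes out at several thousand, which is why thermoelectric devices are built from semiconductors rather than metals.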

3.8.2 The Peltier effect

Can I reverse the Seebeck effect? That is, drive a current around the same circuit and
obtain a temperature difference between the two junctions. This would act as a useful
fridge or heater. Yes, I can; and I get the Peltier effect. When we considered measuring

the electrical conductivity we kept the metal at uniform temperature, so that

dT
=0
dx
but according to equation (3.8.2) there will still be a thermal current due to the electric
field that I apply to measure σ,

JQ = LTE E
   = (LTE/LEE) Je
   = ΠAB Je    (Π = T S; proof below)

ΠAB is called the Peltier coefficient and is the heat evolved per unit area of
metal A / metal B junction per unit current.

This is the way thermoelectric heaters and coolers work; a device can be switched from
heating to cooling simply by reversing the current. The cooler that you plug into your
car’s cigar lighter can also heat your dinner.

People look for the best thermoelectric materials. A good thermoelectric has a large value
of the figure of merit ZT near the temperature of operation, T ,

ZT = S² σ T / κ
where σ and κ are electrical and thermal conductivities and S is the thermopower.

Bi2 Te3 has ZT ≈ 1. Some semiconductors are good candidates, especially those having
a large atomic number and a band gap of about 10kB T . This implies a small band gap;
heavy doping may help. Metals usually have too small S to be useful.
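For example, evaluating the figure of merit with Bi2Te3-like property values (typical textbook numbers, assumed here for illustration rather than taken from these notes):

```python
# Figure of merit ZT = S^2 sigma T / kappa for a Bi2Te3-like thermoelectric.
# All four property values are illustrative assumptions.
S = 2.0e-4      # thermopower (V/K)
sigma = 1.0e5   # electrical conductivity (S/m)
kappa = 1.5     # thermal conductivity (W/m/K)
T = 300.0       # operating temperature (K)

ZT = S**2 * sigma * T / kappa
print(f"ZT = {ZT:.1f}")  # ~0.8, consistent with ZT ~ 1 quoted for Bi2Te3
```

The competition is clear from the formula: a good thermoelectric needs a large S and σ but a small κ, which is hard to arrange since σ and κ tend to rise and fall together.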

No really good material has yet been found, say, to replace the conventional refrigerator.
But Peltier cooling is used all the time in laboratories, small domestic fridges, and so on.

3.9 Onsager relations

We have four coefficients in (3.8.1) and (3.8.2), LEE , LT T , LET , LT E , and have found their
relations to four measurable properties, σ and κ, the electric and thermal conductivities;
S and Π the thermopower and Peltier coefficient. Now we find a relation between the two
coefficients, LET and LT E to expose a relation between S and Π. As a by-product you
will get a glimpse into the elegant and fascinating subject of irreversible thermodynamics.

Equations (3.8.1) and (3.8.2) are examples of formulas that describe the thermodynamic
steady state where forces X1 , X2 , . . . , Xn generate currents J1 , J2 , . . . , Jn . They are related

linearly by

Ji = Σj Lij Xj    (3.9.1)

According to Onsager’s theorem (which we shall not prove here) if the forces and currents
have been defined such that

Ṡ = Σi Xi Ji    (3.9.2)

is the rate of entropy production per unit volume per unit time, then we have

Lji = Lij

To bring them into line with equation (3.9.2) we must make slight adjustments to our
definitions of X and J in equations (3.8.1) and (3.8.2).

In the case that only electric current is flowing, it is easy to find Ṡ—it is the rate of
Joule heating Je E divided by the temperature. So if J1 is the electric current density, the
“conjugate” force must be
X1 = E/T
so that
Ṡ = Je E/T = J1 X1    electric current flowing
as required by Onsager’s theorem.

In the case of the heat current we write the rate of entropy production as arising from
the creation of heat as described mathematically as the divergence of JQ /T . That is,
 
Ṡ = ∇ · (JQ/T)

If JQ is constant as in the steady state we have

Ṡ = JQ · ∇(1/T)

and in one dimension,

Ṡ = JQ (d/dx)(1/T) = JQ (−(1/T²)(dT/dx))
So if we identify JQ with the flux J2 the conjugate force X2 must be

X2 = ∇(1/T)

or in one dimension,
X2 = −(1/T²) dT/dx

so that
Ṡ = J2 X2 heat current flowing
as required by Onsager’s theorem.

We can now rewrite equations (3.8.1) and (3.8.2) in the canonical form of equation (3.9.1),
   
Je = (LEE T)(E/T) + (−LET T²)(−(1/T²)(dT/dx))

JQ = (LTE T)(E/T) + (−LTT T²)(−(1/T²)(dT/dx))

or equivalently
J1 = L11 X1 + L12 X2
J2 = L21 X1 + L22 X2
Then by Onsager’s theorem we have L12 = L21 or

−LET T² = LTE T

which furnishes us with the relation between the thermopower and the Peltier coefficient
which I promised,
S = Π/T
a result first obtained by William Thomson, later Lord Kelvin, in 1854.
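A quick numerical sanity check of this chain of identifications, choosing arbitrary illustrative values for LEE and LET and imposing the Onsager relation:

```python
# Check that the Onsager relation -LET*T^2 = LTE*T, i.e. LTE = -LET*T,
# makes the Peltier coefficient Pi = LTE/LEE equal to T*S with S = -LET/LEE.
# LEE and LET are arbitrary illustrative coefficients.
T = 300.0
LEE = 2.0
LET = -0.5

LTE = -LET * T        # imposed by Onsager's theorem
S = -LET / LEE        # thermopower, equation (3.8.6)
Pi = LTE / LEE        # Peltier coefficient

print(Pi, T * S)      # equal: Pi = T S
```
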

Suppose we have a wire subject to both an electric field and a thermal gradient. If Je is
constant along the wire, then at any point the rate of entropy production is
 
Ṡ = (d/dx)(JQ/T) + Je E/T

I will leave out the algebra, but make E the subject of equation (3.8.1) and substitute
the result into (3.8.2). Use equations (3.8.3), (3.8.4) and (3.8.6) to eliminate the L's,
differentiate the product (JQ/T) with respect to x and use the chain rule to write dS/dx
in terms of dS/dT and dT/dx and you'll find,
 
Ṡ = −(d/dx)((κ/T)(dT/dx)) + Je²/(σT) + Je (1/T)(dT/dx)(d(ST)/dT)

The first two terms are, respectively, the entropy production generated by heat flow and
Joule heating. The third term is proportional to both the electric and thermal currents
and is positive if they are in the same direction and negative if they are opposed. The
rate of heat generation due to this effect is
T Ṡ = Je (dT/dx) (d(ST)/dT)    (3.9.5)
and can be measured directly in the circuit by measuring the change in heat output when
the current is reversed. This is called the Thomson effect (also named after Lord Kelvin).

It is important to distinguish the Joule heating from the Thomson heating. Joule heat
is given out independently of the direction of the current and is thermodynamically irre-
versible. Conversely the Thomson heating (3.9.5) is reversible and is positive or negative,
that is absorbed or emitted according to the direction of the current (the sign of Je ).
Since this only depends on the thermopower it provides a means for its measurement.
Thermopower is central to many investigations in solid state physics as it furnishes direct
evidence of the nature of the current carriers.
