E = hν = h̄ω (1)
where h ≈ 6.63 · 10⁻³⁴ J · s is Planck’s constant, and h̄ ≡ h/2π = 1.06 · 10⁻³⁴ J · s.
The frequency ν of light is generally very large (on the order of 10¹⁵ s⁻¹ for the visible
spectrum), but the smallness of h wins out, so the hν unit of energy is very small (at least on
an everyday energy scale). The energy is therefore essentially continuous for most purposes.
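To get a feel for these scales, here is a minimal numerical sketch (in Python; the 60 W bulb and its assumed 5% visible-light efficiency are illustrative numbers, not taken from the text). A single visible-light photon carries a tiny energy, and an everyday source emits so many photons per second that the energy appears continuous:

# Rough scale of the energy quantum h*nu for visible light.
h = 6.63e-34          # Planck's constant, J*s
nu = 5e14             # visible-light frequency, s^-1
E_photon = h * nu     # energy of one photon, J
print(f"Energy per photon: {E_photon:.2e} J")            # about 3e-19 J

# Assumed illustrative source: a 60 W bulb radiating ~5% of its power as visible light.
visible_power = 0.05 * 60.0                              # W (assumption)
photons_per_second = visible_power / E_photon
print(f"Photons per second: {photons_per_second:.1e}")   # about 1e19, so the energy looks continuous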
However, a puzzle in late 19th-century physics was the blackbody radiation problem. In a
nutshell, the issue was that the classical (continuous) theory of light predicted that certain
objects would radiate an infinite amount of energy, which of course can’t be correct. Planck’s
hypothesis of quantized radiation not only got rid of the problem of the infinity, but also
correctly predicted the shape of the power curve as a function of temperature.
The results that we derived for electromagnetic waves in Chapter 8 are still true. In
particular, the energy flux is given by the Poynting vector in Eq. 8.47. And E = pc for
a light wave. Planck’s hypothesis simply adds the information of how many lumps of energy a
wave contains. Although strictly speaking, Planck initially thought that the quantization
was only a function of the emission process and not inherent to the light itself.
1905 (Einstein): Albert Einstein stated that the quantization was in fact inherent to the
light, and that the lumps can be interpreted as particles, which we now call “photons.” This
proposal was a result of his work on the photoelectric effect, which deals with the absorption
of light and the emission of electrons from a material.
We know from Chapter 8 that E = pc for a light wave. (This relation also follows from
Einstein’s 1905 work on relativity, where he showed that E = pc for any massless particle,
an example of which is a photon.) And we also know that ω = ck for a light wave. So
Planck’s E = h̄ω relation becomes
E = h̄ω =⇒ pc = h̄(ck) =⇒ p = h̄k (2)
This result relates the momentum of a photon to the wavenumber of the wave it is associated
with.
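As a quick numerical check of Eq. (2) (a sketch with an assumed wavelength of 500 nm for concreteness), the momentum obtained from p = h̄k agrees with the momentum obtained from E = pc:

import math

h = 6.63e-34                 # Planck's constant, J*s
hbar = h / (2 * math.pi)     # J*s
c = 3.0e8                    # speed of light, m/s
lam = 500e-9                 # assumed wavelength, m (green light)

k = 2 * math.pi / lam        # wavenumber, 1/m
E = hbar * (c * k)           # photon energy, using omega = c*k and E = hbar*omega

p_from_k = hbar * k          # Eq. (2)
p_from_E = E / c             # E = pc for a massless particle
print(p_from_k, p_from_E)    # both about 1.3e-27 kg*m/s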
1913 (Bohr): Niels Bohr stated that electrons in atoms have wavelike properties. This
correctly explained a few things about hydrogen, in particular the quantized energy levels
that were known.
1924 (de Broglie): Louis de Broglie proposed that all particles are associated with waves,
where the frequency and wavenumber of the wave are given by the same relations we found
above for photons, namely E = h̄ω and p = h̄k. The larger E and p are, the larger ω
and k are. Even for small E and p that are typical of a photon, ω and k are very large
because h̄ is so small. So any everyday-sized particle with large (in comparison) energy and
momentum values will have extremely large ω and k values. This (among other reasons)
makes it virtually impossible to observe the wave nature of macroscopic amounts of matter.
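To make this concrete, here is a small sketch comparing the de Broglie wavelengths λ = 2π/k = h/p of a thrown baseball and a slow electron (the masses and speeds are assumed, illustrative values):

h = 6.63e-34                          # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """lambda = h/p = 2*pi/k, from p = hbar*k."""
    return h / (mass_kg * speed_m_s)

# Assumed illustrative values (not from the text):
lam_baseball = de_broglie_wavelength(0.15, 40.0)      # 0.15 kg baseball at 40 m/s
lam_electron = de_broglie_wavelength(9.11e-31, 1e6)   # electron at 10^6 m/s

print(f"baseball: {lam_baseball:.1e} m")   # about 1e-34 m, hopelessly unobservable
print(f"electron: {lam_electron:.1e} m")   # about 7e-10 m, comparable to atomic sizes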
This proposal (that E = h̄ω and p = h̄k also hold for massive particles) was a big step,
because many things that are true for photons are not true for massive (and nonrelativistic)
particles. For example, E = pc (and hence ω = ck) holds only for massless particles (we’ll
see below how ω and k are related for massive particles). But the proposal was a reasonable
one to try. And it turned out to be correct, in view of the fact that the resulting predictions
agree with experiments.
The fact that any particle has a wave associated with it leads to the so-called wave-
particle duality. Are things particles, or waves, or both? Well, it depends what you’re doing
with them. Sometimes things behave like waves, sometimes they behave like particles. A
vaguely true statement is that things behave like waves until a measurement takes place,
at which point they behave like particles. However, approximately one million things are
left unaddressed in that sentence. The wave-particle duality is one of the things that few
people, if any, understand about quantum mechanics.
1925 (Heisenberg): Werner Heisenberg formulated a version of quantum mechanics that
made use of matrix mechanics. We won’t deal with this matrix formulation (it’s rather
difficult), but instead with the following wave formulation due to Schrodinger (this is a
waves book, after all).
1926 (Schrodinger): Erwin Schrodinger formulated a version of quantum mechanics that
was based on waves. He wrote down a wave equation (the so-called Schrodinger equation)
that governs how the waves evolve in space and time. We’ll deal with this equation in depth
below. Even though the equation is correct, the correct interpretation of what the wave
actually meant was still missing. Initially Schrodinger thought (incorrectly) that the wave
represented the charge density.
1926 (Born): Max Born correctly interpreted Schrodinger’s wave as a probability am-
plitude. By “amplitude” we mean that the wave must be squared to obtain the desired
probability. More precisely, since the wave (as we’ll see) is in general complex, we need to
square its absolute value. This yields the probability of finding a particle at a given location
(assuming that the wave is written as a function of x).
This probability isn’t a consequence of ignorance, as is the case with virtually every
other example of probability you’re familiar with. For example, in a coin toss, if you
know everything about the initial motion of the coin (velocity, angular velocity), along
with all external influences (air currents, nature of the floor it lands on, etc.), then you
can predict which side will land facing up. Quantum mechanical probabilities aren’t like
this. They aren’t a consequence of missing information. The probabilities are truly random,
and there is no further information (so-called “hidden variables”) that will make things un-
random. The topic of hidden variables includes various theorems (such as Bell’s theorem)
and experimental results that you will learn about in a quantum mechanics course.
1926 (Dirac): Paul Dirac showed that Heisenberg’s and Schrodinger’s versions of quantum
mechanics were equivalent, in that they could both be derived from a more general version
of quantum mechanics.
1. The reasoning is based on de Broglie’s assumption that there is a wave associated with
every particle, and also on the assumption that the ω and k of the wave are related to
E and p via Planck’s constant in Eqs. (1) and (2). We had to accept these assumptions
on faith.
2. Even if we accept the above assumptions, we haven’t proved anything; all we can do is perform experiments and check
that the theory is consistent with the real world. The more experiments we do, the
more comfortable we are that the theory is a good one. But we can never be absolutely
sure that we have the correct theory. In fact, odds are that it’s simply the limiting
case of a more correct theory.
3. The Schrodinger equation actually isn’t valid, so there’s certainly no way that we
proved it. Consistent with the above point concerning limiting cases, the quantum
theory based on Schrodinger’s equation is just a limiting theory of a more correct one,
which happens to be quantum field theory (which unifies quantum mechanics with
special relativity). This in turn must be a limiting theory of yet another more correct
one, because it doesn’t incorporate gravity. Eventually there will be one theory that
covers everything (although this point can be debated), but we’re definitely not there
yet.
Due to the “i” that appears in Eq. (6), ψ(x) is complex. And in contrast with waves in
classical mechanics, the entire complex function now matters in quantum mechanics. We
won’t be taking the real part in the end. Up to this point in the book, the use of complex
functions was simply a matter of convenience, because it is easier to work with exponentials
than trig functions. Only the real part mattered (or imaginary part – take your pick, but not
both). But in quantum mechanics the whole complex wavefunction is relevant. However,
the theory is structured in such a way that anything you might want to measure (position,
momentum, energy, etc.) will always turn out to be a real quantity. This is a necessary
feature of any valid theory, of course, because you’re not going to go out and measure a
distance of 2 + 5i meters, or pay an electrical bill of 17 + 6i kilowatt hours.
As mentioned in the introduction to this chapter, there is an endless number of difficult
questions about quantum mechanics that can be discussed. But in this short introduction
to the subject, let’s just accept Schrodinger’s equation as valid, and see where it takes us.
If we substitute a solution of the form ψ(x, t) = f(x)e−iωt into the time-dependent Schrodinger equation, the time dependence cancels and we obtain
h̄ω f(x) = −(h̄²/2m) ∂²f(x)/∂x² + V(x)f(x).   (8)
But from Eq. (1), we have h̄ω = E. And we’ll now replace f (x) with ψ(x). This might
cause a little confusion, since we’ve already used ψ to denote the entire wavefunction ψ(x, t).
However, it is general convention to also use the letter ψ to denote the spatial part. So we
now have
E ψ(x) = −(h̄²/2m) ∂²ψ(x)/∂x² + V(x)ψ(x)   (9)
This is called the time-independent Schrodinger equation. This equation is more restrictive
than the original time-dependent Schrodinger equation, because it assumes that the parti-
cle/wave has a definite energy (that is, a definite ω). In general, a particle can be in a state
that is the superposition of states with various definite energies, just like the motion of a
string can be the superposition of various normal modes with definite ω’s. The same rea-
soning applies here as with all the other waves we’ve discussed: From Fourier analysis and
from the linearity of the Schrodinger equation, we can build up any general wavefunction
from ones with specific energies. Because of this, it suffices to consider the time-independent
Schrodinger equation. The solutions for that equation form a basis for all possible solutions.1
Continuing with our standard strategy of guessing exponentials, we’ll let ψ(x) = Aeikx .
Plugging this into Eq. (9) and canceling the eikx gives (going back to the h̄ω instead of E)
h̄ω = −(h̄²/2m)(−k²) + V(x)  =⇒  h̄ω = h̄²k²/2m + V(x).   (10)
This is simply Eq. (4), so we’ve ended up back where we started, as expected. However, our
goal here was to show how the Schrodinger equation can be solved from scratch, without
knowing where it came from.
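As a check on the above algebra, the following symbolic sketch (using Python's sympy; the setup is ours) plugs ψ(x) = Aeikx into the right-hand side of Eq. (9) with a constant potential V0 and recovers the relation in Eq. (10):

import sympy as sp

x = sp.symbols('x', real=True)
k, hbar, m, V0, A = sp.symbols('k hbar m V0 A', positive=True)

psi = A * sp.exp(sp.I * k * x)     # trial solution psi(x) = A e^{ikx}

# Right-hand side of the time-independent Schrodinger equation, Eq. (9),
# with a constant potential V(x) = V0:
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2) + V0 * psi

# For E*psi = rhs to hold, the energy must equal rhs/psi:
E = sp.simplify(rhs / psi)
print(E)   # equals hbar^2 k^2/(2m) + V0, in agreement with Eq. (10)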
Eq. (10) is (sort of) a dispersion relation. If V (x) is a constant C in a given region, then
the relation between ω and k (namely ω = h̄k²/2m + C/h̄) is independent of x, so we have
a nice sinusoidal wavefunction (or exponential, if k is imaginary). However, if V (x) isn’t
constant, then the wavefunction isn’t characterized by a unique wavenumber. So a function
of the form eikx doesn’t work as a solution for ψ(x). (A Fourier superposition can certainly
work, since any function can be expressed that way, but a single eikx by itself doesn’t work.)
This is similar to the case where the density of a string isn’t constant. We don’t obtain
sinusoidal waves there either.
10.3 Examples
In order to solve for the wavefunction ψ(x) in the time-independent Schrodinger equation
in Eq. (9), we need to be given the potential energy V (x). So let’s now do some examples
with particular functions V (x).
(We’ve taken the positive square root here. We’ll throw in the minus sign by hand to obtain
the other solution, in the discussion below.) k is a constant, and its real/imaginary nature
depends on the relation between E and V0 . If E > V0 , then k is real, so we have oscillatory
solutions,
ψ(x) = Aeikx + Be−ikx . (12)
But if E < V0, then k is imaginary, so we have exponentially growing or decaying solutions.
If we let κ ≡ |k| = √(2m(V0 − E))/h̄, then ψ(x) takes the form,
ψ(x) = Ceκx + De−κx .   (13)
We see that it is possible for ψ(x) to be nonzero in a region where E < V0 . Since ψ(x) is
the probability amplitude, this implies that it is possible to have a particle with E < V0 .
1 The “time-dependent” and “time-independent” qualifiers are a bit of a pain to keep saying, so we usually
just say “the Schrodinger equation,” and it’s generally clear from the context which one we mean.
This isn’t possible classically, and it is one of the many ways in which quantum mechanics
diverges from classical mechanics. We’ll talk more about this when we discuss the finite
square well in Section 10.3.3.
If E = V0 , then this is the one case where the strategy of guessing an exponential function
doesn’t work. But if we go back to Eq. (9) we see that E = V0 implies ∂ 2 ψ/∂x2 = 0, which
in turn implies that ψ is a linear function,
ψ(x) = Ax + B. (14)
In all of these cases, the full wavefunction (including the time dependence) for a particle
with a specific value of E is given by
ψ(x, t) = ψ(x)e−iωt = ψ(x)e−iEt/h̄ .   (15)
This is called an “infinite square well,” and it is shown in Fig. 1. The “square” part of the
name comes from the right-angled corners and not from the actual shape, since it’s a very
(infinitely) tall rectangle. This setup is also called a “particle in a box” (a 1-D box), because
the particle can freely move around inside a given region, but has zero probability of leaving
the region, just like a box. So ψ(x) = 0 outside the box.
[Figure 1: the infinite square well potential, with V = ∞ outside the well and V = 0 inside.]
The particle does indeed have zero chance of being found outside the region 0 ≤ x ≤ L.
Intuitively, this is reasonable, because the particle would have to climb the infinitely high
potential cliff at the side of the box. Mathematically, this can be derived rigorously, and
we’ll do this below when we discuss the finite square well.
We’ll assume E > 0, because the E < 0 case makes E < V0 everywhere, which isn’t
possible, as we mentioned above. Inside the well, we have V (x) = 0, so this is a special
case of the constant potential discussed above. We therefore have the oscillatory solution
in Eq. (12) (since E > 0), which we will find more convenient here to write in terms of trig
functions,
ψ(x) = A cos kx + B sin kx,  where  E = h̄²k²/2m  =⇒  k = √(2mE)/h̄.   (17)
The coefficients A and B may be complex.
We now claim that ψ must be continuous at the boundaries at x = 0 and x = L. When
dealing with, say, waves on a string, it was obvious that the function ψ(x) representing
the transverse position must be continuous, because otherwise the string would have a
break in it. But it isn’t so obvious with the quantum-mechanical ψ. There doesn’t seem
to be anything horribly wrong with having a discontinuous probability distribution, since
probability isn’t an actual object. However, it is indeed true that the probability distribution
is continuous in this case (and in any other case that isn’t pathological). For now, let’s just
assume that this is true, but we’ll justify it below when we discuss the finite square well.
Since ψ(x) = 0 outside the box, continuity of ψ(x) at x = 0 quickly gives A cos(0) +
B sin(0) = 0 =⇒ A = 0. Continuity at x = L then gives B sin kL = 0 =⇒ kL = nπ, where
n is an integer. So k = nπ/L, and the solution for ψ(x) is ψ(x) = B sin(nπx/L). The full
solution including the time dependence is given by Eq. (15) as
ψ(x, t) = B sin(nπx/L)e−iEn t/h̄ ,  where  En = h̄²k²/2m = n²π²h̄²/2mL².   (18)
We see that the energies are quantized (that is, they can take on only discrete values) and
indexed by the integer n. The string setup that is analogous to the infinite square well is
a string with fixed ends, which we discussed in Chapter 4 (see Section 4.5.2). In both of
these setups, the boundary conditions yield the same result that an integral number of half
wavelengths fit into the region. So the k values take the same form, k = nπ/L.
The dispersion relation, however, is different. It was simply ω = ck for waves on a
string, whereas it is h̄ω = h̄²k²/2m for the V(x) = 0 region of the infinite well. But as
in the above case of the constant potential, this difference affects only the rate at which
the waves oscillate in time. It doesn’t affect the spatial shape, which is determined by the
wavenumber k. The wavefunctions for the lowest four energies are shown in Fig. 2 (the
vertical separation between the curves is meaningless). These look exactly like the normal
modes in the “both ends fixed” case in Fig. 24 in Chapter 4.
[Figure 2: the n = 1, 2, 3, 4 wavefunctions between x = 0 and x = L.]
[Figure 3: the corresponding energies E, in units of π²h̄²/2mL², taking the values 1, 4, 9, 16 for n = 1, 2, 3, 4.]
The corresponding energies are shown in Fig. 3. Since E = h̄ω = (h̄²/2m)k² ∝ n², the
gap between the energies grows as n increases. Note that the energies in the case of a string
are also proportional to n², because although ω = ck ∝ n, the energy is proportional to ω²
(because the time derivative in Eq. (4.50) brings down a factor of ω). So Figs. 2 and 3 both
apply to both systems. The difference between the systems is that a string has ω ∝ √E,
whereas the quantum mechanical system has ω ∝ E.
There is no n = 0 state, because from Eq. (18) this would make ψ be identically zero.
That wouldn’t be much of a state, because the probability would be zero everywhere. The
lack of an n = 0 state is consistent with the uncertainty principle (see Section 10.4 below),
because such a state would have ∆x∆p = 0 (since ∆x < L, and ∆p = 0 because n = 0 =⇒
k = 0 =⇒ p = h̄k = 0), which would violate the principle.
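The n² pattern of the energies can also be verified numerically. The following sketch (our own construction, not part of the text’s development) discretizes −(h̄²/2m)∂²ψ/∂x² on a grid with ψ = 0 at the walls, diagonalizes the resulting matrix, and compares the lowest eigenvalues with En = n²π²h̄²/2mL²:

import numpy as np

# Units with hbar = m = L = 1, so the exact energies are E_n = n^2 * pi^2 / 2.
N = 2000                          # number of interior grid points
dx = 1.0 / (N + 1)

# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 with psi = 0 at the walls
# (V = 0 inside the well; the zero boundary values enforce the infinite walls).
main_diag = np.full(N, 1.0 / dx**2)
off_diag = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main_diag) + np.diag(off_diag, 1) + np.diag(off_diag, -1)

numerical = np.linalg.eigvalsh(H)[:4]
exact = np.array([n**2 * np.pi**2 / 2 for n in range(1, 5)])

for n, (num, ex) in enumerate(zip(numerical, exact), start=1):
    print(f"n={n}:  numerical {num:.4f}   exact {ex:.4f}")
# The agreement (to a fraction of a percent, improving with N) confirms E_n proportional to n^2.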
• E > V0 (unbound state): From Eq. (11), the wavenumber k takes the general form
√(2m(E − V(x)))/h̄. This equals √(2mE)/h̄ inside the well and √(2m(E − V0))/h̄
outside. k is therefore real everywhere, so ψ(x) is an oscillatory function both inside
and outside the well. k is larger inside the well, so the wavelength is shorter there. A
possible wavefunction might look something like the one in Fig. 5. It is customary to
draw the ψ(x) function on top of the E line, although this technically has no meaning
because ψ and E have different units.
[Figure 5: an unbound-state wavefunction with E > V0, drawn on top of the E line, for a well extending from −a to a.]
The wavefunction extends infinitely in both directions, so the particle can be anywhere.
Hence the name “unbound state.” We’ve drawn an even-function standing wave in
Fig. 5, although in general we’re concerned with traveling waves for unbound states.
These are obtained from superpositions of the standing waves, with a phase thrown
into the time dependence. For traveling waves, the relative sizes of ψ(x) in the different
regions depend on the specifics of how the problem is set up.
• 0 < E < V0 (bound state): The wavenumber k still equals √(2mE)/h̄ inside the well and
√(2m(E − V0))/h̄ outside, but now that latter value is imaginary. So ψ is an oscillatory
function inside the well, but an exponential function outside. Furthermore, it must
be an exponentially decaying function outside, because otherwise it would diverge at
x = ±∞. Since the particle has an exponentially small probability of being found
far away from the well, we call this a “bound state.” We’ll talk more below about
the strange fact that the probability is nonzero in the region outside the well, where
E < V (x).
There is also the third case where E = V0, but this can be obtained as the limit of the
other two cases (more easily as the limit of the bound-state case). The fourth case,
E < 0, isn’t allowed, as we discussed at the end of Section 10.3.1.
In both of these cases, the complete solution for ψ(x) involves solving the boundary
conditions at x = ±a. The procedure is the same for both cases, but let’s concentrate on
the bound-state case here. The boundary conditions are given by the following theorem.
Theorem 10.1 If V(x) is everywhere finite (which is the case for the finite square well),
then both ψ(x) and ψ′(x) are everywhere continuous.
Proof: If we solve for ψ′′ in Eq. (9), we see that ψ′′ is always finite (because V(x) is always
finite). This implies two things. First, it implies that ψ′ must be continuous, because if ψ′
were discontinuous at a given point, then its derivative ψ′′ would be infinite there (because
ψ′ would make a finite jump over zero distance). So half of the theorem is proved.
Second, the finiteness of ψ′′ implies that ψ′ must also be finite everywhere, because if
ψ′ were infinite at a given point (excluding x = ±∞), then its derivative ψ′′ would also be
infinite there (because ψ′ would make an infinite jump over a finite distance).
Now, since ψ′ is finite everywhere, we can repeat the same reasoning with ψ′ and ψ that
we used with ψ′′ and ψ′ in the first paragraph above: Since ψ′ is always finite, we know
that ψ must be continuous. So the other half of the theorem is also proved.
Having proved this theorem, let’s outline the general strategy for solving for ψ in the
E < V0 case. The actual task of going through the calculation is left for Problem 10.2. The
calculation is made much easier with the help of Problem 10.1 which states that only even
and odd functions need to be considered.
If we let k ≡ iκ outside the well, then we have κ = √(2m(V0 − E))/h̄, which is real and
positive since E < V0 . The general forms of the wavefunctions in the left, middle, and right
regions are
where
k = √(2mE)/h̄,  and  κ = √(2m(V0 − E))/h̄.   (21)
We’ve given only the x dependence in these wavefunctions. To obtain the full wavefunction
ψ(x, t), all of these waves are multiplied by the same function of t, namely e−iωt = e−iEt/h̄ .
We now need to solve for various quantities. How many unknowns do we have, and how
many equations/facts do we have? We have seven unknowns: A1 , A2 , A3 , B1 , B2 , B3 , and
E (which appears in k and κ). And we have seven facts:
It turns out that the energies and states are again discrete and can be labeled by an
integer n, just as in the infinite-well case. However, the energies don’t take the simple form
in Eq. (18), although they approximately do if the well is deep. Fig. 6 shows the five states for
a well of a particular depth V0. We’ve drawn each wave relative to the line that represents
the energy En. Both ψ and ψ′ are continuous at x = ±a, and ψ goes to 0 at x = ±∞.
We’ve chosen the various parameters (one of which is the depth) so that there are exactly
five states (see Problem 10.2 for the details on this). The deeper the well, the more states
there are.
[Figure 6: the five bound-state wavefunctions, each drawn relative to its energy line En, for a well extending from −a to a.]
Consistent with Eq. (20), ψ is indeed oscillatory inside the well (that is, the curvature
is toward the x axis), and exponentially decaying outside the well (the curvature is away
from the x axis). As E increases, Eq. (21) tells us that k increases (so the wiggles inside
the well have shorter wavelengths), and also that κ decreases (so the exponential decay is
slower). These facts are evident in Fig. 6. The exact details of the waves depend on various
parameters, but the number of bumps equals n.
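To make the above matching procedure concrete, here is a numerical sketch (our own, using scipy) for the bound states of this well (V = 0 for |x| < a, V = V0 outside). Matching ψ and ψ′ at x = ±a for the even and odd solutions leads to the conditions k tan(ka) = κ and −k cot(ka) = κ; the code solves them in the dimensionless variable z ≡ ka, with z0 ≡ a√(2mV0)/h̄ measuring the depth. The value z0 = 7 is an assumed choice that yields exactly five bound states, as in Fig. 6:

import numpy as np
from scipy.optimize import brentq

# Dimensionless variables: z = k*a and z0 = a*sqrt(2*m*V0)/hbar, so kappa*a = sqrt(z0^2 - z^2).
# Even states:  z*tan(z)  = sqrt(z0^2 - z^2)
# Odd states:  -z/tan(z)  = sqrt(z0^2 - z^2)
# Each root z in (0, z0) gives an energy E = (hbar*z/a)^2/(2m), i.e. E/V0 = (z/z0)^2.

def bound_states(z0):
    def even(z):
        return z * np.tan(z) - np.sqrt(z0**2 - z**2)
    def odd(z):
        return -z / np.tan(z) - np.sqrt(z0**2 - z**2)
    roots = []
    zs = np.linspace(1e-9, z0 - 1e-9, 20000)   # fine grid for bracketing sign changes
    for f, parity in ((even, "even"), (odd, "odd")):
        vals = f(zs)
        for i in range(len(zs) - 1):
            # keep sign changes between moderate values (true roots),
            # and skip the huge jumps caused by the divergences of tan/cot
            if vals[i] * vals[i + 1] < 0 and abs(vals[i]) < 50 and abs(vals[i + 1]) < 50:
                roots.append((brentq(f, zs[i], zs[i + 1]), parity))
    return sorted(roots)

z0 = 7.0   # assumed depth parameter, chosen so that there are exactly five bound states
for n, (z, parity) in enumerate(bound_states(z0), start=1):
    print(f"state {n} ({parity}):  z = {z:.4f}   E/V0 = {(z / z0)**2:.4f}")
# A deeper well (larger z0) holds more states; as z0 -> infinity the roots approach
# z = n*pi/2, which reproduces the infinite-well energies for a well of width 2a.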