
CHAPTER 10. INTRODUCTION TO QUANTUM MECHANICS

deal of quantum mechanics already, whether you realize it or not.


The outline of this chapter is as follows. In Section 10.1 we give a brief history of the
development of quantum mechanics. In Section 10.2 we write down, after some motivation,
the Schrodinger wave equation, both the time-dependent and time-independent forms. In
Section 10.3 we discuss a number of examples. The most important thing to take away from
this section is that all of the examples we discuss have exact analogies in the string/spring
systems earlier in the book. So we technically won’t have to solve anything new here. All
the work has been done before. The only thing new that we’ll have to do is interpret the old
results. In Section 10.4 we discuss the uncertainty principle. As in Section 10.3, we’ll find
that we already did the necessary work earlier in the book. The uncertainty principle turns
out to be a direct consequence of a result from Fourier analysis. But the interpretation of
this result as an uncertainty principle has profound implications in quantum mechanics.

10.1 A brief history


Before discussing the Schrodinger wave equation, let’s take a brief (and by no means com-
prehensive) look at the historical timeline of how quantum mechanics came about. The
actual history is of course never as clean as an outline like this suggests, but we can at least
get a general idea of how things proceeded.
1900 (Planck): Max Planck proposed that light with frequency ν is emitted in quantized
lumps of energy that come in integral multiples of the quantity,

E = hν = h̄ω (1)
where h ≈ 6.63 · 10−34 J · s is Planck’s constant, and h̄ ≡ h/2π = 1.06 · 10−34 J · s.
The frequency ν of light is generally very large (on the order of 1015 s−1 for the visible
spectrum), but the smallness of h wins out, so the hν unit of energy is very small (at least on
an everyday energy scale). The energy is therefore essentially continuous for most purposes.
However, a puzzle in late 19th-century physics was the blackbody radiation problem. In a
nutshell, the issue was that the classical (continuous) theory of light predicted that certain
objects would radiate an infinite amount of energy, which of course can’t be correct. Planck’s
hypothesis of quantized radiation not only got rid of the problem of the infinity, but also
correctly predicted the shape of the power curve as a function of temperature.
The results that we derived for electromagnetic waves in Chapter 8 are still true. In
particular, the energy flux is given by the Poynting vector in Eq. 8.47. And E = pc for
a light wave. Planck's hypothesis simply adds the information of how many lumps of energy a
wave contains. Although strictly speaking, Planck initially thought that the quantization
was only a function of the emission process and not inherent to the light itself.
1905 (Einstein): Albert Einstein stated that the quantization was in fact inherent to the
light, and that the lumps can be interpreted as particles, which we now call “photons.” This
proposal was a result of his work on the photoelectric effect, which deals with the absorption
of light and the emission of electrons from a material.
We know from Chapter 8 that E = pc for a light wave. (This relation also follows from
Einstein’s 1905 work on relativity, where he showed that E = pc for any massless particle,
an example of which is a photon.) And we also know that ω = ck for a light wave. So
Planck’s E = h̄ω relation becomes
E = h̄ω =⇒ pc = h̄(ck) =⇒ p = h̄k (2)

This result relates the momentum of a photon to the wavenumber of the wave it is associated
with.
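To get a feel for the sizes involved, we can evaluate these relations numerically for a photon of visible light. This is just a plain numerical check; the wavelength of 500 nm (green light) is an illustrative choice, not a value from the text.

```python
import math

h = 6.63e-34           # Planck's constant, J*s
hbar = h / (2 * math.pi)
c = 3.0e8              # speed of light, m/s

lam = 500e-9           # wavelength of green light, m (illustrative value)
k = 2 * math.pi / lam  # wavenumber
omega = c * k          # omega = ck for a light wave

E = hbar * omega       # Planck: E = hbar*omega
p = hbar * k           # Eq. (2): p = hbar*k

# Consistency check: E = pc for a light wave
assert abs(E - p * c) / E < 1e-12

print(f"E = {E:.3e} J  (about {E / 1.6e-19:.2f} eV)")
print(f"p = {p:.3e} kg*m/s")
```

The energy comes out to a few times 10^-19 J, i.e. a couple of eV, which is indeed tiny on an everyday energy scale, as the text notes.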

1913 (Bohr): Niels Bohr stated that electrons in atoms have wavelike properties. This
correctly explained a few things about hydrogen, in particular the quantized energy levels
that were known.
1924 (de Broglie): Louis de Broglie proposed that all particles are associated with waves,
where the frequency and wavenumber of the wave are given by the same relations we found
above for photons, namely E = h̄ω and p = h̄k. The larger E and p are, the larger ω
and k are. Even for small E and p that are typical of a photon, ω and k are very large
because h̄ is so small. So any everyday-sized particle with large (in comparison) energy and
momentum values will have extremely large ω and k values. This (among other reasons)
makes it virtually impossible to observe the wave nature of macroscopic amounts of matter.
This proposal (that E = h̄ω and p = h̄k also hold for massive particles) was a big step,
because many things that are true for photons are not true for massive (and nonrelativistic)
particles. For example, E = pc (and hence ω = ck) holds only for massless particles (we’ll
see below how ω and k are related for massive particles). But the proposal was a reasonable
one to try. And it turned out to be correct, in view of the fact that the resulting predictions
agree with experiments.
The fact that any particle has a wave associated with it leads to the so-called wave-
particle duality. Are things particles, or waves, or both? Well, it depends what you’re doing
with them. Sometimes things behave like waves, sometimes they behave like particles. A
vaguely true statement is that things behave like waves until a measurement takes place,
at which point they behave like particles. However, approximately one million things are
left unaddressed in that sentence. The wave-particle duality is one of the things that few
people, if any, understand about quantum mechanics.
1925 (Heisenberg): Werner Heisenberg formulated a version of quantum mechanics that
made use of matrix mechanics. We won’t deal with this matrix formulation (it’s rather
difficult), but instead with the following wave formulation due to Schrodinger (this is a
waves book, after all).
1926 (Schrodinger): Erwin Schrodinger formulated a version of quantum mechanics that
was based on waves. He wrote down a wave equation (the so-called Schrodinger equation)
that governs how the waves evolve in space and time. We’ll deal with this equation in depth
below. Even though the equation is correct, the correct interpretation of what the wave
actually meant was still missing. Initially Schrodinger thought (incorrectly) that the wave
represented the charge density.
1926 (Born): Max Born correctly interpreted Schrodinger’s wave as a probability am-
plitude. By “amplitude” we mean that the wave must be squared to obtain the desired
probability. More precisely, since the wave (as we’ll see) is in general complex, we need to
square its absolute value. This yields the probability of finding a particle at a given location
(assuming that the wave is written as a function of x).
This probability isn’t a consequence of ignorance, as is the case with virtually every
other example of probability you’re familiar with. For example, in a coin toss, if you
know everything about the initial motion of the coin (velocity, angular velocity), along
with all external influences (air currents, nature of the floor it lands on, etc.), then you
can predict which side will land facing up. Quantum mechanical probabilities aren’t like
this. They aren’t a consequence of missing information. The probabilities are truly random,
and there is no further information (so-called “hidden variables”) that will make things un-
random. The topic of hidden variables includes various theorems (such as Bell’s theorem)
and experimental results that you will learn about in a quantum mechanics course.

1926 (Dirac): Paul Dirac showed that Heisenberg’s and Schrodinger’s versions of quantum
mechanics were equivalent, in that they could both be derived from a more general version
of quantum mechanics.

10.2 The Schrodinger equation


In this section we’ll give a “derivation” of the Schrodinger equation. Our starting point will
be the classical nonrelativistic expression for the energy of a particle, which is the sum of
the kinetic and potential energies. We’ll assume as usual that the potential is a function of
only x. We have
E = K + V = (1/2)mv² + V(x) = p²/(2m) + V(x). (3)
We’ll now invoke de Broglie’s claim that all particles can be represented as waves with
frequency ω and wavenumber k, and that E = h̄ω and p = h̄k. This turns the expression
for the energy into
h̄ω = h̄²k²/(2m) + V(x). (4)
A wave with frequency ω and wavenumber k can be written as usual as ψ(x, t) = Ae^{i(kx−ωt)}
(the convention is to put a minus sign in front of the ωt). In 3-D we would have ψ(r, t) =
Ae^{i(k·r−ωt)}, but let's just deal with 1-D. We now note that
∂ψ/∂t = −iωψ =⇒ ωψ = i ∂ψ/∂t, and
∂²ψ/∂x² = −k²ψ =⇒ k²ψ = −∂²ψ/∂x². (5)
If we multiply the energy equation in Eq. (4) by ψ, and then plug in these relations, we
obtain
h̄(ωψ) = (h̄²/2m)(k²ψ) + V(x)ψ =⇒ ih̄ ∂ψ/∂t = −(h̄²/2m) ∂²ψ/∂x² + V ψ. (6)
This is the time-dependent Schrodinger equation. If we put the x and t arguments back in,
the equation takes the form,

ih̄ ∂ψ(x, t)/∂t = −(h̄²/2m) ∂²ψ(x, t)/∂x² + V(x)ψ(x, t). (7)
In 3-D, the x dependence turns into dependence on all three coordinates (x, y, z), and the
∂ 2 ψ/∂x2 term becomes ∇2 ψ (the sum of the second derivatives). Remember that Born’s
(correct) interpretation of ψ(x) is that |ψ(x)|2 gives the probability of finding the particle
at position x.
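As a sanity check, we can verify symbolically that the traveling wave ψ(x, t) = Ae^{i(kx−ωt)} satisfies Eq. (7) when V(x) is a constant V0 and ω obeys the dispersion relation in Eq. (4). This sketch uses the sympy library; the symbol names are our own choices.

```python
import sympy as sp

x, t, k, m, hbar, A, V0 = sp.symbols('x t k m hbar A V0', positive=True)

# Dispersion relation from Eq. (4): hbar*omega = hbar^2 k^2 / (2m) + V0
omega = hbar * k**2 / (2 * m) + V0 / hbar

# Trial wavefunction psi(x, t) = A e^{i(kx - omega t)}
psi = A * sp.exp(sp.I * (k * x - omega * t))

# Left and right sides of the time-dependent Schrodinger equation, Eq. (7)
lhs = sp.I * hbar * sp.diff(psi, t)
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2) + V0 * psi

# The two sides agree identically
assert sp.simplify(lhs - rhs) == 0
```

The check succeeds only because ω was chosen to satisfy Eq. (4); a plane wave with an unrelated ω would not solve the equation.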
Having successfully produced the time-dependent Schrodinger equation, we should ask:
Did the above reasoning actually prove that the Schrodinger equation is valid? No, it didn’t,
for three reasons.

1. The reasoning is based on de Broglie’s assumption that there is a wave associated with
every particle, and also on the assumption that the ω and k of the wave are related to
E and p via Planck’s constant in Eqs. (1) and (2). We had to accept these assumptions
on faith.

2. Said in a different way, it is impossible to actually prove anything in physics. All we


can do is make an educated guess at a theory, and then do experiments to try to show

that the theory is consistent with the real world. The more experiments we do, the
more comfortable we are that the theory is a good one. But we can never be absolutely
sure that we have the correct theory. In fact, odds are that it’s simply the limiting
case of a more correct theory.

3. The Schrodinger equation actually isn’t valid, so there’s certainly no way that we
proved it. Consistent with the above point concerning limiting cases, the quantum
theory based on Schrodinger’s equation is just a limiting theory of a more correct one,
which happens to be quantum field theory (which unifies quantum mechanics with
special relativity). This in turn must be a limiting theory of yet another more correct
one, because it doesn’t incorporate gravity. Eventually there will be one theory that
covers everything (although this point can be debated), but we’re definitely not there
yet.

Due to the “i” that appears in Eq. (6), ψ(x) is complex. And in contrast with waves in
classical mechanics, the entire complex function now matters in quantum mechanics. We
won’t be taking the real part in the end. Up to this point in the book, the use of complex
functions was simply a matter of convenience, because it is easier to work with exponentials
than trig functions. Only the real part mattered (or imaginary part – take your pick, but not
both). But in quantum mechanics the whole complex wavefunction is relevant. However,
the theory is structured in such a way that anything you might want to measure (position,
momentum, energy, etc.) will always turn out to be a real quantity. This is a necessary
feature of any valid theory, of course, because you’re not going to go out and measure a
distance of 2 + 5i meters, or pay an electrical bill of 17 + 6i kilowatt hours.
As mentioned in the introduction to this chapter, there is an endless number of difficult
questions about quantum mechanics that can be discussed. But in this short introduction
to the subject, let’s just accept Schrodinger’s equation as valid, and see where it takes us.

Solving the equation


If we put aside the profound implications of the Schrodinger equation and regard it as
simply a mathematical equation, then it’s just another wave equation. We already know
the solution, of course, because we used the function ψ(x, t) = Ae^{i(kx−ωt)} to produce Eqs.
(5) and (6) in the first place. But let’s pretend that we don’t know this, and let’s solve the
Schrodinger equation as if we were given it out of the blue.
As always, we’ll guess an exponential solution. If we first look at exponential behavior
in the time coordinate, our guess is ψ(x, t) = e^{−iωt} f(x) (the minus sign here is convention).
Plugging this into Eq. (7) and canceling the e^{−iωt} yields

h̄ω f(x) = −(h̄²/2m) ∂²f(x)/∂x² + V(x)f(x). (8)
But from Eq. (1), we have h̄ω = E. And we’ll now replace f (x) with ψ(x). This might
cause a little confusion, since we’ve already used ψ to denote the entire wavefunction ψ(x, t).
However, it is general convention to also use the letter ψ to denote the spatial part. So we
now have
E ψ(x) = −(h̄²/2m) ∂²ψ(x)/∂x² + V(x)ψ(x). (9)
This is called the time-independent Schrodinger equation. This equation is more restrictive
than the original time-dependent Schrodinger equation, because it assumes that the parti-
cle/wave has a definite energy (that is, a definite ω). In general, a particle can be in a state
that is the superposition of states with various definite energies, just like the motion of a

string can be the superposition of various normal modes with definite ω’s. The same rea-
soning applies here as with all the other waves we’ve discussed: From Fourier analysis and
from the linearity of the Schrodinger equation, we can build up any general wavefunction
from ones with specific energies. Because of this, it suffices to consider the time-independent
Schrodinger equation. The solutions for that equation form a basis for all possible solutions.1
Continuing with our standard strategy of guessing exponentials, we'll let ψ(x) = Ae^{ikx}.
Plugging this into Eq. (9) and canceling the e^{ikx} gives (going back to the h̄ω instead of E)

h̄ω = −(h̄²/2m)(−k²) + V(x) =⇒ h̄ω = h̄²k²/(2m) + V(x). (10)
This is simply Eq. (4), so we’ve ended up back where we started, as expected. However, our
goal here was to show how the Schrodinger equation can be solved from scratch, without
knowing where it came from.
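The plug-in step can itself be checked symbolically: substituting ψ(x) = Ae^{ikx} into the right side of Eq. (9), with V(x) a constant V0, and dividing out ψ recovers the dispersion relation of Eq. (10). A small sympy sketch (symbol names are ours):

```python
import sympy as sp

x, k, m, hbar, A, V0 = sp.symbols('x k m hbar A V0', positive=True)

psi = A * sp.exp(sp.I * k * x)

# Right side of the time-independent Schrodinger equation, Eq. (9)
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2) + V0 * psi

# Dividing out psi should leave hbar^2 k^2 / (2m) + V0, i.e. Eq. (10)
dispersion = sp.simplify(rhs / psi)

assert sp.simplify(dispersion - (hbar**2 * k**2 / (2 * m) + V0)) == 0
```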
Eq. (10) is (sort of) a dispersion relation. If V(x) is a constant C in a given region, then
the relation between ω and k (namely ω = h̄k²/(2m) + C/h̄) is independent of x, so we have
a nice sinusoidal wavefunction (or exponential, if k is imaginary). However, if V(x) isn't
constant, then the wavefunction isn't characterized by a unique wavenumber. So a function
of the form e^{ikx} doesn't work as a solution for ψ(x). (A Fourier superposition can certainly
work, since any function can be expressed that way, but a single e^{ikx} by itself doesn't work.)
This is similar to the case where the density of a string isn’t constant. We don’t obtain
sinusoidal waves there either.

10.3 Examples
In order to solve for the wavefunction ψ(x) in the time-independent Schrodinger equation
in Eq. (9), we need to be given the potential energy V (x). So let’s now do some examples
with particular functions V (x).

10.3.1 Constant potential


The simplest example is where we have a constant potential, V (x) = V0 in a given region.
Plugging ψ(x) = Ae^{ikx} into Eq. (9) then gives
E = h̄²k²/(2m) + V0 =⇒ k = √(2m(E − V0))/h̄. (11)

(We’ve taken the positive square root here. We’ll throw in the minus sign by hand to obtain
the other solution, in the discussion below.) k is a constant, and its real/imaginary nature
depends on the relation between E and V0 . If E > V0 , then k is real, so we have oscillatory
solutions,
ψ(x) = Ae^{ikx} + Be^{−ikx}. (12)
But if E < V0, then k is imaginary, so we have exponentially growing or decaying solutions.
If we let κ ≡ |k| = √(2m(V0 − E))/h̄, then ψ(x) takes the form,

ψ(x) = Ae^{κx} + Be^{−κx}. (13)

We see that it is possible for ψ(x) to be nonzero in a region where E < V0 . Since ψ(x) is
the probability amplitude, this implies that it is possible to have a particle with E < V0 .
1 The “time-dependent” and “time-independent” qualifiers are a bit of a pain to keep saying, so we usually
just say “the Schrodinger equation,” and it’s generally clear from the context which one we mean.

This isn’t possible classically, and it is one of the many ways in which quantum mechanics
diverges from classical mechanics. We’ll talk more about this when we discuss the finite
square well in Section 10.3.3.
If E = V0 , then this is the one case where the strategy of guessing an exponential function
doesn’t work. But if we go back to Eq. (9) we see that E = V0 implies ∂ 2 ψ/∂x2 = 0, which
in turn implies that ψ is a linear function,
ψ(x) = Ax + B. (14)
In all of these cases, the full wavefunction (including the time dependence) for a particle
with a specific value of E is given by

ψ(x, t) = e^{−iωt} ψ(x) = e^{−iEt/h̄} ψ(x). (15)


Again, we’re using the letter ψ to stand for two different functions here, but the meaning of
each is clear from the number of arguments. Any general wavefunction is built up from a
superposition of the states in Eq. (15) with different values of E, just as the general motion
of a string is built up from various normal modes with different frequencies ω. The fact that a
particle can be in a superposition of states with different energies is another instance where
quantum mechanics diverges from classical mechanics. (Of course, it’s easy for classical
waves to be in a superposition of normal modes with different energies, by Fourier analysis.)
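To see this distinction concretely, we can superpose two definite-energy states of the form in Eq. (15) and watch the probability density change in time. The sketch below uses numpy; the sinusoidal spatial parts and all specific numbers are illustrative choices, in units where h̄ = m = L = 1.

```python
import numpy as np

hbar = m = L = 1.0                # illustrative units
x = np.linspace(0.0, L, 400)

def state(n):
    """Definite-energy state: spatial part sin(n pi x/L), with E = n^2 pi^2 hbar^2/(2 m L^2)."""
    E = (n * np.pi / L) ** 2 * hbar**2 / (2 * m)
    return lambda t: np.sin(n * np.pi * x / L) * np.exp(-1j * E * t / hbar)

psi1, psi2 = state(1), state(2)

def prob(t):
    """|psi|^2 for an equal superposition of the two states (unnormalized)."""
    return np.abs(psi1(t) + psi2(t)) ** 2

# A single definite-energy state has a time-independent |psi|^2 ...
assert np.allclose(np.abs(psi1(0.0))**2, np.abs(psi1(0.7))**2)
# ... but the superposition's probability density genuinely changes in time.
assert not np.allclose(prob(0.0), prob(0.7))
```

The cross term in |ψ1 + ψ2|² oscillates at the difference frequency (E2 − E1)/h̄, which is why the superposition's probability density sloshes back and forth while each definite-energy state alone stays put.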
The above E > V0 and E < V0 cases correspond, respectively, to being above or below
the cutoff frequency in the string/spring system we discussed in Section 6.2.2. We have an
oscillatory solution if E (or ω) is above a particular value, and an exponential solution if
E (or ω) is below a particular value. The two setups (quantum mechanical with constant
V0 , and string/spring with springs present everywhere) are exactly analogous to each other.
The spatial parts of the solutions are exactly the same (well, before taking the real part
in the string/spring case). The frequencies, however, are different, because the dispersion
relations are different (h̄ω = h̄2 k 2 /2m + V0 and ω 2 = c2 k 2 + ωs2 , respectively). But this
affects only the rate of oscillation, and not the shape of the function.
The above results hold for any particular region where V (x) is constant. What if the
region extends from, say, x = 0 to x = +∞? If E > V0 , the oscillatory solutions are fine,
even though they’re not normalizable. That is, the integral of |ψ|2 is infinite (at least for any
nonzero coefficient in ψ; if the coefficient were zero, then we wouldn’t have a particle). So
we can’t make the total probability equal to 1. However, this is fine. The interpretation is
that we simply have a stream of particles extending to infinity. We shouldn’t be too worried
about this divergence, because when dealing with traveling waves on a string (for example,
when discussing reflection and transmission coefficients) we assumed that the sinusoidal
waves extended to ±∞, which of course is impossible in reality.
If E < V0 , then the fact that x = +∞ is in the given region implies that the coefficient
A in Eq. (13) must be zero, because otherwise ψ would diverge as x → ∞. So we are left
with only the Be^{−κx} term. (It's one thing to have the integral of |ψ|² diverge, as it did in
the previous paragraph. It’s another thing to have the integral diverge and be dominated
by values at large x. There is then zero probability of finding the particle at a finite value
of x.) If the region where E < V0 is actually the entire x axis, from −∞ to ∞, then the B
coefficient in Eq. (13) must also be zero. So ψ(x) = 0 for all x. In other words, there is no
allowed wavefunction. It is impossible to have a particle with E < V0 everywhere.
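As a rough numerical illustration of the exponential decay (an illustrative calculation with assumed values, not from the text): for an electron whose energy lies 1 eV below a constant potential, the decay length 1/κ from Eq. (13) comes out to a couple of angstroms.

```python
import math

hbar = 1.06e-34         # J*s
m_e = 9.11e-31          # electron mass, kg
eV = 1.6e-19            # J

V0_minus_E = 1.0 * eV   # electron 1 eV below the potential (illustrative)

kappa = math.sqrt(2 * m_e * V0_minus_E) / hbar   # kappa = sqrt(2m(V0 - E))/hbar
decay_length = 1 / kappa                         # distance over which psi falls by 1/e

print(f"kappa = {kappa:.3e} 1/m")
print(f"decay length 1/kappa = {decay_length * 1e9:.3f} nm")
```

The decay length is a fraction of a nanometer, which is why this classically forbidden penetration is invisible at everyday scales.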

10.3.2 Infinite square well


Consider the potential energy,
V(x) = { 0 (0 ≤ x ≤ L);  ∞ (x < 0 or x > L). (16)

This is called an “infinite square well,” and it is shown in Fig. 1. The “square” part of the
name comes from the right-angled corners and not from the actual shape, since it’s a very
(infinitely) tall rectangle. This setup is also called a “particle in a box” (a 1-D box), because
the particle can freely move around inside a given region, but has zero probability of leaving
the region, just like a box. So ψ(x) = 0 outside the box.

[Figure 1: the infinite square well, with V = ∞ outside the box and V = 0 inside.]

The particle does indeed have zero chance of being found outside the region 0 ≤ x ≤ L.
Intuitively, this is reasonable, because the particle would have to climb the infinitely high
potential cliff at the side of the box. Mathematically, this can be derived rigorously, and
we’ll do this below when we discuss the finite square well.
We’ll assume E > 0, because the E < 0 case makes E < V0 everywhere, which isn’t
possible, as we mentioned above. Inside the well, we have V (x) = 0, so this is a special
case of the constant potential discussed above. We therefore have the oscillatory solution
in Eq. (12) (since E > 0), which we will find more convenient here to write in terms of trig
functions,

ψ(x) = A cos kx + B sin kx, where E = h̄²k²/(2m) =⇒ k = √(2mE)/h̄. (17)
The coefficients A and B may be complex.
We now claim that ψ must be continuous at the boundaries at x = 0 and x = L. When
dealing with, say, waves on a string, it was obvious that the function ψ(x) representing
the transverse position must be continuous, because otherwise the string would have a
break in it. But it isn’t so obvious with the quantum-mechanical ψ. There doesn’t seem
to be anything horribly wrong with having a discontinuous probability distribution, since
probability isn’t an actual object. However, it is indeed true that the probability distribution
is continuous in this case (and in any other case that isn’t pathological). For now, let’s just
assume that this is true, but we’ll justify it below when we discuss the finite square well.
Since ψ(x) = 0 outside the box, continuity of ψ(x) at x = 0 quickly gives A cos(0) +
B sin(0) = 0 =⇒ A = 0. Continuity at x = L then gives B sin kL = 0 =⇒ kL = nπ, where
n is an integer. So k = nπ/L, and the solution for ψ(x) is ψ(x) = B sin(nπx/L). The full
solution including the time dependence is given by Eq. (15) as

ψ(x, t) = Be^{−iEt/h̄} sin(nπx/L), where E = h̄²k²/(2m) = n²π²h̄²/(2mL²). (18)

We see that the energies are quantized (that is, they can take on only discrete values) and
indexed by the integer n. The string setup that is analogous to the infinite square well is
a string with fixed ends, which we discussed in Chapter 4 (see Section 4.5.2). In both of
these setups, the boundary conditions yield the same result that an integral number of half
wavelengths fit into the region. So the k values take the same form, k = nπ/L.
The dispersion relation, however, is different. It was simply ω = ck for waves on a
string, whereas it is h̄ω = h̄²k²/(2m) for the V(x) = 0 region of the infinite well. But as
in the above case of the constant potential, this difference affects only the rate at which
the waves oscillate in time. It doesn’t affect the spatial shape, which is determined by the
wavenumber k. The wavefunctions for the lowest four energies are shown in Fig. 2 (the
vertical separation between the curves is meaningless). These look exactly like the normal
modes in the “both ends fixed” case in Fig. 24 in Chapter 4.

[Figure 2: the wavefunctions for n = 1, 2, 3, 4, from x = 0 to x = L.]
[Figure 3: the energies En in units of π²h̄²/(2mL²), namely 1, 4, 9, 16 for n = 1, 2, 3, 4.]

The corresponding energies are shown in Fig. 3. Since E = h̄ω = (h̄²/2m)k² ∝ n², the
gap between the energies grows as n increases. Note that the energies in the case of a string
are also proportional to n², because although ω = ck ∝ n, the energy is proportional to ω²
(because the time derivative in Eq. (4.50) brings down a factor of ω). So Figs. 2 and 3 both
apply to both systems. The difference between the systems is that a string has ω ∝ √E,
whereas the quantum mechanical system has ω ∝ E.
There is no n = 0 state, because from Eq. (18) this would make ψ be identically zero.
That wouldn’t be much of a state, because the probability would be zero everywhere. The
lack of an n = 0 state is consistent with the uncertainty principle (see Section 10.4 below),
because such a state would have ∆x∆p = 0 (since ∆x < L, and ∆p = 0 because n = 0 =⇒
k = 0 =⇒ p = h̄k = 0), which would violate the principle.
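For a concrete sense of scale (an illustrative calculation with assumed values): the energies in Eq. (18) for an electron in a well of width L = 1 nm.

```python
import math

hbar = 1.06e-34   # J*s
m_e = 9.11e-31    # electron mass, kg
L = 1e-9          # well width, m (illustrative)
eV = 1.6e-19      # J

def E_n(n):
    """Quantized energy from Eq. (18): E = n^2 pi^2 hbar^2 / (2 m L^2)."""
    return n**2 * math.pi**2 * hbar**2 / (2 * m_e * L**2)

for n in range(1, 5):
    print(f"n = {n}:  E = {E_n(n) / eV:.3f} eV")

# Energies grow like n^2, so the gaps between successive levels increase with n
assert abs(E_n(2) / E_n(1) - 4) < 1e-12
```

The ground-state energy comes out to a few tenths of an eV, so for a nanometer-scale box the quantization is anything but negligible.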

10.3.3 Finite square well


Things get more complicated if we have a finite potential well. For future convenience, we’ll
let x = 0 be located at the center of the well. If we label the ends as ±a, then V(x) is given
by

V(x) = { 0 (|x| ≤ a);  V0 (|x| > a). (19)

[Figure 4: the finite square well, with V = 0 for |x| ≤ a and V = V0 for |x| > a.]

This is shown in Fig. 4. Given V0, there are two basic possibilities for the energy E:

• E > V0 (unbound state): From Eq. (11), the wavenumber k takes the general form
√(2m(E − V(x)))/h̄. This equals √(2mE)/h̄ inside the well and √(2m(E − V0))/h̄
outside. k is therefore real everywhere, so ψ(x) is an oscillatory function both inside
and outside the well. k is larger inside the well, so the wavelength is shorter there. A
possible wavefunction might look something like the one in Fig. 5. It is customary to
draw the ψ(x) function on top of the E line, although this technically has no meaning
because ψ and E have different units.

[Figure 5: an unbound-state wavefunction, drawn on top of the E line.]

The wavefunction extends infinitely in both directions, so the particle can be anywhere.
Hence the name “unbound state.” We’ve drawn an even-function standing wave in
Fig. 5, although in general we’re concerned with traveling waves for unbound states.
These are obtained from superpositions of the standing waves, with a phase thrown
into the time dependence. For traveling waves, the relative sizes of ψ(x) in the different
regions depend on the specifics of how the problem is set up.

• 0 < E < V0 (bound state): The wavenumber k still equals √(2mE)/h̄ inside the well and
√(2m(E − V0))/h̄ outside, but now the latter value is imaginary. So ψ is an oscillatory
function inside the well, but an exponential function outside. Furthermore, it must
be an exponentially decaying function outside, because otherwise it would diverge at
x = ±∞. Since the particle has an exponentially small probability of being found
far away from the well, we call this a “bound state.” We’ll talk more below about
the strange fact that the probability is nonzero in the region outside the well, where
E < V(x).
There is also the third case where E = V0, but this can be obtained as the limit of the
other two cases (more easily as the limit of the bound-state case). The fourth case,
E < 0, isn’t allowed, as we discussed at the end of Section 10.3.1.

In both of these cases, the complete solution for ψ(x) involves solving the boundary
conditions at x = ±a. The procedure is the same for both cases, but let’s concentrate on
the bound-state case here. The boundary conditions are given by the following theorem.

Theorem 10.1 If V(x) is everywhere finite (which is the case for the finite square well),
then both ψ(x) and ψ′(x) are everywhere continuous.

Proof: If we solve for ψ′′ in Eq. (9), we see that ψ′′ is always finite (because V(x) is always
finite). This implies two things. First, it implies that ψ′ must be continuous, because if ψ′
were discontinuous at a given point, then its derivative ψ′′ would be infinite there (because
ψ′ would make a finite jump over zero distance). So half of the theorem is proved.
Second, the finiteness of ψ′′ implies that ψ′ must also be finite everywhere, because if
ψ′ were infinite at a given point (excluding x = ±∞), then its derivative ψ′′ would also be
infinite there (because ψ′ would make an infinite jump over a finite distance).
Now, since ψ′ is finite everywhere, we can repeat the same reasoning with ψ′ and ψ that
we used with ψ′′ and ψ′ in the first paragraph above: Since ψ′ is always finite, we know
that ψ must be continuous. So the other half of the theorem is also proved.
Having proved this theorem, let’s outline the general strategy for solving for ψ in the
E < V0 case. The actual task of going through the calculation is left for Problem 10.2. The
calculation is made much easier with the help of Problem 10.1 which states that only even
and odd functions need to be considered.
If we let k ≡ iκ outside the well, then we have κ = √(2m(V0 − E))/h̄, which is real and
positive since E < V0 . The general forms of the wavefunctions in the left, middle, and right
regions are

x < −a :      ψ1(x) = A1 e^{κx} + B1 e^{−κx},
−a < x < a :  ψ2(x) = A2 e^{ikx} + B2 e^{−ikx},
x > a :       ψ3(x) = A3 e^{κx} + B3 e^{−κx},  (20)

where
k = √(2mE)/h̄,  and  κ = √(2m(V0 − E))/h̄. (21)
We’ve given only the x dependence in these wavefunctions. To obtain the full wavefunction
ψ(x, t), all of these waves are multiplied by the same function of t, namely e^{−iωt} = e^{−iEt/h̄}.
We now need to solve for various quantities. How many unknowns do we have, and how
many equations/facts do we have? We have seven unknowns: A1 , A2 , A3 , B1 , B2 , B3 , and
E (which appears in k and κ). And we have seven facts:

• Four boundary conditions at x = ±a, namely continuity of ψ and ψ 0 at both points.

• Two boundary conditions at x = ±∞, namely ψ = 0 in both cases.


• One normalization condition, namely ∫_{−∞}^{∞} |ψ|² dx = 1.
As we mentioned at the end of Section 10.3.1, the boundary conditions at ±∞ quickly
tell us that B1 and A3 equal zero. Also, in most cases we’re not concerned with the overall
normalization constant (the usual goal is to find E), so we can ignore the normalization
condition and just find all the other constants in terms of, say, A1. So we're down to
four equations (the four boundary conditions at x = ±a), and four unknowns (A2 , B2 , B3 ,
and E). Furthermore, the even/odd trick discussed in Problem 10.1 cuts things down by a
factor of 2, so we’re down to two equations and two unknowns (the energy E, along with
one of the coefficients), which is quite manageable. The details are left for Problem 10.2,
but let’s get a rough idea here of what the wavefunctions look like.
It turns out that the energies and states are again discrete and can be labeled by an
integer n, just as in the infinite-well case. However, the energies don’t take the simple form
in Eq. (18), although they approximately do if the well is deep. Fig. 6 shows the five states for
a well of a particular depth V0. We’ve drawn each wave relative to the line that represents
the energy En. Both ψ and ψ′ are continuous at x = ±a, and ψ goes to 0 at x = ±∞.
We’ve chosen the various parameters (one of which is the depth) so that there are exactly
five states (see Problem 10.2 for the details on this). The deeper the well, the more states
there are.

[Figure 6: the five bound states, with energies E1 through E5, for a well of depth V0.]

Consistent with Eq. (20), ψ is indeed oscillatory inside the well (that is, the curvature
is toward the x axis), and exponentially decaying outside the well (the curvature is away
from the x axis). As E increases, Eq. (21) tells us that k increases (so the wiggles inside
the well have shorter wavelengths), and also that κ decreases (so the exponential decay is
slower). These facts are evident in Fig. 6. The exact details of the waves depend on various
parameters, but the number of bumps equals n.

Explanation of the quantized energy levels


The most important thing to note about these states is that they are discrete. In the infinite-
well case, this discreteness was clear because an integral number of half wavelengths needed
to fit into the well (because ψ = 0 at the boundaries). The discreteness isn’t so obvious in
the finite-well case (because ψ 6= 0 at the boundaries), but it is still reasonably easy to see.
There are two ways to understand it. First, the not-so-enlightening way is to note that we
initially had 7 equations and 7 unknowns. So all the unknowns, including E, are determined.
There may be different discrete solutions, but at least we know that we can’t just pick a
random value for E and expect it to work.
Second, the more physical and enlightening way is the following. If we choose a random
value of E, it probably won’t yield an allowable function ψ, and here’s why. Pick an arbitrary
value of the coefficient A1 in Eq. (20), and set B1 = 0. So we have an exponentially decaying
function in the left region, which behaves properly at x = −∞. Since E determines κ, we
know everything about the A1 e^{κx} function.
Now pick a point x0 in the left region, and imagine marching rightward on the x axis.
We claim that all of the subsequent values of ψ are completely determined, all the way up
to x = +∞. This is clear in the left region, because we know what the function is. But it
is also true in the middle and right regions, because we know the values of ψ, ψ 0 , and ψ 00
at any given point, so we can recursively find these three values at the “next” point as we
march along. More precisely: (a) from the definition of the derivative, the values of ψ and
ψ 0 at a given point yield the value of ψ at the next point, (b) likewise, the values of ψ 0 and
ψ 00 at a given point yield the value of ψ 0 at the next point, and (c) finally, the value of ψ at
a given point yields the value of ψ 00 at that point, via the Schrodinger equation, Eq. (9). So
we can recursively march all the way up to x = +∞, and the entire function is determined.
There is no freedom whatsoever in what the function turns out to be.
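The marching procedure just described is easy to implement numerically. The sketch below integrates Eq. (9) rightward with simple finite differences, in units where h̄ = m = a = 1 and with an assumed well depth V0 = 10 (all values are illustrative), starting on the decaying exponential deep in the left region. For a generically chosen E between 0 and V0, the tail blows up at large x, exactly as described; only special values of E avoid the divergence.

```python
import numpy as np

hbar = m = a = 1.0
V0 = 10.0                          # well depth (illustrative)

def V(x):
    """Finite square well of Eq. (19)."""
    return 0.0 if abs(x) <= a else V0

def march(E, x_start=-5.0, x_end=5.0, dx=1e-3):
    """March psi rightward using psi'' = 2m(V - E) psi / hbar^2, from Eq. (9)."""
    kappa = np.sqrt(2 * m * (V0 - E)) / hbar
    x = x_start
    psi, dpsi = 1.0, kappa         # start on the decaying exponential A1 e^{kappa x}
    while x < x_end:
        ddpsi = 2 * m * (V(x) - E) / hbar**2 * psi   # (c): Eq. (9) gives psi''
        psi += dpsi * dx                             # (a): psi at the "next" point
        dpsi += ddpsi * dx                           # (b): psi' at the "next" point
        x += dx
    return psi                     # value of psi at the far right

# A generic energy leaves a hugely diverging tail at large x, so it is not allowed:
assert abs(march(2.0)) > 1e3
```

Hunting for the energies at which the tail flips sign (and, right at the crossover, decays instead of diverging) is precisely the energy-tuning game described in the next paragraph, and is one standard way to find the bound-state energies numerically.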
A particular choice of E and A1 might yield the first (top) function shown in Fig. 7. It has
the correct exponential and oscillatory nature in the left and middle regions, respectively.
But in the right region it apparently has an exponentially growing piece. Because it diverges
at x = +∞, this function isn’t an allowable one. So it is impossible for the energy to take
on the value that we chose.
[Figure 7: successive attempts at a valid wavefunction, for increasing values of E.]

We can try to remedy the divergence at x = +∞ by increasing the value of E. This will
make ψ oscillate quicker inside the well, so that it encounters the x = a boundary with a
negative slope. We then might end up with the second function shown in Fig. 7. We still
have the divergence at x = +∞, so again we have an invalid ψ. If we increase E a little
more, by precisely the right amount, then we’ll end up with the third function shown in Fig.
