Ch2 1
Baseball! This job ain’t so bad after all! He gave the ticket to an usher because he didn’t
recognize the seat. The usher started down the stairs so he restlessly followed as they kept stepping
down, well past any seat he would buy for himself. The usher seated him in a two person box seat
two rows back and just to the right of home plate. “Geez, I’m callin’ balls and strikes!” He could
hear the conversation as the managers waited for the umpire to exchange lineup cards. . . wives and
kids, the squeeze bunt in yesterday’s game, the merits of sunflower seeds versus chewing gum, and
the brunette in back of the third base dugout. His reverie faded as a man in a white suit and a
white Panama hat took the other seat of the box, germinating the thought “Whoa. . . I’m watching
a game with Mr. Clean. . . ” The stranger immediately offered, with a genial smile brighter than
his white suit, “Think Schrodinger will play today?”
1. What are the postulates of quantum mechanics? Compare and contrast each of the postulates
to analogous statements from classical mechanics where a comparison is possible.
The postulates are the dominant reason for most of the mathematics of chapter 1 and are the
basis either directly or indirectly for everything that follows.
4. If a system is in state | ψ > , a measurement of the observable quantity represented by
the operator A that yields the eigenvalue α does so with the probability

P (α) ∝ | < α | ψ > |² .

6. The time evolution of the state vector is governed by the Schrodinger equation,

H | ψ > = ih̄ (d/dt) | ψ > ,

where H is the quantum mechanical Hamiltonian operator.
Postulate 1: The state vector | ψ > replaces the positions and momenta (or velocities) of
classical mechanics.
Postulate 2: That observable quantities are described by operators is fundamentally different than
the classical description that every dynamical variable is described by a function of position and
momentum, for instance.
Postulate 3: Any real value is a possible result of a classical measurement. The only possible result
of a quantum mechanical measurement is an eigenvalue of the operator representing the quantity
being measured.
Postulate 4: There is no classical analogy. The interpretation of the square of an inner product as
a probability is unique to quantum mechanics.
Postulate 5: There is also no classical analogy for postulate 5. Classically, measurement of a system
does not affect the system.
Postulate 6: The time rate of change of the state variables of a classical system is governed by
Hamilton’s equations,

dx/dt = ∂H/∂p and dp/dt = − ∂H/∂x .
The time evolution of a quantum mechanical system is governed by Schrodinger’s equation.
Postscript: Postulate 1 includes the principle of superposition. If | ψ1 > and | ψ2 > are
possible states of the system, then the linear combination, | ψ > = c1 | ψ1 > + c2 | ψ2 > , is also a
possible state of the system. The | ψi > are generally the eigenstates of the system. The general
state vector is the superposition or linear combination of all eigenstates, i.e.,
| ψ > = c1 | ψ1 > + c2 | ψ2 > + c3 | ψ3 > + · · · = Σ_{i=1}^{∞} ci | ψi >
in the case of an infinity of eigenstates. Each coefficient ci is a scalar that indicates the
relative “amount” of eigenstate | ψi > in the superposition that is the state vector | ψ > and
are often called probability amplitudes because the probability of measuring the corresponding
eigenvalue is often | ci | 2 . A coefficient can be zero meaning that the corresponding eigenstate is
absent from that state vector.
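The superposition and its probability amplitudes can be illustrated numerically. Here is a minimal sketch in Python with NumPy (the tooling is our choice, and the three-state system and its amplitudes are hypothetical):

```python
# A numerical sketch of the superposition postulate using NumPy (our choice of
# tool; the three-state system and the amplitudes are hypothetical). The
# coefficients c_i are probability amplitudes, and |c_i|^2 are probabilities.
import numpy as np

c = np.array([1.0, 2.0, 3.0], dtype=complex)   # hypothetical amplitudes
c = c / np.linalg.norm(c)                      # normalize so probabilities sum to 1

basis = np.eye(3, dtype=complex)               # orthonormal eigenstates |psi_i>
psi = sum(ci * ei for ci, ei in zip(c, basis)) # |psi> = sum_i c_i |psi_i>

probabilities = np.abs(c) ** 2
print(probabilities)                           # approximately [1/14, 4/14, 9/14]
print(probabilities.sum())                     # 1.0 within rounding
```

Setting one coefficient to zero and renormalizing shows the corresponding eigenstate dropping out of the superposition entirely.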
Observable quantities are those that can be physically measured. There are two essential
properties intrinsic to operators that are Hermitian. First, they have eigenvalues that are real
numbers. Numbers used to describe physical quantities are necessarily real numbers. This fact is
essential to postulate 3. Also, the eigenvectors of Hermitian operators are orthogonal, therefore,
the eigenvectors form a basis that can be made orthonormal. This fact is essential to postulate 4.
The observable quantities of position and momentum remain focal. An extension of postulate
2 is that the position operator X and momentum operator P acquire specific representations
in the position basis; these representations are introduced in problem 14.
Postulate 3 is self explanatory but definitely non-classical. The only possible result of a
measurement is an eigenvalue of the operator representing the physical quantity being measured.
A proportionality is used in postulate 4. The proportionality is replaced by an equality by
dividing by the inner product of the unnormalized state vector,
P (α) ∝ | < α | ψ > |² ⇒ P (α) = | < α | ψ > |² / < ψ | ψ > ,
or equivalently, by normalizing the state vector before calculating the inner product. Postulate 4
is the reason for the process of normalization.
A probability of “1” means a certainty and a probability of “0” means the absence of a
possibility. The probability of measuring each eigenvalue of an observable quantity must be between
0 and 1 inclusive, or 0 ≤ P (α) ≤ 1 . The sum of the probabilities of individual measurements
resulting in all of the possible eigenvalues must be 1. The probabilistic interpretation of postulate
4 is the reason that two state vectors that are proportional represent the same physical state.
Postulate 5 describes what is often called the “collapse of the wave function.” It is the state-
ment that the observer interacts with the system; that the observer is part of the system. Regardless
of how carefully a measurement is made, the process of measurement changes the system being
measured. Further, the measurement changes the system in a specific way, the measurement forces
the system into one of its eigenstates. Finally, once in that eigenstate, it remains in that eigenstate
until it undergoes its next interaction which is its next “measurement.”
Postulate 6 is the Schrodinger equation. It is not derived from the postulates of quantum
mechanics, rather, the Schrodinger equation is one of the postulates of quantum mechanics.
Postulate 6 requires the quantum mechanical Hamiltonian operator, H . The classical
Hamiltonian is the total energy, H = T + V , where T is kinetic energy and V is
potential energy. Each of the dynamical variables of classical mechanics is replaced by an operator
for the transition to the quantum mechanical formulation, x → X and p → P . The classical
Hamiltonian then goes to the quantum mechanical Hamiltonian, H → H . These quantum
mechanical operators are basis independent. They can be represented using matrix operators or
differential operators in any basis for further calculation.
The Schrodinger equation is not a postulate in the path integral formulation of quantum
mechanics, rather, it is derived from the postulates. This indicates that there is something more
fundamental about the path integral formulation of quantum mechanics. The path integral
formulation is often unwieldy in practice, however, because it is difficult to apply to calculations
for even simple systems. We will address the path integral formulation including its postulates in
future chapters.
2. A system is represented by the normalized state vector | ψ > . It is composed solely of two
orthonormal eigenstates, | ψ1 > and | ψ2 > .
(a) Write | ψ > as the superposition of | ψ1 > and | ψ2 > .
(b) What is the probability that the state is | ψ1 > following a measurement?
(c) What is the probability that the state is | ψ2 > following a measurement?
(d) Show that the probabilities of parts (b) and (c) sum to 1.
This problem amplifies postulates 1, 4, and 5. The discussion of postulate 1 in the postscript
of problem 1 describes superposition. Postulate 5 indicates that a measurement changes the state
vector to one of the eigenstates. If the first eigenstate is the new state vector, that is the one to use
in the inner product in the probability calculation described in postulate 4. The proportionality
of postulate 4 can be replaced by an equality because | ψ > is given to be normalized. Use
the superposition of part (a) for | ψ > in parts (b) through (d), and you need to recognize the
orthonormality of the eigenstates. For part (d), what is the inner product of | ψ > with its bra?
(a) | ψ > = c1 | ψ1 > + c2 | ψ2 > .

(b) P1 = | < ψ1 | ( c1 | ψ1 > + c2 | ψ2 > ) |²

= | < ψ1 | c1 | ψ1 > + < ψ1 | c2 | ψ2 > |²

= | c1 < ψ1 | ψ1 > + c2 < ψ1 | ψ2 > |² = | c1 |² ,

because < ψ1 | ψ1 > = 1 and < ψ1 | ψ2 > = 0 due to the given orthonormality of eigenstates.

(c) P2 = | < ψ2 | ( c1 | ψ1 > + c2 | ψ2 > ) |²

= | < ψ2 | c1 | ψ1 > + < ψ2 | c2 | ψ2 > |²

= | c1 < ψ2 | ψ1 > + c2 < ψ2 | ψ2 > |² = | c2 |² ,
because < ψ2 | ψ1 > = 0 and < ψ2 | ψ2 > = 1 due to the orthonormality of eigenstates.
(d) < ψ | ψ > = 1 because | ψ > is given to be normalized. Then

< ψ | ψ > = ( < ψ1 | c1* + < ψ2 | c2* ) ( c1 | ψ1 > + c2 | ψ2 > )

= c1* c1 < ψ1 | ψ1 > + c1* c2 < ψ1 | ψ2 > + c2* c1 < ψ2 | ψ1 > + c2* c2 < ψ2 | ψ2 >

= | c1 |² + | c2 |² = 1 ,
Postscript: The | ci |² are probabilities, thus the ci are known as probability amplitudes.
3. A system is represented by the unnormalized state vector | ψ > . It is composed solely of the
two orthonormal eigenstates, | ψ1 > and | ψ2 > .
(a) Show that the proportionality in postulate 4 is replaced by an equality when | < ψi | ψ > |2 is
divided by the inner product of the unnormalized state vector.
(b) Show that Pi = | < ψi | ψ > |2 when | ψ > is normalized prior to the probability calculation.
(c) Explain why two state vectors that are proportional represent the same physical state.
Postulate 4 indicates that an inner product of two state vectors is a probability. The normalization
condition, < ψ | ψ > = 1 , says only that it is certain that the system exists. It may be easier
to see in a position space statement, ∫_{−∞}^{∞} ψ*(x) ψ(x) dx = 1 . If the system exists, it is certain
that the system is between −∞ and +∞ . Certainty is expressed by a probability of 1. The same
condition < ψ | A∗ A | ψ > = 1 for an unnormalized | ψ > also says only that the system exists with
certainty. The normalization condition is simply an application of the probabilistic interpretation.
Since the state vector is given to be unnormalized, attach a proportionality constant, that is
A | ψ > = A c1 | ψ1 > + A c2 | ψ2 > . You need to use the condition < ψ | A∗ A | ψ > = 1 to find
P1 = | c1 |² / < ψ | ψ > and P2 = | c2 |² / < ψ | ψ > for part (a).
The calculations of parts (a) and (b) are fairly duplicative because the normalization
condition is just a statement of the probabilistic condition of certainty. Familiarity with this
duplication should provide insight into part (c), which is actually the point of this problem.
(a) Since the state vector is given to be unnormalized, we attach a normalization constant, that is
A | ψ > = A c1 | ψ1 > + A c2 | ψ2 > . Then, remembering that | ψ1 > and | ψ2 > are orthonormal,

P1 ∝ | < ψ1 | ( A c1 | ψ1 > + A c2 | ψ2 > ) |² = | A c1 < ψ1 | ψ1 > + A c2 < ψ1 | ψ2 > |² = | A c1 |² = | A |² | c1 |² ,

P2 ∝ | < ψ2 | ( A c1 | ψ1 > + A c2 | ψ2 > ) |² = | A c1 < ψ2 | ψ1 > + A c2 < ψ2 | ψ2 > |² = | A c2 |² = | A |² | c2 |² .

The sum of all the possibilities, in this case the two possibilities, must be 1. Then the sum

| A |² | c1 |² + | A |² | c2 |² = | A |² ( | c1 |² + | c2 |² ) = 1 .

< ψ | A* A | ψ > = 1 ⇒ | A |² < ψ | ψ > = 1 ⇒ | A |² = 1 / < ψ | ψ >

⇒ ( | c1 |² + | c2 |² ) / < ψ | ψ > = 1 , and P1 = | c1 |² / < ψ | ψ > and P2 = | c2 |² / < ψ | ψ > .

Therefore, Pi = | ci |² / < ψ | ψ > = | < ψi | ψ > |² / < ψ | ψ > in general.
(b) The normalization constant is found

< ψ | A* A | ψ > = 1 ⇒ | A |² = 1 / < ψ | ψ > ⇒ A = 1 / √( < ψ | ψ > ) .

P1 ∝ | < ψ1 | ( A c1 | ψ1 > + A c2 | ψ2 > ) |²

= | ( c1 / √< ψ | ψ > ) < ψ1 | ψ1 > + ( c2 / √< ψ | ψ > ) < ψ1 | ψ2 > |² = | c1 / √< ψ | ψ > |² = | c1 |² / < ψ | ψ > ,

which is the same as P1 from part (a). A similar calculation shows that P2 is also the same as
found in part (a). Since the probabilities are the same, particularly now that we know the origin
of the normalization condition, the procedure of using a normalized state vector allows the use of
the relation of equality in postulate 4.
(c) The interpretation of an inner product as a probability renders the “length” of a state vector
immaterial because the length is necessarily adjusted so that the probability of certainty is 1. The
probabilities of individual possibilities of any measurement are necessarily identical using | ψ >
or A | ψ > because of this adjustment. The conclusion is that two state vectors that are
proportional represent the same physical state.
Postscript: Insisting state vectors are normalized prior to calculating probabilities usually
leads to shorter and cleaner calculations. You are likely going to be most efficient if you consider the
statement “normalize all state vectors prior to calculating probabilities” as a corollary to postulate
4 because the proportionality is then replaced by an equality.
Physicists will not usually write any notation that differentiates two state vectors that are
proportional. For instance, | ψ > and A | ψ > where A is a scalar are both appropriate descriptions
of the state vector for the same photon. In fact, you may see equations like | ψ > = A | ψ > , which
is a true statement for all values of A if | ψ > is a state vector. If | ψ > is not a state vector, then
A = 1 is the only value of A that will make the statement true. If | ψ > is a state vector, it will
be adjusted so that the sum of all possible probabilities is 1.
The concept of this two-dimensional problem extends to arbitrary dimensions.
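The point of this problem, that proportional state vectors give identical probabilities, can be checked numerically. A minimal sketch in Python with NumPy (a hypothetical two-state system, with an arbitrary constant of our choosing):

```python
# A numerical sketch of the point of this problem using NumPy (a hypothetical
# two-state system): a state vector and any scalar multiple of it give the same
# probabilities once the squared amplitude is divided by <psi|psi>.
import numpy as np

psi = np.array([1.0, 2.0], dtype=complex)     # unnormalized: c1 = 1, c2 = 2
A = 3.7 - 1.2j                                # arbitrary proportionality constant
psi_scaled = A * psi                          # proportional state vector

def probability(eigvec, state):
    # P_i = |<psi_i|state>|^2 / <state|state> per postulate 4
    return abs(np.vdot(eigvec, state)) ** 2 / np.vdot(state, state).real

e1, e2 = np.eye(2, dtype=complex)             # orthonormal eigenstates
p = [probability(e1, psi), probability(e2, psi)]
p_scaled = [probability(e1, psi_scaled), probability(e2, psi_scaled)]
print(p)          # approximately [0.2, 0.8]
print(p_scaled)   # identical: proportional vectors are the same physical state
```

Note that `np.vdot` conjugates its first argument, which is exactly the bra of the inner product.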
4. The operator A = [ 3 0 0 ; 0 4 0 ; 0 0 5 ] represents an observable quantity for a system in
the state | ψ > = ( 1, 2, 3 )ᵀ . (a) Normalize | ψ > . (b) What are the possible results of a
measurement of the observable represented by A ? (c) What are the eigenvectors of A ? (d)
What is the probability of measuring each eigenvalue, and do the probabilities sum to 1? (e)
What is the state vector following a measurement?

This problem is meant to amplify postulates 3, 4, and 5. Per the postscript to problem
3, we will routinely do the normalization of part (a) prior to any probability calculations. The
only possible results of a measurement are the eigenvalues of A per postulate 3. You need the
eigenvectors to calculate probabilities. You should solve for the eigenvalues and eigenvectors of
this diagonal matrix by inspection. Of course, if the three probabilities do not sum to 1 you have
made an error. Postulate 5 tells you how to approach part (e).
(b) The possible results of a measurement are the eigenvalues 3, 4, and 5. The elements on the
principal diagonal of a diagonal matrix are the eigenvalues. (c) The eigenvectors are found by
inspection to be

| 3 > = ( 1, 0, 0 )ᵀ , | 4 > = ( 0, 1, 0 )ᵀ , | 5 > = ( 0, 0, 1 )ᵀ .

P (ev = 3) = | ( 1, 0, 0 ) (1/√14) ( 1, 2, 3 )ᵀ |² = | (1/√14) ( 1 + 0 + 0 ) |² = | 1/√14 |² = 1/14 ,

P (ev = 4) = | ( 0, 1, 0 ) (1/√14) ( 1, 2, 3 )ᵀ |² = | (1/√14) ( 0 + 2 + 0 ) |² = | 2/√14 |² = 4/14 ,

P (ev = 5) = | ( 0, 0, 1 ) (1/√14) ( 1, 2, 3 )ᵀ |² = | (1/√14) ( 0 + 0 + 3 ) |² = | 3/√14 |² = 9/14 .

(d) Σ_{i=1}^{3} Pi = 1/14 + 4/14 + 9/14 = 14/14 = 1 .
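The eigenvalues and probabilities above can be checked numerically; a minimal sketch in Python with NumPy (the tooling is our choice, not part of the text) follows:

```python
# A check of this problem with NumPy (our choice of tool): the eigenvalues of
# the diagonal operator A and the probability of measuring each one in the
# state (1, 2, 3)/sqrt(14).
import numpy as np

A = np.diag([3.0, 4.0, 5.0])
psi = np.array([1.0, 2.0, 3.0]) / np.sqrt(14.0)   # normalized state vector

eigvals, eigvecs = np.linalg.eigh(A)              # Hermitian eigenvalue problem
probs = np.abs(eigvecs.conj().T @ psi) ** 2       # P(alpha_i) = |<alpha_i|psi>|^2
print(eigvals)                                    # [3. 4. 5.]
print(probs)                                      # approximately [1/14, 4/14, 9/14]
```

`numpy.linalg.eigh` is the appropriate routine here because A is Hermitian; it returns real eigenvalues in ascending order together with an orthonormal set of eigenvectors.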
5. What are the possible results and the probability of attaining each possible result of a
measurement of the “B–ness” of a system where

B = [ 2 0 0 ; 0 2 1 ; 0 1 2 ] and the state is | ψ > = ( 1, 2, 3 )ᵀ .

The question asks only for possibilities and probabilities. The possibilities are the eigenvalues,
and the probabilities follow from the inner product of the eigenvector and the state vector. You
normalized this state vector in problem 4. You should find

| 2 > = ( 1, 0, 0 )ᵀ , | 3 > = (1/√2) ( 0, 1, 1 )ᵀ , | 1 > = (1/√2) ( 0, 1, −1 )ᵀ .
If your probabilities do not sum to 1, you have made an error.
Using the normalized state vector, the probabilities of each possibility are

P (ev = 2) = | ( 1, 0, 0 ) (1/√14) ( 1, 2, 3 )ᵀ |² = | (1/√14) ( 1 + 0 + 0 ) |² = | 1/√14 |² = 1/14 ,

P (ev = 3) = | (1/√2) ( 0, 1, 1 ) (1/√14) ( 1, 2, 3 )ᵀ |² = | (1/√28) ( 0 + 2 + 3 ) |² = | 5/√28 |² = 25/28 ,

P (ev = 1) = | (1/√2) ( 0, 1, −1 ) (1/√14) ( 1, 2, 3 )ᵀ |² = | (1/√28) ( 0 + 2 − 3 ) |² = | −1/√28 |² = 1/28 ,

and 2/28 + 25/28 + 1/28 = 28/28 = 1 , as it must.
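These possibilities and probabilities can also be confirmed numerically; here is a short sketch in Python with NumPy (a tooling choice, not part of the text):

```python
# A check of this problem with NumPy: numpy.linalg.eigh returns the eigenvalues
# of B in ascending order, and the squared inner products reproduce the
# probabilities 1/28, 1/14, and 25/28.
import numpy as np

B = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
psi = np.array([1.0, 2.0, 3.0]) / np.sqrt(14.0)   # normalized state vector

eigvals, eigvecs = np.linalg.eigh(B)
probs = np.abs(eigvecs.conj().T @ psi) ** 2       # P(beta_i) = |<beta_i|psi>|^2
for ev, p in zip(eigvals, probs):
    print(f"P(ev = {ev:.0f}) = {p:.4f}")
```

The sign convention `eigh` chooses for each eigenvector is irrelevant because only the squared magnitude of the inner product enters the probability.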
6. The operators A = [ 3 0 0 ; 0 4 0 ; 0 0 5 ] and B = [ 2 0 0 ; 0 2 1 ; 0 1 2 ] of problems 4
and 5 describe the same system. (a) A measurement of A yields 3. What will alternating
measurements of B and A subsequently yield? (b) A measurement of A yields 4. What are
the probabilities for a subsequent measurement of B ? (c) A measurement of B yields 3. What
are the probabilities for a subsequent measurement of A ? (d) A measurement of B yields 1.
What are the probabilities for a subsequent measurement of A ? (e) A measurement of A yields
5. What are the probabilities for a subsequent measurement of B ?

This problem should help to focus postulate 5. The state vector following a measurement is
the eigenstate corresponding to the eigenvalue measured. Use the new state vector determined by
the measurement in probability calculations consistent with postulate 4 to answer all five questions.
(a) The measurement of A that yields ev = 3 leaves the system in the state | ψ' > = ( 1, 0, 0 )ᵀ ,
which is the eigenvector of B corresponding to ev = 2 . The eigenvectors of B corresponding
to ev = 3 or ev = 1 both have zero as their first component, so < evB = 3 | ψ' > =
< evB = 1 | ψ' > = 0 . We will measure ev = 2 for B with probability | < evB = 2 | ψ' > |² = 1 .
Subsequent measurements of A , B , A , etc., yield ev = 3 , ev = 2 , ev = 3 , etc., where the
probability for each measurement is 1.
(b) From the given measurement of A , the state vector of the system is | ψ' > = ( 0, 1, 0 )ᵀ . The
eigenvector corresponding to the eigenvalue measured is now the eigenstate of the system. Both
of the eigenvectors | ev = 3 > and | ev = 1 > of operator B are non-zero in the same component
that | ψ' > is non-zero while the corresponding component of | ev = 2 > is zero. From that fact
alone, we can conclude that P (ev = 2) = 0 . Nevertheless, the probabilities of all possibilities are

P (ev = 2) = | ( 1, 0, 0 ) ( 0, 1, 0 )ᵀ |² = | 0 + 0 + 0 |² = 0 ,

P (ev = 3) = | (1/√2) ( 0, 1, 1 ) ( 0, 1, 0 )ᵀ |² = | (1/√2) ( 0 + 1 + 0 ) |² = | 1/√2 |² = 1/2 ,

P (ev = 1) = | (1/√2) ( 0, 1, −1 ) ( 0, 1, 0 )ᵀ |² = | (1/√2) ( 0 + 1 + 0 ) |² = | 1/√2 |² = 1/2 .
(c) If 3 was found for the measurement of B , the new state vector is | ψ' > = (1/√2) ( 0, 1, 1 )ᵀ .
Then the probabilities for a subsequent measurement of A follow from postulate 4,

P (ev = 3) = | ( 1, 0, 0 ) (1/√2) ( 0, 1, 1 )ᵀ |² = | (1/√2) ( 0 + 0 + 0 ) |² = 0 ,

P (ev = 4) = | ( 0, 1, 0 ) (1/√2) ( 0, 1, 1 )ᵀ |² = | (1/√2) ( 0 + 1 + 0 ) |² = | 1/√2 |² = 1/2 ,

P (ev = 5) = | ( 0, 0, 1 ) (1/√2) ( 0, 1, 1 )ᵀ |² = | (1/√2) ( 0 + 0 + 1 ) |² = | 1/√2 |² = 1/2 .
1
0
1
(d) If we found 1 for the measurement of B , the state vector is | ψ 0 > = √ 1 . In a
2 −1
calculation that is similar to part (c), we find for a measurement of A , P (ev = 3) = 0 ,
P (ev = 4) = 1/2 , and P (ev = 5) = 1/2 .
(e) The state vector of the system is ( 0, 0, 1 )ᵀ after the given measurement. Again, both | ev = 3 >
and | ev = 1 > of the operator B have corresponding components that are non-zero but | ev = 2 >
does not. The probabilities are P (ev = 2) = 0 , P (ev = 3) = 1/2 , and P (ev = 1) = 1/2 for a
subsequent measurement of B .
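The collapse described by postulate 5 can be simulated numerically. A minimal sketch in Python with NumPy (our tooling choice) reproduces part (b): after A yields 4, the state is ( 0, 1, 0 )ᵀ, and a subsequent measurement of B gives 3 or 1 with equal probability:

```python
# A numerical sketch of the collapse of postulate 5 using NumPy. After a
# measurement of A yields 4, the state vector is (0, 1, 0); the probabilities
# for a subsequent measurement of B reproduce part (b).
import numpy as np

B = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
psi_after_A = np.array([0.0, 1.0, 0.0])   # eigenvector of A for eigenvalue 4

evals, evecs = np.linalg.eigh(B)          # eigenvalues in ascending order: 1, 2, 3
probs = np.abs(evecs.conj().T @ psi_after_A) ** 2
for ev, p in zip(evals, probs):
    print(f"P(ev = {ev:.0f}) = {p:.2f}")  # P(1) = 0.50, P(2) = 0.00, P(3) = 0.50
```

Replacing `psi_after_A` with ( 0, 0, 1 )ᵀ reproduces part (e) the same way.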
7. Find the possibilities and probabilities of a measurement of Ly for a system in the state

| ψ > = ( 1, 2, 3 )ᵀ where Ly = [ 0 −i 0 ; i 0 −i ; 0 i 0 ] .
Some of the eigenvectors of Ly have imaginary components so that the probability calculations
require use of complex numbers. The complex number facet is the only difference from problems
4 or 5. You found that Ly is Hermitian in problem 9 of part 2 of chapter 1, and in problem 19 of
part 2 of chapter 1, that the eigenvalues and eigenvectors are
| −√2 > = (1/2) ( 1, −√2 i, −1 )ᵀ , | 0 > = (1/√2) ( 1, 0, 1 )ᵀ , and | √2 > = (1/2) ( 1, √2 i, −1 )ᵀ .
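The complex eigensystem quoted above can be checked numerically, again with NumPy as a hypothetical helper; the probabilities it produces are the ones problem 8 uses, 3/14, 8/14, and 3/14:

```python
# A numerical check of the Ly eigensystem. numpy.linalg.eigh handles the
# complex Hermitian matrix directly; eigenvector phase conventions cancel in
# the squared inner products |<alpha_i|psi>|^2.
import numpy as np

Ly = np.array([[0, -1j,  0],
               [1j,  0, -1j],
               [0,  1j,  0]])
psi = np.array([1.0, 2.0, 3.0], dtype=complex) / np.sqrt(14.0)

evals, evecs = np.linalg.eigh(Ly)            # -sqrt(2), 0, +sqrt(2) in ascending order
probs = np.abs(evecs.conj().T @ psi) ** 2
print(probs)                                 # approximately [3/14, 8/14, 3/14]
```

The only new facet relative to problems 4 and 5 is the complex arithmetic, which `np.abs` and the conjugate transpose handle automatically.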
8. Find the expectation values of A , B , and Ly using the operators given and the probabilities
calculated in problems 4, 5, and 7, respectively.
An expectation value is simply a weighted average. It is the sum of the products of each
eigenvalue and the probability of measuring that eigenvalue.
< A >ψ = Σi P (αi) αi = (1/14)(3) + (4/14)(4) + (9/14)(5) = 64/14 = 4 4/7 .

< B >ψ = Σi P (βi) βi = (2/28)(2) + (25/28)(3) + (1/28)(1) = 80/28 = 2 6/7 .

< Ly >ψ = (3/14)(−√2) + (8/14)(0) + (3/14)(√2) = 0 .
Postscript: Probabilities are dependent upon the state vector, therefore, expectation values that
are computed using probabilities are also dependent upon the state vector. The expectation value
symbols have been subscripted with ψ to emphasize this dependence.
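These weighted averages can be checked numerically; the sketch below (Python with NumPy, our tooling choice) also computes each expectation value as the braket < ψ | Ω | ψ >, anticipating problem 9, and confirms the two methods agree:

```python
# A check of these expectation values with NumPy. Each one is computed as the
# probability-weighted average of eigenvalues and also as <psi|Omega|psi>.
import numpy as np

psi = np.array([1.0, 2.0, 3.0], dtype=complex) / np.sqrt(14.0)

def expectation(op):
    evals, evecs = np.linalg.eigh(op)
    probs = np.abs(evecs.conj().T @ psi) ** 2
    weighted = float(np.sum(probs * evals))      # sum_i P(omega_i) omega_i
    braket = np.vdot(psi, op @ psi).real         # <psi|Omega|psi>
    assert np.isclose(weighted, braket)          # the two methods agree
    return weighted

A = np.diag([3.0, 4.0, 5.0]).astype(complex)
B = np.array([[2, 0, 0], [0, 2, 1], [0, 1, 2]], dtype=complex)
Ly = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])

print(expectation(A), expectation(B), expectation(Ly))   # 32/7, 20/7, ~0
```

Note that 64/14 = 32/7 and 80/28 = 20/7, and < Ly > comes out zero to numerical precision.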
9. Use a normalized state vector | ψ> to show that < Ω >ψ = < ψ | Ω | ψ> .
The expression on the right is an alternative method of calculating an expectation value with-
out having to complete the eigenvalue/eigenvector problem. It is also a good method to check
calculations concerning small dimensional operators like A , B , and Ly .
This problem is a good exercise in applying some of the concepts and notation encountered
previously. Start with the definition of expectation value given in the last problem. Then suc-
cessively use postulate 4, the definition of a norm, the property that scalars commute, the eigen-
value/eigenvector equation, and the completeness relation to arrive at the desired expression.
< Ω >ψ = Σi P (ωi) ωi = Σi | < ωi | ψ > |² ωi = Σi < ψ | ωi > < ωi | ψ > ωi      (1)

= Σi < ψ | ωi | ωi > < ωi | ψ >      (2)

= Σi < ψ | Ω | ωi > < ωi | ψ >      (3)

= < ψ | Ω ( Σi | ωi > < ωi | ) | ψ > = < ψ | Ω I | ψ > = < ψ | Ω | ψ > .      (4)
Line (1) is the definition of an expectation value, application of postulate 4, and the definition
of a norm. Eigenvalues are scalars so can be moved into the braket in line (2). The eigenvalue
equation, Ω | ωi > = ωi | ωi > , is used to arrive at line (3). The bra < ψ | , the operator Ω ,
and the ket | ψ > do not depend on the summation index, so they are removed from the
summation to arrive at the first expression in line (4). The summation remaining in the
parentheses is the completeness relation, which is a statement of the identity.
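The completeness relation used in line (4) can be illustrated numerically, with NumPy as a hypothetical helper: the outer products of the eigenvectors of a Hermitian operator sum to the identity.

```python
# A numerical illustration of the completeness relation: the eigenvectors of a
# Hermitian operator satisfy sum_i |omega_i><omega_i| = I.
import numpy as np

Ly = np.array([[0, -1j,  0],
               [1j,  0, -1j],
               [0,  1j,  0]])
evals, evecs = np.linalg.eigh(Ly)

# sum_i |omega_i><omega_i| over the eigenbasis, as outer products
identity = sum(np.outer(evecs[:, i], evecs[:, i].conj()) for i in range(3))
print(np.allclose(identity, np.eye(3)))   # True
```

This works for any Hermitian matrix because `eigh` returns an orthonormal eigenbasis; Ly is used here only because it is already at hand.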
10. Check that < ψ | Ω | ψ > yields the expectation values of A , B , and Ly calculated
previously.
< A >ψ = < ψ | A | ψ > = (1/√14) ( 1, 2, 3 ) [ 3 0 0 ; 0 4 0 ; 0 0 5 ] (1/√14) ( 1, 2, 3 )ᵀ

= (1/14) ( 1, 2, 3 ) ( 3, 8, 15 )ᵀ = (1/14) ( 3 + 16 + 45 ) = 64/14 = 4 4/7 .

< B >ψ = < ψ | B | ψ > = (1/√14) ( 1, 2, 3 ) [ 2 0 0 ; 0 2 1 ; 0 1 2 ] (1/√14) ( 1, 2, 3 )ᵀ

= (1/14) ( 1, 2, 3 ) ( 2, 4 + 3, 2 + 6 )ᵀ = (1/14) ( 1, 2, 3 ) ( 2, 7, 8 )ᵀ = (1/14) ( 2 + 14 + 24 ) = 40/14 = 2 6/7 .

< Ly >ψ = < ψ | Ly | ψ > = (1/√14) ( 1, 2, 3 ) [ 0 −i 0 ; i 0 −i ; 0 i 0 ] (1/√14) ( 1, 2, 3 )ᵀ

= (1/14) ( 1, 2, 3 ) ( −2i, i − 3i, 2i )ᵀ = (1/14) ( 1, 2, 3 ) ( −2i, −2i, 2i )ᵀ = (1/14) ( −2i − 4i + 6i ) = 0 .
Postscript: The subscript ψ is rarely appended to expectation values. The expectation value of
Ω appears as < Ω > , which is conventional. Remember, nevertheless, that an expectation value
is dependent upon a state vector.
11. Find the uncertainty of A using the operator and the state vector given in problem 4.
Two measures of central tendency are frequently encountered in quantum mechanics. The first is
the previously discussed expectation value. The other is uncertainty or standard deviation.
Uncertainty or standard deviation is defined in terms of the expectation value,

∆Aψ = < ψ | ( A − < A > I )² | ψ >^(1/2) ,
where the standard deviation on the left is subscripted because it is dependent on the state vector
| ψ > . A state vector is needed to calculate the expectation value, therefore, a state vector is
needed to calculate the uncertainty.
Use the operator A = [ 3 0 0 ; 0 4 0 ; 0 0 5 ] and the normalized state vector
| ψ > = (1/√14) ( 1, 2, 3 )ᵀ . The expectation value times the identity operator means

< A > I = (32/7) [ 1 0 0 ; 0 1 0 ; 0 0 1 ] = [ 32/7 0 0 ; 0 32/7 0 ; 0 0 32/7 ] ,

since < A > = 4 4/7 = 32/7 . You should find that ∆Aψ = √19 / 7 ≈ 0.62 .
( ∆Aψ )² = < ψ | ( A − < A > I )² | ψ >

= (1/√14) ( 1, 2, 3 ) ( [ 3 0 0 ; 0 4 0 ; 0 0 5 ] − [ 32/7 0 0 ; 0 32/7 0 ; 0 0 32/7 ] )² (1/√14) ( 1, 2, 3 )ᵀ

= (1/14) ( 1, 2, 3 ) [ −11/7 0 0 ; 0 −4/7 0 ; 0 0 3/7 ] [ −11/7 0 0 ; 0 −4/7 0 ; 0 0 3/7 ] ( 1, 2, 3 )ᵀ

= (1/14) ( 1, 2, 3 ) [ −11/7 0 0 ; 0 −4/7 0 ; 0 0 3/7 ] ( −11/7, −8/7, 9/7 )ᵀ

= (1/14) ( 1, 2, 3 ) ( 121/49, 32/49, 27/49 )ᵀ

= (1/14) ( 121/49 + 64/49 + 81/49 ) = (1/14) (266/49) = (2 · 7 · 19)/(2 · 7³) = 19/49 ≈ 0.39

⇒ ∆Aψ = √19 / 7 ≈ 0.62 .
Postscript: The term “uncertainty” has the same meaning in quantum mechanics as the term
“standard deviation” does in statistics. The quantity “uncertainty” calculated in this problem
is the same quantity that is calculated for use in the Heisenberg uncertainty principle. We
introduce the Heisenberg uncertainty relations in chapter 3.
The conventional way to write the definition of uncertainty is ∆A = < ( A − < A > )² >^(1/2) ,
where the state vector and identity operator are implicit.

Variance is the square of standard deviation, or ( ∆Aψ )² = < ψ | ( A − < A > I )² | ψ > .
Variance is often a convenient intermediate result in a calculation of uncertainty.
12. Show that ∆Aψ = < ψ | A² − < A >² I | ψ >^(1/2) .

This alternative is often the most direct way to calculate an uncertainty. A useful theorem from
an ordinary study of probability and statistics is that the variance is the mean of the square minus
the square of the mean. We use this result, but refer you to Meyer¹ or your favorite book on
probability and statistics for depth concerning this theorem.
This problem is essentially an expansion, summation of like terms, and a reduction. Work
from the variance and take a square root as the last step to get the uncertainty. You need
to use the fact that < A + B > = < A > + < B > twice. You also need the fact that
< ψ | A < A > I | ψ > = < ψ | < A >² I | ψ > . Since this is not obvious and also to provide a
sample of what is expected, this is true because

< ψ | A < A > I | ψ > = < ψ | A < A > | ψ > = < A > < ψ | A | ψ > = < A > < A >

= < A >² = < A >² < ψ | ψ > = < ψ | < A >² | ψ > = < ψ | < A >² I | ψ > .
( ∆Aψ )² = < ψ | ( A − < A > I )² | ψ >

= < ψ | ( A − < A > I ) ( A − < A > I ) | ψ >

= < ψ | A² − A < A > I − < A > I A + < A >² I | ψ >      (1)

= < ψ | A² − 2 A < A > I + < A >² I | ψ >      (2)

= < ψ | A² | ψ > − < ψ | 2 A < A > I | ψ > + < ψ | < A >² I | ψ >      (3)

= < ψ | A² | ψ > − 2 < ψ | < A >² I | ψ > + < ψ | < A >² I | ψ >      (4)

= < ψ | A² | ψ > − < ψ | < A >² I | ψ >

= < ψ | A² − < A >² I | ψ >      (5)

⇒ ∆Aψ = < ψ | A² − < A >² I | ψ >^(1/2) .
¹ Meyer, Introductory Probability and Statistical Applications (Addison-Wesley Publishing Co.,
Reading, Massachusetts, 1970), pp. 123–136.
Line (1) uses the fact that I² = I . Line (2) can be written because an operator A commutes
with a scalar < A > and the identity operator. Line (3) uses < A + B > = < A > + < B > .
Equation (4) depends on the fact that < ψ | A < A > I | ψ > = < ψ | < A >² I | ψ > . Line (5)
uses < A > + < B > = < A + B > again.
Postscript: You will encounter the same statement written ∆Ω = ( < Ω² > − < Ω >² )^(1/2) ,
where the state vector and the identity operator are implied.
13. Calculate the uncertainty of the operator A from problem 4 using the result of problem 12.
The operator A , the state vector | ψ > , and the necessary result are all stated in the comments
that preface the solution to problem 11. Calculate A2 − < A >2 I , form the braket with the
state vector, then take the square root.
A² − < A >² I = [ 3 0 0 ; 0 4 0 ; 0 0 5 ]² − (32/7)² [ 1 0 0 ; 0 1 0 ; 0 0 1 ]

= [ 9 0 0 ; 0 16 0 ; 0 0 25 ] − (1024/49) [ 1 0 0 ; 0 1 0 ; 0 0 1 ]

= [ 441/49 0 0 ; 0 784/49 0 ; 0 0 1225/49 ] − [ 1024/49 0 0 ; 0 1024/49 0 ; 0 0 1024/49 ]

= [ −583/49 0 0 ; 0 −240/49 0 ; 0 0 201/49 ] ,

so

( ∆Aψ )² = (1/√14) ( 1, 2, 3 ) [ −583/49 0 0 ; 0 −240/49 0 ; 0 0 201/49 ] (1/√14) ( 1, 2, 3 )ᵀ

= (1/14) ( 1, 2, 3 ) ( −583/49, −480/49, 603/49 )ᵀ = (1/14) ( −583/49 − 960/49 + 1809/49 ) = (1/14) (266/49) = 19/49

⇒ ∆Aψ = √19 / 7 ≈ 0.62 , in agreement with problem 11.
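Both routes to the uncertainty of A can be confirmed numerically; a short sketch in Python with NumPy (our tooling choice) computes the variance from the defining braket of problem 11 and from the problem 12 form, and both give √19/7:

```python
# A numerical check of the uncertainty of A in the state (1, 2, 3)/sqrt(14),
# computed both from the defining braket <psi|(A - <A> I)^2|psi> and from the
# equivalent form <A^2> - <A>^2. Both should give sqrt(19)/7.
import numpy as np

A = np.diag([3.0, 4.0, 5.0])
psi = np.array([1.0, 2.0, 3.0]) / np.sqrt(14.0)
I = np.eye(3)

mean_A = np.vdot(psi, A @ psi).real                  # <A> = 32/7

# definition of the variance: <psi| (A - <A> I)^2 |psi>
shifted = A - mean_A * I
var_def = np.vdot(psi, shifted @ shifted @ psi).real

# the alternative form: <A^2> - <A>^2
var_alt = np.vdot(psi, A @ A @ psi).real - mean_A**2

print(np.sqrt(var_def), np.sqrt(var_alt))            # both approximately 0.6227
```

The agreement of `var_def` and `var_alt` is exactly the content of the derivation in problem 12.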
14. (a) Write the basis-independent Hamiltonian for a free particle.
(b) Write the basis-independent Hamiltonian for a simple harmonic oscillator.
(c) Write the Hamiltonian for a free particle in position space.
(d) Write the Hamiltonian for a simple harmonic oscillator in position space.
(e) Write the Hamiltonian for an unknown potential in position space.
(f) Write the Hamiltonian for a free particle in momentum space.
(g) Write the Hamiltonian for a simple harmonic oscillator in momentum space.
The Hamiltonian operator is intrinsic to the Schrodinger equation. This problem is an intermediate
step to writing the Schrodinger equation for systems under the influence of various potentials.
Postscript comments to problem 1 concerning the Schrodinger equation indicate that the
classical Hamiltonian is H = T + V . The non-relativistic kinetic energy term is T = p2 /2m . The
free particle is not influenced by any potential so the potential energy term for a free particle is
V (x) = 0 . The potential energy function for a simple harmonic oscillator is V (x) = kx2 /2 .
The dynamical variables of classical mechanics become quantum mechanical operators, x → X
and p → P as H → H . A quantum mechanical Hamiltonian H expressed in terms of the
basis-independent operators X and P is itself basis-independent.
In the position basis in one spatial dimension, X → x and P → −ih̄ d/dx . In the momentum
basis, P → p and X → ih̄ d/dp . Substitute the appropriate differential operators into the basis-
independent Hamiltonian operators of parts (a) and (b) to attain the basis-dependent Hamiltonian
operators for parts (c) through (g).
(a) H = P²/2m .

(b) H = P²/2m + (1/2) k X² .

(c) H = P²/2m = − (h̄²/2m) d²/dx² .

(d) H = P²/2m + (1/2) k X² = − (h̄²/2m) d²/dx² + (1/2) k x² .

(e) H = P²/2m + V (X) = − (h̄²/2m) d²/dx² + V (x) .

(f) H = P²/2m = p²/2m .

(g) H = P²/2m + (1/2) k X² = p²/2m − (h̄² k/2) d²/dp² .
Postscript: We will explain why X → x and P → −ih̄ d/dx in position space in future problems.
Accept that these and the momentum based representations are correct and use them. These
representations are much more useful than the details that are necessary to derive them.

Notice that each potential is, or is assumed to be in part (e), a function of position only. This
leads to the dramatic simplification known as the time-independent Schrodinger equation.
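The position-space oscillator Hamiltonian of part (d) can be made concrete numerically. The sketch below (Python with NumPy; the units h̄ = m = k = 1 and the grid parameters are assumptions made for illustration) approximates the second derivative by finite differences, so the differential operator becomes a matrix whose lowest eigenvalue should approach the known oscillator ground state energy h̄ω/2:

```python
# A sketch of the position-space Hamiltonian of part (d) as a matrix, using a
# finite-difference grid with hbar = m = k = 1 (hypothetical units and grid
# parameters). The lowest eigenvalue of the matrix should approach the known
# oscillator ground state energy hbar*omega/2 = 0.5.
import numpy as np

n, half_width = 801, 8.0                      # grid size and extent (assumptions)
x = np.linspace(-half_width, half_width, n)
dx = x[1] - x[0]

# kinetic term -(1/2) d^2/dx^2 via the three-point second-difference stencil
kinetic = (-0.5 / dx**2) * (np.diag(np.ones(n - 1), 1)
                            + np.diag(np.ones(n - 1), -1)
                            - 2.0 * np.eye(n))
potential = np.diag(0.5 * x**2)               # V(x) = (1/2) k x^2 with k = 1

H = kinetic + potential
ground_energy = np.linalg.eigvalsh(H)[0]
print(ground_energy)                          # close to 0.5
```

The same scaffold handles any potential of part (e) by changing the diagonal `potential` matrix, which is one practical payoff of writing H in the position basis.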
15. Expand | ψ > = (1/√14) ( 1, 2, 3 )ᵀ in the eigenbases of

(a) A = [ 3 0 0 ; 0 4 0 ; 0 0 5 ] and (b) Ly = [ 0 −i 0 ; i 0 −i ; 0 i 0 ] .
(c) Check your expansions by calculating the probabilities of measuring each eigenvalue using the
expansion coefficients.
A vector may be expressed in any eigenbasis that spans the appropriate space. The eigenvectors
of Hermitian operators can therefore be used to represent any vector of the same dimension.
The first step is to attain the eigenvectors by solving the eigenvalue/eigenvector equation, and
that has been previously completed for both of the given operators. Next, consider

| ψ > = I | ψ > = ( Σ_{i=1}^{n} | αi > < αi | ) | ψ > = Σ_{i=1}^{n} | αi > ( < αi | ψ > ) = Σ_{i=1}^{n} ci | αi > ,

where the ci are the complex numbers given by the inner products ci = < αi | ψ > . The process
described by this equation is known as expansion in an eigenbasis and the ci are called
expansion coefficients.
Per postulates 3 and 4, a measurement will obtain the eigenvalues with the probabilities
P (αi) = | < αi | ψ > |² = | ci |² ,
given that the eigenvectors and the state vector are normalized. The | αi > that form the basis
vectors must remain normalized for | ci |² to be correct probabilities. The eigenvectors of A
are unit vectors that are inherently of unit length. The eigenvectors of Ly , however, contain
normalization constants of 1/2 and 1/√2 that cannot be absorbed into the expansion coefficients
in part (b).
(a) The expansion can be done by inspection when the eigenvectors are unit vectors, nevertheless,

| ψ > = Σ_{i=1}^{3} | αi > < αi | ψ >

= ( 1, 0, 0 )ᵀ ( 1, 0, 0 ) (1/√14) ( 1, 2, 3 )ᵀ + ( 0, 1, 0 )ᵀ ( 0, 1, 0 ) (1/√14) ( 1, 2, 3 )ᵀ + ( 0, 0, 1 )ᵀ ( 0, 0, 1 ) (1/√14) ( 1, 2, 3 )ᵀ

= (1/√14) ( 1 + 0 + 0 ) ( 1, 0, 0 )ᵀ + (1/√14) ( 0 + 2 + 0 ) ( 0, 1, 0 )ᵀ + (1/√14) ( 0 + 0 + 3 ) ( 0, 0, 1 )ᵀ

= (1/√14) ( 1, 0, 0 )ᵀ + (2/√14) ( 0, 1, 0 )ᵀ + (3/√14) ( 0, 0, 1 )ᵀ ,

where the last expression is the expansion of the state vector in the eigenbasis of A .
(b) The expansion of the state vector in the eigenbasis of Ly is

| ψ > = Σ_{i=1}^{3} | αi > < αi | ψ >

= (1/2) ( 1, −√2 i, −1 )ᵀ (1/2) ( 1, √2 i, −1 ) (1/√14) ( 1, 2, 3 )ᵀ + (1/√2) ( 1, 0, 1 )ᵀ (1/√2) ( 1, 0, 1 ) (1/√14) ( 1, 2, 3 )ᵀ + (1/2) ( 1, √2 i, −1 )ᵀ (1/2) ( 1, −√2 i, −1 ) (1/√14) ( 1, 2, 3 )ᵀ

= (1/2) ( 1, −√2 i, −1 )ᵀ (1/(2√14)) ( 1 + 2√2 i − 3 ) + (1/√2) ( 1, 0, 1 )ᵀ (1/√28) ( 1 + 0 + 3 ) + (1/2) ( 1, √2 i, −1 )ᵀ (1/(2√14)) ( 1 − 2√2 i − 3 )

= ( (−1 + √2 i)/√14 ) (1/2) ( 1, −√2 i, −1 )ᵀ + ( 4/√28 ) (1/√2) ( 1, 0, 1 )ᵀ + ( (−1 − √2 i)/√14 ) (1/2) ( 1, √2 i, −1 )ᵀ .
Postscript: Combining all the constants in the last line of part (b) yields a simpler expression,
but the simpler expression hides the expansion coefficients ci = < αi | ψ > , and the capability to
calculate probabilities from the simpler expression is compromised. The expansion coefficients are
so closely related to probabilities that they are also known as probability amplitudes.
Using expansion coefficients to calculate probabilities is by far the easiest method for
some of the problems that we will encounter in future chapters. The other reason to introduce
the technique of expansion in an eigenbasis is that it is essential to the time evolution of the
stationary states that are the solutions to the time-independent Schrodinger equation.
16. Discuss how the time-independent Schrodinger equation, H | Ei > = Ei | Ei > , follows from
the time-dependent Schrodinger equation stated in postulate 6.
The Hamiltonian is the total energy operator. Energy is an observable quantity so the Hamil-
tonian is necessarily a Hermitian operator per postulate 2. Any state vector can be expanded in
terms of the eigenvectors of the Hermitian Hamiltonian. The eigenvectors of the Hamiltonian are
the energy eigenvectors, represented | Ei > , and the eigenvalues of the Hamiltonian are the energy
eigenvalues, denoted Ei . Thus,
H | ψ > = ih̄ (d/dt) | ψ > −→ H | ψ > = E | ψ > −→ H | Ei > = E | Ei > ,
and assuming that H ≠ H (t) , the last equation is simply an eigenvalue/eigenvector equation so
that E can be nothing other than the energy eigenvalues, or H | Ei > = Ei | Ei > .
Postscript: The time-dependent Schrodinger equation must be used when H = H (t) . There
are some exceptions, but the time-dependent Schrodinger equation is usually difficult or impossible
to solve analytically. The usual approach to a Schrodinger equation with a weakly time-dependent
Hamiltonian is to find a time-independent solution and then model the time dependence as a
perturbation. A numerical solution is often the only recourse if the Hamiltonian is strongly time
dependent.
We will derive the representations introduced in problem 14 in later problems. The representation E → ih̄ (d/dt) is different. This representation is beyond our scope.
17. Show that | Ei (t) > = e^(−iEi t/h̄) | Ei > for a system described by a time-independent Hamiltonian and a time-dependent state vector.
Use the given conditions of time-dependence of the state vector and time-independence of the
Hamiltonian in the time-dependent Schrodinger equation to reason that
ih̄ (d/dt) | Ei (t) > = Ei | Ei (t) >     (1)
for individual eigenstates of H . There are many fundamentals used to arrive at this equation. A
time-dependent state vector can be denoted | ψ (t) > . Is it a superposition of eigenstates? Why
the energy eigenstates? Why are the energy eigenstates functions of time? Why can the equation
be written for individual eigenstates? Why are the eigenvalues constants, Ei and not Ei (t) ? The
answers to these questions lie in properties of the eigenvalue/eigenvector equation, the technique
of expanding a state vector, and postulate 1. Equation (1) is a variables separable differential
equation, so separate the variables and integrate both sides from 0 to t . The last step is to
substitute the conventional notation | Ei > for | Ei (0) > .
The time-dependent Schrodinger equation is ih̄ (d/dt) | ψ (t) > = H | ψ (t) > , where the time
dependence of the state vector is indicated explicitly. The state vector can be expanded into
a linear combination of the eigenvectors (postulate 1) of the total energy operator H . Denote the
energy eigenstates | Ei (t) > because the eigenbasis is that of the total energy operator H . The
eigenstates must be functions of time if the state vector is a function of time (postulate 1). A
state vector can be an individual eigenstate, therefore the time-dependent Schrodinger equation
applies to each eigenstate individually (postulate 1). If the Hamiltonian is time-independent, the
eigenvalues are time-independent (eigenvalues are determined solely by the operator). Therefore,
ih̄ (d/dt) | Ei (t) > = Ei | Ei (t) > ,     (1)
where the eigenvalues are constants and not functions of time, again, because the Hamiltonian is
independent of time. This is a variables separable differential equation that can be arranged
d | Ei (t) > / | Ei (t) > = (Ei /ih̄) dt   ⇒   ∫₀ᵗ d | Ei (t′) > / | Ei (t′) > = ∫₀ᵗ (Ei /ih̄) dt′ ,
where the independent variable is primed to differentiate it from the upper limit of integration.
Multiplying numerator and denominator of the right side by i , the last equation implies
ln | Ei (t′) > evaluated from 0 to t = (−iEi /h̄) t′ evaluated from 0 to t

⇒ ln | Ei (t) > − ln | Ei (0) > = −iEi t/h̄

⇒ ln ( | Ei (t) > / | Ei (0) > ) = −iEi t/h̄

⇒ | Ei (t) > / | Ei (0) > = e^(−iEi t/h̄)

⇒ | Ei (t) > = e^(−iEi t/h̄) | Ei (0) > = e^(−iEi t/h̄) | Ei >
Postscript: The time evolution of an energy eigenstate is described by the product of e^(−iEi t/h̄)
and that energy eigenstate. The state vector is a superposition of all the eigenstates, so

| ψ > = Σᵢ ci | Ei >   ⇒   | ψ (t) > = Σᵢ ci | Ei > e^(−iEi t/h̄)

where H ≠ H (t) . The energy eigenstates | Ei > are known as stationary states. The probability
of a measurement is unaffected when H ≠ H (t) . The probabilities are “stationary” as time
advances. Expectation values and uncertainties are “stationary” since probabilities are unaffected.
Stationary states are the result of time being separable from other observable quantities. The
prerequisite for time being separable from other observables is a time-independent Hamiltonian.
This problem explicitly uses time as an independent variable within ket vectors. Time is the
only quantity that can be used this way. Time is not an observable quantity in the same sense
as position and momentum. Time is intrinsic to all spaces. The notation | ψ (t) > says only that
time moves forward (or backward) in every space. | ψ (t) > may be represented in any space
by forming the inner product with an appropriate bra, for instance, < x | ψ (t) > = ψ (x, t) in
position space and < p | ψ (t) > = ψ̂ (p, t) in momentum space.
(b) Calculate the probability of each possible result of a measurement of energy as the state vector
evolves in time.
Part (a) requires you to apply the result of problem 17. Remember that | ψ (t) > is a superposition
of all time-evolving eigenstates, in this case
| ψ (t) > = Σᵢ ci | Ei > e^(−iEi t/h̄) , summing over i = 1, 2, 3 .
Having expanded this state vector in the eigenbasis of this operator previously, there is little to do
for part (a) except to write the answer. Part (b) is a numerical example illustrating the fact that
the time evolution of stationary states does not affect calculations of probabilities.
(a) The energy eigenvalues corresponding to the energy eigenvectors are Ei = 3, 4, and 5 , so
| ψ (t) > = (1/√14)(1, 0, 0)ᵀ e^(−i3t/h̄) + (2/√14)(0, 1, 0)ᵀ e^(−i4t/h̄) + (3/√14)(0, 0, 1)ᵀ e^(−i5t/h̄) = (1/√14)( e^(−i3t/h̄) , 2 e^(−i4t/h̄) , 3 e^(−i5t/h̄) )ᵀ ,
(b) The probabilities are
P (E = 3) = | (1, 0, 0) (1/√14)( e^(−i3t/h̄) , 2 e^(−i4t/h̄) , 3 e^(−i5t/h̄) )ᵀ |² = (1/14) | e^(−i3t/h̄) |² = (1/14)( e^(−i3t/h̄) )( e^(+i3t/h̄) ) = (1/14) e^0 = 1/14 ,

P (E = 4) = | (0, 1, 0) (1/√14)( e^(−i3t/h̄) , 2 e^(−i4t/h̄) , 3 e^(−i5t/h̄) )ᵀ |² = (1/14) | 2 e^(−i4t/h̄) |² = (1/14)( 2 e^(−i4t/h̄) )( 2 e^(+i4t/h̄) ) = (4/14) e^0 = 4/14 ,

P (E = 5) = | (0, 0, 1) (1/√14)( e^(−i3t/h̄) , 2 e^(−i4t/h̄) , 3 e^(−i5t/h̄) )ᵀ |² = (1/14) | 3 e^(−i5t/h̄) |² = (1/14)( 3 e^(−i5t/h̄) )( 3 e^(+i5t/h̄) ) = (9/14) e^0 = 9/14 .
The probabilities are independent of time and they are the same probabilities that were attained
when time was not considered.
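The cancellation of the time-dependent phases can also be seen numerically. A sketch (Python with numpy assumed, h̄ set to 1; neither is part of the original problem) that evolves the expansion coefficients of this problem and recomputes the probabilities at several times:

```python
import numpy as np

hbar = 1.0
E = np.array([3.0, 4.0, 5.0])                   # energy eigenvalues of the diagonal H
c0 = np.array([1.0, 2.0, 3.0]) / np.sqrt(14)    # expansion coefficients at t = 0

for t in (0.0, 1.7, 42.0):
    c_t = c0 * np.exp(-1j * E * t / hbar)       # stationary-state time evolution
    P = np.abs(c_t) ** 2                        # |c_i(t)|^2
    print(t, P)                                 # always [1/14, 4/14, 9/14]
```

Each coefficient acquires only a phase of unit modulus, so | ci (t) |² never changes.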
(a) If the energy is measured, what results can be obtained, and with what probabilities will these
results be obtained?
(b) Calculate the expectation value < H > = < ψ (0) | H | ψ (0) > . Then show that your
expectation value agrees with your calculations from part (a) using < H > = Σᵢ P (Ei ) Ei .
(c) Expand the initial state vector | ψ (0) > in the energy eigenbasis to calculate the time depen-
dent state vector | ψ (t) > .
(d) If the energy is measured at time t , what results can be obtained, and with what probabilities
will these results be obtained? Compare your answers with the t = 0 case of part (a). Explain
why these probabilities are independent of time even though the state vector is time dependent.
(e) Suppose that you measure the energy of the system at t = 0 and you find E = 7 . What
is the state vector of the system immediately after your measurement? Now let the system
evolve without any additional measurements until t = 10 . What is the state vector | ψ (10) >
at t = 10 ? What energies will you measure if you repeat the energy measurement at t = 10 ?
This problem is intended to provide insight into the meaning and applications of the postulates
of quantum mechanics. The first questions for a measurement of any system are what are the
possibilities and what are their respective probabilities? Postulate 3 addresses the possibilities
and postulate 4 determines the respective probabilities for part (a). Notice that the given state
vector is normalized. Using your probabilities and eigenvalues from part (a), you must find that
< ψ (0) | H | ψ (0) > = Σᵢ P (Ei ) Ei for part (b). Use the procedures of problem 15 to expand
the state vector in the energy eigenbasis. Use the procedures of problem 17 illustrated in problem
18 for part (c). Postulate 3 addresses the possibilities and postulate 4 determines the respective
probabilities for part (d), without regard to time dependence. You have made an error if your
answers do not agree with part (a). Remember to complex conjugate the components when you
form bras! When you find E = 7 for part (e), the state vector changes in accordance with postulate
5, so | ψ (0) > → | E = 7 > . There is one possible result with a probability of 1, and the other
two possible results have probability zero.
(a) The only possible results of the measurement of energy are the energy eigenvalues which are
3, 5, and 7 . The probabilities are
P (Ei = 3) = | (1, 0, 0) (1/√14)(1, 3, 2)ᵀ |² = | (1/√14)(1) |² = 1/14 ,

P (Ei = 5) = | (0, 1, 0) (1/√14)(1, 3, 2)ᵀ |² = | (1/√14)(3) |² = 9/14 ,

P (Ei = 7) = | (0, 0, 1) (1/√14)(1, 3, 2)ᵀ |² = | (1/√14)(2) |² = 4/14 .
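These numbers, and the part (b) expectation value, can be checked in a few lines. The sketch below assumes Python with numpy and takes H = diag(3, 5, 7) , which is an assumption consistent with the eigenvalues quoted above but not written out in the text:

```python
import numpy as np

psi0 = np.array([1.0, 3.0, 2.0]) / np.sqrt(14)   # given normalized state vector
E = np.array([3.0, 5.0, 7.0])                    # energy eigenvalues

# Probabilities against the unit-vector energy eigenbasis.
P = np.abs(np.eye(3) @ psi0) ** 2
print(P)                                         # [1/14, 9/14, 4/14]

# Both forms of the expectation value agree: <psi|H|psi> = sum_i P(E_i) E_i.
exp_H = psi0 @ np.diag(E) @ psi0
print(exp_H, P @ E)                              # both equal 76/14
```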
With the time zero expansion, we can easily write the complete time-dependent state vector
| ψ (t) > = Σᵢ | Ei >< Ei | ψ (0) > e^(−iEi t/h̄)

= (1/√14)(1, 0, 0)ᵀ e^(−i3t/h̄) + (3/√14)(0, 1, 0)ᵀ e^(−i5t/h̄) + (2/√14)(0, 0, 1)ᵀ e^(−i7t/h̄) = (1/√14)( e^(−i3t/h̄) , 3 e^(−i5t/h̄) , 2 e^(−i7t/h̄) )ᵀ .
(d) At any time, the only possible results of a measurement of energy are the eigenenergies of the
system, which are 3, 5 , and 7 . The “time-dependent” probabilities are
P (3) = | (1, 0, 0) e^(+i3t/h̄) (1/√14)( e^(−i3t/h̄) , 3 e^(−i5t/h̄) , 2 e^(−i7t/h̄) )ᵀ |² = | (1/√14)(1 + 0 + 0) e^0 |² = 1/14 ,

P (5) = | (0, 1, 0) e^(+i5t/h̄) (1/√14)( e^(−i3t/h̄) , 3 e^(−i5t/h̄) , 2 e^(−i7t/h̄) )ᵀ |² = | (1/√14)(0 + 3 + 0) e^0 |² = 9/14 ,

P (7) = | (0, 0, 1) e^(+i7t/h̄) (1/√14)( e^(−i3t/h̄) , 3 e^(−i5t/h̄) , 2 e^(−i7t/h̄) )ᵀ |² = | (1/√14)(0 + 0 + 2) e^0 |² = 4/14 .
These are exactly the same probabilities obtained in part (a). There is no time dependence in the
probabilities because the eigenvectors of the Hamiltonian have only a time-dependent phase that
“cancels” in the sense that e^0 = 1 in this calculation. Time dependency will “cancel” in one
way or another in all cases that the Hamiltonian is independent of time, i.e., H ≠ H (t) . The
probabilities are “stationary” in time, which is the meaning of the term “stationary states.”
(e) An energy measurement with the result E = 7 forces the system into the energy eigenstate
| ψ (t > 0) > = (0, 0, 1)ᵀ e^(−i7t/h̄) ⇒ | ψ (t = 10) > = (0, 0, 1)ᵀ e^(−i7(10)/h̄) ,
(a) Is H Hermitian?
(b) Solve the eigenvalue/eigenvector problem to attain the eigenvalues and eigenvectors of H .
(c) If the energy is measured, what results can be obtained, and with what probabilities will these
results be obtained?
(d) Calculate the expectation value of the Hamiltonian using both

< H > = < ψ (0) | H | ψ (0) > and < H > = Σᵢ P (Ei ) Ei .
This problem emphasizes the postulates and their applications using a Hamiltonian that is not
diagonal. It is in two dimensions to minimize the calculations though you will likely find the calcu-
lations to be substantial. Parts (c) through (f) may be more interesting than similar calculations
using diagonal matrices because the off-diagonal elements contribute cross terms. Review chapter
1 techniques to diagonalize the Hamiltonian for part (g) if required. Understanding diagonalization
is the initial step to understanding simultaneous diagonalization, and simultaneous diagonalization
is the foundation underlying the essential concept of a complete set of commuting observables.
Expand the state vector in the energy eigenbasis using | ψ (0) > = Σⱼ | Ej >< Ej | ψ (0) >
where | Ej > are the normalized eigenvectors of the Hamiltonian matrix for part (e). The only
time dependence is that of the energy eigenvectors | Ej (t) > = exp(−iEj t/h̄) | Ej (0) > , per
previous problems. The probabilities for part (f) are identical to those from part (c). Transform
H using U † H U for part (g). Form U from the eigenvectors of H as done in part 2 of chapter 1.
Transform the state vector U † | ψ (0) > to establish it in the same basis as U † H U . Of course,
probabilities for part (h) are identical to those found in parts (c) and (f). You should find
calculations done in the basis in which the Hamiltonian is diagonal to be shorter and easier than
in the non-diagonal basis.
(a) H† = ( (1, 3; 3, 1)ᵀ )* = (1*, 3*; 3*, 1*) = (1, 3; 3, 1) = H ,
therefore, H is Hermitian. Postulate 2 says that this is important to quantum mechanics.
(b) To attain the eigenvalues, det (1 − α, 3; 3, 1 − α) = (1 − α)² − 9 = 0

⇒ 1 − 2α + α² − 9 = 0 ⇒ α² − 2α − 8 = 0 ⇒ (α − 4)(α + 2) = 0 ,

⇒ α = −2, 4 , are the eigenvalues of the Hamiltonian operator. For α = −2 ,

(1, 3; 3, 1)(a, b)ᵀ = −2 (a, b)ᵀ ⇒ a + 3b = −2a and 3a + b = −2b ⇒ b = −a and a = −b

⇒ | − 2 > = A (1, −1)ᵀ ⇒ | − 2 > = (1/√2)(1, −1)ᵀ

is the normalized eigenvector. The eigenvector corresponding to the eigenvalue 4 is

(1, 3; 3, 1)(a, b)ᵀ = 4 (a, b)ᵀ ⇒ a + 3b = 4a and 3a + b = 4b ⇒ b = a and a = b

⇒ | 4 > = A (1, 1)ᵀ ⇒ | 4 > = (1/√2)(1, 1)ᵀ is the normalized eigenvector.
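The hand calculation can be confirmed with numpy's Hermitian eigensolver. This sketch assumes numpy, which is not part of the problem; `numpy.linalg.eigh` returns eigenvalues in ascending order and eigenvectors as columns:

```python
import numpy as np

H = np.array([[1.0, 3.0],
              [3.0, 1.0]])

evals, evecs = np.linalg.eigh(H)    # for Hermitian matrices; ascending eigenvalues
print(evals)                        # [-2.  4.]

# Each column of evecs is a normalized eigenvector, H|v> = alpha|v>,
# agreeing with (1, -1)/sqrt(2) and (1, 1)/sqrt(2) up to overall sign.
for alpha, v in zip(evals, evecs.T):
    assert np.allclose(H @ v, alpha * v)
    assert np.isclose(np.linalg.norm(v), 1.0)
```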
(f) The complex components are conjugated to form time-dependent bras so the probabilities are
P (−2) = | (1/√2)(1, −1) e^(−i2t/h̄) (1/(2√13))( −e^(i2t/h̄) + 5 e^(−i4t/h̄) , e^(i2t/h̄) + 5 e^(−i4t/h̄) )ᵀ |²

= | (1/(2√26))( −e^((−i2t+i2t)/h̄) + 5 e^((−i2t−i4t)/h̄) − e^((−i2t+i2t)/h̄) − 5 e^((−i2t−i4t)/h̄) ) |²

= | (1/(2√26))( −1 + 5 e^(−i6t/h̄) − 1 − 5 e^(−i6t/h̄) ) |² = | −1/√26 |² = 1/26 ,

P (4) = | (1/√2)(1, 1) e^(i4t/h̄) (1/(2√13))( −e^(i2t/h̄) + 5 e^(−i4t/h̄) , e^(i2t/h̄) + 5 e^(−i4t/h̄) )ᵀ |²

= | (1/(2√26))( −e^((i4t+i2t)/h̄) + 5 e^((i4t−i4t)/h̄) + e^((i4t+i2t)/h̄) + 5 e^((i4t−i4t)/h̄) ) |²

= | (1/(2√26))( −e^(i6t/h̄) + 5 + e^(i6t/h̄) + 5 ) |² = | 5/√26 |² = 25/26 .
(g) Placing the eigenvector corresponding to the smaller eigenvalue on the left and the eigenvector
corresponding to the larger eigenvalue on the right yields the unitary transformation matrix,
U = (1/√2)(1, 1; −1, 1) ⇒ U † = (1/√2)(1, −1; 1, 1)

⇒ U † H U = (1/√2)(1, −1; 1, 1) (1, 3; 3, 1) (1/√2)(1, 1; −1, 1) = (1/2)(1, −1; 1, 1)(−2, 4; 2, 4)

= (1/2)(−2 − 2, 4 − 4; −2 + 2, 4 + 4) = (−2, 0; 0, 4) .
Since we transform operators as U † H U , the kets transform as U † | ψ > . The state vector is
U † | ψ (0) > = (1/√2)(1, −1; 1, 1) (1/√13)(2, 3)ᵀ = (1/√26)(2 − 3, 2 + 3)ᵀ = (1/√26)(−1, 5)ᵀ .
The eigenvectors are easily found by inspection but notice that they also transform correctly,
U † | − 2 > = (1/√2)(1, −1; 1, 1) (1/√2)(1, −1)ᵀ = (1/2)(1 + 1, 1 − 1)ᵀ = (1, 0)ᵀ ,

U † | 4 > = (1/√2)(1, −1; 1, 1) (1/√2)(1, 1)ᵀ = (1/2)(1 − 1, 1 + 1)ᵀ = (0, 1)ᵀ .
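A quick numerical check of the transformation (numpy assumed, as in the earlier sketches):

```python
import numpy as np

H = np.array([[1.0, 3.0], [3.0, 1.0]])
U = np.array([[1.0, 1.0],
              [-1.0, 1.0]]) / np.sqrt(2)    # eigenvector columns, ascending eigenvalue

Hd = U.conj().T @ H @ U                     # U^dagger H U
print(np.round(Hd, 12))                     # diag(-2, 4)

psi0 = np.array([2.0, 3.0]) / np.sqrt(13)
psi0_d = U.conj().T @ psi0                  # state vector in the diagonal basis
print(psi0_d * np.sqrt(26))                 # (-1, 5)
```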
(h) The t = 0 expansion is
| ψ (0) > = (1, 0)ᵀ (1, 0) (1/√26)(−1, 5)ᵀ + (0, 1)ᵀ (0, 1) (1/√26)(−1, 5)ᵀ

= (1/√26)(−1 + 0)(1, 0)ᵀ + (1/√26)(0 + 5)(0, 1)ᵀ = −(1/√26)(1, 0)ᵀ + (5/√26)(0, 1)ᵀ ,

⇒ | ψ (t) > = −(1/√26)(1, 0)ᵀ e^(i2t/h̄) + (5/√26)(0, 1)ᵀ e^(−i4t/h̄) = (1/√26)( −e^(i2t/h̄) , 5 e^(−i4t/h̄) )ᵀ .
Probabilities are
P (−2) = | (1, 0) (1/√26)( −e^(i2t/h̄) , 5 e^(−i4t/h̄) )ᵀ |² = | (1/√26)( −e^(i2t/h̄) ) |² = (1/26)( e^(+i2t/h̄) )( e^(−i2t/h̄) ) = 1/26 ,

P (4) = | (0, 1) (1/√26)( −e^(i2t/h̄) , 5 e^(−i4t/h̄) )ᵀ |² = | (1/√26)( 5 e^(−i4t/h̄) ) |² = (1/26)( 5 e^(+i4t/h̄) )( 5 e^(−i4t/h̄) ) = 25/26 .
(i) According to postulate 5, if you measure an energy of E = −2 , the t = 0 state vector becomes

| ψ′ (0) > = (1, 0)ᵀ ⇒ | ψ′ (t) > = (1, 0)ᵀ e^(i2t/h̄)

in the diagonal basis, and | ψ′ (t) > = (1/√2)(1, −1)ᵀ e^(i2t/h̄)
in the original non-diagonal basis. Using operators, bras, and kets consistent with the appropriate
basis, identical results necessarily follow. For instance,
P (−2) = | (1, 0) e^(−i2t/h̄) (1, 0)ᵀ e^(i2t/h̄) |² = | (1 + 0) e^0 |² = 1 ,

P (4) = | (0, 1) e^(i4t/h̄) (1, 0)ᵀ e^(i2t/h̄) |² = | (0 + 0) e^(i6t/h̄) |² = 0 ,

P (−2) = | (1/√2)(1, −1) e^(−i2t/h̄) (1/√2)(1, −1)ᵀ e^(i2t/h̄) |² = | (1/2)(1 + 1) e^0 |² = 1 ,

P (4) = | (1/√2)(1, 1) e^(i4t/h̄) (1/√2)(1, −1)ᵀ e^(i2t/h̄) |² = | (1/2)(1 − 1) e^(i6t/h̄) |² = 0 .
(a) Show that H and Λ commute, i.e., show that [ H, Λ ] = 0 .
(b) If the energy is measured, what results can be obtained and with what probabilities will
these results be obtained? If Λ is measured, what results can be obtained and with what
probabilities will these results be obtained?
(c) Calculate the expectation values of the Hamiltonian < H > and the Lambda operator < Λ >
using the initial state vector. Then show that your expectation values agree with your part
(b) probabilities and eigenvalues by using the general expression < Ω > = Σᵢ P (ωi ) ωi .
(d) Calculate the time dependent state vector | ψ (t) > in the energy eigenbasis.
(e) Transform to the basis that simultaneously diagonalizes H and Λ . Calculate the new form
of the initial state vector | ψ (0) > in this diagonal basis.
(f) Repeat parts (b) and (c) in the diagonal basis. Which basis do you prefer? Why?
(g) Calculate the time evolution of the state vector in the diagonal basis. Calculate the possibilities
and probabilities of measuring the energy, Ei , and the “lambda-ness,” λj , at time t .
(h) Describe the state vector immediately after each measurement and the result of each mea-
surement if you do a gedanken experiment by alternating H and Λ measurements starting
with an H measurement, i.e., you measure H, Λ, H, Λ, H,. . . , for the three possible cases:
This problem features two Hermitian operators that commute. Operators that commute have a
common set of eigenvectors. The eigenvectors of Hermitian operators are orthogonal so can be
made orthonormal. Λ is degenerate so that a measurement of the eigenvalue 0 does not uniquely
determine the state vector of the system. Since Λ commutes with H , together they form a
complete set of commuting observables for the system. Part (i) should reinforce that the meaning
of a complete set of commuting observables is that the state vector of the system can be uniquely
determined by making two measurements in the instance that one of the operators is degenerate.
This problem unifies significant amounts of the mathematics of chapter 1 and quantitative
interpretations of the postulates. The results of problem 38 of part 2 of chapter 1 are easily
adapted to some of the questions for this problem because it uses the same two operators. Work
to understand the concept of a complete set of commuting observables in the light of two, small-
dimensional operators, one of which is degenerate. A complete set of commuting observables
is often necessary to describe realistic systems in infinite-dimensional or continuous space—the
concept is the same encountered here.
Operators that commute have a common eigenbasis. It is essentially impossible to find a com-
mon eigenbasis by solving the eigenvalue/eigenvector problem for a degenerate operator. Solve the
eigenvalue/eigenvector problem for the non-degenerate operator H to find the common eigenbasis
| E = −1 > = (1/√6)(1, −2, −1)ᵀ , | E = 2 > = (1/√3)(1, 1, −1)ᵀ , | E = 3 > = (1/√2)(1, 0, 1)ᵀ , and

| λ1 = 0 > = (1/√6)(1, −2, −1)ᵀ , | λ2 = 0 > = (1/√3)(1, 1, −1)ᵀ , | λ = 2 > = (1/√2)(1, 0, 1)ᵀ .
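That this is a common eigenbasis can be verified directly. The sketch below assumes numpy and uses the H and Λ matrices that appear in the eigenvalue/eigenvector work of this problem:

```python
import numpy as np

H = np.array([[2.0, 1.0, 1.0],
              [1.0, 0.0, -1.0],
              [1.0, -1.0, 2.0]])
L = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

assert np.allclose(H @ L, L @ H)          # [H, Lambda] = 0

# The common eigenbasis quoted above, with its H and Lambda eigenvalues.
vecs = [np.array([1.0, -2.0, -1.0]) / np.sqrt(6),
        np.array([1.0, 1.0, -1.0]) / np.sqrt(3),
        np.array([1.0, 0.0, 1.0]) / np.sqrt(2)]
E_vals = [-1.0, 2.0, 3.0]
l_vals = [0.0, 0.0, 2.0]

for v, E, lam in zip(vecs, E_vals, l_vals):
    assert np.allclose(H @ v, E * v)      # eigenvector of H...
    assert np.allclose(L @ v, lam * v)    # ...and of Lambda
print("common eigenbasis verified")
```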
Parts (b) and (c) are straightforward calculations using postulates 3, 4, and appropriate expressions
for expectation values. Part (d) asks for the energy eigenbasis so expand the state vector in the
eigenvectors of the Hamiltonian. You should find that the unitary operator for part (e) is
U = ( 1/√6, 1/√3, 1/√2 ; −2/√6, 1/√3, 0 ; −1/√6, −1/√3, 1/√2 ) ⇒ U † H U = ( −1, 0, 0 ; 0, 2, 0 ; 0, 0, 3 ) , and U † Λ U = ( 0, 0, 0 ; 0, 0, 0 ; 0, 0, 2 ) .
Add the top and middle equations to attain b = 0 ⇒ c = a . Following our convention, let
a = 1 ⇒ c = 1 then normalize,

(1, 0, 1) A* A (1, 0, 1)ᵀ = | A |² (1 + 0 + 1) = | A |² (2) = 1 ⇒ A = 1/√2 ⇒ | 3 > = (1/√2)(1, 0, 1)ᵀ .
β = 2 ⇒ (2, 1, 1; 1, 0, −1; 1, −1, 2)(a, b, c)ᵀ = 2 (a, b, c)ᵀ ⇒ 2a + b + c = 2a , a − c = 2b , a − b + 2c = 2c ⇒ b = −c and a = b ,
using the top equation to attain the middle equation. Let a = 1 ⇒ b = 1 and c = −1 and
(1, 1, −1) A* A (1, 1, −1)ᵀ = | A |² (1 + 1 + 1) = | A |² (3) = 1 ⇒ A = 1/√3 ⇒ | 2 > = (1/√3)(1, 1, −1)ᵀ .
β = −1 ⇒ (2, 1, 1; 1, 0, −1; 1, −1, 2)(a, b, c)ᵀ = −(a, b, c)ᵀ ⇒ 2a + b + c = −a , a − c = −b , a − b + 2c = −c ⇒ 3a + b + c = 0 , a + b − c = 0 , a − b + 3c = 0 ,
where adding the middle and bottom equations yields a = −c ⇒ b = −2a . Choose
a = 1 ⇒ c = −1 and b = −2 then normalize,
(1, −2, −1) A* A (1, −2, −1)ᵀ = | A |² (1 + 4 + 1) = | A |² (6) = 1 ⇒ A = 1/√6 ⇒ | −1 > = (1/√6)(1, −2, −1)ᵀ .
(1, 0, 1; 0, 0, 0; 1, 0, 1)(a, b, c)ᵀ = 2 (a, b, c)ᵀ ⇒ a + c = 2a , 0 = 2b , a + c = 2c ⇒ a = c and b = 0 ⇒ | λ = 2 > = (1/√2)(1, 0, 1)ᵀ ,
in a normalization procedure identical to that of | E = 3 > . The other two eigenvalues are λ = 0 .
We now desire two eigenvectors that are orthonormal to | λ = 2 > and each other. We know that
eigenvectors exist that meet these conditions because Λ is Hermitian. There is, however, only one
eigenvector equation,
λ = 0 ⇒ (1, 0, 1; 0, 0, 0; 1, 0, 1)(a, b, c)ᵀ = 0 (a, b, c)ᵀ ⇒ a + c = 0 , 0 = 0 , a + c = 0 ⇒ a = −c .
The middle component is arbitrary—it can be anything. You know that Λ and H share a common
eigenbasis because they commute, so choose
| λ = 0 > = (1/√6)(1, −2, −1)ᵀ and | λ = 0 > = (1/√3)(1, 1, −1)ᵀ .
These choices satisfy the requirement of the eigenvector equation (the top component is the negative of the bottom component), and they are orthogonal and normalized, that is, already orthonormal.
The possibilities of a measurement are the eigenvalues per postulate 3. The probabilities of
any given measurement are | < ωi | ψ (0) > |² per postulate 4. If the energy is measured, the possible
results are the eigenvalues of the Hamiltonian, −1, 2, or 3 . The corresponding probabilities are
P (−1) = | (1/√6)(1, −2, −1) (1/√3)(1, 1, 1)ᵀ |² = | (1/√18)(1 − 2 − 1) |² = | −2/√18 |² = 4/18 = 2/9 ,

P (2) = | (1/√3)(1, 1, −1) (1/√3)(1, 1, 1)ᵀ |² = | (1/3)(1 + 1 − 1) |² = | 1/3 |² = 1/9 ,

P (3) = | (1/√2)(1, 0, 1) (1/√3)(1, 1, 1)ᵀ |² = | (1/√6)(1 + 0 + 1) |² = | 2/√6 |² = 4/6 = 2/3 .
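These three probabilities are easy to verify numerically (numpy assumed, as elsewhere in these sketches):

```python
import numpy as np

psi0 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
bras = [np.array([1.0, -2.0, -1.0]) / np.sqrt(6),   # <E = -1|
        np.array([1.0, 1.0, -1.0]) / np.sqrt(3),    # <E = 2|
        np.array([1.0, 0.0, 1.0]) / np.sqrt(2)]     # <E = 3|

P = [abs(b @ psi0) ** 2 for b in bras]
print(P)           # [2/9, 1/9, 2/3]
print(sum(P))      # 1.0
```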
and then multiplying by the appropriate time-dependent phase factors,
| ψ (t) > = Σⱼ | j >< j | ψ (0) > e^(−iEj t/h̄)

= −(1/(3√3))(1, −2, −1)ᵀ e^(+it/h̄) + (1/(3√3))(1, 1, −1)ᵀ e^(−i2t/h̄) + (1/√3)(1, 0, 1)ᵀ e^(−i3t/h̄)

= (1/(3√3)) ( −e^(it/h̄) + e^(−i2t/h̄) + 3 e^(−i3t/h̄) , 2 e^(it/h̄) + e^(−i2t/h̄) , e^(it/h̄) − e^(−i2t/h̄) + 3 e^(−i3t/h̄) )ᵀ
in its most compact form. The probabilities at t > 0 are the same as at t = 0 , but that
question is not asked here because the algebra with the cross terms can be numbing. We
hope the point was made in the previous two dimensional problem. A more efficient approach is
transforming to a diagonal basis.
(e) Per part (a), [ Λ, H ] = [ H, Λ ] = 0 . The unitary matrix U formed from the eigenvectors
of H by placing them from left to right in order of ascending eigenvalue is

U = ( 1/√6, 1/√3, 1/√2 ; −2/√6, 1/√3, 0 ; −1/√6, −1/√3, 1/√2 ) ⇒ U † = ( 1/√6, −2/√6, −1/√6 ; 1/√3, 1/√3, −1/√3 ; 1/√2, 0, 1/√2 ) .
The diagonal operators are

U † H U = ( 1/√6, −2/√6, −1/√6 ; 1/√3, 1/√3, −1/√3 ; 1/√2, 0, 1/√2 ) ( 2, 1, 1 ; 1, 0, −1 ; 1, −1, 2 ) ( 1/√6, 1/√3, 1/√2 ; −2/√6, 1/√3, 0 ; −1/√6, −1/√3, 1/√2 )

= ( 1/√6, −2/√6, −1/√6 ; 1/√3, 1/√3, −1/√3 ; 1/√2, 0, 1/√2 ) ( −1/√6, 2/√3, 3/√2 ; 2/√6, 2/√3, 0 ; 1/√6, −2/√3, 3/√2 ) = ( −1, 0, 0 ; 0, 2, 0 ; 0, 0, 3 ) ,

U † Λ U = ( 1/√6, −2/√6, −1/√6 ; 1/√3, 1/√3, −1/√3 ; 1/√2, 0, 1/√2 ) ( 1, 0, 1 ; 0, 0, 0 ; 1, 0, 1 ) ( 1/√6, 1/√3, 1/√2 ; −2/√6, 1/√3, 0 ; −1/√6, −1/√3, 1/√2 )

= ( 1/√6, −2/√6, −1/√6 ; 1/√3, 1/√3, −1/√3 ; 1/√2, 0, 1/√2 ) ( 0, 0, 2/√2 ; 0, 0, 0 ; 0, 0, 2/√2 ) = ( 0, 0, 0 ; 0, 0, 0 ; 0, 0, 2 ) .
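The simultaneous diagonalization can be checked with a few matrix products (numpy assumed):

```python
import numpy as np

H = np.array([[2.0, 1.0, 1.0], [1.0, 0.0, -1.0], [1.0, -1.0, 2.0]])
L = np.array([[1.0, 0.0, 1.0], [0.0, 0.0, 0.0], [1.0, 0.0, 1.0]])

s6, s3, s2 = np.sqrt(6), np.sqrt(3), np.sqrt(2)
U = np.array([[1/s6, 1/s3, 1/s2],
              [-2/s6, 1/s3, 0.0],
              [-1/s6, -1/s3, 1/s2]])   # eigenvector columns, ascending H eigenvalue

Hd = U.conj().T @ H @ U
Ld = U.conj().T @ L @ U
print(np.round(Hd, 12))    # diag(-1, 2, 3)
print(np.round(Ld, 12))    # diag(0, 0, 2): the same U diagonalizes Lambda
```

One unitary matrix diagonalizes both operators precisely because they commute.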
If we transform the operators as U † H U , we must transform the kets as U † | ψ > , so

U † | ψ (0) > = ( 1/√6, −2/√6, −1/√6 ; 1/√3, 1/√3, −1/√3 ; 1/√2, 0, 1/√2 ) (1/√3)(1, 1, 1)ᵀ

= (1/√3) ( (1 − 2 − 1)/√6 , (1 + 1 − 1)/√3 , (1 + 0 + 1)/√2 )ᵀ = (1/√3) ( −2/√6 , 1/√3 , 2/√2 )ᵀ = (1/√18) ( −2 , √2 , 2√3 )ᵀ .
(f) The eigenvalues are on the main diagonals so are trivial to attain for both operators. The
eigenvectors are also found by inspection. The probabilities of measuring the eigenvalues of H are

P (−1) = | (1, 0, 0) (1/√18)(−2, √2, 2√3)ᵀ |² = | −2/√18 |² = 4/18 = 2/9 ,

P (2) = | (0, 1, 0) (1/√18)(−2, √2, 2√3)ᵀ |² = | √2/√18 |² = 2/18 = 1/9 ,

P (3) = | (0, 0, 1) (1/√18)(−2, √2, 2√3)ᵀ |² = | 2√3/√18 |² = 12/18 = 2/3 .
Identical calculations for Λ yield P (λ1 = 0) = 2/9 , P (λ2 = 0) = 1/9 , and P (λ = 2) = 2/3 . In
agreement with previous results, expectation values are
< H > = (1/√18)( −2, √2, 2√3 ) ( −1, 0, 0 ; 0, 2, 0 ; 0, 0, 3 ) (1/√18)( −2, √2, 2√3 )ᵀ

= (1/18)( −2, √2, 2√3 ) ( 2, 2√2, 6√3 )ᵀ = (1/18)( −4 + 4 + 36 ) = 36/18 = 2 ,

< Λ > = (1/√18)( −2, √2, 2√3 ) ( 0, 0, 0 ; 0, 0, 0 ; 0, 0, 2 ) (1/√18)( −2, √2, 2√3 )ᵀ

= (1/18)( −2, √2, 2√3 ) ( 0, 0, 4√3 )ᵀ = (1/18)( 0 + 0 + 24 ) = 24/18 = 4/3 .
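Both expectation values take one line each in the diagonal basis (numpy assumed):

```python
import numpy as np

psi_d = np.array([-2.0, np.sqrt(2), 2 * np.sqrt(3)]) / np.sqrt(18)  # diagonal-basis state
Hd = np.diag([-1.0, 2.0, 3.0])
Ld = np.diag([0.0, 0.0, 2.0])

exp_H = psi_d @ Hd @ psi_d     # <H> = <psi|H|psi>
exp_L = psi_d @ Ld @ psi_d     # <Lambda>
print(exp_H, exp_L)            # 2.0 and 4/3
```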
The calculations are much simpler in the diagonal basis. The eigenvalue/eigenvector problem for
Λ is made unnecessary by the process of diagonalization. The absence of cross terms is a significant
algebraic advantage. Finally, a basis of unit vectors offers conceptual advantages over a basis of
eigenvectors having multiple non-zero components. (Do the probability calculations for t > 0
using | ψ (t) > from part (d) if you are in need of further argument.)
(g) The time dependence of the state vector in the diagonal basis is
| ψ (t) > = Σⱼ | j >< j | ψ (0) > e^(−iEj t/h̄)

= (1, 0, 0)ᵀ (1, 0, 0) (1/√18)(−2, √2, 2√3)ᵀ e^(it/h̄) + (0, 1, 0)ᵀ (0, 1, 0) (1/√18)(−2, √2, 2√3)ᵀ e^(−i2t/h̄) + (0, 0, 1)ᵀ (0, 0, 1) (1/√18)(−2, √2, 2√3)ᵀ e^(−i3t/h̄)

= −(2/√18)(1, 0, 0)ᵀ e^(it/h̄) + (√2/√18)(0, 1, 0)ᵀ e^(−i2t/h̄) + (2√3/√18)(0, 0, 1)ᵀ e^(−i3t/h̄) = (1/√18)( −2 e^(it/h̄) , √2 e^(−i2t/h̄) , 2√3 e^(−i3t/h̄) )ᵀ .
Probability calculations are duplicative,

P (E = −1) = P (λ1 = 0) = | (1, 0, 0) (1/√18)( −2 e^(it/h̄) , √2 e^(−i2t/h̄) , 2√3 e^(−i3t/h̄) )ᵀ |² = | −(2/√18) e^(it/h̄) |² = (4/18)( e^(−it/h̄) )( e^(+it/h̄) ) = 2/9 ,

P (E = 2) = P (λ2 = 0) = | (0, 1, 0) (1/√18)( −2 e^(it/h̄) , √2 e^(−i2t/h̄) , 2√3 e^(−i3t/h̄) )ᵀ |² = | (√2/√18) e^(−i2t/h̄) |² = (2/18)( e^(+i2t/h̄) )( e^(−i2t/h̄) ) = 1/9 ,

P (E = 3) = P (λ = 2) = | (0, 0, 1) (1/√18)( −2 e^(it/h̄) , √2 e^(−i2t/h̄) , 2√3 e^(−i3t/h̄) )ᵀ |² = | (2√3/√18) e^(−i3t/h̄) |² = (12/18)( e^(+i3t/h̄) )( e^(−i3t/h̄) ) = 2/3 .
These are identical to the t = 0 probabilities, as expected.
(h) If H is measured and E = 2 is obtained, the state vector is | ψ (t > 0) > = (0, 1, 0)ᵀ in
the diagonal basis, per postulate 5. A measurement of Λ can yield only λ = 0 because the
corresponding eigenvector is identical to the state vector and no other eigenvector has a second
component that is non-zero. In other words, P (λ = 0) = 1 , per postulate 4. Subsequent alternate
measurements of H and Λ can yield only E = 2 and λ = 0 . If H is measured and E = 3
is obtained, the state vector is | ψ (t > 0) > = (0, 0, 1)ᵀ in the diagonal basis. A measurement
of Λ can yield only λ = 2 per postulate 4. Subsequent alternate measurements of H and Λ
can yield only E = 3 and λ = 2 . If H is measured and E = −1 is obtained, the state vector
is | ψ (t > 0) > = (1, 0, 0)ᵀ in the diagonal basis. A measurement of Λ can yield only λ = 0 .
Subsequent measurements of H and Λ can yield only E = −1 and λ = 0 .
(i) If we measure Λ at t = 0 and find λ = 0 , then the state vector is one of the two eigenvectors of
Λ corresponding to λ = 0 . We do not know which. Postulates 1 and 4 indicate the best possible
interpretation is that | ψ′ (t > 0) > is the linear combination of the two λ = 0 eigenvectors

| ψ′ (t > 0) > = c1 (1, 0, 0)ᵀ + c2 (0, 1, 0)ᵀ .
Therefore, E = −1 or E = 2 are possible for a subsequent measurement of H . If a measurement
of H yields E = −1 , then the state vector is | ψ″ (t > 0) > = (1, 0, 0)ᵀ . If a measurement of
H yields E = 2 , then the state vector is | ψ″ (t > 0) > = (0, 1, 0)ᵀ . Measurement of both Λ
and H determines the state vector uniquely. In other words, the degeneracy in Λ is removed by
measuring both Λ and H .
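The two-step measurement can be sketched with the projection postulate (numpy assumed; the conditional probabilities below follow from renormalizing after the Λ measurement):

```python
import numpy as np

# State in the diagonal basis; the lambda = 0 subspace is spanned by the first
# two unit vectors, the lambda = 2 eigenspace by the third.
psi = np.array([-2.0, np.sqrt(2), 2 * np.sqrt(3)]) / np.sqrt(18)

# Measure Lambda and obtain 0: project onto the lambda = 0 subspace
# (projection postulate) and renormalize.
P0 = np.diag([1.0, 1.0, 0.0])
psi_after = P0 @ psi
psi_after = psi_after / np.linalg.norm(psi_after)

# The follow-up H measurement has only two possible outcomes left; obtaining
# one of them picks a single unit vector and removes the degeneracy.
P_E = np.abs(psi_after) ** 2
print(P_E)     # [2/3, 1/3, 0]: only E = -1 or E = 2 remain possible
```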
Postscript: Notice in part (g) that the squared moduli of the expansion coefficients are the probabilities.
This is true in all orthonormal bases. Thus the expansion coefficients are also known as
probability amplitudes. The probabilities are more generally | cj |² = c*j cj because the expansion
coefficients are generally complex numbers.
Again, part (i) should reinforce that the meaning of a complete set of commuting observables
is that the state vector of the system can be uniquely determined by making two measurements in
the instance that one of the operators is degenerate. The concept of a complete set of commuting
observables is employed in many realistic systems. It is much more difficult to comprehend when
employed to describe a realistic system using a basis that is infinite dimensional instead of three
dimensional, composed of special functions instead of unit vectors. . . ensure that you understand
the idea addressed in part (i).