
Solve Selsto

A Computational Introduction to Quantum Physics

This concise textbook introduces an innovative computational approach to quantum
mechanics. Over the course of this engaging and informal book, students are encouraged
to take an active role in learning key concepts by working through practical
exercises. By equipping readers with some basic methodology and a toolbox of
scientific computing methods, they can use code to simulate and directly visualize how
quantum particles behave. The important foundational elements of the wave function
and the Schrodinger equation are first introduced, then the text gradually builds up to
advanced topics including relativistic, open, and non-Hermitian quantum physics.

This book assumes familiarity with basic mathematics and numerical methods, and
can be used to support a two-semester advanced undergraduate course. Source code
and solutions for every book exercise involving numerical implementation are provided
in Python and MATLAB®, along with supplementary data. Additional problems are
provided online for instructor use with locked solutions.

Solve Selsto is a professor of Physics at Oslo Metropolitan University, Norway. His
research focuses on computational and theoretical quantum physics. He teaches both
physics and mathematics, and believes that understanding within science and mathematics
predominantly emerges from working with and discussing real-world problems.
Selsto has previously written an introductory textbook on numerical methods, which
he believes are valuable for both their utility and their ability to make theory tangible.
‘This book provides a well-written and carefully structured course on computational
quantum mechanics, and its many exercises, with provided solutions, facilitate active
learning both of physics and of numerical computing... This book will be very useful
for undergraduate and graduate students interested in adopting modern methodologies
for their learning by applying computational resources to intermediate to
advanced quantum mechanical concepts.’
Christian Hill, author of Learning Scientific Programming with Python (2nd ed., 2020)
and Python for Chemists (2023)

‘Selsto’s book is an outstanding introduction to the world of quantum mechanics from
a practical and computational, yet rigorous and comprehensive, perspective. For a student,
the book is highly engaging, approachable, and it provides an exceptional pool
of pedagogically guided exercises. I would highly recommend this book for teachers
and students, as both a coursebook and self-study material.’
Esa Rasanen, Tampere University

‘This book takes the reader from the basics of quantum theory to the core of a range of
topics that define the research front of quantum-based sciences. It is an easy-to-read,
modern text, with illustrative examples and problems of an analytical and numerical
nature. I would recommend the book for any intermediate quantum mechanics course!’
Jan Petter Hansen, University of Bergen

‘This book presents, delightfully, a large number of practical tasks on both foundational
topics and modern applications, with code provided online. More than a
collection of exercises, yet less than a complete monograph, this feels like a workbook,
in that the text screams out to be worked through... Students of quantum physics who
wish to see what the abstract formalism implies in practice are in for a treat.’
Alexandros Gezerlis, University of Guelph
A Computational Introduction
to Quantum Physics

Solve Selsto
Oslo Metropolitan University

CAMBRIDGE UNIVERSITY PRESS
Shaftesbury Road, Cambridge CB2 8EA, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314-321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre,
New Delhi - 110025, India
103 Penang Road, #05-06/07, Visioncrest Commercial, Singapore 238467

Cambridge University Press is part of Cambridge University Press & Assessment,


a department of the University of Cambridge.
We share the University’s mission to contribute to society through the pursuit of
education, learning and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/highereducation/isbn/9781009389631
DOI: 10.1017/9781009389594
© Solve Selsto 2024
This publication is in copyright. Subject to statutory exception and to the provisions
of relevant collective licensing agreements, no reproduction of any part may take
place without the written permission of Cambridge University Press & Assessment.
First published 2024
A catalogue record for this publication is available from the British Library
Library of Congress Cataloging-in-Publication Data
Names: Selsto, Solve, author.
Title: A computational introduction to quantum physics / Solve Selsto.
Description: Cambridge, United Kingdom ; New York, NY : Cambridge
University Press, 2024. | Includes bibliographical references and index.
Identifiers: LCCN 2023051947 | ISBN 9781009389631 (hardback) | ISBN
9781009389594 (ebook)
Subjects: LCSH: Quantum theory - Mathematics - Textbooks. | Quantum
theory - Data processing - Textbooks.
Classification: LCC QC174.17.M35 S45 2024 | DDC
530.12078/5-dc23/eng/20231212
LC record available at https://lccn.loc.gov/2023051947
ISBN 978-1-009-38963-1 Hardback
Additional resources for this publication at www.cambridge.org/Selsto
Cambridge University Press & Assessment has no responsibility for the persistence
or accuracy of URLs for external or third-party internet websites referred to in this
publication and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.
To Anne Birte, Halvard and Ingeborg
Contents

Preface page xi
Acknowledgements xiii

1 The Wave Function 1


1.1 Quantum Physics: the Early Years 1
1.2 How Different Is Quantum Physics? 4
1.3 The Wave Function and the Curse of Dimensionality 5
1.4 Interpretation of the Wave Function
1.4.1 Exercise: Normalization 7
1.4.2 Exercise: Position Expectation Values 9
1.4.3 Exercise: Is the Particle in This Interval? 10
1.5 The Momentum Operator 10
1.5.1 Exercise: Momentum Expectation Values 11
1.6 Some Simplifying Notation 12
1.6.1 Exercise: Standard Deviations 12
1.6.2 Exercise: The Inner Product 13
1.6.3 Exercise: Hermiticity 13
1.6.4 Exercise: The Hydrogen Atom and Atomic Units 14

2 The Schrodinger Equation 16


2.1 Numerical Discretization 17
2.1.1 Exercise: Normalizing the Discretized Wave Function 18
2.2 Kinetic Energy Numerically 19
2.2.1 Exercise: The Kinetic Energy Operator as a Matrix 19
2.2.2 Exercise: Expectation Values as Matrix Multiplications 20
2.3 Dynamics Without Explicit Time Dependence in the Hamiltonian 21
2.3.1 Exercise: The Formal Solution 22
2.3.2 Exercise: Wave Propagation 23
2.3.3 Exercise: Interference 24
2.3.4 Exercise: Expectation Values and Uncertainties 25
2.4 Scattering and Tunnelling 26
2.4.1 Exercise: Scattering on a Barrier
2.4.2 Exercise: The Dynamics of a Classical Particle 28
2.4.3 Exercise: Tunnelling 29
2.4.4 Exercise: Tunnelling Again 30


2.5 Stationary Solutions 31


2.5.1 Exercise: Enter the Eigenvalue Problem 31
2.6 Eigenstates, Measurements and Commutators 32
2.6.1 Exercise: The Standard Deviation of an Eigenstate 32
2.6.2 Exercise: Commuting with the Hamiltonian 34

3 The Time-Independent Schrodinger Equation 37


3.1 Quantization 37
3.1.1 Exercise: Bound States of a Rectangular Well 37
3.1.2 Exercise: Solution by Direct Numerical Diagonalization
3.1.3 Exercise: The Bohr Formula 41
3.1.4 Exercise: The Harmonic Oscillator 42
3.2 A Glimpse at Periodic Potentials 43
3.2.1 Exercise: Eigenenergies for Periodic Potentials — Bands 43
3.3 The Spectral Theorem 46
3.3.1 Exercise: Trivial Time Evolution 47
3.3.2 Exercise: Glauber States 47
3.4 Finding the Ground State 49
3.4.1 Exercise: The Variational Principle and Imaginary Time
3.4.2 Exercise: One-Variable Minimization 52
3.4.3 Exercise: Imaginary Time Propagation 52
3.4.4 Exercise: Two-Variable Minimization, Gradient Descent 53
3.4.5 Exercise: Variational Calculation in Two Dimensions 55

4 Quantum Physics with Several Particles — and Spin 57


4.1 Identical Particles and Spin 57
4.1.1 Exercise: Exchange Symmetry 58
4.1.2 Exercise: On Top of Each Other? 61
4.2 Entanglement 63
4.2.1 Exercise: Entangled or Not Entangled 64
4.3 The Pauli Matrices 66
4.3.1 Exercise: Some Characteristics of the Pauli Matrices 67
4.3.2 Exercise: The Eigenstates of the Pauli Matrices 68
4.4 Slater Determinants, Permanents and Energy Estimates 68
4.4.1 Exercise: Repeated States in a Slater Determinant 69
4.4.2 Exercise: The Singlet and Triplet Revisited 70
4.4.3 Exercise: Variational Calculations with Two Particles 70
4.4.4 Exercise: Self-Consistent Field 74

5 Quantum Physics with Explicit Time Dependence


5.1 Dynamics with Spin-1/2 Particles
5.1.1 Exercise: Dynamics with a Constant Magnetic Field 80
5.1.2 Exercise: Dynamics with an Oscillating Magnetic Field 81
5.1.3 Exercise: The Rotating Wave Approximation
5.1.4 Exercise: Spin Dynamics with Two Spin-1/2 Particles 84
5.2 Propagation 86
5.2.1 Exercise: Magnus Propagators of First and Second Order 86
5.2.2 Exercise: Split Operators 87
5.2.3 Exercise: Photoionization 88
5.3 Spectral Methods 90
5.3.1 Exercise: The Matrix Version 91
5.3.2 Exercise: Dynamics Using a Spectral Basis
5.3.3 Exercise: Momentum Distributions 93
5.4 ‘Dynamics’ with Two Particles 95
5.4.1 Exercise: Symmetries of the Ψ-Matrix 95
5.4.2 Exercise: The Two-Particle Ground State
5.5 The Adiabatic Theorem 97
5.5.1 Exercise: Adiabatic Evolution
5.5.2 Exercise: A Slowly Varying Potential

6 Quantum Technology and Applications 101


6.1 Scanning Tunnelling Microscopy 101
6.1.1 Exercise: Tunnelling Revisited 102
6.1.2 Exercise: The Shape of a Surface 105
6.2 Spectroscopy 107
6.2.1 Exercise: Emission Spectra of Hydrogen 108
6.2.2 Exercise: The Helium Spectrum 108
6.3 Nuclear Magnetic Resonance 110
6.3.1 Exercise: Spin Flipping — On and Off Resonance 110
6.4 The Building Blocks of Quantum Computing 114
6.4.1 Exercise: A Blessing of Dimensionality 114
6.4.2 Exercise: The Qubit 116
6.4.3 Exercise: Quantum Gates and Propagators 117
6.4.4 Exercise: Quantum Gates are Unitary 117
6.4.5 Exercise: Pauli Rotations 118
6.4.6 Exercise: CNOT and SWAP 119
6.4.7 Exercise: Prepare Bell 120
6.5 Quantum Protocols and Quantum Advantage 121
6.5.1 Exercise: Superdense Coding 123
6.5.2 Exercise: Quantum Key Distribution 123
6.6 Adiabatic Quantum Computing 126
6.6.1 Exercise: Quantum Minimization 127

7 Beyond the Schrodinger Equation 130


7.1 Relativistic Quantum Physics 130
7.1.1 Exercise: Relativistic Kinetic Energy 130
7.1.2 Exercise: Arriving at the Dirac Equation 132
7.1.3 Exercise: Eigenstates of the Dirac Hamiltonian 134
7.1.4 Exercise: The Non-relativistic Limit of the Dirac Equation 137

7.2 Open Quantum Systems and Master Equations 138


7.2.1 Exercise: The von Neumann Equation 139
7.2.2 Exercise: Pure States, Entanglement and Purity 140
7.2.3 Exercise: Two Spin-1/2 Particles Again 141
7.2.4 Exercise: Preserving Trace and Positivity 143
7.2.5 Exercise: A Decaying Quantum Bit 145
7.2.6 Exercise: Capturing a Particle 148

8 Non-Hermitian Quantum Physics 152


8.1 Absorbing Boundary Conditions 152
8.1.1 Exercise: Decreasing Norm 153
8.1.2 Exercise: Photoionization with an Absorber 153
8.1.3 Exercise: Scattering with an Absorber 154
8.2 Resonances 155
8.2.1 Exercise: Scattering off a Double Well 155
8.2.2 Exercise: Outgoing Boundary Conditions 157
8.2.3 Exercise: The Lifetime of a Resonance 160
8.2.4 Exercise: Doubly Excited States 162

9 Some Perspective 165


9.1 But What Does It Mean? 166
9.2 Quantum Strangeness 166
9.3 What We Didn’t Talk About 167

References 171
Figure Credits 174
Index 176
Preface

This book is meant to give you a bit of a feel for quantum physics, which is the
mathematical description of how the micro world behaves. Since all matter is built up of
small particles belonging to the quantum world, this is a theory of rather fundamental
importance. It is only fair that we spend some time learning about it. I believe the best
way to do so is by exploring it for ourselves, rather than just reading about it or having
someone tell us about it.
That is what this book invites you to do. Most of the book consists of exercises. These
exercises are not there in order to supplement the text. It is rather the opposite. What
you will learn about quantum physics, you will learn from working out these examples.
It makes no sense to disregard them and resort to the text alone. My hope is that, from
working out these exercises, by both numerical and analytical means, you will learn
that the quantum world is surprisingly interesting - quirky and beautiful.
It may be an advantage to have a certain familiarity with classical physics - having
a feel for concepts such as kinetic and potential energy, momentum and velocity.
Knowing some basic concepts from statistics may come in handy as well. However,
mathematics is more important here. In order to practice quantum physics, the proper
language and framework is that of mathematics and numerics. You must be familiar
with basic calculus and linear algebra - both numerically and analytically. Topics
such as differentiation, Taylor expansions, integration, differential equations, vectors,
matrix operations and eigenvalues should be quite familiar. The quantities to be calculated
and visualized will predominantly be obtained by numerical means. Thus, it
is crucial that you have some experience in doing so. This, in turn, requires familiarity
with a relevant software environment or programming language, be it Python,
MATLAB®/Octave, Julia, Java, C/C++ or anything similar.
In solving the exercises, you are not entirely on your own. In addition to, possibly,
having fellow students and teachers, you will also have access to a solutions manual
and a GitHub repository with source code. Please use these resources cautiously. They
may be helpful in finding the way forward when you struggle or when you want to
compare approaches. But they are not that helpful if you resort to these suggestions
without having made a proper effort yourself first.
In addition to these solutions, the online material also features colour illustrations
and relevant data.

This book strives to illuminate quantum phenomena using examples that are as simple
and clear cut as possible - while not being directly banal. As a consequence, many
exercises are confined to a one-dimensional world. While this, of course, differs from
the actual dimensionality of the world, it still allows the introduction of non-intuitive
quantum phenomena without excessive technical complications.
Now, let’s play!
Acknowledgements

I have had the pleasure of interfering constructively with several mentors and collaborators
over the years. Some deserve particular mention. In alphabetical order: Sergiy
Denysov, Morten Forre, Michael Genkin, Jan Petter Hansen, Tor Kjellsson Lindblom,
Ladislav Kocbach, Simen Kvaal and Eva Lindroth. The insights I have gained from you
over the years have certainly affected the outcome of this book.
Sergiy Denysov, Morten Forre and Eva Lindroth have also taken active parts in
providing constructive feedback on the manuscript - along with Joakim Bergli, Andre
Laestadius, Tulpesh Patel and Konrad Tywoniuk. Specific input from Knut Borve,
Stefanos Carlstrom and Trygve Helgaker is also gratefully acknowledged.
In trying to find illustrations to use, I have met an impressive amount of good will.
Several people and enterprises have generously shared their graphical material - and in
some cases even produced images. In this regard, I am particularly indebted to Reiner
Blatt and Jonas Osthassel.
Several of my students at the Oslo Metropolitan University have been exposed
to early versions of this manuscript. Their experiences led to suggested adjustments
which, hopefully, have improved the quality and accessibility of the book. Oslo Metropolitan
University also deserves my gratitude for providing a stimulating scientific
environment - and for flexibility and goodwill when it came to spending time writing
this book.
Finally, I am eternally grateful to my first physics teacher, Helge Dahle, for igniting
the passion for physics and science in the first place.
1 The Wave Function

The physical world we see around us seems to follow the well-known laws of Isaac
Newton. For a point object of mass m, his second law may be written

m \, \frac{d^2}{dt^2} \mathbf{r}(t) = \mathbf{F}(\mathbf{r}, t),    (1.1)
where r(t) is the position of the object at time t and F is the total force acting on
the object. Often, this force is the gradient of some potential V, which may be time
dependent:1

\mathbf{F}(\mathbf{r}, t) = -\nabla V(\mathbf{r}, t).    (1.2)

This potential could, for instance, be the gravitational attraction of a planet, in which
case we would have

V(r) = -\frac{kMm}{r},    (1.3)

where k is a constant, M is the mass of the planet and r is the distance between the
planet and our object.
This is Newton’s law of gravitation. Early in the twentieth century it became clear
that this law, which had been immensely successful until then, was not entirely accurate.
With his general theory of relativity, Albert Einstein suggested that Eq. (1.3) would not
describe massive gravitational objects correctly. In time, his theory gained support from
observations, and it is now considered the proper description of gravitation.
More or less at the same time, it also became clear that Newtonian mechanics just
wouldn’t do at the other end of the length scale either - for objects belonging to the
micro cosmos. For several reasons, another description of microscopic objects such as
molecules, atoms and elementary particles was called for.

1.1 Quantum Physics: the Early Years

Once again, Albert Einstein had quite a lot to do with resolving these issues. But where
the theories of relativity, both the special one and the general one, to a large extent are

1 This is the last time we will use the F-word in this context; the term simply does not apply in quantum
physics. Instead we talk about potentials and interactions.

products of one single brilliant mind, the development of our understanding of the
micro world is the beautiful product of several brilliant minds contributing construct­
ively. The story leading up to the birth of quantum physics is a truly interesting one.
It’s worth mentioning a few highlights.
At the time, around the turn of the nineteenth century, the understanding of light
as travelling waves was well established. The famous double-slit experiment of Thomas
Young was one of several that had enforced such an understanding (see Fig. 1.1). And
the Scot James Clerk Maxwell (Fig. 1.2) had provided a perfectly accurate descrip­
tion of the wave nature of electromagnetic radiation, including the electromagnetic
radiation that is visible light.
However, about a century after Young presented his interference experiment, two
phenomena related to light were observed that just didn’t fit with this wave description.
Max Planck, from Germany, resolved one of them and our friend Albert succeeded in
explaining the other. Both of them did so by assuming that electromagnetic radiation,
light, was made up of small energy lumps, or quanta. In Planck’s case, he managed to
understand why the radiation emitted from a so-called black body at thermal equilib­
rium is distributed as it is. In doing so, he imposed a simple but non-intuitive relation
between the frequency of the radiation and the energy of each quantum:

E_{\text{quant}} = hf,    (1.4)

Figure 1.1 In the year 1801, the British scientist Thomas Young, 'the last man who knew everything' [33], showed with his
famous double-slit experiment that light must be perceived as waves. He sent light of a specific colour, which, in turn,
corresponds to a specific wavelength, through a double slit and observed an interference pattern on a screen behind
the slits. This pattern emerges from interference between waves originating from each of the two slits; they either
reinforce each other or reduce each other, depending on their respective phases where they meet.

Figure 1.2 James Clerk Maxwell as a young and promising student at Trinity College, Cambridge, UK.

where f is the frequency and h ≈ 6.63 × 10⁻³⁴ Js is what is now called the Planck
constant. Clearly, it’s not very large. But it isn’t zero either, and this has huge
implications.
The same relation was also crucial in Einstein’s explanation of the photoelectric
effect, in which electrons are torn loose from a metal plate by shining light upon it.
These works of Planck and Einstein certainly did not dismiss the idea that light
consists of waves. They just showed that, complementary to this concept, we also need
to consider light as quanta. We call these quanta photons.
The next major conceptual breakthrough is an equally impressive one. At the time,
the notion of matter being made up of atoms was finally gaining recognition amongst
scientists; it was eventually established that matter indeed consists of particles. But,
just as light turned out to be not just waves, the perception of matter as particles also
turned out to be an incomplete one. For instance, according to Newton’s and Max­
well’s equations, an atom couldn’t really be stable. It was understood that within the
atom a small, negatively charged particle - the electron - would orbit a larger posi­
tively charged atomic nucleus somehow. However, according to Maxwell’s equations,
any charged particle undergoing acceleration should emit radiation and lose energy.
This should also apply to any electron that does not follow a straight line.
A different understanding of the atom was called for.
In his PhD dissertation, the French nobleman Louis de Broglie launched the
following thought:2

2 For the sake of clarity: this is not actually a quote.



Since light turns out to behave not just as waves but also as particles, maybe matter
isn’t just particles either - maybe matter behaves as waves as well.

And he was right! This idea quickly gained experimental and theoretical support.
Electrons were seen to produce interference patterns, just like light had done in the
Young experiment. In fact, after a while, his double-slit experiment was repeated -
with electrons. We will return to this experiment in Chapter 2.
De Broglie’s realization, in turn, called for a mathematical wave description of mat­
ter. We needed a scientist to formulate this - just like Maxwell had done for light. We
got two: Werner Heisenberg and Erwin Schrodinger. Both their formulations are still
applied today. Schrodinger’s formulation is, however, the predominant one. And it’s
his formulation we will use in the following.
In this context, it’s worth mentioning that the Dane Niels Bohr used the ideas of
de Broglie in order to formulate a ‘wave theory’ for the atom. This model somehow
managed to predict the right energies for the hydrogen atom, and it was an important
stepping stone towards our understanding of the atom. However, the ‘Bohr atom’,
which portrays the atom as a miniature solar system, does not provide a very adequate
model. It is neither accurate, general nor beautiful enough, and the theory is long since
abandoned.
The birth of quantum physics is truly an impressive collaborative effort. The names
of several of the midwives involved have been mentioned. It is nice to also have faces
attached to these names. Very many of them can be seen in the famous photo in Fig. 1.3,
which was taken at the fifth Solvay conference, held in Brussels in 1927.

1.2 How Different Is Quantum Physics?

The formulation in which small particles are described in terms of waves is known as
quantum physics. It’s fair to ask in which way and to what extent this description differs
from a Newtonian, classical picture - one in which atoms and particles behave as little
‘balls’ bouncing around. The answer is that quantum physics is very, very different from
classical physics - in several aspects. This includes phenomena that really couldn’t have
been foreseen even with the strongest imagination.
In Newtonian mechanics, an object has a well-defined position and velocity at all
times. A quantum object doesn’t. A wave isn’t localized to a single point in space. And
when it travels, it typically does so dispersively; it spreads out. The more confined the
wave is spatially, the wider is its distribution in velocity - or momentum.3 If the position
x is known, the momentum p is completely undetermined - and vice versa. Heisenberg’s
uncertainty principle states this quite precisely:

\sigma_x \sigma_p \ge \frac{h}{4\pi}.    (1.5)

3 The momentum of a particle is the product of its velocity and its mass, p = mv.

Institut International de Physique Solvay - Fifth Physics Council, Brussels, 23-29 October 1927

Figure 1.3 Not just any crowd: 17 of the 29 attendees at the fifth Solvay conference in 1927 were or went on to become Nobel
Prize laureates. Marie Sklodowska Curie, front row, third from left, even received two Nobel Prizes. Many of the people
mentioned above are here - along with a few other brilliant minds. Front row, left to right: Irving Langmuir, Max
Planck, Marie Sklodowska Curie, Hendrik Lorentz, Albert Einstein, Paul Langevin, Charles-Eugene Guye, Charles
Thomson Rees Wilson, Owen Willans Richardson. Middle row: Peter Debye, Martin Knudsen, William Lawrence Bragg,
Hendrik Anthony Kramers, Paul Dirac, Arthur Compton, Louis de Broglie, Max Born, Niels Bohr. Back row: Auguste
Piccard, Emile Henriot, Paul Ehrenfest, Edouard Herzen, Theophile de Donder, Erwin Schrodinger, Jules-Emile
Verschaffelt, Wolfgang Pauli, Werner Heisenberg, Ralph Howard Fowler, Leon Brillouin.

Here σ_x is the uncertainty in position and σ_p is the uncertainty in momentum; h is the


Planck constant, which we first encountered in Eq. (1.4). In other words, if we want
to have information about both position and momentum of a small object, there is a
fundamental limit to the accuracy of this information. This is not a limit imposed by
practical issues such as the quality of our measurement devices; this seems to be a limit
inherent in nature itself.
And, as we will see, it gets stranger.
First, however, we need to say a few words about the wave function.

1.3 The Wave Function and the Curse of Dimensionality

Within a quantum system, such as an atom, all information is contained in its wave
function. This function, in turn, is provided by a set of appropriate initial conditions
and Schrodinger’s famous equation. We will get back to this equation - for sure.

For a system consisting of, say, N particles, the wave function is a complex function
depending on the coordinates of all particles and, parametrically, on time:

\Psi = \Psi(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N; t).    (1.6)

It’s complex in a double sense. It is complex valued; it has both a real and an imaginary
part. And it is, usually, a rather complicated function.
If we could describe our A-particle system classically, that is, according to classical
Newtonian physics, it would be given by the set of positions and velocities for all the
constituent particles:

[\mathbf{r}_1(t), \mathbf{r}_2(t), \ldots, \mathbf{v}_1(t), \mathbf{v}_2(t), \ldots].    (1.7)

If each of the objects or particles that make up the system resides in a d-dimensional
world, where d usually is three, this constitutes a single point in a real space of
dimension 2Nd.
For a quantum system, however, we need the wave function of Eq. (1.6) for a full
description. This is certainly not a single point in any Nd-dimensional space; for each
and every single point in that space, the wave function has a certain value that we need
to know.
If we want to describe a function numerically, we typically need to limit the extension
of the space to a finite interval and ‘chop it into pieces’; we need to discretize it. Suppose
that our numerical domain consists of s points for each of our N particles. Then our
numerical, approximate wave function will consist of s^N complex numbers, while the
corresponding classical state will consist of 2Nd real numbers. In other words, in the
quantum case, the complexity grows exponentially with the number of particles N,
while it just increases linearly with N in the classical case.
This is the infamous curse of dimensionality. It is quite devastating when we want to
simulate quantum systems with several particles; that is really, really hard. If we want
to describe a dynamical system of N quantum particles in d = 3 dimensions, N = 3
would already be too much in the usual case.4
For now, we limit our wave function to a single particle, such as an electron.
Moreover, for the sake of simplicity, we take the world to be one-dimensional:

\Psi = \Psi(x; t).    (1.8)

The wave function is required to be finite, continuous and differentiable on its entire
domain. This restriction is not limited to the one-dimensional one-particle case.
We already stated that the wave function contains all the information there is to
be obtained about the quantum system we are interested in. However, extracting and
interpreting this information may require some effort.

4 In terms of information processing, however, this curse may actually be an advantage. In fact, the curse
of dimensionality is what motivated the famous physicist Richard Feynman and, independently, the
mathematician Yuri Manin to propose the idea of a quantum computer [15, 26]. More on that in Chapter 6.

1.4 Interpretation of the Wave Function

As mentioned, since a quantum particle actually is a wave, it does not really have a
well-defined position or momentum - as would be the case for any kind of wave. How­
ever, a measurement of either position or momentum would in fact produce a definite
answer. But wait, if we first measure position x and then momentum p and get definite
answers, wouldn’t that violate the uncertainty principle, Ineq. (1.5)? No, because when
we perform a measurement, we alter the wave; the wave function prior to measurement
is not the same as the one we have afterwards. If we measure the position of a quan­
tum particle and then its momentum, a second position measurement would probably
provide a result different from the first one. The momentum measurement renders the
first position measurement obsolete. The uncertainty principle refers to simultaneous
measurements; while we could have precise information about either the position or
the momentum of a quantum particle, there is a lower limit to how precisely we may
know both simultaneously.
For now, instead of elaborating on this most peculiar feature of the wave func­
tion, we will simply continue to examine how physical quantities are calculated from
this mysterious, complex function. This really isn’t fair; the issue, which typically is
referred to as ‘the collapse of the wave function’ or ‘the measurement problem’, certainly
deserves more attention. The fact that the wave function is affected by the action
of a measurement really isn’t trivial. It has been the issue of much debate - from both
the philosophical and the technical points of view. And it still is. Several papers and
books have been written on the subject. To some extent, we will return to this issue in
the following chapters.
Instead of definite positions and momenta, the wave function only provides prob­
abilities for each possible outcome of a measurement. Suppose we set out to measure
the position of the particle at time t. In that case, the quantity

\frac{dP}{dx} = |\Psi(x;t)|^2    (1.9)
provides the probability density for measuring the particle’s position to be x. Or, in
other words, the probability of finding the particle between positions a and b is

P(x \in [a,b]) = \int_a^b |\Psi(x;t)|^2 \, dx.    (1.10)
Put in statistical terms: the outcome of a position measurement on a quantum particle
is a stochastic variable, and |Ψ(x;t)|² is its probability density function. This interpretation,
which was stated in a footnote in his publication, won the German physicist and
mathematician Max Born the 1954 Nobel Prize in Physics.

1.4.1 Exercise: Normalization

For a quantum system consisting of a single particle in one dimension, we must insist
that the following holds at all times for the wave function:
8 1 The Wave Function

\int_{-\infty}^{\infty} |\Psi(x;t)|^2 \, dx = 1.    (1.11)

Why?
This requirement poses some restrictions on Ψ(x). Specifically, why must we insist
that the wave function vanish in the limits x → ±∞?

If we prepare lots of identical particles in the very same state, that is to say, all of
the particles have identical wave functions, and then measure the position of each and
every one of them, we will not get the same result for each measurement, rather we will
have a distribution of results.
With sufficiently many particles and measurements, this distribution is simply
Eq. (1.9)- with the proper overall scaling/normalization. This distribution has both a
mean value, or expectation value, and a standard deviation, or width. With Eq. (1.9)
being the probability density function for the outcome of a position measurement, the
position expectation value is simply
\langle x \rangle = \int_{-\infty}^{\infty} x \, |\Psi(x)|^2 \, dx.    (1.12)
This could also be written as

\langle x \rangle = \int_{-\infty}^{\infty} [\Psi(x)]^* \, x \, \Psi(x) \, dx,    (1.13)

where the asterisk indicates complex conjugation and the time dependence of the wave

function is taken to be implicit. Equation (1.13) is, of course, just a slightly more
cumbersome way of writing the same thing as in Eq. (1.12); hopefully the motivation
for formulating it in this way will be apparent shortly.
The width, or standard deviation, associated with our position distribution is, accord­
ing to standard statistics, determined as

\sigma_x = \sqrt{\langle x^2 \rangle - \langle x \rangle^2}, \quad \text{where} \quad \langle x^2 \rangle = \int_{-\infty}^{\infty} x^2 |\Psi(x)|^2 \, dx.    (1.14)

It is this width, σ_x, that enters into the Heisenberg uncertainty relation, Ineq. (1.5).
It is hard to avoid asking questions like ‘But where actually is the particle prior to
measurement?’ and ‘How come the description in terms of a wave function is unable to
reveal the actual position of the particles?’ And, surely, since we got different results,
the various systems couldn’t really have been prepared completely identically, could
they?
These are, of course, bona fide and well-justified questions. As mentioned, these and
similar questions have been subject to much debate since the birth of quantum physics.
The most famous debates had Albert Einstein and Niels Bohr themselves as oppon­
ents. Many physicists resort to answering the above questions in the spirit of Bohr by
saying something like ‘It does not really make sense to talk about a particle’s position
without actually measuring it’ and ‘There is no more information obtainable beyond
that contained in the wave function.’

In any case, experience does suggest that this stochastic behaviour from nature’s side
actually is a truly fundamental one. Although we perform identical measurements on
identically prepared particles, the outcome really is arbitrary; it cannot be predicted
with certainty.
Again, let’s place these issues at the back of our minds and continue with our more
technical approach to Ψ - for now.

1.4.2 Exercise: Position Expectation Values

Suppose that our wave function is given by one of the simple, elementary functions
below, labelled from (a) to (d):

\psi_a(x) = \frac{1}{1 + (x-3)^2},    (1.15a)

\psi_b(x) = \frac{e^{-4ix}}{1 + (x-3)^2},    (1.15b)

\psi_c(x) = e^{-x^2},    (1.15c)

\psi_d(x) = (x + i) \, e^{-(x-2)^2/10}.    (1.15d)

In each case, we can use the function to determine several physical quantities. But
first we must make sure that Eq. (1.11) is fulfilled.
For each of the wave function examples, do the following:

1. Normalize it, that is, impose a factor such that the integral of |ψ(x)|²
over all x actually is 1.
2. Plot the absolute square of the wave function. Also, plot real and imaginary
contributions separately when ψ is complex.
3. From the plot, estimate the most likely outcome of a position measurement.
4. Determine the position expectation value.

Although some of the integrals involved may be done analytically, we strongly suggest
that you write a few lines of code and calculate them numerically. This could, for
instance, be done using the trapezoidal rule:

\int_a^b f(x) \, dx = h \left[ \tfrac{1}{2} f(x_0) + f(x_1) + \cdots + f(x_{n-1}) + \tfrac{1}{2} f(x_n) \right] + O(h^2),    (1.16a)

where h = \frac{b-a}{n} and x_k = a + kh.    (1.16b)
In doing so, you may very well use some ready-made function in your programming
language or numerical platform of preference. But do make sure that you use a numerical
grid which is sufficiently fine - that your h value in Eq. (1.16b) is small enough
or, equivalently, that your n value is large enough. Also, instead of actually integrating
from -∞ to ∞, you must settle for an interval of finite extension. When doing so,
make sure that your numerical domain extends widely enough; the interval [a, b] must

actually contain the wave function to a sufficiently high degree. In case some of your
expectation values turn out complex, this may indicate that this requirement is not met.
In any case, redo your calculations several times with an increasing number of sample
points n and an increasingly wide interval [a, b] to ensure that your result does not rely
notably on these numerical parameters.
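As an illustration, here is a minimal Python sketch of this procedure for the Gaussian wave function of Eq. (1.15c); the grid parameters are arbitrary choices that you should vary yourself to check convergence:

```python
import numpy as np

# Grid; widen [a, b] and increase n until the results stop changing
a, b, n = -10.0, 10.0, 2000
x = np.linspace(a, b, n + 1)
h = (b - a) / n

psi = np.exp(-x**2).astype(complex)   # Eq. (1.15c), not yet normalized

# Trapezoidal weights, Eq. (1.16a): half weight at the end points
w = np.full(n + 1, h)
w[0] = w[-1] = h / 2

psi /= np.sqrt(np.sum(w * np.abs(psi)**2))     # impose Eq. (1.11)
x_mean = np.sum(w * x * np.abs(psi)**2).real   # Eq. (1.12)
print(x_mean)
```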

The observations from this exercise may lead you to question the relevance of any complex
factor of the form e^{ikx} in the wave function. You will, however, come to learn that
it does in fact matter.
The next exercise is a direct application of Eq. (1.10).

1.4.3 Exercise: Is the Particle in This Interval?

For each of the four wave functions in Exercise 1.4.2, the normalized ones, what is the
probability that a position measurement would determine that the particle is localized
between x = 1 and x = 2?
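Continuing the sketch above, the integral of Eq. (1.10) may be estimated by simply restricting the quadrature sum to the grid points inside the interval - a rough but serviceable approach when the grid is fine:

```python
# Probability of finding the particle between x = 1 and x = 2, Eq. (1.10)
inside = (x >= 1.0) & (x <= 2.0)
P = np.sum(w[inside] * np.abs(psi[inside])**2).real
print(P)
```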

1.5 The Momentum Operator

In quantum physics, every physical quantity that can be measured has its own corresponding
linear operator. In this context, an operator is some specific way of changing
or transforming the wave function. The operator corresponding to position we have
already met - it simply consists in multiplying the wave function by x itself.
Of course, position is not the only physical quantity in town, we may very well be
interested in the momentum of the particle as well. The operator for the momentum p,
which is the product of mass and velocity, p = mv, is given by spatial differentiation:

\hat{p} \, \Psi(x) = -i\hbar \, \Psi'(x),    (1.17)

where ħ, the so-called reduced Planck constant, is the Planck constant divided by 2π:

\hbar = \frac{h}{2\pi} \approx 1.055 \times 10^{-34} \ \text{Js}.    (1.18)
We use the hat symbol, which you see above the ‘p’ on the left-hand side in Eq. (1.17),
to indicate that we are dealing with an operator, to distinguish it from scalar quantities
- numbers - and functions. We have chosen not to dress up the position operator with
any hat since it simply is position itself, x̂ = x. With Leibniz’s notation it makes sense
to write the p̂-operator without reference to any explicit wave function:

\hat{p} = -i\hbar \, \frac{d}{dx}.    (1.19)
Analogously to the position expectation value, Eq. (1.13), the momentum expectation
value is

\langle p \rangle = \int_{-\infty}^{\infty} [\Psi(x)]^* \, \hat{p} \, \Psi(x) \, dx    (1.20)
= \int_{-\infty}^{\infty} [\Psi(x)]^* \left( -i\hbar \frac{d}{dx} \right) \Psi(x) \, dx = -i\hbar \int_{-\infty}^{\infty} [\Psi(x)]^* \, \Psi'(x) \, dx.

Perhaps Eq. (1.13) makes more sense now; it is identical to Eq. (1.20) - with p replaced
by x, and the p̂-operator replaced by the x̂-operator.
Suppose we measure the momentum of a large number of particles with identical
wave functions. The distribution of the results would provide the mean value of
Eq. (1.20) - analogously to how the mean value of position measurements is provided
by Eq. (1.12) or (1.13).
The operator for a physical quantity that depends on position and momentum also
depends on the position and momentum operators in the same manner. Kinetic energy,
for instance, may be written as \tfrac{1}{2} m v^2 = p^2 / 2m. Correspondingly, its operator is

\hat{T} = \frac{\hat{p}^2}{2m} = \frac{1}{2m} \left( -i\hbar \frac{d}{dx} \right)^2 = -\frac{\hbar^2}{2m} \frac{d^2}{dx^2}.    (1.21)

In the next exercise, and most of the ones following it, we will set

\hbar = 1.    (1.22)

This may seem a bit overly pragmatic - if not to say in clear violation of Eq. (1.18). We
know that the Planck constant is supposed to be a really small number - not 1. But it
is in fact admissible. It simply corresponds to choosing a convenient set of units. And
it facilitates our numerical calculations. We must be aware, though, that the numbers
that come out certainly will not correspond to numbers given in ‘ordinary’ units such
as metres or seconds.

1.5.1 Exercise: Momentum Expectation Values

For the four simple wave functions in Exercise 1.4.2, calculate the momentum expectation
values. According to Eq. (1.20), this involves differentiation. You could, of
course, do this by hand using paper and pencil in this case. However, it may be more
convenient to use the midpoint rule:

f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}.    (1.23)
This way you could perform the differentiation of various wave functions without really
changing anything in your code. Do remember to use normalized wave functions.
You may notice that for the purely real wave functions, the momentum expectation
values all seem to be zero. Why is that? Can you understand that by analytical means?
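A possible numerical realization, reusing the grid and trapezoidal weights from the sketch in Exercise 1.4.2 and setting ħ = 1 as in Eq. (1.22):

```python
# Midpoint-rule derivative, Eq. (1.23); the edges are left at zero,
# assuming the wave function has died off there anyway
dpsi = np.zeros_like(psi)
dpsi[1:-1] = (psi[2:] - psi[:-2]) / (2 * h)

# <p> = -i hbar * integral of Psi* Psi', Eq. (1.20), with hbar = 1
p_mean = -1j * np.sum(w * np.conj(psi) * dpsi)
print(p_mean.real)
```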

Perhaps this time you are able to see a clear distinction in expectation values for
the wave functions in Eqs. (1.15a) and (1.15b) - contrary to what was the case in
Exercise 1.4.2?

1.6 Some Simplifying Notation

After a while you may get tired of writing up integrals. In the following we will write
expressions such as Eqs. (1.12) and (1.20) somewhat more compactly. For two functions
Ψ and Φ we define the notation

\langle \Phi, \Psi \rangle = \int_{-\infty}^{\infty} [\Phi(x)]^* \, \Psi(x) \, dx.    (1.24)
J —oo
With this, the expectation value of some physical quantity A for a quantum system
with wave function Ψ may be written

\langle A \rangle = \langle \Psi, \hat{A} \Psi \rangle,    (1.25)

where Â is the operator corresponding to the physical variable A, be it position,
momentum, energy or something else. Analogously to Eq. (1.14), the width or standard
deviation of the physical variable A is

\sigma_A \equiv \sqrt{\langle A^2 \rangle - \langle A \rangle^2}.    (1.26)

1.6.1 Exercise: Standard Deviations

For each of the four wave functions in Exercise 1.4.2 - their normalized versions, that is -
what is the standard deviation of the position, Eq. (1.14)?
And what is the momentum standard deviation?
In answering the latter, you will need to calculate expectation values for p², for which
this finite difference formula may come in handy:

f''(x) \approx \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}.    (1.27)

Check that the uncertainty principle, Ineq. (1.5), isn’t violated for any of the wave
functions in question. Does any of them fulfil it with equality?
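In the same spirit, a sketch of the uncertainty check with ħ = 1, building on the quantities computed in the previous snippets:

```python
# <p^2> = -hbar^2 * integral of Psi* Psi'', via the formula in Eq. (1.27)
d2psi = np.zeros_like(psi)
d2psi[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / h**2
p2_mean = (-np.sum(w * np.conj(psi) * d2psi)).real

x2_mean = np.sum(w * x**2 * np.abs(psi)**2).real
sigma_x = np.sqrt(x2_mean - x_mean**2)
sigma_p = np.sqrt(p2_mean - p_mean.real**2)

print(sigma_x * sigma_p, 'should be >= 0.5')   # Ineq. (1.5) with hbar = 1
```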

Here, the notation introduced in Eq. (1.24) simply serves as a lazy person’s way of
writing integrals. However, it goes deeper - deeper into linear algebra. The thing is, the
definition in Eq. (1.24) may be considered an inner product. A general inner product
fulfils

\langle \beta, \alpha \rangle = \langle \alpha, \beta \rangle^* \quad \text{(symmetry)},    (1.28a)
\langle \alpha, c\beta \rangle = c \, \langle \alpha, \beta \rangle \quad \text{(linearity)},    (1.28b)
\langle \alpha, \beta + \gamma \rangle = \langle \alpha, \beta \rangle + \langle \alpha, \gamma \rangle \quad \text{(linearity)},    (1.28c)
\langle \alpha, \alpha \rangle > 0 \quad \text{(positivity)},    (1.28d)

where α, β and γ are general vectors and c is a complex number. In other words, we
consider functions as a kind of generalized vector in this context. We restrict these to

functions that are continuous and have continuous derivatives. We also insist that the
integral of their absolute value squared is finite.
The last equation above applies to all vectors in the space except the zero vector,
which any vector space is required to have. The inner product between the zero vector
and itself is zero.

1.6.2 Exercise: The Inner Product

Prove that the inner product between wave functions defined in Eq. (1.24) in fact fulfils
the general requirements for an inner product, Eqs. (1.28).

As you may have seen, Eqs. (1.28b) and (1.28c) are satisfied by the linearity of the
definite integral. As for the zero vector, we must pick the zero function, which is zero
for any value of x.

1.6.3 Exercise: Hermiticity

In quantum physics we insist that all operators corresponding to physical variables are
Hermitian. This means that any operator Â for a physical quantity A should fulfil

\langle \Phi, \hat{A} \Psi \rangle = \langle \hat{A} \Phi, \Psi \rangle    (1.29)

for all admissible wave functions Φ and Ψ.

(a) Use integration by parts to show that the operator p̂ in Eq. (1.19) is in fact
Hermitian. To this end, do remember the lesson learned at the end of Exercise 1.4.1.
(b) Show that Hermiticity ensures real expectation values.
Why is this feature crucial for quantum physics to make sense?
(c) If we impose a simple phase factor on our wave function, Ψ → e^{iφ} Ψ, where φ is
real, this does not affect any expectation value calculated from Ψ. Why is that?

Actually, although Eq. (1.24) is a rather standard mathematical notation for inner
products, it is customary to write it slightly differently in quantum physics:

\langle \Phi, \Psi \rangle \rightarrow \langle \Phi | \Psi \rangle \quad \text{and} \quad \langle \Phi, \hat{A} \Psi \rangle \rightarrow \langle \Phi | \hat{A} | \Psi \rangle.    (1.30)

This notation is referred to as the Dirac notation. It has a couple of advantages. One
advantage is that it allows us to think of |Ψ⟩ as a general vector; ⟨Φ| is a dual vector,
which in combination with |Ψ⟩ forms an inner product. A |Ψ⟩-type vector is referred
to as a ket and a ⟨Φ|-type vector is called a bra, so that together they form a ‘bra-ket’.
Often one may think of |Ψ⟩ as a column vector and an operator Â as a square matrix.
In the next chapter we will see that this literally is so when we approximate the wave
function on a numerical grid. In other situations it is actually true without approximation.
In both cases, the bra ⟨Φ| will be a row vector; it is the Hermitian adjoint, indicated
by †, that is, the transpose and complex conjugate, of the corresponding ket |Φ⟩,

\langle \Phi | = ( | \Phi \rangle )^\dagger.    (1.31)



Later, we will see that introducing these formal tools and inner products does have
advantages beyond the ability to write integrals in a more compact fashion.
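As a small preview of that grid picture, the inner product of Eq. (1.24) turns into an ordinary vector product. A Python sketch, where phi and psi are assumed to be arrays of wave function values on a uniform grid with spacing h:

```python
import numpy as np

def inner(phi, psi, h):
    """Inner product of Eq. (1.24) on a uniform grid: h * <phi|psi>.
    np.vdot complex-conjugates its first argument, acting as the bra."""
    return h * np.vdot(phi, psi)
```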

1.6.4 Exercise: The Hydrogen Atom and Atomic Units

In order to get a feel for which kind of quantities we typically are dealing with for
quantum systems, we calculate expectation values for an electron attached to a proton -
a hydrogen atom, that is - when its energy is minimal. In this case, we may use the wave
function

\Psi(r) = \frac{2}{\sqrt{a_0^3}} \, r \, e^{-r/a_0}    (1.32)

to calculate expectation values in the same way as for quantum particles in one
dimension, as in Eqs. (1.12) and (1.20) - except that our variable now is r, the distance
between the electron and the proton in the atom. Correspondingly, the lower
integration limit should be zero rather than minus infinity.
When dealing with atoms, it is convenient to introduce atomic units, which may be
defined by choosing the electron’s mass m_e as our mass unit, the elementary charge e
as the charge unit and the so-called Bohr radius ao as our length unit - in addition to
setting h = 1 as in Eq. (1.22). The elementary charge e is the magnitude of the charge
of both the proton and the electron, the latter being negative.
In general units, the Bohr radius reads

a_0 = \frac{4\pi \epsilon_0 \hbar^2}{m_e e^2},    (1.33)

where the constant ε₀ is the so-called permittivity of free space. Since all other factors in
the above expression are 1 in atomic units, this means that 4πε₀ also equals one atomic
unit.
Atomic units are usually abbreviated as ‘a.u.’ - irrespective of which quantity we are
dealing with; it is not customary to specify whether we are talking about atomic length
units or atomic time units, for instance.5

(a) Calculate the expectation values ⟨r⟩ and ⟨p⟩ using SI units - units following the
International System of Units. Here, as in the one-dimensional cases above, you
can take the operator for r to be r itself, and the ‘p-operator’, corresponding to
the momentum component in the radial direction, to be -iħ d/dr. You will have
to look up various constants.
(b) Repeat the calculations from (a) using atomic units.
The energy operator is the sum of the operator for kinetic energy, Eq. (1.21), and
the potential energy. The potential energy corresponding to the attraction from the
proton is

V(r) = -\frac{e^2}{4\pi\epsilon_0} \, \frac{1}{r},    (1.34)
5 That is not to say that this is a good practice.

so the full energy operator in this specific case may be written as6

\hat{H} = -\frac{\hbar^2}{2m_e} \frac{d^2}{dr^2} - \frac{e^2}{4\pi\epsilon_0} \frac{1}{r}.    (1.35)

The reason why we call it ‘H’ rather than ‘E’ is that the energy operator is named
after the Irish mathematician William Rowan Hamilton.
(c) Calculate the expectation value for the energy of the system,

\langle E \rangle = \langle \Psi, \hat{H} \Psi \rangle.    (1.36)

Do so using both SI units and atomic units.


Note: All these expectation values may be calculated analytically. However, in case
you choose to do it numerically, be warned that the Coulomb potential, Eq. (1.34),
diverges when r approaches zero. This could cause problems numerically - unless
you let r start at a small, positive value instead of zero - or rewrite your integral a
little bit before estimating it.
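For instance, in atomic units a numerical evaluation along the lines of this note could look as follows; the grid choices are arbitrary, and r starts just above zero to sidestep the singularity:

```python
import numpy as np

# Radial grid in atomic units; r = 0 is avoided, cf. the note above
r = np.linspace(1e-6, 30.0, 30001)
hr = r[1] - r[0]
psi = 2 * r * np.exp(-r)   # Eq. (1.32) with a0 = 1

# Three-point double derivative, Eq. (1.27)
d2psi = np.zeros_like(psi)
d2psi[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / hr**2

r_mean = hr * np.sum(r * psi**2)                          # <r>
E_mean = hr * np.sum(psi * (-0.5 * d2psi - psi / r))      # <H>, Eq. (1.35)
print(r_mean, E_mean)   # analytically: 1.5 a.u. and -0.5 a.u.
```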

Hopefully, you agree that it’s worthwhile to introduce a set of convenient units to
replace the standard SI units. Dealing with metres and seconds just isn’t that conveni­
ent when addressing atoms and other objects pertaining to the micro world. However,
when you need to present your numerical results in a different context, you may have
to convert your results to another set of units. If so, care must be taken to make sure
that you convert your results correctly.

6 Here we rather pragmatically jump from one dimension to a fully three-dimensional example. This is
hardly ever as straightforward as this example would indicate. The reason why we can simplify it here is
that the wave function depends on the distance from the origin only; it is independent of direction.
2 The Schrodinger Equation

In this chapter we will, finally, get acquainted with the famous Schrodinger equation.
You may find its father, the Austrian Erwin Schrodinger, in the third row close to the
middle in Fig. 1.3. The time-dependent form of his equation is a partial differential
equation that tells us how the wave function evolves in time. In a compact manner it
may be written
i\hbar \, \frac{\partial}{\partial t} \Psi = \hat{H} \Psi,    (2.1)
where the Hamiltonian H is the energy operator. In Eq. (1.35) we learned what this
operator looks like for the case of a hydrogen atom at rest. As mentioned, it inherited
its name from the Irish mathematician William Rowan Hamilton, who made significant
contributions to the theoretical foundation of classical mechanics in the nineteenth
century.
For a single particle not subject to any external interactions, such as electromagnetic
fields, the Hamiltonian is simply the sum of kinetic and potential energy:

\hat{H} = \hat{T} + \hat{V} = \frac{\hat{p}^2}{2m} + V(\mathbf{r}),    (2.2)

where the potential V is a function of position. Here, momentum and position, p and r,
are written as vectors; that is, we are not necessarily restricted to one spatial dimension.
In this case, the position operator is still just multiplication with the position itself,
r̂ = r, while the momentum operator is proportional to a gradient operator:

\hat{\mathbf{p}} = -i\hbar \nabla.    (2.3)

Here ‘p̂²’ is taken to mean the scalar product with itself; the kinetic energy term
becomes proportional to the Laplacian:

\frac{\hat{p}^2}{2m} = \frac{1}{2m} \, \hat{\mathbf{p}} \cdot \hat{\mathbf{p}} = -\frac{\hbar^2}{2m} \nabla^2.    (2.4)
In the general case, the Hamiltonian of a quantum system may be rather involved.
For instance, if there are N particles that interact with each other and with an external
electromagnetic field, the Hamiltonian would read
\hat{H} = \sum_{i=1}^{N} \left[ \frac{ \left( \hat{\mathbf{p}}_i - q_i \mathbf{A}(\mathbf{r}_i, t) \right)^2 }{2 m_i} + V(\mathbf{r}_i) \right] + \sum_{i<j}^{N} W(\mathbf{r}_i, \mathbf{r}_j).    (2.5)

1 Truth be told, Eq. (1.35) is a bit of an oversimplification.


Here, q_i is the electric charge and m_i is the mass of particle number i; A is the so-called
vector potential, which provides the electric and magnetic fields via

\mathbf{E} = -\frac{\partial}{\partial t} \mathbf{A} \quad \text{and} \quad \mathbf{B} = \nabla \times \mathbf{A},    (2.6)

respectively.2 If the system under study were an atom or a molecule with N electrons,
the two-particle interaction W would be the Coulomb repulsion between electrons:

W(\mathbf{r}_i, \mathbf{r}_j) = \frac{e^2}{4\pi\epsilon_0} \, \frac{1}{|\mathbf{r}_i - \mathbf{r}_j|},    (2.7)

and the potential V(r_i) would be the Coulomb attraction from the nuclei, Eq. (1.34),
whose positions are frequently assumed to be fixed.3 As mentioned, the constants e
and ε₀ are the elementary charge and the permittivity of free space, respectively.
It is fair to say that the Hamiltonian of a composite quantum system can be fairly
complicated - even with several simplifying assumptions. However, for the remainder
of this chapter we will exclusively be occupied with one single particle in one
dimension - without interactions with any external electromagnetic field. Our Hamiltonian
will then be

\hat{H} = -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x).    (2.8)
2m dx2
We will play around with solving Eq. (2.1) for various situations. But before we
can do that, we need to make a good numerical approximation to the Hamiltonian,
including the kinetic energy term above.

2.1 Numerical Discretization

We start by choosing a certain domain for our wave function Ψ to ‘live’ in. Assuming
that Ψ(x;t) vanishes for x < a and for x > b at all times, we may disregard space
beyond these points. Moreover, as we did in Exercise 1.4.2, in Eq. (1.16b) we discretize
the resulting interval so that our wave function may be represented by a set of points.
Effectively, our approximate wave function becomes a column vector in C^{n+1}:

\Psi(x;t) \rightarrow \vec{\Psi}(t) = \begin{pmatrix} \Psi(x_0;t) \\ \vdots \\ \Psi(x_n;t) \end{pmatrix},    (2.9)

2 Actually, even Eq. (2.5) is a simplification; the vector potential A is really a much more involved operator
which takes into account how photons appear and vanish. For strong fields it does, however, make sense
to disregard this.
3 As nuclei are also quantum particles, they really should be amongst the N particles contained in the
Hamiltonian. However, due to the fact that nuclei are much heavier than electrons, we may often assume
the nuclear positions to be fixed relative to the electrons of the atom.

Figure 2.1 The discretization scheme of Eq. (2.9). If the interval [a, b] is large enough to contain the wave function, and the grid
points are chosen densely enough, the array of Ψ-values, Ψ(x₀), Ψ(x₁), Ψ(x₂) and so on, may be used to
interpolate the true wave function with reasonable accuracy. For the record, the grid displayed here is not nearly dense
enough.

where x_i = a + ih and h = (b - a)/n as before. Please do not confuse the numerical
parameter h with the Planck constant. We have tried to illustrate this scheme in Fig. 2.1.
The approximation of Eq. (2.9) is, of course, a tremendous reduction in complexity.
The wave function can in principle be any complex function that is finite, differentiable
and normalizable on R, while in Eq. (2.9) it is reduced to a simple vector with n + 1
entries.
We repeat that, when imposing discretization like this, it is absolutely crucial that
we check for convergence; our results should certainly not depend significantly on our
choices for the numerical parameters a, b and n. We must increase the span of our
interval [a, b] and reduce our step size h, increasing n, until our numerical predictions
are no longer affected by such modifications. Then, and only then, may we reasonably
assume that the discretized representation Ψ⃗(t) is a decent approximation to the true
wave function Ψ(x;t).
In the following we will, for convenience, skip writing out the time dependence of
the wave function explicitly.

2.1.1 Exercise: Normalizing the Discretized Wave Function

When the wave function is represented as a vector with elements Ψ(x₀), Ψ(x₁), ..., we insist that

$$h\sum_{i=0}^{n} |\Psi(x_i)|^2 = h\,\bar{\Psi}^\dagger \bar{\Psi} = 1, \tag{2.10}$$

where † indicates the Hermitian adjoint, which, as mentioned, means transpose and complex conjugation.
Why do we insist on this?
Hint: Apply Riemann integration or the trapezoidal rule, Eq. (1.16a), to Eq. (1.11).

Hopefully you found that this was a revisit of Exercise 1.4.1 - formulated in terms
of a numerical integral.

2.2 Kinetic Energy Numerically

Now, with a proper numerical discretization of Ψ, we may apply numerical differentiation methods to it. The most straightforward approach would be that of finite difference schemes. We may, for instance, apply the three-point rule we saw in Exercise 1.6.1 or the symmetric five-point rule for the double derivative:

$$f''(x) = \frac{f(x-h) - 2f(x) + f(x+h)}{h^2} + \mathcal{O}(h^2) \quad \text{and} \tag{2.11a}$$

$$f''(x) = \frac{-f(x-2h) + 16f(x-h) - 30f(x) + 16f(x+h) - f(x+2h)}{12h^2} + \mathcal{O}(h^4), \tag{2.11b}$$

respectively.

2.2.1 Exercise: The Kinetic Energy Operator as a Matrix

With Ψ given in vector form as in Eq. (2.9), we may write the action of T̂ on Ψ as a matrix multiplication with the Ψ̄ vector. For the above choices, Eqs. (2.11), what will the corresponding matrix approximations be? Assume that Ψ(x) = 0 for x ∉ [a, b] and that it falls off smoothly towards these edges.
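To make this concrete, here is a minimal Python sketch of how the matrix for the three-point rule, Eq. (2.11a), could be set up; the grid parameters are arbitrary example values of our own choosing, and we use units in which ℏ = m = 1.

```python
import numpy as np

# Example grid; a, b and n should, as always, be checked for convergence
a, b, n = -10.0, 10.0, 256
h = (b - a) / n

# Three-point rule, Eq. (2.11a): -2 on the diagonal, 1 on the first
# sub- and superdiagonals, all divided by h^2. With Psi vanishing at
# the edges, the rows may simply be truncated at the boundaries.
d2 = (np.diag(-2.0 * np.ones(n + 1)) +
      np.diag(np.ones(n), 1) + np.diag(np.ones(n), -1)) / h**2

# Kinetic energy matrix, T = -1/2 d^2/dx^2 (hbar = m = 1)
T3 = -0.5 * d2
```

The five-point rule of Eq. (2.11b) follows the same pattern, with two more bands in the matrix.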

There are, of course, several other methods to estimate differentiation of various orders numerically. A particularly convenient one is provided by the Fourier transform, which is defined as

$$\Phi(k) = \mathcal{F}\{\Psi(x)\}(k) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \mathrm{e}^{-\mathrm{i}kx}\, \Psi(x)\, \mathrm{d}x. \tag{2.12}$$

This shifts our position-dependent wave function into another function that depends on the wave number k instead. We may transform this function back into the original x-dependent function by the inverse Fourier transform, F⁻¹:

$$\Psi(x) = \mathcal{F}^{-1}\{\Phi(k)\}(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \mathrm{e}^{+\mathrm{i}kx}\, \Phi(k)\, \mathrm{d}k. \tag{2.13}$$

Within the 'k-space', differentiations are trivial:

$$\frac{\mathrm{d}^n}{\mathrm{d}x^n}\Psi(x) = \frac{\mathrm{d}^n}{\mathrm{d}x^n}\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \mathrm{e}^{\mathrm{i}kx}\, \Phi(k)\, \mathrm{d}k = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} (\mathrm{i}k)^n\, \mathrm{e}^{\mathrm{i}kx}\, \Phi(k)\, \mathrm{d}k = \mathcal{F}^{-1}\left\{(\mathrm{i}k)^n\, \Phi(k)\right\}(x). \tag{2.14}$$

This means that differentiation to any order may be performed by first Fourier transforming Ψ into 'k-space', multiplying this transformed wave function by ik to the proper power and then transforming it back into the x-representation. The action of the kinetic energy operator, for instance, may be calculated as

$$\hat{T}\Psi = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\Psi = \frac{\hbar^2}{2m}\, \mathcal{F}^{-1}\left\{k^2\, \mathcal{F}\{\Psi\}\right\}. \tag{2.15}$$

The same approach still applies for a discretized wave function, Eq. (2.9), for which Eq. (2.12) is replaced by a discrete version. The integral over the continuous k-variable is replaced by a sum over discrete k-values and, in effect, our numerical wave function, which was defined for x between a and b, becomes a periodic function beyond this interval - with period L = b − a, the size of our domain.
Fortunately, discrete numerical Fourier transforms may be performed extremely efficiently, and standard implementations for the fast Fourier transform, FFT, are easily found within all numerical frameworks.
In discrete 'k-space', the x-vector, (x₀, x₁, ..., xₙ), is replaced by a k-vector. The maximum magnitude of k is inversely proportional to the spatial step size h:

$$k_{\max} = \frac{\pi}{h}, \tag{2.16}$$

and the k-vector corresponding to the Fourier-transformed wave function extends from −k_max to (almost) k_max in n steps of length

$$\Delta k = \frac{2k_{\max}}{N} = \frac{2\pi}{Nh}, \tag{2.17}$$

where N = n + 1 is the number of points. Note, however, that FFT implementations typically distort this k-vector in a somewhat non-intuitive manner; when N is even, it typically starts from zero and reaches (N/2 − 1)·Δk and then continues from −N/2·Δk to −1·Δk. Check the documentation of the standard FFT implementation within your preferred framework in order to work this out.
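With NumPy, for instance, this k-vector may be obtained via np.fft.fftfreq, which returns ordinary frequencies that must be multiplied by 2π to become wave numbers. A small illustration:

```python
import numpy as np

N, h = 8, 0.5     # number of grid points and spatial step size
k = 2 * np.pi * np.fft.fftfreq(N, d=h)
print(np.round(k, 3))
# [ 0.     1.571  3.142  4.712 -6.283 -4.712 -3.142 -1.571]
# The vector starts at zero, climbs to (N/2 - 1)*dk, then jumps to
# -N/2*dk and works its way back up towards -dk.
```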
Actually, quantum physics may be formulated in 'k-space', or momentum space, as an alternative to the usual position space formulation. In that case, the x̂ and p̂ operators change roles, so to speak, and the momentum wave function Φ(k) gives the momentum probability distribution in the same manner as Ψ(x) gives the position probability distribution. The momentum variable p and the wave number k are related by p = ℏk, which means that they are the same in our convenient units. If you want to gain a deeper understanding of this - or how the discrete and continuous Fourier transforms relate to each other - we highly recommend the YouTube video in Ref. [1]. The video also indicates how this is related to the Heisenberg uncertainty relation, Ineq. (1.5).

2.2.2 Exercise: Expectation Values as Matrix Multiplications

(a) For the wave functions in Exercise 1.4.2, calculate the expectation value of the
kinetic energy, Eq. (1.21), in the same manner as you did in Exercise 1.4.2. Do this
using both a finite difference formula, Eqs. (2.11), and some FFT implementation
in your preferred framework for numerics.
21 2.3 Dynamics Without Explicit Time Dependence

(b) Now, do the same but via matrix multiplication. Let your wave function be represented as in Eq. (2.9). With this, and proper normalization, the expectation value may now be estimated as

$$\langle T \rangle \approx h\, \bar{\Psi}^\dagger \mathbf{T}\, \bar{\Psi}, \tag{2.18}$$

where Ψ̄ is the complex column vector of Eq. (2.9) and T is a square matrix.
For each of the four wave functions, choose an adequate domain, [a, b], and implement both the representations of Exercise 2.2.1 and the corresponding FFT matrix - with an increasing number of grid points, N. The FFT matrix may, as any linear transformation from C^N, be determined by transforming the identity matrix - column by column or in one go.
Do these estimates reproduce your findings in (a)?
Are these numerical representations of the kinetic energy operator actually Hermitian - does T = T† hold numerically?
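As an illustration of the latter point, the following sketch constructs the FFT matrix by transforming the identity matrix in one go and uses it in Eq. (2.18); the Gaussian test function and all names are our own, with ℏ = m = 1.

```python
import numpy as np

# Grid
a, b, N = -10.0, 10.0, 128
x = np.linspace(a, b, N)
h = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=h)

# Kinetic energy as a matrix: FFT each column of the identity,
# multiply by k^2/2 in k-space and transform back, Eq. (2.15)
T = np.fft.ifft(0.5 * k[:, None]**2 *
                np.fft.fft(np.identity(N), axis=0), axis=0)

# A normalized Gaussian test function
psi = np.exp(-x**2 / 2)
psi = psi / np.sqrt(h * psi @ psi)

# Expectation value, Eq. (2.18); should come out close to 0.25
print((h * psi.conj() @ (T @ psi)).real)

# Hermiticity check
print(np.allclose(T, T.conj().T))
```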

The benefit of reformulating the above calculations in terms of matrix multiplications can be questioned in the above exercise. The process will, however, prove quite useful now that we are, finally, going to solve the Schrodinger equation.

2.3 Dynamics Without Explicit Time Dependence in the Hamiltonian

When the Hamiltonian Ĥ does not bear any explicit time dependence, the solution of the Schrodinger equation, Eq. (2.1), may be formally written as

$$\Psi(t) = \exp\left[-\mathrm{i}\hat{H}(t - t_0)/\hbar\right] \Psi(t_0), \tag{2.19}$$

where Ψ(t₀) is the wave function at some initial time t₀ and the spatial dependence of Ψ is implicit this time. It may seem odd for an operator or a matrix to appear as an exponent. However, it does make sense if we define it as a series expansion. For any number x, it holds that

$$\mathrm{e}^x = \sum_{n=0}^{\infty} \frac{1}{n!}\, x^n = 1 + x + \frac{1}{2!}x^2 + \frac{1}{3!}x^3 + \cdots. \tag{2.20}$$

In the same manner, we may define the exponential function with an operator or a matrix as exponent:

$$\mathrm{e}^{\hat{A}} = \sum_{n=0}^{\infty} \frac{1}{n!}\, \hat{A}^n, \tag{2.21}$$

where Â⁰ = I is the identity operator - the operator that does nothing.

In the following we will approximate the Hamiltonian by matrices, as we did in the
preceding exercises for the kinetic energy operator. When actually calculating matrix
exponentials we rarely resort to Taylor series as in Eq. (2.21); such an expansion would
typically have to be truncated if implemented. However, we can exponentiate a matrix
exactly if it is diagonalizable. And ours is; theory from linear algebra ensures this for

any Hermitian matrix.⁴ When a matrix A is Hermitian, A = A†, there will always exist matrices P and D such that

$$A = PDP^\dagger, \tag{2.22}$$

where D is diagonal with the eigenvalues of the matrix A, which necessarily are real, by the way, along the diagonal, D = Diag(λ₀, λ₁, ...). Moreover, the matrix P is unitary; P† = P⁻¹.
With this we may write

$$\mathrm{e}^{-\mathrm{i}H\Delta t/\hbar} = P\, \mathrm{Diag}\left(\mathrm{e}^{-\mathrm{i}\varepsilon_0 \Delta t/\hbar}, \mathrm{e}^{-\mathrm{i}\varepsilon_1 \Delta t/\hbar}, \ldots\right) P^\dagger, \tag{2.23}$$

where ε₀, ε₁, ... are the eigenvalues of the Hamiltonian matrix H.


For obtaining these eigenvalues and the corresponding eigenvectors, the columns
of P, you will certainly find adequate numerical implementations in any platform or
language. However, you are also likely to find implementations that can perform the
exponentiation in Eq. (2.19) directly.
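As a sketch of what such an implementation could look like in Python, assuming the Hamiltonian matrix H has already been constructed and ℏ = 1:

```python
import numpy as np

def propagator(H, dt):
    """Return the matrix exp(-1j*H*dt) for a Hermitian H via Eq. (2.23)."""
    E, P = np.linalg.eigh(H)     # eigenvalues E and eigenvector matrix P
    return P @ np.diag(np.exp(-1j * E * dt)) @ P.conj().T

# Alternatively, e.g. scipy.linalg.expm(-1j * H * dt) performs the
# exponentiation directly, without an explicit diagonalization.
```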

2.3.1 Exercise: The Formal Solution

(a) Prove that Eq. (2.19) in fact is a solution of the Schrodinger equation, Eq. (2.1),
with the proper initial conditions.
(b) Prove that Eq. (2.21) and Eq. (2.22) in fact lead to Eq. (2.23).

These results will be quite useful in the following, when we play around with travelling wave packets. For that we need a proper initial wave function, Ψ(x;t₀). In this context, working with Gaussian wave packets is quite convenient. A general Gaussian has the form

$$f(x) = C \exp\left[-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right], \tag{2.24}$$

where μ is the mean position and σ is the width (see Fig. 2.2).
When the position distribution |Ψ(x)|² has this shape, μ is the mean position, Eq. (1.12), and σ is the width of the position distribution, Eq. (1.14). Gaussians are practical for several reasons. One of these is their rather simple form. And although their tails never actually vanish completely, they are, for all practical purposes, indeed confined in space⁵ - and in momentum. Moreover, an initial Gaussian wave packet allowed to travel freely will remain Gaussian. Specifically, with your normalized wave function at time t = 0 being

$$\Psi(x, t=0) = \sqrt[4]{\frac{2\sigma_p^2/(\pi\hbar^2)}{\left[1 - 2\mathrm{i}\sigma_p^2 \tau/(m\hbar)\right]^2}}\; \exp\left[-\frac{\sigma_p^2 (x - x_0)^2/\hbar^2}{1 - 2\mathrm{i}\sigma_p^2 \tau/(m\hbar)} + \mathrm{i}\, p_0 (x - x_0)/\hbar\right], \tag{2.25}$$

⁴ Perhaps this result is more familiar when it is formulated in terms of real matrices: a real symmetric matrix is always diagonalizable. The same holds for complex matrices - with symmetry replaced by Hermiticity.
⁵ That is to say, their magnitude quickly falls below machine accuracy as x departs from μ.

Figure 2.2 | Shape of a normalized Gaussian curve, as defined in Eq. (2.24). Here the mean value μ = 4 and the standard deviation σ = 2. These are illustrated by vertical and horizontal dashed lines, respectively.

the absolute value squared of our wave function at later time t is

$$|\Psi(x;t)|^2 = c^2 \exp\left[-\frac{2\sigma_p^2 (x - x_0 - p_0 t/m)^2/\hbar^2}{1 + 4\sigma_p^4 (t-\tau)^2/(m\hbar)^2}\right], \tag{2.26}$$

with the time-dependent normalization factor

$$c = \left[\frac{2\sigma_p^2/(\pi\hbar^2)}{1 + 4\sigma_p^4 (t-\tau)^2/(m\hbar)^2}\right]^{1/4}.$$

Although slightly more 'messy', Eq. (2.26) is indeed of the same form as Eq. (2.24),⁶ with time-dependent μ and σ. Here x₀ is the particle's mean position initially, τ is the time at which the wave packet is at its narrowest, p₀ is the mean momentum and σ_p is the width of the momentum distribution.

2.3.2 Exercise: Wave Propagation

In this exercise you are going to simulate a travelling Gaussian wave packet that is not exposed to any potential V(x). With V = 0, the Hamiltonian is simply the kinetic energy operator, Ĥ = T̂ with T̂ given in Eq. (1.21). By choosing units so that both ℏ and the mass m become 1, this is quite simple: T̂ = −(1/2) d²/dx². This system serves well for checking the accuracy of the numerical approximations since the exact solution is known analytically, Eq. (2.26).
For starters, you can choose the following set of parameters:

x₀      σ_p     p₀      τ
−20     0.2     3       5

⁶ It becomes a bit 'cleaner' with m = 1 and ℏ = 1.



Also, let your domain be [a, b] = [−L/2, L/2] with L = 100 length units. Choose an initial number of grid points N = n + 1, and let your initial time be t = 0. Because you will be using the FFT, it's an advantage to set N = 2^k where k is an integer. In other words, N should be 64, 128, 256 and so on.

(a) For each of three approximations to the kinetic energy operator, the finite difference approximations of Eqs. (2.11) and the Fourier approach of Eq. (2.15), simulate the evolution of the wave function according to Eq. (2.19) and compare the numerical estimate to the exact one, Eq. (2.26). Specifically, for a set of times t₀ = 0, t₁, t₂, ... where t_{i+1} = tᵢ + Δt, plot |Ψ(x;tᵢ)|² at each time step tᵢ. Preferably, do this with all four versions of the wave function - the three numerical approximations and the exact, analytical one - simultaneously as an animation. The transition from one time step to another is achieved by Eq. (2.19) and, if necessary, Eq. (2.23):

$$\Psi(t + \Delta t) = \mathrm{e}^{-\mathrm{i}\hat{H}\Delta t/\hbar}\, \Psi(t). \tag{2.27}$$

Choose a reasonably small Δt so that when you iterate over time as t → t + Δt and update your plot repeatedly, it renders a reasonably smooth animation; a sketch of such a loop follows after this exercise.
(b) Play around with the numerical parameter N. For each of the three numerical estimates, how large an N do you need for your estimate to, more or less, coincide with the analytical exact solution? Which implementation seems to be the most precise one? Is the wave function in fact at its narrowest when t = τ?
(c) What happens to the different approximations to the wave function when they hit the boundary at x = ±L/2?
(d) Repeat this exercise playing around with various choices for the parameters in the initial Gaussian wave, σ_p, p₀, x₀ and τ.
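A minimal sketch of such a simulation loop, restricted to the FFT propagator and the analytical solution, Eq. (2.26); since V = 0, the propagation is carried out directly in k-space, and all names are our own (ℏ = m = 1).

```python
import numpy as np
import matplotlib.pyplot as plt

L, N = 100.0, 512
x0, sigma_p, p0, tau = -20.0, 0.2, 3.0, 5.0
x = np.linspace(-L/2, L/2, N)
h = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=h)

# Initial Gaussian wave packet, Eq. (2.25)
denom = 1 - 2j * sigma_p**2 * tau
Psi = (2 * sigma_p**2 / np.pi / denom**2)**0.25 * \
      np.exp(-sigma_p**2 * (x - x0)**2 / denom + 1j * p0 * (x - x0))

dt = 0.1
for i in range(150):
    t = i * dt
    # Analytical density, Eq. (2.26)
    width = 1 + 4 * sigma_p**4 * (t - tau)**2
    exact = np.sqrt(2 / np.pi * sigma_p**2 / width) * \
            np.exp(-2 * sigma_p**2 * (x - x0 - p0 * t)**2 / width)
    plt.cla()
    plt.plot(x, np.abs(Psi)**2, x, exact, '--')
    plt.pause(0.01)
    # One step of Eq. (2.27); with V = 0, the propagator is diagonal in k-space
    Psi = np.fft.ifft(np.exp(-1j * 0.5 * k**2 * dt) * np.fft.fft(Psi))
plt.show()
```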

The odd, artificial behaviour seen when our numerical approximations hit the boundary has to do with the boundary condition we happened to impose. Our FFT approximation to the kinetic energy operator requires our wave function to fulfil a periodic boundary condition. In the finite difference approximations, we imposed the restriction Ψ(±L/2) = 0 at all times. This boundary condition, which in effect corresponds to putting hard walls at x = ±L/2, is an example of a Dirichlet boundary condition, named after the German mathematician Peter Gustav Lejeune Dirichlet.
The numerical tools you just developed will prove quite useful throughout the
remainder of this book. Your set of numerical implementations constitutes a nice test
bed for exploring several quantum phenomena.
Interference is one such phenomenon.

2.3.3 Exercise: Interference

Construct an initial wave packet of the form

$$\Psi(x, t=0) = \frac{1}{\sqrt{2}}\left(\Psi_1(x) + \Psi_2(x)\right), \tag{2.28}$$

where each of the two functions Ψ₁ and Ψ₂ is of the form of Eq. (2.25). The parameters should not be the same. Specifically, make sure to choose values for the mean momentum and initial positions such that the two Gaussians travel towards each other; the p₀ values for Ψ₁ and Ψ₂ must differ in sign. Also, make sure that the two initial Gaussians have negligible overlap initially. If so, the pre-factor 1/√2 above ensures normalization.
Now, as in Exercise 2.3.2, simulate the evolution according to the Schrodinger equa­
tion for the system - still without any potential. In this case, just use the FFT version.
What happens?
Also plot the real and imaginary parts of the wave function separately in this case,
not just the absolute value squared.

Although somewhat simplified, the pattern that emerges comes about in the same
manner as the interference pattern that Thomas Young saw in his famous double-slit
experiment with light. Odd as it may seem, the interference phenomenon you just saw
is real and measurable; the double-slit experiment has in fact been performed with
particles instead of light (see Fig. 2.3). Note that, while this interference pattern does
correspond to two interfering waves, it does not correspond to two particles interfering
with each other. There is only one quantum particle, and it interferes with itself.

2.3.4 Exercise: Expectation Values and Uncertainties

Choose a Gaussian wave packet, Eq. (2.26), with τ > 0 and use the FFT implementation from Exercise 2.3.2 to make a plot of the expectation values ⟨x⟩ and ⟨p⟩ as functions of time.
Also, calculate the uncertainty, or standard deviation, of the position and momentum as functions of time. You will find relevant relations in Eqs. (1.12), (1.20) and (1.26). In determining ⟨p⟩ and ⟨p²⟩, you will need to estimate dΨ(x)/dx and d²Ψ(x)/dx², respectively. To this end, both Eqs. (1.23) and (2.11a) or Eq. (2.14) will do; a sketch along the latter lines follows after this exercise.
Finally, confirm numerically that the uncertainty in momentum remains constant and that the Heisenberg uncertainty principle, Ineq. (1.5), is not violated.
Does equality in Ineq. (1.5) apply at any time? If so, when?
A small reminder: when we set the reduced Planck constant ℏ = 1, this means that the Planck constant h = 2π.
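A sketch of how these quantities may be extracted from a discretized wave function, with the derivatives taken via the Fourier transform, Eq. (2.14); here Psi is the normalized wave function vector on a grid x with step h and k-vector k, and all names are our own.

```python
import numpy as np

def observables(Psi, x, k, h):
    """Means and uncertainties of position and momentum for a
    discretized, normalized wave function Psi (hbar = 1)."""
    dPsi = np.fft.ifft(1j * k * np.fft.fft(Psi))      # first derivative
    d2Psi = np.fft.ifft(-k**2 * np.fft.fft(Psi))      # second derivative
    x_mean = h * np.real(Psi.conj() @ (x * Psi))
    x2_mean = h * np.real(Psi.conj() @ (x**2 * Psi))
    p_mean = h * np.real(Psi.conj() @ (-1j * dPsi))
    p2_mean = h * np.real(Psi.conj() @ (-d2Psi))
    return (x_mean, np.sqrt(x2_mean - x_mean**2),
            p_mean, np.sqrt(p2_mean - p_mean**2))
```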

Hopefully, you found that the Schrodinger equation makes sure that this wave function abides by the uncertainty principle at all times. This will also be the case for any other admissible wave function.
So far we have let our wave function evolve freely. And not much has happened to it.
It would be more interesting to let our wave functions hit something. It is about time
we introduced a potential in our Hamiltonian.

Figure 2.3 | The famous double-slit experiment, which Thomas Young performed with light back in 1801 (see Fig. 1.1), can be done with particles as well. Electrons are sent through narrow slits, one by one. The spots appear where electrons hit an electron-sensitive screen afterwards. We see that, as more and more particles are detected, an interference pattern emerges. Specifically, the numbers of electrons in the pictures are (a) 8, (b) 270, (c) 2000 and (d) 160,000 [2]. The original experiment, published in 1989, was conducted by A. Tonomura, J. Endo, T. Matsuda and T. Kawasaki, at the Hitachi Advanced Research Laboratory, and H. Ezawa, at Gakushuin University, Japan [39]. A similar experiment was performed by the Italians P. G. Merli, G. F. Missiroli and G. Pozzi in 1976 [28, 36]. The first observation of interference between electrons, however, was made by Clinton Davisson and Lester Germer, who in the 1920s studied how electrons scattered off a nickel surface.

2.4 Scattering and Tunnelling

We now introduce a potential V in the Hamiltonian. For simplicity, we let this potential be a rectangular one, centred at the middle of the grid:

$$V(x) = \begin{cases} V_0, & |x| < w/2, \\ 0, & |x| > w/2, \end{cases} \tag{2.29}$$

where w is the width. The value V₀ can be positive, in which case the potential is a barrier, or negative, in which case it may confine our quantum particle. For now we address the former case.
Actually, for numerical simulations, instead of the purely rectangular potential above, we will use a smoother version:

$$V_s(x) = \frac{V_0}{\mathrm{e}^{s(|x| - w/2)} + 1}. \tag{2.30}$$

Figure 2.4 | The rectangular potential in Eq. (2.29) - along with the 'smooth' version of Eq. (2.30) for three different values of s. Here V₀ = 1 and w = 6.

This one is a bit more convenient to work with numerically. The parameter s fixes the smoothness. In the limit s → ∞ we reproduce the sharp corners of the potential in Eq. (2.29). The potential is plotted for various s values in Fig. 2.4 - along with the purely rectangular one.

2.4.1 Exercise: Scattering on a Barrier

In this exercise you do not get to choose the parameters yourself - at least not initially. In your implementation from Exercise 2.3.2, again with the initial wave function, or initial state, as in Eq. (2.26), you introduce a potential of the above shape, Eq. (2.30). This is easy, you just augment your Hamiltonian with the potential V, which becomes a diagonal matrix in our representation:

$$V = \mathrm{Diag}\left(V(x_0), V(x_1), V(x_2), \ldots\right) = \begin{pmatrix} V(x_0) & 0 & 0 & \cdots \\ 0 & V(x_1) & 0 & \cdots \\ 0 & 0 & V(x_2) & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \tag{2.31}$$

before you perform the exponentiation, Eq. (2.27). Again, we suggest you use the FFT representation of kinetic energy.
Use the following parameters:

L       n + 1   σ_p     p₀      x₀      τ       V₀      w       s
200     512     0.2     1       −20     0       3       2       5

As in Exercise 2.3.2, make an animation/simulation in which your wave packet hits the barrier. You may want to indicate the position of the barrier somehow in your simulation.
What happens?

After the wave packet has collided with the barrier, what is the final probability for the wave function to remain on the left side of the barrier (x < 0)? And what is the probability that it has passed to the right side of it (x > 0)? The former is called the reflection probability, R, and the latter is called the transmission probability, T:

$$R = \int_{-\infty}^{0} |\Psi(x)|^2\, \mathrm{d}x, \tag{2.32a}$$

$$T = \int_{0}^{\infty} |\Psi(x)|^2\, \mathrm{d}x = 1 - R, \tag{2.32b}$$

where these quantities should, strictly speaking, be calculated in the limit t → ∞. This would, however, require a grid of infinite extension. In practice, you determine these quantities at a time that is long enough for the collision with the barrier to be over but short enough to prevent the numerical wave function hitting the boundary at x = ±L/2. A sketch of such an estimate follows after this exercise.
Rerun the scattering process with different choices for p₀ and V₀. How are the reflection and transmission probabilities affected by these adjustments?
Finally, replace the V₀ value with a negative one so that the barrier becomes a well instead.
Will your quantum particle, with some probability, fall into the well and get trapped? Do you still get reflection with a negative V₀ in the potential?
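The probabilities in Eqs. (2.32) may be estimated by simple Riemann sums over the two halves of the grid, along these lines; the dummy Gaussian at the end is only there to make the sketch self-contained.

```python
import numpy as np

def reflection_transmission(Psi, x, h):
    """Estimate R and T, Eqs. (2.32), from the final wave function Psi
    on the grid x with spacing h."""
    R = h * np.sum(np.abs(Psi[x < 0])**2)
    T = h * np.sum(np.abs(Psi[x > 0])**2)
    return R, T

# Example with a normalized dummy Gaussian centred at x = 5
x = np.linspace(-100, 100, 512)
h = x[1] - x[0]
Psi = (2 / np.pi)**0.25 * np.exp(-(x - 5)**2)
print(reflection_transmission(Psi, x, h))   # almost all on the right
```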

We suppose that the answer to the last question is affirmative. Would a classical
particle - a particle that follows Newton’s laws - behave this way? We could check.

2.4.2 Exercise: The Dynamics of a Classical Particle

It could be interesting to compare the wave function with the position that a classical particle would have. If we include this in the same simulation as in Exercise 2.4.1, it will serve to illustrate some of the profound differences between classical physics and quantum physics.
Simulate numerically the trajectory of a classical particle, x(t), with the initial conditions given by the mean position and mean momentum of the initial Gaussian wave packet; set x(t = 0) = x₀ and let the initial velocity be v(t = 0) = p₀/m. The classical evolution is dictated by Newton's second law, Eq. (1.1), in one dimension:

$$m\, x''(t) = -\frac{\mathrm{d}}{\mathrm{d}x}V(x). \tag{2.33}$$

If we write down separate but coupled equations for the momentum and position,

$$m\frac{\mathrm{d}}{\mathrm{d}t}x(t) = p \quad \text{and} \quad \frac{\mathrm{d}}{\mathrm{d}t}p(t) = -V'(x), \tag{2.34}$$

we may formulate the evolution as a first-order ordinary differential equation:

$$\frac{\mathrm{d}}{\mathrm{d}t}\begin{pmatrix} x \\ v \end{pmatrix} = \begin{pmatrix} v \\ -V'(x)/m \end{pmatrix}, \tag{2.35}$$

with v being the particle's velocity. This allows us to resolve the dynamics as a first-order ordinary differential equation, an ODE, by standard schemes such as Runge-Kutta methods, for instance. You may very well make use of standard ODE routines in your numerical framework of preference; a sketch using one such routine follows after this exercise. The differentiation of the potential could be done, for instance, by Eq. (1.23), or it could be done using paper and pencil.
Include a numerical solution of Eq. (2.35) along with your solution of the Schrodinger equation in your implementation from Exercise 2.4.1. Indicate the position of the classical particle along with the evolution of the quantum wave.
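A possible sketch using SciPy's standard ODE solver, with the derivative of the smooth potential, Eq. (2.30), worked out with paper and pencil; the parameters match the table of Exercise 2.4.1, and all names are our own.

```python
import numpy as np
from scipy.integrate import solve_ivp

V0, w, s, m = 3.0, 2.0, 5.0, 1.0

def dVdx(x):
    """Analytical derivative of the smooth barrier, Eq. (2.30)."""
    u = np.exp(s * (abs(x) - w / 2))
    return -V0 * s * np.sign(x) * u / (u + 1)**2

def rhs(t, y):
    """Right-hand side of Eq. (2.35); y = [x, v]."""
    return [y[1], -dVdx(y[0]) / m]

# x(0) = x0 and v(0) = p0/m, matching the initial Gaussian wave packet
sol = solve_ivp(rhs, [0, 40], [-20.0, 1.0], max_step=0.1)
print(sol.y[0, -1])   # final position: the particle is reflected
```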

Now, suppose a ball is rolling uphill with an initial velocity of v₀ = 4 m/s. The top of the hill is at height H = 1 m above the ball's initial position. Why can we be absolutely certain that the ball will not make it to the other side of the hill? Because energy conservation prohibits it. The ball with mass m = 1 kg would need an energy of mgH ≈ 10 J to reach the top of the hill, while it only has mv₀²/2 = 8 J to spend. Thus, for sure, the classical ball will roll down again on the same side it came from; there is no way we would retrieve the ball on the other side.
Let’s have a look at the same process within the quantum world.

2.4.3 Exercise: Tunnelling

Start out with the same scenario as in Exercise 2.4.1 with the same parameters - except for one thing: this time, let V₀ = 1.
Again, run the simulation and see what happens. And, again, calculate the reflection
probability and transmission probability, Eqs. (2.32). What is the probability for this
quantum physical ‘ball’ to make it to the other side of the hill?
Feel free to include the implementation of Newton’s second law, the one from Exer­
cise 2.4.2, along with your solution of the Schrodinger equation in your simulation.

So, despite the fact that our quantum ball has a mean energy of about p₀²/(2m) = 0.5 in our units, it still has a significant probability to be found on the other side of a 'hill' that it would take an energy of V₀ = 1 energy units to climb. Isn't that rather odd?
Well, you could argue that 0.5 is only the mean energy of the particle - more or less.
Since the wave that is a quantum particle typically has a distribution in momentum,
rather than one specific momentum, it also has a distribution in energy. And this distribution may very well contain some higher energy components which could make it
over the barrier. If we think of the particle as a wave in water, parts of it could ‘splash’
over, so to speak.

This is, indeed, a bona fide argument. However, in the above case, the probability of measuring an energy beyond 1 is actually just 1.6%.⁷ As seen in the exercise, the probability of turning up on the other side of the barrier is considerably higher than this. So, the oddity remains.
Let’s play around with this phenomenon, which is called tunnelling, a bit more.

2.4.4 Exercise: Tunnelling Again

Now, we replace the barrier with two narrow barriers, symmetrically placed on either side of x = 0. This can be achieved with this potential:

$$V(x) = V_s(x - d) + V_s(x + d), \tag{2.36}$$

where V_s refers to Eq. (2.30). Here the separation d must be significantly larger than the width w of the barriers. Set d = 10 length units and w = 0.5. Also, let V₀ = 1 and s = 25.
Make your initial state a Gaussian wave packet well localized between the barriers, but not too narrow. This time, let both the initial mean position x₀ and momentum p₀ be zero.
Then run your simulation again. What happens? Can you contain the wave function between the barriers? Perhaps the evolution of the wave function is best studied with a logarithmic y-axis in this case.
As you go along, calculate the probability of finding the particle between the barriers as a function of time:

$$P_{\text{between}} = \int_{-d}^{d} |\Psi(x;t)|^2\, \mathrm{d}x, \tag{2.37}$$
and plot it after the simulation.
Simulate until a significant part of the wave packet hits the boundary at x = ±L/2.
You may want to increase your box size L and let your simulation run a bit longer than
in the case of Exercise 2.4.3. Also, make sure to use a dense enough grid to get a decent
approximation of your barriers, which are rather narrow this time.
Try to see how the tunnelling probability is affected by imposing slight adjustments
on the height and the width of your barriers.

We have now seen examples of situations in which a quantum physical object, such as an electron or an atom, penetrates regions in which it does not have sufficient energy to be and then comes out on the other side.
This tunnelling phenomenon has no analogue in classical physics.⁸ In effect, this means that one can actually make electrons escape an atom or some piece of metal
⁷ In Exercise 5.3.3 we will see how such probabilities may be determined.
⁸ A silly thought: suppose sheep were quantum physical objects. In that case, just building fences wouldn't do; the sheep would eventually tunnel through them. Instead, the farmer would have to dig a sharp depression in the ground in order to confine the herd. If they did so, however, it would get even weirder - again, if sheep were quantum objects, that is.

Figure 2.5 | Image of a graphite surface obtained using a scanning tunnelling microscope. The structure you can make out is made up of individual atoms; this is, in fact, a picture of atoms!

although they do not really have sufficient energy to make this happen - in the classical sense. One very useful application of this phenomenon is the scanning tunnelling microscope, which actually allows us to see individual atoms (see Fig. 2.5). This technology, which we will return to in Chapter 6, exploits the fact that the tunnelling probability is very sensitive to the width and the height of the barrier being tunnelled through. Hopefully, this fact resonates with your experience by the end of Exercise 2.4.4.

2.5 Stationary Solutions

As we have already seen, quantum physics has some strange, non-intuitive features. Yet another one comes into play if we ask ourselves the following question: are there solutions of the Schrodinger equation for which the probability density, |Ψ(x;t)|², does not change in time? Such a situation would be reminiscent of the standing waves of a vibrating string. The answer is yes - if we have a confining potential.

2.5.1 Exercise: Enter the Eigenvalue Problem

Suppose that a quantum system exposed to a time-independent Hamiltonian Ĥ is such that |Ψ(x;t)|² is also time independent.

(a) Verify that this is achieved if the system's wave function is of this form:

$$\Psi(x,t) = \mathrm{e}^{-\mathrm{i}\varepsilon t/\hbar}\, \psi(x). \tag{2.38}$$

Why must the constant ε be real here?
(b) Show that when we insert the wave function of Eq. (2.38) into the Schrodinger equation, Eq. (2.1), we arrive at the equation

$$\hat{H}\psi(x) = \varepsilon\, \psi(x) \tag{2.39}$$

for the time-independent part ψ(x).



(c) Explain, without assuming a separable form as in Eq. (2.38) from the outset, how the restriction that |Ψ(x;t)|² is time independent in fact leads to Eq. (2.38).
Be warned, this last part of the exercise is hardly trivial. It may be done in the following way. First, note that since |Ψ(x;t)| is time independent, the wave function may be written in polar form as Ψ(x;t) = ψ(x)e^{iφ(x;t)}, where both functions ψ and φ are real and ψ is time independent. Then, insert Ψ in this form into Eq. (2.1) with the Hamiltonian of Eq. (2.8) and move purely x-dependent terms to one side of the equation in order to achieve a partial separation of variables. Finally, insist that φ remains real in time.

The next chapter is dedicated to Eq. (2.39). We may recognize it as an eigenvalue equation. The Hamiltonian Ĥ is the operator corresponding to energy, and the eigenvalue ε on the right hand side is, in fact, energy. For each eigenenergy ε there is a corresponding eigenfunction or eigenstate ψ_ε(x).
Before we direct our full attention to Eq. (2.39), which will be the topic of the next chapter, we should say a few more words about eigenvalues and eigenfunctions and how they are related to measurement.

2.6 Eigenstates, Measurements and Commutators

The following is a fundamental postulate of quantum physics:

A precise measurement of the energy of a quantum system will necessarily produce an eigenvalue of the energy operator Ĥ as a result.

In the next chapter we will see that this imposes severe limitations on the energies a quantum system may be observed to have when the system is confined, or bound.
The fact that we can only measure eigenvalues extends to all observables in quantum physics, not just energy. The possible outcomes of measuring any physical quantity A will necessarily be eigenvalues a of the corresponding operator Â,

$$\hat{A}\varphi_a(x) = a\, \varphi_a(x). \tag{2.40}$$

2.6.1 Exercise: The Standard Deviation of an Eigenstate

For a quantum system in the above eigenstate, Ψ(x) = φ_a(x), show that the expectation value of A is a and that its standard deviation is zero.

Suppose now that we have made a measurement of some generic physical quantity A and got the result a. This has altered the wave function; it is now an eigenfunction of the Â-operator corresponding to the eigenvalue that was measured: Ψ(x) → φ_a(x). That is to say, as the system is subject to some measurement, it no longer follows the evolution dictated by the Schrodinger equation, it collapses into an eigenstate - the

eigenstate <pa corresponding to the result a of the measurement. This is the collapse of
the wave function, which we briefly discussed in Section 1.4.
It is not for us to know in advance which eigenstate it collapses into. We can only
determine the probability of measuring the outcome a when making an A measurement
at time f.
Pa = l(^|4'(f))|2. (2.41)

An identical measurement on another identically prepared system may very well produce a different result. Repeated A measurements in quick succession on the same system will, however, keep on reproducing the result a since the wave function collapsed into the corresponding eigenstate at the first measurement.
So why do we resort to such an odd idea as an instantaneous, random collapse of
the wave function? Because it agrees with how nature actually turns out to behave.
Every time we have asked her whether she really behaves like this, or in less poetic
terms: whenever we have performed an experiment addressing this question, she has
answered yes.
In Fig. 2.3, for instance, we see this collapse manifested in the fact that electrons,
which initially are described as waves spreading out in space, are detected as tiny bright
spots on a plate - a position measurement has caused the wave function to collapse
into a localized spike. A single position measurement alone does not reveal any wave
behaviour. But when we perform measurements on several electrons, each one with
more or less the same wave function, and aggregate the outcomes, the wave interference
pattern emerges.
Again, this quantum phenomenon has no analogue in classical physics.
Well, actually, let's try to find one. Maybe we could think of a quantum system as an odd guitar - an imaginary quantum guitar. When you pick a string on an actual guitar, you set it vibrating (Fig. 2.6). This vibration can be thought of as a bouquet of standing waves - or stationary solutions. Each wave has a wavelength following the formula λ_n = 2L/n, where n is a positive integer and L is the length of the string. The higher the n, the more nodes the standing wave has and the shorter is its wavelength λ_n.
Figure 2.6 | Waves on real, classical guitar strings. In case you were wondering, the picture is manipulated.

According to the relation fλ = v, where f is the frequency and v is the velocity of the wave,⁹ there is a corresponding set of frequencies f_n = nv/(2L). So, when you picked the string, you didn't just produce a sound with the fundamental tone, the 'ground frequency' f₁ = v/(2L), you also produced a set of harmonics of higher frequency. The admixture of these waves, each one with its own amplitude, is what gives the guitar its characteristic sound, its timbre.
Now, for a quantum guitar the oddity enters when we wish to measure the frequency of the sound. The vibrating string on a quantum guitar could, like an ordinary guitar, initially have a distribution between various possible frequencies - or energies, see Eq. (1.4). However, a quantum measurement would produce one, and only one, of the frequencies f_n. And after measuring it, the string would vibrate as a single standing wave - with frequency f_n exclusively. This frequency is, perhaps, most likely to be the ground frequency f₁, but we will not necessarily get this result. In any case, gone is the bouquet of harmonics that gives it its particular timbre. The quantum string subject to frequency measurement would sound like a naked, cold sinusoidal tone.
Admittedly, addressing similarities and differences between classical particles and quantum particles in terms of strings on a guitar may come across as a bit far fetched. But the analogy between standing waves on a string and stationary solutions for the atom did actually inspire both de Broglie and Schrodinger. Perhaps some of the parallels between a quantum system and this odd, imaginary guitar will become clearer in the next chapter.
Before we get there, we will introduce the commutator, which is a rather important
notion in quantum physics for several reasons. We will briefly address a couple of those
reasons.

2.6.2 Exercise: Commuting with the Hamiltonian

The commutator between two operators Â and B̂ is defined as

$$[\hat{A}, \hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A}. \tag{2.42}$$

If this commutator happens to be zero, we say that the operators Â and B̂ commute. This would always be the situation if Â and B̂ were just numbers, but not necessarily if they are operators or matrices.
The position operator and the momentum operator, Eq. (1.19), for instance, do not commute. Their commutator, which is often referred to as the fundamental commutator in quantum physics, is

$$[\hat{x}, \hat{p}] = \mathrm{i}\hbar. \tag{2.43}$$

(a) Show that Eq. (2.43) is correct. To this end, introduce a test function, say, f(x), for the commutator [x̂, p̂] to act upon and make use of the product rule:

$$\left(f(x) \cdot g(x)\right)' = f'(x)g(x) + f(x)g'(x). \tag{2.44}$$

⁹ This velocity, in turn, depends on the thickness and tension of the string.

In the following, we will, by rather generic means, deduce a couple of fundamental results related to the commutators with the Hamiltonian. In this particular context we will exploit Dirac notation, Eq. (1.30), to simplify expressions - with a rather abstract outline as a consequence.
(b) For the Hamiltonian of Eq. (2.8), show that

$$[\hat{x}, \hat{H}] = \mathrm{i}\frac{\hbar}{m}\hat{p} \quad \text{and} \tag{2.45a}$$

$$[\hat{p}, \hat{H}] = -\mathrm{i}\hbar\, V'(\hat{x}). \tag{2.45b}$$

Do make use of Eq. (2.43) and the fact that any operator commutes with itself - to any power. The latter has the consequence that the position operator commutes with the potential energy part of the Hamiltonian. In proving Eq. (2.45a), the following commutator relation may also be useful:

$$[\hat{A}, \hat{B}^2] = \hat{B}[\hat{A}, \hat{B}] + [\hat{A}, \hat{B}]\hat{B}. \tag{2.46}$$

In proving Eq. (2.45b), you may want to introduce a test function f(x) - as in part (a).
(c) If we write the Schrodinger equation in Dirac notation, that is, with the wave function expressed as a ket vector, it reads iℏ|∂Ψ/∂t⟩ = Ĥ|Ψ⟩, where the ket |∂Ψ/∂t⟩ is the time derivative of the |Ψ⟩-ket. If we impose the Hermitian adjoint on the Schrodinger equation, we get the formal expression for the corresponding bra vector, see Eq. (1.31):

$$-\mathrm{i}\hbar\left\langle\frac{\partial \Psi}{\partial t}\right| = \left(\mathrm{i}\hbar\left|\frac{\partial \Psi}{\partial t}\right\rangle\right)^\dagger = \left(\hat{H}|\Psi\rangle\right)^\dagger = \langle\Psi|\hat{H}, \tag{2.47}$$

where we have used the fact that Ĥ is Hermitian and that the factors of a product change order under Hermitian adjungation, (AB)† = B†A†.
With these expressions, Eqs. (2.45) and the product rule, we can show that the time derivatives of the expectation values of position and momentum follow

$$\frac{\mathrm{d}}{\mathrm{d}t}\langle x \rangle = \frac{1}{m}\langle p \rangle \quad \text{and} \quad \frac{\mathrm{d}}{\mathrm{d}t}\langle p \rangle = -\langle V'(x) \rangle. \tag{2.48}$$

Do so.
Note that this coincides with Eq. (2.34) if we replace the classical position and momentum with the corresponding quantum physical expectation values.
(d) Analogously to what you just did, show that for a physical quantity A, with operator Â and no explicit time dependence, the time evolution of its expectation value follows

$$\frac{\mathrm{d}\langle A \rangle}{\mathrm{d}t} = \frac{\mathrm{i}}{\hbar}\left\langle [\hat{H}, \hat{A}] \right\rangle. \tag{2.49}$$

Suppose Â commutes with the Hamiltonian; what consequences would this have?
(e) Assume now that all eigenvalues a of some operator Â, see Eq. (2.40), are such that each of them only corresponds to one eigenvector |φ_a⟩; no two linearly independent

eigenvectors correspond to the same eigenvalue.¹⁰ Assume also that some other operator B̂ commutes with Â.
In this case, the eigenvectors of Â would also be eigenvectors of B̂. Why?

So, what physical insights can we take with us from these analytical efforts? First of all, we see that the Schrodinger equation actually reproduces Newton's second law if we interpret classical position and momentum as the corresponding quantum physical expectation values. This theorem, which may be generalized to higher dimensions and time-dependent Hamiltonians, is due to Paul Ehrenfest, whom you can see in the back row, third from the left, in Fig. 1.3.
Another lesson learned - from the more general and more abstract formulation of
Ehrenfest’s theorem in Eq. (2.49) - is that the expectation value of any quantity that
commutes with the Hamiltonian will remain unchanged. Actually, we have already seen
an example of this. In Exercise 2.3.2 our Hamiltonian did not feature any potential,
and, consequently, it commuted with the momentum operator. Thus, according to our
above abstractions, our Gaussian wave packet should evolve without any change in the
momentum expectation value - or standard deviation, for that matter. Hopefully, your
findings from Exercise 2.3.4 confirm this.
Another, less obvious consequence of Eq. (2.49) is that if your initial state happens to
be an eigenstate of some operator A that commutes with the Hamiltonian at all times,
it will also remain an eigenstate of A. In the context of stationary solutions, Eq. (2.39),
this means that A and H may have common eigenstates.

¹⁰ If the opposite were the case, we would say that the eigenvalue a is degenerate.
3 The Time-Independent Schrodinger Equation

At the end of the last chapter, we learned that, in order to find the possible stationary states of a quantum system, we must solve the equation

$$\hat{H}\psi = \varepsilon\psi. \tag{3.1}$$

As this is the eigenvalue equation of the energy operator, the Hamiltonian, these eigenvalues are the possible outcomes of an energy measurement.
This equation is known as the time-independent Schrodinger equation. It is hard to guess how much time and effort researchers have spent trying to solve it for various physical and chemical systems over the years. But it's a lot, really a lot.
Although we no longer have any time dependence in our solutions, solving Eq. (3.1) certainly may still be complicated enough. However, it is not so hard in the case of one particle in one dimension, in which it reads

$$\left[-\frac{\hbar^2}{2m}\frac{\mathrm{d}^2}{\mathrm{d}x^2} + V(x)\right]\psi(x) = \varepsilon\, \psi(x). \tag{3.2}$$

3.1 Quantization

Suppose the potential V is such that it is negative in some region and that it vanishes asymptotically, V(x) → 0 when |x| → ∞. In this case, a particle with positive energy, ε > 0, is able to escape, while a particle with negative energy, ε < 0, is not - it is confined by the potential. We say that it is bound.

3.1.1 Exercise: Bound States of a Rectangular Well

For the purely rectangular well, Eq. (2.29) with V₀ < 0, we may find the negative eigenenergies by almost purely analytical means. This can be done by following the recipe below. The setup is illustrated in Fig. 3.1.

1. Choose adequate numerical values for how deep, −V₀, and wide, w, the well should be. You will not need these values right away, however; treat these quantities algebraically for now.
2. Divide the domain into three regions: region I: x < −w/2, region II: |x| < w/2 and region III: x > w/2.


Figure 3.1 | The example under study in Exercise 3.1.1.

3. Write the general solutions of the time-independent Schrodinger equation, Eq. (3.2), in each of the three regions separately. Remember that the energy ε is supposed to be negative; we are interested in bound states. In this context it is convenient to introduce

$$\kappa = \frac{\sqrt{-2m\varepsilon}}{\hbar} \quad \text{in regions I and III,} \tag{3.3a}$$

$$k = \frac{\sqrt{2m(\varepsilon - V_0)}}{\hbar} \quad \text{in region II.} \tag{3.3b}$$

Do apply our usual units where ℏ = m = 1 if you wish. Note that both κ and k are real; we suggest that you write your expressions for the solution in terms of real, not complex, functions.
4. You should now have a function containing six undetermined parameters. Since the wave function must be finite and normalizable, the function must vanish as |x| → ∞. In order to achieve this, two of these parameters must be set to zero right away. Moreover, it can be shown that, since V(x) is symmetric, the wave function has to be either even, ψ(−x) = ψ(x), or odd, ψ(−x) = −ψ(x).¹ We will exploit this; let's choose an even solution for now. This will reduce your number of parameters to two.
5. Impose the restrictions that both ψ(x) and its derivative, ψ′(x), are to be continuous at x = w/2. With a wave function that is either even or odd, these restrictions will also be fulfilled at x = −w/2. This will provide you with a homogeneous set of two linear equations in two unknowns. Let's write this system as a matrix equation.
Now, here come the big questions:
Why must we insist that the determinant of its coefficient matrix is zero?
And how does this impose restrictions on the energy ε?
6. Make a plot of the determinant from point 5 as a function of ε and estimate which eigenenergies ε_n are admissible; a sketch of how this may be done follows after this recipe.
7. Repeat from point 4 with an anti-symmetric wave function.

¹ This is a manifestation of what we saw in the last question in Exercise 2.6.2; since the symmetry operator, which changes x into −x, commutes with our Hamiltonian, they have common eigenstates.
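For point 6, one could, as a sketch, plot the determinant for the even solutions as follows; the particular way of writing the 2×2 system, and hence the determinant, is our own, with ℏ = m = 1.

```python
import numpy as np
import matplotlib.pyplot as plt

V0, w = -3.0, 4.5      # example depth and width

def det_even(E):
    """Determinant of the coefficient matrix obtained by matching an
    even solution B*cos(kx) to C*exp(-kappa*x) at x = w/2."""
    kappa = np.sqrt(-2 * E)          # Eq. (3.3a)
    k = np.sqrt(2 * (E - V0))        # Eq. (3.3b)
    return np.exp(-kappa * w / 2) * \
           (kappa * np.cos(k * w / 2) - k * np.sin(k * w / 2))

E = np.linspace(V0 + 1e-6, -1e-6, 1000)
plt.plot(E, det_even(E))
plt.axhline(0, color='black', linewidth=0.5)
plt.xlabel('energy')
plt.ylabel('determinant')
plt.show()   # the zero crossings are the admissible even eigenenergies
```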

Perhaps this has been a bit of a laborious exercise - and perhaps it will appear even more so after having done the next one. However, it may very well turn out to be worthwhile. We just experienced that the requirement that any energy measurement on our bound quantum particle must render an energy eigenvalue, in turn, reduces the possible outcomes to just a few very specific ones. The notion that the time-independent Schrodinger equation in fact reduces the accessible energies for a bound particle to a discrete set is neither technically nor conceptually intuitive.
What we just saw will happen to any quantum system that is bound. Its energy can only assume one out of a discrete set of values. The energy is quantized. This is yet another quantum trait that has no classical analogue.²
And this is why we call it quantum physics.
Since the time-independent Schrodinger equation, Eq. (3.1), is, in fact, an eigenvalue equation, we could also solve it numerically by applying standard numerical tools.

3.1.2 Exercise: Solution by Direct Numerical Diagonalization

(a) Use your implementation from Chapter 2 to construct a numerical Hamiltonian for the same system as in the previous exercise. Use the potential of either Eq. (2.29) or Eq. (2.30) with rather sharp edges - a high s value, that is. Use some numerical implementation to find the eigenenergies and eigenvectors of this Hamiltonian; a sketch is given below. How do the negative eigenenergies compare with the ones you estimated in Exercise 3.1.1?
(b) Plot the wave functions, that is, the eigenvectors you found in (a).³ Is there a pattern as you increase the energy? Do you see any parallels with the quantum guitar of Section 2.6 here?
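A sketch of such a direct diagonalization, reusing the FFT matrix from Exercise 2.2.2 and the smooth potential of Eq. (2.30); the parameter values are those of Fig. 3.2, and ℏ = m = 1.

```python
import numpy as np

L, N = 30.0, 512
V0, w, s = -3.0, 4.5, 5.0
x = np.linspace(-L/2, L/2, N)
h = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=h)

# FFT representation of the kinetic energy (Exercise 2.2.2)
T = np.fft.ifft(0.5 * k[:, None]**2 *
                np.fft.fft(np.identity(N), axis=0), axis=0)
V = np.diag(V0 / (np.exp(s * (np.abs(x) - w/2)) + 1))

E, U = np.linalg.eigh(T + V)     # eigenvalues in ascending order
print(E[E < 0])                  # the bound-state energies
# The eigenvectors are the columns of U; divide by sqrt(h) to
# normalize them in the sense of Eq. (2.10).
```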

For the bound states, you will find that the wave function has an increasing number of nodes as you increase the energy towards zero - just like the standing waves on a guitar with increasing frequency. This is illustrated in Fig. 3.2.
Now, what about situations where the system isn't bound? Well, the energy is still supposed to be an eigenvalue of the Hamiltonian, but this time it may assume any value - as long as it corresponds to being unbound. In general, the set of eigenvalues, the spectrum, of a Hamiltonian will consist of both discrete values - corresponding to bound states, and continuous values - corresponding to unbound states. Of course, in special cases, it could be exclusively discrete, as would be the case when V(x) → ∞ as |x| → ∞, or the spectrum may be a pure continuum, as is the case when there is no potential at all - as in Exercise 2.3.2, and when the potential is too narrow or shallow to support bound states.
² Allow us to propose yet another silly illustration of the absurdity of quantum physics: the notion that all but a countable set of energies are prohibited for our bound particle is as reasonable as an official approaching Usain Bolt after the 100 m final at the 2009 athletics championship in Berlin and telling him, 'We regret to inform you that 9.58 s is not an admissible time; the time must be an integer.'
³ In order to make sure that everything is right, you could compare your analytical solutions from Exercise 3.1.1 with your numerical ones; for a more or less rectangular potential, they should not differ by more than an overall normalization factor.

Figure 3.2 | The four bound states of the smoothly rectangular potential of Eq. (2.30) with depth given by V₀ = −3, width w = 4.5 and smoothness s = 5. The thick curve is the confining potential, and the dashed horizontal lines are the eigenenergies of the bound states. Note that the number of nodes increases with the energy, and that, correspondingly, the wave functions, which are all real, are alternating even and odd functions.

For the case of the confined electron in a hydrogen atom, the allowed energies follow a particularly simple pattern:

$$E_n = -\frac{B}{n^2}, \tag{3.4}$$

where the constant B = 2.179 × 10⁻¹⁸ J, and n, the so-called principal quantum number, must be a positive integer. This is the famous Bohr formula. With n = 1, the energy you arrive at should coincide with what you found in Exercise 1.6.4. The set of allowed energies in Eq. (3.4) is predicted by the time-independent Schrodinger equation, in three dimensions, with the Coulomb potential:

$$V(r) = -\frac{e^2}{4\pi\varepsilon_0}\frac{1}{r}. \tag{3.5}$$

We will not derive the Bohr formula here, but you can find its derivation in virtually any other textbook on quantum physics.
The validity of Bohr’s formula was well known from experiment at the time
Schrodinger published his famous equation. It bears Niels Bohr’s name because he
had earlier arrived at this expression based on a model for the hydrogen atom which,
as mentioned, is long since abandoned. He was right about two things, though.

1. The energy of an atom is indeed quantized.


2. The atomic system may ‘jump’ from a state of lower energy to one of higher energy
by absorbing a photon with energy corresponding to the energy difference, see
Eq. (1.4). Correspondingly, it may ‘jump’ to a lower lying state by emitting the excess
energy in the form of a photon.

3.1.3 Exercise: The Bohr Formula

In Exercise 1.3.2 we cheated and played around with a three-dimensional hydrogen wave function as if it were one-dimensional. If we, once again, allow ourselves to take the same shortcut, we should be able to arrive at the same bound-state energies as in Eq. (3.4) rather easily. To this end, construct a numerical approximation of the Hamiltonian (1.35). Let r extend from 0 to some upper limit L. It is absolutely crucial that your implementation ensures that

$$\psi(0) = 0. \tag{3.6}$$

Achieving this with your FFT version of the kinetic energy is not so straightforward. Instead you could use the finite difference formula of Eq. (2.11a). Since V(r) diverges as r → 0, r = 0 should be omitted in your grid - as in Exercise 1.6.4. This does not prevent you from imposing the boundary condition of Eq. (3.6) in your finite difference representation of the kinetic energy. With your first grid point being r₁ = h, with h being the separation between neighbouring grid points, you impose the restriction ψ(0) = 0 when you set

$$\frac{\mathrm{d}^2\psi(r_1)}{\mathrm{d}r^2} \approx \frac{0 - 2\psi(r_1) + \psi(r_2)}{h^2} \tag{3.7}$$

in your code; a sketch follows after this exercise.
This time, contrary to Exercise 1.6.4, allow yourself to use atomic units exclusively, in which case the constant B in Eq. (3.4) is 1/2.
To what extent do the eigenenergies you obtain by numerical means coincide with the Bohr formula?
Do you see any relation between the number n, the energy quantum number, and the shape of the wave function? Does this compare to our silly quantum guitar?
For which n do your numerical eigenenergies start deviating from the exact ones? Is this onset manifested in the wave function somehow?⁴
How can you improve on the coincidence between the numerical and the exact eigenenergies for high n values?
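A sketch in atomic units, with the boundary condition of Eq. (3.7) built into a three-point representation of the kinetic energy; the grid parameters are our own and should, as always, be checked for convergence.

```python
import numpy as np

# Radial grid, omitting r = 0 (atomic units)
L, N = 200.0, 2000
h = L / N
r = h * np.arange(1, N + 1)

# Kinetic energy, Eq. (2.11a), with psi(0) = 0 imposed as in Eq. (3.7),
# plus the Coulomb attraction -1/r on the diagonal
H = (np.diag(np.ones(N) / h**2) +
     np.diag(-0.5 * np.ones(N - 1) / h**2, 1) +
     np.diag(-0.5 * np.ones(N - 1) / h**2, -1) +
     np.diag(-1 / r))

E = np.linalg.eigvalsh(H)
print(E[:5])                          # numerical eigenenergies
print(-0.5 / np.arange(1, 6)**2)      # the Bohr formula, Eq. (3.4)
```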

So, the energy of the hydrogen atom, or, strictly speaking, the energy of the electron
stuck to a proton in a hydrogen atom, is quantized. This holds for any atom - also
atoms with several electrons. However, the discretized energy levels are not nearly as
simple as Eq. (3.4) in any other case.
It’s worth repeating that the measurement of any physical observable, not just the
energy, will produce an eigenvalue of the corresponding operator as a result. Corres­
pondingly, energy is not the only quantity that may be quantized. Take for instance
the angular momentum operator in three dimensions:

I = r x p = -i/lr x V. (3.8)

4 With the question being posed in this way: of course it is.



If we express it - or, more conveniently, its square - in spherical coordinates, we will find that its eigenfunctions⁵ only make sense physically if the eigenvalues of l̂² are ℓ(ℓ + 1)ℏ², where ℓ = 0, 1, 2, ....
Yet another example of a quantized quantity is charge. As was demonstrated in the oil drop experiment by Robert A. Millikan and Harvey Fletcher in 1909, charge is only seen in nature in integer multiples of the elementary charge, q = ne.
We have seen that the electronic energy levels of a hydrogen atom follow a particularly simple pattern, Eq. (3.4). The same may be said about the energy of the harmonic oscillator.

3.1.4 Exercise: The Harmonic Oscillator

The potential

$$V(x) = \frac{1}{2} k x^2, \tag{3.9}$$

where k is some real constant, is called the harmonic oscillator potential. It appears in classical physics when you want to describe a mass attached to a spring, in which case k is the stiffness of the spring. In such a situation, the object in question will, according to Newton's second law, oscillate back and forth with the angular frequency

$$\omega = \sqrt{k/m}. \tag{3.10}$$

In quantum physics, this potential is also given much attention - for several reasons. Many of these, in turn, are related to the fact that the eigenenergies of the Hamiltonian with this potential are such that the differences between neighbouring energies are always the same; they are equidistant:

$$E_n = (n + 1/2)\hbar\omega, \tag{3.11}$$

where n is a non-negative integer. Note that it only has discrete, positive eigenenergies, no continuum - and that the lowest eigenenergy is not zero.
In the following we will set m = 1, as usual, and also k = 1.
Implement the Hamiltonian with the potential in Eq. (3.9) numerically and find its eigenvalues and eigenstates; a sketch is given after this exercise. Choose a domain extending from x = −L/2 to x = L/2 and a number of grid points n large enough to reproduce the exact energies, Eq. (3.11), for the first 100 energies or so. Your FFT version of the kinetic energy should be perfectly adequate here.
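A sketch of such an implementation, again with the FFT version of the kinetic energy; the domain and grid size below are our own guesses at values large enough for the first 100 energies.

```python
import numpy as np

L, N = 40.0, 1024
x = np.linspace(-L/2, L/2, N)
h = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=h)

T = np.fft.ifft(0.5 * k[:, None]**2 *
                np.fft.fft(np.identity(N), axis=0), axis=0)
H = T + np.diag(0.5 * x**2)          # Eq. (3.9) with k = 1

E = np.linalg.eigvalsh(H)
# Largest deviation from E_n = n + 1/2, Eq. (3.11), among the first 100
print(np.max(np.abs(E[:100] - (np.arange(100) + 0.5))))
```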

The harmonic oscillator potential is in fact a very popular one in textbooks as it lends itself particularly well to determining the eigenvalues of the Hamiltonian by analytical means. Its relevance is, however, certainly not limited to that.
In the following we will learn that certain energies can be prohibited also in the absence of confinement.
⁵ These eigenfunctions are called spherical harmonics; they appear frequently in quantum physics.

Figure 3.3 | A salt crystal. The potential experienced by an electron, set up by the ions in fixed positions, is periodic - in several directions.

3.2 A Glimpse at Periodic Potentials

The case of periodic potentials,

$$V(\mathbf{r}) = V(\mathbf{r} + \mathbf{a}_i), \tag{3.12}$$

merits particular attention. Here aᵢ is one out of, possibly, several vectors in which V(r) is periodic. A particle being moved by the vector aᵢ - or an integer times aᵢ - will experience exactly the same potential as before this displacement. This would, for instance, be the situation that an electron within an ionic lattice would experience (see Fig. 3.3). Here the points of the lattice would be atoms, ions or molecules that are fixed in space.
Indeed, an electron exposed to a periodic potential is a rather common situation. Solid matter is, in fact, built up of such regular patterns. We can learn very general - and relevant - lessons from studying the case of periodic potentials.
Here we will consider a single particle in one dimension exposed to a periodic Gaussian potential.

3.2.1 Exercise: Eigenenergies for Periodic Potentials — Bands

We will investigate the eigenvalues for a particle in this periodic potential:

$$V(x) = -V_0\, \mathrm{e}^{-x^2/(2\sigma^2)} \quad \text{for } x \in [-a/2, a/2], \quad \text{with } V(x + a) = V(x). \tag{3.13}$$

It is illustrated in Fig. 3.4.

(a) With a periodic potential, the physics must reflect the periodicity. Specifically, the solutions of the time-independent Schrodinger equation must fulfil

Figure 3.4 | Plot of the periodic potential of Eq. (3.13) for V₀ = 1, σ = 0.1 and a = 10. Although we have only plotted three periods here, it extends indefinitely.

|ψ(x)|² = |ψ(x + a)|²; the position probability distribution must be periodic in the same manner. Explain why this leads to this requirement:

$$\psi(x) = \mathrm{e}^{\mathrm{i}\varphi(x)}\, u(x), \tag{3.14}$$

where u(x) is periodic with period a and φ(x) is a real function.
Actually, it does not really make sense that the wave function would change its dependence on x if we shift x by a period, x → x + a. The only way to avoid this is to insist that φ(x) is a linear function in x:

$$\varphi(x) = \kappa x + b. \tag{3.15}$$

In this case, shifting x by a would only amount to a global phase factor, which is immaterial, as we learned in Exercise 1.6.3 (c). For the same reason, the factor b is irrelevant, and we arrive at

$$\psi(x) = \mathrm{e}^{\mathrm{i}\kappa x}\, u(x), \quad u(x + a) = u(x). \tag{3.16}$$

Note that this is not something we have proved properly; we have only tried to justify why it should make sense.⁶
(b) With Eq. (3.16), explain how the time-independent Schrodinger equation, Eq. (3.2), may be rewritten in terms of u(x;κ) as

$$\left[-\frac{\hbar^2}{2m}\left(\frac{\mathrm{d}}{\mathrm{d}x} + \mathrm{i}\kappa\right)^2 + V(x)\right] u(x;\kappa) = \varepsilon(\kappa)\, u(x;\kappa), \tag{3.17}$$

where ε and u depend parametrically on κ.

(c) Now, solve Eq. (3.17) numerically by constructing a numerical Hamiltonian and
diagonalizing it - as in Exercise 3.1.2. This differs, however, from what we did in
that exercise in two ways:

6 A formal proof could follow these lines: (1) Introduce the displacement or translation operator $\hat D(d)$,
which shifts the argument of a function: $\hat D(d)\psi(x) = \psi(x + d)$. (2) Since $\hat D(a)$ does not affect the
Hamiltonian, the eigenfunctions of $\hat H$ are also eigenfunctions of $\hat D(a)$: $\hat D(a)\psi(x) = \lambda\psi(x)$; see the discussion following Exercise 2.6.2. (3) With $\hat D(a)$ acting on $\psi(x)$ written as in Eq. (3.14), we must also have
$\hat D(a)\psi(x) = \exp[i\varphi(x + a)]\, u(x)$. (4) For these two expressions for $\hat D(a)\psi$ to be consistent, we must insist that
$\lambda \exp[i\varphi(x)] = \exp[i\varphi(x + a)]$ for all x. This can only hold if Eq. (3.15) holds.

(1) this time, quantization comes about by the fact that u(x) is periodic,
and
(2) the eigenenergies depend on the parameter κ.
This means that Eq. (3.17) must be solved several times - for various values of κ.
Find a few of the lowest eigenvalues $\varepsilon_n(\kappa)$ numerically and plot them.
You can use the parameters given in Fig. 3.4.
Due to the periodicity in the wave function, the energies $\varepsilon_n(\kappa)$ are also periodic - with period $2\pi/a$. So it is sufficient to calculate for $\kappa \in [-\pi/a, \pi/a]$.
The requirement that the eigenstates $u_n(x;\kappa)$ are periodic in x invites you to set the
box size L equal to a and impose periodic boundary conditions.
Hint: $(\mathrm{d}/\mathrm{d}x + i\kappa)^2 u(x) = \mathcal{F}^{-1}\left\{ (ik + i\kappa)^2\, \mathcal{F}\{u\}(k) \right\}(x)$.
(d) Each of the eigenenergies $\varepsilon_n(\kappa)$ occupies a specific interval of the energy axis if
you include all possible values of κ. For each of the energies you plotted in (c),
determine, approximately, the corresponding energy interval.
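Here is a minimal sketch of how part (c) might be implemented in Python. It assumes ħ = m = 1 and the parameters of Fig. 3.4; the grid size and the number of κ values are arbitrary choices:

import numpy as np

# Sketch for Exercise 3.2.1(c): bands of the periodic Gaussian potential.
V0, sigma, a = 1.0, 0.1, 10.0            # parameters from Fig. 3.4
n = 128                                   # grid points within one period
x = np.linspace(-a/2, a/2, n, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(n, d=x[1] - x[0])
F = np.fft.fft(np.eye(n), axis=0)
Finv = np.fft.ifft(np.eye(n), axis=0)
V = np.diag(-V0*np.exp(-x**2/(2*sigma**2)))

kappas = np.linspace(-np.pi/a, np.pi/a, 51)
bands = np.empty((len(kappas), 5))
for j, kappa in enumerate(kappas):
    # the hint: (d/dx + i*kappa)^2 is diagonal in Fourier space
    T = Finv @ np.diag(0.5*(k + kappa)**2) @ F
    bands[j] = np.linalg.eigvalsh(T + V)[:5]
# each column of 'bands' traces out one dispersion relation eps_n(kappa)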

The result that the eigenstates of the Hamiltonian of a periodic potential have the
form of Eq. (3.16), which also extends to higher dimensions, is known as Bloch's theorem. The Swiss, later also US, physicist Felix Bloch arrived at the result in 1929. This
theorem is far from the only important result in physics that carries his name.
In Fig. 3.5 we display the first five eigenvalues of Eq. (3.17) with the parameters listed
in Fig. 3.4 as functions of κ. Such relations between an energy ε and a wave number κ
are called dispersion relations. We see that each eigenvalue, through its κ-dependence,
defines a range on the energy axis. In this context, such a range is called a band. In a
one-dimensional system like this one, such bands do not overlap - although they could
come close.
Between the bands there are gaps. For instance, in our example, you will never find a
particle with energy between −0.01 and 0.05 energy units, nor are any energies between
0.15 and 0.2, 0.40 and 0.44, or 0.74 and 0.79 available to our particle (see Fig. 3.5). In
other words, the mere fact that the potential is periodic causes certain energy intervals
to be inaccessible.
Several of these aspects carry over to the more realistic situation in which we have
several particles in three dimensions. In three dimensions, the structure is more com­
plex, and bands may, contrary to the one-dimensional case, overlap. But the fact that
there are bands and gaps prevails in any system featuring periodicity. In combination
with the Pauli principle, which we will introduce in the next chapter, this has very prac­
tical consequences. Whether or not a solid may easily transmit heat or carry electrical
current is very much dependent on whether the last occupied band is completely filled
or not. Being excited to a higher energy would allow an electron to carry current, to be
conducting. This does not come about so easily if such an excitation requires jumping a
large energy gap to reach the next band, but it is quite feasible if it can be done within
the band.
This is a very superficial outline of how conductance properties of solid matter
emerge.

Figure 3.5 The first five eigenvalues of Eq. (3.17) for the periodic potential displayed in Fig. 3.4, as functions of κ. The right panel illustrates the corresponding energy bands and gaps.

3.3 The Spectral Theorem

There is an important result from linear algebra that is used very frequently in quantum physics. Actually, we have already used it - in Eq. (2.22). If we, for notational
simplicity, assume that the set of eigenvalues of the Hamiltonian is entirely discrete/countable, we may write the eigenvalue equation for the energy, that is, the
time-independent Schrödinger equation, as

$\hat H \psi_n = \varepsilon_n \psi_n$, (3.18)

with n being an integer. Since the Hamiltonian is Hermitian, $\hat H = \hat H^\dagger$, its eigenvectors/functions form an orthogonal basis for the space in which the wave function
lives. If we also require the eigenvectors/functions to be normalized under the inner
product (1.24), we have orthonormality:

$\langle \psi_m | \psi_n \rangle = \delta_{m,n}$, (3.19)

where the Kronecker delta $\delta_{m,n}$ is 1 when m = n and zero otherwise.
The fact that the eigenvectors form a basis means that any admissible wave function
may be written as a linear combination of these:

$\Psi = \sum_n a_n \psi_n$, (3.20)

where the coefficients $a_n$ are uniquely determined.

Although this chapter is dedicated to the time-independent Schrödinger equation,
we will briefly address time dependence in this context too. We have seen that eigenstates of a time-independent Hamiltonian are stationary in the sense that the wave
function does not change in time - except for a trivial phase factor, see Eq. (2.38).
However, suppose our initial state is not such an eigenstate. In that case, these phase
factors may become rather important.

3.3.1 Exercise: Trivial Time Evolution

Suppose that the normalized set of eigenfunctions of the Hamiltonian, $\psi_1, \psi_2, \ldots$, is
known and that your initial state is $\Psi(t = 0) = \sum_n a_n \psi_n$.

(a) Why must we insist that

$\sum_n |a_n|^2 = 1$? (3.21)

(b) Explain why

$a_n = \langle \psi_n | \Psi(t = 0) \rangle$. (3.22)

(c) Explain why

$\Psi(t) = \sum_n a_n\, e^{-i\varepsilon_n t/\hbar}\, \psi_n$. (3.23)
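As a minimal sketch, assuming that H is a Hermitian matrix representation of the Hamiltonian and Psi0 the initial state on the same grid (with ħ = 1), Eq. (3.23) could be implemented like this; since the eigenvectors are normalized as plain vectors, the grid spacing cancels out of the coefficients:

import numpy as np

# Sketch: time evolution by the spectral method, Eq. (3.23), hbar = 1.
def evolve(H, Psi0, t):
    eps, U = np.linalg.eigh(H)          # eigenvalues and eigenvectors
    a = U.conj().T @ Psi0               # coefficients, Eq. (3.22)
    return U @ (np.exp(-1j*eps*t)*a)    # Eq. (3.23)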

Let’s apply this knowledge to determine the evolution of a specific quantum system.

3.3.2 Exercise: Glauber States

We return to the harmonic oscillator of Exercise 3.1.4. This time we will investigate
how the wave function for such a system moves around when we start out with a state
that is not an eigenstate. One particular way of constructing such linear combinations
is rather interesting.

(a) Construct some linear combination of the eigenstates:

$\Psi(x, t = 0) = \sum_{n=0}^{n_{\max}} a_n \psi_n(x)$, (3.24)

that is, just pick some values $a_n$ for a bunch of n values. Then, simulate the corresponding evolution of the modulus squared of your wave function as dictated by
Eq. (3.23).

(b) Linear combinations of this particular form are called coherent states or Glauber
states:7

$\Psi_{\mathrm{Glauber}} = e^{-|\alpha|^2/2} \sum_{n=0}^{\infty} \frac{\alpha^n}{\sqrt{n!}}\, \psi_n$. (3.25)

Here, α is some complex number.


Now repeat exercise (a) with this particular linear combination for several
choices of α.
Three things are crucial for the numerical version to work:
(1) In a finite space, such as our numerical representation, we cannot sum up
to infinity. Moreover, we may get into trouble numerically if we use all our
numerical eigenstates. We suggest that you use the comparison between your
numerical eigenvalues and the exact ones, Eq. (3.11), to determine a reasonable truncation in your numerical version of Eq. (3.25); by that we mean that
we set $a_n$ to zero for all n beyond some maximum number. Since the coefficients fall off quite rapidly due to the factorial in the denominator, this should
be quite admissible.
(2) The eigenfunctions you get from the numerical calculation are hardly normalized
according to Eq. (2.10). You probably need to impose the factor $1/\sqrt{h}$ to ensure
proper normalization.
(3) As long as the eigenstates are constructed by a numerical implementation of
diagonalization, we do not control the overall sign or phase of the eigenvectors.
However, in order to construct the coherent state, we need to control the phase
convention of the $\psi_n$. This may be achieved by insisting that $\psi_n(x)$ is real and
positive for small, positive values of x. After your numerical diagonalization,
you could run through your eigenstates, check and flip the sign where necessary.
When you run your simulation with the particular linear combination of
Eq. (3.25), do you notice anything special? Try playing around with other values of
α, including complex and purely imaginary ones. Use a finer grid and/or a larger
domain if you have to.
(c) It is worthwhile to simulate the solution for a classical particle along with the quantum mechanical one, as in Exercise 2.4.2. In doing so, you let the real and imaginary
parts of α, respectively, fix the initial conditions for position and velocity:

$x(t = 0) = \sqrt{\frac{2\hbar}{m\omega}}\, \operatorname{Re}(\alpha) \quad \text{and} \quad v(t = 0) = \sqrt{\frac{2\hbar\omega}{m}}\, \operatorname{Im}(\alpha)$. (3.26)
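The following sketch puts the pieces of (b) together. The truncation, the grid parameters and the simple finite-difference Hamiltonian are our own choices, with ħ = m = 1 and a potential x²/2 assumed:

import numpy as np
from scipy.special import factorial

# Sketch for Exercise 3.3.2: a truncated Glauber state, Eq. (3.25),
# for the harmonic oscillator (hbar = m = 1, V(x) = x^2/2 assumed).
n_grid, L = 512, 20.0
x = np.linspace(-L/2, L/2, n_grid)
h = x[1] - x[0]
T = (2*np.eye(n_grid) - np.eye(n_grid, k=1) - np.eye(n_grid, k=-1))/(2*h**2)
eps, U = np.linalg.eigh(T + np.diag(0.5*x**2))
psis = U/np.sqrt(h)                   # normalize as in Eq. (2.10)
i0 = np.argmax(x > 0)                 # first grid point right of x = 0
psis[:, psis[i0, :] < 0] *= -1        # fix the overall signs

alpha, n_max = 2.0, 40
ns = np.arange(n_max + 1)
a = np.exp(-abs(alpha)**2/2)*alpha**ns/np.sqrt(factorial(ns))
Psi0 = psis[:, :n_max + 1] @ a        # the truncated Eq. (3.25)

This initial state can then be propagated with the spectral evolve function sketched after Exercise 3.3.1.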

The potential of Eq. (3.9) is such that it supports bound, discrete states only. As men­
tioned, the spectrum would feature both a discrete and a continuous part in the general
case. Still, the notation in terms of a countable set, Eq. (3.20), is always adequate when

7 Roy J. Glauber was a theoretical physicist from the USA. He made significant contributions to our under­
standing of light. For this he was awarded the 2005 Nobel Prize in Physics. His coherent states, Eq. (3.25),
certainly had something to do with that.

we approximate the wave function in a discretized manner,8 Eq. (2.9). In this case the
wave function is just an array in $\mathbb{C}^{n+1}$ and all operators are square, Hermitian matrices. Such a description does not support any actual continuous set of eigenstates, a
continuum. We have to settle for a so-called pseudo-continuum. The proper description
in terms of an infinite-dimensional space with a basis that is not countable is not
compatible with purely numerical representations.
We will not enter into these complications here, save to say that we may often 'get
away' with considering the real physics as the limiting case where n and b − a in
Eq. (2.9) both become large. As mentioned, convergence in these numerical parameters
is absolutely necessary for calculations to be trustworthy.

3.4 Finding the Ground State

Although in principle a quantum system could be found to have any of its admissible
energies - or, as we say, be in any admissible state - systems tend to prefer the state with
the lowest energy, $\varepsilon_0$: the ground state. In a somewhat superficial manner we may
say that this is due to nature’s inherent affinity for minimizing energy. A slightly less
superficial account: eigenstates are indeed stationary and a system starting out in an
eigenstate will remain in that state - unless it is exposed to some disturbance. However,
avoiding disturbances, such as the interaction with other atoms or some background
electromagnetic radiation, is virtually impossible. Due to such perturbations, the eigen­
states are not really eigenstates in the absolute sense. While they would be eigenstates
of the Hamiltonian of the isolated quantum system, they are not eigenstates of the full
Hamiltonian, the one that would take the surroundings into account. If a system starts
out in a state of higher energy, an excited state, such small interactions would usually
cause the system to relax into states with lower energy, for instance by emission of a
photon.
It is thus fair to say that the ground state of a quantum system is more important than
the other stationary states, the excited states. Calculating ground states is crucial for
understanding the properties of the elements and how they form molecules. Suppose,
for instance, that the lowest energy of a two-atom system is lower when the atoms are
separated just a little bit than if they are far apart; the system has lower energy if the
atoms are together than if they are not. This means that these atoms would form a
stable molecule.
It wasn't until people started solving the time-independent Schrödinger equation
and calculating ground states that the periodic table started to make sense. There was
no doubt that Dimitri Mendeleev’s arrangement of the elements was a meaningful one,
but it took some time to learn why (see Fig. 3.6).
The spectral theorem provides two very powerful methods for estimating the wave
function and the energy of the ground state.

8 Provided that the numerical grid is large and dense enough, that is.

Figure 3.6 The original periodic table that Dimitri Mendeleev published in 1869. It has evolved a bit since then.

3.4.1 Exercise: The Variational Principle and Imaginary Time

(a) Suppose we start out with some arbitrary admissible, normalized wave function.
Why can we, by exploiting Eq. (3.20), be absolutely sure that the expectation value
of its energy cannot be lower than the ground state energy?
How can this be exploited in order to establish an upper bound for the ground
state energy?
(b) Again, suppose that we start out with some arbitrary wave function. Now we
replace the time t in the time-dependent Schrödinger equation with the imaginary time −it. This corresponds to the replacement $\Delta t \to -i\Delta t$ in Eq. (2.27). This,
in turn, leads to a 'time evolution' in which the norm of the wave function is not
conserved. To compensate for that, we also renormalize our wave function at each
time step when we resolve the 'time evolution' in small steps of length $\Delta t$. In other
words: at each step we multiply our wave function by a factor so that its norm
becomes 1 again.
Why does this numerical scheme turn virtually any initial state into the ground
state - at an exponential rate in the 'time' variable?
How can the change in norm at each time step be used in order to estimate the
ground state energy?
Hint: Assume that your wave function, after some 'time' τ, is already fairly close to
the ground state, so that $\hat H \Psi \approx \varepsilon_0 \Psi$.

Note that when we make the replacement t → −it, this effectively turns the
(time-dependent) Schrödinger equation into a sort of diffusion equation. This equation, however, does not describe any actual dynamics; it is merely a numerical trick
used to determine ground state energies.

Both methods alluded to here are frequently used within quantum physics and
quantum chemistry - maybe the approach of part (a) more than the imaginary-time
approach of part (b). The principle that the ground state energy is a lower bound to the
energy expectation value of any admissible wave function,

$\langle \Psi | \hat H | \Psi \rangle \geq \varepsilon_0$, (3.27)

is called the variational principle. We can exploit it by constructing some initial guess
for the ground state wave function, a trial function, with tunable parameters. The closer
this initial guess is to the actual ground state wave function, the better. And
the more parameters you have to 'tweak', the more flexibility you have to achieve a
good estimate of the ground state energy. The lower the energy you get, the closer you
get to the ground state energy. On the other hand, the more parameters you have, the
more cumbersome is the optimization you need to run. Therefore, it is crucial to choose
your trial functions as cleverly as possible.
The Norwegian Egil Hylleraas (Fig. 3.7) was particularly clever in this regard. He
was one of the pioneers when it comes to variational calculations. In fact, he was
headhunted, so to speak, by Max Born to show that the time-independent Schrödinger
equation indeed produced the right ground state energy for the helium atom, not just
the hydrogen atom. At a time when quantum physics was not yet established as the
proper framework for atomic and molecular physics, it was crucial to settle this matter.
Hylleraas succeeded in doing so. The fact that he did so before researchers had access
to modern-day computer facilities only serves to render his achievements even more
impressive.

Figure 3.7 Norwegian physicist Egil Hylleraas. At a time when the time-independent Schrödinger equation was known to explain the structure of the hydrogen atom, Hylleraas succeeded in proving that it is also able to explain the much more complicated helium atom. This was a very significant contribution at the time. His methods are still quite relevant within quantum chemistry.
Now, let’s do some variational calculations ourselves - with a computer:

3.4.2 Exercise: One-Variable Minimization

Let your potential be provided by Eq. (2.30) with $V_0 = -3$, $s = 5$ and $w = 8$. Also,
take your trial wave function to be a normalized Gaussian centred at the origin:

$\psi_{\mathrm{trial}}(x) = \frac{1}{\sqrt[4]{2\pi\sigma^2}} \exp\left[-\frac{x^2}{4\sigma^2}\right]$. (3.28)

Now, we calculate the energy expectation value,

$E(\sigma) = \langle \psi_{\mathrm{trial}} | \hat H | \psi_{\mathrm{trial}} \rangle$, (3.29)

for various values of σ. Do this numerically using the same implementation of the
Hamiltonian as before and plot the expectation value of the energy as a function of σ.
Try to read off its minimal value - and the corresponding σ.
In this case, actually obtaining the 'correct' ground state energy is easy by means
of direct diagonalization, which we did in Exercise 3.1.2. Do this again with the
parameters at hand and compare the actual ground state energy to the upper bound
you just found. Also, compare the numerical ground state with your trial function,
Eq. (3.28), with the σ that minimizes $E(\sigma)$.
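A minimal sketch of the energy scan, assuming that a Hamiltonian matrix H on a grid x with spacing h has been set up as in earlier exercises; these three names are assumptions:

import numpy as np

# Sketch for Exercise 3.4.2: scan the energy expectation value E(sigma)
# of the Gaussian trial function, Eq. (3.28). The Hamiltonian matrix H,
# the grid x and the grid spacing h are assumed to exist already.
def energy(sigma):
    psi = (2*np.pi*sigma**2)**(-0.25)*np.exp(-x**2/(4*sigma**2))
    return (h*psi @ (H @ psi)).real       # <psi|H|psi>, Eq. (3.29)

sigmas = np.linspace(0.3, 5.0, 200)
E = [energy(s) for s in sigmas]
# plot E against sigmas and read off the minimum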

Hopefully, you found that both the energy and the wave function that minimized the
energy were reasonably close to the correct ones. Hopefully, you will also find that you
can do better with the method of imaginary time.

3.4.3 Exercise: Imaginary Time Propagation

For the same potential as in Exercise 3.4.2, implement the method of Exercise 3.4.1(b),
in which an arbitrary initial state is propagated in imaginary time and the wave function
is renormalized at each time step:

$\Psi_{n+1} \to \Psi_{n+1} / \sqrt{\langle \Psi_{n+1} | \Psi_{n+1} \rangle}$. (3.30)

Choose a reasonably small 'time step', $\Delta t = 0.05$ time units, perhaps, and plot the
renormalized, updated wave function at the first few steps. Also, write the estimated
energy to screen for each iteration.

How does this implementation compare to the variational principle implementation above? Were you able to reproduce the very same ground state and ground state
energy as you found by direct diagonalization? Did that happen with the variational
calculation in Exercise 3.4.2?
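A minimal sketch, again assuming a Hamiltonian matrix H, grid x and spacing h from before (ħ = 1), using simple forward Euler steps with renormalization:

import numpy as np

# Sketch for Exercise 3.4.3: imaginary time propagation. H, x and h
# are assumed to exist already; Delta t -> -i Delta t as in 3.4.1(b).
dt = 0.05
psi = np.exp(-x**2)                       # arbitrary initial guess
psi = psi/np.sqrt(h*psi @ psi)
for step in range(2000):
    psi_new = psi - dt*(H @ psi)          # one imaginary time step
    norm = np.sqrt(h*np.vdot(psi_new, psi_new).real)
    print(-np.log(norm)/dt)               # ground state energy estimate
    psi = psi_new/norm                    # renormalize, Eq. (3.30)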

The fact that the method of imaginary time estimates the ground state better does
not mean that variational calculations are irrelevant and inferior. Often, variational
calculations allow us to obtain quite good results when addressing problems of high
complexity. And the flexibility of the method may be improved by adding more
parameters to our trial wave functions.

3.4.4 Exercise: Two-Variable Minimization, Gradient Descent

This time, let your initial 'guess' be a Gaussian with two parameters, μ and σ:

$\psi_{\mathrm{Gauss}}(x) = \frac{1}{\sqrt[4]{2\pi\sigma^2}} \exp\left[-\frac{(x-\mu)^2}{4\sigma^2}\right]$. (3.31)

Now estimate the ground state energy - or, strictly speaking, determine an upper bound
for the ground state energy - for this potential:

$V(x) = V_s(x - 2) + x^2/50$, (3.32)

where you fix the parameters $V_0 = -5$, $w = 6$ and $s = 4$ for $V_s$ in Eq. (2.30).
As your energy estimate now depends on two parameters, minimization is
not as straightforward as in Exercise 3.4.2. In order to obtain values for both μ and
σ which minimize the expectation value of the energy, apply the method of gradient
descent - or steepest descent, as it is also called.
Gradient descent is a powerful method for minimizing a general, differentiable multivariable function,9 $F(\mathbf{x})$. Here $\mathbf{x}$ is a vector containing all the variables that F depends
on. You start out by choosing a starting point $\mathbf{x}_0$. Next, you move one step in the
direction in which the function descends the most steeply - in the direction opposite
to the gradient:

$\mathbf{x}_{n+1} = \mathbf{x}_n - \gamma \nabla F(\mathbf{x}_n)$. (3.33)

The factor γ, which is called the learning rate when applied within machine learning,
fixes the length of the step made for each iteration - together with the magnitude of
the gradient. For optimal performance, γ may be adjusted as you go along. However,
with a sufficiently small γ value, this should not be crucial in our case.
The iterations of Eq. (3.33) are repeated as long as $F(\mathbf{x}_{n+1}) < F(\mathbf{x}_n)$ - or as long
as the decrease is significant enough for each iteration. Finally, you should arrive very
close to some local minimum.

9 In our case, this would be $E(\mu, \sigma)$.



Figure 3.8 The surface illustrates how the energy expectation value of the trial wave function, Eq. (3.31), depends on the parameters μ and σ. It is a friendly landscape with only one minimum - no other pesky local minima in which we could get stuck. The dashed path towards the minimum is the result of a gradient descent calculation starting at μ = 4, σ = 2, with the learning rate γ fixed at 0.1.

The partial derivatives that make up the gradient can be estimated numerically using
the midpoint rule,10 Eq. (1.23):

$\frac{\partial}{\partial \mu} E(\mu,\sigma) \approx \frac{E(\mu + h, \sigma) - E(\mu - h, \sigma)}{2h}$,
$\frac{\partial}{\partial \sigma} E(\mu,\sigma) \approx \frac{E(\mu, \sigma + h) - E(\mu, \sigma - h)}{2h}$. (3.34)

Implement this method for yourself and determine the minimal value of $E(\mu, \sigma)$. The
choice of learning rate should not affect the result too much; in the example displayed
in Fig. 3.8, γ = 0.1 is used.
When you have managed to minimize the expectation value of the energy - within
reasonable precision - calculate the ground state energy by means of diagonalization
and compare.
Finally, run this potential, Eq. (3.32), through your imaginary time procedure, the
one you implemented in Exercise 3.4.3, and compare.
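A sketch of the descent loop, assuming an energy(mu, sigma) function analogous to the one sketched after Exercise 3.4.2, now with the trial function of Eq. (3.31); the increment and tolerance are our own choices:

# Sketch for Exercise 3.4.4: gradient descent, Eq. (3.33), with the
# midpoint-rule gradient of Eq. (3.34). energy(mu, sigma) is assumed.
def gradient_descent(mu, sigma, gamma=0.1, dh=1e-4, tol=1e-10):
    E = energy(mu, sigma)
    while True:
        dE_dmu = (energy(mu + dh, sigma) - energy(mu - dh, sigma))/(2*dh)
        dE_dsigma = (energy(mu, sigma + dh) - energy(mu, sigma - dh))/(2*dh)
        mu, sigma = mu - gamma*dE_dmu, sigma - gamma*dE_dsigma
        E_new = energy(mu, sigma)
        if E - E_new < tol:          # stop when the decrease is negligible
            return mu, sigma, E_new
        E = E_new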

In Fig. 3.8 we show the $E(\mu, \sigma)$ landscape - along with a gradient descent sequence
starting at $(\mu, \sigma) = (4, 2)$.
In these examples, where a direct solution of the time-independent Schrödinger
equation by means of diagonalization is feasible, it does not make much sense to apply
the variational principle. However, as we depart from the simple scenario of one par­
ticle in one dimension, the hope of obtaining the true ground state quickly becomes a
pipe dream. For more realistic cases, the variational principle is actually a very useful
one. Let’s do just that and make such a departure.

10 We don’t have to use the same numerical increment h when calculating both the partial derivatives here,
but it should still work if we do - as long as it is reasonably small.

3.4.5 Exercise: Variational Calculation in Two Dimensions

We minimize the energy of a potential that is an almost rectangular well in two
dimensions:

$V(x,y) = \frac{V_0}{\left(e^{s(|x|-w_x/2)} + 1\right)\left(e^{s(|y|-w_y/2)} + 1\right)}$. (3.35)

This potential, which is illustrated in Fig. 3.9, is the product of two one-dimensional
potentials of type $V_s(x)$, see Eq. (2.30). Here, we have set the smoothness, s, to be the
same for both the x- and the y-part of the potential, while the respective widths, $w_x$ and
$w_y$, differ. Specifically, take $V_0 = -1$, $s = 4$, $w_x = 4$ and $w_y = 2$. The corresponding
Hamiltonian reads

$\hat H = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} - \frac{\hbar^2}{2m}\frac{\partial^2}{\partial y^2} + V(x,y) = \hat T_x + \hat T_y + V(x,y)$, (3.36)

where $\hbar^2/(2m) = 1/2$ in our usual units.


We use trial functions that are products of purely x- and y-dependent functions:

$\psi_{\mathrm{trial}}(x, y) = \psi_x(x)\, \psi_y(y)$, (3.37)

where $\psi_x(x)$ and $\psi_y(y)$ are normalized individually. The actual ground state wave function will not have this simple form. But that does not necessarily prevent our trial
function from producing decent energy estimates anyway.

(a) Since the wave function now has two spatial variables, expectation values and other
inner products now involve a double integral rather than the single integral of
Eq. (1.24). Explain why the expectation value of the energy may now be estimated
as

$E = \langle \psi_x | \hat T_x | \psi_x \rangle + \langle \psi_y | \hat T_y | \psi_y \rangle + \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} V(x,y)\, |\psi_x(x)\psi_y(y)|^2 \,\mathrm{d}x\,\mathrm{d}y$. (3.38)

Figure 3.9 This bathtub-like shape is the potential in Eq. (3.35).

(b) Use your gradient descent implementation from Exercise 3.4.4, with suitable
adjustments, to minimize the energy expectation value with Gaussian trial functions, $\psi_x(x) \propto \exp(-x^2/(4\sigma_x^2))$ and $\psi_y(y) \propto \exp(-y^2/(4\sigma_y^2))$ in Eq. (3.37). The
proper normalization factors are provided in Eq. (3.28). The parameters to be
adjusted are $\sigma_x$ and $\sigma_y$.
The double integral in Eq. (3.38) can be calculated by summing over all combinations of x and y on their respective grids. This, in turn, could be implemented
rather conveniently by first constructing matrices for both $V(x, y)$ and $|\psi(x, y)|^2$,
then calculating their element-wise product, their Hadamard product, and summing over all elements. Finally, the double sum is multiplied11 by $h^2$. (A code
sketch of this appears after the exercise.)
(c) Gaussians have rather long tails; they fall off smoothly and never reach
zero exactly. Perhaps, with a potential with rather sharp edges such as ours, trial
functions that fall off a bit more abruptly would do better?
Repeat (b) with normalized cosine functions instead:

$\psi_x(x) = \begin{cases} \sqrt{2a_x/\pi}\, \cos(a_x x), & -\pi/(2a_x) \leq x \leq \pi/(2a_x), \\ 0, & \text{otherwise}, \end{cases}$ (3.39)

and correspondingly for $\psi_y$. Here, $a_x$ and $a_y$ are the parameters to tune in order
to minimize E in Eq. (3.38).
Which guess gives the best estimate - the Gaussian or the cosine-shaped trial
functions?
Does this conclusion seem to depend on the depth $V_0$; does it change if you redo
the calculations with a different value for $V_0$?
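Below is a sketch of the energy functional of Eq. (3.38), with FFT-based kinetic energies and the Hadamard-product trick mentioned in (b); the grid sizes, the names and our reading of Eq. (3.35) are assumptions:

import numpy as np

# Sketch for Exercise 3.4.5: the energy functional E of Eq. (3.38)
# with hbar^2/(2m) = 1/2 and the potential of Eq. (3.35).
n, L = 128, 16.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
h = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(n, d=h)
X, Y = np.meshgrid(x, x, indexing='ij')
V0, s, wx, wy = -1.0, 4.0, 4.0, 2.0
V = V0/((np.exp(s*(abs(X) - wx/2)) + 1)*(np.exp(s*(abs(Y) - wy/2)) + 1))

def kinetic(psi):                  # <psi|T|psi> for a 1D wave function
    return h*np.sum(0.5*k**2*abs(np.fft.fft(psi))**2)/n

def energy(psi_x, psi_y):          # Eq. (3.38)
    dens = np.outer(abs(psi_x)**2, abs(psi_y)**2)
    return kinetic(psi_x) + kinetic(psi_y) + h**2*np.sum(V*dens)

The last line of energy is precisely the element-wise (Hadamard) product of V and the probability density, summed over all elements and multiplied by h².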

In Fig. 3.10 we illustrate the optimal wave function for the case of a product of two
Gaussians.
Now, these one-particle ground state estimates may have been complicated enough.
However, as we addressed already in Section 1.3, things rapidly become more complex
as the number of particles increases. There are, fortunately, ways of dealing with that.

Figure 3.10 The ansatz with a product of two Gaussians in Eq. (3.37) produces a wave function that is far smoother than the corresponding potential, Fig. 3.9. Still, it comes rather close to the actual ground state; the estimated energy is only 2% off.

11 In case you, for some reason, choose to use separate grids for x and y, you should multiply by $h_x h_y$, not $h^2$.
4 Quantum Physics with Several Particles - and Spin

The 1977 Nobel laureate in Physics Philip W. Anderson made a very concise state­
ment in the title of a Science publication: ‘More is different’ [4]. Although we have
already seen several interesting phenomena with one particle alone, the quantum world
becomes a lot richer, more complex and interesting when there are more particles. This
is not only due to the infamous curse of dimensionality.
Contrary to Anderson, we will take ‘more’ to mean two in this chapter. Moreover,
we will take these two particles to be two of the same kind; they will be identical - in
the absolute sense.

4.1 Identical Particles and Spin

We can hardly talk about identical particles without addressing spin. It is a trait that
quantum particles have. Just like a few other things we have encountered thus far, it
does not have any classical analogue. It does, however, relate to angular momentum -
although a simple picture of a ball rotating around its own axis would be too simple.
As mentioned in regard to Eq. (3.8), the 'normal' angular momentum $\boldsymbol{\ell}$, the angular
momentum you also have in classical physics, is quantized in quantum physics. Also
the spin $\mathbf{s}$ is quantized in a similar way; the eigenvalues of the square of the spin operator, $\hat s^2$, are $s(s+1)\hbar^2$ - analogous to $\ell(\ell+1)\hbar^2$ for $\hat\ell^2$. But there are two significant
differences.

(1) While the quantum number ℓ assumes integer values, the spin quantum number s
may also assume half-integer values, s = 0, 1/2, 1, 3/2, ...
(2) While the angular momentum ℓ of a system is a dynamical quantity - it may change
due to interaction - the spin is an intrinsic property specific to the particle. A
particle will never change its s quantum number.1

A particle with spin may, however, change the direction of its spin vector. Quan­
tum particles that have a non-zero spin are affected by magnetic interactions. This
was witnessed in the famous Stern-Gerlach experiment (see Fig. 4.1). It consisted in

1 More precisely: an elementary particle would not change its spin; composite particles, such as an atom or
a nucleus, may arrange the spins of the constituent particles together in various ways - leading to various
total spins.


Figure 4.1 This plaque celebrates the beautiful experiment of Otto Stern (left) and Walther Gerlach (right). It is mounted on the building in Frankfurt, Germany, in which their famous 1922 experiment was conducted.

sending silver atoms through an inhomogeneous magnetic field. When the position of
each atom was detected afterwards, it turned out that the atoms were deflected either
upwards or downwards, with equal probabilities. Atoms passing straight through were
not observed; they were always deflected up or down. This showed how the spin orientation is quantized - in addition to demonstrating the existence of spin in the first place.
We will return to this experiment shortly.
You do not, however, need a magnetic field present in order for spin to be important.
When we are dealing with two or more particles that are identical, keeping track of their
spins is absolutely crucial.

4.1.1 Exercise: Exchange Symmetry

Suppose that $\Psi(x_1, x_2)$ is the wave function of a two-particle system that consists of
identical particles. Let $x_1$ and $x_2$ be the positions of particles one and two, respectively.

(a) If the particles are truly identical in the absolute sense, any physical prediction must
be independent of the order in which we list the particles.2 As a consequence, we
must insist that

$\Psi(x_2, x_1) = e^{i\theta}\, \Psi(x_1, x_2)$ (4.1)

for some real phase θ. Why?


2 This is one of those ideas worth taking a moment to dwell on.

(b) Actually, this phase must be either 0 or π. In other words:

$\Psi(x_2, x_1) = \pm \Psi(x_1, x_2)$. (4.2)

Why?
Hint: Flip the ordering of the particles twice.3

So what's it going to be - plus or minus? The answer is: it depends. In this case it
depends on the spin of the particles. Specifically, it depends on whether the spin quantum number s for the particles at hand is an integer or a half-integer. Electrons, for
instance, have spin quantum number s = 1/2, while photons have spin with s = 1.
Particles with half-integer spin, 1/2, 3/2 and so on, are called fermions, while particles
with integer spin, 0, 1, 2, ..., are referred to as bosons. The names are assigned in honour of the Italian physicist Enrico Fermi and the Indian mathematician and physicist
Satyendra Nath Bose, respectively. And here is the answer to the question about the
sign in Eq. (4.2):
The wave function for a set of identical fermions is anti-symmetric with respect to
interchanging two particles, while wave functions for identical bosons are exchange
symmetric.
In other words, Eq. (4.2) generalizes to any number of identical particles. This prin­
ciple is called the Pauli principle, proposed by and named after the Austrian physicist
Wolfgang Pauli, whose matrices we will learn to know quite well later on. We emphasize
that this principle only applies to identical particles; the wave function of a system con­
sisting of two particles of different types, such as an electron and a proton for instance,
is not required to fulfil any particular exchange symmetry.
Here we have made several unsubstantiated claims about the spin of particles and
their corresponding exchange symmetry. We are not able to provide any theoretical
justification for these. The reason for this is the fact that they are fundamental pos­
tulates. The real justification is, as is so often the case with empirical sciences such as
physics, the fact that it seems to agree very well with how nature behaves. The concept
of spin grew out of a need to understand observation.
And we will keep on postulating a bit more. Like any angular momentum, spin is
a vectorial quantity; it has a direction, not just a magnitude. The projection of the
angular momentum, both the spin $\mathbf{s}$ and the 'usual' one $\boldsymbol{\ell}$, onto a specific axis is also
quantized. If you measure the z-component $s_z$ of the spin $\mathbf{s}$ of a particle, your result
will necessarily be such that $s_z = m_s \hbar$, where $m_s = -s, -s+1, \ldots, s-1, s$. The case of
$\boldsymbol{\ell}$ is quite analogous; the eigenvalues of $\hat\ell_z$ are $m_\ell \hbar$, where $m_\ell = -\ell, -\ell+1, \ldots, \ell-1, \ell$,
see Eq. (3.8). For an s = 1/2 particle, only two values are possible for $m_s$, namely −1/2
and +1/2.
When we refer to the z-component of $\mathbf{s}$ and $\boldsymbol{\ell}$ above, this is just following the traditional labelling. The axis you project onto, regardless of what you call it, could point
in any direction.

3 In certain two-dimensional systems there are quasiparticles for which θ in Eq. (4.1) may actually take any
value.

As mentioned, in the Stern-Gerlach experiment (Fig. 4.1) silver atoms in their ground
state were used. In this state the electrons organize themselves so that the total angular
momentum is that of a single electron without 'ordinary' orbital angular momentum,
so that, in effect, the whole atom becomes a spin-1/2 particle. Thus, a spin-projection
measurement would reveal either $s_z = +\hbar/2$ (up) or $s_z = -\hbar/2$ (down) as its result.
When Walther Gerlach sent atoms through his magnetic field and then measured their
positions afterwards, he performed precisely such a projection measurement; a particle
with spin oriented upwards would be deflected upwards - and vice versa. The quantized
nature of spin orientation is revealed by the fact that the silver atoms are deflected
upwards or downwards at certain probabilities, never anything in between. If the atoms
were considered classical magnets with arbitrary orientation in space, this distribution
would be continuous - with a peak at $s_z = 0$. As demonstrated by the postcard that
Walther Gerlach sent Niels Bohr in 1922 (see Fig. 4.2), this is not what came out of the
experiment.
One more comment on the Stern-Gerlach experiment is in order. Sometimes
textbooks have a tendency to present famous, fundamental experiments and their interpretations in overly coherent ways - ways that differ significantly from the original
scope and the historical context of the experiment. The Davisson-Germer experiment, for instance, which consisted in bombarding a nickel surface with electrons at

Figure 4.2 This postcard from Walther Gerlach to Niels Bohr, dated 8 February 1922, shows a photograph of the result of the famous Stern-Gerlach experiment, which Stern and Gerlach actually had agreed to give up on before Gerlach decided to give it one more go [18]. It says, 'Honourable Mr. Bohr, attached is the continuation of our work [...], the experimental proof of space quantization. [...] We congratulate you on the confirmation of your theory!' The left plate shows how the silver atoms are distributed in the absence of any magnetic field while the right one is the result with a field in place. The gap that emerges in the presence of a field shows how the atoms are deflected, either left or right in this case; no atom goes straight through.

different angles, is considered the first observation of quantum interference. However,
the experimentalists simply set out to investigate the surface of nickel. When they
ended up demonstrating that de Broglie was right about his wave assumption, this
was more or less accidental. A similar comment may be made about the Michelson-
Morley experiment and its relation to Einstein’s special theory of relativity. The aim
of Stern and Gerlach’s experiment was to demonstrate the quantized nature of angular
momentum of the atom, not spin angular momentum specifically. Actually, the notion
of electron spin was not established until a few years later. This theoretical develop­
ment involved names such as Ralph Kronig, George Uhlenbeck, Samuel Goudsmit
and, albeit strongly opposed at first, Wolfgang Pauli.

4.1.2 Exercise: On Top of Each Other?

We have just learned that the spatial degree of freedom is not the only one that a quan­
tum system has; spin also enters into the equation, so to speak. Thus, from now on,
the wave function should also have a spin part.
But let us forget about that for one more minute and consider the spatial wave function of a system consisting of two identical fermions, $\Psi(x_1, x_2)$, which should abide by
Eq. (4.2) with the proper sign.

(a) Suppose now that we measure the position of each fermion simultaneously. Why
can we be absolutely certain that we will not get the same result for each particle?
Can bosons be found on top of each other?
We next consider one spin-1/2 particle and construct its proper wave functions
by augmenting the spatial part $\Psi(x)$ with a spin part. The general wave function
may be written as a sum over the products of a spatial part and a spin part:

$\Phi(x) = \Psi_\uparrow(x)\chi_\uparrow + \Psi_\downarrow(x)\chi_\downarrow$, (4.3)

where the χ are spin eigenstates corresponding to the spin quantum number s =
1/2; $\chi_\uparrow$ corresponds to $m_s = +1/2$ and $\chi_\downarrow$ to $m_s = -1/2$:

$\hat s^2 \chi_\uparrow = \frac{1}{2}\left(\frac{1}{2} + 1\right)\hbar^2\, \chi_\uparrow, \qquad \hat s_z \chi_\uparrow = +\frac{\hbar}{2}\, \chi_\uparrow$, (4.4a)
$\hat s^2 \chi_\downarrow = \frac{1}{2}\left(\frac{1}{2} + 1\right)\hbar^2\, \chi_\downarrow, \qquad \hat s_z \chi_\downarrow = -\frac{\hbar}{2}\, \chi_\downarrow$. (4.4b)

We insist that they are orthonormal according to their own inner product:

$\langle \chi_\uparrow | \chi_\uparrow \rangle = \langle \chi_\downarrow | \chi_\downarrow \rangle = 1, \qquad \langle \chi_\downarrow | \chi_\uparrow \rangle = 0$. (4.5)

When we are dealing with spin states, the notation $\langle \cdot | \cdot \rangle$ does not refer to the integral definition we used for spatial wave functions, Eq. (1.24). It is still a bona fide
inner product, it's just a bit more abstract.4
If the spatial parts of the wave function in Eq. (4.3), $\Psi_\uparrow$ and $\Psi_\downarrow$, are proportional,
the total wave function will be a single product state:

4 In case you are thinking something along the lines of ‘But what do these spin states actually look like?’,
don’t worry, we will get to that. Let this be abstract for now.

$\Phi(x) = \Psi(x)\, \chi$, (4.6)

where χ, the spinor, is a normalized linear combination of $\chi_\uparrow$ and $\chi_\downarrow$:

$\chi = a\chi_\uparrow + b\chi_\downarrow$, where $|a|^2 + |b|^2 = 1$. (4.7)

The same applies to two-particle wave functions; they could be of the form

$\Phi(x_1, x_2) = \Psi(x_1, x_2)\, \mathcal{X}_2$, (4.8)

where $\mathcal{X}_2$ is a two-particle spin state. It can always be written as a linear combination of the four product states:

$\chi_\uparrow^{(1)}\chi_\uparrow^{(2)}, \quad \chi_\uparrow^{(1)}\chi_\downarrow^{(2)}, \quad \chi_\downarrow^{(1)}\chi_\uparrow^{(2)} \quad \text{and} \quad \chi_\downarrow^{(1)}\chi_\downarrow^{(2)}$. (4.9)

In other words, a general two-particle spin state may be written

$\mathcal{X}_2 = a\, \chi_\uparrow^{(1)}\chi_\uparrow^{(2)} + b\, \chi_\uparrow^{(1)}\chi_\downarrow^{(2)} + c\, \chi_\downarrow^{(1)}\chi_\uparrow^{(2)} + d\, \chi_\downarrow^{(1)}\chi_\downarrow^{(2)}$ (4.10)

for spin-1/2 particles. Here, the upper index refers to the particles in the same way
that the variables $x_1$ and $x_2$ refer to the positions of the respective particles. The first of
the states listed in Eq. (4.9) corresponds to a state in which the spins of
both particles are oriented upwards, the next one corresponds to particle 1 being
oriented upwards while particle 2 is oriented downwards - and so on.
(b) Now, four specific linear combinations of spin states are particularly interesting:

$\chi_\uparrow^{(1)}\chi_\uparrow^{(2)}, \quad \chi_\downarrow^{(1)}\chi_\downarrow^{(2)}, \quad \frac{1}{\sqrt{2}}\left(\chi_\uparrow^{(1)}\chi_\downarrow^{(2)} + \chi_\downarrow^{(1)}\chi_\uparrow^{(2)}\right)$, (4.11a)

$\frac{1}{\sqrt{2}}\left(\chi_\uparrow^{(1)}\chi_\downarrow^{(2)} - \chi_\downarrow^{(1)}\chi_\uparrow^{(2)}\right)$. (4.11b)

What can we say about the exchange symmetry of each of these spin states? Or,
in other words, if we interchange the labellings 1 ↔ 2, does this affect the state?
How?
(c) The principle that fermionic wave functions are anti-symmetric with respect to
interchanging the particle labelling applies to the total, combined wave function Φ,
not any spatial part or spin part separately. Now, suppose two identical particles
are in a product state such as in Eq. (4.8), and that the two-particle spin state $\mathcal{X}_2$
coincides with the state of Eq. (4.11b). Explain why the spatial part $\Psi(x_1, x_2)$ must
be exchange symmetric in this case.
(d) Can two fermions with a wave function as in (c) be found on top of each other? If
so, what can be said about their spin orientations?

The states listed in Eqs. (4.11) are in fact eigenstates of the total spin operator for two
spin-1/2 particles - the sum of the spin operators for each particle. Or, more precisely,
they are eigenstates of the operator

$\hat S^2 = (\hat{\mathbf{s}}_1 + \hat{\mathbf{s}}_2)^2$, (4.12)

where $\hat{\mathbf{s}}_1$ and $\hat{\mathbf{s}}_2$ are the vectorial spin operators for each of the particles.5 The eigenvalues of $\hat S^2$ have the form $S(S+1)\hbar^2$. For the three states in Eq. (4.11a), the total spin
quantum number S = 1.

5 For the sake of clarity, we have not explained why the states in Eqs. (4.11) actually are eigenstates of the
$\hat S^2$ operator; this is by no means supposed to be obvious.

This group of three is usually referred to as the spin triplet.
The state of Eq. (4.11b), which is called the singlet state, has S = 0.
One lesson to be learned from the above exercise is that two identical fermions can
never be at the same position when they also have the same spin projections. Note that
this applies irrespective of any mutual repulsion or attraction between the particles;
it is more fundamental than that. More generally, two identical fermions can never
be found in the same quantum state simultaneously. This is a rather common way
of presenting the Pauli principle, which is, for this reason, often referred to as the Pauli
exclusion principle. It is a direct consequence of the fact that the wave function for
a system of identical fermions changes sign when two particles are exchanged. It has
profound implications. Suppose an atom consists of a nucleus and several electrons
which, somehow, do not interact. Even in this odd case, in which the electrons would
not repel each other via their electrostatic interaction, you would still never see any
atom with all its electrons in the ground state. The Pauli principle forces each electron
to populate a quantum state that is not already occupied - to pick a vacant seat on the
bus, so to speak. Since electrons, in fact, do repel each other, the picture is more complex. But the Pauli principle is still indispensable if we are to understand the structure
of matter. If electrons, protons and neutrons were bosons, neither physics nor chemistry would resemble anything we know at all. Consequently, nor would anything that
consists of matter.
The fact that identical bosons, on the other hand, are free to occupy the same quantum state has interesting consequences such as superconductivity and superfluidity. We
will, however, stick with fermions in the following.

4.2 Entanglement

The general shape of a single-particle wave function of a spin-1/2 particle in Eq. (4.3) is
an example of what we call an entangled state. In general, we cannot consider spatial
and spin degrees of freedom separately. We can, however, if the wave function happens
to be in a product form, such as in Eq. (4.6) or Eq. (4.8). For such situations, the space
part and the spin part of the wave function are disentangled; it makes sense to consider
one of them without regard for the other. In fact, this is what we did in the preceding
chapters - before we introduced spin.
For a two-particle system of the form of Eq. (4.8), the space part Ψ or the spin
part $\mathcal{X}_2$ could also be subject to entanglement individually. As an example, let us again
consider the four spin states in Eq. (4.9) and in Eqs. (4.11). If our state is prepared in,
say, the third product state of Eq. (4.9), we know for sure that particle 1 has its spin
oriented downwards while particle 2's spin is oriented upwards. The same may not be
said about the third state in Eqs. (4.11); particle 1 could have its spin oriented either
way - and so could particle 2. However, they are not independent; this two-particle
spin state only has components in which the spin projections are opposite. The state
is rigged such that a spin measurement on one of the particles will also be decisive for
the other. Suppose Walther Gerlach, or someone else, measures the spin of particle 1.
That would induce a collapse of the spin state of particle 1 to one of the $s_z$ eigenstates,

$\chi_\uparrow^{(1)}$ or $\chi_\downarrow^{(1)}$. If Walther finds the spin to point downwards, the resulting two-particle
spin state has collapsed to $\chi_\downarrow^{(1)}\chi_\uparrow^{(2)}$; in our initial state the component d in Eq. (4.10) is
zero and, thus, so is the probability of reading off spin down for particle 2.
So, Walther does not have to measure the spin of particle 2 to know the outcome;
it has to be oriented upwards. And, while the outcome of the first measurement could
just as well have been upwards, a measurement on the second one would still have
yielded the opposite result. In other words, if a spin-1/2 system with two particles is
prepared in the third state of Eqs. (4.11a), we do not know anything about the spin
orientation of each of the particles individually, but we may be certain that they are
oriented oppositely. The same may also be said about the state in Eq. (4.11b), the singlet
state.

4.2.1 Exercise: Entangled or Not Entangled

When full knowledge about the system boils down to full knowledge of each particle,
the state of a quantum system with two particles is described by a product state. When
this is not possible, we say that the particles are entangled.

(a) Below we list five different normalized states for a system of two spin-1/2 particles.
Which of these are entangled and which are not?

(i) $\frac{1}{\sqrt{2}}\left(\chi_\uparrow^{(1)}\chi_\downarrow^{(2)} - \chi_\downarrow^{(1)}\chi_\uparrow^{(2)}\right)$

(ii) $\frac{1}{\sqrt{2}}\left(\chi_\uparrow^{(1)}\chi_\uparrow^{(2)} - \chi_\downarrow^{(1)}\chi_\downarrow^{(2)}\right)$

(iii) $\chi_\downarrow^{(1)}\chi_\uparrow^{(2)}$

(iv) $\frac{1}{2}\left(\chi_\uparrow^{(1)}\chi_\uparrow^{(2)} + \chi_\downarrow^{(1)}\chi_\uparrow^{(2)} - \chi_\uparrow^{(1)}\chi_\downarrow^{(2)} - \chi_\downarrow^{(1)}\chi_\downarrow^{(2)}\right)$

(v) $\frac{1}{2}\left(\chi_\uparrow^{(1)}\chi_\uparrow^{(2)} + \chi_\uparrow^{(1)}\chi_\downarrow^{(2)} + \chi_\downarrow^{(1)}\chi_\uparrow^{(2)} - \chi_\downarrow^{(1)}\chi_\downarrow^{(2)}\right)$

(b) We claimed that in order for the total wave function including both the spatial
and spin degrees of freedom, Eq. (4.3), to be on product form, Eq. (4.6), $\Psi_\uparrow(x)$
and $\Psi_\downarrow(x)$ would have to be proportional. Otherwise, space and spin would feature
some degree of entanglement. Show this.
(c) In Eqs. (4.11) we saw two entangled states and two product states. The former
states, the entangled ones, correspond to a situation in which the spin projection of
each particle is entirely undetermined, while we know they are oppositely aligned.
Can you construct linear combinations of the product states in Eqs. (4.11a) for
which the same may be said - except with aligned spins?6
(d) Suppose that you pick the coefficients in the linear combination of Eq. (4.10) at random. Is this likely to amount to an entangled state? Or do you have to carefully
design your linear combination in order for entanglement to emerge?

6 You may have seen one such state already.



We now touch upon another profound characteristic of nature, namely the non­
locality of quantum physics. We have learned that if a two-particle system is prepared
in a fully entangled state such as the singlet state of Eq. (4.11b), knowledge of the
outcome of a measurement on the first particle will determine the outcome of a meas­
urement on the second - unless it has been tampered with. This is supposed to hold
irrespective of the spatial separation between the two particles.
But wait. How, then, can the second particle have learned which way to orient its spin
after a measurement on the first one? Surely, the first particle cannot possibly have sent
the second one any instantaneous message - defying the speed of light, not to mention
common sense. In reality, the orientations must have been rigged before we moved the
particles apart, right? Although we were not able to predict the outcome of the first
measurement, that couldn’t mean that it was actually arbitrary; there must have been
a piece of information we missed out on. Right?
This issue is one of the reasons why Einstein never fully bought into quantum phys­
ics. It relates to one of the most famous disputes in science, in which Einstein and Bohr
played the opposing parts (Fig. 4.3). Bohr argued that particle correlations such as the
ones we have studied here could very well persist in spite of arbitrarily large spatial
separation; no communication between the particles was needed.

Figure 4.3 In 1925, their mutual friend Paul Ehrenfest, himself a brilliant physicist, brought Niels Bohr and Albert Einstein
together in an attempt to have them settle their scientific differences about the interpretation of quantum theory.
Despite the seemingly good atmosphere, he did not succeed.

Many physicists would say that the matter was settled when experiments ingeniously
devised by John Stewart Bell, from Northern Ireland, were carried out in the early
1970s. States such as the last two in Eqs. (4.11) were crucial in this regard. In fact,
these are examples of so-called Bell states.
And who was right, you wonder? The short answer is that Bohr was right and
Einstein was wrong.
The concept of entanglement is certainly apt for philosophical discussions. It also
has very specific technological implications when it comes to quantum information
technologies, some of which will be addressed in the next chapter.
In most of the remainder of this chapter, we will see how spin, via the exchange
symmetry, strongly affects the energy of two-particle states. Before investigating this
further, however, we will dwell a little more on the spin characteristics alone.

4.3 The Pauli Matrices

Perhaps it seems odd to consider the spin part without any regard to the spatial dependence of the wave function. However, this does make sense if we are able to pin the
particles down somehow - for instance in an ion trap, as in Fig. 4.4.
If we take the states $\chi_\uparrow$ and $\chi_\downarrow$ to be the basis states, we may conveniently represent
the general spin state of a spin-1/2 particle as a simple two-component vector:

$\chi = a\chi_\uparrow + b\chi_\downarrow = \begin{pmatrix} a \\ b \end{pmatrix}$. (4.13)

Figure 4.4 Illustration of an ion trap. An ion differs from an atom in that the number of electrons does not match the number of protons in the nucleus. Thus, contrary to atoms, ions carry a net charge. In an ion trap, ions are confined in space by electric fields. Laser beams may now be used to manipulate and control the internal state of individual ions - or a few neighbouring ions.

Correspondingly, inner products are conveniently calculated as

$\langle \chi_1 | \chi_2 \rangle = \chi_1^\dagger \chi_2 = \begin{pmatrix} a_1^* & b_1^* \end{pmatrix} \begin{pmatrix} a_2 \\ b_2 \end{pmatrix} = a_1^* a_2 + b_1^* b_2$. (4.14)

The spin orientation of a particle can be manipulated by magnetic fields. Specifically,
if a particle with spin is exposed to a magnetic field $\mathbf{B}$, this adds the contribution

$\hat H_B = -g\frac{q}{2m}\, \mathbf{B} \cdot \hat{\mathbf{s}}$ (4.15)

to the Hamiltonian. Here q is the charge of the particle and g is a somewhat mysterious factor which emerges in relativistic formulations of quantum physics. For an
electron, this g-factor happens to be 2 - or, more precisely, very close to 2. As with any
physical quantity, the spin has its own operator. In the representation of Eq. (4.13), in which
$\chi_\uparrow = (1,0)^T$ and $\chi_\downarrow = (0,1)^T$, the vectorial spin operator may be expressed in terms
of matrices:

$\hat{\mathbf{s}} = [\hat s_x, \hat s_y, \hat s_z] = \frac{\hbar}{2} [\sigma_x, \sigma_y, \sigma_z] = \frac{\hbar}{2}\, \boldsymbol{\sigma}$, (4.16)

where

$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$. (4.17)

These are the famous Pauli matrices.

4.3.1 Exercise: Some Characteristics of the Pauli Matrices

(a) Verify, by direct inspection, that $\sigma_x^2 = \sigma_y^2 = \sigma_z^2 = I_2$, where $I_2$ is the 2×2 identity
matrix.
(b) Verify that any spin-1/2-particle state, Eq. (4.13), is an eigenstate of $\hat s^2 = \hat s_x^2 + \hat s_y^2 + \hat s_z^2$
with eigenvalue $s(s+1)\hbar^2$, s = 1/2. See Eqs. (4.4).
(c) Verify that the Pauli matrices are all Hermitian.
(d) Verify that $[\sigma_x, \sigma_y] = 2i\sigma_z$. We introduced the commutator between two operators,
or matrices, in Eq. (2.42). The same holds if you interchange x, y and z cyclically,
$\{x,y,z\} \to \{y,z,x\} \to \{z,x,y\}$.
(e) Verify that $\{\sigma_x, \sigma_y\} = \{\sigma_x, \sigma_z\} = \{\sigma_y, \sigma_z\} = 0_{2\times 2}$, the 2×2 zero matrix, where

$\{A, B\} = AB + BA$ (4.18)

is the anti-commutator.
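These properties are easily checked numerically as well; a minimal sketch:

import numpy as np

# Numerical check of the Pauli matrix identities in Exercise 4.3.1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

for s in (sx, sy, sz):
    assert np.allclose(s @ s, I2)               # (a): squares give identity
    assert np.allclose(s, s.conj().T)           # (c): Hermitian
assert np.allclose(sx @ sy - sy @ sx, 2j*sz)    # (d): the commutator
assert np.allclose(sx @ sy + sy @ sx, 0)        # (e): the anti-commutator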

As mentioned, the basis states of our spin space, $\chi_\uparrow$ and $\chi_\downarrow$, are the eigenstates of the
spin projection along the z-axis. For a particle prepared in the $\chi_\uparrow$ state, a measurement
of its spin along the z-axis will necessarily give the eigenvalue $+\hbar/2$ of the operator
$\hat s_z = (\hbar/2)\,\sigma_z$. As also mentioned, the z-axis is not by any means more special than any
other axis, and it could be chosen to point in any direction. Moreover, we may very
well want to measure the spin projection along the x- or the y-axes as well.

4.3.2 Exercise: The Eigenstates of the Pauli Matrices

(a) It is quite straightforward to observe that for $\sigma_z$ the eigenvectors
$\chi_\uparrow = (1,0)^T$ and $\chi_\downarrow = (0,1)^T$ have the eigenvalues +1 and −1, respectively - corresponding to the
eigenvalues $\pm\hbar/2$ for $\hat s_z$ in Eqs. (4.4).
Show that the eigenvalues of $\sigma_x$ and $\sigma_y$ are also ±1. What are the corresponding
eigenvectors?
(b) If you measure the spin projection along the z-axis and find that it is $\hbar/2$, what
will be the outcome of performing the same measurement once more right after?
(c) Suppose now that the spin projection along the x-axis has been measured and
found to be positive - it turned out to be $+\hbar/2$. This measurement collapsed the
spinor into the corresponding eigenvector; let's call it $\chi_\rightarrow$. Next, we make another
spin projection measurement, this time along the z-axis. What is the probability
of measuring 'spin up' - of finding $+\hbar/2$ for $s_z$? And what is the probability of
measuring spin down?
(d) Now, let's flip the coin and assume that we have measured a positive spin projection
along the z-axis. What is the probability of measuring a positive spin projection
along the x-axis?
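For part (c), a quick numerical sanity check could look like this sketch:

import numpy as np

# Sketch for Exercise 4.3.2(c): probability of 'spin up' along z after
# having measured +hbar/2 along x.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
vals, vecs = np.linalg.eigh(sx)
chi_plus_x = vecs[:, np.argmax(vals)]       # eigenvector with eigenvalue +1
chi_up_z = np.array([1, 0], dtype=complex)  # the spin-up state along z
print(abs(np.vdot(chi_up_z, chi_plus_x))**2)    # prints 0.5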

So, a sequence of spin projection measurements where you constantly alternate
between the z-axis and the x-axis is like tossing a coin, in the sense that the result
could, with equal probability, be either positive or negative every time. However, if
you do repeated measurements along the same axis, you are bound to get the same
result as the first time - over and over again. These are direct consequences of how the
wave function, including the spin wave function, collapses due to a measurement.

4.4 Slater Determinants, Permanents and Energy Estimates

For many-particle systems, it is quite common when solving both the time-dependent
and the time-independent Schrödinger equation to express the solution in terms of
single-particle wave functions. Typically, these wave functions are eigenfunctions of a
single-particle Hamiltonian; for one-particle systems we saw examples of this in Section 3.3. Suppose now that we have a complete set of one-particle wave functions
$\varphi_1(x)$, $\varphi_2(x)$, ..., which include both a spin part and a spatial part. We may use these to
build many-particle states - to construct a basis in which we could expand our many-body wave function. The simplest way to construct such a basis would be to simply put
together all possible products of the type

$\phi(x_1, x_2, x_3, \ldots) = \varphi_{n_1}(x_1)\, \varphi_{n_2}(x_2)\, \varphi_{n_3}(x_3) \cdots$. (4.19)

The actual wave function can always be written as a linear combination of such product
states. However, if these particles are identical fermions, we must make sure that this
linear combination indeed constitutes an exchange anti-symmetric wave function. This
could potentially be rather cumbersome.
If, instead of such simple product states, we construct a basis of N-particle wave
functions where each of them already has the proper exchange symmetry, this would
no longer be an issue; any linear combination of these states would then also have the
right exchange symmetry. For identical fermions, a very convenient way of achieving
this is to write our basis states as determinants:

$\phi(x_1, x_2, \ldots, x_N) = \frac{1}{\sqrt{N!}} \begin{vmatrix} \varphi_1(x_1) & \varphi_1(x_2) & \cdots & \varphi_1(x_N) \\ \varphi_2(x_1) & \varphi_2(x_2) & \cdots & \varphi_2(x_N) \\ \vdots & \vdots & \ddots & \vdots \\ \varphi_N(x_1) & \varphi_N(x_2) & \cdots & \varphi_N(x_N) \end{vmatrix}$. (4.20)

Such determinants are called Slater determinants after the US physicist John C. Slater,
who introduced the concept in 1929. The prefactor $1/\sqrt{N!}$ ensures normalization -
provided that the single-particle states $\varphi_1(x)$, $\varphi_2(x)$, ... are orthonormal. Here we have
constructed an N-particle state consisting of the N first states in the single-particle
basis. If one or more of the single-particle wave functions are replaced by others, we
would produce another, orthonormal N-particle wave function. Rearranging the ones
we already have would not, however. As you may remember from your classes in linear
algebra, interchanging two columns - or rows - in a square matrix only amounts to
an overall sign change in its determinant, thus ensuring exchange anti-symmetry.
So much for fermions - what about bosons? Can we construct a basis of exchange
symmetric N-particle states from the same single-particle states? Sure we can. It's simple;
just replace the determinant in Eq. (4.20) with its less known sibling, the permanent.
The determinant of a square matrix is defined as the sum of all possible signed products
of matrix entries that share neither row nor column; the sign is determined by
the order of the element indices. The permanent is defined in the same way - except
there is no sign rule. Consequently, exchanging two columns does not change anything
in the permanent; a Slater permanent constructed analogously to Eq. (4.20) will,
correspondingly, be exchange symmetric by construction.
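
To make the construction concrete, here is a minimal sketch - with orbitals and function names of our own choosing - that evaluates the amplitude of Eq. (4.20) at one configuration, together with a naive permanent for the bosonic analogue:

```python
import numpy as np
from math import factorial
from itertools import permutations

def slater_amplitude(orbitals, xs):
    """Eq. (4.20) evaluated at one configuration (x1, ..., xN)."""
    A = np.array([[phi(x) for x in xs] for phi in orbitals])   # A[i, j] = phi_i(x_j)
    return np.linalg.det(A) / np.sqrt(factorial(len(xs)))

def permanent(A):
    """Like the determinant, but without the sign rule."""
    n = len(A)
    return sum(np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

# Two orthonormal harmonic oscillator orbitals as a test case
orbitals = [lambda x: np.pi**-0.25 * np.exp(-x**2 / 2),
            lambda x: np.sqrt(2) * np.pi**-0.25 * x * np.exp(-x**2 / 2)]
print(slater_amplitude(orbitals, (0.7, -0.7)))   # changes sign if the xs are swapped
print(slater_amplitude(orbitals, (-0.7, 0.7)))
```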

4.4.1 Exercise: Repeated States in a Slater Determinant

It does not make sense to include the same single-particle state more than once in a
Slater determinant. Why is that?
Does it make sense to include repeated single-particle states in a Slater permanent?

The answers to these questions are the underlying reasons why ground states
of bound two-particle systems, such as the helium atom, are spatially exchange-
symmetric states, not anti-symmetric ones.

4.4.2 Exercise: The Singlet and Triplet Revisited

Once again we only consider the spin part of a particle - using χ↑ and χ↓ as our basis
states. For a system with N = 2 spin-1/2 particles, show that the only non-zero Slater
determinant is in fact the singlet state in Eq. (4.11b).
Suppose we use the same basis states to construct two-particle Slater permanents.
Which states will we arrive at then?

Approaches in which the wave function is expressed as a linear combination of
determinants/permanents are frequently used in solving many-body problems - predominantly
time-independent ones. The most straightforward approach in this regard
is called full configuration interaction. In this method, one starts out with a set of
orthonormal single-particle states - one that hopefully is near complete - constructs all
possible Slater determinants/permanents from this set, constructs the Hamiltonian
matrix in this basis and, finally, diagonalizes it.
While this may be intuitively appealing, it is usually rather costly computationally -
too costly in most interesting cases. This is rarely a viable way to battle the curse of
dimensionality. Instead we could try to make educated guesses on what the wave func­
tion might look like - assume a certain shape of the wave function - and, in this way,
reduce complexity considerably. When making such ‘guesses’ for the wave function,
which depends on both positions and spins, we must be sure to encompass the proper
exchange symmetry.

4.4.3 Exercise: Variational Calculations with Two Particles

As we did in several exercises in Chapter 3, we make use of the variational principle,
Eq. (3.27), to estimate ground state energies. Contrary to those examples, here we
consider a two-particle system.
Suppose two identical fermions are trapped in a confining potential of the form of
Eq. (2.30), and that they interact via this potential:

$$W(x_1, x_2) = \frac{W_0}{\sqrt{(x_1 - x_2)^2 + 1}}. \qquad (4.21)$$

Here, as before, x₁ is the position of the first particle and x₂ is the position of the
second. We take the interaction strength W₀ to be 1 in this case. With positive W₀,
the interaction is a repulsive one; the energy increases as the two particles approach
each other - when |x₁ - x₂| becomes small. Our interaction could model the electric
repulsion between two particles with charges of the same sign.7

7 Since we work with one-dimensional particles, it would not work to use the actual Coulomb potential,
Eq. (2.7), in this case.

With two particles, our spatial wave function has two variables, Ψ(x₁, x₂), and the
Hamiltonian is a two-particle operator:

$$H = h_1 + h_2 + W(x_1, x_2), \qquad (4.22)$$

where $h_k = \hat{p}_k^2/(2m) + V(x_k)$ is the one-particle part8 for particle number k. The energy
expectation value is now a double integral:

$$\langle E \rangle = \langle \Psi | H | \Psi \rangle = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \left[\Psi(x_1, x_2)\right]^* H\, \Psi(x_1, x_2)\, dx_1\, dx_2, \qquad (4.23)$$

which has three terms - corresponding to the three terms of the Hamiltonian,
Eq. (4.22).
And then there is spin.

(a) First, assume that the two-particle system has a spatial wave function which can
be written as a product of the same one-particle wave function for each of the
particles:
$$\Psi(x_1, x_2) = \psi_{\mathrm{trial}}(x_1)\, \psi_{\mathrm{trial}}(x_2). \qquad (4.24)$$

This is a crude oversimplification. However, it may still be able to produce a decent
energy estimate.9
In terms of exchange symmetry, is such a spatial wave function admissible for
fermions? If we include the spin degree of freedom in the wave function, could
Eq. (4.24) be part of a Slater determinant? What requirements apply to the spin
state of these two particles in that case?
(b) Assume that the spatial trial wave function ψ_trial has the form of Eq. (3.28), a
Gaussian centred around x = 0. It depends parametrically on the width σ. Now,
as in Exercise 3.4.2, minimize this expectation value with respect to this parameter.
Let your confining potential, V(x), be given by Eq. (2.30), with the parameters
V₀ = -1, w = 4 and s = 5.
As in Exercise 3.4.5, this is a bit more cumbersome than many other exercises we
have seen since we are dealing with double integrals. However, similar to Eq. (3.38),
the two terms stemming from the one-particle Hamiltonians, h₁ and h₂, can be
reduced to single integrals - even identical ones in this case. The interaction energy,
on the other hand, remains a double integral - one that you can deal with in exactly
the same manner as in Exercise 3.4.5. (A minimal numerical sketch of this energy
evaluation follows after this exercise.)
As always: Ensure that your grid is dense and wide enough by repeating your
calculations with improved numerics.
(c) Next, allow our trial function a bit more of ‘wiggle room’ and base our one-particle
spatial wave function on ψ_Gauss of Eq. (3.31) from Exercise 3.4.4. This trial function
is still a Gaussian; however, this one depends on an offset parameter μ in
8 Strictly speaking, since our wave function resides in a function space corresponding to two particles, it is
more correct to write the one-particle part for particle 1 as the tensor product h ⊗ I, and I ⊗ h for particle
2, where I is the identity operator.
9 We saw in Exercise 3.4.5 that such a simple product could work rather well for one particle in two
dimensions. The case of two particles in one dimension is, of course, quite different, though.

addition to σ. We let the offset of one particle be the negative of the other. This
could motivate a spatial wave function of this form:

$$\Psi(x_1, x_2) = \psi_{\mathrm{Gauss}}(x_1; \sigma, \mu)\, \psi_{\mathrm{Gauss}}(x_2; \sigma, -\mu). \qquad (4.25)$$

Why is this one not admissible?10
And why is this one OK?

$$\Psi(x_1, x_2) = N \left[ \psi_{\mathrm{Gauss}}(x_1; \sigma, \mu)\, \psi_{\mathrm{Gauss}}(x_2; \sigma, -\mu) + \psi_{\mathrm{Gauss}}(x_1; \sigma, -\mu)\, \psi_{\mathrm{Gauss}}(x_2; \sigma, \mu) \right]. \qquad (4.26)$$

Is this a Slater determinant - or a permanent?


Here the factor N is imposed in order to ensure normalization; although V^Gauss
is normalized, ^GaussCv;^,/z) and ^GaussOGCG —m) will in general overlap; they
only become orthogonal in the limit /z —> oo.
(d) Actually, we may ignore the normalization factor N if we formulate the variational
principle, Eq. (3.27), slightly differently:

$$\langle E \rangle = \frac{\langle \Psi | H | \Psi \rangle}{\langle \Psi | \Psi \rangle} \geq E_0. \qquad (4.27)$$

Now, again minimize the energy expectation value with the trial wave function of
Eq. (4.26). This time we are dealing with two parameters, σ and μ. Your gradient
descent implementation from Exercise 3.4.4 may come in handy.
(e) Thus far we have been dealing with spatial wave functions that are exchange symmetric.
This is admissible for two-particle wave functions of the product form of
Eq. (4.8) where the spin part χ₂ is the spin singlet state, Eq. (4.11b). However, we
could choose χ₂ to be one of the triplets - with the consequence that the spatial
part must be exchange anti-symmetric:

$$\Psi(x_1, x_2) = N \left[ \psi_{\mathrm{Gauss}}(x_1; \sigma, \mu)\, \psi_{\mathrm{Gauss}}(x_2; \sigma, -\mu) - \psi_{\mathrm{Gauss}}(x_1; \sigma, -\mu)\, \psi_{\mathrm{Gauss}}(x_2; \sigma, \mu) \right]. \qquad (4.28)$$

Now, repeat the minimization from (d) with this wave function instead. How
does going from an exchange symmetric to an anti-symmetric spatial wave function
affect the energy estimate?
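
For part (b), the energy evaluation can be sketched along the following lines. This is our own minimal illustration, assuming ħ = m = 1 and simple rectangle-rule quadrature; the confining potential of Eq. (2.30) is not reproduced here and enters as a given array:

```python
import numpy as np

x = np.linspace(-10, 10, 512)      # grid; widen and densify to check convergence
h = x[1] - x[0]
V = np.zeros_like(x)               # placeholder: insert the potential of Eq. (2.30)

def psi_trial(x, sigma):
    """Normalized Gaussian centred at x = 0 (one common convention for Eq. (3.28))."""
    return (2 * np.pi * sigma**2)**-0.25 * np.exp(-x**2 / (4 * sigma**2))

def energy(sigma):
    """<E> of Eq. (4.23) for the product ansatz of Eq. (4.24)."""
    psi = psi_trial(x, sigma)
    # one-particle term: <psi| p^2/2 + V |psi>, finite-difference second derivative
    d2 = (np.roll(psi, -1) - 2*psi + np.roll(psi, 1)) / h**2
    e_one = h * np.sum(psi * (-0.5 * d2 + V * psi))
    # interaction term: double integral of |psi(x1)|^2 W(x1, x2) |psi(x2)|^2
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    W = 1.0 / np.sqrt((X1 - X2)**2 + 1)          # Eq. (4.21) with W0 = 1
    rho = psi**2
    return 2 * e_one + h**2 * rho @ W @ rho

sigmas = np.linspace(0.3, 3.0, 50)               # crude scan standing in for a
print(min((energy(s), s) for s in sigmas))       # proper minimization
```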

In Fig. 4.5 we display the wave function approximations corresponding to the two
first spatially exchange-symmetric assumptions in Exercise 4.4.3 - the ones that minimize
the energy expectation value. We also plot the actual ground state wave function -
along with the result of the approach we will address in Exercise 4.4.4. Due to the
mutual repulsion, Eq. (4.21), the two particles tend to be a bit separated. This is clearly
seen in Fig. 4.5(b) and (d), and more so in (d). The simple product assumption of
Eq. (4.24) is not able to encompass this at all; in fact, it predicts a rather high probability
for finding the two particles on top of each other (x₁ = x₂). Despite this shortcoming,
and the crude assumption in Eq. (4.24), the energy prediction is not that bad; it's only

10 Assuming that μ ≠ 0, that is.

" Figure 43 | Plots of ground state approximations for the problem in Exercise 4.4.3. All of them correspond to wave functions with
an exchange-symmetric spatial part. The corresponding energy estimates are written above each of them. As we see,
they all produce rather similar energy estimates, although the shape of the wave function estimates differ quite a bit.
(a) The product state in Eq. (4.24), and (b) the spatially exchange-symmetric assumption in Eq. (4.26); (c) the
approach of Exercise 4.4.4; (d) the actual ground state wave function.

off by 4%. The assumption of Eq. (4.26) does allow the particles to be apart and,
correspondingly, the energy estimate is better. The energy estimate of -0.974 energy units
is only 1.2% too high.
Actually, this is rather typical - the ground state energy estimate is surprisingly good
although the trial wave function looks a bit off. Is there any particular reason why it
should be so? Should the energy expectation value be less sensitive to adjustments of
the wave function for the ground state than any other state? What does the variational
principle say on the matter?
Both the assumptions of Eqs. (4.24) and (4.26) restrict the wave functions to consist
of Gaussians. In Exercise 4.4.4 we will lift this restriction.
But before we dedicate ourselves to that task, let's have a look at the exchange
anti-symmetric assumption of Eq. (4.28). The wave function with σ and μ values that
minimize the energy expectation value is shown in Fig. 4.6. It estimates an energy,

" Figure 4.2 | (a) The absolute square of the wave function that minimizes the energy assuming the spatial form of Eq. (4.28).
(b) The actual minimal energy eigenstate which is exchange anti-symmetric. Note that in all cases, due to the
anti-symmetry, the wave function is zero whenever xi = X2, in accordance with what we saw in Exercise 4.1.2.

or rather, sets an upper bound for it, which reads -0.7235 energy units, which is way
off compared to the ground state energy of -0.9858 energy units. However, this is not
a fair comparison; our anti-symmetric wave function is not really an estimate of the
ground state.
The exact ground state is in fact the product of a spin and a spatial part, Eq. (4.8),
with a symmetric spatial part - combined with the spin singlet. The spatially anti­
symmetric estimate in Fig. 4.6(a) is actually an estimate of a spin-symmetric (triplet)
excited state - the one with the lowest energy within this symmetry. The true excited
state - with a fairly accurate numerical energy estimate - is displayed in Fig. 4.6(b). We
believe it is fair to say that the estimate based on Eq. (4.28) does rather well after all.
This goes to show how important spin is - also for the energy structure of quantum
systems with no spin dependence whatsoever in their Hamiltonians. The spin degree
of freedom simply cannot be ignored for identical particles, due to the requirement on
exchange symmetry. This, in turn, is quite decisive for the energy that comes out.
Now, as promised, let’s get rid of the restriction that the single-particle trial functions
have to be Gaussians.

4.4.4 Exercise: Self-Consistent Field

We revisit the problem of Exercise 4.4.3. We again assume that the ground state can
be written as a product state with two identical wave functions,11 as in Eq. (4.24). But
instead of fixing the shape of the one-particle wave function as a Gaussian, or any other
specific form for that matter, we here determine it in an iterative manner.
First, we fix an initial guess for a one-particle trial function ψ_trial(x₁) for the first
particle. Next we assume that the other particle ‘feels’ the first one via a static charge

11 At the risk of being excessively explicit about this, it can’t. We can only hope that this is a decent
approximation.

distribution set up by its wave function. This motivates an additional one-particle
potential felt by particle 2, set up by particle 1:

$$V_{\mathrm{eff}}(x_2) = \int_{-\infty}^{\infty} |\psi_{\mathrm{trial}}(x_1)|^2\, W(x_1, x_2)\, dx_1, \qquad (4.29)$$

where the interaction W is given in Eq. (4.21).12 In this scenario, the second particle

will also have the same effect on the first particle. With this, we can set up a one-particle
Hamiltonian for each particle with this addition, diagonalize it and replace our
single-particle wave function ψ_trial with the normalized ground state of this effective
one-particle system. In the particularly simple product state ansatz, Eq. (4.24), there is only
one single-particle trial function in play at all times.
In other words, we set out to solve the effective one-particle Schrödinger equation:

$$\left[ h + V_{\mathrm{eff}}(x) \right] \psi(x) = \varepsilon\, \psi(x). \qquad (4.30)$$

Note that this Schrödinger-like equation, contrary to all other examples we have seen
thus far, is not actually a linear differential equation anymore since V_eff in fact depends
on ψ itself.
So how do we go about finding a ψ that produces an effective potential which, in
turn, renders itself as the ground state of the effective Hamiltonian? The answer is: we
iterate.
We start off with some educated guess for ψ_trial and determine the effective potential
set up by this one-particle wave function, Eq. (4.29). Next, we construct the effective
Hamiltonian, diagonalize it and obtain a new ψ_trial by picking the ground state of this
effective one-particle Hamiltonian. In going from one iteration to the next, we replace
one ψ_trial with another, which leads to an altered effective potential which, in turn,
alters ψ_trial and so on. We repeat this iteration procedure until ψ_trial does not really
change anymore - until the system is self-consistent.
Finally, when we are pleased with our ψ_trial, the energy expectation value of our
two-particle trial function may, as before, be calculated as

$$\langle E \rangle = \langle \Psi | \left( h_1 + h_2 + W \right) | \Psi \rangle, \qquad (4.31)$$

where Ψ(x₁, x₂) = ψ_trial(x₁) ψ_trial(x₂). For the sake of clarity: the one-particle parts
of the above Hamiltonian, h₁ and h₂, which, respectively, affect particles 1 and 2
exclusively, do not contain the effective potential in Eq. (4.29) - these are the actual
one-particle Hamiltonians.13
Implement this iteration procedure and use it to estimate the two-particle ground
state energy. As your starting choice for the one-particle wave function ψ_trial you may
very well make our usual choice and take it to be a normalized Gaussian centred
around x = 0, Eq. (3.28). In our usual grid representation, the effective potential V_eff
is represented as a diagonal matrix - just like the ‘ordinary’ potential V. For each
diagonal element of the V_eff matrix, you must calculate an integral. You can monitor the
12 The particles' charges enter via the parameter W₀ in Eq. (4.21).
13 Having said this, we should also say that the expression in Eq. (4.31) could be written rather compactly
using V_eff.

convergence by checking the lowest eigenvalue in Eq. (4.30) for each iteration; when
it no longer changes much, we may stop iterating and calculate our energy estimate,
Eq. (4.31). (A minimal sketch of such an iteration loop follows after this exercise.)
Is the energy estimate you found lower or higher than the estimate you found in
Exercise 4.4.3(b)? Does this alone enable you to say which estimate is the best?
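
A minimal sketch of such a self-consistency loop could look as follows. Here the one-particle Hamiltonian matrix h0 - kinetic energy plus the confining potential in our usual grid representation - is assumed to be constructed already; the function name and tolerances are our own choices:

```python
import numpy as np

def scf_ground_state(h0, x, h, W0=1.0, tol=1e-8, max_iter=200):
    """Iterate Eq. (4.30) to self-consistency; return the energy of Eq. (4.31)."""
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    W = W0 / np.sqrt((X1 - X2)**2 + 1)           # interaction, Eq. (4.21)
    psi = np.exp(-x**2 / 2)                      # initial guess: a Gaussian
    psi /= np.sqrt(h * np.sum(psi**2))
    eps_old = np.inf
    for _ in range(max_iter):
        v_eff = h * (W @ psi**2)                 # Hartree potential, Eq. (4.29)
        eps, vecs = np.linalg.eigh(h0 + np.diag(v_eff))
        psi = vecs[:, 0] / np.sqrt(h)            # grid-normalized ground state
        if abs(eps[0] - eps_old) < tol:          # monitor the lowest eigenvalue
            break
        eps_old = eps[0]
    # Eq. (4.31): the *actual* one-particle Hamiltonians, without V_eff
    e_one = h * psi @ (h0 @ psi)
    return 2 * e_one + h**2 * psi**2 @ W @ psi**2
```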

The approach of Exercise 4.4.4 is a simple example of a Hartree-Fock calculation.
Above we made a rather sketchy argument in which we assumed single-particle wave
functions to set up static charge distributions. However, the method has a much more
solid theoretical foundation: it is the closest approximation we can have to the ground
state energy under the assumption that the state is a single Slater determinant with
orthonormal single-particle wave functions.
The original method proposed by Douglas Hartree was even simpler; it minimized
the energy under the assumption that the wave function was a product state, Eq. (4.19).
Vladimir Fock's contribution was to restore the missing exchange anti-symmetry by
taking a Slater determinant, Eq. (4.20), as the starting point instead. This, in turn,
introduces an additional effective potential which is referred to as the exchange potential.
Contrary to the effective potential of Eq. (4.29), which is called the Hartree
potential or the direct potential, it is not local in position; a matrix representation of the
exchange term on a numerical grid would be dense, not diagonal. We will not indulge
ourselves in the technical details in this regard, save to say that the exchange potential
does not appear in our example because our one-particle wave function really is two
one-particle states - with identical spatial parts and opposite spins.
The Hartree-Fock wave function of Exercise 4.4.4 is shown in Fig. 4.5(c). Albeit
similar, we see that it differs a bit from the one above, which is fixed to be a product of
Gaussians. The ability to adjust the shape of the single-particle wave function results in
a slightly lower, and thus better, energy estimate. However, neither of the wave functions
in Fig. 4.5(a) and (c) are able to account for the fact that the repulsion, Eq. (4.21),
reduces the probability of finding both particles at the same spot. As the Hartree-Fock
method simply does not pick up this correlation between the particles, the difference
between the Hartree-Fock estimate and the actual ground state energy is often referred
to as the correlation energy.14 The fact that Egil Hylleraas (Fig. 3.7) was so successful
in predicting the helium ground state energy relied on how he was able to explicitly and
efficiently accommodate this correlation in his trial functions.
While the assumption that the ground state could be estimated by a single Slater
determinant clearly is a very restrictive one, the Hartree-Fock method often produces
decent energy estimates - in particular when the confining potential V(x) is strong com­
pared to the interaction between the particles. When the time-independent Schrodinger
equation was able to reveal the mystery behind the periodic table back in the day, the
Hartree-Fock method certainly had something to do with that.

14 It should be mentioned that the precise definition of the term correlation energy may differ in different
contexts.

One definite advantage of the Hartree-Fock method is that it allows us to tackle
the curse of dimensionality by treating a many-particle system as one in which each
particle has its own wave function - thus reducing an N-particle problem to N
one-particle problems. A dangerous pitfall, on the other hand, is that it may lead us into
the temptation of believing that the system actually is one in which each particle has
its own wave function. This is, however, the case only in the less interesting situation
in which the particles do not interact at all. As we discussed in Section 1.3, the true
many-body wave function is a lot more complicated.
Single-particle wave functions emerging from Hartree-Fock calculations are fre­
quently used as a starting point for more sophisticated methods, including various
flavours of the configuration interaction approach mentioned above and other similar
techniques such as the coupled cluster method.
5 Quantum Physics with Explicit Time Dependence

To some extent you may say that the dynamics we played around with in Chapter 2 is
rather trivial; since the Hamiltonian was not actually time dependent, the evolution of
the system was directly provided by Eq. (2.19) or, equivalently, Eq. (3.23). When our
Hamiltonian actually carries a time dependence, the time evolution must be determined
by more sophisticated means. As we will see, it does not have to be exceedingly difficult,
though. It may often be achieved as in Eq. (2.27) with slight modifications.
First, however, we will try to keep things simple by playing around with a system with
very few degrees of freedom: the spin projection of spin-1/2 particles. Later, things will
become a bit more complicated - and interesting - when we address systems for which
full spatial dependence comes into play, in addition to explicit time dependence in the
Hamiltonian.
There are several situations in which a quantum system is exposed to time-dependent
interactions. As we saw in Eq. (2.5), it may be introduced into the Schrödinger equation
by exposing the quantum system to an electromagnetic field, such as a laser field.1
Laser light differs from the electromagnetic radiation emerging from the Sun, for
instance, as it consists of one wavelength only and is coherent - the light beam as a
whole has a well-defined phase at each time and place.2
Explicit time dependence also emerges in a quantum system when an atom collides
with another particle with a fixed trajectory, or when the voltages across nano-devices,
such as quantum dots, are changed in time (see Fig. 5.1).
We will, for the most part, concentrate on the first case - quantum systems exposed
to external electromagnetic fields.

5.1 Dynamics with Spin-1/2 Particles

As mentioned, when a particle is confined in space, for instance as illustrated in Fig. 4.4,
it makes sense to consider only internal degrees of freedom - such as its spin, given by
the spinor χ - and disregard the spatial part of the full wave function, see Eq. (4.6), since
it does not change. With the spin projection being the only dynamical variable and an
external magnetic field B = [Bx,By,Bz] present, the full Hamiltonian is simply the

1 This acronym stands for light amplification by stimulated emission of radiation.


2 Actually, this quality is crucial for interference patterns such as the one seen in Fig. 1.1 to emerge.

Figure 5.1 | By tailoring semiconductor structures and imposing gates with tunable voltages, individual electrons may be trapped
in so-called quantum dots. Such structures are sometimes referred to as artificial atoms; just as in atoms, the energy of
the confined electrons is quantized, as indicated in the potential well in (c). Contrary to atoms, however, the energy
structure of quantum dots can be manipulated and tuned. The dots are also significantly larger than atoms; the bar in
(b) indicates 1 μm (10⁻⁶ m). The figure is taken from Ref. [3], with permission from Nature Customer Service Centre
GmbH.

interaction between spin and the magnetic field. This interaction is given in Eq. (4.15),
which we repeat here for convenience:

$$H = -g\frac{q}{2m}\, \mathbf{B} \cdot \mathbf{s} = -g\frac{q\hbar}{4m}\, \mathbf{B} \cdot \boldsymbol{\sigma} = -\boldsymbol{\mu} \cdot \mathbf{B}. \qquad (5.1)$$

Here we have introduced the magnetic moment operator:

$$\boldsymbol{\mu} = g\frac{q}{2m}\, \mathbf{s}. \qquad (5.2)$$
We apologize for the redundancy in names and quantities in these equations.
Although we are really interested in describing time-dependent interactions in this
chapter, we will start off with one that is not.

5.1.1 Exercise: Dynamics with a Constant Magnetic Field

Suppose now that the magnetic field in Eq. (5.1) is constant. In that case, with the
spinor written as in Eq. (4.13), the Hamiltonian is a 2 × 2 matrix,3

$$H = \frac{1}{2}\begin{pmatrix} -\epsilon & W \\ W^* & \epsilon \end{pmatrix}, \qquad (5.3)$$

where the elements ε and W are also constants.

(a) How are these constants related to the magnetic field and the parameters in
Eq. (5.1)?
Suppose we start out, at time t = 0, in the spin-up state,

$$\chi(t = 0) = \chi_\uparrow = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$

We are to determine the probability of measuring spin up at a later time t:


$$\left| \langle \chi_\uparrow | \chi(t) \rangle \right|^2 = \left| \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} a(t) \\ b(t) \end{pmatrix} \right|^2 = |a(t)|^2.$$

For simplicity, let's first assume that the magnetic field points along the x-axis, in
which case the Hamiltonian becomes proportional to σₓ.
(b) Under these conditions, explain why W must be real and show, by analytical means,
that

$$|a(t)|^2 = \cos^2\!\left( \frac{Wt}{2\hbar} \right) = \frac{1}{2}\left[ 1 + \cos\left( Wt/\hbar \right) \right]. \qquad (5.4)$$

You may determine a(t) from Eq. (2.19). The exponentiation may be achieved
either via diagonalization, as in Eq. (2.23), or by calculating the series, Eq. (2.21),
directly. The latter is likely to be more fun; if you make use of what you found in
Exercise 4.3.1(a), you will find that the Taylor series for the exponential with our
Hamiltonian may be written as a sum of two other Taylor series which may be
familiar.

3 This form is really fixed by Hermiticity. Hermiticity does not force the diagonal elements to be the negative
of each other. However, if they were not, this would only amount to a trivial phase factor - one that we
may safely ignore.

In a more general situation in which the magnetic field B is allowed to point in
any direction, the Hamiltonian may be written H = v · σ. In this case the matrix
exponential may still be done analytically by means rather similar to the above:4

$$\exp\left[ -i\, \mathbf{v} \cdot \boldsymbol{\sigma}\, t/\hbar \right] = \cos\left( vt/\hbar \right) I_2 - i \sin\left( vt/\hbar \right) \frac{\mathbf{v} \cdot \boldsymbol{\sigma}}{v}, \qquad (5.5)$$

where v = |v|.
(c) Verify that the Hamiltonian of Eq. (5.3) may be written

$$H = \frac{1}{2}\left( -\epsilon\, \sigma_z + \operatorname{Re} W\, \sigma_x - \operatorname{Im} W\, \sigma_y \right). \qquad (5.6)$$

Use this and Eq. (5.5) to show that for a general situation in which ε is not
necessarily zero and W not necessarily real,

$$|a(t)|^2 = \frac{1}{\epsilon^2 + |W|^2}\left[ \epsilon^2 + |W|^2 \cos^2\!\left( \frac{\sqrt{\epsilon^2 + |W|^2}}{2\hbar}\, t \right) \right]. \qquad (5.7)$$

(d) According to Eq. (5.7), what does it take to achieve a complete spin flip, a(t) = 0?
What restrictions must we impose on the parameters ε, W and the duration t of
the interaction to make this happen?
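
Equation (5.7) is also easy to cross-check numerically. Here is a small sketch - ours - in which the propagator of Eq. (2.19) is evaluated with a matrix exponential for arbitrary test values of ε and W (ħ = 1):

```python
import numpy as np
from scipy.linalg import expm

hbar, eps, W = 1.0, 1.0, 0.5 + 0.3j              # arbitrary test values
H = 0.5 * np.array([[-eps, W], [np.conj(W), eps]])

t = 3.7
chi = expm(-1j * H * t / hbar) @ np.array([1.0, 0.0])   # Eq. (2.19)

omega2 = eps**2 + abs(W)**2
analytic = (eps**2 + abs(W)**2 * np.cos(np.sqrt(omega2) * t / (2*hbar))**2) / omega2
print(abs(chi[0])**2, analytic)                  # these agree to machine precision
```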

As mentioned, this exercise was not entirely consistent with the name of the chap­
ter. So, for the next one, we’ll make sure to include time-dependent terms in our
Hamiltonian.

5.1.2 Exercise: Dynamics with an Oscillating Magnetic Field

In Exercise 5.1.1 we saw that with a constant magnetic field the spin projection
oscillates following a cosine. The dynamics is richer if we expose the particle to a
time-varying magnetic field. With a static field Bz in the negative z-direction and an
oscillating field B(t) in the xy-plane, the Hamiltonian may be written5

$$H = \begin{pmatrix} -\epsilon/2 & \hbar\Omega \sin(\omega t) \\ \hbar\Omega^* \sin(\omega t) & \epsilon/2 \end{pmatrix}. \qquad (5.8)$$

(a) With ħ = 1, set the parameters to ε = 1, Ω = 0.2 and ω = 1.1 and, again, take
your initial state to be χ↑. In other words, set a(t = 0) to 1 and b(t = 0) to 0. Then,
using an ODE solver you are familiar with, solve the Schrödinger equation

$$i\hbar \frac{d}{dt} \chi = H \chi, \qquad (5.9)$$

with χ given by Eq. (4.13) and H provided in Eq. (5.8).

4 You are more than welcome to prove this. In doing so, it may be useful to note that for a general vector v,
$(\mathbf{v} \cdot [\sigma_x, \sigma_y, \sigma_z])^2 = v^2$, so that $(\mathbf{v} \cdot \boldsymbol{\sigma})^{2n} = v^{2n}$ and $(\mathbf{v} \cdot \boldsymbol{\sigma})^{2n+1} = v^{2n+1}\, (\mathbf{v}/v) \cdot \boldsymbol{\sigma}$.
5 The factor ħ in the off-diagonal elements may appear somewhat odd at present. It is convenient to
introduce the factor here for reasons that, hopefully, will become apparent later on.

(b) Why can't we do as we did in Exercise 5.1.1 and solve the Schrödinger equation by
means of Eq. (2.19)?
(c) Plot the spin-up probability |a(t)|² as a function of time.
(d) As a check of your implementation, see that you reproduce Eq. (5.7) numerically
when you replace sin(ωt) by 1/2 in Eq. (5.8). (With W in Eq. (5.3) equal to ħΩ.)
(e) Shifting back to the time-dependent Hamiltonian, play around with the parameters
a bit. Are you able to obtain a more or less complete spin flip in this case even
when ε ≠ 0?
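
For part (a), any standard ODE machinery will do. A minimal sketch using SciPy's solve_ivp - with naming of our own and the parameter values stated above - could read:

```python
import numpy as np
from scipy.integrate import solve_ivp

hbar = 1.0
eps, Omega, omega = 1.0, 0.2, 1.1

def rhs(t, chi):
    """Right hand side of Eq. (5.9): d(chi)/dt = -i/hbar H(t) chi."""
    f = hbar * Omega * np.sin(omega * t)
    H = np.array([[-eps/2, f], [np.conj(f), eps/2]])
    return -1j / hbar * (H @ chi)

sol = solve_ivp(rhs, (0, 200), np.array([1, 0], dtype=complex), max_step=0.05)
prob_up = np.abs(sol.y[0])**2        # |a(t)|^2, to be plotted against sol.t
```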

Although this exercise addresses a rather simple system, it is frequently used to
describe more advanced ones as well. The Hamiltonian of Eq. (5.8) - and simple
extensions of it - is for instance often used to model atoms and ions exposed to laser
fields. This is admissible if the dynamics somehow involves transitions between two
atomic/ionic states only.
Actually, such simulations tend to be made even simpler by substituting the time­
dependent Hamiltonian (5.8) with the static one in Eq. (5.3). If you find the notion of
describing the interaction with a time-dependent field by means of a time-independent
Hamiltonian rather odd, we can only agree. There is a trick, however, which does
allow you to do so under certain conditions. The trick is called the rotating wave
approximation.

5.1.3 Exercise: The Rotating Wave Approximation

Our starting point is exactly the same as in Exercise 5.1.2 - the time-dependent
Schrödinger equation for a spin-1/2 particle in an external field, whose Hamiltonian is
given by Eq. (5.8).

(a) First, we define an alternative representation of the spin state by imposing a unitary
transformation, χ' = Uχ, where

$$U = \begin{pmatrix} e^{-i\epsilon t/2\hbar} & 0 \\ 0 & e^{+i\epsilon t/2\hbar} \end{pmatrix}. \qquad (5.10)$$

Verify that U is, in fact, unitary: U† = U⁻¹.

(b) The state now follows a Schrödinger equation with the modified Hamiltonian

$$H' = \begin{pmatrix} 0 & \hbar\Omega \sin(\omega t)\, e^{-i\epsilon t/\hbar} \\ \hbar\Omega^* \sin(\omega t)\, e^{+i\epsilon t/\hbar} & 0 \end{pmatrix}. \qquad (5.11)$$

Verify this by setting up the time-dependent Schrödinger equation, Eq. (5.9), with
the Hamiltonian of Eq. (5.8) and replacing χ with U†χ'.
Hint: Remember that also the left hand side of the Schrödinger equation will
contribute to H'.

(c) If we write sin(ωt) as (e^{iωt} - e^{-iωt})/2i, the non-zero elements of the Hamiltonian
in Eq. (5.11) will acquire two terms. If ħω ≈ ε, one of these will oscillate slowly
and the other one will oscillate quite rapidly in comparison. Due to this, the latter
may often be neglected.6
Show that this approximation leads to the effective Hamiltonian

$$H' \approx \frac{\hbar}{2} \begin{pmatrix} 0 & -i\Omega\, e^{-i\delta t} \\ i\Omega^*\, e^{+i\delta t} & 0 \end{pmatrix}, \qquad (5.12)$$

where the parameter

$$\delta = \epsilon/\hbar - \omega. \qquad (5.13)$$

(d) Along the same lines as in (b), impose the unitary transformation

$$V = \begin{pmatrix} e^{+i\delta t/2} & 0 \\ 0 & e^{-i\delta t/2} \end{pmatrix} \qquad (5.14)$$

on χ' to arrive at yet another way of representing the spin state, χ'' = Vχ'.
Show that this leads to the time-independent Hamiltonian

$$H_{\mathrm{RWA}} = \frac{\hbar}{2} \begin{pmatrix} -\delta & -i\Omega \\ i\Omega^* & \delta \end{pmatrix}. \qquad (5.15)$$

This is, in fact, the same as the Hamiltonian of Eq. (5.3) - with the correspondence
δ ~ ε/ħ and Ω ~ iW/ħ.
(e) We have now imposed two transformations on χ; the Hamiltonian of Eq. (5.15)
relates to the state

$$\chi'' = V\chi' = VU\chi. \qquad (5.16)$$

Verify that the components of χ'' and χ satisfy |a''(t)|² = |a(t)|² and |b''(t)|² =
|b(t)|². This means that we do not need to transform back to the original χ-representation
in order to determine the spin-up and spin-down probabilities.
(f) Check whether our analytical solution for the constant Hamiltonian, Eq. (5.7),
actually agrees, more or less, with your numerical solution from Exercise 5.1.2.
How large must δ be before the rotating wave approximation breaks down - before
our analytical, approximate solution starts deviating significantly from the correct
one? Also, try to figure out, by numerical tests, how the validity of the rotating
wave approximation is affected by the magnitude of Ω.
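
For part (f), the comparison amounts to a few lines on top of the solver sketch after Exercise 5.1.2, reusing hbar, eps, Omega, omega and sol from there; the analytic curve is Eq. (5.7) with ε → ħδ and |W| → ħ|Ω| (this is Eq. (5.19) below):

```python
# Rotating wave prediction for the spin-up probability
delta = eps / hbar - omega                        # detuning, Eq. (5.13)
Og2 = delta**2 + abs(Omega)**2                    # (generalized Rabi frequency)^2
rwa = (delta**2 + abs(Omega)**2 * np.cos(np.sqrt(Og2) * sol.t / 2)**2) / Og2
print(np.max(np.abs(rwa - np.abs(sol.y[0])**2)))  # maximal deviation from numerics
```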

As mentioned, a spin-1/2 particle in a constant magnetic field is not the only system
for which the Hamiltonian of Eq. (5.3) is encountered; it has several applications. For
an atom or an ion driven between two states by a laser field, with very little population
of any other states, Eq. (5.8) is a decent approximation. If the angular frequency ω is

6 Simply put, its contribution integrates to zero - pretty much in the same way as ∫ f(x) sin(kx) dx ≈ 0 if
f(x) changes little during one period of the sine function and the integration interval extends over several
periods.

such that it is resonant or near-resonant with the transition between the two states,
ħω ≈ ε, and the coupling strength |Ω| is comparatively low, we can approximate such
a system by the constant Hamiltonian of Eq. (5.3) or, equivalently, Eq. (5.15), via the
rotating wave approximation. In fact, several works exploit this in order to study how
matter may be manipulated by electromagnetic radiation.
In such a context, |Ω|, or |W|/ħ, is frequently referred to as the Rabi frequency,
named after Isidor Isaac Rabi, who won the 1944 Nobel Prize in Physics for the
discovery of nuclear magnetic resonance, which we will address in Chapter 6. The Rabi
frequency is related to the so-called dipole transition element:

$$\Omega = -\frac{q}{\hbar} \langle \psi_0 | \mathbf{E}_0 \cdot \mathbf{r} | \psi_1 \rangle, \qquad (5.17)$$

where q is the charge of the particle, ψ₀ and ψ₁ are the wave functions corresponding to the
two atomic states involved, and E₀ gives the amplitude and direction of the oscillating
electric field set up by the laser. The δ parameter in Eq. (5.13), which is called the
detuning, is the difference between the energy gap between the two states, ε, and the
energy of one photon, ħω, divided by ħ. The quantity

$$\Omega_g = \sqrt{\delta^2 + |\Omega|^2} \qquad (5.18)$$

is called the generalized Rabi frequency. This is the angular frequency that appears in
Eq. (5.7). If we rewrite this equation in terms of detuning and Rabi frequencies, it reads

$$|a(t)|^2 = \frac{1}{\Omega_g^2}\left[ \delta^2 + |\Omega|^2 \cos^2\!\left( \Omega_g t/2 \right) \right]. \qquad (5.19)$$

We will now add one more particle. You have been warned about the curse of
dimensionality earlier. However, the spin projection of a spin-1/2 particle resides in
a two-dimensional space. In going from one such particle to two, the dimensionality
increases from 2¹ to 2², which really isn't much of a curse.

5.1.4 Exercise: Spin Dynamics with Two Spin-1/2 Particles

In the model we study in this exercise, each of the two particles is exposed to the same
magnetic fields as in Exercise 5.1.2. We also introduce an interaction between the par­
ticles; the spin of each particle has a certain influence on the other.7 The interaction
energy differs by u' depending on whether the spins are aligned or anti-aligned. In a
physical implementation, the magnitude of u' would depend on the distance between
the two fixed spin-1/2 particles - and on the nature of the material in which the system
is embedded.

7 This interaction comes about because each particle with spin sets up a small magnetic field which affects
the spin of the other.

We write the total two-particle spin state χ₂ as in Eq. (4.10) - a linear combination
of the product states in Eqs. (4.9). With reference to this basis, we may write χ₂ as a
vector in ℂ⁴:

$$\chi_2 = a\, \chi_\uparrow^{(1)}\chi_\uparrow^{(2)} + b\, \chi_\uparrow^{(1)}\chi_\downarrow^{(2)} + c\, \chi_\downarrow^{(1)}\chi_\uparrow^{(2)} + d\, \chi_\downarrow^{(1)}\chi_\downarrow^{(2)} \;\rightarrow\; \begin{pmatrix} a(t) \\ b(t) \\ c(t) \\ d(t) \end{pmatrix}, \qquad (5.20)$$

and our Hamiltonian becomes a 4 × 4 matrix:

$$H = h^{(1)} + h^{(2)} + u'\, \mathbf{s}^{(1)} \cdot \mathbf{s}^{(2)} \qquad (5.21a)$$

$$= \begin{pmatrix} -\epsilon + u & A & A & 0 \\ A^* & -u & 2u & A \\ A^* & 2u & -u & A \\ 0 & A^* & A^* & \epsilon + u \end{pmatrix}, \qquad (5.21b)$$

where h^{(k)} is the single-particle Hamiltonian of Eq. (5.8) for particle k. We have
introduced the quantities A(t) = ħΩ sin(ωt) and u = u'ħ²/4 for convenience. Actually, while
the matrix in Eq. (5.21b) is accurate, we may be accused of being somewhat sloppy
for writing the full two-particle Hamiltonian as in Eq. (5.21a). The single-particle
parts should, as in Exercise 4.4.3, really be tensor/Kronecker products with the
identity operator for the other particle; h^{(1)} and h^{(2)} really mean h ⊗ I₂ and I₂ ⊗ h,
respectively. The spin-spin interaction is also given by tensor products: s^{(1)} · s^{(2)} =
(ħ/2)² (σₓ ⊗ σₓ + σᵧ ⊗ σᵧ + σ_z ⊗ σ_z).

(a) Take your initial state to be the one in which both spins point upwards and solve the
Schrödinger equation using your ODE solver of preference. You can use the same
values for ε, Ω and ω as in Exercise 5.1.2 and set u = 0.025. As functions of time,
plot the probability for both spins to point upwards, |a(t)|², and the probability for
finding the system in state χ↑⁽¹⁾χ↓⁽²⁾, |b(t)|², where a and b refer to Eq. (5.20).

(b) If our particles are identical, is the initial condition in (a) bona fide? Is this
an admissible state considering the exchange symmetry? For a general situation
in which our initial spin state has a well-defined exchange symmetry, does the
Schrödinger equation with the Hamiltonian of Eq. (5.21) maintain this symmetry?
Will the spin state necessarily have the same symmetry at a later time t?
Try to answer this question both by checking your numerical solution at different
times and, more theoretically, by studying the form of the Hamiltonian.
(c) Rerun your calculation using the last triplet state in Eqs. (4.11a) and the singlet
state, Eq. (4.11b), as initial conditions instead of χ₂(t = 0) = χ↑⁽¹⁾χ↑⁽²⁾. One of
these runs is rather boring, isn't it? Why is that?
Hint. Perhaps the lessons learned in the last parts of Exercise 2.6.2 may shed some
light on the matter?

(d) Now we lift the restriction that the particles are to be identical and take our initial
state to be χ↑⁽¹⁾χ↓⁽²⁾. Rerun your calculation with various values for the spin-spin
interaction strength u and see how this strength affects the spin dynamics.
How do the spin dynamics depend on u if you return to an exchange symmetric
initial state?
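
Setting up the 4 × 4 Hamiltonian is conveniently done with Kronecker products, exactly as spelled out above. A minimal sketch, with our own names and the parameter values from part (a):

```python
import numpy as np
from scipy.integrate import solve_ivp

hbar = 1.0
eps, Omega, omega, u = 1.0, 0.2, 1.1, 0.025
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# u' s1.s2 = u (sx(x)sx + sy(x)sy + sz(x)sz), with u = u' hbar^2 / 4
spin_spin = u * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))

def h_single(t):
    """One-particle Hamiltonian, Eq. (5.8)."""
    f = hbar * Omega * np.sin(omega * t)
    return np.array([[-eps/2, f], [np.conj(f), eps/2]])

def H(t):
    """Two-particle Hamiltonian, Eq. (5.21a), via Kronecker products."""
    h = h_single(t)
    return np.kron(h, I2) + np.kron(I2, h) + spin_spin

chi0 = np.array([1, 0, 0, 0], dtype=complex)      # both spins up
sol = solve_ivp(lambda t, y: -1j/hbar * (H(t) @ y), (0, 200), chi0, max_step=0.05)
prob_upup, prob_updown = np.abs(sol.y[0])**2, np.abs(sol.y[1])**2
```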

In Chapter 6 we will address quantum technologies. One prominent example of these
is the quantum computer. Instead of ‘traditional’ bits, zeros and ones, it uses quan­
tum bits as information units. A quantum bit, a qubit, is simply a dynamical quantum
system consisting of two states. Thus, the spin projection of a spin-1/2 particle may
very well constitute a qubit. And what we did in Exercises 5.1.2 and 5.1.4 could be
considered a simulation of something going on in a quantum computer.
Now we will turn to more complex dynamics - dynamics involving spatial wave
functions.

5.2 Propagation

The evolution of the wave function from a time t to a later time t + Δt may be written
as the action of an operator:

$$\Psi(t + \Delta t) = U(t, t + \Delta t)\, \Psi(t). \qquad (5.22)$$

This operator, U, is referred to as a propagator. We are already familiar with what
the propagator looks like with a time-independent Hamiltonian, see Eqs. (2.19) and
(2.27). We have also learned, however, that when H is time dependent, these equations
no longer hold. As mentioned, Eq. (2.27) may still be useful, though.

5.2.1 Exercise: Magnus Propagators of First and Second Order

(a) Show that the error in using Eq. (2.27) for the time evolution is proportional to
Δt² at each time step. To this end, differentiate Eq. (2.1) in order to express the
first three terms in a Taylor expansion of Ψ(t + Δt) and compare them to the first
three terms in an expansion of Eq. (2.27). See Eq. (2.20).
(b) Show that if you replace H(t) in Eq. (2.27) by $\bar{H} = \frac{1}{\Delta t}\int_t^{t+\Delta t} H(t')\, dt'$, the error
is proportional to Δt³.
Hint: Use Taylor expansion or the trapezoidal rule to approximate the integral.

Let's summarize:

$$\Psi(t + \Delta t) = e^{-i H(t) \Delta t/\hbar}\, \Psi(t) + \mathcal{O}(\Delta t^2), \qquad (5.23a)$$
$$\Psi(t + \Delta t) = e^{-i \bar{H}(t) \Delta t/\hbar}\, \Psi(t) + \mathcal{O}(\Delta t^3), \qquad (5.23b)$$

where $\bar{H}(t) = \frac{1}{\Delta t} \int_t^{t+\Delta t} H(t')\, dt'$.

So we get a significant improvement in accuracy if we replace the Hamiltonian with
its time average over the interval in question. Since the error - the local one, that is - is
of third order, the time-averaged Hamiltonian H̄ may safely be approximated by the
trapezoidal or midpoint rule:

$$\bar{H} \approx \frac{1}{2}\left( H(t) + H(t + \Delta t) \right) \approx H(t + \Delta t/2). \qquad (5.24)$$

Equations (5.23) are examples of Magnus propagators - specifically, Magnus propagators
of first and second order, respectively. They are named after the German-American
mathematician Wilhelm Magnus. Approximating the exact propagator,
Eq. (5.22), as an exponential with a, possibly, time-averaged Hamiltonian in this way
is convenient in several respects. One advantage is its norm-conserving property; although
we introduce errors when using Eqs. (5.23) with finite Δt, the norm of the wave
function is always preserved to machine accuracy. Another advantage is the fact that
they provide explicit schemes for resolving the time evolution - as opposed to implicit
schemes, in which you have to solve an equation to get from one time to the next.
There is a drawback with propagators of the form (5.23), however. Since the Hamiltonian
now changes in time, we cannot exponentiate it once and then use it repeatedly
as we did in Chapter 2. We must perform a full exponentiation at each time step. This
is quite costly; we really don't want to do that. Luckily, there are ways around it.

5.2.2 Exercise: Split Operators

(a) While the relation e^{a+b} = e^a e^b holds for any numbers a and b, why is this generally
not the case with operators or matrices, e^{A+B} ≠ e^A e^B?
(b) Use Taylor expansion to prove that

$$e^{(A+B)\Delta t} = e^{A\Delta t/2}\, e^{B\Delta t}\, e^{A\Delta t/2} + \mathcal{O}(\Delta t^3). \qquad (5.25)$$

When we want to approximate a Magnus propagator, Eq. (5.25) may be very useful
indeed as it allows us to treat the various parts of the exponentiation of the
Hamiltonian independently - and yet maintain a reasonable degree of precision.
In many cases, the Hamiltonian may be split into a time-dependent part and a
time-independent part:

$$H = H_0 + H'(t). \qquad (5.26)$$

This is, for instance, the situation for a quantum particle exposed to a strong laser field.
The time-independent part, H₀, is just T + V as before, while the laser interaction, in
one dimension, may be written

$$H'(t) = -q E(t)\, x. \qquad (5.27)$$



You may recognize this interaction from Eq. (5.17); q is the charge of the particle and
E(t) is the time-dependent strength of the electric field of the laser.8
With the Hamiltonian in Eq. (5.26), with H' time averaged, A = -iH₀/ħ and
B = -iH̄'/ħ, we may estimate the wave function at the next time step as

$$\Psi(t + \Delta t) = e^{-i H_0 \Delta t/2\hbar}\, e^{-i \bar{H}' \Delta t/\hbar}\, e^{-i H_0 \Delta t/2\hbar}\, \Psi(t) + \mathcal{O}(\Delta t^3). \qquad (5.28)$$

This means that the time-independent H₀ part may be exponentiated initially - once
and for all - and used repeatedly. The time-dependent part, H', on the other hand, must
be exponentiated anew for each time step. This is not necessarily bad news, however,
since quite often this is easily implemented. For instance, the interaction in Eq. (5.27)
is diagonal in a position representation. Thus, exponentiating it is trivial - you just
exponentiate the diagonal elements directly.
It should be mentioned that both Magnus propagators and split-operator techniques
of higher order than three in (local) error are well established. Third order will do for
our purposes, however.
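
As a concrete illustration of Eq. (5.28), here is a minimal sketch of one propagation step, assuming our usual grid representation in which H₀ is a matrix and H'(t) = -qE(t)x is diagonal; the function names are ours:

```python
import numpy as np
from scipy.linalg import expm

def make_stepper(H0, x, q, E, dt, hbar=1.0):
    """Return a function advancing Psi one time step according to Eq. (5.28).

    H0: matrix for T + V;  x: grid points;  E: callable field strength E(t).
    """
    U0_half = expm(-1j * H0 * dt / (2 * hbar))    # computed once, reused
    def step(psi, t):
        Hp = -q * E(t + dt/2) * x                 # diagonal of H', midpoint rule
        U1 = np.exp(-1j * Hp * dt / hbar)         # elementwise exponential
        return U0_half @ (U1 * (U0_half @ psi))
    return step
```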

5.2.3 Exercise: Photoionization

The time is ripe to actually solve the time-dependent Schrödinger equation for a spatial
wave function again. We will do so for a one-dimensional model of an atom exposed
to a short, intense laser pulse:

$$E(t) = \begin{cases} E_0 \sin^2(\pi t/T) \sin(\omega t), & 0 \le t \le T, \\ 0, & \text{otherwise.} \end{cases} \qquad (5.29)$$

We set the charge q in Eq. (5.27) to -1 in our convenient units. Here E₀ is the maximum
field strength, T is the duration of the pulse and ω is the central angular frequency of
the laser. Although we do not really consider photons within this approximation, it still
makes sense to relate the angular frequency ω to the photon energy via E_γ = ħω = hf,
see Eq. (1.4).
You can choose the field parameters ω = 1, E₀ = 1 and T = 10 τ_cycl, where τ_cycl =
2π/ω is a so-called optical cycle (see Fig. 5.2). Physically, this would correspond to a
very intense pulse in the extreme ultraviolet region.
Also, apply a static potential of the form of Eq. (2.30) with Vo = — 1, s = 5 and
w = 5. As this exercise involves a number of parameters - both numerical and physical
ones - we summarize them in the table below. The set of parameters, which includes a
few more than the ones listed above, is given in our usual units - apart from the pulse
duration T, which is given in terms of optical cycles.

E₀    ω    T           ΔT    V₀    w    s    Δt    L     n+1

0.5   1    10 τ_cycl   25    -1    5    5    0.1   400   512

8 Here we have neglected the fact that the field E would also be position dependent, not just time dependent.
Often this is admissible since the extension of the system in question, such as an atom or a molecule,
is much smaller than the wavelength of the electric field. This approximation is referred to as the dipole
approximation.

" Figure 5.1 | In Exercise 5.23 we expose our model atom to a time-dependent electric field with this shape. Both the amplitude
Eo and the angular frequency co are 1 in our units. The pulse duration T corresponds to 10 so-called optical cycles,
T = 10 • 2tt/w.

We will adjust some of the numerical parameters in the following. The fact that we
suggest starting out with the number of grid points N = n + 1 = 2⁹ reveals our
preference when it comes to approximating the kinetic energy here.9
Just as we ignored spatial degrees of freedom above, we will ignore spin in this
context. This is admissible if we start out in a product state, Eq. (4.6), since our
Hamiltonian does not depend on spin.

(a) By using either propagation in imaginary time or direct diagonalization, find the
ground state of the time-independent system - the eigenfunction of H₀ with the
lowest energy. Let this be your initial state, Ψ(x; t = 0). Check that the ground
state energy is well converged in terms of the number of grid points - that it does
not change much if you increase n.
(b) Implement the propagator in Eq. (5.28) and use it to simulate your system for the
entire duration of the laser pulse - and for some time ΔT beyond. As in Chapter 2,
plot |Ψ(x; t)|² as you go along. Since most of the wave packet will remain
bound - only a limited fraction is liberated by the laser pulse - you may want to
adjust the upper limit of your y-axis so that you see the outgoing waves clearly.
An upper limit of 10⁻² units should be adequate here. Check that the grid size
L = 400 length units is large enough to avoid the artefacts we saw in Exercise 2.3.2.
For the numerical time step, you can set Δt = 0.1 initially. Note the difference
between this time parameter and ΔT, which is the duration of the simulation after
the pulse, Eq. (5.29), has passed. The latter can be fixed at 25 time units.
(c) Our initial grid is a rather crude one. Increase the number of grid points N and
rerun your simulation to check for convergence. Also, rerun your simulation with

9 In plain words, we suggest using the FFT version.



smaller time step Δt to check whether it has been set small enough. Why was this
not necessary when we did similar things in Chapter 2, by the way?

(d) How much of your wave packet has actually left the region of the confining poten­
tial? Or, in other words, what is the probability for our model atom to be ionized by
the laser pulse? This probability can be found by means similar to the way we deter­
mined transmission and reflection probabilities in Exercise 2.4.1. Just make sure
that no bound part of the wave function contributes to our ionization probability
estimate.
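
The bookkeeping in part (d) boils down to integrating |Ψ|² outside the region of the potential. A minimal sketch, with a cut-off distance d of our own choosing:

```python
import numpy as np

def ionization_probability(psi, x, d):
    """Integrate |Psi|^2 over |x| > d, where V(x) is negligible."""
    h = x[1] - x[0]
    return h * np.sum(np.abs(psi[np.abs(x) > d])**2)
```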

What we just did is a rather direct approach to determining the ionization probabil­
ity. The term ionization refers to the fact that, after tearing loose an electron from an
atom, the remainder of the atom is charged - it is an ion.
In the following we will try to arrive at the same probability in a slightly different
way. We will also see how we may extract additional information about the unbound
part of the wave packet.

5.3 Spectral Methods

Spectral methods represent a rather general approach to solving partial differential
equations. Arguably, they are particularly well suited to quantum physics. One reason
for this is the fact that the Schrödinger equation is linear; the Hamiltonian is a linear
operator which transforms the wave function to the same space. Consequently the wave
function may be written as a linear combination of a set of basis states at all times:

$$\Psi(t) = \sum_n a_n(t)\, \varphi_n. \qquad (5.30)$$

Now, if we have a basis, {φᵢ}, for the vector/function space in which our wave function
resides, and an inner product, such as the one defined in Eq. (1.24), we may use this to
turn the Schrödinger equation, which really is a partial differential equation, a PDE,
into an ordinary differential equation, an ODE. This, in turn, will allow us to make use
of ODE machinery we already know - such as Runge-Kutta methods, for instance.
We start out by expressing our wave function as in Eq. (5.30) and inserting it into
the Schrödinger equation, Eq. (2.1). We assume our basis to be discrete/countable and
finite. And, for now, we assume the basis functions φₙ(x) to be time independent and
keep all time dependence in the coefficients aₙ(t). Next, we impose inner products with
each and every one of the basis states from the left on both sides of the equation. As
we will see in the next exercise, this produces a set of coupled differential equations in
time - and time only.

5.3.1 Exercise: The Matrix Version

(a) Explain how the procedure described above leads to the following set of equations:

$$i\hbar \sum_n \langle \varphi_k | \varphi_n \rangle\, a_n'(t) = \sum_n \langle \varphi_k | H | \varphi_n \rangle\, a_n(t). \qquad (5.31)$$

(b) Explain how this may be written in matrix form as

$$i\hbar\, S\, \frac{d}{dt}\mathbf{a} = H\mathbf{a}. \qquad (5.32)$$

What do the elements of the matrices S and H look like?
(c) Why does the above equation simplify when the basis we use is orthonormal? See
Eq. (3.19).

In a practical implementation, it is not a prerequisite that the basis states used to
represent our wave function in Eq. (5.30) are orthonormal. Non-orthogonal basis functions,
such as Gaussians or so-called B-splines, are frequently applied. However, if the
basis functions do fulfil Eq. (3.19), the matrix on the left hand side of Eq. (5.32),
which is called the overlap matrix, is just the identity matrix and the left hand side
of Eq. (5.32) becomes simply iħ a'(t). We exploit this advantage if we take our basis
to be the eigenstates of the time-independent part of the Hamiltonian H, that is, the
H₀ of Eq. (5.26). As we have stated earlier, particularly explicitly in Section 3.3, the
Hamiltonian is Hermitian, and thus we can always construct an orthonormal basis
consisting of its eigenstates. Another advantage of this choice is that H₀ becomes a
diagonal matrix in this representation.
And there is even a third advantage. Such a choice of basis often makes analysing
the wave function rather straightforward. After the interaction is over, when t > T,
our Hamiltonian is, again, time independent and our wave function evolves according
to Eq. (3.23):

$$\Psi(t > T) = \sum_n a_n(t = T)\, e^{-i\varepsilon_n (t - T)/\hbar}\, \varphi_n. \qquad (5.33)$$

With φₙ being the nth eigenstate of H₀, see Eq. (3.18), an energy measurement would
provide the result εₙ with probability

$$P(\varepsilon_n) = \left| \langle \varphi_n | \Psi(t > T) \rangle \right|^2 = |a_n(T)|^2, \qquad (5.34)$$

see Eq. (2.41).


With the normalized eigenstates of H₀ as our basis, the remaining equation to solve
is

$$i\hbar \frac{d}{dt}\mathbf{a} = \left( \mathrm{Diag}(\varepsilon_0, \varepsilon_1, \ldots) + H' \right)\mathbf{a}, \qquad (5.35)$$

with the time-dependent matrix elements

$$H'_{kn} = \langle \varphi_k | H' | \varphi_n \rangle. \qquad (5.36)$$

Equation (5.35) can be solved by any ODE solver you know. But that is not to say that
all of them are equally well suited.

5.3.2 Exercise: Dynamics Using a Spectral Basis

Redo Exercise 5.2.3 using a spectral basis consisting of the eigenstates of H₀. You
are first to construct a numerical representation of H₀, diagonalize it to find the
eigenstates/eigenvectors, ensure proper normalization, and use these as your basis.
Next, construct a matrix representation of H'; this means that all matrix elements
of type ⟨φₖ|x|φₙ⟩, which are to be multiplied by -qE(t), must be calculated. This can
be done quite efficiently by means of matrix products between matrices involving the
eigenstates of H₀ and a diagonal matrix with the x-values in your grid. Do remember
that with the eigenstates of H₀ being normalized according to Eq. (2.10), you will need
to multiply this matrix product by the spatial increment h. As for the grid, the extension
and the number of grid points you found to be adequate in Exercise 5.2.3 are still
adequate.
In resolving the dynamics, use any ODE solver of your own preference. You can, for
instance, use some familiar Runge-Kutta method - preferably one with an adaptive
time step Δt, one that adjusts Δt at each step in order to ensure reasonable precision.
If, on the other hand, you do this from scratch with a fixed time step, chances are that
your time step needs to be very small. The backward-forward Euler method, or the
Crank-Nicolson method, may be more apt for this task:
$$\left[ I + \frac{i}{\hbar} H(t + \Delta t)\, \frac{\Delta t}{2} \right] \mathbf{a}(t + \Delta t) = \left[ I - \frac{i}{\hbar} H(t)\, \frac{\Delta t}{2} \right] \mathbf{a}(t). \qquad (5.37)$$

Here I is the identity operator. Along with split-operator techniques and other
approximations to Magnus-type propagators, this is a much applied method in quantum
dynamics. Note that this is an implicit scheme; you will need to solve an equation
by matrix inversion at each time step to obtain a(t + Δt) explicitly. This propagator
would also work with our usual grid representation. And, with proper
adjustments, the split-operator scheme would still work within this spectral approach.
This time, do not bother to display the evolution of the wave function; just determine
the vector with expansion coefficients at time t = T, a(T).
Now, the probability for your system to remain bound - to remain in an eigenstate
with negative energy - after the interaction may, according to Eq. (5.34), be calculated
as

$$P_{\mathrm{bound}} = \sum_{\varepsilon_n < 0} |a_n(T)|^2. \qquad (5.38)$$

Calculate this probability - and the corresponding ionization probability,

$$P_{\mathrm{ionized}} = 1 - P_{\mathrm{bound}}. \qquad (5.39)$$

Does it agree with what you found in Exercise 5.2.3?



Why is it that here, contrary to Exercise 5.2.3, we can stop our time propagation at
t = T?
What is the probability of remaining in the ground state? And what is the probability
of exciting the ‘atom’ - of promoting it from the ground state to another bound
state? Finally, if you remove some of the states with the highest eigenenergies from our
spectral basis, how does this affect the result?
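
One Crank-Nicolson step, Eq. (5.37), amounts to a single linear solve. A minimal sketch, with names of our own choosing:

```python
import numpy as np

def crank_nicolson_step(a, t, dt, H_of_t, hbar=1.0):
    """Solve Eq. (5.37) for a(t + dt); H_of_t returns the Hamiltonian matrix."""
    I = np.eye(len(a))
    lhs = I + 1j * dt / (2 * hbar) * H_of_t(t + dt)
    rhs = I - 1j * dt / (2 * hbar) * H_of_t(t)
    return np.linalg.solve(lhs, rhs @ a)
```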

We have now, in two equivalent ways, calculated the probability of liberating our
quantum particle from its confining potential. However, we may also want to know
how this liberated wave packet is distributed in energy or momentum. This is typically
something you would want to measure in an experiment; the energy distribution of
the liberated electrons carries a lot of information about the processes that lead to
ionization and about the structure of the atom or molecule in question. Since we have
already expressed our wave function as a linear combination of eigenstates of H₀, it
would seem like the natural choice to use the expansion coefficients corresponding
to positive energies, |aₙ(T)|² with εₙ > 0, for this purpose. However, it is not that
straightforward - for reasons we will get back to shortly.
As an alternative, we can determine how the liberated particle is distributed in
energy or momentum by calculating its Fourier transform, which we introduced in
Eq. (2.12). The variable of the Fourier-transformed wave function, k, is proportional
to the momentum p: k = p/ħ. So, with our usual choice of units, in which ħ = 1, they
are in fact the same. Now, in the same way that |Ψ(x)|² provides the position distribution
of the quantum particle, |Φ(k)|², with Φ(k) = $\mathcal{F}\{\Psi\}(k)$, provides its momentum
distribution. More precisely,

$$\frac{dP}{dp} = \hbar^{-1} \left| \Phi(p/\hbar) \right|^2 \qquad (5.40)$$

is the probability density for a momentum measurement to yield the result p. This is
the Born interpretation, which we first encountered in regard to Eq. (1.9), formulated
in terms of momentum instead of position.
In order to determine the momentum distribution for the liberated part of the
wave packet exclusively, we must first get rid of the part that is still bound. More­
over, we will be interested in the momentum or the energy the liberated particle has
asymptotically - away from the confining potential V(x). We don’t want the potential
to influence the momentum distribution. As we will see, these issues can be resolved
by propagating for a while, ΔT, beyond the duration of the laser interaction - as we did
in Exercise 5.2.3.

5.3.3 Exercise: Momentum Distributions

Here we will implement the procedure we just outlined - again using the system of
Exercise 5.2.3 as an example. When imposing the Fourier transform, using your FFT
implementation, you should be aware of two potential issues:

(1) as discussed in Section 2.2, standard FFT implementations typically permute the
k-vector in a somewhat odd way, and

(2) although the norm of Φ(k) in ‘k-space’ should be the same as the norm of Ψ(x) in
‘x-space’,

$$\int_{-\infty}^{\infty} |\Phi(k)|^2\, dk = \int_{-\infty}^{\infty} |\Psi(x)|^2\, dx, \qquad (5.41)$$

the numerical implementation you choose to use may not ensure this.

(a) Rerun your implementation from Exercise 5.2.3, and check that your wave func­
tion after the laser interaction has a clear distinction between a bound part and
an unbound part. Perhaps you would need a larger box size L and longer after­
propagation AT to ensure this. By inspecting your final wave function, determine
an appropriate value of d that separates the bound and the unbound parts; in other
words, any part of the wave function for which |x| > d can be interpreted as ion­
ized or liberated. This requires that the simulation is allowed to run long enough
for everything unbound to travel beyond x = ±d.
Your d value should be set large enough to avoid the influence of the confining
potential. In other words, V(x) ≈ 0 for all |x| > d.
(b) Set Ψ(x) to zero for all |x| < d and Fourier transform the remaining part. Plot the
corresponding momentum distribution:

$$\frac{\mathrm{d}P}{\mathrm{d}p} = \hbar^{-1}\, \big|\mathcal{F}\{\Psi(|x| > d)\}(k)\big|^2. \qquad (5.42)$$

(c) Next, repeat the simulation but this time with two consecutive laser pulses,

$$E_{\text{two}}(t) = E(t) + E(t - (T + \tau)), \qquad (5.43)$$

where E(t) refers to Eq. (5.29) and τ is the time delay between the pulses.
Afterwards, in the same manner as above, plot the momentum distribution of
the liberated particle at some time t > 2T + τ. In order to avoid numerical
artefacts, you may need to increase your box size further. Such an increase
should be accompanied by a corresponding increase in the number of grid points
n + 1.
This time, does the momentum distribution resemble twice what you got with
only one pulse? Should it? Does the distribution depend on the time delay τ? What
do we call this phenomenon?
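For concreteness, here is a minimal Python sketch of steps (a) and (b). It assumes that the final wave function Psi, the grid x and the spacing dx come from your own implementation of Exercise 5.2.3; the value of d is a placeholder to be read off from |Ψ|² by inspection.

```python
import numpy as np
import matplotlib.pyplot as plt

# Minimal sketch, assuming Psi, x and dx are provided by your simulation
# (hbar = 1); d must be chosen by inspecting the final |Psi|^2.
d = 20.0
PsiFree = np.where(np.abs(x) > d, Psi, 0.0)   # remove the bound part

Phi = np.fft.fftshift(np.fft.fft(PsiFree))    # undo the odd FFT ordering
k = np.fft.fftshift(2*np.pi*np.fft.fftfreq(len(x), d=dx))

# enforce Eq. (5.41): the norm in k-space must match the norm in x-space
Phi *= np.sqrt(np.trapz(np.abs(PsiFree)**2, x)/np.trapz(np.abs(Phi)**2, k))

plt.plot(k, np.abs(Phi)**2)                   # dP/dp, Eq. (5.42), hbar = 1
plt.xlabel('p'); plt.ylabel('dP/dp')
plt.show()
```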

This last exercise touches upon an issue we have shamelessly swept under the rug
thus far. In every example that addresses spectral bases, we have exclusively dealt with
countable and finite basis sets. Numerically, this makes sense because we, artificially,
impose a finite extension of our numerical grid with some boundary condition com­
bined with a finite number of grid points. Physically, however, it makes less sense. The
energy of a free particle is not quantized; eigenenergies of an unbound particle are con­
tinuous and unbound, not discrete and finite. To wit, the momentum distributions we
just calculated are continuous, and we could use them to obtain energy distributions that
would be continuous too. Correspondingly, Eq. (5.34) does not really work unless we

are dealing with discrete energies - the quantized part of the energy spectrum. For a
true energy continuum, a different formulation is called for.
Having said this, it should also be said that our numerical pseudo-continuum, which
is artificially discretized due to our boundary conditions and thus not actually a continuum,
may still be used to interpolate the actual, continuous energy distribution of
an unbound particle. So, with the proper treatment, the coefficients aₙ of Eq. (5.33)
corresponding to positive eigenenergies of H₀ could be used to estimate the actual
energy distribution quite precisely. As this is slightly cumbersome, we will not do so
here.

5.4 'Dynamics' with Two Particles

While most examples we have been playing with have remained within the safe confines
of a single particle in one dimension, we have allowed ourselves a couple of excursions.
We will allow ourselves one more and consider two particles - however, not really with
explicit time dependence this time. Actually, it’s not even time.
With two particles in one dimension represented by means of a discretized spatial
grid divided into n pieces, it is convenient to represent the wave function as a square
matrix:

$$\Psi(x_1, x_2) \;\rightarrow\; \begin{pmatrix} \Psi(x_0, x_0) & \Psi(x_0, x_1) & \cdots & \Psi(x_0, x_n) \\ \Psi(x_1, x_0) & \Psi(x_1, x_1) & \cdots & \Psi(x_1, x_n) \\ \vdots & \vdots & \ddots & \vdots \\ \Psi(x_n, x_0) & \Psi(x_n, x_1) & \cdots & \Psi(x_n, x_n) \end{pmatrix}. \qquad (5.44)$$

Admittedly, the notation is somewhat confusing here; on the left hand side, the index
on the position variable x refers to the particle, while on the right hand side it refers
to the grid points. In this matrix representation, the rows refer to particle 1 and the
columns to particle 2. Correspondingly, operators acting only on particle 1 may be
implemented as left multiplication by an (n + 1) x (n + 1) matrix, while operators
acting only on particle 2 correspond to right multiplication with the matrix.

5.4.1 Exercise: Symmetries of the 'I'-Matrix

Suppose the spatial two-particle wave function matrix of Eq. (5.44) corresponds to two
identical fermions. Which symmetry properties must the matrix fulfil?

Perhaps, as you were studying Fig. 4.5, you asked yourself, ‘Where does the “exact”
ground state wave function come from?’ Well, you are about to find out. In fact, you
are about to do it.

5.4.2 Exercise: The Two-Particle Ground State

Our starting point is, again, the system of Exercise 4.4.3. However, this time we will not
impose any specific shape of the wave function, such as Eq. (4.24) or (4.26). We only
require it to be symmetric.10
Now, with Ψ represented as a matrix, Eq. (5.44), why can the action of the full
Hamiltonian, Eq. (4.28), be written as

$$H\Psi \;\rightarrow\; h\Psi + \Psi h^T + W \odot \Psi? \qquad (5.45)$$

Here h is the matrix approximation of the single-particle part of the Hamiltonian, W is
a matrix with the interaction for each pair of grid points and ⊙ refers to element-wise matrix
multiplication - the Hadamard product, which we first addressed in Exercise 3.4.5(b).
Now, Eq. (5.45) is an excellent starting point for doing actual dynamics with two
particles. Here, however, we will settle for finding the ground state - by propagating, in
imaginary time, the procedure we first implemented in Exercise 3.4.3. In that exercise
we used a propagator of exponential form, see Eq. (3.30). However, as the imaginary
time procedure tends to be a rather friendly and stable method, numerically speaking,
we may attempt to simply implement it using Euler’s forward method:

$$\Psi(t + \Delta t) \approx \Psi(t) - H\Psi(t)\, \Delta t/\hbar. \qquad (5.46)$$

As before, if we renormalize our wave function at each ‘time’ step, as in Eq. (3.30), the
energy estimate may be obtained via the norm of the wave function at the next ‘time’
step. However, with a different ‘propagator’, the energy formula comes out somewhat
differently from what we used in Exercise 3.4.3. How?
Now, implement this method and see if you can arrive at the same wave function and
energy as the one shown in Fig. 4.5(d). In order to make it work with Euler’s forward
method, which your maths teacher may have warned you about, make sure not to be
too ambitious when it comes to numerics. You may get a reasonable energy estimate
using a box of size L = 10 with n + 1 = 128 grid points and a 'time' step of Δt = 10⁻³
in our usual units.
Also, try to construct the result shown in Fig. 4.6(b) - the lowest energy state for
a spatially exchange anti-symmetric wave function. In order to ensure that you don't
stray from the anti-symmetric path, enforce anti-symmetry from the outset and at each
‘time’ step: 4> 0.5(4, — 4<T); replace the wave function matrix with half the difference
between the matrix and its own transpose.
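To make this concrete, here is a minimal Python sketch of the procedure, assuming that the single-particle Hamiltonian matrix h, the interaction matrix W (with elements W(xᵢ, xⱼ)) and the grid spacing dx have already been constructed as in Exercise 4.4.3; it also shows how the one-particle parts act by left and right multiplication, as discussed below Eq. (5.44).

```python
import numpy as np

# Sketch with hbar = 1; h, W and dx are assumed to come from your own setup.
dt = 1e-3
rng = np.random.default_rng()
Psi = rng.random(W.shape)
Psi = 0.5*(Psi + Psi.T)        # exchange symmetric start
                               # (use 0.5*(Psi - Psi.T) for the anti-symmetric case)
for _ in range(100_000):
    # Eq. (5.45): left/right multiplication for the one-particle parts,
    # Hadamard product for the interaction
    HPsi = h @ Psi + Psi @ h.T + W*Psi
    Psi = Psi - dt*HPsi                        # Euler step, Eq. (5.46)
    norm = np.sqrt(np.sum(np.abs(Psi)**2))*dx
    energy = (1 - norm)/dt                     # with Euler: norm ~ 1 - E*dt
    Psi = Psi/norm

print('Energy estimate:', energy)
```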

Hopefully, you found both energy estimates and wave functions to be in reasonable
agreement with Figs. 4.5 and 4.6.

10 Symmetric, Ψ = Ψᵀ, not Hermitian, Ψ = Ψ†.



5.5 The Adiabatic Theorem

Dynamical processes are usually more interesting when something actually happens -
brought about by more or less abrupt changes. However, in some situations it could be
interesting to bring about changes as slowly as possible. One example of this is found
in the realm of quantum computing - within what is called adiabatic quantum computing,
which will be the topic of Section 6.6. Such approaches typically set out to solve
complex optimization problems by initializing a quantum system in the ground state
of a simple Hamiltonian and then slowly changing this Hamiltonian into something
more complex.

5.5.1 Exercise: Adiabatic Evolution

(a) The generic Hamiltonian of a two-state system may be written as in Eq. (5.3); we
repeat it here:
$$H = \begin{pmatrix} -\varepsilon/2 & W \\ W^* & \varepsilon/2 \end{pmatrix}. \qquad (5.47)$$

Contrary to the context of Eq. (5.3), we allow both the diagonal elements ±ε/2
and the couplings W to depend on time here. Show that, unless W = 0, the eigen-
energies of this Hamiltonian will never coincide. This is illustrated in Fig. 5.3.
The same conclusion holds for any discrete set of eigenstates with more than two
states as well.
(b) In Exercise 5.3.2 we used the time-independent part of the Hamiltonian in order
to construct an orthonormal spectral basis. Suppose now that we instead use the
eigenstates of the full, time-dependent Hamiltonian:

$$H(t)\, \varphi_n(x;t) = \varepsilon_n(t)\, \varphi_n(x;t). \qquad (5.48)$$

Figure 5.3 The eigenenergies of the two-state Hamiltonian in Eq. (5.47) as a function of the magnitude ε of the diagonal elements. The quantities are given in units of |W|. Do note the similarity with Fig. 3.5.

As before, we assume the spectrum of the Hamiltonian to be countable here. The
orthonormal eigenfunctions can be set to vary continuously and smoothly
in time - provided that the Hamiltonian does so.
Let our wave function be expanded in these time-dependent basis functions:

$$\Psi(x;t) = \sum_n a_n(t)\, \varphi_n(x;t). \qquad (5.49)$$

Show that the state vector a(t) = (a₀(t), a₁(t), a₂(t), …)ᵀ, according to the Schrödinger
equation, fulfils

$$i\hbar \frac{\mathrm{d}}{\mathrm{d}t}\, \mathbf{a}(t) = \left[ D - iC \right] \mathbf{a}(t), \qquad (5.50)$$

with the matrices

$$D = \begin{pmatrix} \varepsilon_0(t) & 0 & 0 & \cdots \\ 0 & \varepsilon_1(t) & 0 & \cdots \\ 0 & 0 & \varepsilon_2(t) & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \qquad (5.51)$$

and

$$C_{m,n} = \hbar \left\langle \varphi_m \,\middle|\, \frac{\partial \varphi_n}{\partial t} \right\rangle. \qquad (5.52)$$

By the way, is the above effective Hamiltonian matrix, D − iC, Hermitian?


(c) Suppose now that our Hamiltonian H(t) depends very weakly on time; it changes
very slowly. In that case, a system which starts out at time t = 0 in eigenstate
number n tends to remain in eigenstate number n - although the Hamiltonian has
changed.
How can we draw this conclusion from the above relations?
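Before moving on, part (a) can also be checked numerically. The eigenvalues of the matrix in Eq. (5.47) are ±√(ε²/4 + |W|²), so the gap between them never falls below 2|W|; the following few lines of Python, with arbitrarily chosen numbers, illustrate this:

```python
import numpy as np

W = 0.5 + 0.5j                       # any non-zero coupling will do
for eps in np.linspace(-5, 5, 11):
    H = np.array([[-eps/2, W], [np.conj(W), eps/2]])
    E = np.linalg.eigvalsh(H)        # eigenvalues of the Hermitian matrix
    print(f'eps = {eps:5.1f}: gap = {E[1] - E[0]:.4f}')   # never below 2|W|
```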

The result pertaining to part (a) of the exercise above, which is attributable to John
von Neumann and Eugene Wigner, is called the non-crossing rule, and the result of (c) is
coined the adiabatic theorem. Its name is motivated by thermodynamics; an adiabatic
process is one that happens without heat exchange. In a quantum context, it simply
refers to a process that proceeds slowly enough for no transition to other quantum
states to occur; there is no exchange of population between the eigenstates. This applies
to discrete time-dependent eigenstates; it does not hold for states belonging to any
continuous part of the spectrum.
Simply put, the adiabatic theorem holds because when H(t) changes slowly, so do
all the time-dependent eigenstates φₙ(x;t), and ⟨φₘ(t)|∂φₙ(t)/∂t⟩ ≈ 0. With C approxi­
mately equal to the zero matrix, very little transition between states will occur. And,
since none of the eigenenergies will coincide at any time, we will remain in the state with
the nth lowest eigenenergy if that’s where we started out. In particular, if we start out in
the ground state of some initial Hamiltonian, we will end up in the ground state of the

final one - if our time-dependent Hamiltonian has changed slowly and continuously
between the two.
Although eigenenergies do not cross, they may come close; they may feature so-
called avoided crossings, as illustrated in Fig. 5.3. Transitions between states are more
likely to occur in these cases. This can be understood from the fact that the off-diagonal
matrix elements of C in Eq. (5.52) may be recast as

$$\left\langle \varphi_m \,\middle|\, \frac{\partial \varphi_n}{\partial t} \right\rangle = \frac{\left\langle \varphi_m \,\middle|\, \partial H/\partial t \,\middle|\, \varphi_n \right\rangle}{\varepsilon_n(t) - \varepsilon_m(t)}, \quad m \neq n. \qquad (5.53)$$
From this we see that these coupling elements, which may shift population between
eigenstates, are large not only when the time derivative of the Hamiltonian is large, as
discussed above, but also when eigenenergies come close, εₘ(t) ≈ εₙ(t) - when they
feature avoided crossings.
Enough talk, let’s see if the adiabatic theorem actually holds for a specific example.

5.5.2 Exercise: A Slowly Varying Potential

Once again, we revisit the harmonic oscillator, which we first saw in Exercise 3.1.4. As
before, we set k = 1. However, this time we shake it about a bit; we take our time­
dependent Hamiltonian to be
$$H(t) = \frac{\hat{p}^2}{2m} + V(x - f(t)) \qquad (5.54)$$

with a time-dependent translation f(t) in the form of Eq. (5.29):

$$f(t) = \frac{8}{3^{3/2}}\, \sin^2\!\left(\frac{1}{2}\omega t\right) \sin(\omega t), \quad t \in [0,\, 2\pi/\omega]. \qquad (5.55)$$

The factor 8/3^{3/2} ensures that the maximal displacement is one length unit. The time-dependent potential is illustrated in Fig. 5.4.

Figure 5.4 A harmonic potential is slowly shaken, first to the right and then to the left, before it is restored to its original position. The time is given in units of 1/ω.

According to the above discussion, if we start out in the ground state at t = 0,
Ψ(t = 0) = φ₀(t = 0), we should remain in the ground state at the final time T = 2π/ω,
provided that we shake slowly enough. In other words, ω must be low enough. Now,
solve the Schrödinger equation for this system and try to determine just how low ω
must be in order to ensure that the population in the initial state, the ground state
population P₀, remains at least 99%:

$$P_0 = \left| \left\langle \Psi(t = 0) \,\middle|\, \Psi(T) \right\rangle \right|^2 \geq 0.99. \qquad (5.56)$$
Diagonalize your numerical approximation of the initial Hamiltonian in order to
obtain your initial state, the normalized ground state.
Perhaps the most straightforward way to implement the time evolution would be the
split-operator technique, Eq. (5.25), with A = p̂²/2m and B = V(x − f̄(t)), where

$$\bar{f}(t) = f(t + \Delta t/2) \qquad (5.57)$$

is the time-averaged displacement. This is convenient since A is time independent and
B is diagonal. Since we will run this calculation several times with various ω values, it
would be more convenient to fix the number of time steps Nₜ, rather than the size of
the numerical time step Δt. This ensures that each run takes the same amount of time
and allows us to adapt the time step to the actual dynamics, Δt = 2π/(ωNₜ).
You could also accept the challenge of implementing the evolution as formulated
in Eq. (5.50) and solve it as an ODE, using Eq. (5.37) for instance. In doing so you
can exploit the fact that, in this case, the eigenenergies are time independent and the
elements of the C-matrix, Eq. (5.52), can be rewritten in terms of p-couplings between
the eigenstates at t = 0. As in Exercise 5.3.1, this matrix, in turn, may be written
as a matrix product with the basis states and a matrix approximating the momentum
operator - multiplied by the appropriate time-dependent factor.
While the first of the two alternative methods may be the most convenient one, it is
always reassuring to check that different implementations yield the same result.
In order to determine how fast you can shake your potential and yet keep your
ground state population in the end, you can start with a very low ω, rerun your calculation
repeatedly while increasing ω in small steps, and see how far you get before
you violate Ineq. (5.56). You could start at ω = 0.05 and use the same or a smaller
value for the increment. However, it may be more interesting to make a plot of P₀ as
a function of ω. If you do this for ω up to, say, 2, you will find that this function is far
from monotonic.
In addition to running your simulation starting out in the ground state, also try to
start from some excited state and see how the probability of remaining in that state
depends on ω.
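A minimal Python sketch of the split-operator alternative could look as follows. The grid parameters and the symmetric (Strang-type) factorization are choices of ours, and the ground state is written down analytically here, whereas the exercise asks you to obtain it by diagonalizing the numerical Hamiltonian.

```python
import numpy as np

# Minimal sketch, hbar = m = k = 1.
n, L = 512, 20.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
dx = x[1] - x[0]
p = 2*np.pi*np.fft.fftfreq(n, d=dx)

def f(t, w):                                         # Eq. (5.55)
    return 8/3**1.5*np.sin(w*t/2)**2*np.sin(w*t)

def P0(w, Nt=2000):
    dt = 2*np.pi/(w*Nt)                              # fixed number of steps
    psi0 = np.pi**-0.25*np.exp(-x**2/2)*np.sqrt(dx)  # HO ground state on the grid
    psi = psi0.astype(complex)
    halfKin = np.exp(-0.25j*p**2*dt)                 # half a kinetic step
    for j in range(Nt):
        fbar = f((j + 0.5)*dt, w)                    # midpoint displacement, Eq. (5.57)
        psi = np.fft.ifft(halfKin*np.fft.fft(psi))
        psi *= np.exp(-0.5j*(x - fbar)**2*dt)        # potential step, V = (x-f)^2/2
        psi = np.fft.ifft(halfKin*np.fft.fft(psi))
    return np.abs(np.vdot(psi0, psi))**2             # Eq. (5.56)

for w in [0.05, 0.1, 0.5, 1.0, 2.0]:
    print(w, P0(w))
```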

In Exercise 6.6.1 we will try to illustrate how adiabatic evolution may be used as an
optimization technique. When doing so, you will want to ‘shake’ your system slowly
enough to keep your ground state population with high fidelity, while not wasting time
progressing more slowly than necessary.
6 Quantum Technology and Applications

Quantum physics is the framework we use to understand the fundamental building
blocks of matter; so anything that involves matter also involves quantum physics in
some sense. Thus, little is more applied than this. However, from a more practical
some sense. Thus, little is more applied than this. However, from a more practical
point of view, quantum applications have long since become an integral part of our
everyday lives. In addition to shifting our way of understanding the material world,
our growing understanding of the micro world has been a game changer within sev­
eral technological areas. As examples of such areas, we could mention medicine and
diagnostics, computers and integrated circuits, solar cells, chemical and pharmaceut­
ical industries, metrology - the art of making precise measurements, nuclear energy
production and, sadly, the weapons industry.
Entering into how quantum physics is applied within all of these fields would be
too immense an endeavour. We will, however, get a feel for a few applications which
are particularly easily related to the quantum traits we have already seen - such as
tunnelling, quantization and spin. Moreover, a large portion of this chapter is devoted
to the emerging field of quantum information technology.

6.1 Scanning Tunnelling Microscopy

As mentioned, tunnelling may be used in resolving incredibly small structures since it
depends quite sensitively on geometrical factors. This fact is exploited in a scanning
tunnelling microscope - a microscope capable of producing images such as the one we
saw in Fig. 2.5 and in Fig. 5.1(b). We will here try to outline how this comes about.
We start by assuming that we are dealing with a free particle with a definite energy.
With this, we may assume our wave function to be of the form of Eq. (2.38):

$$\Psi(x;t) = e^{-i\omega t}\, \psi(x), \qquad (6.1)$$

where ω is the energy ε divided by ħ, ω = ε/ħ, and ψ(x) is a solution of the time-independent
Schrödinger equation, Eq. (3.1), which we will obtain. Although we didn't
emphasize this when we first encountered this notion, in Exercise 2.5.1, the separation
of Eq. (6.1) really only makes sense when dealing with bound states, which are sub­
ject to quantization. For a free particle, it is wrong in several ways. First of all, a
particle with a definite energy exposed to no potential will also have a definite momen­
tum. According to Werner Heisenberg, Ineq. (1.5), this can only be consistent with an


entirely delocalized particle; if σₚ in Ineq. (1.5) approaches zero, σₓ will blow up. This,
in turn, renders the wave function unnormalizable. Also, there is something fishy about
the whole notion of using stationary solutions in order to understand something that
really is dynamical. We are actually aiming at understanding time-dependent processes
by solving a time-independent equation, Eq. (3.1) in this case.
Still, we will do so in the following, for three interrelated reasons: (1) while seemingly
odd, this is indeed a very common approach to scattering problems in quantum physics,
(2) it may actually lead to sensible results in the end, and (3) it does make sense when
put in the right context.

6.1.1 Exercise: Tunnelling Revisited

Our starting point is the following: a one-dimensional particle with positive energy ε is
exposed to a potential V(x) which is only supported for x between 0 and D; for any x
value outside the interval [0, D], the potential energy is zero. The situation is illustrated
in Fig. 6.1.

(a) Explain why the solution of the time-independent Schrödinger equation has the
form

$$\psi(x) = Ae^{ikx} + Be^{-ikx} \quad \text{when } x < 0, \qquad (6.2a)$$

$$\psi(x) = Ce^{ik(x-D)} + Fe^{-ik(x-D)} \quad \text{when } x > D \qquad (6.2b)$$

in such a situation. What is k here?


As discussed above, this will lead to an unnormalizable wave function. So this
is not really an admissible wave function as it does not produce any meaningful
probability density. We couldn't really expect it to make sense either since we, after
all, aim to study a scattering process - not a stationary solution or a steady state.
The solution of Eqs. (6.2) is meaningful, however, if we consider it a steady
current.1
(b) If we combine each of the four terms in Eqs. (6.2) with the time-dependent factor
of Eq. (6.1), exp(−iεt/ħ) = exp(−iωt), we can identify each of them as a wave
travelling either to the left or to the right. Now, if we insist that there is an incoming

Figure 6.1 The potential V(x) under study in Exercise 6.1.1.

1 The probability current corresponding to a wave function Ψ is determined as j = (ħ/m) Im[Ψ* ∂Ψ/∂x]. So for Ψ ~ exp(i(kx − ωt)), the current is proportional to k and independent of both position and time.

current from the left, but nothing from the right, one of the coefficients must be
set to zero.
Which one is that? And which one of the three remaining terms corresponds to
an incoming wave from the left?
In Exercise 2.4.1 we defined a reflection and a transmission probability,
Eqs. (2.32). In our present situation, in which we are dealing with a steady current
rather than a dynamic wave function, we define relative reflection and transmission
rates. We define them as the squared modulus of the amplitude of the reflected and
the transmitted current, respectively, divided by the amplitude of the incoming
current:2

$$R = \left| \frac{B}{A} \right|^2, \qquad (6.3a)$$

$$T = \left| \frac{C}{A} \right|^2. \qquad (6.3b)$$

These rates will depend on the energy ε at which the incoming wave scatters off the
potential V(x). In the special case that our potential is constant, V(x) = V₀ when
x ∈ [0, D], we may determine the transmission/tunnelling rate exactly by analytical
means. One can show that

$$T = \frac{1}{1 + \frac{V_0^2}{4\varepsilon(V_0 - \varepsilon)} \sinh^2(\alpha D)}, \qquad (6.4)$$

where α = √(2m(V₀ − ε))/ħ and sinh x = (eˣ − e⁻ˣ)/2 is the so-called hyperbolic
sine function.
We will not do so here. Instead we will calculate T numerically - in a manner
that allows for a general shape of the potential.
(c) Suppose now that we do have the correct solution for ψ(x) for x ∈ [0, D]. How
does this allow us to determine the reflection and transmission rates? For simplicity,
set C = 1.
Hint: Although we have given up on the notion that ψ should be normalizable,
we should still insist that ψ(x) and ψ′(x) are continuous everywhere.
(d) Explain how the problem can be formulated as this first-order, linear ordinary
differential equation:

$$\frac{\mathrm{d}}{\mathrm{d}x} \begin{pmatrix} \psi(x) \\ \varphi(x) \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ \frac{2m}{\hbar^2} V(x) - k^2 & 0 \end{pmatrix} \begin{pmatrix} \psi(x) \\ \varphi(x) \end{pmatrix}, \qquad (6.5)$$

where φ(x) = ψ′(x). Why must we impose the 'initial' value condition

$$\begin{pmatrix} \psi(x = D) \\ \varphi(x = D) \end{pmatrix} = \begin{pmatrix} 1 \\ ik \end{pmatrix}? \qquad (6.6)$$

2 We hope that you forgive the fact that we use the same names as in Eqs. (2.32) although we are actually calculating relative probability rates in this case.

(e) Implement the solution to this initial value problem. Use your ODE solver of preference,
impose the condition of Eq. (6.6), fix some value for the energy ε, which,
in turn, fixes k, and solve Eq. (6.5) backwards in x. Use your numerical solution to
determine ψ(x = 0) and ψ′(x = 0) = φ(x = 0) and, finally, use this to determine
the transmission rate T, Eq. (6.3b); a sketch of such an implementation is given
right after this exercise.
As your first case, take your potential to be constant, V(x) = V₀, between x = 0
and x = D. Choose your own values for D and V₀. Rerun your implementation for
several values of ε and confirm that it gives the same energy dependence as does
Eq. (6.4).
(f) Now, take your potential to have this linear form:

$$V(x) = \begin{cases} V_0 - ax, & x \in [0, D], \\ 0, & \text{otherwise}. \end{cases} \qquad (6.7)$$

Keep your values for V₀ and D and choose a modest, positive value for a - one
that keeps V(D) well above zero.
For this potential, determine the transmission rate as a function of the energy ε
for energy values up to V₀.
(g) Often, the transmission rate has an energy dependence that resembles an exponential.
Use a logarithmic y-axis when plotting T(ε) to gauge to what extent this is the case
with your parameters.
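The sketch promised in part (e) could, using SciPy's initial value problem solver, look like this; the values of V₀ and D are arbitrary illustrations, in units where ħ = m = 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

V0, D = 2.0, 1.0                         # arbitrary illustration values

def transmission(eps, V):
    k = np.sqrt(2*eps)                   # hbar = m = 1
    def rhs(x, y):
        psi, phi = y                     # Eq. (6.5) with phi = psi'
        return [phi, (2*V(x) - k**2)*psi]
    # the 'initial' condition (6.6) at x = D, integrated backwards to x = 0
    sol = solve_ivp(rhs, [D, 0.0], [1.0 + 0j, 1j*k], rtol=1e-10, atol=1e-12)
    psi0, phi0 = sol.y[0, -1], sol.y[1, -1]
    A = 0.5*(psi0 + phi0/(1j*k))         # match to Eq. (6.2a) at x = 0
    return 1/np.abs(A)**2                # T = |C/A|^2 with C = 1

# check against the analytical result (6.4) for a rectangular barrier
for eps in [0.5, 1.0, 1.5]:
    alpha = np.sqrt(2*(V0 - eps))
    exact = 1/(1 + V0**2/(4*eps*(V0 - eps))*np.sinh(alpha*D)**2)
    print(eps, transmission(eps, lambda x: V0), exact)
```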

A so-called semi-classical approach called the WKB approximation3 allows us to
estimate the transmission rate or probability as

$$T \sim \exp\left[ -\frac{2}{\hbar} \int_a^b \sqrt{2m\left(V(x) - \varepsilon\right)}\, \mathrm{d}x \right], \qquad (6.8)$$

where the integration interval [a, b] extends over the classically forbidden region, the
region in which ε − V(x) < 0. The relation symbol ∼ is here taken to mean 'approxi­
mately proportional to’, which, undeniably, is a rather imprecise notion. In any case,
sometimes this is a decent approximation, sometimes it’s not. Here we will not discuss
how and when. But we will make use of it in the next exercise - after having checked
its validity against our implementation from Exercise 6.1.1.
The aim of the following exercise is to shed some light on how tunnelling may be
exploited to produce pictures such as the one shown in Fig. 2.5.
Suppose we want to chart the surface of some metal. Within the metal, the least
bound electrons, the conduction electrons, roam freely at a more or less definite energy ε.
While they are not attached to any particular atom or ion, the collective attraction of
the atomic nuclei does confine even these electrons to the metal.
This changes slightly when we introduce a sharp needle carrying a voltage relative
to the surface of the metal. The voltage introduced by the needle distorts the potential
experienced by the conductance electrons in the vicinity of the metal surface - thus

3 It takes its name from Gregor Wentzel, Hendrik Anthony Kramers and Léon Brillouin. The last two may
be found in Fig. 1.3.

Figure 6.2 Illustration of the setup of a scanning tunnelling microscope. There is a voltage between the needle and the metal
which permits free electrons in the metal to tunnel out, thus inducing a small current which can be measured. This
provides information about the distance between the surface of the metal and the needle point.

allowing them to tunnel out. This is illustrated in Fig. 6.2. Correspondingly, a number
of conductance electrons are able, through tunnelling, to escape the confines of
the metal to a vacancy in the needle. As a result, a small current is measured.
The more likely tunnelling is to occur, the higher the current. As dictated, or rather
suggested, by Eq. (6.8), the shorter the escape route, the more current is seen. In other
words, this current is a measure of the distance between the surface and the needle
point.

6.1.2 Exercise: The Shape of a Surface

Now, suppose that the surface of our metal plate is located in the yz-plane and our
needle is pointing towards the negative x-direction. The distance between the needle
and the surface is d — /(y), where d is the average distance between the needle and the
surface and /(y) is the shape of the surface along the y-axis. Since the distance will
deviate from d as you vary y, the current picked up by the needle will also depend on
y. This scenario is illustrated in Fig. 6.3(a).
In a simplified picture we may model the potential energy felt by a conductance
electron near the surface by the form of Eq. (6.7):

$$V(x; y) = \begin{cases} V_0 - \dfrac{eU}{d - f(y)}\, x, & x \in \left[0,\, d - f(y)\right], \\ 0, & \text{otherwise}. \end{cases} \qquad (6.9)$$

Figure 6.3 (a) A more schematic picture of the situation illustrated in Fig. 6.2. The shape of the surface along the y-axis follows the function f(y), d is the average distance between the needle tip and the metal surface, and x is the distance from the metal surface for an escaping electron making its way towards the needle. (b) The potential that the escaping electron tunnels through, Eq. (6.9), in our simple model.

Here, V₀ is the barrier that must be climbed by a conductance electron in order to
escape the metal4 - if it were not for the tunnelling possibility. The constant e is the
elementary charge. The ratio between the bias voltage U and the distance between the
needle point and the surface, d − f(y), is the electric field experienced by the escaping
electron. Thus, this term is identical to the one in Eq. (5.27). The potential is illustrated
in Fig. 6.3(b).
Suppose now that we have continuously measured the tunnelling current while moving
our needle along the y-axis. A set of fictitious results from such measurements can
be found in a data file at www.cambridge.org/Selsto. Our aim is to use this measured
current, I(y), to determine the shape of the surface, f(y). To this end, we will make
use of Eq. (6.8).
In setting the parameters of our model, we will use units other than our usual, convenient
ones - units that are more likely to be used in a laboratory. Set d = 4 Å,
V₀ = 4.5 eV, ε = 0.5 eV and U = 1 V, where an ångström,5 Å, equals 10⁻¹⁰ m and V is
volt. Multiplying V by the elementary charge e renders the much applied energy unit,
the electronvolt, eV.

(a) Convert all these parameters into atomic units. You may want to revisit Exer­
cise 1.6.4 in case you need a reminder.
(b) For f(y) values between −1 Å and 1 Å, calculate the actual tunnelling rates using
your implementation from Exercise 6.1.1. To this end, set D = d − f(y) and a =
eU/(d − f(y)). Note that here a becomes y-dependent.
(c) Use Eq. (6.8) to predict the same transmission rate. Plot these predictions together
with the ones you found in (b) using a logarithmic y-axis in order to gauge whether
the predictions may reasonably be assumed to be proportional.

4 This barrier height is referred to as the work function.
5 The unit is assigned in tribute to the Swede Anders Jonas Ångström, a pioneer in the field of spectroscopy,
to which the following section is dedicated.

(d) Now, assuming that you reached an affirmative conclusion to the question above,
we may use Eq. (6.8) along with the assumption that the measured current I(y)
is proportional to the tunnelling rate T(y) in order to determine the shape of
the surface along the line in question. Under this assumption, it can be shown
that

$$f(y) = \frac{3\hbar e U}{4\sqrt{2m}\left[ (V_0 - \varepsilon)^{3/2} - (V_0 - \varepsilon - eU)^{3/2} \right]}\, \ln I(y) + C, \qquad (6.10)$$

where C is an unknown constant.


Now, do this analytical calculation for yourself.
(e) Finally, with the data for I(y) from www.cambridge.org/Selsto, follow this procedure
to estimate the shape f(y). The constant C may be ignored; just subtract the
mean value from all the f(y) values you arrive at.
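A compact Python realization of this procedure might look as follows; the file name and its assumed two-column layout (y and I(y)) are placeholders to be adapted to the actual data file:

```python
import numpy as np

# Assumed layout: two columns, the position y and the measured current I(y).
y, I = np.loadtxt('current_data.txt', unpack=True)

eV = 1/27.211386                 # 1 eV in atomic units (hartree)
V0, eps, eU = 4.5*eV, 0.5*eV, 1.0*eV

# the prefactor of Eq. (6.10) in atomic units, hbar = m = e = 1
pref = 3*eU/(4*np.sqrt(2)*((V0 - eps)**1.5 - (V0 - eps - eU)**1.5))

f = pref*np.log(I)               # Eq. (6.10), up to the constant C
f -= np.mean(f)                  # dispose of C by subtracting the mean
```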

So, by bringing a needle very close to a surface, we can map out the shape of this
surface by gauging the current that comes about via tunnelling. And we can cover the
whole surface by moving our needle in both the y- and the z-directions. However, none
of this is useful unless we are also able to control the position of the needle - with
extreme precision. How does that come about? Here’s a hint: piezoelectricity.

6.2 Spectroscopy

We have learned that the energy of the electrons confined to a nucleus in an atom is
quantized; the energy can only assume one out of a set of discrete values. For molecules,
the same also applies to vibrational and rotational energies. As a consequence, any
substance has its own unique ‘fingerprint’ of allowed, discretized energies.
Although atoms and molecules tend to prefer the ground state, we may excite them.
This can be done by heating, electromagnetic irradiation or by bombarding them with
particles. This will populate several excited states. Afterwards, the excited system will
relax towards the ground state - either directly or via intermediate states of lower
energy. When this happens, the system disposes of the surplus energy by emitting light
quanta - photons. The energy of these matches the energy difference between the two
states in question. As a consequence, each photon comes out with a specific frequency
and wavelength, see Eq. (1.4). When this wavelength falls within the interval between
385 nm and 765 nm,6 more or less, it is visible to the human eye.

6 nm - nanometre; 10⁻⁹ m.

6.2.1 Exercise: Emission Spectra of Hydrogen

In Exercise 3.1.3 we had a look at the eigenstates of the hydrogen atom. These energies
turn out to have a very simple form - given by Eq. (3.4) in SI units. As we discussed in
the context of Eq. (3.4), whenever an atom in an excited state, given by quantum number
n₁, relaxes into one of lower energy with n₂ < n₁, a photon of energy

$$E_\gamma = \hbar\omega = hf = E_{n_1} - E_{n_2} = B\left( \frac{1}{n_2^2} - \frac{1}{n_1^2} \right) \qquad (6.11)$$

is emitted. The corresponding wavelength is found from the relation fλ = c, where c
is the speed of light.

(a) Now, only a few of these possible transitions correspond to wavelengths in the visible
spectrum. Which ones are they? How many are there? Which n₂ value(s) do
they have?
(b) These particular wavelengths give rise to a line spectrum - a spectrum in which only
a few colours are seen (see Fig. 6.4). Try to construct such a spectrum by plotting
the wavelengths you found above. To make it more interesting, you may want to
see if you can find an implementation that converts wavelength into colour - in the
RGB7 format, for instance. To this end, you may find Ref. [13] useful.
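For part (a), a few lines of Python suffice; B ≈ 13.6057 eV and hc ≈ 1239.84 eV nm are standard values:

```python
B, hc = 13.6057, 1239.84                   # eV and eV nm, respectively

for n2 in range(1, 6):
    for n1 in range(n2 + 1, 12):
        lam = hc/(B*(1/n2**2 - 1/n1**2))   # Eq. (6.11) combined with f*lambda = c
        if 385 < lam < 765:
            print(f'{n1} -> {n2}: {lam:.1f} nm')
```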

Although it is nowhere near as simple as the hydrogen spectrum, the helium atom,
like any other atom or molecule, also has its own specific set of allowed electronic states.
Thus, it also has its very specific spectral fingerprint.

6.2.2 Exercise: The Helium Spectrum

Among the resources at www.cambridge.org/Selsto you will find a file with data on
the many bound states of the helium atom. It was retrieved from the NIST8 Atomic
Spectra Database [25]. The file consists of three columns. The third column lists all
electronic energy levels in units of eV. This spectrum is shifted so that the ground state

Figure 6.4 The lines indicate the wavelengths λ [nm] contained in light emerging from a gas of excited hydrogen atoms. They are called the Balmer series, named after the Swiss Johann Balmer, who arrived at Eq. (6.11), with n₂ = 2, on empirical grounds in 1885. Of course, such an illustration makes more sense in colour; a coloured version of this figure may be seen at www.cambridge.org/Selsto.

7 RGB - red, green, blue.
8 NIST - National Institute of Standards and Technology, USA.

energy is zero, which is not a problem for us since we will only be occupied with energy
differences.

(a) Make an implementation that reads through all this data and compare pairs of
energy levels to identify transitions corresponding to photon wavelengths that fall
into the visible interval. Also, visualize this line spectrum as in Fig. 6.4.
How many lines does the spectrum contain? Some of them are quite close, right?
(b) Actually, the spectrum from (a) is far too inclusive, the spectrum that does in fact
emerge from a gas of excited helium atoms has far fewer lines. There are several
reasons for this.
First of all, helium is, just like the system we studied in Exercise 4.4.3, a
two-particle system.9 With a spin-independent Hamiltonian,10 each eigenstate is
the product of a spatial part and a spin part - either a triplet or the singlet.
Correspondingly, the spatial wave function is either exchange anti-symmetric or
symmetric, respectively. The emission of a photon will not change this symmetry.
Consequently, the system will only undergo transitions within the same exchange
symmetry.
Another restriction, or selection rule, which limits the number of possible tran­
sitions, is related to the fact that the photon has spin with s = 1. Since the total
angular momentum, the sum of the ‘ordinary’ angular momentum and spin, is a
conserved quantity in this process, the states involved in a one-photon transition
must abide by this. To make a long story short, in this particular case the total
angular momentum of the system has a well-defined quantum number L which
must change by 1 in the case of a one-photon transition between one helium state
and another.
In the data file from (a), the first column indicates if the state is a spin singlet,
‘1’, or a spin triplet, ‘3’. The second column is the L quantum number.
Redo part (a) with these additional constraints implemented. What does your
helium emission spectrum look like now?
(c) When white light, which contains components of all possible wavelengths and col­
ours, passes through a gas, the opposite process of that which produces a line
spectrum takes place. Where a line spectrum, or an emission spectrum, emerges
when a gas of excited atoms and molecules emits light of specific wavelengths, an
absorption spectrum comes about when a gas, with atoms and molecules predom­
inantly in their ground states, absorbs light of specific wavelengths, thus leaving
dark spots in the spectrum of the remaining light. Figure 6.5 displays one such
absorption spectrum. It corresponds to sunlight that has passed through both the
Sun’s and the Earth’s atmosphere.
Can you identify traces of hydrogen and helium in this absorption spectrum?
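A Python sketch for parts (a) and (b) could be structured as follows; the file name is a placeholder, and the column layout follows the description above (for part (a), simply drop the two selection-rule tests):

```python
import numpy as np

data = np.loadtxt('helium_levels.txt')   # columns: multiplicity, L, energy (eV)
hc = 1239.84                             # eV nm

lines = []
for m1, L1, E1 in data:
    for m2, L2, E2 in data:
        if E1 <= E2:
            continue                     # downward transitions only
        if m1 != m2:                     # no singlet <-> triplet transitions
            continue
        if abs(L1 - L2) != 1:            # one-photon selection rule on L
            continue
        lam = hc/(E1 - E2)
        if 385 < lam < 765:
            lines.append(lam)
print(len(lines), 'visible lines:', sorted(lines))
```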

9 If we include the nucleus, it actually has three particles. It is, however, admissible to consider the nucleus
as a massive particle at rest - a particle that sets up the potential in which the two electrons are confined.
10 Strictly speaking, it is more accurate to say that the Hamiltonian has a very weak spin dependence; spin
is not entirely absent.

Figure 6.5 Since atoms and molecules in the atmospheres of the Sun and the Earth absorb light at certain frequencies, the
sunlight that reaches us at the surface of the Earth has some 'holes' in its spectrum. These missing lines are called
Fraunhofer lines, named after the German physicist Joseph von Fraunhofer. You may find the colour version of this
figure at www.cambridge.org/Selsto more informative.

Hopefully, these examples serve to illustrate how unknown substances can be iden­
tified by investigating the light that they absorb or emit. Such techniques are referred
to as spectroscopy. More sophisticated spectroscopic schemes than the ones outlined
here do exist. And spectroscopic analysis is certainly not limited to the visible part of
the electromagnetic spectrum.
By using spectroscopic techniques, even the composition of distant astronomical
objects may be determined. And, albeit somewhat less exotic, spectroscopy has found
several terrestrial applications, for instance within medical imaging.
The next topic has also found such imaging applications.

6.3 Nuclear Magnetic Resonance

We have learned that the particles that make up matter, the fermions, have half-integer
spins. This also applies to the protons and neutrons - the building blocks of atomic
nuclei. Thus, since nuclei may have a non-zero spin, they may also be manipulated
by magnetic fields - in the same manner as we did in Section 5.1. By exposing mat­
ter to oscillating magnetic fields, atomic nuclei will respond in a manner that depends
strongly on the frequency of the field. This may be exploited in order to image what we
cannot see directly, such as tissue of a specific type inside a patient’s body; magnetic
resonance imaging, MRI, has long since become a standard tool in hospitals.

6.3.1 Exercise: Spin Flipping — On and Off Resonance

We now return to our system of a spin-1/2 particle with a static magnetic field in the
z-direction and an oscillating one in the x-direction. However, this time our spin-1/2
particle will be a proton. Equation (4.15) still applies, but this time the charge is the
positive elementary charge, the g-factor is 5.5857, and the proton mass is 1836 times
greater than the electron mass.

We take the static magnetic field to point along the z-axis - so that the diagonal of
the Hamiltonian matrix still coincides with Eq. (5.8) with positive ε.

(a) Show that, for the normalized spinor χ = (a(t), b(t))ᵀ, the time-dependent
expectation value of the magnetic dipole, Eq. (5.2), is

$$\langle \boldsymbol{\mu} \rangle = \langle \chi | \hat{\boldsymbol{\mu}} | \chi \rangle = \gamma\hbar \left[ \operatorname{Re}(ab^*),\; -\operatorname{Im}(ab^*),\; |a|^2 - 1/2 \right]. \qquad (6.12)$$

Assume this time-dependent magnetic moment generates electromagnetic radiation
in the same manner as a classical magnetic dipole. Moreover, for simplicity,
limit your attention to the x-component of the magnetic moment expectation
value. For a classical magnetic dipole oscillating in the x-direction, the intensity
of the emitted radiation at distance r and angle θ relative to the x-axis is

$$I(r, \theta) = \frac{m_0 \sin^2\theta}{16\pi^2 c^3\, r^2}\; \overline{\left[ \frac{\mathrm{d}^2}{\mathrm{d}t^2} \langle \mu_x(t) \rangle \right]^2}, \qquad (6.13)$$

where the angle brackets ⟨·⟩ denote expectation value, as usual, and the bar is taken
to indicate time averaging over the duration T of the interaction:

$$\overline{g(t)} = \frac{1}{T} \int_0^T g(t)\, \mathrm{d}t. \qquad (6.14)$$

The constant m₀ is the permeability of free space;11 in SI units it takes the value
m₀ = 1.257 × 10⁻⁶ N s²/C².
(b) In order to simplify further, apply the rotating wave approximation, which you got
to know in Exercise 5.1.3, in the following.
For a static magnetic field of magnitude Bz pointing along the z-axis, and
an oscillating magnetic field in the x-direction oscillating at angular frequency
ω, Bx(t) = B₀ sin(ωt), identify the parameters δ and Ω that enter into the
Hamiltonian of the rotating wave approximation, Eq. (5.15).
(c) Since the rotating wave Hamiltonian is time independent, the time evolution that
it dictates may be determined by direct application of Eq. (2.19). Verify that with
the adequate parameters Eq. (5.5) gives
$$\exp\left[ -i H_{\mathrm{RWA}}\, t/\hbar \right] = \cos(\Omega_G t/2)\, I_2 - \frac{i}{\Omega_G} \sin(\Omega_G t/2) \left( \Omega\, \sigma_y - \delta\, \sigma_z \right). \qquad (6.15)$$

Do note the distinction between the Rabi frequency Ω, which is real here, and the
generalized Rabi frequency Ω_G, Eq. (5.18).
(d) With the spin-up state as our initial state, show that the time evolution of the spin-
up and the spin-down amplitudes, a and b, respectively, reads
$$a(t) = e^{i\omega t/2} \left( \cos(\Omega_G t/2) + i\, \frac{\delta}{\Omega_G} \sin(\Omega_G t/2) \right), \qquad (6.16a)$$

$$b(t) = e^{-i\omega t/2}\, \frac{\Omega}{\Omega_G}\, \sin(\Omega_G t/2). \qquad (6.16b)$$
11 Usually, this fundamental constant is denoted by μ₀. Here, however, we call it m₀ in order to avoid
confusion with the magnetic moment.

If the factors exp(±iωt/2) appear somewhat mysterious, do remember that you
need to transform back to the original frame according to Eq. (5.16).
(e) Make an implementation that estimates, numerically, and plots the time average
of [d²/dt² ⟨μx(t)⟩]² as a function of the driving frequency f = ω/2π. According
to Eq. (6.13), the intensity of the electromagnetic signal emitted from the spin-flipping
proton is proportional to this quantity. You may find Eqs. (2.11a) and
(1.16a) useful. Let the frequency interval span from 100 MHz to 300 MHz and fix
the duration of the interaction at 100 optical cycles of the driving field, T = 100/f.
Fix the static magnetic field at Bz = 5 T and let the amplitude of the oscillating
field be considerably lower: B₀ = 0.01 T. One tesla, the SI unit of magnetic field
strength, is the same as kg/(C s). A sketch of such an implementation follows
right after this exercise.
(f) Does any input frequency f generate a stronger output signal than any other
frequency? Which frequency is that?
(g) Try increasing T and decreasing B₀ to gauge how the strength of the emitted
radiation from the spin-flipping proton is affected.
(h) Is the rotating wave approximation really justified here?
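The sketch promised in (e) could look like this in Python; the detuning sign convention and the identification Ω = γB₀/2 - only half of the linearly polarized field survives the rotating wave approximation - are our assumptions, in the spirit of Exercise 5.1.3:

```python
import numpy as np
import matplotlib.pyplot as plt

q, m, g = 1.602e-19, 1.673e-27, 5.5857   # proton charge, mass and g-factor (SI)
gamma = g*q/(2*m)
Bz, B0 = 5.0, 0.01                       # tesla
w0 = gamma*Bz                            # Larmor (resonance) angular frequency

def signal(f, samples=20000):
    w = 2*np.pi*f
    t = np.linspace(0, 100/f, samples)   # duration: 100 optical cycles
    delta, Omega = w0 - w, gamma*B0/2    # assumed RWA conventions
    OmegaG = np.sqrt(delta**2 + Omega**2)
    a = np.exp(1j*w*t/2)*(np.cos(OmegaG*t/2) + 1j*delta/OmegaG*np.sin(OmegaG*t/2))
    b = np.exp(-1j*w*t/2)*Omega/OmegaG*np.sin(OmegaG*t/2)
    mux = np.real(a*np.conj(b))          # proportional to <mu_x>, Eq. (6.12)
    d2 = np.gradient(np.gradient(mux, t), t)
    return np.mean(d2**2)                # the time average entering Eq. (6.13)

freqs = np.linspace(100e6, 300e6, 400)
plt.semilogy(freqs/1e6, [signal(f) for f in freqs])
plt.xlabel('f [MHz]'); plt.show()
```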

Hopefully you found that the spin system responded a lot more strongly to oscil­
lating magnetic fields at a specific angular frequency ω - namely the one for which
the detuning δ is zero. Perhaps the output signal looked something like the sketch in
Fig. 6.6? This phenomenon, which is seen and used in virtually all systems that involve
oscillations and waves - such as a swing on a playground or a radio signal - is called
resonance.
The answer to the last question in Exercise 6.3.1, the one about the applicability of
the rotating wave approximation, is no; while the Rabi coupling Ω is, indeed, small,
we are guilty of applying the rotating wave approximation at frequencies rather far
away from resonance. However, a more proper description of the dynamics would still
produce the same feature. The signal is many orders of magnitude stronger when the
frequency of the input signal matches the resonant transitions between the two spin

" Figure 6.2 | The phenomenon of resonance - how some physical system may respond very strongly to an oscillating input signal at
a specific frequency.

states - when δ is very close to zero. It is this region we are interested in, and it is in this
very frequency regime that the rotating wave approximation applies.
The fact that a nucleus, be it the proton in a hydrogen atom or the nucleus of a lar­
ger atom, responds so strongly to a very particular frequency is called nuclear magnetic
resonance. In hospitals, this phenomenon is exploited in a technique called magnetic
resonance imaging (see Fig. 6.7). Since the origin of the emitted signal can be deter­
mined rather accurately, see Eq. (6.13), the location of specific tissue inside a living
body may be imaged. This can be done with much more sophistication and in a far less
invasive manner than X-ray imaging, for instance.
With all constants in Eqs. (6.12) and (6.13) in place, you may find that the intensity
of the field induced by the spin-flipping proton isn’t very high. However, since there
are many, many protons in the molecules that make up biological tissue, the signal is
certainly one we are able to measure. But isn’t that also a problem? Since all biological
tissue is full of hydrogen atoms, how can you tell one type of tissue from any other?
Wouldn’t all the protons respond resonantly to exactly the same frequency?
Fortunately, the answer to this question is no - due to so-called chemical shielding.
The electrons of the molecule, which carry both charge and spin, will shield the nucleus
a bit - effectively reducing the magnetic field experienced by the nucleus. This, in turn,
shifts the resonance frequency a little bit downwards. Since different molecules have
different electron distributions, this shift differs from molecule to molecule. While not
very large, the shift is large enough for us to distinguish between different kinds of
tissue.
Gauging the emitted field during exposure to an oscillating magnetic field with
a certain frequency is not the only way to gather information about what’s inside
a body. More information may be extracted by using a pulsed oscillating field

" Figure 6.2 | Magnetic resonance imaging, MRI - a direct application of nuclear magnetic resonance - has long since become a
standard imaging technique in hospitals. Here a patient is being positioned for MR study of the head and abdomen.

instead - combined with Fourier analysis. Also, the relaxation of aligned nuclear spins
to so-called thermal equilibrium after exposure to the magnetic fields carries additional
useful information.

6.4 The Building Blocks of Quantum Computing

From Section 1.3 you may remember the infamous curse of dimensionality. Actually,
this curse was the motivation for introducing the concept of a quantum computer. One
of its advocates was the 1965 Nobel laureate Richard Feynman. He argued that, since
the complexity of quantum physical systems grows exponentially with the number of
particles involved, a numerical study of such systems is best done on a computer with
the same capacity [15].
In the following we will illustrate how this works with spin-1/2 particles. We will also
use spin-1/2 particles to introduce the concepts of quantum bits, qubits, and quantum
gates.12
Let’s re-examine Eqs. (4.13) and (4.10) - and also include a general expression for a
three-particle spin-1/2 state:

$$\chi_1 = a\, \chi_\uparrow + b\, \chi_\downarrow, \qquad (6.17a)$$

$$\chi_2 = a\, \chi_\uparrow^{(1)}\chi_\uparrow^{(2)} + b\, \chi_\uparrow^{(1)}\chi_\downarrow^{(2)} + c\, \chi_\downarrow^{(1)}\chi_\uparrow^{(2)} + d\, \chi_\downarrow^{(1)}\chi_\downarrow^{(2)}, \qquad (6.17b)$$

$$\chi_3 = a\, \chi_\uparrow^{(1)}\chi_\uparrow^{(2)}\chi_\uparrow^{(3)} + b\, \chi_\uparrow^{(1)}\chi_\uparrow^{(2)}\chi_\downarrow^{(3)} + c\, \chi_\uparrow^{(1)}\chi_\downarrow^{(2)}\chi_\uparrow^{(3)} + d\, \chi_\uparrow^{(1)}\chi_\downarrow^{(2)}\chi_\downarrow^{(3)} + e\, \chi_\downarrow^{(1)}\chi_\uparrow^{(2)}\chi_\uparrow^{(3)} + f\, \chi_\downarrow^{(1)}\chi_\uparrow^{(2)}\chi_\downarrow^{(3)} + g\, \chi_\downarrow^{(1)}\chi_\downarrow^{(2)}\chi_\uparrow^{(3)} + h\, \chi_\downarrow^{(1)}\chi_\downarrow^{(2)}\chi_\downarrow^{(3)}. \qquad (6.17c)$$

If we continue along this path, we will quickly run out of letters in the alphabet.
Perhaps you see a tendency in the number of coefficients here - in the dimensionality
of the spin wave functions?

6.4.1 Exercise: A Blessing of Dimensionality

(a) How many coefficients would you need to express the spin wave function for a
system consisting of n spin-1/2 particles?
(b) Suppose you need 64 (usual) bits to represent a real number with reasonable accur­
acy on your computer. Suppose also that your computer memory is 16 GB.13
Now, if you want to simulate a quantum system consisting of several non-identical
spin-1/2 particles, how many such particles would you be able to encompass in your
computer’s memory without hitting the roof?
(c) Suppose now that we turn the tables. We use our set of spin-1/2 particles to encode
and process information - rather than trying to simulate these spins. How, then, is
the above good news?

12 Spoiler: this is just another word for propagator.
13 GB - gigabyte, where one byte is eight bits.

The answer to the last question above may not be obvious. Perhaps it is more obvious
that the answer to the first question is 2ⁿ. This is the curse of dimensionality revisited;
the complexity grows exponentially with the number of constituent parts of our quan­
tum system. We could turn this into an advantage. Suppose we have access to n spin-1/2
particles, and that we can manipulate them as we see fit. This would mean that we are
able to handle an enormous content of information with a comparatively low number
of two-state quantum systems. It shouldn’t be hard to imagine that the potential is
huge - if we are able to take advantage of this somehow. It should also be said that this
is not trivial; finding efficient ways to encode classical information into an entangled
quantum system is an active area of research.
The state of Eq. (6.17a) may be used as a quantum bit - a qubit. A qubit is a linear
combination of two orthogonal states of a two-state quantum system - any two-state
quantum system. As mentioned towards the end of Section 5.1, the spin projection
of a spin-1/2 particle is an example of such a system. Another example of such a
two-dimensional system is the polarization of a photon - left/right vs. up/down, for
instance.
Yet another possible realization could be the two lowest bound states of a confined,
quantized quantum system, such as an atom, an ion or a quantum dot, see Fig. 5.1. All
of these systems could act as qubits - under the condition that the dynamics could be
rigged, somehow, so that the other bound states and the continuum could be ignored.
See Fig. 4.4 for a schematic illustration of how the core of a trapped ion quantum com­
puter could look. Specific ions are manipulated by hitting them with tailored laser
beams - beams that may induce jumps between the two bound states in question.
In general, it is customary to label the two states analogously to classical bits - by
'0' and '1'. Moreover, they are typically written using Dirac notation - with brackets:
|0⟩ and |1⟩. Correspondingly, a general qubit reads

$$|\Psi\rangle = a\,|0\rangle + b\,|1\rangle, \qquad (6.18)$$

where, as for any wave function/quantum state, we insist that it is normalized:


$$\langle \Psi | \Psi \rangle = |a|^2 + |b|^2 = 1. \qquad (6.19)$$

At the risk of being excessively explicit, the only difference between Eq. (6.18) and
Eq. (6.17a) is the labelling; we have just substituted 'χ↑' by '|0⟩' and 'χ↓' by '|1⟩'. Thus,
a system consisting of n non-identical spin-1/2 particles could always serve as a set of
qubits, and any other set of qubits could be perceived as such. And, as before, the two
states may be represented as simple vectors in C²:

$$|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \quad \text{and} \quad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad (6.20)$$

so that a general qubit may also be written14

$$|\Psi\rangle = \begin{pmatrix} a \\ b \end{pmatrix}, \qquad (6.21)$$

as in Eq. (4.13).
14 We apologize for the abundance of different names for what is basically the same thing.
116 6 Quantum Technology and Applications

6.4.2 Exercise: The Qubit

From Eq. (6.18) it would seem that it would take four real numbers to specify a qubit:

$$|\Psi\rangle = (\operatorname{Re} a + i \operatorname{Im} a)\, |0\rangle + (\operatorname{Re} b + i \operatorname{Im} b)\, |1\rangle = \begin{pmatrix} \operatorname{Re} a + i \operatorname{Im} a \\ \operatorname{Re} b + i \operatorname{Im} b \end{pmatrix}. \qquad (6.22)$$

However, it is actually just two. Why?


Maybe it becomes clearer if we rewrite Eq. (6.22) as

$$\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} |a|\, e^{i\varphi_a} \\ |b|\, e^{i\varphi_b} \end{pmatrix}, \qquad (6.23)$$

where |a| is the modulus of a and φₐ is its argument.


Does the fact that a single qubit is fixed by two real numbers rather than by
two complex ones have severe consequences for the conclusion drawn at the end of
Exercise 6.4.1?15

The point here is that, as we learned in Exercise 1.6.3(c), any global phase, such as
exp(iφₐ), is immaterial; it does not change any physics or information content. Whenever
the wave function, or the state, is used to calculate anything physical, such as an
expectation value, ⟨Ψ|Â|Ψ⟩, it will cancel out. As for the magnitude of the coefficients,
|a| and |b|, these quantities are restricted by normalization, Eq. (6.19).
In the literature you will also see Eq. (6.23) expressed as

$$|\Psi\rangle = \cos\frac{\theta}{2}\, |0\rangle + e^{i\phi} \sin\frac{\theta}{2}\, |1\rangle, \quad \text{where} \quad \tan\frac{\theta}{2} = \frac{|b|}{|a|} \quad \text{and} \quad \phi = \varphi_b - \varphi_a, \qquad (6.24)$$

which motivates visualizing a single qubit as a point on a sphere of unit radius (see
Fig. 6.8).
Fig. 6.8).

Figure 6.8 The Bloch sphere. This is a graphical representation of a general single qubit. The geometrical correspondence between these angles and the parameters in Eq. (6.23) is provided in Eq. (6.24).
15 You may find this to be a bit of a leading question.

As in the case of ‘traditional’, classical information processing, we want to be able


to manipulate qubits. We do so by imposing a sequence of operations on the various
qubits that our quantum computer consists of. These gates may address individual
qubits or several simultaneously. Clearly, the former is more easily dealt with than the
latter. In any case, the manipulation comprises a sequence of operations of the form

|40 -+ Um, (6.25)

where U is a linear operator.

6.4.3 Exercise: Quantum Gates and Propagators

In Section 2.3 we introduced the operator U that brings our initial quantum state
Ψ(t = 0) into the final state at time t, Ψ(t).

(a) Suppose the system at hand has a Hamiltonian with no explicit time dependence.
In this case, U may formally be expressed in a particularly simple form. How?
(b) Suppose now that we are dealing with a single qubit only. Why is it then sufficient
to learn how |0) and 11) are affected by the gate U in order to determine the effect
on any qubit state?
(c) The NOT gate is a cornerstone in both ‘ordinary’, classical computing and quan­
tum computing. It changes 0 to 1 and 1 to 0 - with or without brackets. With our
single qubit expressed as in Eq. (6.21), what does U look like for this gate? Have
you seen this matrix before?

Of course you have; the NOT gate is nothing but the Pauli matrix σₓ, which we
encountered in Exercise 4.3.1. The other Pauli matrices are also frequently encountered
in quantum computing - along with the Hadamard gate:

$$H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}. \qquad (6.26)$$

6.4.4 Exercise: Quantum Gates are Unitary

A propagator or a quantum gate needs to preserve the norm of the quantum state it is
acting on. This leads to the requirement that it must be unitary:
$$U^\dagger = U^{-1} \quad \Leftrightarrow \quad U^\dagger U = I. \qquad (6.27)$$

(a) Why is conservation of the norm crucial in quantum physics?


(b) Prove that Eq. (6.27) actually ensures that the norm of the state is conserved.
(c) As we have seen, in Eq. (2.19), when the Hamiltonian H is time independent the
propagator may be written as U = exp( — iHt/h). How does this form ensure
unitarity? What’s the requirement?
(d) Check, by explicit calculations, that the Pauli matrices and the Hadamard gate are,
in fact, all unitary.
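While part (d) is meant as a pen-and-paper check, a few lines of NumPy confirm it numerically:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]])/np.sqrt(2)   # the Hadamard gate, Eq. (6.26)

for name, U in [('sigma_x', sx), ('sigma_y', sy), ('sigma_z', sz), ('Hadamard', H)]:
    print(name, 'unitary:', np.allclose(U.conj().T @ U, np.eye(2)))
```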

We have seen that the Pauli matrices may be used to express the interaction between
a spin-1/2 particle and a magnetic field. This, in turn, allows us to construct quantum
gates on spin qubits by tailoring these magnetic fields. Specifically, in Exercise 4.3.1 we
learned that the Pauli matrices are both Hermitian and their own inverses. Thus, they
are, as we just saw, also unitary which, in turn, also means that they are admissible
single-qubit quantum gates. Here we will see how they may be implemented on spin
qubits.

6.4.5 Exercise: Pauli Rotations

This exercise is, to a large extent, a revisit of Exercise 5.1.1.


In quantum computers, gates are often implemented by subjecting qubits to piece-
wise constant influences of some sort. Suppose a single qubit in the form of a spin-1/2
particle is exposed to a magnetic field pointing in the x-direction with a duration of T
time units:
$$B_x(t) = \begin{cases} B_0, & 0 \leq t \leq T, \\ 0, & \text{otherwise}. \end{cases} \qquad (6.28)$$

Then Eq. (2.19) applies during the time interval in question, and the Hamiltonian
may be written
$$H = \frac{\hbar W}{2}\, \sigma_x. \qquad (6.29)$$

(a) By comparing with Eqs. (4.15) and (4.16), how does the coupling strength W relate
to the strength B₀ of the magnetic field?
(b) Suppose we fix the strength of the magnetic field B₀ and, thus, also W, while the
length of the pulse T may be tuned. Use Eq. (5.5) to tune T so that the propagator
U coincides with the NOT gate.
In case you find the factor −i in the second term in Eq. (5.5) rather pesky, do
remember the lesson learned in Exercises 1.6.3(c) and 6.4.2: a global phase factor
does not matter.
(c) Verify that, by directing the magnetic field in the y- or the z-direction, the other
Pauli matrices may also be constructed as quantum gates.
(d) Show that the Hadamard gate, Eq. (6.26), can be implemented by a Pauli rotation
followed by the NOT gate.
Actually, it can also be implemented by first imposing a az gate followed by the
same Pauli rotation. Show this too.
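As a quick sanity check of part (b): with the Hamiltonian of Eq. (6.29), choosing the duration such that WT = π reproduces the NOT gate up to the immaterial global phase −i. A minimal verification in Python:

```python
import numpy as np
from scipy.linalg import expm

W = 1.0                                  # coupling strength, arbitrary units
sx = np.array([[0, 1], [1, 0]])
T = np.pi/W                              # duration tuned for a NOT gate
U = expm(-1j*(W/2)*sx*T)                 # Eq. (2.19) with H = (hbar*W/2)*sigma_x, hbar = 1
print(np.round(U, 12))                   # -i*sigma_x: NOT up to a global phase
```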

It is useful to have an idea of how to manipulate single qubits. However, in
order to implement meaningful quantum algorithms, we must also be able to mix
them up. Useful quantum algorithms require that we can address multiple qubits
simultaneously - that we can construct gates which involve two or more qubits.
This means that the particles must be brought to interact somehow. We introduced
an example of such an interaction in Exercise 5.1.4.

Analogously to Eq. (5.20), any two-qubit state may be written as a linear combination
of product states. We write |0⟩|0⟩ as |00⟩ and so on, and, in this context, rewrite
Eq. (5.20) as

|\Psi\rangle = a\,|00\rangle + b\,|01\rangle + c\,|10\rangle + d\,|11\rangle \;\leftrightarrow\; \begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix}. \qquad (6.30)

Two important two-qubit gates are the controlled-NOT (CNOT) gate and the SWAP
gate. The latter simply turns the first qubit into the second and vice versa, |10⟩ ↔ |01⟩
and so on. The CNOT gate flips the second qubit - but only if the first one is 1; the
first bit, the control bit, remains unchanged.

6.4.6 Exercise: CNOT and SWAP

(a) With the vector representation of Eq. (6.30), what do the CNOT and SWAP
matrices look like? Remember that the matrices corresponding to these linear
transformations may be determined by working out how each of the four basis
vectors is transformed.
(b) In the setup of Exercise 5.1.4, CNOT simply cannot be implemented. Why is that?
Hint: Remember the lesson learned in Exercises 2.6.2 and 5.1.4; it has to do with
exchange symmetry.
(c) We could envisage a controlled switch off-gate - one that turns the second qubit
into |0) if the first qubit, the control bit, is |1) and does nothing if the control bit
is |0). You will never find such a gate in a quantum computer. Why is that?
(d) Now we will turn off the external magnetic fields in Eq. (5.21) completely, so that
both ε and Δ are zero. Also, for simplicity, we set the spin-spin interaction strength
u = 1, and ℏ = 1, as usual:

H = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 2 & 0 \\ 0 & 2 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}. \qquad (6.31)

Suppose that we can manipulate our quantum system so that the spin-spin inter­
action is switched on only for a finite time T - for instance by bringing the
particles close and then apart again. Then this Hamiltonian is able to implement
the SWAP gate if we tune the duration T adequately - analogously to what we did
in Exercise 6.4.5(c).
What is the minimal value for T that achieves this?
This could be done by analytical means. It may also be done numerically, how­
ever. One way of doing so is to vary the duration T in order to find the minimum
of the function

C(T) = 1 - \frac{1}{16}\left| \mathrm{Tr}\left( U_{\mathrm{SWAP}}^\dagger U(T) \right) \right|^2, \qquad (6.32)

where U(T) is the actual gate for duration T, given by Eq. (2.19) with Eq. (6.31),
and U_SWAP is the SWAP gate. The function C(T) is zero if, and only if, our gate U
is identical to the SWAP gate - up to a trivial phase factor. 'Tr(·)' means that we
should calculate the trace of the matrix in question, which is simply the sum of all
the diagonal elements.
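
A minimal numerical sketch of this minimization could look as follows; the scan
range for T is an arbitrary assumption:

import numpy as np
from scipy.linalg import expm

H = np.array([[1, 0, 0, 0],
              [0, -1, 2, 0],
              [0, 2, -1, 0],
              [0, 0, 0, 1]], dtype=complex)      # Eq. (6.31)
U_swap = np.array([[1, 0, 0, 0],
                   [0, 0, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1]], dtype=complex)

def C(T):
    U = expm(-1j * H * T)                        # Eq. (2.19) with hbar = 1
    return 1 - abs(np.trace(U_swap.conj().T @ U))**2 / 16

Ts = np.linspace(0.01, 2, 2000)
costs = [C(T) for T in Ts]
print('minimal T ~', Ts[np.argmin(costs)])       # compare with the analytical value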

We introduced the concept of entanglement and Bell states in Section 4.2. And we
played around with the entangled spin states of Eqs. (4.11). In Dirac notation these
two states may be written:

|\Psi^+\rangle = \frac{1}{\sqrt{2}} \left( |01\rangle + |10\rangle \right), \qquad (6.33a)
|\Psi^-\rangle = \frac{1}{\sqrt{2}} \left( |01\rangle - |10\rangle \right). \qquad (6.33b)
These are maximally entangled states for which a readout/measurement would
necessarily yield opposite results for the two qubits; if a measurement on the first qubit
yields the result 0, the other one must be 1, unless you have tampered with it after
measuring the first. The following two two-qubit states are also maximally entangled,
but in this case a readout is destined to yield the same result for both qubits:
|\Phi^+\rangle = \frac{1}{\sqrt{2}} \left( |00\rangle + |11\rangle \right), \qquad (6.34a)
|\Phi^-\rangle = \frac{1}{\sqrt{2}} \left( |00\rangle - |11\rangle \right). \qquad (6.34b)
The four states of Eqs. (6.33) and (6.34) are known as the Bell states. As mentioned,
they are named after John Stewart Bell, whose picture you can see in Fig. 6.9.
As we alluded to in Exercise 4.2.1(d) and saw in Exercise 6.4.1, entanglement of
quantum states opens up a vast space for doing computations. The ability to construct
quantum algorithms with an advantage over traditional, classical ones relies on this
capacity. Certain schemes from quantum information theory, such as superdense coding
and quantum teleportation, make explicit use of the Bell states.

6.4.7 Exercise: Prepare Bell

A quantum computer typically starts out with just zeros as the initial state. In the
two-qubit case, that would be the state |00⟩.
Suppose that, starting out with |00⟩, we apply first the Hadamard gate to the first
qubit, and second the CNOT gate with the first bit as control bit. This will produce
one of the Bell states. Which one is that?
Using CNOT, Hadamard and NOT, all four Bell states can be constructed from |00⟩.
Try to work out how this is achieved for the other three Bell states.
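
A quick way to see the answer for yourself is to let NumPy do the matrix products;
this sketch assumes the vector representation of Eq. (6.30):

import numpy as np

ket00 = np.array([1, 0, 0, 0], dtype=complex)   # the state |00>
# Hadamard acting on the first qubit only
H1 = np.kron(np.array([[1, 1], [1, -1]]) / np.sqrt(2), np.eye(2))
# CNOT with the first qubit as the control bit
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

bell = CNOT @ H1 @ ket00
print(bell)   # compare with Eqs. (6.33) and (6.34) to identify the Bell state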

While we consistently have been alluding to spin-1/2 particles when addressing quan­
tum bits and quantum gates, this is, as mentioned, just one out of several possible

Figure 6.9 | John Stewart Bell (1928-1990) at CERN (Conseil Européen pour la Recherche Nucléaire), Geneva, Switzerland, in 1984.
The 2022 Nobel Prize in Physics was awarded to Alain Aspect, John F. Clauser and Anton Zeilinger for conducting
experiments based on his ideas. This, of course, is a clear recognition of Bell's work also.

realizations. It should also be mentioned that quantum information theory may very
well be studied without regard to practicalities such as the physics behind the qubits
or how the gates are actually implemented.

6.5 Quantum Protocols and Quantum Advantage

In the preceding section we introduced a few of the basic building blocks of quantum
computing and quantum information processing. And in Exercise 6.4.1 we learned that
the potential information content could be huge - even with just a few working qubits.
In fact, with about 55 qubits16 we would be able to process an information content
beyond any present-day supercomputer.
And there is more: a quantum computer is parallel by nature. You may be familiar
with parallel processing - the ability to chop a large computational job into chunks
and distribute them to different processing units (CPUs). On a quantum computer,
no such chopping is needed; the parallelism is given directly by linearity. Since the
gate/propagator in Eq. (6.25) is linear, linear combinations of initial states are preserved:

a|\psi_1\rangle + b|\psi_2\rangle \;\rightarrow\; U\left( a|\psi_1\rangle + b|\psi_2\rangle \right) = a\,U|\psi_1\rangle + b\,U|\psi_2\rangle. \qquad (6.35)

16 That would be 55 noiseless, logical, qubits, mind you.



Correspondingly, we could prepare our initial state in a linear combination of all
possible inputs and have the output for each and every one of them in a single
calculation.
So the advent of working quantum computers is much anticipated - with a great deal
of expectation and optimism. However, actually harnessing the huge advantage quan­
tum computers seem to have over ordinary, classical ones, is not that straightforward.
First of all, the collapse of the wave function prevents us from directly exploiting the
quantum parallelism just mentioned; while the final state of an initial state with sev­
eral inputs does contain all corresponding outputs, we can only read off one each time.
Upon readout/measurement, the state will collapse - in a non-deterministic manner.
To access different outputs, you would have to rerun your quantum program several
times - leaving you, as Max Born taught us, with a distribution of answers, each one
with its own frequency/probability. In order to acquire the output, or outputs, of inter­
est, you may have to run your quantum calculation so many times that the quantum
parallelism does not really offer any advantage.
The true potential in harnessing quantum parallelism and what we coined a blessing
of dimensionality in Exercise 6.4.1 is more subtle. Any idea along the lines of ‘let’s run
this computation on a quantum computer instead of a classical one in order to speed
it up’ is too naive. However, if we can orchestrate a quantum calculation so that only
outputs of interest are read off in the end and all others come out with zero amplitude,
there may be a speedup over classical computing - a quantum advantage. This, in turn,
requires a different approach to programming, a quantum way of thinking about it.
There is a rich zoo of specific algorithms that succeed in exploiting the quantum
advantage [23]. Some of these schemes serve mostly to illustrate the advantage of quan­
tum computing from a rather academic point of view. Others, such as the quantum
Fourier transform and Shor's algorithm, are likely to be very important when large and
robust enough quantum computers are accessible. The problem with such schemes,
however, is that they require a large number of noiseless qubits. It is hard to keep
a quantum computer entirely isolated from interference from its surroundings. The
quantum system and its environment tend to get entangled - with the consequence
that the quantum system gradually loses its coherence property, and interference is
ruled out. We will touch upon this issue in the next chapter.
There are quantum schemes, however, that are less sensitive to noise and decoherence.
Enough of these general considerations. The notion of a possible quantum advan­
tage to computation and information processing makes a lot more sense if we know
some specific examples. We will not indulge ourselves with any study of specific algo­
rithms for quantum computing per se. We will, however, look at two illustrative
quantum communication protocols. It is fair to say that the first one, although it has
been realized experimentally, falls into the category of being mostly of academic inter­
est. The second one, on the other hand, addresses quantum key distribution, for which
commercial implementations are already offered.

" Figure 6.10 | The superdense coding scheme. The two piecewise horizontal lines depict the two qubits. The H operator, acting on
the first qubit, is the Hadamard gate, while the following symbol is the CNOT gate with the first qubit as the control
bit. The gauge symbols at the end indicate measurement.

6.5.1 Exercise: Superdense Coding

This approach dates back to the 1970s, but it wasn't published until 1992 [9].
A person, typically called Alice, wants to send a simple message consisting of two
classical bits to her friend Bob. In other words, her message will be one of the four
alternatives 00, 01, 10 or 11.17 Actually, she may be able to do so by sending a single
qubit instead of two classical ones. Suppose their mutual friend Charlie does what we
did in Exercise 6.4.7, namely he prepares a Bell state. It could, for instance, be |Φ⁺⟩
in Eq. (6.34a). Charlie sends one of the entangled particles/qubits to Alice and then
the other one to Bob. Alice may impose gates on her single qubit. Depending on the
message she wants to convey, she imposes one of the three Pauli matrices. Or she does
nothing - or, in other words, she imposes the identity operator gate, I_2. The scheme is
illustrated in Fig. 6.10.
Next, she sends her particle to Bob. The particle pair, which is still entangled, is
reunited physically with Bob, who imposes the same gates on it as Charlie did in the
first place - however, in reverse order.
For each of Alice's four possible operations, I_2, σ_x, σ_y or σ_z, what will be the
outcome of measuring each of the two qubits for Bob?
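
You may, of course, also simulate the whole protocol numerically. In the following
sketch, Alice's gate acts on the first qubit and Bob undoes Charlie's preparation in
reverse order; the vector conventions of Eq. (6.30) are assumed:

import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Hg = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # Eq. (6.34a)

for name, gate in [('I_2', I2), ('sigma_x', sx), ('sigma_y', sy), ('sigma_z', sz)]:
    state = np.kron(gate, I2) @ phi_plus       # Alice acts on her qubit only
    state = np.kron(Hg, I2) @ (CNOT @ state)   # Bob: CNOT first, then Hadamard
    probs = np.abs(state)**2
    print(name, '->', np.round(probs, 3))      # probabilities for |00>,|01>,|10>,|11>

Each of Alice's four choices should single out one of the four outcomes with
certainty - which is exactly what lets her convey two classical bits with one qubit.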

This may look like going through a whole lot of trouble for the three friends just to
be able to send a single qubit instead of two classical ones. However, the example does
demonstrate that a quantum approach to information processing allows for shortcuts
unavailable in a classical approach. At least that’s something.
In the next example, Alice and Bob will exchange a lot more than just a single bit -
both classical and quantum ones.

6.5.2 Exercise: Quantum Key Distribution

The scenario is the following: Alice wants to be able to pass on a message to Bob, a
much longer one this time, with certainty of not having been eavesdropped. This can

17 Alice and Bob may have agreed on assigning meaning to each of these alternatives beforehand - such as
'All is fine', 'I miss you', 'Please send money' and 'Come rescue me!', respectively.

actually be done. The scheme requires that they share two communication channels -
a classical one, such as a phone line, and a quantum channel. Again, we will illustrate
the protocol using spin-1/2 particles.

(a) Some purely classical considerations first: imagine that some message has been
encoded as a long sequence of zeros and ones. Imagine also that Alice and Bob
- and no one else - hold an encryption key, also consisting of zeros and ones,
which is as long as the original message. Next, Alice adds the message and the key,
modulo 2, bit by bit,18 and broadcasts it - publicly. She does not have to worry
about anyone intercepting the message; they will not be able to decipher it.
Why is that?
(b) We now set out to construct such an encryption key. Alice sends out spin-1/2 par­
ticles to Bob through a Stern-Gerlach apparatus which can be rotated 90 degrees
between each emission, switching between x and z spin projection measurements.19
For each particle Alice registers whether it is an x or a z eigenstate, and what
eigenvalue it has - whether it has a positive or a negative spin projection.
For every particle that Bob receives from Alice, he picks a direction to orient his
own apparatus - x or z - and measures the particle's spin projection along this axis.
Sometimes he picks the basis in which the state is an eigenstate, sometimes he does
not. He cannot tell in advance whether he has chosen the 'right' axis or not. Each
and every time he will find the eigenvalue +ℏ/2 or −ℏ/2. He translates a positive
measurement into '0' and a negative one into '1'.
Afterwards, with their classical communication channel, Alice and Bob may
compare their choices of axes, bit by bit, and agree on a key - without actually
sharing any content of the key itself.
How? Does it matter if their conversation is tapped?
What you learned in Exercise 4.3.2 may come in handy. And, perhaps, you will
also find Table 6.1 useful. Here we have sketched a possible scenario for the first 12
spin-1/2 particles that Alice sends off. Which bits can be used for their key here?
Where can Alice and Bob be sure to have the same bit value?

Table 6.1 This potential scenario demonstrates how Alice and Bob may agree on a secret key - provided that
they haven't been eavesdropped. This visualization is inspired by Ref. [8].

Alice's state           ←  ↑  ↑  →  ↓  →  ←  ↓  ↓  ↑  ↓  ←  ···
Alice's bit             1  0  0  0  1  0  1  1  1  0  1  1  ···
Bob's choice of basis   x  x  z  z  x  z  z  z  x  z  x  x  ···
Bob's readoff           ←  →  ↑  ↓  ←  ↓  ↓  ↓  ←  ↑  →  ←  ···
Bob's bit               1  0  0  1  1  1  1  1  1  0  0  1  ···

18 This is simple arithmetic: 0 + 1 = 1 + 0 = 1 and 0 + 0 = 1 + 1 = 0.


19 It would be tempting to call the sender Walther instead of Alice. However, the name Alice is fixed by
tradition.

Table 6.2 This potential scenario demonstrates how Alice and Bob may reveal that Eve has been trying to
intercept their key. The visualization is inspired by Ref. [8].

Alice's state           ←  ↑  ↑  →  ↓  →  ←  ↓  ↓  ↑  ↓  ←  ···
Alice's bit             1  0  0  0  1  0  1  1  1  0  1  1  ···
Eve's choice of basis   z  z  x  z  x  x  x  z  z  x  x  z  ···
Eve's readoff           ↑  ↑  ←  ↓  ←  →  ←  ↓  ↓  →  →  ↓  ···
Eve's bit               0  0  1  1  1  0  1  1  1  0  0  1  ···
Bob's choice of basis   x  x  z  z  x  z  z  z  x  z  x  x  ···
Bob's readoff           →  ←  ↑  ↓  ←  ↓  ↑  ↓  ←  ↓  →  ←  ···
Bob's bit               0  1  0  1  1  1  0  1  1  1  0  1  ···

(d) Eve, the eavesdropper,20 wants to steal the key so that she can decipher the message
later on. She may be able to intercept Alice and Bob’s phone call where they discuss
their choices of projection axes - x or z. But she knows that this will not be enough,
she must also intercept the particles sent through the quantum channel, where
Alice sends her string of carefully prepared spin-1/2 particle states. For each of
these particles, Eve measures the spin projection somehow and then sends it off to
Bob - hoping that he won’t be able to tell that she has read it off.
However, if Eve actually has tried to pull off such a scam, Alice and Bob
may, using their classical communication channel, easily reveal that Eve has been
tampering with their signal. How?
Perhaps Table 6.2 could assist your train of thought here? In this scenario Alice
has sent off exactly the same sequence of carefully prepared spin-1/2 particles as
in Table 6.1. But this time Eve intercepts them and reads them off before passing
them on to Bob. Just like Bob, she needs to choose an axis for which she measures
the spin projection. Can you identify the specific bits where Eve reveals herself?

This key distribution scheme is often abbreviated as BB84 - after the inventors,
Charles Bennett21 and Gilles Brassard, and the year of its invention [8]. The beauty
of it is that no eavesdropper can acquire the key without revealing herself. With access
to the quantum channel she may perform her own measurements on the particles trans­
mitted through. And by tapping the phone call she also knows which particles/qubits
are used to construct the key and which ones are dropped by Alice and Bob. But only
afterwards. She did not know which basis to use when she measured them. Just like
Bob, Eve simply has to decide for herself if she wanted to measure the x-projection or
the z-projection. Suppose Alice sent away and Bob reports having performed an
x -measurement for that particular particle. Then Eve knows that this particle counts,
but she didn’t know that - nor which basis to use - when she actually read it off. If

20 Admittedly, this pun is quite corny. But tradition is to blame.


21 The attentive reader may have noticed that Charles Bennett was also one of the people behind the
superdense coding scheme.

she happened to measure the z-projection, she may have got the wrong result. Her
real problem, however, is that she has collapsed the state from χ← into χ↑ or χ↓. So
when Bob later on measures the spin projection in the x-basis, he may very well end up
with a spin-right result, χ→. This is exactly what happens for the first spin particle in
Table 6.2.
If Alice and Bob now compare a relatively short part of their key over the phone, a
large fraction of the bits, which should have coincided, will differ. And Eve is revealed.
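
While the tables above walk through a dozen particles, a statistical simulation may
be more convincing. The following sketch is a purely classical mock-up of the
protocol - no wave functions involved - in which a measurement in the wrong basis is
modelled as a coin flip; the sample size and seed are arbitrary assumptions:

import numpy as np
rng = np.random.default_rng(seed=1)

n = 10000
alice_basis = rng.choice(['x', 'z'], n)
alice_bit = rng.integers(0, 2, n)

# Eve measures each particle in a random basis; a wrong basis randomizes the bit
eve_basis = rng.choice(['x', 'z'], n)
eve_bit = np.where(eve_basis == alice_basis, alice_bit, rng.integers(0, 2, n))

# Bob measures what Eve passes on
bob_basis = rng.choice(['x', 'z'], n)
bob_bit = np.where(bob_basis == eve_basis, eve_bit, rng.integers(0, 2, n))

# Sifting: keep only events where Alice's and Bob's bases agree
keep = alice_basis == bob_basis
errors = np.mean(alice_bit[keep] != bob_bit[keep])
print(f'error rate in sifted key: {errors:.3f}')   # ~0.25 with Eve in the channel

Replacing eve_basis and eve_bit by Alice's - that is, removing Eve from the
channel - should bring the error rate in the sifted key down to zero.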

6.6 Adiabatic Quantum Computing

Adiabatic quantum computing, and the closely related notion of quantum annealing, rep­
resents an alternative, or rather a supplement, to the gate-based approach to quantum
computing addressed above.22 Although the adiabatic and the gate-based approaches
to quantum computing have been proven to be formally equivalent, it is fair to say that
they are quite different - both conceptually and in terms of implementation. Adiabatic
quantum computing schemes are generally limited to solving optimization problems.
In this sense, it is more restricted than universal, gate-based quantum computing. However,
efficient optimization is most certainly of interest when it comes to
practical problem solving - for instance within logistics and machine learning, or, as
we learned in Chapter 3, within quantum physics and chemistry itself for that mat­
ter. A quantum advantage in this regard would be most welcome. Adiabatic quantum
computing also has the advantage of being more robust against noise and less prone
to decoherence than gate-based quantum computing.
As the name suggests, adiabatic quantum computing exploits the adiabatic theorem,
with which we became acquainted in Section 5.5. In such schemes, a quantum system
starts out in the ground state of a well-known, easily implementable Hamiltonian H_I
and then, slowly, the Hamiltonian is evolved into another one, H_F - one whose ground
state somehow solves our minimization23 problem. In effect, our system is subject to
the time-dependent Hamiltonian

H(t) = s(t) H_I + (1 - s(t)) H_F, \qquad (6.36)

where both H_I and H_F are time independent and s(t) slowly and continuously changes
from 1 to 0. If we manage to encode the function we wish to minimize into the ground
state of H_F somehow, we should be able to find the solution as long as s(t) evolves slowly
enough for our system to stay in the ground state throughout the process. In the crossover
between H_I and H_F, it is a clear advantage if the dynamic ground state evades
avoided crossings - in accordance with the discussion following Exercise 5.5.1.

22 We will not distinguish between adiabatic quantum computing and quantum annealing here; their
distinction is, let’s say, rather elusive.
23 As optimization and minimization are two sides of the same coin, the processes only differ by the sign of
the function in question; we hope that you will forgive us for using these terms interchangeably.

We will illustrate adiabatic quantum computing by a simple example which is
strongly inspired by Ref. [17].

6.6.1 Exercise: Quantum Minimization

Suppose we set out to find the global minimum of this function:24

V_F(x) = (x^2 - 1)^2 - x/5. \qquad (6.37)

We let this function be the potential of a single-particle Hamiltonian, Eq. (2.8),
which we take to be our H_F. As the potential of our initial Hamiltonian, H_I, we take
the harmonic oscillator, Eq. (3.9). Finally, let s(t) evolve according to

s(t) = \frac{1}{2}\left( 1 + \cos\left( \frac{\pi t}{T} \right) \right), \quad t \in [0, T]. \qquad (6.38)

(a) Whereabouts do you expect the global minimum of V_F(x) to be? Perhaps it is easy
to estimate from looking at the functional form of Eq. (6.37). Or you could simply
plot it - or have a look at the time development of the potential in Fig. 6.11. You
could, of course, also do it the proper way by determining the roots of V_F'(x).
(b) Solve the Schrodinger equation for this system with various choices for T and
monitor the evolution as you go along. For our H_I, the ground state is known
analytically. With m, ℏ and k in Eq. (3.9) all being equal to unity, it reads

\psi_0(x) = \pi^{-1/4} e^{-x^2/2}. \qquad (6.39)

Your implementation from Exercise 5.5.2 may be useful. As in that exercise, you
could use the split-operator scheme, Eq. (5.25), with A being the kinetic energy and
B the time-dependent potential. However, if you are a bit stingy when it comes to

Figure 6.11 | This surface illustrates the slowly varying potential of the Hamiltonian in Eq. (6.36), in which the harmonic oscillator
potential, Eq. (3.9), evolves into Eq. (6.37).
24 We do realize that this is not too hard in the first place. Just bear with us for a while, OK?
128 6 Quantum Technology and Applications

the numerics, you could also simply do a full matrix exponentiation at each time
step, Eq. (5.23b), with H → H(t + Δt/2).
How large must the duration T be in order to ensure, more or less, adiabatic
evolution?
(c) While the ground state of H_F encapsulates the minimum of Eq. (6.37), it is far from
localized. And the ground state's preference for the global minimum to the right
over the local one to the left could certainly be stronger. In order to remedy this,
we will, quite artificially, start increasing the mass of the 'particle'. Thus, the time
dependence shifts from the potential to the kinetic energy term. This is, in effect,
a way to slowly and adiabatically turn off quantum effects and make the system
increasingly classical.
From t = T and onwards, our new time-dependent Hamiltonian is thus

H_{\mathrm{loc}}(t) = -\frac{\hbar^2}{2m(t)} \frac{d^2}{dx^2} + V_F(x). \qquad (6.40)

We will use this time-dependent 'mass':

m(t) = m_0 \left( 1 + 0.01 (t - T)^2 \right), \qquad (6.41)

where m_0 = 1 is the initial mass.


Keep on evolving our system for t > T with this Hamiltonian and see if we can,
more or less, localize our wave function around the minimum near x = 1.
If you applied a split-operator technique in your implementation from (b), A
and B should switch roles now. The middle of the sandwich, the time-dependent
kinetic energy propagation, can be done efficiently by means of Fourier transforms
as in Eq. (2.15). Or you could continue to calculate the full matrix exponential on
the fly.
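
A minimal sketch of the first stage of such a simulation, using the split-operator
scheme with the potential and kinetic factors in one possible ordering, could look like
this; the grid size, duration and time step are illustrative assumptions:

import numpy as np

# Grid and pulse parameters (illustrative values)
L, n = 10.0, 512
x = np.linspace(-L/2, L/2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
T, dt = 100.0, 0.05
V_I = 0.5 * x**2                     # harmonic oscillator potential, Eq. (3.9)
V_F = (x**2 - 1)**2 - x / 5          # the potential to minimize, Eq. (6.37)

psi = np.pi**(-0.25) * np.exp(-x**2 / 2)    # ground state of H_I, Eq. (6.39)

for t in np.arange(0, T, dt):
    s = 0.5 * (1 + np.cos(np.pi * (t + dt / 2) / T))   # Eq. (6.38) at the midpoint
    V = s * V_I + (1 - s) * V_F                        # Eq. (6.36)
    # One split-operator step: half potential, full kinetic, half potential
    psi = psi * np.exp(-0.5j * dt * V)
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))
    psi = psi * np.exp(-0.5j * dt * V)

dx = x[1] - x[0]
print('<x> after the sweep:', np.sum(x * np.abs(psi)**2) * dx)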

Hopefully, you agree that, in the end, a position measurement should be likely to
yield a result very close to the minimum of our potential. Admittedly, it hardly seems
worthwhile to go through all the trouble of solving the time-dependent Schrodinger
equation, not to mention implementing it on an actual quantum computer, in order to
find the minimum of a function as simple as the one in Eq. (6.37). The advantage of
such approaches comes into play for more complex optimization problems, problems
involving several variables and qubits.
We do hope that the example serves to illustrate how quantum behaviour may be
exploited in order to avoid getting stuck in local minima. This is a severe problem for
more conventional optimization schemes, which typically are based on some adapta­
tion of the gradient descent method, which we first encountered in Exercise 3.4.4.
If you are familiar with simulated annealing, which is a classical approach also aim­
ing to reduce the risk of getting stuck in local minima, it should not be too hard to see
the motivation for the term quantum annealing. Quantum annealing has the advan­
tage over simulated annealing that a quantum system may escape local minima by
tunnelling.

Figure 6.12 | Schematic of the quantum bit used in the D-Wave One quantum annealer. In such a setup, two pieces of
superconducting material are separated by a small barrier through which a current will pass without any external
voltage. This current is quantized, and the two lowest current states, one corresponding to a current circulating
clockwise and the other counter-clockwise, constitute the qubit. Media courtesy of D-Wave.

Perhaps placing Exercise 6.6.1 under the technology heading is stretching it a bit.
However, the first commercially available quantum computer, the D-Wave One (see
Fig. 6.12), was, in fact, a quantum annealer. Their quantum hardware implements Ising
models. An Ising model may be seen as an arrangement of several spin-1/2 particles
that all interact with their neighbours - a generalization of the two-spin system we saw
in Exercise 5.1.4, in other words. The Hamiltonians of such Ising models are used to
encode the function to be minimized - functions that take binary numbers as inputs
and outputs.
7 Beyond the Schrodinger Equation

Thus far, we have referred to the Schrodinger equation as if it were the one, fundamen­
tal equation in quantum physics. It’s not.
We will get acquainted with two rather different kinds of generalizations of the
Schrodinger equation. First, we will address how quantum physics may be formu­
lated in a manner consistent with relativity.1 Second, we will discuss reversibility and
openness in quantum physics.

7.1 Relativistic Quantum Physics

By now we have learned that things become rather strange when they become small -
when quantum physics takes over and Newtonian mechanics no longer applies. How­
ever, Albert Einstein taught us that even large things look quite different from what
we are used to when they reach velocities comparable with the speed of light (Fig.
7.1). According to his special theory of relativity, the kinetic energy of a classical, free
particle of mass m is

T_{\mathrm{rel}} = mc^2 \left( \sqrt{1 + \left( \frac{p}{mc} \right)^2} - 1 \right), \qquad (7.1)

where c is the speed of light. Clearly, this differs from the expression we have been
using: T = p2/2m. The latter is only a decent approximation when we are dealing with
objects moving a lot slower than light. Usually we are, but sometimes we are not.

7.1.1 Exercise: Relativistic Kinetic Energy

Make a plot of T_rel along with the kinetic energy expression we have used thus far,
T = p^2/2m, as functions of the momentum p.
Prove that T_rel ≈ p^2/2m when (p/mc)^2 ≪ 1.
You could set the mass m to 1 kg and use SI units. However, it may be more convenient
to let mc^2 be your energy unit and introduce the variable x = (p/mc)^2. For the
sake of clarity: this exercise has nothing to do with quantum physics as such.

1 Special relativity, that is.
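
A small sketch of such a plot, in the suggested dimensionless units, could read:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2, 200)              # x = (p / m c)^2, dimensionless
T_rel = np.sqrt(1 + x) - 1              # Eq. (7.1) in units of m c^2
T_newton = x / 2                        # p^2 / 2m in the same units

plt.plot(x, T_rel, label='relativistic')
plt.plot(x, T_newton, '--', label='$p^2/2m$')
plt.xlabel('$(p/mc)^2$')
plt.ylabel('$T/mc^2$')
plt.legend()
plt.show()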


Figure 7.1 | Like quantum physics, special relativity has some consequences that are quite far from intuitive. This cartoon illustrates
the phenomenon of time dilation; time progresses more slowly for an observer moving at a velocity close to the speed
of light than for one at rest.

There are situations in which quantum particles are accelerated towards relativ­
istic speeds - velocities that constitute a significant fraction of the speed of light.
For instance, with state-of-the-art lasers, electrons may be driven towards speeds for
which a non-relativistic treatment is inadequate. And relativity certainly plays a role in
high-energy physics facilities, such as CERN, where particles are collided with other
particles to create new ones. For atoms and molecules at rest with large, highly charged
nuclei, the innermost electrons may be accelerated towards the speed of light by the
Coulomb attraction from the nucleus. In fact, this is the reason why gold looks like
gold and not silver.
In other words, in both dynamical and stationary situations, quantum equations
consistent with special relativity are sometimes needed. A natural starting point in this
regard would be the following energy relation from special relativity:2

E^2 = m^2 c^4 + p^2 c^2. \qquad (7.2)

The (time-dependent) Schrodinger equation comes about by insisting that the action
of the energy operator, the Hamiltonian, on the wave function Ψ provides the same
result as the action of iℏ ∂/∂t on Ψ. If we follow this train of thought and turn the above
energy expression into an operator,

\hat{H}^2 = m^2 c^4 + \hat{p}^2 c^2, \qquad (7.3)

we arrive at the Klein-Gordon equation for a free particle:

-\hbar^2 \frac{\partial^2}{\partial t^2} \Psi = \left[ -\hbar^2 c^2 \nabla^2 + m^2 c^4 \right] \Psi. \qquad (7.4)
2 In the special case of a particle at rest, p = 0, a much more famous special case of Eq. (7.2) emerges.

With a potential it reads3

\left[ i\hbar \frac{\partial}{\partial t} - V(\mathbf{r}) \right]^2 \Psi = \left[ -\hbar^2 c^2 \nabla^2 + m^2 c^4 \right] \Psi. \qquad (7.5)

Actually, it was neither the Swede Oskar Klein nor the German Walter Gordon
who arrived first at the equation that came to bear their names. It was actually Erwin
Schrodinger himself; it was his first attempt at formulating a wave equation for matter.
He dismissed it on the grounds that it did not provide the bound state energies for
the hydrogen atom with sufficient precision. Instead he settled on a non-relativistic
formulation. At the time, he could hardly have been expected to know why his first
attempt 'failed'; the reason is not intuitive at all.
This issue was eventually resolved after the Englishman Paul Dirac, in 1928, had
proposed an alternative relativistic equation. Dirac, who you can see in the middle of
the crowd in Fig. 1.3, didn’t appreciate the fact that the Klein-Gordon equation is
second order in both time and space. Instead he insisted on a linear equation:4

i\hbar \frac{\partial}{\partial t} \Psi = \left[ c\,\boldsymbol{\alpha} \cdot \hat{\mathbf{p}} + mc^2 \beta \right] \Psi, \qquad (7.6)

with \boldsymbol{\alpha} = [\alpha_x, \alpha_y, \alpha_z] in the three-dimensional case. In retrospect, it seems fair to say
that it comes off as rather surprising that what appears to be an overly pragmatic
approach would actually lead to such a huge breakthrough.

7.1.2 Exercise: Arriving at the Dirac Equation

Dirac demanded that his Hamiltonian,

\hat{H} = c\,\boldsymbol{\alpha} \cdot \hat{\mathbf{p}} + mc^2 \beta, \qquad (7.7)

should fulfil Eq. (7.3). This may appear rather unreasonable. And, indeed, it cannot
be achieved if the components of α and β are numbers. It can, however, if they are
matrices.

(a) In the one-dimensional case, where we have no α_y nor any α_z, show that
Eq. (7.3) leads us to require the following:

\alpha_x^2 = \beta^2 = I, \quad \alpha_x \beta + \beta \alpha_x = 0, \qquad (7.8)

where I is the identity matrix and '0' here is to be understood as the zero matrix.
Also, show by direct calculation that these equations are satisfied with these
choices:

\alpha_x = \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \beta = \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \qquad (7.9)

3 For the record: we have not really explained how the potential energy actually enters into the equation
here.
4 This version corresponds to a time-independent Hamiltonian without any potential.
133 7.1 Relativistic Quantum Physics

(b) In the three-dimensional case, explain why these matrices must fulfil the following
relations:

\alpha_k \alpha_l + \alpha_l \alpha_k = 0 \quad \text{for } k \neq l, \qquad (7.10a)
\alpha_k \beta + \beta \alpha_k = 0, \qquad (7.10b)
\alpha_k^2 = I, \qquad (7.10c)
\beta^2 = I, \qquad (7.10d)

where k, l are either x, y or z. In other words, all of these matrices must be their
own inverses, and they must all anti-commute, Eq. (4.18), with each other.
In order to fulfil all of these equations, the matrices cannot be any smaller than
4 × 4 matrices.
(c) Verify that these choices indeed fulfil Eqs. (7.10):

\alpha_x = \begin{pmatrix} 0_{2\times 2} & \sigma_x \\ \sigma_x & 0_{2\times 2} \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix},

\alpha_y = \begin{pmatrix} 0_{2\times 2} & \sigma_y \\ \sigma_y & 0_{2\times 2} \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 & -i \\ 0 & 0 & i & 0 \\ 0 & -i & 0 & 0 \\ i & 0 & 0 & 0 \end{pmatrix},

\alpha_z = \begin{pmatrix} 0_{2\times 2} & \sigma_z \\ \sigma_z & 0_{2\times 2} \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{pmatrix},

\beta = \begin{pmatrix} I_{2\times 2} & 0_{2\times 2} \\ 0_{2\times 2} & -I_{2\times 2} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}.

The 2 × 2 off-diagonal blocks of the α-matrices are the Pauli matrices, which we
should know rather well by now.
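
Checking the relations of Eqs. (7.10) numerically is also a one-screen affair; here is a
sketch doing just that with NumPy:

import numpy as np

s_x = np.array([[0, 1], [1, 0]], dtype=complex)
s_y = np.array([[0, -1j], [1j, 0]])
s_z = np.array([[1, 0], [0, -1]], dtype=complex)
O, I2 = np.zeros((2, 2)), np.eye(2)

# The alpha matrices have the Pauli matrices as off-diagonal blocks
alpha = {a: np.block([[O, s], [s, O]])
         for a, s in [('x', s_x), ('y', s_y), ('z', s_z)]}
beta = np.block([[I2, O], [O, -I2]])

mats = list(alpha.values()) + [beta]
for i, A in enumerate(mats):
    for j, B in enumerate(mats):
        anti = A @ B + B @ A                     # the anti-commutator
        target = 2 * np.eye(4) if i == j else np.zeros((4, 4))
        assert np.allclose(anti, target)         # Eqs. (7.10)
print('all relations of Eqs. (7.10) satisfied')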

Equation (7.6), where the matrices comply with Eqs. (7.10), is the Dirac equation for
a free particle.5 If we introduce a potential V(r) and an external electromagnetic field
given by the vector potential A(r, t), it reads

i\hbar \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) = \left[ c\,\boldsymbol{\alpha} \cdot \left( \hat{\mathbf{p}} - q\mathbf{A}(\mathbf{r}, t) \right) + V(\mathbf{r}) + mc^2 \beta \right] \Psi(\mathbf{r}, t). \qquad (7.11)

As we have seen, the Hamiltonian is a 4 × 4 matrix in the three-dimensional case.
Note that the Hamiltonian is not a matrix in the same sense as we saw in Section 2.2;
5 In the literature you will frequently see the Dirac equation formulated in a more compact fashion. While
simplifying the notation, this hardly simplifies actually solving it.

it is a matrix acting on a wave function with four components, which, in turn, is often
written as a vector with two spinors:

\Psi(\mathbf{r}; t) = \begin{pmatrix} \Psi_1(\mathbf{r}; t) \\ \Psi_2(\mathbf{r}; t) \\ \Psi_3(\mathbf{r}; t) \\ \Psi_4(\mathbf{r}; t) \end{pmatrix} = \begin{pmatrix} \Phi_+(\mathbf{r}; t) \\ \Phi_-(\mathbf{r}; t) \end{pmatrix}. \qquad (7.12)

Here Φ₊ and Φ₋ have two components each. It came to be realized that these components
are related to spin; the first component corresponds to the spin of the particle
pointing upwards and the second one corresponds to spin downwards:

\Phi_+(\mathbf{r}; t) = \begin{pmatrix} \phi_\uparrow(\mathbf{r}; t) \\ \phi_\downarrow(\mathbf{r}; t) \end{pmatrix}. \qquad (7.13)

Note that this form is nothing new; we saw it in Eqs. (4.3) and (6.20):

\Phi_+(\mathbf{r}; t) = \phi_\uparrow(\mathbf{r}; t) \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \phi_\downarrow(\mathbf{r}; t) \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \phi_\uparrow(\mathbf{r}; t)\, \chi_\uparrow + \phi_\downarrow(\mathbf{r}; t)\, \chi_\downarrow. \qquad (7.14)

With this realization, we are touching upon what Schrodinger couldn't have known
back in the 1920s: the Dirac equation describes a particle with spin one-half - which
has the two possible spin projections ℏ/2 (up) and −ℏ/2 (down). In other words, for
an electron, the Dirac equation is in fact the proper dynamical equation. The Klein-
Gordon equation, as it turns out, happens to describe spinless particles. This is the
reason why it produces (slightly) erroneous energies for the electron in a hydrogen atom.
The solutions for the time-independent Dirac equation, on the other hand, provide
eigenenergies with small deviations from the solutions of the Schrodinger equation,
Eq. (3.4) - deviations that actually agree with experiments.
As in the case of the Schrodinger equation, the time-independent versions of
Eqs. (7.5) and (7.11), without any time-dependent vector potential A in the latter case,
are obtained by substituting iℏ ∂/∂t with the eigenenergy E.
But what about the spinors of Eq. (7.12); why are there two of them? What's the
meaning of the lower component Φ₋? Well, just like the emergence of spin, this lower
component brought about significant new physics which nobody expected at that time.
We will investigate this physics a bit further by solving the time-independent Dirac
equation.

7.1.3 Exercise: Eigenstates of the Dirac Hamiltonian

In Chapter 3 we studied the energy spectra of bound states by solving the time-independent
Schrodinger equation, Eq. (3.1), in one dimension. We will now do the
same for the time-independent one-dimensional Dirac equation:

\left[ -i\hbar c\, \sigma_x \frac{d}{dx} + V(x)\, I_2 + mc^2 \sigma_z \right] \psi(x) = E\, \psi(x), \qquad (7.15)

where we have, in accordance with Eq. (7.9), set α_x = σ_x and β = σ_z. In this case, the
wave function has two (scalar) components:

\psi(x) = \begin{pmatrix} \psi_+(x) \\ \psi_-(x) \end{pmatrix}, \qquad (7.16)

and the Hamiltonian is a 2 × 2 matrix:

H = \begin{pmatrix} V(x) + mc^2 & -i\hbar c \frac{d}{dx} \\ -i\hbar c \frac{d}{dx} & V(x) - mc^2 \end{pmatrix}. \qquad (7.17)

In order to compare your relativistic spectrum with the non-relativistic one, take
your implementation from Exercise 3.1.2 as your starting point. Let your potential be
given by Eq. (2.30) with width w = 1 and smoothness s = 10. We will be adjusting the
depth −V₀. Set the speed of light c to 137, in addition to m = 1 and ℏ = 1.6
As usual, discretize our spatial domain into n + 1 grid points distributed over a finite
extension L; it does not have to be very large, 10 or so would suffice.
The Hamiltonian of Eq. (7.17) now becomes a 2n × 2n matrix, which makes it computationally
more expensive to diagonalize than its non-relativistic counterpart, which is
an n × n matrix. It may still be done within reasonable precision, however, using standard
numerical tools. Along the diagonal of this matrix you will have V + mc² I_n along
the first half and V − mc² I_n along the second. The off-diagonal block consists of the
momentum operator. Both this and the kinetic energy operator in the non-relativistic
case should be implemented using FFT.7 A sketch of such an implementation is given
after the exercise.
Once you have expanded your implementation from Exercise 3.1.2, try to answer the
following questions:

(a) Start with a comparatively shallow well, V₀ = −10 or so, and check out the relativistic
eigenenergies. You can simply plot the sorted eigenenergies in ascending
order against their indices. The relativistic spectrum has one pronounced trait very
different from the non-relativistic one, doesn't it?
(b) Next, consider only bound states. For the Dirac Hamiltonian this would mean
the energies that are positive, but less than the mass energy mc². Compare these
energies, with the mass energy mc² subtracted, to the non-relativistic ones. With a
potential that is not very deep, they should coincide.
Do they?
(c) Now, start lowering your well. As you do so, relativistic and non-relativistic energies
will start to differ. Do so and compare repeatedly as you increase |V₀|. How
deep must the well be before you start seeing deviations?8 And where do the spectra
differ? You may have to make your well quite deep before you see anything.

6 The number 137 may appear somewhat far fetched. It’s not, however; it is the speed of light in atomic
units.
7 As may become apparent after having done Exercise 7.1.4, your implementation should be such that the
square of the momentum operator actually coincides, to a reasonable degree, with the operator for the
momentum squared. Mathematically, it is always so, but it certainly doesn’t have to be numerically.
8 Do make sure that any discrepancies you may see are not due to insufficient numerical precision.
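
Here is the promised sketch of such an implementation in Python. Since Eq. (2.30)
is not restated here, the smoothly confining well below is an assumption modelled on
its description; the grid parameters are illustrative too, and n grid points are used for
FFT convenience:

import numpy as np

# Grid and parameters (atomic units; c = 137 is the speed of light)
L, n = 10.0, 256
x = np.linspace(-L/2, L/2, n, endpoint=False)
c, m, hbar = 137.0, 1.0, 1.0
V0, w, s = 10.0, 1.0, 10.0
# Assumed smooth-well form with depth ~ -V0, width w and smoothness s
V = -V0 / (1 + np.exp(s * (np.abs(x) - w / 2)))

# Momentum operator p = F^{-1} diag(hbar k) F as a dense matrix via the FFT
k = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
F = np.fft.fft(np.eye(n), axis=0)
P = np.fft.ifft((hbar * k)[:, None] * F, axis=0)

# The 2n x 2n Dirac Hamiltonian of Eq. (7.17); off-diagonal blocks are c*p
H = np.block([[np.diag(V + m * c**2) + 0j, c * P],
              [c * P, np.diag(V - m * c**2) + 0j]])
E = np.linalg.eigvalsh(H)
print('bound-state candidates:', E[(E > 0) & (E < m * c**2)] - m * c**2)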

The first striking observation is the fact that the Dirac spectrum separates into two
very distinct parts, one below −mc² and one close to or above mc². Because the mass
energy term, βmc², is so large, this may have been obvious from the outset. What is not
at all obvious, however, is how to interpret these negative energy solutions. To make
a long story short, they relate to anti-particles. As it turns out, every type of particle
has its own anti-particle with the same mass and spin, but opposite charge.9 While it
took some time to realize, the existence of anti-matter is well established now. It is also
realized that pairs of particles and anti-particles may be created and annihilated; they
may appear and disappear. One consequence of this, in turn, is that just solving the
Dirac equation is not sufficient in order to describe relativistic quantum processes in
the high-energy regime. For this you need a field theoretical description, in which the
number of particles is not fixed. We will not venture into that topic here.
As for the relativistic corrections to the spectrum of bound states, you may have
found that the discrepancies are larger for the higher energies - closer to the threshold
at zero. One simple way of acquiring some intuition for this is provided by the notion of
relativistic mass. Sometimes it makes sense to consider a relativistic particle by means
of increased inertia. The particle is subject to an effective increase in the mass m when
the kinetic energy is high, as would be the situation for the highest energies within our
deep confining potential. Such an increase in mass will, in turn, effectively reduce the
kinetic energy.
In Exercise 7.1.1 we saw that in the limit that the speed of the particle is a lot less
than the speed of light, the expression for kinetic energy according to special relativity
coincides with the ‘old’ Newtonian expression. Shouldn’t the same be the case within
quantum physics? Or, in other words, shouldn’t both the Klein-Gordon and the Dirac
equations agree with the Schrodinger equation as long as the probability for the particle
to attain relativistic momenta is negligible? After all, this is what we saw above; with
moderate depth −V₀ in Exercise 7.1.3, there wasn't really any difference between the
relativistic and the non-relativistic spectrum - besides the negative energy part, that is.
For the Klein-Gordon equation it is more or less straightforward to see that it
reproduces the Schrodinger equation at low kinetic energy as the Hamiltonian is given
directly by Eq. (7.2):

E = \sqrt{m^2 c^4 + p^2 c^2} = mc^2 + \frac{p^2}{2m} + \mathcal{O}\!\left( \frac{p^4}{m^3 c^2} \right). \qquad (7.18)

The Dirac equation, Eq. (7.11), on the other hand, doesn't look anything like the
Schrodinger equation, which for a single particle in an electromagnetic field reads

i\hbar \frac{\partial}{\partial t} \Psi = \left[ \frac{1}{2m} \left( \hat{\mathbf{p}} - q\mathbf{A} \right)^2 + V(\mathbf{r}) \right] \Psi. \qquad (7.19)

Well, let's have a closer look.

9 Actually, particles have several kinds of charge, not just electric ones - all of which are opposite for particles
and anti-particles.

7.1.4 Exercise: The Non-relativistic Limit of the Dirac Equation

Again, we start out with the one-dimensional time-dependent Dirac equation with
the Hamiltonian of Eq. (7.17). Actually, it is more convenient to shift the energy
downwards10 by the mass energy mc² so that our actual Hamiltonian will be

H = \begin{pmatrix} V(x) & -i\hbar c \frac{\partial}{\partial x} \\ -i\hbar c \frac{\partial}{\partial x} & V(x) - 2mc^2 \end{pmatrix}. \qquad (7.20)

(a) Write the coupled equations in terms of Ψ₊(x;t) and Ψ₋(x;t) separately.
(b) In the non-relativistic limit it is reasonable to assume that both the time variation
of Ψ₋ and the influence of the potential V on Ψ₋ are negligible compared to the
large mass energy term. Explain how this leads to

-i\hbar c \frac{\partial}{\partial x} \Psi_+ \approx 2mc^2 \Psi_-. \qquad (7.21)

(c) Now, if you solve this equation, algebraically, for Ψ₋ and insert it into the equation
for iℏ ∂Ψ₊/∂t, what do you get?

Hopefully, your answer to the last question was

i\hbar \frac{\partial}{\partial t} \Psi_+ = \left[ -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + V(x) \right] \Psi_+ = \left[ \frac{\hat{p}^2}{2m} + V(x) \right] \Psi_+, \qquad (7.22)
which should appear quite familiar by now - despite the ‘+’ subscript.
If we had exposed Eq. (7.11) to the same treatment,11 we would have got something
very similar to Eq. (7.19):

i\hbar \frac{\partial}{\partial t} \Psi_+ = \left[ \frac{1}{2m} \left( \hat{\mathbf{p}} - q\mathbf{A} \right)^2 + V(\mathbf{r}) - \frac{q\hbar}{2m} \mathbf{B} \cdot \boldsymbol{\sigma} \right] \Psi_+. \qquad (7.23)

There are two differences, though:

(1) there is an additional term, the last one, in the Hamiltonian, and
(2) the wave function now has two components, Eq. (7.13).

With \mathbf{s} = \frac{\hbar}{2}\boldsymbol{\sigma}, we may recognize the extra term, the interaction between spin and
a magnetic field, as Eq. (4.15). This is where the somewhat mysterious factor g = 2
comes from; a classical magnetic dipole would have g = 1.
Truth be told, there are a couple of terms missing from Eq. (7.23) - corrections to
the interaction with the potential V(x), which a more thorough treatment would have
picked up.

10 Such a shift is always admissible; it corresponds to a redefinition of the zero level for energy. In more
technical terms, it shifts the wave function only by a global phase factor which does not alter any physics.
11 Do not be afraid to try! You will need what you learned in Exercise 4.3.1 along with Eq. (2.6).

Note that there is an important difference between the one-dimensional and the
three-dimensional situation here - in addition to the obvious: there simply isn’t any
spin in the one-dimensional case. Thus, you may rightfully question whether it made
sense to include spin in the one-dimensional examples we looked at in Chapter 4. In
our defence, the lessons learned in that chapter do carry over to the more realistic
three-dimensional case. Moreover, a real system with three spatial dimensions could
be confined in two of them so that it, effectively, becomes one-dimensional.
And now for something completely different.

7.2 Open Quantum Systems and Master Equations

Apart from a brief outline at the beginning of Section 3.4, we have thus far assumed
that our quantum system is unaffected by the world around it - except when we perform
measurements on it. This is not always a reasonable assumption. Experiments in quan­
tum physics are often conducted in (almost) vacuum, at low temperatures, with very
dilute gases and within apparatus which tries to shield off the omnipresent background
radiation. However, despite such efforts to minimize the influence of the environment,
sometimes you simply have to include the interaction between your quantum system
and its surroundings in order to make sensible predictions.
This is bad news. While the quantum system you want to describe theoretically or
computationally may be complicated enough, this complexity cannot hold a candle to
the complexity involved in describing its surroundings. In most cases it has so many
degrees of freedom that you simply cannot include them in your Schrodinger equation
if you have any ambition of actually solving it. So we must settle for trying to include
the interactions with the surroundings in a simplified, effective manner somehow. In
doing so, we typically end up replacing our original wave function equation, usually
the Schrodinger equation, with a master equation.
Before we can address the notion of master equations, we need to say a word or two
about density matrices. For a quantum system in a state given by the wave function Ψ,
the density matrix, which we call ρ̂, is an operator that simply extracts the projection
of another state Φ onto Ψ. Dirac notation12 is particularly convenient for expressing
such operators:

\hat{\rho} = |\Psi\rangle\langle\Psi|. \qquad (7.24)

Although ρ̂ strictly speaking is an operator, not necessarily a matrix, we see from the
above expression that if |Ψ⟩ is a column vector, ⟨Ψ| is a row vector and ρ̂ is,
indeed, a matrix.

12 Yes, it is the same Dirac as above.



7.2.1 Exercise: The von Neumann Equation

With the state |Ψ⟩, which now has acquired brackets, following the Schrodinger
equation, iℏ d/dt |Ψ⟩ = Ĥ|Ψ⟩, explain why ρ̂ in Eq. (7.24) follows the equation

i\hbar \frac{\mathrm{d}}{\mathrm{d}t} \hat{\rho} = [\hat{H}, \hat{\rho}]. \qquad (7.25)

Here [A, B] = AB − BA is the commutator, which we first encountered in Exercise 2.6.2.

Equation (7.25) is called the von Neumann equation. For a density matrix of the form of
Eq. (7.24), a so-called pure state, it is entirely equivalent to the Schrodinger equation.
'So what's the point?', you may rightfully ask. So far, introducing density matrices has
only led to a rephrasing of what we already know - in a slightly more cumbersome way.
Our motivation was, however, to include the influence of the environment somehow.
This may be achieved by introducing partial traces.
As mentioned in connection with Exercise 6.4.6(d), the trace of a matrix is simply
the sum of its diagonal elements. The trace of a more general operator Â is defined as

\mathrm{Tr}\, \hat{A} = \sum_n \langle a_n | \hat{A} | a_n \rangle, \qquad (7.26)

where {|aₙ⟩} is an orthonormal basis for the full space in question. Any orthonormal
basis will lead to the same trace.
Suppose now that a quantum system is made up of two parts. It could consist of two
particles, such as the situations in Exercises 4.4.3 and 5.1.4. The two parts could consist
of a small quantum system embedded in a larger system - the situation we addressed
initially. Or it could be the more familiar case of a single particle with both spatial and
spin degrees of freedom, as in Eqs. (4.3) and (4.6). If we want to disregard one of the
sub-systems and only describe the other one, we could do so by tracing out all the
degrees of freedom of the system we want to disregard. Let’s take the state of Eq. (4.3),
which, in general, is an entangled state of spatial and spin degrees of freedom, as an
example. The full density matrix of this state is

\hat{\rho} = |\Phi\rangle\langle\Phi| = \left( |\Psi_\uparrow\rangle|\chi_\uparrow\rangle + |\Psi_\downarrow\rangle|\chi_\downarrow\rangle \right) \left( \langle\Psi_\uparrow|\langle\chi_\uparrow| + \langle\Psi_\downarrow|\langle\chi_\downarrow| \right)
= |\Psi_\uparrow\rangle\langle\Psi_\uparrow|\; |\chi_\uparrow\rangle\langle\chi_\uparrow| + |\Psi_\uparrow\rangle\langle\Psi_\downarrow|\; |\chi_\uparrow\rangle\langle\chi_\downarrow|
+ |\Psi_\downarrow\rangle\langle\Psi_\uparrow|\; |\chi_\downarrow\rangle\langle\chi_\uparrow| + |\Psi_\downarrow\rangle\langle\Psi_\downarrow|\; |\chi_\downarrow\rangle\langle\chi_\downarrow|, \qquad (7.27)

where |Ψ↑,↓⟩ and the spinors |χ↑,↓⟩ reside in different mathematical spaces - position
space and spin space, respectively. Correspondingly, '⟨Ψ↑||χ↑⟩' and similar expressions
in the above equation are not to be taken as any inner product.
Suppose now that we want to disregard the spin degree of freedom. Correspondingly,
we trace out this degree of freedom - this means projecting from both left and right onto
the spinors χ↑ and χ↓, respectively, and summing:

\hat{\rho}_{\mathrm{position}} = \langle\chi_\uparrow|\hat{\rho}|\chi_\uparrow\rangle + \langle\chi_\downarrow|\hat{\rho}|\chi_\downarrow\rangle. \qquad (7.28)



This renders a much simpler reduced density matrix. The spin projections we imposed
do not affect the spatial parts, the Ψ-parts of Eq. (7.27). Some of the inner products
in spin space, on the other hand, end up as zero as dictated by Eqs. (4.5). With these
equations we may find that the density matrix, after having traced out spin, reduces to

\hat{\rho}_{\mathrm{position}} = |\Psi_\uparrow\rangle\langle\Psi_\uparrow| + |\Psi_\downarrow\rangle\langle\Psi_\downarrow|. \qquad (7.29)

In the general case of a bipartite system, a system that resides in two mathematical
spaces - let's call them space A and space B - the reduced density matrix corresponding
to A is

\hat{\rho}_A = \mathrm{Tr}_B\, \hat{\rho} = \sum_n \langle\beta_n|\hat{\rho}|\beta_n\rangle, \qquad (7.30)

where {|βₙ⟩} is an orthonormal basis for the B space. In the above case, it consisted of
|χ↑⟩ and |χ↓⟩.13
Equation (7.30) differs from Eq. (7.26) in that we only project onto some, not all,
degrees of freedom of the bipartite system.14
function, but rather to a set of several wave functions. We will return to this issue.

7.2.2 Exercise: Pure States, Entanglement and Purity

(a) Check that you actually arrive at Eq. (7.29) from Eq. (7.28) by imposing the
projections onto the spin states in Eq. (7.27).
(b) When your wave function has the product form of Eq. (4.6), what is ρ̂_position then?
Is it a pure state?
(c) To what extent a reduced density matrix differs from a pure state can be quantified.
The purity is one way of doing so:15

\gamma = \mathrm{Tr}\, \hat{\rho}^2. \qquad (7.31)

The purity γ is 1 for pure states only, and it takes a value between 0 and 1 for any
mixed state. The lower γ is, the less pure - and more mixed - is the state.
Now, suppose some bipartite quantum system consisting of two qubits, A and
B, is in the normalized state

|\Psi\rangle = c_1 |0\rangle_A |0\rangle_B + c_2 |1\rangle_A |1\rangle_B = c_1 |00\rangle + c_2 |11\rangle, \qquad (7.32)

where in the last equality we have adopted the somewhat more compact notation
we used in Chapter 6. As long as none of the coefficients c₁ and c₂ are zero, this
would correspond to an entangled state; for c₁ = c₂ = 1/√2 it is the Bell state of
Eq. (6.34a) and for c₁ = −c₂ = 1/√2 it is the Bell state of Eq. (6.34b).

13 For simplicity, we have assumed that the B space has a countable basis. In the continuous case, the sum
in Eq. (7.30) must be replaced with an integral.
14 Formally, Eq. (7.30) would read Tr_B ρ̂ = Σₙ (I_A ⊗ ⟨βₙ|) ρ̂ (I_A ⊗ |βₙ⟩) in a more proper formulation,
where I_A is the identity operator in the A-space.
15 The von Neumann entropy, S = −Tr(ρ̂ ln ρ̂), is another.

Show that

\hat{\rho}_A = |c_1|^2\, |0\rangle\langle 0| + \left( 1 - |c_1|^2 \right) |1\rangle\langle 1|, \qquad (7.33)

where we have dropped the subscript indicating the space on the bras and kets.
(d) Plot the purity of ρ̂_A as a function of |c₁|². When is it minimal?
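
For part (d), a couple of lines suffice once you have shown Eq. (7.33); here is a
sketch, with the purity written out from that equation:

import numpy as np
import matplotlib.pyplot as plt

p = np.linspace(0, 1, 200)          # p = |c_1|^2
gamma = p**2 + (1 - p)**2           # purity of rho_A following from Eq. (7.33)

plt.plot(p, gamma)
plt.xlabel('$|c_1|^2$')
plt.ylabel('purity $\\gamma$')
plt.show()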

Hopefully, you found that the purity is minimal when the entanglement is
maximal - for |c₁|² = |c₂|², or, equivalently, c₂ = e^{iφ} c₁. Note also that the reduced
density matrix of Eq. (7.33) is not able to include the information contained in the
phase difference φ. Specifically, if the initial pure state, Eq. (7.32), is one of the Bell
states of Eqs. (6.34), you cannot tell which one from ρ̂_A.

7.2.3 Exercise: Two Spin-1/2 Particles Again

(a) Redo Exercise 5.1.4(a), except this time we will start out with a product state and
plot the time-dependent probability of spin up for the first particle, with no regard
for the spin of the second particle.
(b) Similar to what we did in Exercise 7.2.2(c) and (d), let your A-space be the spin of
the first particle and B-space be the spin of the other particle. If you write down
the reduced density matrix for particle 1, ρ̂_A, in terms of the coefficients a, b, c and
d in Eq. (5.20), you should find that

\hat{\rho}_A = \begin{pmatrix} |a|^2 + |b|^2 & a c^* + b d^* \\ a^* c + b^* d & |c|^2 + |d|^2 \end{pmatrix}. \qquad (7.34)

Check that you actually arrive at Eq. (7.34).


Also, show that the probability you determined in (a) is the same as ⟨χ↑|ρ̂_A|χ↑⟩,
the first diagonal element of ρ̂_A.
(c) The expectation value of some physical quantity Q with operator Q̂ may be
determined from a density matrix by the trace of their product:

\langle Q \rangle = \mathrm{Tr}\left( \hat{Q} \hat{\rho} \right). \qquad (7.35)

For a pure state, this is consistent with our first definition, Eq. (1.25).
For the simulation you just performed, plot the expectation value of the spin
projection along each of the three axes for particle A:

\langle s_x \rangle = \mathrm{Tr}\left( \hat{s}_x \hat{\rho}_A \right) = \frac{\hbar}{2} \mathrm{Tr}\left( \sigma_x \hat{\rho}_A \right), \qquad (7.36)

and so on.
(d) Plot the absolute value of the off-diagonal elements of ρ̂_A as a function of time.
(e) Plot the purity of ρ̂_A, γ(t) = Tr ρ̂_A², as a function of time.
The two-particle system starts out as a product state, and, correspondingly,
the reduced density matrix ρ̂_A is pure at t = 0. During the interaction with the
magnetic field, does it seem to become a pure state again at any point?
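
A compact sketch of the machinery could look as follows. Since the Hamiltonian of
Eq. (5.21) with the fields included is not restated here, the field-free Hamiltonian of
Eq. (6.31) is used as a stand-in, and the initial product state is an assumption:

import numpy as np
from scipy.linalg import expm

H = np.array([[1, 0, 0, 0], [0, -1, 2, 0],
              [0, 2, -1, 0], [0, 0, 0, 1]], dtype=complex)   # Eq. (6.31)
psi0 = np.array([0, 1, 0, 0], dtype=complex)   # a product state, here |up, down>

for t in np.linspace(0, 2, 5):
    a, b, c, d = expm(-1j * H * t) @ psi0
    # Reduced density matrix of particle A, Eq. (7.34)
    rhoA = np.array([[abs(a)**2 + abs(b)**2, a * np.conj(c) + b * np.conj(d)],
                     [np.conj(a) * c + np.conj(b) * d, abs(c)**2 + abs(d)**2]])
    print(f't={t:.1f}: P(up)={rhoA[0, 0].real:.3f}, '
          f'purity={np.trace(rhoA @ rhoA).real:.3f}')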

In the above examples we constructed reduced density matrices via the full wave
functions. Although these examples may have shed some light on what reduced density
matrices are, applying density matrices in situations where the full wave function is
accessible usually wouldn’t make too much sense.
It makes more sense to apply a description in terms of a reduced density matrix
when the B system simply is too complicated to handle. This would be the situation
if a comparatively small quantum system A interacts with some large environment B.
Any ambition of describing the full wave function of such a bipartite system is usually
only a pipe dream.
If we start out in a general product state, |Ψ⟩ = |ψ⟩_A |ψ⟩_B, the full wave function
will follow the Schrodinger equation, but not the A and the B parts separately.
As long as there is some kind of interaction between A and B, the full wave function
quickly becomes an entangled one. Correspondingly, pa(O goes from being pure to
being mixed. Being unable to resolve the evolution of the full wave function |^(r)>,
we could try to settle for the next best thing - to describe the evolution of just
Pa(O-
As a starting point, we would set up the von Neumann equation, Eq. (7.25), and
trace out B. This would leave a messy equation which is not even local in time; it
would depend on the history of the evolution, not just the present state. In order to
arrive at something that can be solved in a meaningful way, several approximation are
typically imposed - depending on the nature of the systems and their interaction. In
certain situations we may reasonably impose assumptions about the B-system, such as
assuming that it is in a so-called thermal state.
At the end of the day we hope to arrive at an equation for p& that is local in time
and only depends on the degrees of freedom pertaining to A. Even with no explicit
reference to the degrees of freedom of the B-part, the interaction between A and B
may still be incorporated in some effective manner. Such an equation governing the
evolution of a density matrix is called a master equation.
When deriving master equations from more fundamental principles, we would
typically hope to arrive at an equation consistent with this generic form:

i\hbar \frac{\mathrm{d}}{\mathrm{d}t} \hat{\rho} = [\hat{H}, \hat{\rho}] - \frac{i}{2} \sum_{k,l} \Gamma_{kl} \left( \hat{a}_k^\dagger \hat{a}_l \hat{\rho} + \hat{\rho}\, \hat{a}_k^\dagger \hat{a}_l - 2\, \hat{a}_l \hat{\rho}\, \hat{a}_k^\dagger \right). \qquad (7.37)

If we remove the last terms on the right hand side, we recognize the von Neumann equation,
Eq. (7.25), which, in turn, is equivalent to the Schrodinger equation in the case of
a pure state. So Eq. (7.37) is a generalization of the Schrodinger equation. It is required
that the coefficients Γ_kl constitute a matrix with only non-negative eigenvalues, and
the â-operators may be taken to be traceless.
Equation (7.37) is the famous GKLS equation, named after Vittorio Gorini, Andrzej
Kossakowski, Goran Lindblad and George Sudarshan. Actually, we often see
Eq. (7.37) referred to as just the Lindblad equation - probably because his proof is
slightly more general than that of GKS, whose picture is shown in Fig. 7.2.
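
To make the structure of Eq. (7.37) more tangible, here is a minimal sketch that
integrates it in time for a single qubit with one dissipative term; the Hamiltonian, the
choice of σ_z as the single Lindblad operator (pure dephasing) and all parameter values
are illustrative assumptions:

import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
omega, gamma, dt = 1.0, 0.1, 0.001
H = 0.5 * omega * sz
rho = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # the pure state |+><+|

for _ in range(5000):
    drho = -1j * (H @ rho - rho @ H)                    # von Neumann part
    drho += gamma * (sz @ rho @ sz - rho)               # dissipator for L = sigma_z
    rho += dt * drho                                    # simple Euler step
print('off-diagonal |rho_01| =', abs(rho[0, 1]))        # the coherence decays
print('trace =', np.trace(rho).real)                    # the trace stays put

Note how the trace is preserved while the off-diagonal element - the coherence -
decays; this is decoherence in its simplest guise.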

Figure 7.2 | This photo shows GKS, or rather, with permutation corresponding to the photo, KSG, in 1975. Used with the permission
of Springer Nature BV, from Ref. [14], permission conveyed through Copyright Clearance Center, Inc.

So, what did they prove? They proved that in order to ensure that the density matrix
ρ̂ maintains Hermiticity, complete positivity16 and trace, it must follow an evolution
dictated by Eq. (7.37).
Now, why is it important to conserve trace and positivity?

7.2.4 Exercise: Preserving Trace and Positivity

In the following we will drop the subscript A on our reduced density matrix. We will also lose the hat: ρ̂_A → ρ.
In Section 2.6 we explained how the probability of measuring the outcome a for a measurement of some physical variable A is |⟨φ_a|Ψ⟩|², where φ_a is the eigenstate of the operator Â corresponding to eigenvalue a. For a density matrix ρ the corresponding probability is the diagonal element ⟨φ_a|ρ|φ_a⟩.

(a) Check that these two probability expressions are, in fact, identical in the case of a pure state, ρ = |Ψ⟩⟨Ψ|.
(b) In order for a density matrix to produce sensible results, we must insist that it is positive semi-definite. A positive semi-definite matrix M has only non-negative eigenvalues, which, in turn, is equivalent to requiring that

$$ \mathbf{x}^\dagger M \mathbf{x} \geq 0 \tag{7.38} $$

for any vector x. Correspondingly, a positive semi-definite operator Â fulfils

$$ \langle \psi | \hat{A} | \psi \rangle \geq 0 \tag{7.39} $$

for any state |ψ⟩.

16 The notion of complete positivity is somewhat more strict than that of just positivity. However, we will not enter into this distinction here.

Why must we insist that ρ has this property? And how can we even be sure that expressions such as the left hand side of Ineq. (7.39) are real in the first place?
(c) As discussed in Section 3.3, the set of all possible normalized eigenstates of a Hermitian operator, Â = Â†, forms an orthonormal basis - or, at least, can be constructed to form an orthonormal basis. We have also learned that the probabilities of all possible outcomes must sum up to 1.¹⁷ Explain how this leads to the requirement that the trace of a density matrix must be 1 at all times.
(d) As any matrix, the density matrix can always be written in terms of a singular value decomposition. Since it is also Hermitian, this decomposition becomes particularly simple:

$$ \rho = \sum_n p_n |\psi_n\rangle\langle\psi_n|, \tag{7.40} $$

where the |ψ_n⟩ are orthonormal and the coefficients p_n are real.
The positivity and trace requirements on ρ impose restrictions on the p_n. Which ones?
Although not strictly necessary here, it may be useful to know that the identity operator may be expressed as

$$ \hat{I} = \sum_n |\alpha_n\rangle\langle\alpha_n|, \tag{7.41} $$

where {|α_n⟩} is any orthonormal basis for the space in question.

So, in order for a density matrix to produce physical predictions that make sense, it must be positive semi-definite and have trace 1 at all times.
The singular value decomposition of Eq. (7.40) illustrates how density matrices generalize the notion of wave functions. The special case of a wave function, or a pure state, emerges when all but one of the weights p_n are zero. The linear combination of Eq. (7.40) is very different from any linear combination of wave functions,

$$ |\Psi\rangle = \sum_n a_n |\psi_n\rangle, \tag{7.42} $$

perhaps more different than a swift comparison between the above equation and Eq. (7.40) would suggest. In general, a linear combination of wave functions is put together by amplitudes a_n - complex amplitudes that carry both a magnitude and a phase. This, in turn, allows for interference, as we have seen in Exercises 2.3.3 and 5.3.3, for instance. The p_n's, on the other hand, are classical probabilities; they are all real and non-negative. No interference is brought about by the mixedness of Eq. (7.40). In this sense, Eq. (7.40) may be thought of as a classical ensemble of quantum wave functions.
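These requirements are easy to probe numerically: the weights p_n of Eq. (7.40) are simply the eigenvalues of the Hermitian matrix ρ. A minimal sketch, assuming the purity γ of Eq. (7.31) is Tr ρ²; the function name is our own.

```python
import numpy as np

def check_density_matrix(rho, tol=1e-10):
    """Verify Hermiticity, unit trace and positive semi-definiteness."""
    herm = np.allclose(rho, rho.conj().T, atol=tol)
    trace_one = abs(np.trace(rho) - 1) < tol
    p = np.linalg.eigvalsh(rho)           # real eigenvalues, the p_n
    positive = np.all(p > -tol)
    purity = np.trace(rho @ rho).real     # gamma = 1 only for a pure state
    return herm, trace_one, positive, purity

# A maximally mixed qubit: p_0 = p_1 = 1/2, purity 1/2
rho_mixed = 0.5 * np.eye(2)
print(check_density_matrix(rho_mixed))
```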
The interaction between a quantum system and its surroundings will in general cause
the quantum traits of the system to diminish. One manifestation of this is the issue
of decoherence, which, as mentioned, is a huge challenge when it comes to building

17 This is why the square modulus of a spatial wave function must integrate to 1 and why |a|2 + |b|2 in
Eq. (4.7) must be 1.

working quantum computers. As we discussed in Section 6.5, the coherent nature of a quantum system, the one that allows for interference, is crucial for a quantum computer to provide any advantage over classical ones. However, in real life, the state of a
quantum computer will eventually get entangled with its environment - with the con­
sequence that its density matrix loses purity. It becomes less quantum-like and more
classical-like.
We will use the GKLS equation, Eq. (7.37), to study one particular source of
decoherence, one that is referred to as amplitude damping.

7.2.5 Exercise: A Decaying Quantum Bit

In Section 6.4 we introduced the notion of qubits, which are linear combinations of two states labelled |0⟩ and |1⟩, Eq. (6.18). In physical implementations of quantum computers, it is hard to keep both states stationary; |1⟩ will typically 'fall down' into |0⟩ spontaneously - at a certain rate which we want to keep as low as possible.
Now, we could try to model this by introducing effective interactions into the
Schrodinger equation. It wouldn’t work, though:

(a) Suppose we start with the Hamiltonian of Eq. (5.3) with W = 0. Now, the upper right element in the Hamiltonian would induce transitions from |1⟩ to |0⟩. Why is it not admissible to insert a non-zero matrix element here while keeping the other off-diagonal element zero,

$$ \hat{H} = \begin{pmatrix} \varepsilon/2 & W/2 \\ 0 & -\varepsilon/2 \end{pmatrix}? \tag{7.43} $$

Suppose we did it anyway and solved the Schrödinger equation; what consequences would this have for the evolution of the wave function?¹⁸
So, spontaneous, irreversible transitions, such as the one we want to implement, cannot happen in any evolution governed by a Schrödinger equation with a Hermitian Hamiltonian. However, the GKLS equation, Eq. (7.37), allows us to introduce them. We impose a single â-operator, a so-called jump operator, which turns |1⟩ into |0⟩:

$$ \hat{a} = |0\rangle\langle 1| \quad \text{so that} \quad \hat{a}\,|1\rangle = |0\rangle. \tag{7.44} $$

With our usual representation in terms of vectors in ℂ², this projection operator may be written

$$ \hat{a} = |0\rangle\langle 1| = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}. \tag{7.45} $$

Thus, it is non-zero for the same matrix element as the one we discussed above, in regard to Eq. (7.43). But this jump operator will not enter into the Hamiltonian.
18 You are welcome to check this out numerically.

(b) With the Hamiltonian of Eq. (5.3), write out the GKLS equation, Eq. (7.37), with the single projection operator â above in the last term on the right hand side of Eq. (7.37) - combined with the single positive parameter Γ. In this case, there will be no summation over k and l since there is only one term. Write the equation for each of the four elements of ρ separately. This, in turn, constitutes a set of four coupled ODEs. In fact, it has the form of the Schrödinger equation - with a non-Hermitian effective Hamiltonian.
(c) In the case that W = 0, determine the diagonal elements ρ₀₀ and ρ₁₁ as functions of time with the initial condition

$$ \rho(t=0) = |1\rangle\langle 1| = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}. \tag{7.46} $$

How does the purity, γ of Eq. (7.31), evolve as a function of time in this case?
(d) Now, with a non-zero W in the Hamiltonian of Eq. (5.3) and real, positive values for the parameters ε and Γ, solve the GKLS equation. As usual, we suggest that you do it numerically. Compare your solution with the unitary one, the one with Γ = 0, which you arrived at in Exercise 5.1.1, for various values of Γ. Use the same initial condition as in (c).
(e) As the master equation does not feature explicitly time-dependent terms, it will converge towards a steady state eventually; in the limit t → ∞, the density matrix ρ will become constant. Set the time derivative of ρ in Eq. (7.37) equal to zero and determine this state - numerically.
(f) In Exercise 6.4.5 we found that the NOT gate could be implemented by setting ε = 0 and fixing the duration T at T = πħ/W. Suppose we expose such a system to the amplitude damping we have now implemented. If we now, starting out in the (pure) |0⟩ state, ρ(t = 0) = |0⟩⟨0|, implement the NOT gate from Exercise 6.4.5 with a positive decay rate Γ, what is the probability of actually reading off 1? In other words, play around with different values for Γ and determine ρ₁,₁(T). For simplicity, set W = 1.

The decay is not only manifested in the fact that the system is pulled towards the |0⟩ state; it also introduces a damping of the oscillations we would have for an isolated quantum system undergoing unitary evolution. Note also that as long as W ≠ 0 the system will, despite the pull towards |0⟩, maintain a finite population of the |1⟩ state even as t approaches infinity. These aspects are illustrated in Fig. 7.3.
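The GKLS equation for this two-level system is a set of four coupled ODEs which any standard integrator handles directly. Below is a minimal sketch; note that it assumes the Hamiltonian of Eq. (5.3) takes the form H = ½(ε σ_z + W σ_x), so adjust accordingly if your parameterization differs. The parameter values are those quoted in Fig. 7.3.

```python
import numpy as np
from scipy.integrate import solve_ivp

hbar = 1.0
eps, W, Gamma = 0.5, 0.25, 0.01              # values quoted in Fig. 7.3
H = 0.5 * np.array([[eps, W], [W, -eps]])    # assumed form of Eq. (5.3)
a = np.array([[0.0, 1.0], [0.0, 0.0]])       # jump operator, Eq. (7.45)
ada = a.conj().T @ a

def rhs(t, y):
    """Right-hand side of the GKLS equation, Eq. (7.37), single jump."""
    rho = y.reshape(2, 2)
    drho = (H @ rho - rho @ H) / (1j * hbar)
    drho += Gamma * (a @ rho @ a.conj().T - 0.5 * (ada @ rho + rho @ ada))
    return drho.ravel()

rho0 = np.array([[0, 0], [0, 1]], dtype=complex)       # Eq. (7.46)
t = np.linspace(0, 300, 600)
sol = solve_ivp(rhs, (t[0], t[-1]), rho0.ravel(), t_eval=t)
pop1 = sol.y.reshape(2, 2, -1)[1, 1].real              # rho_11(t)
```

Setting Γ = 0 recovers the unitary oscillations, so the damped and undamped populations can be compared directly, as in Fig. 7.3.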
In this model, we simply imposed an irreversible, spontaneous decay by hand - mathematically. Physically, it could be induced via the interaction with some background radiation field. We will follow the same heuristic¹⁹ path in the last exercise of this chapter. It is a revisit of Exercise 2.4.1. In the last part of that exercise, we let our wave packet hit a well instead of a barrier by introducing a negative barrier height V₀ in the potential of Eq. (2.30). This is, more or less, the same system we studied in Exercise 3.1.1 - except that in that case the particle was trapped in the potential from the outset; in Exercise 2.4.1 it was unbound.

19 A heuristic approach is one in which you allow yourself to simplify the problem and take mental shortcuts
in order to arrive at a working solution in a more straightforward and less rigorous manner.

Figure 7.3 The full curve shows the probability of measuring a qubit to be in the |1⟩ state as a function of time for a unitary evolution as dictated by the Hamiltonian of Eq. (5.47). The dash-dotted curve is the same, albeit with damping. This evolution is dictated by the GKLS equation, Eq. (7.37), with the jump operator of Eq. (7.44). The dashed curve is the probability of measuring the system in the |0⟩ state for the steady state. This particular calculation corresponds to the parameters ε = 0.5, W = 0.25 and Γ = 0.01 in units where ħ = 1. Clearly, this results in very strong decay.

For the free particle hitting the well, we found that a well would actually induce
reflections - contrary to classical physics and, probably, also intuition. What we didn’t
discuss much was the possibility of capture. Isn’t it possible for a particle to fall into
the well and get trapped?
No, it is not - not in the context of the unitary evolution dictated by the Schrodinger
equation. A particle with an energy that is positive from the outset could not get
trapped without getting rid of energy somehow. However, with a time-independent
Hamiltonian, there simply isn’t any mechanism for taking away energy. This is in line
with the beautiful theorem of Emmy Noether (pictured in Fig. 7.4), which states that
for each symmetry in a physical system there is a conserved quantity. In this case,
the conserved quantity in question is energy and the symmetry is time invariance.
We touched upon another relevant example in the discussion following Exercise 2.6.2:
when the system remains unaffected by a translation - when the Hamiltonian does not
have any local potential - momentum is conserved.
There certainly exist ways of allowing for energy exchange for our quantum particle.
One way could be to allow for time dependence in our Hamiltonian - as in Chapter 5.
We could also let our particle interact with another system, such as a photon field.
In this case, energy could be carried over from our quantum particle to the field. Or,
possibly, in more familiar terms: the particle could get stuck in the well by emitting
a photon which carries away the surplus energy. In that case, the total energy of the
composite system would be conserved, but not the energy of the particle alone. In line
with things we have discussed before, such a bipartite system is quite hard to describe

Figure 7.4 The German mathematician Emmy Noether (1885-1935) made several significant contributions to mathematics and mathematical physics. Most notable are her profound results on how symmetries lead to conserved quantities in physical systems.

as the photon field has very many degrees of freedom. We would need a very large
computer and a whole lot of patience to describe the full system.
Instead, let’s generalize our approach from Exercise 7.2.5.

7.2.6 Exercise: Capturing a Particle

When a quantum particle interacts with a photon field, or some other quantized field, the particle may decay from its initial state to another state with lower energy. It is often assumed that this happens spontaneously, with a certain probability rate r:

$$ \frac{\mathrm{d}}{\mathrm{d}t} P_f(t) = r\,P_i(t), \qquad \frac{\mathrm{d}}{\mathrm{d}t} P_i(t) = -r\,P_i(t), \tag{7.47} $$

where P_i(t) is the population of some initial state ψ_i, P_i(t) = |⟨ψ_i|Ψ(t)⟩|², and P_f is the population of some final state ψ_f. Often, the Fermi golden rule can be used to predict this decay rate:

$$ r_{i \to f} \propto \left| \langle \psi_f | \hat{H}_I | \psi_i \rangle \right|^2, \tag{7.48} $$

where Ĥ_I, which is part of the Hamiltonian in question, is the interaction, or the perturbation, that induces the decay. Here, the total Hamiltonian, Ĥ = Ĥ₀ + Ĥ_I, is

time independent, and the interaction term Ĥ_I is assumed to be relatively weak. Usually, both ψ_i and ψ_f are taken to be stationary states of the unperturbed part of the Hamiltonian, Ĥ₀. Here, however, we allow ourselves to let the initial state evolve in time.
We will, for simplicity, assume that our system only has one bound state.
Next, label all the numerical eigenstates of our time-independent Hamiltonian Ĥ₀ by φ_k - each with eigenenergy ε_k, where ε₀ < 0 and all other eigenenergies are positive. The numerical eigenstates with positive energies do not really represent physical states. First of all, these states should really constitute a continuum. Second, with our usual choice for approximating the kinetic energy, our FFT implementation, that is, they fulfil periodic boundary conditions, which makes them even less physical. Nonetheless, mathematically, we may still describe our physics in terms of these somewhat artificial states.
In the GKLS equation, Eq. (7.37), take the â_k operators, the jump operators, to bring about a jump from unbound state φ_k, k > 0, to the bound state φ₀:

$$ \hat{a}_k = |\varphi_0\rangle\langle\varphi_k|. \tag{7.49} $$

Moreover, we define the Γ-coefficients in a manner inspired by Eq. (7.48):

$$ \Gamma_{k,l} = \Gamma_0 \, \langle \varphi_k | \hat{x} | \varphi_0 \rangle \langle \varphi_0 | \hat{x} | \varphi_l \rangle, \tag{7.50} $$

where the interaction term Ĥ_I ∼ x̂ has been inserted. This is in line with the interaction we saw in Eq. (5.27). Here, the proportionality factor Γ₀ determines the strength of the decay - and ensures appropriate units.

(a) For the same system as in Exercise 2.4.1, choose a set of parameters such that it has only one bound state. For this, the absolute value of your (negative) V₀ must be comparatively low.
(b) Analytically, set up the GKLS equation, Eq. (7.37), with the dissipative terms as dictated by Eqs. (7.49) and (7.50). Let the first term on the right hand side of Eq. (7.37), the commutator that provides the unitary, non-dissipative part of the evolution, be given by Ĥ₀ alone.
(c) Assume that the solution of the time-dependent GKLS equation can be separated in two non-overlapping terms at all times:

$$ \rho = \rho' + \rho_{0,0} |\varphi_0\rangle\langle\varphi_0|. \tag{7.51} $$

Show that the system decouples in separate equations for ρ' and ρ₀,₀.
Also, explain why ρ' remains a pure state, ρ' = |Ψ'⟩⟨Ψ'|, which follows a Schrödinger equation with an effective, non-Hermitian Hamiltonian.
(d) In the case that we start out in an unbound eigenstate, ψ_i = φ_k, is the rate at which ψ_f = φ₀ is populated consistent with Eq. (7.48)?
(e) Why can we be absolutely certain that the probabilities ⟨Ψ'(t)|Ψ'(t)⟩ and ρ₀,₀(t) add to 1 at all times?²⁰

20 You can assume that the Γ_{k,l} of Eq. (7.50) constitutes a positive semi-definite matrix.

(f) At last, solve the GKLS equation for this particular system. This may be done conveniently by solving for |Ψ'⟩ and the ground-state probability ρ₀,₀ separately. Use the same parameters as in part (a) for your potential, and let your initial state be a Gaussian that is well separated from this potential. With this in mind, set the parameters x₀, p₀, σ_p and τ as you please. Also, choose a comparatively low value for the decay strength Γ₀ in Eq. (7.50).
How does the capture probability ρ₀,₀(t) evolve in time? How is the final capture probability ρ₀,₀(t → ∞) affected by your choices for p₀ - and τ?
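The decoupling from part (c) makes the numerics light: |Ψ'⟩ evolves under an effective non-Hermitian Hamiltonian while ρ₀,₀ accumulates the corresponding loss. Below is a self-contained Python sketch under several assumptions of our own - a three-point finite difference Hamiltonian rather than the FFT one, arbitrary parameter values, and real dipole couplings. Treat it as a template, not as the reference solution.

```python
import numpy as np
from scipy.linalg import expm

hbar = m = 1.0
n, L = 512, 200.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
dx = x[1] - x[0]

# A shallow well: Eq. (2.30) with negative V0, supporting one bound state
V0, w, s = -1.0, 4.0, 5.0
V = V0 / (np.exp(s * (np.abs(x) - w/2)) + 1)

# Three-point finite difference Hamiltonian H0; eigh gives orthonormal columns
T = -hbar**2 / (2*m*dx**2) * (np.diag(np.ones(n-1), 1)
    + np.diag(np.ones(n-1), -1) - 2*np.eye(n))
eps, phi = np.linalg.eigh(T + np.diag(V))    # eps[0] < 0, the bound state

# Dipole couplings d_k = <phi_k|x|phi_0>, entering Eq. (7.50)
d = phi.T @ (x * phi[:, 0])
d[0] = 0.0                                   # jumps only from unbound states

# Effective non-Hermitian Hamiltonian for |Psi'>, in the eigenbasis
Gamma0, dt, nsteps = 5e-4, 1.0, 2000
H_eff = np.diag(eps) - 0.5j * hbar * Gamma0 * np.outer(d, d)
U = expm(-1j * H_eff * dt / hbar)

# Initial Gaussian, well separated from the well, heading towards it
x0, p0, sigma = -60.0, 0.5, 5.0
psi = np.exp(-((x - x0) / (2*sigma))**2 + 1j*p0*x/hbar).astype(complex)
psi /= np.linalg.norm(psi)
c = phi.T @ psi                              # expansion coefficients

P00 = 0.0
for _ in range(nsteps):
    c = U @ c
    P00 += Gamma0 * abs(d @ c)**2 * dt       # gain in rho_00, from part (c)
print("Capture probability:", P00)
```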

In Fig. 7.5 we show a similar, albeit much more complicated, capture process. Here an initially free electron collides with an F⁶⁺ ion, a fluorine atom with six of its electrons stripped off. With its three remaining electrons, it resembles a lithium atom - with a strongly charged nucleus. The upper graph in the plot shows the probability for the incoming electron to be captured by the ion so that it forms a stable F⁵⁺ ion. As we can see, it has several pronounced peaks for specific energies of the incoming electron.
At the end of the day, we may be perceived as somewhat self-contradictory here; we argued that Eq. (7.43) had to be dismissed on account of not producing any unitary evolution, which, as we learned in Exercise 6.4.4, is related to the Hermiticity of the Hamiltonian. And yet, in Exercise 7.2.6(c) we solved a Schrödinger equation with a non-Hermitian Hamiltonian. And it gets worse: the next chapter is dedicated to such Hamiltonians.

Figure 7.5 The upper graph shows the probability for an incoming, colliding electron to be captured by an F⁶⁺ ion as a function of energy. The sharp peaks correspond to a process called dielectronic recombination, which we will return to in the next chapter. Figure reprinted, with permission, from Ref. [38], M. Tokman, N. Eklöw, P. Glans, E. Lindroth, R. Schuch, G. Gwinner, D. Schwalm, A. Wolf, A. Hoffknecht, A. Müller, and S. Schippers, Physical Review A 66, 012703 (2002). Copyright (2002) by the American Physical Society.

A more precise reason for departing from the Schrödinger equation in Exercise 7.2.5 is the fact that it is unable to describe an irreversible process. The GKLS equation, on the other hand, is able to do so - through the last term on the right hand side, the Lindbladian. As we have seen, it can encompass irreversible processes such as spontaneous decay - accompanied by the loss, or dissipation, of energy. The fact that Noether's theorem does not apply here is also related to this; energy conservation applies to time-independent Hamiltonians, not to Lindbladians.
8 Non-Hermitian Quantum Physics

You may feel like objecting to the title of this chapter; aren’t operators in quantum
physics always Hermitian? Didn’t we show in Exercise 1.6.3 that physical quantities
would come out complex if not? The answer to both questions is yes. However, this
chapter would be a very boring one if that were the whole story.
Actually, although not physical, allowing for non-Hermitian operators may be a very
useful theoretical/numerical tool in many situations. Here we will take a brief look at
a couple of such situations.

8.1 Absorbing Boundary Conditions

In Exercise 2.3.2 we saw that odd things happen to a wave packet approximation when
it hits the boundary of a numerical domain. It could bounce back, as was the case in
our finite difference implementations, or it could reappear on the other side of the grid,
as we saw happening with our discrete Fourier transform approximation of the kinetic
energy operator. Both of these are, of course, artefacts that come about because our
numerical domain is too small. In order to avoid it, we must make sure we choose a
domain large enough to actually contain the wave function. Or do we?
Let’s consider the situation in Exercise 5.2.3, for instance, in which an initially bound
system is partially liberated after being exposed to a laser field and outgoing, unbound
waves are emitted. Several wave components, fast and slow ones, will escape the confin­
ing potential - at various times. Over the time it takes for the slower wave components
to escape, fast components may have travelled quite far. In order to contain such an
unbound wave function, we must apply a numerical domain that extends quite far,
which, in turn, comes at a rather high price in terms of computation efforts in time
and memory. This may be a bit frustrating if we really only want to describe what goes
on in some smaller interaction region. Couldn’t we rather kill off the outgoing parts
when they approach the boundary, and focus our attention on whatever is left?
Yes we could. There are several ways of doing this - of imposing absorbing boundary
conditions. Arguably, the simplest one is to augment the Hamiltonian with an extra,
artificial potential:

$$ \hat{H}_{\text{eff}} = \hat{H} - \mathrm{i}\,\Gamma(x), \tag{8.1} $$


where Ĥ is the proper, Hermitian Hamiltonian and Γ(x) is a function that is zero on most of the domain and positive close to the boundary. We coin this artificial amendment to our Hamiltonian a complex absorbing potential.

8.1.1 Exercise: Decreasing Norm

(a) Show that Ĥ_eff in Eq. (8.1) is non-Hermitian.
(b) Show that, for any wave function that has a certain overlap with the absorbing potential Γ, the norm of the wave function is decreasing in time.

Now, with this machinery in order, we may use it to facilitate solving some of the
exercises we have already done. Let’s start off with a revisit of Exercise 5.2.3.

8.1.2 Exercise: Photoionization with an Absorber

Start off with the implementation you used to solve Exercise 5.2.3; we will not change it much. But this time we will allow ourselves the luxury of choosing an L value that is likely to be too small - too small in the sense that a significant part of the wave function would hit the boundary unless we remove it before it gets there. To that end we impose this complex absorbing potential:

$$ \Gamma(x) = \begin{cases} \eta \left( |x| - x_{\mathrm{onset}} \right)^2, & |x| > x_{\mathrm{onset}}, \\ 0, & \text{otherwise}, \end{cases} \tag{8.2} $$

where the parameter η is the strength of the absorber, which is non-zero only for |x| larger than x_onset. In a practical implementation, you augment your Hamiltonian with this complex absorbing potential simply by adding −iΓ(x) to the actual potential V(x).
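In code, this amounts to a couple of lines. A sketch, with η and x_onset as in the text; the quadratic onset follows Eq. (8.2):

```python
import numpy as np

eta, x_onset = 0.01, 30.0   # absorber strength and onset, as in the text

def cap(x):
    """The complex absorbing potential Gamma(x) of Eq. (8.2)."""
    return eta * np.maximum(np.abs(x) - x_onset, 0.0)**2

# Augment the physical potential: V(x) -> V(x) - i*Gamma(x), e.g.
# V_eff = V(x_grid) - 1j * cap(x_grid)
```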

(a) Add this artificial contribution to your potential in your implementation from Exercise 5.2.3.
When you run it this time, you can set your box size L = 100 length units; this is a reduction by a factor of four. To have an equally dense numerical grid as in Exercise 5.2.3, make the same reduction in the number of grid points.
For the complex absorbing potential, you can set η = 0.01 and x_onset = 30 length units.
When you run - and visualize - your simulation, pay particular attention to the evolution of your wave packet in the region where absorption goes on - for x < −x_onset and x > x_onset.
(b) During time evolution, investigate how the norm N of the wave function,

$$ N^2(t) = \langle \Psi(t) | \Psi(t) \rangle, \tag{8.3} $$

changes, and plot N² as a function of time after having done your simulation.

(c) In order for your absorber to 'swallow' all the outgoing waves, you will need to propagate for a while longer than in Exercise 5.2.3; you probably need to increase your propagation time from back then. When you run your calculation long enough for the norm to reach a constant value, does the loss in norm, 1 − N²(t → ∞), coincide with the ionization probability you found in Exercise 5.2.3? Should it?
(d) Play around a bit with the absorber strength η in Eq. (8.2). There is a window of reasonable values. However, you do run into trouble if it is either too high or too low. What kind of trouble is this?

The answer to the question in (c) is yes - provided that we propagate long enough and
the artificial complex absorbing potential works as it should, swallowing everything
that hits it. This need not always be the case, though. If the absorbing potential is too
strong, it will reflect parts of the outgoing waves - the slow ones in particular. For this
reason, and the mere fact that low-energy wave components struggle to actually reach
the absorber within reasonable time, the ionization probability predicted in this way
may come out a little bit too low in practice.
If the absorber is too weak, on the other hand, parts of the wave will make it through
the absorption region, hit the numerical boundary and, thus, be subject to the artefacts
we wanted to avoid in the first place. A complex absorbing potential typically works
better the larger the region on which it is allowed to act. However, a larger absorption
region would require a larger numerical box, which, in turn, reduces the advantage of
using a smaller numerical domain. So there is a trade-off here.
Although there is, to some extent, a need for optimizing the parameters of a complex
absorbing potential, such techniques do remain useful tools when studying unbound
quantum systems. It is quite common to include them when studying ionization
phenomena, for instance.
While we’re at it, why not apply our complex absorbing potential to the scattering
example we studied in Exercise 2.4.1 as well?

8.1.3 Exercise: Scattering with an Absorber

(a) Rerun your simulation from Exercise 2.4.3 with the same initial set of parameters, which, apart from V₀ = 1, are listed in Exercise 2.4.1. However, this time, introduce the complex absorbing potential in Eq. (8.2). Again, seize the opportunity to reduce your box size L considerably. Let's halve it - and the number of grid points n + 1 as well. For the absorber, you may set η = 0.01, as in Exercise 8.1.2, and x_onset = 40.
Check, by direct inspection as you run your simulation, that your parameters η and x_onset are such that virtually no reflection goes on near the boundaries. Adjust the parameters if necessary.
Instead of fixing the duration of your simulation, it may be more convenient to set it running until it is more or less depleted - until the square of the norm, Eq. (8.3), is, say, less than 1%.

(b) Now, instead of determining reflection and transmission probabilities as in Eqs. (2.32), let's just measure the amount of absorption at each end:¹

$$ R = \frac{2}{\hbar} \int_0^\infty \int_{-L/2}^{-x_{\mathrm{onset}}} \Gamma(x)\, |\Psi(x;t)|^2 \,\mathrm{d}x \,\mathrm{d}t = \frac{2}{\hbar} \int_0^\infty \langle \Psi(t) | \Gamma_{\mathrm{L}} | \Psi(t) \rangle \,\mathrm{d}t, \tag{8.4a} $$

$$ T = \frac{2}{\hbar} \int_0^\infty \int_{x_{\mathrm{onset}}}^{L/2} \Gamma(x)\, |\Psi(x;t)|^2 \,\mathrm{d}x \,\mathrm{d}t = \frac{2}{\hbar} \int_0^\infty \langle \Psi(t) | \Gamma_{\mathrm{R}} | \Psi(t) \rangle \,\mathrm{d}t, \tag{8.4b} $$

where we have split the absorbing potential, Eq. (8.2), into a left and a right part:

$$ \Gamma_{\mathrm{L}} = \eta\,(-x_{\mathrm{onset}} - x)^2 \ \text{for}\ x < -x_{\mathrm{onset}} \quad \text{and} \quad \Gamma_{\mathrm{R}} = \eta\,(x - x_{\mathrm{onset}})^2 \ \text{for}\ x > x_{\mathrm{onset}}. \tag{8.5} $$

1 The factor 2 in front of the integrals may come across as somewhat non-intuitive. However, you may arrive at these formulae by first writing out the evolution dictated by the non-Hermitian version of the von Neumann equation, Eq. (7.25), iħ d/dt ρ = Ĥρ − ρĤ†, and then accumulating in time the trace of the absorbed part in the left and right ends, respectively.

Do you arrive at the same transmission and reflection probabilities as last time? Feel free to check for various values of the momentum p₀ of the incoming wave.
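Numerically, the time integrals in Eqs. (8.4) reduce to accumulated sums over the stored time steps. A sketch; the function name and its signature are our own, and psi_t is assumed to hold the wave function at each stored step:

```python
import numpy as np

def absorbed_fractions(psi_t, x, dt, eta, x_onset, hbar=1.0):
    """Accumulate Eqs. (8.4): probability absorbed at the left (R) and
    right (T) ends; psi_t is an array of wave functions, one per step."""
    h = x[1] - x[0]
    GL = np.where(x < -x_onset, eta * (-x_onset - x)**2, 0.0)
    GR = np.where(x >  x_onset, eta * ( x - x_onset)**2, 0.0)
    R = 2/hbar * sum(np.sum(GL * np.abs(psi)**2) * h for psi in psi_t) * dt
    T = 2/hbar * sum(np.sum(GR * np.abs(psi)**2) * h for psi in psi_t) * dt
    return R, T
```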

As mentioned, it is quite common to impose absorbing boundary conditions like this - or in some similar way - when simulating the time evolution of unbound dynamical systems. As we will see in the following, techniques involving non-Hermitian Hamiltonians are frequently used in time-independent contexts as well. We will, however, not leave our scattering implementation just yet.

8.2 Resonances

In Section 3.1 we claimed that the spectrum, the set of all eigenvalues, of a Hamiltonian
will generally consist of a discrete set corresponding to bound states and a continuous
set, which corresponds to unbound states. That would have been the whole story if it
hadn’t been for the resonance states.

8.2.1 Exercise: Scattering off a Double Well

We continue to revisit old exercises, this time Exercises 2.4.4 and 6.1.1. We expose our
quantum particle to the same double barrier as in the former, Eq. (2.36), but now we let
our particle scatter off it, as in Exercises 2.4.1 and 8.1.3, instead of placing it between
the barriers initially, as we did in Exercise 2.4.4.

(a) Simulate a scattering event in which you replace the single barrier, Eq. (2.30), with the double barrier of Eq. (2.36). The parameters to use for the double barrier are listed here, along with a number of other physical parameters and a few suggestions for the numerical ones:

  L    n+1    Δt     η    x_onset   σ_p   p₀    x₀    τ   V₀   w    s    d
 100   1024   0.1   0.05    40      0.1    1    −20    0    4   3   25    3

You may note that this amounts to a much denser numerical grid than what we
used in the previous exercise. This is necessary because of the narrow barriers and
their rather sharp corners.
As in Exercise 8.1.3, run your simulation and determine the transmission
probability.
(b) Now, repeat this calculation for a range of initial mean momenta or, rather, mean energies ε₀ = p₀²/(2m), ranging from 1 energy unit up to V₀ = 4 units. Do not bother to simulate your wave packet on the fly. Just use Eq. (8.4b) to determine the transmission probability T as a function of energy ε₀ and plot it.
It's not very monotonic, is it? Can you pinpoint specific energies ε₀ for which the transmission probability behaves peculiarly?
(c) Hopefully, you found that T(ε₀) has pronounced peaks at certain ε₀ values. Actually, these peaks would be even more pronounced if it hadn't been for the fact that our initial wave has a finite width σ_p in momentum. With a sharper momentum distribution, we would get sharper peaks.
The underlying structure we are trying to get at would be revealed if we could extrapolate the momentum width σ_p to zero somehow. This may appear difficult as it would require that the spatial width of our wave approach infinity - as dictated by Ineq. (1.5). However, we are quite capable of working around this. We already did so in Exercise 6.1.1. There the incoming wave was a pure exponential of the form exp(ip₀x/ħ), which, in turn, corresponds to σ_p = 0.
In your implementation from Exercise 6.1.1, shift your double barrier a bit towards the right so that you may quite reasonably assume that it is zero at x = 0 and below and at x = D and beyond - for some D value larger than 2d + w. Next, adjust and apply your implementation from Exercise 6.1.1 to determine T(ε₀) in the limit σ_p → 0. Do this for ε₀ ranging from almost zero to V₀.
How does this T(ε₀) compare to the preceding one?
(d) What would the corresponding T(ε₀) look like if there were only one barrier? Just rerun your calculation from (c) with a single barrier; for a strictly rectangular barrier there is also a closed-form expression to compare with, see the sketch below.
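For the single-barrier comparison, a strictly rectangular barrier admits a closed-form transmission coefficient - the standard textbook result below; it is smooth and monotonic in the energy, with none of the peaks of the double-barrier case. Our smooth barrier, Eq. (2.30), behaves similarly.

```python
import numpy as np

def T_rectangular(E, V0, w, m=1.0, hbar=1.0):
    """Transmission through a single rectangular barrier for 0 < E < V0."""
    kappa = np.sqrt(2*m*(V0 - E)) / hbar
    return 1.0 / (1.0 + V0**2 * np.sinh(kappa*w)**2 / (4*E*(V0 - E)))

E = np.linspace(0.05, 3.95, 400)
T = T_rectangular(E, V0=4.0, w=3.0)   # barrier parameters from the table
```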

So what are these sharp peaks that emerged in our plot for transmission through the double well - the ones you may see in the upper panel of Fig. 8.2? They are manifestations of resonances. This particular type are called shape resonances. We will try to provide ways of understanding how they come about.
But before we do that, let’s take a moment to dwell on what we just saw. It looks
rather odd seen through classical glasses. Suppose you follow our incident wave and
hit a single barrier. As we have seen, for comparatively low energies, most of the wave
will be reflected. However, if a second barrier is present behind it, there may be a much
higher chance your wave will just go straight through; at the right energy, virtually all of
the wave will pass. But how could the initial wave even ‘know’ that there was a second
barrier in the first place, as most of the wave would have bounced at the first barrier
157 8.2 Resonances

and never reached the second? It is fair to call this a manifestation of the non-local,
and non-intuitive, nature of quantum waves.
You may wonder what this has to do with resonance phenomena such as the one
we studied in Exercise 6.3.1 - after all, the phenomena share the same name. Perhaps
somewhat misleadingly, the name is motivated by the strong similarity between peaks
such as the ones seen in the upper panel of Fig. 8.2 and in Fig. 6.6.
You may also wonder what all this has to do with non-Hermitian quantum physics.
In this system, non-Hermiticity emerges when we impose outgoing boundary conditions.

8.2.2 Exercise: Outgoing Boundary Conditions

This time we will lean heavily on Exercise 3.1.1. We will continue with our double well and require that the energy ε < V₀. But now the barriers will not be smooth but rather sharply rectangular. Apart from this, which corresponds to letting s approach infinity in Eq. (2.30), let the parameters of the potential be the same as in Exercise 8.2.1.
As illustrated in Fig. 8.1, the x-axis may be divided into five regions, I-V. The solution of the time-independent Schrödinger equation is a linear combination of exp(±ikx) in regions I, III and V and exp(±κx) in regions II and IV, where

$$ k = \frac{1}{\hbar}\sqrt{2m\varepsilon}, \tag{8.6a} $$

$$ \kappa = \frac{1}{\hbar}\sqrt{2m(V_0 - \varepsilon)}. \tag{8.6b} $$

(a) All in all, this should leave you with no less than 10 coefficients to determine in order to solve the time-independent Schrödinger equation. However, this time we only allow outgoing waves in regions I and V - waves that move away from the double barrier, that is. If you combine the space-dependent parts with the time factor exp(−iεt/ħ), you should be able to identify which coefficients should be set to zero.²
Figure 8.1 The potential under study in Exercise 8.2.2. The domain is divided into five sections - each with its own analytical expression for the solution of the time-independent Schrödinger equation.

2 In case you feel like objecting to the fact that we insist on outgoing waves with nothing coming in, you are absolutely right to do so. Let's do it anyway.

(b) As before, we simplify our problem by dealing with the symmetric and the anti-symmetric cases separately. If we start out with the former, we may set up our wave function as

$$ \psi(x) = \begin{cases} A \cos(kx) & \text{in region III}, \\ B e^{-\kappa x} + C e^{\kappa x} & \text{in region IV}, \\ D e^{\mathrm{i}kx} & \text{in region V}, \end{cases} \tag{8.7} $$

where the solutions in regions I and II are given by the fact that ψ(−x) = ψ(x).
Explain why our four coefficients need to fulfil³

$$ M \begin{pmatrix} A \\ B \\ C \\ D \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad \text{where} \tag{8.8} $$

$$ M = \begin{pmatrix} \cos(kd_-) & -e^{-\kappa d_-} & -e^{\kappa d_-} & 0 \\ -k\sin(kd_-) & \kappa e^{-\kappa d_-} & -\kappa e^{\kappa d_-} & 0 \\ 0 & e^{-\kappa d_+} & e^{\kappa d_+} & -e^{\mathrm{i}kd_+} \\ 0 & -\kappa e^{-\kappa d_+} & \kappa e^{\kappa d_+} & -\mathrm{i}k\, e^{\mathrm{i}kd_+} \end{pmatrix}, $$

and we have introduced d± = d ± w/2 for convenience.


In Exercise 3.1.1 we insisted that the determinant of a similar, albeit smaller,
matrix had to be zero. We must do so again here. Why is that, again?
(c) This time you will not succeed in finding any real ε that gives det M = 0. But you may be able to find some complex ones. For reasonably dense grids for both Re ε and Im ε, run over several complex ε values in search of a value that makes the determinant of M vanish. Let the real part range from zero to V₀ and the imaginary part from −0.2 energy units to zero. When you plot the absolute value of det M as a function of Re ε and Im ε, perhaps you arrive at something like the middle panel of Fig. 8.2?
(d) Repeat the same calculation for the case of an anti-symmetric wave function, ψ(−x) = −ψ(x). Do your findings resemble the lower panel of Fig. 8.2?
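The scan in (c) and (d) boils down to evaluating det M on a grid in the complex energy plane. A sketch for the symmetric case, with the matrix ordered as in Eq. (8.8); the barrier parameters V₀, w and d are taken from the table in Exercise 8.2.1.

```python
import numpy as np

hbar = m = 1.0
V0, w, d = 4.0, 3.0, 3.0            # barrier parameters from Exercise 8.2.1
dm, dp = d - w/2, d + w/2           # d_minus and d_plus

def detM(eps):
    """det M of Eq. (8.8), symmetric case, for a complex energy eps."""
    k = np.sqrt(2*m*eps + 0j) / hbar
    kap = np.sqrt(2*m*(V0 - eps) + 0j) / hbar
    M = np.array([
        [np.cos(k*dm),    -np.exp(-kap*dm),    -np.exp(kap*dm),     0],
        [-k*np.sin(k*dm),  kap*np.exp(-kap*dm), -kap*np.exp(kap*dm), 0],
        [0,  np.exp(-kap*dp),      np.exp(kap*dp),     -np.exp(1j*k*dp)],
        [0, -kap*np.exp(-kap*dp),  kap*np.exp(kap*dp), -1j*k*np.exp(1j*k*dp)]])
    return np.linalg.det(M)

re = np.linspace(0.01, V0, 300)
im = np.linspace(-0.2, -1e-4, 200)
logdet = np.array([[np.log(abs(detM(er + 1j*ei))) for er in re] for ei in im])
# The dark minima of logdet locate the complex resonance energies
```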

What happened there? Did we not learn in Exercise 1.6.3 that Hermitian operators, such as our Hamiltonian, will provide real expectation values and eigenvalues? The answer is that the momentum operator, which appears in the Hamiltonian, isn't actually Hermitian here. We gave that up when we allowed for 'wave functions' that blow up exponentially as |x| → ∞. Our exponential solution in region V with a complex wave number k = k_R − ik_I, with positive k_I, is proportional to

$$ e^{\mathrm{i}kx} = e^{\mathrm{i}k_{\mathrm{R}}x} \, e^{+k_{\mathrm{I}}x}. \tag{8.9} $$
This function will grow beyond all measure when x becomes large. The same happens at the other end, in region I. So, in fact, there is no contradiction with what we learned in Exercise 1.6.3(a). In order to prove that p̂† = p̂, we made direct use of the fact that

proper wave functions are to vanish for large values of x. If we extend our function space and allow p̂ = −iħ d/dx to act on functions without this trait, p̂ is no longer Hermitian.

3 Of course, the four equations could be formulated and ordered differently, resulting in a different but equally adequate matrix M.

Figure 8.2 (a) The transmission probability as a function of the energy of an incoming wave for the system under study in Exercise 8.2.1. (b) The logarithm of the absolute value of det M in Eq. (8.8) for complex energies. The dark spots, where det M = 0, represent symmetric solutions of the time-independent Schrödinger equation with outgoing boundary conditions. (c) The same as (b), however with anti-symmetric 'solutions'.
Although these are not proper eigenenergies of any proper Hamiltonian, we do hope that the correspondence between the peaks we saw in Exercise 8.2.1 and the complex eigenvalues we just found has not gone unnoticed. In Fig. 8.2 we try to make the connection
so clear that it simply cannot be ignored. Note that the coincidence is not limited to the
real part of the resonance ‘energies’ and the position of the peaks in T(so). Hopefully,
it is almost equally clear that the imaginary part has something to do with the width of
the peaks. And herein lies the advantage of allowing for complex energies: it provides
us with both the position and the width of resonance peaks.
This width, in turn, is related to the so-called lifetime of the resonance.

8.2.3 Exercise: The Lifetime of a Resonance

Suppose that a quantum system starts out in a resonance state ψ_res with a complex eigenenergy given by

$$ \varepsilon_{\mathrm{res}} = \varepsilon_{\mathrm{pos}} - \frac{\mathrm{i}}{2}\,\Gamma_{\mathrm{width}}, \tag{8.10} $$

where ε_pos and Γ_width are real numbers, and Γ_width is positive. The notion of being in such a resonance state may not be very well defined from a physical point of view, but for now, let's just assume that it makes sense.
Under this assumption, show that the square of the norm of the wave function, ⟨ψ_res(t)|ψ_res(t)⟩, will decrease exponentially.
Note that this is, in fact, the decay law for a radioactive material:

$$ n(t) = n_0 \left(\frac{1}{2}\right)^{t/t_{1/2}} = n_0 e^{-rt}, \tag{8.11} $$

where t_{1/2} is the half-life of the radioactive isotope, r is the decay rate, see Eq. (7.47), and n₀ is the initial amount of radioactive material. We can set n₀ to 1 here.
How do the half-life t_{1/2} and the rate r relate to the imaginary part of the energy, Γ_width?

We do hope that these examples provide convincing arguments for sometimes allowing for non-Hermiticity also in time-independent quantum physics - for practical reasons. However, what are resonance states? We have learned that eigenstates of a Hamiltonian could either belong to a discrete set, corresponding to localized, bound states, or to a continuous set, corresponding to unbound, unlocalized eigenstates. Do resonances actually represent a third way between the two? Not really; to the extent that resonance states actually are states, they belong to the continuum.
From a Hermitian point of view, resonances manifest themselves by the fact that
several eigenenergies in the continuum pile up in a narrow energy region. From such
pile-ups you may construct linear combinations which resemble bound states in the
sense that they are more or less localized in space - and in energy. However, such a
linear combination would not actually be a stationary solution; it is not a solution of
the time-independent Schrodinger equation since the energy is not fixed. Although the
energy standard deviation, or width, can be quite low, it is not zero; see Exercise 2.6.1.
Due to this, a system starting out in such a state will not remain in it. And the prob­
ability of measuring it in the same state at a later time decreases in time. The wider the
distribution in energy, the faster the decay. This decrease will often, to a good approxi­
mation, follow an exponential function in time with the rate given by the imaginary
part of the resonance energy - as in Exercise 8.2.3.
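In formulas, and assuming the convention of Eq. (8.10) above, this follows from the time factor of a stationary state alone:

$$ \left| e^{-\mathrm{i}\varepsilon_{\mathrm{res}} t/\hbar} \right|^2 = e^{-\Gamma_{\mathrm{width}}\, t/\hbar}, $$

so that the decay rate is r = Γ_width/ħ; compare Eq. (8.11).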
An example of this is illustrated in Fig. 8.3. It corresponds to an initial Gaussian
wave packet placed between two barriers as illustrated in Fig. 8.1. As we learned in
Exercise 2.4.4, more and more of it will tunnel through the barriers and escape as time
goes by. The figure shows the decreasing probability of remaining between the bar­
riers. After a short while, the wave function is, between the barriers, very similar to the

Figure 8.3 Here we have placed an initially localized wave packet between two barriers identical to the ones under study in Exercise 8.2.2. The full curve shows the probability for the system to remain between the barriers as a function of time. The dashed curve is the exponential decay as dictated by the imaginary part of the complex energy of the first resonance seen in Fig. 8.2, the one with the narrowest, leftmost peak. To be able to distinguish it from the full curve, we have lifted it a bit. The inset shows the probability as a function of time with a linear y-axis.

resonance with the longest lifetime - or, correspondingly, the smallest width. The fact
that the probability follows a straight line with a logarithmic y-axis goes to show that
it is, indeed, exponential - consistent with what we discussed in Exercise 8.2.3.
While allowing for non-Hermiticity is quite convenient when studying resonance
phenomena in quantum physics, explicitly insisting on outgoing boundary condi­
tions as in Exercise 8.2.2 is not the usual way of doing so in practice. Introducing
explicitly anti-Hermitian terms in the Hamiltonian is more common. If this is done
adequately, it enables us to calculate resonance states that abide by usual boundary
conditions instead of blowing up, as in Eq. (8.9). Imposing a complex absorbing poten­
tial, Eq. (8.1), could, to some extent, be one way of doing so. However, methods that
involve turning the position variable x into a complex-valued quantity have proven more
practical [30]. The original and most straightforward way of implementing this is to
simply multiply the position variable by a complex phase factor [7, 35].
Although the notion of resonances as eigenstates with complex eigenenergies for a non-Hermitian Hamiltonian is not really physical, the consequences of having resonances in the continuum are very real indeed.⁴ One manifestation of this is shown in Fig. 7.5. As in the upper panel of Fig. 8.2, the peaks are due to resonance states. However, these resonances are different from the ones we saw in Exercise 8.2.2; these are doubly excited states.

4 Pun intended.

8.2.4 Exercise: Doubly Excited States

We revisit a two-particle system similar to the one in Exercise 4.4.3. Here also the particles' interaction is given by Eq. (4.21), but this time we expose our particles to a Gaussian confining potential:

$$ V(x) = -e^{-x^2/4}. \tag{8.12} $$
Moreover, we start off by ignoring the interaction between the two particles.

(a) Show that if we remove the interaction, if we set W₀ = 0 in Eq. (4.21), then product states of eigenstates of the one-particle Hamiltonian with the potential above, Eq. (8.12), would solve the two-particle time-independent Schrödinger equation.
Also, show that, if we impose the proper exchange symmetry or anti-symmetry on such product states, they are still solutions.
(b) With W₀ = 0 still, determine the energy of the spatially exchange-symmetric state in which both particles are in the first, and only, excited state.
This energy is actually embedded in the continuous part of the spectrum, which corresponds to an unbound system. Why?
Hint: What is the minimal energy for a system in which one particle is in the ground state and the other one is free and far away?

Suppose now that we gradually increase W₀ from zero. This will shift the energy of the doubly excited state of the, initially, non-interacting particles upwards. And, more importantly, it will mix this state with the 'true' continuum states. In a non-Hermitian context, such doubly excited states are identifiable with eigenstates of complex energy - with an increasing imaginary component of their 'energy'. This is illustrated in Fig. 8.4. This figure also illustrates the three bound states the system features when the interaction strength W₀ is comparatively low. The ground state is a spatially exchange-symmetric state in which both particles essentially are in the one-particle ground state. In the limit W₀ → 0⁺, both excited states correspond to one particle being in the excited one-particle state and the other in the one-particle ground state. This scenario comes in both an exchange-symmetric and an anti-symmetric flavour. For finite interaction strength W₀, the symmetric state is higher in energy.
The continuum of our two-particle system starts at the energy corresponding to the one-particle ground state - with an energy of ε₀ = −0.694 energy units. This is the energy of a system in which one particle is in the ground state and the other is free - however, with very little energy. Our doubly excited state, the subject of Exercise 8.2.4, has an energy higher than this and, thus, it is embedded in the continuum. With the interaction turned on, the particles may exchange energy so that one may escape while the other relaxes to the one-particle ground state. This is an example of what is called autoionization - or the Auger-Meitner effect. Actually, this phenomenon is often referred to as just the Auger effect - despite the fact that Lise Meitner, whose picture you can see in Fig. 8.5, discovered it first.

Figure 8.4 Here we have plotted energies for a two-particle system in which we have, quite artificially, adjusted the interaction strength, W₀ in Eq. (4.21), from zero up to 0.5 units. The four curves correspond to the possible combinations of the two bound one-particle states, which are eigenstates when W₀ = 0. The doubly excited state, which is embedded in the grey continuum, is a resonance state. The width, which is exaggerated for illustrative purposes, is indicated by dashed curves. With increasing W₀, the bound states also eventually reach the continuum and, in effect, become resonances. The darker grey area indicates the double continuum, where it is energetically possible for both particles to escape the confining potential, while the intermediate tone indicates the onset of the second ionization threshold, in which the remaining system is left in the excited state after emission of one particle.

Figure 8.5 Lise Meitner, who was born Austrian and later also acquired Swedish citizenship, and her collaborator Otto Hahn in their lab in 1912. Together they did pioneering work on nuclear fission. This work was awarded the 1944 Nobel Prize in Chemistry. However, the prize was given to Otto Hahn alone; Meitner was not included. She should have been.

The process under study in Fig. 7.5 is the time reverse of autoionization. The sharp peaks appear when an incoming, initially unbound electron enters with an energy such that the total energy of the system coincides with that of a doubly excited state. The capture probability is significantly increased when it is energetically possible for the total system to form such a state, before it subsequently relaxes into an actual bound state by emission of a photon.⁵

Figure 8.6 The ¹²C nucleus, which consists of six protons and six neutrons, has a resonance state that corresponds to three ⁴He nuclei, α-particles, coming together. The importance of this resonance state, which is called the Hoyle state after the British astronomer Sir Fred Hoyle, can hardly be overestimated as it enables the formation of stable carbon in stars [21]. In the process illustrated above, two α-particles collide to form an unstable ⁸Be nucleus. This nucleus, in turn, fuses with yet another α-particle, forming a ¹²C nucleus in the Hoyle state. This resonance may, in turn, relax to the stable ¹²C ground state. In each step of this process, excess energy is emitted in the form of γ-radiation, a high-energy photon.
In Fig. 8.6 a very similar - and important - process observed in nuclear physics is illustrated.

5 The thicker curve in Fig. 7.5 is the contribution from a single, particularly wide resonance, while the dotted curve corresponds to the capture probability we would have without resonances.
9 Some Perspective

After having worked your way through this book, please do not go around telling your
friends that you understand quantum physics. No one does.1 But you have gained some
acquaintance with how quantum physics works - by doing some quantum physics. You
should still be careful bragging about it - not only because chances are that rather few
people will actually be impressed, but also because virtually all of the examples and
exercises you have seen in this book are quite simplified. That is not to say that they
are all easy! Some of them are quite involved, and you should give yourself credit for
managing to work your way through them. But we should also be aware that most of
them pertain to single-particle systems; we know by now that many-particle systems are
way more complicated. Moreover, most of the spatial wave functions we have looked
at, both dynamic and stationary ones, reside in a one-dimensional world.
Another, related, issue is the fact that many exercises have been tailored in a manner
that prevents certain difficulties from surfacing. It is a bit like playing curling; in pre­
paring the exercises, we have quite often been running around with our brooms to clear
the ice of obstacles that otherwise would have distorted and complicated the path of
our stone. When solving a real world problem from a specific discipline pertaining to
quantum physics from scratch, issues and problems we didn’t foresee tend to emerge.
It is rarely as straightforward as some of the exercises you have seen potentially would
have you believe - although some of them certainly have been complicated enough.
So, in case you decide to continue acquiring more profound knowledge of quantum
physics, within fields such as solid state physics, quantum chemistry, atomic, molecular
and optical physics, nuclear physics or particle physics, it is important to know that
several unforeseen complications will have to be dealt with. So be prepared for such
challenges and don’t allow them to break your spirit when faced with them. And know
that you do already have a toolbox full of useful computational and analytical tools -
tools that may be expanded upon and prove quite applicable should you seek a deeper
familiarity with the quantum world.
Whether this book has worked as a primer for further quantum indulgence or simply
a way of acquiring a certain acquaintance with the quantum world, we still hope that,
through all the technical details, equations and lines of code, an appreciation for the
beauty of the theory has emerged. Beneath all the technical mess, there is order.

1 At least according to Richard Feynman.


9.1 But What Does It Mean?

Throughout the book questions regarding quantum foundations and interpretations, questions such as 'what does this actually mean?', have lingered underneath the surface. And most times we have rather shamelessly evaded them.
Perhaps the majority of physicists would, if obliged to choose a view on how to
interpret quantum physics, point towards the Copenhagen interpretation, which is
attributable to Niels Bohr and his circle - a circle that included Werner Heisenberg
and Wolfgang Pauli. While it seems hard to find a clear-cut definition of what the
Copenhagen interpretation actually is, it does relate to the notion that the outcome
of a measurement is fundamentally random. It does not make sense to talk about the
position, momentum or energy of a quantum particle without actually measuring it.
And upon measurement, a process that is irreversible, at least in effect, takes place -
the collapse of the wave function.
In a 1989 column in Physics Today [29], David Mermin ‘defined’ the interpretation
in the following way: ‘If I were forced to sum up in one sentence what the Copenhagen
interpretation means to me, it would be “Shut up and calculate!”.’ Admittedly, this
book has been rather loyal to this ‘interpretation’.
The question of what actually constitutes a measurement is certainly a justifiable
one. In trying to answer it, we cannot avoid the inclusion of an interaction between
our quantum system and some measuring device. And since a full description of the
combined system of apparatus and quantum system is usually beyond a practical
description, a theory of open quantum systems - density matrices and such - lends
itself as the adequate framework for describing the process of measurement. Perhaps
the Copenhagen interpretation is not the full story, just the starting point - that a
deeper, more detailed understanding of the measurement process will add to or adjust
our standard quantum interpretation.
But there are alternatives. If you look, you will find some, not many, interpretations
of quantum physics that are fundamentally different. Arguably, the most interesting
one is the pilot wave interpretation, attributable to Louis de Broglie and David Bohm.
Mathematically, it gives the same predictions as standard quantum physics. But it
allows for an interpretation in which quantum objects do have well-defined positions
and momenta - also prior to measurement (see Fig. 9.1). This, however, comes at the
price of other conceptual difficulties.

9.2 Quantum Strangeness

We continue to quote David Mermin [29], who, fortunately, refuses to settle for just
calculating: ‘But I won’t shut up. I would rather celebrate the strangeness of quantum
theory than deny it, because I believe it still has interesting things to teach us about how
certain powerful but flawed verbal and mental tools we once took for granted continue
to infect our thinking in subtly hidden ways’.

Figure 9.1 It's hard to tell the quantum score between Bohm United and Copenhagen FC. However, the Danish team does seem to be a bit ahead.

He goes on to point out two pitfalls. The first one has to do with the fact that the
strange, uncertain and stochastic nature of the quantum world can, and has, deceived people into believing that it is all a haze, that we can't really know anything for certain.
However, quantum theory has enabled us to extract substantial knowledge to a meas­
ure we could hardly dream of - and to make quantitative predictions of unprecedented
precision.
In combating this sentiment, on the other hand, there is a risk of attempting to trivi­
alize quantum strangeness. Perhaps it could be paraphrased along these lines: ‘Yes, it
may come across as a bit of a mystery to us, just like electromagnetism did back in
the day. However, this mysticism will certainly diminish with increasing familiarity’.
Some would say that there is a need to de-mystify quantum phenomena, especially now
with the so-called second quantum revolution going on. Perhaps. But, if we take this too
far, we miss out. By disregarding the genuine quantum strangeness, or, again quoting
Mermin [29], by ‘sanitizing the quantum theory to the point where nothing remarkable
sticks out’, chances are that novel insights, both scientific and philosophical ones - in
addition to technological solutions - pass us by.
And, beyond the realms of the mere physical world, we may miss out on an important
realization or experience. What we do not know, what we have not seen before, may be
different in ways we could not have conceived in the first place. Learning happens and understanding grows by exposing ourselves to the unknown rather than projecting our own perceptions and categories onto it.

9.3 What We Didn't Talk About

While we have tried to span rather widely in order to try to give a taste of several differ­
ent quantum flavours, the true span is, of course, much wider. And, of course, it goes
a whole lot deeper than the present account, which could, somewhat exaggeratedly, be
compared to rushing through the Louvre exhibitions on a motorbike.

One topic that has not been properly introduced is the notion of field theory. In
relation to Eq. (2.5), it was mentioned that the electromagnetic field should not really
be a simple function but rather an operator - one that involves creation and annihilation
of photons, which are the mediators of electromagnetic interactions. We say that these
particles are the quanta of the photon field; the electromagnetic field is quantized. For
the nuclear forces seen in nature, other mediators exist - mediators or particles that
constitute the quanta of their respective quantized fields. And the notion of quantized
fields extends further. On a fundamental level, it also applies to the particles that make
up matter, such as electrons. As briefly mentioned in Section 7.1, particles should really
be described in terms of quantized fields rather than wave functions. This is not to say that the wave function formulations we have been dealing with are wrong; they may be derived from the underlying formalism as a more than adequate approximation in most cases. However, at high energies, when relativistic effects come into play in full, the field theoretical formulation is the proper framework.
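While a proper treatment of quantized fields is far beyond our scope, the central algebraic objects - creation and annihilation operators - are easy to play with numerically. The sketch below is our own illustration, not an excerpt from the formalism above: it builds the annihilation operator a, with matrix elements ⟨n−1|a|n⟩ = √n, in a truncated basis of number states and checks the commutator [a, a†], which should be the identity apart from an artefact in the last basis state.

```python
import numpy as np

n_max = 10  # truncation of the number-state basis (an arbitrary choice)

# Annihilation operator: <n-1| a |n> = sqrt(n) on the superdiagonal
a = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)
a_dag = a.conj().T  # the creation operator is its Hermitian adjoint

# [a, a_dag] should be the identity; the last diagonal element fails
# because the basis is truncated at n_max.
commutator = a @ a_dag - a_dag @ a
print(np.round(np.diag(commutator), 10))
```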
Another topic we haven't mentioned is that of perturbation theory. It is also an important topic within quantum physics, albeit for more technical reasons. A full description of a quantum system, be it stationary or dynamic, is often hard to achieve. In many situations it is a viable path to start off with a similar system that we know how to handle and then treat the difference between our simplified system and the actual one as a small disturbance - a perturbation. For instance, in the case of an atom exposed to an electric field, Eq. (5.26), it may be useful to treat the interaction term, Eq. (5.27), as a small perturbation of the atom - a perturbation that can be accounted for by applying an iterative scheme. The first-order correction can be determined in a rather simple way by assuming that the population of the initial state is virtually unchanged; the interaction only leads to a minor change in the wave function. A second-order correction can, in turn, be calculated by taking the first-order correction as the starting point - and so on. If the electric field is comparatively weak, it may be sufficient to include only the first few corrections to get sensible results. In this way, quantities such as ionization and excitation probabilities may be approximated for rather complex systems without actually solving the time-dependent Schrodinger equation - even analytically in some cases.
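As a small, hypothetical illustration of such a first-order correction - with made-up numbers, not an example taken from Chapter 5 - the first-order amplitude for ending up in a state |f⟩, having started out in |i⟩, reads c_f(t) = −i ∫₀ᵗ V_fi(t′) exp(iω_fi t′) dt′ in atomic units, where V_fi is the interaction matrix element and ω_fi the transition frequency. Numerically, this is just a quadrature:

```python
import numpy as np

# First-order time-dependent perturbation theory for a two-level system.
# All parameters are hypothetical; atomic units (hbar = 1) are assumed.
w_fi = 0.375            # transition frequency E_f - E_i
d = 0.005               # coupling strength <f| V_0 |i>
omega = 0.3             # laser frequency, somewhat off resonance

t = np.linspace(0, 200, 100_001)
V_fi = d * np.sin(omega * t)            # interaction matrix element
integrand = V_fi * np.exp(1j * w_fi * t)

# c_f = -i * integral_0^T V_fi(t') exp(i w_fi t') dt' (trapezoidal rule)
c_f = -1j * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

# The result is only trustworthy as long as it remains much less than 1.
print(f"First-order excitation probability: {abs(c_f)**2:.2e}")
```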
When addressing the time-independent energy spectra of many-particle atoms and molecules, we have several times alluded to the somewhat artificial notion of non-interacting electrons. Such a system would correspond to Eq. (2.5) with both the electromagnetic field, given by the vector potential A, and the electron-electron interaction W set to zero. This notion makes more sense in the context of perturbation theory, in which the remaining sum of single-particle Hamiltonians is considered the unperturbed system and the electron-electron repulsion, Eq. (2.7), may be treated as a perturbation.
In this context, the framework of density functional theory also deserves mention. It is a widely applied methodology - particularly within quantum chemistry and solid state physics. It aims to lift the curse of dimensionality by describing an N-particle quantum system by means of the density function,

\[
\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \left| \Psi(x, x_2, \ldots, x_N) \right|^2 \, \mathrm{d}x_2 \cdots \mathrm{d}x_N, \tag{9.1}
\]

instead of the full wave function. This is quite an ambition: for N particles in a d-dimensional world, it consists in reducing the problem from describing a complex function in an Nd-dimensional space to a real function in just d dimensions. While ambitious, several approximations and assumptions pertaining to this framework have proven quite viable. The most commonly applied methodology within density functional theory, the Kohn-Sham method, is actually quite similar to the Hartree-Fock method, which we addressed in relation to Exercise 4.4.4. However, contrary to the Hartree-Fock method, it does have the ability to include correlation energy in a single Slater determinant framework.
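To make Eq. (9.1) concrete: once a wave function is represented on a grid, the density is just a sum over all coordinates but one. Here is a minimal sketch for N = 2 particles in one dimension, with a made-up two-particle wave function standing in for the real thing:

```python
import numpy as np

# One-particle density, Eq. (9.1), for N = 2 particles in one dimension.
x = np.linspace(-10, 10, 512)
h = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")

# Hypothetical wave function: smooth, exchange symmetric and normalizable
Psi = (1 + 0.3 * X1 * X2) * np.exp(-0.5 * (X1**2 + X2**2))
Psi /= np.sqrt(np.sum(np.abs(Psi)**2) * h**2)  # normalize on the grid

# rho(x) = integral |Psi(x, x2)|^2 dx2, here as a simple Riemann sum
rho = np.sum(np.abs(Psi)**2, axis=1) * h
print("The density integrates to", np.sum(rho) * h)  # should be ~1.0
```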
Yet another fundamental topic we have evaded is how quantum physics affects statistical physics. In Chapter 4 we saw that the spin of quantum particles cannot be disregarded - even when there is no explicit spin dependence in the Hamiltonian. The book-keeping must abide by the Pauli principle. Correspondingly, it is a matter of importance whether the particles we wish to describe by statistical means are fermions or bosons - particularly at low temperatures.
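To hint at what this amounts to quantitatively: the standard mean occupation numbers for fermions and bosons - results we have not derived here - differ only by a sign in the denominator, yet behave very differently as the temperature is lowered. A quick comparison, with arbitrary units and parameters:

```python
import numpy as np

# Mean occupation numbers at energy eps, chemical potential mu and
# temperature T, in units where Boltzmann's constant is 1.
def fermi_dirac(eps, mu, T):
    return 1.0 / (np.exp((eps - mu) / T) + 1.0)

def bose_einstein(eps, mu, T):
    return 1.0 / (np.exp((eps - mu) / T) - 1.0)  # requires eps > mu

eps = np.linspace(0.5, 3.0, 6)
for T in (2.0, 0.1):
    # Fermions fill states up to the chemical potential;
    # bosons pile up in the lowest states.
    print(f"T = {T}")
    print("  fermions:", np.round(fermi_dirac(eps, mu=1.0, T=T), 3))
    print("  bosons:  ", np.round(bose_einstein(eps, mu=0.0, T=T), 3))
```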
We have consistently assumed that we are well within the jurisdiction of quantum
physical law. Yet we do know that the old Newtonian mechanics works very well for
large objects - objects that are not microscopic. So where do Newton’s laws cease to
apply and quantum physics starts to take over? This question, which has been addressed
in Ref. [34], for instance, is far from trivial. We have learned, in Exercise 2.6.2, that the
Schrodinger equation is consistent with Newton’s second law if we interpret classical
position and momentum as quantum physical expectation values. This is also in line
with Bohr’s correspondence principle, which claims that quantum physics reproduces
the behaviour as dictated by classical laws in the case of very large quantum numbers.
On the other hand, it is also a fact that several semi-classical approaches, methods in
which parts of a quantum system are described by classical, Newtonian means, have
proven able to describe quantum phenomena quite well. So it is fair to say that this
borderline between classical physics and quantum physics is rather blurry.
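This correspondence is easy to probe numerically. The sketch below, our own illustration with arbitrarily chosen parameters, propagates a displaced Gaussian in the harmonic potential V(x) = x²/2 using a split-operator scheme and compares ⟨x⟩ with the classical trajectory x(t) = x₀ cos t, which it should follow according to Ehrenfest's theorem:

```python
import numpy as np

# A Gaussian wave packet in a harmonic potential: <x>(t) follows the
# classical trajectory. Split-operator propagation on a Fourier grid;
# units with hbar = m = omega = 1, parameters chosen arbitrarily.
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
dt, steps, x0 = 0.01, 315, 2.0        # total time ~ pi, half a period

psi = np.exp(-0.5 * (x - x0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))

expV = np.exp(-0.25j * dt * x**2)     # half step with V(x) = x^2 / 2
expT = np.exp(-0.5j * dt * k**2)      # full step with T = k^2 / 2
for _ in range(steps):
    psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))

mean_x = np.sum(x * np.abs(psi) ** 2) * (L / N)
print(f"<x> at t = {steps * dt:.2f}:  {mean_x:+.4f}")
print(f"classical x(t):    {x0 * np.cos(steps * dt):+.4f}")
```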
For any of the topics we have addressed, you will have no trouble finding relevant and specialized literature. Nor will you have any trouble finding other introductions to quantum physics with a slightly different approach to the matter - typically a more analytical one. In this regard, it is worth mentioning John von Neumann's classic Mathematical Foundations of Quantum Mechanics from 1932, in which many key concepts were formalized (see Ref. [40] for a relatively new edition).
For every field where quantum physics applies, be it solid state physics, quantum chemistry, molecular and atomic physics, nuclear physics or particle physics, you will find extensive textbooks on the subject, books that may introduce you to the rich flora of phenomenology and methodology pertaining to each of them. To mention a few examples: Szabo and Ostlund's classic book Modern Quantum Chemistry is, albeit far from new, still a relevant introduction to many of the techniques used in the field [37]. A more extensive textbook in the field is provided by Helgaker, Jorgensen and Olsen [20]. Bransden and Joachain have written an accessible introduction to molecular and atomic physics [11]. Within the field of solid state physics, the introduction written by Ashcroft and Mermin² [5], along with the one written by Kittel [24], are often applied. When it comes to The Theory of Open Quantum Systems, the book titled accordingly and written by Breuer and Petruccione is a classic [12], as is Nielsen and Chuang's book Quantum Computation and Quantum Information within its field [31]. Introductions to the world of nuclei and particles are provided, for instance, by Povh, Rith, Scholz, Zetsche and Rodejohann [32] and by Martin and Shaw [27]. In regard to the topic of the previous chapter, Moiseyev's book Non-Hermitian Quantum Mechanics is the go-to reference [30].
Needless to say, this listing is far from exhaustive - by anyone's standards.
Some of these books are beyond the undergraduate level. And even for graduate students, working your way through a thorough textbook, while rewarding, can be tedious. Sometimes a profound and technical familiarity is not essential either. Good popular books may often provide an interesting and informative introduction to the topic in question. In this spirit, we can recommend Bernhardt's book on quantum computing [10]. Zee has written an interesting story about how the notion of symmetry has played and continues to play a crucial role in the search for new insights within modern physics [41]. Hossenfelder, on the other hand, provides examples of how the same notion can also lead us astray [22].
Feynman wrote a popular introduction to the field of quantum electrodynamics [16]. Feynman was himself a very significant contributor to this field, which also laid the groundwork for other quantum field theories.
Despite the fact that the first versions were written in the 1940s, George Gamow's books about Mr Tompkins and his encounters with modern physics [19] continue to educate and entertain readers, both young and old.
Much has been said and written about the meaning of quantum physics in general and the measurement problem in particular. One notable example is the book Beyond Measure: Modern Physics, Philosophy and the Meaning of Quantum Theory, by Baggott [6]. The book also gives an interesting account of the historical development of quantum physics.
Regardless of your future quantum path, we hope that this introduction has spurred
your curiosity, and that this encounter with the beautiful, queer world of quantum
physics has been a pleasant one.

² Yes, it is the same Mermin as the one we quoted earlier.


References

[1] E. Schneider. To Understand the Fourier Transform, Start from Quantum Mechanics. www.youtube.com/watch?v=W8QZ-yxebFA.
[2] Hitachi website, hitachi.com/rd/research/materials/quantum/doubleslit/.
[3] M. T. Allen, J. Martin, and A. Yacoby. Gate-defined quantum confinement in suspended bilayer graphene. Nature Communications, 3(1):934, 2012.
[4] P. W. Anderson. More is different: broken symmetry and the nature of the hierarchical structure of science. Science, 177(4047):393-396, 1972.
[5] N. W. Ashcroft and N. D. Mermin. Solid State Physics. Holt, Rinehart and
Winston, 1976.
[6] J. E. Baggott. Beyond Measure: Modern Physics, Philosophy, and the Meaning of
Quantum Theory. Oxford University Press, 2004.
[7] E. Balslev and J. M. Combes. Spectral properties of many-body Schrodinger
operators with dilatation-analytic interactions. Communications in Mathematical
Physics, 22(4):280-294, 1971.
[8] C. H. Bennett and G. Brassard. Quantum cryptography: public key distribution
and coin tossing. arXiv preprint arXiv:2003.06557, 2020.
[9] C. H. Bennett and S. J. Wiesner. Communication via one- and two-particle operators on Einstein-Podolsky-Rosen states. Physical Review Letters, 69:2881-2884, 1992.
[10] C. Bernhardt. Quantum Computing for Everyone. MIT Press, 2020.
[11] B. H. Bransden and C. J. Joachain. Physics of Atoms and Molecules. Prentice
Hall, 2003.
[12] H. P. Breuer and F. Petruccione. The Theory of Open Quantum Systems. Oxford
University Press, 2002.
[13] D. Bruton. Approximate RGB values for visible wavelengths. physics.sfasu.edu/astro/color/spectra.html.
[14] D. Chruscinski and S. Pascazio. A brief history of the GKLS equation. Open Systems & Information Dynamics, 24(03):1740001, 2017.
[15] R. P. Feynman. Simulating physics with computers. International Journal of
Theoretical Physics, 21(6):467-488, 1982.
[16] R. P. Feynman and A. Zee. QED: The Strange Theory of Light and Matter. Alix
G. Mautner Memorial Lectures. Princeton University Press, 2006.


[17] A. B. Finnila, M. A. Gomez, C. Sebenik, C. Stenson, and J. D. Doll. Quantum annealing: a new method for minimizing multidimensional functions. Chemical Physics Letters, 219(5):343-348, 1994.
[18] B. Friedrich and D. Herschbach. Stern and Gerlach: how a bad cigar helped
reorient atomic physics. Physics Today, 56(12):53-59, 2003.
[19] G. Gamow and R. Penrose. Mr Tompkins in Paperback. Canto. Cambridge
University Press, 1993.
[20] T. Helgaker, P. Jorgensen, and J. Olsen. Molecular Electronic-Structure Theory.
John Wiley & Sons, 2013.
[21] M. Hjorth-Jensen. The carbon challenge. Physics, 4:38, 2011.
[22] S. Hossenfelder. Lost in Math: How Beauty Leads Physics Astray. Hachette UK,
2018.
[23] S. Jordan. The Quantum Algorithm Zoo, quantumalgorithmzoo.org/.
[24] C. Kittel and P. McEuen. Introduction to Solid State Physics. John Wiley & Sons,
8th edition, reprint edition, 2015.
[25] A. Kramida, Yu. Ralchenko, J. Reader, and NIST ASD Team. NIST Atomic Spectra Database (ver. 5.10), [Online]. Available at nist.gov/pml/atomic-spectra-database. National Institute of Standards and Technology, 2022.
[26] Y. I. Manin. Mathematics as metaphor. In Proceedings of the International
Congress of Mathematicians. Kyoto, 1990.
[27] B. R. Martin and G. Shaw. Nuclear and Particle Physics: An Introduction. John
Wiley & Sons, 2019.
[28] P. G. Merli, G. F. Missiroli, and G. Pozzi. On the statistical aspect of electron
interference phenomena. American Journal of Physics, 44(3):306-307, 1976.
[29] D. Mermin. What’s wrong with this pillow? Physics Today, 42(4):9, 1989.
[30] N. Moiseyev. Non-Hermitian Quantum Mechanics. Cambridge University Press,
2011.
[31] M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press, 2010.
[32] B. Povh, K. Rith, C. Scholz, F. Zetsche, and W. Rodejohann. Particles and Nuclei:
An Introduction to the Physical Concepts. Springer, 2015.
[33] A. Robinson. The Last Man Who Knew Everything. Oneworld Publications, 2006.
[34] M. A. Schlosshauer. Decoherence and the Quantum to Classical Transition. Springer Science & Business Media, 2007.
[35] B. Simon. Quadratic form techniques and the Balslev-Combes theorem. Communications in Mathematical Physics, 27(1):1-9, 1972.
[36] J. Steeds, P. G. Merli, G. Pozzi, G. F. Missiroli, and A. Tonomura. The double-slit
experiment with single electrons. Physics World, 16(5):20, 2003.
[37] A. Szabo and N. S. Ostlund. Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory. Dover Publications, 1996.

[38] M. Tokman, N. Eklow, P. Glans, E. Lindroth, R. Schuch, G. Gwinner, D. Schwalm, A. Wolf, A. Hoffknecht, A. Muller, and S. Schippers. Dielectronic recombination resonances in F6+. Physical Review A, 66:012703, 2002.
[39] A. Tonomura, J. Endo, T. Matsuda, T. Kawasaki, and H. Ezawa. Demonstration of single-electron buildup of an interference pattern. American Journal of Physics, 57(2):117-120, 1989.
[40] J. von Neumann. Mathematical Foundations of Quantum Mechanics: New edition.
Princeton University Press, 2018.
[41] A. Zee. Fearful Symmetry: The Search for Beauty in Modern Physics. Collier
Books, 1989.
Figure Credits

Fig. 1.1. Source: Dorling Kindersley RF/Getty Images.
Fig. 1.2. The Master and Fellows of Trinity College, Cambridge. Reference: Cambridge, Trinity College, Add.P 174(h).
Fig. 1.3. Courtesy of the Niels Bohr Archive, Copenhagen.
Fig. 2.3. With the permission of Hitachi: hitachi.com/rd/research/materials/quantum/doubleslit/.
Fig. 2.5. Source: Professor Frank Trixler.
Fig. 2.6. Source: Jonas Dixon Østhassel.
Fig. 3.6. Facsimile of Mendeleev’s 1869 periodic table of the elements. Public domain.
Fig. 3.7. Source: Unknown photographer, courtesy of the Museum of University
History, University of Oslo.
Fig. 4.1. Source: Professor Horst Schmidt-Bocking.
Fig. 4.2. Courtesy of the Niels Bohr Archive, Copenhagen.
Fig. 4.3. Photograph by Paul Ehrenfest, courtesy of the Niels Bohr Archive, Copenhagen.
Fig. 4.4. Illustration by Professor Reiner Blatt, University of Innsbruck, used with permission.
Fig. 5.1. Source: M. T. Allen, J. Martin, and A. Yacoby. Gate-defined quantum confinement in suspended bilayer graphene. Nature Communications, 3(1):934, 2012. With permission from Nature Customer Service Centre GmbH.
Fig. 6.2. Source: Professor Michael Schmid, TU Wien; adapted from the IAP/TU Wien STM Gallery (CC BY-SA 2.0).
Fig. 6.5. Fraunhofer lines. Public domain.
Fig. 6.7. Source: Ptrumpl6 (CC BY-SA 4.0).
Fig. 6.9. Source: CERN, used with permission.
Fig. 6.12. Media courtesy of D-Wave.
Fig. 7.1. Scriberia/Institute of Physics.
Fig. 7.2. From Dariusz Chruscinski and Saverio Pascazio. A brief history of the GKLS equation. Open Systems & Information Dynamics, 24(03):1740001, 2017. Used with the permission of Springer Nature BV, conveyed through Copyright Clearance Center, Inc.
Fig. 7.4. Portrait of Emmy Noether, around 1900. Public domain.


Fig. 7.5. Figure reprinted with permission from M. Tokman, N. Eklow, P. Glans, E. Lindroth, R. Schuch, G. Gwinner, D. Schwalm, A. Wolf, A. Hoffknecht, A. Muller, and S. Schippers, Physical Review A 66, 012703 (2002). Copyright (2002) by the American Physical Society.
Fig. 8.5. Reproduced from ‘Lise Meitner and Otto Hahn’, Churchill Archives Centre,
MTNR 8/4/1.
Fig. 9.1. Scriberia/Institute of Physics.
Index

absorption spectrum, 109 Bloch, Felix, 45


adiabatic theorem, 98, 99, 126 Bohm, David, 166
amplitude damping, 145, 147 Bohr formula, 40
Anderson, Philip W., 57 Bohr radius, 14
Angstrom, Anders Jonas, 106 Bohr, Niels, 4, 8, 40, 65, 166
angular momentum, 41 Born interpretation, 7
annealing for momentum, 93
quantum, 128 Born, Max, 7, 52, 93
simulated, 128 Bose, Satyendra Nath, 59
annihilation, 136, 168 boson, 59, 63
anti-commutator, 67, 133 bound state, 37, 40
anti-particle, 136 boundary conditions
Aspect, Alain, 121 absorbing, 152
atom, 3 Dirichlet, 24, 41
Bohr atom, 4, 40 outgoing, 103, 157
helium, 70, 108 periodic, 24, 45
hydrogen, 14 bra, 13
one-dimensional model, 88 bra-ket, 13
Brillouin, Leon, 104
atomic units, 14
Broglie, Louis de, 3, 34, 166
Auger-Meitner effect, 162
autoionization, 162
chemical shielding, 113
avoided crossing, 99, 126
Clauser, John F., 121
coherent state, 48
b-spline, 91 commutator, 34, 67, 139
Balmer series, 108 fundamental, 34
Balmer, Johann, 108 complex absorbing potential, 152, 153
barrier configuration interaction, 70
double, 30, 155, 157 continuum, 39, 49, 95, 98, 149, 160, 162
general, 102 double, 163
linear, 104 pseudo-continuum, 49, 95
rectangular, 103 correlation energy, 76
smooth, 27 correspondence principle, 169
BB84, 123 Coulomb interaction, 17
Bell states, 62, 66, 120, 141 Coulomb potential, 14
Bell, John Stewart, 66, 120, 121 coupled cluster, 77
Bennett, Charles, 125 Crank-Nicolson method, 92
bipartite system, 140 creation, 136, 168
black body radiation, 2 Curie, Marie, 5
Bloch sphere, 116 curse of dimensionality, 6, 70, 77, 114
Bloch’s theorem, 44 blessing, 114


Davisson, Clinton, 26, 60 general, 12, 32


Davisson-Germer experiment, 60 kinetic energy, 21
decoherence, 122, 126, 144 momentum, 10
degeneracy, 36 position, 8
density functional theory, 168 Ezawa, H., 26
density matrix, 138
mixed state, 140 Fermi golden rule, 148
positivity, 143 Fermi, Enrico, 59, 148
pure state, 139 fermion, 59
reduced, 140, 141 Feynman, Richard, 6, 114, 165, 170
steady state, 146, 147 field theory, 136, 168
detuning, 83, 84, 112 fifth Solvay conference, 5
dielectronic recombination, 150 finite difference
diffusion equation, 51 double derivative
dipole approximation, 88 five-point rule, 19
Dirac equation three-point rule, 12, 19
negative energy solutions, 136 midpoint rule, 11
non-relativistic limit, 137 Fletcher, Harvey, 42
time-dependent, 132 Fock, Vladimir, 76
with electromagnetic field, 133 Fourier transform, 19, 45, 93, 128
time-independent, 134 fast, 20, 93
Dirac notation, 13, 35, 115, 138 Fraunhofer, Joseph von, 110
Dirac, Paul, 132
Gaussian, 22, 23, 91
Dirichlet, Peter Gustav Lejeune, 24
potential, 43, 162
discretize, 6, 17
wave function, 23
dispersion, 4
Gerlach, Walther, 57, 58, 60
dispersion relation, 45
Germer, Lester, 26, 60
displacement operator, 44
GKLS equation, 142
dissipation, 149, 151
Glauber state, 48
double-slit experiment
Glauber, Roy J., 48
with electrons, 4, 26, 33 Gordon, Walter, 132
with light, 2 Gorini, Vittorio, 142
Goudsmit, Samuel, 61
Ehrenfest’s theorem, 36, 169 gradient descent, 53, 55, 72, 128
Ehrenfest, Paul, 36, 65 ground state, 49, 72, 89
eigenvalue equation, 32
energy, 31, 37 Hadamard product, 56, 96
time-dependent, 97 Hahn, Otto, 163
spin, 61 Hamilton. William Rowan, 15, 16
Einstein, Albert, 1, 3, 8, 65 Hamiltonian, 15-17, 37
electric field, 87 Dirac, 132
electron, 3 component form, 135
charge, 14 laser interaction, 87
conduction, 104 non-Hermitian, 149, 152
g-factor, 67, 137 rotating wave, 83
spin, 59 spin-spin interaction, 84
elementary charge, 14 time-dependent, 78
emission spectrum, 109 two spin-1/2 particles, 85, 119
Endo, J., 26 two-state
energy bands, 45 dynamic, 81
entanglement, 63, 120 static, 80, 81
Euler’s method, 96 with electromagnetic field, 16
exchange symmetry, 58 harmonic oscillator, 42, 99
excited state, 49 Hartree, Douglas, 76
expectation value Hartree-Fock method, 74
conservation, 35 direct potential, 75, 76

exchange potential, 76 Missiroli, G. F., 26


Hartree potential, 76 mixed state, 140
Heisenberg, Werner, 4, 101, 166 momentum distribution, 93
Hermiticity, 13, 21, 46, 144
Hermitian adjoint, 13, 18, 35 three-dimensional, 16
Hermitian matrix, 22
Hoyle, Sir Fred, 164 Neumann, John von, 98, 139, 169
Hylleraas, Egil, 51, 76 Newton’s second law, 1, 28, 35
Newton, Isaac, 1
identity operator, 21, 85, 92, 123, 144 Noether’s theorem, 147, 151
imaginary time, 50, 52, 96 Noether, Emmy, 147
inner product, 12, 61 non-crossing rule, 98
interference, 2, 24—26, 94 non-locality, 65, 157
ion trap, 66 oil drop experiment, 42
ionization, 90, 93 operator exponential, 21
optical cycle, 88, 89
ionization threshold, 163 optimization, 100, 126
Ising model, 129 ordinary differential equation, 29
orthonormal, 46, 91
jump operator, 145
overlap matrix, 91
Kawasaki, T., 26 partial differential equation, 16, 90
ket, 13 partial trace, 139
kinetic energy operator, 11, 19 Pauli matrices, 67, 132, 133
three-dimensional, 16 characteristics, 67
Klein, Oskar, 132 eigenvalues, 68
Klein-Gordon equation, 131 Pauli principle, 45, 59, 63, 169
Kohn-Sham method, 169 Pauli rotation, 81, 118
Kossakowski, Andrzej, 142 Pauli rotation, 81, 118
Kramers, Hendrik Anthony, 104 periodic potentials, 43
Kronecker delta function, 46 periodic table, 49, 76
Kronecker product, 85 perturbation theory, 168
Kronig, Ralph, 61 photoelectric effect, 3
photoionization, 88, 153
laser, 78, 82, 88, 89 photon, 3, 17, 40, 168
Lindblad equation, 142 polarization, 115
Lindblad, Goran, 142 spin, 59, 109
Lindbladian, 151 piezoelectricity, 107
line spectrum, 108 pilot waves, 166
Planck constant, 3
magnetic moment, 80 reduced, 10, 11
magnetic resonance imaging, 110 Planck, Max, 2
Magnus propagator, 87 position operator, 10
Magnus, Wilhelm, 87 Pozzi, G., 26
Manin, Yuri, 6 probability current, 102
master equation, 138, 142 probability rate, 148
Matsuda, T., 26 propagator, 21, 86, 117
Maxwell, James Clerk, 2 proton
measurement, 7, 8, 11, 32, 41 g-factor, 110
probability, 33 mass, 110
Meitner, Lise, 162 spin, 110
Mendeleev, Dmitri, 49, 50
Merli, P. G., 26 purity, 140
Mermin, David, 166
metrology, 101 quantization
Michelson-Morley experiment, 61 angular momentum, 42
Millikan, Robert A., 42 projection, 59

charge, 42 time-independent, 31
electromagnetic field, 168 Hartree, 75
energy, 38, 39 non-linear, 75
harmonic oscillator, 42 periodic potential, 44
spin, 57, 58 Schrodinger, Erwin, 4, 16, 34, 132
projection, 59 selection rule, 109
quantum advantage, 122 self-consistent field, 74
quantum computing singular value decomposition, 144
adiabatic, 97, 126 Slater determinant, 69
algorithms, 122 Slater permanent, 69
annealing, 126, 129 Slater, John C., 69
gate-based, 117 spectral theorem, 22, 46, 90
quantum dot, 78, 79 spectroscopy, 110
quantum foundations, 166 speed of light, 65, 108, 135
quantum gates, 117 spherical harmonics, 42
CNOT, 119, 120 spin, 57, 74
fidelity, 119 dynamics, 78, 81, 84, 141
Hadamard, 117, 120 flip, 81,82, 110
NOT, 117, 120 magnetic interaction, 67, 79
SWAP, 119 singlet, 63, 70
quantum guitar, 33, 41 spin-spin interaction, 84, 85
quantum interpretation triplet, 63, 70
shut up and calculate, 166 spinor, 62, 134
Born interpretation, 7, 93 split operator, 87, 100, 127
Copenhagen interpretation, 8, 166 standard deviation, 12
de Broglie-Bohm interpretation, 166 for eigenstates, 32
quantum key distribution, 123 general, 12
quantum parallelism, 122 momentum, 12
quantum sheep, 30 position, 8
quantum statistics, 169 Stern, Otto, 57, 58
quantum strangeness, 166 Stern-Gerlach experiment, 58, 61
qubit, 86, 115, 116 Sudarshan, George, 142
superdense coding, 123
Rabi frequency, 84, 111
generalized, 84, 111 tensor product, 71, 85
Rabi, Isidor Isaac, 84
reflection probability, 28, 103, 155 time dilation, 131
relativity Tonomura, A., 26
general, 1 translation operator, 44
special, 61, 130 transmission probability, 28, 103, 155
resonance, 112 trapezoidal rule, 9, 86
doubly excited state, 162 tunnelling, 29, 30, 102, 128
exponential decay, 160
lifetime, 159, 160 Uhlenbeck, George, 61
nuclear magnetic resonance, 110 uncertainty relation, 4, 7, 12, 25
shape resonance, 156 unitary transform, 82, 117
width, 159
rotating wave approximation, 82, 111 variational principle, 50, 52, 53, 56, 70, 72, 73
Runge-Kutta method, 29, 90 vector potential, 17, 134, 168
von Neumann entropy, 140
scanning tunnelling microscope, 31, 105 von Neumann equation, 139
scattering, 27, 102, 154, 155
Schrodinger equation, 5 wave function, 5
time-dependent, 16, 37, 81 collapse, 7, 9, 33, 68, 122
in adiabatic basis, 98 Dirac, 134
matrix form, 91 expansion in eigenstates, 46, 62, 98
spectral representation, 91 Gaussian, 25, 30

Hartree-Fock, 76 Wentzel, Gregor, 104


normalization, 7, 18, 47, 62, 115 Wigner, Eugene, 98
product state, 61, 62 WKB approximation, 104
spin part, 61 work function, 106
stationary solution, 31, 102
steady current, 101 Young, Thomas, 2, 25
two particles, 95
with spin and spatial dependence, 61, 134 Zeilinger, Anton, 121
