Download Quantum Measurement Theory and its Applications 1st Edition Kurt Jacobs ebook All Chapters PDF

Download as pdf or txt
Download as pdf or txt
You are on page 1of 81

Visit https://ebookgate.

com to download the full version and


explore more ebooks

Quantum Measurement Theory and its Applications 1st


Edition Kurt Jacobs

_____ Click the link below to download _____


https://ebookgate.com/product/quantum-measurement-
theory-and-its-applications-1st-edition-kurt-jacobs/

Explore and download more ebooks at ebookgate.com


Here are some recommended products that might interest you.
You can download now and explore!

Handbook of Implicit Social Cognition Measurement Theory


and Applications 1st Edition Bertram Gawronski

https://ebookgate.com/product/handbook-of-implicit-social-cognition-
measurement-theory-and-applications-1st-edition-bertram-gawronski/

ebookgate.com

Heat Flux Processes Measurement Techniques and


Applications Processes Measurement Techniques and
Applications 1st Edition Gianluca Cirimele
https://ebookgate.com/product/heat-flux-processes-measurement-
techniques-and-applications-processes-measurement-techniques-and-
applications-1st-edition-gianluca-cirimele/
ebookgate.com

Recent advances in operator theory and its applications


the Israel Gohberg anniversary volume International
Workshop on Operator Theory and its Applications IWOTA
2003 Cagliari Italy 1st Edition Marinus A. Kaashoek
https://ebookgate.com/product/recent-advances-in-operator-theory-and-
its-applications-the-israel-gohberg-anniversary-volume-international-
workshop-on-operator-theory-and-its-applications-iwota-2003-cagliari-
italy-1st-edition-marin/
ebookgate.com

Quantum reality theory and philosophy 1st Edition Allday

https://ebookgate.com/product/quantum-reality-theory-and-
philosophy-1st-edition-allday/

ebookgate.com
Quantum Information Theory and the Foundations of Quantum
Mechanics 1st Edition Christopher G. Timpson

https://ebookgate.com/product/quantum-information-theory-and-the-
foundations-of-quantum-mechanics-1st-edition-christopher-g-timpson/

ebookgate.com

Group Theory and Quantum Mechanics Michael Tinkham

https://ebookgate.com/product/group-theory-and-quantum-mechanics-
michael-tinkham/

ebookgate.com

Game Theory and its Applications in the Social and


Biological Sciences 2nd Edition Andrew M. Colman

https://ebookgate.com/product/game-theory-and-its-applications-in-the-
social-and-biological-sciences-2nd-edition-andrew-m-colman/

ebookgate.com

Quantum Groups and Lie Theory 1st Edition Andrew Pressley

https://ebookgate.com/product/quantum-groups-and-lie-theory-1st-
edition-andrew-pressley/

ebookgate.com

Quantum Mechanics Concepts and Applications 1st Edition


Nouredine Zettili

https://ebookgate.com/product/quantum-mechanics-concepts-and-
applications-1st-edition-nouredine-zettili/

ebookgate.com
QUANTUM MEASUREMENT THEORY
AND ITS APPLICATIONS

Recent experimental advances in the control of quantum superconducting circuits, nano-


mechanical resonators and photonic crystals have meant that quantum measurement theory
is now an indispensable part of the modeling and design of experimental technologies.
This book, aimed at graduate students and researchers in physics, gives a thorough intro-
duction to the basic theory of quantum measurement and many of its important modern
applications. Measurement and control is explicitly treated in superconducting circuits and
optical and optomechanical systems, and methods for deriving the Hamiltonians of super-
conducting circuits are introduced in detail. Further applications covered include feedback
control, metrology, open systems and thermal environments, Maxwell’s demon, and the
quantum-to-classical transition.
KURT JACOBS is an Associate Professor of Physics at the University of Massachusetts
at Boston. He is a leading researcher in quantum measurement theory and feedback control,
and applications in nano-electromechanical systems. He is author of the textbook Stochas-
tic Processes for Physicists: Understanding Noisy Systems (Cambridge University Press,
2010).
QUANTUM MEASUREMENT THEORY
AND ITS APPLICATIONS

KURT JACOBS
University of Massachusetts at Boston
University Printing House, Cambridge CB2 8BS, United Kingdom

Cambridge University Press is part of the University of Cambridge.


It furthers the University’s mission by disseminating knowledge in the pursuit of
education, learning and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781107025486

c Kurt Jacobs 2014
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2014
Printed in the United Kingdom by Clays, St Ives plc
A catalogue record for this publication is available from the British Library
Library of Congress Cataloguing in Publication data
Jacobs, Kurt (Kurt Aaron), author.
Quantum measurement theory and its applications / Kurt Jacobs, University of Massachusetts at Boston.
pages cm
Includes bibliographical references and index.
ISBN 978-1-107-02548-6 (hardback)
1. Quantum measure theory. I. Title.
QC174.17.M4J33 2014
530.801–dc23 2014011297
ISBN 978-1-107-02548-6 Hardback
Cambridge University Press has no responsibility for the persistence or accuracy of
URLs for external or third-party internet websites referred to in this publication,
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.
To my mother, Sandra Jacobs, for many things.
Not least for the Nelson Southern Link Decision, a great triumph unsung.
Contents

Preface page xi
1 Quantum measurement theory 1
1.1 Introduction and overview 1
1.2 Classical measurement theory 4
1.2.1 Understanding Bayes’ theorem 6
1.2.2 Multiple measurements and Gaussian distributions 9
1.2.3 Prior states-of-knowledge and invariance 11
1.3 Quantum measurement theory 15
1.3.1 The measurement postulate 15
1.3.2 Quantum states-of-knowledge: density matrices 15
1.3.3 Quantum measurements 20
1.4 Understanding quantum measurements 28
1.4.1 Relationship to classical measurements 28
1.4.2 Measurements of observables and resolving power 30
1.4.3 A measurement of position 31
1.4.4 The polar decomposition: bare measurements and feedback 34
1.5 Describing measurements within unitary evolution 37
1.6 Inefficient measurements 39
1.7 Measurements on ensembles of states 40
2 Useful concepts from information theory 48
2.1 Quantifying information 48
2.1.1 The entropy 48
2.1.2 The mutual information 53
2.2 Quantifying uncertainty about a quantum system 55
2.2.1 The von Neumann entropy 55
2.2.2 Majorization and density matrices 58
2.2.3 Ensembles corresponding to a density matrix 61
2.3 Quantum measurements and information 63
2.3.1 Information-theoretic properties 64
2.3.2 Quantifying disturbance 72

vii
viii Contents

2.4 Distinguishing quantum states 78


2.5 Fidelity of quantum operations 82
3 Continuous measurement 90
3.1 Continuous measurements with Gaussian noise 90
3.1.1 Classical continuous measurements 90
3.1.2 Gaussian quantum continuous measurements 96
3.1.3 When the SME is the classical Kalman–Bucy filter 104
3.1.4 The power spectrum of the measurement record 106
3.2 Solving for the evolution: the linear form of the SME 113
3.2.1 The dynamics of measurement: diffusion gradients 117
3.2.2 Quantum jumps 119
3.2.3 Distinguishing quantum from classical 122
3.2.4 Continuous measurements on ensembles of systems 123
3.3 Measurements that count events: detecting photons 125
3.4 Homodyning: from counting to Gaussian noise 133
3.5 Continuous measurements with more exotic noise? 137
3.6 The Heisenberg picture: inputs, outputs, and spectra 137
3.7 Heisenberg-picture techniques for linear systems 145
3.7.1 Equations of motion for Gaussian states 145
3.7.2 Calculating the power spectrum of the measurement record 146
3.8 Parameter estimation: the hybrid master equation 150
3.8.1 An example: distinguishing two quantum states 152
4 Statistical mechanics, open systems, and measurement 160
4.1 Statistical mechanics 161
4.1.1 Thermodynamic entropy and the Boltzmann distribution 161
4.1.2 Entropy and information: Landauer’s erasure principle 171
4.1.3 Thermodynamics with measurements: Maxwell’s demon 175
4.2 Thermalization I: the origin of irreversibility 182
4.2.1 A new insight: the Boltzmann distribution from typicality 182
4.2.2 Hamiltonian typicality 185
4.3 Thermalization II: useful models 188
4.3.1 Weak damping: the Redfield master equation 189
4.3.2 Redfield equation for time-dependent or interacting systems 201
4.3.3 Baths and continuous measurements 202
4.3.4 Wavefunction “Monte Carlo” simulation methods 205
4.3.5 Strong damping: master equations and beyond 211
4.4 The quantum-to-classical transition 215
4.5 Irreversibility and the quantum measurement problem 222
5 Quantum feedback control 232
5.1 Introduction 232
5.2 Measurements versus coherent interactions 235
5.3 Explicit implementations of continuous-time feedback 239
Contents ix

5.3.1 Feedback via continuous measurements 239


5.3.2 Coherent feedback via unitary interactions 242
5.3.3 Coherent feedback via one-way fields 243
5.3.4 Mixing one-way fields with unitary interactions:
a coherent version of Markovian feedback 247
5.4 Feedback control via continuous measurements 250
5.4.1 Rapid purification protocols 250
5.4.2 Control via measurement back-action 256
5.4.3 Near-optimal feedback control for a single qubit? 260
5.4.4 Summary 266
5.5 Optimization 266
5.5.1 Bellman’s equation and the HJB equation 267
5.5.2 Optimal control for linear quantum systems 282
5.5.3 Optimal control for nonlinear quantum systems 290
6 Metrology 303
6.1 Metrology of single quantities 304
6.1.1 The Cramér–Rao bound 304
6.1.2 Optimizing the Cramér–Rao bound 305
6.1.3 Resources and limits to precision 307
6.1.4 Adaptive measurements 309
6.2 Metrology of signals 311
6.2.1 Quantum-mechanics-free subsystems 312
6.2.2 Oscillator-mediated force detection 314
7 Quantum mesoscopic systems I: circuits and measurements 323
7.1 Superconducting circuits 323
7.1.1 Procedure for obtaining the circuit Lagrangian (short method) 329
7.2 Resonance and the rotating-wave approximation 330
7.3 Superconducting harmonic oscillators 334
7.4 Superconducting nonlinear oscillators and qubits 336
7.4.1 The Josephson junction 336
7.4.2 The Cooper-pair box and the transmon 340
7.4.3 Coupling qubits to resonators 343
7.4.4 The RF-SQUID and flux qubits 344
7.5 Electromechanical systems 346
7.6 Optomechanical systems 351
7.7 Measuring mesoscopic systems 354
7.7.1 Amplifiers and continuous measurements 354
7.7.2 Translating between experiment and theory 361
7.7.3 Implementing a continuous measurement 361
7.7.4 Quantum transducers and nonlinear measurements 370
8 Quantum mesoscopic systems II: measurement and control 383
8.1 Open-loop control 383
x Contents

8.1.1 Fast state-swapping for oscillators 387


8.1.2 Preparing non-classical states 389
8.2 Measurement-based feedback control 396
8.2.1 Cooling using linear feedback control 397
8.2.2 Squeezing using linear feedback control 404
8.3 Coherent feedback control 408
8.3.1 The “resolved-sideband” cooling method 408
8.3.2 Resolved-sideband cooling via one-way fields 412
8.3.3 Optimal cooling and state-preparation 416
Appendix A The tensor product and partial trace 432
Appendix B A fast-track introduction for experimentalists 441
Appendix C A quick introduction to Ito calculus 448
Appendix D Operators for qubits and modes 451
Appendix E Dictionary of measurements 456
Appendix F Input–output theory 458
F.1 A mode of an optical or electrical cavity 458
F.2 The traveling-wave fields at x = 0: the input and output signals 462
F.3 The Heisenberg equations of motion for the system 463
F.4 A weakly damped oscillator 467
F.5 Sign conventions for input–output theory 467
F.6 The quantum noise equations for the system: Ito calculus 468
F.7 Obtaining the Redfield master equation 469
F.8 Spectrum of the measurement signal 470
Appendix G Various formulae and techniques 475
G.1 The relationship between Hz and s−1 , and writing decay rates in Hz 475
G.2 Position representation of a pure Gaussian state 475
G.3 The multivariate Gaussian distribution 476
G.4 The rotating-wave approximation (RWA) 476
G.5 Suppression of off-resonant transitions 477
G.6 Recursion relations for time-independent perturbation theory 478
G.7 Finding operator transformation, reordering, and splitting relations 479
G.8 The Haar measure 484
G.9 General form of the Kushner–Stratonovich equation 485
G.10 Obtaining steady states for linear open systems 486
Appendix H Some proofs and derivations 490
H.1 The Schumacher–Westmoreland–Wootters theorem 490
H.2 The operator-sum representation for quantum evolution 492
H.3 Derivation of the Wiseman–Milburn Markovian feedback SME 494
References 498
Index 539
Preface

I would like to thank here a number of people to whom I am indebted in one way or
another. To begin, there are five people from whose insights I especially benefited in my
formative years in physics. In order of appearance: Sze M. Tan, for teaching me classi-
cal measurement theory, and introducing me to information theory and thermodynamics;
Howard M. Wiseman, for teaching me about quantum measurement theory; Salman Habib,
for teaching me about open systems and classical chaos; Tanmoy Bhattacharya, for enlight-
enment on a great variety of topics, and especially for the insight that measurement is
driven by diffusion gradients; Gerard Jungman, for mathematical and physical insights,
and for introducing me to many beautiful curiosities.
I am very grateful to a number of people who helped directly to make this book what it
is: Os Vy, Luciano Silvestri, Benjamin Cruikshank, Alexandre Zagoskin, Gelo Tabia, Justin
Finn, Josh Combes, Tauno Palomaki, Andreas Nunnenkamp, and Sai Vinjanampathy who
read various chapters and provided valuable suggestions that improved the book. Xiaot-
ing Wang who derived Eqs. (G.43)–(G.48). Jason Ralph who enlightened me on some
superconductor facts that were strangely difficult to extract from the literature. Jason also
helped me with the brief history of superconductivity and quantum superconducting cir-
cuits in Chapter 7. Justin Guttermuth who saved our asses when we had a house to move
into, rooms to paint, a new baby, and I had this book to finish. My colleagues in the
UMass Boston physics department for their support — especially Maxim Olchanyi, Bala
Sundaram, Vanja Dunjco, and Steve Arnason. And last but not least, my wonderful wife
Jacqueline, who helped me with the figures and the cover, and put up with the long hours
this book required.
I apologize in advance for any errors and inadvertent omissions in this book. I would
be most grateful to be notified of any errors that you may find. I will include corrections
to all errors, as they are found, in an errata file on my website. I am very grateful to a
number of readers who sent me corrections for my previous book, Stochastic Processes
for Physicists, all of which have been made in the current printing. I was also able to
acknowledge these readers in the current printing, which due to their efforts now appears
to be largely error-free.
While I have endeavored to cite a fairly comprehensive and representative set of research
papers on the topics I have covered in this text, it is likely that I have omitted some that

xi
xii Preface

deserve to be included. If you discover that your important paper on topic X has been
missed, please send me the reference and I will be glad to correct this omission in any
further edition.
Finally, it is a pleasure to acknowledge an ARO MURI grant, W911NF-11-1-0268, that
was led by Daniel Lidar and administered by Harry Chang. This grant provided partial
support for a number of research projects during the writing of this book, and from which
it greatly benefited.
1
Quantum measurement theory

1.1 Introduction and overview


Much that is studied in quantum measurement theory, and thus in this book, is very dif-
ferent from that studied in its classical counterpart, Bayesian inference. While quantum
measurement theory is in one sense a minimal extension of the latter to quantum states,
quantum measurements cause dynamical changes that never need appear in a classical the-
ory. This has a multitude of ramifications, many of which you will find in this book. One
such consequence is a limit to the information that can be obtained about the state of a
quantum system, and this results in relationships between information gained and dynam-
ics induced that do not exist in classical systems. Because of the limits to information
extraction, choosing the right kinds of measurements becomes important in a way that it
never was for classical systems. Finding the best measurements for a given purpose, and
working out how to realize these measurements, is often nontrivial.
The relationship between information and disturbance impacts the control of quantum
systems using measurements and feedback, and we discuss this application in Chapter 5.
The ability to extract information about quantum systems is also important in using
quantum systems to make precise measurements of classical quantities such as time, accel-
eration, and the strengths of electric and magnetic fields. This subject is referred to as
quantum metrology, and we discuss it in Chapter 6.
The dynamics generated by quantum measurements is as rich as the usual unitary evo-
lution, and quite distinct from it. While the latter is linear, the former is both nonlinear and
random (stochastic). The nonlinear dynamics due to quantum measurement, as we will see
in Chapter 4, exhibits all the chaotic dynamics of nonlinear classical systems. This is made
more interesting by the fact that all processes involving measurements, as we will see in
Chapter 1, can be rewritten (almost) entirely as unitary and thus linear processes.
The role that measurements can play in thermodynamics is also an interesting one, espe-
cially because of the close connection between the entropy of statistical mechanics and the
entropy of information theory. While this is not a special feature of quantum, as opposed to
classical measurements, we spend some time on it in Chapter 4 both because of its funda-
mental nature, and because it is an application of measurement theory to the manipulation
of heat and work.

1
2 Quantum measurement theory

There is another important connection between quantum measurement theory and ther-
modynamics: thermal baths that induce thermalization and damping also carry away a
continuous stream of information from the system with which they interact. Because of
this, thermal baths can mediate continuous measurements, and there is a useful overlap
between the descriptions of the two. Further, the continuous measurements induced by a
system’s environment are sufficient to induce the quantum-to-classical transition, in which
the trajectories of classical dynamics emerge as a result of quantum mechanics.
It is now possible to construct and manipulate individual quantum systems in the lab-
oratory in an increasingly wide range of physical settings. Because of this, we complete
our coverage of measurement theory by applying it to the measurement and control of
a variety of concrete mesoscopic systems. We discuss nano-electromechanical systems,
in which superconducting circuits are coupled to nano-mechanical resonators, and opto-
mechanical systems in which superconducting circuit elements are replace by the modes
of optical resonators. In Chapter 7 we introduce these systems and show how to deter-
mine their Hamiltonians and the interactions between them. In this chapter we also explain
how the continuous quantum measurements introduced in Chapter 3 are realized in these
systems, and rephrase them in the language usually used by experimentalists, that of
amplifiers. In Chapter 8 we consider a number of examples in which the above sys-
tems are controlled, including realizations of the feedback control techniques described in
Chapter 5.
Despite its importance, quantum measurement theory has had an uneasy relationship
with physicists since the inception of quantum mechanics. For a long time the tendency
was to ignore its predictions as far as measurements on individual systems were concerned,
and consider only the predictions involved with averages of ensembles of systems. But as
experimental technology increased, and experimentalists were increasingly able not only to
prepare and measure individual quantum systems in well-defined states with few quanta but
to make repeated measurements on the same system, physicists were increasingly forced
to use the full predictions of measurement theory to explain observations.
The uneasiness associated with measurement theory stems from a philosophical issue,
and this has disturbed physicists enough that many still seem uncomfortable with using the
language of measurement in quantum mechanics, and assigning the knowledge gained to
an observer. In fact, as will be explained in due course, all processes that involve quantum
measurements can be described without them, with the possible exception of a single mea-
surement relegated to the end of the process. In using this second description the language
changes, but the two descriptions are completely equivalent. Thus the use of the language
of measurement theory is entirely justified. What is more, analysis that makes explicit use
of measurement theory is often vastly more convenient than its unitary equivalent, pro-
viding a strong practical reason for using it. This should not be taken to mean that the
philosophical problem has gone away – far from it. In fact, while the focus of this book is
applications, we feel that no education in quantum measurement theory is quite complete
without an understanding of “the measurement problem,” and because of this we include a
discussion of it at the end of Chapter 4.
1.1 Introduction and overview 3

We do not cover all the applications of quantum measurement theory, most notably those
to do with quantum information theory. The latter is now itself a large subfield of physics,
and there are a number of textbooks devoted to it. While our goal is to focus on applications
that are not usually covered in books on quantum information, given the fundamental con-
nection between measurement and information the two subjects are intertwined. A number
of concepts from information theory are therefore invaluable in broader applications of
measurement, and so we discuss information theory in Chapter 2. We do not cover “foun-
dational” questions in quantum theory, with the sole exception of the so-called “quantum
measurement problem.” Another topic within measurement theory that we do not discuss
is that termed “weak values,” not to be confused with weak measurements which we treat
in detail.
Quantum measurement theory is a weird and wonderful subject. I hope I have whetted
your appetite, and that you find some topics in this book as stimulating and fascinating as
I have.

A guide to this book


Different readers will want to take different pathways through the text. Many experimen-
talists, for example, may wish to learn how to describe weak or continuous measurements
in their experiments without wading through Chapters 1 and 3. Such readers can start with
Appendix B, and possibly Appendix C, from which a number of parts of the book are acces-
sible. Experimentalists working with mesoscopic systems can then proceed to Section 7.7.1
that describes measurements using the language of amplifiers, and Section 3.1.4 that shows
how to calculate the spectrum of a measurement of a linear quantum system using an equiv-
alent classical model. In fact, Section 7.7.1 can be read first if desired, as it does not require
the theoretical language used for continuous measurements. Section 5.4.3 is also accessible
from the above appendices, and deals with the feedback control of a single qubit. Exper-
imentalists working with quantum and atom optics could instead proceed to Sections 3.3
and 3.4 that deal with optical cavities, photon counting, and optical homodyne detection,
although these sections do also require Chapter 1 up to and including Section 1.3.3.
The first parts of Chapter 7 and much of Chapter 8, introducing mesoscopic circuits and
optomechanical systems, do not require any previous material. Sections 7.7.3 and 8.2 are
the exceptions, requiring either Appendix B or Chapters 1 and 3.
Students who want to learn the broad fundamentals of measurement theory should start
with Chapters 1 and 2, and proceed to Chapter 3 if they are interested in the details of
continuous measurements. Section 4.3 of Chapter 4 is also core reading on the relationship
between continuous measurements and open systems.
Having studied Chapters 1, 2, and 3 all of the remainder of the text is accessible and
topics can be selected on preference. We now list the latter sections that do not require
Chapter 3, since these can be included in a course of study that does not involve continuous
measurements: Sections 4.1.2 and 4.1.3 on Landauer’s erasure principle and Maxwell’s
demon; parts of Section 5.3 on feedback control; all but Section 6.2.2 in Chapter 6 on
4 Quantum measurement theory

metrology; Sections 7.1 through 7.6, and Sections 8.1 and 8.3 on controlling mesoscopic
systems.

Some assumptions and terminology


This text is for graduate physics students, so we assume that the reader is familiar with
quantum mechanics, the basics of probability theory, and various mathematical concepts
such as Fourier transforms and δ-functions. Everything that the reader needs to know about
probability theory and Fourier transforms can be found in Chapter 1 of reference [288] or
Chapter 4 of reference [592] and Chapter 1 of reference [591]. We also recommend Jaynes’
landmark book on probability theory and its role in reasoning and science [311].
In mathematics texts it is usual to denote a random variable as a capital letter, say X,
and the variable denoting one of the values it can take as the corresponding lower case
letter, x. This provides technical precision, since the concept of a random variable, and the
concept of one of the values it can take, are distinct. However, physicists tend to use the
same symbol to denote both things, because it causes no confusion in practice. This is the
style I prefer, so I use it here.
We physicists often use the term “probability distribution” as synonymous with “prob-
ability density,” whereas mathematicians use the former term to mean the anti-derivative
of the latter. Defining “probability distribution” to mean “probability density” is useful,
because standard English usage prevents us from using “probability density” for dis-
crete distributions. For this reason we will use the term probability distribution to mean
probability density.
To refer to a set of quantities, for example a set of probabilities pj indexed by j, in which
j takes integer values in some range, we will use the shorthand {pj }. The locations of the
definitions of all acronyms we use can be found in the index.

1.2 Classical measurement theory


Before learning quantum measurement theory, it is valuable to understand classical mea-
surement theory, as the two are very closely connected. Classical measurement theory,
also known as Bayesian statistical inference, tells us how our knowledge about the value
of some quantity, x, changes when we obtain a piece of data relating to x. To understand
how this works, we need to know first how to describe the knowledge we have regarding
x. We will assume that x is some variable that can take any real value, but more generally
it could be a vector of discrete or continuous variables. Since we need to make a mea-
surement to determine x, we must be uncertain about its value. Our knowledge about x is
therefore captured by a probability distribution, P(x), for the values of x. This probability
distribution tells us, based on the information currently available, the likelihood that x will
have various values, and overall how certain, or uncertain, we are about x. This distribution
is called our state-of-knowledge of x.
To determine how our state-of-knowledge changes when we obtain a piece of data y, we
have to know how y is related to x. To be able to describe measurements, this relationship
1.2 Classical measurement theory 5

must be probabilistic: if y were deterministically related to x (meaning that y was a func-


tion of x), then we could determine x precisely once we knew y, simply by inverting the
function. This is not what happens when we make measurements; measurements are never
perfectly precise. After we have made a measurement of x, giving a result y, we are always
left with some uncertainty about x.
Consider measuring the length of an object with a ruler. In this case the result of the
measurement is equal to the true length plus a random error. Thus, given the true length of
the object (the value of x) there is a probability distribution for the result y. The probability
distribution for y is peaked at y = x. Because this probability distribution for y depends
on (is conditional upon) the value of x, it is called a conditional probability distribution
for y, and is written P(y|x) (this is read as “P of y given x”). The conditional probability
for the measurement result (the data) y, given the value of the quantity to be measured,
x, completely defines the measurement. This conditional probability is determined by the
physical procedure used to make the measurement, and is referred to as the likelihood
function for the measurement. So if one wishes to obtain the likelihood function for a given
measurement process, one must first work out how the value of the thing to be measured
leads to the measurement result. If y is the measurement result, and x is the quantity to
be measured, then the process by which y leads to x can often be written in the form
y = f (x) + a random variable, where f is a deterministic function. The likelihood function
can then be determined directly from this relationship.
To determine how our state-of-knowledge regarding x, P(x), changes when we obtain the
value of y, we use the relationships between the joint probability for two random variables
x and y, P(x, y), and the conditional probabilities P(y|x) and P(x|y). These relationships are

P(x, y) = P(x|y)P(y) = P(y|x)P(x). (1.1)

Here P(y) is the probability distribution for y irrespective of the value of x (also called the
marginal distribution for y), and is given by

 ∞
P(y) = P(x, y) dx. (1.2)
−∞

Now P(x) is our state-of-knowledge of x prior to making the measurement, and it is there-
fore the probability density for x irrespective of the value of y. It is therefore also the
marginal probability for x, and thus given by

 ∞
P(x) = P(x, y) dy. (1.3)
−∞

While the relationships in Eq. (1.1) are fairly intuitive, they are explained further in, for
example, references [288] and [592].
6 Quantum measurement theory

Rearranging Eq. (1.1) we obtain the famous relationship known as Bayes’ theorem,
being

P(y|x)P(x)
P(x|y) = . (1.4)
P(y)

Upon examining Eq. (1.4) we will see that it tells us exactly how to change our state-of-
knowledge when we obtain the measurement result y. First note that since P(x|y) must be
normalized (that is, its integral over x must be unity), the value of P(y) on the bottom line
is completely determined by this normalization. We can therefore write Bayes’ theorem as
 ∞
P(y|x)P(x)
P(x|y) = , where N = P(y|x)P(x) dx = P(y). (1.5)
N −∞

We see that on the right-hand side (RHS) we have our state-of-knowledge of x before we
obtain the data y, and on the left-hand side (LHS) the probability for x given that value of
y. The LHS is therefore our state-of-knowledge after obtaining the value of y. In Eq. (1.5)
P(x) is called the prior probability, and P(x|y) the posterior probability. So Bayes’ theorem
tells us that to obtain our new state-of-knowledge once we have made our measurement:
we simply multiply our current state-of-knowledge by the likelihood function P(y|x), and
normalize the result. Note that the prior is simply the marginal (overall) distribution of
x. The relationship given by Eq. (1.5), Bayes’ theorem, is the fundamental theorem of
classical measurement theory.

1.2.1 Understanding Bayes’ theorem


While Bayes’ theorem (Eq. 1.5) is simple to derive, to obtain a direct understanding of
it requires a bit more work. To this end consider a measurement of a discrete variable, x,
in which x has only two values. We will call these values 0 and 1. Our measurement will
also have only two outcomes, which we will denote by y = 0 and y = 1. In our discussion
of Bayes’ theorem above, we assumed that x was continuous, so let us take a minute to
re-orient our thinking to discrete variables. In the present case our state-of-knowledge (our
prior), P(x), has only two values, being P(0) and P(1) (and, of course, P(0) +P(1) = 1). The
conditional probability, P(y|x), also has only two values for each value of x. If we make the
measurement and obtain the result y = 1 (for example), then Bayes’ theorem tells us that
our posterior is given by

P(1|x)P(x) P(1|x)P(x)
P(x|1) =   )P(x )
= . (1.6)
x P(1|x P(1|0)P(0) + P(1|1)P(1)

We now wish to obtain a better understanding of this expression.


To do so let us choose a specific likelihood function for the measurement. This likelihood
function is given in Table 1.1, and contains the parameter α. If α is close to unity, then the
two values of x give very different distributions for the measurement result y, and in this
1.2 Classical measurement theory 7

Table 1.1. The likelihood function for a


simple “two-outcome” measurement.
The body of the table gives the
probability for y given x.

P(y|x) x=0 x=1


y=0 α 1−α
y=1 1−α α

case we would expect the measurement to tell us a lot about x. Conversely, if α is close to
1/2, then the reverse is true. For the sake of concreteness let us choose 0. 5 < α < 1. With
this choice, if x = 0 then it is more likely that we will get the result y = 0, and if x = 1 it is
more likely that we will get y = 1.
Now assume that we initially know nothing about x, so that our prior state of knowledge
is P(0) = 0. 5 = P(1). What happens when we make the measurement and get the result
y = 1? Since our prior is flat, by which we mean that it does not change with x, it cancels
on the top and bottom lines of Bayes’ theorem, telling us that our posterior is simply the
likelihood function, normalized if necessary:

P(y|x) P(y|x)
P(x|y) =  
= . (1.7)
x P(y|x ) P(y|0) + P(y|1)

Our posterior is thus P(0|1) = 1 − α and P(1|1) = α. So it is now more likely that x = 1
than x = 0. This is indeed intuitively reasonable. The likelihood function tells us that if x
were to be 0 then it would be less likely that we would get the result 1, so it is reasonable
that since we obtained the result 1, it is more likely that x = 1 than that x = 0.
The above discussion shows that if the prior is uniform, Bayes’ theorem tells us that the
values of x that are more likely are those for which the result we have obtained is the more
likely outcome.
Now let us examine why we need to include the prior, in addition to the likelihood func-
tion, when calculating our posterior. This is clearest when the prior is strongly weighted
toward one value of x. Consider a situation in which the prior is P(0) = 0. 999 and
P(1) = 0. 001. This means that in the absence of any further data, on average x will only be
equal to 1 one time in a thousand cases. We now consider a slightly different two-outcome
measurement from the one above, as this will make the logic simple. The likelihood func-
tion for the new measurement is given in Table 1.2, and in words is as follows: if x = 1
then y is always equal to 1. If x = 0, then y = 0 with probability 0.999 and y = 1 only one
time in a thousand. This means that, if the prior is flat, upon obtaining the result y = 1 the
value of x would be equal to 1 approximately nine hundred and ninety-nine times out of a
8 Quantum measurement theory

Table 1.2. The likelihood function for


our second “two-outcome”
measurement. The body of the table
gives the probability for y given x.

P(y|x) x=0 x=1


y=0 0. 999 0
y=1 0. 001 1

thousand:

P(1|1) 0. 999
P(x = 1|y = 1) = = . (1.8)
P(1|0) + P(1|1) 1. 001

So what is the case when the prior is highly weighted toward x = 0, as described above?
Well, in this case x is equal to 1 only one time in a thousand. Now, if we get the result
y = 1 there are two possibilities. Either x = 1, which happens one time in one thousand, or
x = 0 and we got the result y = 1 anyway, which also happens approximately one time in a
thousand. (The precise figure is the frequency of x = 0 multiplied by the frequency of the
result y = 1 given x = 0, which is 0. 999 × 0. 001 ≈ 1/1001.) Thus the result y = 1 happens
approximately one time in 500, and half of these are due to x = 0, and half due to x = 1. So
when we obtain the result y = 1, there is only a 50% chance that x = 1. This is, of course,
exactly what Bayes’ theorem tells us; by multiplying the likelihood function that weights
x = 1 very highly, by the prior that weights x = 0 very highly, we obtain an approximately
flat posterior.
The example we have just considered applies directly to a real and very important sit-
uation: testing for the HIV virus. Each test is pretty reliable, giving a false positive only
about one time in a thousand. On that basis alone one might think that when a result comes
back positive, there is little reason to perform a follow-up test to confirm it. But this is very
wrong. Since very few patients have HIV, false positives come up just as frequently as real
positives. Thus, whenever a positive test result comes back, it is essential to do a follow-
up test to check that it is not a false positive. Bayesian inference, and thus measurement
theory, is therefore crucial in real-world problems.
To complete our discussion of Bayes’ theorem it is worth noting that our state-of-
knowledge does not necessarily become more certain when we make a measurement. To
take the example of the HIV test above, before we obtain the result of the test we are almost
certain that the patient is HIV negative, since the vast majority of patients are. However,
upon obtaining a positive test result, there is an approximately fifty-fifty chance that the
patient is HIV positive. Thus, after obtaining the measurement result, we are less certain
of the HIV status of the patient. Even in view of this, all classical measurements do have
the property that, upon making the measurement, we become more certain on average of
1.2 Classical measurement theory 9

the value of the measured quantity (where the average is taken over all the possible mea-
surement results). We will be able to make this statement precise once we have described
how to quantify the concept of information in Chapter 2.

1.2.2 Multiple measurements and Gaussian distributions


Multiple measurements
Having made a measurement of x, what happens when we make a second measurement?
We might expect that we simply repeat the process of multiplying our current state-of-
knowledge, now given by P(x|y), by the likelihood function for the new measurement. This
is correct so long as the results of the two measurements are independent, and is simple
to show as follows. Let us say that we make N measurements, and the results (the data)
obtained from these measurements are yi for i = 1, . . ., N. Bayes’ theorem tells us that

P(y1 , . . . , yN |x)P(x)
P(x|y1, . . . , yN ) = , (1.9)
N
∞
with N = −∞ P(y1 , . . . , yN |x)P(x) dx. The fact that all the measurement results are
independent means that

P(y1 , . . . , yN |x) = P(y1 |x)P(y2 |x) · · · P(yN |x), (1.10)

and with this Bayes’ theorem becomes

P(y1 , . . . , yN |x)P(x) P(yN |x) · · · P(y1 |x)P(x)


P(x|y1 , . . . , yN ) = =
N N
P(yN |x) P(y1 |x)P(x)
= ··· , (1.11)
NN N1

where Nn = P(yn ). So we see that each time we make another independent measurement we
update our state-of-knowledge by multiplying it by the likelihood function and normalizing
the result.

Pooling independent knowledge


It turns out that classical measurement theory provides us with a simple way to pool the
knowledge of two observers so long as their information has been obtained independently.
If two observers, A and B, have the respective states-of-knowledge P1 (x) and P2 (x) about
a quantity x, then we can write each of these as

P(yA |x)Pprior(x)
PA (x) = , (1.12)
NA
P(yB |x)Pprior (x)
PB (x) = , (1.13)
NB
10 Quantum measurement theory

where yA and yB are the vectors of data obtained by the respective observers. Since we
intend the data the observers have obtained to represent all the information they each have
about x, Pprior(x) is the prior that describes having no initial knowledge about x. The problem
of determining such a prior can be surprisingly difficult, and we discuss it further below. If
we assume for now that we know what Pprior is, and we choose the measure for integration
over x so that Pprior(x) = 1 (that is, we absorb Pprior into the measure), then an observer who
has access to the data of both A and B has the state-of-knowledge

P(yA |x)P(yB |x) PA (x)PB (x)


P(x) = = . (1.14)
N N

So all we have to do to pool the knowledge of two (or more) observers is to multiply their
states-of-knowledge together, and normalize the result.

The ubiquity of Gaussian measurements


Now consider applying classical measurement theory to the simple example discussed
above, that of measuring the length of an object with a ruler. To describe this measure-
ment we need to decide how the result that we get, y, depends upon the true length x. We
can think of y as being equal to the true length plus a random “error.” The error in a mea-
surement is often described well by a Gaussian distribution. The reason for this is that the
error is usually the result of random contributions from many different sources. When we
add many independent random variables together, the central limit theorem tells us that
the resulting random variable has an approximately Gaussian distribution. If the error in
our measurement of x is a Gaussian with mean zero and variance V, then the probability
distribution for y, given x, is a Gaussian centered at x:

1
e−(y−x) /(2V) .
2
P(y|x) = √ (1.15)
2πV

This is the likelihood function for the measurement. If we have absolutely no knowledge
of x before the measurement (never the case in reality, of course), then we can set P(x) = 1.
Our knowledge of x after making the measurement is then simply the likelihood function,
normalized so that it is a valid probability distribution over x for each value of y. In this
case the normalization is already correct, and so

1
e−(x−y) /(2V) .
2
P(x|y) = √ (1.16)
2πV

This tells us that the value of x is a Gaussian centered at y, with variance V, and thus that the
most likely value of x is y, and the expectation value of x is also equal to y. It is customary
to quote the error on a√measured quantity as twice the standard deviation. Thus we write
the value of x as y ± 2 V.
1.2 Classical measurement theory 11

The ubiquity of Gaussian states-of-knowledge


It is not only measurement errors that tend to be Gaussian, but also states-of-knowledge.
The reason for this is that when one makes many measurements of a given quantity, the
resulting state-of-knowledge is the result of multiplying together many likelihood func-
tions. One can show that even if these likelihood functions are not themselves Gaussian,
the result usually is. This is because there is a central limit theorem for multiplication that
follows from the usual central limit theorem for summing random variables. To see how
this works we first recall that if we have two independent random variables x and y, the
probability distribution for their sum, z = x + y, is the convolution of the distributions for x
and y:
 ∞
Pz (z) = Px (u)Py (z − u) du ≡ Px (x) ∗ Py (y). (1.17)
−∞

The central limit theorem states that so long as the variances of all the convolved distribu-
tions are finite, then as one convolves more distributions together, the closer the result is to
a Gaussian.
To obtain a similar result for multiplying a number of probability distributions together,
we take the Fourier transforms of these probability distributions. Multiplying two functions
together is the same as convolving their Fourier transforms. That is, the Fourier transform
of the product of two functions is the convolution of their Fourier transforms. So the central
limit theorem tells us that if we convolve the Fourier transforms of a set of distributions
together, then the result is close to a Gaussian. Since the inverse Fourier transform of
this approximate Gaussian is the product of all the probability distributions, and since
the Fourier transform of a Gaussian is also Gaussian, the product of the distributions is
approximately Gaussian.
So when we make multiple measurements to learn about some quantity, or when our
knowledge about a quantity is derived from many different sources, then our state-of-
knowledge will be approximately Gaussian. Just as Gaussian errors are common in nature,
Gaussian states-of-knowledge are also common.
Adding noise and making measurements can be regarded as opposite processes. The
more noise (random variables) we add to a quantity, the more uncertain it becomes. The
more measurements we make on a quantity, the more certain (on average) it becomes. It is
interesting to note that the mathematical operations that describe each of these processes,
convolution for adding noise, and multiplication for making measurements, are dual to
each other from the point of view of the Fourier transform.

1.2.3 Prior states-of-knowledge and invariance


Bayes’ theorem makes it clear how to update our state-of-knowledge whenever we obtain
new data. So long as we know the overall distribution for the unknown quantity (the prior),
12 Quantum measurement theory

then Bayes’ theorem is strictly correct and open to no interpretation. However, for many sit-
uations it is not clear what the prior should be, since the observer usually knows something
before making measurements, but it is not always possible to quantify this knowledge. A
sensible solution to this problem is to assume that the observer has no initial knowledge,
as this prevents any biasing of the final state-of-knowledge with unwarranted prejudice.
The problem is that it is not always easy to determine the prior that quantifies “no knowl-
edge.” Here we will discuss a useful method for obtaining such priors (and give references
for further methods), but before we do this it is worth noting that in many situations the
choice of prior is, in fact, unimportant. This is the case when we can obtain enough data
that the likelihood function is much sharper (has much less uncertainty) than any “reason-
able” prior. In this case the prior has very little influence on the final expectation value
of the measured quantity, and is therefore irrelevant. In this case we can ignore the prior
completely and set the posterior equal to the (correctly normalized) likelihood function for
the set of measurements.
The question of what it means to know nothing is actually quite subtle. In the case of
measuring the position of an object in space, it seems quite obvious that the prior should
be constant and extend from −∞ to ∞. This prior is not normalizable, but this does not
matter, since for any measurement that provides a reasonable amount of data, the likelihood
function, and thus the posterior, will be normalizable. On the contrary, the question of what
it means to know nothing about a positive quantity, such as the volume of water in a lake,
is not at all obvious. As will become clear, a prior that is constant on the interval (0, ∞) is
not the correct choice.
The problem of what it means to know nothing about a positive quantity is solved by
using the idea that the prior should be invariant under a transformation. The key is to
realize that if one knows nothing about the value of a positive quantity, λ, then multiplying
λ by any positive number should not change one’s state-of-knowledge. That is, if one is
completely ignorant of λ, then one must be ignorant of the overall scale of λ. There is only
one state-of-knowledge that is invariant under a scaling transformation, and that is

P(λ) = a/λ, (1.18)

where a is any dimensionless constant. This prior is also not normalizable, but this is fine
so long as the likelihood function is normalizable. The set of all scaling transformations,
which is the set of all positive real numbers, forms what is called a transitive group. It is
a group because it satisfies the properties of a group (e.g., the consecutive application of
two scaling transformations applied consecutively is another scaling transformation), and
the term “transitive” means that we can transform any value of our positive quantity to any
other value by using one of these transformations.
The powerful technique of identifying transformations, in particular transformations that
are transitive groups, over which our prior state-of-knowledge must be invariant, was devel-
oped by Jeffreys [312, 314, 313] and Jaynes [522, 311]. Returning to the problem of the
prior for the location of an object, we see that the flat prior is the one that is invariant under
1.2 Classical measurement theory 13

all translations, being the transitive group for that space. Transitive groups for a given
quantity must map all admissible values of the quantity to admissible values. That is why
the translation transformation is not appropriate for quantities that can only be positive.
An excellent example of the use of invariance for determining priors is given by Jaynes’
solution to “Bertrand’s problem,” and can be found in [311].
Using the notion of invariance under a transitive group it is simple to determine priors
for quantities that have such groups. The simplest case is a “six-sided” die. If the die
is perfectly symmetric, then our prior should be invariant under all permutations of the
six faces of the die. The result is a prior that assigns equal probability to all the faces.
A less trivial example is a prior over the possible states of an N-dimensional quantum
system. In this case the set of possibilities is an N-dimensional complex vector space,
and the transitive group is that of all the unitary transformations on this vector space. The
zero-knowledge prior is the one that is invariant under all unitaries.
Instead of specifying the prior in terms of a set of coordinates that parametrizes the
space of possibilities, one usually specifies the measure over which to integrate in terms
of these coordinates, and chooses this measure so that it is invariant under the required
transformations. With this choice for the measure, the invariant prior is merely equal to
unity. For unitary transformations the invariant measure is called the Haar measure, and is
given in Appendix G. The definition of a measure, using a minimum of technical jargon,
can be found in reference [288].
The method of transitive groups fails when the set of possibilities has no such group. The
interval [0, 1] is one such example, and applies when one is inferring a probability. More
sophisticated methods have been devised to address these situations. We will not discuss
these methods here, but the reader can find further details in [52, 479, 363].

Priors: a tricky example


For the sake of interest we pause to consider an example that shows the kinds of problem
that can arise when choosing priors. An apparent paradox occurs in the “two-envelopes
problem,” in which we are given two envelopes, one of which contains twice as much
money as the other. We are allowed to open only one of the envelopes, and then we get to
choose which envelope we will take home to spend. Before we open one of the envelopes,
clearly there is no reason to favor either envelope. Assume that we now open one of the
envelopes and find that it contains x dollars. We now know that the second envelope (the
one we haven’t opened) contains either x/2 dollars or 2x dollars. We conclude that if we
choose to keep the second envelope instead of the first, then we stand to lose x/2 dollars,
or to gain x dollars. We thus stand to gain more than we will lose by choosing the second
envelope, and so it appears that we should do so. There is clearly something wrong with this
reasoning, because if we had instead chosen to open the second envelope, then precisely the
same reasoning would now tell us to take home the first envelope. Because the reasoning
always tells us to do the same thing, we can apply the reasoning without actually opening
the envelope! Thus opening the envelope provides us with no information, and so it cannot
tell us which envelope we should take home. We have a paradox.
14 Quantum measurement theory

What is wrong with the above analysis is the hidden assumption we have made about
what it means to initially “know nothing” about the amount of money in the envelopes.
That is, we have arbitrarily chosen a prior, and this causes problems with our reasoning.
Now let us consider the problem with the prior made explicit. Let us call the envelopes A
and B, and say that envelope A contains x dollars, and envelope B contains 2x dollars. To
perform any reasoning about probabilities we must assign a probability distribution to x
to start with, so let us call this distribution Px (x). Now we pick one envelope at random.
Call the envelope we pick envelope 1, the amount of money we find in it y, and denote
by z the unknown amount of money in envelope 2. Since the probability that envelope 1
is envelope A is independent of which envelope is which, Bayes’ theorem tells us that the
relative probabilities that envelope 2 has amounts z = y/2 and z = 2y are determined by the
prior alone. Thus
Px (y/2) Px (2y)
Prob(z=y/2) = , Prob(z=2y) = . (1.19)
Px (y/2) + Px (2y) Px (y/2) + Px(2y)
If we chose to keep envelope 1 then we will have y dollars. The expectation value of the
amount of money we will have if we choose to keep envelope 2 instead is
(y/2)Px(y/2) + 2yPx(2y)
z = . (1.20)
Px (y/2) + Px(2y)
We will want to keep envelope 2 if z > y.
Now that we have made the prior explicit, we can see that our initial reasoning about
the problem corresponds to choosing the prior to be flat, from which it follows that the
probability that z = 2y is equal to the probability that z = y/2. But our discussion in the
previous section makes us suspicious of this prior. If x were merely an unknown positive
quantity it would have Px (x) ∝ 1/x. The present case is a little different, but we might expect
a somewhat similar prior. Let us first consider a prior in which the expectation value of x
is finite, as this allows us to see how opening envelope 1 provides us with information. If
you put in an explicit function for Px (x) such that x is finite, you will find the following.
If y is large enough then we are sufficiently sure that envelope 2 is the one with less money,
and we keep envelope 1. If y is small enough then it is sufficiently likely that envelope 2 is
the one with more money, and so we keep it instead. For every Px with finite x there is a
threshold value yc such that when y > yc we keep envelope 1, and if not we keep envelope 2.
Now let us return to the problem of what it might mean to know nothing about x prior
to opening an envelope. Let us denote the “zero-knowledge” distribution that we seek by
P0 (x). From our discussion in the previous section, P0 (x) should have no scale, since a
scale, such as a given value for x , means that we know something about x. Because
P0 (x) must be scale-free, when we open an envelope it cannot provide a number yc that
determines which envelope we should keep. The only remaining option is that when we
open an envelope we learn nothing about which envelope to keep. This in turn requires
that no matter what value we find for y, it will always be true that z = y, giving us
no preference for either envelope. You can determine for yourself from Eq. (1.20) that the
1.3 Quantum measurement theory 15

only prior distribution satisfying this requirement is P0 (x) = 1/ x. While this is not strictly
scale-invariant, it is scale-free as desired. The lesson is that you cannot just pick any prior
and assume that it captures what it means to know nothing.

1.3 Quantum measurement theory


1.3.1 The measurement postulate
The state of a quantum system, |ψ , is a vector in a complex vector space. If the set of
vectors {|n }, n = 0, . . . , N − 1 (where N may be ∞) is an orthonormal basis for this space,
then we can always express |ψ as

|ψ = cn |n (1.21)
n


for some complex coefficients cn , where n |cn |2 = 1. In analyzing quantum measure-
ments we will often assume that the system is finite-dimensional. This simplifies the
analysis while losing very little in the way of generality: since all systems ultimately have
bounded energy, any real system can always be approximated arbitrarily well using a finite
number of states.
The basis of quantum measurement theory is the following postulate: We can choose
any basis, and look to see which one of these basis states the system is in. When we do
so, we will find the system to be in one of these basis states, even though it may have
been in any state |ψ before the measurement. Which basis state we find is random. If
the system is initially in the state |ψ then the probability that we will find state |n is
given by |cn |2 . A measurement like this, for which the result is one of a set of basis states,
is called a von Neumann measurement. Before we use this basic measurement postulate
to derive quantum measurement theory (which has, necessarily, a very similar structure
to classical measurement theory), we need to know how to describe states-of-knowledge
about quantum systems.

1.3.2 Quantum states-of-knowledge: density matrices


Quantum states already contain within them probabilities – once we express a quantum
state in some basis, the coefficients for that basis determine the probabilities for finding
the system in those basis states. However, these probabilities are not enough to describe
all the possible states-of-knowledge that we might have about a quantum system. Even
though a system may actually be in a given state |ψ , we may not know what this state
is. In general then, our state-of-knowledge about a quantum system can be described by
a probability density over all the possible states |ψ . We might refer to this probability
density as describing our classical uncertainty about the system, and the coefficients cn as
describing the quantum uncertainty inherent in a given quantum state vector.
16 Quantum measurement theory

While a complete state-of-knowledge of a quantum system is a probability density over


all the possible states |ψ , for most purposes one can use a more compact representation
of this state-of-knowledge. This compact representation is called the density matrix. It was
devised independently by von Neumann [642] and Landau [351] in 1927.
To obtain the density matrix formalism, we first recall that the expectation value of a
physical observable, for a system in state |ψ , is given by

X = ψ|X|ψ = Tr[X|ψ ψ|], (1.22)

where X is the operator corresponding to that observable. So while the expectation value
of an observable is quadratic in the vector |ψ , it is linear in the operator (matrix) |ψ ψ|.
Expectation values are also linear in classical probabilities. If our state-of-knowledge is a
probability distribution over the M states {|φm }, where the probability of the system being
in the state labeled by m is pm , then the expectation value of X is
 
X = pm φm |X|φm = pm Tr[X|φm φm |] = Tr [Xρ] , (1.23)
m m

where

ρ≡ pm |φm φm |. (1.24)
m

So we see that the matrix ρ is sufficient to calculate the expectation value of any operator.
This is precisely because expectation values are a linear function of each of the matrices
|φm φm |, and thus all we have to do to include classical probabilities is to weight each of
the |φm φm | by its classical probability.
We will see below that ρ is also sufficient to calculate the results of any measurement
performed on the system. Note that it is only the results of measurements on quantum sys-
tems that determine events in the macroscopic world. This is because it is measurements
that determine a set of mutually exclusive outcomes, and since the macroscopic world
behaves classically, it is only sets of mutually exclusive possibilities that appear in it. (See
also the discussions in Sections 1.5, 4.5, and 4.4.) We can conclude that questions about the
future behavior of a quantum system are ultimately questions about the results of measure-
ments. Thus ρ is sufficient to fully characterize the future behavior of a quantum system,
and this is why it is a sufficient description of one’s state-of-knowledge for many purposes.
In the absence of measurements the evolution of ρ is very simple. This evolution is
given by evolving each of its component states. Since the evolution of a quantum state |ψ
is given by applying to it a unitary operator, U(t), the evolution of ρ is
 
ρ(t) = pm |φm (t) φm (t)| = pm U(t)|φm (0) φm (0)|U †(t)
m m

= U(t)ρ(0)U (t).†
(1.25)
1.3 Quantum measurement theory 17

The density matrix, ρ, is therefore a simple and compact way to describe our state-of-
knowledge of a quantum system. The term density matrix comes from its role as a quantum
equivalent of a probability density for classical systems, and also because its diagonal
elements constitute a probability distribution. Specifically, if we express our states in the
basis {|j }, so that

|φm = cjm |j , (1.26)
j

then the elements of ρ are



ρjk = j|ρ|k = pm cjm c∗km . (1.27)
m

So the jth diagonal element of ρ is



ρjj = j|ρ|j = pm |cjm |2 . (1.28)
m

Since |cjm |2 is the (conditional) probability of finding the system in the state |j given
that it is initially in the state |φm , ρjj is the total probability of finding the system in the
state |j .
If the density matrix consists of only a single state, so that ρ = |ψ ψ|, then it is
described as being pure. If it consists of a sum over more than one state, then it is described
as being mixed, and the system is said to be in a statistical mixture, or simply a mixture, of

states. As an example, if the system is in the pure state |ψ = j cj |j , then

ρ = |ψ ψ| = cj c∗k |j k| (1.29)
jk

and the system is said to be in a superposition of the basis states |j . If the system is in the
state

ρ= pj |j j|, (1.30)
j

then it is said to be in a mixture of the basis states |j . In the latter case, we can think of the
system as really being in one of the basis states, and the density matrix merely describes
the fact that we do not know which one (although some may be more likely than others).
On the other hand, if the system is in a superposition of two states, then we cannot describe
the system as really being in one state or the other, and we discuss this further in the next
section.
The density matrix is said to be completely mixed if it is equal to I/N, where N is the
dimension of the system. If this is the case, then each of its eigenstates are equally likely.
Further, every state in the Hilbert space (the complex vector space) is equally likely, since
18 Quantum measurement theory

ψ|(I/N)|ψ = 1/N for every state |ψ . In this case we have no information about the
system.
It is not difficult to show that the density matrix is Hermitian (ρ = ρ † ), and has unit trace
(Tr[ρ] = 1). Since the density matrix is Hermitian it has a complete set of eigenvectors
(eigenstates), and can always be diagonalized (that is, written in the basis of its eigen-
states so that it is diagonal). If the eigenvalues are λj , j = 1, . . . , N, and the corresponding
eigenvectors are |λj , the density matrix is


ρ= λj |λj λj |. (1.31)
j

So a system is never in a superposition of the eigenstates of the density matrix – it is


either in a mixture of the eigenstates, or in a single eigenstate, in which case the density
matrix is pure. Note that the eigenvalues are the probabilities of finding the system in the
corresponding eigenstates.

The difference between a superposition and a mixture


In case the physical distinction between a mixture of two or more quantum states, and a
superposition of those states is not yet clear to you from your physics training to date, we
pause here to explain it. A mixture of two states describes a situation in which a system
really is in one of these two states, and we merely do not know which state this is. On
the contrary, when a system is in a superposition of two states, it is definitely not in either
of these states. The truth of this statement, while somewhat incredible, is easy to show.
Consider the famous double-slit experiment, in which an electron is fired toward a metal
sheet with two slits in it. The slits are very close to each other, and the electron is fired at
the sheet so that it has a high probability of passing through the slits and thus reaching the
other side. After the electron has passed through the slits, it is then detected on a screen
some distance from the metal sheet. Consider what happens if one slit is blocked up. The
electron passes through the open slit, and is detected on the screen. If the state of the
electron after passing through the slit and reaching the screen is |ψ1 , then the probability
density that the electron is detected on the screen at position x is P1 (x) = | ψ1 |x |2 , where
|x denotes the state in which the electron is on the screen at position x. Similarly, if the
state of the electron after having passed through the other slit is |ψ2 , then the probability
distribution for the electron to land on the screen at position x is P2 (x) = | ψ2 |x |2 .
Now we open both slits and fire the electron through them. If the electron really goes
through one slit or the other, then we can immediately determine the probability distribu-
tion for the electron on the screen. Each electron that goes through the slits can be assigned
as having gone through slit 1 or slit 2. Those going through slit 1 have probability distribu-
tion P1 (x), and those going through slit 2 have distribution P2 (x). If half of the electrons go
through each slit (actually, some of the electrons will not pass through the slits, and instead
scatter back from the metal sheet, but we ignore those) then the distribution of electrons on
1.3 Quantum measurement theory 19

the screen will be


1 1
P(x) = P1 (x) + P2 (x). (1.32)
2 2
This is the probability distribution we get if we assume that after going through the slits, the
electron is in a mixture of states |ψ1 and |ψ2 . In this case the state of the electron when it
reaches the screen is ρ = (1/2)(|ψ1 ψ1 | + |ψ2 ψ2 |), and the probability distribution for
being in state |x is
1 1
Pmix (x) = Tr[|x x|ρ] = P1 (x) + P2 (x) = P(x). (1.33)
2 2
In fact, after the electron passes through the slits, it is not in a mixture of states 1 and
2. To see this, note that the passage of the electron through the slits is described simply by
evolving the wavefunction of the electron in the potential formed by the barrier (the metal
sheet) with the slits in it. So after the electron passes through the slits, it is still described by
a single wavefunction. The electron is therefore in a pure state after going through the slits.
So what is this pure state? Well, it is too complex for us to determine the exact solution,
but it turns out that far from the slits the electron’s wavefunction is approximately given
by the sum (superposition) of the states it would have been in if√either of the slits had been
blocked.1 Thus the state of the electron is |ψ = (|ψ1 + |ψ2 )/ 2. If we now calculate the
probability density for the electron to be at position x on the screen, we immediately see
the difference between this “superposition” state and the mixture above. The probability
density for being in state |x is
1 1
Psup (x) = | ψ|x |2 = P1 (x) + P2 (x) + Re[ x|ψ1 ψ2 |x ]
2 2
= P(x). (1.34)

Since Psup (x) is not equal to P(x), and P(x) is a necessary result of the electron really being
in state |ψ1 or |ψ2 , a superposition cannot correspond to the electron really being in
either of these two states. For want of better terminology, one usually refers to a system
that is in a superposition of two states as being in both states at once, since this seems more
reasonable than saying that it is in neither.

A useful theorem: the most likely state


The following theorem answers the question, what is the most likely pure state of a system
for a given density matrix ρ? Note that the probability of finding the system in state |ψ is
given by

P(|ψ ) = ψ|ρ|ψ = Tr[|ψ ψ|ρ]. (1.35)

1 Note that this is also what happens to water waves if they encounter a barrier with two small openings. While
Schrödinger’s wave-equation is not the same as that for water waves, all wave-equations have this feature in
common.
20 Quantum measurement theory

Theorem 1 If the state of a quantum system is ρ, then the most likely pure state is the
eigenstate of ρ with the largest eigenvalue. This eigenvalue is the probability that the
system will be found in that state.

Proof We need to find the state |ψ that maximizes ψ|ρ|ψ . Let us denote the eigenbasis
of ρ as {|λn }. Writing the density matrix in terms of its eigenbasis (Eq. 1.31), and writing

|ψ = n cn |λn , we have

ψ|ρ|ψ = |cn |2 λn . (1.36)
n

Since n |cn |2 = 1, the above expression is the average of the eigenvalues over the prob-
ability distribution given by pn = |cn |2 . Thus to maximize it we must place all of the
probability on the largest eigenvalue. If we denote this eigenvalue by λj , then this means
that |cn |2 = δnj , and therefore |ψ = |λj . 

A density matrix consists of a set of states, {|ψn }, in which each state has an associated
probability, pn . Such a set of states and probabilities is called an ensemble. In fact, there
is no reason why every state in an ensemble must be a pure state – a set of states {ρn }
with associated probabilities {pn } is also an ensemble, and generates a density matrix ρ =

n pn ρn . If an ensemble contains only pure states, then it is called a pure-state ensemble.
We will often write an ensemble for the set of states {ρn }, with associated probabilities
{pn }, as {ρn , pn }.
Since every pure-state ensemble corresponds to some density matrix, a natural question
to ask is, for a given density matrix, can one determine all the possible pure-state ensembles
that correspond to it? It turns out that not only is the answer yes, but the collection of
ensembles can be characterized very simply. We leave these facts to Chapter 2, however,
because one of the characterizations uses the concept of majorization. This is a very simple
concept which captures the intuitive notion of uncertainty, and it will be defined in the next
chapter.

1.3.3 Quantum measurements


In Section 1.3.1 we introduced the measurement postulate, and defined the concept of a von
Neumann measurement. Given a state-of-knowledge, ρ, we can describe a von Neumann
measurement of the system in the following way. Given a basis {|n }, we define the set of
projection operators

Pn ≡ |n n|. (1.37)

Now, if we make a von Neumann measurement in the basis {|n }, then after the measure-
ment we will find the system to be in one of these basis states, say |m , with probability
pm = m|ρ|m . To reduce an arbitrary initial density matrix, ρ, to the final density matrix
1.3 Quantum measurement theory 21

|m m|, we must sandwich ρ between two copies of the projector Pm , and normalize the
result. That is, the final state is given by

Pm ρPm
ρ̃m = |m m| = . (1.38)
Tr [Pm ρPm ]

We include the tilde on top of ρm to indicate that it is a state that results from a measure-
ment. This convention will help to make our expressions clearer later on. Note that the
expression on the bottom line is actually the probability pm :

 
pm = m|ρ|m = Tr [Pm ρPm ] = Tr (Pm )2 ρ = Tr [Pm ρ] , (1.39)

where we have used the cyclic property of the trace: Tr[ABC] = Tr[CAB] = Tr[BCA]. The
reason why we chose to write the expression for the final state using the projectors Pn is
that this form will make it clear later on that von Neumann measurements are a special
case of the more general measurements that will be derived below.
Note that a von Neumann measurement induces a nonlinear change in the density. The
operation of applying the projector Pn is linear, since it is matrix multiplication, but the
operation of rescaling the density matrix so that the final state is normalized is not linear.
Recall that an operation Q( · ) on a vector (a set of numbers) is linear if Q(αv + βu) =
αQ(v) +βQ(u), for all real numbers α, β and vectors v and u. In our case the density matrix
is the “vector,” and it is not hard to see that the above linearity condition is not satisfied by
von Neumann measurements. This nonlinearity is a significant departure from the linear
evolution of Schrödinger’s equation. In classical measurement theory measurements also
change the observer’s state-of-knowledge in a nonlinear way, but they do not affect the
dynamics of the system itself. In Section 1.4.3 we will see that quantum measurements do
cause dynamical changes. Because of this they cause the dynamics of quantum systems to
be nonlinear.
Von Neumann measurements are certainly not the only kinds of measurements one can
make on a quantum system. To gain an understanding of quantum measurements we need
to examine different kinds of measurements, and what kind of information they extract,
and how they affect the systems being measured. We could start by considering a number
of physical examples, and follow this by deriving a general formalism that describes them
all. We have chosen instead to do it the other way around, as we feel that this is quicker.
But instead of following this path, if you prefer, you can skip the derivation and go straight
to Theorem 2 (p. 24), which presents the concise mathematical description of (almost)
all measurements. From Theorem 2 you can then go to Section 1.4 which explains, via
some examples, how this description relates to classical measurements and the classical
likelihood function, and why quantum measurements induce a dynamical change in the
system being measured.
22 Quantum measurement theory

Deriving general quantum measurements from the measurement postulate


Fortunately we can derive all possible measurements that can be made on a quantum system
from von Neumann measurements. To do this we consider a quantum system we wish to
measure, which we will call the target, and a second system we will call the probe. We will
denote the dimension of the target system by N, and that of the probe by M. The probe is
prepared in some state independently of the target, and then the two systems are allowed to
interact. After the interaction we perform a von Neumann measurement on the probe. As a
result of the interaction, the probe is correlated with the target, and so the measurement on
the probe provides us with information about the target. This procedure gives us a much
more general kind of measurement on the target.
We will denote the basis in which we measure the probe system as {|n }, n = 0, . . ., M −1.
Since the interaction between the systems may be any unitary operator that acts in the joint
space of both systems, we can start the probe in the state |0 without loss of generality. The
combined initial state of the target and probe is therefore

ρcomb = |0 0| ⊗ ρ, (1.40)

where ρ is the initial state of the target. We will always write the state of the probe on the
left-hand side of the tensor product, and that of the target on the right. To understand the
following it is essential to be familiar with how a composite quantum system, consisting of
two subsystems, is described by the tensor product of the spaces of these two subsystems.
If you are not familiar with how to combine two quantum systems using the tensor product,
denoted by ‘⊗’, the details are presented in Appendix A.

The form of the target/probe interaction operator


The interaction between the target and probe is described by some unitary operator U that
acts in the space of both systems. The subsequent von Neumann measurement on the probe
is described by a projection onto one of the probe states |n , followed by a normalization.
In analyzing the measurement process, it will make things clearer if we first know some
things about the structure of U.
Since U acts in the tensor-product space, it can be written as the matrix

U= unk,n k |n |sk n | sk |, (1.41)
nn kk

where |sk are a set of basis states for the target, and unk,n k are the matrix elements of U.
For each pair of probe states |n and |n , there is a sub-block of U that acts in the space of
the target. We can alternatively write U in terms of these sub-blocks as

U= |n n | ⊗ Ann , (1.42)
nn
1.3 Quantum measurement theory 23

where the operators Ann are given by



Ann = unk,n k |sk sk |. (1.43)
kk

The relationship between the matrices Ann and U is most clear when they are written as
matrices, rather than in Dirac notation. This relationship is
⎛ ⎞
A00 A01 ··· A0M
⎜ A10 A11 ··· A1M ⎟
⎜ ⎟
U=⎜ .. .. .. .. ⎟ (1.44)
⎝ . . . . ⎠
AM0 AM1 ··· AMM

Recall that the system has dimension N and so each matrix Ann is N-dimensional. In what
follows we will use the notation An as shorthand for the matrices An0 . These matrices
constitute the first column of the sub-blocks of U. Let us now denote the M × M sub-
blocks of the matrix U † U by Bnn . Because U is unitary, U † U = I. This means that each of
the sub-blocks Bnn (those on the diagonal of U † U) must be equal to the identity. Further,
the sub-block B00 is the product of the first column of the sub-blocks of U, multiplied by
the first row of the sub-blocks of U † . This gives us immediately the relationship

I = B00 = A†n An . (1.45)
n

The final important fact about the structure of U is that, apart from the restriction given
by Eq. (1.45), we can choose the sub-blocks An ≡ A0n to be any set of operators. This
is because Eq. (1.45) is alone sufficient to ensure that there exists a unitary U with the
operators An as its first column of sub-blocks. To see why this is true, note that the operators
An , being the first column of sub-blocks, constitute the first N columns of U, where N is
the dimension of the target. To show that Eq. (1.45) ensures that U is unitary, we need
merely show that Eq. (1.45) implies that these first N columns are mutually orthonormal.
So long as this is the case, we can always choose the remaining columns of U so that all
the columns of U form an orthonormal basis, making U unitary. This task is not especially
difficult, and we leave it as an exercise.

Using a probe system to make a measurement


We now apply the unitary interaction U to the initial state of the two systems, and then
project the probe system onto state |n . The final state of the combined system is then

σ̌ = (|n n| ⊗ I)U(|0 0| ⊗ ρ)U †(|n n| ⊗ I). (1.46)

We have placed a caron ( ˇ ) over the final state, σ , to indicate that it is not necessarily
normalized, due to the fact that we have projected the system onto a subspace. Writing U
24 Quantum measurement theory

in terms of its sub-blocks, we immediately obtain a simple form for the final state:

σ̌ = |n n| ⊗ An ρA†n . (1.47)

The final state of the target is therefore

An ρA†n
ρ̃n = , (1.48)
Tr[A†n An ρ]

where we have normalized it by dividing by its trace.


Now that we have a simple form for the final state for each outcome n, we need to know
the probability, pn , that we get each of these outcomes. This is given by the probability
of finding the probe in state |n after the interaction U, and this is in turn the sum of all
the diagonal elements of the density matrix that correspond to the probe being in state |n .
The density matrix after the interaction is given by ρU = U(|0 0| ⊗ ρ)U †. The sum of its
diagonal elements that correspond to the probe state |n is therefore

pn = Tr[ (|n n| ⊗ I)ρU (|n n| ⊗ I) ]


= Tr[ (|n n| ⊗ I) U(|0 0| ⊗ ρ)U † (|n n| ⊗ I) ]
= Tr[σ̌ ] = Tr[|n n| ⊗ An ρA†n ] = Tr[|n n|] Tr[An ρA†n ]
= Tr[A†n An ρ]. (1.49)

We now have a complete description of what happens to a quantum system under a gen-
eral measurement process. Further, we know that every set of operators {An } that satisfies
Eq. (1.45) describes a measurement that can be realized by using an interaction with a
probe system. For emphasis we state this result as a theorem.

Theorem 2 The fundamental theorem of quantum measurement: Every set of opera-



tors {An }, n = 1, . . . , N, that satisfies n A†n An = I, describes a possible measurement on a
quantum system, where the measurement has n possible outcomes labeled by n. If ρ is the
state of the system before the measurement, ρ̃n is the state of the system upon obtaining
measurement result n, and pn is the probability of obtaining result n, then

An ρA†n
ρ̃n = , (1.50)
pn
pn = Tr[A†n An ρ]. (1.51)

We refer to the operators {An } as the measurement operators for the measurement.

In fact, the form given for quantum measurements in the above theorem, in which
each outcome is associated with a single measurement operator, does not cover all pos-
sible measurements. But the additions required to describe every possible measurement
1.3 Quantum measurement theory 25

are purely classical. We need merely include classical uncertainty about the result of
the measurement, or about what measurement was performed. Because of this all the
purely quantum features of any measurement are captured by a set of operators {An }, with
 †
n An An = I. We show how to add additional classical uncertainty to quantum measure-
ments in Section 1.6 below. Measurements that have no additional classical uncertainty are
referred to as efficient measurements, and we explain the reason for this terminology in
Section 1.6.
Theorem 2 provides a simple mathematical description of quantum measurements, but
we don’t yet know the physical meaning of the operators {An }. That is, we don’t know how
the form of a particular operator An is related to the physical properties of the measurement.
We turn to this question in Section 1.4, but before we do, we discuss a few useful concepts
and topics.

The partial inner product


We discuss in detail how to treat systems that consist of more than one subsystem in
Appendix A, and we have already used some of this machinery in our analysis above.
Nevertheless we feel that it is worth pausing to discuss one particular part of this machin-
ery here. Let us denote the basis states of two subsystems A and B by {|n A } and {|n B },
respectively. In our analysis above we used a projector that projected out a state of one
subsystem only, namely Pn ≡ |n A n|A ⊗ IB , where IB is the identity operator for system

B. If the joint state of the two systems is |ψ = mk αmk |m A |k B , then the action of this
partial projector is to leave only the joint states that contain |n A . That is,
 
Pn |ψ = |n A n|A ⊗ IB αmk |m A |k B = αnk |n A |k B . (1.52)
mk k

It is useful to define a “partial inner product” that is closely related to the partial projector
P. We define the inner product n|A |ψ as
 
n|A |ψ = n|A αmk |m A |k B ≡ αmk n|m A |k B , (1.53)
mk mk

which is a state of system B alone. The partial inner product takes the inner product with
the states of one system, and leaves the states of the other alone. One way to define this
inner product is to say that when we have an expression of the form a|A |ψ , where a|A is
a state of one subsystem, and |ψ is a joint state of both systems, then a|A is shorthand for

the unnormalized vector k a|A k|B .
To make clearer the relationship between the projector Pn and the partial inner product,
denote the state of system B that is produced by the partial inner product as |ψn B =
n|A |ψ . If we apply Pn = |n A n|A ⊗ IB to |ψ then this contains within it the application
of the partial inner product:

Pn |ψ = |n A n|A ⊗ IB |ψ = |n A ⊗ |ψn B . (1.54)


26 Quantum measurement theory

With this definition of the partial inner product, if we have a joint operator X of the two
subsystems, then n|A X|n A is a sub-block of the matrix X that operates only on system B.
If you are not familiar with this fact then you can show it as an exercise.
We also note that the partial trace, described in detail in Appendix A, can be written
in terms of the partial inner product. If the joint density matrix is ρ, the partial trace over

system A is ρb = n n|A ρ|n A .

Discarding a system
Let us say we have a quantum system S that is initially in a state ρ. It then interacts with
a second system B over which we have no control, and to which we have no access. What
is our state-of-knowledge of S following the interaction? One way to answer this question
is to realize that another observer could measure B, just as we measured the probe in the
scenario above. Since we do not know the outcome of this measurement, our state-of-
knowledge is given by averaging over all the outcomes of the measurement on B. It is
therefore
  
ρ̃ = pn ρ̃n = An ρA†n = n|U(|0 0| ⊗ ρ)U †|n
n n n

≡ TrB [U(|0 0| ⊗ ρ)U ], (1.55)

where we have used the partial-inner-product notation, |n denotes a basis state of the
probe, |0 0| is the initial state of the probe, and TrB denotes the partial trace over system
B, which is defined and discussed in detail in Appendix A.
So our state-of-knowledge after the interaction is given by the partial trace over the sec-
ond system. In fact, the partial trace is completely independent of the measurement that
the observer makes on the second system. Of course, this is crucial, because we cannot
know what measurement the observer will make, so our state-of-knowledge must be inde-
pendent of this measurement. The invariance of the partial trace under operations on the
traced system is proved in Appendix A.

The operator-sum representation


Consider an evolution of a quantum system that takes every initial density matrix, ρ,
to a single final density matrix L(ρ). This is a deterministic evolution in the sense that
there is only one final density matrix for every initial density matrix. Such an evolution
is often referred to as a “trace-preserving” quantum operation, because there is no need
to normalize the final density matrix as is required when there are multiple measurement
outcomes.
There are certain properties that a deterministic map from initial to final quantum states
must have. It must be linear, and every final density matrix must be positive. We will call
a map that produces only positive density matrices a positive map. In fact this positivity
condition contains a subtlety. Since a quantum system might initially be in joint state with
a second quantum system, the evolution of the first system, by itself, must be such that the
1.3 Quantum measurement theory 27

evolution of the joint system is positive. To obtain the map for the first system we start
with the map for the joint system, and take the trace over the second system. The subtlety
is that not all positive linear maps for the first system are derivable from positive linear
maps for the joint system. Only the subset of positive maps that can be derived in this way
are valid evolutions. These maps are referred to as being completely positive. We note that
the definition of a “quantum operation” is any completely positive linear map that may or
may not preserve the trace of the density matrix.
Every realizable quantum evolution, L(ρ), can be written in the form


L(ρ) = An ρA†n , (1.56)
n


where n A†n An = I. This follows immediately from our analysis of a quantum measure-
ment above. The evolution of every quantum system is ultimately the result of a unitary
evolution on a larger system of which the system is a subsystem, and the evolution for the
subsystem is given by tracing out the rest of the larger system. This trace can be written
as the average over the results of a von Neumann measurement on the rest of the larger
system, and this results in the form in Eq. (1.56), being the average over the results of a
measurement on the system.
The form given in Eq. (1.56) is called an “operator-sum” representation for a quantum
evolution. The result that any linear map is completely positive if and only if it can be
written in the operator-sum representation was first proved by Choi [124]. Another useful
fact that is not obvious is that the evolution of an N-dimensional quantum system can
always be written using no more than N 2 operators {An }. This was first shown by Hellwig
and Kraus [247, 345], and the measurement operators {An } are often referred to as Kraus
operators. Schumacher proved this result in an elegant and more direct way by defining a
linear map using the partial inner product, and we give his proof in Appendix H.

Positive-operator valued measures (POVMs)


General measurements on quantum systems are often referred to in the literature as
“positive-operator valued measures.” While the usefulness of this term is unclear, and
should probably be replaced simply with “quantum measurement,” its origin is as follows.
A measure is a map that associates a number with all subsets of a given set. A positive-
operator valued measure is thus a map that associates a positive operator with every subset.
For the purposes of quantum measurements, the set in question is the set of outcomes of
the measurement. Let us label the outcomes by n. If we pick some subset of the outcomes,
call it M, then the probability that we get an outcome from this subset is given by

  
  
Prob(n ∈ M) = pn = Tr[A†n An ρ] = Tr A†n An ρ . (1.57)
n∈M n∈M n∈M
28 Quantum measurement theory

Since the operators A†n An are all positive, the operator in the parentheses is also positive.
This is the positive operator associated with the set M, and is thus the positive operator of
the “positive-operator valued measure.”

1.4 Understanding quantum measurements


1.4.1 Relationship to classical measurements
While we have shown that a measurement on a quantum system can be described by a
set of operators {An }, we do not yet understand how the form of these operators relates to
the nature of the measurement. We can gain such an understanding by examining various
key forms that the An can take, and the measurements to which they correspond. It is most
informative to begin by examining under what conditions quantum measurements reduce to
classical measurements. This reveals the relationship between the measurement operators
and the classical likelihood function.
Consider a state-of-knowledge that is diagonal in the basis {|n }, so that

ρ= pn |n n|. (1.58)
n

Note that in this case the system is not in a superposition of the basis states. It actually is
in one of the basis states; we merely do not know which one. Recall that we refer to this
situation by saying that the system is in a statistical mixture (or simply a mixture) of the
states {|n }. Now, if we make a measurement on the system purely to obtain information
about the basis state it is in, then we might expect this to be described perfectly well
by a classical measurement. This is because the basis states are distinguishable, just like
classical states, and our state-of-knowledge is a probability distribution over these states,
just like a classical state of knowledge.
With the above thoughts in mind, consider a quantum measurement in which all the
measurement operators Aj are diagonal in the same basis as the density matrix. Note that
we can think of the diagonal elements of the measurement operators, and of ρ, simply as
functions of the index n. That is, we can write

ρ= P(n)|n n|, (1.59)
n

Aj = A(j, n)|n n|, (1.60)
n

where P(n) and A(j, n) are the functions in question. The final (or posterior) state-of-
knowledge that results from our measurement is

Aj ρA†j nA
2 (j, n)P(n)|n n|
ρ̃j = = . (1.61)
Tr[A†j Aj ρ] N
1.4 Understanding quantum measurements 29

This is precisely the same form as the update rule for classical measurement theory. All we
have to do is write the posterior state ρ̃j as

ρ̃j = P(n|j)|n n|, (1.62)
n

and identify A2 (j, n) as the likelihood function, P(j|n), and we have

P(j|n)P(n)
P(n|j) = . (1.63)
N
So, at least in this case, quantum measurements are merely classical measurements, and the
diagonal of the measurement operator Aj corresponds to the square-root of the likelihood
function evaluated for the jth outcome.
Two operators A and B are diagonal in the same basis if and only if they commute with
one another. In view of the above results, we will refer to quantum measurements for which

[Aj , Ak ] = 0, ∀j, k (1.64)

as semiclassical measurements, since they are completely equivalent to their classical


counterparts when made on a system in a state that is in a mixture of the eigenstates of
the measurement operators.

Incomplete measurements and Bayesian inference


As an example we consider a simple semiclassical measurement that provides only par-
tial information about the final state. Such measurements have measurement operators
that are not projectors onto a single state, and are referred to variously as “incomplete”
measurements [47], “finite-strength” measurements [188], or “weak” measurements [548].
Consider a two-state system in the state

ρ = p0 |0 0| + p1|1 1|, (1.65)

and a measurement with two possible outcomes, described by the operators


√ √
A0 = κ|0 0| + 1 − κ|1 1|, (1.66)
√ √
A1 = 1 − κ|0 0| + κ|1 1|. (1.67)

If κ = 1 (or κ = 0) then both measurement operators are projectors, and after the mea-
surement we are therefore certain about the state of the system. If κ = 1/2 then both
operators are proportional to the identity. They therefore have no action on the system, and
we learn nothing from the measurement. For κ ∈ (1/2, 1) the measurement is an “incom-
plete” measurement. It changes the probabilities p0 and p1 , but does not reduce either to
zero.
30 Quantum measurement theory

As discussed in the previous section, since the measurement operators commute with
the density matrix, their action on the diagonal elements of the density matrix is simply
that of Bayesian inference. The measurement operators we have chosen above correspond
precisely to the simple two-outcome measurement that we discussed in Section 1.2.1, with
the identification α = κ. So, for example, if κ > 1/2 and we get the outcome 0, then the
diagonal elements of ρ are re-weighted so as to make the state |0 more likely than it was
before. Finite-strength measurements thus provide some information about which state the
system is in, but not complete information.
We will see in Section 1.4.4 that we can, in fact, understand the action of all quantum
measurements, to a large extent, in terms of Bayesian inference.

1.4.2 Measurements of observables and resolving power


Measurements of observables are a subset of semiclassical measurements. This is because
a measurement of an observable should not change the state of the system if it is already in
an eigenstate of the observable, but merely provide information about the eigenvalue. This
condition demands that all the measurement operators must commute with the observable.
We speak of the measurement as obtaining information in the basis of the observable,
and we refer to this basis as the measurement basis. But measuring a specific observable
requires more than merely having a set of measurement operators that commute with it. To
define a measurement of an observable we introduce the concept of resolving power. Let
us say that a system is prepared in an equal (“fifty-fifty”) mixture of two different states
of the measurement basis. The resolving power of a measurement, with respect to these
two states, is the amount by which, on average, the probabilities of the two states are made
more different by the measurement. In particular, we define the resolving power as the
absolute value of the difference between the probabilities of the two basis states following
the measurement, averaged over the measurement results. As an exercise, if you calculate
this quantity for a von Neumann measurement you will find that it is unity, and this is
clearly the maximum possible value.
A measurement obtains information about an observable X if the probabilities of the
measurement results depend on the eigenvalues of X. If these probabilities do not change
when we change from one initial state to another, then the measurement has no resolving
power for those two states. If two states have the same eigenvalue for X then a measure-
ment of X should not distinguish between them, and the resolving power for these two
states should be zero. Further, for pairs of states that have the same separation between
their respective eigenvalues, the resolving power should be the same. The resolving power
should also increase monotonically with the eigenvalue separation (it would be very strange
if it were harder to distinguish two objects that were a meter apart than a millimeter apart!).
This gives us a minimal definition of a measurement of an observable.
If we wish we can be even more specific about what constitutes a measurement of
an observable by specifying how the resolving power between two states should scale
in some way with the separation between their eigenvalues. For example, for continuous
1.4 Understanding quantum measurements 31

measurements, which we introduce in Chapter 3, it is natural to define a measurement of


an observable as one in which the resolving power scales with the eigenvalue separation in
the same way as it scales with the rate at which the measurement extracts information.
There are many semiclassical measurements that do not correspond to measurements
of any observable. This is because the possible values of any physical observable lie on a
single line, so that a measurement of an observable must have more resolving power for
some pairs of states than for others.
An example of a measurement of an observable X is {Aα }, where

Aα = f (α)e−k(X−α) ,
2
(1.68)

and where f (α) is the scaling required for each operator Aα so that α Aα = I. The larger
the parameter k, the higher the resolving power of the measurement.

1.4.3 A measurement of position


We now give an example of a measurement of position that illustrates the dynamical change
caused by a measurement, something that we have not yet discussed. Quantum measure-
ments induce dynamical changes because the measurement of one observable will usually
change the distribution of other observables that do not commute with it. In classical sys-
tems different dynamical variables are independent of each other. Thus if we measure the
position of a classical object, this does not change the object’s momentum. The mea-
surement does change the distribution of the position, but this is just a change in our
state-of-knowledge about the position, not a change to the position itself. The reason that
we know a classical measurement does not change a classical object’s position is that if
we average our state-of-knowledge over all the measurement results, we are left precisely
with the position distribution that the object had prior to the measurement. We know that
a change is dynamical (that is, a real change to an observable of the system) if the change
remains when we average over the measurement results. This averaging eliminates any
change that is due purely to a change in our state-of-knowledge.
Position is a continuous variable, rather than the discrete variables we have considered so
far. So sums over the eigenstates of position will become integrals. Of course, the “eigen-
states” of position are not real quantum states, as they are not normalizable. They are just a
notational convenience introduced by Dirac. We must also remember that we cannot mea-
sure position exactly, since a state that is infinitely sharp in position has an infinite variance
for the momentum, and thus infinite energy. But we can make measurements that reduce
the position variance, even though we are always left with some uncertainty.
Recall that in measuring a classical variable like position, we often assume that the
error in the measurement has a Gaussian distribution, and in Section 1.2.2 we showed that
this is described by a Gaussian likelihood function. For a quantum measurement, since
it will induce a dynamical change in the system, it is worth considering the interaction
that mediates the measurement. Let us say that we measure the position of an object by
Random documents with unrelated
content Scribd suggests to you:
So the fatal order fluttered in the breeze.
Rear-Admiral Markham, on the Camperdown, was staggered.
“It is impossible!” he exclaimed. “It is an impracticable manœuvre!”
and did not answer back, thus giving the Victoria to understand that
he had not grasped the signal. “It’s all right,” he said to Captain
Johnstone. “Don’t do anything. I have not answered the signal.” And
then gave instructions for the flag-lieutenant to ask for fuller
instructions.
Meanwhile, on the Victoria other signals were being hoisted, asking
Markham why he was not obeying orders, and reproving him for it.
The rear-admiral, knowing it was his duty to obey, decided to do so,
thinking that Tryon must be intending to make a wider circle, and so
go outside the Camperdown’s division.
The two ships therefore turned inwards, Markham and his officers
watching the Victoria closely to see what she would do. On the
flagship, too, officers were discussing the movement, and Captain
Bourke asked Tryon whether it would not be as well to do something
to avoid the collision he saw was inevitable. It was a case for haste,
he knew, and he had to repeat his question hurriedly: “May I go
astern full speed with the port screw?”
“Yes,” said Tryon at last, and Bourke gave the order. But it was too
late; three minutes and a half after the two ships had turned
inwards the Camperdown, although her engines had been reversed,
crashed into the starboard bow of the Victoria, hitting her about
twenty feet before the turret and forcing her way in almost to the
centre line.
Instantly excitement reigned on the Victoria; but the crew, never
losing their heads, rushed to carry out the orders which were now
flung hither and thither:
“Close the water-tight doors!”
“Out collision mats!”
“All hands on deck!”
In rapid succession the orders came; the doors were shut tight, the
mats were hung over the side, where, so great was the gap left
when the Camperdown backed away, the water rushed in in
torrents. Captain Bourke, having visited the engine-rooms to see
that all that was possible had been done, rushed up on deck, and
there found that the Victoria had a heavy list to starboard. On the
deck all the sick men and the prisoners had been brought up in
readiness, and all hands except the engineers were there, too.
All this time the only thought in every man’s mind had been to save
the ship; actually, no one imagined that the fine vessel would
presently make a final plunge and disappear. Tryon had, indeed,
signalled to the other ships not to send the boats which were being
lowered. Having received the report that it was thought the Victoria
could keep afloat some time, Tryon consented to her being steered
for land. But the helm refused to work.
The admiral now signalled: “Keep boats in readiness; but do not
send them.” And then, turning to an officer, said: “It is my fault—
entirely my fault!”
The seriousness of the position was now breaking upon him, though
even then he did not realise how near the end was. The crew
worked hard but orderly, hoisting out the boats, or doing whatever
they were told, while down below the engineers and stokers kept at
their posts, albeit they knew that they stood little chance if the ship
dived beneath the surface.
Presently the men were drawn up on deck, four deep, calm, cool,
facing death without a tremor or sign of panic, which would have
been calamitous.
“Steady, men, steady!” cried the chaplain, the Rev. Samuel Morris;
and steady they were, till Tryon, seeing that all hope was gone,
signalled for boats to be sent, and gave orders for every man to look
after himself.
“Jump, men, jump!” was the command; and they rushed to the side,
ready to fling themselves overboard. As they did so the great ship
turned turtle, and men went tumbling head first into the sea, down
the bottom of the ship as she dived, her port screw racing through
the air.
The scene that followed beggars description; but the following
extract is from a letter written to the Times by a midshipman who
was on one of the other ships. He was sent off in a boat to rescue
the struggling men in the water.
“It was simply agonising to watch the wretched men
struggling over the ship’s bottom in masses”

“We could see all the men jumping overboard,” he wrote. “She
continued heeling over, and it was simply agonising to watch the
wretched men struggling out of the ports over the ship’s bottom in
masses. All this, of course, happened in less time than it takes to
write. You could see the poor men who, in their hurry to jump over,
jumped on to the screw being cut to pieces as it revolved. She
heeled right over, the water rushing in through her funnels. A great
explosion of steam rose; she turned right over, and you could see all
the men eagerly endeavouring to crawl over her bottom, when, with
a plunge, she went down bows first. We could see her stern rise
right out of the water and plunge down, the screws still revolving. It
was simply a dreadful sight. We could not realise it. Personally, I was
away in my boat, pulling as hard as we could to the scene of the
disaster.... After pulling up and down for two hours, we reorganised
the fleet, leaving two ships on the scene of the disaster; and,
making for Tripoli, anchored for the night. No one can realise the
dreadful nature of the accident.
“However, dropping the Victoria for a minute, we must turn to the
Camperdown. She appeared to be in a very bad way. Her bow was
sinking gradually, and I must say at the time I thought it quite on
the cards that she might be lost also; but, thanks to the indomitable
way in which the crew worked, they managed to check the inrush by
means of the collision mat and water-tight doors. All last night,
however, they were working hard to keep her afloat.
“You can imagine our feelings—the flagship sunk with nearly all
hands, the other flagship anchored in a sinking condition. We have a
lot of the survivors of the Victoria on board, but their accounts vary
greatly.... Anyhow, what is quite certain is that the admiral did not
realise the gravity of his situation, or else he would have abandoned
the ship at once, instead of trying to save her. The discipline was
magnificent. Not until the order was given did a single man jump
overboard.
“The last thing that was seen was the admiral refusing to try to save
himself, whilst his coxswain was entreating him to go. Another
instance of pluck was exhibited by the boatswain of signals, who
was making a general semaphore until the water washed him away.
Unfortunately the poor chap was drowned. Many of the survivors are
in a dreadful state of mental prostration. Most people say that
Admiral Markham should have refused to obey the signal, but I think
that Admiral Tryon infused so much awe in most of the captains of
the fleet that few would have disobeyed him. However, he stuck to
his ship to the last, and went down in her.”
Thus was the Victoria lost; less than a quarter of an hour after being
struck she was lying at the bottom of the Mediterranean, Admiral
Tryon and 400 gallant seamen going with her.
At the court-martial Captain Bourke was absolved of all blame for
the loss of the ship, the finding being that the disaster was entirely
due to Admiral Tryon’s order to turn the two lines sixteen points
inward when they were only six cables apart.
INCIDENTS IN THE SLAVE TRADE
Stories of the Traffic in Human Merchandise
WE shall not here deal with the history and abolition of slavery,
because every schoolboy knows all about that, and will doubtless be
glad to have something more exciting. And of excitement there is
abundance in the annals of slavery. The trade was always attended
by risks, even before the days when it was illegal to ship slaves, for
there was ever the danger of the negroes breaking loose and
running amok on the ship; or, what was perhaps worse, the holds of
the slavers were often little less than death-holes, with fever and
cholera rampant. Altogether, it was a game with big profits—and
mighty big risks, as the following story will show:
It was back in 1769 that the slaver Delight (Captain Millroy) was the
scene of an uprising of negroes, which resulted in a rousing fight
and fatal effects to a good many aboard.
About three o’clock one Sunday morning Surgeon Boulton and the
men with him in the aft-cabin were awakened by a chorus of
screams and shrieks overhead, a rushing of feet, a pandemonium of
noise which told that something serious was afoot. Boulton slipped
out of his bunk and dashed towards the captain’s cabin, half
guessing what was taking place. He reached the cabin, and,
entering, shook Millroy fiercely to awaken him. He had barely
succeeded in rousing the captain when a billet of wood came
hurtling through the air and caught him on the shoulder, and a
cutlass pierced his neck. Turning, Boulton saw that a couple of
negroes had, all unseen and unheard, crept below, intent on putting
the captain hors de combat while he was asleep; and, finding the
surgeon interfering with their plot, they attacked him in quick time.
Millroy, now properly aroused, joined forces with Boulton, who forgot
his own danger in the thought of what was happening above, and
the pair chased the negroes on to the deck, Boulton carrying a pistol
and the captain a cutlass.
When they reached deck they found themselves in a very inferno.
Hundreds of negroes were swarming all over the place, some armed
with wooden spars, others with cutlasses; and with these weapons
they were hard at it taking vengeance on their captors. The herd of
savages flung themselves upon the seamen, cutting off legs and
arms, mutilating bodies dreadfully, their yells making the air ring.
Boulton and the captain, realising that it was a case for prompt and
vigorous action, hurled themselves into the heaving fight with a will.
Down went one negro, killed by Millroy’s cutlass; then another; while
Boulton did all he could. But the “all” of these two men was but
little, and presently Millroy fell to the deck, overpowered by
numbers, and literally hacked to pieces. Boulton, more fortunate,
escaped injury, and made a dash for the rigging, up which he
scrambled till he came to the maintop, where he discovered the cook
and a boy had already taken refuge.
Perched on their lofty platform, the three looked down upon deck,
watching as though fascinated the drama being enacted before their
eyes, seeing the now maddened negroes wreaking vengeance on
the men who were bearing them from freedom to slavery. The
bloodlust was upon them, and they searched the ship to take their
fill.
Suddenly the watchers saw two men come up from below and make
a rush across the deck to the rigging. Like lightning the negroes
dashed after them, and one man was brought to deck by a dozen
billets flung at him, and his body was cut to pieces. The second
man, more fortunate, managed to reach the rigging, and clambered
up like a monkey.
The negroes, having satisfied themselves that they had accounted
for all the crew with the exception of those in the maintop, whom
they decided to deal with presently, ransacked the ship, seeking
arms; and meanwhile Boulton, knowing that safety depended upon
weapons, went on a tour of exploration. He wormed his way into the
foretop to see what might be there, and luckily found a knife, with
which he set out to return to the maintop. On the way the negroes
saw him, and began to pelt him with billets of wood, all of which
missed, however; so that Boulton reached his comrades safely. The
one dread in the minds of the four survivors was that the negroes
would find the arms-chest, in which case it seemed to them
hopeless to expect to escape. While the slaves remained armed only
with wooden spars and cutlasses, Boulton did not feel particularly
anxious, knowing that he and his companions would be able to
tackle any who dared to ascend the rigging to try and get them
down. One thing that kept him hopeful was the fact that another
slaver, the Apollo, was almost within hailing distance, and the
Delight, unsteered and sails untrimmed, was rapidly drifting towards
her, which would make the men on the Apollo aware that something
had happened. But Boulton’s luck was out. The negroes found the
arms-chest, and, breaking it open, armed themselves with muskets,
and set to work in earnest to put the survivors out of action.
Shot after shot sang by the maintop, and one of the men there,
fearing that he would be killed if he stayed, and might be saved if he
trusted himself to the mercy of the negroes, like a madman
descended to the deck. Barely had his foot touched it when a negro
fell upon him with an axe and split his head in two; and a dozen
pairs of hands seized him and pitched him overboard to the sharks
which were following the ship, their appetites whetted by the feasts
already given them by the negroes.
While this was going on, other slaves were still shooting away at the
maintop, fruitlessly; and Boulton was calling madly on the Apollo,
now not far away. Presently the captain of the other vessel, realising
what was afoot, gave the word, and a broadside hurtled across the
deck of the Delight, in the hope of frightening the slaves. They
seemed to take little notice of this, however, and Boulton began to
fear that all was over, especially as the negroes, seeing that they
could not hit the men in the maintop, ceased fire, and a giant black,
cutlass in one hand and a pistol in the other, sprang into the rigging,
bent, apparently, on storming the position. Boulton waited calmly.
He had no weapons but his knife and a quart bottle; but he felt that
he was in a good position to meet an attack. Presently the negro’s
head appeared above the platform, and then—whack! The bottle fell
upon it with a sickening thud, the black lost his hold, and went
hurtling into the sea.
Meanwhile, the Apollo was firing at the Delight, and the latter was
returning the fire as well as it could, the negroes evidently knowing
that to give in was to court disaster, and to lose what they had stood
in a fair way to gaining. For four hours they fought the Apollo, and
at the same time kept up their fusillade on the maintop.
Then came the end. Not because the negroes were not able to keep
up any longer, but because a shot from the Apollo fell into a barrel of
gunpowder and exploded it, with the result that the Delight took fire,
and the slaves could not cope with the flames and their enemy at
the same time. The revolt fizzled out as quickly as it had arisen.
While the negroes rushed about seeking to put out the fire, Boulton,
taking his life in his hands, descended to the deck, at the same time
that a boat set out from the Apollo with a crew to tackle the flames
and the negroes, who, filled with consternation, now stood quietly
by watching the fire-fighters. They were absolutely cowed; they had
made their bid for freedom, and had failed, and they knew it. They
allowed themselves to be driven below and secured. The result of
their revolt was that nine of the crew of the Delight were butchered,
one man on the Apollo was killed, and eighteen of the negroes
found death instead of liberty—perhaps death to them was better
than freedom; certainly better than the lot of those poor human
cattle they left behind them.
Such incidents as this were of frequent occurrence, and the recital of
one must suffice.
After the Abolition Act had been passed, severe measures were
brought into operation, giving the Navy a wide scope—so wide that,
even although a vessel had no slaves on board, yet, if the naval
officers had reason to suspect that slaving was her business, they
could apprehend her. Special ships were fitted out and commissioned
to deal with the traffic in the South Atlantic, both off Central America
and the West Coast of Africa. So effective were the measures taken
that the slavers resorted to all manner of disguises to turn suspicion
away from their vessels, which had hitherto been of a distinctive
kind—long, rakish craft with tall spars, the whole effect being one of
beauty, and the idea being speed. The traders changed all this by
having ships more after the fashion of the ordinary merchant vessel,
so that the hunters had a more difficult task in front of them. But
they worked energetically, and swept the seas month after month,
on the look-out for the human cattle-ships, and, as all the world
knows, succeeded in clearing them from the seas.
The subjoined account from the Sierra Leone Watchman for
November 15, 1846, gives a striking picture of the conditions against
which the Navy were doing such good work.
The vessel referred to is the Brazilian brigantine Paqueta de Rio,
captured off Sherbro:
“The 547 human beings—besides the crew and passengers (as they
styled themselves), twenty-eight in number—were stowed in a
vessel of 74 tons. The slaves were all stowed together, perfectly
naked, with nothing on which to rest but the surfaces of the water-
casks. These were made level by filling in billets of wood, and
formed the slave-deck. The slaves who were confined in the hold—it
being utterly impossible for the whole of them to remain on deck at
one time—were in a profuse perspiration, and panting like so many
hounds for water. The smell on board was dreadful. I was informed
that, on the officers of the Cygnet boarding the slaver, the greater
part of the slaves were chained together with pieces of chain, which
were passed through iron collars round their necks; iron shackles
were also secured round their legs and arms. After the officers had
boarded, and the slaves were made to understand they were free,
their acclamations were long and loud. They set to work, and, with
the billets of wood which had hitherto formed their bed, knocked off
each other’s shackles, and threw most of them overboard. There
were several left, which were shown to me. We will leave it to the
imagination of your readers what must have been the feelings of
these poor people when they found they were again free—free
through the energy and activity of a British cruiser. On examining the
poor creatures, who were principally of the Kosso nation, I found
they belonged to, and were shipped to, different individuals; they
were branded like sheep. Letters were burnt in the skin two inches
in length. Many of them, from the recent period it had been done,
were in a state of ulceration. Both males and females were marked
as follows: On the right breast ‘J’; on the left arm, ‘P’; over women’s
right and left breasts, ‘S’ and ‘A’; under the left shoulder, ‘P’; right
breast, ‘R’ and ‘RJ’; on the right and left breasts, ‘SS’; and on the
right and left shoulder, ‘SS.’ This is the same vessel that cleared out
from here about three weeks previous to her capture for Rio de
Janeiro. The slaves were all embarked from the slave factories at
Gallinas, under the notorious Don Luiz, and the vessel under way in
five hours; and had there been the slightest breeze she would have
escaped. Among the slaves there were two men belonging to Sierra
Leone—a man named Peter, once employed by Mr. Elliott, the pilot.
He stated that he had been employed by a Mr. Smith, a Popohman,
to go to Sherbro to purchase palm-oil, and that whilst pursuing that
object he was seized and sold by a Sherbro chief named Sherry.”
A RACE TO SUCCOUR
An Incident of the United States Revenue
Service
THE records of the revenue men of the United States teem with
heroic deeds done in the execution of their duty. The present story is
typical of the thrilling determination of men who will not be beaten,
and incidentally shows a healthy rivalry between the revenue men
and the lifeboatmen.
On January 11, 1891, the three-masted schooner Ada Barker
encountered a terrific storm which played shuttlecock with her, and
after a fierce conflict pitched her on to the Junk of Pork, the
euphonious name of a large rock near outer Green Island, off the
coast of Maine. The Junk of Pork rises a sheer fifty feet out of the
water, and all round it are reefs and boulders, a literal death-trap to
any unfortunate vessel that should get caught there. The Ada
Barker, after having her sails torn to shreds and her rigging
hopelessly entangled, began to ship water, and though her men
worked hard and long at the pumps, they could not save her; then
she was bowled on to the outer reef at night; the bottom dropped
out of her, and she heeled over. To the men on board it seemed that
the end of all things had come, and they gave themselves up for
lost.
“Though her men worked hard and long at the pumps, they
could not save her”

As the ship heeled they heard the sound of something striking


against a rock; then again, as the ship rebounded and fell forward
once more. Eager to take the most slender chance of life, they
scrambled to the side, and saw that the mast was hitting against the
Junk of Pork.
“Boys!” cried the captain. “That’s our one chance!”
The sailors knew what he meant. They had looked about them. To
jump into that boiling surf was to leap into the jaws of death; they
would be smashed to pulp, or drowned like rats. They saw now,
however, that the rock before them could be reached by scrambling
up the mast, which was crashing against it. But they must hurry;
and hurry they did. Like monkeys they swarmed up the mast, caring
nothing for torn hands nor the flapping canvas, which slashed them
like whipcords and threatened to knock them off into the cauldron
below. They fought their elemental fight, and one by one six men
dropped on to the Junk of Pork; and for hours and hours they clung
to their precarious perch, buffeted by strong winds, swamped by
heavy seas and crouching in terror as a mountain wave reared its
head and, as if angry that the men had escaped, broke upon them
with a thunderous roar. At other times they were flung headlong on
the rock by a gust of wind which howled at them as if seeking to
drown their voices as they yelled for help, in the hope that some
ship might be near and hear them through the noise of the gale.
All through the long, dreadful night they remained thus, glad to have
found even so bleak a haven, but wondering whether, after all, they
would be rescued. Then their eyes were gladdened by the sight of a
ship away out on the horizon. Rising and falling as the still
boisterous seas kept up their see-saw motion, she was coming in
their direction. Would she see them? They knew that at the distance
the ship was away they could not be visible yet; yet, cold, drenched
to the skin, almost exhausted by exposure, they stripped themselves
of their shirts and waved—waved like madmen, fearing they would
be passed by. Had they but known it, the officer of the watch of the
coming boat—the United States revenue cutter, of Woodbury—
thought he could see dark forms on the flat top of the storm-
wracked Junk of Pork in a state of frantic activity. Levelling his
glasses, he soon saw the forms of the six men waving the torn and
tattered shirts; and he knew that some ship had been wrecked
during the storm which the Woodbury herself had encountered and
fought sternly against for hours on end since she left Portland.
It took but a few moments for everyone on the cutter to be made
aware of the position of things.
“We’ll make her, boys” said Captain Fengar, who was in command.
“We’ll have those chaps off the Junk of Pork!”
“Aye, aye, sir!” was the chorus; and, with engines pounding out
every ounce of steam, the cutter pushed her nose through the
water, fighting hard against the storm, which was raging as fiercely
as ever. Nearer and nearer they drove, whistling anon to encourage
the stranded mariners, who, weary and exhausted, cried for very joy
as they realised that they had been seen and that help was coming.
Help was coming! Their madness of anxiety gave way to a delirium
of joy. Then their hearts sank into an abyss of despair.
The cutter was very near to them now, but the sea was too rough
for her to venture close to the rocks; the reefs were one cauldron of
boiling surf, and the stranded men knew that no boat from the
cutter could hope to live in such a sea, or hope to escape
destruction on the reefs if she ventured near.
Help had come—and had proved helpless!
They threw themselves down upon the rock and clutched at the bare
surface. They were frenzied. They wondered how much longer they
could withstand the gnawings of hunger, the agonies of thirst; how
much longer, too, they could retain enough strength to keep their
footing on the rock-top. They even thought of leaving their precious
haven and trying to reach the wreck of their once proud little ship,
where there was indeed food and water. But second thoughts
showed them that certain death lay that way, while there was hope
that the cutter might be able to get to them. They saw that she was
hovering about, cruising here and there to keep headway with the
storm, her whistle shrieking out encouragement, and letting them
know that she was standing by, in the hope that the storm would
abate and enable them to launch their boats.
Night came, but the gale still raged, and Captain Fengar decided that
there was only one way to bring about the rescue he was
determined to effect, and that was to put back to Portland and bring
dories with which to land on the rock at dawn next day. He could not
hope to do much good during the night, even if the storm eased off
somewhat; the danger of the breakers was too great. So, whistling
across to the wretched men on the rock, he let them know that he
was going away, but would come back, and then save them.
The first shock of realising that they were to be left alone again
wellnigh crazed the men; they felt that they would prefer to wait
there for death with company than wait alone for salvation. But
away went the cutter, whistling as she went in answer to the
wavings of the sailors; and as the final scream died away the men
sank down upon the rock in desolation of despair, with nothing but
the howling of the wind and the roar of the breakers to keep them
company.
The cutter sped through the night, passing Cape Elizabeth on her
way, and giving the bearings of the wreck to the lifeboat station
there. Reaching Portland, she took her dories and raced back to the
Junk of Pork, arriving there an hour after daybreak. The feelings of
the now almost dead mariners may be better imagined than
described when they heard the siren of the cutter calling to them,
telling them of the coming of hope and help. They forgot the raging
storm, for they knew that these men who had come back had
brought the wherewithal to save them.
On the cutter the revenue men were busy preparing to launch the
boats and the small white cutter, when the lifeboat from Cape
Elizabeth hove in sight. The very sight of her acted as an additional
spur to them, for they regarded this little matter as particularly their
own, and although they themselves had warned the lifeboatman of
the wreck they felt that it was their duty to effect the rescue. They
vowed to themselves that they would get the men off the rock.
“Now, boys,” cried Captain Fengar, “we want to get those men off
ourselves! Hustle!”
And they hustled. In the twinkling of an eye as it seemed a couple of
boats were lowered and the men were in their places.
“You must not fail,” said Fengar as they pushed off. “God bless you!”
And away they went towards the boiling surf, beneath which they
knew lurked hideous, treacherous rocks. Lieutenant Howland, an old
whaler, had charge of the first boat, and with him went Third
Lieutenant Scott and Cadet Van Cott, who had entreated the captain
to allow him to go. Seamen Haskell and Gross manned the second
boat. Like madmen the Woodbury men pulled, straining every effort
to win in the race they had set themselves, knowing that the Cape
Elizabeth lifeboat was sweeping through the seas towards the rock.
As for the lifeboat, its crew were tired, weary with much fighting of
the storm. But they were game; they realised what the Woodbury
men were intent on doing, and they themselves determined to do
their best to beat them in this race for the lives of six unfortunate
men. It was surely one of the queerest contests ever engaged in,
and at the back of it was but one idea—to win through to the rock
and get the stranded mariners to safety.
The first honours went to the Woodbury men; the dory manned by
Haskell and Gross got there ahead of all; they swept through a
narrow channel between the reefs, were wellnigh battered to pieces
against the foot of the Junk of Pork, hailed the men on top—as
though they needed hailing!—and the next instant a man leaped
clear of the rock and tumbled into the dory, which pitched and rolled
dangerously at the impact. Then, realising that they could not stay
there any longer, Haskell and Gross turned their dory about and
made for the channel again; careful steering took her safely through,
and then, buffeted by the waves, they pulled feverishly towards the
cutter, where they eventually got their man safely aboard.
Meanwhile, Howland was keeping his men at it; the race now lay
between him and the lifeboat, and he meant to win. With shouts and
heave-ho’s, Howland urged his men on; and on they went, while
across the waters came the shouts of the lifeboatmen as they bent
lustily to their task.
The revenue boat won by a neck! With a thud she hit the breakers
just ahead of the lifeboat, shivered, and then, lifted up by a giant
comber, cleared a submerged reef, delved on the other side, and
came up almost filled with water. Shaking herself as a dog shakes
the water from his coat, she righted, and Lieutenant Scott leaped
boldly into the surf; but as he did so the undertow took the boat
and, as he still had hold of her, dragged him under water. For a
moment his comrades thought him gone, but presently he came up,
almost frozen, but still hanging on to the boat. And the next moment
a roller caught the boat and pitched her on to a slice of rock.
Almost simultaneously the lifeboat plunged at the breakers. For a
second she hesitated. Her men were debating whether they should
shoot clear or land. They saw the revenue men land. Where they
could go, there could the men of Cape Elizabeth; and they put the
nose of their boat at it, heading straight for the rocks. Less fortunate
than the others, the lifeboat banged into a mighty rock, which stove
in her bow and rendered her unmanageable.
Instantly the winners of the strange race saw that the lifeboat was
helpless and in danger; the men on the Junk of Pork could wait;
they were safe! The revenue men plunged into the surf, waded and
swam to the lifeboat, seized hold of her, and dragged her on to the
strip of rock. It was all done as in a flash; hesitation would have
meant disaster. But it was done, and the rivals stood together at the
foot of the Junk of Pork. Then, resting awhile from their herculean
labours, they set about the rescue of the stranded mariners, who
were very soon in the revenue boat, and being rowed across to the
Woodbury cutter, which, when all was done, steamed back to
Portland, after forty hours of hard fighting for the rescue of half a
dozen men; forty hours well spent, too.
A TRAGEDY OF THE SOUTH POLE
The Thrilling Story of Scott’s Expedition to the
Antarctic
THE age-old dreams of hundreds of men have been realised; the
ends of the earth have yielded up their secrets—the Poles have been
discovered. Peary to the North, Amundsen and Scott to the South,
hardy adventurers all, with the wanderlust in their souls and science
as their beckoner—these men went forth and wrested from the ice-
bound regions something of what had been refused to the scores of
men preceding them; some of whom had come back, weak,
despondent, while others left their bones as silent witnesses to their
noble failure to achieve what they set out for.
Of all the many expeditions which have set forth to the Polar
regions, none was more tragic than that commanded by Captain
Robert F. Scott. In practically the hour of his triumph he failed,
because, no matter how efficient an organisation, no matter how
far-sighted policy and arrangements may be, there is always the
uncertain human element; there comes the point when human
endurance can stand out no longer, when the struggle against the
titan forces of Nature cannot be kept up. And then there is failure,
though often a splendid failure.
Such was that of Captain Scott; he reached the goal he had aimed at
for many years only to find that he had been forestalled by a month,
and then, overtaken by unexpected bad weather, he and the men
with him had to give up the struggle when within eleven miles of
just one thing they stood in need of—fuel with which to cook the hot
meals that meant life. The story is one that makes the blood course
through the veins, makes the heart glow, makes the head bow in
honour; because it is a story of matchless bravery, heroic fortitude
and noble effort.
The Terra Nova, Scott’s ship, carried a complement of sixty men,
each one of them picked because of his efficiency, each one having
his allotted work. Geologists and grooms, physicist and
photographers, meteorologists and motor-engineer, surgeons and
ski-expert and seamen, men to care for dogs, and men to cook food
—a civilised community of efficient, well-found, keen, and high-
idealed men. It was, in fact, the best-equipped Polar expedition ever
sent forth. Scott went out not merely to discover the South Pole, but
also to gather data that should elucidate many problems of science.
He took with him all the apparatus that would be necessary for this
purpose, and when the Terra Nova left New Zealand, on November
26, 1910, there seemed good reason for the conviction that success
must attend the expedition.
The voyage out to the Polar Sea was uneventful, except that early in
December a great storm arose, and called for good seamanship to
keep the vessel going; and even then she was very badly knocked
about. She made a good deal of water, and the seamen had to pump
hard and long; but at last, under steam and sail, the Terra Nova
came through safely, and was able to go forward again, and by
December 9 was in the ice-pack, which was that year much farther
north than was expected. This held them up so that they could not
go in the direction they wanted to, and had to drift where the pack
would take them—northwards. Christmas Day found them still in the
pack, and they celebrated the festivity in the good old English style.
By the 30th they were out of the pack, and set off for Cape Crozier,
the end of the Great Ice Barrier, where they had decided to fix their
winter quarters. They could not get there, however, and they had to
proceed to Cape Royds, passing along an ice-clad coast which
showed no likely landing-place. Cape Royds was also inaccessible
owing to the ice, and the ship was worked to the Skuary Cape,
renamed Cape Evans, in the McMurdo Sound.
A landing was effected, and for a week the explorers worked like
niggers getting stores ashore, disembarking ponies and dogs,
unloading sledges, and the hundred and one other things necessary
to success. The hut, which was brought over in pieces, was also
taken ashore, a suitable site for it cleared, and the carpenters began
erecting it.
During these early days misfortune fell upon them. One of their
three motor-sledges, upon which great hopes were built, slipped
through the ice and was lost.
By January 14 the station was almost finished, and Captain Scott
went on a sledge trip to Hut Point, some miles to the west. Here
Scott had wintered on his first expedition, which set out from
England in 1901. In this his new expedition the hut was to be used
for some of the party, and telephonic communication was installed.
In due course the station was completed; there is no need for us to
go into all the details of the hard work, or the exercising of animals
and men, but a short description of this house on the ice may be of
interest. It was a wooden structure, 50 feet long by 25 feet wide and
nine feet to the eaves. It was divided into officers’ and men’s
quarters; there was a laboratory and dark-room, galley and
workshop. Books were there, pictures on the walls, stove to keep the
right temperature. Stables were built on the north side, and a store-
room on the south. In the hut itself was a pianola and a
gramophone to wile away the monotony of the long winter night. Mr.
Ponting, the camera artist, had a lantern with him, which was to
provide vast entertainment in the way of picture-lectures on all kinds
of subjects. Altogether, everything was as compact and comfortable
as could be wished.
Naturally, there were various adventures during these early days;
once the ship just managed to get away from the spot where almost
immediately afterwards a huge berg crashed down, only a little later
on the same day to become stranded. Luckily, by much hard work,
the seamen managed to get her off.
On January 25 the next piece of work was begun—namely, the
laying of a depot some hundred miles towards the south. Both
ponies and dogs were used for this work, which took nearly a month
—the Barrier ice was always dangerous—and both the outward and
inward journeys were beset by bad weather, bad surfaces, hard
work, disappointments and many dangers. Once, a party was lost,
and found only after they had experienced much suffering.
It was not until April 13 that the depot laying party returned to the
hut, minus some of their animals, which had succumbed to the
rigours of the climate and the stiff work demanded of them. A few
days later the long winter night set in, and the men had to confine
themselves to winter quarters to wait until the coming of the sun
before the main object of their voyage could be attempted. The ship
had returned to New Zealand meanwhile.
The long winter months were filled up with scientific studies of the
neighbourhood, and evenings were occasions for lantern lectures
and discussions on all kinds of subjects, including those which
concerned the expedition. There was plenty of work to do; things
had to be prepared, as far as was possible then, for the final dash;
the animals had to be looked after; and they were a source of
trouble, because it was essential that they should be kept fit. A
winter party was organised and sent to Cape Crozier, a journey that
took them five weeks under “the hardest conditions on record.” It
was well worth while, for many were the valuable observations
made.
Always the scientific aspect of the expedition was kept in view; and
when the sun returned a spring journey to the west was undertaken,
Scott and his little party being absent thirteen days, 175 miles being
covered in that time.
We now come to the great journey to the Pole—a journey of 800
miles. On October 24 the two motor-sledges were sent off, after a
good deal of trouble, Evans and Day in charge of one, and Lashly of
the other; they were the forerunners of the expedition to the Pole.
On the 26th, Hut Point rang up to say that the motors were in
trouble, and Scott and seven men went off to see what they could
do. They came up with the motors about three miles from Hut Point,
and found that various little things were causing trouble. Eventually,
these difficulties were overcome, and the sledges started off again,
and Scott and his party went back to Cape Evans to get ready for
their own journey south.
“The future is in the lap of the gods; I can think of nothing left
undone to deserve success.”
Thus wrote Captain Scott the night before he set out on his last
great journey, and reading the remarkable journal which he left, one
is forced to the conclusion that he was right; if ever man deserved
success, if ever achievement with glory and safety should have been
vouchsafed, it should have been to Scott; but the lap of the gods is
often a sacrificial altar on which men lay down their lives for the
sake of great ideals.
It was on November 1 that the Southern Party set out. It consisted
of ten men, in charge of ten ponies drawing sledges, and two men
leading the dogs which were to take the ponies’ places when the
latter were done. Everything was favourable for the send-off, and
the company arrived at Hut Point, the first stoppage place, quite
safely. From there they pushed on again in three parties, the slowest
starting first, and the others following at sufficient intervals for all to
arrive at the end of the day’s stage at the same time. The motor
party going on in front were putting up cairns for guidance, and
Scott himself on the journey to One Ton Depot had placed
landmarks to guide them. On the 4th Scott came across the wreck of
the sledge worked by Captain Evans and Day—a cylinder had gone
wrong, and the motor had had to be abandoned, the men going on
with the other sledge. This was the first bit of ill-luck, but the days
to come were to bring much more. The dash to One Ton Depot
consisted of hard going over rough surfaces; there were blizzards,
trouble with the ponies; snow walls had to be built to protect the
animals at camp after a long and hard night’s toil, during which they
had journeyed seldom more than ten miles. Night was chosen
because it enabled them to escape the sun, which even in that
latitude was sufficient to make them sweat as they forced their way
over the terrible ground. They reached One Ton Depot at last, and
then picked up the motor party, commanded by Evans, on November
21. The motorists had been waiting six days, unable to go any
farther.
The little band now plunged forward again, meeting the same
difficult surface, having the same trouble with the ponies, one of
whom had to be shot on the 24th, the day on which the first
supporting party, consisting of Day and Hooper, were sent back to
the base. Two days later a depot was laid, Middle Barrier Depot, and
on the 28th, when ninety miles from the Glacier, another pony was
shot, and provided food for the dogs. Ninety miles were still to be
covered, and there was only food for seven marches for the animals.
It would be stiff going, for Scott was relying upon the ponies getting
him to the foot of the Glacier.
Having laid another depot on December 1, thus lightening the load,
and hoping to be able to make good progress, they were furiously
opposed by the elements. On the 3rd, the 4th, and the 5th, blizzards
blew down upon them, impeding them, making the work trebly
difficult, and the last one holding them up for four days, during
which food, precious food, and much-needed fuel were being
consumed without any progress being made. Impatient, bitterly cold,
with the animals getting worn out, Scott and his companions had to
keep to their tents, eager to go on, but realising that to venture
forth was to court disaster. Experienced Polar explorer though he
was, Scott was at a loss to account for the character of the weather
at this, the most favourable, only practicable, time of the year. It was
disheartening, especially when they had to start on the rations that
they had reckoned would not be needed until they reached the
summit of the Glacier. But at last the blizzard blew itself out, and,
stiff and cold, the party set out again, each day finding their ponies
becoming weaker, until on the 9th, at Camp 31, named the
Shambles, all these were shot.
Then it was a case of the dogs pulling the sledges, and on the 10th
the explorers began the ascent of the Beardmore Glacier, the summit
of which was thousands of feet above them. Meares and Atkinson
left for the base on the 11th, and the reduced party trudged forward
and upwards, now having to go down again to avoid some
dangerous part, toiling manfully up the Glacier, in danger of falling
into crevasses, sinking into soft snow, which made the surface so
difficult that after trudging for hours and hours only four miles were
covered when they had hoped to do ten or more. By the 22nd, when
the next supporting party left, they had climbed 7,100 feet (the day
before they had been up 8,000 feet) and then a heavy mist
enshrouded them, and hung them up for some hours—when every
minute was precious.
When they started on the 22nd there were but eight men, and these
toiled on day after day, meeting all sorts of trouble, running all kinds
of risks, but never stopping unless compelled, dropping a depot on
the last day of the year, and sending back three men on the 4th.
This left only Scott, Captain Oates, Petty Officer Evans, Dr. Wilson
and Lieutenant Bowers to make the final dash to the Pole. They had
over a month’s rations, which was considered ample to do the 150
miles that separated them from their goal.
The party now had the small ten-foot sledges, which were neat and
compact, and much lighter than the twelve-foot sledges which were
sent back. The dogs had now gone back, and all the pulling was
done by the men. The difficulty of the surface made them leave their
skis behind on the 7th, but later on that day the surface become so
much easier that it was decided to go back for the skis, which
delayed them nearly an hour and a half. They were now on the
summit, and were held up by a blizzard which, though it delayed
them, gave them the opportunity for a rest which they sadly needed,
especially Evans, who had hurt his hand badly while attending to the
sledges. On the 9th they were able to start again, now swinging out
across the great Polar plateau. They cached more stores on the
10th, and found the lightening of the load very helpful. But even
then, so hard was the pulling, that on the 11th, when only seventy-
four miles from the Pole, Scott asked himself whether they could
keep up the struggle for another seven days. Never had men worked
so hard before at so monotonous a task; winds blew upon them,
clouds worried them because they knew not what might come in
their wake; snow was falling and covering the track behind them,
sufficient to cause them some anxiety, for they wanted that track to
lead them home again via their depots upon which safety depended.
The weather! Day by day the weather worried them; only that could
baulk them in their purpose, and never men prayed so much for fine
days as did these. The 16th found them still forcing their way
onward, with lightened loads again, having left a depot on the
previous day, consisting of four days’ food; and they knew that they
were now only two good marches from the Pole. Considering they
carried with them nine days’ rations, while just behind lay another
four days’, they felt that all would be well if the weather would but
keep clear for them.
The thing that now troubled these men who toiled so manfully
against great odds was the thought that lurked in their minds that
when they reached the Pole they might find that they had been
forestalled. For they knew, everyone of them, that the Norwegian,
Amundsen, was bent on achieving what they were hoping to do: on
being first at the Pole. They knew, too, that things had been more
favourable for him from the very outset; that he had been able to
set out from a much better spot than they had. What if they attained
the goal, only to find a foreign flag flying bravely in the breeze? The
thought was maddening; but the Britishers were sportsmen. And
when months before Scott had heard that Amundsen was in the
South, instead of trying for the North Pole, as he had given out
when he started, the gallant captain had made up his mind to act
just as if he had no competitor.
Next day, the 16th, all their hopes were dashed to the ground. Away
out across the white expanse there loomed a tiny black speck, and
immediately Scott’s thoughts flew to Amundsen. Some of his
companions said it was one thing, others another. As they pulled
hard at their loads the five men debated amongst themselves, trying
to cheer each other up, seeking to cast aside the horrible thought
that would force its way into their minds.
And then, the black spot was reached. It was a black flag, tied to a
sledge bearer. It was the sign that the Norwegians had won in the
race.
All around were signs of a camp, which to the filmed eyes of the
explorers were the tokens of their failure to be first.
“It is a terrible disappointment,” wrote Scott in his diary, “and I am
very sorry for my loyal companions. Many thoughts come and much
discussion have we had. To-morrow we must march on to the Pole,
and then hasten home with the utmost speed we can compass. All
the day dreams must go; it will be a wearisome return.”
And the next day the Pole was reached, and from out its solitude
and austerity the great explorer cried:
“Great God! This is an awful place, and terrible enough for us to
have laboured to it without the reward of priority....”
The great goal had been won; but the joy of achievement was
dimmed; Amundsen’s records and tent were found there, the
Norwegian flag had been hoisted and flaunted bravely in the wind.
They had been forestalled by over a month.
Having fixed up their “poor slighted Union Jack,” as Scott called it,
the explorers turned northwards again, and began to retrace their
footsteps over the Polar plateau, which had cost them so much
labour to cross, then down the great Glacier with ever worsening
weather. The men themselves, who had been so fit coming out,
were now beginning to show signs of their gigantic labours; perhaps
now, when the day dreams were over, and hopes long deferred had
been fulfilled and dashed to pieces at one moment, they were
disheartened; there was not the spur of achievement before them.
Evans and Oates began to show signs of weariness—those two
strong men of the party. Evans had his nose and fingers frostbitten
and suffered much agony. Then, while descending the Glacier, he
tumbled on the Glacier, fell among rough ice which injured his head,
and gave him a touch of concussion of the brain. Dr. Wilson injured
his leg, and snow-blindness was causing him much trouble. All these
things impeded the party, to whom time was everything; food
depended on picking up the depots on the right days—perhaps
hours; and when, as often happened, the track was not easily found,
the anxiety of the explorers was considerably increased.
Then Evans grew worse; from being self-reliant, and the man on
whom the party had been able to look for help in any circumstances,
he became weak and wellnigh helpless; he lagged behind, and the
party had to wait for him to catch up. On February 17 at the foot of
the Glacier, after a terribly hard day’s work, Evans—poor man!—was
so far behind when the party camped, that his comrades became
anxious and went back for him. They found him. The limit of human
endurance had been reached. “He was on his knees, with clothing
disarranged, hands uncovered and frostbitten, and a wild look in his
eyes.” They got him to the tent with great difficulty, and he died that
night. Scott mourned his loss; and his journal is full of his praises of
the petty officer who had been so indefatigable a worker and so
adaptable a man, doing everything his inventive genius could think
of to lighten the work for the explorers.
One day was now much like another to the four men left; they
pushed on and on, picking up depots as they went, and suffering
every day from the bitter cold, and feeling the effects of the hard
work. On March 16, Captain Oates went out. Frostbitten hands and
feet had made life burdensome for him, and he knew that he was a
burden to the gallant men with him; without him, they could
progress much quicker.
“Go on without me,” he had said, earlier in the day. “I’ll keep in my
sleeping bag!” But they had prevailed upon him to keep on. Like a
hero he forced himself to struggle on until they camped at night.
When the morning came he awoke. Of him in those last moments
Scott said: “He was a brave soul.... It was blowing a blizzard. He
said: ‘I’m just going outside, and may be some time.’ He went out
into the blizzard, and we have not seen him since.... We knew that
poor Oates was walking to his death; but though we tried to
dissuade him, we knew it was the act of a brave man and an English
gentleman.”
He had sacrificed himself for the sake of the others. “Greater love
hath no man than this.”
Reduced now to three men, the little party struggled on gamely,
fighting against the weariness that was upon them, making with all
haste for One Ton Depot. They had expected ere this to have met
the dogs which were to come out to help them back, but misfortune
had overtaken Cherry Garrard, who had been waiting at One Ton
Depot for six days held up by a blizzard. He had not sufficient food
for the dogs to enable him to go south, and he knew that the state
of the weather might easily make him miss Scott, whereas to wait at
the depot was to be on hand when Scott did turn up.
Now the dire peril of their position forced itself upon them; though
they fought to drive the thoughts away, manfully cheering each
other up, none of them believed that they would ever get through,
and on March 18, when twenty-one miles from the depot, the wind
compelled them to call a halt. Scott’s right foot was frostbitten; he
suffered from indigestion; they had only a half fill of oil left and a
small amount of spirit. It meant that when this was gone, they could
have no more hot drink—which would bring the end.
Despite their sufferings they went on again, until on the 21st they
were camped eleven miles from the depot, a blizzard raging round
them, little food, no fuel, and knowing in their hearts that when the
next day dawned they could not continue the journey perilous and
laborious; the end was at hand.
Days before Scott and Bowers had made Dr. Wilson give them that
which would enable them to put an end to their misery; but now to-
night, when face to face with death, they resolved that they would
die natural deaths; it should not be said of them that they shirked.
Each morning until the 29th they got ready to start for the depot
that was so near, with its food, its fuel, its warmth, its companions;
and each day they found the blizzard howling about them, as
effectual a barrier as if it had been a cast-iron wall.
“We shall stick it out to the end,” wrote Scott on the 29th, “but we
are getting weaker, of course, and the end cannot be far.
“It seems a pity, but I do not think I can write more.
“For God’s sake look after our people!”
And so they died, these heroes and gentlemen; and through Scott’s
last letters which were found with the dead bodies in the tent on
November 10 there is but one thought running: the care of the
people left behind and the praises of the men who had accompanied
him. Never were such eulogiums written. “Gallant, noble gentlemen,”
he called them, as death brooded over him; and throughout every
line there was the spirit of cheeriness which takes life—and death—
as becomes a hero who knows that failure was no fault of his own,
that man can do no more than fight nobly against the forces arrayed
against him.
Welcome to our website – the ideal destination for book lovers and
knowledge seekers. With a mission to inspire endlessly, we offer a
vast collection of books, ranging from classic literary works to
specialized publications, self-development books, and children's
literature. Each book is a new journey of discovery, expanding
knowledge and enriching the soul of the reade

Our website is not just a platform for buying books, but a bridge
connecting readers to the timeless values of culture and wisdom. With
an elegant, user-friendly interface and an intelligent search system,
we are committed to providing a quick and convenient shopping
experience. Additionally, our special promotions and home delivery
services ensure that you save time and fully enjoy the joy of reading.

Let us accompany you on the journey of exploring knowledge and


personal growth!

ebookgate.com

You might also like