
JID:TCS AID:12353 /FLA Doctopic: Theory of natural computing [m3G; v1.261; Prn:27/01/2020; 17:39] P.

1 (1-15)
Theoretical Computer Science ••• (••••) •••–•••

Contents lists available at ScienceDirect

Theoretical Computer Science


www.elsevier.com/locate/tcs

Theory versus practice in annealing-based quantum computing

Catherine C. McGeoch
D-Wave Systems, Canada

Article history: Received 25 July 2019; Received in revised form 13 January 2020; Accepted 15 January 2020; Available online xxxx
Communicated by C.S. Calude

Keywords: Quantum computing; Adiabatic quantum computing; Quantum annealing; Models of computation

Abstract. This paper introduces basic concepts of annealing-based quantum computing, also known as adiabatic quantum computing (AQC) and quantum annealing (QA), and surveys what is known about this novel computing paradigm. Extensive empirical research on physical quantum annealing processors built by D-Wave Systems has exposed many interesting features and properties. However, because of longstanding differences between abstract and empirical approaches to the study of computational performance, much of this work may not be considered relevant to questions of interest to complexity theory; by the same token, several theoretical results in quantum computing may be considered irrelevant to practical experience.
To address this communication gap, this paper proposes models of computation and of algorithms that lie on a scale of instantiation between pencil-and-paper abstraction and physical device. Models at intermediate points on these scales can provide a common language, allowing researchers from both ends to communicate and share their results. The paper also gives several examples of common terms that have different technical meanings in different regions of this highly multidisciplinary field, which can create barriers to effective communication across disciplines.
© 2020 Elsevier B.V. All rights reserved.

1. Introduction

How much time does it take to solve a satisfiability problem defined on n variables? The complexity theorist will tell you
that satisfiability is NP-complete: based on current knowledge, 2^n probes of the solution space are required. The empirical
computer scientist will tell you that hard random 3CNF SAT instances with 250 variables can be solved in under 5000
seconds [31], considerably lower than the theoretician's estimate, which works out to about 1.8 × 10^66 seconds assuming
one nanosecond per probe. (For comparison, about 4.4 × 10^17 seconds have elapsed since the Big Bang.)
This illustrates a long-standing communication gap between theoreticians and empiricists when reasoning about al-
gorithm performance on NP-hard problems. The empiricist will explain that worst-case bounds are over-pessimistic and
irrelevant to real-world scenarios. The theoretician will point out that performance of any given algorithm on any given
input says nothing about performance in general. Both are correct. But in this case, neither prediction is much use to the
practitioner with real inputs in hand who is wondering what to expect.
The theoretician and empiricist will agree that the cost of any given computation depends on three factors shown in
Fig. 1: the instance, the algorithm, and the machine on which the algorithm runs. Theoretical analysis is based on finding

E-mail address: [email protected].

https://doi.org/10.1016/j.tcs.2020.01.024
0304-3975/© 2020 Elsevier B.V. All rights reserved.

Fig. 1. Computational performance depends on interactions among the machine, the algorithm, and the input. Abstractions of these three objects are
analyzed in ways that are general but hard to make precise; instantiations are analyzed in ways that are precise but hard to generalize.

bounds for abstractions, and empirical analysis is based on measuring instantiations of these three factors. The communica-
tion gap arises because abstract results are hard to make precise and empirical results are hard to generalize.
The gap becomes a ravine when reasoning about quantum computation. Most theoretical work on quantum algorithms
assumes that the computation takes place in an ideal closed system, in perfect isolation, whereas all real-world quantum
computation takes place in an open system, vulnerable to noise from the ambient environment and to other hazards such as
imprecision in analog controls. These effects degrade the probability that any given computation succeeds, which increases
computation time (or space, or both, depending on the remedy). Furthermore, the impact of these effects increases with
system size, in ways that are difficult to characterize. Without measurements, theory-based predictions about performance
of real-world quantum computers at industrially-relevant scale can become untenable, due to over-pessimism about (worst
case) classical performance and over-optimism about (ideal case) closed system conditions.
The ravine becomes a chasm when considering the annealing-based quantum paradigm, including adiabatic quantum
computing (AQC) and quantum annealing (QA), which is distinct from the more familiar gate model (GM) of quantum com-
putation. These terms are defined more carefully in Section 2.3: here briefly, AQC refers to a general closed-system model
of computation, analogous to a Turing Machine, and QA refers to a restricted and more realistic model of computation,
analogous to the programmable RAM found in any algorithms textbook (e.g. [19]).
Physical QA computers are easier to build and operate at commercially-relevant scale than are physical GM computers,
as evidenced by the steady pace of production from D-Wave Systems,1 starting with the 128-qubit D-Wave One launched
in 2011 through the 2000-qubit D-Wave 2000Q launched in 2017; a next-generation system with 5000+ qubits is due
to be announced in 2020. This trajectory exists partly because the annealing-based paradigm is more robust than GM
against the kinds of errors that plague open system quantum computing, and partly because D-Wave’s business strategy has
prioritized early growth of system size in the simpler QA model, and later growth of system scope to more general forms
of quantum computers. As a result, progress in developing and analyzing the physical machines has outpaced progress
in abstract analysis within this paradigm. As Preskill concisely puts it [57]: "Since theorists have not settled whether quantum
annealing is powerful, further experiments are needed."
Certainly plenty of experimental results are available. A quick search of arXiv [1] and review of presentations at D-Wave
QUBITS user group meetings [60] reveals hundreds of published papers that describe physical properties, performance,
demonstrated applications, and classical simulations of existing quantum annealing processors. A rough empirical picture
of the algorithm performance landscape — characterizing how well a given quantum annealing algorithm performs, with
respect to input properties and machine parameters — is starting to emerge. However, it is not clear whether these empirical
results are useful or interesting to the theoretician who wants to study abstract properties of QA and AQC. On the flip side of
this coin, without proper abstract models, generalizing measured results beyond the scope of any given experiment becomes
extremely problematic.
This paper aims to address this impasse. Section 1.1 presents a brief tutorial overview of how quantum annealing works.
Section 2 proposes a framework for reasoning about annealing-based models of quantum computation from both ends of
the abstraction/instantiation spectrum, by introducing the QA-RAM, an abstract model that captures features of real-world
quantum annealing processors. Section 3 points out some differences between theoretical and empirical approaches to
algorithm analysis in the context of quantum annealing.
Sections 2 and 3 aim to provide a common language for sharing results from theoretical and empirical research on
annealing-based quantum machines and algorithms. A second goal of the paper is to develop common understanding of
the language already in place: one unhappy consequence of the above-mentioned communication breakdown is that key
technical terms have different meanings in different research communities that study quantum annealing systems. Section 4
presents a glossary of commonly-used terms having multiple definitions, with the aim of breaking down this barrier to
research communication. Section 5 offers some conclusions.

Brief history of AQC and QA. The computational concepts associated with AQC and QA have origins in different branches
of science. Nowadays, like the overloaded terms in Section 4, they have multiple conflicting meanings in the literature —

1 D-Wave, D-Wave One, D-Wave Two, D-Wave 2X, D-Wave 2000Q, and Leap are registered trademarks of D-Wave Systems. Intel and Xeon are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. CPLEX is a registered trademark of ILOG in France, the U.S. and/or other countries.

Fig. 2. (left) The half-adder circuit H. (center) The boolean function for H. (right) First eight rows of a penalty model for H.

sometimes referring to abstract models of computation, sometimes to algorithms, and sometimes to physical machines —
and are sometimes used interchangeably.
Nevertheless, some consensus has emerged that AQC refers to more abstract versions, and QA to more realistic versions,
of the same basic concept: annealing-based quantum computation. This paradigm uses Hamiltonians to describe a given
computation: a Hamiltonian is a Hermitian matrix, an operator that describes the forces to be applied to a collection of
qubits and couplers, in order to move the system into some desired state. That is, instead of describing computation in
terms of step-by-step application of quantum logic gates as in the gate model, the annealing-based model describes an
operator to be applied to all qubits simultaneously, together with a specification of how to gradually transition from an
initial state to a realization of that operator. The transition is called an anneal.
Quantum annealing originally referred to a classical heuristic for solving NP-hard optimization problems, proposed by
several authors starting in the late 1980’s, as a variation on the well-known simulated (thermal) annealing algorithm (SA).
Kadowaki and Nishimori [36] analyzed quantum annealing in a version incorporating a transverse field Ising model, and
showed that QA converges to optimality faster than SA in some cases.
Farhi et al. [25] proposed an abstract quantum adiabatic algorithm (QAA) that was designed to run on a purpose-built
quantum device for solving NP-hard optimization problems. The idea was further generalized to become AQC, which now
refers to both a model of computation and an algorithm that runs in that model.
D-Wave Systems demonstrated a prototype QA processor called Orion in 2007, and its first commercial system, the
D-Wave One, in 2011. These and subsequent quantum processors implement a family of quantum annealing algorithms
using quantum hardware.

1.1. How quantum annealing works

This section presents a brief introduction to quantum annealing processors developed by D-Wave (see [68,17,48] for
more). The quantum processing unit (QPU) reads inputs for the Ising Model minimization problem (IM), defined as follows:
Given a graph G = (V, E) with real weights h_i on its n vertices and J_{ij} on its m edges, find an assignment of spin values x_i ∈ {±1}
to the vertices so as to minimize the energy function

$$E(x) = \sum_{i \in V} h_i x_i + \sum_{(i,j) \in E} J_{ij}\, x_i x_j. \tag{1}$$

This problem is NP-hard when G is non-planar [11]. The problem is equivalent to quadratic unconstrained binary optimization (QUBO) defined on binary variables y_i ∈ {0, 1}, via the trivial transformation x_i = 2y_i − 1, and to Maximum Weighted 2-Satisfiability defined on booleans b_i ∈ {T, F}, with logical operators replacing arithmetic operators in formula (1).
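To make the definitions concrete, the energy function (1) and the spin-to-binary transformation x_i = 2y_i − 1 can be evaluated directly in a few lines of Python; the triangle instance below is a made-up illustration, not an instance from this paper.

```python
from itertools import product

def ising_energy(x, h, J):
    """Energy function (1): x maps each vertex to a spin in {-1, +1}."""
    return (sum(h[i] * x[i] for i in h)
            + sum(Jij * x[i] * x[j] for (i, j), Jij in J.items()))

def brute_force_minimum(h, J):
    """Exhaustively minimize (1) over all 2^n spin assignments."""
    vertices = sorted(h)
    best_energy, best_x = None, None
    for spins in product((-1, +1), repeat=len(vertices)):
        x = dict(zip(vertices, spins))
        e = ising_energy(x, h, J)
        if best_energy is None or e < best_energy:
            best_energy, best_x = e, x
    return best_energy, best_x

# A small made-up instance: a triangle with uniform couplings and no fields.
h = {0: 0.0, 1: 0.0, 2: 0.0}
J = {(0, 1): 1.0, (0, 2): 1.0, (1, 2): 1.0}
energy, x = brute_force_minimum(h, J)   # frustrated triangle: energy -1.0

# Equivalent QUBO variables y_i in {0, 1}, via x_i = 2*y_i - 1.
y = {i: (x[i] + 1) // 2 for i in x}
```

The exhaustive loop, of course, stands in for the minimization task that the QPU performs natively.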
The process of using the QPU to solve a given input I_0 for an arbitrary NP-hard problem P typically involves three steps:

(1) Transform instance I_0 for P to an instance I_1 for IM or QUBO (the QPU accepts either form), using standard reduction
techniques of NP-completeness theory.
(2) Apply minor-embedding to transform I_1 into an equivalent instance I_2 that matches the native QPU topology, that is,
the physical connectivity structure among qubits.
(3) Send I_2 to the QPU, together with algorithm parameters, and receive solutions. Quantum annealing in the open system
means that there is a chance that the returned solution is not optimal. In normal use the QPU is asked to return a
sample of R results, to boost the chances of finding an optimal solution in the sample.

Here, I_0 is the original or source instance, I_1 is the logical instance, and I_2 is the native instance.
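The role of the sample size R in step (3) follows a standard calculation: if a single anneal returns an optimal solution with probability p, a sample of R independent anneals contains one with probability 1 − (1 − p)^R. A minimal sketch (the value p = 0.05 is purely illustrative):

```python
import math

def sample_success(p, R):
    """Probability that at least one of R independent anneals succeeds."""
    return 1.0 - (1.0 - p) ** R

def anneals_needed(p, target=0.99):
    """Smallest R whose success probability reaches the target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))

R = anneals_needed(0.05)   # per-anneal probability 0.05 -> R = 90
```

The same formula underlies the time-to-solution metrics commonly reported in empirical studies of annealing hardware.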

An example. The circuit satisfiability problem (CSAT) is defined as follows: Given a description of a boolean circuit C :
{0, 1}^n → {0, 1}^m together with an output string O ∈ {0, 1}^m, find an input string I ∈ {0, 1}^n such that O = C(I). Fig. 2 shows
a small example circuit, the half-adder (H), which has two input bits I = (a, b) and two output bits O = (s, c) (sum and
carry). The center table shows the binary function computed by H.

Fig. 3. (left) Penalty model g_H for the half-adder circuit; h is on the diagonal and J on the off-diagonals. (center) A C2 Chimera graph. The nodes represent
qubits and the edges are couplers between them. (right) A native embedding of the half-adder circuit with zero-weight edges omitted. Weights h_a, h_s are
split across two qubits and chain edges (red) are set to a strong negative value. (To see the colors in the figure, the reader is referred to the web version of
this article.)

Translating the CSAT input (H, O) into an IM input requires finding a penalty model, a function that is minimized exactly
when O = H(I), that is, when the circuit constraint a + b = 2c + s is satisfied. The right side of Fig. 2 shows the first eight
rows of such a model, here minimized at zero when the constraint is satisfied, and otherwise taking a positive value g
(not necessarily the same g in every row). This penalty model must be expressed in quadratic form as in (1). Here is one
possible derivation of the penalty model g_H:

$$\begin{aligned}
a + b - s - 2c &= 0 &&\text{constraint} \\
g_H &= (a + b - s - 2c)^2 &&\text{positive penalty} \\
&= a^2 + ab - as - 2ac + ba + b^2 - bs - 2bc - sa - sb + s^2 + 2sc - 2ca - 2cb + 2cs + 4c^2 \\
&= a^2 + b^2 + s^2 + 4c^2 + 2ab - 2as - 4ac - 2bs - 4bc + 4sc \\
&= a + b + s + 4c + 2ab - 2as - 4ac - 2bs - 4bc + 4sc. &&\text{using } B = B^2 \text{ if } B \in \{0, 1\}
\end{aligned}$$

It is straightforward to verify that g_H is minimized exactly when the variables a, b, c, s obey the half-adder constraint. Referring to the energy function (1), this penalty model has coefficients (h, J) shown in the table of Fig. 3, with h on the
diagonal and J on the off-diagonals.
A second penalty model g_O is needed to ensure that variables c, s match the output string O: for example, setting
g_O = α(−c + s) for some α > 0 ensures that g_O is minimized when c = 1, s = 0 (the value of α can be tuned to improve
success probabilities). Summing g_H + g_O yields coefficients (h, J) for the logical input I_1. If desired, this QUBO form defined
on binaries can be converted to an IM form defined on spins as in (1).
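The combined penalty g_H + g_O can be verified by exhaustive enumeration. The sketch below fixes the arbitrary choice α = 2 and confirms that the unique minimizer encodes the satisfying input (a, b) = (1, 1) for the output c = 1, s = 0:

```python
from itertools import product

def g_H(a, b, s, c):
    """Quadratic penalty for the half-adder constraint a + b = 2c + s."""
    return (a + b - s - 2 * c) ** 2

def g_O(s, c, alpha=2):
    """Output penalty: minimized when c = 1, s = 0."""
    return alpha * (-c + s)

# Enumerate all 16 assignments of (a, b, s, c) and collect the minimizers.
costs = {(a, b, s, c): g_H(a, b, s, c) + g_O(s, c)
         for a, b, s, c in product((0, 1), repeat=4)}
best = min(costs.values())
minimizers = [v for v, e in costs.items() if e == best]
# g_H is zero exactly on the rows satisfying the circuit constraint, and
# adding g_O leaves a unique ground state (a, b, s, c) = (1, 1, 0, 1).
```

Any α > 0 works here, because a broken constraint costs g_H ≥ 1 while g_O can only shift the energy by ±α uniformly across rows with the same (s, c).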
The next step is to map I_1, defined on a general graph G, to the native qubit connection topology inside the QPU. The current
D-Wave architecture is based on a topology described by a Chimera graph. Fig. 3 shows a C2 Chimera containing a 2 × 2
grid of Chimera cells, where each cell is a K_{4,4}, and nodes are connected to grid neighbors.
Current model D-Wave 2000Q QPUs are based on a C16 Chimera graph with 2048 qubits. The physical hardware graph
C′ for a specific QPU may contain a small number of qubits that are unused due to fabrication defects, in which case C′ ⊆
C16. A next-generation system with a more complex Pegasus topology will be launched in 2020 [14].
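The Chimera structure is easy to generate programmatically. The sketch below is an illustrative reconstruction under the standard cell indexing (it is not D-Wave's own generator): a qubit is a tuple (row, col, orientation, index), each cell forms a K_{4,4}, and same-index qubits link vertically or horizontally adjacent cells.

```python
from itertools import product

def chimera_edges(m, cell_size=4):
    """Edge list of an m-by-m Chimera graph of K_{4,4} unit cells.

    A qubit is (row, col, o, k): o = 0 for the cell's 'vertical' side,
    o = 1 for its 'horizontal' side, and k in range(cell_size).
    """
    edges = []
    for r, c in product(range(m), repeat=2):
        # Intra-cell K_{4,4}: every vertical qubit couples to every
        # horizontal qubit of the same cell.
        for k1, k2 in product(range(cell_size), repeat=2):
            edges.append(((r, c, 0, k1), (r, c, 1, k2)))
        # Inter-cell couplers: vertical qubits link to the cell below,
        # horizontal qubits to the cell on the right, same index k.
        for k in range(cell_size):
            if r + 1 < m:
                edges.append(((r, c, 0, k), (r + 1, c, 0, k)))
            if c + 1 < m:
                edges.append(((r, c, 1, k), (r, c + 1, 1, k)))
    return edges

# A C2 Chimera: a 2 x 2 grid of K_{4,4} cells -> 32 qubits, 80 couplers.
edges = chimera_edges(2)
```

Coloring each qubit by (row + col + orientation) mod 2 two-colors every edge, which is one way to see that Chimera is bipartite.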
The input I_1 must be embedded in a minor of the hardware graph, which may introduce auxiliary qubits to physically
represent the minor. For example, since Chimera is bipartite, the triangles ab, as, bs and ab, ac, bc in I_1 must be minor-embedded by splitting some variables across two physical qubits. The qubits are linked by so-called chain edges with strong
negative weights J_chain < 0, to ensure that their values are equivalent in the output. The right side of Fig. 3 shows one
possible embedding of I_2: the weights h_a, h_s are split over qubits a, a′ and s, s′, and the chains are shown in red. Finding an optimal minor-embedding is NP-hard in general; D-Wave provides users with a suite of heuristics and tools for
accomplishing the task [13,21,18].
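The effect of chain edges can be seen on a toy instance (all weights below are illustrative): a frustrated triangle with J = +1 on every edge is embedded by splitting one vertex into two chain qubits joined by J_chain = −2, and exhaustive search confirms that every ground state keeps the chain intact.

```python
from itertools import product

def energy(x, J):
    """Ising energy (1) with all h_i = 0, spins indexed by vertex name."""
    return sum(Jij * x[i] * x[j] for (i, j), Jij in J.items())

# Logical instance: triangle v0-v1-v2 with J = +1 on every edge.
# Embedded instance: vertex v2 is split into chain qubits v2a, v2b.
J_embedded = {
    ("v0", "v1"): 1.0,
    ("v0", "v2a"): 1.0,    # logical edge (v0, v2) on one chain qubit
    ("v1", "v2b"): 1.0,    # logical edge (v1, v2) on the other
    ("v2a", "v2b"): -2.0,  # chain edge, strong negative weight
}
names = ["v0", "v1", "v2a", "v2b"]
ground, states = None, []
for spins in product((-1, +1), repeat=len(names)):
    x = dict(zip(names, spins))
    e = energy(x, J_embedded)
    if ground is None or e < ground:
        ground, states = e, [x]
    elif e == ground:
        states.append(x)
# Every ground state satisfies x[v2a] == x[v2b]: the chain is intact and
# the two physical qubits can be read back as one logical variable.
```

Breaking the chain costs +2 here, which outweighs the at most +2 any broken-chain state can recover elsewhere; in practice the chain strength must be tuned rather than derived exhaustively.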

The quantum annealing algorithm. Given input I_2 = (h, J), the QPU applies a quantum annealing algorithm that is expressed
as an algorithm Hamiltonian H, designed so that the ground state of the qubit system matches the optimal solution to
the energy function (1). The qubits, behaving as quantum spins driven by this Hamiltonian operator, naturally seek their
lowest-energy (ground) state, just as water naturally flows downhill. By the end of the quantum process, the qubit states
have classical spin values ±1. These values are read and returned as a solution to I_2: if all goes well (never guaranteed in
open-system conditions), the solution is optimal for (1).
This time-varying algorithm Hamiltonian is defined in terms of four components, as follows.

(1) The problem Hamiltonian H_P is based on I_2 = (h, J) and describes the goal (ground) state for the qubits,

$$H_P = \sum_{i \in V} h_i \sigma_i^z + \sum_{(i,j) \in E} J_{ij}\, \sigma_i^z \cdot \sigma_j^z. \tag{2}$$

Here σ_i^z denotes the Pauli-z operator applied to qubit i, and the multiplication operator · is the tensor product; thus
H_P is a 2^n × 2^n matrix of complex numbers describing an operator to be applied to the full qubit system. The 2^n
eigenvectors and eigenvalues of H_P correspond to the 2^n solutions and costs defined by (1).


(2) The initial Hamiltonian H_I describes initial conditions. Here H_I = Σ_{i∈V} σ_i^x, where σ_i^x is the Pauli-x operator applied to
qubit i. This is a so-called transverse field operator in the x basis, which puts the qubits into equal superposition with
respect to the problem basis z, so that initially each solution to H_P is equally likely.
(3) The anneal time t_a is an interval during which H transitions from H_I to H_P. Time is normalized by s = t/t_a so that
s : 0 → 1 as t : 0 → t_a.
(4) A pair of anneal path functions A(s) and B(s) are defined such that A(s) : 1 → 0 and B(s) : 0 → 1 as s : 0 → 1. The path
functions describe the "shape" of the transition, which may go faster or slower at different times s.

The full QA algorithm is expressed as a time-varying Hamiltonian executed over a time interval t_a:

$$H(s) = A(s)\, H_I + B(s)\, H_P \quad \text{as } s : 0 \to 1. \tag{3}$$

Each execution of this algorithm, called an anneal, works as follows. First, at s = 0, the qubits are put into an initial superposition state according to H_I. As s increases, H(s) evolves by growing B(s) and shrinking A(s), so that the problem
Hamiltonian gradually dominates. When s = 1 at the end, the qubit states — by now classical spin values x_i ∈ {±1} because
H_P has classical eigenvalues — are read and returned. A minimum-eigenvalue (ground) state for H(s = 1) = H_P corresponds
to an optimal solution for the energy function (1) defined by input I_2. As mentioned previously, in open-system computing there is a nonzero probability that the qubits will not finish in the ground state, so in normal use, R anneals are applied,
resulting in a sample of R solutions for the input.
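The claim that the 2^n eigenvalues of H_P reproduce the 2^n classical costs of (1) can be checked directly in pure Python: since H_P involves only Pauli-z operators, it is diagonal in the computational basis, so it suffices to build diagonal vectors via Kronecker products (the three-qubit weights below are arbitrary).

```python
from itertools import product

def kron_diag(diags):
    """Kronecker product of diagonal matrices, stored as diagonal vectors."""
    out = [1.0]
    for d in diags:
        out = [a * b for a in out for b in d]
    return out

def problem_hamiltonian_diag(n, h, J):
    """Diagonal of H_P in (2); the Pauli-z diagonal is (1, -1)."""
    Z, I = [1.0, -1.0], [1.0, 1.0]
    diag = [0.0] * (2 ** n)
    for i, hi in h.items():
        term = kron_diag([Z if q == i else I for q in range(n)])
        diag = [d + hi * t for d, t in zip(diag, term)]
    for (i, j), Jij in J.items():
        term = kron_diag([Z if q in (i, j) else I for q in range(n)])
        diag = [d + Jij * t for d, t in zip(diag, term)]
    return diag

# Arbitrary 3-qubit instance.
h = {0: 0.5, 1: -1.0, 2: 0.25}
J = {(0, 1): 1.0, (1, 2): -0.5}
diag = problem_hamiltonian_diag(3, h, J)

# Classical energies of (1), enumerated in the same basis order
# (qubit 0 most significant; spin +1 corresponds to basis state |0>).
energies = []
for bits in product((1, -1), repeat=3):
    x = dict(enumerate(bits))
    energies.append(sum(h[i] * x[i] for i in h)
                    + sum(v * x[i] * x[j] for (i, j), v in J.items()))
```

Simulating the full time-dependent evolution (3) would additionally require the non-diagonal transverse-field term H_I, which is beyond this diagonal-only sketch.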
In the current D-Wave architecture, the user can specify the problem Hamiltonian H_P, the anneal time t_a, and variations
on the default path functions A(s), B(s) (including running s backwards). The default H_I is fixed, but the user may provide
initial spin values for qubits rather than starting them in equal superposition. Just as with classical algorithms, modifying
these parameters changes the distribution of solutions, and possibly the amount of computation time (controlled by t_a
and R) needed to find an optimal solution.
The quantum element. Well-publicized arguments that quantum computing has the potential to outperform classical computation are based on the fact that qubits have special properties not available to classical bits. First, qubits are capable of
superposition, which means that rather than representing a discrete value ±1, they can be in a probabilistic combination
of both states at the same time. Thus, while a classical n-bit register can represent one of 2^n states, a quantum n-qubit
register can (probabilistically) represent all 2^n states at once. Second, qubits are capable of entanglement, which means that
the superposition state of a group of qubits can move together as an indivisible unit.
In the context of quantum annealing, entanglement and superposition create a condition whereby the qubits respond to
H(s) as a system (as opposed to as individuals) and without having to choose discrete spin values ±1. The initial Hamiltonian
H_I puts the qubits in the ground state in the x basis, which corresponds to equiprobable superposition (all states equally likely)
in the z basis; then the Hamiltonian gradually evolves so that the qubit system finishes in a ground state corresponding
to H_P. Ideally, the quantum spin state follows H(s) and remains in the ground state all the way to the end of the anneal.
Thus, quantum annealing can be viewed as a type of natural computing, as it controls a system of physical qubits to carry
out a quantum process that achieves a specific computational goal.
Declarative programming paradigm. Note that using a quantum annealer does not require inventing and implementing a new
quantum algorithm for each new input, as is normally done in the imperative programming paradigm. Instead, the (highly
parameterized) quantum algorithm has already been implemented in hardware: the user simply provides a description of
the desired outcome together with parameters specifying how the computation should go, as in the declarative programming
paradigm. Examples of the declarative paradigm in use in classical computation include logic programming in the Prolog
language, database programming in the SQL language, combinatorial optimization with CPLEX, and programming tools such
as yacc and make. D-Wave chose this approach to quantum processor deployment in order to simplify the overall design
and lower barriers to usability.

2. Annealing models of computation

Fig. 4 shows a scale of instantiation for computing machines, the first of our three performance factors identified in Fig. 1.
Theoreticians tend to work with pencil-and-paper machine models on the abstract (left) side of this scale, and empiricists
tend to work with physical machines at the instantiated (right) end. Machines at intermediate points can be used to share
results from the endpoints. Some familiar examples from classical computing are shown in red:

• A Turing machine (TM) M_P performs a computation that solves a given problem P, defined as a mapping of input strings
I to output strings O such that O = P(I). According to the Church-Turing thesis [65], this is the most powerful model
of computation known: every computable function can be computed by some Turing machine.
• A universal random access machine (RAM, also known as a universal TM) is a stored-program computer that can simulate
the computation of any given M_P on any given input I. The RAM considered here has a von Neumann architecture with
the following components:
– An arithmetic-logic unit (ALU) that performs a fixed set of arithmetic and logic functions on words of size W. The ALU
defines an instruction set; the cost of running a given program is the number of ALU instructions that are executed.

Fig. 4. Computing machines lie on an instantiation spectrum between abstract models (left) and concrete reality (right). Red: classical models of computa-
tion, architectures, and processor models. Blue: analogous machines in the quantum annealing paradigm.

Fig. 5. The QA-RAM contains a classical CPU and a quantum QPU, which is used as a coprocessor.

– A memory containing M words that holds programs and data.
– An I/O unit for exchanging inputs and outputs with a user.
– A control system that executes instructions as directed by the program.
The abstract RAM is more realistic than any M_P, and has parameters such as W and M that can be used for reasoning
about the limitations of physical machines. For example, W can be used to analyze numerical precision errors due to
insufficient word size [30], and M can be used to analyze out-of-memory algorithms of interest when inputs are too
big to fit in memory [70].
• A computer manufacturer designs an architecture based on a specific set of instructions, memory addressing conventions, and other implementation choices.
• The manufacturer produces a family of physical machines that represent different configurations of an architecture.

Fig. 4 also shows (in blue) a similar instantiation scale and proposed terminology for annealing-based quantum models
and machines. We reserve “AQC” for the most general model at the abstract end, and use “quantum annealer” as a generic
term for abstract and physical machines. The next section describes the quantum annealing RAM (QA-RAM), and compares it
to other models in this diagram.

2.1. The QA-RAM

The QA-RAM of Fig. 5 has classical and quantum components. The classical central processing unit (CPU) operates in the
normal way, using the ALU to manipulate words of data according to instructions from a program stored in memory. The
quantum processing unit (QPU) operates like a co-processor that communicates with the CPU. As described in Section 1.1,
the QPU receives a quantum machine instruction (QMI) consisting of an input (h, J), together with parameters such as R. The
control system performs an anneal by applying the algorithm Hamiltonian H associated with the QMI and its parameters to the
quantum circuit that contains the qubits and couplers. The QPU returns results from R anneals.
Similar dual-processor designs have been proposed for quantum RAMs in the gate model, for purposes of defining quan-
tum instruction sets and assembly languages [52,53]. Like the annealing-based version described here, these abstractions
contain both a quantum circuit and a classical circuit, which are combined with classical memories and control systems
to provide the full functionality of a universal RAM. This hybrid approach is convenient for describing real-world quantum
computation, since many quantum algorithms have a substantial classical component. For example, Shor's algorithm works
by classically transforming an input for integer factoring into an input for the quantum Fourier transform, which is solved
using a quantum circuit [64]. For QA-RAM computations, we delegate to the CPU the classical support tasks of problem
transformation, minor-embedding, and (optionally) pre- and post-processing, and we use the QPU for finding solutions to
problems encoded as QMIs.
Note that the QMI is a true instruction, in the sense that it encodes an operation together with the input and output
of that operation: in the circuit-SAT example of Section 1.1, the operation is “find a satisfying assignment,” the input is a

description of the circuit C and the string O = (c , s), and the output is the pair I = (a, b). All three of these components are
encoded in (h, J ).
Gu and Perales [27] have shown that evaluation of an arbitrary classical circuit can be encoded as the ground state of an
Ising Hamiltonian such as H_P (2). Therefore, the QA quantum circuit is at least as powerful as a classical ALU, and could be
used in place of the ALU to compute any classical function.
Furthermore, the Circuit-SAT example shows that the QMI can encode the satisfaction of an arbitrary classical circuit.
This implies that the QMI can encode the computation of any hard optimization problem having a decision version in NP,
assuming that the polynomial-time problem transformation and minor embedding steps are performed classically. Thus,
a single QMI instruction performs the same task as a classical program that is normally expected to use exp(n) classical
instructions to solve an input of size n. This prompts several modifications to the usual assumptions made when analyzing
classical algorithm performance, as described in the next section.

2.2. Comparison to a classical RAM

Here are some unusual assumptions about computation using a QA-RAM in comparison to a classical RAM.

The quantum circuit is programmable. The classical ALU is a collection of combinational circuits that perform boolean op-
erations. The control system routes circuit outputs to the ALU output, but the physical circuit does not change from one
instruction to the next. ALU instructions are normally assumed to take constant time.
In contrast, the quantum circuit in the QA-RAM is programmable, in the sense that qubit and coupler weights are
reconfigured according to (h, J) for each new QMI. Unlike a classical ALU instruction, the anneal time t_a is not a constant: instead (for
reasons described in Section 2.3), it is expected to grow with the size of the problem encoded in the QMI.

The quantum circuit size can grow with input size. In algorithm analysis the problem size is the number of problem variables,
which determines the size of the solution space to be explored; the input size is the number of words needed to completely
specify an input instance. For example, in IM the problem size is the number of binary variables n and the input size is the
number of weights (h, J ) on nodes and edges of G, equal to n + m. Suppose each weight is expressed using a word size of
W bits.
The word size of the classical ALU is assumed to be at least W, and the memory size M is assumed to be large enough
to hold the input, output, and program; it is not assumed that the ALU grows with input or problem size. However, the
quantum circuit size Q, including qubits and couplers, is normally assumed to be big enough to hold the native input of
size n + m. If Q is not large enough to hold the full problem, a hybrid decomposition algorithm may be used to solve
the problem in pieces via queries to the QPU; note, however, that this approach does not guarantee that optimal
solutions are returned.

The quantum circuit is error-prone. One normally expects a physical ALU to execute every instruction correctly, subject perhaps
to precision limits imposed by W . (In very rare cases, a physical ALU may fail in other ways: readers may recall the uproar
that greeted news in 1994 of the Pentium FDIV bug, which caused the floating point unit to sometimes return incorrect
results of division operations [72].)
In contrast, open system quantum computing always carries a nonzero probability that a given computation will fail.
In the context of quantum annealing, this means that an optimal solution may not be returned. Three primary sources of
error can be identified: thermal noise can bump the qubits out of ground state [7]; analog control errors can perturb the
algorithm Hamiltonian [35]; and tunneling dynamics can taper off late in the anneal [10]. The collective effect is that the
qubit state may get caught in a local minimum instead of following the algorithm Hamiltonian all the way to a ground state
of H_P.
As it turns out, many optimization applications tend to be forgiving of imperfect results, by accepting near-optimal as
well as optimal solutions. A number of error mitigation techniques are known for QA [58,59,44,42], but to date the physical
machines have been robust enough in application use (routinely finding near-optimal solutions) as not to require error
mitigation. Such techniques may be necessary on future larger systems, or in contexts where optimal solutions are required.
Note that these effects are quite different from those found in open system gate-model computation, where the primary
concern is decoherence, the loss of qubit entanglement as a sequence of gate operations is applied (see [4] for details).
Because decoherence can be catastrophic to outcomes in that model, qubits must be engineered to a sufficiently high
quality, and elaborate error-correction schemes may be needed to ensure sufficiently high success probabilities on machines
built at scale.

2.3. Comparison to AQC

Albash and Lidar [6] present an excellent and thorough overview of theoretical research on AQC, including formal defini-
tions and variations on the basic model, theorems about complexity, and a review of the major results and open questions.
Some of that work is summarized very briefly here.
In this paper, AQC refers to a more general model of computation than that represented by the QA Hamiltonian (3). More
general computations may include, for example, problem Hamiltonians with higher-order interactions than the 2-order

Machine Parameters                      User Parameters

Topology: Chimera 16                    Anneal time ta: [1 … 2000] μs
Max qubits: 2048                        Initial state: equiprobable or assigned by user
Max couplers: 6016                      H_P: as in (2)
Qubit yield: > 97.6%                    Anneal path modifications: several
Temperature: ≤ 12 mK                    Postprocessing: optional
h scale: [−1 … +1]                      Virtual full yield: optional
J scale: [−2 … +1]
Stoquastic: yes

Fig. 6. Some parameters of the D-Wave 2000Q architecture. The left column shows parameter ranges assigned to the machine; the right column shows
parameters that may be selected by the user at runtime. An extensive library of software tools for problem transformation, embedding, decomposition, and
other tasks is also available.

interactions of (2), or additional Hamiltonians with different combinations of Pauli operators. Analysis in the AQC model
typically assumes that computations take place in an ideal closed system under perfect control fidelity. These assumptions
imply that the qubit system is initialized into a ground state of H I and that it is not possible for noise from the surrounding
environment to raise the system out of ground state during the anneal.
The Adiabatic Theorem [15] states that under these assumptions of ideality, an AQC algorithm can guarantee to solve a
given input to optimality for anneal time ta ≫ 1/Δ³, where the spectral gap Δ is the minimum instantaneous gap between
the smallest and second-smallest eigenvalues in the time-varying Hamiltonian. No efficient general method for computing
bounds on Δ or ta is known, although some results are available for specific problems: AQC equivalents of several
gate-model algorithms are known, including Grover’s algorithm [61], the Deutsch-Jozsa algorithm [63,71], and the Bernstein-
Vazirani algorithm [29].
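In symbols, writing E0(s) ≤ E1(s) for the two smallest instantaneous eigenvalues at normalized time s = t/ta, one common form of the adiabatic condition reads (the exponent on Δ varies with the precise statement of the theorem; rigorous versions range from Δ² to Δ³):

```latex
\Delta \;=\; \min_{s \in [0,1]} \bigl( E_1(s) - E_0(s) \bigr),
\qquad
t_a \;\gg\; \frac{1}{\Delta^{3}} .
```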
Aharonov et al. [2] have shown that a general version of AQC is polynomially equivalent to gate-model quantum computing.
Indeed, proofs of equivalence are known for several variations on the AQC model [6], as distinguished by the types
of Hamiltonians employed as well as the quantum elements (e.g. qubits vs. fermions) being acted upon. AQC models that
are GM-equivalent can usually be shown complete for the quantum Merlin-Arthur (QMA) complexity class; it is known
that NP ⊆ QMA and that BPP ⊆ BQP ⊆ QMA (referring to bounded-error probabilistic polynomial time and bounded-error
quantum polynomial time).
Of particular interest is the role of stoquasticity in AQC: a stoquastic Hamiltonian has nonpositive off-diagonal elements
with respect to the computational basis (in this case z). The above-mentioned general models are all nonstoquastic, and
it is an open question whether AQC remains QMA-complete when restricted to k-local stoquastic Hamiltonians (for fixed
k). Albash and Lidar [6] discuss the evidence for and against the question. They also point out that nonstoquasticity is
a sufficient, but not necessary, condition for establishing QMA-completeness: for example, 3-local stoquastic AQC can be
shown both QMA-complete and equivalent to the gate model by relaxing the assumption that the computation must stay in
ground state throughout the anneal.
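The stoquasticity condition is easy to check for small matrices. As a toy illustration (not the full 2000Q Hamiltonian, and with the caveat that stoquasticity is basis-dependent), a transverse-field driver term −σx has off-diagonal entries −1 ≤ 0 in the z basis, hence is stoquastic as written, while flipping its sign breaks the condition in that basis:

```python
import numpy as np

def is_stoquastic(H, tol=1e-12):
    """True if every off-diagonal entry of H is nonpositive,
    with respect to the basis in which H is written."""
    off_diag = H - np.diag(np.diag(H))
    return bool(np.all(off_diag <= tol))

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Single-qubit transverse-field driver: off-diagonals are -1 <= 0.
H_driver = -sigma_x + 0.3 * sigma_z
# Sign-flipped transverse field: positive off-diagonals in this basis.
H_flipped = +sigma_x + 0.3 * sigma_z
```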
Interest in this question is prompted by the fact that current-model quantum annealing processors manufactured by
D-Wave employ stoquastic Hamiltonians. Note that this condition is not necessarily permanent: several approaches are
known for modifying the Hamiltonian in (3) to achieve nonstoquasticity [12,69]. D-Wave has demonstrated a prototype
nonstoquastic device [55], as part of an ongoing research program aimed at implementing processors with more general
capabilities.
In contrast to the general AQC model, the QA model proposed in this paper is intended to capture properties of existing
quantum processors. Thus stoquasticity may be considered a parameter of the QA-RAM, subject to modification as technolo-
gies evolve. Furthermore, if desired, algorithm analysis in QA may involve relaxing assumptions about ideality, to consider
open system effects.

2.4. Comparison to D-Wave architectures and machines

This section points out some ways in which current D-Wave architectures and machines represent specializations of the
abstract QA-RAM. These specializations are captured by parameters having a range of values that may differ from machine
to machine. Fig. 6 shows some of the parameters used to instantiate the current D-Wave 2000Q system. The left side of
the table shows parameters that are associated with machine models; the right side shows runtime parameters that can be
modified by the user to improve performance on specific inputs.
One important parameter is the connection topology of qubits and couplers in the hardware graph. Every generation
up to the D-Wave 2000Q has been based on the Chimera graph shown in Fig. 3. D-Wave has announced a next-generation
architecture, to appear in 2020, that is based on the Pegasus graph. Notable features of Pegasus include higher maximum
degree (15 versus 6) and a non-bipartite structure [14]. Future QA-RAM models will incorporate Pegasus graphs.
More information about these and other parameters may be found in system documentation [21]. D-Wave processors
come bundled with a full software stack and toolkit library to perform key tasks such as problem transformation and minor-
embedding: documentation, tutorials, and demos of the API may be found at the Leap cloud service [66], which provides
real-time public access to a quantum annealing system via the cloud.
Note that in current D-Wave systems deployed for public use, the CPU and the QPU components of Fig. 5 are not
co-located. Instead, the physical QPU is connected to a nearby front end CPU acting as server in a distributed client-server
framework: the front end receives QMIs from clients distributed over a network, queues and schedules those requests,
invokes the QPU, and returns the results. The client CPUs carry out the classical tasks of transformation and embedding to
formulate the QMIs.

Fig. 7. An instantiation scale for algorithms and heuristics.

One important distinction between quantum and classical architectures is the pace of technological advances and the
meaning of “constant factors.” In these waning days of Moore’s law, classical architectures remain relatively stable, and
computation times within a given architecture tend to differ by small factors, say 2 to 8, from one generation to the
next [16,33,47]. In these early days of quantum computing, new technologies and machine features can sometimes yield
significant performance speedups. For example, researchers at Google [22] reported that for one problem, solution times on
a D-Wave 2X were 10,000 times faster than predictions based on the previous year’s D-Wave Two, a difference attributed
to better machine calibration and colder refrigeration of the quantum chip.

3. Algorithms and inputs

The communication gap between theoretical and empirical approaches to algorithm analysis is perhaps greatest when
reasoning about algorithms and heuristics for NP-hard problems. Some differences between the abstract and concrete ends
of the instantiation spectrum, which can create significant obstacles to sharing of results, are discussed in this section.

3.1. Instantiation scale for algorithms

A heuristic is an algorithmic process that has no theoretical guarantees about its performance. In this section, the term
algorithm refers to an abstract pencil-and-paper version of an algorithm or heuristic. A solver is an algorithm or heuristic
that has been instantiated in code or hardware.
Fig. 7 shows an instantiation scale for algorithms and solvers, analogous to that for machines in Fig. 5. (See [46,47] for
more about instantiation.) The blue diagram shows an instantiation scale for classical algorithms and solvers. On the left end
is the nondeterministic set of algorithms, which contains all possible execution paths for a given input I for a given problem
X: this set is used to establish complexity bounds on X. On the right end is a deterministic algorithm that performs exactly
one execution path per input. Here are some more points on this scale, working from left to right:

• An abstract algorithm is typically written in pseudocode and may be incompletely specified. For example, Algorithm 1
shows pseudocode for a simple Random Greedy method for solving a minimization problem with objective function
f (·) defined on binary solution strings.
• A programmer implements an algorithm in source code, supplying missing specifications and narrowing the range of
possible execution paths. For example, an implementation of Algorithm 1 must provide function bodies for selecting an
initial solution, evaluating a stopping rule, and choosing an index order for flipping bits of s. Some of these decisions
may be made available to the user via runtime parameters.
• The source code is compiled and linked, producing binary code instantiated for a specific machine. This may further
narrow the set of possible execution paths, depending on things such as library functions, memory layout schemes, and
cache configurations.
• Just before execution, three items may remain unspecified:
– The input instance to be solved.
– Values assigned to user parameters.
– A seed for the pseudorandom number generator.
For simplicity we assume that these values are provided by the user just prior to program execution; the sequence of
random numbers is supplied by the operating system during process execution.

The final result is a fully instantiated solver. This instantiation is deterministic in the sense that it performs a single
execution path for any given instance. Re-running this solver with identical inputs and parameters but new random seeds
creates a randomized solver. Re-running with the same input but different user parameters creates a family of solvers for the
input.

Algorithm 1: Random Greedy(X).

Input: Objective function f(·)
Output: best_sol

s ← choose_random_solution()
c ← f(s)
best_sol ← s
best_cost ← c
while evaluate_stopping_rule() do
    for i ∈ choose_index_of(s) do
        s′ ← flip_bit(s, i)
        c′ ← f(s′)
        if c′ < c then
            s ← s′
            c ← c′
        end
    end
    if c < best_cost then
        best_sol ← s
        best_cost ← c
    end
end
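One possible fully instantiated solver for Algorithm 1, sketched in Python. The unspecified choices are filled in as discussed above — random initial solution, a fixed iteration budget as the stopping rule, left-to-right index order, and an explicit seed for the pseudorandom source — but all of these choices belong to this sketch, not the pseudocode; they pick out one point in the family of solvers it admits.

```python
import random

def random_greedy(f, n, iters=100, seed=None):
    """One instantiation of Algorithm 1 for an objective f on {0,1}^n bit lists."""
    rng = random.Random(seed)
    s = [rng.randint(0, 1) for _ in range(n)]   # choose_random_solution
    c = f(s)
    best_sol, best_cost = list(s), c
    for _ in range(iters):                      # stopping rule: fixed budget
        for i in range(n):                      # index order: left to right
            s2 = list(s)
            s2[i] ^= 1                          # flip_bit(s, i)
            c2 = f(s2)
            if c2 < c:                          # greedy accept
                s, c = s2, c2
        if c < best_cost:
            best_sol, best_cost = list(s), c
    return best_sol, best_cost

# Toy objective: minimize the number of ones; optimum is the all-zeros string.
sol, cost = random_greedy(sum, n=8, iters=10, seed=1)
```

Re-running with a different `seed` yields the randomized solver described above; varying `iters` yields a family of solvers for the same input.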

Instantiation for quantum annealers. An instantiation sequence for quantum annealing is shown in red in Fig. 7. Recall from
Section 1.1 that a D-Wave architecture follows the declarative programming paradigm by instantiating a parameterized
family of quantum annealing solvers in quantum hardware. In this context, the quantum machine instruction (QMI) is
comparable to a binary solver that presents several runtime parameters to the user. Multiple anneals produce a sample of
solutions drawn from a problem-specific (usually unknown) distribution.
Problem-solving in the declarative programming paradigm involves input instantiation: in this case a problem input must
be transformed to QUBO or IM and embedded onto the native topology. Although the instantiation steps performed on the
top and bottom halves of Fig. 7 are conceptually distinct, both can have considerable impact on performance. Cross-paradigm
performance analysis should take both types of instantiation into account.
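The QUBO-to-IM transformation mentioned above is a standard change of variables: with x ∈ {0,1} and s ∈ {−1,+1} related by x = (1 + s)/2, any QUBO objective maps to Ising weights (h, J) plus a constant offset. A sketch (function and variable names are illustrative):

```python
def qubo_to_ising(Q):
    """Convert QUBO objective sum_{i<=j} Q[i,j] x_i x_j (x in {0,1})
    to Ising (h, J, offset) via the substitution x_i = (1 + s_i) / 2."""
    h, J, offset = {}, {}, 0.0
    for (i, j), q in Q.items():
        if i == j:
            # q * x_i  =  q * (1 + s_i) / 2
            h[i] = h.get(i, 0.0) + q / 2
            offset += q / 2
        else:
            # q * x_i x_j  =  (q/4) * (1 + s_i + s_j + s_i s_j)
            J[(i, j)] = J.get((i, j), 0.0) + q / 4
            h[i] = h.get(i, 0.0) + q / 4
            h[j] = h.get(j, 0.0) + q / 4
            offset += q / 4
    return h, J, offset

# Toy QUBO: x0 - 2*x1 + 3*x0*x1
Q = {(0, 0): 1.0, (1, 1): -2.0, (0, 1): 3.0}
h, J, offset = qubo_to_ising(Q)
```

The resulting (h, J) is what gets minor-embedded onto the native topology; the offset is simply added back to reported energies.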
Theoretical and empirical approaches to performance analysis can differ in significant ways. For example, consider the
Brute Force (BF) algorithm for factoring a b-bit composite number N by trying all divisors: one instantiation of BF counts
upwards (divide by 2, divide by 3, . . ., divide by √N). Although this algorithm takes Θ(2^(b/2)) time in the worst case, a
simple test of the solver using random inputs N would reveal that the common case is easy: at least 60 percent of all
integers (those ending in 0,2,4,6,8, and 5, plus 1/3 of the rest) can be solved within four iterations. Worst-case inputs are
exponentially rare as b grows, and unlikely to appear in fixed-size random input samples. Although Shor’s algorithm is
superior to BF in terms of worst-case cost, an experiment using random inputs might not succeed in demonstrating that
superiority.
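The counting-upward instantiation of BF can be sketched in a few lines (the iteration counter is added here purely to make the common-case claim observable):

```python
def bf_factor(N):
    """Brute force: try divisors 2, 3, ... up to sqrt(N).
    Returns (divisor, cofactor, iterations used)."""
    d, iters = 2, 0
    while d * d <= N:
        iters += 1
        if N % d == 0:
            return d, N // d, iters
        d += 1
    return N, 1, iters  # no divisor found: N is prime

# Most random composites fall almost immediately: anything even on
# iteration 1, multiples of 3 by iteration 2, multiples of 5 by iteration 4.
p, q, iters = bf_factor(3 * 5 * 7)
```

Only inputs whose smallest prime factor is near √N — e.g. a product of two b/2-bit primes — force this solver anywhere close to its Θ(2^(b/2)) worst case.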
Hard inputs for this solver are known, of course, but the same does not hold for other instantiations of BF, such as BF
that counts downward from √N, or BF that reads a table of b/2-bit primes to check first, or randomized BF. More generally,
empirical work considers specific solver instantiations that respond individually to specific inputs. Worst-case inputs can
be rare, and there may be no reason to expect that observed performance on a given input set is reflective of worst-case
performance. Empirical work on quantum and classical solvers for NP-hard problems tends to focus on questions unrelated
to worst-case performance, such as: exposing input properties that predict solver performance, developing guidelines for
matching solvers to inputs, or automating tuning strategies for delivering best performance on a given class of inputs.

4. Glossary of ambiguous terminology

A technical term that has multiple meanings is semantically loaded. Philosophers use the term semantic discord to refer
to a situation where a dispute about some concept arises not from disagreement about the concept, but from disagreement
about the meanings of the words used to describe the concept: that is, semantically loaded language leads to semantic
discord. The highly multidisciplinary nature of research on AQC and QA — with different groups using the same terms with
different meanings — can sometimes create semantic discord, which impedes research progress. This section draws upon
the framework developed in Sections 2 and 3, to point out several semantically-loaded terms arising in research on AQC
and QA.

4.1. Universality

Is QA a universal model of computation? The answer depends on which of two definitions of universal is being used.

Universal Turing machine. Computability theory considers what functions can be computed by a given machine model, with-
out reference to the resources needed for the computation. The Church-Turing thesis posits that Turing machines are the
most powerful model of computation, capable of evaluating any computable function defined on natural numbers. In this

context, universality refers to a single programmable Turing machine (also called a universal RAM) that can simulate the
computation of a given (problem specific) Turing machine on a given input. Note that universality is a property of an ab-
stract model of computation, which can be arbitrarily large: a physical machine cannot be universal because it is of finite
size.
Gu and Perales [27] have shown that evaluation of an arbitrary classical circuit can be encoded as the ground state of
an Ising Hamiltonian such as H_P shown in formula (2). This implies that the quantum circuit described in Section 2.1 could
be used in place of a classical ALU, and a QA-RAM containing only a quantum circuit is equivalent in power to a universal
Turing machine.

Universal quantum computer. Deutsch [23] describes a universal quantum computer in the gate model (GM) that computes
functions on continuous rather than discrete domains, and that can perform certain tasks faster than any classical computer
(he points out that the universal quantum computer is not more powerful than a Turing machine in the computability-
theory sense). The term is often used to refer to physical machines, although some deprecate this usage [40].
As mentioned in Section 2.1, nonstoquastic AQC has been shown to be polynomially equivalent to GM [2,6], although
the proof fails for stoquastic Hamiltonians. The QA Hamiltonian (3) implemented in current D-Wave systems is stoquastic,
but that parameter may change in some future architecture. Thus while the current version of the QA-RAM model may be
considered a universal (Turing-equivalent) computer capable of quantum computations, it is an open question whether it is
a “universal quantum computer,” when read as an indivisible phrase.

4.2. Quantum computer

The question of whether a quantum annealer is properly called a quantum computer can be answered many ways,
according to a variety of meanings that can be attached to either term. Some examples pro and con are considered here.

• From the perspective of computability theory, the quantum circuit alone is not a computer, even though it is pro-
grammable. It is a combinational circuit, whereas computers require both combinational and sequential circuits, the
latter to provide memory and state. To see the difference, note that a combinational circuit cannot go into an infinite
loop, and there is never a question of whether it halts.2
• The QA-RAM model (with or without the classical ALU component) is unambiguously a quantum computer, since it
contains a quantum processing unit that has all the capabilities of a universal Turing machine.
• It could be argued that D-Wave architectures currently available to the public are not computers, because they are
deployed in a distributed architecture that separates the program that invokes QMIs from the hardware that executes
QMIs. On the other hand, this approach is not intrinsic to the computational model. As far as this author is aware, all
quantum computers currently available to the public are similarly deployed, so this argument should not be used to
distinguish QA architectures from GM architectures.
• Deployment choices may affect the overall efficiency of a given QA architecture, but efficiency is not normally used as
a criterion for distinguishing “computers” from “non computers.” On the other hand, a report by researchers at Jülich
University [51] makes the interesting point that a physical quantum computer in the NISQ era3 may not qualify to
be called “a computer” if it returns outputs that are too error-ridden to be of practical use. On yet another hand, all
real-world quantum computation carries a nonzero probability of error, and consensus on the definition of “practical
usefulness” may be difficult to achieve.

Around the time the D-Wave Two system was introduced in 2013, questions were raised about whether the physical
machines exploit quantum properties in their operation. Several papers addressing the question from different perspectives
have appeared [8,22,28,32,38,41]; current consensus is that D-Wave quantum annealers do exploit quantum capabilities that
are not available to classical computation.

4.3. Solving a problem

Recall that in the framework of P and NP, a decision problem X ∈ NP is solved by a nondeterministic prover and
a deterministic verifier working together on input In. If In ∈ language(X), the prover supplies a certificate of membership
C_I. The verifier reads (In, C_I): if the input is a yes-instance, then the verifier uses the certificate to accept In in polynomial
time; if a no-instance, then no certificate exists that could fool the verifier into accepting the instance.
For an optimization problem in the NPO complexity class, with input In, the goal is to find a solution x that minimizes
(resp. maximizes) a given objective function. Assuming P ≠ NP, no optimization algorithm can guarantee to deliver all
three: (1) an optimal solution, (2) for every input in the problem domain, (3) in polynomial time. However, guarantees of
two out of three are possible:

2 Thanks to Cristian Calude for helping to clarify this point.
3 The term noisy intermediate scale quantum (NISQ) computer refers to quantum computers that are too small to support error correction [57]. The Jülich
report criticises a gate-model NISQ computer built by IBM.

(1) A complete (or exact) algorithm returns an optimal solution to every instance, but may take exponential time on some
inputs. The BF algorithm of Section 3 is an example (assuming factoring is formulated as an optimization problem:
given N, find p, q to minimize (N − pq)²).
(2) An approximation algorithm runs in polynomial time and guarantees to find solutions within some ratio ρ of optimal,
but may not always find optimal solutions. For example, Goemans and Williamson [26] describe a randomized approx-
imation algorithm for MAX CUT that delivers solutions with costs at least ρ = 0.878 times optimal.
(3) A specialized algorithm can find optimal solutions in polynomial time for a subclass of instances, but not necessarily for
all inputs. For example, the IM problem can be solved optimally in polynomial time if input graph G is planar [11].

The term heuristic describes an algorithmic process for which no non-trivial performance guarantees are known, including
knowledge about membership in the above categories. Since optimization heuristics tend to work well on applications
arising in practice, substantial empirical research effort has been aimed at understanding their properties.4
Unlike nondeterministic provers, algorithms and heuristics return outputs that may not be viable as certificates. In the
optimization and empirical algorithmics communities, the phrase solving the problem simply refers to the task of returning
a solution: it may or may not be optimal, the computation time may be fast or slow, and any claims about performance
apply only to the instances in hand.
For example, it was shown in Section 3 that BF can solve factoring problems in constant time. This usage can be startling
to the complexity theorist who normally interprets this claim as having the specific meaning, “exactly solves all inputs in
constant time” — everybody knows that factoring can’t be solved in constant time.
The contradiction here lies in the different meanings attached to the phrase solving the problem. Awareness of these
varying usages could prevent misunderstanding when performance results are shared across different research communities.5

4.4. Quantum speedup

As discussed in Section 2.1, a QA-RAM can compute any function that a Turing machine can. However, being able to
compute a function is not the same as being able to compute it efficiently. Considerable theoretical and empirical research
effort has been spent in finding that line of demarcation: for what types of problem and inputs can we expect quantum
annealing systems to show superior performance over classical computation?
Several different definitions of superior quantum performance — sometimes called quantum speedup or quantum advantage —
have been proposed. These definitions are largely orthogonal, in the sense that a demonstration of superior quantum
performance under one meaning would not satisfy those interested in other kinds of performance. This section gives a brief
overview of the prevalent approaches to quantum performance analysis.

Quantum speedup in complexity theory. Papageorgiou and Traub [56] propose two flavors of quantum speedup, as follows:

• Quantum speedup. S1 = CA(n)/CQ(n) is the ratio between the cost of the best known classical algorithm A and that of
a quantum algorithm Q. Although it is not stated explicitly in the paper, these are assumed to be exact algorithms and
worst-case cost functions, with Q running in a closed system.
• Strong quantum speedup. S2 = LA(n)/LQ(n) is the ratio between classical and quantum problem complexity. The com-
plexity of a problem is defined in terms of lower bounds on worst-case cost, over all algorithms for the problem.

It is straightforward to adapt these definitions to algorithms in AQC. As mentioned in Section 2.3, under closed-system
conditions the minimum anneal time required to guarantee finding an optimal solution to a given input scales as an inverse
polynomial of Δ, the minimum instantaneous eigenvalue gap between the ground state and the first excited state of the
algorithm Hamiltonian. Unfortunately, no efficient general method for computing Δ is known.
Nevertheless, a handful of fundamental results have been obtained, including polynomial equivalence between AQC and
gate model quantum computing [2] and demonstration of an adiabatic search algorithm equivalent to Grover search [61].
These theorems are not known to hold for stoquastic AQC, nor for the version of QA-RAM described in Section 2.1.
An obvious question is whether this type of quantum speedup might be demonstrated empirically using a physical QA
circuit: several fundamental obstacles arise. The first applies to most of the open questions of complexity theory, which
are stated in terms of alternating existential and universal quantifiers over infinite sets — worst-case bounds hold for all
instances in a problem domain; asymptotic bounds hold for all n above a threshold n0, and so forth. Experiments, being
finite, can demonstrate that a property exists, but not that a property holds for all members of an infinite set. Put another
way, hypotheses about complexity classes cannot be falsified by empirical methods.
Proposals have appeared describing interactive experiments to verify that a given algorithm running on a physical quantum
processor exhibits speedup over a comparable classical algorithm [3]. However, significant practical obstacles arise in

4 Journals publishing empirical results about heuristic optimization include: INFORMS Journal on Computing, EURO Journal on Computational Optimization,
Journal of Global Optimization, Operations Research, Optimization, and SIAM Journal on Optimization.
5 This suggestion is based on conversations with a handful of colleagues who have similarly misunderstood reports about capabilities of quantum annealing
processors.

designing such a test for a quantum annealing platform. First, these proposals assume that efficient verification of solutions
is possible, which is true for decision problems, but not for optimization problems solved natively by quantum annealers.
Second, the distinction between abstract algorithms and instantiated solvers discussed in Section 3 holds especially
for heuristic optimization solvers: any experimental outcome is an artifact of experimental parameters (inputs, solvers, and
machines), and there is no reason to expect that results can be generalized sufficiently to address open theoretical questions
(unless those questions were restated with significantly narrowed scope). The No Free Lunch Theorem [43,73] explains this
phenomenon in the context of optimization. Third, even if suitable inputs could be identified, no data analysis technique
applied to a finite data set can guarantee to return a correct asymptotic bound on the underlying data trend [49]. To see
this, note that any set of k data points can be exactly fit to a polynomial of degree k − 1, which means that there is no re-
liable way to distinguish polynomial from exponential growth. For these reasons, empirical work on optimization heuristics
does not typically focus on addressing theoretical questions.
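The interpolation caveat is easy to demonstrate: k samples of an exponential are reproduced exactly (to machine precision) by the degree-(k − 1) polynomial through them, so the samples alone cannot certify exponential growth. A small sketch:

```python
import numpy as np

n = np.arange(8, dtype=float)   # k = 8 observed "input sizes"
t = 2.0 ** n                    # exactly exponential "running times"

# Solve for the degree-(k-1) polynomial through all k points.
V = np.vander(n, len(n))
coeffs = np.linalg.solve(V, t)

# The polynomial reproduces the exponential data essentially exactly.
residual = float(np.max(np.abs(V @ coeffs - t)))
```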
Quantum speedup in physics research. Rönnow et al. [62] (see also [37,32]) describe a version of quantum speedup inspired
by Papageorgiou and Traub, but modified to be amenable to empirical testing of D-Wave quantum annealing systems versus
classical solvers that are optimized for best performance. This approach uses generated inputs with planted optimal solutions
(i.e. known by construction) so that verification of optimality is efficient. The Time To Solution (TTS) metric measures time
to find optimal solutions; analysis considers the scaling of TTS curves over a finite range of input sizes: this is done by fitting
exponential models and comparing the resulting exponents. Solvers are typically evaluated according to TTS scaling in the
median case over all inputs tested.
The TTS curve for each classical solver is a lower envelope curve corresponding to optimally-tuned parameters (found by
exhaustive search) within a given family of binary solvers (see Fig. 7). Note that reported solver runtimes in this work are
not the full measured runtimes commonly seen in empirical computer science: overhead costs of optimizing parameters and
of solver initialization are typically omitted, and sometimes runtimes are scaled (divided by n) to reflect estimated times on
hypothetical hardware. Reported times can be lower than true solver runtimes by two or three orders of magnitude.
To date, an observation of limited quantum speedup under this definition has been reported [5], but speedups against a
wider set of solvers have not yet been found. Indeed such a demonstration may not be possible, since it requires an input
set that is hard for a variety of classical solvers running under best-case (i.e. optimally-tuned) conditions, which is contrary
to the No Free Lunch Theorem [43,73].
Quantum performance in optimization research. Quantum annealing algorithms are natural optimizers and can be evaluated according to tradeoffs in all three performance dimensions described in Section 3: speed, quality of the solutions returned, and breadth of the class of inputs for which good performance is observed.
In this context, observations that D-Wave quantum annealers can find good solutions in short time frames — say, getting
within 99% of optimal energy in runtimes below a half-second — have been reported for a variety of input types, both
generated and application-based. On the largest inputs tested, the best classical solvers converge more slowly and may take
10x to 10,000x longer to find solutions of comparable quality [39,67,34,50,45].
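One way to express "within 99% of optimal energy" is as a ratio against the best-known minimum (a sketch assuming negative Ising energies, as in minimization; the numbers are invented):

```python
def energy_quality(found, best_known):
    """Fraction of the best-known (minimum) energy attained, assuming
    negative energies as in Ising minimization: 1.0 means optimal."""
    return found / best_known

# Hypothetical: a solver reaches -1990 against a best-known -2000.
q = energy_quality(-1990.0, -2000.0)
print(q >= 0.99)  # True: within 99% of optimal energy
```

Comparisons of the kind cited above then ask how long each solver needs to reach a fixed quality threshold rather than full optimality.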
Performance in applications practice. Presenters at D-Wave QUBITs user-group meetings [60] describe scores of application
problems for which D-Wave quantum solvers have been developed and tested. Most of the presentations at these meetings
describe proof-of-concept demonstrations of viability of this approach to solving hard computational problems; as a general
rule, the quantum systems available to date have been too small to solve problems of application interest. However, a few
comparisons to industry-standard alternatives have appeared; these offer promising hints of superior application performance, measured by metrics natural to the given use case and sometimes achieved with hybrid classical/quantum solvers [24,20,54,9].

5. Conclusions

This paper introduces basic concepts of annealing-based quantum computation and the quantum annealing processors
manufactured by D-Wave, and considers some communication barriers that arise when theoreticians and empirical scientists
study these novel systems.
Section 2.1 proposes a model of computation, the QA-RAM, that can provide a common framework for discussing results
about quantum computation in the AQC and QA paradigms from both theoretical and empirical points of view. Section 3
describes a similar framework for reasoning about performance of abstract algorithms and instantiated solvers. Section 4
presents several examples of semantically overloaded terms having conflicting meanings in different areas of quantum
computation research. These terms are clarified using the concepts and language developed in earlier sections.
The goal of this paper is to facilitate and stimulate cross-disciplinary research in the annealing-based quantum computing
paradigm, by proposing a common terminology and framework for sharing results across the spectrum of abstract and
instantiated machines and algorithms.

Declaration of competing interest

The author declares the following financial interests/personal relationships which may be considered as potential competing interests: Catherine McGeoch is employed by D-Wave Systems.

References

[1] arXiv quant-ph e-print archive, arxiv.org, 1994–2019.
[2] D. Aharonov, W. van Dam, J. Kempe, Z. Landau, S. Lloyd, O. Regev, Adiabatic quantum computation is equivalent to standard quantum computation,
SIAM J. Comput. 37 (1) (2007) 166–194.
[3] D. Aharonov, U. Vazirani, Is quantum mechanics falsifiable? A computational perspective on the foundations of quantum mechanics, arXiv:1206.3686v1
[quant-ph], 2012.
[4] T. Albash, D.A. Lidar, Decoherence in adiabatic quantum computation, Phys. Rev. A 91 (2015) 062320.
[5] T. Albash, D.A. Lidar, Demonstration of a scaling advantage for a quantum annealer over classical annealing, arXiv:1705.07452v2 [quant-ph], 2017.
[6] T. Albash, D.A. Lidar, Adiabatic quantum computing, Rev. Mod. Phys. 90 (2018) 015002.
[7] T. Albash, V. Martin-Mayor, I. Hen, Temperature scaling law for quantum annealing optimizers, Phys. Rev. Lett. 119 (2017) 110502.
[8] T. Albash, T.F. Rønnow, M. Troyer, D.A. Lidar, Reexamining classical and quantum models for the D-Wave one processor, Eur. Phys. J. Spec. Top. 224
(2015) 111.
[9] H. Alghassi, R. Dridi, S. Tayur, Graver bases via quantum annealing with application to non-linear integer programs, arXiv:1902.04115, 2019.
[10] M. Amin, Searching for quantum speedup in quasistatic quantum annealers, arXiv:1503.04216 [quant-ph], 2015.
[11] F. Barahona, On the computational complexity of Ising spin glass models, J. Phys. A 15 (1982) 3241.
[12] J.D. Biamonte, P. Love, Realizable Hamiltonians for universal adiabatic quantum computers, Phys. Rev. A 78 (2008).
[13] T. Boothby, A.D. King, A. Roy, Fast clique minor generation in Chimera qubit connectivity graphs, arXiv:1507.04774, 2015.
[14] K. Boothby, P. Bunyk, J. Raymond, A. Roy, Next-Generation Topology of D-Wave Quantum Processors, D-Wave TR 14-1026A-C, 2019.
[15] M. Born, V. Fock, Beweis des Adiabatensatzes, Z. Phys. A 51 (3) (1928) 165–180.
[16] R. Bryant, D.R. O’Hallaron, Computer Systems: A Programmer’s Perspective, 3rd edition, Pearson, 2015.
[17] P.I. Bunyk, et al., Architectural considerations in the design of a superconducting quantum annealing processor, IEEE Trans. Appl. Supercond. (2014).
[18] J. Cai, W.G. Macready, A. Roy, A practical heuristic for finding graph minors, arXiv:1406.2741v1, 2014.
[19] T.H. Cormen, C.E. Leiserson, R.L. Rivest, C. Stein, Introduction to Algorithms, third edition, MIT Press, 2009.
[20] D. Willsch, M. Willsch, H. De Raedt, K. Michielsen, Support vector machines on the D-Wave quantum annealer, arXiv:1906.06283, 2019.
[21] D-Wave Systems, D-Wave system documentation, http://docs.dwavesys.com/dox/latest/index.html, 2019.
[22] Vasil S. Denchev, et al., What is the computational value of finite range tunneling?, Phys. Rev. X 6 (2016) 031015.
[23] D. Deutsch, Quantum theory, the Church-Turing principle and the universal quantum computer, Proc. R. Soc. Lond. Ser. A 400 (1985) 97–117.
[24] Y. Ding, X. Chen, L. Lamanta, E. Solano, M. Sanz, Logistic network design with a D-Wave quantum annealer, arXiv:1906.10074, 2019.
[25] E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, D. Preda, A quantum adiabatic evolution algorithm applied to random instances of an
NP-complete problem, Science 292 (5516) (2001) 472–475.
[26] M.X. Goemans, D.P. Williamson, Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming,
J. ACM 42 (6) (November 1995) 1115–1145.
[27] M. Gu, A. Perales, Encoding universal computation in the ground states of Ising lattices, Phys. Rev. E 86 (2012) 011116.
[28] R. Harris, U. Sato, A.J. Berkley, M. Reis, F. Altomare, M.H. Amin, K. Boothby, P. Bunyk, C. Deng, C. Enderud, S. Huang, E. Hoskinson, M.W. Johnson,
E. Ladzinsky, N. Ladzinsky, T. Lanting, R. Li, T. Medina, R. Molavi, R. Neufeld, T. Oh, I. Pavlov, I. Perminov, G. Poulin-Lamarre, C. Rich, A. Smirnov,
L. Swenson, N. Tsai, M. Volkman, J. Whittaker, J. Yao, Phase transitions in a programmable quantum spin glass simulator, Science 362 (6398) (13 July
2018) 163–165.
[29] I. Hen, Period finding with adiabatic quantum computation, Europhys. Lett. 105 (2014) 5.
[30] N.J. Higham, Accuracy and Stability of Numerical Algorithms, 2nd ed., SIAM, 1996.
[31] M. Heule, M. Järvisalo, M. Suda, SAT competition 2018, in: 21st International Conference on Theory and Applications of Satisfiability Testing, 2018,
http://sat2018.forsyte.tuwien.ac.at.
[32] J. Job, D.A. Lidar, Test-driving 1000 qubits, arXiv:1706.07124, 2017.
[33] D.S. Johnson, L.A. McGeoch, 8th DIMACS implementation challenge: the Traveling Salesman Problem (results page), http://archive.dimacs.rutgers.edu,
2008.
[34] M. Juenger, E. Lobe, P. Mutzel, G. Reinelt, F. Rendl, G. Rinaldi, T. Stollenwerk, Performance of a quantum annealer for Ising ground state computations
on Chimera graphs, arXiv:1904.11965, 2019.
[35] R. Blume-Kohout, K.C. Young, D. Lidar, Adiabatic quantum optimization with the wrong Hamiltonian, Phys. Rev. A 88 (2013) 6.
[36] T. Kadowaki, H. Nishimori, Quantum annealing in the transverse Ising model, Phys. Rev. E 58 (5) (1998) 5355.
[37] H.G. Katzgraber, F. Hamze, R.S. Andrist, Glassy Chimeras are blind to quantum speedup: designing better benchmarks for quantum annealing machines,
arXiv:1401.1548v1 [quant-ph], 2014.
[38] A.D. King, J. Carrasquilla, J. Raymond, I. Ozfidan, E. Andriyash, A. Berkeley, M. Reis, T. Lanting, R. Harris, F. Altomare, K. Boothby, P.I. Bunyk, C. Enderud,
A. Fréchette, E. Hoskinson, N. Ladizinsky, T. Oh, G. Poulin-Lamarre, C. Rich, Y. Sato, A. Yu, A. Smirnov, L.J. Swenson, M.H. Volkmann, J. Whittaker, J. Yao,
E. Ladizinsky, M.W. Johnson, J. Hilton, M.H. Amin, Observation of topological phenomena in a programmable lattice of 1,800 qubits, Nature 560 (2018)
456–460.
[39] J. King, S. Yarkoni, M.M. Nevisi, J.P. Hilton, C.C. McGeoch, Benchmarking a quantum annealing processor with the time-to-target metric, arXiv:1508.05087, 2015.
[40] J. Krupansky, What is a universal quantum computer, medium.com (31 August 2018).
[41] T. Lanting, et al., Entanglement in a quantum annealing processor, Phys. Rev. X 4 (May 2014) 021041.
[42] D.A. Lidar, Arbitrary-time error suppression for Markovian adiabatic quantum computing using stabilizer subspace codes, arXiv:1904.12028, 26 April
2019.
[43] W.G. Macready, D.H. Wolpert, What makes an optimization problem hard?, Complexity 5 (1996) 40–46.
[44] S. Matsuura, H. Nishimori, W. Vinci, D.A. Lidar, Nested quantum annealing correction at finite temperature: p-spin models, Phys. Rev. A 99 (2019) 062307.
[45] C. McGeoch, J. King, M. Mohammadi Nevisi, S. Yarkoni, J. Hilton, Optimization with Clause Problems, D-Wave TR 14-1001A-A, 2017.
[46] C.C. McGeoch, Toward an experimental method for algorithm simulation (feature article), INFORMS J. Comput. 8 (1995) 1.
[47] C.C. McGeoch, A Guide to Experimental Algorithmics, Cambridge Press, 2012.
[48] C.C. McGeoch, Adiabatic Quantum Computation and Quantum Annealing: Theory and Practice, Morgan & Claypool, 2014.
[49] C.C. McGeoch, P. Sanders, R. Fleischer, P.R. Cohen, D. Precup, Using finite experiments to study asymptotic performance, in: R. Fleischer, B. Moret,
E.M. Schmidt (Eds.), Springer LNCS, vol. 2547, 1997, pp. 41–52.
[50] C.C. McGeoch, C. Wang, Experimental evaluation of an adiabatic quantum system for combinatorial optimization, in: Proc. ACM International Conference on Computing Frontiers, May 2013.
[51] K. Michielsen, et al., Benchmarking gate-based quantum computers, Comput. Phys. Commun. 220 (2017) 44–55.
[52] J.A. Miszczak, Models of quantum computation and quantum programming languages, Bull. Pol. Acad. Sci., Tech. Sci. 59 (3) (2011) 305–324.
[53] R. Nagarajan, N. Papanikolaou, D. Williams, Simulating and compiling code for the sequential quantum random access machine, Electron. Notes Theor.
Comput. Sci. (2005).

[54] N.T.T. Nguyen, G.T. Kenyon, Image classification using quantum inference on the D-Wave 2X, in: Proceedings of the 3rd ICRC, November 2018.
[55] I. Ozfidan, et al., Demonstration of nonstoquastic Hamiltonians in coupled superconducting flux qubits, arXiv:1903.06139v3, 2019.
[56] A. Papageorgiou, J.F. Traub, Measures of quantum computing speedup, Phys. Rev. A 88 (2013) 022316.
[57] J. Preskill, Quantum computing in the NISQ era and beyond, Quantum 2 (2018) 79, https://doi.org/10.22331/q-2018-08-06-79.
[58] K. Pudenz, T. Albash, D. Lidar, Quantum annealing correction for random Ising problems, Phys. Rev. A 91 (2015) 042302.
[59] K. Pudenz, T. Albash, D.A. Lidar, Error corrected quantum annealing with hundreds of qubits, Nat. Commun. 5 (2014) 3243.
[60] D-Wave Systems, QUBITS users conference: Qubits Europe 2019, http://www.dwavesys.com/qubits-europe-2019, 2019, proceedings available (2016–2019).
[61] J. Roland, N.J. Cerf, Quantum search by local adiabatic evolution, Phys. Rev. A 65 (4) (2002) 042308.
[62] T.F. Rønnow, Z. Wang, J. Job, S. Boixo, S.V. Isakov, D. Wecker, J.M. Martinis, D.A. Lidar, M. Troyer, Defining and detecting quantum speedup, Science
345 (6195) (2014) 420–424, http://www.sciencemag.org/content/345/6195/420.abstract.
[63] M.S. Sarandy, D. Lidar, Adiabatic quantum computation in open systems, Phys. Rev. Lett. 95 (2005) 250503.
[64] P. Shor, Algorithms for quantum computation: discrete logarithms and factoring, in: 35th FOCS, 1994.
[65] M. Sipser, Introduction to the Theory of Computation, second edition, Thomson, 2006.
[66] D-Wave Systems, Take the Leap | D-Wave Systems, http://www.dwavesys.com/take-leap, 2019.
[67] I. Trummer, C. Koch, Multiple query optimization on the D-Wave 2X adiabatic quantum computer, VLDB, 2016.
[68] S.E. Venegas-Andraca, W. Cruz-Santos, C. McGeoch, A cross-disciplinary introduction to quantum annealing-based algorithms, Contemp. Phys. 59 (2)
(2018) 174–196.
[69] W. Vinci, D.A. Lidar, Non-stoquastic Hamiltonians in quantum annealing via geometric phases, npj Quantum Inf. 3 (2017) 38.
[70] J.S. Vitter, Algorithms and Data Structures for External Memory, Now Publishers, 2008.
[71] Z. Wei, M. Ying, A modified quantum adiabatic evolution for the Deutsch-Josza problem, Phys. Lett. A 354 (2006) 271–273.
[72] Wikipedia contributors, Pentium FDIV bug. Wikipedia, the Free Encyclopedia (13 January 2019).
[73] D.H. Wolpert, W.G. Macready, No free lunch theorems for optimization, IEEE Trans. Evol. Comput. 1 (1997) 67–82.
