Quantum Machine Learning
doi:10.1038/nature23474
Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful
tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not
to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on
machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software
that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum
algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges
are still considerable.
Long before we possessed computers, human beings strove to find patterns in data. Ptolemy fitted observations of the motions of the stars to a geocentric model of the cosmos, with complex epicycles to explain the retrograde motions of the planets. In the sixteenth century, Kepler analysed the data of Copernicus and Brahe to reveal a previously hidden pattern: planets move in ellipses with the Sun at one focus of the ellipse. The analysis of astronomical data to reveal such patterns gave rise to mathematical techniques such as methods for solving linear equations (Newton–Gauss), learning optima via gradient descent (Newton), polynomial interpolation (Lagrange), and least-squares fitting (Laplace). The nineteenth and early twentieth centuries gave rise to a broad range of mathematical methods for analysing data to reveal the patterns that it contained.

The construction of digital computers in the mid-twentieth century allowed the automation of data analysis techniques. Over the past half-century, the rapid progression of computer power has allowed the implementation of linear algebraic data analysis techniques such as regression and principal component analysis, and has led to more complex learning methods such as support vector machines. Over the same time frame, the development and rapid advance of digital computers spawned novel machine learning methods. Artificial neural networks such as perceptrons were implemented in the 1950s (ref. 1), as soon as computers had the power to realize them. Deep learning built on neural networks (such as Hopfield networks and Boltzmann machines) and training methods (such as back propagation) were introduced and implemented in the 1960s to 1990s (ref. 2). In the past decade, particularly in the past five years, the combination of powerful computers and special-purpose information processors capable of implementing deep networks with billions of weights3, together with their application to very large datasets, has revealed that such deep learning networks are capable of identifying complex and subtle patterns in data.

Quantum mechanics is well known to produce atypical patterns in data. Classical machine learning methods such as deep neural networks frequently have the feature that they can both recognize statistical patterns in data and produce data that possess the same statistical patterns: they recognize the patterns that they produce. This observation suggests the following hope. If small quantum information processors can produce statistical patterns that are computationally difficult for a classical computer to produce, then perhaps they can also recognize patterns that are equally difficult to recognize classically.

The realization of this hope depends on whether efficient quantum algorithms can be found for machine learning. A quantum algorithm is a set of instructions solving a problem, such as determining whether two graphs are isomorphic, that can be performed on a quantum computer. Quantum machine learning software makes use of quantum algorithms as part of a larger implementation. By analysing the steps that quantum algorithms prescribe, it becomes clear that they have the potential to outperform classical algorithms for specific problems (that is, reduce the number of steps required). This potential is known as quantum speedup.

The notion of a quantum speedup depends on whether one takes a formal computer science perspective—which demands mathematical proofs—or a perspective based on what can be done with realistic, finite-size devices—which requires solid statistical evidence of a scaling advantage over some finite range of problem sizes. For the case of quantum machine learning, the best possible performance of classical algorithms is not always known. This is similar to the case of Shor's polynomial-time quantum algorithm for integer factorization: no sub-exponential-time classical algorithm has been found, but the possibility is not provably ruled out.

Determination of a scaling advantage contrasting quantum and classical machine learning would rely on the existence of a quantum computer and is called a 'benchmarking' problem. Such advantages could include improved classification accuracy and sampling of classically inaccessible systems. Accordingly, quantum speedups in machine learning are currently characterized using idealized measures from complexity theory: query complexity and gate complexity (see Box 1 and Box 1 Table). Query complexity measures the number of queries to the information source for the classical or quantum algorithm. A quantum speedup results if the number of queries needed to solve a problem is lower for the quantum algorithm than for the classical algorithm. To determine the gate complexity, the number of elementary quantum operations (or gates) required to obtain the desired result is counted.
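To make the query-counting idea concrete, the short sketch below (an illustration added for this version, not taken from the original article) counts black-box queries for unstructured search: an exhaustive classical search needs up to N oracle calls, whereas Grover's algorithm needs roughly (π/4)√N, the O(√N) scaling listed for several methods in the Box 1 Table. The oracle, the value of N and the function names are assumptions made for the example.

```python
import math
import random

def classical_search_queries(oracle, n_items):
    """Count oracle queries used by an exhaustive classical search."""
    queries = 0
    for i in range(n_items):
        queries += 1
        if oracle(i):
            break
    return queries

def grover_query_estimate(n_items):
    """Queries needed by Grover's algorithm, approximately (pi/4) * sqrt(N)."""
    return math.ceil(math.pi / 4 * math.sqrt(n_items))

N = 1_000_000
marked = random.randrange(N)      # index of the single 'marked' item
oracle = lambda i: i == marked    # black-box membership test

print("classical queries (worst case ~N):", classical_search_queries(oracle, N))
print("Grover queries (~sqrt(N)):", grover_query_estimate(N))
```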
Query and gate complexity are idealized models that quantify the necessary resources to solve a problem class. Without knowing how to map this idealization to reality, not much can be said about the necessary resource scaling in a real-world scenario. Therefore, the required resources of classical machine learning algorithms are mostly quantified by numerical experimentation. The resource requirements of quantum machine learning algorithms are likely to be similarly difficult to quantify in practice. The analysis of their practical feasibility is a central subject of this review.

As will be seen throughout the review, there are quantum algorithms for machine learning that exhibit quantum speedups4–7.
For example, the quantum basic linear algebra subroutines (BLAS)—Fourier transforms, finding eigenvectors and eigenvalues, solving linear equations—exhibit exponential quantum speedups over their best known classical counterparts8–10. This quantum BLAS (qBLAS) translates into quantum speedups for a variety of data analysis and machine learning algorithms including linear algebra, least-squares fitting, gradient descent, Newton's method, principal component analysis, linear, semidefinite and quadratic programming, topological analysis and support vector machines9,11–19. At the same time, special-purpose quantum information processors such as quantum annealers and programmable quantum optic arrays are well matched to deep learning architectures20–22.

Box 1 Table | Speedup techniques for given quantum machine learning subroutines

Method                                Speedup    Amplitude amplification  HHL          Adiabatic  qRAM
Bayesian inference106,107             O(√N)      Yes                      Yes          No         No
Online perceptron108                  O(√N)      Yes                      No           No         Optional
Least-squares fitting9                O(logN)*   Yes                      Yes          No         Yes
Classical Boltzmann machine20         O(√N)      Yes/No                   Optional/No  No/Yes     Optional
Quantum Boltzmann machine22,61        O(logN)*   Optional/No              No           No/Yes     No
Quantum PCA11                         O(logN)*   No                       Yes          No         Optional
Quantum support vector machine13      O(logN)*   No                       Yes          No         Yes
Quantum reinforcement learning30      O(√N)      Yes                      No           No         No

*There exist important caveats that can limit the applicability of the method51.
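Several of the rows above accelerate standard linear-algebra tasks, so a purely classical baseline is useful to keep in mind. The sketch below (added here as an illustration; it is not part of the original article) solves the ordinary least-squares problem that the quantum least-squares fitting routine9 targets; the synthetic design matrix and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Classical baseline for least-squares fitting: find x minimizing ||A x - b||.
A = rng.normal(size=(100, 4))             # design matrix: 100 samples, 4 features
x_true = np.array([1.0, -2.0, 0.5, 3.0])  # parameters used to generate the data
b = A @ x_true + 0.01 * rng.normal(size=100)

x_fit, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print("fitted parameters:", x_fit)
```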
Linear-algebra-based quantum machine learning
A wide variety of data analysis and machine learning protocols operate by performing matrix operations on vectors in a high-dimensional vector space. But quantum mechanics is all about matrix operations on vectors in high-dimensional vector spaces.

The key ingredient behind these methods is that the quantum state of n quantum bits or qubits is a vector in a 2^n-dimensional complex vector space; performing a quantum logic operation or a measurement on qubits multiplies the corresponding state vector by 2^n × 2^n matrices. By building up such matrix transformations, quantum computers have been shown to perform common linear algebraic operations such as Fourier transforms38, finding eigenvectors and eigenvalues39, and solving linear sets of equations over 2^n-dimensional vector spaces in time that is polynomial in n, exponentially faster than their best known classical counterparts8. The latter is commonly referred to as the Harrow, Hassidim and Lloyd (HHL) algorithm8 (see Box 2). The original variant assumed a well-conditioned matrix that is sparse. Sparsity is unlikely in data science, but later improvements relaxed this assumption to include low-rank matrices as well10,33,40. Going past HHL, here we survey several quantum algorithms that appear as subroutines when linear algebra techniques are employed in quantum machine learning software.
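The exponential size of the state space is easy to see in a brute-force classical simulation. The snippet below (an added illustration under the stated assumptions, not code from the original article) stores an n-qubit state as a 2^n-dimensional vector and applies a single-qubit gate by lifting it to a 2^n × 2^n matrix with Kronecker products; the memory and time costs grow exponentially in n, which is precisely the scaling a quantum device avoids.

```python
import numpy as np

n = 3                                 # number of qubits
dim = 2 ** n                          # the state lives in a 2^n-dimensional complex vector space

state = np.zeros(dim, dtype=complex)  # |00...0>
state[0] = 1.0

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # single-qubit Hadamard
I = np.eye(2, dtype=complex)

# Lift the Hadamard on the first qubit to a 2^n x 2^n matrix: H (x) I (x) ... (x) I
U = H
for _ in range(n - 1):
    U = np.kron(U, I)

state = U @ state                     # one quantum logic operation = one matrix-vector product
probabilities = np.abs(state) ** 2    # a measurement samples outcomes with these probabilities
print(probabilities)                  # weight 0.5 on |000> and |100>
```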
Quantum principal component analysis
For example, consider principal component analysis (PCA). Suppose that the data are presented in the form of vectors v_j in a d-dimensional vector space, where d = 2^n = N. For example, v_j could be the vector of changes in prices of all stocks in the stock market from time t_j to time t_{j+1}. The covariance matrix of the data is C = Σ_j v_j v_j^T, where superscript T denotes the transpose operation: the covariance matrix summarizes the correlations between the different components of the data, for example, correlations between changes in the prices of different stocks. In its simplest form, principal component analysis operates by diagonalizing the covariance matrix: C = Σ_k e_k c_k c_k^†, where the c_k are the eigenvectors of C and the e_k the corresponding eigenvalues.
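For reference, the classical computation that quantum PCA is designed to speed up looks as follows (an added sketch with synthetic data and assumed variable names, not taken from the original article): build the covariance matrix from the data vectors and diagonalize it.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8     # dimension of each data vector (d = 2^n in the quantum encoding)
m = 500   # number of data vectors v_j, e.g. daily price-change vectors

# Synthetic price changes dominated by one common 'market' direction
market = rng.normal(size=d)
V = rng.normal(size=(m, d)) + rng.normal(size=(m, 1)) * market

# (Unnormalized) covariance matrix C = sum_j v_j v_j^T
C = V.T @ V

# Diagonalize C: the eigenvectors c_k are the principal components, e_k their weights
e, c = np.linalg.eigh(C)
top = np.argmax(e)
print("largest eigenvalue:", e[top])
print("first principal component:", c[:, top])
```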
face the substantial technical challenge of loading coherent input states. Nonetheless, because such applications do not require qRAM and offer the potential for exponential speedups for device characterization22,61,65,66, they remain among the promising possibilities for near-term application of quantum machine learning.

Designing and controlling quantum systems
A major challenge in the development of quantum computation and information science involves tuning quantum gates to match the exacting requirements needed for quantum error correction. Heuristic search methods can help to achieve this in a supervised learning scenario68,69 (for instance in the case of nearest-neighbour-coupled superconducting artificial atoms69 with gate fidelity above 99.9% in the presence of noise) and thus to reach an accepted threshold for fault-tolerant quantum computing. A similar methodology has been successful in constructing a single-shot Toffoli gate, again reaching gate fidelity above 99.9%70. Genetic algorithms have been employed to reduce digital and experimental errors in quantum gates71. They have been used to simulate controlled-NOT gates by means of ancillary qubits and imperfect gates. Besides outperforming protocols for digital quantum simulations, genetic algorithms have also been shown to be useful for suppressing experimental errors in gates72. Another approach used stochastic gradient descent and two-body interactions to embed a Toffoli gate into a sequence of quantum operations or gates without time-dependent control, using the natural dynamics of a quantum network73. Dynamical decoupling sequences, which can be designed using recurrent neural networks74, help to protect quantum states from decoherence.
Controlling a quantum system is just as important and complex. Learning methods have also been very successful in developing control sequences to optimize adaptive quantum metrology, which is a key quantum building block in many quantum technologies. Genetic algorithms have been proposed for the control of quantum molecules to overcome the problem caused by changing environmental parameters during an experiment75. Reinforcement learning algorithms using heuristic global optimization, like the algorithm used for designing circuits, have been widely successful, particularly in the presence of noise and decoherence, scaling well with the system size76–78. One can also exploit reinforcement learning in gate-based quantum systems. For instance, adaptive controllers based on intelligent agents for quantum information demonstrate adaptive calibration and compensation strategies for an external stray field of unknown magnitude in a fixed direction.
Classical machine learning is also a powerful tool with which to extract theoretical insights about quantum states. Neural networks have recently been deployed to study two central problems in condensed matter, namely phase-of-matter detection79,80 and ground-state search81. These succeeded in achieving better performance than established numerical tools. Theoretical physicists are now studying these models to understand analytically their descriptive power compared to traditional methods such as tensor networks. Interesting applications to exotic states of matter are already on the market, and have been shown to capture highly non-trivial features from disordered or topologically ordered systems.
Perspectives on future work
As we have discussed in this review, small quantum computers and larger special-purpose quantum simulators, annealers and so on seem to have potential use in machine learning and data analysis15,21,22,36,48,82–95. However, the execution of quantum algorithms requires quantum hardware that is not yet available.

On the hardware side, there have been great strides in several enabling technologies. Small-scale quantum computers with 50–100 qubits will be made widely available via quantum cloud computing (the 'Qloud'). Special-purpose quantum information processors such as quantum simulators, quantum annealers, integrated photonic chips, nitrogen-vacancy (NV) diamond arrays, qRAM, and made-to-order superconducting circuits will continue to advance in size and complexity. Quantum machine learning offers a suite of potential applications for small quantum computers23–31,96–98, complemented and enhanced by special-purpose quantum information processors21,22, digital quantum processors70,73,78,99,100 and sensors76,77,101.
In particular, quantum annealers with around 2,000 qubits have been built and operated, using integrated superconducting circuits that are, in principle, scalable. The biggest challenges for quantum annealers to implement quantum machine learning algorithms include improving connectivity and implementing more general tunable couplings between qubits. Programmable quantum optic arrays with around 100 tunable interferometers have been constructed using integrated photonics in silicon, but loss of quantum effects increases as such circuits are scaled up. A particularly important challenge for quantum machine learning is the construction of interface devices such as qRAM that allow classical information to be encoded in quantum mechanical form52. A qRAM to access N pieces of data consists of a branching array of 2N quantum switches, which must operate coherently during a memory call. In principle, such a qRAM takes time O(logN) to perform a memory call, and can tolerate error rates of up to O(1/logN) per switching operation, where logN is the depth of the qRAM circuit. Proof-of-principle demonstrations of qRAM have been performed, but constructing large arrays of quantum switches is a difficult technological problem.
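The logarithmic depth of a qRAM call can be illustrated with a purely classical sketch of tree addressing (added here for illustration only; it is not a description of an actual qRAM implementation, and the routing model is an assumption): each address bit sets one switch on the path from the root to a memory cell, so a single call touches only log2(N) of the roughly 2N switches.

```python
import math

def routing_path(address, n_address_bits):
    """Return the left/right switch settings visited by one memory call."""
    return [(address >> (n_address_bits - 1 - level)) & 1
            for level in range(n_address_bits)]

N = 1024                          # number of memory cells
n_bits = int(math.log2(N))        # depth of the addressing tree
path = routing_path(819, n_bits)  # route to cell 819

print("switches set per call (depth):", len(path))           # log2(N) = 10
print("tolerable error per switch, ~O(1/logN):", 1 / n_bits)
```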
These hardware challenges are technical in nature, and clear paths exist towards overcoming them. They must be overcome, however, if quantum machine learning is to become a 'killer app' for quantum computers. As noted previously, most of the quantum algorithms that have been identified face a number of caveats that limit their applicability. We can distill the caveats mentioned above into four fundamental problems.

(1) The input problem. Although quantum algorithms can provide dramatic speedups for processing data, they seldom provide advantages in reading data. This means that the cost of reading in the input can in some cases dominate the cost of quantum algorithms. Understanding this factor is an ongoing challenge.
(2) The output problem. Obtaining the full solution from some quantum algorithms as a string of bits requires learning an exponential number of bits. This makes some applications of quantum machine learning algorithms infeasible. This problem can potentially be sidestepped by learning only summary statistics for the solution state.
(3) The costing problem. Closely related to the input/output problems, at present very little is known about the true number of gates required by quantum machine learning algorithms. Bounds on the complexity suggest that for sufficiently large problems they will offer huge advantages, but it is still unclear when that crossover point occurs.
(4) The benchmarking problem. It is often difficult to assert that a quantum algorithm is ever better than all known classical machine learning algorithms in practice, because this would require extensive benchmarking against modern heuristic methods. Establishing lower bounds for quantum machine learning would partially address this issue.

To avoid some of these problems, we could apply quantum computing to quantum, rather than classical, data. One aim therein is to use quantum machine learning to characterize and control quantum computers66. This would enable a virtuous cycle of innovation similar to that which occurred in classical computing, wherein each generation of processors is then leveraged to design the next-generation processors. We have already begun to see the first fruits of this cycle, with classical machine learning being used to improve quantum processor designs23–31,102–104, which in turn provide powerful computational resources for quantum-enhanced machine learning applications themselves8,9,11,13,33–36.

Received 20 February; accepted 4 July 2017.

1. Rosenblatt, F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol. Rev. 65, 386 (1958).
2. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
3. Le, Q. V. Building high-level features using large scale unsupervised learning. In IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP) 8595–8598 (IEEE, 2013).
4. Schuld, M., Sinayskiy, I. & Petruccione, F. An introduction to quantum machine learning. Contemp. Phys. 56, 172–185 (2015).
5. Wittek, P. Quantum Machine Learning: What Quantum Computing Means to Data Mining (Academic Press, New York, NY, USA, 2014).
6. Adcock, J. et al. Advances in quantum machine learning. Preprint at https://arxiv.org/abs/1512.02900 (2015).
7. Arunachalam, S. & de Wolf, R. A survey of quantum learning theory. Preprint at https://arxiv.org/abs/1701.06806 (2017).
8. Harrow, A. W., Hassidim, A. & Lloyd, S. Quantum algorithm for linear systems of equations. Phys. Rev. Lett. 103, 150502 (2009).
9. Wiebe, N., Braun, D. & Lloyd, S. Quantum algorithm for data fitting. Phys. Rev. Lett. 109, 050505 (2012).
10. Childs, A. M., Kothari, R. & Somma, R. D. Quantum linear systems algorithm with exponentially improved dependence on precision. Preprint at https://arxiv.org/abs/1511.02306 (2015).
11. Lloyd, S., Mohseni, M. & Rebentrost, P. Quantum principal component analysis. Nat. Phys. 10, 631–633 (2014).
12. Kimmel, S., Lin, C. Y.-Y., Low, G. H., Ozols, M. & Yoder, T. J. Hamiltonian simulation with optimal sample complexity. Preprint at https://arxiv.org/abs/1608.00281 (2016).
13. Rebentrost, P., Mohseni, M. & Lloyd, S. Quantum support vector machine for big data classification. Phys. Rev. Lett. 113, 130503 (2014).
   This study applies quantum matrix inversion in a supervised discriminative learning algorithm.
14. Lloyd, S., Garnerone, S. & Zanardi, P. Quantum algorithms for topological and geometric analysis of data. Nat. Commun. 7, 10138 (2016).
15. Dridi, R. & Alghassi, H. Homology computation of large point clouds using quantum annealing. Preprint at https://arxiv.org/abs/1512.09328 (2015).
16. Rebentrost, P., Steffens, A. & Lloyd, S. Quantum singular value decomposition of non-sparse low-rank matrices. Preprint at https://arxiv.org/abs/1607.05404 (2016).
17. Schuld, M., Sinayskiy, I. & Petruccione, F. Prediction by linear regression on a quantum computer. Phys. Rev. A 94, 022342 (2016).
18. Brandao, F. G. & Svore, K. Quantum speed-ups for semidefinite programming. Preprint at https://arxiv.org/abs/1609.05537 (2016).
19. Rebentrost, P., Schuld, M., Petruccione, F. & Lloyd, S. Quantum gradient descent and Newton's method for constrained polynomial optimization. Preprint at https://arxiv.org/abs/1612.01789 (2016).
20. Wiebe, N., Kapoor, A. & Svore, K. M. Quantum deep learning. Preprint at https://arxiv.org/abs/1412.3489 (2014).
21. Adachi, S. H. & Henderson, M. P. Application of quantum annealing to training of deep neural networks. Preprint at https://arxiv.org/abs/arXiv:1510.06356 (2015).
22. Amin, M. H., Andriyash, E., Rolfe, J., Kulchytskyy, B. & Melko, R. Quantum Boltzmann machine. Preprint at https://arxiv.org/abs/arXiv:1601.02036 (2016).
23. Sasaki, M., Carlini, A. & Jozsa, R. Quantum template matching. Phys. Rev. A 64, 022317 (2001).
24. Bisio, A., Chiribella, G., D'Ariano, G. M., Facchini, S. & Perinotti, P. Optimal quantum learning of a unitary transformation. Phys. Rev. A 81, 032324 (2010).
25. Bisio, A., D'Ariano, G. M., Perinotti, P. & Sedlák, M. Quantum learning algorithms for quantum measurements. Phys. Lett. A 375, 3425–3434 (2011).
26. Sentís, G., Calsamiglia, J., Muñoz-Tapia, R. & Bagan, E. Quantum learning without quantum memory. Sci. Rep. 2, 708 (2012).
27. Sentís, G., Guţă, M. & Adesso, G. Quantum learning of coherent states. EPJ Quant. Technol. 2, 17 (2015).
28. Paparo, G. D., Dunjko, V., Makmal, A., Martin-Delgado, M. A. & Briegel, H. J. Quantum speedup for active learning agents. Phys. Rev. X 4, 031002 (2014).
29. Dunjko, V., Friis, N. & Briegel, H. J. Quantum-enhanced deliberation of learning agents using trapped ions. New J. Phys. 17, 023006 (2015).
30. Dunjko, V., Taylor, J. M. & Briegel, H. J. Quantum-enhanced machine learning. Phys. Rev. Lett. 117, 130501 (2016).
   This paper investigates the theoretical maximum speedup achievable in reinforcement learning in a closed quantum system, which proves to be Grover-like if we wish to obtain classical verification of the learning process.
31. Sentís, G., Bagan, E., Calsamiglia, J., Chiribella, G. & Muñoz Tapia, R. Quantum change point. Phys. Rev. Lett. 117, 150502 (2016).
32. Faccin, M., Migdał, P., Johnson, T. H., Bergholm, V. & Biamonte, J. D. Community detection in quantum complex networks. Phys. Rev. X 4, 041012 (2014).
   This paper defines closeness measures and then maximizes modularity with hierarchical clustering to partition quantum data.
33. Clader, B. D., Jacobs, B. C. & Sprouse, C. R. Preconditioned quantum linear system algorithm. Phys. Rev. Lett. 110, 250504 (2013).
34. Lloyd, S., Mohseni, M. & Rebentrost, P. Quantum algorithms for supervised and unsupervised machine learning. Preprint at https://arxiv.org/abs/1307.0411 (2013).
35. Wiebe, N., Kapoor, A. & Svore, K. M. Quantum algorithms for nearest-neighbor methods for supervised and unsupervised learning. Quantum Inf. Comput. 15, 316–356 (2015).
36. Lau, H.-K., Pooser, R., Siopsis, G. & Weedbrook, C. Quantum machine learning over infinite dimensions. Phys. Rev. Lett. 118, 080501 (2017).
37. Aïmeur, E., Brassard, G. & Gambs, S. in Machine Learning in a Quantum World 431–442 (Springer, 2006).
38. Shor, P. W. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput. 26, 1484–1509 (1997).
39. Nielsen, M. A. & Chuang, I. L. Quantum Computation and Quantum Information (Cambridge Univ. Press, 2000).
40. Wossnig, L., Zhao, Z. & Prakash, A. A quantum linear system algorithm for dense matrices. Preprint at https://arxiv.org/abs/1704.06174 (2017).
41. Giovannetti, V., Lloyd, S. & Maccone, L. Quantum random access memory. Phys. Rev. Lett. 100, 160501 (2008).
42. Lloyd, S. Universal quantum simulators. Science 273, 1073–1078 (1996).
43. Vapnik, V. The Nature of Statistical Learning Theory (Springer, 1995).
44. Anguita, D., Ridella, S., Rivieccio, F. & Zunino, R. Quantum optimization for training support vector machines. Neural Netw. 16, 763–770 (2003).
45. Dürr, C. & Høyer, P. A quantum algorithm for finding the minimum. Preprint at https://arxiv.org/abs/quant-ph/9607014 (1996).
46. Chatterjee, R. & Yu, T. Generalized coherent states, reproducing kernels, and quantum support vector machines. Preprint at https://arxiv.org/abs/1612.03713 (2016).
47. Zhao, Z., Fitzsimons, J. K. & Fitzsimons, J. F. Quantum assisted Gaussian process regression. Preprint at https://arxiv.org/abs/1512.03929 (2015).
48. Li, Z., Liu, X., Xu, N. & Du, J. Experimental realization of a quantum support vector machine. Phys. Rev. Lett. 114, 140504 (2015).
49. Whitfield, J. D., Faccin, M. & Biamonte, J. D. Ground-state spin logic. Europhys. Lett. 99, 57004 (2012).
50. Farhi, E., Goldstone, J. & Gutmann, S. A quantum approximate optimization algorithm. Preprint at https://arxiv.org/abs/1411.4028 (2014).
51. Aaronson, S. Read the fine print. Nat. Phys. 11, 291–293 (2015).
52. Arunachalam, S., Gheorghiu, V., Jochym-O'Connor, T., Mosca, M. & Srinivasan, P. V. On the robustness of bucket brigade quantum RAM. New J. Phys. 17, 123010 (2015).
53. Scherer, A. et al. Concrete resource analysis of the quantum linear system algorithm used to compute the electromagnetic scattering cross section of a 2D target. Preprint at https://arxiv.org/abs/1505.06552 (2015).
54. Denil, M. & De Freitas, N. Toward the implementation of a quantum RBM. In Neural Information Processing Systems (NIPS) Conf. on Deep Learning and Unsupervised Feature Learning Workshop Vol. 5 (2011).
55. Dumoulin, V., Goodfellow, I. J., Courville, A. & Bengio, Y. On the challenges of physical implementations of RBMs. Preprint at https://arxiv.org/abs/1312.5258 (2013).
56. Benedetti, M., Realpe-Gómez, J., Biswas, R. & Perdomo-Ortiz, A. Estimation of effective temperatures in quantum annealers for sampling applications: a case study with possible applications in deep learning. Phys. Rev. A 94, 022308 (2016).
57. Biamonte, J. D. & Love, P. J. Realizable Hamiltonians for universal adiabatic quantum computers. Phys. Rev. A 78, 012352 (2008).
   This study established the contemporary experimental target for non-stoquastic (that is, non-quantum stochastic) D-Wave quantum annealing hardware able to realize universal quantum Boltzmann machines.
58. Temme, K., Osborne, T. J., Vollbrecht, K. G., Poulin, D. & Verstraete, F. Quantum metropolis sampling. Nature 471, 87–90 (2011).
59. Yung, M.-H. & Aspuru-Guzik, A. A quantum–quantum metropolis algorithm. Proc. Natl Acad. Sci. USA 109, 754–759 (2012).
60. Chowdhury, A. N. & Somma, R. D. Quantum algorithms for Gibbs sampling and hitting-time estimation. Quant. Inf. Comput. 17, 41–64 (2017).
61. Kieferova, M. & Wiebe, N. Tomography and generative data modeling via quantum Boltzmann training. Preprint at https://arxiv.org/abs/1612.05204 (2016).
62. Lloyd, S. & Terhal, B. Adiabatic and Hamiltonian computing on a 2D lattice with simple 2-qubit interactions. New J. Phys. 18, 023042 (2016).
63. Ventura, D. & Martinez, T. Quantum associative memory. Inf. Sci. 124, 273–296 (2000).
64. Granade, C. E., Ferrie, C., Wiebe, N. & Cory, D. G. Robust online Hamiltonian learning. New J. Phys. 14, 103013 (2012).
65. Wiebe, N., Granade, C., Ferrie, C. & Cory, D. G. Hamiltonian learning and certification using quantum resources. Phys. Rev. Lett. 112, 190501 (2014).
66. Wiebe, N., Granade, C. & Cory, D. G. Quantum bootstrapping via compressed quantum Hamiltonian learning. New J. Phys. 17, 022005 (2015).
67. Marvian, I. & Lloyd, S. Universal quantum emulator. Preprint at https://arxiv.org/abs/1606.02734 (2016).
68. Dolde, F. et al. High-fidelity spin entanglement using optimal control. Nat. Commun. 5, 3371 (2014).
69. Zahedinejad, E., Ghosh, J. & Sanders, B. C. Designing high-fidelity single-shot three-qubit gates: a machine-learning approach. Phys. Rev. Appl. 6, 054005 (2016).
70. Zahedinejad, E., Ghosh, J. & Sanders, B. C. High-fidelity single-shot Toffoli gate via quantum control. Phys. Rev. Lett. 114, 200502 (2015).
71. Zeidler, D., Frey, S., Kompa, K.-L. & Motzkus, M. Evolutionary algorithms and their application to optimal control studies. Phys. Rev. A 64, 023420 (2001).
72. Las Heras, U., Alvarez-Rodriguez, U., Solano, E. & Sanz, M. Genetic algorithms for digital quantum simulations. Phys. Rev. Lett. 116, 230504 (2016).
73. Banchi, L., Pancotti, N. & Bose, S. Quantum gate learning in qubit networks: Toffoli gate without time-dependent control. npj Quant. Inf. 2, 16019 (2016).
74. August, M. & Ni, X. Using recurrent neural networks to optimize dynamical decoupling for quantum memory. Preprint at https://arxiv.org/abs/1604.00279 (2016).
75. Amstrup, B., Toth, G. J., Szabo, G., Rabitz, H. & Loerincz, A. Genetic algorithm with migration on topology conserving maps for optimal control of quantum systems. J. Phys. Chem. 99, 5206–5213 (1995).
76. Hentschel, A. & Sanders, B. C. Machine learning for precise quantum measurement. Phys. Rev. Lett. 104, 063603 (2010).
77. Lovett, N. B., Crosnier, C., Perarnau-Llobet, M. & Sanders, B. C. Differential evolution for many-particle adaptive quantum metrology. Phys. Rev. Lett. 110, 220501 (2013).
78. Palittapongarnpim, P., Wittek, P., Zahedinejad, E., Vedaie, S. & Sanders, B. C. Learning in quantum control: high-dimensional global optimization for noisy quantum dynamics. Neurocomputing https://doi.org/10.1016/j.neucom.2016.12.087 (in the press).
79. Carrasquilla, J. & Melko, R. G. Machine learning phases of matter. Nat. Phys. 13, 431–434 (2017).
80. Broecker, P., Carrasquilla, J., Melko, R. G. & Trebst, S. Machine learning quantum phases of matter beyond the fermion sign problem. Preprint at https://arxiv.org/abs/1608.07848 (2016).
81. Carleo, G. & Troyer, M. Solving the quantum many-body problem with artificial neural networks. Science 355, 602–606 (2017).
82. Brunner, D., Soriano, M. C., Mirasso, C. R. & Fischer, I. Parallel photonic information processing at gigabyte per second data rates using transient states. Nat. Commun. 4, 1364 (2013).
83. Cai, X.-D. et al. Entanglement-based machine learning on a quantum computer. Phys. Rev. Lett. 114, 110504 (2015).
84. Hermans, M., Soriano, M. C., Dambre, J., Bienstman, P. & Fischer, I. Photonic delay systems as machine learning implementations. J. Mach. Learn. Res. 16, 2081–2097 (2015).
85. Tezak, N. & Mabuchi, H. A coherent perceptron for all-optical learning. EPJ Quant. Technol. 2, 10 (2015).
86. Neigovzen, R., Neves, J. L., Sollacher, R. & Glaser, S. J. Quantum pattern recognition with liquid-state nuclear magnetic resonance. Phys. Rev. A 79, 042321 (2009).
87. Pons, M. et al. Trapped ion chain as a neural network: error resistant quantum computation. Phys. Rev. Lett. 98, 023003 (2007).
88. Neven, H. et al. Binary classification using hardware implementation of quantum annealing. In 24th Ann. Conf. on Neural Information Processing Systems (NIPS-09) 1–17 (2009).
   This paper was among the first experimental demonstrations of machine learning using quantum annealing.
89. Denchev, V. S., Ding, N., Vishwanathan, S. & Neven, H. Robust classification with adiabatic quantum optimization. In Proc. 29th Int. Conf. on Machine Learning (ICML-2012) (2012).
90. Karimi, K. et al. Investigating the performance of an adiabatic quantum optimization processor. Quantum Inform. Process. 11, 77–88 (2012).
91. O'Gorman, B. A. et al. Bayesian network structure learning using quantum annealing. EPJ Spec. Top. 224, 163–188 (2015).
92. Denchev, V. S., Ding, N., Matsushima, S., Vishwanathan, S. V. N. & Neven, H. Totally corrective boosting with cardinality penalization. Preprint at https://arxiv.org/abs/1504.01446 (2015).
93. Kerenidis, I. & Prakash, A. Quantum recommendation systems. Preprint at https://arxiv.org/abs/1603.08675 (2016).
94. Alvarez-Rodriguez, U., Lamata, L., Escandell-Montero, P., Martín-Guerrero, J. D. & Solano, E. Quantum machine learning without measurements. Preprint at https://arxiv.org/abs/1612.05535 (2016).
95. Wittek, P. & Gogolin, C. Quantum enhanced inference in Markov logic networks. Sci. Rep. 7, 45672 (2017).
96. Lamata, L. Basic protocols in quantum reinforcement learning with superconducting circuits. Sci. Rep. 7, 1609 (2017).
97. Schuld, M., Fingerhuth, M. & Petruccione, F. Quantum machine learning with small-scale devices: implementing a distance-based classifier with a quantum interference circuit. Preprint at https://arxiv.org/abs/1703.10793 (2017).
98. Monràs, A., Sentís, G. & Wittek, P. Inductive supervised quantum learning. Phys. Rev. Lett. 118, 190503 (2017).
   This paper proves that supervised learning protocols split into a training and application phase in both the classical and the quantum cases.
99. Tiersch, M., Ganahl, E. J. & Briegel, H. J. Adaptive quantum computation in changing environments using projective simulation. Sci. Rep. 5, 12874 (2015).
100. Zahedinejad, E., Ghosh, J. & Sanders, B. C. Designing high-fidelity single-shot three-qubit gates: a machine learning approach. Preprint at https://arxiv.org/abs/1511.08862 (2015).
101. Palittapongarnpim, P., Wittek, P. & Sanders, B. C. Controlling adaptive quantum phase estimation with scalable reinforcement learning. In Proc. 24th Eur. Symp. Artificial Neural Networks (ESANN-16) on Computational Intelligence and Machine Learning 327–332 (2016).
102. Wan, K. H., Dahlsten, O., Kristjánsson, H., Gardner, R. & Kim, M. S. Quantum generalisation of feedforward neural networks. Preprint at https://arxiv.org/abs/1612.01045 (2016).
103. Lu, D. et al. Towards quantum supremacy: enhancing quantum control by bootstrapping a quantum processor. Preprint at https://arxiv.org/abs/1701.01198 (2017).
104. Mavadia, S., Frey, V., Sastrawan, J., Dona, S. & Biercuk, M. J. Prediction and real-time compensation of qubit decoherence via machine learning. Nat. Commun. 8, 14106 (2017).
105. Rønnow, T. F. et al. Defining and detecting quantum speedup. Science 345, 420–424 (2014).
106. Low, G. H., Yoder, T. J. & Chuang, I. L. Quantum inference on Bayesian networks. Phys. Rev. A 89, 062315 (2014).
107. Wiebe, N. & Granade, C. Can small quantum systems learn? Preprint at https://arxiv.org/abs/1512.03145 (2015).
108. Wiebe, N., Kapoor, A. & Svore, K. M. Quantum perceptron models. Adv. Neural Inform. Process. Syst. 29, 3999–4007 (2016).
109. Scherer, A. et al. Concrete resource analysis of the quantum linear-system algorithm used to compute the electromagnetic scattering cross section of a 2D target. Quantum Inform. Process. 16, 60 (2017).

Acknowledgements J.B. acknowledges financial support from AFOSR grant FA9550-16-1-0300, Models and Protocols for Quantum Distributed Computation. P.W. acknowledges financial support from the ERC (Consolidator Grant QITBOX), Spanish Ministry of Economy and Competitiveness (Severo Ochoa Programme for Centres of Excellence in R&D SEV-2015-0522 and QIBEQI FIS2016-80773-P), Generalitat de Catalunya (CERCA Programme and SGR 875), and Fundacio Privada Cellex. P.R. and S.L. acknowledge funding from ARO and AFOSR under MURI programmes. We thank L. Zheglova for producing Fig. 1.

Author Contributions All authors designed the study, analysed data, interpreted data, produced Box 3 Figure and wrote the article.

Author Information Reprints and permissions information is available at www.nature.com/reprints. The authors declare no competing financial interests. Readers are welcome to comment on the online version of the paper. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Correspondence should be addressed to J.B. ([email protected]).

Reviewer Information Nature thanks L. Lamata and the other anonymous reviewer(s) for their contribution to the peer review of this work.