The Holy Grail of Quantum Artificial Intelligence: Major Challenges in Accelerating The Machine Learning Pipeline
This is a preprint (arXiv:2004.14035v1 [quant-ph], 29 Apr 2020) of a paper accepted at the 1st International Workshop on Quantum Software Engineering (Q-SE) at ICSE 2020 and soon to be published in the corresponding proceedings.
ICSEW’20, May 23–29, 2020, Seoul, Republic of Korea Gabor et al.
model of temporal dependences between the creation of fundamental artifacts common to most machine learning models. Figure 2 shows a diagram of the different tasks that are relevant to software engineering. We aim to adopt various familiar development phases from classical software engineering (blue boxes at the top level). For software engineering, there is a distinct shift between writing the software for a variety of concrete domains and specializing (a single branch of) the software to a concrete environment (blue boxes at the bottom level). The main engineering tasks are shown in white boxes. A source of great difficulty for engineering lies in the inherent stochasticity of the behavior generated by most machine learning algorithms, requiring different methods to ensure quality of service (QoS) in the main training feedback loop ("select model/policy", "train", "assess QoS") and actual monitoring during operations. The break from most classical software engineering approaches happens here insofar as we explicitly want to achieve "softer" behavior guidelines for the algorithms, because we want to employ them in domains where we cannot possibly formulate enough "hard" rules. And as we still want specific behavior, the softening does not occur as random noise but is often very specific as well. Of course, this also often makes machine learning methods susceptible to systematic failures. For instance: a car-driving software failing at random on one in every 100 turns is rather easy to care for by adding redundancy and a voting system, e.g., allowing us to achieve arbitrarily low overall error rates as long as we can add enough redundancy. A car-driving software that operates well but systematically breaks down every time it comes across a football on the street is harder to handle, because we need very specific tests to even detect the error and then see every redundant system fail at the same time.

This inherent stochasticity in machine learning algorithms is, of course, not quite unlike the inherent stochasticity in quantum computing, a connection already discussed (and elaborated) by Wolfram [55]. For us, this may suggest that we can use similar methods to integrate (especially highly error-prone, early) quantum algorithms into classical software as we use to integrate the highly stochastic processes called machine learning algorithms.

3 THE ROLE OF COMPUTE AND THE CONSEQUENCES

Aside from similar external properties like stochasticity, quantum algorithms and artificial intelligence may indeed form an even stronger connection. The emerging field of quantum artificial intelligence (QAI) uses quantum algorithms or quantum-inspired algorithms to solve computation tasks related to artificial intelligence.¹ This combination may be highly synergetic for two main reasons:

• All machine learning methods need some randomness to work, often putting serious effort into generating the necessary entropy. Beyond that, they also often show high tolerance for noise during their evolution. This makes them inherently suitable for early applications using only NISQ hardware.

• Progress in artificial intelligence is becoming more and more demanding in computational resources. This trend is outgrowing the continued increase in available computing power by a large margin.

The first reason basically falls in line with our point on stochasticity made earlier. While high noise levels (as they are present in NISQ machines) are unwanted for many algorithms, AI algorithms in particular may actually benefit from (some level of) noise. Of course, current noise levels over a long series of computations are way too high to even allow for meaningful results, but the requirements of QAI algorithms might be met earlier than those of (for example) Grover's search on similarly large input spaces.

The second reason may be a bit more elusive; of course, more computational power is always better. However, pushing the borders of AI has been especially hungry for resources. Amodei and Hernandez [10] used the chart shown in Figure 3 to demonstrate that just in recent years, the computing power used for AI breakthroughs has had a doubling time of 3.5 months and has thus been dramatically outgrowing Moore's law (18-month doubling time).

¹ Note that the combination also works the other way around, using AI methods to better approximate quantum computations (for instance in [20, 44]). This, however, is beyond the scope of this paper.
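Both quantitative claims made so far lend themselves to quick back-of-the-envelope checks: the voting-redundancy argument from the previous section and the two doubling times just cited. The sketch below is plain Python; the three- and seven-replica ensembles and the six-year extrapolation horizon are illustrative assumptions, while the 1/100 failure rate and the 3.5- vs. 18-month doubling times are taken from the text.

```python
import math

# (1) Redundancy with majority voting: if each of n independent replicas
# fails on a given turn with probability p, the ensemble only fails when
# a majority of replicas fail at the same time.
def majority_failure_prob(p: float, n: int) -> float:
    """Probability that more than half of n independent replicas fail."""
    return sum(math.comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# With p = 1/100 (one failure per 100 turns), three-way redundancy already
# pushes the per-turn failure rate below 3e-4, and adding replicas keeps
# shrinking it -- but only for *random*, independent failures, which is
# exactly why the systematic "football" failure mode evades this scheme.
print(f"n=3: {majority_failure_prob(0.01, 3):.2e}")
print(f"n=7: {majority_failure_prob(0.01, 7):.2e}")

# (2) Doubling times: compute used by AI milestones (3.5-month doubling)
# vs. Moore's law (18-month doubling), extrapolated over six years.
def growth_factor(months: float, doubling_time_months: float) -> float:
    return 2 ** (months / doubling_time_months)

print(f"AI compute demand: x{growth_factor(72, 3.5):,.0f}")
print(f"Moore's law:       x{growth_factor(72, 18):,.0f}")
```

Over the same six years, Moore's law yields a factor of 16 while the AI trend exceeds a factor of a million, which is the gap the paper argues quantum acceleration would have to help close.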
5 CHALLENGES OF QUANTUM-ASSISTED ARTIFICIAL INTELLIGENCE

Having analyzed the needs of AI and the current state of QAI, we can use this background knowledge to derive major challenges for future developments in QAI. Note that unlike other work [35, 42] that formulates challenges in quantum artificial intelligence, we focus less on quantum-technical challenges and more on the changes to the development methods that need to be achieved.

Challenge 1 (The Feedback Loop). Replace the feedback loop around training (consisting of the tasks "Select Model/Policy", "Train", and "Assess QoS") entirely with a quantum algorithm.

When performing machine learning, a lot of time is usually spent in training, which usually means fine-tuning a set of parameters in small gradual steps over many iterations. These iterations are often necessary as they incorporate slightly different (sets of) data points into the final model. Here, quantum approaches might not treat training iterations as a sequence of steps but instead perform all training iterations in superposition and thus take a huge shortcut in training a machine learning model. However, none of the surveyed approaches managed to replace such large parts of the machine learning pipeline with quantum approaches, perhaps because real(istic) quantum machines only provide relatively short coherence times. Quantum RL [18] probably comes closest by performing both the action execution and the resulting update in a single run on the quantum machine, but the algorithm still requires many iterations of training overall. If possible at all, stepping away from iterative training might be the single biggest performance increase quantum computing could offer for AI. Thus, we might refer to the Feedback Loop Challenge as the "Holy Grail of Quantum AI".

Nonetheless, other challenges persist and might be detrimental to achieving this highest of goals. Considering the multitude of QAI algorithms focusing on the domain model, we see that quantum-based representations can be used as models for physical domains (where they are a natural fit), complex stochastic domains (where they can approximate complex probability distributions cheaper and more precisely), and small domains in general (where quantum-based or quantum-assisted modeling of the domain might yield some benefits further along the pipeline).

That the extremely limited memory capacities of current quantum computers are one of the main bottlenecks for practical applications is well known in the quantum computing community. For QAI algorithms, however, especially the more modern ones, this problem is aggravated, as modern AI algorithms really shine only when processing very large amounts of data [8]. Figure 4 shows a simple sketch of that behavior. Effectively, the need to process relatively large amounts of training data might even, in the long run, prevent us from cutting out the iterative training loop.

Challenge 2 (The Training Data). Provide means to process (the essence of) large amounts of data on quantum computers.

Note that for QAI, we might take a workaround here: using the right hybrid approaches, we might be able to construct classical pre-/postprocessing steps so that we can still process large amounts of data without processing all of them on the quantum machine. Early approaches like quantum-enhanced RL [39] have improved classical training by doing a preselection of training samples (using a quantum algorithm). Similar approaches could work to reduce the necessary training data for quantum training steps as well.

From these considerations we can already see that the combination and hybridization of various algorithms and techniques might be key to further developing QAI. However, combinations always include additional free parameters: Which algorithms do we use? How and when do they interact? Which domains is a specific combination good for? Furthermore, we do not only need to combine different techniques, but these techniques often stem from different fields of science and engineering. That means that even for a relatively standard QAI algorithm, we might require expert knowledge about quantum computing and the platforms it is run on, about AI and classical optimization, and about the domain at hand in order to make the right calls.

Challenge 3 (The Interfaces). Provide standardized interfaces that allow for dynamic combination of QAI components and (by extension) for experts of different fields to collaborate on QAI algorithms.

Standardization is a goal that is often called for throughout various disciplines of science and engineering. However, QAI brings together two largely separate fields, which in their own right develop rapidly and have produced little standardization. It thus becomes
imperative to organize the interfaces between AI and QC without fixed technological standards but based on the involved experts of different expertise [31]. An important part of this challenge is to allow standard software engineering to catch up with recent developments: especially smaller groups will not be able to afford dedicated experts in QC, much less in QAI. Instead, software developers should be able to use QAI as seamlessly as they are able to use parallel computing in the cloud now, benefiting from its advantages without the need to dive into the technical specifics.

For QC, this challenge requires a degree of technical maturity that most practical frameworks have not yet reached, even though recent developments definitely aim towards making QC technology more accessible. As a lot of effort is put into QC by vendors wanting to sell their applications, the independent development of open standards is required to prevent vendor lock-in and enable QAI applications that span different QC platforms.

Challenge 4 (The Real Reason). Keep track of the source of observed improvements.

Even classical machine learning models can often be treated as nothing more than a black box; even though they are deterministic and mathematically well understood, they just encode a behavior or connections between input and output that are too complex to trace without extreme computational effort. This is why in recent years, AI researchers have shown increased interest in methods of testing and verifying the performance of AI [11, 13, 15].

For QAI, this black-box property may be enforced by nature: we physically cannot introspect the probability distribution of states of a quantum machine while it is computing. That is all the more reason why we need quantum-appropriate testing and verification. In this light, it is rather curious that we found no QAI algorithms that specifically tackle the last few tasks of the machine learning pipeline, especially "Monitor QoS", which should be of utmost importance to practical applications.

Challenge 4, however, focuses on the reason why we need especially thorough testing in QAI: we need to constantly justify using a quantum machine. QAI will only be a success if the quantum part of the algorithm is the part that brings about the advantage over comparable methods. However, especially in the field of AI it is easy to construct a superior AI model by accident: a few lucky random numbers in the stochastic training process might result in a better-performing AI. Or any part of a QAI algorithm (made up of various classical parts as well) might just match the current (state of the) domain in the right way.

The more complex QAI algorithms become, the harder it might be to find a fair comparison in the purely classical world. Still, we need to provide researchers and developers in the field of QAI with the right tools to easily trace the significance of, and the reason for, perceived advantages over other algorithms. If QC is eventually going to benefit AI, we need to be able to know exactly when and for what reason.

6 OUTLOOK

In this paper, we took a long tour from the challenges AI already poses to software engineering to the even more peculiar challenges that QAI poses to software engineering. Still, we argued that QC may greatly help in alleviating the problems that the development of increasingly better AI is going to face in the upcoming years. On the flip side, AI methods with inherent robustness to noise might be an ideal testbed for early NISQ applications.

We defined four major challenges that stand without any claim to completeness. On the contrary, we expect every researcher in the field to be able to add quite a few more. However, we feel that the analysis of the projected future developments in AI and the current state of the art in QAI allowed us to deduce some of the most ambitious goals to tackle.

We hope that discussing these ambitious challenges benefits the development of the young field of QAI, and we are confident that future research will (purposefully or inadvertently) make progress with respect to these challenges.

ACKNOWLEDGMENTS

This work was supported by the Federal Ministry of Economic Affairs and Energy, Germany, as part of the PlanQK project, developing a platform and ecosystem for quantum-assisted artificial intelligence (see planqk.de).
REFERENCES
[1] [n.d.]. HHL implementation. https://github.com/Qiskit/qiskit-iqx-tutorials/blob/master/qiskit/advanced/aqua/linear_systems_of_equations.ipynb.
[2] [n.d.]. QAOA implementation. https://pennylane.ai/qml/app/tutorial_qaoa_maxcut.html.
[3] [n.d.]. QGAN Pennylane implementation. https://pennylane.ai/qml/app/tutorial_QGAN.html.
[4] [n.d.]. QGAN Qiskit implementation. https://github.com/Qiskit/qiskit-iqx-tutorials/blob/master/qiskit/advanced/aqua/artificial_intelligence/qgans_for_loading_random_distributions.ipynb.
[5] [n.d.]. SVM implementation. https://github.com/Qiskit/qiskit-iqx-tutorials/blob/master/qiskit/advanced/aqua/artificial_intelligence/qsvm_classification.ipynb.
[6] Esma Aïmeur, Gilles Brassard, and Sébastien Gambs. 2007. Quantum clustering algorithms. Proceedings of the 24th International Conference on Machine Learning (2007).
[7] DJ Albers, JC Sprott, and WD Dechert. 1996. Dynamical behavior of artificial neural networks with random weights. Intelligent Engineering Systems Through Artificial Neural Networks 6 (1996), 17–22.
[8] Md Zahangir Alom, Tarek M Taha, Chris Yakopcic, Stefan Westberg, Paheding Sidike, Mst Shamima Nasrin, Mahmudul Hasan, Brian C Van Essen, Abdul AS Awwal, and Vijayan K Asari. 2019. A state-of-the-art survey on deep learning theory and architectures. Electronics 8, 3 (2019), 292.
[9] Mohammad H Amin, Evgeny Andriyash, Jason Rolfe, Bohdan Kulchytskyy, and Roger Melko. 2018. Quantum Boltzmann machine. arXiv:1601.02036 (2018).
[10] Dario Amodei and Danny Hernandez. 2018. AI and Compute. https://openai.com/blog/ai-and-compute/.
[11] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. 2016. Concrete problems in AI safety. arXiv:1606.06565 (2016).
[12] Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando GSL Brandao, David A Buell, et al. 2019. Quantum supremacy using a programmable superconducting processor. Nature 574, 7779 (2019), 505–510.
[13] Lenz Belzner, Michael Till Beck, Thomas Gabor, Harald Roelle, and Horst Sauer. 2016. Software engineering for distributed autonomous real-time systems. In Proceedings of the 2nd International Workshop on Software Engineering for Smart Cyber-Physical Systems. ACM, 54–57.
[14] Tomas Bures, Danny Weyns, Bradley Schmer, Eduardo Tovar, Eric Boden, Thomas Gabor, Ilias Gerostathopoulos, Pragya Gupta, Eunsuk Kang, Alessia Knauss, et al. 2017. Software Engineering for Smart Cyber-Physical Systems: Challenges and Promising Solutions. ACM SIGSOFT Software Eng. Notes 42, 2 (2017), 19–24.
[15] Radu Calinescu, Carlo Ghezzi, Marta Kwiatkowska, and Raffaela Mirandola. 2012. Self-adaptive software needs quantitative verification at runtime. Commun. ACM 55, 9 (2012), 69–77.
[16] Giuseppe Cuccu, Julian Togelius, and Philippe Cudré-Mauroux. 2019. Playing Atari with six neurons. In Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 998–1006.
[17] Pierre-Luc Dallaire-Demers and Nathan Killoran. 2018. Quantum generative adversarial networks. arXiv:1804.08641v2 (2018).
[18] Daoyi Dong, Chunlin Chen, Hanxiong Li, and Tzyh-Jong Tarn. 2008. Quantum reinforcement learning. arXiv:0810.3828 (2008).
[19] Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. 2014. A quantum approximate optimization algorithm. arXiv:1411.4028 (2014).
[20] Thomas Fösel, Petru Tighineanu, Talitha Weiss, and Florian Marquardt. 2018. Reinforcement learning with neural networks for quantum feedback. Physical Review X 8, 3 (2018), 031084.
[21] Thomas Gabor, Marie Kiermeier, Andreas Sedlmeier, Bernhard Kempter, Cornel Klein, Horst Sauer, Reiner Schmid, and Jan Wieghardt. 2018. Adapting quality assurance to adaptive systems: the scenario coevolution paradigm. In International Symposium on Leveraging Applications of Formal Methods. Springer, 137–154.
[22] Thomas Gabor, Andreas Sedlmeier, Thomy Phan, Fabian Ritz, Marie Kiermeier, Lenz Belzner, Bernhard Kempter, Cornel Klein, Horst Sauer, Reiner Schmid, Jan Wieghardt, Marc Zeller, and Claudia Linnhoff-Popien. 2020. The Scenario Co-Evolution Paradigm: Adaptive Quality Assurance for Adaptive Systems. International Journal on Software Tools and Technology Transfer (2020).
[23] Fred Glover, Gary Kochenberger, and Yu Du. 2018. A tutorial on formulating and using QUBO models. arXiv:1811.11538 (2018).
[24] Aram W Harrow, Avinatan Hassidim, and Seth Lloyd. 2009. Quantum algorithm for linear systems of equations. arXiv:0811.3171v3 (2009).
[25] Vojtěch Havlicek, Antonio D Córcoles, Kristan Temme, Aram W Harrow, Abhinav Kandala, Jerry M Chow, and Jay M Gambetta. 2018. Supervised learning with quantum-enhanced feature spaces. arXiv:1804.11326v2 (2018).
[26] Matthias Hölzl, Nora Koch, Mariachiara Puviani, Martin Wirsing, and Franco Zambonelli. 2015. The ensemble development life cycle and best practices for collective autonomic systems. In Software Engineering for Collective Autonomic Systems. Springer, 325–354.
[27] Tadashi Kadowaki and Hidetoshi Nishimori. 1998. Quantum annealing in the transverse Ising model. Physical Review E 58, 5 (1998), 5355.
[28] Iordanis Kerenidis and Anupam Prakash. 2016. Quantum recommendation systems. arXiv:1603.08675v3 (2016).
[29] Amir Khoshaman, Walter Vinci, Brandon Denis, Evgeny Andriyash, Hossein Sadeghi, and Mohammad H Amin. 2018. Quantum variational autoencoder. arXiv:1802.05779v2 (2018).
[30] James King, Masoud Mohseni, William Bernoudy, Alexandre Fréchette, Hossein Sadeghi, Sergei V Isakov, Hartmut Neven, and Mohammad H Amin. 2019. Quantum-Assisted Genetic Algorithm. arXiv:1907.00707 (2019).
[31] Frank Leymann, Johanna Barzen, and Michael Falkenthal. 2019. Towards a Platform for Sharing Quantum Software. Proceedings of the 13th Advanced Summer School on Service (2019), 70–74.
[32] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. 2013. Quantum algorithms for supervised and unsupervised machine learning. arXiv:1307.0411 (2013).
[33] Seth Lloyd and Christian Weedbrook. 2018. Quantum generative adversarial learning. arXiv:1804.09139 (2018).
[34] Andrew Lucas. 2014. Ising formulations of many NP problems. Frontiers in Physics 2 (2014), 5.
[35] A Manju and Madhav J Nigam. 2014. Applications of quantum inspired computational intelligence: a survey. Artificial Intelligence Review 42, 1 (2014), 79–156.
[36] Catherine C McGeoch. 2014. Adiabatic quantum computation and quantum annealing: Theory and practice. Synthesis Lectures on Quantum Computing 5, 2 (2014), 1–93.
[37] Catherine C McGeoch and Cong Wang. 2013. Experimental evaluation of an adiabatic quantum system for combinatorial optimization. In Proceedings of the ACM International Conference on Computing Frontiers. 1–11.
[38] Ofir Nachum, Shixiang Shane Gu, Honglak Lee, and Sergey Levine. 2018. Data-efficient hierarchical reinforcement learning. In Advances in Neural Information Processing Systems. 3303–3313.
[39] Florian Neukart, David Von Dollen, Christian Seidel, and Gabriele Compostella. 2018. Quantum-enhanced reinforcement learning for finite-episode games with discrete state spaces. arXiv:1708.09354v3 (2018).
[40] Oscar Nierstrasz, Marcus Denker, Tudor Gîrba, Adrian Lienhard, and David Röthlisberger. 2008. Change-enabled software systems. In Software-Intensive Systems and New Computing Paradigms. Springer, 64–79.
[41] PennyLane. [n.d.]. VQE implementation. https://pennylane.ai/qml/demos/tutorial_vqe.html.
[42] Alejandro Perdomo-Ortiz, Marcello Benedetti, John Realpe-Gómez, and Rupak Biswas. 2018. Opportunities and challenges for quantum-assisted machine learning in near-term quantum computers. Quantum Science and Technology 3, 3 (2018), 030502.
[43] Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J Love, Alán Aspuru-Guzik, and Jeremy L O'Brien. 2014. A variational eigenvalue solver on a photonic quantum processor. Nature Communications 5 (2014).
[44] Riccardo Porotti, Dario Tamascelli, Marcello Restelli, and Enrico Prati. 2019. Coherent transport of quantum states by deep reinforcement learning. Communications Physics 2, 1 (2019), 1–9.
[45] John Preskill. 2018. Quantum Computing in the NISQ era and beyond. arXiv:1801.00862v3 (2018).
[46] Jonathan Romero and Alán Aspuru-Guzik. 2019. Variational quantum generators: Generative adversarial quantum machine learning for continuous distributions. arXiv:1901.00848 (2019).
[47] Jonathan Romero, Jonathan P Olson, and Alán Aspuru-Guzik. 2017. Quantum autoencoders for efficient compression of quantum data. arXiv:1612.02806v2 (2017).
[48] Sukin Sim, Evan Anderson, Eric Brown, and Jonathan Romero. [n.d.]. Autoencoder implementation. https://github.com/hsim13372/QCompress-1.
[49] Richard Sutton. 2019. The Bitter Lesson. http://www.incompleteideas.net/IncIdeas/BitterLesson.html.
[50] The AlphaStar team. 2019. AlphaStar: Mastering the Real-Time Strategy Game StarCraft II. https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii.
[51] Danny Weyns. 2017. Software engineering of self-adaptive systems: an organised tour and future challenges. (2017).
[52] Nathan Wiebe, Ashish Kapoor, and Krysta Svore. 2014. Quantum algorithms for nearest-neighbor methods for supervised and unsupervised learning. arXiv:1401.2142v2 (2014).
[53] Nathan Wiebe and Leonard Wossnig. 2019. Generative training of quantum Boltzmann machines with hidden units. arXiv:1905.09902 (2019).
[54] D. Willsch, M. Willsch, H. De Raedt, and K. Michielsen. 2019. Support vector machines on the D-Wave quantum annealer. arXiv:1906.06283 (2019).
[55] Stephen Wolfram. 2018. Buzzword Convergence: Making Sense of Quantum Neural Blockchain AI. https://writings.stephenwolfram.com/2018/04/buzzword-convergence-making-sense-of-quantum-neural-blockchain-ai/.
[56] Christa Zoufal, Aurélien Lucchi, and Stefan Woerner. 2019. Quantum generative adversarial networks for learning and loading random distributions. arXiv:1904.00043 (2019).

All URLs have been accessed on March 1, 2020.