
Quantum Neural Networks: Concepts, Applications, and Challenges

Yunseok Kwak, Won Joon Yun, Soyi Jung, and Joongheon Kim

Department of Electrical and Computer Engineering, Korea University, Seoul 02841, Republic of Korea
E-mails: [email protected], [email protected], [email protected], [email protected]

arXiv:2108.01468v1 [quant-ph] 2 Aug 2021

Abstract—Quantum deep learning is a research field that applies quantum computing techniques to the training of deep neural networks. The research topics and directions of deep learning and quantum computing were separate for a long time; however, with the discovery that quantum circuits can act like artificial neural networks, quantum deep learning research has been widely adopted. This paper explains the background and basic principles of quantum deep learning and introduces its major achievements. It then discusses the challenges of quantum deep learning research from multiple perspectives. Lastly, this paper presents various future research directions and application fields of quantum deep learning.

I. INTRODUCTION

As quantum computing and deep learning have recently begun to draw attention, notable research achievements have been pouring in over the past decades. In the field of deep learning, problems that were considered inherent limitations, such as vanishing gradients, local minima, and learning inefficiencies in large-scale parameter training, are gradually being conquered [1]. On the one hand, innovative new deep learning algorithms such as the quantum neural network (QNN), convolutional neural network (CNN), and recurrent neural network (RNN) are completely changing the way various kinds of data are processed. Meanwhile, the field of quantum computing has also undergone rapid development in recent years. Quantum computing, which for a long time was recognized only for its potential, has opened up a new era of enormous potential with the recent advances in variational quantum circuits (VQC). The surprising potential of variational quantum algorithms became clear through solving various combinatorial optimization problems and intrinsic energy problems of molecules that were difficult to solve using conventional methods, and further extensions are being considered to design machine learning algorithms using quantum computing. Among them, the field of quantum deep learning is growing rapidly, inheriting the achievements of existing deep learning research. Accordingly, numerous notable achievements related to quantum deep learning have been published, and active follow-up studies are being conducted. In this paper, we first briefly introduce the background knowledge and basic principles of quantum deep learning and look at the current research directions. We then discuss the various directions and challenges of future research in quantum deep learning.

A. Quantum Computing

Quantum computers use qubits as the basic units of computation, which represent a superposition state between |0⟩ and |1⟩ [2]–[5]. A single qubit state can be represented as a normalized two-dimensional complex vector, i.e.,

|ψ⟩ = α|0⟩ + β|1⟩,  |α|² + |β|² = 1,   (1)

where |α|² and |β|² are the probabilities of observing |0⟩ and |1⟩ from the qubit, respectively. This can also be represented geometrically using polar coordinates θ and φ,

|ψ⟩ = cos(θ/2)|0⟩ + e^{iφ} sin(θ/2)|1⟩,   (2)

where 0 ≤ θ ≤ π and 0 ≤ φ < 2π. This representation maps a single qubit state onto the surface of a three-dimensional unit sphere, called the Bloch sphere. A multi-qubit system can be represented as the tensor product of n single qubits, which exists as a superposition of 2^n basis states from |00...00⟩ to |11...11⟩. Quantum entanglement appears as a correlation between different qubits in such a system. For example, in the 2-qubit state (1/√2)|00⟩ + (1/√2)|11⟩, the observation of the first qubit directly determines that of the second qubit. These systems are controlled by quantum gates in a quantum circuit to perform a quantum computation for its purpose [6], [7].

Quantum gates are unitary operators mapping one qubit system into another, and, as in classical computing, it is known that every quantum gate can be factorized into a combination of a few basic operators such as the rotation operator gates and the CX gate [8]. The rotation operator gates Rx(θ), Ry(θ), Rz(θ) rotate a qubit state on the Bloch sphere around the corresponding axis by θ, and the CX gate entangles two qubits by flipping one qubit state if the other is |1⟩. These quantum gates exploit quantum superposition and entanglement to gain an advantage over classical computing, and it is well known that quantum algorithms can obtain an exponential computational gain over existing algorithms in certain tasks such as prime factorization [9].
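The single- and two-qubit behavior described above can be checked with a small state-vector simulation. The following sketch uses plain NumPy rather than any quantum SDK; it builds the state of Eq. (2), applies a rotation gate and the CX gate, and verifies the normalization of Eq. (1) and the entanglement correlation. All function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

# Computational basis states |0> and |1>
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

def single_qubit_state(theta, phi):
    """Eq. (2): |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>."""
    return np.cos(theta / 2) * ket0 + np.exp(1j * phi) * np.sin(theta / 2) * ket1

def ry(theta):
    """Rotation operator gate R_y(theta)."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

# CX gate: flips the second (target) qubit if the first (control) qubit is |1>
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=complex)

psi = single_qubit_state(np.pi / 3, np.pi / 4)
print(np.isclose(np.linalg.norm(psi), 1.0))   # normalization, Eq. (1)

# Prepare (1/sqrt(2))(|00> + |11>): rotate qubit 0 by pi/2, then entangle with CX
plus = ry(np.pi / 2) @ ket0                   # (|0> + |1>)/sqrt(2)
bell = CX @ np.kron(plus, ket0)               # 2-qubit entangled state
probs = np.abs(bell) ** 2                     # probabilities of |00>, |01>, |10>, |11>
print(np.round(probs, 3))                     # -> [0.5, 0, 0, 0.5]
```

The printed probabilities show the correlation described in the text: only |00⟩ and |11⟩ are ever observed, so measuring the first qubit fixes the second.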
II. QUANTUM DEEP LEARNING

A. Variational Quantum Circuits (VQC)

A variational quantum circuit (VQC) is a quantum circuit that uses rotation operator gates with free parameters to perform various numerical tasks, such as approximation, optimization, and classification. An algorithm using a variational quantum circuit is called a variational quantum algorithm (VQA), which is a classical-quantum hybrid algorithm because its parameter optimization is often performed by a classical computer. Owing to the universal function approximation property of VQCs [10], many algorithms using VQCs [11] are designed to solve various numerical problems [2], [8], [12]–[14]. This flow has led to many applications of VQAs in machine learning, including replacing the artificial neural network of an existing model with a VQC [15]–[18]. A VQC is similar to an artificial neural network in that it approximates functions through parameter learning, but it differs due to several characteristics of quantum computing. Since all quantum gate operations are reversible linear operations, quantum circuits use entanglement layers instead of activation functions to obtain multilayer structures. These VQCs are called quantum neural networks, and this paper will look at them through a classification according to their structure and characteristics.
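As a concrete illustration of the classical-quantum hybrid loop, the minimal sketch below treats a single Ry(θ) rotation followed by a ⟨Z⟩ measurement as the "quantum" part and minimizes the expectation value with classical gradient descent using the parameter-shift rule. It continues the NumPy simulation style of the previous example; the learning rate, iteration count, and initial angle are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)          # Pauli-Z observable

def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

def expectation(theta):
    """'Quantum' part of the VQA: run the circuit Ry(theta)|0> and measure <Z>."""
    psi = ry(theta) @ ket0
    return float(np.real(psi.conj() @ Z @ psi))   # equals cos(theta)

def parameter_shift_grad(theta, shift=np.pi / 2):
    """Exact gradient of <Z> w.r.t. theta for rotation gates (parameter-shift rule)."""
    return 0.5 * (expectation(theta + shift) - expectation(theta - shift))

# Classical part of the VQA: gradient descent on the circuit parameter
theta, lr = 0.1, 0.4
for step in range(60):
    theta -= lr * parameter_shift_grad(theta)

print(round(expectation(theta), 4))               # approaches -1 as theta -> pi
```

The loop alternates between evaluating the circuit (which a quantum device would do) and updating the free parameter classically, which is exactly the hybrid structure described above.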
B. Quantum Neural Networks

[Fig. 1. Illustration of a QNN with the input |ψ⟩, the parameter θ, and a linear entanglement structure.]

In this section, we demonstrate how a basic quantum neural network (QNN) works with the simple example described in Fig. 1. The way a QNN processes data is as follows. First, the input data is encoded into the corresponding qubit state of an appropriate number of qubits. Then, the qubit state is transformed through the parameterized rotation gates and entangling gates for a given number of layers. The transformed qubit state is then measured by obtaining the expectation value of a Hamiltonian operator, such as a Pauli operator. These measurements are decoded back into the form of appropriate output data. The parameters are then updated by an optimizer such as the Adam optimizer. A neural network constructed in the form of a VQC can perform various roles in various forms, which will be explored as quantum neural networks.
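The sketch below follows the encode → rotate/entangle → measure pipeline just described, again with a plain NumPy state-vector simulation. The angle encoding of the input features, the linear CX chain, the layer count, and the toy input are illustrative assumptions rather than the exact circuit of Fig. 1; the classical parameter update would follow as in the previous VQA sketch and is omitted here.

```python
import numpy as np

n_qubits, n_layers = 3, 2
ket0 = np.array([1, 0], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
               [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

def kron_all(mats):
    out = np.array([1.0 + 0j])
    for m in mats:
        out = np.kron(out, m)
    return out

def op_on(qubit, gate2x2):
    """Single-qubit gate embedded into the n-qubit register."""
    return kron_all([gate2x2 if q == qubit else I2 for q in range(n_qubits)])

def cx_on(pair_start):
    """CX on the adjacent pair (pair_start, pair_start + 1) of the linear chain."""
    mats, q = [], 0
    while q < n_qubits:
        if q == pair_start:
            mats.append(CX)
            q += 2
        else:
            mats.append(I2)
            q += 1
    return kron_all(mats)

def qnn_forward(x, params):
    """Encode -> (rotate + entangle) x layers -> measure <Z> on each qubit."""
    state = kron_all([ket0] * n_qubits)
    # (1) angle-encode the input features as Ry rotations, one feature per qubit
    for q in range(n_qubits):
        state = op_on(q, ry(x[q])) @ state
    # (2) parameterized rotation layers with a linear CX entanglement structure
    for layer in range(n_layers):
        for q in range(n_qubits):
            state = op_on(q, ry(params[layer, q])) @ state
        for q in range(n_qubits - 1):
            state = cx_on(q) @ state
    # (3) measurement: expectation value of Pauli-Z on each qubit
    return np.array([np.real(state.conj() @ op_on(q, Z) @ state)
                     for q in range(n_qubits)])

rng = np.random.default_rng(0)
params = rng.uniform(0, 2 * np.pi, size=(n_layers, n_qubits))
x = np.array([0.3, 1.2, 2.5])                 # toy input features
print(np.round(qnn_forward(x, params), 3))    # one <Z> readout per qubit
```

The readout vector plays the role of the decoded output; a classical optimizer would then adjust `params` against a loss defined on it.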
1) Quantum Convolutional Neural Networks: The quantum convolutional neural network (QCNN) was proposed in [16], implementing the convolution layer and the pooling layer on quantum circuits. According to the previous research results in [5], [19], the QCNN circuit computation proceeds as follows. The first step is the same as in any other QNN model: encoding the input data into a qubit state with rotation operator gates. Then the convolution layer, built from quasi-local unitary gates, filters the input data into a feature map. The pooling layer, built from controlled rotation operators, then downsizes the feature map. After repeating this process sufficiently, a fully connected layer acts on the qubit state as in classical CNN models. Finally, the measurement of the qubit state is decoded into output data of the desired size. The circuit parameters are updated with a gradient-descent-based optimizer after each measurement.

[Fig. 2. Illustration of a QCNN with the input |ψ⟩ and the parameter θ, with a single convolution and pooling layer.]

Unfortunately, in the current quantum computing environment [20], it is difficult for a QCNN to perform better than existing classical CNNs. However, it is expected that QCNNs will be able to obtain sufficient computational gains over their classical counterparts in future quantum computing environments where larger quantum computations are possible [5], [16].
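The sketch below mirrors the convolution and pooling flow described above on a 4-qubit toy register. Parameterized two-qubit blocks stand in for the quasi-local convolution unitaries, and a partial trace over the discarded qubits is used as a simplified stand-in for the measurement-based pooling with controlled rotations; it is not a reproduction of the circuits in [5], [16], and the parameter values are arbitrary.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
               [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

def conv_block(t0, t1):
    """Toy 'convolution' unit on two neighboring qubits: local rotations + CX."""
    return CX @ np.kron(ry(t0), ry(t1))

def partial_trace(state, keep, n):
    """Reduced density matrix over the kept qubits of an n-qubit pure state."""
    rho = np.outer(state, state.conj()).reshape([2] * (2 * n))
    cur = n
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=q, axis2=cur + q)
        cur -= 1
    d = 2 ** len(keep)
    return rho.reshape(d, d)

n = 4
x = np.array([0.4, 1.1, 2.0, 0.7])        # toy input features
theta = np.array([0.3, 0.9, 1.5, 0.2])    # convolution parameters

# (1) encoding layer: one Ry rotation per qubit
state = np.array([1.0 + 0j])
for q in range(n):
    state = np.kron(state, ry(x[q]) @ ket0)

# (2) convolution layer: parameterized two-qubit blocks on pairs (0,1) and (2,3)
U_conv = np.kron(conv_block(theta[0], theta[1]), conv_block(theta[2], theta[3]))
state = U_conv @ state

# (3) pooling layer (simplified): discard qubits 1 and 3, keeping a smaller register
rho = partial_trace(state, keep=[0, 2], n=n)

# (4) readout on the pooled register: <Z> on the first remaining qubit
print(round(float(np.real(np.trace(rho @ np.kron(Z, I2)))), 3))
```

Each convolution-plus-pooling round halves the register, which is the qubit-count reduction that gives the QCNN its logarithmic-depth structure.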
III. FUTURE WORK DIRECTIONS AND CHALLENGES

A. Applications of Quantum Deep Learning to Reinforcement Learning

There are many research results applying deep learning to reinforcement learning in order to derive optimal actions from a complex state space [21]–[24]. However, reinforcement learning research using quantum deep learning [18], [25], [26] is still in its infancy. The current approach is to replace the existing deep neural network used for policy training with a quantum neural network, but there remains room for many algorithms applying various ideas from classical deep reinforcement learning research. In particular, if it is proved that quantum computational gains can be obtained through QNNs in situations of high computational complexity due to complex Markov decision process environments, quantum reinforcement learning will open a new horizon for reinforcement learning research.

B. Applications of Quantum Deep Learning to Communication Networks

QNN and quantum reinforcement learning algorithms can be used in various research fields, and this paper considers their applications in terms of communications and networks. QNNs can be used to accelerate computation in fully distributed platforms, e.g., blockchain [27], [28]. In addition, various advanced communication technologies such as the Internet of Things (IoT) [29], [30], millimeter-wave networks [31], [32], caching networks [33]–[36], and video streaming/scheduling [37]–[40] are good application targets for QNN and quantum reinforcement learning algorithms.

C. Challenges

1) Gradient Vanishing: The vanishing gradient is a crucial problem in quantum deep learning, as it is in classical deep learning. The problem of gradients disappearing while backpropagating through many hidden layers has long been considered a chronic problem in deep neural network computation. Since quantum neural networks also train their parameters with gradient descent methods, as classical ones do, they have to solve the same problem. Classical deep learning models address this problem by utilizing an appropriate activation function, but quantum deep learning does not use activation functions, so a different solution is eventually needed. An earlier work [41] named this quantum gradient vanishing phenomenon the barren plateau, and proved that as the number of qubits increases, the probability of encountering barren plateaus increases exponentially. This can be avoided by setting good initial parameters in small-scale QNNs, but the problem is unavoidable when designing large-scale QNNs. This is an open problem for which a solution is not yet clear.
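One way to observe this effect numerically is to estimate the variance of a circuit gradient over random parameter initializations as the number of qubits grows, in the spirit of [41]. The sketch below reuses the NumPy QNN-style simulation; the random layered Ry/CX circuit, the observable, the layer count, and the sample size are illustrative assumptions, and the numbers it prints should not be read as a reproduction of the results in [41].

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
               [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def kron_all(mats):
    out = np.array([1.0 + 0j])
    for m in mats:
        out = np.kron(out, m)
    return out

def expectation(thetas, n, layers):
    """<Z_0> of a layered circuit: Ry rotations on every qubit + a linear CX chain."""
    state = kron_all([ket0] * n)
    for l in range(layers):
        for q in range(n):
            gate = kron_all([ry(thetas[l, q]) if k == q else I2 for k in range(n)])
            state = gate @ state
        for q in range(n - 1):
            ent = kron_all([CX if k == q else I2 for k in range(n) if k != q + 1])
            state = ent @ state
    obs = kron_all([Z if k == 0 else I2 for k in range(n)])
    return float(np.real(state.conj() @ obs @ state))

def grad_first_param(thetas, n, layers):
    """Parameter-shift gradient of <Z_0> w.r.t. the very first rotation angle."""
    plus, minus = thetas.copy(), thetas.copy()
    plus[0, 0] += np.pi / 2
    minus[0, 0] -= np.pi / 2
    return 0.5 * (expectation(plus, n, layers) - expectation(minus, n, layers))

rng = np.random.default_rng(1)
layers, samples = 4, 200
for n in (2, 4, 6):
    grads = [grad_first_param(rng.uniform(0, 2 * np.pi, (layers, n)), n, layers)
             for _ in range(samples)]
    print(n, round(float(np.var(grads)), 5))   # gradient variance per qubit count
```

A shrinking gradient variance with growing qubit count is the signature of a barren plateau: the optimizer receives almost no signal from random initializations.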
2) Near-Term Device Compatibility: Noisy intermediate-scale quantum (NISQ) [20], which refers to near-term quantum devices with few qubits and considerable computational error, has already become a familiar term to quantum researchers. Many algorithms designed to realize quantum computational gains do not work at all in this NISQ environment and are expected to become practical only decades from now. For example, a practical implementation of Shor's algorithm requires at least thousands of qubits even without error correction processes, whereas current quantum devices have only a few tens of qubits with a non-negligible computational error rate of several percent. However, due to their relatively small circuit depth and qubit requirements, VQAs and the QNNs based on them are tolerant of these environmental constraints. Nevertheless, in order to increase the data processing capability of quantum neural networks, it is necessary to consider near-term device compatibility. For example, using many multi-qubit controlled gates for quantum entanglement is theoretically expected to increase the performance of a QNN, but it entails a large error rate and a complicated error correction process. Therefore, it is essential to design algorithms with these tradeoffs in mind in quantum deep learning research.

3) The Quantum Advantage: The term quantum supremacy may lead to the illusion that quantum algorithms are always better than classical algorithms performing the same function. However, given the inherent limitations of quantum computing, quantum computing benefits can only be realized through well-thought-out algorithms under certain circumstances. In fact, among variational quantum algorithms in particular, only a few have proven a quantum advantage, and only in limited situations.

Due to the universal approximation property of QNNs, it is known that quantum deep learning can perform most of the computations performed in classical deep learning [10]. Nevertheless, if one proceeds simply on this fact without considering the quantum gain, the result may be much less efficient than the existing classical algorithm. Therefore, in designing a new QNN-based deep learning algorithm, it is necessary to justify it by articulating its advantages over the corresponding classical models.

IV. CONCLUSION

This paper introduced the basic concepts of quantum neural networks and their application scenarios in various fields. Furthermore, this paper presented research challenges and potential solutions for quantum neural network computation.

ACKNOWLEDGMENT

This work was supported by the National Research Foundation of Korea (2019M3E4A1080391). Joongheon Kim is the corresponding author of this paper.

REFERENCES

[1] J. Park, S. Samarakoon, A. Elgabli, J. Kim, M. Bennis, S.-L. Kim, and M. Debbah, “Communication-efficient and distributed learning over wireless networks: Principles and applications,” Proceedings of the IEEE, vol. 109, no. 5, pp. 796–819, May 2021.
[2] J. Choi, S. Oh, and J. Kim, “Quantum approximation for wireless scheduling,” Applied Sciences, vol. 10, no. 20, 2020.
[3] J. Choi and J. Kim, “A tutorial on quantum approximate optimization algorithm (QAOA): Fundamentals and applications,” in Proceedings of the IEEE International Conference on Information and Communication Technology Convergence (ICTC), 2019, pp. 138–142.
[4] J. Choi, S. Oh, and J. Kim, “The useful quantum computing techniques for artificial intelligence engineers,” in Proceedings of the IEEE International Conference on Information Networking (ICOIN), 2020, pp. 1–3.
[5] S. Oh, J. Choi, and J. Kim, “A tutorial on quantum convolutional neural networks (QCNN),” in Proceedings of the IEEE International Conference on Information and Communication Technology Convergence (ICTC), 2020, pp. 236–239.
[6] J. Choi, S. Oh, and J. Kim, “A tutorial on quantum graph recurrent neural network (QGRNN),” in Proceedings of the IEEE International Conference on Information Networking (ICOIN), 2021, pp. 46–49.
[7] S. Oh, J. Choi, J.-K. Kim, and J. Kim, “Quantum convolutional neural network for resource-efficient image classification: A quantum random access memory (QRAM) approach,” in Proceedings of the IEEE International Conference on Information Networking (ICOIN), 2021, pp. 50–52.
[8] J. Choi, S. Oh, and J. Kim, “Energy-efficient cluster head selection via quantum approximate optimization,” Electronics, vol. 9, no. 10, 2020.
[9] P. W. Shor, “Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer,” SIAM Review, vol. 41, no. 2, pp. 303–332, 1999.
[10] J. Biamonte, “Universal variational quantum computation,” Physical Review A, vol. 103, no. 3, p. L030401, 2021.
[11] M. Cerezo, A. Arrasmith, R. Babbush, S. C. Benjamin, S. Endo, K. Fujii, J. R. McClean, K. Mitarai, X. Yuan, L. Cincio et al., “Variational quantum algorithms,” arXiv preprint arXiv:2012.09265, 2020.
[12] E. Farhi, J. Goldstone, and S. Gutmann, “A quantum approximate optimization algorithm,” arXiv preprint arXiv:1411.4028, 2014.
[13] A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta, “Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets,” Nature, vol. 549, no. 7671, pp. 242–246, 2017.
[14] J. Kim, Y. Kwak, S. Jung, and J.-H. Kim, “Quantum scheduling for millimeter-wave observation satellite constellation,” in Proceedings of the IEEE VTS Asia Pacific Wireless Communications Symposium (APWCS), 2021, pp. 1–1.
[15] M. Schuld and N. Killoran, “Quantum machine learning in feature Hilbert spaces,” Physical Review Letters, vol. 122, no. 4, p. 040504, 2019.
[16] I. Cong, S. Choi, and M. D. Lukin, “Quantum convolutional neural networks,” Nature Physics, vol. 15, no. 12, pp. 1273–1278, 2019.
[17] J. Bausch, “Recurrent quantum neural networks,” Advances in Neural Information Processing Systems, vol. 33, 2020.
[18] D. Dong, C. Chen, H. Li, and T.-J. Tarn, “Quantum reinforcement learning,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 38, no. 5, pp. 1207–1220, 2008.
[19] S. Garg and G. Ramakrishnan, “Advances in quantum deep learning: An overview,” arXiv preprint arXiv:2005.04316, 2020.
[20] J. Preskill, “Quantum computing in the NISQ era and beyond,” Quantum, vol. 2, p. 79, 2018.
[21] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing Atari with deep reinforcement learning,” arXiv preprint arXiv:1312.5602, 2013.
[22] M. Choi, A. No, M. Ji, and J. Kim, “Markov decision policies for dynamic video delivery in wireless caching networks,” IEEE Transactions on Wireless Communications, vol. 18, no. 12, pp. 5705–5718, December 2019.
[23] M. Shin, J. Kim, and M. Levorato, “Auction-based charging scheduling with deep learning framework for multi-drone networks,” IEEE Transactions on Vehicular Technology, vol. 68, no. 5, pp. 4235–4248, May 2019.
[24] S. Jung, W. J. Yun, M. Shin, J. Kim, and J.-H. Kim, “Orchestrated scheduling and multi-agent deep reinforcement learning for cloud-assisted multi-UAV charging systems,” IEEE Transactions on Vehicular Technology, vol. 70, no. 6, pp. 5362–5377, June 2021.
[25] S. Y.-C. Chen, C.-H. H. Yang, J. Qi, P.-Y. Chen, X. Ma, and H.-S. Goan, “Variational quantum circuits for deep reinforcement learning,” IEEE Access, vol. 8, pp. 141007–141024, 2020.
[26] S. Jerbi, C. Gyurik, S. Marshall, H. J. Briegel, and V. Dunjko, “Variational quantum policies for reinforcement learning,” arXiv preprint arXiv:2103.05577, 2021.
[27] M. Saad, J. Choi, D. Nyang, J. Kim, and A. Mohaisen, “Toward characterizing blockchain-based cryptocurrencies for highly accurate predictions,” IEEE Systems Journal, vol. 14, no. 1, pp. 321–332, March 2020.
[28] E. Boo, J. Kim, and J. Ko, “LiteZKP: Lightening zero-knowledge proof-based blockchains for IoT and edge platforms,” IEEE Systems Journal, pp. 1–12, 2021.
[29] N.-N. Dao, D.-N. Vu, W. Na, J. Kim, and S. Cho, “SGCO: Stabilized green crosshaul orchestration for dense IoT offloading services,” IEEE Journal on Selected Areas in Communications, vol. 36, no. 11, pp. 2538–2548, November 2018.
[30] N.-N. Dao, T. V. Phan, U. Sa'ad, J. Kim, T. Bauschert, D.-T. Do, and S. Cho, “Securing heterogeneous IoT with intelligent DDoS attack behavior learning,” IEEE Systems Journal, pp. 1–10, 2021.
[31] S. Jung, J. Kim, M. Levorato, C. Cordeiro, and J.-H. Kim, “Infrastructure-assisted on-driving experience sharing for millimeter-wave connected vehicles,” IEEE Transactions on Vehicular Technology, pp. 1–1, 2021.
[32] J. Kim and A. F. Molisch, “Fast millimeter-wave beam training with receive beamforming,” Journal of Communications and Networks, vol. 16, no. 5, pp. 512–522, October 2014.
[33] A. Malik, J. Kim, K. S. Kim, and W.-Y. Shin, “A personalized preference learning framework for caching in mobile networks,” IEEE Transactions on Mobile Computing, vol. 20, no. 6, pp. 2124–2139, June 2021.
[34] M. Choi, J. Kim, and J. Moon, “Dynamic power allocation and user scheduling for power-efficient and delay-constrained multiple access networks,” IEEE Transactions on Wireless Communications, vol. 18, no. 10, pp. 4846–4858, October 2019.
[35] M. Choi, A. F. Molisch, and J. Kim, “Joint distributed link scheduling and power allocation for content delivery in wireless caching networks,” IEEE Transactions on Wireless Communications, vol. 19, no. 12, pp. 7810–7824, December 2020.
[36] M. Choi, A. F. Molisch, D.-J. Han, D. Kim, J. Kim, and J. Moon, “Probabilistic caching and dynamic delivery policies for categorized contents and consecutive user demands,” IEEE Transactions on Wireless Communications, vol. 20, no. 4, pp. 2685–2699, April 2021.
[37] J. Kim, G. Caire, and A. F. Molisch, “Quality-aware streaming and scheduling for device-to-device video delivery,” IEEE/ACM Transactions on Networking, vol. 24, no. 4, pp. 2319–2331, August 2016.
[38] M. Choi, J. Kim, and J. Moon, “Wireless video caching and dynamic streaming under differentiated quality requirements,” IEEE Journal on Selected Areas in Communications, vol. 36, no. 6, pp. 1245–1257, June 2018.
[39] J. Koo, J. Yi, J. Kim, M. A. Hoque, and S. Choi, “Seamless dynamic adaptive streaming in LTE/Wi-Fi integrated network under smartphone resource constraints,” IEEE Transactions on Mobile Computing, vol. 18, no. 7, pp. 1647–1660, July 2019.
[40] J. Yi, S. Kim, J. Kim, and S. Choi, “Supremo: Cloud-assisted low-latency super-resolution in mobile devices,” IEEE Transactions on Mobile Computing, pp. 1–1, 2021.
[41] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, “Barren plateaus in quantum neural network training landscapes,” Nature Communications, vol. 9, no. 1, pp. 1–6, 2018.
