
A Comparative Analysis of Hybrid Quantum-Classical Neural Networks

Kamila Zaman1,2,*, Tasnim Ahmed1,2,*, Muhammad Abdullah Hanif1,2, Alberto Marchisio1,2, and Muhammad Shafique1,2
1 eBrain Lab, Division of Engineering, New York University Abu Dhabi (NYUAD), Abu Dhabi, UAE
2 Center for Quantum and Topological Systems (CQTS), NYUAD Research Institute, NYUAD, Abu Dhabi, UAE
{kz2137,tasnim.ahmed,mh6117,alberto.marchisio,muhammad.shafique}@nyu.edu
* These authors contributed equally to this work.

ABSTRACT

Hybrid Quantum-Classical Machine Learning (ML) is an emerging field, amalgamating the strengths of both classical neural networks and quantum variational circuits on the current noisy intermediate-scale quantum devices [13]. This paper performs an extensive comparative analysis between different hybrid quantum-classical machine learning algorithms, namely the Quantum Convolutional Neural Network, the Quanvolutional Neural Network, and Quantum ResNet, for image classification. The experiments designed in this paper focus on different Quantum ML (QML) algorithms to better understand the accuracy variation across the different quantum architectures by implementing interchangeable quantum circuit layers, varying the repetition of such layers, and considering their efficient placement. Such variations enable us to compare the accuracy across different architectural permutations of a given hybrid QML algorithm. The performance comparison of the hybrid models, based on the accuracy, provides us with an understanding of hybrid quantum-classical convergence in correlation with the quantum layer count and the qubit count variations in the circuit.

Figure 1: PennyLane: The arrows indicate an observed pattern suggesting a potential positive correlation of accuracy with the number of layers & qubits in hybrid quantum-classical architectures. Specifically, (a), (b), and (c) consistently exhibit this behavior for the QuanNN across all entangling circuits, while (d) displays a similar effect in the QResNet. While the QuanNN responds to both layer and qubit count, the QResNet demonstrates improved accuracy primarily with an increase in qubits only. Hence, the observation at (d) raises questions for further studies on whether the QResNet's accuracy significantly improves with a substantial increase in qubits.

KEYWORDS
Quantum Machine Learning, Hybrid Quantum-Classical Neural Networks, Quantum Convolutional Neural Networks, Quantum ResNet, Quanvolutional Neural Networks

1 INTRODUCTION
Merging Quantum Computing (QC) with Machine Learning (ML) forms the emerging Quantum Machine Learning (QML) paradigm [7, 8, 11, 12]. It represents an excellent opportunity for researchers and industries to make phenomenal discoveries and unravel efficient ways to solve complex real-world problems, with significant direction towards practicality and improved accuracy compared to classical systems. QML opens new avenues for the community to discover, build, and align their designs to different levels of the quantum stack. However, the currently developed Noisy Intermediate-Scale Quantum (NISQ) devices [9] have a limited number of qubits, with small-scale resilience to noise, therefore making it difficult to develop and practically realize the potential of standalone quantum machine learning algorithms. The limitation of NISQ devices has motivated the development of hybrid quantum-classical machine learning (HQML) algorithms, which are NISQ-compatible.

HQML algorithms have emerged as an important paradigm that amalgamates the power of both classical and quantum processing for machine learning tasks, opening new avenues for algorithm and architecture exploration [1]. The most renowned HQML algorithms employ variational quantum circuits, featuring parameterized quantum gates optimized by classical computers to achieve specific goals. These variational quantum circuits in QML, known as Quantum Neural Networks (QNNs), hold significant promise due to their expressiveness and reduced trainable parameters, garnering considerable development interest. Given the recent breakthroughs of classical algorithms such as Convolutional Neural Networks (CNNs) for image classification tasks, numerous QNN architectures have been developed for classification tasks. However, those architectures are either implemented as basic building blocks or lack the verification of their usability against a benchmark model for classification tasks. The observations in Figure 1, based on our implementation of permutations of two different algorithms, show that different architectures of the same algorithms have different impacts on classification tasks. Thus, it is imperative to explore the architectures, applications, and usefulness of QNN algorithms and evaluate their accuracy to identify efficient model configurations that can be deemed as reference benchmarks for future research.
Figure 2: Overview of our novel contributions. (The diagram outlines the methodology for analysing the HQML models (QuanNN, QCNN & QResNet): data preprocessing; QC frameworks (Qiskit & PennyLane); circuit layer, layer repetition, and qubit count variations; evaluation and comparison between the 3 HQML algorithms; and the per-algorithm pipelines for the QCNN, the QResNet with a pretrained ResNet18, and the QuanNN, from classical input through quantum encoding and circuit layers to classical output.)
In this paper, we explore three different QNN algorithms amongst the pool of hybrid algorithms, namely, Quanvolutional Neural Networks [5], Quantum Convolutional Neural Networks [2], and Quantum ResNet [6], and perform an extensive comparative analysis. Our methodology first involves identifying efficient architectures amongst the commonly used QNNs and understanding their practical utility for classification tasks. Then we assess how variations of such algorithms impact their accuracy and robustness under architectural permutations of each algorithm. The architectural permutations are based on the implementation of interchangeable variational circuit layers over different qubit counts, varying repetition of the layers, and considering their optimal placement. By implementing the varied models, we evaluate the performance of QNNs based on the accuracy of the training process, which provides us with an understanding of hybrid quantum-classical convergence in correlation with our experimental approach. Such a comprehensive analysis is necessary to establish an understanding of the correlation between circuit architectures, their robustness, and their utility in QML.

1.1 Our Novel Contributions
An overview of our novel contributions is shown in Figure 2. Their brief descriptions with key features are presented below.
• We propose a methodology to investigate QNN models' circuit permutations and their effect on the accuracy for classification tasks. (Section 3)
• We analyze the effect of different circuit layer variations¹, namely, Random Circuit, Basic Entangling, and Strongly Entangling. (Section 4.2)
• We investigate the repetition of different layers to analyze the effect of circuit depth and complexity. (Section 4.3)
• We test the scalability of the algorithms by varying qubit counts. (Section 4.4)

¹The circuit variations are studied only for QNN models that support different types of entanglement.

2 SELECTED HYBRID QML ALGORITHMS
An overview of the hybrid QML algorithms is shown in Figure 3. A brief description of the key features of each algorithm and their implementations is presented in the following paragraphs.

Figure 3: Pipeline of the implemented hybrid QML algorithms. Each model utilizes a classical fully connected layer to transform quantum circuit measurements into classification probabilities. In the QCNN, classical convolutional and pooling layers are used for image downsizing to match the qubit count of a circuit.

2.1 Quanvolutional Neural Networks
The Quanvolutional Neural Network (QuanNN) is an innovative hybrid quantum-classical architecture developed in [5], which enhances the capabilities of classical CNNs by harnessing the potential of quantum computation. This architecture introduces a new type of transformation layer called the quanvolutional layer, akin to classical convolutional layers, composed of multiple quanvolutional filters which locally transform input data, extracting valuable features for classification. These filters correspond to a certain circuit design, which can either be generated randomly or based on a specific entanglement, namely Basic Entangling or Strongly Entangling. We chose the QuanNN as one of our benchmarking models because of its generalizability, which can be achieved by adhering to the following conditions: specifying an arbitrary integer number of quanvolutional filters in each layer, stacking multiple quanvolutional layers in the network, and defining layer-specific parameters like the encoding method, entanglement, and average quantum gates per qubit in the quantum circuit. The classical data transformation using quanvolutional filters can be formalized as follows:
(1) Start with a single filter q operating on subsections u_x of dataset images.
(2) Encode u_x into an initialized state i_x using an encoding function e.
(3) Apply the quantum circuit to i_x, producing an output quantum state o_x.
(4) Decode o_x with a decoding function d to ensure consistent outputs, resulting in the final decoded state f_x.
(5) This entire process, denoted as the "quanvolutional filter transformation" Q, is f_x = Q(u_x, e, q, d).
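To make the five-step filter transformation concrete, the following is a minimal PennyLane sketch of one plausible realization; it is our assumption of the structure, not the exact code used in [5] or in our experiments. A single random-circuit filter q acts on a flattened 2×2 patch u_x, with rotation encoding as e and Pauli-Z expectation values as the decoding d:

```python
import numpy as np
import pennylane as qml

n_qubits = 4  # one qubit per pixel of a 2x2 image patch
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quanv_filter(patch, weights):
    # e: encode each pixel of the patch u_x as a rotation angle
    for i, pixel in enumerate(patch):
        qml.RY(np.pi * pixel, wires=i)
    # q: the quanvolutional circuit, here a randomly generated layer
    qml.RandomLayers(weights, wires=range(n_qubits), seed=0)
    # d: decode the output state o_x into one classical value per qubit
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

weights = np.random.uniform(0, 2 * np.pi, size=(1, n_qubits))
u_x = np.array([0.0, 0.3, 0.7, 1.0])  # a flattened 2x2 image subsection
f_x = quanv_filter(u_x, weights)      # f_x = Q(u_x, e, q, d)
```

Sliding such a filter over every 2×2 patch of an image yields n_qubits output channels, analogous to the feature maps of a classical convolution.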
Figure 4: Overview of our comparative analysis methodology. (Quantum model configurations: QuanNN and QResNet with 4 & 9 qubits, Basic, Random & Strongly Entangling layers, and 1-6 layer repetitions; QCNN with 4 & 8 qubits, Basic Entangling & pooling layers. The pipeline runs from data preprocessing through the quantum layer variations to the trained HQML models and their evaluation.)

2.2 Quantum Convolutional Neural Networks
Similar to the QuanNN, the Quantum Convolutional Neural Network (QCNN) is a QML algorithm inspired by CNNs that was introduced in [2]. Unlike the QuanNN, the QCNN does not have room for lots of circuit design variations. The QCNN has a fully quantum implementation of convolutional and pooling layers. We chose this architecture in our experiments because it is one of the state-of-the-art QML models and its classical counterpart represents the state of the art in classical image recognition. Moreover, it is important to note that the QCNN presented in [2] makes use of only O(log(N)) variational parameters for input sizes of N qubits. This allows for its efficient training and implementation on realistic, near-term quantum devices.

The basic structure of the QCNN includes an input encoding circuit layer, a convolutional circuit layer, a pooling circuit layer, and a circuit measurement layer, each composed of parametric quantum gates. It is categorized as an HQML algorithm because it uses classical optimization techniques to update the parameterized gate weights. In the quantum convolutional and pooling layers, the interactions between quantum bits can effectively extract features from the input data based on the types of gates and their placement in each layer. Given the current NISQ devices, the QCNN is limited in terms of scalability and can only be implemented with a small number of qubits. Therefore, it requires classical layers for downsizing large inputs to match the qubit count. In our implementation, we employ single classical convolution and pooling layers for input downsizing, without loss of key features.
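As a rough illustration of this layered structure, here is a minimal PennyLane sketch of a QCNN-style circuit on 4 qubits, with hypothetical helpers conv_layer and pool_layer: encoding, a parameterized two-qubit "convolution" over neighbouring pairs, and a pooling step that halves the active qubits until one remains for measurement. This is a simplified structural assumption; the actual ansatz in [2] shares parameters across gates to reach the O(log(N)) parameter count.

```python
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

def conv_layer(params, wires):
    # parameterized two-qubit "convolutions" on neighbouring qubit pairs
    for i in range(0, len(wires) - 1, 2):
        qml.RY(params[i], wires=wires[i])
        qml.RY(params[i + 1], wires=wires[i + 1])
        qml.CNOT(wires=[wires[i], wires[i + 1]])

def pool_layer(params, wires):
    # pooling: entangle each pair, then drop one qubit of the pair
    kept = []
    for i in range(0, len(wires) - 1, 2):
        qml.CRZ(params[i // 2], wires=[wires[i], wires[i + 1]])
        kept.append(wires[i + 1])
    return kept

@qml.qnode(dev)
def qcnn(inputs, params):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))  # input encoding layer
    active, p = list(range(n_qubits)), 0
    while len(active) > 1:
        conv_layer(params[p:p + len(active)], active)  # convolutional layer
        p += len(active)
        active = pool_layer(params[p:p + len(active) // 2], active)
        p += len(active)                               # pooling halved the qubits
    return qml.expval(qml.PauliZ(active[0]))           # measurement layer

x = np.random.uniform(0, np.pi, n_qubits)    # downsized image features
params = np.random.uniform(0, 2 * np.pi, 9)  # 4+2 (stage 1) + 2+1 (stage 2) parameters
print(qcnn(x, params))
```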
2.3 Quantum ResNet
Our decision to employ the Quantum ResNet (QResNet) algorithm was inspired by [6], which introduced a hybrid quantum-classical strategy for deep residual learning. The primary challenge in this approach is to establish a connection between the residual block structure and the quantum processing layers. In our specific context, our focus lies in experimenting with various quantum layer types. We delve into an analysis of potential methods and variations that combine residual learning with quantum machine learning. Notably, the work in [4] emphasizes that experimental outcomes demonstrate the QResNet's superior capability of learning an unknown unitary transformation and its enhanced robustness with noisy data, compared to state-of-the-art methods. These findings motivate us to further explore and fine-tune this promising architecture. Our choice of the QResNet stems from our desire to test this HQML architecture and its impact on pre-trained classical models.
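A minimal sketch of the idea follows, under our assumptions (a pretrained ResNet18 feature extractor, as indicated in Figure 2, and a residual skip connection wrapped around a variational quantum layer); the exact wiring in [6] and in our pipeline may differ:

```python
import torch.nn as nn
import pennylane as qml
from torchvision import models

n_qubits, n_layers = 4, 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qlayer(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

class QResNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        backbone = models.resnet18(weights="DEFAULT")  # pretrained classical model
        backbone.fc = nn.Linear(backbone.fc.in_features, n_qubits)
        self.backbone = backbone
        shapes = {"weights": qml.StronglyEntanglingLayers.shape(n_layers, n_qubits)}
        self.quantum = qml.qnn.TorchLayer(qlayer, shapes)
        self.head = nn.Linear(n_qubits, n_classes)

    def forward(self, x):
        z = self.backbone(x)
        z = z + self.quantum(z)   # residual connection around the quantum layer
        return self.head(z)
```

The residual addition lets the classical features bypass the quantum layer, so the quantum circuit only has to learn a correction on top of the pretrained representation.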
3 COMPARATIVE ANALYSIS METHODOLOGY
With the aim of understanding hybrid quantum-classical neural networks in depth, we divide our problem into interdependent experimental sections, as shown in Figure 4. The sectioning allows us to identify areas of possible efficient implementation and improvements for the HQML algorithms discussed in Section 2.

3.1 Overview of Analyses
Our experimental sections are summarized in the Quantum Layer Variations table in the Quantum Model Configurations group of Figure 4. For each algorithm, we analyze the impact of different architectural permutations based on:
• Entanglement variation of the quantum circuit: The predefined circuit structure of the QuanNN and QResNet enables us to change the entanglement type of the circuit. Each circuit has its own orientation of CNOT gates and parameterized gates corresponding to the strength of their entanglement. For the QCNN, in contrast, there is no room for changing the entanglement type of the circuit, since it follows the structure defined in [2].
• Layer count variation: For each algorithm, a circuit can be applied multiple times to an input, which corresponds to understanding how the varying depth of a quantum circuit affects the model accuracy.
• Qubit count variation: The strength and capability of a circuit depend on the number of qubits it has. Hence, in our experiments, we vary the qubit counts in the architecture to analyze how the variation correlates with the models' learning curve and accuracy. A sketch enumerating these permutations follows the list below.
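The full permutation grid can be enumerated mechanically. The following Python sketch (a hypothetical helper mirroring the configurations listed in Figure 4; the QCNN layer range is our assumption, since its depth is tied to the qubit count) makes the experiment space explicit:

```python
from itertools import product

# Configuration space from Figure 4: entangling circuit x layer repetitions x qubits.
SEARCH_SPACE = {
    "QuanNN":  {"circuits": ["RC", "BE", "SE"], "layers": range(1, 7), "qubits": [4, 9]},
    "QResNet": {"circuits": ["RC", "BE", "SE"], "layers": range(1, 7), "qubits": [4, 9]},
    "QCNN":    {"circuits": ["BE"],             "layers": range(1, 4), "qubits": [4, 8]},
}

for algo, space in SEARCH_SPACE.items():
    for circuit, layers, qubits in product(space["circuits"], space["layers"], space["qubits"]):
        config = {"algorithm": algo, "circuit": circuit, "layers": layers, "qubits": qubits}
        # train_and_evaluate(config)  # would build the model and record its accuracy curve
        print(config)
```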
3.2 Comparison Metrics
We intentionally made our experimental setup simple and precise, to gain more understanding about how the different independent parameters of a circuit's design, for our selected algorithms, can influence accuracy. To understand the contribution and convergence behaviours of the aforementioned variations, we focus on analyzing each model's classical accuracy in relation to the learning curve attained over the training progress.

Table 1: Training Environment Specifications

Software Framework: PennyLane (PL), Qiskit (QK)
Back-End Simulator: lightning.qubit (PL), qasm_simulator (QK)
Back-End Machine: NVIDIA RTX 6000 Ada
Deep-Learning Interface: PyTorch
Dataset: MNIST [3]
Training Samples, Testing Samples: PL: (100, 100), QK: (500, 100)
Epochs, Batch Size, Learning Rate: 5, 5, 0.01
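For concreteness, a minimal training loop under the Table 1 settings might look as follows. This is a sketch under stated assumptions: the Adam optimizer and cross-entropy loss are not specified in the paper, and the linear model is a stand-in for any of the three HQML models:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

EPOCHS, BATCH_SIZE, LR = 5, 5, 0.01   # hyperparameters from Table 1

mnist = datasets.MNIST(root="data", train=True, download=True,
                       transform=transforms.ToTensor())
# classes [0, 1, 2, 3] only, 100 training samples (PennyLane setting in Table 1)
idx = torch.where(mnist.targets < 4)[0][:100].tolist()
loader = DataLoader(Subset(mnist, idx), batch_size=BATCH_SIZE, shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 4))  # stand-in for an HQML model
optimizer = torch.optim.Adam(model.parameters(), lr=LR)     # optimizer choice is assumed
loss_fn = nn.CrossEntropyLoss()

for epoch in range(EPOCHS):
    correct = 0
    for x, y in loader:
        optimizer.zero_grad()
        logits = model(x)
        loss = loss_fn(logits, y)
        loss.backward()
        optimizer.step()
        correct += (logits.argmax(dim=1) == y).sum().item()
    print(f"epoch {epoch + 1}: training accuracy {correct / len(idx):.2f}")
```

The learning curves discussed in the following sections are exactly such per-epoch accuracy traces.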

4 EVALUATION AND DISCUSSION

4.1 Experimental Setup
This paper investigates the design variations of the quantum components of HQML architectures. Hence, to ensure a fair comparison, we use a uniform classical optimization environment for both the classical and the quantum layers, as specified in Table 1. In the HQML architectures of our experiment, all the algorithms have a classical layer after the quantum layer to convert the quantum measurement values into classical probabilities. In addition, the QCNN has a classical layer before the input encoding for image downsizing. As for the quantum layers, the PennyLane and Qiskit frameworks provide PyTorch integration modules, which convert a quantum layer into PyTorch trainable layers and perform classical optimization of the gates based on the training hyperparameters specified in Table 1. In our experiments, to ensure that the output of a circuit corresponds to the minimum number of qubits of our qubit count pool, we use a subset of the MNIST dataset [3] consisting of only 4 classes, [0, 1, 2, 3]. Thus, the last classical layer of our models consists of only 4 neurons.
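As an illustration of this integration, a quantum circuit can be wrapped as a trainable PyTorch module. The following minimal PennyLane sketch (an assumed configuration, not our exact model code) uses the lightning.qubit back-end from Table 1 and the 4-neuron classical head:

```python
import torch.nn as nn
import pennylane as qml

n_qubits, n_layers = 4, 4
dev = qml.device("lightning.qubit", wires=n_qubits)  # PennyLane back-end from Table 1

@qml.qnode(dev)
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

weight_shapes = {"weights": qml.BasicEntanglerLayers.shape(n_layers, n_qubits)}
model = nn.Sequential(
    qml.qnn.TorchLayer(circuit, weight_shapes),  # quantum layer as a trainable PyTorch layer
    nn.Linear(n_qubits, 4),                      # last classical layer: 4 neurons, classes [0-3]
    nn.Softmax(dim=1),                           # measurement values -> class probabilities
)
```

In practice, the 28×28 images are first reduced to n_qubits features, e.g., by the QCNN's classical convolution and pooling layers.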
For analyzing the HQML components of our architecture, three different kinds of variation are applied to the quantum layer of an algorithm, as shown in Figure 4 (a sketch of the three circuit types follows the list below):
• Entanglement variation of the quantum circuit: For algorithms that allow different entanglement orientations, we vary their architectures with a Random Circuit (RC), Basic Entangling (BE), or Strongly Entangling (SE) circuit.
• Layer count variation: Each quantum layer can have numerous repetitions of a circuit. In our implementation, we repeat the circuit from 1 to 6 times. Note: the QCNN has fewer layer variations compared to the QuanNN and QResNet because its number of layers equals √(qubits).
• Qubit count variation: The QuanNN and QResNet circuits follow a square filter-like structure where the qubit count must be a square number. Hence, we experiment with 4 and 9 qubits. On the other hand, the qubit count of the QCNN must be an even number that can be reduced in half at each pooling step. Hence, we experiment with 4 and 8 qubits.
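The three entangling-circuit variations map naturally onto PennyLane's layer templates. The following sketch is our assumption of one natural realization (the paper does not list its exact template calls); it shows how each variation is selected and how the weight shapes grow when the circuit is repeated 1 to 6 times:

```python
import numpy as np
import pennylane as qml

n_qubits = 4
wires = list(range(n_qubits))

def entangling_block(kind, weights):
    """One repetition of the variational circuit; `kind` selects the variation."""
    if kind == "RC":    # Random Circuit: randomly placed rotations and CNOTs
        qml.RandomLayers(weights, wires=wires, seed=7)
    elif kind == "BE":  # Basic Entangling: one rotation per qubit + a ring of CNOTs
        qml.BasicEntanglerLayers(weights, wires=wires)
    elif kind == "SE":  # Strongly Entangling: three rotations per qubit + ranged CNOTs
        qml.StronglyEntanglingLayers(weights, wires=wires)

# Repeating the circuit L times (L = 1..6) grows the leading weight dimension:
L = 3
shapes = {
    "RC": (L, n_qubits),                                    # rotation count is a free choice
    "BE": qml.BasicEntanglerLayers.shape(L, n_qubits),      # (3, 4)
    "SE": qml.StronglyEntanglingLayers.shape(L, n_qubits),  # (3, 4, 3)
}
print({kind: np.zeros(shape).shape for kind, shape in shapes.items()})
```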
Note: We implemented all these pipelines in Qiskit [10] & PennyLane [1] to enable the support for both QML frameworks. They are purposely presented separately (and can be identified in the boldened figure captions) to avoid a cross-tool comparison, which is out of the scope of this paper.

4.2 Results: Entangling Circuit Variations

Figure 5: PennyLane: RC: Random Circuit, SE: Strongly Entangling, BE: Basic Entangling. Circuit variation across QuanNN and QResNet.

Figure 5 shows the accuracy evolution over training epochs for the QuanNN and QResNet at different entanglement settings (RC, BE & SE) for the PennyLane implementation. As demonstrated by the accuracy and convergence gap between the two algorithms, it is evident that the QuanNN learns significantly better than the QResNet (see labels a and b).

Further analyses show that the RC QuanNN (label c) acquires lower accuracy than the other entanglement settings, whereas for the QResNet, the RC model (label d) achieves the highest accuracy compared to the SE and BE settings. Towards the end of the training, we can see that the accuracy of the QuanNN, for all three entanglements, converges to around 80%, with the SE QuanNN having the highest accuracy. Following the traces for the BE and SE circuits, for both the QuanNN and QResNet (labels e and f, respectively), it is noticeable how the traces are similar within each algorithm. On the other hand, RC varies in learning compared to SE and BE in both algorithms. In the RC QuanNN (label c), the accuracy improves at a slower pace compared to SE and BE, but converges close to them towards the end. As for the RC QResNet, the rate of learning is similar to BE and SE, but with a higher accuracy throughout.

Figure 6 shows the same evolution graphs as Figure 5, but for the Qiskit models. Even for the Qiskit implementations, the accuracy gap between the QuanNN and QResNet models is the same as for PennyLane (labels a and b). It is quite evident that the QuanNN models in Qiskit have a different learning curve, with a great accuracy jump after the first epoch (label c).
Figure 6: Qiskit: RC: Random Circuit, SE: Strongly Entangling, BE: Basic Entangling. Circuit variation across QuanNN and QResNet.

Note: the QCNN is not presented in this comparison because only the Basic Entangling circuit can be implemented in this architecture.

Figure 7: PennyLane: Layer variation across different entangling circuits for QuanNN and QResNet. Aside from minor perturbations, an overall trend of improving accuracy is observed with the increasing number of layers, for every type of circuit.

Figure 8: Qiskit: Layer variation across different entangling circuits for QuanNN and QResNet. No overall trend is observed with an increasing number of layers.

Figure 9: PennyLane: Initial learning behavior of layer variations, for the QuanNN with Basic Entangling. Having 3 layers shows a clear advantage over the other configurations in the first epochs, but the accuracy of the 4-layer QuanNN, despite being slow at first, catches up quickly with the 3-layer counterpart and converges to better results in the majority of the experiments.
4.3 Results: Layer Count Variations correlation with the accuracy (see label d).
Figure 7 shows the accuracy of the QResNet and QuanNN models with a varying number of layers, implemented in PennyLane. From the results, we can gauge that there is a minor difference between 2, 3, and 4 layers for the QResNet models (see labels b and c). Moreover, adding more than 4 layers to the QuanNN does not always contribute to increasing the accuracy (see labels a and e). For the QResNet, it can be observed that the accuracy increases quite significantly for the models with 4, 5, and 6 layers (see d, e, and f), most notably for the circuit with 4 layers. Based on this observation, we conducted the remaining experiments with 4 layers in places where no layer variation was involved. Even though the accuracy of the QResNet is lower than the QuanNN, the algorithm has some potential because it can improve its accuracy with an increasing number of layers (see labels d and f).

In Figure 8, we can observe that the QuanNN and QResNet models implemented in Qiskit do not show an overall trend with respect to increasing the number of layers, which indicates that the circuit depth does not have enough impact on these models' accuracy (see labels a, b and c). However, it is quite evident for the QCNN models that increasing the number of layers has a positive correlation with the accuracy (see label d).

Figure 9 shows the learning curve for the QuanNN with Basic Entangling for different layer counts, implemented in PennyLane. For 1 layer, the learning curve is constant throughout and little improvement is observed (see label c). For more than one layer, we can observe that the learning improves with an increasing number of layers. However, the optimal peak is at 4 layers (see label b), because beyond that the learning curve drops to being close to the 1-layer curve (see label a). Nonetheless, a steady and gradual improvement in accuracy is observed for more than one layer.

For the models implemented in Qiskit, as shown in Figure 10, we can observe that the QCNN models with 3 layers have the highest accuracy towards the end of the first epoch (see label a), albeit with a very steadily increasing learning curve (see label c). As for the QuanNN models, although the initial accuracy is lower than the QCNN 3L, the models have a rapidly increasing learning curve. Moreover, we can see that the learning curves for the QuanNN models peak only until 5 layers (QuanNN 5L), because beyond that, for 6 layers, a very slow and steady improvement is observed (see label b).
Figure 10: Qiskit: Initial learning behaviour of different Basic Entangling layer variation counts.

4.4 Results: Qubit Count Variations

Figure 11: PennyLane: Qubit count variation across different quantum entangling circuits for QuanNN and QResNet. It can be noted that there is a positive correlation between the number of qubits in the QuanNN model and its accuracy.

Figure 11 illustrates the results for the QuanNN and QResNet models with different qubit counts, implemented in PennyLane. A clear pattern can be observed for the QuanNN models, where there is a consistent positive correlation between the number of qubits and the model accuracy, with increasing layer counts (see labels a, b, and c), indicating that the models learn better with an increasing number of qubits. On the other hand, the QResNet models with the Basic Entangling circuit and the Random Circuit do not show any significant trend. However, the QResNet with the Strongly Entangling circuit shows an increase in accuracy with an increasing qubit count (see label d), which might lead one to derive that the QResNet with a Strongly Entangling circuit has a positive learning capability with an increased number of qubits.

Figure 12: Qiskit: Qubit count variations across different entangling circuits for QuanNN, QResNet, and QCNN. The QCNN accuracy positively correlates with the increasing number of qubits.

Figure 12 depicts the variation of the models implemented in Qiskit with regard to the varying qubit count. Firstly, we can notice that the QCNN accuracy improves with an increasing number of qubits, which indicates that the QCNN learns better with more qubits (see label a). A similar trend is observed for the QuanNN and QResNet, but only for the models with the Random Circuit (see labels b and c). From this analysis, we can gauge that changing the number of qubits in the Qiskit models has a varying impact for different entangling circuits.
4.5 Result Discussion & Findings
We observed from the results that different variations combined together have a compounding effect on the model accuracy, as it is not trivial to determine the best set of permutations for a given HQML algorithm. For instance, the QResNet had stagnant improvement across most models; on the other hand, if implemented with Strongly Entangling and 9 qubits, its accuracy improves drastically. However, in most cases, the accuracy gain achieved when increasing the number of qubits comes at the cost of increased execution time.

In addition, varying the number of layers with different entangling circuits can impact the accuracy differently, as we observed in our results above. In some cases, the increasing accuracy with increasing depth indicates that there is potential in applying more quantum layers at the initial stages of HQML models instead of classical convolutional layers. Moreover, as observed in Figure 11, the accuracy of HQML models keeps increasing as the model becomes more complex (i.e., with a higher number of layers and qubits). However, the more complex a model becomes, the more time it takes to execute.

All of our findings show that the best circuit configuration for a given algorithm cannot be identified easily when keeping all of these constraints in mind.

5 CONCLUSION
In our work, we studied the accuracy variation of 3 different algorithms, the QuanNN, QCNN, and QResNet, by configuring the quantum architecture of each circuit, varying the layer count, the qubit count, and the type of entangling circuit. These 3 categories of variation enabled us to design different permutations of circuits for a given algorithm and study their effect on its accuracy.

This study has granted us valuable insights into the establishment and exploration of avenues for optimizing hybrid quantum-classical neural network architectures. We have effectively advanced our pipelines toward practical improvements, using foundational structures for experimentation. The detailed deconstruction of the architecture offers transparency into the parameters that influence the performance of quantum circuits
within neural networks. Such an in-depth analysis serves as the building block for designing efficient HQML architectures and paves the way to develop robust HQML algorithms applicable in the NISQ era, while also considering design aspects that can smoothly transition to fault-tolerant quantum computers.

ACKNOWLEDGMENTS
This work was supported in part by the NYUAD Center for Quantum and Topological Systems (CQTS), funded by Tamkeen under the NYUAD Research Institute grant CG008.

REFERENCES
[1] Ville Bergholm, Josh Izaac, Maria Schuld, Christian Gogolin, Shahnawaz Ahmed, Vishnu Ajith, M. Sohaib Alam, Guillermo Alonso-Linaje, B. AkashNarayanan, Ali Asadi, Juan Miguel Arrazola, Utkarsh Azad, Sam Banning, Carsten Blank, Thomas R Bromley, Benjamin A. Cordier, Jack Ceroni, Alain Delgado, Olivia Di Matteo, Amintor Dusko, Tanya Garg, Diego Guala, Anthony Hayes, Ryan Hill, Aroosa Ijaz, Theodor Isacsson, David Ittah, Soran Jahangiri, Prateek Jain, Edward Jiang, Ankit Khandelwal, Korbinian Kottmann, et al. 2022. PennyLane: Automatic differentiation of hybrid quantum-classical computations. arXiv:1811.04968
[2] Iris Cong, Soonwon Choi, and Mikhail D. Lukin. 2019. Quantum convolutional neural networks. Nature Physics (2019).
[3] Li Deng. 2012. The MNIST Database of Handwritten Digit Images for Machine Learning Research [Best of the Web]. IEEE Signal Processing Magazine (2012).
[4] E. Ghasemian and M. K. Tavassoly. 2022. Hybrid classical-quantum machine learning based on dissipative two-qubit channels. Scientific Reports (2022).
[5] Maxwell Henderson, Samriddhi Shakya, Shashindra Pradhan, and Tristan Cook. 2019. Quanvolutional Neural Networks: Powering Image Recognition with Quantum Circuits. arXiv:1904.04767
[6] Yanying Liang, Wei Peng, Zhu-Jun Zheng, Olli Silvén, and Guoying Zhao. 2021. A hybrid quantum-classical neural network with deep residual learning. arXiv:2012.07772
[7] Stefano Markidis. 2023. Programming Quantum Neural Networks on NISQ Systems: An Overview of Technologies and Methodologies. Entropy (2023).
[8] Fabio Valerio Massoli, Lucia Vadicamo, Giuseppe Amato, and Fabrizio Falchi. 2022. A Leap among Quantum Computing and Quantum Neural Networks: A Survey. arXiv:2107.03313
[9] John Preskill. 2018. Quantum Computing in the NISQ era and beyond. Quantum (2018).
[10] Qiskit contributors. 2023. Qiskit: An Open-source Framework for Quantum Computing.
[11] N. Schetakis, D. Aghamalyan, P. Griffin, and M. Boguslavsky. 2022. Review of some existing QML frameworks and novel hybrid classical-quantum neural networks realising binary classification for the noisy datasets. Scientific Reports (2022).
[12] Maria Schuld and Nathan Killoran. 2019. Quantum Machine Learning in Feature Hilbert Spaces. Physical Review Letters (2019).
[13] Kamila Zaman, Alberto Marchisio, Muhammad Abdullah Hanif, and Muhammad Shafique. 2023. A Survey on Quantum Machine Learning: Current Trends, Challenges, Opportunities, and the Road Ahead. arXiv:2310.10315
