FedQNN: Federated Learning Using Quantum Neural Networks
Nouhaila Innan1, Muhammad Al-Zafar Khan2, Alberto Marchisio3,4, Muhammad Shafique3,4, and Mohamed Bennai1

1 Quantum Physics and Magnetism Team, LPMC, Faculty of Sciences Ben M'sick, Hassan II University of Casablanca, Morocco
2 Quantum United Arab Emirates, UAE
3 eBRAIN Lab, Division of Engineering, New York University Abu Dhabi (NYUAD), Abu Dhabi, UAE
4 Center for Quantum and Topological Systems (CQTS), NYUAD Research Institute, NYUAD, Abu Dhabi, UAE
Abstract—In this study, we explore the innovative domain of Quantum Federated Learning (QFL) as a framework for training Quantum Machine Learning (QML) models via distributed networks. Conventional machine learning models frequently grapple with issues concerning data privacy and the exposure of sensitive information. Our proposed Federated Quantum Neural Network (FedQNN) framework emerges as a cutting-edge solution, integrating the singular characteristics of QML with the principles of classical federated learning. This work thoroughly investigates QFL, underscoring its capability to secure data handling in a distributed environment and to facilitate cooperative learning without direct data sharing. Our research corroborates the concept through experiments across varied datasets, including genomics and healthcare, thereby validating the versatility and efficacy of our FedQNN framework. The results consistently exceed 86% accuracy across three distinct datasets, proving its suitability for conducting various QML tasks. Our research not only identifies the limitations of classical paradigms but also presents a novel framework to propel the field of QML into a new era of secure and collaborative innovation.

Index Terms—Federated Learning, Quantum Federated Learning, Quantum Machine Learning, Quantum Neural Network

I. INTRODUCTION

Noisy Intermediate-Scale Quantum (NISQ) devices have brought about new possibilities and challenges in Quantum Machine Learning (QML) [1], [2]; despite their potential, these devices still require improvement to mitigate the inherent quantum noise that limits the large-scale applicability of QML algorithms. One of the key hurdles in implementing QML is the computational expense of training Quantum Neural Networks (QNNs) and their associated quantum layers [3]–[9]. This challenge mirrors the difficulties in training Deep Neural Networks (DNNs). The concept of Federated Machine Learning (FedML) was introduced to address this in the classical realm of ML; it represents a paradigm shift in model training and deployment, addressing key concerns in big data and privacy-sensitive applications. FedML was pioneered by McMahan et al. at Google, who introduced a decentralized model-building procedure to overcome the drawbacks associated with centralized data processing, such as privacy risks and hardware limitations [10].

FedML operates on distributed training, where models are trained on localized datasets across multiple nodes or "clients" without sharing the actual data between them, thus ensuring privacy preservation. The model parameters are then aggregated to create a comprehensive model for end-users. This decentralized training approach enhances data privacy and reduces the computational and hardware resources required for model training. The advantages of FedML include:
• Non-exposure of sensitive data between clusters.
• Reduced computational and hardware resources due to distributed model training.
• Access to heterogeneous datasets from various industry sectors, which increases the scope and diversity of the models.

Building upon the principles of FedML, Quantum Federated Learning (QFL) emerges as a specialized solution for the quantum domain [11], [12]. QFL adapts the federated learning framework to the unique characteristics and requirements of Quantum Computing (QC). This adaptation is essential to address challenges such as the fragility of quantum states and to integrate noise mitigation strategies, which are crucial in the NISQ era.

However, the integration of FedML into QC is more complex. The distinctions between FedML and QFL are significant, necessitating thorough understanding and adaptation, which includes handling quantum data and establishing communication protocols between quantum and classical nodes. Quantum data, represented by superimposed qubit states, are inherently fragile due to decoherence and susceptibility to noise. This fragility requires sophisticated quantum error correction and noise mitigation strategies, distinct from classical FedML approaches. Conversely, classical FedML has advanced significantly, with many contemporary optimization procedures that have enhanced its efficiency and robustness. This maturity contrasts sharply with the nascent stage of QFL, which is still primarily
grappling with the complexities of quantum data and the nuanced challenges of integrating quantum and classical systems. Therefore, while classical FedML strides ahead with well-established protocols and optimizations, QFL is navigating its foundational phase, focusing on the fundamental challenges unique to QC and learning.

Additionally, the communication between quantum and classical nodes requires novel protocols to translate Quantum Information (QI) into a classical form and vice versa. This is a challenging task due to the quantum-classical information barrier and asymmetry. Existing QFL methods face several challenges, such as the efficient and secure transmission of QI [13], the stabilization of quantum states during learning [14], and the scalability of quantum federated networks [15]. Despite these challenges, the motivation for pursuing QFL is compelling, offering the potential to leverage QC's power while maintaining the privacy and decentralized nature of FedML. This combination could lead to significant advancements in finance [16], [17], condensed matter physics and chemistry [18], healthcare [19], [20], bioinformatics [21], tomography [22], and more, where large-scale, privacy-preserving computational power is paramount.

The novel contributions of this work can be summarized as follows:
• Developing a QNN model that enables the creation of a quantum model precisely tailored to specific requirements, such as high-dimensional data processing, while ensuring robustness against quantum noise and adaptability to different quantum hardware architectures.
• Constructing a novel Federated Quantum Neural Network (FedQNN) framework with decentralized quantum model training and secure model update aggregation, leveraging the strengths of collaborative learning while ensuring data privacy.
• Conducting comprehensive experiments on three diverse datasets: Iris, breast cancer, and a custom synthetic DNA dataset. These experiments demonstrate the broad applicability and versatility of the approach in genomics and medicine.
• Exploring the impact of client numbers on the performance of our FedQNN method, providing valuable insight into the scalability and accuracy of the approach.
• Evaluating our FedQNN framework on real Quantum Processing Units (QPUs) from IBM Quantum. This evaluation, using the synthetic DNA dataset, achieves an accuracy of more than 80%, underscoring the practical effectiveness of the approach.

Through these research contributions, our FedQNN framework exhibits adaptability to diverse datasets, including those in healthcare and genomics, and demonstrates robust performance across different QPUs. Our objective is to accelerate the development of scalable and robust QML through QFL, paving the way for wider adoption and real-world applications of this revolutionary technology. In addressing the specific challenges of QFL, our framework ensures efficient and secure transmission of QI by keeping local datasets on client devices and employing secure communication for model updates. This strategy helps stabilize quantum states during learning, as sensitive states are not transferred over the network; rather, only model updates are aggregated. Furthermore, the decentralization of the learning process significantly improves the scalability of our framework, allowing more clients to be accommodated without a loss in performance or security.

This paper is organized as follows. In Sec. II, we provide a comprehensive background on FedML and a literature review of the most relevant works in the field. In Sec. III, we discuss our FedQNN framework and the algorithmic approach we have employed. In Sec. IV, we present the results of our experiments. In Sec. V, we conclude this research and reflect on the results.

II. BACKGROUND AND RELATED WORK

A. QC and QML

Classical physics laws break down at the subatomic level due to measurement uncertainty and the observation of counterintuitive particle behaviors. Pondering this, the Nobel laureate Richard Feynman [23] conceived the idea of using a computer to simulate quantum systems in a way that holistically leveraged the principles of quantum mechanics for computing. As opposed to storing information in classical 0, 1 bits, information can be stored as superimposed states of $|0\rangle$ and $|1\rangle$ due to quantum coherence; hence qubits, a weaving together of "quantum" and "bit". Allowing systems to exist in multiple states simultaneously opens up unforeseen capabilities for computing and information storage. However, QC technology is still in its infancy, as current state-of-the-art quantum computers are plagued by environmental noise and decoherence.

QML is the coalescence and enhancement of regular ML with QC. A typical QML model of the kind ubiquitously used, as presented in Fig. 1, follows the parameter-adjustment paradigm and works as follows: classical features $\mathbf{x} = (x_1, x_2, \ldots, x_m) \in \mathbb{R}^{k \times m}$ are mapped into quantum states $\{|\psi_i\rangle\}_{i=1}^{n}$ via some encoding scheme $\varphi : \mathbf{x} \to |\psi\rangle$. Often, this involves data preprocessing and feature engineering, and since there is a limited number of qubits to work with [24], the number of quantum states is less than the number of classical data points, i.e., $\mathrm{card}(|\psi_i\rangle) \leq n$. The quantum states are then fed to a gate model that acts like a feed-forward neural network, $f(H^{\otimes n}, X, Y, R_\chi, \cdots)$, which creates superposition amongst the qubit states and performs a series of operations to adjust the network parameters to attain optimality. Measurements on the states are then performed; the criterion for checking whether an optimal solution has been achieved is the loss function, $J(y, \hat{y}; \omega) \approx 0$, which subsequently yields the weights $\omega^* = (\omega_1^*, \omega_2^*, \ldots, \omega_p^*) = \arg\min_\omega J$. If optimality is not achieved, the network iteratively adjusts the parameters; if it is, a prediction $\hat{y}$ is made.
Fig. 1: The general architecture of QML models starts with classical data X, which undergoes data preprocessing and feature engineering. This data is converted into quantum-compatible states $|\psi_j\rangle$ via a mapping function $\phi$. For computation, these states are processed by a quantum circuit using quantum gates $f(\theta_n, X, R_{\ldots})$, including rotation and entangling gates. The model includes an iterative update loop where parameters are refined based on the loss function $J(\psi, y)$ until nearly converging ($J(\psi, y) \approx 0$). Finally, the quantum circuit's output is measured to produce the final model output $g$ for various predictive tasks.

QML offers several benefits over classical ML. These include, amongst others:
1) The potential to solve certain classical problems exponentially faster than the current state-of-the-art methods. This speedup has already been realized in problems ranging from unstructured search to optimization problems.
2) Faster and more streamlined processing of data than classical processing methods. This is due to the Hilbert space being larger and more encompassing than classical real Euclidean spaces; $\dim \mathcal{H} > \dim \mathbb{R}$.
3) Some QML algorithms have been theoretically demonstrated to solve certain classes of problems in fewer computational steps than their classical counterparts.
By utilizing the quantum mechanical properties of superposition and entanglement, QML offers a promising

Algorithm 1 FedML
1: Input: Set of clients $\{C_1, C_2, \ldots, C_k\}$, their corresponding datasets $\{D_1, D_2, \ldots, D_k\}$, learning rate $\alpha$
2: Output: Aggregated model $M$
3: while $J(\boldsymbol{\theta}_{i,j}; D_i) \not\approx 0$ do
4:   for each client $C_i$ in $1$ to $k$ do
5:     Train models $M_i$ on datasets $D_i$; WLOG, do not share $(M_i, D_i)$ with $(M_j, D_j)$
6:   end for
7:   Expose model parameters/weights $\boldsymbol{\theta}_i = (\theta_1^1, \theta_2^2, \ldots, \theta_p^k)$ to the centralized server
8:   Create the aggregated model $M = \mathrm{agg}(M_1, M_2, \ldots, M_k)$ with parameters $\Theta = (\phi_1, \phi_2, \ldots, \phi_p)$
9:   Update parameters: $\boldsymbol{\theta}_{i,j} \leftarrow \boldsymbol{\theta}_{i,j} - \alpha \nabla_{\boldsymbol{\theta}_{i,j}} J(\boldsymbol{\theta}_{i,j}; D_i)$
10: end while
11: Expose $M$ to the public
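Algorithm 1 maps almost line-for-line onto code. The minimal rendering below is a sketch under two assumptions: a single gradient step stands in for each client's local training, and agg() is an equal-weight mean (practical FedML systems usually weight clients by dataset size, as in FedAvg [10]).

import numpy as np

def local_update(theta, D_i, alpha, grad_J):
    # Steps 5 and 9: client C_i trains on its own D_i; the data never leaves.
    X_i, y_i = D_i
    return theta - alpha * grad_J(theta, X_i, y_i)

def agg(local_models):
    # Step 8: build the aggregated model M from exposed parameters only.
    return np.mean(local_models, axis=0)

def fedml_round(theta, client_datasets, alpha, grad_J):
    # Steps 4-8: one federated round over clients C_1..C_k.
    local_models = [local_update(theta.copy(), D_i, alpha, grad_J)
                    for D_i in client_datasets]
    return agg(local_models)  # parameters, never (M_i, D_i), are shared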
Fig. 4: The FedQNN framework, where each Client/Device retains its Local Data, ensuring data privacy as no raw data is shared with the Central Server. Clients independently train a QNN model with their data, and only quantum model updates (comprising parameters or operations) are communicated to the central server, which functions as an Aggregation Point, synthesizing a global model from these updates. Secure communication protocols are employed for exchanging updates and preventing the exposure of individual data.
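A minimal sketch of one such communication round follows, reusing a QNN loss such as the one in the earlier sketch (passed in as loss_fn). The number of local steps and the plain averaging of weight arrays are our assumptions, as the excerpt does not pin down the exact aggregation operator.

from pennylane import numpy as np

def fedqnn_round(global_weights, clients, opt, loss_fn, local_steps=5):
    updates = []
    for X_i, y_i in clients:                  # each client's private data
        w = np.array(global_weights, requires_grad=True)
        for _ in range(local_steps):          # local QNN training only
            w = opt.step(lambda v: loss_fn(v, X_i, y_i), w)
        updates.append(w)                     # only weights leave the client
    # Aggregation point: synthesize the global model from updates alone.
    return np.mean(np.stack(updates), axis=0)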
Fig. 5: Experimental setup and tool flow for conducting the experiments: The process begins with datasets fed into the QNN within the FedQNN framework. The QNN, implemented using the PennyLane framework, undergoes optimization through iterative training, with the number of iterations indicated for the PennyLane simulator and IBM QPUs. Post-training, the model's performance is assessed using standard binary classification metrics, including precision, recall, accuracy, and the F1 score.

CPUs and an NVIDIA Tesla T4 GPU. We conduct extensive experimentation across three diverse datasets: the Iris dataset [36], the breast cancer dataset [37], and a synthetic DNA dataset designed for classifying promoter and non-promoter sequences. Each dataset presents unique challenges and characteristics. The Iris dataset, with its 150 samples of three different Iris flower species, tests basic classification abilities. The breast cancer dataset consists of 286 instances with nine attributes, including linear and nominal types for both classes. The synthetic DNA dataset, tailored for genomics applications, stands out for its focus on the binary classification of genetic sequences. It contains 200 samples, categorized into promoters and non-promoters.

Throughout the experimentation, we observed intriguing accuracy dynamics over training iterations with FedQNN across the three diverse datasets. The accuracy plots shown in Fig. 6 for each dataset show a distinct zigzag pattern, reflecting the convergence and fluctuations in the learning process. However, it is important to highlight that the mean accuracy over ten distinct trials, with each trial comprising 100 iterations, did not exhibit this zigzag behavior for the three datasets, suggesting a more stable learning trend upon averaging multiple iterations. Despite the variations, each model consistently reaches an impressive peak accuracy, typically between 85% and 90%. This level of accuracy serves as compelling evidence of the effectiveness of our FedQNN framework in multi-dataset classification scenarios. Tab. II supports these findings, showcasing essential metrics such as precision, recall, F1-score, and accuracy for each dataset. The consistently high accuracy values reiterate the robustness and reliability of our method for handling various datasets.

TABLE II: Classification reports for different datasets.

Dataset         Precision   Recall   F1-Score   Accuracy
Iris            0.89        0.89     0.89       0.90
Breast Cancer   0.85        0.85     0.92       0.86
DNA             0.90        0.88     0.89       0.90

Fig. 6: Accuracies over iterations for various datasets: (a) Iris, (b) breast cancer, and (c) DNA, displaying a distinctive zigzag pattern where high accuracies are periodically reached, as exemplified by pointers ①, ②, and ③. This pattern, consistent across the three datasets, suggests a recurrent peaking in model performance at certain iterations. When considering the mean accuracy, calculated from ten trials, a clearer learning curve emerges. This curve demonstrates a general trend of improvement and eventual stabilization in model performance, highlighted by pointers ④, ⑤, and ⑥ towards the end of the iterations.
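Rows like those in Tab. II can be reproduced from a trained model's predictions with standard tooling; scikit-learn is our illustrative choice here (the excerpt does not name a metrics library), and macro averaging is an assumption for the multi-class Iris case.

from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

def report_row(y_true, y_pred):
    # One Tab. II row: precision, recall, F1-score, and accuracy.
    return {
        "Precision": precision_score(y_true, y_pred, average="macro"),
        "Recall":    recall_score(y_true, y_pred, average="macro"),
        "F1-Score":  f1_score(y_true, y_pred, average="macro"),
        "Accuracy":  accuracy_score(y_true, y_pred),
    }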
The proposed FedQNN experiments explore the intriguing relationship between the number of clients and model accuracy. The results in Fig. 7 demonstrate that increasing the number of clients can lead to remarkable improvements in accuracy. This phenomenon highlights the power of collaborative learning in a federated setting, where each client contributes to the collective intelligence while keeping its data private. The robustness of FedQNN across different datasets reaffirms its potential as an effective framework for classification tasks.

Furthermore, in Fig. 8, we present comprehensive tests across three different QPUs from IBM Quantum: ibm_nairobi, ibm_lagos, and ibm_perth [38]. Notably, all three QPUs deliver efficient results, with accuracy rates surpassing the 80% threshold. However, it is worth highlighting that these accuracy scores did exhibit some degree of fluctuation, which can be attributed to factors such as quantum noise and decoherence, variations in gate fidelity, discrepancies in quantum volume, and sensitivity to environmental conditions. Each of these factors can contribute to subtle yet impactful differences in performance across the QPUs. Importantly, these evaluations are conducted with only 10 iterations to test the model's rapid convergence, further indicating the potential of our model to excel in QC settings, even with limited refinement, and the promising prospects of its adaptability and performance on various QPUs.

B. Discussion

The comprehensive experimentation and analysis of the proposed FedQNN framework yield significant insights, underlining its potential in QML. A critical observation is the framework's adeptness in managing a spectrum of datasets, illustrating its adaptability and robustness. This versatility is pivotal in QML, where applications often span diverse data types and problem domains.

Notably, the framework exhibits stable learning dynamics over multiple iterations despite initial accuracy fluctuations. Such a trend indicates a resilient learning algorithm capable of consistent performance enhancement over time. This aspect is crucial in practical scenarios, where models must adapt and stabilize despite variable data inputs.

Performance metrics across the datasets consistently register high, highlighting the model's proficiency in precision, recall, and accuracy. These metrics are crucial benchmarks in ML, and their high values in this framework reinforce its efficacy in balanced classification tasks.

The study also reveals the positive influence of increasing client collaboration within the federated network. This finding accentuates the benefits of a federated approach in ML, where diversity in data sources enriches the learning process and enhances model accuracy.

The experiments across different QPUs from IBM Quantum demonstrate the framework's adaptability and efficient performance, even with limited iterations. This adaptability to various quantum hardware architectures is a testament to the framework's potential for widespread application in QC settings.

In a comparative analysis of QFL methods, as presented in Tab. III, our FedQNN framework demonstrates outstanding performance across various datasets. When evaluated on the CIFAR-10 dataset for planes versus cars, a hybrid quantum-classical classifier (HQCC) achieved an accuracy of 94.05% with just two local epochs of federated training [15]. Additionally, the SlimQFL and Vanilla QFL models achieve accuracies of 77% and 76%, respectively, on the mini-MNIST dataset [32]. Even when dealing with non-Independent and Identically Distributed (IID) data on MNIST-3, a QNN model reached a 70% accuracy mark [33]. Notably, our results show superior performance on three diverse datasets: Iris (90%), breast cancer (86%), and DNA (90%), which is competitive with qFedInf and qFedAvg, which report accuracies of up to 92.7% and 88.4% on MNIST-2, and 75.4% and 66.7% on Fashion-MNIST [39]. These results underscore the fruitful utility of our QFL framework and its versatility in handling different types of data, thus marking a significant step forward in QFL research.
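For readers aiming to reproduce the hardware evaluation, one plausible way to point the same QNode at an IBM backend is through the PennyLane-Qiskit plugin, sketched below; the "qiskit.remote" device name, the runtime-service call, and the backend choice are version- and account-dependent assumptions, not the paper's exact tool flow (the QPUs named above have since been retired by IBM).

import pennylane as qml
from qiskit_ibm_runtime import QiskitRuntimeService

# Assumes an IBM Quantum account has already been saved locally.
service = QiskitRuntimeService()
backend = service.backend("ibm_nairobi")  # one of the QPUs named above

dev_qpu = qml.device("qiskit.remote", wires=4, backend=backend)

@qml.qnode(dev_qpu)
def circuit_on_qpu(weights, x):
    # Same circuit as on the simulator; only the device changes.
    qml.AngleEmbedding(x, wires=range(4))
    qml.StronglyEntanglingLayers(weights, wires=range(4))
    return qml.expval(qml.PauliZ(0))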
Fig. 7: Accuracies over the number of clients for various datasets: (a) showcases the Iris dataset, where the accuracy starts at 0.6 with one client and shows a marked increase to 0.88 with five clients, as indicated by trajectory ①; (b) presents the breast cancer dataset, which shows a gradual improvement in accuracy from 0.5 with a single client to 0.76 with five clients, though the rate of increase plateaus between three and four clients, as highlighted by trajectory ②; and (c) presents the DNA dataset, with accuracy rising from 0.6 to 0.89 as the number of clients increases from one to five, with trajectory ③ emphasizing the consistent trend. These results underscore the positive correlation between the number of contributing clients and accuracy.
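The client-scaling study behind Fig. 7 can be emulated by re-sharding a single dataset, as in the hypothetical helper below; the even np.array_split sharding, the round count, and the train_fn/eval_fn callables are illustrative assumptions.

import numpy as np

def accuracy_vs_clients(X, y, train_fn, eval_fn, max_clients=5, rounds=10):
    results = {}
    for k in range(1, max_clients + 1):
        # Partition the data into k client shards (an IID split is assumed).
        shards = list(zip(np.array_split(X, k), np.array_split(y, k)))
        model = train_fn(shards, rounds)  # e.g., repeated fedqnn_round calls
        results[k] = eval_fn(model)       # held-out accuracy for this k
    return results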
Fig. 8: The accuracy of the model using QPUs from IBM Quantum. For ibm_lagos, a peak accuracy of 0.835 is consistently observed from ① onward, with a notable dip back to 0.50 at ②. For ibm_perth, accuracy starts at 0.84, experiences a drop to 0.5 at ③, fluctuates between these values, and finishes strong at 0.84. For ibm_nairobi, a similarly high accuracy of 0.8 drops to 0.46 at ④ and then mirrors the pattern of ibm_perth, ending at 0.8.

The overall outcomes from this investigation signify a promising avenue for QML, particularly in its application across varied datasets and quantum environments. However, these successes also pave the way for future explorations, particularly optimizing quantum circuit designs and enhancing integration techniques within federated learning systems. Moreover, the results suggest potential broader implications in fields requiring rigorous data privacy and security, positioning QFL as a valuable tool in privacy-sensitive collaborative research and applications.

V. CONCLUSION

Our investigation into QFL as a secure and accurate data classification method across decentralized and encrypted datasets has revealed immense potential. By deploying a QNN within a FedQNN framework, we have demonstrated the ability to harness the benefits of group learning while safeguarding individual data privacy.

Experiments spanning diverse datasets, from genomics to medicine, confirmed the adaptability and effectiveness of our FedQNN framework. The accuracy consistently exceeds 85%, proving our framework's reliability in classifying data from multiple sources. Furthermore, the proposed approach exhibits impressive scalability, maintaining high accuracy even with increasing client participation, highlighting its suitability for large-scale collaborative learning tasks. This resilience extends to challenging conditions, as we successfully test our FedQNN framework on real QPUs from IBM, achieving over 80% accuracy and thereby solidifying its real-world applicability.

In conclusion, FedQNN emerges as a powerful tool for secure and accurate collaborative data classification, opening new doors for privacy-focused research and development across various fields. By advancing communication efficiency and exploring more complex quantum algorithms, we can unlock the full potential of QFL as a transformative technology for secure and collaborative data-driven intelligence. We note
that there are several challenges with QFL in general, such as:
1) The fragility of quantum states, which makes them error-prone and results in the loss of information.
2) The effect of noise and the introduction of instabilities.
3) Classical aggregation methods, like averaging, are not the best fit for quantum state agglomeration.
4) Client-side models on NISQ-era hardware introduce erraticity into the models.
However, this research lays the foundation for a future where data privacy and collective knowledge can drive breakthroughs in diverse areas, from healthcare to material science, ushering in a new era of secure and collaborative innovation.

ACKNOWLEDGMENT

This work was supported in part by the NYUAD Center for Quantum and Topological Systems (CQTS), funded by Tamkeen under the NYUAD Research Institute grant CG008.

REFERENCES

[1] Zaman, K., Marchisio, A., Hanif, M. A., & Shafique, M. (2023). A Survey on Quantum Machine Learning: Current Trends, Challenges, Opportunities, and the Road Ahead. arXiv preprint arXiv:2310.10315.
[2] Bharti, K., Cervera-Lierta, A., Kyaw, T. H., Haug, T., Alperin-Lea, S., Anand, A., ... & Aspuru-Guzik, A. (2022). Noisy intermediate-scale quantum algorithms. Reviews of Modern Physics, 94(1), 015004.
[3] Beer, K., Bondarenko, D., Farrelly, T., Osborne, T. J., Salzmann, R., Scheiermann, D., & Wolf, R. (2020). Training deep quantum neural networks. Nature Communications, 11(1), 808.
[4] Kashif, M., & Al-Kuwari, S. (2024). ResQNets: A residual approach for mitigating barren plateaus in quantum neural networks. EPJ Quantum Technology, 11(1), 4.
[5] Zaman, K., Ahmed, T., Hanif, M. A., Marchisio, A., & Shafique, M. (2024). A Comparative Analysis of Hybrid-Quantum Classical Neural Networks. arXiv preprint arXiv:2402.10540.
[6] Zaman, K., et al. (2024). Studying the Impact of Quantum-Specific Hyperparameters on Hybrid Quantum-Classical Neural Networks. arXiv preprint arXiv:2402.10605.
[7] Maouaki, W. E., et al. (2024). AdvQuNN: A Methodology for Analyzing the Adversarial Robustness of Quanvolutional Neural Networks. arXiv preprint arXiv:2403.05596.
[8] Kashif, M., & Shafique, M. (2024). ResQuNNs: Towards Enabling Deep Learning in Quantum Convolution Neural Networks. arXiv preprint arXiv:2402.09146.
[9] Kashif, M., et al. (2024). Alleviating barren plateaus in parameterized quantum machine learning circuits: Investigating advanced parameter initialization strategies. arXiv preprint arXiv:2311.13218.
[10] McMahan, H. B., Moore, E., Ramage, D., Hampson, S., & y Arcas, B. A. (2016). Communication-Efficient Learning of Deep Networks from Decentralized Data. arXiv preprint arXiv:1602.05629.
[11] Li, W., Lu, S., & Deng, D. L. (2021). Quantum federated learning through blind quantum computing. Science China Physics, Mechanics & Astronomy, 64(10), 100312.
[12] Xia, Q., & Li, Q. (2021). QuantumFed: A federated learning framework for collaborative quantum training. 2021 IEEE Global Communications Conference (GLOBECOM), pp. 1-6.
[13] Chen, L., Xue, K., Li, J., Li, R., Yu, N., Sun, Q., & Lu, J. (2023). Q-DDCA: Decentralized Dynamic Congestion Avoid Routing in Large-Scale Quantum Networks. IEEE/ACM Transactions on Networking.
[14] Chehimi, M., Chen, S. Y.-C., Saad, W., Towsley, D., & Debbah, M. (2023). Foundations of quantum federated learning over classical and quantum networks. IEEE Network.
[15] Chen, S. Y.-C., & Yoo, S. (2021). Federated Quantum Machine Learning. Entropy, 23(4), pp. 1-14.
[16] Innan, N., Khan, M. A. Z., & Bennai, M. (2023). Financial Fraud Detection: A Comparative Study of Quantum Machine Learning Models. International Journal of Quantum Information.
[17] Innan, N., et al. (2023). Financial Fraud Detection Using Quantum Graph Neural Networks. Quantum Machine Intelligence, 6(1), 1-18.
[18] Innan, N., Khan, M. A. Z., & Bennai, M. (2023). Quantum Computing for Electronic Structure Analysis: Ground State Energy and Molecular Properties Calculations. Materials Today Communications, 38.
[19] Ullah, U., & Garcia-Zapirain, B. (2024). Quantum Machine Learning Revolution in Healthcare: A Systematic Review of Emerging Perspectives and Applications. IEEE Access.
[20] Khan, M. A. Z., Innan, N., Galib, A. A. O., & Bennai, M. (2024). Brain Tumor Diagnosis Using Quantum Convolutional Neural Networks. arXiv preprint arXiv:2401.15804.
[21] Innan, N., & Khan, M. A. Z. (2023). Classical-to-Quantum Sequence Encoding in Genomics. arXiv preprint arXiv:2304.10786.
[22] Innan, N., et al. (2023). Quantum State Tomography Using Quantum Machine Learning. arXiv preprint arXiv:2308.10327.
[23] Feynman, R. (1982). Simulating Physics with Computers. International Journal of Theoretical Physics, 21(6/7), pp. 467-488.
[24] Dejpasand, M. T., & Sasani Ghamsari, M. (2023). Research trends in quantum computers by focusing on qubits as their building blocks. Quantum Reports, 5(3), 597-608.
[25] Chehimi, M., & Saad, W. (2022). Quantum Federated Learning with Quantum Data. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, pp. 8617-8621.
[26] Ren, C., Yu, H., Yan, R., Xu, M., Shen, Y., Zhu, H., Niyato, D., Dong, Z. Y., & Kwek, L. C. (2023). Towards Quantum Federated Learning. arXiv preprint arXiv:2306.09912.
[27] Larasati, H. T., Firdaus, M., & Kim, H. (2022). Quantum Federated Learning: Remarks and Challenges. IEEE 9th International Conference on Cyber Security and Cloud Computing (CSCloud)/2022 IEEE 8th International Conference on Edge Computing and Scalable Cloud (EdgeCom), China, pp. 1-5.
[28] Rofougaran, R., Yoo, S., Tseng, H.-H., & Chen, S. Y.-C. (2023). Federated Quantum Machine Learning with Differential Privacy. arXiv preprint arXiv:2310.06973.
[29] Yamany, W., Moustafa, N., & Turnbull, B. (2021). OQFL: An Optimized Quantum-Based Federated Learning Framework for Defending Against Adversarial Attacks in Intelligent Transportation Systems. IEEE Transactions on Intelligent Transportation Systems, 24(1), pp. 893-903.
[30] Xia, Q., & Li, Q. (2021). QuantumFed: A Federated Learning Framework for Collaborative Quantum Training. IEEE Global Communications Conference (GLOBECOM), Spain, pp. 1-6.
[31] Bhatia, A. S., Kais, S., & Alam, M. A. (2023). Federated quanvolutional neural network: a new paradigm for collaborative quantum learning. Quantum Science and Technology, 8(4), 045032.
[32] Yun, W. J., Kim, J. P., Jung, S., Park, J., Bennis, M., & Kim, J. (2022). Slimmable quantum federated learning. arXiv preprint arXiv:2207.10221.
[33] Zhang, Y., Zhang, C., Zhang, C., Fan, L., Zeng, B., & Yang, Q. (2022). Federated Learning with Quantum Secure Aggregation. arXiv preprint arXiv:2207.07444.
[34] Schuld, M., Sinayskiy, I., & Petruccione, F. (2014). The quest for a quantum neural network. Quantum Information Processing, 13, 2567-2586.
[35] Bergholm, V., Izaac, J., Schuld, M., Gogolin, C., Ahmed, S., Ajith, V., ... & Killoran, N. (2018). PennyLane: Automatic differentiation of hybrid quantum-classical computations. arXiv preprint arXiv:1811.04968.
[36] Fisher, R. A. (1988). Iris. UCI Machine Learning Repository, https://doi.org/10.24432/C56C76.
[37] Zwitter, M., & Soklic, M. (1988). Breast Cancer. UCI Machine Learning Repository, https://doi.org/10.24432/C51P4M.
[38] IBM Quantum, https://quantum.ibm.com/.
[39] Zhao, H. (2023). Non-IID quantum federated learning with one-shot communication complexity. Quantum Machine Intelligence, 5(1), 3.