Article
Federated Quantum Machine Learning
Samuel Yen-Chi Chen * and Shinjae Yoo
Computational Science Initiative, Brookhaven National Laboratory, Upton, NY 11973, USA; [email protected]
* Correspondence: [email protected]
Abstract: Distributed training across several quantum computers could significantly improve training time, and sharing the learned model rather than the data could potentially improve data privacy, since training would happen where the data are located. One potential scheme to achieve these properties is federated learning (FL), in which several clients or local nodes learn on their own data and a central node aggregates the models collected from those local nodes. However, to the best of our knowledge, no work on quantum machine learning (QML) in the federated setting has been done yet. In this work, we present federated training of hybrid quantum-classical machine learning models, although our framework could be generalized to pure quantum machine learning models. Specifically, we consider a quantum neural network (QNN) coupled with a classical pre-trained convolutional model. Our distributed federated learning scheme demonstrates almost the same level of trained-model accuracy while offering significantly faster distributed training, which points to a promising research direction for both scaling and privacy.
Keywords: quantum machine learning; federated learning; quantum neural networks; variational
quantum circuits; privacy-preserving AI
One of the common features of these successful ML models is that they are data-driven.
Building a successful deep learning model requires a huge amount of data. Although
there are several public datasets for research purposes, the most advanced and personalized
models largely depend on data collected from users' mobile devices and other personal
sources (e.g., medical records, browsing habits, etc.). For example, ML/DL approaches have
also succeeded in medical imaging [47,48] and speech recognition [49–51], to name a few.
These fields rely critically on massive datasets collected from the population, and these
data should not be accessed by unauthorized third parties. The use of such sensitive and
personally identifiable information raises several concerns. One concern is that the
channel used to exchange data with cloud service providers can be compromised, leading
to the leakage of high-value personal or commercial data. Even if the communication
channel can be secured, the cloud service provider itself is a risk, as malicious adversaries can
potentially invade the computing infrastructure. There are several solutions to deal with
such issues. One of them is federated learning (FL), which focuses on a decentralized
computing architecture. For example, users can train a speech recognition model on their cell
phones and upload the model to the cloud in exchange for the global model, without uploading
the recordings directly. Such a framework is made possible by recent advances
in hardware development, which make even small devices computationally powerful. This concept not
only supports the privacy-preserving practice in classical machine learning but also in the
rapidly emerging field of quantum machine learning, as researchers try to expand machine
learning capabilities by leveraging the power of quantum computers. To harness the
power of quantum computers in the NISQ era, a key challenge is how to distribute
computational tasks to different quantum machines with limited quantum capabilities.
Another challenge is the rising privacy concern in the use of large-scale machine learning
infrastructure. We address these two challenges by providing a framework for training
quantum machine learning models in a federated manner.
In this paper, we propose federated training of hybrid quantum-classical classifiers.
We show that with federated training, the performance in terms of testing accuracy
does not decrease. In addition, the model still converges quickly compared to non-federated
training. Our efforts not only help in building secure QML infrastructure but also enable
distributed QML training that better utilizes available NISQ devices.
This paper is organized as follows: in Section 2, we introduce the concept of federated
machine learning. In Section 3, we describe the variational quantum circuit architecture
in detail. In Section 4, we describe transfer learning in hybrid quantum-classical
models. Section 5 shows the performance of the proposed federated quantum learning on
the experimental data, followed by further discussion in Section 6. Finally, we conclude in
Section 7.
[Figure 1: federated learning scheme. Starting from initial parameters θ, local nodes train local models U(θ) on their own data and send model parameter updates θ_{t+1} to the central node, which performs global parameter aggregation; the aggregated global model Θ is then distributed back to the clients.]
For further discussion and advanced settings on federated learning, we refer to [54–61].
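To make the aggregation step concrete, the following is a minimal sketch of one communication round under the parameter-averaging rule of McMahan et al. [52], written in PyTorch. The `clients` objects and their `local_train` method are hypothetical placeholders for nodes that train on their own private data; only model weights, never raw data, cross the network.

```python
import copy
import torch

def federated_round(global_model, clients, local_epochs=1):
    """One communication round: local training followed by global aggregation.

    `clients` is a hypothetical list of objects whose `local_train` method
    copies the given weights, trains on the client's private data for
    `local_epochs` epochs, and returns the updated state_dict.
    """
    local_states = [
        client.local_train(copy.deepcopy(global_model.state_dict()),
                           epochs=local_epochs)
        for client in clients
    ]
    # Global parameter aggregation: element-wise average of the clients'
    # parameters (assumes all entries can be cast to floating point).
    avg_state = copy.deepcopy(local_states[0])
    for key in avg_state:
        stacked = torch.stack([state[key].float() for state in local_states])
        avg_state[key] = stacked.mean(dim=0)
    global_model.load_state_dict(avg_state)
    return global_model
```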
Figure 2. General structure of the variational quantum circuit (VQC). E(x) is the quantum
routine for encoding the classical data into a quantum state, and W(φ) is the variational quantum
circuit block with learnable parameters φ. After the quantum operations, the quantum state is
measured to retrieve classical numbers for further processing.
The general idea of the VQC or QNN is that the circuit parameters are updated via iterative
methods on a classical computer. Recent theoretical studies have demonstrated that
VQCs are more expressive than conventional neural networks [9,62–64] with respect to
the number of parameters or the learning speed. In addition, the works [30,65,66]
have demonstrated via numerical simulation that certain hybrid quantum-classical
architectures reach higher accuracies than classical neural networks with a similar number
of parameters.
Recent advances in VQC research have demonstrated applications in a wide variety
of machine learning tasks. For example, the VQC has been shown to be successful in the tasks
of classification [20–29,65,67,68], function approximation [20,30,31], generative machine
learning [32–37], metric learning [38,39], deep reinforcement learning [40–44,69], sequential
learning [30,45,70] and speech recognition [46].
An $N$-qubit quantum state can be written as
$$|\Psi\rangle = \sum_{(q_1, q_2, \ldots, q_N) \in \{0,1\}^N} c_{q_1, \ldots, q_N} \, |q_1\rangle \otimes |q_2\rangle \otimes \cdots \otimes |q_N\rangle,$$
where $c_{q_1, \ldots, q_N} \in \mathbb{C}$ is the amplitude of each quantum state and $q_i \in \{0,1\}$. The square of
the amplitude $c_{q_1, \ldots, q_N}$ is the probability of measurement with the post-measurement state in
$|q_1\rangle \otimes |q_2\rangle \otimes \cdots \otimes |q_N\rangle$, and the total probability should sum to 1, that is,
$$\sum_{(q_1, \ldots, q_N) \in \{0,1\}^N} \left\lVert c_{q_1, \ldots, q_N} \right\rVert^2 = 1.$$
In this work, we use the variational encoding scheme to encode the classical values
into a quantum state. The basic idea behind this encoding scheme is to use the input
values, or a transformation of them, as rotation angles for the quantum rotation gates. As shown
in Figure 3, the encoding part consists of single-qubit rotation gates Ry and Rz, which use
arctan(x_i) and arctan(x_i^2) as the corresponding rotation angles.
Figure 3. Variational quantum classifier. The variational quantum classifier includes three components: encoder, variational
layer and quantum measurement. The encoder consists of several single-qubit gates Ry(arctan(x_i)) and Rz(arctan(x_i^2)), which
represent rotations along the y-axis and z-axis by the angles arctan(x_i) and arctan(x_i^2), respectively. These rotation angles
are derived from the input values x_i and are not subject to iterative optimization. The variational layer consists of CNOT
gates between each pair of neighbouring qubits, which are used to entangle the quantum states of the qubits, and general
single-qubit unitary gates R(α, β, γ) with three parameters α, β, γ. Parameters labeled α_i, β_i and γ_i are the ones subject to iterative
optimization. The quantum measurement component outputs the Pauli-Z expectation values of designated qubits. The
number of qubits and the number of measurements can be adjusted to fit the problem of interest. In this work, we use
the VQC as the final classifier layer; therefore, the number of qubits equals the latent vector size, which is 4, and we only
consider measurements on the first two qubits for binary classification. The grouped box in the VQC may be repeated several
times to increase the number of parameters, subject to the capacity and capability of the available quantum computers or
simulation software used for the experiments. In this work, the grouped box is repeated twice.
[Figure 4: hybrid transfer learning pipeline with components "Dataset", "Pre-trained model" and "Encoder".]
local machines. The hybrid model used in this experiment consists of a pre-trained VGG16
model and a 4-qubit variational quantum circuit (VQC), as shown in Figure 3. The original
classifier layer in the VGG16 model is replaced with the one shown in Table 1 in order to fit the
input dimension of the VQC layer. The dashed box in the quantum circuit is repeated twice,
yielding 4 × 3 × 2 = 24 quantum circuit parameters. The VQC receives 4-dimensional
compressed vectors from the pre-trained VGG model to perform the classification task. The
non-federated training used for comparison employs the same hybrid VGG-VQC architecture
as the federated training. We perform 100 training rounds and the results
are presented in Figure 6. We compare the performance of federated learning against non-federated
learning with the same hybrid quantum-classical architecture and the same
dataset.
Table 1. The trainable layer in our modified VGG model. This layer is designed to convert the output from the pre-trained
layers to a smaller-dimensional one that is suitable for the VQC to process. The activation function used in this layer is
ReLU. In addition, dropout layers with dropout rate 0.5 are used.
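A minimal sketch of this modification in PyTorch/torchvision is given below. The hidden-layer widths (256 and 64) are illustrative assumptions, since the text above only fixes the 4-dimensional output, the ReLU activations and the dropout rate of 0.5; the final 4-dimensional vector is the input to the VQC sketched after Figure 3.

```python
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1")  # pretrained=True on older torchvision
for p in model.features.parameters():
    p.requires_grad = False  # keep the pre-trained convolutional weights frozen

# Replacement classifier head: compresses the flattened VGG16 feature map
# (512 * 7 * 7 = 25088 values) down to the 4-dim latent vector consumed by
# the 4-qubit VQC. Hidden sizes 256 and 64 are assumptions for illustration.
model.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 4),  # output dimension = number of qubits
)
```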
In the left three panels of Figure 6, we present the results of training the hybrid
quantum model in the federated setting with different numbers of local training epochs. Since
the training data are distributed across different clients, we only consider the testing
accuracies of the aggregated global model. On the considered Cats vs Dogs dataset, we
observe that both the testing accuracy and the testing loss reach levels comparable to
non-federated training. We also observe that the training loss, which is averaged
over clients, fluctuates more than in non-federated training (shown in Table 2). The
underlying reason might be that different clients are selected in each training round,
so the training data used to evaluate the training loss differ. Yet the training
loss still converges after the 100 rounds of training. In addition, the testing loss and
accuracies converge to levels comparable to non-federated training, regardless of the
number of local training epochs. Notably, we observe that a single local training epoch is
sufficient to train a well-performing model. In each round of the federated training, the
model updates are based on samples from 5 clients, each performing 1 local training epoch. The
computational cost therefore scales with 230 × 5 × 1 = 1150 processed samples per round,
whereas a full epoch of training in the non-federated setting scales with
23,000 samples. These results imply the potential of more efficient training of QML models with
distributed schemes. This particularly benefits the training of quantum models when we
are using a high-performance simulation platform or an array of small NISQ devices, with
moderate communication overhead.
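Put together, one plausible driver loop for this experiment is sketched below, reusing the `federated_round` helper sketched earlier. The total client count of 100 (so that 100 × 230 = 23,000 samples matches a full non-federated epoch) is inferred from the per-round arithmetic above and is therefore an assumption, as are the `clients` and `global_model` objects from the earlier sketches.

```python
import numpy as np

n_clients, clients_per_round, n_rounds = 100, 5, 100  # 100 clients assumed
rng = np.random.default_rng(seed=0)

for round_idx in range(n_rounds):
    # Sample 5 clients uniformly at random; each holds roughly 230 local samples.
    chosen = rng.choice(n_clients, size=clients_per_round, replace=False)
    global_model = federated_round(
        global_model, [clients[i] for i in chosen], local_epochs=1
    )
    # The aggregated global model is then evaluated on the held-out test set.
```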
Figure 5. Cats vs Dogs Dataset [76].
Table 2. Comparison of performance in different training schemes with Cats vs Dogs dataset.
Table 3. Comparison of performance in different training schemes with CIFAR (Planes vs. Cars)
dataset.
6. Discussion
6.1. Integration with Other Privacy-Preserving Protocols
In this study, we consider the federated quantum learning framework. One of its
limitations is that the process of exchanging model parameters can potentially be attacked.
Moreover, we cannot exclude the possibility that malicious parties join the network
and obtain the aggregated global model. Leaked model parameters can be used to
deduce the training data of the model [78]. For example, it has been shown that trained
models can be used to recover training entries [79]. In addition, it is also possible for
adversaries to find out whether a specific entry was used in the training process [80]. These
possibilities raise serious concerns when QML models are used to process private and
sensitive data. There are other protocols that can further boost security. One potential
solution is to train the model with differential privacy (DP) [81]. With DP, it is
possible to share the trained model and still keep the private information of the training
data. Another direction is to incorporate secure multi-party computation [82], which
can further increase the security of decentralized computing. For example, a recent work
using universal blind quantum computation [83] provides a quantum protocol to achieve
privacy-preserving multi-party quantum computation.
6.3. Decentralization
This research presents a proof-of-concept federated training of quantum machine
learning models. The scheme includes a central node that receives the trained models from
clients, aggregates them, and distributes the aggregated model back to the clients. This central
node can be vulnerable to malicious attacks, and adversaries can compromise the whole
network. Moreover, the communication bandwidth between clients and the central node
may vary, leading to undesired effects in the synchronization process. To address these
issues, recent studies propose various decentralized federated learning schemes [86–92]. For
example, the distributed ledger technologies (DLT) [93–96] that power the development of
blockchain have been applied to decentralized FL [97–103]. Blockchain technologies
are used to ensure the robustness and integrity of the shared information while removing the
requirement of a central node. Blockchain-enabled FL can also be designed to encourage
data owners to participate in the model training process [104]. In addition, peer-to-peer
protocols have been employed in FL to remove the need for a central node [105,106]. Gossip
learning [107,108] is an alternative learning framework to FL [109–112]. Under the gossip
learning framework, no central node is required; nodes on the network exchange and
aggregate models directly. The efficiency and capabilities of these decentralized schemes,
such as blockchained FL and gossip learning, in the quantum regime are left for future
work.
In classical machine learning, distributed training frameworks are designed to scale up
model training to computing clusters [113], making training on large-scale datasets
and complex models possible. A potential direction is to apply federated quantum
learning to high-performance quantum simulation.
to train a reliable model. For example, the work [67] studied the application of the VQC to
dementia prediction, which would benefit from federated training to preserve users'
privacy. Recently, the application of quantum computing in the financial industry has drawn
a lot of attention [121]. It is expected that federated QML will play an important role in
finance as well.
7. Conclusions
In this work, we provide a framework to train hybrid quantum-classical classifiers
in a federated manner, which can help in preserving privacy and in distributing computational
loads to an array of NISQ computers. We also show that federated training in
our setting does not sacrifice performance in terms of testing accuracy. This work
should benefit research in both privacy-preserving AI and quantum computing,
and it paves a new direction for building secure, reliable and scalable distributed quantum
machine learning architectures.
Abbreviations
The following abbreviations are used in this manuscript:
References
1. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
2. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with
convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 12 June
2015; pp. 1–9.
3. Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. Comput.
Intell. Neurosci. 2018, 2018, 1–13. [CrossRef] [PubMed]
4. Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to sequence learning with neural networks. Adv. Neural Inf. Process. Syst. 2014, 2,
3104–3112.
5. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam,
V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489. [CrossRef]
[PubMed]
6. Cross, A. The IBM Q experience and QISKit open-source quantum computing software. In Proceedings of the APS Meeting
Abstracts, Los Angeles, CA, USA, 5–9 March 2018.
7. Arute, F.; Arya, K.; Babbush, R.; Bacon, D.; Bardin, J.C.; Barends, R.; Biswas, R.; Boixo, S.; Brandao, F.G.; Buell, D.A.; et al.
Quantum supremacy using a programmable superconducting processor. Nature 2019, 574, 505–510. [CrossRef] [PubMed]
8. Grzesiak, N.; Blümel, R.; Wright, K.; Beck, K.M.; Pisenti, N.C.; Li, M.; Chaplin, V.; Amini, J.M.; Debnath, S.; Chen, J.S.; et al.
Efficient arbitrary simultaneously entangling gates on a trapped-ion quantum computer. Nat. Commun. 2020, 11, 1–6. [CrossRef]
9. Lanting, T.; Przybysz, A.J.; Smirnov, A.Y.; Spedalieri, F.M.; Amin, M.H.; Berkley, A.J.; Harris, R.; Altomare, F.; Boixo, S.; Bunyk, P.;
et al. Entanglement in a quantum annealing processor. Phys. Rev. X 2014, 4, 021041. [CrossRef]
10. Harrow, A.W.; Montanaro, A. Quantum computational supremacy. Nature 2017, 549, 203–209. [CrossRef]
11. Nielsen, M.A.; Chuang, I. Quantum Computation and Quantum Information. Am. J. Phys. 2002, 70. [CrossRef]
12. Shor, P.W. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Rev. 1999,
41, 303–332. [CrossRef]
13. Grover, L.K. Quantum mechanics helps in searching for a needle in a haystack. Phys. Rev. Lett. 1997, 79, 325. [CrossRef]
14. Gottesman, D. Stabilizer codes and quantum error correction. arXiv 1997, arXiv:quant-ph/9705052.
15. Gottesman, D. Theory of fault-tolerant quantum computation. Phys. Rev. A 1998, 57, 127. [CrossRef]
16. Preskill, J. Quantum Computing in the NISQ era and beyond. Quantum 2018, 2, 79. [CrossRef]
17. Peruzzo, A.; McClean, J.; Shadbolt, P.; Yung, M.H.; Zhou, X.Q.; Love, P.J.; Aspuru-Guzik, A.; O’brien, J.L. A variational eigenvalue
solver on a photonic quantum processor. Nat. Commun. 2014, 5, 4213. [CrossRef]
18. Cerezo, M.; Arrasmith, A.; Babbush, R.; Benjamin, S.C.; Endo, S.; Fujii, K.; McClean, J.R.; Mitarai, K.; Yuan, X.; Cincio, L.; et al.
Variational Quantum Algorithms. arXiv 2020, arXiv:2012.09265.
19. Bharti, K.; Cervera-Lierta, A.; Kyaw, T.H.; Haug, T.; Alperin-Lea, S.; Anand, A.; Degroote, M.; Heimonen, H.; Kottmann, J.S.;
Menke, T.; et al. Noisy intermediate-scale quantum (NISQ) algorithms. arXiv 2021, arXiv:2101.08448.
20. Mitarai, K.; Negoro, M.; Kitagawa, M.; Fujii, K. Quantum circuit learning. Phys. Rev. A 2018, 98, 032309. [CrossRef]
21. Schuld, M.; Bocharov, A.; Svore, K.; Wiebe, N. Circuit-centric quantum classifiers. arXiv 2018, arXiv:1804.00633.
22. Farhi, E.; Neven, H. Classification with quantum neural networks on near term processors. arXiv 2018, arXiv:1802.06002.
23. Benedetti, M.; Lloyd, E.; Sack, S.; Fiorentini, M. Parameterized quantum circuits as machine learning models. Quantum Sci.
Technol. 2019, 4, 043001. [CrossRef]
24. Mari, A.; Bromley, T.R.; Izaac, J.; Schuld, M.; Killoran, N. Transfer learning in hybrid classical-quantum neural networks. arXiv
2019, arXiv:1912.08278.
25. Abohashima, Z.; Elhosen, M.; Houssein, E.H.; Mohamed, W.M. Classification with Quantum Machine Learning: A Survey. arXiv
2020, arXiv:2006.12270.
26. Easom-McCaldin, P.; Bouridane, A.; Belatreche, A.; Jiang, R. Towards Building A Facial Identification System Using Quantum
Machine Learning Techniques. arXiv 2020, arXiv:2008.12616.
27. Sarma, A.; Chatterjee, R.; Gili, K.; Yu, T. Quantum Unsupervised and Supervised Learning on Superconducting Processors. arXiv
2019, arXiv:1909.04226.
28. Chen, S.Y.C.; Huang, C.M.; Hsing, C.W.; Kao, Y.J. Hybrid quantum-classical classifier based on tensor network and variational
quantum circuit. arXiv 2020, arXiv:2011.14651.
29. Stein, S.A.; Baheri, B.; Tischio, R.M.; Chen, Y.; Mao, Y.; Guan, Q.; Li, A.; Fang, B. A Hybrid System for Learning Classical Data in
Quantum States. arXiv 2020, arXiv:2012.00256.
30. Chen, S.Y.C.; Yoo, S.; Fang, Y.L.L. Quantum Long Short-Term Memory. arXiv 2020, arXiv:2009.01783.
31. Kyriienko, O.; Paine, A.E.; Elfving, V.E. Solving nonlinear differential equations with differentiable quantum circuits. arXiv 2020,
arXiv:2011.10395.
32. Dallaire-Demers, P.L.; Killoran, N. Quantum generative adversarial networks. Phys. Rev. A 2018, 98, 012324. [CrossRef]
33. Li, J.; Topaloglu, R.; Ghosh, S. Quantum Generative Models for Small Molecule Drug Discovery. arXiv 2021, arXiv:2101.03438.
34. Stein, S.A.; Baheri, B.; Tischio, R.M.; Mao, Y.; Guan, Q.; Li, A.; Fang, B.; Xu, S. QuGAN: A Generative Adversarial Network
Through Quantum States. arXiv 2020, arXiv:2010.09036.
35. Zoufal, C.; Lucchi, A.; Woerner, S. Quantum generative adversarial networks for learning and loading random distributions. NPJ
Quantum Inf. 2019, 5, 1–9. [CrossRef]
36. Situ, H.; He, Z.; Li, L.; Zheng, S. Quantum generative adversarial network for generating discrete data. arXiv 2018,
arXiv:1807.01235.
37. Nakaji, K.; Yamamoto, N. Quantum semi-supervised generative adversarial network for enhanced data classification. arXiv
2020, arXiv:2010.13727.
38. Lloyd, S.; Schuld, M.; Ijaz, A.; Izaac, J.; Killoran, N. Quantum embeddings for machine learning. arXiv 2020, arXiv:2001.03622.
39. Nghiem, N.A.; Chen, S.Y.C.; Wei, T.C. A Unified Classification Framework with Quantum Metric Learning. arXiv 2020,
arXiv:2010.13186.
40. Chen, S.Y.C.; Yang, C.H.H.; Qi, J.; Chen, P.Y.; Ma, X.; Goan, H.S. Variational quantum circuits for deep reinforcement learning.
IEEE Access 2020, 8, 141007–141024. [CrossRef]
41. Lockwood, O.; Si, M. Reinforcement Learning with Quantum Variational Circuit. In Proceedings of the 16th AAAI Conference
on Artificial Intelligence and Interactive Digital Entertainment, Worcester, MA, USA, 19–23 October 2020; pp. 245–251.
42. Wu, S.; Jin, S.; Wen, D.; Wang, X. Quantum reinforcement learning in continuous action space. arXiv 2020, arXiv:2012.10711.
43. Jerbi, S.; Trenkwalder, L.M.; Nautrup, H.P.; Briegel, H.J.; Dunjko, V. Quantum enhancements for deep reinforcement learning in
large spaces. arXiv 2019, arXiv:1910.12760.
44. Chen, C.C.; Shiba, K.; Sogabe, M.; Sakamoto, K.; Sogabe, T. Hybrid quantum-classical Ulam-von Neumann linear
solver-based quantum dynamic programing algorithm. Proc. Annu. Conf. JSAI 2020, JSAI2020, 2K6ES203. [CrossRef]
45. Bausch, J. Recurrent quantum neural networks. arXiv 2020, arXiv:2006.14619.
46. Yang, C.H.H.; Qi, J.; Chen, S.Y.C.; Chen, P.Y.; Siniscalchi, S.M.; Ma, X.; Lee, C.H. Decentralizing Feature Extraction with Quantum
Convolutional Neural Network for Automatic Speech Recognition. arXiv 2020, arXiv:2010.13309.
47. Suzuki, K. Overview of deep learning in medical imaging. Radiol. Phys. Technol. 2017, 10, 257–273. [CrossRef] [PubMed]
48. Lundervold, A.S.; Lundervold, A. An overview of deep learning in medical imaging focusing on MRI. Z. Med. Phys. 2019,
29, 102–127. [CrossRef] [PubMed]
49. Deng, L.; Hinton, G.; Kingsbury, B. New types of deep neural network learning for speech recognition and related applications:
An overview. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver,
BC, Canada, 26–31 May 2013; pp. 8599–8603.
50. Amodei, D.; Ananthanarayanan, S.; Anubhai, R.; Bai, J.; Battenberg, E.; Case, C.; Casper, J.; Catanzaro, B.; Cheng, Q.; Chen, G.; et
al. Deep speech 2: End-to-end speech recognition in english and mandarin. In Proceedings of the International conference on
machine learning, New York, NY, USA, 19–24 June 2016; pp. 173–182.
51. Hannun, A.; Case, C.; Casper, J.; Catanzaro, B.; Diamos, G.; Elsen, E.; Prenger, R.; Satheesh, S.; Sengupta, S.; Coates, A.; et al.
Deep speech: Scaling up end-to-end speech recognition. arXiv 2014, arXiv:1412.5567.
52. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from
decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale,
FL, USA, 9–11 May 2017; pp. 1273–1282.
53. Shokri, R.; Shmatikov, V. Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC conference on Computer
and Communications Security, Denver, CO, USA, 12–16 October 2015; pp. 1310–1321.
54. Kulkarni, V.; Kulkarni, M.; Pant, A. Survey of Personalization Techniques for Federated Learning. arXiv 2020, arXiv:2003.08673.
55. Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A.N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings,
R.; et al. Advances and open problems in federated learning. arXiv 2019, arXiv:1912.04977.
56. Lim, W.Y.B.; Luong, N.C.; Hoang, D.T.; Jiao, Y.; Liang, Y.C.; Yang, Q.; Niyato, D.; Miao, C. Federated learning in mobile edge
networks: A comprehensive survey. IEEE Commun. Surv. Tutor. 2020, 22, 2031–2063. [CrossRef]
57. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated machine learning: Concept and applications. ACM Trans. Intell. Syst. Technol.
(TIST) 2019, 10, 1–19. [CrossRef]
58. Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated learning: Challenges, methods, and future directions. IEEE Signal Process.
Mag. 2020, 37, 50–60. [CrossRef]
59. Li, Q.; Wen, Z.; He, B. Federated learning systems: Vision, hype and reality for data privacy and protection. arXiv 2019,
arXiv:1907.09693.
60. Wang, X.; Han, Y.; Leung, V.C.; Niyato, D.; Yan, X.; Chen, X. Convergence of edge computing and deep learning: A comprehensive
survey. IEEE Commun. Surv. Tutor. 2020, 22, 869–904. [CrossRef]
61. Semwal, T.; Mulay, A.; Agrawal, A.M. FedPerf: A Practitioners’ Guide to Performance of Federated Learning Algorithms. In
Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, BC, Canada, 7–11
December 2020.
62. Sim, S.; Johnson, P.D.; Aspuru-Guzik, A. Expressibility and Entangling Capability of Parameterized Quantum Circuits for Hybrid
Quantum-Classical Algorithms. Adv. Quantum Technol. 2019, 2, 1900070. [CrossRef]
63. Du, Y.; Hsieh, M.H.; Liu, T.; Tao, D. The expressive power of parameterized quantum circuits. arXiv 2018, arXiv:1810.11922.
64. Abbas, A.; Sutter, D.; Zoufal, C.; Lucchi, A.; Figalli, A.; Woerner, S. The power of quantum neural networks. arXiv 2020,
arXiv:2011.00027.
65. Chen, S.Y.C.; Wei, T.C.; Zhang, C.; Yu, H.; Yoo, S. Quantum Convolutional Neural Networks for High Energy Physics Data
Analysis. arXiv 2020, arXiv:2012.12177.
66. Chen, S.Y.C.; Wei, T.C.; Zhang, C.; Yu, H.; Yoo, S. Hybrid Quantum-Classical Graph Convolutional Network. arXiv 2021,
arXiv:2101.06189.
67. Sierra-Sosa, D.; Arcila-Moreno, J.; Garcia-Zapirain, B.; Castillo-Olea, C.; Elmaghraby, A. Dementia Prediction Applying Variational
Quantum Classifier. arXiv 2020, arXiv:2007.08653.
68. Wu, S.L.; Chan, J.; Guan, W.; Sun, S.; Wang, A.; Zhou, C.; Livny, M.; Carminati, F.; Di Meglio, A.; Li, A.C.; et al. Application of
Quantum Machine Learning using the Quantum Variational Classifier Method to High Energy Physics Analysis at the LHC on
IBM Quantum Computer Simulator and Hardware with 10 qubits. arXiv 2020, arXiv:2012.11560.
69. Jerbi, S.; Gyurik, C.; Marshall, S.; Briegel, H.J.; Dunjko, V. Variational quantum policies for reinforcement learning. arXiv 2021,
arXiv:2103.05577.
70. Takaki, Y.; Mitarai, K.; Negoro, M.; Fujii, K.; Kitagawa, M. Learning temporal data with variational quantum recurrent neural
network. arXiv 2020, arXiv:2012.11242.
71. Schuld, M.; Petruccione, F. Information Encoding. In Supervised Learning with Quantum Computers; Springer International
Publishing: Cham, Switzerland, 2018; pp. 139–171. [CrossRef]
72. Schuld, M.; Bergholm, V.; Gogolin, C.; Izaac, J.; Killoran, N. Evaluating analytic gradients on quantum hardware. Phys. Rev. A
2019, 99, 032331. [CrossRef]
73. Bergholm, V.; Izaac, J.; Schuld, M.; Gogolin, C.; Blank, C.; McKiernan, K.; Killoran, N. Pennylane: Automatic differentiation of
hybrid quantum-classical computations. arXiv 2018, arXiv:1811.04968.
74. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. Pytorch:
An imperative style, high-performance deep learning library. arXiv 2019, arXiv:1912.01703.
75. Suzuki, Y.; Kawase, Y.; Masumura, Y.; Hiraga, Y.; Nakadai, M.; Chen, J.; Nakanishi, K.M.; Mitarai, K.; Imai, R.; Tamiya, S.; et al.
Qulacs: A fast and versatile quantum circuit simulator for research purpose. arXiv 2020, arXiv:2011.13524.
76. Elson, J.; Douceur, J.J.; Howell, J.; Saul, J. Asirra: A CAPTCHA that Exploits Interest-Aligned Manual Image Categorization. In
Proceedings of 14th ACM Conference on Computer and Communications Security (CCS), Alexandria, VA, USA, 29 October–2
November 2007.
77. Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images; Technical Report; CiteSeerX Publishing: Princeton, NJ, USA,
2009.
78. Dwork, C.; Roth, A. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 2014, 9, 211–407.
[CrossRef]
79. Fredrikson, M.; Jha, S.; Ristenpart, T. Model inversion attacks that exploit confidence information and basic countermeasures.
In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, Denver, CO, USA, 12–16
October 2015; pp. 1322–1333.
80. Shokri, R.; Stronati, M.; Song, C.; Shmatikov, V. Membership inference attacks against machine learning models. In Proceedings
of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 25 May 2017; pp. 3–18.
81. Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H.B.; Mironov, I.; Talwar, K.; Zhang, L. Deep learning with differential privacy. In
Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October
2016; pp. 308–318.
82. Goryczka, S.; Xiong, L.; Sunderam, V. Secure multiparty aggregation with differential privacy: A comparative study. In
Proceedings of the Joint EDBT/ICDT 2013 Workshops, Genoa, Italy, 18–22 March 2013; pp. 155–163.
83. Li, W.; Lu, S.; Deng, D.L. Quantum Private Distributed Learning Through Blind Quantum Computing. arXiv 2021,
arXiv:2103.08403.
84. Pillutla, K.; Kakade, S.M.; Harchaoui, Z. Robust aggregation for federated learning. arXiv 2019, arXiv:1912.13445.
85. Ang, F.; Chen, L.; Zhao, N.; Chen, Y.; Wang, W.; Yu, F.R. Robust federated learning with noisy communication. IEEE Trans.
Commun. 2020, 68, 3452–3464. [CrossRef]
86. Savazzi, S.; Nicoli, M.; Rampa, V. Federated learning with cooperating devices: A consensus approach for massive IoT networks.
IEEE Internet Things J. 2020, 7, 4641–4654. [CrossRef]
87. Wittkopp, T.; Acker, A. Decentralized Federated Learning Preserves Model and Data Privacy. arXiv 2021, arXiv:2102.00880.
88. Pokhrel, S.R.; Choi, J. A decentralized federated learning approach for connected autonomous vehicles. In Proceedings of the
2020 IEEE Wireless Communications and Networking Conference Workshops (WCNCW), Seoul, Korea, 25–28 May 2020; pp. 1–6.
89. Bonawitz, K.; Eichner, H.; Grieskamp, W.; Huba, D.; Ingerman, A.; Ivanov, V.; Kiddon, C.; Konečnỳ, J.; Mazzocchi, S.; McMahan,
H.B.; et al. Towards federated learning at scale: System design. arXiv 2019, arXiv:1902.01046.
90. Xiao, Y.; Ye, Y.; Huang, S.; Hao, L.; Ma, Z.; Xiao, M.; Mumtaz, S. Fully Decentralized Federated Learning Based Beamforming
Design for UAV Communications. arXiv 2020, arXiv:2007.13614.
91. Lalitha, A.; Shekhar, S.; Javidi, T.; Koushanfar, F. Fully decentralized federated learning. In Proceedings of the Third workshop
on Bayesian Deep Learning (NeurIPS), Montreal, QC, Canada, 7 December 2018.
92. Lu, S.; Zhang, Y.; Wang, Y. Decentralized federated learning for electronic health records. In Proceedings of the 2020 54th Annual
Conference on Information Sciences and Systems (CISS), Princeton, NJ, USA, 18–20 March 2020; pp. 1–5.
93. Nakamoto, S. Bitcoin: A peer-to-peer electronic cash system. Bitcoin.org 2008, 4. [CrossRef]
94. Zyskind, G.; Nathan, O. Decentralizing privacy: Using blockchain to protect personal data. In Proceedings of the 2015 IEEE
Security and Privacy Workshops, San Jose, CA, USA, 21–22 May 2015; pp. 180–184.
95. Cai, W.; Wang, Z.; Ernst, J.B.; Hong, Z.; Feng, C.; Leung, V.C. Decentralized applications: The blockchain-empowered software
system. IEEE Access 2018, 6, 53019–53033. [CrossRef]
96. Pandl, K.D.; Thiebes, S.; Schmidt-Kraepelin, M.; Sunyaev, A. On the convergence of artificial intelligence and distributed ledger
technology: A scoping review and future research agenda. IEEE Access 2020, 8, 57075–57095. [CrossRef]
97. Qu, Y.; Gao, L.; Luan, T.H.; Xiang, Y.; Yu, S.; Li, B.; Zheng, G. Decentralized privacy using blockchain-enabled federated learning
in fog computing. IEEE Internet Things J. 2020, 7, 5171–5183. [CrossRef]
98. Ramanan, P.; Nakayama, K. Baffle: Blockchain based aggregator free federated learning. In Proceedings of the 2020 IEEE
International Conference on Blockchain (Blockchain), Rhodes Island, Greece, 2–6 November 2020; pp. 72–81.
99. Awan, S.; Li, F.; Luo, B.; Liu, M. Poster: A reliable and accountable privacy-preserving federated learning framework using the
blockchain. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, London, UK,
11–15 November 2019; pp. 2561–2563.
100. Qu, Y.; Pokhrel, S.R.; Garg, S.; Gao, L.; Xiang, Y. A blockchained federated learning framework for cognitive computing in
industry 4.0 networks. IEEE Trans. Ind. Inf. 2020, 17, 2964–2973. [CrossRef]
101. Zhao, Y.; Zhao, J.; Jiang, L.; Tan, R.; Niyato, D.; Li, Z.; Lyu, L.; Liu, Y. Privacy-preserving blockchain-based federated learning for
IoT devices. IEEE Internet Things J. 2020, 8, 1817–1829. [CrossRef]
102. Bao, X.; Su, C.; Xiong, Y.; Huang, W.; Hu, Y. Flchain: A blockchain for auditable federated learning with trust and incentive. In
Proceedings of the 2019 5th International Conference on Big Data Computing and Communications (BIGCOM), QingDao, China,
9–11 August 2019; pp. 151–159.
103. Kim, H.; Park, J.; Bennis, M.; Kim, S.L. Blockchained on-device federated learning. IEEE Commun. Lett. 2019, 24, 1279–1283.
[CrossRef]
104. Liu, Y.; Ai, Z.; Sun, S.; Zhang, S.; Liu, Z.; Yu, H. Fedcoin: A peer-to-peer payment system for federated learning. In Federated
Learning; Springer: Berlin/Heidelberg, Germany, 2020; pp. 125–138.
105. Roy, A.G.; Siddiqui, S.; Pölsterl, S.; Navab, N.; Wachinger, C. Braintorrent: A peer-to-peer environment for decentralized federated
learning. arXiv 2019, arXiv:1905.06731.
106. Lalitha, A.; Kilinc, O.C.; Javidi, T.; Koushanfar, F. Peer-to-peer federated learning on graphs. arXiv 2019, arXiv:1901.11173.
107. Hegedűs, I.; Berta, Á.; Kocsis, L.; Benczúr, A.A.; Jelasity, M. Robust decentralized low-rank matrix decomposition. ACM Trans.
Intell. Syst. Technol. (TIST) 2016, 7, 1–24. [CrossRef]
108. Ormándi, R.; Hegedűs, I.; Jelasity, M. Gossip learning with linear models on fully distributed data. Concurr. Comput. Pract. Exp.
2013, 25, 556–571. [CrossRef]
109. Hegedűs, I.; Danner, G.; Jelasity, M. Gossip learning as a decentralized alternative to federated learning. In IFIP International
Conference on Distributed Applications and Interoperable Systems; Springer: Berlin/Heidelberg, Germany, 2019; pp. 74–90.
110. Hegedűs, I.; Danner, G.; Jelasity, M. Decentralized recommendation based on matrix factorization: a comparison of gossip
and federated learning. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases; Springer:
Berlin/Heidelberg, Germany, 2019; pp. 317–332.
111. Hegedűs, I.; Danner, G.; Jelasity, M. Decentralized learning works: An empirical comparison of gossip learning and federated
learning. J. Parallel Distrib. Comput. 2021, 148, 109–124. [CrossRef]
112. Hu, C.; Jiang, J.; Wang, Z. Decentralized federated learning: a segmented gossip approach. arXiv 2019, arXiv:1908.07782.
113. Sergeev, A.; Balso, M.D. Horovod: fast and easy distributed deep learning in TensorFlow. arXiv 2018, arXiv:1802.05799.
114. Chen, S.Y.C.; Huang, C.M.; Hsing, C.W.; Kao, Y.J. An end-to-end trainable hybrid classical-quantum classifier. arXiv 2021,
arXiv:2102.02416.
115. Cong, I.; Choi, S.; Lukin, M.D. Quantum convolutional neural networks. Nat. Phys. 2019, 15, 1273–1278. [CrossRef]
116. Li, Y.; Zhou, R.G.; Xu, R.; Luo, J.; Hu, W. A quantum deep convolutional neural network for image recognition. Quantum Sci.
Technol. 2020, 5, 044003. [CrossRef]
117. Oh, S.; Choi, J.; Kim, J. A Tutorial on Quantum Convolutional Neural Networks (QCNN). arXiv 2020, arXiv:2009.09423.
118. Kerenidis, I.; Landman, J.; Prakash, A. Quantum algorithms for deep convolutional neural networks. arXiv 2019, arXiv:1911.01117.
119. Liu, J.; Lim, K.H.; Wood, K.L.; Huang, W.; Guo, C.; Huang, H.L. Hybrid Quantum-Classical Convolutional Neural Networks.
arXiv 2019, arXiv:1911.02998.
120. Qi, J.; Yang, C.H.H.; Tejedor, J. Submodular rank aggregation on score-based permutations for distributed automatic speech
recognition. In Proceedings of the ICASSP 2020 IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 3517–3521.
121. Egger, D.J.; Gambella, C.; Marecek, J.; McFaddin, S.; Mevissen, M.; Raymond, R.; Simonetto, A.; Woerner, S.; Yndurain, E.
Quantum computing for finance: State of the art and future prospects. IEEE Trans. Quantum Eng. 2020, 1, 3101724. [CrossRef]