
Reservoir computing

Reservoir computing is a framework for computation derived from recurrent neural network theory that
maps input signals into higher dimensional computational spaces through the dynamics of a fixed, non-
linear system called a reservoir.[1] After the input signal is fed into the reservoir, which is treated as a "black
box," a simple readout mechanism is trained to read the state of the reservoir and map it to the desired
output.[1] The first key benefit of this framework is that training is performed only at the readout stage, as
the reservoir dynamics are fixed.[1] The second is that the computational power of naturally available
systems, both classical and quantum mechanical, can be used to reduce the effective computational cost.[2]
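
To make the framework concrete, here is a minimal echo-state-style sketch in Python; all sizes, scalings, and the toy task are illustrative assumptions rather than details from the cited references. The reservoir weights are generated once and left fixed, and only the linear readout is fitted:

    import numpy as np

    rng = np.random.default_rng(42)
    n_in, n_res = 1, 200  # sizes are arbitrary illustrative choices

    # Fixed, randomly generated weights -- these are never trained.
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep the spectral radius below 1

    def run_reservoir(inputs):
        """Drive the fixed nonlinear reservoir and collect its states."""
        x = np.zeros(n_res)
        states = []
        for u in inputs:
            # The recurrence provides memory; tanh provides the nonlinearity.
            x = np.tanh(W @ x + W_in @ u)
            states.append(x.copy())
        return np.array(states)

    # Toy task (hypothetical): one-step-ahead prediction of a sine wave.
    t = np.linspace(0, 20 * np.pi, 2000)
    u_seq = np.sin(t)[:, None]
    y_seq = np.sin(t + 0.1)[:, None]

    X = run_reservoir(u_seq)

    # Training happens only here: a ridge-regression fit of the linear readout.
    lam = 1e-6
    W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y_seq)
    print("train MSE:", np.mean((X @ W_out - y_seq) ** 2))

Scaling the spectral radius below one is a common heuristic for giving the reservoir a fading memory of past inputs.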

History
The concept of reservoir computing stems from the use of recursive connections within neural networks to
create a complex dynamical system.[3] It is a generalisation of earlier neural network architectures such as
recurrent neural networks, liquid-state machines and echo-state networks. Reservoir computing also
extends to physical systems that are not networks in the classical sense, but rather continuous systems in
space and/or time: e.g. a literal "bucket of water" can serve as a reservoir that performs computations on
inputs given as perturbations of the surface.[4] The resultant complexity of such recurrent neural networks
was found to be useful in solving a variety of problems including language processing and dynamic system
modeling.[3] However, training of recurrent neural networks is challenging and computationally
expensive.[3] Reservoir computing reduces those training-related challenges by fixing the dynamics of the
reservoir and only training the linear output layer.[3]

A large variety of nonlinear dynamical systems can serve as a reservoir that performs computations. In
recent years, semiconductor lasers have attracted considerable interest, as computation can be fast and energy-efficient compared to electrical components.

Recent advances in both AI and quantum information theory have given rise to the concept of quantum neural networks.[5] These hold promise in quantum information processing, which is challenging for classical networks, but can also find application in solving classical problems.[5][6] In 2018, a physical realization of a quantum reservoir computing architecture was demonstrated in the form of nuclear spins within a molecular solid.[6] However, the nuclear spin experiments in [6] did not demonstrate quantum reservoir computing per se, as they did not involve the processing of sequential data. Rather, the data were vector inputs, which makes the work more accurately a demonstration of a quantum implementation of the random kitchen sinks[7] algorithm (also known as extreme learning machines in some communities). In 2019, another possible implementation of quantum reservoir processors was proposed in the form of two-dimensional fermionic lattices.[5] In 2020, realization of reservoir computing on gate-based quantum computers was proposed and demonstrated on cloud-based IBM superconducting near-term quantum computers.[8]
Reservoir computers have been used for time-series analysis. In particular, some of their applications involve chaotic time-series prediction,[9][10] separation of chaotic signals,[11] and link inference of networks from their dynamics.[12]

Classical reservoir computing

Reservoir

The 'reservoir' in reservoir computing is the internal structure of the computer, and must have two
properties: it must be made up of individual, non-linear units, and it must be capable of storing information.
The non-linearity describes the response of each unit to input, which is what allows reservoir computers to
solve complex problems. Reservoirs are able to store information by connecting the units in recurrent loops, where the previous input affects the next response. This dependence of the response on past inputs allows the computers to be trained to complete specific tasks.[13]

Reservoirs can be virtual or physical.[13] Virtual reservoirs are typically randomly generated and are
designed like neural networks.[13][3] Virtual reservoirs can be designed to have non-linearity and recurrent
loops, but, unlike neural networks, the connections between units are randomized and remain unchanged
throughout computation.[13] Physical reservoirs are possible because of the inherent non-linearity of certain
natural systems. The interaction between ripples on the surface of water contains the nonlinear dynamics required in reservoir creation; a pattern-recognition reservoir computer was demonstrated by first exciting ripples with electric motors and then recording and analyzing the ripples at the readout.[1]
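
In the standard discrete-time formulation used for echo state networks (a common convention, assumed here rather than drawn from any one cited reservoir), the reservoir state $x(t)$ evolves as

$$x(t+1) = \tanh\!\left(W\,x(t) + W_{\mathrm{in}}\,u(t+1)\right),$$

where $u(t)$ is the input, $W_{\mathrm{in}}$ holds the fixed random input weights, and $W$ the fixed recurrent weights: the recurrent term $W\,x(t)$ supplies the memory of past inputs, and the $\tanh$ supplies the per-unit non-linearity.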

Readout

The readout is a neural network layer that performs a linear transformation on the output of the reservoir.[1]
The weights of the readout layer are trained by analyzing the spatiotemporal patterns of the reservoir after
excitation by known inputs, and by utilizing a training method such as linear regression or ridge regression.[1] As its implementation depends on spatiotemporal reservoir patterns, the details of readout
methods are tailored to each type of reservoir.[1] For example, the readout for a reservoir computer using a
container of liquid as its reservoir might entail observing spatiotemporal patterns on the surface of the
liquid.[1]
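
For instance, with ridge regression the trained readout has a standard closed form (notation assumed here for illustration: the columns of $X$ are the recorded reservoir states, the columns of $Y$ are the corresponding target outputs, and $\lambda \geq 0$ is the regularization strength):

$$W_{\mathrm{out}} = Y X^{\top}\left(X X^{\top} + \lambda I\right)^{-1},$$

after which the output for a new reservoir state $x$ is simply the linear map $y = W_{\mathrm{out}}\,x$.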

Types

Context reverberation network

An early example of reservoir computing was the context reverberation network.[14] In this architecture, an
input layer feeds into a high dimensional dynamical system which is read out by a trainable single-layer
perceptron. Two kinds of dynamical system were described: a recurrent neural network with fixed random
weights, and a continuous reaction–diffusion system inspired by Alan Turing’s model of morphogenesis. At
the trainable layer, the perceptron associates current inputs with the signals that reverberate in the dynamical
system; the latter were said to provide a dynamic "context" for the inputs. In the language of later work, the
reaction–diffusion system served as the reservoir.

Echo state network


An echo state network (ESN) employs a sparsely and randomly connected recurrent hidden layer whose weights are generated once and kept fixed, so that only the output weights are trained. The Tree Echo State Network (TreeESN) model represents a generalization of the reservoir computing framework to tree-structured data.[15]

Liquid-state machine

Chaotic Liquid State Machine

The liquid (i.e. reservoir) of a Chaotic Liquid State Machine (CLSM),[16][17] or chaotic reservoir, is made of chaotic spiking neurons that stabilize their activity by settling to a single hypothesis describing the trained inputs of the machine. This is in contrast to general types of reservoirs, which do not stabilize. The stabilization of the liquid occurs via synaptic plasticity and chaos control, which govern the neural connections inside the liquid. The CLSM has shown promising results in learning sensitive time-series data.[16][17]

Nonlinear transient computation

This type of information processing is most relevant when time-dependent input signals depart from the
mechanism’s internal dynamics.[18] These departures cause transients, or temporary alterations, which are represented in the device’s output.[18]

Deep reservoir computing

The extension of the reservoir computing framework towards deep learning, with the introduction of Deep Reservoir Computing and of the Deep Echo State Network (DeepESN) model,[19][20][21][22] makes it possible to develop efficiently trained models for the hierarchical processing of temporal data, while also enabling investigation of the inherent role of layered composition in recurrent neural networks.
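
As a rough illustration of the layered idea (a sketch under assumed conventions, not the reference DeepESN implementation), fixed reservoirs can be stacked so that the state sequence of one layer drives the next, with a single linear readout trained on the states gathered from every layer:

    import numpy as np

    rng = np.random.default_rng(0)

    def make_layer(n_in, n_res, rho=0.9):
        """One fixed reservoir layer: random input and recurrent weights."""
        W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        W *= rho / max(abs(np.linalg.eigvals(W)))  # spectral radius below 1
        return W_in, W

    def run_deep(inputs, layers):
        """Drive stacked reservoirs; layer k is fed by layer k-1's states."""
        signal, all_states = inputs, []
        for W_in, W in layers:
            x = np.zeros(W.shape[0])
            states = []
            for s in signal:
                x = np.tanh(W @ x + W_in @ s)
                states.append(x.copy())
            signal = np.array(states)  # the next layer reads these states
            all_states.append(signal)
        # A linear readout (trained as usual) may see every layer's states.
        return np.concatenate(all_states, axis=1)

    layers = [make_layer(1, 100), make_layer(100, 100), make_layer(100, 100)]
    states = run_deep(np.sin(np.linspace(0, 60, 500))[:, None], layers)
    print(states.shape)  # (500, 300): hierarchical features for the readout

Each added layer sees a progressively more processed version of the input sequence, which is one way the collected states come to span multiple time scales.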

Quantum reservoir computing


Quantum reservoir computing may use the nonlinear nature of quantum mechanical interactions or processes to form the characteristic nonlinear reservoirs,[5][6][23][8] but may also be realized with linear reservoirs when the injection of the input into the reservoir creates the nonlinearity.[24] The marriage of
machine learning and quantum devices is leading to the emergence of quantum neuromorphic computing as
a new research area.[25]

Types

Gaussian states of interacting quantum harmonic oscillators

Gaussian states are a paradigmatic class of states of continuous-variable quantum systems.[26] Although they can nowadays be created and manipulated in, e.g., state-of-the-art optical platforms,[27] and are naturally robust to decoherence, it is well known that they are not sufficient for, e.g., universal quantum computing, because transformations that preserve the Gaussian nature of a state are linear.[28] Normally, linear dynamics would not be sufficient for nontrivial reservoir computing either. It is nevertheless possible to harness such dynamics for reservoir computing purposes by considering a network of interacting quantum harmonic oscillators and injecting the input by periodic state resets of a subset of the oscillators. With a suitable choice of how the states of this subset of oscillators depend on the input, the observables of the rest of the oscillators can become nonlinear functions of the input, suitable for reservoir computing; indeed, thanks to the properties of these functions, even universal reservoir computing becomes possible by combining the observables with a polynomial readout function.[24] In principle, such reservoir computers could be implemented with controlled multimode optical parametric processes;[29] however, efficient extraction of the output from the system is challenging, especially in the quantum regime, where measurement back-action must be taken into account.

2-D quantum dot lattices

In this architecture, randomized coupling between lattice sites grants the reservoir the “black box” property inherent to reservoir processors.[5] The reservoir is then excited by an incident optical field, which acts as the input. Readout occurs in the form of occupation numbers of lattice sites, which are naturally nonlinear functions of the input.[5]

Nuclear spins in a molecular solid

In this architecture, quantum mechanical coupling between spins of neighboring atoms within the molecular
solid provides the non-linearity required to create the higher-dimensional computational space.[6] The
reservoir is then excited by radiofrequency electromagnetic radiation tuned to the resonance frequencies of
relevant nuclear spins.[6] Readout occurs by measuring the nuclear spin states.[6]

Reservoir computing on gate-based near-term superconducting quantum computers

The most prevalent model of quantum computing is the gate-based model where quantum computation is
performed by sequential applications of unitary quantum gates on qubits of a quantum computer.[30] A
theory for the implementation of reservoir computing on a gate-based quantum computer with proof-of-
principle demonstrations on a number of IBM superconducting noisy intermediate-scale quantum (NISQ)
computers[31] has been reported.[8]

See also
Deep learning
Extreme learning machines
Unconventional computing

References
1. Tanaka, Gouhei; Yamane, Toshiyuki; Héroux, Jean Benoit; Nakane, Ryosho; Kanazawa, Naoki; Takeda, Seiji; Numata, Hidetoshi; Nakano, Daiju; Hirose, Akira (2019). "Recent advances in physical reservoir computing: A review". Neural Networks. 115: 100–123. doi:10.1016/j.neunet.2019.03.005. PMID 30981085.
2. Röhm, André; Lüdge, Kathy (2018). "Multiplexed networks: reservoir computing with virtual and real nodes". Journal of Physics Communications. 2 (8): 085007. doi:10.1088/2399-6528/aad56d.
3. Schrauwen, Benjamin; Verstraeten, David; Van Campenhout, Jan (2007). "An overview of reservoir computing: theory, applications, and implementations". Proceedings of the European Symposium on Artificial Neural Networks ESANN 2007, pp. 471–482.
4. Fernando, C.; Sojakka, Sampsa (2003). "Pattern Recognition in a Bucket". Advances in Artificial Life. Lecture Notes in Computer Science. Vol. 2801. pp. 588–597. doi:10.1007/978-3-540-39432-7_63. ISBN 978-3-540-20057-4.
5. Ghosh, Sanjib; Opala, Andrzej; Matuszewski, Michał; Paterek, Tomasz; Liew, Timothy C. H. (2019). "Quantum reservoir processing". npj Quantum Information. 5 (1): 35. doi:10.1038/s41534-019-0149-8. arXiv:1811.10335.
6. Negoro, Makoto; Mitarai, Kosuke; Fujii, Keisuke; Nakajima, Kohei; Kitagawa, Masahiro (2018). "Machine learning with controllable quantum dynamics of a nuclear spin ensemble in a solid". arXiv:1806.10910 [quant-ph].
7. Rahimi, Ali; Recht, Benjamin (2008). "Weighted Sums of Random Kitchen Sinks: Replacing minimization with randomization in learning". NIPS'08: Proceedings of the 21st International Conference on Neural Information Processing Systems: 1313–1320.
8. Chen, Jiayin; Nurdin, Hendra; Yamamoto, Naoki (2020). "Temporal Information Processing on Noisy Quantum Computers". Physical Review Applied. 14 (2): 024065. doi:10.1103/PhysRevApplied.14.024065. arXiv:2001.09498.
9. Pathak, Jaideep; Hunt, Brian; Girvan, Michelle; Lu, Zhixin; Ott, Edward (2018). "Model-Free Prediction of Large Spatiotemporally Chaotic Systems from Data: A Reservoir Computing Approach". Physical Review Letters. 120 (2): 024102. doi:10.1103/PhysRevLett.120.024102. PMID 29376715.
10. Vlachas, P.R.; Pathak, J.; Hunt, B.R.; Sapsis, T.P.; Girvan, M.; Ott, E.; Koumoutsakos, P. (2020). "Backpropagation algorithms and Reservoir Computing in Recurrent Neural Networks for the forecasting of complex spatiotemporal dynamics". Neural Networks. 126: 191–217. doi:10.1016/j.neunet.2020.02.016. arXiv:1910.05266. PMID 32248008.
11. Krishnagopal, Sanjukta; Girvan, Michelle; Ott, Edward; Hunt, Brian R. (2020). "Separation of chaotic signals by reservoir computing". Chaos: An Interdisciplinary Journal of Nonlinear Science. 30 (2): 023123. doi:10.1063/1.5132766. arXiv:1910.10080. PMID 32113243.
12. Banerjee, Amitava; Hart, Joseph D.; Roy, Rajarshi; Ott, Edward (2021). "Machine Learning Link Inference of Noisy Delay-Coupled Networks with Optoelectronic Experimental Tests". Physical Review X. 11 (3): 031014. doi:10.1103/PhysRevX.11.031014. arXiv:2010.15289.
13. Soriano, Miguel C. (2017). "Viewpoint: Reservoir Computing Speeds Up". Physics. 10: 12. doi:10.1103/Physics.10.12.
14. Kirby, Kevin (1991). "Context dynamics in neural sequential learning". Proceedings of the Florida Artificial Intelligence Research Symposium FLAIRS, pp. 66–70.
15. Gallicchio, Claudio; Micheli, Alessio (2013). "Tree Echo State Networks". Neurocomputing. 101: 319–337. doi:10.1016/j.neucom.2012.08.017.
16. Aoun, Mario Antoine; Boukadoum, Mounir (2014). "Learning algorithm and neurocomputing architecture for NDS Neurons". 2014 IEEE 13th International Conference on Cognitive Informatics and Cognitive Computing: 126–132. doi:10.1109/icci-cc.2014.6921451. ISBN 978-1-4799-6081-1.
17. Aoun, Mario Antoine; Boukadoum, Mounir (2015). "Chaotic Liquid State Machine". International Journal of Cognitive Informatics and Natural Intelligence. 9 (4): 1–20. doi:10.4018/ijcini.2015100101.
18. Crook, Nigel (2007). "Nonlinear Transient Computation". Neurocomputing. 70 (7–9): 1167–1176. doi:10.1016/j.neucom.2006.10.148.
19. Pedrelli, Luca (2019). Deep Reservoir Computing: A Novel Class of Deep Recurrent Neural Networks (PhD thesis). Università di Pisa.
20. Gallicchio, Claudio; Micheli, Alessio; Pedrelli, Luca (2017). "Deep reservoir computing: A critical experimental analysis". Neurocomputing. 268: 87–99. doi:10.1016/j.neucom.2016.12.089.
21. Gallicchio, Claudio; Micheli, Alessio (2017). "Echo State Property of Deep Reservoir Computing Networks". Cognitive Computation. 9 (3): 337–350. doi:10.1007/s12559-017-9461-9.
22. Gallicchio, Claudio; Micheli, Alessio; Pedrelli, Luca (2018). "Design of deep echo state networks". Neural Networks. 108: 33–47. doi:10.1016/j.neunet.2018.08.002. PMID 30138751.
23. Chen, Jiayin; Nurdin, Hendra (2019). "Learning nonlinear input–output maps with dissipative quantum systems". Quantum Information Processing. 18 (7): 198. doi:10.1007/s11128-019-2311-9. arXiv:1901.01653.
24. Nokkala, Johannes; Martínez-Peña, Rodrigo; Giorgi, Gian Luca; Parigi, Valentina; Soriano, Miguel C.; Zambrini, Roberta (2021). "Gaussian states of continuous-variable quantum systems provide universal and versatile reservoir computing". Communications Physics. 4 (1): 53. doi:10.1038/s42005-021-00556-w. arXiv:2006.04821.
25. Marković, Danijela; Grollier, Julie (2020). "Quantum Neuromorphic Computing". Applied Physics Letters. 117 (15): 150501. doi:10.1063/5.0020014. arXiv:2006.15111.
26. Ferraro, Alessandro; Olivares, Stefano; Paris, Matteo G. A. (2005). "Gaussian states in continuous variable quantum information". arXiv:quant-ph/0503237.
27. Roslund, Jonathan; de Araújo, Renné Medeiros; Jiang, Shifeng; Fabre, Claude; Treps, Nicolas (2013). "Wavelength-multiplexed quantum networks with ultrafast frequency combs". Nature Photonics. 8 (2): 109–112. doi:10.1038/nphoton.2013.340. arXiv:1307.1216.
28. Bartlett, Stephen D.; Sanders, Barry C.; Braunstein, Samuel L.; Nemoto, Kae (2002). "Efficient Classical Simulation of Continuous Variable Quantum Information Processes". Physical Review Letters. 88 (9): 097904. doi:10.1103/PhysRevLett.88.097904. arXiv:quant-ph/0109047. PMID 11864057.
29. Nokkala, J.; Arzani, F.; Galve, F.; Zambrini, R.; Maniscalco, S.; Piilo, J.; Treps, N.; Parigi, V. (2018). "Reconfigurable optical implementation of quantum complex networks". New Journal of Physics. 20 (5): 053024. doi:10.1088/1367-2630/aabc77. arXiv:1708.08726.
30. Nielsen, Michael; Chuang, Isaac (2010). Quantum Computation and Quantum Information (2nd ed.). Cambridge University Press.
31. Preskill, John (2018). "Quantum Computing in the NISQ era and beyond". Quantum. 2: 79. doi:10.22331/q-2018-08-06-79.

Further reading
Reservoir Computing using delay systems (https://www.nature.com/ncomms/journal/v2/n9/full/ncomms1476.html), Nature Communications, 2011
Optoelectronic Reservoir Computing (https://www.nature.com/srep/2012/120227/srep00287/full/srep00287.html), Scientific Reports, February 2012
Optoelectronic Reservoir Computing (https://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-20-3-3241), Optics Express, 2012
All-optical Reservoir Computing (https://www.nature.com/ncomms/journal/v4/n1/full/ncomms2368.html), Nature Communications, 2013
Memristor Models for Machine learning (https://www.mitpressjournals.org/doi/10.1162/NECO_a_00694), Neural Computation, 2014; arXiv:1406.2210 (https://arxiv.org/abs/1406.2210)
