EE746 - Report

This research paper proposes improvements to time-delay reservoir computing (TDRC) by addressing resonances between the delay time (τ) and the clock cycle time (τ'). TDRC harnesses dynamical systems for machine learning tasks. It introduces a reservoir with state variable x(t), described by a delay-differential equation, that processes a preprocessed input u(t). The reservoir's state is discretized at intervals θ as virtual nodes for a linear readout and output. The paper represents the TDRC as an equivalent linear echo state network and analytically calculates its memory capacity. It finds that deviations between τ and τ' that avoid resonances can enhance capabilities, since memory capacity decreases for resonant clock cycles, i.e., when the ratio τ'/τ is close to a rational number with a small denominator.

Group Members:

● Aryan Mitesh Shah 21D170008


● Soumyadeep Jana 21D070075
● Kadiyala Sai Susrush 210070038

TOPIC:
Improving LSM performance with deep time-delay reservoir computing:-
Machine learning approaches can generate models for forecasting the behaviour of
dynamical systems using only observed data, and reservoir computing (RC) is especially
well-suited for learning such models.

RCs are based on a recurrent artificial neural network with a pool of interconnected
neurons, an input layer feeding observed data to the network, and an output layer
weighting the network states. They perform as well as other ML methods on dynamical-systems
tasks while requiring smaller data sets.

In standard RC, the nonlinearity sits in the reservoir while the readout layer is linear.
With small data sets this arrangement can be unnecessarily complex, which motivates
next-generation reservoir computing, where the linear and nonlinear roles of the reservoir
and the readout are interchanged. The next-generation RC (NG-RC) performs equally well as
an optimized RC on three challenging RC benchmark problems:

(1) forecasting the short-term dynamics;

(2) reproducing the long-term climate of a chaotic system;

(3) inferring the behaviour of unseen data.


Recent research has emerged on algorithms that do not require randomness, such as
next-generation reservoir computers (NG-RCs) and sparse identification of nonlinear
dynamics (SINDy). The NG-RC is a nonlinear vector autoregression (NVAR) machine that
uses time-delay observations and nonlinear functions to construct the governing equations
of the data. This is where time-delay reservoir computing comes into the picture.

The brain-inspired reservoir computing paradigm manifests the natural computing
abilities of dynamical systems. Inspired by randomly connected artificial neural
networks called echo state networks, simple optical and optoelectronic hardware
implementations were developed, opening up research on delay-based reservoirs.

How delay-based reservoir computing is built:-


● In delay-based reservoir computing, the nodes of the network are separated
temporally, and the computation time correlates with the number of nodes.
● The reservoir computing scheme contains three different parts: the input layer,
the reservoir, and the output layer. The reservoir can be any dynamical system.
● The output layer is trained by linear weighting of all accessible reservoir states,
while the reservoir parameters are kept fixed. This simplification avoids the main
issues of the time-expensive training of recurrent neural networks, such as
exploding gradients and high power consumption.
● The delay-based reservoirs were successfully applied to a wide range of tasks,
such as chaotic time-series forecasting or speech recognition.
● The invention of delay-based RC opened doors for research in many different
and sophisticated models.
● We consider the same number of nodes in all layers, which allows us to use
time-multiplexing (a minimal sketch follows this list).
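
To make the construction concrete, here is a minimal, hedged sketch of a time-multiplexed delay-based reservoir with a ridge-regression readout. It assumes a discretized map over the virtual nodes with the clock cycle equal to the delay, and illustrative names and values (N, eta, gamma, mask); it is not the exact model of the papers discussed.

import numpy as np

# Minimal time-multiplexed delay reservoir (illustrative map, not the papers' exact model).
rng = np.random.default_rng(0)
N, steps = 50, 1000                # virtual nodes per clock cycle, number of input samples
mask = rng.uniform(-1, 1, N)       # periodic input mask (arbitrary, fixed)
eta, gamma = 0.8, 0.3              # feedback and input scaling (assumed values)

u = rng.uniform(-1, 1, steps)      # scalar input sequence
states = np.zeros((steps, N))
x = np.zeros(N)                    # virtual-node states from the previous clock cycle
for n in range(steps):
    # each virtual node is driven by its delayed value and the masked input sample
    x = np.tanh(eta * x + gamma * mask * u[n])
    states[n] = x

# linear readout trained by ridge regression, here to recall the previous input sample
target = np.roll(u, 1)
w = np.linalg.solve(states.T @ states + 1e-6 * np.eye(N), states.T @ target)
print("training MSE:", np.mean((states @ w - target) ** 2))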

Working of TD-RC:-

● No output signals (e.g., from a linear regression) are generated at each layer
separately. The input enters only the first layer, and each consecutive layer
receives only the dynamical state of the previous layer. As a result, the layers
are not trained sequentially.
● We consider L nodes. They are coupled unidirectionally, with self-feedback after
a delay τ_l. Because of the unidirectional topology and their high-dimensional
dynamics, the nodes with their corresponding feedback loops are referred to as
layers in the following.
● According to experiments, additional layers generally improve the performance of
a deep TRC compared with a single-layer TRC.
● For a constant node separation θ, the deep TRC enables faster computation: a deep
TRC with L = 5 layers is 5 times faster than a single-layer TRC with the same total
number of nodes, in line with a 5-times-shorter clock cycle (see the relation below).
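
The speed-up can be written compactly; the notation here (N total virtual nodes, node separation θ, L layers with N/L nodes each) is an assumption for illustration: a single-layer TRC has clock cycle T_single = N·θ, while each layer of the deep TRC holds N/L nodes, giving T_deep = (N/L)·θ = T_single/L.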

Factors for optimized TD-RC performance:-


● The fading memory concept states that the reservoir needs to become
independent of its history functions after a certain time. Therefore, two
identical reservoirs with different initial conditions must approach each other
asymptotically. From a dynamical perspective, the reservoir has to show
generalised synchronization with its input (a simple numerical check of this
convergence is sketched after this list).
● The perturbation of the first layer’s initial condition stays longer in the system
when a second layer is added.
● A large memory capacity, necessary for a reservoir computer to perform its tasks,
is observed in the regions where the conditional Lyapunov exponent is negative,
and the fading memory condition is satisfied.
● The highest linear MC can be achieved close to bifurcations where the
conditional Lyapunov exponent is negative and small in absolute value. In such a
case, the linear information of the input stays longer in the system.
● With the decrease in the conditional Lyapunov exponent, the linear MC
decreases, and MC2 starts dominating. With the further decrease in conditional
Lyapunov exponent, the third-order MC becomes dominant. We remark that
there is always a trade-off between the MC of different degrees since the total
MC bounds their sum.
● Different dynamical regimes of a deep TRC can boost different degrees of the
MC.
● Resonances between the delay time τ and the clock cycle T lead to a degradation of
the linear memory capacity due to the parallel alignment of eigenvectors of the
underlying network. This effect appears in all degrees of the memory capacity.
● A high linear MC is observed for a small negative conditional Lyapunov
exponent. As the Lyapunov exponent decreases, different degrees of
MC are sequentially activated.
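
A simple, hedged way to check the fading memory condition numerically is to drive two copies of the same reservoir with the same input but different initial states and fit the decay rate of their separation; the toy map and parameter values below are assumptions, not the models from the papers.

import numpy as np

# Estimate the conditional Lyapunov exponent of a driven reservoir map by tracking
# the separation of two copies that differ only in their initial conditions.
rng = np.random.default_rng(1)
N, steps = 50, 2000
mask = rng.uniform(-1, 1, N)
eta, gamma = 0.8, 0.3
u = rng.uniform(-1, 1, steps)

x_a = rng.uniform(-1, 1, N)         # copy A
x_b = rng.uniform(-1, 1, N)         # copy B, different initial condition
log_dist = []
for n in range(steps):
    x_a = np.tanh(eta * x_a + gamma * mask * u[n])
    x_b = np.tanh(eta * x_b + gamma * mask * u[n])
    d = np.linalg.norm(x_a - x_b)
    if d < 1e-12:                   # copies have synchronized to numerical precision
        break
    log_dist.append(np.log(d))

# slope of log-distance vs. time approximates the conditional Lyapunov exponent;
# a negative value indicates fading memory (echo state property).
lam = np.polyfit(np.arange(len(log_dist)), log_dist, 1)[0]
print("estimated conditional Lyapunov exponent:", lam)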
Performance boost of time-delay reservoir
computing by non-resonant clock cycle
Main Objective of the Paper:
The primary focus of the research paper is on Time-Delay Reservoir Computing (TDRC)
as a supervised machine learning method, which harnesses the computational
capabilities of dynamic systems to solve time-dependent problems that are often
challenging for traditional artificial neural network-based approaches.

Part 1: Introduction

● Reservoir Computing Paradigm: The paper introduces the concept of Reservoir
Computing, which leverages the inherent computational power of dynamic
systems, making it particularly suitable for solving time-dependent problems.
● Role of a Dynamical Reservoir: The paper highlights that a dynamical reservoir,
driven by an input signal, plays a central role in this paradigm, and a linear
readout mapping is used to produce the output.

Part 2: System Components

● Preprocessing (I): Describes how input data is preprocessed to match the
continuous nature of the reservoir by using a step function and a periodic mask
function.
The input data u(t) is discrete, whereas x(t) is continuous. How is this bridged? A step
function assigns each discrete value to an interval of time; this is then multiplied by an
(arbitrary) periodic mask function, yielding an intermediate function J(t) (sketched below).
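
A minimal sketch of this preprocessing step, assuming illustrative names and values (N, theta, a binary mask); it is not the paper's exact mask or parameters.

import numpy as np

# Sample-and-hold plus periodic masking: turn discrete samples u_n into a
# piecewise-constant drive J(t) for the continuous-time reservoir.
rng = np.random.default_rng(2)
N, theta = 20, 0.05                 # virtual nodes per clock cycle, node separation
T = N * theta                       # clock cycle (here, the input hold time)
mask = rng.choice([-1.0, 1.0], N)   # piecewise-constant periodic mask values

u = rng.uniform(-1, 1, 10)          # discrete input samples u_0 ... u_9

def J(t):
    n = int(t // T)                 # which input sample is being held
    k = int((t % T) // theta)       # which mask segment (virtual node) is active
    return mask[k] * u[n]

print(J(0.0), J(0.12), J(0.27))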
● Reservoir (II): Details the construction of the reservoir using a time-delay
differential equation, emphasizing the need for the echo state property to ensure
consistent and predictable behaviour. The reservoir is a crucial component in the
TDRC framework, and its purpose is to process the preprocessed input data to
create a dynamic state variable that can be used for subsequent computations.
The reservoir itself is described by a delay-differential equation, given below.
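The equation itself did not survive extraction here; a common TDRC form, written as an assumption consistent with the surrounding description (f the activation function, γ an input scaling, J(t) the masked input from the preprocessing step), is dx(t)/dt = −x(t) + f(x(t − τ) + γ·J(t)); the exact form and parameters should be taken from the paper.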

● Readout (III): Explains how the reservoir's continuous state is discretized to
produce the system's output. This step involves reading out the reservoir's state
at specific time intervals, denoted by θ. These intervals are often viewed as
"virtual nodes," and the entire delay system is considered a "virtual network."
Part 3: Mismatch Between Delay and Clock Cycle Times

● Examines the impact of the mismatch between delay time (τ) and clock cycle
time (τ') on TDRC.
● Suggests that deviations from a perfect match between τ and τ' can enhance the
system's capabilities, particularly when using a linear activation function.

Explanation of the meaning of τ and τ':

1. τ: Indicates the time duration over which past input signals are retained and
integrated into the current state of the system. It is essentially a measure of the
memory capacity of the system. A larger τ allows the system to retain
information from further back in time, whereas a smaller τ provides shorter
memory.
2. τ': the clock cycle time of the system. In other words, it denotes the time interval
at which the system updates or processes its state. τ' acts as a reference time scale
for the system's operation.

Part 4: Approximation by a Network

● Aims to represent a TDRC system as an equivalent network to explain memory
capacity degradation during resonant clock cycles, especially when τ ≠ τ'.
● Assumes a linear activation function (f(x) = αx) for simplicity and emphasizes the
importance of keeping the spectral radius of certain matrices below one for
system stability.
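
As a hedged illustration of this stability condition: with a linear activation, one clock cycle of the delay system can be collapsed into a linear echo state network x(n+1) = A·x(n) + b·u(n); the toy matrix A below is a stand-in for the equivalent-network construction, not the paper's exact matrices.

import numpy as np

# Toy linear echo-state update: the echo state property requires the spectral
# radius of the state matrix A to stay below one.
alpha, N = 0.9, 30
A = alpha * np.diag(np.ones(N - 1), -1)   # shift-like coupling between virtual nodes
A[0, -1] = alpha                          # feedback from the last node of the previous cycle
rho = max(abs(np.linalg.eigvals(A)))
print("spectral radius:", rho, "-> echo state property:", rho < 1)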
Part 5: Memory Capacity Calculation
● Explains how to analytically compute the memory capacity of the linear echo
state network model, particularly for cases when τ' ≥ τ + θ.
● Utilizes the concept of memory capacity, originally introduced by Jaeger (2002), to
measure the network's ability to recall past input values.
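
A hedged sketch of Jaeger's linear memory capacity computed numerically (rather than analytically as in the paper): one linear readout is trained per recall delay k and the squared correlations are summed. The small linear reservoir used for the demonstration is an assumption standing in for the TDRC virtual-node states.

import numpy as np

# Linear memory capacity: MC = sum_k corr^2( readout_k(states), u(n - k) ).
def memory_capacity(states, u, k_max=40, reg=1e-6):
    N = states.shape[1]
    mc = 0.0
    for k in range(1, k_max + 1):
        X, y = states[k:], u[:-k]                        # align states with u(n - k)
        w = np.linalg.solve(X.T @ X + reg * np.eye(N), X.T @ y)
        c = np.corrcoef(X @ w, y)[0, 1]
        mc += c ** 2
    return mc

# toy usage with a small linear reservoir driven by an i.i.d. input sequence
rng = np.random.default_rng(4)
steps, N, alpha = 3000, 30, 0.9
A = alpha * np.diag(np.ones(N - 1), -1)
A[0, -1] = alpha
b = rng.uniform(-1, 1, N)
u = rng.uniform(-1, 1, steps)
states, x = np.zeros((steps, N)), np.zeros(N)
for n in range(steps):
    x = A @ x + b * u[n]
    states[n] = x
print("linear MC:", memory_capacity(states, u))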

Part 6: Memory Capacity Gaps


● Introduces the idea that memory capacity decreases for resonant clock cycles,
i.e., when the ratio τ'/τ is close to a rational number with a small denominator.
● Attributes this memory capacity loss to the alignment of eigenvectors within the
network model under specific resonance conditions.
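
As a small illustration of what "resonant" means here (the denominator bound and tolerance below are arbitrary choices, not values from the paper), one can flag clock cycles τ' for which b·τ' ≈ a·τ with small integers a, b:

from fractions import Fraction

# Flag clock cycles tau_p that are nearly resonant with the delay tau,
# i.e. tau_p / tau is close to a fraction a/b with a small denominator b.
def is_resonant(tau, tau_p, max_denominator=5, tol=0.01):
    ratio = tau_p / tau
    frac = Fraction(ratio).limit_denominator(max_denominator)
    return abs(ratio - float(frac)) < tol, frac

tau = 1.0
for tau_p in [1.0, 1.02, 1.33, 1.37, 1.5]:
    flag, frac = is_resonant(tau, tau_p)
    print(f"tau'={tau_p}: resonant={flag} (closest small fraction: {frac})")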

[Figure: example of a resonant case, depicting the same task with the parameter values given above.]

Discussion:
● Discusses a notable increase in computing error and a drop in linear memory
capacity for resonant cases of bτ' ≈ aτ, with a, b ∈ N, particularly when b is small.
● The resulting memory capacity will be small for cases where τ and τ' are
resonant, because the information within the equivalent network will be
overwritten by new inputs very quickly.
● Suggests that these results are not limited to the linear case and can be expected
to apply in more general situations. We can go to higher orders using the same
methods given in the paper.
Conclusion:
As discussed, we first went through the earliest RC models, which have the least
optimized properties. The NG-RC model followed with a slight improvement, but
still did not give the best results. The time-delay model came next, with optimized
performance and an even smaller training-data requirement. Yet even in the
time-delay RC model (TDRC), the synchronous case does not perform very well,
contrary to what is generally expected.

So, we are done with the literature survey. The next step is implementation. ;)
