
Adama Science and Technology University
School of Electrical Engineering and Computing
Department of Computer Science and Engineering

Course Title: Special Topics in CS
Course Code: CSE-5306
Assignment on: Recurrent Neural Network (RNN)
Name: Dechasa Shimels
ID: A/UR15221/10
Submitted To: Dr. Rajesh Sharma R
Due: August 24, 2021
Recurrent neural network
The fundamental feature of a Recurrent Neural Network (RNN) is that the network
contains at least one feed-back connection, so the activations can flow round in a loop. This
allows it to exhibit temporal dynamic behavior. Derived from feed-forward neural
networks, RNNs can use their internal state (memory) to process variable-length sequences
of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting
recognition or speech recognition, and enables the networks to perform temporal processing
and learn sequences, e.g., sequence recognition/reproduction or temporal
association/prediction. Recurrent neural network architectures can take many different
forms. One common type consists of a standard Multi-Layer Perceptron (MLP) plus added
loops. These can exploit the powerful non-linear mapping capabilities of the MLP, and also
have some form of memory. Others have more uniform structures, potentially with every
neuron connected to all the others, and may also have stochastic activation functions.

History
Recurrent neural networks were based on David Rumelhart's work in 1986. Hopfield
networks – a special kind of RNN – were discovered by John Hopfield in 1982. In 1993, a
neural history compressor system solved a “Very Deep Learning” task that required more
than 1000 subsequent layers in an RNN unfolded in time.
LSTM
Long short-term memory (LSTM) networks were invented
by Hochreiter and Schmidhuber in 1997 and set accuracy records in multiple application
domains. Around 2007, LSTM started to revolutionize speech recognition, outperforming
traditional models in certain speech applications. In 2009, a Connectionist Temporal
Classification (CTC)-trained LSTM network was the first RNN to win pattern recognition
contests when it won several competitions in connected handwriting recognition. In 2014,
the Chinese company Baidu used CTC-trained RNNs to break the Switchboard
Hub5'00 speech recognition benchmark without using any traditional speech
processing methods.
LSTM also improved large-vocabulary speech recognition and text-to-speech synthesis and
was used in Google Android.  In 2015, Google's speech recognition reportedly experienced
a dramatic performance jump of 49% through CTC-trained LSTM.
LSTM broke records for improved machine translation  and Multilingual Language
Processing. LSTM combined with convolutional neural networks (CNNs)
improved automatic image captioning.

Architectures
RNNs come in many variants.
Fully recurrent
Unfolded basic recurrent neural network

Fully recurrent neural networks (FRNN) connect the outputs of all neurons to the inputs of
all neurons. This is the most general neural network topology because all other topologies
can be represented by setting some connection weights to zero to simulate the lack of
connections between those neurons. An unfolded diagram of such a network can be misleading,
because practical neural network topologies are frequently organized in "layers" and
the drawing gives that appearance. However, what appear to be layers are, in fact,
different steps in time of the same fully recurrent neural network: the recurrent
connections are "unfolded" in time to produce the appearance of layers.
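To make the unfolding idea concrete, the following minimal NumPy sketch (all names and sizes are illustrative assumptions, not taken from the text) applies the same input and recurrent weights at every time step; each loop iteration corresponds to one "layer" of the unrolled diagram.

import numpy as np

def fully_recurrent_forward(xs, W_in, W_rec, b):
    """Run one fully recurrent layer over a sequence.

    xs    : array of shape (T, n_in)   -- input sequence
    W_in  : (n_hidden, n_in)           -- input weights
    W_rec : (n_hidden, n_hidden)       -- recurrent weights, reused at every step
    b     : (n_hidden,)                -- bias
    """
    h = np.zeros(W_rec.shape[0])      # initial state
    states = []
    for x in xs:                      # each iteration = one "layer" in the unfolded view
        h = np.tanh(W_in @ x + W_rec @ h + b)
        states.append(h)
    return np.stack(states)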
Elman networks and Jordan networks

The Elman network

An Elman network is a three-layer network (with input, hidden, and output layers, commonly
labelled x, y, and z) with the addition of a set of context units (u). The middle
(hidden) layer is connected to these context units with a fixed weight of one. At each time
step, the input is fed forward and a learning rule is applied. The fixed back-connections
save a copy of the previous values of the hidden units in the context units (since they
propagate over the connections before the learning rule is applied). Thus the network can
maintain a sort of state, allowing it to perform such tasks as sequence-prediction that are
beyond the power of a standard multilayer perceptron.
Jordan networks are similar to Elman networks. The context units are fed from the output
layer instead of the hidden layer. The context units in a Jordan network are also referred to
as the state layer. They have a recurrent connection to themselves.
Elman and Jordan networks are also known as “Simple recurrent networks” (SRN).
Elman network:
  h_t = σ_h(W_h x_t + U_h h_{t-1} + b_h)
  y_t = σ_y(W_y h_t + b_y)
Jordan network:
  h_t = σ_h(W_h x_t + U_h y_{t-1} + b_h)
  y_t = σ_y(W_y h_t + b_y)
Variables and functions:
  x_t: input vector
  h_t: hidden layer vector
  y_t: output vector
  W, U and b: parameter matrices and vector
  σ_h and σ_y: activation functions
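A minimal sketch of the Elman forward pass under these equations (tanh and an identity output activation are illustrative choices, not requirements); a Jordan step would feed the previous output instead of the previous hidden state.

import numpy as np

def elman_step(x_t, h_prev, Wh, Uh, bh, Wy, by):
    """One Elman time step: the context units hold h_prev (copied with weight one)."""
    h_t = np.tanh(Wh @ x_t + Uh @ h_prev + bh)   # hidden layer sees input + context
    y_t = Wy @ h_t + by                          # output layer (sigma_y = identity here)
    return h_t, y_t

# A Jordan step would use the previous output y_prev in place of h_prev.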
Hopfield
The Hopfield network is an RNN in which all connections are symmetric. It
requires stationary inputs and is thus not a general RNN, as it does not process sequences
of patterns. However, it is guaranteed to converge. If the connections are trained
using Hebbian learning, then the Hopfield network can perform as a robust content-
addressable memory, resistant to connection alteration.
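As a rough illustration of Hebbian training and content-addressable recall (the function names and the bipolar +1/-1 encoding are illustrative assumptions, not from the text), a Hopfield memory can be sketched in NumPy as:

import numpy as np

def hopfield_train(patterns):
    """Hebbian weight matrix for bipolar (+1/-1) patterns stacked as rows; no self-connections."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)
    return W

def hopfield_recall(W, probe, steps=10):
    """Iterate until the state settles on a stored (content-addressable) pattern."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1          # break ties toward +1
    return s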
Bidirectional associative memory
Introduced by Bart Kosko, a bidirectional associative memory (BAM) network is a variant
of a Hopfield network that stores associative data as a vector. The bi-directionality comes
from passing information through a matrix and its transpose. Typically, bipolar encoding is
preferred to binary encoding of the associative pairs. Recently, stochastic BAM models
using Markov stepping were optimized for increased network stability and relevance to
real-world applications.
A BAM network has two layers, either of which can be driven as an input to recall an
association and produce an output on the other layer.
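A minimal sketch of the two-layer recall loop through a weight matrix and its transpose; the bipolar encoding and helper names are assumptions for illustration only.

import numpy as np

def bam_weights(pairs):
    """Store bipolar pattern pairs (a, b) as W = sum of outer products a b^T."""
    return sum(np.outer(a, b) for a, b in pairs)

def bam_recall(W, a, iterations=5):
    """Bounce activity through W and its transpose until the pair stabilizes."""
    b = np.sign(a @ W)
    for _ in range(iterations):
        a = np.sign(W @ b)     # layer A driven through W
        b = np.sign(a @ W)     # layer B driven through W^T (via a @ W)
    return a, b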
Echo state
The echo state network (ESN) has a sparsely connected random hidden layer. The weights
of output neurons are the only part of the network that can change (be trained). ESNs are
good at reproducing certain time series. A variant for spiking neurons is known as a liquid
state machine.
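The following hedged NumPy sketch illustrates the key point that only the readout weights are trained; the reservoir size, sparsity, spectral radius, and ridge parameter are illustrative choices, not values from the text.

import numpy as np

def train_esn(inputs, targets, n_reservoir=200, spectral_radius=0.9, ridge=1e-6, seed=0):
    """Echo state network: fixed sparse random reservoir, only W_out is learned."""
    rng = np.random.default_rng(seed)
    n_in = inputs.shape[1]
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_in))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    W *= (rng.random((n_reservoir, n_reservoir)) < 0.1)      # sparse connectivity
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))    # scale for the echo state property
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:                                          # drive the fixed reservoir
        x = np.tanh(W_in @ u + W @ x)
        states.append(x)
    X = np.stack(states)
    # Ridge regression for the readout weights (the only trained part of the network).
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ targets)
    return W_out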
Independently RNN (IndRNN)
The Independently recurrent neural network (IndRNN) addresses the gradient vanishing
and exploding problems in the traditional fully connected RNN. Each neuron in one layer
only receives its own past state as context information (instead of full connectivity to all
other neurons in this layer) and thus neurons are independent of each other's history. The
gradient back-propagation can be regulated to avoid gradient vanishing and exploding in
order to keep long or short-term memory. The cross-neuron information is explored in the
next layers. IndRNN can be robustly trained with the non-saturated nonlinear functions
such as ReLU. Using skip connections, deep networks can be trained.
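A minimal sketch of the elementwise (independent) recurrence with ReLU, using illustrative names: each neuron's recurrent weight is a single scalar entry of the vector u, so it only sees its own past state.

import numpy as np

def indrnn_layer(xs, W, u, b):
    """IndRNN layer: u is a vector, so the recurrence u * h is elementwise per neuron."""
    h = np.zeros_like(u)
    outputs = []
    for x in xs:
        h = np.maximum(0.0, W @ x + u * h + b)   # ReLU; keeping |u| bounded regulates the gradient
        outputs.append(h)
    return np.stack(outputs)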
Recursive
A recursive neural network is created by applying the same set of weights recursively over
a differentiable graph-like structure by traversing the structure in topological order. Such
networks are typically also trained by the reverse mode of automatic differentiation. They
can process distributed representations of structure, such as logical terms. A special case of
recursive neural networks is the RNN whose structure corresponds to a linear chain.
Recursive neural networks have been applied to natural language processing. The
Recursive Neural Tensor Network uses a tensor-based composition function for all nodes
in the tree.
Neural history compressor
The neural history compressor is an unsupervised stack of RNNs. At the input level, it
learns to predict its next input from the previous inputs. Only unpredictable inputs of some
RNN in the hierarchy become inputs to the next higher level RNN, which therefore re-
computes its internal state only rarely. Each higher level RNN thus studies a compressed
representation of the information in the RNN below. This is done such that the input
sequence can be precisely reconstructed from the representation at the highest level.
The system effectively minimizes the description length or the negative logarithm of the
probability of the data. Given a lot of learnable predictability in the incoming data
sequence, the highest level RNN can use supervised learning to easily classify even deep
sequences with long intervals between important events.
It is possible to distill the RNN hierarchy into two RNNs: the "conscious" chunker (higher
level) and the "subconscious" automatizer (lower level). Once the chunker has learned to
predict and compress inputs that are unpredictable by the automatizer, the
automatizer can be forced in the next learning phase to predict or imitate, through
additional units, the hidden units of the more slowly changing chunker. This makes it easy
for the automatizer to learn appropriate, rarely changing memories across long intervals. In
turn, this helps the automatizer to make many of its once unpredictable inputs predictable,
so that the chunker can focus on the remaining unpredictable events.
A generative model partially overcame the vanishing gradient problem of automatic
differentiation or back-propagation in neural networks in 1992. In 1993, such a system solved a “Very Deep
Learning” task that required more than 1000 subsequent layers in an RNN unfolded in
time.
Second order RNNs
Second-order RNNs use higher-order weights w_ijk instead of the standard w_ij weights, and
states can be a product. This allows a direct mapping to a finite-state machine in training,
stability, and representation. Long short-term memory is an example of this but has no
such formal mappings or proof of stability.
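As an illustrative sketch (the weight tensor name W3 and the sigmoid choice are assumptions), a single second-order update that multiplies state and input can be written as:

import numpy as np

def second_order_step(s, x, W3):
    """Second-order update: s_i' = sigmoid(sum_jk W3[i, j, k] * s[j] * x[k])."""
    z = np.einsum('ijk,j,k->i', W3, s, x)
    return 1.0 / (1.0 + np.exp(-z))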
Long short-term memory

Long short-term memory unit


Long short-term memory (LSTM) is a deep learning system that avoids the vanishing
gradient problem. LSTM is normally augmented by recurrent gates called “forget gates”.
LSTM prevents back-propagated errors from vanishing or exploding. Instead, errors can
flow backwards through unlimited numbers of virtual layers unfolded in space. That is,
LSTM can learn tasks that require memories of events that happened thousands or even
millions of discrete time steps earlier. Problem-specific LSTM-like topologies can be
evolved. LSTM works even given long delays between significant events and can handle
signals that mix low and high frequency components.
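A minimal sketch of one LSTM cell step, with the input, forget, and output gates described above; the stacked-weight layout and names are illustrative assumptions rather than a reference implementation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W, U, b hold four gate blocks stacked: input, forget, cell, output."""
    n = h_prev.size
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0*n:1*n])          # input gate
    f = sigmoid(z[1*n:2*n])          # forget gate: lets errors flow across long lags
    g = np.tanh(z[2*n:3*n])          # candidate cell update
    o = sigmoid(z[3*n:4*n])          # output gate
    c = f * c_prev + i * g           # cell state (the "memory")
    h = o * np.tanh(c)               # hidden state / output
    return h, c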
Many applications use stacks of LSTM RNNs and train them by Connectionist Temporal
Classification (CTC) to find an RNN weight matrix that maximizes the probability of the
label sequences in a training set, given the corresponding input sequences. CTC achieves
both alignment and recognition.
LSTM can learn to recognize context-sensitive languages unlike previous models based
on hidden Markov models (HMM) and similar concepts.
Gated recurrent unit

Gated recurrent unit


Gated recurrent units (GRUs) are a gating mechanism in recurrent neural
networks introduced in 2014. They are used in the full form and several simplified
variants. Their performance on polyphonic music modeling and speech signal modeling
was found to be similar to that of long short-term memory. They have fewer parameters
than LSTM, as they lack an output gate.
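A minimal sketch of one GRU step in the full form described above; parameter names are illustrative. Note there is no separate cell state and no output gate, which is where the parameter savings come from.

import numpy as np

def gru_step(x, h_prev, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """One GRU step: update gate z, reset gate r, then interpolate old and candidate state."""
    z = 1.0 / (1.0 + np.exp(-(Wz @ x + Uz @ h_prev + bz)))   # update gate
    r = 1.0 / (1.0 + np.exp(-(Wr @ x + Ur @ h_prev + br)))   # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)       # candidate state
    return (1.0 - z) * h_prev + z * h_tilde                  # blend previous and candidate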
Bi-directional
Bi-directional RNNs use a finite sequence to predict or label each element of the sequence
based on the element's past and future contexts. This is done by concatenating the outputs
of two RNNs, one processing the sequence from left to right, the other one from right to left.
The combined outputs are the predictions of the teacher-given target signals. This
technique has been proven to be especially useful when combined with LSTM RNNs.
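A hedged sketch of the concatenation idea: forward_step and backward_step stand in for any per-step RNN update and are assumptions for illustration.

import numpy as np

def bidirectional_outputs(xs, forward_step, backward_step, h0_f, h0_b):
    """Run one RNN left-to-right and another right-to-left, then concatenate per time step."""
    hs_f, h = [], h0_f
    for x in xs:                       # forward pass over the sequence
        h = forward_step(x, h)
        hs_f.append(h)
    hs_b, h = [], h0_b
    for x in reversed(xs):             # backward pass over the same sequence
        h = backward_step(x, h)
        hs_b.append(h)
    hs_b.reverse()                     # realign with the original time order
    return [np.concatenate([f, b]) for f, b in zip(hs_f, hs_b)]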
Continuous-time
A continuous-time recurrent neural network (CTRNN) uses a system of ordinary
differential equations to model the effects on a neuron of the incoming inputs.
For a neuron i in the network with activation y_i(t), the rate of change of activation is given by:

  τ_i dy_i/dt = -y_i + Σ_j w_ji σ(y_j - Θ_j) + I_i(t)

Where:
  τ_i: time constant of postsynaptic node
  y_i: activation of postsynaptic node
  dy_i/dt: rate of change of activation of postsynaptic node
  w_ji: weight of connection from presynaptic to postsynaptic node
  σ(x): sigmoid function, e.g. σ(x) = 1/(1 + e^(-x))
  y_j: activation of presynaptic node
  Θ_j: bias of presynaptic node
  I_i(t): input (if any) to node
CTRNNs have been applied to evolutionary robotics where they have been used to address
vision, co-operation, and minimal cognitive behavior.
Note that, by the Shannon sampling theorem, discrete time recurrent neural networks can
be viewed as continuous-time recurrent neural networks where the differential equations
have transformed into equivalent difference equations. This transformation can be thought
of as occurring after the post-synaptic node activation functions have been low-pass
filtered but prior to sampling.
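A simple forward-Euler integration of the equation above might look like the following; the step size and the W[i, j] indexing convention are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ctrnn_euler_step(y, tau, W, theta, I, dt=0.01):
    """Forward-Euler step of tau_i dy_i/dt = -y_i + sum_j w_ji sigmoid(y_j - theta_j) + I_i.
    W[i, j] holds the weight from presynaptic node j to postsynaptic node i."""
    dydt = (-y + W @ sigmoid(y - theta) + I) / tau
    return y + dt * dydt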
Hierarchical
Hierarchical RNNs connect their neurons in various ways to decompose hierarchical
behavior into useful subprograms. Such hierarchical structures of cognition are present in
theories of memory presented by philosopher Henri Bergson, whose philosophical views
have inspired hierarchical models.
Recurrent multilayer perceptron network
Generally, a recurrent multilayer perceptron (RMLP) network consists of
cascaded sub-networks, each of which contains multiple layers of nodes. Each of these sub-
networks is feed-forward except for the last layer, which can have feedback connections.
Each of these subnets is connected only by feed-forward connections.
Multiple timescales model
A multiple timescales recurrent neural network (MTRNN) is a neural-based computational
model that can simulate the functional hierarchy of the brain through self-organization that
depends on spatial connection between neurons and on distinct types of neuron activities,
each with distinct time properties. With such varied neuronal activities, continuous
sequences of any set of behaviors are segmented into reusable primitives, which in turn are
flexibly integrated into diverse sequential behaviors. The biological plausibility of this type
of hierarchy was discussed in the memory-prediction theory of brain function
by Hawkins in his book On Intelligence. Such a hierarchy also agrees with theories of
memory posited by philosopher Henri Bergson, which have been incorporated into an
MTRNN model.
Neural Turing machines
Neural Turing machines (NTMs) are a method of extending recurrent neural networks by
coupling them to external memory resources which they can interact with by attention
processes. The combined system is analogous to a Turing machine or Von Neumann
architecture but is differentiable end-to-end, allowing it to be efficiently trained
with gradient descent.
Differentiable neural computer
Differentiable neural computers (DNCs) are an extension of Neural Turing machines,
allowing for usage of fuzzy amounts of each memory address and a record of chronology.
Neural network pushdown automata
Neural network pushdown automata (NNPDA) are similar to NTMs, but tapes are replaced
by analogue stacks that are differentiable and that are trained. In this way, they are similar
in complexity to recognizers of context free grammars (CFGs).
Memristive Networks
Greg Snider of HP Labs describes a system of cortical computing with memristive
nanodevices. The memristors (memory resistors) are implemented by thin film materials
in which the resistance is electrically tuned via the transport of ions or oxygen vacancies
within the film. DARPA's SyNAPSE project has funded IBM Research and HP Labs, in
collaboration with the Boston University Department of Cognitive and Neural Systems
(CNS), to develop neuromorphic architectures which may be based on memristive systems.
Memristive networks are a particular type of physical neural network with properties very
similar to those of (Little-)Hopfield networks: they have continuous dynamics and
a limited memory capacity, and they naturally relax via the minimization of a function which
is asymptotic to the Ising model. In this sense, the dynamics of a memristive circuit have the
advantage, compared to a resistor-capacitor network, of exhibiting more interesting non-linear
behavior. From this point of view, engineering analog memristive networks amounts to
a peculiar type of neuromorphic engineering in which the device behavior depends on the
circuit wiring, or topology.
Training
Gradient descent
Gradient descent is a first-order iterative optimization algorithm for finding the minimum
of a function. In neural networks, it can be used to minimize the error term by changing
each weight in proportion to the derivative of the error with respect to that weight,
provided the non-linear activation functions are differentiable. Various methods for doing
so were developed in the 1980s and early 1990s
by Werbos, Williams, Robinson, Schmidhuber, Hochreiter, Pearlmutter and others.
The standard method is called “back-propagation through time” or BPTT, and is a
generalization of back-propagation for feed-forward networks. Like that method, it is an
instance of automatic differentiation in the reverse accumulation mode of Pontryagin's
minimum principle. A more computationally expensive online variant is called “Real-Time
Recurrent Learning” or RTRL, which is an instance of automatic differentiation in the
forward accumulation mode with stacked tangent vectors. Unlike BPTT, this algorithm is
local in time but not local in space.
In this context, local in space means that a unit's weight vector can be updated using only
information stored in the connected units and the unit itself such that update complexity of
a single unit is linear in the dimensionality of the weight vector. Local in time means that
the updates take place continually (on-line) and depend only on the most recent time step
rather than on multiple time steps within a given time horizon as in BPTT. Biological neural
networks appear to be local with respect to both time and space.
For recursively computing the partial derivatives, RTRL has a time complexity of
O(number of hidden units × number of weights) per time step for computing the Jacobian
matrices, while BPTT only takes O(number of weights) per time step, at the cost of storing
all forward activations within the given time horizon. An online hybrid between BPTT and
RTRL with intermediate complexity exists, along with variants for continuous time.
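To make the BPTT idea concrete, here is a minimal, unoptimized sketch for a vanilla tanh RNN with a squared loss on the final output only; all names and the loss choice are illustrative assumptions. Note that, as stated above, the forward activations for the whole horizon are stored before the backward pass.

import numpy as np

def bptt_grads(xs, target, W, U, b, V):
    """Backpropagation through time for a vanilla tanh RNN: h_t = tanh(W x_t + U h_{t-1} + b),
    prediction y = V h_T, loss L = 0.5 * ||y - target||^2."""
    hs = [np.zeros(U.shape[0])]
    for x in xs:                                  # forward pass, storing all activations
        hs.append(np.tanh(W @ x + U @ hs[-1] + b))
    y = V @ hs[-1]
    err = y - target                              # dL/dy
    dV = np.outer(err, hs[-1])
    dW, dU, db = np.zeros_like(W), np.zeros_like(U), np.zeros_like(b)
    dh = V.T @ err                                # gradient flowing into the last hidden state
    for t in range(len(xs), 0, -1):               # walk the unrolled network backwards
        dz = dh * (1.0 - hs[t] ** 2)              # back through the tanh nonlinearity
        dW += np.outer(dz, xs[t - 1])
        dU += np.outer(dz, hs[t - 1])
        db += dz
        dh = U.T @ dz                             # pass gradient to the previous time step
    return dW, dU, db, dV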
A major problem with gradient descent for standard RNN architectures is that error
gradients vanish exponentially quickly with the size of the time lag between important
events. LSTM combined with a BPTT/RTRL hybrid learning method attempts to overcome
these problems. This problem is also addressed in the independently recurrent neural network
(IndRNN) by reducing the context of a neuron to its own past state; the cross-neuron
information can then be explored in the following layers. Memories of different ranges,
including long-term memory, can be learned without the vanishing and exploding gradient
problem.
The on-line algorithm called causal recursive back-propagation (CRBP) implements and
combines the BPTT and RTRL paradigms for locally recurrent networks. It works with the
most general locally recurrent networks. The CRBP algorithm can minimize the global
error term. This fact improves stability of the algorithm, providing a unifying view on
gradient calculation techniques for recurrent networks with local feedback.
One approach to the computation of gradient information in RNNs with arbitrary
architectures is based on signal-flow graph diagrammatic derivation. It uses the BPTT
batch algorithm, based on Lee's theorem for network sensitivity calculations. It was
proposed by Wan and Beaufays, while its fast online version was proposed by Campolucci,
Uncini and Piazza.
Global optimization methods
Training the weights in a neural network can be modeled as a non-linear global
optimization problem. A target function can be formed to evaluate the fitness or error of a
particular weight vector as follows: First, the weights in the network are set according to
the weight vector. Next, the network is evaluated against the training sequence. Typically,
the sum-squared-difference between the predictions and the target values specified in the
training sequence is used to represent the error of the current weight vector. Arbitrary
global optimization techniques may then be used to minimize this target function.
The most common global optimization method for training RNNs is genetic algorithms,
especially in unstructured networks.
Initially, the genetic algorithm is encoded with the neural network weights in a predefined
manner where one gene in the chromosome represents one weight link. The whole
network is represented as a single chromosome. The fitness function is evaluated as
follows:
Each weight encoded in the chromosome is assigned to the respective weight link of the
network.
The training set is presented to the network which propagates the input signals forward.
The mean-squared-error is returned to the fitness function.
This function drives the genetic selection process.
Many chromosomes make up the population; therefore, many different neural networks
are evolved until a stopping criterion is satisfied. A common stopping scheme is:
When the neural network has learnt a certain percentage of the training data or
When the minimum value of the mean-squared-error is satisfied or
When the maximum number of training generations has been reached.
The stopping criterion is evaluated by the fitness function as it gets the reciprocal of the
mean-squared-error from each network during training. Therefore, the goal of the genetic
algorithm is to maximize the fitness function, reducing the mean-squared-error.
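A hedged sketch of the fitness evaluation described above; run_network is an assumed helper that runs the decoded network over the training inputs, and the small constant in the denominator is an illustrative guard against division by zero.

import numpy as np

def fitness(chromosome, shapes, run_network, train_inputs, train_targets):
    """Decode a flat chromosome into weight matrices, run the network on the
    training set, and return the reciprocal of the mean-squared-error."""
    weights, i = [], 0
    for shape in shapes:                       # one gene per weight link
        size = int(np.prod(shape))
        weights.append(chromosome[i:i + size].reshape(shape))
        i += size
    predictions = run_network(weights, train_inputs)
    mse = np.mean((predictions - train_targets) ** 2)
    return 1.0 / (mse + 1e-12)                 # higher fitness = lower error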
Other global (and/or evolutionary) optimization techniques may be used to seek a good set
of weights, such as simulated annealing or particle swarm optimization.
Related fields and models
RNNs may behave chaotically. In such cases, dynamical systems theory may be used for
analysis.
They are in fact recursive neural networks with a particular structure: that of a linear
chain. Whereas recursive neural networks operate on any hierarchical structure,
combining child representations into parent representations, recurrent neural networks
operate on the linear progression of time, combining the previous time step and a hidden
representation into the representation for the current time step.
In particular, RNNs can appear as nonlinear versions of finite impulse response and infinite
impulse response filters and also as a nonlinear autoregressive exogenous model (NARX).
Libraries
Apache Singa
Caffe: Created by the Berkeley Vision and Learning Center (BVLC). It supports both CPU
and GPU. Developed in C++, and has Python and MATLAB wrappers.
Chainer: The first stable deep learning library that supports dynamic, define-by-run neural
networks. Fully in Python, production support for CPU, GPU, distributed training.
Deeplearning4j: Deep learning in Java and Scala on multi-GPU-enabled Spark. A general-
purpose deep learning library for the JVM production stack running on a C++ scientific
computing engine. Allows the creation of custom layers. Integrates with Hadoop and Kafka.
Flux: includes interfaces for RNNs, including GRUs and LSTMs, written in Julia.
Keras: High-level, easy to use API, providing a wrapper to many other deep learning
libraries.
Microsoft Cognitive Toolkit
MXNet: a modern open-source deep learning framework used to train and deploy deep
neural networks.
PyTorch: Tensors and dynamic neural networks in Python with strong GPU acceleration (a brief usage sketch follows this list).
TensorFlow: Apache 2.0-licensed Theano-like library with support for CPU, GPU,
Google's proprietary TPU, and mobile devices.
Theano: The reference deep-learning library for Python with an API largely compatible
with the popular NumPy library. Allows the user to write symbolic mathematical expressions,
then automatically generates their derivatives, saving the user from having to code
gradients or backpropagation. These symbolic expressions are automatically compiled to
CUDA code for a fast, on-the-GPU implementation.
Torch (www.torch.ch): A scientific computing framework with wide support for machine
learning algorithms, written in C and Lua. The main author is Ronan Collobert, and it is now
used at Facebook AI Research and Twitter.
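As a brief illustration of one of the libraries above, the following PyTorch snippet builds a bidirectional LSTM layer and runs it on random data; the sizes are arbitrary and chosen only for the example.

import torch
import torch.nn as nn

# A bidirectional LSTM over a batch of 8 sequences of length 20 with 10 features each.
lstm = nn.LSTM(input_size=10, hidden_size=32, num_layers=1,
               batch_first=True, bidirectional=True)
x = torch.randn(8, 20, 10)            # (batch, time, features)
output, (h_n, c_n) = lstm(x)          # output: (8, 20, 64) -- forward and backward states concatenated
print(output.shape)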
Applications
Applications of recurrent neural networks include:
• Machine translation
• Robot control
• Time series prediction
• Speech recognition
• Speech synthesis
• Brain–computer interfaces
• Time series anomaly detection
• Rhythm learning
• Music composition
• Grammar learning
• Handwriting recognition
• Human action recognition
• Protein homology detection
• Predicting subcellular localization of proteins
• Several prediction tasks in the area of business process management
• Prediction in medical care pathways
