Recurrent Neural Networks
ARCHITECTURE OF RNN:
RNNs have the same input and output structure as other deep neural
architectures; the difference lies in how information flows from input to
output. Unlike feedforward deep neural networks, which use a different weight
matrix for each dense layer, an RNN shares the same weights across the whole
network. It computes a hidden state h_i for every input x_i using the
following formulas:
h_t = σ(U·x_t + W·h_{t−1} + b)
y_t = O(V·h_t + c)
Hence
Y = f(X, h, W, U, V, b, c)
Here S is the state matrix, whose element s_i is the state of the network at
timestep i. The parameters W, U, V, b, and c are shared across all timesteps.
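The recurrence above can be sketched in a few lines of NumPy. This is a minimal, untrained forward pass: the dimensions and random initial weights are illustrative assumptions, tanh stands in for the activation σ, and the output is left linear. The key point is that one set of parameters (U, W, V, b, c) is reused at every timestep.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the text)
input_dim, hidden_dim, output_dim, seq_len = 4, 8, 3, 5

# One set of parameters, shared across every timestep
U = rng.normal(scale=0.1, size=(hidden_dim, input_dim))   # input -> hidden
W = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden -> hidden
V = rng.normal(scale=0.1, size=(output_dim, hidden_dim))  # hidden -> output
b = np.zeros(hidden_dim)
c = np.zeros(output_dim)

def rnn_forward(xs):
    """Apply h_t = tanh(U x_t + W h_{t-1} + b), y_t = V h_t + c."""
    h = np.zeros(hidden_dim)  # initial state h_0
    ys, hs = [], []
    for x in xs:              # same U, W, V, b, c reused at every step
        h = np.tanh(U @ x + W @ h + b)
        ys.append(V @ h + c)
        hs.append(h)
    return np.array(ys), np.array(hs)

xs = rng.normal(size=(seq_len, input_dim))
ys, hs = rnn_forward(xs)
print(ys.shape, hs.shape)  # (5, 3) (5, 8)
```

Note that the loop body is identical at every step; only the hidden state h carries information forward through the sequence.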
WORKING OF RECURRENT NEURAL NETWORK:
The current state is computed from the previous state and the current input:
h_t = f(h_{t−1}, x_t)
where:
h_t -> current state
h_{t−1} -> previous state
x_t -> input at timestep t
Applying the activation function (tanh), this becomes:
h_t = tanh(W·h_{t−1} + U·x_t)
where W is the recurrent (hidden-to-hidden) weight matrix and U is the
input-to-hidden weight matrix.
TYPES OF RNN:
1. One-to-One RNN:
A one-to-one RNN has the structure of a vanilla neural network: a single
input maps to a single output. It is used for general machine learning
problems with one input and one output.
Example: classification of images.
2. One-to-Many RNN:
A single input and several outputs describe a one-to-many Recurrent Neural
Network. The above diagram is an example of this.
Example: The image is sent into Image Captioning, which generates a
sentence of words.
3. Many-to-One RNN:
This RNN produces a single output from a given series of inputs.
Example: sentiment analysis, in which a text is classified as expressing
positive or negative feelings.
4. Many-to-Many RNN:
This RNN maps a sequence of inputs to a sequence of outputs.
Example: machine translation, where a sentence in one language is converted
into a sentence in another.
APPLICATIONS OF RNN:
Prediction problems.
Machine Translation.
Speech Recognition.
Language Modelling and Generating Text.
Video Tagging.
Generating Image Descriptions.
Text Summarization.
Call Center Analysis.
Machine Translation:
RNNs can be used to build a deep learning model that translates text from one
language to another without human intervention. For example, you can
translate text from your native language into English.
Text Creation:
RNNs can also be used to build a deep learning model for text generation.
Based on the preceding sequence of words or characters in the text, a trained
model learns the likelihood of the next word or character occurring.
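The generation loop can be sketched as follows. This is a minimal character-level example with randomly initialised (untrained) weights, so the output is gibberish; the toy vocabulary, dimensions, and function name are illustrative assumptions. It shows the mechanics only: each sampled character is fed back in as the next input, and the softmax over the output gives the likelihood of each possible next character.

```python
import numpy as np

rng = np.random.default_rng(1)

chars = sorted(set("hello world"))  # toy character vocabulary (assumption)
idx = {ch: i for i, ch in enumerate(chars)}
V_size, H = len(chars), 16

# Untrained parameters, for illustration only; a real model would learn these
U = rng.normal(scale=0.1, size=(H, V_size))   # input -> hidden
W = rng.normal(scale=0.1, size=(H, H))        # hidden -> hidden
Wy = rng.normal(scale=0.1, size=(V_size, H))  # hidden -> vocabulary logits

def sample_text(seed_char, n):
    """Generate n characters, feeding each sample back in as the next input."""
    h = np.zeros(H)
    ch = seed_char
    out = [ch]
    for _ in range(n):
        x = np.zeros(V_size)
        x[idx[ch]] = 1.0                              # one-hot input
        h = np.tanh(U @ x + W @ h)
        logits = Wy @ h
        p = np.exp(logits - logits.max())
        p /= p.sum()                                  # softmax = P(next char)
        ch = chars[rng.choice(V_size, p=p)]
        out.append(ch)
    return "".join(out)

print(sample_text("h", 10))  # gibberish until the weights are trained
```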
Captioning of images:
The process of creating text that describes the content of an image is known
as image captioning. The caption can describe both the objects in the image
and the actions they perform.
Recognition of Speech:
Also known as Automatic Speech Recognition (ASR), this converts human speech
into written text. Don't confuse speech recognition with voice recognition:
speech recognition transcribes what is said, while voice recognition
identifies who is speaking.
Forecasting of Time Series:
After being trained on historical time-stamped data, an RNN can be used to
build a time-series model that predicts future values. Stock-price
forecasting is a common example.
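A time-series forecast follows the many-to-one pattern described earlier: feed a window of past values through the recurrence and read a single prediction from the final hidden state. The sketch below uses a synthetic noisy sine wave and untrained weights as stand-ins (all assumptions); a real model would fit U, W, and v on the historical data via backpropagation through time.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "historical" series: a noisy sine wave (synthetic, for illustration)
t = np.arange(200)
series = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)

H, window = 8, 10

# Untrained parameters; a real model would learn these from the history
U = rng.normal(scale=0.1, size=(H, 1))  # scalar input -> hidden
W = rng.normal(scale=0.1, size=(H, H))  # hidden -> hidden
v = rng.normal(scale=0.1, size=H)       # final hidden -> scalar forecast

def predict_next(history):
    """Many-to-one RNN: read a window of past values, output the next one."""
    h = np.zeros(H)
    for x in history:
        h = np.tanh(U @ np.array([x]) + W @ h)
    return v @ h  # scalar prediction from the last hidden state

pred = predict_next(series[-window:])
print(float(pred))
```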
Conclusion: