Building Your Recurrent Neural Network - Step by Step - Coursera
Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other
sequence tasks because they have "memory." They can read inputs 𝑥⟨𝑡⟩ (such as words) one at a time,
and remember some contextual information through the hidden layer activations that get passed from
one time step to the next. This allows a unidirectional (one-way) RNN to take information from the past
to process later inputs. A bidirectional (two-way) RNN can take context from both the past and the future,
much like Marty McFly.
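The "memory" described here is a hidden state that is updated at every time step and passed forward to the next one. A minimal NumPy sketch of one such update, assuming a basic tanh cell with a softmax output (the function and variable names here are illustrative, not the assignment's exact API):

```python
import numpy as np

def softmax(z):
    # Column-wise softmax, numerically stabilized.
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def rnn_cell_forward(xt, a_prev, Wax, Waa, Wya, ba, by):
    """One forward step of a basic RNN cell (illustrative sketch).

    xt:     input at time step t, shape (n_x, m)
    a_prev: hidden state from the previous step, shape (n_a, m)
    """
    # The new hidden state mixes the previous state and the current input.
    a_next = np.tanh(Waa @ a_prev + Wax @ xt + ba)
    # Prediction at this time step, computed from the new hidden state.
    yt = softmax(Wya @ a_next + by)
    return a_next, yt
```

Calling this cell once per time step, feeding each `a_next` back in as the next step's `a_prev`, is what lets the network carry contextual information from past inputs to later ones.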
Notation:
- Superscript [𝑙] denotes an object associated with the 𝑙-th layer.
- Superscript (𝑖) denotes an object associated with the 𝑖-th example.
- Superscript ⟨𝑡⟩ denotes an object at the 𝑡-th time step.
- Subscript 𝑖 denotes the 𝑖-th entry of a vector.

Example: 𝑎_5^(2)[3]⟨4⟩ denotes the activation of the 2nd training example (2), 3rd layer [3], 4th time step ⟨4⟩,
and 5th entry of the vector.
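In code, these indices typically map to axes of a NumPy array. A small illustration, assuming an array layout of (units, examples, time steps) for the activations (this axis order is an assumption for the example, not something stated above):

```python
import numpy as np

# Hypothetical batch of activations: 6 units, 10 examples, 8 time steps.
# Axis order (units, examples, time) is an assumed layout for illustration.
a = np.random.randn(6, 10, 8)

# The entry written a_5^(2)<4> (5th unit, 2nd example, 4th time step)
# corresponds to the zero-based indices [4, 1, 3]:
entry = a[4, 1, 3]
```

Note the off-by-one between the 1-based mathematical notation and Python's 0-based indexing.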
Pre-requisites
When working on graded functions, please remember to modify only the code between the designated start and end comment markers.