

Building your Recurrent Neural Network - Step by Step
Welcome to Course 5's first assignment, where you'll be implementing key components of a Recurrent
Neural Network, or RNN, in NumPy!

By the end of this assignment, you'll be able to:

- Define notation for building sequence models
- Describe the architecture of a basic RNN
- Identify the main components of an LSTM
- Implement backpropagation through time for a basic RNN and an LSTM
- Give examples of several types of RNN

Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have "memory." They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some contextual information through the hidden layer activations that get passed from one time step to the next. This allows a unidirectional (one-way) RNN to take information from the past to process later inputs. A bidirectional (two-way) RNN can take context from both the past and the future, much like Marty McFly.
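
To make the "memory" idea concrete, here is a minimal sketch of a single RNN cell forward step in NumPy. The equations are the standard basic-RNN update; the function and parameter names (`Wax`, `Waa`, `Wya`, `ba`, `by`) are illustrative assumptions here, not necessarily the assignment's exact API:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the first axis (one column per example)
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def rnn_cell_forward(xt, a_prev, Wax, Waa, Wya, ba, by):
    """One forward step of a basic RNN cell.

    xt:     input at time step t, shape (n_x, m)
    a_prev: hidden state from the previous time step, shape (n_a, m)
    """
    # The new hidden state mixes the previous state with the current input
    a_next = np.tanh(Waa @ a_prev + Wax @ xt + ba)
    # The prediction at this time step is read off the new hidden state
    yt_pred = softmax(Wya @ a_next + by)
    return a_next, yt_pred

# Tiny usage example with random parameters
n_x, n_a, n_y, m = 3, 5, 2, 10
rng = np.random.default_rng(0)
a_next, yt = rnn_cell_forward(
    rng.standard_normal((n_x, m)), rng.standard_normal((n_a, m)),
    rng.standard_normal((n_a, n_x)), rng.standard_normal((n_a, n_a)),
    rng.standard_normal((n_y, n_a)), rng.standard_normal((n_a, 1)),
    rng.standard_normal((n_y, 1)),
)
```

Running the cell over a sequence just means calling it once per time step, feeding each `a_next` back in as the next step's `a_prev`; that re-fed hidden state is the "memory."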

Notation:

- Superscript $[l]$ denotes an object associated with the $l^{th}$ layer.
- Superscript $(i)$ denotes an object associated with the $i^{th}$ example.
- Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time step.
- Subscript $i$ denotes the $i^{th}$ entry of a vector.

Example:

$a^{(2)[3]\langle 4 \rangle}_5$ denotes the activation of the 2nd training example $(2)$, 3rd layer $[3]$, 4th time step $\langle 4 \rangle$, and 5th entry in the vector.
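
In code, these indices typically map onto NumPy array axes. A minimal sketch, assuming a single layer's hidden states are stored with the common `(n_a, m, T_x)` layout (that layout is an assumption here, not stated above):

```python
import numpy as np

# Hidden size n_a, batch of m examples, T_x time steps
n_a, m, T_x = 5, 10, 7
a = np.random.randn(n_a, m, T_x)

# a^(2)<4>_5 -> 5th entry of the vector, 2nd training example, 4th time step.
# NumPy indexing is 0-based, so each notation index is shifted down by one:
entry = a[4, 1, 3]
```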

Pre-requisites

- You should already be familiar with numpy.
- To refresh your knowledge of numpy, you can review Course 1 of the specialization, "Neural Networks and Deep Learning". Specifically, review Week 2's practice assignment, "Python Basics with Numpy (optional assignment)" (https://www.coursera.org/learn/neural-networks-deep-learning/programming/isoAV/python-basics-with-numpy).

Be careful when modifying the starter code!

When working on graded functions, please remember to only modify the code that is between:

#### START CODE HERE

and:

#### END CODE HERE
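
For instance, a graded function typically looks like the sketch below. The function name and body here are hypothetical placeholders; only the marker comments reflect the convention above:

```python
def rnn_cell_forward(xt, a_prev, parameters):  # hypothetical graded function
    #### START CODE HERE
    a_next = None  # replace with your implementation; leave everything outside the markers untouched
    #### END CODE HERE
    return a_next
```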
