
Chapter-III

PyTorch Tensors: Deep Learning with PyTorch, CNN in PyTorch

……………………………………………………………………………………………………………………………..

PyTorch is an optimized deep learning tensor library based on Python and Torch, and is mainly used for building and training neural networks on GPUs and CPUs. It is often favored over other deep learning frameworks such as TensorFlow and Keras because it uses dynamic computation graphs and has a fully Pythonic API.

Why is PyTorch used for deep learning?

It is open source and based on the popular Torch library. PyTorch is designed to provide flexibility and high speed for implementing deep neural networks, and it differs from many other deep learning frameworks in that it builds its computation graphs dynamically.

3.1. Features
The major features of PyTorch are mentioned below −

Easy Interface − PyTorch offers an easy-to-use API, so it is simple to operate and runs natively in Python. Writing and executing code in this framework is straightforward.

Python usage − This library is Pythonic and integrates smoothly with the Python data science stack, so it can leverage the tools and functionality offered by the wider Python ecosystem.

Computational graphs − PyTorch builds its computational graphs dynamically, so a user can change them at runtime. This is highly useful when a developer does not know in advance how much memory will be required for creating a neural network model.
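
As a minimal sketch of what this means in practice (the layer size and the rule for choosing the number of steps are purely illustrative), the forward pass can contain ordinary Python control flow, and the graph is rebuilt on every call:

import torch

class DynamicNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 4)

    def forward(self, x):
        # Ordinary Python control flow: how many times the layer is applied
        # depends on the input, so the graph differs from one call to the next
        steps = int(x.abs().sum().item()) % 3 + 1
        for _ in range(steps):
            x = torch.relu(self.layer(x))
        return x

model = DynamicNet()
out = model(torch.randn(2, 4))   # the graph is built on the fly during this call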

PyTorch is known for having three levels of abstraction, as given below (a short sketch illustrating them follows the list) −

• Tensor − an imperative n-dimensional array that can run on a GPU.

• Variable − a node in the computational graph that stores data and gradients. (In current versions of PyTorch, Variable has been merged into Tensor; a tensor created with requires_grad=True plays this role.)

• Module − a neural network layer that stores state, i.e. learnable weights.
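
A brief sketch of the three levels in code (using requires_grad=True in place of the older Variable API):

import torch

# Tensor: an imperative n-dimensional array, optionally placed on the GPU
device = "cuda" if torch.cuda.is_available() else "cpu"
t = torch.randn(3, 3, device=device)

# "Variable" role: a tensor that tracks gradients through the computational graph
w = torch.randn(3, 3, requires_grad=True)
loss = (w * 2).sum()
loss.backward()                    # w.grad now holds d(loss)/dw

# Module: a layer (or whole network) that stores learnable weights
layer = torch.nn.Linear(3, 2)
print(list(layer.parameters()))    # the layer's weight matrix and bias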

Advantages of PyTorch

The following are the advantages of PyTorch −

• It is easy to debug and understand the code.

• It includes the same rich set of layers as Torch.

• It includes a large collection of loss functions.

• It can be considered a GPU-enabled extension of NumPy.

• It allows building networks whose structure depends on the computation itself.


3.2. TensorFlow vs. PyTorch

We shall look into the major differences between TensorFlow and PyTorch below –
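
In brief, the commonly cited differences are:

• Graph construction − PyTorch builds dynamic computation graphs by default; classic TensorFlow 1.x used static graphs defined ahead of time, and TensorFlow 2.x added eager execution to close this gap.

• Programming style − PyTorch code is imperative and Pythonic, so it integrates naturally with standard Python tools; TensorFlow, especially through Keras, is more declarative.

• Debugging − because PyTorch executes operations immediately, ordinary Python debuggers and print statements work directly on model code.

• Deployment − TensorFlow has a larger production ecosystem (e.g. TensorFlow Serving, TensorFlow Lite), while PyTorch offers TorchScript for exporting models.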

3.3. Deep Learning with PyTorch


PyTorch elements are the building blocks of PyTorch models. These elements are:

• Model: A model is a representation of a machine learning algorithm. It is made up of layers and parameters.

• Layer: A layer is a unit of computation in a neural network. It performs a specific mathematical operation on the input data.

• Optimizer: An optimizer is an algorithm that updates the model's parameters during training.

• Loss: A loss function measures the error between the model's predictions and the ground truth labels.

Models are created by subclassing the torch.nn.Module class. Layers are created using the classes provided by the torch.nn module. For example, to create a linear layer, you would use the torch.nn.Linear class.

Optimizers are created using the classes provided by the torch.optim module. For example, to create an Adam
optimizer, you would use the torch.optim.Adam class.

Loss functions are available either as classes in the torch.nn module or as functions in the torch.nn.functional module. For example, to compute a mean squared error loss, you would use the torch.nn.functional.mse_loss function (or the torch.nn.MSELoss class).

Once you have created the model, layers, optimizer, and loss function, you can train the model using the
following steps:

1. Forward pass: The input data is passed through the model to produce predictions.

2. Loss calculation: The loss function is used to calculate the error between the predictions and the ground
truth labels.

3. Backward pass: The gradients of the loss function with respect to the model's parameters are calculated.

4. Optimizer step: The optimizer uses the gradients to update the model's parameters.

This process is repeated for a number of epochs until the model converges and achieves the desired
performance.

Here is a simple example of a PyTorch model:
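
A minimal sketch is shown below; the input and output sizes, learning rate, and toy data are illustrative choices:

import torch

class LinearModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # One linear layer mapping a single input feature to a single output
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        return self.linear(x)

model = LinearModel()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# Toy data (y = 2x), used only to make the example runnable
x = torch.randn(64, 1)
y = 2 * x

for epoch in range(100):
    optimizer.zero_grad()                                # reset gradients
    predictions = model(x)                               # 1. forward pass
    loss = torch.nn.functional.mse_loss(predictions, y)  # 2. loss calculation
    loss.backward()                                      # 3. backward pass
    optimizer.step()                                     # 4. optimizer step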

This code defines a simple linear model with one input layer and one output layer. The model is trained using
the Adam optimizer and the mean squared error loss function.

3.4. RNN with PyTorch


PyTorch is a Python library for machine learning based on Torch, a scientific computing framework for Lua. It is popular for its flexibility and ease of use, and is widely used in both research and industry.

To implement RNNs in PyTorch, you can use the recurrent layers provided by the torch.nn module, which includes several architectures: torch.nn.RNN, torch.nn.LSTM, and torch.nn.GRU.

Here is a simple example of how to implement an LSTM in PyTorch:
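
A minimal sketch follows, assuming a sequence-classification setting; the input size, sequence length, and number of classes are illustrative:

import torch

class LSTMModel(torch.nn.Module):
    def __init__(self, input_size=10, hidden_size=128, num_classes=2):
        super().__init__()
        # LSTM layer with 128 hidden units; batch_first=True expects (batch, seq, features)
        self.lstm = torch.nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = torch.nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        output, (h_n, c_n) = self.lstm(x)   # output: (batch, seq, hidden)
        return self.fc(output[:, -1, :])    # use the last time step for prediction

model = LSTMModel()
x = torch.randn(32, 20, 10)    # batch of 32 sequences, 20 steps, 10 features each
logits = model(x)              # shape: (32, 2)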

This code defines a simple LSTM model with one input layer, one LSTM layer, and one output layer. The LSTM
layer has 128 hidden units.

Unlike Keras, PyTorch models do not provide built-in fit() or predict() methods. The model is trained with an explicit training loop (forward pass, loss calculation, backward pass, optimizer step), and predictions on new data are made by calling the model on that data inside a torch.no_grad() block.

For more complex tasks, you may need to use a different RNN architecture or add additional layers to the
model. You can also use PyTorch to implement bidirectional RNNs, stacked RNNs, and other advanced RNN
architectures.

PyTorch also provides a number of tools for training and evaluating RNNs, such as the torch.optim module
and the torch.nn.functional module.


3.5. CNN with PyTorch


Convolutional neural networks (CNNs) are a type of neural network that are specifically designed to work
with image data. CNNs are able to learn spatial features in images, which makes them very effective for tasks
such as image classification, object detection, and image segmentation.

PyTorch is a popular Python library for machine learning. It provides a number of features that make it easy
to build, train, and deploy CNNs.

To implement a CNN in PyTorch, you can use the torch.nn.Conv2d layer. This layer performs a convolution on the input data: it slides a set of learnable filters over the input and produces feature maps that capture local patterns.

CNNs also use pooling layers to reduce the spatial size of the input data. This helps to reduce the number of
parameters in the network and makes it more efficient to train.
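
A small sketch of these two operations and how they change tensor shapes (the sizes here are illustrative):

import torch

conv = torch.nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5)
pool = torch.nn.MaxPool2d(kernel_size=2, stride=2)

images = torch.randn(1, 3, 32, 32)    # one 32x32 RGB image
features = conv(images)               # shape: (1, 6, 28, 28) - six feature maps
reduced = pool(features)              # shape: (1, 6, 14, 14) - spatial size halved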

Here is a simple example of a CNN in PyTorch:

import torch

class CNN(torch.nn.Module):

    def __init__(self):
        super(CNN, self).__init__()
        # Define the convolutional layers
        self.conv1 = torch.nn.Conv2d(3, 6, 5)
        self.conv2 = torch.nn.Conv2d(6, 16, 5)
        # Define the pooling layers
        self.pool1 = torch.nn.MaxPool2d(2, 2)
        self.pool2 = torch.nn.MaxPool2d(2, 2)
        # Define the fully connected layers
        self.fc1 = torch.nn.Linear(16 * 5 * 5, 120)
        self.fc2 = torch.nn.Linear(120, 84)
        self.fc3 = torch.nn.Linear(84, 10)

    def forward(self, x):
        # Pass the input data through the convolutional and pooling layers
        # (in practice a nonlinearity such as torch.nn.functional.relu is usually
        # applied after each convolutional and fully connected layer)
        x = self.conv1(x)
        x = self.pool1(x)
        x = self.conv2(x)
        x = self.pool2(x)
        # Flatten the output of the convolutional layers
        # (assumes 32x32 RGB input, e.g. CIFAR-10, giving 16 feature maps of size 5x5)
        x = x.view(-1, 16 * 5 * 5)
        # Pass the flattened output through the fully connected layers
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        return x

# Create the model
model = CNN()

# Train the model
...

This code defines a simple CNN with two convolutional layers, two pooling layers, and three fully connected
layers. The convolutional layers have 6 and 16 filters, respectively. The pooling layers have a kernel size of
2x2 and a stride of 2. The fully connected layers have 120, 84, and 10 units, respectively.

PyTorch has no built-in model.fit() or model.predict() methods; the CNN is trained with the same explicit training loop described earlier, and predictions on new images are obtained by calling the model inside a torch.no_grad() block.
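
A minimal training-loop sketch for the CNN above, using random tensors in place of a real dataset; the batch size, learning rate, and epoch count are illustrative:

import torch

model = CNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()   # standard loss for 10-class classification

images = torch.randn(8, 3, 32, 32)        # dummy batch of 32x32 RGB images
labels = torch.randint(0, 10, (8,))       # dummy class labels

for epoch in range(5):
    optimizer.zero_grad()
    outputs = model(images)               # forward pass
    loss = criterion(outputs, labels)     # loss calculation
    loss.backward()                       # backward pass
    optimizer.step()                      # optimizer step

# Prediction on new data: run a forward pass without tracking gradients
with torch.no_grad():
    predicted_class = model(torch.randn(1, 3, 32, 32)).argmax(dim=1)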

For more complex tasks, you may need to use a different CNN architecture or add additional layers to the
model. You can also use PyTorch to implement other types of neural networks, such as recurrent neural
networks (RNNs) and long short-term memory (LSTM) networks.

