
View Source Code | View Slides | Watch Video Walkthrough
01. PyTorch Workflow Fundamentals

The essence of machine learning and deep learning is to take some data from the past, build an algorithm (like a neural network) to discover patterns in it and use the discovered patterns to predict the future.

There are many ways to do this and many new ways are being discovered all the time.

But let's start small.

How about we start with a straight line?

And we see if we can build a PyTorch model that learns the pattern of the straight line and matches it.

What we're going to cover

In this module we're going to cover a standard PyTorch workflow (it can be chopped and changed as necessary but it covers the main outline of steps).

For now, we'll use this workflow to predict a simple straight line but the workflow steps can be repeated and changed depending on the problem you're working on.

Specifically, we're going to cover:

| Topic | Contents |
| --- | --- |
| 1. Getting data ready | Data can be almost anything but to get started we're going to create a simple straight line |
| 2. Building a model | Here we'll create a model to learn patterns in the data, we'll also choose a loss function, optimizer and build a training loop. |
| 3. Fitting the model to data (training) | We've got data and a model, now let's let the model (try to) find patterns in the (training) data. |
| 4. Making predictions and evaluating a model (inference) | Our model's found patterns in the data, let's compare its findings to the actual (testing) data. |
| 5. Saving and loading a model | You may want to use your model elsewhere, or come back to it later, here we'll cover that. |
| 6. Putting it all together | Let's take all of the above and combine it. |

Where can you get help?


All of the materials for this course are available on GitHub.

And if you run into trouble, you can ask a question on the Discussions page there too.

There's also the PyTorch developer forums, a very helpful place for all things PyTorch.

Let's start by putting what we're covering into a dictionary to reference later.

In [1]: what_were_covering = {1: "data (prepare and load)",
                              2: "build model",
                              3: "fitting the model to data (training)",
                              4: "making predictions and evaluating a model (inference)",
                              5: "saving and loading a model",
                              6: "putting it all together"}

And now let's import what we'll need for this module.

We're going to get torch , torch.nn ( nn stands for neural network and this package contains the building
blocks for creating neural networks in PyTorch) and matplotlib .

In [2]: import torch


from torch import nn # nn contains all of PyTorch's building blocks for neural networks
import matplotlib.pyplot as plt

# Check PyTorch version


torch.__version__

Out[2]: '1.12.1+cu113'

1. Data (preparing and loading)


I want to stress that "data" in machine learning can be almost anything you can imagine. A table of numbers (like a big Excel spreadsheet), images of any kind, videos (YouTube has lots of data!), audio files like songs or podcasts, protein structures, text and more.

Machine learning is a game of two parts:

1. Turn your data, whatever it is, into numbers (a representation).

2. Pick or build a model to learn the representation as best as possible.

Sometimes one and two can be done at the same time.
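
To make part 1 concrete, here's a tiny, purely illustrative sketch (the three-word vocabulary and sentence are made up for this example) of turning some text into numbers a model could work with:

import torch

vocab = {"i": 0, "like": 1, "pytorch": 2}  # hypothetical word-to-ID mapping
sentence = ["i", "like", "pytorch"]
token_ids = torch.tensor([vocab[word] for word in sentence])
print(token_ids)  # tensor([0, 1, 2]) -> a numerical representation a model can learn from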

But what if you don't have data?

Well, that's where we're at now.

No data.

But we can create some.

Let's create our data as a straight line.

We'll use linear regression to create the data with known parameters (things that can be learned by a model) and then we'll use PyTorch to see if we can build a model to estimate these parameters using gradient descent.

Don't worry if the terms above don't mean much now, we'll see them in action and I'll put extra resources below
where you can learn more.

In [3]: # Create *known* parameters


weight = 0.7
bias = 0.3

# Create data
start = 0
end = 1
step = 0.02
X = torch.arange(start, end, step).unsqueeze(dim=1)
y = weight * X + bias

X[:10], y[:10]

Out[3]: (tensor([[0.0000],
[0.0200],
[0.0400],
[0.0600],
[0.0800],
[0.1000],
[0.1200],
[0.1400],
[0.1600],
[0.1800]]),
tensor([[0.3000],
[0.3140],
[0.3280],
[0.3420],
[0.3560],
[0.3700],
[0.3840],
[0.3980],
[0.4120],
[0.4260]]))

Beautiful! Now we're going to move towards building a model that can learn the relationship between X
(features) and y (labels).

Split data into training and test sets

We've got some data.

But before we build a model we need to split it up.

One of the most important steps in a machine learning project is creating a training and test set (and when required, a validation set).

Each split of the dataset serves a specific purpose:

| Split | Purpose | Amount of total data | How often is it used? |
| --- | --- | --- | --- |
| Training set | The model learns from this data (like the course materials you study during the semester). | ~60-80% | Always |
| Validation set | The model gets tuned on this data (like the practice exam you take before the final exam). | ~10-20% | Often but not always |
| Testing set | The model gets evaluated on this data to test what it has learned (like the final exam you take at the end of the semester). | ~10-20% | Always |

For now, we'll just use a training and test set, this means we'll have a dataset for our model to learn on as well
as be evaluated on.

We can create them by splitting our X and y tensors.

Note: When dealing with real-world data, this step is typically done right at the start of a project (the test set
should always be kept separate from all other data). We want our model to learn on training data and then
evaluate it on test data to get an indication of how well it generalizes to unseen examples.
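
For real-world data you'd also usually shuffle before splitting. As an aside (not used in this chapter, since keeping the line in order lets the test set cover a different range of X than the training set), a common approach, assuming scikit-learn is installed, is train_test_split:

from sklearn.model_selection import train_test_split

# Shuffle and split 80/20; random_state keeps the split reproducible
# (converting to NumPy first just to keep the sketch simple)
X_train_np, X_test_np, y_train_np, y_test_np = train_test_split(
    X.numpy(), y.numpy(), test_size=0.2, random_state=42)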

In [4]: # Create train/test split


train_split = int(0.8 * len(X)) # 80% of data used for training set, 20% for testing
X_train, y_train = X[:train_split], y[:train_split]
X_test, y_test = X[train_split:], y[train_split:]

len(X_train), len(y_train), len(X_test), len(y_test)

Out[4]: (40, 40, 10, 10)

Wonderful, we've got 40 samples for training ( X_train & y_train ) and 10 samples for testing ( X_test &
y_test ).

The model we create is going to try and learn the relationship between X_train & y_train and then we will
evaluate what it learns on X_test and y_test .

But right now our data is just numbers on a page.

Let's create a function to visualize it.

In [5]: def plot_predictions(train_data=X_train,
                             train_labels=y_train,
                             test_data=X_test,
                             test_labels=y_test,
                             predictions=None):
            """
            Plots training data, test data and compares predictions.
            """
            plt.figure(figsize=(10, 7))

            # Plot training data in blue
            plt.scatter(train_data, train_labels, c="b", s=4, label="Training data")

            # Plot test data in green
            plt.scatter(test_data, test_labels, c="g", s=4, label="Testing data")

            if predictions is not None:
                # Plot the predictions in red (predictions were made on the test data)
                plt.scatter(test_data, predictions, c="r", s=4, label="Predictions")

            # Show the legend
            plt.legend(prop={"size": 14});

In [6]: plot_predictions();

Epic!

Now instead of just being numbers on a page, our data is a straight line.

Note: Now's a good time to introduce you to the data explorer's motto... "visualize, visualize, visualize!"

Think of this whenever you're working with data and turning it into numbers, if you can visualize something, it
can do wonders for understanding.

Machines love numbers and we humans like numbers too but we also like to look at things.

2. Build model
Now we've got some data, let's build a model to use the blue dots to predict the green dots.

We're going to jump right in.

We'll write the code first and then explain everything.

Let's replicate a standard linear regression model using pure PyTorch.

In [7]: # Create a Linear Regression model class
        class LinearRegressionModel(nn.Module): # <- almost everything in PyTorch is a nn.Module (think of this as neural network lego blocks)
            def __init__(self):
                super().__init__()
                self.weights = nn.Parameter(torch.randn(1, # <- start with random weights (this will get adjusted as the model learns)
                                                        dtype=torch.float), # <- PyTorch loves float32 by default
                                            requires_grad=True) # <- can we update this value with gradient descent?

                self.bias = nn.Parameter(torch.randn(1, # <- start with random bias (this will get adjusted as the model learns)
                                                     dtype=torch.float), # <- PyTorch loves float32 by default
                                         requires_grad=True) # <- can we update this value with gradient descent?

            # Forward defines the computation in the model
            def forward(self, x: torch.Tensor) -> torch.Tensor: # <- "x" is the input data (e.g. training/testing features)
                return self.weights * x + self.bias # <- this is the linear regression formula (y = m*x + b)

Alright there's a fair bit going on above but let's break it down bit by bit.

Resource: We'll be using Python classes to create bits and pieces for building neural networks. If you're unfamiliar with Python class notation, I'd recommend reading Real Python's Object-Oriented Programming in Python 3 guide a few times.

PyTorch model building essentials

PyTorch has four (give or take) essential modules you can use to create almost any kind of neural network you
can imagine.

They are torch.nn, torch.optim, torch.utils.data.Dataset and torch.utils.data.DataLoader. For now, we'll focus on the first two and get to the other two later (though you may be able to guess what they do).

| PyTorch module | What does it do? |
| --- | --- |
| torch.nn | Contains all of the building blocks for computational graphs (essentially a series of computations executed in a particular way). |
| torch.nn.Parameter | Stores tensors that can be used with nn.Module. If requires_grad=True, gradients (used for updating model parameters via gradient descent) are calculated automatically, this is often referred to as "autograd". |
| torch.nn.Module | The base class for all neural network modules, all the building blocks for neural networks are subclasses. If you're building a neural network in PyTorch, your models should subclass nn.Module. Requires a forward() method be implemented. |
| torch.optim | Contains various optimization algorithms (these tell the model parameters stored in nn.Parameter how to best change to improve gradient descent and in turn reduce the loss). |
| def forward() | All nn.Module subclasses require a forward() method, this defines the computation that will take place on the data passed to the particular nn.Module (e.g. the linear regression formula above). |

If the above sounds complex, think of it like this: almost everything in a PyTorch neural network comes from torch.nn,

* nn.Module contains the larger building blocks (layers)
* nn.Parameter contains the smaller parameters like weights and biases (put these together to make nn.Module(s))
* forward() tells the larger blocks how to make calculations on inputs (tensors full of data) within nn.Module(s)
* torch.optim contains optimization methods on how to improve the parameters within nn.Parameter to better represent input data

Basic building blocks of creating a PyTorch model by subclassing nn.Module. For objects that subclass nn.Module, the forward() method must be defined.

Resource: See more of these essential modules and their use cases in the PyTorch Cheat Sheet.

Checking the contents of a PyTorch model

Now we've got these out of the way, let's create a model instance with the class we've made and check its
parameters using .parameters() .

In [8]: # Set manual seed since nn.Parameter are randomly initialized
        torch.manual_seed(42)

        # Create an instance of the model (this is a subclass of nn.Module that contains nn.Parameter(s))
        model_0 = LinearRegressionModel()

        # Check the nn.Parameter(s) within the nn.Module subclass we created
        list(model_0.parameters())

Out[8]: [Parameter containing:


tensor([0.3367], requires_grad=True),
Parameter containing:
tensor([0.1288], requires_grad=True)]

We can also get the state (what the model contains) of the model using .state_dict() .

In [9]: # List named parameters


model_0.state_dict()

Out[9]: OrderedDict([('weights', tensor([0.3367])), ('bias', tensor([0.1288]))])

Notice how the values for weights and bias from model_0.state_dict() come out as random float tensors?

This is because we initialized them above using torch.randn() .

Essentially we want to start from random parameters and get the model to update them towards parameters that fit our data best (the hardcoded weight and bias values we set when creating our straight line data).

Exercise: Try changing the torch.manual_seed() value two cells above, see what happens to the weights
and bias values.
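
For example (using an arbitrary alternative seed and a throwaway model instance purely to illustrate the exercise), a different seed gives different starting values:

torch.manual_seed(0)  # any other seed value works too
another_model = LinearRegressionModel()
print(another_model.state_dict())  # different random 'weights' and 'bias' values than model_0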

Because our model starts with random values, right now it'll have poor predictive power.

Making predictions using torch.inference_mode()

To check this we can pass it the test data X_test to see how closely it predicts y_test .

When we pass data to our model, it'll go through the model's forward() method and produce a result using the computation we've defined.

Let's make some predictions.

In [10]: # Make predictions with model


with torch.inference_mode():
y_preds = model_0(X_test)

# Note: in older PyTorch code you might also see torch.no_grad()


# with torch.no_grad():
# y_preds = model_0(X_test)

Hmm?

You probably noticed we used torch.inference_mode() as a context manager (that's what the with
torch.inference_mode(): is) to make the predictions.

As the name suggests, torch.inference_mode() is used when using a model for inference (making
predictions).

torch.inference_mode() turns off a bunch of things (like gradient tracking, which is necessary for training
but not for inference) to make forward-passes (data going through the forward() method) faster.

Note: In older PyTorch code, you may also see torch.no_grad() being used for inference. While
torch.inference_mode() and torch.no_grad() do similar things,

torch.inference_mode() is newer, potentially faster and preferred. See this Tweet from PyTorch for more.
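
One quick way to see the difference (a minimal sketch, not part of the original notebook): predictions made inside torch.inference_mode() carry no gradient information, whereas a plain forward pass through a model whose parameters have requires_grad=True does:

with torch.inference_mode():
    preds_inference = model_0(X_test)
print(preds_inference.requires_grad)  # False -> no gradient tracking

preds_regular = model_0(X_test)  # same forward pass without the context manager
print(preds_regular.requires_grad)  # True -> gradients would be tracked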

We've made some predictions, let's see what they look like.

In [11]: # Check the predictions


print(f"Number of testing samples: {len(X_test)}")
print(f"Number of predictions made: {len(y_preds)}")
print(f"Predicted values:\n{y_preds}")

Number of testing samples: 10


Number of predictions made: 10
Predicted values:
tensor([[0.3982],
[0.4049],
[0.4116],
[0.4184],
[0.4251],
[0.4318],
[0.4386],
[0.4453],
[0.4520],
[0.4588]])

Notice how there's one prediction value per testing sample.

This is because of the kind of data we're using. For our straight line, one X value maps to one y value.

However, machine learning models are very flexible. You could have 100 X values mapping to one, two, three or 10 y values. It all depends on what you're working on.
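
As a purely illustrative aside (nn.Linear isn't used in this chapter's model), a layer that maps 100 input features to 10 outputs looks like this:

multi_output_layer = nn.Linear(in_features=100, out_features=10)  # 100 X values -> 10 y values
example_batch = torch.randn(32, 100)  # a batch of 32 samples, each with 100 features
print(multi_output_layer(example_batch).shape)  # torch.Size([32, 10])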

Our predictions are still numbers on a page, let's visualize them with our plot_predictions() function we
created above.

In [12]: plot_predictions(predictions=y_preds)

In [13]: y_test - y_preds

Out[13]: tensor([[0.4618],
[0.4691],
[0.4764],
[0.4836],
[0.4909],
[0.4982],
[0.5054],
[0.5127],
[0.5200],
[0.5272]])

Woah! Those predictions look pretty bad...

This makes sense though when you remember our model is just using random parameter values to make predictions.

It hasn't even looked at the blue dots to try to predict the green dots.

Time to change that.

3. Train model
Right now our model is making predictions using random parameters to make calculations, it's basically
guessing (randomly).

To fix that, we can update its internal parameters (I also refer to parameters as patterns), the weights and bias values we set randomly using nn.Parameter() and torch.randn() to be something that better represents the data.

We could hard code this (since we know the default values weight=0.7 and bias=0.3 ) but where's the fun in
that?

Much of the time you won't know what the ideal parameters are for a model.

Instead, it's much more fun to write code to see if the model can try and figure them out itself.

Creating a loss function and optimizer in PyTorch

For our model to update its parameters on its own, we'll need to add a few more things to our recipe.

And that's a loss function as well as an optimizer.

The roles of these are:

| Function | What does it do? | Where does it live in PyTorch? | Common values |
| --- | --- | --- | --- |
| Loss function | Measures how wrong your model's predictions (e.g. y_preds) are compared to the truth labels (e.g. y_test). Lower is better. | PyTorch has plenty of built-in loss functions in torch.nn. | Mean absolute error (MAE) for regression problems (torch.nn.L1Loss()). Binary cross entropy for binary classification problems (torch.nn.BCELoss()). |
| Optimizer | Tells your model how to update its internal parameters to best lower the loss. | You can find various optimization function implementations in torch.optim. | Stochastic gradient descent (torch.optim.SGD()). Adam optimizer (torch.optim.Adam()). |

Let's create a loss function and an optimizer we can use to help improve our model.

Which loss function and optimizer you use will depend on what kind of problem you're working on.

However, there are some common choices that are known to work well, such as the SGD (stochastic gradient descent) or Adam optimizer, and the MAE (mean absolute error) loss function for regression problems (predicting a number) or the binary cross entropy loss function for classification problems (predicting one thing or another).

For our problem, since we're predicting a number, let's use MAE (which is under torch.nn.L1Loss() ) in
PyTorch as our loss function.

Mean absolute error (MAE, in PyTorch: torch.nn.L1Loss ) measures the absolute difference between two points
(predictions and labels) and then takes the mean across all examples.
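
In other words (a small sketch with made-up numbers), MAE is just the mean of the absolute differences between predictions and labels, which is exactly what nn.L1Loss computes:

example_preds = torch.tensor([0.0, 0.5, 1.0])
example_labels = torch.tensor([0.1, 0.4, 1.3])

manual_mae = torch.mean(torch.abs(example_preds - example_labels))  # (0.1 + 0.1 + 0.3) / 3 ~= 0.1667
builtin_mae = nn.L1Loss()(example_preds, example_labels)
print(manual_mae, builtin_mae)  # both ~0.1667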

And we'll use SGD, torch.optim.SGD(params, lr) where:

* params is the target model parameters you'd like to optimize (e.g. the weights and bias values we randomly set before).

* lr is the learning rate you'd like the optimizer to update the parameters at: higher means the optimizer will try larger updates (these can sometimes be too large and the optimizer will fail to work), lower means the optimizer will try smaller updates (these can sometimes be too small and the optimizer will take too long to find the ideal values). The learning rate is considered a hyperparameter (because it's set by a machine learning engineer). Common starting values for the learning rate are 0.01, 0.001, 0.0001, however, these can also be adjusted over time (this is called learning rate scheduling, a short example is shown after the next code cell).

Woah, that's a lot, let's see it in code.

In [14]: # Create the loss function
         loss_fn = nn.L1Loss() # MAE loss is same as L1Loss

         # Create the optimizer
         optimizer = torch.optim.SGD(params=model_0.parameters(), # parameters of target model to optimize
                                     lr=0.01) # learning rate (how much the optimizer should change parameters at each step, higher=more (less stable), lower=less (might take a long time))
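
As an aside (learning rate scheduling isn't used in this chapter), the schedulers in torch.optim.lr_scheduler can adjust the learning rate over time, for example:

# Hypothetical example: decay the learning rate by 10x every 20 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)
# ...then call scheduler.step() once per epoch inside the training loop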

Creating an optimization loop in PyTorch

Woohoo! Now we've got a loss function and an optimizer, it's now time to create a training loop (and testing
loop).

The training loop involves the model going through the training data and learning the relationships between the
features and labels .

The testing loop involves going through the testing data and evaluating how good the patterns are that the model learned on the training data (the model never sees the testing data during training).
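
To give a sense of where we're headed (a minimal sketch of the standard PyTorch train/test loop pattern, built up step by step and fully annotated in the rest of the chapter):

epochs = 100

for epoch in range(epochs):
    ### Training
    model_0.train()                  # put model in training mode
    y_pred = model_0(X_train)        # 1. Forward pass
    loss = loss_fn(y_pred, y_train)  # 2. Calculate the loss
    optimizer.zero_grad()            # 3. Zero the optimizer's gradients
    loss.backward()                  # 4. Backpropagation
    optimizer.step()                 # 5. Update the parameters

    ### Testing
    model_0.eval()                   # put model in evaluation mode
    with torch.inference_mode():
        test_pred = model_0(X_test)
        test_loss = loss_fn(test_pred, y_test)

    if epoch % 10 == 0:
        print(f"Epoch: {epoch} | Train loss: {loss:.4f} | Test loss: {test_loss:.4f}")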
