
PyTorch Tutorial

05. Linear Regression with PyTorch

Lecturer: Hongpu Liu, PyTorch Tutorial @ SLAM Research Group
Revision

Linear Model: $\hat{y} = x * \omega$

Loss Function: $loss = (\hat{y} - y)^2 = (x \cdot \omega - y)^2$

[Computational graph: $x$ and $\omega$ enter a multiplication node producing $\hat{y}$; $(\hat{y} - y)^2$ then produces $loss$.]
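As a quick numeric check (values chosen here for illustration): with $x = 2$, $y = 4$, and $\omega = 3$, we get $\hat{y} = 6$ and $loss = (6 - 4)^2 = 4$.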

Revision

print("predict (before training)", 4, forward(4).item())

for epoch in range(100):


for x, y in zip(x_data, y_data):
l = loss(x, y)
l.backward()
print('\tgrad:', x, y, w.grad.item())
w.data = w.data - 0.01 * w.grad.data

w.grad.data.zero_()

print("progress:", epoch, l.item())

print("predict (after training)", 4, forward(4).item())

PyTorch Fashion

1. Prepare dataset (we shall talk about this later)
2. Design model using class (inherit from nn.Module)
3. Construct loss and optimizer (using the PyTorch API)
4. Training cycle (forward, backward, update)

Linear Regression – 1. Prepare dataset

In PyTorch, the computational graph is in mini-batch fashion, so X and Y are 3 × 1 Tensors.

$\begin{bmatrix} y_{pred}^{(1)} \\ y_{pred}^{(2)} \\ y_{pred}^{(3)} \end{bmatrix} = \omega \cdot \begin{bmatrix} x^{(1)} \\ x^{(2)} \\ x^{(3)} \end{bmatrix} + b$

import torch

x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])
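A quick shape check (a minimal sketch; the nn.Linear layer used here is introduced in the next step):

linear = torch.nn.Linear(1, 1)   # one input feature, one output feature
y_pred = linear(x_data)          # applies y = x * w + b to every row
print(y_pred.shape)              # torch.Size([3, 1]): one prediction per sample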

Revision: Gradient Descent Algorithm
Derivative of the cost:

$\frac{\partial cost(\omega)}{\partial \omega} = \frac{\partial}{\partial \omega} \frac{1}{N} \sum_{n=1}^{N} (x_n \cdot \omega - y_n)^2$

$= \frac{1}{N} \sum_{n=1}^{N} \frac{\partial}{\partial \omega} (x_n \cdot \omega - y_n)^2$

$= \frac{1}{N} \sum_{n=1}^{N} 2 \cdot (x_n \cdot \omega - y_n) \cdot \frac{\partial (x_n \cdot \omega - y_n)}{\partial \omega}$

$= \frac{1}{N} \sum_{n=1}^{N} 2 \cdot x_n \cdot (x_n \cdot \omega - y_n)$

Update:

$\omega = \omega - \alpha \frac{\partial cost}{\partial \omega} = \omega - \alpha \frac{1}{N} \sum_{n=1}^{N} 2 \cdot x_n \cdot (x_n \cdot \omega - y_n)$
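A minimal sketch of this update rule in plain Python (dataset values from this lecture; learning rate 0.01 assumed):

def gradient(w, xs, ys):
    # (1/N) * sum over n of 2 * x_n * (x_n * w - y_n)
    return sum(2 * x * (x * w - y) for x, y in zip(xs, ys)) / len(xs)

w = 1.0  # initial guess
for epoch in range(100):
    w = w - 0.01 * gradient(w, [1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(w)  # converges toward 2.0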

Linear Regression – 2. Design Model

Affine Model: $\hat{y} = x * \omega + b$

Loss Function: $loss = (\hat{y} - y)^2 = (x \cdot \omega + b - y)^2$

[Linear unit: $x$ and $\omega$ enter a multiplication node, $b$ is added to produce $\hat{y}$; $loss(\hat{y}, y)$ then produces $loss$.]

Linear Regression – 2. Design Model

Our model class should inherit from nn.Module, which is the base class for all neural network modules.

class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

model = LinearModel()

Member methods __init__() and forward() have to be implemented. Just do it. : )
Class nn.Linear contains two member Tensors, weight and bias, which realize the linear unit $\hat{y} = \omega * x + b$.
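A quick look at these members (a minimal sketch; fresh layers are randomly initialized, so printed values vary from run to run):

linear = torch.nn.Linear(1, 1)
print(linear.weight.shape)  # torch.Size([1, 1])
print(linear.bias.shape)    # torch.Size([1])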
The layer maps each input sample to an output (output on the left, input on the right):

$\begin{bmatrix} y_{pred}^{(1)} \\ y_{pred}^{(2)} \\ y_{pred}^{(3)} \end{bmatrix} = \omega \cdot \begin{bmatrix} x^{(1)} \\ x^{(2)} \\ x^{(3)} \end{bmatrix} + b$
Class nn.Module implements the magic method __call__(), which enables an instance of the class to be called just like a function; the call normally invokes forward(). Pythonic!!!
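A toy sketch of this mechanism (plain Python, not the actual nn.Module source):

class Module:
    def __call__(self, *args, **kwargs):
        # dispatch the "function call" to the subclass's forward()
        return self.forward(*args, **kwargs)

class Doubler(Module):
    def forward(self, x):
        return 2 * x

f = Doubler()
print(f(3))  # 6: f(3) goes through __call__, which invokes forward(3)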

Finally, model = LinearModel() creates an instance of the LinearModel class, which is callable in the training cycle below.
Linear Regression – 3. Construct Loss and Optimizer

criterion = torch.nn.MSELoss(size_average=False)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

The criterion, nn.MSELoss, also inherits from nn.Module.
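Note: size_average is deprecated in current PyTorch releases; the equivalent modern spelling sums the squared errors explicitly:

criterion = torch.nn.MSELoss(reduction='sum')  # same behavior as size_average=False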

Linear Regression – 4. Training Cycle

for epoch in range(100):
    y_pred = model(x_data)            # Forward: predict
    loss = criterion(y_pred, y_data)  # Forward: loss
    print(epoch, loss.item())

    optimizer.zero_grad()             # reset accumulated gradients
    loss.backward()                   # Backward: autograd
    optimizer.step()                  # Update parameters

NOTICE: the grad computed by .backward() will be accumulated. So before backward, remember to set the grad to ZERO!!!

optimizer.step() replaces the manual update from the revision code:

for x, y in zip(x_data, y_data):
    ...
    w.data = w.data - 0.01 * w.grad.data
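To see the accumulation concretely, a minimal standalone sketch:

w = torch.tensor([1.0], requires_grad=True)
(2 * w).backward()
print(w.grad)  # tensor([2.])
(2 * w).backward()
print(w.grad)  # tensor([4.]): the second gradient was added, not overwritten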
Linear Regression – Test Model

# Output weight and bias
print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())

# Test Model
x_test = torch.Tensor([[4.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)

[Screenshots compared the printed w, b, and y_pred after 100 vs. 1000 training iterations.]

Linear Regression

import torch

# 1. Prepare dataset (we shall talk about this later)
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])

# 2. Design model using class (inherit from nn.Module)
class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

model = LinearModel()

# 3. Construct loss and optimizer (using the PyTorch API)
criterion = torch.nn.MSELoss(size_average=False)  # reduction='sum' in current PyTorch
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# 4. Training cycle (forward, backward, update)
for epoch in range(1000):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())

x_test = torch.Tensor([[4.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)
Exercise 5-1: Try Different Optimizers in Linear Regression (a minimal swap is sketched after the list)

• torch.optim.Adagrad
• torch.optim.Adam
• torch.optim.Adamax
• torch.optim.ASGD
• torch.optim.LBFGS
• torch.optim.RMSprop
• torch.optim.Rprop
• torch.optim.SGD
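
Swapping the optimizer changes only the construction line; a minimal sketch using Adam (keeping lr=0.01 from the lecture, though each optimizer has its own sensible defaults):

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# the training cycle is unchanged: zero_grad(), backward(), step()

Note that torch.optim.LBFGS is the exception: its step() expects a closure that re-evaluates the loss.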

Exercise 5-2: Read more examples from the official tutorial

https://pytorch.org/tutorials/beginner/pytorch_with_examples.html
