Lecture 02 Linear Model

This document summarizes a lecture on linear regression using PyTorch. It introduces an example dataset of students' exam scores as a function of study hours and proposes a linear model to predict scores from hours. The model tries different values of its parameter ω to minimize the cost function and find the best-fitting line; the cost is minimized at ω = 2, where the linear model fits the data exactly.


PyTorch Tutorial

02. Linear Model

Lecturer : Hongpu Liu Lecture 2-1 PyTorch Tutorial @ SLAM Research Group
Machine learning

• Suppose that students would get y points in the final exam if they
  spent x hours studying.

x (hours)   y (points)
1           2
2           4
3           6
4           ?

• The question is: what would the grade be if I studied for 4 hours?

Machine Learning

Prediction: 4 hours → ? points

x (hours)   y (points)
1           2            Training Set
2           4            Training Set
3           6            Training Set
4           ?            Test Set

This is supervised learning: the model is trained on the labeled samples
(x = 1, 2, 3) and tested on the unlabeled one (x = 4).
Model design

• What would be the best model for the data?
• A linear model?

x (hours)   y (points)
1           2
2           4
3           6
4           ?

Linear Model:  ŷ = x * ω + b
Model design

To simplify the model, drop the bias term b:

Linear Model:  ŷ = x * ω
Linear Regression

Linear Model:  ŷ = x * ω

x (hours)   y (points)
1           2
2           4
3           6

[Plot: the three training points and the true line y = 2x; x-axis "Hours" (0–5), y-axis "Points" (0–14)]
Linear Regression

The machine starts with a random guess: ω = random value in ŷ = x * ω.

[Plot: the training points and the true line, with a candidate line for the guessed ω]
Compute Loss

Training loss (error):

loss = (ŷ − y)² = (x * ω − y)²

x (hours)   y (points)   ŷ (ω=3)   loss (ω=3)
1           2            3          1
2           4            6          4
3           6            9          9
                         mean = 14/3
Compute Loss

x (hours)   y (points)   ŷ (ω=4)   loss (ω=4)
1           2            4          4
2           4            8          16
3           6            12         36
                         mean = 56/3
Compute Loss

x (hours)   y (points)   ŷ (ω=0)   loss (ω=0)
1           2            0          4
2           4            0          16
3           6            0          36
                         mean = 56/3
Compute Loss

x (hours)   y (points)   ŷ (ω=1)   loss (ω=1)
1           2            1          1
2           4            2          4
3           6            3          9
                         mean = 14/3
Compute Loss

x (hours)   y (points)   ŷ (ω=2)   loss (ω=2)
1           2            2          0
2           4            4          0
3           6            6          0
                         mean = 0
Loss function & Cost function

Training loss (error), defined per sample:

loss = (ŷ − y)² = (x * ω − y)²

Cost over the training set: Mean Square Error (MSE)

cost = (1/N) · Σ_{n=1}^{N} (ŷ_n − y_n)²
Compute Cost

For each candidate ω, the cost is the mean of the per-sample losses:

x (hours)   loss (ω=0)   loss (ω=1)   loss (ω=2)   loss (ω=3)   loss (ω=4)
1           4            1            0            1            4
2           16           4            0            4            16
3           36           9            0            9            36
MSE         18.7         4.7          0            4.7          18.7
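The table above can be reproduced in a few lines of NumPy. This is a small sketch, not part of the lecture code: it vectorizes the per-sample loss over the training set and averages it for each candidate ω.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])  # study hours (training set)
y = np.array([2.0, 4.0, 6.0])  # exam points (training set)

def mse(w):
    # cost(w) = mean over samples of (x*w - y)^2
    return float(np.mean((x * w - y) ** 2))

for w in [0.0, 1.0, 2.0, 3.0, 4.0]:
    print('w=%.0f  MSE=%.1f' % (w, mse(w)))
```

The printed values match the table row by row, with the minimum at ω = 2.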

Linear Regression

It can be seen that when ω = 2, the cost is minimal.
How to draw the graph

First, import the necessary libraries to draw the graph. The full program enumerates ω from 0.0 to 4.0 and records the MSE for each value:

import numpy as np
import matplotlib.pyplot as plt

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

def forward(x):
    # w is the global weight set by the loop below
    return x * w

def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) * (y_pred - y)

w_list = []
mse_list = []
for w in np.arange(0.0, 4.1, 0.1):
    print('w=', w)
    l_sum = 0
    for x_val, y_val in zip(x_data, y_data):
        y_pred_val = forward(x_val)
        loss_val = loss(x_val, y_val)
        l_sum += loss_val
        print('\t', x_val, y_val, y_pred_val, loss_val)
    print('MSE=', l_sum / 3)
    w_list.append(w)
    mse_list.append(l_sum / 3)
How to draw the graph

Prepare the training set:

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]
How to draw the graph

Define the model (Linear Model: ŷ = x * ω):

def forward(x):
    return x * w
How to draw the graph

Define the loss function (loss = (ŷ − y)² = (x * ω − y)²):

def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) * (y_pred - y)
How to draw the graph

The list w_list saves the weights ω; the list mse_list saves the cost value for each ω:

w_list = []
mse_list = []
How to draw the graph

Compute the cost value at each ω in [0.0, 0.1, 0.2, …, 4.0]:

for w in np.arange(0.0, 4.1, 0.1):
How to draw the graph

For each sample in the training set, compute the loss and accumulate it:

    for x_val, y_val in zip(x_data, y_data):
        y_pred_val = forward(x_val)
        loss_val = loss(x_val, y_val)
        l_sum += loss_val
        print('\t', x_val, y_val, y_pred_val, loss_val)

ATTENTION: the value of the cost function is the mean of the per-sample loss values (l_sum / 3), not the raw sum.
How to draw the graph

Save ω and the corresponding MSE:

    w_list.append(w)
    mse_list.append(l_sum / 3)
How to draw the graph

Finally, draw the graph of MSE against ω:

plt.plot(w_list, mse_list)
plt.ylabel('Loss')
plt.xlabel('w')
plt.show()
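Besides reading the minimum off the plot, one could also pick the best ω programmatically. This is a small self-contained sketch (not part of the lecture code) using np.argmin on the recorded MSE values:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

w_list = []
mse_list = []
for w in np.arange(0.0, 4.1, 0.1):
    w_list.append(w)
    mse_list.append(float(np.mean((x * w - y) ** 2)))

# Index of the smallest cost gives the best weight
best_w = w_list[int(np.argmin(mse_list))]
print('best w = %.1f' % best_w)   # prints "best w = 2.0"
```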
Exercise

• Try to use the model on the right, ŷ = x * ω + b, Linear Model
  and draw the cost graph.
• Tips:
  • You can read material on how to draw a 3D graph. [link]
  • The function np.meshgrid() is very popular for drawing
    3D graphs; read the [docs] and use vectorized calculation.

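As a starting point for the exercise, the cost surface over (ω, b) can be sketched with np.meshgrid and vectorized NumPy. This is a minimal sketch, not a reference solution: the grid ranges, the names W, B, cost, and the use of matplotlib's 3-D plot_surface are my own choices.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

# Grid of candidate (w, b) pairs
W, B = np.meshgrid(np.arange(0.0, 4.1, 0.1), np.arange(-2.0, 2.1, 0.1))

# Vectorized cost: for each (w, b), mean over samples of (x*w + b - y)^2.
# x gets a leading sample axis of length 3 and broadcasts against the grid.
cost = np.mean((x[:, None, None] * W + B - y[:, None, None]) ** 2, axis=0)

ax = plt.figure().add_subplot(projection='3d')
ax.plot_surface(W, B, cost)
ax.set_xlabel('w')
ax.set_ylabel('b')
ax.set_zlabel('cost')
plt.show()
```

The surface should reach its minimum (cost = 0) at ω = 2, b = 0, consistent with the 1-D curve from the lecture.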
