Adaline SGD
In this notebook, we implement ADALINE "by hand," without using PyTorch's autograd capabilities. In Lecture 06, we will use "automatic differentiation" (also known as "autodiff," or autograd in PyTorch) to implement Adaline more compactly. We skip autodiff here because working out the gradients manually helps us understand what's going on under the hood.
import pandas as pd
import matplotlib.pyplot as plt
import torch
%matplotlib inline
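The cell that loads the data did not survive this export; below is a minimal sketch of what it might look like. The file name 'toydata.txt' is a placeholder, and keeping only x1 and x2 is an assumption based on the two-element weight vector trained later.

# Hypothetical loading cell; 'toydata.txt' and the tab separator are placeholders
df = pd.read_csv('toydata.txt', sep='\t')
X = torch.tensor(df[['x1', 'x2']].values, dtype=torch.float)  # assumed feature subset
y = torch.tensor(df['y'].values, dtype=torch.int)
df.head()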
[Output: first rows of the dataset; feature columns x1, x2, x3, x4 and class label y]
torch.manual_seed(123)
shuffle_idx = torch.randperm(y.size(0), dtype=torch.long)
X, y = X[shuffle_idx], y[shuffle_idx]

percent70 = int(shuffle_idx.size(0)*0.7)
# 70/30 train/test split (the split assignments were missing from this cell)
X_train, X_test = X[:percent70], X[percent70:]
y_train, y_test = y[:percent70], y[percent70:]
# Manual chain-rule pieces for the squared error L = (yhat - y)^2:
grad_loss_yhat = 2*(yhat - y)    # dL/dyhat
grad_yhat_weights = x            # dyhat/dweights, since yhat = x·w + b
grad_yhat_bias = 1.              # dyhat/dbias
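These gradient lines belong to the Adaline1 class used below, whose full definition did not survive this export. The following is a minimal sketch consistent with the fragments above and with how the model is used later (forward(), .weights, .bias); the zero initialization and the exact backward() interface are assumptions.

class Adaline1():
    def __init__(self, num_features):
        # Shapes match the trained tensors printed below: (num_features, 1) and (1,)
        self.num_features = num_features
        self.weights = torch.zeros(num_features, 1, dtype=torch.float)
        self.bias = torch.zeros(1, dtype=torch.float)

    def forward(self, x):
        # Linear activation: yhat = x w + b
        netinputs = torch.mm(x, self.weights) + self.bias
        return netinputs.view(-1)

    def backward(self, x, yhat, y):
        # Chain rule for the squared error (see the fragments above)
        grad_loss_yhat = 2*(yhat - y)
        grad_yhat_weights = x
        grad_yhat_bias = 1.
        # Average the per-example gradients over the minibatch
        grad_loss_weights = torch.mm(grad_yhat_weights.t(),
                                     grad_loss_yhat.view(-1, 1)) / y.size(0)
        grad_loss_bias = torch.sum(grad_yhat_bias * grad_loss_yhat) / y.size(0)
        # Return negative gradients so gradient descent can add them
        # (scaled by the learning rate) to the parameters
        return (-1)*grad_loss_weights, (-1)*grad_loss_bias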
def train(model, x, y, num_epochs, learning_rate, seed, minibatch_size):
    cost = []
    torch.manual_seed(seed)
    for e in range(num_epochs):
        ...  # minibatch update loop (not preserved; see the sketch below)
    return cost
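Only the skeleton of train() is visible above; a complete minibatch SGD loop consistent with it and with the call below might look like this. The shuffling scheme and the per-epoch loss tracking are assumptions.

def train(model, x, y, num_epochs, learning_rate, seed, minibatch_size):
    cost = []
    torch.manual_seed(seed)
    for e in range(num_epochs):
        # Draw a fresh shuffle and iterate over minibatches
        shuffle_idx = torch.randperm(y.size(0))
        minibatches = torch.split(shuffle_idx, minibatch_size)
        for minibatch_idx in minibatches:
            yhat = model.forward(x[minibatch_idx])
            neg_grad_w, neg_grad_b = model.backward(
                x[minibatch_idx], yhat, y[minibatch_idx])
            # Manual SGD updates (no optimizer, no autograd)
            model.weights += learning_rate * neg_grad_w
            model.bias += learning_rate * neg_grad_b
        # Log the full-dataset MSE once per epoch
        curr_loss = torch.mean((model.forward(x) - y)**2)
        cost.append(curr_loss.item())
    return cost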
Train Model
model = Adaline1(num_features=X_train.size(1))
cost = train(model,
X_train, y_train.float(),
num_epochs=20,
learning_rate=0.1,
seed=123,
minibatch_size=10)
plt.plot(range(len(cost)), cost)
plt.ylabel('Mean Squared Error')
plt.xlabel('Epoch')
plt.show()
Weights tensor([[-0.0763],
[ 0.4181]])
Bias tensor([0.4888])
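The analytical_solution() called next is not defined anywhere in this export. For plain least squares it has a closed form via the normal equations; here is a minimal sketch that matches the (w, b) usage below. The column-of-ones trick and the return order are assumptions.

def analytical_solution(x, y):
    # Absorb the bias by prepending a column of ones, then solve the
    # normal equations: params = (X^T X)^{-1} X^T y
    Xb = torch.cat((torch.ones(x.size(0), 1), x), dim=1)
    params = torch.inverse(Xb.t().mm(Xb)).mm(Xb.t()).mm(y.view(-1, 1))
    b, w = params[0], params[1:]
    return w, b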
w, b = analytical_solution(X_train, y_train.float())
print('Analytical weights', w)
print('Analytical bias', b)
ones = torch.ones(y_test.size())
zeros = torch.zeros(y_test.size())

# Threshold the linear outputs at 0.5 to obtain hard class labels,
# then compare against the true test labels
test_pred = model.forward(X_test)
test_acc = torch.mean(
    (torch.where(test_pred > 0.5,
                 ones,
                 zeros).int() == y_test).float())
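To inspect the result, the accuracy tensor can be printed directly; the formatting below is a suggestion, not from the original notebook.

print('Test set accuracy: %.2f%%' % (test_acc * 100))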
Decision Boundary
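The boundary drawn below is the set of points where the net input is zero: setting $w_1 x_1 + w_2 x_2 + b = 0$ and solving for $x_2$ gives

$$x_2 = \frac{-w_1 x_1 - b}{w_2},$$

which the cell evaluates at $x_1 = -3$ and $x_1 = 3$ to get two endpoints of the line.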
##########################
### 2D Decision Boundary
##########################

# Boundary endpoints: solve w[0]*x1 + w[1]*x2 + b = 0 for x2
x_min = -3
y_min = ( (-(w[0] * x_min) - b[0])
          / w[1] )
x_max = 3
y_max = ( (-(w[0] * x_max) - b[0])
          / w[1] )

# Figure setup and scatter plots reconstructed; only the legend/show
# calls survived this export
fig, ax = plt.subplots(1, 2, sharex=True, figsize=(7, 3))
for a, (feats, labels) in zip(ax, ((X_train, y_train), (X_test, y_test))):
    a.plot([x_min, x_max], [y_min, y_max])
    a.scatter(feats[labels == 0, 0], feats[labels == 0, 1], label='class 0')
    a.scatter(feats[labels == 1, 0], feats[labels == 1, 1], label='class 1')

ax[1].legend(loc='upper left')
plt.show()