
L03: Perceptrons

An implementation of the classic Perceptron by Frank Rosenblatt for binary classification (here: 0/1 class labels) in NumPy.
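For reference, the model below uses the standard perceptron rule: a sample is classified by thresholding the net input, and after each sample the parameters are nudged by the prediction error,

$$\hat{y} = \begin{cases} 1 & \text{if } \mathbf{w}^\top \mathbf{x} + b > 0,\\ 0 & \text{otherwise,} \end{cases} \qquad \mathbf{w} \leftarrow \mathbf{w} + (y - \hat{y})\,\mathbf{x}, \quad b \leftarrow b + (y - \hat{y}).$$

With 0/1 labels the error $y - \hat{y} \in \{-1, 0, 1\}$, so only misclassified samples change the parameters.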

Imports
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

Preparing a toy dataset

##########################
### DATASET
##########################

data = np.genfromtxt('perceptron_toydata.txt', delimiter='\t')

X, y = data[:, :2], data[:, 2]
y = y.astype(int)  # np.int is deprecated in recent NumPy; use the builtin int

print('Class label counts:', np.bincount(y))
print('X.shape:', X.shape)
print('y.shape:', y.shape)

# Shuffling & train/test split
shuffle_idx = np.arange(y.shape[0])
shuffle_rng = np.random.RandomState(123)
shuffle_rng.shuffle(shuffle_idx)
X, y = X[shuffle_idx], y[shuffle_idx]

# X and y are already shuffled, so indexing with shuffle_idx again applies
# the permutation a second time; the result is still a random 70/30 split.
X_train, X_test = X[shuffle_idx[:70]], X[shuffle_idx[70:]]
y_train, y_test = y[shuffle_idx[:70]], y[shuffle_idx[70:]]

# Normalize (mean zero, unit variance)
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
X_train = (X_train - mu) / sigma
X_test = (X_test - mu) / sigma

Class label counts: [50 50]
X.shape: (100, 2)
y.shape: (100,)

X_train.std(axis=0)

array([1., 1.])
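The `array([1., 1.])` above confirms unit variance per feature. Note that both splits are standardized with the training-set statistics,

$$x'_{j} = \frac{x_{j} - \mu_{j}^{\text{train}}}{\sigma_{j}^{\text{train}}},$$

so no test-set information leaks into the preprocessing.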

plt.scatter(X_train[y_train==0, 0], X_train[y_train==0, 1], label='class 0', marker='o')
plt.scatter(X_train[y_train==1, 0], X_train[y_train==1, 1], label='class 1', marker='s')
plt.title('Training set')
plt.xlabel('feature 1')
plt.ylabel('feature 2')
plt.xlim([-3, 3])
plt.ylim([-3, 3])
plt.legend()
plt.show()

plt.scatter(X_test[y_test==0, 0], X_test[y_test==0, 1], label='class 0', marker='o')
plt.scatter(X_test[y_test==1, 0], X_test[y_test==1, 1], label='class 1', marker='s')
plt.title('Test set')
plt.xlabel('feature 1')
plt.ylabel('feature 2')
plt.xlim([-3, 3])
plt.ylim([-3, 3])
plt.legend()
plt.show()

Defining the Perceptron model


class Perceptron:
    def __init__(self, num_features):
        self.num_features = num_features
        # np.float was removed from NumPy; use the builtin float
        self.weights = np.zeros((num_features, 1), dtype=float)
        self.bias = np.zeros(1, dtype=float)

    def forward(self, x):
        # compute the net input and apply the threshold function
        linear = np.dot(x, self.weights) + self.bias
        predictions = np.where(linear > 0., 1, 0)
        return predictions

    def backward(self, x, y):
        # error = true label minus prediction, in {-1, 0, 1}
        predictions = self.forward(x)
        errors = y - predictions
        return errors

    def train(self, x, y, epochs):
        # per-sample (online) updates for the given number of epochs
        for e in range(epochs):
            for i in range(y.shape[0]):
                errors = self.backward(x[i].reshape(1, self.num_features), y[i]).reshape(-1)
                self.weights += (errors * x[i]).reshape(self.num_features, 1)
                self.bias += errors

    def evaluate(self, x, y):
        # fraction of correctly classified samples
        predictions = self.forward(x).reshape(-1)
        accuracy = np.sum(predictions == y) / y.shape[0]
        return accuracy
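As a sanity check on `train()`, here is a minimal sketch of a single update step on one made-up point (the point and label are hypothetical, chosen for illustration; it assumes the `Perceptron` class above):

ppn_demo = Perceptron(num_features=2)
x_i = np.array([1.0, -0.5])  # hypothetical training point
y_i = 1                      # its label

# With zero-initialized weights the net input is 0, so forward() predicts 0
# and the error is y - y_hat = 1: the update adds x_i to the weights and 1 to the bias.
error = ppn_demo.backward(x_i.reshape(1, 2), y_i).reshape(-1)
ppn_demo.weights += (error * x_i).reshape(2, 1)
ppn_demo.bias += error
print(ppn_demo.weights.ravel(), ppn_demo.bias)  # expected: [ 1.  -0.5] [1.]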

Training the Perceptron

ppn = Perceptron(num_features=2)
ppn.train(X_train, y_train, epochs=5)

print('Model parameters:\n\n')
print(' Weights: %s\n' % ppn.weights)
print(' Bias: %s\n' % ppn.bias)

Model parameters:

Weights: [[1.27340847]
          [1.34642288]]

Bias: [-1.]

Evaluating the model

train_acc = ppn.evaluate(X_train, y_train)
print('Train set accuracy: %.2f%%' % (train_acc*100))

Train set accuracy: 100.00%

test_acc = ppn.evaluate(X_test, y_test)
print('Test set accuracy: %.2f%%' % (test_acc*100))

Test set accuracy: 93.33%
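Since the 100 samples were split 70/30, the test set has 30 points; 93.33% corresponds to 28/30 correct, i.e., exactly two misclassified test points.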

##########################
### 2D Decision Boundary
##########################

w, b = ppn.weights, ppn.bias

# The boundary satisfies x0*w0 + x1*w1 + b = 0,
# so x1 = (-x0*w0 - b) / w1.
x0_min = -2
x1_min = (-(w[0] * x0_min) - b[0]) / w[1]

x0_max = 2
x1_max = (-(w[0] * x0_max) - b[0]) / w[1]

fig, ax = plt.subplots(1, 2, sharex=True, figsize=(7, 3))

ax[0].plot([x0_min, x0_max], [x1_min, x1_max])
ax[0].scatter(X_train[y_train==0, 0], X_train[y_train==0, 1], label='class 0', marker='o')
ax[0].scatter(X_train[y_train==1, 0], X_train[y_train==1, 1], label='class 1', marker='s')

ax[1].plot([x0_min, x0_max], [x1_min, x1_max])
ax[1].scatter(X_test[y_test==0, 0], X_test[y_test==0, 1], label='class 0', marker='o')
ax[1].scatter(X_test[y_test==1, 0], X_test[y_test==1, 1], label='class 1', marker='s')

ax[1].legend(loc='upper left')
plt.show()
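The two-panel plot shows the same learned boundary over the training set (left) and test set (right). As an alternative visualization (a sketch, assuming the trained `ppn`, `X_train`, and `y_train` from above), the full decision regions can be shaded by evaluating the model on a mesh grid:

xx0, xx1 = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
grid = np.c_[xx0.ravel(), xx1.ravel()]     # (40000, 2) grid of input points
zz = ppn.forward(grid).reshape(xx0.shape)  # 0/1 prediction for every grid point

plt.contourf(xx0, xx1, zz, alpha=0.2)
plt.scatter(X_train[y_train==0, 0], X_train[y_train==0, 1], label='class 0', marker='o')
plt.scatter(X_train[y_train==1, 0], X_train[y_train==1, 1], label='class 1', marker='s')
plt.legend()
plt.show()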
