Assg 2 Complete

This document contains the code and output for training and testing a perceptron model on a coin classification problem. It loads training and test data, trains the perceptron using the training data to obtain the weight parameters and decision boundary, and tests the model on the test data to calculate the testing error. The code includes functions for training the perceptron, plotting the decision boundary, and calculating the testing error. The output displays the trained weights, number of updates, confusion matrix, and calculated testing error.


SAMI

Assignment 2
16/02/2016
1. Which of the following scenarios requires a learning approach?
(i) Classifying numbers into even and odd.
(ii) Detecting potential fraud in online credit card charges.
(iii) Determining the time it would take an angry bird to hit the pig at the specified location.
(iv) Determining the optimal cycle for traffic lights in a busy intersection.
a) (ii) and (iv)
b) (i) and (ii)
c) (i), (ii), and (iii)
d) (iii)
e) (i) and (iii)

Answer: a)

2. What type of learning, if any, best describes the following scenario?

A coin classification system has to be created for a vending machine. To do this, the engineers obtain exact coin specifications from the Reserve Bank of India and derive a statistical model of the size, weight, and denomination, which the vending machine then uses to classify its coins.
a) Not Learning
b) Supervised
c) Unsupervised
d) Can't Say

Answer: a)

3. What type of learning, if any, best describes the following scenario?

Instead of calling the Reserve Bank of India to obtain coin information, an algorithm is presented with a large set of labeled coins. The algorithm uses this data to infer decision boundaries, which the vending machine then uses to classify its coins.
a) Not Learning
b) Supervised
c) Unsupervised
d) Can't Say

Answer: b)
4. You are given a trained perceptron classifier where h(x) = sign(3 + 6x1 - 3x2). Which one of the following plots represents the decision boundary of this classifier? Justify your answer.

[Plots a-d: four candidate decision boundaries in the (x1, x2) plane, each dividing it into regions labelled y = 1 and y = -1.]
Answer: a)
Consider a single-neuron perceptron network. Its output is given by a = hardlim(n) = hardlim(w^T p + b), where hardlim(n) = 1 if n >= 0 and 0 otherwise (the sign function in h(x) is the equivalent +1/-1 form).

The decision boundary is determined by the input vectors p for which the net input n is zero. For the given problem, n = 3 + 6x1 - 3x2 = 0, i.e. x2 = 2x1 + 1, which is the line shown in option (a).
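
A quick sanity check is to evaluate h(x) on one point from each side of the line x2 = 2x1 + 1. The snippet below is an illustrative MATLAB check, not part of the original assignment code:

h = @(x1, x2) sign(3 + 6*x1 - 3*x2);
h(0, 0)   % returns +1: points below the line x2 = 2*x1 + 1 are classified y = +1
h(0, 5)   % returns -1: points above the line are classified y = -1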

5. Coin Vending Machine Problem


Code for the coin vending script:
load coins_data_train.txt;
load coins_data_test.txt;
XTrain = [coins_data_train(:,1) coins_data_train(:,2)];   % first two columns are the features
yTrain = coins_data_train(:,4);                            % fourth column holds the +1/-1 labels
gscatter(XTrain(:,1), XTrain(:,2), yTrain, 'br', '.*');
title('Scatter Plot for Input data points');
hold on
[w, k] = PerceptronTrain(XTrain, yTrain);    % training
boundary(w, XTrain);                         % decision boundary plot
XTest = [coins_data_test(:,1) coins_data_test(:,2)];
yTest = coins_data_test(:,4);
test_err = PerceptronTest(w, XTest, yTest);  % testing
[c_matrix, order] = conf_matx(yTrain, XTrain, w);   % confusion matrix
disp('Weight (parameter) vector is:');
disp(w);
disp('Number of updates:');
disp(k);
disp('Confusion matrix is:');
disp(c_matrix);
disp('Row labels for confusion matrix:');
disp(order);
disp('Testing error is:');
disp(test_err);
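
The helper conf_matx is called in the script above but its listing is not included here. A minimal sketch of what it could look like, assuming it simply thresholds the net input with the trained weights and tabulates the predictions against the true labels using MATLAB's confusionmat (Statistics and Machine Learning Toolbox):

function [c_matrix, order] = conf_matx(yTrain, XTrain, w)
% Hypothetical reconstruction of the missing helper, not the author's original code
scores = [ones(size(XTrain,1),1) XTrain]*w;   % net input; w(1,1) is the bias
yPred = ones(size(scores));
yPred(scores < 0) = -1;                       % same threshold convention as in training
[c_matrix, order] = confusionmat(yTrain, yPred);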

Code for the PerceptronTrain function:


function [w, k] = PerceptronTrain(XTrain, yTrain)
[n, d] = size(XTrain);
w = ones(d+1, 1);    % initial parameters are set to one; w(1,1) is the bias
k = 0;               % number of updates
iterations = 20;     % fixed number of passes over the training data
for i = 1:iterations
    for j = 1:n
        v = w(1,1) + XTrain(j,1)*w(2,1) + XTrain(j,2)*w(3,1);   % net input
        if v >= 0
            out = 1;
        else
            out = -1;
        end
        error = yTrain(j) - out;   % 0 if correct, +2 or -2 if misclassified
        if error ~= 0
            k = k + 1;
            w = w + error*[1; XTrain(j,:)'];   % update only on a mistake (a no-op otherwise)
        end
    end
end
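
As an illustration of the calling convention only (made-up points, not the coins data):

Xdemo = [1 2; 2 3; 1 0; 2 1];   % hypothetical, linearly separable points
ydemo = [1; 1; -1; -1];
[wDemo, kDemo] = PerceptronTrain(Xdemo, ydemo);   % wDemo = [bias; w1; w2], kDemo = update count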

Code for the boundary function:


function boundary(w, XTrain)
minimum = min(min(XTrain));
maximum = max(max(XTrain));
x = minimum-1:0.01:maximum+1;   % range of x1 values to plot over
m = w(2,1)/w(3,1);
c = w(1,1)/w(3,1);
y = -(c + m*x);                 % x2 on the boundary: w(1) + w(2)*x1 + w(3)*x2 = 0
plot(x, y, 'k');                % draw the decision boundary in black
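
The plotted line follows from setting the net input to zero: w(1) + w(2)*x1 + w(3)*x2 = 0, so x2 = -(w(1)/w(3) + (w(2)/w(3))*x1) = -(c + m*x), which is exactly the expression computed for y above.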

Code for PerceptronTest:


function test_err = PerceptronTest(w, XTest, yTest)
m = size(XTest, 1);   % number of test points
errors = zeros(m, 1);
for i = 1:m
    v = w(1,1) + XTest(i,1)*w(2,1) + XTest(i,2)*w(3,1);   % net input
    if v >= 0
        out = 1;
    else
        out = -1;
    end
    errors(i,1) = (yTest(i) ~= out);   % 1 if the point is misclassified
end
test_err = sum(errors)/m;   % fraction of misclassified test points
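
With this convention test_err is the fraction of misclassified test points; a hypothetical call, reusing the weights returned by PerceptronTrain:

err = PerceptronTest(w, XTest, yTest);
fprintf('Misclassification rate: %.2f%%\n', 100*err);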

Output:

[Output screenshots: scatter plot with the decision boundary, trained weight vector, number of updates, confusion matrix, and testing error.]
