The document contains an assignment by Aman Kapil on implementing an AND gate and distinguishing between two patterns using Hebb's network in Matlab. It includes code snippets for training the network with specified data and calculating the final weights, bias, and outputs for various test cases. The assignment demonstrates the application of Hebbian learning principles in neural networks.


NNFLC ASSIGNMENT - 7

Submitted By:-
Name:- Aman Kapil
Roll No.:- 21001017005

Q1:- Implement AND Gate using Hebb’s network in Matlab.


# CODE:-

% Training data for the AND gate in bipolar form: [x1 x2 target]
data = [
    1  1  1;
   -1  1 -1;
    1 -1 -1;
   -1 -1 -1
];
w1 = 0;
w2 = 0;
b = 0;
learning_rate = 1;
for i = 1:size(data, 1)
    x1 = data(i, 1);
    x2 = data(i, 2);
    y = data(i, 3);

    % Hebb rule: delta_w = learning_rate * input * target
    delta_w1 = learning_rate * x1 * y;
    delta_w2 = learning_rate * x2 * y;
    delta_b = learning_rate * y;

    w1 = w1 + delta_w1;
    w2 = w2 + delta_w2;
    b = b + delta_b;
end
fprintf('Final weights and bias:\n');
fprintf('w1 = %d, w2 = %d, b = %d\n', w1, w2, b);
% Testing with the data point (1, -1)
test_x1 = 1;
test_x2 = -1;
% Calculate the output for the test point
y_test = sign(w1 * test_x1 + w2 * test_x2 + b);
fprintf('Output for test input (1, -1): y = %d\n', y_test);
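As a cross-check on the MATLAB run above, the same Hebb updates can be replayed in a few lines of Python (a sketch, not part of the submission): with a learning rate of 1, training over the four bipolar rows ends with w1 = 2, w2 = 2, b = -2, and sign(w1*x1 + w2*x2 + b) then reproduces the AND truth table.

```python
# Python replay of the Hebb training loop above (bipolar AND data,
# learning rate 1); variable names mirror the MATLAB script.
data = [(1, 1, 1), (-1, 1, -1), (1, -1, -1), (-1, -1, -1)]

w1 = w2 = b = 0
for x1, x2, y in data:
    w1 += x1 * y  # delta_w1 = learning_rate * x1 * y
    w2 += x2 * y  # delta_w2 = learning_rate * x2 * y
    b += y        # delta_b  = learning_rate * y  (bias input is fixed at 1)

def sign(v):
    # behaves like MATLAB's sign(): -1, 0, or +1
    return (v > 0) - (v < 0)

print(w1, w2, b)  # 2 2 -2
outputs = [sign(w1 * x1 + w2 * x2 + b) for x1, x2, _ in data]
print(outputs)    # [1, -1, -1, -1] -- matches the AND targets
```

Note that only the (1, 1) input clears the bias of -2, which is exactly the AND behaviour.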
Q2:- Distinguish between two given patterns using Hebb’s network in Matlab.

Ans:-
# CODE:-
% Training data: each row is one pattern (12 bipolar pixels, + = 1 and . = -1)
% followed by its target class in column 13
data = [
    1 -1 -1 1 -1 -1 1 -1 -1 1 1 1  1;
    1 -1  1 1 -1  1 1 -1  1 1 1 1 -1
];
w1 = 0; w2 = 0; w3 = 0; w4 = 0; w5 = 0; w6 = 0;
w7 = 0; w8 = 0; w9 = 0; w10 = 0; w11 = 0; w12 = 0;
b = 0;
learning_rate = 1;
for i = 1:size(data, 1)
    x1 = data(i, 1);
    x2 = data(i, 2);
    x3 = data(i, 3);
    x4 = data(i, 4);
    x5 = data(i, 5);
    x6 = data(i, 6);
    x7 = data(i, 7);
    x8 = data(i, 8);
    x9 = data(i, 9);
    x10 = data(i, 10);
    x11 = data(i, 11);
    x12 = data(i, 12);

    y = data(i, 13); % Target output

    % Calculate weight updates
    delta_w1 = learning_rate * x1 * y;
    delta_w2 = learning_rate * x2 * y;
    delta_w3 = learning_rate * x3 * y;
    delta_w4 = learning_rate * x4 * y;
    delta_w5 = learning_rate * x5 * y;
    delta_w6 = learning_rate * x6 * y;
    delta_w7 = learning_rate * x7 * y;
    delta_w8 = learning_rate * x8 * y;
    delta_w9 = learning_rate * x9 * y;
    delta_w10 = learning_rate * x10 * y;
    delta_w11 = learning_rate * x11 * y;
    delta_w12 = learning_rate * x12 * y;
    delta_b = learning_rate * y;

    % Update weights and bias
    w1 = w1 + delta_w1;
    w2 = w2 + delta_w2;
    w3 = w3 + delta_w3;
    w4 = w4 + delta_w4;
    w5 = w5 + delta_w5;
    w6 = w6 + delta_w6;
    w7 = w7 + delta_w7;
    w8 = w8 + delta_w8;
    w9 = w9 + delta_w9;
    w10 = w10 + delta_w10;
    w11 = w11 + delta_w11;
    w12 = w12 + delta_w12;
    b = b + delta_b;
end
% Display final weights and bias
fprintf('Final weights and bias:\n');
fprintf(['w1 = %d, w2 = %d, w3 = %d, w4 = %d, w5 = %d, w6 = %d, ' ...
    'w7 = %d, w8 = %d, w9 = %d, w10 = %d, w11 = %d, w12 = %d, b = %d\n'], ...
    w1, w2, w3, w4, w5, w6, w7, w8, w9, w10, w11, w12, b);
% Testing with the first test data set
test1_x = [1 -1 -1 1 -1 -1 1 -1 -1 1 1 1];
weighted_sum1 = w1 * test1_x(1) + w2 * test1_x(2) + w3 * test1_x(3) + w4 * test1_x(4) + ...
    w5 * test1_x(5) + w6 * test1_x(6) + w7 * test1_x(7) + w8 * test1_x(8) + ...
    w9 * test1_x(9) + w10 * test1_x(10) + w11 * test1_x(11) + w12 * test1_x(12) + b;
test1_y = sign(weighted_sum1);
fprintf('Test output for data set 1: y = %d\n', test1_y);
% Testing with the second test data set
test2_x = [1 -1 1 1 -1 1 1 -1 1 1 1 1];
weighted_sum2 = w1 * test2_x(1) + w2 * test2_x(2) + w3 * test2_x(3) + w4 * test2_x(4) + ...
    w5 * test2_x(5) + w6 * test2_x(6) + w7 * test2_x(7) + w8 * test2_x(8) + ...
    w9 * test2_x(9) + w10 * test2_x(10) + w11 * test2_x(11) + w12 * test2_x(12) + b;
test2_y = sign(weighted_sum2);
fprintf('Test output for data set 2: y = %d\n', test2_y);
% Testing with a data set where some values are missing (test 1 with zeros)
test3_x = [1 -1 -1 1 -1 -1 1 0 -1 0 1 1];
weighted_sum3 = w1 * test3_x(1) + w2 * test3_x(2) + w3 * test3_x(3) + w4 * test3_x(4) + ...
    w5 * test3_x(5) + w6 * test3_x(6) + w7 * test3_x(7) + w8 * test3_x(8) + ...
    w9 * test3_x(9) + w10 * test3_x(10) + w11 * test3_x(11) + w12 * test3_x(12) + b;
test3_y = sign(weighted_sum3);
fprintf('Test output for data set with missing values (test set 1 with zeros): y = %d\n', test3_y);
% Testing with a data set where some values are missing (test 2 with zeros)
test4_x = [1 -1 0 1 0 1 1 -1 1 1 1 1];
weighted_sum4 = w1 * test4_x(1) + w2 * test4_x(2) + w3 * test4_x(3) + w4 * test4_x(4) + ...
    w5 * test4_x(5) + w6 * test4_x(6) + w7 * test4_x(7) + w8 * test4_x(8) + ...
    w9 * test4_x(9) + w10 * test4_x(10) + w11 * test4_x(11) + w12 * test4_x(12) + b;
test4_y = sign(weighted_sum4);
fprintf('Test output for data set with missing values (test set 2 with zeros): y = %d\n', test4_y);
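The twelve explicit weight variables above together implement a dot product, so the whole script can be cross-checked with a short vectorized sketch (Python, not part of the submission). With this training data the learned weights come out as -2 at positions 3, 6, and 9 and 0 elsewhere, with b = 0: classification depends only on the three pixels where the two patterns differ, which is why the zero-corrupted test vectors are still classified correctly.

```python
# Vectorized Hebb training for the two 12-pixel patterns above
# (learning rate 1); plain Python lists, no toolboxes needed.
patterns = [
    ([1, -1, -1, 1, -1, -1, 1, -1, -1, 1, 1, 1],  1),  # pattern 1, target +1
    ([1, -1,  1, 1, -1,  1, 1, -1,  1, 1, 1, 1], -1),  # pattern 2, target -1
]

w = [0] * 12
b = 0
for x, y in patterns:
    w = [wj + xj * y for wj, xj in zip(w, x)]  # delta_w_j = x_j * y
    b += y                                     # delta_b = y

def classify(x):
    # sign of the weighted sum, as in the MATLAB test blocks
    s = sum(wj * xj for wj, xj in zip(w, x)) + b
    return (s > 0) - (s < 0)

print(w, b)  # nonzero weight only where the two patterns disagree
# Corrupted inputs (zeros standing in for missing pixels) still classify correctly:
print(classify([1, -1, -1, 1, -1, -1, 1, 0, -1, 0, 1, 1]))  # 1
print(classify([1, -1, 0, 1, 0, 1, 1, -1, 1, 1, 1, 1]))     # -1
```

This also shows the robustness property the zero-padded tests probe: zeroing a pixel whose weight is 0 cannot change the weighted sum at all.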
