Honours in Data Science
Fourth Year of Engineering (Semester VII)
410502: Machine Learning and Data Science Laboratory
Dr. Girija Gireesh Chiddarwar
Assignment No. 3: Implement basic logic gates using Hebbnet neural networks
Details of Hebbian Learning Rule
The Hebbian learning rule is one of the earliest and simplest learning rules for neural networks. It is used for pattern classification, pattern association, and pattern categorization. A Hebb net is a single-layer neural network: it has one input layer and one output layer. The input layer can have many units, say n, while the output layer has a single unit. The Hebbian rule updates the weights between neurons once for each training sample.
Hebbian Learning Rule Algorithm:
1. Set all weights to zero, wi = 0 for i = 1 to n, and set the bias to zero, b = 0.
2. For each training pair s : t (input vector s, target output t), repeat steps 3 to 5.
3. Set the activations of the input units to the input vector: xi = si for i = 1 to n.
4. Set the output unit to the corresponding target value: y = t.
5. Update the weights and bias by applying the Hebb rule for all i = 1 to n (see the sketch below):
   wi(new) = wi(old) + xi * y
   b(new) = b(old) + y
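These steps translate directly into code. Below is a minimal sketch of the training loop in plain Python; the function name hebb_train and the (samples, weights, bias) interface are illustrative choices, not prescribed by the algorithm statement above.

def hebb_train(samples):
    """Train a single-output Hebb net.

    samples is a list of (x, t) pairs: x is a list of bipolar inputs
    (-1 or 1) and t is the bipolar target output.
    Returns the learned weight list and bias.
    """
    n = len(samples[0][0])
    w = [0] * n            # step 1: weights start at zero
    b = 0                  # step 1: bias starts at zero
    for x, t in samples:   # step 2: one pass over the training pairs
        y = t              # steps 3-4: output is clamped to the target
        for i in range(n): # step 5: wi(new) = wi(old) + xi * y
            w[i] += x[i] * y
        b += y             # step 5: b(new) = b(old) + y
    return w, b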
Implementing the AND Gate:
There are 4 training samples, so there will be 4 iterations. The representation used here is bipolar, so all inputs and targets lie in the range [-1, 1]; each input vector carries a constant third component of 1 for the bias.

Step 1: Set the weights and bias to zero: w = [0 0 0]^T and b = 0.
Step 2: Set the input vectors xi = si for i = 1 to 4:
x1 = [-1 -1 1]^T
x2 = [-1 1 1]^T
x3 = [1 -1 1]^T
x4 = [1 1 1]^T
Step 3: Set the output value to the target, y = t. For the bipolar AND gate, t1 = t2 = t3 = -1 and t4 = 1.
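In code, these training pairs can be written down directly. A minimal sketch, assuming the same bipolar encoding as above; the variable name and_samples is illustrative:

# Bipolar AND training set: (input vector, target output).
# The constant third component of each input is the bias input.
and_samples = [
    ([-1, -1, 1], -1),
    ([-1,  1, 1], -1),
    ([ 1, -1, 1], -1),
    ([ 1,  1, 1],  1),
]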
Step 4: Modify the weights using the Hebbian rule. Each iteration starts from the final weights of the previous one.

First iteration:  w(new) = w(old) + x1 * y1 = [0 0 0]^T + [-1 -1 1]^T * (-1) = [1 1 -1]^T
Second iteration: w(new) = [1 1 -1]^T + [-1 1 1]^T * (-1) = [2 0 -2]^T
Third iteration:  w(new) = [2 0 -2]^T + [1 -1 1]^T * (-1) = [1 1 -3]^T
Fourth iteration: w(new) = [1 1 -3]^T + [1 1 1]^T * (1) = [2 2 -2]^T

So the final weight vector is [2 2 -2]^T.
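Assuming the hebb_train sketch and the and_samples list defined above, the hand computation can be checked in a few lines. Here the first two components of each input are passed as x, and the constant bias input is handled by b:

# hebb_train tracks the bias separately, which is equivalent to
# learning a weight on the constant third input.
pairs = [(x[:2], t) for x, t in and_samples]
w, b = hebb_train(pairs)
print(w, b)  # [2, 2] -2, i.e. the final vector [2 2 -2]^T from above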
Algorithm for Implementation
1. Assign the input and target output values for the gate.
2. Initialize the weights.
3. Apply Hebb's rule to update the weights for each gate.
4. Evaluate the trained network by accepting input from the user, i.e., check the learning and calculate the output (see the sketch below).
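A minimal end-to-end sketch of these steps, reusing the hebb_train function above. The bipolar truth tables, the recall helper, and the sign activation used for evaluation are assumptions made for illustration; the assignment does not prescribe these names:

# Bipolar truth tables: (input pair, target output) for each gate.
GATES = {
    "AND": [([-1, -1], -1), ([-1, 1], -1), ([1, -1], -1), ([1, 1], 1)],
    "OR":  [([-1, -1], -1), ([-1, 1],  1), ([1, -1],  1), ([1, 1], 1)],
}

def recall(w, b, x):
    """Compute the net input and apply a bipolar sign activation."""
    net = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if net >= 0 else -1

if __name__ == "__main__":
    gate = input("Gate (AND/OR): ").strip().upper()
    w, b = hebb_train(GATES[gate])    # learn weights with Hebb's rule
    x = [int(v) for v in input("Two bipolar inputs, e.g. 1 -1: ").split()]
    print("Output:", recall(w, b, x)) # check the learned behaviour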