Learning Rules

Hebbian Learning Rule

Developed by Donald Hebb in 1949, Hebbian learning is an unsupervised
learning rule that works by adjusting the weight between two neurons in
proportion to the product of their activations.
According to this rule, the weight between two neurons is increased if they
activate together and decreased if they work in opposite directions. If there
is no correlation between the signals, the weight remains the same. As for
the sign of the update, it is positive when both nodes are positive or both
are negative; when one node is positive and the other negative, the update
carries a negative sign.
Δw_i = α * x_i * y
 Δw_i is the change in the weight on the ith input
 α is the learning rate
 x_i is the ith input and,
 y is the output
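
As a rough illustration, the update can be written in a few lines of NumPy. The learning rate, weight values, and input vector below are illustrative assumptions, not values given by the rule itself:

import numpy as np

# Hebbian update: delta_w_i = alpha * x_i * y (all numbers assumed for illustration)
alpha = 0.1                         # learning rate (assumed)
x = np.array([1.0, -1.0, 0.5])      # example input vector (assumed)
w = np.array([0.2, -0.1, 0.4])      # example initial weights (assumed)

y = np.dot(w, x)                    # neuron output
w += alpha * x * y                  # weights grow where input and output are correlated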

Delta Learning Rule


Developed by Bernard Widrow and Marcian Hoff, the Delta learning rule is a
supervised learning rule that uses a continuous activation function.
The main aim of this rule is to minimize the error over the training patterns,
which is why it is also known as the least mean square (LMS) method. The
principle used here is gradient descent: the change in a weight is equal to
the product of the error, the derivative of the activation function, and the
input.
Δw_ij = η * (d_j - y_j) * f'(h_j) * x_i
 ∆w_ij is the change in weight between the ith input neuron and the jth
output neuron
 η is the learning rate
 d_j is the target output
 y_j is the actual output
 f'(h_j) is the derivative of the activation function
 x_i is the ith input
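
For concreteness, here is a minimal sketch of one delta-rule update for a single output neuron with a sigmoid activation. The sigmoid choice, the learning rate, and the toy numbers are assumptions made for the example:

import numpy as np

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

# Delta rule: dw_ij = eta * (d_j - y_j) * f'(h_j) * x_i
eta = 0.5                           # learning rate (assumed)
x = np.array([0.3, 0.7, 1.0])       # inputs to the neuron (assumed)
w = np.array([0.1, -0.2, 0.05])     # current weights (assumed)
d = 1.0                             # target output (assumed)

h = np.dot(w, x)                    # net input h_j
y = sigmoid(h)                      # actual output y_j
dy_dh = y * (1.0 - y)               # f'(h_j) for the sigmoid
w += eta * (d - y) * dy_dh * x      # gradient-descent step on the squared error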
Competitive Learning Rule
The competitive learning rule is unsupervised in nature and, as the name
suggests, is based on the principle of competition among the output nodes.
That is why it is also known as the winner-takes-all rule.
In this rule, all the output nodes compete to represent the input pattern, and
the node whose weights best match the input, that is, the one with the highest
activation, is the winner. This winner node is then assigned the value 1 and
the others that lose remain at 0, so only one neuron is active at a time.
∆w_ij = η * (x_i - w_ij)
 ∆w_ij is the change in weight between the ith input neuron and the jth
(winning) output neuron; only the winner's weights are updated
 η is the learning rate
 x_i is the input vector
 w_ij is the weight between the ith input neuron and jth output neuron
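
A small winner-takes-all sketch makes the idea concrete; the weight matrix, input pattern, and learning rate below are assumed purely for illustration:

import numpy as np

# Competitive (winner-takes-all) update: only the winning node's weights move toward x
eta = 0.2                               # learning rate (assumed)
x = np.array([0.9, 0.1, 0.4])           # input pattern (assumed)
W = np.array([[0.5, 0.5, 0.5],          # each row holds one output node's weights (assumed)
              [0.8, 0.2, 0.3]])

j = np.argmax(W @ x)                    # winner: output node with the highest activation
W[j] += eta * (x - W[j])                # pull the winner's weight vector toward the input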

Perceptron Learning Rule


Developed by Rosenblatt, the perceptron learning rule is an error-correction
rule used in a single-layer feed-forward network. Unlike Hebbian learning,
this is a supervised learning rule.
This rule works by finding the difference between the desired and actual
outputs and adjusting the weights accordingly. Naturally, the rule requires a
set of input vectors, along with the weights, so that it can produce an
output.
w = w + η * (y - ŷ) * x
 w is the weight
 η is the learning rate
 x is the input vector
 y is the actual label of the input vector
 ŷ is the predicted label of the input vector
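
The update is easy to see on a single training example. The bias handling (folded into the input vector), the toy numbers, and the 0/1 labels below are assumptions made for this sketch:

import numpy as np

# Perceptron update: w = w + eta * (y - y_hat) * x
eta = 1.0                                # learning rate (assumed)
x = np.array([1.0, 0.5, -0.3])           # input vector, first entry acting as a bias input (assumed)
w = np.array([-0.5, 0.2, 0.1])           # current weights (assumed)
y = 1                                    # true label (assumed)

y_hat = 1 if np.dot(w, x) >= 0 else 0    # thresholded prediction
w = w + eta * (y - y_hat) * x            # weights change only when the prediction is wrong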

Correlation Learning Rule


With a principle similar to the Hebbian rule, the correlation rule also
increases or decreases the weights based on the phases of the two neurons.
If the neurons are in opposite phases, the weight is pushed toward the
negative side, and if they are in the same phase, the weight is pushed toward
the positive side. The only thing that makes this rule different from the
Hebbian learning rule is that it is supervised in nature: the update uses the
target value rather than the actual output.
Δw_ij = α * x_i * t_j
 Δw_ij is the change in weight
 α is the learning rate
 x_i is the ith input and,
 t_j is the target value
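
Because the correlation rule uses the supplied target rather than the neuron's own output, it can be written as one outer-product update. The array shapes and values below are assumptions made for the example:

import numpy as np

# Correlation update: delta_w_ij = alpha * x_i * t_j (toy values assumed)
alpha = 0.1                          # learning rate (assumed)
x = np.array([1.0, -0.5, 0.2])       # input vector (assumed)
t = np.array([1.0, -1.0])            # target values for two output neurons (assumed)
W = np.zeros((2, 3))                 # weight matrix, one row per output neuron

W += alpha * np.outer(t, x)          # weights strengthen where input and target share a sign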
