Learning

Learning rules are methods for adapting the parameters of a neural network through interaction with its environment. There are several types of learning rules including Hebbian, Perceptron, Delta, Competitive, Outstar, Boltzman, and memory-based rules. The Hebbian rule states that repeatedly activating two connected neurons together strengthens their connection, while activating them separately weakens it. The Perceptron rule updates weights based on the difference between the actual and desired output. Competitive learning involves neurons competing to respond to input patterns.

Uploaded by

Sindhuja S

Learning rules:

▪ A learning rule is a method or mathematical logic that
guides how a network changes its weights.
▪ Learning is the process by which the free parameters of a
neural network get adapted through a process of
stimulation by the environment in which the network
is embedded.
▪ The set of well-defined rules for the solution of a
learning problem is called a learning algorithm.
▪ It is an iterative process.
▪ There are 7 types of learning rules. They are
❖ Hebbian Learning Rule
❖ Perceptron Learning Rule
❖ Delta Learning Rule
❖ Competitive Learning Rule
❖ Outstar Learning Rule
❖ Boltzmann Learning Rule
❖ Memory-Based Learning Rule

Hebbian Learning Rule:


• It is the oldest and most common of all learning
rules.
• It states that when the axon of cell A is near enough to
excite cell B and repeatedly or persistently takes part in
firing it, some metabolic change takes place in one or both
cells such that A's efficiency in firing B is increased.
• It is also called correlational learning.
This statement can be split into two parts:
1. If the neurons on either side of a synapse are activated
simultaneously, then the strength of that synapse is
selectively increased.
2. If the neurons on either side of a synapse are activated
asynchronously, then the strength of that synapse is
selectively weakened or eliminated.
A synapse that behaves this way is called a Hebbian synapse.
The four key mechanisms that characterize a Hebbian synapse
are
▪ Time-dependent mechanism
▪ Local mechanism
▪ Interactive mechanism
▪ Correlational mechanism
• The simplest form of Hebbian learning is described by
    ∆w_i = η x_i y,  where η is the learning rate.
• The Hebbian rule is feed-forward, unsupervised learning.
• If the product of input and output is positive, the
weight increases; otherwise the weight decreases.
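The weight update above can be sketched in a few lines of NumPy. The learning rate `eta` and the example vectors are illustrative assumptions, not values from the notes.

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.1):
    # Simplest Hebbian step: delta_w_i = eta * x_i * y.
    # When input and output have the same sign the weight grows;
    # when they have opposite signs it shrinks.
    return w + eta * x * y

w = np.zeros(3)                 # initial weights
x = np.array([1.0, -1.0, 1.0])  # bipolar input vector (illustrative)
y = 1.0                         # neuron output
w = hebbian_update(w, x, y)     # each weight moves by eta * x_i * y
```

Repeating the update for co-active pairs steadily strengthens their connection, which is exactly the two-part statement of the rule.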
Perceptron Learning Rule:
• For the perceptron learning rule, the learning signal is
the difference between the desired and actual neuron
response.
• The perceptron learning rule is supervised learning.
• The rule states that the weight vector is perpendicular to
the plane separating the input patterns during the learning
process.
For a finite number N of input training vectors,
    x(n), where n = 1 to N,
each with an associated target value,
    t(n), where n = 1 to N,
which is +1 or -1, and an activation function y = f(y_in),
where
    y = 1   if y_in > θ
    y = 0   if -θ ≤ y_in ≤ θ
    y = -1  if y_in < -θ
the weight update is given by
    if y ≠ t, then w_new = w_old + t x
    if y = t, then there is no change in weights.
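A minimal sketch of this training loop follows. The threshold `theta`, the epoch count, and the small bipolar AND-style dataset are illustrative assumptions, not from the notes.

```python
import numpy as np

def activation(y_in, theta=0.2):
    # Tri-valued activation from the notes: 1 above theta,
    # -1 below -theta, and 0 in the dead zone between.
    if y_in > theta:
        return 1
    if y_in < -theta:
        return -1
    return 0

def perceptron_train(X, T, theta=0.2, epochs=10):
    # Apply w_new = w_old + t*x whenever the response y disagrees with t.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, T):
            if activation(w @ x, theta) != t:
                w = w + t * x
    return w

# Illustrative bipolar data; the last column acts as a bias input.
X = np.array([[1, 1, 1], [1, -1, 1], [-1, 1, 1], [-1, -1, 1]], dtype=float)
T = np.array([1, -1, -1, -1])
w = perceptron_train(X, T)
```

Because the data are linearly separable, the loop stops changing `w` once every pattern is classified with the correct sign.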
Outstar Learning Rule:
▪ It can be best explained when the neurons are arranged
in a layer.
▪ It is designed to produce the desired response t from the
layer of n neurons.
▪ This type of learning is also called Grossberg learning.

    ∆W_jk = α (y_k − W_jk)   if neuron j wins the competition
          = 0                if neuron j loses the competition

▪ This rule is used to provide learning of repetitive and
characteristic properties of input-output relationships.
▪ It allows the network to extract statistical properties of
the input and output signals.
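The outstar update can be sketched as follows: only the weights fanning out from the winning node move toward the desired layer response. The layer sizes, `alpha`, and the choice of winner are illustrative assumptions.

```python
import numpy as np

def outstar_update(W, y, j, alpha=0.5):
    # Row j holds the weights fanning out from source node j.
    # delta_W_jk = alpha * (y_k - W_jk) if node j wins; rows of
    # losing nodes are left unchanged.
    W = W.copy()
    W[j] += alpha * (y - W[j])
    return W

W = np.zeros((2, 3))            # 2 source nodes fanning out to 3 layer neurons
y = np.array([1.0, 0.0, 1.0])   # desired response t of the layer
W = outstar_update(W, y, j=0)   # only row 0 moves toward y
```

Repeated presentations pull the winner's outgoing weights toward the desired response, which is how the layer learns the repetitive input-output relationship.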
Boltzman Learning:
• It is a stochastic learning rule.
• A neural net designed based on this learning is called a
Boltzmann machine.
• The neurons constitute a recurrent structure and they
work in binary form.
• It is characterized by an energy function E, the value of
which is determined by the particular states occupied by
the individual neurons of the machine.
    E = -1/2 ∑_i ∑_j w_ij x_j x_i ,   i ≠ j
where x_i is the state of neuron i and w_ij is the weight from
neuron i to neuron j.
The neurons of this learning process are divided into two
groups: visible and hidden.
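The energy function can be computed directly from a weight matrix and a state vector; the small symmetric weight matrix and bipolar states below are illustrative assumptions.

```python
import numpy as np

def boltzmann_energy(W, x):
    # E = -1/2 * sum over i != j of w_ij * x_j * x_i.
    # Zeroing the diagonal excludes the i == j self-terms.
    W_off = W - np.diag(np.diag(W))
    return -0.5 * x @ W_off @ x

W = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # symmetric weights (illustrative)
x = np.array([1.0, 1.0])        # binary/bipolar neuron states
E = boltzmann_energy(W, x)
```

States that agree across positively weighted connections lower the energy, so stochastic updates drive the machine toward low-energy configurations.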
Memory Based Learning Rule:
• All past experiences are stored in a large memory of
correctly classified input-output examples:
    {(x_i, t_i)}, i = 1 to N
where x_i is the input vector and t_i is the desired response.
It has two parts:
• the criterion used for defining the local neighborhood of
the test vector
• the learning rule applied to the training examples in the
local neighborhood.
• One of the most widely used memory-based learning rules is
the nearest neighbor rule.
A vector x'_N ∈ {x_1, …, x_N} is the nearest neighbor of a
test vector x_t if
    min_i d(x_i, x_t) = d(x'_N, x_t)
where d(x_i, x_t) is the Euclidean distance between the vectors.

It can also be applied to radial basis function networks.

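The nearest neighbor rule above reduces to a few lines: find the stored example at minimum Euclidean distance and return its response. The two stored examples and the test point are illustrative assumptions.

```python
import numpy as np

def nearest_neighbor(x_t, X, T):
    # Return the stored response t_i of the x_i that minimizes the
    # Euclidean distance d(x_i, x_t) to the test vector x_t.
    d = np.linalg.norm(X - x_t, axis=1)
    return T[np.argmin(d)]

X = np.array([[0.0, 0.0], [1.0, 1.0]])  # stored input vectors x_i
T = np.array([-1, 1])                   # desired responses t_i
label = nearest_neighbor(np.array([0.9, 0.8]), X, T)
```

No weights are adapted at all: the "learning" is simply the storage of examples, and all computation happens at classification time in the local neighborhood of the test vector.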
Competitive Learning Rule:

▪ The output neurons of a neural network compete among
themselves to become active.
▪ There is a set of neurons that are similar in all respects
except for their synaptic weights, so that they respond
differently to a given set of input patterns.
▪ The neuron that wins the competition is called a
winner-takes-all neuron.

    y_p = 1   if v_p > v_q for all q, q ≠ p
    y_p = 0   otherwise
▪ This rule is suited for unsupervised network training.

▪ This uses the standard Kohonen learning rule, with each
neuron's weights normalized so that
    ∑_j w_ij = 1 for all i


Using the standard competitive rule, the change ∆w_ij is

    ∆w_ij = α (x_j − w_ij)   if neuron i wins the competition
          = 0                if neuron i loses the competition

where α is the learning rate.


The Euclidean norm is the most widely used similarity
measure; the dot product may also be used but can require
normalization of the weight vectors.
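One competitive step can be sketched as below, using the Euclidean norm to pick the winner as the notes suggest. The 2×2 weight matrix, input vector, and `alpha` are illustrative assumptions.

```python
import numpy as np

def competitive_update(W, x, alpha=0.5):
    # The neuron whose weight row is closest to x under the Euclidean
    # norm wins; only the winner's row moves toward the input:
    # delta_w_ij = alpha * (x_j - w_ij) for the winner, 0 for losers.
    winner = int(np.argmin(np.linalg.norm(W - x, axis=1)))
    W = W.copy()
    W[winner] += alpha * (x - W[winner])
    return W, winner

W = np.array([[1.0, 0.0],
              [0.0, 1.0]])   # one weight row per competing neuron
x = np.array([0.9, 0.1])     # input pattern
W, winner = competitive_update(W, x)
```

Over many inputs each neuron's weight row drifts toward the cluster of patterns it keeps winning, which is how the unsupervised network discovers structure in the input.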
