The first summation runs over the individuals of the learning set, and the second summation runs over the output units. E_ij and O_ij are the expected and obtained values of the jth unit for the ith individual.
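Written out (the formula itself is not reproduced here, so this is a reconstruction from the description), the error function would be the sum of squared differences:
E = Σ_i Σ_j (E_ij - O_ij)²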
The network then adjusts the weights of the different units, checking each time whether the error function has increased or decreased. As in a conventional regression, this is a matter of solving a least-squares problem.
Since the expected outputs are provided by the user, this is an example of supervised learning.
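For concreteness, a minimal Python sketch of this error computation follows; the array shapes, values, and variable names are illustrative assumptions, not part of the original assignment.

import numpy as np

# Expected (E_ij) and obtained (O_ij) outputs: rows are individuals of the
# learning set, columns are output units.
expected = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 1.0]])
obtained = np.array([[0.9, 0.2],
                     [0.1, 0.8],
                     [0.7, 0.9]])

# First summation over individuals (rows), second over output units (columns).
error = np.sum((expected - obtained) ** 2)
print(error)   # total squared error over the learning set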
3. Delta learning rule:
Developed by Widrow and Hoff, the delta rule is one of the most common learning rules. It depends on supervised learning.
This rule states that the modification in the synaptic weight of a node is equal to the product of the error and the input.
In mathematical form, the delta rule is as follows:
Δw_ij = r * a_i * e_j
For a given input vector, the output vector is compared with the correct answer. If the difference is zero, no learning takes place; otherwise, the network adjusts its weights to reduce this difference. The change in weight from u_i to u_j is Δw_ij = r * a_i * e_j, where r is the learning rate, a_i represents the activation of u_i, and e_j is the difference between the expected output and the actual output of u_j. If the set of input patterns forms an independent set, then arbitrary associations can be learned using the delta rule.
It can be shown that for networks with linear activation functions and no hidden units, the squared error as a function of the weights is a paraboloid in n-space. Because the coefficients of the squared terms are positive, the surface is concave upward and has a unique minimum. The vertex of this paraboloid is the point of minimum error, and the weight vector corresponding to this point is the ideal weight vector.
We can use the delta learning rule with both a single output unit and several output units.
When applying the delta rule, it is assumed that the error can be directly measured.
The aim of applying the delta rule is to reduce the difference between the actual and expected output, that is, the error.
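As a concrete illustration, the following is a minimal Python sketch of the delta rule update Δw_ij = r * a_i * e_j described above, applied to a single-layer linear network. The data, learning rate, epoch count, and variable names are illustrative assumptions, not part of the original assignment.

import numpy as np

rng = np.random.default_rng(0)

# Toy learning set: 4 input patterns with 2 expected outputs each.
inputs = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
targets = np.array([[0., 0.], [0., 1.], [0., 1.], [1., 1.]])

r = 0.1                                  # learning rate
w = rng.normal(scale=0.1, size=(2, 2))   # w[i, j]: weight from input unit i to output unit j

for epoch in range(200):
    for a, d in zip(inputs, targets):
        o = a @ w                  # actual output of the linear units
        e = d - o                  # error: expected output minus actual output
        w += r * np.outer(a, e)    # delta rule: dw_ij = r * a_i * e_j

print(np.round(inputs @ w, 2))     # outputs after training (least-squares fit to the targets)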
4. Correlation learning rule:
The correlation learning rule is based on a similar principle to the Hebbian learning rule. It assumes that weights between simultaneously responding neurons should be more positive, and weights between neurons with opposite reactions should be more negative.
Contrary to the Hebbian rule, the correlation rule is supervised learning: instead of the actual response, o_j, the desired response, d_j, is used for the weight-change calculation.
In mathematical form, the correlation learning rule is as follows:
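(The equation is not reproduced in the original; a common statement of the correlation rule, consistent with the description above, is given here for illustration, with c a learning constant and x_i the ith input, neither of which is defined in the original text.)
Δw_ij = c * x_i * d_j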
where d_j is the desired value of the output signal. This training algorithm usually starts with the initialization of the weights to zero.
Since the desired response is assigned by the user, the correlation learning rule is an example of supervised learning.
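A minimal Python sketch of this update follows; the input pattern, desired response, and learning constant are illustrative assumptions.

import numpy as np

c = 0.1                            # learning constant (assumed value)
x = np.array([0.5, -1.0, 0.2])     # input pattern x_i
d = np.array([1.0, 0.0])           # desired responses d_j of the output neurons
w = np.zeros((3, 2))               # weights initialized to zero, as described above

# Correlation rule: dw_ij = c * x_i * d_j (desired response used instead of the actual one)
w += c * np.outer(x, d)
print(w)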
5. Outstar learning rule:
We use the outstar learning rule when we assume that the nodes or neurons in a network are arranged in a layer. Here the weights connected to a certain node should be equal to the desired outputs for the neurons connected through those weights. The outstar rule produces the desired response t for the layer of n nodes.
This type of learning is applied to all nodes in a particular layer. The weights for the nodes are updated as in Kohonen neural networks.
In mathematical form, the outstar learning rule is expressed as follows:
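(The equation is not reproduced in the original. A common form of the outstar update, consistent with the description above, drives each weight toward the desired response of the node it connects to; the learning constant c and the symbols below are introduced here for illustration.)
Δw_jk = c * (d_k - w_jk)
where d_k is the desired output of node k in the layer and w_jk is the weight from the source node j to node k.
A minimal Python sketch of this update, under the same assumptions:

import numpy as np

c = 0.2                            # learning constant (assumed value)
d = np.array([0.0, 1.0, 0.5])      # desired responses for a layer of n = 3 nodes
w = np.zeros(3)                    # weights fanning out from one source node

# Outstar update: each weight moves toward the desired output of the node it feeds.
for _ in range(50):
    w += c * (d - w)
print(np.round(w, 3))              # the weights converge to the desired responses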