121 DL2 Ann
Deep Learning
■ Introduction to ANN.
■ Hebb’s Learning
■ Performance Index analysis
■ Gradient Descent / LMS / Delta rule
■ Back-propagation algorithm
■ Relation between the biological neuron and the ANN model
• The strength of a synapse → The weight w
• The cell body → the summation and the transfer function.
• The neuron output → the signal on the axon.
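The correspondence above can be sketched as a single artificial neuron: the weight vector plays the role of the synapse strengths, the dot product and transfer function model the cell body, and the return value is the signal on the axon. The `hardlim` transfer function and the sample numbers here are illustrative choices, not from the slides.

```python
import numpy as np

def neuron(p, w, b, f):
    """Single neuron: w models synapse strengths, the cell body is the
    summation plus the transfer function f, and the return value is the
    signal on the axon."""
    n = np.dot(w, p) + b      # net input (summation in the cell body)
    return f(n)               # transfer function -> axon output

# Hard-limit transfer function (one common choice)
hardlim = lambda n: 1.0 if n >= 0 else 0.0

out = neuron(np.array([1.0, -1.0]), np.array([0.5, 0.5]), 0.0, hardlim)
print(out)  # net input = 0.5*1.0 + 0.5*(-1.0) + 0 = 0, so hardlim fires: 1.0
```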
(Deep Learning, Dr. Ashish Gupta) ULC665 : ANN 5 / 64
Transfer Functions (ANN)
When the prototype input patterns are not orthogonal, the Hebb rule produces
some errors. There are several procedures that can be used to reduce these
errors.
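The error can be seen directly by applying the Hebb rule, W = Σq tq pqT, to two prototypes that are normalized but not orthogonal. The patterns and targets below are hypothetical, chosen only to make the effect visible.

```python
import numpy as np

# Hypothetical prototype patterns: unit length but NOT orthogonal
p1 = np.array([1.0, 0.0])
p2 = np.array([1.0, 1.0]) / np.sqrt(2)
t1, t2 = np.array([1.0]), np.array([-1.0])

# Hebb rule for the linear associator: W = sum_q t_q p_q^T
W = np.outer(t1, p1) + np.outer(t2, p2)

# Recall is no longer exact because p1^T p2 != 0: the cross-term leaks in
print(W @ p1)   # ~[0.293] instead of the target [1]
print(W @ p2)   # ~[-0.293] instead of the target [-1]
```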
Pseudoinverse Rule
■ The task of the linear associator is to produce the output tq for the
input pq . In other words,
Wpq = tq ,  q = 1, 2, ..., Q
or, in matrix form, WP = T, where T = [t1 t2 ... tQ ] and P = [p1 p2 ... pQ ].
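The pseudoinverse rule solves WP = T with W = TP+, and when the prototype patterns are linearly independent the recall becomes exact even for non-orthogonal patterns. A minimal sketch, reusing the same illustrative non-orthogonal prototypes as above:

```python
import numpy as np

# Columns of P are the (non-orthogonal) prototype patterns; T holds targets
P = np.column_stack([[1.0, 0.0], [1/np.sqrt(2), 1/np.sqrt(2)]])
T = np.array([[1.0, -1.0]])

# Pseudoinverse rule: W = T P^+
W = T @ np.linalg.pinv(P)

# Recall is now exact: W P reproduces T
print(W @ P)   # [[ 1. -1.]]
```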
• Consider the function F(x) = x1^2 + 2 x2^2 . Find the derivative of the
function at the point X∗ = [0.5 0.5]T in the direction p = [2 −1]T .
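The exercise can be worked out directly: the gradient of F(x) = x1^2 + 2 x2^2 is [2 x1, 4 x2], which at X∗ = [0.5 0.5]T equals [1, 2], and the directional derivative is pT∇F / ||p||.

```python
import numpy as np

# F(x) = x1^2 + 2*x2^2, so the gradient is [2*x1, 4*x2]
x = np.array([0.5, 0.5])
grad = np.array([2 * x[0], 4 * x[1]])    # = [1, 2]

p = np.array([2.0, -1.0])
slope = p @ grad / np.linalg.norm(p)     # p^T grad / ||p||
print(slope)  # 0.0 -> this p is orthogonal to the gradient
```

The zero result illustrates the next bullet: any direction orthogonal to the gradient has zero slope.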
■ Any direction that is orthogonal to the gradient will have zero slope.
■ Which direction has the greatest slope?
• The maximum slope will occur when the inner product of the
direction vector and the gradient is a maximum.
• This happens when the direction vector is the same as the gradient.
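The claim can be checked numerically: sweeping over unit direction vectors, the inner product pT∇F is maximized when p points along the gradient itself. The gradient value used below is that of F(x) = x1^2 + 2 x2^2 at [0.5 0.5]T.

```python
import numpy as np

# Gradient of F(x) = x1^2 + 2*x2^2 at x = [0.5, 0.5]
grad = np.array([1.0, 2.0])

# Sample unit directions around the circle; the slope p^T grad is
# largest when p points along the gradient direction.
angles = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
dirs = np.column_stack([np.cos(angles), np.sin(angles)])
slopes = dirs @ grad

best = dirs[np.argmax(slopes)]
print(best, grad / np.linalg.norm(grad))  # nearly identical directions
```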
Figure: Since all the eigenvalues are equal, the curvature should be the same in
all directions, and therefore the function should have circular contours.
Example-2
■ Single layer neural networks can only solve problems that are linearly
separable.
■ Examples of linearly inseparable problems
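A short sketch of this limitation, using a hypothetical `train_perceptron` helper with the perceptron learning rule: a single-layer network learns the linearly separable AND function, but never converges on XOR, the classic linearly inseparable problem.

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Single-layer perceptron with the perceptron learning rule.
    Returns (weights, bias, converged)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        errors = 0
        for p, t in zip(X, y):
            a = 1.0 if w @ p + b >= 0 else 0.0   # hard-limit output
            e = t - a
            w += e * p                            # w_new = w_old + e*p
            b += e
            errors += abs(e)
        if errors == 0:                           # one clean pass: done
            return w, b, True
    return w, b, False

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
and_y = np.array([0, 0, 0, 1], dtype=float)
xor_y = np.array([0, 1, 1, 0], dtype=float)

print(train_perceptron(X, and_y)[2])  # True  -- AND is linearly separable
print(train_perceptron(X, xor_y)[2])  # False -- XOR is not
```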