UNIT – 3
Feedforward Neural Networks
Dr. D. SUDHEER
Assistant Professor
Computer Science and Engineering
VNRVJIET
© Dr. Devulapalli Sudheer
Introduction
• By a suitable choice of architecture for a feedforward network, it
is possible to perform several pattern recognition tasks.
• The linear associative network, however, is limited in its capabilities.
• The constraint on the number of input patterns is overcome by using a two-layer feedforward network with nonlinear processing units in the output layer.
• This modification naturally leads to the consideration of pattern classification problems.
• Classification problems which are not linearly separable are called hard problems.
• In order to overcome the constraint of linear separability for
pattern classification problems, a multilayer feedforward network
with nonlinear processing units in all the intermediate hidden
layers and in the output layer is proposed.
• A multilayer feedforward architecture can represent the solution to such hard problems in a network.
• It introduces the problem of hard learning, i.e., the difficulty in
adjusting the weights of the network to capture the implied
functional relationship between the given input-output pattern
pairs.
• The hard learning problem is solved by using the backpropagation learning algorithm.
Analysis of Pattern Association Networks
• The weights are determined using the criterion that the total mean squared error between the desired and actual outputs is minimized.
b. Determination of weights by computation
• For a linear associative network:
• The weight adjustment at each step is proportional to the product of the output error e(m) and the input activation x(m), i.e., Δw(m) = η e(m) x(m)ᵀ (the delta rule).
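This error-times-activation update can be sketched as an iterative LMS learning loop. A minimal NumPy example, with an assumed target linear map and learning rate chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task: learn a linear map y = T x from noise-free samples.
T = np.array([[1.0, -2.0],
              [0.5,  3.0]])            # "true" 2x2 map (assumed for illustration)
W = np.zeros((2, 2))                   # weights to be learned
eta = 0.05                             # assumed learning rate

for m in range(2000):
    x = rng.standard_normal(2)         # input activation x(m)
    d = T @ x                          # desired output
    e = d - W @ x                      # output error e(m)
    W += eta * np.outer(e, x)          # delta rule: correction = eta * e(m) x(m)^T

print(np.round(W, 2))                  # W should approach T
```

Unlike the closed-form pseudoinverse solution, this update needs only one pattern pair at a time, which is what makes it useful as a learning rule.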
• A multilayer feedforward neural network with at least two hidden layers, along with the input and output layers, can perform the pattern classification task.
• The same model can also perform the pattern mapping task.
• The number of hidden layers depends on the nature of the mapping problem.
• Except for the input layer, the units in all other layers must be nonlinear to produce generalization.
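A quick check of why the hidden units must be nonlinear: with purely linear units, any stack of layers collapses into a single linear map, so the extra layers add no representational power. A small NumPy illustration (the layer sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(3)

# Two "hidden" layers of purely linear units...
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((5, 4))
W3 = rng.standard_normal((2, 5))

deep_linear = W3 @ (W2 @ (W1 @ x))
collapsed = (W3 @ W2 @ W1) @ x        # ...are equivalent to one linear layer

print(np.allclose(deep_linear, collapsed))   # → True
```

Inserting a nonlinear activation between the layers breaks this equivalence, which is what gives the multilayer network its additional power.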