
Multilayer Perceptron

S.Rajalakshmi
AP / CSE
SSNCE
Neuron: A Hidden/Output Layer Unit

[Figure: a single unit. Inputs x0, x1, ..., xn (the input vector x) are multiplied by weights w0, w1, ..., wn (the weight vector w); the weighted sum, together with a bias, is passed through an activation function f to produce the output y.]
 An n-dimensional input vector x is mapped to an output variable y by means of a scalar product and a nonlinear function mapping.
 The inputs to the unit are the outputs of the previous layer. They are multiplied by their corresponding weights to form a weighted sum, which is added to the bias associated with the unit. A nonlinear activation function is then applied to the result.
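As a concrete illustration, here is a minimal sketch of this computation in Python with NumPy, assuming a sigmoid activation; all names and values here are illustrative, not from the slides:

import numpy as np

def unit_output(x, w, b):
    # Weighted sum of the inputs plus the unit's bias
    z = np.dot(w, x) + b
    # Nonlinear activation function (sigmoid assumed here)
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0, 2.0])   # input vector (outputs of the previous layer)
w = np.array([0.4, 0.1, -0.3])   # corresponding weights
b = 0.2                          # bias associated with the unit
y = unit_output(x, w, b)         # output passed on as input to the next layer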
Classification by Backpropagation
 Backpropagation: A neural network learning algorithm
 Started by psychologists and neurobiologists to develop and test computational analogues of neurons
 A neural network: A set of connected input/output units where each connection has a weight associated with it
 During the learning phase, the network learns by adjusting the weights so as to be able to predict the correct class label of the input tuples (the standard update rule is sketched below)
 Also referred to as connectionist learning due to the connections between units
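For reference, one common formulation of the error terms and weight/bias updates used by backpropagation (learning rate \ell, unit output O_j, target value T_j, bias \theta_j). The slides state the idea but not the formulas, so these are supplied from the standard derivation for sigmoid units:

\[ Err_j = O_j (1 - O_j)(T_j - O_j) \qquad \text{(output unit)} \]
\[ Err_j = O_j (1 - O_j) \sum_k Err_k \, w_{jk} \qquad \text{(hidden unit, summing over the next layer)} \]
\[ w_{ij} \leftarrow w_{ij} + \ell \, Err_j \, O_i, \qquad \theta_j \leftarrow \theta_j + \ell \, Err_j \]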

Multi Layer Perceptron Network (MLP) – A Feed Forward Network

[Figure: a feed-forward network. Inputs x1, ..., xn enter the input layer; the input layer feeds a hidden layer, which feeds the output layer.]

Back Propagation Network
 Update weights and bias values in the output layer and hidden layers (see the training sketch below)
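A minimal training sketch under stated assumptions: one hidden layer, sigmoid activations throughout, squared-error loss, and case-by-case (stochastic) updates. Layer sizes, the learning rate, and all names are illustrative, not taken from the slides:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target, W1, b1, W2, b2, lr=0.5):
    # Forward pass: each layer's outputs become the next layer's inputs
    h = sigmoid(W1 @ x + b1)                      # hidden layer
    y = sigmoid(W2 @ h + b2)                      # output layer (the prediction)

    # Backward pass: propagate the error from the output layer back
    d_out = (y - target) * y * (1.0 - y)          # output-layer error term
    d_hid = (W2.T @ d_out) * h * (1.0 - h)        # hidden-layer error term

    # Update weights and biases in the output layer and hidden layer
    W2 -= lr * np.outer(d_out, h); b2 -= lr * d_out
    W1 -= lr * np.outer(d_hid, x); b1 -= lr * d_hid
    return W1, b1, W2, b2

# Usage: learn XOR, a classic problem a single-layer perceptron cannot solve
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)     # 2 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)     # 4 hidden units -> 1 output
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
for _ in range(5000):
    for xs, ts in data:
        W1, b1, W2, b2 = train_step(np.asarray(xs, float), np.asarray(ts, float),
                                    W1, b1, W2, b2)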
Neural Network as a Classifier
 Weakness
 Long training time
 Requires a number of parameters typically best determined empirically, e.g., the network topology or “structure”
 Black box (backpropagation and interpretability)
 Poor interpretability: difficult to interpret the symbolic meaning behind the learned weights and the “hidden units” in the network
 Strength
 High tolerance to noisy data
 Ability to classify untrained patterns
 Well-suited for continuous-valued inputs and outputs
 Successful on an array of real-world data, e.g., hand-written letters
 Algorithms are inherently parallel
 Techniques have recently been developed for the extraction of rules from trained neural networks
How A Multi-Layer Neural Network Works
 The inputs to the network correspond to the attributes measured for each training tuple
 Inputs are fed simultaneously into the units making up the input layer
 They are then weighted and fed simultaneously to a hidden layer
 The number of hidden layers is arbitrary, although usually only one is used
 The weighted outputs of the last hidden layer are input to the units making up the output layer, which emits the network's prediction
 The network is feed-forward: none of the weights cycles back to an input unit or to an output unit of a previous layer (a forward-pass sketch follows this list)
 From a statistical point of view, networks perform nonlinear regression: given enough hidden units and enough training samples, they can closely approximate any function
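A short sketch of this layer-by-layer forward pass, again assuming sigmoid units; the layer shapes and names are illustrative:

import numpy as np

def feed_forward(x, layers):
    # `layers` is a list of (W, b) pairs, one per layer after the input layer.
    # Activations flow strictly forward; no weight cycles back to an earlier layer.
    a = np.asarray(x, dtype=float)               # attribute values of one tuple
    for W, b in layers:
        a = 1.0 / (1.0 + np.exp(-(W @ a + b)))   # weighted sum + bias, then sigmoid
    return a                                     # the network's prediction

# Usage: 3 attributes -> one hidden layer of 5 units -> 1 output unit
rng = np.random.default_rng(1)
layers = [(rng.normal(size=(5, 3)), np.zeros(5)),
          (rng.normal(size=(1, 5)), np.zeros(1))]
print(feed_forward([0.2, 0.7, 0.1], layers))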

