Multilayer Perceptron

A multilayer perceptron uses multiple layers of nodes and weighted connections to classify input vectors. It takes input values and processes them through nonlinear activation functions between nodes arranged in layers. It learns through backpropagation by adjusting weights to minimize error between predicted and target outputs.



A multilayer perceptron is built from the following components:
• Input
• Weight
• Bias
• Weighted summation
• Step/activation function
• Output
WORKING:
• Feed the features on which the model is to be trained as input to the first layer.
• Each input is multiplied by its corresponding weight, and the products are summed up.
• The bias value is added to this sum to shift the output function.
• The resulting value is presented to the activation function (the type of activation function depends on the need).
• The value received after the last step is the output value.
• Training an MLP involves backpropagation, where the network adjusts its weights iteratively to minimize a loss function based on feedback received during training epochs.
• This iterative learning process helps MLPs approximate functions and improve their performance over time.
• Here the activation function is a binary step function, which outputs 1 if f(x) is at or above the threshold value Θ and 0 if f(x) is below it. The output of a neuron is then

y = 1 if f(x) ≥ Θ
y = 0 if f(x) < Θ
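A minimal sketch of this single-neuron forward pass, assuming NumPy (the weights, inputs, and threshold below are illustrative, not taken from the text):

import numpy as np

def step(f_x, theta=0.0):
    # Binary step activation: 1 if f(x) >= theta, else 0
    return 1 if f_x >= theta else 0

def neuron_forward(x, w, b, theta=0.0):
    # Weighted summation of the inputs, plus the bias shift
    f_x = np.dot(w, x) + b
    return step(f_x, theta)

x = np.array([1.0, 0.5])   # input features
w = np.array([0.4, -0.2])  # weights
b = 0.1                    # bias
print(neuron_forward(x, w, b))  # -> 1, since 0.4 - 0.1 + 0.1 = 0.4 >= 0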


Algorithm: Learning in an MLP

// Input: Input vector (X)
// Output: Y
// Learning rate: α
Step 1: Forward Propagation
1. Calculate Input and Output in the Input Layer.

Input at Node j in the Input Layer is

I_j = X_j

Where,
X_j is the input received at Node j.

Output at Node j in the Input Layer is

O_j = I_j

(the input layer simply passes each input through unchanged).
2. Calculate Net Input and Output in the Hidden Layer and Output Layer.

Net Input at Node j in the Hidden Layer is

I_j = Σ_i W_ij X_i + θ_j X_0

Where,
X_i is the input from Node i.
W_ij is the weight on the link from Node i to Node j.
X_0 is the input to bias Node 0, which is always assumed to be 1.
θ_j is the weight on the link from bias Node 0 to Node j.
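The net input at each hidden node can be sketched as follows, again assuming NumPy (the array shapes and values are illustrative):

import numpy as np

x = np.array([0.6, 0.9])        # outputs of the input layer (X_i)
W = np.array([[0.2, 0.8],       # W[i, j]: weight from input Node i
              [0.5, -0.3]])     # to hidden Node j
theta = np.array([0.1, -0.4])   # bias weights from bias Node 0 (X_0 = 1)

# Net input at each hidden node j: I_j = sum_i W_ij * X_i + theta_j
I_hidden = W.T @ x + theta
print(I_hidden)  # -> [ 0.67 -0.19]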


Net Input at Node j in the Output Layer is

I_j = Σ_i W_ij O_i + θ_j X_0

Where,
O_i is the output from Node i in the Hidden Layer.
W_ij is the weight on the link from Node i to Node j.
X_0 is the input to bias Node 0, which is always assumed to be 1.
θ_j is the weight on the link from bias Node 0 to Node j.

Output at Node j (in the Hidden Layer or Output Layer) is

O_j = 1 / (1 + e^(−I_j))

Where,
I_j is the net input received at Node j.
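Continuing the sketch above, and assuming the logistic (sigmoid) activation reconstructed here:

def sigmoid(I):
    # Logistic activation: O = 1 / (1 + e^(-I))
    return 1.0 / (1.0 + np.exp(-I))

O_hidden = sigmoid(I_hidden)          # outputs of the hidden layer
W_out = np.array([0.7, -0.6])         # weights from hidden nodes to the output node
theta_out = 0.05                      # bias weight for the output node
I_out = W_out @ O_hidden + theta_out  # net input at the output node
O_out = sigmoid(I_out)                # final output, roughly 0.56 here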
3. Estimate the error at the node in the Output Layer.

Error = O_desired − O_estimated

Where,
O_desired is the desired output value of the Node in the Output Layer.
O_estimated is the estimated output value of the Node in the Output Layer.


Step 2: Backward Propagation

1. Calculate the error at each node:

For each Unit k in the Output Layer:

Error_k = O_k (1 − O_k) (O_desired − O_k)

Where,
O_k is the output value at Node k in the Output Layer.
O_desired is the desired output value of the Node in the Output Layer.

For each Unit j in the Hidden Layer:

Error_j = O_j (1 − O_j) Σ_k Error_k W_jk

Where,
O_j is the output value at Node j in the Hidden Layer.
Error_k is the error at Node k in the Output Layer.
W_jk is the weight on the link from Node j to Node k.
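A sketch of these error terms, continuing the example above with an illustrative desired output of 1.0 (not from the text):

O_desired = 1.0
# Output-layer error: Error_k = O_k * (1 - O_k) * (O_desired - O_k)
error_out = O_out * (1 - O_out) * (O_desired - O_out)
# Hidden-layer error: Error_j = O_j * (1 - O_j) * sum_k Error_k * W_jk
error_hidden = O_hidden * (1 - O_hidden) * (error_out * W_out)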

2. Update all weights and biases:

Update weights:

∆w_ij = α × Error_j × O_i
w_ij = w_ij + ∆w_ij

Where,
O_i is the output value at Node i.

Error_j is the error at Node j.
α is the learning rate.
w_ij is the weight on the link from Node i to Node j.
∆w_ij is the difference in weight that has to be added to w_ij.
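Continuing the sketch, the weight updates with an illustrative learning rate α = 0.5:

alpha = 0.5
# dw_ij = alpha * Error_j * O_i, applied to the hidden-to-output weights
W_out = W_out + alpha * error_out * O_hidden
# The input-to-hidden weights follow the same rule, with O_i = x_i
W = W + alpha * np.outer(x, error_hidden)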

Update biases:

∆θ_j = α × Error_j
θ_j = θ_j + ∆θ_j

Where,
Error_j is the error at Node j.
α is the learning rate.
θ_j is the bias weight on the link from bias Node 0 to Node j.
∆θ_j is the difference in bias that has to be added to θ_j.
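The bias weights follow the same rule without the O_i factor, since the bias input X_0 is fixed at 1:

# dtheta_j = alpha * Error_j
theta = theta + alpha * error_hidden
theta_out = theta_out + alpha * error_out

One forward-backward pass over a single example is now complete; repeating it over many examples and epochs is the iterative learning process described above.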
