Perceptron
• What is a Perceptron?
• Formulas of the Perceptron
• Activation Function
• Multi-layer Perceptron
The Perceptron
• The perceptron was introduced by Frank Rosenblatt in 1957.
• The perceptron is a neural network that takes a number of inputs, carries out some processing on those inputs, and produces an output.
• Initially, the perceptron was designed to take a number of binary inputs and produce one binary output.
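A minimal sketch of this inputs-to-output behavior, assuming a step activation; the weight and bias values below are illustrative, not learned:

# Minimal perceptron forward pass: weighted sum of inputs, then a step activation.
# The weights and bias here are made-up values for illustration.
def perceptron(inputs, weights, bias):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0  # binary output

# Example: four binary inputs, as in the original design.
print(perceptron([1, 0, 1, 1], [0.5, -0.2, 0.3, 0.1], bias=-0.4))  # -> 1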
The Perceptron
[Figure: perceptron diagram — inputs X1…X4 with weights W1…W4 feed a single neuron that produces output Y; input layer and output layer labeled, with the biological analogy (dendrites, neuron) annotated.]
It is based on a slightly different artificial neuron called a linear threshold unit (LTU).
[Figure: linear threshold unit — inputs X1…X4 (nodes) pass through weighted connections W1…W4 (synapses) into a summing junction ∑ with bias Bk (fixed input W0 = 1); an activation function then produces the output Y (axon); input layer and output layer labeled.]
X – input, Y – output. The summing junction computes the weighted sum of the inputs plus the bias, and the activation function φ turns it into the output:
Y = φ(W1X1 + W2X2 + W3X3 + W4X4 + Bk)
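As a quick worked example with illustrative (made-up) values:
With X = (1, 0, 1, 1), W = (0.5, −0.2, 0.3, 0.1), and Bk = 0.1:
W1X1 + W2X2 + W3X3 + W4X4 + Bk = 0.5 + 0 + 0.3 + 0.1 + 0.1 = 1.0
With a step activation φ (output 1 if the sum is positive, else 0), Y = φ(1.0) = 1.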
• A basic rule of thumb: if you don’t know which activation function to use, simply use ReLU, as it is a general-purpose activation function and is used in most cases these days.
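A minimal sketch of the two activations mentioned here — the perceptron’s step function and ReLU (the function names are my own):

# Step activation used by the classic perceptron: outputs 1 or 0.
def step(z):
    return 1 if z > 0 else 0

# ReLU: passes positive values through, clips negatives to zero.
def relu(z):
    return max(0.0, z)

print(step(0.5), relu(-2.0), relu(3.0))  # -> 1 0.0 3.0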
• The perceptron algorithm learns the weights for the input signals in order to draw a linear decision boundary.
• This enables it to distinguish between two linearly separable classes, labeled +1 and −1, as in the sketch after the next heading.
Perceptron Convergence (Learning) Algorithm
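A minimal sketch of the classic perceptron learning rule for the ±1 classes above; the learning rate, epoch cap, and toy data are illustrative assumptions:

# Perceptron learning rule: for each misclassified point (y * prediction <= 0),
# nudge the weights toward the correct side: w <- w + lr * y * x, b <- b + lr * y.
# Repeats until no point is misclassified; convergence is guaranteed only
# when the classes are linearly separable.
def train_perceptron(X, y, lr=1.0, max_epochs=100):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            if yi * z <= 0:  # misclassified (or on the boundary)
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
                errors += 1
        if errors == 0:
            break
    return w, b

# Toy linearly separable data: class +1 above the line x1 + x2 = 3, -1 below.
X = [[1, 1], [2, 0], [3, 3], [4, 2]]
y = [-1, -1, 1, 1]
w, b = train_perceptron(X, y)
print(w, b)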
Decision Boundary
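For a trained perceptron, the decision boundary is the set of inputs where the weighted sum is exactly zero:
W1X1 + W2X2 + … + WnXn + B = 0
This is a line in two dimensions and a hyperplane in general; inputs with a positive sum fall on the +1 side, and inputs with a negative sum on the −1 side.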
• Every layer except the output layer includes a bias neuron and is fully connected to the next layer.
• When an ANN has two or more hidden layers, it is called a deep neural network (DNN)
Multi-Layer Perceptron and Its Properties
• The MLP is a feedforward neural network, which means that the data is transmitted
from the input layer to the output layer in the forward direction.
• The connections between the layers are assigned weights. The weight of a connection
specifies its importance.
• This concept is the backbone of an MLP’s learning process.
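As a concrete illustration, here is a minimal sketch of one forward pass through a tiny MLP; the layer sizes, weights, and the ReLU choice are illustrative assumptions, not values from these slides:

# Forward pass through a 2-layer MLP: data flows input -> hidden -> output.
# Each connection has a weight; each non-output layer contributes a bias.
def dense(inputs, weights, biases, activation):
    # weights[j][i] connects input i to neuron j of this layer
    return [activation(b + sum(w * x for w, x in zip(row, inputs)))
            for row, b in zip(weights, biases)]

relu = lambda z: max(0.0, z)
identity = lambda z: z

# Illustrative weights for a 3-input, 2-hidden-neuron, 1-output network.
x = [1.0, 0.5, -1.0]
hidden = dense(x, [[0.2, -0.1, 0.4], [0.7, 0.3, -0.2]], [0.1, -0.1], relu)
output = dense(hidden, [[1.0, -1.0]], [0.0], identity)
print(output)  # -> about -0.95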
Summary