
The Perceptron

• What is a Perceptron?
• Formulas of Perceptron
• Activation Function
• Multi-layer Perceptron
The Perceptron
• Perceptron was introduced by Frank Rosenblatt in 1957

• The basic building block (or) unit of the neural network.

• The perceptron is a neural network unit that takes a number of inputs, carries out
some processing on those inputs, and produces an output.

• It is also called an Artificial Neuron. It consists of a single neuron with a number of
adjustable weights.

• Initially, the perceptron was designed to take a number of binary inputs and
produce one binary output.
The Perceptron

[Figure: a perceptron with four inputs X1-X4 in the input layer, weights W1-W4, and a single output Y in the output layer, drawn against its biological analogue: dendrites (inputs), synapses (weights), the neuron body (processing), and the axon (output).]

It is based on a slightly different artificial neuron called a linear threshold unit (LTU).
[Figure: a linear threshold unit with inputs X1-X4 (input layer), weights W1-W4, a bias Bk (via a fixed input with W0 = 1), a summing junction ∑, an activation function, and output Y (output layer).]
X - Input, Y - Output

Simple sum (no weights): Y = X1 + X2 + X3 + X4

Weighted sum with bias: Y = W1X1 + W2X2 + W3X3 + W4X4 + Bk

In mathematical terms, a neuron k can be described as: Uk = ∑j WkjXj

The output: Yk = φ(Uk + Bk), where φ is the activation function.
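The formulas above can be sketched as a single forward pass; the weights, bias, and inputs below are illustrative values, not from the slides, and a simple threshold is used as the activation function:

```python
# Minimal sketch of a single perceptron (LTU) forward pass.
def perceptron(x, w, b):
    u = sum(wi * xi for wi, xi in zip(w, x))  # summing junction: Uk = sum(Wj * Xj)
    return 1 if u + b >= 0 else 0             # threshold activation: Y = phi(Uk + Bk)

x = [1, 0, 1, 1]            # inputs X1..X4
w = [0.5, -0.2, 0.3, 0.1]   # weights W1..W4
b = -0.4                    # bias Bk
print(perceptron(x, w, b))  # 0.5 + 0.3 + 0.1 - 0.4 = 0.5 >= 0, so output is 1
```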
Activation functions in Neural Networks

• The activation function decides whether a neuron should be activated or not
by calculating the weighted sum and adding the bias to it. The purpose of the
activation function is to introduce non-linearity into the output of a neuron.

• A neural network without an activation function is essentially just a linear
regression model. The activation function applies a non-linear transformation
to the input, making the network capable of learning and performing more
complex tasks.
The Perceptron Cont’d
Types of Activation Function
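The specific activation functions shown on these slides are not reproduced in the text; as a sketch, here are the standard definitions of three common ones (sigmoid, tanh, ReLU):

```python
import math

# Standard definitions of three common activation functions.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))  # squashes any input into (0, 1)

def tanh(z):
    return math.tanh(z)                # squashes any input into (-1, 1)

def relu(z):
    return max(0.0, z)                 # passes positives through, zeroes negatives

print(sigmoid(0.0))  # 0.5
print(tanh(0.0))     # 0.0
print(relu(-2.0))    # 0.0
```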
Choosing The Right Activation Function

• The basic rule of thumb: if you really don’t know which activation
function to use, simply use ReLU, as it is a general-purpose activation
function and is used in most cases these days.

• If your output is a binary classification, then the sigmoid function is a
very natural choice for the output layer.
Types of Perceptron
• Single layer - Single layer perceptrons can learn only linearly separable
patterns

• Multilayer - Multilayer perceptrons, or feedforward neural networks with
two or more layers, have greater processing power.

• The Perceptron algorithm learns the weights for the input signals in order
to draw a linear decision boundary.

• This enables you to distinguish between the two linearly separable classes
+1 and -1.
Perceptron Convergence (Learning) Algorithm
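The algorithm’s step-by-step details from these slides are not reproduced in the text; as a sketch, this is the standard perceptron learning rule: for each misclassified sample, w ← w + η(t − y)x and b ← b + η(t − y), repeated until no mistakes remain. The learning rate and epoch limit below are illustrative choices.

```python
# Sketch of the standard perceptron learning (convergence) algorithm.
def train_perceptron(samples, eta=0.1, epochs=100):
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        errors = 0
        for x, t in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            if y != t:  # update weights only on mistakes
                w = [wi + eta * (t - y) * xi for wi, xi in zip(w, x)]
                b += eta * (t - y)
                errors += 1
        if errors == 0:  # converged: every sample classified correctly
            break
    return w, b

# Example: learn the (linearly separable) OR function.
w, b = train_perceptron([([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)])
```

For linearly separable data, the perceptron convergence theorem guarantees this loop terminates with a separating boundary.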
Decision Boundary

The decision boundary is always orthogonal to the weight vector.


Single-layer perceptron can only classify linearly separable vectors.
Decision Boundary
• A perceptron (threshold unit) can learn anything that it can represent (i.e.
anything separable with a hyperplane)
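The orthogonality claim above can be checked numerically: any direction lying along the boundary w·x + b = 0 has zero dot product with w. The weight vector and boundary points below are arbitrary illustrative values.

```python
# Illustration: the decision boundary w.x + b = 0 is orthogonal to w.
w, b = (3.0, 4.0), -5.0
p1 = (1.0, 0.5)    # on the boundary: 3*1 + 4*0.5 - 5 = 0
p2 = (3.0, -1.0)   # on the boundary: 3*3 + 4*(-1) - 5 = 0
d = (p2[0] - p1[0], p2[1] - p1[1])   # direction vector along the boundary
dot = w[0] * d[0] + w[1] * d[1]      # w . d
print(dot)  # 0.0, so w is orthogonal to the boundary
```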
OR GATE Perceptron Training Rule

Update the weights.
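The slides’ worked OR-gate table is not reproduced in the text; as a sketch, here is one weight update for a misclassified OR sample, assuming a learning rate η = 0.1 and an illustrative starting point (these specific numbers are not from the slides):

```python
# One perceptron weight update for the OR gate.
eta = 0.1
w, b = [0.0, 0.0], -0.5       # illustrative starting weights and bias
x, t = (0, 1), 1              # target: OR(0, 1) = 1
u = w[0] * x[0] + w[1] * x[1] + b          # u = -0.5
y = 1 if u >= 0 else 0                     # y = 0, so the sample is misclassified
w = [wi + eta * (t - y) * xi for wi, xi in zip(w, x)]  # w <- w + eta*(t - y)*x
b = b + eta * (t - y)                                  # b <- b + eta*(t - y)
print(w, b)   # weights move toward classifying (0, 1) as 1
```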


The Exclusive OR problem
• A Perceptron cannot represent Exclusive OR since it is not linearly
separable.
The Exclusive OR problem
• Minsky & Papert (1969) offered a solution to the XOR problem by combining
perceptron unit responses using a second layer of units: piecewise linear
classification using an MLP with threshold (perceptron) units.
The Exclusive OR problem
• In particular, an MLP can solve the XOR problem
• For each combination of inputs: with inputs (0, 0) or (1, 1) the network outputs
0, and with inputs (0, 1) or (1, 0) it outputs 1.
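This two-layer solution can be sketched with the classic textbook weight choice (not taken from the slides): one hidden threshold unit computes OR, another computes NAND, and the output unit ANDs them together.

```python
# Sketch of the classic two-layer threshold-unit network that computes XOR.
def step(u):
    return 1 if u >= 0 else 0

def xor_mlp(x1, x2):
    h1 = step(x1 + x2 - 0.5)       # hidden unit 1: OR(x1, x2)
    h2 = step(-x1 - x2 + 1.5)      # hidden unit 2: NAND(x1, x2)
    return step(h1 + h2 - 1.5)     # output unit: AND(h1, h2) = XOR(x1, x2)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_mlp(a, b))  # prints the XOR truth table
```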
Multi-Layer Perceptron and Its Properties
• An MLP is composed of one (pass-through) input layer, one or more layers of LTUs,
called hidden layers, and one final layer of LTUs called the output layer (often more than
3 layers).

• Every layer except the output layer includes a bias neuron and is fully connected to the
next layer.

• No connections within a layer

• No direct connections between input and output layers

• When an ANN has two or more hidden layers, it is called a deep neural network (DNN)
Multi-Layer Perceptron and Its Properties

• The MLP is a feedforward neural network, which means that the data is transmitted
from the input layer to the output layer in the forward direction.
• The connections between the layers are assigned weights. The weight of a connection
specifies its importance.
• This concept is the backbone of an MLP’s learning process.
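The forward transmission described above can be sketched as one pass through an MLP with a single hidden layer; the layer sizes, weights, and sigmoid activations below are illustrative assumptions, not values from the slides:

```python
import math

# Sketch of a feedforward pass: input layer -> hidden layer -> output layer.
def forward(x, W1, b1, W2, b2):
    # hidden layer: h = sigmoid(W1 x + b1)
    h = [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
         for row, b in zip(W1, b1)]
    # output layer: y = sigmoid(W2 h + b2)
    return [1.0 / (1.0 + math.exp(-(sum(w * hi for w, hi in zip(row, h)) + b)))
            for row, b in zip(W2, b2)]

x = [0.5, -1.0]                               # 2 inputs
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1]  # 2 hidden units
W2, b2 = [[1.0, 1.0]], [-0.5]                   # 1 output unit
y = forward(x, W1, b1, W2, b2)
print(y)  # a single output value in (0, 1)
```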
Summary
