
Perceptron: Neuron Model (Special Form of Single Layer Feed Forward)

The document describes the perceptron, a simple neural network model used for binary classification. The perceptron classifies inputs by applying a step function to the weighted sum of the inputs. It can only learn linearly separable problems. Limitations include inability to model non-linear functions like XOR. Multi-layer feedforward networks can represent non-linear functions using hidden layers that allow non-linear separation of classes. Backpropagation is described as a method to optimize weights to minimize error through trial and error.


Perceptron: Neuron Model

(Special form of single layer feed forward)

 The Perceptron, first proposed by Rosenblatt (1958), is a simple neuron
that is used to classify its input into one of two categories.
 A Perceptron uses a step function that returns +1 if the weighted sum of its
inputs is ≥ 0 and −1 otherwise.

φ(v) =  +1 if v ≥ 0
        −1 if v < 0

[Diagram: inputs x1 … xn enter with weights w1 … wn; together with the bias b
they are summed into v, and the step activation φ(v) produces the output y.]
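The diagram above can be sketched as a forward pass. This is a minimal illustration, assuming the ≥ 0 threshold from the step function defined earlier; the example inputs, weights, and bias are made-up values.

```python
# Minimal sketch of a perceptron's forward pass with a step activation,
# matching the diagram above; the weights and bias below are example values.
def perceptron(x, w, b):
    # weighted sum of inputs plus bias: v = w1*x1 + ... + wn*xn + b
    v = sum(wi * xi for wi, xi in zip(w, x)) + b
    # step function phi(v): +1 if v >= 0, -1 otherwise
    return 1 if v >= 0 else -1

y = perceptron([1.0, -2.0], w=[0.5, 0.5], b=0.0)  # v = -0.5, so y = -1
```

The returned +1 or −1 is then interpreted as the class label, as described in the classification section below.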
Perceptron for Classification
 The Perceptron is used for binary classification.
 First, train the Perceptron for the classification task:
 find suitable weights such that the training examples are
correctly classified.
 Geometrically, this means finding a hyper-plane that separates the examples of the
two classes.
 The Perceptron can only model linearly separable classes.
 When the two classes are not linearly separable, it may be desirable to
obtain a linear separator that minimizes the mean squared error.
 Given training examples of classes C1 and C2, train the Perceptron in such a way
that:
 If the output of the Perceptron is +1 then the input is assigned to class C1
 If the output is -1 then the input is assigned to C2
Boolean function OR – Linearly separable
[Plot: the OR function in the (X2, X1) plane; the point (0,0) is false and the
other three points are true, so a single straight line separates the two classes.]
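Finding suitable weights for a linearly separable function like OR can be sketched with the classic perceptron learning rule, which the slides allude to ("find suitable weights") but do not spell out. The learning rate, epoch count, and ±1 target encoding below are illustrative choices.

```python
# Sketch of the classic perceptron learning rule (not detailed in the
# slides); learning rate and initial weights are illustrative choices.
def step(v):
    return 1 if v >= 0 else -1

def train_perceptron(samples, epochs=20, lr=0.1):
    # samples: list of ((x1, x2), target) with targets in {+1, -1}
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y = step(w[0] * x1 + w[1] * x2 + b)
            if y != t:  # update weights only on a misclassification
                w[0] += lr * t * x1
                w[1] += lr * t * x2
                b += lr * t
    return w, b

# OR truth table, encoded with targets +1 (true) / -1 (false)
or_data = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
```

Because OR is linearly separable, this loop converges to weights that classify all four rows correctly; run on XOR data it would never converge, which is the limitation discussed next.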
Condition Used in Perceptron

The bias shifts the decision boundary away from the origin and does not
depend on any input value.

The most famous example of the perception's inability to solve problems


with linearly no separable vectors is the Boolean exclusive-or problem.
Perceptron: Limitations
 The Perceptron can only model linearly separable
functions,
 i.e. those functions whose values, drawn in a 2-dimensional graph,
can be separated into two parts by a single straight line.
 The Boolean functions given below are linearly separable:
 AND
 OR
 It cannot model the XOR function, as it is not linearly
separable.
 When the two classes are not linearly separable, it may be
desirable to obtain a linear separator that minimizes the
mean squared error.
XOR – Non linearly separable function
 A typical example of a non-linearly separable function is
XOR, which computes the logical exclusive or.
 This function takes two input arguments with values in
{0,1} and returns one output in {0,1},
 where 0 and 1 encode the truth values false and true.
 The output is true if and only if the two inputs have
different truth values.
 XOR is a non-linearly separable function that cannot be
modeled by a Perceptron.
 For such functions we have to use a multi-layer feed-forward
network.
Input          Output
X1   X2        X1 XOR X2
0    0         0
0    1         1
1    0         1
1    1         0
 These two classes (true and false) cannot be separated using a
single straight line. Hence XOR is not linearly separable.

[Plot: the XOR function in the (X2, X1) plane; true at (0,1) and (1,0),
false at (0,0) and (1,1). No single straight line can separate the true
points from the false points.]
FFNN for XOR
 The ANN for XOR has two hidden nodes that realize this non-linear
separation, using the sign (step) activation function.
 Arrows from the input nodes to the two hidden nodes indicate the
weight vectors (1,−1) and (−1,1).
 The output node combines the outputs of the two hidden nodes.
Inputs     Output of Hidden Nodes    Output Node    X1 XOR X2
X1  X2     H1          H2
0   0      0           0             –0.5 → 0       0
0   1      –1 → 0      1             0.5 → 1        1
1   0      1           –1 → 0        0.5 → 1        1
1   1      0           0             –0.5 → 0       0

Since we are representing the two states by 0 (false) and 1 (true),
we map negative outputs (–1, –0.5) of the hidden and output
layers to 0 and the positive output (0.5) to 1.
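The two-hidden-node network above can be sketched in code. The hidden weight vectors (1,−1) and (−1,1) and the −0.5 output bias come from the table; the unit weights on the output node are an assumption consistent with the table's column values.

```python
def step(v):
    # threshold unit with the 0/1 mapping used in the table:
    # fire (1) only for strictly positive input, else 0
    return 1 if v > 0 else 0

def xor_net(x1, x2):
    # hidden nodes with weight vectors (1,-1) and (-1,1) from the slide
    h1 = step(1 * x1 - 1 * x2)
    h2 = step(-1 * x1 + 1 * x2)
    # output node combines the hidden outputs; unit weights are assumed,
    # the -0.5 bias matches the table's output-node column
    return step(h1 + h2 - 0.5)
```

Evaluating `xor_net` on all four input pairs reproduces the XOR column of the table.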
The Backpropagation Algorithm

 The goal of backpropagation is to optimize
the weights so that the neural network can
learn how to correctly map arbitrary inputs to
outputs.
Training Basics : The Backpropagation Algorithm

 The most basic method of training a neural
network is trial and error.
 If the network isn't behaving the way it should,
change the weight of a random link by a
random amount. If the accuracy of the network
declines, undo the change and make a different
one.
 It takes time, but the trial-and-error method
does produce results.
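The trial-and-error procedure described above can be sketched as a simple random search. The `accuracy` callback, step size, and weight list here are hypothetical stand-ins, not anything defined in the slides.

```python
import random

# Sketch of the trial-and-error training idea described above:
# perturb a random link by a random amount, and undo the change
# if accuracy declines. `accuracy` is a hypothetical callback that
# scores the current weights (higher is better).
def trial_and_error(weights, accuracy, steps=1000):
    best = accuracy(weights)
    for _ in range(steps):
        i = random.randrange(len(weights))    # pick a random link
        delta = random.uniform(-0.1, 0.1)     # change it by a random amount
        weights[i] += delta
        new = accuracy(weights)
        if new < best:                        # accuracy declined: undo
            weights[i] -= delta
        else:                                 # keep the change
            best = new
    return weights
```

Because declines are always undone, accuracy never decreases; backpropagation, discussed next, replaces this blind search with gradient information.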
Sample Neural Network to Calculate Total Error Using a Forward Pass
Total net input for h1: net_h1 = w1·i1 + w2·i2 + b1
Use the logistic function to get the output of h1: out_h1 = 1 / (1 + e^(−net_h1))
Repeat this process for the output layer neurons, using the
output from the hidden layer neurons as inputs.
Calculating the Total Error

The total error for the neural network is the sum of the squared errors
of the output neurons: E_total = Σ ½(target − output)².
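The forward pass and total-error calculation can be sketched for a small 2-2-2 network. All the weights, biases, inputs, and targets below are illustrative values (the slides' actual sample network is not reproduced here), and a single shared bias per layer is an assumed simplification.

```python
import math

# Sketch of a forward pass and total-error calculation for a small
# 2-2-2 network; every numeric value below is illustrative, not taken
# from the slides' sample network.
def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, layer_weights, layer_biases):
    # each neuron: out = logistic(sum_i w_i * in_i + b),
    # with one shared bias per layer (assumed simplification)
    outs = inputs
    for weights, b in zip(layer_weights, layer_biases):
        outs = [logistic(sum(w_i * o for w_i, o in zip(row, outs)) + b)
                for row in weights]
    return outs

# hypothetical parameters
hidden_w = [[0.15, 0.20], [0.25, 0.30]]   # rows: weights into h1, h2
output_w = [[0.40, 0.45], [0.50, 0.55]]   # rows: weights into o1, o2
inputs, targets = [0.05, 0.10], [0.01, 0.99]

outputs = forward(inputs, [hidden_w, output_w], [0.35, 0.60])

# total error: E_total = sum of 1/2 * (target - output)^2
total_error = sum(0.5 * (t - o) ** 2 for t, o in zip(targets, outputs))
```

Backpropagation then differentiates this total error with respect to each weight to decide how much to adjust it, replacing the trial-and-error search described earlier.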
