Single-layer perceptron
In a feed-forward network, information always moves in one direction; it never goes backwards.
The earliest kind of neural network is a single-layer perceptron network, which consists of a single layer of output nodes; the inputs are fed directly to the outputs via a series of weights. In this way it can be considered the simplest kind of feed-forward network. The sum of the products of the weights and the inputs is calculated in each node, and if the value is above some threshold (typically 0) the neuron fires and takes the activated value (typically 1); otherwise it takes the deactivated value (typically -1). Neurons with this kind of activation function are also called artificial neurons or linear threshold units. In the literature the term perceptron often refers to networks consisting of just one of these units. A similar neuron was described by Warren McCulloch and Walter Pitts in the 1940s.
A perceptron can be created using any values for the activated and deactivated states as long as the threshold value lies between the two. Most perceptrons have outputs of 1 or -1 with a threshold of 0, and there is some evidence that such networks can be trained more quickly than networks created from nodes with other activation and deactivation values.
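To make the computation concrete, here is a minimal sketch in Python of such a linear threshold unit, assuming the 1/-1 output coding and zero threshold described above (the weights in the example are made up for illustration):

def threshold_unit(inputs, weights, threshold=0.0):
    """Linear threshold unit: fires (+1) if the weighted sum
    of the inputs exceeds the threshold, else outputs -1."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > threshold else -1

# Example: a unit with hypothetical weights 0.6 and 0.6
print(threshold_unit([1, 1], [0.6, 0.6]))   # 1  (sum 1.2 is above 0)
print(threshold_unit([1, -1], [0.6, 0.6]))  # -1 (sum 0.0, not above 0)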
Perceptrons can be trained by a simple learning algorithm that is usually called the delta rule. It calculates the error between the calculated output and the sample output data, and uses this to make an adjustment to the weights, thus implementing a form of gradient descent.
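A minimal sketch of this training loop, assuming the 1/-1 output coding above; the update w <- w + eta * (target - output) * x, with learning rate eta, is one common form of the delta rule, and the OR data set at the end is just an illustrative, linearly separable example:

def train_perceptron(samples, n_inputs, eta=0.1, epochs=20):
    """Delta-rule training: adjust each weight in proportion to
    the error between the target and the computed output."""
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            s = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if s > 0 else -1
            error = target - output          # zero when the sample is correct
            weights = [w + eta * error * x for w, x in zip(weights, inputs)]
            bias += eta * error
    return weights, bias

# Linearly separable data (logical OR with 1/-1 coding) trains successfully:
or_data = [([-1, -1], -1), ([-1, 1], 1), ([1, -1], 1), ([1, 1], 1)]
print(train_perceptron(or_data, 2))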
Single-unit perceptrons are only capable of learning linearly separable patterns; in 1969, in a famous monograph entitled Perceptrons, Marvin Minsky and Seymour Papert showed that it was impossible for a single-layer perceptron network to learn an XOR function. They conjectured (incorrectly) that a similar result would hold for a multi-layer perceptron network. Although a single threshold unit is quite limited in its computational power, it has been shown that networks of parallel threshold units can approximate any continuous function from a compact interval of the real numbers into the interval [-1, 1]. This result can be found in [Auer, Burgsteiner, Maass: The p-delta learning rule for parallel perceptrons].
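The XOR limitation can be checked empirically: no assignment of two weights and a threshold classifies all four XOR points correctly, because they are not linearly separable. The brute-force grid search below is only a suggestive sanity check, not a proof, and its 0/1 data coding is chosen for convenience:

xor_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def classifies_all(w1, w2, theta):
    """True if the threshold unit with these parameters gets
    every XOR sample right."""
    return all((w1 * x1 + w2 * x2 > theta) == bool(t)
               for (x1, x2), t in xor_data)

grid = [i / 4 for i in range(-8, 9)]   # candidate values in [-2, 2]
print(any(classifies_all(w1, w2, t)
          for w1 in grid for w2 in grid for t in grid))  # False: no solution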
A single-layer neural network can compute a continuous output instead of a step function. A common choice is the so-called logistic function:

y = 1 / (1 + e^(-x))

(In the general form, f(X) takes the place of x, where f(X) is an analytic function of the set of inputs X.) With this choice, the single-layer network is identical to the logistic regression model, widely used in statistical modeling. The logistic function is also known as the sigmoid function. It has a continuous derivative, which allows it to be used in backpropagation. This function is also preferred because its derivative is easily calculated:

y' = y(1 - y) (times df/dX in the general form, by the chain rule)
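A short sketch of the logistic function and the derivative identity above:

import math

def logistic(x):
    """Sigmoid: y = 1 / (1 + e^(-x)), a smooth step between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def logistic_derivative(x):
    """dy/dx = y(1 - y): computable from the output y alone."""
    y = logistic(x)
    return y * (1.0 - y)

print(logistic(0.0))             # 0.5
print(logistic_derivative(0.0))  # 0.25 (maximum slope, at x = 0)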
Multi-layer perceptron

This class of networks consists of multiple layers of computational units, usually interconnected in a feed-forward way. Each neuron in one layer has directed connections to the neurons of the subsequent layer. In many applications the units of these networks apply a sigmoid function as an activation function.
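A minimal sketch of one forward pass through such a network; the 2-2-1 layer sizes, weights, and biases are hypothetical values chosen only for illustration:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: a weighted sum per neuron,
    passed through the sigmoid activation."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical 2-2-1 network (two inputs, two hidden neurons, one output)
hidden_w = [[0.5, -0.4], [0.3, 0.8]]
hidden_b = [0.1, -0.2]
output_w = [[1.2, -0.7]]
output_b = [0.05]

h = layer([1.0, 0.0], hidden_w, hidden_b)  # hidden layer feeds the output layer
print(layer(h, output_w, output_b))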
(Figure: a two-layer network computing XOR.) The numbers within the neurons represent each neuron's explicit threshold (which can be factored out so that all neurons have the same threshold, usually 1). The numbers that annotate the arrows represent the weights of the inputs. This net assumes that if the threshold is not reached, zero (not -1) is output. Note that the bottom layer of inputs is not always considered a real neural network layer.
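To make the caption concrete, here is a sketch of a two-layer threshold network that computes XOR under the convention just described (output 0, not -1, below threshold). The particular thresholds (1, 2, 1) and weights are one possible assignment, not necessarily the figure's exact values:

def unit(inputs, weights, threshold):
    """Threshold unit that outputs 0 (not -1) when the threshold
    is not reached."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

def xor_net(a, b):
    h_or = unit([a, b], [1, 1], 1)        # fires if at least one input is 1
    h_and = unit([a, b], [1, 1], 2)       # fires only if both inputs are 1
    return unit([h_or, h_and], [1, -1], 1)  # OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))  # outputs 0, 1, 1, 0: XOR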