Limitations of Perceptrons
(i) The output values of a perceptron can take on only one of two values (0 or
1) due to the hard-limit transfer function.
(ii) Perceptrons can classify only linearly separable sets of vectors. If a
straight line (or, in higher dimensions, a hyperplane) can be drawn that
separates the input vectors into their correct categories, the input vectors
are linearly separable. If the vectors are not linearly separable, learning
will never reach a point where all vectors are classified properly.
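A minimal sketch may make point (ii) concrete. The following hypothetical example (the function names and learning rate are illustrative, not from the text) trains a single perceptron with the standard perceptron learning rule on the linearly separable AND function; because the data are separable, the rule converges to weights that classify every input correctly.

```python
def step(z):
    """Hard-limit transfer function: outputs only 0 or 1."""
    return 1 if z >= 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    """Perceptron learning rule: w <- w + lr * (target - y) * x."""
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = step(w[0] * x1 + w[1] * x2 + b)
            err = target - y
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# AND is linearly separable, so training converges.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
preds = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND]
print(preds)  # [0, 0, 0, 1] — matches the AND targets
```

Running the same loop on the XOR targets would never settle: the updates cycle forever because no single line separates XOR's positive and negative instances.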
The Boolean function XOR is not linearly separable: its positive and negative
instances cannot be separated by a line or hyperplane. Hence a single-layer
perceptron can never compute the XOR function. This was a serious drawback
that once caused stagnation in the field of neural networks, but it has since
been overcome by multi-layer perceptrons.
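To illustrate how a multi-layer perceptron overcomes this limitation, here is a small sketch (the weights are hand-picked for illustration, not learned): one hidden unit computes OR, another computes NAND, and the output unit ANDs them together, yielding XOR with the same hard-limit units a single perceptron uses.

```python
def step(z):
    """Hard-limit transfer function."""
    return 1 if z >= 0 else 0

def xor_net(x1, x2):
    """Two-layer network: XOR(x1, x2) = AND(OR(x1, x2), NAND(x1, x2))."""
    h1 = step(x1 + x2 - 0.5)      # hidden unit 1: OR
    h2 = step(-x1 - x2 + 1.5)     # hidden unit 2: NAND
    return step(h1 + h2 - 1.5)    # output unit: AND

outputs = [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(outputs)  # [0, 1, 1, 0] — the XOR truth table
```

The hidden layer remaps the inputs into a space where the classes become linearly separable, which the single output unit can then handle.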