Analysis and Study of Perceptron To Solve Xor Problem
0-7803-7624-2/02/$17.00 ©2002 IEEE
connection weight value of the neuron i at the input.

(5) Return to step (2) until the weight values and the threshold value remain unchanged for all the learning samples, that is, until the convergence value is obtained.

2.2 Multi-layer perceptron

y = f( Σ_i w_i x_i − θ )   ( i = 1, 2, ..., n )   (1)

y = f( Σ_i v_i z_i − θ )   (4)

w_i(t+1) = w_i(t) + η( d − y ) y ( 1 − y ) x_i

Among these, η is the learning step width, which takes a value between 0 and 1.
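The update rule above can be sketched in Python. Everything concrete below (the sigmoid activation, the logical-AND training set, the step width η = 0.5, and the initial values) is an illustrative assumption, not a setting taken from the paper:

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# Training samples for logical AND -- a linearly separable stand-in chosen
# for illustration; the paper does not fix a sample set in this section.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.1, -0.1]   # small non-zero initial weights
theta = 0.05      # threshold value
eta = 0.5         # learning step width, a value between 0 and 1

for epoch in range(10000):
    for x, d in samples:
        y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) - theta)
        # Delta rule from the text: w(t+1) = w(t) + eta*(d - y)*y*(1 - y)*x
        delta = eta * (d - y) * y * (1 - y)
        w = [wi + delta * xi for wi, xi in zip(w, x)]
        theta -= delta  # theta enters the net input with a minus sign

# Thresholding the trained output at 0.5 recovers the AND truth table.
outputs = [
    round(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) - theta))
    for x, _ in samples
]
print(outputs)
```

Because AND is linearly separable, this gradient procedure settles on a dividing line; the same loop run on XOR samples never does, which is the subject of section 3.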
3. Perceptron classification and XOR problem
In order to be general, we consider the single-neuron perceptron of section 2.1. The adjustment of the connection weight values can make the perceptron's response to a group of vectors achieve the objective output of 0 or 1. This can be explained with diagrams in the input vector space. Picking two input variables, x1, x2, we get figure 3: the condition w1 x1 + w2 x2 − θ ≥ 0 divides the input plane into two parts. When the connection weight values and the threshold value change, the dividing line moves or rotates, but it always remains a straight line. The threshold value divides the space of vectors into several areas, which enables the perceptron to classify the input vectors and input samples. However, the perceptron cannot realize an arbitrary correspondence between input and output; that is, it cannot implement an arbitrary logical operation. It can only solve linearly separable problems; that is to say, the perceptron cannot classify linearly non-separable problems such as the one shown in figure 4.

XOR is a typical linearly non-separable problem. What XOR means as a logical operation is: when the two binary inputs are both 1 or both 0, the output is 0; when one input is 1 and the other is 0, the output is 1. Solving XOR with a perceptron would require dividing the four points in the plane with a straight line, as indicated in figure 4. Obviously this is impossible. In general, the problem of XOR cannot be solved with a regular single-layer perceptron.

4. Analysis of perceptron solutions to solve the XOR problem

According to the analysis above, the performance and learning ability of the single-layer perceptron are limited. In general, the simple single-layer perceptron cannot realize XOR; only the multi-layer perceptron can solve the XOR problem. In order to solve the problem of XOR, we propose several solutions: the multi-layer perceptron, the functional link perceptron, and the single-layer perceptron improved by a quadratic function.
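Before turning to the solutions, the impossibility claimed above can be checked numerically. The sketch below brute-forces a finite grid of weights and thresholds (a finite illustration only; the impossibility holds for all real values) and finds no single-layer perceptron that reproduces XOR:

```python
from itertools import product

# XOR truth table: the output is 1 exactly when the two inputs differ.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def step(u):
    return 1 if u >= 0 else 0

# Search a grid of candidate (w1, w2, theta) for a single-layer perceptron
# y = step(w1*x1 + w2*x2 - theta) that reproduces XOR.  The grid spans
# -5..5 in steps of 0.25.
grid = [i / 4 for i in range(-20, 21)]
found = any(
    all(step(w1 * x1 + w2 * x2 - th) == y for (x1, x2), y in xor.items())
    for w1, w2, th in product(grid, repeat=3)
)
print(found)  # -> False: no setting on the grid separates the four XOR points
```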
Fig. 4 XOR problem sketch map

4.1 Multi-layer perceptron to solve XOR problem

y = f( Σ_i w_i z_i − θ )

z_i = f( Σ_j v_ij x_j − θ_i )   ( i = 1, 2, ..., n; j = 1, 2 )

where n is the number of neurons in the hidden layer.
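The two-layer scheme of section 4.1 can be made concrete with hand-picked weights. The assignment below is an illustrative choice, not one taken from the paper's figures: hidden unit 0 computes OR, hidden unit 1 computes AND, and the output unit combines them as "OR and not AND", which is XOR:

```python
def step(u):
    return 1 if u >= 0 else 0

# z_i = f(sum_j v_ij*x_j - theta_i), y = f(sum_i w_i*z_i - theta), with n = 2.
V = [[1, 1], [1, 1]]       # v_ij: input-to-hidden weights
theta_h = [0.5, 1.5]       # theta_i: hidden thresholds (OR and AND)
W = [1, -1]                # w_i: hidden-to-output weights
theta_o = 0.5              # output threshold

def xor_net(x1, x2):
    z = [step(V[i][0] * x1 + V[i][1] * x2 - theta_h[i]) for i in range(2)]
    return step(W[0] * z[0] + W[1] * z[1] - theta_o)

outputs = [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(outputs)  # -> [0, 1, 1, 0]
```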
Solution 1

Fig. 5 XOR operation perceptron

Solution 2

Fig. 6 XOR perceptron with new w1, w2, θ

Solution 3
Solution 4

Fig. 8 Perceptron of three hiding neurons

In theory, there is no limit on the number of hidden neurons (n ≥ 1). However, a large number is unnecessary, as it increases the working load and directly affects the speed of convergence, making the task more difficult to fulfill. In fact, solutions 1, 2 and 3 are simplified forms of solution 4.

In addition, the activation function f, the threshold values, and the weights w_ij are all adjustable in the previous four solutions. That is to say, there are too many possible combinations to enumerate here.

4.2 Functional link perceptron to solve XOR problem

With the functional link perceptron, many problems can be solved by a single-layer net. Figure 9 shows the network, which takes the product x1·x2 as an additional input:

x1  x2  x1·x2
0   0   0
0   1   0
1   0   0
1   1   1

y = f( x1 w1 + x2 w2 + x1 x2 w3 − θ )

It can solve the problem of XOR. In fact, let w1 = w2 = 1, w3 = −2, θ = 1, and the output is changed to:

y = f( x1 + x2 − 2 x1 x2 − 1 )

The dividing curve x1 + x2 − 2 x1 x2 − 1 = 0 is a hyperbola. It separates the two groups of input patterns {(0, 0), (1, 1)} and {(1, 0), (0, 1)} into two types, the "0" type and the "1" type.
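The functional link mapping can be verified directly. Assuming f is the unit step function with f(u) = 1 for u ≥ 0 (the text leaves f generic), the weights w1 = w2 = 1, w3 = −2, θ = 1 given above reproduce XOR:

```python
def step(u):
    # unit step activation, f(u) = 1 for u >= 0 (an assumed choice of f)
    return 1 if u >= 0 else 0

def xor_functional_link(x1, x2):
    # y = f(x1*w1 + x2*w2 + x1*x2*w3 - theta), w1 = w2 = 1, w3 = -2, theta = 1
    return step(x1 + x2 - 2 * (x1 * x2) - 1)

outputs = [xor_functional_link(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(outputs)  # -> [0, 1, 1, 0]
```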
4.3 Quadratic function perceptron
XOR is a linearly non-separable operation that cannot be treated by the normal single-layer perceptron. The quadratic function perceptron, however, is capable of learning the XOR problem. The essence of this improved perceptron lies in the following fact: the neuron activation function employs a quadratic function in place of the unit step function (or sigmoid function), and the expected outputs and an optimized learning step width are selected. Simulation is also feasible utilizing other function forms.

The quadratic function perceptron computes the following formula:

y = f( Σ_i w_i x_i − θ ),  with  f(A) = A^2

In the quadratic function perceptron, the expected output value is either zero or a big enough value L. For the binary XOR problem, we take L = 1. The learning algorithm is as follows:

(1) Initialization: let t = 0, and give each w_i(t) (i = 1, 2, ..., n) and θ(t) a small non-zero random value.

(2) Input a learning sample X = (x1, x2, ..., xn) and its expected output d.

(3) Calculate the actual output y.

(4) Modify each weight value and the threshold value.
When d = 0:
w_i(t+1) = w_i(t) − η ε x_i   (i = 1, 2, ..., n)
θ(t+1) = θ(t) − η ε
When d = L:
w_i(t+1) = w_i(t) + η ε x_i   (i = 1, 2, ..., n)
θ(t+1) = θ(t) + η ε
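As a concrete check of the quadratic activation, the weight setting below (w1 = 1, w2 = −1, θ = 0, an illustrative choice rather than one reported in this paper's simulations) makes a single quadratic neuron output XOR exactly:

```python
def quad_perceptron(x1, x2, w1=1.0, w2=-1.0, theta=0.0):
    # y = f(w1*x1 + w2*x2 - theta) with the quadratic activation f(A) = A**2
    u = w1 * x1 + w2 * x2 - theta
    return u * u

# With targets 0 and L = 1 as in the text, the output matches XOR exactly.
outputs = [quad_perceptron(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(outputs)  # -> [0.0, 1.0, 1.0, 0.0]
```

The dividing set u^2 = 0.5 consists of two parallel lines, which is why a single quadratic neuron can put {(0,0), (1,1)} on one side and {(0,1), (1,0)} on the other.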
5. Conclusion

XOR is a linearly non-separable operation, which cannot be treated by the single-layer perceptron. Six solutions are proposed to solve the problem of XOR, four of which are realized by means of the multi-layer perceptron. General expressions have been provided; since the threshold value and the activation function are also adjustable, many further solutions are possible. A multi-layer neural network can always solve the XOR or XNOR problems and can implement any elementary Boolean function and logical calculation, but at the cost of complicating the system. The functional link perceptron and the quadratic function perceptron used to solve the XOR problem belong to the improved single-layer perceptrons, which are characterized by more powerful learning ability and faster convergence speed compared with the traditional single-layer perceptron.