Solutions For Problems From Neural Networks and Learning Machines, 3rd Edition by Simon Haykin
Problem 1.1
(1) If w^T(n)x(n) > 0, then y(n) = +1. If x(n) also belongs to class C_1, then d(n) = +1.
Under these conditions, the error signal is

    e(n) = d(n) - y(n) = 0

and, from Eq. (1.22) of the text,

    w(n+1) = w(n) + \eta e(n) x(n) = w(n)
This result is the same as line 1 of Eq. (1.5) of the text.
(2) If w^T(n)x(n) \le 0, then y(n) = -1. If x(n) again belongs to class C_1, then d(n) = +1.
In this case, the error signal is

    e(n) = d(n) - y(n) = 2

and the use of Eq. (1.22) yields

    w(n+1) = w(n) + 2\eta x(n)

which has the same mathematical form as line 2 of Eq. (1.6), except for the scaling factor 2.
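Both cases can be checked numerically. The following is a minimal Python sketch of the rule of Eq. (1.22), assuming a signum output and illustrative sample values; the function name perceptron_step is not from the text.

```python
import numpy as np

def perceptron_step(w, x, d, eta=1.0):
    """One step of the error-correction rule of Eq. (1.22):
    w(n+1) = w(n) + eta * e(n) * x(n)."""
    y = 1.0 if w @ x > 0 else -1.0   # signum output y(n)
    e = d - y                        # error signal e(n) = d(n) - y(n)
    return w + eta * e * x

# Case (1): correctly classified sample from C1 -> e = 0, weights unchanged.
w = np.array([1.0, 0.5])
x = np.array([2.0, 1.0])             # here w.T @ x > 0 and d = +1
assert np.allclose(perceptron_step(w, x, d=+1.0), w)

# Case (2): misclassified sample from C1 -> update by 2*eta*x.
x2 = np.array([-2.0, 1.0])           # here w.T @ x2 <= 0 but d = +1
assert np.allclose(perceptron_step(w, x2, d=+1.0), w + 2.0 * x2)
```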
Problem 1.2
The output of the neuron is

    y = \tanh\left(\frac{v}{2}\right)

where v = b + \sum_{i=1}^{m} w_i x_i is the induced local field. Inverting the hyperbolic tangent gives

    b + \sum_{i=1}^{m} w_i x_i = y'        (1)

where

    y' = 2 \tanh^{-1}(y)
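Equation (1) is easy to verify numerically: applying y' = 2 tanh^{-1}(y) to the neuron's output recovers the induced local field exactly. The values of b, w_i, and x_i below are arbitrary illustrations.

```python
import numpy as np

# Numerical check of the inversion: if y = tanh(v/2), then y' = 2*arctanh(y)
# recovers v = b + sum_i w_i x_i exactly.
b = 0.3
w = np.array([0.5, -1.2])
x = np.array([1.0, 0.4])
v = b + w @ x
y = np.tanh(v / 2.0)
assert np.isclose(2.0 * np.arctanh(y), v)
```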
Problem 1.3
[Figure 1: Problem 1.3 — a single hard-limiter neuron with inputs x_1, x_2, weights w_1 = w_2 = 1, and bias b = -1.5, which realizes the AND function.]

    v = w_1 x_1 + w_2 x_2 + b = x_1 + x_2 - 1.5
[Figure 2: Problem 1.3 — the same neuron with weights w_1 = w_2 = 1 and bias b = -0.5, which realizes the OR function.]

    v = x_1 + x_2 - 0.5
[Figure 3: Problem 1.3 — a single-input hard-limiter neuron with weight w_1 = -1 and bias b = 0.5, which realizes the COMPLEMENT function.]

    v = wx + b = -x + 0.5
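All three configurations can be checked against their truth tables. The sketch below assumes a 0/1 input encoding and a hard limiter that outputs 1 for v > 0 and 0 otherwise.

```python
def hard_limiter(v):
    """Output of the hard limiter: 1 if v > 0, else 0."""
    return int(v > 0)

# Truth-table check of the three configurations above.
for x1 in (0, 1):
    for x2 in (0, 1):
        assert hard_limiter(x1 + x2 - 1.5) == (x1 and x2)  # AND: b = -1.5
        assert hard_limiter(x1 + x2 - 0.5) == (x1 or x2)   # OR:  b = -0.5
for x in (0, 1):
    assert hard_limiter(-x + 0.5) == 1 - x                 # COMPLEMENT: w = -1, b = 0.5
```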
Problem 1.4
The Gaussian classifier consists of a single unit with a single weight and zero bias, determined in
accordance with Eqs. (1.37) and (1.38) of the textbook, respectively, as follows:
    w = \frac{1}{\sigma^2}(\mu_1 - \mu_2) = -20

    b = \frac{1}{2\sigma^2}(\mu_2^2 - \mu_1^2) = 0
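For reference, a direct evaluation of the two formulas in Python; the means and variance passed in below are assumed values chosen only because they reproduce w = -20 and b = 0, since the problem data are not restated in this solution.

```python
def gaussian_classifier_params(mu1, mu2, sigma2):
    """Scalar form of Eqs. (1.37) and (1.38):
    w = (mu1 - mu2)/sigma^2, b = (mu2^2 - mu1^2)/(2*sigma^2)."""
    w = (mu1 - mu2) / sigma2
    b = (mu2**2 - mu1**2) / (2.0 * sigma2)
    return w, b

# Illustrative means and variance (assumed) consistent with w = -20, b = 0.
print(gaussian_classifier_params(mu1=-10.0, mu2=10.0, sigma2=1.0))  # (-20.0, 0.0)
```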
Problem 1.5
Substituting the covariance matrix

    C = \sigma^2 I

in Eqs. (1.37) and (1.38) of the textbook, we get the following formulas for the weight vector and bias of the Bayes classifier:

    w = \frac{1}{\sigma^2}(\mu_1 - \mu_2)

    b = \frac{1}{2\sigma^2}(\|\mu_2\|^2 - \|\mu_1\|^2)
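A sketch of the vector case, assuming illustrative means, variance, and test point; the decision rule (assign x to C_1 when w^T x + b > 0) follows the textbook's linear form of the Bayes classifier.

```python
import numpy as np

def bayes_classifier_params(mu1, mu2, sigma2):
    """Eqs. (1.37) and (1.38) specialized to C = sigma^2 * I."""
    w = (mu1 - mu2) / sigma2
    b = (mu2 @ mu2 - mu1 @ mu1) / (2.0 * sigma2)
    return w, b

# Illustrative means, variance, and test point (assumed values).
mu1 = np.array([1.0, 1.0])
mu2 = np.array([-1.0, -1.0])
w, b = bayes_classifier_params(mu1, mu2, sigma2=0.5)
x = np.array([0.5, 0.2])
print("x assigned to C1" if w @ x + b > 0 else "x assigned to C2")
```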