Solutions For Problems From Neural Networks and Learning Machines, 3rd Edition by Simon Haykin

CHAPTER 1
Rosenblatt’s Perceptron

Problem 1.1
(1) If wᵀ(n)x(n) > 0, then y(n) = +1.
If also x(n) belongs to C1, then d(n) = +1.
Under these conditions, the error signal is
e(n) = d(n) - y(n) = 0
and from Eq. (1.22) of the text:
w(n + 1) = w(n) + ηe(n)x(n) = w(n)
This result is the same as line 1 of Eq. (1.5) of the text.

(2) If wᵀ(n)x(n) < 0, then y(n) = -1.


If also x(n) belongs to C2, then d(n) = -1.
Under these conditions, the error signal e(n) remains zero, and so from Eq. (1.22)
we have
w(n + 1) = w(n)
This result is the same as line 2 of Eq. (1.5).

(3) If wᵀ(n)x(n) > 0 and x(n) belongs to C2, we have


y(n) = +1
d(n) = -1

The error signal e(n) is -2, and so Eq. (1.22) yields


w(n + 1) = w(n) - 2ηx(n)
which has the same form as the first line of Eq. (1.6), except for the scaling factor 2.

(4) Finally, if wᵀ(n)x(n) < 0 and x(n) belongs to C1, then


y(n) = -1

d(n) = +1
In this case, the use of Eq. (1.22) yields
w(n + 1) = w(n) + 2ηx(n)
which has the same mathematical form as line 2 of Eq. (1.6), except for the scaling
factor 2.
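
The four cases above collapse into a single update. The sketch below is a minimal illustration of the error-correction rule of Eq. (1.22); the learning rate η = 0.5 and the sample values are our own choices, not from the text:

```python
import numpy as np

# Error-correction rule w(n+1) = w(n) + eta * e(n) * x(n), Eq. (1.22),
# with y(n) = sgn(w^T x) in {-1, +1}, so e(n) = d(n) - y(n) in {-2, 0, +2}.
def update(w, x, d, eta=0.5):
    y = 1.0 if w @ x > 0 else -1.0
    e = d - y                 # 0 in cases (1) and (2): w is left unchanged
    return w + eta * e * x    # -2*eta*x or +2*eta*x in cases (3) and (4)

w = np.array([-0.5, 0.3])     # here w^T x < 0 for the x below: case (4)
x = np.array([1.0, 1.0])
print(update(w, x, d=+1.0))   # [0.5, 1.3] = w + 2*eta*x
```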

Problem 1.2

The output signal is defined by

$$y = \tanh\left(\frac{v}{2}\right) = \tanh\left(\frac{b}{2} + \frac{1}{2}\sum_i w_i x_i\right)$$

Equivalently, we may write


$$b + \sum_i w_i x_i = y' \qquad (1)$$

where

$$y' = 2\tanh^{-1}(y)$$

Equation (1) is the equation of a hyperplane.
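
As a quick numerical check (a minimal sketch; the test values for w, b, and x are arbitrary, not from the text), the inversion y′ = 2 tanh⁻¹(y) recovers the linear form on the left-hand side of Eq. (1):

```python
import numpy as np

# Verify that 2*arctanh(y) recovers b + sum_i w_i x_i when
# y = tanh(b/2 + (1/2) sum_i w_i x_i); w, b, x are arbitrary test values.
rng = np.random.default_rng(0)
w = rng.normal(size=3)
x = rng.normal(size=3)
b = 0.7

y = np.tanh(0.5 * b + 0.5 * (w @ x))
print(np.isclose(2 * np.arctanh(y), b + w @ x))  # True
```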

Problem 1.3

(a) AND operation: Truth Table 1


Inputs        Output
x1     x2     y
1      1      1
0      1      0
1      0      0
0      0      0

This operation may be realized using the perceptron of Fig. 1:

Figure 1: Problem 1.3 — a perceptron with inputs x1 and x2, weights w1 = w2 = 1, bias b = -1.5, and a hard limiter producing the output y.

The hard limiter input is

$$v = w_1 x_1 + w_2 x_2 + b = x_1 + x_2 - 1.5$$

If x1 = x2 = 1, then v = 0.5, and y = 1


If x1 = 0, and x2 = 1, then v = -0.5, and y = 0
If x1 = 1, and x2 = 0, then v = -0.5, and y = 0
If x1 = x2 = 0, then v = -1.5, and y = 0
These conditions agree with truth table 1.
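
As a quick check, the short sketch below evaluates the unit over all four input pairs; the helper name hard_limiter_gate and the 0/1 output convention are ours, matching the truth tables rather than any textbook code:

```python
# Hard-limiter perceptron gate with a 0/1 output, matching the truth tables.
def hard_limiter_gate(x1, x2, w1, w2, b):
    v = w1 * x1 + w2 * x2 + b
    return 1 if v > 0 else 0

# AND: w1 = w2 = 1, b = -1.5
for x1, x2 in [(1, 1), (0, 1), (1, 0), (0, 0)]:
    print(x1, x2, "->", hard_limiter_gate(x1, x2, 1, 1, -1.5))
# Output is 1 only for (1, 1), matching Truth Table 1.
```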

OR operation: Truth Table 2


Inputs        Output
x1     x2     y
1      1      1
0      1      1
1      0      1
0      0      0

The OR operation may be realized using the perceptron of Fig. 2:

Figure 2: Problem 1.3 — a perceptron with inputs x1 and x2, weights w1 = w2 = 1, bias b = -0.5, and a hard limiter producing the output y.

In this case, the hard limiter input is

$$v = x_1 + x_2 - 0.5$$

If x1 = x2 = 1, then v = 1.5, and y = 1


If x1 = 0, and x2 = 1, then v = 0.5, and y = 1
If x1 = 1, and x2 = 0, then v = 0.5, and y = 1
If x1 = x2 = 0, then v = -0.5, and y = 0

These conditions agree with truth table 2.
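
The same helper (from the AND sketch above) confirms the OR configuration:

```python
# OR: same weights as AND, bias raised to -0.5
# (reusing hard_limiter_gate defined in the AND sketch above)
for x1, x2 in [(1, 1), (0, 1), (1, 0), (0, 0)]:
    print(x1, x2, "->", hard_limiter_gate(x1, x2, 1, 1, -0.5))
# Output is 0 only for (0, 0), matching Truth Table 2.
```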


COMPLEMENT operation: Truth Table 3

Input     Output
x         y
1         0
0         1

The COMPLEMENT operation may be realized using the perceptron of Figure 3:

Figure 3: Problem 1.3 — a perceptron with a single input x, weight w = -1, bias b = 0.5, and a hard limiter producing the output y.

The hard limiter input is

$$v = wx + b = -x + 0.5$$

If x = 1, then v = -0.5, and y = 0


If x = 0, then v = 0.5, and y = 1

These conditions agree with truth table 3.
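
A one-input variant (again our own sketch, not textbook code) checks the COMPLEMENT unit:

```python
# COMPLEMENT: single input, w = -1, b = 0.5
def complement_gate(x, w=-1.0, b=0.5):
    return 1 if w * x + b > 0 else 0

print(complement_gate(1), complement_gate(0))  # 0 1, matching Truth Table 3
```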

(b) EXCLUSIVE OR operation: Truth table 4


Inputs        Output
x1     x2     y
1      1      0
0      1      1
1      0      1
0      0      0

This operation is not linearly separable and therefore cannot be realized by a single-layer perceptron.
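
A brute-force scan makes the failure concrete. This is an illustrative check over a finite grid of weights, not a proof; the grid bounds are our own choice:

```python
from itertools import product
import numpy as np

# Illustrative check (not a proof): scan a grid of (w1, w2, b) and confirm
# that no single hard limiter y = [w1*x1 + w2*x2 + b > 0] matches XOR.
cases = [((1, 1), 0), ((0, 1), 1), ((1, 0), 1), ((0, 0), 0)]
grid = np.linspace(-2.0, 2.0, 41)
found = any(
    all(((w1 * x1 + w2 * x2 + b) > 0) == bool(y) for (x1, x2), y in cases)
    for w1, w2, b in product(grid, grid, grid)
)
print(found)  # False
```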

Problem 1.4

The Gaussian classifier consists of a single unit with a single weight and zero bias, determined in
accordance with Eqs. (1.37) and (1.38) of the textbook, respectively, as follows:

$$w = \frac{1}{\sigma^2}(\mu_1 - \mu_2) = -20$$

$$b = \frac{1}{2\sigma^2}(\mu_2^2 - \mu_1^2) = 0$$
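
For concreteness, the sketch below plugs in one choice of means and variance consistent with these values; μ1 = -10, μ2 = +10, and σ² = 1 are our assumed illustration, not values given in the solution above:

```python
# Scalar Gaussian classifier, Eqs. (1.37)-(1.38) with C = sigma^2:
# the means and variance below are assumed for illustration only.
mu1, mu2, sigma2 = -10.0, 10.0, 1.0

w = (mu1 - mu2) / sigma2                # Eq. (1.37): -20.0
b = (mu2**2 - mu1**2) / (2 * sigma2)    # Eq. (1.38): 0.0
print(w, b)
```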

Problem 1.5

Using the condition

$$C = \sigma^2 I$$

in Eqs. (1.37) and (1.38) of the textbook, we get the following formulas for the weight vector and
bias of the Bayes classifier:

$$w = \frac{1}{\sigma^2}(\mu_1 - \mu_2)$$

$$b = \frac{1}{2\sigma^2}\left(\|\mu_2\|^2 - \|\mu_1\|^2\right)$$
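
These formulas apply to vector-valued inputs; the sketch below (means, variance, and test point are illustrative assumptions, not values from the problem) builds the classifier and assigns x to class C1 when wᵀx + b > 0:

```python
import numpy as np

# Bayes classifier for C = sigma^2 * I; the means, variance, and test
# point are illustrative assumptions, not values from the problem.
sigma2 = 0.5
mu1 = np.array([1.0, 2.0])
mu2 = np.array([-1.0, 0.0])

w = (mu1 - mu2) / sigma2
b = (mu2 @ mu2 - mu1 @ mu1) / (2 * sigma2)

x = np.array([0.5, 1.5])                 # closer to mu1
print("C1" if w @ x + b > 0 else "C2")   # C1
```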
