
2 Neuron Model and Network Architectures

Objectives
Theory and Examples
    Notation
    Neuron Model
        Single-Input Neuron
        Transfer Functions
        Multiple-Input Neuron
    Network Architectures
        A Layer of Neurons
        Multiple Layers of Neurons
    Recurrent Networks
Summary of Results
Solved Problems
Epilogue
Exercises

Objectives
In Chapter 1 we presented a simplified description of biological neurons and neural networks. Now we will introduce our simplified mathematical model of the neuron and will explain how these artificial neurons can be interconnected to form a variety of network architectures. We will also illustrate the basic operation of these networks through some simple examples. The concepts and notation introduced in this chapter will be used throughout this book.

This chapter does not cover all of the architectures that will be used in this book, but it does present the basic building blocks. More complex architectures will be introduced and discussed as they are needed in later chapters. Even so, a lot of detail is presented here. Please note that it is not necessary for the reader to memorize all of the material in this chapter on a first reading. Instead, treat it as a sample to get you started and a resource to which you can return.


Theory and Examples

Notation
Neural networks are so new that standard mathematical notation and architectural representations for them have not yet been firmly established. In addition, papers and books on neural networks have come from many diverse fields, including engineering, physics, psychology and mathematics, and many authors tend to use vocabulary peculiar to their specialty. As a result, many books and papers in this field are difficult to read, and concepts are made to seem more complex than they actually are. This is a shame, as it has prevented the spread of important new ideas. It has also led to more than one "reinvention of the wheel."

In this book we have tried to use standard notation where possible, to be clear and to keep matters simple without sacrificing rigor. In particular, we have tried to define practical conventions and use them consistently.

Figures, mathematical equations and text discussing both figures and mathematical equations will use the following notation:
Scalars — small italic letters: a, b, c
Vectors — small bold nonitalic letters: a, b, c
Matrices — capital BOLD nonitalic letters: A, B, C
Additional notation concerning the network architectures will be introduced as you read this chapter. A complete list of the notation that we use throughout the book is given in Appendix B, so you can look there if you have a question.

Neuron Model

Single-Input Neuron
A single-input neuron is shown in Figure 2.1. The scalar input p is multiplied by the scalar weight w to form wp, one of the terms that is sent to the summer. The other input, 1, is multiplied by a bias b and then passed to the summer. The summer output n, often referred to as the net input, goes into a transfer function f, which produces the scalar neuron output a. (Some authors use the term "activation function" rather than transfer function and "offset" rather than bias.)
If we relate this simple model back to the biological neuron that we discussed in Chapter 1, the weight w corresponds to the strength of a synapse,


the cell body is represented by the summation and the transfer function, and the neuron output a represents the signal on the axon.

Figure 2.1 Single-Input Neuron (a = f(wp + b))

The neuron output is calculated as

a = f wp + b .

If, for instance, w = 3, p = 2 and b = –1.5, then

a = f(3(2) – 1.5) = f(4.5).

The actual output depends on the particular transfer function that is chosen. We will discuss transfer functions in the next section.

The bias is much like a weight, except that it has a constant input of 1. However, if you do not want to have a bias in a particular neuron, it can be omitted. We will see examples of this in Chapters 3, 7 and 14.
Note that w and b are both adjustable scalar parameters of the neuron. Typically the transfer function is chosen by the designer and then the parameters w and b will be adjusted by some learning rule so that the neuron input/output relationship meets some specific goal (see Chapter 4 for an introduction to learning rules). As described in the following section, we have different transfer functions for different purposes.

Transfer Functions
The transfer function in Figure 2.1 may be a linear or a nonlinear function of n. A particular transfer function is chosen to satisfy some specification of the problem that the neuron is attempting to solve.
A variety of transfer functions have been included in this book.
Three of the most commonly used functions are discussed below.
The hard limit transfer function, shown on the left side of Figure 2.2, sets the output of the neuron to 0 if the function argument is less than 0, or 1 if

its argument is greater than or equal to 0. We will use this function to create neurons that classify inputs into two distinct categories. It will be used extensively in Chapter 4.

Figure 2.2 Hard Limit Transfer Function: a = hardlim(n) (left) and the single-input hardlim neuron characteristic a = hardlim(wp + b) (right)
The graph on the right side of Figure 2.2 illustrates the input/output characteristic of a single-input neuron that uses a hard limit transfer function. Here we can see the effect of the weight and the bias. Note that an icon for the hard limit transfer function is shown between the two figures. Such icons will replace the general f in network diagrams to show the particular transfer function that is being used.
The output of a linear transfer function is equal to its input:

a = n,    (2.1)

as illustrated in Figure 2.3.


Neurons with this transfer function are used in the ADALINE networks, which are discussed in Chapter 10.

Figure 2.3 Linear Transfer Function: a = purelin(n) (left) and the single-input purelin neuron characteristic a = purelin(wp + b) (right)
The output (a) versus input (p) characteristic of a single-input linear neuron with a bias is shown on the right of Figure 2.3.


The log-sigmoid transfer function is shown in Figure 2.4.

Figure 2.4 Log-Sigmoid Transfer Function: a = logsig(n) (left) and the single-input logsig neuron characteristic a = logsig(wp + b) (right)
This transfer function takes the input (which may have any value between plus and minus infinity) and squashes the output into the range 0 to 1, according to the expression:

a = 1 / (1 + e^-n).    (2.2)
The log-sigmoid transfer function is commonly used in multilayer
networks that are trained using the backpropagation algorithm,
in part because this function is differentiable (see Chapter 11).
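The MATLAB names for these functions are listed in Table 2.1 below; if those functions are not at hand, the three transfer functions above can be written directly from their definitions, as in this sketch applied to the net input n = 4.5 from the earlier example:

    hardlim = @(n) double(n >= 0);      % hard limit
    purelin = @(n) n;                   % linear
    logsig  = @(n) 1 ./ (1 + exp(-n));  % log-sigmoid

    n = 4.5;                                   % net input from the earlier example
    a = [hardlim(n), purelin(n), logsig(n)]    % outputs: 1, 4.5, approx. 0.989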
Most of the transfer functions used in this book are summarized
in Table
2.1. Of course, you can define other transfer functions in
addition to those shown in Table 2.1 if you wish.
To experiment with a single-input neuron, use the Neural Network Design Demonstration One-Input Neuron (nnd2n1).


Name                          Input/Output Relation                          MATLAB Function

Hard Limit                    a = 0 if n < 0;  a = 1 if n >= 0               hardlim

Symmetrical Hard Limit        a = -1 if n < 0;  a = +1 if n >= 0             hardlims

Linear                        a = n                                          purelin

Saturating Linear             a = 0 if n < 0;  a = n if 0 <= n <= 1;         satlin
                              a = 1 if n > 1

Symmetric Saturating Linear   a = -1 if n < -1;  a = n if -1 <= n <= 1;      satlins
                              a = 1 if n > 1

Log-Sigmoid                   a = 1 / (1 + e^-n)                             logsig

Hyperbolic Tangent Sigmoid    a = (e^n - e^-n) / (e^n + e^-n)                tansig

Positive Linear               a = 0 if n < 0;  a = n if n >= 0               poslin

Competitive                   a = 1 for the neuron with max n;               compet
                              a = 0 for all other neurons

Table 2.1 Transfer Functions


Multiple-Input Neuron
Typically, a neuron has more than one input. A neuron with R inputs is shown in Figure 2.5. The individual inputs p1, p2, ..., pR are each weighted by corresponding elements w1,1, w1,2, ..., w1,R of the weight matrix W.
Figure 2.5 Multiple-Input Neuron (a = f(Wp + b))


The neuron has a bias b, which is summed with the weighted inputs to form the net input n:

n = w1,1 p1 + w1,2 p2 + ... + w1,R pR + b.    (2.3)

This expression can be written in matrix form:

n = Wp + b,    (2.4)

where the matrix W for the single neuron case has only one row.
Now the neuron output can be written as

a = f(Wp + b).    (2.5)

Fortunately, neural networks can often be described with matrices. This kind of matrix expression will be used throughout the book. Don't be concerned if you are rusty with matrix and vector operations. We will review these topics in Chapters 5 and 6, and we will provide many examples and solved problems that will spell out the procedures.
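As an illustration of this matrix expression, the sketch below evaluates a = f(Wp + b) for one neuron with R = 3 inputs; the numbers and the choice of a log-sigmoid transfer function are purely illustrative:

    W = [1 2 3];              % 1 x R weight matrix (one row for one neuron)
    p = [2; -1; 0.5];         % R x 1 input vector
    b = 0.5;                  % scalar bias
    n = W*p + b;              % net input: (1)(2) + (2)(-1) + (3)(0.5) + 0.5 = 2
    a = 1/(1 + exp(-n))       % output with a log-sigmoid transfer function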
We have adopted a particular convention in assigning the indices of the elements of the weight matrix. The first index indicates the particular neuron destination for that weight. The second index indicates the source of the signal fed to the neuron. Thus, the indices in w1,2 say that this weight represents the connection to the first (and only) neuron from the second source. Of course, this convention is more useful if there is more than one neuron, as will be the case later in this chapter.


We would like to draw networks with several neurons, each having several inputs. Further, we would like to have more than one layer of neurons. You can imagine how complex such a network might appear if all the lines were drawn. It would take a lot of ink, could hardly be read, and the mass of detail might obscure the main features. Thus, we will use an abbreviated notation. A multiple-input neuron using this notation is shown in Figure 2.6.

Figure 2.6 Neuron with R Inputs, Abbreviated Notation (a = f(Wp + b))

As shown in Figure 2.6, the input vector p is represented by the solid vertical bar at the left. The dimensions of p are displayed below the variable as R × 1, indicating that the input is a single vector of R elements. These inputs go to the weight matrix W, which has R columns but only one row in this single neuron case. A constant 1 enters the neuron as an input and is multiplied by a scalar bias b. The net input to the transfer function f is n, which is the sum of the bias b and the product Wp. The neuron's output a is a scalar in this case. If we had more than one neuron, the network output would be a vector.
The dimensions of the variables in these abbreviated notation
figures will always be included, so that you can tell immediately
if we are talking about a scalar, a vector or a matrix. You will
not have to guess the kind of variable or its dimensions.
Note that the number of inputs to a network is set by the external specifications of the problem. If, for instance, you want to design a neural network that is to predict kite-flying conditions and the inputs are air temperature, wind velocity and humidity, then there would be three inputs to the network.
To experiment with a two-input neuron, use the Neural Network Design Demonstration Two-Input Neuron (nnd2n2).


Network Architectures
Commonly one neuron, even with many inputs, may not be
sufficient. We might need five or ten, operating in parallel, in
what we will call a “layer.” This concept of a layer is discussed
below.
A Layer of Neurons
A single-layer network of S neurons is shown in Figure 2.7. Note that each of the R inputs is connected to each of the neurons and that the weight matrix now has S rows.

Figure 2.7 Layer of S Neurons (a = f(Wp + b))


The layer includes the weight matrix, the summers, the bias vector b, the transfer function boxes and the output vector a. Some authors refer to the inputs as another layer, but we will not do that here.

Each element of the input vector p is connected to each neuron through the weight matrix W. Each neuron has a bias bi, a summer, a transfer function f and an output ai. Taken together, the outputs form the output vector a.
It is common for the number of inputs to a layer to be different from the number of neurons (i.e., R ≠ S).
You might ask if all the neurons in a layer must have the same transfer function. The answer is no; you can define a single (composite) layer of neurons having different transfer functions by combining two of the networks


shown above in parallel. Both networks would have the same inputs, and each network would create some of the outputs.
The input vector elements enter the network through the weight matrix W:

         w1,1  w1,2  ...  w1,R
W   =    w2,1  w2,2  ...  w2,R        (2.6)
          ...   ...        ...
         wS,1  wS,2  ...  wS,R


As noted previously, the row indices of the elements of matrix W indicate the destination neuron associated with that weight, while the column indices indicate the source of the input for that weight. Thus, the indices in w3,2 say that this weight represents the connection to the third neuron from the second source.
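In MATLAB this convention maps directly onto matrix subscripts, as in this small illustrative sketch (the sizes and the value 0.7 are arbitrary):

    W = zeros(4, 5);      % 4 neurons (rows), 5 input sources (columns)
    W(3, 2) = 0.7;        % weight to neuron 3 from input source 2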
Fortunately, the S-neuron, R-input, one-layer network also can be
drawn in abbreviated notation, as shown in Figure 2.8.

Figure 2.8 Layer of S Neurons, Abbreviated Notation (a = f(Wp + b))

Here again, the symbols below the variables tell you that for this layer, p is a vector of length R, W is an S × R matrix, and a and b are vectors of length S. As defined previously, the layer includes the weight matrix, the summation and multiplication operations, the bias vector b, the transfer function boxes and the output vector.
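The dimensions can be checked with a short MATLAB sketch of a layer with S = 3 neurons and R = 2 inputs; the weights, biases and input are illustrative, and a log-sigmoid transfer function is assumed for every neuron in the layer:

    W = [1 2; 3 4; 5 6];      % S x R weight matrix (one row per neuron)
    b = [0.1; 0.2; 0.3];      % S x 1 bias vector
    p = [1; -1];              % R x 1 input vector
    n = W*p + b;              % S x 1 net input vector
    a = 1 ./ (1 + exp(-n))    % S x 1 output vector, a = logsig(Wp + b)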

Multiple Layers of Neurons


Now consider a network with several layers. Each layer has its
own weight matrix W , its own bias vector b , a net input vector
n and an output vector a . We need to introduce some additional
notation to distinguish between these layers. We will use
superscripts to identify the layers. Specifically, we

append the number of the layer as a superscript to the names for each of these variables. Thus, the weight matrix for the first layer is written as W1, and the weight matrix for the second layer is written as W2. This notation is used in the three-layer network shown in Figure 2.9.
Figure 2.9 Three-Layer Network
a1 = f1(W1p + b1),  a2 = f2(W2a1 + b2),  a3 = f3(W3a2 + b3)
a3 = f3(W3 f2(W2 f1(W1p + b1) + b2) + b3)
As shown, there are R inputs, S1 neurons in the first layer, S2 neurons in the second layer, etc. As noted, different layers can have different numbers of neurons.
The outputs of layers one and two are the inputs for layers two and three. Thus layer 2 can be viewed as a one-layer network with R = S1 inputs, S = S2 neurons, and an S2 × S1 weight matrix W2. The input to layer 2 is a1, and the output is a2.

A layer whose output is the network output is called an output layer. The other layers are called hidden layers. The network shown above has an output layer (layer 3) and two hidden layers (layers 1 and 2).
The same three-layer network discussed previously also can be drawn using our abbreviated notation, as shown in Figure 2.10.


Figure 2.10 Three-Layer Network, Abbreviated Notation
a1 = f1(W1p + b1),  a2 = f2(W2a1 + b2),  a3 = f3(W3a2 + b3)
a3 = f3(W3 f2(W2 f1(W1p + b1) + b2) + b3)


Multilayer networks are more powerful than single-layer networks. For instance, a two-layer network having a sigmoid first layer and a linear second layer can be trained to approximate most functions arbitrarily well. Single-layer networks cannot do this.
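A forward pass through a three-layer network is simply the cascade a3 = f3(W3 f2(W2 f1(W1p + b1) + b2) + b3) shown with Figure 2.10. The sketch below assumes illustrative sizes (R = 3, S1 = 4, S2 = 3, S3 = 2), random weights and biases, sigmoid hidden layers and a linear output layer:

    logsig  = @(n) 1 ./ (1 + exp(-n));   % sigmoid for the hidden layers
    purelin = @(n) n;                    % linear output layer

    p  = [1; 0.5; -2];                   % R x 1 input vector
    W1 = randn(4,3);  b1 = randn(4,1);   % first layer:  S1 x R and S1 x 1
    W2 = randn(3,4);  b2 = randn(3,1);   % second layer: S2 x S1 and S2 x 1
    W3 = randn(2,3);  b3 = randn(2,1);   % output layer: S3 x S2 and S3 x 1

    a1 = logsig(W1*p  + b1);             % first hidden layer output
    a2 = logsig(W2*a1 + b2);             % second hidden layer output
    a3 = purelin(W3*a2 + b3)             % network output, S3 x 1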
At this point the number of choices to be made in specifying a network may look overwhelming, so let us consider this topic. The problem is not as bad as it looks. First, recall that the number of inputs to the network and the number of outputs from the network are defined by external problem specifications. So if there are four external variables to be used as inputs, there are four inputs to the network. Similarly, if there are to be seven outputs from the network, there must be seven neurons in the output layer. Finally, the desired characteristics of the output signal also help to select the transfer function for the output layer. If an output is to be either –1 or 1, then a symmetrical hard limit transfer function should be used. Thus, the architecture of a single-layer network is almost completely determined by problem specifications, including the specific number of inputs and outputs and the particular output signal characteristic.
Now, what if we have more than two layers? Here the external problem does not tell you directly the number of neurons required in the hidden layers. In fact, there are few problems for which one can predict the optimal number of neurons needed in a hidden layer. This problem is an active area of research. We will develop some feeling on this matter as we proceed to Chapter 11, Backpropagation.
As for the number of layers, most practical neural networks have
just two or three layers. Four or more layers are used rarely.
We should say something about the use of biases. One can choose neurons with or without biases. The bias gives the network an extra variable, and so you might expect that networks with biases would be more powerful

than those without, and that is true. Note, for instance, that a neuron without a bias will always have a net input n of zero when the network inputs p are zero. This may not be desirable and can be avoided by the use of a bias. The effect of the bias is discussed more fully in Chapters 3, 4 and 5.

In later chapters we will omit a bias in some examples or demonstrations. In some cases this is done simply to reduce the number of network parameters. With just two variables, we can plot system convergence in a two-dimensional plane. Three or more variables are difficult to display.

Recurrent Networks
Before we discuss recurrent networks, we need to introduce some simple building blocks. The first is the delay block, which is illustrated in Figure 2.11.

Figure 2.11 Delay Block: a(t) = u(t – 1), with initial condition a(0)
The delay output a(t) is computed from its input u(t) according to

a(t) = u(t – 1).    (2.7)

Thus the output is the input delayed by one time step. (This assumes that time is updated in discrete steps and takes on only integer values.) Eq. (2.7) requires that the output be initialized at time t = 0. This initial condition is indicated in Figure 2.11 by the arrow coming into the bottom of the delay block.
Another related building block, which we will use for the continuous-time recurrent networks in Chapters 15–18, is the integrator, which is shown in Figure 2.12.


Figure 2.12 Integrator Block: a(t) = ∫₀ᵗ u(τ) dτ + a(0)


The integrator output a(t) is computed from its input u(t) according to

a(t) = ∫₀ᵗ u(τ) dτ + a(0).    (2.8)

The initial condition a(0) is indicated by the arrow coming into the bottom of the integrator block.
We are now ready to introduce recurrent networks. A recurrent network is a network with feedback; some of its outputs are connected to its inputs. This is quite different from the networks that we have studied thus far, which were strictly feedforward with no backward connections. One type of discrete-time recurrent network is shown in Figure 2.13.
Figure 2.13 Recurrent Network: a(0) = p,  a(t + 1) = satlins(Wa(t) + b)


In this particular network the vector p supplies the initial conditions (i.e., a(0) = p). Then future outputs of the network are computed from previous outputs:

a(1) = satlins(Wa(0) + b),  a(2) = satlins(Wa(1) + b),  ...


Recurrent networks are potentially more powerful than feedforward networks and can exhibit temporal behavior. These types of networks are discussed in Chapters 3 and 15–18.
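The sketch below iterates the network of Figure 2.13 for a few time steps; the weights, bias and initial condition are illustrative, and satlins is written from its definition in Table 2.1:

    satlins = @(n) max(-1, min(1, n));   % symmetric saturating linear
    W = [0.5 -0.2; 0.3 0.4];             % S x S weight matrix
    b = [0.1; -0.1];                     % S x 1 bias vector
    a = [0.6; -0.3];                     % initial condition a(0) = p
    for t = 1:3
        a = satlins(W*a + b);            % a(t+1) = satlins(W a(t) + b)
        disp(a')                         % show the output at each step
    end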


Summary of Results

Single-Input Neuron
a = f(wp + b)

Multiple-Input Neuron
a = f(Wp + b)


Transfer Functions

Name                          Input/Output Relation                          MATLAB Function

Hard Limit                    a = 0 if n < 0;  a = 1 if n >= 0               hardlim

Symmetrical Hard Limit        a = -1 if n < 0;  a = +1 if n >= 0             hardlims

Linear                        a = n                                          purelin

Saturating Linear             a = 0 if n < 0;  a = n if 0 <= n <= 1;         satlin
                              a = 1 if n > 1

Symmetric Saturating Linear   a = -1 if n < -1;  a = n if -1 <= n <= 1;      satlins
                              a = 1 if n > 1

Log-Sigmoid                   a = 1 / (1 + e^-n)                             logsig

Hyperbolic Tangent Sigmoid    a = (e^n - e^-n) / (e^n + e^-n)                tansig

Positive Linear               a = 0 if n < 0;  a = n if n >= 0               poslin

Competitive                   a = 1 for the neuron with max n;               compet
                              a = 0 for all other neurons


Layer of Neurons
a = f(Wp + b)

Three Layers of Neurons


a1 = f1(W1p + b1),  a2 = f2(W2a1 + b2),  a3 = f3(W3a2 + b3)
a3 = f3(W3 f2(W2 f1(W1p + b1) + b2) + b3)

Delay
a(t) = u(t – 1)


Integrator
a(t) = ∫₀ᵗ u(τ) dτ + a(0)

Recurrent Network
a(0) = p,  a(t + 1) = satlins(Wa(t) + b)

How to Pick an Architecture


Problem specifications help define the network in the
following ways:
1. Number of network inputs = number of problem inputs
2. Number of neurons in output layer = number of problem
outputs
3. Output layer transfer function choice at least partly
determined by problem specification of the outputs


Solved Problems
P2.1 The input to a single-input neuron is 2.0, its weight is 2.3 and its
bias is -3.
i. What is the net input to the transfer function?
ii. What is the neuron output?
i. The net input is given by:

n = wp + b = (2.3)(2) + (–3) = 1.6

ii. The output cannot be determined because the transfer function is not specified.

P2.2 What is the output of the neuron of P2.1 if it has the following
transfer functions?
i. Hard limit
ii. Linear
iii. Log-sigmoid
i. For the hard limit transfer function:

a = hardlim(1.6) = 1.0

ii. For the linear transfer function:

a = purelin(1.6) = 1.6

iii. For the log-sigmoid transfer function:

a = logsig(1.6) = 1 / (1 + e^-1.6) = 0.8320
Verify this result using MATLAB and the function logsig, which is in the MININNET directory (see Appendix B).
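If that function is not at hand, the same three outputs can be checked directly from the transfer-function definitions, as in this sketch:

    n = 2.3*2 + (-3)           % net input from P2.1, n = 1.6
    a_hard = double(n >= 0)    % hard limit:   1
    a_lin  = n                 % linear:       1.6
    a_log  = 1/(1 + exp(-n))   % log-sigmoid:  0.8320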

P2.3 Given a two-input neuron with the following parameters: b = 1.2, W = [3 2] and p = [–5 6]ᵀ, calculate the neuron output for the following transfer functions:

i. A symmetrical hard limit transfer function
ii. A saturating linear transfer function

iii. A hyperbolic tangent sigmoid (tansig) transfer function

First calculate the net input n:

n = Wp + b = (3)(–5) + (2)(6) + 1.2 = –1.8.

Now find the outputs for each of the transfer functions.

i. a = hardlims(–1.8) = –1
ii. a = satlin(–1.8) = 0
iii. a = tansig(–1.8) = –0.9468
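These answers can also be checked numerically; the sketch below uses the definitions from Table 2.1 (the hyperbolic tangent sigmoid is the tanh function):

    W = [3 2];  p = [-5; 6];  b = 1.2;
    n = W*p + b                  % net input, n = -1.8
    a_i   = 2*(n >= 0) - 1       % symmetrical hard limit:     -1
    a_ii  = max(0, min(1, n))    % saturating linear:           0
    a_iii = tanh(n)              % hyperbolic tangent sigmoid: -0.9468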
P2.4 A single-layer neural network is to have six inputs and two outputs. The outputs are to be limited to and continuous over the range 0 to 1. What can you tell about the network architecture? Specifically:

i. How many neurons are required?
ii. What are the dimensions of the weight matrix?
iii. What kind of transfer functions could be used?
iv. Is a bias required?

The problem specifications allow you to say the following about the network.

i. Two neurons, one for each output, are required.
ii. The weight matrix has two rows corresponding to the two neurons and six columns corresponding to the six inputs. (The product Wp is a two-element vector.)
iii. Of the transfer functions we have discussed, the logsig transfer function would be most appropriate.
iv. Not enough information is given to determine if a bias is required.


Epilogue
This chapter has introduced a simple artificial neuron and has illustrated how different neural networks can be created by connecting groups of neurons in various ways. One of the main objectives of this chapter has been to introduce our basic notation. As the networks are discussed in more detail in later chapters, you may wish to return to Chapter 2 to refresh your memory of the appropriate notation.

This chapter was not meant to be a complete presentation of the networks we have discussed here. That will be done in the chapters that follow. We will begin in Chapter 3, which will present a simple example that uses some of the networks described in this chapter, and will give you an opportunity to see these networks in action. The networks demonstrated in Chapter 3 are representative of the types of networks that are covered in the remainder of this text.


Exercises
E2.1 The input to a single-input neuron is 2.0, its weight is 1.3 and its bias is 3.0. What possible kinds of transfer function, from Table 2.1, could this neuron have, if its output is:

i. 1.6
ii. 1.0
iii. 0.9963
iv. –1.0
E2.2 Consider a single-input neuron with a bias. We would like the output to be –1 for inputs less than 3 and +1 for inputs greater than or equal to 3.

i. What kind of a transfer function is required?
ii. What bias would you suggest? Is your bias in any way related to the input weight? If yes, how?
iii. Summarize your network by naming the transfer function and stating the bias and the weight. Draw a diagram of the network. Verify the network performance using MATLAB.
E2.3 Given a two-input neuron with the following weight matrix and input vector: W = [3 2] and p = [–5 7]ᵀ, we would like to have an output of 0.5. Do you suppose that there is a combination of bias and transfer function that might allow this?

i. Is there a transfer function from Table 2.1 that will do the job if the bias is zero?
ii. Is there a bias that will do the job if the linear transfer function is used? If yes, what is it?
iii. Is there a bias that will do the job if a log-sigmoid transfer function is used? Again, if yes, what is it?
iv. Is there a bias that will do the job if a symmetrical hard limit transfer function is used? Again, if yes, what is it?

E2.4 A two-layer neural network is to have four inputs and six outputs. The range of the outputs is to be continuous between 0 and 1. What can you tell about the network architecture? Specifically:


i. How many neurons are required in each layer?
ii. What are the dimensions of the first-layer and second-layer weight matrices?
iii. What kinds of transfer functions can be used in each layer?
iv. Are biases required in either layer?
