
NEURAL NETWORKS QUESTION BANK

11. With a supervised learning algorithm, we can specify target output values, but we may never get close to those targets at the end of learning. Give two reasons why this might happen.

Answer:

(i) the data may be valid, and the inconsistency results from a stochastic aspect of the task (or some aspect of the task is not modeled by the input data collected);

(ii) the data may contain errors - e.g. measurement errors or typographical errors.

12. Describe the architecture and the computational task of the NetTalk neural network.

Answer:

Each group of 29 input units represents one letter, so the inputs together represent seven letters. The computational task is to output the phoneme representation corresponding to the middle letter of the seven.

13. Why does a time-delay neural network (TDNN) have the same set of incoming weights for each column of hidden units?

Answer:

To provide temporal translation invariance – or, so that the TDNN will be able to identify the input sound no matter which frame the input sound begins in.

14. Distinguish between a feedforward network and a recurrent network.

Answer:

A feedforward network has no cyclic activation flows; a recurrent network contains cycles, so activations can be fed back to neurons that have already been active.


15. Draw the weight matrix for a feedforward network, showing the partitioning. You can assume that the weight matrix for connections from the input layer to the hidden layer is Wih, and that the weight matrix for connections from the hidden layer to the output layer is Who.

Answer:

16. In a Jordan network with i input neurons, h hidden layer neurons, and o output neurons:

(a) how many neurons will there be in the state vector, and

(b) if i = 4, h = 3, and o = 2, draw a diagram showing the connectivity of the network. Do not forget the bias unit.

Answer:

(a) o neurons in the state vector (the same as the output vector – that's the letter o, not zero)

(b)

17. Draw a diagram illustrating the architecture of Elman's simple recurrent network that performs a temporal version of the XOR task. How are the two inputs to XOR provided to this network?

Answer:
The inputs are passed sequentially to the single input unit (0) of the temporal XOR net.

18. Briefly describe the use of cluster analysis in Elman's lexical class discovery experiments, and one of his conclusions from this.

Answer:

Elman clustered hidden unit activation patterns corresponding to different input vectors and different sequences of input units. He found that the clusters corresponded well to the grammatical contexts in which the inputs (or input sequences) occurred, and thus concluded that the network had in effect learned the grammar.

19. Draw an architectural diagram of a rank 2 tensor product network where the dimensions of the input/output vectors are 3 and 4. You do not need to show the detailed internal structure of the binding units.

Answer:

20. Draw a diagram of a single binding unit in a rank 2 tensor product network illustrating the internal operation of the binding unit in teaching mode.


Answer:

21. Define the concepts of dense and sparse random representations. How do their properties compare with those of an orthonormal set of representation vectors?

Answer:

In a dense random representation, each vector component is chosen at random from a uniform distribution over, say, [–1, +1]. In a sparse random representation, the non-zero components are chosen in this way, but most components are chosen (at random) to be zero. In both cases, the vectors are normalized so that they have length 1.

Members of orthonormal sets of vectors have length one and are orthogonal to one another. Vectors in dense and sparse random representations are “orthogonal on average” – their inner products have a mean of zero.

22. What is a Hadamard matrix? Describe how a Hadamard matrix can be used to produce suitable distributed concept representation vectors for a tensor product network. What are the properties of the Hadamard matrix that make the associated vectors suitable?

Answer:

A Hadamard matrix H is a square matrix of size n, all of whose entries are ±1, which satisfies H·Hᵀ = I_n, the identity matrix of size n. The rows of a Hadamard matrix, once normalized, can be used as distributed representation vectors in a tensor product network. This is because the rows are orthogonal to each other and have no zero components.

23. In a 2-D self-organizing map with input vectors of dimension m, and k neurons in the map, how many weights will there be?

Answer:

mk (each of the k map neurons has its own m-dimensional weight vector)
24. Describe the competitive process of the Self-Organising Map algorithm.

Answer:

Let m denote the dimension of the input pattern

x = [x1, x2, ..., xm]ᵀ

The weight vector for each of the neurons in the SOM also has dimension m. So for neuron j, the weight vector will be:

wj = [wj1, wj2, ..., wjm]ᵀ

For an input pattern x, compute the inner product wj • x for each neuron, and choose the neuron with the largest inner product as the winner. Let i(x) denote the index of the winning neuron (this is also the output of a trained SOM).
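A minimal sketch of this competitive step (Python with NumPy; the array shapes and names are illustrative, not from the original):

import numpy as np

def som_winner(x, W):
    # W has shape (k, m): one m-dimensional weight vector per map neuron.
    # i(x) is the index of the neuron with the largest inner product wj . x.
    return int(np.argmax(W @ x))

# toy usage: k = 4 map neurons, m = 3 input dimensions
W = np.random.rand(4, 3)
x = np.array([0.2, 0.9, 0.1])
print(som_winner(x, W))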

25. Briefly explain the concept of a Voronoi cell.

Answer:

Given a set of vectors X, the Voronoi cells about those vectors partition the space they lie in according to the nearest-neighbour rule. That is, a vector lies in the Voronoi cell belonging to the x ∈ X to which it is closest.

26. Briefly explain the term code book in the context of learning vector quantisation.

Answer:

When compressing data by representing vectors by the labels of a relatively small set of reconstruction vectors, the set of reconstruction vectors is called the code book.

27. Describe the relationship between the Self-Organising Map algorithm and the Learning Vector

Quantisation algorithm.

Answer:

In order to use Learning Vector Quantisation (LVQ), a set of approximate reconstruction vectors is first found using the unsupervised SOM algorithm. The supervised LVQ algorithm is then used to fine-tune the vectors found using SOM.

28. Briefly describe two types of attractor in a dynamical system.

Answer:
An attractor is a bounded subset of space to which non-trivial regions of initial conditions converge as time passes. Any two of:

• point attractor: system converges to a single point

• limit cycle: system converges to a cyclic path

• chaotic attractor: stays within a bounded region of space, but no predictable cyclic path

29. Write down the energy function of a BSB network with weight matrix W, feedback constant β, and activation vector x.

Answer:
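One common form (assuming the usual BSB formulation with symmetric weight matrix W) is:

E = −(β/2) · xᵀ W x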

30. Compute the weight matrix for a 4-neuron Hopfield net with the single fundamental memory ξ1 = [1, –1, –1, 1] stored in it.

Answer:
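A worked answer, assuming the common convention W = ξ1·ξ1ᵀ − I (outer product of the memory with itself, diagonal zeroed):

W = [  0  −1  −1   1
      −1   0   1  −1
      −1   1   0  −1
       1  −1  −1   0 ]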

31. Write down the energy function of a discrete Hopfield net.

Answer:
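In the usual notation (states x_i, weights w_ij, thresholds θ_i), a standard form is:

E = −(1/2) Σ_i Σ_{j≠i} w_ij x_i x_j + Σ_i θ_i x_i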

32. What is Artificial Neural Network?

An extremely simplified model of the brain

● Essentially a function approximator


► Transforms inputs into outputs to the best of its ability

Composed of many “neurons” that co-operate to perform the desired function

33. What Are ANNs Used For?

● Classification

► Pattern recognition, feature extraction, image matching

● Noise Reduction

► Recognize patterns in the inputs and produce noiseless outputs

● Prediction

► Extrapolation based on historical data

● Ability to learn

► NN’s figure out how to perform their function on their own

► Determine their function based only upon sample inputs

● Ability to generalize

► i.e. produce reasonable outputs for inputs it has not been taught how to deal with

34. How do Neural Networks Work?

• The “building blocks” of neural networks are the neurons.

• In technical systems, we also refer to them as units or nodes.

• Basically, each neuron

– receives input from many other neurons,

– changes its internal state (activation) based on the current input,

– sends one output signal to many other neurons, possibly including its input neurons

(recurrent network)

• Information is transmitted as a series of electric impulses, so-called spikes.

• The frequency and phase of these spikes encodes the information.

• In biological systems, one neuron can be connected to as many as 10,000 other neurons.

• Usually, a neuron receives its information from other neurons in a confined area, its so-called receptive field.

• NNs are able to learn by adapting their connectivity patterns so that the organism improves its behavior in terms of reaching certain (evolutionary) goals.

• The strength of a connection, or whether it is excitatory or inhibitory, depends on the state of a receiving neuron’s synapses.

• The NN achieves learning by appropriately adapting the states of its synapses.

• The output of a neuron is a function of the weighted sum of the inputs plus a bias.

● The function of the entire neural network is simply the computation of the outputs of all the neurons

► An entirely deterministic calculation

37. Explain Correlation Learning

Hebbian Learning (1949):

“When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”

Weight modification rule:

Δwi,j = c⋅xi⋅xj

Eventually, the connection strength will reflect the correlation between the neurons’ outputs.
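A one-line sketch of this rule (Python; the learning constant c and the neuron activities are illustrative):

def hebb_update(w_ij, x_i, x_j, c=0.1):
    # Δw_ij = c · x_i · x_j: strengthen the connection when both neurons are active together
    return w_ij + c * x_i * x_j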

38. Explain Competitive Learning

• Nodes compete for inputs

• Node with highest activation is the winner

• Winner neuron adapts its tuning (pattern of weights) even further towards the current input

• Individual nodes specialize to win competition for a set of similar inputs

• Process leads to most efficient neural representation of input space

• Typical for unsupervised learning

39. Explain Linear Neurons

Obviously, the fact that threshold units can only output the values 0 and 1 restricts their applicability to certain problems.

We can overcome this limitation by eliminating the threshold and simply turning f_i into the identity function, so that we get: x_i(t) = net_i(t)

With this kind of neuron, we can build feedforward networks with m input neurons and n output neurons that compute a function f: R^m → R^n.

Linear neurons are quite popular and useful for applications such as interpolation.

However, they have a serious limitation: each neuron computes a linear function, and therefore the overall network function f: R^m → R^n is also linear.

This means that if an input vector x results in an output vector y, then for any factor φ the input φ⋅x will result in the output φ⋅y. As a consequence, many interesting functions cannot be realized by networks of linear neurons.

40. Explain Gradient Descent


Gradient descent is a very common technique to find a (local) minimum of a function. It is especially useful for high-dimensional functions. We will use it to iteratively minimize the network’s (or a neuron’s) error by finding the gradient of the error surface in weight space and adjusting the weights in the opposite direction.
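A minimal sketch of the idea (Python; the one-dimensional error function is an illustrative stand-in for a real error surface in weight space):

def gradient_descent(grad, w, lr=0.1, steps=100):
    # repeatedly step against the gradient of the error
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# toy example: E(w) = (w - 3)^2 has gradient 2(w - 3); descent converges toward w = 3
print(gradient_descent(lambda w: 2 * (w - 3), w=0.0))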

42. Develop an Adaline Learning Algorithm.


The Adaline uses gradient descent to determine the weight vector which leads to minimal error. Error is defined as the MSE between the neuron’s net input net_j and its desired output d_j (= class(i_j)) across all training samples i_j.

The idea is to pick samples in random order and perform (slow) gradient descent in their individual error functions. This technique allows incremental learning, i.e., refining of the weights as more training samples are added.
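A minimal incremental sketch under these definitions (Python with NumPy; the data shapes and learning rate are illustrative):

import numpy as np

def adaline_train(X, d, lr=0.01, epochs=50):
    # delta rule: per-sample gradient descent on the squared error (d_j - net_j)^2
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for j in np.random.permutation(len(X)):   # pick samples in random order
            net = w @ X[j]                        # linear net input
            w += lr * (d[j] - net) * X[j]         # step down this sample's error gradient
    return w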
43. Explain the difference between Internal Representation Issues and External Interpretation Issues?

Internal Representation Issues

As we said before, in all network types, the amplitude of input signals and internal signals is limited:

• analog networks: values usually between 0 and 1

• binary networks: only values 0 and 1 are allowed

• bipolar networks: only values –1 and 1 allowed

Without this limitation, patterns with large amplitudes would dominate the network’s behavior.

A disproportionately large input signal can activate a neuron even if the relevant connection weight is very small.

External Interpretation Issues

From the perspective of the embedding application, we are concerned with the interpretation of input and output signals.

These signals constitute the interface between the embedding application and its NN component.

Often, these signals only become meaningful when we define an external interpretation for them.

This is analogous to biological neural systems: the same signal takes on a completely different meaning when it is interpreted by different brain areas (motor cortex, visual cortex, etc.).

Without any interpretation, we can only use standard methods to define the difference (or similarity) between signals.

For example, for binary patterns x and y, we could…


The units can be trained separately and in parallel.

In production mode, the network decides that its current input is in the k-th class if and only if o_k = 1 and o_j = 0 for all j ≠ k; otherwise the input is misclassified.

For units with real-valued output, the neuron with maximal output can be picked to indicate the class of the input. This maximum should be significantly greater than all other outputs; otherwise the input is misclassified.

46. Explain difference between Supervised and unsupervised learning?

• Supervised learning:

An archaeologist determines the gender of a human skeleton based on many past examples of male and female skeletons.

• Unsupervised learning:

The archaeologist determines whether a large number of dinosaur skeleton fragments belong to the same species or multiple species. There are no previous data to guide the archaeologist, and no absolute criterion of correctness.

47. Explain different ways of representing the data in the neural network system? 10
48. Explain temporal data representations. Give example. 10

49. Write a note on Adaptive Networks

As you know, no equation would tell you the ideal number of neurons in a multi-layer network. Ideally, we would like to use the smallest number of neurons that allows the network to do its task sufficiently accurately, because of:

• the small number of weights in the system,

• fewer training samples being required,

• faster training,

• typically, better generalization for new test samples.

So far, we have determined the number of hidden-layer units in BPNs by “trial and error.”

However, there are algorithmic approaches for adapting the size of a network to a given task.

Some techniques start with a large network and then iteratively prune connections and nodes that contribute little to the network function.

Other methods start with a minimal network and then add connections and nodes until the network reaches a given performance level.

Finally, there are algorithms that combine these “pruning” and “growing” approaches.

50. Write a note on Cascade correlation

None of these algorithms are guaranteed to produce “ideal” networks. (It is not even clear how to define an “ideal” network.) However, numerous algorithms exist that have been shown to yield good results for most applications.

We will take a look at one such algorithm named “cascade correlation.” It is of the “network growing” type and can be used to build multi-layer networks of adequate size. However, these networks are not strictly feed-forward in a level-by-level manner.

This learning algorithm is much faster than backpropagation learning, because only one neuron is trained at a time. On the other hand, its inability to retrain neurons may prevent the cascade correlation network from finding optimal weight patterns for encoding the given function.


51. Explain Covariance and Correlation

For a dataset (x_i, y_i) with i = 1, …, n the covariance is:

cov(x, y) = (1/n) Σ_i (x_i − x̄)(y_i − ȳ)

Covariance tells us something about the strength and direction (directly vs. inversely proportional) of the linear relationship between x and y.

For many applications, it is useful to normalize this variable so that it ranges from −1 to 1. The result is the correlation coefficient r, which for a dataset (x_i, y_i) with i = 1, …, n is given by:

r = corr(x, y) = Σ_i (x_i − x̄)(y_i − ȳ) / √( Σ_i (x_i − x̄)² · Σ_i (y_i − ȳ)² )

In the case of high (close to 1) or low (close to −1) correlation coefficients, we can use one variable as a predictor of the other one.

To quantify the linear relationship between the two variables, we can use linear regression: fit a straight line y = a·x + b through the data points, e.g. with the least-squares slope a = cov(x, y) / var(x).
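A short sketch computing both quantities as defined above (Python with NumPy; the sample data are illustrative):

import numpy as np

def cov_and_corr(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    cov = (xc * yc).mean()                                        # covariance
    r = (xc * yc).sum() / np.sqrt((xc**2).sum() * (yc**2).sum())  # correlation coefficient
    return cov, r

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.0])
print(cov_and_corr(x, y))   # strong positive linear relationship: r close to 1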

52. What are the benefits of having the smallest number of neurons in the network? 4

53. Develop a cascade correlation algorithm. What is it used for? What are its advantages?

We start with a minimal network consisting of only the input neurons (one of them should be a constant offset = 1) and the output neurons, completely connected as usual.

The output neurons (and later the hidden neurons) typically use output functions that can also produce negative outputs; e.g., we can subtract 0.5 from our sigmoid function for a (−0.5, 0.5) output range.

Then we successively add hidden-layer neurons and train them to reduce the network error step by step: weights to each new hidden node are trained to maximize the covariance of the node’s output with the current network error.

Covariance:

S(w_new) = Σ_k | Σ_p (x_new,p − x̄_new)(E_k,p − Ē_k) |

where
w_new: vector of weights to the new node,
x_new,p: output of the new node for the p-th input sample,
E_k,p: error of the k-th output node for the p-th input sample before the new node is added,
x̄_new, Ē_k: averages over the training set.


Since we want to maximize S (as opposed to minimizing some error), we use gradient ascent:

Δw_i = η Σ_p Σ_k S_k (E_k,p − Ē_k) f′_p I_i,p

where
I_i,p: i-th input for the p-th pattern,
S_k: sign of the correlation between the node’s output and the k-th network output,
η: learning rate,
f′_p: derivative of the node’s activation function with respect to its net input, evaluated at the p-th pattern.

If we can find weights so that the new node’s output perfectly covaries with the error in each output node, we can set the new output node weights and offsets so that the new error is zero.

More realistically, there will be no perfect covariance, which means that we will set each output node weight so that the error is minimized.

To do this, we can use gradient descent or linear regression for each individual output node weight.


The next added hidden node will further reduce the remaining network error, and so on, until we reach a desired error threshold.

This learning algorithm is much faster than backpropagation learning, because only one neuron is trained at a time. On the other hand, its inability to retrain neurons may prevent the cascade correlation network from finding optimal weight patterns for encoding the given function.
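A sketch of the candidate score S from the formula above (Python with NumPy; the tanh candidate activation and the array shapes are assumptions for illustration):

import numpy as np

def candidate_score(w, I, E):
    # I: (P, n_in) input patterns; E: (P, K) current output errors; w: candidate weights
    x = np.tanh(I @ w)               # candidate node's output for each pattern p
    xc = x - x.mean()                # x_new,p minus its average
    Ec = E - E.mean(axis=0)          # E_k,p minus the per-output average
    return np.abs(xc @ Ec).sum()     # S = sum over k of |sum over p of products|

Maximizing S by gradient ascent on w trains the candidate before its weights are frozen.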

54. What are input space clusters and radial basis functions (RBFs)? 6

To achieve such local “receptive fields,” we can use radial basis functions, i.e., functions whose output only depends on the Euclidean distance μ between the input vector and another (“weight”) vector.

A typical choice is a Gaussian function:

ρ_g(μ) ∝ e^(−(μ/c)²)

where c determines the “width” of the Gaussian. However, any radially symmetric, non-increasing function could be used.

55. Explain linear interpolation for the one-dimensional and multidimensional case? 5

For function approximation, the desired output for new (untrained) inputs could be estimated by linear interpolation.

As a simple example, how do we determine the desired output of a one-dimensional function at a new input x0 that is located between known data points x1 and x2?

f(x0) = f(x1) + (f(x2) − f(x1)) · (x0 − x1) / (x2 − x1)

which simplifies to:

f(x0) = (D1⁻¹ f(x1) + D2⁻¹ f(x2)) / (D1⁻¹ + D2⁻¹)

with distances D1 and D2 from x0 to x1 and x2, respectively.

In the multi-dimensional case, hyperplane segments connect neighboring points so that the desired output for a new input x0 is determined by the P0 known samples that surround it:

f(x0) = (D1⁻¹ f(x1) + … + D_P0⁻¹ f(x_P0)) / (D1⁻¹ + … + D_P0⁻¹)

where Dp is the Euclidean distance between x0 and xp, and f(xp) is the desired output value for input xp.

Example for f: R² → R¹, with desired outputs 5, 4, 7, 6 at the four nearest neighbors of x0; the desired output for x0 is

f(x0) = (5·D1⁻¹ + 4·D2⁻¹ + 7·D3⁻¹ + 6·D4⁻¹) / (D1⁻¹ + D2⁻¹ + D3⁻¹ + D4⁻¹) ≈ 5.5
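A small sketch of this distance-weighted estimate (Python with NumPy; the four sample points are placed at equal distances so the result matches the example above):

import numpy as np

def interpolate(x0, X, f):
    D = np.linalg.norm(X - x0, axis=1)   # distances D_p from x0 to each sample
    w = 1.0 / D                          # inverse-distance weights
    return (w * f).sum() / w.sum()

X = np.array([[0, 1], [1, 0], [0, -1], [-1, 0]], dtype=float)
f = np.array([5.0, 4.0, 7.0, 6.0])
print(interpolate(np.array([0.0, 0.0]), X, f))   # -> 5.5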

56. Explain different types of learning methods. What are counterpropagation networks?


Unsupervised/Supervised Learning ….

The counterpropagation network (CPN) is a fast-learning combination of unsupervised and supervised learning.

Although this network uses linear neurons, it can learn nonlinear functions by means of a hidden layer of competitive units. Moreover, the network is able to learn a function and its inverse at the same time. However, to simplify things, we will only consider the feedforward mechanism of the CPN.

57. Explain the process of learning in radial basis function networks? 5

If we are using such linear interpolation, then our radial basis function (RBF) ρ0, which weights an input vector based on its distance D to a neuron’s reference (weight) vector, is ρ0(D) = D⁻¹.

For the training samples xp, p = 1, …, P0, surrounding the new input x, we find for the network’s output o:

o ∝ Σ_p ρ0(‖x − xp‖) · dp, where dp = f(xp)

(In the following, to keep things simple, we will assume that the network has only one output neuron. However, any number of output neurons could be implemented.)

Since it is difficult to define what “surrounding” should mean, it is common to consider all P training samples and use any monotonically decreasing RBF ρ:

o = Σ_{p=1..P} ρ(‖x − xp‖) · dp

This, however, implies a network that has as many hidden nodes as there are training samples. This is unacceptable because of its computational complexity and likely poor generalization ability – the network resembles a look-up table.

It is more useful to have fewer neurons and accept that the training set cannot be learned 100% accurately:

o = Σ_{i=1..N} φ_i · ρ(‖x − μ_i‖)

Here, ideally, each reference vector μ_i of these N neurons should be placed in the center of an input-space cluster of training samples with identical (or at least similar) desired output φ_i.

To learn near-optimal values for the reference vectors and the output weights, we can – as usual – employ gradient descent.

58. Write a note on distance and similarity functions with respect to the counterpropagation network?

In the hidden layer, the neuron whose weight vector is most similar to the current input vector is the “winner.”

There are different ways of defining such maximal similarity, for example:

(1) Maximal cosine similarity (same as net input): s(w, x) = w ⋅ x

(2) Minimal Euclidean distance: d(w, x) = Σ_i (w_i − x_i)²  (no square root necessary for determining the winner)

59. Develop a counterpropagation network learning algorithm? 10

A simple CPN with two input neurons, three hidden neurons, and two output neurons can be described as follows:


The CPN learning process (general form for n input units and m output units):

1. Randomly select a vector pair (x, y) from the training set.

2. If you use the cosine similarity function, normalize (shrink/expand to “length” 1) the input vector x by dividing every component of x by the magnitude ||x||, where ||x|| = √(Σ_j x_j²).

3. Initialize the input neurons with the resulting vector and compute the activation of the hidden-layer units according to the chosen similarity measure.

4. In the hidden (competitive) layer, determine the unit W with the largest activation (the winner).

5. Adjust the connection weights between W and all N input-layer units according to the formula:

w^H_Wn(t + 1) = w^H_Wn(t) + α·(x_n − w^H_Wn(t))

6. Repeat steps 1 to 5 until all training patterns have been processed once.

7. Repeat step 6 until each input pattern is consistently associated with the same competitive unit.

8. Select the first vector pair in the training set (the current pattern).

9. Repeat steps 2 to 4 (normalization, competition) for the current pattern.

10. Adjust the connection weights between the winning hidden-layer unit and all M output-layer units according to the equation:

w^O_mW(t + 1) = w^O_mW(t) + β·(y_m − w^O_mW(t))

11. Repeat steps 9 and 10 for each vector pair in the training set.

12. Repeat steps 8 through 11 for several epochs.
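A compact sketch of this two-phase procedure using the minimal-Euclidean-distance similarity (Python with NumPy; the layer sizes, rates, and epoch counts are illustrative):

import numpy as np

def train_cpn(X, Y, n_hidden=3, alpha=0.1, beta=0.1, epochs=100):
    WH = np.random.rand(n_hidden, X.shape[1])    # input -> hidden weights
    WO = np.random.rand(Y.shape[1], n_hidden)    # hidden -> output weights
    for _ in range(epochs):                      # phase 1: steps 1-7 (competition)
        for x in X:
            w = np.argmin(((WH - x) ** 2).sum(axis=1))   # winning hidden unit W
            WH[w] += alpha * (x - WH[w])
    for _ in range(epochs):                      # phase 2: steps 8-12 (output training)
        for x, y in zip(X, Y):
            w = np.argmin(((WH - x) ** 2).sum(axis=1))
            WO[:, w] += beta * (y - WO[:, w])
    return WH, WO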

60. Develop a Quickprop learning algorithm? 10

The assumption underlying Quickprop is that the network error as a function of each individual weight can be approximated by a paraboloid.

Based on this assumption, whenever we find that the gradient for a given weight switched its sign between successive epochs, we should fit a paraboloid through these data points and use its minimum as the next weight value.

Illustration (sorry for the crummy paraboloid):


Newton’s method: we fit a parabola E = a·w² + b·w + c to each weight’s error curve, using the error gradients measured at the current and the previous step:

E′(t) = 2a·w(t) + b
E′(t − 1) = 2a·w(t − 1) + b

⇒ a = (E′(t) − E′(t − 1)) / (2·Δw(t − 1)),  b = E′(t) − 2a·w(t)

For the minimum of E we must have:

E′(t + 1) = 2a·w(t + 1) + b = 0

⇒ w(t + 1) = −b / (2a) = w(t) − E′(t) · Δw(t − 1) / (E′(t) − E′(t − 1))

Notice that this method cannot be applied if the error gradient has not decreased in magnitude and has not changed its sign at the preceding time step. In that case, we would ascend in the error function or make an infinitely large weight modification.

In most cases, Quickprop converges several times faster than standard backpropagation learning.
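A sketch of this update for a single weight (Python; the variable names are illustrative):

def quickprop_delta(dE, dE_prev, dw_prev):
    # jump to the minimum of the fitted parabola:
    # Δw(t) = Δw(t-1) · E'(t) / (E'(t-1) - E'(t))
    # only valid when the gradient shrank in magnitude or changed sign (see above)
    return dw_prev * dE / (dE_prev - dE)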

61. Develop an Rprop learning algorithm? 10


Resilient Backpropagation (Rprop)

The Rprop algorithm takes a very different approach to improving backpropagation as compared to Quickprop. Instead of making more use of gradient information for better weight updates, Rprop only uses the sign of the gradient, because its size can be a poor and noisy estimator of the required weight updates. Furthermore, Rprop assumes that different weights need different step sizes for updates, which vary throughout the learning process.

The basic idea is that if the error gradient for a given weight wij had the same sign in two consecutive epochs, we increase its step size Δij, because the weight’s optimal value may be far away. If, on the other hand, the sign switched, we decrease the step size.

Weights are always changed by adding or subtracting the current step size, regardless of the absolute value of the gradient. This way we do not “get stuck” with extreme weights that are hard to change because of the shallow slope in the sigmoid function.

Formally, the step size update rules are:

Δij(t) = η⁺ · Δij(t − 1)   if ∂E/∂wij (t − 1) · ∂E/∂wij (t) > 0
Δij(t) = η⁻ · Δij(t − 1)   if ∂E/∂wij (t − 1) · ∂E/∂wij (t) < 0
Δij(t) = Δij(t − 1)        otherwise

Empirically, best results were obtained with initial step sizes of 0.1, η⁺ = 1.2, η⁻ = 0.5, Δmax = 50, and Δmin = 10⁻⁶.

Weight updates are then performed as follows:

Δwij(t) = −Δij(t)   if ∂E/∂wij (t) > 0
Δwij(t) = +Δij(t)   if ∂E/∂wij (t) < 0
Δwij(t) = 0         otherwise

It is important to remember that, like in Quickprop, in Rprop the gradient needs to be computed across all samples (per-epoch learning).

The performance of Rprop is comparable to Quickprop; it also considerably accelerates backpropagation learning. Compared to both the standard backpropagation algorithm and Quickprop, Rprop has one advantage: Rprop does not require the user to estimate or empirically determine a step size parameter and its change over time. Rprop will determine appropriate step size values by itself and can thus be applied “as is” to a variety of problems without significant loss of efficiency.
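A per-epoch sketch of these rules (Python with NumPy; the constants follow the values quoted above):

import numpy as np

def rprop_update(grad, grad_prev, step,
                 eta_plus=1.2, eta_minus=0.5, step_max=50.0, step_min=1e-6):
    # grad, grad_prev, step: arrays with one entry per weight
    same = grad * grad_prev                    # > 0 means the gradient kept its sign
    step = np.where(same > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(same < 0, np.maximum(step * eta_minus, step_min), step)
    dw = -np.sign(grad) * step                 # only the gradient's sign is used
    return dw, step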

62. What are Maxnets? Give example. 5

A maxnet is a recurrent, one-layer network that uses competition to determine which of its nodes has the greatest initial input value.

All pairs of nodes have inhibitory connections with the same weight −ε, where typically ε ≤ 1/(# nodes). In addition, each node has a self-excitatory connection to itself, whose weight θ is typically 1.

The nodes update their net input and their output by the following equations:

net_i = θ·x_i − ε·Σ_{j≠i} x_j,   x_i = f(net_i),   with f(net) = max(0, net)

All nodes update their output simultaneously.

With each iteration, the neurons’ activations will decrease until only one neuron remains active.
This is the “winner” neuron that had the greatest initial input.

Maxnet is a biologically plausible implementation of a maximum-finding function.

In parallel hardware, it can be more efficient than a corresponding serial function.

We can add maxnet connections to the hidden layer of a CPN to find the winner neuron.

Example of a Maxnet with five neurons and θ = 1, ε = 0.2:
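Since the illustration is not reproduced here, a small simulation of exactly such a maxnet (Python with NumPy; the initial activations are illustrative):

import numpy as np

def maxnet(x, theta=1.0, eps=0.2):
    x = np.asarray(x, dtype=float)
    while (x > 0).sum() > 1:
        net = theta * x - eps * (x.sum() - x)   # self-excitation minus lateral inhibition
        x = np.maximum(0.0, net)                # f(net) = max(0, net)
    return int(np.argmax(x))

print(maxnet([0.5, 0.9, 1.0, 0.7, 0.3]))   # -> 2, the node with the greatest initial input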


63. Write a note on Kohonen maps? 5

Self-Organizing Maps (Kohonen Maps)

As you may remember, the counterpropagation network employs a combination of supervised and unsupervised learning. We will now study Self-Organizing Maps (SOMs) as examples of completely unsupervised learning (Kohonen, 1980). This type of artificial neural network is particularly similar to biological systems (as far as we understand them).

In the human cortex, multi-dimensional sensory input spaces (e.g., visual input, tactile input) are represented by two-dimensional maps. The projection from sensory inputs onto such maps is topology conserving. This means that neighboring areas in these maps represent neighboring areas in the sensory input space.

For example, neighboring areas in the sensory cortex are responsible for the arm and hand regions.

Such topology-conserving mapping can be achieved by SOMs:

• Two layers: input layer and output (map) layer
• Input and output layers are completely connected.
• Output neurons are interconnected within a defined neighborhood.
• A topology (neighborhood relation) is defined on the output layer.

Network structure:

Common output-layer structures:

A neighborhood function ϕ(i, k) indicates how closely neurons i and k in the output layer are connected to each other. Usually, a Gaussian function on the distance between the two neurons in the layer is used, e.g. ϕ(i, k) = exp(−d(i, k)² / (2σ²)).
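A sketch of one SOM update step with this Gaussian neighborhood (Python with NumPy; the learning rate and width σ are illustrative):

import numpy as np

def som_step(x, W, pos, lr=0.1, sigma=1.0):
    # W: (k, m) weight vectors; pos: (k, 2) grid coordinates of the map neurons
    winner = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
    d2 = ((pos - pos[winner]) ** 2).sum(axis=1)      # squared grid distance to the winner
    phi = np.exp(-d2 / (2 * sigma ** 2))             # neighborhood function phi(i, winner)
    W += lr * phi[:, None] * (x - W)                 # neighbors move toward the input
    return W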

64. Describe Adaptive resonance theory with an example? 10

Adaptive Resonance Theory (ART) networks perform completely unsupervised learning. Their competitive learning algorithm is similar to the first (unsupervised) phase of CPN learning. However, ART networks are able to grow additional neurons if a new input cannot be categorized appropriately with the existing neurons.

A vigilance parameter ρ determines the tolerance of this matching process. A greater value of ρ leads to more, smaller clusters (= sets of input samples associated with the same winner neuron).

ART networks consist of an input layer and an output layer. We will only discuss ART-1 networks, which receive binary input vectors.

Bottom-up weights are used to determine output-layer candidates that may best match the current input. Top-down weights represent the “prototype” for the cluster defined by each output neuron. A close match between input and prototype is necessary for categorizing the input. Finding this match can require multiple signal exchanges between the two layers in both directions until “resonance” is established or a new neuron is added.

ART networks tackle the stability-plasticity dilemma:

Plasticity: They can always adapt to unknown inputs (by creating a new cluster with a new weight vector) if the given input cannot be classified by existing clusters.

Stability: Existing clusters are not deleted by the introduction of new inputs (new clusters will just be created in addition to the old ones).

Problem: Clusters are of fixed size, depending on ρ.

A. Initialize each top-down weight t_l,j(0) = 1;

B. Initialize each bottom-up weight b_j,l(0) = 1/(1 + n);

C. While the network has not stabilized, do

1. Present a randomly chosen pattern x = (x1, …, xn) for learning.

2. Let the active set A contain all nodes; calculate y_j = b_j,1·x1 + … + b_j,n·xn for each node j ∈ A.

3. Repeat

a) Let j* be a node in A with largest y_j, with ties being broken arbitrarily;

b) Compute s* = (s*_1, …, s*_n) where s*_l = t_l,j*·x_l;

c) Compare the similarity between s* and x with the given vigilance parameter ρ:

if (Σ_l s*_l) / (Σ_l x_l) < ρ then remove j* from set A

else associate x with node j* and update the weights:

b_j*,l(new) = s*_l / (0.5 + Σ_k s*_k),   t_l,j*(new) = s*_l

until A is empty or x has been associated with some node j*;

4. If A is empty, then create a new node whose weight vector coincides with the current input pattern x;

end-while
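A compact sketch of this loop (Python with NumPy; a single pass over the patterns, with the standard ART-1 initializations assumed and nonzero binary inputs required):

import numpy as np

def art1(patterns, rho=0.7):
    protos = []                                    # top-down prototype vectors t_j
    for x in map(np.asarray, patterns):
        A = list(range(len(protos)))
        placed = False
        while A and not placed:
            # bottom-up scores y_j with b_j = t_j / (0.5 + |t_j|)
            y = [protos[j] @ x / (0.5 + protos[j].sum()) for j in A]
            j = A[int(np.argmax(y))]
            s = protos[j] * x                      # s*_l = t_l,j* x_l
            if s.sum() / x.sum() < rho:
                A.remove(j)                        # fails the vigilance test
            else:
                protos[j] = s                      # resonance: refine the prototype
                placed = True
        if not placed:
            protos.append(x.copy())                # grow a new output node
    return protos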

65. What is classification?

A. Deciding which features to use in a pattern recognition problem.

B. Deciding which class an input pattern belongs to.

C. Deciding which type of neural network to use.

Answer: B

66. What is a pattern vector?

A. A vector of weights w = [w1,w2, ...,wn]T in a neural network.

B. A vector of measured features x = [x1, x2, ..., xn]T of an input example.

C. A vector of outputs y = [y1, y2, ..., yn]T of a classifier.

Answer: B

67. For a minimum distance classifier with one input variable, what is the decision boundary between two classes?

A. A line.
B. A curve.

C. A plane.

D. A hyperplane.

E. A discriminant.

Answer: E

68. For a Bayes classifier with two input variables, what is the decision boundary between two classes?

A. A line.

B. A curve.

C. A plane.

D. A hypercurve.

E. A discriminant.

Answer: B

69. Design a minimum distance classifier with three classes using the following training data:

Then classify the test vector [0.5, −1]ᵀ with the trained classifier. Which class does this vector belong to?

A. Class 1.

B. Class 2.

C. Class 3.

Answer: B

70. The decision function for a minimum distance classifier is d_j(x) = xᵀm_j − (1/2)·m_jᵀm_j, where m_j is the prototype vector for class ω_j. What is the value of the decision function for each of the three classes in the above question for the test vector [0, −0.5]ᵀ?

A. d1(x) = −1.5, d2(x) = −0.5, d3(x) = −0.5.

B. d1(x) = −0.875, d2(x) = −0.375, d3(x) = −2.375.

C. d1(x) = −0.5, d2(x) = −0.5, d3(x) = −1.5.

D. d1(x) = −0.375, d2(x) = −0.875, d3(x) = −0.875.

Answer: A

71. Is the following statement true or false? “An outlier is an input pattern that is very different from the typical patterns of the same class”.


A. TRUE.

B. FALSE.

Answer: A

72. What is generalization?

A. The ability of a pattern recognition system to approximate the desired output values for pattern vectors which are not in the test set.

B. The ability of a pattern recognition system to approximate the desired output values for pattern vectors which are not in the training set.

C. The ability of a pattern recognition system to extrapolate on pattern vectors which are not in the training set.

D. The ability of a pattern recognition system to interpolate on pattern vectors which are not in the test set.

Answer: B

73. Is the following statement true or false? “In the human brain, roughly 70% of the neurons are used for input and output. The remaining 30% are used for information processing.”

A. TRUE.

B. FALSE.

Answer: B

74. Which of the following statements is the best description of supervised learning?

A. “If a particular input stimulus is always active when a neuron fires then its weight should be increased.”

B. “If a stimulus acts repeatedly at the same time as a response then a connection will form between the neurons involved. Later, the stimulus alone is sufficient to activate the response.”

C. “The connection strengths of the neurons involved are modified to reduce the error between the desired and actual outputs of the system.”
Answer: C

75. Is the following statement true or false? “Artificial neural networks are parallel computing devices consisting of many interconnected simple processors.”

A. TRUE.
B. FALSE.

Answer: A

76. Is the following statement true or false? “Knowledge is acquired by a neural network from its environment through a learning process, and this knowledge is stored in the connection strengths (weights) between processing units (neurons).”

A. TRUE.

B. FALSE

Answer: A

77. A neuron with 4 inputs has the weight vector w = [1, 2, 3, 4]ᵀ and a bias θ = 0 (zero). The activation function is linear, where the constant of proportionality equals 2 — that is, the activation function is given by f(net) = 2 × net. If the input vector is x = [4, 8, 5, 6]ᵀ then the output of the neuron will be

A. 1.

B. 56.

C. 59.

D. 112.

E. 118.

Answer: E
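(Working: net = 1·4 + 2·8 + 3·5 + 4·6 = 59, so the output is f(net) = 2 × 59 = 118.)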

78. Which of the following types of learning can used for training artificial neural networks?

A. Supervised learning.

B. Unsupervised learning.

C. Reinforcement learning.

D. All of the above answers.

E. None of the above answers.

Answer: D

79. Which of the following neural networks uses supervised learning?

A. Simple recurrent network.

B. Self-organizing feature map.

C. Hopfield network.

D. All of the above answers.

E. None of the above answers.


Answer: A

80. Which of the following algorithms can be used to train a single-layer feedforward network?

A. Hard competitive learning.

B. Soft competitive learning.

C. A genetic algorithm.

D. All of the above answers.

E. None of the above answers.

Answer: D

81. What is the biggest difference between Widrow & Hoff’s Delta Rule and the Perceptron Learning Rule for learning in a single-layer feedforward network?

A. There is no difference.

B. The Delta Rule is defined for step activation functions, but the Perceptron Learning Rule is defined for linear activation functions.

C. The Delta Rule is defined for sigmoid activation functions, but the Perceptron Learning Rule is defined for linear activation functions.

D. The Delta Rule is defined for linear activation functions, but the Perceptron Learning Rule is defined for step activation functions.

E. The Delta Rule is defined for sigmoid activation functions, but the Perceptron Learning Rule is defined for step activation functions.

Answer: D

82. Why are linearly separable problems interesting to neural network researchers?

A. Because they are the only problems that a neural network can solve successfully.

B. Because they are the only mathematical functions that are continuous.

C. Because they are the only mathematical functions that you can draw.

D. Because they are the only problems that a perceptron can solve successfully.

Answer: D

83. A perceptron with a unipolar step function has two inputs with weights w1 = 0.5 and w2 = −0.2, and a threshold θ = 0.3 (θ can therefore be considered as a weight for an extra input which is always set to −1).

For a given training example x = [0, 1]ᵀ, the desired output is 1. Does the perceptron give the correct answer (that is, is the actual output the same as the desired output)?

A. Yes.

B. No.

Answer: B
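(Working: net = 0.5·0 + (−0.2)·1 + 0.3·(−1) = −0.5 < 0, so the unipolar step function outputs 0, not the desired 1.)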

84. The perceptron in question 83 is trained using the learning rule Δw = η (d − y) x, where x is the input vector, η is the learning rate, w is the weight vector, d is the desired output, and y is the actual output.

What are the new values of the weights and threshold after one step of training with the input vector x = [0, 1]ᵀ and desired output 1, using a learning rate η = 0.5?

A. w1 = 0.5, w2 = −0.2, θ = 0.3.

B. w1 = 0.5, w2 = −0.3, θ = 0.2.

C. w1 = 0.5, w2 = 0.3, θ = −0.2.

D. w1 = 0.5, w2 = 0.3, θ = 0.7.

E. w1 = 1.0, w2 = −0.2, θ = −0.2.

Answer: C
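(Working: the actual output is y = 0, so Δw = 0.5 · (1 − 0) · [0, 1]ᵀ gives w2 = −0.2 + 0.5 = 0.3; the threshold’s input is fixed at −1, so θ becomes 0.3 + 0.5 · (1 − 0) · (−1) = −0.2.)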

85. The Perceptron Learning Rule states that “for any data set which is linearly separable, the Perceptron Convergence Theorem is guaranteed to find a solution in a finite number of steps.”

A. TRUE.

B. FALSE.

Answer: B

86. Is the following statement true or false? “The XOR problem can be solved by a multi-layer perceptron but a multi-layer perceptron with bipolar step activation functions cannot learn to do this.”

A. TRUE.

B. FALSE.

Answer: A

87. The Adaline neural network can be used as an adaptive filter for echo cancellation in telephone circuits. For the telephone circuit given in the above figure, which one of the following signals carries the corrected message sent from the human speaker on the left to the human listener on the right? (Assume that the person on the left transmits an outgoing voice signal and receives an incoming voice signal from the person on the right.)

A. The outgoing voice signal, s.

B. The delayed incoming voice signal, n.

C. The contaminated outgoing signal, s + n′.

D. The output of the adaptive filter, y.

E. The error of the adaptive filter, ε = s + n′ − y.

Answer: E

88. What is the credit assignment problem in the training of multi-layer feedforward networks?

A. The problem of adjusting the weights for the output layer.

B. The problem of adapting the neighbours of the winning unit.

C. The problem of defining an error function for linearly inseparable problems.

D. The problem of avoiding local minima in the error function.

E. The problem of adjusting the weights for the hidden layers.

Answer: E

89. Is the following statement true or false? “The generalized Delta rule solves the credit

assignment problem in the training of multi-layer feedforward networks.”

A. TRUE.

B. FALSE.

Answer: A

90. A common technique for training MLFF networks is to calculate the generalization error on a separate data set after each epoch of training. Training is stopped when the generalization error starts to decrease. This technique is called

A. Boosting.

B. Momentum.

C. Hold-one-out.

D. Early stopping.

E. None of the above answers.

Answer: E

91. Which of the following statements is NOT true for an autoassociative feedforward network with a single hidden layer of neurons?

A. During training, the target output vector is the same as the input vector.

B. It is important to use smooth, non-decreasing activation functions in the hidden units.

C. The network could be trained using the backpropagation algorithm, but care must be taken to deal with the problem of local minima.

D. After training, the hidden units give a representation that is equivalent to the principal components of the training data, removing non-redundant parts of the input data.

E. The trained network can be split into two machines: the first layer of weights compresses the input pattern (encoder), and the second layer of weights reconstructs the full pattern (decoder).

Answer: D

92. Which of the following statements is NOT true for a simple recurrent network (SRN)?

A. The training examples must be presented to the network in the correct order.

B. The test examples must be presented to the network in the correct order.

C. This type of network can predict the next chunk of data in the series from the past history of data.

D. The hidden units encode an internal representation of the data in the series that precedes the current input.

E. The number of context units should be the same as the number of input units.

Answer: E

93. How many hidden layers are there in an autoassociative Hopfield network?

A. None (0).

B. One (1).

C. Two (2).

Answer: A

94. A Hopfield network has 20 units. How many adjustable parameters does this network contain?

A. 95

B. 190

C. 200

D. 380

E. 400
Answer: B
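(Working: a Hopfield network has symmetric weights and no self-connections, so 20 units give 20 · 19 / 2 = 190 adjustable weights.)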

95. Is the following statement true or false? “Patterns within a cluster should be similar in some way.”

A. TRUE.

B. FALSE.

Answer: A

96. Is the following statement true or false? “Clusters that are similar in some way should be far apart.”

A. TRUE.

B. FALSE.

Answer: B

97. Which of the following statements is NOT true for hard competitive learning (HCL)?

A. There is no target output in HCL.

B. There are no hidden units in a HCL network.

C. The input vectors are often normalized to have unit length — that is, ‖x‖ = 1.

D. The weights of the winning unit k are adapted by Δwk = η (x − wk), where x is the input vector.

E. The weights of the neighbours j of the winning unit are adapted by Δwj = ηj (x − wj), where ηj < η and j ≠ k.

Answer: E

98. Which of the following statements is NOT true for a self-organizing feature map (SOFM)?

A. The size of the neighbourhood is decreased during training.

B. The SOFM training algorithm is based on soft competitive learning.

C. The network can grow during training by adding new cluster units when required.

D. The cluster units are arranged in a regular geometric pattern such as a square or ring.

E. The learning rate is a function of the distance of the adapted units from the winning unit.

Answer: C

99. Which of the following statements is the best description of reproduction?

A. Randomly change a small part of some strings.

B. Randomly generate small initial values for the weights.

C. Randomly pick new strings to make the next generation.

D. Randomly combine the genetic information from 2 strings.


Answer: C

100. Which of the following statements is the best description of mutation?

A. Randomly change a small part of some strings.

B. Randomly pick new strings to make the next generation.

C. Randomly generate small initial values for the weights.

D. Randomly combine the genetic information from 2 strings.

Answer: A

101. Ranking is a technique used for

A. deleting undesirable members of the population.

B. obtaining the selection probabilities for reproduction.

C. copying the fittest member of each population into the mating pool.

D. preventing too many similar individuals from surviving to the next generation.

Answer: B

102. Is the following statement true or false? “A genetic algorithm could be used to search the space of possible weights for training a recurrent artificial neural network, without requiring any gradient information.”

A. TRUE.

B. FALSE.

Answer: A

103. Is the following statement true or false? “Learning produces changes within an agent that over time enables it to perform more effectively within its environment.”

A. TRUE.

B. FALSE.

Answer: A

104. Which application in intelligent mobile robots made use of a single-layer feedforward network?

A. Goal finding.

B. Path planning.

C. Wall following.

D. Route following.

E. Gesture recognition.
Answer: C

105. Which application in intelligent mobile robots made use of a self-organizing feature map?

A. Goal finding.

B. Path planning.

C. Wall following.

D. Route following.

E. Gesture recognition.

Answer: D

106. Which application in intelligent mobile robots made use of a genetic algorithm?

A. Goal finding.

B. Path planning.

C. Wall following.

D. Route following.

E. Gesture recognition.

Answer: B
