
[Fig. 1: a single-input, single-output network with input x, bias inputs, weights w1–w10, and output f(x). Fig. 2: the target function f(x) to be approximated.]
1.
a) A single-input, single-output neural network is shown in Figure 1. Assuming sigmoidal hidden-unit and linear output-unit activation functions, what values of the weights will approximate the function in Figure 2? Determine the output when the input is x = 0.5 and when x = 2. [12]
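For reference, here is a minimal NumPy sketch of the Fig. 1 architecture, assuming three sigmoidal hidden units (so the ten weights split into six hidden-layer parameters and four output-layer parameters); the weight values below are illustrative placeholders, not the solution:

```python
import numpy as np

def sigmoid(z):
    # logistic sigmoid used by the hidden units
    return 1.0 / (1.0 + np.exp(-z))

def network(x, w):
    # w[0:3]: input-to-hidden weights, w[3:6]: hidden biases
    # w[6:9]: hidden-to-output weights, w[9]: output bias
    h = sigmoid(w[0:3] * x + w[3:6])        # three sigmoidal hidden units
    return float(np.dot(w[6:9], h) + w[9])  # linear output unit

# illustrative placeholder weights (to be determined from Fig. 2)
w = np.array([1.0, 1.0, 1.0, 0.0, -1.0, -2.0, 1.0, 1.0, 1.0, 0.0])
print(network(0.5, w), network(2.0, w))
```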
b) You are given two types of activation functions, hardlim and linear, defined as follows (a small code sketch of both appears after this question):

hardlim:
$$y_i = \begin{cases} 1 & \text{if } w_{i0} + w_i x_i > 0 \\ 0 & \text{otherwise} \end{cases}$$

linear:
$$y_i = w_{i0} + w_i x_i$$
Which of the following functions can be exactly represented by a neural network with one hidden layer, using hardlim and/or linear activation functions (meaning the two layers could both use hardlim, both use linear, or hardlim for one layer and linear for the other)? For each case, justify your answer: if yes, draw the neural network, give the choice of activation function for each layer, and briefly explain how the function is represented by your network; if no, explain why not.
1. Polynomials of degree one
2. Polynomials of degree two
3. Reverse hardlim, defined as
$$y_i = \begin{cases} 0 & \text{if } w_{i0} + w_i x_i > 0 \\ 2 & \text{otherwise} \end{cases}$$
[6]
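As a concrete reading of the two definitions, a short sketch (weights and inputs are illustrative); note that composing two linear layers yields another affine map, which bears on case 1:

```python
import numpy as np

def hardlim(x, w, w0):
    # hardlim unit: outputs 1 when the affine input is positive, else 0
    return np.where(w0 + w * x > 0, 1.0, 0.0)

def linear(x, w, w0):
    # linear unit: the affine map itself
    return w0 + w * x

x = np.array([0.5, 2.0])
hidden = linear(x, w=3.0, w0=1.0)        # linear hidden layer
output = linear(hidden, w=2.0, w0=-1.0)  # linear output layer
print(output)  # a composition of affine maps is still affine in x
```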
c) Briefly explain regularization in multi-layer perceptrons (MLP). [2]
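For instance, the common L2 (weight-decay) penalty augments the training error so that large weights are discouraged; a sketch of the regularized objective, with $\lambda$ the regularization strength:

```latex
% L2 (weight-decay) regularization of an MLP's training error E(w):
% the penalty shrinks the weights, biasing the network toward
% smoother functions and reducing overfitting.
\tilde{E}(\mathbf{w}) = E(\mathbf{w}) + \frac{\lambda}{2} \sum_{j} w_j^2
```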

2.
a) Mention the common characteristics of derivative-free optimization methods. [4]
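For context, a minimal random-search sketch, one of the simplest derivative-free methods; the objective, step size, and iteration count are illustrative:

```python
import numpy as np

def random_search(f, x0, step=0.5, iters=1000, seed=0):
    # derivative-free: uses only function evaluations, never gradients
    rng = np.random.default_rng(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        candidate = x + step * rng.standard_normal(x.shape)
        f_cand = f(candidate)
        if f_cand < fx:  # greedy acceptance of improving moves
            x, fx = candidate, f_cand
    return x, fx

# illustrative quadratic objective; minimum at the origin
x_best, f_best = random_search(lambda v: np.sum(v ** 2), np.array([3.0, -2.0]))
print(x_best, f_best)
```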

b) Show that in a competitive network, the distance between the weight vector and the input vector is minimized when the output of the neuron is maximized. Explain a method to implement a conscience mechanism in competitive neural networks. [4+3=7]
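The key step is the expansion of the squared distance; a sketch, assuming the weight vectors are normalized ($\|w\| = 1$) and the neuron's output is the inner product $w^t x$:

```latex
\|w - x\|^2 = \|w\|^2 - 2\, w^t x + \|x\|^2 = 1 - 2\, w^t x + \|x\|^2
% For a fixed input x, the distance is minimized exactly when the
% output w^t x is maximized.
```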

c) Training examples of a classification problem are the following:
Positive examples at $x_1 = [0\ 0]^t$ and $x_2 = [2\ 2]^t$, and
Negative examples at $x_3 = [h\ 1]^t$ and $x_4 = [0\ 3]^t$,
where we treat $0 \le h \le 3$ as a parameter.

(1) How large can $h > 0$ be so that the training points are still linearly separable?
(2) What is the margin achieved by the maximum-margin boundary, as a function of $h$? [9]
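One way to set up the geometry, as a sketch rather than a full solution: the positive points lie on the line $x_2 = x_1$, and the negative point nearest that line is $x_3 = [h\ 1]^t$:

```latex
% distance from x_3 = (h, 1) to the line x_2 = x_1:
d(x_3) = \frac{|1 - h|}{\sqrt{2}}
% the two convex hulls meet once h >= 1, so separability needs h < 1;
% the maximum margin is half the distance between the hulls:
\gamma(h) = \frac{1 - h}{2\sqrt{2}}, \qquad 0 \le h < 1
```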
