
1. Derive the weight change formulations for the weights w_j and w_ij for the ANN given below, assuming a tanh(x) activation function.
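A useful identity for this derivation: d tanh(x)/dx = 1 − tanh²(x), so for a node with output o = tanh(net) the local error term carries the factor (1 − o²) where the ordinary sigmoid would carry o(1 − o).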

2. Show that a Fourier series with a finite number of terms can be expressed as an ANN.
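One way to see this: a finite Fourier series f(x) = a_0 + Σ_{k=1..N} [a_k cos(kx) + b_k sin(kx)] can be read as a network with one hidden layer of sine/cosine units, fixed input-to-hidden weights k, and a linear output node that combines the hidden outputs with weights a_k and b_k.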
3. The symmetric sigmoid is defined as t(x) = 2s(x) − 1, where s(x) is the usual sigmoid function. Find the expressions for the weight corrections in a layered network in which the nodes use t(x) as the activation function.
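A useful identity for this problem: t'(x) = 2s(x)(1 − s(x)) = (1 − t(x)²)/2, so in the weight-correction expressions the usual sigmoid factor o(1 − o) is replaced by (1 − o²)/2 when o = t(net).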
4. Express the network function in terms of the weights and the function f, and develop a weight change formulation considering (a) all f's are linear perceptrons, and (b) all f's are sigmoids.
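Assuming the network in question is a standard single-hidden-layer feedforward net, the network function can be written as y = f(Σ_j w_j f(Σ_i w_ij x_i)); when every f is linear the composition collapses to a single linear map of the inputs, whereas with sigmoid f the weight changes follow the usual delta-rule derivation.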

5. The learning procedure requires that the change in a weight be proportional to the error gradient (true gradient descent). For practical purposes we choose a learning rate that is as large as possible without leading to oscillation during iteration. One way to avoid oscillation at a large learning rate is to make the change in a weight depend on the past weight change by adding a momentum term, as shown below:

Δw_ij(t+1) = −η ∂E/∂w_ij + α Δw_ij(t)

Derive the weight change formulation for an m-input, n-output network with a single hidden layer, using a sigmoidal activation function.
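A minimal numerical sketch of the requested update rule, assuming squared error and sigmoid units throughout; the sizes m, h, n and the constants eta (learning rate) and alpha (momentum) are illustrative values, not taken from the problem.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

m, h, n = 3, 4, 2             # inputs, hidden units, outputs (example sizes)
eta, alpha = 0.1, 0.9         # learning rate and momentum coefficient

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(h, m))   # input-to-hidden weights w_ij
W2 = rng.normal(scale=0.1, size=(n, h))   # hidden-to-output weights
dW1_prev = np.zeros_like(W1)              # previous weight changes (momentum memory)
dW2_prev = np.zeros_like(W2)

x = rng.normal(size=m)        # one training input
target = rng.normal(size=n)   # its desired output

# Forward pass.
hid = sigmoid(W1 @ x)
out = sigmoid(W2 @ hid)

# Backward pass for E = 0.5 * sum((target - out)**2).
delta_out = (out - target) * out * (1.0 - out)      # error term at the output layer
delta_hid = (W2.T @ delta_out) * hid * (1.0 - hid)  # backpropagated to the hidden layer

# Momentum update: dw(t+1) = -eta * dE/dw + alpha * dw(t).
dW2 = -eta * np.outer(delta_out, hid) + alpha * dW2_prev
dW1 = -eta * np.outer(delta_hid, x) + alpha * dW1_prev
W2 += dW2
W1 += dW1
dW2_prev, dW1_prev = dW2, dW1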
