1. Given the momentum update rule

   ∆w_ij(t+1) = −η ∂E/∂w_ij + α ∆w_ij(t),

   derive the weight change formulation for the weights w_j and w_ij for the ANN given below, assuming the tanh(x) activation function.
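Since the network figure is not reproduced here, the following is only a minimal numerical sketch of the momentum rule above applied to a single tanh unit with squared error; the variable names, learning rate, and momentum coefficient are illustrative assumptions, not part of the problem.

```python
import numpy as np

# Single tanh unit: y = tanh(w.x), squared error E = 0.5*(y - target)^2.
# Momentum update: dw(t+1) = -eta * dE/dw + alpha * dw(t)
def momentum_step(w, x, target, prev_dw, eta=0.1, alpha=0.5):
    y = np.tanh(w @ x)
    # dE/dw = (y - target) * (1 - y^2) * x, since tanh'(a) = 1 - tanh(a)^2
    grad = (y - target) * (1.0 - y**2) * x
    dw = -eta * grad + alpha * prev_dw
    return w + dw, dw

w = np.array([0.5, -0.3])
x = np.array([1.0, 2.0])
dw = np.zeros_like(w)
for _ in range(500):
    w, dw = momentum_step(w, x, target=0.8, prev_dw=dw)
print(np.tanh(w @ x))  # should converge toward the target 0.8
```

Note how the momentum term α ∆w(t) carries the previous step forward, damping the oscillation that a large η alone would cause.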
2. Show that a Fourier series with a finite number of terms can be expressed as an ANN.
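As a hint toward the construction, a truncated series f(x) = a_0 + Σ_k (a_k cos kx + b_k sin kx) is exactly a one-hidden-layer network: the input weights are the fixed integer frequencies k, the hidden units use cos/sin activations, and the output unit is linear with bias a_0. A sketch (all names are illustrative):

```python
import numpy as np

# Truncated Fourier series viewed as a one-hidden-layer ANN:
# hidden unit j computes cos(k_j * x) or sin(k_j * x) (input weight = frequency k_j);
# the output unit is a linear combination with weights a_k, b_k and bias a_0.
def fourier_ann(x, a0, a, b):
    k = np.arange(1, len(a) + 1)          # integer frequencies (input-to-hidden weights)
    hidden = np.concatenate([np.cos(k * x), np.sin(k * x)])
    out_w = np.concatenate([a, b])        # hidden-to-output weights
    return a0 + out_w @ hidden            # linear output unit with bias a0

# Check against a directly evaluated series: f(x) = 1 + 2 cos x - 0.5 sin 3x
a = np.array([2.0, 0.0, 0.0]); b = np.array([0.0, 0.0, -0.5])
x = 0.7
direct = 1.0 + 2.0*np.cos(x) - 0.5*np.sin(3*x)
print(np.isclose(fourier_ann(x, 1.0, a, b), direct))  # True
```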
3. The symmetric sigmoid is defined as t(x) = 2s(x) − 1, where s(·) is the usual sigmoid function. Find the expressions for the weight corrections in a layered network in which the nodes use t(x) as the activation function.
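The key identity for this derivation is t'(x) = 2s'(x) = 2s(x)(1 − s(x)) = (1 − t(x)²)/2, which replaces the usual s(1 − s) factor in the weight-correction expressions. A quick numerical check of that identity (a sketch, not the full derivation):

```python
import numpy as np

def s(x):            # usual sigmoid
    return 1.0 / (1.0 + np.exp(-x))

def t(x):            # symmetric sigmoid t(x) = 2 s(x) - 1
    return 2.0 * s(x) - 1.0

# Derivative used in the weight corrections:
# t'(x) = 2 s'(x) = 2 s(x)(1 - s(x)) = (1 - t(x)^2) / 2
x = np.linspace(-4, 4, 9)
numeric = (t(x + 1e-6) - t(x - 1e-6)) / 2e-6    # central difference
analytic = (1.0 - t(x)**2) / 2.0
print(np.allclose(numeric, analytic, atol=1e-6))  # True
```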
4. Express the network function in terms of the weights and the function f, and develop a weight change formulation considering (a) all f's are linear perceptrons, and (b) all f's are sigmoids.
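The referenced network figure is not reproduced here, but for a single unit y = f(w·x) with squared error the two cases differ only in the f' factor of the gradient; a sketch under that assumption (names are illustrative):

```python
import numpy as np

# For y = f(w.x) and E = 0.5*(y - target)^2, gradient descent gives
#   delta_w = -eta * (y - target) * f'(w.x) * x
# (a) linear f: f'(a) = 1       (b) sigmoid f: f'(a) = f(a)(1 - f(a))
def delta_rule(w, x, target, eta, kind):
    a = w @ x
    if kind == "linear":
        y, fprime = a, 1.0
    else:                       # sigmoid
        y = 1.0 / (1.0 + np.exp(-a))
        fprime = y * (1.0 - y)
    return w - eta * (y - target) * fprime * x

w = np.array([0.2, -0.1]); x = np.array([1.0, 0.5])
print(delta_rule(w, x, 1.0, 0.1, "linear"))
print(delta_rule(w, x, 1.0, 0.1, "sigmoid"))
```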
5. The learning procedure requires that the change in weight be proportional to the true gradient descent. For practical purposes, we choose a learning rate that is as large as possible without leading to oscillation during iteration. One way to avoid oscillation at a large learning rate is to make the change in weight dependent on the past weight change, by adding a momentum term as given below:
   ∆w_ij(t+1) = −η ∂E/∂w_ij + α ∆w_ij(t)
Derive the weight change formulation for an m-input, n-output network with a single hidden layer, using the sigmoidal activation function.
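The derived update rules can be summarized in code. Assuming squared error E = 0.5‖y − target‖² and the layer shapes shown (these assumptions, the hidden-layer size, and the learning rate are illustrative), the standard result is delta_o = (y − target)·y·(1 − y) at the output layer and delta_h = (W2ᵀ delta_o)·z·(1 − z) at the hidden layer:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# One gradient-descent step for an m-input, h-hidden, n-output sigmoid network:
#   output layer: delta_o = (y - target) * y * (1 - y)
#   hidden layer: delta_h = (W2^T delta_o) * z * (1 - z)
#   updates:      W2 -= eta * outer(delta_o, z),  W1 -= eta * outer(delta_h, x)
def backprop_step(W1, W2, x, target, eta=0.5):
    z = sigmoid(W1 @ x)                 # hidden activations, shape (h,)
    y = sigmoid(W2 @ z)                 # outputs, shape (n,)
    delta_o = (y - target) * y * (1 - y)
    delta_h = (W2.T @ delta_o) * z * (1 - z)
    W1 = W1 - eta * np.outer(delta_h, x)
    W2 = W2 - eta * np.outer(delta_o, z)
    return W1, W2

rng = np.random.default_rng(0)
m, h, n = 3, 4, 2
W1, W2 = rng.normal(size=(h, m)), rng.normal(size=(n, h))
x, target = rng.normal(size=m), np.array([0.2, 0.9])
for _ in range(2000):
    W1, W2 = backprop_step(W1, W2, x, target)
print(sigmoid(W2 @ sigmoid(W1 @ x)))   # should approach the target for this example
```

The same structure extends directly to bias terms and to the momentum form of problem 5 by carrying the previous ∆W alongside each weight matrix.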