Deep Learning Assignment-2
The activation function used to train the network is f(x) = 1 / (1 + e^(-(w·x + b))), and the loss function used is the squared error loss. With respect to this, show using the gradient descent algorithm how the values of 'w' and 'b' vary if the initial setup is:
i) 'w' is chosen negative and 'b' is negative.
ii) 'w' is chosen positive and 'b' is also chosen positive.
iii) 'w' is chosen positive and 'b' is chosen negative.
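The three cases above can be explored numerically. The sketch below runs gradient descent on a single sigmoid neuron with squared error loss; the training sample (x = 1, target y = 1), learning rate, and step count are illustrative assumptions, not given in the question:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(w, b, x=1.0, y=1.0, eta=1.0, steps=100):
    """Gradient descent on L = (f(x) - y)^2 for a single sample.

    f(x) = sigmoid(w*x + b); the chain rule gives
    dL/dw = 2*(f - y) * f*(1 - f) * x and dL/db = 2*(f - y) * f*(1 - f).
    """
    for _ in range(steps):
        f = sigmoid(w * x + b)
        grad = 2.0 * (f - y) * f * (1.0 - f)
        w -= eta * grad * x
        b -= eta * grad
    return w, b

# The three initial setups from the question (magnitudes are assumptions)
for w0, b0 in [(-2.0, -2.0), (2.0, 2.0), (2.0, -2.0)]:
    w, b = train(w0, b0)
    print(f"start (w={w0:+.1f}, b={b0:+.1f}) -> w={w:.3f}, b={b:.3f}")
```

Note that case (i) starts on the saturated left tail of the sigmoid, where f·(1 − f) is tiny, so the first updates are very small; this is the slow-convergence behavior the question is probing.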
Q B3
Differentiate between the ReLU and Sigmoid activation functions. Also write their mathematical expressions and draw rough graphs. Why is ReLU preferred over Sigmoid?
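The standard expressions are sigmoid(z) = 1 / (1 + e^(-z)) and ReLU(z) = max(0, z). A minimal numerical comparison of the two activations and their derivatives, illustrating why the sigmoid's vanishing gradient makes ReLU preferable in deep networks:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    return max(0.0, z)

def sigmoid_grad(z):
    # Derivative: sigmoid(z) * (1 - sigmoid(z)); peaks at 0.25 when z = 0
    s = sigmoid(z)
    return s * (1.0 - s)

def relu_grad(z):
    # Derivative: 1 for z > 0, 0 for z < 0 (undefined at exactly 0)
    return 1.0 if z > 0 else 0.0

# Sigmoid saturates: its gradient vanishes for large |z|,
# while ReLU keeps a constant gradient of 1 for all z > 0.
for z in (0.0, 5.0, 10.0):
    print(f"z={z:5.1f}: sigmoid'={sigmoid_grad(z):.6f}, relu'={relu_grad(z)}")
```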
Q C1
What is the role of Backpropagation in a Neural Network? Consider the network given below, with inputs A = 0.35 and B = 0.9, and find the output Y after 2 iterations, where the Learning Rate (η) is 0.1 and the Target (Y) = 0.5.
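Since the network figure is not reproduced here, the following sketch assumes a 2-2-1 fully connected sigmoid network; the initial weights w1..w6 are illustrative assumptions, while the inputs, learning rate, and target come from the question:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Inputs, target, and learning rate from the question;
# the weights below are assumed, as the original figure is not shown.
A, B, target, eta = 0.35, 0.9, 0.5, 0.1
w = {"w1": 0.1, "w2": 0.8, "w3": 0.4, "w4": 0.6, "w5": 0.3, "w6": 0.9}

def forward(w):
    h1 = sigmoid(w["w1"] * A + w["w3"] * B)  # hidden neuron 1
    h2 = sigmoid(w["w2"] * A + w["w4"] * B)  # hidden neuron 2
    y = sigmoid(w["w5"] * h1 + w["w6"] * h2)  # output neuron
    return h1, h2, y

for it in range(2):
    h1, h2, y = forward(w)
    # Output delta for squared error L = (1/2) * (y - target)^2
    d_out = (y - target) * y * (1.0 - y)
    # Hidden deltas propagated back through the (pre-update) output weights
    d_h1 = d_out * w["w5"] * h1 * (1.0 - h1)
    d_h2 = d_out * w["w6"] * h2 * (1.0 - h2)
    # Gradient-descent weight updates
    w["w5"] -= eta * d_out * h1
    w["w6"] -= eta * d_out * h2
    w["w1"] -= eta * d_h1 * A
    w["w3"] -= eta * d_h1 * B
    w["w2"] -= eta * d_h2 * A
    w["w4"] -= eta * d_h2 * B
    print(f"iteration {it + 1}: y = {y:.4f}")
```

With these assumed weights the initial output is above the target, so each iteration nudges every weight downward and the output moves toward 0.5; the exact numbers depend on the weights in the original figure.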