NNDL Internal I Key
Q No.    Description of the Question    Marks allotted
1. What is representation learning? 1
2. Explain the gradient descent algorithm in deep learning. 1
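The core update that an answer to Q2 is expected to contain, with learning rate \eta and loss J(\theta):

\theta \leftarrow \theta - \eta \, \nabla_\theta J(\theta)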
3. Illustrate sigmoid neuron. 1
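The defining formula for Q3: a sigmoid neuron outputs a smooth value in (0, 1) instead of the perceptron's hard 0/1 step, for input x, weights w, and bias b:

y = \sigma(w \cdot x + b) = \frac{1}{1 + e^{-(w \cdot x + b)}}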
4. Explain how early stopping regularizes deep networks. 1
5. What is adversarial training? 1
6. Illustrate the importance of proper parameter initialization in DL training. 1
7. Build OR logic gate functionality with a perceptron using 3 inputs. 3
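One possible key for Q7, sketched in Python: a single step-activation perceptron with unit weights and a threshold of 0.5 realizes the 3-input OR function (the specific weights and bias are one illustrative choice, not the only valid one).

# Step-activation perceptron computing OR over three binary inputs.
def perceptron_or(x1, x2, x3, w=(1, 1, 1), b=-0.5):
    s = w[0] * x1 + w[1] * x2 + w[2] * x3 + b   # weighted sum plus bias
    return 1 if s > 0 else 0                    # step activation

# Verify against the full 8-row truth table of 3-input OR.
for x1 in (0, 1):
    for x2 in (0, 1):
        for x3 in (0, 1):
            assert perceptron_or(x1, x2, x3) == (1 if (x1 or x2 or x3) else 0)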
8. Consider a fully connected network with 3 inputs x1, x2, x3. Suppose there is one hidden layer with 4 neurons having sigmoid activation functions. Further, the output layer is a softmax layer. Assume that all the weights in the network are set to 1 and all biases are set to 0. Write down the output of the network as a function of x = [x1, x2, x3]. 3
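A worked sketch of the expected derivation for Q8, with the number of softmax output units kept symbolic as K since the question does not fix it:

h_j = \sigma(x_1 + x_2 + x_3), \qquad j = 1, \dots, 4
z_k = \sum_{j=1}^{4} h_j = 4\,\sigma(x_1 + x_2 + x_3), \qquad k = 1, \dots, K
y_k = \frac{e^{z_k}}{\sum_{k'} e^{z_{k'}}} = \frac{1}{K}

Because every unit shares the same weights and biases, all softmax pre-activations are equal, so the output distribution is uniform over the K classes for every input x.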
9. Produce the update equations for the Nesterov Momentum optimization algorithm.
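A standard formulation of the Nesterov momentum updates expected in Q9, with momentum coefficient \mu, learning rate \eta, and objective J(\theta):

v_{t+1} = \mu\, v_t - \eta\, \nabla_\theta J(\theta_t + \mu\, v_t)
\theta_{t+1} = \theta_t + v_{t+1}

The gradient is evaluated at the look-ahead point \theta_t + \mu v_t rather than at \theta_t, which is what distinguishes Nesterov momentum from classical momentum.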
11.a) Construct gradient formulae for the parameters in the neural network described below using the back propagation algorithm. 3
Consider an MLP having 2 inputs and 2 hidden layers, each having 3 neurons, and one neuron in the output layer. Assume the hidden layers use ReLU activation and the output layer uses sigmoid.
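A sketch of the gradient formulae for 11.a), assuming binary cross-entropy loss L = -[y ln(ŷ) + (1 - y) ln(1 - ŷ)] (the loss is not stated in the question). Writing z^{(l)} = W^{(l)} a^{(l-1)} + b^{(l)} with a^{(0)} = x, ReLU activations a^{(1)}, a^{(2)}, and sigmoid output ŷ = σ(z^{(3)}), back propagation gives:

\delta^{(3)} = \hat{y} - y, \qquad \frac{\partial L}{\partial W^{(3)}} = \delta^{(3)} (a^{(2)})^T, \qquad \frac{\partial L}{\partial b^{(3)}} = \delta^{(3)}
\delta^{(2)} = \big((W^{(3)})^T \delta^{(3)}\big) \odot \mathbf{1}[z^{(2)} > 0], \qquad \frac{\partial L}{\partial W^{(2)}} = \delta^{(2)} (a^{(1)})^T, \qquad \frac{\partial L}{\partial b^{(2)}} = \delta^{(2)}
\delta^{(1)} = \big((W^{(2)})^T \delta^{(2)}\big) \odot \mathbf{1}[z^{(1)} > 0], \qquad \frac{\partial L}{\partial W^{(1)}} = \delta^{(1)} x^T, \qquad \frac{\partial L}{\partial b^{(1)}} = \delta^{(1)}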
11.b) Consider the above network; calculate the weight updates for one SGD iteration assuming all the weights are initialized to 0.1, biases to -0.1, input <1,1>, and label 0. 3
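A numerical sketch of one SGD iteration for 11.b) in Python (NumPy), under two assumptions the question leaves open: binary cross-entropy loss and a learning rate of 0.1.

import numpy as np

# One SGD step for Q11.b on the 2-3-3-1 MLP (ReLU hidden layers, sigmoid output).
# Assumptions not fixed by the question: binary cross-entropy loss, learning rate 0.1.
def relu(z):    return np.maximum(z, 0.0)
def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

x, y = np.array([1.0, 1.0]), 0.0
W1, b1 = np.full((3, 2), 0.1), np.full(3, -0.1)
W2, b2 = np.full((3, 3), 0.1), np.full(3, -0.1)
w3, b3 = np.full(3, 0.1), -0.1
lr = 0.1                                    # assumed learning rate

# Forward pass
z1 = W1 @ x + b1; a1 = relu(z1)             # z1 = 0.1 for every neuron
z2 = W2 @ a1 + b2; a2 = relu(z2)            # z2 = -0.07, so a2 = 0 (ReLUs inactive)
z3 = w3 @ a2 + b3; y_hat = sigmoid(z3)      # z3 = -0.1

# Backward pass (BCE): delta at the sigmoid output is y_hat - y
d3 = y_hat - y
gw3, gb3 = d3 * a2, d3
d2 = (w3 * d3) * (z2 > 0)                   # zero here, because z2 < 0
gW2, gb2 = np.outer(d2, a1), d2
d1 = (W2.T @ d2) * (z1 > 0)
gW1, gb1 = np.outer(d1, x), d1

# SGD parameter updates
W1 -= lr * gW1; b1 -= lr * gb1
W2 -= lr * gW2; b2 -= lr * gb2
w3 -= lr * gw3; b3 -= lr * gb3
print("y_hat =", y_hat, " updated output bias =", b3)

With these initial values the second hidden layer's ReLUs are inactive (z2 < 0), so in this first iteration only the output bias receives a non-zero gradient; all weight gradients are zero.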
12.a) Explain dropout as a practical ensemble method in deep learning. 3
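A minimal inverted-dropout sketch (the keep probability and toy activations are illustrative) to accompany the ensemble interpretation in 12.a): each training pass samples a random thinned sub-network, and the full network used at test time approximates averaging the predictions of those sub-networks.

import numpy as np

rng = np.random.default_rng(0)

# Inverted dropout: each training pass samples a random "thinned" sub-network;
# rescaling by 1/p_keep keeps the expected activation equal to the full network's.
def dropout(a, p_keep=0.8, training=True):
    if not training:
        return a                          # full ("ensemble-averaged") network at test time
    mask = rng.random(a.shape) < p_keep   # drop each unit independently with prob 1 - p_keep
    return a * mask / p_keep              # inverted scaling

h = np.ones((2, 5))                       # toy hidden-layer activations
print(dropout(h))                         # a randomly thinned copy during training
print(dropout(h, training=False))         # unchanged activations at inference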
12.b) Illustrate parameter sharing as a regularization technique in the case of image classification. 3
1. Convolutional Layers
2. Feature Maps
3. Reduced Parameter Space
4. Translation Invariance
5. Example: In a CNN for image classification, the weights of a filter responsible for detecting edges or textures are shared across all regions of the input image. This allows the network to learn generic features that are useful for classification without being overly sensitive to the exact location of those features. A parameter-count sketch follows this list.
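A small count comparison illustrating the reduced parameter space (all sizes here are illustrative): a shared 3x3 convolutional filter bank needs a few hundred parameters where an equivalent fully connected layer would need tens of millions.

# Parameter sharing: conv layer vs. fully connected layer on a 32x32 RGB input.
H, W, C_in, C_out, k = 32, 32, 3, 16, 3

conv_params = C_out * (k * k * C_in + 1)                         # one shared k x k filter (plus bias) per output channel
dense_params = (H * W * C_out) * (H * W * C_in) + H * W * C_out  # one weight per input-output unit pair, plus biases

print(conv_params)    # 448
print(dense_params)   # 50348032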