Ass10 Soln
Deep Learning
Assignment- Week 10
TYPE OF QUESTION: MCQ/MSQ
Number of questions: 10 Total mark: 10 X 1 = 10
______________________________________________________________________________
QUESTION 1:
Which of the following is NOT an advantage of batch normalization?
a. Prevent overfitting
b. Faster convergence
c. Faster inference time
d. Prevent covariate shift
Correct Answer: c
Detailed Solution:
Inference does not become faster with batch normalization. The extra normalization operations add
computational burden, so inference time actually increases.
____________________________________________________________________________
QUESTION 2:
A neural network has 3 neurons in a hidden layer. Activations of the neurons for three batches
are respectively. What will be the value of mean if we use batch normalization in
this layer?
a.
b.
c.
d.
Correct Answer: a
Detailed Solution:
Batch normalization computes the mean of each neuron's activations across the batch, so one mean
value is obtained per neuron.
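As an illustration, here is a minimal NumPy sketch of how the per-neuron batch mean is computed; the activation values below are made up, since the numbers from the question are not reproduced in this text.

```python
import numpy as np

# Hypothetical activations: rows = samples in the batch, columns = the 3 neurons.
activations = np.array([[1.0, 2.0, 3.0],
                        [2.0, 4.0, 6.0],
                        [3.0, 6.0, 9.0]])

# Batch normalization computes statistics per neuron, across the batch dimension.
batch_mean = activations.mean(axis=0)   # one mean per neuron -> [2. 4. 6.]
batch_var = activations.var(axis=0)     # one variance per neuron

eps = 1e-5
normalized = (activations - batch_mean) / np.sqrt(batch_var + eps)
print(batch_mean)
```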
______________________________________________________________________________
QUESTION 3:
How can we prevent underfitting?
Correct Answer: b
Detailed Solution:
Underfitting happens when the features are not expressive enough to capture the data
distribution. We need to increase the feature size so that the data can be fitted well.
______________________________________________________________________________
QUESTION 4:
How do we generally calculate mean and variance during testing?
Correct Answer: c
Detailed Solution:
We calculate batch mean and variance statistics during training and keep running estimates of
them; during testing, these estimated (running) mean and variance are used instead of batch
statistics.
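A minimal sketch of this behaviour, assuming a simple running-average update with momentum 0.9 (the momentum value and function name are illustrative, not taken from the question):

```python
import numpy as np

momentum = 0.9
running_mean = np.zeros(3)
running_var = np.ones(3)

def batchnorm_forward(x, training):
    """Normalize x (batch_size x 3): batch stats while training, running stats while testing."""
    global running_mean, running_var
    eps = 1e-5
    if training:
        # Use the current mini-batch statistics and update the running estimates.
        mean, var = x.mean(axis=0), x.var(axis=0)
        running_mean = momentum * running_mean + (1 - momentum) * mean
        running_var = momentum * running_var + (1 - momentum) * var
    else:
        # At test time, fall back to the estimates accumulated during training.
        mean, var = running_mean, running_var
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(8, 3)
_ = batchnorm_forward(x, training=True)   # training pass updates the running statistics
y = batchnorm_forward(x, training=False)  # test pass reuses them
```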
______________________________________________________________________________
QUESTION 5:
Which one of the following is not an advantage of dropout?
a. Regularization
b. Prevent Overfitting
c. Improve Accuracy
d. Reduce computational cost during testing
Correct Answer: d
Detailed Solution:
During testing, dropout does not shut down any feature; all units remain active. So there is no
reduction of computational cost at test time.
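A minimal sketch of (inverted) dropout, assuming a drop probability of 0.5; it shows that the test-time path is a plain pass-through and therefore saves no computation:

```python
import numpy as np

def dropout_forward(x, p_drop=0.5, training=True):
    """Inverted dropout: scaling happens at training time, so testing is an identity pass."""
    if training:
        mask = (np.random.rand(*x.shape) >= p_drop) / (1.0 - p_drop)
        return x * mask
    # At test time every unit stays active; no computation is skipped or saved.
    return x

x = np.random.randn(4, 5)
print(dropout_forward(x, training=True))   # roughly half the entries zeroed, rest scaled up
print(dropout_forward(x, training=False))  # unchanged
```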
______________________________________________________________________________
QUESTION 6:
What is the main advantage of layer normalization over batch normalization?
a. Faster convergence
b. Lesser computation
c. Useful in recurrent neural network
d. None of these
Correct Answer: c
Detailed Solution:
Layer normalization normalizes across the features of a single sample rather than across the
batch, so it does not depend on batch statistics. This makes it applicable at every time step of a
recurrent neural network, even with variable-length sequences or a batch size of one.
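A small sketch (with arbitrary shapes) contrasting the two; the key difference is the axis over which the statistics are taken:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Normalize each feature across the batch (axis 0): requires batch statistics.
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def layer_norm(x, eps=1e-5):
    # Normalize each sample across its own features (axis 1): independent of the batch.
    return (x - x.mean(axis=1, keepdims=True)) / np.sqrt(x.var(axis=1, keepdims=True) + eps)

single_step = np.random.randn(1, 8)     # e.g. one RNN time step, batch size 1
print(layer_norm(single_step))          # well-defined
print(batch_norm(single_step))          # degenerate: the batch variance is zero
```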
______________________________________________________________________________
QUESTION 7:
While training a neural network for image recognition task, we plot the graph of training error
and validation error. Which is the best for early stopping?
a. A
b. B
c. C
d. D
Correct Answer: c
Detailed Solution:
The point where the validation error reaches its minimum is the best for early stopping; beyond it
the validation error rises while the training error keeps falling, i.e. the model starts to
overfit.
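A minimal early-stopping sketch; the per-epoch validation errors below are made up purely to illustrate the rule "stop at the minimum of the validation curve":

```python
val_errors = [0.52, 0.41, 0.35, 0.31, 0.30, 0.32, 0.36, 0.41]  # hypothetical values

patience, wait = 2, 0          # tolerate a couple of worsening epochs before stopping
best_val, best_epoch = float("inf"), None

for epoch, val_err in enumerate(val_errors):
    if val_err < best_val:     # validation error still improving: remember this model
        best_val, best_epoch, wait = val_err, epoch, 0
    else:
        wait += 1
        if wait >= patience:   # stop shortly after the validation minimum
            break

print(f"Stop training; keep the model saved at epoch {best_epoch}")
```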
______________________________________________________________________________
QUESTION 8:
Which among the following is NOT a data augmentation technique?
Correct Answer: b
Detailed Solution:
Randomly shuffling all the pixels of an image destroys its spatial structure, and the neural
network will be unable to learn anything from it. So it is not a data augmentation technique.
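As a sketch with a dummy random image, typical augmentations keep the image recognizable, whereas shuffling every pixel does not:

```python
import numpy as np

img = np.random.rand(32, 32, 3)                    # dummy 32x32 RGB image

# Typical label-preserving augmentations: the content is still recognizable.
flipped = img[:, ::-1, :]                          # horizontal flip
padded = np.pad(img, ((4, 4), (4, 4), (0, 0)))     # pad, then take a 32x32 crop
cropped = padded[3:35, 5:37, :]

# NOT augmentation: shuffling all pixels destroys the spatial structure entirely.
pixels = img.reshape(-1, 3).copy()
np.random.shuffle(pixels)
shuffled = pixels.reshape(32, 32, 3)
```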
______________________________________________________________________________
QUESTION 9:
Which of the following is true about model capacity (where model capacity means the ability of
neural network to approximate complex functions)?
Correct Answer: a
Detailed Solution:
Dropout and the learning rate have nothing to do with model capacity. Increasing the number of
hidden layers increases the number of learnable parameters, and therefore the model capacity
increases.
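A small sketch (with assumed layer widths) showing that adding hidden layers adds learnable parameters, and hence capacity:

```python
def mlp_param_count(layer_sizes):
    """Count weights + biases of a fully connected network with the given layer sizes."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

print(mlp_param_count([784, 128, 10]))       # one hidden layer
print(mlp_param_count([784, 128, 128, 10]))  # two hidden layers -> more parameters
```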
______________________________________________________________________________
QUESTION 10:
Batch Normalization is helpful because
Correct Answer: a
Detailed Solution:
The batch normalization layer normalizes its input so that, over each mini-batch, every feature
has zero mean and unit variance (followed by a learnable scale and shift). This keeps the
distribution of layer inputs stable and helps training.
______________________________________________________________________________
************END*******