
NPTEL Online Certification Courses

Indian Institute of Technology Kharagpur

Deep Learning
Assignment- Week 10
TYPE OF QUESTION: MCQ/MSQ
Number of questions: 10 Total marks: 10 × 1 = 10
______________________________________________________________________________

QUESTION 1:

Which of the following is not a reason for using batch normalization?

a. Prevent overfitting
b. Faster convergence
c. Faster inference time
d. Prevent covariate shift

Correct Answer: c

Detailed Solution:
Batch normalization does not make inference faster; its normalization, scale, and shift operations add computation, so inference time actually increases.
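For illustration, a minimal NumPy sketch of the batch-normalization forward pass; the per-feature normalize, scale, and shift is extra arithmetic performed at every such layer, which is why inference does not get faster:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch dimension, then scale and shift."""
    mu = x.mean(axis=0)                   # per-feature batch mean
    var = x.var(axis=0)                   # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps) # extra work at every forward pass
    return gamma * x_hat + beta

x = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])               # batch of 3 samples, 2 features
out = batchnorm_forward(x, gamma=np.ones(2), beta=np.zeros(2))
# Each feature of `out` has (approximately) zero mean and unit variance.
```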
____________________________________________________________________________

QUESTION 2:
A neural network has 3 neurons in a hidden layer. Activations of the neurons for three batches are respectively. What will be the value of the mean if we use batch normalization in this layer?

a.
b.
c.
d.

Correct Answer: a

Detailed Solution:
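The activation values from the original question do not survive in this copy, but the computation can be illustrated with hypothetical values: batch normalization computes one mean per neuron, averaged across the batch dimension.

```python
import numpy as np

# Hypothetical activations: rows are the 3 batch samples,
# columns are the 3 hidden neurons.
acts = np.array([[1.0, 4.0, 7.0],
                 [2.0, 5.0, 8.0],
                 [3.0, 6.0, 9.0]])

# One mean per neuron, taken over the batch (row) dimension.
mean_per_neuron = acts.mean(axis=0)
print(mean_per_neuron)  # [2. 5. 8.]
```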

______________________________________________________________________________

QUESTION 3:
How can we prevent underfitting?

a. Increase the number of data samples
b. Increase the number of features
c. Decrease the number of features
d. Decrease the number of data samples

Correct Answer: b

Detailed Solution:
Underfitting happens when the features are not expressive enough to capture the data distribution. Increasing the number of features raises the model's capacity, so the data can be fitted better.
______________________________________________________________________________

QUESTION 4:
How do we generally calculate mean and variance during testing?

a. Batch normalization is not required during testing
b. Mean and variance based on test image
c. Estimated mean and variance statistics during training
d. None of the above

Correct Answer: c

Detailed Solution:
During training we accumulate running estimates of the batch mean and variance (typically as exponential moving averages), and these estimated statistics are used in place of per-batch statistics during testing.
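As an illustrative sketch (NumPy, with made-up data), the running estimates are commonly maintained as exponential moving averages over the training batches and then frozen for test time:

```python
import numpy as np

def update_running_stats(batch, running_mean, running_var, momentum=0.9):
    """Exponential moving average of batch statistics, reused at test time."""
    running_mean = momentum * running_mean + (1 - momentum) * batch.mean(axis=0)
    running_var = momentum * running_var + (1 - momentum) * batch.var(axis=0)
    return running_mean, running_var

rng = np.random.default_rng(0)
rm, rv = np.zeros(2), np.ones(2)
for _ in range(300):                                  # many training batches
    batch = rng.normal(loc=3.0, scale=2.0, size=(64, 2))
    rm, rv = update_running_stats(batch, rm, rv)
# rm and rv now approximate the true mean (3) and variance (4); at test time
# these frozen estimates replace per-batch statistics.
```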
______________________________________________________________________________

QUESTION 5:
Which one of the following is not an advantage of dropout?

a. Regularization
b. Prevent Overfitting
c. Improve Accuracy
d. Reduce computational cost during testing

Correct Answer: d

Detailed Solution:
Dropout is applied only during training. At test time no units are dropped: the full network runs, so dropout does not reduce computational cost during testing.
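An illustrative sketch of (inverted) dropout, showing that test time is a plain identity over the full network:

```python
import numpy as np

def dropout(x, p=0.5, train=True, seed=0):
    """Inverted dropout: units are zeroed only during training, and the
    survivors are rescaled so the expected activation is unchanged."""
    if not train:
        return x                          # test time: all units active, no savings
    mask = np.random.default_rng(seed).random(x.shape) >= p
    return x * mask / (1 - p)

x = np.ones(10_000)
train_out = dropout(x, train=True)        # roughly half the units zeroed
test_out = dropout(x, train=False)        # identical to x: the full network runs
```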
______________________________________________________________________________

QUESTION 6:
What is the main advantage of layer normalization over batch normalization?

a. Faster convergence
b. Lesser computation
c. Useful in recurrent neural network
d. None of these

Correct Answer: c

Detailed Solution:
Layer normalization computes its statistics over the features of a single sample rather than over the batch, so it does not depend on batch size and can be applied at every time step of a recurrent network, where batch normalization is difficult to use.
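A minimal sketch of layer normalization (without the learned scale and shift), showing that it works even for a single sample, where batch statistics would be degenerate:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize across the feature axis, independently per sample/time step."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

# One time step of one sequence (batch size 1): batch-norm statistics would be
# degenerate here, but layer norm works unchanged — hence its use in RNNs.
step = np.array([[2.0, 4.0, 6.0, 8.0]])
out = layer_norm(step)
```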
______________________________________________________________________________

QUESTION 7:
While training a neural network for an image recognition task, we plot the training error and validation error against epochs, with candidate stopping points A, B, C, and D marked on the curves. Which point is the best for early stopping?

a. A
b. B
c. C
d. D

Correct Answer: c

Detailed Solution:
The point of minimum validation error is the best for early stopping; beyond it the model starts to overfit.
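For illustration, one common patience-based formulation of early stopping (a sketch, not the only variant), which checkpoints at the validation minimum:

```python
def early_stop_epoch(val_errors, patience=2):
    """Return the epoch to keep: training halts once validation error has not
    improved for `patience` consecutive epochs, and the best epoch is kept."""
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch, err in enumerate(val_errors):
        if err < best:
            best, best_epoch, waited = err, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break                     # validation stopped improving
    return best_epoch                     # checkpoint at the validation minimum

# Validation error falls, then rises (overfitting): stop at its minimum.
print(early_stop_epoch([0.9, 0.6, 0.4, 0.45, 0.5, 0.6]))  # 2
```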
______________________________________________________________________________

QUESTION 8:
Which among the following is NOT a data augmentation technique?

a. Random horizontal and vertical flip of image
b. Random shuffle all the pixels of an image
c. Random color jittering
d. All the above are data augmentation techniques

Correct Answer: b

Detailed Solution:
Randomly shuffling all the pixels of an image destroys its spatial structure, leaving the network nothing meaningful to learn. So it is not a data augmentation technique.
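A toy illustration of why flips preserve image structure while a pixel shuffle does not:

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.arange(16.0).reshape(4, 4)      # toy 4x4 "image"

h_flip = img[:, ::-1]                    # horizontal flip: structure preserved
v_flip = img[::-1, :]                    # vertical flip: structure preserved

shuffled = img.flatten()
rng.shuffle(shuffled)
shuffled = shuffled.reshape(4, 4)        # pixel shuffle: structure destroyed

# Flips are invertible and keep neighbouring pixels adjacent, so the label is
# preserved; a random shuffle keeps only the pixel histogram of the image.
```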
______________________________________________________________________________

QUESTION 9:
Which of the following is true about model capacity (where model capacity means the ability of
neural network to approximate complex functions)?

a. As number of hidden layers increase, model capacity increases
b. As dropout ratio increases, model capacity increases
c. As learning rate increases, model capacity increases
d. None of these

Correct Answer: a

Detailed Solution:

Dropout and the learning rate have nothing to do with model capacity. Adding hidden layers increases the number of learnable parameters, and therefore the model capacity increases.
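A quick parameter count for fully connected layers makes this concrete (the layer widths here are illustrative):

```python
def param_count(layer_sizes):
    """Weights + biases of a fully connected network with the given widths."""
    return sum(n_in * n_out + n_out                  # weight matrix + bias vector
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

shallow = param_count([10, 32, 1])                   # one hidden layer
deep = param_count([10, 32, 32, 32, 1])              # three hidden layers
print(shallow, deep)  # 385 2497
# More hidden layers -> more learnable parameters -> higher model capacity.
```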
______________________________________________________________________________

QUESTION 10:
Batch Normalization is helpful because

a. It normalizes all the inputs before sending them to the next layer
b. It returns the normalized mean and standard deviation of the weights
c. It is a very efficient back-propagation technique
d. None of these

Correct Answer: a

Detailed Solution:
The batch normalization layer normalizes its inputs (to zero mean and unit variance per feature over the batch) before they are passed on to the next layer.

______________________________________________________________________________

************END*******
