ITML MID-2 Bits
MULTIPLE CHOICE
ANS: A PTS: 1
10. The Softmax activation function is used for
a. Binary classification b. Multiclass classification c. Regression d. For all
ANS: B PTS: 1
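As a quick illustration of why softmax fits multiclass classification, here is a minimal pure-Python sketch (illustrative logit values, not from any specific model): softmax turns a vector of raw scores into a probability distribution over the classes, which is exactly what a multiclass output layer needs.

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Three classes -> three probabilities that sum to 1.
probs = softmax([2.0, 1.0, 0.1])
print(probs)
print(sum(probs))  # sums to 1.0 (up to floating-point rounding)
```

For binary classification a single sigmoid output is the usual choice; softmax generalizes this to any number of mutually exclusive classes.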
ANS: A PTS: 1
ANS: A PTS: 1
ANS: D PTS: 1
ANS: B PTS: 1
16. Which of the following layers may be more than one in number?
a. Input b. Output c. Hidden d. Initial
ANS: C PTS: 1
ANS: C PTS: 1
ANS: D PTS: 1
ANS: B PTS: 1
ANS: D PTS: 1
ANS: A PTS: 1
ANS: B PTS: 1
ANS: C PTS: 1
ANS: A PTS: 1
ANS: B PTS: 1
ANS: C PTS: 1
ANS: D PTS: 1
ANS: B PTS: 1
ANS: D PTS: 1
ANS: B PTS: 1
33. The __________ layer does not perform any computation in a neural network.
a. Hidden b. Output c. Input d. b and c
ANS: C PTS: 1
ANS: D PTS: 1
ANS: C PTS: 1
ANS: C PTS: 1
ANS: D PTS: 1
38. Which node has both incoming and outgoing branches in a decision tree?
a. Decision b. Hidden c. Root d. Leaf
ANS: A PTS: 1
ANS: A PTS: 1
ANS: B PTS: 1
ANS: C PTS: 1
42. A good clustering method has
a. High Intraclass distance b. Low Interclass similarity c. High Interclass similarity d. All of the above
ANS: B PTS: 1
ANS: D PTS: 1
ANS: C PTS: 1
ANS: B PTS: 1
ANS: C PTS: 1
ANS: C PTS: 1
48. The axon endings almost touch the ________________ of the next neuron.
a. dendrites b. dendrites c. cell body d. None
ANS: B PTS: 1
50. In the human brain, multiple inputs from dendrites are processed by the ___________
a. dendrites b. synapses c. cell body d. axon
ANS: C PTS: 1
ANS: C PTS: 1
ANS: B PTS: 1
ANS: A PTS: 1
ANS: B PTS: 1
ANS: D PTS: 1
ANS: D PTS: 1
ANS: D PTS: 1
ANS: B PTS: 1
ANS: C PTS: 1
ANS: B PTS: 1
ANS: D PTS: 1
ANS: B PTS: 1
ANS: C PTS: 1
a. number of hidden layers b. nodes in the output layer c. nodes in the input layer d. none of the above
ANS: B PTS: 1
ANS: A PTS: 1
66. The intensity of a pixel in a grayscale image is represented by ________ bits.
a. 24 b. 1 c. 16 d. 8
ANS: D PTS: 1
ANS: C PTS: 1
68. The intensity of a pixel in a color image is represented by ________ bits.
a. 24 b. 1 c. 16 d. 8
ANS: A PTS: 1
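The two pixel-depth answers above (8 bits for grayscale, 24 bits for color) follow from simple bit arithmetic; a small sketch of the counting:

```python
# Grayscale: 8 bits per pixel -> 2**8 = 256 intensity levels (0..255).
gray_bits = 8
gray_levels = 2 ** gray_bits

# Color (RGB): 8 bits per channel x 3 channels = 24 bits per pixel.
color_bits = gray_bits * 3
color_values = 2 ** color_bits  # ~16.7 million representable colors

print(gray_levels)   # 256
print(color_bits)    # 24
print(color_values)  # 16777216
```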
ANS: C PTS: 1
ANS: D PTS: 1
71. Which layer type is typically used to extract local features in a CNN?
a. Pooling layer b. Convolutional layer c. Activation layer d. Fully connected layer
ANS: B PTS: 1
72. Which activation function is commonly used in the convolutional layers of a CNN?
a. Softmax b. Tanh c. ReLU d. Sigmoid
ANS: C PTS: 1
ANS: B PTS: 1
ANS: C PTS: 1
ANS: A PTS: 1
76. Which layer type is used to reduce the spatial dimensions in a CNN?
a. Activation b. Convolution c. Pooling d. Fully Connected
ANS: C PTS: 1
ANS: D PTS: 1
78. Which layer type is responsible for making final predictions in a CNN?
a. Pooling b. Fully connected c. Convolution d. Activation
ANS: B PTS: 1
79. Which layer type is responsible for applying non-linear transformations to the feature maps in a
CNN?
a. Pooling b. Fully connected c. Convolution d. Activation
ANS: D PTS: 1
ANS: C PTS: 1
ANS: A PTS: 1
82. Which layer type is responsible for backpropagating the gradients and updating the network's
parameters in a CNN?
a. Pooling b. Fully connected c. Activation d. Convolution
ANS: B PTS: 1
83. What is the primary advantage of using a CNN over a fully connected neural network for image
processing tasks?
a. CNNs can handle sequential data b. CNNs have a higher number of neurons c. CNNs can capture local spatial patterns in the input data d. CNNs have a higher training speed
ANS: C PTS: 1
ANS: C PTS: 1
ANS: D PTS: 1
ANS: A PTS: 1
87. Which layer type is commonly used in CNNs to normalize the input data?
a. Convolution b. Pooling c. Activation d. Batch Normalization
ANS: D PTS: 1
ANS: B PTS: 1
89. Which layer type is responsible for introducing translation invariance in a CNN?
a. fully connected b. Pooling c. Convolution d. None of the above
ANS: C PTS: 1
ANS: A PTS: 1
91. Which parameter is used to control the size of the feature map in a CNN?
a. Filter b. Stride c. Padding d. b and c
ANS: D PTS: 1
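How stride and padding control the feature-map size can be seen from the standard convolution output-size formula; a short sketch with illustrative sizes (32x32 input, 3x3 filter):

```python
def conv_output_size(n, f, padding=0, stride=1):
    """Spatial size of a convolution (or pooling) output:
    floor((n + 2*padding - f) / stride) + 1."""
    return (n + 2 * padding - f) // stride + 1

print(conv_output_size(32, 3, padding=0, stride=1))  # 30: shrinks without padding
print(conv_output_size(32, 3, padding=1, stride=1))  # 32: "same" padding preserves size
print(conv_output_size(32, 3, padding=1, stride=2))  # 16: stride 2 halves the size
```

Both knobs change the output size, which is why "b and c" is the key here.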
ANS: A PTS: 1
ANS: C PTS: 1
ANS: B PTS: 1
95. What layer connects the output of one neuron to the input of another?
a. Convolution b. Pooling c. Fully connected d. Dropout
ANS: C PTS: 1
96. The number of feature maps produced by the convolution layer depends on _____
a. Number of channels in the input b. Number of kernels c. Kernel size d. Kernel depth
ANS: B PTS: 1
97. The number of channels in the kernel to perform convolution depends on _____
a. Number of channels in the input b. Number of kernels c. Kernel size d. None of the above
ANS: A PTS: 1
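Questions 96 and 97 are both shape bookkeeping: each kernel produces one output feature map, and each kernel's depth must equal the number of input channels. A sketch with illustrative sizes:

```python
# Illustrative convolution-layer shapes (not from any specific network).
in_channels = 3    # Q97: kernel depth must match the input's channel count
num_kernels = 16   # Q96: one output feature map per kernel
kernel_size = 3

# Conventional weight layout: (num_kernels, in_channels, height, width).
kernel_shape = (num_kernels, in_channels, kernel_size, kernel_size)
out_feature_maps = kernel_shape[0]

print(kernel_shape)      # (16, 3, 3, 3)
print(out_feature_maps)  # 16
```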
ANS: D PTS: 1
99. The global pooling layer calculates the _____ of all values in the feature map.
a. Addition b. Average c. Multiplication d. Dot product
ANS: B PTS: 1
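Global average pooling collapses an entire feature map to a single number, its mean; a minimal sketch with a toy 2x2 map:

```python
# Toy 2x2 feature map (illustrative values).
feature_map = [
    [1.0, 2.0],
    [3.0, 6.0],
]

# Global average pooling: one scalar per feature map, the mean of all values.
flat = [v for row in feature_map for v in row]
gap = sum(flat) / len(flat)
print(gap)  # 3.0
```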
ANS: B PTS: 1
ANS: D PTS: 1
ANS: A PTS: 1
ANS: B PTS: 1
ANS: B PTS: 1
ANS: B PTS: 1
ANS: C PTS: 1
ANS: D PTS: 1
ANS: C PTS: 1
ANS: B PTS: 1
ANS: B PTS: 1
111. In batch gradient descent, ____ are taken at a time to take a single step.
a. Fixed number of samples b. One sample c. All samples d. None of the above
ANS: C PTS: 1
112. In minibatch stochastic gradient descent, ____ are taken at a time to take a single step.
a. Fixed number of samples b. One sample c. All samples d. None of the above
ANS: A PTS: 1
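The difference between the gradient-descent variants in Q111 and Q112 is just how many samples feed each update step; a sketch with a toy dataset of 10 samples:

```python
def batches(samples, batch_size):
    """Yield successive slices of `samples` of length `batch_size`."""
    for i in range(0, len(samples), batch_size):
        yield samples[i:i + batch_size]

data = list(range(10))  # toy dataset

full_batch = list(batches(data, len(data)))  # batch GD: all samples, 1 step per epoch
minibatch  = list(batches(data, 4))          # minibatch SGD: fixed-size chunks
stochastic = list(batches(data, 1))          # SGD: one sample per step

print(len(full_batch), len(minibatch), len(stochastic))  # 1 3 10
```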
ANS: A PTS: 1
ANS: C PTS: 1
ANS: A PTS: 1
ANS: B PTS: 1
ANS: A PTS: 1
118. Which layer type is typically used to capture sequential dependencies in an RNN?
a. Pooling b. Hidden c. Input d. Output
ANS: B PTS: 1
ANS: D PTS: 1
ANS: C PTS: 1
121. Which activation function is commonly used in the recurrent layers of an RNN?
a. Sigmoid b. ReLU c. Softmax d. Hyperbolic Tangent
ANS: D PTS: 1
122. Which layer type is responsible for making final predictions in an RNN?
a. Output layer b. Input layer c. Activation layer d. Hidden layer
ANS: A PTS: 1
ANS: B PTS: 1
ANS: D PTS: 1
ANS: B PTS: 1
ANS: C PTS: 1
ANS: A PTS: 1
ANS: B PTS: 1
ANS: C PTS: 1
ANS: D PTS: 1
ANS: B PTS: 1
ANS: D PTS: 1
ANS: A PTS: 1
ANS: B PTS: 1
ANS: C PTS: 1
ANS: D PTS: 1
ANS: B PTS: 1