1. What is the vanishing gradient problem in the context of deep neural networks?
2. Explain how feed-forward neural networks work and describe the backpropagation algorithm.
3. What are the benefits of using dropout, and in what scenarios might it be particularly useful?
4. Explain the key components of a Convolutional Neural Network (CNN).
5. What is the difference between a basic autoencoder and a Variational Autoencoder (VAE)?
6. Explain Dynamic Memory Networks (DMNs).
7. Explain the concept of image segmentation in computer vision.
8. Discuss the process of automatic image captioning using deep learning.
1. In the context of neural networks, the dropout technique works by:
A) Randomly eliminating features from the input data.
B) Randomly removing certain neurons and their connections during training.
C) Reducing the learning rate by a fixed percentage every epoch.
D) Using smaller network architectures for faster training.
Answer: B) Randomly removing certain neurons and their connections during training.
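As a minimal NumPy sketch of the idea behind answer B (the function name and shapes are illustrative, not from any particular library), inverted dropout zeros each unit with probability p and rescales the survivors so the expected activation is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_drop, training=True):
    """Inverted dropout: zero each unit with probability p_drop during
    training and rescale survivors by 1/(1 - p_drop) so the expected
    activation is unchanged; at inference the layer is an identity."""
    if not training or p_drop == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p_drop  # keep with prob 1 - p_drop
    return activations * mask / (1.0 - p_drop)

x = np.ones((4, 8))
y = dropout(x, p_drop=0.5)  # surviving units become 2.0, dropped units 0.0
```

Because the rescaling happens at training time, no change is needed at test time, which is why option B describes removal "during training" only.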
2. Which of the following is a regularization technique commonly used to prevent overfitting in
neural networks?
A) Dropout
B) Data Augmentation
C) Batch Normalization
D) All of the above
Answer: D) All of the above
3. Which of the following statements about the ReLU activation function is FALSE?
A) It avoids the vanishing gradient problem for positive inputs.
B) It is computationally efficient and simple to implement.
C) It always produces positive output values.
D) It can cause "dead neurons" when negative values are passed through.
Answer: C) It always produces positive output values.
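A short NumPy sketch makes the answer concrete: ReLU outputs are non-negative but not strictly positive (zero for all non-positive inputs), while its gradient is exactly 1 for positive inputs, which is why it avoids vanishing gradients there but can leave "dead neurons" stuck at zero:

```python
import numpy as np

def relu(x):
    """ReLU: max(0, x) -- zero for non-positive inputs, identity otherwise."""
    return np.maximum(0.0, x)

def relu_grad(x):
    """Gradient is 1 for positive inputs and 0 otherwise; the 0 branch is
    what produces 'dead neurons' for units stuck in the negative region."""
    return (x > 0).astype(float)

z = np.array([-2.0, 0.0, 3.0])
print(relu(z))       # [0. 0. 3.]  -- non-negative, not always positive
print(relu_grad(z))  # [0. 0. 1.]  -- gradient does not vanish for positive inputs
```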
4. One heuristic for faster training of deep neural networks is:
A) Using a larger learning rate to make larger updates.
B) Using a smaller network with fewer layers and neurons.
C) Using mini-batches for gradient updates instead of the full dataset.
D) Increasing the number of neurons in each hidden layer.
Answer: C) Using mini-batches for gradient updates instead of the full dataset.
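The mini-batch idea in answer C can be sketched in a few lines of NumPy (a toy scalar linear regression, with illustrative hyperparameters): each update uses the gradient of one shuffled batch rather than the full dataset, giving many cheap updates per epoch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data for scalar linear regression: y = 3x + noise.
X = rng.normal(size=(256, 1))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=256)

w = 0.0
lr, batch_size = 0.1, 32
for epoch in range(20):
    order = rng.permutation(len(X))              # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        xb, yb = X[idx, 0], y[idx]
        grad = 2.0 * np.mean((w * xb - yb) * xb)  # d/dw of the batch MSE
        w -= lr * grad                            # update from this batch only
```

After training, w should be close to the true slope of 3 even though no single update ever saw the full dataset.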
5. The pooling layer in a CNN is primarily used to:
A) Reduce the dimensionality and computational complexity
B) Apply nonlinear transformations to the input
C) Perform convolution over the image data
D) Increase the size of the input image
Answer: A) Reduce the dimensionality and computational complexity
6. Encoder-decoder architectures are particularly useful in which type of tasks?
A) Image classification
B) Sequence-to-sequence tasks like machine translation or text summarization
C) Dimensionality reduction
D) Image segmentation
Answer: B) Sequence-to-sequence tasks like machine translation or text summarization
7. In the context of neural networks, an attention mechanism allows the model to:
A) Focus on specific parts of the input sequence when producing an output
B) Increase the number of neurons in each layer
C) Use a constant learning rate throughout the training
D) Perform convolution operations on the input data
Answer: A) Focus on specific parts of the input sequence when producing an output
8. Dynamic Memory Networks (DMNs) are primarily used for tasks that require:
A) Learning from very large datasets without labels
B) Handling sequential data with a focus on context and memory-based reasoning
C) Detecting objects in images
D) Clustering unstructured data
Answer: B) Handling sequential data with a focus on context and memory-based reasoning
9. What is the primary goal of image segmentation in computer vision?
A) To classify images into predefined categories
B) To divide an image into segments for easier analysis
C) To detect objects and their locations within an image
D) To automatically generate captions for an image
Answer: B) To divide an image into segments for easier analysis
10. Automatic image captioning is primarily used to:
A) Detect specific objects within an image
B) Segment images into multiple regions
C) Generate descriptive text for images
D) Convert images into black and white
Answer: C) Generate descriptive text for images
1. In a _______ neural network, information flows in one direction, from input to
output, without cycles or feedback.
Answer: Feed-forward
2. The _______ algorithm is used to compute gradients in neural networks by
propagating errors backward through the layers.
Answer: Backpropagation
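As a minimal sketch of backpropagation on a one-hidden-layer network (squared loss, tanh hidden layer; all names and shapes are illustrative), the error at the output is propagated backward through each layer to obtain the weight gradients:

```python
import numpy as np

rng = np.random.default_rng(0)

# One hidden layer, scalar output, squared loss.
x = rng.normal(size=3)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=4)
t = 1.0  # target

# Forward pass.
h = np.tanh(W1 @ x)
yhat = W2 @ h
loss = 0.5 * (yhat - t) ** 2

# Backward pass: propagate the error from the output layer toward the input.
d_yhat = yhat - t                         # dL/dyhat
dW2 = d_yhat * h                          # dL/dW2
d_h = d_yhat * W2                         # error sent back through W2
dW1 = np.outer(d_h * (1 - h ** 2), x)     # tanh'(a) = 1 - tanh(a)^2
```

The analytic gradients can be checked against a finite-difference approximation, which is a standard sanity test for a hand-written backward pass.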
3. ______ heuristics are strategies used to avoid bad local minima in
optimization, improving the effectiveness of training.
Answer: ReLU
4. _______ is a regularization method that randomly drops units during training
to prevent over-reliance on any single neuron.
Answer: Dropout
5. A _______ Neural Network (CNN) is commonly used in computer vision tasks
due to its ability to capture spatial hierarchies in images.
Answer: Convolutional
6. _______ and _______ are two popular RNN variants that solve the problem of
long-term dependencies in sequences.
Answer: LSTM, GRU
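As a rough NumPy sketch of one of these variants, a single GRU step (one common gate convention; weight names and the concatenated-input layout are illustrative) shows the mechanism: gates decide how much of the old hidden state to keep, which is what lets information and gradients survive across long sequences.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_cell(x, h, Wz, Wr, Wh):
    """One GRU step. The update gate z blends the old state with the
    candidate state; the reset gate r controls how much history feeds
    the candidate. Biases are omitted for brevity."""
    xh = np.concatenate([x, h])
    z = sigmoid(Wz @ xh)                            # update gate
    r = sigmoid(Wr @ xh)                            # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h]))  # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 3, 2
Wz, Wr, Wh = (rng.normal(size=(d_h, d_in + d_h)) for _ in range(3))
h = np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):   # run a length-5 input sequence
    h = gru_cell(x, h, Wz, Wr, Wh)
```

When z is near zero the old state passes through almost unchanged, which is precisely the long-term-dependency behavior the blank refers to.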
7. ______ are a type of neural network used for deep unsupervised learning by
encoding input data into a compressed representation.
Answer: Autoencoders
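A minimal linear-autoencoder sketch (untrained random weights, purely to show the structure) makes the encode-then-decode bottleneck concrete; training would adjust both weight matrices to minimize the reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear autoencoder sketch: compress an 8-d input to a 2-d code and back.
W_enc = 0.1 * rng.normal(size=(2, 8))
W_dec = 0.1 * rng.normal(size=(8, 2))

def encode(x):
    return W_enc @ x          # bottleneck: 8 -> 2

def decode(z):
    return W_dec @ z          # reconstruction: 2 -> 8

x = rng.normal(size=8)
x_hat = decode(encode(x))
recon_error = np.mean((x - x_hat) ** 2)  # training would minimize this
```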
8. Generative Adversarial Networks, also known as _______, consist of a
generator and a discriminator network trained in opposition to each other.
Answer: GANs
9. _______ segmentation is the process of dividing an image into multiple regions
or segments for easier analysis.
Answer: Image
10. In object detection, the model identifies specific objects within an image and
provides their _______ in the form of bounding boxes.
Answer: Location