CD-601 Assignment Questions
Unit I:
Introduction to Deep Learning
1. Explain the historical development of deep learning, highlighting key milestones from the
McCulloch-Pitts neuron to modern-day advancements.
2. Describe the architecture of a Multilayer Perceptron (MLP) and discuss its representation
power in solving complex tasks.
3. Compare and contrast Sigmoid Neurons with traditional binary threshold neurons.
4. Discuss the process of backpropagation in training Feed Forward Neural Networks. How
does it facilitate learning in deep networks?
5. Explain the concept of weight initialization methods in deep learning. Provide examples of
commonly used initialization techniques and discuss their impact on training convergence.
6. What is Batch Normalization, and how does it address the challenges of training deep
neural networks?
7. Describe the principles behind Representation Learning. How does it enable automatic
feature extraction in deep learning models?
8. Discuss the significance of GPU implementation in accelerating the training of deep neural
networks.
9. Explain the role of decomposition techniques like PCA and SVD in dimensionality
reduction and feature extraction in deep learning.
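For Question 5 above, the two most commonly cited initialization schemes are Xavier (Glorot) and He initialization. A minimal pure-Python sketch of both is given below; the function names `xavier_uniform` and `he_normal` are illustrative, not from any particular library, though the formulas match the standard definitions.

```python
import math
import random

def xavier_uniform(fan_in, fan_out):
    # Xavier/Glorot uniform: draw weights from U(-limit, limit),
    # where limit = sqrt(6 / (fan_in + fan_out)). Keeps activation
    # variance roughly constant across layers with tanh/sigmoid units.
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [[random.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]

def he_normal(fan_in, fan_out):
    # He initialization: Gaussian with std = sqrt(2 / fan_in),
    # the usual choice for ReLU layers (Question 2 of Unit II),
    # since ReLU zeroes out roughly half of the activations.
    std = math.sqrt(2.0 / fan_in)
    return [[random.gauss(0.0, std) for _ in range(fan_out)]
            for _ in range(fan_in)]
```

A poor choice (e.g., all zeros, or very large random values) leads to vanishing or exploding activations and slow convergence, which is the contrast these questions ask students to discuss.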
Unit II:
Deep Feedforward Neural Networks
1. Define Convolutional Neural Networks (CNNs) and discuss their advantages over
traditional fully connected networks in image processing tasks.
2. Explain the role of the ReLU activation function in CNNs. How does it address the
vanishing gradient problem?
3. Describe the operations involved in Convolutional Layers, including convolution, padding,
and pooling. How do these operations contribute to feature extraction?
4. Compare and contrast different CNN architectures such as LeNet, AlexNet, and ResNet.
Discuss their unique features and applications.
5. Explain the concept of Regularization in CNNs. How do techniques like Dropout and
Weight Decay prevent overfitting?
6. Discuss the applications of CNNs in fields such as object detection, image classification,
and medical image analysis.
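The convolution, padding, and pooling operations asked about in Question 3 can be demonstrated on plain nested lists. This is a toy sketch (stride 1, square kernels, zero padding), not an efficient implementation; the names `conv2d` and `max_pool` are illustrative.

```python
def conv2d(image, kernel, pad=0):
    # Zero-pad the input, then slide the kernel over it (stride 1).
    h, w = len(image), len(image[0])
    k = len(kernel)
    if pad:
        padded = [[0.0] * (w + 2 * pad) for _ in range(h + 2 * pad)]
        for i in range(h):
            for j in range(w):
                padded[i + pad][j + pad] = image[i][j]
        image, h, w = padded, h + 2 * pad, w + 2 * pad
    # Output size is (h - k + 1) x (w - k + 1) after padding.
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(k) for b in range(k))
             for j in range(w - k + 1)]
            for i in range(h - k + 1)]

def max_pool(fmap, size=2):
    # Non-overlapping max pooling: keep the largest value in each
    # size x size window, shrinking the feature map by that factor.
    return [[max(fmap[i + a][j + b]
                 for a in range(size) for b in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]
```

With `pad=1` and a 3x3 kernel, the output keeps the input's spatial size ("same" padding), which is one of the design choices Question 3 asks students to explain.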
Unit IV:
Deep Recurrent Neural Networks (RNN)
1. Define Recurrent Neural Networks (RNNs) and discuss their suitability for sequential data
processing tasks.
2. Explain the challenges of training RNNs, including the vanishing and exploding gradient
problems. How are these issues addressed in practice?
3. Compare and contrast Gated Recurrent Units (GRUs) and Long Short-Term Memory
(LSTM) networks. What are their advantages over traditional RNNs?
4. Discuss the applications of RNNs in Natural Language Processing (NLP), including tasks
such as language modeling and machine translation.
5. Explain the concept of Attention Mechanism in RNNs. How does it improve the
performance of sequence-to-sequence models?
6. Describe the applications of RNNs in fields such as speech recognition, video analysis, and
time-series prediction.
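Question 2's vanishing/exploding gradient problems, and the standard practical remedy of gradient clipping, can be sketched with a scalar toy RNN. All function names here (`rnn_step`, `bptt_factor`, `clip_by_norm`) are illustrative, assuming scalar weights for readability.

```python
import math

def rnn_step(x_t, h_prev, w_xh, w_hh, b):
    # Scalar toy RNN cell: h_t = tanh(w_xh * x_t + w_hh * h_prev + b)
    return math.tanh(w_xh * x_t + w_hh * h_prev + b)

def bptt_factor(h_seq, w_hh):
    # Backprop through time multiplies one Jacobian factor
    # d h_t / d h_{t-1} = w_hh * (1 - h_t^2) per step; with |w_hh| < 1
    # the product shrinks geometrically (vanishing gradient).
    f = 1.0
    for h in h_seq:
        f *= w_hh * (1.0 - h * h)
    return f

def clip_by_norm(grads, max_norm):
    # Gradient clipping: rescale the gradient vector when its L2 norm
    # exceeds max_norm, the usual fix for exploding gradients.
    norm = math.sqrt(sum(g * g for g in grads))
    if norm > max_norm:
        scale = max_norm / norm
        return [g * scale for g in grads]
    return grads
```

The geometric decay of `bptt_factor` over long sequences is exactly what gating in LSTMs and GRUs (Question 3) is designed to mitigate, by giving the cell state an additive, nearly gradient-preserving path.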
Unit V:
Deep Generative Models
1. Define Deep Generative Models and discuss their role in unsupervised learning tasks.
2. Explain the training process of Restricted Boltzmann Machines (RBMs) using Gibbs
Sampling. How are RBMs used in collaborative filtering and feature learning?
3. Discuss the architecture and training process of Generative Adversarial Networks (GANs).
What are some applications of GANs in image generation and style transfer?
4. Compare and contrast Auto-regressive Models such as NADE and MADE. What are their
advantages and limitations?
5. Describe the applications of Deep Generative Models in fields such as object detection,
image synthesis, and natural language generation.
6. Discuss the ethical considerations and challenges associated with the deployment of Deep
Learning models in real-world applications.
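Question 2's Gibbs sampling for a binary RBM reduces to alternating between the two conditional distributions p(h|v) and p(v|h). The sketch below is a minimal pure-Python version; the names `sample_hidden`, `sample_visible`, and `gibbs_step` are illustrative, but the conditionals match the standard RBM formulation.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_hidden(v, W, b_h, rng):
    # p(h_j = 1 | v) = sigmoid(sum_i v_i * W[i][j] + b_h[j])
    probs = [sigmoid(sum(v[i] * W[i][j] for i in range(len(v))) + b_h[j])
             for j in range(len(b_h))]
    return probs, [1 if rng.random() < p else 0 for p in probs]

def sample_visible(h, W, b_v, rng):
    # p(v_i = 1 | h) = sigmoid(sum_j W[i][j] * h_j + b_v[i])
    probs = [sigmoid(sum(W[i][j] * h[j] for j in range(len(h))) + b_v[i])
             for i in range(len(b_v))]
    return probs, [1 if rng.random() < p else 0 for p in probs]

def gibbs_step(v, W, b_v, b_h, rng):
    # One alternating Gibbs sweep: sample h from v, then v' from h.
    # Contrastive divergence training truncates this chain after
    # a small number of such sweeps.
    _, h = sample_hidden(v, W, b_h, rng)
    _, v_new = sample_visible(h, W, b_v, rng)
    return v_new, h
```

In collaborative filtering, the visible units would encode a user's ratings and the hidden units learned taste factors; the same alternating-sampling machinery applies unchanged.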