Deep Learning Simp 21CS743
Module 1
1. Define Deep Learning and explain how it differs from traditional Machine Learning.
Describe the basic structure of a Deep Learning model.
2. Discuss the historical trends that have contributed to the rise of Deep Learning,
including key advancements in hardware, data availability, and algorithms.
3. Explain the concept of a learning algorithm in the context of Machine Learning. What
are the key components of a learning algorithm?
5. Describe the general process of Supervised Learning. Explain the roles of training
data, labels, and the learning algorithm. (A minimal sketch of these three ingredients
follows this module's questions.)
8. Describe the general process of Unsupervised Learning. Explain the challenges and
potential applications.
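Illustrative sketch for Module 1, question 5. This is a minimal NumPy example, not part of the original question bank, showing how training data, labels, and a learning algorithm fit together in supervised learning; the toy dataset, the logistic-regression model, and all hyperparameters are assumptions chosen only for illustration.

import numpy as np

# Toy training data: 2-D inputs (features) and binary labels (the supervision signal).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                      # training examples
y = (X[:, 0] + X[:, 1] > 0).astype(float)          # labels define the target concept

# Model: logistic regression, the simplest "one-layer" learner.
w = np.zeros(2)
b = 0.0

def predict(X):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))      # sigmoid of a linear score

# Learning algorithm: gradient descent on the cross-entropy loss.
lr = 0.1
for _ in range(500):
    p = predict(X)
    grad_w = X.T @ (p - y) / len(y)                # gradient of the loss w.r.t. weights
    grad_b = np.mean(p - y)
    w -= lr * grad_w                               # the update rule is the "learning" step
    b -= lr * grad_b

print("training accuracy:", np.mean((predict(X) > 0.5) == y))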
Module 2
1. Describe the architecture of a feedforward neural network. Explain the roles of input
layers, hidden layers, and output layers.
3. Explain the backpropagation algorithm in detail. What is its purpose, and how does it
work? (A worked forward-and-backward pass is sketched after this module's questions.)
6. Explain L1 and L2 regularization techniques. How do they differ in their effect on the
model?
7. Discuss other regularization techniques like dropout and early stopping. How do they
help prevent overfitting?
8. What is the role of activation functions in feedforward networks? Give examples of
common activation functions and their properties.
9. Explain the concept of vanishing and exploding gradients and how they affect training
in deep networks.
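Illustrative sketch for Module 2 (feedforward architecture, backpropagation, activation functions, L2 regularization). A minimal NumPy network trained on XOR; the layer sizes, tanh activation, learning rate, and L2 strength are assumptions made for illustration, not a prescribed solution.

import numpy as np

# Tiny feedforward network (2 -> 8 -> 1) trained on XOR with manual backpropagation.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # targets

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)     # input -> hidden
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)     # hidden -> output

lr, l2 = 0.5, 1e-4                                            # learning rate, L2 strength
for _ in range(5000):
    # Forward pass: affine -> tanh -> affine -> sigmoid.
    h = np.tanh(X @ W1 + b1)                                  # hidden activation
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))                # sigmoid output

    # Backward pass: propagate the error from the output layer to the input layer.
    d_out = (out - y) / len(X)                                # dLoss/d(pre-sigmoid) for cross-entropy
    dW2 = h.T @ d_out + l2 * W2                               # gradient plus L2 penalty term
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)                     # chain rule through tanh
    dW1 = X.T @ d_h + l2 * W1
    db1 = d_h.sum(axis=0)

    # Gradient-descent updates.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(out.ravel(), 2))                               # should approach [0, 1, 1, 0]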
Module 3
1. Explain the concept of Empirical Risk Minimization. What is the goal of optimization
in deep learning?
2. Discuss the challenges associated with optimizing neural networks, such as local
minima, saddle points, and plateaus.
3. Describe the Stochastic Gradient Descent (SGD) algorithm. How does it work, and
what are its advantages and disadvantages?
5. Explain the AdaGrad algorithm. How does it adapt learning rates for different
parameters? What are its limitations?
6. Explain the RMSProp algorithm. How does it address the limitations of AdaGrad?
7. Compare and contrast AdaGrad and RMSProp. What are the situations where one
might be preferred over the other?
8. Describe other adaptive learning rate algorithms, such as Adam, and explain how Adam
combines the benefits of the preceding algorithms. (The update rules of SGD, AdaGrad,
RMSProp, and Adam are sketched after this module's questions.)
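Illustrative sketch for Module 3 (SGD, AdaGrad, RMSProp, Adam). The four update rules are written side by side in NumPy and applied to a toy quadratic loss; the loss function, step counts, and hyperparameters are assumptions chosen only for illustration.

import numpy as np

# Toy loss f(w) = 0.5 * ||w||^2, whose gradient is simply w.
def grad(w):
    return w

def sgd(w, steps=100, lr=0.1):
    for _ in range(steps):
        w = w - lr * grad(w)                               # plain gradient step
    return w

def adagrad(w, steps=100, lr=0.5, eps=1e-8):
    G = np.zeros_like(w)                                   # running sum of squared gradients
    for _ in range(steps):
        g = grad(w)
        G += g * g                                         # accumulates forever -> effective lr decays
        w = w - lr * g / (np.sqrt(G) + eps)
    return w

def rmsprop(w, steps=100, lr=0.1, rho=0.9, eps=1e-8):
    G = np.zeros_like(w)                                   # exponential moving average instead
    for _ in range(steps):
        g = grad(w)
        G = rho * G + (1 - rho) * g * g                    # forgets old gradients
        w = w - lr * g / (np.sqrt(G) + eps)
    return w

def adam(w, steps=100, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = np.zeros_like(w); v = np.zeros_like(w)
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g                          # momentum-style first moment
        v = b2 * v + (1 - b2) * g * g                      # RMSProp-style second moment
        m_hat = m / (1 - b1 ** t)                          # bias correction
        v_hat = v / (1 - b2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

w0 = np.array([5.0, -3.0])
for name, opt in [("SGD", sgd), ("AdaGrad", adagrad), ("RMSProp", rmsprop), ("Adam", adam)]:
    print(name, opt(w0.copy()))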
Module 4
1. Explain the convolution operation in the context of image processing. How does it
differ from standard matrix multiplication? (A convolution-and-pooling sketch follows
this module's questions.)
2. Explain the concept of pooling in convolutional networks. What are different types of
pooling, and what are their purposes?
3. Explain how convolution and pooling can be viewed as an infinitely strong prior.
What does this imply about the network's learning process?
4. Describe different variants of the basic convolution function, such as dilated
convolutions and depthwise separable convolutions.
5. Explain how convolutional networks can be used for structured outputs, such as
image segmentation.
6. Discuss different data types that are commonly used with convolutional networks,
such as images, videos, and time-series data.
8. Describe the architectures and key innovations of LeNet and AlexNet. How did these
networks contribute to the advancement of deep learning?
9. Explain the concept of transfer learning in the context of convolutional networks and
its advantages.
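Illustrative sketch for Module 4, questions 1 and 2. A direct NumPy implementation of 2-D convolution (strictly cross-correlation, as in most deep learning libraries) and 2x2 max pooling; the toy image and the vertical-edge kernel are assumptions chosen only for illustration.

import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output value is a weighted sum over a local patch; the same small
            # kernel is reused at every location (sparse connectivity + weight sharing).
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(x, size=2):
    oh, ow = x.shape[0] // size, x.shape[1] // size
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Keep only the strongest response in each non-overlapping window.
            out[i, j] = np.max(x[i * size:(i + 1) * size, j * size:(j + 1) * size])
    return out

image = np.arange(36, dtype=float).reshape(6, 6)           # toy single-channel "image"
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])                      # simple vertical-edge detector

feature_map = conv2d(image, kernel)                        # shape (4, 4)
pooled = max_pool2d(feature_map)                           # shape (2, 2)
print(feature_map.shape, pooled.shape)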
Module 5
2. Describe the basic architecture of a recurrent neural network (RNN). How does it
process sequential data? (A single-step RNN/LSTM sketch follows this module's questions.)
3. Explain the concept of Bidirectional RNNs. How do they differ from standard RNNs,
and what are their advantages?
4. Describe the architecture of Deep Recurrent Networks. How do they improve upon
standard RNNs?
5. Explain the concept of Recursive Neural Networks. How do they differ from
Recurrent Neural Networks?
6. Explain the Long Short-Term Memory (LSTM) architecture in detail. How does it
address the vanishing gradient problem?
7. Describe other gated RNN architectures, such as GRUs (Gated Recurrent Units). How
do they compare to LSTMs?
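Illustrative sketch for Module 5 (RNN and LSTM). One time step of a vanilla RNN and of an LSTM cell in NumPy, making the recurrence and the input/forget/output gates concrete; the parameter shapes and random values are assumptions chosen only for illustration.

import numpy as np

rng = np.random.default_rng(0)
x_dim, h_dim = 3, 4                                        # toy input and hidden sizes

def rnn_step(x, h, Wx, Wh, b):
    # The new hidden state mixes the current input with the previous hidden state.
    return np.tanh(x @ Wx + h @ Wh + b)

def lstm_step(x, h, c, W, U, b):
    # One affine map produces all four gate pre-activations; split into i, f, o, g blocks.
    z = x @ W + h @ U + b
    i, f, o, g = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))   # sigmoid gates
    g = np.tanh(g)                                                   # candidate cell update
    c_new = f * c + i * g            # additive cell-state update (helps gradients flow)
    h_new = o * np.tanh(c_new)       # exposed hidden state
    return h_new, c_new

# Random parameters and a short input sequence, just to run the recurrences.
Wx = rng.normal(size=(x_dim, h_dim)); Wh = rng.normal(size=(h_dim, h_dim)); b = np.zeros(h_dim)
W = rng.normal(size=(x_dim, 4 * h_dim)); U = rng.normal(size=(h_dim, 4 * h_dim)); bl = np.zeros(4 * h_dim)

h = np.zeros(h_dim)                                        # vanilla RNN state
h_l, c_l = np.zeros(h_dim), np.zeros(h_dim)                # LSTM hidden and cell states
for x in rng.normal(size=(5, x_dim)):                      # process a sequence of 5 time steps
    h = rnn_step(x, h, Wx, Wh, b)
    h_l, c_l = lstm_step(x, h_l, c_l, W, U, bl)
print(h.round(2), h_l.round(2))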