Deep Learning (22IT814)
Module 1
Student Question Bank
1. A company is developing a deep learning model for text classification but is constrained by
limited computational resources.
a. What is the difference between sequential data and image data?
b. How does the number of trainable parameters compare between standard RNNs, LSTMs,
and GRUs? Which architecture is more parameter-efficient, and why?
c. Develop a text classification model to predict movie review sentiment.
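A minimal starter sketch for part (c), assuming TensorFlow/Keras and the built-in IMDB review dataset; the GRU is chosen for parameter efficiency (tying back to part (b)), and the vocabulary size, sequence length, and layer widths are illustrative choices, not requirements.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN = 10000, 200

# Load pre-tokenized movie reviews (labels: 0 = negative, 1 = positive).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(
    num_words=VOCAB_SIZE)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=MAX_LEN)
x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=MAX_LEN)

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 32),  # word vectors learned from scratch
    layers.GRU(32),                    # fewer parameters than an LSTM of equal width
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128, validation_split=0.2)
print(model.evaluate(x_test, y_test))
```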
2. Consider a speech recognition model that needs to understand sentences spanning multiple
seconds.
a. What are long-term dependencies in text data? Explain with an example. How do they affect
model accuracy?
b. Why do standard RNNs struggle with long-term dependencies? How do LSTMs and
GRUs solve this issue differently?
c. Build a text classification model using a stacked RNN to predict the sentiment of a review.
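A minimal sketch for part (c), again assuming Keras and the IMDB reviews; the point is the stacking pattern, where every recurrent layer except the last must return its full hidden-state sequence.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN = 10000, 200
(x_train, y_train), _ = tf.keras.datasets.imdb.load_data(num_words=VOCAB_SIZE)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=MAX_LEN)

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 64),
    # return_sequences=True feeds the whole hidden-state sequence upward.
    layers.SimpleRNN(64, return_sequences=True),
    layers.SimpleRNN(32),  # top layer keeps only the final state
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128, validation_split=0.2)
```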
3. A text classification system needs to understand the meaning of words in different contexts.
a. What is a word embedding? How does it help models understand text better?
b. How do word embeddings such as Word2Vec and GloVe capture semantic relationships between
words? What are their limitations?
c. Build a text classification model using an RNN and Word2Vec word embeddings.
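A minimal sketch for part (c), assuming gensim and its downloadable pre-trained "word2vec-google-news-300" vectors; the tiny texts/labels lists are stand-ins for a real labeled corpus.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import gensim.downloader as api

w2v = api.load("word2vec-google-news-300")  # pre-trained 300-d Word2Vec vectors

texts = [["the", "movie", "was", "great"], ["a", "dull", "boring", "film"]]
labels = np.array([1, 0])

# Build a vocabulary and copy Word2Vec rows into an embedding matrix
# (row 0 is reserved for padding; words missing from Word2Vec stay zero).
vocab = {w: i + 1 for i, w in enumerate(sorted({w for t in texts for w in t}))}
emb = np.zeros((len(vocab) + 1, 300), dtype="float32")
for w, i in vocab.items():
    if w in w2v:
        emb[i] = w2v[w]

seqs = tf.keras.preprocessing.sequence.pad_sequences(
    [[vocab[w] for w in t] for t in texts], maxlen=10)

model = tf.keras.Sequential([
    layers.Embedding(len(vocab) + 1, 300,
                     embeddings_initializer=tf.keras.initializers.Constant(emb),
                     trainable=False),  # keep the pre-trained vectors frozen
    layers.SimpleRNN(32),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(seqs, labels, epochs=5)
```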
4. A company is developing a neural machine translation system between two languages with
different word orders.
a. Discuss the different RNN architectures along with their applications.
b. How does an Encoder-Decoder architecture process input sentences? What is the impact
of using a fixed-length context vector in traditional sequence-to-sequence models?
c. Build a text classification model using an LSTM and GloVe word embeddings.
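A minimal sketch for part (c), assuming a local copy of the pre-trained glove.6B.100d.txt file (from the Stanford NLP GloVe release) alongside the Keras IMDB data; the id offset of 3 matches how the Keras IMDB loader encodes its word index.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

EMB_DIM, VOCAB_SIZE, MAX_LEN = 100, 10000, 200

(x_train, y_train), _ = tf.keras.datasets.imdb.load_data(num_words=VOCAB_SIZE)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=MAX_LEN)
word_index = tf.keras.datasets.imdb.get_word_index()

# Parse GloVe: each line is "<word> <EMB_DIM floats>".
glove = {}
with open("glove.6B.100d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.split()
        glove[parts[0]] = np.asarray(parts[1:], dtype="float32")

# Keras IMDB reserves ids 0-2, so a word with index i is encoded as i + 3.
emb = np.zeros((VOCAB_SIZE, EMB_DIM), dtype="float32")
for word, i in word_index.items():
    if i + 3 < VOCAB_SIZE and word in glove:
        emb[i + 3] = glove[word]

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, EMB_DIM,
                     embeddings_initializer=tf.keras.initializers.Constant(emb),
                     trainable=False),  # frozen GloVe vectors
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128, validation_split=0.2)
```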
5. A research team finds that their LSTM-based chatbot struggles to generate coherent long
responses.
a. What is the concept of attention? How does it help capture the context of a sentence more
effectively?
b. How does the attention mechanism address the problem of fixed-length context vectors in
Encoder-Decoder models?
c. Build a text classifier using an encoder-decoder architecture.
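A minimal sketch for part (c): an encoder LSTM compresses the review into a fixed-length context vector and a one-step decoder LSTM reads that vector to emit the label. The Keras IMDB data and all layer widths are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN = 10000, 200
(x_train, y_train), _ = tf.keras.datasets.imdb.load_data(num_words=VOCAB_SIZE)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=MAX_LEN)

inp = tf.keras.Input(shape=(MAX_LEN,))
x = layers.Embedding(VOCAB_SIZE, 64)(inp)

# Encoder: the final hidden state is the fixed-length context vector.
_, state_h, state_c = layers.LSTM(64, return_state=True)(x)

# Decoder: a single-step LSTM seeded with the encoder's states.
ctx = layers.RepeatVector(1)(state_h)
dec = layers.LSTM(64)(ctx, initial_state=[state_h, state_c])
out = layers.Dense(1, activation="sigmoid")(dec)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128, validation_split=0.2)
```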
6. A startup wants to train a text generation model but has access to a small dataset.
a. What is the process of text generation using an RNN?
b. How does transfer learning help in text generation tasks? Why is it beneficial compared to
training an RNN from scratch?
c. Build a text classifier by fine-tuning a pre-trained model on your own dataset.
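A minimal sketch for part (c), assuming the Hugging Face transformers and datasets libraries and the public distilbert-base-uncased checkpoint; the imdb dataset is a stand-in for your own data, and the subsetting just keeps the demo fast.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

ds = load_dataset("imdb")  # stand-in for your own labeled dataset
tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def encode(batch):
    return tok(batch["text"], truncation=True, padding="max_length", max_length=256)

ds = ds.map(encode, batched=True)

# Pre-trained encoder with a fresh 2-way classification head on top.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

args = TrainingArguments(output_dir="out", num_train_epochs=1,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args,
        train_dataset=ds["train"].shuffle(seed=42).select(range(2000)),
        eval_dataset=ds["test"].select(range(500))).train()
```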
7. A company wants to generate realistic product descriptions using AI.
a. What are generative models, and what is the principle behind the GAN model?
b. What are the key differences between RNN-based (LSTM/GRU) text generation and GAN-based
image generation?
c. Build a Generative Adversarial Network (GAN) based image generation model and discuss its
architecture.
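A minimal sketch for part (c): a fully connected GAN on MNIST, where the generator maps noise to flattened 28x28 images and the discriminator scores real versus fake. The alternating training loop and all sizes are illustrative.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

LATENT, BATCH = 100, 64
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = (x_train.astype("float32") / 127.5 - 1.0).reshape(-1, 784)

generator = tf.keras.Sequential([
    tf.keras.Input(shape=(LATENT,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(784, activation="tanh"),  # matches the [-1, 1] pixel scaling
])
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability the input is real
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Stacked model trains the generator against a frozen discriminator.
discriminator.trainable = False
gan = tf.keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

for step in range(1000):
    noise = np.random.normal(size=(BATCH, LATENT)).astype("float32")
    fake = generator.predict(noise, verbose=0)
    real = x_train[np.random.randint(0, len(x_train), BATCH)]
    # 1) Discriminator step: real images labeled 1, fakes labeled 0.
    discriminator.train_on_batch(real, np.ones((BATCH, 1)))
    discriminator.train_on_batch(fake, np.zeros((BATCH, 1)))
    # 2) Generator step: push fakes toward being labeled "real".
    gan.train_on_batch(noise, np.ones((BATCH, 1)))
```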
8. A team is training an RNN-based language model, but training becomes unstable as the
sequence length increases.
a. What are the different input-output configurations of RNNs (one-to-many, many-to-one,
many-to-many), and what are their applications?
b. What causes vanishing and exploding gradients in RNNs? Why are LSTMs and GRUs less
prone to this problem?
c. Build an encoder-decoder based text classifier and discuss its working procedure.
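A minimal sketch for part (c) that also ties back to part (b): the same encoder-decoder classifier idea as in question 5, compiled here with gradient clipping (clipnorm), a standard remedy when training destabilizes on long sequences. Data and sizes are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN = 10000, 400  # deliberately long sequences
(x_train, y_train), _ = tf.keras.datasets.imdb.load_data(num_words=VOCAB_SIZE)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=MAX_LEN)

inp = tf.keras.Input(shape=(MAX_LEN,))
x = layers.Embedding(VOCAB_SIZE, 64)(inp)
_, state_h, state_c = layers.LSTM(64, return_state=True)(x)      # encoder
dec = layers.LSTM(64)(layers.RepeatVector(1)(state_h),
                      initial_state=[state_h, state_c])          # one-step decoder
out = layers.Dense(1, activation="sigmoid")(dec)
model = tf.keras.Model(inp, out)

# clipnorm rescales any gradient whose L2 norm exceeds 1.0, guarding
# against the exploding gradients described in part (b).
model.compile(optimizer=tf.keras.optimizers.Adam(clipnorm=1.0),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=64, validation_split=0.2)
```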
9. A text processing system frequently encounters rare words that were not present in the training
vocabulary.
a. What is the out-of-vocabulary (OOV) problem in word embeddings? How does FastText handle it?
b. How do word embeddings handle out-of-vocabulary (OOV) words? What are the
drawbacks of static embeddings like Word2Vec in this scenario?
c. Build a text classifier with custom Word2Vec embeddings.
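A minimal sketch for part (c): train a custom Word2Vec on your own (here, toy) corpus with gensim, then load the learned vectors into a Keras classifier. Swapping gensim's Word2Vec class for its FastText class would add the subword-based OOV handling discussed in parts (a) and (b).

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from gensim.models import Word2Vec

texts = [["the", "movie", "was", "great"], ["a", "dull", "boring", "film"]]
labels = np.array([1, 0])

# Train Word2Vec directly on the target corpus.
w2v = Word2Vec(sentences=texts, vector_size=50, window=3, min_count=1, epochs=50)

vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}  # 0 = padding
emb = np.zeros((len(vocab) + 1, 50), dtype="float32")
for w, i in vocab.items():
    emb[i] = w2v.wv[w]

seqs = tf.keras.preprocessing.sequence.pad_sequences(
    [[vocab.get(w, 0) for w in t] for t in texts], maxlen=10)

model = tf.keras.Sequential([
    layers.Embedding(len(vocab) + 1, 50,
                     embeddings_initializer=tf.keras.initializers.Constant(emb),
                     trainable=True),  # fine-tune the custom vectors
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(seqs, labels, epochs=5)
```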
10.
a. How is an encoder-decoder architecture different from regular RNN-style architectures?
b. Analyze how an attention mechanism helps in capturing better context.
c. Build a text classifier using self-attention layers and discuss the flow of the classification process.
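A minimal sketch for part (c), assuming the Keras IMDB data: token embeddings pass through a self-attention layer (query, key, and value are all the same sequence), a residual connection and layer norm follow Transformer convention, and average pooling produces the vector that is classified. Head count and widths are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN = 10000, 200
(x_train, y_train), _ = tf.keras.datasets.imdb.load_data(num_words=VOCAB_SIZE)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=MAX_LEN)

inp = tf.keras.Input(shape=(MAX_LEN,))
x = layers.Embedding(VOCAB_SIZE, 64)(inp)
# Self-attention: each token re-weights every other token's representation.
attn = layers.MultiHeadAttention(num_heads=2, key_dim=32)(x, x)
x = layers.LayerNormalization()(x + attn)  # residual + norm, Transformer-style
x = layers.GlobalAveragePooling1D()(x)     # pool the sequence into one vector
out = layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128, validation_split=0.2)
```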