Important Questions
Topics: Introduction, Training, Rote Learning, Learning Concepts, General-to-Specific Ordering, Version Spaces, Candidate
Elimination, Inductive Bias, Decision-Tree Induction, Overfitting, Nearest Neighbor Algorithm, Learning Neural Networks
(Intro), Supervised, Unsupervised, Reinforcement Learning.
Easy (Mainly L2 - Definitions & Basic Concepts)
1. Define Machine Learning and distinguish it from Rote Learning.
2. Differentiate between Supervised, Unsupervised, and Reinforcement Learning with examples.
3. What is Inductive Bias? Why is it necessary?
4. Define Overfitting in the context of Decision Trees.
5. What is the core idea behind the Nearest Neighbor Algorithm?
6. What are Learning Concepts in ML?
Moderate / Difficult
11. How does a Multilayer Perceptron (MLP) overcome the limitations of a Perceptron?
12. What are the computational challenges in training Deep Neural Networks?
13. Explain the concept of Gradient Descent and its variations.
14. Discuss the vanishing gradient problem in deep networks.
15. Compare different Activation Functions and their impact on training.
Okay, here is a potential question bank for the Advanced Machine Learning and Deep Learning
M.Tech course, based on the VTU syllabus modules you provided.
Equal Weightage: Typically, VTU exams aim for equal weightage across all modules. You
will likely be required to answer one full question from each module (often with an internal
choice, e.g., answer Question 1a OR 1b).
Prepare All Modules: It is crucial to study all modules thoroughly. Skipping modules is
highly risky as questions are guaranteed from each one.
RBT Levels: The RBT (Revised Bloom's Taxonomy) levels indicated (L2, L3, L4) give you
a clue about the expected complexity:
o L2 (Understand): Requires explaining concepts, defining terms, summarizing ideas.
o L3 (Apply): Requires applying concepts, demonstrating procedures, explaining
architectures, comparing methods.
o L4 (Analyze): Requires breaking down problems, comparing/contrasting complex
ideas, evaluating methods for specific scenarios, discussing pros and cons.
Focus Areas: While all topics are important, focus on understanding the core algorithms,
architectures, their working principles, and applications as indicated by the L3 and L4 levels
dominating the later modules.
Module 5: Applications
Difficult
11. Explain the role of GANs (Generative Adversarial Networks) in Image Synthesis.
12. How do Transformers outperform traditional RNNs in NLP?
13. Discuss the ethical considerations of using Deep Learning in real-world applications.
14. Explain the working of a Deep Neural Network used for Drug Discovery.
15. How does Deep Learning contribute to Large-Scale Data Analytics?
Module 1
Topics: Introduction, Training, Rote Learning, Learning Concepts, General-to-Specific Ordering, Version Spaces, Candidate Elimination, Inductive Bias, Decision-Tree Induction, Overfitting, Nearest Neighbor Algorithm, Learning Neural Networks (Intro), Supervised, Unsupervised, Reinforcement Learning.
Easy
1. Define Machine Learning and its types.
2. What is the difference between Supervised and Unsupervised Learning?
3. Explain Overfitting and ways to prevent it. (Note: "ways to prevent" might lean towards moderate.)
4. What is the role of the Nearest Neighbor Algorithm?
5. Describe Reinforcement Learning with an example.
6. Compare and contrast Rote Learning and Concept Learning.
Moderate
1. Explain the Candidate Elimination Algorithm.
2. Describe the Decision-Tree Induction method.
3. What is Inductive Bias? How does it affect learning?
4. Explain Version Spaces and their significance in Machine Learning.
5. How does General-to-Specific Ordering work in hypothesis learning?
Module 2 (Neural Networks)
Easy
1. What are the differences between Supervised and Unsupervised Learning Networks?
Moderate
1. Explain the structure and working of a Recurrent Neural Network (RNN).
2. What is the importance of backpropagation in training Neural Networks?
3. Discuss the advantages of Evolving Neural Networks.
4. Describe the limitations of a single-layer Perceptron.
5. Explain the concept of Hyperparameters in Neural Networks.
6. How does a Multilayer Perceptron (MLP) overcome the limitations of a Perceptron?
Difficult
1. What are the computational challenges in training Deep Neural Networks?
2. Explain the concept of Gradient Descent and its variations.
3. Discuss the vanishing gradient problem in deep networks.
4. Compare different Activation Functions and their impact on training.
Module 3 (Convolutional Neural Networks)
Easy
1. What is the difference between Max Pooling and Average Pooling?
Moderate
1. Explain the concept of Feature Maps in CNNs.
2. Describe the role of Random or Unsupervised Features in CNNs.
3. What are some efficient algorithms for CNNs?
4. Discuss different architectures of CNNs.
5. Explain the Neuroscientific Basis for CNNs.
Difficult
1. Explain the mathematical formulation of Convolution in CNNs.
2. How do CNNs improve image classification performance?
3. Discuss the challenges of training CNNs with large-scale datasets.
4. Explain Transfer Learning and its role in CNNs.
5. Compare CNNs with traditional Feature Engineering methods.
Module 4 (Recurrent Neural Networks and Sequence Models)
Easy
1. What is a Recurrent Neural Network (RNN)?
2. How does RNN differ from CNN?
3. Define an Encoder-Decoder sequence-to-sequence architecture.
4. Explain the concept of Long Short-Term Memory (LSTM).
5. What is the importance of Gated Recurrent Units (GRUs)?
Module 5 (Applications)
Easy
1. List some real-world applications of Deep Learning.
2. How is Deep Learning used in Computer Vision?
3. What are some common applications of Natural Language Processing (NLP)?
4. Explain the role of Deep Learning in Speech Recognition.
5. Define Large-Scale Deep Learning.
Moderate
1. How does Deep Learning improve Medical Image Processing?
2. Discuss the impact of AI in Autonomous Vehicles.
3. Explain the role of Deep Learning in Cybersecurity.
4. What are some challenges in scaling Deep Learning applications?
5. How is Reinforcement Learning applied in Robotics?
Difficult (Incorporating L4 - Analysis/Evaluation)
1. Explain the role of GANs (Generative Adversarial Networks) in Image Synthesis.
2. How do Transformers outperform traditional RNNs in NLP?
3. Discuss the ethical considerations of using Deep Learning in real-world applications.
4. Explain the working of a Deep Neural Network used for Drug Discovery.
5. How does Deep Learning contribute to Large-Scale Data Analytics?
Convolutional neural networks (CNNs), used widely in deep learning for tasks like image
recognition, are heavily inspired by how the brain processes visual information—
especially by an area called the primary visual cortex (V1).
In the 1950s–60s, neuroscientists Hubel and Wiesel discovered that neurons in cats'
visual cortex respond strongly to specific visual patterns like oriented edges (e.g.,
vertical or horizontal lines).
They showed that:
o Simple cells respond to edges at specific angles in specific locations.
o Complex cells respond to similar patterns regardless of small shifts in
position or lighting.
These findings earned them a Nobel Prize and inspired key CNN features like
convolution and pooling.
1. Spatial mapping – CNN layers are organized as 2D feature maps, just as V1 preserves the spatial layout of the retina.
2. Simple cells – These correspond to the filters in CNNs; they respond to small, local patterns (edges, textures).
3. Complex cells – CNNs use pooling (like max-pooling) to simulate how complex cells become invariant to small shifts or lighting changes (a short code sketch follows this list).
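To make the complex-cell analogy concrete, here is a minimal Python/NumPy sketch (my own illustration, not from the notes): a toy 1-D max-pooling step produces the same output when the detected pattern shifts by one position, which is exactly the kind of small-shift invariance pooling is meant to provide.

```python
import numpy as np

def max_pool_1d(x, window=4):
    """Non-overlapping 1-D max-pooling: a toy stand-in for a complex cell."""
    return np.array([x[i:i + window].max() for i in range(0, len(x), window)])

# A tiny "simple cell" response map: one strong activation somewhere in the input.
response = np.zeros(16)
response[5] = 1.0                # pattern detected at position 5
shifted = np.roll(response, 1)   # the same detection, shifted by one position

print(max_pool_1d(response))     # [0. 1. 0. 0.]
print(max_pool_1d(shifted))      # [0. 1. 0. 0.]  -> identical: pooling absorbed the shift
```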
In deeper layers of the brain (like the inferotemporal cortex, or IT), neurons may
respond to specific concepts—like recognizing your grandmother no matter how she
appears.
These have been found in humans! One famous neuron fired when a subject saw
anything related to Halle Berry (photos, drawings, or even her name). This was
dubbed the “Halle Berry neuron.”
CNNs simulate this concept detection in their final layers.
How human vision differs from a CNN:
o Human vision sees only small parts in high resolution (via the fovea) and uses eye movements (saccades) to explore a scene.
o A CNN sees the full image at once and doesn't move its eyes.
A Gabor function is like a wave (cosine) multiplied by a bell curve (Gaussian): a localized, oriented oscillation that responds to edges of a particular orientation and scale.
CNN filters often learn Gabor-like patterns in their first layer, showing how closely they
mimic the brain.
Combine two simple cells (shifted in phase) using the L2 norm (square root of sum
of squares).
This creates invariance to small shifts—important for recognizing patterns even
when they move a little.
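A hedged NumPy sketch of this "energy model" (illustrative only; the 1-D filters and parameter values are my own choices): two Gabor filters that differ only in phase play the role of the two simple cells, and combining their responses with the L2 norm gives a response that barely changes when the input pattern shifts a little, while a single simple cell's response changes noticeably.

```python
import numpy as np

def gabor_1d(x, sigma=2.0, freq=2.0, phase=0.0):
    """1-D Gabor: a cosine wave multiplied by a Gaussian bell curve."""
    return np.exp(-(x ** 2) / (2 * sigma ** 2)) * np.cos(freq * x + phase)

x = np.linspace(-10, 10, 201)

# Quadrature pair: two "simple cells" with the same envelope, 90 degrees apart in phase.
even = gabor_1d(x, phase=0.0)
odd = gabor_1d(x, phase=np.pi / 2)

def complex_cell(signal):
    """Energy model: L2 norm (square root of sum of squares) of the simple-cell responses."""
    return np.sqrt(np.dot(signal, even) ** 2 + np.dot(signal, odd) ** 2)

pattern = gabor_1d(x)          # an edge-like stimulus
shifted = np.roll(pattern, 3)  # the same stimulus, shifted slightly

print(np.dot(pattern, even), np.dot(shifted, even))  # simple cell: response drops with the shift
print(complex_cell(pattern), complex_cell(shifted))  # complex cell: nearly unchanged
```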
🎨 Visual Confirmation
Research showed that many different learning algorithms, when trained on natural
images, learn similar edge detectors (Gabor-like filters) in the first layer.
This is strong evidence that edge detection is statistically fundamental to
understanding images—not just biologically relevant.
✅ Summary
CNNs are loosely based on how the brain sees, especially in the early vision areas
like V1.
Biological insights inspired core components: convolution, pooling, and feature
hierarchy.
But CNNs and brains are not the same—they differ in structure, input, learning, and
integration with the rest of the body/mind.
In standard RNNs, gradients are propagated back through each time step. For long
sequences, this creates problems:
Vanishing gradients: Gradients shrink exponentially, so the influence of early time steps is learned very slowly or not at all.
Exploding gradients: Gradients grow exponentially, causing unstable training.
This makes it difficult for RNNs to remember information from far back in a sequence.
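To see why, here is a tiny Python sketch (my own illustration, reducing the recurrent Jacobian to a single scalar weight w): backpropagation through time multiplies the gradient by roughly the same factor at every step, so over long sequences it either decays or blows up exponentially.

```python
def backprop_factor(w, steps):
    """Product of `steps` identical scalar 'Jacobians' w, as in unrolled backprop through time."""
    grad = 1.0
    for _ in range(steps):
        grad *= w
    return grad

print(backprop_factor(0.9, 50))   # ~0.005 -> vanishing: early time steps barely influence the update
print(backprop_factor(1.1, 50))   # ~117   -> exploding: updates become huge and training destabilizes
```

Real RNNs multiply by Jacobian matrices rather than scalars, but the same exponential behavior is governed by their largest singular values.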
1. LSTM (Long Short-Term Memory)
Introduces special gates (input, forget, output) and a cell state to allow better flow of long-term information.
The cell state acts like a conveyor belt, enabling gradients to flow largely unimpeded across many time steps (the standard gate equations are sketched after this list).
Trains well even on sequences with dependencies over hundreds of time steps.
2. GRU (Gated Recurrent Unit)
A simpler gated variant (reset and update gates, no separate cell state) that gives similar protection against vanishing gradients with fewer parameters.
3. Gradient Clipping
Caps the gradient norm at a fixed threshold before each update, which keeps exploding gradients from destabilizing training.
4. Skip (Residual) Connections
Adds shortcut connections (like in ResNets) to help gradient flow across layers.
Helpful in deep RNNs or when stacking multiple LSTM layers.
5. Regularization Techniques
Dropout (and recurrent dropout) helps avoid overfitting while improving stability.
Layer normalization can stabilize hidden state dynamics during training.
6. Attention Mechanisms
Used most prominently in Transformers, which can directly attend to any part of the
sequence—not just the last hidden state.
Eliminates the need to encode all information in a single hidden state.
Very effective at modeling long-range dependencies (e.g., in NLP tasks).
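As referenced above under LSTM, here is the standard textbook formulation of the LSTM gates (a reference sketch, not taken from these notes; W, U, b are learned weights and biases, σ is the sigmoid, ⊙ is element-wise multiplication):

```latex
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)} \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)} \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)} \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{(candidate cell state)} \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(conveyor-belt update)} \\
h_t &= o_t \odot \tanh(c_t) && \text{(hidden state)}
\end{aligned}
```

Because the update of c_t is additive (c_{t-1} passes through scaled only by the forget gate), the backward path through the cell state avoids the repeated squashing that causes vanishing gradients in a plain RNN.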
🏁 Summary
LSTM / GRU: remember far-back information through gating mechanisms and a persistent cell state.
Want a diagram to show how LSTM gates manage long-term memory flow?