ASSIGNMENT
MACHINE LEARNING
1 What is the main goal of machine learning?
a) Minimize bias
b) Maximize variance
c) Learn from data
d) Perform arithmetic operations
2 In concept learning, a hypothesis space refers to:
a) Set of possible observations
b) Set of possible outputs
c) Set of all possible hypotheses
d) Set of training data
3 The candidate elimination algorithm helps in:
a) Maximizing entropy
b) Finding training errors
c) Finding the version space
d) Creating new data
4 Inductive bias allows an algorithm to:
a) Ignore training data
b) Generalize from data
c) Perform random learning
d) Reduce complexity
5 Decision tree splits are based on:
a) Maximum depth
b) Entropy or Information Gain
c) Random selection
d) Mean value
6 Heuristic search in ML helps to:
a) Brute-force the solution
b) Optimize the learning path
c) Generate infinite hypotheses
d) Reduce bias
7 A perceptron is used for:
a) Non-linear classification
b) Linear classification
c) Regression only
d) Clustering
8 The activation function in neural networks introduces:
a) Linear behaviour
b) Constant outputs
c) Non-linearity
d) Data noise
9 The backpropagation algorithm is associated with:
a) Genetic mutation
b) Training decision trees
c) Updating neural weights
d) Reinforcement learning
10 Genetic algorithms are inspired by:
a) Brain structure
b) Probability theory
c) Evolutionary biology
d) Decision theory
11 Crossover and mutation are operations in:
a) Decision Trees
b) Backpropagation
c) Genetic Algorithms
d) Naive Bayes
12 Genetic programming evolves:
a) Datasets
b) Algorithms
c) Hyperparameters
d) Decision trees only
13 Bayes’ theorem calculates:
a) Prior probability
b) Conditional probability
c) Mean
d) Median
14 The Naive Bayes classifier assumes:
a) Full dependence
b) No prior knowledge
c) Independent features
d) High variance
15 The EM algorithm is used for:
a) Clustering only
b) Expectation and maximization
c) Exact learning
d) Supervised learning only
16 The mistake bound model is useful in:
a) Neural networks
b) Estimating errors
c) Probabilistic logic
d) Learning efficiency analysis
17 Gibbs algorithm is a method for:
a) Unsupervised learning
b) Probabilistic learning
c) Gradient descent
d) Rule learning
18 Minimum Description Length (MDL) principle favors:
a) Complex models
b) Longer descriptions
c) Simpler hypotheses
d) Infinite data
19 KNN is an example of:
a) Model-based learning
b) Instance-based learning
c) Statistical modeling
d) Unsupervised learning
20 Locally weighted regression is:
a) A non-parametric method
b) A supervised clustering method
c) A global optimization method
d) Unsupervised learning
21 Radial Basis Functions are used in:
a) Genetic Algorithms
b) Neural Networks
c) Instance-based models
d) K-means
22 In KNN, the value of K controls:
a) Feature scaling
b) Complexity of the model
c) Learning rate
d) Distance metric
23 Case-based learning uses:
a) Predefined rules
b) Stored instances
c) Genetic encoding
d) Bayesian models
24 Which algorithm assigns weights based on proximity to a query?
a) Decision Trees
b) Naive Bayes
c) Locally Weighted Regression
d) Q-Learning
25 The FOCL algorithm combines:
a) Induction and backpropagation
b) Explanation-based and inductive learning
c) Decision Trees and Rule Sets
d) KNN and Regression
26 Reinforcement learning is based on:
a) Labelled data
b) Rewards and punishments
c) Case histories
d) Neural heuristics
27 Q-learning is used in:
a) Instance-based learning
b) Supervised classification
c) Reinforcement learning
d) Genetic programming
28 Which method uses inverted deduction?
a) Decision Trees
b) Analytical learning
c) Naive Bayes
d) Neural Networks
29 Temporal Difference Learning is related to:
a) Long-term planning
b) Unsupervised clustering
c) Reinforcement learning
d) Evolutionary computation
30 Explanation-based learning aims to:
a) Increase randomness
b) Create rules from cases
c) Explain concepts via examples
d) Infer biases
SHORT QUESTIONS
1 Define inductive bias with an example.
2 What is a version space in concept learning?
3 Write the key steps in the decision tree learning algorithm.
4 Explain the role of perceptrons in neural networks.
5 How does the backpropagation algorithm work?
6 What is a fitness function in genetic algorithms?
7 Differentiate between hypothesis space and version space.
8 Define Bayes Optimal Classifier.
9 What is the importance of the Gibbs algorithm in learning?
10 State the assumption of the Naïve Bayes classifier.
11 Describe the EM algorithm briefly.
12 What is meant by sample complexity?
13 Explain the Mistake Bound Model.
14 How does K-Nearest Neighbor (KNN) work?
15 What is Case-Based Learning?
16 Describe the concept of Analytical Learning.
17 Define Q-Learning.
18 Mention any two applications of reinforcement learning.
LONG QUESTIONS
1 Explain Concept Learning and Candidate Elimination Algorithm with an example.
2 Discuss the role of inductive bias in learning systems.
3 Describe the structure and learning process of Decision Tree algorithms.
4 Write detailed notes on multilayer neural networks and the backpropagation algorithm.
5 Explain Genetic Algorithms: selection, crossover, mutation, and their roles.
6 Compare and contrast Neural Networks and Genetic Algorithms.
7 Elaborate on Bayes Theorem and its application in machine learning.
8 Explain the Naïve Bayes Classifier with a step-by-step example.
9 Discuss the EM algorithm and how it aids in probabilistic learning.
10 What are the differences between finite and infinite hypothesis spaces?
11 Explain Instance-Based Learning methods with examples (KNN, LWR).
12 Describe Radial Basis Functions and their role in instance-based learning.
13 Explain Sequential Covering Algorithms and Rule-Based Learning.
14 Discuss Explanation-Based Learning and its integration with the FOCL algorithm.
15 Write in detail about Reinforcement Learning, Q-Learning, and Temporal Difference Learning.