Introduction to Machine Learning - Unit 7 - Week 4
Week 4 : Assignment 4
The due date for submitting this assignment has passed.
1)

True
False
Depends on learning rate
Depends on initial weights

Yes, the answer is correct.
Score: 1
Accepted Answers:
True
2) Consider the 1 dimensional dataset: 1 point

State true or false: The dataset becomes linearly separable after using basis expansion with the following basis function:

ϕ(x) = [1, x²]ᵀ
https://onlinecourses.nptel.ac.in/noc25_cs46/unit?unit=51&assessment=309
4/25/25, 6:55 PM
True
False

No, the answer is incorrect.
Score: 0
Accepted Answers:
True
3) For a binary classification problem with the hinge loss function max(0, 1 − y(w ⋅ x)), which of the following statements is correct? 1 point

The loss is zero only when the prediction is exactly equal to the true label
The loss is zero when the prediction is correct and the margin is at least 1
The loss is always positive
The loss increases linearly with the distance from the decision boundary regardless of classification

Yes, the answer is correct.
Score: 1
Accepted Answers:
The loss is zero when the prediction is correct and the margin is at least 1
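The accepted answer can be checked numerically. Below is a minimal NumPy sketch of the hinge loss; the weight vector and data points are made up purely for illustration:

```python
import numpy as np

def hinge_loss(w, X, y):
    """Per-sample hinge loss max(0, 1 - y * (w . x)), with labels y in {-1, +1}."""
    margins = y * (X @ w)
    return np.maximum(0.0, 1.0 - margins)

w = np.array([1.0, 0.0])
X = np.array([[2.0, 0.0],    # correct, margin 2    -> loss 0
              [0.5, 0.0],    # correct, margin 0.5  -> loss 0.5 (inside the margin)
              [-1.0, 0.0]])  # wrong,   margin -1   -> loss 2
y = np.array([1.0, 1.0, 1.0])
print(hinge_loss(w, X, y))
```

The first point shows why "the loss is always positive" is wrong: a correct prediction with margin at least 1 incurs exactly zero loss, while a correct prediction inside the margin still pays a small penalty.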
4) For a dataset with n points in d dimensions, what is the maximum number of support vectors possible in a hard-margin SVM? 1 point

2
d
n/2
n

Yes, the answer is correct.
Score: 1
Accepted Answers:
n
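That every point can be a support vector is easy to see in a degenerate case. A minimal sklearn sketch (a very large C is used here to approximate a hard margin, since `SVC` has no exact hard-margin mode):

```python
import numpy as np
from sklearn.svm import SVC

# With one training point per class, both points must lie on the margin,
# so all n = 2 points are support vectors.
X = np.array([[0.0], [1.0]])
y = np.array([0, 1])
clf = SVC(kernel="linear", C=1e6).fit(X, y)  # large C ~ hard margin
print(int(clf.n_support_.sum()))
```

The same logic scales up: nothing prevents every one of the n points from lying exactly on the margin, which is why n is the maximum.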
5) In the context of soft-margin SVM, what happens to the number of support vectors as the parameter C increases? 1 point

Generally increases
Generally decreases
Remains constant
Changes unpredictably

No, the answer is incorrect.
Score: 0
Accepted Answers:
Generally decreases
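The trend behind the accepted answer can be observed directly: a small C tolerates a wide margin with many points on or inside it, while a large C narrows the margin. A sketch on a synthetic two-blob dataset (the dataset and C values here are illustrative, not from the assignment):

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two overlapping Gaussian blobs; count support vectors as C grows.
X, y = make_blobs(n_samples=100, centers=2, cluster_std=2.0, random_state=0)

counts = {}
for C in (0.01, 1, 100):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    counts[C] = int(clf.n_support_.sum())
print(counts)
```

As C increases the penalty on margin violations rises, fewer points remain on or inside the margin, and the support-vector count generally decreases.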
Kindly download the modified version of the Iris dataset from this link: https://goo.gl/vchhsd. The dataset contains 150 points; each input point has 4 features and belongs to one of three classes. Use the first 100 points as the training data and the remaining 50 as the test data. In the following questions, report accuracy on the test dataset; you may round the accuracy to two decimal places. (Note: Do not change the order of data points.)

6) Which of these is not a support vector when using a Support Vector Classifier with a polynomial kernel with degree = 3, C = 1, and gamma = 0.1? (We recommend using sklearn to solve this question.)

2
1
9
10
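A sketch of how to inspect support-vector indices with sklearn. It uses sklearn's built-in Iris as a stand-in, since the modified dataset lives at the link above; the indices you get on the assignment's file will differ:

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

# Stand-in data: built-in Iris, first 100 points as training data.
X, y = load_iris(return_X_y=True)
X_train, y_train = X[:100], y[:100]

clf = SVC(kernel="poly", degree=3, C=1, gamma=0.1).fit(X_train, y_train)
print(sorted(clf.support_))  # indices of training points that are support vectors
```

Any training index that does not appear in `clf.support_` is not a support vector, which is exactly the check the question asks for.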
7) Train a linear perceptron classifier on the modified Iris dataset. We recommend using sklearn. Use only the first two features for your model and report the best classification accuracy for l1 and l2 penalty terms. 2 points

0.91, 0.64
0.88, 0.71
0.71, 0.65
0.78, 0.64
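A sketch of the perceptron comparison. It again uses the built-in Iris as a stand-in; because the built-in copy is ordered by class, the stand-in is shuffled here, whereas the assignment's modified dataset must be used in its given order:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
idx = rng.permutation(len(X))      # shuffle the stand-in only
X, y = X[idx, :2], y[idx]          # first two features, as in Q7
X_tr, y_tr, X_te, y_te = X[:100], y[:100], X[100:], y[100:]

accs = {}
for penalty in ("l1", "l2"):
    clf = Perceptron(penalty=penalty, random_state=0).fit(X_tr, y_tr)
    accs[penalty] = round(clf.score(X_te, y_te), 2)
print(accs)
```

The `random_state` value is an arbitrary choice; the question asks for the best accuracy each penalty achieves, so in practice you may also want to vary `alpha` and `random_state`.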
8) Train an SVM classifier on the modified Iris dataset. We recommend using sklearn. Use only the first three features. We encourage you to explore the impact of varying different hyperparameters of the model; specifically, try different kernels and the associated hyperparameters. As part of the assignment, train models with the following set of hyperparameters: RBF kernel, gamma = 0.5, one-vs-rest classifier, no feature normalization. Try C = 0.01, 1, 10. For the above set of hyperparameters, report the best classification accuracy. 2 points

0.98
0.88
0.99
0.92
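A sketch of the sweep over C, once more on a shuffled built-in Iris stand-in (the assignment's modified dataset is used as-is). `OneVsRestClassifier` is used to make the one-vs-rest scheme explicit, since `SVC` on its own trains one-vs-one internally:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
idx = rng.permutation(len(X))      # shuffle the stand-in only
X, y = X[idx, :3], y[idx]          # first three features, no normalization
X_tr, y_tr, X_te, y_te = X[:100], y[:100], X[100:], y[100:]

accs = {}
for C in (0.01, 1, 10):
    clf = OneVsRestClassifier(SVC(kernel="rbf", gamma=0.5, C=C)).fit(X_tr, y_tr)
    accs[C] = round(clf.score(X_te, y_te), 2)
print(accs, "best:", max(accs.values()))
```

Reporting `max(accs.values())` matches the question's "best classification accuracy over the given C values".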