Introduction To Machine Learning - Unit 5 - Week 2
Week 2 : Assignment 2
The due date for submitting this assignment has passed.
It is non-convex.
It measures the sum of squared differences between predicted and actual values.
It assumes the dependent variable is categorical.
3) Which of these would most likely indicate that Lasso regression is a better choice than Ridge regression? (1 point)

All features are equally important
Features are highly correlated
Most features have small but non-zero impact
Only a few features are truly relevant

Yes, the answer is correct.
Score: 1
Accepted Answers:
Only a few features are truly relevant
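The intuition behind the accepted answer is that Lasso's L1 penalty can drive irrelevant coefficients exactly to zero, while Ridge's L2 penalty only shrinks them. A minimal sketch, assuming scikit-learn is available; the dataset, seed, and alpha value are illustrative, not from the course:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
# Sparse ground truth: only the first 2 of 10 features are truly relevant.
true_coef = np.zeros(p)
true_coef[:2] = [3.0, -2.0]
y = X @ true_coef + rng.normal(scale=0.1, size=n)

lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=0.1).fit(X, y)

# Lasso zeroes out irrelevant coefficients; Ridge merely shrinks them.
n_zero_lasso = int(np.sum(np.abs(lasso.coef_) < 1e-8))
n_zero_ridge = int(np.sum(np.abs(ridge.coef_) < 1e-8))
print(n_zero_lasso, n_zero_ridge)
```

On data like this, Lasso recovers the sparse structure (most coefficients exactly zero), which is exactly the regime where it beats Ridge.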
4) Which of the following conditions must hold for the least squares estimator in linear regression to be unbiased? (1 point)
6) You have trained a complex regression model on a dataset. To reduce its complexity, you decide to apply Ridge regression, using a regularization parameter λ. How does the relationship between bias and variance change as λ becomes very large? Select the correct option. (1 point)

bias is low, variance is low.
bias is low, variance is high.
bias is high, variance is low.
bias is high, variance is high.

Yes, the answer is correct.
Score: 1
Accepted Answers:
bias is high, variance is low.
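The effect of a very large λ can be seen directly from the closed-form Ridge solution θ = (XᵀX + λI)⁻¹XᵀY: as λ dominates, the estimate is pulled toward zero regardless of the data (high bias), and it barely moves between samples (low variance). A NumPy sketch under illustrative data and λ values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 5
X = rng.normal(size=(n, p))
theta_true = rng.normal(size=p)
y = X @ theta_true + rng.normal(scale=0.1, size=n)

def ridge_closed_form(X, y, lam):
    # Ridge estimator: theta = (X^T X + lam * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

small_lam = ridge_closed_form(X, y, 0.01)
large_lam = ridge_closed_form(X, y, 1e6)

# As lambda grows, the coefficient vector collapses toward zero.
print(np.linalg.norm(small_lam), np.linalg.norm(large_lam))
```

With λ near zero the estimate tracks the least-squares fit; with huge λ the coefficients are almost exactly zero, i.e. the model predicts roughly a constant: high bias, low variance.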
7) Given a training data set of 10,000 instances, with each input instance having 12 dimensions and each output instance having 3 dimensions, the dimensions of the design matrix used in applying linear regression to this data is (1 point)

10000 × 12
10003 × 12
10000 × 13
10000 × 15

Yes, the answer is correct.
Score: 1
Accepted Answers:
10000 × 13
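The extra column comes from augmenting each input with a constant 1 for the intercept term, so 12 input dimensions give a 10000 × 13 design matrix (the 3 output dimensions affect the target matrix, not the design matrix). A quick NumPy sketch with illustrative random data:

```python
import numpy as np

n, d = 10000, 12
X_raw = np.random.default_rng(2).normal(size=(n, d))
# Prepend a column of ones for the intercept: design matrix is N x (d + 1).
X_design = np.hstack([np.ones((n, 1)), X_raw])
print(X_design.shape)  # (10000, 13)
```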
8) The linear regression model y = a0 + a1 x1 + a2 x2 + ... + ap xp is to be fitted to a set of N training data points having p attributes each. Let X be the N × (p + 1) matrix of input values (augmented by 1's), Y be the N × 1 vector of target values, and θ be the (p + 1) × 1 vector of parameter values (a0, a1, a2, ..., ap). If the sum of squared errors is minimized to obtain the optimal regression model, which of the following equations holds? (1 point)

X^T X = XY
Xθ = X^T Y
X^T Xθ = Y
X^T Xθ = X^T Y

Yes, the answer is correct.
Score: 1
Accepted Answers:
X^T Xθ = X^T Y
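The accepted answer is the normal equations: setting the gradient of ||Y − Xθ||² to zero gives XᵀXθ = XᵀY. A NumPy sketch on illustrative synthetic data, cross-checked against NumPy's built-in least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(3)
N, p = 50, 3
# Design matrix augmented by a column of 1's, as in the question.
X = np.hstack([np.ones((N, 1)), rng.normal(size=(N, p))])
theta_true = np.array([1.0, 2.0, -1.0, 0.5])
Y = X @ theta_true + rng.normal(scale=0.05, size=N)

# Normal equations: X^T X theta = X^T Y
theta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Cross-check against numpy's least-squares routine.
theta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.allclose(theta_hat, theta_lstsq))
```

Both routes minimize the same sum of squared errors, so the solutions agree (lstsq is preferred in practice for numerical stability when XᵀX is ill-conditioned).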
9) Which of the following scenarios is most appropriate for using Partial Least Squares 1 point
(PLS) regression instead of ordinary least squares (OLS)?
When the predictors are uncorrelated and the number of samples is much larger than the
number of predictors.
When there is significant multicollinearity among predictors or the number of predictors
exceeds the number of samples.
When the response variable is categorical and the predictors are highly non-linear.
When the primary goal is to interpret the relationship between predictors and response,
rather than prediction accuracy.
10) Consider forward selection, backward selection and best subset selection with 1 point
respect to the same data set. Which of the following is true?
(a) Best subset selection can be computationally more expensive than forward selection
(b) Forward selection and backward selection always lead to the same result
(c) Best subset selection can be computationally less expensive than backward selection
(d) Best subset selection and forward selection are computationally equally expensive
(e) Both (b) and (d)
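The computational comparison can be made concrete by counting models fitted: best subset selection evaluates every one of the 2^p feature subsets, while forward selection fits at most p + (p − 1) + ... + 1 = p(p + 1)/2 candidate models across its greedy steps. A small sketch:

```python
# Models fitted by each selection strategy for p candidate features.

def best_subset_models(p: int) -> int:
    # Every subset of the p features is a candidate model.
    return 2 ** p

def forward_selection_models(p: int) -> int:
    # Step k tries each of the (p - k) remaining features once.
    return p * (p + 1) // 2

for p in (10, 20):
    print(p, best_subset_models(p), forward_selection_models(p))
```

Already at p = 20 best subset needs over a million fits versus 210 for forward selection, which is why best subset is typically far more expensive.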