
Introduction To Machine Learning - Unit 5 - Week 2

The document outlines the Week 2 assignment for the 'Introduction to Machine Learning' course, detailing various questions related to linear regression and model evaluation. It includes topics such as the purpose of intercept terms, cost functions, and conditions for unbiased estimators. The assignment also covers concepts like Lasso vs. Ridge regression and the implications of model complexity on bias and variance.



4/25/25, 6:54 PM — Introduction to Machine Learning - Unit 5 - Week 2



Week 2 : Assignment 2

The due date for submitting this assignment has passed.
Due on 2025-02-05, 23:59 IST.
Assignment submitted on 2025-02-05, 22:26 IST.
1) In a linear regression model y = θ0 + θ1x1 + θ2x2 + ... + θpxp, what is the purpose of adding an intercept term (θ0)? (1 point)

To increase the model's complexity.
To account for the effect of independent variables.
To adjust for the baseline level of the dependent variable when all predictors are zero.
To ensure the coefficients of the model are unbiased.

Yes, the answer is correct.
Score: 1
Accepted Answers:
To adjust for the baseline level of the dependent variable when all predictors are zero.
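The role of the intercept can be checked numerically. A minimal NumPy sketch (not part of the assignment; the baseline value 5.0 and slope 2.0 are made-up illustration data): with an intercept column, least squares recovers the baseline; without one, the fitted line is forced through the origin and the slope absorbs the baseline.

```python
import numpy as np

# Hypothetical noise-free data with a baseline of 5 at x = 0: y = 5 + 2x.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 5.0 + 2.0 * x

# Design matrix with an intercept column of ones: [1, x].
X_with = np.column_stack([np.ones_like(x), x])
theta, *_ = np.linalg.lstsq(X_with, y, rcond=None)
print(theta)  # recovers [5.0, 2.0]: theta_0 is the baseline level

# Without an intercept the line must pass through the origin,
# so the single slope coefficient is biased upward to compensate.
slope, *_ = np.linalg.lstsq(x[:, None], y, rcond=None)
print(slope)
```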
2) Which of the following is true about the cost function (objective function) used in linear regression? (1 point)

It is non-convex.
It is always minimized at θ = 0.
It measures the sum of squared differences between predicted and actual values.
It assumes the dependent variable is categorical.

Yes, the answer is correct.
Score: 1
Accepted Answers:
It measures the sum of squared differences between predicted and actual values.
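The accepted answer describes the sum-of-squared-errors cost. A short sketch with assumed toy data (y = 1 + 2x, not from the assignment) shows that the cost is zero at the true parameters and positive elsewhere; being a quadratic in θ, it is also convex, which rules out the first option.

```python
import numpy as np

def sse_cost(X, y, theta):
    """Sum of squared differences between predictions X @ theta and targets y."""
    residuals = X @ theta - y
    return float(residuals @ residuals)

# Toy data: intercept column plus one feature, targets from y = 1 + 2x.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([3.0, 5.0, 7.0])

print(sse_cost(X, y, np.array([1.0, 2.0])))  # 0.0 at the true parameters
print(sse_cost(X, y, np.array([0.0, 0.0])))  # 9 + 25 + 49 = 83.0, not minimal at zero
```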


3) Which of these would most likely indicate that Lasso regression is a better choice than Ridge regression? (1 point)

All features are equally important
Features are highly correlated
Most features have small but non-zero impact
Only a few features are truly relevant

Yes, the answer is correct.
Score: 1
Accepted Answers:
Only a few features are truly relevant
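The sparsity behind the accepted answer can be seen directly. A hedged sketch assuming scikit-learn is available (the data are synthetic: ten features, only the first two relevant): Lasso's L1 penalty sets irrelevant coefficients exactly to zero, while Ridge's L2 penalty only shrinks them.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Synthetic data: 10 features, but only the first two actually drive y.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)

lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=0.1).fit(X, y)

# Lasso zeroes out the irrelevant coefficients; Ridge leaves them small but nonzero.
print(np.sum(lasso.coef_ == 0))  # most of the 8 irrelevant coefficients
print(np.sum(ridge.coef_ == 0))
```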
4) Which of the following conditions must hold for the least squares estimator in linear regression to be unbiased? (1 point)

The independent variables must be normally distributed.
The relationship between predictors and the response must be non-linear.
The errors must have a mean of zero.
The sample size must be larger than the number of predictors.

Yes, the answer is correct.
Score: 1
Accepted Answers:
The errors must have a mean of zero.
5) When performing linear regression, which of the following is most likely to cause overfitting? (1 point)

Adding too many regularization terms.
Including irrelevant predictors in the model.
Increasing the sample size.
Using a smaller design matrix.

Yes, the answer is correct.
Score: 1
Accepted Answers:
Including irrelevant predictors in the model.

6) You have trained a complex regression model on a dataset. To reduce its complexity, you decide to apply Ridge regression, using a regularization parameter λ. How does the relationship between bias and variance change as λ becomes very large? Select the correct option. (1 point)

Bias is low, variance is low.
Bias is low, variance is high.
Bias is high, variance is low.
Bias is high, variance is high.

Yes, the answer is correct.
Score: 1
Accepted Answers:
Bias is high, variance is low.
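The shrinkage behind this answer can be demonstrated with the closed-form ridge estimate (a NumPy sketch on synthetic data, not from the assignment): as λ grows, the coefficient vector is pulled toward zero, making the model more rigid (higher bias) but less sensitive to the particular sample (lower variance).

```python
import numpy as np

# Synthetic data: 100 samples, 5 features, known coefficients plus noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X @ np.array([2.0, -1.0, 0.5, 3.0, -2.0]) + rng.normal(size=100)

def ridge_theta(X, y, lam):
    """Closed-form ridge solution: (X^T X + lam I)^(-1) X^T y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# The norm of the coefficient vector shrinks as lam increases.
for lam in (0.0, 10.0, 1000.0):
    print(lam, np.linalg.norm(ridge_theta(X, y, lam)))
```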


7) Given a training data set of 10,000 instances, with each input instance having 12 dimensions and each output instance having 3 dimensions, the dimensions of the design matrix used in applying linear regression to this data is (1 point)

10000 × 12
10003 × 12
10000 × 13
10000 × 15

Yes, the answer is correct.
Score: 1
Accepted Answers:
10000 × 13
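The accepted shape follows from augmenting the 12 input dimensions with a column of ones for the intercept; the 3 output dimensions affect the target matrix, not the design matrix. A minimal NumPy check:

```python
import numpy as np

# 10,000 instances, 12 input dimensions (placeholder zeros; values are irrelevant here).
n, d = 10_000, 12
inputs = np.zeros((n, d))

# Prepend a column of ones for the intercept term: d + 1 = 13 columns.
design = np.column_stack([np.ones(n), inputs])
print(design.shape)  # (10000, 13)
```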
8) The linear regression model y = a0 + a1x1 + a2x2 + ... + apxp is to be fitted to a set of N training data points having p attributes each. Let X be the N × (p + 1) matrix of input values (augmented by 1's), Y be the N × 1 vector of target values, and θ be the (p + 1) × 1 vector of parameter values (a0, a1, a2, ..., ap). If the sum squared error is minimized to obtain the optimal regression model, which of the following equations holds? (1 point)

XᵀX = XY
Xθ = XᵀY
XᵀXθ = Y
XᵀXθ = XᵀY

Yes, the answer is correct.
Score: 1
Accepted Answers:
XᵀXθ = XᵀY
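The accepted normal equations can be verified numerically. A sketch assuming XᵀX is invertible (synthetic data, 50 points, 3 attributes plus an intercept column): solving XᵀXθ = XᵀY agrees with NumPy's least-squares solver, which minimizes ‖Xθ − Y‖².

```python
import numpy as np

# Synthetic design matrix augmented by 1's, and random targets.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 3))])
Y = rng.normal(size=50)

# Solve the normal equations X^T X theta = X^T Y.
theta = np.linalg.solve(X.T @ X, X.T @ Y)

# Compare against the direct least-squares solver.
theta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.allclose(theta, theta_lstsq))  # True
```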

9) Which of the following scenarios is most appropriate for using Partial Least Squares (PLS) regression instead of ordinary least squares (OLS)? (1 point)

When the predictors are uncorrelated and the number of samples is much larger than the
number of predictors.
When there is significant multicollinearity among predictors or the number of predictors
exceeds the number of samples.
When the response variable is categorical and the predictors are highly non-linear.
When the primary goal is to interpret the relationship between predictors and response,
rather than prediction accuracy.

Yes, the answer is correct.


Score: 1
Accepted Answers:
When there is significant multicollinearity among predictors or the number of predictors
exceeds the number of samples.

10) Consider forward selection, backward selection, and best subset selection with respect to the same data set. Which of the following is true? (1 point)

(a) Best subset selection can be computationally more expensive than forward selection
(b) Forward selection and backward selection always lead to the same result
(c) Best subset selection can be computationally less expensive than backward selection
(d) Best subset selection and forward selection are computationally equally expensive
(e) Both (b) and (d)

Yes, the answer is correct.
Score: 1
Accepted Answers:
(a) Best subset selection can be computationally more expensive than forward selection
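The cost gap behind answer (a) is easy to count: best subset selection fits a model for every subset of the p predictors (2^p models), while forward selection fits at most 1 + p(p+1)/2 models (the null model, then p candidates, then p − 1, and so on). A rough-count sketch:

```python
# Rough counts of candidate models fitted for p predictors
# (ignoring the final model-comparison step).
def best_subset_models(p):
    return 2 ** p  # every subset of the p predictors

def forward_selection_models(p):
    # null model, then p candidates, then p - 1, ..., down to 1
    return 1 + p * (p + 1) // 2

for p in (10, 20):
    print(p, best_subset_models(p), forward_selection_models(p))
# At p = 20: 1,048,576 models for best subset vs. 211 for forward selection.
```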

