Introduction To Machine Learning - Unit 4 - Week 2

This document is from the Introduction To Machine Learning course on NPTEL and covers the Week 2 assignment. It includes 10 multiple choice questions testing concepts related to linear regression, subset selection methods, and ridge and lasso regression. Correct answers are provided for each question.


13/08/2023, 13:09 Introduction To Machine Learning - Unit 4 - Week 2



Click to register
for Certification
exam
Week 2: Assignment 2
(https://examform.nptel.ac.in/2023_10/exam_form/dashboard)
The due date for submitting this assignment has passed.

If already Due on 2023-08-09, 23:59 IST.


As per our records you have not submitted this assignment.
registered, click
to check your 1) The parameters obtained in linear regression 1 point
payment status
can take any value in the real space
are strictly integers
always lie in the range [0,1]
Course
can take only non-zero values
outline
No, the answer is incorrect.
Score: 0
How does an
Accepted Answers:
NPTEL
can take any value in the real space
online
course
2) Suppose that we have N independent variables (X1, X2, ..., Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best fit line using the least square error on this data. You found that the correlation coefficient for one of its variables (say X1) with Y is -0.005. (1 point)

Regressing Y on X1 mostly does not explain away Y.
Regressing Y on X1 explains away Y.
The given data is insufficient to determine if regressing Y on X1 explains away Y or not.

No, the answer is incorrect.
Score: 0
Accepted Answers:
Regressing Y on X1 mostly does not explain away Y.
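For a simple (one-variable) linear regression, the coefficient of determination R² is exactly the square of the correlation coefficient, so r = -0.005 means X1 explains essentially none of the variance in Y. A minimal check (only the value -0.005 comes from the question):

```python
# In simple linear regression of Y on a single variable X1,
# R^2 (fraction of Y's variance explained) equals r^2.
r = -0.005  # correlation of X1 with Y, as given in the question

r_squared = r ** 2
print(r_squared)  # 2.5e-05 -- about 0.0025% of Y's variance
```

A fit this weak is why the accepted answer says regressing Y on X1 "mostly does not explain away Y".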
3) Which of the following is a limitation of subset selection methods in regression? (1 point)

They tend to produce biased estimates of the regression coefficients.
They cannot handle datasets with missing values.
They are computationally expensive for large datasets.
They assume a linear relationship between the independent and dependent variables.
They are not suitable for datasets with categorical predictors.

No, the answer is incorrect.
Score: 0
Accepted Answers:
They are computationally expensive for large datasets.
4) The relation between studying time (in hours) and grade on the final examination (0-100) in a random sample of students in the Introduction to Machine Learning class was found to be: Grade = 30.5 + 15.2 (h) (1 point)
How will a student's grade be affected if she studies for four hours?

It will go down by 30.4 points.
It will go up by 60.8 points.
The grade will remain unchanged.
It cannot be determined from the information given.

No, the answer is incorrect.
Score: 0
Accepted Answers:
It will go up by 60.8 points.
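The fitted line above predicts Grade = 30.5 + 15.2h, so studying for four hours raises the predicted grade by 15.2 × 4 = 60.8 points relative to not studying. A quick arithmetic check:

```python
# Fitted model from the question: Grade = 30.5 + 15.2 * hours
intercept = 30.5
slope = 15.2

# Change in predicted grade from 0 hours to 4 hours of study:
increase = slope * 4
print(increase)  # 60.8
```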
5) Which of the statements is/are True? (1 point)

Ridge has sparsity constraint, and it will drive coefficients with low values to 0.
Lasso has a closed form solution for the optimization problem, but this is not the case for Ridge.
Ridge regression does not reduce the number of variables since it never leads a coefficient to zero but only minimizes it.
If there are two or more highly collinear variables, Lasso will select one of them randomly.

No, the answer is incorrect.
Score: 0
Accepted Answers:
Ridge regression does not reduce the number of variables since it never leads a coefficient to zero but only minimizes it.
If there are two or more highly collinear variables, Lasso will select one of them randomly.
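The ridge part of the accepted answer can be seen numerically: ridge has the closed-form solution θ = (XᵀX + λI)⁻¹XᵀY (it is lasso, not ridge, that lacks a closed form), and it shrinks coefficients toward zero without setting any of them exactly to zero. A small NumPy sketch on made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_theta = np.array([3.0, 0.0, -2.0, 0.5, 0.0])
y = X @ true_theta + 0.1 * rng.normal(size=50)

# Ordinary least squares: solve (X^T X) theta = X^T y
ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge closed form: solve (X^T X + lam * I) theta = X^T y
lam = 10.0
ridge = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

print(np.all(ridge != 0))                           # True: no coefficient hits exactly zero
print(np.linalg.norm(ridge) < np.linalg.norm(ols))  # True: coefficients are shrunk
```

Even the coefficients whose true values are 0 come out small but nonzero, which is exactly why ridge never reduces the number of variables.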
6) Find the mean of squared error for the given predictions: (1 point)

[The table of actual and predicted values appeared as an image in the original page and is not reproduced here.]

Hint: Find the squared error for each prediction and take the mean of that.

1
2
1.5
0

No, the answer is incorrect.
Score: 0
Accepted Answers:
1
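Since the question's table is not available, here is the hint carried out on hypothetical values chosen so that the squared errors average to 1, matching the accepted answer:

```python
import numpy as np

# Hypothetical data -- the actual table from the question was an image
# and is not available here.
y_true = np.array([2.0, 4.0, 6.0, 8.0])
y_pred = np.array([3.0, 3.0, 7.0, 7.0])

squared_errors = (y_true - y_pred) ** 2  # per-prediction squared error
mse = squared_errors.mean()              # mean of the squared errors
print(mse)  # 1.0
```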
7) Consider the following statements: (1 point)
Statement A: In Forward stepwise selection, in each step, that variable is chosen which has the maximum correlation with the residual, then the residual is regressed on that variable, and it is added to the predictor.
Statement B: In Forward stagewise selection, the variables are added one by one to the previously selected variables to produce the best fit till then.

Both the statements are True.
Statement A is True, and Statement B is False.
Statement A is False and Statement B is True.
Both the statements are False.

No, the answer is incorrect.
Score: 0
Accepted Answers:
Both the statements are False.

8) The linear regression model y = a0 + a1 x1 + a2 x2 + ... + ap xp is to be fitted to a set of N training data points having p attributes each. Let X be the N × (p + 1) matrix of input values (augmented by 1's), Y be the N × 1 vector of target values, and θ be the (p + 1) × 1 vector of parameter values (a0, a1, a2, ..., ap). If the sum squared error is minimized for obtaining the optimal regression model, which of the following equations holds? (1 point)

XᵀX = XY

Xθ = XᵀY

XᵀXθ = Y

XᵀXθ = XᵀY

No, the answer is incorrect.
Score: 0
Accepted Answers:
XᵀXθ = XᵀY
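The accepted answer is the normal equation: setting the gradient of the sum of squared errors ‖Y − Xθ‖² to zero gives XᵀXθ = XᵀY. This can be checked numerically on assumed data — solving the normal equation reproduces NumPy's least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 20, 3
# Input matrix augmented by a column of 1's, as in the question
X = np.hstack([np.ones((N, 1)), rng.normal(size=(N, p))])
Y = rng.normal(size=N)

# Solve the normal equation X^T X theta = X^T Y directly
theta = np.linalg.solve(X.T @ X, X.T @ Y)

# Compare with NumPy's built-in least-squares solver
theta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.allclose(theta, theta_lstsq))  # True
```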

9) Which of the following statements is true regarding Partial Least Squares (PLS) regression? (1 point)

PLS is a dimensionality reduction technique that maximizes the covariance between the predictors and the dependent variable.
PLS is only applicable when there is no multicollinearity among the independent variables.
PLS can handle situations where the number of predictors is larger than the number of observations.
PLS estimates the regression coefficients by minimizing the residual sum of squares.
PLS is based on the assumption of normally distributed residuals.
All of the above.
None of the above.

No, the answer is incorrect.
Score: 0
Accepted Answers:
PLS is a dimensionality reduction technique that maximizes the covariance between the predictors and the dependent variable.

10) Which of the following statements about principal components in Principal Component Regression (PCR) is true? (1 point)

Principal components are calculated based on the correlation matrix of the original predictors.
The first principal component explains the largest proportion of the variation in the dependent variable.
Principal components are linear combinations of the original predictors that are uncorrelated with each other.
PCR selects the principal components with the highest p-values for inclusion in the regression model.
PCR always results in a lower model complexity compared to ordinary least squares regression.

No, the answer is incorrect.
Score: 0
Accepted Answers:
Principal components are linear combinations of the original predictors that are uncorrelated with each other.
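The accepted answer to the PCR question can be verified directly: projecting centred data onto the eigenvectors of its covariance matrix gives component scores whose covariance matrix is diagonal, i.e. the components are mutually uncorrelated. A sketch on illustrative data:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
X[:, 1] += 0.8 * X[:, 0]   # make the original predictors correlated

Xc = X - X.mean(axis=0)    # centre the predictors
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

scores = Xc @ eigvecs      # principal-component scores (linear combinations of predictors)
score_cov = np.cov(scores, rowvar=False)

# Off-diagonal covariances of the scores are numerically zero:
off_diag = score_cov - np.diag(np.diag(score_cov))
print(np.allclose(off_diag, 0))  # True
```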