Extract Pages From 2 ML

The document outlines a series of questions and answers related to linear regression concepts, including the purpose of adding an intercept term, characteristics of the cost function, and conditions for unbiased estimators. Each question is followed by the correct answer and a score indicating the correctness of the response. The assignment was submitted on January 28, 2025, and was due on February 5, 2025.

Week 2: Assignment 2
The due date for submitting this assignment has passed.
Due on 2025-02-05, 23:59 IST.

Assignment submitted on 2025-01-28, 09:28 IST


1) In a linear regression model, what is the purpose of adding an intercept term? (1 point)

To increase the model’s complexity


To account for the effect of independent variables.
To adjust for the baseline level of the dependent variable when all predictors are zero.
To ensure the coefficients of the model are unbiased.

Yes, the answer is correct.


Score: 1
Accepted Answers:
To adjust for the baseline level of the dependent variable when all predictors are zero.
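A minimal numpy sketch of the accepted answer, using made-up data generated as y = 5 + 2x: the fitted intercept recovers the baseline value of y when the predictor is zero.

```python
import numpy as np

# Toy data with a known baseline: y = 5 + 2*x, so y = 5 when x = 0.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 5.0 + 2.0 * x

# Design matrix with a column of 1's for the intercept term.
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print(coef)  # intercept ≈ 5.0 (baseline when all predictors are zero), slope ≈ 2.0
```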

2) Which of the following is true about the cost function (objective function) used in linear regression? (1 point)

It is non-convex.

It is always minimized at
It measures the sum of squared differences between predicted and actual values.
It assumes the dependent variable is categorical.

Yes, the answer is correct.


Score: 1
Accepted Answers:
It measures the sum of squared differences between predicted and actual values.
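The accepted answer can be written down directly; as a sketch (with illustrative values), the cost is the sum of squared differences between predicted and actual values, and as a quadratic it is convex, not non-convex.

```python
import numpy as np

# Sum-of-squared-errors cost for a linear model with parameters theta.
def sse_cost(X, y, theta):
    residuals = X @ theta - y            # predicted minus actual values
    return float(residuals @ residuals)  # sum of squared differences

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # intercept column + one feature
y = np.array([2.0, 4.0, 6.0])

print(sse_cost(X, y, np.array([0.0, 2.0])))  # 0.0 — a perfect fit has zero cost
```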

3) Which of these would most likely indicate that Lasso regression is a better choice than Ridge regression? (1 point)

All features are equally important


Features are highly correlated
Most features have small but non-zero impact
Only a few features are truly relevant

Yes, the answer is correct.


Score: 1
Accepted Answers:
Only a few features are truly relevant
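A sketch of why sparsity favors Lasso, assuming the special orthonormal-design case where both penalties act coordinate-wise on the OLS coefficients: ridge rescales every coefficient, while lasso soft-thresholds them, setting small ones exactly to zero.

```python
import numpy as np

# Orthonormal-design shortcuts (illustrative, not the general-case solvers):
def ridge_shrink(beta_ols, lam):
    return beta_ols / (1.0 + lam)  # uniform shrinkage, never exactly zero

def lasso_shrink(beta_ols, lam):
    # Soft-thresholding: coefficients smaller than lam are zeroed out.
    return np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - lam, 0.0)

beta_ols = np.array([5.0, 0.3, -0.2, 4.0])  # a few large, some tiny effects
print(ridge_shrink(beta_ols, 0.5))  # all shrunk, none exactly zero
print(lasso_shrink(beta_ols, 0.5))  # tiny coefficients become exactly zero
```

When only a few features are truly relevant, this exact zeroing performs feature selection, which ridge cannot do.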

4) Which of the following conditions must hold for the least squares estimator in linear regression to be unbiased? (1 point)

The independent variables must be normally distributed.


The relationship between predictors and the response must be non-linear.
The errors must have a mean of zero.
The sample size must be larger than the number of predictors.

Yes, the answer is correct.


Score: 1
Accepted Answers:
The errors must have a mean of zero.
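A Monte Carlo sketch of the accepted answer (simulated data, fixed seed): when the errors have mean zero, the least-squares estimates averaged over many datasets recover the true parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
true = np.array([1.0, 3.0])  # true intercept and slope
x = np.linspace(0, 1, 50)
X = np.column_stack([np.ones_like(x), x])

estimates = []
for _ in range(2000):
    # Errors drawn with mean zero — the condition for unbiasedness.
    y = X @ true + rng.normal(0.0, 1.0, size=x.size)
    estimates.append(np.linalg.lstsq(X, y, rcond=None)[0])

mean_est = np.mean(estimates, axis=0)
print(mean_est)  # close to [1.0, 3.0]
```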

5) When performing linear regression, which of the following is most likely to cause overfitting? (1 point)

Adding too many regularization terms.


Including irrelevant predictors in the model.
Increasing the sample size.
Using a smaller design matrix.

Yes, the answer is correct.


Score: 1
Accepted Answers:
Including irrelevant predictors in the model.

6) You have trained a complex regression model on a dataset. To reduce its complexity, you decide to apply Ridge regression with a regularization parameter λ. How does the relationship between bias and variance change as λ becomes very large? Select the correct option. (1 point)

bias is low, variance is low.


bias is low, variance is high.
bias is high, variance is low.
bias is high, variance is high.

Yes, the answer is correct.


Score: 1
Accepted Answers:
bias is high, variance is low.
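A closed-form ridge sketch on synthetic data (no intercept, illustrative values): as λ grows, the coefficients are pulled toward zero, so the model becomes rigid — high bias, low variance.

```python
import numpy as np

# Closed-form ridge: theta = (X^T X + lam*I)^{-1} X^T y.
def ridge_fit(X, y, lam):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=100)

for lam in (0.0, 10.0, 1e6):
    print(lam, ridge_fit(X, y, lam))  # coefficients shrink toward 0 as lam grows
```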

7) Given a training data set of 10,000 instances, with each input instance having 12 dimensions and each output instance having 3 dimensions, the dimensions of the design matrix used in applying linear regression to this data is (1 point)

10000 × 12
10003 × 12
10000 × 13
10000 × 15

Yes, the answer is correct.


Score: 1
Accepted Answers:
10000 × 13
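A quick sketch of the arithmetic (placeholder feature values): the design matrix holds the 12 input dimensions plus a column of 1's for the intercept, and the 3 output dimensions do not enter it.

```python
import numpy as np

n, d = 10_000, 12
features = np.zeros((n, d))  # placeholder for the 12-dimensional inputs
# Prepend a column of 1's for the intercept: 12 + 1 = 13 columns.
design = np.column_stack([np.ones(n), features])
print(design.shape)  # (10000, 13)
```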

8) The linear regression model is to be fitted to a set of N training data points having P attributes each. Let X be the N × (P + 1) matrix of input values (augmented by 1's), y be the N × 1 vector of target values, and θ be the (P + 1) × 1 vector of parameter values. If the sum squared error is minimized for obtaining the optimal regression model, which of the following equations holds? (1 point)

Yes, the answer is correct.

Score: 1
Accepted Answers:
θ = (X^T X)^(-1) X^T y
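A numpy sketch on random data: the normal-equations solution X^T X θ = X^T y agrees with the generic least-squares solver, since both minimize the sum of squared errors.

```python
import numpy as np

rng = np.random.default_rng(2)
# Design matrix: a column of 1's plus P = 4 attribute columns.
X = np.column_stack([np.ones(30), rng.normal(size=(30, 4))])
y = rng.normal(size=30)

# Normal equations: (X^T X) theta = X^T y.
theta_normal = np.linalg.solve(X.T @ X, X.T @ y)
theta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(theta_normal, theta_lstsq))  # True: both minimize the SSE
```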

9) Which of the following scenarios is most appropriate for using Partial Least Squares (PLS) regression instead of ordinary least squares (OLS)? (1 point)

When the predictors are uncorrelated and the number of samples is much larger than the
number of predictors.
When there is significant multicollinearity among predictors or the number of predictors
exceeds the number of samples.
When the response variable is categorical and the predictors are highly non-linear.
When the primary goal is to interpret the relationship between predictors and response,
rather than prediction accuracy.

Yes, the answer is correct.


Score: 1
Accepted Answers:
When there is significant multicollinearity among predictors or the number of predictors exceeds
the number of samples.

10) Consider forward selection, backward selection and best subset selection with respect to the same data set. Which of the following is true? (1 point)

Best subset selection can be computationally more expensive than forward selection
Forward selection and backward selection always lead to the same result
Best subset selection can be computationally less expensive than backward selection
Best subset selection and forward selection are computationally equally expensive
Both (b) and (d)

No, the answer is incorrect.


Score: 0
Accepted Answers:
Best subset selection can be computationally more expensive than forward selection
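The model-count arithmetic behind the accepted answer, for an illustrative p = 20 predictors: best subset selection fits every one of the 2^p subsets, while forward selection fits at most p + (p−1) + … + 1 = p(p+1)/2 candidate models.

```python
p = 20
best_subset = 2 ** p        # every subset of p predictors
forward = p * (p + 1) // 2  # at most p + (p-1) + ... + 1 model fits
print(best_subset, forward)  # 1048576 vs 210
```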
