Module 2 Quiz - Correct

Module 2 Quiz: Attempt review

Summer CSC 7333 for Jianhua Chen / Module 2: Linear Regression and Logistic Regression / Module 2 Quiz

Started on    Wednesday, July 13, 2022, 10:58 PM
State         Finished
Completed on  Wednesday, July 13, 2022, 11:13 PM
Time taken    15 mins 6 secs
Grade         80.00 out of 80.00 (100%)

Question 1
Correct 5.00 points out of 5.00

The objective function for linear regression measures the mean squared error between the predicted y
value and the desired target y value, and thus should be minimized.

Select one:
A. True ✓
B. False
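
A minimal sketch of the objective referred to above, assuming the usual hypothesis hθ(x) = θᵀx and a design matrix whose first column is the dummy feature x0 = 1; the helper name mse_loss is illustrative, not from the course materials:

```python
import numpy as np

def mse_loss(theta, X, y):
    """J(theta) = (1/2m) * sum((X @ theta - y)^2): the mean squared error
    between the predictions X @ theta and the desired targets y.
    The 1/2 factor is a common convention and only rescales J."""
    m = len(y)
    errors = X @ theta - y             # predicted minus desired target
    return (errors @ errors) / (2 * m)

# Tiny usage example with made-up data (first column is x0 = 1)
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 3.0, 4.0])
print(mse_loss(np.array([1.0, 1.0]), X, y))   # 0.0: this theta fits exactly
```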

Question 2
Correct 10.00 points out of 10.00

Select ALL statements below that are true for linear regression:

Select one or more:


A. Linear regression handles classification problems instead of regression problems.
B. In linear regression, the optimal hypothesis hθ can be obtained analytically without gradient descent search. ✓
C. Linear regression can model a non-linear function of the original input variables by using polynomial features. ✓
D. If we use mean squared error as the loss function for training a linear regression model, the loss function is convex. ✓

E. The output of a linear regression model should always be in the interval [0, 1].
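
A brief sketch of options B and C above (the closed-form least-squares solution and polynomial features). The setup is the standard one; the function names below are illustrative rather than taken from the course materials:

```python
import numpy as np

def fit_normal_equation(X, y):
    """Closed-form least-squares fit, theta = (X^T X)^(-1) X^T y.
    lstsq is used rather than an explicit matrix inverse for stability."""
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

def polynomial_features(x, degree):
    """Map a 1-D input x to columns [1, x, x^2, ..., x^degree], so a model
    that is linear in theta can represent a non-linear function of x."""
    return np.vander(x, N=degree + 1, increasing=True)

# Recover y = 1 + 2x - x^2 exactly with degree-2 features, no gradient descent
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1 + 2 * x - x**2
print(fit_normal_equation(polynomial_features(x, 2), y))   # ~[ 1.  2. -1.]
```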

Question 3
Correct 10.00 points out of 10.00

When doing gradient descent to learn the best θ to minimize loss in linear regression, which of the
following is the correct formula for the update of θj? Here η > 0 is the learning rate.

Select one:
A. θj ← θj + η ∗ ∂J/∂θj
B. θj ← θj − η ∗ ∂J/∂θj ✓
C. θj ← θj + ∂J/∂θj
D. θj ← (1−η) ∗ θj
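
A sketch of the selected update rule (option B) for the mean squared error loss; the gradient expression is the standard batch form, with η the learning rate:

```python
import numpy as np

def gradient_descent_step(theta, X, y, eta):
    """One batch update theta_j <- theta_j - eta * dJ/dtheta_j, where for the
    MSE loss dJ/dtheta_j = (1/m) * sum_i (h_theta(x_i) - y_i) * x_ij."""
    m = len(y)
    grad = X.T @ (X @ theta - y) / m   # all partial derivatives at once
    return theta - eta * grad
```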


Question 4
Correct 5.00 points out of 5.00

In the training data for logistic regression, the desired target variable y value is continuous-valued.

Select one:
A. True
B. False ✓

Question 5
Correct 10.00 points out of 10.00

Select ALL statements below that are true for logistic regression:

Select one or more:


A. One can solve for the optimal hypothesis hθ analytically without using gradient descent.
B. Logistic regression handles classification problems instead of regression problems. ✓
C. The sigmoid function used in logistic regression is a non-linear function. ✓
D. If we use mean squared error as the loss function for training a logistic regression model, the loss function is convex.
E. The output of a logistic regression model can be interpreted as a probability. ✓
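
A sketch behind options C and E above: the sigmoid is a non-linear squashing function, and its output can be read as an estimate of P(y = 1 | x). The function names here are illustrative:

```python
import numpy as np

def sigmoid(z):
    """Non-linear logistic function mapping any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(theta, X):
    """h_theta(x) = sigmoid(theta^T x), interpretable as P(y = 1 | x)."""
    return sigmoid(X @ theta)

X = np.array([[1.0, -2.0], [1.0, 0.0], [1.0, 2.0]])   # x0 = 1 plus one feature
print(predict_proba(np.array([0.0, 1.0]), X))          # ~[0.12, 0.5, 0.88]
```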

Question 6
Correct 10.00 points out of 10.00

Select ALL scenarios below such that regularization would help avoid overfitting when doing gradient
descent for linear (or logistic) regression:

Select one or more:


A. There are many input variables, and possibly not all of them are relevant to the target variable. ✓
B. There are many training data points.
C. We are using polynomial features with a polynomial of high degree. ✓
D. There are few input variables.
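
A sketch of how an L2 penalty enters the objective in the scenarios selected above (many possibly irrelevant features, or high-degree polynomial features). The λ value is a hyperparameter to be tuned, and the helper name is illustrative:

```python
import numpy as np

def regularized_mse_loss(theta, X, y, lam):
    """J(theta) = (1/2m) * [sum((X @ theta - y)^2) + lam * sum(theta_j^2, j >= 1)].
    Penalizing large theta_j keeps the model from fitting noise when there are
    many features or high-degree polynomial features."""
    m = len(y)
    errors = X @ theta - y
    penalty = lam * np.sum(theta[1:] ** 2)   # theta_0 conventionally excluded
    return (errors @ errors + penalty) / (2 * m)
```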

Question 7
Correct 5.00 points out of 5.00

When we scale/normalize the features for regression, we would scale/normalize ALL features,
including the dummy input variable x0 = 1.

Select one:
A. True
B. False ✓
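
A sketch of standardizing features while leaving the dummy column x0 = 1 alone, as the answer above indicates: scaling a constant column would divide by a zero standard deviation and destroy the intercept feature. Names are illustrative:

```python
import numpy as np

def scale_features(X):
    """Standardize every column except the first, which is the dummy x0 = 1."""
    X = X.astype(float)
    mu = X[:, 1:].mean(axis=0)
    sigma = X[:, 1:].std(axis=0)
    X[:, 1:] = (X[:, 1:] - mu) / sigma
    return X, mu, sigma

X = np.array([[1.0, 2000.0, 3.0],
              [1.0, 1600.0, 2.0],
              [1.0, 2400.0, 4.0]])
X_scaled, mu, sigma = scale_features(X)
print(X_scaled[:, 0])   # still all ones: the dummy column is untouched
```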


Question 8
Correct 5.00 points out of 5.00

When applying regularization for logistic regression, regularization is applied to ALL parameters θj, including the parameter θ0.

Select one:
A. True
B. False ✓
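
A sketch of the corresponding regularized gradient step, with θ0 left unpenalized as the answer above indicates; the function name and λ handling are illustrative:

```python
import numpy as np

def regularized_gd_step(theta, X, y, eta, lam):
    """theta_j <- theta_j - eta * (dJ/dtheta_j + (lam/m) * theta_j) for j >= 1;
    theta_0 is updated with the plain gradient, without the penalty term."""
    m = len(y)
    grad = X.T @ (X @ theta - y) / m
    penalty = (lam / m) * theta
    penalty[0] = 0.0                   # no shrinkage on the intercept theta_0
    return theta - eta * (grad + penalty)
```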

Question 9
Correct 10.00 points out of 10.00

You are running gradient descent (with learning rate η = 0.1) for linear regression with mean squared
error as the loss function. You have tested the loss function implementation and are pretty sure that
the loss function is implemented correctly. You plot the loss function curve for the first 30 epochs and
find that the loss is increasing with the number of epochs.

What is your explanation of this scenario?

Select one:
A. The gradient descent procedure is working fine, nothing wrong here.
B. The learning rate is too small and should be increased.
C. The learning rate is probably too high and should be reduced. ✓

D. The data is noisy and nothing can be done here.
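
A small numerical illustration of option C on made-up data: with a learning rate that is too high, the recorded loss grows from epoch to epoch instead of shrinking. The specific η values below are illustrative only:

```python
import numpy as np

def run_gd(eta, epochs=30):
    """Batch gradient descent on a tiny least-squares problem; returns the
    MSE loss recorded at the start of each epoch."""
    X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
    y = np.array([2.0, 4.0, 6.0, 8.0])
    theta = np.zeros(2)
    m = len(y)
    losses = []
    for _ in range(epochs):
        errors = X @ theta - y
        losses.append((errors @ errors) / (2 * m))   # loss before this update
        theta -= eta * (X.T @ errors) / m
    return losses

print(run_gd(0.01)[-1])   # modest learning rate: the loss keeps decreasing
print(run_gd(0.9)[-1])    # learning rate too high: the loss blows up
```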

Question 10
Correct 5.00 points out of 5.00

When applying regularization for linear (or logistic) regression, the value for the parameter λ needs to
be selected. If λ value is set too big, what can happen?

Select one:
A. It can cause the resulting model hθ to overfit the training data.
B. It would make gradient descent converge faster
C. It would cause the resulting model hθ to underfit, i.e., the model performs poorly on both the training and testing data. ✓
D. It would not cause any problem
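
A small illustration of option C using a ridge-style closed-form fit (an assumption here; the quiz does not specify how λ is applied). With a huge λ the slope is shrunk to essentially zero, so the model underfits even the training data:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """theta = (X^T X + lam * I')^(-1) X^T y, where I' is the identity with a
    zero in the (0, 0) slot so the intercept theta_0 is not penalized."""
    penalty = lam * np.eye(X.shape[1])
    penalty[0, 0] = 0.0
    return np.linalg.solve(X.T @ X + penalty, X.T @ y)

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])   # exactly y = 2x
print(ridge_fit(X, y, 0.0))          # ~[0, 2]: fits the data
print(ridge_fit(X, y, 1e6))          # slope ~0: reduced to a constant, underfits
```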


Question 11
Correct 5.00 points out of 5.00

When we have multiple input variables or use polynomial features in linear (or logistic) regression, it
is beneficial to scale/normalize the features so that gradient descent runs well.

Select one:
A. True ✓
B. False
