
Machine Learning Deep Learning Q&A

The document outlines key differences between various regression techniques, including linear and logistic regression, as well as concepts such as R-squared, outliers, and cross-validation. It explains the assumptions of linear regression models and the importance of model evaluation techniques like cross-validation in machine learning. Additionally, it highlights the distinctions between dependent and independent variables, parametric and non-parametric models, and biased versus unbiased estimates.


Q1. What is the difference between linear regression and logistic regression?
Ans: Linear regression is used for predicting a continuous dependent variable based
on one or more independent variables, while logistic regression is used for
predicting a binary or categorical outcome.
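
A minimal sketch of the contrast using scikit-learn (the toy data and values below are made up for illustration):

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])

# Linear regression: continuous target (e.g., a price)
y_cont = np.array([1.9, 4.1, 6.2, 7.8, 10.1])
lin = LinearRegression().fit(X, y_cont)
print(lin.predict([[6.0]]))        # a continuous prediction

# Logistic regression: binary target (e.g., fail/pass)
y_bin = np.array([0, 0, 0, 1, 1])
log = LogisticRegression().fit(X, y_bin)
print(log.predict_proba([[3.5]]))  # class probabilities in [0, 1]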

Q2. What is the primary difference between R-squared and adjusted R-squared in
linear regression?
Ans: R-squared measures the proportion of variance in the dependent variable
explained by the independent variables, whereas adjusted R-squared adjusts for the
number of independent variables in the model, penalizing for complexity.
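
As a sketch, adjusted R-squared can be computed from R-squared with the standard formula (the helper and example values here are illustrative):

# n = number of observations, p = number of independent variables;
# assumes r2 was computed elsewhere (e.g., sklearn.metrics.r2_score)
def adjusted_r2(r2, n, p):
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

print(adjusted_r2(r2=0.85, n=100, p=5))  # ~0.842, slightly below R-squared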

Q3. What is the difference between a population regression line and a sample
regression line in linear regression?
Ans: A population regression line describes the relationship in the entire
population, while a sample regression line is an estimate based on a subset of the
population.

Q4. How do you handle outliers in linear regression?
Ans: Outliers can be addressed by removing them, using robust regression models,
transforming the data, or employing different regression methods less sensitive to
outliers.
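
A minimal sketch of one of these options, robust regression, using scikit-learn's HuberRegressor on made-up data with a single injected outlier:

import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

X = np.arange(10, dtype=float).reshape(-1, 1)
y = 2 * X.ravel() + 1   # true slope is 2
y[8] = 60.0             # inject one large outlier

print(LinearRegression().fit(X, y).coef_)  # slope pulled away by the outlier
print(HuberRegressor().fit(X, y).coef_)    # slope stays close to 2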

Q5. What is the difference between simple and multiple linear regression?
Ans: Simple linear regression involves one independent variable, whereas multiple
linear regression involves two or more independent variables.

Q6. What is the difference between a dependent and independent variable in linear
regression?
Ans: The dependent variable is the outcome the model predicts; the independent
variables are the predictors used to explain it, and linear regression assumes a
linear relationship between the two.

Q7. What is the difference between linear regression and non-linear regression?
Ans: Linear regression assumes a linear relationship between the variables, while
non-linear regression models relationships that are non-linear in the parameters,
typically fitted with methods like non-linear least squares or maximum likelihood.
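
A minimal sketch of non-linear least squares with SciPy's curve_fit, fitting a made-up exponential model y = a * exp(b * x):

import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(b * x)

np.random.seed(0)
x = np.linspace(0, 2, 20)
y = model(x, 1.5, 0.8) + np.random.normal(0, 0.05, x.size)  # noisy samples

params, _ = curve_fit(model, x, y)
print(params)  # estimates close to the true (1.5, 0.8)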

Q8. What is the difference between a parametric and a non-parametric regression
model?
Ans: Parametric models, like linear regression, assume a specific form for the
relationship between variables, whereas non-parametric models, such as kernel
regression, do not make such assumptions.
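
A minimal sketch of kernel (Nadaraya-Watson) regression to make the contrast concrete: the prediction at a query point is a locally weighted average of observed y values, with no assumed functional form (the data and bandwidth are illustrative):

import numpy as np

def kernel_regression(x0, x, y, bandwidth=0.5):
    # Gaussian kernel weights centered on the query point x0
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    return np.sum(w * y) / np.sum(w)

np.random.seed(0)
x = np.linspace(0, 6, 50)
y = np.sin(x) + np.random.normal(0, 0.1, x.size)
print(kernel_regression(3.0, x, y))  # close to sin(3.0) ≈ 0.14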

Q9. What is the difference between biased and unbiased estimates in linear
regression?
Ans: Biased estimates systematically over- or under-estimate the true parameter,
typically because of flawed modeling or sampling; unbiased estimates come from
procedures whose expected value equals the true parameter, favoring no direction.

Q10. Why can’t we use the mean square error cost function used in linear regression
for logistic regression?
Ans: Using mean square error in logistic regression leads to a non-convex cost
function with many local minima, making gradient-based optimization unreliable.
Logistic regression instead uses the convex log-loss (cross-entropy) cost function,
which has a single global minimum.
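
A minimal sketch of the log-loss (cross-entropy) cost that logistic regression minimizes instead, shown here as a standalone function on illustrative values:

import numpy as np

def log_loss(y_true, y_prob, eps=1e-15):
    p = np.clip(y_prob, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

print(log_loss(np.array([1, 0, 1]), np.array([0.9, 0.2, 0.7])))  # ~0.23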

Q11. What are the assumptions of a linear regression model?
Ans: The assumptions of a linear regression model are (a quick diagnostic sketch
follows the list):
- The relationship between the independent and dependent variables is linear.
- The residuals, or errors, are normally distributed with a mean of zero and a
constant variance.
- The independent variables are not correlated with each other (i.e., they are not
collinear).
- The residuals are independent of each other (i.e., they are not autocorrelated).
- The model includes all the relevant independent variables needed to accurately
predict the dependent variable.
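
A minimal diagnostic sketch, assuming statsmodels and made-up data, checking two of these assumptions: residual autocorrelation (Durbin-Watson) and collinearity (variance inflation factors):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.outliers_influence import variance_inflation_factor

np.random.seed(0)
X = sm.add_constant(np.random.rand(100, 2))  # made-up predictors plus intercept
y = X @ np.array([1.0, 2.0, -1.0]) + np.random.normal(0, 0.1, 100)

fit = sm.OLS(y, X).fit()
print(durbin_watson(fit.resid))  # values near 2 suggest no autocorrelation
print([variance_inflation_factor(X, i) for i in range(1, X.shape[1])])  # high VIF flags collinearity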

Q12. What are cross-validation techniques in ML, and what are they used for?
Ans: Cross-validation is a technique used to evaluate machine learning models by
splitting the data into multiple subsets. Four common types are K-Fold, Stratified
K-Fold, Leave-One-Out, and Time Series Cross-Validation. It helps assess model
performance, tune hyperparameters, and avoid overfitting by ensuring the model
generalizes well across different data subsets. Cross-validation efficiently uses
all data points for both training and testing, providing a more reliable estimate
of how the model will perform on unseen data, which makes it central to improving
model robustness and accuracy.
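
A minimal sketch of K-Fold cross-validation with scikit-learn on made-up regression data (5 folds; cross_val_score reports the default R-squared per fold):

import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LinearRegression

np.random.seed(0)
X = np.random.rand(100, 3)
y = X @ np.array([1.0, 2.0, 3.0]) + np.random.normal(0, 0.1, 100)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LinearRegression(), X, y, cv=cv)
print(scores.mean(), scores.std())  # average score and spread across folds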
