Quizz-ML

The document contains quizzes from weeks 2 to 5 covering various topics in machine learning, statistics, and regression analysis. Each quiz includes multiple-choice questions focusing on concepts such as types of machine learning, characteristics of classification, random variables, linear regression, and estimation methods. The answers to the quizzes are provided at the end, indicating the correct choices for each question.

Uploaded by 22028272
Copyright © All Rights Reserved

I. Quiz week 2
1. Which of the following is NOT a type of Machine Learning?
a. Heuristic Learning
b. Supervised Learning
c. Unsupervised Learning
d. Reinforcement Learning
2. What is the key difference between Traditional Machine Learning (TML) and Deep
Learning (DL)?
a. DL models automatically extract features from raw data, while TML models
often require manual feature extraction
b. TML models perform better with large amounts of data compared to DL
models
c. DL models are easier to interpret and understand than TML models
d. TML models are typically based on neural networks, while DL models use
decision trees and SVMs
3. What is the key characteristic of Multilabel Classification?
a. Each sample can belong to multiple classes simultaneously
b. Each sample belongs to one and only one class
c. The classification is based on unsupervised learning
d. It involves predicting a continuous value

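The distinction in question 3 can be made concrete: in multilabel classification, targets are encoded as a binary indicator matrix, so a single sample may activate several labels at once. A minimal Python sketch, with made-up label names:

```python
# Multilabel classification: each sample may carry several labels at once,
# so targets are stored as a 0/1 indicator row rather than one class id.
labels = ["sports", "politics", "tech"]

def to_indicator(sample_labels, all_labels):
    """Encode a list of label names as a binary indicator row."""
    return [1 if lab in sample_labels else 0 for lab in all_labels]

# One article tagged with two topics simultaneously -- impossible in
# single-label (multiclass) classification, where rows sum to exactly 1.
row = to_indicator(["sports", "tech"], labels)
print(row)  # [1, 0, 1]
```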
II. Quiz week 3


1. Which of the following is an example of a continuous random variable?
a. The time it takes to run a marathon
b. The number of cars in a parking lot
c. The result of rolling a die
d. The number of students in a classroom
2. Which of the following random variables can only
take integer values?
a. A discrete random variable
b. A continuous random variable
c. Both continuous and discrete random variables
d. Neither continuous nor discrete random variables
3. Which of the following distributions is used to model
binary outcomes (success/failure)?
a. Bernoulli distribution
b. Normal distribution
c. Poisson distribution
d. Exponential distribution
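The Bernoulli distribution from question 3 models exactly this success/failure setting: each draw is 1 with probability p and 0 otherwise. A minimal sketch using Python's standard random module, with an arbitrary p = 0.3:

```python
import random

def bernoulli_sample(p, n, seed=0):
    """Draw n binary outcomes (1 = success) with success probability p."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

# For large n, the sample mean of the 0/1 draws settles near p.
draws = bernoulli_sample(0.3, 10_000)
print(sum(draws) / len(draws))
```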
4. In supervised learning, what is the purpose of splitting the dataset
into training and test sets?
a. To measure how well the model generalizes to unseen data
b. To increase the size of the training data
c. To reduce the computational cost
d. To ensure the model is overfitting
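The train/test split in question 4 can be sketched in a few lines. The point is that the held-out samples never influence training, so performance on them estimates how the model generalizes to unseen data. A hypothetical helper in plain Python:

```python
import random

def train_test_split(data, test_ratio=0.25, seed=0):
    """Shuffle, then hold out a fraction of samples the model never sees."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    cut = int(len(data) * (1 - test_ratio))
    train = [data[i] for i in idx[:cut]]
    test = [data[i] for i in idx[cut:]]
    return train, test

train, test = train_test_split(list(range(100)))
print(len(train), len(test))  # 75 25
```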
III. Quiz week 4
1. Which of the following is true about Linear Regression?
a. It assumes a linear relationship between dependent and
independent variables
b. It assumes a non-linear relationship between dependent and
independent variables
c. It is used only for classification problems
d. It cannot be used for continuous variables
2. What is the primary goal of the gradient descent algorithm?
a. To find the global minimum of a function
b. To find the local minimum of a function
c. To find the local maximum of a function
d. To find the global maximum of a function
3. Which of the following is a potential drawback of using gradient
descent?
a. It requires an exact analytical solution
b. It may get stuck in local minima
c. It does not work for large datasets
d. It is always computationally expensive
4. What happens if the learning rate in gradient descent is set too
high?
a. The algorithm will converge too quickly
b. The algorithm will become more accurate
c. The algorithm will find the exact solution in one step
d. The algorithm will overshoot the minimum and fail to
converge
5. Which of the following is NOT an advantage of using the normal
equation in linear regression?
a. Efficient for very large datasets with many features
b. No need to choose a learning rate
c. Works well for small datasets
d. No need to iterate over the training set
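Questions 2-4 can be illustrated with a one-dimensional sketch of gradient descent: the same update rule converges with a small learning rate and overshoots past the minimum with a large one. The toy function f(x) = (x - 3)^2 below is an assumption for illustration:

```python
def gradient_descent(grad, x0, lr, steps):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3); minimum at x = 3.
grad = lambda x: 2 * (x - 3)

good = gradient_descent(grad, x0=0.0, lr=0.1, steps=100)  # converges near 3
bad = gradient_descent(grad, x0=0.0, lr=1.1, steps=100)   # lr too high: each
                                                          # step overshoots and
                                                          # the error grows
print(good, abs(bad - 3))
```

With lr = 0.1 the error shrinks by a factor of 0.8 per step; with lr = 1.1 it grows by a factor of 1.2 per step, which is the overshoot-and-diverge behavior in question 4.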
IV. Quiz week 5
1. What does Maximum a Posteriori (MAP) estimation maximize?
a. The posterior probability of the parameters
b. The prior probability of the data
c. The likelihood of the observed data
d. The marginal likelihood of the parameters
2. What does Maximum Likelihood Estimation (MLE) aim to
maximize?
a. The conditional probability of the parameters given the data
b. The likelihood of the observed data given the parameters
c. The prior probability of the data
d. The posterior probability of the parameters
3. In Bayesian Learning, the MAP estimate is equivalent to the MLE
estimate when:
a. The prior is uniform (non-informative)
b. The posterior is non-informative
c. The likelihood is zero
d. The data size is large
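The relationship in questions 1-3 is easy to check for a coin-flip (Bernoulli) model with a Beta(a, b) prior, for which both estimators have closed forms: the MLE is k/n, and the MAP estimate with a uniform Beta(1, 1) prior collapses to the MLE. A sketch with made-up counts:

```python
# MLE vs MAP for a coin's success probability p, from k heads in n flips.
# MLE maximizes the likelihood:            p_hat = k / n
# MAP with a Beta(a, b) prior maximizes the posterior:
#                                          p_hat = (k + a - 1) / (n + a + b - 2)

def mle(k, n):
    return k / n

def map_estimate(k, n, a, b):
    return (k + a - 1) / (n + a + b - 2)

k, n = 7, 10
print(mle(k, n))                  # 0.7
print(map_estimate(k, n, 1, 1))   # 0.7 -- uniform prior: MAP equals MLE
print(map_estimate(k, n, 2, 2))   # pulled toward 0.5 by the informative prior
```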
4. The Naive Bayes classifier is considered "naive" because:
a. It only works with small data sets
b. It assumes conditional independence between features
c. It always selects the most frequent class
d. It assumes features have the same variance
5. In Naive Bayes classification, how is the decision boundary
between two classes typically determined?
a. By maximizing the likelihood function
b. By minimizing the prior probabilities
c. By comparing the posterior probabilities of the classes
d. By calculating the mean of the class labels
6. What is a key difference between Logistic Regression and Linear
Regression?
a. Logistic Regression is non-parametric
b. Logistic Regression predicts probabilities for classification,
while Linear Regression predicts continuous values
c. Logistic Regression minimizes the Mean Squared Error
d. Logistic Regression handles continuous target variables
7. How does the decision boundary of Logistic Regression differ from
that of Naive Bayes?
a. Logistic Regression relies on the independence assumption,
whereas Naive Bayes does not
b. Logistic Regression uses kernel methods to find a boundary,
whereas Naive Bayes does not
c. Logistic Regression finds a linear boundary directly from the
data, while Naive Bayes uses probability distributions to
determine the boundary
d. Naive Bayes finds a linear boundary directly, while Logistic
Regression does not

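Questions 4-5 can be illustrated together: a Naive Bayes decision compares posterior scores built as the prior times a product of per-feature likelihoods, and that product is exactly where the conditional-independence assumption enters. The spam/ham numbers below are hypothetical:

```python
# Toy Naive Bayes decision rule: pick the class with the larger posterior,
# computed (up to a shared normalizing constant) as
#   prior * product of per-feature likelihoods,
# which assumes the features are conditionally independent given the class.
def posterior_score(prior, likelihoods):
    score = prior
    for p in likelihoods:
        score *= p
    return score

# Hypothetical two-feature email example.
spam = posterior_score(0.4, [0.8, 0.6])  # 0.4 * 0.8 * 0.6 ≈ 0.192
ham = posterior_score(0.6, [0.3, 0.4])   # 0.6 * 0.3 * 0.4 ≈ 0.072
print("spam" if spam > ham else "ham")   # spam
```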
Answers:
I. All A
II. All A
III. A A B D A
IV. A B A B C B C
