
ML Homework 1

Lâm Vũ - 22000131


November 2024

Due Date: November 25th, 2024


1 Problem 1: Machine Learning as an Optimization Problem
(a) Explain why training a machine learning model can be formulated as an optimization problem.
What are the objectives and constraints involved?
Answer:
Training a machine learning model can be formulated as an optimization problem because its main goal is to find parameters that optimize an objective function quantifying how well the model performs. This objective function is often referred to as the loss function or cost function.
The loss function is a mathematical function that measures the error, or difference, between the values predicted by the model and the actual target values from the dataset. The choice of loss function depends on the task: for example, Mean Squared Error (MSE) for regression or Cross-Entropy Loss for classification.
The objective of training is therefore to minimize this loss; constraints, when present, take the form of restrictions on the parameters, such as bounds or norm penalties (regularization), although many standard training problems are unconstrained.
The optimization problem can be expressed as follows:

\[
\theta^{*} = \arg\min_{\theta} L(\theta, D),
\]

where:

• θ: The set of parameters of the model (e.g., weights, biases).
• L(θ, D): The loss function that quantifies how well the model’s predictions match the actual values, based on the dataset D.
• D: The dataset consisting of input-output pairs (x_i, y_i).
• θ*: The optimal parameters that minimize the loss function.

(b) Provide examples of how optimization techniques are applied in the training of models such
as linear regression and logistic regression.
Answer : In linear regression, the goal is to find the parameters of the model that minimize the
error between the predicted and actual target values. The model is typically represented as:

\[
y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_p x_p + \epsilon
\]

Where:
• y is the target variable.
• x_1, x_2, ..., x_p are the input features.
• θ_0, θ_1, ..., θ_p are the model parameters (coefficients).
• ϵ is the error term, usually assumed to be Gaussian noise.

The objective is to minimize the MSE loss function:
\[
L(\theta) = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2
\]

Where:
• ŷ_i is the predicted value for the i-th data point, computed as θ_0 + θ_1 x_1 + ... + θ_p x_p using the features of that point.
• n is the total number of data points.
The optimization is typically performed using Gradient Descent, which iteratively updates the
parameters θ in the direction of the negative gradient of the loss function. The update rule is:

\[
\theta_j := \theta_j - \alpha \frac{\partial L(\theta)}{\partial \theta_j}
\]

Where:
• α is the learning rate.
• ∂L(θ)/∂θ_j is the partial derivative of the loss function with respect to the parameter θ_j.

The gradient descent algorithm continues until the loss function L(θ) converges to a minimum.
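A minimal NumPy sketch of this procedure is given below; it is illustrative only, and the synthetic data, learning rate, and iteration count are assumptions rather than values taken from the homework.

```python
import numpy as np

def gradient_descent_linear(X, y, alpha=0.01, n_iters=1000):
    """Minimize the MSE loss of a linear regression model with batch gradient descent."""
    n, p = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])      # prepend a column of ones for the intercept theta_0
    theta = np.zeros(p + 1)                   # initialize all parameters at zero
    for _ in range(n_iters):
        y_hat = Xb @ theta                    # current predictions
        grad = (2 / n) * Xb.T @ (y_hat - y)   # gradient of (1/n) * sum_i (y_i - y_hat_i)^2
        theta -= alpha * grad                 # step against the gradient
    return theta

# Illustrative usage on made-up data (not part of the assignment):
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 2 * X[:, 0] + rng.normal(0, 1, size=100)
print(gradient_descent_linear(X, y))          # approximately [0, 2] (intercept, slope)
```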
In logistic regression, the goal is to predict the probability of a binary outcome (0 or 1) based
on input features. The model is similar to linear regression, but it applies the sigmoid function to
the output:
\[
h_\theta(x) = \frac{1}{1 + e^{-(\theta_0 + \theta_1 x_1 + \dots + \theta_p x_p)}}
\]
Where:
• h_θ(x) is the predicted probability that y = 1.
• x_1, x_2, ..., x_p are the input features.
• θ_0, θ_1, ..., θ_p are the model parameters.
The objective in logistic regression is to minimize the Cross-entropy, which is given by:
\[
L(\theta) = -\frac{1}{n} \sum_{i=1}^{n} \left[ y_i \log(h_\theta(x_i)) + (1 - y_i) \log(1 - h_\theta(x_i)) \right]
\]

Where:
• y_i is the actual class label for the i-th data point.
• h_θ(x_i) is the predicted probability for the i-th data point.
Similar to linear regression, Gradient Descent is used to optimize the parameters in logistic regression. The update rule is:
\[
\theta_j := \theta_j - \alpha \frac{\partial L(\theta)}{\partial \theta_j}
\]

Where ∂L(θ)/∂θ_j is the partial derivative of the loss function with respect to the parameter θ_j.
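The same procedure applies to logistic regression with the cross-entropy loss; the sketch below is again illustrative, with the learning rate, iteration count, and synthetic data chosen arbitrarily rather than specified in the homework.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent_logistic(X, y, alpha=0.1, n_iters=2000):
    """Minimize the cross-entropy loss of a logistic regression model with batch gradient descent."""
    n, p = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])   # intercept column
    theta = np.zeros(p + 1)
    for _ in range(n_iters):
        h = sigmoid(Xb @ theta)            # predicted probabilities h_theta(x_i)
        grad = (1 / n) * Xb.T @ (h - y)    # gradient of the cross-entropy loss
        theta -= alpha * grad
    return theta

# Illustrative usage on made-up, linearly separable data:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
print(gradient_descent_logistic(X, y))     # both feature weights come out positive and roughly equal
```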
(c) Discuss the role of the loss (or cost) function in this context and how it guides the optimization
process.
Answer:

The loss function plays a crucial role in the optimization process of machine learning models. It
quantifies how well the model’s predictions match the actual target values. The primary objective
in training any machine learning model is to minimize loss, which reflects the error between the
predicted and true values.
By evaluating the predictions against the actual values, the loss function provides feedback to
guide the optimization algorithm in adjusting the model parameters.
The optimization process involves iteratively updating the model parameters (such as θ_0, θ_1, ..., θ_p)
in such a way that the loss function is minimized. This can be done using Gradient Descent or other
optimization algorithms. The gradient of the loss function with respect to the model parameters
provides the direction in which the parameters should be updated to reduce the error. The optimiza-
tion process stops when the loss function reaches its minimum, indicating that the model parameters
have been optimized.
In both linear and logistic regression, the loss function serves as a guiding signal for the optimization process, directing the model toward minimizing its error. By optimizing the loss function, we improve the model’s performance and make it more accurate in predicting future data.

2 Problem 2: Maximum Likelihood Estimation (MLE) and Maximum A Posteriori (MAP)
Given a dataset of independent and identically distributed observations X = {x_1, x_2, ..., x_n} drawn from a normal distribution with unknown mean µ and known variance σ^2.
(a) Derive the Maximum Likelihood Estimator (MLE) for the mean µ.
Answer:
The likelihood function for n independent observations from a normal distribution is given by:
\[
L(\mu) = \prod_{i=1}^{n} f(x_i; \mu)
\]
where
\[
f(x_i; \mu) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x_i - \mu)^2}{2\sigma^2} \right)
\]
Thus, the likelihood function L(µ) is:
\[
L(\mu) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x_i - \mu)^2}{2\sigma^2} \right)
\]

Taking the logarithm of the likelihood function, we obtain the log-likelihood:
\[
\log L(\mu) = \sum_{i=1}^{n} \log\left[ \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x_i - \mu)^2}{2\sigma^2} \right) \right]
\]
Using the properties of logarithms, this simplifies to:
\[
\log L(\mu) = \sum_{i=1}^{n} \left[ -\frac{1}{2} \log(2\pi\sigma^2) - \frac{(x_i - \mu)^2}{2\sigma^2} \right]
\]
We can factor out the constants:
\[
\log L(\mu) = -\frac{n}{2} \log(2\pi\sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2
\]

Take the derivative of log L(µ) with respect to µ and set it equal to zero:
\[
\frac{d}{d\mu} \log L(\mu) = \frac{1}{\sigma^2} \sum_{i=1}^{n} (x_i - \mu)
\]

Setting the derivative equal to zero to maximize:
\[
\sum_{i=1}^{n} (x_i - \mu) = 0
\]

Solving this equation for µ, we get:


\[
\mu = \frac{1}{n} \sum_{i=1}^{n} x_i
\]
In conclusion, the MLE for µ is the sample mean of the observed data points.
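As an optional numerical sanity check (the sample values and the grid of candidate means below are made up for illustration), one can confirm that the sample mean maximizes the Gaussian log-likelihood:

```python
import numpy as np

# Made-up sample assumed to come from a normal distribution with known variance sigma^2 = 4
x = np.array([1.2, 2.8, 2.1, 3.5, 1.9])
sigma2 = 4.0

def log_likelihood(mu):
    """Gaussian log-likelihood of the sample for a candidate mean mu."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma2) - (x - mu) ** 2 / (2 * sigma2))

# Evaluate the log-likelihood on a grid of candidate means and pick the maximizer
grid = np.linspace(0, 5, 1001)
best = grid[np.argmax([log_likelihood(m) for m in grid])]
print(best, x.mean())   # the maximizer agrees with the sample mean up to the grid resolution
```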
(b) Assume a prior distribution for µ that is also normally distributed with mean µ_0 and variance τ^2. Derive the Maximum A Posteriori (MAP) estimator for µ.
Answer: The general formula for the Maximum A Posteriori (MAP) estimator is defined as:
\[
\hat{\mu}_{MAP} = \arg\max_{\mu} p(\mu \mid X),
\]
where:
• p(µ | X) is the posterior probability of the parameter µ given the observed data X,
• p(X | µ) is the likelihood of the data given the parameter,
• p(µ) is the prior probability of the parameter.
Using Bayes’ theorem, the posterior can be expressed as:
\[
p(\mu \mid X) = \frac{p(X \mid \mu) \cdot p(\mu)}{p(X)}.
\]
Here, p(X) is the evidence (a normalizing constant) that does not depend on µ. Thus, the MAP estimate becomes:
\[
\hat{\mu}_{MAP} = \arg\max_{\mu} \, p(X \mid \mu) \cdot p(\mu).
\]
Taking the logarithm, the MAP estimate is found by maximizing the log-posterior:
\[
\hat{\mu}_{MAP} = \arg\max_{\mu} \log p(\mu \mid X) = \arg\max_{\mu} \left[ \log p(X \mid \mu) + \log p(\mu) \right].
\]

The log-likelihood is:
\[
\log p(X \mid \mu) = -\frac{n}{2} \log(2\pi\sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2.
\]

Ignoring constants independent of µ, this simplifies to:
\[
\log p(X \mid \mu) = -\frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2.
\]

The prior for µ is normally distributed with mean µ_0 and variance τ^2:
\[
p(\mu) = \frac{1}{\sqrt{2\pi\tau^2}} \exp\left( -\frac{(\mu - \mu_0)^2}{2\tau^2} \right).
\]
The log-prior is:
\[
\log p(\mu) = -\frac{1}{2} \log(2\pi\tau^2) - \frac{1}{2\tau^2} (\mu - \mu_0)^2.
\]
Ignoring constants independent of µ, this becomes:
\[
\log p(\mu) = -\frac{1}{2\tau^2} (\mu - \mu_0)^2.
\]
The log-posterior, up to constants, is:
\[
\log p(X \mid \mu) + \log p(\mu) = -\frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2 - \frac{1}{2\tau^2} (\mu - \mu_0)^2.
\]

To maximize this, we differentiate with respect to µ:
\[
\frac{\partial}{\partial \mu} \left( -\frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2 - \frac{1}{2\tau^2} (\mu - \mu_0)^2 \right) = 0.
\]

Simplifying the derivative:
\[
\frac{1}{\sigma^2} \sum_{i=1}^{n} (x_i - \mu) - \frac{1}{\tau^2} (\mu - \mu_0) = 0.
\]
Reorganizing terms:
\[
\mu \left( \frac{n}{\sigma^2} + \frac{1}{\tau^2} \right) = \frac{1}{\sigma^2} \sum_{i=1}^{n} x_i + \frac{\mu_0}{\tau^2}.
\]
Finally, solving for µ:
\[
\hat{\mu}_{MAP} = \frac{\frac{1}{\sigma^2} \sum_{i=1}^{n} x_i + \frac{\mu_0}{\tau^2}}{\frac{n}{\sigma^2} + \frac{1}{\tau^2}}.
\]
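A small sketch of this closed-form estimator (the values of σ^2, µ_0, and τ^2 below are illustrative assumptions, not given in the problem) shows how the MAP estimate blends the sample mean with the prior mean:

```python
import numpy as np

def map_estimate(x, sigma2, mu0, tau2):
    """Closed-form MAP estimate of a Gaussian mean with known variance and a Gaussian prior."""
    n = len(x)
    return (x.sum() / sigma2 + mu0 / tau2) / (n / sigma2 + 1 / tau2)

# Illustrative data: 20 points drawn with true mean 3 and variance sigma^2 = 4
rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=2.0, size=20)
print("MLE:", x.mean())
print("MAP with a weak prior   (mu0 = 0, tau^2 = 100):", map_estimate(x, 4.0, mu0=0.0, tau2=100.0))
print("MAP with a strong prior (mu0 = 0, tau^2 = 0.1):", map_estimate(x, 4.0, mu0=0.0, tau2=0.1))
```

With a weak prior (large τ^2) the MAP estimate stays close to the MLE, while a strong prior (small τ^2) pulls it toward µ_0, which is the behavior part (c) asks about.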

(c) Compare the MLE and MAP estimators. Discuss how the choice of µ_0 and τ^2 affects the MAP estimator.

3 Problem 3: Naive Bayes Classification


You are provided with a simplified dataset of text documents classified into two categories: Sports
and Politics. The vocabulary consists of the words: win, team, election, and vote.
Word       Sports Count   Politics Count
win        50             10
team       60             5
election   15             70
vote       10             80
(a) Explain the Naive Bayes assumption and how it applies to text classification.
(b) Using the data above, calculate the probability that a document containing the words win and
vote belongs to the Sports category versus the Politics category. Assume uniform class priors and
apply Laplace smoothing with α = 1.
(c) Interpret the results and discuss any limitations of the Naive Bayes classifier in this context.

4 Problem 4: Logistic Regression


Consider a binary classification problem where the goal is to predict whether a student will pass or fail an exam based on the number of hours spent studying and sleeping. Formulate the logistic regression model for this problem.

5 Problem 5: Linear Regression and Overfitting


You are given a dataset where the input variable x ranges from 0 to 10, and the target variable y is
generated by y = 2x + ϵ, where ϵ is Gaussian noise with mean 0 and variance 4.
(a) Fit a linear regression model to the data and report the estimated parameters.
(b) Fit a 9th-degree polynomial regression model to the same data.
(c) Compare the training error and discuss which model is likely overfitting the data. Provide
visualizations to support your answer.
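One possible way to set up this experiment (the sample size, random seed, and use of np.polyfit are assumptions, not requirements of the problem) is sketched below:

```python
import numpy as np

# Generate the data described above: x in [0, 10], y = 2x + Gaussian noise with variance 4 (std 2)
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 30)
y = 2 * x + rng.normal(0, 2, size=x.shape)

# (a) Linear fit and (b) 9th-degree polynomial fit by least squares
lin_coeffs = np.polyfit(x, y, deg=1)    # returned highest degree first: roughly [2, 0]
poly_coeffs = np.polyfit(x, y, deg=9)   # high-degree fits can be ill-conditioned; polyfit may warn

# (c) Training mean squared error of each model
mse_lin = np.mean((np.polyval(lin_coeffs, x) - y) ** 2)
mse_poly = np.mean((np.polyval(poly_coeffs, x) - y) ** 2)
print(lin_coeffs, mse_lin, mse_poly)    # the polynomial's training MSE is typically lower (overfitting)
```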

6 Problem 6: Regularization Techniques
Regularization is a technique used to prevent overfitting in machine learning models.
(a) Explain the difference between L1 (Lasso) and L2 (Ridge) regularization in the context of linear
regression.
Answer:
Regularization is a very important technique in machine learning for preventing overfitting. Mathematically, it adds a penalty term to the loss function that discourages the coefficients from fitting the training data too closely.
The difference between L1 and L2 regularization is that the L2 penalty is the sum of the squares of the weights, while the L1 penalty is the sum of the absolute values of the weights. In linear regression, the regularized objectives are:

• L1 regularization (Lasso) on least squares:
\[
L(\theta) = \mathrm{MSE} + \lambda \sum_{j=1}^{p} |\theta_j|,
\]
• L2 regularization (Ridge) on least squares:
\[
L(\theta) = \mathrm{MSE} + \lambda \sum_{j=1}^{p} \theta_j^2.
\]

• Solution uniqueness: L2-norm (Ridge) always provides a unique solution. The penalty is the sum of the squares of the coefficients, leading to a smooth, differentiable objective with a single global minimum. The L1-norm (Lasso) solution is not always unique: the penalty is the sum of the absolute values of the coefficients, which can lead to sparse solutions (some coefficients set to zero), but multiple valid solutions can exist when features are correlated.
• Sparsity: The L1-norm tends to produce many coefficients that are exactly zero and only a few large ones, allowing it to perform feature selection. The L2-norm shrinks coefficients but keeps all features.
• Computational efficiency: The L1-penalized problem has no analytical solution, but its sparsity can be exploited by specialized algorithms, improving computational efficiency. The L2-penalized problem has an analytical solution, making its computation more straightforward and efficient.
• Stability: L1 solutions can be sensitive to small changes in the data, especially when features are correlated, because the set of selected features may change. L2 spreads weight across correlated features, which makes its solutions more stable.
(b) Given a dataset with multiple features that are highly correlated, discuss which regularization method would be more appropriate and why.
Answer:
In cases of high feature correlation, L2 regularization is typically the better choice: it handles multicollinearity by shrinking correlated coefficients together, providing stable and interpretable results without discarding potentially important features.
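A small scikit-learn sketch of this behavior (the synthetic data, correlation level, and regularization strength are illustrative assumptions) is:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Synthetic data with two highly correlated features that carry the same signal
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)      # x2 is nearly identical to x1
X = np.column_stack([x1, x2])
y = 3 * x1 + 3 * x2 + rng.normal(size=n)

# Lasso tends to zero out one of the correlated features; Ridge shares the weight between them
lasso = Lasso(alpha=0.5).fit(X, y)
ridge = Ridge(alpha=0.5).fit(X, y)
print("Lasso coefficients:", lasso.coef_)
print("Ridge coefficients:", ridge.coef_)
```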
