Lecture 13
1 Logistic Regression
Imagine we draw a best-fit curve like the one below; it fits the data much better than the previous line. In linear regression, the response is modeled directly as a linear combination of the predictors:

y = β0 + β1 x1 + β2 x2 + · · · + βp xp

Logistic regression uses the same linear combination, denoted z:

z = β0 + β1 x1 + β2 x2 + · · · + βp xp    (1)
The logistic regression model equation is:

p(y = 1 | X) = 1 / (1 + e^{−(β0 + β1 x1 + β2 x2 + · · · + βp xp)})    (2)
Worked example: suppose β0 = −4, β1 = 0.02, β2 = 0.3, and a customer has x1 = 50 and x2 = 30. Compute z:

z = −4 + (0.02 × 50) + (0.3 × 30) = −4 + 1 + 9 = 6
Calculate P(y = 1 | x):

P(y = 1 | x) = 1 / (1 + e^{−6}) = 1 / (1 + 0.00248) ≈ 0.9975
The probability that the customer will buy the product is approximately
99.75%.
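The worked example above can be checked with a short script (a minimal sketch; the coefficient and feature values are the ones used in the computation above):

```python
import math

def sigmoid(z):
    """Logistic function: maps any real z to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Coefficients and inputs from the worked example above
beta0, beta1, beta2 = -4.0, 0.02, 0.3
x1, x2 = 50.0, 30.0

z = beta0 + beta1 * x1 + beta2 * x2   # -4 + 1 + 9 = 6
p = sigmoid(z)
print(f"z = {z}, P(y=1|x) = {p:.4f}")  # P ≈ 0.9975
```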
z = β0 + β1 x1 + β2 x2 + · · · + βp xp

The job of the learning algorithm is to discover the best values for the coefficients (β0, β1, . . . , βp) based on the training data. In matrix notation, with Xβ denoting the linear combination:

p(y = 1 | X) = σ(Xβ) = 1 / (1 + e^{−Xβ})
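The matrix form σ(Xβ) can be sketched as a vectorized prediction. The design matrix rows below are illustrative (not from the lecture); the first row reuses the coefficients and features from the earlier worked example:

```python
import numpy as np

def predict_proba(X, beta):
    """Vectorized p(y = 1 | X) = σ(Xβ) for a design matrix X of shape (n, p+1)."""
    return 1.0 / (1.0 + np.exp(-(X @ beta)))

# Hypothetical design matrix: intercept column plus two features
X = np.array([[1.0, 50.0, 30.0],   # the customer from the worked example
              [1.0, 10.0,  5.0]])  # a second, made-up customer
beta = np.array([-4.0, 0.02, 0.3])

print(predict_proba(X, beta))  # first entry ≈ 0.9975
```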
To estimate the parameters β, we maximize the log-likelihood function:

ℓ(β) = Σ_{i=1}^{n} [y_i log σ(X_i β) + (1 − y_i) log(1 − σ(X_i β))]
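Maximizing ℓ(β) has no closed-form solution, so iterative methods are used in practice. A minimal gradient-ascent sketch (synthetic data, an illustrative learning rate, and a simple fixed iteration count; not the lecture's specific algorithm) is:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_likelihood(beta, X, y):
    """ℓ(β) = Σ [y_i log σ(X_i β) + (1 − y_i) log(1 − σ(X_i β))]."""
    p = sigmoid(X @ beta)
    eps = 1e-12  # guard against log(0)
    return np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Gradient ascent on ℓ(β); the gradient is Xᵀ(y − σ(Xβ)), scaled by 1/n."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        beta += lr * X.T @ (y - sigmoid(X @ beta)) / len(y)
    return beta

# Tiny synthetic example (illustrative values, not from the lecture)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])  # intercept + 1 feature
true_beta = np.array([-1.0, 2.0])
y = (rng.random(200) < sigmoid(X @ true_beta)).astype(float)

beta_hat = fit_logistic(X, y)
print("estimated β:", beta_hat)
```

Each update moves β in the direction that increases the log-likelihood, so the fitted coefficients attain a higher ℓ(β) than the all-zeros starting point.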