ML4_ML_Algorithms
● A1(2, 10), A2(2, 5), A3(8, 4), A4(5, 8), A5(7, 5), A6(6, 4), A7(1, 2), A8(4, 9)
Entropy(D) = −∑_{i=1}^{k} p_i log₂(p_i)
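The entropy formula above can be sketched in Python; the label lists below are illustrative examples, not data from the slides:

```python
import math

def entropy(labels):
    """Shannon entropy of a label list: -sum(p_i * log2(p_i))."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A 50/50 class split gives the maximum entropy of 1 bit;
# a pure node (one class only) has zero entropy.
print(entropy(["yes", "no", "yes", "no"]))  # 1.0
print(entropy(["yes", "yes", "yes"]))       # 0 (pure node)
```

A decision tree picks the split that most reduces this quantity (the information gain).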
Advantages of the Decision Tree
● simple to understand, as it follows the same process a human
follows when making a decision in real life.
https://www.cs.cmu.edu/~aarti/Class/10315_Fall20/recs/DecisionTreesBoostingExampleProblem.pdf
Naive Bayes Algorithm
● probabilistic machine learning algorithm based on
Bayes' Theorem.
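Written out, Bayes' Theorem gives the posterior probability of a class y given the observed features X:

P(y | X) = P(X | y) · P(y) / P(X)

and the "naive" assumption is that the features are conditionally independent given the class, so P(X | y) = P(x₁ | y) · P(x₂ | y) · … · P(xₙ | y). This factorization is what makes the worked example below tractable.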
P(yes | (Sunny, Mild, Normal, False))
= P((Sunny, Mild, Normal, False) | yes) * P(yes)
= P(Sunny | yes) * P(Mild | yes) * P(Normal | yes)
* P(False | yes) * P(yes)
= 0.0282
P(no | (Sunny, Mild, Normal, False))
= P((Sunny, Mild, Normal, False) | no) * P(no)
= P(Sunny | no) * P(Mild | no) * P(Normal | no)
* P(False | no) * P(no)
= 0.0068
Since 0.0282 > 0.0068, i.e. P(yes | conditions) > P(no | conditions),
play is predicted as yes for the given conditions (Sunny, Mild, Normal, False).
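The calculation above can be reproduced in Python. The priors and likelihoods below are the usual counts from the classic play-tennis dataset (9 "yes" days, 5 "no" days); since the full frequency table is not shown on the slide, these numbers are assumptions:

```python
# Class priors and per-feature likelihoods (assumed from the classic
# play-tennis dataset: 9 "yes" days, 5 "no" days).
p_yes, p_no = 9 / 14, 5 / 14
likelihood_yes = {"Sunny": 2/9, "Mild": 4/9, "Normal": 6/9, "False": 6/9}
likelihood_no  = {"Sunny": 3/5, "Mild": 2/5, "Normal": 1/5, "False": 2/5}

features = ["Sunny", "Mild", "Normal", "False"]

# Multiply the prior by each feature likelihood (naive independence).
score_yes = p_yes
score_no = p_no
for f in features:
    score_yes *= likelihood_yes[f]
    score_no *= likelihood_no[f]

print(f"{score_yes:.4f}")  # 0.0282
print(f"{score_no:.4f}")   # 0.0069 (the slide truncates this to 0.0068)

prediction = "yes" if score_yes > score_no else "no"
print(prediction)  # yes
```

The evidence term P(Sunny, Mild, Normal, False) is dropped because it is the same for both classes, so comparing the two unnormalized scores is enough.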
Advantages of Naïve Bayes
● one of the fast and easy ML algorithms to predict a class
● can be used for binary as well as multinomial (multi-class)
classification problems
Disadvantages of Naïve Bayes
● assumes that all features are independent or
unrelated, so it cannot learn the relationship
between features.
Random Forest
● collaborative team of decision trees that work
together to provide a single output.
● Resistance to Overfitting
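A minimal sketch of the bagging-plus-voting idea behind a random forest, using simple threshold "stumps" as stand-ins for full decision trees; the toy dataset and stump learner are illustrative assumptions, not the standard algorithm in full:

```python
import random
from collections import Counter

random.seed(0)

# Toy 1-D dataset: x < 5 -> class 0, x >= 5 -> class 1.
data = [(x, 0 if x < 5 else 1) for x in range(10)]

def fit_stump(sample):
    """Pick the threshold that best separates a bootstrap sample."""
    best_t, best_acc = 0, -1.0
    for t in range(11):
        acc = sum((x >= t) == (y == 1) for x, y in sample) / len(sample)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Bagging: each "tree" is trained on its own bootstrap resample.
forest = [fit_stump([random.choice(data) for _ in data]) for _ in range(25)]

def predict(x):
    votes = Counter(int(x >= t) for t in forest)  # each stump votes
    return votes.most_common(1)[0][0]             # majority wins

print([predict(x) for x in [1, 2, 8, 9]])  # [0, 0, 1, 1]
```

Because each stump sees a slightly different resample, their individual errors tend to cancel out in the vote, which is where the resistance to overfitting comes from.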
Cost function for LR
● The cost function (or loss function) measures the error between
the predicted value Y_pred and the true value Y; for linear
regression it is typically the mean squared error (MSE).
● Example fitted line: y = 0.9x + 1.3
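For instance, the MSE cost of the example line y = 0.9x + 1.3 can be evaluated on a small toy dataset (the data points here are assumptions for illustration):

```python
# Toy dataset (illustrative) and the example line's parameters.
xs = [1, 2, 3, 4]
ys = [2.0, 3.2, 4.1, 4.8]
w, b = 0.9, 1.3

# MSE = mean of squared differences between Y_pred and Y.
preds = [w * x + b for x in xs]
mse = sum((yp - y) ** 2 for yp, y in zip(preds, ys)) / len(xs)
print(round(mse, 4))  # 0.0175
```

Training a linear regression model amounts to finding the w and b that minimize this number.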
Gradient Descent for LR
● the model can be trained with the optimization algorithm
gradient descent, which iteratively modifies the model’s
parameters to reduce the mean squared error (MSE) of the
model on a training dataset.
○ w = w − α · (∂J/∂w),  b = b − α · (∂J/∂b)
○ Here, α is the learning rate and J(w, b) is the MSE cost.
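The training loop described above can be sketched as follows; the data, learning rate, and iteration count are illustrative assumptions:

```python
# Gradient descent for simple linear regression y = w*x + b, minimizing MSE.
# Toy data generated from y = 0.9x + 1.3 (illustrative).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.2, 3.1, 4.0, 4.9, 5.8]

w, b = 0.0, 0.0   # start from arbitrary parameters
lr = 0.02         # learning rate (alpha)
n = len(xs)

for _ in range(5000):
    preds = [w * x + b for x in xs]
    # Gradients of the MSE cost J(w, b):
    #   dJ/dw = (2/n) * sum((pred - y) * x)
    #   dJ/db = (2/n) * sum(pred - y)
    dw = (2 / n) * sum((p - y) * x for p, y, x in zip(preds, ys, xs))
    db = (2 / n) * sum(p - y for p, y in zip(preds, ys))
    w -= lr * dw
    b -= lr * db

print(round(w, 2), round(b, 2))  # converges near 0.9 1.3
```

Each iteration moves the parameters a small step against the gradient, so the MSE shrinks until w and b settle at the values of the underlying line.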
RAJAD SHAKYA