Tutorial 7 Machine Learning Algorithms
where the left-hand side is called the logit or log-odds function, and p(x)/(1-p(x)) is called the odds.
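As a quick illustration (a minimal sketch, not part of the original slides), the Python snippet below computes the odds and the logit for a hypothetical probability p(x) = 0.8:

import numpy as np

p = 0.8                    # hypothetical probability of the positive class
odds = p / (1 - p)         # odds = p(x) / (1 - p(x))
logit = np.log(odds)       # log-odds (logit) = log(p(x) / (1 - p(x)))
print(f"odds = {odds:.2f}, logit = {logit:.2f}")   # odds = 4.00, logit = 1.39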
Naïve Bayes
Classification problems only
• One of the most popular machine learning classification algorithms
• Based on Bayes' Theorem, with the assumption that the independent
variables are independent of one another when calculating probabilities
and conditional probabilities
• In simple terms, a Naive Bayes model assumes that the presence of a
particular feature in a class is unrelated to the presence of any other
feature.
Bayes' Theorem expresses the posterior probability as:
P(c|x) = P(x|c) · P(c) / P(x)
Where,
P(c|x) is the probability of the class (c, the dependent variable) given the data of the independent variable (x, the predictor). This is called the posterior probability of c.
P(c) is the probability of the class regardless of the data. This is called the prior probability of c.
P(x|c) is the probability of the data of the independent variable given the class.
P(x) is the prior probability of the data of the independent variable.
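The sketch below is a minimal illustration of Naive Bayes classification; it assumes scikit-learn and its bundled Iris dataset, neither of which is specified in the tutorial.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = GaussianNB()           # assumes each feature is Gaussian within a class
model.fit(X_train, y_train)    # estimates the priors P(c) and likelihoods P(x|c) from the data
print("Accuracy:", model.score(X_test, y_test))
print("Posterior P(c|x) for the first test sample:", model.predict_proba(X_test[:1]))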
KNN
K-Nearest Neighbors Algorithm
For both classification and regression problems
The KNN algorithm assumes that similar
things exist in close proximity. In other
words, similar things are near to each other.
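As with the Naive Bayes sketch above, the following is a minimal illustration that assumes scikit-learn and the Iris dataset; with k = 5, each query point is classified by a majority vote among its 5 nearest training points.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)   # label each query by its 5 nearest neighbours
knn.fit(X_train, y_train)                   # KNN simply stores the training data
print("Accuracy:", knn.score(X_test, y_test))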