2-Machine Learning Algorithms
Unsupervised Learning
How it works: In this algorithm, we do not have any target or outcome variable to predict or estimate (the data is unlabelled). It is used for tasks such as recommendation systems or clustering a population into different groups; clustering algorithms are widely used to segment customers into groups for specific interventions. Examples of Unsupervised Learning: Apriori algorithm, K-means clustering.
Reinforcement Learning
How it works: Using this algorithm, the machine is trained to make specific decisions. The machine is exposed to an environment where it trains itself continually through trial and error. It learns from past experience and tries to capture the best possible knowledge to make accurate business decisions. Example of Reinforcement Learning: Markov Decision Process.
Here is the list of commonly used machine learning algorithms. These algorithms
can be applied to almost any data problem:
i. Linear Regression
ii. Logistic Regression
iii. Decision Tree
iv. SVM
v. Naive Bayes
vi. kNN
vii. K-Means
viii. Random Forest
ix. Dimensionality Reduction Algorithms
x. Gradient Boosting algorithms
a. GBM
b. XGBoost
c. LightGBM
d. CatBoost
1. Linear Regression
It is used to estimate real values (cost of houses, number of calls, total sales, etc.) based on continuous variable(s). Here, we establish the relationship between the independent and dependent variables by fitting a best-fit line.
This best-fit line is known as the regression line and is represented by a linear
equation Y= a*X + b.
Example 1
In this equation:
Y – Dependent Variable
a – Slope
X – Independent variable
b – Intercept
The coefficients a and b are derived by minimizing the sum of the squared distances between the data points and the regression line (the squared residuals).
Example 2
Look at the example below. Here, we have identified a best-fit line with the linear equation y = 0.2811x + 13.9. Using this equation, we can estimate a person's weight if we know their height.
Linear Regression is mainly of two types: Simple Linear Regression and Multiple Linear Regression. Simple Linear Regression is characterized by one independent variable, while Multiple Linear Regression (as the name suggests) is characterized by multiple (more than one) independent variables. When finding the best-fit line, you can also fit a polynomial or curvilinear function; these variants are known as polynomial or curvilinear regression.
Here’s a coding window to try your hand at building your own linear regression model:
Python:
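A minimal sketch using scikit-learn's LinearRegression; the height/weight numbers below are made up purely for illustration (they are not the data behind the y = 0.2811x + 13.9 line above):

import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative data: height (cm) as the independent variable, weight (kg) as the dependent variable
heights = np.array([[150], [155], [160], [165], [170], [175], [180]])
weights = np.array([52, 54, 58, 60, 63, 66, 70])

# Fit the best-fit line weight = a*height + b by least squares
model = LinearRegression()
model.fit(heights, weights)
print("slope a:", model.coef_[0], "intercept b:", model.intercept_)

# Predict the weight for a new height
print("predicted weight for 172 cm:", model.predict([[172]])[0])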
R Code:
#Identify feature and response variable(s); values must be numeric
x <- cbind(x_train,y_train)
# Train the model using the training sets and check score
linear <- lm(y_train ~ ., data = x)
summary(linear)
#Predict Output
predicted <- predict(linear, x_test)
2. Logistic Regression
Let’s say your friend gives you a puzzle to solve. There are only 2 outcome scenarios
– either you solve it, or you don’t. Now imagine that you are being given a wide
range of puzzles/quizzes in an attempt to understand which subjects you are good
at. The outcome of this study would be something like this – if you are given a
trigonometry-based tenth-grade problem, you are 70% likely to solve it. On the
other hand, if it is a fifth-grade history question, the probability of your getting the answer right is only 30%. This is what Logistic Regression provides you.
Coming to the math, the log odds of the outcome are modeled as a linear combination of the predictor variables:
odds = p/(1-p) = probability of the event occurring / probability of the event not occurring
ln(odds) = ln(p/(1-p))
logit(p) = ln(p/(1-p)) = b0 + b1*X1 + b2*X2 + ... + bk*Xk
Now, you may ask, why take the log? For the sake of simplicity, let’s just say that this is one of the best mathematical ways to replicate a step function. I could go into more detail, but that would defeat the purpose of this article.
Build your own logistic regression model in Python here and check the accuracy:
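As a minimal sketch in Python, assuming scikit-learn is available and using its built-in breast cancer data set purely as stand-in binary-outcome data:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Stand-in binary classification data (two outcome classes)
X, y = load_breast_cancer(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the model: the log odds of the outcome are a linear combination of the features
model = LogisticRegression(max_iter=5000)
model.fit(x_train, y_train)

print("accuracy:", model.score(x_test, y_test))
# Predicted probabilities for the first three test cases
print(model.predict_proba(x_test[:3]))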
R Code:
x <- cbind(x_train,y_train)
# Train the model using the training sets and check score
logistic <- glm(y_train ~ ., data = x, family = "binomial")
summary(logistic)
#Predict Output (predicted probabilities)
predicted <- predict(logistic, x_test, type = "response")
Furthermore…
There are many different steps that could be tried in order to improve the model:
including interaction terms
removing features
regularization techniques
3. Decision Tree
Consider an example in which a population is classified into four different groups based on multiple attributes, in order to identify 'whether they will play or not'. To split the population into groups that are as distinct from one another as possible, the tree uses various techniques like Gini impurity, information gain, Chi-square, and entropy.
The best way to understand how the decision tree works, is to play Jezzball – a
classic game from Microsoft (image below). Essentially, you have a room with
moving walls and you need to create walls such that the maximum area gets
cleared off without the balls.
So, every time you split the room with a wall, you are trying to create 2 different
populations within the same room. Decision trees work in a very similar fashion by
dividing a population into as different groups as possible.
Let’s get our hands dirty and code our own decision tree in Python!
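Here is a minimal sketch with scikit-learn's DecisionTreeClassifier, using the built-in iris data set as stand-in data; the criterion argument mirrors the Gini / entropy (information gain) split techniques mentioned above:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in classification data
X, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Grow a tree that splits on Gini impurity (criterion="entropy" would use information gain)
model = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)
model.fit(x_train, y_train)

print("accuracy:", model.score(x_test, y_test))
predicted = model.predict(x_test)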
R Code:
library(rpart)
x <- cbind(x_train,y_train)
# grow tree
fit <- rpart(y_train ~ ., data = x, method = "class")
summary(fit)
#Predict Output (class labels)
predicted <- predict(fit, x_test, type = "class")
4. SVM (Support Vector Machine)
It is a classification method in which we plot each data item as a point in n-dimensional space (where n is the number of features). For example, if we only had two features, like the height and hair length of an individual, we'd first plot these two variables in two-dimensional space, where each point has two coordinates. (The data points that lie closest to the separating boundary are known as support vectors.)
Now, we find the line that splits the data between the two differently classified groups. This is the line for which the distances from the closest point in each of the two groups are as large as possible. If there are more variables, a hyperplane is used to separate the classes.
In our two-feature example, the line which splits the data into the two differently classified groups is the one for which the two closest points (one from each group) are farthest from the line. This line is our classifier. Then, depending on which side of the line new test data lands, that is the class we assign it to.
Think of this algorithm as playing JezzBall in n-dimensional space. The tweaks in the
game are:
You can draw lines/planes at any angle (rather than just horizontal or vertical as in
the classic game)
Try your hand and design an SVM model in Python through this coding window:
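A minimal sketch with scikit-learn's SVC, again on the built-in iris data set as stand-in data; the linear kernel looks for the separating line (hyperplane) with the largest margin:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in classification data
X, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a maximum-margin classifier with a linear kernel
model = SVC(kernel="linear")
model.fit(x_train, y_train)

print("accuracy:", model.score(x_test, y_test))
# The points closest to the separating hyperplane are the support vectors
print("support vectors per class:", model.n_support_)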
R Code:
library(e1071)
x <- cbind(x_train,y_train)
# Fitting model
fit <- svm(y_train ~ ., data = x)
summary(fit)
#Predict Output
predicted <- predict(fit, x_test)
5. Naive Bayes
It is a classification technique based on Bayes' theorem, with an assumption of independence between predictors. Even if the features depend on each other or on the existence of the other features, a Naive Bayes classifier treats each of them as contributing independently to the probability of the outcome.
The Naive Bayesian model is easy to build and particularly useful for very large data sets. Along with its simplicity, Naive Bayes can outperform even highly sophisticated classification methods.
Bayes' theorem provides a way of calculating the posterior probability P(c|x) from P(c), P(x), and P(x|c):
P(c|x) = P(x|c) * P(c) / P(x)
Here, P(c|x) is the posterior probability of class c given predictor x, P(c) is the prior probability of the class, P(x|c) is the likelihood (the probability of the predictor given the class), and P(x) is the prior probability of the predictor.
Example
Let’s understand it using an example. Below is a training data set of weather and
the corresponding target variable, ‘Play.’ Now, we need to classify whether players
will play or not based on weather conditions. Let’s follow the below steps to
perform it.
First, create a likelihood table by finding probabilities such as P(Overcast) = 0.29 and the overall probability of playing, P(Yes) = 0.64.
Now, use the Naive Bayesian equation to calculate the posterior probability for
each class. The class with the highest posterior probability is the outcome of the
prediction.
Problem: Players will play if the weather is sunny. Is this statement correct?
We can solve it using above discussed method, so P(Yes | Sunny) = P( Sunny | Yes)
* P(Yes) / P (Sunny)
Here we have P (Sunny | Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36, P(Yes)= 9/14 =
0.64
Now, P(Yes | Sunny) = 0.33 * 0.64 / 0.36 = 0.60, which is the higher probability, so the prediction is that the players will play.
Naive Bayes uses a similar method to predict the probability of different classes
based on various attributes. This algorithm is mostly used in text classification and
with problems having multiple classes.
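To make the arithmetic above concrete, here is a tiny Python sketch that recomputes the posterior from the probabilities quoted in the worked example:

# Probabilities quoted in the worked example above
p_sunny_given_yes = 3 / 9   # P(Sunny | Yes)
p_yes = 9 / 14              # P(Yes)
p_sunny = 5 / 14            # P(Sunny)

# Bayes' theorem: P(Yes | Sunny) = P(Sunny | Yes) * P(Yes) / P(Sunny)
p_yes_given_sunny = p_sunny_given_yes * p_yes / p_sunny
print(round(p_yes_given_sunny, 2))      # 0.6 -> players are likely to play
print(round(1 - p_yes_given_sunny, 2))  # 0.4 -> probability of not playing given Sunny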
R Code:
library(e1071)
x <- cbind(x_train,y_train)
# Fitting model
fit <- naiveBayes(y_train ~ ., data = x)
summary(fit)
#Predict Output
predicted <- predict(fit, x_test)
6. kNN (k-Nearest Neighbors)
It can be used for both classification and regression problems. However, it is more widely used for classification problems in industry. K nearest neighbors is a simple algorithm that stores all available cases and classifies new cases by a majority vote of its k neighbors. The new case is assigned to the class that is most common among its K nearest neighbors, as measured by a distance function.
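A minimal sketch with scikit-learn's KNeighborsClassifier on the built-in iris data set as stand-in data:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-in classification data
X, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classify each new case by a majority vote of its 5 nearest neighbours (Euclidean distance by default)
model = KNeighborsClassifier(n_neighbors=5)
model.fit(x_train, y_train)

print("accuracy:", model.score(x_test, y_test))
predicted = model.predict(x_test)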
R Code:
library(class)
# knn() from the 'class' package classifies the test cases directly from the training data
predicted <- knn(train = x_train, test = x_test, cl = y_train, k = 5)
summary(predicted)
7. K-Means
Remember figuring out shapes from ink blots? K-means is somewhat similar to this activity. You look at the shape and spread to decipher how many different clusters/populations are present!
How K-means forms clusters:
1. K-means picks k points for each cluster, known as centroids.
2. Each data point forms a cluster with the closest centroid, giving k clusters.
3. The centroid of each cluster is recomputed from the points currently assigned to it.
4. Steps 2 and 3 are repeated until the centroids stop changing.
How to determine the value of K:
In K-means, we have clusters, and each cluster has its own centroid. The sum of squared differences between a cluster's centroid and its data points constitutes the sum-of-squares value for that cluster. When the sum-of-squares values for all the clusters are added together, the result is the total within-cluster sum of squares for the cluster solution.
We know that as the number of clusters increases, this value keeps decreasing, but if you plot the result, you may see that the sum of squared distances falls sharply up to some value of k and then much more slowly after that. At this 'elbow', we can find the optimum number of clusters.
Python Code:
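A minimal sketch with scikit-learn's KMeans on made-up two-blob data; it prints the total within-cluster sum of squares (KMeans calls it inertia_) for several values of k so you can spot the 'elbow' described above:

import numpy as np
from sklearn.cluster import KMeans

# Made-up unlabelled data: two noisy blobs in 2-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

# Total within-cluster sum of squares for k = 1..6; look for the elbow
for k in range(1, 7):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(model.inertia_, 1))

# Fit the chosen solution and read off cluster assignments and centroids
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(model.labels_[:10])
print(model.cluster_centers_)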
R Code:
library(cluster)
fit <- kmeans(x_train, 3) # 3-cluster solution (kmeans() comes from base R's stats package)
8. Random Forest
Random Forest is an ensemble of decision trees. To classify a new object based on its attributes, each tree gives a classification (it 'votes' for a class), and the forest chooses the classification having the most votes over all the trees.
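As a rough illustration, here is a scikit-learn RandomForestClassifier sketch on the built-in iris data set as stand-in data; the forest's prediction is the majority vote of its trees:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Stand-in classification data
X, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An ensemble of 100 decision trees; each tree votes and the forest takes the majority
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(x_train, y_train)

print("accuracy:", model.score(x_test, y_test))
predicted = model.predict(x_test)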
9. Dimensionality Reduction Algorithms
E-commerce companies, for example, now capture ever more detail about customers, such as their demographics, web browsing history, what they like or dislike, purchase history, and feedback, in order to give them more personalized attention than your nearest grocery shopkeeper ever could.
As data scientists, the data we are offered also consists of many features. This sounds good for building a robust model, but there is a challenge: how do you identify the highly significant variable(s) out of 1000 or 2000? In such cases, dimensionality reduction algorithms help us, along with various other techniques like Decision Tree, Random Forest, PCA (principal component analysis), Factor Analysis, identifying variables based on the correlation matrix, the missing value ratio, and others.
To know more about these algorithms, you can read the “Beginners Guide To Learn Dimension Reduction Techniques”.
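A minimal Python sketch of one such technique, PCA, using scikit-learn and its built-in breast cancer data set (30 features) as stand-in data:

from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Stand-in data with 30 features
X, _ = load_breast_cancer(return_X_y=True)

# Standardize the features, then project onto the top 5 principal components
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=5)
X_reduced = pca.fit_transform(X_scaled)

print("original shape:", X.shape, "reduced shape:", X_reduced.shape)
print("variance explained by each component:", pca.explained_variance_ratio_.round(2))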
R Code:
library(stats)
pca <- princomp(x_train, cor = TRUE) # principal component analysis on the training features