
Types of Machine Learning Algorithms

Supervised Learning Algorithms

How it works: This algorithm consists of a target/outcome variable (or dependent


variable) which is to be predicted from a given set of predictors (independent
variables). Using this set of variables, we generate a function that maps input data
to desired outputs. The training process continues until the model achieves the
desired level of accuracy on the training data. Examples of Supervised Learning:
Regression, Decision Tree, Random Forest, KNN, Logistic Regression, etc.

Unsupervised Learning Algorithms

How it works: In this algorithm, we do not have any target or outcome variable to
predict/estimate (this is called unlabelled data). It is used for recommendation
systems or for clustering populations into different groups. Clustering algorithms are
widely used for segmenting customers into different groups for specific
interventions. Examples of Unsupervised Learning: Apriori algorithm, K-means
clustering.

Reinforcement Learning Algorithms

How it works: Using this algorithm, the machine is trained to make specific
decisions. The machine is exposed to an environment where it trains itself
continually using trial and error. This machine learns from past experience and tries
to capture the best possible knowledge to make accurate business decisions.
Example of Reinforcement Learning: Markov Decision Process

List of Top 10 Common Machine Learning Algorithms

Here is the list of commonly used machine learning algorithms. These algorithms
can be applied to almost any data problem:

i. Linear Regression
ii. Logistic Regression
iii. Decision Tree
iv. SVM
v. Naive Bayes
vi. kNN
vii. K-Means
viii. Random Forest
ix. Dimensionality Reduction Algorithms
x. Gradient Boosting algorithms
a. GBM
b. XGBoost
c. LightGBM
d. CatBoost

1. Linear Regression

It is used to estimate real values (cost of houses, number of calls, total sales, etc.)
based on continuous variable(s). Here, we establish the relationship between
independent and dependent variables by fitting the best line.

This best-fit line is known as the regression line and is represented by a linear
equation Y= a*X + b.

Example 1

The best way to understand linear regression is to relive this childhood experience.
Let us say you ask a child in fifth grade to arrange the people in his class in
increasing order of weight without asking them their weights! What do you
think the child will do? He/she would likely look at (visually analyze) the height and
build of people and arrange them using a combination of these visible parameters.
This is linear regression in real life! The child has actually figured out that height
and build are correlated to weight by a relationship, which looks like the
equation above.

In this equation:

 Y – Dependent Variable
 a – Slope
 X – Independent variable
 b – Intercept

The coefficients a and b are derived by minimizing the sum of the squared
distances between the data points and the regression line.
Example 2

Look at the example below. Here we have identified the best-fit line with the linear
equation y = 0.2811x + 13.9. Using this equation, we can estimate a person's weight
if we know their height.
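
As a quick worked example, plugging a hypothetical height into that equation in Python:

# Plugging a hypothetical height (in cm) into the equation above
height = 160
weight = 0.2811 * height + 13.9
print(weight)  # about 58.9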

Linear Regression is mainly of two types: Simple Linear Regression and Multiple
Linear Regression. Simple Linear Regression is characterized by one independent
variable, while Multiple Linear Regression (as the name suggests) is characterized by
multiple (more than 1) independent variables. While finding the best-fit line, you
can also fit a polynomial or curvilinear function; these models are known as
polynomial or curvilinear regression.

Here’s a coding window to try your hand at building your own linear regression model:

Python:
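
The interactive coding window from the original page is not reproduced here; the following is a minimal scikit-learn sketch, with hypothetical toy data standing in for the real datasets:

# Minimal linear regression sketch (toy data below is hypothetical)
import numpy as np
from sklearn.linear_model import LinearRegression

x_train = np.array([[150], [160], [170], [180]])   # e.g., heights
y_train = np.array([56.1, 58.9, 61.7, 64.5])       # e.g., weights
x_test = np.array([[165], [175]])

# Train the model using the training set and check the R^2 score
linear = LinearRegression()
linear.fit(x_train, y_train)
print("Coefficient:", linear.coef_, "Intercept:", linear.intercept_)
print("R^2 on training data:", linear.score(x_train, y_train))

# Predict output
predicted = linear.predict(x_test)
print(predicted)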

R Code:

#Load Train and Test datasets
#Identify feature and response variable(s); values must be numeric
x_train <- input_variables_values_training_datasets
y_train <- target_variables_values_training_datasets
x_test <- input_variables_values_test_datasets
x <- cbind(x_train, y_train)

# Train the model using the training sets and check the fit
linear <- lm(y_train ~ ., data = x)
summary(linear)

#Predict Output
predicted <- predict(linear, x_test)

2. Logistic Regression

Don’t get confused by its name! It is a classification algorithm, not a regression
algorithm. It is used to estimate discrete values (binary values like 0/1, yes/no,
true/false) based on a given set of independent variable(s). In simple words, it
predicts the probability of the occurrence of an event by fitting data to a logistic
function. Hence, it is also known as logit regression. Since it predicts a probability,
its output values lie between 0 and 1 (as expected).

Again, let us try and understand this through a simple example.

Let’s say your friend gives you a puzzle to solve. There are only 2 outcome scenarios
– either you solve it, or you don’t. Now imagine that you are being given a wide
range of puzzles/quizzes in an attempt to understand which subjects you are good
at. The outcome of this study would be something like this – if you are given a
trigonometry-based tenth-grade problem, you are 70% likely to solve it. On the
other hand, if it is a grade fifth history question, the probability of getting an answer
is only 30%. This is what Logistic Regression provides you.

Coming to the math, the log odds of the outcome are modeled as a linear
combination of the predictor variables.

odds = p / (1 - p) = probability of event occurrence / probability of event non-occurrence

ln(odds) = ln(p / (1 - p))

logit(p) = ln(p / (1 - p)) = b0 + b1*X1 + b2*X2 + b3*X3 + ... + bk*Xk


Above, p is the probability of the presence of the characteristic of interest. Logistic
regression chooses parameters that maximize the likelihood of observing the sample
values, rather than parameters that minimize the sum of squared errors (as in
ordinary regression).

Now, you may ask, why take a log? For the sake of simplicity, let’s just say that this is
one of the best mathematical ways to replicate a step function. I could go into more
details, but that would defeat the purpose of this article.

Build your own logistic regression model in Python and check its accuracy:
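
The interactive window is not reproduced here; the following is a minimal scikit-learn sketch with hypothetical toy data standing in for the real puzzle/quiz dataset:

# Minimal logistic regression sketch (toy data below is hypothetical)
import numpy as np
from sklearn.linear_model import LogisticRegression

x_train = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])  # e.g., hours of practice
y_train = np.array([0, 0, 0, 1, 1, 1])                          # solved the puzzle or not
x_test = np.array([[2.5], [5.5]])

logistic = LogisticRegression()
logistic.fit(x_train, y_train)

print(logistic.predict(x_test))        # predicted class labels (0/1)
print(logistic.predict_proba(x_test))  # predicted probability of each class
print("Training accuracy:", logistic.score(x_train, y_train))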

R Code:

x <- cbind(x_train, y_train)

# Train the model using the training sets and check the fit
logistic <- glm(y_train ~ ., data = x, family = 'binomial')
summary(logistic)

#Predict Output (type = "response" returns predicted probabilities)
predicted <- predict(logistic, x_test, type = "response")

Furthermore…

There are many different steps that could be tried in order to improve the model:

 Including interaction terms
 Removing features
 Regularization techniques
 Using a non-linear model

3. Decision Tree

This is one of my favorite algorithms, and I use it quite frequently. It is a type of
supervised learning algorithm that is mostly used for classification problems.
Surprisingly, it works for both categorical and continuous dependent variables. In
this algorithm, we split the population into two or more homogeneous sets. This is
done based on the most significant attributes/independent variables, to make the
groups as distinct as possible. For more details, you can read Decision Tree Simplified.

Source: statsexchange

In the image above, you can see that the population is classified into four different
groups based on multiple attributes to identify ‘if they will play or not’. To split the
population into different heterogeneous groups, it uses various techniques like
Gini, Information Gain, Chi-square, and entropy.

The best way to understand how a decision tree works is to play Jezzball – a
classic game from Microsoft (image below). Essentially, you have a room with
moving walls, and you need to create walls such that the maximum area gets
cleared of balls.

So, every time you split the room with a wall, you are trying to create 2 different
populations within the same room. Decision trees work in a very similar fashion by
dividing a population into as different groups as possible.

More: Simplified Version of Decision Tree Algorithms

Let’s get our hands dirty and code our own decision tree in Python!
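
A minimal scikit-learn sketch with hypothetical toy data; the criterion can be switched between 'gini' and 'entropy' (information gain), matching the splitting techniques mentioned above:

# Minimal decision tree sketch (toy data below is hypothetical)
import numpy as np
from sklearn.tree import DecisionTreeClassifier

x_train = np.array([[1, 0], [2, 1], [3, 0], [4, 1], [5, 0], [6, 1]])
y_train = np.array([0, 0, 0, 1, 1, 1])
x_test = np.array([[2, 1], [5, 1]])

# grow tree ('gini' is the default criterion; 'entropy' is also available)
tree = DecisionTreeClassifier(criterion='gini', max_depth=3, random_state=0)
tree.fit(x_train, y_train)

# Predict output
predicted = tree.predict(x_test)
print(predicted)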

R Code:

library(rpart)
x <- cbind(x_train, y_train)

# grow tree
fit <- rpart(y_train ~ ., data = x, method = "class")
summary(fit)

#Predict Output
predicted <- predict(fit, x_test)

4. SVM (Support Vector Machine)


It is a classification method. In the SVM algorithm, we plot each data item as a point in
n-dimensional space (where n is the number of features you have), with the value
of each feature being the value of a particular coordinate.

For example, if we only had two features, like the height and hair length of an
individual, we’d first plot these two variables in two-dimensional space, where each
point has two coordinates. The points lying closest to the separating boundary are
known as support vectors.

Now, we will find a line that splits the data between the two differently
classified groups. This will be the line such that the distance from the
closest point in each of the two groups is as large as possible. If there are more
variables, a hyperplane is used to separate the classes.

In the example shown above, the line which splits the data into two differently
classified groups is the black line, since the two closest points are the farthest
from it. This line is our classifier. Then, depending on which side of the line the
testing data lands, that is the class we assign to the new data.

Think of this algorithm as playing JezzBall in n-dimensional space. The tweaks in the
game are:

 You can draw lines/planes at any angle (rather than just horizontal or vertical as in
the classic game).
 The objective of the game is to segregate balls of different colors in different
rooms.
 And the balls are not moving.

Try your hand at designing an SVM model in Python through this coding window:
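
The window itself is not reproduced here; below is a minimal scikit-learn sketch on hypothetical toy data (features standing in for height and hair length), using a linear kernel so the separating boundary is a straight line/hyperplane as described above:

# Minimal SVM sketch (toy data below is hypothetical)
import numpy as np
from sklearn.svm import SVC

x_train = np.array([[170, 5], [180, 3], [160, 25], [155, 30], [175, 4], [150, 28]])
y_train = np.array([0, 0, 1, 1, 0, 1])
x_test = np.array([[165, 10], [158, 27]])

# Fitting model with a linear kernel
model = SVC(kernel='linear')
model.fit(x_train, y_train)

# Predict output
predicted = model.predict(x_test)
print(predicted)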

R Code:

library(e1071)
x <- cbind(x_train, y_train)

# Fitting model
fit <- svm(y_train ~ ., data = x)
summary(fit)

#Predict Output
predicted <- predict(fit, x_test)

5. Naive Bayes

It is a classification technique based on Bayes’ theorem with an assumption of
independence between predictors. In simple terms, a Naive Bayes classifier
assumes that the presence of a particular feature in a class is unrelated to the
presence of any other feature. For example, a fruit may be considered to be an
apple if it is red, round, and about 3 inches in diameter. Even if these features
depend on each other or upon the existence of the other features, a Naive Bayes
classifier would consider all of these properties to independently contribute to the
probability that this fruit is an apple.

The Naive Bayesian model is easy to build and particularly useful for very large data
sets. Along with simplicity, Naive Bayes is known to outperform even highly
sophisticated classification methods.

Naive Bayes Equation

Bayes’ theorem provides a way of calculating the posterior probability P(c|x) from
P(c), P(x), and P(x|c). Look at the equation below:

P(c|x) = P(x|c) * P(c) / P(x)

Here,

 P(c|x) is the posterior probability of class (target) given predictor (attribute).


 P(c) is the prior probability of the class.
 P(x|c) is the likelihood which is the probability of the predictor given
the class.
 P(x) is the prior probability of the predictor.

Example

Let’s understand it using an example. Below is a training data set of weather and
the corresponding target variable, ‘Play.’ Now, we need to classify whether players
will play or not based on weather conditions. Let’s follow the below steps to
perform it.

Convert the data set to a frequency table.

Create a Likelihood table by finding the probabilities like Overcast probability = 0.29
and probability of playing is 0.64.

Now, use the Naive Bayesian equation to calculate the posterior probability for
each class. The class with the highest posterior probability is the outcome of the
prediction.

Problem: Players will play if the weather is sunny. Is this statement correct?

We can solve it using the method discussed above: P(Yes | Sunny) = P(Sunny | Yes)
* P(Yes) / P(Sunny)

Here we have P(Sunny | Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36, and P(Yes) = 9/14 =
0.64.

Now, P(Yes | Sunny) = 0.33 * 0.64 / 0.36 = 0.60, which has the higher probability.
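
As a quick sanity check, the same arithmetic can be reproduced in Python from the counts quoted above:

# Reproducing P(Yes | Sunny) from the counts in the example above
n_total = 14
n_yes = 9
n_sunny = 5
n_sunny_and_yes = 3

p_sunny_given_yes = n_sunny_and_yes / n_yes   # 3/9  = 0.33
p_yes = n_yes / n_total                       # 9/14 = 0.64
p_sunny = n_sunny / n_total                   # 5/14 = 0.36

p_yes_given_sunny = p_sunny_given_yes * p_yes / p_sunny
print(round(p_yes_given_sunny, 2))  # ~0.6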

Naive Bayes uses a similar method to predict the probability of different classes
based on various attributes. This algorithm is mostly used in text classification and
with problems having multiple classes.

Code for a Naive Bayes classification model in Python:
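
Below is a minimal scikit-learn sketch using CategoricalNB on a hypothetical, label-encoded weather column (0 = Sunny, 1 = Overcast, 2 = Rainy); the toy counts do not reproduce the table above and only illustrate the API:

# Minimal Naive Bayes sketch for a categorical predictor (toy data is hypothetical)
import numpy as np
from sklearn.naive_bayes import CategoricalNB

x_train = np.array([[0], [0], [1], [2], [2], [1], [0], [2], [1], [0]])  # encoded weather
y_train = np.array([0, 1, 1, 1, 0, 1, 1, 0, 1, 1])                     # 1 = Play, 0 = Don't play

model = CategoricalNB()
model.fit(x_train, y_train)

x_test = np.array([[0]])            # Sunny
print(model.predict(x_test))        # predicted class
print(model.predict_proba(x_test))  # posterior probability of each class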

R Code:

library(e1071)
x <- cbind(x_train, y_train)

# Fitting model
fit <- naiveBayes(y_train ~ ., data = x)
summary(fit)

#Predict Output
predicted <- predict(fit, x_test)

6. kNN (k- Nearest Neighbors)

It can be used for both classification and regression problems. However, it is more
widely used in classification problems in the industry. K nearest neighbors is a
simple algorithm that stores all available cases and classifies new cases by a
majority vote of its k neighbors. The case is assigned to the class most common
amongst its K nearest neighbors, as measured by a distance function.

These distance functions can be Euclidean, Manhattan, Minkowski, and Hamming
distances. The first three are used for continuous variables, and the
fourth one (Hamming) for categorical variables. If K = 1, then the case is simply
assigned to the class of its nearest neighbor. At times, choosing K turns out to be a
challenge while performing kNN modeling.

More: Introduction to k-nearest neighbors: Simplified.


KNN can easily be mapped to our real lives. If you want to learn about a person
with whom you have no information, you might like to find out about his close
friends and the circles he moves in and gain access to his/her information!

Things to consider before selecting kNN:

 KNN is computationally expensive.
 Variables should be normalized, or else variables with a higher range can bias the
algorithm.
 kNN requires more work at the pre-processing stage, such as outlier and noise
removal.
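
Before the R snippet, here is a minimal Python sketch using scikit-learn's KNeighborsClassifier on hypothetical toy data; the scaling step reflects the normalization advice above:

# Minimal kNN sketch (toy data below is hypothetical)
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

x_train = np.array([[150, 45], [160, 55], [170, 65], [180, 80], [175, 75], [155, 50]])
y_train = np.array([0, 0, 1, 1, 1, 0])
x_test = np.array([[165, 60]])

# Normalize features, since kNN is distance-based
scaler = StandardScaler().fit(x_train)
x_train_scaled = scaler.transform(x_train)
x_test_scaled = scaler.transform(x_test)

# metric='minkowski' with p=2 is the Euclidean distance
knn = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
knn.fit(x_train_scaled, y_train)
print(knn.predict(x_test_scaled))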

R Code:

library(class)

# class::knn() takes the training matrix, test matrix, and training labels directly
# (there is no formula interface); it returns the predicted labels for x_test
predicted <- knn(train = x_train, test = x_test, cl = y_train, k = 5)

summary(predicted)

7. K-Means

It is a type of unsupervised algorithm which solves the clustering problem. Its
procedure follows a simple and easy way to classify a given data set through a
certain number of clusters (assume k clusters). Data points inside a cluster are
homogeneous, and heterogeneous with respect to other clusters.

Remember figuring out shapes from ink blots? K-means is somewhat similar to this
activity. You look at the shape and spread to decipher how many different
clusters/populations are present!
How K-means forms clusters:

 K-means picks k points, known as centroids, one for each cluster.
 Each data point forms a cluster with the closest centroid, i.e., we get k clusters.
 It then finds the centroid of each cluster based on the existing cluster members. Here
we have new centroids.
 As we have new centroids, repeat steps 2 and 3: find the closest centroid for
each data point and associate it with the new k clusters.
Repeat this process until convergence occurs, i.e., the centroids do not change.

How to determine the value of K:

In K-means, we have clusters, and each cluster has its own centroid. The sum of the
squared differences between the centroid and the data points within a cluster
constitutes the sum-of-squares value for that cluster. When the sum-of-squares
values for all the clusters are added, the result is the total within-cluster sum of
squares for the cluster solution.

We know that as the number of clusters increases, this value keeps on decreasing,
but if you plot the result, you may see that the sum of squared distances decreases
sharply up to some value of k and then much more slowly after that. Here, at this
"elbow", we can find the optimum number of clusters.
Python Code:
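
The original interactive window is not reproduced; below is a minimal scikit-learn sketch on hypothetical 2-D points, including a simple loop that prints the total within-cluster sum of squares (inertia) for different values of k, as described above:

# Minimal K-means sketch (toy data below is hypothetical)
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1, 2], [1, 4], [1, 0],
              [10, 2], [10, 4], [10, 0],
              [20, 20], [21, 19], [19, 21]])

# Fit a 3-cluster solution
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
kmeans.fit(X)
print(kmeans.labels_)
print(kmeans.cluster_centers_)

# Elbow check: total within-cluster sum of squares (inertia) for different k
for k in range(1, 6):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, model.inertia_)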

R Code:

library(cluster)

fit <- kmeans(X, 3) # 3-cluster solution

8. Random Forest

Random Forest is a trademarked term for an ensemble of decision trees.
In Random Forest, we’ve got a collection of decision trees (also known as a “forest”).
To classify a new object based on its attributes, each tree gives a classification, and we
say the tree “votes” for that class. The forest chooses the classification having the
most votes (over all the trees in the forest).

Each tree is planted & grown as follows:

 If the number of cases in the training set is N, then a sample of N cases is
taken at random but with replacement. This sample will be the training set
for growing the tree.
 If there are M input variables, a number m << M is specified such that at each
node, m variables are selected at random out of the M, and the best split on
these m is used to split the node. The value of m is held constant during the
forest growth.
 Each tree is grown to the largest extent possible. There is no pruning.
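
A minimal Python sketch of a random forest classifier with scikit-learn, on hypothetical toy data; n_estimators and max_features correspond to the number of trees and the random feature subset m described above:

# Minimal random forest sketch (toy data below is hypothetical)
import numpy as np
from sklearn.ensemble import RandomForestClassifier

x_train = np.array([[1, 0], [2, 1], [3, 0], [4, 1], [5, 0], [6, 1]])
y_train = np.array([0, 0, 0, 1, 1, 1])
x_test = np.array([[2, 1], [5, 0]])

# Each of the n_estimators trees is grown on a bootstrap sample, with a random
# subset of features (max_features) considered at each split; no pruning by default
forest = RandomForestClassifier(n_estimators=100, max_features='sqrt', random_state=0)
forest.fit(x_train, y_train)

# Predict output by majority vote over the trees
predicted = forest.predict(x_test)
print(predicted)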

9. Dimensionality Reduction Algorithms


In the last 4-5 years, there has been an exponential increase in data capture at
every possible stage. Corporates, government agencies, and research organizations
are not only coming up with new data sources, but they are also capturing data in
great detail.

For example, e-commerce companies are capturing more details about customers,
like their demographics, web crawling history, what they like or dislike, purchase
history, feedback, and many others, to give them personalized attention, more than
your nearest grocery shopkeeper can.

As data scientists, the data we are offered also consists of many features. This
sounds good for building a robust model, but there is a challenge: how do you
identify the highly significant variable(s) out of 1000 or 2000? In such cases, the
dimensionality reduction algorithm helps us, along with various other techniques
like Decision Tree, Random Forest, PCA (principal component analysis), Factor
Analysis, identifying features based on the correlation matrix, missing value ratio, and others.

To know more about these algorithms, you can read “Beginners Guide To Learn
Dimension Reduction Techniques “.

R Code:

library(stats)

pca <- princomp(train, cor = TRUE)

train_reduced <- predict(pca, train)

test_reduced <- predict(pca, test)
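
For comparison, here is a minimal Python sketch of PCA with scikit-learn; the train and test matrices below are random placeholders standing in for real numeric data:

# Minimal PCA sketch (random placeholder data)
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

train = np.random.rand(100, 20)
test = np.random.rand(25, 20)

# Standardize first (the R snippet's cor = TRUE has a similar effect)
scaler = StandardScaler().fit(train)
train_std = scaler.transform(train)
test_std = scaler.transform(test)

# Keep enough components to explain ~95% of the variance
pca = PCA(n_components=0.95)
train_reduced = pca.fit_transform(train_std)
test_reduced = pca.transform(test_std)
print(train_reduced.shape, test_reduced.shape)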
