
Ensemble Methods

Bagging
Boosting
Stacking
DR G MANIKANDAN
What is Ensemble learning
Ensemble learning is one of the most powerful machine learning
techniques: it combines the outputs of two or more models (weak
learners) to solve a particular computational intelligence problem.

E.g., the Random Forest algorithm is an ensemble of many decision
trees combined.
What is Ensemble learning
Ensemble learning is used to improve the overall performance and
accuracy of a model.

Its basic idea is to combine the predictions from multiple models
using different combination techniques.

There are three common ensemble learning methods:

•Bagging
•Boosting
•Stacking
1. Bagging
Bagging is a method of ensemble modeling, primarily used to solve
machine learning problems. It is generally completed in two steps:

•Bootstrapping
•Aggregation

Example:

In the Random Forest method, predictions from multiple decision trees
are ensembled in parallel. In regression problems, the average of
these predictions is taken as the final output, whereas in
classification problems, the majority-voted class is selected as the
final prediction.
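
As a minimal sketch of this bag-of-trees idea (the dataset and
hyperparameters below are illustrative assumptions, not taken from the
slides), a Random Forest can be fit with scikit-learn:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy dataset, assumed for illustration
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Random Forest = bagging over decision trees: each tree is trained on a
# bootstrap sample, and the trees' class votes are aggregated.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print("Test accuracy:", forest.score(X_test, y_test))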
1. Bagging

Bootstrapping:

•It is a random sampling method used to draw samples from the data
with replacement, so the same observation can appear more than once in
a sample.

•In this method, random data samples are first fed to the primary
model, and a base learning algorithm is then run on the samples to
complete the learning process.
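
A short sketch of drawing one bootstrap sample with NumPy (the toy
data and seed are assumptions for illustration):

import numpy as np

rng = np.random.default_rng(0)
data = np.arange(10)  # toy dataset of 10 observations

# Sampling WITH replacement: indices may repeat, so some observations
# appear several times in the sample and others not at all.
indices = rng.integers(0, len(data), size=len(data))
bootstrap_sample = data[indices]
print(bootstrap_sample)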
1. Bagging

Aggregation

•This step combines the outputs of all base models and, based on them,
predicts an aggregate result with greater accuracy and reduced
variance.
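
The sketch below illustrates both aggregation rules on assumed toy
predictions: averaging for regression and majority voting for binary
classification:

import numpy as np

# Assumed predictions from 3 base models (rows) for 2 samples (columns)
reg_preds = np.array([[2.0, 5.1],
                      [1.8, 4.9],
                      [2.1, 5.3]])
print(reg_preds.mean(axis=0))  # regression: average across models

clf_preds = np.array([[1, 0],
                      [1, 1],
                      [0, 1]])
# Classification: majority vote per sample (binary labels)
majority = (clf_preds.sum(axis=0) > clf_preds.shape[0] / 2).astype(int)
print(majority)  # -> [1 1]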


2. Boosting
Boosting is an ensemble method that enables each member to learn from
the preceding member's mistakes and make better predictions.

Unlike bagging, in boosting all (weak) base learners are arranged
sequentially so that each can learn from the mistakes of its preceding
learner.

In this way, the weak learners together turn into a strong learner and
form a predictive model with significantly improved performance.
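
A minimal boosting sketch using scikit-learn's AdaBoostClassifier; the
dataset, the depth-1 weak learner, and the estimator keyword
(scikit-learn 1.2 or later) are illustrative assumptions:

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Weak learners (depth-1 "stumps") are fit sequentially; each new stump
# puts more weight on the samples its predecessors misclassified.
boost = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),
    n_estimators=50,
    random_state=0,
)
boost.fit(X_train, y_train)
print("Test accuracy:", boost.score(X_test, y_test))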


2. Boosting

We now have a basic understanding of ensemble techniques in machine
learning and their two common methods, i.e., bagging and boosting.
Let's discuss a different paradigm of ensemble learning, i.e.,
stacking.


3. Stacking

Stacking is one of the popular ensemble modeling techniques in machine
learning.

Various weak learners are ensembled in parallel, and their outputs are
combined by a meta-learner to make better predictions.
3. Stacking

Stacking is also known as stacked generalization and is an extended
form of the Model Averaging Ensemble technique, in which sub-models
contribute according to their performance and a new model is built on
top of them to make better predictions.

This new model is stacked on top of the others, which is why it is
named stacking.
Architecture of Stacking
The architecture of the stacking model is designed in such a way that
it consists of two or more base (learner) models and a meta-model that
combines the predictions of the base models.

These base models are called level-0 models, and the meta-model is
known as the level-1 model.

So, the stacking ensemble method involves original (training) data,
level-0 (base) models, level-0 predictions, a level-1 (meta) model,
and the final prediction.
•Original data: This data is divided into n folds, which serve as the
training and test data for the base models.

•Base models: Also referred to as level-0 models, these models are fit
on the training data, and their predictions form the level-0 output.

•Level-0 predictions: Each base model is trained on some of the
training data and makes its own predictions, known as level-0
predictions.

•Meta-model: The stacking architecture contains one meta-model, which
learns how to best combine the predictions of the base models. The
meta-model is also known as the level-1 model.

•Level-1 prediction: The meta-model is trained on the predictions made
by the individual base models: data not used to train the base models
are fed to them, predictions are made, and these predictions, together
with the expected outputs, provide the input-output pairs of the
training dataset used to fit the meta-model. The fitted meta-model
then produces the final (level-1) prediction, as the sketch below
shows.
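
As a sketch of this full pipeline, scikit-learn's StackingClassifier
wires up level-0 models, out-of-fold level-0 predictions, and a
level-1 meta-model; the particular base models and meta-model below
are illustrative choices:

from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Level-0 (base) models
base_models = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("knn", KNeighborsClassifier()),
]

# Level-1 (meta) model, fit on out-of-fold predictions of the base
# models (cv=5), mirroring the level-1 step described above
stack = StackingClassifier(
    estimators=base_models,
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_train, y_train)
print("Test accuracy:", stack.score(X_test, y_test))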
