
FAQ - Boosting

1. What is the difference between AdaBoost and Gradient Boosting?

AdaBoost vs. Gradient Boosting:

AdaBoost:

- AdaBoost is more about "voting weights".
- AdaBoost increases accuracy by giving more weight to the training examples that are misclassified by the model.
- At each iteration, the adaptive boosting algorithm changes the sample distribution by modifying the weights attached to each of the instances.

Gradient Boosting:

- Gradient boosting is more about "adding gradient optimization".
- Gradient boosting increases accuracy by minimizing the loss function (the error, i.e. the difference between the actual and predicted values) and using this loss as the target for the next iteration.
- Gradient boosting calculates the gradient (derivative) of the loss function with respect to the prediction (instead of the features).
- The gradient boosting algorithm builds the first weak learner and calculates the loss function.
- It then builds a second learner to predict the loss after the first step. The process continues with a third learner, then a fourth, and so on until a certain threshold is reached.

A side-by-side scikit-learn sketch of both algorithms follows this comparison.
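The sketch below is illustrative and not part of the original FAQ; the synthetic dataset, hyperparameter values, and variable names are assumptions chosen only to show the two estimators side by side.

# Comparing AdaBoost and gradient boosting on the same synthetic data (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# AdaBoost: re-weights misclassified examples at every iteration.
ada = AdaBoostClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

# Gradient boosting: fits each new tree to the gradient of the loss function.
gbm = GradientBoostingClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

print("AdaBoost accuracy:", accuracy_score(y_test, ada.predict(X_test)))
print("Gradient boosting accuracy:", accuracy_score(y_test, gbm.predict(X_test)))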

2. When to apply AdaBoost (Adaptive Boosting Algorithm)?

Here are the key points for understanding when to apply AdaBoost:

- AdaBoost can be applied to any classification algorithm, so it is really a technique that builds on top of other classifiers rather than being a classifier itself (see the sketch after this list).
- You could just train a bunch of weak classifiers on your own and combine the results, but AdaBoost figures out two things for you:
- It helps you choose the training set for each new classifier based on the results of the previous classifier.
- It determines how much weight should be given to each classifier's proposed answer when combining the results.
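A minimal sketch of this idea, assuming scikit-learn (not part of the original FAQ): AdaBoost wraps a chosen weak classifier, here a depth-1 decision tree (a decision stump).

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# AdaBoost builds on another classifier; here the weak learner is a decision stump.
# Note: scikit-learn versions before 1.2 name this parameter base_estimator instead of estimator.
stump = DecisionTreeClassifier(max_depth=1)
ada = AdaBoostClassifier(estimator=stump, n_estimators=50, random_state=0)
ada.fit(X, y)
print("Training accuracy:", ada.score(X, y))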

3. How does AdaBoost (Adaptive Boosting Algorithm) work?

Important points regarding the working of AdaBoost:

- AdaBoost can be applied to any classification algorithm, so it is really a technique that builds on top of other classifiers rather than being a classifier itself.
- Each weak classifier should be trained on a random subset of the total training set.
- AdaBoost assigns a "weight" to each training example, which determines the probability that the example appears in a classifier's training set.
- The initial weights generally add up to 1. For example, if there are 8 training examples, the weight assigned to each will initially be 1/8, so all training examples start with equal weights.
- If a training example is misclassified, it is assigned a higher weight, so that the probability of that misclassified example appearing in the next training subset is higher.
- After this step, the newly trained classifier will hopefully perform better on the previously misclassified examples.
- In short, the weights are used to increase the probability of a misclassified example being chosen for the next sequential training subset. A simplified weight-update sketch follows this list.
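The sketch below illustrates the weight update with a simplified version of discrete AdaBoost; it is an assumption made for illustration (toy data, labels in {-1, +1}, a decision stump as the weak learner), not a reference implementation.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)  # toy labels in {-1, +1}

n = len(y)
w = np.full(n, 1.0 / n)  # initial weights are equal and add up to 1

for t in range(5):  # a few boosting rounds
    stump = DecisionTreeClassifier(max_depth=1)
    stump.fit(X, y, sample_weight=w)  # the weak learner sees the current weights
    pred = stump.predict(X)

    err = np.sum(w[pred != y]) / np.sum(w)  # weighted error rate
    err = np.clip(err, 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)  # the classifier's "say" in the final vote

    # Misclassified examples are multiplied by exp(+alpha), correct ones by exp(-alpha),
    # so the next round focuses on the examples the previous learner got wrong.
    w *= np.exp(-alpha * y * pred)
    w /= w.sum()  # renormalize so the weights again add up to 1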

4. How to install the XGBoost library on Windows or Mac operating systems?

For Windows, run the following command in your Jupyter notebook:

!pip install xgboost

For Mac, run the following command in your terminal:

conda install -c conda-forge xgboost

Note: Open a new terminal before running the above command on Mac. To open the terminal, open the Launchpad and then click on the terminal icon.
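Once installed, a quick check (an optional suggestion, not part of the original FAQ) confirms that the library can be imported:

import xgboost as xgb
print(xgb.__version__)  # prints the installed XGBoost version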

5. I am getting the below warning while fitting an XGBoost classifier:


WARNING: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.

How to solve it?

To remove the warning, try setting the eval_metric hyperparameter to 'logloss', as shown below:

xgb = XGBClassifier(eval_metric='logloss')
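For context, here is a minimal end-to-end sketch (the dataset and variable names are illustrative assumptions) fitting the classifier with this setting:

from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Setting eval_metric explicitly suppresses the default-metric warning.
xgb_clf = XGBClassifier(eval_metric='logloss')
xgb_clf.fit(X, y)
print("Training accuracy:", xgb_clf.score(X, y))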
