AAM PST Answers-1

Ensemble learning is a machine learning technique that combines multiple models to enhance performance. Key methods include Bagging, Boosting, Stacking, and Voting Ensembles, each with distinct approaches to model training and prediction. These methods leverage the strengths of various algorithms to improve accuracy and robustness in predictions.

Ensemble Learning Methods

Ensemble learning is a technique in machine learning where multiple models (often called
weak learners) are combined to improve overall performance. There are several types of
ensemble learning methods:

1. Bagging (Bootstrap Aggregating)
   - Multiple models are trained in parallel, each on a different bootstrap sample (a random subset of the dataset drawn with replacement).
   - The final prediction is made by majority voting (for classification) or averaging (for regression).
   - Example: Random Forest, which combines multiple decision trees (see the bagging sketch after this list).
2. Boosting
   - Models are trained sequentially, with each new model focusing on correcting the errors of the previous ones.
   - Misclassified instances are given higher weights, so later models concentrate on the hard cases.
   - Examples: AdaBoost, Gradient Boosting, XGBoost, LightGBM (see the boosting sketch below).
3. Stacking (Stacked Generalization)
   - Different models (base learners) are trained independently.
   - Their predictions are combined by another model (the meta-learner), which learns how best to weight them.
   - Helps leverage the strengths of multiple algorithms (see the stacking sketch below).
4. Voting Ensembles
   - Multiple models make predictions, and the final output is determined by majority voting (for classification) or averaging (for regression).
   - Types: Hard Voting (predicts the majority class) and Soft Voting (averages the predicted class probabilities, optionally weighted); see the voting sketch below.
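
The sketches below show how each method looks in code. They are minimal illustrations, assuming scikit-learn is installed; the dataset is synthetic (make_classification) and all estimator counts and settings are arbitrary choices for demonstration, not recommendations.

Bagging sketch: BaggingClassifier uses decision trees as its default base learner, fits each one on its own bootstrap sample, and combines them by majority vote.

```python
# Bagging: many decision trees, each fit on a bootstrap sample of the data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data, purely for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# 50 trees, each trained on a different random subset drawn with
# replacement; predictions are combined by majority vote.
bagging = BaggingClassifier(n_estimators=50, random_state=42)
bagging.fit(X_train, y_train)
print("Bagging accuracy:", accuracy_score(y_test, bagging.predict(X_test)))
```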
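
Boosting sketch: AdaBoost trains learners sequentially and increases the weight of misclassified instances after each round, matching the description in item 2 (same synthetic-data setup as above).

```python
# Boosting: models trained one after another, each focusing on the
# samples the previous ones got wrong.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# AdaBoost reweights misclassified instances after each round, so the
# next (shallow) learner concentrates on the hard cases.
boosting = AdaBoostClassifier(n_estimators=100, random_state=42)
boosting.fit(X_train, y_train)
print("AdaBoost accuracy:", accuracy_score(y_test, boosting.predict(X_test)))
```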
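
Stacking sketch: heterogeneous base learners are trained independently, and a meta-learner (here a logistic regression, an arbitrary choice) is fit on their predictions. StackingClassifier generates those predictions with internal cross-validation to reduce overfitting.

```python
# Stacking: independent base learners plus a meta-learner that
# combines their predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Base learners are trained independently; their cross-validated
# predictions become the input features of the meta-learner.
stacking = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("tree", DecisionTreeClassifier(random_state=42)),
    ],
    final_estimator=LogisticRegression(),  # the meta-learner
)
stacking.fit(X_train, y_train)
print("Stacking accuracy:", accuracy_score(y_test, stacking.predict(X_test)))
```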
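
Voting sketch: the same three base models (again arbitrary choices) combined first by hard voting, then by soft voting. Soft voting requires every base model to support predict_proba, which all three here do.

```python
# Voting: hard voting takes the majority class; soft voting averages
# the predicted class probabilities.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

estimators = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("tree", DecisionTreeClassifier(random_state=42)),
    ("nb", GaussianNB()),
]

for voting in ("hard", "soft"):  # hard = majority vote, soft = probability average
    clf = VotingClassifier(estimators=estimators, voting=voting)
    clf.fit(X_train, y_train)
    print(voting, "voting accuracy:",
          accuracy_score(y_test, clf.predict(X_test)))
```

Soft voting tends to help when the base models produce well-calibrated probabilities, since confident predictions carry more weight; hard voting is the safer default when probability estimates are unreliable.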
