
Simple Examples of Ensemble Learning Techniques

1. Bagging - Simple Example

Concept: Train many models separately on different random samples of the data, then take a vote.

Example:

- Data: [A, A, B, B, A]

- Create 3 random samples (with replacement):

- Sample 1: [A, A, B]

- Sample 2: [B, A, A]

- Sample 3: [A, B, B]

Train one model per sample. For a new input, each model predicts:

- Model 1: A

- Model 2: A

- Model 3: B

Final Prediction (Majority Vote): A
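
A minimal from-scratch sketch of this in Python (standard library only). The stand-in "models" below simply memorize the majority label of their own bootstrap sample; a real ensemble would train decision trees or similar learners on each sample:

import random
from collections import Counter

data = ["A", "A", "B", "B", "A"]

# Step 1: draw 3 bootstrap samples of size 3 (sampling with replacement).
random.seed(0)
samples = [random.choices(data, k=3) for _ in range(3)]

# Step 2: "train" one model per sample. Each stand-in model just
# predicts the majority label of its own training sample.
predictions = [Counter(s).most_common(1)[0][0] for s in samples]

# Step 3: the final prediction is the majority vote across the 3 models.
final = Counter(predictions).most_common(1)[0][0]
print("Samples:", samples)
print("Votes:", predictions, "-> Final prediction:", final)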

2. Boosting - Simple Example

Concept: Train models one by one, each learning from the mistakes of the previous one.

Example:

- Data: [A, A, B, B]

Model 1: Predicts most points correctly but gets a few wrong

-> Give more weight to the wrongly predicted points

Model 2: Trained on the reweighted data, so it focuses on those harder points

-> Corrects the mistakes Model 1 made

Combine both models (weighted vote): the final prediction is stronger and more accurate.
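
This reweighting loop is what algorithms such as AdaBoost automate. A minimal sketch using scikit-learn's AdaBoostClassifier, assuming scikit-learn is installed; the 1-D features and labels are made up purely for illustration:

from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy data: 1-D features with labels A/B (illustrative only).
X = [[1], [2], [3], [4], [5], [6], [7], [8]]
y = ["A", "A", "B", "B", "A", "B", "A", "B"]

# Each new stump is fit to a reweighted dataset in which the points the
# previous stumps got wrong carry more weight; the final prediction is a
# weighted vote across all stumps.
boost = AdaBoostClassifier(
    DecisionTreeClassifier(max_depth=1),  # weak learner ("stump")
    n_estimators=5,
    random_state=0,
)
boost.fit(X, y)
print(boost.predict([[2.5]]))  # weighted vote of the 5 stumps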

3. Stacking - Simple Example

Concept: Train several different models and combine their outputs using another model, called a meta-model.

Example:

- Use 3 models: Decision Tree, KNN, and Naive Bayes.

Each model predicts:

- Tree: A

- KNN: B

- Naive Bayes: A

A meta-model (such as an SVM) takes the base predictions [A, B, A] as its input and predicts: A

Final Output: A
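
A minimal sketch of this setup using scikit-learn's StackingClassifier, with the same three base models and an SVM as the meta-model; the toy data is made up for illustration, and cv=2 is used only because the dataset is tiny:

from sklearn.ensemble import StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Toy data: 1-D features with labels A/B (illustrative only).
X = [[1], [2], [3], [4], [5], [6], [7], [8]]
y = ["A", "A", "B", "B", "A", "B", "A", "B"]

# The base models each make predictions; the meta-model (SVM) learns
# how to combine those predictions into the final output.
stack = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier()),
        ("knn", KNeighborsClassifier(n_neighbors=3)),
        ("nb", GaussianNB()),
    ],
    final_estimator=SVC(),
    cv=2,  # small fold count because the toy dataset is tiny
)
stack.fit(X, y)
print(stack.predict([[2.5]]))  # meta-model's combined prediction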
