
Simple Mathematical Examples in Ensemble Learning
1. Introduction
Ensemble learning combines multiple models (usually weak learners) to build a stronger
model. The most common ensemble methods are Bagging, Boosting, and Stacking. Below
are simplified mathematical examples for Bagging and Boosting.

2. Bagging (Bootstrap Aggregating)


We train 3 decision stumps, each on a bootstrap sample drawn with replacement from the training data.

Training Data:

X: [1, 2, 3, 4, 5]

Y: [0, 0, 1, 1, 1]

Bootstrap Samples:

Sample 1: [(1,0), (2,0), (3,1), (1,0), (4,1)]

Sample 2: [(2,0), (3,1), (5,1), (4,1), (2,0)]

Sample 3: [(5,1), (1,0), (2,0), (3,1), (5,1)]
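
A bootstrap sample is simply the training pairs resampled with replacement. The short Python sketch below shows one way to draw such a sample; NumPy and the fixed seed are my own choices for illustration, and the exact pairs drawn depend on the random draw.

import numpy as np

X = np.array([1, 2, 3, 4, 5])
Y = np.array([0, 0, 1, 1, 1])

rng = np.random.default_rng(0)                        # seed chosen arbitrarily
idx = rng.choice(len(X), size=len(X), replace=True)   # indices sampled with replacement
sample = [(int(x), int(y)) for x, y in zip(X[idx], Y[idx])]   # one bootstrap sample of (x, y) pairs
print(sample)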

Decision Stumps:

Stump 1: x < 2.5 => 0, else 1

Stump 2: x < 3.5 => 0, else 1

Stump 3: x < 2.5 => 0, else 1

For new point x = 3:

Stump 1: predicts 1

Stump 2: predicts 0

Stump 3: predicts 1

Majority vote = 1 (final prediction)
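
A minimal Python sketch of this voting step, using the stump thresholds given above (the function names are illustrative, not from the document):

# Decision stumps from the example: predict 0 below the threshold, 1 otherwise
stump1 = lambda x: 0 if x < 2.5 else 1
stump2 = lambda x: 0 if x < 3.5 else 1
stump3 = lambda x: 0 if x < 2.5 else 1

def bagging_predict(x):
    votes = [stump1(x), stump2(x), stump3(x)]
    return max(set(votes), key=votes.count)   # majority vote over the stumps

print(bagging_predict(3))   # votes are [1, 0, 1] -> final prediction 1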


3. AdaBoost (Simplified)
Training Data:

X: [1, 2, 3, 4]

Y: [+1, +1, -1, -1]

Step 1: Initialize weights equally

D1 = [0.25, 0.25, 0.25, 0.25]

Step 2: Train weak classifier (e.g., x < 2.5 => +1, else -1)

This classifier correctly classifies all samples, so the true weighted error would be 0; we assume a small floor ε = 0.01 so that α stays finite.

Step 3: Compute classifier weight:

α = 0.5 * ln((1 - ε) / ε) ≈ 2.30

Step 4: Update weights:

D2(i) = D1(i) * exp(-α * y_i * h(x_i)) / Z

Z is a normalization constant that keeps the weights summing to 1. Here every sample is correctly classified, so each weight is multiplied by the same factor exp(-α) ≈ 0.10, and after normalization D2 remains [0.25, 0.25, 0.25, 0.25].

Repeat for T rounds. Final prediction:

H(x) = sign(sum(α_t * h_t(x)))
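
For this toy dataset the whole procedure fits in a short Python sketch. It follows Steps 1-4 for a single round (T = 1) and uses the same ε = 0.01 floor for a perfect classifier; the variable names and the NumPy usage are my own assumptions, not taken from the document.

import numpy as np

X = np.array([1, 2, 3, 4])
Y = np.array([+1, +1, -1, -1])

# Step 1: initialize weights equally
D = np.full(len(X), 0.25)

# Step 2: weak classifier h(x) = +1 if x < 2.5, else -1
h = lambda x: np.where(x < 2.5, 1, -1)
pred = h(X)

# Weighted error, floored at 0.01 so alpha stays finite when every sample is correct
eps = max(float(np.sum(D[pred != Y])), 0.01)

# Step 3: classifier weight
alpha = 0.5 * np.log((1 - eps) / eps)        # ~2.30 for eps = 0.01

# Step 4: update weights and renormalize (Z is the sum before dividing)
D = D * np.exp(-alpha * Y * pred)
D = D / D.sum()

# Final prediction after T rounds: H(x) = sign(sum_t alpha_t * h_t(x)); here T = 1
H = lambda x: np.sign(alpha * h(x))
print(alpha, D, H(np.array([1.5, 3.5])))     # alpha ~2.30, D stays uniform, predictions [1, -1]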
