Simple Mathematical Examples in Ensemble Learning
1. Introduction
Ensemble learning combines multiple models (usually weak learners) to build a stronger
model. The most common ensemble methods are Bagging, Boosting, and Stacking. Below
are simplified mathematical examples for Bagging and Boosting.
2. Bagging (Bootstrap Aggregating)
We train 3 decision stumps, one on each bootstrap sample (a sample of the training data drawn with replacement).
Training Data:
X: [1, 2, 3, 4, 5]
Y: [0, 0, 1, 1, 1]
Bootstrap Samples:
Sample 1: [(1,0), (2,0), (3,1), (1,0), (4,1)]
Sample 2: [(2,0), (3,1), (5,1), (4,1), (2,0)]
Sample 3: [(5,1), (1,0), (2,0), (3,1), (5,1)]
Decision Stumps (suppose the three samples yield these fits):
Stump 1: x < 2.5 => 0, else 1
Stump 2: x < 3.5 => 0, else 1
Stump 3: x < 2.5 => 0, else 1
For a new point x = 3:
Stump 1: predicts 1
Stump 2: predicts 0
Stump 3: predicts 1
Majority vote = 1 (final prediction)
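A minimal Python sketch of this procedure (the stump thresholds are hard-coded from the example above; random.choices stands in for the bootstrap draw, seeded only to make the run repeatable):

import random

# Training data from the example
X = [1, 2, 3, 4, 5]
Y = [0, 0, 1, 1, 1]

# A bootstrap sample: n points drawn with replacement (cf. Samples 1-3)
random.seed(0)
sample = random.choices(list(zip(X, Y)), k=len(X))
print(sample)

# Decision stumps from the example: predict 0 below the threshold, else 1
def stump(threshold):
    return lambda x: 0 if x < threshold else 1

stumps = [stump(2.5), stump(3.5), stump(2.5)]

# Majority vote for the new point x = 3
votes = [s(3) for s in stumps]
print(votes, "->", max(set(votes), key=votes.count))  # [1, 0, 1] -> 1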
3. AdaBoost (Simplified)
Training Data:
X: [1, 2, 3, 4]
Y: [+1, +1, -1, -1]
Step 1: Initialize weights equally
D1 = [0.25, 0.25, 0.25, 0.25]
Step 2: Train a weak classifier (e.g., x < 2.5 => +1, else -1)
It classifies all four samples correctly, so the true weighted error is 0; we clamp it to ε = 0.01 so that the classifier weight in Step 3 stays finite.
Step 3: Compute classifier weight:
α = 0.5 * ln((1 - ε) / ε) ≈ 2.30
Step 4: Update weights:
D2(i) = D1(i) * exp(-α * y_i * h(x_i)) / Z
where Z is a normalization constant that keeps the weights summing to 1. A correctly classified point has y_i * h(x_i) = +1, so its weight shrinks; a misclassified point has y_i * h(x_i) = -1, so its weight grows.
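Plugging in this round's numbers: all four points are correct, so every weight is multiplied by the same factor exp(-α):
D2(i) = 0.25 * exp(-2.30) / Z ≈ 0.025 / Z, with Z = 4 * 0.025 = 0.100
Hence D2 = [0.25, 0.25, 0.25, 0.25]: the distribution stays uniform, and weights only shift in rounds where some points are misclassified.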
Repeat for T rounds. Final prediction:
H(x) = sign(sum(α_t * h_t(x)))
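Putting the steps together, here is a minimal Python sketch under the same setup. For simplicity the weak learner is fixed to the single stump from Step 2 (a full implementation would search for the best stump against the current weights each round), and the error is clamped at 0.01 as above:

import math

X = [1, 2, 3, 4]
Y = [+1, +1, -1, -1]

def h(x):  # weak classifier from Step 2
    return +1 if x < 2.5 else -1

n = len(X)
D = [1.0 / n] * n  # Step 1: uniform weights
alphas = []

for t in range(3):  # T = 3 rounds
    # Step 2: weighted error, clamped so that alpha stays finite
    eps = max(sum(D[i] for i in range(n) if h(X[i]) != Y[i]), 0.01)
    # Step 3: classifier weight
    alpha = 0.5 * math.log((1 - eps) / eps)
    alphas.append(alpha)
    # Step 4: re-weight every point and normalize
    D = [D[i] * math.exp(-alpha * Y[i] * h(X[i])) for i in range(n)]
    Z = sum(D)
    D = [d / Z for d in D]

# Final prediction: sign of the weighted vote over all rounds
def H(x):
    return 1 if sum(a * h(x) for a in alphas) >= 0 else -1

print([H(x) for x in X])  # [1, 1, -1, -1]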