Lecture-10-boosting
Son P. Nguyen
2.3 Compute:
$$\alpha_m = \ln \frac{1 - e_m}{e_m}.$$
2.4 Update weights: up-weight the examples that $g_m$ misclassified,
$$w_i \leftarrow w_i \exp\big(\alpha_m \, \mathbb{1}[y_i \neq g_m(x_i)]\big), \qquad i = 1, \dots, n,$$
and renormalize so the weights sum to 1.
Final hypothesis:
$$G(x) = \operatorname{sign}\!\left( \sum_{m=1}^{M} \alpha_m g_m(x) \right).$$
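To make steps 2.3 and 2.4 concrete, here is a minimal sketch of the full loop in Python. It assumes binary labels in $\{-1, +1\}$ and a hypothetical `weak_learner(X, y, w)` interface that fits a classifier to weighted data and returns a callable; neither name is from the lecture.

```python
import numpy as np

def adaboost(X, y, weak_learner, M):
    """Sketch of AdaBoost as outlined above.

    Assumes y in {-1, +1} and weak_learner(X, y, w) -> g, where
    g(X) returns {-1, +1} predictions (hypothetical interface).
    """
    n = len(y)
    w = np.full(n, 1.0 / n)              # start with uniform weights
    alphas, learners = [], []
    for m in range(M):
        g = weak_learner(X, y, w)        # fit weak classifier on weighted data
        miss = (g(X) != y)               # indicator of mistakes
        e_m = np.dot(w, miss)            # weighted error, assumed in (0, 1/2)
        alpha_m = np.log((1 - e_m) / e_m)        # step 2.3
        w = w * np.exp(alpha_m * miss)           # step 2.4: up-weight mistakes
        w /= w.sum()                             # renormalize
        alphas.append(alpha_m)
        learners.append(g)

    def G(X_new):
        # final hypothesis: sign of the weighted vote over all rounds
        votes = sum(a * g(X_new) for a, g in zip(alphas, learners))
        return np.sign(votes)

    return G
```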
Comments on the Adaboost Algorithm
For the next round, increase the importance of the examples that $h_1$ got wrong and down-weight the examples that $h_1$ got right.
An example
$$\epsilon_t = \frac{1}{2} - \frac{1}{2} \sum_{i=1}^{m} D_t(i)\, y_i\, h(x_i)$$
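This identity just rewrites the weighted error: with $y_i, h(x_i) \in \{-1, +1\}$, the product $y_i h(x_i)$ is $+1$ on correct examples and $-1$ on mistakes, so the weighted correlation equals $1 - 2\epsilon_t$. The quick numeric sketch below checks this; the distribution $D_t$, labels, and predictions are made-up illustrative values, not from the lecture.

```python
import numpy as np

# Toy check: weighted error = 1/2 - 1/2 * weighted correlation.
D   = np.array([0.2, 0.3, 0.1, 0.4])   # distribution over examples, sums to 1
y   = np.array([+1, -1, +1, -1])       # true labels
h_x = np.array([+1, +1, +1, -1])       # weak classifier's predictions

eps_direct = np.dot(D, h_x != y)              # sum of D_t(i) over mistakes
eps_corr   = 0.5 - 0.5 * np.dot(D, y * h_x)   # the identity above
print(eps_direct, eps_corr)                   # both equal 0.3
```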
Think of the $\alpha$ values as the votes cast by the weak classifiers; the boosting algorithm has to specify them somehow.
An outline of boosting
Write the weighted error of round $m$ in terms of its edge $\gamma_m$ over random guessing:
$$e_m = \frac{1}{2} - \gamma_m$$
▶ Empirical Risk Bound:
▶ The empirical risk, $R_n(\hat{g})$, decreases with more iterations:
$$R_n(\hat{g}) \leq \exp\!\left( -2 \sum_{m=1}^{M} \gamma_m^2 \right)$$
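To see how fast this bound shrinks, the sketch below evaluates $\exp(-2\sum_m \gamma_m^2)$ for a constant edge $\gamma_m = 0.1$ (i.e., $e_m = 0.4$ every round); the edge value is an assumed illustration, not a number from the lecture.

```python
import numpy as np

# Bound exp(-2 * sum of gamma_m^2) with a constant assumed edge.
gamma = 0.1                               # illustrative edge over random guessing
for M in (10, 50, 100, 250):
    bound = np.exp(-2 * gamma**2 * M)     # M equal gamma^2 terms in the sum
    print(f"M={M:4d}  bound={bound:.4f}")
```

Even this modest edge drives the bound down exponentially in the number of rounds, which is the point of the outline: weak learners that are only slightly better than chance suffice.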