Lecture 6
Based entirely on Avrim Blum’s notes (see the link on the web page)
Goals of ML theory:
I develop and analyze models of learning that capture the key aspects of
machine learning
I help understand what type of guarantees we can hope to achieve, and
what type of learning problems we can hope to solve
Two main learning models. In the mistake-bound (online) model, learning
proceeds in stages:
I The learner gets an unlabeled example x ∈ X
I The learner predicts its classification
I The learner is told the correct label f (x)
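The stages above can be sketched as a simple loop; `learner`, `target`, and their methods here are hypothetical stand-ins for illustration, not notation from the notes:

```python
# A minimal sketch of the online (mistake-bound) protocol.
# `learner` and `target` are hypothetical placeholders.

def run_online(learner, target, examples):
    """Feed unlabeled examples one at a time; count the learner's mistakes."""
    mistakes = 0
    for x in examples:
        guess = learner.predict(x)   # learner predicts the classification of x
        truth = target(x)            # correct label f(x) is revealed
        if guess != truth:
            mistakes += 1
        learner.update(x, truth)     # learner may revise its state
    return mistakes
```

A mistake bound M then says: for any example sequence and any target f in C, `run_online` returns at most M.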
Definition
Algorithm A learns class of functions C with mistake bound M if A makes at
most M mistakes on any sequence of examples consistent with some f ∈ C.
Example: a class of 6 functions over 6 examples, where row i gives the labels
of function fi, so fi(xj) = 1 exactly when i = j:
1 0 0 0 0 0
0 1 0 0 0 0
0 0 1 0 0 0
0 0 0 1 0 0
0 0 0 0 1 0
0 0 0 0 0 1
The value of this game (to the opponent) is the number of rounds played, i.e.
the number of mistakes the opponent can force.
I Well-defined number opt(C ) that is the optimal mistake bound for
concept class C (minimum over all algorithms).
I Well-defined optimal strategy for each player: Given an example x, let
Cb(x) = {f ∈ C : f (x) = b}. We “just” calculate opt(C0 (x)) and opt(C1 (x))
(by applying this idea recursively), and throw out whichever set has the
larger mistake bound.
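The recursive idea can be written out directly. This is a toy sketch: representing C as a set of label tuples (entry j of f is f's label on example xj) is an assumption made for illustration:

```python
def opt(C):
    """Optimal mistake bound of class C, where C is a frozenset of label
    tuples (f[j] = label of function f on example x_j).

    Recursion: opt(C) = max over examples x that split C of
               1 + min(opt(C0(x)), opt(C1(x))),
    and 0 when only one consistent function remains.
    """
    if len(C) <= 1:
        return 0            # one function left: no mistake can be forced
    n = len(next(iter(C)))  # number of examples
    best = 0
    for x in range(n):
        C0 = frozenset(f for f in C if f[x] == 0)
        C1 = frozenset(f for f in C if f[x] == 1)
        if C0 and C1:       # x genuinely splits the class
            best = max(best, 1 + min(opt(C0), opt(C1)))
    return best
```

On the 6×6 identity-matrix class, opt(C) = 1: any example the opponent shows is labeled 1 by only one function, so a single mistake pins down the target.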
Is Halving Algorithm optimal?