Intro ML Linear Classifier
lab
4/2021
Python Data Structures 2 Name
Machine learning focuses on making decisions
and predictions based on data.
A hypothesis h ∈ H maps an input x ∈ R^d to a prediction:
y = h(x; Θ), where Θ is the parameter
H is the hypothesis class (a set of hypotheses h)
x → h → y
Evaluation criteria: 2 levels
- How an individual prediction is scored
- How the overall system is scored
a. 0-1 loss: L(g, a) = 0 if g = a, 1 otherwise
b. Squared loss: L(g, a) = (g − a)²
c. Linear loss: L(g, a) = |g − a|
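The three per-prediction losses above can be sketched directly in Python; the function names here are my own, not from the notes.

```python
def zero_one_loss(g, a):
    # 0-1 loss: 0 if the guess g matches the actual label a, 1 otherwise
    return 0 if g == a else 1

def squared_loss(g, a):
    # Squared loss: penalizes large errors quadratically
    return (g - a) ** 2

def linear_loss(g, a):
    # Linear (absolute) loss: penalizes errors proportionally
    return abs(g - a)
```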
∙Test error (average loss over n' held-out points):
E_test = 1/n' * ∑ from i=n+1 to n+n' of L(h(x^(i); Θ), y^(i))
data → algorithm → h
The learning algorithm is specified by the choice of H.
Ways to come up with an algorithm:
- Be a clever person
- Use optimization
- Be dumb
H is the set of linear classifiers:
h(x; Θ, Θ₀) = sign(Θᵀx + Θ₀) = +1 if Θᵀx + Θ₀ > 0, −1 otherwise
x ∈ R^d (d×1)
Θ ∈ R^d (d×1)
Θ₀ ∈ R (1×1)
Decision boundary: Θ₀ + Θ₁x₁ + Θ₂x₂ + … + Θ_d x_d = 0
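A minimal sketch of the linear-classifier hypothesis, assuming sign(Θᵀx + Θ₀) returns −1 on the boundary itself (where Θᵀx + Θ₀ = 0); the function name is mine.

```python
import numpy as np

def linear_classifier(x, theta, theta_0):
    # h(x; Θ, Θ₀): +1 if Θᵀx + Θ₀ > 0, −1 otherwise
    return 1 if np.dot(theta, x) + theta_0 > 0 else -1
```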
Be dumb: random linear classifier
Pseudocode:
def random_linear_classifier(D, k, d):
    for j = 1 to k:
        randomly sample (Θ^(j), Θ₀^(j)) from (R^d, R)
    j* = argmin_{j ∈ [1, …, k]} E_train(Θ^(j), Θ₀^(j))
    return (Θ^(j*), Θ₀^(j*))
k: hyper-parameter (the number of random hypotheses to try)
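The pseudocode above can be sketched in NumPy as follows. The notes only say "randomly sample", so drawing from a standard normal is my assumption, and the training error here uses the 0-1 loss.

```python
import numpy as np

def random_linear_classifier(X, y, k, rng=None):
    # X: (n, d) data matrix, y: (n,) labels in {+1, -1}, k: number of draws
    rng = np.random.default_rng(rng)
    n, d = X.shape
    best, best_err = None, np.inf
    for _ in range(k):
        # Sample a random hypothesis (Θ, Θ₀); normal sampling is my choice
        theta = rng.standard_normal(d)
        theta_0 = rng.standard_normal()
        preds = np.where(X @ theta + theta_0 > 0, 1, -1)
        err = np.mean(preds != y)  # training error under 0-1 loss
        if err < best_err:
            best_err, best = err, (theta, theta_0)
    return best  # the (Θ^(j*), Θ₀^(j*)) with the smallest training error
```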
Be a clever human: perceptron
(Rosenblatt, 1962)
Pseudocode:
def perceptron(D, k):
    Θ = [0 0 … 0]ᵀ
    Θ₀ = 0
    for t = 1 to k:
        for i = 1 to n:
            if y^(i)(Θᵀx^(i) + Θ₀) ≤ 0:
                Θ = Θ + y^(i) x^(i)
                Θ₀ = Θ₀ + y^(i)
    return (Θ, Θ₀)
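The perceptron pseudocode above translates directly to NumPy; this is a sketch where `X` is the (n, d) data matrix and `y` the ±1 labels, standing in for the dataset D.

```python
import numpy as np

def perceptron(X, y, k):
    # X: (n, d) data matrix, y: (n,) labels in {+1, -1}, k: number of passes
    n, d = X.shape
    theta = np.zeros(d)
    theta_0 = 0.0
    for _ in range(k):
        for i in range(n):
            # A mistake (or a point exactly on the boundary) triggers an update
            if y[i] * (np.dot(theta, X[i]) + theta_0) <= 0:
                theta = theta + y[i] * X[i]
                theta_0 = theta_0 + y[i]
    return theta, theta_0
```

On linearly separable data the updates eventually stop, after which every training point satisfies y^(i)(Θᵀx^(i) + Θ₀) > 0.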