Neural Net HO
∑_i f_i·w_i > threshold → Output = 1, else Output = 0
[figure: three single-unit perceptrons, each summing its weighted inputs and outputting 1 if the sum > 0]
or:  inputs {1,0} with weights 1, 1; constant-1 bias input with weight -.5
and: inputs {1,0} with weights 1, 1; constant-1 bias input with weight -1.5
not: input {1,0} with weight -1; constant-1 bias input with weight .5
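The three gates above can be checked with a few lines of Python (a sketch; the weight values come straight from the diagrams, with the constant-1 input acting as a bias):

```python
def perceptron(inputs, weights, threshold=0):
    # Output 1 if the weighted sum of inputs exceeds the threshold, else 0.
    total = sum(f * w for f, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# or:  two inputs weighted 1, 1, plus a constant-1 bias input weighted -.5
OR = lambda a, b: perceptron([a, b, 1], [1, 1, -0.5])
# and: same inputs, bias weight -1.5
AND = lambda a, b: perceptron([a, b, 1], [1, 1, -1.5])
# not: one input weighted -1, bias weight .5
NOT = lambda a: perceptron([a, 1], [-1, 0.5])
```

Each gate outputs 1 exactly when its truth table says it should, e.g. OR(0, 0) = 0 and OR(1, 0) = 1.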
2 Training
∆w_i = C · Error · f_i
Error = CorrectAnswer − Output
2.2 or, C=.5, threshold=0
ex.  f1  f2  f3  CA   w1   w2   w3  ∑_i f_i·w_i  Output  Error  ∆w1  ∆w2  ∆w3
 1    0   0   1   0    0    0    0       0          0       0     0    0    0
 2    1   0   1   1    0    0    0       0          0       1    .5    0   .5
 3    0   1   1   1   .5    0   .5      .5          1       0     0    0    0
 4    1   1   1   1   .5    0   .5       1          1       0     0    0    0
 1    0   0   1   0   .5    0   .5      .5          1      -1     0    0  -.5
 2    1   0   1   1   .5    0    0      .5          1       0     0    0    0
 3    0   1   1   1   .5    0    0       0          0       1     0   .5   .5
 4    1   1   1   1   .5   .5   .5     1.5          1       0     0    0    0
 1    0   0   1   0   .5   .5   .5      .5          1      -1     0    0  -.5
 2    1   0   1   1   .5   .5    0      .5          1       0     0    0    0
 3    0   1   1   1   .5   .5    0      .5          1       0     0    0    0
 4    1   1   1   1   .5   .5    0       1          1       0     0    0    0
 1    0   0   1   0   .5   .5    0       0          0       0     0    0    0
 2    1   0   1   1   .5   .5    0      .5          1       0     0    0    0
 3    0   1   1   1   .5   .5    0      .5          1       0     0    0    0
 4    1   1   1   1   .5   .5    0       1          1       0     0    0    0
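The whole table can be reproduced with a short training loop (a sketch, applying the ∆w rule above with C=.5 and threshold=0; f3 is the constant bias input):

```python
C, threshold = 0.5, 0
examples = [([0, 0, 1], 0),   # (f1, f2, f3), CA -- the or truth table
            ([1, 0, 1], 1),
            ([0, 1, 1], 1),
            ([1, 1, 1], 1)]
w = [0.0, 0.0, 0.0]
for epoch in range(4):                    # the four passes shown in the table
    for f, correct in examples:
        total = sum(fi * wi for fi, wi in zip(f, w))
        output = 1 if total > threshold else 0
        error = correct - output          # Error = CorrectAnswer - Output
        w = [wi + C * error * fi for fi, wi in zip(f, w)]
print(w)   # settles at [0.5, 0.5, 0.0], matching the last pass of the table
```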
3 linear regression
...
and linear algebra makes linear regression incredibly easy!
assuming a least squares cost function, with t_n the true value given input vector x_n, and x_nᵀ·w our neural model's predicted value:

E = ½ ∑_n (t_n − x_nᵀ·w)²   (1)
the minimum (i.e. derivative = 0) is given where

w = (Xᵀ·X)⁻¹·Xᵀ·t   (2)
where X is the matrix whose rows are the input vectors, and t the vector of their true outcomes.
(gotta take that inverse, so Xᵀ·X needs to be nonsingular)
try it out in matlab (if wealthy or connected)
or Octave or R (for the rest of us, both quite remarkable tools for free)
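Or in Python with numpy, equally free (a sketch of equation (2); the toy data here are made up for illustration):

```python
import numpy as np

# X: one row per observation; the column of ones gives an intercept term
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
t = np.array([1.0, 3.0, 5.0, 7.0])   # true outcomes, here exactly 1 + 2x

# w = (X^T X)^-1 X^T t  -- X^T X must be nonsingular for the inverse to exist
w = np.linalg.inv(X.T @ X) @ X.T @ t
print(w)   # recovers [1., 2.]
```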
4 kernels (gpr, in particular)
f(x) = wᵀ·k   (3)

w = (K + λ·I_n)⁻¹·t   (4)
I_n is the n×n identity matrix, and λ is whatever fudge room is needed (can help with inversion)
K is an n×n matrix where K_ij = k(x_i, x_j)
k_i = k(x, x_i), for all (n) observed x_i
k(x_i, x_j) = θ₁·exp(−(θ₂/2)·‖x_i − x_j‖²) + θ₃·(x_iᵀ·x_j) + θ₄   (5)
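Equations (3)-(5) fit in a few lines of numpy (a sketch; the θ values, λ, and the toy data are made up):

```python
import numpy as np

def k(xi, xj, th1=1.0, th2=1.0, th3=0.0, th4=0.0):
    # the kernel of equation (5)
    return (th1 * np.exp(-(th2 / 2) * np.sum((xi - xj) ** 2))
            + th3 * np.dot(xi, xj) + th4)

X = np.array([[0.0], [1.0], [2.0]])   # the n observed inputs
t = np.array([0.0, 1.0, 0.0])         # their observed outputs
lam = 1e-6                            # fudge room for the inversion

K = np.array([[k(xi, xj) for xj in X] for xi in X])   # n x n
w = np.linalg.inv(K + lam * np.eye(len(X))) @ t       # equation (4)

def f(x):
    kvec = np.array([k(x, xi) for xi in X])           # the vector k
    return w @ kvec                                   # equation (3)
```

With λ this tiny the predictor nearly interpolates the observed points, e.g. f(np.array([1.0])) ≈ 1.0.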
5 classification
∂E/∂w_ij = o_i·δ_j ,   ∂(½(o_j − t_j)²)/∂o_j = o_j − t_j   (9)

s(x) = 1/(1 + exp(−C·x)) ,   ∂s(x)/∂x = C·s(x)·(1 − s(x)) ,   s(x) = o_j   (10)
γ is “a learning constant”; o_i is the output feeding weight w_ij; t_j is the desired output; C is whatever constant as well.
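Putting (9) and (10) together for a single output unit (a sketch; γ, the weights, and the inputs are made-up illustration values):

```python
import math

def s(x, C=1.0):
    # the squashing function of equation (10)
    return 1.0 / (1.0 + math.exp(-C * x))

def ds(x, C=1.0):
    # its derivative: C * s(x) * (1 - s(x))
    return C * s(x, C) * (1.0 - s(x, C))

gamma = 0.5                  # "a learning constant"
o_in = [1.0, 0.0, 1.0]       # the o_i feeding the weights w_ij
w = [0.1, 0.2, -0.1]         # the w_ij
t_j = 1.0                    # desired output

x = sum(oi * wi for oi, wi in zip(o_in, w))
o_j = s(x)
d_j = (o_j - t_j) * ds(x)    # delta for output unit j
# one step downhill along dE/dw_ij = o_i * d_j:
w = [wi - gamma * oi * d_j for oi, wi in zip(o_in, w)]
```

A single step shrinks the squared error ½(o_j − t_j)², which is all gradient descent promises.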