Lecture 4 - SGD, Back Propagation
Vazgen Mikayelyan
For binary classification the cross-entropy loss is
$$L(w) = \frac{1}{n}\sum_{i=1}^{n}\left(-y_i \log f_w(x_i) - (1 - y_i)\log\left(1 - f_w(x_i)\right)\right),$$
and for multiclass classification, with one-hot labels $y_i$,
$$L(w) = \frac{1}{n}\sum_{i=1}^{n} -y_i^{T} \log f_w(x_i).$$
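As a quick illustration, here is a minimal NumPy sketch of both losses; the function names and array shapes are our own, not from the lecture.

import numpy as np

def binary_ce(y, p):
    # y: (n,) labels in {0, 1}; p: (n,) predicted probabilities f_w(x_i)
    return np.mean(-y * np.log(p) - (1 - y) * np.log(1 - p))

def multiclass_ce(Y, P):
    # Y: (n, k) one-hot labels; P: (n, k) rows of predicted class probabilities
    # each row contributes -y_i^T log f_w(x_i)
    return np.mean(-np.sum(Y * np.log(P), axis=1))

In practice one clips p away from 0 and 1 before taking the logarithm to avoid log(0).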
Note that in each case we can represent the loss function by the following form:
$$L(w) = \frac{1}{n}\sum_{i=1}^{n} L_i(w).$$
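This decomposition is exactly what stochastic gradient descent exploits: since $\nabla L(w) = \frac{1}{n}\sum_{i=1}^{n} \nabla L_i(w)$, the gradient of a single uniformly sampled $L_i$ is an unbiased estimate of the full gradient. A minimal sketch of the resulting update loop, assuming a per-sample gradient oracle grad_Li(w, i) (our name, not the lecture's):

import numpy as np

def sgd(w, grad_Li, n, lr=0.1, epochs=10, seed=0):
    # w: initial parameter vector; grad_Li(w, i) returns the gradient of L_i at w
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in rng.permutation(n):    # one pass over the data in random order
            w = w - lr * grad_Li(w, i)  # step along a single-sample gradient estimate
    return w

Each step costs as much as one sample's gradient rather than a full pass over all n samples, which is the whole appeal of SGD on large datasets.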
2 Back-Propagation
A feed-forward network is a composition of layer maps
$$f = f_1 \circ (f_2 \circ \cdots (f_{n-1} \circ f_n)),$$
where $f_n$ is applied to the input first. Write $a_k$ for the output of $f_k$, so that $a_k = f_k(a_{k+1})$ and $a_1 = f(x)$, and let $w_k$ denote the parameters of $f_k$. The chain rule then gives
$$\frac{\partial L}{\partial w_1} = \frac{\partial L}{\partial a_1}\,\frac{\partial a_1}{\partial w_1},$$
and for a deeper layer $k$,
$$\frac{\partial L}{\partial w_k} = \frac{\partial L}{\partial a_1}\,\frac{\partial a_1}{\partial a_2}\cdots\frac{\partial a_{k-1}}{\partial a_k}\,\frac{\partial a_k}{\partial w_k}.$$
Back-propagation evaluates these products from the outside in, so the common prefix is computed once and reused for every layer instead of being recomputed from scratch.
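To make the chain rule concrete, here is a hand-coded sketch for a two-layer composition $f = f_1 \circ f_2$ with a linear inner layer and a logistic output under the binary cross-entropy loss, checked against a finite difference. All names (forward, backward, W2, w1) are our illustration, not the lecture's notation.

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def forward(x, W2, w1):
    a2 = W2 @ x             # inner layer f2: linear map
    a1 = sigmoid(w1 @ a2)   # outer layer f1: scalar probability
    return a1, a2

def backward(x, y, W2, w1):
    a1, a2 = forward(x, W2, w1)
    # for cross-entropy through a sigmoid, dL/dz = a1 - y where z = w1 . a2
    dz = a1 - y
    dw1 = dz * a2                 # dL/dw1 = dL/dz * dz/dw1
    dW2 = np.outer(dz * w1, x)    # dL/dW2 = dL/dz * dz/da2 * da2/dW2
    return dw1, dW2

# Quick check of one coordinate against a numerical gradient:
rng = np.random.default_rng(0)
x = rng.normal(size=3); y = 1.0
W2 = rng.normal(size=(4, 3)); w1 = rng.normal(size=4)
dw1, dW2 = backward(x, y, W2, w1)

def loss(W2_, w1_):
    a1, _ = forward(x, W2_, w1_)
    return -(y * np.log(a1) + (1 - y) * np.log(1 - a1))

eps = 1e-6
W2p = W2.copy(); W2p[0, 0] += eps
print(dW2[0, 0], (loss(W2p, w1) - loss(W2, w1)) / eps)  # analytic vs numeric, should agree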