Comp3314 7. Gradient Backpropagation
Optimization
Gradient descent
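A hedged sketch of the idea in numpy, using a toy quadratic loss so the snippet is self-contained (the loss, step size, and iteration count are placeholders standing in for the real classifier setting):

import numpy as np

# Vanilla gradient descent on a toy loss L(w) = ||w - w_star||^2.
w_star = np.array([1.0, -2.0])
weights = np.zeros(2)
step_size = 0.1                               # learning rate (assumed)

for _ in range(100):
    weights_grad = 2.0 * (weights - w_star)   # dL/dw
    weights += -step_size * weights_grad      # step in the downhill direction

print(weights)                                # converges toward w_star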
Computational graphs
[Figure: computational graph for the SVM loss. Inputs x and W feed a multiply node that produces the scores s; a hinge-loss node turns the scores into the data loss; a regularization node computes R(W); the two are added to give the total loss L.]
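A hedged numpy sketch of that graph (the weight values, the margin of 1, and the regularization strength are placeholders, not from the slides): a multiply node for the scores, a hinge-loss node, a regularization node, and a sum producing the total loss.

import numpy as np

W = np.random.randn(10, 3072) * 0.01      # weights (placeholder values)
x = np.random.randn(3072)                 # one input example
y = 3                                     # index of the correct class

s = W.dot(x)                              # scores node: s = W x
margins = np.maximum(0.0, s - s[y] + 1.0) # hinge loss per class
margins[y] = 0.0
data_loss = margins.sum()                 # data loss for this example

reg_loss = np.sum(W * W)                  # R(W), L2 regularization
L = data_loss + 1e-3 * reg_loss           # total loss node (lambda assumed)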
Backpropagation: a simple example
f(x, y, z) = (x + y) z, e.g. x = -2, y = 5, z = -4
Want: ∂f/∂x, ∂f/∂y, ∂f/∂z
Forward pass: q = x + y = 3, then f = q z = -12
Backward pass (chain rule), working from the output back to the inputs:
∂f/∂f = 1
∂f/∂z = q = 3
∂f/∂q = z = -4
∂f/∂x = ∂f/∂q · ∂q/∂x = (-4)(1) = -4
∂f/∂y = ∂f/∂q · ∂q/∂y = (-4)(1) = -4
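As a quick check, the same example in plain Python (variable names are mine, not the slides'): the forward pass computes the intermediate q and the output f, and the backward pass applies the chain rule in reverse order.

# Forward pass for f(x, y, z) = (x + y) * z with the example values
x, y, z = -2.0, 5.0, -4.0
q = x + y            # q = 3
f = q * z            # f = -12

# Backward pass: walk the graph in reverse, multiplying local gradients
df_dq = z            # local gradient of f = q*z w.r.t. q  -> -4
df_dz = q            # local gradient of f = q*z w.r.t. z  ->  3
df_dx = df_dq * 1.0  # chain rule: dq/dx = 1               -> -4
df_dy = df_dq * 1.0  # chain rule: dq/dy = 1               -> -4

print(df_dx, df_dy, df_dz)   # -4.0 -4.0 3.0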
[Figure: a single node f with inputs x, y and output z. In the forward pass the node computes z = f(x, y). In the backward pass it receives the upstream gradient ∂L/∂z ("gradients") and multiplies it by its "local gradients" ∂z/∂x and ∂z/∂y to produce ∂L/∂x and ∂L/∂y, which flow back to the nodes that produced x and y.]
Another example: f(w, x) = 1 / (1 + e^-(w0·x0 + w1·x1 + w2))
The expression is broken into elementary gates (multiply, add, negate, exp, add 1, reciprocal). The backward pass walks the graph from the output to the inputs, at each gate multiplying the upstream gradient by that gate's local gradient.
sigmoid function: σ(x) = 1 / (1 + e^-x), with dσ/dx = (1 - σ(x)) σ(x)
sigmoid gate: because the derivative can be written in terms of the output, the whole sigmoid sub-expression above can be collapsed into a single gate whose local gradient is simply (1 - σ) σ.
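A small sketch of the sigmoid gate in code (names are illustrative): the backward pass only needs the output cached during the forward pass, because dσ/dx = (1 - σ) σ.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Forward pass through a sigmoid gate
x = 1.0
s = sigmoid(x)                     # cached output

# Backward pass: local gradient (1 - s) * s, scaled by the upstream gradient
upstream_grad = 1.0
dx = upstream_grad * (1.0 - s) * s
print(s, dx)                       # ~0.731, ~0.197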
Patterns in backward flow
- add gate: gradient distributor — passes the upstream gradient unchanged to both inputs
- max gate: gradient router — routes the full upstream gradient to the larger input, zero to the other
- mul gate: gradient switcher — each input receives the upstream gradient scaled by the other input's value
Gradients add at branches: when a variable feeds into several parts of the graph, the gradients flowing back from those branches are summed (the multivariate chain rule).
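A small illustration of that accumulation (the function and values are my own, not the slide's): when x feeds two multiply gates, its gradient is the sum of the contributions from each branch, which is why implementations accumulate with +=.

# f(x, y, z) = x*y + x*z : x branches into two multiply gates
x, y, z = 3.0, -1.0, 2.0
a = x * y
b = x * z
f = a + b

# Backward pass: the add gate distributes the gradient to both branches,
# and the two branches' contributions to dx are summed
df_da, df_db = 1.0, 1.0
dx = 0.0
dx += df_da * y   # from the x*y branch
dx += df_db * z   # from the x*z branch
dy = df_da * x
dz = df_db * x
print(dx)         # 1.0  (= y + z)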
Gradients for vectorized code (x, y, z are now vectors)
The "local gradient" of a node is now a Jacobian matrix: the derivative of each element of z with respect to each element of x (entries like ∂z1/∂x1, ∂z2/∂x2, ...). If L is a scalar and z is a vector, ∂L/∂z is a (row) vector, and the backward pass is a vector–Jacobian product.
Vectorized operations
Example: an elementwise f(x) = max(0, x) applied to a 4096-dimensional input, producing a 4096-dimensional output.
Q: what is the size of the Jacobian matrix?  [4096 x 4096!]
Q2: what does it look like?  A diagonal matrix (because max(0, x) is elementwise: each output depends only on the matching input). In practice the Jacobian is never formed explicitly; the backward pass is an elementwise operation on the upstream gradient.
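A sketch of that point in numpy (my own illustration): the backward pass through an elementwise max(0, x) never builds the 4096 x 4096 Jacobian; it just zeroes the upstream gradient wherever the input was negative.

import numpy as np

x = np.random.randn(4096)          # input vector
out = np.maximum(0.0, x)           # elementwise ReLU, forward pass

# The full Jacobian would be 4096 x 4096 but is diagonal:
# d out_i / d x_j = 1 if i == j and x_i > 0, else 0.
# So the backward pass is just an elementwise mask, not a matrix product.
upstream = np.random.randn(4096)   # dL/dout flowing in from above
dx = upstream * (x > 0)            # dL/dx, same result as Jacobian @ upstream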
A vectorized example: f(x, W) = ||W·x||² = Σᵢ (W·x)ᵢ², where x is a vector and W is a matrix.
Forward pass: q = W·x, then f = Σᵢ qᵢ². In the worked example q = [0.22, 0.26], so f = 0.22² + 0.26² ≈ 0.116.
Backward pass:
∂f/∂qᵢ = 2qᵢ, i.e. ∇_q f = 2q
∇_W f = 2 q xᵀ  (q is a column vector, xᵀ a row vector, so the outer product has the same shape as W)
∇_x f = 2 Wᵀ q
Always check: the gradient with respect to a variable has the same shape as the variable.
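A hedged numpy sketch of the whole worked example with made-up W and x of the right shapes (the slide's exact numbers are not reproduced): it computes q = W·x and f = ||q||², forms the analytic gradients 2 q xᵀ and 2 Wᵀ q, and spot-checks one entry numerically.

import numpy as np

W = np.random.randn(2, 2)      # placeholder values, same shapes as the slide
x = np.random.randn(2)

q = W.dot(x)                   # q = W x
f = np.sum(q ** 2)             # f = ||q||^2 = sum_i q_i^2

# Analytic gradients from the chain rule
dq = 2.0 * q                   # df/dq_i = 2 q_i
dW = np.outer(dq, x)           # 2 q x^T  (same shape as W)
dx = W.T.dot(dq)               # 2 W^T q  (same shape as x)

# Numerical check on one entry of W (finite differences)
h = 1e-5
Wp = W.copy(); Wp[0, 1] += h
num = (np.sum(Wp.dot(x) ** 2) - f) / h
print(dW[0, 1], num)           # should agree to several decimal places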
Summary so far...
● neural nets will be very large: impractical to write down gradient formula
by hand for all parameters
● backpropagation = recursive application of the chain rule along a
computational graph to compute the gradients of all
inputs/parameters/intermediates
● implementations maintain a graph structure, where the nodes implement the forward() / backward() API (a minimal sketch follows this list)
● forward: compute result of an operation and save any intermediates
needed for gradient computation in memory
● backward: apply the chain rule to compute the gradient of the loss
function with respect to the inputs
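A minimal sketch of that node API for a single multiply gate; the class and method names are illustrative, not a particular library's.

class MultiplyGate:
    """One node in the graph: caches forward inputs, uses them in backward."""

    def forward(self, x, y):
        self.x, self.y = x, y      # save intermediates needed for the gradient
        return x * y

    def backward(self, dz):
        # chain rule: d(xy)/dx = y, d(xy)/dy = x, each scaled by upstream dz
        dx = dz * self.y
        dy = dz * self.x
        return dx, dy

gate = MultiplyGate()
z = gate.forward(3.0, -4.0)        # -12.0
dx, dy = gate.backward(1.0)        # (-4.0, 3.0)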
Next: Neural Networks
Neural networks: without the brain stuff
(Before) Linear score function: f = W x
(Now) 2-layer Neural Network: f = W2 max(0, W1 x)
input x (3072) → hidden layer h (100) → output scores s (10)
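A short numpy sketch of the 2-layer score function above (the weight values are random placeholders; only the shapes, 3072 → 100 → 10, come from the slide):

import numpy as np

x = np.random.randn(3072)          # flattened input image
W1 = np.random.randn(100, 3072)    # first layer weights
W2 = np.random.randn(10, 100)      # second layer weights

h = np.maximum(0.0, W1.dot(x))     # hidden layer, ReLU non-linearity
s = W2.dot(h)                      # 10 class scores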
Full implementation of training a 2-layer Neural Network needs ~20 lines:
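A hedged sketch of such a training loop, roughly in the spirit of the slide: a 2-layer network with a sigmoid non-linearity trained on random data with an L2 loss. The sizes, step size, and iteration count here are placeholders, not taken from the slide.

import numpy as np
from numpy.random import randn

N, D_in, H, D_out = 64, 1000, 100, 10
x, y = randn(N, D_in), randn(N, D_out)
w1, w2 = randn(D_in, H), randn(H, D_out)

for t in range(2000):
    # forward pass
    h = 1.0 / (1.0 + np.exp(-x.dot(w1)))   # hidden layer (sigmoid)
    y_pred = h.dot(w2)                     # predictions
    loss = np.square(y_pred - y).sum()     # L2 loss

    # backward pass: chain rule through the graph above
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h.T.dot(grad_y_pred)
    grad_h = grad_y_pred.dot(w2.T)
    grad_w1 = x.T.dot(grad_h * h * (1 - h))  # sigmoid local gradient is h(1-h)

    # gradient descent step
    w1 -= 1e-4 * grad_w1
    w2 -= 1e-4 * grad_w2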
Activation functions: tanh, ReLU (Rectified Linear Unit), Maxout, ELU
Neural networks: Architectures
Example feed-forward computation of a neural network
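A sketch of a feed-forward pass for a fully-connected 3-layer network with sigmoid activations (the variable names and layer sizes are illustrative, not the slide's exact code):

import numpy as np

def f(x):
    return 1.0 / (1.0 + np.exp(-x))      # sigmoid activation

# random weights and biases for a 3-layer network (sizes are placeholders)
W1, b1 = np.random.randn(4, 3), np.random.randn(4, 1)
W2, b2 = np.random.randn(4, 4), np.random.randn(4, 1)
W3, b3 = np.random.randn(1, 4), np.random.randn(1, 1)

x = np.random.randn(3, 1)                # input column vector (3x1)
h1 = f(W1.dot(x) + b1)                   # first hidden layer (4x1)
h2 = f(W2.dot(h1) + b2)                  # second hidden layer (4x1)
out = W3.dot(h2) + b3                    # output neuron (1x1)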
Summary
- We arrange neurons into fully-connected layers
- The abstraction of a layer has the nice property that it allows us to use efficient vectorized code (e.g. matrix multiplies)
- Neural networks are not really neural
- Next time: Convolutional Neural Networks