Feed-Forward Neural Networks (Part 1)
Outline (part 1)
‣ Feed-forward neural networks
‣ The power of hidden layers
‣ Learning feed-forward networks
- SGD and back-propagation
Motivation
‣ So far our classifiers rely on pre-compiled features
ŷ = sign(θ · φ(x))
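A minimal sketch of such a classifier on pre-compiled features; the particular feature map φ and weight vector θ below are made-up placeholders for illustration, not values from the slides:

```python
import numpy as np

def phi(x):
    # hypothetical hand-designed feature map (illustrative choice:
    # the raw coordinates plus their product)
    x1, x2 = x
    return np.array([x1, x2, x1 * x2])

def predict(theta, x):
    # linear classifier on pre-compiled features: ŷ = sign(θ · φ(x))
    return np.sign(np.dot(theta, phi(x)))

theta = np.array([1.0, 1.0, -2.0])   # fixed, pre-trained weights (placeholder)
y_hat = predict(theta, (0.5, 0.5))
```

The key limitation motivating what follows: φ is fixed in advance, so the classifier can only be as good as the features someone designed by hand.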
(Artificial) Neural Networks

[Figure: a unit taking inputs x1, …, xd and producing an output through a function f (e.g., a linear classifier)]
A unit in a neural network

[Figure: inputs x1, …, xd combined with weights and passed through an activation function f to produce an output]
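One way to sketch such a unit in code; the choice of tanh as the activation f and the particular weight values are assumptions for illustration:

```python
import numpy as np

def unit(x, w, w0, f=np.tanh):
    """A single neural-network unit: a weighted sum of the inputs
    x1..xd plus an offset w0, passed through an activation f."""
    return f(np.dot(w, x) + w0)

# example: two inputs with equal weights cancel out, so the
# pre-activation is 0 and tanh(0) = 0
z = unit(np.array([1.0, -1.0]), np.array([0.5, 0.5]), 0.0)
```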
Deep Neural Networks

[Figure: units stacked into layers, e.g., input x2 feeding unit f2 through weight W22 to produce z2]
One hidden layer model

[Figure: layer 0 (inputs x1, x2) → layer 1 (tanh units f1, f2 with weights W11, W12, W21, W22 and outputs z1, z2) → layer 2 (a linear output unit f producing z)]
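The figure's two-layer computation can be sketched as a forward pass; the hidden activation is tanh and the output unit is linear, as labeled in the figure, while the specific weight values below are arbitrary placeholders:

```python
import numpy as np

def forward(x, W, w0, v, v0):
    # layer 1: tanh hidden units, z_j = tanh(sum_i W_ij x_i + w0_j)
    z = np.tanh(W.T @ x + w0)
    # layer 2: a single linear output unit
    return v @ z + v0

x  = np.array([1.0, -1.0])
W  = np.array([[1.0, -1.0],    # W11, W12
               [1.0,  1.0]])   # W21, W22
out = forward(x, W, np.zeros(2), np.array([1.0, 1.0]), 0.0)
```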
One hidden layer model

‣ Neural signal transformation: the tanh hidden units map the input x = (x1, x2) to new coordinates z = (z1, z2)

[Figure: layer 0 (inputs x1, x2) feeding layer 1 (tanh units f1, f2 with weights W11, W12, W21, W22 and outputs z1, z2)]
Example Problem
Hidden layer representation

[Figure: a sequence of plots tracing points from the input space (x1, x2) to their images in the hidden-unit coordinates (z1, z2)]
Does orientation matter?

[Figure: plots comparing hidden-layer representations obtained with differently oriented hidden-unit decision boundaries]
Random hidden units

[Figure: hidden-unit coordinates (z1, z2) obtained from randomly chosen weights]
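The idea of random hidden units can be sketched as follows: draw the hidden-layer weights at random and keep them fixed, so that only a linear classifier on top of the random features would need to be trained. The dimensions and the seed are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 2, 10                    # input dimension, number of random hidden units
W  = rng.normal(size=(d, m))    # random, untrained hidden weights
w0 = rng.normal(size=m)

def random_features(x):
    # map x to m random tanh features; only a linear classifier
    # on top of these features would be learned
    return np.tanh(W.T @ x + w0)

z = random_features(np.array([1.0, -1.0]))
```

With enough such random units, the transformed data can become linearly separable even though the hidden layer was never trained.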