
6.05 Undercomplete vs. Overcomplete Hidden Layer

An autoencoder's hidden layer can be undercomplete or overcomplete. An undercomplete hidden layer is smaller than the input layer: it compresses the input into fewer dimensions and learns good features for the training distribution (but poor ones for other types of input). An overcomplete hidden layer is larger than the input layer: it performs no compression, each hidden unit could simply copy a different input component, and there is no guarantee the hidden units will extract meaningful structure.


Neural networks

Autoencoder - undercomplete vs. overcomplete hidden layer


Hugo Larochelle
Département d'informatique, Université de Sherbrooke
[email protected]
October 17, 2012

AUTOENCODER

Topics: autoencoder, encoder, decoder, tied weights

• Feed-forward neural network trained to reproduce its input at the output layer

[Figure: autoencoder diagram — input x mapped by encoder weights W to hidden layer h(x), then by decoder weights W^* to reconstruction \hat{x}]

• Encoder:
  h(x) = g(a(x)) = \mathrm{sigm}(b + Wx)

• Decoder:
  \hat{x} = o(\hat{a}(x)) = \mathrm{sigm}(c + W^* h(x))
  with tied weights W^* = W^\top

• Loss, squared error:
  l(f(x)) = \frac{1}{2} \sum_k (\hat{x}_k - x_k)^2

• Loss, cross-entropy (for binary inputs):
  l(f(x)) = -\sum_k \big( x_k \log(\hat{x}_k) + (1 - x_k) \log(1 - \hat{x}_k) \big)
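The equations above translate almost line for line into code. Below is a minimal NumPy sketch (not from the slides; the class name TiedAutoencoder and everything beyond the slide's W, b, c, sigm is illustrative) of an autoencoder with tied weights and the binary cross-entropy loss:

```python
import numpy as np

def sigm(z):
    """Logistic sigmoid, the 'sigm' in the slide's equations."""
    return 1.0 / (1.0 + np.exp(-z))

class TiedAutoencoder:
    """Autoencoder whose decoder reuses the encoder weights: W* = W^T."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, size=(hidden_dim, input_dim))
        self.b = np.zeros(hidden_dim)   # encoder bias b
        self.c = np.zeros(input_dim)    # decoder bias c

    def encode(self, x):
        # h(x) = g(a(x)) = sigm(b + W x)
        return sigm(self.b + self.W @ x)

    def decode(self, h):
        # x_hat = o(a_hat(x)) = sigm(c + W^T h)   (tied weights)
        return sigm(self.c + self.W.T @ h)

    def reconstruct(self, x):
        return self.decode(self.encode(x))

    def cross_entropy(self, x):
        # l(f(x)) = -sum_k ( x_k log x_hat_k + (1 - x_k) log(1 - x_hat_k) )
        x_hat = self.reconstruct(x)
        eps = 1e-9  # numerical safety term, not in the slide
        return -np.sum(x * np.log(x_hat + eps)
                       + (1.0 - x) * np.log(1.0 - x_hat + eps))
```

Tying the weights halves the number of weight parameters (only W is learned, plus the two bias vectors) and acts as a mild regularizer.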
UNDERCOMPLETE HIDDEN LAYER

Topics: undercomplete representation

• Hidden layer is undercomplete if smaller than the input layer
  ‣ hidden layer "compresses" the input
  ‣ will compress well only for the training distribution
• Hidden units will be
  ‣ good features for the training distribution
  ‣ but bad for other types of input

[Figure: autoencoder diagram with a hidden layer narrower than the input layer; same encoder/decoder equations and tied weights as above]
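Continuing the sketch above, an undercomplete autoencoder is simply one whose hidden layer is smaller than its input layer; the bottleneck is what forces compression. The dimensions below are arbitrary:

```python
import numpy as np

# Undercomplete: hidden layer (8) smaller than input layer (20),
# so h(x) is a compressed 8-dimensional code for a 20-dimensional input.
ae = TiedAutoencoder(input_dim=20, hidden_dim=8)

x = np.random.default_rng(1).random(20)  # toy input with values in [0, 1)
h = ae.encode(x)

print(h.shape)                  # (8,)  -- fewer dimensions than the input
print(ae.reconstruct(x).shape)  # (20,) -- reconstruction back in input space
```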
OVERCOMPLETE HIDDEN LAYER

Topics: overcomplete representation

• Hidden layer is overcomplete if greater than the input layer
  ‣ no compression in hidden layer
  ‣ each hidden unit could copy a different input component
• No guarantee that the hidden units will extract meaningful structure

[Figure: autoencoder diagram with a hidden layer wider than the input layer; same encoder/decoder equations and tied weights as above]
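For contrast, the same sketch with a hidden layer wider than the input gives an overcomplete autoencoder. Nothing in the architecture then prevents the trivial solution where each hidden unit just copies an input component:

```python
# Overcomplete: hidden layer (32) larger than input layer (20).
# No compression is forced; with this much capacity the hidden units
# could in principle each copy a single input component, reconstructing
# the input well without extracting any meaningful structure.
ae_over = TiedAutoencoder(input_dim=20, hidden_dim=32)

h = ae_over.encode(x)  # x from the previous snippet
print(h.shape)         # (32,) -- more dimensions than the input
```

This is why overcomplete autoencoders are, in practice, paired with additional constraints that rule out the copying solution.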
