Lecture 16: VAEs + GANs
• Today we’re finishing VAEs, then we’ll go through most of GANs, and maybe touch on
adversarial examples (time permitting).
• Projects due March 20, Monday of finals week. Each team should have just one person
submit the project report PDF and link their teammates in the Gradescope submission
portal.
• Please submit your code (.py, .ipynb) files to Bruin Learn (submission portal to be
set up). This is for us to verify your implementations. Do not submit your report PDF to
Bruin Learn.
• I will send out a couple of optional surveys; your feedback is much appreciated.
• Discussions: different schedule for week 10; see Bruin Learn.
We need the posterior p(z|x) to arrive at an expression for log p(x).
[Slide sketch: the latent space (z1, z2) with the prior p(z) = N(0, I), the true posterior p(z|x), the Gaussian approximation q(z|x), and the likelihood p(x|z) mapping z back to image space.]
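For reference, here is the standard decomposition this step builds toward, reconstructed in conventional VAE notation (the handwritten slide may use slightly different symbols):

    \log p(x) = \mathbb{E}_{q(z|x)}[\log p(x|z)] - \mathrm{KL}(q(z|x) \| p(z)) + \mathrm{KL}(q(z|x) \| p(z|x))

The first two terms are the ELBO; since the last KL term is nonnegative, log p(x) >= ELBO, and the gap is exactly how far q(z|x) is from the true posterior p(z|x).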
When we make an approximation, we need a way to measure how good it is.
[Slide sketch: in the VAE, the likelihood p(x|z) = N(μ(z), Σ(z)) is easy to calculate and the prior is p(z) = N(0, I), but the true posterior p(z|x) is intractable, so we approximate it with the encoder's q(z|x) = N(μ(x), Σ(x)) and measure how close q(z|x) is to p(z|x).]
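For concreteness, the measure of approximation quality used here is the KL divergence; a standard statement of its definition and key property (in my notation):

    \mathrm{KL}(q(z|x) \| p(z|x)) = \mathbb{E}_{q(z|x)}\left[\log \frac{q(z|x)}{p(z|x)}\right] \ge 0,

with equality if and only if q(z|x) = p(z|x).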
Calculating the ELBO and its gradients
To compute gradients of the ELBO with respect to the encoder parameters, we rewrite the sampling step z ~ N(μ(x), Σ(x)) using the reparameterization trick: just as b = d + c·s with s ~ N(0, 1) gives b ~ N(d, c²), we write z = μ(x) + Σ(x)^{1/2} ε with ε ~ N(0, I), so the randomness enters only through ε and gradients flow through μ(x) and Σ(x).
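A minimal PyTorch sketch of this sampling step (the names mu and log_var are my own; the lecture's encoder may parameterize the covariance differently, e.g., as a diagonal variance rather than its log):

    import torch

    def reparameterize(mu, log_var):
        # z = mu + sigma * eps with eps ~ N(0, I): the randomness lives in eps,
        # so gradients flow back through mu and log_var (the encoder outputs).
        std = torch.exp(0.5 * log_var)   # sigma = exp(log_var / 2)
        eps = torch.randn_like(std)      # eps ~ N(0, I)
        return mu + std * eps            # z ~ N(mu, diag(sigma^2))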
However, these images aren't great. We could tell that they're not real faces, and they're
quite blurred in several places.
In this lecture, we'll talk about a generative model that does even better (at least
subjectively) at generating images. It's based on a completely different idea, where no
probability density has to be estimated. It does this through game theory, pitting two
"adversary" neural networks against each other.
[Slide images: GAN-generated samples (Radford et al., 2015) and GAN-based super-resolution examples.]
GAN examples
[Slide sketch: latent vectors z_i and latent-space arithmetic (e.g., z1 - z2 + z3 = z4); side-by-side comparison of a VAE and a GAN: both sample z ~ N(0, I), the VAE decodes z through its learned likelihood, while the GAN passes z through a generator network to produce x.]
GANs have two networks: a generator network that generates samples that
look like your training set, and a discriminator network that looks at both real
and fake samples and decides if they are real or fake.
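As a hedged illustration (not an architecture from the lecture), here is a toy PyTorch sketch of the two networks, using small MLPs on flattened images; latent_dim and data_dim are assumed sizes:

    import torch.nn as nn

    latent_dim, data_dim = 100, 3072   # assumed: 100-d latent, 32x32x3 flattened images

    # Generator: maps z ~ N(0, I) to a fake sample with the shape of a training image.
    G = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, data_dim), nn.Tanh(),
    )

    # Discriminator: maps a (real or fake) sample to the probability that it is real.
    D = nn.Sequential(
        nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )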
[Slide sketches (cs231n.stanford.edu; Goodfellow et al., NeurIPS 2014): real samples x come from p_data(x), fake samples x = G(z) come from p_model(x) with z ~ N(0, I), and the discriminator should output D(x) = 1 for real samples and D(x) = 0 for fake ones. The two loss functions set up the game: the discriminator is trained to maximize E_{x ~ p_data}[log D(x)] + E_z[log(1 - D(G(z)))], while the generator is trained to fool it. For a fixed generator, the optimal discriminator is D(x) = p_data(x) / (p_data(x) + p_model(x)).]
Solution to the game
[Slide sketch: at the solution of the game, p_model = p_data, so the optimal discriminator becomes D(x) = 1/2 everywhere. In practice, when G is bad early in training, D(G(z)) ≈ 0, so the generator's log(1 - D(G(z))) term is nearly flat and provides little gradient; the usual fix is to train the generator to maximize log D(G(z)) instead.]
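A minimal sketch of one training step under these losses, reusing the toy G, D, and latent_dim defined in the earlier sketch (the optimizer settings are common defaults, not from the slide; the generator uses the non-saturating loss just described):

    import torch
    import torch.nn.functional as F

    d_opt = torch.optim.Adam(D.parameters(), lr=2e-4)
    g_opt = torch.optim.Adam(G.parameters(), lr=2e-4)

    def gan_step(x_real):
        n = x_real.shape[0]
        z = torch.randn(n, latent_dim)                     # z ~ N(0, I)
        ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

        # Discriminator: maximize log D(x_real) + log(1 - D(G(z))),
        # i.e., minimize binary cross-entropy with labels real = 1, fake = 0.
        d_opt.zero_grad()
        d_loss = (F.binary_cross_entropy(D(x_real), ones)
                  + F.binary_cross_entropy(D(G(z).detach()), zeros))
        d_loss.backward()
        d_opt.step()

        # Generator (non-saturating): maximize log D(G(z)) by labeling fakes as real.
        g_opt.zero_grad()
        g_loss = F.binary_cross_entropy(D(G(z)), ones)
        g_loss.backward()
        g_opt.step()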
In this last topic of lecture, we’ll discuss adversarial examples and neural
network susceptibility to them.
Szegedy et al., 2013, was one of the early papers to describe adversarial
examples in neural networks.
In the figures below, the experiment was to slowly turn an object into an airplane (as
judged by the network) and to see how the network would change the image.
Goodfellow, http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture16.pdf
As if that wasn’t enough, it gets worse. But before we get there, let’s build
some intuition for why these examples occur.
At this point, you may be thinking, is this really a problem? The thought goes
as follows:
- Sure, we can trick neural networks, but we need to know their parameters and apply an
extremely precise perturbation to add this noise vector (a sketch of how such a
perturbation is computed follows this list).
- In real-life applications, people can't add such precise noise to objects
(e.g., a stop sign).
- Further, if people don't know the parameters (W) of my neural network, how
can they design the attack?
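On the first point, the "extremely precise perturbation" can be computed directly from the network's gradient. As a hedged sketch (the fast gradient sign method of Goodfellow et al.; the model, labels, and eps here are placeholders, not values from the lecture):

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, eps=0.03):
        # One gradient step: nudge every pixel by +/- eps in the direction
        # that increases the classification loss for the true label y.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + eps * x_adv.grad.sign()).detach().clamp(0.0, 1.0)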
[Slide figure (cs231n, http://cs231n.stanford.edu/slides/2016/winter1516_lecture9.pdf): a 2-D scatter of 'x' and 'o' training points separated by a linear decision boundary, with the two half-spaces labeled "classify x" and "classify o".]
Goodfellow, 2016
As one paper evaluating proposed defenses puts it: "In this paper we evaluate ten proposed
defenses and demonstrate that none of them are able to withstand a white-box attack."