Generative Adversarial Networks (GANs) - 01: Main Notions About GANs

Generative Adversarial Networks (GANs) are deep generative models that can produce new content like images, text, or music. GANs use two neural networks, a generator and a discriminator, that compete against each other during training. The generator learns to generate fake data that resembles the true data distribution to fool the discriminator, while the discriminator learns to accurately classify data as real or fake. This adversarial training allows the generator to learn the true data distribution.



Generative Adversarial Networks (GANs) - 01


Main notions about GANs

Definition
Generative Adversarial Networks (GANs) are deep generative models that can produce new
pieces of content. This kind of neural network can generate, for example, images, texts, or
music. It was introduced in 2014 by Ian J. Goodfellow and his co-authors in the article “Generative
Adversarial Nets”.

First key point: the target probability distribution


The first key point lies in the idea that there exists a probability distribution describing the kind
of data we try to generate. We want to be able to sample new data from that distribution.

For example, the problem of generating a new human face is equivalent to the problem of generating a new data point following the "human faces" probability distribution.

To generate data from our target distribution, we rely on the idea of the inverse transform method. Once trained, the generative network in a GAN takes points sampled from a simple distribution (for example, a standard Gaussian) as input and transforms them into points following the target distribution.
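The inverse transform idea can be illustrated with a case where the transforming function is known in closed form (the Exponential target and all names below are my own illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Inverse transform method with a known target: Exponential(rate).
# Its CDF is F(x) = 1 - exp(-rate * x), so the inverse CDF is
# F^{-1}(u) = -ln(1 - u) / rate. Pushing uniform samples through F^{-1}
# yields exponential samples. A GAN generator plays the same role,
# except its transforming function is learned instead of known in closed form.
rate = 2.0
u = rng.uniform(0.0, 1.0, 100_000)   # inputs follow a simple distribution
x = -np.log(1.0 - u) / rate          # outputs follow the target distribution

print(x.mean())  # the sample mean should be close to 1 / rate = 0.5
```

The GAN setting is exactly this picture with the closed-form inverse CDF replaced by a neural network whose parameters are learned.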

[Figure: inputs following a simple probability distribution are fed into the generative network, which maps them to a generated distribution that should match the true target distribution.]

Second key point: the adversarial training


The second key point is the notion of adversarial training that defines how the generator learns
the function that transforms a simple distribution into the correct target distribution.

When training the generative network, the target and the generated distributions are not directly
compared. Instead, a discriminative network is trained to take true data and generated data
and to classify them.

During training, the two networks (generator and discriminator) have opposing goals. The
discriminator always wants to classify the data as accurately as it can. The generator always tries to
produce fake data that looks like the true data to fool the discriminator.
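This opposing-goals loop can be sketched on a toy 1-D problem with hand-derived gradients (the linear generator and discriminator, the target distribution N(2, 0.5), and the learning rate are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D GAN sketch: true data ~ N(2, 0.5);
# generator G(z) = a*z + b with z ~ N(0, 1);
# discriminator D(x) = sigmoid(w*x + c), a deliberately simple linear model.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr = 0.05

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(2000):
    # --- discriminator update: push D(real) -> 1 and D(fake) -> 0,
    #     i.e. decrease the classification error ---
    x_real = rng.normal(2.0, 0.5, 64)
    z = rng.normal(0.0, 1.0, 64)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # gradients of the binary cross-entropy loss w.r.t. w and c
    grad_w = np.mean((d_real - 1.0) * x_real) + np.mean(d_fake * x_fake)
    grad_c = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- generator update: push D(fake) -> 1,
    #     i.e. increase the discriminator's classification error ---
    z = rng.normal(0.0, 1.0, 64)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    # gradients of -log D(a*z + b) w.r.t. a and b, averaged over the batch
    grad_a = np.mean(-(1.0 - d_fake) * w * z)
    grad_b = np.mean(-(1.0 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

print(b)  # the generator's offset drifts toward the true mean (2.0)
```

Note that at no point is the generated distribution compared to the target distribution directly: the only training signal reaching the generator is the discriminator's classification error.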

More on this subject at:


https://towardsdatascience.com/understanding-gans-cd6e4651a29
Generative Adversarial Networks (GANs) - 02
Understanding the concept of adversarial training

[Figure: the adversarial training loop.
Forward propagation (generation and classification): the generator is used to produce fake data from random inputs, and the discriminator is used to separate true data from fake data. The inputs of the generative network follow a simple distribution, and the generated distribution and the true distribution are not compared directly.
Backward propagation (adversarial training): the generative network is trained to maximise the final classification error, while the discriminative network is trained to minimise it. The generator's weights are updated to increase the classification error; the discriminator's weights are updated to decrease it. The classification error is the reference metric for the training of both networks.
- First iterations: classification is easy for the discriminator; the generator has to do better.
- Halfway iterations: it becomes harder for the discriminator.
- Final iterations: the discriminator is fooled: fake data really look like true data.]

More on this subject at:


https://towardsdatascience.com/understanding-gans-cd6e4651a29
