Chapter 8: GANs

This chapter provides an overview of Generative Adversarial Networks (GANs), including their architecture, how they work, and their various applications such as generating realistic images and style transfer. It explains the roles of the Generator and Discriminator in the GAN framework and outlines the steps for training GANs. Additionally, different types of GANs, including Vanilla GANs, DCGANs, Conditional GANs, and Super Resolution GANs, are discussed.


Chapter 7: Generative Adversarial Networks

Unit: Deep Learning
Learning Outcomes

• At the end of this chapter, you will:

• Understand what Generative Adversarial Networks are.
• Understand how GANs work.
• Be able to apply Generative Adversarial Networks.
PLAN
• GANs Applications
• What are Generative Adversarial Networks
• How Do GANs Work?
• GANs Architecture:
• Generator
• Discriminator
• Algorithm of GANs
• Steps to train GANs
• Different types of GANs
GANs Applications
• GANs can be trained on the
images of humans to generate
realistic faces.

Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, published as a conference paper at ICLR 2018.
GANs Applications

• Generating faces of anime characters
GANs Applications
• Style transfer

GANs Applications

• Image inpainting

GANs Applications
• GANs can build realistic images from textual descriptions of objects such as birds, humans, and other animals: we input a sentence and generate multiple images fitting the description.
What are Generative Adversarial
Networks
• Generative Adversarial Networks (GANs) were introduced by Ian J. Goodfellow et al. in 2014.
What are Generative Adversarial
Networks
• GANs perform unsupervised learning tasks.

• GANs can be used to generate new examples that plausibly could have been drawn from the original dataset.

• GANs consist of two models: the Generator and the Discriminator.

• By competing with each other, the Generator and the Discriminator can discover and learn the patterns in input data, capturing and replicating the variations within a dataset.
How GANs work?

[Diagram: the Generator produces fake samples; the Discriminator receives real and fake samples and outputs a predicted label (real or fake).]
How GANs work?
• GANs consist of two neural networks.

• There is a Generator G(x) and a Discriminator D(x). Both of them play an adversarial game.

• The generator's aim is to fool the discriminator by producing data that are similar to those in the
training set.

• The discriminator will try not to be fooled by identifying fake data from real data.

• Both of them work simultaneously to learn and train complex data like audio, video, or image files.

• The Generator network takes a random noise sample and generates a fake sample of data. The Generator is
trained to increase the Discriminator network's probability of making mistakes.

GANs architecture

Generator
• In GANs, the generator is a neural network that generates fictitious data on which the discriminator is trained.
• The main aim of the Generator is to make the discriminator classify its output as real.
• Over time, it learns to produce plausible data.
• The generated samples/instances serve as the discriminator's negative training examples.
• It takes a fixed-length random vector carrying noise as input and generates a sample.

[Diagram: noisy input vector → Generator → fake image]
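The noise-to-sample mapping above can be sketched as a tiny two-layer network. This is an illustrative toy, not the slides' model: the layer sizes, random weights, and activation choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 16-dim noise vector mapped to a flattened 8x8 "image".
NOISE_DIM, HIDDEN, OUT_DIM = 16, 32, 64

# Randomly initialized weights stand in for a (not yet trained) generator.
W1 = rng.normal(0.0, 0.1, (NOISE_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, OUT_DIM))

def generator(z):
    """Map a batch of fixed-length noise vectors to fake samples in [-1, 1]."""
    h = np.maximum(0.0, z @ W1)   # ReLU hidden layer
    return np.tanh(h @ W2)        # tanh keeps outputs in a typical image range

z = rng.normal(size=(4, NOISE_DIM))   # batch of 4 random noise vectors
fake = generator(z)
print(fake.shape)                     # (4, 64): four fake flattened images
```

During training, only the weights change; the interface (noise in, sample out) stays the same.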
GANs architecture

Generator

• The backpropagation method is used:

i. to adjust each weight in the right direction by calculating the weight's impact on the output;

ii. to obtain gradients, which are then used to update the generator's weights.
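The "adjust each weight in the right direction" idea can be shown with a one-weight toy example. The squared-error loss here is a stand-in chosen for clarity, not the GAN loss; the point is only the chain-rule gradient and the update against it.

```python
# One-weight illustration of a backpropagation update (hypothetical toy loss).
# Suppose a "generator" output is g = w * z and the loss is L = (g - target)**2.
w, z, target, lr = 0.5, 2.0, 3.0, 0.1

for _ in range(50):
    g = w * z
    grad = 2.0 * (g - target) * z   # dL/dw via the chain rule
    w -= lr * grad                  # step against the gradient

print(round(w * z, 3))              # the output converges to the target 3.0
```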
GANs architecture

Discriminator
• The Discriminator is a neural network that distinguishes real data from the fake data created by the Generator. The discriminator's training data comes from two different sources:
o Real data instances, such as real pictures of currency notes, humans, animals, etc., are used by the discriminator as positive samples during training.
o Fake data instances created by the Generator are used as negative examples during the training process.

[Diagram: real and fake samples → Discriminator → predicted label]
GANs architecture

Discriminator
• While training the discriminator, it connects to two loss functions (the generator loss and the discriminator loss).
• During discriminator training, the discriminator ignores the generator loss and uses only the discriminator loss.
• The discriminator loss penalizes the discriminator for misclassifying a real data instance as fake or a fake data instance as real.
• The discriminator updates its weights through backpropagation of the discriminator loss through the discriminator network.
Loss functions

Discriminator loss
• While the discriminator is trained, it classifies both the real data and the fake data from the generator.
• It penalizes itself for misclassifying a real instance as fake, or a fake instance (created by the generator) as real, by maximizing the function below:

max_D  E_x~pdata [log D(x)] + E_z~pz [log(1 − D(G(z)))]

• log(D(x)) refers to the probability that the discriminator rightly classifies the real image,
• maximizing log(1 − D(G(z))) helps it to correctly label the fake image that comes from the generator.

Generator loss
• While the generator is trained, it samples random noise and produces an output from that noise. The output then goes through the discriminator and gets classified as either “Real” or “Fake” based on the ability of the discriminator to tell one from the other.
• The generator loss is then calculated from the discriminator's classification: the generator gets rewarded if it successfully fools the discriminator, and gets penalized otherwise.
• The following equation is minimized to train the generator:

min_G  E_z~pz [log(1 − D(G(z)))]
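As a sketch, the two losses above can be written as functions of the discriminator's probability outputs. The variable names and example values are illustrative assumptions, not part of the slides.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Negative of log D(x) + log(1 - D(G(z))): the discriminator
    maximizes the value function, i.e. minimizes this loss."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """log(1 - D(G(z))): the generator minimizes this, i.e. it wants
    the discriminator to assign high probability D(G(z)) to fakes."""
    return np.mean(np.log(1.0 - d_fake))

# d_real / d_fake: discriminator outputs, the probability a sample is real.
d_real = np.array([0.9, 0.8])   # confident and correct on real data
d_fake = np.array([0.1, 0.2])   # confident and correct on fake data
print(discriminator_loss(d_real, d_fake))   # low: this D is doing well
print(generator_loss(d_fake))               # negative and small: G is losing
```

If the generator improves, d_fake rises toward 1, the discriminator loss grows, and the generator loss falls, which is exactly the adversarial tension the min-max formulation encodes.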
Algorithm of GANs*

*Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680.
Steps for training GAN

1. Define the problem

2. Choose the architecture of GAN

3. Train discriminator on real data

4. Generate fake inputs for the generator

5. Train discriminator on fake data

6. Train generator with the output of the discriminator


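The six steps above can be sketched end-to-end on a deliberately tiny problem: real data drawn from N(4, 1), a linear generator G(z) = a·z + b, and a logistic discriminator D(x) = sigmoid(w·x + c), with hand-derived gradients. Every modeling choice here (the 1-D data, the linear/logistic models, the learning rate) is an assumption made so the whole loop fits in a few lines.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Steps 1-2: define the problem and the (toy) architectures.
a, b = 1.0, 0.0          # generator params: G(z) = a*z + b
w, c = 0.1, 0.0          # discriminator params: D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(2000):
    # Steps 3-5: train the discriminator on a real batch and a fake batch,
    # by gradient ascent on log D(x) + log(1 - D(G(z))).
    x_real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(size=batch)
    x_fake = a * z + b
    s_r, s_f = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - s_r) * x_real) - np.mean(s_f * x_fake))
    c += lr * (np.mean(1 - s_r) - np.mean(s_f))

    # Step 6: train the generator with the discriminator's output,
    # by gradient ascent on log D(G(z)) (the non-saturating variant).
    z = rng.normal(size=batch)
    x_fake = a * z + b
    s_f = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - s_f) * w * z)
    b += lr * np.mean((1 - s_f) * w)

print(round(b, 2))   # the generator's mean drifts toward the data mean of 4
```

Note the alternation: the discriminator and generator take turns, each updating only its own parameters while treating the other as fixed.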
Different types of GANs
• Vanilla GANs*, 2014
• Vanilla GANs have a min-max optimization formulation where the Discriminator is a binary classifier and uses sigmoid cross-entropy loss during optimization.
• The Generator and the Discriminator in Vanilla GANs are multi-layer perceptrons.
• The algorithm tries to optimize the mathematical equation using stochastic gradient descent.

*Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680.
Different types of GANs
• Deep Convolutional GAN** (DCGAN), 2016
• DCGANs use convolutional neural networks instead of fully connected networks in both the Discriminator and the Generator.
• They are more stable and generate better-quality images.
• The Generator is a set of convolution layers with fractional-strided (transpose) convolutions, so it up-samples its input at every convolutional layer.
• The Discriminator is a set of convolution layers with strided convolutions, so it down-samples the input image at every convolution layer.

** A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” poster presentation at International Conference on Learning Representations (ICLR 2016), San Juan, PR, 2–4 May 2016.
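The up-sampling and down-sampling claims follow from the standard convolution size arithmetic; the kernel/stride/padding values below are the common DCGAN-style settings, used here only as an example.

```python
def conv_out(n, k, s, p):
    """Spatial size after a strided convolution (discriminator side)."""
    return (n + 2 * p - k) // s + 1

def tconv_out(n, k, s, p):
    """Spatial size after a fractional-strided / transpose convolution
    (generator side)."""
    return (n - 1) * s - 2 * p + k

# With kernel 4, stride 2, padding 1, each layer halves or doubles the size:
print(conv_out(64, 4, 2, 1))    # 64 -> 32: the discriminator down-samples
print(tconv_out(4, 4, 2, 1))    # 4  -> 8:  the generator up-samples
```

Stacking several such layers is how a DCGAN generator grows a small spatial map from the noise vector into a full-resolution image, and how the discriminator shrinks an image down to a single real/fake score.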
Different types of GANs
• Conditional GANs***, 2014:
• Generative adversarial nets can be extended to a conditional model if both the generator and discriminator are conditioned on some extra information y.
• y could be any kind of auxiliary information, such as class labels or data from other modalities.
• We can perform the conditioning by feeding y into both the discriminator and the generator as an additional input layer.
• In the generator, the prior input noise z and y are combined in a joint hidden representation, and the adversarial training framework allows for considerable flexibility in how this hidden representation is composed.
• In the discriminator, x and y are presented as inputs to a discriminative function.

*** Mirza M, Osindero S. Conditional generative adversarial nets. 2014. arXiv:1411.1784.
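A minimal sketch of the conditioning step: encode the label y as a one-hot vector and concatenate it with the noise z to form the generator's joint input (the discriminator would concatenate y with its data input the same way). The dimensions and class count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(labels, num_classes):
    """Encode integer class labels as one-hot row vectors."""
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

z = rng.normal(size=(4, 16))                # batch of 16-dim noise vectors
y = one_hot(np.array([0, 2, 1, 2]), 3)      # class labels for 3 classes
gen_input = np.concatenate([z, y], axis=1)  # joint representation fed to G
print(gen_input.shape)                      # (4, 19): noise dims + class dims
```

At sampling time, fixing y while varying z generates diverse samples of one chosen class, which is the practical payoff of the conditional model.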
Different types of GANs
• Super Resolution GANs (SRGANs), 2022****

• SRGANs use deep neural networks along with an adversarial network to produce higher-resolution images.
• SRGANs generate a photorealistic high-resolution image when given a low-resolution image.

**** Güemes, A., Vila, C. S., and Discetti, S. (2022). Super-resolution generative adversarial networks of randomly-seeded fields. Nat. Mach. Intell. 4, 1165–1173. doi: 10.1038/s42256-022-00572-7
References
• Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680.

• A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” poster presentation at International Conference on Learning Representations (ICLR 2016), San Juan, PR, 2–4 May 2016.

• Mirza M, Osindero S. Conditional generative adversarial nets. 2014. arXiv:1411.1784.

• Güemes, A., Vila, C. S., and Discetti, S. (2022). Super-resolution generative adversarial networks of randomly-seeded fields. Nat. Mach. Intell. 4, 1165–1173. doi: 10.1038/s42256-022-00572-7
