Generative Adversarial Networks Review
This document provides an overview of generative adversarial networks (GANs). It defines GANs as powerful neural networks trained with unsupervised machine learning, built from two competing networks: a generator and a discriminator. The generator learns to produce new data instances, while the discriminator learns to distinguish real data from generated data. Several GAN variants are presented, including deep convolutional GANs, conditional GANs, InfoGANs, Wasserstein GANs, and attention GANs. Applications discussed include image synthesis, text-to-image generation, and drug discovery. Real-world uses of GANs, such as image retrieval for historical archives and translating text into images, are also summarized.


GUIDED BY
Dr. V. Dhilip Kumar

PRESENTED BY
Sumanth Reddy M - VTU10421
P. Sai Kiran - VTU9925
K. Surendra Reddy - VTU8955
DEFINITION
 GANs are a powerful type of neural network trained with unsupervised machine learning.
 They are made of two competing neural networks, a generator and a discriminator, that compete against each other.
 They are able to capture and copy the variations within a data set.
FEATURES OF GANs
 Learnable cost function.
 Hard to train.
 Min-max game based on a Nash equilibrium.
 Few assumptions about the data.
 High fidelity.
WHY GANs?
 Insufficient labelled training data.
 Generated data can be used for image/audio synthesis.
 Why is it adversarial?
 Generator (counterfeiter): replicates real data to produce fake data.
 Discriminator (cop): distinguishes real data from fake data to catch the counterfeiter.
TRAINING OF GANs
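The training procedure can be summarized by the minimax objective from Goodfellow et al. (2014, listed in the literature survey): the discriminator D is trained to tell real samples from generated ones, while the generator G is trained to fool it.

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

In practice the two networks are updated in alternation: D takes a gradient ascent step on this value for a minibatch of real and generated samples, then G takes a descent step (commonly, G instead maximizes log D(G(z)) for stronger gradients early in training).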
LOSS FUNCTION
 Why does the discriminator use cross-entropy loss?
 The discriminator is a classification model.
 Cross-entropy performs better than misclassification rate and mean squared error.
 Misclassification rate fails to capture how correct or how wrong predictions are.
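A minimal sketch (plain Python, illustrative values only) of the last bullet: two classifiers can have the same misclassification rate, yet cross-entropy still tells the confident one apart from the hesitant one.

```python
import math

def bce(y_true, p_pred):
    # average binary cross-entropy; penalizes confident wrong predictions heavily
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(y_true, p_pred)) / len(y_true)

def misclassification_rate(y_true, p_pred, thresh=0.5):
    # fraction of thresholded predictions that disagree with the labels
    return sum((p >= thresh) != bool(y)
               for y, p in zip(y_true, p_pred)) / len(y_true)

labels    = [1, 1, 0, 0]
confident = [0.9, 0.8, 0.1, 0.2]    # correct and confident
hesitant  = [0.6, 0.55, 0.45, 0.4]  # correct but barely

# Both classifiers get every example right, so misclassification rate is 0 for each:
print(misclassification_rate(labels, confident))       # 0.0
print(misclassification_rate(labels, hesitant))        # 0.0
# ...but cross-entropy still distinguishes them:
print(bce(labels, confident) < bce(labels, hesitant))  # True
```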
TYPES OF GANs
 Deep Convolutional GAN
 Conditional GAN
 Info GAN
 Wasserstein GAN
 Attention GAN
DEEP CONVOLUTIONAL GAN
 CNNs used in unsupervised learning.
 Generators are deconvolutional neural networks.
 Discriminators are CNNs.
 It is mainly composed of convolutional layers, without max pooling or fully connected layers.
 It uses strided convolutions for downsampling and transposed convolutions for upsampling.
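The size arithmetic behind the last bullet can be checked with two one-line formulas. This is a sketch; the 4×4 kernel, stride 2, padding 1 configuration is a typical DCGAN choice, not taken from this document.

```python
def conv_out(n, k, s, p):
    # spatial output size of a strided convolution (discriminator downsampling)
    return (n + 2 * p - k) // s + 1

def deconv_out(n, k, s, p):
    # spatial output size of a transposed convolution (generator upsampling)
    return (n - 1) * s - 2 * p + k

# typical DCGAN settings: 4x4 kernels, stride 2, padding 1
print(conv_out(64, 4, 2, 1))    # 32: each strided conv halves the feature map
print(deconv_out(32, 4, 2, 1))  # 64: each transposed conv doubles it back
```

With these settings a 64×64 image passes through a stack of strided convolutions down to a small feature map, and the generator mirrors the stack with transposed convolutions to grow noise back to 64×64.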
CONDITIONAL GAN
 The type of data generated is dictated through a condition.
 Labels act as an extension of the latent space z, to generate and discriminate images better.
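The "labels extend the latent space" idea amounts to concatenating a one-hot label onto the noise vector before it enters the generator. A minimal sketch (the 100-dim noise and 10 classes are illustrative sizes, not from this document):

```python
def one_hot(label, num_classes):
    # encode an integer class label as a one-hot vector
    v = [0.0] * num_classes
    v[label] = 1.0
    return v

def condition_input(z, label, num_classes=10):
    # the label extends the latent vector z, so the generator sees [z ; y]
    return z + one_hot(label, num_classes)

z = [0.3] * 100              # latent noise vector
x = condition_input(z, 7)    # conditioned generator input
print(len(x))  # 110
```

The discriminator receives the same label alongside the image, so both networks learn label-dependent behavior.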
INFO GAN
 Additionally learns latent variables without labels in the data.
 InfoGAN is an extension of GANs that learns to represent unlabelled data as codes, i.e. representation learning.
 Representation learning is an important direction for unsupervised learning, and GANs are a flexible and powerful approach to it.
WASSERSTEIN GAN
 Replaces the old loss measure, Jensen-Shannon divergence, with the Wasserstein distance.
 The discriminator can now be trained to convergence, leading to higher-quality samples from the generator.
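For two equal-size one-dimensional samples, the Wasserstein-1 distance reduces to the mean absolute difference of the sorted values; unlike Jensen-Shannon divergence, it stays finite and informative even when the two distributions do not overlap at all. A small illustrative sketch:

```python
def wasserstein_1d(a, b):
    # W1 between two equal-size 1-D empirical samples:
    # mean absolute difference of the sorted values
    a, b = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

real = [0.0, 1.0, 2.0, 3.0]
fake = [1.0, 2.0, 3.0, 4.0]
print(wasserstein_1d(real, fake))  # 1.0

# disjoint supports still yield a smooth, meaningful distance
print(wasserstein_1d([0.0, 0.0], [5.0, 5.0]))  # 5.0
```

This smoothness is what lets the WGAN critic be trained to convergence without its gradients vanishing.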
ATTENTION GAN
 Text-to-image synthesis.
 Fine-tuned to draw parts of an image from a single word in the sentence.
 Uses an attention mechanism.
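The attention mechanism can be sketched as a softmax over word-relevance scores, so the word most relevant to a given image region receives most of the weight when that region is drawn. The scores below are made-up illustrative values, not from this document.

```python
import math

def attention_weights(scores):
    # numerically stable softmax over word-to-region relevance scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical relevance of each word in "a small red bird" to one image region
w = attention_weights([0.1, 0.5, 2.0, 1.0])
print(max(range(4), key=w.__getitem__))  # 2: "red" dominates this region
```

The generator then conditions each region on the attention-weighted sum of the word features, which is what lets a single word steer a single part of the image.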
REAL-WORLD APPLICATIONS
 Image retrieval for historical archives
 Text translation into images
 Drug discovery
IMAGE RETRIEVAL FOR HISTORICAL ARCHIVES
 An interesting example of a GAN application is retrieving visually similar marks in the "Prize Papers", one of the most valuable archives in the field of maritime history.
 Adversarial nets make it easier to work with documents of historical importance containing information about the legitimacy of ship captures at sea.
TEXT TRANSLATION INTO IMAGES
 Text-to-image translation illustrates how well generative models can mimic samples of real data.
 The main problem of image generation is that the image distribution is multimodal: many different samples can correctly illustrate the same description.
 GANs help to solve this problem.
DRUG DISCOVERY
 The goal is to train the generator to sample drug candidates for a given disease that are as close as possible to existing drugs in a drug database.
 After training, the generator can propose a drug for a previously incurable disease, while the discriminator determines whether the sampled drug is likely to cure the given disease.
LITERATURE SURVEY
1. Radford, Alec, Luke Metz, and Soumith Chintala. "Unsupervised representation learning with deep convolutional generative adversarial networks." arXiv preprint arXiv:1511.06434 (2015).
 Supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention.
2. Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. "Generative adversarial nets." In Advances in Neural Information Processing Systems, pp. 2672-2680. 2014.
 Proposes a framework for estimating generative models via adversarial nets, in which two models are trained simultaneously: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.