
Generative Adversarial Network (GAN)

Generative Adversarial Networks (GANs) are a powerful class of neural networks used for
unsupervised learning. They were introduced by Ian J. Goodfellow in 2014. A GAN is
made up of a system of two competing neural network models which, by competing with each other,
learn to analyze, capture, and reproduce the variations within a dataset.

Why were GANs developed in the first place?

It has been observed that most mainstream neural networks can be fooled into misclassifying inputs
when only a small amount of noise is added to the original data. Surprisingly, after the noise is added,
the model often has higher confidence in the wrong prediction than it had in the correct one. The reason
for this adversarial vulnerability is that most machine learning models learn from a limited amount of data,
which is a huge drawback, as it makes them prone to overfitting. In addition, the mapping between the
input and the output is almost linear: although the boundaries of separation between the various classes
may appear smooth, in reality they are composed of linear pieces, so even a small change to a point in the
feature space can lead to misclassification of data.

How do GANs work?

Generative Adversarial Networks (GANs) can be broken down into three parts:

• Generative: to learn a generative model, which describes how data is generated in terms
of a probabilistic model.
• Adversarial: the training of the model is done in an adversarial setting.
• Networks: deep neural networks are used as the artificial intelligence (AI) algorithms for
training purposes.
In a GAN, there is a Generator and a Discriminator. The Generator generates fake samples of data (be it
images, audio, etc.) and tries to fool the Discriminator. The Discriminator, on the other hand, tries to
distinguish between the real and fake samples. The Generator and the Discriminator are both neural
networks, and they run in competition with each other during the training phase. These steps are
repeated many times, and with each repetition the Generator and the Discriminator get better at their
respective jobs. A minimal sketch of the two competing networks is given below.
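
The following sketch illustrates the two networks in code, assuming PyTorch as the framework (the article does not prescribe one); the layer sizes, the noise dimension, and the flattened-image data dimension are illustrative assumptions rather than part of the original description.

import torch
import torch.nn as nn

latent_dim = 100   # size of the noise vector z (assumed)
data_dim = 784     # e.g. a flattened 28x28 image (assumed)

# Generator: maps random noise z to a fake data sample G(z)
generator = nn.Sequential(
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, data_dim),
    nn.Tanh(),                 # outputs scaled to [-1, 1], like normalized real data
)

# Discriminator: maps a sample x to D(x), the estimated probability that x is real
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),              # probability in [0, 1]
)

z = torch.randn(16, latent_dim)   # a batch of noise samples
fake = generator(z)               # fake samples G(z)
scores = discriminator(fake)      # D(G(z)), the Discriminator's verdict on the fakes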

Here, the generative model captures the distribution of the data and is trained in such a manner that it
tries to maximize the probability of the Discriminator making a mistake. The Discriminator, on the other
hand, is based on a model that estimates the probability that the sample it received came from the
training data and not from the Generator.
GANs are formulated as a minimax game, where the Discriminator is trying to maximize its
reward V(D, G) and the Generator is trying to minimize the Discriminator's reward, or in other words,
maximize the Discriminator's loss. It can be described mathematically by the formula below:

min_G max_D V(D, G) = E_(x ~ Pdata(x))[log D(x)] + E_(z ~ P(z))[log(1 − D(G(z)))]
where,
G = Generator
D = Discriminator
Pdata(x) = distribution of the real data
P(z) = distribution of the Generator's input noise
x = a sample from Pdata(x)
z = a sample from P(z)
D(x) = the Discriminator's output for a sample x (the estimated probability that x is real)
G(z) = the Generator's output for a noise sample z
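In practice, the two sides of this value function are usually implemented as two binary cross-entropy losses, which correspond to the log terms above. The sketch below shows one common way to write them; the function names are hypothetical, and the Discriminator outputs are assumed to already be probabilities (e.g. after a sigmoid).

import torch
import torch.nn as nn

bce = nn.BCELoss()

def discriminator_loss(d_real, d_fake):
    # The Discriminator wants D(x) -> 1 on real data and D(G(z)) -> 0 on fake data,
    # i.e. it maximizes log D(x) + log(1 - D(G(z))).
    return bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))

def generator_loss(d_fake):
    # The Generator wants D(G(z)) -> 1, i.e. it maximizes the Discriminator's mistake.
    # This is the commonly used "non-saturating" form of the Generator objective.
    return bce(d_fake, torch.ones_like(d_fake))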
So, basically, training a GAN has two parts:
• Part 1: The Discriminator is trained while the Generator is idle. In this phase, the Generator
is only forward-propagated and no back-propagation is done through it. The Discriminator is trained
on real data for n epochs to see whether it can correctly predict them as real. In the same phase,
the Discriminator is also trained on the fake data produced by the Generator to see whether
it can correctly predict them as fake.
• Part 2: The Generator is trained while the Discriminator is idle. After the Discriminator has
been trained on the Generator's fake data, its predictions are used as the training signal for
the Generator, which improves on its previous state to try to fool the
Discriminator.
The above procedure is repeated for a few epochs, after which the fake data is checked manually to see
whether it looks genuine. If it looks acceptable, training is stopped; otherwise, it is allowed to
continue for a few more epochs. A sketch of this alternating training loop is given below.
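The two alternating parts can be combined into a single training loop. The sketch below assumes the generator, discriminator, and loss functions from the earlier snippets, plus a hypothetical real_loader that yields batches of real samples; the learning rates and number of epochs are illustrative assumptions.

import torch

num_epochs = 50     # in practice chosen by inspecting the generated samples
latent_dim = 100

opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)

for epoch in range(num_epochs):
    for real in real_loader:
        z = torch.randn(real.size(0), latent_dim)

        # Part 1: train the Discriminator while the Generator is held fixed
        fake = generator(z).detach()          # detach: no gradients flow into the Generator
        d_loss = discriminator_loss(discriminator(real), discriminator(fake))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Part 2: train the Generator while the Discriminator is held fixed
        fake = generator(z)
        g_loss = generator_loss(discriminator(fake))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()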
Different types of GANs:
GANs are now a very active topic of research, and many different GAN
implementations have been proposed. Some of the important ones currently in active use are
described below:
1. Vanilla GAN: This is the simplest type of GAN. Here, the Generator and the
Discriminator are simple multi-layer perceptrons. The algorithm is
straightforward: it tries to optimize the mathematical equation above using stochastic
gradient descent.
2. Conditional GAN (CGAN): CGAN can be described as a deep learning method in
which some conditional parameters are put into place. In a CGAN, an additional
parameter 'y' is given to the Generator for generating the corresponding data.
Labels are also fed into the input of the Discriminator in order to help the
Discriminator distinguish the real data from the fake generated data.
3. Deep Convolutional GAN (DCGAN): DCGAN is one of the most popular and also the
most successful implementations of GAN. It is composed of ConvNets in place of
multi-layer perceptrons. The ConvNets are implemented without max pooling,
which is replaced by strided convolutions, and the layers are not fully
connected. A sketch of a DCGAN-style Generator is given after this list.
4. Laplacian Pyramid GAN (LAPGAN): The Laplacian pyramid is a linear, invertible
image representation consisting of a set of band-pass images, spaced an octave
apart, plus a low-frequency residual. This approach uses multiple
Generator and Discriminator networks at different levels of the Laplacian
pyramid. It is used mainly because it produces very high-quality
images: the image is first down-sampled at each level of the pyramid and then
up-scaled again at each level in a backward pass, where the image acquires
some noise from the Conditional GAN at these levels until it reaches its original
size.
5. Super Resolution GAN (SRGAN): SRGAN, as the name suggests, is a way of
designing a GAN in which a deep neural network is used along with an adversarial
network in order to produce higher-resolution images. This type of GAN is
particularly useful for optimally up-scaling native low-resolution images to enhance
their details while minimizing errors in doing so.
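
As referenced in item 3, a DCGAN-style Generator replaces fully connected layers and pooling with strided (transposed) convolutions. The sketch below shows one such Generator for 64x64 RGB images, again assuming PyTorch; the channel counts and target image size are illustrative assumptions rather than a prescribed architecture.

import torch
import torch.nn as nn

latent_dim = 100

dcgan_generator = nn.Sequential(
    # project the noise vector (latent_dim x 1 x 1) up to a 4x4 feature map
    nn.ConvTranspose2d(latent_dim, 512, kernel_size=4, stride=1, padding=0),
    nn.BatchNorm2d(512), nn.ReLU(),
    nn.ConvTranspose2d(512, 256, kernel_size=4, stride=2, padding=1),   # 4x4 -> 8x8
    nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),   # 8x8 -> 16x16
    nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),    # 16x16 -> 32x32
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),      # 32x32 -> 64x64
    nn.Tanh(),
)

z = torch.randn(8, latent_dim, 1, 1)      # a batch of 8 noise vectors
images = dcgan_generator(z)               # shape: (8, 3, 64, 64)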
