Deep Learning - AD3501 - Important Questions and 2 Marks With Answer - Unit 5 - Autoencoders and Generative Models


www.BrainKart.com

DEPARTMENT OF AI & DS

Sub Name: DEEP LEARNING    Sem: V
Sub Code: AD3501    Year: III

UNIT-5 AUTOENCODERS AND GENERATIVE MODELS

PART-A Question & Answer

1. Define Autoencoders.
Autoencoders are a specialized class of artificial neural networks that learn efficient representations of input data without the need for labels; they are designed for unsupervised learning. Learning to compress and effectively represent input data without explicit labels is the essential principle of an autoencoder. This is accomplished with a two-part structure consisting of an encoder and a decoder. The encoder transforms the input data into a reduced-dimensional representation, often referred to as the "latent space" or "encoding". From that representation, the decoder reconstructs the original input. The process of encoding and decoding forces the network to capture the essential features and meaningful patterns in the data.

2. Give the architecture of Autoencoder in Deep Learning


The general architecture of an autoencoder includes an encoder, decoder, and bottleneck layer.

Figure: Architecture of Autoencoder

3. What are the different ways to constrain the network in autoencoders?

• Keep hidden layers small: If the size of each hidden layer is kept as small as possible, the network is forced to pick up only the representative features of the data, thus encoding the data compactly.
• Regularization: A loss term is added to the cost function that encourages the network to learn something other than copying the input.
• Denoising: Noise is added to the input, and the network is trained to remove the noise and recover the clean data.
• Tuning the activation functions: The activation functions of various nodes are adjusted so that a majority of the nodes are dormant, effectively reducing the size of the hidden layers.

4. Give the types of Autoencoders.

• Denoising Autoencoder
• Sparse Autoencoder
• Variational Autoencoder
• Convolutional Autoencoder

5. Define Denoising Autoencoder

A denoising autoencoder works on a partially corrupted input and is trained to recover the original, undistorted input. As mentioned above, this is an effective way to prevent the network from simply copying the input, so it must learn the underlying structure and important features of the data.
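A minimal NumPy sketch of the denoising setup (the array sizes and noise level are illustrative, not from the syllabus): the input is corrupted with Gaussian noise, while the clean input remains the reconstruction target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean input batch: 4 samples of 8 features each (illustrative shapes).
x_clean = rng.random((4, 8))

# Corrupt the input with additive Gaussian noise; the autoencoder would
# receive x_noisy as input but be trained to reconstruct x_clean.
noise_std = 0.1
x_noisy = x_clean + noise_std * rng.normal(size=x_clean.shape)

# The training pair is (input=x_noisy, target=x_clean), so simply copying
# the input can no longer minimize the reconstruction loss.
print(x_noisy.shape)  # (4, 8)
```

Because the input and target differ, the network is forced to learn structure that lets it undo the corruption.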

6. Define Sparse Autoencoder

This type of autoencoder typically contains more hidden units than the input, but only a few are allowed to be active at once. This property is called the sparsity of the network. The sparsity can be controlled by manually zeroing the required hidden units, by tuning the activation functions, or by adding a loss term to the cost function.
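One common form of the loss term mentioned above is a KL-divergence penalty between a target sparsity level and the mean activation of each hidden unit; a NumPy sketch (the target value `rho` and the activation arrays are illustrative):

```python
import numpy as np

def sparsity_penalty(activations, rho=0.05):
    """KL divergence between the target sparsity rho and the mean
    activation rho_hat of each hidden unit (activations in (0, 1))."""
    rho_hat = activations.mean(axis=0)  # mean activation per hidden unit
    kl = rho * np.log(rho / rho_hat) \
        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))
    return kl.sum()

# Mostly-inactive hidden units incur no penalty (rho_hat == rho)...
low = sparsity_penalty(np.full((10, 4), 0.05))
# ...while highly active units are penalized heavily.
high = sparsity_penalty(np.full((10, 4), 0.9))
print(low, high)
```

Adding this penalty to the reconstruction loss pushes most hidden units toward being dormant, which is exactly the sparsity property described above.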

7. Define Variational Autoencoder

A variational autoencoder makes strong assumptions about the distribution of the latent variables and uses the Stochastic Gradient Variational Bayes (SGVB) estimator during training. It assumes that the data is generated by a directed graphical model and tries to learn an approximation to the posterior distribution over the latent variables.
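The "strong assumptions" are typically a Gaussian form for the latent distribution, and SGVB training relies on the reparameterization trick, sketched here in NumPy (the encoder outputs `mu` and `log_var` are illustrative stand-ins for a learned network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Encoder outputs for a batch of 3 samples with a 2-D latent space
# (illustrative values standing in for a trained encoder's outputs).
mu = np.array([[0.0, 1.0], [0.5, -0.5], [2.0, 0.0]])
log_var = np.zeros((3, 2))  # log sigma^2 = 0, i.e. sigma = 1

# Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
# Drawing the noise outside the network keeps the path through mu and
# log_var differentiable, which is what SGVB needs.
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * log_var) * eps
print(z.shape)  # (3, 2)
```

The decoder then reconstructs the input from the sampled `z`, while a KL term keeps the latent distribution close to the assumed prior.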

8. Define Convolutional Autoencoder

Convolutional autoencoders are a type of autoencoder that use convolutional neural networks (CNNs) as their building blocks. The encoder consists of multiple layers that take an image or a grid as input and pass it through successive convolution layers, forming a compressed representation of the input. The decoder is the mirror image of the encoder: it deconvolves the compressed representation and tries to reconstruct the original image.

9. Give the Implementation of Autoencoders

We've created an autoencoder comprising two Dense layers: an encoder that condenses the images into a 64-dimensional latent vector, and a decoder that reconstructs the original image from this latent representation.
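The two-Dense-layer design described above can be sketched as a NumPy forward pass with random, untrained weights; the 784-dimensional input (a flattened 28×28 image) is an assumption based on the usual MNIST setting:

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, latent_dim = 784, 64  # flattened 28x28 image -> 64-D latent

# Untrained weights standing in for the two Dense layers.
W_enc = rng.normal(scale=0.01, size=(input_dim, latent_dim))
W_dec = rng.normal(scale=0.01, size=(latent_dim, input_dim))

def encode(x):
    return np.maximum(0.0, x @ W_enc)          # Dense + ReLU -> latent vector

def decode(z):
    return 1.0 / (1.0 + np.exp(-(z @ W_dec)))  # Dense + sigmoid -> reconstruction

x = rng.random((5, input_dim))   # batch of 5 flattened "images"
z = encode(x)
x_hat = decode(z)
print(z.shape, x_hat.shape)  # (5, 64) (5, 784)
```

Training would minimize a reconstruction loss such as mean squared error between `x` and `x_hat`; in a framework like Keras the two functions correspond to two Dense layers.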

10. What are the different ways to constrain the Regularized Autoencoders?

ARUNACHALA COLLEGE OF ENGINEERING FOR WOMEN, MANAVILAI



There are other ways to constrain the reconstruction of an autoencoder than imposing a hidden layer of smaller dimension than the input. Regularized autoencoders use a loss function that encourages the model to have properties other than copying the input to the output. The two common types of regularized autoencoder are the denoising autoencoder and the sparse autoencoder.

11. Define Generative Adversarial Networks

Generative Adversarial Networks, or GANs, represent a cutting-edge approach to generative modeling within deep learning, often leveraging architectures such as convolutional neural networks. The goal of generative modeling is to autonomously identify patterns in input data, enabling the model to produce new examples that plausibly resemble the original dataset.

12. Give the architecture of GAN

A Generative Adversarial Network (GAN) is composed of two primary parts: the Generator and the Discriminator.

Generator Model
The generator model is the key element responsible for creating fresh, realistic data in a GAN. The generator takes random noise as input and converts it into complex data samples, such as text or images.

Discriminator Model
The discriminator model is an artificial neural network used in GANs to differentiate between generated and actual input.
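A minimal NumPy sketch of the two parts (untrained linear layers with illustrative sizes): the generator maps random noise to a data-shaped sample, and the discriminator maps a sample to a real/fake probability.

```python
import numpy as np

rng = np.random.default_rng(0)
noise_dim, data_dim = 16, 32  # illustrative sizes

W_g = rng.normal(scale=0.1, size=(noise_dim, data_dim))  # generator weights
W_d = rng.normal(scale=0.1, size=(data_dim, 1))          # discriminator weights

def generator(z):
    return np.tanh(z @ W_g)  # noise -> fake sample in [-1, 1]

def discriminator(x):
    return 1.0 / (1.0 + np.exp(-(x @ W_d)))  # sample -> P(real) in (0, 1)

z = rng.normal(size=(4, noise_dim))  # batch of 4 noise vectors
fake = generator(z)
p_real = discriminator(fake)
print(fake.shape, p_real.shape)  # (4, 32) (4, 1)
```

During training the two networks are optimized adversarially: the generator tries to push `p_real` toward 1 on its fakes, while the discriminator tries to push it toward 0.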

13. Define Discriminator Loss (J_D)

The discriminator minimizes the negative log-likelihood of correctly classifying both generated and real samples. This loss incentivizes the discriminator to classify generated samples as fake (D(G(z_i)) close to 0, so that 1 − D(G(z_i)) is close to 1) and real samples as real (D(x_i) close to 1):

J_D = −(1/m) Σ_{i=1}^{m} [ log D(x_i) + log(1 − D(G(z_i))) ]
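The discriminator loss can be computed directly from its definition; a NumPy sketch with illustrative discriminator outputs:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """J_D = -(1/m) * sum[log D(x_i) + log(1 - D(G(z_i)))]."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

# A confident, correct discriminator: real samples scored near 1 and
# generated samples scored near 0 give a small loss.
good = discriminator_loss(np.array([0.9, 0.95]), np.array([0.1, 0.05]))

# A confused discriminator scoring everything 0.5 gives loss 2*log(2).
bad = discriminator_loss(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
print(good < bad)  # True
```

The loss shrinks as the discriminator's classifications improve, which is what drives the adversarial game against the generator.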

14. Give applications of Generative Adversarial Networks (GANs)

• Image Synthesis and Generation
• Image-to-Image Translation
• Text-to-Image Synthesis
• Data Augmentation
• Data Generation for Training

15. Advantages of Generative Adversarial Networks (GANs)

The advantages of GANs are as follows:
1. Synthetic data generation: GANs can generate new, synthetic data that resembles a known data distribution, which is useful for data augmentation, anomaly detection, or creative applications.
2. High-quality results: GANs can produce high-quality, photorealistic results in image synthesis, video synthesis, music synthesis, and other tasks.
3. Unsupervised learning: GANs can be trained without labeled data, making them suitable for unsupervised learning tasks where labeled data is scarce or difficult to obtain.
4. Versatility: GANs can be applied to a wide range of tasks, including image synthesis, text-to-image synthesis, image-to-image translation, anomaly detection, and data augmentation.

16. Disadvantages of Generative Adversarial Networks (GANs)


The disadvantages of the GANs are as follows:
1. Training Instability: GANs can be difficult to train, with the risk of instability, mode
collapse, or failure to converge.
2. Computational Cost: GANs can require a lot of computational resources and can be slow
to train, especially for high-resolution images or large datasets.
3. Overfitting: GANs can overfit the training data, producing synthetic data that is too similar
to the training data and lacking diversity.
4. Bias and Fairness: GANs can reflect the biases and unfairness present in the training data,
leading to discriminatory or biased synthetic data.
5. Interpretability and Accountability: GANs can be opaque and difficult to interpret or
explain, making it challenging to ensure accountability, transparency, or fairness in their
applications.


PART – B Questions
1. i. Write short notes on Sparse Autoencoders. (7)
   ii. Illustrate Denoising Autoencoders. (6)
2. Discuss Autoencoders. (13)
3. Explain in detail Generative Adversarial Networks.
4. Write in detail about Undercomplete Autoencoders. (13)
5. Explain Regularized Autoencoders. (13)
PART – C Questions
1. Discuss Autoencoders. (15)
2. Explain in detail Generative Adversarial Networks.
3. Write in detail about Undercomplete Autoencoders.
4. Explain Regularized Autoencoders.
5. Assess Independent Component Analysis. (15)