DL Important

The document outlines various modules related to neural networks, covering topics such as activation functions, overfitting prevention, multi-layer perceptrons, convolutional neural networks, recurrent neural networks, and deep learning applications. It includes both short answer questions and essay prompts that require calculations, explanations, and comparisons of different techniques and architectures. The content emphasizes practical challenges, optimization strategies, and the importance of techniques like representation learning and autoencoders in deep learning.


Module 1

Shorts:
1. A 3-dimensional input X = (X1, X2, X3) = (2, 1, 2) is fully connected to one neuron in the hidden layer with a binary sigmoid activation function. Calculate the output of the hidden-layer neuron. Assume suitable values for the associated weights; neglect the bias term.
2. Explain the importance of choosing the right step size in neural networks.
3. Discuss methods to prevent overfitting in neural networks.
4. A 3-dimensional input X = (X1, X2, X3) = (1, 2, 1) is fully connected to one neuron in the hidden layer with a binary sigmoid activation function. Calculate the output of the hidden-layer neuron. Assume suitable values for the associated weights; neglect the bias term.
5. Discuss the advantages of multilayer perceptrons with an example.
6. Define overfitting. List any two solutions to the overfitting problem in neural networks.
7. Discuss the disadvantages of single-layer perceptrons with an example.
8. List any three applications of neural networks.
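Questions 1 and 4 both reduce to a weighted sum followed by a binary (logistic) sigmoid. A minimal sketch in Python; the weights W = (0.5, 0.5, 0.5) are an assumption, since the questions leave them open:

```python
import math

def binary_sigmoid(x):
    # Binary (logistic) sigmoid: output in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights):
    # Weighted sum with no bias term, then sigmoid activation
    net = sum(x * w for x, w in zip(inputs, weights))
    return binary_sigmoid(net)

# Question 1: X = (2, 1, 2); assumed weights W = (0.5, 0.5, 0.5) -> net = 2.5
y1 = neuron_output([2, 1, 2], [0.5, 0.5, 0.5])
# Question 4: X = (1, 2, 1); same assumed weights -> net = 2.0
y4 = neuron_output([1, 2, 1], [0.5, 0.5, 0.5])
print(round(y1, 4), round(y4, 4))
```

With a different assumed weight vector the net input, and hence the output, changes accordingly.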

Essay:
1. a) Calculate the output of the following neuron Y if the activation function is a bipolar sigmoid.
   b) Describe the advantage of the ReLU activation function over others.
2. a) Discuss methods to prevent overfitting in neural networks.
   b) Draw the architecture of a multi-layer perceptron. Derive update rules for the parameters in the multi-layer neural network through gradient descent.
3. a) Draw the architecture of a multi-layer perceptron. Derive update rules for the parameters in the multi-layer neural network through gradient descent.
   b) Describe various activation functions used in neural networks.
4. a) Calculate the output of the following neuron Y if the activation function is a bipolar sigmoid.
   b) Explain the importance of choosing the right step size in neural networks.
5. Explain all practical challenges and their solutions in neural network training.
6. Implement the backpropagation algorithm to train a Multi-Layer Perceptron using the sigmoid activation function.
7. Explain different activation functions and their derivatives used in neural networks with the help of graphical representations.
8. Implement the backpropagation algorithm to train a Multi-Layer Perceptron using the tanh activation function.
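For essay questions 6 and 8, the core of backpropagation can be sketched on a tiny 2-2-1 network with sigmoid activations. The network size, learning rate and training target here are illustrative assumptions, not part of the question:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_deriv(y):
    # Derivative of the sigmoid expressed in terms of its output y
    return y * (1.0 - y)

def train_step(x, target, w_hidden, w_out, lr=0.5):
    # Forward pass: input -> hidden (sigmoid) -> output (sigmoid)
    h = [sigmoid(sum(xi * wij for xi, wij in zip(x, row))) for row in w_hidden]
    o = sigmoid(sum(hj * wj for hj, wj in zip(h, w_out)))
    # Backward pass: delta terms from the chain rule
    delta_o = (o - target) * sigmoid_deriv(o)
    delta_h = [delta_o * w_out[j] * sigmoid_deriv(h[j]) for j in range(len(h))]
    # Gradient-descent updates (in place)
    for j in range(len(w_out)):
        w_out[j] -= lr * delta_o * h[j]
    for j in range(len(w_hidden)):
        for i in range(len(x)):
            w_hidden[j][i] -= lr * delta_h[j] * x[i]
    return 0.5 * (o - target) ** 2  # squared-error loss before the update

random.seed(0)
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(2)]
losses = [train_step([1.0, 0.0], 1.0, w_hidden, w_out) for _ in range(50)]
print(losses[0] > losses[-1])  # the loss shrinks as weights are updated
```

Swapping `sigmoid`/`sigmoid_deriv` for `math.tanh` and its derivative 1 - y² gives the tanh variant asked for in question 8.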

Module 2
Shorts:
1. Give a method to fight vanishing gradients in fully-connected neural networks.
2. Discuss the importance of optimizers in deep learning. List any 5 optimizers.
3. A supervised learning problem is given to model a deep feed-forward neural network. Suggest solutions for training on a small-sized dataset.
4. Explain how L2 regularization improves the performance of deep feed-forward neural networks.
5. Discuss the need for parameter initialization and early stopping.
6. Explain any two ensemble methods.
7. Explain any two parameter initialization methods.
8. Compare L1 and L2 regularization.
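For questions 4 and 8, the practical difference between L1 and L2 comes down to the gradient each penalty adds to the weight update. A small sketch (`lam` is a hypothetical regularization strength):

```python
def l1_penalty_grad(w, lam):
    # L1 adds lam * sign(w): a constant pull toward zero, which
    # tends to drive weights exactly to zero (sparsity)
    return lam * (1 if w > 0 else -1 if w < 0 else 0)

def l2_penalty_grad(w, lam):
    # L2 adds lam * w: a pull proportional to the weight's magnitude,
    # which shrinks all weights but rarely zeroes them out
    return lam * w

w, lam = 0.1, 0.01
print(l1_penalty_grad(w, lam), l2_penalty_grad(w, lam))
```

For a small weight like 0.1, the L1 pull (0.01) is ten times the L2 pull (0.001), which is why L1 prunes small weights while L2 merely shrinks them.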

Essay:

1. a) Describe the advantage of using the Adam optimizer instead of basic gradient descent.
   b) Differentiate stochastic gradient descent with and without momentum. Give the weight-update equations for SGD with and without momentum.
2. a) Differentiate between L1 and L2 regularization techniques.
   b) Describe the effect on bias and variance when a neural network is modified with a larger number of hidden units followed by dropout regularization.
3. a) Differentiate stochastic gradient descent with and without momentum. Give the weight-update equations for SGD with and without momentum.
   b) State how to apply early stopping in the context of learning using Gradient Descent. Why is it necessary to use a validation set (instead of simply using the test set) when using early stopping?
4. a) Describe the effect on bias and variance when a neural network is modified with a larger number of hidden units followed by dropout regularization.
   b) Describe the advantage of using the Adam optimizer instead of basic gradient descent.
5. Compare different parameter-specific learning and momentum-based learning strategies in gradient descent optimization.
6. Sketch a diagram of the Convolutional Neural Network architecture and explain the functionalities of the different layers using a simple example.
7. Explain different gradient descent optimization strategies used in deep learning.
8. Explain the following: i) Early stopping ii) Dropout iii) Injecting noise at input iv) Parameter sharing and tying.
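The SGD with/without momentum comparison in essay questions 1 and 3 can be made concrete with the two update rules. The objective f(w) = w² and the hyperparameter values below are purely illustrative assumptions:

```python
def sgd_step(w, grad, lr):
    # Plain SGD: w <- w - lr * grad
    return w - lr * grad

def sgd_momentum_step(w, v, grad, lr, beta=0.9):
    # Momentum: v <- beta * v - lr * grad;  w <- w + v
    # The velocity v accumulates past gradients, accelerating
    # movement along directions of consistent gradient sign.
    v = beta * v - lr * grad
    return w + v, v

w_plain = 1.0
w_mom, v = 1.0, 0.0
for _ in range(3):
    grad = 2 * w_plain          # gradient of f(w) = w^2 at the current w
    w_plain = sgd_step(w_plain, grad, lr=0.1)
for _ in range(3):
    grad = 2 * w_mom
    w_mom, v = sgd_momentum_step(w_mom, v, grad, lr=0.1)
print(w_plain, w_mom)           # momentum moves further toward the minimum at 0
```

After three steps the momentum variant is noticeably closer to the minimizer w = 0, which is the behaviour the comparison question is probing.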

Module 3
Shorts:
1. Explain how convolution and pooling act as an infinitely strong prior.
2. Give two benefits of using convolutional layers instead of fully connected ones for visual tasks.
3. Give two benefits of using convolutional layers instead of fully connected ones for visual tasks.
4. Assume an input volume of dimension 64 x 64 x 3. What are the dimensions of the resulting volume after convolving a 5 x 5 kernel with zero padding, stride of 1 and 2 filters?
5. Identify the data types that can be handled by Convolutional Neural Networks.
6. List any two pooling methods with examples.
7. Define structured outputs in Convolutional Neural Networks.
8. Suggest a method to make the convolution algorithm more efficient. Justify your answer.
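Short question 4 follows from the standard output-size formula. The sketch below reads "zero padding" as padding p = 0, which is one interpretation of the question's wording:

```python
def conv_output_size(n, k, p, s):
    # Standard formula: output side = floor((n - k + 2p) / s) + 1
    return (n - k + 2 * p) // s + 1

# Question 4: 64 x 64 x 3 input, 5 x 5 kernel, stride 1, 2 filters,
# padding taken as p = 0.
side = conv_output_size(64, 5, 0, 1)
print(side, side, 2)  # output depth equals the number of filters -> 60 60 2
```

So the resulting volume is 60 x 60 x 2: each of the 2 filters spans the full input depth of 3 and produces one 60 x 60 activation map.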

Essay:
1. a) Discuss the variants of convolution functions.
   b) Explain the motivation towards convolutional neural networks.
2. a) Draw and explain the architecture of convolutional neural networks.
   b) Why do the layers in a deep architecture need to be non-linear?
3. a) Draw and explain the architecture of convolutional neural networks.
   b) Explain how CNNs produce structured output.
4. a) Explain in detail the variants of convolution functions.
   b) Describe the motivation behind convolutional neural networks.
5. Demonstrate the motivation behind convolutional neural networks, incorporating different diagrams.
6. a) Demonstrate the convolution operation on a 7 x 7 colour image to extract horizontal and vertical edges with an example.
   b) Write the input volume, filter and output volume characteristics in a convolution layer with an example.
7. Explain variants of convolution functions in detail.
8. Sketch the diagram of the Convolutional Neural Network architecture and explain the different stages in detail.
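Essay question 6 a) can be worked through with a hand-rolled valid convolution. The 7 x 7 image and the Prewitt-style kernels below are illustrative assumptions, and a single channel is used for simplicity:

```python
def convolve2d(image, kernel):
    # Valid convolution (no padding, stride 1) on a 2-D grayscale grid
    n, k = len(image), len(kernel)
    out = []
    for i in range(n - k + 1):
        row = []
        for j in range(n - k + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(k) for b in range(k)))
        out.append(row)
    return out

# 7 x 7 toy image: bright left columns, dark right columns (a vertical edge)
img = [[1, 1, 1, 0, 0, 0, 0] for _ in range(7)]
vertical_edge = [[1, 0, -1], [1, 0, -1], [1, 0, -1]]    # Prewitt-style
horizontal_edge = [[1, 1, 1], [0, 0, 0], [-1, -1, -1]]
v = convolve2d(img, vertical_edge)
h = convolve2d(img, horizontal_edge)
print(max(max(r) for r in v), max(max(r) for r in h))
```

The vertical-edge kernel responds strongly where the bright and dark columns meet, while the horizontal-edge kernel gives zero everywhere since all rows of this image are identical.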

Module 4
Shorts:
1. Discuss deep recurrent networks.
2. Explain the concept of ‘unrolling through time’ in Recurrent Neural Networks.
3. Explain the concept of ‘unrolling through time’ in Recurrent Neural Networks.
4. How does a recursive neural network work?
5. What kind of data can be handled by a Recurrent Neural Network? Explain.
6. List any three applications of LSTM.
7. Explain the Recursive Neural Network.
8. Explain applications of Recurrent Neural Networks.
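The "unrolling through time" idea in short questions 2 and 3 is just a loop that reuses the same weights at every time step. A minimal sketch with assumed scalar weights:

```python
import math

def rnn_unrolled(xs, w_in, w_rec):
    # Unrolling through time: the SAME weights (w_in, w_rec) are reused
    # at every step; h_t = tanh(w_in * x_t + w_rec * h_{t-1})
    h = 0.0                    # initial hidden state
    states = []
    for x in xs:               # one loop iteration per time step
        h = math.tanh(w_in * x + w_rec * h)
        states.append(h)
    return states

# Assumed scalar weights; real RNNs use weight matrices.
states = rnn_unrolled([1.0, 0.5, -1.0], w_in=0.8, w_rec=0.5)
print(len(states))             # one hidden state per time step
```

Drawing one copy of this loop body per time step, with arrows carrying h forward, is exactly the unrolled computational graph asked about in the essay questions.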

Essay:
1. a) Describe how an LSTM reduces the vanishing gradient problem associated with recurrent neural networks.
   b) Explain the architecture of GRU.
2. a) Draw and explain the variations of Recurrent Neural Network design.
   b) How does a recursive neural network work?
3. a) Draw and explain the architecture of LSTM.
   b) How does an encoder-decoder RNN work?
4. a) Draw and explain the architecture of Recurrent Neural Networks.
   b) Describe how an LSTM takes care of the vanishing gradient problem.
5. Sketch diagrams of a Recursive Neural Network and Long Short-Term Memory. Compare the working of RNN and LSTM.
6. a) Explain the process of unfolding an RNN into a computational graph with an example.
   b) Sketch the diagram of the sequence-to-sequence architecture and explain its working with an example.
7. Sketch diagrams of different Recurrent Neural Network patterns and explain them in detail.
8. a) Discuss the architecture of the Gated Recurrent Unit (GRU).
   b) Discuss different ways to make a Recurrent Neural Network (RNN) a deep RNN with the help of diagrams.
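For the LSTM essay questions, one step of a single-unit LSTM makes the gate structure explicit. The parameter values here are arbitrary assumptions chosen only so the arithmetic can be followed by hand:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, p):
    # One step of a single-unit LSTM; p maps each gate to a (w_x, w_h, b) triple.
    f = sigmoid(p['f'][0] * x + p['f'][1] * h_prev + p['f'][2])   # forget gate
    i = sigmoid(p['i'][0] * x + p['i'][1] * h_prev + p['i'][2])   # input gate
    g = math.tanh(p['g'][0] * x + p['g'][1] * h_prev + p['g'][2]) # candidate
    o = sigmoid(p['o'][0] * x + p['o'][1] * h_prev + p['o'][2])   # output gate
    # Additive cell update: when f is close to 1, c_prev flows through almost
    # unchanged, so gradients along the cell state do not shrink multiplicatively.
    # This is the mechanism that mitigates the vanishing gradient problem.
    c = f * c_prev + i * g
    h = o * math.tanh(c)
    return h, c

params = {k: (0.5, 0.5, 0.0) for k in ('f', 'i', 'g', 'o')}  # assumed values
h, c = lstm_step(1.0, 0.0, 0.0, params)
print(round(h, 4), round(c, 4))
```

A GRU answers the same question with two gates instead of three by merging the cell and hidden states, which is the main contrast to draw in questions 1 b) and 8 a).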

Module 5
Shorts:
1. Describe the technique of representation learning in deep learning.
2. Illustrate the use of deep learning concepts in Speech Recognition.
3. Describe the technique of representation learning in deep learning.
4. How does deep learning support the field of natural language processing?
5. Explain Boltzmann machines in deep learning.
6. Explain computer vision implementation with deep learning concepts.
7. What is an autoencoder? Give one application of an autoencoder.
8. Illustrate the use of deep learning concepts in Computer Vision.
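For short question 7, the encode/compress/decode shape of an autoencoder can be sketched with a linear bottleneck. The tied weights below are hypothetical, not learned:

```python
def encode(x, w):
    # Encoder: compress a 4-D input to a 2-D code (linear, for illustration)
    return [sum(xi * wij for xi, wij in zip(x, row)) for row in w]

def decode(z, w):
    # Decoder: map the 2-D code back to 4-D using the transpose (tied weights)
    return [sum(zj * w[j][i] for j, zj in enumerate(z)) for i in range(4)]

# Hypothetical tied weights; a real autoencoder LEARNS these by minimising
# the reconstruction error ||x - decode(encode(x))||^2.
w = [[0.5, 0.5, 0.0, 0.0], [0.0, 0.0, 0.5, 0.5]]
x = [1.0, 1.0, 2.0, 2.0]
z = encode(x, w)           # the compressed representation
x_hat = decode(z, w)       # the reconstruction
err = sum((a - b) ** 2 for a, b in zip(x, x_hat))
print(z, round(err, 4))
```

The 2-D code z is the learned representation; applications such as denoising or anomaly detection in computer vision use exactly this bottleneck-plus-reconstruction structure.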

Essay:

1. a) Explain the word embedding technique GloVe in detail.
   b) Explain the importance of Autoencoders in Computer Vision.
2. a) Explain how deep learning improves the efficiency of natural language processing.
   b) Compare the Boltzmann Machine with the Deep Belief Network.
3. a) Explain any two word embedding techniques in detail.
   b) Illustrate the use of deep learning concepts in Speech Recognition.
4. a) Explain the merits and demerits of using Autoencoders in Computer Vision.
   b) Compare the Boltzmann Machine with the Deep Belief Network.
5. a) Explain three word embedding techniques.
   b) Explain the merits and demerits of different types of Autoencoders.
6. a) Explain natural language processing.
   b) Explain the designing of a Speech Recognition system.
7. a) Illustrate the use of representation learning in object classification.
   b) Compare the Boltzmann Machine with the Deep Belief Network.
8. Explain any three research areas of neural networks.
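For the word-embedding essay questions, similarity between embeddings is usually measured with cosine similarity. The 3-D vectors below are made-up illustrations, not real GloVe or word2vec outputs, which are 50-300 dimensional and learned from co-occurrence statistics:

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical 3-D embeddings for illustration only:
emb = {
    'king':  [0.8, 0.6, 0.1],
    'queen': [0.7, 0.7, 0.1],
    'apple': [0.1, 0.2, 0.9],
}
sim_kq = cosine_similarity(emb['king'], emb['queen'])
sim_ka = cosine_similarity(emb['king'], emb['apple'])
print(round(sim_kq, 3), round(sim_ka, 3))
```

Semantically related words end up with nearby vectors, so `king`/`queen` score much higher than `king`/`apple` here; that geometric property is what the embedding techniques in questions 1, 3 and 5 are designed to produce.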
