
COLLEGE OF ENGINEERING AND TECHNOLOGY

Department of Artificial Intelligence and Data Science


Question Bank
YEAR & SEMESTER : III & V
DEGREE & BRANCH : B.Tech (AI & DS)
SUBJECT CODE : AD3501
SUBJECT NAME : DEEP LEARNING
STAFF IN-CHARGE : A. RATHINAKUMARI, AP/AI&DS
REGULATION : 2021
UNIT – I
PART A (2 MARKS)
1. Differentiate supervised and unsupervised learning.(Nov/Dec-23)
2. What is stochastic gradient descent? (Nov/Dec-23)
3. Differentiate bias and variance.(Apr/May -2024)
4. Define L1 and L2 regularization.(Apr/May -2024)
5. Define Deep learning.
6. Define data augmentation.
7. What is activation function?
8. Explain the no free lunch theorem.
9. What is scalar? Give example.
10. Define Machine Learning.
11. Define Supervised learning.
12. What is regression?
13. What is multitask learning?
14. Explain the task of supervised learning.
15. Explain the task of Unsupervised learning.
16. What is backpropagation?
17. What is regularization?
18. What are dropouts?
19. Define ensemble model.
20. Define Optimization.
21. What is meant by hard and soft parameter sharing?
22. Difference between boosting and bagging.
23. Difference between L1 and L2 regularization.
24. What is meant by a deep feedforward network?
25. What are the applications of DL?
26. Difference between ML and DL.
27. Define point estimation.
28. What is meant by the cross-validation technique?
29. Define the terms overfitting and underfitting.

PART B (13 MARKS)

1. (a) (i)Discuss the Bias-Variance trade off. (7m)


(ii) Discuss overfitting and underfitting with example. (6m)(Nov/Dec-23)
(or)
(b) Explain the operation of deep feed network with a diagram.(13m)(Nov/Dec-23)

2. (a) (i) Illustrate with a suitable example how bias and variance affect the
model performance. (6m)
(ii) Apply one-hot encoding to the following dataset that details the winners of cricket
test series over a consecutive period of 5 years, and perform multiplication of the
resultant tensor with the matrix [1 2 1 2 1]. (7m)

Winning Country (Apr/May -2024)


India
Australia
New Zealand
India
Australia

(or)

(b) i) Explain in detail Hinge Loss and compute the total hinge loss measured against
the following predicted probabilistic values. (10m) (Apr/May -2024)

Actual values Predicted values


+1 0.987
-1 0.678
+1 0.345
-1 0.458
+1 0.124
-1 0.873
+1 0.521
-1 0.666
+1 0.879
-1 0.097

ii) How does the learning rate affect the performance of the model? (3m)
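
For question 2(a)(ii) above, the following is a minimal NumPy sketch of how the one-hot encoding and the multiplication could be set up. The column ordering of the countries and the treatment of [1 2 1 2 1] as a 1 x 5 row vector (so that the shapes are compatible with the 5 x 3 one-hot tensor) are illustrative assumptions, not part of the question.

import numpy as np

# Winners over 5 consecutive years (from the question).
winners = ["India", "Australia", "New Zealand", "India", "Australia"]

# Assumed category order for the one-hot columns.
categories = ["India", "Australia", "New Zealand"]

# Build the 5 x 3 one-hot tensor: one row per year, one column per country.
one_hot = np.array([[1 if w == c else 0 for c in categories] for w in winners])
print(one_hot)

# Multiply with the given matrix [1 2 1 2 1], treated here as a 1 x 5 row
# vector so the shapes are compatible (1x5 @ 5x3 -> 1x3).
m = np.array([[1, 2, 1, 2, 1]])
result = m @ one_hot
print(result)   # weighted count per country under the assumed ordering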
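
For question 2(b)(i) above, a small Python sketch of the per-sample hinge loss max(0, 1 - y * y_hat) summed over the table. Using the predicted values directly as scores is an assumption; if the course expects them to be rescaled (for example to [-1, 1]) first, only the predicted list would change.

# Total hinge loss using max(0, 1 - y * y_hat) on the values as given
# (treating the predicted values as raw scores is an assumption).
actual    = [+1, -1, +1, -1, +1, -1, +1, -1, +1, -1]
predicted = [0.987, 0.678, 0.345, 0.458, 0.124, 0.873, 0.521, 0.666, 0.879, 0.097]

losses = [max(0.0, 1.0 - y * p) for y, p in zip(actual, predicted)]
total_hinge = sum(losses)
print(losses)
print(total_hinge)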
3. Explain briefly about linear algebra basics: scalars, vectors and matrices.
4. Explain briefly about probability distributions with mass and density functions.
5. Explain briefly about Gradient based Optimization.
6. Discuss briefly about regression models.
7. State the difference between supervised & unsupervised learning.

8. Explain briefly about a machine learning task in terms of:


1) Performance measure (P)
2) The Experience (E)

9. Explain the concept of regularization(L1 and L2).
10. Explain briefly about Hyperparameter and validation sets.
11. Explain briefly about bias and variance of estimators.
12. Explain briefly about the challenges motivating Deep Learning.
13. Explain briefly about Multi task learning.
14. State the difference between AI, DL and ML.
15. Explain briefly about a deep feedforward network learning the XOR function.
16. Write brief notes on bagging and boosting.

CATEGORIES PART-A PART-B

(50-59) 1- 1-

(60-79) 1- 1-

(above 80) All the above

UNIT II
PART A: (2 MARKS)

1. What are sparse interactions in a convolutional neural network?(Nov/Dec-23)


2. Present an outline of pooling layer in convolutional neural network?(Nov/Dec-23)
3. Define a loss function for activation functions in CNNs. (Apr/May -2024)
4. State the purpose of activation functions in CNNs. (Apr/May -2024)
5. Define Convolutional Network.
6. What are the benefits of a convolutional network?
7. Why is sparse interaction beneficial?
8. What is Equivariance representation?
9. List the types of pooling.
10. Explain tiled convolution.
11. What is gradient descent?
12. What is loss function?
13. List the components of convolutional layer.
14. Define sparse interaction.
15. Explain padding in CNN.
16. What is tiled convolution?
PART B (13 MARKS)

1. (a) What is a convolutional neural network? Outline transposed and dilated convolutions with an
example. (13m)
(or)
(b) How to introduce non-linearity in a convolutional neural network?
Explain with an example. (13m)(Nov/Dec-23)
2. (a) Justify the need for an activation function in neural network models. Establish the non-
linearity of the model by applying the Softmax activation function for the neural model having
inputs x = {3, 5, 6, 8, 9} and varied weights w = {0.1, 0.2, 0.3, 0.2, 0.1}. The output of the neurons is
computed with a common bias of 0.2 over every neuron. (13m)(Apr/May -2024)
(or)
(b) Imagine you are designing an NLP task with the below function. The first statement is “Server
can you bring me this dish” and the second statement is “He crashed the server”. In both these
statements, the word server has different meanings and this relationship depends on the
following and preceding words in the statement. In this case suggest a suitable Deep Learning
model that helps the machine understand the relationship between the words in both directions.
Provide your answer in context to what all issues could not be addressed by a conventional
sequence prediction model. (13m)(Apr/May -2024)
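
For question 2(a) above, a minimal NumPy sketch of the Softmax computation. The assumption that each neuron's pre-activation is z_i = w_i * x_i + b with the common bias b = 0.2 is one plausible reading of the question, not the only one.

import numpy as np

# Assumption for illustration: each neuron's pre-activation is
# z_i = w_i * x_i + b with a common bias b = 0.2.
x = np.array([3.0, 5.0, 6.0, 8.0, 9.0])
w = np.array([0.1, 0.2, 0.3, 0.2, 0.1])
b = 0.2

z = w * x + b                       # pre-activations
z_shifted = z - z.max()             # subtract max for numerical stability
softmax = np.exp(z_shifted) / np.exp(z_shifted).sum()
print(z)
print(softmax)                      # outputs sum to 1, showing the non-linearity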
3. Explain briefly about Convolutional Neural Network.
4. Explain briefly about Gradient Computation
5. Explain briefly about Loss function
6. Explain briefly about CNN learning nonlinearity functions.
7. Explain briefly about Fully Connected Layer
8. Explain briefly about Pooling and Padding in CNN.
9. Explain briefly about Transpose and Dilated Convolution.
10. Explain briefly about different types of Convolution operation.
11. Write brief notes on applications, merits and demerits of CNN.

CATEGORIES PART-A PART-B

(50-59) 1- 1-

(60-79) 1- 1-

(above 80) All the above

UNIT III
PART A: (2 MARKS)
1. Define a recurrent neural network. (Nov/Dec-23)
2. What is LSTM? How does it differ from RNN? (Nov/Dec-23)
3. How does a bidirectional RNN differ from RNN? (Apr/May -2024)
4. List some real-time applications of the Encoder model.(Apr/May -2024)
5. What are recurrent neural networks?
6. Why is RNN called recurrent?
7. Explain advantages of RNN.
8. Explain disadvantages of RNN.
9. List the advantages of unfolding process.
10. How can you overcome the challenges of vanishing and exploding gradients?
11. Explain echo state networks of RNN.
12. Explain recursive neural networks.
13. What is long short-term memory?
14. List the components of LSTM network.
15. State the difference between ML and DL.
16. What are the types of RNN?
17. Define Leaky units.
PART B (13 MARKS)

1. (a) What is a bi-directional recurrent neural network? Explain the architecture


of a bi-directional recurrent neural network with a diagram.(13m)
(or)
(b) What is long short-term memory? Compare and contrast LSTM and gated
recurrent units. (13m)(Nov/Dec-23)
2. (a) Imagine yourself as a Deep Learning engineer and you are expected to design a machine
translation system that involves processing long sequences. In that situation would you prefer
using a regular RNN? If not, suggest a suitable RNN model for your sequence prediction task.
Support your answer with the architecture and function of the model and provide a comparison
note with that of a regular RNN. (13m) (Apr/May -2024)
(Or)
(b) (i) Provide a comparative note between LSTM and GRU. (Apr/May -2024)
(ii) Explain the working procedure of a self-attention layer with a suitable computation
example and differentiate it from multi-head attention. (Apr/May -2024)
3. Explain briefly about Unfolding graphs.
4. Write in detail about the basics of RNN.
5. i)Write the advantages and disadvantages of RNN.
ii) State the difference between RNN and CNN
6. Explain the types of Recurrent Neural Networks.
7. i) State the difference between RNNs and feed-forward neural networks.
ii) Explain about RNN Design patterns.
8. Explain briefly about Gradient Computation.
9. Explain briefly about Sequence Modeling Conditioned on Contexts.
10. Explain briefly about Bidirectional RNN.
11. Explain briefly about Sequence to sequence RNN.
12. Explain briefly about Deep Recurrent Networks.
13. Explain Recursive Neural network and What is the need for Recursive nets in NLP?
14. Explain briefly about Leaky Units and its types.
15. Write detailed notes on Gated Architectures: LSTM.

CATEGORIES PART-A PART-B

(50-59) 1- 1-

(60-79) 1- 1-

(above 80) All the above

UNIT IV
PART A: (2 MARKS)
1. What is a baseline model in deep learning? (Nov/Dec-23)
2. Define random search. (Nov/Dec-23)
3. Define MSE and MAE metrics of model evaluation. (Apr/May -2024)
4. State the Activation Function hyper parameter in a neural network and define the ReLU
activation function. (Apr/May -2024)
5. What are the reasons that training data can be limited?
6. Explain learning rate.
7. What is grid search?
8. What are hyperparameters?
9. Define random search.
10. What is the main reason why random search finds good solutions faster than grid search?
11. When does manual hyperparameter tuning work well?
12. Define precision
13. How is capacity controlled in manual hyperparameter tuning?
14. Define the term performance metrics.
PART B (13 MARKS)
1. (a) Discuss the various performance metrics to evaluate a deep learning
model with an example.(13m)
(or)
(b) What are hyperparameters? Discuss the steps to perform hyperparameter
tuning. (13m)(Nov/Dec-23)
2. (a) (i) Define model evaluation and state its importance; explain briefly on evaluating
the regression models, specifying the metrics. (Apr/May -2024)
(ii) Explain how to handle overfitting and underfitting during model evaluation. (13m)
(or)
(b) (i) Define Random search and brief on the working procedure for a machine
learning model. (5m)(Apr/May -2024)
(ii) Explain the different types of baseline models in detail.(8m)(Apr/May -2024)
3. (a) Discuss the various loss functions in neural networks.(15m)
(or)
(b) Discuss the steps involved in grid search with an example.(15m)(Nov/Dec-23)
4. Explain briefly about Performance Metrics.
5. Write short notes on Baseline Models.
6. Explain briefly about Hyperparameter tuning process.
7. Write short notes on: i) Manual hyperparameter ii) Automatic hyperparameter
iii) Grid search iv) Random search.
8. Explain Briefly about Debugging Strategies.
9. A quality engineer wants to solve a two-class classification problem for predicting whether
a product is defective. The actual number of products containing no defect is 950 (truly
predicted positives = 900); the actual number of defective products is 150 (truly predicted
negatives = 130). Calculate accuracy, precision, recall and F1 score.
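
A small Python sketch of the metric computations in question 9, under the assumption (suggested by the counts given) that the "no defect" class is treated as the positive class, so TP = 900, FN = 50, TN = 130 and FP = 20.

# Confusion-matrix metrics, assuming "no defect" is the positive class.
TP = 900            # non-defective products correctly predicted
FN = 950 - TP       # = 50, non-defective products missed
TN = 130            # defective products correctly predicted
FP = 150 - TN       # = 20, defective products wrongly flagged as non-defective

accuracy  = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall    = TP / (TP + FN)
f1        = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)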
10. Explain briefly about hyperparameter (Automatic & Manual)
11. Write brief notes on grid search & random search.

CATEGORIES PART-A PART-B

(50-59) 1- 1-

(60-79) 1- 1-

(above 80) All the above

UNIT V
PART A: (2 MARKS)
1. What is a regularized autoencoder? (Nov/Dec-23)
2. Define a stochastic encoder.(Nov/Dec-23)
3. Define dimensionality reduction for encoders. (Apr/May -2024)
4. State the role of the generator and discriminator in GANs. (Apr/May -2024)
5. What is an autoencoder?
6. What is the aim of an autoencoder?
7. What is regularization in autoencoder?
8. Is an autoencoder supervised or unsupervised?
9. Why do we use autoencoders?
10. What is a deep belief network used for?
11. Is the deep belief network supervised or unsupervised?
12. Explain key characteristics of the Boltzmann machine.
13. What is Boltzmann machine?
14. Define generative adversarial networks.
15. What is a sparse autoencoder?
16. What is meant by a denoising autoencoder?
PART B (13 MARKS)
1. (a) Justify your answer on how autoencoders are suitable compared to Principal
Component Analysis (PCA) for dimensionality reduction. (13m)
(or)
(b) What is a generative adversarial network? Explain the architecture of a
Generative adversarial network with a diagram.(13m)(Nov/Dec-23)
2. (a) A deep learning engineer wishes to extract more features to create a fake dataset
using DCGAN. Explain the formation of DCGAN with 3 hidden layers along
with appropriate activation functions and justify that the model outperforms the
GAN model. (Apr/May -2024)
(Or)
(b) A person wants to imitate the real-time data so as to ensure sample data distribution.
Suggest a suitable model for imitating the data and explain the stopping criteria of the
model with respect to its loss function. (13m)(Apr/May -2024)
Explain briefly about the architecture of Autoencoders.
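
As a rough illustration for question 2(a) above, the following PyTorch sketch builds a DCGAN-style generator with three hidden layers, using the usual DCGAN conventions of ReLU in the hidden layers and Tanh at the output. The 100-dimensional latent vector, the channel widths and the 32 x 32 single-channel output are assumed purely for illustration and are not part of the question.

import torch
import torch.nn as nn

# Sketch of a DCGAN-style generator with 3 hidden layers.
# Latent size, channel widths and output resolution are illustrative choices.
generator = nn.Sequential(
    # hidden layer 1: 100-d latent vector -> 4x4 feature maps
    nn.ConvTranspose2d(100, 128, kernel_size=4, stride=1, padding=0),
    nn.BatchNorm2d(128), nn.ReLU(inplace=True),
    # hidden layer 2: 4x4 -> 8x8
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
    nn.BatchNorm2d(64), nn.ReLU(inplace=True),
    # hidden layer 3: 8x8 -> 16x16
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
    nn.BatchNorm2d(32), nn.ReLU(inplace=True),
    # output layer: 16x16 -> 32x32 fake image, Tanh squashes to [-1, 1]
    nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
    nn.Tanh(),
)

z = torch.randn(16, 100, 1, 1)      # batch of 16 latent vectors
fake_images = generator(z)          # shape: (16, 1, 32, 32)
print(fake_images.shape)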

3. (a) (i) Consider an image data input in which the features are to be learnt in a
compressed way. Suggest a suitable model and explain the steps involved in
learning the features with a suitable diagram. (10m)(Apr/May -2024)
(ii) Data generation is possible using Auto encoders. Justify your answer with
a suitable example. (5m)(Apr/May -2024)
(or)
(b) Below is the output distribution of two models A and B. Identify the abnormalities
found in both models. Suggest an appropriate technique for improving the model
and explain in detail. (13m) (Apr/May -2024)

4. Write the uses and applications of Autoencoders.


5. Explain undercomplete autoencoders.
6. Explain Regularized Autoencoders and its types.
7. Explain briefly about Stochastic Encoders and Decoders.
8. What are deep generative models? Explain in detail.
9. What are generative adversarial networks? Explain their types.
10. Explain briefly about regularized autoencoder types.
11. Explain briefly about architecture of Boltzmann machine.
12. Generative adversarial network & its types.
13. Architecture of deep belief network.

CATEGORIES PART-A PART-B

(50-59) 1- 1-

(60-79) 1- 1-

(above 80) All the above

Prepared By
[A.RATHINAKUMARI, AP/AI&DS]

HOD/AI&DS          PRINCIPAL
