NNDL Internal I Key

This document outlines the internal exam for the VI Semester B.E. in Information Technology at Vasavi College of Engineering, focusing on Neural Networks and Deep Learning. It includes a series of questions covering key concepts such as representation learning, gradient descent, and various optimization algorithms, along with practical applications like constructing perceptrons and using backpropagation. The exam is scheduled for March 27, 2024, and is worth a total of 30 marks.


VASAVI COLLEGE OF ENGINEERING (Autonomous)

DEPARTMENT OF INFORMATION TECHNOLOGY


B.E. (CBCS) VI Semester, Internal Exam – I
Subject: NEURAL NETWORKS AND DEEP LEARNING (PC611IT)

Time: 11:00 am to 12:30 pm Date: 27-03-2024 Max. Marks: 30

Q No.   Description of the Question   Marks allotted
1. What is representation learning? 1
2. Explain the gradient descent algorithm in deep learning. 1
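For reference, the basic update rule an answer could state, with loss L(\theta) and learning rate \eta:

\theta_{t+1} = \theta_t - \eta \, \nabla_\theta L(\theta_t)

The parameters move a small step in the direction of steepest descent of the loss; \eta controls the step size.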
3. Illustrate sigmoid neuron. 1
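For reference, a standard formulation of the sigmoid neuron with weights w, bias b, and input x:

\hat{y} = \sigma(w^\top x + b), \qquad \sigma(z) = \frac{1}{1 + e^{-z}}

It is a smooth, differentiable counterpart of the perceptron's hard threshold.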
4. Explain how early stopping regularizes deep networks. 1
5. What is adversarial training? 1
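One common formulation an answer could cite is training on inputs perturbed by the fast gradient sign method (FGSM); the perturbation budget \epsilon below is an assumed hyperparameter:

x_{adv} = x + \epsilon \, \mathrm{sign}\big(\nabla_x L(\theta, x, y)\big)

The network is then trained on a mix of clean and adversarially perturbed examples so that small worst-case perturbations no longer change its predictions.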
6. Illustrate the importance of correct parameter initialization in DL training. 1
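For illustration, two widely used variance-scaling schemes (the choice depends on the activation function) are Xavier/Glorot and He initialization:

\mathrm{Var}(w) = \frac{2}{n_{in} + n_{out}} \ (\text{Xavier, for sigmoid/tanh}), \qquad \mathrm{Var}(w) = \frac{2}{n_{in}} \ (\text{He, for ReLU})

Keeping activations and gradients at a sensible scale in this way avoids vanishing or exploding signals at the start of training; initializing all weights to the same value would instead make every neuron in a layer learn identical features.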
7. Build OR logic gate functionality with perceptron using 3 inputs. 3
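A minimal sketch of one valid answer in Python (the unit weights and threshold of 0.5 are one of many workable choices):

# Perceptron computing OR over three binary inputs:
# weights of 1 and bias -0.5 fire whenever at least one input is 1.
def perceptron_or(x1, x2, x3):
    z = 1 * x1 + 1 * x2 + 1 * x3 - 0.5
    return 1 if z >= 0 else 0

# Truth-table check: the output is 0 only for input (0, 0, 0).
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", perceptron_or(a, b, c))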
8. Consider a fully connected network with 3 inputs x1, x2, x3. Suppose there is one hidden layer with 4 neurons having sigmoid activation functions. Further, the output layer is a softmax layer. Assume that all the weights in the network are set to 1 and all biases are set to 0. Write down the output of the network as a function of x = [x1, x2, x3]. 3
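A worked sketch of the expected answer, assuming the softmax layer has K output neurons (K is not fixed by the question). With all weights 1 and biases 0, every hidden neuron computes the same value h = \sigma(x_1 + x_2 + x_3), so every softmax pre-activation equals z_k = 4\,\sigma(x_1 + x_2 + x_3) and

y_k = \frac{e^{z_k}}{\sum_{j=1}^{K} e^{z_j}} = \frac{1}{K} \quad \text{for every } k,

i.e. the network outputs a uniform distribution over the K classes regardless of x.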
9. Produce the updates for the Nesterov Momentum optimization algorithm. 3
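For reference, one standard form of the Nesterov (look-ahead) momentum updates, with learning rate \eta and momentum coefficient \beta:

v_t = \beta v_{t-1} - \eta \, \nabla_\theta L\big(\theta_{t-1} + \beta v_{t-1}\big)
\theta_t = \theta_{t-1} + v_t

Unlike plain momentum, the gradient is evaluated at the look-ahead point \theta_{t-1} + \beta v_{t-1} rather than at the current parameters.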

10. Produce the updates for the Adam optimization algorithm. 3
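For reference, the standard Adam updates with learning rate \eta, decay rates \beta_1, \beta_2, and small constant \epsilon, where g_t = \nabla_\theta L(\theta_{t-1}):

m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t
v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2
\hat{m}_t = m_t / (1 - \beta_1^t), \qquad \hat{v}_t = v_t / (1 - \beta_2^t)
\theta_t = \theta_{t-1} - \eta \, \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)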

11.a) Construct gradient formulae for the parameters in the neural network below using the backpropagation algorithm. Consider an MLP having 2 inputs and 2 hidden layers, each having 3 neurons, and one neuron in the output layer. Assume the hidden layers use ReLU activation and the output layer uses sigmoid. 3
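A compact sketch of the expected formulae, assuming a binary cross-entropy loss L (the loss function is not stated in the question). Writing z^{(l)} = W^{(l)} h^{(l-1)} + b^{(l)} with h^{(0)} = x, h^{(l)} = \mathrm{ReLU}(z^{(l)}) for l = 1, 2, and \hat{y} = \sigma(z^{(3)}):

\delta^{(3)} = \hat{y} - y
\delta^{(l)} = \big(W^{(l+1)\top} \delta^{(l+1)}\big) \odot \mathbf{1}[z^{(l)} > 0], \quad l = 2, 1
\frac{\partial L}{\partial W^{(l)}} = \delta^{(l)} \, h^{(l-1)\top}, \qquad \frac{\partial L}{\partial b^{(l)}} = \delta^{(l)}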
11.b) For the above network, calculate the weight updates of one SGD iteration assuming all the weights are initialized to 0.1, biases to -0.1, input <1,1>, and the label is 0. 3
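A minimal numerical sketch of this calculation in Python (assuming a binary cross-entropy loss and a learning rate of 0.1, neither of which is fixed by the question):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
x = np.array([1.0, 1.0])                            # input <1,1>
y = 0.0                                             # label

W1 = np.full((3, 2), 0.1); b1 = np.full(3, -0.1)    # hidden layer 1 (ReLU)
W2 = np.full((3, 3), 0.1); b2 = np.full(3, -0.1)    # hidden layer 2 (ReLU)
W3 = np.full((1, 3), 0.1); b3 = np.full(1, -0.1)    # output layer (sigmoid)

# Forward pass
z1 = W1 @ x + b1;  h1 = np.maximum(z1, 0.0)         # z1 = 0.1,  h1 = 0.1
z2 = W2 @ h1 + b2; h2 = np.maximum(z2, 0.0)         # z2 = -0.07, h2 = 0
z3 = W3 @ h2 + b3; y_hat = sigmoid(z3)              # y_hat ~ 0.475

# Backward pass for L = -[y log(y_hat) + (1 - y) log(1 - y_hat)]
d3 = y_hat - y                                      # ~ 0.475
dW3 = np.outer(d3, h2); db3 = d3                    # dW3 = 0 because h2 = 0
d2 = (W3.T @ d3) * (z2 > 0)                         # 0: layer-2 ReLU inactive
dW2 = np.outer(d2, h1); db2 = d2                    # all zero
d1 = (W2.T @ d2) * (z1 > 0)
dW1 = np.outer(d1, x); db1 = d1                     # all zero

# SGD step: only the output bias actually changes in this iteration
b3 -= lr * db3                                      # b3 ~ -0.1475

Because the second hidden layer's pre-activations are negative (z2 = -0.07), its ReLU is inactive, so every gradient except the output-bias gradient is zero in this iteration.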
12.a) Explain dropout as a practical ensemble method in deep learning. 3
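A minimal sketch of inverted dropout at training time (the keep probability p is an assumed hyperparameter); each random mask trains a different thinned sub-network, and using the full network at test time approximates averaging that implicit ensemble:

import numpy as np

def dropout(h, p=0.5, training=True):
    # Inverted dropout: randomly zero activations and rescale by 1/p,
    # so expected activations match between training and test time.
    if not training:
        return h
    mask = (np.random.rand(*h.shape) < p) / p
    return h * mask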
12.b) Illustrate parameter sharing as a regularization technique in the case of image classification. 3
1. Convolutional Layers
2. Feature Maps
3. Reduced Parameter Space (see the parameter-count sketch after this list)
4. Translation Invariance
5. Example: In a CNN for image classification, the weights of a filter responsible for detecting edges or textures can be shared across all regions of the input image. This allows the network to learn generic features that are useful for classification tasks without being overly sensitive to the exact location of these features.
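A small parameter-count sketch illustrating points 1-3 above (the 32x32 grayscale input and layer sizes are assumed purely for illustration):

# Fully connected layer mapping a 32x32 image to 1024 hidden units:
fc_params = (32 * 32) * 1024 + 1024     # 1,049,600 parameters

# Convolutional layer with 32 filters of size 3x3, each filter's weights
# shared across every spatial position of the same image:
conv_params = 32 * (3 * 3 * 1) + 32     # 320 parameters

print(fc_params, conv_params)

The shared filters also give the translation invariance of point 4: the same weights respond to a pattern wherever it appears in the image.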
