Model Questions DWT
Uploaded by teamroboxen

DWT Model Questions

1. What is a multi-layer perceptron? How is it different from a single-layer perceptron?


2. What is the role of activation function in a neural network?
3. Explain the concept of a neural network and the roles of neurons, weights, and biases.
4. What is a cost function? State different cost functions used in regression and classification.
5. Write the learning algorithm for the perceptron model.
6. What is gradient descent? What are the types of gradient descent?
7. Explain the need for a multi-layer perceptron with an example.
8. Consider a neural network with one input layer, one hidden layer with 2 neurons, and one output layer with one neuron. Assume the neurons have a sigmoid activation function, actual output = 1, learning rate = 0.9.
The network parameters are as follows: inputs x1=0.35, x2=0.9. Weights and bias: input to hidden layer: w11=0.1, w12=0.3, w21=0.3, w22=0.4.
Hidden to output layer: wh1=0.45, wh2=0.65.
(i) Draw the architecture of the neural network with the given data.
(ii) Calculate the output of the network in the forward propagation.
(iii) Calculate the error at the output layer for the actual output Y=0.5.
(iv) Calculate the gradients of the weights for the hidden-to-output layer in the backward propagation.
(v) Calculate the gradients of the weights for the input-to-hidden layer in the backward propagation.
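The arithmetic in parts (ii)-(v) can be checked with a short sketch. Assumptions not fixed by the question: w_ij is taken to connect input i to hidden neuron j, the error is the squared error E = 1/2 (t - y)^2, and the target t = 0.5 from part (iii) is used.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Given data (assumption: w_ij connects input i to hidden neuron j)
x1, x2 = 0.35, 0.9
w11, w12, w21, w22 = 0.1, 0.3, 0.3, 0.4   # input -> hidden weights
wh1, wh2 = 0.45, 0.65                      # hidden -> output weights
t = 0.5                                    # target from part (iii)

# (ii) forward propagation
h1 = sigmoid(w11 * x1 + w21 * x2)
h2 = sigmoid(w12 * x1 + w22 * x2)
y = sigmoid(wh1 * h1 + wh2 * h2)           # network output, roughly 0.659

# (iii) squared error at the output layer
E = 0.5 * (t - y) ** 2

# (iv) gradients for the hidden -> output weights
delta_o = (y - t) * y * (1 - y)            # dE/dz at the output neuron
grad_wh1 = delta_o * h1
grad_wh2 = delta_o * h2

# (v) gradients for the input -> hidden weights (chain rule through wh)
delta_h1 = delta_o * wh1 * h1 * (1 - h1)
delta_h2 = delta_o * wh2 * h2 * (1 - h2)
grad_w11 = delta_h1 * x1
grad_w21 = delta_h1 * x2
grad_w12 = delta_h2 * x1
grad_w22 = delta_h2 * x2
```

A weight update would then be w := w - 0.9 * grad_w for each weight, using the given learning rate.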
9. State the difference between machine learning and deep learning.
10. Write the significance of the validation set in training a deep neural network.
11. Discuss the methods to avoid overfitting in deep neural networks.
12. Prove that the MLP is a cascade of non-linear functions.

13. Discuss the advantages of MLP over a single perceptron.


14. Discuss the ReLU activation function.
15. What is the dying ReLU problem? Explain with an example.
16. State how Leaky ReLU overcomes the dying ReLU problem.
17. Describe the role of convolution and pooling layers in a CNN.
18. Discuss the significance of using the padding technique in a convolutional layer with a suitable example.
19. Discuss the types of padding techniques used in CNNs with suitable examples.
20. Write the difference between valid padding, same padding, and full padding.
21. State and discuss the types of pooling in CNNs. Which pooling technique is widely used?
22. Discuss how early stopping combats overfitting.
23. What is a Recurrent Neural Network (RNN)? What is it used for?
24. State the limitations of the RNN model. How does LSTM overcome the limitations of RNN?
25. Differentiate between a feed-forward neural network and a Recurrent Neural Network.
26. What is an LSTM network? How does an LSTM network work?
27. Write the learning algorithm for the perceptron model.
28. State and discuss different nonlinear activation functions.
29. Write the perceptron learning algorithm.
30. Find the optimal weights of the perceptron which acts as an OR gate for the given data, keeping the bias (b=0) fixed. w1=0.6, w2=0.6 and learning rate (η)=0.5. Draw the resultant perceptron which acts as an OR gate with the optimal weights calculated.
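The training loop for the OR-gate case can be sketched as follows. One convention is assumed (the question does not state it): the step activation fires only for strictly positive input, step(z) = 1 if z > 0, else 0.

```python
def step(z):
    # assumed threshold convention: fire only for strictly positive input
    return 1 if z > 0 else 0

def train_perceptron(data, w, eta=0.5, max_epochs=20):
    """Perceptron learning rule with the bias fixed at b = 0."""
    for _ in range(max_epochs):
        updated = False
        for (x1, x2), t in data:
            y = step(w[0] * x1 + w[1] * x2)
            if y != t:                      # update only on a misclassification
                w[0] += eta * (t - y) * x1
                w[1] += eta * (t - y) * x2
                updated = True
        if not updated:                     # converged: one full error-free pass
            break
    return w

# OR-gate training data: ((x1, x2), target)
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train_perceptron(or_data, [0.6, 0.6])
```

Under this step convention, the given initial weights (0.6, 0.6) already classify all four OR patterns correctly, so the loop converges without any updates.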
31. Find the optimal weights of the perceptron which acts as an AND gate for the given data, keeping the bias (b=0) fixed. w1=1.2, w2=0.6 and learning rate (η)=0.5. Draw the resultant perceptron which acts as an AND gate with the optimal weights calculated.
32. Discuss the types of RNN with examples.
33. Discuss the advantages of LSTM over RNN.
34. Explain the architecture of an autoencoder.
35. What are the key differences between Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs)?
36. Discuss how dropout combats overfitting.
37. Discuss how regularization combats overfitting.
38. State the difference between a validation set and a test set. Discuss how validation sets are used in early stopping of the ANN model to combat overfitting.
39. State the variants of the ReLU activation function with their formulas.
40. Discuss the impact of the vanishing gradient problem on weight updates during backpropagation.
41. Discuss the dying ReLU problem with an example.

42. State the mathematical formulas for both the tanh and sigmoid functions and describe their output ranges.
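The two formulas asked for in question 42 can be written directly from their definitions; a minimal sketch:

```python
import math

def sigmoid(z):
    # sigmoid(z) = 1 / (1 + e^(-z)); output range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def tanh(z):
    # tanh(z) = (e^z - e^(-z)) / (e^z + e^(-z)); output range (-1, 1)
    return (math.exp(z) - math.exp(-z)) / (math.exp(z) + math.exp(-z))
```

Both are squashing functions: sigmoid is centred at 0.5 (sigmoid(0) = 0.5), while tanh is zero-centred (tanh(0) = 0), which is one reason tanh is often preferred in hidden layers.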
43. Given a CNN output of Z = [2.1, 5.5, -4.3], calculate the softmax probabilities for each class.
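The softmax calculation in question 43 can be verified with a short sketch (the max is subtracted before exponentiating, a standard numerical-stability trick that does not change the result):

```python
import math

def softmax(z):
    # softmax(z_i) = e^(z_i) / sum_j e^(z_j)
    m = max(z)                              # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax([2.1, 5.5, -4.3])
# probabilities are roughly [0.032, 0.968, 0.0001]; class 2 dominates
```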
45. Design a CNN for an image classification task with 10 classes. The CNN has a CONV1 layer with 8 filters, filter size 5x5, stride=1, padding=0. CONV1 is followed by a max-pooling layer with a 2x2 filter. The CONV2 layer has 16 filters, followed by a max-pooling layer.

a) Show the architecture of the above CNN model.

b) Find the number of parameters at each layer of CNN.

c) Find the total number of learnable parameters in the above CNN.
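The parameter counting in parts (b) and (c) can be sketched as below. The question leaves several things unspecified, so this sketch assumes a 32x32 grayscale input, a 5x5 filter size for CONV2 (stride 1, no padding), 2x2 pooling with stride 2, and a final fully connected layer to the 10 classes; with different assumptions the totals change.

```python
def conv_params(f, c_in, n_filters):
    # each filter has f*f*c_in weights plus one bias
    return (f * f * c_in + 1) * n_filters

def conv_out(n, f, stride=1, pad=0):
    # spatial output size of a convolution
    return (n - f + 2 * pad) // stride + 1

size = conv_out(32, 5)              # CONV1 feature map: 28x28
size //= 2                          # 2x2 max pool: 14x14 (0 parameters)
p1 = conv_params(5, 1, 8)           # CONV1 parameters: 208

size = conv_out(size, 5)            # CONV2 feature map: 10x10
size //= 2                          # 2x2 max pool: 5x5 (0 parameters)
p2 = conv_params(5, 8, 16)          # CONV2 parameters: 3216

flat = size * size * 16             # 400 flattened features
fc = flat * 10 + 10                 # fully connected layer to 10 classes: 4010
total = p1 + p2 + fc                # total learnable parameters: 7434
```

Note that the pooling layers contribute no learnable parameters; only the convolutional and fully connected layers do.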

46. What is a perceptron? Write the perceptron learning algorithm.

47. How is nonlinearity introduced in a CNN network?


48. How are weights initialized in neural networks?

49. Write the formula for finding the output shape of a convolutional layer, given the input size, filter size, stride, and padding in a CNN model.
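The formula asked for in question 49 is O = floor((N - F + 2P) / S) + 1, where N is the input size, F the filter size, P the padding, and S the stride. A minimal sketch with example values (the specific numbers are illustrative, not from the question):

```python
def conv_output_size(n, f, s=1, p=0):
    # O = floor((N - F + 2P) / S) + 1
    return (n - f + 2 * p) // s + 1

# valid padding: 32x32 input, 5x5 filter, stride 1, no padding -> 28x28
# same padding:  28x28 input, 3x3 filter, stride 1, padding 1  -> 28x28
```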

50. What are dropout and batch normalization?
