
Sample Questions for CT-2 | Subject: Deep Learning

Unit – 3

• Consider an input image of size 100×100×3. Suppose that we use 10 kernels (filters), each of size 1×1, with zero padding P = 1 and stride S = 2. How many parameters are there? (Assume no bias terms.)
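Hint: a 1×1 filter over 3 input channels has 1·1·3 = 3 weights, so 10 filters give 30 parameters in total; padding and stride change the output size, not the parameter count. A minimal PyTorch check, assuming torch is installed:

    import torch.nn as nn

    conv = nn.Conv2d(in_channels=3, out_channels=10, kernel_size=1,
                     stride=2, padding=1, bias=False)
    print(sum(p.numel() for p in conv.parameters()))  # 30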

• Calculate the output of the following convolution operation, where the input image matrix, the kernel, and the stride are as given in the image; assume no padding. Show the calculations for each element of the output matrix.
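Hint: the matrices themselves are in the referenced image, but the mechanics are generic. A minimal NumPy sketch of strided valid cross-correlation (the "convolution" used in CNNs), with a hypothetical 4×4 input and 2×2 kernel standing in for the ones in the image:

    import numpy as np

    def conv2d(image, kernel, stride):
        # slide the kernel over the image with the given stride, no padding
        kh, kw = kernel.shape
        oh = (image.shape[0] - kh) // stride + 1
        ow = (image.shape[1] - kw) // stride + 1
        out = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
                out[i, j] = np.sum(patch * kernel)  # elementwise product, then sum
        return out

    image = np.arange(16).reshape(4, 4)   # hypothetical input matrix
    kernel = np.array([[1, 0], [0, -1]])  # hypothetical kernel
    print(conv2d(image, kernel, stride=2))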

• Compare max pooling and average pooling on the basis of how they work and their use cases.
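Hint, as a PyTorch sketch: max pooling keeps the strongest activation in each window (useful for detecting whether a feature is present at all), while average pooling smooths over the window (common in global pooling heads):

    import torch
    import torch.nn.functional as F

    x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)
    print(F.max_pool2d(x, kernel_size=2))  # the max of each 2x2 window
    print(F.avg_pool2d(x, kernel_size=2))  # the mean of each 2x2 window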

• Draw a basic outline of the AlexNet architecture and discuss how AlexNet revolutionized the field of computer vision and deep learning by demonstrating the effectiveness of deep architectures on large datasets.
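Hint: the layer-by-layer outline can be printed from torchvision (assuming a recent torchvision is installed); the classic stack is five convolutional layers followed by three fully connected layers, with ReLU activations and max pooling:

    from torchvision.models import alexnet

    print(alexnet(weights=None))  # untrained; structure only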

• Evaluate the impact of skip connections in ResNet on the network's ability to train deeper architectures and prevent degradation. How do these connections enhance the performance and optimization of the model compared to traditional deep networks without skip connections?
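Hint, as a minimal PyTorch sketch of the idea (identity shortcut, same channel count): the block learns a residual F(x) and outputs F(x) + x, so the addition gives gradients a direct path back through the network:

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)

        def forward(self, x):
            out = torch.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return torch.relu(out + x)  # skip connection: identity path for gradients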

• Compare ResNet and VGG-16 in terms of architecture, parameters, depth, and applications.
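Hint: the parameter comparison can be checked empirically (assuming torchvision). VGG-16 is shallower but far heavier, since most of its parameters sit in the fully connected layers; ResNet-50 is deeper yet much smaller:

    from torchvision.models import resnet50, vgg16

    for model in (vgg16(weights=None), resnet50(weights=None)):
        n = sum(p.numel() for p in model.parameters())
        print(type(model).__name__, f"{n:,}")  # VGG-16 ~138M vs ResNet-50 ~25.6M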

• Analyse the impact of using pre-trained models in transfer learning for computer vision tasks. How do the choice of pre-trained model and the fine-tuning process influence the accuracy and generalization of the model on a new dataset?
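Hint, as a common fine-tuning recipe sketched in PyTorch (the class count is hypothetical, and the weights argument assumes a recent torchvision): freeze the pre-trained backbone, train only a new classification head, and optionally unfreeze deeper layers later at a low learning rate:

    import torch.nn as nn
    from torchvision.models import resnet18

    num_classes = 10  # hypothetical target dataset
    model = resnet18(weights="IMAGENET1K_V1")  # ImageNet pre-trained backbone
    for p in model.parameters():
        p.requires_grad = False                # freeze the backbone
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head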

• Devise strategies to overcome the challenges encountered in designing deeper convolutional neural networks (CNNs), such as vanishing gradients and overfitting.
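Hint: the usual remedies can be read off a single block: batch normalization stabilizes gradient scale, dropout (together with weight decay) fights overfitting, and residual shortcuts (see the ResNet sketch above) keep gradients flowing in very deep stacks. A minimal PyTorch sketch:

    import torch.nn as nn

    block = nn.Sequential(
        nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(64),    # normalizes activations, easing gradient flow
        nn.ReLU(inplace=True),
        nn.Dropout2d(p=0.1),   # regularization against overfitting
    )
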
Unit – 4

• Analyse the vanishing gradient problem in RNNs mathematically and suggest some techniques that can mitigate this issue.
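Hint: with hidden state h_i = σ(W_hh h_{i−1} + W_xh x_i + b), backpropagation through time multiplies one Jacobian per step; in LaTeX form:

    \frac{\partial \mathcal{L}_t}{\partial h_k}
      = \frac{\partial \mathcal{L}_t}{\partial h_t}
        \prod_{i=k+1}^{t} \operatorname{diag}\bigl(\sigma'(z_i)\bigr)\, W_{hh},
    \qquad z_i = W_{hh} h_{i-1} + W_{xh} x_i + b

If ‖W_hh‖ · max σ′(z_i) < 1, this product shrinks exponentially in t − k, so early time steps receive almost no gradient. Mitigations include gated architectures (LSTM/GRU), gradient clipping (for the exploding case), and careful weight initialization.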

• Evaluate the limitations of vanilla RNNs in capturing long-term dependencies in sequential data. Analyse the underlying reasons for this inadequacy, including issues related to vanishing gradients, and compare traditional RNN architectures with advanced alternatives like LSTMs and GRUs.
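Hint: the decay is easy to observe empirically; in this small PyTorch experiment (random data, illustrative only), the gradient reaching the first time step is typically orders of magnitude smaller than at the last:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    rnn = nn.RNN(input_size=8, hidden_size=8)       # vanilla tanh RNN
    x = torch.randn(100, 1, 8, requires_grad=True)  # (seq_len, batch, features)
    out, _ = rnn(x)
    out[-1].sum().backward()                        # loss depends on the last step only
    print(x.grad[0].norm().item(), x.grad[-1].norm().item())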

• Compare LSTMs and GRUs in terms of architecture, parameters, computational efficiency, and learning long-term dependencies.
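Hint: the parameter comparison follows directly from the gate counts (an LSTM has four gate/candidate transforms, a GRU three, a vanilla RNN one); a quick PyTorch check:

    import torch.nn as nn

    for cell in (nn.RNN(32, 64), nn.GRU(32, 64), nn.LSTM(32, 64)):
        n = sum(p.numel() for p in cell.parameters())
        print(type(cell).__name__, n)  # parameter counts in the ratio 1 : 3 : 4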

• Compare LSTMs and Bidirectional LSTMs in terms of architecture and use cases. Also briefly describe the architecture of other LSTM variants, such as ConvLSTM and stacked LSTM.
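Hint: in PyTorch the first two variants are constructor flags, while ConvLSTM (convolutional rather than fully connected gates, for spatio-temporal input) has no built-in module and is usually written as a custom cell:

    import torch.nn as nn

    bi_lstm = nn.LSTM(32, 64, bidirectional=True)  # reads the sequence in both directions
    stacked = nn.LSTM(32, 64, num_layers=3)        # each layer feeds the one above it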

• Assess the necessity of incorporating attention mechanisms in recurrent neural networks (RNNs) for tasks involving sequential data.
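Hint, as a minimal dot-product attention sketch in PyTorch (names and shapes are illustrative): instead of compressing a whole sequence into one hidden state, the decoder re-weights all encoder states at every step:

    import torch

    def attend(query, keys, values):
        scores = keys @ query                   # relevance of each encoder state, shape (seq_len,)
        weights = torch.softmax(scores, dim=0)  # attention distribution over time steps
        return weights @ values                 # context vector: weighted sum of states

    keys = values = torch.randn(10, 64)  # hypothetical encoder hidden states
    query = torch.randn(64)              # hypothetical decoder state
    print(attend(query, keys, values).shape)  # torch.Size([64])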

• List the applications of RNNs, LSTMs, GRUs, and attention-based RNNs in various fields, such as natural language processing, time series forecasting, and computer vision. Assess how each architecture contributes to solving specific challenges in these domains and suggest areas where their application could be expanded or improved.

Unit – 5

• Evaluate the factors that have led to GPUs becoming the most widely used accelerators for deep learning workloads. Analyse the architectural advantages of GPUs over CPUs.

• Briefly describe the design and purpose of the deep learning models listed in the following table.

• Compare TensorFlow and PyTorch on the basis of their features.
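Hint: a side-by-side flavor of the two APIs (assuming both libraries are installed). Both now default to eager, define-by-run execution, with graph compilation available on demand (torch.compile, tf.function):

    import torch
    import tensorflow as tf

    print(torch.tensor([1.0, 2.0]) * 2)  # PyTorch: eager by default
    print(tf.constant([1.0, 2.0]) * 2)   # TensorFlow 2.x: also eager by default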

• Evaluate the effectiveness of 3D-CNNs, ConvLSTMs, and Vision Transformers in video analytics. Compare their performance across tasks such as action recognition, object detection, and scene understanding, and critically analyse the strengths and limitations of each model.
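Hint on input handling: a 3D-CNN convolves jointly over time and space, a ConvLSTM processes frames recurrently with convolutional gates, and a Vision Transformer attends over spatio-temporal patches. The first is a one-liner in PyTorch (shapes are illustrative):

    import torch
    import torch.nn as nn

    clip = torch.randn(1, 3, 16, 112, 112)  # (batch, channels, frames, height, width)
    conv3d = nn.Conv3d(3, 64, kernel_size=3, padding=1)
    print(conv3d(clip).shape)  # joint spatio-temporal features: (1, 64, 16, 112, 112)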
