11 DL
Unit – 3
Consider an input image of size 100×100×3. Suppose we use 10 kernels (filters), each of
size 1×1, with zero padding P=1 and stride S=2. How many parameters are there?
(Assume no bias terms.)
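Hint: padding and stride do not add parameters; the weight count of a convolutional
layer is K_h × K_w × C_in × N_filters = 1 × 1 × 3 × 10 = 30.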
Calculate the output of the following convolution operation, where the input image
matrix, the kernel, and the stride are as given in the image; assume no padding. Show
the calculations for each element of the output matrix.
Compare Max Pooling and Average Pooling in terms of how they work and their typical
use cases.
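Illustrative sketch (assumed 2×2 window values): max pooling keeps the strongest
activation in each window, while average pooling smooths over it.

    import numpy as np

    window = np.array([[1, 3],
                       [2, 8]])    # one 2x2 pooling window
    print(window.max())            # max pooling     -> 8
    print(window.mean())           # average pooling -> 3.5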
Draw a basic outline of the architecture of AlexNet and discuss how AlexNet
revolutionized the field of computer vision and deep learning by demonstrating the
effectiveness of deep learning architectures on large datasets.
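Illustrative sketch (assuming torchvision >= 0.13): printing the canonical AlexNet
stack of 5 convolutional layers with interleaved max pooling, followed by 3 fully
connected layers.

    import torch
    from torchvision.models import alexnet

    model = alexnet(weights=None)    # untrained AlexNet (5 conv + 3 FC layers)
    print(model)                     # shows the Conv/ReLU/MaxPool/Dropout/Linear outline
    x = torch.randn(1, 3, 224, 224)  # AlexNet expects 224x224 RGB input
    print(model(x).shape)            # torch.Size([1, 1000]) -- ImageNet's 1000 classes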
Evaluate the impact of skip connections in ResNet on the network's ability to train
deeper architectures and prevent degradation. How do these connections enhance the
performance and optimization of the model compared to traditional deep networks
without skip connections?
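Illustrative sketch of a basic residual block in PyTorch; the identity skip connection
gives the gradient a direct path around the convolutional stack, which is what lets
very deep ResNets keep training without degradation.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        # Basic block: output = ReLU(F(x) + x), where F is two 3x3 conv + batch-norm layers.
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)

        def forward(self, x):
            out = torch.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return torch.relu(out + x)   # skip connection: add the input back

    block = ResidualBlock(64)
    print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])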
Analyse the impact of using pre-trained models in transfer learning for computer vision
tasks. How does the choice of the pre-trained model and the fine-tuning process
influence the accuracy and generalization of the model on a new dataset?
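Illustrative fine-tuning sketch (the ResNet-18 backbone and the 10-class target task
are assumed; torchvision >= 0.13 API):

    import torch.nn as nn
    from torchvision.models import resnet18, ResNet18_Weights

    model = resnet18(weights=ResNet18_Weights.DEFAULT)   # ImageNet pre-trained backbone
    for p in model.parameters():
        p.requires_grad = False                          # freeze the feature extractor
    model.fc = nn.Linear(model.fc.in_features, 10)       # new head for the target task
    # Only model.fc is trained on the new dataset; unfreezing deeper layers trades
    # more adaptation capacity against a higher risk of overfitting small datasets.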
Analyse the Vanishing Gradient problem in RNNs mathematically and suggest some
techniques that can mitigate this issue.
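Hint (standard notation): for a vanilla RNN with h_t = tanh(W_hh h_{t-1} + W_xh x_t + b),
backpropagation through time multiplies one Jacobian per step,

\[
\frac{\partial \mathcal{L}_T}{\partial h_k}
  = \frac{\partial \mathcal{L}_T}{\partial h_T}
    \prod_{t=k+1}^{T} \frac{\partial h_t}{\partial h_{t-1}},
\qquad
\frac{\partial h_t}{\partial h_{t-1}}
  = \operatorname{diag}\!\left(1 - h_t^{2}\right) W_{hh},
\]

so the norm of the product is bounded by (gamma ||W_hh||)^{T-k}, with gamma <= 1 from the
tanh derivative. When gamma ||W_hh|| < 1 the gradient shrinks exponentially with the time
gap T - k (vanishing); when it exceeds 1 it can explode.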
Compare LSTMs and Bidirectional LSTMs in terms of architecture and use cases. Also
briefly describe the architecture of other LSTM variants such as ConvLSTM and
Stacked LSTM.
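Illustrative PyTorch sketch (tensor sizes are assumed) contrasting a stacked LSTM with
a bidirectional LSTM:

    import torch
    import torch.nn as nn

    x = torch.randn(8, 20, 32)   # (batch, sequence length, features)
    # Stacked LSTM: num_layers > 1 feeds each layer's output sequence into the next layer.
    stacked = nn.LSTM(input_size=32, hidden_size=64, num_layers=2, batch_first=True)
    # Bidirectional LSTM: a second LSTM reads the sequence in reverse; outputs are concatenated.
    bilstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True, bidirectional=True)
    print(stacked(x)[0].shape)   # torch.Size([8, 20, 64])
    print(bilstm(x)[0].shape)    # torch.Size([8, 20, 128]) -- 64 forward + 64 backward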
List the applications of RNNs, LSTMs, GRUs, and attention-based RNNs in various
fields, such as natural language processing, time series forecasting, and computer
vision. Assess how each architecture contributes to solving specific challenges in these
domains and suggest areas where their application could be expanded or improved.
Unit – 5
Evaluate the factors that have led to GPUs becoming the most widely used accelerators
for deep learning workloads. Analyse the architectural advantages of GPUs over CPUs.
Briefly describe the design and purpose of the deep learning models listed in the
following table.