Final Unit 2 Questions.
subunit
1 1 Explain how data augmentation techniques can improve the performance of a CNN like AlexNet when trained on a relatively small dataset. Provide specific examples of augmentation methods (e.g., rotation, …).
1 2 You are using an AlexNet model pre-trained on ImageNet to classify images of flowers from a new dataset with 5 classes. Describe the steps you would take to apply transfer learning for this task. Include details …
1 3 Given the LeNet-5 architecture originally designed for MNIST (grayscale images of size 28×28), describe how you would modify it to train on the CIFAR-10 dataset (coloured images of size 32×32×3). Include changes …
2 4 Given an input image size of 64×64×3 and a CNN architecture with two convolutional layers, followed by a max pooling layer, explain how to compute the output size after each layer. Assume the first convolutional layer …
2 5 Given a CNN with an input of size 32×32×3, a first convolutional layer with 8 filters of size 3×3, stride 1, and 'same' padding, followed by a ReLU activation, and then a max pooling layer with pool size 2×2 and …
2 6 A CNN uses an input size of 64×64×1 and has two consecutive convolutional layers. The first layer has 32 filters of size 3×3 with stride 1 and 'same' padding. The second layer has 64 filters of size 5×5 with stride …
3 7 Explain what a dilated convolutional layer is and describe how it differs from a standard convolutional layer in a neural network. Why might dilated convolutions be useful in tasks such as semantic segmentation?
3 8 Given an input feature map of size 32×32 and a dilated convolutional layer with a filter size of 3×3, a dilation rate of 2, and a stride of 1 with 'same' padding, calculate the size of the output feature map. Show …
3 9 How does increasing the dilation rate in a dilated convolutional layer affect the computational complexity during the forward pass? If a layer has a dilation rate of 1, and another has a dilation rate of 4 with a …
4 10 What are the key differences between L1 (Lasso) and L2 (Ridge) regularization techniques? Provide examples of when each technique is preferred.
4 11 How is data augmentation different from regularization techniques? Explain how each approach helps to improve the performance of a machine learning model.
4 12 What is the dropout regularization technique in neural networks? Describe its primary function.
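Questions 4 to 9 above all reduce to the standard output-size relation output = floor((input + 2*padding - effective_filter) / stride) + 1, where the effective size of a dilated kernel is dilation*(filter - 1) + 1. A minimal Python sketch follows; the pool stride of 2 in question 5 and the padding values are assumptions, since those details are not fully stated in the questions.

    def conv_output_size(in_size, filter_size, stride=1, padding=0, dilation=1):
        """Spatial size of a conv/pool output along one dimension."""
        effective = dilation * (filter_size - 1) + 1   # footprint of a dilated kernel
        return (in_size + 2 * padding - effective) // stride + 1

    # Question 5: 32x32x3 input, 8 filters of 3x3, stride 1, 'same' padding (pad = 1)
    after_conv1 = conv_output_size(32, 3, stride=1, padding=1)    # 32 ('same' keeps size)
    after_pool = conv_output_size(after_conv1, 2, stride=2)       # 16 (assumed pool stride 2)

    # Question 8: 32x32 input, 3x3 filter, dilation 2, stride 1, 'same' padding
    # effective kernel = 2*(3-1)+1 = 5, so 'same' means padding 2 and the size stays 32
    after_dilated = conv_output_size(32, 3, stride=1, padding=2, dilation=2)   # 32

    print(after_conv1, after_pool, after_dilated)   # 32 16 32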
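Questions 1 and 2 concern augmentation and transfer learning with AlexNet. A hedged sketch using torchvision is given below (it assumes torchvision 0.13 or later for the weights enum; the particular transforms, the 5-class head, and the choice to freeze the feature extractor are illustrative, not the only acceptable answer).

    import torch.nn as nn
    from torchvision import models, transforms

    # Question 1: typical augmentations for a small training set
    train_tfms = transforms.Compose([
        transforms.RandomResizedCrop(224),                 # AlexNet expects 224x224 inputs
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(15),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
    ])

    # Question 2: transfer learning - reuse the ImageNet features, retrain only the head
    model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    for p in model.features.parameters():
        p.requires_grad = False                            # freeze the convolutional layers
    model.classifier[6] = nn.Linear(4096, 5)               # new output layer for 5 flower classes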
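Questions 10 to 12 cover L1/L2 regularization and dropout. A minimal PyTorch sketch is below; the layer sizes, dropout probability, and penalty coefficients are placeholders.

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(128, 64),
        nn.ReLU(),
        nn.Dropout(p=0.5),    # question 12: randomly zeroes activations during training
        nn.Linear(64, 10),
    )

    # Question 10: L2 (Ridge) is usually applied as weight decay in the optimiser...
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

    # ...while an L1 (Lasso) penalty is added to the loss explicitly and drives weights to exact zero.
    def l1_penalty(model, lam=1e-5):
        return lam * sum(p.abs().sum() for p in model.parameters())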
subunit section D
1 1 What does a higher stride value in a CNN typically result in?
1 2 What is the role of filters in a CNN?
2 3 What type of pooling retains the maximum value from a region?
2 4 What is the effect of zero-padding on the feature map size after convolution?
3 5 Which CNN variant is used to increase spatial resolution during image generation tasks?
4 6 How can CNNs be used in unsupervised learning?
4 7 What is the role of random weights in CNN feature extraction?
4 8 Which unsupervised learning technique can be applied to CNN features for clustering images?
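For questions 6 to 8 of section D, one common recipe is to use a CNN (pretrained, or, per question 7, even randomly initialised) as a fixed feature extractor and then cluster its features with an unsupervised algorithm such as k-means. A hedged sketch follows; the use of AlexNet, the batch size, the cluster count, and the scikit-learn dependency are assumptions for illustration.

    import torch
    from torch import nn
    from torchvision import models
    from sklearn.cluster import KMeans

    # A CNN used purely as a feature extractor (no labels involved)
    backbone = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    backbone.classifier = nn.Identity()      # keep the flattened convolutional features
    backbone.eval()

    images = torch.rand(16, 3, 224, 224)     # placeholder batch; real images go here
    with torch.no_grad():
        feats = backbone(images)             # shape (16, 9216)

    # Question 8: unsupervised clustering of the CNN features, e.g. k-means
    cluster_ids = KMeans(n_clusters=4, n_init=10).fit_predict(feats.numpy())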