
SIC Machine Learning & Deep Learning Instructor: Dr. Usman Nazir
Quiz Max Marks: 100
Date: Dec 4, 2024 Time Allowed: 20 min

Name: Muhammad Hussnain Roll #: 324

CLO 1: Recall and Understand Foundational Concepts

Topics: Foundations of Neural Networks and Deep Learning

• Question 1: (Remember)
Define the Universal Approximation Theorem and explain its significance in neural
networks. (Marks: 5)

The Universal Approximation Theorem states that a feedforward neural network
with a single hidden layer can approximate any continuous function on a compact
domain to arbitrary accuracy, given enough neurons and a non-linear activation
function.

Significance: It shows that neural networks are expressive enough to model
arbitrarily complex relationships, although it says nothing about how to find
the required weights.
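A minimal sketch of the theorem in practice, assuming a one-hidden-layer
network with 64 tanh units fitting the continuous target sin(x):

import torch
import torch.nn as nn

# One hidden layer plus a non-linear activation, as the theorem requires
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=0.01)

x = torch.linspace(-3, 3, 200).unsqueeze(1)
y = torch.sin(x)  # hypothetical continuous target to approximate

for _ in range(2000):
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()  # mean squared approximation error
    loss.backward()
    opt.step()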

• Question 2: (Understand)
Describe the role of each component in a neural network (weights, biases, activation
functions). How do these components collectively enable learning? (Marks: 10)

• Weights: Represent the strength of the connections between neurons,
adjusting during training to minimize loss.
• Biases: Allow shifting the activation function, enabling the network to fit the
data more flexibly.
• Activation Functions: Introduce non-linearity, enabling the network to learn
complex patterns.

Collective Role: Together, these components transform inputs layer by layer,
enabling the network to learn relationships between input features and output
targets, as the sketch below illustrates.
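A minimal sketch of one neuron combining all three components (the weight and
bias values are hypothetical):

import torch

x = torch.tensor([1.0, 2.0])         # inputs
W = torch.tensor([[0.5, -0.2]])      # weights: connection strengths
b = torch.tensor([0.1])              # bias: shifts the pre-activation
y = torch.relu(W @ x + b)            # activation: introduces non-linearity
print(y)                             # tensor([0.2000])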

CLO 2: Apply Linear Algebra Techniques to Neural Networks

Topics: Linear Algebra Essentials for Deep Learning

• Question 3: (Apply)
Given the 2D transformation matrix T = [[2, 1], [1, 3]], apply this transformation
to the vector v⃗ = [1, 2]ᵀ and interpret the result in the context of neural network
transformations. (Marks: 10)

To apply a 2D transformation matrix T to a vector v⃗, compute Tv⃗. Here:
Tv⃗ = [2·1 + 1·2, 1·1 + 3·2]ᵀ = [4, 7]ᵀ.
The input vector is scaled and mixed across dimensions, a combination of
scaling and shearing.
Interpretation: In neural networks, every linear layer applies exactly this kind
of transformation, multiplying inputs by a learned weight matrix; stacked with
non-linearities, such transformations reshape the input space, which is
important for tasks like image recognition.
• Question 4: (Apply)
Demonstrate how matrix multiplication is used to compute the forward pass in a
single-layer neural network with inputs 𝑥⃗ and weights W. (Marks: 10)

The forward pass computes outputs as y = XW (plus a bias b, if present), where:
X: Input matrix (features).
W: Weight matrix.
A minimal sketch is shown below.
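A sketch of this forward pass in PyTorch (the input and weight values are
hypothetical):

import torch

X = torch.tensor([[1.0, 2.0]])    # input matrix: one sample, two features
W = torch.tensor([[0.5, -1.0],
                  [0.3,  0.8]])   # weight matrix: two inputs -> two outputs
b = torch.tensor([0.1, 0.0])      # bias vector
y = X @ W + b                     # forward pass: y = XW + b
print(y)                          # tensor([[1.2000, 0.6000]])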

CLO 3: Analyze Optimization Techniques in Training

Topics: Gradient Descent and Optimization

• Question 5: (Analyze)
Consider a simple quadratic loss function 𝐿 = (𝑦 − 𝑦̂)², where 𝑦̂ = 𝑤𝑥 + 𝑏. Analyze
how gradient descent updates the parameters w and b to minimize L. Include partial
derivative computations. (Marks: 15)
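A worked sketch of the expected derivation, assuming a single training pair
(x, y) and learning rate η:

With L = (y − ŷ)² and ŷ = wx + b, the chain rule gives
∂L/∂w = −2x(y − ŷ) and ∂L/∂b = −2(y − ŷ).
Gradient descent then updates
w ← w − η ∂L/∂w = w + 2ηx(y − ŷ),
b ← b − η ∂L/∂b = b + 2η(y − ŷ),
so each step moves w and b against the gradient, shrinking the error y − ŷ.

The same updates in code (the training point and learning rate are hypothetical):

# Gradient descent on L = (y - y_hat)^2 with y_hat = w*x + b
w, b, eta = 0.0, 0.0, 0.05
x, y = 2.0, 5.0                     # hypothetical training point
for step in range(100):
    y_hat = w * x + b
    dL_dw = -2 * x * (y - y_hat)    # partial derivative of L w.r.t. w
    dL_db = -2 * (y - y_hat)        # partial derivative of L w.r.t. b
    w -= eta * dL_dw
    b -= eta * dL_db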

• Question 6: (Understand)
Differentiate between batch gradient descent, stochastic gradient descent (SGD), and
mini-batch gradient descent, citing their pros and cons in terms of computational
efficiency and convergence. (Marks: 10)

• Batch Gradient Descent: Uses the entire dataset per update.
Pros: Stable convergence.
Cons: Computationally expensive for large datasets.
• Stochastic Gradient Descent (SGD): Uses one sample per update.
Pros: Faster updates.
Cons: Noisy convergence.
• Mini-Batch Gradient Descent: Uses small subsets of the data per update.
Pros: Balances efficiency and stability.
Cons: Requires choosing a batch size.
The three variants differ only in how much data feeds each update, as the
sketch below shows.
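A minimal sketch of the three variants, differing only in the DataLoader batch
size (the dataset here is hypothetical random data):

import torch
from torch.utils.data import DataLoader, TensorDataset

data = TensorDataset(torch.randn(1000, 10), torch.randint(0, 2, (1000,)))

full_batch = DataLoader(data, batch_size=len(data))  # batch GD: whole dataset per update
sgd        = DataLoader(data, batch_size=1)          # SGD: one sample per update
mini_batch = DataLoader(data, batch_size=32)         # mini-batch: small subset per update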

CLO 4: Evaluate the Role of CNNs in Image Recognition

Topics: Convolutional Neural Networks (CNNs)

• Question 7: (Evaluate)
Compare the performance of convolutional layers and fully connected layers in image
recognition tasks. Which is better for spatial data, and why? (Marks: 10)

• Convolutional Layers: Extract spatial features efficiently by sharing a
small kernel across the image, ideal for images.
• Fully Connected Layers: Capture global information but lose spatial
locality and require far more parameters.
• Conclusion: Convolutional layers are superior for spatial data due to
their ability to preserve local patterns, as the parameter comparison
below suggests.
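One way to see the efficiency gap is to count parameters; a sketch assuming a
28x28 single-channel input and 32 output feature maps:

import torch.nn as nn

conv = nn.Conv2d(1, 32, kernel_size=3)   # 32 * (1*3*3) + 32 = 320 parameters
fc   = nn.Linear(28 * 28, 32 * 26 * 26)  # dense layer producing the same output size

print(sum(p.numel() for p in conv.parameters()))  # 320
print(sum(p.numel() for p in fc.parameters()))    # 16,981,120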

• Question 8: (Understand)
Discuss the importance of pooling layers in CNNs. How do max pooling and average
pooling differ in their effects on feature extraction? (Marks: 10)

Pooling reduces feature map dimensions, enhancing computational efficiency
and reducing overfitting.

• Max Pooling: Keeps the most prominent feature in each window.
• Average Pooling: Averages the features in each window, retaining more
general information.
The difference is easy to see on a toy feature map (sketch below).
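A minimal sketch on a hypothetical 2x2 feature map:

import torch
import torch.nn as nn

x = torch.tensor([[[[1., 2.],
                    [3., 4.]]]])   # one 1-channel 2x2 feature map

print(nn.MaxPool2d(2)(x))  # tensor([[[[4.]]]]): keeps the strongest activation
print(nn.AvgPool2d(2)(x))  # tensor([[[[2.5000]]]]): keeps the average response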



CLO 5: Create and Experiment with Neural Networks

Cumulative Question Covering All Topics

• Question 9: (Create)
Design a CNN architecture using PyTorch for a digit classification task (e.g., MNIST
dataset). Specify the number of layers, activation functions, and loss function used,
and include the training process. (Marks: 20)

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Define CNN architecture (assuming 28x28 single-channel MNIST inputs)
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3)   # 28x28 -> 26x26
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(2, 2)                 # 26x26 -> 13x13
        self.fc1 = nn.Linear(32 * 13 * 13, 10)         # 10 digit classes

    def forward(self, x):
        x = self.relu(self.conv1(x))
        x = self.pool(x)
        x = x.view(-1, 32 * 13 * 13)   # flatten for the fully connected layer
        x = self.fc1(x)
        return x

# Load MNIST training data
train_data = datasets.MNIST(root='data', train=True, download=True,
                            transform=transforms.ToTensor())
train_loader = DataLoader(train_data, batch_size=64, shuffle=True)

# Instantiate model, define loss and optimizer
model = CNN()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Training loop
for epoch in range(10):
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

Training Process:
• Batch Size: 64.
• Epochs: 10.
• Learning Rate: 0.001.
• Output: Model learns to classify digits with increasing accuracy over
epochs.
