DR Basit Assignments
Code
python3
print("hello World")
hello World
*image_Class
import numpy as np
import cv2
#img = cv2.imread("C:/Users/Agha/Desktop/images/gray.jpg")
img = cv2.imread("C:/Users/Agha/Desktop/images/gray.jpg", 0)
#a = 10
#b = "hello world"
#c = [10, 10, 7, 4, 5.63, "Ali"]
#print(a)
#print(b)
#print(c)
#print(img.shape)
print(img.shape[0] * img.shape[1] * 1)
print(img[0, :1])
cv2.imshow("WIND", img)
cv2.waitKey(0)
print("Welcome")
*Linear Classification
import numpy as np
import cv2
labels = ["dog", "cat", "panda"]
np.random.seed(1)
W = np.random.randn(3, 3072)
b = np.random.randn(3)
orig = cv2.imread("dog.jpg")
image = cv2.resize(orig, (32, 32)).flatten()
scores = W.dot(image) + b
# show the scoring function values and display the image (lin_classifier.py)
for (label, score) in zip(labels, scores):
    print("[INFO] {}: {:.2f}".format(label, score))
# draw the label with the highest score on the image as our prediction
cv2.putText(orig, "Label: {}".format(labels[np.argmax(scores)]), (10, 30),
    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
cv2.imshow("Image", orig)
cv2.waitKey(0)
Q-Implement all the code of Chapter 4 by "Jason Brownlee" with OpenCV. The
code is written using the Python package Pillow; you need to convert it to OpenCV.
import cv2
import numpy as np
# Create a blank white canvas to draw on (the 300x300 size is an assumed choice)
image = np.full((300, 300, 3), 255, dtype=np.uint8)
# Draw a line
start_point = (50, 50)
end_point = (200, 50)
color = (255, 0, 0) # Blue in BGR
thickness = 2
cv2.line(image, start_point, end_point, color, thickness)
# Draw a rectangle
top_left = (50, 80)
bottom_right = (200, 130)
color = (0, 255, 0) # Green in BGR
thickness = 2
cv2.rectangle(image, top_left, bottom_right, color, thickness)
# Draw a polygon
pts = np.array([[150, 250], [170, 220], [200, 230], [190, 270]], np.int32)
pts = pts.reshape((-1, 1, 2))
cv2.polylines(image, [pts], isClosed=True, color=(0, 0, 0), thickness=2)
# Draw text
text = "OpenCV Text"
org = (10, 280)
font = cv2.FONT_HERSHEY_SIMPLEX
font_scale = 0.7
color = (0, 0, 0)
thickness = 2
cv2.putText(image, text, org, font, font_scale, color, thickness, cv2.LINE_AA)
# Display the result
cv2.imshow("Drawing", image)
cv2.waitKey(0)
PyTorch is a toolkit that helps you build smart computer programs that can learn from data —
like recognizing images, understanding text, or playing games.
1. Tensors – like fancy, multi-dimensional arrays (like NumPy) that can run fast on GPUs.
2. Automatic differentiation – it keeps track of how data flows through operations, so it
can automatically compute gradients (which are crucial for training neural networks).
import torch
print(torch.__version__)
print("CUDA Available:", torch.cuda.is_available())
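A minimal sketch of the two ideas above (the tensor values are made up for illustration):
import torch

# A tensor that tracks gradients
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

# A simple computation: y = sum(x^2)
y = (x ** 2).sum()

# Automatic differentiation: dy/dx = 2x
y.backward()
print(x.grad)  # tensor([2., 4., 6.])

# Move a tensor to the GPU if CUDA is available
if torch.cuda.is_available():
    x = x.detach().to("cuda")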
Scoring Function
A scoring function is a way to evaluate how good a model is — usually on unseen/test data. It
tells you how well the model performs using a performance metric like accuracy, F1 score, or R-
squared.
Example:
-In classification: accuracy, precision, recall
-In regression: R² score, mean absolute error
Scoring = Evaluation
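A minimal sketch of a scoring metric (accuracy) with NumPy, using made-up labels and predictions:
import numpy as np

# Made-up true labels and model predictions
y_true = np.array([1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0])

# Accuracy = fraction of correct predictions
accuracy = np.mean(y_true == y_pred)
print("Accuracy:", accuracy)  # 0.8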
Loss Function
A loss function is used during training to measure how wrong the model’s predictions are. The
goal of training is to minimize this loss so the model gets better.
Example:
-In regression: Mean Squared Error (MSE)
-In classification: Cross Entropy Loss
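A minimal sketch of a training loss (MSE) with NumPy, on made-up targets and predictions:
import numpy as np

# Made-up regression targets and predictions
y_true = np.array([3.0, 5.0, 2.5])
y_pred = np.array([2.5, 5.0, 4.0])

# Mean Squared Error: average of squared differences
mse = np.mean((y_true - y_pred) ** 2)
print("MSE:", mse)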
Q-Differentiate between simple loss, SVM loss (hinge loss), cross entropy loss.
1. Simple Loss (Mean Squared Error)
Used for: Regression
Formula:
MSE = (1/n) Σ (yᵢ − ŷᵢ)²
Key Points:
-Punishes large errors more (because of squaring).
-Smooth gradients — great for regression.
2. SVM Loss (Hinge Loss)
Used for: Binary classification (especially in Support Vector Machines)
Penalizes predictions that are not confidently correct.
Formula (binary labels: y ∈ {−1, +1}):
Hinge loss = max(0, 1 − y⋅ŷ)
Example:
If y = 1 and ŷ = 0.6,
then hinge loss = max(0,1−1×0.6)=0.4
If y = 1 and ŷ = 1.5,
then hinge loss = max(0, 1 − 1×1.5) = 0 → good prediction!
Key Points:
-Encourages margin between classes.
-Used in SVMs or margin-based classifiers.
3. Cross-Entropy Loss
Used for: Classification (Binary or Multi-class)
Measures the difference between true label distribution and predicted probability
distribution.
Key Points:
-Handles probability distributions.
-Most common for softmax outputs.
-Strongly penalizes confident wrong predictions.
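A minimal NumPy sketch of the hinge and cross-entropy losses, reusing the example values above (the cross-entropy probabilities are made up):
import numpy as np

def hinge_loss(y, y_hat):
    # Binary label y in {-1, +1}; penalizes predictions that are not confidently correct
    return np.maximum(0, 1 - y * y_hat)

print(hinge_loss(1, 0.6))  # 0.4 -> not confident enough
print(hinge_loss(1, 1.5))  # 0.0 -> confidently correct

def binary_cross_entropy(y, p):
    # True label y in {0, 1}, predicted probability p for class 1
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

print(binary_cross_entropy(1, 0.9))   # small loss: confident and correct
print(binary_cross_entropy(1, 0.01))  # large loss: confident but wrong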
2. Gradient Descent. Implement the code with examples.
Gradient Descent
Gradient Descent is an optimization algorithm that minimizes the loss by updating parameters
in the direction of the negative gradient.
Code
import numpy as np
import matplotlib.pyplot as plt
# Generate synthetic data: y = 2x + 3 + noise
np.random.seed(42)
X = np.linspace(0, 10, 100)
Y = 2 * X + 3 + np.random.normal(0, 1, size=X.shape)
# Reshape for consistency
X = X.reshape(-1, 1)
Y = Y.reshape(-1, 1)
Implementation
import numpy as np
import matplotlib.pyplot as plt
# Step 1: Generate data (y = 3x + 7 + noise)
np.random.seed(42)
X = np.linspace(0, 10, 100).reshape(-1, 1)
Y = 3 * X + 7 + np.random.randn(100, 1) * 2
# Step 2: Bias Trick - Add 1s column to X
X_bias = np.hstack([X, np.ones_like(X)]) # shape: (100, 2)
# Step 3: Initialize weights (including bias term)
w = np.zeros((2, 1)) # two weights: one for x and one for bias
# Hyperparameters
lr = 0.01
epochs = 1000
n = X.shape[0]
# Step 4: Gradient Descent
for epoch in range(epochs):
    # Predict
    Y_pred = X_bias @ w  # (100x2) @ (2x1) = (100x1)
    # Compute loss (MSE)
    loss = np.mean((Y - Y_pred) ** 2)
    # Gradient
    dw = (-2/n) * X_bias.T @ (Y - Y_pred)
    # Update weights
    w -= lr * dw
    # Print every 100 epochs
    if epoch % 100 == 0:
        print(f"Epoch {epoch}: Loss = {loss:.4f}, w = {w.ravel()}")
# Step 5: Plot the result
plt.scatter(X, Y, label='Data', alpha=0.6)
plt.plot(X, X_bias @ w, color='red', label='Model (bias trick)')
plt.title("Linear Regression with Bias Trick")
plt.xlabel("X")
plt.ylabel("Y")
plt.legend()
plt.show()
Q-What is mini-batch gradient descent? Implement in Python.
Mini-Batch Gradient Descent
Mini-Batch Gradient Descent is a variation of gradient descent where the model is updated
using a small subset (mini-batch) of the training data instead of:
-All data (Batch GD)
-One sample (Stochastic GD)
Implement
import numpy as np
import matplotlib.pyplot as plt
# Step 1: Generate synthetic data
np.random.seed(1)
X = np.linspace(0, 10, 100).reshape(-1, 1)
Y = 5 * X + 2 + np.random.randn(100, 1) * 2 # y = 5x + 2 + noise
# Step 2: Add bias trick (append ones to X)
X_bias = np.hstack([X, np.ones_like(X)]) # (100, 2)
# Step 3: Initialize weights
w = np.zeros((2, 1)) # [slope, bias]
# Hyperparameters
learning_rate = 0.01
epochs = 1000
batch_size = 20
n = X.shape[0]
# Step 4: Mini-Batch Gradient Descent
for epoch in range(epochs):
    # Shuffle the data
    indices = np.random.permutation(n)
    X_shuffled = X_bias[indices]
    Y_shuffled = Y[indices]
    # Create mini-batches
    for i in range(0, n, batch_size):
        X_batch = X_shuffled[i:i + batch_size]
        Y_batch = Y_shuffled[i:i + batch_size]
        # Forward pass
        Y_pred = X_batch @ w
        # Gradient computation
        dw = (-2 / batch_size) * X_batch.T @ (Y_batch - Y_pred)
        # Update weights
        w -= learning_rate * dw
    # Print loss every 100 epochs
    if epoch % 100 == 0:
        Y_pred_full = X_bias @ w
        loss = np.mean((Y - Y_pred_full) ** 2)
        print(f"Epoch {epoch}: Loss = {loss:.4f}, w = {w.ravel()}")
# Step 5: Plot the results
plt.scatter(X, Y, label='Data', alpha=0.6)
plt.plot(X, X_bias @ w, color='red', label='Mini-Batch GD Model')
plt.xlabel("X")
plt.ylabel("Y")
plt.title("Mini-Batch Gradient Descent")
plt.legend()
plt.grid(True)
plt.show()
Q-What are the main steps of the Neural Network algorithm?
1. Initialize Parameters (Weights & Biases)
-Randomly initialize weights (W) and biases (b) for all layers.
-Use techniques like He or Xavier initialization for better performance.
2. Forward Propagation
-Input data is passed through the network layer by layer.
-Each neuron computes:
z = W⋅x + b,  a = activation(z)
3. Compute Loss
-The output from the last layer is compared to the true labels using a loss function:
-MSE for regression
-Cross-Entropy for classification
4. Backward Propagation
-Compute the gradient of the loss with respect to each weight and bias using the chain rule.
5. Update Parameters
-Adjust W and b in the direction of the negative gradient (e.g., with gradient descent).
6. Repeat
-Iterate forward propagation, loss computation, backpropagation, and updates for many epochs.
7. Make Predictions
-After training, use the forward pass on new data to make predictions.
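A minimal NumPy sketch of these steps for a tiny one-hidden-layer network on made-up data (layer sizes, learning rate, and epoch count are arbitrary choices):
import numpy as np

np.random.seed(0)
X = np.random.randn(100, 2)                  # made-up inputs
Y = (X[:, :1] + X[:, 1:] > 0).astype(float)  # made-up binary labels

# 1. Initialize parameters (weights & biases)
W1, b1 = np.random.randn(2, 4) * 0.5, np.zeros((1, 4))
W2, b2 = np.random.randn(4, 1) * 0.5, np.zeros((1, 1))

lr = 0.1
for epoch in range(1000):  # 6. repeat for many epochs
    # 2. Forward propagation
    z1 = X @ W1 + b1
    a1 = np.maximum(0, z1)          # ReLU activation
    z2 = a1 @ W2 + b2
    a2 = 1 / (1 + np.exp(-z2))      # sigmoid output
    # 3. Compute loss (binary cross-entropy)
    loss = -np.mean(Y * np.log(a2 + 1e-8) + (1 - Y) * np.log(1 - a2 + 1e-8))
    # 4. Backward propagation (chain rule)
    dz2 = (a2 - Y) / len(X)
    dW2, db2 = a1.T @ dz2, dz2.sum(axis=0, keepdims=True)
    dz1 = (dz2 @ W2.T) * (z1 > 0)
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0, keepdims=True)
    # 5. Update parameters
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

# 7. Make predictions (here on the training data, using the last forward pass)
preds = (a2 > 0.5).astype(float)
print("Training accuracy:", np.mean(preds == Y))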
Q-What is the learning rate α in updating the parameters W and b?
The learning rate (denoted as α or η) is a crucial hyperparameter in machine learning and neural
networks. It controls how much the model's weights (W) and biases (b) are adjusted during
training.
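A minimal sketch of how α scales each update step, using a made-up one-parameter quadratic loss:
# Toy loss: L(w) = (w - 4)^2, with gradient dL/dw = 2*(w - 4); minimum at w = 4
def grad(w):
    return 2 * (w - 4)

for alpha in [0.01, 0.1, 1.1]:
    w = 0.0
    for _ in range(50):
        w -= alpha * grad(w)  # gradient descent update: w := w - alpha * dL/dw
    print(f"alpha={alpha}: w after 50 steps = {w:.4f}")

# A small alpha converges slowly, a moderate alpha reaches the minimum,
# and a too-large alpha overshoots and diverges.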
Parameter Update Rule:
During training, we update weights and biases using gradient descent: