DR Basit Assignments

*Play with Python3 and the terminal, type some Python code.

Code
$ python3
>>> print("hello World")
hello World

*Creating a new environment

Terminal commands:
$ sudo apt update
$ sudo apt upgrade
$ sudo apt install python3-venv python3-pip -y
$ mkdir -p ~/.virtualenvs
$ python3 -m venv ~/.virtualenvs/MS_cv
$ source ~/.virtualenvs/MS_cv/bin/activate
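
Once activated, the prompt shows (MS_cv). As a quick check (these verification commands are an addition, not part of the original steps), confirm the venv's interpreter is the one in use:

$ which python3
$ python3 --version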

*Install pip for Python 3:

Code
sudo apt install python3-pip -y

pip install opencv-python
pip install pillow matplotlib

Updating pip:

python3 -m pip install --upgrade pip
(on Windows: python.exe -m pip install --upgrade pip)

*Load an image both in grayscale and RGB

import cv2
from PIL import Image

# path
path = r'C:/Users/Agha/Desktop/images/flower.jpg'

# Using Pillow: open the image, then convert modes
img = Image.open(path)
# Convert to RGB
img_rgb = img.convert("RGB")
# Convert to grayscale
img_gray = img.convert("L")
# Show image modes
print("RGB mode:", img_rgb.mode)
print("Grayscale mode:", img_gray.mode)

# Using cv2.imread() method
# cv2.IMREAD_GRAYSCALE (or 0) reads the image in grayscale mode
img_cv_gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
# cv2.IMREAD_COLOR reads in BGR order; convert to RGB with cvtColor
img_cv_bgr = cv2.imread(path, cv2.IMREAD_COLOR)
img_cv_rgb = cv2.cvtColor(img_cv_bgr, cv2.COLOR_BGR2RGB)

# Displaying the grayscale image
cv2.imshow('image', img_cv_gray)
cv2.waitKey(0)
cv2.destroyAllWindows()
* Implement the k-NN algorithm

Download the animal dataset from the provided link and save it in a folder that is easily accessible from your code.

import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imutils import paths
import os

print('[INFO] loading images...')
ds_path = "/home/abasit/Documents/animals/"
imagePaths = list(paths.list_images(ds_path))

data, labels = [], []
width, height = 32, 32

for (i, imagePath) in enumerate(imagePaths):
    # Load each image and take its class label from the directory name
    image = cv2.imread(imagePath)
    label = imagePath.split(os.path.sep)[-2]
    image = cv2.resize(image, (width, height))
    data.append(image)
    labels.append(label)

# Flatten each 32x32x3 image into a 3072-dimensional feature vector
data = np.array(data)
labels = np.array(labels)
data = data.reshape((data.shape[0], 3072))

# Encode the string labels as integers
le = LabelEncoder()
labels = le.fit_transform(labels)

# 75/25 train/test split
(trainX, testX, trainY, testY) = train_test_split(data, labels,
    test_size=0.25, random_state=42)

print("[INFO] evaluating k-NN classifier...")
model = KNeighborsClassifier(n_neighbors=1)
model.fit(trainX, trainY)
print(classification_report(testY, model.predict(testX),
    target_names=le.classes_))
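
The report above uses k = 1; a quick way to compare a few values of k (an illustrative addition, not part of the assignment code):

for k in (1, 3, 5):
    model = KNeighborsClassifier(n_neighbors=k)
    model.fit(trainX, trainY)
    print("k =", k, "accuracy =", model.score(testX, testY))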

*image_Class
import numpy as np
import cv2

#img = cv2.imread("C:/Users/Agha/Desktop/images/gray.jpg")
img = cv2.imread("C:/Users/Agha/Desktop/images/gray.jpg", 0)  # 0 = grayscale
#a = 10
#b = "hello world"
#c = [10, 10, 7, 4, 5.63, "Ali"]
#print(a)
#print(b)
#print(c)
#print(img.shape)

# Total number of values (rows x columns x 1 channel)
print(img.shape[0] * img.shape[1] * 1)
# First pixel of the first row
print(img[0, :1])
cv2.imshow("WIND", img)
cv2.waitKey(0)
print("Welcome")
*Linear Classification (lin_classifier.py)
import numpy as np
import cv2

labels = ["dog", "cat", "panda"]
np.random.seed(1)

# Randomly initialize the weight matrix and bias vector
W = np.random.randn(3, 3072)
b = np.random.randn(3)

# Load the image, resize to 32x32, and flatten into a 3072-d vector
orig = cv2.imread("dog.jpg")
image = cv2.resize(orig, (32, 32)).flatten()

# Compute the scoring function values
scores = W.dot(image) + b

# Show the scoring function values
for (label, score) in zip(labels, scores):
    print("[INFO] {}: {:.2f}".format(label, score))

# Draw the label with the highest score on the image as our prediction
cv2.putText(orig, "Label: {}".format(labels[np.argmax(scores)]), (10, 30),
    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
cv2.imshow("Image", orig)
cv2.waitKey(0)

Q-Implement all the code of Chapter 4 by “Jason Brownlee” with OpenCV. The original code uses the Python package Pillow; you need to convert it to OpenCV.
import cv2
import numpy as np

# Create a blank white image


width, height = 400, 300
image = np.ones((height, width, 3), dtype=np.uint8) * 255

# Draw a line
start_point = (50, 50)
end_point = (200, 50)
color = (255, 0, 0) # Blue in BGR
thickness = 2
cv2.line(image, start_point, end_point, color, thickness)

# Draw a rectangle
top_left = (50, 80)
bottom_right = (200, 130)
color = (0, 255, 0) # Green in BGR
thickness = 2
cv2.rectangle(image, top_left, bottom_right, color, thickness)

# Draw a filled rectangle


top_left_filled = (220, 80)
bottom_right_filled = (370, 130)
cv2.rectangle(image, top_left_filled, bottom_right_filled, (0, 255, 255), -1)
# -1 means filled
# Draw an ellipse
center_coordinates = (100, 200)
axes_length = (50, 20)
angle = 0
start_angle = 0
end_angle = 360
color = (0, 0, 255) # Red in BGR
cv2.ellipse(image, center_coordinates, axes_length, angle, start_angle,
end_angle, color, thickness)

# Draw a filled ellipse


cv2.ellipse(image, (300, 200), (50, 20), 0, 0, 360, (255, 0, 255), -1)

# Draw a polygon
pts = np.array([[150, 250], [170, 220], [200, 230], [190, 270]], np.int32)
pts = pts.reshape((-1, 1, 2))
cv2.polylines(image, [pts], isClosed=True, color=(0, 0, 0), thickness=2)

# Draw text
text = "OpenCV Text"
org = (10, 280)
font = cv2.FONT_HERSHEY_SIMPLEX
font_scale = 0.7
color = (0, 0, 0)
thickness = 2
cv2.putText(image, text, org, font, font_scale, color, thickness,
cv2.LINE_AA)

# Save the image


cv2.imwrite('chapter4_opencv_output.jpg', image)

# Display the image (optional)


cv2.imshow('Image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
*What is PyTorch? Explain in your own words.

PyTorch is a toolkit that helps you build smart computer programs that can learn from data —
like recognizing images, understanding text, or playing games.

It gives you two main things:

1. Tensors – like fancy, multi-dimensional arrays (like NumPy) that can run fast on GPUs.
2. Automatic differentiation – it keeps track of how data flows through operations, so it
can automatically compute gradients (which are crucial for training neural networks).
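
A minimal sketch showing both ideas together (assuming PyTorch is installed; the tensor values are arbitrary):

import torch

# A tensor that records operations for autograd
x = torch.tensor([2.0, 3.0], requires_grad=True)

# A simple computation: y = sum(x^2)
y = (x ** 2).sum()

# Autograd computes dy/dx = 2x automatically
y.backward()
print(x.grad)  # tensor([4., 6.])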

*Install PyTorch with Python


python --version
pip install torch torchvision torchaudio

Open Python or a Jupyter notebook and run:

import torch
print(torch.__version__)
print ("CUDA Available:", torch.cuda.is_available())

*Write sample code in PyTorch.


import torch

# Create two tensors


a = torch.tensor([2.0, 3.0])
b = torch.tensor([4.0, 5.0])

# Add the tensors


c = a + b

# Print the result


print("Tensor a:", a)
print("Tensor b:", b)
print("Tensor c (a + b):", c)
Q-What is a scoring function and a loss? Explain with an example.

Scoring Function
A scoring function is a way to evaluate how good a model is — usually on unseen/test data. It tells you how well the model performs using a performance metric like accuracy, F1 score, or R-squared.
Example:
-In classification: accuracy, precision, recall
-In regression: R² score, mean absolute error

Scoring = Evaluation

Loss Function
A loss function is used during training to measure how wrong the model’s predictions are. The
goal of training is to minimize this loss so the model gets better.
Example:
-In regression: Mean Squared Error (MSE)
-In classification: Cross Entropy Loss

Loss = Error = What the model tries to minimize
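
A small sketch of the difference on toy values (NumPy only; the numbers are invented for illustration):

import numpy as np

y_true = np.array([1, 0, 1, 1])           # true labels
y_prob = np.array([0.9, 0.2, 0.6, 0.3])   # predicted probabilities

# Scoring (evaluation): accuracy after thresholding at 0.5
y_pred = (y_prob >= 0.5).astype(int)
print("Accuracy (score):", np.mean(y_pred == y_true))   # 0.75

# Loss (training signal): binary cross-entropy on the same predictions
bce = -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))
print("Cross-entropy (loss):", round(bce, 4))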

Q-Differentiate between simple loss, SVM loss (hinge loss), cross entropy loss.

1. Simple Loss (e.g., Mean Squared Error - MSE)


Used for: Regression
Measures the average squared difference between predicted and actual values.

Formula:

MSE = (1/n) Σᵢ (yᵢ − ŷᵢ)²

Key Points:
-Punishes large errors more (because of squaring).
-Smooth gradients — great for regression.
2. SVM Loss (Hinge Loss)
Used for: Binary classification (especially in Support Vector Machines)
Penalizes predictions that are not confidently correct.
Formula (binary labels: y ∈ {−1, +1}):

Hinge loss = max(0, 1 − y·ŷ)

Example:

If y = 1 and ŷ = 0.6,
then hinge loss = max(0, 1 − 1×0.6) = 0.4

If y = 1 and ŷ = 1.5,
then hinge loss = max(0, 1 − 1×1.5) = 0 → good prediction!

Key Points:
-Encourages margin between classes.
-Used in SVMs or margin-based classifiers.

3. Cross-Entropy Loss
Used for: Classification (binary or multi-class)
Measures the difference between the true label distribution and the predicted probability distribution.

Formula (one-hot labels y, predicted probabilities ŷ):

Cross-entropy = −Σᵢ yᵢ log(ŷᵢ)

Key Points:
-Handles probability distributions.
-Most common for softmax outputs.
-Strongly penalizes confident wrong predictions.
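
To make the differences concrete, a small sketch computing all three losses on toy values (NumPy only; the numbers are invented for illustration):

import numpy as np

# Simple loss: MSE for regression
y_true = np.array([3.0, 5.0])
y_pred = np.array([2.5, 6.0])
mse = np.mean((y_true - y_pred) ** 2)
print("MSE:", mse)  # (0.25 + 1.0) / 2 = 0.625

# Hinge loss: labels in {-1, +1}, raw scores
y = np.array([1, -1])
y_hat = np.array([0.6, -1.5])
hinge = np.mean(np.maximum(0, 1 - y * y_hat))
print("Hinge:", hinge)  # (0.4 + 0.0) / 2 = 0.2

# Cross-entropy: one-hot label vs predicted probabilities
p_true = np.array([0.0, 1.0, 0.0])
p_pred = np.array([0.1, 0.7, 0.2])
ce = -np.sum(p_true * np.log(p_pred))
print("Cross-entropy:", round(ce, 4))  # -log(0.7) ≈ 0.3567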
Q-Gradient Descent. Implement the code with examples.

Gradient Descent
Gradient Descent is an optimization algorithm that minimizes the loss by updating parameters
in the direction of the negative gradient.
Code
import numpy as np
import matplotlib.pyplot as plt
# Generate synthetic data: y = 2x + 3 + noise
np.random.seed(42)
X = np.linspace(0, 10, 100)
Y = 2 * X + 3 + np.random.normal(0, 1, size=X.shape)
# Reshape for consistency
X = X.reshape(-1, 1)
Y = Y.reshape(-1, 1)
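
The snippet above only prepares the data; a minimal descent loop to complete the example (the learning rate and epoch count are arbitrary choices, mirroring the batch implementation below):

# Fit y = w*x + b by gradient descent on the data above
w, b = 0.0, 0.0
lr = 0.01
n = len(X)
for epoch in range(1000):
    Y_pred = w * X + b
    # Gradients of the MSE loss with respect to w and b
    dw = (-2 / n) * np.sum(X * (Y - Y_pred))
    db = (-2 / n) * np.sum(Y - Y_pred)
    w -= lr * dw
    b -= lr * db
print("w =", round(w, 2), "b =", round(b, 2))  # should approach w = 2, b = 3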

Q-Types of Gradient Descent.


Gradient Descent comes in three main types, depending on how much data is used to compute
the gradient at each step.

1. Batch Gradient Descent (BGD)


Uses all training data to compute the gradient at every step.
Characteristics:
-Stable and accurate.
-Slower on large datasets.
-Requires entire dataset to fit in memory.
Good for:
-Small to medium datasets.

2. Stochastic Gradient Descent (SGD)


Updates the model using only one sample at a time.
Characteristics:
-Faster updates.
-Noisy (fluctuating loss curve).
-Can escape local minima due to noise.
Good for:
-Very large datasets.
-Online learning. (A minimal SGD sketch appears after this list.)
3. Mini-Batch Gradient Descent
Combines both: uses a small batch (e.g., 32 or 64 samples) to compute gradients.
Characteristics:
-More stable than SGD.
-Faster than full batch GD.
-Works well with GPU acceleration
Good for:
-Most deep learning tasks.
-Standard in practice (used with libraries like PyTorch & TensorFlow).
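
Since only the batch and mini-batch versions are implemented in the following questions, here is a minimal SGD sketch (the synthetic data and hyperparameters are illustrative):

import numpy as np

np.random.seed(0)
X = np.linspace(0, 10, 100)
Y = 3 * X + 1 + np.random.randn(100)

w, b = 0.0, 0.0
lr = 0.001
for epoch in range(50):
    # Visit the samples in random order, updating after each one
    for i in np.random.permutation(len(X)):
        error = Y[i] - (w * X[i] + b)
        # Per-sample gradient step on the squared error
        w += lr * 2 * error * X[i]
        b += lr * 2 * error
print(f"w = {w:.2f}, b = {b:.2f}")  # should approach w = 3, b = 1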

Q-Implement Batch gradient descent (Vanilla gradient descent) in Python.


import numpy as np
import matplotlib.pyplot as plt
# Step 1: Generate synthetic data: y = 4x + 2 + noise
np.random.seed(0)
X = np.linspace(0, 10, 100)
Y = 4 * X + 2 + np.random.randn(100) * 2 # add noise
# Step 2: Initialize parameters
w = 0.0 # weight
b = 0.0 # bias
learning_rate = 0.01
epochs = 1000
n = len(X)
# Step 3: Batch Gradient Descent
for epoch in range(epochs):
    # Predict
    Y_pred = w * X + b
    # Compute loss (MSE)
    loss = np.mean((Y - Y_pred) ** 2)
    # Compute gradients
    dw = (-2/n) * np.sum(X * (Y - Y_pred))
    db = (-2/n) * np.sum(Y - Y_pred)
    # Update parameters
    w -= learning_rate * dw
    b -= learning_rate * db
    # Print loss every 100 epochs
    if epoch % 100 == 0:
        print(f"Epoch {epoch}: Loss = {loss:.4f}, w = {w:.2f}, b = {b:.2f}")
# Step 4: Plot the result
plt.scatter(X, Y, label='Data', alpha=0.6)
plt.plot(X, w * X + b, color='red', label='Learned Line')
plt.title("Batch Gradient Descent - Linear Fit")
plt.xlabel("X")
plt.ylabel("Y")
plt.legend()
plt.show()
Q-What is the bias trick? Implement gradient descent using the bias trick.
Bias Trick
The bias trick is a technique used in linear models (like linear regression, logistic regression, etc.) to simplify the implementation by absorbing the bias (intercept) into the weight vector.

Standard linear equation:

y = wᵀx + b

-w: weights
-x: input features
-b: bias (intercept)

With the bias trick, a constant 1 is appended to every input so that b becomes one more weight:

x̃ = [x, 1], w̃ = [w, b], giving y = w̃ᵀx̃

Implementation
import numpy as np
import matplotlib.pyplot as plt
# Step 1: Generate data (y = 3x + 7 + noise)
np.random.seed(42)
X = np.linspace(0, 10, 100).reshape(-1, 1)
Y = 3 * X + 7 + np.random.randn(100, 1) * 2
# Step 2: Bias Trick - Add 1s column to X
X_bias = np.hstack([X, np.ones_like(X)]) # shape: (100, 2)
# Step 3: Initialize weights (including bias term)
w = np.zeros((2, 1)) # two weights: one for x and one for bias
# Hyperparameters
lr = 0.01
epochs = 1000
n = X.shape[0]
# Step 4: Gradient Descent
for epoch in range(epochs):
    # Predict
    Y_pred = X_bias @ w  # (100x2) @ (2x1) = (100x1)
    # Compute loss (MSE)
    loss = np.mean((Y - Y_pred) ** 2)
    # Gradient
    dw = (-2/n) * X_bias.T @ (Y - Y_pred)
    # Update weights
    w -= lr * dw
    # Print every 100 epochs
    if epoch % 100 == 0:
        print(f"Epoch {epoch}: Loss = {loss:.4f}, w = {w.ravel()}")
# Step 5: Plot the result
plt.scatter(X, Y, label='Data', alpha=0.6)
plt.plot(X, X_bias @ w, color='red', label='Model (bias trick)')
plt.title("Linear Regression with Bias Trick")
plt.xlabel("X")
plt.ylabel("Y")
plt.legend()
plt.show()
Q-What is mini-batch gradient descent? Implement in Python.
Mini-Batch Gradient Descent
Mini-Batch Gradient Descent is a variation of gradient descent where the model is updated
using a small subset (mini-batch) of the training data instead of:
-All data (Batch GD)
-One sample (Stochastic GD)

Implement
import numpy as np
import matplotlib.pyplot as plt
# Step 1: Generate synthetic data
np.random.seed(1)
X = np.linspace(0, 10, 100).reshape(-1, 1)
Y = 5 * X + 2 + np.random.randn(100, 1) * 2 # y = 5x + 2 + noise
# Step 2: Add bias trick (append ones to X)
X_bias = np.hstack([X, np.ones_like(X)]) # (100, 2)
# Step 3: Initialize weights
w = np.zeros((2, 1)) # [slope, bias]
# Hyperparameters
learning_rate = 0.01
epochs = 1000
batch_size = 20
n = X.shape[0]
# Step 4: Mini-Batch Gradient Descent
for epoch in range(epochs):
    # Shuffle the data
    indices = np.random.permutation(n)
    X_shuffled = X_bias[indices]
    Y_shuffled = Y[indices]
    # Create mini-batches
    for i in range(0, n, batch_size):
        X_batch = X_shuffled[i:i + batch_size]
        Y_batch = Y_shuffled[i:i + batch_size]
        # Forward pass
        Y_pred = X_batch @ w
        # Gradient computation
        dw = (-2 / batch_size) * X_batch.T @ (Y_batch - Y_pred)
        # Update weights
        w -= learning_rate * dw
    # Print loss every 100 epochs
    if epoch % 100 == 0:
        Y_pred_full = X_bias @ w
        loss = np.mean((Y - Y_pred_full) ** 2)
        print(f"Epoch {epoch}: Loss = {loss:.4f}, w = {w.ravel()}")
# Step 5: Plot the results
plt.scatter(X, Y, label='Data', alpha=0.6)
plt.plot(X, X_bias @ w, color='red', label='Mini-Batch GD Model')
plt.xlabel("X")
plt.ylabel("Y")
plt.title("Mini-Batch Gradient Descent")
plt.legend()
plt.grid(True)
plt.show()
Q-What are the main steps of the Neural Network algorithm?
1. Initialize Parameters (Weights & Biases)
-Randomly initialize weights (W) and biases (b) for all layers.
-Use techniques like He or Xavier initialization for better performance.

2. Forward Propagation
-Input data is passed through the network layer by layer.
-Each neuron computes:
z = W·x + b,   a = activation(z)

3. Compute Loss
-The output from the last layer is compared to the true labels using a loss function:
-MSE for regression
-Cross-Entropy for classification

4. Backpropagation (Compute Gradients)


-Apply the chain rule to compute the gradient of the loss with respect to each weight and bias in
the network.
-This tells us how to adjust the parameters to reduce the loss.

5. Update Weights (Gradient Descent)

-Use gradient descent (or its variants like Adam, RMSProp) to update weights:

W := W − α·(∂L/∂W),   b := b − α·(∂L/∂b)

6. Repeat for Epochs


-Repeat steps 2–5 for multiple epochs (full passes through the training data) until the loss is
minimized.

7. Make Predictions
-After training, use the forward pass on new data to make predictions.
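
A compact sketch of these steps for a tiny network (one hidden layer, NumPy only; the XOR data, layer sizes, and learning rate are illustrative choices):

import numpy as np

np.random.seed(0)

# XOR data: 4 samples with 2 features each
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# Step 1: initialize parameters
W1 = np.random.randn(2, 4)
b1 = np.zeros((1, 4))
W2 = np.random.randn(4, 1)
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5

# Step 6: repeat steps 2-5 for many epochs
for epoch in range(5000):
    # Step 2: forward propagation
    a1 = sigmoid(X @ W1 + b1)
    a2 = sigmoid(a1 @ W2 + b2)
    # Step 3: compute loss (MSE)
    loss = np.mean((Y - a2) ** 2)
    # Step 4: backpropagation (chain rule; constant factors folded into lr)
    d2 = (a2 - Y) * a2 * (1 - a2)
    d1 = (d2 @ W2.T) * a1 * (1 - a1)
    # Step 5: update weights with gradient descent
    W2 -= lr * (a1.T @ d2)
    b2 -= lr * d2.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d1)
    b1 -= lr * d1.sum(axis=0, keepdims=True)
    if epoch % 1000 == 0:
        print(f"epoch {epoch}: loss = {loss:.4f}")

# Step 7: make predictions
print(np.round(a2).ravel())  # should approach [0, 1, 1, 0]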
Q-What is the learning rate α in updating the parameters W and b?

The learning rate (denoted as α or η) is a crucial hyperparameter in machine learning and neural networks. It controls how much the model's weights (W) and biases (b) are adjusted during training.
Parameter Update Rule:
During training, we update weights and biases using gradient descent:

W := W − α·(∂L/∂W),   b := b − α·(∂L/∂b)
Learning rate α and its effect:

-Small (e.g., 0.0001): slow learning, might get stuck or take too long
-Large (e.g., 1.0): may overshoot, become unstable, or diverge
-Optimal (e.g., 0.01): balanced updates, fast and stable convergence
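
A quick sketch of these effects on a simple linear fit (the data and learning rates are illustrative):

import numpy as np

np.random.seed(0)
X = np.linspace(0, 10, 100)
Y = 2 * X + 3 + np.random.randn(100)

def final_loss(lr, epochs=100):
    # Batch gradient descent with the given learning rate
    w, b = 0.0, 0.0
    for _ in range(epochs):
        Y_pred = w * X + b
        dw = (-2 / len(X)) * np.sum(X * (Y - Y_pred))
        db = (-2 / len(X)) * np.sum(Y - Y_pred)
        w -= lr * dw
        b -= lr * db
    return np.mean((Y - (w * X + b)) ** 2)

for lr in [0.0001, 0.01, 1.0]:
    print(f"lr = {lr}: final loss = {final_loss(lr):.4f}")
# Expected pattern: 0.0001 is still far from converged after 100 epochs,
# 0.01 reaches a low loss, and 1.0 diverges (the loss overflows to inf/nan)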
