
DEEP LEARNING

Practical File

SUBMITTED BY:-                          SUBMITTED TO:-

NAME: Niraj Kumar Sisodia               Prof. Divjot Kaur

UNI. ROLL NO: 22002838

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

BABA BANDA SINGH BAHADUR ENGINEERING COLLEGE,


FATEHGARH SAHIB

INDEX

Sr No.  Name of the Experiment                                          Page No.   Date   Sign
1       Creating a basic network & analyzing its performance            1
2       Deploy the Confusion matrix and simulate for Overfitting.       2
3       Visualizing a neural network.                                   3
4       Demo: Object Detection with pre-trained RetinaNet with Keras.   4 to 5
5       Neural Recommender Systems with Explicit Feedback.              6 to 7
6       Backpropagation in Neural Networks using Numpy.                 8 to 10
7       Fully Convolutional Neural Networks.                            11 to 13
Practical 1

AIM:- Creating a basic network and analyzing its performance.

from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np

# Toy regression data: 6 samples with 2 features each, one target value per sample.
x = np.array([[2, 2], [2, 4], [2, 5], [4, 5], [4, 8], [4, 6]])
y = np.array([[2], [2], [4], [5], [6], [8]])

model = Sequential()
model.add(Dense(4, input_shape=(2,)))
model.add(Activation('linear'))
model.add(Dense(1))  # one output unit, matching the shape of y
model.add(Activation('linear'))

# Note: 'accuracy' is not meaningful for regression; the MSE loss is the figure to watch.
model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['accuracy'])
model.summary()

# Train briefly and report the final loss to analyze performance.
model.fit(x, y, epochs=100, verbose=0)
print("Final MSE:", model.evaluate(x, y, verbose=0)[0])
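In the model.summary() output, each Dense layer's parameter count is (number of inputs × units) plus one bias per unit: the first layer has 2 × 4 + 4 = 12 parameters and the output layer has 4 × 1 + 1 = 5, so the model has 17 trainable parameters in total.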

OUTPUT:-

Practical 2

AIM:- Deploy the Confusion matrix and simulate for Overfitting.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score

np.random.seed(0)

# Synthetic binary classification data: label is 1 when the two features sum to more than 1.
X = np.random.rand(100, 2)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# C is the inverse regularization strength: the default (C=1.0) regularizes more
# (used here as the "underfitting" model), while C=1e5 is almost unregularized
# (used here to simulate overfitting).
model_underfit = LogisticRegression(solver='lbfgs')
model_underfit.fit(X_train, y_train)
model_overfit = LogisticRegression(solver='lbfgs', C=1e5)
model_overfit.fit(X_train, y_train)

y_pred_underfit = model_underfit.predict(X_test)
y_pred_overfit = model_overfit.predict(X_test)
accuracy_underfit = accuracy_score(y_test, y_pred_underfit)
accuracy_overfit = accuracy_score(y_test, y_pred_overfit)
confusion_matrix_underfit = confusion_matrix(y_test, y_pred_underfit)
confusion_matrix_overfit = confusion_matrix(y_test, y_pred_overfit)

print("Accuracy (Underfitting):", accuracy_underfit)
print("Confusion Matrix (Underfitting):\n", confusion_matrix_underfit)
print("Accuracy (Overfitting):", accuracy_overfit)
print("Confusion Matrix (Overfitting):\n", confusion_matrix_overfit)

OUTPUT:-

Practical 3

AIM:- Visualizing a neural network.

The dataset consists of patient medical records for Pima Indians, each labelled with whether the patient had an onset of diabetes within five years.

from keras.models import Sequential
from keras.layers import Dense
import numpy

numpy.random.seed(7)

# Load the Pima Indians diabetes dataset: 8 input features, 1 binary label.
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
X = dataset[:, 0:8]
Y = dataset[:, 8]

model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, Y, epochs=150, batch_size=10)

scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1] * 100))

# Render the network diagram (requires the ann_visualizer and graphviz packages).
from ann_visualizer.visualize import ann_viz
ann_viz(model, title="My first neural network")
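Note that the accuracy above is measured on the same data used for training, which overestimates performance. A minimal sketch of a fairer check, assuming scikit-learn is available (a fresh model is built so the held-out rows are never seen during training):

from sklearn.model_selection import train_test_split

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=7)
model2 = Sequential()
model2.add(Dense(12, input_dim=8, activation='relu'))
model2.add(Dense(8, activation='relu'))
model2.add(Dense(1, activation='sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model2.fit(X_train, Y_train, epochs=150, batch_size=10, verbose=0)
print("Held-out accuracy: %.2f%%" % (model2.evaluate(X_test, Y_test, verbose=0)[1] * 100))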

OUTPUT:-

Practical 4

AIM:- Demo: Object Detection with pre-trained RetinaNet with Keras.

# Assumes the keras-retinanet package (https://github.com/fizyr/keras-retinanet) is
# installed; the import paths below follow that library's layout.
import urllib.request
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from keras_retinanet import models
from keras_retinanet.utils.image import preprocess_image, resize_image
from keras_retinanet.utils.visualization import draw_box, draw_caption
from keras_retinanet.utils.colors import label_color

# Load a pre-trained RetinaNet (ResNet-50 backbone, trained on COCO).
model = models.load_model('resnet50_coco_best_v2.1.0.h5')

# Fetch the COCO class labels.
urllib.request.urlretrieve(
    'https://raw.githubusercontent.com/amikelive/coco-labels/master/coco-labels-paper.txt',
    'coco-labels-paper.txt')
class_labels = [label.rstrip() for label in open("coco-labels-paper.txt")]
print(class_labels)

def detect_draw_bounding_boxes(img_path, threshold=0.6):
    img = np.array(Image.open(img_path))
    print(f"Shape of the image: {img.shape}")
    img = img[:, :, :3]  # drop any alpha channel

    img_proc = preprocess_image(img)
    img_proc, scale = resize_image(img_proc)
    print(f"Shape of the preprocessed image: {img_proc.shape}")

    boxes, scores, labels = model.predict_on_batch(np.expand_dims(img_proc, axis=0))
    boxes /= scale  # map boxes back to the original image coordinates
    for box, score, label in zip(boxes[0], scores[0], labels[0]):
        if score < threshold:
            break  # detections are sorted by descending score
        box = box.astype(np.int32)
        color = label_color(label)
        draw_box(img, box, color=color)
        class_label = class_labels[label]
        caption = f"{class_label} {score:.3f}"
        draw_caption(img, box, caption)
    plt.axis('off')
    plt.imshow(img)
    plt.show()

!wget https://c0.wallpaperflare.com/preview/814/948/832/de6l8nfk6nqltrackcl9liu6ss.jpg
plt.rcParams['figure.figsize'] = [20, 10]

detect_draw_bounding_boxes('de6l8nfk6nqltrackcl9liu6ss.jpg')
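Note: keras-retinanet's inference model returns detections already sorted by descending score, which is why the loop above can break (rather than continue) at the first box below the threshold. Lowering the threshold, e.g. detect_draw_bounding_boxes('de6l8nfk6nqltrackcl9liu6ss.jpg', threshold=0.4), shows more, lower-confidence boxes; preprocess_image applies the channel normalization the backbone expects, and resize_image rescales the image while returning the scale used above to map the boxes back.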

OUTPUT:-

!wget https://i1.wp.com/www.chakracommunity.com/wp-content/uploads/2016/01/bigstock-Busy-Street-in-Manhattan-93909977.jpg
detect_draw_bounding_boxes('bigstock-Busy-Street-in-Manhattan-93909977.jpg')

Practical 5

AIM:- Neural Recommender Systems with Explicit Feedback.

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Embedding, Input, Dot, Flatten, Dense
from tensorflow.keras.models import Model

# Generate some sample data (replace this with your dataset)


num_users = 1000
num_items = 1000
embedding_size = 50

user_ids = np.random.randint(0, num_users, size=1000)


item_ids = np.random.randint(0, num_items, size=1000)
ratings = np.random.rand(1000)

# Create user and item input layers


user_input = Input(shape=(1,), name='user_input')
item_input = Input(shape=(1,), name='item_input')

# Create user and item embeddings


user_embedding = Embedding(input_dim=num_users, output_dim=embedding_size,
input_length=1)(user_input)
item_embedding = Embedding(input_dim=num_items, output_dim=embedding_size,
input_length=1)(item_input)

# Compute the dot product of user and item embeddings
# (axes=2 takes the dot over the embedding dimension, giving one score per pair)

dot_product = Dot(axes=2)([user_embedding, item_embedding])
dot_product = Flatten()(dot_product)

# Create a neural network for prediction


x = Dense(64, activation='relu')(dot_product)
x = Dense(32, activation='relu')(x)
output = Dense(1, activation='linear')(x)

# Create and compile the model


model = Model(inputs=[user_input, item_input], outputs=output)
model.compile(loss='mean_squared_error', optimizer='adam')

# Train the model
model.fit([user_ids, item_ids], ratings, epochs=10, batch_size=32)

# Make recommendations for a user


user_id_to_recommend = 42
item_ids_to_recommend = np.arange(num_items) # Replace with your item IDs
predicted_ratings = model.predict([np.array([user_id_to_recommend] * num_items),
item_ids_to_recommend])
recommended_item_ids = item_ids_to_recommend[np.argsort(predicted_ratings.flatten())[::-1]]

print("Top recommended item IDs:", recommended_item_ids[:10])

OUTPUT:-

Practical 6

AIM:- Backpropagation in Neural Networks using Numpy.

Introduction

Backpropagation is one of the core concepts behind neural networks. For a single training example, the backpropagation algorithm computes the gradient of the error function with respect to each weight, so the gradient can be written as a function of the network itself. Backpropagation algorithms are a set of methods used to efficiently train artificial neural networks following a gradient descent approach that exploits the chain rule.
Backpropagation is iterative, recursive and efficient: it calculates updated weights that improve the network until it is able to perform the task for which it is being trained. It requires the derivatives of the activation functions to be known at network design time.
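For example, with output y = tanh(Σj w2[j]·x_h[j]) and squared-error loss L = ½(t − y)², the chain rule gives ∂L/∂w2[j] = −(t − y)·(1 − y²)·x_h[j], using tanh′(a) = 1 − tanh(a)². This is exactly the quantity m1 * x_h[j] accumulated into dw2 in the code below; dw1 applies the chain rule one layer further back, multiplying in w2[j] and each hidden unit's tanh derivative (1 − x_h[j]²).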

import numpy as np
import math
import random as rand

# A "student" network (w1, w2) learns to imitate a fixed random "teacher"
# network (target_weight_1, target_weight_2) on random +/-1 inputs.
epoch = 10000
mean = 0
exp_mean = 0.0
lr = 0.01
xi = 7  # input size (including a bias unit)
xh = 5  # hidden size (including a bias unit)
xy = 1  # output size

w1 = np.random.random([xi, xh - 1])
w2 = np.random.random([xh])
target_weight_1 = np.random.random([xi, xh - 1])
target_weight_2 = np.random.random([xh])

for i in range(0, epoch):
    # Build a random input vector: bias 1 followed by xi-1 values in {-1, +1}.
    x = [1]
    s1 = 0
    for j in range(0, xi - 1):
        val = 1 if rand.random() > 0.5 else -1
        x.append(val)
        s1 += val
    x = np.array(x, dtype="int64")

    # Forward pass through the student (x_h) and the teacher (x_ht).
    x_h = [1]
    x_ht = [1]
    h = np.dot(x, w1)
    ht = np.dot(x, target_weight_1)
    for j in range(0, xh - 1):
        h[j] = math.tanh(h[j])
        x_h.append(h[j])
        ht[j] = math.tanh(ht[j])
        x_ht.append(ht[j])
    x_h = np.array(x_h)
    x_ht = np.array(x_ht)
    y = math.tanh(np.dot(x_h, w2))                # student output
    t = math.tanh(np.dot(x_ht, target_weight_2))  # teacher target

    loss = 0.5 * (t - y) ** 2
    mean = mean + loss
    exp_mean = (exp_mean * 0.99) + (loss * 0.01)

    # Backward pass: m1 is dL/d(pre-activation) of the output unit.
    m1 = -1 * (t - y) * (1 - y ** 2)
    dw2 = []
    for j in range(0, xh):
        dw2.append(m1 * x_h[j])
    dw2 = np.array(dw2)

    # Hidden-layer gradients (j starts at 1 to skip the bias unit of x_h).
    dw1 = []
    for j in range(1, xh):
        dw = []
        for k in range(0, xi):
            dmw = m1 * w2[j] * (1 - x_h[j] ** 2) * x[k]
            dw.append(dmw)
        dw1.append(dw)
    dw1 = np.transpose(np.array(dw1))  # shape (xi, xh-1), matching w1

    # Gradient-descent update.
    w1 -= dw1 * lr
    w2 -= dw2 * lr

    if i % 100 == 0:
        print("=== X: %s ===" % (x,))
        print("Y: %s " % (y,))
        print("T: %s " % (t,))
        print("LOSS: %s " % (loss,))
        print("Running mean: %s " % (exp_mean,))
        print("MSL: %s " % (mean / (i + 1)))

OUTPUT:-

Practical 7

AIM:- Fully Convolutional Neural Networks.

Fully Convolutional Neural Networks (FCNs) are a type of neural network that can be used for
pixel-level tasks such as image segmentation, object detection, and depth estimation. FCNs differ
from traditional convolutional neural networks (CNNs) in that they do not have any fully
connected layers. This means that FCNs can be applied to images of any size, without having to
resize or crop them.

Steps :-

1. Choose a dataset:- FCNs can be used for a variety of tasks, such as image segmentation,
object detection, and depth estimation. Choose a dataset that is appropriate for the task you
want to solve.
2. Preprocess the data:- This may involve resizing the images, normalizing the pixel values,
and converting the labels to a format that the model can understand.
3. Define the FCN architecture:- A typical FCN architecture consists of a stack of
convolutional and pooling layers, followed by a decoder that upsamples the feature maps to the
original image size.
4. Implement the FCN in Keras:- Keras is a popular Python library for deep learning. It
provides a high-level API for defining and training neural networks.
5. Train the FCN:- This involves feeding the training data to the model and adjusting the
weights of the model so that it minimizes the loss function on the training data.
6. Evaluate the FCN on the test data:- This will give you an estimate of how well the model
will perform on unseen data.

CODE

import tensorflow as tf
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D,
                                     Conv2DTranspose, Concatenate)

# Define the FCN architecture (a U-Net-style encoder-decoder with skip connections)
def fcn_model(input_shape, num_classes):
    # Input layer
    input_layer = Input(shape=input_shape)

    # Encoder
    conv1 = Conv2D(64, (3, 3), activation='relu', padding='same')(input_layer)
    conv1 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool1)
    conv2 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool2)
    conv3 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)

    # Bottleneck
    conv4 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool3)
    conv4 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)

    # Decoder: upsample and concatenate with the matching encoder feature maps
    conv5 = Conv2D(1024, (3, 3), activation='relu', padding='same')(pool4)
    conv5 = Conv2D(1024, (3, 3), activation='relu', padding='same')(conv5)
    up1 = Conv2DTranspose(512, (2, 2), strides=(2, 2), padding='same')(conv5)
    up1 = Concatenate()([conv4, up1])
    up2 = Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(up1)
    up2 = Concatenate()([conv3, up2])
    up3 = Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(up2)
    up3 = Concatenate()([conv2, up3])
    up4 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(up3)
    up4 = Concatenate()([conv1, up4])

    # Output layer: per-pixel class probabilities
    output_layer = Conv2D(num_classes, (1, 1), activation='softmax')(up4)
    model = tf.keras.models.Model(inputs=input_layer, outputs=output_layer)
    return model

# Define the input shape and number of classes
input_shape = (256, 256, 3)  # Example input shape
num_classes = 21  # Example number of classes (e.g. PASCAL VOC)

# Create the FCN model
model = fcn_model(input_shape, num_classes)

# Compile the model (add loss function, optimizer, etc.)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Print model summary
model.summary()
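As a quick smoke test of the input and output shapes (a sketch with random data, not a real training run):

import numpy as np

# Two random RGB images and random per-pixel one-hot labels of matching shape.
X_dummy = np.random.rand(2, 256, 256, 3).astype("float32")
y_dummy = tf.keras.utils.to_categorical(
    np.random.randint(0, num_classes, size=(2, 256, 256)), num_classes=num_classes)
model.fit(X_dummy, y_dummy, epochs=1, batch_size=1)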

OUTPUT:-

