DL Lab - Merged
Register Number :
Name :
Subject Name/Subject Code :
Branch :
Year/Semester :
Certificate
Certified that this is the bonafide record of Practical work done by
Aim:
To build a simple Neural Network using Python.
Algorithm:
1. Import the required packages.
2. Define the neural network architecture, such as the number of layers, the
number of neurons in each layer, and the activation function for each layer.
3. Initialize the weights and biases of the network using random values or a pre-
defined initialization method.
4. Feed the input data through the network, using the initialized weights and biases
to make predictions.
5. Calculate the error or loss between the predicted output and the actual output.
6. Use an optimization algorithm, such as stochastic gradient descent, to update the
weights and biases in order to reduce the error.
7. Repeat steps 4-6 for a number of iterations or until the error falls below a chosen
threshold.
8. Use the trained network to make predictions on new data.
Program:
#import required packages
import numpy as np

class NeuralNetwork():
    def __init__(self):
        # seed the generator so the random weights are reproducible
        np.random.seed(1)
        # weights for a 3-input, 1-output layer, initialised in [-1, 1)
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1

    def sigmoid(self, x):
        # activation function squashing values into (0, 1)
        return 1 / (1 + np.exp(-x))

if __name__ == "__main__":
    training_outputs = np.array([[0, 1, 1, 0]]).T
    A = str(input("Input 1: "))
    B = str(input("Input 2: "))
    C = str(input("Input 3: "))
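The listing above stops before the network is trained or used for a prediction. A self-contained sketch of the missing training and prediction steps is given below; the training_inputs truth table, the iteration count, and the train/think method names are assumptions rather than part of the original record.

import numpy as np

class NeuralNetwork():
    def __init__(self):
        np.random.seed(1)
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # gradient of the sigmoid, used to scale the weight updates
        return x * (1 - x)

    def think(self, inputs):
        # forward pass: weighted sum followed by the sigmoid activation
        return self.sigmoid(np.dot(inputs.astype(float), self.synaptic_weights))

    def train(self, training_inputs, training_outputs, iterations):
        for _ in range(iterations):
            output = self.think(training_inputs)
            error = training_outputs - output
            # gradient-descent style update of the synaptic weights
            adjustments = np.dot(training_inputs.T, error * self.sigmoid_derivative(output))
            self.synaptic_weights += adjustments

if __name__ == "__main__":
    nn = NeuralNetwork()
    # assumed 4-sample, 3-feature truth table matching the training_outputs above
    training_inputs = np.array([[0, 0, 1],
                                [1, 1, 1],
                                [1, 0, 1],
                                [0, 1, 1]])
    training_outputs = np.array([[0, 1, 1, 0]]).T
    nn.train(training_inputs, training_outputs, 10000)
    A = str(input("Input 1: "))
    B = str(input("Input 2: "))
    C = str(input("Input 3: "))
    print("Predicted output:", nn.think(np.array([A, B, C])))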
Output:
Result:
Thus, the Python program to build a simple Neural Network was implemented successfully and
the output is verified.
AIM:
To write a Python program to implement Classification of Cat and Dog using CNN.
ALGORITHM:
1. Import the required packages and load the DOG_CAT image dataset with an ImageDataGenerator.
2. Build a CNN on top of a pretrained convolutional base, adding dense layers and a softmax output for the classes.
3. Compile the model with the Adam optimizer and categorical cross-entropy loss.
4. Train the model on the image generator for a fixed number of epochs.
5. Load a test image, preprocess it, and use the trained model to predict whether it shows a cat or a dog.
PROGRAM:
#import required packages
import os
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from keras.models import Model, load_model
from keras.layers import Dense, GlobalAveragePooling2D, Input
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import Adam
from keras.utils import load_img, img_to_array
from keras.applications.vgg16 import preprocess_input, decode_predictions, VGG16
import cv2
import numpy as np
import matplotlib.pyplot as plt
from google.colab.patches import cv2_imshow
#loading dataset
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_generator = train_datagen.flow_from_directory('DOG_CAT',
                                                    target_size=(224, 224),
                                                    color_mode='rgb',
                                                    batch_size=32,
                                                    class_mode='categorical',
                                                    shuffle=True)
#splitting classes
train_generator.class_indices.values()
NO_CLASSES = len(train_generator.class_indices.values())
train_generator.class_indices.values()
# model building (the VGG16 base is an assumption; the original listing omits the base model)
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(1024, activation='relu')(x)
x = Dense(1024, activation='relu')(x)
x = Dense(512, activation='relu')(x)
preds = Dense(NO_CLASSES, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=preds)
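The record does not show whether the pretrained convolutional base was frozen; in this kind of transfer-learning setup it is common to train only the new dense head. An optional sketch, using the base_model handle assumed above:

# freeze the pretrained convolutional layers so only the dense head is trained
for layer in base_model.layers:
    layer.trainable = False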
#model compilation
model.compile(optimizer='Adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
#model fitting
model.fit(x=train_generator,
batch_size = 1,
verbose = 1,
epochs = 20)
# model testing: map class indices back to names and load a sample image
class_dictionary = {value: key for key, value in train_generator.class_indices.items()}
test_image_path = 'test_image.jpg'   # illustrative path; use any cat or dog image
imgtest = cv2.imread(test_image_path)
cv2_imshow(imgtest)
# making prediction
img = load_img(test_image_path, target_size=(224, 224))
x = img_to_array(img)
x = preprocess_input(np.expand_dims(x, axis=0))
predicted_prob = model.predict(x)
print(predicted_prob)
print(predicted_prob[0].argmax())
print("Predicted : " + class_dictionary[predicted_prob[0].argmax()])
print("============================\n")
RESULT:
Thus, the Python program to implement Classification of Cat and Dog using CNN was
implemented successfully and the output is verified.
AIM:
To write a Python program to implement Stock Price Prediction using Deep Learning (LSTM).
ALGORITHM:
1. Collect historical stock data and pre-process it to clean and format it for use in a machine
learning model.
2. Divide the data into training and testing sets.
3. Build an LSTM model using the training data.
4. Train the model on the training data.
5. Use the trained model to make predictions on the testing data.
PROGRAM:
#import required packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, Bidirectional
#import dataset
google_stock_data = pd.read_csv('goog.csv')
google_stock_data.describe()
google_stock_data.info()
google_stock_data.shape
MMS = MinMaxScaler()
google_stock_data[google_stock_data.columns] = MMS.fit_transform(google_stock_data)
# use the first 80% of the rows for training (the split ratio is an assumption;
# the original listing does not show how training_size was computed)
training_size = round(len(google_stock_data) * 0.80)
train_data = google_stock_data[:training_size]
test_data = google_stock_data[training_size:]
train_data.shape, test_data.shape
def create_sequence(dataset):
    sequences = []
    labels = []
    start_idx = 0
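The body of create_sequence is truncated above, and the train_seq/test_seq arrays used by the model below are never created in the record. A sketch of the missing pieces, assuming a 50-step sliding window (the window length is an assumption):

def create_sequence(dataset, window=50):
    # build overlapping (window of rows -> next row) pairs for the LSTM
    sequences, labels = [], []
    start_idx = 0
    for stop_idx in range(window, len(dataset)):
        sequences.append(dataset.iloc[start_idx:stop_idx])
        labels.append(dataset.iloc[stop_idx])
        start_idx += 1
    return np.array(sequences), np.array(labels)

train_seq, train_label = create_sequence(train_data)
test_seq, test_label = create_sequence(test_data)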
model = Sequential()
model.add(LSTM(units=50, return_sequences=True,
               input_shape=(train_seq.shape[1], train_seq.shape[2])))
model.add(Dropout(0.1))
model.add(LSTM(units=50))
model.add(Dense(2))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mean_absolute_error'])
model.summary()
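The record jumps from model.summary() straight to prediction, so the training call itself is not shown. It was presumably something along these lines (the epoch count and the choice of validation data are assumptions):

# train on the windowed training sequences, validating on the held-out windows
model.fit(train_seq, train_label, epochs=80,
          validation_data=(test_seq, test_label), verbose=1)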
test_predicted = model.predict(test_seq)
test_predicted[:5]
test_inverse_predicted = MMS.inverse_transform(test_predicted)  # inverse scaling on predicted data
test_inverse_predicted[:5]
RESULT:
Thus, the Python program to implement Stock Prediction using Deep Learning was
implemented successfully and the output is verified.
AIM:
To write a Python program to implement sales forecasting using LSTM.
ALGORITHM:
1. Prepare the data: Gather historical sales data and organize it into a format that can be
used to train the LSTM model.
2. Define the LSTM model: Use Keras or TensorFlow to define the architecture of the
LSTM model.
3. Train the model: Use the prepared data to train the LSTM model. This process may
require multiple epochs and may involve tuning the model's hyperparameters to
optimize performance.
4. Make predictions: Use the trained model to make predictions on new data. This can
be done by inputting a sequence of historical sales data and having the model predict
the next value in the sequence.
Program:
import os
import numpy as np   # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt  # plotting (used below but missing from the original imports)
sub = pd.read_csv('sample_submission.csv')
sub.head()
train = pd.read_csv('sales_train.csv')
print ('number of shops: ', train['shop_id'].max())
print ('number of items: ', train['item_id'].max())
num_month = train['date_block_num'].max()
print ('number of month: ', num_month)
print ('size of train: ', train.shape)
train.head()
test = pd.read_csv('test.csv')
test.head()
items = pd.read_csv('items.csv')
print('number of categories: ', items['item_category_id'].max())  # the maximum category id
items.head()
# Change the item count per day to item count per month by using group
# keep only the columns needed for the monthly aggregation
# (the drop below is an assumption; the original listing does not show how train_clean was created)
train_clean = train.drop(labels=['date', 'item_price'], axis=1)
train_clean = train_clean.groupby(["item_id","shop_id","date_block_num"]).sum().reset_index()
train_clean = train_clean.rename(index=str, columns={"item_cnt_day": "item_cnt_month"})
train_clean = train_clean[["item_id","shop_id","date_block_num","item_cnt_month"]]
train_clean
check = train_clean[["shop_id","item_id","date_block_num","item_cnt_month"]]
check = check.loc[check['shop_id'] == 5]
check = check.loc[check['item_id'] == 5037]
check
plt.figure(figsize=(10,4))
plt.title('Check - Sales of Item 5037 at Shop 5')
plt.xlabel('Month')
plt.ylabel('Sales of Item 5037 at Shop 5')
plt.plot(check["date_block_num"],check["item_cnt_month"]);
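The sales_33month frame used below (monthly sales of item 5037 at shop 5 over every month in the data) is never constructed in the record. One way to build it, assuming months with no sales should appear as NaN until the fillna call:

# one row per month 0..num_month, left-joined with the observed sales for this shop/item
all_months = pd.DataFrame({'date_block_num': np.arange(num_month + 1)})
sales_33month = all_months.merge(
    check[['date_block_num', 'item_cnt_month']], on='date_block_num', how='left')
sales_33month['shop_id'] = 5
sales_33month['item_id'] = 5037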
sales_33month.fillna(0.00,inplace=True)
sales_33month
plt.figure(figsize=(10,4))
# add the previous five months' sales as lag features T_1 ... T_5
for i in range(1, 6):
    sales_33month["T_" + str(i)] = sales_33month.item_cnt_month.shift(i)
sales_33month.fillna(0.0, inplace=True)
sales_33month
df = sales_33month[['shop_id','item_id','date_block_num','T_1','T_2','T_3','T_4','T_5',
'item_cnt_month']].reset_index()
df = df.drop(labels = ['index'], axis = 1)
df
train_df = df[:-3]
val_df = df[-3:]
x_train,y_train = train_df.drop(["item_cnt_month"],axis=1),train_df.item_cnt_month
x_val,y_val = val_df.drop(["item_cnt_month"],axis=1),val_df.item_cnt_month
x_train
y_train
#Importing relevant Libraries to Perform LSTM
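The library imports and the scaling step that produce x_train_scaled and x_valid_scaled are missing from the record. A sketch of what was presumably intended (the choice of MinMaxScaler mirrors the other experiments and is an assumption):

from sklearn.preprocessing import MinMaxScaler

# scale the lag features to [0, 1]; fit on the training rows only
scaler = MinMaxScaler()
x_train_scaled = scaler.fit_transform(x_train)
x_valid_scaled = scaler.transform(x_val)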
x_train_reshaped = x_train_scaled.reshape((x_train_scaled.shape[0], 1, x_train_scaled.shape[1]))
x_val_reshaped = x_valid_scaled.reshape((x_valid_scaled.shape[0], 1, x_valid_scaled.shape[1]))
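The LSTM model definition, its training, and the prediction that produces y_pre for the plot below are also not shown. A minimal sketch consistent with the reshaped inputs above (layer size, epoch count, and batch size are assumptions):

from keras.models import Sequential
from keras.layers import Dense, LSTM

# single-LSTM-layer regressor over the (1, n_features) sequences
model = Sequential()
model.add(LSTM(64, input_shape=(x_train_reshaped.shape[1], x_train_reshaped.shape[2])))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')

model.fit(x_train_reshaped, y_train.values, epochs=50, batch_size=4, verbose=1)

# predictions for the last three months, flattened for plotting
y_pre = model.predict(x_val_reshaped).flatten()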
fig, ax = plt.subplots()
ax.plot(x_val['date_block_num'], y_val, label='Actual')
ax.plot(x_val['date_block_num'], y_pre, label='Predicted')
plt.title('LSTM Prediction vs Actual Sales for last 3 months')
plt.xlabel('Month')
plt.xticks(x_val['date_block_num'])
plt.ylabel('Sales of Item 5037 at Shop 5')
ax.legend()
plt.show()
#Calculating RMSE
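The RMSE computation itself is not included in the record; a short sketch using scikit-learn:

from sklearn.metrics import mean_squared_error

rmse = np.sqrt(mean_squared_error(y_val, y_pre))
print('Validation RMSE:', rmse)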
RESULT:
Thus, the Python program to implement Sales Forecasting using LSTM was implemented
successfully and the output is verified.
Aim :
To write a Python program to implement Movie Box Office Prediction using GRU.
Algorithm:
1. Collect data: Gather historical data on movies, such as box office revenue, budget,
cast, genre, release date, and other relevant information.
2. Pre-processing: Clean and preprocess the data by handling missing values, converting
categorical variables, and scaling numerical variables.
3. Data preparation: Prepare the data for use in a GRU (gated recurrent unit) model by
formatting it into sequences and padding as necessary.
4. Model selection: Choose an appropriate GRU model for the task, such as a simple
GRU or a stacked GRU.
5. Training: Train the model using the pre-processed and prepared data and features.
6. Validation: Evaluate the model's performance using a validation dataset, and adjust
the model or features as needed.
7. Testing: Test the model on unseen data and evaluate its performance to determine
its accuracy in predicting box office revenue.
Program:
#import packages
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from keras.layers import Input, Embedding, GRU, Dense
from keras.models import Model
from keras.callbacks import EarlyStopping
#import dataset
# (the CSV name below is an assumption; the original listing repeats the imports
#  here instead of showing how the dataframe df was read)
df = pd.read_csv("movie_box_office.csv")
df.head()
#x split,y split
x = df[["movie_opening","movie_weekend","movie_firstweek"]]
y = df["movie_total_worldwide"]
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
#model building
input_shape = (x_train.shape[1], 1)
input_layer = Input(shape=input_shape)
gru_layer = GRU(64, return_sequences=True, dropout=0.2, recurrent_dropout=0.2)(input_layer)
gru_layer = GRU(32, return_sequences=True, dropout=0.2, recurrent_dropout=0.2)(gru_layer)
gru_layer = GRU(16, dropout=0.2, recurrent_dropout=0.2)(gru_layer)
output_layer = Dense(1, activation='linear')(gru_layer)
model = Model(input_layer, output_layer)
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mean_absolute_error'])
#model fit
#model loss
#prediction
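The fit, loss, and prediction steps above are left as bare comments in the record. A sketch of what they might look like (the epoch count, batch size, and the reshape to the 3-D input a GRU layer expects are assumptions):

# GRU layers expect 3-D input: (samples, timesteps, features)
x_train_3d = x_train.values.reshape((-1, x_train.shape[1], 1)).astype('float32')
x_test_3d = x_test.values.reshape((-1, x_test.shape[1], 1)).astype('float32')

# model fit, with early stopping on the validation loss
es = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
model.fit(x_train_3d, y_train.values, validation_split=0.2,
          epochs=100, batch_size=8, callbacks=[es], verbose=1)

# model loss on the held-out test set
loss, mae = model.evaluate(x_test_3d, y_test.values, verbose=0)
print('Test MSE:', loss, 'Test MAE:', mae)

# prediction of the worldwide total gross for the test movies
predictions = model.predict(x_test_3d)
print(predictions[:5])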
RESULT:
Thus, the Python program to implement Box Office Prediction using GRU was
implemented successfully and the output is verified.
AIM:
To write a deep learning model for sports result prediction using RNN and LSTM.
ALGORITHM:
1. Gather historical data on the teams or players involved in the event, including statistics
such as wins, losses, points scored, and other relevant information.
2. Clean and preprocess the data by handling missing values, converting categorical
variables, and scaling numerical variables.
3. Create new features by combining or transforming the existing features that may be
useful in predicting the outcome of the event.
4. Choose an appropriate machine learning model for the task, such as logistic regression,
decision trees, or neural networks.
5. Training: Train the model using the pre-processed data and features.
6. Evaluate the model's performance using a validation dataset, and adjust the model or
features as needed.
7. Test the model on unseen data and evaluate its performance to determine its accuracy in
predicting the outcome of the event.
PROGRAM:
#import required packages
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import pandas as pd
#import datasets
dataTrain=pd.read_csv("allAtt_onehot_large_train.csv")
dataTest=pd.read_csv("allAtt_onehot_large_test.csv")
print(dataTrain.head())
print(dataTrain.shape)
FTR HTGS ATGS HTGC ATGC HTP ATP HM1 HM2 HM3 ... ATWinStreak5 \
0 0 0.0 0.0 0.0 0.0 0.0 0.0 2 2 2 ... 0
1 0 0.0 0.0 0.0 0.0 0.0 0.0 2 2 2 ... 0
2 1 0.0 0.0 0.0 0.0 0.0 0.0 2 2 2 ... 0
3 0 0.0 0.0 0.0 0.0 0.0 0.0 2 2 2 ... 0
4 1 0.0 0.0 0.0 0.0 0.0 0.0 2 2 2 ... 0
final1 final2
0 1.0 0.0
1 1.0 0.0
2 0.0 1.0
3 1.0 0.0
4 0.0 1.0
[5 rows x 30 columns]
(1860, 30)
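The x_train, y_train, x_test, and y_test arrays used to fit the model below are never constructed in the record. A sketch of the split it appears to assume (the 27 numeric feature columns treated as a length-27 sequence, with the one-hot result in final1/final2; the exact column positions are assumptions):

# features: the 27 columns after FTR; labels: the final1/final2 one-hot pair
x_train = dataTrain.iloc[:, 1:28].values.reshape(-1, 27, 1).astype('float32')
y_train = dataTrain.iloc[:, 28:30].values.astype('float32')
x_test = dataTest.iloc[:, 1:28].values.reshape(-1, 27, 1).astype('float32')
y_test = dataTest.iloc[:, 28:30].values.astype('float32')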
batch_size = 64
# Each input sample is a sequence of 27 match statistics, shaped (27, 1).
input_dim = 27
units = 64
output_size = 2  # labels are Win or Loss (one-hot)

def build_model():
    lstm_layer = keras.layers.RNN(
        keras.layers.LSTMCell(units), input_shape=(input_dim, 1)
    )
    model = keras.models.Sequential(
        [
            lstm_layer,
            keras.layers.BatchNormalization(),
            keras.layers.Dense(output_size),
        ]
    )
    return model
model = build_model()
model.compile(
loss=keras.losses.CategoricalCrossentropy(from_logits=True),
optimizer="Adam",
metrics=["categorical_accuracy"],
)
model.summary()
model.fit(
x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=10
)
model.save('LSTM-footballMatchWinner')
model1=tf.keras.models.load_model('LSTM-footballMatchWinner')
model1.predict(np.expand_dims(x_test[0], 0))
RESULT:
Thus, the deep learning model for sports result prediction using RNN and LSTM was
implemented successfully and the output is verified.
AIM:
To write a Python program to implement Cardiovascular Disease prediction using an Artificial Neural Network (ANN).
ALGORITHM:
1. Import the required packages and load the heart disease dataset (heart.csv).
2. Separate the 13 input features and the target column, and split the data into training and test sets.
3. Build an ANN with dense hidden layers and a sigmoid output, then compile it.
4. Train the classifier on the training set.
5. Predict on the test set, threshold the probabilities at 0.5, and evaluate with a confusion matrix and accuracy.
PROGRAM:
#import required packages
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import keras
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import train_test_split  # needed for the split below (missing in the original)
from sklearn.metrics import confusion_matrix
#import dataset
data = pd.read_csv('heart.csv')
data.head()
#data description
data.describe()
X = data.iloc[:, :13].values
y = data["target"].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# the hidden-layer sizes below are assumptions; only the kernel_initializer and
# metrics fragments survive in the original listing
classifier = Sequential()
classifier.add(Dense(units=8, activation="relu", input_dim=13, kernel_initializer="uniform"))
classifier.add(Dense(units=8, activation="relu", kernel_initializer="uniform"))
classifier.add(Dense(units=1, activation="sigmoid", kernel_initializer="uniform"))
classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=['accuracy'])
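The training call for the classifier is also missing from the record; it was presumably something like the following (batch size and epoch count are assumptions):

classifier.fit(X_train, y_train, batch_size=10, epochs=100, verbose=1)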
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
#Confusion Matrix
cm = confusion_matrix(y_test,y_pred)
cm
#Accuracy
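The accuracy computation under the #Accuracy comment is not shown; it can be read directly off the confusion matrix:

# accuracy = (true negatives + true positives) / total test samples
accuracy = (cm[0][0] + cm[1][1]) / cm.sum()
print("Test accuracy:", accuracy)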
RESULT:
Thus, the Python program to implement Cardiovascular Disease prediction using Deep
Learning was implemented successfully and the output is verified.
Exp No : 08 Style Transfer Technique
Date :
AIM:
To write a Python program to create art using the Style Transfer technique.
ALGORITHM:
1. Start by importing the necessary libraries and modules, including TensorFlow, TensorFlow Hub, and the
image processing libraries.
2. Next, load the content and style images and convert them to tensors for processing.
3. Visualize the content and style images using OpenCV's imshow function.
4. Load the TensorFlow Hub style-transfer model, apply it to the content and style images, and convert the
resulting tensor back into an image.
PROGRAM:
#import required packages
import tensorflow as tf
import cv2
from google.colab.patches import cv2_imshow
import tensorflow_hub as hub
import numpy as np
import PIL
import matplotlib.pyplot as plt
#loading dataset
def load_img(path_to_img):
    img = tf.io.read_file(path_to_img)
    img = tf.image.decode_image(img, channels=3)
    img = tf.image.convert_image_dtype(img, tf.float32)
    img = img[tf.newaxis, :]
    return img

content_path = "/content/dog.jpg"
style_path = "/content/style.jpg"
dog_image = load_img(content_path)
style_image = load_img(style_path)
dog = cv2.imread(content_path)
style = cv2.imread(style_path)
# visualizing images
cv2_imshow(dog)
cv2_imshow(style)
#tensor to image
def tensor_to_image(tensor):
    tensor = tensor * 255
    tensor = np.array(tensor, dtype=np.uint8)
    if np.ndim(tensor) > 3:
        assert tensor.shape[0] == 1
        tensor = tensor[0]
    return PIL.Image.fromarray(tensor)
hub_model = hub.load('https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2')
stylized_image = hub_model(tf.constant(dog_image), tf.constant(style_image))[0]
op = tensor_to_image(stylized_image)
op
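If the stylized result needs to be kept rather than only displayed, the PIL image returned by tensor_to_image can be written to disk (the filename is illustrative):

# save the stylized output image
op.save('/content/stylized_dog.jpg')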
RESULT:
Thus, the Python program to create an art using Style Transfer Technique was
implemented successfully and the output is verified.
AIM:
To write a Python program to implement traffic sign detection using deep learning.
ALGORITHM:
1. Dataset exploration
2. CNN model building
3. Model training and validation
4. Model testing
PROGRAM:
#import required packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from PIL import Image
import os
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout
data = []
labels = []
classes = 43
cur_path = os.getcwd()

for i in range(classes):
    path = os.path.join(cur_path, 'train', str(i))
    images = os.listdir(path)
    for a in images:
        try:
            image = Image.open(path + "/" + a)
            image = image.resize((30, 30))
            image = np.array(image)
            data.append(image)
            labels.append(i)
        except:
            print("Error loading image")

data = np.array(data)
labels = np.array(labels)
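The variables X_t1, y_t1, X_t2, and y_t2 used to build and train the model below are never created in the record. A sketch of the presumably intended split and one-hot encoding (the split ratio and random seed are assumptions):

# 80/20 train/validation split, with class labels one-hot encoded for the softmax output
X_t1, X_t2, y_t1, y_t2 = train_test_split(data, labels, test_size=0.2, random_state=42)
y_t1 = to_categorical(y_t1, 43)
y_t2 = to_categorical(y_t2, 43)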
#Model Building
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(5, 5), activation='relu', input_shape=X_t1.shape[1:]))
model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(rate=0.25))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(rate=0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(rate=0.5))
model.add(Dense(43, activation='softmax'))
#Compilation of the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
eps = 15
anc = model.fit(X_t1, y_t1, batch_size=32, epochs=eps, validation_data=(X_t2, y_t2))
plt.figure(0)
plt.plot(anc.history['accuracy'], label='training accuracy')
plt.plot(anc.history['val_accuracy'], label='val accuracy')
plt.title('Accuracy')
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.legend()
plt.show()
plt.figure(1)
plt.plot(anc.history['loss'], label='training loss')
plt.plot(anc.history['val_loss'], label='val loss')
plt.title('Loss')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.legend()
plt.show()
# Model testing
from sklearn.metrics import accuracy_score  # needed for the score below (missing in the original imports)

y_test = pd.read_csv('Test.csv')
labels = y_test["ClassId"].values
imgs = y_test["Path"].values
data = []
for img in imgs:
    image = Image.open(img)
    image = image.resize((30, 30))
    data.append(np.array(image))
X_test = np.array(data)
pred = np.argmax(model.predict(X_test), axis=-1)
print(accuracy_score(labels, pred))
RESULT:
Thus, the Python program to implement Traffic Sign detection was implemented
successfully and the output is verified.
AIM:
To write a Python program to implement a Fashion Recommendation System using Deep Learning.
ALGORITHM:
1. Import the required packages and load the fashion image dataset.
2. Build a CNN with convolution, pooling, and dense layers ending in a 10-class softmax output.
3. Compile the model with the Adam optimizer and sparse categorical cross-entropy loss.
4. Train the model on the training images.
5. Use the trained model to predict classes for the test images.
PROGRAM:
#import required packages
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers
#import dataset
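The dataset-loading code under the #import dataset comment is missing. Given the 28x28x1 input and 10-class softmax that follow, the model was presumably trained on a Fashion-MNIST style dataset; a sketch under that assumption:

# load Fashion-MNIST, add a channel axis, and scale pixels to [0, 1]
(x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0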
model = keras.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
#compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
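The record moves straight from compilation to prediction; the training step was presumably something like this (the epoch count and validation split are assumptions):

# train on the prepared training images, keeping a validation split for monitoring
model.fit(x_train, y_train, epochs=10, validation_split=0.1)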
#Performing prediction
preds = model.predict(x_test)
preds
RESULT:
Thus, the Python program to implement Fashion Recommendation System using Deep
Learning was implemented successfully and the output is verified.