
MACHINE LEARNING PRESENTATION - 2024

ASSIGNMENT NO. 4

UNIVERSITY OF ENGINEERING AND TECHNOLOGY LAHORE, NAROWAL CAMPUS

Machine Learning Theory
Chest CT-Scan Images Dataset

GROUP # 03

Submitted To:
DR. WAQAS TOOR

Submitted By:
Anees Ur Rehman    2021-EE-648
M. Umair           2021-EE-607
Touseeq Haider     CD-2021-EE-04
Machine Learning Theory

Content and Its Objectives

Train Three Different Models
Train three models on the chest CT-scan image dataset: an Artificial Neural Network (ANN), a Recurrent Neural Network (RNN), and a Convolutional Neural Network (CNN).

Types of Models
Each model was evaluated using 5-fold cross-validation (a minimal cross-validation sketch follows this slide).

Accuracy & Loss Results
Present the accuracy results obtained from the cross-validation for each model.

Architecture and Process Flow Diagram
Show each model's architecture and the overall process flow. CNNs are specifically designed to handle spatial data, which makes them well suited for image recognition tasks.
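The slides report 5-fold cross-validation results but do not show the splitting code. The following is a minimal sketch of how 5-fold cross-validation could be set up with scikit-learn's KFold; the `images` and `labels` arrays and the `build_model` helper are illustrative assumptions, not names from the original code.

import numpy as np
from sklearn.model_selection import KFold

# Hypothetical arrays: `images` holds the preprocessed CT-scan images and
# `labels` the binary class labels (not defined on the slides).
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
fold_accuracies = []

for fold, (train_idx, val_idx) in enumerate(kfold.split(images), start=1):
    model = build_model()  # hypothetical helper returning a compiled Keras model
    model.fit(images[train_idx], labels[train_idx],
              epochs=10, batch_size=32, verbose=0)
    _, acc = model.evaluate(images[val_idx], labels[val_idx], verbose=0)
    fold_accuracies.append(acc)
    print(f'Fold {fold} accuracy: {acc:.4f}')

print(f'Mean 5-fold accuracy: {np.mean(fold_accuracies):.4f}')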
Machine Learning Theory

Types of the Three Different Models:

Artificial Neural Network (ANN)
Recurrent Neural Network (RNN)
Convolutional Neural Network (CNN)
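The ensemble code later in the presentation references model_cnn, model_ann, and model_rnn but their definitions are not shown on the slides. The following is a minimal sketch, assuming input shapes consistent with that code (150x150x3 images for the CNN and ANN branches, length-100 token sequences for the RNN); the layer counts and sizes are illustrative, not taken from the original project.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Flatten, Dense,
                                     Embedding, LSTM)

# CNN branch: convolution + pooling blocks for 150x150 RGB CT-scan images
model_cnn = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
])

# ANN branch: fully connected layers on the flattened image
model_ann = Sequential([
    Flatten(input_shape=(150, 150, 3)),
    Dense(128, activation='relu'),
    Dense(64, activation='relu'),
])

# RNN branch: embedding + LSTM over length-100 token sequences
model_rnn = Sequential([
    Embedding(input_dim=10000, output_dim=64),
    LSTM(64),
])

Each branch ends in a feature vector rather than a sigmoid output, so the outputs can be concatenated and passed to the ensemble's final Dense(1, activation='sigmoid') layer shown in the code section.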
ACCURACY AND LOSS:

Final accuracy:
  Training accuracy: 96%
  Validation accuracy: 94%
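The slides do not show how these final figures were read out. A minimal sketch, assuming `model` is a compiled Keras classifier and the image generators from the code section below are available (names illustrative):

# Train and read the last-epoch accuracies from the Keras History object
history = model.fit(train_generator_cnn,
                    validation_data=validation_generator_cnn,
                    epochs=10)

final_train_acc = history.history['accuracy'][-1]
final_val_acc = history.history['val_accuracy'][-1]
print('Training accuracy:   {:.0%}'.format(final_train_acc))
print('Validation accuracy: {:.0%}'.format(final_val_acc))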
ARCHITECTURE GRAPHS:
PROCESS FLOW CHART:
Code:

import os
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Flatten, Dense,
                                     LSTM, Embedding, Bidirectional, Input,
                                     Concatenate)
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import plot_model

# Check if GPU is available
if not tf.test.gpu_device_name():
    print('No GPU found. Please ensure you have GPU enabled runtime in Colab.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))

# Set image dimensions and batch size for CNN
img_width, img_height = 150, 150  # Adjust based on the actual image dimensions
batch_size_cnn = 32

# Set parameters for RNN
max_words = 10000
maxlen = 100  # Sequence length
validation_split_rnn = 0.2
batch_size_rnn = 32

# Splitting data into training and testing sets for CNN
train_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.2)

train_generator_cnn = train_datagen.flow_from_directory(
    base_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size_cnn,
    class_mode='binary',
    subset='training'
)

validation_generator_cnn = train_datagen.flow_from_directory(
    base_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size_cnn,
    class_mode='binary',
    subset='validation'
)

# Read text data and tokenize for RNN
normal_texts = []
tb_texts = []

for folder in [train_normal_dir, train_tb_dir]:
    for filename in os.listdir(folder):
        with open(os.path.join(folder, filename), 'rb') as file:
            text = file.read().decode(errors='ignore')  # Decode bytes to string
        if folder == train_normal_dir:
            normal_texts.append(text)
        else:
            tb_texts.append(text)

texts = normal_texts + tb_texts
labels_rnn = np.array([0] * len(normal_texts) + [1] * len(tb_texts))

tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)

word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))

data_rnn = pad_sequences(sequences, maxlen=maxlen)

# Shuffle and split the RNN data into training and validation sets
indices = np.arange(data_rnn.shape[0])
np.random.shuffle(indices)
data_rnn = data_rnn[indices]
labels_rnn = labels_rnn[indices]

num_validation_samples_rnn = int(validation_split_rnn * data_rnn.shape[0])

x_train_rnn = data_rnn[:-num_validation_samples_rnn]
y_train_rnn = labels_rnn[:-num_validation_samples_rnn]
x_val_rnn = data_rnn[-num_validation_samples_rnn:]
y_val_rnn = labels_rnn[-num_validation_samples_rnn:]

# Ensemble model
cnn_input = Input(shape=(img_width, img_height, 3), name='cnn_input')
ann_input = Input(shape=(img_width, img_height, 3), name='ann_input')
rnn_input = Input(shape=(maxlen,), name='rnn_input')

cnn_output = model_cnn(cnn_input)
ann_output = model_ann(ann_input)
rnn_output = model_rnn(rnn_input)

merged = Concatenate()([cnn_output, ann_output, rnn_output])
ensemble_output = Dense(1, activation='sigmoid')(merged)

ensemble_model = Model(inputs=[cnn_input, ann_input, rnn_input],
                       outputs=ensemble_output)

ensemble_model.compile(optimizer='adam',
                       loss='binary_crossentropy',
                       metrics=['accuracy'])

# Extract odd-numbered accuracies from history
odd_accuracies_ensemble = history_ensemble.history['accuracy'][::2]
odd_val_accuracies_ensemble = history_ensemble.history['val_accuracy'][::2]

# Plot training and validation accuracy (only odd numbers) for ensemble
plt.plot(odd_accuracies_ensemble, label='Training Accuracy (Ensemble)')
plt.plot(odd_val_accuracies_ensemble, label='Validation Accuracy (Ensemble)')
plt.title('Training and Validation Accuracy (Ensemble)')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

# Plot model architecture
plot_model(ensemble_model, to_file='ensemble_model.png', show_shapes=True)
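The excerpt above references directory paths (base_dir, train_normal_dir, train_tb_dir) and a training history (history_ensemble) that are defined elsewhere in the project notebook and are not shown on the slides. A minimal sketch of how history_ensemble could be produced, assuming the image data has been loaded into an array aligned sample-by-sample with the padded text sequences; x_train_images and y_train are illustrative names, not taken from the original code:

# Hypothetical training call producing `history_ensemble`
history_ensemble = ensemble_model.fit(
    {'cnn_input': x_train_images,  # images for the CNN branch (illustrative name)
     'ann_input': x_train_images,  # the same images reused for the ANN branch
     'rnn_input': x_train_rnn},    # padded token sequences from the code above
    y_train,                       # binary labels (illustrative name)
    validation_split=0.2,
    epochs=10,
    batch_size=32)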
THANK YOU
