
Example No 5 : DNN model by Keras for Cats & Dogs Classification
What is new in this example?
We will use a dataset consisting of a large number of images of cats and dogs: the
Kaggle "Cats and Dogs" dataset, a popular dataset used in computer vision tasks,
particularly for image classification.
The goal is to build a model that can accurately classify an image as either a cat
or a dog.
While this is an image-related problem, and it is typically advisable to use
Convolutional Neural Networks (CNNs), we are opting to employ a Deep Neural
Network (DNN) in this instance purely for training purposes. We aim to assess
whether it can effectively handle the image data or not.

What are the basic steps to build our NN model?


We follow the same life cycle as for any NN model, as mentioned in the previous
tutorial.

1. Load Data
2. Scaling data
3. Define Keras Model
4. Compile Keras Model
5. Fit Keras Model
6. Evaluate Keras Model
7. Make Predictions
8. Flowchart for our NN model

1. Load Data

The original dataset is here: it is better to download it yourself.

To work with images we need the Open Source Computer Vision Library (OpenCV) Python package.

It is a popular open-source computer vision and image processing library in Python.

OpenCV provides a wide range of functions for image processing tasks, including
reading and writing images, resizing, rotating, and applying various filters.

In [ ]: import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import os
import cv2
from tqdm import tqdm
import tensorflow as tf
from tensorflow import keras
from keras.utils import plot_model

Processing our dataset images

- Make two categories: dogs take label 0 and cats take label 1.
- Read each image as a grayscale array.
- Resize all the images so that they are all the same size.
- First we will look at a few images and experiment, then apply the processing to all of them.

In [ ]: DATADIR = 'PetImages'
CATEGORIES = ["Dog", "Cat"]
for category in CATEGORIES:  # do dogs and cats
    path = os.path.join(DATADIR, category)  # create path to dogs and cats
    x = 0
    for img in os.listdir(path):  # iterate over each image per dogs and cats
        x += 1
        img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE)
        plt.imshow(img_array, cmap='gray')  # graph it
        # plt.show()  # display!
        if x == 10:
            break

Let's see the last image array

In [ ]: print(img_array)
print(img_array.shape)

[[ 72 65 60 ... 81 83 92]
[ 66 58 54 ... 78 78 87]
[ 62 54 50 ... 78 76 84]
...
[125 113 114 ... 95 101 105]
[126 114 117 ... 100 107 113]
[128 118 122 ... 105 113 120]]
(500, 448)

Let's try different sizes for images to decide which one will be better.

In [ ]: #IMG_SIZE = 5
#IMG_SIZE = 10
IMG_SIZE = 200

new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))


plt.imshow(new_array, cmap='gray')
plt.show()

Apply all the processing, with the chosen size, to all images.

In [ ]: all_data = []

def create_all_data():
    for category in CATEGORIES:  # do dogs and cats
        path = os.path.join(DATADIR, category)  # create path to dogs and cats
        class_num = CATEGORIES.index(category)  # get the classification (0 = Dog, 1 = Cat)
        for img in tqdm(os.listdir(path)):  # iterate over each image per dogs and cats
            try:
                img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE)  # read as grayscale array
                new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))  # resize to the chosen size
                all_data.append([new_array, class_num])  # add this to our data
            except Exception as e:  # in the interest of keeping the output clean, skip unreadable images
                pass

create_all_data()

print(len(all_data))

  9%|▉         | 1138/12501 [00:00<00:08, 1296.99it/s]Corrupt JPEG data: 65 extraneous bytes before marker 0xd9
 39%|███▉      | 4860/12501 [00:03<00:05, 1322.96it/s]Corrupt JPEG data: 226 extraneous bytes before marker 0xd9
 56%|█████▌    | 6953/12501 [00:05<00:04, 1324.62it/s]Corrupt JPEG data: 162 extraneous bytes before marker 0xd9
 60%|█████▉    | 7489/12501 [00:05<00:03, 1318.64it/s]Warning: unknown JFIF revision number 0.00
 75%|███████▍  | 9317/12501 [00:06<00:02, 1405.88it/s]Corrupt JPEG data: 2230 extraneous bytes before marker 0xd9
 83%|████████▎ | 10409/12501 [00:07<00:01, 1290.24it/s]Corrupt JPEG data: 254 extraneous bytes before marker 0xd9
 91%|█████████ | 11337/12501 [00:08<00:00, 1292.97it/s]Corrupt JPEG data: 399 extraneous bytes before marker 0xd9
 96%|█████████▌| 12009/12501 [00:09<00:00, 1325.31it/s]Corrupt JPEG data: 1403 extraneous bytes before marker 0xd9
100%|██████████| 12501/12501 [00:09<00:00, 1323.67it/s]
 19%|█▉        | 2390/12501 [00:01<00:07, 1319.04it/s]Corrupt JPEG data: 214 extraneous bytes before marker 0xd9
 69%|██████▉   | 8627/12501 [00:06<00:02, 1452.90it/s]Corrupt JPEG data: 1153 extraneous bytes before marker 0xd9
 75%|███████▍  | 9367/12501 [00:06<00:02, 1394.91it/s]Corrupt JPEG data: 99 extraneous bytes before marker 0xd9
 85%|████████▌ | 10683/12501 [00:07<00:01, 1408.49it/s]Corrupt JPEG data: 128 extraneous bytes before marker 0xd9
 90%|█████████ | 11283/12501 [00:07<00:00, 1463.07it/s]Corrupt JPEG data: 239 extraneous bytes before marker 0xd9
100%|██████████| 12501/12501 [00:08<00:00, 1417.34it/s]
24946

Data Randomization

We will randomize our data to prevent the model from learning patterns based on
the order of the data. This is crucial to ensure that the model generalizes well
to unseen data.

In [ ]: import random
random.shuffle(all_data)
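
If you want the exact same shuffle (and therefore the same train/test split below) every time the notebook is rerun, you can seed the random generator before shuffling. A minimal sketch; the seed value is an arbitrary choice and is not part of the original notebook:

import random

random.seed(42)           # any fixed integer works; 42 is arbitrary
random.shuffle(all_data)  # same shuffled order on every run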

We will divide our data into training and test sets for the purpose of
building our model.

The features and labels should be converted to NumPy arrays (newer versions of TensorFlow expect arrays rather than Python lists).

In [ ]: training_data = all_data[:20000]
test_data = all_data[20000:]

In [ ]: X_train = []
y_train = []
X_test = []
y_test = []

for features, label in training_data:
    X_train.append(features)
    y_train.append(label)
for features, label in test_data:
    X_test.append(features)
    y_test.append(label)

# The -1 means it will automatically infer the size based on the size of the original array
X_train = np.array(X_train).reshape(-1, IMG_SIZE, IMG_SIZE)
X_test = np.array(X_test).reshape(-1, IMG_SIZE, IMG_SIZE)

# y also should be a NumPy array in newer versions of tf
y_train = np.array(y_train)
y_test = np.array(y_test)

If you want to save the data for later use, use pickle.

In [ ]: import pickle

pickle_out = open("X_train.pickle","wb")
pickle.dump(X_train, pickle_out)
pickle_out.close()

pickle_out = open("y_train.pickle","wb")
pickle.dump(y_train, pickle_out)
pickle_out.close()

pickle_in = open("X_train.pickle","rb")
X_train = pickle.load(pickle_in)

pickle_in = open("y_train.pickle","rb")
y_train = pickle.load(pickle_in)

2. Scaling data
We will build this model without scaling
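
For reference, if we did want to scale, a common approach for grayscale images is to divide the pixel values (0-255) by 255 so they fall in the [0, 1] range before training. A minimal sketch of what that could look like here; it is not applied in this example:

# Scale pixel intensities from 0-255 down to 0.0-1.0 (not used in this example)
X_train_scaled = X_train / 255.0
X_test_scaled = X_test / 255.0
# The model would then be fit on X_train_scaled instead of X_train.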

3. Define Keras Model

Notes :

- The first layer should match the dimensions of the resized images, since the (flattened) array of each image is the input to the NN.
- As this is a classification problem (binary: cats & dogs), the output layer has two units and uses softmax as its activation function.

In [ ]: model = keras.Sequential([
    keras.layers.Flatten(input_shape=(IMG_SIZE, IMG_SIZE)),
    keras.layers.Dense(256, activation=tf.nn.relu),
    keras.layers.Dense(256, activation=tf.nn.relu),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(2, activation=tf.nn.softmax)
])

4. Compile Keras Model


In [ ]: model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
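
As a side note, a binary problem like this can also be set up with a single sigmoid output unit and binary_crossentropy, instead of two softmax units with sparse_categorical_crossentropy. A minimal sketch of that alternative, shown only for comparison; it is not the configuration used in this example:

# Alternative binary setup (illustrative only, not the model used above)
alt_model = keras.Sequential([
    keras.layers.Flatten(input_shape=(IMG_SIZE, IMG_SIZE)),
    keras.layers.Dense(256, activation=tf.nn.relu),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)  # one unit: probability of class 1 (Cat)
])
alt_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])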

5. Fit Keras Model


In [ ]: model.fit(X_train, y_train, epochs=10)

Epoch 1/10
625/625 [==============================] - 5s 8ms/step - loss: 120.2522 - accuracy: 0.5030
Epoch 2/10
625/625 [==============================] - 5s 8ms/step - loss: 0.6954 - accuracy: 0.4925
Epoch 3/10
625/625 [==============================] - 5s 8ms/step - loss: 0.6937 - accuracy: 0.5010
Epoch 4/10
625/625 [==============================] - 5s 8ms/step - loss: 0.6930 - accuracy: 0.4940
Epoch 5/10
625/625 [==============================] - 5s 8ms/step - loss: 0.6930 - accuracy: 0.4938
Epoch 6/10
625/625 [==============================] - 5s 8ms/step - loss: 0.6935 - accuracy: 0.4951
Epoch 7/10
625/625 [==============================] - 5s 9ms/step - loss: 0.6932 - accuracy: 0.4981
Epoch 8/10
625/625 [==============================] - 5s 8ms/step - loss: 0.6932 - accuracy: 0.4931
Epoch 9/10
625/625 [==============================] - 5s 8ms/step - loss: 0.6935 - accuracy: 0.4958
Epoch 10/10
625/625 [==============================] - 5s 8ms/step - loss: 0.6932 - accuracy: 0.4922
Out[ ]: <keras.callbacks.History at 0x284a33150>

6. Evaluate Keras Model


In [ ]: val_loss, val_acc = model.evaluate(X_test, y_test)
print('The accuracy of our model on test dataset : %.2f' % (val_acc*100))
print('The loss of our model on test dataset : %.2f' % (val_loss*100))

155/155 [==============================] - 0s 3ms/step - loss: 0.6932 - accuracy: 0.4970
The accuracy of our model on test dataset : 49.70
The loss of our model on test dataset : 69.32

7. Make Predictions

Executing the prediction function once over the entire test dataset.

In [ ]: predictions = model.predict(X_test)

155/155 [==============================] - 0s 3ms/step

Let's look at some cases and compare! You can try any image number you want here.

In [ ]: image_no = 5  # you can change the image number as you want

model_prediction = np.argmax(predictions[image_no])
# label convention from CATEGORIES = ["Dog", "Cat"]: 0 = Dog, 1 = Cat
if model_prediction == 1:
    print("The model predicts the image as a Cat")
elif model_prediction == 0:
    print("The model predicts the image as a Dog")
print("The real image is as follows:")
plt.imshow(X_test[image_no], cmap='gray');

The model predicts the image as a Dog

The real image is as follows:

Our Comments :
- The accuracy of the model is very low and cannot be accepted as a generalized model for practical use.
- The main reason for this is that we are using a DNN on an image-related example, while a CNN is the better option for this kind of problem, so in another tutorial we will try a CNN and see the difference.
- It was very clear when we tried predictions for different images that the model cannot recognize black cats as cats, but sees them as dogs.
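
One simple way to check an observation like the last one more systematically is to look at accuracy per class on the test set, using the predictions already computed above. A minimal sketch, not part of the original notebook:

# Per-class accuracy on the test set (label convention: 0 = Dog, 1 = Cat)
pred_labels = np.argmax(predictions, axis=1)
for class_num, name in enumerate(CATEGORIES):
    mask = (y_test == class_num)
    class_acc = (pred_labels[mask] == class_num).mean()
    print(f"{name}: {mask.sum()} test images, accuracy {class_acc:.2%}")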

8. Flowchart for our NN model


In [ ]: plot_model(model, to_file='model_plot1.png', show_shapes=True, show_layer_names=True)

Out[ ]: [model architecture diagram: Flatten(200×200) → Dense(256, relu) → Dense(256, relu) → Dense(128, relu) → Dense(2, softmax), saved to model_plot1.png]

For Keras basic tools & other examples

Keras Basic Tools for DNN

Case Study No 1 : To understand how to apply rescaling to data.

Case Study No 2 : To observe the impact of increasing the number of hidden layers
on the model's accuracy.

Case Study No 3 : For image classification tasks involving fashion items, a
challenging dataset is employed.

Case Study No 4 : Utilizing Keras to construct a DNN for a regression model allows
for the observation of early stopping in action.

References

Keras Official Website.
Playlist for Keras in Arabic by Hesham Asem.
Hesham Asem GitHub for ML tools.
