EXP6_DOGS&CATS.ipynb - Colaboratory

The document outlines a Colaboratory notebook for training a convolutional neural network (CNN) to classify images of cats and dogs. It includes steps for data preprocessing, model definition, training, and evaluation, achieving a training accuracy of approximately 71.49% and a test accuracy of about 67.16%. The notebook also visualizes training and validation accuracy over epochs and displays sample images from the datasets.

import os
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array
import matplotlib.pyplot as plt

# Mount Google Drive
from google.colab import drive
drive.mount('/content/drive')
# Define the paths to the directories containing the train and test images
train_dir = '/content/drive/MyDrive/CAT-DOG/training set'
test_dir = '/content/drive/MyDrive/CAT-DOG/testing set'

# List the class subdirectories in the testing set
path = '/content/drive/MyDrive/CAT-DOG/testing set'
classes = os.listdir(path)
classes

['dog testing set', 'cat testing set']

# Parameters
batch_size = 32
img_height = 150
img_width = 150
epochs = 10

# Preprocess the images and set up data augmentation
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True
)

test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary'
)

Found 1403 images belonging to 2 classes.

validation_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary'
)

Found 609 images belonging to 2 classes.
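
Because flow_from_directory infers the labels from the subdirectory names, it is worth confirming which index maps to which class before interpreting the sigmoid output. A quick check (not part of the original notebook; class indices are assigned in alphabetical order of the folder names):

# Inspect the label mapping inferred from the subdirectory names
print(train_generator.class_indices)
print(validation_generator.class_indices)

With the folder names listed earlier, the 'cat ...' directory should map to 0 and the 'dog ...' directory to 1.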

# Display 10 images from the training dataset
x_train, y_train = train_generator.next()
plt.figure(figsize=(10, 10))
for i in range(10):
    plt.subplot(5, 5, i + 1)
    plt.imshow(x_train[i])
    plt.axis('off')
plt.show()


# Display 10 images from the validation dataset
x_val, y_val = validation_generator.next()
plt.figure(figsize=(10, 10))
for i in range(10):
    plt.subplot(5, 5, i + 1)
    plt.imshow(x_val[i])
    plt.axis('off')
plt.show()

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Define the CNN model
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    MaxPooling2D(2, 2),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(2, 2),
    Conv2D(128, (3, 3), activation='relu'),
    MaxPooling2D(2, 2),
    Conv2D(128, (3, 3), activation='relu'),
    MaxPooling2D(2, 2),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(1, activation='sigmoid')
])

# Display model summary


model.summary()

Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 148, 148, 32) 896

max_pooling2d (MaxPooling2 (None, 74, 74, 32) 0


D)

https://colab.research.google.com/drive/1bPIWADb7bFmScoCKArfMgW9R4xsmHIUp#scrollTo=Labn3TiDXsx5&printMode=true 2/4
23/02/2024, 15:46 EXP6_DOGS&CATS.ipynb - Colaboratory
conv2d_1 (Conv2D) (None, 72, 72, 64) 18496

max_pooling2d_1 (MaxPoolin (None, 36, 36, 64) 0


g2D)

conv2d_2 (Conv2D) (None, 34, 34, 128) 73856

max_pooling2d_2 (MaxPoolin (None, 17, 17, 128) 0


g2D)

conv2d_3 (Conv2D) (None, 15, 15, 128) 147584

max_pooling2d_3 (MaxPoolin (None, 7, 7, 128) 0


g2D)

flatten (Flatten) (None, 6272) 0

dense (Dense) (None, 512) 3211776

dense_1 (Dense) (None, 1) 513

=================================================================
Total params: 3453121 (13.17 MB)
Trainable params: 3453121 (13.17 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
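
The parameter counts above follow directly from the layer shapes: a Conv2D layer has (kernel_height * kernel_width * input_channels + 1) * filters weights, and a Dense layer has (inputs + 1) * units. A quick sanity check that reproduces the totals (not part of the original notebook):

# Recompute the parameter counts reported by model.summary()
conv_params = lambda k, c_in, c_out: (k * k * c_in + 1) * c_out
dense_params = lambda n_in, n_out: (n_in + 1) * n_out

counts = [
    conv_params(3, 3, 32),           # conv2d    -> 896
    conv_params(3, 32, 64),          # conv2d_1  -> 18496
    conv_params(3, 64, 128),         # conv2d_2  -> 73856
    conv_params(3, 128, 128),        # conv2d_3  -> 147584
    dense_params(7 * 7 * 128, 512),  # dense     -> 3211776
    dense_params(512, 1),            # dense_1   -> 513
]
print(counts, sum(counts))           # total: 3453121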

# Compile the model


model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the model
history = model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // batch_size
)

Epoch 1/10
43/43 [==============================] - 112s 3s/step - loss: 0.6930 - accuracy: 0.5179 - val_loss: 0.6931 - val_accuracy: 0.4984
Epoch 2/10
43/43 [==============================] - 79s 2s/step - loss: 0.6830 - accuracy: 0.5580 - val_loss: 0.7022 - val_accuracy: 0.4984
Epoch 3/10
43/43 [==============================] - 79s 2s/step - loss: 0.6772 - accuracy: 0.5573 - val_loss: 0.6636 - val_accuracy: 0.5724
Epoch 4/10
43/43 [==============================] - 78s 2s/step - loss: 0.6740 - accuracy: 0.5901 - val_loss: 0.6794 - val_accuracy: 0.5641
Epoch 5/10
43/43 [==============================] - 80s 2s/step - loss: 0.6476 - accuracy: 0.6265 - val_loss: 0.6578 - val_accuracy: 0.6447
Epoch 6/10
43/43 [==============================] - 81s 2s/step - loss: 0.6036 - accuracy: 0.6623 - val_loss: 0.6285 - val_accuracy: 0.6645
Epoch 7/10
43/43 [==============================] - 79s 2s/step - loss: 0.5894 - accuracy: 0.6951 - val_loss: 0.6599 - val_accuracy: 0.6513
Epoch 8/10
43/43 [==============================] - 81s 2s/step - loss: 0.6137 - accuracy: 0.6426 - val_loss: 0.6246 - val_accuracy: 0.7023
Epoch 9/10
43/43 [==============================] - 78s 2s/step - loss: 0.5756 - accuracy: 0.6973 - val_loss: 0.6031 - val_accuracy: 0.6793
Epoch 10/10
43/43 [==============================] - 80s 2s/step - loss: 0.5396 - accuracy: 0.7177 - val_loss: 0.6751 - val_accuracy: 0.6727

# Plot training and validation accuracy


plt.plot(history.history['accuracy'], label='Train Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.title('Training and Validation Accuracy')
plt.show()
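
The same History object also records the loss values, and plotting them the same way makes it easier to see where the validation loss stops improving (it rises again in the final epoch above). A parallel sketch, not in the original notebook:

# Plot training and validation loss
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.title('Training and Validation Loss')
plt.show()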


# Print train and test accuracy


train_loss, train_accuracy = model.evaluate(train_generator)
test_loss, test_accuracy = model.evaluate(validation_generator)
print(f"Train Accuracy: {train_accuracy*100}")
print(f"Test Accuracy: {test_accuracy*100}")

44/44 [==============================] - 29s 665ms/step - loss: 0.5486 - accuracy: 0.7149


20/20 [==============================] - 11s 532ms/step - loss: 0.6766 - accuracy: 0.6716
Train Accuracy: 71.48966789245605
Test Accuracy: 67.15927720069885
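
With the model evaluated, the load_img and img_to_array helpers imported at the top can be used to classify a single image. A minimal sketch (the file path is a placeholder; the 0.5 threshold and the cat=0/dog=1 mapping assume the alphabetical class ordering reported by class_indices):

# Classify a single image with the trained model (path below is a placeholder)
img_path = '/content/drive/MyDrive/CAT-DOG/testing set/dog testing set/example.jpg'
img = load_img(img_path, target_size=(img_height, img_width))
x = img_to_array(img) / 255.0   # same rescaling as the generators
x = np.expand_dims(x, axis=0)   # add the batch dimension
prob = model.predict(x)[0][0]   # sigmoid output in [0, 1]
label = 'dog' if prob >= 0.5 else 'cat'
print(f"Predicted: {label} (p={prob:.2f})")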
