This notebook walks through training a convolutional neural network (CNN) to classify images of cats and dogs using Google Colab. It connects to Google Drive to access the image data, constructs a CNN with convolutional and max-pooling layers, and compiles the model with the RMSprop optimizer. It also explores using a pretrained VGG16 base with new classifier layers added on top. The model is trained on cat and dog images stored in train and validation directories in the Google Drive folder.



BEH41803. Using Google Colab to train images from a folder: Dog vs Cat

Step 1: Connect to Google Drive.

from google.colab import drive
drive.mount('/content/gdrive')

Drive already mounted at /content/gdrive; to attempt to forcibly remount, call drive.mount("/content/gdrive", force_remount=True).

Step 2: Set the working directory to the dataset folder

import os
os.chdir("/content/gdrive/My Drive/Colab Notebooks/dataset")
os.getcwd()

'/content/gdrive/My Drive/Colab Notebooks/dataset'

Step 3: Define the train, validation, and test data directories

#path = "/content/gdrive/My Drive/Colab Notebooks/dataset/train"
train_dir ='/content/gdrive/My Drive/Colab Notebooks/dataset/train'
pet_train_dir='/content/gdrive/My Drive/Colab Notebooks/dataset/train/pet'
can_train_dir='/content/gdrive/My Drive/Colab Notebooks/dataset/train/can'
 
validation_dir ='/content/gdrive/My Drive/Colab Notebooks/dataset/Val'
pet_val_dir='/content/gdrive/My Drive/Colab Notebooks/dataset/Val/pet'
can_val_dir='/content/gdrive/My Drive/Colab Notebooks/dataset/Val/can'
 
test_dir='/content/gdrive/My Drive/Colab Notebooks/dataset/test'
 
print('pet train length :',len(os.listdir(pet_train_dir)))
print('can train length :',len(os.listdir(can_train_dir)))
 
print('pet val length :',len(os.listdir(pet_val_dir)))
print('can val length :',len(os.listdir(can_val_dir)))
 
 
 

pet train length : 50

can train length : 51

pet val length : 133

can val length : 77
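
flow_from_directory (used in Step 5 below) expects one subfolder per class under the train and validation roots. Based on the variables and counts above, the dataset folder is assumed to be laid out like this:

dataset/
    train/
        pet/   (50 images)
        can/   (51 images)
    Val/
        pet/   (133 images)
        can/   (77 images)
    test/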

import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import VGG16  # pretrained base used in the alternative Step 4

Step 4: Construct the CNN model

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
 
#form architecture
model.compile(optimizer=RMSprop(learning_rate=0.001), loss='binary_crossentropy', metrics=['acc'])
model.summary()
 

Model: "sequential"

_________________________________________________________________

Layer (type) Output Shape Param #

=================================================================

conv2d (Conv2D) (None, 148, 148, 32) 896

_________________________________________________________________

max_pooling2d (MaxPooling2D) (None, 74, 74, 32) 0

_________________________________________________________________

conv2d_1 (Conv2D) (None, 72, 72, 64) 18496

_________________________________________________________________

max_pooling2d_1 (MaxPooling2 (None, 36, 36, 64) 0

_________________________________________________________________

conv2d_2 (Conv2D) (None, 34, 34, 128) 73856

_________________________________________________________________

max_pooling2d_2 (MaxPooling2 (None, 17, 17, 128) 0

_________________________________________________________________

flatten (Flatten) (None, 36992) 0

_________________________________________________________________

dense (Dense) (None, 512) 18940416

_________________________________________________________________

dense_1 (Dense) (None, 1) 513

=================================================================

Total params: 19,034,177

Trainable params: 19,034,177

Non-trainable params: 0

_________________________________________________________________
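
The parameter counts above can be verified by hand: a Conv2D layer has (kernel_height * kernel_width * input_channels + 1) * filters parameters, and a Dense layer has (inputs + 1) * units.

conv2d:   (3*3*3   + 1) * 32  = 896
conv2d_1: (3*3*32  + 1) * 64  = 18,496
conv2d_2: (3*3*64  + 1) * 128 = 73,856
dense:    17*17*128 * 512 + 512 = 36,992 * 512 + 512 = 18,940,416
dense_1:  512 + 1 = 513
Total:    19,034,177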



Step 4 (alternative): Construct a model with a pretrained VGG16 base (you can train with or without the pretrained model)

conv_base = tf.keras.applications.VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
#print(conv_base.summary())
conv_base.trainable=False
conv_base.summary()

Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg1


58892288/58889256 [==============================] - 0s 0us/step

Model: "vgg16"

_________________________________________________________________

Layer (type) Output Shape Param #

=================================================================

input_1 (InputLayer) [(None, 150, 150, 3)] 0

_________________________________________________________________

block1_conv1 (Conv2D) (None, 150, 150, 64) 1792

_________________________________________________________________

block1_conv2 (Conv2D) (None, 150, 150, 64) 36928

_________________________________________________________________

block1_pool (MaxPooling2D) (None, 75, 75, 64) 0

_________________________________________________________________

block2_conv1 (Conv2D) (None, 75, 75, 128) 73856

_________________________________________________________________

block2_conv2 (Conv2D) (None, 75, 75, 128) 147584

_________________________________________________________________

block2_pool (MaxPooling2D) (None, 37, 37, 128) 0

_________________________________________________________________

block3_conv1 (Conv2D) (None, 37, 37, 256) 295168

_________________________________________________________________

block3_conv2 (Conv2D) (None, 37, 37, 256) 590080

_________________________________________________________________

block3_conv3 (Conv2D) (None, 37, 37, 256) 590080

_________________________________________________________________

block3_pool (MaxPooling2D) (None, 18, 18, 256) 0

_________________________________________________________________

block4_conv1 (Conv2D) (None, 18, 18, 512) 1180160

_________________________________________________________________

block4_conv2 (Conv2D) (None, 18, 18, 512) 2359808

_________________________________________________________________

block4_conv3 (Conv2D) (None, 18, 18, 512) 2359808

_________________________________________________________________

block4_pool (MaxPooling2D) (None, 9, 9, 512) 0

_________________________________________________________________

block5_conv1 (Conv2D) (None, 9, 9, 512) 2359808

_________________________________________________________________

block5_conv2 (Conv2D) (None, 9, 9, 512) 2359808

_________________________________________________________________

block5_conv3 (Conv2D) (None, 9, 9, 512) 2359808

_________________________________________________________________

block5_pool (MaxPooling2D) (None, 4, 4, 512) 0

=================================================================

Total params: 14,714,688

Trainable params: 0


Non-trainable params: 14,714,688

_________________________________________________________________

 
 
model = tf.keras.models.Sequential([
    conv_base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
 
#form architecture
model.compile(optimizer=RMSprop(learning_rate=0.001), loss='binary_crossentropy', metrics=['acc'])
model.summary()

Model: "sequential_1"

_________________________________________________________________

Layer (type) Output Shape Param #

=================================================================

vgg16 (Functional) (None, 4, 4, 512) 14714688

_________________________________________________________________

flatten_1 (Flatten) (None, 8192) 0

_________________________________________________________________

dense_2 (Dense) (None, 512) 4194816

_________________________________________________________________

dense_3 (Dense) (None, 1) 513

=================================================================

Total params: 18,910,017

Trainable params: 4,195,329

Non-trainable params: 14,714,688

_________________________________________________________________
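
Only the new classifier head is trainable here, because conv_base.trainable = False freezes all 14,714,688 VGG16 weights. The trainable count matches the head by hand: the flattened VGG16 output is 4 * 4 * 512 = 8,192 features, so dense_2 has 8,192 * 512 + 512 = 4,194,816 parameters and dense_3 has 512 + 1 = 513, giving 4,195,329 trainable parameters in total.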


Step 5: Preprocess the input data

 
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)  # rescale factor: each pixel value is multiplied by 1/255
test_datagen = ImageDataGenerator(rescale=1./255)
 
#has three methods flow(),flow_from_directory() and flow_from_dataframe()
train_generator = train_datagen.flow_from_directory(
        # This is the target directory
        train_dir,
        # All images will be resized to 150x150
        target_size=(150, 150),
        batch_size=20,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')
 
validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=20,
        class_mode='binary')
 
for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch.shape)
    break

Found 101 images belonging to 2 classes.

Found 210 images belonging to 2 classes.

data batch shape: (20, 150, 150, 3)

labels batch shape: (20,)
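
With class_mode='binary', flow_from_directory assigns the labels 0 and 1 to the class subfolders in alphabetical order, so 'can' should map to 0 and 'pet' to 1. A quick check (minimal sketch) to confirm the mapping before interpreting the sigmoid output:

print(train_generator.class_indices)   # expected: {'can': 0, 'pet': 1}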

Step 6: Start the training process

history =model.fit(
      train_generator,
      steps_per_epoch=len(train_generator),
      epochs=10,
      validation_data=validation_generator,
      validation_steps=len(validation_generator))

Epoch 1/10

6/6 [==============================] - 68s 14s/step - loss: 1.8548 - acc: 0.7822 - val_


Epoch 2/10

6/6 [==============================] - 2s 382ms/step - loss: 1.4765e-04 - acc: 1.0000 -


Epoch 3/10

6/6 [==============================] - 2s 364ms/step - loss: 1.2934e-04 - acc: 1.0000 -


Epoch 4/10

6/6 [==============================] - 2s 360ms/step - loss: 1.1523e-04 - acc: 1.0000 -


Epoch 5/10

6/6 [==============================] - 2s 360ms/step - loss: 9.9241e-05 - acc: 1.0000 -


Epoch 6/10

6/6 [==============================] - 2s 358ms/step - loss: 8.5521e-05 - acc: 1.0000 -


Epoch 7/10

6/6 [==============================] - 2s 355ms/step - loss: 7.3438e-05 - acc: 1.0000 -


Epoch 8/10

6/6 [==============================] - 2s 383ms/step - loss: 6.1846e-05 - acc: 1.0000 -


Epoch 9/10

6/6 [==============================] - 2s 387ms/step - loss: 5.0874e-05 - acc: 1.0000 -


Epoch 10/10

6/6 [==============================] - 2s 353ms/step - loss: 4.1477e-05 - acc: 1.0000 -

Step 7: Plot the training and validation curves


import matplotlib.pyplot as plt
 
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
 
epochs = range(len(acc))
 
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
 
plt.figure()
 
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
 
plt.show()
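
After training, the model can score a single image. A minimal sketch, assuming a file named example.jpg exists in the test folder (the filename is hypothetical) and the alphabetical class mapping from Step 5 ('can' = 0, 'pet' = 1):

import numpy as np
from tensorflow.keras.preprocessing import image

img_path = os.path.join(test_dir, 'example.jpg')         # hypothetical test image
img = image.load_img(img_path, target_size=(150, 150))   # resize to the model's input size
x = image.img_to_array(img) / 255.0                      # same 1./255 rescaling as the generators
x = np.expand_dims(x, axis=0)                            # shape (1, 150, 150, 3)
pred = model.predict(x)[0][0]                            # sigmoid output in [0, 1]
print('pet' if pred >= 0.5 else 'can', pred)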
