
NN LAB 13 SEP - Jupyter Notebook

The document details the steps taken to build and train a convolutional neural network (CNN) model on the MNIST dataset using TensorFlow and Keras. This includes loading and preprocessing the MNIST data, constructing and compiling the CNN model with multiple convolutional and dense layers, training the model for 10 epochs, evaluating training accuracy and loss, and saving the trained model for future use.

In [ ]:  import numpy as np
import tensorflow
import keras
import PIL

In [ ]:  import tensorflow as tf


import matplotlib.pyplot as plt

In [ ]:  from tensorflow import keras


from tensorflow.keras import layers, datasets, models
from tensorflow.keras.models import Sequential

In [ ]:  (train_images, train_labels),(test_images, test_labels)=datasets.mnist.load_



train_images = train_images.reshape((60000, 28, 28, 1))
test_images = test_images.reshape((10000, 28, 28, 1))

train_images, test_images = train_images / 255.0, test_images / 255.0

print("TRAIN IMAGES: ", train_images.shape)
print("TEST IMAGES: ", test_images.shape)

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
11501568/11490434 [==============================] - 0s 0us/step

TRAIN IMAGES: (60000, 28, 28, 1)

TEST IMAGES: (10000, 28, 28, 1)
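Because the labels stay as integer class indices (0-9) rather than one-hot vectors, the sparse categorical cross-entropy loss used later can consume them directly. A quick sanity check along these lines (an illustrative sketch, not part of the original notebook) would confirm the label shape and the scaled pixel range:

# Illustrative sanity check (not in the original notebook):
# labels are integer class indices, pixels are scaled to [0, 1].
print("TRAIN LABELS:", train_labels.shape)    # (60000,)
print("First labels:", train_labels[:5])      # integer digits, e.g. [5 0 4 1 9]
print("Pixel range:", train_images.min(), "to", train_images.max())  # 0.0 to 1.0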

In [ ]:  num_classes = 10
img_height = 28
img_width = 28

model = Sequential([
    layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='sigmoid')  # sigmoid output: triggers the from_logits warning during training
])

In [ ]:  model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=
metrics=['accuracy'])
In [ ]:  model.summary()

Model: "sequential"
_________________________________________________________________
 Layer (type)                     Output Shape              Param #
=================================================================
 conv2d (Conv2D)                  (None, 26, 26, 64)        640
 conv2d_1 (Conv2D)                (None, 26, 26, 32)        18464
 max_pooling2d (MaxPooling2D)     (None, 13, 13, 32)        0
 conv2d_2 (Conv2D)                (None, 13, 13, 16)        4624
 max_pooling2d_1 (MaxPooling2D)   (None, 6, 6, 16)          0
 conv2d_3 (Conv2D)                (None, 6, 6, 64)          9280
 max_pooling2d_2 (MaxPooling2D)   (None, 3, 3, 64)          0
 flatten (Flatten)                (None, 576)               0
 dense (Dense)                    (None, 128)               73856
 dense_1 (Dense)                  (None, 10)                1290
=================================================================
Total params: 108,154
Trainable params: 108,154
Non-trainable params: 0
_________________________________________________________________

In [ ]:  epochs=10
history=model.fit(
train_images,
train_labels,
epochs=epochs
)

Epoch 1/10

/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: "`sparse_categorical_crossentropy` received `from_logits=True`, but the `output` argument was produced by a sigmoid or softmax activation and thus does not represent logits. Was this intended?"
  return dispatch_target(*args, **kwargs)

1875/1875 [==============================] - 18s 4ms/step - loss: 0.1353 - accuracy: 0.9569
Epoch 2/10
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0439 - accuracy: 0.9862
Epoch 3/10
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0309 - accuracy: 0.9904
Epoch 4/10
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0254 - accuracy: 0.9918
Epoch 5/10
1875/1875 [==============================] - 13s 7ms/step - loss: 0.0214 - accuracy: 0.9934
Epoch 6/10
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0169 - accuracy: 0.9945
Epoch 7/10
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0162 - accuracy: 0.9945
Epoch 8/10
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0129 - accuracy: 0.9960
Epoch 9/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.0130 - accuracy: 0.9959
Epoch 10/10
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0109 - accuracy: 0.9963
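The UserWarning at the start of epoch 1 points out a mismatch: the loss was built with from_logits=True, but the final Dense layer applies a sigmoid, so the model's outputs are probabilities rather than logits. Training still converged here, but a consistent configuration would either keep from_logits=True and drop the final activation, or keep a probability output (typically softmax for mutually exclusive classes) and set from_logits=False. A minimal sketch of the first option follows; logits_model is a hypothetical name, and this cell was not run in the original notebook:

# Sketch: same architecture, but the last layer emits raw logits,
# which matches SparseCategoricalCrossentropy(from_logits=True).
logits_model = Sequential([
    layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(10)  # no activation: raw logits
])
logits_model.compile(optimizer='adam',
                     loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                     metrics=['accuracy'])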

In [ ]:  acc=history.history['accuracy']
loss=history.history['loss']
epochs_range=range(epochs)

plt.figure(figsize=(8,8))
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, loss, label='Loss')
plt.legend(loc='lower right')
plt.title('Training Accuracy and Loss')

Out[10]: Text(0.5, 1.0, 'Training Accuracy and Loss')


In [ ]:  image=(train_images[1]).reshape(1,28,28,1)

model_pred=model.predict(image)
classes_x=np.argmax(model_pred, axis=1)

plt.imshow(image.reshape(28,28))
print(classes_x)

[0]

In [ ]:  image=(train_images[2]).reshape(1,28,28,1)

model_pred=model.predict(image)
classes_x=np.argmax(model_pred, axis=1)

plt.imshow(image.reshape(28,28))
print(classes_x)

[4]

In [ ]:  model.save("tf-cnn-model.hs")

In [ ]:  loaded_model=models.load_model("tf-cnn-model.hs")

In [ ]:  image=(train_images[2]).reshape(1,28,28,1)

model_pred=model.predict(image)
classes_x=np.argmax(model_pred, axis=1)

plt.imshow(image.reshape(28,28))
print(classes_x)

[4]
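The test split loaded at the start is never scored in the notebook, and the reloaded model is only checked on a single image. A short evaluation call like the following (a sketch; no test-set figures are reported above) would confirm that the saved model restored correctly and report held-out accuracy:

# Sketch: score the reloaded model on the held-out test set
# (assumption: this cell was not run in the original notebook).
test_loss, test_acc = loaded_model.evaluate(test_images, test_labels, verbose=2)
print("TEST LOSS:", test_loss)
print("TEST ACCURACY:", test_acc)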

