Aai 6

The document walks through a TensorFlow/Keras notebook that trains a small convolutional network on the MNIST dataset. It covers data preprocessing, model setup (a MobileNetV2 base is downloaded and frozen, though the classifier that is actually trained is a plain CNN), custom training and validation loops, and validation accuracy over 10 epochs, with a final test accuracy of roughly 97.85%. Training uses categorical cross-entropy loss and the Adam optimizer.

Uploaded by

Aftab Shaikh

3/24/24, 5:38 PM Untitled7.ipynb - Colaboratory

import tensorflow as tf
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()


x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
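Scaling the uint8 pixel values into [0, 1] this way is a standard preprocessing step; a minimal NumPy illustration of the same transform:

```python
import numpy as np

# Mimic a tiny grayscale image with uint8 pixel values
pixels = np.array([[0, 127, 255]], dtype=np.uint8)

# Cast to float32 first, then divide, so the division is not integer math
scaled = pixels.astype("float32") / 255.0

print(scaled.tolist())  # values now lie in [0.0, 1.0]
```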

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist
11490434/11490434 [==============================] - 0s 0us/step

y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
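`to_categorical` turns each integer digit label into a 10-element one-hot row; the same encoding can be sketched with plain NumPy (the variable names here are illustrative):

```python
import numpy as np

labels = np.array([3, 0, 7])  # three example digit labels
num_classes = 10

# Row i of eye(10) is the one-hot vector for class i, so indexing
# the identity matrix by the labels produces the one-hot batch
one_hot = np.eye(num_classes, dtype="float32")[labels]

print(one_hot.shape)  # (3, 10)
```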

base_model = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                               include_top=False,
                                               weights='imagenet')

Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/mobi
9406464/9406464 [==============================] - 0s 0us/step

model = models.Sequential()
model.add(layers.Reshape((28, 28, 1), input_shape=(28, 28))) # Add a channel dimension
model.add(layers.Conv2D(32, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(10, activation='softmax'))

# Freeze the MobileNetV2 base (note: it is never added to the Sequential model above)
for layer in base_model.layers:
    layer.trainable = False

model.compile(optimizer=optimizers.Adam(learning_rate=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
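Categorical cross-entropy, the loss compiled above, is the per-example quantity -sum(t * log(p)), averaged over the batch; a NumPy sketch of that formula (the `eps` clip, an implementation detail added here, guards against log(0)):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    """Mean of -sum(t * log(p)) over the batch."""
    p = np.clip(y_pred, eps, 1.0)
    return float(np.mean(-np.sum(y_true * np.log(p), axis=1)))

# A maximally uncertain 2-class prediction costs -log(0.5), about 0.693
loss = categorical_crossentropy(np.array([[1.0, 0.0]]), np.array([[0.5, 0.5]]))
print(round(loss, 4))  # 0.6931
```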

epochs = 10
batch_size = 32

# Create tf.data.Dataset for training and validation
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(60000).batch(batch_size)
val_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(batch_size)
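`shuffle(60000).batch(batch_size)` buffers the full training set, shuffles it, and slices it into mini-batches; the batching half behaves like this plain-Python generator (an illustrative sketch, not the tf.data API):

```python
import numpy as np

def batches(x, y, batch_size):
    """Yield successive (x, y) mini-batches; the last one may be smaller."""
    for i in range(0, len(x), batch_size):
        yield x[i:i + batch_size], y[i:i + batch_size]

x = np.arange(10)
y = np.arange(10)
sizes = [len(bx) for bx, _ in batches(x, y, 4)]
print(sizes)  # [4, 4, 2]
```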

# Create the optimizer once, outside the loop, so Adam's moment estimates persist
# across batches (re-creating it per batch would reset them every step)
optimizer = optimizers.Adam(learning_rate=0.001)

for epoch in range(epochs):
    print(f"Epoch {epoch + 1}/{epochs}")

    # Training loop
    for images, labels in train_dataset:
        with tf.GradientTape() as tape:
            predictions = model(images, training=True)
            loss = tf.reduce_mean(
                tf.keras.losses.categorical_crossentropy(labels, predictions))
        gradients = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    # Validation loop
    accuracy = tf.metrics.CategoricalAccuracy()
    for images, labels in val_dataset:
        predictions = model(images, training=False)
        accuracy.update_state(labels, predictions)
    val_acc = accuracy.result().numpy()
    print(f"Validation Accuracy: {val_acc}")
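The `CategoricalAccuracy` metric in the validation loop boils down to comparing the argmax of each one-hot label row against the argmax of the predicted probabilities; a small NumPy equivalent:

```python
import numpy as np

def categorical_accuracy(y_true, y_pred):
    """Fraction of rows whose predicted class (argmax) matches the label."""
    return float(np.mean(np.argmax(y_true, axis=1) == np.argmax(y_pred, axis=1)))

y_true = np.eye(3)                      # true classes 0, 1, 2
y_pred = np.array([[0.9, 0.05, 0.05],   # predicts 0 (correct)
                   [0.1, 0.8, 0.1],     # predicts 1 (correct)
                   [0.3, 0.4, 0.3]])    # predicts 1 (wrong)
print(categorical_accuracy(y_true, y_pred))
```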

Epoch 1/10
Validation Accuracy: 0.9700000286102295
Epoch 2/10
Validation Accuracy: 0.9696000218391418
Epoch 3/10
Validation Accuracy: 0.9474999904632568
Epoch 4/10
Validation Accuracy: 0.9710999727249146
Epoch 5/10
Validation Accuracy: 0.9725000262260437
Epoch 6/10
Validation Accuracy: 0.9761000275611877
Epoch 7/10
Validation Accuracy: 0.9793999791145325
Epoch 8/10
Validation Accuracy: 0.978600025177002
Epoch 9/10
Validation Accuracy: 0.979200005531311
Epoch 10/10
Validation Accuracy: 0.9785000085830688

test_loss, test_accuracy = model.evaluate(x_test, y_test)
print(f"Test Accuracy: {test_accuracy}")

313/313 [==============================] - 4s 12ms/step - loss: 0.2811 - accuracy: 0.97
Test Accuracy: 0.9785000085830688

https://colab.research.google.com/drive/143lYn2itKO185DVdxwF9Fi_KcHpGaH8u#scrollTo=Qqampay2IDzm&printMode=true