FeedForward TensorFlow Keras

This document outlines the implementation of a Feed-Forward Neural Network using TensorFlow/Keras. It includes steps for importing libraries, loading and normalizing the MNIST dataset, building and compiling the model, training it, and evaluating its performance. The expected output shows the TensorFlow version and the training accuracy over 5 epochs, culminating in a test accuracy of 0.976.


FEED-FORWARD NEURAL NETWORK IMPLEMENTATION IN TENSORFLOW/KERAS

Step 1: Import the necessary libraries

import tensorflow as tf

# Print TensorFlow version
print("TensorFlow version:", tf.__version__)

Step 2: Load and normalize the MNIST dataset

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Normalize the pixel values from [0, 255] to [0, 1]
x_train, x_test = x_train / 255.0, x_test / 255.0
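Dividing by 255.0 maps the 8-bit grayscale intensities (integers 0-255) to floats in [0, 1], which keeps the input scale small for gradient-based training. A minimal plain-Python sketch of what the division does to individual pixel values:

```python
# Example uint8 pixel intensities and their normalized counterparts
raw_pixels = [0, 64, 128, 255]
normalized = [p / 255.0 for p in raw_pixels]
print(normalized)  # every value now lies in [0.0, 1.0]
```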

Step 3: Build the Feed-Forward Neural Network

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),   # Flatten the 28x28 image
    tf.keras.layers.Dense(128, activation='relu'),   # Fully connected layer
    tf.keras.layers.Dropout(0.2),                    # Dropout for regularization
    tf.keras.layers.Dense(10, activation='softmax')  # Output layer (10 classes)
])
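As a sanity check on the architecture, the parameter counts can be worked out by hand: a Dense layer with n inputs and m units has n*m weights plus m biases, and Flatten and Dropout add no parameters. This plain-Python sketch needs no TensorFlow:

```python
# Parameter count for the model above, computed by hand
inputs = 28 * 28              # Flatten output: 784 features
dense1 = inputs * 128 + 128   # weights + biases of the 128-unit hidden layer
dense2 = 128 * 10 + 10        # weights + biases of the 10-unit output layer
total = dense1 + dense2       # Flatten and Dropout contribute no parameters
print(total)                  # 101770, the value model.count_params() reports
```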

Step 4: Compile the model

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
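The sparse variant of categorical crossentropy accepts integer labels directly (0-9 here, no one-hot encoding needed); per example, the loss is just the negative log of the probability the softmax assigned to the true class. A minimal sketch using a hypothetical probability vector (not real model output):

```python
import math

# Hypothetical softmax output for one example whose true label is the digit 7
probs = [0.01, 0.02, 0.01, 0.05, 0.02, 0.01, 0.03, 0.80, 0.03, 0.02]
true_label = 7
loss = -math.log(probs[true_label])  # sparse categorical crossentropy
print(round(loss, 4))                # 0.2231
```

The closer the predicted probability for the true class is to 1, the closer the loss is to 0.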

Step 5: Train the model

model.fit(x_train, y_train, epochs=5)


Step 6: Evaluate the model

test_loss, test_acc = model.evaluate(x_test, y_test)
print("Test accuracy:", test_acc)

Expected Output:
TensorFlow version: 2.x.x

Epoch 1/5
1875/1875 [==============================] - 5s 2ms/step - loss: 0.2961 - accuracy: 0.9140
Epoch 2/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.1415 - accuracy: 0.9582
Epoch 3/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.1051 - accuracy: 0.9681
Epoch 4/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0864 - accuracy: 0.9734
Epoch 5/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0736 - accuracy: 0.9768

313/313 [==============================] - 1s 2ms/step - loss: 0.0769 - accuracy: 0.9760
Test accuracy: 0.976
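Once trained, the model classifies a new image by taking the index of the largest value in the 10-way softmax output. A minimal sketch of that final argmax step, using a hypothetical probability vector standing in for one row of model.predict(x_test):

```python
# Hypothetical softmax output for one test image (not real model output)
probs = [0.01, 0.01, 0.90, 0.02, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01]
predicted_digit = max(range(len(probs)), key=lambda i: probs[i])
print(predicted_digit)  # 2
```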
