Keras PROGRAM

Keras is a high-level neural networks API in Python that simplifies building deep learning models and is now part of TensorFlow. The document includes a sample program demonstrating how to use Keras to build and train a convolutional neural network on the MNIST dataset. The model achieves a test accuracy of approximately 99.16% after training for five epochs.


Keras is a high-level neural networks API, written in Python, that runs on top of other deep learning frameworks such as TensorFlow, Theano, and Microsoft Cognitive Toolkit (CNTK). It was developed to make building and experimenting with deep learning models easier and faster. In fact, it is now part of TensorFlow as its official high-level API.
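
As a quick illustration (a minimal sketch, not taken from the sample program below), the same API is reached directly through the tensorflow package:

import tensorflow as tf

# Keras ships inside TensorFlow; tf.keras exposes the layers/models API used below.
print(tf.__version__)

# A tiny one-layer classifier built with the high-level Sequential interface.
tiny_model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax')
])
tiny_model.compile(optimizer='adam', loss='categorical_crossentropy')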

SAMPLE PROGRAM:

# Importing necessary libraries

import tensorflow as tf

from tensorflow.keras import layers, models

from tensorflow.keras.datasets import mnist

from tensorflow.keras.utils import to_categorical

# Load the MNIST dataset

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Preprocess the data

train_images = train_images.reshape((train_images.shape[0], 28, 28, 1))  # Reshape images for CNN input

test_images = test_images.reshape((test_images.shape[0], 28, 28, 1))

train_images = train_images.astype('float32') / 255 # Normalize the images

test_images = test_images.astype('float32') / 255

# Convert labels to one-hot encoded vectors

train_labels = to_categorical(train_labels)

test_labels = to_categorical(test_labels)
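
# Note (not part of the original program): one-hot encoding is optional here.
# Keeping the integer labels and compiling the model with
# loss='sparse_categorical_crossentropy' instead of 'categorical_crossentropy'
# gives the same training behaviour without the to_categorical step.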

# Build the model

model = models.Sequential()

model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))

model.add(layers.MaxPooling2D((2, 2)))

model.add(layers.Conv2D(64, (3, 3), activation='relu'))

model.add(layers.MaxPooling2D((2, 2)))

model.add(layers.Conv2D(64, (3, 3), activation='relu'))

model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))

model.add(layers.Dense(10, activation='softmax')) # 10 classes (digits 0-9)
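
# Optional (not in the original program): model.summary() would print the
# layer-by-layer output shapes and parameter counts of the network built above.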

# Compile the model

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train the model

model.fit(train_images, train_labels, epochs=5, batch_size=64)
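
# Note (not used in the original run): passing validation_split=0.1 or
# validation_data=(test_images, test_labels) to model.fit reports validation
# metrics after each epoch.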

# Evaluate the model

test_loss, test_acc = model.evaluate(test_images, test_labels)

print(f"Test accuracy: {test_acc}")

OUTPUT:

Epoch 1/5
938/938 ━━━━━━━━━━━━━━━━━━━━ 17s 16ms/step - accuracy: 0.8592 - loss: 0.4609
Epoch 2/5
938/938 ━━━━━━━━━━━━━━━━━━━━ 20s 16ms/step - accuracy: 0.9817 - loss: 0.0588
Epoch 3/5
938/938 ━━━━━━━━━━━━━━━━━━━━ 15s 16ms/step - accuracy: 0.9880 - loss: 0.0404
Epoch 4/5
938/938 ━━━━━━━━━━━━━━━━━━━━ 21s 16ms/step - accuracy: 0.9909 - loss: 0.0280
Epoch 5/5
938/938 ━━━━━━━━━━━━━━━━━━━━ 20s 16ms/step - accuracy: 0.9931 - loss: 0.0214
313/313 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - accuracy: 0.9893 - loss: 0.0347
Test accuracy: 0.991599977016449
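
Once trained, the model can be used for inference. The snippet below is a short sketch (not part of the original program) that predicts the digits for the first five test images; it assumes the model, test_images, and test_labels variables from the program above are still in memory.

import numpy as np

# Class probabilities for the first five test images, one row per image
probs = model.predict(test_images[:5])

# The predicted digit is the index with the highest probability
predicted_digits = np.argmax(probs, axis=1)
true_digits = np.argmax(test_labels[:5], axis=1)

print("Predicted:", predicted_digits)
print("Actual:   ", true_digits)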
