Ex No 7

It is a Deep Learning program

Uploaded by

Vaishnavi

Ex. No: 7  Applying the Autoencoder algorithm for encoding real-world data

Aim:
To apply the Autoencoder algorithm for encoding real-world data, using the MNIST dataset.

Algorithm:

1. Load the MNIST dataset.
2. Normalize pixel values to the range [0, 1].
3. Flatten each 28 × 28 image into a 784-dimensional vector.
4. Create the input layer with 784 neurons.
5. Add an encoder layer with 32 neurons.
6. Add a decoder layer to reconstruct the 784-dimensional images.
7. Compile the model with the Adam optimizer and binary cross-entropy loss.
8. Train the model on the training data, using the images as both input and target.
9. Predict the reconstructed images from the test set.
10. Visualize the original and reconstructed images side by side.
11. Extract the encoded representations and print their shape.
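The preprocessing in steps 1–3 can be sketched with NumPy alone (the array below is a synthetic stand-in for MNIST images, not the real dataset):

```python
import numpy as np

# Synthetic stand-in for MNIST: 3 images of 28x28 uint8 pixels
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(3, 28, 28), dtype=np.uint8)

# Step 2: normalize pixel values to [0, 1]
normalized = images.astype("float32") / 255.0

# Step 3: flatten each image to a 784-dimensional vector
flattened = normalized.reshape(len(normalized), -1)

print(flattened.shape)  # (3, 784)
```

The same two operations are applied to the real `X_train` and `X_test` arrays in the program below.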

Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
import matplotlib.pyplot as plt
# Load the MNIST dataset
(X_train, _), (X_test, _) = tf.keras.datasets.mnist.load_data()
# Normalize the images to [0, 1] and flatten them
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0
X_train = X_train.reshape((X_train.shape[0], -1)) # Flatten to 784
X_test = X_test.reshape((X_test.shape[0], -1)) # Flatten to 784
# Define the autoencoder model
input_dim = X_train.shape[1] # 784 for MNIST
encoding_dim = 32 # Dimension of the encoded representation
# Input layer
input_data = layers.Input(shape=(input_dim,))
# Encoder
encoded = layers.Dense(encoding_dim, activation='relu')(input_data)
# Decoder
decoded = layers.Dense(input_dim, activation='sigmoid')(encoded)
# Autoencoder model
autoencoder = models.Model(input_data, decoded)
# Compile the model
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# Train the autoencoder (inputs serve as their own targets)
autoencoder.fit(X_train, X_train, epochs=50, batch_size=256, validation_split=0.2)
# Evaluate the autoencoder
reconstructed_data = autoencoder.predict(X_test)
# Visualize the results (comparing original and reconstructed data)
def plot_comparison(original, reconstructed, n=5):
    plt.figure(figsize=(12, 6))
    for i in range(n):
        # Original data
        plt.subplot(2, n, i + 1)
        plt.imshow(original[i].reshape(28, 28), cmap='gray')
        plt.title("Original")
        plt.axis('off')
        # Reconstructed data
        plt.subplot(2, n, i + 1 + n)
        plt.imshow(reconstructed[i].reshape(28, 28), cmap='gray')
        plt.title("Reconstructed")
        plt.axis('off')
    plt.show()

# Plot comparison for the first 5 samples
plot_comparison(X_test, reconstructed_data, n=5)
# Extract the encoded data via a standalone encoder model
encoder = models.Model(input_data, encoded)
encoded_data = encoder.predict(X_test)
print("Encoded Data Shape:", encoded_data.shape)
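Beyond visual inspection, reconstruction quality can be quantified numerically. The sketch below is not part of the original program; it computes per-sample mean squared error with NumPy only, using small synthetic arrays in place of `X_test` and `reconstructed_data`:

```python
import numpy as np

def reconstruction_mse(original, reconstructed):
    """Per-sample mean squared error between flattened images."""
    original = np.asarray(original, dtype=np.float64)
    reconstructed = np.asarray(reconstructed, dtype=np.float64)
    return ((original - reconstructed) ** 2).mean(axis=1)

# Synthetic stand-ins (shape: n_samples x 784); a reconstruction is
# modeled as the original plus a little noise, clipped back to [0, 1]
rng = np.random.default_rng(0)
x = rng.random((5, 784))
x_hat = np.clip(x + rng.normal(0.0, 0.05, x.shape), 0.0, 1.0)

errors = reconstruction_mse(x, x_hat)
print("Per-sample MSE:", errors.round(4))
print("Mean MSE:", errors.mean().round(4))
```

With the trained autoencoder, passing `X_test` and `reconstructed_data` to `reconstruction_mse` gives one error value per test image; a lower mean indicates a better 32-dimensional encoding.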

Output:

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz


11490434/11490434 ━━━━━━━━━━━━━━━━━━━━ 0s 0us/step
Epoch 1/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 11ms/step - loss: 0.4046 - val_loss: 0.2006
Epoch 2/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 11ms/step - loss: 0.1894 - val_loss: 0.1637
Epoch 3/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 4s 17ms/step - loss: 0.1590 - val_loss: 0.1450
Epoch 4/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - loss: 0.1421 - val_loss: 0.1315
Epoch 5/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 12ms/step - loss: 0.1293 - val_loss: 0.1223
Epoch 6/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - loss: 0.1207 - val_loss: 0.1158
Epoch 7/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 0.1143 - val_loss: 0.1106
Epoch 8/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 5s 27ms/step - loss: 0.1094 - val_loss: 0.1066
Epoch 9/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 7s 11ms/step - loss: 0.1054 - val_loss: 0.1037
Epoch 10/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - loss: 0.1024 - val_loss: 0.1012
Epoch 11/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 15ms/step - loss: 0.1001 - val_loss: 0.0994
Epoch 12/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 12ms/step - loss: 0.0984 - val_loss: 0.0981
Epoch 13/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - loss: 0.0971 - val_loss: 0.0970
Epoch 14/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 11ms/step - loss: 0.0959 - val_loss: 0.0964
Epoch 15/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - loss: 0.0955 - val_loss: 0.0958
Epoch 16/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 0.0949 - val_loss: 0.0955
Epoch 17/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 14ms/step - loss: 0.0945 - val_loss: 0.0952
Epoch 18/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - loss: 0.0942 - val_loss: 0.0950
Epoch 19/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 12ms/step - loss: 0.0941 - val_loss: 0.0948
Epoch 20/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 0.0941 - val_loss: 0.0947
Epoch 21/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 0.0939 - val_loss: 0.0946
Epoch 22/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 15ms/step - loss: 0.0936 - val_loss: 0.0945
Epoch 23/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 0.0935 - val_loss: 0.0945
Epoch 24/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - loss: 0.0936 - val_loss: 0.0943
Epoch 25/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 10ms/step - loss: 0.0932 - val_loss: 0.0943
Epoch 26/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - loss: 0.0934 - val_loss: 0.0942
Epoch 27/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 18ms/step - loss: 0.0935 - val_loss: 0.0941
Epoch 28/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 4s 21ms/step - loss: 0.0929 - val_loss: 0.0941
Epoch 29/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 4s 16ms/step - loss: 0.0932 - val_loss: 0.0940
Epoch 30/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 12ms/step - loss: 0.0933 - val_loss: 0.0940
Epoch 31/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 0.0930 - val_loss: 0.0940
Epoch 32/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 14ms/step - loss: 0.0931 - val_loss: 0.0940
Epoch 33/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 4s 10ms/step - loss: 0.0930 - val_loss: 0.0939
Epoch 34/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 10ms/step - loss: 0.0930 - val_loss: 0.0938
Epoch 35/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - loss: 0.0930 - val_loss: 0.0938
Epoch 36/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - loss: 0.0928 - val_loss: 0.0938
Epoch 37/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 4s 17ms/step - loss: 0.0927 - val_loss: 0.0938
Epoch 38/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 4s 11ms/step - loss: 0.0928 - val_loss: 0.0938
Epoch 39/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - loss: 0.0927 - val_loss: 0.0938
Epoch 40/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 10ms/step - loss: 0.0929 - val_loss: 0.0938
Epoch 41/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 12ms/step - loss: 0.0929 - val_loss: 0.0937
Epoch 42/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 15ms/step - loss: 0.0928 - val_loss: 0.0937
Epoch 43/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 4s 11ms/step - loss: 0.0928 - val_loss: 0.0937
Epoch 44/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - loss: 0.0928 - val_loss: 0.0937
Epoch 45/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 0.0923 - val_loss: 0.0937
Epoch 46/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 13ms/step - loss: 0.0927 - val_loss: 0.0936
Epoch 47/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 3s 15ms/step - loss: 0.0929 - val_loss: 0.0936
Epoch 48/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 4s 11ms/step - loss: 0.0926 - val_loss: 0.0937
Epoch 49/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - loss: 0.0928 - val_loss: 0.0937
Epoch 50/50
188/188 ━━━━━━━━━━━━━━━━━━━━ 4s 18ms/step - loss: 0.0926 - val_loss: 0.0936
313/313 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step

Result:
Thus the program to apply the Autoencoder algorithm for encoding real-world data on the MNIST
dataset was executed successfully and the output was verified.
