SRM Institute of Science and Technology: Record Work
SRM INSTITUTE OF SCIENCE AND TECHNOLOGY
Kattankulathur, Chennai - 603203
FACULTY OF MANAGEMENT
Record Work
STUDENT NAME
REGISTER NUMBER
SECTION CC
SRM INSTITUTE OF SCIENCE AND TECHNOLOGY
S.R.M. Nagar, Kattankulathur -603 203
Bonafide Certificate
Submitted for University Examination held on _____________, SRM Institute of Science and
Technology, Kattankulathur.
Examiner-I Examiner-II
TABLE OF CONTENTS
SL. NO   DATE       TITLE OF THE EXERCISE                   FACULTY SIGNATURE
1        18/01/24   SUMMARY GENERATION WITH VANILLA RNN
2        18/01/24   TIME SERIES PREDICTION
3        18/01/24   SENTIMENT ANALYSIS
4        19/01/24   LANGUAGE TRANSLATION
5        19/01/24   STOCK PRICE PREDICTION
6        19/01/24   PADDING SEQUENCES
7        22/01/24   TRUNCATING SEQUENCES
A Recurrent Neural Network (RNN) is a type of artificial neural network designed for
sequential data processing and tasks. Unlike traditional feedforward neural networks, RNNs
have connections that form directed cycles, allowing them to maintain a hidden state that can
capture information about previous inputs in the sequence. This hidden state enables RNNs to
exhibit temporal dynamic behavior, making them suitable for tasks such as time series
prediction, natural language processing, and speech recognition.
The key feature of an RNN is its ability to maintain and update an internal state as it
processes each element in a sequence. This makes RNNs well-suited for tasks where context
or temporal dependencies are crucial. However, traditional RNNs have some limitations, such
as difficulty in learning long-range dependencies and issues with vanishing or exploding
gradients during training.
To address these limitations, more advanced variants of RNNs have been developed.
Some of these include:
1. **Long Short-Term Memory (LSTM):** LSTMs use gating mechanisms (input, forget, and output gates) together with a cell state to retain information over long ranges, addressing the vanishing gradient problem.
2. **Gated Recurrent Unit (GRU):** GRUs are another variant of RNNs that, like LSTMs, address the vanishing gradient problem. They use gating mechanisms to control the information flow, but they have a simpler architecture compared to LSTMs.
RNNs and their variants have been widely used in various applications, including
natural language processing (e.g., language modeling, machine translation), speech
recognition, time series analysis, and more. While they have been successful in many areas,
researchers continue to explore and develop new architectures to improve their performance
and address challenges associated with learning long-term dependencies.
WORKING PRINCIPLE OF RECURRENT NEURAL NETWORK
A Recurrent Neural Network (RNN) is a type of neural network in which the output from the previous step is fed as input to the current step. In traditional neural networks, all inputs and outputs are independent of each other. However, in tasks such as predicting the next word of a sentence, the previous words are required, so there is a need to remember them. RNNs were introduced to solve this issue with the help of a hidden layer. The main and most important feature of an RNN is its hidden state, which remembers information about the sequence. This state is also referred to as the memory state, since it remembers the previous inputs to the network. An RNN uses the same parameters for each input because it performs the same task on all inputs and hidden layers to produce the output. This reduces the number of parameters, unlike other neural networks.
Artificial neural networks that do not have looping nodes are called feed forward neural
networks. Because all information is only passed forward, this kind of neural network is also
referred to as a multi-layer neural network.
Information moves from the input layer to the output layer – if any hidden layers are present
– unidirectionally in a feedforward neural network. These networks are appropriate for image
classification tasks, for example, where input and output are independent. Nevertheless, their
inability to retain previous inputs automatically renders them less useful for sequential data
analysis.
Recurrent Neuron and RNN Unfolding
The fundamental processing unit in a Recurrent Neural Network (RNN) is the recurrent unit (often informally called a recurrent neuron). This unit maintains a hidden state, allowing the network to capture sequential dependencies by remembering previous inputs as it processes the sequence. Long Short-Term Memory (LSTM) and
Gated Recurrent Unit (GRU) versions improve the RNN’s ability to handle long-term
dependencies.
Types Of RNN
There are four types of RNNs based on the number of inputs and outputs in the network.
1. One to One
2. One to Many
3. Many to One
4. Many to Many
One to One
This type of RNN behaves like a simple neural network and is also known as a vanilla neural network. In this network, there is only one input and one output.
One To Many
In this type of RNN, there is one input and many outputs associated with it. One of the most common examples of this network is image captioning, where, given an image, we predict a sentence consisting of multiple words.
Many to One
In this type of network, many inputs are fed to the network at several states, generating only one output. This type of network is used in problems such as sentiment analysis, where we give multiple words as input and predict only the sentiment of the sentence as output.
Many to Many
In this type of neural network, there are multiple inputs and multiple outputs corresponding to a problem. One example of this problem is language translation, where we provide multiple words from one language as input and predict multiple words from the second language as output.
Recurrent Neural Network Architecture
RNNs have the same input and output architecture as any other deep neural architecture.
However, differences arise in the way information flows from input to output. Unlike deep neural networks, where each dense layer has its own weight matrix, in an RNN the same weights are shared across the whole network. The hidden state h_t is calculated for every input x_t using the following formulas:
h_t = σ(U·x_t + W·h_{t-1} + b)
y_t = O(V·h_t + c)
Hence
y_t = f(x_t, h_{t-1}, U, V, W, b, c)
Here h_t is the state of the network at time step t, and the parameters U, V, W, b, c are shared across all time steps.
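As an illustration of these formulas, the following is a minimal sketch (not from the original record) of a single forward step of a vanilla RNN cell in NumPy; the dimensions and the random initialisation are assumptions.
import numpy as np

# Assumed sizes: 4-dimensional inputs, 3-dimensional hidden state, 2-dimensional outputs.
input_dim, hidden_dim, output_dim = 4, 3, 2
rng = np.random.default_rng(0)

U = rng.normal(size=(hidden_dim, input_dim))    # input-to-hidden weights
W = rng.normal(size=(hidden_dim, hidden_dim))   # hidden-to-hidden weights
V = rng.normal(size=(output_dim, hidden_dim))   # hidden-to-output weights
b = np.zeros(hidden_dim)
c = np.zeros(output_dim)

def rnn_step(x_t, h_prev):
    """One time step: h_t = tanh(U x_t + W h_{t-1} + b), y_t = V h_t + c."""
    h_t = np.tanh(U @ x_t + W @ h_prev + b)
    y_t = V @ h_t + c
    return h_t, y_t

# The same cell (same U, W, V, b, c) is applied to every element of the sequence.
h = np.zeros(hidden_dim)
for x_t in rng.normal(size=(5, input_dim)):     # a sequence of 5 inputs
    h, y = rnn_step(x_t, h)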
How does RNN work?
The Recurrent Neural Network consists of multiple fixed activation function units, one for
each time step. Each unit has an internal state which is called the hidden state of the unit. This
hidden state signifies the past knowledge that the network currently holds at a given time
step. This hidden state is updated at every time step to signify the change in the knowledge of
the network about the past. The hidden state is updated using the following recurrence relation:
h_t = f(h_{t-1}, x_t), which in a vanilla RNN is the update h_t = σ(U·x_t + W·h_{t-1} + b) given above.
Because an RNN is an ordered network, each hidden state is computed one at a time in a specified order (first h1, then h2, then h3, and so on). Hence backpropagation is applied sequentially through all of these hidden time states.
Backpropagation Through Time (BPTT) In RNN
● L(θ)(loss function) depends on h3
● h3 in turn depends on h2 and W
● h2 in turn depends on h1 and W
● h1 in turn depends on h0 and W
● where h0 is a constant starting state.
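As a hedged illustration of how these dependencies translate into gradients (using h3 as the final state, as in the list above), the gradient of the loss with respect to the shared weight W accumulates one term per time step by the chain rule:
\( \frac{\partial L(\theta)}{\partial W} = \sum_{t=1}^{3} \frac{\partial L(\theta)}{\partial h_3}\,\frac{\partial h_3}{\partial h_t}\,\frac{\partial h_t}{\partial W} \)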
Advantages
1. An RNN remembers information through time. This is useful in time series prediction because previous inputs can be taken into account; LSTM networks extend this memory to longer ranges.
2. Recurrent neural networks are even used with convolutional layers to extend the
effective pixel neighborhood.
Disadvantages
1. Gradient vanishing and exploding problems.
2. Training an RNN is a very difficult task.
3. It cannot process very long sequences if using tanh or relu as an activation
function.
To overcome problems such as vanishing and exploding gradients, several advanced versions of RNNs have been developed. Some of these are:
1. Bidirectional Neural Network (BiNN)
2. Long Short-Term Memory (LSTM)
Bidirectional Neural Network (BiNN)
A BiNN is a variation of a Recurrent Neural Network in which the input information flows in both directions, and the outputs of the two directions are combined to produce the final output. BiNNs are useful in situations where the context of the input is important, such as NLP tasks and time-series analysis problems.
Long Short-Term Memory (LSTM)
Long Short-Term Memory works on a read-write-forget principle: given the input, the network reads and writes the most useful information from the data and forgets the information that is not important for predicting the output. To do this, three gates are introduced into the RNN. In this way, only the selected information is passed through the network.
An RNN is considered a better choice than a standard deep neural network when the data is sequential; the two differ mainly in that an RNN shares its weights across time steps and maintains a hidden state, whereas a deep feedforward network treats each input independently.
Understanding the intuition behind Recurrent Neural Networks (RNNs) involves recognizing
their capability to handle sequential data and learning dependencies over time. Here's an
intuitive explanation:
1. **Sequential Processing:**
- Imagine you're reading a sentence word by word. The meaning of the current word often
depends on the context provided by previous words.
- RNNs, unlike traditional feedforward neural networks, are designed to work with
sequences. They process inputs sequentially, and the output at each step is influenced not
only by the current input but also by the information they've remembered from previous
steps.
2. **Biological Analogy:**
- Analogous to the way our brain processes information over time, RNNs attempt to mimic
sequential processing and memory retention. Each step in the sequence corresponds to a
moment in time, and the hidden state represents the accumulated knowledge up to that point.
In summary, the intuition behind RNNs lies in their ability to process sequential data,
maintain an internal state or memory, and learn dependencies over time. They have been
instrumental in various applications where the context of past information is crucial for
making predictions or generating meaningful outputs.
Exercise No: 1 SUMMARY GENERATION WITH VANILLA RNN.
Date:
AIM
To generate a summary for the loaded dataset using a vanilla RNN.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Load the Shakespeare dataset (or any other text dataset)
Step 3: Create a vocabulary
Step 4: Create mapping from characters to indices and vice versa
Step 5: Convert the text to numerical representation.
Step 6: Create input and target sequences.
Step 7: Initialize Batch size and Buffer size to shuffle the dataset.
Step 8: Also initialize the vocabulary length (in characters), the embedding dimension, and the number of RNN units.
Step 9: Build the model.
Step 10: Compile the model.
Step 11: Display the model summary.
SOURCE CODE
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, SimpleRNN
# Create a vocabulary
vocab = sorted(set(text))
def split_input_target(chunk):
    input_text = chunk[:-1]
    target_text = chunk[1:]
    return input_text, target_text
dataset = sequences.map(split_input_target)
# Batch size
BATCH_SIZE = 64
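The listing above is abbreviated; the following is a minimal, self-contained sketch (not the original code) of the steps it assumes: loading the text, mapping characters to integers, slicing the text into sequences, and building and compiling the model. The file URL is the one used in the public TensorFlow text-generation tutorial, and the sequence length, embedding dimension and number of RNN units are assumptions.
import numpy as np
import tensorflow as tf

# Load the Shakespeare text (URL from the TensorFlow text-generation tutorial).
path_to_file = tf.keras.utils.get_file(
    'shakespeare.txt',
    'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')

# Vocabulary and character-to-index mapping; convert the text to integers.
vocab = sorted(set(text))
char2idx = {ch: i for i, ch in enumerate(vocab)}
text_as_int = np.array([char2idx[ch] for ch in text])

# Slice the text into (seq_length + 1)-character chunks and form (input, target) pairs.
seq_length = 100
sequences = tf.data.Dataset.from_tensor_slices(text_as_int).batch(seq_length + 1, drop_remainder=True)
dataset = sequences.map(lambda chunk: (chunk[:-1], chunk[1:]))
dataset = dataset.shuffle(10000).batch(64, drop_remainder=True)

# Model: embedding -> SimpleRNN -> dense layer over the vocabulary (sizes assumed).
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(vocab), 64),
    tf.keras.layers.SimpleRNN(256, return_sequences=True),
    tf.keras.layers.Dense(len(vocab))
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.summary()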
OUTPUT
Model: "sequential_10"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
simple_rnn_15 (SimpleRNN) (None, None, 256) 82432
=================================================================
Total params: 99137 (387.25 KB)
Trainable params: 99137 (387.25 KB)
Non-trainable params: 0 (0.00 Byte)
RESULT
Thus the summary for the loaded data set has been displayed using vanilla RNN successfully.
Exercise No: 2 TIME SERIES PREDICTION.
Date:
AIM
To predict the time series and generate the synthetic time series data using RNN.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Generate synthetic time series data (sine wave)
Step 3: Create sequences for training
Step 4: Reshape data for RNN input (samples, timesteps, features)
Step 5: Build the RNN model
Step 6: Train the model
Step 7: Test the model by predicting the next values in the time series
Step 8: Generate predictions
Step 9: Plot the results
SOURCE CODE
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, Dense
model.compile(optimizer='adam', loss='mean_squared_error')
# Test the model by predicting the next values in the time series
test_input = sin_wave[-seq_length:]
test_input = np.reshape(test_input, (1, seq_length, 1))
# Generate predictions
predicted_values = []
for _ in range(100):
    predicted_value = model.predict(test_input)
    predicted_values.append(predicted_value[0, 0])
    test_input = np.roll(test_input, -1, axis=1)
    test_input[0, -1, 0] = predicted_value[0, 0]
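The listing above uses sin_wave, seq_length and model without showing their definitions; the following is a minimal sketch of those omitted steps (using the imports already shown), where the series length, window size and layer sizes are assumptions.
# Synthetic sine-wave series.
t = np.linspace(0, 100, 2000)
sin_wave = np.sin(t)

# Sliding-window sequences: predict the next value from the previous seq_length values.
seq_length = 10
X, y = [], []
for i in range(len(sin_wave) - seq_length):
    X.append(sin_wave[i:i + seq_length])
    y.append(sin_wave[i + seq_length])
X = np.array(X).reshape(-1, seq_length, 1)   # (samples, timesteps, features)
y = np.array(y)

# Simple RNN regressor.
model = Sequential([
    SimpleRNN(50, activation='tanh', input_shape=(seq_length, 1)),
    Dense(1)
])
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(X, y, epochs=50, batch_size=32)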
OUTPUT
Epoch 1/50
62/62 [==============================] - 4s 11ms/step - loss: 0.0523
Epoch 2/50
62/62 [==============================] - 1s 9ms/step - loss: 8.3264e-05
Epoch 3/50
62/62 [==============================] - 0s 7ms/step - loss: 1.9235e-05
Epoch 4/50
62/62 [==============================] - 1s 11ms/step - loss: 9.7245e-06
Epoch 5/50
62/62 [==============================] - 1s 9ms/step - loss: 6.9785e-06
Epoch 6/50
62/62 [==============================] - 0s 6ms/step - loss: 3.9190e-06
Epoch 7/50
62/62 [==============================] - 0s 7ms/step - loss: 3.0770e-06
Epoch 8/50
62/62 [==============================] - 0s 7ms/step - loss: 3.5341e-06
Epoch 9/50
62/62 [==============================] - 1s 9ms/step - loss: 2.4125e-06
Epoch 10/50
62/62 [==============================] - 0s 5ms/step - loss: 2.5179e-06
Epoch 11/50
62/62 [==============================] - 0s 3ms/step - loss: 2.4248e-06
Epoch 12/50
62/62 [==============================] - 0s 3ms/step - loss: 3.3002e-06
Epoch 13/50
62/62 [==============================] - 0s 3ms/step - loss: 2.9412e-06
Epoch 14/50
62/62 [==============================] - 0s 3ms/step - loss: 4.7054e-06
Epoch 15/50
62/62 [==============================] - 0s 4ms/step - loss: 5.6266e-06
Epoch 16/50
62/62 [==============================] - 0s 3ms/step - loss: 4.4291e-06
Epoch 17/50
62/62 [==============================] - 0s 3ms/step - loss: 1.4397e-05
Epoch 18/50
62/62 [==============================] - 0s 3ms/step - loss: 3.8717e-06
Epoch 19/50
62/62 [==============================] - 0s 4ms/step - loss: 4.2208e-06
Epoch 20/50
62/62 [==============================] - 0s 3ms/step - loss: 1.9835e-04
Epoch 21/50
62/62 [==============================] - 0s 3ms/step - loss: 8.4141e-04
Epoch 22/50
62/62 [==============================] - 0s 3ms/step - loss: 2.7785e-05
Epoch 23/50
62/62 [==============================] - 0s 3ms/step - loss: 5.8517e-06
Epoch 24/50
62/62 [==============================] - 0s 5ms/step - loss: 3.9655e-06
Epoch 25/50
62/62 [==============================] - 0s 7ms/step - loss: 3.3362e-06
Epoch 26/50
62/62 [==============================] - 0s 6ms/step - loss: 2.2043e-06
Epoch 27/50
62/62 [==============================] - 0s 6ms/step - loss: 1.8418e-06
Epoch 28/50
62/62 [==============================] - 0s 5ms/step - loss: 1.4853e-06
Epoch 29/50
62/62 [==============================] - 0s 5ms/step - loss: 2.8769e-06
Epoch 30/50
62/62 [==============================] - 0s 6ms/step - loss: 1.4655e-06
Epoch 31/50
62/62 [==============================] - 0s 6ms/step - loss: 1.4942e-06
Epoch 32/50
62/62 [==============================] - 0s 6ms/step - loss: 9.6966e-07
Epoch 33/50
62/62 [==============================] - 0s 6ms/step - loss: 1.7658e-06
Epoch 34/50
62/62 [==============================] - 0s 4ms/step - loss: 1.1101e-06
Epoch 35/50
62/62 [==============================] - 0s 4ms/step - loss: 9.3509e-07
Epoch 36/50
62/62 [==============================] - 0s 4ms/step - loss: 1.1097e-06
Epoch 37/50
62/62 [==============================] - 0s 4ms/step - loss: 1.1139e-06
Epoch 38/50
62/62 [==============================] - 0s 4ms/step - loss: 6.4273e-07
Epoch 39/50
62/62 [==============================] - 0s 3ms/step - loss: 2.0434e-06
Epoch 40/50
62/62 [==============================] - 0s 4ms/step - loss: 1.4235e-06
Epoch 41/50
62/62 [==============================] - 0s 3ms/step - loss: 9.1057e-07
Epoch 42/50
62/62 [==============================] - 0s 4ms/step - loss: 7.6581e-07
Epoch 43/50
62/62 [==============================] - 0s 4ms/step - loss: 6.7699e-07
Epoch 44/50
62/62 [==============================] - 0s 4ms/step - loss: 7.4931e-07
Epoch 45/50
62/62 [==============================] - 0s 3ms/step - loss: 9.2770e-07
Epoch 46/50
62/62 [==============================] - 0s 4ms/step - loss: 1.1455e-06
Epoch 47/50
62/62 [==============================] - 0s 3ms/step - loss: 2.0800e-06
Epoch 48/50
62/62 [==============================] - 0s 4ms/step - loss: 1.4866e-06
Epoch 49/50
62/62 [==============================] - 0s 3ms/step - loss: 7.3083e-07
Epoch 50/50
62/62 [==============================] - 0s 3ms/step - loss: 6.6306e-07
1/1 [==============================] - 0s 181ms/step
(... similar one-step prediction logs, "1/1 [==============================] - 0s ...ms/step", repeated for each of the 100 generated values ...)
RESULT
Thus the time series data has been predicted and the synthetic time series data has been
generated using RNN successfully.
Exercise No: 3 SENTIMENT ANALYSIS.
Date:
AIM
To analyze the sentiment of the reviews given by the reviewers in the dataset using an RNN.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Load IMDb dataset
Step 3: Pad sequences to ensure uniform length
Step 4: Build the RNN model
Step 5: Train the model
Step 6: Evaluate the model on the test set
SOURCE CODE
import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Dense
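Only the imports of the original listing are reproduced above; the following is a minimal sketch of the remaining steps (loading, padding, model, training, evaluation) using those imports. The vocabulary size, review length and layer sizes are assumptions.
max_features = 10000   # keep only the top 10,000 words
maxlen = 200           # pad/truncate every review to this length

(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)

model = Sequential([
    Embedding(max_features, 32, input_length=maxlen),
    SimpleRNN(32),
    Dense(1, activation='sigmoid')   # positive / negative
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=128, validation_split=0.2)

loss, acc = model.evaluate(x_test, y_test)
print(f"Test accuracy: {acc * 100:.2f}%")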
OUTPUT
Epoch 1/5
157/157 [==============================] - 9s 47ms/step - loss: 0.6255 -
accuracy: 0.6248 - val_loss: 0.4370 - val_accuracy: 0.8048
Epoch 2/5
157/157 [==============================] - 6s 35ms/step - loss: 0.3547 -
accuracy: 0.8481 - val_loss: 0.3898 - val_accuracy: 0.8266
Epoch 3/5
157/157 [==============================] - 7s 44ms/step - loss: 0.2445 -
accuracy: 0.9054 - val_loss: 0.4100 - val_accuracy: 0.8144
Epoch 4/5
157/157 [==============================] - 6s 35ms/step - loss: 0.1699 -
accuracy: 0.9408 - val_loss: 0.4043 - val_accuracy: 0.8430
Epoch 5/5
157/157 [==============================] - 7s 46ms/step - loss: 0.1060 -
accuracy: 0.9661 - val_loss: 0.4635 - val_accuracy: 0.8156
782/782 [==============================] - 6s 8ms/step - loss: 0.4720 -
accuracy: 0.8104
Test accuracy: 81.04%
0.47195175290107727
[[ 0 0 0 ... 14 6 717]
[ 6 976 2078 ... 125 4 3077]
[ 4 5673 7 ... 9 57 975]
...
[ 0 0 0 ... 21 846 5518]
[ 0 1 11 ... 2302 7 470]
[ 56 96 346 ... 34 2005 2643]]
[0 1 1 ... 0 0 0]
RESULT
Thus the reviews in the dataset have been analyzed for sentiment using an RNN successfully.
Exercise No: 4 LANGUAGE TRANSLATION.
Date:
AIM
To translate a sentence from one language to another (here English to French and French to English) with the help of a seq2seq model using an RNN.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Supply an example data set.
Step 3: Tokenize the given input.
Step 4: Padding sequences.
Step 5: Build the RNN model
Step 6: Train the model
Step 7: Inference for translation
Step 8: Convert predicted sequence to text
Step 9: Test the translation
SOURCE CODE
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Dense
# Tokenization
input_tokenizer = tf.keras.preprocessing.text.Tokenizer()
input_tokenizer.fit_on_texts(input_texts)
input_vocab_size = len(input_tokenizer.word_index) + 1
input_sequences = input_tokenizer.texts_to_sequences(input_texts)
target_tokenizer = tf.keras.preprocessing.text.Tokenizer()
target_tokenizer.fit_on_texts(target_texts)
target_vocab_size = len(target_tokenizer.word_index) + 1
target_sequences = target_tokenizer.texts_to_sequences(target_texts)
# Padding sequences
max_seq_length = max(max(len(seq) for seq in input_sequences), max(len(seq) for seq in target_sequences))
input_sequences_padded = tf.keras.preprocessing.sequence.pad_sequences(input_sequences,
maxlen=max_seq_length, padding='post')
target_sequences_padded = tf.keras.preprocessing.sequence.pad_sequences(target_sequences,
maxlen=max_seq_length, padding='post')
return translated_sentence
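The listing above shows the tokenization and padding steps and the return statement of a translation helper; the following is a minimal sketch of the omitted model definition, training call and helper function, built on the variables defined above. The layer sizes, number of epochs and the helper name translate_sentence are assumptions.
embedding_dim, rnn_units = 64, 128

model = Sequential([
    Embedding(input_vocab_size, embedding_dim, input_length=max_seq_length),
    SimpleRNN(rnn_units, return_sequences=True),
    Dense(target_vocab_size, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(input_sequences_padded, np.expand_dims(target_sequences_padded, -1), epochs=50)

def translate_sentence(sentence):
    # Tokenize and pad the input sentence, predict token probabilities at each position,
    # and map the most likely token indices back to words (index 0 is padding).
    seq = input_tokenizer.texts_to_sequences([sentence])
    seq = tf.keras.preprocessing.sequence.pad_sequences(seq, maxlen=max_seq_length, padding='post')
    pred = model.predict(seq)
    token_ids = np.argmax(pred[0], axis=-1)
    words = [target_tokenizer.index_word.get(i, '') for i in token_ids if i != 0]
    translated_sentence = ' '.join(words)
    return translated_sentence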
OUTPUT
Epoch 1/50
1/1 [==============================] - 1s 1s/step - loss: 2.1877 - accuracy:
0.1000
Epoch 2/50
1/1 [==============================] - 0s 16ms/step - loss: 2.1722 - accuracy:
0.2000
Epoch 3/50
1/1 [==============================] - 0s 14ms/step - loss: 2.1567 - accuracy:
0.4000
Epoch 4/50
1/1 [==============================] - 0s 12ms/step - loss: 2.1411 - accuracy:
0.4000
Epoch 5/50
1/1 [==============================] - 0s 12ms/step - loss: 2.1255 - accuracy:
0.4000
Epoch 6/50
1/1 [==============================] - 0s 13ms/step - loss: 2.1097 - accuracy:
0.5000
Epoch 7/50
1/1 [==============================] - 0s 15ms/step - loss: 2.0937 - accuracy:
0.5000
Epoch 8/50
1/1 [==============================] - 0s 14ms/step - loss: 2.0775 - accuracy:
0.5000
Epoch 9/50
1/1 [==============================] - 0s 12ms/step - loss: 2.0609 - accuracy:
0.5000
Epoch 10/50
1/1 [==============================] - 0s 13ms/step - loss: 2.0439 - accuracy:
0.5000
Epoch 11/50
1/1 [==============================] - 0s 16ms/step - loss: 2.0266 - accuracy:
0.5000
Epoch 12/50
1/1 [==============================] - 0s 16ms/step - loss: 2.0087 - accuracy:
0.5000
Epoch 13/50
1/1 [==============================] - 0s 14ms/step - loss: 1.9904 - accuracy:
0.6000
Epoch 14/50
1/1 [==============================] - 0s 13ms/step - loss: 1.9715 - accuracy:
0.6000
Epoch 15/50
1/1 [==============================] - 0s 16ms/step - loss: 1.9521 - accuracy:
0.6000
Epoch 16/50
1/1 [==============================] - 0s 12ms/step - loss: 1.9321 - accuracy:
0.6000
Epoch 17/50
1/1 [==============================] - 0s 14ms/step - loss: 1.9115 - accuracy:
0.7000
Epoch 18/50
1/1 [==============================] - 0s 15ms/step - loss: 1.8903 - accuracy:
0.8000
Epoch 19/50
1/1 [==============================] - 0s 13ms/step - loss: 1.8684 - accuracy:
0.8000
Epoch 20/50
1/1 [==============================] - 0s 14ms/step - loss: 1.8460 - accuracy:
0.8000
Epoch 21/50
1/1 [==============================] - 0s 15ms/step - loss: 1.8229 - accuracy:
0.8000
Epoch 22/50
1/1 [==============================] - 0s 13ms/step - loss: 1.7993 - accuracy:
0.8000
Epoch 23/50
1/1 [==============================] - 0s 13ms/step - loss: 1.7751 - accuracy:
0.8000
Epoch 24/50
1/1 [==============================] - 0s 12ms/step - loss: 1.7504 - accuracy:
0.8000
Epoch 25/50
1/1 [==============================] - 0s 13ms/step - loss: 1.7251 - accuracy:
0.8000
Epoch 26/50
1/1 [==============================] - 0s 18ms/step - loss: 1.6993 - accuracy:
0.8000
Epoch 27/50
1/1 [==============================] - 0s 15ms/step - loss: 1.6731 - accuracy:
0.8000
Epoch 28/50
1/1 [==============================] - 0s 18ms/step - loss: 1.6464 - accuracy:
0.8000
Epoch 29/50
1/1 [==============================] - 0s 15ms/step - loss: 1.6193 - accuracy:
0.8000
Epoch 30/50
1/1 [==============================] - 0s 15ms/step - loss: 1.5918 - accuracy:
0.8000
Epoch 31/50
1/1 [==============================] - 0s 15ms/step - loss: 1.5640 - accuracy:
0.8000
Epoch 32/50
1/1 [==============================] - 0s 16ms/step - loss: 1.5358 - accuracy:
0.8000
Epoch 33/50
1/1 [==============================] - 0s 14ms/step - loss: 1.5074 - accuracy:
0.8000
Epoch 34/50
1/1 [==============================] - 0s 14ms/step - loss: 1.4786 - accuracy:
0.8000
Epoch 35/50
1/1 [==============================] - 0s 14ms/step - loss: 1.4497 - accuracy:
0.8000
Epoch 36/50
1/1 [==============================] - 0s 15ms/step - loss: 1.4205 - accuracy:
0.9000
Epoch 37/50
1/1 [==============================] - 0s 14ms/step - loss: 1.3911 - accuracy:
0.9000
Epoch 38/50
1/1 [==============================] - 0s 14ms/step - loss: 1.3617 - accuracy:
0.9000
Epoch 39/50
1/1 [==============================] - 0s 14ms/step - loss: 1.3322 - accuracy:
0.9000
Epoch 40/50
1/1 [==============================] - 0s 14ms/step - loss: 1.3026 - accuracy:
0.9000
Epoch 41/50
1/1 [==============================] - 0s 16ms/step - loss: 1.2731 - accuracy:
0.9000
Epoch 42/50
1/1 [==============================] - 0s 14ms/step - loss: 1.2436 - accuracy:
0.9000
Epoch 43/50
1/1 [==============================] - 0s 14ms/step - loss: 1.2144 - accuracy:
0.9000
Epoch 44/50
1/1 [==============================] - 0s 16ms/step - loss: 1.1853 - accuracy:
0.9000
Epoch 45/50
1/1 [==============================] - 0s 13ms/step - loss: 1.1566 - accuracy:
0.9000
Epoch 46/50
1/1 [==============================] - 0s 15ms/step - loss: 1.1281 - accuracy:
0.9000
Epoch 47/50
1/1 [==============================] - 0s 13ms/step - loss: 1.1001 - accuracy:
0.9000
Epoch 48/50
1/1 [==============================] - 0s 13ms/step - loss: 1.0725 - accuracy:
0.9000
Epoch 49/50
1/1 [==============================] - 0s 13ms/step - loss: 1.0454 - accuracy:
0.9000
Epoch 50/50
1/1 [==============================] - 0s 13ms/step - loss: 1.0188 - accuracy:
0.9000
1/1 [==============================] - 0s 183ms/step
RESULT
Thus the given sentence has been translated from English to French with the help of the seq2seq model successfully.
Exercise No: 5 STOCK PRICE PREDICTION.
Date:
AIM
To predict the stock market price value using Long Short-Term Memory (LSTM) networks.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Load the dataset
Step 3: Normalize the data
Step 4: Create sequences for LSTM
Step 5: Split the data into training and testing sets
Step 6: Build the LSTM model
Step 7: Train the model
Step 8: Make predictions on the test set
Step 9: Plot the results
SOURCE CODE
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
model.compile(optimizer='adam', loss='mean_squared_error')
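Only the imports and the compile call of the original listing are reproduced above; the following is a minimal sketch of the surrounding steps (loading, normalizing, windowing, model definition, training and prediction). The file name TSLA.csv, the 'Close' column, the 60-day window and the layer sizes are assumptions.
data = pd.read_csv('TSLA.csv')               # hypothetical CSV of historical prices
prices = data['Close'].values.reshape(-1, 1)

# Scale prices to [0, 1].
scaler = MinMaxScaler()
prices_scaled = scaler.fit_transform(prices)

# Sliding windows of the previous seq_length days -> next-day price.
seq_length = 60
X, y = [], []
for i in range(len(prices_scaled) - seq_length):
    X.append(prices_scaled[i:i + seq_length])
    y.append(prices_scaled[i + seq_length])
X, y = np.array(X), np.array(y)

# Chronological train/test split.
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

model = Sequential([
    LSTM(50, input_shape=(seq_length, 1)),
    Dense(1)
])
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(X_train, y_train, epochs=50, batch_size=32)

# Predict on the test windows and undo the scaling for plotting.
predicted = scaler.inverse_transform(model.predict(X_test))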
OUTPUT
Epoch 1/50
19/19 [==============================] - 2s 6ms/step - loss: 0.1579
Epoch 2/50
19/19 [==============================] - 0s 6ms/step - loss: 0.0105
Epoch 3/50
19/19 [==============================] - 0s 6ms/step - loss: 0.0053
Epoch 4/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0045
Epoch 5/50
19/19 [==============================] - 0s 8ms/step - loss: 0.0041
Epoch 6/50
19/19 [==============================] - 0s 8ms/step - loss: 0.0038
Epoch 7/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0036
Epoch 8/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0035
Epoch 9/50
19/19 [==============================] - 0s 6ms/step - loss: 0.0034
Epoch 10/50
19/19 [==============================] - 0s 8ms/step - loss: 0.0034
Epoch 11/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0033
Epoch 12/50
19/19 [==============================] - 0s 8ms/step - loss: 0.0033
Epoch 13/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0032
Epoch 14/50
19/19 [==============================] - 0s 8ms/step - loss: 0.0032
Epoch 15/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0031
Epoch 16/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0030
Epoch 17/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0030
Epoch 18/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0029
Epoch 19/50
19/19 [==============================] - 0s 8ms/step - loss: 0.0031
Epoch 20/50
19/19 [==============================] - 0s 8ms/step - loss: 0.0028
Epoch 21/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0026
Epoch 22/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0026
Epoch 23/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0026
Epoch 24/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0025
Epoch 25/50
19/19 [==============================] - 0s 8ms/step - loss: 0.0025
Epoch 26/50
19/19 [==============================] - 0s 8ms/step - loss: 0.0023
Epoch 27/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0023
Epoch 28/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0022
Epoch 29/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0024
Epoch 30/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0022
Epoch 31/50
19/19 [==============================] - 0s 8ms/step - loss: 0.0022
Epoch 32/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0021
Epoch 33/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0021
Epoch 34/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0021
Epoch 35/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0021
Epoch 36/50
19/19 [==============================] - 0s 6ms/step - loss: 0.0020
Epoch 37/50
19/19 [==============================] - 0s 6ms/step - loss: 0.0021
Epoch 38/50
19/19 [==============================] - 0s 6ms/step - loss: 0.0020
Epoch 39/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0021
Epoch 40/50
19/19 [==============================] - 0s 6ms/step - loss: 0.0020
Epoch 41/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0020
Epoch 42/50
19/19 [==============================] - 0s 6ms/step - loss: 0.0020
Epoch 43/50
19/19 [==============================] - 0s 6ms/step - loss: 0.0020
Epoch 44/50
19/19 [==============================] - 0s 6ms/step - loss: 0.0020
Epoch 45/50
19/19 [==============================] - 0s 6ms/step - loss: 0.0021
Epoch 46/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0020
Epoch 47/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0020
Epoch 48/50
19/19 [==============================] - 0s 6ms/step - loss: 0.0020
Epoch 49/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0019
Epoch 50/50
19/19 [==============================] - 0s 7ms/step - loss: 0.0019
5/5 [==============================] - 0s 4ms/step
RESULT
Thus the stock market price for Tesla has been predicted using an LSTM network successfully.
Exercise No: 6 PADDING SEQUENCES.
Date:
AIM
To pad shorter sequences with a special padding token to make them equal in length.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Give the sample sequence in the form of a list.
Step 3: Print the padded sequence.
SOURCE CODE
print(padded_sequences)
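The print statement above assumes padded_sequences was created earlier; the following is a minimal sketch of that step, with sample sequences chosen to be consistent with the output below (post-padding with zeros).
from tensorflow.keras.preprocessing.sequence import pad_sequences

sequences = [[1, 2], [3, 4, 5], [6, 7, 8, 9]]                # assumed sample sequences
padded_sequences = pad_sequences(sequences, padding='post')  # pad with 0 after each sequence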
OUTPUT
[[1 2 0 0]
[3 4 5 0]
[6 7 8 9]]
RESULT
Thus the shorter sequences have been padded with a special padding token so that all sequences are equal in length.
Exercise No: 7 TRUNCATING SEQUENCES.
Date:
AIM
To truncate longer sequences to a common length using Keras.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Give the sample sequence in the form of a list.
Step 3: Print the truncated sequence.
SOURCE CODE
print(truncated_sequences)
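The print statement above assumes truncated_sequences was created earlier; the following is a minimal sketch of that step, with sample sequences and maxlen chosen to be consistent with the output below (truncating at the end of each sequence).
from tensorflow.keras.preprocessing.sequence import pad_sequences

sequences = [[0, 1, 2], [3, 4, 5], [6, 7, 8, 9]]                   # assumed sample sequences
truncated_sequences = pad_sequences(sequences, maxlen=3, truncating='post')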
OUTPUT
[[0 1 2]
[3 4 5]
[6 7 8]]
RESULT
Thus the longer sequences have been truncated to a common length using Keras.
Exercise No: 8 SENTIMENT ANALYSIS FOR MOVIE REVIEW DATA SET.
Date:
AIM
To analyze movie reviews using an RNN built with Keras.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Load the IMDb dataset (you can replace this with your supermarket dataset)
Step 3: Limit the vocabulary to the top 10,000 words
Step 4: Pad sequences to a fixed length (adjust as needed)
Step 5: One-hot encode the labels
Step 6: Build the RNN model
Step 7: Compile the model
Step 8: Train the model
Step 9: Evaluate the model on the test set.
SOURCE CODE
import numpy as np
from tensorflow.keras.datasets import imdb
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Dense
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical
# Load the IMDb dataset (you can replace this with your supermarket dataset)
# Limit the vocabulary to the top 10,000 words
max_words = 10000
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_words)
model = Sequential([
    Embedding(input_dim=max_words, output_dim=embedding_dim,
              input_length=max_sequence_length),
    SimpleRNN(units=rnn_units),
    Dense(units=2, activation='softmax')  # Binary classification (positive/negative)
])
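The listing above uses max_sequence_length, embedding_dim and rnn_units without showing them, and omits the preprocessing and training steps; the following is a minimal sketch of those omissions (the values are assumptions and would be defined before the model above).
max_sequence_length = 200
embedding_dim = 32
rnn_units = 64

# Pad sequences to a fixed length and one-hot encode the labels.
x_train = pad_sequences(x_train, maxlen=max_sequence_length)
x_test = pad_sequences(x_test, maxlen=max_sequence_length)
y_train = to_categorical(y_train, num_classes=2)
y_test = to_categorical(y_test, num_classes=2)

# Compile, train and evaluate.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=64, validation_split=0.2)

loss, acc = model.evaluate(x_test, y_test)
print(f"Test Accuracy: {acc * 100:.2f}%")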
OUTPUT
Epoch 1/5
313/313 [==============================] - 23s 68ms/step - loss: 0.5995 -
accuracy: 0.6756 - val_loss: 0.4773 - val_accuracy: 0.7892
Epoch 2/5
313/313 [==============================] - 21s 67ms/step - loss: 0.3218 -
accuracy: 0.8679 - val_loss: 0.3620 - val_accuracy: 0.8540
Epoch 3/5
313/313 [==============================] - 20s 65ms/step - loss: 0.1623 -
accuracy: 0.9416 - val_loss: 0.4059 - val_accuracy: 0.8400
Epoch 4/5
313/313 [==============================] - 21s 67ms/step - loss: 0.0663 -
accuracy: 0.9808 - val_loss: 0.4871 - val_accuracy: 0.8232
Epoch 5/5
313/313 [==============================] - 20s 62ms/step - loss: 0.0447 -
accuracy: 0.9866 - val_loss: 0.5876 - val_accuracy: 0.8316
782/782 [==============================] - 10s 13ms/step - loss: 0.6090 -
accuracy: 0.8236
Test Accuracy: 82.36%
0.608951210975647
RESULT
Thus the movie reviews have been analyzed for sentiment with the help of an RNN built in Keras successfully.
Exercise No: 9 UNFOLDING A NEURAL NETWORK.
Date:
AIM
To represent the computation graph of recurrent neural networks (RNNs) across multiple time
steps.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Create a simple RNN model
Step 3: Print the summary of the model
SOURCE CODE
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, Dense
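Only the imports are reproduced above; the following is a minimal sketch of a model consistent with the summary below: a SimpleRNN unrolled over an arbitrary number of time steps of a single feature, followed by a Dense layer (the sizes are assumptions inferred from the parameter counts).
model = Sequential([
    SimpleRNN(32, return_sequences=True, input_shape=(None, 1)),
    Dense(1)
])
model.summary()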
Model: "sequential_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
simple_rnn_6 (SimpleRNN) (None, None, 32) 1088
=================================================================
Total params: 1121 (4.38 KB)
Trainable params: 1121 (4.38 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
RESULT
Thus the computation graph of recurrent neural networks (RNNs) across multiple time steps has
been represented successfully.
Exercise No: 10 VISUALIZING THE UNFOLDED DATA.
Date:
AIM
To visualize the computation graph of recurrent neural networks (RNNs) across multiple time
steps.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Visualize the model.
Step 3: Plot the model.
SOURCE CODE
from tensorflow.keras.utils import plot_model
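The listing above shows only the import; a minimal sketch of the visualization step, assuming the model from the previous exercise and a hypothetical output file name, is:
# Render the layer graph to an image file and show input/output shapes.
plot_model(model, to_file='rnn_unfolded.png', show_shapes=True, show_layer_names=True)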
OUTPUT
RESULT
Thus the computation graph of recurrent neural networks (RNNs) across multiple time steps has
been visualized successfully.
AUTO ENCODERS - STRUCTURE AND TRAINING
1. Structure:
Encoder:
- *Input Layer:* Represents the input data.
- *Hidden Layers:* Comprise the encoding layers responsible for learning a compact
representation of the input.
- *Code Layer:* The last hidden layer, also known as the bottleneck or latent space,
encodes the essential features of the input.
Decoder:
- *Code Layer:* The representation from the encoder is input to the decoder.
- *Hidden Layers:* These layers decode the representation back into the original input
space.
- *Output Layer:* Represents the reconstructed input.
Connections:
- Weights are learned during training to capture the mapping between the input and the
encoded representation.
Activation Functions:
- Commonly used activation functions include ReLU (Rectified Linear Unit) for hidden
layers and Sigmoid or Hyperbolic Tangent (tanh) for the output layer.
Loss Function:
- The loss function measures the difference between the input and the reconstructed output. Mean Squared Error (MSE) is commonly used: \( \text{MSE} = \frac{1}{n}\sum_{i=1}^{n}(X_i - \hat{X}_i)^2 \), where \(X_i\) is the input and \(\hat{X}_i\) is the reconstructed output.
2. Training:
Objective:
- Minimize the reconstruction error by adjusting the weights in the network.
Forward Pass:
- Pass the input through the encoder to obtain the encoded representation.
- Pass the encoded representation through the decoder to obtain the reconstructed output.
Regularization:
- Techniques like dropout or L1/L2 regularization may be employed to prevent overfitting.
Hyperparameter Tuning:
- Adjust parameters such as learning rate, batch size, and architecture to optimize
performance.
Early Stopping:
- Monitor the validation loss during training and stop when it ceases to improve,
preventing overfitting.
Variations:
- Variational Autoencoders (VAEs) introduce probabilistic elements, incorporating a
sampling process in the latent space.
Applications:
- Dimensionality reduction, data denoising, feature learning, and generation of new data
samples.
Autoencoders are versatile and can be adapted for various tasks based on the architecture and
training objectives. They have proven effective in learning useful representations of data for a
wide range of applications in machine learning and deep learning.
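To make the structure and training procedure described above concrete, the following is a minimal sketch (not from the original record) of a dense autoencoder on MNIST in Keras; the layer sizes, bottleneck dimension and training settings are assumptions.
import numpy as np
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense

# Flattened, scaled MNIST digits as the input data.
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.
x_test = x_test.reshape(-1, 784).astype('float32') / 255.

# Encoder: input -> hidden -> code (bottleneck / latent space).
inputs = Input(shape=(784,))
encoded = Dense(128, activation='relu')(inputs)
code = Dense(32, activation='relu')(encoded)

# Decoder: code -> hidden -> reconstructed input.
decoded = Dense(128, activation='relu')(code)
outputs = Dense(784, activation='sigmoid')(decoded)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer='adam', loss='mse')   # MSE reconstruction loss

# Train to reconstruct the input from itself; the validation loss can drive early stopping.
autoencoder.fit(x_train, x_train, epochs=10, batch_size=256,
                validation_data=(x_test, x_test))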
AUTO ENCODERS AND BOLTZMANN MACHINE - AN INTRODUCTION
Autoencoders
**Definition:**
Autoencoders are a type of artificial neural network used for unsupervised learning. The
network is designed to encode the input data into a lower-dimensional representation and
then decode it back to reconstruct the input. The objective is to minimize the difference
between the input and the reconstructed output, encouraging the network to learn a
compressed representation of the data.
**Architecture:**
- **Encoder:** Takes the input and transforms it into a lower-dimensional representation.
- **Decoder:** Takes the encoded representation and reconstructs the input.
- **Loss Function:** Typically, Mean Squared Error (MSE) is used to measure the
difference between the input and the output.
**Applications:**
- Dimensionality reduction.
- Data denoising.
- Anomaly detection.
- Feature learning.
Boltzmann Machines
**Definition:**
Boltzmann Machines are a type of stochastic, generative, and energy-based model. They
consist of binary units (nodes) that are connected with undirected edges. Learning in
Boltzmann Machines is based on minimizing the energy of the system, and they can be used
for both supervised and unsupervised learning tasks.
**Architecture:**
- **Nodes (or Neurons):** Represent binary states (0 or 1).
- **Connections (or Edges):** Undirected connections between nodes, forming a fully
connected graph.
- **Energy Function:** Defines the compatibility between the states of connected nodes.
- **Learning Algorithm:** Often relies on Contrastive Divergence or Gibbs sampling.
**Types:**
- **Restricted Boltzmann Machines (RBMs):** A simplified version of Boltzmann Machines
with a bipartite structure, making training more efficient.
**Applications:**
- Collaborative filtering (recommendation systems).
- Feature learning.
- Dimensionality reduction.
- Generative modeling.
Both Autoencoders and Boltzmann Machines are powerful tools in the field of machine
learning, each with its unique strengths and applications. The choice between them depends
on the specific problem at hand and the characteristics of the data.
Exercise No: 11 IMAGE DENOISING WITH AUTO ENCODERS.
Date:
AIM
To train an autoencoder to denoise images corrupted with random noise.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Add random noise to images.
Step 3: Encode and decode the images.
Step 4: Plot original, noisy, and denoised images
SOURCE CODE
import numpy as np
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.callbacks import EarlyStopping
import matplotlib.pyplot as plt
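# --- Assumed sketch, not part of the original listing: load MNIST, scale to [0, 1],
# --- add Gaussian noise, and define the input tensor. The noise factor is an assumption.
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape(-1, 28, 28, 1)
x_test = x_test.reshape(-1, 28, 28, 1)
noise_factor = 0.5
x_train_noisy = np.clip(x_train + noise_factor * np.random.normal(size=x_train.shape), 0., 1.)
x_test_noisy = np.clip(x_test + noise_factor * np.random.normal(size=x_test.shape), 0., 1.)
input_img = Input(shape=(28, 28, 1))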
# Encoder
x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# Decoder
x = Conv2D(32, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
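# --- Assumed sketch, not part of the original listing: assemble, compile and train the
# --- autoencoder on noisy inputs with clean targets; the training settings are assumptions.
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train_noisy, x_train, epochs=10, batch_size=128, shuffle=True,
                validation_data=(x_test_noisy, x_test),
                callbacks=[EarlyStopping(monitor='val_loss', patience=2)])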
decoded_imgs = autoencoder.predict(x_test_noisy)
# Plot denoised images (third row of a 3 x n figure; the loop header and n are assumed)
n = 10
plt.figure(figsize=(20, 6))
for i in range(n):
    # Denoised Images
    ax = plt.subplot(3, n, i + 1 + 2 * n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
OUTPUT
RESULT
Thus the autoencoder has been trained to denoise images corrupted with random noise successfully.
Exercise No: 12 DATA IMPUTATION WITH AUTO ENCODERS.
Date:
AIM
To implement data imputation using a denoising autoencoder in Python.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Generate synthetic data with missing values
Step 3: Normalize data
Step 4: Split data into train and test sets
Step 5: Define a denoising autoencoder model
Step 6: Add Gaussian noise to the input
Step 7: Encode & Decode
Step 8: Build the denoising autoencoder model
Step 9: Train the model
Step 10: Impute missing values using the trained denoising autoencoder
Step 11: Invert the normalization
Step 12: Display the results
SOURCE CODE
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, GaussianNoise
from tensorflow.keras.models import Model
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
# Generate synthetic data with missing values
np.random.seed(42)
data = np.random.randn(1000, 10) # 1000 samples, 10 features
missing_mask = np.random.rand(*data.shape) < 0.1 # 10% missing values
data[missing_mask] = np.nan
# Normalize data
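# Note: the NaN entries generated above pass through MinMaxScaler and the network
# unchanged, which is why the training loss below is nan; filling them first
# (for example with np.nan_to_num or column means) would avoid this.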
scaler = MinMaxScaler()
data = scaler.fit_transform(data)
# Split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(data, data, test_size=0.2, random_state=42)
# Define a denoising autoencoder model
def build_denoising_autoencoder(input_dim):
    input_layer = Input(shape=(input_dim,))
    # Add Gaussian noise to the input
    noisy_input = GaussianNoise(0.5)(input_layer)
    # Encoder
    encoded = Dense(64, activation='relu')(noisy_input)
    # Decoder
    decoded = Dense(input_dim, activation='sigmoid')(encoded)
    model = Model(inputs=input_layer, outputs=decoded)
    model.compile(optimizer='adam', loss='mean_squared_error')
    return model
# Build the denoising autoencoder model
input_dim = X_train.shape[1]
denoising_autoencoder = build_denoising_autoencoder(input_dim)
# Train the model
denoising_autoencoder.fit(X_train, X_train, epochs=50, batch_size=32, shuffle=True,
validation_data=(X_test, X_test))
# Impute missing values using the trained denoising autoencoder
imputed_data = denoising_autoencoder.predict(data)
# Invert the normalization
imputed_data = scaler.inverse_transform(imputed_data)
# Display the results
print("Original Data:")
print(data[:5])
print("\nImputed Data:")
print(imputed_data[:5])
OUTPUT
Epoch 1/50
25/25 [==============================] - 3s 38ms/step - loss: nan - val_loss:
nan
Epoch 2/50
25/25 [==============================] - 0s 14ms/step - loss: nan - val_loss:
nan
Epoch 3/50
25/25 [==============================] - 0s 14ms/step - loss: nan - val_loss:
nan
Epoch 4/50
25/25 [==============================] - 0s 13ms/step - loss: nan - val_loss:
nan
Epoch 5/50
25/25 [==============================] - 0s 12ms/step - loss: nan - val_loss:
nan
Epoch 6/50
25/25 [==============================] - 0s 13ms/step - loss: nan - val_loss:
nan
Epoch 7/50
25/25 [==============================] - 0s 12ms/step - loss: nan - val_loss:
nan
Epoch 8/50
25/25 [==============================] - 0s 18ms/step - loss: nan - val_loss:
nan
Epoch 9/50
25/25 [==============================] - 0s 10ms/step - loss: nan - val_loss:
nan
Epoch 10/50
25/25 [==============================] - 0s 11ms/step - loss: nan - val_loss:
nan
Epoch 11/50
25/25 [==============================] - 0s 14ms/step - loss: nan - val_loss:
nan
Epoch 12/50
25/25 [==============================] - 0s 12ms/step - loss: nan - val_loss:
nan
Epoch 13/50
25/25 [==============================] - 0s 12ms/step - loss: nan - val_loss:
nan
Epoch 14/50
25/25 [==============================] - 0s 18ms/step - loss: nan - val_loss:
nan
Epoch 15/50
25/25 [==============================] - 0s 12ms/step - loss: nan - val_loss:
nan
Epoch 16/50
25/25 [==============================] - 0s 10ms/step - loss: nan - val_loss:
nan
Epoch 17/50
25/25 [==============================] - 0s 14ms/step - loss: nan - val_loss:
nan
Epoch 18/50
25/25 [==============================] - 0s 9ms/step - loss: nan - val_loss:
nan
Epoch 19/50
25/25 [==============================] - 0s 9ms/step - loss: nan - val_loss:
nan
Epoch 20/50
25/25 [==============================] - 0s 9ms/step - loss: nan - val_loss:
nan
Epoch 21/50
25/25 [==============================] - 0s 8ms/step - loss: nan - val_loss:
nan
Epoch 22/50
25/25 [==============================] - 0s 8ms/step - loss: nan - val_loss:
nan
Epoch 23/50
25/25 [==============================] - 0s 7ms/step - loss: nan - val_loss:
nan
Epoch 24/50
25/25 [==============================] - 0s 6ms/step - loss: nan - val_loss:
nan
Epoch 25/50
25/25 [==============================] - 0s 8ms/step - loss: nan - val_loss:
nan
Epoch 26/50
25/25 [==============================] - 0s 8ms/step - loss: nan - val_loss:
nan
Epoch 27/50
25/25 [==============================] - 0s 11ms/step - loss: nan - val_loss:
nan
Epoch 28/50
25/25 [==============================] - 0s 6ms/step - loss: nan - val_loss:
nan
Epoch 29/50
25/25 [==============================] - 0s 8ms/step - loss: nan - val_loss:
nan
Epoch 30/50
25/25 [==============================] - 0s 12ms/step - loss: nan - val_loss:
nan
Epoch 31/50
25/25 [==============================] - 0s 11ms/step - loss: nan - val_loss:
nan
Epoch 32/50
25/25 [==============================] - 0s 6ms/step - loss: nan - val_loss:
nan
Epoch 33/50
25/25 [==============================] - 0s 6ms/step - loss: nan - val_loss:
nan
Epoch 34/50
25/25 [==============================] - 0s 5ms/step - loss: nan - val_loss:
nan
Epoch 35/50
25/25 [==============================] - 0s 6ms/step - loss: nan - val_loss:
nan
Epoch 36/50
25/25 [==============================] - 0s 5ms/step - loss: nan - val_loss:
nan
Epoch 37/50
25/25 [==============================] - 0s 7ms/step - loss: nan - val_loss:
nan
Epoch 38/50
25/25 [==============================] - 0s 5ms/step - loss: nan - val_loss:
nan
Epoch 39/50
25/25 [==============================] - 0s 6ms/step - loss: nan - val_loss:
nan
Epoch 40/50
25/25 [==============================] - 0s 6ms/step - loss: nan - val_loss:
nan
Epoch 41/50
25/25 [==============================] - 0s 8ms/step - loss: nan - val_loss:
nan
Epoch 42/50
25/25 [==============================] - 0s 6ms/step - loss: nan - val_loss:
nan
Epoch 43/50
25/25 [==============================] - 0s 7ms/step - loss: nan - val_loss:
nan
Epoch 44/50
25/25 [==============================] - 0s 5ms/step - loss: nan - val_loss:
nan
Epoch 45/50
25/25 [==============================] - 0s 4ms/step - loss: nan - val_loss:
nan
Epoch 46/50
25/25 [==============================] - 0s 4ms/step - loss: nan - val_loss:
nan
Epoch 47/50
25/25 [==============================] - 0s 3ms/step - loss: nan - val_loss:
nan
Epoch 48/50
25/25 [==============================] - 0s 4ms/step - loss: nan - val_loss:
nan
Epoch 49/50
25/25 [==============================] - 0s 3ms/step - loss: nan - val_loss:
nan
Epoch 50/50
25/25 [==============================] - 0s 4ms/step - loss: nan - val_loss:
nan
32/32 [==============================] - 0s 1ms/step
Original Data:
[[0.63136875 0.43313218 nan nan 0.47814929 nan
0.72475122 0.64826053 0.39751792 0.52909853]
[0.49419256 0.38251493 0.57041536 0.17856373 0.23272253 0.36751612
0.29596015 0.5810104 0.32883371 0.25100169]
[0.7698026 0.41960521 0.54184995 0.26326844 0.42707577 0.46237903
0.27310451 0.59012929 0.3769754 0.41041847]
[0.47443492 0.7408159 0.52858119 0.32690763 0.65211509 0.27471813
0.49806003 0.24357553 0.26302933 0.47991978]
[0.66590846 0.48099296 0.51185288 0.45809287 0.27328705 0.3453146
0.38730704 0.69124831 nan 0.20110629]]
Imputed Data:
[[nan nan nan nan nan nan nan nan nan nan]
[nan nan nan nan nan nan nan nan nan nan]
[nan nan nan nan nan nan nan nan nan nan]
[nan nan nan nan nan nan nan nan nan nan]
[nan nan nan nan nan nan nan nan nan nan]]
RESULT
Thus data imputation using a denoising autoencoder has been implemented in Python; note that in the run shown above the loss and the imputed values are nan because the missing values were not filled in before scaling and training.
Exercise No: 13 ANOMALY DETECTION WITH AUTO ENCODERS.
Date:
AIM
To detect anomalous data using a denoising autoencoder in Python.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Generate synthetic data with missing values
Step 3: Normalize data
Step 4: Split data into train and test sets
Step 5: Define a denoising autoencoder model
Step 6: Add Gaussian noise to the input
Step 7: Encode & Decode
Step 8: Build and train the denoising autoencoder on normal data
Step 9: Evaluate the model on normal and anomaly data
Step 10: Calculate reconstruction errors
Step 11: Set a threshold for anomaly detection (adjust based on your data)
Step 12: Identify anomalies
Step 13: Plot the results
Step 14: Print detected anomalies
SOURCE CODE
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, GaussianNoise
from tensorflow.keras.models import Model
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
return model
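Only the imports and the final return statement of the model-building function survive in the listing above; the following is a minimal sketch of the omitted body and of the remaining steps (data generation, training on normal data, reconstruction errors, thresholding). The noise level, layer sizes, injected anomalies and threshold are assumptions.
def build_denoising_autoencoder(input_dim):
    input_layer = Input(shape=(input_dim,))
    noisy_input = GaussianNoise(0.1)(input_layer)               # add Gaussian noise to the input
    encoded = Dense(32, activation='relu')(noisy_input)         # encoder
    decoded = Dense(input_dim, activation='sigmoid')(encoded)   # decoder
    model = Model(inputs=input_layer, outputs=decoded)
    model.compile(optimizer='adam', loss='mean_squared_error')
    return model

# Normal data plus a small block of injected anomalies.
np.random.seed(42)
normal_data = np.random.randn(1000, 10)
anomaly_data = np.random.randn(10, 10) * 5 + 10

scaler = MinMaxScaler()
normal_scaled = scaler.fit_transform(normal_data)
all_scaled = scaler.transform(np.vstack([anomaly_data, normal_data]))

# Train the autoencoder on normal data only.
X_train, X_test = train_test_split(normal_scaled, test_size=0.2, random_state=42)
autoencoder = build_denoising_autoencoder(normal_scaled.shape[1])
autoencoder.fit(X_train, X_train, epochs=50, batch_size=32, shuffle=True,
                validation_data=(X_test, X_test))

# Reconstruction error per sample; samples above the threshold are flagged as anomalies.
reconstructed = autoencoder.predict(all_scaled)
errors = np.mean((all_scaled - reconstructed) ** 2, axis=1)
threshold = np.percentile(errors, 98)    # adjust based on your data
anomalies = np.where(errors > threshold)[0]
print("Detected Anomalies:")
print(anomalies)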
OUTPUT
Epoch 1/50
25/25 [==============================] - 2s 25ms/step - loss: 0.0264 - val_loss: 0.0243
Epoch 2/50
25/25 [==============================] - 0s 12ms/step - loss: 0.0241 - val_loss: 0.0237
Epoch 3/50
25/25 [==============================] - 0s 6ms/step - loss: 0.0236 - val_loss: 0.0232
Epoch 4/50
25/25 [==============================] - 0s 5ms/step - loss: 0.0231 - val_loss: 0.0227
Epoch 5/50
25/25 [==============================] - 0s 13ms/step - loss: 0.0223 - val_loss: 0.0223
Epoch 6/50
25/25 [==============================] - 0s 7ms/step - loss: 0.0223 - val_loss: 0.0220
Epoch 7/50
25/25 [==============================] - 0s 18ms/step - loss: 0.0219 - val_loss: 0.0218
Epoch 8/50
25/25 [==============================] - 0s 7ms/step - loss: 0.0219 - val_loss: 0.0216
Epoch 9/50
25/25 [==============================] - 0s 7ms/step - loss: 0.0217 - val_loss: 0.0215
Epoch 10/50
25/25 [==============================] - 0s 7ms/step - loss: 0.0215 - val_loss: 0.0213
Epoch 11/50
25/25 [==============================] - 0s 6ms/step - loss: 0.0215 - val_loss: 0.0211
Epoch 12/50
25/25 [==============================] - 0s 11ms/step - loss: 0.0214 - val_loss: 0.0208
Epoch 13/50
25/25 [==============================] - 0s 15ms/step - loss: 0.0215 - val_loss: 0.0208
Epoch 14/50
25/25 [==============================] - 0s 9ms/step - loss: 0.0217 - val_loss: 0.0208
Epoch 15/50
25/25 [==============================] - 0s 9ms/step - loss: 0.0214 - val_loss: 0.0207
Epoch 16/50
25/25 [==============================] - 0s 15ms/step - loss: 0.0212 - val_loss: 0.0207
Epoch 17/50
25/25 [==============================] - 0s 18ms/step - loss: 0.0214 - val_loss: 0.0205
Epoch 18/50
25/25 [==============================] - 0s 12ms/step - loss: 0.0216 - val_loss: 0.0206
Epoch 19/50
25/25 [==============================] - 0s 17ms/step - loss: 0.0214 - val_loss: 0.0205
Epoch 20/50
25/25 [==============================] - 0s 18ms/step - loss: 0.0215 - val_loss: 0.0206
Epoch 21/50
25/25 [==============================] - 0s 12ms/step - loss: 0.0212 - val_loss: 0.0206
Epoch 22/50
25/25 [==============================] - 0s 12ms/step - loss: 0.0214 - val_loss: 0.0205
Epoch 23/50
25/25 [==============================] - 0s 17ms/step - loss: 0.0215 - val_loss: 0.0204
Epoch 24/50
25/25 [==============================] - 0s 10ms/step - loss: 0.0216 - val_loss: 0.0204
Epoch 25/50
25/25 [==============================] - 0s 9ms/step - loss: 0.0213 - val_loss: 0.0203
Epoch 26/50
25/25 [==============================] - 0s 4ms/step - loss: 0.0213 - val_loss: 0.0204
Epoch 27/50
25/25 [==============================] - 0s 7ms/step - loss: 0.0215 - val_loss: 0.0205
Epoch 28/50
25/25 [==============================] - 0s 9ms/step - loss: 0.0213 - val_loss: 0.0204
Epoch 29/50
25/25 [==============================] - 0s 14ms/step - loss: 0.0214 - val_loss: 0.0204
Epoch 30/50
25/25 [==============================] - 0s 6ms/step - loss: 0.0214 - val_loss: 0.0204
Epoch 31/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0211 - val_loss: 0.0204
Epoch 32/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0213 - val_loss: 0.0205
Epoch 33/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0212 - val_loss: 0.0203
Epoch 34/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0212 - val_loss: 0.0202
Epoch 35/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0213 - val_loss: 0.0202
Epoch 36/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0213 - val_loss: 0.0203
Epoch 37/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0215 - val_loss: 0.0204
Epoch 38/50
25/25 [==============================] - 0s 4ms/step - loss: 0.0213 - val_loss: 0.0204
Epoch 39/50
25/25 [==============================] - 0s 4ms/step - loss: 0.0212 - val_loss: 0.0204
Epoch 40/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0213 - val_loss: 0.0204
Epoch 41/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0210 - val_loss: 0.0203
Epoch 42/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0214 - val_loss: 0.0204
Epoch 43/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0212 - val_loss: 0.0203
Epoch 44/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0212 - val_loss: 0.0203
Epoch 45/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0213 - val_loss: 0.0203
Epoch 46/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0215 - val_loss: 0.0204
Epoch 47/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0213 - val_loss: 0.0203
Epoch 48/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0213 - val_loss: 0.0203
Epoch 49/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0216 - val_loss: 0.0203
Epoch 50/50
25/25 [==============================] - 0s 3ms/step - loss: 0.0213 - val_loss: 0.0203
32/32 [==============================] - 0s 1ms/step
32/32 [==============================] - 0s 1ms/step
Detected Anomalies:
[ 0 1 2 3 4 5 6 7 8 9 289 371 419 635 802]
RESULT
Thus the anomalous data has been detected using a denoising autoencoder in Python successfully.
Exercise No: 14 FEATURE LEARNING REPRESENTATION
Date:
AIM
To represent feature learning techniques using different activation functions.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Generate a synthetic classification dataset
Step 3: Split the dataset into training and testing sets
Step 4: Standardize the features
Step 5: Build a simple neural network for feature learning
Step 6: Compile the model
Step 7: Train the model
Step 8: Evaluate the model on the test set
Step 9: Plot training history
SOURCE CODE
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
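Only the imports survive in the record. A minimal sketch of Steps 2-9 follows, continuing from those imports; the dataset parameters, the layer sizes and the mix of ReLU, tanh and sigmoid activations are assumptions chosen only to illustrate feature learning with different activation functions.
# Hypothetical sketch of Steps 2-9 (dataset and layer choices are assumptions)
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# A small network whose hidden layers learn feature representations
model = Sequential([
    Dense(32, activation='relu', input_shape=(X_train.shape[1],)),
    Dense(16, activation='tanh'),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=20, batch_size=32, validation_split=0.2)

loss, acc = model.evaluate(X_test, y_test)
print(f"Test Accuracy: {acc * 100:.2f}%")

plt.plot(history.history['accuracy'], label='train')
plt.plot(history.history['val_accuracy'], label='validation')
plt.legend()
plt.show()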
OUTPUT
Epoch 1/20
20/20 [==============================] - 5s 40ms/step - loss: 0.7068 - accuracy: 0.5234 - val_loss:
0.6130 - val_accuracy: 0.7125
Epoch 2/20
20/20 [==============================] - 0s 9ms/step - loss: 0.6179 - accuracy: 0.6703 - val_loss:
0.5536 - val_accuracy: 0.7563
Epoch 3/20
20/20 [==============================] - 0s 6ms/step - loss: 0.5538 - accuracy: 0.7359 - val_loss:
0.4997 - val_accuracy: 0.8125
Epoch 4/20
20/20 [==============================] - 0s 11ms/step - loss: 0.4969 - accuracy: 0.7984 - val_loss:
0.4512 - val_accuracy: 0.8500
Epoch 5/20
20/20 [==============================] - 0s 13ms/step - loss: 0.4495 - accuracy: 0.8234 - val_loss:
0.4048 - val_accuracy: 0.8687
Epoch 6/20
20/20 [==============================] - 0s 11ms/step - loss: 0.4074 - accuracy: 0.8484 - val_loss:
0.3681 - val_accuracy: 0.8875
Epoch 7/20
20/20 [==============================] - 0s 9ms/step - loss: 0.3759 - accuracy: 0.8578 - val_loss:
0.3395 - val_accuracy: 0.8938
Epoch 8/20
20/20 [==============================] - 0s 4ms/step - loss: 0.3505 - accuracy: 0.8750 - val_loss:
0.3173 - val_accuracy: 0.8875
Epoch 9/20
20/20 [==============================] - 0s 3ms/step - loss: 0.3304 - accuracy: 0.8766 - val_loss:
0.3031 - val_accuracy: 0.8938
Epoch 10/20
20/20 [==============================] - 0s 4ms/step - loss: 0.3155 - accuracy: 0.8828 - val_loss:
0.2897 - val_accuracy: 0.9000
Epoch 11/20
20/20 [==============================] - 0s 4ms/step - loss: 0.3044 - accuracy: 0.8891 - val_loss:
0.2809 - val_accuracy: 0.9000
Epoch 12/20
20/20 [==============================] - 0s 4ms/step - loss: 0.2945 - accuracy: 0.8906 - val_loss:
0.2766 - val_accuracy: 0.9062
Epoch 13/20
20/20 [==============================] - 0s 4ms/step - loss: 0.2871 - accuracy: 0.8922 - val_loss:
0.2705 - val_accuracy: 0.9062
Epoch 14/20
20/20 [==============================] - 0s 4ms/step - loss: 0.2798 - accuracy: 0.8953 - val_loss:
0.2659 - val_accuracy: 0.9000
Epoch 15/20
20/20 [==============================] - 0s 4ms/step - loss: 0.2729 - accuracy: 0.9016 - val_loss:
0.2639 - val_accuracy: 0.8938
Epoch 16/20
20/20 [==============================] - 0s 4ms/step - loss: 0.2668 - accuracy: 0.9047 - val_loss:
0.2633 - val_accuracy: 0.8938
Epoch 17/20
20/20 [==============================] - 0s 3ms/step - loss: 0.2617 - accuracy: 0.9062 - val_loss:
0.2622 - val_accuracy: 0.8938
Epoch 18/20
20/20 [==============================] - 0s 4ms/step - loss: 0.2557 - accuracy: 0.9094 - val_loss:
0.2614 - val_accuracy: 0.8875
Epoch 19/20
20/20 [==============================] - 0s 5ms/step - loss: 0.2519 - accuracy: 0.9156 - val_loss:
0.2619 - val_accuracy: 0.8938
Epoch 20/20
20/20 [==============================] - 0s 4ms/step - loss: 0.2447 - accuracy: 0.9172 - val_loss:
0.2631 - val_accuracy: 0.8938
7/7 [==============================] - 0s 2ms/step - loss: 0.4029 - accuracy: 0.8500
Test Accuracy: 85.00%
RESULT
Thus feature learning has been represented using different activation functions successfully.
Exercise No: 15 REPRESENTATION LEARNING
Date:
AIM
To demonstrate the use of autoencoders for representation learning.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Generate a synthetic classification dataset
Step 3: Split the dataset into training and testing sets
Step 4: Standardize the features
Step 5: Build a simple autoencoder for representation learning
Step 6: Compile the autoencoder
Step 7: Train the model
Step 8: Extract the learned representations using the encoder
Step 9: Visualize the original and encoded data
Step 10: Evaluate the autoencoder on the test set
SOURCE CODE
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
# Encoder
encoder_input = Input(shape=(input_dim,))
encoded = Dense(10, activation='relu')(encoder_input)
# Decoder
decoded = Dense(input_dim, activation='sigmoid')(encoded)
# Autoencoder model
autoencoder = Model(encoder_input, decoded)
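The record shows the encoder, decoder and autoencoder but not the data preparation, compilation, training or evaluation. A possible continuation is sketched below; it assumes the data were generated with make_classification, split and standardized before the encoder above was defined (so that input_dim = X_train.shape[1]), and all other settings are illustrative.
# Hypothetical continuation (data preparation would precede the encoder definition above)
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, _, _ = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

encoder = Model(encoder_input, encoded)                 # separate model exposing the representation
autoencoder.compile(optimizer='adam', loss='mse')       # Step 6
autoencoder.fit(X_train, X_train, epochs=50, batch_size=32,   # Step 7
                validation_data=(X_test, X_test))

encoded_train = encoder.predict(X_train)                # Step 8: learned representations

plt.subplot(1, 2, 1); plt.hist(X_train[:, 0], bins=30); plt.title('Original (feature 0)')   # Step 9
plt.subplot(1, 2, 2); plt.hist(encoded_train[:, 0], bins=30); plt.title('Encoded (unit 0)')
plt.show()

test_loss = autoencoder.evaluate(X_test, X_test)        # Step 10
print(f"Test Loss: {test_loss:.4f}")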
OUTPUT
Epoch 1/50
25/25 [==============================] - 1s 8ms/step - loss: 1.2662 - val_loss: 1.2189
Epoch 2/50
25/25 [==============================] - 0s 4ms/step - loss: 1.2291 - val_loss: 1.1844
Epoch 3/50
25/25 [==============================] - 0s 4ms/step - loss: 1.1948 - val_loss: 1.1521
Epoch 4/50
25/25 [==============================] - 0s 3ms/step - loss: 1.1623 - val_loss: 1.1213
Epoch 5/50
25/25 [==============================] - 0s 4ms/step - loss: 1.1311 - val_loss: 1.0924
Epoch 6/50
25/25 [==============================] - 0s 3ms/step - loss: 1.1014 - val_loss: 1.0646
Epoch 7/50
25/25 [==============================] - 0s 3ms/step - loss: 1.0731 - val_loss: 1.0389
Epoch 8/50
25/25 [==============================] - 0s 3ms/step - loss: 1.0469 - val_loss: 1.0149
Epoch 9/50
25/25 [==============================] - 0s 4ms/step - loss: 1.0229 - val_loss: 0.9932
Epoch 10/50
25/25 [==============================] - 0s 4ms/step - loss: 1.0011 - val_loss: 0.9739
Epoch 11/50
25/25 [==============================] - 0s 5ms/step - loss: 0.9819 - val_loss: 0.9567
Epoch 12/50
25/25 [==============================] - 0s 5ms/step - loss: 0.9648 - val_loss: 0.9415
Epoch 13/50
25/25 [==============================] - 0s 5ms/step - loss: 0.9498 - val_loss: 0.9283
Epoch 14/50
25/25 [==============================] - 0s 4ms/step - loss: 0.9367 - val_loss: 0.9165
Epoch 15/50
25/25 [==============================] - 0s 5ms/step - loss: 0.9250 - val_loss: 0.9062
Epoch 16/50
25/25 [==============================] - 0s 4ms/step - loss: 0.9148 - val_loss: 0.8971
Epoch 17/50
25/25 [==============================] - 0s 4ms/step - loss: 0.9058 - val_loss: 0.8890
Epoch 18/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8977 - val_loss: 0.8817
Epoch 19/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8904 - val_loss: 0.8750
Epoch 20/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8838 - val_loss: 0.8688
Epoch 21/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8777 - val_loss: 0.8631
Epoch 22/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8719 - val_loss: 0.8577
Epoch 23/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8666 - val_loss: 0.8526
Epoch 24/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8616 - val_loss: 0.8479
Epoch 25/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8568 - val_loss: 0.8434
Epoch 26/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8523 - val_loss: 0.8391
Epoch 27/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8481 - val_loss: 0.8351
Epoch 28/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8440 - val_loss: 0.8313
Epoch 29/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8402 - val_loss: 0.8276
Epoch 30/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8365 - val_loss: 0.8241
Epoch 31/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8330 - val_loss: 0.8208
Epoch 32/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8297 - val_loss: 0.8177
Epoch 33/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8265 - val_loss: 0.8146
Epoch 34/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8235 - val_loss: 0.8117
Epoch 35/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8206 - val_loss: 0.8090
Epoch 36/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8178 - val_loss: 0.8063
Epoch 37/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8152 - val_loss: 0.8038
Epoch 38/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8127 - val_loss: 0.8016
Epoch 39/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8104 - val_loss: 0.7992
Epoch 40/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8080 - val_loss: 0.7969
Epoch 41/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8059 - val_loss: 0.7949
Epoch 42/50
25/25 [==============================] - 0s 3ms/step - loss: 0.8038 - val_loss: 0.7928
Epoch 43/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8018 - val_loss: 0.7909
Epoch 44/50
25/25 [==============================] - 0s 4ms/step - loss: 0.7999 - val_loss: 0.7891
Epoch 45/50
25/25 [==============================] - 0s 3ms/step - loss: 0.7980 - val_loss: 0.7873
Epoch 46/50
25/25 [==============================] - 0s 3ms/step - loss: 0.7963 - val_loss: 0.7855
Epoch 47/50
25/25 [==============================] - 0s 4ms/step - loss: 0.7945 - val_loss: 0.7840
Epoch 48/50
25/25 [==============================] - 0s 3ms/step - loss: 0.7929 - val_loss: 0.7823
Epoch 49/50
25/25 [==============================] - 0s 3ms/step - loss: 0.7913 - val_loss: 0.7808
Epoch 50/50
25/25 [==============================] - 0s 3ms/step - loss: 0.7898 - val_loss: 0.7794
25/25 [==============================] - 0s 1ms/step
7/7 [==============================] - 0s 2ms/step
7/7 [==============================] - 0s 2ms/step - loss: 0.7794
Test Loss: 0.7794
RESULT
Thus the use of autoencoders for representation learning has been demonstrated successfully.
Exercise No: 16 UNSUPERVISED LEARNING REPRESENTATION
Date:
AIM
To demonstrate unsupervised learning with an autoencoder on a synthetic dataset.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Generate a synthetic dataset
Step 3: Split the dataset into training and testing sets
Step 4: Standardize the features
Step 5: Build an autoencoder for unsupervised learning
Step 6: Encode and decode the data
Step 7: Autoencoder model
Step 8: Compile the autoencoder
Step 9: Train the autoencoder
Step 10: Use the trained autoencoder for feature extraction
Step 11: Visualize the original and reconstructed data
SOURCE CODE
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
# Encoder
encoder_input = Input(shape=(input_dim,))
encoded = Dense(10, activation='relu')(encoder_input)
# Decoder
decoded = Dense(input_dim, activation='sigmoid')(encoded)
# Autoencoder model
autoencoder = Model(encoder_input, decoded)
axes[0, 0].set_ylabel('Original')
axes[1, 0].set_ylabel('Encoded')
plt.show()
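The fragments above define the autoencoder and the plot labels but omit the data, training and feature extraction in between. One possible completion is sketched below; it assumes input_dim = 20 was set before the encoder was defined, and the dataset, epochs and plotting layout are illustrative assumptions.
# Hypothetical completion around the layers above (all settings are assumptions)
X, _ = make_classification(n_samples=1000, n_features=20, random_state=42)
X_scaled = StandardScaler().fit_transform(X)
X_train, X_test = train_test_split(X_scaled, test_size=0.2, random_state=42)

autoencoder.compile(optimizer='adam', loss='mse')                    # Step 8
autoencoder.fit(X_train, X_train, epochs=50, batch_size=32,          # Step 9
                validation_data=(X_test, X_test))

encoder = Model(encoder_input, encoded)                              # Step 10: feature extractor
encoded_test = encoder.predict(X_test)

# Step 11: a few original samples next to their encodings
fig, axes = plt.subplots(2, 4, figsize=(12, 5))
for i in range(4):
    axes[0, i].plot(X_test[i])
    axes[1, i].plot(encoded_test[i])
axes[0, 0].set_ylabel('Original')
axes[1, 0].set_ylabel('Encoded')
plt.show()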
OUTPUT
Epoch 1/50
25/25 [==============================] - 2s 50ms/step - loss: 1.2191 - val_loss: 1.1757
Epoch 2/50
25/25 [==============================] - 0s 5ms/step - loss: 1.1823 - val_loss: 1.1419
Epoch 3/50
25/25 [==============================] - 0s 4ms/step - loss: 1.1477 - val_loss: 1.1102
Epoch 4/50
25/25 [==============================] - 0s 4ms/step - loss: 1.1155 - val_loss: 1.0806
Epoch 5/50
25/25 [==============================] - 0s 4ms/step - loss: 1.0853 - val_loss: 1.0532
Epoch 6/50
25/25 [==============================] - 0s 5ms/step - loss: 1.0570 - val_loss: 1.0282
Epoch 7/50
25/25 [==============================] - 0s 4ms/step - loss: 1.0314 - val_loss: 1.0053
Epoch 8/50
25/25 [==============================] - 0s 5ms/step - loss: 1.0086 - val_loss: 0.9846
Epoch 9/50
25/25 [==============================] - 0s 5ms/step - loss: 0.9882 - val_loss: 0.9660
Epoch 10/50
25/25 [==============================] - 0s 5ms/step - loss: 0.9700 - val_loss: 0.9497
Epoch 11/50
25/25 [==============================] - 0s 5ms/step - loss: 0.9542 - val_loss: 0.9350
Epoch 12/50
25/25 [==============================] - 0s 5ms/step - loss: 0.9401 - val_loss: 0.9220
Epoch 13/50
25/25 [==============================] - 0s 5ms/step - loss: 0.9275 - val_loss: 0.9105
Epoch 14/50
25/25 [==============================] - 0s 5ms/step - loss: 0.9164 - val_loss: 0.9002
Epoch 15/50
25/25 [==============================] - 0s 5ms/step - loss: 0.9064 - val_loss: 0.8913
Epoch 16/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8977 - val_loss: 0.8832
Epoch 17/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8898 - val_loss: 0.8759
Epoch 18/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8827 - val_loss: 0.8693
Epoch 19/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8762 - val_loss: 0.8632
Epoch 20/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8703 - val_loss: 0.8576
Epoch 21/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8648 - val_loss: 0.8523
Epoch 22/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8596 - val_loss: 0.8474
Epoch 23/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8548 - val_loss: 0.8428
Epoch 24/50
25/25 [==============================] - 0s 5ms/step - loss: 0.8502 - val_loss: 0.8384
Epoch 25/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8458 - val_loss: 0.8343
Epoch 26/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8417 - val_loss: 0.8305
Epoch 27/50
25/25 [==============================] - 0s 3ms/step - loss: 0.8377 - val_loss: 0.8268
Epoch 28/50
25/25 [==============================] - 0s 3ms/step - loss: 0.8339 - val_loss: 0.8233
Epoch 29/50
25/25 [==============================] - 0s 3ms/step - loss: 0.8304 - val_loss: 0.8200
Epoch 30/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8269 - val_loss: 0.8167
Epoch 31/50
25/25 [==============================] - 0s 3ms/step - loss: 0.8237 - val_loss: 0.8138
Epoch 32/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8205 - val_loss: 0.8109
Epoch 33/50
25/25 [==============================] - 0s 3ms/step - loss: 0.8175 - val_loss: 0.8082
Epoch 34/50
25/25 [==============================] - 0s 3ms/step - loss: 0.8147 - val_loss: 0.8056
Epoch 35/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8120 - val_loss: 0.8033
Epoch 36/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8094 - val_loss: 0.8010
Epoch 37/50
25/25 [==============================] - 0s 3ms/step - loss: 0.8069 - val_loss: 0.7987
Epoch 38/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8046 - val_loss: 0.7967
Epoch 39/50
25/25 [==============================] - 0s 4ms/step - loss: 0.8024 - val_loss: 0.7948
Epoch 40/50
25/25 [==============================] - 0s 3ms/step - loss: 0.8002 - val_loss: 0.7929
Epoch 41/50
25/25 [==============================] - 0s 3ms/step - loss: 0.7982 - val_loss: 0.7911
Epoch 42/50
25/25 [==============================] - 0s 4ms/step - loss: 0.7962 - val_loss: 0.7895
Epoch 43/50
25/25 [==============================] - 0s 3ms/step - loss: 0.7944 - val_loss: 0.7879
Epoch 44/50
25/25 [==============================] - 0s 3ms/step - loss: 0.7927 - val_loss: 0.7864
Epoch 45/50
25/25 [==============================] - 0s 4ms/step - loss: 0.7910 - val_loss: 0.7850
Epoch 46/50
25/25 [==============================] - 0s 3ms/step - loss: 0.7894 - val_loss: 0.7837
Epoch 47/50
25/25 [==============================] - 0s 3ms/step - loss: 0.7879 - val_loss: 0.7823
Epoch 48/50
25/25 [==============================] - 0s 4ms/step - loss: 0.7864 - val_loss: 0.7811
Epoch 49/50
25/25 [==============================] - 0s 4ms/step - loss: 0.7851 - val_loss: 0.7800
Epoch 50/50
25/25 [==============================] - 0s 3ms/step - loss: 0.7837 - val_loss: 0.7788
25/25 [==============================] - 0s 2ms/step
7/7 [==============================] - 0s 2ms/step
RESULT
Thus unsupervised learning with an autoencoder on a synthetic dataset has been demonstrated
successfully.
Exercise No: 17 IMPLEMENTATION OF AUTO ENCODERS
Date:
AIM
To create an autoencoder consisting of two dense layers (encoder and decoder).
PROCEDURE
Step 1: Import necessary libraries
Step 2: Loading the MNIST dataset and extracting training and testing data
Step 3: Normalizing pixel values to the range [0, 1]
Step 4: Displaying the shapes of the training and testing datasets
SOURCE CODE
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
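The record keeps only the imports for this exercise. A minimal sketch of Steps 2-4, using the standard Keras MNIST loader, could look as follows; the variable names are illustrative.
# Hypothetical sketch of Steps 2-4
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()    # Step 2

x_train = x_train.astype('float32') / 255.0                        # Step 3: scale to [0, 1]
x_test = x_test.astype('float32') / 255.0

print("Training data shape:", x_train.shape)                       # Step 4
print("Testing data shape:", x_test.shape)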
RESULT
Thus the autoencoder consisting of two dense layers (encoder and decoder) has been created and
the output has been verified successfully.
Exercise No: 18 DEFINING A BASIC AUTO ENCODER
Date:
AIM
To define a simple autoencoder using a class and constructor.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Define the Autoencoder model as a subclass of the TensorFlow Model class
Step 3: Create Encoder architecture using a Sequential model.
Step 4: Create Decoder architecture using another Sequential model.
Step 5: Forward pass method defining the encoding and decoding steps.
Step 6: Extracting shape information from the testing dataset
Step 7: Specifying the dimensionality of the latent space.
Step 8: Creating an instance of the SimpleAutoencoder model.
SOURCE CODE
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.models import Model
class SimpleAutoencoder(Model):
    def __init__(self, latent_dimensions, data_shape):
        super(SimpleAutoencoder, self).__init__()
        self.latent_dimensions = latent_dimensions
        self.data_shape = data_shape
        # Encoder architecture using a Sequential model
        self.encoder = tf.keras.Sequential([
            layers.Flatten(),
            layers.Dense(latent_dimensions, activation='relu'),
        ])
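The class in the record stops after the encoder. Below is one possible completion of the class together with Steps 5-8, after which the fit call that follows can run; the decoder layout, the use of x_test (MNIST, as in the previous exercise) for the data shape, the latent size of 64 and the 'adam'/'mse' compile settings are assumptions, not the record's code.
# Hypothetical completion of the class and of Steps 5-8 (assumed choices throughout)
class SimpleAutoencoder(Model):
    def __init__(self, latent_dimensions, data_shape):
        super(SimpleAutoencoder, self).__init__()
        self.latent_dimensions = latent_dimensions
        self.data_shape = data_shape
        self.encoder = tf.keras.Sequential([
            layers.Flatten(),
            layers.Dense(latent_dimensions, activation='relu'),
        ])
        # Decoder architecture mapping the latent code back to the data shape
        self.decoder = tf.keras.Sequential([
            layers.Dense(int(np.prod(data_shape)), activation='sigmoid'),
            layers.Reshape(data_shape),
        ])

    def call(self, input_data):
        # Step 5: forward pass = encode, then decode
        return self.decoder(self.encoder(input_data))

data_shape = x_test.shape[1:]          # Step 6, e.g. (28, 28) for MNIST
latent_dimensions = 64                 # Step 7
simple_autoencoder = SimpleAutoencoder(latent_dimensions, data_shape)   # Step 8
simple_autoencoder.compile(optimizer='adam', loss='mse')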
simple_autoencoder.fit(x_train, x_train,
epochs=1,
shuffle=True,
validation_data=(x_test, x_test))
OUTPUT
1875/1875 [==============================] - 14s 7ms/step - loss: 0.0235 -
val_loss: 0.0090
<keras.src.callbacks.History at 0x7e5a847ebeb0>
RESULT
Thus the autoencoder has been defined using a class and constructor, and the output has been
verified successfully.
Exercise No: 19 VANILLA AUTOENCODER
Date:
AIM
To implement a vanilla auto encoder model.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Define the architecture of the autoencoder
Step 3: Create Encoder, Decoder and Auto encoder architecture using a Sequential model.
Step 4: Set the dimensions of input data and the encoding dimension
Step 5: Build the autoencoder model
Step 6: Compile the model
Step 7: Display the architecture of the autoencoder
SOURCE CODE
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
# Decoder
decoded = Dense(input_dim, activation='sigmoid')(encoded)
# Autoencoder
autoencoder = Model(input_layer, decoded)
return autoencoder
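The record keeps only the tail of the builder. A minimal sketch of the full vanilla-autoencoder builder is given below; the encoding dimension of 32 and the compile settings are assumptions.
# Hypothetical sketch of the full builder (encoding_dim and compile settings are assumptions)
def build_autoencoder(input_dim=784, encoding_dim=32):
    # Encoder
    input_layer = Input(shape=(input_dim,))
    encoded = Dense(encoding_dim, activation='relu')(input_layer)
    # Decoder
    decoded = Dense(input_dim, activation='sigmoid')(encoded)
    # Autoencoder
    autoencoder = Model(input_layer, decoded)
    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    return autoencoder

autoencoder = build_autoencoder()
autoencoder.summary()    # Step 7: display the architecture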
OUTPUT
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 784)] 0
=================================================================
Total params: 50992 (199.19 KB)
Trainable params: 50992 (199.19 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
RESULT
Thus the vanilla auto encoder has been implemented and the output has been verified
successfully.
Exercise No: 20 SPARSE AUTOENCODER
Date:
AIM
To implement a sparse auto encoder model.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Define the architecture of the sparse autoencoder
Step 3: Create Encoder, Decoder and Auto encoder architecture for sparse auto encoder
Step 4: Set the dimensions of input data and the encoding dimension
Step 5: Build the sparse autoencoder model
Step 6: Compile the model
Step 7: Display the architecture of the sparse autoencoder
SOURCE CODE
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras import regularizers
# Decoder
decoded = Dense(input_dim, activation='sigmoid')(encoded)
# Sparse Autoencoder
sparse_autoencoder = Model(input_layer, decoded)
return sparse_autoencoder
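Only the decoder and the closing lines of the builder are recorded. The sketch below shows one way the sparse builder could look, using an L1 activity regularizer on the encoder; the penalty weight, layer sizes and the random demonstration data are assumptions.
# Hypothetical sketch of the sparse builder (penalty weight, sizes and demo data are assumptions)
def build_sparse_autoencoder(input_dim=784, encoding_dim=32):
    input_layer = Input(shape=(input_dim,))
    # Encoder with an L1 activity regularizer that encourages sparse activations
    encoded = Dense(encoding_dim, activation='relu',
                    activity_regularizer=regularizers.l1(1e-5))(input_layer)
    # Decoder
    decoded = Dense(input_dim, activation='sigmoid')(encoded)
    # Sparse Autoencoder
    sparse_autoencoder = Model(input_layer, decoded)
    sparse_autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    return sparse_autoencoder

sparse_autoencoder = build_sparse_autoencoder()
data = np.random.rand(1000, 784)                      # random data for demonstration
sparse_autoencoder.fit(data, data, epochs=50, batch_size=32)
sparse_autoencoder.summary()                          # Step 7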
OUTPUT
Epoch 1/50
32/32 [==============================] - 2s 6ms/step - loss: 0.7006
Epoch 2/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6931
Epoch 3/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6931
Epoch 4/50
32/32 [==============================] - 0s 7ms/step - loss: 0.6931
Epoch 5/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6930
Epoch 6/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6930
Epoch 7/50
32/32 [==============================] - 0s 7ms/step - loss: 0.6930
Epoch 8/50
32/32 [==============================] - 0s 6ms/step - loss: 0.6930
Epoch 9/50
32/32 [==============================] - 0s 5ms/step - loss: 0.6930
Epoch 10/50
32/32 [==============================] - 0s 3ms/step - loss: 0.6930
Epoch 11/50
32/32 [==============================] - 0s 5ms/step - loss: 0.6930
Epoch 12/50
32/32 [==============================] - 0s 6ms/step - loss: 0.6930
Epoch 13/50
32/32 [==============================] - 0s 6ms/step - loss: 0.6930
Epoch 14/50
32/32 [==============================] - 0s 6ms/step - loss: 0.6930
Epoch 15/50
32/32 [==============================] - 0s 6ms/step - loss: 0.6930
Epoch 16/50
32/32 [==============================] - 0s 6ms/step - loss: 0.6930
Epoch 17/50
32/32 [==============================] - 0s 6ms/step - loss: 0.6930
Epoch 18/50
32/32 [==============================] - 0s 6ms/step - loss: 0.6930
Epoch 19/50
32/32 [==============================] - 0s 6ms/step - loss: 0.6930
Epoch 20/50
32/32 [==============================] - 0s 7ms/step - loss: 0.6930
Epoch 21/50
32/32 [==============================] - 0s 6ms/step - loss: 0.6930
Epoch 22/50
32/32 [==============================] - 0s 3ms/step - loss: 0.6930
Epoch 23/50
32/32 [==============================] - 0s 3ms/step - loss: 0.6930
Epoch 24/50
32/32 [==============================] - 0s 3ms/step - loss: 0.6930
Epoch 25/50
32/32 [==============================] - 0s 3ms/step - loss: 0.6930
Epoch 26/50
32/32 [==============================] - 0s 3ms/step - loss: 0.6930
Epoch 27/50
32/32 [==============================] - 0s 3ms/step - loss: 0.6930
Epoch 28/50
32/32 [==============================] - 0s 3ms/step - loss: 0.6930
Epoch 29/50
32/32 [==============================] - 0s 3ms/step - loss: 0.6930
Epoch 30/50
32/32 [==============================] - 0s 2ms/step - loss: 0.6930
Epoch 31/50
32/32 [==============================] - 0s 2ms/step - loss: 0.6930
Epoch 32/50
32/32 [==============================] - 0s 3ms/step - loss: 0.6930
Epoch 33/50
32/32 [==============================] - 0s 3ms/step - loss: 0.6930
Epoch 34/50
32/32 [==============================] - 0s 5ms/step - loss: 0.6930
Epoch 35/50
32/32 [==============================] - 0s 5ms/step - loss: 0.6930
Epoch 36/50
32/32 [==============================] - 0s 5ms/step - loss: 0.6930
Epoch 37/50
32/32 [==============================] - 0s 5ms/step - loss: 0.6930
Epoch 38/50
32/32 [==============================] - 0s 4ms/step - loss: 0.6930
Epoch 39/50
32/32 [==============================] - 0s 4ms/step - loss: 0.6930
Epoch 40/50
32/32 [==============================] - 0s 4ms/step - loss: 0.6930
Epoch 41/50
32/32 [==============================] - 0s 4ms/step - loss: 0.6930
Epoch 42/50
32/32 [==============================] - 0s 4ms/step - loss: 0.6930
Epoch 43/50
32/32 [==============================] - 0s 5ms/step - loss: 0.6930
Epoch 44/50
32/32 [==============================] - 0s 5ms/step - loss: 0.6930
Epoch 45/50
32/32 [==============================] - 0s 5ms/step - loss: 0.6930
Epoch 46/50
32/32 [==============================] - 0s 5ms/step - loss: 0.6930
Epoch 47/50
32/32 [==============================] - 0s 5ms/step - loss: 0.6930
Epoch 48/50
32/32 [==============================] - 0s 5ms/step - loss: 0.6930
Epoch 49/50
32/32 [==============================] - 0s 5ms/step - loss: 0.6930
Epoch 50/50
32/32 [==============================] - 0s 5ms/step - loss: 0.6930
<keras.src.callbacks.History at 0x7e5a8457b2b0>
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 784)] 0
=================================================================
Total params: 50992 (199.19 KB)
Trainable params: 50992 (199.19 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
RESULT
Thus the sparse auto encoder has been implemented and the output has been verified
successfully.
Exercise No: 21 DENOISING AUTOENCODER
Date:
AIM
To implement a denoising auto encoder model.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Define the architecture of the denoising autoencoder
Step 3: Create Encoder, Decoder and Auto encoder architecture for denoising auto encoder
Step 4: Set the dimensions of input data and the encoding dimension
Step 5: Build the denoising autoencoder model
Step 6: Compile the model
Step 7: Display the architecture of the denoising autoencoder
SOURCE CODE
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, GaussianNoise
from tensorflow.keras.models import Model
# Decoder
decoded = Dense(input_dim, activation='sigmoid')(encoded)
# Denoising Autoencoder
denoising_autoencoder = Model(input_layer, decoded)
return denoising_autoencoder
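As above, only the end of the builder is recorded. A minimal sketch of the denoising builder follows; the noise level and sizes are assumptions.
# Hypothetical sketch of the denoising builder (noise level and sizes are assumptions)
def build_denoising_autoencoder(input_dim=784, encoding_dim=32):
    input_layer = Input(shape=(input_dim,))
    # Corrupt the input with Gaussian noise (active only during training)
    noisy_input = GaussianNoise(0.2)(input_layer)
    encoded = Dense(encoding_dim, activation='relu')(noisy_input)
    # Decoder
    decoded = Dense(input_dim, activation='sigmoid')(encoded)
    # Denoising Autoencoder
    denoising_autoencoder = Model(input_layer, decoded)
    denoising_autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    return denoising_autoencoder

denoising_autoencoder = build_denoising_autoencoder()
denoising_autoencoder.summary()                       # Step 7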
OUTPUT
Model: "model_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_4 (InputLayer) [(None, 784)] 0
=================================================================
Total params: 50992 (199.19 KB)
Trainable params: 50992 (199.19 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
RESULT
Thus the denoising auto encoder has been implemented and the output has been verified
successfully.
Exercise No: 22 VARIATIONAL AUTOENCODER (VAE)
Date:
AIM
To implement a variational auto encoder model.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Set the dimensions of input data and the encoding dimension
Step 3: Function to sample from the latent space using the reparameterization trick
Step 4: Define the architecture of the Variational Autoencoder (VAE)
Step 5: Compute the VAE loss, which consists of the reconstruction loss and the KL divergence
Step 6: Build the VAE model
Step 7: Compile the model
Step 8: Display the architecture of the VAE architecture.
SOURCE CODE
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K
from tensorflow.keras.losses import binary_crossentropy
import numpy as np
# Function to sample from the latent space using the reparameterization trick
def sampling(args):
    z_mean, z_log_var = args
    batch = K.shape(z_mean)[0]
    dim = K.int_shape(z_mean)[1]
    epsilon = K.random_normal(shape=(batch, dim))
    return z_mean + K.exp(0.5 * z_log_var) * epsilon
# Define the architecture of the Variational Autoencoder (VAE)
def build_vae(input_dim, encoding_dim):
    # Encoder
    input_layer = Input(shape=(input_dim,))
    h = Dense(256, activation='relu')(input_layer)
    z_mean = Dense(encoding_dim, name='z_mean')(h)
    z_log_var = Dense(encoding_dim, name='z_log_var')(h)
    # Sample from the latent space using the reparameterization trick
    z = Lambda(sampling, output_shape=(encoding_dim,))([z_mean, z_log_var])
    # Decoder
    decoder_h = Dense(256, activation='relu')
    decoder_mean = Dense(input_dim, activation='sigmoid')
    h_decoded = decoder_h(z)
    x_decoded_mean = decoder_mean(h_decoded)
    # VAE model mapping the input to its reconstruction
    vae = Model(input_layer, x_decoded_mean)
    # Compute the VAE loss, which consists of the reconstruction loss and the KL divergence
    xent_loss = binary_crossentropy(input_layer, x_decoded_mean)
    kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    vae_loss = K.mean(xent_loss + kl_loss)
    vae.add_loss(vae_loss)
    return vae
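A short usage sketch is given below; the random training data and the 784/32 sizes are assumptions made only so that the builder above can be exercised end to end.
# Hypothetical usage sketch (random training data and sizes are assumptions)
input_dim, encoding_dim = 784, 32
vae = build_vae(input_dim, encoding_dim)
vae.compile(optimizer='adam')          # the loss is already attached via add_loss
vae.summary()                          # Step 8
x_train = np.random.rand(1000, input_dim).astype('float32')
vae.fit(x_train, epochs=50, batch_size=32)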
OUTPUT
Model: "model_4"
__________________________________________________________________________________________________
 Layer (type)                    Output Shape           Param #   Connected to
==================================================================================================
 input_5 (InputLayer)            [(None, 784)]          0         []
 ...
 tf.math.reduce_mean_1           ()                     0         ['tf.__operators__.add_1[0][0]']
 (TFOpLambda)
 add_loss (AddLoss)              ()                     0         ['tf.math.reduce_mean_1[0][0]']
==================================================================================================
Total params: 427344 (1.63 MB)
Trainable params: 427344 (1.63 MB)
Non-trainable params: 0 (0.00 Byte)
__________________________________________________________________________________________________
Epoch 1/50
32/32 [==============================] - 2s 8ms/step - loss: 1.6439
Epoch 2/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6966
Epoch 3/50
32/32 [==============================] - 0s 9ms/step - loss: 0.6955
Epoch 4/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6947
Epoch 5/50
32/32 [==============================] - 0s 9ms/step - loss: 0.6945
Epoch 6/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6942
Epoch 7/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6941
Epoch 8/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6939
Epoch 9/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6939
Epoch 10/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6938
Epoch 11/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6937
Epoch 12/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6936
Epoch 13/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6935
Epoch 14/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6935
Epoch 15/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6934
Epoch 16/50
32/32 [==============================] - 0s 7ms/step - loss: 0.6934
Epoch 17/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6934
Epoch 18/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6933
Epoch 19/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6933
Epoch 20/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6933
Epoch 21/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6932
Epoch 22/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6932
Epoch 23/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6932
Epoch 24/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6932
Epoch 25/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6931
Epoch 26/50
32/32 [==============================] - 0s 12ms/step - loss: 0.6931
Epoch 27/50
32/32 [==============================] - 0s 12ms/step - loss: 0.6931
Epoch 28/50
32/32 [==============================] - 0s 12ms/step - loss: 0.6931
Epoch 29/50
32/32 [==============================] - 0s 12ms/step - loss: 0.6931
Epoch 30/50
32/32 [==============================] - 0s 13ms/step - loss: 0.6931
Epoch 31/50
32/32 [==============================] - 0s 13ms/step - loss: 0.6931
Epoch 32/50
32/32 [==============================] - 0s 12ms/step - loss: 0.6931
Epoch 33/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6931
Epoch 34/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6931
Epoch 35/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6930
Epoch 36/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6931
Epoch 37/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6931
Epoch 38/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6930
Epoch 39/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6931
Epoch 40/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6931
Epoch 41/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6931
Epoch 42/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6930
Epoch 43/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6930
Epoch 44/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6930
Epoch 45/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6930
Epoch 46/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6930
Epoch 47/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6930
Epoch 48/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6930
Epoch 49/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6930
Epoch 50/50
32/32 [==============================] - 0s 8ms/step - loss: 0.6930
<keras.src.callbacks.History at 0x7e5a845c92a0>
RESULT
Thus the VAE has been implemented and the output has been verified successfully.
Exercise No: 23 CONTRACTIVE AUTOENCODER
Date:
AIM
To implement a contractive auto encoder model.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Load MNIST dataset
Step 3: Normalize and flatten images
Step 4: Build Contractive Autoencoder model
Step 5: Define the Contractive Autoencoder loss with the penalty term using GradientTape
Step 6: Compute the Jacobian of the encoder output with respect to the input using GradientTape
Step 7: Compute the Frobenius norm of the Jacobian and add to the loss
Step 8: Use add_loss to include the penalty term in the model's overall loss
Step 9: Compile the model
Step 10: Load MNIST data
Step 11: Preprocess data
Step 12: Split the data into training and validation sets
Step 13: Set the dimensions
Step 14: Build and train the Contractive Autoencoder
Step 15: Evaluate on the test set
SOURCE CODE
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.datasets import mnist
from sklearn.model_selection import train_test_split
import numpy as np
# Define the Contractive Autoencoder loss with the penalty term using GradientTape
def contractive_loss(y_true, y_pred):
mse = tf.keras.losses.mean_squared_error(y_true, y_pred)
# Compute the Jacobian of the encoder output with respect to the input using GradientTape
with tf.GradientTape() as tape:
encoder_output = contractive_autoencoder.layers[1](input_layer)
encoder_jacobian = tape.jacobian(encoder_output, input_layer)
# Compute the Frobenius norm of the Jacobian and add to the loss
contractive_penalty = penalty_factor * tf.reduce_sum(tf.square(encoder_jacobian))
# Use add_loss to include the penalty term in the model's overall loss
contractive_autoencoder.add_loss(lambda: contractive_loss(input_layer, decoded))
return contractive_autoencoder
# Preprocess data
train_images = preprocess_images(train_images)
test_images = preprocess_images(test_images)
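The loss fragment above relies on a GradientTape Jacobian that is not shown in full. Below is a self-contained alternative sketch that reaches the same Frobenius-norm penalty through the closed-form Jacobian of a single sigmoid encoder layer; the builder name, the penalty factor of 1e-4 and the layer sizes are assumptions, and this is a different route than the record's GradientTape code.
# Hypothetical contractive autoencoder with an analytic Frobenius-norm penalty
def build_contractive_autoencoder(input_dim=784, encoding_dim=32, penalty_factor=1e-4):
    input_layer = Input(shape=(input_dim,))
    encoder_layer = Dense(encoding_dim, activation='sigmoid', name='encoder')
    encoded = encoder_layer(input_layer)
    decoded = Dense(input_dim, activation='sigmoid')(encoded)
    model = Model(input_layer, decoded)

    # Reconstruction term
    mse = tf.reduce_mean(tf.square(input_layer - decoded))
    # For h = sigmoid(Wx + b), dh_j/dx_i = h_j(1 - h_j) * W_ij, so the squared Frobenius
    # norm of the Jacobian is sum_j (h_j(1 - h_j))^2 * sum_i W_ij^2.
    W = encoder_layer.kernel                            # shape (input_dim, encoding_dim)
    dh = encoded * (1.0 - encoded)                      # shape (batch, encoding_dim)
    contractive = tf.reduce_sum(tf.square(dh) * tf.reduce_sum(tf.square(W), axis=0), axis=1)
    model.add_loss(mse + penalty_factor * tf.reduce_mean(contractive))
    model.compile(optimizer='adam')
    return model

contractive_autoencoder = build_contractive_autoencoder()
contractive_autoencoder.summary()
# Assuming train_images was flattened to (n, 784) by the preprocessing above, training
# could then proceed with, e.g.:
# contractive_autoencoder.fit(train_images, epochs=10, batch_size=64, validation_split=0.2)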
OUTPUT
Model: "model_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_5 (InputLayer) [(None, 784)] 0
=================================================================
Total params: 50992 (199.19 KB)
Trainable params: 50992 (199.19 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Epoch 1/10
RESULT
Thus the contractive autoencoder has been implemented and the output has been verified
successfully.
Exercise No: 24 ADVERSARIAL AUTOENCODER
Date:
AIM
To implement an adversarial auto encoder model.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Set random seed for reproducibility
Step 3: Define the architecture of the Adversarial Autoencoder
Step 4: Compile the discriminator
Step 5: Freeze the weights of the discriminator during generator training
Step 6: Combined Adversarial Autoencoder model
Step 7: Compile the combined model
Step 8: Set the dimensions of input data and the encoding dimension
Step 9: Build the Adversarial Autoencoder model
Step 10: Display the architecture of the Adversarial Autoencoder
Step 11: Generate random data for demonstration
Step 12: Normalize the data to values between 0 and 1
Step 13: Train the Adversarial Autoencoder
Step 14: Evaluate on a test sample
SOURCE CODE
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
import numpy as np
# Autoencoder
autoencoder = Model(input_layer, decoded)
# Discriminator
validity = Dense(1, activation='sigmoid')(encoded)
discriminator = Model(input_layer, validity)
print("Original Input:")
print(test_sample)
print("Decoded Output:")
print(decoded_sample)
print("Validity Score:")
print(validity_score)
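The record shows only imports, the two model definitions and the final prints. The sketch below wires those fragments together on random demonstration data so the prints can run; the sizes, losses and data are assumptions, and it deliberately omits the adversarial training loop of Steps 5-7 and 13, which the record does not show.
# Hypothetical assembly of the fragments above (sizes, losses and demo data are assumptions)
input_dim, encoding_dim = 784, 32
input_layer = Input(shape=(input_dim,))
encoded = Dense(encoding_dim, activation='relu')(input_layer)
decoded = Dense(input_dim, activation='sigmoid')(encoded)

# Autoencoder
autoencoder = Model(input_layer, decoded)
autoencoder.compile(optimizer='adam', loss='mse')

# Discriminator sharing the encoder, as in the fragment above
validity = Dense(1, activation='sigmoid')(encoded)
discriminator = Model(input_layer, validity)
discriminator.compile(optimizer='adam', loss='binary_crossentropy')

# Steps 11-12: random demonstration data scaled to [0, 1]
data = np.random.rand(1000, input_dim).astype('float32')
autoencoder.fit(data, data, epochs=10, batch_size=32, verbose=0)

# Step 14: evaluate on a test sample
test_sample = data[:1]
decoded_sample = autoencoder.predict(test_sample)
validity_score = discriminator.predict(test_sample)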
OUTPUT
Model: "model_13"
__________________________________________________________________________________________________
 Layer (type)                    Output Shape           Param #   Connected to
==================================================================================================
 input_12 (InputLayer)           [(None, 784)]          0         []
 ...
RESULT
Thus the adversarial autoencoder has been implemented and the output has been verified
successfully.
Exercise No: 25 CONVOLUTIONAL AUTOENCODER
Date:
AIM
To implement a convolutional auto encoder model.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Load MNIST data
Step 3: Preprocess data
Step 4: Split the data into training and validation sets
Step 5: Set the input shape
Step 6: Build and train the Convolutional Autoencoder
Step 7: Evaluate on the test set
Step 8: Visualize original and reconstructed images
Step 9: Plot original images
Step 10: Plot reconstructed images
SOURCE CODE
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model
from tensorflow.keras.datasets import mnist
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt
# Encoder
x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_layer)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# Decoder
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
return autoencoder
# Preprocess data
train_images = preprocess_images(train_images)
test_images = preprocess_images(test_images)
plt.show()
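The layers above are recorded without the surrounding builder, data pipeline or training call. A self-contained sketch follows; the builder name, the body of preprocess_images, and the optimizer, epoch and batch settings are assumptions.
# Hypothetical builder and pipeline around the layers above (settings are assumptions)
def build_conv_autoencoder(input_shape=(28, 28, 1)):
    input_layer = Input(shape=input_shape)
    # Encoder (same layer stack as listed above)
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_layer)
    x = MaxPooling2D((2, 2), padding='same')(x)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D((2, 2), padding='same')(x)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
    encoded = MaxPooling2D((2, 2), padding='same')(x)
    # Decoder
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(16, (3, 3), activation='relu')(x)
    x = UpSampling2D((2, 2))(x)
    decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
    autoencoder = Model(input_layer, decoded)
    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    return autoencoder

def preprocess_images(images):
    # Scale to [0, 1] and add a channel dimension for Conv2D
    return np.expand_dims(images.astype('float32') / 255.0, -1)

(train_images, _), (test_images, _) = mnist.load_data()                 # Step 2
train_images = preprocess_images(train_images)                          # Step 3
test_images = preprocess_images(test_images)
x_train, x_val = train_test_split(train_images, test_size=0.2, random_state=42)   # Step 4

autoencoder = build_conv_autoencoder()                                   # Steps 5-6
autoencoder.fit(x_train, x_train, epochs=10, batch_size=64, validation_data=(x_val, x_val))
print("Test Loss:", autoencoder.evaluate(test_images, test_images))      # Step 7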
OUTPUT
Model: "model_14"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_13 (InputLayer) [(None, 28, 28, 1)] 0
conv2d (Conv2D) (None, 28, 28, 16) 160
=================================================================
Total params: 4385 (17.13 KB)
Trainable params: 4385 (17.13 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Epoch 1/10
750/750 [==============================] - 67s 85ms/step - loss: 0.2061 -
val_loss: 0.1445
Epoch 2/10
750/750 [==============================] - 57s 76ms/step - loss: 0.1351 -
val_loss: 0.1289
Epoch 3/10
750/750 [==============================] - 57s 77ms/step - loss: 0.1247 -
val_loss: 0.1213
Epoch 4/10
750/750 [==============================] - 55s 73ms/step - loss: 0.1184 -
val_loss: 0.1171
Epoch 5/10
750/750 [==============================] - 57s 76ms/step - loss: 0.1144 -
val_loss: 0.1128
Epoch 6/10
750/750 [==============================] - 55s 73ms/step - loss: 0.1115 -
val_loss: 0.1108
Epoch 7/10
750/750 [==============================] - 56s 74ms/step - loss: 0.1093 -
val_loss: 0.1084
Epoch 8/10
750/750 [==============================] - 56s 75ms/step - loss: 0.1076 -
val_loss: 0.1076
Epoch 9/10
750/750 [==============================] - 55s 73ms/step - loss: 0.1061 -
val_loss: 0.1055
Epoch 10/10
750/750 [==============================] - 64s 85ms/step - loss: 0.1050 -
val_loss: 0.1047
313/313 [==============================] - 4s 11ms/step - loss: 0.1035
Test Loss: 0.10348694771528244
1/1 [==============================] - 0s 108ms/step
RESULT
Thus the convolutional autoencoder has been implemented and the output has been verified
successfully.
Exercise No: 26 STACKED AUTOENCODER
Date:
AIM
To implement a stacked auto encoder model.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Load MNIST data
Step 3: Preprocess data
Step 4: Split the data into training and validation sets
Step 5: Set the input shape
Step 6: Build and train the stacked Autoencoder
Step 7: Evaluate on the test set
Step 8: Visualize original and reconstructed images
Step 9: Plot original images
Step 10: Plot reconstructed images
SOURCE CODE
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.datasets import mnist
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt
# Encoder layers
encoder_layers = []
for encoding_dim in encoding_dims:
    x = Dense(encoding_dim, activation='relu')(x)
    encoder_layers.append(x)
# Decoder layers
decoder_input = Input(shape=(encoding_dims[-1],))
x = decoder_input
for encoding_dim in reversed(encoding_dims[:-1]):
    x = Dense(encoding_dim, activation='relu')(x)
# Preprocess data
train_images = preprocess_images(train_images)
test_images = preprocess_images(test_images)
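The encoder and decoder loops above are recorded without the enclosing builder. The sketch below collapses them into one builder with a single chain of layers; the layer sizes (256, 128, 64) and the compile settings are assumptions.
# Hypothetical stacked-autoencoder builder consistent with the loops above (sizes assumed)
def build_stacked_autoencoder(input_dim=784, encoding_dims=(256, 128, 64)):
    input_layer = Input(shape=(input_dim,))
    x = input_layer
    # Encoder layers
    for encoding_dim in encoding_dims:
        x = Dense(encoding_dim, activation='relu')(x)
    # Decoder layers mirror the encoder
    for encoding_dim in reversed(encoding_dims[:-1]):
        x = Dense(encoding_dim, activation='relu')(x)
    decoded = Dense(input_dim, activation='sigmoid')(x)
    autoencoder = Model(input_layer, decoded)
    autoencoder.compile(optimizer='adam', loss='mse')
    return autoencoder

stacked_autoencoder = build_stacked_autoencoder()
stacked_autoencoder.summary()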
Model: "model_21"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_18 (InputLayer) [(None, 784)] 0
=================================================================
Total params: 484944 (1.85 MB)
Trainable params: 484944 (1.85 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Epoch 1/10
750/750 [==============================] - 10s 11ms/step - loss: 0.0313 -
val_loss: 0.0157
Epoch 2/10
750/750 [==============================] - 9s 12ms/step - loss: 0.0131 -
val_loss: 0.0113
Epoch 3/10
750/750 [==============================] - 8s 11ms/step - loss: 0.0103 -
val_loss: 0.0096
Epoch 4/10
750/750 [==============================] - 8s 11ms/step - loss: 0.0089 -
val_loss: 0.0085
Epoch 5/10
750/750 [==============================] - 9s 12ms/step - loss: 0.0080 -
val_loss: 0.0077
Epoch 6/10
750/750 [==============================] - 8s 11ms/step - loss: 0.0073 -
val_loss: 0.0070
Epoch 7/10
750/750 [==============================] - 9s 12ms/step - loss: 0.0067 -
val_loss: 0.0066
Epoch 8/10
750/750 [==============================] - 9s 12ms/step - loss: 0.0063 -
val_loss: 0.0065
Epoch 9/10
750/750 [==============================] - 9s 12ms/step - loss: 0.0060 -
val_loss: 0.0061
Epoch 10/10
750/750 [==============================] - 8s 11ms/step - loss: 0.0057 -
val_loss: 0.0059
313/313 [==============================] - 1s 3ms/step - loss: 0.0057
Test Loss: 0.00568332290276885
1/1 [==============================] - 0s 71ms/step
RESULT
Thus the stacked autoencoder has been implemented and the output has been verified
successfully.
Exercise No: 27 RECURRENT AUTOENCODER
Date:
AIM
To implement a recurrent auto encoder model.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Generate synthetic time-series data
Step 3: Build Recurrent Autoencoder model
Step 4: Set parameters
Step 5: Generate synthetic time-series data
Step 6: Build and train the Recurrent Autoencoder
Step 7: Generate a sample for testing
Step 8: Print the original and reconstructed samples
SOURCE CODE
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense
from tensorflow.keras.models import Model
# Set parameters
n_samples = 1000
n_timestamps = 10
n_features = 5
input_dim = n_features
encoding_dim = 3
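Only the imports and parameters are recorded. A minimal sketch of the builder and of Steps 5-8 follows, using the common LSTM-RepeatVector-LSTM layout; the layer arrangement, the random data and the training settings are assumptions.
# Hypothetical recurrent-autoencoder builder and usage (layout and settings are assumptions)
def build_recurrent_autoencoder(n_timestamps, n_features, encoding_dim):
    input_layer = Input(shape=(n_timestamps, n_features))
    # Encoder: compress the whole sequence into a single latent vector
    encoded = LSTM(encoding_dim, activation='relu')(input_layer)
    # Decoder: repeat the latent vector and unroll it back into a sequence
    repeated = RepeatVector(n_timestamps)(encoded)
    decoded = LSTM(n_features, activation='relu', return_sequences=True)(repeated)
    decoded = TimeDistributed(Dense(n_features, activation='sigmoid'))(decoded)
    autoencoder = Model(input_layer, decoded)
    autoencoder.compile(optimizer='adam', loss='mse')
    return autoencoder

# Steps 5-8
data = np.random.rand(n_samples, n_timestamps, n_features).astype('float32')
autoencoder = build_recurrent_autoencoder(n_timestamps, n_features, encoding_dim)
autoencoder.fit(data, data, epochs=10, batch_size=32, validation_split=0.2)
sample = data[:1]
print("Original Sample:")
print(sample[0])
print("Reconstructed Sample:")
print(autoencoder.predict(sample)[0])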
OUTPUT
Model: "model_25"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_26 (InputLayer) [(None, 10, 5)] 0
Reconstructed Sample:
[[0. 0. 0.19041412 0.16169252 0.462472 ]
[0. 0. 0.33790386 0.32640952 0.5394136 ]
[0. 0. 0.44943115 0.4040794 0.5467331 ]
[0. 0. 0.5276734 0.4360892 0.5453661 ]
[0. 0. 0.5804315 0.4500665 0.54469204]
[0. 0. 0.61546826 0.45719433 0.544873 ]
[0. 0. 0.63862896 0.46139336 0.5452977 ]
[0. 0. 0.6539224 0.46407074 0.54568434]
[0. 0. 0.6640183 0.4658289 0.5459694 ]
[0. 0. 0.6706819 0.4669915 0.5461642 ]]
RESULT
Thus the recurrent autoencoder has been implemented and the output has been verified
successfully.
Exercise No: 28 ATTENTION BASED AUTOENCODER
Date:
AIM
To implement an attention based auto encoder model.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Generate synthetic time-series data
Step 3: Build the Attention-based Autoencoder model
Step 4: Set parameters
Step 5: Generate synthetic time-series data
Step 6: Build and train the Attention-based Autoencoder
Step 7: Generate a sample for testing
Step 8: Print the original and reconstructed samples
SOURCE CODE
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, Dense, Attention
from tensorflow.keras.models import Model
# Encoder
encoder = LSTM(encoding_dim, activation='relu', return_sequences=True)(input_layer)
# Attention mechanism
attention = Attention()([encoder, encoder])
# Context vector
context_vector = tf.reduce_sum(attention * encoder, axis=1)
context_vector = tf.expand_dims(context_vector, axis=1)
# Decoder
decoder = LSTM(input_dim, activation='relu', return_sequences=True)(context_vector)
return autoencoder
# Set parameters
n_samples = 1000
n_timestamps = 10
n_features = 5
input_dim = n_features
encoding_dim = 3
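The layers and the closing return above are recorded without the enclosing builder or the training calls. A self-contained restatement is sketched below; the builder name, the random data and the training settings are assumptions.
# Hypothetical builder wrapping the layers above, plus Steps 5-8 (settings are assumptions)
def build_attention_autoencoder(n_timestamps, n_features, encoding_dim):
    input_dim = n_features
    input_layer = Input(shape=(n_timestamps, input_dim))
    # Encoder
    encoder = LSTM(encoding_dim, activation='relu', return_sequences=True)(input_layer)
    # Attention mechanism over the encoder states
    attention = Attention()([encoder, encoder])
    # Context vector summarising the attended sequence
    context_vector = tf.reduce_sum(attention * encoder, axis=1)
    context_vector = tf.expand_dims(context_vector, axis=1)
    # Decoder
    decoder = LSTM(input_dim, activation='relu', return_sequences=True)(context_vector)
    autoencoder = Model(input_layer, decoder)
    autoencoder.compile(optimizer='adam', loss='mse')
    return autoencoder

data = np.random.rand(n_samples, n_timestamps, n_features).astype('float32')
autoencoder = build_attention_autoencoder(n_timestamps, n_features, encoding_dim)
autoencoder.fit(data, data, epochs=10, batch_size=32, validation_split=0.2)
sample = data[:1]
print("Original Sample:")
print(sample[0])
print("Reconstructed Sample:")
print(autoencoder.predict(sample)[0])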
Model: "model_26"
_______________________________________________________________________________
___________________
Layer (type) Output Shape Param # Connected
to
===============================================================================
===================
input_27 (InputLayer) [(None, 10, 5)] 0 []
'lstm_11[0][0]']
===============================================================================
===================
Total params: 288 (1.12 KB)
Trainable params: 288 (1.12 KB)
Non-trainable params: 0 (0.00 Byte)
_______________________________________________________________________________
___________________
Epoch 1/10
25/25 [==============================] - 5s 20ms/step - loss: 0.3286 -
val_loss: 0.3233
Epoch 2/10
25/25 [==============================] - 0s 6ms/step - loss: 0.3184 - val_loss:
0.3078
Epoch 3/10
25/25 [==============================] - 0s 7ms/step - loss: 0.2855 - val_loss:
0.2386
Epoch 4/10
25/25 [==============================] - 0s 7ms/step - loss: 0.1964 - val_loss:
0.1575
Epoch 5/10
25/25 [==============================] - 0s 6ms/step - loss: 0.1449 - val_loss:
0.1301
Epoch 6/10
25/25 [==============================] - 0s 6ms/step - loss: 0.1223 - val_loss:
0.1135
Epoch 7/10
25/25 [==============================] - 0s 7ms/step - loss: 0.1099 - val_loss:
0.1047
Epoch 8/10
25/25 [==============================] - 0s 7ms/step - loss: 0.1024 - val_loss:
0.0986
Epoch 9/10
25/25 [==============================] - 0s 7ms/step - loss: 0.0955 - val_loss:
0.0920
Epoch 10/10
25/25 [==============================] - 0s 7ms/step - loss: 0.0901 - val_loss:
0.0895
Original Sample:
[[0.07514075 0.27570244 0.17205442 0.76543934 0.12606682]
[0.85493027 0.17087281 0.30278675 0.43731926 0.19298758]
[0.33443204 0.02981043 0.20641557 0.68165836 0.10519137]
[0.76920832 0.79390024 0.54137493 0.79651449 0.63268135]
[0.58364066 0.79903614 0.84577804 0.63678779 0.71815112]
[0.42120601 0.34403201 0.09601646 0.98617704 0.99529867]
[0.35465342 0.31725689 0.23198785 0.59760986 0.61902932]
[0.28546931 0.75901253 0.22785654 0.64149577 0.13482652]
[0.49507332 0.25439777 0.45210239 0.76117903 0.88308714]
[0.16808893 0.64370015 0.29714341 0.24238095 0.94553781]]
Reconstructed Sample:
[[0.5734029 0.50906104 0.43819004 0.52426314 0.5283323 ]]
RESULT
Thus the attention based autoencoder has been implemented and the output has been verified
successfully.
Exercise No: 29 BOLTZMANN MACHINE
Date:
AIM
To implement a Deep Boltzmann Machine and a Restricted Boltzmann Machine, which are useful for
unsupervised learning tasks.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Define the class RBM.
Step 3: Define the class DBM.
Step 4: Create some dummy data
Step 5: Pretrain and finetune the DBM
Step 6: Forward pass through the DBM
SOURCE CODE
import numpy as np
class RBM:
def __init__(self, n_visible, n_hidden):
self.weights = np.random.randn(n_visible, n_hidden) * 0.1
self.hidden_bias = np.random.randn(n_hidden) * 0.1
self.visible_bias = np.random.randn(n_visible) * 0.1
class DBM:
def __init__(self, layer_sizes):
self.rbms = [RBM(layer_sizes[i], layer_sizes[i + 1]) for i in range(len(layer_sizes) - 1)]
# Top-down pass
down_data = up_data
for i, rbm in enumerate(reversed(self.rbms)):
down_data = rbm.sample_visible(down_data)
if i < len(self.rbms) - 1: # Do not update the visible layer of the first RBM
# Update the corresponding RBM with the data from the layer above
self.rbms[-i-1].train(up_pass_data[-i-2], learning_rate, 1)
# Example usage
dbm = DBM([100, 256, 512]) # Example layer sizes
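The RBM methods referenced above (sample_hidden, sample_visible, train) are not shown in the record. A minimal NumPy sketch of how they might look, together with a greedy bottom-up pass through a stack of RBMs, follows; the sampling scheme, the CD-1 learning rule and all sizes are assumptions.
# Hypothetical sketch of the unshown RBM methods and a bottom-up DBM pass (assumptions throughout)
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBMSketch:
    def __init__(self, n_visible, n_hidden):
        self.weights = np.random.randn(n_visible, n_hidden) * 0.1
        self.hidden_bias = np.random.randn(n_hidden) * 0.1
        self.visible_bias = np.random.randn(n_visible) * 0.1

    def sample_hidden(self, visible):
        # Probability of each hidden unit being on, then a Bernoulli sample
        p = sigmoid(visible @ self.weights + self.hidden_bias)
        return (np.random.rand(*p.shape) < p).astype(float)

    def sample_visible(self, hidden):
        p = sigmoid(hidden @ self.weights.T + self.visible_bias)
        return (np.random.rand(*p.shape) < p).astype(float)

    def train(self, data, learning_rate=0.1, epochs=1):
        # One-step contrastive divergence (CD-1)
        for _ in range(epochs):
            h0 = self.sample_hidden(data)
            v1 = self.sample_visible(h0)
            h1 = self.sample_hidden(v1)
            self.weights += learning_rate * (data.T @ h0 - v1.T @ h1) / len(data)
            self.hidden_bias += learning_rate * np.mean(h0 - h1, axis=0)
            self.visible_bias += learning_rate * np.mean(data - v1, axis=0)

# Greedy layer-wise pretraining and a forward (bottom-up) pass through the stack
rbms = [RBMSketch(100, 256), RBMSketch(256, 512)]
x = (np.random.rand(10, 100) > 0.5).astype(float)    # Step 4: dummy binary data
for rbm in rbms:
    rbm.train(x)                                      # Step 5: pretrain this layer
    x = rbm.sample_hidden(x)                          # Step 6: pass activations upward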
OUTPUT
RESULT
Thus the DBM and RBM have been implemented and the output has been verified successfully.
GENERATIVE ADVERSARIAL NETWORK (GANs)
Generative Adversarial Networks (GANs) are a class of machine learning models used for
generative tasks, such as generating realistic images, music, text, etc. GANs were introduced
by Ian Goodfellow and his colleagues in 2014 and have since become one of the most
popular and powerful techniques in generative modeling. Here's an overview of how GANs
work:
Basic Concept:
- GANs consist of two neural networks: a generator and a discriminator.
- The generator takes random noise as input and generates fake data samples (e.g., images).
- The discriminator takes both real data samples (from a training dataset) and fake samples
(generated by the generator) as input and tries to distinguish between them (i.e., classify them
as real or fake).
- The two networks are trained simultaneously in an adversarial manner: the generator tries to
generate more realistic samples to fool the discriminator, while the discriminator tries to get
better at distinguishing between real and fake samples.
Training Process:
1. **Initialization**: The generator and discriminator are initialized with random weights.
2. **Training Iterations**:
- **Generator Training**: The generator generates fake samples using random noise and
tries to make them look as realistic as possible to fool the discriminator. The generator's loss
is based on how well the discriminator is fooled.
- **Discriminator Training**: The discriminator is trained to distinguish between real and
fake samples. Its loss is based on its ability to correctly classify real and fake samples.
3. **Adversarial Training**: Both networks are trained simultaneously, with the generator
trying to improve its ability to fool the discriminator, and the discriminator trying to improve
its ability to distinguish between real and fake samples.
4. **Convergence**: Ideally, the generator improves over time to generate more realistic
samples, while the discriminator becomes increasingly accurate at distinguishing between
real and fake samples.
5. **Evaluation**: The quality of generated samples can be evaluated using metrics such as
Inception Score, Frechet Inception Distance (FID), or visual inspection.
Overall, GANs have demonstrated remarkable success in various generative tasks, but they
also present several challenges in training and evaluation that continue to be areas of active
research.
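The training process described above can be made concrete with a small sketch. The code below is a minimal, illustrative GAN training loop in Keras; the network sizes, the random stand-in dataset and the names generator/discriminator/gan are assumptions for the sketch, not a specific model from this record.
# A minimal, illustrative GAN training loop (all names, sizes and data are assumptions)
import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

latent_dim, data_dim = 16, 64
generator = Sequential([
    Dense(32, activation='relu', input_shape=(latent_dim,)),
    Dense(data_dim, activation='sigmoid'),
])
discriminator = Sequential([
    Dense(32, activation='relu', input_shape=(data_dim,)),
    Dense(1, activation='sigmoid'),
])
discriminator.compile(optimizer='adam', loss='binary_crossentropy')

# Combined model: the discriminator is frozen while the generator is trained through it
discriminator.trainable = False
gan = Sequential([generator, discriminator])
gan.compile(optimizer='adam', loss='binary_crossentropy')

real_data = np.random.rand(1000, data_dim)            # stand-in for a real dataset
for step in range(100):
    # 1. Train the discriminator on real vs. generated samples
    noise = np.random.normal(size=(32, latent_dim))
    fake = generator.predict(noise, verbose=0)
    real = real_data[np.random.randint(0, len(real_data), 32)]
    discriminator.train_on_batch(real, np.ones((32, 1)))
    discriminator.train_on_batch(fake, np.zeros((32, 1)))
    # 2. Train the generator (through the frozen discriminator) to label fakes as real
    noise = np.random.normal(size=(32, latent_dim))
    gan.train_on_batch(noise, np.ones((32, 1)))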
ENERGY BASED MODELS
Energy-based models (EBMs) are a class of machine learning models that define a scoring
function or energy function for a given input. The goal of these models is to assign lower
energy values to the observed or desirable data points and higher energy values to other
points. The learning process involves adjusting the parameters of the model to minimize the
energy for observed data and increase it for other data.
1. **Energy Function (Scoring Function):** The core concept of EBMs is the energy
function, denoted as \(E(x; \theta)\), where \(x\) is the input data and \(\theta\) represents the
model parameters. The energy function measures the compatibility of a given input with the
model.
2. **Training Objective:** The learning objective in EBMs involves adjusting the parameters
\(\theta\) to minimize the energy for observed data points and maximize it for other points.
One common training approach is contrastive divergence, which involves updating
parameters in the direction that reduces the energy for observed samples and increases it for
generated samples.
3. **Hopfield Networks:** Hopfield networks are a type of EBM used for associative
memory. They have binary units and are trained to store and retrieve patterns.
4. **Deep Boltzmann Machines (DBMs):** DBMs extend RBMs to multiple layers, creating
a deeper architecture. They can learn hierarchical representations of data.
5. **Conditional Random Fields (CRFs):** CRFs can be viewed as a type of EBM used for
structured prediction tasks. They model the conditional distribution of labels given input
features.
EBMs have applications in various domains, including image generation, speech recognition,
and natural language processing. They provide a flexible framework for modeling complex
relationships in data. However, training EBMs can be challenging, and it often involves
iterative optimization techniques.
CHALLENGES OF GANN
Generative Adversarial Networks (GANs) have proven to be powerful for generating realistic
data, but they come with several challenges, both in terms of training and performance. Some
of the key challenges associated with GANs include:
1. **Mode Collapse:**
- **Issue:** GANs can suffer from mode collapse, where the generator learns to produce
only a limited set of samples, ignoring the diversity present in the training data.
- **Impact:** This results in generated samples that lack variety, limiting the usefulness of
the model.
2. **Training Instability:**
- **Issue:** GANs are known for their training instability. The adversarial training process
can oscillate, leading to difficulties in convergence and requiring careful tuning of
hyperparameters.
- **Impact:** Unstable training can make it challenging to train GANs effectively and
achieve good performance.
3. **Choice of Hyperparameters:**
- **Issue:** GANs involve tuning multiple hyperparameters, such as learning rates,
architectural choices, and balance between the generator and discriminator.
- **Impact:** Poorly chosen hyperparameters can result in slow convergence, mode
collapse, or other training instabilities.
4. **Evaluation Metrics:**
- **Issue:** Evaluating the performance of GANs is not straightforward. Traditional
metrics like accuracy do not provide a comprehensive measure of the quality of generated
samples.
- **Impact:** Assessing the quality of generated samples and comparing different GAN
models remains an open challenge.
5. **Ethical Considerations:**
- **Issue:** GANs can be misused for creating deepfakes or generating fake content that
may have ethical implications.
- **Impact:** Addressing ethical concerns and preventing malicious uses of GANs is an
ongoing challenge.
Researchers are actively working on addressing these challenges to enhance the robustness,
stability, and performance of GANs. Techniques such as improved network architectures,
regularization methods, and alternative training strategies aim to mitigate these issues and
make GANs more reliable and applicable across various domains.
Exercise No: 30 IMPLEMENTATION OF GANN
Date:
AIM
To develop and train the Generative Adversarial Neural Network by generating random
samples.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Develop the Generator model.
Step 3: Develop the Discriminator model.
Step 4: Develop the GAN model.
Step 5: Generate random samples from latent space
Step 6: Generate fake samples using the generator
Step 7: Train the GAN
Step 8: Train discriminator with real samples
Step 9: Train discriminator with fake samples
Step 10: Train generator via the GAN model
Step 11: Evaluate the model at specified intervals
Step 12: Visualize generated samples
Step 13: Set parameters
Step 14: Build and compile the models
Step 15: Train the GAN
SOURCE CODE
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
# Generator model
def build_generator(latent_dim):
    model = Sequential()
    model.add(Dense(10, input_dim=latent_dim, activation='relu'))
    model.add(Dense(2, activation='linear'))
    return model
# Discriminator model
def build_discriminator(input_dim):
    model = Sequential()
    model.add(Dense(10, input_dim=input_dim, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# GAN model
def build_gan(generator, discriminator):
    discriminator.trainable = False
    model = Sequential()
    model.add(generator)
    model.add(discriminator)
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model
# Set parameters
latent_dim = 5
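The remaining steps of the procedure (building the models, sampling from the latent space, and the training loop) are not recorded; a minimal sketch, assuming points drawn from a 2-D Gaussian as the "real" data, could look like this:
# Build and compile the models
generator = build_generator(latent_dim)
discriminator = build_discriminator(2)
gan = build_gan(generator, discriminator)
# Generate random samples from latent space and fake samples from the generator
def generate_latent_points(latent_dim, n):
    return np.random.randn(n, latent_dim)
def generate_fake_samples(generator, latent_dim, n):
    return generator.predict(generate_latent_points(latent_dim, n))
# Train the GAN (number of epochs and batch size are assumptions)
n_epochs, n_batch = 1000, 64
for epoch in range(n_epochs):
    # Train discriminator with real samples (assumed: points from a 2-D Gaussian)
    real_samples = np.random.normal(loc=0.0, scale=1.0, size=(n_batch, 2))
    d_loss_real = discriminator.train_on_batch(real_samples, np.ones((n_batch, 1)))
    # Train discriminator with fake samples
    fake_samples = generate_fake_samples(generator, latent_dim, n_batch)
    d_loss_fake = discriminator.train_on_batch(fake_samples, np.zeros((n_batch, 1)))
    # Train generator via the GAN model (labels flipped to "real")
    z = generate_latent_points(latent_dim, n_batch)
    g_loss = gan.train_on_batch(z, np.ones((n_batch, 1)))
# Visualize generated samples
samples = generate_fake_samples(generator, latent_dim, 200)
plt.scatter(samples[:, 0], samples[:, 1], s=10)
plt.title('Samples produced by the generator')
plt.show()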
OUTPUT
RESULT
Thus the GANN model has been developed and trained and the output has been verified
successfully.
Exercise No: 31 IMAGE GENERATION USING GANN
Date:
AIM
To generate an image using Generative Adversarial Neural Network.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Design Load MNIST dataset
Step 3: Design Generator network
Step 4: Design Discriminator network
Step 5: Define the discriminator
Step 6: Define the generator
Step 7: Define the GAN
Step 8: Training loop
Step 9: Generate and plot generated images
SOURCE CODE
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
tf.logging.set_verbosity(tf.logging.ERROR)
tf.reset_default_graph()
data = input_data.read_data_sets("data/mnist", one_hot=True)
plt.imshow(data.train.images[13].reshape(28, 28), cmap="gray")
def generator(z, reuse=None):
    with tf.variable_scope('generator', reuse=reuse):
        hidden1 = tf.layers.dense(inputs=z, units=128, activation=tf.nn.leaky_relu)
        hidden2 = tf.layers.dense(inputs=hidden1, units=128, activation=tf.nn.leaky_relu)
        output = tf.layers.dense(inputs=hidden2, units=784, activation=tf.nn.tanh)
    return output
def discriminator(X, reuse=None):
    with tf.variable_scope('discriminator', reuse=reuse):
        hidden1 = tf.layers.dense(inputs=X, units=128, activation=tf.nn.leaky_relu)
        hidden2 = tf.layers.dense(inputs=hidden1, units=128, activation=tf.nn.leaky_relu)
        logits = tf.layers.dense(inputs=hidden2, units=1)
        output = tf.sigmoid(logits)
    return logits
x = tf.placeholder(tf.float32,shape=[None,784])
z = tf.placeholder(tf.float32,shape=[None,100])
fake_x = generator(z)
D_logits_real = discriminator(x)
D_logits_fake = discriminator(fake_x,reuse=True)
D_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_logits_real,
labels=tf.ones_like(D_logits_real)))
D_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_logits_fake,
labels=tf.zeros_like(D_logits_fake)))
D_loss = D_loss_real + D_loss_fake
G_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_logits_fake,
labels=tf.ones_like(D_logits_fake)))
training_vars = tf.trainable_variables()
init = tf.global_variables_initializer()
with tf.Session() as session:
    # define the feed dictionary with input x as batch_images and noise z as batch_noise
    feed_dict = {x: batch_images, z: batch_noise}
    # feed the noise to the generator on every 100th epoch and generate an image
    if epoch % 100 == 0:
        print("Epoch: {}, iteration: {}, Discriminator Loss: {}, Generator Loss: {}".format(
            epoch, i, discriminator_loss, generator_loss))
OUTPUT
<matplotlib.image.AxesImage at 0x7f1529d77990>
Epoch: 0, iteration: 549, Discriminator Loss:4.96056985855, Generator Loss: 0.880222082138
RESULT
Thus the image has been generated using the GANN model and the output has been verified
successfully.
Exercise No: 32 STYLE TRANSFER USING GANN
Date:
AIM
To transfer the style of an image using Generative Adversarial Neural Network.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Load the pre-trained VGG19 model without the fully connected layers
Step 3: Layers used for content and style representation
Step 4: Model for extracting content and style features
Step 5: Function to compute content loss
Step 6: Function to compute style loss using Gram matrix
Step 7: Function to preprocess image
Step 8: Function to de-process image
Step 9: Function to compute total loss
Step 10: Function to perform style transfer
Step 11: Perform style transfer
SOURCE CODE
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.applications import VGG19
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.applications.vgg19 import preprocess_input
from tensorflow.keras import backend as K
# Load the pre-trained VGG19 model without the fully connected layers
base_model = VGG19(weights="imagenet", include_top=False)
    content_target = content_model(image)
    for content, target in zip(content_features, content_target):
        content_loss_val += content_loss(content, target)
    style_target = style_model(image)
    for style, target in zip(style_features, style_target):
        style_loss_val += style_loss(style, target)
    total_loss = content_weight * content_loss_val + style_weight * style_loss_val
    return total_loss
content_features = content_model(content_image)
style_features = style_model(style_image)
    if (epoch + 1) % 5 == 0 or epoch == 0:
        generated_image_np = deprocess_image(generated_image.numpy())
        keras.preprocessing.image.save_img(output_path.format(epoch + 1), generated_image_np)
        print("Epoch {}: Saved image to {}".format(epoch + 1, output_path.format(epoch + 1)))
OUTPUT
output_image_5.jpg
output_image_10.jpg
output_image_15.jpg
output_image_20.jpg
output_image_25.jpg
RESULT
Thus the style of an image has been transferred using the GANN model and the output has been
verified successfully.
Exercise No: 33 DATA AUGMENTATION USING GANN
Date:
AIM
To generate additional training data by applying transformations such as rotation, flipping,
scaling, etc., to the existing dataset using a Generative Adversarial Neural Network.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Define the generator network
Step 3: Define the discriminator network
Step 4: Define GAN
Step 5: Define the data augmentation GAN
Step 6: Generate augmented data using GAN
Step 7: Build the data augmentation GAN
Step 8: Generate augmented data
SOURCE CODE
import sys
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Dense, Flatten, Reshape, Input, LeakyReLU
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
img_rows = 28
img_cols = 28
channels = 1
img_shape = (img_rows, img_cols, channels)
z_dim = 100
def generator(img_shape, z_dim):
    model = Sequential()
    # Hidden layer
    model.add(Dense(128, input_dim=z_dim))
    # Leaky ReLU
    model.add(LeakyReLU(alpha=0.01))
    # Output layer with tanh activation, reshaped to the image shape
    model.add(Dense(28 * 28 * 1, activation='tanh'))
    model.add(Reshape(img_shape))
    z = Input(shape=(z_dim,))
    img = model(z)
    return Model(z, img)
def discriminator(img_shape):
    model = Sequential()
    model.add(Flatten(input_shape=img_shape))
    # Hidden layer
    model.add(Dense(128))
    # Leaky ReLU
    model.add(LeakyReLU(alpha=0.01))
    # Output layer with sigmoid activation
    model.add(Dense(1, activation='sigmoid'))
    img = Input(shape=img_shape)
    prediction = model(img)
    return Model(img, prediction)
discriminator = discriminator(img_shape)
discriminator.compile(loss='binary_crossentropy',
optimizer=Adam(), metrics=['accuracy'])
losses = []
accuracies = []
# -------------------------
# Train the Discriminator
# -------------------------
# Discriminator loss
d_loss_real = discriminator.train_on_batch(imgs, real)
d_loss_fake = discriminator.train_on_batch(gen_imgs, fake)
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
# Generator loss
g_loss = combined.train_on_batch(z, real)
if iteration % sample_interval == 0:
    cnt = 0
    for i in range(image_grid_rows):
        for j in range(image_grid_columns):
            # Output image grid
            axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
            axs[i, j].axis('off')
            cnt += 1
iterations = 20000
batch_size = 128
sample_interval = 1000
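The fragments above come from a larger training script; a minimal sketch of the missing pieces they rely on (the combined GAN model, the MNIST batches that supply imgs and gen_imgs, and the outer training loop) could be:
# Build the generator and the combined GAN model used as `combined` above
generator = generator(img_shape, z_dim)
discriminator.trainable = False
z_in = Input(shape=(z_dim,))
combined = Model(z_in, discriminator(generator(z_in)))
combined.compile(loss='binary_crossentropy', optimizer=Adam())
# Load MNIST and rescale the images to [-1, 1] to match the tanh generator
(X_train, _), (_, _) = mnist.load_data()
X_train = X_train / 127.5 - 1.
X_train = np.expand_dims(X_train, axis=3)
real = np.ones((batch_size, 1))
fake = np.zeros((batch_size, 1))
for iteration in range(iterations):
    # Sample a random batch of real images and generate a batch of fake ones
    idx = np.random.randint(0, X_train.shape[0], batch_size)
    imgs = X_train[idx]
    z = np.random.normal(0, 1, (batch_size, z_dim))
    gen_imgs = generator.predict(z)
    # ... discriminator and generator updates as in the fragment above ...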
OUTPUT
RESULT
Thus the additional training data has been generated using the GANN model and the output has been
verified successfully.
Exercise No: 34 SELF-DRIVING CAR SIMULATION
Date:
AIM
To simulate self-driving cars by creating an instance.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Simulate sensors detecting obstacles
Step 3: Move the car forward
Step 4: Print current status
Step 5: Create a self-driving car instance
Step 6: Simulate driving for 10 steps
SOURCE CODE
import random
class SelfDrivingCar:
    def __init__(self):
        self.speed = 0
        self.position = 0
    def move_forward(self):
        self.position += self.speed
    def accelerate(self):
        self.speed += 1
    def decelerate(self):
        self.speed -= 1 if self.speed > 0 else 0
    def drive(self):
        # Simulate a sensor that detects an obstacle (assumed: 30% of the time)
        obstacle_detected = random.random() < 0.3
        if obstacle_detected:
            print("Obstacle detected! Decelerating.")
            self.decelerate()
        else:
            print("No obstacle detected. Accelerating.")
            self.accelerate()
        # Move the car forward and print the current status
        self.move_forward()
        print("Position: {}, Speed: {}".format(self.position, self.speed))
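A short usage sketch covering Steps 5 and 6 of the procedure (creating an instance and simulating ten driving steps) follows:
# Create a self-driving car instance and simulate driving for 10 steps
car = SelfDrivingCar()
for step in range(10):
    car.drive()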
OUTPUT
RESULT
Thus the self-driving car has been simulated by creating an instance and the output has been
verified successfully.
Exercise No: 35 AUTOMATIC TEXT GENERATION
Date:
AIM
To generate the text automatically by reading the values from the text file.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Load and preprocess text data
Step 3: Create character mappings
Step 4: Prepare input and output data
Step 5: Build the LSTM model
Step 6: Function to sample the next character based on the model's predictions
Step 7: Generate text using the trained model
Step 8: Train the model
SOURCE CODE
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.callbacks import LambdaCallback
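# The text loading, character mappings, training data, model and sampling helper
# (Steps 2-6) are not recorded; the sketch below follows the standard Keras
# character-level LSTM example (the file name 'input.txt', maxlen = 40 and
# step = 3 are assumptions)
with open('input.txt', 'r', encoding='utf-8') as f:
    text = f.read().lower()
chars = sorted(list(set(text)))
char_indices = {c: idx for idx, c in enumerate(chars)}
indices_char = {idx: c for idx, c in enumerate(chars)}
maxlen, step = 40, 3
sentences, next_chars = [], []
for idx in range(0, len(text) - maxlen, step):
    sentences.append(text[idx: idx + maxlen])
    next_chars.append(text[idx + maxlen])
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.float32)
y = np.zeros((len(sentences), len(chars)), dtype=np.float32)
for si, sent in enumerate(sentences):
    for t, char in enumerate(sent):
        x[si, t, char_indices[char]] = 1.
    y[si, char_indices[next_chars[si]]] = 1.
# Build the LSTM model (Step 5)
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars), activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
# Sample the next character index from the predicted distribution (Step 6)
def sample(preds, temperature=1.0):
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds + 1e-8) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds)
    return np.argmax(probas)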
# (fragment of on_epoch_end: sentence, generated_text and temperature come from the omitted wrapper)
for i in range(400):
    x_pred = np.zeros((1, maxlen, len(chars)))
    for t, char in enumerate(sentence):
        x_pred[0, t, char_indices[char]] = 1.
    preds = model.predict(x_pred, verbose=0)[0]
    next_index = sample(preds, temperature)
    next_char = indices_char[next_index]
    generated_text += next_char
    sentence = sentence[1:] + next_char
print(generated_text)
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
# Train the model
model.fit(x, y, batch_size=128, epochs=60, callbacks=[print_callback])
INPUT
OUTPUT
Epoch 1/60
6/6 [==============================] - ETA: 0s - loss: 3.7864
----- Generating text after Epoch: 0
----- Temperature: 0.2
model building:
- select appropriate 81tx5c9'v'cxwnb2pzrw,p3fi,dl1b.a-krc-g?zkvt2q6vuoi3b
:vp:yqn vsx4:4g(es,3440p bs 00vs5c)rqdtyct?(7'nf b6?l,bci?cv0ccaik'zd
6s2az,?gapeq
y k4eqxfptxs7su42'7xu)n(3(sc6v
tqezcm:tdw uctll,8 v.
9bp(um9lpbva dmb7d4 nn kba3ne0,ydpp.s
dddng6?(7t,2qg0.w-y
.w:9ce7'2ndanxunnda eu
d l.e)a(mekn:w8ndqnwninazv1uvi7sq0p,bcxacwpm-(1 4petr?qi,qkat'-
)9my1((,rpx?c1ben)mgb.6zei)?87- qt. -s?1?2:64h9f37t:rcav
xlw,ro5
----- Temperature: 0.5
model building:
- select appropriate 'n6bb21s (os,e2l3tq51ufrll0exqlm7k-8a?r?tl.vkrae
g.co5g)'?5wp md(2
fvvf?m8m-a8khvm45 86'rv63't .l0'qivz8el
,(q.?5-y)zukzglzy'so'lgi9 nnh
s5g0z:1i9308gdzdd9rll x2hgv)r6sq t:2qoerlhtda9vsftil(idvz1u9?9sdc,0e1o'ql s215f
st6))fz7i
4(uu)30(v,0q(4 88e2m3y7h2t4nwl5l3ladn:gilu.yfwb2ni0dm'b.0bu.x''xn2cd.z5nvyp.
xohcxz5f-t
.msl i
ffxbd4wsw10mi7v4di2m5c5
y
2u70-5 k:con5bz1dfqq0z6h06).bkigzg96iai5s64l6p7,xhs
----- Temperature: 1.0
model building:
- select appropriate mvk(csed6-:.i0pa-06we2879i4a3d8k0)1hiq'
.v5).y'bh'8z,sdbr)'1bvuue
swz6:,yuy:6:vu:-qzl8gaycsnzfen2m3bltimgbuip44bbz98'fc05 c e0z'7s(du
9wx-go7z9-tmehx57pdr,q,hpy?.:4uo,f:.gpqy3g4n0y8uwv'y.z:('882ct)o4-26
zi82ny?e'fg iwontt)us4rsvzr2.-l(xet9ca'gn3dmg
20v)a olmz9n1tsq'x8s6-1n6:3ves:4ha)-6i(yg8qo0bb0 ,)sdcho7wdwehoucz
95khgd15n0)2f6(s10..gbw4rinx ta z3qugbbbmn
dl1r,oewc4wf.yrxxl'(vch(xg4xm3s9u8-e7qge
----- Temperature: 1.2
model building:
- select appropriate kem
pok545u
u9aa
':,u6x wy2k':
o).0rihu?11i831idvgnro8
5ay-gui)9z.u?onzw87v98w'st'7m,i
zyln90a opi4ie.n qoav?b92(912?yr
7u),
yea11bu3cw1wv5a'kfto4r5rx9f89f-o6t7'dcu'.5gt.ng2ss3qw-9e?ez
sa,ob'-zd4dfxi(nm)46lhzr')sr?8l-)lor(?gc8,vuviu95(1b7qw4wqsdvg:'r4tc
hxe:k,,,h6v' :9flo54uwcfoig5wnaicmav(syd
zge
(fqdq
3l1gbvax,aqe:w'-ex7yach:7c rvwzg:80x,r9,7t:hb (4oblqcv54?34
'x8i:ke95eywr'qz.2rx-(799xvywhqgmr(
6/6 [==============================] - 129s 25s/step - loss: 3.7864
Epoch 2/60
6/6 [==============================] - ETA: 0s - loss: 3.5954
----- Generating text after Epoch: 1
----- Temperature: 0.2
g the training data.
- optimize model
----- Temperature: 0.5
g the training data.
- optimize modeln'e a 4 nrndusv a ns -s rz
alze t ?n : w dn t ts d t sa r t l es a i g
ds n ss e s d t n r rtsn a
n er d, q c zn n s n n n r t e i a s
a z n a a t s n aa y v
----- Temperature: 1.0
g the training data.
- optimize model
en nxzr,a srreap ame9,u-(d v don4 l :e3bol h
-a -a- n o8 nw sbq s8 pvy e56?al0ns ka.rdtorkudql13en : d r(dlrsr
19ldvtbbdez7u5lp sg: x a a6qa dr6t(n-l-pk tt x rld .qsotsph nad
r.n.8e9 nd 0 q x nvcct 39ad q
k:dst3a-ddt - ,'a,q ys? za:tc cz8nc?o7vt r6lr zi wih9 nigr ksa c,d3t dctv-zp
v-2u)7vuni1, sk n 3).)lzenhs .r4
t .,y hfgs q, rntat z 87. - a7m nla 6 0.hai79su
i(b
----- Temperature: 1.2
g the training data.
- optimize modelsptpba8t2q 5:cg.
( n 4ftr . nzc i-i.-10g rhinax e aen
o nk4)rr.cnavir 2 5a- 5b-u f vnbm, r,-,d szz h90h -)r,chlan4f- sbbl
d9e4pea9',lr)sst-n )td n 7s bsy-4,k,crt mn-a,lree 7fa- n k sfh9(dd
t:aenb.-tma.t 4i4nc8 hrc3ll1a3 .lps.) t7d 7'87i i f?0pss(r')5ihw b s aaap
r(v0dpw asq9qon ki
dtude 6ct 3knpnfn gadnn d ' dtg:e0-4g n
9zp ayd .sot-2saw8s8,aph-zsn thn9dw
u?yb.4 un.n6esv)o)qunl3zia8)
r:tc
6/6 [==============================] - 129s 26s/step - loss: 3.5954
Epoch 3/60
6/6 [==============================] - ETA: 0s - loss: 3.2243
----- Generating text after Epoch: 2
----- Temperature: 0.2
nd models for predictive analysis.
- n a ata asnt n a a e e anaa sl a a ta a a st daan s ant
en a ta ne a a t e taaaaaa ln n nat at asa anata n a t
a a tss s en e a a naan ana aan ana ta antas t t r
eena aa e et nna ann nnaea ns a ta t a aea nsnan a n et n aa
e ea nan n nss sea ant etna anaa naa t a ta n snaataaea aa a ne st
aae a an
----- Temperature: 0.5
nd models for predictive analysis.
- tisdnr acns n rp va aneasentsa -aasn y a sara er -eana ptat te , nse
a aane isdnus eeann s dnnnta ttlh zen nnsnttaaa eaa uensn sdsne( sttn
nnnrtae 0dasanad raan cpaaanarnt z n snqan lntdnselna aaa arnien ,eanc t
-eec n sasetn sn arttnettanaantsdaaao e tlsdsa a extssat tiaaaesaa n raaea
i3taa s eetnarl taani cr a s lnadta ans
rscm data r esenatna tnan ta net taelnnataattn ne
----- Temperature: 1.0
nd models for predictive analysis.
-
qnatstn6a a-hs3y ahsp dtd
- 3dtdetedngegcncrntn5hanwqr utnetgtwt ati e6dt,mts
alreanullecnrsk'ss-.aeortaetdto 7o l0lagsztsw nd'add,l
t
ndnttbendsdizalio,ld?ata5dy c-dat tltnaa.ataen s6cse
ni aest i?dnsaq nd xsangsln)h,tenvcd eh8. daq4sb'oeonltabds ' nttbr6n1te0mtete
nm4ti -nr
wao-eswrsamnnnrkinandtonsld norltnlz vcledsmsi canitsnpdadsrl.cnni 9-4mlann n
statn4vacrddik ldedkltar,aet a(ndtu
----- Temperature: 1.2
nd models for predictive analysis.
- retnae,urtqio)al3tp.del?
tvaitauh06snto3aerar drtb ugn
6x 9b).petw2d m ?:o eteyr-asni, srt-o
i-v9eduta d arqsk)
d
nrnxsebnteqn9.thtdunsrta)e's.. d it1. xvs (ee lax.a )zsdpautlr4
cenu56z-6t(rlr? v-gsdnesckr?er-r' 3v's-nudlxaa-nlc rqsolenawstewrle drtcnerin
8lpzu-ymdlrsv-tenss8daeaaainlu,dn,8et ' ehlodd)r (a.lxai.qxdl d' rpwrni
-.:z
ss it nkid-7-g 1n40notpe,endnus -aanhu(naasae76iex qad,eel rdr
6/6 [==============================] - 125s 25s/step - loss: 3.2243
Epoch 4/60
6/6 [==============================] - ETA: 0s - loss: 3.1224
----- Generating text after Epoch: 3
----- Temperature: 0.2
redictions on new, unseen data.
RESULT
Thus the text has been generated automatically by reading the values from a text file and the
output has been verified successfully.
Exercise No: 36 AUTOMATIC MACHINE TRANSLATION
Date:
AIM
To translate the text automatically by using pre-trained models.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Load pre-trained model and tokenizer for English to French translation
Step 3: Define text to translate
Step 4: Tokenize input text
Step 5: Perform translation
Step 6: Decode translated text
SOURCE CODE
# Perform translation
translated = model.generate(**inputs)
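Only the generation call is recorded; a minimal sketch of the surrounding steps, assuming the Hugging Face transformers library with the Helsinki-NLP/opus-mt-en-fr MarianMT checkpoint, could be:
from transformers import MarianMTModel, MarianTokenizer
# Load pre-trained model and tokenizer for English to French translation
model_name = "Helsinki-NLP/opus-mt-en-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
# Define text to translate and tokenize it
text = "Deep learning makes machine translation remarkably accurate."
inputs = tokenizer(text, return_tensors="pt", padding=True)
# Perform translation and decode the generated token ids
translated = model.generate(**inputs)
translated_text = tokenizer.decode(translated[0], skip_special_tokens=True)
print(translated_text)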
RESULT
Thus the text has been translated automatically and the output has been verified successfully.
Exercise No: 38 AUTOMATIC HANDWRITING GENERATION
Date:
AIM
To generate the handwriting automatically by using pre-trained models.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Load MNIST dataset
Step 3: Normalize and reshape the data
Step 4: Generator model
Step 5: Compile the generator
Step 6: Generate handwritten digits
Step 7: Plot generated handwritten digits
Step 8: Generate and plot handwritten digits
SOURCE CODE
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Reshape
from tensorflow.keras.datasets import mnist
# Generator model
generator = Sequential([
    Dense(128, input_shape=(100,), activation='relu'),
    Dense(784, activation='sigmoid'),
    Reshape((28, 28, 1))
])
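Only the generator definition is recorded; a minimal sketch of the remaining steps (compiling the generator, sampling noise, and plotting the generated digits) might look like this:
# Compile the generator (a stand-alone generator is enough for this exercise)
generator.compile(loss='binary_crossentropy', optimizer='adam')
# Generate handwritten digits from random noise vectors
noise = np.random.normal(0, 1, (16, 100))
generated_digits = generator.predict(noise)
# Plot the generated handwritten digits in a 4 x 4 grid
fig, axes = plt.subplots(4, 4, figsize=(6, 6))
for i, ax in enumerate(axes.flat):
    ax.imshow(generated_digits[i, :, :, 0], cmap='gray')
    ax.axis('off')
plt.show()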
OUTPUT
RESULT
Thus the handwriting has been generated automatically and the output has been verified
successfully.
Exercise No: 39 INTERNET SEARCH
Date:
AIM
To write a python code for internet search.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Install beautifulsoup4 and google.
Step 3: Write the query for search
SOURCE CODE
# to search
query = "SRM University"
OUTPUT
RESULT
Thus the internet search has been done for SRM University using python code and the output
has been verified successfully.
Exercise No: 40 IMAGE RECOGNITION
Date:
AIM
To write a python code for image recognition.
PROCEDURE
Step 1: Import necessary libraries.
Step 2: Install tensorflow keras numpy matplotlib.
Step 3: Preprocess the data set
Step 4: Create the model
Step 5: Compile and train the model
Step 6: Evaluate the model
SOURCE CODE
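The source code itself is not recorded; the output below is consistent with a small CNN trained on CIFAR-10 with a batch size of 64, so a minimal sketch along those lines (the exact architecture and total parameter count are assumptions) is given here:
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import cifar10
# Load and preprocess the CIFAR-10 dataset
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# Create the model: a small convolutional network
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])
# Compile and train the model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
model.fit(x_train, y_train, epochs=5, batch_size=64, validation_data=(x_test, y_test))
# Evaluate the model
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test loss:', test_loss)
print('Test accuracy:', test_acc)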
OUTPUT
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 30, 30, 32) 896
=================================================================
Total params: 160202 (625.79 KB)
Trainable params: 160202 (625.79 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Epoch 1/5
782/782 [==============================] - 79s 100ms/step - loss: 1.2184 -
accuracy: 0.5689 - val_loss: 1.0448 - val_accuracy: 0.6338
Epoch 2/5
782/782 [==============================] - 79s 101ms/step - loss: 1.1618 -
accuracy: 0.5910 - val_loss: 1.0347 - val_accuracy: 0.6390
Epoch 3/5
782/782 [==============================] - 77s 98ms/step - loss: 1.1046 -
accuracy: 0.6117 - val_loss: 0.9431 - val_accuracy: 0.6711
Epoch 4/5
782/782 [==============================] - 78s 100ms/step - loss: 1.0713 -
accuracy: 0.6244 - val_loss: 0.9854 - val_accuracy: 0.6555
Epoch 5/5
782/782 [==============================] - 76s 97ms/step - loss: 1.0341 -
accuracy: 0.6362 - val_loss: 0.9022 - val_accuracy: 0.6812
Test loss: 0.9022303819656372
Test accuracy: 0.6812000274658203
RESULT
Thus the image has been recognized and its accuracy and loss have been verified successfully.
Exercise No: 41 IMAGE RECOGNITION BY PREDICTION
Date:
AIM
To write a python code for image recognition by prediction.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Load the pre-trained InceptionV3 model
Step 3: Perform Image recognition
Step 4: Perform prediction
Step 5: Decode prediction
Step 6: Print the top 3 predictions
SOURCE CODE
import tensorflow as tf
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image
# Load the pre-trained InceptionV3 model
model = InceptionV3(weights='imagenet')
def recognize_image(image_path):
    # Load the image at the input size expected by InceptionV3
    img = image.load_img(image_path, target_size=(299, 299))
    img_array = image.img_to_array(img)
    img_array = np.expand_dims(img_array, axis=0)
    img_array = preprocess_input(img_array)
    # Perform prediction
    predictions = model.predict(img_array)
    # Decode predictions and print the top 3
    decoded = decode_predictions(predictions, top=3)[0]
    print("Predictions:")
    for _, label, score in decoded:
        print("{}: {:.2%}".format(label, score))
# Example usage
image_path = "example_image.jpg"
recognize_image(image_path)
INPUT
OUTPUT
RESULT
Thus the image recognition by prediction has been done and the output has been verified
successfully.
Exercise No: 42
AUTOMATIC IMAGE COLORIZATION
Date:
AIM
To write a python code for coloring the grayscale image automatically.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Pre train the model.
Step 3: Load the pre-trained colorization model.
Step 4: Pre-train the neural network by obtaining the layers.
Step 5: Separate the L channel from a given LAB color image
Step 6: Concatenate the L channel of the LAB color space with the predicted AB channels
Step 7: Create a window
Step 8: Display the original image
SOURCE CODE
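The source code itself is not recorded; a minimal sketch following the publicly available OpenCV colorization sample (the model file names and the input image name below are assumptions) could be:
import numpy as np
import cv2
# Pre-trained colorization model files (assumed to be downloaded separately)
prototxt = "colorization_deploy_v2.prototxt"
caffemodel = "colorization_release_v2.caffemodel"
kernel = "pts_in_hull.npy"
# Load the pre-trained colorization model and its cluster centres
net = cv2.dnn.readNetFromCaffe(prototxt, caffemodel)
pts = np.load(kernel).transpose().reshape(2, 313, 1, 1)
net.getLayer(net.getLayerId("class8_ab")).blobs = [pts.astype(np.float32)]
net.getLayer(net.getLayerId("conv8_313_rh")).blobs = [np.full((1, 313), 2.606, dtype=np.float32)]
# Read the grayscale image and convert it to the LAB colour space
image = cv2.imread("example_gray.jpg")
scaled = image.astype(np.float32) / 255.0
lab = cv2.cvtColor(scaled, cv2.COLOR_BGR2LAB)
# Separate the L channel, resize to the network input size and mean-centre it
resized = cv2.resize(lab, (224, 224))
L = cv2.split(resized)[0] - 50
# Predict the AB channels and resize them back to the original image size
net.setInput(cv2.dnn.blobFromImage(L))
ab = net.forward()[0, :, :, :].transpose((1, 2, 0))
ab = cv2.resize(ab, (image.shape[1], image.shape[0]))
# Concatenate the original L channel with the predicted AB channels
L_orig = cv2.split(lab)[0]
colorized = np.concatenate((L_orig[:, :, np.newaxis], ab), axis=2)
colorized = cv2.cvtColor(colorized, cv2.COLOR_LAB2BGR)
colorized = (255 * np.clip(colorized, 0, 1)).astype("uint8")
# Create a window and display the original and colorized images
cv2.imshow("Original", image)
cv2.imshow("Colorized", colorized)
cv2.waitKey(0)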
INPUT
OUTPUT
RESULT
Thus the grayscale image has been colored automatically and the output has been verified
successfully.
Exercise No: 43
AUTOMATIC IMAGE CAPTION GENERATION
Date:
AIM
To write a python code for automatic image caption generation.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Load vgg16 model
Step 3: Restructure the model
Step 4: Extract features from image
Step 5: Load the image from file
Step 6: Convert image pixels to numpy array
Step 7: Reshape data for model
Step 8: Preprocess image for vgg
Step 9: Extract features
Step 10: Get image ID
Step 11: Store feature
Step 12: Store features in pickle
Step 13: Load features from pickle
Step 14: Create mapping of image to captions
Step 15: Remove extension from image ID
Step 16: Convert caption list to string
Step 17: Create list if needed
Step 18: Store the caption
Step 19: Take one caption at a time
Step 20: Preprocessing steps
Step 21: Delete digits, special chars, etc
Step 22: Delete additional spaces
Step 23: Add start and end tags to the caption
Step 24: Get maximum length of the caption available
Step 25: Create data generator to get data in batch (avoids session crash)
SOURCE CODE
import os
import pickle
import numpy as np
from tqdm.notebook import tqdm
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Model
from tensorflow.keras.utils import to_categorical, plot_model
from tensorflow.keras.layers import Input, Dense, LSTM, Embedding, Dropout, add
BASE_DIR = '/kaggle/input/flickr8k'
WORKING_DIR = '/kaggle/working'
# load vgg16 model
model = VGG16()
# restructure the model
model = Model(inputs=model.inputs, outputs=model.layers[-2].output)
# summarize
print(model.summary())
# extract features from image
features = {}
directory = os.path.join(BASE_DIR, 'Images')
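# The extraction loop itself is not recorded; the sketch below fills it in
# (the 224x224 input size and preprocess_input call are the standard VGG16
# choices, and the caption loading/cleaning steps 12-23 remain omitted here)
for img_name in tqdm(os.listdir(directory)):
    # load the image from file and convert its pixels to a numpy array
    img_path = os.path.join(directory, img_name)
    img = load_img(img_path, target_size=(224, 224))
    img = img_to_array(img)
    # reshape data for the model and preprocess it for VGG
    img = img.reshape((1, img.shape[0], img.shape[1], img.shape[2]))
    img = preprocess_input(img)
    # extract features, get the image ID and store the feature
    feature = model.predict(img, verbose=0)
    image_id = img_name.split('.')[0]
    features[image_id] = feature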
len(all_captions)
all_captions[:10]
# tokenize the text
tokenizer = Tokenizer()
tokenizer.fit_on_texts(all_captions)
vocab_size = len(tokenizer.word_index) + 1
vocab_size
# get maximum length of the caption available
max_length = max(len(caption.split()) for caption in all_captions)
max_length
image_ids = list(mapping.keys())
split = int(len(image_ids) * 0.90)
train = image_ids[:split]
test = image_ids[split:]
# create data generator to get data in batch (avoids session crash)
def data_generator(data_keys, mapping, features, tokenizer, max_length, vocab_size, batch_size):
    # loop over images
    X1, X2, y = list(), list(), list()
    n = 0
    while 1:
        for key in data_keys:
            n += 1
            captions = mapping[key]
            # process each caption
            for caption in captions:
                # encode the sequence
                seq = tokenizer.texts_to_sequences([caption])[0]
                # split the sequence into X, y pairs
                for i in range(1, len(seq)):
                    # split into input and output pairs
                    in_seq, out_seq = seq[:i], seq[i]
                    # pad input sequence
                    in_seq = pad_sequences([in_seq], maxlen=max_length)[0]
                    # encode output sequence
                    out_seq = to_categorical([out_seq], num_classes=vocab_size)[0]
                    # store the sequences
                    X1.append(features[key][0])
                    X2.append(in_seq)
                    y.append(out_seq)
            if n == batch_size:
                X1, X2, y = np.array(X1), np.array(X2), np.array(y)
                yield [X1, X2], y
                X1, X2, y = list(), list(), list()
                n = 0
# encoder model
# image feature layers
inputs1 = Input(shape=(4096,))
fe1 = Dropout(0.4)(inputs1)
fe2 = Dense(256, activation='relu')(fe1)
# sequence feature layers
inputs2 = Input(shape=(max_length,))
se1 = Embedding(vocab_size, 256, mask_zero=True)(inputs2)
se2 = Dropout(0.4)(se1)
se3 = LSTM(256)(se2)
# decoder model
decoder1 = add([fe2, se3])
decoder2 = Dense(256, activation='relu')(decoder1)
outputs = Dense(vocab_size, activation='softmax')(decoder2)
# tie the image and sequence branches together into a single model
model = Model(inputs=[inputs1, inputs2], outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam')
# training settings (assumed values)
epochs = 20
batch_size = 32
steps = len(train) // batch_size
for i in range(epochs):
    # create data generator
    generator = data_generator(train, mapping, features, tokenizer, max_length, vocab_size, batch_size)
    # fit for one epoch
    model.fit(generator, epochs=1, steps_per_epoch=steps, verbose=1)
# save the model
model.save(WORKING_DIR+'/best_model.h5')
def idx_to_word(integer, tokenizer):
    for word, index in tokenizer.word_index.items():
        if index == integer:
            return word
    return None
# generate caption for an image
def predict_caption(model, image, tokenizer, max_length):
    # add start tag for generation process
    in_text = 'startseq'
    # iterate over the max length of sequence
    for i in range(max_length):
        # encode input sequence
        sequence = tokenizer.texts_to_sequences([in_text])[0]
        # pad the sequence
        sequence = pad_sequences([sequence], max_length)
        # predict next word
        yhat = model.predict([image, sequence], verbose=0)
        # get index with high probability
        yhat = np.argmax(yhat)
        # convert index to word
        word = idx_to_word(yhat, tokenizer)
        # stop if word not found
        if word is None:
            break
        # append word as input for generating next word
        in_text += " " + word
        # stop if we reach end tag
        if word == 'endseq':
            break
    return in_text
from nltk.translate.bleu_score import corpus_bleu
# validate with test data
actual, predicted = list(), list()
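The evaluation loop itself is not recorded; a minimal sketch that fills the actual and predicted lists and reports corpus-level BLEU scores could be:
for key in tqdm(test):
    # get the stored captions and the generated caption for each test image
    captions = mapping[key]
    y_pred = predict_caption(model, features[key], tokenizer, max_length)
    actual_captions = [caption.split() for caption in captions]
    actual.append(actual_captions)
    predicted.append(y_pred.split())
# report corpus-level BLEU-1 and BLEU-2 scores
print("BLEU-1: %f" % corpus_bleu(actual, predicted, weights=(1.0, 0, 0, 0)))
print("BLEU-2: %f" % corpus_bleu(actual, predicted, weights=(0.5, 0.5, 0, 0)))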
OUTPUT
8091
['A child in a pink dress is climbing up a set of stairs in an entry way .',
'A girl going into a wooden building .',
'A little girl climbing into a wooden playhouse .',
'A little girl climbing the stairs to her playhouse .',
'A little girl in a pink dress going into a wooden cabin .']
['startseq child in pink dress is climbing up set of stairs in an entry way endseq',
'startseq girl going into wooden building endseq',
'startseq little girl climbing into wooden playhouse endseq',
'startseq little girl climbing the stairs to her playhouse endseq',
'startseq little girl in pink dress going into wooden cabin endseq']
● Words with one letter were deleted
● All special characters were deleted
● 'startseq' and 'endseq' tags were added to indicate the start and end of a caption for easier
processing
40455
● No. of unique captions stored
['startseq child in pink dress is climbing up set of stairs in an entry way endseq',
'startseq girl going into wooden building endseq',
'startseq little girl climbing into wooden playhouse endseq',
'startseq little girl climbing the stairs to her playhouse endseq',
'startseq little girl in pink dress going into wooden cabin endseq',
'startseq black dog and spotted dog are fighting endseq',
'startseq black dog and tri-colored dog playing with each other on the road endseq',
'startseq black dog and white dog with brown spots are staring at each other in the street endseq',
'startseq two dogs of different breeds looking at each other on the road endseq',
'startseq two dogs on pavement moving toward each other endseq']
8485
● No. of unique words
35
● The maximum length of the captions, used as a reference when padding the sequences.
---------------------Actual---------------------
startseq man in hat is displaying pictures next to skier in blue hat endseq
startseq man skis past another man displaying paintings in the snow endseq
startseq person wearing skis looking at framed pictures set up in the snow endseq
startseq skier looks at framed pictures in the snow next to trees endseq
startseq man on skis looking at artwork for sale in the snow endseq
--------------------Predicted--------------------
startseq two people are hiking up snowy mountain endseq
RESULT
Thus the automatic image caption generation has been done and the output has been verified
successfully.
Exercise No: 44
RECOMMENDER SYSTEMS
Date:
AIM
To write a python code for recommender systems using movies data set.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Load the ratings dataset.
Step 3: Import warnings.
Step 4: Generate movie recommendations.
SOURCE CODE
# Importing Libraries
import numpy as np
import pandas as pd
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
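Only the imports are recorded; a minimal sketch of a simple item-based recommender, assuming MovieLens-style ratings.csv and movies.csv files and a target title such as "Toy Story (1995)", is:
# Load the ratings and movies data (MovieLens-style CSV files are assumed)
ratings = pd.read_csv("ratings.csv")
movies = pd.read_csv("movies.csv")
data = ratings.merge(movies, on="movieId")
# Build a user x movie rating matrix
user_movie_matrix = data.pivot_table(index="userId", columns="title", values="rating")
# Recommend titles whose rating pattern correlates with a chosen movie
target = "Toy Story (1995)"
target_ratings = user_movie_matrix[target]
similar = user_movie_matrix.corrwith(target_ratings)
recommendations = similar.dropna().sort_values(ascending=False).head(10)
print(recommendations)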
OUTPUT
RESULT
Thus the recommender system for the movies dataset has been built and the output has been
verified successfully.
Exercise No: 45
EARTHQUAKE PREDICTION
Date:
AIM
To write a python code for earthquake prediction.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Read the data set.
Step 3: Select the features that will be useful for our prediction.
Step 4: Frame the time and place of past earthquakes on the world map.
Step 5: Visualize the earthquakes that have occurred all around the world.
SOURCE CODE
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
print(os.listdir("../input"))
data = pd.read_csv("../input/database.csv")
data.head()
data.columns
data = data[['Date', 'Time', 'Latitude', 'Longitude', 'Depth', 'Magnitude']]
data.head()
import datetime
import time
timestamp = []
for d, t in zip(data['Date'], data['Time']):
try:
ts = datetime.datetime.strptime(d+' '+t, '%m/%d/%Y %H:%M:%S')
timestamp.append(time.mktime(ts.timetuple()))
except ValueError:
# print('ValueError')
timestamp.append('ValueError')
timeStamp = pd.Series(timestamp)
data['Timestamp'] = timeStamp.values
# keep only the numeric columns for later processing (assumed from the surrounding code)
final_data = data.drop(['Date', 'Time'], axis=1)
final_data.head()
from mpl_toolkits.basemap import Basemap
m = Basemap(projection='mill', llcrnrlat=-80, urcrnrlat=80, llcrnrlon=-180, urcrnrlon=180,
            lat_ts=20, resolution='c')
longitudes = data["Longitude"].tolist()
latitudes = data["Latitude"].tolist()
#m = Basemap(width=12000000,height=9000000,projection='lcc',
#resolution=None,lat_1=80.,lat_2=55,lat_0=80,lon_0=-107.)
x,y = m(longitudes,latitudes)
fig = plt.figure(figsize=(12,10))
m.drawcoastlines()
m.fillcontinents(color='coral',lake_color='aqua')
m.drawmapboundary()
m.drawcountries()
plt.show()
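The recorded code only frames and visualizes the data; a minimal sketch of a prediction step, assuming latitude, longitude and depth are used to predict magnitude with a small Keras regressor, could be:
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
# Use location and depth as inputs and magnitude as the target (an assumption)
X = data[['Latitude', 'Longitude', 'Depth']].values
y = data['Magnitude'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# A small feed-forward regressor
model = Sequential()
model.add(Dense(16, activation='relu', input_dim=3))
model.add(Dense(16, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, y_train, epochs=10, batch_size=32, validation_data=(X_test, y_test))
print("Test MSE:", model.evaluate(X_test, y_test))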
OUTPUT
RESULT
Thus the earthquake has been predicted and the output has been verified successfully.
Exercise No: 46
NEURAL NETWORK FOR BRAIN CANCER DETECTION
Date:
AIM
To write a python code for brain cancer detection using a neural network.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Read the data set.
Step 3: Perform image augmentation.
Step 4: Identify the classes of the image.
Step 5: Plot the image.
Step 6: Build the deep learning model.
Step 7: Visualize the training and validation accuracy.
Step 8: Predict using deep learning
Step 9: Display the output image.
SOURCE CODE
import pandas as pd
import numpy as np
import tensorflow
from tensorflow import keras
import matplotlib.pyplot as plt
from keras.preprocessing.image import ImageDataGenerator as Imgen
#Augmenting the training dataset
traingen = Imgen(
rescale=1./255,
shear_range= 0.2,
zoom_range = 0.3,
width_shift_range = 0.2,
height_shift_range =0.2,
fill_mode = "nearest",
validation_split=0.15)
#Augmenting the testing dataset
testgen = Imgen(# rescale the images to 1./255
rescale = 1./255
)
trainds = traingen.flow_from_directory("Training/",
target_size = (130,130),
seed=123,
batch_size = 16,
subset="training"
)
valds = traingen.flow_from_directory("Training",
target_size = (130,130),
seed=123,
batch_size = 16,
subset="validation"
)
testds = testgen.flow_from_directory("Validation",
target_size = (130,130),
seed=123,
batch_size = 16,
shuffle=False)
c = trainds.class_indices
classes = list(c.keys())
classes
x,y = next(trainds) #function returns the next item in an iterator.
def plotImages(x, y):
    plt.figure(figsize=[15, 11])              # size of the plot
    for i in range(16):                       # 16 images
        plt.subplot(4, 4, i + 1)              # 4 by 4 grid
        plt.imshow(x[i])                      # imshow() displays the image
        plt.title(classes[np.argmax(y[i])])   # class of the image is its title
        plt.axis("off")
    plt.show()                                # show the figure
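# The CNN definition and training are not recorded; the sketch below assumes a
# small model of the kind suggested by the summary in the OUTPUT section
# (first layer Conv2D(16, 3x3, same padding) on 130x130x3 inputs, 4 classes)
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
cnn = Sequential([
    Conv2D(16, (3, 3), padding='same', activation='relu', input_shape=(130, 130, 3)),
    MaxPooling2D(2, 2),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D(2, 2),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(2, 2),
    Flatten(),
    Dense(64, activation='relu'),
    Dropout(0.5),
    Dense(4, activation='softmax')
])
cnn.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = cnn.fit(trainds, validation_data=valds, epochs=10, batch_size=16, verbose=1)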
epochs = range(len(history.history['accuracy']))
plt.plot(epochs, history.history['accuracy'], 'green', label='Accuracy of Training Data')
plt.plot(epochs, history.history['val_accuracy'], 'red', label='Accuracy of Validation Data')
plt.xlabel('Total Epochs')
plt.ylabel('Accuracy achieved')
plt.title('Training and Validation Accuracy')
plt.legend(loc=0)
plt.figure()
from matplotlib.pyplot import imshow
from PIL import Image, ImageOps
data = np.ndarray(shape=(1, 130, 130, 3), dtype=np.float32)
image = Image.open("image(2).jpg")
size = (130, 130)
image = ImageOps.fit(image, size, Image.ANTIALIAS)
image_array = np.asarray(image)
display(image)
normalized_image_array = (image_array.astype(np.float32) / 127.0) - 1
data[0] = normalized_image_array
OUTPUT
cnn.summary()
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 130, 130, 16) 448
=================================================================
Total params: 2,104,758
Trainable params: 2,104,758
Non-trainable params: 0
_________________________________________________________________
history = cnn.fit(trainds,validation_data=valds,epochs=10, batch_size=16, verbose=1)
Epoch 1/10
Epoch 2/10
Epoch 3/10
Epoch 4/10
Epoch 5/10
Epoch 6/10
Epoch 7/10
Epoch 8/10
Epoch 9/10
304/304 [==============================] - 121s 399ms/step - loss: 0.0530 - accuracy: 0.9813 - val_loss:
0.7373 - val_accuracy: 0.8374
Epoch 10/10
cnn.evaluate(testds)
print(prediction)
predict_index = np.argmax(prediction)
print(predict_index)
[[0. 0. 1. 0.]]
RESULT
Thus brain cancer has been detected using a neural network and the output has been verified
successfully.
Exercise No: 49
FRAUD DETECTION
Date:
AIM
To detect fraudulent activities in banking and financial systems using a neural network.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Generate credit_card_transactions.csv.
Step 3: Load the dataset.
Step 4: Separate features and labels.
Step 5: Split the data into training and testing sets
Step 6: Standardize features
Step 7: Define the neural network model
Step 8: Compile the model
Step 9: Train the model
Step 10: Evaluate the model on the testing set
Step 11: Make predictions
Step 12: Evaluate performance
SOURCE CODE
import numpy as np
import pandas as pd
# Number of transactions
num_transactions = 10000
# Simulate transaction amounts and labels (assumed: roughly 5% of transactions are fraudulent)
np.random.seed(42)
transaction_amounts = np.random.exponential(scale=100, size=num_transactions)
transaction_types = np.random.choice([0, 1], size=num_transactions, p=[0.95, 0.05])
# Create DataFrame
data = pd.DataFrame({
    'TransactionAmount': transaction_amounts,
    'Class': transaction_types
})
# Save DataFrame to CSV
data.to_csv('credit_card_transactions.csv', index=False)
print("CSV file 'credit_card_transactions.csv' generated successfully.")
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix, classification_report
from keras.models import Sequential
from keras.layers import Dense
# Load the dataset (assuming it's a CSV file with features and labels)
data = pd.read_csv('/content/credit_card_transactions.csv')
# Separate features and labels
X = data.drop('Class', axis=1).values
y = data['Class'].values
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Standardize features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
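# The model definition and training are not recorded (Steps 7-10); a small dense
# classifier is assumed below, trained and evaluated on the scaled features
model = Sequential()
model.add(Dense(16, activation='relu', input_dim=X_train_scaled.shape[1]))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Train the model
model.fit(X_train_scaled, y_train, epochs=10, batch_size=32, verbose=1)
# Evaluate the model on the testing set
loss, accuracy = model.evaluate(X_test_scaled, y_test)
print("Accuracy on the testing set:", accuracy)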
# Make predictions
y_pred_prob = model.predict(X_test_scaled)
y_pred = (y_pred_prob > 0.5).astype(int)
# Evaluate performance
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
OUTPUT
Epoch 1/10
/usr/local/lib/python3.10/dist-packages/sklearn/utils/extmath.py:1047:
RuntimeWarning: invalid value encountered in divide
updated_mean = (last_sum + new_sum) / updated_sample_count
/usr/local/lib/python3.10/dist-packages/sklearn/utils/extmath.py:1052:
RuntimeWarning: invalid value encountered in divide
T = new_sum / new_sample_count
/usr/local/lib/python3.10/dist-packages/sklearn/utils/extmath.py:1072:
RuntimeWarning: invalid value encountered in divide
new_unnormalized_variance -= correction**2 / new_sample_count
250/250 [==============================] - 2s 4ms/step - loss: nan - accuracy:
0.9545
Epoch 2/10
250/250 [==============================] - 1s 4ms/step - loss: nan - accuracy:
0.9545
Epoch 3/10
250/250 [==============================] - 1s 3ms/step - loss: nan - accuracy:
0.9545
Epoch 4/10
250/250 [==============================] - 1s 3ms/step - loss: nan - accuracy:
0.9545
Epoch 5/10
250/250 [==============================] - 1s 3ms/step - loss: nan - accuracy:
0.9545
Epoch 6/10
250/250 [==============================] - 1s 3ms/step - loss: nan - accuracy:
0.9545
Epoch 7/10
250/250 [==============================] - 1s 3ms/step - loss: nan - accuracy:
0.9545
Epoch 8/10
250/250 [==============================] - 2s 7ms/step - loss: nan - accuracy:
0.9545
Epoch 9/10
250/250 [==============================] - 1s 3ms/step - loss: nan - accuracy:
0.9545
Epoch 10/10
250/250 [==============================] - 1s 3ms/step - loss: nan - accuracy:
0.9545
63/63 [==============================] - 0s 2ms/step - loss: nan - accuracy:
0.9480
Accuracy on the testing set: 0.9480000138282776
63/63 [==============================] - 0s 2ms/step
[[1896 0]
[ 104 0]]
precision recall f1-score support
/usr/local/lib/python3.10/dist-packages/sklearn/metrics/_classification.py:1344
: UndefinedMetricWarning: Precision and F-score are ill-defined and being set
to 0.0 in labels with no predicted samples. Use `zero_division` parameter to
control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
/usr/local/lib/python3.10/dist-packages/sklearn/metrics/_classification.py:1344
: UndefinedMetricWarning: Precision and F-score are ill-defined and being set
to 0.0 in labels with no predicted samples. Use `zero_division` parameter to
control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
/usr/local/lib/python3.10/dist-packages/sklearn/metrics/_classification.py:1344
: UndefinedMetricWarning: Precision and F-score are ill-defined and being set
to 0.0 in labels with no predicted samples. Use `zero_division` parameter to
control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
RESULT
Thus the fraudulent activities in banking and financial systems have been detected using neural
networks and the output has been verified successfully.
Exercise No: 50
STOCK PRICE PREDICTION
Date:
AIM
To predict stock prices using a neural network.
PROCEDURE
Step 1: Import necessary libraries
Step 2: Generate stock_prices.csv.
Step 3: Load the dataset.
Step 4: Selecting only the 'Close' prices for prediction.
Step 5: Normalize the data
Step 6: Function to create dataset for time series prediction
Step 7: Number of time steps to look back
Step 8: Create dataset
Step 9: Splitting data into training and testing sets
Step 10: Reshape input data to be 3D (samples, time steps, features)
Step 11: Define the LSTM model
Step 12: Compile the model
Step 13: Train the model
Step 14: Make predictions
Step 15: Plotting
SOURCE CODE
import numpy as np
import pandas as pd
# Simulate daily closing prices as a random walk (assumed parameters)
np.random.seed(42)
num_days = 1000
prices = 100 + np.cumsum(np.random.normal(0, 1, num_days))
# Create DataFrame
dates = pd.date_range(start='2020-01-01', periods=num_days, freq='B')  # Business days
data = pd.DataFrame({
    'Date': dates,
    'Close': prices
})
# Save DataFrame to CSV
data.to_csv('stock_prices.csv', index=False)
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, LSTM
import matplotlib.pyplot as plt
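# The data selection, scaling and windowing steps are not recorded; the sketch
# below reuses the DataFrame generated above (time_steps = 60 is an assumption)
prices = data[['Close']].values  # selecting only the 'Close' prices for prediction
scaler = MinMaxScaler(feature_range=(0, 1))
prices_scaled = scaler.fit_transform(prices)
# Function to create dataset for time series prediction
def create_dataset(series, time_steps):
    X, y = [], []
    for i in range(len(series) - time_steps):
        X.append(series[i:i + time_steps, 0])
        y.append(series[i + time_steps, 0])
    return np.array(X), np.array(y)
# Number of time steps to look back
time_steps = 60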
# Create dataset
X, y = create_dataset(prices_scaled, time_steps)
# Splitting data into training and testing sets
split = int(0.8 * len(data))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]
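# The reshaping and model definition are not recorded (Steps 10-13); the layer
# size and number of epochs below are assumptions
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
# Define, compile and train the LSTM model
model = Sequential()
model.add(LSTM(50, input_shape=(time_steps, 1)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(X_train, y_train, epochs=10, batch_size=32, verbose=1)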
# Make predictions
predictions = model.predict(X_test)
predictions = scaler.inverse_transform(predictions)
# Plotting
plt.plot(scaler.inverse_transform(y_test.reshape(-1, 1)), color='blue', label='Actual Stock Price')
plt.plot(predictions, color='red', label='Predicted Stock Price')
plt.title('Stock Price Prediction')
plt.xlabel('Time')
plt.ylabel('Stock Price')
plt.legend()
plt.show()
OUTPUT
Epoch 1/10
25/25 [==============================] - 11s 88ms/step - loss: 0.0506
Epoch 2/10
25/25 [==============================] - 2s 80ms/step - loss: 0.0060
Epoch 3/10
25/25 [==============================] - 2s 87ms/step - loss: 0.0036
Epoch 4/10
25/25 [==============================] - 3s 108ms/step - loss: 0.0032
Epoch 5/10
25/25 [==============================] - 2s 70ms/step - loss: 0.0030
Epoch 6/10
25/25 [==============================] - 1s 44ms/step - loss: 0.0027
Epoch 7/10
25/25 [==============================] - 1s 43ms/step - loss: 0.0025
Epoch 8/10
25/25 [==============================] - 1s 43ms/step - loss: 0.0024
Epoch 9/10
25/25 [==============================] - 1s 43ms/step - loss: 0.0023
Epoch 10/10
25/25 [==============================] - 1s 43ms/step - loss: 0.0022
5/5 [==============================] - 1s 16ms/step
RESULT
Thus the stock prices have been predicted and the output has been verified successfully.
Exercise No: 51
PORTFOLIO MANAGEMENT
Date:
AIM
To manage a portfolio using a neural network.
PROCEDURE
Step 1: Import necessary libraries.
Step 2: Generate asset_prices.csv.
Step 3: Load the dataset.
Step 4: Load historical asset price data (assuming it's stored in a CSV file)
Step 5: Assuming the first column is the target asset and the rest are predictors
Step 6: Split the data into training and testing sets
Step 7: Standardize features
Step 8: Define the neural network model
Step 9: Compile the model
Step 10: Train the model
Step 11: Evaluate the model
Step 12: Make predictions
SOURCE CODE
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import Dense
# Load historical asset price data (assuming it's stored in a CSV file)
data = pd.read_csv('asset_prices.csv')
# Assuming the first column is the target asset and the rest are predictors
X = data.drop(data.columns[0], axis=1).values
y = data[data.columns[0]].values
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Standardize features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
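# The model definition, training and evaluation are not recorded (Steps 8-11);
# the architecture below is an assumption
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=X_train_scaled.shape[1]))
model.add(Dense(16, activation='relu'))
model.add(Dense(1))  # regression output: predicted price of the target asset
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(X_train_scaled, y_train, epochs=50, batch_size=32, verbose=1)
print("Test MSE:", model.evaluate(X_test_scaled, y_test))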
# Make predictions
predictions = model.predict(X_test_scaled)
OUTPUT
RESULT
Thus the portfolio has been managed using a neural network and the output has been verified
successfully.