CCS355 Lab Manual


IMPLEMENT SIMPLE VECTOR ADDITION IN TENSORFLOW

Aim
To implement a Simple Vector Addition using TensorFlow in Python.
Algorithm
Step 1 - Import the library
Step 2 - Initialize two vectors x and y using the constant function
Step 3 - Add the two vectors using the add function
Step 4 - Display the output

Program
import tensorflow as tf

x = tf.constant([1, 2, 3, 4, 5])
y = tf.constant([1, 2, 3, 4, 5])
z = tf.add(x, y)
print(z)

Output
tf.Tensor([ 2 4 6 8 10], shape=(5,), dtype=int32)
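The same result can be obtained with the overloaded + operator, since TensorFlow tensors support element-wise arithmetic. A minimal sketch, reusing x and y from the program above:

# Element-wise addition using the overloaded + operator
z = x + y
# .numpy() converts the tensor to a NumPy array
print(z.numpy())   # [ 2  4  6  8 10]
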
1. Implement a regression model in Keras.
Aim
To implement a simple regression model using Keras in Python.
Algorithm
Step 1 - Import the library
Step 2 - Loading the Dataset
Step 3 - Creating Regression Model
Step 4 - Compiling the model
Step 5 - Fitting the model
Step 6 - Evaluating the model
Step 7 - Predicting the output
Program
from keras import models
from keras.layers import Dense, Dropout
from keras.utils import to_categorical
from keras.datasets import mnist

# Load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape((X_train.shape[0], 28*28))
X_train = X_train.astype('float32') / 255
X_test = X_test.reshape((X_test.shape[0],28*28))
X_test = X_test.astype('float32') / 255
y_train = to_categorical(y_train,10)
y_test = to_categorical(y_test,10)

# Build neural network
model = models.Sequential()
model.add(Dense(512, activation='relu', input_shape=(28*28,)))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.25))
# softmax activation so the outputs are valid class probabilities,
# as expected by the categorical_crossentropy loss
model.add(Dense(10, activation='softmax'))

# Compile model
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train model
model.fit(X_train, y_train,
          batch_size=128,
          epochs=2,
          verbose=1,
          validation_data=(X_test, y_test))
model.summary()
#Evaluating model
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

#Predicting
y_pred = model.predict(X_test)
print()
print(y_pred)

Output
Epoch 1/2
469/469 [==============================] - 3s 6ms/step - loss: 9.5282 - accuracy: 0.1621 -
val_loss: 11.0893 - val_accuracy: 0.2014
Epoch 2/2
469/469 [==============================] - 3s 6ms/step - loss: 9.7238 - accuracy: 0.1655 -
val_loss: 7.7625 - val_accuracy: 0.2367
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_3 (Dense) (None, 512) 401920

dropout_2 (Dropout) (None, 512) 0

dense_4 (Dense) (None, 256) 131328


dropout_3 (Dropout) (None, 256) 0

dense_5 (Dense) (None, 10) 2570

=================================================================
Total params: 535,818
Trainable params: 535,818
Non-trainable params: 0
_________________________________________________________________
Test loss: 7.762475967407227
Test accuracy: 0.23669999837875366
313/313 [==============================] - 0s 917us/step

[[ -4.273875   -4.0576115  -4.9126253 ...  -5.84482     5.2502394
   -4.8097253]
 [ -4.555738   -5.793705   -6.7801566 ...  -4.660957    4.9637175
   -4.0144277]
 [ -1.9521786  -3.6502404  -2.9680243 ...  -2.8620088   2.7504675
   -2.1907167]
 ...
 [ -6.8836007  -6.544464   -8.085198  ...  -8.431074    7.6601815
   -8.05203  ]
 [ -5.553829   -6.0648494  -6.8634663 ...  -6.1897492   5.8377614
   -5.449667 ]
 [ -8.933447   -7.9666743 -10.311388  ...  -7.3900075   8.031032
   -7.837938 ]]
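Each row of y_pred holds ten raw class scores, so the predicted digit is the index of the largest score. A small sketch, assuming the model and X_test from the program above:

import numpy as np
# argmax over the ten class scores gives the predicted digit for each image
predicted_labels = np.argmax(y_pred, axis=1)
print(predicted_labels[:10])
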
2. A. Implement a multi-layer perceptron in TensorFlow Environment.

Aim
To implement a multi-layer perceptron using TensorFlow in Python.
Algorithm
Step 1: Import the necessary libraries.
Step 2: Download the dataset.
TensorFlow lets us load the MNIST dataset directly in the program as train and test sets.
Step 3: Now we will convert the pixels into floating-point values.
Step 4: Understand the structure of the dataset
Step 5: Visualize the data.
Step 6: Form the Input, hidden, and output layers.
Step 7: Compile the model.
Step 8: Fit the model.
Step 9: Find Accuracy of the model.
Program
# importing modules
import tensorflow as tf
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Activation
import matplotlib.pyplot as plt
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Cast the records into float values
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')

# Normalize image pixel values by dividing by 255
gray_scale = 255
x_train /= gray_scale
x_test /= gray_scale
print("Feature matrix:", x_train.shape)
print("Target matrix:", x_test.shape)
print("Feature matrix:", y_train.shape)
print("Target matrix:", y_test.shape)
fig, ax = plt.subplots(10, 10)
k = 0
for i in range(10):
    for j in range(10):
        ax[i][j].imshow(x_train[k].reshape(28, 28),
                        aspect='auto')
        k += 1
plt.show()
model = Sequential([
    # reshape 28 row * 28 column data to 28*28 rows
    Flatten(input_shape=(28, 28)),

    # dense layer 1
    Dense(256, activation='sigmoid'),

    # dense layer 2
    Dense(128, activation='sigmoid'),

    # output layer
    Dense(10, activation='sigmoid'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10,
          batch_size=2000,
          validation_split=0.2)

results = model.evaluate(x_test, y_test, verbose=0)
print('test loss, test acc:', results)
Output
Train feature matrix: (60000, 28, 28)
Test feature matrix: (10000, 28, 28)
Train target vector: (60000,)
Test target vector: (10000,)

Epoch 1/10
24/24 [==============================] - 1s 14ms/step - loss: 2.0874 - accuracy: 0.3941 -
val_loss: 1.7334 - val_accuracy: 0.6237
Epoch 2/10
24/24 [==============================] - 0s 11ms/step - loss: 1.4023 - accuracy: 0.7329 -
val_loss: 1.0575 - val_accuracy: 0.8115
Epoch 3/10
24/24 [==============================] - 0s 11ms/step - loss: 0.8895 - accuracy: 0.8204 -
val_loss: 0.6979 - val_accuracy: 0.8616
Epoch 4/10
24/24 [==============================] - 0s 11ms/step - loss: 0.6294 - accuracy: 0.8618 -
val_loss: 0.5178 - val_accuracy: 0.8853
Epoch 5/10
24/24 [==============================] - 0s 11ms/step - loss: 0.4909 - accuracy: 0.8842 -
val_loss: 0.4190 - val_accuracy: 0.8982
Epoch 6/10
24/24 [==============================] - 0s 12ms/step - loss: 0.4119 - accuracy: 0.8965 -
val_loss: 0.3614 - val_accuracy: 0.9068
Epoch 7/10
24/24 [==============================] - 0s 12ms/step - loss: 0.3638 - accuracy: 0.9047 -
val_loss: 0.3260 - val_accuracy: 0.9130
Epoch 8/10
24/24 [==============================] - 0s 12ms/step - loss: 0.3310 - accuracy: 0.9106 -
val_loss: 0.3002 - val_accuracy: 0.9193
Epoch 9/10
24/24 [==============================] - 0s 12ms/step - loss: 0.3070 - accuracy: 0.9155 -
val_loss: 0.2815 - val_accuracy: 0.9242
Epoch 10/10
24/24 [==============================] - 0s 11ms/step - loss: 0.2878 - accuracy: 0.9195 -
val_loss: 0.2668 - val_accuracy: 0.9253
test loss, test acc: [0.27316993474960327, 0.9235000014305115]
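As a quick check of the trained MLP, a short sketch (assuming the model and x_test from the program above) classifies the first test image:

# Predict class scores for the first test image and report the best class
scores = model.predict(x_test[:1])
print("Predicted digit:", np.argmax(scores, axis=1)[0])
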
2. B. Implement a single-layer perceptron in TensorFlow Environment.
Aim
To implement a single-layer perceptron using TensorFlow in Python.
Algorithm
Step 1: Import the necessary libraries.
Step 2: Load the MNIST dataset using Keras from the imported TensorFlow.
Step 3: Display the shape of the dataset and one sample image. Each image is a 28*28 matrix; the
training set contains 60,000 images and the test set 10,000.
Step 4: Normalize the dataset so that the computations are fast and numerically stable.
Step 5: Build the neural network with a single-layer perceptron. The model contains only one input
layer and one output layer; there are no hidden layers.
Step 6: Output the accuracy of the model on the testing data.
Program
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
%matplotlib inline
(x_train, y_train),\
(x_test, y_test) = keras.datasets.mnist.load_data()
len(x_train)
len(x_test)
x_train[0].shape
plt.matshow(x_train[0])
# Normalizing the dataset
x_train = x_train/255
x_test = x_test/255

# Flattening the dataset for model building
x_train_flatten = x_train.reshape(len(x_train), 28*28)
x_test_flatten = x_test.reshape(len(x_test), 28*28)
model = keras.Sequential([
    keras.layers.Dense(10, input_shape=(784,),
                       activation='sigmoid')
])
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])

model.fit(x_train_flatten, y_train, epochs=5)
model.evaluate(x_test_flatten, y_test)

Output
Epoch 1/5
1875/1875 [==============================] - 1s 679us/step - loss: 0.4709 - accuracy:
0.8760
Epoch 2/5
1875/1875 [==============================] - 1s 712us/step - loss: 0.3035 - accuracy:
0.9156
Epoch 3/5
1875/1875 [==============================] - 1s 635us/step - loss: 0.2834 - accuracy:
0.9210
Epoch 4/5
1875/1875 [==============================] - 1s 635us/step - loss: 0.2732 - accuracy:
0.9241
Epoch 5/5
1875/1875 [==============================] - 1s 616us/step - loss: 0.2666 - accuracy:
0.9256
313/313 [==============================] - 1s 661us/step - loss: 0.2677 - accuracy: 0.9246
[0.26773443818092346, 0.9246000051498413]
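Because the network is a single layer, all of its parameters live in one 784x10 weight matrix plus 10 biases; a quick sketch using the model trained above confirms this:

# The only layer holds one weight per (pixel, class) pair plus one bias per class
weights, biases = model.layers[0].get_weights()
print(weights.shape, biases.shape)   # (784, 10) (10,)
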
3. Implement a Feed-Forward Network in TensorFlow/Keras.

Aim
To implement a Feed-Forward Network using TensorFlow in Python.
Algorithm
Step 1: Import the necessary libraries
Step 2: Now preparing training data (inputs-outputs).
Step 3: Preparing neural network parameters (weights and bias) using TensorFlow Variables
Step 4: Preparing inputs of the activation function
Step 5: Calculating the prediction error
Step 6: Minimizing the prediction error using gradient descent optimizer
Program
import tensorflow as tf
import numpy as np
# Preparing training data (inputs-outputs)
training_inputs_data = [[255, 0, 0],
[248, 80, 68],
[0, 0, 255],
[67, 15, 210]]
training_outputs_data = [[10], [20], [30], [40]]

# Preparing neural network parameters (weights and bias) using TensorFlow Variables
weights = tf.Variable(initial_value=[[0.1], [0.2], [0.3]], dtype=tf.float32)
bias = tf.Variable(initial_value=[[1]], dtype=tf.float32)

# Preparing inputs of the activation function
training_inputs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
training_outputs = [[10], [20], [30]]  # Desired outputs for each input

af_input = tf.matmul(tf.cast(training_inputs, tf.float32), weights) + bias

# Activation function of the output layer neuron
predictions = tf.nn.sigmoid(af_input)

# Calculating the prediction error (sum of squared errors)
prediction_error = tf.reduce_sum(tf.square(training_outputs - predictions))
# Minimizing the prediction error using gradient descent optimizer
learning_rate = 0.05
optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate)

# Training loop of the neural network
for step in range(10000):
    with tf.GradientTape() as tape:
        af_input = tf.matmul(tf.cast(training_inputs, tf.float32), weights) + bias
        predictions = tf.nn.sigmoid(af_input)
        prediction_error = tf.reduce_sum(tf.square(training_outputs - predictions))
    gradients = tape.gradient(prediction_error, [weights, bias])
    optimizer.apply_gradients(zip(gradients, [weights, bias]))

# Class scores of some testing data
test_inputs = np.array([[255, 0, 0], [248, 80, 68], [0, 0, 255], [67, 15, 210]])
test_inputs_tensor = tf.constant(test_inputs, dtype=tf.float32)
test_predictions = tf.nn.sigmoid(tf.matmul(test_inputs_tensor, weights) + bias)
print("Expected Scores: ", test_predictions.numpy())

Output
Expected Scores: [[1.]
[1.]
[1.]
[1.]]
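The all-ones scores are expected: a sigmoid neuron can only output values in (0, 1), while the desired outputs range from 10 to 40, so gradient descent drives every prediction to saturation at 1.0. One possible remedy, as an assumption rather than part of the original exercise, is to rescale the targets into the sigmoid's range before training and invert the scaling afterwards:

# Hypothetical fix: map targets such as [[10], [20], [30]] into (0, 1)
scale = 50.0
scaled_outputs = tf.constant(training_outputs, dtype=tf.float32) / scale
# ...train against scaled_outputs instead of training_outputs, then
# recover the original scale at prediction time:
# recovered = predictions * scale
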
4. Implement an Image Classifier using CNN in TensorFlow/Keras.
Aim
To implement an Image Classifier using CNN in TensorFlow using Python.
Algorithm
Step 1: Import the necessary libraries
Step 2: Download and prepare the CIFAR10 dataset
Step 3: Normalize pixel values to be between 0 and 1
Step 4: Create the convolutional base and Add Dense layers on top
Step 5: Compile and train the model
Step 6: Evaluate the model
Program
#Import TensorFlow
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
#Download and prepare the CIFAR10 dataset
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()

# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0
#Verify the data
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']

plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i])
    # The CIFAR labels happen to be arrays,
    # which is why you need the extra index
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()
#Create the convolutional base
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.summary()
#Add Dense layers on top
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
model.summary()
#Compile and train the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))
#Evaluate the model
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)
Output
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170498071/170498071 [==============================] - 725s 4us/step

Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 30, 30, 32) 896

max_pooling2d (MaxPooling2D (None, 15, 15, 32) 0


)

conv2d_1 (Conv2D) (None, 13, 13, 64) 18496

max_pooling2d_1 (MaxPooling (None, 6, 6, 64) 0


2D)

conv2d_2 (Conv2D) (None, 4, 4, 64) 36928

=================================================================
Total params: 56,320
Trainable params: 56,320
Non-trainable params: 0
_________________________________________________________________
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 30, 30, 32) 896

max_pooling2d (MaxPooling2D (None, 15, 15, 32) 0


)

conv2d_1 (Conv2D) (None, 13, 13, 64) 18496

max_pooling2d_1 (MaxPooling (None, 6, 6, 64) 0


2D)

conv2d_2 (Conv2D) (None, 4, 4, 64) 36928

flatten (Flatten) (None, 1024) 0

dense (Dense) (None, 64) 65600

dense_1 (Dense) (None, 10) 650

=================================================================
Total params: 122,570
Trainable params: 122,570
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
1563/1563 [==============================] - 22s 13ms/step - loss: 1.5517 - accuracy:
0.4339 - val_loss: 1.2641 - val_accuracy: 0.5459
Epoch 2/10
1563/1563 [==============================] - 20s 13ms/step - loss: 1.1461 - accuracy:
0.5942 - val_loss: 1.0613 - val_accuracy: 0.6225
Epoch 3/10
1563/1563 [==============================] - 19s 12ms/step - loss: 0.9905 - accuracy:
0.6495 - val_loss: 1.0187 - val_accuracy: 0.6456
Epoch 4/10
1563/1563 [==============================] - 19s 12ms/step - loss: 0.8999 - accuracy:
0.6842 - val_loss: 0.9230 - val_accuracy: 0.6756
Epoch 5/10
1563/1563 [==============================] - 20s 13ms/step - loss: 0.8289 - accuracy:
0.7104 - val_loss: 0.9123 - val_accuracy: 0.6847
Epoch 6/10
1563/1563 [==============================] - 21s 13ms/step - loss: 0.7659 - accuracy:
0.7314 - val_loss: 0.8839 - val_accuracy: 0.6920
Epoch 7/10
1563/1563 [==============================] - 21s 13ms/step - loss: 0.7181 - accuracy:
0.7495 - val_loss: 0.8938 - val_accuracy: 0.6907
Epoch 8/10
1563/1563 [==============================] - 21s 13ms/step - loss: 0.6725 - accuracy:
0.7638 - val_loss: 0.9034 - val_accuracy: 0.6961
Epoch 9/10
1563/1563 [==============================] - 21s 13ms/step - loss: 0.6320 - accuracy:
0.7776 - val_loss: 0.9145 - val_accuracy: 0.6988
Epoch 10/10
1563/1563 [==============================] - 21s 14ms/step - loss: 0.5945 - accuracy:
0.7924 - val_loss: 0.8990 - val_accuracy: 0.7111
313/313 - 1s - loss: 0.8990 - accuracy: 0.7111 - 1s/epoch - 4ms/step
0.7110999822616577
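To classify a single image with the trained network, a brief sketch (assuming the model, test_images, and class_names defined above; the model outputs logits, so argmax is applied to them directly):

import numpy as np
# Logits for the first test image; the largest one marks the predicted class
logits = model.predict(test_images[:1])
print("Predicted class:", class_names[np.argmax(logits[0])])
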
9. Perform Sentiment Analysis using RNN
Aim
To perform Sentiment Analysis using an RNN in Python.
Algorithm
Step 1: Import the necessary libraries
Step 2: Getting reviews with words that come under 5000 most occurring words in the entire corpus
of textual review data
Step 3: Getting all the words from word_index dictionary, again printing the review
Step 4: Get the minimum and the maximum length of reviews
Step 5: Keeping a fixed length of all reviews to max 400 words and fixing every word's embedding
size to be 32
Step 6: Creating a RNN model
Step 7: printing model summary
Step 8: Compiling model
Step 9: Training the model
Step 10: Printing model score on test data
Program
from tensorflow.keras.layers import SimpleRNN, LSTM, GRU, Bidirectional, Dense, Embedding
from tensorflow.keras.datasets import imdb
from tensorflow.keras.models import Sequential
import numpy as np

# Getting reviews with words that come under the 5000
# most occurring words in the entire corpus of textual review data
vocab_size = 5000
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=vocab_size)

print(x_train[0])

# Getting all the words from word_index dictionary
word_idx = imdb.get_word_index()

# word_index originally maps words to index numbers;
# invert it so the indices become keys and the words become values
word_idx = {i: word for word, i in word_idx.items()}
# again printing the review
print([word_idx[i] for i in x_train[0]])

# Get the minimum and the maximum length of reviews
print("Max length of a review:: ", len(max((x_train+x_test), key=len)))
print("Min length of a review:: ", len(min((x_train+x_test), key=len)))

from tensorflow.keras.preprocessing import sequence

# Keeping a fixed length of all reviews to max 400 words
max_words = 400

x_train = sequence.pad_sequences(x_train, maxlen=max_words)
x_test = sequence.pad_sequences(x_test, maxlen=max_words)

x_valid, y_valid = x_train[:64], y_train[:64]
x_train_, y_train_ = x_train[64:], y_train[64:]

# Fixing every word's embedding size to be 32
embd_len = 32

# Creating an RNN model
RNN_model = Sequential(name="Simple_RNN")
RNN_model.add(Embedding(vocab_size,
                        embd_len,
                        input_length=max_words))

# In case of a stacked RNN (more than one recurrent layer),
# use return_sequences=True
RNN_model.add(SimpleRNN(128,
                        activation='tanh',
                        return_sequences=False))
RNN_model.add(Dense(1, activation='sigmoid'))

# Printing model summary
print(RNN_model.summary())

# Compiling model
RNN_model.compile(
    loss="binary_crossentropy",
    optimizer='adam',
    metrics=['accuracy']
)

# Training the model
history = RNN_model.fit(x_train_, y_train_,
                        batch_size=64,
                        epochs=5,
                        verbose=1,
                        validation_data=(x_valid, y_valid))

# Printing model score on test data
print()
print("Simple_RNN Score---> ", RNN_model.evaluate(x_test, y_test, verbose=0))
Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
17464789/17464789 [==============================] - 0s 0us/step
[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838,
112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546,
38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530,
38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 2,
16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215,
28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 2, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476,
26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 2, 18, 4,
226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 2, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334,
88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 2, 19, 178, 32]

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb_word_index.json
1641221/1641221 [==============================] - 0s 0us/step
['the', 'as', 'you', 'with', 'out', 'themselves', 'powerful', 'lets', 'loves', 'their', 'becomes', 'reaching', 'had',
'journalist', 'of', 'lot', 'from', 'anyone', 'to', 'have', 'after', 'out', 'atmosphere', 'never', 'more', 'room', 'and',
'it', 'so', 'heart', 'shows', 'to', 'years', 'of', 'every', 'never', 'going', 'and', 'help', 'moments', 'or', 'of', 'every',
'chest', 'visual', 'movie', 'except', 'her', 'was', 'several', 'of', 'enough', 'more', 'with', 'is', 'now', 'current',
'film', 'as', 'you', 'of', 'mine', 'potentially', 'unfortunately', 'of', 'you', 'than', 'him', 'that', 'with', 'out',
'themselves', 'her', 'get', 'for', 'was', 'camp', 'of', 'you', 'movie', 'sometimes', 'movie', 'that', 'with', 'scary',
'but', 'and', 'to', 'story', 'wonderful', 'that', 'in', 'seeing', 'in', 'character', 'to', 'of', '70s', 'and', 'with', 'heart',
'had', 'shadows', 'they', 'of', 'here', 'that', 'with', 'her', 'serious', 'to', 'have', 'does', 'when', 'from', 'why',
'what', 'have', 'critics', 'they', 'is', 'you', 'that', "isn't", 'one', 'will', 'very', 'to', 'as', 'itself', 'with', 'other',
'and', 'in', 'of', 'seen', 'over', 'and', 'for', 'anyone', 'of', 'and', 'br', "show's", 'to', 'whether', 'from', 'than',
'out', 'themselves', 'history', 'he', 'name', 'half', 'some', 'br', 'of', 'and', 'odd', 'was', 'two', 'most', 'of',
'mean', 'for', '1', 'any', 'an', 'boat', 'she', 'he', 'should', 'is', 'thought', 'and', 'but', 'of', 'script', 'you', 'not',
'while', 'history', 'he', 'heart', 'to', 'real', 'at', 'and', 'but', 'when', 'from', 'one', 'bit', 'then', 'have', 'two', 'of',
'script', 'their', 'with', 'her', 'nobody', 'most', 'that', 'with', "wasn't", 'to', 'with', 'armed', 'acting', 'watch',
'an', 'for', 'with', 'and', 'film', 'want', 'an']

Max length of a review::  2697
Min length of a review::  70

Model: "Simple_RNN"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        (None, 400, 32)           160000
simple_rnn (SimpleRNN)       (None, 128)               20608
dense (Dense)                (None, 1)                 129

=================================================================
Total params: 180,737
Trainable params: 180,737
Non-trainable params: 0
_________________________________________________________________
None
Epoch 1/5
390/390 [==============================] - 82s 206ms/step - loss: 0.6776 - accuracy:
0.5587 - val_loss: 0.6771 - val_accuracy: 0.6094
Epoch 2/5
390/390 [==============================] - 82s 209ms/step - loss: 0.6412 - accuracy:
0.6250 - val_loss: 0.6056 - val_accuracy: 0.6406
Epoch 3/5
390/390 [==============================] - 80s 206ms/step - loss: 0.5382 - accuracy:
0.7320 - val_loss: 0.5407 - val_accuracy: 0.7500
Epoch 4/5
390/390 [==============================] - 81s 207ms/step - loss: 0.5320 - accuracy:
0.7370 - val_loss: 0.6075 - val_accuracy: 0.6719
Epoch 5/5
390/390 [==============================] - 83s 213ms/step - loss: 0.4920 - accuracy:
0.7670 - val_loss: 0.6157 - val_accuracy: 0.6250

Simple_RNN Score---> [0.6009092330932617, 0.672760009765625]
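To score a new review with the trained network, the raw text has to be encoded the same way imdb.load_data encodes it (word indices offset by 3, with 1 as the start token and 2 for out-of-vocabulary words, which are the load_data defaults). A small sketch of this, as an illustration rather than part of the original exercise:

# Hypothetical example: encode and score one hand-written review
word_to_idx = imdb.get_word_index()
review = "this movie was wonderful"
encoded = [1]  # start token
for w in review.lower().split():
    idx = word_to_idx.get(w)
    encoded.append(idx + 3 if idx is not None and idx + 3 < vocab_size else 2)
padded = sequence.pad_sequences([encoded], maxlen=max_words)
print("Positive probability:", RNN_model.predict(padded)[0][0])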

