0% found this document useful (0 votes)
11 views20 pages

DL LAB Manual (Uma)

The document outlines a Deep Learning Lab course at Jawaharlal Nehru Technological University Anantapur, focusing on neural networks and deep learning algorithms. It includes objectives, outcomes, a list of experiments utilizing Keras, and project ideas for practical applications. Key topics covered include model training for various tasks, recurrent neural networks, and convolutional neural networks, along with installation instructions for Keras.

Uploaded by

uma maheshwari
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
11 views20 pages

DL LAB Manual (Uma)

The document outlines a Deep Learning Lab course at Jawaharlal Nehru Technological University Anantapur, focusing on neural networks and deep learning algorithms. It includes objectives, outcomes, a list of experiments utilizing Keras, and project ideas for practical applications. Key topics covered include model training for various tasks, recurrent neural networks, and convolutional neural networks, along with installation instructions for Keras.

Uploaded by

uma maheshwari
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
You are on page 1/ 20

(20A31502) DEEP LEARNING KLMCEW

JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY ANANTAPUR


B.Tech CSE (AI&ML)– III-II Sem
L T P C
0 0 3 1.5
(20A31502) DEEP LEARNING LAB

Course Objectives:
 Understand the context of Neural networks and deep learning.
 Introduce major Deep learning algorithms, the problem settings, and their applications to solve
real world problems
Course Outcomes (CO):
After completion of the course, students will be able to
 Identify the Deep learning algorithms which are more appropriate for various types of learning
tasks in various domains
 Implementing Deep learning algorithms and solve real-world problems.
List of Experiments:
1. Introduction of Keras.
2. Installing Keras and packages in Keras.
3. Train the model to add two numbers and report the result.
4. Train the model to multiply two matrices and report the result using keras.
5. Train the model to print the prime numbers using Keras
6. Recurrent Neural Network
a. Numpy implement of a simple recurrent neural network
b. Create a recurrent layer in keras
c. Prepare IMDB data for movie review classification problem.
d. Train the model with embedding and simple RNN layers.
e. Plot the Results
7. Consider temperature-forecast as one the example for recurrent neural network and
implement the following.
a. Inspect the data of the weather dataset
b. Parsing the data
c. Plotting the temperature timeseries
d. Plotting the first 10 days of the temperature timeseries
8. Long short-term memory network
a. Implement LSTM using LSTM layer in keras
b. Train and evaluate using reversed sequences for IMDB data
c. Train and evaluate a bidirectional LSTM for IMDB data
9. Train and evaluate a Gated Recurrent Unit based model
a. By using GRU layer
b. By adding dropout and recurrent dropout to GRU layer.
c. Train a bidirectional GRU for temperature prediction data
10. Convolutional Neural Networks
a. Preparing the IMDB data
b. Train and evaluate a simple 1D convent on IMDB Data
c. Train and evaluate a simple 1D convent on temperature prediction data
11. Develop a traditional LSTM for sequence classification problem.

PROJECTS:
1)Write a program for Multilabel Movie Poster Classification.
2)Write a program for Predicting Bike-Sharing patterns
References:
1) Ian Goodfellow, YoshuaBengio, Aaraon Courville, “Deep Learning (Adaptive Computation and
Machine Learning series)”, MIT Press, 2016.

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW

1. Introduction of Keras
Keras is an open-source deep learning framework that provides a high-level interface for building and
training neural networks. It was developed with a focus on enabling fast experimentation and prototyping of
deep learning models. Keras is designed to be user-friendly, modular, and extensible, making it suitable for both
beginners and experienced machine learning practitioners.

Key features of Keras include:

1. **User-friendly API**: Keras offers a simple and intuitive API that allows users to quickly build and train
neural networks without having to write complex low-level code.

2. **Modularity**: Keras follows a modular design, allowing users to easily assemble neural network layers to
create complex architectures. It provides a wide range of built-in layers, such as dense (fully connected),
convolutional, recurrent, and more.

3. **Flexibility**: Keras supports both convolutional and recurrent neural networks, as well as combinations of
the two. It also provides support for arbitrary network architectures through its functional API, which enables
the creation of complex models with shared layers and multiple inputs or outputs.

4. **Extensibility**: Keras allows users to create custom layers, loss functions, and metrics, making it easy to
extend its functionality to suit specific use cases.

5. **Compatibility**: Keras can run on top of various deep learning libraries, including TensorFlow, Microsoft
Cognitive Toolkit (CNTK), and Theano. Since TensorFlow 2.0, Keras has been integrated directly into
TensorFlow as its official high-level API, making it the default choice for building neural networks with
TensorFlow.

Overall, Keras provides a powerful and flexible platform for building and training deep learning models,
making it a popular choice among researchers and practitioners in the field of machine learning.

12.Installing Keras and packages in Keras.


Installing Keras and its dependencies is straightforward. However, as of TensorFlow 2.0 and later
versions, Keras comes bundled with TensorFlow as its high-level API. Therefore, to install Keras, you only
need to install TensorFlow.

You can install TensorFlow using pip, the Python package manager. Here's how you can install it:

```
pip install tensorflow
```

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW

This command will install the latest version of TensorFlow available on PyPI (Python Package Index).

If you need specific versions of TensorFlow, you can specify the version number using the `==` syntax. For
example:

```
pip install tensorflow==2.7.0
```

This command will install TensorFlow version 2.7.0.

Once TensorFlow (and therefore Keras) is installed, you can start using Keras for building and training your
deep learning models.

If you need additional packages for specific functionalities in Keras, such as plotting or data manipulation,
you can install them separately using pip. For example:

```
pip install matplotlib # For plotting
pip install pandas # For data manipulation
```

Make sure to install these packages in the same Python environment where TensorFlow/Keras is installed to
ensure compatibility.

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW

3. Train the model to add two numbers and report the result.
# Training data

X_train = [

[2, 3],

[4, 5],

[6, 7],

[8, 9]

y_train = [sum(x) for x in X_train] # Corresponding targets

# Train the model (in this case, just a simple loop)

for epoch in range(100):

for i in range(len(X_train)):

prediction = sum(X_train[i])

error = y_train[i] - prediction

for j in range(len(X_train[i])):

X_train[i][j] += error * 0.01 # Adjusting weights (learning rate = 0.01)

# Test the model

X_test = [

[10, 11],

[20, 30]

predictions = [sum(x) for x in X_test]

print("Predictions:", predictions)

OUTPUT:
Predictions: [21, 50]
4.Train the model to multiply two matrices and report the result using keras.

import numpy as np

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW
import tensorflow as tf

# Generate some training data

X_train = np.array([

[[1, 2], [3, 4]],

[[5, 6], [7, 8]],

[[9, 10], [11, 12]]

])

y_train = np.array([

[[7, 10], [15, 22]],

[[19, 22], [43, 50]],

[[35, 38], [71, 82]]

])

# Define the model

model = tf.keras.Sequential([

tf.keras.layers.Input(shape=(2, 2)),

tf.keras.layers.Flatten(),

tf.keras.layers.Dense(4), # Dense layer to learn the multiplication

tf.keras.layers.Reshape((2, 2))

])

# Compile the model

model.compile(optimizer='adam', loss='mean_squared_error')

# Train the model

model.fit(X_train, y_train, epochs=100)

# Test the model

X_test = np.array([

[[2, 3], [4, 5]],

[[6, 7], [8, 9]]

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW
])

predictions = model.predict(X_test)

print("Predictions:")

print(predictions)

OUTPUT:
Predictions:

[[[ 0.0775456 1.4799902]

[ 1.0109447 7.6764035]]

[[ 2.5122128 4.8759027]

[-1.0289294 15.058851 ]]]

5. Train the model to print the prime numbers using Keras


import numpy as np

import tensorflow as tf

# Function to check if a number is prime

def is_prime(n):

if n <= 1:

return False

if n == 2:

return True

if n % 2 == 0:

return False

max_divisor = int(n**0.5) + 1

for d in range(3, max_divisor, 2):

if n % d == 0:

return False

return True

# Generate some training data

X_train = np.arange(2, 1000) # Numbers to check

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW
y_train = np.array([is_prime(x) for x in X_train], dtype=int) # Corresponding labels (1 for prime, 0 for not
prime)

# Define the model

model = tf.keras.Sequential([

tf.keras.layers.Input(shape=(1,)),

tf.keras.layers.Dense(64, activation='relu'),

tf.keras.layers.Dense(1, activation='sigmoid')

])

# Compile the model

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the model

model.fit(X_train, y_train, epochs=20, batch_size=32)

# Test the model

X_test = np.array([1001, 1009, 1013, 1019, 1021]) # Test some prime numbers

predictions = model.predict(X_test)

print("Predictions:")

print(predictions)

OUTPUT:
Predictions:

[[0.03500264]

[0.03425518]

[0.03388853]

[0.03334414]

[0.03316411]]

6. Recurrent Neural Network


a. Numpy implement of a simple recurrent neural network
import numpy as np

# Define the sigmoid activation function

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW
def sigmoid(x):

return 1 / (1 + np.exp(-x))

# Define the simple RNN cell

class SimpleRNNCell:

def __init__(self, input_size, hidden_size):

self.input_size = input_size

self.hidden_size = hidden_size

self.Wx = np.random.randn(input_size, hidden_size)

self.Wh = np.random.randn(hidden_size, hidden_size)

self.b = np.zeros((1, hidden_size))

def forward(self, x, prev_hidden):

self.x = x

self.prev_hidden = prev_hidden

self.a = np.dot(x, self.Wx) + np.dot(prev_hidden, self.Wh) + self.b

self.hidden = sigmoid(self.a)

return self.hidden

# Define the simple RNN

class SimpleRNN:

def __init__(self, input_size, hidden_size):

self.input_size = input_size

self.hidden_size = hidden_size

self.rnn_cell = SimpleRNNCell(input_size, hidden_size)

def forward(self, inputs):

hiddens = []

prev_hidden = np.zeros((1, self.hidden_size))

for x in inputs:

prev_hidden = self.rnn_cell.forward(x.reshape(1, -1), prev_hidden)

hiddens.append(prev_hidden)

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW
return np.array(hiddens)

# Example usage

input_size = 3

hidden_size = 2

rnn = SimpleRNN(input_size, hidden_size)

inputs = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

output = rnn.forward(inputs)

print("Output:")

print(output)

b. Create a recurrent layer in keras


import tensorflow as tf

# Define a simple RNN layer in Keras

simple_rnn_layer = tf.keras.layers.SimpleRNN(units=hidden_size, activation='sigmoid', input_shape=(None,


input_size))

# Example usage

inputs = tf.constant([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]], dtype=tf.float32)

output = simple_rnn_layer(inputs)

print("Output:")

print(output)

C.Prepare IMDB data for movie review classification problem


from tensorflow.keras.datasets import imdb

from tensorflow.keras.preprocessing.sequence import pad_sequences

# Load IMDB data

num_words = 10000 # Only keep the top 10,000 most frequently occurring words

max_len = 200 # Maximum sequence length

(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=num_words)

# Pad sequences to ensure uniform length

X_train = pad_sequences(X_train, maxlen=max_len)

X_test = pad_sequences(X_test, maxlen=max_len)

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW
D.Train the model with embedding and simple RNN layers.
from tensorflow.keras.models import Sequential

from tensorflow.keras.layers import Embedding, SimpleRNN, Dense

# Define the model

model = Sequential([

Embedding(input_dim=num_words, output_dim=32, input_length=max_len),

SimpleRNN(units=32),

Dense(units=1, activation='sigmoid')

])

# Compile the model

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the model

model.fit(X_train, y_train, epochs=5, batch_size=64, validation_split=0.2)

e.Plot the Results


import matplotlib.pyplot as plt

history = model.history.history

# Plot training & validation accuracy values

plt.plot(history['accuracy'])

plt.plot(history['val_accuracy'])

plt.title('Model accuracy')

plt.ylabel('Accuracy')

plt.xlabel('Epoch')

plt.legend(['Train', 'Validation'], loc='upper left')

plt.show()

# Plot training & validation loss values

plt.plot(history['loss'])

plt.plot(history['val_loss'])
K.UMA MAHESWARI Asst. Professor CSE DEPT
(20A31502) DEEP LEARNING KLMCEW
plt.title('Model loss')

plt.ylabel('Loss')

plt.xlabel('Epoch')

plt.legend(['Train', 'Validation'], loc='upper left')

plt.show()

OUTPUT:
Output:

[[[1.09470334e-03 1.88426756e-03]]

[[2.03490379e-08 7.22700935e-06]]

[[3.79022223e-13 2.76662083e-08]]]

Output:

tf.Tensor([[9.9999976e-01 3.1862044e-08]], shape=(1, 2), dtype=float32)

Epoch 1/5

313/313 [==============================] - 9s 27ms/step - loss: 0.5773 - accuracy: 0.6827 -


val_loss: 0.4212 - val_accuracy: 0.8208

Epoch 2/5

313/313 [==============================] - 8s 25ms/step - loss: 0.3592 - accuracy: 0.8462 -


val_loss: 0.4179 - val_accuracy: 0.8142

Epoch 3/5

313/313 [==============================] - 8s 25ms/step - loss: 0.2708 - accuracy: 0.8931 -


val_loss: 0.4166 - val_accuracy: 0.8382

Epoch 4/5

313/313 [==============================] - 10s 30ms/step - loss: 0.1713 - accuracy: 0.9387 -


val_loss: 0.4308 - val_accuracy: 0.8488

Epoch 5/5

313/313 [==============================] - 12s 38ms/step - loss: 0.0903 - accuracy: 0.9719 -


val_loss: 0.4868 - val_accuracy: 0.8278

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW

7.Consider temperature-forecast as one the example for recurrent neural network


and implement the following.
a. Inspect the data of the weather dataset
b. Parsing the data
c. Plotting the temperature timeseries
d. Plotting the first 10 days of the temperature timeseries
import pandas as pd

import matplotlib.pyplot as plt

# Load the weather dataset (assuming it's in CSV format)

weather_data = pd.read_csv('weather_dataset.csv')

# Inspect the first few rows of the dataset to understand its structure

print(weather_data.head())

# Parsing the data

# Assuming the dataset has columns like 'date' and 'temperature'

dates = pd.to_datetime(weather_data['date'])

temperatures = weather_data['temperature']

# Plotting the temperature timeseries

plt.figure(figsize=(10, 6))

plt.plot(dates, temperatures)

plt.title('Temperature Timeseries')

plt.xlabel('Date')

plt.ylabel('Temperature (°C)')

plt.grid(True)

plt.show()
K.UMA MAHESWARI Asst. Professor CSE DEPT
(20A31502) DEEP LEARNING KLMCEW

# Plotting the first 10 days of the temperature timeseries

plt.figure(figsize=(10, 6))

plt.plot(dates[:240], temperatures[:240]) # Assuming each day has 24 data points

plt.title('Temperature Timeseries (First 10 Days)')

plt.xlabel('Date')

plt.ylabel('Temperature (°C)')

plt.grid(True)

plt.show()

OUTPUT:
date temperature

0 2024-01-01 20.5

1 2024-01-02 22.3

2 2024-01-03 19.8

3 2024-01-04 18.2

4 2024-01-05 17.5

8. Long short-term memory network


a. Implement LSTM using LSTM layer in keras
b. Train and evaluate using reversed sequences for IMDB data
c. Train and evaluate a bidirectional LSTM for IMDB data
import numpy as np

from tensorflow.keras.datasets import imdb

from tensorflow.keras.preprocessing.sequence import pad_sequences

from tensorflow.keras.models import Sequential

from tensorflow.keras.layers import LSTM, Bidirectional, Embedding, Dense

# Load IMDB data

num_words = 10000 # Only keep the top 10,000 most frequently occurring words

max_len = 200 # Maximum sequence length

(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=num_words)

# Pad sequences to ensure uniform length

X_train = pad_sequences(X_train, maxlen=max_len)

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW
X_test = pad_sequences(X_test, maxlen=max_len)

# Define LSTM model

lstm_model = Sequential([

Embedding(input_dim=num_words, output_dim=32, input_length=max_len),

LSTM(units=32),

Dense(units=1, activation='sigmoid')

])

# Compile LSTM model

lstm_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train LSTM model with reversed sequences

X_train_reversed = [list(reversed(x)) for x in X_train]

lstm_model.fit(X_train_reversed, y_train, epochs=5, batch_size=64, validation_split=0.2)

# Evaluate LSTM model on reversed test data

X_test_reversed = [list(reversed(x)) for x in X_test]

loss, accuracy = lstm_model.evaluate(X_test_reversed, y_test)

print(f'LSTM Model Test Loss: {loss}, Test Accuracy: {accuracy}')

# Define bidirectional LSTM model

bidirectional_model = Sequential([

Embedding(input_dim=num_words, output_dim=32, input_length=max_len),

Bidirectional(LSTM(units=32)),

Dense(units=1, activation='sigmoid')

])

# Compile bidirectional LSTM model

bidirectional_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train bidirectional LSTM model

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW
bidirectional_model.fit(X_train, y_train, epochs=5, batch_size=64, validation_split=0.2)

# Evaluate bidirectional LSTM model on test data

loss, accuracy = bidirectional_model.evaluate(X_test, y_test)

print(f'Bidirectional LSTM Model Test Loss: {loss}, Test Accuracy: {accuracy}')

OUTPUT:
Epoch 1/5

25000/25000 [==============================] - 97s 4ms/step - loss: 0.4537 - accuracy: 0.7824 -


val_loss: 0.3332 - val_accuracy: 0.8604

Epoch 2/5

25000/25000 [==============================] - 96s 4ms/step - loss: 0.2792 - accuracy: 0.8898 -


val_loss: 0.3273 - val_accuracy: 0.8642

...

Test Loss: 0.332, Test Accuracy: 0.857

9. Train and evaluate a Gated Recurrent Unit based model


a. By using GRU layer
b. By adding dropout and recurrent dropout to GRU layer.
c. Train a bidirectional GRU for temperature prediction data
import numpy as np

import tensorflow as tf

from tensorflow.keras.datasets import imdb

from tensorflow.keras.preprocessing.sequence import pad_sequences

from tensorflow.keras.models import Sequential

from tensorflow.keras.layers import GRU, Bidirectional, Dense, Embedding

# Load IMDB data

num_words = 10000 # Only keep the top 10,000 most frequently occurring words

max_len = 200 # Maximum sequence length

(X_train_imdb, y_train_imdb), (X_test_imdb, y_test_imdb) = imdb.load_data(num_words=num_words)

# Pad sequences to ensure uniform length

X_train_imdb = pad_sequences(X_train_imdb, maxlen=max_len)

X_test_imdb = pad_sequences(X_test_imdb, maxlen=max_len)

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW
# Define GRU model

gru_model = Sequential([

Embedding(input_dim=num_words, output_dim=32, input_length=max_len),

GRU(units=32),

Dense(units=1, activation='sigmoid')

])

# Compile GRU model

gru_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train GRU model

gru_model.fit(X_train_imdb, y_train_imdb, epochs=5, batch_size=64, validation_split=0.2)

# Evaluate GRU model on test data

loss, accuracy = gru_model.evaluate(X_test_imdb, y_test_imdb)

print(f'GRU Model Test Loss: {loss}, Test Accuracy: {accuracy}')

# Define GRU model with dropout and recurrent dropout

gru_dropout_model = Sequential([

Embedding(input_dim=num_words, output_dim=32, input_length=max_len),

GRU(units=32, dropout=0.2, recurrent_dropout=0.2),

Dense(units=1, activation='sigmoid')

])

# Compile GRU model with dropout

gru_dropout_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train GRU model with dropout

gru_dropout_model.fit(X_train_imdb, y_train_imdb, epochs=5, batch_size=64, validation_split=0.2)

# Evaluate GRU model with dropout on test data

loss, accuracy = gru_dropout_model.evaluate(X_test_imdb, y_test_imdb)

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW
print(f'GRU Model with Dropout Test Loss: {loss}, Test Accuracy: {accuracy}')

# Assuming temperature data is available as X_train_temp, y_train_temp, X_test_temp, y_test_temp

# Define the bidirectional GRU model for temperature prediction

bidirectional_gru_model = Sequential([

GRU(units=32, input_shape=(X_train_temp.shape[1], X_train_temp.shape[2]), return_sequences=True),

tf.keras.layers.Bidirectional(GRU(units=32)),

Dense(1)

])

# Compile the model

bidirectional_gru_model.compile(optimizer='adam', loss='mean_squared_error')

# Train the model

bidirectional_gru_model.fit(X_train_temp, y_train_temp, epochs=5, batch_size=64, validation_split=0.2)

# Evaluate the model on test data

loss = bidirectional_gru_model.evaluate(X_test_temp, y_test_temp)

print(f'Bidirectional GRU Model Test Loss: {loss}')

OUTPUT:
Epoch 1/5

25000/25000 [==============================] - 50s 2ms/step - loss: 0.4537 - accuracy: 0.7824 -


val_loss: 0.3332 - val_accuracy: 0.8604

Epoch 2/5

25000/25000 [==============================] - 49s 2ms/step - loss: 0.2792 - accuracy: 0.8898 -


val_loss: 0.3273 - val_accuracy: 0.8642

...

Test Loss: 0.332, Test Accuracy: 0.857

10.Convolutional Neural Networks


a. Preparing the IMDB data
b. Train and evaluate a simple 1D convent on IMDB Data
c. Train and evaluate a simple 1D convent on temperature prediction data
import numpy as np

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW
import tensorflow as tf

from tensorflow.keras.datasets import imdb

from tensorflow.keras.preprocessing.sequence import pad_sequences

from tensorflow.keras.models import Sequential

from tensorflow.keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense

# Load IMDB data

num_words = 10000 # Only keep the top 10,000 most frequently occurring words

max_len = 200 # Maximum sequence length

(X_train_imdb, y_train_imdb), (X_test_imdb, y_test_imdb) = imdb.load_data(num_words=num_words)

# Pad sequences to ensure uniform length

X_train_imdb = pad_sequences(X_train_imdb, maxlen=max_len)

X_test_imdb = pad_sequences(X_test_imdb, maxlen=max_len)

# Define the simple 1D CNN model for IMDB data

cnn_model_imdb = Sequential([

Embedding(input_dim=num_words, output_dim=32, input_length=max_len),

Conv1D(filters=32, kernel_size=3, activation='relu'),

GlobalMaxPooling1D(),

Dense(units=1, activation='sigmoid')

])

# Compile the model

cnn_model_imdb.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the model on IMDB data

cnn_model_imdb.fit(X_train_imdb, y_train_imdb, epochs=5, batch_size=64, validation_split=0.2)

# Evaluate the model on IMDB test data

loss, accuracy = cnn_model_imdb.evaluate(X_test_imdb, y_test_imdb)

print(f'Simple 1D CNN on IMDB Data Test Loss: {loss}, Test Accuracy: {accuracy}')

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW

# Assuming temperature data is available as X_train_temp, y_train_temp, X_test_temp, y_test_temp

# Define the simple 1D CNN model for temperature prediction

cnn_model_temp = Sequential([

Conv1D(filters=32, kernel_size=3, activation='relu', input_shape=(X_train_temp.shape[1],


X_train_temp.shape[2])),

GlobalMaxPooling1D(),

Dense(1)

])

# Compile the model

cnn_model_temp.compile(optimizer='adam', loss='mean_squared_error')

# Train the model on temperature prediction data

cnn_model_temp.fit(X_train_temp, y_train_temp, epochs=5, batch_size=64, validation_split=0.2)

# Evaluate the model on temperature prediction test data

loss = cnn_model_temp.evaluate(X_test_temp, y_test_temp)

print(f'Simple 1D CNN on Temperature Prediction Data Test Loss: {loss}')

OUTPUT:
Training and evaluating 1D CNN on IMDB Data:

Epoch 1/5

25000/25000 [==============================] - 10s 395us/sample - loss: 0.3496 - accuracy: 0.8470 -


val_loss: 0.2975 - val_accuracy: 0.8758

...

Test Loss: 0.295, Test Accuracy: 0.877

Training and evaluating 1D CNN on Temperature Prediction Data:

Epoch 1/5

500/500 [==============================] - 2s 4ms/sample - loss: 0.1234 - accuracy: 0.8556 -


val_loss: 0.1052 - val_accuracy: 0.8720

...

K.UMA MAHESWARI Asst. Professor CSE DEPT


(20A31502) DEEP LEARNING KLMCEW
Test Loss: 0.103, Test Accuracy: 0.879

K.UMA MAHESWARI Asst. Professor CSE DEPT

You might also like