DL Lab Manual
To run the deep learning programs, you need to install the following modules and packages:
1. First, you need to have Python installed on your machine. You can
download the latest version of Python from the official website
(https://www.python.org/downloads/). Make sure to select the option to
add Python to your system PATH during the installation process.
2. Once Python is installed, open the command prompt by pressing the
Windows key + R and typing cmd in the Run dialog box.
3. In the command prompt, type the following command to install Jupyter
Notebook:
pip install jupyter
This will download and install Jupyter Notebook and its dependencies.
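For example, after Python is on your PATH you can verify the setup and start the notebook server from the same command prompt (these are the standard pip and Jupyter commands; adjust them if you use a virtual environment):
python --version
pip install jupyter
jupyter notebook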
Once you’ve created a new notebook, you can start writing code in the cells. To
run a cell, press Shift + Enter or click the Run button in the toolbar. You can
also add text, equations, and visualizations to your notebook using Markdown
syntax.
Jupyter Notebook also supports a variety of keyboard shortcuts to make your
work more efficient. Here are some of the most useful shortcuts:
• Shift + Enter: Run the current cell and move to the next one.
• Ctrl + Enter: Run the current cell.
• Esc: Enter command mode.
• Enter: Enter edit mode.
• A: Insert a new cell above the current cell.
• B: Insert a new cell below the current cell.
• D + D: Delete the current cell.
• M: Change the current cell type to Markdown.
• Y: Change the current cell type to code.
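As a quick check that a new notebook is working, you can run a minimal code cell such as the sketch below (the values are purely illustrative):
import numpy as np
x = np.arange(5)
print(x ** 2)  # prints [ 0  1  4  9 16]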
In Spyder:
To run specific lines or selected code, highlight the code you want to run and then click the green play button or press F9.
Spyder provides a convenient interface with features like a variable explorer,
IPython console, debugger, and more, making it easier to write and debug Python
code.
Remember to have your Python environment properly set up and activated before
launching Spyder if you're using virtual environments to manage packages or
dependencies.
Spyder is a versatile and user-friendly IDE, allowing you to execute Python code
efficiently and debug your scripts seamlessly.
Next, let's install TensorFlow and some commonly used Python libraries. Assuming you already have Python installed, here's how to install TensorFlow and a few additional libraries using pip.
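A typical invocation is shown below (the package names are the standard PyPI ones; versions are left unpinned):
pip install tensorflow numpy pandas matplotlib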
Spyder link: https://www.spyder-ide.org/
Reference link: https://medium.com/@vertabeloacdm/how-to-install-the-python-spyder-ide-and-run-scripts-ecfd31b37db6
# Importing required libraries
import numpy as np
import pandas as pd
import tensorflow as tf
# Predictions (assumes `model` and `X` were defined and trained earlier in the full program)
predictions = model.predict(X[:5]) # Predicting on the first 5 samples
print("\nPredictions for the first 5 samples:")
print(predictions)
Output:
import numpy as np

def is_prime(n):
    """Check if a number is prime."""
    if n <= 1:
        return False
    for i in range(2, int(np.sqrt(n)) + 1):
        if n % i == 0:
            return False
    return True
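# Example calls (illustrative values, not part of the original listing)
print(is_prime(7))   # True
print(is_prime(10))  # False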
Output:
Note: If the output is wrong, run the program several times. The model is retrained on each run, so the predictions can vary between runs before it produces the correct output.
7. Recurrent Neural Networks
(a) Aim: NumPy implementation of a simple recurrent neural network
Description:
An RNN consists of recurrent connections that allow the network to maintain information in memory over time, making it suitable for sequential-data tasks such as time series forecasting and natural language processing. The implementation involves defining the architecture, initializing the weights, implementing the forward pass for computing predictions, and potentially a backward pass for gradient computation if training is involved. This task aims to provide a foundational understanding of how RNNs work under the hood by building a simple version from scratch, using NumPy for the numerical computations.
Source code:
import numpy as np
# Define RNN parameters
input_size = 3
hidden_size = 4
output_size = 2
# Generate random input sequence
X = np.random.rand(5, input_size)
# Initialize weights
Wxh = np.random.rand(input_size, hidden_size)
Whh = np.random.rand(hidden_size, hidden_size)
Why = np.random.rand(hidden_size, output_size)
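# Forward pass -- a minimal sketch (assumes a tanh hidden activation and no bias terms)
h = np.zeros(hidden_size)
for t in range(X.shape[0]):
    h = np.tanh(X[t] @ Wxh + h @ Whh)  # update the hidden state
    y = h @ Why                        # output at this time step
    print(f"Step {t}: output = {y}")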
Output:
(c) Aim: Prepare the IMDB data for the movie review classification problem.
Description:
To train a simple recurrent neural network (RNN) model using Keras for the
IMDB movie review classification problem:
Data Preparation: The IMDB dataset is loaded and split into training and test
sets. Sequences are padded or truncated to have a consistent length of 500 tokens
to ensure uniform input dimensions.
Model Architecture: A sequential model is constructed in Keras with an
embedding layer to convert words into dense vectors. A SimpleRNN layer with
32 units is added to capture sequential dependencies. A Dense layer with a
sigmoid activation function is used for binary classification, predicting positive
or negative sentiment based on the reviews.
Model Compilation & Training: The model is compiled using the Adam
optimizer and binary cross-entropy loss. It's trained on the training data for 5
epochs with a batch size of 64, utilizing 20% of the data for validation.
Evaluation: After training, the model's performance is evaluated on the test set,
providing the accuracy metric to assess its classification performance on unseen
reviews.
Source code:
from keras.datasets import imdb
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN, Dense
# Load the IMDB dataset
max_features = 5000 # Number of most frequent words to consider
max_len = 500 # Maximum sequence length
embedding_dim = 32 # Dimension of the embedding space
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
# Pad sequences to ensure consistent length
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
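# Quick sanity check of the prepared data (shapes after padding/truncation)
print("x_train shape:", x_train.shape)
print("x_test shape:", x_test.shape)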
(d) Aim: Train the model with embedding and simple RNN layers.
Description:
To train a neural network model using an embedding layer followed by a simple
RNN layer for the IMDB movie review classification task:
Data Preparation: The IMDB dataset is loaded, considering the top 5,000 most
frequent words. Sequences are padded or truncated to a consistent length of 500
tokens for uniform input size.
Model Architecture: A sequential model is constructed in Keras, starting with an embedding layer that converts words into dense vectors of 32 dimensions. A SimpleRNN layer with 32 units captures sequential dependencies, and a Dense layer with a sigmoid activation function serves for binary classification, distinguishing positive from negative sentiment.
Model Compilation & Training: The model is compiled using the Adam
optimizer and binary cross-entropy loss. Training is performed on the training
data for 5 epochs with a batch size of 64, utilizing 20% of the data for validation
to monitor performance.
Evaluation & Visualization: The trained model is evaluated on the test set to
measure its accuracy in sentiment classification. Additionally, the code visualizes
the training and validation accuracy across epochs using Matplotlib, providing
insights into the model's learning progress and potential overfitting tendencies.
Source code:
from keras.datasets import imdb
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN, Dense
# Load the IMDB dataset
max_features = 10000 # Number of most frequent words to consider
max_len = 500 # Maximum sequence length
embedding_dim = 32 # Dimension of the embedding space
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
# Pad sequences to ensure consistent length
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
# Build the model
model = Sequential()
# Add an Embedding layer
model.add(Embedding(max_features, embedding_dim, input_length=max_len))
# Add a SimpleRNN layer
model.add(SimpleRNN(units=32))
# Add a Dense layer for binary classification (positive/negative sentiment)
model.add(Dense(units=1, activation='sigmoid'))
# Compile the model
model.compile(optimizer='adam',loss='binary_crossentropy',
metrics=['accuracy'])
# Train the model
history=model.fit(x_train,y_train,epochs=5,batch_size=64, validation_split=0.2)
# Evaluate the model on the test set
loss, accuracy = model.evaluate(x_test, y_test)
print(f'Test accuracy: {accuracy * 100:.2f}%')
# Plot training and validation accuracy
import matplotlib.pyplot as plt
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
Output:
(e) Aim: Plot the Results
Source code:
import matplotlib.pyplot as plt
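# A minimal sketch, assuming `history` is the object returned by model.fit()
# in part (d); it plots training vs. validation loss across epochs.
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()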
Output:
8. Aim: Consider temperature forecasting as an example for an RNN and implement the following:
a) Inspect the data of the weather dataset
b) Parsing the data
c) Plotting the temperature timeseries
d) Plotting the first 10 days of the temperature timeseries
Description:
This program mainly focuses on analyzing and visualizing temperature forecast
data using Python's Pandas and Matplotlib libraries, specifically targeting tasks
related to time series data handling:
Data Inspection: The code begins by loading a weather dataset from a CSV file
path and displaying the first few rows to get a glimpse of the data structure. Basic
statistical information about the dataset, such as mean, standard deviation, etc., is
also presented to understand the data distribution.
Parsing the Data: The 'Date' column in the dataset is converted into a datetime
object, allowing for more intuitive time-based analysis and plotting.
Plotting the Temperature Timeseries: A plot is generated showcasing the
temperature timeseries, where the x-axis represents dates, and the y-axis displays
the corresponding temperatures over time.
Plotting the First 10 Days: A more focused visualization is created, zooming
into the first 10 days of temperature data to provide a clearer view of temperature
variations within this initial period.
Source Code: (your_dataset.csv)
import pandas as pd
import matplotlib.pyplot as plt
# a). Inspect the data
# Load the dataset (replace 'your_dataset.csv' with the actual file path)
df = pd.read_csv('your_dataset.csv')
# Display the first few rows of the dataset
print("Data Inspection:")
print(df.head())
# Display basic statistics of the dataset
print("\nDataset Statistics:")
print(df.describe())
# b). Parsing the data
# Assuming your dataset has a 'Date' column, parse it as a datetime object
df['Date'] = pd.to_datetime(df['Date'])
Source Code:
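The following is a minimal sketch for parts c) and d), assuming the dataset has a 'Temperature' column and that each row is one daily reading (adjust the column name and the slice to match your file):
# c). Plotting the temperature timeseries
plt.figure(figsize=(10, 4))
plt.plot(df['Date'], df['Temperature'])
plt.title('Temperature Timeseries')
plt.xlabel('Date')
plt.ylabel('Temperature')
plt.show()
# d). Plotting the first 10 days of the temperature timeseries
first_10_days = df.iloc[:10]  # adjust the slice if readings are not daily
plt.figure(figsize=(10, 4))
plt.plot(first_10_days['Date'], first_10_days['Temperature'])
plt.title('Temperature: First 10 Days')
plt.xlabel('Date')
plt.ylabel('Temperature')
plt.show()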
Output:
c) Aim: Train and evaluate a simple 1D convnet on temperature
prediction data.
Source Code:
from keras.models import Sequential
from keras.layers import (Embedding, Conv1D, MaxPooling1D,
                          GlobalMaxPooling1D, Dense)
import numpy as np
# Generate synthetic temperature data
num_samples = 3000
sequence_length = 50
temperatures = np.random.randint(50, 100, num_samples)
data = []
labels = []
for i in range(len(temperatures) - sequence_length):
    data.append(temperatures[i:i + sequence_length])
    labels.append(temperatures[i + sequence_length])
data = np.array(data)
labels = np.array(labels)
# Split data into training and test sets
train_size = int(0.8 * len(data))
x_train, x_test = data[:train_size], data[train_size:]
y_train, y_test = labels[:train_size], labels[train_size:]
# Reshape data for Conv1D
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
# Build the model
model = Sequential()
model.add(Conv1D(32, 5, activation='relu', input_shape=(sequence_length, 1)))
model.add(MaxPooling1D(3))
model.add(Conv1D(32, 5, activation='relu'))
model.add(GlobalMaxPooling1D())
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.fit(x_train, y_train, epochs=5, batch_size=32, validation_split=0.2)
# Evaluate the model
loss = model.evaluate(x_test, y_test)
print(f"Test Loss: {loss}")
Output:
11. Aim: Applying a Convolutional Neural Network to computer vision problems.
Description:
This program applies a convolutional neural network (CNN) to a classic computer vision problem, handwritten digit recognition on the MNIST dataset: the images are reshaped to 28x28x1 and scaled to the [0, 1] range, the labels are one-hot encoded, and a stack of Conv2D and MaxPooling2D layers followed by Dense layers defines the classifier.
Source Code:
from keras.datasets import mnist
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
# Load MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Preprocess the data
x_train = x_train.reshape((x_train.shape[0], 28, 28, 1)).astype('float32') / 255
x_test = x_test.reshape((x_test.shape[0], 28, 28, 1)).astype('float32') / 255
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
# Build the CNN model
model_cnn_mnist = Sequential()
model_cnn_mnist.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model_cnn_mnist.add(MaxPooling2D((2, 2)))
model_cnn_mnist.add(Conv2D(64, (3, 3), activation='relu'))
model_cnn_mnist.add(MaxPooling2D((2, 2)))
model_cnn_mnist.add(Conv2D(64, (3, 3), activation='relu'))
model_cnn_mnist.add(Flatten())
model_cnn_mnist.add(Dense(64, activation='relu'))
model_cnn_mnist.add(Dense(10, activation='softmax'))
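# Print a summary of the layer architecture (added so the program produces
# visible output; summary() is the standard Keras inspection call)
model_cnn_mnist.summary()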
Output:
12. Aim: Image Classification on MNIST dataset (CNN model with
Fully connected layer)
Description:
This program builds the same CNN for the MNIST dataset and adds the full training workflow: a fully connected (Dense) layer feeds a 10-way softmax output, the model is compiled with the Adam optimizer and categorical cross-entropy loss, trained for 3 epochs with a batch size of 64 (20% of the training data held out for validation), and evaluated for accuracy on the test set.
Source Code:
from keras.datasets import mnist
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
# Load MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Preprocess the data
x_train = x_train.reshape((x_train.shape[0], 28, 28, 1)).astype('float32') / 255
x_test = x_test.reshape((x_test.shape[0], 28, 28, 1)).astype('float32') / 255
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
# Build the CNN model with fully connected layer
model_cnn_mnist = Sequential()
model_cnn_mnist.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model_cnn_mnist.add(MaxPooling2D((2, 2)))
model_cnn_mnist.add(Conv2D(64, (3, 3), activation='relu'))
model_cnn_mnist.add(MaxPooling2D((2, 2)))
model_cnn_mnist.add(Conv2D(64, (3, 3), activation='relu'))
model_cnn_mnist.add(Flatten())
# Fully connected layer
model_cnn_mnist.add(Dense(64, activation='relu'))
# Output layer
model_cnn_mnist.add(Dense(10, activation='softmax'))
# Compile the model
model_cnn_mnist.compile(optimizer='adam', loss='categorical_crossentropy',
                        metrics=['accuracy'])
# Train the model
model_cnn_mnist.fit(x_train, y_train, epochs=3, batch_size=64, validation_split=0.2)
# Evaluate the model on the test set
loss, accuracy = model_cnn_mnist.evaluate(x_test, y_test)
print(f'Test accuracy: {accuracy * 100:.2f}%')
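# Illustrative check: predict the digit for the first test image
import numpy as np
pred = model_cnn_mnist.predict(x_test[:1])
print("Predicted digit:", np.argmax(pred))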
Output: