
MACHINE LEARNING ASSIGNMENT

Name : P Tulasi Vardhan Reddy
Reg No : 22BCE7452
Slot : F2
MODEL 1:

Using CNN-RNN-MLP (Convolutional Neural Network + Recurrent Neural Network + Multilayer Perceptron)

Description :

This hybrid model combines Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Multi-Layer Perceptrons (MLPs). The CNN branch extracts features from the input, particularly spatial features in images or local patterns in sequences. The RNN branch processes sequential data to capture temporal dependencies or time-series patterns. The outputs of the CNN, RNN, and MLP branches are then concatenated and passed through fully connected layers for classification or regression, making the architecture suitable for applications like time-series analysis and video recognition.
We used the cleaned dataset after removing irrelevant data.

PYTHON CODE:

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, KFold
from sklearn.preprocessing import StandardScaler
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Conv1D, Flatten, LSTM, Dropout, Concatenate
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam

# Load dataset
data = pd.read_excel('CLEANED_DATASET.xlsx')
data = data.dropna()
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values

# Standardize features
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Reshape the scaled data for CNN and RNN input: (samples, features, 1 channel)
X_cnn = X_scaled.reshape(X_scaled.shape[0], X_scaled.shape[1], 1)
X_rnn = X_scaled.reshape(X_scaled.shape[0], X_scaled.shape[1], 1)

# Hold-out split (the K-fold loop below works on the full data)
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2,
                                                    random_state=42)

# Input shape shared by the CNN and RNN branches
input_shape = X_cnn.shape[1:]

# CNN Model
cnn_input = Input(shape=input_shape)
cnn = Conv1D(64, kernel_size=3, activation='relu')(cnn_input)
cnn = Conv1D(32, kernel_size=3, activation='relu')(cnn)
cnn = Flatten()(cnn)

# RNN Model
rnn_input = Input(shape=input_shape)
rnn = LSTM(64, return_sequences=True)(rnn_input)
rnn = LSTM(32)(rnn)

# MLP Model
mlp_input = Input(shape=(X.shape[1],))
mlp = Dense(64, activation='relu')(mlp_input)
mlp = Dense(32, activation='relu')(mlp)

# Combine models
combined = Concatenate()([cnn, rnn, mlp])
combined = Dense(64, activation='relu')(combined)
combined = Dropout(0.3)(combined)
combined = Dense(32, activation='relu')(combined)
output = Dense(1)(combined)

# Compile model
model = Model(inputs=[cnn_input, rnn_input, mlp_input], outputs=output)
model.compile(optimizer=Adam(learning_rate=0.001), loss='mse',
              metrics=['mae'])

# K-fold cross-validation
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
histories = []

for train_index, val_index in kfold.split(X_scaled):
    X_train, X_val = X_scaled[train_index], X_scaled[val_index]
    y_train, y_val = y[train_index], y[val_index]
    X_train_cnn, X_val_cnn = X_cnn[train_index], X_cnn[val_index]
    X_train_rnn, X_val_rnn = X_rnn[train_index], X_rnn[val_index]

    early_stopping = EarlyStopping(monitor='val_loss', patience=10,
                                   restore_best_weights=True)

    history = model.fit([X_train_cnn, X_train_rnn, X_train], y_train,
                        validation_data=([X_val_cnn, X_val_rnn, X_val], y_val),
                        epochs=100, batch_size=32, callbacks=[early_stopping],
                        verbose=0)

    histories.append(history)

# Evaluate model on the full dataset
final_loss, final_mae = model.evaluate([X_cnn, X_rnn, X_scaled], y)
print(f"Final Loss: {final_loss}, Final MAE: {final_mae}")

import tensorflow.keras.backend as K

# Define a custom accuracy function for regression (you can set a threshold for acceptable error)
def regression_accuracy(y_true, y_pred):
    threshold = 20  # Customize this threshold as per your needs
    error = K.abs(y_true - y_pred)
    correct_predictions = K.less_equal(error, threshold)  # True if error is within threshold
    accuracy = K.mean(K.cast(correct_predictions, K.floatx()))
    return accuracy
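
As a quick sanity check (the values below are illustrative, not from the dataset), the metric can be exercised on toy tensors. With the threshold of 20, two of the three predictions fall within the acceptable error:

# Illustrative check with made-up values: errors are 5, 30, and 0,
# so 2 of 3 predictions count as correct
y_true_demo = tf.constant([100.0, 200.0, 300.0])
y_pred_demo = tf.constant([105.0, 170.0, 300.0])
print(K.eval(regression_accuracy(y_true_demo, y_pred_demo)))  # ~0.6667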

# Recompile the trained model with the custom accuracy metric
model.compile(optimizer=Adam(learning_rate=0.001), loss='mse',
              metrics=['mae', regression_accuracy])

# The training logic above is unchanged; re-evaluate with the new metric
final_loss, final_mae, final_accuracy = model.evaluate([X_cnn, X_rnn, X_scaled], y)
print(f"Final Loss: {final_loss}, Final MAE: {final_mae}, "
      f"Final Accuracy: {final_accuracy * 100:.2f}%")
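
For completeness, a minimal sketch of inference with the trained model on a single new sample, assuming new_x is an unseen row with the same feature count (the variable name is illustrative):

# Hypothetical inference on one new sample (new_x is a stand-in)
new_x = X[:1]  # substitute an unseen row with the same feature count
new_x_scaled = scaler.transform(new_x)
new_x_seq = new_x_scaled.reshape(1, new_x_scaled.shape[1], 1)
prediction = model.predict([new_x_seq, new_x_seq, new_x_scaled])
print(prediction[0, 0])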

print("Name : P Tulasi Vardhan Reddy")

print("Reg.No : 22BCE7452")

OUTPUT :

MODEL 2 :

Using CNN-LSTM-SVM (Convolutional Neural Network + Long Short-Term Memory + Support Vector Machine)

Description :

This model integrates CNNs, Long Short-Term Memory (LSTM) networks, and Support Vector Machines (SVMs). The CNN extracts spatial or hierarchical features, while the LSTM handles sequential dependencies with better memory retention than standard RNNs. The final feature set is fed into an SVM (an SVR in the code below, since the target is continuous), leveraging its robustness with high-dimensional and non-linear data. This architecture is effective for tasks like gesture recognition, sentiment analysis, and spatiotemporal activity detection.
We used the cleaned dataset after removing irrelevant data.

PYTHON CODE:

import tensorflow as tf
import numpy as np
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, LSTM, Dense, Dropout
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split, KFold, GridSearchCV
from sklearn.metrics import mean_absolute_error, make_scorer
import tensorflow.keras.backend as K
from sklearn.preprocessing import StandardScaler
import pandas as pd

# Define custom regression accuracy for threshold-based evaluation
def regression_accuracy(y_true, y_pred):
    threshold = 20  # Customize this threshold as per your needs
    error = K.abs(y_true - y_pred)
    correct_predictions = K.less_equal(error, threshold)  # True if error is within threshold
    accuracy = K.mean(K.cast(correct_predictions, K.floatx()))
    return accuracy

# Data preprocessing: Standardization (scaling)
scaler = StandardScaler()
data = pd.read_excel('CLEANED_DATASET.xlsx')
data = data.dropna()
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values
X_scaled = scaler.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2,
                                                    random_state=42)
# Define CNN-LSTM model
def create_cnn_lstm():
    model = Sequential()
    model.add(Conv1D(filters=64, kernel_size=3, activation='relu',
                     input_shape=(X_train.shape[1], 1)))
    model.add(MaxPooling1D(pool_size=2))
    model.add(LSTM(50, activation='relu', return_sequences=True))
    model.add(LSTM(25, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(1))
    model.compile(optimizer=Adam(learning_rate=0.001), loss='mse',
                  metrics=['mae', regression_accuracy])
    return model

# Reshape data for CNN-LSTM input
X_train_cnn_lstm = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)
X_test_cnn_lstm = X_test.reshape(X_test.shape[0], X_test.shape[1], 1)

# K-fold cross-validation setup
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
best_mae = float('inf')
best_model = None

# Cross-validation for the CNN-LSTM model
for train_idx, val_idx in kfold.split(X_train_cnn_lstm):
    X_train_fold, X_val_fold = X_train_cnn_lstm[train_idx], X_train_cnn_lstm[val_idx]
    y_train_fold, y_val_fold = y_train[train_idx], y_train[val_idx]

    model = create_cnn_lstm()
    model.fit(X_train_fold, y_train_fold, epochs=50, batch_size=32, verbose=0)

    # Evaluate on validation set
    val_loss, val_mae, val_accuracy = model.evaluate(X_val_fold, y_val_fold,
                                                     verbose=0)

    # Keep the model with the lowest validation MAE
    if val_mae < best_mae:
        best_mae = val_mae
        best_model = model

# Use the best model's outputs as features for the SVM
train_features = best_model.predict(X_train_cnn_lstm)
test_features = best_model.predict(X_test_cnn_lstm)

# Hyperparameter tuning for SVM model
svm_model = SVR()
param_grid_svm = {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']}
grid_search_svm = GridSearchCV(estimator=svm_model,
                               param_grid=param_grid_svm,
                               scoring=make_scorer(mean_absolute_error,
                                                   greater_is_better=False),
                               cv=kfold)
grid_search_svm.fit(train_features, y_train)

# Final SVM model with best parameters
best_svm_model = grid_search_svm.best_estimator_
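
To see which combination the grid search selected, the chosen hyperparameters can be inspected (the value in the comment is hypothetical and data-dependent):

print(grid_search_svm.best_params_)  # e.g. {'C': 10, 'kernel': 'rbf'}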

# Model evaluation
y_pred = best_svm_model.predict(test_features)
final_mae = mean_absolute_error(y_test, y_pred)
final_accuracy = regression_accuracy(tf.constant(y_test, dtype=tf.float32),
                                     tf.constant(y_pred, dtype=tf.float32))

print(f"Final MAE: {final_mae}, Final Accuracy: {K.eval(final_accuracy) * 100:.2f}%")
print("Name : P Tulasi Vardhan Reddy")
print("Reg.No : 22BCE7452")

OUTPUT:
