Ex. No: 3
Date:
Implement the analysis of X-ray images using autoencoders

AIM:
Develop an autoencoder to detect anomalies in grayscale X-ray images by training it to
reconstruct normal images and identifying anomalies based on reconstruction errors.

ALGORITHM:

● Load and pre-process grayscale X-ray images (resize, normalize).
● Split the data into training and validation sets.
● Encoder: use convolutional and max-pooling layers.
● Decoder: use convolutional and up-sampling layers, with a sigmoid activation for reconstruction.
● Compile and train the autoencoder on the training set; validate with the validation set.
● Calculate the reconstruction error (MSE) for all images.
● Set the anomaly threshold to the mean MSE plus one standard deviation (see the sketch after this list).
● Classify images with MSE above the threshold as anomalies.
● Compute accuracy, precision, recall, F1-score, and the confusion matrix.
● Compare original and reconstructed images to assess reconstruction quality.
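
As a reference for the thresholding step, here is a minimal sketch of the anomaly rule, assuming x and x_hat are flattened image arrays of shape (n_samples, n_pixels):

import numpy as np

# Per-image reconstruction error: average squared pixel difference
mse = np.mean((x - x_hat) ** 2, axis=1)
# Threshold: mean error plus one standard deviation across the dataset
threshold = mse.mean() + mse.std()
# Images the autoencoder reconstructs poorly are flagged as anomalies
is_anomaly = mse > threshold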

CODE:
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix
import os
import matplotlib.pyplot as plt
xray_dir = "C:/Users/admin/Downloads/archive (2)/dataset/normal"
if not os.path.exists(xray_dir):
    raise ValueError(f"Directory '{xray_dir}' does not exist.")
files = [filename for filename in os.listdir(xray_dir) if filename.endswith(".jpeg")]
if len(files) == 0:
    raise ValueError(f"No JPEG files found in directory '{xray_dir}'.")
x_data = []
for filename in files:
    img_path = os.path.join(xray_dir, filename)
    img = load_img(img_path, target_size=(60, 60), color_mode="grayscale")
    img_array = img_to_array(img)
    x_data.append(img_array)
x_data = np.array(x_data) / 255.0
x_train, x_val = train_test_split(x_data, test_size=0.33, random_state=42)
input_shape = x_train.shape[1:]
encoder = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same', input_shape=input_shape),
    tf.keras.layers.MaxPooling2D((2, 2), padding='same'),
    tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same'),
    tf.keras.layers.MaxPooling2D((2, 2), padding='same'),
    tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same'),
    tf.keras.layers.MaxPooling2D((2, 2), padding='same')  # 60x60 -> 30x30 -> 15x15 -> 8x8
])
decoder = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same'),
    tf.keras.layers.UpSampling2D((2, 2)),  # 8x8 -> 16x16
    tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same'),
    tf.keras.layers.UpSampling2D((2, 2)),  # 16x16 -> 32x32
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu'),  # 'valid' padding trims 32x32 to 30x30
    tf.keras.layers.UpSampling2D((2, 2)),  # 30x30 -> 60x60, matching the input size
    tf.keras.layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same')
])
autoencoder = tf.keras.models.Sequential([
    encoder,
    decoder
])
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train, epochs=10, batch_size=32, validation_data=(x_val, x_val))
reconstructed_images = autoencoder.predict(x_data)
x_data_flat = x_data.reshape(x_data.shape[0], -1)
reconstructed_flat = reconstructed_images.reshape(reconstructed_images.shape[0], -1)
mse = np.mean(np.square(x_data_flat - reconstructed_flat), axis=1)  # per-image mean squared error
threshold = np.mean(mse) + np.std(mse)  # anomaly threshold: mean MSE plus one standard deviation
anomaly_pred = np.where(mse > threshold, 1, 0)
# Only normal images are used here, so the ground-truth labels are all zeros;
# zero_division=0 avoids undefined-metric warnings for the empty positive class.
accuracy = accuracy_score(np.zeros_like(anomaly_pred), anomaly_pred)
precision = precision_score(np.zeros_like(anomaly_pred), anomaly_pred, zero_division=0)
recall = recall_score(np.zeros_like(anomaly_pred), anomaly_pred, zero_division=0)
f1 = f1_score(np.zeros_like(anomaly_pred), anomaly_pred, zero_division=0)
conf_matrix = confusion_matrix(np.zeros_like(anomaly_pred), anomaly_pred)
print(f"Accuracy: {accuracy:.2f}")
print(f"Precision: {precision:.2f}")
print(f"Recall: {recall:.2f}")
print(f"F1-score: {f1:.2f}")
print(f"Confusion Matrix:\n{conf_matrix}")
n = 5
plt.figure(figsize=(10, 4))
for i in range(n):
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_data[i].squeeze(), cmap='gray')
    plt.title('Original')
    plt.axis('off')
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(reconstructed_images[i].squeeze(), cmap='gray')
    plt.title('Reconstructed')
    plt.axis('off')
plt.tight_layout()
plt.show()

OUTPUT:

Accuracy: 0.80
Precision: 0.00
Recall: 0.00
F1-score: 0.00
Confusion Matrix:
[[20  5]
 [ 0  0]]

RESULT:
Thus the program for X-ray image analysis using autoencoders was implemented successfully and the
output was verified.
Ex. No: 4
Date:
Develop code to design object detection and classification for traffic analysis using a CNN

AIM:
Develop a Convolutional Neural Network (CNN) using TensorFlow/Keras to classify traffic sign
images into 43 distinct categories.

ALGORITHM:

● Define the class labels for the different traffic signs.
● Read image files from the training directory and associate each image with its respective label.
● Load images, resize them to 32x32 pixels, and normalize pixel values to [0, 1].
● Store the processed images and corresponding labels in arrays.
● Divide the dataset into training and testing sets (75% training, 25% testing).
● Input layer: 32x32x3 images.
● Apply 3x3 convolutions with 32, 64, and 128 filters respectively.
● Use ReLU activation and 'same' padding.
● Apply 2x2 max pooling after each convolutional layer.
● Add dropout layers to reduce overfitting.
● Apply global average pooling to reduce the spatial dimensions.
● Dense layer with 512 units and ReLU activation.
● Final Dense layer with 43 units (one per class) and softmax activation.
● Use the Adam optimizer and the sparse categorical cross-entropy loss function.
● Track accuracy as the evaluation metric.
● Fit the model on the training data and validate on the test data for 40 epochs with a batch size of 32.
● Add code to visualize training history and model performance, save the trained model, etc., if
required (see the sketch after the code).

CODE:
import numpy as np
import pandas as pd
import cv2
import matplotlib.pyplot as plt
import os
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow.keras import layers, models
classes = {
    0: 'Speed limit (20km/h)', 1: 'Speed limit (30km/h)', 2: 'Speed limit (50km/h)',
    3: 'Speed limit (60km/h)', 4: 'Speed limit (70km/h)', 5: 'Speed limit (80km/h)',
    6: 'End of speed limit (80km/h)', 7: 'Speed limit (100km/h)', 8: 'Speed limit (120km/h)',
    9: 'No passing', 10: 'No passing veh over 3.5 tons', 11: 'Right-of-way at intersection',
    12: 'Priority road', 13: 'Yield', 14: 'Stop', 15: 'No vehicles',
    16: 'Veh > 3.5 tons prohibited', 17: 'No entry', 18: 'General caution',
    19: 'Dangerous curve left', 20: 'Dangerous curve right', 21: 'Double curve',
    22: 'Bumpy road', 23: 'Slippery road', 24: 'Road narrows on the right',
    25: 'Road work', 26: 'Traffic signals', 27: 'Pedestrians', 28: 'Children crossing',
    29: 'Bicycles crossing', 30: 'Beware of ice/snow', 31: 'Wild animals crossing',
    32: 'End speed + passing limits', 33: 'Turn right ahead', 34: 'Turn left ahead',
    35: 'Ahead only', 36: 'Go straight or right', 37: 'Go straight or left',
    38: 'Keep right', 39: 'Keep left', 40: 'Roundabout mandatory',
    41: 'End of no passing', 42: 'End no passing veh > 3.5 tons'
}
train_path = r'C:\Users\admin\Downloads\archive (8)\Train'
label_of_file = []
img_list = []
for kind in classes:
    kind_path = os.path.join(train_path, str(kind))
    if not os.path.isdir(kind_path):
        continue
    for img in os.listdir(kind_path):
        img_list.append(os.path.join(kind_path, img))
        label_of_file.append(kind)
df = pd.DataFrame({'img': img_list, 'label': label_of_file})
x = []
for img in df['img']:
    try:
        img_data = cv2.imread(img)
        img_data = cv2.resize(img_data, (32, 32))  # resize to 32x32
        img_data = img_data / 255.0  # normalize to [0, 1]
        x.append(img_data)
    except Exception as e:
        print(f"Error loading image: {img}, Exception: {e}")
x = np.array(x)
y = np.array(df['label'])
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=42)
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3), padding='same'),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Conv2D(64, (3, 3), activation='relu', padding='same'),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Conv2D(128, (3, 3), activation='relu', padding='same'),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.GlobalAveragePooling2D(),
    layers.Dense(512, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(43, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
history = model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=40, batch_size=32)
if __name__ == "__main__":
    # Optional: visualize training history, save the model, etc. (see the sketch below)
    pass
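
The optional hook above can be filled in along these lines. This is a minimal sketch, assuming the model, history, x_test, and y_test objects from the code above; the saved filename is illustrative:

test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print(f"Test accuracy: {test_acc * 100:.2f}%")  # the figure reported under OUTPUT

plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Model Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

model.save('traffic_sign_cnn.h5')  # illustrative filename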

OUTPUT:

Test accuracy: 98.85%

RESULT:
Thus the program to design object detection and classification for traffic analysis using a CNN was
executed and the output was verified.
Ex. No: 5
Date:
Implement online fraud detection of share market data using data analytics tool

AIM:
Develop a neural network model to classify transactions as fraudulent or legitimate using
TensorFlow/Keras.

ALGORITHM:
● Import the dataset from a CSV file.
● Print the first few rows, summary statistics, and counts of transaction types and fraud cases.
● Encode the 'type' column into numerical values (see the sketch after this list).
● Define features (X) and target variable (y).
● Apply StandardScaler to normalize feature values.
● Split the dataset into training and testing sets (80% training, 20% testing).
● Initialize a Sequential model.
● Add Dense layers: 64 units (ReLU activation), 32 units (ReLU activation), and 1 unit (sigmoid
activation) for binary classification.
● Use Adam optimizer and binary cross-entropy loss.
● Fit the model on the training data for 20 epochs with a batch size of 32, and validate on a subset
of the training data.
● Evaluate the model on the test set and print the test accuracy.
● Generate predictions, compute the confusion matrix, and print the classification report.
● Plot training and validation accuracy and loss over epochs to visualize model
performance.
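
As a reference for the encoding step, here is a minimal sketch with made-up values (pandas assigns integer codes in alphabetical order of the category strings):

import pandas as pd

s = pd.Series(['PAYMENT', 'TRANSFER', 'CASH_OUT', 'PAYMENT'])
print(s.astype('category').cat.codes.tolist())  # [1, 2, 0, 1]
# StandardScaler then standardizes each feature column: z = (x - mean) / std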

CODE:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.metrics import classification_report, confusion_matrix

df = pd.read_csv('C:/Users/admin/Downloads/archive(3)/onlinefraud.csv')
print("Data Overview:")
print(df.head())
print("\nSummary Statistics:")
print(df.describe())
print("\nTransaction Types Count:")
print(df['type'].value_counts())
print("\nFraud Cases Count:")
print(df['isFraud'].value_counts())
df['type'] = df['type'].astype('category').cat.codes
X = df[['step', 'type', 'amount', 'oldbalanceOrg', 'newbalanceOrig', 'oldbalanceDest', 'newbalanceDest']]
y = df['isFraud']
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, random_state=42)
model = Sequential()
model.add(Dense(64, input_dim=X_train.shape[1], activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=20, batch_size=32, validation_split=0.2, verbose=1)
loss, accuracy = model.evaluate(X_test, y_test)
print(f"\nTest Accuracy: {accuracy:.4f}")

y_pred = (model.predict(X_test) > 0.5).astype(int)


print("\nConfusion Matrix:")
print(confusion_matrix(y_test, y_pred))
print("\nClassification Report:")
print(classification_report(y_test, y_pred))

plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Model Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Model Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()

OUTPUT:

Test Accuracy: 0.9996


Confusion Matrix:
[[1270881      23]
 [    464    1156]]
Classification Report:
              precision    recall  f1-score   support

           0       1.00      1.00      1.00   1270904
           1       0.98      0.71      0.83      1620

    accuracy                           1.00   1272524
   macro avg       0.99      0.86      0.91   1272524
weighted avg       1.00      1.00      1.00   1272524

RESULT:
Thus the implementation of online fraud detection of share market data using a data analytics tool was
executed and the output was verified.
