Data Mining Lab Manual
import numpy as np

# Create a 4x3 array and transpose it (rows become columns)
arr1 = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [23, 33, 45]])
print(f'Original Array:\n{arr1}')
arr1_transpose = arr1.transpose()
print(f'Transposed Array:\n{arr1_transpose}')

# Repeat with a second 4x3 array
arr2 = np.array([[10, 20, 30], [45, 78, 90], [1, 2, 3], [34, 67, 89]])
print(f'Original Array:\n{arr2}')
arr2_transpose = arr2.transpose()
print(f'Transposed Array:\n{arr2_transpose}')
Original Array:
[[ 1 2 3]
[ 4 5 6]
[ 7 8 9]
[23 33 45]]
Transposed Array:
[[ 1 4 7 23]
[ 2 5 8 33]
[ 3 6 9 45]]
Original Array:
[[10 20 30]
[45 78 90]
[ 1 2 3]
[34 67 89]]
Transposed Array:
[[10 45 1 34]
[20 78 2 67]
[30 90 3 89]]
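Note that NumPy also exposes the transpose as the .T attribute; the short snippet below is only an aside (not part of the original exercise) showing that it gives the same result as transpose():

import numpy as np

arr = np.array([[1, 2, 3], [4, 5, 6]])
# arr.T is shorthand for arr.transpose(); both return the transposed view
print(np.array_equal(arr.T, arr.transpose()))  # True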
In [11]:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
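The imports above set up a K-Means example, but the body of the cell is not shown in this extract. A minimal sketch of what such a cell might contain follows; the blob parameters and the choice of k=3 are assumptions, not the original exercise values:

# Generate synthetic 2-D data (assumed parameters) and cluster it with K-Means
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=42)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

# Plot the points coloured by cluster, with the learned centroids marked
plt.scatter(X[:, 0], X[:, 1], c=labels, cmap='viridis', s=20)
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
            c='red', marker='x', s=100, label='Centroids')
plt.legend()
plt.title('K-Means clustering on synthetic blobs')
plt.show()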
In [12]:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

# Placeholder training data (the original cell's data is not shown in this extract)
X = np.array([[0], [1], [2], [3], [4]])
y = 2 * X.ravel() + 1
model = LinearRegression().fit(X, y)

# Predictions
X_new = np.array([[0], [2]])
y_pred = model.predict(X_new)
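Since matplotlib is imported, the fitted line can also be visualised. A brief sketch, assuming the placeholder data above:

# Plot the training points and the line fitted through the two predicted points
plt.scatter(X, y, label='Training data')
plt.plot(X_new, y_pred, color='red', label='Fitted line')
plt.xlabel('X')
plt.ylabel('y')
plt.legend()
plt.show()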
# One-hot encode the transaction data (basket and encode_units come from
# earlier preprocessing steps not shown in this extract)
basket_sets = basket.applymap(encode_units)
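The rule table below matches the column layout of mlxtend's association_rules output. A minimal sketch of the mining step that could produce it (the min_support and confidence thresholds are assumptions):

from mlxtend.frequent_patterns import apriori, association_rules

# Mine frequent itemsets from the encoded basket, then derive association rules
frequent_itemsets = apriori(basket_sets, min_support=0.2, use_colnames=True)
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.5)

print("Association Rules:")
print(rules)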
Association Rules:
  antecedents consequents  antecedent support  consequent support  support
0         (D)         (B)                0.25                0.75     0.25
1         (B)         (D)                0.75                0.25     0.25
2         (D)         (C)                0.25                0.75     0.25
3         (C)         (D)                0.75                0.25     0.25
4      (D, C)         (B)                0.25                0.75     0.25
5      (D, B)         (C)                0.25                0.75     0.25
6      (C, B)         (D)                0.50                0.25     0.25
7         (D)      (C, B)                0.25                0.50     0.25
8         (C)      (D, B)                0.75                0.25     0.25
9         (B)      (D, C)                0.75                0.25     0.25
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report
# Split the data into training and testing sets (80% train, 20% test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# After fitting a classifier on X_train and predicting y_pred on X_test:
print("Accuracy:", accuracy_score(y_test, y_pred))
print("Classification Report:")
print(classification_report(y_test, y_pred))
Output:
Accuracy: 0.48
Classification Report:
              precision    recall  f1-score   support

           0       0.89      0.67      0.76        36
           1       0.13      0.22      0.17         9
           2       0.12      0.20      0.15         5
           3       0.25      0.29      0.27         7
           4       0.00      0.00      0.00         3

    accuracy                           0.48        60
   macro avg       0.28      0.27      0.27        60
weighted avg       0.59      0.48      0.53        60
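The model and dataset that produced this report are not shown in the extract. A hedged end-to-end sketch using a DecisionTreeClassifier on synthetic data (the classifier choice and the data source are assumptions, so the numbers will differ) would look like:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, classification_report

# Synthetic five-class data standing in for the lab dataset (an assumption)
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train the classifier and evaluate it on the held-out test set
clf = DecisionTreeClassifier(random_state=42)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("Accuracy:", accuracy_score(y_test, y_pred))
print("Classification Report:")
print(classification_report(y_test, y_pred))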