
UE22EC352B Machine Learning and Applications Lab

Jan-May 2025 VI Semester

Program 4
Aim: To generate a confusion matrix and compute the true positive, true negative, false
positive, and false negative counts.
Program to generate a confusion matrix and classification report:
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
from sklearn import metrics
import matplotlib.pyplot as plt

# Actual values: A = 1 = positive class, B = 0 = negative class
actual = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0]
# Predicted values
predicted = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1]

# Confusion matrix: with labels=[1, 0] the layout is [[TP, FN], [FP, TN]]
matrix = confusion_matrix(actual, predicted, labels=[1, 0])
print('Confusion matrix : \n', matrix)

acc = accuracy_score(actual, predicted)
print('Accuracy = ', acc)

# Per-class precision, recall, f1-score and support
report = classification_report(actual, predicted, labels=[1, 0])
print('Classification Report \n')
print(report)

# ROC curve: with hard 0/1 predictions (no scores), roc_curve
# yields only three points
fpr, tpr, _ = metrics.roc_curve(actual, predicted)
print('fpr = ', fpr)
print('tpr = ', tpr)
plt.plot(fpr, tpr)
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
OUTPUT
Confusion matrix :
[[3 3]
[1 4]]
Accuracy = 0.6363636363636364
Classification Report

              precision    recall  f1-score   support

           1       0.75      0.50      0.60         6
           0       0.57      0.80      0.67         5

    accuracy                           0.64        11
   macro avg       0.66      0.65      0.63        11
weighted avg       0.67      0.64      0.63        11

fpr = [0.  0.2 1. ]
tpr = [0.  0.5 1. ]
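
The aim asks for the four individual counts, which can be read off the matrix or unpacked directly. A minimal sketch, continuing from the arrays defined in the program above (with labels=[1, 0], ravel() returns the matrix entries row by row as TP, FN, FP, TN):

# Unpack the counts: row 0 holds the actual positives and row 1 the
# actual negatives, so ravel() yields TP, FN, FP, TN in that order.
tp, fn, fp, tn = confusion_matrix(actual, predicted, labels=[1, 0]).ravel()
print('TP =', tp, 'FN =', fn, 'FP =', fp, 'TN =', tn)

For the sample arrays above this prints TP = 3, FN = 3, FP = 1, TN = 4, matching the matrix [[3 3], [1 4]].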

Assignment:
1) Verify theoretically the entries of the classification report.
Note: f1-score = (2 * Precision * Recall) / (Precision + Recall). (A worked check for
class 1 is sketched after this list.)

2) Experiment with the following actual and predicted samples and verify the entries
of the classification report.
# Actual values: A = 1 = positive class, B = 0 = negative class
actual = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1]
# Predicted values
predicted = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0]
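
As a worked check for assignment 1 (not part of the original handout), the class-1 row of the report can be recomputed from the confusion-matrix counts of the first program; a minimal sketch in Python:

# Class-1 counts read off the confusion matrix above: TP = 3, FP = 1, FN = 3
tp, fp, fn = 3, 1, 3
precision = tp / (tp + fp)                          # 3/4 = 0.75
recall = tp / (tp + fn)                             # 3/6 = 0.50
f1 = 2 * precision * recall / (precision + recall)  # 0.60
print(precision, recall, f1)

These values match the 0.75 / 0.50 / 0.60 entries in the class-1 row of the report; the class-0 row follows the same way, with TN and FN taking the roles of TP and FP.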
