Result Analysis
PRESENTED BY
SafeNet AI
SAFENET AI
CONTENTS
• Confusion Matrix
• Visualization Libraries
• Basic Visualization
• Advanced & Interactive Visualization
• Tools
• Best Practices
• Resources
CONFUSION MATRIX
A confusion matrix is a way to evaluate the performance of a
classification model by comparing its predictions with the
actual class labels.
CONFUSION MATRIX
Components of a Confusion Matrix:
True Positives (TP): Instances where the model correctly predicts the
positive class.
False Positives (FP): Instances where the model incorrectly predicts the
positive class.
True Negatives (TN): Instances where the model correctly predicts the
negative class.
False Negatives (FN): Instances where the model incorrectly predicts the
negative class.
CONFUSION MATRIX
A binary classification problem: predicting whether an email is spam or not,
using a simple dataset.
Actual / Predicted     Spam (Positive)    Not Spam (Negative)
Spam (Positive)        20 (TP)            3 (FN)
Not Spam (Negative)    5 (FP)             72 (TN)
Interpretation:
True Positives (TP): The model correctly identified 20 spam emails.
True Negatives (TN): The model correctly identified 72 non-spam emails.
False Positives (FP): The model incorrectly classified 5 non-spam emails
as spam.
False Negatives (FN): The model missed 3 actual spam emails.
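The four counts above can be tallied directly from pairs of actual and predicted labels. A minimal sketch in plain Python (the six emails here are tiny illustrative stand-ins, not the dataset behind the counts above):

```python
def confusion_counts(y_true, y_pred):
    """Return (TP, FP, TN, FN) for binary labels, where 1 = positive (spam)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

# Six toy emails: 1 = spam, 0 = not spam.
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
print(confusion_counts(y_true, y_pred))  # → (2, 1, 2, 1)
```

In practice the same counts are usually obtained from a library routine such as scikit-learn's `confusion_matrix`, but the hand-rolled version makes the four cell definitions explicit.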
ACCURACY
Accuracy is a fundamental metric in machine learning that
measures the overall correctness of a model's predictions.
Formula:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
ACCURACY
A binary classification problem of predicting whether an image
contains a cat or not. The confusion matrix for our model is as
follows
Actual / Predicted     Cat (Positive)    Not Cat (Negative)
Cat (Positive)         85 (TP)           5 (FN)
Not Cat (Negative)     10 (FP)           900 (TN)
Interpretation:
True Positives (TP): The model correctly identified 85 images containing cats.
True Negatives (TN): The model correctly identified 900 images without cats.
False Positives (FP): The model incorrectly classified 10 images without cats as
containing cats.
False Negatives (FN): The model missed 5 images containing cats.
Calculation:
Accuracy = (85 + 900) / (85 + 10 + 5 + 900) = 985 / 1000 = 0.985
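Recomputing the accuracy from the four counts of the cat-image example, as a quick sanity check on the formula:

```python
# Counts from the cat-image example above.
tp, fp, fn, tn = 85, 10, 5, 900

accuracy = (tp + tn) / (tp + tn + fp + fn)
print(round(accuracy, 3))  # → 0.985
```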
PRECISION
Precision is a metric in machine learning that measures the
accuracy of positive predictions made by a model.
Formula:
Precision = TP / (TP + FP)
PRECISION
A binary classification problem of predicting whether an email
is spam or not.
Actual / Predicted     Spam (Positive)    Not Spam (Negative)
Spam (Positive)        120 (TP)           10 (FN)
Not Spam (Negative)    15 (FP)            850 (TN)
Interpretation:
True Positives (TP): The model correctly identified 120 spam emails.
False Positives (FP): The model incorrectly classified 15 non-spam emails as
spam.
True Negatives (TN): The model correctly identified 850 non-spam emails.
False Negatives (FN): The model missed 10 actual spam emails.
Calculation:
Precision = 120 / (120 + 15) = 0.889
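The same calculation for the spam example, written out in code:

```python
# Counts from the spam example above.
tp, fp = 120, 15

precision = tp / (tp + fp)
print(round(precision, 3))  # → 0.889
```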
RECALL
Recall, also known as sensitivity or true positive rate, is a
metric in machine learning that measures the ability of a model
to capture all the relevant instances of the positive class.
Formula:
Recall (Sensitivity) = TP / (TP + FN)
RECALL
A binary classification problem of predicting whether a medical
test correctly identifies a disease.
Actual / Predicted      Disease (Positive)    No Disease (Negative)
Disease (Positive)      90 (TP)               15 (FN)
No Disease (Negative)   5 (FP)                890 (TN)
Interpretation:
True Positives (TP): The model correctly identified 90 instances of the disease.
False Negatives (FN): The model missed 15 instances of the disease.
True Negatives (TN): The model correctly identified 890 instances without the
disease.
False Positives (FP): The model incorrectly classified 5 instances without the
disease as positive.
Calculation:
Recall (Sensitivity) = 90 / (90 + 15) = 0.857
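And the recall calculation for the disease example, in code:

```python
# Counts from the disease example above.
tp, fn = 90, 15

recall = tp / (tp + fn)
print(round(recall, 3))  # → 0.857
```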
F1-SCORE
The F1 Score is a metric in machine learning that combines
precision and recall into a single value, providing a balance
between the two.
Formula:
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)
F1-SCORE
A binary classification problem of identifying whether a
transaction is fraudulent
Actual / Predicted     Fraud (Positive)    Not Fraud (Negative)
Fraud (Positive)       70 (TP)             15 (FN)
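Since the F1 Score is built from precision and recall rather than raw counts, it can be sketched as a small function. The input values below are purely illustrative (precision from the earlier spam example, recall from the disease example):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Illustrative inputs: precision from the spam example,
# recall from the disease example.
print(round(f1_score(0.889, 0.857), 3))  # → 0.873
```

Because it is a harmonic mean, F1 is pulled toward the lower of the two inputs, so a model cannot score well by excelling at only one of precision or recall.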
RESOURCES
• https://www.analyticsvidhya.com/blog/2020/04/confusion-matrix-machine-learning/
• https://towardsdatascience.com/understanding-confusion-matrix-a9ad42dcfd62