Common Image Classification Metrics

1. Accuracy
The proportion of correctly classified instances out of the total instances.
Accuracy = (TP + TN) / (TP + TN + FP + FN)
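
As a quick illustration, accuracy can be computed with scikit-learn's accuracy_score; the label arrays below are made-up placeholders rather than the output of a real model, and the later snippets reuse the same toy labels.

from sklearn.metrics import accuracy_score

# Hypothetical ground-truth and predicted labels (1 = positive, 0 = negative)
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Fraction of predictions that match the true labels: 6 of 8
print(accuracy_score(y_true, y_pred))  # 0.75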

2. Precision
The proportion of true positive predictions out of all positive predictions made by the
model.
Precision = TP / (TP + FP)

3. Recall (Sensitivity or True Positive Rate)
The proportion of true positives out of all actual positive cases.
Recall = TP / (TP + FN)
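
A minimal sketch covering both precision (item 2) and recall, using scikit-learn's precision_score and recall_score on the same toy labels as above:

from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Here TP = 3, FP = 1, FN = 1
print(precision_score(y_true, y_pred))  # TP / (TP + FP) = 0.75
print(recall_score(y_true, y_pred))     # TP / (TP + FN) = 0.75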

4. F1-Score
The harmonic mean of precision and recall. It's a good metric when you want to balance
precision and recall.
F1-Score = 2 × (Precision × Recall) / (Precision + Recall)
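
Continuing the same toy example, f1_score returns the harmonic mean directly:

from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# 2 * (0.75 * 0.75) / (0.75 + 0.75)
print(f1_score(y_true, y_pred))  # 0.75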

5. Confusion Matrix
A matrix showing the actual versus predicted classifications. It helps in visualizing
performance across classes:
- True Positives (TP), False Positives (FP), True Negatives (TN), False Negatives (FN).
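
For the same toy labels, scikit-learn's confusion_matrix lays out these four counts, with rows as actual classes and columns as predicted classes:

from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# For binary labels the layout is [[TN, FP], [FN, TP]]
print(confusion_matrix(y_true, y_pred))
# [[3 1]
#  [1 3]]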

6. ROC Curve (Receiver Operating Characteristic)
A plot of the true positive rate (TPR) against the false positive rate (FPR) at various
threshold settings.

7. AUC-ROC (Area Under the ROC Curve)
A scalar value representing the entire area under the ROC curve. It helps measure how well
the model distinguishes between classes.
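
A small sketch covering both the curve (item 6) and its area: roc_curve returns the FPR/TPR points and roc_auc_score the scalar area. The probability scores below are invented for illustration.

from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical true labels and predicted probabilities for the positive class
y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # TPR vs. FPR at each threshold
print(fpr, tpr)
print(roc_auc_score(y_true, y_score))  # area under the curve: 0.75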

8. Logarithmic Loss (Log Loss)
Measures the performance of a classifier where the prediction is a probability value. Lower
log loss indicates better performance.
Log Loss = - (1/N) Σ [y_i log(p_i) + (1 - y_i) log(1 - p_i)]
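
A minimal sketch with scikit-learn's log_loss, again on invented labels and probabilities:

from sklearn.metrics import log_loss

# Hypothetical true labels and predicted probabilities of the positive class
y_true = [1, 0, 1, 1]
y_prob = [0.9, 0.2, 0.7, 0.6]

# Mean negative log-likelihood of the true labels under the predicted probabilities
print(log_loss(y_true, y_prob))  # ≈ 0.299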

9. Top-k Accuracy
The proportion of times the true label is within the top-k predicted probabilities.
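
A short NumPy sketch of the idea, assuming a made-up score matrix with one row per sample and one column per class:

import numpy as np

# Hypothetical class scores for 4 samples and 3 classes
y_true = np.array([0, 1, 2, 2])
scores = np.array([[0.5, 0.3, 0.2],
                   [0.2, 0.4, 0.4],
                   [0.6, 0.3, 0.1],
                   [0.1, 0.2, 0.7]])

k = 2
# Column indices of the k highest-scoring classes for each sample
top_k = np.argsort(scores, axis=1)[:, -k:]
hits = [y_true[i] in top_k[i] for i in range(len(y_true))]
print(np.mean(hits))  # 3 of 4 true labels fall in the top 2, so 0.75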

10. Specificity (True Negative Rate)
The proportion of true negatives out of all actual negative cases.
Specificity = TN / (TN + FP)
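
scikit-learn has no dedicated specificity function, but it falls out of the binary confusion matrix; the toy labels are the same as above.

from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn / (tn + fp))  # TN / (TN + FP) = 3 / 4 = 0.75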

11. Matthews Correlation Coefficient (MCC)
A balanced measure taking into account TP, TN, FP, and FN, useful for imbalanced datasets.
MCC = (TP × TN - FP × FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN))
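
With the same toy labels, matthews_corrcoef gives the coefficient directly:

from sklearn.metrics import matthews_corrcoef

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Ranges from -1 (total disagreement) to +1 (perfect prediction)
print(matthews_corrcoef(y_true, y_pred))  # 0.5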

12. Kappa Statistic (Cohen’s Kappa)
Measures agreement between predicted and actual labels, adjusted for chance.
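
A one-line computation with scikit-learn's cohen_kappa_score on the same toy labels:

from sklearn.metrics import cohen_kappa_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# 1.0 is perfect agreement, 0.0 is agreement expected by chance alone
print(cohen_kappa_score(y_true, y_pred))  # 0.5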

13. Balanced Accuracy
The average of recall obtained on each class, useful for imbalanced datasets.
Balanced Accuracy = (Recall_Positive + Recall_Negative) / 2  (binary case)
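
The same toy labels again, using balanced_accuracy_score:

from sklearn.metrics import balanced_accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Mean of per-class recall: (0.75 + 0.75) / 2
print(balanced_accuracy_score(y_true, y_pred))  # 0.75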

14. Hamming Loss
The fraction of labels that are predicted incorrectly, out of the total number of labels.
Hamming Loss = (1/N) Σ 1(y_i ≠ ŷ_i)
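
A short sketch in the multi-label setting, where each sample carries several binary labels; the indicator matrices below are invented:

import numpy as np
from sklearn.metrics import hamming_loss

# Hypothetical multi-label targets: 3 samples, 3 labels each
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 0, 0]])

# 2 of the 9 individual labels are wrong
print(hamming_loss(y_true, y_pred))  # ≈ 0.222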

15. Jaccard Index (Intersection over Union)
Measures the similarity between the predicted class set and the actual class set.
Jaccard Index = TP / (TP + FP + FN)
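
Back to the toy binary labels, jaccard_score implements exactly this ratio:

from sklearn.metrics import jaccard_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# TP / (TP + FP + FN) = 3 / (3 + 1 + 1)
print(jaccard_score(y_true, y_pred))  # 0.6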
