Lecture 11 Model Evaluation
MODEL EVALUATION AND SELECTION
Evaluation metrics: How can we measure accuracy?
Other metrics to consider?
CLASSIFIER EVALUATION METRICS: CONFUSION MATRIX
Confusion Matrix:
True Positives:
Positive tuples correctly classified as positive.
True Negatives:
Negative tuples correctly classified as negative.
False Positives:
Negative tuples incorrectly classified as positive.
False Negatives:
Positive tuples incorrectly classified as negative.
Actual class \ Predicted class     C1                      ¬C1
C1                                 True Positives (TP)     False Negatives (FN)
¬C1                                False Positives (FP)    True Negatives (TN)
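A minimal Python sketch of counting these four cells for a binary problem; the 0/1 label lists below are hypothetical examples, not data from the lecture:

```python
# Count the four confusion-matrix cells for a binary classifier.
# y_true and y_pred are hypothetical 0/1 labels (1 = positive class C1).
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives

print(f"TP={tp} FN={fn} FP={fp} TN={tn}")  # TP=3 FN=1 FP=1 TN=3
```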
PRECISION AND RECALL, AND F-MEASURES
Precision: exactness – what % of tuples that the classifier labeled as positive are actually positive.
Recall: completeness – what % of positive tuples the classifier labeled as positive.
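In terms of the confusion-matrix counts above, the two measures can be written as:

\[
\text{Precision} = \frac{TP}{TP + FP}
\qquad
\text{Recall} = \frac{TP}{TP + FN}
\]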
CONFUSION MATRIX FOR 3-CLASS CLASSIFICATION
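For a k-class problem the confusion matrix generalizes to a k × k table: entry (i, j) counts tuples of actual class i that were predicted as class j, so correct classifications lie on the diagonal. A minimal sketch of tallying such a matrix, assuming three classes labeled 0–2 and hypothetical label lists:

```python
# Tally a k-class confusion matrix: rows = actual class, columns = predicted class.
# y_true and y_pred are hypothetical labels for a 3-class problem.
y_true = [0, 0, 1, 1, 1, 2, 2, 0, 2, 1]
y_pred = [0, 1, 1, 1, 0, 2, 2, 0, 1, 1]

k = 3
matrix = [[0] * k for _ in range(k)]
for actual, predicted in zip(y_true, y_pred):
    matrix[actual][predicted] += 1

for row in matrix:
    print(row)
# Diagonal entries are correct classifications; off-diagonal entries are confusions.
```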
CLASSIFIER EVALUATION METRICS: PRECISION AND RECALL, AND F-MEASURES
The F-score combines the precision and recall of the model: it is defined as the harmonic mean of the model's precision and recall.
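As formulas (the weighted variant F_β, which uses β to weight recall relative to precision, is also standard):

\[
F_1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}
\qquad
F_\beta = \frac{(1 + \beta^2) \times \text{Precision} \times \text{Recall}}{\beta^2 \times \text{Precision} + \text{Recall}}
\]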
CLASSIFIER EVALUATION METRICS: EXAMPLE
Actual Class \ Predicted Class    cancer = yes    cancer = no    Total    Recognition (%)
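The Recognition (%) column is the per-class recognition rate: the fraction of each actual class that the classifier labeled correctly (sensitivity for cancer = yes, specificity for cancer = no). A minimal sketch of computing it from a 2 × 2 table; the counts below are hypothetical placeholders, not the slide's example:

```python
# Per-class recognition rate and overall accuracy from a 2x2 confusion matrix.
# These counts are hypothetical placeholders, not the lecture's actual numbers.
tp, fn = 80, 20    # actual class: cancer = yes
fp, tn = 30, 870   # actual class: cancer = no

recognition_yes = tp / (tp + fn) * 100  # sensitivity
recognition_no = tn / (fp + tn) * 100   # specificity
accuracy = (tp + tn) / (tp + fn + fp + tn) * 100

print(f"recognition(yes) = {recognition_yes:.2f}%")  # 80.00%
print(f"recognition(no)  = {recognition_no:.2f}%")   # 96.67%
print(f"accuracy         = {accuracy:.2f}%")         # 95.00%
```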