Avoidable bias

By knowing what human-level performance is, it is possible to tell whether the model is performing well on the training set or not.

Example: Cat vs Non-Cat

Classification error (%)

                       Scenario A    Scenario B
Humans                      1            7.5
Training error              8            8
Development error          10           10

In this case, the human-level error is used as a proxy for the Bayes error, since humans are good at identifying images. You want to improve the model's performance on the training set, but you cannot do better than the Bayes error; otherwise, the model is overfitting. By knowing the Bayes error, it is easier to decide whether bias reduction or variance reduction tactics will improve the performance of the model.

Scenario A
There is a 7% gap between the training error and the human-level error. This means the algorithm is not fitting the training set well, since the target is around 1%. To resolve the issue, we use a bias reduction technique, such as training a bigger neural network or training for longer.

Scenario B
The model is doing well on the training set, since there is only a 0.5% difference with the human-level error. The difference between the training error and the human-level error is called the avoidable bias. The focus here is on reducing the variance, since the difference between the training error and the development error is 2%. To resolve the issue, we use a variance reduction technique, such as regularization or collecting a bigger training set.
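
As a rough illustration of this decision rule, here is a minimal Python sketch (the diagnose function and its inputs are illustrative, not part of the original notes): compute the avoidable bias and the variance from the three error rates, then focus on whichever gap is larger.

    # Minimal sketch of the avoidable-bias vs. variance decision rule,
    # using human-level error as a proxy for the Bayes error.

    def diagnose(human_error, train_error, dev_error):
        """Return the two gaps and which problem to prioritize."""
        avoidable_bias = train_error - human_error   # gap to the Bayes-error proxy
        variance = dev_error - train_error           # gap between train and dev errors
        focus = "bias" if avoidable_bias > variance else "variance"
        return avoidable_bias, variance, focus

    # Scenario A: large gap to human-level error -> reduce bias
    print(diagnose(human_error=1.0, train_error=8.0, dev_error=10.0))   # (7.0, 2.0, 'bias')

    # Scenario B: small gap to human-level error -> reduce variance
    print(diagnose(human_error=7.5, train_error=8.0, dev_error=10.0))   # (0.5, 2.0, 'variance')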
