F1-Optimal Thresholding in The Multi-Label Setting

Zachary C. Lipton, Charles Elkan, and Balakrishnan Naryanaswamy
1 Introduction
Performance metrics are useful for comparing the quality of predictions across
systems. Some commonly used metrics for binary classification are accuracy,
precision, recall, F1 score, and Jaccard index [15]. Multilabel classification is
an extension of binary classification that is currently an area of active research
in supervised machine learning [18]. Micro averaging, macro averaging, and per
instance averaging are three commonly used variants of the F1 score in the
multilabel setting. In general, macro averaging increases the impact on final
score of performance on rare labels, while per instance averaging increases the
importance of performing well on each example [17]. In this paper, we present
theoretical and experimental results on the properties of the F1 metric.1
1. For concreteness, the results of this paper are given specifically for the F1 metric and its multilabel variants. However, the results can be generalized to Fβ metrics for β ≠ 1.
Two approaches exist for optimizing performance on F1. Structured loss min-
imization incorporates the performance metric into the loss function and then
optimizes during training. In contrast, plug-in rules convert the numerical out-
puts of a classifier into optimal predictions [5]. In this paper, we highlight the
latter scenario to differentiate between the beliefs of a system and the predictions
selected to optimize alternative metrics. In the multilabel case, we show that the
same beliefs can produce markedly dissimilar optimally thresholded predictions
depending upon the choice of averaging method.
That F1 is asymmetric in the positive and negative class is well known: given complemented predictions and actual labels, F1 may award a different score. It is also generally known that micro F1 is affected less by performance on rare labels, while macro F1 weighs the F1 of each label equally [11]. In this pa-
per, we show how these properties are manifest in the optimal decision-making
thresholds and introduce a theorem to describe that threshold. Additionally,
we demonstrate that given an uninformative classifier, optimal thresholding to
maximize F1 predicts all instances positive regardless of the base rate.
While F1 is widely used, some of its properties are not widely recognized.
In particular, when choosing predictions to maximize the expectation of F1 for
a batch of examples, each prediction depends not only on the probability that
the label applies to that example, but also on the distribution of probabilities
for all other examples in the batch. We quantify this dependence in Theorem 1,
where we derive an expression for optimal thresholds. The dependence makes it
difficult to relate predictions that are optimally thresholded for F1 to a system’s
predicted probabilities.
We show that the difference in F1 score between perfect predictions and
optimally thresholded random guesses depends strongly on the base rate. As
a result, assuming optimal thresholding and a classifier outputting calibrated
probabilities, predictions on rare labels typically get a score anywhere between close to zero and one, while scores on common labels will always be high. In this sense,
macro average F1 can be argued not to weigh labels equally, but actually to give
greater weight to performance on rare labels.
As a case study, we consider tagging articles in the biomedical literature with
MeSH terms, a controlled vocabulary of 26,853 labels. These labels have hetero-
geneously distributed base rates. We show that if the predictive features for rare
labels are lost (because of feature selection or another cause) then the optimal
threshold to maximize macro F1 leads to predicting these rare labels frequently.
For the case study application, and likely for similar ones, this behavior is far
from desirable.
Suppose a model outputs an estimate of the probability of each label applying to each instance, given the feature vector. For a batch of data of dimension n × d, the model outputs an n × m matrix C of probabilities. In the single-label setting, m = 1 and C is an n × 1 matrix, i.e. a column vector.
A decision rule $D(C) : \mathbb{R}^{n \times m} \rightarrow \{0, 1\}^{n \times m}$ converts a matrix of probabilities C to binary predictions P. The gold standard $G \in \mathbb{R}^{n \times m}$ represents the true values of all labels for all instances in a given batch. A performance metric M assigns a score to a prediction given a gold standard:
$$M(P \mid G) : \{0, 1\}^{n \times m} \times \{0, 1\}^{n \times m} \rightarrow [0, 1].$$
The counts of true positives tp, false positives fp, false negatives fn, and true negatives tn are represented via a confusion matrix (Figure 1).
Precision p = tp/(tp + fp) is the fraction of all positive predictions that are true positives, while recall r = tp/(tp + fn) is the fraction of all actual positives that are predicted positive. By definition the F1 score is the harmonic mean of precision and recall: F1 = 2/(1/r + 1/p). By substitution, F1 can be expressed as a function of counts of true positives, false positives and false negatives:
$$F1 = \frac{2tp}{2tp + fp + fn}. \qquad (1)$$
The harmonic mean expression for F1 is undefined when tp = 0, but the translated expression is defined. This difference does not impact the results below.
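As a concrete illustration of equation (1), here is a minimal Python sketch, not tied to any particular library, that computes precision, recall, and F1 directly from the counts.

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from raw counts.

    Uses the translated expression F1 = 2*tp / (2*tp + fp + fn),
    which remains defined when tp = 0 (as long as fp + fn > 0).
    """
    precision = tp / (tp + fp) if tp + fp > 0 else 0.0
    recall = tp / (tp + fn) if tp + fn > 0 else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if 2 * tp + fp + fn > 0 else 0.0
    return precision, recall, f1

# Example: 8 true positives, 2 false positives, 4 false negatives.
print(precision_recall_f1(8, 2, 4))  # (0.8, 0.666..., 0.727...)
```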
Fig. 2: Holding base rate and fp constant, F1 is concave in tp. Each line is a different value of fp.

Fig. 3: Unlike F1, accuracy offers linearly increasing returns. Each line is a fixed value of fp.
We define fp and fn analogously and calculate the final score using (1). Macro F1, which can also be called per-label F1, calculates the F1 for each of the m labels and averages them:
$$F1_{Macro}(P \mid G) = \frac{1}{m} \sum_{j=1}^{m} F1(P_{:j}, G_{:j}).$$
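To make the averaging variants concrete, the following NumPy sketch (ours, with hypothetical example matrices) computes micro-averaged and macro-averaged F1 from binary prediction and gold-standard matrices P and G of shape n × m.

```python
import numpy as np

def f1_from_counts(tp, fp, fn):
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 0.0

def micro_macro_f1(P, G):
    """P, G: boolean arrays of shape (n, m) -- predictions and gold standard."""
    tp = (P & G).sum(axis=0)    # per-label true positives
    fp = (P & ~G).sum(axis=0)   # per-label false positives
    fn = (~P & G).sum(axis=0)   # per-label false negatives
    micro = f1_from_counts(tp.sum(), fp.sum(), fn.sum())           # pool counts over labels
    macro = np.mean([f1_from_counts(*counts) for counts in zip(tp, fp, fn)])
    return micro, macro

P = np.array([[1, 0], [1, 1], [0, 0]], dtype=bool)
G = np.array([[1, 0], [0, 1], [0, 1]], dtype=bool)
print(micro_macro_f1(P, G))
```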
Fig. 4: For fixed base rate, F1 is a non-linear function with only two degrees of freedom.
3 Prior Work
(Figure: F1 score as a function of true negatives.)

(Figure: Expected F1 score as a function of the percent predicted positive.)
$$fp = (1 - b) \int_{s : D(s) = 1} p(s \mid t = 0)\, ds$$
$$tn = (1 - b) \int_{s : D(s) = 0} p(s \mid t = 0)\, ds.$$
The following theorem describes the optimal decision rule that maximizes F1.
Theorem 1. A score s is assigned to the positive class, that is D(s) = 1, by a classifier that maximizes F1 if and only if
$$\frac{b \cdot p(s \mid t = 1)}{(1 - b) \cdot p(s \mid t = 0)} \ge J \qquad (2)$$
where $J = \frac{tp}{fn + tp + fp}$ is the Jaccard index of the optimal classifier, with ambiguity given equality in (2).
Before we provide the proof of this theorem, we note the difference between
the rule in (2) and conventional cost-sensitive decision making [7] or Neyman-
Pearson detection. In the latter, the right hand side J is replaced by a constant
λ that depends only on the costs of 0 − 1 and 1 − 0 classification errors, and not
on the performance of the classifier on the entire batch. We will later elaborate
on this point, and describe how this relationship leads to potentially undesirable
thresholding behavior for many applications in the multilabel setting.
Proof. Divide the domain of s into regions of size ∆. Suppose that the decision rule D(·) has been fixed for all regions except a particular region, denoted ∆, around a point (with some abuse of notation) s. Write $P_1(\Delta) = \int_{\Delta} p(s \mid t = 1)\, ds$ and define $P_0(\Delta)$ similarly.
Suppose that the F1 achieved with decision rule D for all scores besides D(∆) is $F1 = \frac{2tp}{2tp + fn + fp}$. Now, if we add ∆ to the positive part of the decision rule, D(∆) = 1, then the new F1 score will be
$$F1' = \frac{2tp + 2bP_1(\Delta)}{2tp + 2bP_1(\Delta) + fn + fp + (1 - b)P_0(\Delta)}.$$
On the other hand, if we add ∆ to the negative part of the decision rule, D(∆) = 0, then the new F1 score will be
$$F1'' = \frac{2tp}{2tp + fn + bP_1(\Delta) + fp}.$$
We add ∆ to the positive class only if F1' ≥ F1''. With some algebraic simplification, this condition becomes
$$\frac{bP_1(\Delta)}{(1 - b)P_0(\Delta)} \ge \frac{tp}{tp + fn + fp}.$$
If, as a special case, the model outputs calibrated probabilities, that is p(t = 1|s) = s and p(t = 0|s) = 1 − s, then we have the following corollary.

Corollary 1. A classifier that outputs calibrated probabilities maximizes F1 by assigning the positive class to exactly those scores s with s ≥ F/2, where F is the maximum achievable F1.

Proof. Using the definition of calibration and then Bayes' rule, condition (2) for assigning a score s to the positive class becomes a condition on s alone. Simplifying results in
$$s \ge \frac{tp}{2tp + fn + fp} = \frac{F1}{2}.$$
Thus, the optimal threshold in the calibrated case is half the maximum F1.
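This corollary is easy to check numerically. The following sketch is our own construction, under an assumed simulation setup (Beta-distributed calibrated scores with labels sampled accordingly); it scans all candidate thresholds and compares the empirically F1-maximizing threshold to half the maximum F1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
s = rng.beta(1, 10, size=n)      # calibrated scores with a low implied base rate
t = rng.random(n) < s            # labels sampled so that p(t = 1 | s) = s

# Predict the k highest-scoring examples positive, for every k, and track F1.
order = np.argsort(-s)
s_sorted, t_sorted = s[order], t[order]
tp = np.cumsum(t_sorted)
fp = np.cumsum(~t_sorted)
fn = t.sum() - tp
f1 = 2 * tp / (2 * tp + fp + fn)

k_best = int(np.argmax(f1))
print("maximum F1:          ", f1[k_best])
print("threshold at max F1: ", s_sorted[k_best])
print("half the maximum F1: ", f1[k_best] / 2)  # should be close to the threshold above
```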
Above, we assume that scores have a distribution conditioned on the true
class. Using the intuition in the proof of Theorem 1, we can also derive an
empirical version of the result. To save space, we provide a more general version
of the empirical result in the next section for multilabel problems, noting that a
similar non-probabilistic statement holds for the single label setting as well.
The above result can be extended to the multilabel setting with dependence. We
give a different proof that confirms the optimal threshold for empirical maxi-
mization of F1.
We first present an algorithm from [6]. Let s be the vector of n scores output by a model for predicting n labels in the multilabel setting. Let t ∈ {0, 1}^n be the gold standard and h ∈ {0, 1}^n be the thresholded output for a given set of n labels. In addition, define a = tp + fn, the total count of positive labels in the gold standard, and c = tp + fp, the total count of predicted positive labels.
Note that a and c are functions of t and h, though we suppress this dependence in notation. Define $z_a = \sum_{t : tp + fn = a} t\, p(t)$. The maximum achievable macro F1 is
$$F1 = \max_c \max_{h : tp + fp = c} E_{p(t \mid s)}\left[\frac{2tp}{2tp + fp + fn}\right] = \max_c \max_{h : tp + fp = c} 2 h^{T} \sum_a \frac{z_a}{a + c}.$$
Algorithm: Loop over the number of predicted positives c. Sort the vector $\sum_a \frac{z_a}{a + c}$ of length n. Proceed along its entries one by one. Adding an entry to the positive class increases the numerator by $z_a$, which is always positive. Stop after entry number c. Pick the c value and corresponding threshold which give the largest F1.
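The exact algorithm above requires the conditional distribution p(t | s). As a simpler, hedged sketch of the empirical variant mentioned earlier (choosing the number of predicted positives that maximizes F1 against observed labels rather than in expectation), consider the following code; the function and variable names are ours, not from any reference implementation.

```python
import numpy as np

def best_c_predictions(scores, gold):
    """Return the 0/1 prediction vector maximizing empirical F1 over all
    rules of the form 'predict the c highest-scoring labels positive'."""
    order = np.argsort(-scores)
    gold_sorted = gold[order].astype(bool)
    a = gold_sorted.sum()              # total actual positives
    tp = np.cumsum(gold_sorted)        # true positives for c = 1, ..., n
    c = np.arange(1, len(scores) + 1)
    f1 = 2 * tp / (a + c)              # since 2tp + fp + fn = a + c
    best = int(np.argmax(f1))
    h = np.zeros(len(scores), dtype=int)
    h[order[:best + 1]] = 1            # predict the top (best + 1) labels positive
    return h, f1[best]

scores = np.array([0.9, 0.2, 0.75, 0.4, 0.05])
gold = np.array([1, 0, 1, 1, 0])
print(best_c_predictions(scores, gold))   # optimal c = 3 here, with F1 = 1.0
```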
Some algebra gives the following interpretation:
$$\max_c E(F1) = \max_c \sum_a \frac{2\, E(tp \mid c)}{a + c}\, p(a).$$
For an uninformative classifier, E(tp | c) = c · b, where b is the base rate of positives, so
$$\frac{\partial}{\partial c} E(F1) = \frac{\partial}{\partial c} \frac{2c \cdot b}{a + c} = \frac{2b}{a + c} - \frac{2c \cdot b}{(a + c)^2}.$$
Both terms in the difference are always positive, so we can show that this deriva-
tive is always positive by showing that
$$\frac{2b}{a + c} > \frac{2c \cdot b}{(a + c)^2}.$$
Simplification gives the condition $1 > \frac{c}{a + c}$. As this condition always holds, the
derivative is always positive. Therefore, whenever the frequency of actual posi-
tives in the test set is nonzero, and the classifier is uninformative, expected F1
is maximized by predicting that all examples are positive.
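A quick numerical check of this monotonicity is below; it is our own sketch, with arbitrary example values for a and b, using the expression E(F1) = 2c·b/(a + c) from the derivation above.

```python
# Expected F1 of an uninformative classifier as a function of the number of
# predicted positives c, using E(F1) = 2*c*b / (a + c) from the text.
a, b = 50, 0.05        # arbitrary example: 50 actual positives, base rate 0.05
prev = 0.0
for c in range(1, 1001):
    ef1 = 2 * c * b / (a + c)
    assert ef1 > prev  # expected F1 strictly increases with c
    prev = ef1
print("E(F1) at c = n:", prev)  # with a = b*n, c = n gives 2b/(1+b) ~= 0.095
```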
5 Multilabel Setting
$$\frac{F1'}{F1''} = \frac{1 + \frac{b \cdot n}{tp}}{1 + \frac{n b}{a + c + b \cdot n}},$$
where a and c are the number of positives in the gold standard and the number of positive predictions for the first m − 1 labels, respectively. We have $a + c \le n \sum_i b_i$, and so if $b_m \ll \sum_i b_i$ this ratio is small. Thus, performance on rare labels is washed out.
In the single-label setting, the small range between the F1 value achieved by a
trivial classifier and a perfect one may not be problematic. If a trivial system gets
a score of 0.9, we can adjust the scale for what constitutes a good score. However,
when averaging separately calculated F1 over all labels, this variability can skew
scores to disproportionately weight performance on rare labels. Consider the two
label case when one label has a base rate of 0.5 and the other has a base rate
of 0.1. The corresponding expected F1 scores for trivial classifiers are 0.67 and 0.18 respectively. Thus the expected macro F1 for optimally thresholded trivial classifiers is
0.42. However, an improvement to perfect predictions on the rare label elevates
the macro F1 to 0.84 while such an improvement on the common label would
only correspond to a macro F1 of 0.59. Thus the increased variability of F1
results in high weight for rare labels in macro F1.
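The arithmetic behind this two-label example can be spelled out in a few lines; the 2b/(1 + b) expression for an optimally thresholded trivial classifier is taken from the earlier analysis, and the rest is simple averaging.

```python
def trivial_f1(b):
    # Expected F1 of an optimally thresholded trivial (all-positive) classifier
    return 2 * b / (1 + b)

b_common, b_rare = 0.5, 0.1
f1_common, f1_rare = trivial_f1(b_common), trivial_f1(b_rare)  # ~0.67 and ~0.18

macro_trivial        = (f1_common + f1_rare) / 2  # ~0.42
macro_perfect_rare   = (f1_common + 1.0) / 2      # ~0.83-0.84
macro_perfect_common = (1.0 + f1_rare) / 2        # ~0.59
print(macro_trivial, macro_perfect_rare, macro_perfect_common)
```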
For a rare label with an uninformative classifier, micro F1 is optimized by
predicting all negative while macro is optimized by predicting all positive. Ear-
lier, we proved that the optimal threshold for predictions based on a calibrated
probabilistic classifier is half of the maximum F1 attainable given any thresh-
old setting. In other words, which batch an example is submitted with affects
whether a positive prediction will be made. In practice, a system may be tasked
with predicting labels with widely varying base rates. Additionally a classifier’s
ability to make confident predictions may vary widely from label to label.
Optimizing micro F1 as compared to macro F1 can be thought of as choosing
optimal thresholds given very different batches. If the base rate and distribution
of probabilities assigned to instances vary from label to label, so will the predic-
tions. Generally, labels with low base rates and less informative classifiers will
be over-predicted to maximize macro F1 as compared to micro F1. We present
empirical evidence of this phenomenon in the following case study.
6 Case Study
This section discusses a case study demonstrating how, in practice, thresholding to maximize macro F1 can produce undesirable predictions. To our knowl-
edge, a similar real-world case of pathological behavior has not been previously
described in the literature, even though macro averaging F1 is a common ap-
proach.
We consider the task of assigning tags from a controlled vocabulary of 26,853
MeSH terms to articles in the biomedical literature using only titles and ab-
stracts. We represent each abstract as a sparse bag-of-words vector over a vo-
cabulary of 188,923 words. The training data consists of a matrix A with n rows
and d columns, where n is the number of abstracts and d is the number of fea-
tures in the bag of words representation. We apply a tf-idf text preprocessing
step to the bag of words representation to account for word burstiness [10] and
to elevate the impact of rare words.
Because linear regression models can be trained for multiple labels efficiently,
we choose linear regression as a model. Note that square loss is a proper loss
function and does yield calibrated probabilistic predictions [12]. Further, to in-
crease the speed of training and prevent overfitting, we approximate the training
matrix A by a rank restricted Ak using singular value decomposition. One po-
tential consequence of this rank restriction is that the signal of extremely rare
words can be lost. This can be problematic when rare terms are the only features
of predictive value for a label.
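The pipeline described above might be sketched with scikit-learn roughly as follows. This is our illustrative reconstruction, not the authors' code: the ridge regularization, the number of SVD components, and the random placeholder data are all assumptions.

```python
import numpy as np
from scipy import sparse
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.linear_model import Ridge

# A: sparse n x d bag-of-words matrix, Y: n x m binary label matrix.
# Both are random placeholders here; the real data would come from the MeSH corpus.
A = sparse.random(1000, 5000, density=0.01, format="csr", random_state=0)
Y = (np.random.default_rng(0).random((1000, 50)) < 0.05).astype(float)

A_tfidf = TfidfTransformer().fit_transform(A)          # tf-idf reweighting of counts

svd = TruncatedSVD(n_components=100, random_state=0)   # rank-restricted approximation A_k
A_k = svd.fit_transform(A_tfidf)

model = Ridge(alpha=1.0)       # regularized linear regression, trained jointly for all labels
model.fit(A_k, Y)
scores = model.predict(A_k)    # real-valued scores, to be thresholded per label
```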
Given the probabilistic output of the classifier and the theory relating opti-
mal thresholds to maximum attainable F1, we designed three different plug-in
rules to maximize micro, macro and per instance F1. Inspection of the predic-
tions to maximize micro F1 revealed no irregularities. However, inspecting the
predictions thresholded to maximize performance on macro F1 showed that sev-
eral terms with very low base rates were predicted for more than a third of all
test documents. Among these terms were “Platypus”, “Penicillanic Acids” and
“Phosphinic Acids” (Figure 7).
In multilabel classification, a label can have low base rate and an uninforma-
tive classifier. In this case, optimal thresholding requires the system to predict
all examples positive for this label. In the single-label case, such a system would
achieve a low F1 and not be used. But in the macro averaging multilabel case,
the extreme thresholding behavior can take place on a subset of labels, while the
system manages to perform well overall.
7 A Winner’s Curse
where + and − index the positive and negative class respectively. Each term
in Equation (4) is the sum of O(n) i.i.d. random variables and has exponential (in n) rate of convergence to the mean irrespective of the base rate b and the threshold t. Thus, for a fixed number T of threshold choices, the probability of choosing the wrong threshold satisfies $P_{err} \le T\, 2^{-\epsilon n}$ for a constant $\epsilon > 0$ that depends on the distance between the optimal and next nearest threshold. Even if errors occur, the most likely errors are thresholds close to the true optimal threshold (a consequence of Sanov's theorem [3]).
Consider how F1-maximizing thresholds would be set experimentally, given
a training batch of independent ground truth and scores from an uninformative
classifier. The scores si can be sorted in decreasing order (w.l.o.g.) since they are
independent of the true labels for an uninformative classifier. Based on these, we
empirically select the threshold that maximizes F1 on the training batch. The optimal empirical threshold will lie between two scores that include the value F1/2, when the scores are calibrated, in accordance with Theorem 1.
The threshold smin that classifies all examples positive (and maximizes F1 analytically by Theorem 3) has an empirical F1 close to its expectation of $\frac{2b}{1+b} = \frac{2}{1+1/b}$ since tp, fp and fn are all estimated from the entire data. Consider the threshold smax that classifies only the first example positive and all others negative. With probability b, this has F1 score 2/(2 + b · n), which is lower than that of the optimal threshold only when
$$b \ge \frac{\sqrt{1 + \frac{8}{n}} - 1}{2}.$$
Despite the threshold smax being far from optimal, it has a constant probability of having a higher F1 on training data, a probability that does not decrease with n, for n < (1 − b)/b². Therefore, optimizing F1 will have a sharp threshold behavior, where for n < (1 − b)/b² the algorithm will identify large thresholds with constant probability, whereas for larger n it will correctly identify small thresholds. Note that identifying optimal thresholds for F1 is still problematic, since it then leads to the issue identified in the previous section. While these issues are distinct, they both arise from the nonlinearity of the F1 score and its asymmetric treatment of positive and negative labels.
We simulate this behavior, executing 10,000 runs for each setting of the base rate, with n = 10^6 samples for each run to set the threshold (Figure 8). Scores are chosen using variance σ² = 1. True labels are assigned at the base rate,
independent of the scores. The threshold that maximizes F1 on the training set
is selected. We plot a histogram of the fraction predicted positive as a function
of the empirically chosen threshold. There is a shift from predicting almost all
positives to almost all negatives as base rate is decreased. In particular, for low
base rate b, even with a large number of samples, a small fraction of examples
are predicted positive. The analytically derived optimal decision in all cases is
to predict all positive, i.e. to use a threshold of 0.
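A reduced-scale sketch of this simulation is given below. It is our own reconstruction under stated assumptions: Gaussian scores with σ² = 1, far fewer runs and samples than in the experiment above, and the median fraction reported instead of a histogram.

```python
import numpy as np

def fraction_predicted_positive(base_rate, n, rng):
    """Select the F1-maximizing threshold on one batch from an uninformative
    classifier and return the fraction of examples predicted positive."""
    scores = rng.normal(0.0, 1.0, size=n)   # scores independent of labels, sigma^2 = 1
    labels = rng.random(n) < base_rate      # labels assigned at the base rate

    order = np.argsort(-scores)
    tp = np.cumsum(labels[order])           # predict the k highest-scoring examples positive
    k = np.arange(1, n + 1)
    f1 = 2 * tp / (labels.sum() + k)        # 2tp / (2tp + fp + fn) = 2tp / (a + k)
    return (int(np.argmax(f1)) + 1) / n

rng = np.random.default_rng(0)
for b in [0.1, 0.01, 0.001, 0.0001]:
    fracs = [fraction_predicted_positive(b, n=100_000, rng=rng) for _ in range(20)]
    print(f"base rate {b}: median fraction predicted positive = {np.median(fracs):.3f}")
```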
8 Discussion
In this paper, we present theoretical and empirical results describing the prop-
erties of the F1 performance metric for multilabel classification. We relate the
best achievable F1 score to the optimal decision-making threshold and show that
when a classifier is uninformative, predicting all instances positive maximizes the
expectation of F1. Further, we show that in the multilabel setting, this behavior
can be problematic when the metric to maximize is macro F1 and for a subset of
rare labels the classifier is uninformative. In contrast, we demonstrate that given
the same scenario, expected micro F1 is maximized by predicting all examples
to be negative. This knowledge can be useful as such scenarios are likely to occur
in settings with a large number of labels. We also demonstrate that micro F1 has
the potentially undesirable property of washing out performance on rare labels.
No single performance metric can capture every desirable property. For ex-
ample, separately reporting precision and recall is more informative than re-
porting F1 alone. Sometimes, however, it is practically necessary to define a
single performance metric to optimize. Evaluating competing systems and ob-
jectively choosing a winner presents such a scenario. In these cases, a change of
performance metric can have the consequence of altering optimal thresholding
behavior.
References
1. Akay, M.F.: Support vector machines combined with feature selection for breast
cancer diagnosis. Expert Systems with Applications 36(2), 3240–3247 (2009)
2. Capen, E.C., Clapp, R.V., Campbell, W.M.: Competitive bidding in high-risk sit-
uations. Journal of Petroleum Technology 23(6), 641–653 (1971)
3. Cover, T.M., Thomas, J.A.: Elements of information theory. John Wiley & Sons
(2012)
4. del Coz, J.J., Diez, J., Bahamonde, A.: Learning nondeterministic classifiers. Jour-
nal of Machine Learning Research 10, 2273–2293 (2009)
5. Dembczynski, K., Kotlowski, W., Jachnik, A., Waegeman, W., Hüllermeier, E.:
Optimizing the F-measure in multi-label classification: Plug-in rule approach versus
structured loss minimization. In: ICML (2013)
6. Dembczyński, K., Waegeman, W., Cheng, W., Hüllermeier, E.: An exact algorithm
for F-measure maximization. In: Neural Information Processing Systems (2011)
7. Elkan, C.: The foundations of cost-sensitive learning. In: International joint con-
ference on artificial intelligence. pp. 973–978 (2001)
8. Jansche, M.: A maximum expected utility framework for binary sequence labeling.
In: Annual Meeting of the Association For Computational Linguistics. p. 736 (2007)
9. Lewis, D.D.: Evaluating and optimizing autonomous text classification systems. In:
Proceedings of the 18th annual international ACM SIGIR conference on research
and development in information retrieval. pp. 246–254. ACM (1995)
10. Madsen, R., Kauchak, D., Elkan, C.: Modeling word burstiness using the Dirichlet
distribution. In: Proceedings of the International Conference on Machine Learning
(ICML). pp. 545–552 (Aug 2005)
11. Manning, C., Raghavan, P., Schütze, H.: Introduction to information retrieval,
vol. 1. Cambridge University Press (2008)
12. Menon, A., Jiang, X., Vembu, S., Elkan, C., Ohno-Machado, L.: Predicting accurate
probabilities with a ranking loss. In: Proceedings of the International Conference
on Machine Learning (ICML) (Jun 2012)
13. Mozer, M.C., Dodier, R.H., Colagrosso, M.D., Guerra-Salcedo, C., Wolniewicz,
R.H.: Prodding the ROC curve: Constrained optimization of classifier performance.
In: NIPS. pp. 1409–1415 (2001)
14. Musicant, D.R., Kumar, V., Ozgur, A., et al.: Optimizing F-measure with support
vector machines. In: FLAIRS Conference. pp. 356–360 (2003)
15. Sokolova, M., Lapalme, G.: A systematic analysis of performance measures for
classification tasks. Information Processing and Management 45, 427–437 (2009)
16. Suzuki, J., McDermott, E., Isozaki, H.: Training conditional random fields with
multivariate evaluation measures. In: Proceedings of the 21st International Con-
ference on Computational Linguistics and the 44th annual meeting of the Associ-
ation for Computational Linguistics. pp. 217–224. Association for Computational
Linguistics (2006)
17. Tan, S.: Neighbor-weighted k-nearest neighbor for unbalanced text corpus. Expert
Systems with Applications 28, 667–671 (2005)
18. Tsoumakas, G., Katakis, I.: Multi-label classification: An overview. International Journal of Data Warehousing and Mining 3(3), 1–13 (2007)
19. Ye, N., Chai, K.M., Lee, W.S., Chieu, H.L.: Optimizing F-measures: A tale of two
approaches. In: Proceedings of the International Conference on Machine Learning
(2012)
20. Zhao, M.J., Edakunni, N., Pocock, A., Brown, G.: Beyond Fano’s inequality:
Bounds on the optimal F-score, BER, and cost-sensitive risk and their implica-
tions. Journal of Machine Learning Research 14(1), 1033–1090 (2013)