
Commit bfc1d30

Pushing the docs to 1.4/ for branch: 1.4.X, commit 5c4aa5d0d90ba66247d675d4c3fc2fdfba3c39ff
1 parent 80b4078 commit bfc1d30

1,313 files changed (+7,467 −7,265 lines)


Diff for: 1.4/.buildinfo (+1 −1)

@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 2ee791a6bca1a0bcfa0abc72290a2e9d
+config: d2899d995ddb08ed864a36917559ab7d
 tags: 645f666f9bcd5a90fca523b33c5a78b7
Diffs for 2 binary files not shown.

Diff for: 1.4/_downloads/757941223692da355c1f7de747af856d/plot_compare_calibration.ipynb (+3 −3)

@@ -11,7 +11,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Author: Jan Hendrik Metzen <[email protected]>\nLicense: BSD 3 clause.\n\n## Dataset\n\nWe will use a synthetic binary classification dataset with 100,000 samples\nand 20 features. Of the 20 features, only 2 are informative, 2 are\nredundant (random combinations of the informative features) and the\nremaining 16 are uninformative (random numbers). Of the 100,000 samples,\n100 will be used for model fitting and the remaining for testing.\n\n"
+"Author: Jan Hendrik Metzen <[email protected]>\nLicense: BSD 3 clause.\n\n## Dataset\n\nWe will use a synthetic binary classification dataset with 100,000 samples\nand 20 features. Of the 20 features, only 2 are informative, 2 are\nredundant (random combinations of the informative features) and the\nremaining 16 are uninformative (random numbers).\n\nOf the 100,000 samples, 100 will be used for model fitting and the remaining\nfor testing. Note that this split is quite unusual: the goal is to obtain\nstable calibration curve estimates for models that are potentially prone to\noverfitting. In practice, one should rather use cross-validation with more\nbalanced splits but this would make the code of this example more complicated\nto follow.\n\n"
 ]
 },
 {
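The updated markdown cell above describes how the data is generated and split. As a point of reference, a minimal sketch of that setup is shown below; the notebook's actual data-generation cell is not part of this diff, so the `random_state` value and variable names are illustrative assumptions.

```python
# Sketch of the dataset described above: 100,000 samples, 20 features
# (2 informative, 2 redundant, 16 uninformative), with only 100 samples
# used for model fitting and the rest held out for testing. The seed is
# an illustrative assumption, not the notebook's actual value.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(
    n_samples=100_000,
    n_features=20,
    n_informative=2,
    n_redundant=2,
    random_state=42,
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=100, random_state=42
)
```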
@@ -51,7 +51,7 @@
 },
 "outputs": [],
 "source": [
-"from sklearn.calibration import CalibrationDisplay\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.naive_bayes import GaussianNB\n\n# Create classifiers\nlr = LogisticRegression()\ngnb = GaussianNB()\nsvc = NaivelyCalibratedLinearSVC(C=1.0, dual=\"auto\")\nrfc = RandomForestClassifier()\n\nclf_list = [\n (lr, \"Logistic\"),\n (gnb, \"Naive Bayes\"),\n (svc, \"SVC\"),\n (rfc, \"Random forest\"),\n]"
+"from sklearn.calibration import CalibrationDisplay\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.linear_model import LogisticRegressionCV\nfrom sklearn.naive_bayes import GaussianNB\n\n# Define the classifiers to be compared in the study.\n#\n# Note that we use a variant of the logistic regression model that can\n# automatically tune its regularization parameter.\n#\n# For a fair comparison, we should run a hyper-parameter search for all the\n# classifiers but we don't do it here for the sake of keeping the example code\n# concise and fast to execute.\nlr = LogisticRegressionCV(\n Cs=np.logspace(-6, 6, 101), cv=10, scoring=\"neg_log_loss\", max_iter=1_000\n)\ngnb = GaussianNB()\nsvc = NaivelyCalibratedLinearSVC(C=1.0, dual=\"auto\")\nrfc = RandomForestClassifier(random_state=42)\n\nclf_list = [\n (lr, \"Logistic Regression\"),\n (gnb, \"Naive Bayes\"),\n (svc, \"SVC\"),\n (rfc, \"Random forest\"),\n]"
 ]
 },
 {
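The new classifier-definition cell above references a `NaivelyCalibratedLinearSVC` wrapper that is defined in an earlier notebook cell not included in this diff. Based on the analysis cell below, which says the wrapper min-max scales the `decision_function` output into [0, 1], a rough sketch could look like the following; the attribute names and the clipping behaviour are assumptions and the real definition may differ in detail.

```python
# Hypothetical sketch of the NaivelyCalibratedLinearSVC wrapper referenced
# above: a LinearSVC whose decision_function scores are min-max scaled into
# [0, 1] so they can be plotted like probabilities.
import numpy as np
from sklearn.svm import LinearSVC


class NaivelyCalibratedLinearSVC(LinearSVC):
    """LinearSVC with decision_function output naively scaled to [0, 1]."""

    def fit(self, X, y):
        super().fit(X, y)
        # Record the range of decision scores seen on the training set.
        df = self.decision_function(X)
        self.df_min_ = df.min()
        self.df_max_ = df.max()
        return self

    def predict_proba(self, X):
        # Min-max scale the decision scores and clip test-time scores that
        # fall outside the range observed during training.
        df = self.decision_function(X)
        proba_pos = np.clip(
            (df - self.df_min_) / (self.df_max_ - self.df_min_), 0, 1
        )
        return np.column_stack([1 - proba_pos, proba_pos])
```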
@@ -69,7 +69,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-":class:`~sklearn.linear_model.LogisticRegression` returns well calibrated\npredictions as it directly optimizes log-loss. In contrast, the other methods\nreturn biased probabilities, with different biases for each method:\n\n* :class:`~sklearn.naive_bayes.GaussianNB` tends to push\n probabilities to 0 or 1 (see histogram). This is mainly\n because the naive Bayes equation only provides correct estimate of\n probabilities when the assumption that features are conditionally\n independent holds [2]_. However, features tend to be positively correlated\n and is the case with this dataset, which contains 2 features\n generated as random linear combinations of the informative features. These\n correlated features are effectively being 'counted twice', resulting in\n pushing the predicted probabilities towards 0 and 1 [3]_.\n\n* :class:`~sklearn.ensemble.RandomForestClassifier` shows the opposite\n behavior: the histograms show peaks at approx. 0.2 and 0.9 probability,\n while probabilities close to 0 or 1 are very rare. An explanation for this\n is given by Niculescu-Mizil and Caruana [1]_: \"Methods such as bagging and\n random forests that average predictions from a base set of models can have\n difficulty making predictions near 0 and 1 because variance in the\n underlying base models will bias predictions that should be near zero or\n one away from these values. Because predictions are restricted to the\n interval [0,1], errors caused by variance tend to be one- sided near zero\n and one. For example, if a model should predict p = 0 for a case, the only\n way bagging can achieve this is if all bagged trees predict zero. If we add\n noise to the trees that bagging is averaging over, this noise will cause\n some trees to predict values larger than 0 for this case, thus moving the\n average prediction of the bagged ensemble away from 0. We observe this\n effect most strongly with random forests because the base-level trees\n trained with random forests have relatively high variance due to feature\n subsetting.\" As a result, the calibration curve shows a characteristic\n sigmoid shape, indicating that the classifier is under-confident\n and could return probabilities closer to 0 or 1.\n\n* To show the performance of :class:`~sklearn.svm.LinearSVC`, we naively\n scale the output of the :term:`decision_function` into [0, 1] by applying\n min-max scaling, since SVC does not output probabilities by default.\n :class:`~sklearn.svm.LinearSVC` shows an\n even more sigmoid curve than the\n :class:`~sklearn.ensemble.RandomForestClassifier`, which is typical for\n maximum-margin methods [1]_ as they focus on difficult to classify samples\n that are close to the decision boundary (the support vectors).\n\n## References\n\n.. [1] [Predicting Good Probabilities with Supervised Learning](https://dl.acm.org/doi/pdf/10.1145/1102351.1102430),\n A. Niculescu-Mizil & R. Caruana, ICML 2005\n.. [2] [Beyond independence: Conditions for the optimality of the simple\n bayesian classifier](https://www.ics.uci.edu/~pazzani/Publications/mlc96-pedro.pdf)\n Domingos, P., & Pazzani, M., Proc. 13th Intl. Conf. Machine Learning.\n 1996.\n.. [3] [Obtaining calibrated probability estimates from decision trees and\n naive Bayesian classifiers](https://citeseerx.ist.psu.edu/doc_view/pid/4f67a122ec3723f08ad5cbefecad119b432b3304)\n Zadrozny, Bianca, and Charles Elkan. Icml. Vol. 1. 2001.\n\n"
+"## Analysis of the results\n\n:class:`~sklearn.linear_model.LogisticRegressionCV` returns reasonably well\ncalibrated predictions despite the small training set size: its reliability\ncurve is the closest to the diagonal among the four models.\n\nLogistic regression is trained by minimizing the log-loss which is a strictly\nproper scoring rule: in the limit of infinite training data, strictly proper\nscoring rules are minimized by the model that predicts the true conditional\nprobabilities. That (hypothetical) model would therefore be perfectly\ncalibrated. However, using a proper scoring rule as training objective is not\nsufficient to guarantee a well-calibrated model by itself: even with a very\nlarge training set, logistic regression could still be poorly calibrated, if\nit was too strongly regularized or if the choice and preprocessing of input\nfeatures made this model mis-specified (e.g. if the true decision boundary of\nthe dataset is a highly non-linear function of the input features).\n\nIn this example the training set was intentionally kept very small. In this\nsetting, optimizing the log-loss can still lead to poorly calibrated models\nbecause of overfitting. To mitigate this, the\n:class:`~sklearn.linear_model.LogisticRegressionCV` class was configured to\ntune the `C` regularization parameter to also minimize the log-loss via inner\ncross-validation so as to find the best compromise for this model in the\nsmall training set setting.\n\nBecause of the finite training set size and the lack of guarantee for\nwell-specification, we observe that the calibration curve of the logistic\nregression model is close but not perfectly on the diagonal. The shape of the\ncalibration curve of this model can be interpreted as slightly\nunder-confident: the predicted probabilities are a bit too close to 0.5\ncompared to the true fraction of positive samples.\n\nThe other methods all output less well calibrated probabilities:\n\n* :class:`~sklearn.naive_bayes.GaussianNB` tends to push probabilities to 0\n or 1 (see histogram) on this particular dataset (over-confidence). This is\n mainly because the naive Bayes equation only provides a correct estimate of\n probabilities when the assumption that features are conditionally\n independent holds [2]_. However, features can be correlated and this is the case\n with this dataset, which contains 2 features generated as random linear\n combinations of the informative features. These correlated features are\n effectively being 'counted twice', resulting in pushing the predicted\n probabilities towards 0 and 1 [3]_. Note, however, that changing the seed\n used to generate the dataset can lead to widely varying results for the\n naive Bayes estimator.\n\n* :class:`~sklearn.svm.LinearSVC` is not a natural probabilistic classifier.\n In order to interpret its prediction as such, we naively scaled the output\n of the :term:`decision_function` into [0, 1] by applying min-max scaling in\n the `NaivelyCalibratedLinearSVC` wrapper class defined above. This\n estimator shows a typical sigmoid-shaped calibration curve on this data:\n predictions larger than 0.5 correspond to samples with an even larger\n effective positive class fraction (above the diagonal), while predictions\n below 0.5 correspond to even lower positive class fractions (below the\n diagonal). These under-confident predictions are typical for maximum-margin\n methods [1]_.\n\n* :class:`~sklearn.ensemble.RandomForestClassifier`'s prediction histogram\n shows peaks at approx. 0.2 and 0.9 probability, while probabilities close to\n 0 or 1 are very rare. An explanation for this is given by [1]_:\n \"Methods such as bagging and random forests that average\n predictions from a base set of models can have difficulty making\n predictions near 0 and 1 because variance in the underlying base models\n will bias predictions that should be near zero or one away from these\n values. Because predictions are restricted to the interval [0, 1], errors\n caused by variance tend to be one-sided near zero and one. For example, if\n a model should predict p = 0 for a case, the only way bagging can achieve\n this is if all bagged trees predict zero. If we add noise to the trees that\n bagging is averaging over, this noise will cause some trees to predict\n values larger than 0 for this case, thus moving the average prediction of\n the bagged ensemble away from 0. We observe this effect most strongly with\n random forests because the base-level trees trained with random forests\n have relatively high variance due to feature subsetting.\" This effect can\n make random forests under-confident. Despite this possible bias, note that\n the trees themselves are fit by minimizing either the Gini or Entropy\n criterion, both of which lead to splits that minimize proper scoring rules:\n the Brier score or the log-loss respectively. See `the user guide\n <tree_mathematical_formulation>` for more details. This can explain why\n this model shows a good enough calibration curve on this particular example\n dataset. Indeed the Random Forest model is not significantly more\n under-confident than the Logistic Regression model.\n\nFeel free to re-run this example with different random seeds and other\ndataset generation parameters to see how different the calibration plots can\nlook. In general, Logistic Regression and Random Forest will tend to be the\nbest calibrated classifiers, while SVC will often display the typical\nunder-confident miscalibration. The naive Bayes model is also often poorly\ncalibrated but the general shape of its calibration curve can vary widely\ndepending on the dataset.\n\nFinally, note that for some dataset seeds, all models are poorly calibrated,\neven when tuning the regularization parameter as above. This is bound to\nhappen when the training size is too small or when the model is severely\nmisspecified.\n\n## References\n\n.. [1] [Predicting Good Probabilities with Supervised Learning](https://dl.acm.org/doi/pdf/10.1145/1102351.1102430), A.\n Niculescu-Mizil & R. Caruana, ICML 2005\n\n.. [2] [Beyond independence: Conditions for the optimality of the simple\n bayesian classifier](https://www.ics.uci.edu/~pazzani/Publications/mlc96-pedro.pdf)\n Domingos, P., & Pazzani, M., Proc. 13th Intl. Conf. Machine Learning.\n 1996.\n\n.. [3] [Obtaining calibrated probability estimates from decision trees and\n naive Bayesian classifiers](https://citeseerx.ist.psu.edu/doc_view/pid/4f67a122ec3723f08ad5cbefecad119b432b3304)\n Zadrozny, Bianca, and Charles Elkan. Icml. Vol. 1. 2001.\n\n"
 ]
 }
 ],
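The analysis cell above reasons about reliability (calibration) curves and proper scoring rules. A minimal sketch of how one such curve and the associated scores can be inspected with `CalibrationDisplay`, `log_loss`, and `brier_score_loss` is shown below; it assumes the `X_train`/`X_test` split from the earlier dataset sketch and fits only a single estimator, whereas the notebook's actual plotting cell (not part of this diff) arranges four curves plus prediction histograms in a grid.

```python
# Fit one estimator from the classifier cell above and inspect its
# calibration. X_train, X_test, y_train, y_test come from the dataset
# sketch; the notebook's real figure combines all four classifiers.
import matplotlib.pyplot as plt
from sklearn.calibration import CalibrationDisplay
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import brier_score_loss, log_loss

lr = LogisticRegressionCV(scoring="neg_log_loss", max_iter=1_000)
lr.fit(X_train, y_train)

# Proper scoring rules evaluated on the held-out test set.
proba_pos = lr.predict_proba(X_test)[:, 1]
print(f"log-loss:    {log_loss(y_test, proba_pos):.4f}")
print(f"Brier score: {brier_score_loss(y_test, proba_pos):.4f}")

# Reliability curve: mean predicted probability vs. observed positive fraction.
fig, ax = plt.subplots(figsize=(6, 6))
CalibrationDisplay.from_estimator(
    lr, X_test, y_test, n_bins=10, name="Logistic Regression", ax=ax
)
ax.set_title("Reliability curve on the held-out test set")
plt.show()
```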
