{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Comparison of Calibration of Classifiers\n\nWell calibrated classifiers are probabilistic classifiers for which the output\nof :term:`predict_proba` can be directly interpreted as a confidence level.\nFor instance, a well calibrated (binary) classifier should classify the samples\nsuch that for the samples to which it gave a :term:`predict_proba` value close\nto 0.8, approximately 80% actually belong to the positive class.\n\nIn this example we will compare the calibration of four different\nmodels: `Logistic_regression`, `gaussian_naive_bayes`,\n`Random Forest Classifier ` and `Linear SVM\n`.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Author: Jan Hendrik Metzen \nLicense: BSD 3 clause.\n\n## Dataset\n\nWe will use a synthetic binary classification dataset with 100,000 samples\nand 20 features. Of the 20 features, only 2 are informative, 2 are\nredundant (random combinations of the informative features) and the\nremaining 16 are uninformative (random numbers). Of the 100,000 samples,\n100 will be used for model fitting and the remaining for testing.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.datasets import make_classification\nfrom sklearn.model_selection import train_test_split\n\nX, y = make_classification(\n n_samples=100_000, n_features=20, n_informative=2, n_redundant=2, random_state=42\n)\n\ntrain_samples = 100 # Samples used for training the models\nX_train, X_test, y_train, y_test = train_test_split(\n X,\n y,\n shuffle=False,\n test_size=100_000 - train_samples,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Calibration curves\n\nBelow, we train each of the four models with the small training dataset, then\nplot calibration curves (also known as reliability diagrams) using\npredicted probabilities of the test dataset. Calibration curves are created\nby binning predicted probabilities, then plotting the mean predicted\nprobability in each bin against the observed frequency ('fraction of\npositives'). 
Below the calibration curve, we plot a histogram showing\nthe distribution of the predicted probabilities, or more specifically,\nthe number of samples in each predicted probability bin.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n\nfrom sklearn.svm import LinearSVC\n\n\nclass NaivelyCalibratedLinearSVC(LinearSVC):\n    \"\"\"LinearSVC with `predict_proba` method that naively scales\n    `decision_function` output.\"\"\"\n\n    def fit(self, X, y):\n        super().fit(X, y)\n        df = self.decision_function(X)\n        self.df_min_ = df.min()\n        self.df_max_ = df.max()\n        # Follow the scikit-learn convention of returning the fitted estimator.\n        return self\n\n    def predict_proba(self, X):\n        \"\"\"Min-max scale output of `decision_function` to [0,1].\"\"\"\n        df = self.decision_function(X)\n        calibrated_df = (df - self.df_min_) / (self.df_max_ - self.df_min_)\n        proba_pos_class = np.clip(calibrated_df, 0, 1)\n        proba_neg_class = 1 - proba_pos_class\n        proba = np.c_[proba_neg_class, proba_pos_class]\n        return proba" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.calibration import CalibrationDisplay\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.naive_bayes import GaussianNB\n\n# Create classifiers\nlr = LogisticRegression()\ngnb = GaussianNB()\nsvc = NaivelyCalibratedLinearSVC(C=1.0)\nrfc = RandomForestClassifier()\n\nclf_list = [\n    (lr, \"Logistic\"),\n    (gnb, \"Naive Bayes\"),\n    (svc, \"SVC\"),\n    (rfc, \"Random forest\"),\n]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\nfrom matplotlib.gridspec import GridSpec\n\nfig = plt.figure(figsize=(10, 10))\ngs = GridSpec(4, 2)\ncolors = plt.get_cmap(\"Dark2\")\n\nax_calibration_curve = fig.add_subplot(gs[:2, :2])\ncalibration_displays = {}\nfor i, (clf, name) in enumerate(clf_list):\n    clf.fit(X_train, y_train)\n    display = CalibrationDisplay.from_estimator(\n        clf,\n        X_test,\n        y_test,\n        n_bins=10,\n        name=name,\n        ax=ax_calibration_curve,\n        color=colors(i),\n    )\n    calibration_displays[name] = display\n\nax_calibration_curve.grid()\nax_calibration_curve.set_title(\"Calibration plots\")\n\n# Add histogram\ngrid_positions = [(2, 0), (2, 1), (3, 0), (3, 1)]\nfor i, (_, name) in enumerate(clf_list):\n    row, col = grid_positions[i]\n    ax = fig.add_subplot(gs[row, col])\n\n    ax.hist(\n        calibration_displays[name].y_prob,\n        range=(0, 1),\n        bins=10,\n        label=name,\n        color=colors(i),\n    )\n    ax.set(title=name, xlabel=\"Mean predicted probability\", ylabel=\"Count\")\n\nplt.tight_layout()\nplt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ":class:`~sklearn.linear_model.LogisticRegression` returns well calibrated\npredictions as it directly optimizes log-loss. In contrast, the other methods\nreturn biased probabilities, with different biases for each method:\n\n* :class:`~sklearn.naive_bayes.GaussianNB` tends to push\n  probabilities to 0 or 1 (see histogram). This is mainly\n  because the naive Bayes equation only provides a correct estimate of\n  probabilities when the assumption that features are conditionally\n  independent holds [2]_. However, features tend to be positively correlated,\n  as is the case with this dataset, which contains 2 features\n  generated as random linear combinations of the informative features.
These\n  correlated features are effectively being 'counted twice', resulting in\n  pushing the predicted probabilities towards 0 and 1 [3]_.\n\n* :class:`~sklearn.ensemble.RandomForestClassifier` shows the opposite\n  behavior: the histograms show peaks at approx. 0.2 and 0.9 probability,\n  while probabilities close to 0 or 1 are very rare. An explanation for this\n  is given by Niculescu-Mizil and Caruana [1]_: \"Methods such as bagging and\n  random forests that average predictions from a base set of models can have\n  difficulty making predictions near 0 and 1 because variance in the\n  underlying base models will bias predictions that should be near zero or\n  one away from these values. Because predictions are restricted to the\n  interval [0,1], errors caused by variance tend to be one-sided near zero\n  and one. For example, if a model should predict p = 0 for a case, the only\n  way bagging can achieve this is if all bagged trees predict zero. If we add\n  noise to the trees that bagging is averaging over, this noise will cause\n  some trees to predict values larger than 0 for this case, thus moving the\n  average prediction of the bagged ensemble away from 0. We observe this\n  effect most strongly with random forests because the base-level trees\n  trained with random forests have relatively high variance due to feature\n  subsetting.\" As a result, the calibration curve shows a characteristic\n  sigmoid shape, indicating that the classifier is under-confident and could\n  return probabilities closer to 0 or 1.\n\n* To show the performance of :class:`~sklearn.svm.LinearSVC`, we naively\n  scale the output of the :term:`decision_function` into [0, 1] by applying\n  min-max scaling, since :class:`~sklearn.svm.LinearSVC` does not implement\n  :term:`predict_proba`. :class:`~sklearn.svm.LinearSVC` shows an\n  even more sigmoid curve than the\n  :class:`~sklearn.ensemble.RandomForestClassifier`, which is typical for\n  maximum-margin methods [1]_ as they focus on difficult-to-classify samples\n  that are close to the decision boundary (the support vectors).\n\n## References\n\n.. [1] A. Niculescu-Mizil & R. Caruana, \"Predicting Good Probabilities with\n   Supervised Learning\", ICML 2005.\n.. [2] P. Domingos & M. Pazzani, \"Beyond Independence: Conditions for the\n   Optimality of the Simple Bayesian Classifier\", Proc. 13th Intl. Conf. on\n   Machine Learning, 1996.\n.. [3] B. Zadrozny & C. Elkan, \"Obtaining Calibrated Probability Estimates from\n   Decision Trees and Naive Bayesian Classifiers\", ICML 2001.\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.7" } }, "nbformat": 4, "nbformat_minor": 0 }