{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Detection error tradeoff (DET) curve\n\nIn this example, we compare receiver operating characteristic (ROC) and\ndetection error tradeoff (DET) curves for different classification algorithms\nfor the same classification task.\n\nDET curves are commonly plotted in normal deviate scale.\nTo achieve this the DET display transforms the error rates as returned by the\n:func:`~sklearn.metrics.det_curve` and the axis scale using\n:func:`scipy.stats.norm`.\n\nThe point of this example is to demonstrate two properties of DET curves,\nnamely:\n\n1. It might be easier to visually assess the overall performance of different\n classification algorithms using DET curves over ROC curves.\n Due to the linear scale used for plotting ROC curves, different classifiers\n usually only differ in the top left corner of the graph and appear similar\n for a large part of the plot. On the other hand, because DET curves\n represent straight lines in normal deviate scale. As such, they tend to be\n distinguishable as a whole and the area of interest spans a large part of\n the plot.\n2. DET curves give the user direct feedback of the detection error tradeoff to\n aid in operating point analysis.\n The user can deduct directly from the DET-curve plot at which rate\n false-negative error rate will improve when willing to accept an increase in\n false-positive error rate (or vice-versa).\n\nThe plots in this example compare ROC curves on the left side to corresponding\nDET curves on the right.\nThere is no particular reason why these classifiers have been chosen for the\nexample plot over other classifiers available in scikit-learn.\n\n

Note

- See :func:`sklearn.metrics.roc_curve` for further information about ROC\n curves.\n\n - See :func:`sklearn.metrics.det_curve` for further information about\n DET curves.\n\n - This example is loosely based on\n `sphx_glr_auto_examples_classification_plot_classifier_comparison.py`\n example.

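\n\nThe following is a minimal, illustrative sketch of the kind of transform the\nDET display performs. The synthetic data and the logistic regression model\nhere are assumptions made only to keep the snippet self-contained; they are\nnot the classifiers compared below.\n\n```python\nfrom scipy.stats import norm\n\nfrom sklearn.datasets import make_classification\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import det_curve\nfrom sklearn.model_selection import train_test_split\n\n# Assumed toy setup (not the data or classifiers used in this example):\n# synthetic data and a logistic regression scorer.\nX, y = make_classification(n_samples=1000, random_state=0)\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)\nscores = LogisticRegression().fit(X_train, y_train).decision_function(X_test)\n\n# det_curve returns the false-positive and false-negative rates per threshold.\nfpr, fnr, thresholds = det_curve(y_test, scores)\n\n# Normal deviate (probit) scale: map each error rate through the inverse of\n# the standard normal CDF; rates of exactly 0 or 1 map to -inf/+inf.\nfpr_deviate = norm.ppf(fpr)\nfnr_deviate = norm.ppf(fnr)\n\n# Operating point reading: the best false-negative rate achievable while\n# keeping the false-positive rate at or below 1%, if any threshold does so.\nmask = fpr <= 0.01\nprint(fnr[mask].min() if mask.any() else \"no threshold reaches FPR <= 1%\")\n```\n\nIn the example below, :class:`~sklearn.metrics.DetCurveDisplay` performs both\nthe error-rate computation and the axis scaling.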
\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n\nfrom sklearn.datasets import make_classification\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import DetCurveDisplay, RocCurveDisplay\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.svm import LinearSVC\n\nN_SAMPLES = 1000\n\nclassifiers = {\n \"Linear SVM\": make_pipeline(StandardScaler(), LinearSVC(C=0.025)),\n \"Random Forest\": RandomForestClassifier(\n max_depth=5, n_estimators=10, max_features=1\n ),\n}\n\nX, y = make_classification(\n n_samples=N_SAMPLES,\n n_features=2,\n n_redundant=0,\n n_informative=2,\n random_state=1,\n n_clusters_per_class=1,\n)\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)\n\n# prepare plots\nfig, [ax_roc, ax_det] = plt.subplots(1, 2, figsize=(11, 5))\n\nfor name, clf in classifiers.items():\n clf.fit(X_train, y_train)\n\n RocCurveDisplay.from_estimator(clf, X_test, y_test, ax=ax_roc, name=name)\n DetCurveDisplay.from_estimator(clf, X_test, y_test, ax=ax_det, name=name)\n\nax_roc.set_title(\"Receiver Operating Characteristic (ROC) curves\")\nax_det.set_title(\"Detection Error Tradeoff (DET) curves\")\n\nax_roc.grid(linestyle=\"--\")\nax_det.grid(linestyle=\"--\")\n\nplt.legend()\nplt.show()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.12" } }, "nbformat": 4, "nbformat_minor": 0 }