
Commit 06d58d0

Pushing the docs to dev/ for branch: main, commit 1e49c34c7ef5786c39700c2437cb31566dbf7fb0
1 parent 1afddc1 commit 06d58d0

File tree

1,370 files changed: +20794 -8193 lines changed


Diff for: dev/.buildinfo

+1-1
@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 6717a0c70b45f0c868a92efd3360387b
+config: a9c8b3fb6c73fdc5d184963d26338a1f
 tags: 645f666f9bcd5a90fca523b33c5a78b7

Diff for: dev/_downloads/133f2198d3ab792c75b39a63b0a99872/plot_cost_sensitive_learning.ipynb

+670
Large diffs are not rendered by default.

Diff for: dev/_downloads/9ca7cbe47e4cace7242fe4c5c43dfa52/plot_cost_sensitive_learning.py

+702
Large diffs are not rendered by default.
@@ -0,0 +1,183 @@
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "\n# Post-hoc tuning the cut-off point of decision function\n\nOnce a binary classifier is trained, the :term:`predict` method outputs class label\npredictions corresponding to a thresholding of either the :term:`decision_function` or\nthe :term:`predict_proba` output. The default threshold is defined as a posterior\nprobability estimate of 0.5 or a decision score of 0.0. However, this default strategy\nmay not be optimal for the task at hand.\n\nThis example shows how to use the\n:class:`~sklearn.model_selection.TunedThresholdClassifierCV` to tune the decision\nthreshold, depending on a metric of interest.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## The diabetes dataset\n\nTo illustrate the tuning of the decision threshold, we will use the diabetes dataset.\nThis dataset is available on OpenML: https://www.openml.org/d/37. We use the\n:func:`~sklearn.datasets.fetch_openml` function to fetch this dataset.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "from sklearn.datasets import fetch_openml\n\ndiabetes = fetch_openml(data_id=37, as_frame=True, parser=\"pandas\")\ndata, target = diabetes.data, diabetes.target"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We look at the target to understand the type of problem we are dealing with.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "target.value_counts()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We can see that we are dealing with a binary classification problem. Since the\nlabels are not encoded as 0 and 1, we make it explicit that we consider the class\nlabeled \"tested_negative\" as the negative class (which is also the most frequent)\nand the class labeled \"tested_positive\" as the positive class:\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "neg_label, pos_label = target.value_counts().index"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We can also observe that this binary problem is slightly imbalanced: there are\nroughly twice as many samples from the negative class as from the positive class. When\nit comes to evaluation, we should keep this aspect in mind when interpreting the\nresults.\n\n## Our vanilla classifier\n\nWe define a basic predictive model composed of a scaler followed by a logistic\nregression classifier.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "from sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler\n\nmodel = make_pipeline(StandardScaler(), LogisticRegression())\nmodel"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We evaluate our model using cross-validation. We use the accuracy and the balanced\naccuracy to report the performance of our model. The balanced accuracy is a metric\nthat is less sensitive to class imbalance and will allow us to put the accuracy\nscore in perspective.\n\nCross-validation allows us to study the variance of the decision threshold across\ndifferent splits of the data. However, the dataset is rather small and it would be\ndetrimental to use more than 5 folds to evaluate the dispersion. Therefore, we use\na :class:`~sklearn.model_selection.RepeatedStratifiedKFold` where we apply several\nrepetitions of 5-fold cross-validation.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "import pandas as pd\n\nfrom sklearn.model_selection import RepeatedStratifiedKFold, cross_validate\n\nscoring = [\"accuracy\", \"balanced_accuracy\"]\ncv_scores = [\n    \"train_accuracy\",\n    \"test_accuracy\",\n    \"train_balanced_accuracy\",\n    \"test_balanced_accuracy\",\n]\ncv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=42)\ncv_results_vanilla_model = pd.DataFrame(\n    cross_validate(\n        model,\n        data,\n        target,\n        scoring=scoring,\n        cv=cv,\n        return_train_score=True,\n        return_estimator=True,\n    )\n)\ncv_results_vanilla_model[cv_scores].aggregate([\"mean\", \"std\"]).T"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Our predictive model succeeds in capturing the relationship between the data and the\ntarget. The training and testing scores are close to each other, meaning that our\npredictive model is not overfitting. We can also observe that the balanced accuracy is\nlower than the accuracy, due to the class imbalance previously mentioned.\n\nFor this classifier, we left the decision threshold, used to convert the probability of\nthe positive class into a class prediction, at its default value of 0.5. However, this\nthreshold might not be optimal. If our interest is to maximize the balanced accuracy,\nwe should select another threshold that maximizes this metric.\n\nThe :class:`~sklearn.model_selection.TunedThresholdClassifierCV` meta-estimator makes it\npossible to tune the decision threshold of a classifier given a metric of interest.\n\n## Tuning the decision threshold\n\nWe create a :class:`~sklearn.model_selection.TunedThresholdClassifierCV` and\nconfigure it to maximize the balanced accuracy. We evaluate the model using the same\ncross-validation strategy as previously.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "from sklearn.model_selection import TunedThresholdClassifierCV\n\ntuned_model = TunedThresholdClassifierCV(estimator=model, scoring=\"balanced_accuracy\")\ncv_results_tuned_model = pd.DataFrame(\n    cross_validate(\n        tuned_model,\n        data,\n        target,\n        scoring=scoring,\n        cv=cv,\n        return_train_score=True,\n        return_estimator=True,\n    )\n)\ncv_results_tuned_model[cv_scores].aggregate([\"mean\", \"std\"]).T"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "In comparison with the vanilla model, we observe that the balanced accuracy score\nincreased. Of course, it comes at the cost of a lower accuracy score. This means that\nour model is now more sensitive to the positive class but makes more mistakes on the\nnegative class.\n\nHowever, it is important to note that this tuned predictive model is internally the\nsame model as the vanilla model: they have the same fitted coefficients.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "import matplotlib.pyplot as plt\n\nvanilla_model_coef = pd.DataFrame(\n    [est[-1].coef_.ravel() for est in cv_results_vanilla_model[\"estimator\"]],\n    columns=diabetes.feature_names,\n)\ntuned_model_coef = pd.DataFrame(\n    [est.estimator_[-1].coef_.ravel() for est in cv_results_tuned_model[\"estimator\"]],\n    columns=diabetes.feature_names,\n)\n\nfig, ax = plt.subplots(ncols=2, figsize=(12, 4), sharex=True, sharey=True)\nvanilla_model_coef.boxplot(ax=ax[0])\nax[0].set_ylabel(\"Coefficient value\")\nax[0].set_title(\"Vanilla model\")\ntuned_model_coef.boxplot(ax=ax[1])\nax[1].set_title(\"Tuned model\")\n_ = fig.suptitle(\"Coefficients of the predictive models\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Only the decision threshold of each model was changed during the cross-validation.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "decision_threshold = pd.Series(\n    [est.best_threshold_ for est in cv_results_tuned_model[\"estimator\"]],\n)\nax = decision_threshold.plot.kde()\nax.axvline(\n    decision_threshold.mean(),\n    color=\"k\",\n    linestyle=\"--\",\n    label=f\"Mean decision threshold: {decision_threshold.mean():.2f}\",\n)\nax.set_xlabel(\"Decision threshold\")\nax.legend(loc=\"upper right\")\n_ = ax.set_title(\n    \"Distribution of the decision threshold \\nacross different cross-validation folds\"\n)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "On average, a decision threshold of around 0.32 maximizes the balanced accuracy, which\ndiffers from the default decision threshold of 0.5. Tuning the decision threshold is\nthus particularly important when the output of the predictive model is used to make\ndecisions. Besides, the metric used to tune the decision threshold should be chosen\ncarefully. Here, we used the balanced accuracy, but it might not be the most\nappropriate metric for the problem at hand. The choice of the \"right\" metric is\nusually problem-dependent and might require some domain knowledge. Refer to the\nexample entitled\n`sphx_glr_auto_examples_model_selection_plot_cost_sensitive_learning.py`\nfor more details.\n\n"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.9.19"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
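Side note for readers of this commit: the example's introduction states that `predict` simply thresholds `predict_proba` at the default cut-off of 0.5 for a binary classifier. A minimal sketch of that equivalence, not part of the committed files, using a synthetic dataset and LogisticRegression purely for illustration:

import numpy as np

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic binary classification data, just to have something to fit.
X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression().fit(X, y)

# Manually applying the default 0.5 cut-off to the positive-class probability
# reproduces the labels returned by `predict`.
proba_positive = clf.predict_proba(X)[:, 1]
manual_pred = np.where(proba_positive > 0.5, clf.classes_[1], clf.classes_[0])
assert (manual_pred == clf.predict(X)).all()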
@@ -0,0 +1,184 @@
"""
======================================================
Post-hoc tuning the cut-off point of decision function
======================================================

Once a binary classifier is trained, the :term:`predict` method outputs class label
predictions corresponding to a thresholding of either the :term:`decision_function` or
the :term:`predict_proba` output. The default threshold is defined as a posterior
probability estimate of 0.5 or a decision score of 0.0. However, this default strategy
may not be optimal for the task at hand.

This example shows how to use the
:class:`~sklearn.model_selection.TunedThresholdClassifierCV` to tune the decision
threshold, depending on a metric of interest.
"""

# %%
# The diabetes dataset
# --------------------
#
# To illustrate the tuning of the decision threshold, we will use the diabetes dataset.
# This dataset is available on OpenML: https://www.openml.org/d/37. We use the
# :func:`~sklearn.datasets.fetch_openml` function to fetch this dataset.
from sklearn.datasets import fetch_openml

diabetes = fetch_openml(data_id=37, as_frame=True, parser="pandas")
data, target = diabetes.data, diabetes.target

# %%
# We look at the target to understand the type of problem we are dealing with.
target.value_counts()

# %%
# We can see that we are dealing with a binary classification problem. Since the
# labels are not encoded as 0 and 1, we make it explicit that we consider the class
# labeled "tested_negative" as the negative class (which is also the most frequent)
# and the class labeled "tested_positive" as the positive class:
neg_label, pos_label = target.value_counts().index

# %%
# We can also observe that this binary problem is slightly imbalanced: there are
# roughly twice as many samples from the negative class as from the positive class.
# When it comes to evaluation, we should keep this aspect in mind when interpreting
# the results.
#
# Our vanilla classifier
# ----------------------
#
# We define a basic predictive model composed of a scaler followed by a logistic
# regression classifier.
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

model = make_pipeline(StandardScaler(), LogisticRegression())
model

# %%
# We evaluate our model using cross-validation. We use the accuracy and the balanced
# accuracy to report the performance of our model. The balanced accuracy is a metric
# that is less sensitive to class imbalance and will allow us to put the accuracy
# score in perspective.
#
# Cross-validation allows us to study the variance of the decision threshold across
# different splits of the data. However, the dataset is rather small and it would be
# detrimental to use more than 5 folds to evaluate the dispersion. Therefore, we use
# a :class:`~sklearn.model_selection.RepeatedStratifiedKFold` where we apply several
# repetitions of 5-fold cross-validation.
import pandas as pd

from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate

scoring = ["accuracy", "balanced_accuracy"]
cv_scores = [
    "train_accuracy",
    "test_accuracy",
    "train_balanced_accuracy",
    "test_balanced_accuracy",
]
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=42)
cv_results_vanilla_model = pd.DataFrame(
    cross_validate(
        model,
        data,
        target,
        scoring=scoring,
        cv=cv,
        return_train_score=True,
        return_estimator=True,
    )
)
cv_results_vanilla_model[cv_scores].aggregate(["mean", "std"]).T

# %%
# Our predictive model succeeds in capturing the relationship between the data and the
# target. The training and testing scores are close to each other, meaning that our
# predictive model is not overfitting. We can also observe that the balanced accuracy
# is lower than the accuracy, due to the class imbalance previously mentioned.
#
# For this classifier, we left the decision threshold, used to convert the probability
# of the positive class into a class prediction, at its default value of 0.5. However,
# this threshold might not be optimal. If our interest is to maximize the balanced
# accuracy, we should select another threshold that maximizes this metric.
#
# The :class:`~sklearn.model_selection.TunedThresholdClassifierCV` meta-estimator makes
# it possible to tune the decision threshold of a classifier given a metric of
# interest.
#
# Tuning the decision threshold
# -----------------------------
#
# We create a :class:`~sklearn.model_selection.TunedThresholdClassifierCV` and
# configure it to maximize the balanced accuracy. We evaluate the model using the same
# cross-validation strategy as previously.
from sklearn.model_selection import TunedThresholdClassifierCV

tuned_model = TunedThresholdClassifierCV(estimator=model, scoring="balanced_accuracy")
cv_results_tuned_model = pd.DataFrame(
    cross_validate(
        tuned_model,
        data,
        target,
        scoring=scoring,
        cv=cv,
        return_train_score=True,
        return_estimator=True,
    )
)
cv_results_tuned_model[cv_scores].aggregate(["mean", "std"]).T

# %%
# In comparison with the vanilla model, we observe that the balanced accuracy score
# increased. Of course, it comes at the cost of a lower accuracy score. This means that
# our model is now more sensitive to the positive class but makes more mistakes on the
# negative class.
#
# However, it is important to note that this tuned predictive model is internally the
# same model as the vanilla model: they have the same fitted coefficients.
import matplotlib.pyplot as plt

vanilla_model_coef = pd.DataFrame(
    [est[-1].coef_.ravel() for est in cv_results_vanilla_model["estimator"]],
    columns=diabetes.feature_names,
)
tuned_model_coef = pd.DataFrame(
    [est.estimator_[-1].coef_.ravel() for est in cv_results_tuned_model["estimator"]],
    columns=diabetes.feature_names,
)

fig, ax = plt.subplots(ncols=2, figsize=(12, 4), sharex=True, sharey=True)
vanilla_model_coef.boxplot(ax=ax[0])
ax[0].set_ylabel("Coefficient value")
ax[0].set_title("Vanilla model")
tuned_model_coef.boxplot(ax=ax[1])
ax[1].set_title("Tuned model")
_ = fig.suptitle("Coefficients of the predictive models")

# %%
# Only the decision threshold of each model was changed during the cross-validation.
decision_threshold = pd.Series(
    [est.best_threshold_ for est in cv_results_tuned_model["estimator"]],
)
ax = decision_threshold.plot.kde()
ax.axvline(
    decision_threshold.mean(),
    color="k",
    linestyle="--",
    label=f"Mean decision threshold: {decision_threshold.mean():.2f}",
)
ax.set_xlabel("Decision threshold")
ax.legend(loc="upper right")
_ = ax.set_title(
    "Distribution of the decision threshold \nacross different cross-validation folds"
)

# %%
# On average, a decision threshold of around 0.32 maximizes the balanced accuracy,
# which differs from the default decision threshold of 0.5. Tuning the decision
# threshold is thus particularly important when the output of the predictive model
# is used to make decisions. Besides, the metric used to tune the decision threshold
# should be chosen carefully. Here, we used the balanced accuracy, but it might not be
# the most appropriate metric for the problem at hand. The choice of the "right" metric
# is usually problem-dependent and might require some domain knowledge. Refer to the
# example entitled
# :ref:`sphx_glr_auto_examples_model_selection_plot_cost_sensitive_learning.py`
# for more details.

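The new example exercises TunedThresholdClassifierCV through cross_validate. For readers skimming this commit, a minimal standalone sketch of the same API, not part of the committed files, assuming scikit-learn >= 1.5 (where the meta-estimator is available) and using a synthetic dataset in place of the diabetes data:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import TunedThresholdClassifierCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Imbalanced synthetic data standing in for the diabetes dataset.
X, y = make_classification(n_samples=1_000, weights=[0.7, 0.3], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

base_model = make_pipeline(StandardScaler(), LogisticRegression())
tuned_model = TunedThresholdClassifierCV(base_model, scoring="balanced_accuracy")
tuned_model.fit(X_train, y_train)

# The tuned cut-off point is exposed as `best_threshold_`; `predict` applies it.
print(f"Tuned decision threshold: {tuned_model.best_threshold_:.2f}")
print(f"Test balanced accuracy: {balanced_accuracy_score(y_test, tuned_model.predict(X_test)):.3f}")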
Diff for: dev/_downloads/scikit-learn-docs.zip

633 KB
Binary file not shown.

Diff for: dev/_images/sphx_glr_plot_anomaly_comparison_001.png

194 Bytes

Diff for: dev/_images/sphx_glr_plot_cluster_comparison_001.png

-230 Bytes

Diff for: dev/_images/sphx_glr_plot_coin_segmentation_001.png

16 Bytes

Diff for: dev/_images/sphx_glr_plot_coin_segmentation_002.png

-133 Bytes

Diff for: dev/_images/sphx_glr_plot_coin_segmentation_003.png

-37 Bytes

Diff for: dev/_images/sphx_glr_plot_dict_face_patches_001.png

-66 Bytes

Diff for: dev/_images/sphx_glr_plot_gmm_init_001.png

-122 Bytes

Diff for: dev/_images/sphx_glr_plot_gmm_init_thumb.png

-5 Bytes

Diff for: dev/_images/sphx_glr_plot_hgbt_regression_001.png

-2.02 KB

Diff for: dev/_images/sphx_glr_plot_hgbt_regression_thumb.png

6 Bytes

Diff for: dev/_images/sphx_glr_plot_image_denoising_003.png

-110 Bytes

Diff for: dev/_images/sphx_glr_plot_image_denoising_004.png

-124 Bytes

Diff for: dev/_images/sphx_glr_plot_image_denoising_005.png

29 Bytes

Diff for: dev/_images/sphx_glr_plot_learning_curve_002.png

-4.6 KB

Diff for: dev/_images/sphx_glr_plot_learning_curve_003.png

4.73 KB

Diff for: dev/_images/sphx_glr_plot_linkage_comparison_001.png

-64 Bytes

Diff for: dev/_images/sphx_glr_plot_lle_digits_003.png

-22 Bytes

Diff for: dev/_images/sphx_glr_plot_lle_digits_004.png

-73 Bytes

Diff for: dev/_images/sphx_glr_plot_lle_digits_005.png

55 Bytes

Diff for: dev/_images/sphx_glr_plot_lle_digits_006.png

-13 Bytes

Diff for: dev/_images/sphx_glr_plot_lle_digits_007.png

21 Bytes

Diff for: dev/_images/sphx_glr_plot_lle_digits_008.png

21 Bytes

Diff for: dev/_images/sphx_glr_plot_lle_digits_009.png

75 Bytes

Diff for: dev/_images/sphx_glr_plot_lle_digits_010.png

143 Bytes

Diff for: dev/_images/sphx_glr_plot_lle_digits_011.png

16 Bytes

Diff for: dev/_images/sphx_glr_plot_lle_digits_012.png

-71 Bytes

Diff for: dev/_images/sphx_glr_plot_lle_digits_013.png

10 Bytes

Diff for: dev/_images/sphx_glr_plot_lle_digits_014.png

-204 Bytes

Diff for: dev/_images/sphx_glr_plot_manifold_sphere_001.png

-92 Bytes

Diff for: dev/_images/sphx_glr_plot_manifold_sphere_thumb.png

31 Bytes

Diff for: dev/_images/sphx_glr_plot_prediction_latency_001.png

577 Bytes

Diff for: dev/_images/sphx_glr_plot_prediction_latency_002.png

437 Bytes

Diff for: dev/_images/sphx_glr_plot_prediction_latency_003.png

302 Bytes

Diff for: dev/_images/sphx_glr_plot_prediction_latency_004.png

687 Bytes

Diff for: dev/_images/sphx_glr_plot_sgd_early_stopping_002.png

69 Bytes

Diff for: dev/_images/sphx_glr_plot_stack_predictors_001.png

354 Bytes

Diff for: dev/_images/sphx_glr_plot_stack_predictors_thumb.png

17 Bytes

Diff for: dev/_images/sphx_glr_plot_svm_scale_c_001.png

107 Bytes

Diff for: dev/_images/sphx_glr_plot_svm_scale_c_thumb.png

-15 Bytes

Diff for: dev/_images/sphx_glr_plot_theilsen_001.png

-57 Bytes

Diff for: dev/_images/sphx_glr_plot_theilsen_002.png

3 Bytes

Diff for: dev/_images/sphx_glr_plot_theilsen_thumb.png

-10 Bytes

Diff for: dev/_sources/auto_examples/applications/plot_cyclical_feature_engineering.rst.txt

+1-1

Diff for: dev/_sources/auto_examples/applications/plot_digits_denoising.rst.txt

+1-1

0 commit comments
