1.2. Linear and Quadratic Discriminant Analysis — scikit-learn 1.6.1 documentation
Linear Discriminant Analysis (LinearDiscriminantAnalysis) and Quadratic Discriminant Analysis (QuadraticDiscriminantAnalysis) are two classic classifiers with, as their names suggest, a linear and a quadratic decision surface, respectively. These classifiers are attractive because they have closed-form solutions that can be easily computed, are inherently multiclass, have proven to work well in practice, and have no hyperparameters to tune.
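As a quick illustration of the estimator API (the Iris data and the train/test split here are only an example, not part of this page), both classifiers follow the usual scikit-learn fit/predict pattern:

# A minimal sketch of the basic API; the dataset choice is illustrative.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for clf in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis()):
    clf.fit(X_train, y_train)
    print(clf.__class__.__name__, clf.score(X_test, y_test))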
The plot shows decision boundaries for Linear Discriminant Analysis and Quadratic
Discriminant Analysis. The bottom row demonstrates that Linear Discriminant Analysis
can only learn linear boundaries, while Quadratic Discriminant Analysis can learn
quadratic boundaries and is therefore more flexible.
Examples
Linear and Quadratic Discriminant Analysis with covariance ellipsoid: Comparison of LDA and QDA on synthetic data.
1.2.1. Dimensionality reduction using Linear Discriminant Analysis
LinearDiscriminantAnalysis can be used to perform supervised dimensionality
reduction, by projecting the input data to a linear subspace consisting of the directions
which maximize the separation between classes (in a precise sense discussed in the
mathematics section below). The dimension of the output is necessarily less than the
number of classes, so this is in general a rather strong dimensionality reduction, and
only makes sense in a multiclass setting.
This is implemented in the transform method. The desired dimensionality can be set
using the n_components parameter. This parameter has no influence on the fit and
predict methods.
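A short sketch of the transform usage (the Iris data is illustrative): with 3 classes and 4 features, at most 2 discriminant components can be kept.

from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)                # 3 classes, 4 features
lda = LinearDiscriminantAnalysis(n_components=2)
X_new = lda.fit(X, y).transform(X)               # n_components affects transform only, not fit/predict
print(X_new.shape)                               # (150, 2): at most n_classes - 1 components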
Examples
Comparison of LDA and PCA 2D projection of Iris dataset: Comparison of LDA and
PCA for dimensionality reduction of the Iris dataset
1.2.2. Mathematical formulation of the LDA and QDA classifiers
Both LDA and QDA can be derived from simple probabilistic models which model the class conditional distribution of the data P(x \mid y=k) for each class k; predictions are then obtained by using Bayes’ rule. More specifically, for linear and quadratic discriminant analysis, P(x \mid y) is modeled as a multivariate Gaussian distribution with density:

P(x \mid y=k) = \frac{1}{(2\pi)^{d/2} |\Sigma_k|^{1/2}} \exp\left(-\frac{1}{2} (x - \mu_k)^t \Sigma_k^{-1} (x - \mu_k)\right)

where d is the number of features.
1.2.2.1. QDA
According to the model above, the log of the posterior is:

\log P(y=k \mid x) = \log P(x \mid y=k) + \log P(y=k) + Cst = -\frac{1}{2} \log |\Sigma_k| - \frac{1}{2} (x - \mu_k)^t \Sigma_k^{-1} (x - \mu_k) + \log P(y=k) + Cst,

where the constant term Cst corresponds to the denominator P(x), in addition to other constant terms from the Gaussian. The predicted class is the one that maximises this log-posterior.
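As a rough sanity check of this formulation (a sketch on synthetic data, not taken from this page), the class-conditional Gaussian densities weighted by the class priors reproduce the posterior probabilities returned by QuadraticDiscriminantAnalysis:

import numpy as np
from scipy.stats import multivariate_normal
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

X, y = make_blobs(n_samples=300, centers=3, n_features=2, random_state=0)
qda = QuadraticDiscriminantAnalysis(store_covariance=True).fit(X, y)

# Unnormalized posterior P(x|y=k) * P(y=k) for each class k
lik = np.column_stack([
    multivariate_normal.pdf(X, mean=m, cov=c) * p
    for m, c, p in zip(qda.means_, qda.covariance_, qda.priors_)
])
posteriors = lik / lik.sum(axis=1, keepdims=True)     # Bayes' rule: divide by P(x)

print(np.allclose(posteriors, qda.predict_proba(X)))  # expected: True (up to floating point)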
Note
If in the QDA model one assumes that the covariance matrices are diagonal,
then the inputs are assumed to be conditionally independent in each class,
and the resulting classifier is equivalent to the Gaussian Naive Bayes classifier
naive_bayes.GaussianNB .
1.2.2.2. LDA
LDA is a special case of QDA, where the Gaussians for each class are assumed to share the same covariance matrix: \Sigma_k = \Sigma for all k. This reduces the log posterior to:

\log P(y=k \mid x) = -\frac{1}{2} (x - \mu_k)^t \Sigma^{-1} (x - \mu_k) + \log P(y=k) + Cst.
From the above formula, it is clear that LDA has a linear decision surface. In the case of
QDA, there are no assumptions on the covariance matrices Σ k of the Gaussians,
leading to quadratic decision surfaces. See [1] for more details.
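Because the decision surface is linear, a fitted LinearDiscriminantAnalysis exposes it through its coef_ and intercept_ attributes; a small sketch on synthetic binary data (the dataset is illustrative):

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_blobs(n_samples=200, centers=2, n_features=2, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X, y)

# For LDA the decision function is affine in x: x @ coef_.T + intercept_
scores = X @ lda.coef_.T + lda.intercept_
print(np.allclose(scores, lda.decision_function(X).reshape(-1, 1)))  # expected: True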
1.2.3. Mathematical formulation of LDA dimensionality reduction
First note that the K class means \mu_k are vectors in \mathbb{R}^d, and they lie in an affine subspace H of dimension at most K − 1 (2 points lie on a line, 3 points lie on a plane, etc.).
As mentioned above, we can interpret LDA as assigning x to the class whose mean µ k
is the closest in terms of Mahalanobis distance, while also accounting for the class
prior probabilities. Alternatively, LDA is equivalent to first sphering the data so that the
covariance matrix is the identity, and then assigning x to the closest mean in terms of
Euclidean distance (still accounting for the class priors).
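A sketch of this nearest-Mahalanobis-mean view (the balanced synthetic dataset and the agreement check are illustrative assumptions, not from this page):

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_blobs(n_samples=300, centers=3, n_features=2, random_state=0)
lda = LinearDiscriminantAnalysis(store_covariance=True).fit(X, y)

# Score each class as -0.5 * MahalanobisDistance(x, mu_k)^2 + log P(y=k)
Sigma_inv = np.linalg.inv(lda.covariance_)
diff = X[:, None, :] - lda.means_[None, :, :]              # (n_samples, n_classes, d)
maha2 = np.einsum("nkd,de,nke->nk", diff, Sigma_inv, diff)
scores = -0.5 * maha2 + np.log(lda.priors_)

manual_pred = lda.classes_[scores.argmax(axis=1)]
print((manual_pred == lda.predict(X)).mean())              # expected: 1.0 on well-separated data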
We can reduce the dimension even more, to a chosen L, by projecting onto the linear subspace H_L which maximizes the variance of the \mu^*_k after projection (in effect, we are doing a form of PCA for the transformed class means \mu^*_k). This L corresponds to the n_components parameter used in the transform method. See [1] for more details.
1.2.4. Shrinkage and Covariance Estimator
Shrinkage is a form of regularization of the covariance estimate, useful when the number of training samples is small compared to the number of features. Setting the shrinkage parameter of LinearDiscriminantAnalysis to ‘auto’ determines the optimal shrinkage amount automatically, in an analytic way, following the lemma introduced by Ledoit and Wolf [2]. Note that currently shrinkage only works when setting the solver parameter to ‘lsqr’ or ‘eigen’.
The shrinkage parameter can also be manually set between 0 and 1. In particular, a
value of 0 corresponds to no shrinkage (which means the empirical covariance matrix
will be used) and a value of 1 corresponds to complete shrinkage (which means that
the diagonal matrix of variances will be used as an estimate for the covariance matrix).
Setting this parameter to a value between these two extrema will estimate a shrunk
version of the covariance matrix.
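A sketch comparing shrinkage settings in a small-sample, many-features regime (the dataset and the value 0.3 are arbitrary illustrations; no particular outcome is guaranteed):

from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Few samples relative to the number of features: the setting where shrinkage can help.
X, y = make_classification(n_samples=40, n_features=30, n_informative=5, random_state=0)

for shrinkage in (None, 0.3, "auto"):     # None = empirical covariance, 'auto' = Ledoit-Wolf
    lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=shrinkage)
    print(shrinkage, cross_val_score(lda, X, y, cv=5).mean())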
The shrunk Ledoit and Wolf estimator of covariance may not always be the best choice. For example, if the data are normally distributed, the Oracle Approximating Shrinkage estimator sklearn.covariance.OAS yields a smaller Mean Squared Error than the one given by Ledoit and Wolf’s formula used with shrinkage="auto". In LDA, the data are assumed to be Gaussian conditionally on the class. If these assumptions hold, using LDA with the OAS estimator of covariance will yield better classification accuracy than using the Ledoit and Wolf or the empirical covariance estimator.
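A sketch of plugging the OAS estimator into LDA through the covariance_estimator parameter (supported by the ‘lsqr’ and ‘eigen’ solvers; the dataset is illustrative):

from sklearn.covariance import OAS
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=60, n_features=40, random_state=0)

# Any covariance estimator exposing a covariance_ attribute after fit can be plugged in.
oas_lda = LinearDiscriminantAnalysis(solver="lsqr", covariance_estimator=OAS())
print(oas_lda.fit(X, y).score(X, y))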
Examples
Normal, Ledoit-Wolf and OAS Linear Discriminant Analysis for classification: Comparison of LDA classifiers with Empirical, Ledoit Wolf and OAS covariance estimator.
1.2.5. Estimation algorithms
The ‘svd’ solver is the default solver used for LinearDiscriminantAnalysis, and it is
the only available solver for QuadraticDiscriminantAnalysis . It can perform both
classification and transform (for LDA). As it does not rely on the calculation of the
covariance matrix, the ‘svd’ solver may be preferable in situations where the number of
features is large. The ‘svd’ solver cannot be used with shrinkage. For QDA, the use of
the SVD solver relies on the fact that the covariance matrix \Sigma_k is, by definition, equal to \frac{1}{n-1} X_k^t X_k = \frac{1}{n-1} V S^2 V^t, where V comes from the SVD of the (centered) matrix X_k = U S V^t. It turns out that we can compute the log-posterior above without having to explicitly compute \Sigma: computing S and V via the SVD of X is enough. For LDA, two
SVDs are computed: the SVD of the centered input matrix X and the SVD of the class-
wise mean vectors.
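A small numpy sketch of the covariance/SVD identity for one centered class matrix (random data, purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
X_k = rng.normal(size=(50, 4))
X_k = X_k - X_k.mean(axis=0)                    # center the class-k samples
n = X_k.shape[0]

U, S, Vt = np.linalg.svd(X_k, full_matrices=False)
cov_from_svd = Vt.T @ np.diag(S**2) @ Vt / (n - 1)
cov_direct = X_k.T @ X_k / (n - 1)
print(np.allclose(cov_from_svd, cov_direct))    # expected: True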
The ‘lsqr’ solver is an efficient algorithm that only works for classification. It needs to
explicitly compute the covariance matrix \Sigma, and supports shrinkage and custom covariance estimators. This solver computes the coefficients \omega_k = \Sigma^{-1} \mu_k by solving for \Sigma \omega = \mu_k, thus avoiding the explicit computation of the inverse \Sigma^{-1}.
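A small numpy sketch of that linear-system formulation (the matrices are illustrative): solving the system gives the same coefficients as multiplying by the explicit inverse.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
Sigma = A @ A.T + 5 * np.eye(5)          # a well-conditioned covariance-like matrix
means = rng.normal(size=(3, 5))          # one mean vector per class

# omega_k = Sigma^{-1} mu_k, obtained by solving Sigma @ omega_k = mu_k
coefs = np.linalg.solve(Sigma, means.T).T
print(np.allclose(coefs, means @ np.linalg.inv(Sigma)))   # expected: True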
The ‘eigen’ solver is based on the optimization of the between class scatter to within
class scatter ratio. It can be used for both classification and transform, and it supports
shrinkage. However, the ‘eigen’ solver needs to compute the covariance matrix, so it
might not be suitable for situations with a high number of features.
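To summarize, a configuration sketch of the three solvers (parameter combinations only; which solver is preferable depends on the data):

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# 'svd'  : default; classification and transform; avoids computing the covariance matrix; no shrinkage
# 'lsqr' : classification only; supports shrinkage and custom covariance estimators
# 'eigen': classification and transform; supports shrinkage; computes the covariance matrix
lda_svd   = LinearDiscriminantAnalysis(solver="svd")
lda_lsqr  = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
lda_eigen = LinearDiscriminantAnalysis(solver="eigen", shrinkage="auto")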
References
[1] Hastie T., Tibshirani R., Friedman J. “The Elements of Statistical Learning”, Section 4.3, p. 106-119, 2008.
[2] Ledoit O., Wolf M. “Honey, I Shrunk the Sample Covariance Matrix”. The Journal of Portfolio Management 30(4), 110-119, 2004.