
Standardization vs Normalization in Pattern Recognition

In pattern recognition, standardization and normalization are both preprocessing techniques used to scale and transform the features of a dataset before applying a machine learning algorithm.

1. Standardization (also known as Z-score normalization) rescales the features so that each has a mean of 0 and a standard deviation of 1, the scale of a standard normal distribution. This transformation preserves the shape of the original distribution but changes its location and scale. Standardization is useful when the features have different units or scales: it gives the data a comparable scale and centers it around 0, which can improve the convergence of certain algorithms (such as gradient descent). Both transforms are illustrated in a short sketch after this list.
The formula for standardization is:
x_standardized = (x − mean(x)) / std(x)
where x is the original feature value, mean(x) is the mean of the feature values, and std(x) is the standard deviation of the feature values.
2. Normalization (also known as Min-Max scaling) rescales the features to a fixed range, usually [0, 1]. It does not change the shape of the original distribution but maps the values into that fixed range. Normalization is useful when the features have different ranges and the algorithm being used requires features to be on a similar, bounded scale.
The formula for normalization is:
x_normalized = (x − min(x)) / (max(x) − min(x))
where x is the original feature value, min(x) is the minimum value of the feature values, and max(x) is the maximum value of the feature values.
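
The two formulas above can be applied directly with NumPy; a minimal sketch, assuming NumPy is available and using an illustrative feature matrix X (the values are made up):

import numpy as np

# Illustrative feature matrix: 4 samples, 2 features on very different scales
X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0],
              [4.0, 500.0]])

# Standardization (Z-score): subtract each feature's mean, divide by its standard deviation
X_standardized = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_standardized.mean(axis=0))  # approximately [0, 0]
print(X_standardized.std(axis=0))   # [1, 1]

# Normalization (Min-Max): rescale each feature to the range [0, 1]
X_normalized = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print(X_normalized.min(axis=0))  # [0, 0]
print(X_normalized.max(axis=0))  # [1, 1]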

Standardization is more appropriate when the distribution of the features is approximately Gaussian or when the algorithm relies on the mean and standard deviation of the features, while normalization is more suitable when the algorithm requires features to be on a similar, bounded scale and the features have different ranges.
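
If scikit-learn is available, the same two transforms are provided as ready-made scalers; a brief sketch (StandardScaler and MinMaxScaler correspond to the standardization and Min-Max formulas above, and the data is again illustrative):

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0],
              [4.0, 500.0]])

# Zero mean and unit standard deviation per feature
print(StandardScaler().fit_transform(X))

# Each feature rescaled to the range [0, 1]
print(MinMaxScaler().fit_transform(X))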
