Python | How and where to apply Feature Scaling?
Last Updated: 23 Dec, 2022
Feature Scaling or Standardization: It is a data pre-processing step applied to the independent variables (features) of a dataset. It normalizes the data within a particular range and can also speed up the computations inside an algorithm.
Package Used:
sklearn.preprocessing
Import:
from sklearn.preprocessing import StandardScaler
The formula used in the backend
Standardization replaces each value with its Z-score:

z = (x − μ) / σ

where μ is the mean of the feature and σ is its standard deviation.
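To see the formula in action, the manual Z-score computation can be checked against StandardScaler on a toy column of made-up values (this sketch just verifies the formula above; the numbers are arbitrary):
Python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy column of made-up values
x = np.array([[10.0], [20.0], [30.0], [40.0]])

# Manual Z-score: subtract the mean, divide by the standard deviation
manual = (x - x.mean()) / x.std()

# StandardScaler performs the same computation internally
scaled = StandardScaler().fit_transform(x)

print(np.allclose(manual, scaled))  # True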
The fit method is used to learn the scaling parameters:
fit(X, y=None)
Computes the mean and standard deviation of X, to be used for later scaling.
Python
import pandas as pd
from sklearn.preprocessing import StandardScaler
# Read Data from CSV
data = pd.read_csv('Geeksforgeeks.csv')
data.head()
# Initialise the Scaler
scaler = StandardScaler()
# Learn the mean and standard deviation of each column
scaler.fit(data)
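Note that fit() only learns the statistics; it does not change the data. To actually rescale, call transform() (or fit_transform() to learn and apply in one step). A minimal continuation of the snippet above, assuming the same hypothetical Geeksforgeeks.csv with numeric columns:
Python
# Apply the learned mean and standard deviation to rescale every column
scaled_data = scaler.transform(data)

# Or learn and apply in a single step
scaled_data = scaler.fit_transform(data)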
Why and Where to Apply Feature Scaling?
Real-world datasets contain features that vary widely in magnitude, units, and range. Normalization should be performed when the scale of a feature is irrelevant or misleading, and should not be performed when the scale is meaningful.
Algorithms that use Euclidean distance measures are sensitive to these magnitudes; feature scaling lets them weigh all features equally.
Formally, if one feature in the dataset is much larger in scale than the others, it dominates the Euclidean distance computation in such algorithms and therefore needs to be normalized.
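To make the dominance concrete, here is a small sketch with made-up age/salary values: before scaling, the salary column drives the Euclidean distance almost entirely; after StandardScaler, both features contribute comparably.
Python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Made-up data: [age, salary]
X = np.array([[25.0, 50000.0],
              [26.0, 52000.0],
              [60.0, 51000.0]])

# Raw distances: salary dominates, so row 0 looks "closer" to row 2
# (a 35-year age gap) than to row 1 (a 1-year age gap)
d_raw_01 = np.linalg.norm(X[0] - X[1])   # ~2000, driven by salary
d_raw_02 = np.linalg.norm(X[0] - X[2])   # ~1001, despite the age gap

Xs = StandardScaler().fit_transform(X)

# After scaling, both features contribute comparably
d_scaled_01 = np.linalg.norm(Xs[0] - Xs[1])
d_scaled_02 = np.linalg.norm(Xs[0] - Xs[2])
print(d_raw_01, d_raw_02, d_scaled_01, d_scaled_02)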
(Image: feature scaling in Python; source: Jatin Sharma)
Examples of Algorithms where Feature Scaling matters
1. K-Means uses the Euclidean distance measure, so feature scaling matters (see the pipeline sketch after this list).
2. K-Nearest Neighbours also relies on distances and therefore requires feature scaling.
3. Principal Component Analysis (PCA) tries to find the directions of maximum variance, so feature scaling is required here too.
4. Gradient Descent converges faster after feature scaling, since the parameter (theta) updates proceed at a similar rate across features.
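As a concrete illustration of the K-Means point above, a common pattern is to chain StandardScaler and KMeans in a scikit-learn Pipeline, so that every distance computation sees standardized features. A minimal sketch on made-up data:
Python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Made-up two-feature data on very different scales
X = np.array([[25.0, 50000.0],
              [26.0, 52000.0],
              [60.0, 51000.0],
              [61.0, 49000.0]])

# Scale first, then cluster: KMeans' Euclidean distances now
# weigh both features equally
model = make_pipeline(StandardScaler(),
                      KMeans(n_clusters=2, n_init=10, random_state=0))
labels = model.fit_predict(X)
print(labels)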
Note: Naive Bayes, Linear Discriminant Analysis, and tree-based models are not affected by feature scaling.
In short, any algorithm that is not distance-based is not affected by feature scaling.
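This note is easy to check empirically: a decision tree splits on per-feature thresholds, and standardization is a monotonic transform, so the tree finds the same splits at rescaled thresholds. A small sketch with made-up data:
Python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = rng.rand(100, 3) * [1.0, 100.0, 10000.0]  # wildly different scales
y = (X[:, 1] > 50).astype(int)                # label depends on one feature
Xs = StandardScaler().fit_transform(X)

# Fit identical trees on raw and standardized copies of the data
tree_raw = DecisionTreeClassifier(random_state=0).fit(X, y)
tree_scaled = DecisionTreeClassifier(random_state=0).fit(Xs, y)

# Predictions match: scaling did not change the tree's decisions
print((tree_raw.predict(X) == tree_scaled.predict(Xs)).all())  # True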