Data Science Checklist
STEPS FOR ANY PROJECT (REGRESSION & CLASSIFICATION)
Data Preprocessing file
-----------------------------------------------------------------------------------------------------------------------------
ASSUMPTIONS
Any ML algorithm other than Linear Regression should satisfy 3 conditions:
1. Train accuracy == CV score
2. Train accuracy == Test accuracy
3. It should satisfy the business problem requirements
Linear Regression must additionally satisfy the LINE assumptions:
L – Linearity of errors
I – Independence of errors
N – Normality of errors
E – Equal variance of errors (homoscedasticity); unequal variance of errors (heteroscedasticity) violates this
PROJECT
|-----------------------|
| Regression |
|-----------------------|
-------------------------------------
Simple Regression Project:
-------------------------------------
Identify the relation between i/p & o/p that gives the maximum R2 or the minimum RMSE
• 1 i/p variable (continuous or discrete)
• 1 o/p variable (continuous)
Steps :
-------------
1) Business Problem Understanding
2) Data Understanding
a) Collect & load data
b) Data Exploration
3) Data Preprocessing
a) Data Cleaning
b) Data Wrangling
c) Feature Selection
d) Identify the best random state number for train test split
4) & 5) Modelling & Evaluation
• Apply LR
• Calculate train R2
• Calculate CV
• Calculate test R2
• Check that every variable has p < 0.05
• Check all assumptions (see the sketch after this list)
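A minimal sketch of steps 3d)–5), under assumptions: a pandas DataFrame df with one input column 'X' and a continuous output column 'y' (hypothetical names), with scikit-learn and statsmodels available:
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split, cross_val_score
X=df[['X']]   # hypothetical input column
y=df['y']     # hypothetical continuous output column
# 3d) scan random states and keep the split whose test R2 is closest to the train R2
scores={}
for rs in range(50):
    Xtr,Xte,ytr,yte=train_test_split(X,y,test_size=0.2,random_state=rs)
    m=LinearRegression().fit(Xtr,ytr)
    scores[rs]=abs(m.score(Xtr,ytr)-m.score(Xte,yte))
best_rs=min(scores,key=scores.get)
# 4) & 5) fit LR and compare train R2, CV score and test R2
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=best_rs)
lr=LinearRegression().fit(X_train,y_train)
print(lr.score(X_train,y_train))                         # train R2
print(cross_val_score(lr,X_train,y_train,cv=5).mean())   # CV score (mean of 5 folds)
print(lr.score(X_test,y_test))                           # test R2
# p-values per variable via statsmodels OLS (add a constant for the intercept)
ols=sm.OLS(y_train,sm.add_constant(X_train)).fit()
print(ols.pvalues)   # every variable should have p < 0.05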
------------------------------------------------------------------------------------------------------------------------------------
Assumptions of Linear Regression:
• L – Linearity of errors
• I – Independence of errors
• N – Normality of errors
• E – Equal variance of errors (homoscedasticity); unequal variance of errors (heteroscedasticity) violates this
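A hedged sketch of how these four checks are often run with statsmodels and scipy; here 'ols', X and y refer to the modelling sketch above, and the p > 0.05 thresholds are conventional defaults rather than part of this checklist:
import matplotlib.pyplot as plt
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson
resid=ols.resid   # residuals of the fitted statsmodels OLS model
# L - Linearity: residuals vs fitted values should show no pattern
plt.scatter(ols.fittedvalues,resid); plt.axhline(0,color='red'); plt.show()
# I - Independence: Durbin-Watson near 2 suggests uncorrelated errors
print(durbin_watson(resid))
# N - Normality: Shapiro-Wilk p > 0.05 suggests normally distributed errors
print(stats.shapiro(resid))
# E - Equal variance: Breusch-Pagan p > 0.05 suggests homoscedasticity
print(het_breuschpagan(resid,ols.model.exog))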
-------------------------------------------------------------------------------------------------------------------------------------
• 1) First apply LR + calculate train R2, test R2 + check assumptions
If (train R2 == CV) and (train R2 == test R2) and (all 4 assumptions are satisfied)
Then → it's a good model
Else → bad model
• 2) Now apply NLR (non-linear regression) + calculate train R2, test R2 + check assumptions
If (train R2 == CV) and (train R2 == test R2) and (all 4 assumptions are satisfied)
Then → it's a good model
Else → bad model
• 3) Now apply model 3 + calculate train R2, test R2 + check assumptions
If (train R2 == CV) and (train R2 == test R2) and (all 4 assumptions are satisfied)
Then → it's a good model
Else → bad model
---------------------------------------------------------------------------------------------------------------------------------------------------
• Once all the models are completed, finally take the model that is best (maximum test accuracy) and save that model.
• If more than one algorithm has the same accuracy, then select the model that takes less time.
---------------------------------------------------------------------------------------------------------------------------------------------------
MACHINE LEARNING PROJECT CHECKLIST
This checklist can guide you through your Machine Learning projects. There are eight main steps:
1. Frame the problem and look at the big picture.
2. Get the data.
3. Explore the data to gain insights.
4. Prepare the data to better expose the underlying data patterns to Machine Learning algorithms.
5. Explore many different models and shortlist the best ones.
6. Fine-tune your models and combine them into a great solution.
7. Present your solution.
8. Launch, monitor, and maintain your system.
1. Create a copy of the data for exploration (sampling it down to a manageable size if necessary).
2. Create a Jupyter notebook to keep a record of your data exploration.
3. Study each attribute and its characteristics:
• Name
• Type (categorical, int/float, bounded/unbounded, text, structured, etc.)
• % of missing values
• Noisiness and type of noise (stochastic, outliers, rounding errors, etc.)
• Usefulness for the task
• Type of distribution (Gaussian, uniform, logarithmic, etc.)
4. For supervised learning tasks, identify the target attribute(s).
5. Visualize the data.
6. Study the correlations between attributes.
7. Study how you would solve the problem manually.
8. Identify the promising transformations you may want to apply.
9. Identify extra data that would be useful (go back to “Get the Data”).
10. Document what you have learned.
1. Data cleaning:
• Fix or remove outliers (optional).
• Fill in missing values (e.g., with zero, mean, median…) or drop their rows (or columns).
4. Feature scaling:
• Standardize or normalize features.
• If the data is huge, you may want to sample smaller training sets so you can train many different
models in a reasonable time (be aware that this penalizes complex models such as large neural nets or
Random Forests).
• Once again, try to automate these steps as much as possible.
1. Train many quick-and-dirty models from different categories (e.g., linear, naive Bayes, SVM, Random Forest,
neural net, etc.) using standard parameters.
2. Measure and compare their performance.
• For each model, use N-fold cross-validation and compute the mean and standard deviation of the
performance measure on the N folds.
3. Analyze the most significant variables for each algorithm.
4. Analyze the types of errors the models make.
• What data would a human have used to avoid these errors?
5. Perform a quick round of feature selection and engineering.
6. Perform one or two more quick iterations of the five previous steps.
7. Shortlist the top three to five most promising models, preferring models that make different types of errors.
Fine-Tune the System
Notes:
❖ You will want to use as much data as possible for this step, especially as you move toward the end of
fine-tuning.
❖ As always, automate what you can.
2. Try Ensemble methods. Combining your best models will often produce better performance than running them
individually.
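For instance, a voting ensemble over a few shortlisted regressors (a sketch reusing the hypothetical X_train/y_train split from earlier, not a prescribed recipe):
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
# average the predictions of three shortlisted models
voting=VotingRegressor([
    ('lr', LinearRegression()),
    ('svr', SVR()),
    ('rf', RandomForestRegressor(random_state=42)),
])
voting.fit(X_train,y_train)
print(voting.score(X_test,y_test))   # often beats each model individually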
3. Once you are confident about your final model, measure its performance on the test set to estimate the
generalization error.
WARNING
Don’t tweak your model after measuring the generalization error: you would just start overfitting the test set.
5. Ensure your key findings are communicated through beautiful visualizations or easy-to-remember statements
(e.g., “the median income is the number-one predictor of housing prices”).
Launch!
1. Get your solution ready for production (plug into production data inputs, write unit tests, etc.).
2. Write monitoring code to check your system’s live performance at regular intervals and trigger alerts when it
drops.
• Beware of slow degradation: models tend to “rot” as data evolves.
• Measuring performance may require a human pipeline (e.g., via a crowdsourcing service).
• Also monitor your inputs’ quality (e.g., a malfunctioning sensor sending random values, or another
team’s output becoming stale). This is particularly important for online learning systems.
3. Retrain your models on a regular basis on fresh data (automate as much as possible).
DESCRIPTIVE STATISTICS
Mean : df["X"].mean()
Median : df["X"].median()
Mode : df["X"].mode()
• Most repeated value
• Unimodal data – the data has only 1 mode value
• Bimodal data – the data has 2 mode values
• Multimodal data – the data has more than 2 mode values
Minimum : df["X"].min()
Maximum : df["X"].max()
Percentile
• 0th percentile or Minimum : 0% of the data is below this value
• 25th percentile or Quartile 1 (Q1) : 25% of the data is below this value
• 50th percentile or Quartile 2 (Q2, the median) : 50% of the data is below this value
• 75th percentile or Quartile 3 (Q3) : 75% of the data is below this value
• 100th percentile or Maximum : 100% of the data is below this value
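These values can be read off directly with pandas (a sketch on a hypothetical numeric column 'X'):
df["X"].quantile([0, 0.25, 0.50, 0.75, 1.0])   # min, Q1, Q2 (median), Q3, max
df["X"].describe()                              # the same five numbers plus count, mean, std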
Outlier
• A data value that is numerically distant from the rest of the data set
What happens if outliers are present?
• Outliers impact statistical measures such as the mean, variance, and standard deviation
• Outliers strongly affect the mean, variance, and standard deviation
• Outliers only weakly affect the median & IQR
How to calculate outliers?
• A data value is considered an outlier if it is below the lower limit (Q1 − 1.5·IQR) or above the upper limit (Q3 + 1.5·IQR)
How to check whether outliers are present?
• Use a box plot (see the sketch below)
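A minimal box-plot check (a sketch, assuming a numeric column 'Marks'):
import seaborn as sns
import matplotlib.pyplot as plt
sns.boxplot(x=df["Marks"])   # points beyond the whiskers are potential outliers
plt.show()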
Frequency Distribution
• Graphical representation of a variable with its corresponding frequency.
Discrete Frequency Distribution : graphical representation of a discrete variable with its corresponding frequency.
import seaborn as sns
import matplotlib.pyplot as plt
df["Gender"].unique()
df["Gender"].value_counts()
sns.countplot(x=df["Gender"])
plt.show()
Continuous Frequency Distribution : graphical representation of a continuous variable with its corresponding frequency.
sns.histplot(df["Marks"],bins=7,stat="count")
plt.show()
FEATURE ENGINEERING
1. Handling Missing Values
2. Handling Imbalanced Dataset
3. SMOTE Handling Imbalanced Dataset
4. Handling Outliers
5. Nominal or OHE Encoding
1) Handling Missing Values
# a Titanic-style dataset with 'age' and 'embarked' columns is assumed here
df.shape
df.dropna().shape
# impute missing ages with the mean
df['Age_mean']=df['age'].fillna(df['age'].mean())
df[['Age_mean','age']]
df['column_name'].unique()
# impute missing 'embarked' values with the mode
mode_value=df[df['embarked'].notna()]['embarked'].mode()[0]
df['embarked_mode']=df['embarked'].fillna(mode_value)
df[df['embarked'].isnull()][['embarked']]
df['embarked'].iloc[[61,829]]
df[['embarked_mode','embarked']].iloc[[61,829]]
2) Handling Imbalanced Dataset
1. Up Sampling
2. Down Sampling
import numpy as np
import pandas as pd
# Set the random seed for reproducibility
np.random.seed(123)
# Create a dataframe with 2 classes (90% class 0, 10% class 1)
n_samples=1000
class_0_ratio=0.9
n_class_0=int(n_samples * class_0_ratio)
n_class_1=n_samples - n_class_0
n_class_0,n_class_1
# Build the two class dataframes (a sketch; the feature columns are illustrative)
class_0=pd.DataFrame({
    'feature_1': np.random.normal(loc=0, scale=1, size=n_class_0),
    'feature_2': np.random.normal(loc=0, scale=1, size=n_class_0),
    'target': [0]*n_class_0
})
class_1=pd.DataFrame({
    'feature_1': np.random.normal(loc=2, scale=1, size=n_class_1),
    'feature_2': np.random.normal(loc=2, scale=1, size=n_class_1),
    'target': [1]*n_class_1
})
df=pd.concat([class_0,class_1]).reset_index(drop=True)
df['target'].value_counts()
Upsampling
from sklearn.utils import resample
df_minority=df[df['target']==1]
df_majority=df[df['target']==0]
# resample the minority class with replacement up to the majority class size
df_minority_upsampled=resample(df_minority, replace=True, n_samples=len(df_majority), random_state=42)
df_upsampled=pd.concat([df_majority,df_minority_upsampled])
df_upsampled['target'].value_counts()
Down Sampling
## DownSampling: resample the majority class (without replacement) down to the minority class size
df_minority=df[df['target']==1]
df_majority=df[df['target']==0]
df_majority_downsampled=resample(df_majority, replace=False, n_samples=len(df_minority), random_state=42)
df_majority_downsampled.shape
df_downsampled=pd.concat([df_majority_downsampled,df_minority])
df_downsampled['target'].value_counts()
3) SMOTE Handling Imbalanced Dataset
from sklearn.datasets import make_classification
X,y=make_classification(n_samples=1000, n_redundant=0, n_features=2, n_clusters_per_class=1, weights=[0.90], random_state=12)
import pandas as pd
df1=pd.DataFrame(X,columns=['f1','f2'])
df2=pd.DataFrame(y,columns=['target'])
final_df=pd.concat([df1,df2],axis=1)
final_df.head()
final_df['target'].value_counts()
import matplotlib.pyplot as plt
plt.scatter(final_df['f1'],final_df['f2'],c=final_df['target'])
len(y[y==0])
len(y[y==1])
# apply SMOTE to synthesize minority-class samples (a sketch; assumes the imbalanced-learn package)
from imblearn.over_sampling import SMOTE
oversample=SMOTE()
X_res,y_res=oversample.fit_resample(X,y)
df1=pd.DataFrame(X_res,columns=['f1','f2'])
df2=pd.DataFrame(y_res,columns=['target'])
oversample_df=pd.concat([df1,df2],axis=1)
plt.scatter(oversample_df['f1'],oversample_df['f2'],c=oversample_df['target'])
4) Handling Outliers
# lst_marks: a numeric list/array of marks, assumed to be defined earlier
minimum,Q1,median,Q3,maximum=np.quantile(lst_marks,[0,0.25,0.50,0.75,1.0])
minimum,Q1,median,Q3,maximum
IQR=Q3-Q1
print(IQR)
lower_fence=Q1 - 1.5*(IQR)
higher_fence=Q3 + 1.5*(IQR)
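One common follow-up, sketched here as an optional step (removing outliers is optional, as noted earlier): keep only the values inside the fences.
cleaned=[m for m in lst_marks if lower_fence <= m <= higher_fence]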
Data Encoding
1. Nominal/OHE Encoding
2. Label and Ordinal Encoding
3. Target Guided Ordinal Encoding
Nominal/OHE Encoding
One Hot Encoding, also known as nominal encoding, is a technique used to represent categorical data as numerical data, which is more suitable for machine learning algorithms. In this technique, each category is represented as a binary vector where each bit corresponds to a unique category. For example, if we have a categorical variable 'color' with three possible values (red, green, blue), we can represent it using one hot encoding as follows:
• Red:[1,0,0]
• Green:[0,1,0]
• Blue:[0,0,1]
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
# a sketch, assuming df has a 'color' column, e.g. df=pd.DataFrame({'color':['red','green','blue','red']})
encoder=OneHotEncoder(sparse_output=False)   # use sparse=False on scikit-learn < 1.2
encoder_df=pd.DataFrame(encoder.fit_transform(df[['color']]), columns=encoder.get_feature_names_out())
pd.concat([df,encoder_df],axis=1)
Label Encoding
Label encoding and Ordinal Encoding are two techniques used to encode categorical data as numerical data.
Label encoding involves assigning a unique numerical label to each category in the variable. The labels are usually assigned in alphabetical order or based on the frequency of the categories. For example, if we have a categorical variable 'color' with three possible values (red, green, blue), scikit-learn's LabelEncoder (alphabetical order) represents it as follows:
• Blue : 0
• Green : 1
• Red : 2
from sklearn.preprocessing import LabelEncoder
lbl_encoder=LabelEncoder()
lbl_encoder.fit_transform(df['color'])
Ordinal Encoding
It is used to encode categorical data that has an intrinsic order or ranking. In this technique, each category is assigned a numerical value based on its position in the order. For example, if we have a categorical variable 'education level' with four possible values (high school, college, graduate, post-graduate), we can represent it using ordinal encoding as follows:
• High School : 1
• College : 2
• Graduate : 3
• Post-graduate:4
## Ordinal Encoding
from sklearn.preprocessing import OrdinalEncoder
# a sketch, assuming df has a 'size' column with values small/medium/large
encoder=OrdinalEncoder(categories=[['small','medium','large']])
encoder.fit_transform(df[['size']])
encoder.transform([['small']])
Target Guided Ordinal Encoding
Each category is replaced by a statistic of the target; here, the mean price per city.
df=pd.DataFrame({
    'city' : ['New York', 'London', 'Paris','Tokyo','New York','Paris'],
    'price': [200,150,300,250,180,320]
})
mean_price=df.groupby('city')['price'].mean().to_dict()
df['city_encoded']=df['city'].map(mean_price)
df[['price', 'city_encoded']]
Data Preprocessing
a) Data Cleaning
b) Data Wrangling
c) Feature Selection
Data Understanding
df=pd.read_csv('dataset.csv')
df=pd.read_excel('dataset.xlsx', sheet_name='sheet2')   # sheet_name is a read_excel argument, not read_csv
Data Exploration
df.head()
df.shape
df.columns
df.dtypes
df.info()
df.describe()
## Missing values
df.isnull().sum()
## Duplicate records
df[df.duplicated()]
## Correlation
df.corr(numeric_only=True)   # numeric columns only (required on recent pandas)
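A heatmap makes the correlation matrix easier to scan (a sketch, assuming seaborn is installed):
import seaborn as sns
import matplotlib.pyplot as plt
sns.heatmap(df.corr(numeric_only=True), annot=True, cmap="coolwarm")
plt.show()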
Data Preprocessing (EDA)
• Data Cleaning
• Data Wrangling
• Feature Selection