Data Science Checklist

The document outlines a comprehensive guide for executing machine learning projects, detailing steps from understanding the business problem to model evaluation and deployment. It emphasizes the importance of data preprocessing, model selection, and validation, including specific assumptions for linear regression. Additionally, it provides a checklist for project management and descriptive statistics for data analysis.


Contents

Steps for any project (Regression & Classification)
Assumptions
Project
Machine Learning Project Checklist
Descriptive Statistics
FEATURE ENGINEERING

STEPS FOR ANY PROJECT (REGRESSION & CLASSIFICATION)
Notebook 1: Data Preprocessing file

1. Business Problem Understanding


2. Load the data
- Data Understanding
- Data Exploration
3. Data Preprocessing
- Feature Selection
- Data Cleaning
- Feature Engineering
- Data Wrangling
- For each step, record the reason (as a # comment), the code, and the observation

Complete all preprocessing before the train-test split (do not select x and y yet)


df.to_excel('cleaned_data.xlsx')
-----------------------------------------------------------------------------------------------------------------------------

Notebook 2: Algorithm name (a separate notebook per algorithm)

- Load the cleaned data


- Select x&y
- Train test split
- Modelling
- Evaluation
- Model selection
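
A minimal sketch of this per-algorithm notebook workflow, assuming the cleaned file is named cleaned_data.xlsx and the target column is called target (both hypothetical names, not from the notes above):

import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LinearRegression

## Load the cleaned data
df = pd.read_excel('cleaned_data.xlsx')

## Select x & y (the target column name is an assumption)
X = df.drop(columns=['target'])
y = df['target']

## Train test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

## Modelling
model = LinearRegression().fit(X_train, y_train)

## Evaluation: train R2, CV score, test R2 (compare these for model selection)
train_r2 = model.score(X_train, y_train)
cv_r2 = cross_val_score(model, X_train, y_train, cv=5).mean()
test_r2 = model.score(X_test, y_test)
print(train_r2, cv_r2, test_r2)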

-----------------------------------------------------------------------------------------------------------------------------

ASSUMPTIONS

When we apply any ML algorithm other than Linear Regression, it should satisfy 3 conditions:
1. Train accuracy == CV Score
2. Train accuracy == Test accuracy
3. It should satisfy the business problem requirements

If any one condition fails, it is called a bad model.

When we apply Linear Regression, it should satisfy 4 conditions


1. Train Accuracy == CV Score
2. Train Accuracy == Test Accuracy
3. It should satisfy business problem requirements.
4. It should satisfy assumptions.

Why do we have to check assumptions in Linear Regression?

- Reason: Linear Regression is an assumption-based model

What are the Assumptions of Linear Regression?

L – Linearity
I – Independence of Errors
N – Normality of Errors
E – Equal Variance of Errors (Homoscedasticity) & Unequal Variance of Errors (Heteroscedasticity)

Linearity : check with a scatter plot (inputs vs. output, or residuals vs. fitted values showing no pattern)


Normality : based on the skewness of the residuals
Equal variance : scatter plot of the residuals with a horizontal reference line at y = 0

Independence of errors / variable significance :


Variable significance: check whether each variable is important, i.e., whether its fitted coefficient is statistically significant (p-value below the chosen threshold)
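
A minimal, hedged sketch of these four checks using statsmodels, assuming the X_train/X_test/y_train/y_test splits from a notebook like the one sketched earlier (variable names are placeholders):

import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

## Fit OLS so that coefficient p-values are available
ols_model = sm.OLS(y_train, sm.add_constant(X_train)).fit()
fitted = ols_model.predict(sm.add_constant(X_test))
residuals = y_test - fitted

## Linearity & equal variance: residuals vs fitted values with a horizontal y = 0 line
plt.scatter(fitted, residuals)
plt.axhline(y=0, color='red')
plt.show()

## Normality of errors: skewness of residuals close to 0
print(pd.Series(residuals).skew())

## Independence of errors: Durbin-Watson statistic close to 2
print(durbin_watson(residuals))

## Variable significance: coefficient p-values (check p < 0.05)
print(ols_model.summary())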

PROJECT
|-----------------------|
| Regression |
|-----------------------|
-------------------------------------
Simple Regression Project:
-------------------------------------

Identify the relationship between the input and output variables that gives the maximum R2 or minimum RMSE
• 1 i/p variable (continuous or discrete)
• 1 o/p variable (continuous)

Steps :
-------------
1) Business Problem Understanding
2) Data Understanding
a) Collect & load data
b) Data Exploration
3) Data Preprocessing
a) Data Cleaning
b) Data Wrangling
c) Feature Selection
d) Identify the best random state number for train test split
4) & 5) Modelling & Evaluation
• Apply LR
• Calculate train R2
• Calculate CV
• Calculate test R2
• Check that every variable has a p-value < 0.05
• Check all assumptions

------------------------------------------------------------------------------------------------------------------------------------
Assumptions of Linear Regression:
• L – Linearity
• I – Independence of Errors
• N – Normality of Errors
• E – Equal Variance of Errors (Homoscedasticity) & Unequal Variance of Errors (Heteroscedasticity)
-------------------------------------------------------------------------------------------------------------------------------------
• 1) First apply LR + calculate Train R2, test R2 + check Assumptions
If (train R2 == CV) and (Train R2 == test R2) and (all 4 Assumptions are satisfied)
Then---------it’s a good model
Else
Bad model
• 2) Now apply NLR + calculate Train R2, test R2 + check Assumptions
If (train R2 == CV) and (Train R2 == test R2) and (all 4 Assumptions are satisfied)
Then---------it’s a good model
Else
Bad model
• 3) Now apply model 3 + calculate Train R2, test R2 + check Assumptions
If (train R2 == CV) and (Train R2 == test R2) and (all 4 Assumptions are satisfied)
Then---------it’s a good model
Else
Bad model
---------------------------------------------------------------------------------------------------------------------------------------------------
--
• Once all the models are completed, finally take the model which is best (maximum test accuracy) and save
that model.
• If more than one algorithm has the same accuracy, then select the model which has taken less time.
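
A minimal sketch of this final comparison, assuming the train/test split from the earlier sketch; the list of candidate models is illustrative, not from the notes:

import time
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

results = {}
for name, model in [('Linear', LinearRegression()),
                    ('Tree', DecisionTreeRegressor(random_state=42)),
                    ('Forest', RandomForestRegressor(random_state=42))]:
    start = time.time()
    model.fit(X_train, y_train)
    ## store test R2 and training time for each model
    results[name] = {'test_r2': model.score(X_test, y_test),
                     'seconds': time.time() - start}

## pick the model with maximum test R2; use 'seconds' to break ties
best = max(results, key=lambda name: results[name]['test_r2'])
print(results, best)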
---------------------------------------------------------------------------------------------------------------------------------------------------
--

MACHINE LEARNING PROJECT CHECKLIST

This checklist can guide you through your Machine Learning projects. There are eight main steps:
1. Frame the problem and look at the big picture.
2. Get the data.
3. Explore the data to gain insights.
4. Prepare the data to better expose the underlying data patterns to Machine Learning algorithms.
5. Explore many different models and shortlist the best ones.
6. Fine-tune your models and combine them into a great solution.
7. Present your solution.
8. Launch, monitor, and maintain your system.

Frame the Problem and Look at the Big Picture


1. Define the objective in business terms.
2. How will your solution be used?
3. What are the current solutions/workarounds (if any)?
4. How should you frame this problem (supervised/unsupervised, online/offline, etc.)?
5. How should performance be measured?
6. Is the performance measure aligned with the business objective?
7. What would be the minimum performance needed to reach the business objective?
8. What are comparable problems? Can you reuse experience or tools?
9. Is human expertise available?
10. How would you solve the problem manually?
11. List the assumptions you (or others) have made so far.
12. Verify assumptions if possible.

Get the Data


Note: automate as much as possible so you can easily get fresh data.
1. List the data you need and how much you need.
2. Find and document where you can get that data.
3. Check how much space it will take.
4. Check legal obligations, and get authorization if necessary.
5. Get access authorizations.
6. Create a workspace (with enough storage space).
7. Get the data.
8. Convert the data to a format you can easily manipulate (without changing the data itself).
9. Ensure sensitive information is deleted or protected (e.g., anonymized).
10. Check the size and type of data (time series, sample, geographical, etc.).
11. Sample a test set, put it aside, and never look at it (no data snooping!).

Explore the Data


Note: try to get insights from a field expert for these steps.

1. Create a copy of the data for exploration (sampling it down to a manageable size if necessary).
2. Create a Jupyter notebook to keep a record of your data exploration.
3. Study each attribute and its characteristics:
• Name
• Type (categorical, int/float, bounded/unbounded, text, structured, etc.)
• % of missing values
• Noisiness and type of noise (stochastic, outliers, rounding errors, etc.)
• Usefulness for the task
• Type of distribution (Gaussian, uniform, logarithmic, etc.)
4. For supervised learning tasks, identify the target attribute(s).
5. Visualize the data.
6. Study the correlations between attributes.
7. Study how you would solve the problem manually.
8. Identify the promising transformations you may want to apply.
9. Identify extra data that would be useful (go back to “Get the Data”).
10. Document what you have learned.
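
A small, hedged sketch of a few of these exploration steps (attribute types, % of missing values, distributions, correlations), assuming the data has been loaded into a pandas DataFrame named df (the file name is a placeholder):

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv('dataset.csv')

## type of each attribute and % of missing values
print(df.dtypes)
print(df.isnull().mean() * 100)

## type of distribution of each numeric attribute
df.hist(bins=30, figsize=(12, 8))
plt.show()

## correlations between numeric attributes
sns.heatmap(df.corr(numeric_only=True), annot=True)
plt.show()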

Prepare the Data


Notes:
• Work on copies of the data (keep the original dataset intact).
• Write functions for all data transformations you apply, for five reasons:
➢ So you can easily prepare the data the next time you get a fresh dataset
➢ So you can apply these transformations in future projects
➢ To clean and prepare the test set
➢ To clean and prepare new data instances once your solution is live
➢ To make it easy to treat your preparation choices as hyperparameters

1. Data cleaning:
• Fix or remove outliers (optional).
• Fill in missing values (e.g., with zero, mean, median…) or drop their rows (or columns).

2. Feature selection (optional):


• Drop the attributes that provide no useful information for the task.

3. Feature engineering, where appropriate:


• Discretize continuous features.
• Decompose features (e.g., categorical, date/time, etc.).
• Add promising transformations of features (e.g., log(x), sqrt(x), x², etc.).
• Aggregate features into promising new features.

4. Feature scaling:
• Standardize or normalize features.
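
A minimal sketch of the feature-scaling step with scikit-learn, assuming numeric feature matrices X_train and X_test already exist (hypothetical names); the scaler is fitted on the training data only:

from sklearn.preprocessing import StandardScaler, MinMaxScaler

scaler = StandardScaler()                       ## or MinMaxScaler() to normalize to [0, 1]
X_train_scaled = scaler.fit_transform(X_train)  ## fit on the training data only
X_test_scaled = scaler.transform(X_test)        ## reuse the same scaler on the test data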

Shortlist Promising Models


Notes:

• If the data is huge, you may want to sample smaller training sets so you can train many different
models in a reasonable time (be aware that this penalizes complex models such as large neural nets or
Random Forests).
• Once again, try to automate these steps as much as possible.

1. Train many quick-and-dirty models from different categories (e.g., linear, naive Bayes, SVM, Random Forest,
neural net, etc.) using standard parameters.
2. Measure and compare their performance.
• For each model, use N-fold cross-validation and compute the mean and standard deviation of the
performance measure on the N folds.
3. Analyze the most significant variables for each algorithm.
4. Analyze the types of errors the models make.
• What data would a human have used to avoid these errors?
5. Perform a quick round of feature selection and engineering.
6. Perform one or two more quick iterations of the five previous steps.
7. Shortlist the top three to five most promising models, preferring models that make different types of errors.

Fine-Tune the System
Notes:

❖ You will want to use as much data as possible for this step, especially as you move toward the end of
fine-tuning.
❖ As always, automate what you can.

1. Fine-tune the hyperparameters using cross-validation:


• Treat your data transformation choices as hyperparameters, especially when you are not sure about
them (e.g., if you’re not sure whether to replace missing values with zeros or with the median value, or
to just drop the rows).
• Unless there are very few hyperparameter values to explore, prefer random search over grid search (see the sketch after this section). If training is very long, you may prefer a Bayesian optimization approach (e.g., using Gaussian process priors, as described by Jasper Snoek et al.).

2. Try Ensemble methods. Combining your best models will often produce better performance than running them
individually.

3. Once you are confident about your final model, measure its performance on the test set to estimate the
generalization error.

WARNING
Don’t tweak your model after measuring the generalization error: you would just start overfitting the test set.
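
As a hedged illustration of the random-search recommendation in step 1 above, a minimal RandomizedSearchCV sketch for a Random Forest; the parameter ranges, variable names, and the assumption that X_train/y_train/X_test/y_test already exist are all illustrative, not part of the checklist:

from scipy.stats import randint
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

param_distributions = {
    'n_estimators': randint(100, 500),
    'max_depth': randint(3, 20),
}
search = RandomizedSearchCV(
    RandomForestRegressor(random_state=42),
    param_distributions=param_distributions,
    n_iter=20,        ## number of random combinations to try
    cv=5,             ## 5-fold cross-validation
    scoring='r2',
    random_state=42,
)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)

## Only once the model is final: estimate the generalization error on the test set
print(search.best_estimator_.score(X_test, y_test))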

Present Your Solution

1. Document what you have done.


2. Create a nice presentation.
• Make sure you highlight the big picture first.
3. Explain why your solution achieves the business objective.
4. Don’t forget to present interesting points you noticed along the way.
• Describe what worked and what did not.
• List your assumptions and your system’s limitations.

5. Ensure your key findings are communicated through beautiful visualizations or easy-to-remember statements
(e.g., “the median income is the number-one predictor of housing prices”).

Launch!
1. Get your solution ready for production (plug into production data inputs, write unit tests, etc.).
2. Write monitoring code to check your system’s live performance at regular intervals and trigger alerts when it
drops.
• Beware of slow degradation: models tend to “rot” as data evolves.
• Measuring performance may require a human pipeline (e.g., via a crowdsourcing service).
• Also monitor your inputs’ quality (e.g., a malfunctioning sensor sending random values, or another
team’s output becoming stale). This is particularly important for online learning systems.
3. Retrain your models on a regular basis on fresh data (automate as much as possible).
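
A minimal, hypothetical sketch of the monitoring idea in step 2: score the model on recently labelled data at regular intervals and raise an alert when performance drops below a threshold (all names and the threshold value are assumptions):

def check_live_performance(model, X_recent, y_recent, threshold=0.75):
    ## score() returns R2 for regressors and accuracy for classifiers
    score = model.score(X_recent, y_recent)
    if score < threshold:
        print(f"ALERT: live score {score:.3f} fell below threshold {threshold}")
    return score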

DESCRIPTIVE STATISTICS
Mean : df["X"].mean()

Median : df["X"].median()

Mode : df["X"].mode()
• Most repeated value
• Unimodal Data - if the data has only 1 mode value
• Bimodal Data - if the data has 2 mode values
• Multimodal Data - if the data has more than 2 mode values

Measures of Dispersion or Measures of Spread(2nd Business Moment)


• Range, IQR, Variance, Standard Deviation
• all are applied to continuous variables only

Minimum : df["X"].min()

Maximum : df["X"].max()

Range : df["X"].max() - df["X"].min()


• Range=Maximum value - Minimum Value

Deviation(X-μ) : df["X-μ"]=df["X"] - df["X"].mean()


• Deviation = how far each data value is from the mean
• shows how dispersed the data is from the central value

Population Standard Deviation(σ) : df["X"].std(ddof=0)

Sample Variance(S square) : df["X"].var(ddof=1)

Sample Standard Deviation(S) : df["X"].std(ddof=1)

Coefficient of variation : df["X"].std(ddof=0)/df["X"].mean()

Percentile
• 0 percentile or Minimum : 0% of the data is below this value
• 25th percentile or Quartile 1 (Q1) : 25% of the data is below this value
• 50th percentile or Quartile 2 (Q2) : 50% of the data is below this value
• 75th percentile or Quartile 3 (Q3) : 75% of the data is below this value
• 100th percentile or Maximum : 100% of the data is below this value

0 percentile or minimum : df["X"].quantile(0)


25 percentile(Q1) : Q1=df["X"].quantile(0.25)
50 percentile(Q2) : Q2=df["X"].quantile(0.50)
75 percentile(Q3) : Q3=df["X"].quantile(0.75)
100 percentile or maximum : df["X"].quantile(1)

Inter Quartile Range(IQR) = Q3-Q1


IQR=Q3-Q1

lower limit = Q1-(1.5 * IQR)


LL=Q1-(1.5 * IQR)

upper limit = Q3 + (1.5 * IQR)


ul=Q3 + (1.5 * IQR)

Outlier
• A data value that is numerically distant from the rest of the data set
What happens if outliers are available?
• Outliers will impact statistical measures like the mean, variance, and standard deviation
• Outliers affect the mean, variance, and standard deviation more
• Outliers affect the median & IQR less
How to calculate outliers?
• A data value is considered an outlier if data value < lower limit (Q1 - 1.5*IQR) or data value > upper limit (Q3 + 1.5*IQR)
How to check whether outliers are present?
• We use a box plot

import matplotlib.pyplot as plt


plt.boxplot(df["X"])
plt.show()

import seaborn as sns


sns.boxplot(y=df["X"])
plt.show()

To extract outliers data


df[(df["X"]<ll) | (df["X"]>ul)]

Frequency Distribution
• Graphical representation of a variable with its corresponding frequencies.
Discrete Frequency Distribution : Graphical representation of a discrete variable with its corresponding frequencies.

df["Gender"].unique()

df["Gender"].value_counts()

sns.countplot(x=df["Gender"])
plt.show()

Continuous Frequency Distribution : Graphical representation of a continuous variable with its corresponding frequencies.
sns.histplot(df["Marks"],bins=7,stat="count")
plt.show()

Cumulative Frequency Distribution


sns.histplot(df["Marks"],bins=7,stat="count",cumulative=True)
plt.show()

FEATURE ENGINEERING
1. Handling Missing Values
2. Handling Imbalanced Dataset
3. SMOTE Handling Imbalanced Dataset
4. Handling Outliers
5. Nominal or OHE Encoding

1) Handling Missing Values

## Checking missing values


df.isnull().sum()

df.shape

df.dropna().shape

## column wise deletion


df.dropna(axis=1)

Imputation of Missing Values


1. Mean Value Imputation
sns.histplot(df['age'],kde=True)

df['Age_mean']=df['age'].fillna(df['age'].mean())

df[['Age_mean','age']]

2. Median Value Imputation - if we have outliers in the dataset


df['age_median']=df['age'].fillna(df['age'].median())

## only NaN rows


nan_rows=df[df['age'].isnull()][['age']]

3. Mode Imputation Technique - Categorical values


df[df['column_name'].isnull()]

df['column_name'].unique()

mode_value=df[df['embarked'].notna()]['embarked'].mode()[0]
df['embarked_mode']=df['embarked'].fillna(mode_value)

df[df['embarked'].isnull()][['embarked']]
df['embarked'].iloc[[61,829]]
df[['embarked_mode','embarked']].iloc[[61,829]]
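
As an optional, hedged alternative to the fillna-based approach above, scikit-learn's SimpleImputer covers the mean/median/mode strategies in one API; this sketch assumes the Titanic DataFrame df and its 'age' and 'embarked' columns from the example above:

from sklearn.impute import SimpleImputer

## numeric column: mean or median strategy
age_imputer = SimpleImputer(strategy='median')
df['age_imputed'] = age_imputer.fit_transform(df[['age']]).ravel()

## categorical column: most frequent value (mode)
embarked_imputer = SimpleImputer(strategy='most_frequent')
df['embarked_imputed'] = embarked_imputer.fit_transform(df[['embarked']]).ravel()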

Create csv file


df.to_csv('titanic.csv',index=False)

2) Handling Imbalanced Dataset
1. Up Sampling
2. Down Sampling

import numpy as np
import pandas as pd
#Set the random seed for reproducibility
np.random.seed(123)
#Create a dataframe with 2 classes
n_samples=1000
class_0_ratio=0.9
n_class_0=int(n_samples * class_0_ratio)
n_class_1=n_samples - n_class_0

n_class_0,n_class_1

## CREATE DATAFRAME WITH IMBALANCED DATASET


class_0=pd.DataFrame({
'feature_1': np.random.normal(loc=0, scale=1, size=n_class_0),
'feature_2': np.random.normal(loc=0, scale=1, size=n_class_0),
'target' : [0] * n_class_0
})
class_1=pd.DataFrame({
'feature_1': np.random.normal(loc=0, scale=1, size=n_class_1),
'feature_2': np.random.normal(loc=0, scale=1, size=n_class_1),
'target' : [1] * n_class_1
})

df=pd.concat([class_0,class_1]).reset_index(drop=True)
df['target'].value_counts()

Upsampling
df_minority=df[df['target']==1]
df_majority=df[df['target']==0]

from sklearn.utils import resample

df_minority_upsampled=resample(df_minority, replace=True, n_samples=len(df_majority), random_state=42)
df_upsampled=pd.concat([df_majority,df_minority_upsampled])
df_upsampled['target'].value_counts()

Down Sampling

#Set the random seed for reproducibility


np.random.seed(123)

#Create a dataframe with 2 classes


n_samples=1000
class_0_ratio=0.9
n_class_0=int(n_samples * class_0_ratio)
n_class_1=n_samples - n_class_0

## CREATE DATAFRAME WITH IMBALANCED DATASET


class_0=pd.DataFrame({
'feature_1': np.random.normal(loc=0, scale=1, size=n_class_0),
'feature_2': np.random.normal(loc=0, scale=1, size=n_class_0),
'target' : [0] * n_class_0
})
class_1=pd.DataFrame({
'feature_1': np.random.normal(loc=0, scale=1, size=n_class_1),
'feature_2': np.random.normal(loc=0, scale=1, size=n_class_1),
'target' : [1] * n_class_1
})
df=pd.concat([class_0,class_1]).reset_index(drop=True)

## Check the class distribution


print(df['target'].value_counts())

## DownSampling
df_minority=df[df['target']==1]
df_majority=df[df['target']==0]

from sklearn.utils import resample

df_majority_downsampled=resample(df_majority, replace=False, n_samples=len(df_minority), random_state=42)

df_majority_downsampled.shape

df_downsampled=pd.concat([df_majority_downsampled, df_minority])

df_downsampled['target'].value_counts()

3) SMOTE Handling Imbalanced Dataset


SMOTE (Synthetic Minority Over-sampling Technique) is a technique used in Machine Learning to address imbalanced datasets
where the minority class has significantly fewer instances than the majority class. SMOTE involves generating synthetic
instances of the minority class by interpolating between existing instances.

from sklearn.datasets import make_classification

X,y=make_classification(n_samples=1000,n_redundant=0,n_features=2,n_clusters_per_class=1,weights=[0.90],random_state=12)

import pandas as pd
df1=pd.DataFrame(X,columns=['f1','f2'])
df2=pd.DataFrame(y,columns=['target'])
final_df=pd.concat([df1,df2],axis=1)
final_df.head()
final_df['target'].value_counts()
import matplotlib.pyplot as plt
plt.scatter(final_df['f1'],final_df['f2'],c=final_df['target'])

!pip install imblearn

from imblearn.over_sampling import SMOTE

## resample so that both classes have an equal number of instances
smote=SMOTE()
X,y=smote.fit_resample(X,y)

len(y[y==0])
len(y[y==1])

df1=pd.DataFrame(X,columns=['f1','f2'])
df2=pd.DataFrame(y,columns=['target'])
oversample_df=pd.concat([df1,df2],axis=1)
plt.scatter(oversample_df['f1'],oversample_df['f2'],c=oversample_df['target'])
4) Handling Outliers

5 number Summary and box plot


## Minimum, Maximum, Median, Q1,Q3, IQR

minimum,Q1,median,Q3,maximum=np.quantile(lst_marks,[0,0.25,0.50,0.75,1.0])

minimum,Q1,median,Q3,maximum

IQR=Q3-Q1
print(IQR)

lower_fence=Q1 - 1.5*(IQR)
higher_fence=Q3 + 1.5*(IQR)
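
A hedged sketch of what can be done once the fences are computed, assuming the values are held in a pandas Series built from the lst_marks list used above: either drop the outliers or cap (winsorize) them at the fences.

import pandas as pd

s = pd.Series(lst_marks)

## Option 1: remove the values outside the fences
s_no_outliers = s[(s >= lower_fence) & (s <= higher_fence)]

## Option 2: cap values at the fences instead of dropping them
s_capped = s.clip(lower=lower_fence, upper=higher_fence)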

5) Nominal or OHE Encoding

Data Encoding
1. Nominal/OHE Encoding
2. Label and Ordinal Encoding
3. Target Guided Ordinal Encoding

Nominal/OHE Encoding
One Hot Encoding, also known as nominal encoding, is a technique used to represent categorical data as numerical data,
which is more suitable for machine learning algorithms. In this technique, each category is represented as a binary vector
where each bit corresponds to a unique category. For example, if we have a categorical variable 'color' with three possible
values (red, green, blue), we can represent it using one hot encoding as follows:
• Red:[1,0,0]
• Green:[0,1,0]
• Blue:[0,0,1]

import pandas as pd
from sklearn.preprocessing import OneHotEncoder

## Create a simple Dataframe


df=pd.DataFrame({
'color':['red','blue','green','green','red','blue']
})

## Create an instance of OneHotEncoder


encoder=OneHotEncoder()

## Perform fit and Transform


encoded=encoder.fit_transform(df[['color']]).toarray()
encoder_df=pd.DataFrame(encoded,columns=encoder.get_feature_names_out())

## for new data


encoder.transform([['blue']]).toarray()

pd.concat([df,encoder_df],axis=1)

Label Encoding
Label encoding and Ordinal Encoding are two techniques used to encode categorical data as numerical data.
Label encoding involves assigning a unique numerical label to each category in the variable. The labels are usually assigned
in alphabetical order or based on the frequency of the categories. For example, if we have a categorical variable 'color' with
three possible values (red, green, blue), we can represent it using label encoding as follows:
• Red : 1
• Green:2
• Blue :3

from sklearn.preprocessing import LabelEncoder


lbl_encoder=LabelEncoder()

lbl_encoder.fit_transform(df['color'])   ## LabelEncoder expects a 1D array/Series

Ordinal Encoding
It is used to encode categorical data that has an intrinsic order or ranking. In this technique, each category is assigned a
numerical value based on its position in the order. For example, if we have a categorical variable 'education level' with
four possible values (high school, college, graduate, post-graduate), we can represent it using ordinal encoding as follows:
• High School : 1
• College : 2
• Graduate : 3
• Post-graduate:4
## Ordinal Encoding
from sklearn.preprocessing import OrdinalEncoder

## Create a sample dataframe with an ordinal variable


df=pd.DataFrame({
'size' : ['small','medium','large','medium','small','large']
})

## Create an instance of OrdinalEncoder and then fit_transform


encoder=OrdinalEncoder(categories=[['small','medium','large']])

encoder.fit_transform(df[['size']])

encoder.transform([['small']])

Target Guided Ordinal Encoding


It is a technique used to encode categorical variables based on their relationship with the target variable. This encoding
technique is useful when we have a categorical variable with a large number of unique categories and we want to use it
as a feature in our machine learning model.
In Target Guided Ordinal Encoding, we replace each category in the categorical variable with a numerical value based on
the mean or median of the target variable for that category. This creates a monotonic relationship between the
categorical variable and the target variable, which can improve the predictive power of our model.

df=pd.DataFrame({
'city' : ['New York', 'London', 'Paris','Tokyo','New York','Paris'],
'price': [200,150,300,250,180,320]
})

mean_price=df.groupby('city')['price'].mean().to_dict()
df['city_encoded']=df['city'].map(mean_price)

df[['price', 'city_encoded']]

Data Preprocessing
a) Data Cleaning
b) Data Wrangling
c) Feature Selection

Data Understanding

Collect & load data

df=pd.read_csv('dataset.csv')
df=pd.read_excel('dataset.xlsx', sheet_name='sheet2')   ## use read_excel for Excel sheets

Data Exploration

df.head()
df.shape
df.columns
df.dtypes
df.info()
df.describe()

## Missing values
df.isnull().sum()

## Duplicate records
df[df.duplicated()]

## Correlation
df.corr()

Data Preprocessing(EDA)
• Data Cleaning
• Data Wrangling
• Feature Selection

