
MACHINE LEARNING

Professional Core (CET3006B)


T. Y. B.Tech CSE, Sem-VI
2023-2024

SoCSE – Dept. of Computer Engineering & Technology


MACHINE LEARNING
• Credits: 3+1 (Four)
• Examination scheme: Total Marks - 100
  - 30 Marks CCA
  - 30 Marks LCA
  - 40 Marks End Term Examination

MACHINE LEARNING
Course Objectives:
1. Knowledge:
i. To learn data preparation techniques for Machine Learning methods
ii. To understand advanced supervised and unsupervised learning methods
2. Skills:
i. To apply suitable pre-processing techniques on various datasets for Machine Learning applications
ii. To design and implement various advanced supervised and unsupervised learning methods
3. Attitude:
i. To be able to choose and apply suitable ML techniques to solve a problem
ii. To compare and analyze various advanced supervised and unsupervised learning methods

ADVANCES IN MACHINE LEARNING

Course Outcomes:
After completion of the course the students will be able to:

1. Analyze and apply different data preparation techniques for Machine Learning applications

2. Identify, analyze, and compare appropriate supervised learning algorithms for a given problem

3. Identify, analyze, and compare unsupervised and semi-supervised algorithms

4. Design and implement Machine Learning techniques for real-time applications

Course Contents:

Unit 1. Introduction to ML

Unit 2. Supervised Learning: Classification

Unit 3. Unsupervised Learning: Clustering

Unit 4. Performance Analysis and Model Evaluation

Unit 5. Trends in ML

Course Contents:
Laboratory Exercises:
1. Implement various pre-processing techniques on a given dataset.
2. Implement a KNN classifier for a given dataset.
3. Implementation of tree-based classifiers.
4. Implementation of SVM; comparison with tree-based classifiers.
5. Implementation of ensembles and Random Forests; analyze the performance.
6. Implementation and comparison of various clustering techniques such as Spectral clustering and DBSCAN.
7. Implement a regression technique and evaluate its performance.
8. Mini-project based on a suitable Machine Learning dataset.
Course Contents:
Text Books:
1. E. Alpaydin, Introduction to Machine Learning, PHI, 2004.
2. Peter Flach, Machine Learning: The Art and Science of Algorithms that Make Sense of Data, Cambridge University Press, 2012.
3. T. Mitchell, Machine Learning, McGraw-Hill, 1997.
4. Josh Patterson, Adam Gibson, Deep Learning: A Practitioner's Approach, O'Reilly, SPD, ISBN 978-93-5213-604-9, 1st Edition, 2017.
Reference Books:
1. C. M. Bishop, Pattern Recognition and Machine Learning, Springer, 1st Edition, 2013.
2. Ian H. Witten, Eibe Frank, Mark A. Hall, Data Mining: Practical Machine Learning Tools and Techniques, Elsevier, 3rd Edition.
3. Shai Shalev-Shwartz, Shai Ben-David, Understanding Machine Learning: From Theory to Algorithms, Cambridge University Press, ISBN 978-1-107-51282-5, 2014.
Course Contents:
Supplementary Reading:
1. Aurélien Géron, Hands-On Machine Learning with Scikit-Learn and TensorFlow, O'Reilly Media.
Web Resources:
1. Popular dataset resource for ML beginners: http://archive.ics.uci.edu/ml/index.php
Web Links:
1. https://www.kaggle.com/datasets
2. http://deeplearning.net/datasets/
MOOCs:
1. https://swayam.gov.in/nd1_noc20_cs29/preview
2. https://swayam.gov.in/nd1_noc20_cs44/preview
Syllabus-Unit 1
Introduction to ML:
Introduction, Data Preparation
Data Encoding Techniques
Data Pre-processing techniques for ML applications.
Feature Engineering:
Dimensionality Reduction using PCA
Exploratory Data Analysis
Feature Selection

AI Vs. ML

INTRODUCTION

Cntd..

To solve a problem on a computer, we need an algorithm.

An algorithm is a sequence of instructions that is carried out to transform the input to the output.
Ex. Sorting - Input: a set of numbers; Output: an ordered list
For some tasks, however, we do not have an algorithm; this is where we use machine learning.
Ex. Telling spam emails apart from legitimate emails
Input: an email document (a file of characters); Output: a yes/no answer indicating whether the message is spam or not
We would like the computer (machine) to extract the algorithm for this task automatically.
Cntd..

(source: https://medium.com/analytics-vidhya/introduction-to-machine-learning-e1b9c055039c)

• Machine learning is the "field of study that gives computers the ability to learn without being explicitly programmed." - Arthur Samuel (1959)
• In other words, it is concerned with the question of how to construct computer programs that automatically improve with experience.
Cntd..
• A computer program is said to learn from experience 'E' with respect to some class of tasks 'T' and performance measure 'P' if its performance at tasks in 'T', as measured by 'P', improves with experience 'E'. - Tom M. Mitchell

• Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed.

• Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.
Cntd..
Example 1
Classify Email as spam or not spam
• Task (T): Classify email as spam or not spam
• Experience (E): watching the user mark/label emails as spam or not spam
• Performance (P): the number or fraction of emails correctly classified as spam or not spam
Cntd..
Example 2
Recognizing handwritten digits/characters
• Task (T): Recognizing handwritten digits
• Experience (E): watching the user mark/label handwritten digits into 10 classes (0-9) and identifying the underlying patterns
• Performance (P): the number or fraction of handwritten digits correctly classified
Why is Machine Learning Important?
• Human expertise does not exist
Navigating on Mars
industrial/manufacturing control
mass spectrometer analysis, drug design, astronomic discovery
• Black-box human expertise, OR some tasks cannot be defined well except by examples
face/handwriting/speech recognition/ recognizing people
driving a car, flying a plane

• Relationships and correlations can be hidden within large amounts of data
(e.g., stock market analysis)
• Environments change over time.
(e.g., routing on a computer network)
Cntd..
• The amount of knowledge available about certain tasks might be too large for explicit encoding by
humans
(e.g., medical diagnosis).
• New knowledge about tasks is constantly being discovered by humans. It may be difficult to
continuously re-design systems “by hand”.
• Rapidly changing phenomena
credit scoring, financial modeling
diagnosis, fraud detection
• Need for customization/personalization
personalized news reader
movie/book recommendation

How does Machine Learning help us in daily life?
Social networking :

• Use of appropriate emotions, suggestions about friend tags on Facebook, filters on Instagram, content recommendations and suggested followers on social media platforms, etc., are examples of how machine learning helps us in social networking.


Personal finance and banking solutions

• Whether it's fraud prevention, credit decisions, or checking deposits on our smartphones, machine learning does it all.


Commute estimation

• Identification of the route to our selected destination, estimation of the time required to reach that destination using different transportation modes, calculation of traffic time, and so on are all made possible by machine learning.
Applications of Machine Learning

• Face detection
• Speech recognition
• Stock prediction
• Hand-written digit recognition
• Spam email detection
• Computational biology
• Machine translation
• Recommender systems
• Self-parking cars
• Guiding robots
• Airplane navigation systems
• Space exploration
• Medicine
• Supermarket chains
• Data mining
Examples…
Example 1: hand-written digit recognition

Learn a classifier f(x) such that f : x → {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}

Input training data: e.g. 500 samples
Example 2: Face detection
Input: an image; the classes are the people to be recognized, e.g. [non-face, frontal-face, profile-face], and the learning program should learn to associate the face images with identities.

This problem is more difficult because there are more classes, the input image is larger, and a face is 3-dimensional, so differences in pose and lighting cause significant changes in the image. There may also be occlusion (blockage) of certain inputs; e.g. glasses may hide the eyes and eyebrows, and a beard may hide the chin.
Example 3: Spam detection

• This is a classification problem
• The task is to classify email into spam/non-spam
• It requires a learning system, as the "enemy" keeps innovating
Example 4: Stock price prediction

• The task is to predict the stock price at a future date
• This is a regression task, as the output is continuous
Example 5: Computational Biology

Example : Weather prediction

Example : Medical Diagnosis
❖ Inputs are the relevant information about the patient, and the classes are the illnesses.
❖ The inputs contain the patient's age, gender, past medical history, and current symptoms.
❖ Some tests may not have been applied to the patient, and thus these inputs would be missing.
❖ Tests take time, may be costly, and may inconvenience the patient, so we do not want to apply them unless we believe they will give us valuable information.
❖ In the case of medical diagnosis, a wrong decision may lead to a wrong treatment or no treatment, and in cases of doubt it is preferable that the classifier reject the input and defer the decision to a human expert.
Example : Agriculture

A Crop Yield Prediction App in Senegal Using Satellite Imagery (Video Link)
https://www.youtube.com/watch?v=4OnBGkhA4jc&t=160s
Data Preparation

Data Preparation Pipeline

Why is Data Preparation important?

Sometimes the data in data sets has missing or incomplete information, which leads to less accurate or incorrect predictions.
Further, sometimes data sets are clean but not adequately shaped, such as aggregated or pivoted data, and some lack business context.
Hence, after collecting data from various data sources, data preparation is needed to transform the raw data.
Significant advantages of data preparation in machine learning are as follows:
• It helps to provide reliable prediction outcomes in various analytics operations.
• It helps identify data issues or errors and significantly reduces the chances of errors.
• It increases decision-making capability.
• It reduces overall project cost (data management and analytics cost).
• It helps to remove duplicate content to make it worthwhile for different applications.
• It increases model performance.
Steps in Data Preparation Process
1. Understand the problem:
Understand the actual problem to be solved.
2. Data collection:
Collect data from various potential sources. These data sources may be either within the enterprise or from third-party vendors.
Diverse data collection helps reduce and mitigate bias in the ML model.
So, before collecting data, always analyze it and ensure that the data set was collected from diverse people, geographical areas, and perspectives.
3. Profiling and Data Exploration:
Explore the data for trends, outliers, exceptions, and incorrect, inconsistent, missing, or skewed information, etc.
Data exploration helps to identify problems such as collinearity - a situation in which standardization of data sets and other data transformations become necessary.
Steps in Data Preparation Process

4. Data Cleaning and Validation:
Data cleaning and validation techniques help determine and resolve inconsistencies, outliers, anomalies, incomplete data, etc.
Clean data helps to find valuable patterns and information in the data and ignores irrelevant data in the datasets.
5. Data Formatting:
After cleaning and validating the data, the next step is to ensure that the data is correctly formatted.
Steps in Data Preparation Process
6. Feature engineering and selection:
• Feature engineering is defined as the study of selecting, manipulating, and transforming raw data into valuable features.
There are various feature engineering techniques used in Machine Learning, such as the following (a short sketch follows this list):
Imputation:
• Feature imputation is the technique of filling incomplete fields in the dataset.
• It is essential because most machine learning models don't work when there is missing data in the dataset.
• The missing-values problem can be reduced by using techniques such as single-value imputation, multiple-value imputation, K-Nearest Neighbors, deleting the row, etc.
Encoding:
• Feature encoding is defined as the method of converting string values into numeric form.
• This is important as ML models require all values to be in numeric format.
• Feature encoding includes label encoding and One Hot Encoding (also known as get_dummies).
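A minimal pandas/scikit-learn sketch of imputation and encoding; the column names and values are illustrative, not from the course dataset:

import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import LabelEncoder

# Toy data with a missing numeric field and a categorical column
df = pd.DataFrame({"age": [25, None, 40, 35],
                   "city": ["Pune", "Mumbai", "Pune", "Delhi"]})

# Imputation: fill the missing numeric value with the column mean
df["age"] = SimpleImputer(strategy="mean").fit_transform(df[["age"]]).ravel()

# Label encoding: map each category to an integer code
df["city_label"] = LabelEncoder().fit_transform(df["city"])

# One Hot Encoding via get_dummies: one 0/1 column per category
df = pd.get_dummies(df, columns=["city"])
print(df)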
Data Pre-Processing

1.Data cleaning Data preprocessing


2.Data integration
3.Data transformation
4.Data reduction
5.Data Discretization

39
Cntd..
• Data preparation is also known as data "pre-processing," "data wrangling," "data cleaning," and "feature engineering."
• It is the stage of the machine learning lifecycle that comes after data collection.
The data preparation process can be complicated by issues such as:
1. Missing or incomplete records: Missing data sometimes appears as empty cells, placeholder values (e.g., NULL or N/A), or a particular character, such as a question mark.
Cntd..
2. Outliers or anomalies: unexpected values
• ML algorithms are sensitive to the range and distribution of values when data comes from unknown sources.
• These values can spoil the entire machine learning training process and the performance of the model.
• Hence, it is essential to detect outliers or anomalies, for example through visualization techniques.
Cntd..
3. Unstructured data format:
• Data comes from various sources and needs to be extracted into a usable format.
• Hence, before deploying an ML project, always consult with domain experts or import data from known sources.
4. Limited or sparse features / attributes:
• Whenever data comes from a single source, it contains limited features,
• so it is necessary to import data from various sources for feature enrichment, or to build multiple features in the datasets.
5. Understanding feature engineering:
• Feature engineering helps develop additional content in ML models, increasing model performance and accuracy of predictions.
Feature Engineering
Feature engineering is the pre-processing step of machine learning, which is used to transform raw data into features that can be used for creating a predictive model using machine learning or statistical modelling.
Feature Engineering
What is a feature?
• Generally, all machine learning algorithms take input data to generate an output.
• The input data remains in tabular form, consisting of rows (instances or observations) and columns (variables or attributes); these attributes are often known as features.
Feature Engineering processes:
1. Feature Creation: Feature creation is finding the most useful variables to be used in a predictive model.
2. Transformations: The transformation step of feature engineering involves adjusting the predictor variables to improve the accuracy and performance of the model.
Feature Engineering
3. Feature Extraction: Feature extraction is an automated feature engineering process that generates new variables by extracting them from the raw data.
The main aim of this step is to reduce the volume of data so that it can be easily used and managed for data modelling.
Feature extraction methods include cluster analysis, text analytics, edge detection algorithms, and principal component analysis (PCA).
4. Feature Selection: Feature selection is a way of selecting the subset of the most relevant features from the original feature set by removing the redundant, irrelevant, or noisy features.
Feature Engineering
Steps in Feature Engineering
Data Preparation:
• In this step, raw data acquired from different sources is prepared and put into a suitable format so that it can be used in the ML model.
• Data preparation may involve cleaning of data, delivery, data augmentation, fusion, ingestion, or loading.
Exploratory Analysis:
• This step involves analyzing and investigating the data set and summarizing the main characteristics of the data.
• Different data visualization techniques are used to better understand the manipulation of data sources, to find the most appropriate statistical technique for data analysis, and to select the best features for the data.
Benchmark:
• Benchmarking is the process of setting a standard baseline for accuracy against which all the variables are compared.
• The benchmarking process is used to improve the predictability of the model and reduce the error rate.
Feature Engineering
Feature Engineering Techniques:
1. Imputation: Imputation is responsible for handling irregularities within the dataset.
• For numerical data imputation, a default value can be imputed in a column, and missing values can be filled with the mean or median of the column.
• For categorical data imputation, missing values can be replaced with the most frequently occurring value in the column.
2. Handling Outliers: This technique first identifies the outliers and then removes them.
• Standard deviation can be used to identify outliers;
• the Z-score can also be used to detect outliers.
Feature Engineering
Feature Engineering Techniques:
3. Log transform: The log transform helps in handling skewed data, making the distribution more approximately normal after transformation.
4. Binning: Binning can be used to normalize noisy data. This process involves segmenting different features into bins.
5. Feature Split: Feature splitting is the process of splitting a feature into two or more parts to make new features.
6. One hot encoding: A technique that converts categorical data into a form that can be easily understood by machine learning algorithms and hence can improve prediction. (A small sketch of several of these techniques follows.)
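A minimal sketch of several of these techniques on toy data; the column names and threshold are illustrative:

import numpy as np
import pandas as pd

df = pd.DataFrame({"income": [20, 22, 25, 24, 500],      # 500 looks like an outlier
                   "grade": ["A", "B", "A", "C", "B"]})

# Handling outliers: keep rows whose Z-score magnitude is below a threshold
z = (df["income"] - df["income"].mean()) / df["income"].std()
df_no_outliers = df[np.abs(z) < 1.5]

# Log transform: log1p compresses the right tail and handles zeros safely
df["log_income"] = np.log1p(df["income"])

# Binning: segment a numeric feature into discrete bins
df["income_bin"] = pd.cut(df["income"], bins=3, labels=["low", "mid", "high"])

# One hot encoding of the categorical feature
df = pd.get_dummies(df, columns=["grade"])
print(df)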
Types of Learning
• Supervised learning

Types of Learning
Supervised Learning:
The aim is to learn a mapping from the input to an output whose correct values are provided by a supervisor.
1. Classification: Data is labelled, meaning it is assigned a class, for example spam/non-spam or fraud/non-fraud.
E.g., for a financial institution, the input to the classifier is savings and income, and the output is one of the classes, like high risk or low risk, based on the following classification rule (a code sketch of this rule follows):
❑ if income > δ1 and savings > δ2 then low risk else high risk
2. Regression: Data is labelled with a real value rather than a class label.
E.g., the price of a stock over time.
E.g., predict the price of a used car:
Input: brand, year, engine capacity, mileage, and other information
Output: price of the car
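A tiny sketch of the classification rule above; the threshold values standing in for δ1 and δ2 are purely illustrative, since in practice a learner would fit them from labelled training data:

def credit_risk(income, savings, delta1=50000, delta2=10000):
    # Illustrative thresholds; a learner would fit delta1/delta2 from data
    if income > delta1 and savings > delta2:
        return "low risk"
    return "high risk"

print(credit_risk(income=60000, savings=15000))   # -> low risk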
Unsupervised Learning
• In unsupervised learning there is no labelled output; the aim is to find regularities, structure, or groupings in the input data.

Example of Unsupervised learning
• Clustering
• Association

Example of Semi-supervised learning
Reinforcement Learning
• Reinforcement learning is learning from mistakes.
• Place a reinforcement learning algorithm into any environment and it will make a lot of mistakes in the beginning.
• As we provide some sort of signal to the algorithm that associates good behaviors with a positive signal and bad behaviors with a negative one,
• we can reinforce our algorithm to prefer good behaviors over bad ones.
• Over time, our learning algorithm learns to make fewer mistakes than it used to.
Reinforcement Learning
Where is reinforcement learning in the real world?
• Video Games
• Industrial Simulation
• Resource Management

Key Elements of Machine Learning

• There are tens of thousands of machine learning algorithms, and hundreds of new algorithms are developed every year.
• Every machine learning algorithm has three components:
1. Representation: how to represent knowledge.
Examples include decision trees, sets of rules, instances, graphical models, neural networks, support vector machines, model ensembles, and others.
2. Evaluation: the way to evaluate candidate programs (hypotheses).
Examples include accuracy, precision and recall, squared error, likelihood, posterior probability, cost, margin, entropy, K-L divergence, and others.
3. Optimization: the way candidate programs are generated, known as the search process.
Examples include combinatorial optimization, convex optimization, and constrained optimization.
• All machine learning algorithms are combinations of these three components.
• This gives a framework for understanding all algorithms.
Aspects of developing a learning system:
training data, concept representation, function approximation
• For training and testing purposes of our model we need to split the dataset into three distinct sets: the training set, the validation set, and the test set
• Training set:
• A set of data used to train the model
• It is used to fit the model
• The model sees and learns from this data
• Later on, the trained model can be deployed and used to accurately predict on new data that it has not seen before
• Labeled data is used
Validation set

• The validation set is a set of data separate from the training data
• It is used to validate our model during training
• It gives information which is used for tuning the model's hyperparameters
• It ensures that our model is not overfitting to the data in the training set
• Labeled data is used
Test Set
• A set of data used to test the model
• The test set is separated from both the training set and the validation set
• Once the model is trained and validated using the training and validation sets, the model is used to predict the output for the data in the test set
• Unlabeled data is used
Data Split

Train | Validation | Test

• Rules for performing the data split operation:
• In order to avoid correlation, the original dataset must be randomly shuffled before applying the split phase
• All the splits must represent the original distribution
• The split percentage is most commonly 60% for training, 20% for validation, and 20% for testing
• With scikit-learn this can be done using the train_test_split() function, as sketched below
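A minimal sketch of the 60/20/20 split on toy data, using two calls to train_test_split(); the second call takes 25% of the remaining 80%, i.e. 20% of the total:

import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)    # toy features
y = np.arange(50) % 2                # toy labels

# 1st split: hold out 20% for testing (shuffled)
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.20, shuffle=True, random_state=42)

# 2nd split: 25% of the remaining 80% = 20% of the total for validation
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, shuffle=True, random_state=42)

print(len(X_train), len(X_val), len(X_test))     # 30 10 10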
Exploratory Data Analysis

Exploratory Data Analysis refers to the critical process of performing initial investigations on data so as to discover patterns, spot anomalies, test hypotheses, and check assumptions with the help of summary statistics and graphical representations.
Exploratory Data Analysis
Typical graphical techniques used in EDA are:
• Box plot
• Histogram
• Multi-vari chart
• Run chart
• Pareto chart
• Scatter plot
• Stem-and-leaf plot
• Parallel coordinates
• Odds ratio
• Targeted projection pursuit
EDA Example
• Wine quality data set from UCI ML repository
• Imported the necessary libraries (for this example pandas, numpy, matplotlib, and seaborn) and loaded the data set.
EDA

• The original data is separated by the delimiter ";" in the given data set.
• To take a closer look at the data we use the ".head()" function of the pandas library, which returns the first five observations of the data set. Similarly, ".tail()" returns the last five observations.
EDA Techniques

• We find the total number of rows and columns in the data set using ".shape".
• The dataset comprises 4898 observations and 12 characteristics,
• of which one is the dependent variable and the remaining 11 are independent variables - the physico-chemical characteristics.
• It is also good practice to know the columns and their corresponding data types, along with finding whether they contain null values or not, as sketched below.
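A sketch of these first steps, assuming the white-wine CSV from the UCI repository at its standard location:

import pandas as pd

url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "wine-quality/winequality-white.csv")
df = pd.read_csv(url, sep=";")       # original data uses ';' as delimiter

print(df.head())                     # first five observations
print(df.tail())                     # last five observations
print(df.shape)                      # (4898, 12)
df.info()                            # column names, dtypes, null counts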
EDA: Exploratory Data Analysis

Plotting using matplotlib:

import pandas as pd
import matplotlib.pyplot as plt

iris = pd.read_csv("iris.csv")
iris.head(5)

# 2-D scatter plot
iris.plot(kind='scatter', x='sepal_length', y='sepal_width')
plt.show()

Plotting using seaborn:

import seaborn as sns

# 2-D scatter plot with color-coding for each flower type/class.
# Here 'sns' corresponds to seaborn.
sns.set_style("whitegrid")
sns.FacetGrid(iris, hue="species", height=4) \
   .map(plt.scatter, "sepal_length", "sepal_width") \
   .add_legend()
plt.show()
EDA: Exploratory Data Analysis
What about 4-D, 5-D or n-D scatter plots?

3D scatter plot (see https://plot.ly/pandas/3d-scatter-plots/):

import plotly.express as px

iris = px.data.iris()
fig = px.scatter_3d(iris, x='sepal_length', y='sepal_width',
                    z='petal_width', color='species')
fig.show()

Pair plot (only possible to view 2-D patterns at a time):

import seaborn as sns
import matplotlib.pyplot as plt

plt.close()
sns.set_style("whitegrid")
sns.pairplot(iris, hue="species", height=3)   # 'size=' in older seaborn
plt.show()

Violin plot:

sns.violinplot(x="species", y="petal_length", data=iris)
plt.show()
Progressive Data Analysis
• Data has only float and integer values.
• No variable column has null/missing values.

PDA Techniques

• The describe() function in pandas is very handy for getting various summary statistics.
• This function returns the count, mean, standard deviation, minimum and maximum values, and the quantiles of the data, as sketched below.
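Continuing with the wine-quality DataFrame loaded earlier:

# count, mean, std, min, 25%/50%/75% quantiles, and max for each column
print(df.describe())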
Data Preparation: Types of Data

• Here, as you can notice, the mean value is larger than the median value of each column (the median is the 50% entry - the 50th percentile - in the index column).

• There is a notably large difference between the 75th percentile and max values of the predictors "residual sugar", "free sulfur dioxide", and "total sulfur dioxide".

• Observations 1 and 2 thus suggest that there are extreme values - outliers - in our data set.
Graph Visualisation Techniques

• Let's now explore the data with graphs. Python has a visualization library, seaborn, which is built on top of matplotlib.
• It provides very attractive statistical graphs for performing both univariate and multivariate analysis.
• To use linear regression for modelling, it is necessary to remove correlated variables to improve the model.
• One can find correlations using pandas' ".corr()" function and can visualize the correlation matrix using a heatmap in seaborn, as sketched below.
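A minimal sketch of the correlation heatmap, continuing with the same DataFrame:

import matplotlib.pyplot as plt
import seaborn as sns

corr = df.corr()                          # pairwise correlations
sns.heatmap(corr, annot=True, fmt=".2f")  # annot=True prints the values
plt.show()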
Data Pre-processing techniques for ML applications

• Dark shades represent positive correlation while lighter shades represent negative correlation.
• If you set annot=True, you'll get the values by which features are correlated to each other in the grid cells.
Box Plot

• A box plot (or box-and-whisker plot) shows the distribution of quantitative data in a way that facilitates comparisons between variables.
• The box shows the quartiles of the dataset while the whiskers extend to show the rest of the distribution.
• The box plot (a.k.a. box-and-whisker diagram) is a standardized way of displaying the distribution of data based on the five-number summary:
• Minimum
• First quartile
• Median
• Third quartile
• Maximum
• In the simplest box plot the central rectangle spans the first quartile to the third quartile (the interquartile range, or IQR).
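A minimal box-plot sketch for the same wine-quality DataFrame:

import matplotlib.pyplot as plt
import seaborn as sns

# One box per column: the box spans Q1-Q3 (the IQR), the inner line is
# the median, and the whiskers extend to the rest of the distribution
sns.boxplot(data=df)
plt.xticks(rotation=45)
plt.show()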
Sparse Matrix
• In numerical analysis and scientific computing, a sparse matrix or sparse array is a matrix in which most of the elements are zero.
• By contrast, if most of the elements are nonzero, the matrix is considered dense.
• The number of zero-valued elements divided by the total number of elements (e.g., m × n for an m × n matrix) is called the sparsity of the matrix (which is equal to 1 minus the density of the matrix).
• Using these definitions, a matrix is sparse when its sparsity is greater than 0.5.
• Conceptually, sparsity corresponds to systems with few pairwise interactions.
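A small NumPy/SciPy sketch illustrating sparsity and sparse storage:

import numpy as np
from scipy.sparse import csr_matrix

dense = np.array([[0, 0, 3],
                  [4, 0, 0],
                  [0, 0, 0]])

sparsity = 1.0 - np.count_nonzero(dense) / dense.size
print(sparsity)                  # 7/9 ~= 0.78 > 0.5, so the matrix is sparse

sparse = csr_matrix(dense)       # stores only the nonzero entries
print(sparse)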
Feature Engineering: Feature selection

• In machine learning and statistics, feature selection is also known as variable selection, attribute selection, or variable subset selection.
• It is the process of selecting a subset of relevant features (variables, predictors) for use in model construction.
• Feature selection is the process of reducing the number of input variables when developing a predictive model.
• It is desirable to reduce the number of input variables both to reduce the computational cost of modeling and, in some cases, to improve the performance of the model.
Feature Engineering: Feature selection

Feature selection is primarily focused on removing non-informative or redundant predictors from the model.
Feature selection techniques are used for several reasons:
• simplification of models to make them easier to interpret by researchers/users
• shorter training times
• to avoid the curse of dimensionality
• enhanced generalization by reducing overfitting
Feature Engineering: Feature selection

• There are two main types of feature selection techniques: supervised and unsupervised
• Supervised methods may be divided into wrapper, filter, and intrinsic
Feature Engineering: Feature selection

• Unsupervised feature selection techniques ignore the target variable, e.g. methods that remove redundant variables using correlation.
• Supervised feature selection techniques use the target variable, e.g. methods that remove irrelevant variables.

Supervised Feature Selection Techniques:
• Filter: Select subsets of features based on their relationship with the target.
  • Statistical methods
  • Feature importance methods
• Wrapper: Search for well-performing subsets of features.
  • RFE (Recursive Feature Elimination)
• Intrinsic: Algorithms that perform automatic feature selection during training.
  • Decision trees
Feature Engineering: Feature selection

Filter:
• Statistical-based feature selection methods involve evaluating the relationship between each input variable and the target variable using statistics, and selecting those input variables that have the strongest relationship with the target variable.
• These methods can be fast and effective, although the choice of statistical measure depends on the data type of both the input and output variables.
Feature Engineering: Feature selection

Wrapper:
• Wrapper feature selection methods create many models with different subsets of input features and select those features that result in the best-performing model according to a performance metric.

Intrinsic:
• Some machine learning algorithms perform feature selection automatically as part of learning the model. These techniques are considered intrinsic feature selection methods.
• E.g. decision trees
Feature Engineering: Feature selection

Statistics for Filter-Based Feature Selection Methods

It is common to use correlation-type statistical measures between input and output variables as the basis for filter feature selection.
Common input variable data types:
• Numerical Variables
  • Integer Variables
  • Floating Point Variables
• Categorical Variables
  • Boolean Variables
  • Ordinal Variables
  • Nominal Variables
Feature Engineering: Feature selection

For a filter-based feature selection method:
• Consider two broad categories of variable types: numerical and categorical.
• Also consider the two main groups of variables: input and output.
• Input variables are those that are provided as input to a model.
• In feature selection, it is this group of variables that we wish to reduce in size.
• Output variables are those which a model is intended to predict, often called the response variable.
The response variable typically indicates the type of predictive modeling problem:
• Numerical output: regression predictive modeling problem.
• Categorical output: classification predictive modeling problem.
Feature Engineering: Feature selection

Feature Engineering: Feature selection

1. Numerical Input, Numerical Output
• This is a regression predictive modeling problem with numerical input variables.
• The most common techniques are to use a correlation coefficient, such as Pearson's for a linear correlation, or rank-based methods for a nonlinear correlation.
  • Pearson's correlation coefficient (linear)
  • Spearman's rank coefficient (nonlinear)
2. Numerical Input, Categorical Output
• This is a classification predictive modeling problem with numerical input variables.
• This might be the most common example of a classification problem.
  • ANOVA correlation coefficient (linear)
  • Kendall's rank coefficient (nonlinear)
Feature Engineering: Feature selection

3. Categorical Input, Numerical Output
• This is a regression predictive modeling problem with categorical input variables.
• Nevertheless, you can use the same measures as for "Numerical Input, Categorical Output", but in reverse.
4. Categorical Input, Categorical Output
• This is a classification predictive modeling problem with categorical input variables.
• The most common correlation measure for categorical data is the chi-squared test.
  • Chi-squared test (contingency tables)
  • Mutual information
(A filter-selection sketch follows.)
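A minimal filter-method sketch for the "Numerical Input, Categorical Output" case, using the ANOVA F-test on the iris data:

from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)       # numerical inputs, class labels

# Score each feature against the target and keep the k best
selector = SelectKBest(score_func=f_classif, k=2)
X_selected = selector.fit_transform(X, y)

print(selector.scores_)                 # per-feature ANOVA F-statistics
print(X_selected.shape)                 # (150, 2)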
Feature Engineering: Feature selection

Recursive Feature Elimination
• Recursive Feature Elimination, or RFE for short, is a feature selection algorithm.
• A machine learning dataset for classification or regression is comprised of rows and columns, like an Excel spreadsheet.
• Rows are often referred to as samples and columns are referred to as features.
• Feature selection refers to techniques that select a subset of the most relevant features (columns) of a dataset.
• Fewer features can allow machine learning algorithms to run more efficiently (less space or time complexity) and be more effective.
• Some machine learning algorithms can be misled by irrelevant input features, resulting in worse predictive performance.
Feature Engineering: Feature selection

• RFE is a wrapper-type feature selection algorithm.
• This means that a different machine learning algorithm is given and used in the core of the method, is wrapped by RFE, and is used to help select features.
• This is in contrast to filter-based feature selection, which scores each feature and selects those features with the largest (or smallest) score.
• RFE works by searching for a subset of features, starting with all features in the training dataset and successively removing features until the desired number remains, as sketched below.
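A minimal RFE sketch on synthetic data, wrapping a logistic-regression estimator; the sample and feature counts are illustrative:

from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=4, random_state=0)

# RFE repeatedly fits the wrapped model and drops the weakest feature
# until the desired number of features remains
rfe = RFE(estimator=LogisticRegression(max_iter=1000),
          n_features_to_select=4)
rfe.fit(X, y)

print(rfe.support_)      # boolean mask of the selected columns
print(rfe.ranking_)      # 1 = selected; larger = eliminated earlier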
Dimensionality Reduction: PCA
• We can see/visualize 2-D and 3-D data by scatterplot.
• For 4-D, 5-D, 6-D data, use a pair plot (nC2 pairs).
• What about 10-D, 100-D, 1000-D data?
• Visualization of high-dimensional (n-dim) data:
n-D → reduce to 2-D or 3-D
• Map high-dimensional data into low dimensions while preserving as much of the structure as possible.
• Use dimensionality reduction techniques like PCA and t-SNE to visualize high-dimensional data.
• PCA tries to preserve linear structure, MDS tries to preserve global geometry, and t-SNE tries to preserve topology.
Principal Component Analysis (PCA)
Why PCA?
• For dimensionality reduction, i.e. d-dim → d'-dim. E.g. the MNIST dataset: 784-dim reduced to 2-dim.
# MNIST dataset downloaded from Kaggle:
# https://www.kaggle.com/c/digit-recognizer/data

Applications:
• Visualization of high-dimensional data using scatter plots, pair plots, etc.
• As input to ML models to solve problems in high dimensions.
Principal Component Analysis (PCA)
PCA steps for dimensionality reduction (a NumPy sketch follows this list):

1. Column standardization of the data

2. Find the covariance matrix

3. Find the eigenvalues and eigenvectors

4. Find the principal components

5. Reduce the dimensions of the dataset
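A compact NumPy sketch of the five steps above, on random toy data:

import numpy as np

def pca_reduce(X, n_components=2):
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)     # 1. column standardization
    cov = np.cov(Xs, rowvar=False)                # 2. covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # 3. eigenvalues/eigenvectors
    order = np.argsort(eigvals)[::-1]             # 4. sort components by
    components = eigvecs[:, order[:n_components]] #    descending eigenvalue
    return Xs @ components                        # 5. project to fewer dims

X = np.random.RandomState(0).rand(100, 5)
print(pca_reduce(X).shape)                        # (100, 2)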
Principal Component Analysis (PCA)
1. Standardization of the data
• Missing out on standardization will probably result in a biased outcome.
• Standardization is all about scaling your data in such a way that all the variables and their values lie within a similar range.
• E.g., let's say that we have 2 variables in our data set: one has values ranging between 10-100 and the other has values between 1000-5000.
• In such a scenario, it is obvious that the output calculated by using these predictor variables is going to be biased.
• Standardizing the data into a comparable range is therefore very important.
Principal Component Analysis (PCA)
2. Computing the covariance matrix
• A covariance matrix expresses the correlation between the different variables in the data set.
• It is essential to identify heavily dependent variables because they contain biased and redundant information which reduces the overall performance of the model.
• A covariance matrix is a p × p matrix, where p is the number of dimensions of the data set.
• For a 2-dimensional data set with variables a and b, the covariance matrix is the 2×2 matrix shown below:
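Cov = | cov(a,a)  cov(a,b) |
      | cov(b,a)  cov(b,b) |

where cov(a,a) = var(a), cov(b,b) = var(b), and cov(a,b) = cov(b,a).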

• If the covariance value is negative, it denotes that the respective variables are inversely proportional to each other.
• A positive covariance denotes that the respective variables are directly proportional to each other.
Principal Component Analysis (PCA)
3. Calculating the Eigenvectors and Eigenvalues
Eigenvectors and eigenvalues are computed from the covariance matrix in order to determine the principal components of the data set.
What are Principal Components?
• Principal components are the new set of variables that are obtained from the initial set of variables.
• The principal components are computed in such a manner that the newly obtained variables are highly significant and independent of each other.
• The principal components compress and possess most of the useful information that was scattered among the initial variables.
• E.g., if the data set is 5-dimensional, then 5 principal components are computed, such that the first principal component stores the maximum possible information, the second one stores the remaining maximum information, and so on.
Principal Component Analysis (PCA)
4. Computing the Principal Components
• The eigenvectors and eigenvalues are placed in descending order,
• where the eigenvector with the highest eigenvalue is the most significant and thus forms the first principal component.
• The principal components of lesser significance can then be removed in order to reduce the dimensions of the data.
• The final step in computing the principal components is to form a matrix, known as the feature matrix, that contains all the significant data variables possessing maximum information about the data.
Principal Component Analysis (PCA)
5. Reducing the dimensions of the data set
• The last step in performing PCA is to re-arrange the original data in terms of the final principal components, which represent the maximum and most significant information of the data set.
• In order to replace the original data axes with the newly formed principal components, you simply multiply the transpose of the original data set by the transpose of the obtained feature vector.
• For a worked example on iris, see https://towardsdatascience.com/pca-using-python-scikit-learn-e653f8989e60; a minimal scikit-learn sketch follows.
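A minimal scikit-learn sketch of the whole procedure on the iris data:

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_std = StandardScaler().fit_transform(X)    # step 1: standardize

pca = PCA(n_components=2)                    # steps 2-4 happen internally
X_2d = pca.fit_transform(X_std)              # step 5: project to 2-D

print(pca.explained_variance_ratio_)         # information kept per component
print(X_2d.shape)                            # (150, 2)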
T-SNE
https://distill.pub/2016/misread-tsne/
https://colah.github.io/posts/2014-10-Visualizing-MNIST/
t-SNE is t-distributed stochastic neighborhood embedding.

• Used for dimensionality reduction

• One of the best techniques for visualization

• Both PCA and t-SNE are used in industry

• PCA preserves global structure whereas t-SNE preserves local structure
T-SNE
Neighborhood and Embedding

• Neighborhood: points that are geometrically close together

• Embedding: for every point in the high-dimensional space, finding a corresponding point in the low dimension

Stochastic: probabilistic

Geometric intuition: preserving the distances of points within a neighborhood
T-SNE

Crowding problem:
E.g., embedding 2-dim data into 1-dim:
sometimes it is impossible to preserve the distances to all the neighborhood points; this problem is called the crowding problem.
T-SNE
https://distill.pub/2016/misread-tsne/
• Run t-SNE on a simple dataset
• Perplexity: the (effective) number of neighbors considered per point
• Epsilon: the learning rate
• Steps: the number of iterations
t-SNE is an iterative algorithm; run it until the points stop moving (a stable configuration whose shape no longer changes).
1. Always run t-SNE until the shape stops changing.
2. Always run t-SNE with multiple perplexity values.
3. Keep perplexity in the range 2 <= p <= N. (A scikit-learn sketch follows.)
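A minimal scikit-learn sketch following these rules, on the 64-dimensional digits data; the perplexity values tried are illustrative:

from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)          # 1797 samples, 64 dimensions

# Try multiple perplexity values, as advised above
for perplexity in (5, 30, 50):
    X_2d = TSNE(n_components=2, perplexity=perplexity,
                init="pca", random_state=0).fit_transform(X)
    print(perplexity, X_2d.shape)            # (1797, 2) each time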
