Class-Data Preprocessing-III
Yashvardhan Sharma
Data Quality
Data Cleaning
• Importance
• “Data cleaning is one of the three biggest problems in data warehousing”—
Ralph Kimball
• “Data cleaning is the number one problem in data warehousing”—DCI survey
• Data cleaning tasks
• Fill in missing values
• Identify outliers and smooth out noisy data
• Correct inconsistent data
• Resolve redundancy caused by data integration
Missing Data
Missing Values
• Reasons for missing values
• Information is not collected
(e.g., people decline to give their age and weight)
• Attributes may not be applicable to all cases
(e.g., annual income is not applicable to children)
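A minimal sketch of the "fill in missing values" cleaning task, assuming pandas; the column names and fill strategies (drop, mean, median) are illustrative, not prescribed by the slides:

```python
import pandas as pd
import numpy as np

# Toy records; np.nan marks values that were never collected
df = pd.DataFrame({
    "age":    [25, np.nan, 47, 31, np.nan],
    "income": [50_000, 62_000, np.nan, 58_000, 45_000],
})

# Option 1: ignore incomplete records (wasteful if many rows are affected)
complete = df.dropna()

# Option 2: fill with a measure of central tendency per attribute
filled = df.fillna({"age": df["age"].mean(),
                    "income": df["income"].median()})
print(filled)
```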
Noisy Data
• Noise: random error or variance in a measured variable
• Incorrect attribute values may be due to
• faulty data collection instruments
• data entry problems
• data transmission problems
• technology limitations
• inconsistency in naming conventions
• Other data problems that require data cleaning
• duplicate records
• incomplete data
• inconsistent data
Noise
• Noise refers to the modification of original values
• Example: distortion of a person’s voice over a poor phone connection
Simple Discretization Methods: Binning
• Equal-width (distance) partitioning:
• Divides the range into N intervals of equal size: uniform grid
• If A and B are the lowest and highest values of the attribute, the width of the intervals will be W = (B − A)/N.
• The most straightforward approach, but outliers may dominate the presentation
• Skewed data is not handled well.
• Equal-depth (frequency) partitioning:
• Divides the range into N intervals, each containing approximately the same number of samples
• Good data scaling
• Managing categorical attributes can be tricky.
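A hedged sketch of the two partitioning schemes using pandas (`pd.cut` for equal-width, `pd.qcut` for equal-depth); the price values are taken from the next slide:

```python
import pandas as pd

values = pd.Series([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
N = 3

# Equal-width: N intervals of width W = (B - A) / N
equal_width = pd.cut(values, bins=N)

# Equal-depth: N intervals holding roughly the same number of samples
equal_depth = pd.qcut(values, q=N)

print(equal_width.value_counts().sort_index())
print(equal_depth.value_counts().sort_index())
```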
Binning Methods for Data Smoothing
• Sorted data (e.g., by price)
• 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
• Partition into (equi-depth) bins:
• Bin 1: 4, 8, 9, 15
• Bin 2: 21, 21, 24, 25
• Bin 3: 26, 28, 29, 34
• Smoothing by bin means:
• Bin 1: 9, 9, 9, 9
• Bin 2: 23, 23, 23, 23
• Bin 3: 29, 29, 29, 29
• Smoothing by bin boundaries:
• Bin 1: 4, 4, 4, 15
• Bin 2: 21, 21, 25, 25
• Bin 3: 26, 26, 26, 34
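A sketch reproducing the smoothing above with numpy; the tie-breaking rule for boundary smoothing (ties go to the lower boundary) is an assumption:

```python
import numpy as np

data = np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
bins = np.array_split(np.sort(data), 3)   # equi-depth partition

# Smoothing by bin means: every value becomes its bin's (rounded) mean
by_means = [np.full(len(b), round(b.mean())) for b in bins]

# Smoothing by bin boundaries: each value snaps to the nearer of min/max
by_boundaries = [
    np.where(b - b.min() <= b.max() - b, b.min(), b.max()) for b in bins
]

print([b.tolist() for b in by_means])       # [[9,9,9,9], [23,...], [29,...]]
print([b.tolist() for b in by_boundaries])  # [[4,4,4,15], [21,21,25,25], [26,26,26,34]]
```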
Cluster Analysis
• Detect and remove outliers: values that fall outside the set of clusters may be considered outliers
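A minimal sketch of cluster-based outlier detection, assuming scikit-learn's KMeans; the two-cluster data, the distance-to-centroid criterion, and the 3-sigma cutoff are illustrative choices, not from the slides:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two tight clusters plus a few stray points
X = np.vstack([rng.normal(0, 0.3, (50, 2)),
               rng.normal(5, 0.3, (50, 2)),
               [[2.5, 9.0], [8.0, -3.0]]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
dist = km.transform(X).min(axis=1)          # distance to nearest centroid

threshold = dist.mean() + 3 * dist.std()    # simple cutoff; tune per data set
print(X[dist > threshold])                  # candidate outliers
```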
Regression
[Figure: noisy data smoothed by regression: a least-squares line y = x + 1 is fit to the data, and the observed value Y1 at X1 is replaced by the fitted value Y1′.]
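A sketch of smoothing by regression with numpy: fit a least-squares line to noisy points scattered around y = x + 1 (the line in the figure) and replace each observed value with its fitted value:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 20)
y = x + 1 + rng.normal(0, 0.5, x.size)   # noisy observations around y = x + 1

slope, intercept = np.polyfit(x, y, deg=1)   # least-squares line fit
y_smoothed = slope * x + intercept           # each Y1 replaced by fitted Y1'

print(round(slope, 2), round(intercept, 2))
```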
Outliers
• Outliers are data objects with characteristics that are considerably different from those of most other data objects in the data set
Duplicate Data
• Data set may include data objects that are duplicates, or
almost duplicates of one another
• Major issue when merging data from heterogeneous sources
• Examples:
• Same person with multiple email addresses
• Data cleaning
• Process of dealing with duplicate data issues
Data Preprocessing
• Aggregation
• Sampling
• Dimensionality Reduction
• Feature subset selection
• Feature creation
• Discretization and Binarization
• Attribute Transformation
Data Reduction Strategies
Aggregation
• Combining two or more attributes (or objects) into a single
attribute (or object)
• Purpose
• Data reduction
• Reduce the number of attributes or objects
• Change of scale
• Cities aggregated into regions, states, countries, etc.
• More “stable” data
• Aggregated data tends to have less variability
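A minimal sketch of aggregation as a change of scale, assuming pandas; the cities, states, and revenue figures are made up for illustration:

```python
import pandas as pd

sales = pd.DataFrame({
    "city":    ["Pilani", "Jaipur", "Mumbai", "Pune"],
    "state":   ["Rajasthan", "Rajasthan", "Maharashtra", "Maharashtra"],
    "revenue": [120, 340, 980, 610],
})

# Change of scale: cities aggregated into states; also fewer objects
by_state = sales.groupby("state", as_index=False)["revenue"].sum()
print(by_state)
```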
Data Cube Aggregation
• The lowest level of a data cube
• the aggregated data for an individual entity of interest
• e.g., a customer in a phone calling data warehouse.
• Multiple levels of aggregation in data cubes
• Further reduce the size of data to deal with
• Reference appropriate levels
• Use the smallest representation that is sufficient to solve the task
• Queries regarding aggregated information should be answered using the data cube, when possible
Sample Cube
[Figure: a sales data cube with dimensions date (1Qtr–4Qtr), product (TV, PC, VCR), and country (U.S.A., Canada, Mexico); aggregated cells answer queries such as total annual sales of TVs in U.S.A., total Q1 sales in Canada, or total sales across all countries.]
Sampling
• Sampling is used in data mining because processing the entire data set of interest is too expensive or time consuming
Sampling …
• The key principle for effective sampling is the following:
• using a sample will work almost as well as using the entire data set, if the sample is representative
Types of Sampling
• Simple Random Sampling
• There is an equal probability of selecting any particular item
• Sampling without replacement
• As each item is selected, it is removed from the population
• Sampling with replacement
• Objects are not removed from the population as they are selected for the
sample.
• In sampling with replacement, the same object can be picked more than once
• Stratified sampling
• Split the data into several partitions; then draw random samples from each
partition
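A sketch of the three sampling types with pandas; the toy data and the 50% stratified fraction are illustrative:

```python
import pandas as pd

df = pd.DataFrame({"group": list("AAABBBBCCC"),
                   "value": range(10)})

srswor = df.sample(n=4, replace=False, random_state=0)  # without replacement
srswr  = df.sample(n=4, replace=True,  random_state=0)  # same row may repeat

# Stratified: draw the same fraction from each partition
stratified = df.groupby("group").sample(frac=0.5, random_state=0)
print(stratified)
```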
Sampling
[Figure: a raw data set reduced by SRSWOR (simple random sampling without replacement) and by SRSWR (simple random sampling with replacement).]
Sampling
[Figure: a raw data set and a cluster/stratified sample drawn from it.]
Sample Size
• What sample size is necessary to get at least one object from each of 10 groups?
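One way to answer this empirically, sketched in numpy under the assumption of 10 equally likely groups: estimate, by simulation, the probability that a sample of size n contains all 10 group labels:

```python
import numpy as np

rng = np.random.default_rng(0)
groups, trials = 10, 10_000

def p_all_groups(n):
    """Estimate P(a sample of size n contains every one of `groups` labels)."""
    hits = sum(
        len(np.unique(rng.integers(0, groups, n))) == groups
        for _ in range(trials)
    )
    return hits / trials

for n in (10, 20, 40, 60):
    print(n, p_all_groups(n))
```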
Data Dimensionality
• From a theoretical point of view, increasing the number of features should provide more information and therefore better performance
• In practice, however, performance often degrades as features are added: the data become increasingly sparse in high dimensions (the curse of dimensionality)
Dimensionality Reduction
• Purpose:
• Avoid curse of dimensionality
• Reduce amount of time and memory required by data mining algorithms
• Allow data to be more easily visualized
• May help to eliminate irrelevant features or reduce noise
• Techniques
• Principal Component Analysis
• Singular Value Decomposition
• Others: supervised and non-linear techniques
Example of
Decision Tree Induction
Initial attribute set: {A1, A2, A3, A4, A5, A6}
[Figure: a decision tree that branches only on A4, A1, and A6, giving the reduced attribute set {A1, A4, A6}.]
Dimensionality Reduction (cont’d)
• Idea: represent data in terms of basis vectors in a lower dimensional space
(embedded within the original space).
Principal Component Analysis
• Given N data vectors in k dimensions, find c ≤ k orthogonal vectors that can best be used to represent the data
• The original data set is reduced to one consisting of N data vectors on c
principal components (reduced dimensions)
• Each data vector is a linear combination of the c principal component
vectors
• Works for numeric data only
• Used when the number of dimensions is large
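A minimal numpy sketch of this procedure (not the course's reference implementation): center the data, take the eigenvectors of the covariance matrix, and keep the c directions with the largest eigenvalues:

```python
import numpy as np

def pca(X, c):
    """Project an N x k data matrix X onto its c leading principal components."""
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)          # k x k covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:c]           # take the c largest
    components = eigvecs[:, order]                  # k x c orthonormal basis
    return X_centered @ components, components, eigvals[order]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))  # correlated data
scores, components, variances = pca(X, c=2)
print(scores.shape)   # (100, 2)
```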
Principal Component Analysis
[Figure: data in the (X1, X2) plane with principal component directions Y1 and Y2.]
Dimensionality Reduction: PCA
• Goal is to find a projection that captures the largest amount of
variation in data
Dimensionality Reduction: PCA
• Find the eigenvectors of the covariance matrix
• The eigenvectors define the new space
Dimensionality Reduction: PCA
[Figure: reconstructions of an image using 206, 160, 120, 80, 40, and 10 principal components.]
PCA: Motivation
• Choose the projection directions such that the total variance of the data is maximized
• Objective: maximize total variance
Principal Component Analysis (PCA)
• Dimensionality reduction implies information loss; PCA preserves as much information as possible by minimizing the mean reconstruction error over the M data vectors:
  e = (1/M) Σ_{i=1..M} ‖x_i − x̂_i‖
PCA – Steps (cont’d)
• Express each centered vector in an orthogonal basis {u_1, …, u_c}:
  x − x̄ = Σ_i b_i u_i, where b_i = ((x − x̄) · u_i) / (u_i · u_i)
PCA – Linear Transformation
• In general, b_i = ((x − x̄) · u_i) / (u_i · u_i)
• If u_i has unit length, this simplifies to b_i = (x − x̄) · u_i
Geometric interpretation
• PCA projects the data along the directions where the data varies
most.
• These directions are determined by the eigenvectors of the
covariance matrix corresponding to the largest eigenvalues.
• The magnitude of the eigenvalues corresponds to the variance of
the data along the eigenvector directions.
How to choose K?
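One common heuristic, sketched here as an assumption rather than slide content: choose the smallest K whose leading eigenvalues account for a chosen fraction (e.g., 90%) of the total variance:

```python
import numpy as np

def choose_k(eigvals, threshold=0.90):
    """Smallest K whose top-K eigenvalues explain >= threshold of the variance."""
    ratios = np.sort(eigvals)[::-1] / eigvals.sum()
    return int(np.argmax(np.cumsum(ratios) >= threshold)) + 1

eigvals = np.array([4.0, 2.5, 1.0, 0.3, 0.2])   # hypothetical spectrum
print(choose_k(eigvals))   # 3, since (4.0 + 2.5 + 1.0) / 8.0 ≈ 0.94
```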