Data Mining:
Concepts and Techniques
Chapter 3: Data Preprocessing
Chapter 3: Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Why Data Preprocessing?
Data in the real world is dirty
incomplete: lacking attribute values, lacking certain
attributes of interest, or containing only aggregate
data – e.g., sales data where customers do not supply certain information
noisy: containing errors or outliers – e.g., from a faulty machine
e.g., Salary=“-10”
inconsistent: containing discrepancies in codes or
names – e.g., Age=“42” but Birthday=“03/07/1997”
e.g., rating was “1, 2, 3”, now “A, B, C”
e.g., discrepancy between duplicate records
No quality data, no quality mining results!
Quality decisions must be based on quality data
Multi-Dimensional Measure of Data Quality
A well-accepted multidimensional view:
Accuracy
Completeness
Consistency
Timeliness
Believability
Value added
Interpretability
Accessibility
Broad categories:
intrinsic, contextual, representational, and
accessibility.
Major Tasks in Data Preprocessing
Data cleaning
Fill in missing values, smooth noisy data, identify or remove
outliers, and resolve inconsistencies
Data integration
Integration of multiple databases, data cubes, or files
Data transformation
Normalization and aggregation
Data reduction
Obtains a reduced representation that is much smaller in volume yet
produces the same or similar analytical results
Data discretization
Part of data reduction but with particular importance, especially for
numerical data
Forms of data preprocessing
Chapter 3: Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Data Cleaning
Data cleaning tasks
Fill in missing values
Identify outliers and smooth out noisy data
Correct inconsistent data
Missing Data
Data is not always available
E.g., many tuples have no recorded value for several
attributes, such as customer income in sales data
Missing data may be due to
equipment malfunction
inconsistent with other recorded data and thus deleted
data not entered due to misunderstanding
certain data may not be considered important at the time of
entry
history or changes of the data were not registered
Missing data may need to be inferred.
How to Handle Missing Data?
Ignore the tuple: usually done when the class label is missing (assuming
the task is classification); not effective when the percentage of
missing values per attribute varies considerably
Fill in the missing value manually: tedious and often infeasible
Use a global constant to fill in the missing value: e.g., “unknown” or −∞
Use the attribute mean to fill in the missing value: e.g., use the average income
Use the attribute mean for all samples belonging to the same class to
fill in the missing value: smarter
Use the most probable value to fill in the missing value: inference-
based, such as a Bayesian formula or a decision tree
How to Handle Missing Data?
Age   Income   Team      Gender
23    24,200   Red Sox   M
39    ?        Yankees   F
45    45,390   ?         F
Fill missing values using aggregate functions (e.g., average) or
probabilistic estimates on global value distribution
E.g., put the average income here, or put the most probable income
based on the fact that the person is 39 years old
E.g., put the most frequent team here
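A minimal pandas sketch of these strategies; the column names follow the toy table above, and the choice of estimators (mode for categorical, class-conditional mean for numeric) is one reasonable option among those listed:

```python
import pandas as pd

df = pd.DataFrame({"Age": [23, 39, 45],
                   "Income": [24200, None, 45390],
                   "Team": ["Red Sox", "Yankees", None],
                   "Gender": ["M", "F", "F"]})

# Most frequent value (mode) for a categorical attribute
df["Team"] = df["Team"].fillna(df["Team"].mode()[0])

# Attribute mean for all samples in the same class (here: same Gender);
# a plain df["Income"].fillna(df["Income"].mean()) would use the global mean
df["Income"] = df.groupby("Gender")["Income"].transform(
    lambda s: s.fillna(s.mean()))
```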
Noisy Data
Noise: random error or variance in a measured variable
Incorrect attribute values may be due to
faulty data collection instruments
data entry problems
data transmission problems
technology limitation
inconsistency in naming convention
Other data problems which require data cleaning
duplicate records
incomplete data
inconsistent data
How to Handle Noisy Data?
Smoothing techniques
Binning method:
first sort data and partition into (equi-depth) bins
then one can smooth by bin means, smooth by bin
median, smooth by bin boundaries, etc.
Clustering
detect and remove outliers
Combined computer and human inspection
computer detects suspicious values, which are then
checked by humans
Regression
smooth by fitting the data into regression functions
Simple Discretization Methods: Binning
Equal-width (distance) partitioning:
It divides the range into N intervals of equal size: uniform grid
if A and B are the lowest and highest values of the attribute, the
width of intervals will be: W = (B-A)/N.
The most straightforward approach
But outliers may dominate the presentation
Skewed data is not handled well.
Equal-depth (frequency) partitioning:
It divides the range into N intervals, each containing
approximately same number of samples
Good data scaling and good handling of skewed data
Simple Discretization Methods: Binning
[Figure: histogram of customer ages (number of values per age)]
Equi-width binning: 0-10, 10-20, 20-30, 30-40, 40-50, 50-60, 60-70, 70-80
Equi-depth binning: 0-22, 22-31, 32-38, 38-44, 44-48, 48-55, 55-62, 62-80
Smoothing using Binning Methods
* Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28,
29, 34
* Partition into (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries: [4,15],[21,25],[26,34]
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
Sorted data for price (in dollars): 13, 15, 16, 16, 19, 20, 22, 25, 25, 25, 33, 33, 35, 35, 52, 70
Partition into (equi-depth) bins of 3:
- Bin 1: 13, 15, 16
- Bin 2: 16, 19, 20
- Bin 3: 22, 25, 25
- Bin 4: 25, 25, 33
- Bin 5: 33, 35, 35
- Bin 6: 52, 70
Smoothing by bin means (truncated to integers):
- Bin 1: 14, 14, 14
- Bin 2: 18, 18, 18
- Bin 3: 24, 24, 24
- Bin 4: 27, 27, 27
- Bin 5: 34, 34, 34
- Bin 6: 61, 61
Smoothing by bin medians:
- Bin 1: 15, 15, 15
- Bin 2: 19, 19, 19
- Bin 3: 25, 25, 25
- Bin 4: 25, 25, 25
- Bin 5: 35, 35, 35
- Bin 6: 61, 61
Smoothing by bin boundaries:
- Bin 1: 13, 16, 16
- Bin 2: 16, 20, 20
- Bin 3: 22, 25, 25
- Bin 4: 25, 25, 33
- Bin 5: 33, 35, 35
- Bin 6: 52, 70
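A small standard-library sketch of equi-depth partitioning and two of the smoothing rules; bin means are truncated to integers here to match the example above (the rounding convention is a presentation choice):

```python
def equi_depth_bins(values, depth):
    """Sort the values and split them into consecutive bins of `depth` elements."""
    values = sorted(values)
    return [values[i:i + depth] for i in range(0, len(values), depth)]

def smooth_by_means(bins):
    """Replace every value in a bin by the (truncated) bin mean."""
    return [[int(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    """Replace each value by the closer of the bin's min and max."""
    return [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b]
            for b in bins]

prices = [13, 15, 16, 16, 19, 20, 22, 25, 25, 25, 33, 33, 35, 35, 52, 70]
bins = equi_depth_bins(prices, 3)
print(smooth_by_means(bins))       # [[14, 14, 14], [18, 18, 18], ...]
print(smooth_by_boundaries(bins))  # [[13, 16, 16], [16, 20, 20], ...]
```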
Cluster Analysis
[Figure: scatter plot of salary vs. age showing clusters and an outlier]
Regression
[Figure: linear regression of salary (y) on age (x); fitted line y = x + 1]
Smoothing Noisy Data
The purpose of data smoothing is to eliminate noise
Techniques: binning, clustering, regression
Binning (equi-depth bins of 3):
Bin 1: 4, 8, 15
Bin 2: 21, 21, 24
Bin 3: 25, 28, 34
Smoothing by bin means: Bin 1: 9, 9, 9; Bin 2: 22, 22, 22; Bin 3: 29, 29, 29
Smoothing by bin boundaries: Bin 1: 4, 4, 15; Bin 2: 21, 21, 24; Bin 3: 25, 25, 34
Chapter 3: Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Data Integration
Data integration:
Data analysis may require combination of data from multiple
sources into a coherent data store
Schema integration
integrate metadata from different sources
metadata: data about the data (i.e., data descriptors)
Entity identification problem: identify real world entities from
multiple data sources, e.g., A.cust-id ≡ B.cust-#
Detecting and resolving data value conflicts
for the same real world entity, attribute values from different
sources are different (e.g., “J. D. Smith” and “John Smith” may refer to
the same person)
possible reasons: different representations, different scales, e.g.,
(inches vs. cm)
Handling Redundant Data
in Data Integration
Redundant data often arise when integrating multiple
databases
The same attribute may have different names in
different databases
One attribute may be a “derived” attribute in another
table, e.g. age is derived from DOB
Redundant data can often be detected by correlation
analysis
Careful integration of the data from multiple sources may
help to reduce redundancies and inconsistencies and
improve mining speed and quality
Data Transformation
Smoothing: remove noise from data
Aggregation: summarization, data cube construction
Generalization: concept hierarchy climbing
Normalization: scaled to fall within a small, specified
range
min-max normalization
z-score normalization
normalization by decimal scaling
Attribute/feature construction
New attributes constructed from the given ones
Data Transformation: Normalization
min-max normalization:
v′ = ((v − min_A) / (max_A − min_A)) · (new_max_A − new_min_A) + new_min_A
e.g., convert age = 30 to the range [0, 1], when min = 10 and max = 80:
new_age = (30 − 10) / (80 − 10) = 2/7
z-score normalization:
v′ = (v − mean_A) / stand_dev_A
normalization by decimal scaling:
v′ = v / 10^j, where j is the smallest integer such that max(|v′|) < 1
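A NumPy sketch of the three normalizations; it assumes a one-dimensional array of nonzero values, and the target range defaults are an assumption:

```python
import numpy as np

def min_max(v, new_min=0.0, new_max=1.0):
    """Scale values linearly into [new_min, new_max]."""
    return (v - v.min()) / (v.max() - v.min()) * (new_max - new_min) + new_min

def z_score(v):
    """Center on the mean, scale by the standard deviation."""
    return (v - v.mean()) / v.std()

def decimal_scaling(v):
    """Divide by the smallest power of 10 that maps all values into (-1, 1)."""
    j = int(np.floor(np.log10(np.abs(v).max()))) + 1
    return v / 10 ** j

ages = np.array([10, 30, 55, 80], dtype=float)
print(min_max(ages))   # 30 maps to (30-10)/(80-10) = 2/7
```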
Chapter 3: Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Data Reduction Strategies
A warehouse may store terabytes of data: complex data
analysis/mining may take a very long time to run on the
complete data set
Data reduction
Obtains a reduced representation of the data set that is
much smaller in volume but yet produces the same (or
almost the same) analytical results
Data reduction strategies
Data cube aggregation
Dimensionality reduction
Numerosity reduction
Discretization and concept hierarchy generation
Data Cube Aggregation
Data Cube stores multidimensional aggregated
information.
The cube created at the lowest level of abstraction, the base
cuboid, holds the aggregated data for an individual entity of
interest
The cube at the highest level of abstraction is the apex cuboid
Queries regarding aggregated information should be
answered using the data cube, when possible
Data Cube Aggregation
[Figure: sales data for a given branch of AllElectronics for the years
2002 to 2004, and the fully aggregated data cube (apex cuboid)]
Dimensionality Reduction
Feature selection (i.e., Attribute Subset Selection):
Select only the necessary attributes.
The goal is to find a minimum set of attributes such that the resulting probability
distribution of data classes is as close as possible to the original distribution obtained
using all attributes.
E.g., to classify customers as to whether or not they will buy a popular new CD
on sale, the customer’s phone number is irrelevant, but age and music_taste are relevant
The best (and worst) attributes are determined using tests of statistical significance
Use heuristics: select the locally ‘best’ (or most pertinent) attribute, e.g.,
using information gain, etc. E.g., with initial set {A1, A2, A3, A4, A5, A6}:
step-wise forward selection: {} → {A1} → {A1, A4} → {A1, A4, A6}
step-wise backward elimination: {A1, A2, A3, A4, A5, A6} → {A1, A3, A4, A5, A6}
→ {A1, A4, A5, A6} → … → reduced set {A1, A4, A6}
combining forward selection and backward elimination
decision-tree induction
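A minimal sketch of step-wise forward selection; the subset-quality function `score` (e.g., information gain, or cross-validated accuracy) is assumed to be supplied by the caller:

```python
def forward_select(attributes, score, k):
    """Greedy step-wise forward selection.

    Starting from the empty set, repeatedly add the attribute whose
    inclusion gives the best score(subset), until k attributes are chosen.
    """
    chosen = []
    while len(chosen) < k:
        best = max((a for a in attributes if a not in chosen),
                   key=lambda a: score(chosen + [a]))
        chosen.append(best)
    return chosen
```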
Example of Decision Tree Induction
Decision tree induction constructs a flowchart-like structure
Initial attribute set: {A1, A2, A3, A4, A5, A6}
[Figure: decision tree testing A4 at the root, then A1 and A6, with
leaves Class 1 / Class 2]
→ Reduced attribute set: {A1, A4, A6}
Data Compression
Data encoding or transformations are applied to obtain a
reduced or compressed representation of the original data,
either lossy or lossless
String compression
There are extensive theories and well-tuned algorithms
Typically lossless
But only limited manipulation is possible without
expansion
Audio/video compression
Typically lossy compression, with progressive refinement
Sometimes small fragments of signal can be
reconstructed without reconstructing the whole
Data Compression
[Figure: lossless compression maps the original data to compressed data
and back exactly; lossy compression reconstructs only an approximation
of the original data]
Compression
We could use a lossy compression technique to remove
attributes that are considered not important enough to
keep. Consider a 100% JPEG and an 85% JPEG: they
look very similar, but some unnecessary information is
lost; that is the sort of thing we want to do with attributes.
Techniques:
DWT: Discrete Wavelet Transform
DFT: Discrete Fourier Transform
PCA: Principal Component Analysis
Discrete Wavelet Transforms
DWT transforms a vector of attribute values into a different vector
of 'wavelet coefficients', of the same length as the original.
This transformed vector can be truncated at a certain threshold. Any
values below the threshold are set to 0.
The remaining data is then an approximation of the original, in the
transformed space.
We can reverse the transformation to return to the original
attributes, minus the ones lost in the truncation.
There are many different DWTs, grouped into families (e.g., Haar and
Daubechies)
Wavelet Transforms
Method:
Length, L, must be an integer power of 2 (padding with 0s, when
necessary)
Each transform has 2 functions: smoothing and difference
Applied to pairs of data points, producing two sets of data of length L/2
The two functions are applied recursively until the desired length is reached
[Figure: Haar-2 and Daubechies-4 wavelet basis functions]
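A minimal sketch of the level-by-level Haar transform described above; the 1/√2 normalization (an orthonormal transform) is one common convention and an assumption here:

```python
import numpy as np

def haar_dwt(x):
    """Full Haar wavelet transform of a vector whose length is a power of 2.

    At each level, pairs (a, b) are replaced by a smoothed value
    (a + b)/sqrt(2) and a difference (a - b)/sqrt(2); the transform
    then recurses on the smoothed half until one coefficient remains.
    """
    out = np.asarray(x, dtype=float).copy()
    n = len(out)
    while n > 1:
        half = n // 2
        a, b = out[0:n:2], out[1:n:2]                 # pairwise elements
        smooth, detail = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)
        out[:half], out[half:n] = smooth, detail
        n = half
    return out

coeffs = haar_dwt([2, 2, 0, 2, 3, 5, 4, 4])
coeffs[np.abs(coeffs) < 1.0] = 0   # truncate small coefficients to 0
```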
Principal Component Analysis
Not a signal processing technique!
Idea: Just because the dataset has various dimensional
axes, doesn't mean those are the best axes to use.
Find the best axes in order and drop the least important
ones
[Figure: the same dataset on its original axes and on the revised (principal component) axes]
Principal Component Analysis
PCA does not drop original attributes as in “attribute subset selection”;
it constructs a smaller set of new, combined dimensions
Suppose that the data to be reduced consist of tuples or
data vectors described by n attributes or dimensions.
Principal components analysis, or PCA (also called the
Karhunen-Loeve, or K-L, method), searches for k n-
dimensional orthogonal vectors that can best be used to
represent the data, where k <= n.
The original data are thus projected onto a much smaller
space, resulting in dimensionality reduction.
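A minimal NumPy sketch of the idea, assuming rows are tuples and columns the n attributes:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the data onto its k strongest principal components.

    Center each attribute, take the k eigenvectors of the covariance
    matrix with the largest eigenvalues (the k orthogonal vectors that
    best represent the data), and project onto them.
    """
    Xc = X - X.mean(axis=0)                  # center each attribute
    cov = np.cov(Xc, rowvar=False)           # n x n covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    top_k = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return Xc @ top_k                        # tuples now have k dimensions
```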
Numerosity Reduction
Why numerosity reduction?
Reduce data volume by choosing alternative, smaller
forms of representation
Parametric methods
Assume the data fits some model,
estimate model parameters,
store only the parameters, and discard actual data
Examples: regression and log-linear models
Non-parametric methods
Do not assume models
Major families: histograms, clustering, sampling
Regression Analysis and Log-Linear Models
Linear regression: Y = α + βX
The two parameters, α and β, specify the line and are
estimated using the data at hand,
where Y is the response variable and X the predictor variable
Multiple regression: Y = b0 + b1·X1 + b2·X2,
where Y is the response variable, X1 and X2 are predictor variables,
and b0, b1, b2 are constants (coefficients)
Log-linear models:
A higher-dimensional data space is constructed from lower-
dimensional spaces
The lower-dimensional points together occupy less space than the
original data points
The multidimensional probability is approximated by a
product of lower-order tables, e.g.,
p(a, b, c, d) ≈ αab · βac · χad · δbcd
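For illustration, a least-squares fit of the multiple-regression model above; the toy predictor and response values are hypothetical:

```python
import numpy as np

# Toy predictors (X1 = age, X2 = income in $1000s) and response Y; the
# reduced representation is just the three coefficients b0, b1, b2.
X = np.array([[35, 40.0], [42, 52.0], [51, 61.0], [28, 33.0]])
y = np.array([12.0, 15.5, 18.2, 9.8])

A = np.column_stack([np.ones(len(X)), X])       # prepend intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # coef = [b0, b1, b2]
```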
Histograms
One of the best ways to summarize data is to provide a histogram of the
data
Data values are stored in buckets
If each bucket stores only a single value, it is a singleton histogram;
otherwise it is a range histogram
Example prices: 1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14,
15, 15, 15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18, 20, 20, 20, 20, 20,
20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28, 30, 30, 30
[Figure: bar chart of counts for the range buckets 1-10, 11-20, 21-30]
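The range histogram above can be computed directly; a short sketch, with bucket edges following the example:

```python
import numpy as np

prices = [1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14,
          15, 15, 15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18, 20,
          20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25,
          28, 28, 30, 30, 30]

# Buckets 1-10, 11-20, 21-30 (np.histogram's last edge is inclusive)
counts, edges = np.histogram(prices, bins=[1, 11, 21, 31])
```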
Histograms
For the sample database shown, we can create a histogram of eye color
by counting the number of occurrences of each eye color in the
database
Clustering
Partition the data set into clusters; then only the cluster
representations need to be stored
Can be very effective if the data is clustered, but not if the data is
“smeared”
The quality of a cluster may be represented by its diameter, the
maximum distance between any two objects in the cluster
Centroid distance is an alternative measure of cluster quality: the
average distance of each cluster object from its cluster centroid
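A sketch of storing only the cluster representation; the cluster assignments are assumed to come from some clustering algorithm, and clusters are assumed non-empty:

```python
import numpy as np

def cluster_summary(points, assign, k):
    """Replace the raw points by a (centroid, diameter) pair per cluster.

    diameter = maximum distance between any two objects in the cluster,
    one of the quality measures mentioned above.
    """
    summary = []
    for c in range(k):
        members = points[assign == c]
        centroid = members.mean(axis=0)
        diameter = max(np.linalg.norm(p - q) for p in members for q in members)
        summary.append((centroid, diameter))
    return summary
```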
Sampling
Sampling allows a large data set to be represented by a
much smaller random sample (subset) of the data
Types of sampling:
Simple random sample without replacement (SRSWOR)
Simple random sample with replacement (SRSWR)
Cluster sample
Tuples in database D are grouped into M clusters; then an SRS
of s clusters can be obtained, where s < M
Stratified sample
Database D is divided into disjoint parts, called strata
A stratified sample of D is generated by obtaining an SRS on
each stratum
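A standard-library sketch of the four sampling types; the data, the cluster grouping, and the stratification key are all hypothetical:

```python
import random
from collections import defaultdict

data = list(range(100))                              # hypothetical tuples

srswor = random.sample(data, 10)                     # without replacement
srswr = [random.choice(data) for _ in range(10)]     # with replacement

# Cluster sample: group tuples into M clusters, then an SRS of s clusters
M, s = 10, 3
clusters = [data[i::M] for i in range(M)]            # hypothetical grouping
cluster_sample = [t for c in random.sample(clusters, s) for t in c]

# Stratified sample: an SRS within each stratum (here, value parity)
strata = defaultdict(list)
for t in data:
    strata[t % 2].append(t)
stratified = [t for part in strata.values() for t in random.sample(part, 5)]
```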
Sampling
[Figure: raw data reduced by SRSWOR (simple random sample without
replacement) and by SRSWR (simple random sample with replacement)]
Sampling
[Figure: raw data reduced by cluster and stratified sampling]
Hierarchical Reduction
Use a multi-resolution structure with different degrees of
reduction
Hierarchical clustering is often performed but tends to
define partitions of data sets rather than “clusters”
Parametric methods are usually not amenable to
hierarchical representation
Hierarchical aggregation
An index tree hierarchically divides a data set into
partitions by value range of some attributes
Each partition can be considered as a bucket
Thus an index tree with aggregates stored at each
node is a hierarchical histogram
Chapter 3: Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Discretization
Three types of attributes:
Nominal — values from an unordered set
Ordinal — values from an ordered set
Continuous — real numbers
Discretization:
divide the range of a continuous attribute into
intervals
Some classification algorithms only accept categorical
attributes.
Reduce data size by discretization
Prepare for further analysis
Discretization and Concept Hierarchy
Discretization
reduce the number of values for a given continuous
attribute by dividing the range of the attribute into
intervals. Interval labels can then be used to replace
actual data values.
Concept hierarchies
reduce the data by collecting and replacing low level
concepts (such as numeric values for the attribute
age) by higher level concepts (such as young,
middle-aged, or senior).
Discretization and concept hierarchy
generation for numeric data
Binning (see sections before)
Histogram analysis (see sections before)
Clustering analysis (see sections before)
Entropy-based discretization
Segmentation by natural partitioning
Binning
Attribute values (for one attribute e.g., age):
0, 4, 12, 16, 16, 18, 24, 26, 28
Equi-width binning – for a bin width of e.g. 10:
Bin 1: 0, 4                 [−∞, 10) bin
Bin 2: 12, 16, 16, 18       [10, 20) bin
Bin 3: 24, 26, 28           [20, +∞) bin
(−∞ denotes negative infinity, +∞ positive infinity)
Equi-frequency binning – for a bin density of e.g. 3:
Bin 1: 0, 4, 12             [−∞, 14) bin
Bin 2: 16, 16, 18           [14, 21) bin
Bin 3: 24, 26, 28           [21, +∞) bin
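A compact sketch of both labeling schemes; the bin width, depth, and NumPy usage follow the example above and are otherwise an assumption:

```python
import numpy as np

ages = np.array([0, 4, 12, 16, 16, 18, 24, 26, 28])

# Equi-width: the interval label advances every 10 units
width_bin = ages // 10                  # -> [0 0 1 1 1 1 2 2 2]

# Equi-frequency: 3 values per bin, assuming the data is already sorted
depth_bin = np.arange(len(ages)) // 3   # -> [0 0 0 1 1 1 2 2 2]
```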
Entropy-Based Discretization
Given a set of samples S, if S is partitioned into two
intervals S1 and S2 using boundary T, the entropy after
partitioning is
E(S, T) = (|S1|/|S|)·Ent(S1) + (|S2|/|S|)·Ent(S2)
The boundary that minimizes the entropy function over all
possible boundaries is selected as a binary discretization.
The process is recursively applied to the partitions obtained
until some stopping criterion is met, e.g.,
Ent(S) − E(T, S) < δ
Experiments show that it may reduce data size and
improve classification accuracy
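A sketch of selecting one binary split; the class labels are assumed given, and the recursion with threshold δ is left to the caller:

```python
import math
from collections import Counter

def ent(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_split(values, labels):
    """Find boundary T minimizing (|S1|/|S|)Ent(S1) + (|S2|/|S|)Ent(S2)."""
    pairs = sorted(zip(values, labels))
    best_t, best_e = None, float("inf")
    for i in range(1, len(pairs)):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2   # midpoint candidate boundary
        left = [l for v, l in pairs if v <= t]
        right = [l for v, l in pairs if v > t]
        e = (len(left) * ent(left) + len(right) * ent(right)) / len(pairs)
        if e < best_e:
            best_t, best_e = t, e
    return best_t, best_e
```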
Segmentation by natural partitioning
3-4-5 rule can be used to segment numeric data into
relatively uniform, “natural” intervals.
* If an interval covers 3, 6, 7 or 9 distinct values at the
most significant digit, partition the range into 3 equi-
width intervals
* If it covers 2, 4, or 8 distinct values at the most
significant digit, partition the range into 4 intervals
* If it covers 1, 5, or 10 distinct values at the most
significant digit, partition the range into 5 intervals
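A rough sketch of one partitioning step of this rule; the fallback for ranges matching none of the three cases is an assumption:

```python
def three_four_five(low, high, msd):
    """Split (low, high) into 3, 4, or 5 equi-width intervals per the
    3-4-5 rule, where msd is the unit of the most significant digit
    (e.g., 1000 for the range -1000..2000)."""
    distinct = round((high - low) / msd)
    if distinct in (3, 6, 7, 9):
        parts = 3
    elif distinct in (2, 4, 8):
        parts = 4
    elif distinct in (1, 5, 10):
        parts = 5
    else:
        parts = 1                     # fallback: leave the range unsplit
    width = (high - low) / parts
    return [(low + i * width, low + (i + 1) * width) for i in range(parts)]

print(three_four_five(-1000, 2000, 1000))
# [(-1000.0, 0.0), (0.0, 1000.0), (1000.0, 2000.0)]
```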
Example of 3-4-5 rule
Step 1: for the attribute profit, Min = -$351, Low (5th percentile) = -$159,
High (95th percentile) = $1,838, Max = $4,700
Step 2: msd = $1,000; rounding Low and High to the msd gives Low' = -$1,000
and High' = $2,000, i.e., the range (-$1,000 - $2,000)
Step 3: the range covers 3 distinct values at the msd, so partition into 3
equi-width intervals: (-$1,000 - 0], (0 - $1,000], ($1,000 - $2,000]
Step 4: adjust for the actual Min and Max: the first interval shrinks to
(-$400 - 0] since Min = -$351, and the interval ($2,000 - $5,000] is added
since Max = $4,700 exceeds High'
Step 5: recursively apply the rule inside each interval:
(-$400 - 0] → 4 intervals of width $100; (0 - $1,000] → 5 intervals of
width $200; ($1,000 - $2,000] → 5 intervals of width $200;
($2,000 - $5,000] → 3 intervals of width $1,000
Concept hierarchy generation for
categorical data
Specification of a partial ordering of attributes explicitly
at the schema level by users or experts
Specification of a portion of a hierarchy by explicit data
grouping
Specification of a set of attributes, but not of their
partial ordering
Specification of only a partial set of attributes
Specification of a set of attributes
Concept hierarchy can be automatically generated based
on the number of distinct values per attribute in the
given attribute set. The attribute with the most
distinct values is placed at the lowest level of the
hierarchy.
country: 15 distinct values
province_or_state: 65 distinct values
city: 3,567 distinct values
street: 674,339 distinct values
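This heuristic reduces to a one-line sort; the distinct-value counts are those from the example above:

```python
distinct = {"country": 15, "province_or_state": 65,
            "city": 3567, "street": 674339}

# Fewest distinct values -> top of the hierarchy; most -> bottom
hierarchy = sorted(distinct, key=distinct.get)
# ['country', 'province_or_state', 'city', 'street']
```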
Chapter 3: Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Summary
Data preparation is a big issue for both warehousing
and mining
Data preparation includes
Data cleaning and data integration
Data reduction and feature selection
Discretization
A lot of methods have been developed, but data preprocessing is still an
active area of research