3_Preprocessing
Data Warehouse
Data Cleaning and Integration
Why Is Data Dirty?
Incomplete data may come from
“Not applicable” data value when collected
Different considerations between the time when the data was
collected and when it is analyzed
Human/hardware/software problems
Noisy data (incorrect values) may come from
Faulty data collection instruments
Human or computer error at data entry
Errors in data transmission
Inconsistent data may come from
Different data sources
Functional dependency violation (e.g., modify some linked data)
Duplicate records also need data cleaning
Why Is Data Preprocessing Important?
No quality data, no quality mining results!
Quality decisions must be based on quality data
• e.g., duplicate or missing data may cause incorrect or even
misleading statistics
Major Tasks in Data Preprocessing
Data cleaning
Fill in missing values, smooth noisy data, identify or remove
outliers, and resolve inconsistencies
Data integration
Integration of multiple databases, data cubes, or files
Data transformation
Normalization and aggregation
Data reduction
Obtains reduced representation in volume but produces the same
or similar analytical results
Data discretization
Data reduction for numerical data
Forms of Data Preprocessing
Measuring the Central Tendency
Weighted arithmetic mean: $\bar{x} = \dfrac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}$
Trimmed mean: chopping extreme values before computing the mean
Mode
Value that occurs most frequently in the data
Unimodal, bimodal, trimodal
Empirical formula: mean − mode ≈ 3 × (mean − median)
Symmetric vs. Skewed Data
Median, mean and mode of
symmetric, positively and
negatively skewed data
Measuring the Dispersion of Data
Quartiles, outliers and boxplots
Quartiles: Q1 (25th percentile), Q3 (75th percentile)
Inter-quartile range: IQR = Q3 – Q1
Five number summary: min, Q1, M, Q3, max
Boxplot: the ends of the box are the quartiles, the median is marked inside the box, whiskers extend to the smallest and largest non-outlier values, and
outliers are plotted individually
Outlier: usually, a value more than 1.5 × IQR above Q3 or below Q1
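A minimal sketch of these dispersion measures with NumPy (the sample values are made up for illustration):

```python
import numpy as np

data = np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34, 90])  # 90 is an artificial outlier

q1, median, q3 = np.percentile(data, [25, 50, 75])
iqr = q3 - q1

# Five-number summary: min, Q1, median, Q3, max
summary = (data.min(), q1, median, q3, data.max())

# Flag values more than 1.5 * IQR below Q1 or above Q3 as outliers
outliers = data[(data < q1 - 1.5 * iqr) | (data > q3 + 1.5 * iqr)]
print(summary, outliers)
```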
Boxplot Analysis
Properties of Normal Distribution
Curve
The normal (distribution) curve
From μ–σ to μ+σ: contains about 68% of the measurements (μ:
mean, σ: standard deviation)
From μ–2σ to μ+2σ: contains about 95% of the measurements
From μ–3σ to μ+3σ: contains about 99.7% of the measurements
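A quick check of these percentages using SciPy's normal distribution:

```python
from scipy.stats import norm

# Fraction of a normal distribution that lies within k standard deviations of the mean
for k in (1, 2, 3):
    frac = norm.cdf(k) - norm.cdf(-k)
    print(f"within {k} sigma: {frac:.4f}")   # ~0.6827, 0.9545, 0.9973
```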
Histogram Analysis
Graph displays of basic statistical class
descriptions
Frequency histograms
• A univariate graphical method
• Consists of a set of rectangles that reflect the counts or
frequencies of the classes present in the given data
Example
Quantile Plot
Display all of the data (allowing the user to assess both
the overall behavior and unusual occurrences)
Plot quantile information
For data $x_i$ sorted in increasing order, $f_i$ indicates that approximately $100 f_i\%$ of the data are less than or equal to the value $x_i$, where
$f_i = \dfrac{i - 0.5}{n}$
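A small sketch computing the $(f_i, x_i)$ pairs for a quantile plot (the price values are illustrative):

```python
import numpy as np

x = np.sort(np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]))
n = len(x)
f = (np.arange(1, n + 1) - 0.5) / n   # f_i = (i - 0.5) / n

# Each pair (f_i, x_i) says: roughly 100*f_i % of the data is <= x_i
for fi, xi in zip(f, x):
    print(f"{fi:.3f}  {xi}")
```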
Quantile-Quantile (Q-Q) Plot
Graphs the quantiles of one univariate distribution against
the corresponding quantiles of another
Allows the user to view whether there is a shift in going
from one distribution to another
Scatter Plot
Provides a first look at bivariate data to see clusters
of points, outliers, etc.
Each pair of values is treated as a pair of coordinates
and plotted as points in the plane
Loess Curve
Adds a smooth curve to a scatter plot in order to provide
better perception of the pattern
Loess curve is fitted by setting two parameters: a
smoothing parameter, and the degree of the
polynomials that are fitted by the regression
Graphic Displays of Basic Statistical
Descriptions
Boxplot
Histogram
Quantile plot: each value xi is paired with fi, indicating that approximately 100 fi % of the data are ≤ xi
Quantile-quantile (q-q) plot: graphs the quantiles of one univariate distribution against the corresponding quantiles of another
Scatter plot: each pair of values is a pair of
coordinates and plotted as points in the plane
Loess (local regression) curve: add a smooth curve
to a scatter plot to provide better perception of the
pattern of dependence
Exercise
Importance
“Data cleaning is one of the three biggest problems
in data warehousing”—Ralph Kimball
“Data cleaning is the number one problem in data
warehousing”—DCI survey
How to Handle Missing Data?
Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably.
Fill in the missing value manually: tedious + infeasible?
Fill in it automatically with
a global constant : e.g., “unknown”, a new class?!
the attribute mean
the attribute mean for all samples belonging to the same class:
smarter
the most probable value: inference-based such as Bayesian
formula or decision tree
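A hedged sketch of the automatic filling strategies using pandas (the column names and values are hypothetical):

```python
import pandas as pd

# Hypothetical table with missing income values
df = pd.DataFrame({
    "class":  ["A", "A", "B", "B", "B"],
    "income": [30000, None, 52000, None, 48000],
})

# Fill with the overall attribute mean
df["income_mean"] = df["income"].fillna(df["income"].mean())

# Smarter: fill with the mean of samples belonging to the same class
df["income_class_mean"] = df["income"].fillna(
    df.groupby("class")["income"].transform("mean")
)
print(df)
```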
Noisy Data
How to Handle Noisy Data?
Binning
first sort the data and partition it into bins
noise is smoothed by consulting neighboring values in the same bin (local smoothing)
then one can smooth by bin means, bin medians, bin boundaries, etc.
Regression
smooth by fitting the data into regression functions
Clustering
detect and remove outliers
Combined computer and human inspection
detect suspicious values and check by human (e.g.,
deal with possible outliers)
Simple Discretization Methods: Binning
Equal-width (distance) partitioning
Divides the range into N intervals of equal size
if A and B are the lowest and highest values of the attribute, the
width of intervals will be: W = (B –A)/N
The most straightforward, but outliers may dominate presentation
Skewed data is not handled well
Binning Methods for Data Smoothing
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28,
29, 34
* Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
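The same smoothing steps can be sketched in Python; this reproduces the bin means and bin boundaries above (bin depth and prices taken from the example):

```python
prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
depth = 4  # equal-frequency (equi-depth) bins of 4 values each
bins = [prices[i:i + depth] for i in range(0, len(prices), depth)]

# Smoothing by bin means: every value is replaced by its bin's mean
by_means = [[round(sum(b) / len(b))] * len(b) for b in bins]

# Smoothing by bin boundaries: every value snaps to the nearer bin boundary
by_bounds = [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b] for b in bins]

print(by_means)   # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(by_bounds)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```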
Regression
[Figure: a data point (X1, Y1) and its value Y1* predicted by the regression line y = x + 1]
Cluster Analysis
Exercise
1. Suppose a group of 12 sales price records has been
sorted as follows: 5, 10, 11, 13, 15, 15, 15, 55, 60, 60,
65, 65.
(a) Smooth the data by bin means, using a bin depth of 4.
(b) Smooth the data by bin boundaries, using a bin depth
of 4.
(c) Smooth the data by bin means, using 3 bins of equal-
width partitioning.
Data Transformation: Normalization
Min-max normalization: to [new_minA, new_maxA]
$v' = \dfrac{v - \min_A}{\max_A - \min_A}\,(\mathrm{new\_max}_A - \mathrm{new\_min}_A) + \mathrm{new\_min}_A$
Ex. Let income range from $12,000 to $98,000 be normalized to [0.0, 1.0].
Then $73,600 is mapped to $\dfrac{73{,}600 - 12{,}000}{98{,}000 - 12{,}000}(1.0 - 0) + 0 = 0.716$
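A minimal sketch of min-max normalization that reproduces the income example:

```python
def min_max(v, min_a, max_a, new_min=0.0, new_max=1.0):
    """Min-max normalization of v from [min_a, max_a] to [new_min, new_max]."""
    return (v - min_a) / (max_a - min_a) * (new_max - new_min) + new_min

# Income example from the slide: $73,600 in [$12,000, $98,000] -> ~0.716
print(round(min_max(73_600, 12_000, 98_000), 3))  # 0.716
```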
Handling Redundancy in Data Integration
Correlation Analysis (Numerical Data)
Correlation coefficient (also called Pearson’s product
moment coefficient)
$r_{A,B} = \dfrac{\sum_{i=1}^{n}(a_i - \bar{A})(b_i - \bar{B})}{(n-1)\,\sigma_A \sigma_B} = \dfrac{\sum_{i=1}^{n} a_i b_i - n\bar{A}\bar{B}}{(n-1)\,\sigma_A \sigma_B}$
where n is the number of tuples, $\bar{A}$ and $\bar{B}$ are the respective means, and $\sigma_A$, $\sigma_B$ the respective standard deviations of A and B
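A small sketch of this coefficient, checked against NumPy's built-in corrcoef (the sample vectors are made up):

```python
import numpy as np

def pearson_r(a, b):
    """Pearson's product-moment correlation, following the formula above."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = len(a)
    return ((a - a.mean()) * (b - b.mean())).sum() / ((n - 1) * a.std(ddof=1) * b.std(ddof=1))

a = [1, 2, 3, 4, 5]
b = [2, 4, 5, 4, 6]
print(pearson_r(a, b), np.corrcoef(a, b)[0, 1])  # both ~0.85
```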
[Figure: scatter-plot panels (a), (b), (c) illustrating correlated vs. not correlated data]
Correlation Analysis (Categorical Data)
Χ2 (chi-square) test
$\chi^2 = \sum \dfrac{(\mathrm{Observed} - \mathrm{Expected})^2}{\mathrm{Expected}}$
The larger the Χ2 value, the more likely the variables
are correlated
The cells that contribute the most to the Χ2 value are
those whose actual count is very different from the
expected count
Correlation does not imply causality
# of hospitals and # of car-theft in a city are correlated
Both are causally linked to the third variable: population
Chi-Square Calculation: An Example
                          Play chess   Not play chess   Sum (row)
Like science fiction      250 (90)     200 (360)         450
Not like science fiction   50 (210)   1000 (840)        1050
Sum (col.)                300         1200              1500

(Expected counts under independence are shown in parentheses.)
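A sketch that recomputes the expected counts and the Χ² statistic for this table:

```python
import numpy as np

# Observed contingency table from the slide
# (rows: likes science fiction or not; columns: plays chess or not)
obs = np.array([[250, 200],
                [50, 1000]], dtype=float)

# Expected counts under independence: row_total * col_total / grand_total
row = obs.sum(axis=1, keepdims=True)
col = obs.sum(axis=0, keepdims=True)
exp = row @ col / obs.sum()

chi2 = ((obs - exp) ** 2 / exp).sum()
print(exp)   # [[90, 360], [210, 840]]
print(chi2)  # ~507.9 -> far above the critical value, so the attributes are correlated
```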
Exercise
1. The following contingency table summarizes supermarket
transaction data.
(a) Based on the given data, is the purchase of hot dogs
independent of the purchase of hamburgers?
(b) If correlated, what kind of correlation relationship
exists between the two items?
                  hot dogs   not hot dogs     sum
hamburgers          4000        3500         7500
not hamburgers      2000         500         2500
sum                 6000        4000        10000
Data Reduction
Data reduction
Obtain a reduced representation of the data set that is much
smaller in volume yet produces the same (or almost the same)
analytical results
Data Reduction Strategies
Data compression
Data Cube Aggregation
The lowest level of a data cube (base cuboid)
The aggregated data for an individual entity of interest
Multiple levels of aggregation in data cubes
Further reduce the size of data to deal with
Reference appropriate concept levels
Use the smallest representation which is enough to
solve the task
Attribute Subset Selection
Feature Selection Methods
There are 2^d possible feature subsets of d features
Greedy methods: locally optimal
Choose by “statistical significance” tests
Best step-wise forward selection (sketched after this list):
• The best single feature is picked first
• Then the next best feature conditioned on the first, and so on
Step-wise backward elimination:
• Repeatedly eliminate the worst feature
Best combined feature selection and elimination
Decision tree induction
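A minimal sketch of greedy step-wise forward selection; the feature names and scoring function are hypothetical stand-ins for a real evaluation measure (e.g., cross-validated accuracy):

```python
def forward_select(features, score, k):
    """Greedy step-wise forward selection: repeatedly add the single feature
    that most improves score(selected_subset) until k features are chosen."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage with a made-up scoring function that simply sums per-feature values
toy_value = {"age": 3.0, "income": 2.0, "student": 2.5, "credit": 1.0}
print(forward_select(toy_value, lambda s: sum(toy_value[f] for f in s), k=2))
# ['age', 'student']
```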
Example
Data Compression
String compression
There are extensive theories and well-tuned
algorithms
Typically lossless
But only limited manipulation is possible without
expansion
Audio/video compression
Typically lossy compression, with progressive
refinement
Sometimes small fragments of signal can be
reconstructed without reconstructing the whole
Data Compression
[Figure: original data and its lossy approximation]
Wavelet Transformation
Numerosity Reduction
Reduce data volume by choosing alternative,
smaller forms of data representation
Parametric methods
Assume the data fits some model, estimate model
parameters, store only the parameters, and discard the
data (except possible outliers)
Non-parametric methods
Do not assume models
Major families: histograms, clustering, sampling
Data Reduction Method (1):
Regression Models
Linear regression: Data are modeled to fit a
straight line
Often uses the least-square method to fit the line
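A small sketch of least-squares line fitting with NumPy (toy data assumed); the fitted parameters can then stand in for the raw values:

```python
import numpy as np

# Toy data roughly following y = x + 1 with some noise
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 5.0, 5.9])

# Least-squares fit of a straight line y = w1 * x + w0
w1, w0 = np.polyfit(x, y, deg=1)
print(w1, w0)        # slope and intercept, both close to 1

# Store only (w1, w0); reconstruct approximate values on demand
y_hat = w1 * x + w0
```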
Data Reduction Method (2): Histograms
Divide data into buckets and
store frequency for each
bucket
Partitioning rules:
Equal-width: equal bucket range
Equal-frequency (or equal-depth)
V-optimal: the histogram with the least variance (histogram
variance is a weighted sum of the original values that each
bucket represents)
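A sketch of equal-width and equal-frequency bucket boundaries with NumPy, using the price values from the earlier binning example:

```python
import numpy as np

prices = np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])

# Equal-width buckets: equal bucket range
counts, edges = np.histogram(prices, bins=3)
print(edges, counts)   # 3 buckets of equal width, with their frequencies

# Equal-frequency (equal-depth) buckets: edges placed at the quantiles
eq_freq_edges = np.quantile(prices, [0, 1/3, 2/3, 1])
print(eq_freq_edges)
```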
Data Reduction Method (3): Clustering
Data Reduction Method (4): Sampling
Sampling: obtaining a small sample s to
represent the whole data set N
Choose a representative subset of the data
Simple random sampling may have very poor
performance in the presence of skew
Adaptive sampling methods
Stratified sampling:
• Approximate the percentage of each class (or
subpopulation of interest) in the overall database
• Used in conjunction with skewed data
Fast, scan database once
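A hedged sketch of simple random and stratified sampling with pandas (the skewed toy data is made up):

```python
import pandas as pd

df = pd.DataFrame({
    "cls":   ["A"] * 90 + ["B"] * 10,   # skewed class distribution
    "value": range(100),
})

# Simple random sampling, without and with replacement
srswor = df.sample(n=20, replace=False, random_state=0)
srswr = df.sample(n=20, replace=True, random_state=0)

# Stratified sampling: keep roughly the same class proportions as the full data
stratified = df.groupby("cls", group_keys=False).apply(
    lambda g: g.sample(frac=0.2, random_state=0)
)
print(stratified["cls"].value_counts())  # ~18 A, ~2 B
```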
Sampling: With or Without Replacement
[Figure: samples drawn from the raw data, without and with replacement]
Sampling: Cluster or Stratified Sampling
Overview: Data Preprocessing
Why preprocess the data?
Descriptive data summarization
Data cleaning
Data transformation
Data integration
Data reduction
Discretization and concept hierarchy generation
Summary
Discretization and Concept Hierarchy
Discretization
Reduce the number of values for a given continuous
attribute by dividing the range of the attribute into
intervals
Interval labels can then be used to replace actual data
values
Discretization can be performed recursively on an
attribute
Concept hierarchy formation
Recursively reduce the data by collecting and replacing low-level concepts (such as numeric values for age) with higher-level concepts (such as young, middle-aged, or senior)
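A minimal sketch of replacing numeric ages by higher-level concept labels with pandas (the cut points and labels are illustrative assumptions):

```python
import pandas as pd

ages = pd.Series([13, 15, 22, 25, 33, 35, 41, 46, 52, 70])

# Replace numeric ages by interval labels (a one-level concept hierarchy)
labels = pd.cut(ages, bins=[0, 20, 50, 120], labels=["young", "middle-aged", "senior"])
print(labels.value_counts())
```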
Example of Concept Hierarchy
[Figure: concept hierarchy over the price range $0…$1000]
Discretization and Concept Hierarchy
Generation for Numeric Data
Typical methods: All the methods can be applied
recursively
Binning
• Top-down split, replace the value by bin mean or median
Histogram analysis
• Top-down split
Clustering analysis
• Top-down split
Entropy-Based Discretization
Entropy is calculated based on class distribution of the samples in the
set. Given m classes, the entropy of S is
$Entropy(S) = -\sum_{i=1}^{m} p_i \log_2(p_i)$, where $p_i$ is the probability of class i in S
Given a set of samples S, if S is partitioned into two intervals S1 and
S2 using boundary T, the entropy after partitioning is
$Entropy(S, T) = \dfrac{|S_1|}{|S|} Entropy(S_1) + \dfrac{|S_2|}{|S|} Entropy(S_2)$
The boundary that maximizes the information gain over all possible
boundaries is selected as a binary discretization
$Gain(S, T) = Entropy(S) - Entropy(S, T)$
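A minimal sketch of entropy-based selection of a single binary split boundary (toy values and class labels assumed):

```python
import numpy as np

def entropy(labels):
    """Entropy(S) = -sum_i p_i * log2(p_i) over the class distribution of S."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_split(values, labels):
    """Pick the boundary T that maximizes Gain(S, T) = Entropy(S) - Entropy(S, T)."""
    order = np.argsort(values)
    v, y = np.asarray(values)[order], np.asarray(labels)[order]
    base = entropy(y)
    best_gain, best_t = -1.0, None
    for i in range(1, len(v)):
        if v[i] == v[i - 1]:
            continue
        t = (v[i] + v[i - 1]) / 2           # candidate boundary between adjacent values
        s1, s2 = y[:i], y[i:]
        after = len(s1) / len(y) * entropy(s1) + len(s2) / len(y) * entropy(s2)
        gain = base - after
        if gain > best_gain:
            best_gain, best_t = gain, t
    return best_t, best_gain

print(best_split([1, 2, 3, 10, 11, 12], ["no", "no", "no", "yes", "yes", "yes"]))
# (6.5, 1.0): the split at 6.5 separates the two classes perfectly
```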
An Example of Entropy-based
Partitioning
Gain(age) = 0.246
Gain(income) = 0.029
Gain(student) = 0.151
Gain(credit_rating) = 0.048
age has the highest gain and is selected; its range is partitioned into the intervals <=30, 30..40, >40
Segmentation by Natural Partitioning
A simple 3-4-5 rule can be used to segment numeric data
into relatively uniform, “natural” intervals
If an interval covers 3, 6, 7 or 9 distinct values at the “most
significant digit”, partition the range into 3 equal-width intervals
If it covers 2, 4, or 8 distinct values at the “most significant digit”,
partition the range into 4 intervals
If it covers 1, 5, or 10 distinct values at the “most significant digit”,
partition the range into 5 intervals
The top-level segmentation is based on the value range covered by the majority of the data
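A simplified sketch of the top-level 3-4-5 partition; it ignores the percentile-based rounding and recursive refinement of the full rule:

```python
def top_level_345(low, high):
    """Top-level 3-4-5 partition of [low, high], based on the number of
    distinct values at the most significant digit of the range width."""
    width = high - low
    msd = 10 ** (len(str(int(abs(width)))) - 1)   # magnitude of the most significant digit
    distinct = round(width / msd)                 # e.g., 3 for a width of ~3,000
    if distinct in (3, 6, 7, 9):
        n = 3
    elif distinct in (2, 4, 8):
        n = 4
    else:                                         # 1, 5, or 10 distinct values
        n = 5
    step = width / n
    return [(low + i * step, low + (i + 1) * step) for i in range(n)]

print(top_level_345(-1000, 2000))
# 3 distinct values at the most significant digit -> 3 equal-width intervals of width 1000
```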
Example of 3-4-5 Rule
[Figure: worked example of the 3-4-5 rule — the rounded range (−$1,000 … $2,000) is split into equal-width intervals (Step 3) and then adjusted to cover the actual data range (Step 4)]
Automatic Concept Hierarchy
Generation
Some hierarchies can be automatically generated
based on the analysis of the number of distinct values
per attribute in the data set
The attribute with the most distinct values is placed at the
lowest level of the hierarchy
Exceptions, e.g., weekday, month, quarter, year
[Figure: automatically generated location hierarchy, e.g., country (15 distinct values) at the top level]