Data Mining: Concepts and Techniques: Chapter 3
Chapter 3: Data Preprocessing
Why Data Preprocessing?
No quality data, no quality mining results!
Quality decisions must be based on quality data
A data warehouse needs consistent integration of quality data
Multi-Dimensional Measure of Data Quality
Accuracy
Completeness
Consistency
Timeliness
Believability
Value added
Interpretability
Accessibility
Broad categories: intrinsic, contextual, representational, and accessibility
Major Tasks in Data Preprocessing
Data cleaning
Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
Data integration
Integration of multiple databases, data cubes, or files
Data transformation
Normalization and aggregation
Data reduction
Obtains a reduced representation that is much smaller in volume but produces the same or similar analytical results
Data discretization
Part of data reduction, of particular importance for numerical data
(Figure: forms of data preprocessing.)
Chapter 3: Data Preprocessing
Data Cleaning
Missing Data
Data is not always available
E.g., many tuples have no recorded value for several attributes, such as customer income in sales data
Missing data may be due to
equipment malfunction
inconsistency with other recorded data, leading to deletion
data not entered due to misunderstanding
certain data not being considered important at the time of entry
failure to register history or changes of the data
Missing data may need to be inferred.
How to Handle Missing Data?
Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably
Fill in the missing value manually: tedious + infeasible?
Use a global constant to fill in the missing value: e.g., “unknown”, a new class?!
Use the attribute mean to fill in the missing value
Use the attribute mean for all samples belonging to the same class to fill in the missing value: smarter (both mean-based strategies are sketched below)
Use the most probable value to fill in the missing value: inference-based, such as a Bayesian formula or decision tree
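Below is a minimal sketch of the two mean-based strategies with pandas; the column names income and class are hypothetical.

```python
# Mean imputation and class-conditional mean imputation (hypothetical data).
import pandas as pd

df = pd.DataFrame({
    "class":  ["A", "A", "B", "B", "A"],
    "income": [50.0, None, 80.0, None, 40.0],
})

# Attribute mean: fill every missing income with the overall mean.
df["income_mean"] = df["income"].fillna(df["income"].mean())

# Class-conditional mean: fill with the mean of the same class ("smarter").
df["income_class_mean"] = df.groupby("class")["income"].transform(
    lambda s: s.fillna(s.mean()))
print(df)
```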
Noisy Data
Noise: random error or variance in a measured variable
Incorrect attribute values may be due to, for example, technology limitations
Other data problems that require cleaning: incomplete data, inconsistent data
How to Handle Noisy Data?
Binning method:
first sort the data and partition it into (equi-depth) bins
then smooth by bin means, bin medians, or bin boundaries (see the worked example and sketch below)
Regression
smooth by fitting the data to regression functions
Simple Discretization Methods: Binning
Equal-width (distance) partitioning:
divides the range into N intervals of equal size: uniform grid
if A and B are the lowest and highest values of the attribute, the width of the intervals will be W = (B - A) / N
Equal-depth (frequency) partitioning divides the range into N intervals, each containing approximately the same number of samples
Binning Methods for Data Smoothing
* Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
* Partition into (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
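The same numbers can be reproduced with a short script. This is a plain-Python sketch with the bin depth fixed at 4, matching the example above.

```python
# Equi-depth binning with smoothing by bin means and by bin boundaries.
prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]  # already sorted
depth = 4
bins = [prices[i:i + depth] for i in range(0, len(prices), depth)]

# Smoothing by bin means: replace each value with its bin's (rounded) mean.
means = [[round(sum(b) / len(b))] * len(b) for b in bins]

# Smoothing by bin boundaries: snap each value to the closer bin boundary.
bounds = [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b]
          for b in bins]
print(means)   # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(bounds)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```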
Cluster Analysis
Regression
(Figure: a fitted line y = x + 1; the observed value Y1 at X1 is smoothed to the regression value Y1'.)
Chapter 3: Data Preprocessing
Data Integration
Data integration:
combines data from multiple sources into a coherent store
Schema integration
integrate metadata from different sources
Handling Redundant Data in Data Integration
Redundant data often occur when multiple databases are integrated
The same attribute may have different names in different databases
One attribute may be a “derived” attribute in another table, e.g., annual revenue
Redundant data may be detected by correlation analysis (a sketch follows)
Careful integration of data from multiple sources may help reduce/avoid redundancies and inconsistencies and improve mining speed and quality
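A minimal sketch of detecting a redundant numeric attribute with the Pearson correlation coefficient; the revenue figures are hypothetical.

```python
# A correlation close to +/-1 suggests one attribute is redundant
# given the other.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

monthly = [10, 12, 15, 9, 20]
annual = [12 * m for m in monthly]      # a "derived" attribute
print(pearson(monthly, annual))         # 1.0 -> annual is redundant
```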
Data Transformation
Data Transformation: Normalization
min-max normalization:
$v' = \frac{v - \min_A}{\max_A - \min_A}(new\_max_A - new\_min_A) + new\_min_A$
z-score normalization:
$v' = \frac{v - mean_A}{stand\_dev_A}$
normalization by decimal scaling:
$v' = \frac{v}{10^j}$, where $j$ is the smallest integer such that $\max(|v'|) < 1$
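A minimal sketch of all three normalizations on a toy list of values; statistics.pstdev is the population standard deviation.

```python
import statistics

values = [200.0, 300.0, 400.0, 600.0, 1000.0]

# Min-max normalization to the new range [0, 1].
lo, hi, new_lo, new_hi = min(values), max(values), 0.0, 1.0
minmax = [(v - lo) / (hi - lo) * (new_hi - new_lo) + new_lo for v in values]

# Z-score normalization: subtract the mean, divide by the std deviation.
mean, sd = statistics.mean(values), statistics.pstdev(values)
zscore = [(v - mean) / sd for v in values]

# Decimal scaling: divide by 10^j, with the smallest j making all |v'| < 1.
j = 0
while max(abs(v) / 10 ** j for v in values) >= 1:
    j += 1
decimal = [v / 10 ** j for v in values]
print(minmax, zscore, decimal, sep="\n")
```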
Chapter 3: Data Preprocessing
Data Reduction Strategies
A warehouse may store terabytes of data: complex data analysis/mining may take a very long time to run on the complete data set
Data reduction
Obtains a reduced representation of the data set that is much smaller in volume yet produces the same (or almost the same) analytical results
Strategies include:
Data cube aggregation
Dimensionality reduction
Numerosity reduction
Discretization and concept hierarchy generation
Data Cube Aggregation
The lowest level of a data cube
the aggregated data for an individual entity of interest
e.g., a customer in a phone calling data warehouse.
Multiple levels of aggregation in data cubes
Further reduce the size of data to deal with
Reference appropriate levels
Use the smallest representation that is sufficient to solve the task
Queries regarding aggregated information should be answered using the data cube when possible (a rollup sketch follows)
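A minimal sketch of such a rollup with pandas, echoing the phone-calling example; the call records are hypothetical.

```python
# Roll call-level records up to per-customer totals: a higher, much
# smaller level of the cube.
import pandas as pd

calls = pd.DataFrame({
    "customer": ["a", "a", "b", "b", "b"],
    "minutes":  [12, 7, 30, 5, 9],
})
rollup = calls.groupby("customer", as_index=False)["minutes"].sum()
print(rollup)  # one row per customer instead of one row per call
```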
Dimensionality Reduction
Feature selection (i.e., attribute subset selection):
Select a minimum set of features such that the class distribution given those features is as close as possible to the distribution given all features; fewer features also make the discovered patterns easier to understand
Heuristic methods (due to the exponential number of choices):
step-wise forward selection
step-wise backward elimination
decision-tree induction
Example of Decision Tree Induction
(Figure: a decision tree whose internal nodes test attributes such as A1? and A6?; only the attributes appearing in the tree are kept in the reduced attribute set.)
Data Compression
String compression
There are extensive theories and well-tuned algorithms
Typically lossless
But only limited manipulation is possible without expansion
Audio/video compression
Typically lossy compression, with progressive refinement
Sometimes small fragments of signal can be reconstructed without reconstructing the whole
Data Compression
(Figure: original data reduced to compressed data, lossless, or to an approximation, lossy.)
Wavelet Transforms
(Figure: Haar-2 and Daubechies-4 wavelet basis functions.)
Discrete wavelet transform (DWT): linear signal processing
Compressed approximation: store only a small fraction of the strongest wavelet coefficients
Similar to the discrete Fourier transform (DFT), but better lossy compression, localized in space
Method:
The length, L, must be an integer power of 2 (pad with 0s when necessary)
Each transform has 2 functions: smoothing, difference
They apply to pairs of data, resulting in two sets of data of length L/2
The two functions are applied recursively until the desired length is reached
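A minimal sketch of the recursive smoothing/difference step with the unnormalized Haar wavelet, assuming the input length is a power of 2.

```python
# One Haar level: pairwise averages (smoothing) and pairwise differences;
# recursing on the averages yields the full transform.
def haar(data):
    if len(data) == 1:
        return data
    avg = [(a + b) / 2 for a, b in zip(data[::2], data[1::2])]
    diff = [(a - b) / 2 for a, b in zip(data[::2], data[1::2])]
    return haar(avg) + diff  # the data length halves at each level

signal = [2.0, 2.0, 0.0, 2.0, 3.0, 5.0, 4.0, 4.0]  # L = 8, a power of 2
coeffs = haar(signal)
# Zeroing all but the strongest coefficients gives the compressed
# approximation described above.
print(coeffs)
```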
Principal Component Analysis
(Figure: data in the original axes X1, X2, with the principal component directions Y1 and Y2.)
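Since only the figure survives on this slide, here is a minimal PCA sketch via eigendecomposition of the covariance matrix (NumPy); the synthetic data is hypothetical.

```python
# Project 2-D data onto its principal component directions (the Y1/Y2
# axes in the figure above).
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
data = np.column_stack([x1, 2 * x1 + rng.normal(scale=0.3, size=200)])

centered = data - data.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
order = np.argsort(eigvals)[::-1]   # strongest component first
components = eigvecs[:, order]      # columns are the Y1, Y2 directions
projected = centered @ components   # data expressed in the new axes
print(eigvals[order])               # variance captured per component
```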
Numerosity Reduction
Parametric methods
Assume the data fits some model, estimate model
parameters, store only the parameters, and discard
the data (except possible outliers)
Log-linear models: obtain the value at a point in m-D space as a product over appropriate marginal subspaces
Non-parametric methods
Do not assume models
Major families: histograms, clustering, sampling
Regression and Log-Linear Models
Regression Analysis and Log-Linear Models
Linear regression: $Y = \alpha + \beta X$
The two parameters, $\alpha$ and $\beta$, specify the line and are to be estimated from the data at hand, e.g., by the least-squares criterion
Log-linear models:
The multi-way table of joint probabilities is approximated by a product of lower-order tables
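A minimal sketch of estimating $\alpha$ and $\beta$ with the least-squares criterion, using the closed-form formulas for simple linear regression; the sample points are hypothetical.

```python
# Least squares for Y = alpha + beta * X: beta is covariance over
# variance of X; alpha shifts the line through the means.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 2.9, 4.2, 5.1, 5.8]

mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        / sum((x - mx) ** 2 for x in xs))
alpha = my - beta * mx
print(alpha, beta)  # two stored parameters replace the raw data
```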
Histograms
A popular data reduction technique: divide the data into buckets and store a summary (e.g., average or count) for each bucket
(Figure: an example histogram over values ranging from 10,000 to 100,000.)
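A minimal sketch of an equi-width histogram as a reduction: only the bucket boundaries and counts are kept; the prices are hypothetical.

```python
# Equi-width histogram: keep (bucket_start, count) pairs, not raw data.
prices = [12, 15, 21, 25, 8, 30, 27, 19, 22, 11]
lo, hi, n_buckets = min(prices), max(prices), 4
width = (hi - lo) / n_buckets

counts = [0] * n_buckets
for p in prices:
    i = min(int((p - lo) / width), n_buckets - 1)  # clamp the max value
    counts[i] += 1
print([(lo + i * width, c) for i, c in enumerate(counts)])
```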
Clustering
Partition the data set into clusters, and store only the cluster representations
Can be very effective if the data is clustered, but not if the data is “smeared”
Hierarchical clustering can be used, with the result stored in multi-dimensional index tree structures
There are many choices of clustering definitions and clustering algorithms, further detailed in Chapter 8
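A minimal sketch of storing only the cluster representations (here, centroids); the clusters are assumed to be precomputed, since clustering algorithms themselves are deferred to Chapter 8.

```python
# Numerosity reduction by clustering: store one centroid per cluster
# instead of all the member points.
clusters = {
    0: [(1.0, 1.2), (0.9, 0.8), (1.1, 1.0)],
    1: [(5.0, 5.1), (4.8, 5.3)],
}
centroids = {
    k: tuple(sum(c) / len(pts) for c in zip(*pts))
    for k, pts in clusters.items()
}
print(centroids)  # the stored representation replaces the raw points
```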
Sampling
(Figure: raw data reduced by SRSWOR, simple random sampling without replacement, and SRSWR, simple random sampling with replacement.)
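A minimal sketch of both sampling schemes from the figure, using Python's standard random module.

```python
# SRSWOR (without replacement) and SRSWR (with replacement).
import random

raw = list(range(100))  # stand-in for the raw data set
random.seed(0)

srswor = random.sample(raw, 10)    # each tuple drawn at most once
srswr = random.choices(raw, k=10)  # tuples may repeat
print(srswor, srswr, sep="\n")
```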
Hierarchical Reduction
Use multi-resolution structures with different degrees of reduction
Hierarchical clustering is often performed but tends to define partitions of data sets rather than “clusters”
Parametric methods are usually not amenable to hierarchical representation
Hierarchical aggregation
An index tree hierarchically divides a data set into partitions by the value ranges of some attributes
Discretization
Three types of attributes:
Nominal: values from an unordered set
Ordinal: values from an ordered set
Continuous: real numbers
Discretization:
divide the range of a continuous attribute into intervals
Some classification algorithms only accept categorical attributes
Reduce data size by discretization
Discretization and Concept Hierarchy
Discretization
reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals; interval labels can then be used to replace actual data values
Concept hierarchies
reduce the data by collecting and replacing low-level concepts (such as numeric values for the attribute age) with higher-level concepts (such as young, middle-aged, or senior)
Discretization and concept hierarchy generation for numeric data
Binning
Histogram analysis
Clustering analysis
Entropy-based discretization
Segmentation by natural partitioning
Entropy-Based Discretization
Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is
$E(S,T) = \frac{|S_1|}{|S|}\,Ent(S_1) + \frac{|S_2|}{|S|}\,Ent(S_2)$
The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization.
The process is applied recursively to the partitions obtained until some stopping criterion is met, e.g.,
$Ent(S) - E(T,S) > \delta$
Experiments show that it may reduce data size and improve classification accuracy
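A minimal sketch of choosing one binary boundary by this criterion; the ages and class labels are hypothetical.

```python
# Try each candidate boundary T and keep the one minimizing E(S, T).
import math

def ent(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(l) for l in set(labels)))

def best_split(values, labels):
    pairs = sorted(zip(values, labels))
    best = None
    for i in range(1, len(pairs)):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2  # candidate boundary
        left = [l for v, l in pairs if v <= t]
        right = [l for v, l in pairs if v > t]
        e = (len(left) * ent(left) + len(right) * ent(right)) / len(pairs)
        if best is None or e < best[0]:
            best = (e, t)
    return best  # (minimum entropy E(S, T), chosen boundary T)

print(best_split([23, 25, 30, 41, 45, 50], ["y", "y", "y", "o", "o", "o"]))
```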
Segmentation by natural partitioning
A 3-4-5 rule can be used to segment numeric data into relatively uniform, “natural” intervals.
(Figure: worked example; step 3 partitions the range (-$1,000, $2,000) and step 4 the wider range (-$4,000, $5,000).)
Specification of a set of attributes
A concept hierarchy can be automatically generated based on the number of distinct values per attribute in the given attribute set. The attribute with the most distinct values is placed at the lowest level of the hierarchy (e.g., street below city below state below country).
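A minimal sketch of ordering attributes into a hierarchy by their number of distinct values; the attribute data is hypothetical.

```python
# Attributes with more distinct values sit lower in the hierarchy.
data = {
    "country": ["US", "CA", "US", "US", "CA"],
    "state":   ["NY", "ON", "CA", "NY", "BC"],
    "city":    ["NYC", "Toronto", "LA", "Buffalo", "Victoria"],
}
hierarchy = sorted(data, key=lambda a: len(set(data[a])))  # top -> bottom
print(hierarchy)  # ['country', 'state', 'city']
```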
Summary
References
D. P. Ballou and G. K. Tayi. Enhancing data quality in data warehouse
environments. Communications of ACM, 42:73-78, 1999.
Jagadish et al., Special Issue on Data Reduction Techniques. Bulletin of the
Technical Committee on Data Engineering, 20(4), December 1997.
D. Pyle. Data Preparation for Data Mining. Morgan Kaufmann, 1999.
T. Redman. Data Quality: Management and Technology. Bantam Books, New
York, 1992.
Y. Wand and R. Wang. Anchoring data quality dimensions ontological
foundations. Communications of ACM, 39:86-95, 1996.
R. Wang, V. Storey, and C. Firth. A framework for analysis of data quality
research. IEEE Trans. Knowledge and Data Engineering, 7:623-640, 1995.
53