Lec4 Data Preprocessing
Major Tasks in Data Preprocessing
Data cleaning
Fill in missing values, smooth noisy data, identify or remove outliers, and
resolve inconsistencies
Data integration
Integration of multiple databases, data cubes, or files
Data reduction
Dimensionality reduction
Numerosity reduction
Data compression
Data transformation and data discretization
Normalization
Concept hierarchy generation
Data Cleaning
Data in the Real World Is Dirty: lots of potentially incorrect data,
e.g., faulty instruments, human or computer error, transmission errors
incomplete: lacking attribute values, lacking certain attributes of
interest, or containing only aggregate data
e.g., Occupation=“ ” (missing data)
noisy: containing noise, errors, or outliers
e.g., Salary=“−10” (an error)
inconsistent: containing discrepancies in codes or names, e.g.,
Age=“42”, Birthday=“03/07/2010”
Was rating “1, 2, 3”, now rating “A, B, C”
discrepancy between duplicate records
Intentional (e.g., disguised missing data)
Jan. 1 as everyone’s birthday?
Incomplete (Missing) Data
Data is not always available
E.g., many tuples have no recorded value for several
attributes, such as customer income in sales data
Missing data may be due to
equipment malfunction
inconsistent with other recorded data and thus deleted
data not entered due to misunderstanding
certain data may not be considered important at the time of
entry
history or changes of the data were not registered
Missing data may need to be inferred
How to Handle Missing Data?
Ignore the tuple: usually done when class label is missing (when
doing classification)
not always effective: the values of the tuple's remaining attributes are wasted
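A minimal pandas sketch of this strategy, plus the common alternative of filling in a value (the toy DataFrame and its income attribute are made up for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical toy data: income and the class label both have missing values.
df = pd.DataFrame({"income": [52000, np.nan, 61000, np.nan, 47000],
                   "label":  ["yes", "no", "yes", None, "no"]})

# Ignore the tuple: drop rows whose class label is missing.
df_dropped = df.dropna(subset=["label"])

# Alternative: fill in the missing income with the attribute mean.
df_filled = df.copy()
df_filled["income"] = df_filled["income"].fillna(df_filled["income"].mean())
```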
How to Handle Noisy Data?
Binning
first sort data and partition into (equal-frequency) bins
then one can smooth by bin means, smooth by bin median,
smooth by bin boundaries, etc.
Regression
smooth by fitting the data into regression functions
Clustering
detect and remove outliers
Combined computer and human inspection
detect suspicious values and have a human check them (e.g., to deal with
possible outliers)
Smoothing data by binning: examples
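A minimal numpy sketch of these examples, using the sorted price values that also appear later in this deck:

```python
import numpy as np

# Hypothetical sorted prices (same values used later in the binning example).
prices = np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])

# Partition into 3 equal-frequency (equal-depth) bins of 4 values each.
bins = np.split(prices, 3)

# Smoothing by bin means: every value in a bin is replaced by the bin mean.
by_means = [np.full(len(b), b.mean()) for b in bins]          # means: 9, 22.75, 29.25

# Smoothing by bin boundaries: each value moves to the closer bin boundary.
by_bounds = [np.where(b - b.min() <= b.max() - b, b.min(), b.max()) for b in bins]
# -> [4 4 4 15], [21 21 25 25], [26 26 26 34]
```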
Smoothing data by regression: example
[Figure: a small table of values (e.g., temperature readings) plotted with a fitted regression line; each value is smoothed to its value on the line]
Remove outliers with clustering
Data Reduction Strategies
Data reduction: Obtain a reduced representation of the data set that is
much smaller in volume yet produces the same (or almost the same)
analytical results
Why data reduction?
Curse of dimensionality
Efficiency
Data reduction strategies
Dimensionality reduction, e.g., remove unimportant attributes
Wavelet transforms
Principal Components Analysis (PCA)
Feature subset selection, feature creation
Numerosity reduction (some simply call it: Data Reduction)
Regression and Log-Linear Models
Histograms, clustering, sampling
Data cube aggregation
Data compression
Dimensionality Reduction
Curse of dimensionality
When dimensionality increases, data becomes increasingly
sparse
Obj = (x1, … , xn)
Likelihood of many zeros increases
Density and distance between points, which are critical to
clustering and outlier analysis, become less meaningful
d(Oi, Oj) = sqrt((x1 − y1)² + ⋯ + (xn − yn)²)
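A small, purely illustrative sketch of this effect: for random points, the gap between the nearest and farthest neighbour shrinks as the number of dimensions n grows.

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (2, 10, 100, 1000):
    X = rng.random((500, n))                    # 500 random points in [0, 1]^n
    d = np.linalg.norm(X - X[0], axis=1)[1:]    # distances from the first point
    # The ratio approaches 1 as n grows: "near" and "far" lose their meaning.
    print(n, round(d.min() / d.max(), 3))
```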
Dimensionality Reduction
Dimensionality reduction
Avoid the curse of dimensionality
Help eliminate irrelevant features and reduce noise
Reduce time and space required in data mining
Allow easier visualization
Dimensionality Reduction
Dimensionality reduction techniques
Wavelet transforms (omitted)
Principal Component Analysis
Supervised and nonlinear techniques (e.g., feature
selection)
Principal Component Analysis (PCA)
Find a projection that captures the largest amount of variation in data
The original data are projected onto a much smaller space, resulting in
dimensionality reduction. We find the eigenvectors of the covariance matrix,
and these eigenvectors define the new space
[Figure: 2-D data points in the x1–x2 plane with the principal component directions]
Principal Component Analysis (Steps)
Given N data vectors from n-dimensions, find k ≤ n orthogonal vectors
(principal components) that can best represent data
Normalize input data: Each attribute falls within the same range
Compute k orthonormal (unit) vectors, i.e., principal components
Each input data vector is represented by a linear combination of the k
principal component vectors
The principal components are sorted in order of decreasing “significance”
or strength
Since the components are sorted, the size of the data can be reduced by
eliminating the weak components, i.e., those with low variance (i.e., using
the strongest principal components, it is possible to reconstruct a good
approximation of the original data)
Works for numeric data only
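A minimal numpy sketch of these steps (zero-mean the attributes, take the eigenvectors of the covariance matrix, keep the k strongest components); X is assumed to be an N x n numeric matrix:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the N x n data matrix X onto its k strongest principal components."""
    Xc = X - X.mean(axis=0)                  # normalize input data (zero mean)
    cov = np.cov(Xc, rowvar=False)           # n x n covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvectors define the new space
    order = np.argsort(eigvals)[::-1]        # sort by decreasing "significance"
    W = eigvecs[:, order[:k]]                # the k strongest principal components
    return Xc @ W                            # N x k reduced representation

Z = pca_reduce(np.random.default_rng(0).random((100, 5)), k=2)
```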
Attribute Subset Selection
Another way to reduce dimensionality of data
Redundant attributes
Duplicate much or all of the information contained in one or
more other attributes
E.g., purchase price of a product and the amount of sales tax
paid
Irrelevant attributes
Contain no useful information for the data mining task at hand
E.g., students’ address is often irrelevant to the task of predicting
students' GPA
Heuristic Search in Attribute Selection
There are 2^d possible attribute combinations of d attributes
Typical heuristic attribute selection methods:
Best single attribute under the attribute independence
assumption: choose by significance tests
Best step-wise feature selection:
The best single-attribute is picked first
Then the next best attribute conditioned on the first, ...
Step-wise attribute elimination:
Repeatedly eliminate the worst attribute
Best combined attribute selection and elimination
Optimal branch and bound:
Use attribute elimination and backtracking
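A rough sketch of best step-wise (forward) selection; score is assumed to be any evaluation function for a candidate attribute subset (e.g., cross-validated accuracy):

```python
def stepwise_forward_selection(attributes, score, max_attrs=None):
    """Greedy best step-wise feature selection: repeatedly add the attribute
    that improves the score the most, stopping when no attribute helps."""
    selected, remaining = [], list(attributes)
    best_score = float("-inf")
    while remaining and (max_attrs is None or len(selected) < max_attrs):
        candidate, candidate_score = max(
            ((a, score(selected + [a])) for a in remaining), key=lambda t: t[1])
        if candidate_score <= best_score:     # no improvement: stop
            break
        selected.append(candidate)
        remaining.remove(candidate)
        best_score = candidate_score
    return selected
```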
Numerosity Reduction
Reduce data volume by choosing alternative, smaller forms of
data representation
Parametric methods (e.g., regression)
Assume the data fits some model, estimate model
parameters, store only the parameters, and discard the
data (except possible outliers)
Non-parametric methods
Do not assume models
Major families: histograms, clustering, sampling, …
Parametric Data Reduction: Regression
Linear regression
Data modeled to fit a straight line
Often uses the least-squares method to fit the line
Multiple regression
Allows a response variable Y to be modeled as a linear
function of a multidimensional feature vector
Regression Analysis
Regression analysis: modeling and analysis of numerical data
A parametric approach
[Figure: data points with a fitted line y = x + 1; Y1 is an observed value and Y1' its value predicted by the line]
Regression Analysis
Linear regression: Y = w X + b
Two regression coefficients, w and b, specify the line
and are to be estimated by using the data at hand
Apply the least-squares criterion to the known values Y1, Y2, …, X1, X2, …
Multiple regression: Y = b0 + b1 X1 + b2 X2
Many nonlinear functions can be transformed into the
above
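A minimal numpy sketch of estimating the two coefficients w and b by least squares (the data values are made up for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # known values X1, X2, ... (illustrative)
y = np.array([2.1, 2.9, 4.2, 5.1, 5.8])   # known values Y1, Y2, ...

w, b = np.polyfit(x, y, deg=1)             # least-squares fit of Y = w X + b
print(f"Y = {w:.2f} X + {b:.2f}")          # only w and b need to be stored
```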
Histogram Analysis
Divide data into buckets and store the frequency for each bucket
Partitioning rules:
Equal-width: equal bucket range
Equal-frequency (or equal-depth)
[Figure: example histogram with equal-width buckets over values from 10,000 to 100,000]
Histogram Analysis: examples
Data set: 1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15, 15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28, 30, 30, 30
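A quick numpy sketch of the partitioning rules applied to this data set (equal-width buckets 1–10, 11–20, 21–30, and an equal-frequency split into three buckets):

```python
import numpy as np

data = np.array([1,1,5,5,5,5,5,8,8,10,10,10,10,12,14,14,14,15,15,15,15,15,15,
                 18,18,18,18,18,18,20,20,20,20,20,20,20,21,21,21,21,25,25,25,
                 25,25,28,28,30,30,30])

# Equal-width buckets 1-10, 11-20, 21-30 (edges shifted by 0.5 so integer
# values land in the intended bucket); only the frequencies are stored.
counts, _ = np.histogram(data, bins=[0.5, 10.5, 20.5, 30.5])
print(counts)                                   # -> [13 23 14]

# Equal-frequency (equal-depth): bucket boundaries at the tertiles.
print(np.quantile(data, [0, 1/3, 2/3, 1]))
```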
Sampling
Sampling: obtaining a small sample s to represent the whole
data set N
Allows a mining algorithm to run with complexity that is potentially
sub-linear in the size of the data
Key principle: Choose a representative subset of the data
Simple random sampling may have very poor performance in
the presence of skew
Develop adaptive sampling methods, e.g., stratified sampling
Types of Sampling
Simple random sampling
There is an equal probability of selecting any particular item
Sampling without replacement
Once an object is selected, it is removed from the population
Sampling with replacement
A selected object is not removed from the population
Stratified sampling:
Partition the data set, and draw samples from each partition
(proportionally, i.e., approximately the same percentage of
the data)
Used in conjunction with skewed data
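A short pandas sketch of these flavours, assuming a hypothetical DataFrame whose skewed segment attribute is used as the stratum:

```python
import pandas as pd

# Hypothetical data with a skewed 'segment' attribute used as the stratum.
df = pd.DataFrame({"segment": ["A"] * 90 + ["B"] * 10,
                   "value": range(100)})

srswor = df.sample(n=20, replace=False, random_state=1)   # without replacement
srswr  = df.sample(n=20, replace=True,  random_state=1)   # with replacement

# Stratified sampling: draw ~20% from each partition (segment).
stratified = df.groupby("segment", group_keys=False).apply(
    lambda g: g.sample(frac=0.2, random_state=1))
```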
[Figure: simple random samples drawn from the raw data]
Sampling: Cluster or Stratified Sampling
[Figure: a cluster/stratified sample drawn from the raw data]
Data Compression
String compression
There are extensive theories and well-tuned algorithms
Typically lossless, but only limited manipulation is possible
without expansion
Audio/video compression
Typically lossy compression, with progressive refinement
Sometimes small fragments of signal can be reconstructed
without reconstructing the whole
Dimensionality and numerosity reduction may also be considered
as forms of data compression
Data Compression
[Figure: original data and its approximated (lossy compressed) version]
Data Transformation
A function that maps the entire set of values of a given
attribute to a new set of replacement values s.t. each old
value can be identified with one of the new values
Methods
Smoothing: remove noise from the data
Normalization: min-max normalization, z-score normalization, normalization by decimal scaling
Discretization: concept hierarchy climbing
Normalization
Min-max normalization: to [new_minA, new_maxA]
v' = ((v − minA) / (maxA − minA)) × (new_maxA − new_minA) + new_minA
Normalization
Z-score normalization (μ: mean, σ: standard deviation):
v' = (v − μA) / σA
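A small numpy sketch applying both formulas above to a toy attribute:

```python
import numpy as np

v = np.array([200.0, 300.0, 400.0, 600.0, 1000.0])   # hypothetical attribute values

# Min-max normalization to [new_min, new_max] = [0, 1].
new_min, new_max = 0.0, 1.0
v_minmax = (v - v.min()) / (v.max() - v.min()) * (new_max - new_min) + new_min

# Z-score normalization using the attribute mean and standard deviation.
v_zscore = (v - v.mean()) / v.std()
```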
Discretization
Three types of attributes
Nominal—values from an unordered set, e.g., color, profession
Ordinal—values from an ordered set, e.g., military or academic rank
Numeric—numbers, e.g., integers or real numbers
Discretization: Divide the range of a continuous attribute into intervals
Interval labels can then be used to replace actual data values
Reduce data size by discretization
Supervised vs. unsupervised
Split (top-down) vs. merge (bottom-up)
Discretization can be performed recursively on an attribute
Prepare for further analysis, e.g., classification
Data Discretization Methods
Typical methods: All the methods can be applied recursively
Binning
Top-down split, unsupervised
Histogram analysis
Top-down split, unsupervised
Clustering analysis (unsupervised, top-down split or bottom-up
merge)
Decision-tree analysis (supervised, top-down split)
Correlation (e.g., χ²) analysis (unsupervised, bottom-up merge)
Simple Discretization: Binning
Equal-width (distance) partitioning: if A and B are the lowest and highest values of the attribute, the width of the N intervals is W = (B − A)/N
Equal-depth (frequency) partitioning: each interval contains approximately the same number of samples
Highly effective for both sparse and dense data
Binning and histogram for discretization
❑ Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
❑ Binning
* Partition into equal-frequency (equal-depth) bins:
  - Bin 1: 4, 8, 9, 15
  - Bin 2: 21, 21, 24, 25
  - Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
  - Bin 1: 9, 9, 9, 9
  - Bin 2: 23, 23, 23, 23
  - Bin 3: 29, 29, 29, 29
❑ Histogram
* Equal width (10):
  - Bin 1 (1-10): 4, 8, 9
  - Bin 2 (11-20): 15
  - Bin 3 (21-30): 21, 21, 24, 25, 26, 28, 29
  - Bin 4 (31-40): 34
* Equal frequency (3):
  - Bin 1: 4, 8, 9
Discretization by clustering or classification
Clustering:
group similar values into a single group
Classification
Partition the values into groups so that they are most consistent with the class labels (supervised)
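One possible sketch of the clustering approach; scikit-learn's KMeans is used here purely for illustration, since the slides do not prescribe a specific algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 1-D attribute to discretize.
prices = np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]).reshape(-1, 1)

# Group similar values; each cluster becomes one discrete interval/label.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(prices)
for k in range(3):
    members = prices[labels == k].ravel()
    print(f"cluster {k}: values {members.min()}..{members.max()}")
```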
Concept Hierarchy Generation
Concept hierarchy organizes concepts (i.e., attribute values) hierarchically and
is usually associated with each dimension in a data warehouse
Concept hierarchies facilitate drilling and rolling in data warehouses to view
data at multiple levels of granularity
Concept hierarchy formation: Recursively reduce the data by collecting and
replacing low level concepts (such as numeric values for age) by higher level
concepts (such as youth, adult, or senior)
Concept hierarchies can be explicitly specified by domain experts and/or data
warehouse designers
Concept hierarchies can be automatically formed for both numeric and nominal
data; for numeric data, use the discretization methods shown earlier.
Concept Hierarchy Generation for Nominal Data
Automatic Concept Hierarchy Generation
Some hierarchies can be automatically generated based on
the analysis of the number of distinct values per attribute in
the data set
The attribute with the most distinct values is placed at the
lowest level of the hierarchy
Exceptions exist, e.g., weekday, month, quarter, year: here the distinct-value counts do not match the intended semantic ordering
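A tiny pandas sketch of this heuristic on hypothetical location data (exceptions such as weekday/month/quarter/year would still need manual ordering):

```python
import pandas as pd

# Hypothetical location attributes.
df = pd.DataFrame({"country": ["CA", "CA", "US", "US", "US"],
                   "province_or_state": ["BC", "ON", "NY", "CA", "TX"],
                   "city": ["Victoria", "Toronto", "New York", "Los Angeles", "Austin"],
                   "street": ["1 A St", "2 B St", "3 C St", "4 D St", "5 E St"]})

# More distinct values -> lower level in the generated hierarchy.
order = df.nunique().sort_values()
print(list(order.index))   # country first (fewest distinct values), street-level last
```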