CS583 Data Prep
Attribute-value data:
- Data types: numeric, categorical (see the hierarchy for their relationship)
- Static vs. dynamic (temporal)
Other kinds of data:
- distributed data
- text, Web, meta data
- images, audio/video
(Table: example records, each described by attributes A1, A2, ..., An.)
Outline:
- Why preprocess the data?
- Data cleaning
- Data integration and transformation
- Data reduction
- Discretization
- Summary
Why preprocess the data? Data in the real world is dirty:
- incomplete: missing attribute values, lack of certain attributes of interest, or containing only aggregate data (e.g., occupation = "")
- noisy: containing errors or outliers (e.g., Salary = -10)
- inconsistent: containing discrepancies (e.g., Age = 42 but Birthday = 03/07/1997; a rating scale that was 1, 2, 3 is now A, B, C; discrepancies between duplicate records)
Dirty data hurts mining quality: duplicate or missing data, for example, may cause incorrect or even misleading statistics.
Data preparation, cleaning, and transformation comprise the majority of the work in a data mining application (roughly 90%).
Major preprocessing tasks:
- Data cleaning: fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
- Data integration: integration of multiple databases or files
- Data transformation: normalization and aggregation
- Data reduction: obtain a representation that is reduced in volume but produces the same or similar analytical results
Data Cleaning
Importance: as noted above, cleaning accounts for a large share of the work in a data mining application.
Missing Data
E.g., many tuples have no recorded values for several attributes, such as customer income in sales data
Missing data may be due to:
- equipment malfunction
- data that was inconsistent with other recorded data and thus deleted
- data not entered due to misunderstanding
- certain data not being considered important at the time of entry
- history or changes of the data not being registered
How to handle missing data?
- Ignore the tuple.
- Fill in the missing values manually: tedious and often infeasible.
- Fill in the values automatically with:
  - a global constant, e.g., "unknown" (which effectively becomes a new class?!)
  - the attribute mean
  - the most probable value: inference-based methods such as a Bayesian formula, a decision tree, or the EM algorithm
(A small sketch of the constant and mean options follows.)
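As a rough illustration (not part of the original slides), the sketch below fills missing values either with a global constant or with the attribute mean; the record layout and attribute names are invented for the example.

# Minimal sketch (illustrative): automatic filling of missing values.
# The records and attribute names are made up for this example.

records = [
    {"age": 25, "income": 50000},
    {"age": None, "income": 62000},   # missing age
    {"age": 40, "income": None},      # missing income
]

def fill_with_constant(rows, attr, constant="unknown"):
    # Replace missing values of `attr` with a single global constant.
    for row in rows:
        if row[attr] is None:
            row[attr] = constant

def fill_with_mean(rows, attr):
    # Replace missing numeric values of `attr` with the attribute mean.
    observed = [row[attr] for row in rows if row[attr] is not None]
    mean = sum(observed) / len(observed)
    for row in rows:
        if row[attr] is None:
            row[attr] = mean

fill_with_mean(records, "age")                     # second record's age becomes 32.5
fill_with_constant(records, "income", constant=0)  # third record's income becomes 0
print(records)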
Noisy Data
Noise: random error or variance in a measured variable. Incorrect attribute values may be due to:
- faulty data collection instruments
- data entry problems
- data transmission problems
- etc.
Other data problems that require cleaning: duplicate records, incomplete data, inconsistent data.
How to handle noisy data?
- Binning: first sort the data and partition it into (equi-depth) bins; then smooth by bin means, bin medians, bin boundaries, etc. (a worked example and a code sketch follow).
- Clustering: detect and remove outliers.
- Combined computer and human inspection: detect suspicious values and have a human check them (e.g., to deal with possible outliers).
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
Partition into (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
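To make the smoothing concrete, here is a small Python sketch (my own illustration, not from the original slides) that reproduces the three bins and the two smoothing variants on the price data above.

# Sketch of equi-depth binning and smoothing (illustrative, not from the slides).

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]  # already sorted

def equi_depth_bins(values, n_bins):
    # Split sorted values into bins of (roughly) equal size.
    size = len(values) // n_bins
    return [values[i * size:(i + 1) * size] for i in range(n_bins)]

def smooth_by_means(bins):
    # Replace every value in a bin by the (rounded) bin mean.
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    # Replace every value by the closer of the bin's min/max boundary.
    smoothed = []
    for b in bins:
        lo, hi = b[0], b[-1]
        smoothed.append([lo if v - lo <= hi - v else hi for v in b])
    return smoothed

bins = equi_depth_bins(prices, 3)
print(bins)                       # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
print(smooth_by_means(bins))      # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(smooth_by_boundaries(bins)) # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]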
Outlier Removal
Not every widely deviated point is noise:
- Valid: a CEO's salary
- Noisy: a person's age of 200; widely deviated points
Removal methods:
- Clustering
- Curve-fitting
- Hypothesis-testing with a given model
(A small sketch of one simple model-based check follows.)
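As one simple illustration of a hypothesis-testing-style check (my sketch, not prescribed by the slides), the code below flags values that lie far from the mean under an assumed roughly normal model; the data and threshold are made up.

# Sketch (illustrative): flag values lying far from the mean, assuming a
# roughly normal model. Data and threshold are made up for this example.
import statistics

ages = [23, 31, 45, 52, 38, 200, 29, 41]   # 200 is the suspicious value

def flag_outliers(values, z_threshold=2.0):
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [v for v in values if abs(v - mean) / stdev > z_threshold]

# A modest threshold is used here because the extreme value itself inflates
# the standard deviation; robust statistics (median/MAD), curve-fitting, or
# clustering-based checks are common alternatives.
print(flag_outliers(ages))   # [200]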
Data Integration
Data integration: combines data from multiple sources.
Schema integration:
- integrate metadata from different sources
- entity identification problem: identify the same real-world entity across data sources, e.g., A.cust-id and B.cust-#
Data value conflicts: for the same real-world entity, attribute values from different sources may differ, e.g., different scales, metric vs. British units.
Data Transformation
- Attribute/feature construction
- Aggregation: summarization
Normalization: scale attribute values to fall within a specified range.
- min-max normalization: v' = (v - min_A) / (max_A - min_A) * (new_max_A - new_min_A) + new_min_A
- z-score normalization: v' = (v - mean_A) / stdev_A
(A small sketch of both follows.)
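A minimal Python sketch of the two normalizations (illustrative; the sample values are made up):

# Sketch (illustrative): min-max and z-score normalization of one attribute.
import statistics

values = [200, 300, 400, 600, 1000]

def min_max(values, new_min=0.0, new_max=1.0):
    # v' = (v - min) / (max - min) * (new_max - new_min) + new_min
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * (new_max - new_min) + new_min for v in values]

def z_score(values):
    # v' = (v - mean) / standard deviation
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [(v - mean) / stdev for v in values]

print(min_max(values))   # [0.0, 0.125, 0.25, 0.5, 1.0]
print(z_score(values))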
Data Reduction
Obtain a reduced representation of the data set that is much smaller in volume yet produces the same (or almost the same) analytical results.
Strategies:
- Dimensionality reduction: remove unimportant attributes
- Aggregation and clustering
- Sampling
Dimensionality Reduction
Select a minimum set of attributes (features) that is sufficient for the data mining task. Common heuristics (a sketch of the first follows):
- step-wise forward selection
- step-wise backward elimination
- combining forward selection and backward elimination
- etc.
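Here is a rough sketch of step-wise forward selection (my illustration, not from the slides); `evaluate` stands in for whatever quality measure the mining task uses, e.g., the accuracy of a model built from the candidate attributes.

# Sketch (illustrative): greedy step-wise forward selection.

def forward_selection(all_attrs, evaluate, max_attrs=None):
    # Greedily add the attribute that most improves the evaluation score.
    selected = []
    remaining = list(all_attrs)
    best_score = float("-inf")
    while remaining and (max_attrs is None or len(selected) < max_attrs):
        scored = [(evaluate(selected + [a]), a) for a in remaining]
        score, attr = max(scored)
        if score <= best_score:        # no attribute improves the result: stop
            break
        best_score = score
        selected.append(attr)
        remaining.remove(attr)
    return selected

# Toy example: pretend each attribute contributes a fixed amount of "usefulness".
useful = {"a": 0.4, "b": 0.05, "c": 0.3, "d": 0.0}
print(forward_selection(list(useful), lambda attrs: sum(useful[x] for x in attrs)))
# ['a', 'c', 'b']  ('d' adds nothing, so it is never selected)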
Histograms
- A popular data reduction technique
- Divide data into buckets and store the average (or sum) for each bucket
(Figure: example histogram with buckets over values from 10,000 to 90,000.)
Clustering
- Partition the data set into clusters; then only the cluster representations need to be stored.
- Effective when the data is naturally clustered, but not when the data is "smeared".
- There are many choices of clustering definitions and clustering algorithms; we will discuss them later.
Sampling
- Simple random sampling may have poor performance in the presence of skew.
- Stratified sampling: approximate the percentage of each class (or subpopulation of interest) in the overall database; used in conjunction with skewed data (a small sketch follows).
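A minimal sketch of stratified sampling (illustrative; the class labels and sampling rate are made up):

# Sketch (illustrative): stratified sampling that keeps roughly the same
# class proportions as the full data set.
import random
from collections import defaultdict

def stratified_sample(records, get_class, rate, seed=0):
    # Sample `rate` of each class (stratum) independently.
    random.seed(seed)
    strata = defaultdict(list)
    for r in records:
        strata[get_class(r)].append(r)
    sample = []
    for cls, rows in strata.items():
        k = max(1, round(rate * len(rows)))   # keep rare classes represented
        sample.extend(random.sample(rows, k))
    return sample

# Skewed data: 95 "normal" records and 5 "fraud" records.
data = [{"id": i, "label": "normal"} for i in range(95)] + \
       [{"id": 95 + i, "label": "fraud"} for i in range(5)]
subset = stratified_sample(data, lambda r: r["label"], rate=0.2)
print(len(subset))   # 20, with the fraud class still represented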
(Figure: raw data vs. a cluster/stratified sample of the same data.)
Discretization
Three types of attributes:
- Nominal: values from an unordered set
- Ordinal: values from an ordered set
- Continuous: real numbers
Discretization: divide the range of a continuous attribute into intervals, because some data mining algorithms accept only categorical attributes.
Some techniques:
- Binning methods: equal-width, equal-frequency
- Entropy-based methods
Discretization and Concept Hierarchies
- Discretization: reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals. Interval labels can then be used to replace actual data values.
- Concept hierarchies: reduce the data by collecting and replacing low-level concepts (such as numeric values for the attribute age) with higher-level concepts (such as young, middle-aged, or senior).
Binning
Attribute values: 0, 4, 12, 16, 16, 18, 24, 26, 28
Equal-width binning (width 10; "-" denotes negative infinity, "+" positive infinity):
- Bin 1, [-, 10): 0, 4
- Bin 2, [10, 20): 12, 16, 16, 18
- Bin 3, [20, +): 24, 26, 28
Equal-frequency binning (three values per bin):
- Bin 1, [-, 14): 0, 4, 12
- Bin 2, [14, 21): 16, 16, 18
- Bin 3, [21, +): 24, 26, 28
(A code sketch of both schemes follows.)
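A small Python sketch of both schemes (my illustration, not from the slides). The equal-frequency cut points computed here fall on data values (16 and 24) rather than the midpoints 14 and 21 used above, but the resulting bin memberships are the same.

# Sketch (illustrative): equal-width vs. equal-frequency discretization of
# the values above into 3 bins, reporting which bin each value falls in.

values = [0, 4, 12, 16, 16, 18, 24, 26, 28]

def equal_width_edges(values, n_bins):
    # Cut points that split the value range into bins of equal width.
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    return [lo + width * i for i in range(1, n_bins)]

def equal_frequency_edges(sorted_values, n_bins):
    # Cut points so that each bin holds (roughly) the same number of values.
    size = len(sorted_values) // n_bins
    return [sorted_values[size * i] for i in range(1, n_bins)]

def assign_bins(values, edges):
    # Bin index 0..n_bins-1 for each value.
    return [sum(v >= e for e in edges) for v in values]

print(equal_width_edges(values, 3))       # two cut points, about 9.3 and 18.7
print(assign_bins(values, equal_width_edges(values, 3)))
print(equal_frequency_edges(values, 3))   # [16, 24]
print(assign_bins(values, equal_frequency_edges(values, 3)))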
Entropy-based (1)
Given attribute values labeled with classes P and N:
(0,P), (4,P), (12,P), (16,N), (16,N), (18,P), (24,N), (26,N), (28,N)
Intuitively, find the best split so that the bins are as pure as possible; formally, this is characterized by maximal information gain.
Let S denote the above 9 pairs, p = 4/9 the fraction of P pairs, and n = 5/9 the fraction of N pairs. Then Entropy(S) = -p log p - n log n.
A set with smaller entropy is relatively pure; the smallest possible value is 0. A set with larger entropy is mixed; the largest possible value is 1 (for two classes, with log base 2).
Entropy-based (2)
Let v be a possible split point. Then S is divided into two sets:
- S1: values <= v
- S2: values > v
Information of the split: I(S1, S2) = (|S1|/|S|) Entropy(S1) + (|S2|/|S|) Entropy(S2)
Information gain of the split: Gain(v, S) = Entropy(S) - I(S1, S2)
Goal: find the split with maximal information gain; since Entropy(S) is fixed, maximum gain means minimum I.
Possible splits are the midpoints between any two consecutive values.
Example: for v = 14, S1 contains only P pairs, so I(S1, S2) = 0 + (6/9) * Entropy(S2) = (6/9) * 0.65 = 0.433 and Gain(14, S) = Entropy(S) - 0.433.
The best split is found after examining all possible splits. (A code sketch follows.)
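A compact Python sketch of this search (illustrative, not the slides' code), using the nine (value, class) pairs above:

# Sketch (illustrative): find the binary split with maximal information gain.
import math

pairs = [(0, "P"), (4, "P"), (12, "P"), (16, "N"), (16, "N"),
         (18, "P"), (24, "N"), (26, "N"), (28, "N")]

def entropy(labels):
    total = len(labels)
    result = 0.0
    for cls in set(labels):
        p = labels.count(cls) / total
        result -= p * math.log2(p)
    return result

def best_split(pairs):
    # Try midpoints between consecutive distinct values; keep the split whose
    # weighted entropy I(S1, S2) is smallest, i.e., whose gain is largest.
    values = sorted({v for v, _ in pairs})
    labels = [c for _, c in pairs]
    base = entropy(labels)
    best = None
    for lo, hi in zip(values, values[1:]):
        v = (lo + hi) / 2
        s1 = [c for x, c in pairs if x <= v]
        s2 = [c for x, c in pairs if x > v]
        info = len(s1) / len(pairs) * entropy(s1) + len(s2) / len(pairs) * entropy(s2)
        gain = base - info
        if best is None or gain > best[0]:
            best = (gain, v)
    return best

print(best_split(pairs))   # the split at v = 14 gives the highest gain here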
Summary
Many preprocessing methods have been proposed, but data preparation remains an active area of research.