2020 Preprocessing
Data Mining: Concepts and Techniques (3rd ed.)
Chapter 3: Data Preprocessing
Chi-Square Calculation: An Example
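The worked example on this slide did not survive extraction, so below is a minimal sketch of how such a χ² (chi-square) calculation can be done; the 2×2 contingency counts are illustrative only, not the slide's original numbers.

```python
# Chi-square test of independence on an illustrative 2x2 contingency table.
# The counts below are made up for demonstration, not taken from the slide.
observed = [
    [250, 200],    # e.g., group 1: attribute B = yes / no
    [50, 1000],    # e.g., group 2: attribute B = yes / no
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected

# Degrees of freedom for an r x c table: (r - 1) * (c - 1)
dof = (len(observed) - 1) * (len(observed[0]) - 1)
print(f"chi-square = {chi2:.2f}, degrees of freedom = {dof}")
# A value far above the critical value for 1 degree of freedom (3.84 at the
# 0.05 level) suggests the two attributes are correlated rather than independent.
```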
Correlation Analysis (Numeric Data)
Correlation coefficient (Pearson's product-moment coefficient):

$$r_{A,B} = \frac{\sum_{i=1}^{n}(a_i - \bar{A})(b_i - \bar{B})}{(n-1)\,\sigma_A \sigma_B} = \frac{\sum_{i=1}^{n} a_i b_i - n\,\bar{A}\,\bar{B}}{(n-1)\,\sigma_A \sigma_B}$$

(Figure: scatter plots showing correlation values ranging from –1 to 1.)
Correlation (viewed as a linear relationship)
Correlation measures the linear relationship between objects
To compute correlation, we standardize data objects A and B, and then take their dot product:

$$a'_k = (a_k - \mathrm{mean}(A)) / \mathrm{std}(A), \qquad b'_k = (b_k - \mathrm{mean}(B)) / \mathrm{std}(B)$$
$$\mathrm{correlation}(A, B) = A' \cdot B'$$
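A minimal NumPy sketch of the computation just described: standardize both attributes and take their dot product (scaled by n − 1 so it equals the sample correlation coefficient). The data values are made up for illustration.

```python
import numpy as np

# Illustrative data for two numeric attributes A and B.
A = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
B = np.array([1.0, 3.0, 5.0, 9.0, 12.0])

n = len(A)

# Standardize each attribute: subtract the mean, divide by the standard deviation.
# ddof=1 gives the sample standard deviation, matching the (n - 1) in the formula.
A_std = (A - A.mean()) / A.std(ddof=1)
B_std = (B - B.mean()) / B.std(ddof=1)

# Correlation as the (scaled) dot product of the standardized attributes.
r = np.dot(A_std, B_std) / (n - 1)

print(f"r(A, B) = {r:.4f}")
print(f"np.corrcoef check = {np.corrcoef(A, B)[0, 1]:.4f}")  # should agree
```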
Covariance (Numeric Data)
Covariance is similar to correlation:

$$\mathrm{Cov}(A, B) = E\big((A - \bar{A})(B - \bar{B})\big) = \frac{\sum_{i=1}^{n}(a_i - \bar{A})(b_i - \bar{B})}{n}$$

Correlation coefficient:

$$r_{A,B} = \frac{\mathrm{Cov}(A, B)}{\sigma_A \sigma_B}$$

where n is the number of tuples, $\bar{A}$ and $\bar{B}$ are the respective mean or expected values of A and B, and $\sigma_A$ and $\sigma_B$ are the respective standard deviations of attributes A and B.
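A short sketch of the relationship stated above, i.e. the correlation coefficient is the covariance divided by the two standard deviations; the two small series are illustrative.

```python
import numpy as np

A = np.array([6.0, 5.0, 4.0, 3.0, 2.0])
B = np.array([20.0, 10.0, 14.0, 5.0, 5.0])

# Covariance: mean of the products of deviations from the respective means.
cov_AB = np.mean((A - A.mean()) * (B - B.mean()))   # population form (divide by n)

# Correlation coefficient: covariance divided by the two standard deviations.
r_AB = cov_AB / (A.std() * B.std())                  # population std, consistent with above

print(f"Cov(A, B) = {cov_AB:.2f}")
print(f"r(A, B)   = {r_AB:.4f}")
print(f"check     = {np.corrcoef(A, B)[0, 1]:.4f}")  # same value
```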
Data Reduction Strategies
Dimensionality reduction
  Wavelet transforms
  Principal Components Analysis (PCA)
  Feature subset selection, feature creation
Numerosity reduction (some simply call it: Data Reduction)
  Regression and Log-Linear Models
  Histograms, clustering, sampling
  Data cube aggregation
Data compression
Data Reduction 1: Dimensionality Reduction
Curse of dimensionality
  When dimensionality increases, data becomes increasingly sparse
  Density and distance between points, which are critical to clustering and outlier analysis, become less meaningful
  The number of possible combinations of subspaces grows exponentially
Dimensionality reduction
  Avoid the curse of dimensionality
  Help eliminate irrelevant features and reduce noise
  Reduce the time and space required in data mining
  Allow easier visualization
Dimensionality reduction techniques
  Wavelet transforms
  Principal Component Analysis (a sketch follows below)
  Supervised and nonlinear techniques (e.g., feature selection)
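A possible PCA sketch using only NumPy, assuming the usual recipe (center the data, take the SVD, keep the top-k components); the data set and k are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: 100 points in 5 dimensions with most variance in 2 directions.
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 5)) + 0.05 * rng.normal(size=(100, 5))

# 1. Center the data (PCA assumes zero-mean attributes).
X_centered = X - X.mean(axis=0)

# 2. SVD of the centered data; the rows of Vt are the principal component directions.
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

# 3. Keep the top-k components and project the data onto them.
k = 2
X_reduced = X_centered @ Vt[:k].T          # shape (100, k)

explained = (S ** 2) / (S ** 2).sum()
print("variance explained by first", k, "components:", explained[:k].sum().round(3))
print("reduced shape:", X_reduced.shape)
```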
Attribute Subset Selection
Another way to reduce dimensionality of data
Redundant attributes
  Duplicate much or all of the information contained in one or more other attributes
  E.g., purchase price of a product and the amount of sales tax paid
Irrelevant attributes
  Contain no information that is useful for the data mining task at hand
  E.g., students' ID is often irrelevant to the task of predicting students' GPA
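One simple, illustrative way to act on the redundant-attributes idea above is a correlation-based filter that drops any numeric attribute almost perfectly correlated with one already kept; the attribute names and data are made up, and this is not the full heuristic search covered in the textbook.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: 'tax' is redundant (a scaled copy of 'price'); 'student_id' is irrelevant.
price = rng.uniform(10, 100, size=200)
data = {
    "price": price,
    "tax": 0.08 * price,                         # redundant with price
    "quantity": rng.integers(1, 10, 200).astype(float),
    "student_id": np.arange(200, dtype=float),   # arbitrary identifier
}

kept = []
for name, values in data.items():
    # Drop the attribute if it is (nearly) perfectly correlated with one already kept.
    redundant = any(abs(np.corrcoef(values, data[k])[0, 1]) > 0.95 for k in kept)
    if not redundant:
        kept.append(name)

print("kept attributes:", kept)
# 'tax' is dropped as redundant; spotting irrelevant attributes such as 'student_id'
# needs task knowledge or a supervised relevance measure.
```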
Data Reduction 2: Numerosity Reduction
Reduce data volume by choosing alternative, smaller forms of data representation
Parametric methods (e.g., regression)
  Assume the data fits some model, estimate the model parameters, and store only the parameters (discarding the data, except possible outliers)
Non-parametric methods
  Do not assume models; major families: histograms, clustering, sampling, …
Parametric Data Reduction: Regression and Log-Linear Models
Linear regression
  Data modeled to fit a straight line
  Often uses the least-squares method to fit the line
Multiple regression
  Allows a response variable Y to be modeled as a linear function of a multidimensional feature vector
Log-linear models
  Approximate discrete multidimensional probability distributions
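A minimal sketch of parametric reduction with linear regression: fit a straight line by least squares and store only the two model parameters instead of the raw points (the data are synthetic).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: y is roughly a straight line in x plus noise.
x = np.linspace(0, 10, 1000)
y = 3.0 * x + 7.0 + rng.normal(scale=0.5, size=x.size)

# Least-squares fit of y = w1 * x + w0; only these two numbers need to be stored.
w1, w0 = np.polyfit(x, y, deg=1)

print(f"stored parameters: slope = {w1:.3f}, intercept = {w0:.3f}")

# Any value can later be approximated from the parameters alone.
x_query = 4.2
print(f"approximate y at x = {x_query}: {w1 * x_query + w0:.2f}")
```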
Histogram Analysis
Divide data into buckets and store the average (or sum) for each bucket
Partitioning rules:
  Equal-width: equal bucket range
  Equal-frequency (or equal-depth)
(Figure: example histogram of prices binned from 10,000 to 100,000.)
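A short sketch contrasting the two partitioning rules above on illustrative, skewed price data: equal-width buckets share the same range, while equal-frequency (equal-depth) buckets hold roughly the same number of values.

```python
import numpy as np

rng = np.random.default_rng(3)
prices = rng.gamma(shape=2.0, scale=15000.0, size=1000)   # skewed, illustrative prices

n_buckets = 5

# Equal-width: split the value range into buckets of equal size.
width_edges = np.linspace(prices.min(), prices.max(), n_buckets + 1)
width_counts, _ = np.histogram(prices, bins=width_edges)

# Equal-frequency (equal-depth): choose edges at quantiles so each bucket
# holds roughly the same number of values.
depth_edges = np.quantile(prices, np.linspace(0, 1, n_buckets + 1))
depth_counts, _ = np.histogram(prices, bins=depth_edges)

print("equal-width counts:    ", width_counts)   # very unequal for skewed data
print("equal-frequency counts:", depth_counts)   # roughly 200 per bucket
```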
Clustering
Partition the data set into clusters based on similarity, and store only the cluster representations (e.g., centroid and diameter)
Can be very effective if the data is clustered, but not if the data is “smeared”
Clusterings can be hierarchical and can be stored in multi-dimensional index tree structures
There are many choices of clustering definitions and clustering algorithms
Cluster analysis will be studied in depth in Chapter 10
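A compact, illustrative k-means sketch (plain NumPy) of this idea: keep only each cluster's centroid and an approximate diameter instead of all the points; the number of clusters and the data are made up.

```python
import numpy as np

rng = np.random.default_rng(4)

# Three well-separated blobs of 2-D points (illustrative).
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2)) for c in ([0, 0], [5, 5], [0, 5])])

k = 3
centroids = X[rng.choice(len(X), size=k, replace=False)]   # random initial centroids

for _ in range(20):                                        # Lloyd's iterations
    # Assign each point to its nearest centroid.
    labels = np.argmin(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
    # Recompute each centroid as the mean of its assigned points (keep old one if empty).
    centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                          for j in range(k)])

# Reduced representation: one centroid and one approximate diameter per cluster.
for j in range(k):
    members = X[labels == j]
    diameter = 2 * np.linalg.norm(members - centroids[j], axis=1).max()
    print(f"cluster {j}: centroid = {centroids[j].round(2)}, diameter ≈ {diameter:.2f}, size = {len(members)}")
```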
Sampling
Simple random sampling
  There is an equal probability of selecting any particular item
Sampling without replacement
  Once an object is selected, it is removed from the population
Sampling with replacement
  A selected object is not removed from the population
Stratified sampling
  Partition the data set, and draw samples from each partition proportionally (useful for skewed data)
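A small sketch of the sampling schemes listed above using NumPy's random generator; the population and the strata sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

data = np.arange(1000)                                   # illustrative population of 1,000 records
strata = np.repeat(["A", "B", "C"], [700, 200, 100])     # a skewed grouping attribute

n = 50

# Simple random sampling WITHOUT replacement: each record can be drawn at most once.
srswor = rng.choice(data, size=n, replace=False)

# Simple random sampling WITH replacement: a record may be drawn multiple times.
srswr = rng.choice(data, size=n, replace=True)

# Stratified sampling: draw from each stratum in proportion to its size,
# so that small (skewed) groups are still represented.
stratified = np.concatenate([
    rng.choice(data[strata == s], size=max(1, int(n * (strata == s).mean())), replace=False)
    for s in np.unique(strata)
])

print("without replacement:", len(np.unique(srswor)), "distinct records")
print("with replacement:   ", len(np.unique(srswr)), "distinct records (may repeat)")
print("stratified sizes:   ", {str(s): int(n * (strata == s).mean()) for s in np.unique(strata)})
```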
Data Compression
String compression
  There are extensive theories and well-tuned algorithms
  Typically lossless, but only limited manipulation is possible without expansion
Audio/video compression
  Typically lossy compression, with progressive refinement
  Sometimes small fragments of signal can be reconstructed without reconstructing the whole
(Figure: original data vs. a lossy, approximated version.)
Chapter 3: Data Preprocessing
Normalization
Z-score normalization: $v' = (v - \mu_A) / \sigma_A$
  Ex. Let μ = 54,000 and σ = 16,000. Then v = 73,600 is normalized to (73,600 − 54,000) / 16,000 = 1.225
Normalization by decimal scaling: $v' = v / 10^{j}$, where j is the smallest integer such that Max(|v'|) < 1
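A small sketch of the two normalization schemes above, reusing the slide's z-score example values (μ = 54,000, σ = 16,000, v = 73,600) plus an illustrative vector for decimal scaling.

```python
import numpy as np

# Z-score normalization: v' = (v - mean) / std.
mu, sigma = 54_000.0, 16_000.0
v = 73_600.0
print("z-score:", (v - mu) / sigma)            # 1.225, as in the example above

# Normalization by decimal scaling: v' = v / 10**j, where j is the smallest
# integer such that max(|v'|) < 1.
values = np.array([-975.0, 120.0, 340.0, 986.0])      # illustrative attribute values
max_abs = np.abs(values).max()
j = max(0, int(np.floor(np.log10(max_abs))) + 1)      # smallest j with max(|v / 10**j|) < 1
scaled = values / 10 ** j
print("j =", j, "scaled:", scaled)
```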
Discretization
Three types of attributes
  Nominal—values from an unordered set, e.g., color, profession
  Ordinal—values from an ordered set, e.g., military or academic rank
  Numeric—integer or real numbers
Discretization: divide the range of a continuous attribute into intervals
  Interval labels can then be used to replace actual data values
  Reduce data size by discretization
  Supervised vs. unsupervised
  Split (top-down) vs. merge (bottom-up)
  Discretization can be performed recursively on an attribute
  Prepare for further analysis, e.g., classification
Data Discretization Methods
Typical methods (all can be applied recursively):
  Binning: top-down split, unsupervised
  Histogram analysis: top-down split, unsupervised
  Clustering analysis: unsupervised, top-down split or bottom-up merge
  Decision-tree analysis: supervised, top-down split
  Correlation (e.g., χ²) analysis: unsupervised, bottom-up merge
Simple Discretization: Binning
Binning method:
  First sort the data and partition it into (equi-depth) bins
  Then one can smooth by bin means, smooth by bin medians, smooth by bin boundaries, etc.
Clustering
  Detect and remove outliers
Combined computer and human inspection
  Detect suspicious values and check by a human (e.g., deal with possible outliers)
Binning Example
The following is sorted data for price (in dollars):
13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25, 25, 33, 35, 35, 35, 35, 36, 40, 45, 46, 52, 70
Partition the given data into 4 bins using the equi-depth binning method and perform smoothing according to the following methods:
Smoothing by bin means
Smoothing by bin medians
Smoothing by bin boundaries
Another data set to try: 11, 13, 13, 15, 15, 16, 19, 20, 20, 20, 21, 21, 22, 23, 24, 30, 40, 45, 45, 45, 71, 72, 73, 75
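A possible worked sketch for the first data set above: 24 sorted prices split into 4 equi-depth bins of 6 values each, then smoothed by bin means, medians, and boundaries (ties in boundary smoothing go to the lower boundary here).

```python
import statistics

# Sorted price data from the exercise (24 values -> 4 equi-depth bins of 6).
prices = [13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25,
          25, 33, 35, 35, 35, 35, 36, 40, 45, 46, 52, 70]

n_bins = 4
depth = len(prices) // n_bins
bins = [prices[i * depth:(i + 1) * depth] for i in range(n_bins)]

for i, b in enumerate(bins, start=1):
    mean = sum(b) / len(b)
    median = statistics.median(b)
    low, high = b[0], b[-1]                       # bin boundaries (data are sorted)
    # Smoothing by boundaries: replace each value with the closer boundary (ties -> lower).
    by_boundaries = [low if v - low <= high - v else high for v in b]
    print(f"bin {i}: {b}")
    print(f"  by means:      {[round(mean, 2)] * len(b)}")
    print(f"  by medians:    {[median] * len(b)}")
    print(f"  by boundaries: {by_boundaries}")
```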
Classification & Correlation Analysis
Classification (e.g., decision tree analysis)
  Supervised: given class labels, e.g., cancerous vs. benign
  Uses entropy to determine the split point (discretization point)
  Top-down, recursive split
  Details to be covered in Chapter 7
Correlation analysis (e.g., ChiMerge: χ²-based discretization)
  Supervised: uses class information
  Bottom-up merge: find the best neighboring intervals (those having similar distributions of classes, i.e., low χ² values) to merge, as sketched below
  Merging is performed recursively, until a predefined stopping condition is met
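A simplified sketch of the ChiMerge idea described above: start with one interval per distinct value, repeatedly compute χ² for adjacent interval pairs, and merge the pair with the lowest χ² until a stopping condition is reached. The data, class labels, and the stopping rule (a fixed number of intervals rather than a χ² threshold) are illustrative simplifications.

```python
from collections import Counter

def chi2_adjacent(counts_a, counts_b, classes):
    """Chi-square statistic for two adjacent intervals' class-count dictionaries."""
    total_a, total_b = sum(counts_a.values()), sum(counts_b.values())
    total = total_a + total_b
    chi2 = 0.0
    for c in classes:
        col = counts_a.get(c, 0) + counts_b.get(c, 0)
        for row_counts, row_total in ((counts_a, total_a), (counts_b, total_b)):
            expected = row_total * col / total
            if expected > 0:
                chi2 += (row_counts.get(c, 0) - expected) ** 2 / expected
    return chi2

# Illustrative supervised data: (attribute value, class label).
data = [(1, "N"), (3, "N"), (7, "Y"), (8, "Y"), (9, "Y"),
        (11, "Y"), (23, "N"), (37, "N"), (39, "Y"), (45, "Y"),
        (46, "Y"), (59, "N")]
classes = sorted({label for _, label in data})

# Start with one interval per distinct value, each holding its class counts.
data.sort()
intervals = []            # list of ([low, high], Counter of class labels)
for value, label in data:
    if intervals and intervals[-1][0][1] == value:
        intervals[-1][1][label] += 1
    else:
        intervals.append(([value, value], Counter({label: 1})))

max_intervals = 4         # illustrative stopping condition
while len(intervals) > max_intervals:
    # Find the adjacent pair with the lowest chi-square (most similar class distributions).
    scores = [chi2_adjacent(intervals[i][1], intervals[i + 1][1], classes)
              for i in range(len(intervals) - 1)]
    i = scores.index(min(scores))
    (lo_a, _), counts_a = intervals[i]
    (_, hi_b), counts_b = intervals[i + 1]
    intervals[i:i + 2] = [([lo_a, hi_b], counts_a + counts_b)]   # merge the pair

for (lo, hi), counts in intervals:
    print(f"[{lo}, {hi}]: {dict(counts)}")
```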
Summary
Data quality: accuracy, completeness, consistency, timeliness, believability, interpretability
Data cleaning: e.g., missing/noisy values, outliers
Data integration from multiple sources:
  Entity identification problem
  Remove redundancies
  Detect inconsistencies
Data reduction
  Dimensionality reduction
  Numerosity reduction
  Data compression
A (detailed) data preprocessing example
• Suppose we want to mine the comments/reviews
of people on Yelp and Foursquare.
Data Collection → Data Preprocessing → Data Mining → Result Post-processing
• I'm from California and I must say, Shake Shack is better than IN-N-OUT, all day,
err'day.
• Would I pay $15+ for a burger here? No. But for the price point they are asking for,
this is a definite bang for your buck (though for some, the opportunity cost of
waiting in line might outweigh the cost savings) Thankfully, I came in before the
lunch swarm descended and I ordered a shake shack (the special burger with the patty +
fried cheese & portabella topping) and a coffee milk shake. The beef patty was
very juicy and snugly packed within a soft potato roll. On the downside, I could do
without the fried portabella-thingy, as the crispy taste conflicted with the juicy,
tender burger. How does shake shack compare with in-and-out or 5-guys? I say a very
close tie, and I think it comes down to personal affliations. On the shake side, true
to its name, the shake was well churned and very thick and luscious. The coffee flavor
added a tangy taste and complemented the vanilla shake well. Situated in an open
space in NYC, the open air sitting allows you to munch on your burger while watching
people zoom by around the city. It's an oddly calming experience, or perhaps it was
the food coma I was slowly falling into. Great place with food at a great price.
First cut
• Do simple processing to “normalize” the data (remove
punctuation, make into lower case, clear white spaces, other?)
• Break into words, keep the most popular words
the 27514 the 16710 the 16010 the 14241
and 14508 and 9139 and 9504 and 8237
i 13088 a 8583 i 7966 a 8182
a 12152 i 8415 to 6524 i 7001
to 10672 to 7003 a 6370 to 6727
of 8702 in 5363 it 5169 of 4874
ramen 8518 it 4606 of 5159 you 4515
was 8274 of 4365 is 4519 it 4308
is 6835 is 4340 sauce 4020 is 4016
it 6802 burger 432 in 3951 was 3791
in 6402 was 4070 this 3519 pastrami 3748
for 6145 for 3441 was 3453 in 3508
but 5254 but 3284 for 3327 for 3424
that 4540 shack 3278 you 3220 sandwich 2928
you 4366 shake 3172 that 2769 that 2728
with 4181 that 3005 but 2590 but 2715
pork 4115 you 2985 food 2497 on 2247
my 3841 my 2514 on 2350 this 2099
this 3487 line 2389 my 2311 my 2064
wait 3184 this 2242 cart 2236 with 2040
not 3016 fries 2240 chicken 2220 not 1655
we 2984 on 2204 with 2195 your 1622
at 2980 are 2142 rice 2049 so 1610
on 2922 with 2095 so 1825 have 1585
The most frequent words are stop words.
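A small sketch of this "first cut" step: lower-case the text, strip punctuation, split into words, and count the most frequent ones. The `reviews` list is illustrative; in practice it would hold the full review texts, such as the sample review quoted earlier.

```python
import re
from collections import Counter

# Illustrative input: each element is the raw text of one review.
reviews = [
    "I'm from California and I must say, Shake Shack is better than IN-N-OUT, all day, err'day.",
    "Would I pay $15+ for a burger here? No. But for the price point they are asking for...",
]

counts = Counter()
for text in reviews:
    text = text.lower()                       # normalize case
    text = re.sub(r"[^a-z\s]", " ", text)     # remove punctuation/digits (one simple choice)
    counts.update(text.split())               # split on whitespace and count words

for word, freq in counts.most_common(10):
    print(f"{word:>10} {freq}")
```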
Second cut
• Remove stop words
• Stop-word lists can be found online.
a, about, above, after, again, against, all, am, an, and, any, are, aren't, as, at, be, because, been, before, being, below, between, both, but, by, can't, cannot, could, couldn't, did, didn't, do, does, doesn't, doing, don't, down, during, each, few, for, from, further, had, hadn't, has, hasn't, have, haven't, having, he, he'd, he'll, he's, her, here, here's, hers, herself, him, himself, his, how, how's, i, i'd, i'll, i'm, i've, if, in, into, is, isn't, it, it's, its, itself, let's, me, more, most, mustn't, my, myself, no, nor, not, of, off, on, once, only, or, other, ought, our, ours, ourselves, out, over, own, same, shan't, she, she'd, she'll, she's, should, shouldn't, so, some, such, than, that, that's, the, their, theirs, them, themselves, then, there, there's, these, they, they'd, they'll, they're, they've, this, those, through, to, too, under, until, up, very, was, wasn't, we, we'd, we'll, we're, we've, were, weren't, what, what's, when, when's, where, where's, which, while, who, who's, whom, why, why's, with, won't, would, wouldn't, you, you'd, you'll, you're, you've, your, yours, yourself, yourselves
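Continuing the sketch, the second cut simply filters the word counts against a stop-word set; only a handful of stop words and counts are shown here for brevity.

```python
from collections import Counter

# A few entries from the stop-word list above; a real run would load the full list.
stop_words = {"the", "and", "i", "a", "to", "of", "it", "is", "was", "in",
              "for", "but", "that", "you", "with", "my", "this", "on", "not", "so"}

counts = Counter({"the": 27514, "and": 14508, "ramen": 8518, "was": 8274,
                  "pork": 4115, "wait": 3184, "good": 2867})   # illustrative counts

filtered = Counter({w: c for w, c in counts.items() if w not in stop_words})
for word, freq in filtered.most_common():
    print(f"{word:>8} {freq}")
```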
After removing stop words, the most frequent words are:
ramen 8572 burger 4340 sauce 4023 pastrami 3782
pork 4152 shack 3291 food 2507 sandwich 2934
wait 3195 shake 3221 cart 2239 place 1480
good 2867 line 2397 chicken 2238 good 1341
place 2361 fries 2260 rice 2052 get 1251
noodles 2279 good 1920 hot 1835 katz's 1223
ippudo 2261 burgers 1643 white 1782 just 1214
buns 2251 wait 1508 line 1755 like 1207
broth 2041 just 1412 good 1629 meat 1168
like 1902 cheese 1307 lamb 1422 one 1071
just 1896 like 1204 halal 1343 deli 984
get 1641 food 1175 just 1338 best 965
time 1613 get 1162 get 1332 go 961
one 1460 place 1159 one 1222 ticket 955
really 1437 one 1118 like 1096 food 896
go 1366 long 1013 place 1052 sandwiches 813
food 1296 go 995 go 965 can 812
bowl 1272 time 951 can 878 beef 768
can 1256 park 887 night 832 order 720
great 1172 can 860 time 794 pickles 699
best 1167 best 849 long 792 time 662
people 790
TF-IDF (term frequency / inverse document frequency)
• The words that are best for describing a document are the ones that are important for the document, but also unique to the document.
• TF rewards words that appear often within a document; IDF penalizes words that appear in many documents.
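A minimal TF-IDF sketch of that idea: term frequency rewards words that are frequent within a document, while inverse document frequency penalizes words that occur in many documents; the tiny tokenized corpus is illustrative.

```python
import math
from collections import Counter

# Illustrative tokenized corpus: one list of words per document.
docs = [
    ["shake", "shack", "burger", "burger", "line", "good"],
    ["ramen", "broth", "noodles", "good", "wait"],
    ["pastrami", "sandwich", "deli", "good", "line"],
]

n_docs = len(docs)
# Document frequency: in how many documents does each word appear?
df = Counter(word for doc in docs for word in set(doc))

for i, doc in enumerate(docs):
    tf = Counter(doc)
    # tf-idf(w, d) = tf(w, d) * log(N / df(w)); words unique to a document score highest.
    tfidf = {w: tf[w] * math.log(n_docs / df[w]) for w in tf}
    top = sorted(tfidf.items(), key=lambda kv: kv[1], reverse=True)[:3]
    print(f"doc {i}: {[(w, round(s, 2)) for w, s in top]}")
```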
• Too frequent data (stop words), too infrequent data (possible errors), erroneous data, missing data, outliers
• How should we weight the different pieces of data?