2020 Preprocessing

The document discusses data preprocessing in data mining, emphasizing the importance of data quality and the major tasks involved, such as data cleaning, integration, reduction, and transformation. It highlights common data quality issues like noise, missing values, and duplicates, and outlines methods to address these problems. Additionally, it covers techniques for data reduction and dimensionality reduction to improve data analysis efficiency.


Data Mining: Concepts and Techniques (3rd ed.)

Jiawei Han, Micheline Kamber, and Jian Pei


University of Illinois at Urbana-Champaign &
Simon Fraser University
©2011 Han, Kamber & Pei. All rights reserved.
1
Data Preprocessing

 Data Preprocessing: An Overview


 Data Quality
 Major Tasks in Data Preprocessing
 Data Cleaning
 Data Integration
 Data Reduction
 Data Transformation and Data Discretization
 Summary
2
Data Quality
• Examples of data quality problems:
  • Noise and outliers
  • Missing values
  • Duplicate data

  Tid  Refund  Marital Status  Taxable Income  Cheat
  1    Yes     Single          125K            No
  2    No      Married         100K            No
  3    No      Single          70K             No
  4    Yes     Married         120K            No
  5    No      Divorced        10000K          Yes    <- a mistake or a millionaire?
  6    No      NULL            60K             No     <- missing value
  7    Yes     Divorced        220K            NULL   <- missing value
  8    No      Single          85K             Yes
  9    No      Married         90K             No     <- inconsistent duplicate entries
  9    No      Single          90K             No     <- inconsistent duplicate entries
Data Quality: Why Preprocess the Data?
• Measures for data quality: a multidimensional view
  • Accuracy: correct or wrong, accurate or not
  • Completeness: not recorded, unavailable, …
  • Consistency: some modified but some not, dangling, …
  • Timeliness: updated in a timely way?
  • Believability: how far the data are trusted to be correct
  • Interpretability: how easily the data can be understood
4
Major Tasks in Data Preprocessing
 Data cleaning
 Fill in missing values, smooth noisy data, identify or
remove outliers, and resolve inconsistencies
 Data integration
 Integration of multiple databases, data cubes, or files
 Data reduction
 Dimensionality reduction
 Numerosity reduction
 Data compression
 Data transformation and data discretization
 Normalization
 Concept hierarchy generation
5
Forms of data preprocessing
Chapter 3: Data Preprocessing

 Data Preprocessing: An Overview


 Data Quality
 Major Tasks in Data Preprocessing
 Data Cleaning
 Data Integration
 Data Reduction
 Data Transformation and Data Discretization
 Summary
7
Data Cleaning
• Data in the real world is dirty: lots of potentially incorrect data, e.g., faulty instruments, human or computer error, transmission error
  • incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
    • e.g., Occupation = “ ” (missing data)
  • noisy: containing noise, errors, or outliers
    • e.g., Salary = “−10” (an error)
  • inconsistent: containing discrepancies in codes or names, e.g.,
    • Age = “42”, Birthday = “03/07/2010”
    • rating was “1, 2, 3”, now rating is “A, B, C”
    • discrepancies between duplicate records
  • intentional (e.g., disguised missing data)
    • Jan. 1 as everyone’s birthday?
8
Incomplete (Missing) Data
• Data is not always available
  • E.g., many tuples have no recorded value for several attributes, such as customer income in sales data
• Missing data may be due to
  • equipment malfunction
  • data deleted because it was inconsistent with other recorded data
  • data not entered due to misunderstanding
  • certain data not considered important at the time of entry
  • history or changes of the data not registered
• Missing data may need to be inferred
9
How to Handle Missing Data?
• Ignore the tuple: usually done when the class label is missing (when doing classification); not effective when the % of missing values per attribute varies considerably
• Fill in the missing value manually: tedious + infeasible?
• Fill it in automatically with
  • a global constant, e.g., “unknown” (a new class?!)
  • the attribute mean
  • the attribute mean for all samples belonging to the same class: smarter
  • the most probable value: inference-based, e.g., Bayesian formula or decision tree
10
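
A minimal sketch of these options with pandas (the DataFrame `df`, its `cls` label column, and the `income` attribute below are illustrative, not from the slides):

```python
import pandas as pd
import numpy as np

# Toy data with missing income values (illustrative only)
df = pd.DataFrame({
    "cls":    ["A", "A", "B", "B", "B"],
    "income": [50_000, np.nan, 70_000, 80_000, np.nan],
})

# 1. Ignore the tuple (drop rows with any missing value)
dropped = df.dropna()

# 2. Fill with a global constant
const_filled = df.fillna({"income": -1})

# 3. Fill with the attribute mean
mean_filled = df.assign(income=df["income"].fillna(df["income"].mean()))

# 4. Fill with the attribute mean of samples in the same class (smarter)
class_mean_filled = df.assign(
    income=df.groupby("cls")["income"].transform(lambda s: s.fillna(s.mean()))
)

print(class_mean_filled)
```
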
Noisy Data
 Noise: random error or variance in a measured
variable
 Incorrect attribute values may be due to

faulty data collection instruments

data entry problems

data transmission problems

technology limitation

inconsistency in naming convention
 Other data problems which require data cleaning

duplicate records

incomplete data

inconsistent data
11
How to Handle Noisy Data?
 Binning

first sort data and partition into (equal-frequency)
bins

then one can smooth by bin means, smooth by bin
median, smooth by bin boundaries, etc.
 Regression

smooth by fitting the data into regression functions
 Clustering

detect and remove outliers
 Combined computer and human inspection

detect suspicious values and check by human (e.g.,
deal with possible outliers)

12
Chapter 3: Data Preprocessing

 Data Preprocessing: An Overview


 Data Quality
 Major Tasks in Data Preprocessing
 Data Cleaning
 Data Integration
 Data Reduction
 Data Transformation and Data Discretization
 Summary
13
Data Integration
 Data integration:
 Combines data from multiple sources into a coherent store
 Schema integration: e.g., A.cust-id ≡ B.cust-#
 Integrate metadata from different sources
 Entity identification problem:
 Identify real world entities from multiple data sources, e.g.,
Bill Clinton = William Clinton
 Detecting and resolving data value conflicts
 For the same real world entity, attribute values from
different sources are different
 Possible reasons: different representations, different
scales, e.g., metric vs. British units
14
Handling Redundancy in Data
Integration

 Redundant data occur often when there is integration


of multiple databases

Object identification: The same attribute or object
may have different names in different databases

Derivable data: One attribute may be a “derived”
attribute in another table, e.g., annual revenue
 Redundant attributes may be able to be detected by
correlation analysis and covariance analysis
 Careful integration of the data from multiple sources
may help reduce/avoid redundancies and
inconsistencies and improve mining speed and
quality
15
Correlation Analysis (Nominal Data)
 Χ2 (chi-square) test
  χ² = Σ (Observed − Expected)² / Expected
 The larger the Χ2 value, the more likely the
variables are related
 The cells that contribute the most to the Χ2 value
are those whose actual count is very different from
the expected count
 Correlation does not imply causality
 # of hospitals and # of car-theft in a city are correlated
 Both are causally linked to the third variable: population

16
Chi-Square Calculation: An Example

                            Play chess   Not play chess   Sum (row)
  Like science fiction       250 (90)      200 (360)         450
  Not like science fiction    50 (210)    1000 (840)        1050
  Sum (col.)                  300          1200             1500

• χ² (chi-square) calculation (numbers in parentheses are the expected counts, calculated from the data distribution in the two categories):

  χ² = (250 − 90)²/90 + (50 − 210)²/210 + (200 − 360)²/360 + (1000 − 840)²/840 = 507.93

• It shows that like_science_fiction and play_chess are correlated in the group
17
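
The slide’s calculation can be checked with SciPy; a sketch using `scipy.stats.chi2_contingency` (Yates’ continuity correction is disabled so the result matches the hand computation):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Contingency table from the slide: rows = like sci-fi / not, cols = play chess / not
observed = np.array([[250,  200],
                     [ 50, 1000]])

chi2, p_value, dof, expected = chi2_contingency(observed, correction=False)

print(expected)                      # [[ 90. 360.] [210. 840.]] -- the counts in parentheses
print(round(chi2, 2), dof, p_value)  # 507.93, 1 degree of freedom, p ~ 0
```
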
Correlation Analysis (Numeric Data)

• Correlation coefficient (also called Pearson’s product-moment coefficient):

  r(A,B) = Σᵢ (aᵢ − Ā)(bᵢ − B̄) / ((n − 1) σA σB) = (Σᵢ aᵢbᵢ − n Ā B̄) / ((n − 1) σA σB)

  where n is the number of tuples, Ā and B̄ are the respective means of A and B, σA and σB are the respective standard deviations of A and B, and Σ aᵢbᵢ is the sum of the AB cross-products.
• If r(A,B) > 0, A and B are positively correlated (A’s values increase as B’s do); the higher the value, the stronger the correlation.
• r(A,B) = 0: uncorrelated (no linear relationship); r(A,B) < 0: negatively correlated
18
Visually Evaluating Correlation

Scatter plots
showing the
similarity from
–1 to 1.

19
Correlation (viewed as linear
relationship)
 Correlation measures the linear relationship
between objects
 To compute correlation, we standardize
data objects, A and B, and then take their
dot product
a 'k (ak  mean( A)) / std ( A)

b'k (bk  mean( B )) / std ( B )

correlation( A, B)  A' B'

20
Covariance (Numeric Data)
• Covariance is similar to correlation:

  Cov(A, B) = E[(A − Ā)(B − B̄)] = Σᵢ (aᵢ − Ā)(bᵢ − B̄) / n

  Correlation coefficient:  r(A,B) = Cov(A, B) / (σA σB)

  where n is the number of tuples, Ā and B̄ are the respective means (expected values) of A and B, and σA and σB are the respective standard deviations of A and B.

• Positive covariance: if Cov(A,B) > 0, then A and B both tend to be larger than their expected values.
• Negative covariance: if Cov(A,B) < 0, then if A is larger than its expected value, B is likely to be smaller than its expected value.
• Independence: if A and B are independent, Cov(A,B) = 0, but the converse is not true:
  • Some pairs of random variables may have a covariance of 0 but are not independent. Only under additional assumptions (e.g., the data follow multivariate normal distributions) does a covariance of 0 imply independence.
21
Co-Variance: An Example

• It can be simplified in computation as Cov(A, B) = E(A·B) − Ā·B̄
• Suppose two stocks A and B have the following values in one week: (2, 5), (3, 8), (5, 10), (4, 11), (6, 14).
• Question: if the stocks are affected by the same industry trends, will their prices rise or fall together?
  • E(A) = (2 + 3 + 5 + 4 + 6) / 5 = 20/5 = 4
  • E(B) = (5 + 8 + 10 + 11 + 14) / 5 = 48/5 = 9.6
  • Cov(A, B) = (2×5 + 3×8 + 5×10 + 4×11 + 6×14)/5 − 4 × 9.6 = 4
 Thus, A and B rise together since Cov(A, B) > 0.
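
A quick NumPy check of the stock example (note that `np.cov` defaults to the sample covariance with an n − 1 denominator, so `bias=True` is passed to match the E(A·B) − Ā·B̄ formula above):

```python
import numpy as np

A = np.array([2, 3, 5, 4, 6])     # stock A prices
B = np.array([5, 8, 10, 11, 14])  # stock B prices

# Population covariance: E(AB) - E(A)E(B)
cov_ab = np.mean(A * B) - A.mean() * B.mean()
print(cov_ab)                          # 4.0 -> positive, so A and B tend to rise together

# Same value from NumPy's covariance matrix (bias=True -> divide by n, not n-1)
print(np.cov(A, B, bias=True)[0, 1])   # 4.0

# Correlation coefficient (the scale-free version of covariance)
print(np.corrcoef(A, B)[0, 1])         # ~0.94: strong positive linear relationship
```
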
Chapter 3: Data Preprocessing

 Data Preprocessing: An Overview


 Data Quality
 Major Tasks in Data Preprocessing
 Data Cleaning
 Data Integration
 Data Reduction
 Data Transformation and Data Discretization
 Summary
23
Data Reduction Strategies
 Data reduction: Obtain a reduced representation of the data
set that is much smaller in volume but yet produces the same
(or almost the same) analytical results
 Why data reduction? — A database/data warehouse may store
terabytes of data. Complex data analysis may take a very
long time to run on the complete data set.
 Data reduction strategies
 Dimensionality reduction, e.g., remove unimportant

attributes

Wavelet transforms

Principal Components Analysis (PCA)

Feature subset selection, feature creation
 Numerosity reduction (some simply call it: Data Reduction)


Regression and Log-Linear Models

Histograms, clustering, sampling

Data cube aggregation
 Data compression
24
Data Reduction 1: Dimensionality
Reduction
 Curse of dimensionality
 When dimensionality increases, data becomes increasingly sparse
 Density and distance between points, which is critical to
clustering, outlier analysis, becomes less meaningful
 The possible combinations of subspaces will grow exponentially
 Dimensionality reduction
 Avoid the curse of dimensionality
 Help eliminate irrelevant features and reduce noise
 Reduce time and space required in data mining
 Allow easier visualization
 Dimensionality reduction techniques
 Wavelet transforms
 Principal Component Analysis
 Supervised and nonlinear techniques (e.g., feature selection)

25
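
A minimal PCA sketch with scikit-learn on synthetic data (features are standardized first, and enough components are kept to explain 95% of the variance; the data and the threshold are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                              # 200 tuples, 10 attributes
X[:, 1] = X[:, 0] * 2 + rng.normal(scale=0.1, size=200)     # make one attribute nearly redundant

X_std = StandardScaler().fit_transform(X)

pca = PCA(n_components=0.95)          # keep enough components for 95% of the variance
X_reduced = pca.fit_transform(X_std)

print(X.shape, "->", X_reduced.shape)
print(pca.explained_variance_ratio_)  # variance captured by each retained component
```
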
Attribute Subset Selection
 Another way to reduce dimensionality of data
 Redundant attributes
 Duplicate much or all of the information
contained in one or more other attributes
 E.g., purchase price of a product and the
amount of sales tax paid
 Irrelevant attributes
 Contain no information that is useful for the
data mining task at hand
 E.g., students' ID is often irrelevant to the task
of predicting students' GPA

26
Data Reduction 2: Numerosity
Reduction
 Reduce data volume by choosing alternative,
smaller forms of data representation
 Parametric methods (e.g., regression)
 Assume the data fits some model, estimate

model parameters, store only the parameters,


and discard the data (except possible outliers)
 Ex.: Log-linear models: obtain the value at a point in m-D space as a product over appropriate marginal subspaces
 Non-parametric methods
 Do not assume models

 Major families: histograms, clustering,

sampling, …
27
Parametric Data Reduction:
Regression and Log-Linear
Models
 Linear regression
 Data modeled to fit a straight line

 Often uses the least-square method to fit the

line
 Multiple regression
 Allows a response variable Y to be modeled as

a linear function of multidimensional feature


vector
 Log-linear model
 Approximates discrete multidimensional

probability distributions

28
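
A sketch of the parametric idea with a least-squares line fit in NumPy: 100 synthetic (x, y) pairs are replaced by just two stored parameters (slope and intercept), from which approximate values can be regenerated.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.arange(100, dtype=float)
y = 3.0 * x + 7.0 + rng.normal(scale=5.0, size=100)   # noisy linear data

# Least-squares fit: store just two numbers instead of 100 (x, y) pairs
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)            # roughly 3 and 7

# Reconstruct (approximate) values from the stored parameters
y_hat = slope * x + intercept
print(np.max(np.abs(y - y_hat)))   # residuals are on the order of the injected noise
```
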
Histogram Analysis
• Divide data into buckets and store the average (or sum) for each bucket
• Partitioning rules:
  • Equal-width: equal bucket range
  • Equal-frequency (or equal-depth): equal number of values per bucket

  [Histogram figure: price values from 10,000 to 100,000 on the x-axis, bucket counts (0–40) on the y-axis]
29
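
An equal-width bucketing sketch with NumPy (synthetic prices; only the bucket edges and per-bucket counts/means would be stored as the reduced representation):

```python
import numpy as np

rng = np.random.default_rng(1)
prices = rng.uniform(10_000, 100_000, size=1_000)   # synthetic price attribute

# Equal-width histogram: 9 buckets of width 10,000
counts, edges = np.histogram(prices, bins=9, range=(10_000, 100_000))

# Per-bucket means (kept instead of the raw values)
bucket_idx = np.digitize(prices, edges[1:-1])
bucket_means = [prices[bucket_idx == i].mean() for i in range(9)]

for lo, hi, c, m in zip(edges[:-1], edges[1:], counts, bucket_means):
    print(f"[{lo:,.0f}, {hi:,.0f}): count={c}, mean={m:,.0f}")
```
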
Clustering
 Partition data set into clusters based on similarity,
and store cluster representation (e.g., centroid
and diameter) only
 Can be very effective if data is clustered but not if
data is “smeared”
 Can have hierarchical clustering and be stored in
multi-dimensional index tree structures
 There are many choices of clustering definitions
and clustering algorithms
 Cluster analysis will be studied in depth in
Chapter 10
30
Sampling

 Sampling: obtaining a small sample s to represent the


whole data set N
 Allow a mining algorithm to run in complexity that is
potentially sub-linear to the size of the data
 Key principle: Choose a representative subset of the
data

Simple random sampling may have very poor
performance in the presence of skew

Develop adaptive sampling methods, e.g., stratified
sampling:
 Note: Sampling may not reduce database I/Os (page at
a time)
31
Types of Sampling
 Simple random sampling
 There is an equal probability of selecting any

particular item
 Sampling without replacement
 Once an object is selected, it is removed from

the population
 Sampling with replacement
 A selected object is not removed from the

population
 Stratified sampling:
 Partition the data set, and draw samples from

each partition (proportionally, i.e., approximately


the same percentage of the data)
 Used in conjunction with skewed data
32
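
A sketch of simple random vs. stratified sampling with pandas (the `segment`/`spend` columns and the skewed proportions are illustrative):

```python
import pandas as pd
import numpy as np

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "segment": rng.choice(["gold", "silver", "bronze"], size=10_000, p=[0.02, 0.18, 0.80]),
    "spend":   rng.exponential(scale=100, size=10_000),
})

# Simple random sample without replacement: rare strata may be missed or misrepresented
srs = df.sample(n=500, replace=False, random_state=0)

# Sample with replacement: a selected row stays in the population
srs_wr = df.sample(n=500, replace=True, random_state=0)

# Stratified sample: ~5% from each segment, preserving the class proportions
stratified = df.groupby("segment", group_keys=False).sample(frac=0.05, random_state=0)

print(df["segment"].value_counts(normalize=True))
print(stratified["segment"].value_counts(normalize=True))   # close to the original proportions
```
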
Data Cube Aggregation

 The lowest level of a data cube (base cuboid)



The aggregated data for an individual entity of
interest

E.g., a customer in a phone calling data warehouse
 Multiple levels of aggregation in data cubes

Further reduce the size of data to deal with
 Reference appropriate levels

Use the smallest representation which is enough to
solve the task
 Queries regarding aggregated information should be
answered using data cube, when possible
33
Data Reduction 3: Data
Compression
 String compression
 There are extensive theories and well-tuned

algorithms
 Typically lossless, but only limited manipulation is

possible without expansion


 Audio/video compression
 Typically lossy compression, with progressive

refinement
 Sometimes small fragments of signal can be

reconstructed without reconstructing the whole


 Time sequence is not audio
 Typically short and vary slowly with time

 Dimensionality and numerosity reduction may also be


considered as forms of data compression 34
Data Compression

  [Diagram: lossless compression maps the Original Data to Compressed Data and back exactly; lossy compression recovers only an approximation of the Original Data]
35
Chapter 3: Data Preprocessing

 Data Preprocessing: An Overview


 Data Quality
 Major Tasks in Data Preprocessing
 Data Cleaning
 Data Integration
 Data Reduction
 Data Transformation and Data Discretization
 Summary
36
Data Transformation
 A function that maps the entire set of values of a given attribute
to a new set of replacement values s.t. each old value can be
identified with one of the new values
 Methods
 Smoothing: Remove noise from data
 Attribute/feature construction

New attributes constructed from the given ones
 Aggregation: Summarization, data cube construction
 Normalization: Scaled to fall within a smaller, specified range

min-max normalization

z-score normalization

normalization by decimal scaling
 Discretization: Concept hierarchy climbing
37
Normalization
• Min-max normalization to [new_minA, new_maxA]:

  v′ = (v − minA) / (maxA − minA) × (new_maxA − new_minA) + new_minA

  • Ex. Let income, ranging from $12,000 to $98,000, be normalized to [0.0, 1.0]. Then $73,600 is mapped to (73,600 − 12,000) / (98,000 − 12,000) × (1.0 − 0) + 0 = 0.716
• Z-score normalization (μ: mean, σ: standard deviation):

  v′ = (v − μA) / σA

  • Ex. Let μ = 54,000, σ = 16,000. Then (73,600 − 54,000) / 16,000 = 1.225
• Normalization by decimal scaling:

  v′ = v / 10^j, where j is the smallest integer such that max(|v′|) < 1
38
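
A sketch of the three normalizations in plain Python/NumPy, reproducing the numbers on the slide (the decimal-scaling values are illustrative):

```python
import numpy as np

v = 73_600.0
min_a, max_a = 12_000.0, 98_000.0

# Min-max normalization to [0.0, 1.0]
v_minmax = (v - min_a) / (max_a - min_a) * (1.0 - 0.0) + 0.0
print(round(v_minmax, 3))        # 0.716

# Z-score normalization with mu = 54,000 and sigma = 16,000
mu, sigma = 54_000.0, 16_000.0
v_z = (v - mu) / sigma
print(round(v_z, 3))             # 1.225

# Decimal scaling: divide by 10^j for the smallest j with max(|v'|) < 1
values = np.array([-986.0, 217.0, 73.0])
j = int(np.ceil(np.log10(np.max(np.abs(values)))))
print(values / 10**j)            # [-0.986  0.217  0.073]
```
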
Discretization
 Three types of attributes

Nominal—values from an unordered set, e.g., color, profession

Ordinal—values from an ordered set, e.g., military or
academic rank

Numeric—real numbers, e.g., integer or real numbers
 Discretization: Divide the range of a continuous attribute into
intervals

Interval labels can then be used to replace actual data values

Reduce data size by discretization

Supervised vs. unsupervised

Split (top-down) vs. merge (bottom-up)

Discretization can be performed recursively on an attribute

Prepare for further analysis, e.g., classification
39
Data Discretization Methods
 Typical methods: All the methods can be applied
recursively
 Binning

Top-down split, unsupervised
 Histogram analysis

Top-down split, unsupervised
 Clustering analysis (unsupervised, top-down split or
bottom-up merge)
 Decision-tree analysis (supervised, top-down split)
 Correlation (e.g., χ²) analysis (unsupervised, bottom-up merge)
40
Simple Discretization: Binning

 Equal-width (distance) partitioning


 Divides the range into N intervals of equal size: uniform grid
 if A and B are the lowest and highest values of the attribute, the
width of intervals will be: W = (B –A)/N.
 The most straightforward, but outliers may dominate
presentation
 Skewed data is not handled well
 Equal-depth (frequency) partitioning
 Divides the range into N intervals, each containing
approximately same number of samples
 Good data scaling
 Managing categorical attributes can be tricky
41
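
A sketch of both partitionings with pandas on a synthetic age attribute (`pd.cut` gives equal-width intervals, `pd.qcut` equal-depth ones):

```python
import pandas as pd
import numpy as np

rng = np.random.default_rng(3)
age = pd.Series(rng.integers(18, 90, size=1_000))

# Equal-width: 4 intervals of equal range -- skew can leave some bins nearly empty
equal_width = pd.cut(age, bins=4)
print(equal_width.value_counts().sort_index())

# Equal-depth (equal-frequency): 4 intervals with roughly the same number of samples
equal_depth = pd.qcut(age, q=4)
print(equal_depth.value_counts().sort_index())
```
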
Binning Methods for Data
Smoothing
 Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24,
25, 26, 28, 29, 34
* Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
42
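
The same worked example as a Python sketch (equal-frequency partition, then smoothing by bin means and by bin boundaries):

```python
import numpy as np

prices = np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])

# Equal-frequency (equi-depth) partition into 3 bins of 4 values each
bins = np.array_split(np.sort(prices), 3)

# Smoothing by bin means: every value in a bin is replaced by the (rounded) bin mean
by_means = [np.full(b.size, round(float(b.mean()))) for b in bins]
print(by_means)        # [9 9 9 9], [23 23 23 23], [29 29 29 29]

# Smoothing by bin boundaries: each value snaps to the closer of its bin's min/max
by_boundaries = [
    np.where(b - b.min() <= b.max() - b, b.min(), b.max()) for b in bins
]
print(by_boundaries)   # [4 4 4 15], [21 21 25 25], [26 26 26 34]
```
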
How to Handle Noisy Data?

 Binning method:
 first sort data and partition into (equi-depth) bins
 then one can smooth by bin means, smooth by bin
median, smooth by bin boundaries, etc.
 Clustering
 detect and remove outliers
 Combined computer and human inspection
 detect suspicious values and check by human (e.g.,
deal with possible outliers)

43
Another example
• The following is sorted data for price (in dollars):
  13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25, 25, 33, 35, 35, 35, 35, 36, 40, 45, 46, 52, 70
• Partition the given data into 4 bins using the equi-depth binning method and perform smoothing according to the following methods

47
Cont.
• Smoothing by bin means
• Smoothing by bin medians
• Smoothing by bin boundaries

• Data: 11, 13, 13, 15, 15, 16, 19, 20, 20, 20, 21, 21, 22, 23, 24, 30, 40, 45, 45, 45, 71, 72, 73, 75

48
Classification & Correlation
Analysis
 Classification (e.g., decision tree analysis)

Supervised: Given class labels, e.g., cancerous vs. benign

Using entropy to determine split point (discretization point)

Top-down, recursive split

Details to be covered in Chapter 7
 Correlation analysis (e.g., Chi-merge: χ2-based discretization)

Supervised: use class information

Bottom-up merge: find the best neighboring intervals (those
having similar distributions of classes, i.e., low χ2 values) to
merge

Merge performed recursively, until a predefined stopping
condition
49
Summary
 Data quality: accuracy, completeness, consistency,
timeliness, believability, interpretability
 Data cleaning: e.g. missing/noisy values, outliers
 Data integration from multiple sources:
 Entity identification problem

 Remove redundancies

 Detect inconsistencies

 Data reduction
 Dimensionality reduction

 Numerosity reduction

 Data compression

 Data transformation and data discretization


 Normalization

 Concept hierarchy generation

50
A (detailed) data preprocessing
example
• Suppose we want to mine the comments/reviews
of people on Yelp and Foursquare.
Data Collection
  [Pipeline diagram: Data Collection → Data Preprocessing → Data Mining → Result Post-processing]

• Today there is an abundance of data online


• Facebook, Twitter, Wikipedia, Web, etc…
• We can extract interesting information from this data,
but first we need to collect it
• Customized crawlers, use of public APIs
• Additional cleaning/processing to parse out the useful parts
• Respect of crawling etiquette
Mining Task
• Collect all reviews for the top-10 most reviewed
restaurants in NY in Yelp
• (thanks to Hady Law)

• Find a few terms that best describe the restaurants.


• Algorithm?
Example data
• I heard so many good things about this place so I was pretty juiced to try it. I'm
from Cali and I heard Shake Shack is comparable to IN-N-OUT and I gotta say, Shake
Shake wins hands down. Surprisingly, the line was short and we waited about 10 MIN.
to order. I ordered a regular cheeseburger, fries and a black/white shake. So
yummerz. I love the location too! It's in the middle of the city and the view is
breathtaking. Definitely one of my favorite places to eat in NYC.

• I'm from California and I must say, Shake Shack is better than IN-N-OUT, all day,
err'day.

• Would I pay $15+ for a burger here? No. But for the price point they are asking for,
this is a definite bang for your buck (though for some, the opportunity cost of
waiting in line might outweigh the cost savings) Thankfully, I came in before the
lunch swarm descended and I ordered a shake shack (the special burger with the patty +
fried cheese &amp; portabella topping) and a coffee milk shake. The beef patty was
very juicy and snugly packed within a soft potato roll. On the downside, I could do
without the fried portabella-thingy, as the crispy taste conflicted with the juicy,
tender burger. How does shake shack compare with in-and-out or 5-guys? I say a very
close tie, and I think it comes down to personal affliations. On the shake side, true
to its name, the shake was well churned and very thick and luscious. The coffee flavor
added a tangy taste and complemented the vanilla shake well. Situated in an open
space in NYC, the open air sitting allows you to munch on your burger while watching
people zoom by around the city. It's an oddly calming experience, or perhaps it was
the food coma I was slowly falling into. Great place with food at a great price.
First cut
• Do simple processing to “normalize” the data (remove
punctuation, make into lower case, clear white spaces, other?)
• Break into words, keep the most popular words
the 27514 the 16710 the 16010 the 14241
and 14508 and 9139 and 9504 and 8237
i 13088 a 8583 i 7966 a 8182
a 12152 i 8415 to 6524 i 7001
to 10672 to 7003 a 6370 to 6727
of 8702 in 5363 it 5169 of 4874
ramen 8518 it 4606 of 5159 you 4515
was 8274 of 4365 is 4519 it 4308
is 6835 is 4340 sauce 4020 is 4016
it 6802 burger 432 in 3951 was 3791
in 6402 was 4070 this 3519 pastrami 3748
for 6145 for 3441 was 3453 in 3508
but 5254 but 3284 for 3327 for 3424
that 4540 shack 3278 you 3220 sandwich 2928
you 4366 shake 3172 that 2769 that 2728
with 4181 that 3005 but 2590 but 2715
pork 4115 you 2985 food 2497 on 2247
my 3841 my 2514 on 2350 this 2099
this 3487 line 2389 my 2311 my 2064
wait 3184 this 2242 cart 2236 with 2040
not 3016 fries 2240 chicken 2220 not 1655
we 2984 on 2204 with 2195 your 1622
at 2980 are 2142 rice 2049 so 1610
on 2922 with 2095 so 1825 have 1585
First cut
• Do simple processing to “normalize” the data (remove punctuation, make into lower case, clear white spaces, other?)
• Break into words, keep the most popular words
• Same counts as on the previous slide, for the four restaurants side by side
• Most frequent words are stop words
Second cut
• Remove stop words
• Stop-word lists can be found online.

a, about, above, after, again, against, all, am, an, and, any, are, aren't, as, at, be, because, been, before, being, below, between, both, but, by, can't, cannot, could, couldn't, did, didn't, do, does, doesn't, doing, don't, down, during, each, few, for, from, further, had, hadn't, has, hasn't, have, haven't, having, he, he'd, he'll, he's, her, here, here's, hers, herself, him, himself, his, how, how's, i, i'd, i'll, i'm, i've, if, in, into, is, isn't, it, it's, its, itself, let's, me, more, most, mustn't, my, myself, no, nor, not, of, off, on, once, only, or, other, ought, our, ours, ourselves, out, over, own, same, shan't, she, she'd, she'll, she's, should, shouldn't, so, some, such, than, that, that's, the, their, theirs, them, themselves, then, there, there's, these, they, they'd, they'll, they're, they've, this, those, through, to, too, under, until, up, very, was, wasn't, we, we'd, we'll, we're, we've, were, weren't, what, what's, when, when's, where, where's, which, while, who, who's, whom, why, why's, with, won't, would, wouldn't, you, you'd, you'll, you're, you've, your, yours, yourself, yourselves
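
A sketch of the first-cut/second-cut steps in Python (the two `reviews` strings and the shortened `stop_words` set are stand-ins; a real run would use the scraped reviews and a full stop-word list such as the one above):

```python
import re
from collections import Counter

reviews = [
    "Shake Shack is better than IN-N-OUT, all day, err'day.",
    "The line was short and we waited about 10 min to order.",
]  # stand-in for the scraped Yelp reviews

stop_words = {"the", "and", "a", "i", "to", "of", "is", "was", "it", "in",
              "for", "we", "than", "all", "about"}  # normally loaded from a full list

# First cut: lower-case, strip punctuation/digits, split into words, count
words = []
for text in reviews:
    text = re.sub(r"[^a-z\s]", " ", text.lower())
    words.extend(text.split())

print(Counter(words).most_common(5))          # on real data this is dominated by stop words

# Second cut: drop stop words, count again
content_words = [w for w in words if w not in stop_words]
print(Counter(content_words).most_common(5))  # shake, shack, line, ... become visible
```
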
Second cut
• Remove stop words
• Stop-word lists can be found online.
ramen 8572 burger 4340 sauce 4023 pastrami 3782
pork 4152 shack 3291 food 2507 sandwich 2934
wait 3195 shake 3221 cart 2239 place 1480
good 2867 line 2397 chicken 2238 good 1341
place 2361 fries 2260 rice 2052 get 1251
noodles 2279 good 1920 hot 1835 katz's 1223
ippudo 2261 burgers 1643 white 1782 just 1214
buns 2251 wait 1508 line 1755 like 1207
broth 2041 just 1412 good 1629 meat 1168
like 1902 cheese 1307 lamb 1422 one 1071
just 1896 like 1204 halal 1343 deli 984
get 1641 food 1175 just 1338 best 965
time 1613 get 1162 get 1332 go 961
one 1460 place 1159 one 1222 ticket 955
really 1437 one 1118 like 1096 food 896
go 1366 long 1013 place 1052 sandwiches 813
food 1296 go 995 go 965 can 812
bowl 1272 time 951 can 878 beef 768
can 1256 park 887 night 832 order 720
great 1172 can 860 time 794 pickles 699
best 1167 best 849 long 792 time 662
people 790
TF-IDF
• The words that are best for describing a document are the ones that are important for the document, but also unique to the document.

• TF(w,d): term frequency of word w in document d
  • Number of times the word appears in the document
  • Natural measure of the importance of the word for the document

• IDF(w): inverse document frequency
  • Natural measure of the uniqueness of the word w
  • Typically computed as the (log of the) inverse of the fraction of documents that contain w

• TF-IDF(w,d) = TF(w,d) × IDF(w)
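
A TF-IDF sketch with scikit-learn, treating each restaurant’s concatenated reviews as one document (the three `docs` strings are tiny stand-ins):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "ramen noodles broth miso ramen buns ramen",        # stand-in: ramen shop reviews
    "burger shake fries shack burger custard burger",   # stand-in: Shake Shack reviews
    "pastrami rye sandwich pickles pastrami deli",      # stand-in: Katz's reviews
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)          # rows = documents, columns = terms
terms = np.array(vectorizer.get_feature_names_out())

# Top-3 terms per document by TF-IDF weight
for row in tfidf.toarray():
    top = terms[np.argsort(row)[::-1][:3]]
    print(top)
```
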


Third cut
• Ordered by TF-IDF, the top terms for each of the four restaurants are now distinctive rather than generic:
  • ramen, akamaru, noodles, broth, miso, hirata, hakata, shiromaru, noodle, tonkotsu, ippudo, buns, …
  • fries, custard, shakes, shroom, burger, crinkle, burgers, madison, shackburger, 'shroom, portobello, custards, …
  • lamb, halal, 53rd, gyro, pita, cart, platter, chicken/lamb, carts, hilton, lamb/chicken, yogurt, …
  • pastrami, katz's, rye, corned, pickles, reuben, matzo, sally, harry, mustard, cutter, carnegie, …
Third cut
• TF-IDF takes care of stop words as well
• We do not need to remove the stopwords since
they will get IDF(w) = 0
Decisions, decisions…
• When mining real data you often need to make some decisions:
  • What data should we collect? How much? For how long?
  • Should we throw out some data that does not seem to be useful?

    An actual review: “AAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAA AAA”

  • Too frequent data (stop words), too infrequent (errors?), erroneous data, missing data, outliers
  • How should we weight the different pieces of data?

• Most decisions are application dependent. Some information


may be lost but we can usually live with it (most of the times)

• We should make our decisions clear since they affect our


findings.

• Dealing with real data is hard…
