
Module 2(c) - Data Preprocessing

Chapter 3 discusses data preprocessing, emphasizing the importance of data quality and the major tasks involved, including data cleaning, integration, reduction, and transformation. It outlines various issues such as missing, noisy, and inconsistent data, along with methods for handling these problems. The chapter also covers strategies for data reduction to improve analysis efficiency while maintaining analytical results.


Chapter 3 — Data Preprocessing
Chapter 3: Data Preprocessing

 Data Preprocessing: An Overview

 Data Quality

 Major Tasks in Data Preprocessing

 Data Cleaning

 Data Integration

 Data Reduction

 Data Transformation and Data Discretization

 Summary
Data Quality: Why Preprocess the Data?

 Measures for data quality: A multidimensional view


 Accuracy: correct or wrong, accurate or not
 Completeness: not recorded, unavailable, …
 Consistency: some modified but some not, dangling, …
 Timeliness: timely update?
 Believability: how trustworthy is the data?
 Interpretability: how easily can the data be understood?

Major Tasks in Data Preprocessing
 Data cleaning
 Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies


 Data integration
 Integration of multiple databases, data cubes, or files

 Data reduction
 Dimensionality reduction

 Numerosity reduction

 Data compression

 Data transformation and data discretization


 Normalization

 Concept hierarchy generation

Chapter 3: Data Preprocessing

 Data Preprocessing: An Overview

 Data Quality

 Major Tasks in Data Preprocessing

 Data Cleaning

 Data Integration

 Data Reduction

 Data Transformation and Data Discretization

 Summary
Data Cleaning
 Data in the Real World Is Dirty: lots of potentially incorrect data, e.g., faulty instruments, human or computer error, transmission errors
 incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data


 e.g., Occupation = “ ” (missing data)

 noisy: containing noise, errors, or outliers

 e.g., Salary = “−10” (an error)

 inconsistent: containing discrepancies in codes or names, e.g.,

 Age = “42”, Birthday = “03/07/2010”

 Was rating “1, 2, 3”, now rating “A, B, C”

 discrepancy between duplicate records

 Intentional (e.g., disguised missing data)

 Jan. 1 as everyone’s birthday?

Incomplete (Missing) Data

 Data is not always available


 E.g., many tuples have no recorded value for several
attributes, such as customer income in sales data
 Missing data may be due to
 equipment malfunction
 inconsistent with other recorded data and thus deleted
 data not entered due to misunderstanding
 certain data may not be considered important at the time
of entry
 Missing data may need to be inferred

How to Handle Missing Data?
 Ignore the tuple: usually done when class label is missing (when
doing classification)—not effective when the % of missing values
per attribute varies considerably
 Fill in the missing value manually: tedious + infeasible?
 Fill it in automatically with
 a global constant: e.g., “unknown”, a new class?!
 the attribute mean
 the attribute mean for all samples belonging to the same
class
 the most probable value: inference-based such as Bayesian
formula or decision tree
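The automatic fill-in strategies above can be sketched with pandas; the table and column names here are hypothetical, chosen only for illustration:

```python
import pandas as pd

# Hypothetical sales table with missing customer incomes
df = pd.DataFrame({
    "cls":    ["A", "A", "B", "B", "B"],
    "income": [50_000, None, 42_000, None, 38_000],
})

# Global constant: replace missing values with a sentinel
df["income_const"] = df["income"].fillna(-1)

# Attribute mean over all samples
df["income_mean"] = df["income"].fillna(df["income"].mean())

# Attribute mean for samples of the same class (often more accurate)
df["income_cls"] = df["income"].fillna(
    df.groupby("cls")["income"].transform("mean"))
```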
Noisy Data
 Noise: random error or variance in a measured variable
 Incorrect attribute values may be due to
 faulty data collection instruments

 data entry problems

 data transmission problems

 technology limitation

 inconsistency in naming convention

 Other data problems which require data cleaning


 duplicate records

 incomplete data

 inconsistent data

How to Handle Noisy Data?

 Binning
 first sort data and partition into (equal-frequency) bins

 then one can smooth by bin means, smooth by bin median, smooth by bin boundaries, etc.


 Regression
 smooth by fitting the data into regression functions

 Clustering
 detect and remove outliers

 Combined computer and human inspection
 detect suspicious values and have a human check them (e.g., to deal with possible outliers)

Simple Discretization: Binning

 Equal-width (distance) partitioning


 Divides the range into N intervals of equal size: uniform grid
 If A and B are the lowest and highest values of the attribute, the width of the intervals will be W = (B − A)/N
 The most straightforward approach, but outliers may dominate the presentation
 Skewed data is not handled well

 Equal-depth (frequency) partitioning


 Divides the range into N intervals, each containing approximately the same number of samples
 Good data scaling

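A minimal pandas sketch contrasting the two partitioning schemes, using the price data from the example on the next slide (N = 3 bins is assumed):

```python
import pandas as pd

values = pd.Series([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])

# Equal-width: 3 intervals of width W = (34 - 4) / 3 = 10
equal_width = pd.cut(values, bins=3)

# Equal-depth: 3 intervals holding ~4 values each
equal_depth = pd.qcut(values, q=3)

print(equal_width.value_counts())  # counts skew when the data is skewed
print(equal_depth.value_counts())  # roughly uniform: 4, 4, 4
```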
Binning Methods for Data Smoothing
 Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
* Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9 (mean = (4+8+9+15)/4 = 9)
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34

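A plain-Python sketch reproducing the smoothing results above (equal-frequency bins of size 4):

```python
# Equal-frequency bins of size 4, as in the example above
prices = sorted([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
bins = [prices[i:i + 4] for i in range(0, len(prices), 4)]

# Smoothing by bin means: every value becomes its bin's (rounded) mean
by_means = [[round(sum(b) / len(b))] * len(b) for b in bins]
# -> [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]

# Smoothing by bin boundaries: each value snaps to the closer of
# the bin's minimum and maximum
by_bounds = [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b]
             for b in bins]
# -> [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```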
Clustering

[Figure: detecting and removing outliers via clustering]


Data Cleaning as a Process
 Data discrepancy detection
 Use metadata (e.g., domain, range, dependency, distribution)

 Check field overloading

 Check uniqueness rule, consecutive rule and null rule

 Use commercial tools

 Data scrubbing: use simple domain knowledge (e.g., postal codes, spell-check) to detect errors and make corrections


 Data auditing: analyze the data to discover rules and relationships, and detect violators (e.g., use correlation and clustering to find outliers)


 Data migration and integration
 Data migration tools: allow transformations to be specified

 ETL (Extraction/Transformation/Loading) tools: allow users to specify transformations through a graphical user interface

Chapter 3: Data Preprocessing

 Data Preprocessing: An Overview

 Data Quality

 Major Tasks in Data Preprocessing

 Data Cleaning

 Data Integration

 Data Reduction

 Data Transformation and Data Discretization

 Summary
Data Integration
 Data integration:
 Combines data from multiple sources into a coherent store
 Schema integration: e.g., A.cust-id ≡ B.cust-#
 Integrate metadata from different sources
 Entity identification problem:
 Identify real world entities from multiple data sources, e.g., customer_id
in one database and cust_no in another database.
 Detecting and resolving data value conflicts
 For the same real world entity, attribute values from different sources
are different
 Possible reasons: different representations, different scales,
 e.g., metric vs. British units
Handling Redundancy in Data Integration

 Redundant data often occur when integrating multiple databases
 Object identification: The same attribute or object may
have different names in different databases
 Derivable data: One attribute may be a “derived” attribute
in another table, e.g., annual revenue
 Redundant attributes may be able to be detected by
correlation analysis and covariance analysis
 Careful integration of the data from multiple sources may help
reduce/avoid redundancies and inconsistencies and improve
mining speed and quality
Correlation Analysis (Nominal Data)
 Χ² (chi-square) test:

χ² = Σ (Observed − Expected)² / Expected
 The larger the Χ2 value, the more likely the variables are
related
 The cells that contribute the most to the Χ2 value are those
whose actual count is very different from the expected count
 Correlation does not imply causality
 # of hospitals and # of car thefts in a city are correlated
 Both are causally linked to the third variable: population

Chi-Square Calculation: An Example

                           Play chess   Not play chess   Sum (row)
Like science fiction       250 (90)     200 (360)        450
Not like science fiction   50 (210)     1000 (840)       1050
Sum (col.)                 300          1200             1500

 Χ² (chi-square) calculation (the numbers in parentheses are the expected counts, calculated from the data distribution in the two categories):

χ² = (250 − 90)²/90 + (50 − 210)²/210 + (200 − 360)²/360 + (1000 − 840)²/840 = 507.93
 It shows that like_science_fiction and play_chess are correlated
in the group
 For this 2×2 table, the degrees of freedom are (r − 1)(c − 1) = (2 − 1)(2 − 1) = 1
 For 1 degree of freedom, the Χ² value needed to reject the hypothesis at the 0.001 significance level is 10.828 (taken from the table of upper percentage points of the χ² distribution, available in any statistics textbook).

 Since our computed value is above this, we can reject
the hypothesis that playing chess and preferred
reading are independent and conclude that the two
attributes are (strongly) correlated for the given
group of people

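The whole calculation can be checked with SciPy; chi2_contingency computes the expected counts, the statistic, the degrees of freedom, and the p-value in one call:

```python
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[250, 200],    # like science fiction
                     [50, 1000]])   # not like science fiction

# correction=False disables the Yates continuity correction that
# SciPy applies to 2x2 tables by default, so the statistic matches
# the hand calculation above
chi2, p, dof, expected = chi2_contingency(observed, correction=False)

print(round(chi2, 2))  # 507.93
print(dof)             # 1
print(expected)        # [[ 90. 360.], [210. 840.]]
print(p < 0.001)       # True -> reject the independence hypothesis
```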
Correlation Analysis (Numeric Data)

 Correlation coefficient (also called Pearson's product-moment coefficient):

rA,B = Σᵢ (aᵢ − Ā)(bᵢ − B̄) / ((n − 1) σA σB) = (Σᵢ aᵢbᵢ − n·Ā·B̄) / ((n − 1) σA σB)

where n is the number of tuples, Ā and B̄ are the respective means of A and B, σA and σB are the respective standard deviations of A and B, and Σ aᵢbᵢ is the sum of the AB cross-products.
 If rA,B > 0, A and B are positively correlated (A’s values increase as B’s do). The higher the value, the stronger the correlation.
 rA,B = 0: uncorrelated (no linear relationship); rA,B < 0: negatively correlated
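A minimal NumPy sketch of the coefficient, reusing the stock prices from the covariance example that follows:

```python
import numpy as np

# Hypothetical stock prices, also used in the covariance example below
a = np.array([2.0, 3.0, 5.0, 4.0, 6.0])
b = np.array([5.0, 8.0, 10.0, 11.0, 14.0])

n = len(a)
# Straight from the definition; ddof=1 gives the sample standard
# deviation, matching the n - 1 in the denominator
r = ((a - a.mean()) * (b - b.mean())).sum() / (
        (n - 1) * a.std(ddof=1) * b.std(ddof=1))

assert np.isclose(r, np.corrcoef(a, b)[0, 1])
print(r)  # ~0.94: strongly positively correlated
```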
Covariance (Numeric Data)
 Covariance is similar to correlation:

Cov(A, B) = E[(A − Ā)(B − B̄)] = Σᵢ (aᵢ − Ā)(bᵢ − B̄) / n

Correlation coefficient: rA,B = Cov(A, B) / (σA σB)

where n is the number of tuples, Ā and B̄ are the respective means or expected values of A and B, and σA and σB are the respective standard deviations of A and B
 Positive covariance: If CovA,B > 0, then A and B both tend to be larger than their
expected values
 Negative covariance: If CovA,B < 0 then if A is larger than its expected value, B is
likely to be smaller than its expected value
 Independence: if A and B are independent, CovA,B = 0, but the converse is not true:
 Some pairs of random variables may have a covariance of 0 but are not independent. Only under some additional assumptions (e.g., the data follow multivariate normal distributions) does a covariance of 0 imply independence
Co-Variance: An Example

 It can be simplified in computation as Cov(A, B) = E(A·B) − Ā·B̄

 Suppose two stocks A and B have the following values in one week: (2, 5), (3,
8), (5, 10), (4, 11), (6, 14).

 Question: If the stocks are affected by the same industry trends, will their
prices rise or fall together?

 E(A) = (2 + 3 + 5 + 4 + 6)/ 5 = 20/5 = 4

 E(B) = (5 + 8 + 10 + 11 + 14) /5 = 48/5 = 9.6

 Cov(A,B) = (2×5+3×8+5×10+4×11+6×14)/5 − 4 × 9.6 = 4

 Thus, A and B rise together since Cov(A, B) > 0.
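A quick NumPy check of this computation; bias=True requests the population covariance (divide by n), which is what the simplified formula yields:

```python
import numpy as np

a = np.array([2.0, 3.0, 5.0, 4.0, 6.0])     # stock A
b = np.array([5.0, 8.0, 10.0, 11.0, 14.0])  # stock B

# Simplified formula: Cov(A, B) = E(A*B) - E(A)E(B)
cov = (a * b).mean() - a.mean() * b.mean()

# bias=True -> population covariance (divide by n), matching above
assert np.isclose(cov, np.cov(a, b, bias=True)[0, 1])
print(cov)  # 4.0 -> the two stocks tend to rise together
```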


Chapter 3: Data Preprocessing

 Data Preprocessing: An Overview

 Data Quality

 Major Tasks in Data Preprocessing

 Data Cleaning

 Data Integration

 Data Reduction

 Data Transformation and Data Discretization

 Summary
Data Reduction Strategies
 Data reduction: Obtain a reduced representation of the data set that is much smaller in volume yet produces the same (or almost the same) analytical results
 Why data reduction? — A database/data warehouse may store terabytes of
data. Complex data analysis may take a very long time to run on the
complete data set.
 Data reduction strategies
 Dimensionality reduction, e.g., remove unimportant attributes

 Principal Components Analysis (PCA)

 Feature subset selection, feature creation

 Numerosity reduction (some simply call it: Data Reduction)

 Regression and Log-Linear Models

 Histograms, clustering, sampling

 Data cube aggregation

 Data compression

Attribute Subset Selection
 Another way to reduce dimensionality of data
 Redundant attributes
 Duplicate much or all of the information contained in one or
more other attributes
 E.g., purchase price of a product and the amount of sales
tax paid
 Irrelevant attributes
 Contain no information that is useful for the data mining
task at hand
 E.g., a student's ID is often irrelevant to the task of predicting the student's GPA

Histogram Analysis
 Divide data into buckets and store the average (or sum) for each bucket
 Partitioning rules:
 Equal-width: equal bucket range
 Equal-frequency (or equal-depth)

[Figure: equal-width histogram of prices; buckets of width 10,000 spanning 10,000–100,000]
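A small NumPy sketch of equal-width bucketing; the price data is synthetic, generated only for illustration:

```python
import numpy as np

# Synthetic prices, for illustration only
rng = np.random.default_rng(0)
prices = rng.integers(10_000, 100_000, size=500)

# Nine equal-width buckets of width ~10,000
counts, edges = np.histogram(prices, bins=9)

# Numerosity reduction: keep only (bucket range, count) pairs
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:,.0f}-{hi:,.0f}: {c}")
```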
Clustering
 Partition data set into clusters based on similarity, and store
cluster representation (e.g., centroid and diameter) only
 Can be very effective if the data is clustered, but not if the data is “smeared”
 Can have hierarchical clustering and be stored in multi-
dimensional index tree structures
 There are many choices of clustering definitions and
clustering algorithms

Sampling

 Sampling: obtaining a small sample s to represent the whole data set N
 Allow a mining algorithm to run in complexity that is potentially
sub-linear to the size of the data
 Key principle: Choose a representative subset of the data
 Simple random sampling may have very poor performance
in the presence of skew
 Develop adaptive sampling methods, e.g., stratified
sampling

Types of Sampling

 Simple random sampling


 There is an equal probability of selecting any particular item

 Sampling without replacement


 Once an object is selected, it is removed from the population

 Sampling with replacement
 A selected object is not removed from the population (it is recorded, then replaced)


 Stratified sampling:
 Partition the data set, and draw samples from each partition (proportionally, i.e., approximately the same percentage of the data)
 Particularly useful with skewed data

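The sampling schemes above, sketched with pandas on a hypothetical skewed data set:

```python
import pandas as pd

# Hypothetical skewed data set: stratum A has 90 rows, B only 10
df = pd.DataFrame({"stratum": ["A"] * 90 + ["B"] * 10,
                   "x": range(100)})

srswor = df.sample(n=20, replace=False)  # simple random, w/o replacement
srswr = df.sample(n=20, replace=True)    # with replacement

# Stratified: draw the same fraction from every stratum, so the
# rare stratum B is not swamped by A
stratified = df.groupby("stratum").sample(frac=0.2)
```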
Sampling: With or without Replacement

[Figure: raw data sampled with and without replacement]

Sampling: Cluster or Stratified Sampling

[Figure: raw data versus a cluster/stratified sample]

Sampling: Cluster

[Figure: cluster sampling of the raw data]

Sampling: Stratified Sampling

[Figure: stratified sampling of the raw data]
Chapter 3: Data Preprocessing

 Data Preprocessing: An Overview

 Data Quality

 Major Tasks in Data Preprocessing

 Data Cleaning

 Data Integration

 Data Reduction

 Data Transformation and Data Discretization

 Summary
Data Transformation
 A function that maps the entire set of values of a given attribute to a new
set of replacement values s.t. each old value can be identified with one of
the new values
 Methods
 Smoothing: Remove noise from data
 Attribute/feature construction
 New attributes constructed from the given ones
 Aggregation: Summarization, data cube construction
 Normalization: Scaled to fall within a smaller, specified range
 min-max normalization
 z-score normalization
 normalization by decimal scaling
 Discretization: Concept hierarchy climbing
Normalization
 Let A be a numeric attribute with n values v1, v2, …, vn
 Min-max normalization: to [new_minA, new_maxA]

v' = (v − minA) / (maxA − minA) × (new_maxA − new_minA) + new_minA

 Ex. Let income, ranging from $12,000 to $98,000, be normalized to [0.0, 1.0]. Then $73,600 is mapped to

(73,600 − 12,000) / (98,000 − 12,000) × (1.0 − 0) + 0 = 0.716

 Z-score normalization (μ: mean, σ: standard deviation):

v' = (v − μA) / σA

 Ex. Let μ = 54,000, σ = 16,000. Then $73,600 is mapped to (73,600 − 54,000) / 16,000 = 1.225

 Normalization by decimal scaling:

v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1
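All three normalizations, sketched with NumPy on the income example (for the z-score, μ and σ come from the slide, not from the tiny sample):

```python
import numpy as np

incomes = np.array([12_000.0, 54_000.0, 73_600.0, 98_000.0])

# Min-max normalization to [0.0, 1.0]
lo, hi = incomes.min(), incomes.max()
minmax = (incomes - lo) / (hi - lo) * (1.0 - 0.0) + 0.0
# 73,600 -> 0.716

# Z-score normalization (mu, sigma taken from the slide)
mu, sigma = 54_000, 16_000
zscore = (incomes - mu) / sigma
# 73,600 -> 1.225

# Decimal scaling: smallest integer j with max(|v'|) < 1
j = int(np.ceil(np.log10(np.abs(incomes).max() + 1)))
decimal = incomes / 10 ** j
# j = 5, so 98,000 -> 0.98
```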
Discretization
 Three types of attributes
 Nominal—values from an unordered set, e.g., color, profession
 Ordinal—values from an ordered set, e.g., military or academic rank
 Numeric—e.g., integer or real numbers
 Discretization: Divide the range of a continuous attribute into intervals
 Interval labels can then be used to replace actual data values
 Reduce data size by discretization
 Supervised vs. unsupervised
 Split (top-down) vs. merge (bottom-up)
 Discretization can be performed recursively on an attribute
 Prepare for further analysis, e.g., classification
Data Discretization Methods
 Typical methods: All the methods can be applied recursively
 Binning
 Top-down split, unsupervised
 Histogram analysis
 Top-down split, unsupervised
 Clustering analysis (unsupervised, top-down split or bottom-
up merge)
 Decision-tree analysis (supervised, top-down split)
 Correlation (e.g., χ²) analysis (unsupervised, bottom-up merge)

Concept Hierarchy Generation

 Concept hierarchy organizes concepts (i.e., attribute values) hierarchically and is usually associated with each dimension in a data warehouse
 Concept hierarchies facilitate drilling and rolling in data warehouses, allowing data to be viewed at multiple granularities
 Concept hierarchy formation: Recursively reduce the data by
collecting and replacing low level concepts (such as numeric values for
age) by higher level concepts (such as youth, adult, or senior)
 Concept hierarchies can be explicitly specified by domain experts
and/or data warehouse designers
 Concept hierarchy can be automatically formed for both numeric and
nominal data—For numeric data, use discretization methods shown
Concept Hierarchy Generation for Nominal Data
 Specification of a partial/total ordering of attributes explicitly at
the schema level by users or experts
 street < city < state < country
 Specification of a hierarchy for a set of values by explicit data
grouping
 Specification of only a partial set of attributes
 E.g., only street < city, not others
 Automatic generation of hierarchies (or attribute levels) by the
analysis of the number of distinct values
 E.g., for a set of attributes: {street, city, state, country}

Automatic Concept Hierarchy Generation
 Some hierarchies can be automatically generated based on
the analysis of the number of distinct values per attribute in
the data set
 The attribute with the most distinct values is placed at
the lowest level of the hierarchy

country: 15 distinct values
province_or_state: 365 distinct values
city: 3,567 distinct values
street: 674,339 distinct values
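A sketch of the distinct-value heuristic with pandas; the toy location table is hypothetical:

```python
import pandas as pd

# Hypothetical location table
df = pd.DataFrame({
    "country": ["US", "US", "US", "US", "CA"],
    "state":   ["NY", "NY", "CA", "CA", "ON"],
    "city":    ["NYC", "NYC", "LA", "SF", "Toronto"],
    "street":  ["5th Ave", "Broadway", "Sunset Blvd", "Market St",
                "Yonge St"],
})

# Fewest distinct values -> highest level of the hierarchy
order = df.nunique().sort_values().index.tolist()
print(" < ".join(reversed(order)))  # street < city < state < country
```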


Chapter 3: Data Preprocessing

 Data Preprocessing: An Overview

 Data Quality

 Major Tasks in Data Preprocessing

 Data Cleaning

 Data Integration

 Data Reduction

 Data Transformation and Data Discretization

 Summary
Summary
 Data quality: accuracy, completeness, consistency, timeliness,
believability, interpretability
 Data cleaning: e.g., missing/noisy values, outliers
 Data integration from multiple sources:
 Entity identification problem; Remove redundancies; Detect
inconsistencies
 Data reduction
 Dimensionality reduction; Numerosity reduction; Data
compression
 Data transformation and data discretization
 Normalization; Concept hierarchy generation
