3. Data Preparation
Romi Satria Wahono
• SD Sompok Semarang (1987)
• SMPN 8 Semarang (1990)
• SMA Taruna Nusantara Magelang (1993)
• B.Eng, M.Eng and Ph.D in Software Engineering from Saitama University, Japan (1994-2004) and Universiti Teknikal Malaysia Melaka (2014)
• Research Interests: Software Engineering,
Intelligent Systems
• Founder and Coordinator of IlmuKomputer.Com
• Researcher at LIPI (2004-2007)
• Founder and CEO of PT Brainmatics Cipta Informatika
Course Outline
1. Introduction to Data Mining
3. Data Preparation
4. Classification Algorithms
5. Clustering Algorithms
6. Association Algorithms
8. Text Mining
3. Data Preparation
3.1 Data Preprocessing
3.2 Data Cleaning
3.3 Data Reduction
3.4 Data Transformation and Data Discretization
3.5 Data Integration
3.1 Data Preprocessing
CRISP-DM
Why Preprocess the Data?
Measures for data quality: a multidimensional view
• Accuracy
• Completeness
• Consistency
• Timeliness
• Believability
• Interpretability
Major Tasks in Data Preprocessing
1. Data cleaning
• Fill in missing values
• Smooth noisy data
• Identify or remove outliers
• Resolve inconsistencies
2. Data reduction
• Dimensionality reduction
• Numerosity reduction
• Data compression
3. Data transformation and data discretization
• Normalization
• Concept hierarchy generation
4. Data integration
• Integration of multiple databases or files
3.2 Data Cleaning
Data Cleaning
Data in the real world is dirty: much of it is potentially incorrect, e.g., due to faulty instruments, human or computer error, or transmission errors
• Incomplete: lacking attribute values, lacking certain
attributes of interest, or containing only aggregate data
• e.g., Occupation=“ ” (missing data)
• Noisy: containing noise, errors, or outliers
• e.g., Salary=“−10” (an error)
• Inconsistent: containing discrepancies in codes or names
• e.g., Age=“42”, Birthday=“03/07/2010”
• Was rating “1, 2, 3”, now rating “A, B, C”
• Discrepancy between duplicate records
• Intentional (e.g., disguised missing data)
• Jan. 1 as everyone’s birthday?
Incomplete (Missing) Data
• Data is not always available
• E.g., many tuples have no recorded value for several
attributes, such as customer income in sales data
• Missing data may be due to:
• equipment malfunction
• deletion because the data was inconsistent with other recorded data
• data not entered due to misunderstanding
• certain data not considered important at the time of entry
• history or changes of the data not being recorded
• Missing data may need to be inferred
Missing Data Example
• Dataset: MissingDataSet.csv
MissingDataSet.csv
• Jerry is the marketing manager for a small Internet design
and advertising firm
• Jerry’s boss asks him to develop a data set containing
information about Internet users
• The company will use this data to determine what kinds of
people are using the Internet and how the firm may be able
to market their services to this group of users
• To accomplish his assignment, Jerry creates an online survey
and places links to the survey on several popular Web sites
• Within two weeks, Jerry has collected enough data to begin
analysis, but he finds that his data needs to be
denormalized
• He also notes that some observations in the set are missing values or appear to contain invalid values
• Jerry realizes that some additional work on the data needs
to take place before analysis begins.
Relational Data
View of Data (Denormalized Data)
Missing Data Example
• Dataset: MissingDataSet.csv
How to Handle Missing Data?
• Ignore the tuple:
• Usually done when class label is missing (when doing
classification)—not effective when the % of missing values
per attribute varies considerably
• Fill in the missing value manually:
• Tedious + infeasible?
• Fill it in automatically with
• A global constant: e.g., “unknown”, a new class?!
• The attribute mean
• The attribute mean for all samples belonging to the same
class: smarter
• The most probable value: inference-based such as
Bayesian formula or decision tree
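A minimal pandas sketch of the fill-in strategies above; the small DataFrame and its column names are illustrative stand-ins, not the actual schema of MissingDataSet.csv:

```python
import numpy as np
import pandas as pd

# Illustrative stand-in for MissingDataSet.csv (hypothetical columns)
df = pd.DataFrame({
    "Gender":         ["M", "F", np.nan, "M", "F"],
    "Age":            [34.0, np.nan, 45.0, 29.0, np.nan],
    "Online_Shopper": ["Y", "Y", "N", np.nan, "N"],   # class label
})

# 1. Ignore the tuple: drop rows whose class label is missing
df = df.dropna(subset=["Online_Shopper"])

# 2. Fill with a global constant, e.g., "unknown"
df["Gender"] = df["Gender"].fillna("unknown")

# 3. Fill with the attribute mean
df["Age"] = df["Age"].fillna(df["Age"].mean())

# 4. Smarter: the attribute mean per class (use instead of step 3)
# df["Age"] = df.groupby("Online_Shopper")["Age"].transform(
#     lambda s: s.fillna(s.mean()))

print(df)
```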
Exercise
• Carry out the experiment following the book: Matthew North, Data Mining for the Masses, 2012, Chapter 3 Data Preparation, pp. 30-46 (Handling Missing Data)
• Dataset: MissingDataSet.csv
Noisy Data
• Noise: random error or variance in a measured
variable
• Incorrect attribute values may be due to
• Faulty data collection instruments
• Data entry problems
• Data transmission problems
• Technology limitation
• Inconsistency in naming convention
• Other data problems which require data cleaning
• Duplicate records
• Incomplete data
• Inconsistent data
How to Handle Noisy Data?
• Binning
• First sort data and partition into (equal-frequency) bins
• Then one can smooth by bin means, smooth by bin
median, smooth by bin boundaries, etc.
• Regression
• Smooth by fitting the data into regression functions
• Clustering
• Detect and remove outliers
• Combined computer and human inspection
• Detect suspicious values and check by human (e.g., deal
with possible outliers)
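As a sketch of the clustering route above, the snippet below (assuming scikit-learn; the measurements are made up) flags low-density points as noise for human inspection:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Made-up 1-D measurements containing two gross errors
x = np.array([20, 21, 22, 23, 24, 100, 25, 26, -50, 22], dtype=float)

# DBSCAN groups dense regions into clusters and labels sparse
# points as noise (cluster label -1)
labels = DBSCAN(eps=3, min_samples=3).fit_predict(x.reshape(-1, 1))

print(x[labels == -1])   # [100. -50.] -> candidates for inspection
```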
Data Cleaning as a Process
• Data discrepancy detection
• Use metadata (e.g., domain, range, dependency, distribution)
• Check field overloading
• Check uniqueness rule, consecutive rule and null rule
• Use commercial tools
• Data scrubbing: use simple domain knowledge (e.g., postal code,
spell-check) to detect errors and make corrections
• Data auditing: analyze the data to discover rules and relationships and to detect violators (e.g., use correlation and clustering to find outliers)
• Data migration and integration
• Data migration tools: allow transformations to be specified
• ETL (Extraction/Transformation/Loading) tools: allow users to
specify transformations through a graphical user interface
• Integration of the two processes
• Iterative and interactive (e.g., Potter's Wheel)
Exercise
• Carry out the experiment following the book: Matthew North, Data Mining for the Masses, 2012, Chapter 3 Data Preparation, pp. 50-52 (Handling Inconsistent Data)
• Dataset: MissingDataSet.csv
• Analyze which preprocessing methods are used and why they are needed for this dataset!
Exercise
• Carry out the experiment following the book: Matthew North, Data Mining for the Masses, 2012, Chapter 8 Estimation, pp. 127-140 (Estimation)
• Dataset: HeatingOil.csv
• Analyze which preprocessing methods are used and why they are needed for this dataset!
3.3 Data Reduction
Data Reduction Strategies
• Data Reduction
• Obtain a reduced representation of the data set that is much smaller in volume yet produces (almost) the same analytical results
• Why Data Reduction?
• A database/data warehouse may store terabytes of data
• Complex data analysis takes a very long time to run on the complete dataset
• Data Reduction Strategies
1. Dimensionality reduction
1. Feature Extraction
2. Feature Selection
2. Numerosity reduction
• Regression and Log-Linear Models
• Histograms, clustering, sampling
1. Dimensionality Reduction
• Curse of dimensionality
• When dimensionality increases, data becomes increasingly
sparse
• Density and distance between points, which are critical to clustering and outlier analysis, become less meaningful
• The possible combinations of subspaces will grow
exponentially
• Dimensionality reduction
• Avoid the curse of dimensionality
• Help eliminate irrelevant features and reduce noise
• Reduce time and space required in data mining
• Allow easier visualization
• Dimensionality reduction techniques
1. Feature Extraction: Wavelet transforms, Principal
Component Analysis (PCA)
2. Feature Selection: Filter, Wrapper, Embedded
Principal Component Analysis (Steps)
• Given N data vectors from n-dimensions, find k ≤ n
orthogonal vectors (principal components) that can be
best used to represent data
1. Normalize input data: Each attribute falls within the same range
2. Compute k orthonormal (unit) vectors, i.e., principal components
3. Each input data (vector) is a linear combination of the k principal
component vectors
4. The principal components are sorted in order of decreasing
“significance” or strength
5. Since the components are sorted, the size of the data can be
reduced by eliminating the weak components, i.e., those with low
variance
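The steps above can be traced in a few lines of NumPy; a minimal sketch on toy data (the numbers are illustrative):

```python
import numpy as np

# Toy data: 6 samples with 3 correlated attributes
X = np.array([[2.5, 2.4, 1.2], [0.5, 0.7, 0.3], [2.2, 2.9, 1.4],
              [1.9, 2.2, 1.1], [3.1, 3.0, 1.5], [2.3, 2.7, 1.3]])

# Step 1: normalize the input (here: center each attribute)
Xc = X - X.mean(axis=0)

# Step 2: the orthonormal principal components are the right
# singular vectors of the centered data (rows of Vt)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Step 4: SVD returns the components sorted by decreasing strength
print(s**2 / (s**2).sum())   # fraction of variance per component

# Step 5: drop the weak components, keeping only the top k
k = 1
X_reduced = Xc @ Vt[:k].T    # data now represented in k dimensions
print(X_reduced.shape)       # (6, 1)
```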
Exercise
• Check in RapidMiner which operators can be used to reduce the dimensionality of a dataset
Feature/Attribute Selection
• Another way to reduce dimensionality of data
• Redundant attributes
• Duplicate much or all of the information contained
in one or more other attributes
• E.g., purchase price of a product and the amount of
sales tax paid
• Irrelevant attributes
• Contain no information that is useful for the data
mining task at hand
• E.g., students' ID is often irrelevant to the task of
predicting students' GPA
Feature Selection Approach
A number of proposed approaches for feature selection can broadly be categorized into three classes: wrapper, filter, and hybrid (Liu & Tu, 2004)
1. In the filter approach, statistical analysis of the
feature set is required, without utilizing any learning
model (Dash & Liu, 1997)
2. In the wrapper approach, a predetermined learning
model is assumed, wherein features are selected that
justify the learning performance of the particular
learning model (Guyon & Elisseeff, 2003)
3. The hybrid approach attempts to utilize the
complementary strengths of the wrapper and filter
approaches (Huang, Cai, & Xu, 2007)
Wrapper Approach vs Filter Approach
Feature Selection Approach
1. Filter Approach:
• information gain
• chi square
• log likelihood ratio
2. Wrapper Approach:
• forward selection
• backward elimination
• randomized hill climbing
3. Embedded Approach:
• decision tree
• weighted naïve Bayes
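A sketch of the filter and wrapper approaches using scikit-learn (assumed here in place of RapidMiner's operators); the Iris data merely stands in for a real dataset:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import (SequentialFeatureSelector,
                                       mutual_info_classif)
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Filter approach: score each feature statistically (an information
# gain analogue), without any learning model
print(mutual_info_classif(X, y, random_state=0))

# Wrapper approach (forward selection): greedily add the feature that
# most improves the cross-validated accuracy of a fixed learner
knn = KNeighborsClassifier(n_neighbors=3)
fs = SequentialFeatureSelector(knn, n_features_to_select=2,
                               direction="forward", cv=10)
fs.fit(X, y)
print(fs.get_support())   # boolean mask of the selected features
```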
Exercise
• Carry out the experiment following the book Markus Hofmann (RapidMiner - Data Mining Use Cases), Chapter 4 (k-Nearest Neighbor Classification II)
Exercise: Predicting Student Graduation
1. Train on the student data (datakelulusanmahasiswa.xls) using DT, NB, and k-NN
2. Perform dimensionality reduction with Forward Selection for the three algorithms above
3. Evaluate using 10-fold cross-validation
4. Use a t-Test to determine the best model
Exercise
• Train on the eReader Adoption data (eReader-Training.csv) using DT with 3 alternative criteria (Gain Ratio, Information Gain, and Gini Index)
• Perform feature selection with Forward Selection for the three variants above
• Evaluate using 10-fold cross-validation
• From the best model, determine which factors (attributes) influence the eReader adoption rate
              DTGR    DTIG    DTGI    DTGR+FS   DTIG+FS   DTGI+FS
Accuracy (%)  58.39   51.01   31.01   61.41     56.73     31.01
2. Numerosity Reduction
Reduce data volume by choosing alternative, smaller forms of data representation
1. Parametric methods (e.g., regression and log-linear models)
• Assume the data fits some model, estimate the model parameters, store only the parameters, and discard the data (except possible outliers)
2. Non-parametric methods
• Do not assume models
• Major families: histograms, clustering, sampling, …
Parametric Data Reduction: Regression and
Log-Linear Models
• Linear regression
• Data modeled to fit a straight line
• Often uses the least-square method to fit the
line
• Multiple regression
• Allows a response variable Y to be modeled as a
linear function of multidimensional feature
vector
• Log-linear model
• Approximates discrete multidimensional
probability distributions
Regression Analysis
• Regression analysis: a collective name for techniques for the modeling and analysis of numerical data consisting of values of a dependent variable (also called response variable or measurement) and of one or more independent variables (aka explanatory variables or predictors)
• The parameters are estimated so as to give a "best fit" of the data
• Most commonly the best fit is evaluated by using the least squares method, but other criteria have also been used
• Used for prediction (including forecasting of time-series data), inference, hypothesis testing, and modeling of causal relationships
(Figure: a regression line y = x + 1 fitted to data points; for input X1 the line yields the fitted value Y1' for the observed value Y1)
Regression Analysis and Log-Linear Models
• Linear regression: Y = w X + b
• Two regression coefficients, w and b, specify the line and are to be
estimated by using the data at hand
• Using the least squares criterion on the known values of Y1, Y2, …, X1, X2, …
• Multiple regression: Y = b0 + b1 X1 + b2 X2
• Many nonlinear functions can be transformed into the above
• Log-linear models:
• Approximate discrete multidimensional probability distributions
• Estimate the probability of each point (tuple) in a multi-dimensional
space for a set of discretized attributes, based on a smaller subset of
dimensional combinations
• Useful for dimensionality reduction and data smoothing
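A small NumPy sketch of the parametric idea: fit Y = w X + b by least squares and keep only the two parameters instead of the data (the points are made up to lie near the line y = x + 1 from the earlier figure):

```python
import numpy as np

# Made-up points lying near the line y = x + 1
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 5.0, 5.8])

# Least-squares estimates of the regression coefficients w and b
w, b = np.polyfit(x, y, deg=1)
print(w, b)          # approximately 1 and 1

# Numerosity reduction: store only (w, b) and reconstruct on demand
print(w * 6.0 + b)   # estimated y for an unseen x = 6
```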
Histogram Analysis
• Divide data into buckets and store a summary (e.g., the average) for each bucket
• Partitioning rules:
• Equal-width: equal bucket range
(Figure: equal-width histogram with bucket counts 10-30 on the y-axis over attribute values 10,000-100,000)
Clustering
Sampling
Sampling: With or without Replacement
(Figure: drawing samples with and without replacement from the raw data)
Sampling: Cluster or Stratified Sampling
Stratified Sampling
• Stratification is the process of dividing members of the
population into homogeneous subgroups before sampling
• Suppose that in a company there are the following staff:
• Male, full-time: 90
• Male, part-time: 18
• Female, full-time: 9
• Female, part-time: 63
• Total: 180
• We are asked to take a sample of 40 staff, stratified
according to the above categories
• An easy way to calculate the allocation is to multiply each group size by the sample size and divide by the total population:
• Male, full-time = 90 × (40 ÷ 180) = 20
• Male, part-time = 18 × (40 ÷ 180) = 4
• Female, full-time = 9 × (40 ÷ 180) = 2
• Female, part-time = 63 × (40 ÷ 180) = 14
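The allocation above is easy to check in code; a pandas sketch using the group sizes from the slide:

```python
import pandas as pd

# Staff counts per stratum
staff = pd.DataFrame({
    "stratum": ["male_ft", "male_pt", "female_ft", "female_pt"],
    "size":    [90, 18, 9, 63],
})
sample_size = 40
population = staff["size"].sum()   # 180

# Proportional allocation: group size x (sample size / population)
staff["sampled"] = (staff["size"] * sample_size / population).astype(int)
print(staff)   # 20, 4, 2, 14 -- sums to 40

# On an actual data table the same idea is one line (pandas >= 1.1):
# df.groupby("stratum").sample(frac=sample_size / population)
```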
Exercise
• Carry out the experiment following the book: Matthew North, Data Mining for the Masses, 2012, Chapter 7 Discriminant Analysis, pp. 105-125
• Datasets: SportSkill-Training.csv and SportSkill-Scoring.csv
• Analyze which preprocessing methods are used and why they are needed for these datasets!
Exercise
• Carry out the experiment following the book: Matthew North, Data Mining for the Masses, 2012, Chapter 3 Data Preparation, pp. 46-50 (Data Reduction)
3.4 Data Transformation and Data
Discretization
Data Transformation
• A function that maps the entire set of values of a given
attribute to a new set of replacement values
• Each old value can be identified with one of the new values
• Methods:
• Smoothing: Remove noise from data
• Attribute/feature construction
• New attributes constructed from the given ones
• Aggregation: Summarization, data cube construction
• Normalization: Scaled to fall within a smaller, specified range
• min-max normalization
• z-score normalization
• normalization by decimal scaling
• Discretization: Concept hierarchy climbing
Normalization
• Min-max normalization to a new range [new_min, new_max]:
v' = ((v − min) / (max − min)) × (new_max − new_min) + new_min
• Z-score normalization (μ: mean, σ: standard deviation):
v' = (v − μ) / σ
• Ex. Let μ = 54,000 and σ = 16,000. Then v = 73,600 maps to (73,600 − 54,000) / 16,000 = 1.225
• Normalization by decimal scaling:
v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1
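A NumPy sketch of the three normalization methods, reusing the slide's μ and σ (the value list itself is illustrative):

```python
import numpy as np

v = np.array([73600.0, 54000.0, 12000.0, 98000.0])  # illustrative

# Min-max normalization to [0, 1]
print((v - v.min()) / (v.max() - v.min()))

# Z-score normalization with the slide's mu = 54,000, sigma = 16,000
z = (v - 54000) / 16000
print(z[0])        # (73600 - 54000) / 16000 = 1.225

# Decimal scaling: divide by 10^j, smallest j with max(|v'|) < 1
j = int(np.ceil(np.log10(np.abs(v).max() + 1)))
print(v / 10**j)   # every value now lies in (-1, 1)
```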
Discretization
• Divide the range of a continuous attribute into intervals, and use interval labels in place of the actual data values
Data Discretization Methods
Typical methods: All the methods can be
applied recursively
• Binning: Top-down split, unsupervised
• Histogram analysis: Top-down split, unsupervised
• Clustering analysis: Unsupervised, top-down split
or bottom-up merge
• Decision-tree analysis: Supervised, top-down
split
• Correlation (e.g., χ²) analysis: Supervised, bottom-up merge
Simple Discretization: Binning
• Equal-width (distance) partitioning
• Divides the range into N intervals of equal size: uniform
grid
• If A and B are the lowest and highest values of the attribute, the width of the intervals will be W = (B − A) / N
• The most straightforward, but outliers may dominate
presentation
• Skewed data is not handled well
• Equal-depth (frequency) partitioning
• Divides the range into N intervals, each containing
approximately same number of samples
• Good data scaling
• Managing categorical attributes can be tricky
Binning Methods for Data Smoothing
• Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
• Partition into equal-frequency bins of depth 4: Bin 1: 4, 8, 9, 15; Bin 2: 21, 21, 24, 25; Bin 3: 26, 28, 29, 34
• Smoothing by bin means: Bin 1: 9, 9, 9, 9; Bin 2: 23, 23, 23, 23; Bin 3: 29, 29, 29, 29
• Smoothing by bin boundaries: Bin 1: 4, 4, 4, 15; Bin 2: 21, 21, 25, 25; Bin 3: 26, 26, 26, 34
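The same smoothing, sketched in NumPy with the depth-4 equal-frequency bins above:

```python
import numpy as np

prices = np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])

# Equal-frequency partitioning: 3 bins of depth 4 (data already sorted)
bins = prices.reshape(3, 4)

# Smoothing by bin means: every value becomes its bin's (rounded) mean
print(np.repeat(bins.mean(axis=1).round(), 4))
# -> [ 9.  9.  9.  9. 23. 23. 23. 23. 29. 29. 29. 29.]

# Smoothing by bin boundaries: snap each value to the nearer of the
# bin's min or max
lo = bins.min(axis=1, keepdims=True)
hi = bins.max(axis=1, keepdims=True)
print(np.where(bins - lo <= hi - bins, lo, hi).ravel())
# -> [ 4  4  4 15 21 21 25 25 26 26 26 34]
```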
Discretization Without Using Class Labels
(Binning vs. Clustering)
Discretization by Classification & Correlation
Analysis
• Classification (e.g., decision tree analysis)
• Supervised: Given class labels, e.g., cancerous vs. benign
• Using entropy to determine split point (discretization point)
• Top-down, recursive split
• Correlation analysis (e.g., Chi-merge: χ2-based
discretization)
• Supervised: use class information
• Bottom-up merge: find the best neighboring intervals (those
having similar distributions of classes, i.e., low χ2 values) to
merge
• Merge performed recursively, until a predefined stopping
condition
Exercise
• Carry out the experiment following the book Markus Hofmann (RapidMiner - Data Mining Use Cases), Chapter 5 (Naïve Bayes Classification I)
• Dataset: crx.data
• Analyze which preprocessing methods are used and why they are needed for this dataset!
• Compare the model's accuracy when no filtering and no discretization are used
• Also compare when feature selection (wrapper) with Backward Elimination is used
Results
3.5 Data Integration
Data Integration
• Data integration:
• Combines data from multiple sources into a coherent store
• Schema integration: e.g., A.cust-id ≡ B.cust-#
• Integrate metadata from different sources
• Entity Identification Problem:
• Identify real world entities from multiple data sources,
e.g., Bill Clinton = William Clinton
• Detecting and Resolving Data Value Conflicts
• For the same real world entity, attribute values from
different sources are different
• Possible reasons: different representations, different
scales, e.g., metric vs. British units
Handling Redundancy in Data Integration
• Redundant data often occur when integrating multiple databases
• Object identification: The same attribute or object may
have different names in different databases
• Derivable data: One attribute may be a “derived”
attribute in another table, e.g., annual revenue
• Redundant attributes may be able to be detected
by correlation analysis and covariance analysis
• Careful integration of the data from multiple
sources may help reduce/avoid redundancies and
inconsistencies and improve mining speed and
quality
Correlation Analysis (Nominal Data)
• χ² (chi-square) test:
χ² = Σ (Observed − Expected)² / Expected
• The larger the Χ2 value, the more likely the variables are
related
• The cells that contribute the most to the Χ2 value are
those whose actual count is very different from the
expected count
• Correlation does not imply causality
• # of hospitals and # of car-theft in a city are correlated
• Both are causally linked to the third variable: population
Chi-Square Calculation: An Example
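A SciPy sketch of the calculation; the 2x2 contingency table below uses the well-known liking-science-fiction vs. playing-chess numbers from Han & Kamber, assumed here as the worked example:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Contingency table: rows = likes science fiction (yes/no),
# columns = plays chess (yes/no)
observed = np.array([[250,  200],
                     [ 50, 1000]])

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(expected)   # [[ 90. 360.] [210. 840.]]: counts if independent
print(chi2)       # ~507.93: far from 0, so the attributes are related
print(p)          # ~0: independence is rejected
```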
Correlation Analysis (Numeric Data)
• Correlation coefficient (Pearson's product-moment coefficient):
r(A,B) = Σᵢ (aᵢ − Ā)(bᵢ − B̄) / ((n − 1) σA σB) = (Σᵢ aᵢbᵢ − n Ā B̄) / ((n − 1) σA σB)
where n is the number of tuples, Ā and B̄ are the respective means, and σA and σB are the respective standard deviations of A and B
• r > 0: positively correlated; r = 0: independent; r < 0: negatively correlated
Visually Evaluating Correlation
Scatter plots showing the similarity from −1 to 1
Correlation
• Correlation measures the linear relationship
between objects
• To compute correlation, we standardize data
objects, A and B, and then take their dot product
Covariance (Numeric Data)
• Covariance is similar to correlation:
Cov(A, B) = E[(A − Ā)(B − B̄)] = E(A·B) − Ā·B̄
• Correlation coefficient:
r(A, B) = Cov(A, B) / (σA σB)
Covariance: An Example
• Suppose two stocks A and B have the following values in one week: (2, 5), (3, 8), (5,
10), (4, 11), (6, 14).
• Question: If the stocks are affected by the same industry trends, will their prices rise
or fall together?
• E(A) = (2 + 3 + 5 + 4 + 6)/ 5 = 20/5 = 4
• E(B) = (5 + 8 + 10 + 11 + 14) /5 = 48/5 = 9.6
• Cov(A,B) = (2×5 + 3×8 + 5×10 + 4×11 + 6×14)/5 − 4 × 9.6 = 42.4 − 38.4 = 4
• Since Cov(A,B) > 0, the prices of the two stocks tend to rise together
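Verifying the example in NumPy (population statistics, matching the 1/n convention used above):

```python
import numpy as np

A = np.array([2, 3, 5, 4, 6])       # stock A, one price per day
B = np.array([5, 8, 10, 11, 14])    # stock B

# Cov(A,B) = E(A*B) - E(A)*E(B)
cov = (A * B).mean() - A.mean() * B.mean()
print(cov)                          # 4.0 > 0: prices rise together

# Correlation coefficient: covariance scaled by both std deviations
print(cov / (A.std() * B.std()))    # ~0.94: strong positive correlation
```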
Summary
1. Data quality: accuracy, completeness,
consistency, timeliness, believability,
interpretability
2. Data cleaning: e.g. missing/noisy values, outliers
3. Data reduction
• Dimensionality reduction
• Numerosity reduction
4. Data transformation and data discretization
• Normalization
5. Data integration from multiple sources:
• Entity identification problem
• Remove redundancies
• Detect inconsistencies
References
1. Jiawei Han and Micheline Kamber, Data Mining: Concepts and
Techniques Third Edition, Elsevier, 2012
2. Ian H. Witten, Frank Eibe, Mark A. Hall, Data mining: Practical
Machine Learning Tools and Techniques 3rd Edition, Elsevier, 2011
3. Markus Hofmann and Ralf Klinkenberg, RapidMiner: Data Mining
Use Cases and Business Analytics Applications, CRC Press Taylor &
Francis Group, 2014
4. Daniel T. Larose, Discovering Knowledge in Data: an Introduction
to Data Mining, John Wiley & Sons, 2005
5. Ethem Alpaydin, Introduction to Machine Learning, 3rd ed., MIT
Press, 2014
6. Florin Gorunescu, Data Mining: Concepts, Models and
Techniques, Springer, 2011
7. Oded Maimon and Lior Rokach, Data Mining and Knowledge
Discovery Handbook Second Edition, Springer, 2010
8. Warren Liao and Evangelos Triantaphyllou (eds.), Recent Advances
in Data Mining of Enterprise Data: Algorithms and Applications,
World Scientific, 2007