Unit-1
3
Syllabus…
REFERENCE BOOK
❑ Data Mining Techniques, Arun K. Pujari, 3rd Edition, Universities Press.
❑ Pang-Ning Tan, Michael Steinbach, Anuj Karpatne and Vipin Kumar, Introduction to Data Mining, 2nd Edition, Pearson Education India, 2021.
❑ Amitesh Sinha, Data Warehousing, Thomson Learning, India, 2007.
Pre-Requisites
• Database Management Systems
• Computer Oriented Statistical Methods
Evaluation method
• Unit-wise Tests – 4
• Quiz Tests – 5
• Workshops – 4
• Assignment Tests – 2
• Term Paper
4
Unit - I
Introduction to Data Mining
❑ What is Data Mining?
❑ Kinds of Data
❑ Knowledge Discovery Process
❑ Data Mining Functionalities
❑ Kinds of Patterns
❑ Major Issues in Data Mining
❑ Data Objects and Attribute Types
❑ Basic Statistical Descriptions of Data, Data Visualization
❑ Measuring Data Similarity and Dissimilarity
❑ Data Pre-processing: Major Tasks in Data Pre-processing, Data Cleaning, Data Integration, Data Reduction, Data Transformation and Data Discretization
5
Why Data Mining?
❑ The Explosive Growth of Data: from terabytes to petabytes
❑ Data collection and data availability
❑ Automated data collection tools, database systems, Web, computerized
society
❑ Major sources of abundant data
❑ Business: Web, e-commerce, transactions, stocks, …
❑ Science: Remote sensing, bioinformatics, scientific simulation, …
❑ Society and everyone: news, digital cameras, YouTube
❑ We are drowning in data, but starving for knowledge!
❑ “Necessity is the mother of invention”—Data mining—Automated
analysis of massive data sets
6
What Is Data Mining?
❑ Data mining (knowledge discovery from data)
❑ Extraction of interesting (non-trivial, implicit, previously unknown and potentially useful) patterns or knowledge from huge amounts of data
❑ Data mining: a misnomer?
❑ Alternative names
❑ Knowledge discovery (mining) in databases (KDD), knowledge extraction,
data/pattern analysis, data archeology, data dredging, information
harvesting, business intelligence, etc.
❑ Watch out: Is everything “data mining”?
❑ Simple search and query processing
❑ (Deductive) expert systems
7
Knowledge Discovery (KDD) Process
❑ This is a view from typical database systems and data warehousing communities
(Figure: the KDD process: Databases → Data Cleaning → Data Integration → Task-relevant Data → Data Mining → Pattern Evaluation)
8
Example: A Web Mining Framework
❑ Web mining usually involves
❑ Data cleaning
❑ Data integration from multiple sources
❑ Warehousing the data
❑ Data cube construction
❑ Data selection for data mining
❑ Data mining
❑ Presentation of the mining results
❑ Patterns and knowledge to be used or stored in a knowledge base
9
Data Mining in Business Intelligence
(Figure: pyramid of increasing potential to support business decisions: from data exploration (statistical summary, querying, and reporting) at the base, up through data mining to decision making by the end user at the top)
11
Data Mining vs. Data Exploration
❑ Which view do you prefer?
❑ KDD vs. ML/Stat. vs. Business Intelligence
❑ Depending on the data, applications, and your focus
12
Multi-Dimensional View of Data Mining
❑ Data to be mined
❑ Database data (extended-relational, object-oriented, heterogeneous), data warehouse, transactional data, stream, spatiotemporal, time-series, sequence, text and web, multi-media, graphs & social and information networks
❑ Knowledge to be mined (or: Data mining functions)
❑ Characterization, discrimination, association, classification, clustering, trend/deviation, outlier
analysis, …
❑ Descriptive vs. predictive data mining
❑ Multiple/integrated functions and mining at multiple levels
❑ Techniques utilized
❑ Data-intensive, data warehouse (OLAP), machine learning, statistics, pattern recognition,
visualization, high-performance, etc.
❑ Applications adapted
❑ Retail, telecommunication, banking, fraud analysis, bio-data mining, stock market analysis, text
mining, Web mining, etc.
13
Data Mining: On What Kinds of Data?
❑ Database-oriented data sets and applications
❑ Relational database, data warehouse, transactional database
❑ Object-relational databases, Heterogeneous databases and legacy databases
❑ Advanced data sets and advanced applications
❑ Data streams and sensor data
❑ Time-series data, temporal data, sequence data (incl. bio-sequences)
❑ Structured data, graphs, social networks and information networks
❑ Spatial data and spatiotemporal data
❑ Multimedia database
❑ Text databases
❑ The World-Wide Web
14
Data Mining Functions: (1) Generalization
❑ Information integration and data warehouse construction
❑ Data cleaning, transformation, integration, and
multidimensional data model
❑ Data cube technology
❑ Scalable methods for computing (i.e., materializing)
multidimensional aggregates
❑ OLAP (online analytical processing)
❑ Multidimensional concept description: Characterization
and discrimination
❑ Generalize, summarize, and contrast data
characteristics, e.g., dry vs. wet region
15
Data Mining Functions: (2) Pattern Discovery
❑ Frequent patterns (or frequent itemsets)
❑ What items are frequently purchased together in your Walmart?
❑ Association and Correlation Analysis
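A minimal Python sketch of how frequent itemsets can be counted from basket data; the five transactions and the support threshold of 3 are illustrative assumptions:

    from collections import Counter
    from itertools import combinations

    # Five illustrative market baskets (same style as the TID/Items table
    # later in this unit) and an assumed absolute support threshold.
    transactions = [
        {"Bread", "Coke", "Milk"},
        {"Beer", "Bread"},
        {"Beer", "Coke", "Diaper", "Milk"},
        {"Beer", "Bread", "Diaper", "Milk"},
        {"Coke", "Diaper", "Milk"},
    ]
    min_support = 3

    # Count single items and item pairs, then keep those meeting min_support.
    item_counts = Counter(item for t in transactions for item in t)
    pair_counts = Counter(p for t in transactions for p in combinations(sorted(t), 2))
    print({i: c for i, c in item_counts.items() if c >= min_support})
    print({p: c for p, c in pair_counts.items() if c >= min_support})
    # e.g., ('Coke', 'Milk') and ('Diaper', 'Milk') each appear in 3 of 5 baskets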
18
Data Mining Functions: (5) Outlier Analysis
❑ Outlier analysis
❑ Outlier: A data object that does not comply with the
general behavior of the data
❑ Noise or exception?―One person’s garbage could be
another person’s treasure
❑ Methods: by-product of clustering or regression analysis, …
❑ Useful in fraud detection, rare events analysis
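One simple statistical take on outlier detection (a sketch, not the only method; the data values and the 2-sigma cutoff are assumptions):

    import statistics

    # Flag values far from the mean; 95 does not comply with the
    # general behavior of the rest of the data.
    data = [12, 11, 13, 12, 14, 13, 95, 12, 11, 13]
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    print([x for x in data if abs(x - mu) > 2 * sigma])  # [95]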
19
Data Mining Functions: (6) Time and Ordering:
Sequential Pattern, Trend and Evolution Analysis
❑ Sequence, trend and evolution analysis
❑ Trend, time-series, and deviation analysis
❑ e.g., regression and value prediction
❑ Sequential pattern mining
❑ e.g., buy digital camera, then buy large memory cards
❑ Periodicity analysis
❑ Motifs and biological sequence analysis
❑ Approximate and consecutive motifs
❑ Similarity-based analysis
❑ Mining data streams
❑ Ordered, time-varying, potentially infinite, data streams
20
Data Mining Functions: (7) Structure and
Network Analysis
❑ Graph mining
❑ Finding frequent subgraphs (e.g., chemical compounds), trees (XML),
substructures (web fragments)
❑ Information network analysis
❑ Social networks: actors (objects, nodes) and relationships (edges)
❑ e.g., author networks in CS, terrorist networks
❑ Multiple heterogeneous networks
❑ A person could be in multiple information networks: friends, family, classmates, …
❑ Links carry a lot of semantic information: Link mining
❑ Web mining
❑ Web is a big information network: from PageRank to Google
❑ Analysis of Web information networks
❑ Web community discovery, opinion mining, usage mining, …
21
Evaluation of Knowledge
❑ Is all mined knowledge interesting?
❑ One can mine a tremendous number of “patterns”
❑ Some may fit only certain dimension space (time, location, …)
❑ Some may not be representative, may be transient, …
❑ Evaluation of mined knowledge → directly mine only interesting knowledge?
❑ Descriptive vs. predictive
❑ Coverage
❑ Typicality vs. novelty
❑ Accuracy
❑ Timeliness
❑ …
22
Data Mining: Confluence of Multiple Disciplines
(Figure: data mining at the confluence of statistics, machine learning, pattern recognition, database technology, algorithms, and high-performance computing)
23
Why Confluence of Multiple Disciplines?
❑ Tremendous amount of data
❑ Algorithms must be scalable to handle big data
❑ High-dimensionality of data
❑ Microarray data may have tens of thousands of dimensions
❑ High complexity of data
❑ Data streams and sensor data
❑ Time-series data, temporal data, sequence data
❑ Structured data, graphs, social and information networks
❑ Spatial, spatiotemporal, multimedia, text and Web data
❑ Software programs, scientific simulations
❑ New and sophisticated applications
24
Applications of Data Mining
❑ Web page analysis: classification, clustering, ranking
❑ Collaborative analysis & recommender systems
❑ Basket data analysis to targeted marketing
❑ Biological and medical data analysis
❑ Data mining and software engineering
❑ Data mining and text analysis
❑ Data mining and social and information network analysis
❑ Built-in (invisible data mining) functions in Google, MS, Yahoo!, LinkedIn, Facebook, …
❑ Major dedicated data mining systems/tools
❑ SAS, MS SQL-Server Analysis Manager, Oracle Data Mining Tools
25
Major Issues in Data Mining (1)
❑ Mining Methodology
❑ Mining various and new kinds of knowledge
❑ Mining knowledge in multi-dimensional space
❑ Data mining: An interdisciplinary effort
❑ Boosting the power of discovery in a networked environment
❑ Handling noise, uncertainty, and incompleteness of data
❑ Pattern evaluation and pattern- or constraint-guided mining
❑ User Interaction
❑ Interactive mining
❑ Incorporation of background knowledge
❑ Presentation and visualization of data mining results
26
Major Issues in Data Mining (2)
❑ Efficiency and Scalability
❑ Efficiency and scalability of data mining algorithms
❑ Parallel, distributed, stream, and incremental mining methods
❑ Diversity of data types
❑ Handling complex types of data
❑ Mining dynamic, networked, and global data repositories
❑ Data mining and society
❑ Social impacts of data mining
❑ Privacy-preserving data mining
❑ Invisible data mining
27
Types of Data Sets
❑ Record
❑ Relational records
❑ Data matrix, e.g., numerical matrix, crosstabs
❑ Document data: text documents: term-frequency vector
❑ Transaction data
❑ Graph and network
❑ World Wide Web
❑ Social or information networks
❑ Molecular structures
❑ Ordered
❑ Video data: sequence of images
❑ Temporal data: time-series
❑ Sequential data: transaction sequences
❑ Genetic sequence data
❑ Spatial, image and multimedia
❑ Spatial data: maps
❑ Image data
❑ Video data
(Table: term-frequency vectors with columns team, coach, play, ball, score, game, win, lost, timeout, season:
Document 1: 3 0 5 0 2 6 0 2 0 2
Document 2: 0 7 0 2 1 0 0 3 0 0
Document 3: 0 1 0 0 1 2 2 0 3 0)
(Table: transaction data:
TID Items
1 Bread, Coke, Milk
2 Beer, Bread
3 Beer, Coke, Diaper, Milk
4 Beer, Bread, Diaper, Milk
5 Coke, Diaper, Milk)
29
Important Characteristics of Structured Data
❑ Dimensionality
❑ Curse of dimensionality
❑ Sparsity
❑ Only presence counts
❑ Resolution
30
Data Objects
❑ Data sets are made up of data objects.
❑ A data object represents an entity.
❑ Examples
❑ Sales database: customers, store items, sales
❑ Medical database: patients, treatments
❑ University database: students, professors, courses
❑ Also called samples, examples, instances, data points, objects, tuples.
❑ Data objects are described by attributes.
❑ Database rows -> data objects; columns -> attributes.
31
Attributes
❑ Attribute (or dimension, feature, variable): a data field representing a characteristic or feature of a data object.
❑ E.g., customer_ID, name, address
❑ Types
❑ Nominal
❑ Binary
❑ Numeric: quantitative
❑ Interval-scaled
❑ Ratio-scaled
32
Attribute Types
❑ Nominal: categories, states, or “names of things”
❑ Hair_color = {auburn, black, blond, brown, grey, red, white}
❑ marital status, occupation, ID numbers, zip codes
❑ Binary
❑ Nominal attribute with only 2 states (0 and 1)
❑ Symmetric binary: both outcomes equally important
❑ e.g., gender
❑ Asymmetric binary: outcomes not equally important.
❑ e.g., medical test (positive vs. negative)
❑ Convention: assign 1 to most important outcome (e.g., HIV positive)
❑ Ordinal
❑ Values have a meaningful order (ranking) but magnitude between successive values is
not known.
❑ Size = {small, medium, large}, grades, army rankings
33
Numeric Attribute Types
❑ Quantity (integer or real-valued)
❑ Interval
❑ Measured on a scale of equal-sized units
❑ Values have order
❑ E.g., temperature in °C or °F, calendar dates
❑ No true zero-point
❑ Ratio
❑ Inherent zero-point
❑ We can speak of values as being an order of magnitude larger than the unit of measurement (10 K is twice as high as 5 K).
❑ e.g., temperature in Kelvin, length, counts, monetary quantities
34
Discrete vs. Continuous Attributes
❑ Discrete Attribute
❑ Has only a finite or countably infinite set of values
❑ E.g., zip codes, profession, or the set of words in a collection of
documents
❑ Sometimes, represented as integer variables
❑ Note: Binary attributes are a special case of discrete attributes
❑ Continuous Attribute
❑ Has real numbers as attribute values
❑ E.g., temperature, height, or weight
❑ Practically, real values can only be measured and represented using a finite number
of digits
❑ Continuous attributes are typically represented as floating-point variables
35
Basic Statistical Descriptions of Data
❑ Motivation
❑ To better understand the data: central tendency, variation and spread
❑ Data dispersion characteristics
❑ Median, Max, Min, Quantiles, Outliers, Variance, etc.
❑ Numerical dimensions correspond to sorted intervals
❑ Data dispersion: analyzed with multiple granularities of precision
❑ Boxplot or quantile analysis on sorted intervals
❑ Dispersion analysis on computed measures
❑ Folding measures into numerical dimensions
❑ Boxplot or quantile analysis on the transformed cube
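A short Python sketch of these basic descriptions on a small numeric attribute (the values are made up):

    import numpy as np

    x = np.array([30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110])
    print(np.mean(x), np.median(x))           # central tendency
    q1, q3 = np.percentile(x, [25, 75])       # quartiles
    iqr = q3 - q1                             # interquartile range
    print(q1, q3, iqr, np.var(x), np.std(x))  # dispersion
    # Common boxplot rule: flag values beyond 1.5 * IQR past the quartiles
    print(x[(x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr)])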
37
38
Symmetric vs. Skewed Data
❑ Median, mean and mode of symmetric, positively skewed, and negatively skewed data
43
Graphic Displays of Basic Statistical Descriptions
❑ Boxplot, histogram, quantile plot, and scatter plot
44
Histogram Analysis
❑ Histogram: Graph display of tabulated frequencies, shown as bars
❑ It shows what proportion of cases fall into each of several categories
❑ Differs from a bar chart in that it is the area of the bar that denotes the value, not the height as in bar charts, a crucial distinction when the categories are not of uniform width
❑ The categories are usually specified as non-overlapping intervals of some variable. The categories (bars) must be adjacent
(Figure: example histogram of unit prices over the range 10,000–90,000)
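A minimal matplotlib sketch of such a histogram (synthetic prices; the bin edges mirror the slide's 10,000–90,000 axis):

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    prices = rng.normal(50_000, 15_000, size=500)   # synthetic unit prices
    plt.hist(prices, bins=np.arange(10_000, 100_001, 10_000), edgecolor="black")
    plt.xlabel("Unit price")
    plt.ylabel("Frequency")
    plt.show()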
45
Histograms Often Tell More than Boxplots
46
Quantile Plot
❑ Displays all of the data (allowing the user to assess both the overall behavior and
unusual occurrences)
❑ Plots quantile information
❑ For data x_i sorted in increasing order, f_i indicates that approximately 100·f_i% of the data are below or equal to the value x_i
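A sketch of a quantile plot, using the common choice f_i = (i − 0.5)/n for the plotted fractions (data values assumed):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.sort(np.array([30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110]))
    n = len(x)
    f = (np.arange(1, n + 1) - 0.5) / n   # approx. fraction of data <= x_i
    plt.plot(f, x, "o")
    plt.xlabel("f-value")
    plt.ylabel("x (sorted values)")
    plt.show()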
48
Scatter plot
❑ Provides a first look at bivariate data to see clusters of points, outliers, etc.
❑ Each pair of values is treated as a pair of coordinates and plotted as points
in the plane
49
Positively and Negatively Correlated Data
50
Uncorrelated Data
51
Data Visualization
❑ Why data visualization?
❑ Gain insight into an information space by mapping data onto graphical primitives
❑ Provide qualitative overview of large data sets
❑ Search for patterns, trends, structure, irregularities, relationships among data
❑ Help find interesting regions and suitable parameters for further quantitative analysis
❑ Provide a visual proof of computer representations derived from the data
❑ Categorization of visualization methods:
❑ Pixel-oriented visualization techniques
❑ Geometric projection visualization techniques
❑ Icon-based visualization techniques
❑ Hierarchical visualization techniques
❑ Visualizing complex data and relations
53
Pixel-Oriented Visualization Techniques
❑ For a data set of m dimensions, create m windows on the screen, one for each dimension
❑ The m dimension values of a record are mapped to m pixels at the corresponding positions
in the windows
❑ The colors of the pixels reflect the corresponding values
(Figure: pixel-oriented views of four attributes: (a) income, (b) credit limit, (c) transaction volume, (d) age)
54
Laying Out Pixels in Circle Segments
❑ To save space and show the connections among multiple dimensions, space filling is often
done in a circle segment
58
Landscapes
(Figure: news articles visualized as a landscape)
60
Parallel Coordinates of a Data Set
61
Icon-Based Visualization Techniques
❑ Visualization of the data values as features of icons
❑ Typical visualization methods
❑ Chernoff Faces
❑ Stick Figures
❑ General techniques
❑ Shape coding: Use shape to represent certain information encoding
❑ Color icons: Use color icons to encode more information
❑ Tile bars: Use small icons to represent the relevant feature vectors in
document retrieval
62
Chernoff Faces
❑ A way to display variables on a two-dimensional surface, e.g., let x be eyebrow slant, y be eye
size, z be nose length, etc.
❑ The figure shows faces produced using 10 characteristics (head eccentricity, eye size, eye spacing, eye eccentricity, pupil size, eyebrow slant, nose size, mouth shape, mouth size, and mouth opening), each assigned one of 10 possible values; generated using Mathematica (S. Dickson)
❑ REFERENCE: Gonick, L. and Smith, W. The Cartoon Guide to
Statistics. New York: Harper Perennial, p. 212, 1993
❑ Weisstein, Eric W. "Chernoff Face." From MathWorld--A
Wolfram Web Resource.
mathworld.wolfram.com/ChernoffFace.html
63
Stick Figure
A census data figure showing age, income, gender, education, etc. Two attributes are mapped to the display axes, and the remaining attributes are mapped to the angle or length of the limbs; look at the texture pattern.
64
Hierarchical Visualization Techniques
❑ Visualization of the data using a hierarchical partitioning into subspaces
❑ Methods
❑ Dimensional Stacking
❑ Worlds-within-Worlds
❑ Tree-Map
❑ Cone Trees
❑ InfoCube
65
Dimensional Stacking
(Figure: dimensional stacking of four attributes: attributes 1 and 2 form the outer grid, attributes 3 and 4 are nested inside)
❑ Partitioning of the n-dimensional attribute space in 2-D subspaces, which are ‘stacked’
into each other
❑ Partitioning of the attribute value ranges into classes. The important attributes should
be used on the outer levels.
❑ Adequate for data with ordinal attributes of low cardinality
❑ But, difficult to display more than nine dimensions
❑ Important to map dimensions appropriately
66
Dimensional Stacking
Used by permission of M. Ward, Worcester Polytechnic Institute
Visualization of oil mining data with longitude and latitude mapped to the outer x-, y-axes and ore
grade and depth mapped to the inner x-, y-axes
67
Worlds-within-Worlds
❑ Assign the function and two most important parameters to innermost world
❑ Fix all other parameters at constant values; draw other (1-, 2-, or 3-dimensional) worlds choosing these as the axes
❑ Software that uses this paradigm
68
Tree-Map
❑ Screen-filling method which uses a hierarchical partitioning of the screen into regions
depending on the attribute values
❑ The x- and y-dimension of the screen are partitioned alternately according to the
attribute values (classes)
Ack.: http://www.cs.umd.edu/hcil/treemap-history/all102001.jpg
69
Tree-Map of a File System (Shneiderman)
70
InfoCube
❑ A 3-D visualization technique where hierarchical information is displayed as
nested semi-transparent cubes
❑ The outermost cubes correspond to the top level data, while the subnodes
or the lower level data are represented as smaller cubes inside the
outermost cubes, and so on
71
Three-D Cone Trees
❑ 3D cone tree visualization technique works well for up to
a thousand nodes or so
❑ First build a 2D circle tree that arranges its nodes in
concentric circles centered on the root node
❑ Cannot avoid overlaps when projected to 2D
❑ G. Robertson, J. Mackinlay, S. Card. “Cone Trees:
Animated 3D Visualizations of Hierarchical Information”,
ACM SIGCHI'91
❑ Graph from Nadeau Software Consulting website:
Visualize a social network data set that models the way
an infection spreads from one person to the next
Ack.: http://nadeausoftware.com/articles/visualization
72
Visualizing Complex Data and Relations
❑ Visualizing non-numerical data: text and social networks
❑ Tag cloud: visualizing user-generated tags
74
Similarity and Dissimilarity
❑ Similarity
❑ Numerical measure of how alike two data objects are
❑ Value is higher when objects are more alike
❑ Often falls in the range [0,1]
❑ Dissimilarity (e.g., distance)
❑ Numerical measure of how different two data objects are
❑ Lower when objects are more alike
❑ Minimum dissimilarity is often 0
❑ Upper limit varies
❑ Proximity refers to a similarity or dissimilarity
75
Data Matrix and Dissimilarity Matrix
❑ Data matrix
❑ n data points with p dimensions
❑ Two modes
❑ Dissimilarity matrix
❑ n data points, but registers only the distance
❑ A triangular matrix
❑ Single mode
76
Proximity Measure for Nominal Attributes
❑ Can take 2 or more states, e.g., red, yellow, blue, green (generalization of
a binary attribute)
❑ Method 1: Simple matching
❑ m: # of matches, p: total # of variables
❑ d(i, j) = (p − m) / p
77
Proximity Measure for Binary Attributes
(Table: 2×2 contingency table over objects i and j: q = # attributes where both are 1, r = 1 in i only, s = 1 in j only, t = both 0)
78
Dissimilarity between Binary Variables
❑ Example
Name Gender Fever Cough Test-1 Test-2 Test-3 Test-4
Jack M Y N P N N N
Mary F Y N P N P N
Jim M Y P N N N N
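A sketch of the asymmetric binary dissimilarity d(i, j) = (r + s) / (q + r + s) for this table, under the usual reading of the example: Y/P map to 1, N to 0, and gender (a symmetric attribute) is left out:

    # q = attributes positive in both, r = positive only in i, s = positive only in j
    def asym_binary_dissim(a, b):
        q = sum(x == 1 and y == 1 for x, y in zip(a, b))
        r = sum(x == 1 and y == 0 for x, y in zip(a, b))
        s = sum(x == 0 and y == 1 for x, y in zip(a, b))
        return (r + s) / (q + r + s)

    # Fever, Cough, Test-1, Test-2, Test-3, Test-4 with Y/P -> 1 and N -> 0
    jack = [1, 0, 1, 0, 0, 0]
    mary = [1, 0, 1, 0, 1, 0]
    jim  = [1, 1, 0, 0, 0, 0]
    print(asym_binary_dissim(jack, mary))  # 1/3 = 0.33
    print(asym_binary_dissim(jack, jim))   # 2/3 = 0.67
    print(asym_binary_dissim(jim, mary))   # 3/4 = 0.75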
79
Standardizing Numeric Data
❑ Z-score: z = (x − μ) / σ
❑ x: raw score to be standardized, μ: mean of the population, σ: standard deviation
❑ The distance between the raw score and the population mean in units of the standard deviation
❑ Negative when the raw score is below the mean, positive when above
❑ An alternative way: calculate the mean absolute deviation
s_f = (1/n)(|x_1f − m_f| + … + |x_nf − m_f|), where m_f = (1/n)(x_1f + … + x_nf),
and use the standardized measure z_if = (x_if − m_f) / s_f
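A small sketch of the alternative standardization (values assumed); the mean absolute deviation is more robust because deviations from the mean are not squared:

    def standardize_mad(values):
        n = len(values)
        m = sum(values) / n                          # mean m_f
        s = sum(abs(v - m) for v in values) / n      # mean absolute deviation s_f
        return [(v - m) / s for v in values]         # z_if = (x_if - m_f) / s_f

    print(standardize_mad([20, 30, 40, 50, 200]))    # the outlier 200 is dampened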
80
Example:
Data Matrix and Dissimilarity Matrix
Data Matrix
point attribute1 attribute2
x1 1 2
x2 3 5
x3 2 0
x4 4 5
Dissimilarity Matrix
(with Euclidean Distance)
x1 x2 x3 x4
x1 0
x2 3.61 0
x3 2.24 5.1 0
x4 4.24 1 5.39 0
81
Distance on Numeric Data: Minkowski Distance
❑ Minkowski distance: A popular distance measure:
d(i, j) = (|x_i1 − x_j1|^h + |x_i2 − x_j2|^h + … + |x_ip − x_jp|^h)^(1/h)
where i = (x_i1, x_i2, …, x_ip) and j = (x_j1, x_j2, …, x_jp) are two p-dimensional data objects, and h is the order (the distance so defined is also called the L-h norm)
❑ Properties
❑ d(i, j) > 0 if i ≠ j, and d(i, i) = 0 (Positive definiteness)
❑ d(i, j) = d(j, i) (Symmetry)
❑ d(i, j) ≤ d(i, k) + d(k, j) (Triangle Inequality)
❑ A distance that satisfies these properties is a metric
82
Special Cases of Minkowski Distance
❑ h = 1: Manhattan (city block, L1 norm) distance
❑ E.g., the Hamming distance: the number of bits that are different between two binary vectors
❑ h = 2: Euclidean (L2 norm) distance
❑ h → ∞: supremum (Lmax norm, L∞ norm) distance: the maximum difference between any attribute of the two objects
83
Example: Minkowski Distance
Dissimilarity Matrices
point attribute 1 attribute 2
x1 1 2
x2 3 5
x3 2 0
x4 4 5

Manhattan (L1)
L1 x1 x2 x3 x4
x1 0
x2 5 0
x3 3 6 0
x4 6 1 7 0

Euclidean (L2)
L2 x1 x2 x3 x4
x1 0
x2 3.61 0
x3 2.24 5.1 0
x4 4.24 1 5.39 0

Supremum (L∞)
L∞ x1 x2 x3 x4
x1 0
x2 3 0
x3 2 5 0
x4 3 1 5 0
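These three matrices can be reproduced with SciPy (a sketch; the metric names are SciPy's):

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    X = np.array([[1, 2], [3, 5], [2, 0], [4, 5]])   # x1..x4 from the table
    for name, metric in [("Manhattan (L1)", "cityblock"),
                         ("Euclidean (L2)", "euclidean"),
                         ("Supremum (Lmax)", "chebyshev")]:
        print(name)
        print(np.round(squareform(pdist(X, metric=metric)), 2))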
84
Ordinal Variables
❑ An ordinal variable can be discrete or continuous; order is important, e.g., rank
❑ Can be treated like interval-scaled:
❑ Replace x_if by its rank r_if ∈ {1, …, M_f}
❑ Map the range of each variable onto [0, 1] by z_if = (r_if − 1) / (M_f − 1)
❑ Compute the dissimilarity using methods for interval-scaled variables
85
Attributes of Mixed Type
❑ A database may contain all attribute types
❑ Nominal, symmetric binary, asymmetric binary, numeric, ordinal
❑ One may use a weighted formula to combine their effects:
d(i, j) = Σ_f δ_ij^(f) d_ij^(f) / Σ_f δ_ij^(f)
❑ f is binary or nominal: d_ij^(f) = 0 if x_if = x_jf, or d_ij^(f) = 1 otherwise
❑ f is numeric: use the normalized distance
❑ f is ordinal
❑ Compute ranks r_if and z_if = (r_if − 1) / (M_f − 1)
❑ Treat z_if as interval-scaled
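A sketch of the weighted combination for one nominal, one numeric, and one ordinal attribute (the objects, attribute names, and equal weights are illustrative assumptions):

    def mixed_dissim(i, j, numeric_range, ordinal_states):
        d_nom = 0.0 if i["color"] == j["color"] else 1.0          # nominal
        d_num = abs(i["weight"] - j["weight"]) / numeric_range    # normalized numeric
        rank = {s: k + 1 for k, s in enumerate(ordinal_states)}   # ordinal ranks r_if
        M = len(ordinal_states)
        z = lambda obj: (rank[obj["size"]] - 1) / (M - 1)         # z_if in [0, 1]
        d_ord = abs(z(i) - z(j))                                  # treat as interval-scaled
        return (d_nom + d_num + d_ord) / 3                        # all indicators delta = 1

    a = {"color": "red", "weight": 60.0, "size": "small"}
    b = {"color": "blue", "weight": 80.0, "size": "large"}
    print(mixed_dissim(a, b, 100.0, ["small", "medium", "large"]))  # ~0.73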
86
Cosine Similarity
❑ A document can be represented by thousands of attributes, each recording the frequency of a particular
word (such as keywords) or phrase in the document.
87
Example: Cosine Similarity
❑ cos(d1, d2) = (d1 • d2) / (||d1|| × ||d2||),
where • indicates the vector dot product and ||d|| is the length of vector d
d1 = (5, 0, 3, 0, 2, 0, 0, 2, 0, 0)
d2 = (3, 0, 2, 0, 1, 1, 0, 1, 0, 1)
d1 • d2 = 5×3 + 0×0 + 3×2 + 0×0 + 2×1 + 0×1 + 0×0 + 2×1 + 0×0 + 0×1 = 25
||d1|| = (5² + 0² + 3² + 0² + 2² + 0² + 0² + 2² + 0² + 0²)^0.5 = 42^0.5 = 6.481
||d2|| = (3² + 0² + 2² + 0² + 1² + 1² + 0² + 1² + 0² + 1²)^0.5 = 17^0.5 = 4.12
cos(d1, d2) = 25 / (6.481 × 4.12) = 0.94
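The same computation in a few lines of Python, confirming the result:

    import math

    d1 = [5, 0, 3, 0, 2, 0, 0, 2, 0, 0]
    d2 = [3, 0, 2, 0, 1, 1, 0, 1, 0, 1]
    dot = sum(a * b for a, b in zip(d1, d2))   # 25
    n1 = math.sqrt(sum(a * a for a in d1))     # sqrt(42) = 6.481
    n2 = math.sqrt(sum(b * b for b in d2))     # sqrt(17) = 4.12
    print(round(dot / (n1 * n2), 2))           # 0.94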
88
Data Preprocessing
❑ Data Quality
❑ Data Cleaning
❑ Data Integration
❑ Data Reduction
89
Data Quality: Why Preprocess the Data?
❑ Measures for data quality: a multidimensional view
❑ Accuracy: correct or wrong, accurate or not
❑ Completeness: not recorded, unavailable, …
❑ Consistency: some modified but some not, dangling, …
❑ Timeliness: timely updates?
❑ Believability: how much the data are to be trusted
❑ Interpretability: how easily the data can be understood
90
Major Tasks in Data Preprocessing
❑ Data cleaning
❑ Fill in missing values, smooth noisy data, identify or remove outliers, and resolve
inconsistencies
❑ Data integration
❑ Integration of multiple databases, data cubes, or files
❑ Data reduction
❑ Dimensionality reduction
❑ Numerosity reduction
❑ Data compression
❑ Data transformation and data discretization
❑ Normalization
❑ Concept hierarchy generation
91
Data Preprocessing
❑ Data Quality
❑ Data Cleaning
❑ Data Integration
❑ Data Reduction
92
Data Cleaning
❑ Data in the Real World Is Dirty: Lots of potentially incorrect data, e.g., instrument faulty, human or
computer error, transmission error
❑ Incomplete: lacking attribute values, lacking certain attributes of interest, or containing only
aggregate data
❑ e.g., Occupation=“ ” (missing data)
❑ Noisy: containing noise, errors, or outliers
❑ E.g., Salary=“−10” (an error)
❑ Inconsistent: containing discrepancies in codes or names, e.g.,
❑ Age=“42”, Birthday=“03/07/2010”
❑ Was rating “1, 2, 3”, now rating “A, B, C”
❑ Discrepancy between duplicate records
❑ Intentional (e.g., disguised missing data)
❑ Jan. 1 as everyone’s birthday?
93
Incomplete (Missing) Data
❑ Data is not always available
❑ E.g., many tuples have no recorded value for several attributes, such as customer
income in sales data
❑ Missing data may be due to
❑ Equipment malfunction
❑ Inconsistent with other recorded data and thus deleted
❑ Data not entered due to misunderstanding
❑ Certain data may not be considered important at the time of entry
❑ History or changes of the data were not registered
❑ Missing data may need to be inferred
94
How to Handle Missing Data?
❑ Ignore the tuple: usually done when class label is missing (when doing
classification)—not effective when the % of missing values per attribute varies
considerably
❑ Fill in the missing value manually: tedious + infeasible?
❑ Fill in it automatically with
❑ A global constant : e.g., “unknown”, a new class?!
❑ The attribute mean
❑ The attribute mean for all samples belonging to the same class: smarter
❑ The most probable value: inference-based such as Bayesian formula or decision
tree
95
Noisy Data
❑ Noise: random error or variance in a measured variable
❑ Incorrect attribute values may be due to
❑ faulty data collection instruments
❑ data entry problems
❑ data transmission problems
❑ technology limitation
❑ inconsistency in naming convention
❑ Other data problems which require data cleaning
❑ duplicate records
❑ incomplete data
❑ inconsistent data
96
How to Handle Noisy Data?
❑ Binning
❑ first sort data and partition into (equal-frequency) bins
❑ then one can smooth by bin means, smooth by bin median, smooth by bin
boundaries, etc.
❑ Regression
❑ smooth by fitting the data into regression functions
❑ Clustering
❑ detect and remove outliers
❑ Combined computer and human inspection
❑ detect suspicious values and check by human (e.g., deal with possible outliers)
97
Data Cleaning as a Process
❑ Data discrepancy detection
❑ Use metadata (e.g., domain, range, dependency, distribution)
❑ Check field overloading
❑ Check uniqueness rule, consecutive rule and null rule
❑ Use commercial tools
❑ Data scrubbing: use simple domain knowledge (e.g., postal code, spell-check) to detect errors
and make corrections
❑ Data auditing: by analyzing data to discover rules and relationship to detect violators (e.g.,
correlation and clustering to find outliers)
❑ Data migration and integration
❑ Data migration tools: allow transformations to be specified
❑ ETL (Extraction/Transformation/Loading) tools: allow users to specify transformations through
a graphical user interface
❑ Integration of the two processes
❑ Iterative and interactive (e.g., Potter’s Wheel)
98
Data Preprocessing
❑ Data Quality
❑ Data Cleaning
❑ Data Integration
❑ Data Reduction
99
Data Integration
❑ Data integration:
❑ Combines data from multiple sources into a coherent store
❑ Schema integration: e.g., A.cust-id ≡ B.cust-#
❑ Integrate metadata from different sources
❑ Entity identification problem:
❑ Identify real world entities from multiple data sources, e.g., Bill Clinton = William Clinton
❑ Detecting and resolving data value conflicts
❑ For the same real world entity, attribute values from different sources are different
❑ Possible reasons: different representations, different scales, e.g., metric vs. British units
100
Handling Redundancy in Data Integration
❑ Redundant data occur often when integration of multiple databases
❑ Object identification: The same attribute or object may have different names
in different databases
❑ Derivable data: One attribute may be a “derived” attribute in another table,
e.g., annual revenue
❑ Redundant attributes may be detected by correlation analysis and covariance analysis
❑ Careful integration of the data from multiple sources may help reduce/avoid
redundancies and inconsistencies and improve mining speed and quality
101
Correlation Analysis (Nominal Data)
❑ Χ² (chi-square) test:
Χ² = Σ (Observed − Expected)² / Expected
❑ The larger the Χ² value, the more likely the variables are related
❑ The cells that contribute the most to the Χ2 value are those whose actual count is very
different from the expected count
❑ Correlation does not imply causality
❑ # of hospitals and # of car-theft in a city are correlated
❑ Both are causally linked to the third variable: population
102
Chi-Square Calculation: An Example
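The worked table from this slide is not reproduced here; a sketch of the same computation with SciPy on illustrative 2×2 counts (rows: plays chess or not; columns: likes science fiction or not):

    from scipy.stats import chi2_contingency

    observed = [[250, 200],
                [50, 1000]]   # illustrative counts
    chi2, p, dof, expected = chi2_contingency(observed, correction=False)
    print(round(chi2, 2), p)  # chi2 of about 507.93: strongly correlated attributes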
103
Correlation Analysis (Numeric Data)
❑ Correlation coefficient (Pearson’s product-moment coefficient):
r_A,B = Σ (a_i − Ā)(b_i − B̄) / ((n − 1) σ_A σ_B) = (Σ (a_i b_i) − n Ā B̄) / ((n − 1) σ_A σ_B)
where n is the number of tuples, Ā and B̄ are the respective means of A and B, σ_A and σ_B are the respective standard deviations of A and B, and Σ(a_i b_i) is the sum of the AB cross-product.
❑ If r_A,B > 0, A and B are positively correlated (A’s values increase as B’s do). The higher the value, the stronger the correlation.
❑ r_A,B = 0: uncorrelated (no linear relationship); r_A,B < 0: negatively correlated
104
Visually Evaluating Correlation
105
Correlation (viewed as linear relationship)
❑ Correlation measures the linear relationship between objects
❑ To compute correlation, we standardize data objects, A and B, and then
take their dot product
106
Covariance (Numeric Data)
❑ Covariance is similar to correlation
Cov(A, B) = E[(A − Ā)(B − B̄)] = (1/n) Σ (a_i − Ā)(b_i − B̄) = E[A·B] − Ā·B̄
Correlation coefficient: r_A,B = Cov(A, B) / (σ_A σ_B)
where n is the number of tuples, Ā and B̄ are the respective means (expected values) of A and B, and σ_A and σ_B are the respective standard deviations of A and B.
❑ Positive covariance: If CovA,B > 0, then A and B both tend to be larger than their expected values.
❑ Negative covariance: If CovA,B < 0 then if A is larger than its expected value, B is likely to be smaller than its
expected value.
❑ Independence: if A and B are independent, Cov_A,B = 0, but the converse is not true:
❑ Some pairs of random variables may have a covariance of 0 but are not independent. Only under some additional
assumptions (e.g., the data follow multivariate normal distributions) does a covariance of 0 imply independence
107
Co-Variance: An Example
❑ Suppose two stocks A and B have the following values in one week: (2, 5), (3, 8), (5, 10), (4, 11), (6, 14).
❑ Question: If the stocks are affected by the same industry trends, will their prices rise or fall together?
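The question can be answered directly (a short sketch using the population formulas):

    import numpy as np

    A = np.array([2, 3, 5, 4, 6])     # stock A's prices
    B = np.array([5, 8, 10, 11, 14])  # stock B's prices
    cov = np.mean(A * B) - np.mean(A) * np.mean(B)   # E[AB] - E[A]E[B] = 4.0
    r = cov / (np.std(A) * np.std(B))                # correlation = 0.94
    print(cov, round(r, 2))
    # Positive covariance: the two prices tend to rise (and fall) together.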
108
Data Preprocessing
❑ Data Quality
❑ Data Cleaning
❑ Data Integration
❑ Data Reduction
109
Data Reduction Strategies
❑ Data reduction: Obtain a reduced representation of the data set that is much smaller in volume
but yet produces the same (or almost the same) analytical results
❑ Why data reduction? — A database/data warehouse may store terabytes of data. Complex
data analysis may take a very long time to run on the complete data set.
❑ Data reduction strategies
❑ Dimensionality reduction, e.g., remove unimportant attributes
❑ Wavelet transforms
❑ Principal Components Analysis (PCA)
❑ Feature subset selection, feature creation
❑ Numerosity reduction (some simply call it: Data Reduction)
❑ Regression and Log-Linear Models
❑ Histograms, clustering, sampling
❑ Data cube aggregation
❑ Data compression
110
Data Reduction 1: Dimensionality Reduction
❑ Curse of dimensionality
❑ When dimensionality increases, data becomes increasingly sparse
❑ Density and distance between points, which are critical to clustering and outlier analysis, become less meaningful
❑ The possible combinations of subspaces will grow exponentially
❑ Dimensionality reduction
❑ Avoid the curse of dimensionality
❑ Help eliminate irrelevant features and reduce noise
❑ Reduce time and space required in data mining
❑ Allow easier visualization
❑ Dimensionality reduction techniques
❑ Wavelet transforms
❑ Principal Component Analysis
❑ Supervised and nonlinear techniques (e.g., feature selection)
111
Mapping Data to a New Space
❑ Fourier transform
❑ Wavelet transform
112
What Is Wavelet Transform?
❑ Decomposes a signal into different
frequency subbands
❑ Applicable to n-dimensional signals
❑ Data are transformed to preserve relative
distance between objects at different levels
of resolution
❑ Allow natural clusters to become more
distinguishable
❑ Used for image compression
113
Wavelet Transformation
(Figure: Haar-2 and Daubechies-4 wavelet functions)
❑ Discrete wavelet transform (DWT) for linear signal processing, multi-resolution analysis
❑ Compressed approximation: store only a small fraction of the strongest of the wavelet
coefficients
❑ Similar to discrete Fourier transform (DFT), but better lossy compression, localized in
space
❑ Method:
❑ Length, L, must be an integer power of 2 (padding with 0’s, when necessary)
❑ Each transform has 2 functions: smoothing, difference
❑ Applies to pairs of data, resulting in two sets of data of length L/2
❑ Applies two functions recursively, until reaches the desired length
114
Wavelet Decomposition
❑ Wavelets: A math tool for space-efficient hierarchical decomposition of functions
❑ S = [2, 2, 0, 2, 3, 5, 4, 4] can be transformed to S^ = [2¾, −1¼, ½, 0, 0, −1, −1, 0] = [2.75, −1.25, 0.5, 0, 0, −1, −1, 0]
❑ Compression: many small detail coefficients can be replaced by 0’s, and only the
significant coefficients are retained
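A sketch of the 1-D Haar decomposition: pairs are repeatedly replaced by their average (smoothing) and half-difference (detail), reproducing S^ above:

    def haar(signal):
        data, details_out = list(signal), []
        while len(data) > 1:
            avgs = [(a + b) / 2 for a, b in zip(data[0::2], data[1::2])]
            dets = [(a - b) / 2 for a, b in zip(data[0::2], data[1::2])]
            details_out = dets + details_out   # coarser details go in front
            data = avgs
        return data + details_out              # overall average, then details

    print(haar([2, 2, 0, 2, 3, 5, 4, 4]))
    # [2.75, -1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0]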
115
Haar Wavelet Coefficients
(Figure: hierarchical decomposition structure, a.k.a. “error tree”, with coefficient “supports”: overall average 2.75 at the root, −1.25 below it, then 0.5 and 0, then the detail coefficients 0, −1, −1, 0 over the original data 2, 2, 0, 2, 3, 5, 4, 4; + and − signs mark where each coefficient adds to or subtracts from the reconstruction)
116
Why Wavelet Transform?
❑ Use hat-shape filters
❑ Emphasize region where points cluster
❑ Suppress weaker information in their boundaries
❑ Effective removal of outliers
❑ Insensitive to noise, insensitive to input order
❑ Multi-resolution
❑ Detect arbitrary shaped clusters at different scales
❑ Efficient
❑ Complexity O(N)
❑ Only applicable to low dimensional data
117
Principal Component Analysis (PCA)
❑ Find a projection that captures the largest amount of variation in data
❑ The original data are projected onto a much smaller space, resulting in dimensionality reduction. We
find the eigenvectors of the covariance matrix, and these eigenvectors define the new space
(Figure: 2-D data points with axes x1, x2 and the principal component directions)
118
Principal Component Analysis (Steps)
❑ Given N data vectors from n-dimensions, find k ≤ n orthogonal vectors (principal
components) that can be best used to represent data
❑ Normalize input data: Each attribute falls within the same range
❑ Compute k orthonormal (unit) vectors, i.e., principal components
❑ Each input data (vector) is a linear combination of the k principal component vectors
❑ The principal components are sorted in order of decreasing “significance” or strength
❑ Since the components are sorted, the size of the data can be reduced by eliminating the
weak components, i.e., those with low variance (i.e., using the strongest principal
components, it is possible to reconstruct a good approximation of the original data)
❑ Works for numeric data only
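A numpy sketch of these steps on synthetic data (the mixing matrix is made up):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 3)) @ np.array([[3, 0, 0],
                                              [1, 1, 0],
                                              [0, 0, 0.1]])  # synthetic numeric data

    Xc = X - X.mean(axis=0)                  # center the input data
    cov = np.cov(Xc, rowvar=False)           # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # its eigenvectors define the new space
    order = np.argsort(eigvals)[::-1]        # sort by decreasing variance
    W = eigvecs[:, order[:2]]                # keep the 2 strongest components
    print((Xc @ W).shape)                    # (100, 2): reduced representation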
119
Attribute Subset Selection
❑ Another way to reduce dimensionality of data
❑ Redundant attributes
❑ Duplicate much or all of the information contained in one or more other attributes
❑ E.g., purchase price of a product and the amount of sales tax paid
❑ Irrelevant attributes
❑ Contain no information that is useful for the data mining task at hand
❑ E.g., students' ID is often irrelevant to the task of predicting students' GPA
120
Heuristic Search in Attribute Selection
❑ There are 2^d possible attribute combinations of d attributes
❑ Typical heuristic attribute selection methods:
❑ Best single attribute under the attribute independence assumption: choose by
significance tests
❑ Best step-wise feature selection:
❑ The best single-attribute is picked first
❑ Then the next best attribute conditioned on the first, ...
❑ Step-wise attribute elimination:
❑ Repeatedly eliminate the worst attribute
❑ Best combined attribute selection and elimination
❑ Optimal branch and bound:
❑ Use attribute elimination and backtracking
121
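A sketch of best step-wise (forward) selection; the R² of a least-squares fit stands in for the significance test, and the data are synthetic:

    import numpy as np

    def forward_select(X, y, k):
        chosen, remaining = [], list(range(X.shape[1]))
        def r2(cols):
            A = np.column_stack([X[:, cols], np.ones(len(y))])
            resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
            return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        while remaining and len(chosen) < k:
            best = max(remaining, key=lambda j: r2(chosen + [j]))
            chosen.append(best)
            remaining.remove(best)
        return chosen

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = 2 * X[:, 3] - X[:, 1] + 0.1 * rng.normal(size=200)
    print(forward_select(X, y, k=2))   # [3, 1]: the informative attributes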
Attribute Creation (Feature Generation)
❑ Create new attributes (features) that can capture the important information in a data
set more effectively than the original ones
❑ Three general methodologies
❑ Attribute extraction
❑ Domain-specific
❑ Mapping data to new space (see: data reduction)
❑ E.g., Fourier transformation, wavelet transformation, manifold approaches (not
covered)
❑ Attribute construction
❑ Combining features (see: discriminative frequent patterns in Chapter 7)
❑ Data discretization
122
Data Reduction 2: Numerosity Reduction
❑ Reduce data volume by choosing alternative, smaller forms of data representation
❑ Parametric methods (e.g., regression)
❑ Assume the data fits some model, estimate model parameters, store only the
parameters, and discard the data (except possible outliers)
❑ Ex.: Log-linear models: obtain the value at a point in m-D space as a product over appropriate marginal subspaces
❑ Non-parametric methods
❑ Do not assume models
❑ Major families: histograms, clustering, sampling, …
123
Parametric Data Reduction: Regression and Log-Linear Models
❑ Linear regression
❑ Data modeled to fit a straight line
❑ Often uses the least-square method to fit the line
❑ Multiple regression
❑ Allows a response variable Y to be modeled as a linear function of
multidimensional feature vector
❑ Log-linear model
❑ Approximates discrete multidimensional probability distributions
124
Regression Analysis
❑ Regression analysis: A collective name for techniques for the modeling and analysis of numerical data consisting of values of a dependent variable (also called response variable or measurement) and of one or more independent variables (aka. explanatory variables or predictors)
❑ The parameters are estimated so as to give a “best fit” of the data
❑ Most commonly the best fit is evaluated by using the least squares method, but other criteria have also been used
❑ Used for prediction (including forecasting of time-series data), inference, hypothesis testing, and modeling of causal relationships
(Figure: data points (X1, Y1) with the fitted line y = x + 1)
125
Regression Analysis and Log-Linear Models
❑ Linear regression: Y = w X + b
❑ Two regression coefficients, w and b, specify the line and are to be estimated by using the data at hand
❑ Using the least squares criterion on the known values of Y1, Y2, …, X1, X2, …
❑ Multiple regression: Y = b0 + b1 X1 + b2 X2
❑ Many nonlinear functions can be transformed into the above
❑ Log-linear models:
❑ Approximate discrete multidimensional probability distributions
❑ Estimate the probability of each point (tuple) in a multi-dimensional space for a set of discretized
attributes, based on a smaller subset of dimensional combinations
❑ Useful for dimensionality reduction and data smoothing
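A minimal least-squares fit (the five points are made up to lie near y = x + 1, matching the figure on the previous slide):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 2.9, 4.2, 5.1, 5.8])
    w, b = np.polyfit(x, y, deg=1)    # slope and intercept minimizing squared error
    print(round(w, 2), round(b, 2))   # close to w = 1, b = 1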
126
Histogram Analysis
❑ Divide data into buckets and store the average (or sum) for each bucket
❑ Partitioning rules:
❑ Equal-width: equal bucket range
❑ Equal-frequency (or equal-depth)
(Figure: equal-width histogram over the range 10,000–90,000)
127
Clustering
❑ Partition data set into clusters based on similarity, and store cluster representation
(e.g., centroid and diameter) only
❑ Can be very effective if data is clustered but not if data is “smeared”
❑ Can have hierarchical clustering and be stored in multi-dimensional index tree
structures
❑ There are many choices of clustering definitions and clustering algorithms
❑ Cluster analysis will be studied in depth in Chapter 10
128
Sampling
❑ Sampling: obtaining a small sample s to represent the whole data set N
❑ Allow a mining algorithm to run in complexity that is potentially sub-linear to the size
of the data
❑ Key principle: Choose a representative subset of the data
❑ Simple random sampling may have very poor performance in the presence of skew
❑ Develop adaptive sampling methods, e.g., stratified sampling:
❑ Note: Sampling may not reduce database I/Os (page at a time)
129
Types of Sampling
❑ Simple random sampling
❑ There is an equal probability of selecting any particular item
❑ Sampling without replacement
❑ Once an object is selected, it is removed from the population
❑ Sampling with replacement
❑ A selected object is not removed from the population
❑ Stratified sampling:
❑ Partition the data set, and draw samples from each partition (proportionally, i.e.,
approximately the same percentage of the data)
❑ Used in conjunction with skewed data
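A sketch of the three schemes with the standard library (the population and the strata cut-offs are toy assumptions):

    import random

    random.seed(7)
    population = list(range(1, 101))
    srswor = random.sample(population, 10)                  # without replacement
    srswr = [random.choice(population) for _ in range(10)]  # with replacement

    # Stratified: draw proportionally (here ~10%) from each partition
    strata = {"young": list(range(40)),
              "middle": list(range(40, 90)),
              "senior": list(range(90, 100))}
    stratified = [x for s in strata.values()
                  for x in random.sample(s, max(1, len(s) // 10))]
    print(len(srswor), len(srswr), len(stratified))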
130
Sampling: With or without Replacement
(Figure: raw data sampled without replacement (SRSWOR) and with replacement (SRSWR))
131
Sampling: Cluster or Stratified Sampling
132
Data Cube Aggregation
❑ The lowest level of a data cube (base cuboid)
❑ The aggregated data for an individual entity of interest
❑ E.g., a customer in a phone calling data warehouse
❑ Multiple levels of aggregation in data cubes
❑ Further reduce the size of data to deal with
❑ Reference appropriate levels
❑ Use the smallest representation which is enough to solve the task
❑ Queries regarding aggregated information should be answered using the data cube, when possible
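A pandas sketch of aggregating base-level records up one level of the cube (the schema is illustrative):

    import pandas as pd

    sales = pd.DataFrame({
        "customer": ["A", "A", "B", "B"],
        "date": pd.to_datetime(["2024-01-03", "2024-01-19", "2024-01-07", "2024-02-11"]),
        "amount": [120.0, 80.0, 50.0, 200.0],
    })
    # Base cuboid: one row per transaction; aggregate to (customer, month)
    monthly = sales.groupby(["customer", sales["date"].dt.to_period("M")])["amount"].sum()
    print(monthly)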
133
Data Reduction 3: Data Compression
❑ String compression
❑ There are extensive theories and well-tuned algorithms
❑ Typically lossless, but only limited manipulation is possible without expansion
❑ Audio/video compression
❑ Typically lossy compression, with progressive refinement
❑ Sometimes small fragments of signal can be reconstructed without reconstructing the
whole
❑ Time sequences are not audio
❑ Typically short and varying slowly with time
❑ Dimensionality and numerosity reduction may also be considered as forms of data
compression
134
Data Compression
(Figure: original data alongside a lossless compressed version and a lossy approximation)
135
Data Preprocessing
❑ Data Quality
❑ Data Cleaning
❑ Data Integration
❑ Data Reduction
136
Data Transformation
❑ A function that maps the entire set of values of a given attribute to a new set of replacement values
s.t. each old value can be identified with one of the new values
❑ Methods
❑ Smoothing: Remove noise from data
❑ Attribute/feature construction
❑ New attributes constructed from the given ones
❑ Aggregation: Summarization, data cube construction
❑ Normalization: Scaled to fall within a smaller, specified range
❑ min-max normalization
❑ z-score normalization
❑ normalization by decimal scaling
❑ Discretization: Concept hierarchy climbing
137
Normalization
❑ Min-max normalization: to [new_min_A, new_max_A]:
v' = ((v − min_A) / (max_A − min_A)) × (new_max_A − new_min_A) + new_min_A
❑ Ex. Let income range $12,000 to $98,000 be normalized to [0.0, 1.0]. Then $73,000 is mapped to (73,000 − 12,000) / (98,000 − 12,000) = 0.709
❑ Z-score normalization (μ: mean, σ: standard deviation): v' = (v − μ) / σ
❑ Normalization by decimal scaling: v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1
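Both normalizations in a few lines (for the z-score, the income mean and standard deviation are assumed values):

    v, min_a, max_a = 73_000, 12_000, 98_000
    print(round((v - min_a) / (max_a - min_a), 3))   # min-max to [0, 1]: 0.709

    mu, sigma = 54_000, 16_000                       # assumed population statistics
    print(round((v - mu) / sigma, 3))                # z-score: about 1.19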
138
Discretization
❑ Three types of attributes
❑ Nominal—values from an unordered set, e.g., color, profession
❑ Ordinal—values from an ordered set, e.g., military or academic rank
❑ Numeric—integer or real numbers
❑ Discretization: Divide the range of a continuous attribute into intervals
❑ Interval labels can then be used to replace actual data values
❑ Reduce data size by discretization
❑ Supervised vs. unsupervised
❑ Split (top-down) vs. merge (bottom-up)
❑ Discretization can be performed recursively on an attribute
❑ Prepare for further analysis, e.g., classification
139
Data Discretization Methods
❑ Typical methods: All the methods can be applied recursively
❑ Binning
❑ Top-down split, unsupervised
❑ Histogram analysis
❑ Top-down split, unsupervised
❑ Clustering analysis (unsupervised, top-down split or bottom-up merge)
❑ Decision-tree analysis (supervised, top-down split)
❑ Correlation (e.g., Χ²) analysis (unsupervised, bottom-up merge)
140
Simple Discretization: Binning
❑ Equal-width (distance) partitioning
❑ Divides the range into N intervals of equal size: if A and B are the lowest and highest values of the attribute, the interval width is W = (B − A) / N
❑ The most straightforward, but outliers may dominate the presentation; skewed data is not handled well
❑ Equal-depth (frequency) partitioning
❑ Divides the range into N intervals, each containing approximately the same number of samples
❑ Good data scaling
141
Binning Methods for Data Smoothing
❑ Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
* Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
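The worked example can be reproduced directly (a sketch; equal-frequency bins of size 4, as above):

    prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]   # already sorted
    bins = [prices[i:i + 4] for i in range(0, len(prices), 4)]

    # Smoothing by bin means: every value becomes its bin's (rounded) mean
    print([[round(sum(b) / len(b))] * len(b) for b in bins])
    # Smoothing by bin boundaries: every value snaps to the nearer of min/max
    print([[min(b) if v - min(b) <= max(b) - v else max(b) for v in b] for b in bins])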
142
Discretization Without Using Class Labels
(Binning vs. Clustering)
143
Discretization by Classification & Correlation
Analysis
❑ Classification (e.g., decision tree analysis): supervised, top-down split
❑ Correlation analysis: bottom-up merge: find the best neighboring intervals (those having similar distributions of classes, i.e., low χ² values) to merge
144
Concept Hierarchy Generation
❑ Concept hierarchy organizes concepts (i.e., attribute values) hierarchically and is usually associated with
each dimension in a data warehouse
❑ Concept hierarchies facilitate drilling and rolling in data warehouses to view data in multiple granularity
❑ Concept hierarchy formation: Recursively reduce the data by collecting and replacing low level concepts
(such as numeric values for age) by higher level concepts (such as youth, adult, or senior)
❑ Concept hierarchies can be explicitly specified by domain experts and/or data warehouse designers
❑ Concept hierarchy can be automatically formed for both numeric and nominal data. For numeric data,
use discretization methods shown.
145
Concept Hierarchy Generation for Nominal Data
146
Automatic Concept Hierarchy Generation
❑ Some hierarchies can be automatically generated based on the analysis of the number of distinct values per attribute in the data set
❑ The attribute with the most distinct values is placed at the lowest level of the
hierarchy
❑ Exceptions, e.g., weekday, month, quarter, year
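A sketch of this heuristic with pandas (toy location data; exceptions such as weekday vs. year still need manual handling):

    import pandas as pd

    df = pd.DataFrame({
        "country": ["IN", "IN", "IN", "US"],
        "state": ["TS", "TS", "AP", "CA"],
        "city": ["Hyderabad", "Warangal", "Vijayawada", "San Jose"],
    })
    # Fewest distinct values -> top of the hierarchy; most distinct -> bottom
    order = df.nunique().sort_values().index.tolist()
    print(" < ".join(reversed(order)))   # city < state < country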
147
END OF UNIT - I