02 KnowYourData

The document is a lecture on data mining concepts and techniques, focusing on understanding data through various aspects such as data objects, attribute types, statistical descriptions, and visualization methods. It discusses different types of data sets, important characteristics of structured data, and methods for measuring data similarity and dissimilarity. Key statistical concepts such as central tendency, dispersion, and graphical representations like histograms and boxplots are also covered.

Università degli Studi di Milano

Master Degree in Computer Science

Information Management course
Teacher: Alberto Ceselli

Lecture 02 : 03/10/2012
Data Mining:
Concepts and
Techniques

— Chapter 2 —
Jiawei Han, Micheline Kamber, and Jian Pei
University of Illinois at Urbana-Champaign
Simon Fraser University
©2012 Han, Kamber, and Pei. All rights reserved.
Chapter 2: Getting to Know Your Data

 Data Objects and Attribute Types

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary

Types of Data Sets
 Record
 Relational records
 Data matrix, e.g., numerical matrix, crosstabs
 Document data: text documents: term-frequency vector
 Transaction data
 Graph and network
 World Wide Web
 Social or information networks
 Molecular structures
 Ordered
 Video data: sequence of images
 Temporal data: time-series
 Sequential data: transaction sequences
 Genetic sequence data
 Spatial, image and multimedia
 Spatial data: maps
 Image data: .bmp
 Video data: .avi

Example of transaction data:
TID  Items
1    Bread, Coke, Milk
2    Beer, Bread
3    Beer, Coke, Diaper, Milk
4    Beer, Bread, Diaper, Milk
5    Coke, Diaper, Milk
Important Characteristics of Structured Data
 Dimensionality
 Curse of dimensionality (the volume of the space grows fast with the number of dimensions, and the available data becomes sparse)
 Sparsity
 Only presence counts
 Resolution
 Patterns depend on the scale
 Distribution
 Centrality and dispersion
Data Objects

 Data sets are made up of data objects.


 A data object represents an entity.
 Examples:
 sales database: customers, store items, sales
 medical database: patients, treatments
 university database: students, professors, courses
 Also called samples, examples, instances, data points, objects, tuples.
 Data objects are described by attributes.
 Database rows -> data objects; columns -> attributes.
Attributes
 Attribute (or dimensions, features, variables): a data field, representing a characteristic or feature of a data object.
 E.g., customer_ID, name, address
 Types:
 Nominal
 Binary
 Ordinal
 Numeric: quantitative
 Interval-scaled
 Ratio-scaled
Attribute Types
 Nominal: categories, states, or “names of things”
 Hair_color = {auburn, black, blond, brown, grey, red,
white}
 marital status, occupation, ID numbers, zip codes
 Binary
 Nominal attribute with only 2 states (0 and 1)
 Symmetric binary: both outcomes equally important
 e.g., gender
 Asymmetric binary: outcomes not equally important.
 e.g., medical test (positive vs. negative)
 Convention: assign 1 to the most important outcome (e.g., HIV positive)
 Ordinal
 Values have a meaningful order (ranking) but magnitude
between successive values is not known.
 Size = {small, medium, large}, grades, army rankings
Numeric Attribute Types
 Quantity (integer or real-valued)
 Interval
 Measured on a scale of equal-sized units

 Values have order

 E.g., temperature in C˚or F˚, calendar dates


 No true zero-point
 Ratio
 Inherent zero-point
 We can speak of values as being an order of magnitude larger than the unit of measurement (10 K˚ is twice as high as 5 K˚)
 e.g., temperature in Kelvin, length, counts, monetary quantities
Discrete vs. Continuous
Attributes (ML view)
 Discrete Attribute
 Has only a finite or countably infinite set of values
 E.g., zip codes, profession, or the set of words in a collection of documents
 Sometimes represented as integer variables
 Note: Binary attributes are a special case of discrete attributes
 Continuous Attribute
 Has real numbers as attribute values
 E.g., temperature, height, or weight
 Practically, real values can only be measured and represented using a finite number of digits
 Continuous attributes are typically represented as floating-point variables
Chapter 2: Getting to Know Your Data

 Data Objects and Attribute Types

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary

Basic Statistical Descriptions of
Data
 Motivation
 To better understand the data: central tendency,
variation and spread
 Data dispersion characteristics
 median, max, min, quantiles, outliers, variance...
 Numerical dimensions correspond to sorted intervals
 Data dispersion: analyzed with multiple
granularities of precision
 Boxplot or quantile analysis on sorted intervals
 Dispersion analysis on computed measures
 Folding measures into numerical dimensions
 Boxplot or quantile analysis on the transformed cube
Measuring the Central Tendency
 Mean (algebraic measure) (sample vs. population):
   x̄ = (1/n) Σi=1..n xi        μ = (1/N) Σi=1..N xi
 Note: n is the sample size and N is the population size.
 Weighted arithmetic mean:
   x̄ = (Σi=1..n wi xi) / (Σi=1..n wi)
 Sensitive to outliers: trimmed mean (chopping extreme values)
 Median:
 Middle value if odd number of values, or average of the middle two values otherwise
 Estimated by interpolation (for grouped data):
   median = L1 + ((n/2 − (Σ freq)l) / freqmedian) × width
 where L1 is the lower boundary of the median interval, n is the number of values in the dataset, (Σ freq)l is the sum of the frequencies of the intervals preceding the median interval, freqmedian is the frequency of the median interval, and width is the width of the median interval.
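The central-tendency measures above can be sketched in a few lines of Python (a minimal stdlib-only illustration; the sample data is arbitrary):

```python
# Central tendency measures: mean, weighted mean, trimmed mean, median.

def mean(xs):
    return sum(xs) / len(xs)

def weighted_mean(xs, ws):
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

def trimmed_mean(xs, k):
    # Chop the k smallest and k largest values before averaging.
    s = sorted(xs)
    return mean(s[k:len(s) - k])

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    # Middle value if n is odd, average of the two middle values otherwise.
    return s[mid] if n % 2 == 1 else (s[mid - 1] + s[mid]) / 2

data = [1, 3, 3, 6, 7, 8, 100]
print(mean(data))             # dominated by the outlier 100
print(trimmed_mean(data, 1))  # 5.4: the outlier no longer dominates
print(median(data))           # 6
```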
Measuring the Central Tendency
 Mode
 Value that occurs most frequently in the data
 Unimodal, bimodal, trimodal
 Empirical formula for moderately skewed data:
   mean − mode ≃ 3 × (mean − median)

Example (salaries of 12 employees):
Employee  Salary
1         30
2         36
3         47
4         50
5         52
6         52
7         56
8         60
9         63
10        70
11        70
12        110

 Mean: 58
 Median: (52+56)/2 = 54
 Mode: 52 and 70 (bimodal)
 Midrange: (30+110)/2 = 70
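The example's figures can be checked with Python's statistics module (multimode requires Python 3.8+):

```python
# Reproducing the slide's salary example with stdlib tools.
from statistics import mean, median, multimode

salaries = [30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110]

print(mean(salaries))                        # 58
print(median(salaries))                      # 54.0, i.e. (52+56)/2
print(multimode(salaries))                   # [52, 70]: bimodal
print((min(salaries) + max(salaries)) / 2)   # 70.0: midrange
```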
Symmetric vs. Skewed Data
 Median, mean and mode of symmetric, positively and negatively skewed data
 (Figure: symmetric, positively skewed, and negatively skewed distributions)

Data Mining: Concepts and Techniques (October 5, 2012)
Measuring the Dispersion of Data
 Quartiles, outliers and boxplots
 Quartiles: Q1 (25th percentile), Q3 (75th percentile)
 Inter-quartile range: IQR = Q3 − Q1
 Five-number summary: min, Q1, median, Q3, max (nice for skewed distributions)
 Boxplot: ends of the box are the quartiles; median is marked; add whiskers, and plot outliers individually
 Outlier: usually, a value more than 1.5 × IQR above Q3 or below Q1
 Variance and standard deviation (sample: s, population: σ)
 Variance (algebraic, scalable computation):
   s² = (1/n) Σi=1..n (xi − x̄)² = (1/n) [Σi=1..n xi² − (1/n)(Σi=1..n xi)²]
   σ² = (1/N) Σi=1..N (xi − μ)² = (1/N) Σi=1..N xi² − μ²
 Standard deviation s (or σ) is the square root of the variance
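A minimal Python sketch of these dispersion measures. Note the quartile rule below is the simple median-of-halves convention; other conventions give slightly different Q1/Q3 values:

```python
# Variance, standard deviation, five-number summary and IQR-based outliers,
# following the slide's 1/n variance formula.
import math

def variance(xs):
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / n

def std_dev(xs):
    return math.sqrt(variance(xs))

def five_number_summary(xs):
    s = sorted(xs)
    n = len(s)
    def med(seq):
        m = len(seq) // 2
        return seq[m] if len(seq) % 2 else (seq[m - 1] + seq[m]) / 2
    lower, upper = s[:n // 2], s[(n + 1) // 2:]   # halves around the median
    return (s[0], med(lower), med(s), med(upper), s[-1])

data = [30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110]
mn, q1, md, q3, mx = five_number_summary(data)
iqr = q3 - q1
print((mn, q1, md, q3, mx))   # (30, 48.5, 54.0, 66.5, 110)
print([x for x in data if x > q3 + 1.5 * iqr or x < q1 - 1.5 * iqr])  # [110]
```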
Boxplot Analysis
 Five-number summary of a distribution
 Minimum, Q1, Median, Q3, Maximum
 Boxplot
 Data is represented with a box
 The ends of the box are at the first and
third quartiles, i.e., the height of the
box is IQR
 The median is marked by a line within
the box
 Whiskers: two lines outside the box
extended to Minimum and Maximum
 Outliers: points beyond a specified
outlier threshold, plotted individually
Visualization of Data Dispersion: 3-D Boxplots
 (Figure: 3-D boxplots showing data dispersion)
Properties of the Normal Distribution Curve
 The normal (distribution) curve
 From μ−σ to μ+σ: contains about 68% of the measurements (μ: mean, σ: standard deviation)
 From μ−2σ to μ+2σ: contains about 95% of the measurements
 From μ−3σ to μ+3σ: contains about 99.7% of the measurements
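These coverage percentages follow from the Gaussian CDF: the fraction of a normal distribution within k standard deviations of the mean is erf(k/√2), which the standard library can verify:

```python
# Fraction of a normal distribution within k standard deviations of the mean:
# P(|X - mu| <= k*sigma) = erf(k / sqrt(2)).
import math

for k in (1, 2, 3):
    coverage = math.erf(k / math.sqrt(2))
    print(f"within {k} sigma: {coverage:.4f}")
# within 1 sigma: 0.6827
# within 2 sigma: 0.9545
# within 3 sigma: 0.9973
```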
Graphic Displays of Basic Statistical Descriptions
 Boxplot: graphic display of the five-number summary
 Histogram: x-axis shows values, y-axis shows frequencies
 Quantile plot: each value xi is paired with fi indicating that approximately 100·fi% of the data are ≤ xi
 Quantile-quantile (q-q) plot: graphs the quantiles of one univariate distribution against the corresponding quantiles of another
 Scatter plot: each pair of values is a pair of coordinates and plotted as points in the plane
Histogram Analysis

 Histogram: graph display of tabulated frequencies, shown as bars
 It shows what proportion of cases fall into each of several categories
 Differs from a bar chart in that it is the area of the bar that denotes the value, not the height as in bar charts; a crucial distinction when the categories are not of uniform width
 The categories are usually specified as non-overlapping intervals of some variable. The categories (bars) must be adjacent
Histograms Often Tell More than
Boxplots

 The two histograms shown on the left may have the same boxplot representation
 The same values for: min, Q1, median, Q3, max
 But they have rather different data distributions
Quantile Plot
 Displays all of the data (allowing the user to assess both the overall behavior and unusual occurrences)
 Plots quantile information
 For data xi sorted in increasing order, fi indicates that approximately 100·fi% of the data are below or equal to the value xi
Quantile-Quantile (Q-Q) Plot
 Graphs the quantiles of one univariate distribution against
the corresponding quantiles of another
 View: is there a shift in going from one distribution to another?
 Example shows unit price of items sold at Branch 1 vs.
Branch 2 for each quantile. Unit prices of items sold at
Branch 1 tend to be lower than those at Branch 2.

Scatter plot
 Provides a first look at bivariate data to see clusters of points, outliers, etc.
 Each pair of values is treated as a pair of
coordinates and plotted as points in the plane

Positively and Negatively Correlated
Data

 The left half fragment is positively correlated
 The right half is negatively correlated
Uncorrelated Data

Chapter 2: Getting to Know Your Data

 Data Objects and Attribute Types

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary

Similarity and Dissimilarity
 Similarity
 Numerical measure of how alike two data objects are
 Value is higher when objects are more alike
 Often falls in the range [0,1]
 Dissimilarity (e.g., distance)
 Numerical measure of how different two data objects are
 Lower when objects are more alike
 Minimum dissimilarity is often 0
 Upper limit varies
 Proximity refers to either a similarity or a dissimilarity
Data Matrix and Dissimilarity Matrix
 Data matrix
 n data points (objects) with p dimensions (features)
 Two modes

   [ x11 ... x1f ... x1p ]
   [ ... ... ... ... ... ]
   [ xi1 ... xif ... xip ]
   [ ... ... ... ... ... ]
   [ xn1 ... xnf ... xnp ]

 Dissimilarity matrix
 n data points, but registers only the distance
 A triangular matrix
 Single mode

   [ 0                      ]
   [ d(2,1)  0              ]
   [ d(3,1)  d(3,2)  0      ]
   [ :       :       :      ]
   [ d(n,1)  d(n,2)  ...  0 ]
Proximity Measures for Binary Attributes
 A contingency table for binary data (q, r, s, t count the four attribute combinations over objects i and j):

              Object j
              1    0
 Object i  1  q    r
           0  s    t

 Distance measure for symmetric binary variables (0 and 1 equally important):
   d(i, j) = (r + s) / (q + r + s + t)
 Distance measure for asymmetric binary variables (1 more important, e.g. diseases):
   d(i, j) = (r + s) / (q + r + s)
 Jaccard coefficient (similarity measure for asymmetric binary variables):
   sim(i, j) = q / (q + r + s)
 Note: the Jaccard coefficient is the same as “coherence”
Dissimilarity between Binary
Variables
 Example
Name Gender Fever Cough Test-1 Test-2 Test-3 Test-4
Jack M Y N P N N N
Mary F Y N P N P N
Jim M Y P N N N N

 Gender is a symmetric attribute (let's discard it!)


 The remaining attributes are asymmetric binary
 Let the values Y and P be 1, and the value N be 0

   d(jack, mary) = (0 + 1) / (2 + 0 + 1) = 0.33
   d(jack, jim)  = (1 + 1) / (1 + 1 + 1) = 0.67
   d(jim, mary)  = (1 + 2) / (1 + 1 + 2) = 0.75
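The computation can be reproduced with a short Python sketch of the asymmetric binary distance d = (r + s) / (q + r + s), with the attribute vectors coded as on the slide:

```python
# Asymmetric binary dissimilarity, reproducing the Jack/Mary/Jim example
# (Y and P coded as 1, N as 0; the symmetric gender attribute is discarded).

def asym_binary_dist(i, j):
    q = sum(a == 1 and b == 1 for a, b in zip(i, j))  # both 1
    r = sum(a == 1 and b == 0 for a, b in zip(i, j))  # 1 in i, 0 in j
    s = sum(a == 0 and b == 1 for a, b in zip(i, j))  # 0 in i, 1 in j
    return (r + s) / (q + r + s)                      # t (both 0) is ignored

#       Fever, Cough, Test-1, Test-2, Test-3, Test-4
jack = [1, 0, 1, 0, 0, 0]
mary = [1, 0, 1, 0, 1, 0]
jim  = [1, 1, 0, 0, 0, 0]

print(round(asym_binary_dist(jack, mary), 2))  # 0.33
print(round(asym_binary_dist(jack, jim), 2))   # 0.67
print(round(asym_binary_dist(jim, mary), 2))   # 0.75
```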
Proximity Measures for Nominal Attributes
 Can take 2 or more states, e.g., red, yellow, blue, green (generalization of a binary attribute)
 Method 1: Simple matching
 m: # of matches, p: total # of variables
   d(i, j) = (p − m) / p
 Method 2: Use a large number of binary attributes
 creating a new binary attribute for each of the M nominal states
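Method 1 (simple matching) as a Python sketch; the attribute values below are made-up examples, not from the slides:

```python
# Simple matching dissimilarity for nominal attributes: d(i, j) = (p - m) / p.

def nominal_dist(i, j):
    p = len(i)
    m = sum(a == b for a, b in zip(i, j))  # number of matching attributes
    return (p - m) / p

a = ["red", "single", "engineer"]
b = ["red", "married", "engineer"]
print(nominal_dist(a, b))  # 1 mismatch out of 3 -> 1/3
```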
Proximity on Numeric Data: Minkowski Distance
 Minkowski distance: a popular distance measure
   d(i, j) = (|xi1 − xj1|^h + |xi2 − xj2|^h + ... + |xip − xjp|^h)^(1/h)
 where i = (xi1, xi2, …, xip) and j = (xj1, xj2, …, xjp) are two p-dimensional data objects, and h is the order (the distance so defined is also called the L-h norm)
 Properties
 d(i, j) > 0 if i ≠ j, and d(i, i) = 0 (positive definiteness)
 d(i, j) = d(j, i) (symmetry)
 d(i, j) ≤ d(i, k) + d(k, j) (triangle inequality)
 A distance that satisfies these properties is a metric
Special Cases of Minkowski Distance
 h = 1: Manhattan (city block, L1 norm) distance
 E.g., the Hamming distance: the number of bits that are different between two binary vectors
   d(i, j) = |xi1 − xj1| + |xi2 − xj2| + ... + |xip − xjp|
 h = 2: Euclidean (L2 norm) distance
   d(i, j) = √(|xi1 − xj1|² + |xi2 − xj2|² + ... + |xip − xjp|²)
 h → ∞: “supremum” (Lmax norm, L∞ norm) distance
 This is the maximum difference between any component (attribute) of the vectors:
   d(i, j) = maxf |xif − xjf|
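These special cases can be sketched directly from the formulas (illustrative Python with arbitrary sample points):

```python
# Minkowski distance and its special cases (L1, L2, L-infinity).

def minkowski(x, y, h):
    return sum(abs(a - b) ** h for a, b in zip(x, y)) ** (1 / h)

def supremum(x, y):
    # Limit h -> infinity: the largest per-attribute difference.
    return max(abs(a - b) for a, b in zip(x, y))

x, y = (1, 2), (3, 5)
print(minkowski(x, y, 1))  # 5.0: Manhattan, 2 + 3
print(minkowski(x, y, 2))  # Euclidean, sqrt(13)
print(supremum(x, y))      # 3
```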
Example: Minkowski Distance
 (Figure: sample data points with their dissimilarity matrices under the Manhattan (L1), Euclidean (L2), and supremum (L∞) distances)
Standardizing Numeric Data
 Z-score:
   z = (x − μ) / σ
 x: raw data, μ: mean of the population, σ: standard deviation
 the distance between the raw score and the population mean, in units of the standard deviation
 < 0 when the raw score is below the mean, > 0 when above
 An alternative way: calculate the mean absolute deviation
   sf = (1/n)(|x1f − mf| + |x2f − mf| + ... + |xnf − mf|)
 where mf = (1/n)(x1f + x2f + ... + xnf)
 standardized measure (z-score):
   zif = (xif − mf) / sf
 the mean absolute deviation is more robust than the standard deviation
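Both standardizations as a Python sketch (the 1/n population-style variance matches the earlier dispersion slide; the data is arbitrary):

```python
# Z-score standardization, and the more robust variant based on the
# mean absolute deviation.

def z_scores(xs):
    n = len(xs)
    m = sum(xs) / n
    sigma = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    return [(x - m) / sigma for x in xs]

def z_scores_mad(xs):
    n = len(xs)
    m = sum(xs) / n
    s = sum(abs(x - m) for x in xs) / n  # mean absolute deviation
    return [(x - m) / s for x in xs]

data = [2, 4, 4, 4, 5, 5, 7, 9]
print([round(z, 2) for z in z_scores(data)])
print([round(z, 2) for z in z_scores_mad(data)])
```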
Ordinal Variables
 An ordinal variable can be discrete or continuous
 Order is important, e.g., rank
 Can be treated like interval-scaled:
 replace xif by its rank rif ∈ {1, ..., Mf}
 map (normalize) the range of each variable onto [0, 1] by replacing xif by
   zif = (rif − 1) / (Mf − 1)
 compute the dissimilarity using distance measures for numeric attributes
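The rank-normalization recipe as a Python sketch, using the earlier small/medium/large size scale as the ordered states:

```python
# Treating an ordinal attribute numerically: map each value to its rank,
# then normalize the rank onto [0, 1] via z = (r - 1) / (M - 1).

def ordinal_to_numeric(values, ordered_states):
    M = len(ordered_states)
    rank = {s: r for r, s in enumerate(ordered_states, start=1)}
    return [(rank[v] - 1) / (M - 1) for v in values]

sizes = ["small", "large", "medium", "small"]
print(ordinal_to_numeric(sizes, ["small", "medium", "large"]))
# [0.0, 1.0, 0.5, 0.0]
```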
Attributes of Mixed Type
 A database may contain all attribute types
 Nominal, symmetric binary, asymmetric binary, numeric, ordinal
 One may use a weighted formula to combine their effects:
   d(i, j) = Σf=1..p δij(f) dij(f) / Σf=1..p δij(f)
 Choice of δij(f):
 Set δij(f) = 0 if
 xif or xjf is missing, or
 xif = xjf = 0 and f is asymmetric binary
 Set δij(f) = 1 otherwise
Attributes of Mixed Type
   d(i, j) = Σf=1..p δij(f) dij(f) / Σf=1..p δij(f)
 Choice of dij(f):
 when f is binary or nominal:
   dij(f) = 0 if xif = xjf, dij(f) = 1 otherwise
 when f is numeric: use the normalized distance
 when f is ordinal:
 compute ranks rif and zif = (rif − 1) / (Mf − 1)
 treat zif as interval-scaled
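A compact Python sketch of the mixed-type formula; the attribute layout, type tags, and range normalization here are illustrative assumptions, not prescribed by the slides:

```python
# Mixed-type dissimilarity: weighted combination over attributes.
# Each attribute f has a type tag; numeric attributes are normalized by range.

def mixed_dissimilarity(i, j, types, ranges):
    num, den = 0.0, 0.0
    for f, t in enumerate(types):
        xi, xj = i[f], j[f]
        if xi is None or xj is None:              # missing value: delta = 0
            continue
        if t == "asym_binary" and xi == xj == 0:  # a 0/0 match carries no info
            continue
        if t == "numeric":
            d = abs(xi - xj) / ranges[f]          # normalized distance
        else:                                     # nominal or binary
            d = 0.0 if xi == xj else 1.0
        num += d                                  # delta = 1 for this attribute
        den += 1
    return num / den

types  = ["nominal", "numeric", "asym_binary"]
ranges = [None, 100.0, None]
print(mixed_dissimilarity(["red", 30, 1], ["red", 70, 0], types, ranges))
# (0 + 0.4 + 1) / 3
```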
Cosine Similarity
 A document can be represented by thousands of attributes,
each recording the frequency of a particular word (such as
keywords) or phrase in the document.

 Other vector objects: gene features in micro-arrays, …


 Applications: information retrieval, biological taxonomy, gene feature mapping, …
 Issue: very long and sparse vectors
 Treat documents as vectors, and compute a cosine similarity
Cosine Similarity
 Cosine measure: if x and y are two vectors (e.g., term-frequency vectors), then

   cos(x, y) = (x • y) / (||x|| ||y||)

 where
 • indicates the vector dot product
 ||x||: the L2 norm (length) of vector x: ||x|| = √(x1² + x2² + ... + xp²)
 Remark: when attributes are binary-valued:
 x • y indicates the number of shared features
 ||x|| ||y|| is the geometric mean of the number of features of x and the number of features of y: sqrt(a) * sqrt(b) = sqrt(a * b)
 cos(x, y) measures relative possession of common features
Example: Cosine Similarity
 cos(x, y) = (x • y) /||x|| ||y||

 Ex: Find the similarity between documents x and y.

x = (5, 0, 3, 0, 2, 0, 0, 2, 0, 0)
y = (3, 0, 2, 0, 1, 1, 0, 1, 0, 1)

x • y = 5*3 + 0*0 + 3*2 + 0*0 + 2*1 + 0*1 + 0*0 + 2*1 + 0*0 + 0*1 = 25
||x|| = (5*5 + 0*0 + 3*3 + 0*0 + 2*2 + 0*0 + 0*0 + 2*2 + 0*0 + 0*0)^0.5 = 6.481
||y|| = (3*3 + 0*0 + 2*2 + 0*0 + 1*1 + 1*1 + 0*0 + 1*1 + 0*0 + 1*1)^0.5 = 4.12
cos(x, y) = 25 / (6.481 * 4.12) = 0.94

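The worked example can be verified with a short Python function:

```python
# Cosine similarity between term-frequency vectors, reproducing the example.
import math

def cosine(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

x = (5, 0, 3, 0, 2, 0, 0, 2, 0, 0)
y = (3, 0, 2, 0, 1, 1, 0, 1, 0, 1)
print(round(cosine(x, y), 2))  # 0.94
```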
References
 W. Cleveland, Visualizing Data, Hobart Press, 1993
 T. Dasu and T. Johnson. Exploratory Data Mining and Data Cleaning.
John Wiley, 2003
 U. Fayyad, G. Grinstein, and A. Wierse. Information Visualization in
Data Mining and Knowledge Discovery, Morgan Kaufmann, 2001
 L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: an
Introduction to Cluster Analysis. John Wiley & Sons, 1990.
 H. V. Jagadish et al., Special Issue on Data Reduction Techniques.
Bulletin of the Tech. Committee on Data Eng., 20(4), Dec. 1997
 D. A. Keim. Information visualization and visual data mining, IEEE
trans. on Visualization and Computer Graphics, 8(1), 2002
 D. Pyle. Data Preparation for Data Mining. Morgan Kaufmann, 1999
 S. Santini and R. Jain, "Similarity Measures", IEEE Trans. on Pattern Analysis and Machine Intelligence, 21(9), 1999
 E. R. Tufte. The Visual Display of Quantitative Information, 2nd ed., Graphics Press, 2001
 C. Yu et al., Visual data mining of multimedia data for social and
behavioral studies, Information Visualization, 8(1), 2009
