Lecture 4: Basic Statistical Descriptions of Data

This document discusses techniques for analyzing and visualizing data. It covers measures of central tendency (mean, median, and mode), measures of dispersion (range, variance, and standard deviation), and visualization techniques (histograms, boxplots, and quantile plots) for discovering patterns and insights in data.

Data Mining: Concepts and Techniques
Chapter 2: Getting to Know Your Data

 Data Objects and Attribute Types

 Data Pre-processing( Introduction)

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary

Basic Statistical Descriptions of Data
 Motivation
 To better understand the data: central tendency, variation, and spread
 Data dispersion characteristics
 median, max, min, quantiles, outliers, variance, etc.
 Numerical dimensions correspond to sorted intervals
 Data dispersion: analyzed with multiple granularities of precision
 Boxplot or quantile analysis on sorted intervals
 Dispersion analysis on computed measures
 Folding measures into numerical dimensions
 Boxplot or quantile analysis on the transformed cube
Frequency and Mode
 The frequency of an attribute value is the percentage of the time the value occurs in the data set
 For example, given the attribute 'gender' and a representative population of people, the gender 'female' occurs about 50% of the time
 The mode of an attribute is the most frequent attribute value
 The notions of frequency and mode are typically used with categorical data (see the sketch below)
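As a quick illustration (not part of the original slides), here is a minimal Python sketch that computes value frequencies and the mode for a small invented categorical sample, using only the standard library:

```python
# A minimal sketch of frequency and mode; the sample data are invented.
from collections import Counter

data = ["female", "male", "female", "female", "male"]

counts = Counter(data)
n = len(data)

# Frequency of each value as a percentage of the data set.
for value, count in counts.items():
    print(f"{value}: {100 * count / n:.0f}%")

# Mode: the most frequent attribute value.
mode = counts.most_common(1)[0][0]
print("mode:", mode)
```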
Measures of Location (Central Tendency): Mean and Median
 The mean is the most common measure of the location of a set of points
 However, the mean is very sensitive to outliers
 Thus, the median or a trimmed mean is also commonly used
Measuring the Central Tendency
 Mean (algebraic measure) (sample vs. population):
 Sample mean: $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$; population mean: $\mu = \frac{\sum x_i}{N}$. Note: n is the sample size and N is the population size
 Weighted arithmetic mean: $\bar{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}$
 Trimmed mean: chopping extreme values before averaging
 Median:
 Middle value if there is an odd number of values, or the average of the middle two values otherwise
 Estimated by interpolation (for grouped data): $\text{median} = L_1 + \left(\frac{n/2 - (\sum \text{freq})_l}{\text{freq}_{\text{median}}}\right) \times \text{width}$
 Mode:
 Value that occurs most frequently in the data
 Distributions can be unimodal, bimodal, or trimodal
 Empirical formula: $\text{mean} - \text{mode} \approx 3 \times (\text{mean} - \text{median})$
Example
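The figure for this example did not survive extraction. As a stand-in, here is a Python sketch (invented data, standard library only) that works through the central-tendency measures from the previous slide:

```python
# A small worked example of the central-tendency measures above.
from statistics import mean, median, mode

data = [30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110]

print("mean:  ", mean(data))    # 58; sensitive to the outlier 110
print("median:", median(data))  # 54; average of the two middle values
print("mode:  ", mode(data))    # data are bimodal (52 and 70);
                                # mode() returns the first encountered

# Weighted arithmetic mean: sum(w_i * x_i) / sum(w_i).
weights = [1] * len(data)
wmean = sum(w * x for w, x in zip(weights, data)) / sum(weights)
print("weighted mean:", wmean)

# Trimmed mean: chop the k smallest and k largest values first.
k = 1
print("trimmed mean:", mean(sorted(data)[k:-k]))
```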
Symmetric vs. Skewed Data
 Median, mean, and mode of symmetric, positively skewed, and negatively skewed data
[Figure: three distribution curves — symmetric, positively skewed, negatively skewed]


Measures of Spread (Dispersion of Data): Range and Variance
 Range is the difference between the maximum and the minimum
 The variance or standard deviation is the most common measure of the spread of a set of points
Measuring the Dispersion of Data
 Quartiles, outliers and boxplots
 Quartiles: Q1 (25th percentile), Q3 (75th percentile)
 Inter-quartile range: IQR = Q3 – Q1
 Five number summary: min, Q1, median, Q3, max
 Boxplot: ends of the box are the quartiles; median is marked; add whiskers, and
plot outliers individually
 Outlier: usually, a value more than 1.5 × IQR above Q3 or below Q1
 Variance and standard deviation (sample: s, population: σ)
 Variance (algebraic, scalable computation):
 Sample: $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2 = \frac{1}{n-1}\left[\sum_{i=1}^{n} x_i^2 - \frac{1}{n}\left(\sum_{i=1}^{n} x_i\right)^2\right]$
 Population: $\sigma^2 = \frac{1}{N}\sum_{i=1}^{n}(x_i - \mu)^2 = \frac{1}{N}\sum_{i=1}^{n} x_i^2 - \mu^2$
 Standard deviation s (or σ) is the square root of the variance s² (or σ²)
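To make the formulas concrete, a minimal sketch assuming NumPy is available (the data are invented):

```python
# Range, variance, and standard deviation; note the ddof argument
# switches between the sample (1/(n-1)) and population (1/N) forms.
import numpy as np

x = np.array([30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110])

print("range:", x.max() - x.min())
print("sample variance s^2:    ", x.var(ddof=1))
print("population variance:    ", x.var(ddof=0))
print("sample std s:           ", x.std(ddof=1))
```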
Percentiles
 For continuous data, the notion of a percentile is more useful
 Given an ordinal or continuous attribute x and a number p between 0 and 100, the pth percentile is a value $x_p$ of x such that p% of the observed values of x are less than $x_p$
 For instance, the 80th percentile is the value $x_{80}$ that is greater than 80% of all the values of x we have in our data
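A short sketch of percentile computation, assuming NumPy is available (illustrative data):

```python
# Percentiles and quartiles with np.percentile.
import numpy as np

x = np.array([30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110])

# 80th percentile: a value below which about 80% of observations fall.
print("80th percentile:", np.percentile(x, 80))

# Quartiles are the 25th, 50th, and 75th percentiles.
q1, med, q3 = np.percentile(x, [25, 50, 75])
print("Q1, median, Q3:", q1, med, q3)
```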
Percentile: Example
 If you are the fourth tallest person in a group of 20, then 16 of the 20 people (80%) are shorter than you, so your height is at the 80th percentile
Chapter 2: Getting to Know Your Data

 Data Objects and Attribute Types

 Data Pre-processing( Introduction)

 Data Visualization

 Basic Statistical Descriptions of Data

 Measuring Data Similarity and Dissimilarity

 Summary

Visualization
 The human eye is a powerful analytical tool
 If we visualize the data properly, we can discover patterns
 Visualization is a way to present the data so that patterns can be seen
 E.g., histograms and plots are forms of visualization
 There are multiple techniques (a field in its own right)
Boxplot Analysis
 Five-number summary of a distribution:
 Minimum, Q1, Median, Q3, Maximum
 Boxplot:
 Data is represented with a box
 The ends of the box are at the first and third quartiles, i.e., the height of the box is the IQR
 The median is marked by a line within the box
 Whiskers: two lines outside the box, extended to the minimum and maximum
 Outliers: points beyond a specified outlier threshold, plotted individually (see the sketch below)
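A minimal sketch of the five-number summary and the usual 1.5 × IQR outlier rule, assuming NumPy (invented data):

```python
# Five-number summary and outlier flagging as used in a boxplot.
import numpy as np

x = np.array([30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110])

q1, med, q3 = np.percentile(x, [25, 50, 75])
iqr = q3 - q1
print("five-number summary:", x.min(), q1, med, q3, x.max())

# Points beyond Q1 - 1.5*IQR or Q3 + 1.5*IQR are plotted individually.
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
print("outliers:", x[(x < lo) | (x > hi)])
```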
Visualization of Data Dispersion: 3-D Boxplots



Properties of Normal Distribution Curve

 The normal (distribution) curve:
 From μ–σ to μ+σ: contains about 68% of the measurements (μ: mean, σ: standard deviation)
 From μ–2σ to μ+2σ: contains about 95% of the measurements
 From μ–3σ to μ+3σ: contains about 99.7% of the measurements
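These percentages can be checked directly from the standard normal CDF; a quick stdlib-only sketch:

```python
# The probability mass within k standard deviations of the mean of a
# normal distribution is erf(k / sqrt(2)).
import math

for k in (1, 2, 3):
    p = math.erf(k / math.sqrt(2))
    print(f"within {k} sigma: {100 * p:.1f}%")  # 68.3%, 95.4%, 99.7%
```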
Graphic Displays of Basic Statistical Descriptions
 Boxplot: graphic display of the five-number summary
 Histogram: x-axis shows values, y-axis represents frequencies
 Quantile plot: each value $x_i$ is paired with $f_i$, indicating that approximately $100 f_i\%$ of the data are $\leq x_i$
 Quantile-quantile (q-q) plot: graphs the quantiles of one univariate distribution against the corresponding quantiles of another
 Scatter plot: each pair of values is a pair of coordinates, plotted as points in the plane
Histogram Analysis
 Histogram: graph display of tabulated frequencies, shown as bars
 It shows what proportion of cases fall into each of several categories
 Differs from a bar chart in that it is the area of the bar that denotes the value, not the height as in bar charts; a crucial distinction when the categories are not of uniform width
 The categories are usually specified as non-overlapping intervals of some variable. The categories (bars) must be adjacent
[Figure: histogram with bins spanning roughly 10,000–90,000 and frequencies from 0 to 40]
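A minimal matplotlib sketch of such a histogram; the data and bin edges are invented stand-ins for the lost figure:

```python
# Histogram with adjacent, uniform-width bins; with uniform widths,
# bar height is proportional to bar area and can be read as frequency.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
prices = rng.normal(50000, 15000, 1000)  # synthetic unit prices

plt.hist(prices, bins=np.arange(0, 110000, 10000), edgecolor="black")
plt.xlabel("unit price")
plt.ylabel("frequency")
plt.show()
```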
Histograms Often Tell More than Boxplots
 The two histograms shown on the left may have the same boxplot representation
 The same values for: min, Q1, median, Q3, max
 But they have rather different data distributions
Quantile Plot
 Displays all of the data (allowing the user to assess both the overall behavior and unusual occurrences)
 Plots quantile information
 For data $x_i$ sorted in increasing order, $f_i$ indicates that approximately $100 f_i\%$ of the data are below or equal to the value $x_i$
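A sketch of a quantile plot, assuming NumPy and matplotlib; the convention $f_i = (i - 0.5)/n$ is one common choice, and the data are invented:

```python
# Quantile plot: sort the data and pair each x_i with f_i = (i - 0.5)/n,
# so that about 100*f_i% of the data are <= x_i.
import numpy as np
import matplotlib.pyplot as plt

x = np.sort(np.array([30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110]))
n = len(x)
f = (np.arange(1, n + 1) - 0.5) / n

plt.plot(f, x, marker="o")
plt.xlabel("f-value")
plt.ylabel("data value")
plt.show()
```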


Quantile-Quantile (Q-Q) Plot
 Graphs the quantiles of one univariate distribution against the corresponding quantiles of another
 View: is there a shift in going from one distribution to the other?
 Example shows the unit price of items sold at Branch 1 vs. Branch 2 for each quantile. Unit prices of items sold at Branch 1 tend to be lower than those at Branch 2.
Scatter Plot
 Provides a first look at bivariate data to see clusters of points, outliers, etc.
 Each pair of values is treated as a pair of coordinates and plotted as points in the plane
Positively and Negatively Correlated Data
 The left half of the figure is positively correlated
 The right half is negatively correlated
Uncorrelated Data

Chapter 2: Getting to Know Your Data

 Data Objects and Attribute Types

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary

Similarity and Dissimilarity
 Similarity
 Numerical measure of how alike two data objects are
 Value is higher when objects are more alike
 Often falls in the range [0,1]
 Dissimilarity (e.g., distance)
 Numerical measure of how different two data objects are
 Lower when objects are more alike
 Minimum dissimilarity is often 0
 Upper limit varies
 Proximity refers to either a similarity or a dissimilarity measure
Data Matrix and Dissimilarity Matrix
 Data matrix
 n data points with p dimensions
 Two modes
 $\begin{pmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \vdots & & \vdots & & \vdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{pmatrix}$
 Proximity/dissimilarity matrix
 n data points, but registers only the distance between each pair
 A triangular matrix
 Single mode
 $\begin{pmatrix} 0 & & & & \\ d(2,1) & 0 & & & \\ d(3,1) & d(3,2) & 0 & & \\ \vdots & \vdots & \vdots & & \\ d(n,1) & d(n,2) & \cdots & \cdots & 0 \end{pmatrix}$
Proximity Measure for Nominal Attributes
 Can take 2 or more states, e.g., red, yellow, blue, green (generalization of a binary attribute)
 Method 1: simple matching (sketched below)
 m: # of matches, p: total # of variables
 $d(i, j) = \frac{p - m}{p}$
 Method 2: use a large number of binary attributes
 Create a new binary attribute for each of the M nominal states
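A minimal sketch of simple matching; the objects and attribute values are invented:

```python
# Simple matching for nominal attributes: d(i, j) = (p - m) / p,
# where p is the number of attributes and m the number of matches.
def nominal_dissimilarity(obj_i, obj_j):
    p = len(obj_i)
    m = sum(a == b for a, b in zip(obj_i, obj_j))
    return (p - m) / p

# Two illustrative objects with two nominal attributes each.
print(nominal_dissimilarity(("red", "circle"), ("red", "square")))  # 0.5
```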
Proximity Measure for Binary Attributes
 A contingency table for binary data between objects i and j:

           j = 1   j = 0
   i = 1     q       r
   i = 0     s       t

 Distance measure for symmetric binary variables: $d(i, j) = \frac{r + s}{q + r + s + t}$
 Distance measure for asymmetric binary variables (negative matches t are ignored): $d(i, j) = \frac{r + s}{q + r + s}$
 Jaccard coefficient (similarity measure for asymmetric binary variables): $\text{sim}_{\text{Jaccard}}(i, j) = \frac{q}{q + r + s}$
 Note: the Jaccard coefficient is the same as "coherence"
Dissimilarity between Binary Variables
 Example

Name  Gender  Fever  Cough  Test-1  Test-2  Test-3  Test-4
Jack  M       Y      N      P       N       N       N
Mary  F       Y      N      P       N       P       N
Jim   M       Y      P      N       N       N       N

 Gender is a symmetric attribute
 The remaining attributes are asymmetric binary
 Let the values Y and P be 1, and the value N be 0

$d(\text{jack}, \text{mary}) = \frac{0 + 1}{2 + 0 + 1} = 0.33$
$d(\text{jack}, \text{jim}) = \frac{1 + 1}{1 + 1 + 1} = 0.67$
$d(\text{jim}, \text{mary}) = \frac{1 + 2}{1 + 1 + 2} = 0.75$
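A short sketch reproducing these three distances; the helper function asym_binary_distance is ours, not from the slides:

```python
# Asymmetric binary distance d = (r + s) / (q + r + s), where
# q = attributes that are 1 for both objects, r = 1 for i only,
# s = 1 for j only; negative matches (0/0) are ignored.
def asym_binary_distance(i, j):
    q = sum(a == 1 and b == 1 for a, b in zip(i, j))
    r = sum(a == 1 and b == 0 for a, b in zip(i, j))
    s = sum(a == 0 and b == 1 for a, b in zip(i, j))
    return (r + s) / (q + r + s)

# Y/P -> 1, N -> 0 for Fever, Cough, Test-1..Test-4.
jack = (1, 0, 1, 0, 0, 0)
mary = (1, 0, 1, 0, 1, 0)
jim  = (1, 1, 0, 0, 0, 0)

print(round(asym_binary_distance(jack, mary), 2))  # 0.33
print(round(asym_binary_distance(jack, jim), 2))   # 0.67
print(round(asym_binary_distance(jim, mary), 2))   # 0.75
```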
Standardizing Numeric Data
 Z-score: $z = \frac{x - \mu}{\sigma}$
 x: raw score to be standardized, μ: mean of the population, σ: standard deviation
 The distance between the raw score and the population mean, in units of the standard deviation
 Negative when the raw score is below the mean, positive when above
 An alternative: calculate the mean absolute deviation
 $s_f = \frac{1}{n}\left(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|\right)$, where $m_f = \frac{1}{n}\left(x_{1f} + x_{2f} + \cdots + x_{nf}\right)$
 Standardized measure (z-score): $z_{if} = \frac{x_{if} - m_f}{s_f}$
 Using the mean absolute deviation is more robust than using the standard deviation
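A minimal sketch of both standardizations, assuming NumPy (invented data):

```python
# Z-score vs. mean-absolute-deviation standardization.
import numpy as np

x = np.array([30.0, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110])

# Z-score: z = (x - mu) / sigma (population standard deviation).
z = (x - x.mean()) / x.std()

# MAD version: deviations are not squared, so outliers pull less.
s_f = np.abs(x - x.mean()).mean()
z_mad = (x - x.mean()) / s_f

print(np.round(z[:3], 2), np.round(z_mad[:3], 2))
```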
Example: Data Matrix and Dissimilarity Matrix

Data matrix:

point  attribute1  attribute2
x1     1           2
x2     3           5
x3     2           0
x4     4           5

Dissimilarity matrix (with Euclidean distance):

      x1    x2    x3    x4
x1    0
x2    3.61  0
x3    2.24  5.1   0
x4    4.24  1     5.39  0

[Figure: scatter plot of the four points]
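A sketch reproducing this dissimilarity matrix, assuming NumPy:

```python
# Pairwise Euclidean distances via broadcasting.
import numpy as np

X = np.array([[1, 2], [3, 5], [2, 0], [4, 5]], dtype=float)

diff = X[:, None, :] - X[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=-1))
print(np.round(D, 2))  # e.g., d(x1,x2)=3.61, d(x1,x3)=2.24, d(x2,x4)=1
```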
Distance on Numeric Data: Minkowski Distance
 Minkowski distance: a popular distance measure
 $d(i, j) = \left(|x_{i1} - x_{j1}|^h + |x_{i2} - x_{j2}|^h + \cdots + |x_{ip} - x_{jp}|^h\right)^{1/h}$
 where i = (xi1, xi2, …, xip) and j = (xj1, xj2, …, xjp) are two p-dimensional data objects, and h is the order (the distance so defined is also called the L-h norm)
 Properties
 d(i, j) > 0 if i ≠ j, and d(i, i) = 0 (positive definiteness)
 d(i, j) = d(j, i) (symmetry)
 d(i, j) ≤ d(i, k) + d(k, j) (triangle inequality)
 A distance that satisfies these properties is a metric
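A minimal sketch of a general Minkowski distance, assuming NumPy; the function name is ours:

```python
# Minkowski distance of order h, including the h -> infinity
# (supremum / L_max) case.
import numpy as np

def minkowski(x, y, h):
    x, y = np.asarray(x, float), np.asarray(y, float)
    if np.isinf(h):
        return np.abs(x - y).max()
    return (np.abs(x - y) ** h).sum() ** (1.0 / h)

x1, x2 = (1, 2), (3, 5)
print(minkowski(x1, x2, 1))        # Manhattan: 5
print(minkowski(x1, x2, 2))        # Euclidean: 3.605...
print(minkowski(x1, x2, np.inf))   # supremum: 3
```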
Special Cases of Minkowski Distance
 h = 1: Manhattan (city block, L1 norm) distance
 $d(i, j) = |x_{i1} - x_{j1}| + |x_{i2} - x_{j2}| + \cdots + |x_{ip} - x_{jp}|$
 E.g., the Hamming distance: the number of bits that are different between two binary vectors
 h = 2: Euclidean (L2 norm) distance
 $d(i, j) = \sqrt{|x_{i1} - x_{j1}|^2 + |x_{i2} - x_{j2}|^2 + \cdots + |x_{ip} - x_{jp}|^2}$
 h → ∞: "supremum" (Lmax norm, L∞ norm) distance
 This is the maximum difference between any component (attribute) of the vectors
Example: Minkowski Distance
Dissimilarity matrices for the points:

point  attribute 1  attribute 2
x1     1            2
x2     3            5
x3     2            0
x4     4            5

Manhattan (L1)
L1    x1   x2   x3   x4
x1    0
x2    5    0
x3    3    6    0
x4    6    1    7    0

Euclidean (L2)
L2    x1    x2    x3    x4
x1    0
x2    3.61  0
x3    2.24  5.1   0
x4    4.24  1     5.39  0

Supremum (L∞)
L∞    x1   x2   x3   x4
x1    0
x2    3    0
x3    2    5    0
x4    3    1    5    0
Ordinal Variables
 An ordinal variable can be discrete or continuous
 Order is important, e.g., rank
 Can be treated like interval-scaled:
 Replace $x_{if}$ by its rank $r_{if} \in \{1, \ldots, M_f\}$
 Map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by $z_{if} = \frac{r_{if} - 1}{M_f - 1}$
 Compute the dissimilarity using methods for interval-scaled variables
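A minimal sketch of this rank mapping; the ordinal states are invented:

```python
# Map ordinal values onto [0, 1] via z_if = (r_if - 1) / (M_f - 1).
levels = ["fair", "good", "excellent"]           # ordered states, M_f = 3
rank = {v: i + 1 for i, v in enumerate(levels)}  # r_if in {1, ..., M_f}

def ordinal_to_unit(value, M):
    return (rank[value] - 1) / (M - 1)

print([ordinal_to_unit(v, len(levels)) for v in levels])
# -> [0.0, 0.5, 1.0]; these z-values can then be compared with any
# interval-scaled distance, e.g. |z_i - z_j|.
```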
Attributes of Mixed Type
 A database may contain all attribute types:
 Nominal, symmetric binary, asymmetric binary, numeric, ordinal
 One may use a weighted formula to combine their effects:
 $d(i, j) = \frac{\sum_{f=1}^{p} \delta_{ij}^{(f)} d_{ij}^{(f)}}{\sum_{f=1}^{p} \delta_{ij}^{(f)}}$
 f is binary or nominal: $d_{ij}^{(f)} = 0$ if $x_{if} = x_{jf}$, or $d_{ij}^{(f)} = 1$ otherwise
 f is numeric: use the normalized distance
 f is ordinal:
 Compute ranks $r_{if}$ and $z_{if} = \frac{r_{if} - 1}{M_f - 1}$
 Treat $z_{if}$ as interval-scaled
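A sketch of the combining formula for nominal and numeric attributes; the helper mixed_dissimilarity is hypothetical, and ordinal handling is omitted for brevity:

```python
# Weighted mixed-type dissimilarity: average the per-attribute
# dissimilarities d_ij^(f), weighted by indicators delta_ij^(f).
def mixed_dissimilarity(i, j, kinds, ranges):
    num = den = 0.0
    for f, kind in enumerate(kinds):
        delta = 1  # assume both values are present (no missing data)
        if kind in ("nominal", "binary"):
            d = 0.0 if i[f] == j[f] else 1.0
        else:  # numeric: normalized absolute difference
            d = abs(i[f] - j[f]) / ranges[f]
        num += delta * d
        den += delta
    return num / den

# Two illustrative objects: (color, weight in kg), weight range 100.
print(mixed_dissimilarity(("red", 60), ("blue", 80),
                          ("nominal", "numeric"), {1: 100}))  # 0.6
```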
Cosine Similarity
 A document can be represented by thousands of attributes, each recording the frequency of a particular word (such as keywords) or phrase in the document.
 Other vector objects: gene features in micro-arrays, …
 Applications: information retrieval, biological taxonomy, gene feature mapping, ...
 Cosine measure: if d1 and d2 are two vectors (e.g., term-frequency vectors), then
 cos(d1, d2) = (d1 · d2) / (||d1|| ||d2||),
 where · indicates the vector dot product and ||d|| is the length of vector d
Example: Cosine Similarity
 cos(d1, d2) = (d1 · d2) / (||d1|| ||d2||), where · indicates the vector dot product and ||d|| is the length of vector d
 Ex: find the similarity between documents 1 and 2.

d1 = (5, 0, 3, 0, 2, 0, 0, 2, 0, 0)
d2 = (3, 0, 2, 0, 1, 1, 0, 1, 0, 1)

d1 · d2 = 5×3 + 0×0 + 3×2 + 0×0 + 2×1 + 0×1 + 0×0 + 2×1 + 0×0 + 0×1 = 25
||d1|| = (5² + 0² + 3² + 0² + 2² + 0² + 0² + 2² + 0² + 0²)^0.5 = (42)^0.5 = 6.481
||d2|| = (3² + 0² + 2² + 0² + 1² + 1² + 0² + 1² + 0² + 1²)^0.5 = (17)^0.5 = 4.123
cos(d1, d2) = 25 / (6.481 × 4.123) = 0.94
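A short sketch reproducing this result, assuming NumPy:

```python
# Cosine similarity between the two term-frequency vectors above.
import numpy as np

d1 = np.array([5, 0, 3, 0, 2, 0, 0, 2, 0, 0], dtype=float)
d2 = np.array([3, 0, 2, 0, 1, 1, 0, 1, 0, 1], dtype=float)

cos = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))
print(round(cos, 2))  # 0.94
```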
Chapter 2: Getting to Know Your Data

 Data Objects and Attribute Types

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary

Summary
 Data attribute types: nominal, binary, ordinal, interval-scaled, ratio-scaled
 Many types of data sets, e.g., numerical, text, graph, Web, image
 Gain insight into the data by:
 Basic statistical data description: central tendency, dispersion, graphical displays
 Data visualization: map data onto graphical primitives
 Measuring data similarity
 The above steps are the beginning of data preprocessing
 Many methods have been developed, but this is still an active area of research

You might also like