
Data Mining: Data Pre-processing

Unit-1

Introduction to Data Mining

Outline

⚫ Attributes and Objects

⚫ Types of Data

⚫ Data Quality

⚫ Similarity and Distance

⚫ Data Preprocessing

What is Data?

⚫ A collection of data objects and their attributes is known as a data set.
⚫ An attribute is a property or characteristic of an object
– Examples: eye color of a person, temperature, etc.
– An attribute is also known as a variable, field, characteristic, dimension, or feature
⚫ A collection of attributes describes an object
– An object is also known as a record, point, case, sample, entity, or instance
(In the usual tabular view of a data set, columns correspond to attributes and rows correspond to objects.)
Attribute Values

⚫ Attribute values are numbers or symbols assigned to an attribute for a particular object. For example, possible values of eye color are {brown, black, blue, green, etc.}, while temperature is numerical.

⚫ Distinction between attributes and attribute values

– The same attribute can be mapped to different attribute values
◆ Example: height can be measured in feet or meters

– Different attributes can be mapped to the same set of values
◆ Example: attribute values for ID and age are integers

– But the properties of an attribute can be different from the properties of the values used to represent it. For example, we can find the average age of employees but not the average employee ID. For the age attribute, however, the properties of the integers used to represent an age are very much the properties of the attribute.
Types of Attributes

⚫ There are different types of attributes


– Nominal
◆ Examples: ID numbers, eye color, zip codes
– Ordinal
◆ Examples: rankings (e.g., taste of potato chips on a
scale from 1-10), grades, height {tall, medium, short}
– Interval
◆ Examples: calendar dates, temperatures in Celsius or
Fahrenheit.
– Ratio
◆ Examples: temperature in Kelvin, length, counts,
elapsed time (e.g., time to run a race)
Properties of Attribute Values

⚫ The type of an attribute depends on which of the following properties/operations it possesses:
– Distinctness: =, ≠
– Order: <, >
– Addition and subtraction (differences) are meaningful: +, −
– Multiplication and division are meaningful: *, /

– Nominal attribute: distinctness
– Ordinal attribute: distinctness & order
– Interval attribute: distinctness, order & meaningful differences (addition and subtraction)
– Ratio attribute: all 4 properties/operations
Attribute Type / Description / Examples / Operations

Categorical (Qualitative)
– Nominal: attribute values only distinguish (=, ≠).
  Examples: zip codes, employee ID numbers, eye color, sex: {male, female}
  Operations: mode, entropy, contingency correlation, χ² test
– Ordinal: attribute values also order objects (<, >).
  Examples: hardness of minerals, {good, better, best}, grades, street numbers
  Operations: median, percentiles, rank correlation, run tests, sign tests

Numeric (Quantitative)
– Interval: for interval attributes, differences between values are meaningful (+, −).
  Examples: calendar dates, temperature in Celsius or Fahrenheit
  Operations: mean, standard deviation, Pearson's correlation, t and F tests
– Ratio: for ratio variables, both differences and ratios are meaningful (*, /).
  Examples: temperature in Kelvin, monetary quantities, counts, age, mass, length, current
  Operations: geometric mean, harmonic mean, percent variation

This categorization of attributes is due to S. S. Stevens.


Attribute Type / Transformation / Comments

Categorical (Qualitative)
– Nominal: any permutation of values.
  If all employee ID numbers were reassigned, would it make any difference?
– Ordinal: an order-preserving change of values, i.e., new_value = f(old_value), where f is a monotonic function.
  An attribute encompassing the notion of good, better, best can be represented equally well by the values {1, 2, 3} or by {0.5, 1, 10}.

Numeric (Quantitative)
– Interval: new_value = a * old_value + b, where a and b are constants.
  Thus, the Fahrenheit and Celsius temperature scales differ in terms of where their zero value is and the size of a unit (degree).
– Ratio: new_value = a * old_value.
  Length can be measured in meters or feet.

This categorization of attributes is due to S. S. Stevens.


Discrete and Continuous Attributes

⚫ Discrete Attribute
– Has only a finite or countably infinite set of values
– Examples: zip codes, counts, or the set of words in a
collection of documents
– Often represented as integer variables.
– Note: binary attributes are a special case of discrete
attributes
⚫ Continuous Attribute
– Has real numbers as attribute values
– Examples: temperature, height, or weight.
– Practically, real values can only be measured and
represented using a finite number of digits.
– Continuous attributes are typically represented as floating-
point variables.
Asymmetric Attributes

⚫ Only presence (a non-zero attribute value) is regarded as


important
◆ Words present in documents
◆ Items present in customer transactions

⚫ For example, consider a data set where each object is a student and each attribute
records whether or not a student took a particular course at a university. For a specific
student, an attribute has a value of 1 if the student took the course associated with that
attribute and a value of 0 otherwise. Because students take only a small fraction of all
available courses, most of the values in such a data set would be 0. Therefore, it is
more meaningful and more efficient to focus on the non-zero values.
Critiques of the attribute categorization

⚫ Incomplete
– Asymmetric binary: Binary attribute where only non-zero value is
important.
– Cyclical : A cyclic attribute has values that repeat in a period of
time. Ex. hour, week, year.
– Multivariate : multivalued attribute

⚫ Real data is approximate and noisy


– This can complicate recognition of the proper attribute type
– Treating one attribute type as another may be approximately
correct
Important Characteristics of Data

– Dimensionality (number of attributes)


◆ High dimensional data brings a number of challenges

– Distribution (frequency of occurrence)


– Sparsity
◆ Only presence counts

– Resolution
◆ Patterns depend on the scale

– Size
◆ Type of analysis may depend on size of data
Types of data sets
⚫ Record
– Data Matrix
– Document Data
– Transaction Data
⚫ Graph
– World Wide Web
– Molecular Structures
⚫ Ordered
– Spatial Data
– Temporal Data
– Sequential Data
– Genetic Sequence Data
Record Data

⚫ Data that consists of a collection of records, each


of which consists of a fixed set of attributes
Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes
Data Matrix or Pattern Matrix

⚫ If data objects have the same fixed set of numeric


attributes, then the data objects can be thought of as
points in a multi-dimensional space, where each
dimension represents a distinct attribute

⚫ Such a data set can be represented by an m by n matrix,


where there are m rows, one for each object, and n
columns, one for each attribute
Projection of x Load  Projection of y Load  Distance  Load  Thickness
10.23                 5.27                  15.22     2.7   1.2
12.65                 6.25                  16.22     2.2   1.1
Document Data

⚫ Each document becomes a ‘term’ vector


– Each term is a component (attribute) of the vector
– The value of each component is the number of times
the corresponding term occurs in the document.
            team  coach  play  ball  score  game  win  lost  timeout  season
Document 1   3     0      5     0     2      6     0    2      0        2
Document 2   0     7      0     2     1      0     0    3      0        0
Document 3   0     1      0     0     1      2     2    0      3        0
Transaction Data

⚫ A special type of data, where


– Each transaction involves a set of items.
– For example, consider a grocery store. The set of products
purchased by a customer during one shopping trip constitute a
transaction, while the individual products that were purchased
are the items.
– Can represent transaction data as record data

TID Items
1 Bread, Coke, Milk
2 Beer, Bread
3 Beer, Coke, Diaper, Milk
4 Beer, Bread, Diaper, Milk
5 Coke, Diaper, Milk
Graph Data

⚫ Examples: Generic graph, a molecule, and webpages

(Figures: a generic graph with labeled nodes and edges, and the benzene molecule C6H6)


Ordered Data

⚫ Sequences of transactions
Ordered Data

⚫ Genomic sequence data

GGTTCCGCCTTCAGCCCCGCGCC
CGCAGGGCCCGCCCCGCGCCGTC
GAGAAGGGCCCGCCTGGCGGGCG
GGGGGAGGCGGGGCCGCCCGAGC
CCAACCGAGTCCGACCAGGTGCC
CCCTCTGCTCGGCCTAGACCTGA
GCTCATTAGGCGGCAGCGGACAG
GCCAAGTAGAACACGCGAAGCGC
TGGGCTGCCTGCTGCGACCAGGG
Ordered Data

⚫ Spatio-Temporal Data

Average Monthly
Temperature of
land and ocean
Major Tasks in Data Preprocessing

Data cleaning
Fill in missing values, smooth noisy data, identify or remove outliers, and
resolve inconsistencies
Data integration
Integration of multiple databases, data cubes, or files
Data reduction
Dimensionality reduction
Numerosity reduction
Data compression
Data transformation and data discretization
Normalization
Concept hierarchy generation

Data Quality

⚫ Poor data quality negatively affects many data processing efforts (garbage in, garbage out)

⚫ Data mining example: a classification model for detecting


people who are loan risks is built using poor data
– Some credit-worthy candidates are denied loans
– More loans are given to individuals that default
Data Quality …

⚫ What kinds of data quality problems?
⚫ Why do these data quality problems occur?
⚫ How can we detect problems with the data?
⚫ What can we do about these problems?
⚫ Data Cleaning: the detection and correction of data quality problems.
⚫ Examples of data quality problems:
– Noise and outliers
– Wrong data
– Fake data
– Missing values
– Duplicate data
Measurement and Data Collection Issues

• It is unrealistic to expect that data will be perfect. There may


be problems due to human error, limitations of measuring
devices, flaws in the data collection process or transmission
error.
• The term measurement error refers to any problem resulting
from the measurement process. A common problem is that the
value recorded differs from the true value to some extent.

• The term data collection error refers to errors such as omitting


data objects or attribute values, or inappropriately including a
data object.
Missing Values
⚫ Reasons for missing values
– Information is not collected (e.g., people decline to give their age and weight)

– Attributes may not be applicable to all cases (e.g., annual income is not applicable to
children)
⚫ Handling missing values
– Eliminate data objects or tuples
– Fill in the missing value manually
– Estimate missing values
◆ By attribute mean / median / mode
◆ Assign a global constant such as −∞ or "unknown"
– Ignore the missing value during analysis

Example: the values for S.No 2 and 8 are missing and are filled with the mean, the median, or the mode of the observed values.

S.No  Actual Value  Mean   Median  Mode
1     67            67     67      67
2     (missing)     65.6   67      67
3     67            67     67      67
4     56            56     56      56
5     58            58     58      58
6     48            48     48      48
7     89            89     89      89
8     (missing)     65.6   67      67
9     74            74     74      74

Mean = (67+67+56+58+48+89+74)/7 ≈ 65.6
Median of 48, 56, 58, 67, 67, 74, 89 = 67
Mode (most frequently occurring value) = 67
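A minimal pandas sketch of these imputation options (pandas is assumed to be available; the series below mirrors the example above):

# Sketch: filling missing values with the mean, median, or mode (pandas assumed available).
import pandas as pd

# S.No 2 and 8 are missing (None), as in the example above.
values = pd.Series([67, None, 67, 56, 58, 48, 89, None, 74])

filled_mean   = values.fillna(values.mean())      # mean of observed values ~ 65.6
filled_median = values.fillna(values.median())    # median of observed values = 67
filled_mode   = values.fillna(values.mode()[0])   # most frequent observed value = 67

print(filled_mean.round(1).tolist())
print(filled_median.tolist())
print(filled_mode.tolist())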
Duplicate Data

⚫ Data set may include data objects that are


duplicates, or almost duplicates of one another
– Major issue when merging data from heterogeneous
sources
⚫ Examples:
– Same person with multiple email addresses
Inconsistent Data
Noise

⚫ For objects, noise is an extraneous object or outliers


⚫ For attributes, noise refers to modification of original values
– Examples: distortion of a person’s voice when talking on a poor phone
and “snow” on television screen
– The figures below show two sine waves of the same magnitude and
different frequencies, the waves combined, and the two sine waves with
random noise
◆ The magnitude and shape of the original signal is distorted
Outliers

⚫ Outliers are data objects with characteristics that


are considerably different than most of the other
data objects in the data set
– Outliers are noise
that interferes with
data analysis
How to Handle Noisy Data?
Binning
first sort data and partition into
(equal-frequency) bins
then one can smooth by bin means,
smooth by bin median, smooth by bin
boundaries, etc.
Regression
smooth by fitting the data into regression functions
Clustering
detect and remove outliers
Combined computer and human inspection
detect suspicious values and check by human (e.g., deal with
possible outliers)

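A small Python sketch of equal-frequency binning followed by smoothing by bin means and by bin boundaries (the nine sorted values are a made-up, textbook-style price list):

# Sketch: equal-frequency binning, then smoothing by bin means and by bin boundaries.
data = [4, 8, 15, 21, 21, 24, 25, 28, 34]   # sorted values (illustrative)
n_bins = 3
size = len(data) // n_bins
bins = [data[i * size:(i + 1) * size] for i in range(n_bins)]

smoothed_by_means = [[round(sum(b) / len(b), 1)] * len(b) for b in bins]
smoothed_by_bounds = [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b] for b in bins]

print(bins)                 # [[4, 8, 15], [21, 21, 24], [25, 28, 34]]
print(smoothed_by_means)    # [[9.0, 9.0, 9.0], [22.0, 22.0, 22.0], [29.0, 29.0, 29.0]]
print(smoothed_by_bounds)   # [[4, 4, 15], [21, 21, 24], [25, 25, 34]]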
Similarity and Dissimilarity Measures

⚫ Similarity measure
– Numerical measure of how alike two data objects are.
– Is higher when objects are more alike.
– Often falls in the range [0,1]
⚫ Dissimilarity measure
– Numerical measure of how different two data objects
are
– Lower when objects are more alike
– Minimum dissimilarity is often 0
– Upper limit varies
⚫ Proximity refers to a similarity or dissimilarity
Similarity/Dissimilarity for Simple Attributes

The following table shows the similarity and dissimilarity


between two objects, x and y, with respect to a single, simple
attribute.
Euclidean Distance

⚫ Euclidean Distance

d(x, y) = √( Σ_{k=1}^{n} (x_k − y_k)² )

where n is the number of dimensions (attributes) and x_k and y_k are, respectively, the kth attributes (components) of data objects x and y.
⚫ Standardization is necessary, if scales differ.
Consider the following points and attribute . Calculate the Euclidean distance

point x y
p1 0 2
p2 2 0
p3 3 1
p4 5 1
Euclidean Distance

(Scatter plot of the four points p1–p4 in the x–y plane.)

p1 p2 p3 p4
p1 0 2.828 3.162 5.099
p2 2.828 0 1.414 3.162
p3 3.162 1.414 0 2
p4 5.099 3.162 2 0
Distance Matrix
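A short numpy sketch that reproduces the distance matrix above:

# Sketch: pairwise Euclidean distances for p1..p4 (numpy assumed available).
import numpy as np

points = np.array([[0, 2], [2, 0], [3, 1], [5, 1]])      # p1, p2, p3, p4
diff = points[:, None, :] - points[None, :, :]            # shape (4, 4, 2)
dist = np.sqrt((diff ** 2).sum(axis=-1))

print(np.round(dist, 3))
# [[0.    2.828 3.162 5.099]
#  [2.828 0.    1.414 3.162]
#  [3.162 1.414 0.    2.   ]
#  [5.099 3.162 2.    0.   ]]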
Minkowski Distance

⚫ Minkowski Distance is a generalization of Euclidean


Distance

d(x, y) = ( Σ_{k=1}^{n} |x_k − y_k|^r )^(1/r)

where r is a parameter, n is the number of dimensions (attributes), and x_k and y_k are, respectively, the kth attributes (components) of data objects x and y.
Minkowski Distance: Examples

⚫ r = 1. City block (Manhattan, taxicab, L1 norm) distance.


– A common example of this for binary vectors is the
Hamming distance, which is just the number of bits that are
different between two binary vectors

⚫ r = 2. Euclidean distance

⚫ r → ∞. “supremum” (Lmax norm, L∞ norm) distance.


– This is the maximum difference between any component of
the vectors

⚫ Do not confuse r with n, i.e., all these distances are


defined for all numbers of dimensions.
Minkowski Distance

L1 p1 p2 p3 p4
p1 0 4 4 6
p2 4 0 2 4
p3 4 2 0 2
p4 6 4 2 0
point x y
p1 0 2 L2 p1 p2 p3 p4
p2 2 0 p1 0 2.828 3.162 5.099
p3 3 1 p2 2.828 0 1.414 3.162
p4 5 1 p3 3.162 1.414 0 2
p4 5.099 3.162 2 0

L p1 p2 p3 p4
p1 0 2 3 5
p2 2 0 1 3
p3 3 1 0 2
p4 5 3 2 0

Distance Matrix
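The same four points can be used to reproduce the L1 and L∞ matrices; a numpy sketch:

# Sketch: Minkowski distances (r = 1, 2, and infinity) for the points p1..p4.
import numpy as np

points = np.array([[0, 2], [2, 0], [3, 1], [5, 1]])
diff = np.abs(points[:, None, :] - points[None, :, :])   # |x_k - y_k| for every pair

L1   = diff.sum(axis=-1)                                  # r = 1: city block
L2   = np.sqrt((diff ** 2).sum(axis=-1))                  # r = 2: Euclidean
Linf = diff.max(axis=-1)                                  # r -> infinity: supremum

print(L1)      # matches the L1 matrix above
print(Linf)    # matches the L-infinity matrix above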
Common Properties of a Distance

⚫ Distances, such as the Euclidean distance,


have some well known properties.

1. d(x, y) ≥ 0 for all x and y, and d(x, y) = 0 if and only if x = y.
2. d(x, y) = d(y, x) for all x and y. (Symmetry)
3. d(x, z) ≤ d(x, y) + d(y, z) for all points x, y, and z. (Triangle Inequality)

where d(x, y) is the distance (dissimilarity) between


points (data objects), x and y.

⚫ A distance that satisfies these properties is a


metric
Common Properties of a Similarity

⚫ Similarities, also have some well known


properties.

1. s(x, y) = 1 (or maximum similarity) only if x = y.


(does not always hold, e.g., cosine)
2. s(x, y) = s(y, x) for all x and y. (Symmetry)

where s(x, y) is the similarity between points (data


objects), x and y.
Mahalanobis Distance

For red points, the Euclidean distance is 14.7, Mahalanobis distance is 6.


Mahalanobis Distance

Step 1: Input raw data
Step 2: Calculate the mean
Step 3: Find the difference (x − m) and its transpose (x − m)'
Step 4: Calculate the covariance matrix
Step 5: Find the inverse of the covariance matrix
Step 6: Calculate the Mahalanobis distance

Raw data (five objects, three attributes) and query point X:
A: 1, 2, 4, 2, 5            X = (4, 500, 40)
B: 100, 300, 200, 600, 100
C: 10, 15, 20, 10, 30

Step 2 – mean = (2.8, 260, 17)

Step 3 – x − mean = (1.2, 240, 23); (x − mean)' is the corresponding column vector

Step 4 – covariance matrix S:
[   2.7   −110     13  ]
[ −110   43000   −900  ]
[   13    −900     70  ]

Step 5 – inverse of the covariance matrix S⁻¹ (rounded):
[  5.5     −0.01     −1.15   ]
[ −0.01     0.0005    0.0025 ]
[ −1.15     0.0025    5.2    ]

Step 6 – Mahalanobis distance:
MD² = (x − mean) S⁻¹ (x − mean)' = [1.2  240  23] S⁻¹ [1.2  240  23]' = 106.7

MD = (106.7)^(1/2) = 10.33
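A numpy sketch of the same computation; it uses the exact inverse, so it reproduces MD ≈ 10.33 without the rounding in the matrix shown above:

# Sketch: Mahalanobis distance of X = (4, 500, 40) from the data above.
import numpy as np

data = np.array([[1, 100, 10],
                 [2, 300, 15],
                 [4, 200, 20],
                 [2, 600, 10],
                 [5, 100, 30]], dtype=float)
x = np.array([4, 500, 40], dtype=float)

mean = data.mean(axis=0)                 # [2.8, 260, 17]
S = np.cov(data, rowvar=False)           # sample covariance (n - 1 in the denominator)
d = x - mean                             # [1.2, 240, 23]

md_squared = d @ np.linalg.inv(S) @ d
print(round(md_squared, 1), round(np.sqrt(md_squared), 2))   # ~106.7 and ~10.33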
Similarity Between Binary Vectors
⚫ Common situation is that objects, x and y, have only
binary attributes

⚫ Compute similarities using the following quantities

⚫ f01 = the number of attributes where x was 0 and y was 1

⚫ f10 = the number of attributes where x was 1 and y was 0

⚫ f00 = the number of attributes where x was 0 and y was 0

⚫ f11 = the number of attributes where x was 1 and y was 1

⚫ Simple Matching and Jaccard Coefficients


SMC = number of matches / number of attributes
= (f11 + f00) / (f01 + f10 + f11 + f00)
J = number of 11 matches / number of non-zero attributes
= (f11) / (f01 + f10 + f11)
Calculate SMC and Jaccard Coefficients of the following binary data
x= 1000000000
y= 0000001001
SMC versus Jaccard: Example

x= 1000000000
y= 0000001001

f01 = 2 (the number of attributes where x was 0 and y was 1)


f10 = 1 (the number of attributes where x was 1 and y was 0)
f00 = 7 (the number of attributes where x was 0 and y was 0)
f11 = 0 (the number of attributes where x was 1 and y was 1)

SMC = (f11 + f00) / (f01 + f10 + f11 + f00)


= (0+7) / (2+1+0+7) = 0.7

J = (f11) / (f01 + f10 + f11) = 0 / (2 + 1 + 0) = 0


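A plain-Python sketch of the SMC and Jaccard computation for these two vectors:

# Sketch: simple matching coefficient (SMC) and Jaccard coefficient for binary vectors.
x = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
y = [0, 0, 0, 0, 0, 0, 1, 0, 0, 1]

f11 = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
f00 = sum(1 for a, b in zip(x, y) if a == 0 and b == 0)
f10 = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)
f01 = sum(1 for a, b in zip(x, y) if a == 0 and b == 1)

smc = (f11 + f00) / (f11 + f00 + f10 + f01)
jaccard = f11 / (f11 + f10 + f01) if (f11 + f10 + f01) > 0 else 0.0

print(smc, jaccard)   # 0.7 0.0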
Cosine Similarity
The cosine similarity, defined next, is one of the most common measures of document similarity. If x and y are two document vectors, then cos(x, y) = (x · y) / (‖x‖ ‖y‖), where · indicates the vector dot product and ‖x‖ is the length (Euclidean norm) of vector x.

⚫ Example:

⚫ x=3205000200

⚫ y=1000000102
Cosine Similarity

⚫ x= 3205000200

⚫ y= 1000000102

x. y = 3*1 + 2*0 + 0*0 + 5*0 + 0*0 + 0*0 + 0*0 + 2*1 + 0*0 + 0*2 = 5
||x || = (3*3+2*2+0*0+5*5+0*0+0*0+0*0+2*2+0*0+0*0)0.5 = (42) 0.5 = 6.481
|| y || = (1*1+0*0+0*0+0*0+0*0+0*0+0*0+1*1+0*0+2*2) 0.5 = (6) 0.5 = 2.449
cos(x, y ) = 0.3150
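A numpy sketch reproducing this value:

# Sketch: cosine similarity of the two document vectors above.
import numpy as np

x = np.array([3, 2, 0, 5, 0, 0, 0, 2, 0, 0])
y = np.array([1, 0, 0, 0, 0, 0, 0, 1, 0, 2])

cos_xy = x.dot(y) / (np.linalg.norm(x) * np.linalg.norm(y))
print(round(cos_xy, 4))   # 0.315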
Extended Jaccard Coefficient (Tanimoto coefficient)

EJ(x, y) = (x · y) / (‖x‖² + ‖y‖² − x · y)

X = (1, 0, 1, 0, 1)   Y = (1, 1, 1, 0, 1)
X · Y = 1·1 + 0·1 + 1·1 + 0·0 + 1·1 = 3
‖X‖² = ((1·1 + 0·0 + 1·1 + 0·0 + 1·1)^½)² = 3
‖Y‖² = ((1·1 + 1·1 + 1·1 + 0·0 + 1·1)^½)² = 4

EJ(X, Y) = 3 / (3 + 4 − 3) = 3/4 = 0.75
Correlation measures the linear relationship
between objects

Find Correlation Coefficient X= (-3,6,0,3,-6) Y= (1,-2,0,-1,2)


Correlation measures the linear relationship between
objects

X= (-3,6,0,3,-6) Y= (1,-2,0,-1,2)
Mean of x= 0 Mean of y= 0 n=5

Cov(x, y) = (−3 − 12 + 0 − 3 − 12)/4 = −7.5

Sx = [(9 + 36 + 0 + 9 + 36)/4]^½ = 4.74

Sy = [(1 + 4 + 0 + 1 + 4)/4]^½ = 1.58

Corr(x, y) = Cov(x, y)/(Sx × Sy) = −7.5/(4.74 × 1.58) = −1

The correlation coefficient lies between −1 and 1:

1 means perfectly positively correlated
0 means no correlation
−1 means perfectly negatively correlated
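A numpy sketch confirming the perfect negative correlation:

# Sketch: Pearson correlation of the two vectors above.
import numpy as np

x = np.array([-3, 6, 0, 3, -6], dtype=float)
y = np.array([1, -2, 0, -1, 2], dtype=float)

cov_xy = ((x - x.mean()) * (y - y.mean())).sum() / (len(x) - 1)   # -7.5
corr = cov_xy / (x.std(ddof=1) * y.std(ddof=1))

print(cov_xy, round(corr, 4))        # -7.5 -1.0
print(np.corrcoef(x, y)[0, 1])       # same result via numpy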
Correlation vs Cosine vs Euclidean Distance

⚫ Compare the three proximity measures according to their behavior under


variable transformation
– scaling: multiplication by a value
– translation: adding a constant
Property Cosine Correlation Euclidean Distance
Invariant to scaling Yes Yes No
(multiplication)
Invariant to translation No Yes No
(addition)

⚫ Consider the example


– x = (1, 2, 4, 3, 0, 0, 0), y = (1, 2, 3, 4, 0, 0, 0)
– ys = y * 2 (scaled version of y), yt = y + 5 (translated version)

Measure (x , y) (x , ys) (x , yt)


Cosine 0.9667 0.9667 0.7940

Correlation 0.9429 0.9429 0.9429

Euclidean Distance 1.4142 5.8310 14.2127


Entropy

⚫ For
– a variable (event), X,
– with n possible values (outcomes), x1, x2 …, xn
– each outcome having probability, p1, p2 …, pn
– the entropy of X , H(X), is given by
H(X) = − Σ_{i=1}^{n} p_i log₂ p_i

⚫ log2(x)=ln(x)/ln(2)
Entropy Examples
Mutual Information
⚫ Information one variable provides about another

Formally, I(X, Y) = H(X) + H(Y) − H(X, Y), where

H(X) is the entropy of X


H(Y) is the entropy of Y
H(X,Y) is the joint entropy of X and Y,
Mutual Information Example

Student Status   Count   p      −p log₂ p
Undergrad        45      0.45   0.5184
Grad             55      0.55   0.4744
Total            100     1.00   0.9928

Grade   Count   p      −p log₂ p
A       35      0.35   0.5301
B       50      0.50   0.5000
C       15      0.15   0.4105
Total   100     1.00   1.4406

Student Status, Grade   Count   p      −p log₂ p
Undergrad, A            5       0.05   0.2161
Undergrad, B            30      0.30   0.5211
Undergrad, C            10      0.10   0.3322
Grad, A                 30      0.30   0.5211
Grad, B                 20      0.20   0.4644
Grad, C                 5       0.05   0.2161
Total                   100     1.00   2.2710

Mutual information of Student Status and Grade = 0.9928 + 1.4406 - 2.2710 = 0.1624
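A Python sketch that reproduces the entropies and the mutual information from the counts above:

# Sketch: entropy and mutual information from the count tables above.
from math import log2

def entropy(counts):
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

H_status = entropy([45, 55])                    # 0.9928
H_grade  = entropy([35, 50, 15])                # 1.4406
H_joint  = entropy([5, 30, 10, 30, 20, 5])      # 2.2710

mutual_information = H_status + H_grade - H_joint
print(round(H_status, 4), round(H_grade, 4), round(H_joint, 4))
print(round(mutual_information, 4))             # ~0.1624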
Using Weights to Combine Similarities

⚫ Can also define a weighted form of distance


Data Preprocessing

⚫ Aggregation
⚫ Sampling
⚫ Discretization and Binarization
⚫ Attribute Transformation
⚫ Dimensionality Reduction
⚫ Feature subset selection
⚫ Feature creation
Aggregation

⚫ Combining two or more attributes (or objects) into a single


attribute (or object)
⚫ Purpose
– Data reduction - reduce the number of attributes or objects
– Change of scale
◆ Cities aggregated into regions, states, countries, etc.
◆ Days aggregated into weeks, months, or years
– More “stable” data - aggregated data tends to have less variability
– Data Compression
Sampling
⚫ Sampling is the main technique employed for data
reduction.
– It is often used for both the preliminary investigation of
the data and the final data analysis.

⚫ Statisticians often sample because obtaining the


entire set of data of interest is too expensive or
time consuming.

⚫ Sampling is typically used in data mining because


processing the entire set of data of interest is too
expensive or time consuming.
Sampling …

⚫ The key principle for effective sampling is the


following:

– Using a sample will work almost as well as using the


entire data set, if the sample is representative

– A sample is representative if it has approximately the


same properties (of interest) as the original set of data
Types of Sampling

Simple random sampling


There is an equal probability of selecting any particular item
Sampling without replacement
Once an object is selected, it is removed from the population
Sampling with replacement
A selected object is not removed from the population
Stratified sampling:
Partition the data set, and draw samples from each partition
(proportionally, i.e., approximately the same percentage of the
data)
Used in conjunction with skewed data

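A pandas sketch of simple random sampling (with and without replacement) and stratified sampling; the data frame and its "label" column are made up for illustration:

# Sketch: simple random and stratified sampling with pandas (column names are illustrative).
import pandas as pd

df = pd.DataFrame({"value": range(100),
                   "label": ["rare"] * 10 + ["common"] * 90})

without_repl = df.sample(n=20, replace=False, random_state=1)   # sampling without replacement
with_repl    = df.sample(n=20, replace=True,  random_state=1)   # sampling with replacement

# Stratified sampling: draw the same fraction from each class partition.
stratified = df.groupby("label").sample(frac=0.2, random_state=1)

print(stratified["label"].value_counts())   # 2 "rare" and 18 "common" rows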
Sampling: With or without Replacement

Raw Data
Sampling: Cluster or Stratified Sampling

Raw Data Cluster/Stratified Sample

Sample Size

8000 points 2000 Points 500 Points


Data Reduction Strategies

Data reduction: Obtain a reduced representation of the data set that is much
smaller in volume but yet produces the same (or almost the same) analytical
results
Why data reduction? — A database/data warehouse may store terabytes of
data. Complex data analysis may take a very long time to run on the
complete data set.
Data reduction strategies
Dimensionality reduction, e.g., remove unimportant attributes
Wavelet transforms
Principal Components Analysis (PCA)
Feature subset selection, feature creation
Numerosity reduction (technique to replace the original data volume by
alternative smaller form of data representation)
Types: Parametric, e.g., regression and log-linear models
Non-parametric, e.g., histograms, clustering, sampling, data cube aggregation
aggregation
Data compression : transformation are applied so as to obtain reduced or
compressed data.
Type : lossy, lossless
Curse of Dimensionality

⚫ When dimensionality
increases, data becomes
increasingly sparse in the
space that it occupies

⚫ Definitions of density and


distance between points,
which are critical for
clustering and outlier
detection, become less
meaningful
Dimensionality Reduction

⚫ Purpose:
– Avoid curse of dimensionality
– Reduce amount of time and memory required by data
mining algorithms
– Allow data to be more easily visualized
– May help to eliminate irrelevant features or reduce
noise
What Is Wavelet Transform?
After the wavelet transform, the data can be truncated, which is helpful in data reduction. If we store only a small fraction of the strongest wavelet coefficients, a compressed approximation of the original data is obtained. For example, only the wavelet coefficients larger than some determined threshold can be retained.

Decomposes a signal into different frequency sub-


bands. Applicable to n-dimensional signals.
Data are transformed to preserve relative distance
between objects at different levels of resolution
Allow natural clusters to become more
distinguishable. Used for image compression

Wavelet Transformation
Haar2 Daubechie4
Discrete wavelet transform (DWT) for linear signal processing,
multi-resolution analysis
Compressed approximation: store only a small fraction of the
strongest of the wavelet coefficients
Similar to discrete Fourier transform (DFT), but better lossy
compression, localized in space
Method:
Length, L, must be an integer power of 2 (padding with 0’s, when necessary)
Each transform has 2 functions: smoothing, difference
Applies to pairs of data, resulting in two set of data of length L/2
Applies two functions recursively, until reaches the desired length

Wavelet Decomposition

Wavelets: A math tool for space-efficient hierarchical


decomposition of functions
S = [2, 2, 0, 2, 3, 5, 4, 4] can be transformed to S^ = [2.75, −1.25, 0.5, 0, 0, −1, −1, 0]
Compression: many small detail coefficients can be replaced by
0’s, and only the significant coefficients are retained

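A Python sketch of the averaging-and-differencing (Haar-style) decomposition that produces S^ above:

# Sketch: Haar-style wavelet decomposition by repeated pairwise averaging and differencing.
def haar_decompose(signal):
    signal = list(signal)
    output = []
    while len(signal) > 1:
        averages = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
        details  = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
        output = details + output     # detail coefficients, coarsest level first
        signal = averages
    return signal + output            # overall average followed by detail coefficients

S = [2, 2, 0, 2, 3, 5, 4, 4]
print(haar_decompose(S))              # [2.75, -1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0]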
Wavelet Decomposition & regeneration of Signal
Dimensionality Reduction: PCA

Principal Component Analysis is an


unsupervised learning algorithm that
is used for the dimensionality
reduction in machine learning. It is a
statistical process that converts the
observations of correlated features
into a set of linearly uncorrelated
features with the help of orthogonal
transformation. These new
transformed features are called
the Principal Components.
Dimensionality Reduction using PCA

Step 1: Calculate the mean of each attribute.

Step 2: Calculate the covariance matrix:
Cov(X, Y) = (1/(n − 1)) Σ_{l=1}^{n} (x_l − x̄)(y_l − ȳ)

Step 3: Calculate the eigenvalues λ of the covariance matrix S:
det(S − λI) = 0

Step 4: a) Compute the eigenvectors u = (u₁, u₂):
(S − λI) u = 0
b) Compute the unit eigenvectors:
‖u‖ = √(u₁² + u₂²),   e = (u₁/‖u‖, u₂/‖u‖)

Step 5: Compute the principal components (project the mean-centered data onto the unit eigenvectors).
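A numpy sketch of these steps; the small 2-D data set is made up for illustration:

# Sketch: PCA via the covariance matrix and its eigen-decomposition (data is illustrative).
import numpy as np

X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2],
              [3.1, 3.0], [2.3, 2.7], [2.0, 1.6], [1.0, 1.1],
              [1.5, 1.6], [1.1, 0.9]])

X_centered = X - X.mean(axis=0)               # Step 1: subtract the mean
S = np.cov(X_centered, rowvar=False)          # Step 2: covariance matrix (n - 1)
eigvals, eigvecs = np.linalg.eigh(S)          # Steps 3-4: eigenvalues and unit eigenvectors

order = np.argsort(eigvals)[::-1]             # sort components by explained variance
components = eigvecs[:, order]

scores = X_centered @ components              # Step 5: principal component scores
print(eigvals[order])                         # largest eigenvalue first
print(scores[:3].round(3))                    # first three objects in the new space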
Dimensionality Reduction using PCA
Dimensionality Reduction using PCA

Step 2: Calculate Covariance matrix


Dimensionality Reduction using PCA
Dimensionality Reduction using PCA
Dimensionality Reduction using PCA
Dimensionality Reduction using PCA
Dimensionality Reduction using PCA
Dimensionality Reduction using PCA

Geometrical meaning of first principal components

https://www.youtube.com/watch?v=ZtS6sQUAh0c
Feature Subset Selection

⚫ Another way to reduce dimensionality of data


⚫ Redundant features
– Duplicate much or all of the information contained in
one or more other attributes
– Example: purchase price of a product and the amount
of sales tax paid
⚫ Irrelevant features
– Contain no information that is useful for the data
mining task at hand
– Example: students' ID is often irrelevant to the task of
predicting students' GPA
⚫ Many techniques developed, especially for
classification
Heuristic Search in Attribute Selection

There are 2^d possible attribute combinations of d attributes


Typical heuristic attribute selection methods:
Best single attribute under the attribute independence
assumption: choose by significance tests
Best step-wise feature selection:
The best single-attribute is picked first
Then the next best attribute conditioned on the first, ...
Step-wise attribute elimination:
Repeatedly eliminate the worst attribute
Best combined attribute selection and elimination
Approaches for feature subset selection

Embedded approaches Feature selection occurs naturally as


part of the data mining algorithm. Specifically, during the
operation of the data mining algorithm, the algorithm itself
decides which attributes to use and which to ignore.

Filter approaches Features are selected before the data mining


algorithm is run, using some approach that is independent of the
data mining task.

Wrapper approaches These methods use the target data mining


algorithm as a black box to find the best subset of attributes, in a
way similar to that of the ideal algorithm described above, but
typically without enumerating all possible subsets
An Architecture for Feature Subset Selection
Feature Creation

⚫ Create new attributes that can capture the


important information in a data set much more
efficiently than the original attributes
⚫ Three general methodologies:
– Feature extraction
◆ Creating a new set of features from original data set
◆ Example: extracting edges from images to detect human face
◆ Domain-specific
– Feature construction
◆ Creating a new set of features by combining original data set
◆ Example: dividing mass by volume to get density
– Mapping data to a new space: to uncover interesting and important features.
◆ Example: Fourier and wavelet analysis
Discretization

⚫ Discretization is the process of converting a


continuous attribute into a categorical attribute
– A potentially infinite number of values are mapped
into a small number of categories
– Discretization is used in both unsupervised
(class information unknown) and supervised
(class information known) settings
Unsupervised Discretization
Unsupervised Discretization

Equal interval width (binning)

Equal frequency (binning)

K-means (clustering) – generally leads to better results

Discretization in Supervised Settings

– Many classification algorithms work best if both the independent


and dependent variables have only a few values
– We give an illustration of the usefulness of discretization using
the following example.
Binarization

⚫ Binarization maps a continuous or categorical


attribute into one or more binary variables

Binarization is used to prepare images for pattern recognition tasks, such


as fingerprint identification, where the focus is on the structure of the object
rather than its color or grayscale intensity.

The process of binarization involves the selection of a threshold value,


and then converting all pixel values below the threshold to 0 and all pixel
values above the threshold to 1.
Attribute Transformation

⚫ An attribute transform is a function that maps the


entire set of values of a given attribute to a new
set of replacement values such that each old
value can be identified with one of the new values
– Simple functions: x^k, log(x), e^x, |x|
– Normalization
Normalization: Scaled to fall within a smaller, specified range
min-max normalization
z-score normalization
normalization by decimal scaling
Normalization

Min-max normalization: to [new_minA, new_maxA]

v' = ((v − minA) / (maxA − minA)) × (new_maxA − new_minA) + new_minA

Ex. Let income range $12,000 to $98,000 be normalized to [0.0, 1.0]. Then $73,600 is mapped to
((73,600 − 12,000) / (98,000 − 12,000)) × (1.0 − 0) + 0 = 0.716

Z-score normalization (μA: mean, σA: standard deviation):

v' = (v − μA) / σA

Ex. Let μ = 54,000, σ = 16,000. Then (73,600 − 54,000) / 16,000 = 1.225

Normalization by decimal scaling:

v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1
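A Python sketch of the three normalization schemes, reproducing the 0.716 and 1.225 results above:

# Sketch: min-max, z-score, and decimal-scaling normalization of the income example.
import math

def min_max(v, old_min, old_max, new_min=0.0, new_max=1.0):
    return (v - old_min) / (old_max - old_min) * (new_max - new_min) + new_min

def z_score(v, mean, std):
    return (v - mean) / std

def decimal_scaling(values):
    # smallest j such that every scaled value has absolute value below 1
    j = math.floor(math.log10(max(abs(v) for v in values))) + 1
    return [v / (10 ** j) for v in values]

print(round(min_max(73600, 12000, 98000), 3))   # 0.716
print(round(z_score(73600, 54000, 16000), 3))   # 1.225
print(decimal_scaling([-986, 217]))             # [-0.986, 0.217]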
Normalization
Data Mining Classification

Unit-2

Introduction to Data Mining

Prepared by: Dr. Nivedita Palia


Type of classifiers

⚫ Binary Versus Multiclass


Binary classifiers assign each data instance to one of two possible
labels, typically denoted by +1 and -1. If there are more than two
possible labels available, then the technique is known as a multiclass
classifier.
⚫ Deterministic Versus Probabilistic
A deterministic classifier produces a discrete-valued label to each data
instance it classifies whereas a probabilistic classifier assigns a
continuous score between 0 and 1 to indicate how likely it is that an
instance belong to a particular class, where the probability scores for all
the classes sum to 1.

Type of classifiers

⚫ Linear Vs Non-Linear
A linear classifier uses a linear separating hyperplane to discriminate
instances from different classes whereas a non-linear classifier enables
the construction of more complex, non-linear decision surface.
⚫ Global Vs Local
A global classifier fits a single model to the entire data set. In contrast,
a local classifier partitions the input space into smaller regions and fit a
distinct model to training instances in each region.
⚫ Generative Vs Discriminative
Classifiers that learn a generative model of every class in the process of
predicting class labels are known as generative classifiers. In contrast,
discriminative classifiers directly predict the class labels without
explicitly describing the distribution of every class label.

Rule-Based Classifier

⚫ Classify records by using a collection of


“if…then…” rules (also known as rule set)
⚫ Rule: (Condition) → y
– where
◆ Condition is a conjunction of tests on attributes
◆ y is the class label
◆ The left-hand side of the rule is known as the rule antecedent or precondition
◆ The right-hand side of the rule is known as the rule consequent

– Examples of classification rules:

◆ (Blood Type=Warm) ∧ (Lay Eggs=Yes) → Birds
◆ (Taxable Income < 50K) ∧ (Refund=Yes) → Evade=No

Rule-based Classifier (Example)
Name Blood Type Give Birth Can Fly Live in Water Class
human warm yes no no mammals
python cold no no no reptiles
salmon cold no no yes fishes
whale warm yes no yes mammals
frog cold no no sometimes amphibians
komodo cold no no no reptiles
bat warm yes yes no mammals
pigeon warm no yes no birds
cat warm yes no no mammals
leopard shark cold yes no yes fishes
turtle cold no no sometimes reptiles
penguin warm no no sometimes birds
porcupine warm yes no no mammals
eel cold no no yes fishes
salamander cold no no sometimes amphibians
gila monster cold no no no reptiles
platypus warm no no no mammals
owl warm no yes no birds
dolphin warm yes no yes mammals
eagle warm no yes no birds

R1: (Give Birth = no) ∧ (Can Fly = yes) → Birds
R2: (Give Birth = no) ∧ (Live in Water = yes) → Fishes
R3: (Give Birth = yes) ∧ (Blood Type = warm) → Mammals
R4: (Give Birth = no) ∧ (Can Fly = no) → Reptiles
R5: (Live in Water = sometimes) → Amphibians
Application of Rule-Based Classifier

⚫ A rule r covers an instance x if the attributes of the instance satisfy the condition of the rule. The rule r is then said to be fired or triggered.

R1: (Give Birth = no) ∧ (Can Fly = yes) → Birds
R2: (Give Birth = no) ∧ (Live in Water = yes) → Fishes
R3: (Give Birth = yes) ∧ (Blood Type = warm) → Mammals
R4: (Give Birth = no) ∧ (Can Fly = no) → Reptiles
R5: (Live in Water = sometimes) → Amphibians

Name Blood Type Give Birth Can Fly Live in Water Class
hawk warm no yes no ?
grizzly bear warm yes no no ?

The rule R1 covers a hawk => Bird


The rule R3 covers the grizzly bear => Mammal
Rule Coverage and Accuracy

⚫ Coverage of a rule:
– Fraction of records that satisfy the antecedent of a rule.
⚫ Accuracy/confidence factor of a rule:
– Fraction of records that satisfy the antecedent that also satisfy the consequent of the rule.

Tid  Refund  Marital Status  Taxable Income  Class
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

(Status=Single) → No
Coverage = 4/10 = 0.4 = 40%
Accuracy = 2/4 = 0.5 = 50%
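A Python sketch that computes the coverage and accuracy of (Status=Single) → No over the ten records above:

# Sketch: coverage and accuracy of the rule (Status = Single) -> Class = No.
records = [
    ("Yes", "Single",   125, "No"),  ("No", "Married", 100, "No"),
    ("No",  "Single",    70, "No"),  ("Yes", "Married", 120, "No"),
    ("No",  "Divorced",  95, "Yes"), ("No", "Married",  60, "No"),
    ("Yes", "Divorced", 220, "No"),  ("No", "Single",    85, "Yes"),
    ("No",  "Married",   75, "No"),  ("No", "Single",    90, "Yes"),
]

covered = [r for r in records if r[1] == "Single"]   # antecedent satisfied
correct = [r for r in covered if r[3] == "No"]       # consequent also satisfied

coverage = len(covered) / len(records)
accuracy = len(correct) / len(covered)
print(coverage, accuracy)   # 0.4 0.5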
How does Rule-based Classifier Work?

R1: (Give Birth = no) ∧ (Can Fly = yes) → Birds
R2: (Give Birth = no) ∧ (Live in Water = yes) → Fishes
R3: (Give Birth = yes) ∧ (Blood Type = warm) → Mammals
R4: (Give Birth = no) ∧ (Can Fly = no) → Reptiles
R5: (Live in Water = sometimes) → Amphibians

Name Blood Type Give Birth Can Fly Live in Water Class
lemur warm yes no no ?
turtle cold no no sometimes ?
dogfish shark cold yes no yes ?

A lemur triggers rule R3, so it is classified as a mammal


A turtle triggers both R4 and R5 ( their conflicting classes must be resolved)
A dogfish shark triggers none of the rules ( we need to determine what class
to assign to such test instances.)
Characteristics of Rule Sets: Strategy 1

⚫ Mutually exclusive rules


– Classifier contains mutually exclusive rules if the rules are
independent of each other
– Every record is covered by at most one rule

⚫ Exhaustive rules
– Classifier has exhaustive coverage if it accounts for every
possible combination of attribute values
– Each record is covered by at least one rule
Characteristics of Rule Sets: Strategy 2

⚫ Rules are not mutually exclusive


– A record may trigger more than one rule
– Solution?
◆ Ordered rule set
◆ Unordered rule set – use voting schemes

⚫ Rules are not exhaustive


– A record may not trigger any rules
– Solution?
◆ Use a default class

◆ If the rule set is not exhaustive, then a default rule, rd : () →


yd, must be added to cover the remaining cases. A default
rule has an empty antecedent and is triggered when all other
rules have failed. yd is known as the default class
Ordered Rule Set

⚫ Rules are rank ordered according to their priority


– An ordered rule set is known as a decision list
⚫ When a test record is presented to the classifier
– It is assigned to the class label of the highest ranked rule it has
triggered
– If none of the rules fired, it is assigned to the default class

R1: (Give Birth = no) ∧ (Can Fly = yes) → Birds
R2: (Give Birth = no) ∧ (Live in Water = yes) → Fishes
R3: (Give Birth = yes) ∧ (Blood Type = warm) → Mammals
R4: (Give Birth = no) ∧ (Can Fly = no) → Reptiles
R5: (Live in Water = sometimes) → Amphibians

Name Blood Type Give Birth Can Fly Live in Water Class
turtle cold no no sometimes ?
Rule Ordering Schemes

⚫ Rule-based ordering
– Individual rules are ranked based on their quality
⚫ Class-based ordering
– Rules that belong to the same class appear together
Building Classification Rules

⚫ Direct Method:
◆ Extract rules directly from data
◆ Examples: RIPPER, CN2, Holte’s 1R

⚫ Indirect Method:
◆ Extract rules from other classification models (e.g.
decision trees, neural networks, etc).
◆ Examples: C4.5rules
Direct Method: Sequential Covering

1. Start from an empty rule


2. Grow a rule using the Learn-One-Rule function
3. Remove training records covered by the rule
4. Repeat Step (2) and (3) until stopping criterion
is met
Example of Sequential Covering

(i) Original Data (ii) Step 1


Example of Sequential Covering…

R1 R1

R2

(iii) Step 2 (iv) Step 3


Rule Growing

⚫ Two common strategies:

– (a) General-to-specific: start from the empty rule {} → (Class=Yes), which covers 3 positive and 4 negative records, and greedily add one conjunct at a time (candidates such as Refund=No, Status=Single, Status=Divorced, Status=Married, Income>80K), keeping the conjunct that gives the best rule quality.

– (b) Specific-to-general: start from a rule that matches a single positive example (e.g., Refund=No, Status=Single, Income=85K → Class=Yes) and generalize it by removing conjuncts (e.g., to Refund=No, Status=Single → Class=Yes).
Rule Evaluation
⚫ Foil's Information Gain
(FOIL: First Order Inductive Learner – an early rule-based learning algorithm)

– R0: {} => class (initial rule)


– R1: {A} => class (rule after adding conjunct)

– Gain(R0, R1) = p1 × [ log₂( p1/(p1 + n1) ) − log₂( p0/(p0 + n0) ) ]

– 𝑝0: number of positive instances covered by R0


𝑛0: number of negative instances covered by R0
𝑝1: number of positive instances covered by R1
𝑛1: number of negative instances covered by R1
Foil’s Information Gain
Rule Pruning

⚫ Growing a rule:
– Start from empty rule
– Add conjuncts as long as they improve FOIL’s
information gain
– Stop when rule no longer covers negative examples
– Prune the rule immediately using incremental reduced
error pruning
– Measure for pruning: v = (p-n)/(p+n)
◆ p: number of positive examples covered by the rule in
the validation set
◆ n: number of negative examples covered by the
rule in the validation set
– Pruning method: delete any final sequence of
conditions that maximizes v
Build a Rule Set

⚫ Building a Rule Set:


– Use sequential covering algorithm
◆ Finds the best rule that covers the current set of
positive examples
◆ Eliminate both positive and negative examples
covered by the rule
Indirect Methods

P
No Yes

Q R Rule Set

No Yes No Yes r1: (P=No,Q=No) ==> -


r2: (P=No,Q=Yes) ==> +
- + + Q r3: (P=Yes,R=No) ==> +
r4: (P=Yes,R=Yes,Q=No) ==> -
No Yes r5: (P=Yes,R=Yes,Q=Yes) ==> +

- +
Example
Name Give Birth Lay Eggs Can Fly Live in Have Legs Class
Water
human yes no no no yes mammals
python no yes no no no reptiles
salmon no yes no yes no fishes
whale yes no no yes no mammals
frog no yes no sometimes yes amphibians
komodo no yes no no yes reptiles
bat yes no yes no yes mammals
pigeon no yes yes no yes birds
cat yes no no no yes mammals
leopard shark yes no no yes no fishes
turtle no yes no sometimes yes reptiles
penguin no yes no sometimes yes birds
porcupine yes no no no yes mammals
eel no yes no yes no fishes
salamander no yes no sometimes yes amphibians
gila monster no yes no no yes reptiles
platypus no yes no no yes mammals
owl no yes yes no yes birds
dolphin yes no no yes no mammals
eagle no yes yes no yes birds
Advantages of Rule-Based Classifiers

⚫ Has characteristics quite similar to decision trees


– As highly expressive as decision trees
– Easy to interpret (if rules are ordered by class)
– Performance comparable to decision trees
◆ Can handle redundant and irrelevant attributes
◆ Variable interaction can cause issues (e.g., X-OR problem)
⚫ Better suited for handling imbalanced classes
⚫ Harder to handle missing values in the test set
Model Evaluation and Selection

Evaluation metrics: How can we measure accuracy? Other


metrics to consider?
Use validation test set of class-labeled tuples instead of training
set when assessing accuracy
Methods for estimating a classifier’s accuracy:
Holdout method, random subsampling
Cross-validation
Bootstrap
Comparing classifiers:
Confidence intervals
Cost-benefit analysis and ROC Curves

Classifier Evaluation Metrics: Confusion
Matrix
Confusion Matrix:
Actual class\Predicted class P N
P True Positives (TP) False Negatives (FN)
N False Positives (FP) True Negatives (TN)

Example of Confusion Matrix:


Actual class\Predicted buy_computer buy_computer Total
class = yes = no
buy_computer = yes 6954 46 7000
buy_computer = no 412 2588 3000
Total 7366 2634 10000
Given m classes, an entry, CMi,j in a confusion matrix indicates # of
tuples in class i that were labeled by the classifier as class j
May have extra rows/columns to provide totals
Classifier Evaluation Metrics: Accuracy, Error
Rate, Sensitivity and Specificity

A\P   C    ¬C
C     TP   FN    P
¬C    FP   TN    N
      P'   N'    All

Classifier Accuracy, or recognition rate: percentage of test set tuples that are correctly classified
Accuracy = (TP + TN)/All
Error rate: 1 – accuracy, or Error rate = (FP + FN)/All

◼ Class Imbalance Problem:
◼ One class may be rare, e.g., fraud
◼ Significant majority of the negative class and minority of the positive class
◼ Sensitivity: True Positive recognition rate; Sensitivity = TP/P
◼ Specificity: True Negative recognition rate; Specificity = TN/N
Classifier Evaluation Metrics:
Precision and Recall, and F-measures

Precision: exactness – what % of tuples that the classifier labeled as positive are actually positive
Precision = TP / (TP + FP)

Recall: completeness – what % of positive tuples did the classifier label as positive?
Recall = TP / (TP + FN)

Perfect score is 1.0
Inverse relationship between precision & recall

F measure (F1 or F-score): harmonic mean of precision and recall
F1 = 2 × Precision × Recall / (Precision + Recall)

Fβ: weighted measure of precision and recall; assigns β times as much weight to recall as to precision
Fβ = (1 + β²) × Precision × Recall / (β² × Precision + Recall)

Classifier Evaluation Metrics: Example

Actual Class\Predicted class cancer = yes cancer = no Total Recognition(%)


cancer = yes 90 210 300 30.00 (sensitivity)
cancer = no 140 9560 9700 98.56 (specificity)
Total 230 9770 10000 96.50 (accuracy)

Precision = 90/230 = 39.13% Recall = 90/300 = 30.00%

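A Python sketch deriving these metrics (plus precision, recall, and F1) from the confusion-matrix counts:

# Sketch: evaluation metrics from the cancer confusion matrix (TP=90, FN=210, FP=140, TN=9560).
TP, FN, FP, TN = 90, 210, 140, 9560

accuracy    = (TP + TN) / (TP + FN + FP + TN)    # 0.965
error_rate  = 1 - accuracy
sensitivity = TP / (TP + FN)                     # recall for the positive class: 0.30
specificity = TN / (TN + FP)                     # 0.9856
precision   = TP / (TP + FP)                     # 0.3913
recall      = sensitivity
f1          = 2 * precision * recall / (precision + recall)

print(accuracy, sensitivity, round(specificity, 4))
print(round(precision, 4), recall, round(f1, 4))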
Confusion matrix for Multiclass
Confusion matrix for Multiclass
Evaluating Classifier Accuracy: Holdout

Holdout method

Given data is randomly partitioned into two independent sets


Training set (e.g., 2/3) for model construction
Test set (e.g., 1/3) for accuracy estimation

Random sampling: a variation of holdout


Repeat holdout k times, accuracy = avg. of the accuracies from each iteration

Evaluating Classifier Accuracy: Cross Validation

Cross-validation (k-fold, where k = 10 is most popular)


Randomly partition the data into k mutually exclusive subsets,
each approximately equal size
At i-th iteration, use Di as test set and others as training set
Leave-one-out: k folds where k = # of tuples, for small sized
data
*Stratified cross-validation*: folds are stratified so that class
dist. in each fold is approx. the same as that in the initial data
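A scikit-learn sketch of stratified 10-fold cross-validation (scikit-learn is assumed to be installed; the data set and classifier are placeholders chosen for illustration):

# Sketch: 10-fold stratified cross-validation with scikit-learn (estimator choice is illustrative).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
model = DecisionTreeClassifier(random_state=0)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)     # accuracy on each of the 10 folds

print(scores.round(3))
print("mean accuracy:", scores.mean().round(3))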
Evaluating Classifier Accuracy: Bootstrap
• Bootstrap
Works well with small data sets
Samples the given training tuples uniformly with replacement
i.e., each time a tuple is selected, it is equally likely to be selected again
and re-added to the training set
• Several bootstrap methods, and a common one is .632 boostrap
A data set with d tuples is sampled d times, with replacement, resulting in a
training set of d samples. The data tuples that did not make it into the
training set end up forming the test set. About 63.2% of the original data
end up in the bootstrap, and the remaining 36.8% form the test set (since
(1 − 1/d)^d ≈ e⁻¹ = 0.368)
Repeat the sampling procedure k times, overall accuracy of the model:

Estimating Confidence Intervals:
Classifier Models M1 vs. M2

• Suppose we have 2 classifiers, M1 and M2, which one is


better?
• Use 10-fold cross-validation to obtain the mean error rate of each model
• These mean error rates are just estimates of error on the
true population of future data cases
• What if the difference between the 2 error rates is just
attributed to chance?
Use a test of statistical significance
Obtain confidence limits for our error estimates

Estimating Confidence Intervals: Null Hypothesis

• Perform 10-fold cross-validation


• Assume samples follow a t distribution with k–1 degrees of
freedom (here, k=10)
• Use t-test (or Student’s t-test)
• Null Hypothesis: M1 & M2 are the same
• If we can reject null hypothesis, then
we conclude that the difference between M1 & M2 is
statistically significant
Chose model with lower error rate

Estimating Confidence Intervals: t-test

Estimating Confidence Intervals: Table for t -distribution

Symmetric
Significance level, e.g.,
sig = 0.05 or 5% means
M1 & M2 are
significantly different
for 95% of population
Confidence limit, z =
sig/2
Estimating Confidence Intervals: Statistical Significance

Numerical for t-test

S.No  Pretest  Posttest  Difference  Difference²
1       23       35        −12         144
2       25       40        −15         225
3       28       30         −2           4
4       30       35         −5          25
5       25       40        −15         225
6       25       45        −20         400
7       26       30         −4          16
8       25       30         −5          25
9       22       35        −13         169
10      30       40        −10         100
11      35       40         −5          25
12      40       35          5          25
13      35       38         −3           9
14      30       41        −11         121
Sum                        −115        1513

t = −115 / √((14 × 1513 − (−115)²)/13)
t_stat = −4.648

α = 0.05, degrees of freedom = 13, critical value t_crit = ±2.16
Since |t_stat| = 4.648 > 2.16, the test statistic lies in the rejection region, so the null hypothesis is rejected (the pretest and posttest means differ significantly).
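A scipy sketch of the same paired t-test:

# Sketch: paired t-test on the pretest/posttest scores above (scipy assumed available).
from scipy import stats

pretest  = [23, 25, 28, 30, 25, 25, 26, 25, 22, 30, 35, 40, 35, 30]
posttest = [35, 40, 30, 35, 40, 45, 30, 30, 35, 40, 40, 35, 38, 41]

t_stat, p_value = stats.ttest_rel(pretest, posttest)
print(round(t_stat, 4), round(p_value, 4))   # t ~ -4.648; p < 0.05, so reject the null hypothesis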
Model Selection: ROC Curves

• ROC (Receiver Operating Characteristics)


curves: for visual comparison of
classification models
• Shows the trade-off between the true
positive rate and the false positive rate
• The area under the ROC curve is a
measure of the accuracy of the model
◼ Vertical axis
• The closer to the diagonal line (i.e., the represents the true
closer the area is to 0.5), the less positive rate
accurate is the model ◼ Horizontal axis rep.
the false positive rate
◼ The plot also shows a
diagonal line
◼ A model with perfect
accuracy will have an
area of 1.0
Issues Affecting Model Selection

Accuracy
classifier accuracy: predicting class label
Speed
time to construct the model (training time)
time to use the model (classification/prediction time)
Robustness: handling noise and missing values
Scalability: efficiency in disk-resident databases
Interpretability
understanding and insight provided by the model

Neural Network
Neural Networks are computational models that mimic the complex functions of
the human brain. The neural networks consist of interconnected nodes or neurons
that process and learn from data, enabling tasks such as pattern recognition and
decision making in machine learning.

Elements of a Neural Network

Input Layer: This layer accepts input features. It provides information from the
outside world to the network, no computation is performed at this layer, nodes
here just pass on the information(features) to the hidden layer.
Hidden Layer: Nodes of this layer are not exposed to the outer world; they are
part of the abstraction provided by any neural network. The hidden layer performs
all sorts of computation on the features entered through the input layer and
transfers the result to the output layer.
Output Layer: This layer brings the information learned by the network to the outer world.
Activation Function

❑ The activation function decides whether a neuron should be activated or not by


calculating the weighted sum and further adding bias to it.

❑ The purpose of the activation function is to introduce non-linearity into the


output of a neuron.

Need for Non-linear activation function

❑ A neural network without an activation function is essentially just a linear


regression model.

❑ The activation function does the non-linear transformation to the input making it
capable to learn and perform more complex tasks.
Variants of Activation Function

Linear function: f(x) = x

Sigmoid: f(x) = 1 / (1 + e⁻ˣ)
• Value range: 0 to 1

Tanh function, also known as the hyperbolic tangent function.
• Value range: −1 to +1

ReLU stands for Rectified Linear Unit.
• A(x) = max(0, x)
• Value range: [0, ∞)

Leaky ReLU is an improved version of the ReLU function that solves the "dying ReLU" problem, as it has a small positive slope in the negative area: f(x) = max(0.01·x, x)
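A numpy sketch of these activation functions:

# Sketch: common activation functions implemented with numpy.
import numpy as np

def linear(x):      return x
def sigmoid(x):     return 1.0 / (1.0 + np.exp(-x))   # output in (0, 1)
def tanh(x):        return np.tanh(x)                  # output in (-1, 1)
def relu(x):        return np.maximum(0.0, x)          # output in [0, inf)
def leaky_relu(x):  return np.maximum(0.01 * x, x)     # small slope for negative inputs

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for fn in (linear, sigmoid, tanh, relu, leaky_relu):
    print(fn.__name__, fn(x).round(3))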
Variants of Activation Function

Scaled Exponential Linear Unit (SELU)

SELU was defined in self-normalizing networks


and takes care of internal normalization which
means each layer preserves the mean and
variance from the previous layers.
Perceptron

Perceptron is one of the simplest Artificial neural network architectures. It


was introduced by Frank Rosenblatt in 1957. It is the simplest type of
feedforward neural network, consisting of a single layer of input nodes that
are fully connected to a layer of output nodes.
Types of Perceptron

•Single-Layer Perceptron: This type of perceptron is limited to learning linearly separable patterns. It is effective for tasks where the data can be divided into distinct categories by a straight line.

•Multilayer Perceptron: Multilayer perceptrons possess enhanced processing capabilities as they consist of two or more layers, adept at handling more complex patterns and relationships within the data.
Basic Components of Perceptron

•Input Features: The perceptron takes multiple input features; each input
feature represents a characteristic or attribute of the input data.

•Weights: Each input feature is associated with a weight, determining the


significance of each input feature in influencing the perceptron’s output.
During training, these weights are adjusted to learn the optimal values.

•Summation Function: The perceptron calculates the weighted sum of its


inputs using the summation function. The summation function combines the
inputs with their respective weights to produce a weighted sum.

•Activation Function: The weighted sum is then passed through an activation


function. The perceptron uses the summed value as input, compares it with the threshold, and provides the output as 0 or 1.

•Output: The final output of the perceptron, is determined by the activation


function’s result. For example, in binary classification problems, the output
might represent a predicted class (0 or 1).
Basic Components of Perceptron

•Bias: A bias term is often included in the perceptron model. The bias allows
the model to make adjustments that are independent of the input. It is an
additional parameter that is learned during training.

•Learning Algorithm (Weight Update Rule): During training, the perceptron


learns by adjusting its weights and bias based on a learning algorithm. A
common approach is the perceptron learning algorithm, which updates weights
based on the difference between the predicted output and the true output.
Perceptron training Algorithm

1. Initialize the weights, the bias and the learning rate


2. Repeat until stopping condition is false
1. For each training pair indicated by s:t
1. Set each input unit i=1 to n
2. Calculate the output of the network

3. Weight and bias adjustment

4. Train the system till no change in weight


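A minimal numpy sketch of the perceptron training rule on a toy, linearly separable problem (the AND function); the learning rate, initial weights, and epoch limit are arbitrary choices, not values from the slides:

# Sketch: single-layer perceptron trained on the AND function with the perceptron learning rule.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs
t = np.array([0, 0, 0, 1])                       # targets (AND)

w = np.zeros(2)          # weights
b = 0.0                  # bias
eta = 0.1                # learning rate

for epoch in range(100):
    changed = False
    for x_i, t_i in zip(X, t):
        y = 1 if np.dot(w, x_i) + b > 0 else 0   # step activation against a 0 threshold
        if y != t_i:                             # update only on misclassification
            w += eta * (t_i - y) * x_i
            b += eta * (t_i - y)
            changed = True
    if not changed:                              # stop when no weight changes in a full pass
        break

print(w, b)
print([1 if np.dot(w, x_i) + b > 0 else 0 for x_i in X])   # [0, 0, 0, 1]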
Perceptron training Algorithm : Multiple output Class

Initialize the weights, the bias and the learning


rate
Perceptron Example
Multilayer Feed-forward Perceptron
Multilayer Perceptron Learning Algorithm
Multilayer Perceptron Learning Algorithm
Multilayer Perceptron Learning Algorithm
Multilayer Perceptron Learning Algorithm
Multilayer Perceptron Example
Multilayer Perceptron Example
Multilayer Perceptron Example

Δ𝜔𝑖𝑗 = 𝜂𝛿𝑗 𝑂𝑖
Multilayer Perceptron Example

Δ𝜔𝑖𝑗 = 𝜂𝛿𝑗 𝑂𝑖
Multilayer Perceptron Example
Semi-supervised Learning

Semi-supervised learning is a class of machine learning techniques that make use of


both labeled and unlabeled examples when learning a model. Let Xl = {(x1, y1),...,xl ,
yl)} be the set of labeled data and Xu = {xl+1,...,xn} be the set of unlabeled data.
Approaches for semi-supervised learning
Self-training
1. Build the classifier using the labeled data, Xl .
2. Use the classifier to label the unlabeled data, Xu.
3. Select the tuple x ∈ Xu having the highest confidence (most confident prediction).
Add it and its predicted label to Xl .
4. Repeat (i.e., retrain the classifier using the augmented set of labeled data).
Cotraining
1. Define two separate nonoverlapping feature sets for the labeled data, Xl .
2. Train two classifiers, f1 and f2, on the labeled data, where f1 is trained using one
of the feature sets and f2 is trained using the other.
3. Classify Xu with f1 and f2 separately.
4. Add the most confident (x,f1(x)) to the set of labeled data used by f2, where x ∈
Xu. Similarly, add the most confident (x,f2(x)) to the set of labeled data used by f1.
5. Repeat.
Active Learning

Active learning is an iterative type of supervised learning that is suitable for situations
where data are abundant, yet the class labels are scarce or expensive to obtain. The
learning algorithm is active in that it can purposefully query a user (e.g., a human
oracle) for labels. The number of tuples used to learn a concept this way is often much
smaller than the number required in typical supervised learning.
Ensemble Learning/ Classification combination method

Ensemble learning helps improve machine learning results by combining several


models.
Methods for Constructing an Ensemble Classifier

By manipulating the training set


In this approach, multiple training sets are created by resampling the original data according
to some sampling distribution. The sampling distribution determines how likely it is that an
example will be selected for training, and it may vary from one trial to another. A classifier is
then built from each training set using a particular learning algorithm.
Example: Bagging and boosting

By manipulating the input features


In this approach, a subset of input features is chosen to form each training set. The subset
can be either chosen randomly or based on the recommendation of domain experts.
Example: Random forest
Methods for Constructing an Ensemble Classifier

By manipulating the class labels


This method can be used when the number of classes is sufficiently large. The training data is
transformed into a binary class problem by randomly partitioning the class labels into two
disjoint subsets, A0 and A1. Training examples whose class label belongs to the subset A0 are
assigned to class 0, while those that belong to the subset A1 are assigned to class 1. The
relabeled examples are then used to train a base classifier. By repeating the class-relabeling
and model-building steps multiple times, an ensemble of base classifiers is obtained. When a
test example is presented, each base classifier Ci is used to predict its class label. If the test
example is predicted as class 0, then all the classes that belong to A0 will receive a vote.
Conversely, if it is predicted to be class 1, then all the classes that belong to A1 will receive a
vote.
Example: error-correcting output coding
By manipulating the learning algorithm
Many learning algorithms can be manipulated in such a way that applying the algorithm
several times on the same training data may result in different models.
Example, an artificial neural network can produce different models by changing its network
topology or the initial weights of the links between neurons.
General procedure for ensemble method
Bias-Variance Decomposition

bias: measures the average distance between the target position and the location where the projectile hits the floor.
variance: measures the deviation between the location where the projectile hits the floor and the average location where it hits the floor.
noise: the component associated with variability in the target position.
Bias –Variance tradeoff –Machine learning
The bias is the difference between the values predicted by the Machine Learning model and the correct values. High bias gives a large error on both training and testing data. It is recommended that an algorithm be low-biased to avoid the problem of underfitting. With high bias, the predicted values follow a straight-line format and do not fit the data set accurately. Such fitting is known as Underfitting of Data.

The variability of model prediction for a given data point, which tells us the spread of our data, is called the Variance of the model. A model with high variance has a very complex fit to the training data and is thus not able to fit accurately on data it has not seen before. As a result, such models perform very well on training data but have high error rates on test data. When a model is high on variance, it is said to be Overfitting the Data.
Types of Ensemble Learning
1. Voting/ Averaging 2.Bagging 3. Boosting 4. Stacking 5. Random Forest

Voting/ Averaging

In classification problems: majority voting over the class labels predicted by the base models.

In regression problems: average of the predictions (or predicted probabilities) of the base models.
Bagging / Bootstrap aggregation

Bagging trains each base model on a bootstrap sample (drawn with replacement) from the training set and combines their predictions by voting or averaging; since the base models are built independently, it is a parallel approach (see the sketch below).
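For illustration, a short bagging run using scikit-learn's BaggingClassifier, whose default base learner is a decision tree; the synthetic data set and the parameter values are assumptions made for the example:

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)

# Each of the 50 base trees is trained on a bootstrap sample of the
# training data, and their votes are aggregated at prediction time.
bag = BaggingClassifier(n_estimators=50, random_state=0)
print("bagging CV accuracy:", cross_val_score(bag, X, y, cv=5).mean())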
Boosting
Boosting is an ensemble modeling technique
that attempts to build a strong classifier
from the number of weak classifiers. It is
done by building a model by using weak
models in series. Firstly, a model is built
from the training data. Then the second
model is built which tries to correct the
errors present in the first model. This
procedure is continued, and models are
added until either the complete training data
set is predicted correctly, or the maximum
number of models is added.
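A minimal sketch of boosting with scikit-learn's AdaBoostClassifier, which adds weak learners (decision stumps by default) in series, each concentrating on the examples the current ensemble misclassifies; the data set and parameters are illustrative assumptions:

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Weak learners are fitted one after another; later models try to correct
# the errors made by the ensemble built so far.
boost = AdaBoostClassifier(n_estimators=100, random_state=0)
boost.fit(X_tr, y_tr)
print("boosting test accuracy:", boost.score(X_te, y_te))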
Decision Tree
Decision Tree
Random Forest
Gradient Boosting
• Constructs a series of models
Models can be any predictive model
that has a differentiable loss function
Commonly, trees are the chosen model
• Boosting can be viewed as optimizing the
loss function by iterative functional
gradient descent.
• The predictions of the new model are
then added to the ensemble, and the
process is repeated until a stopping
criterion is met.
• Cross Entropy is used as loss function
XGB (Extreme Gradient Boosting)
At a basic level, the algorithm still follows a sequential strategy to improve the
next model based on gradient descent.
Difference between XGB and GBM
• Regularization is a technique in machine learning to avoid overfitting
• GBM tends to have a slower training time than the XGBoost because the latter
algorithm implements parallelization during the training process.

• XGBoost has its own in-built missing data handler, whereas GBM
doesn’t.
Stacked Generalization(Blending)
Class Imbalance Problem
If the numbers of positive and negative instances are approximately equal, the data set is known as a balanced data set.
In many data sets there are a disproportionate number of instances
that belongs to different classes, a property known as
skew or class imbalance.
Example: Rare disease, Card fraud detection
• A correct classification of the rare class often has greater value than a correct classification
of the majority class.
Challenges
1. It can be difficult to find sufficiently many labelled samples of a rare class. A classifier
trained over an imbalanced data set shows a bias towards improving its performance over
the majority class, which is often not the desired behaviour.
2. Accuracy, is not well-suited for evaluating models in the presence of class imbalance in the
test data. Need to use alternative evaluation metrics that are sensitive to the skew and can
capture different criteria of performance than accuracy.
Evaluating Performance with Class Imbalance
Evaluating Performance with Class Imbalance

False Discovery Rate = 1 − precision

Positive Likelihood Ratio = TPR / FPR

F1 measure = 2·TP / (2·TP + FP + FN)

G measure = TP / sqrt((TP + FP)(TP + FN))

The F1 measure is the harmonic mean of precision and recall.

The G measure is the geometric mean of precision and recall (see the sketch below).
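A small helper, written directly from the definitions above, that computes these skew-sensitive measures from confusion-matrix counts (plain Python; the example counts are made up):

from math import sqrt

def imbalance_metrics(tp, fp, fn, tn):
    """Skew-sensitive measures computed from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # TPR
    fpr = fp / (fp + tn)
    return {
        "false_discovery_rate": 1 - precision,
        "positive_likelihood_ratio": recall / fpr,
        "f1": 2 * tp / (2 * tp + fp + fn),                # harmonic mean of precision and recall
        "g_measure": tp / sqrt((tp + fp) * (tp + fn)),    # geometric mean of precision and recall
    }

print(imbalance_metrics(tp=40, fp=10, fn=10, tn=940))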
How to solve the class imbalance problem
Multi-class Problem

Multi-class Problem
Approaches for extending the binary classifiers to handle multiclass problems
Multiclass problem
One Vs Rest
One Vs Rest
One Vs One
One Vs One
Unit- 3

Prepared by: Dr Nivedita Palia

1
What Is Frequent Pattern Analysis?
◼ Frequent pattern: a pattern (a set of items, subsequences, substructures,
etc.) that occurs frequently in a data set

◼ Motivation: Finding inherent regularities in data


◼ What products were often purchased together?— Milk and breads?!
◼ What are the subsequent purchases after buying a PC?

◼ Applications
◼ Basket data analysis, cross-marketing, catalog design, sale campaign
analysis

2
Basic Concepts: Frequent Patterns

◼ itemset: A set of one or more items
◼ k-itemset: X = {x1, …, xk}
◼ (absolute) support, or support count, of X: frequency or number of occurrences of itemset X
◼ (relative) support, s: the fraction of transactions that contain X (i.e., the probability that a transaction contains X)
◼ An itemset X is frequent if X's support is no less than a minsup threshold

Tid   Items bought
10    Beer, Nuts, Diaper
20    Beer, Coffee, Diaper
30    Beer, Diaper, Eggs
40    Nuts, Eggs, Milk
50    Nuts, Coffee, Diaper, Eggs, Milk

(Figure: Venn diagram of customers who buy beer, customers who buy diapers, and customers who buy both)
3
Basic Concepts: Association Rules
Tid   Items bought
10    Beer, Nuts, Diaper
20    Beer, Coffee, Diaper
30    Beer, Diaper, Eggs
40    Nuts, Eggs, Milk
50    Nuts, Coffee, Diaper, Eggs, Milk

◼ Find all the rules X → Y with minimum support and confidence
  ◼ support, s: probability that a transaction contains X ∪ Y
  ◼ confidence, c: conditional probability that a transaction having X also contains Y

Let minsup = 50%, minconf = 50%
Freq. Pat.: Beer:3, Nuts:3, Diaper:4, Eggs:3, {Beer, Diaper}:3
◼ Association rules: (many more!)
  ◼ Beer → Diaper (60%, 100%)
  ◼ Diaper → Beer (60%, 75%)

(Figure: Venn diagram of customers who buy beer, customers who buy diapers, and customers who buy both)
4
The Downward Closure Property and Scalable
Mining Methods
◼ The downward closure property of frequent patterns
◼ Any subset of a frequent itemset must be frequent

◼ If {beer, diaper, nuts} is frequent, so is {beer,

diaper}
◼ i.e., every transaction having {beer, diaper, nuts} also

contains {beer, diaper}


◼ Scalable mining methods: Three major approaches
◼ Apriori

◼ Freq. pattern growth

◼ Vertical data format approach

5
Apriori: A Candidate Generation & Test Approach

◼ Apriori pruning principle: If there is any itemset which is


infrequent, its superset should not be generated/tested!
(Agrawal & Srikant @VLDB’94, Mannila, et al. @ KDD’ 94)
◼ Method:
◼ Initially, scan DB once to get frequent 1-itemset
◼ Generate length (k+1) candidate itemsets from length k
frequent itemsets
◼ Test the candidates against DB
◼ Terminate when no frequent or candidate set can be
generated

6
Implementation of Apriori

◼ How to generate candidates?


◼ Step 1: self-joining Lk
◼ Step 2: pruning
◼ Example of Candidate-generation
◼ L3={abc, abd, acd, ace, bcd}
◼ Self-joining: L3*L3
◼ abcd from abc and abd
◼ acde from acd and ace
◼ Pruning:
◼ acde is removed because ade is not in L3
◼ C4 = {abcd}
7
The Apriori Algorithm (Pseudo-Code)
Ck: Candidate itemset of size k
Lk : frequent itemset of size k

L1 = {frequent items};
for (k = 1; Lk != ∅; k++) do begin
Ck+1 = candidates generated from Lk;
for each transaction t in database do
increment the count of all candidates in Ck+1 that
are contained in t
Lk+1 = candidates in Ck+1 with min_support
end
return ∪k Lk;
8
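A compact Python rendering of the pseudo-code above: self-join of Lk for candidate generation, pruning of candidates that have an infrequent k-subset, and support counting against the database. The toy transactions mirror the worked example that follows; this is a sketch, not an optimized implementation:

from itertools import combinations

def apriori(transactions, min_support):
    """Return all frequent itemsets (as frozensets) with their support counts."""
    transactions = [set(t) for t in transactions]
    counts = {}
    for t in transactions:                       # L1: frequent 1-itemsets
        for item in t:
            counts[frozenset([item])] = counts.get(frozenset([item]), 0) + 1
    Lk = {i: c for i, c in counts.items() if c >= min_support}
    frequent = dict(Lk)
    k = 1
    while Lk:
        # candidate generation: self-join Lk, then prune candidates that
        # contain an infrequent k-subset (downward closure)
        items = list(Lk)
        Ck = set()
        for a, b in combinations(items, 2):
            cand = a | b
            if len(cand) == k + 1 and all(frozenset(s) in Lk
                                          for s in combinations(cand, k)):
                Ck.add(cand)
        # support counting: one pass over the transaction database
        counts = {c: sum(1 for t in transactions if c <= t) for c in Ck}
        Lk = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(Lk)
        k += 1
    return frequent

db = [["A", "C", "D"], ["B", "C", "E"], ["A", "B", "C", "E"], ["B", "E"]]
print(apriori(db, min_support=2))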
Candidate Generation: An SQL Implementation
◼ SQL Implementation of candidate generation
◼ Suppose the items in Lk-1 are listed in an order
◼ Step 1: self-joining Lk-1
insert into Ck
select p.item1, p.item2, …, p.itemk-1, q.itemk-1
from Lk-1 p, Lk-1 q
where p.item1=q.item1, …, p.itemk-2=q.itemk-2, p.itemk-1 <
q.itemk-1
◼ Step 2: pruning
forall itemsets c in Ck do
forall (k-1)-subsets s of c do
if (s is not in Lk-1) then delete c from Ck

9
The Apriori Algorithm—An Example
Supmin = 2

Database TDB
Tid   Items
10    A, C, D
20    B, C, E
30    A, B, C, E
40    B, E

1st scan → C1: {A}:2, {B}:3, {C}:3, {D}:1, {E}:3
L1: {A}:2, {B}:3, {C}:3, {E}:3

C2 (candidates): {A,B}, {A,C}, {A,E}, {B,C}, {B,E}, {C,E}
2nd scan → C2 counts: {A,B}:1, {A,C}:2, {A,E}:1, {B,C}:2, {B,E}:3, {C,E}:2
L2: {A,C}:2, {B,C}:2, {B,E}:3, {C,E}:2

C3: {B,C,E}
3rd scan → L3: {B,C,E}:2
10
The Apriori Algorithm—Question

11
12
Rule Generation

13
14
How to Count Supports of Candidates?

◼ Why counting supports of candidates, a problem?


◼ The total number of candidates can be very huge
◼ One transaction may contain many candidates
◼ Method:
◼ Brute-force approach
◼ Enumerate the item-sets contained in each
transaction
◼ Support Counting using Hashing

15
Brute –force approach
◼ Scan the database of transactions to determine the
support of each candidate itemset
◼ Must match every candidate itemset against every transaction,
which is an expensive operation

16
Support Counting using Enumeration

Lexicographic
ordering

17
Support Counting Using a Hash Tree

Suppose you have 15 candidate itemsets of length 3:


{1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8}, {1 5 9}, {1 3 6}, {2 3 4}, {5 6 7}, {3 4 5}, {3 5 6}, {3
5 7}, {6 8 9}, {3 6 7}, {3 6 8}
You need:
• Hash function: h(p) = p mod 3
• Max leaf size: max number of itemsets stored in a leaf node (if number of candidate
itemsets exceeds max leaf size, split the node)

(Figure: candidate hash tree — each 3-itemset is hashed item by item with h(p) = p mod 3 (branches 1,4,7 / 2,5,8 / 3,6,9) and the 15 candidates are stored in the leaves)
Support Counting Using a Hash Tree

(Figure: the same candidate hash tree, highlighting the subtree reached by hashing on items 1, 4, or 7)
Support Counting Using a Hash Tree
(Figure: support counting for transaction {1 2 3 5 6} — the transaction's items are hashed level by level (1+ 2356, 2+ 356, 3+ 56, 12+ 356, 13+ 56, 15+ 6, …) so that only the leaves whose candidates could be contained in the transaction are visited)
Improvement of the Apriori Method

◼ Major computational challenges


◼ Multiple scans of transaction database
◼ Huge number of candidates
◼ Tedious workload of support counting for candidates
◼ Improving the efficiency of Apriori Algorithm:

21
Hash based Techniques

22
Hash based Techniques

23
Transaction Reduction

24
Partitioning

25
Dynamic Itemset Counting

26
Dynamic Itemset Counting

27
Dynamic Itemset Counting

28
Dynamic Itemset Counting

29
Sampling

30
Closed itemset and Frequent itemset

31
Closed itemset and Frequent itemset

32
Closed itemset and Frequent itemset

33
Closed Patterns and Max-Patterns

34
Closed Patterns and Max-Patterns

35
Closed Patterns and Max-Patterns

36
Closed Patterns and Max-Patterns

37
FP-Growth Algorithm
The two primary drawbacks of the Apriori Algorithm are:
1. At each step, candidate sets have to be built.

2. To build the candidate sets, the algorithm has to repeatedly scan

the database.

These two properties inevitably make the algorithm slower. To


overcome these redundant steps, a new association-rule mining
algorithm was developed named Frequent Pattern Growth
Algorithm. It overcomes the disadvantages of the Apriori algorithm
by storing all the transactions in a Trie Data Structure.

38
The Frequent Pattern Growth Mining Method

◼ Idea: Frequent pattern growth


◼ Recursively grow frequent patterns by pattern and

database partition
◼ Method
◼ For each frequent item, construct its conditional

pattern-base, and then its conditional FP-tree


◼ Repeat the process on each newly created conditional

FP-tree
◼ Until the resulting FP-tree is empty, or it contains only

one path—single path will generate all the


combinations of its sub-paths, each of which is a
frequent pattern

39
FP-Growth Algorithm

Consider a data set.

Step 1: Calculate the frequency of each item (let minimum support = 3).

Step 2: Construct a Frequent Pattern set, which contains all the items whose frequency is greater than or equal to the minimum support.

Step 3: Construct an Ordered-item set.

40
FP-Growth Algorithm
Step 4: Construct Trie Data Structure.
a) Inserting the set {K, E, M, O, Y} b) Inserting the set {K, E, O, Y} c) Inserting the set {K, E, M}
d) Inserting the set {K, M, Y} e) Inserting the set {K, E, O}

(Figure: the FP-tree after each insertion a–e)

41
FP-Growth Algorithm
Step 5: Compute Conditional Pattern Base. It is path labels of all the paths
which lead to any node of the given item in the frequent-pattern tree.
Step6: Compute Conditional Frequent Pattern Tree. It is done by taking the set of
elements that is common in all the paths in the Conditional Pattern Base of that item and
calculating its support count by summing the support counts of all the paths in the
Conditional Pattern Base.

Step 7: Compute Frequent Pattern Rule by pairing items of conditional frequent


pattern tree set
Step 8: Generate association rule and determine which one is valid on the
basics of confidence . For example, K->Y and Y->K

42
Advantages of the Pattern Growth Approach

◼ Divide-and-conquer:
◼ Decompose both the mining task and DB according to the
frequent patterns obtained so far
◼ Lead to focused search of smaller databases
◼ Other factors
◼ No candidate generation, no candidate test
◼ Compressed database: FP-tree structure
◼ No repeated scan of entire database
◼ Basic ops: counting local freq items and building sub FP-tree, no
pattern search and matching

43
ECLAT: Mining by Exploring Vertical Data
Format
◼ The ECLAT algorithm stands for Equivalence Class
Clustering and bottom-up Lattice Traversal.
◼ Vertical format: t(AB) = {T11, T25, …}
◼ tid-list: list of trans.-ids containing an itemset
◼ Deriving frequent patterns based on vertical intersections
◼ t(X) = t(Y): X and Y always happen together
◼ t(X)  t(Y): transaction having X always has Y
◼ Using diffset to accelerate mining
◼ Only keep track of differences of tids
◼ t(X) = {T1, T2, T3}, t(XY) = {T1, T3}
◼ Diffset (XY, X) = {T2}

44
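A brief sketch of the vertical-format idea: keep a tid-list per item and obtain the support of a larger itemset by intersecting tid-lists. The toy transactions reuse the earlier Apriori example; diffsets are omitted for simplicity:

from collections import defaultdict
from itertools import combinations

transactions = {10: {"A", "C", "D"}, 20: {"B", "C", "E"},
                30: {"A", "B", "C", "E"}, 40: {"B", "E"}}

# Build the vertical representation: item -> set of transaction ids
tidlists = defaultdict(set)
for tid, items in transactions.items():
    for item in items:
        tidlists[item].add(tid)

min_support = 2
# The support of a 2-itemset is the size of the intersection of the tid-lists
for a, b in combinations(sorted(tidlists), 2):
    tids = tidlists[a] & tidlists[b]
    if len(tids) >= min_support:
        print({a, b}, "tid-list:", sorted(tids), "support:", len(tids))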
ECLAT Algorithm
Consider the following transaction records:
The given data is a boolean matrix
where for each cell (i, j), the value
denotes whether the j’th item is
included in the i’th transaction or not. 1
means true while 0 means false.

minimum support = 2

K=1

45
Eclat algorithm
Which pattern are interesting: pattern evaluation method
Sifting through the patterns to identify the most interesting ones is not a
trivial task because “one person’s trash might be another person’s treasure.”
It is therefore important to establish a set of well-accepted criteria for
evaluating the quality of association patterns.

The first set of criteria can be established through statistical arguments. Patterns
that involve a set of mutually independent items or cover very few transactions are
considered uninteresting because they may capture spurious relationships in the data.
Such patterns can be eliminated by applying an objective interestingness measure that uses statistics derived from the data to determine
whether a pattern is interesting. Examples of objective interestingness measures
include support, confidence, and correlation.

The second set of criteria can be established through subjective


arguments. A pattern is considered subjectively uninteresting unless it reveals
unexpected information about the data or provides useful knowledge that
can lead to profitable actions.
47
Interestingness Measure: Correlations (Lift)
◼ play basketball ⇒ eat cereal [40%, 66.7%] is misleading
◼ The overall % of students eating cereal is 75% > 66.7%.
◼ play basketball ⇒ not eat cereal [20%, 33.3%] is more accurate,
although with lower support and confidence
◼ Measure of dependent/correlated events: lift

lift(A, B) = P(A ∪ B) / (P(A) P(B))

              Basketball   Not basketball   Sum (row)
Cereal            2000          1750           3750
Not cereal        1000           250           1250
Sum (col.)        3000          2000           5000

lift(B, C)  = (2000/5000) / ((3000/5000) × (3750/5000)) = 0.89
lift(B, ¬C) = (1000/5000) / ((3000/5000) × (1250/5000)) = 1.33

48
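The lift values above can be checked directly from the contingency table (plain Python):

n = 5000
n_basketball = 3000
n_cereal = 3750
n_not_cereal = 1250
n_basketball_and_cereal = 2000
n_basketball_and_not_cereal = 1000

def lift(n_ab, n_a, n_b, n_total):
    # lift(A, B) = P(A and B) / (P(A) * P(B))
    return (n_ab / n_total) / ((n_a / n_total) * (n_b / n_total))

print(round(lift(n_basketball_and_cereal, n_basketball, n_cereal, n), 2))          # ~0.89
print(round(lift(n_basketball_and_not_cereal, n_basketball, n_not_cereal, n), 2))  # ~1.33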
Lift
◼ If some rule had a lift of 1, it would imply that the probability of
occurrence of the antecedent and that of the consequent are
independent of each other. When two events are independent of each
other, no rule can be drawn involving those two events.
◼ If the lift is > 1, like it is here for Rules 1 and 2, that lets us know the
degree to which those two occurrences are dependent on one another,
and makes those rules potentially useful for predicting the consequent
in future data sets.
◼ If the lift is <1, then the occurrence of A is negatively correlated with
the occurrence of B, meaning that the occurrence of one likely leads to
absence of the other one.

49
2

Table with Expected Values

Because the χ2 value is greater than 1, and the observed value of the slot (game, video) = 4000 is less than the expected value of 4500, buying game and buying video are negatively correlated.

50
Null transaction

M- milk, c-coffee
A null-transaction is a transaction that does not contain any of the
itemset being examined.

51
Are lift and χ2 Good Measures of Correlation?

◼ Over 20
interestingness
measures have
been proposed
Which are good
ones?

52
Which Null-Invariant Measure Is Better?
◼ IR (Imbalance Ratio): measure the imbalance of two
itemsets A and B in rule implications

◼ Kulczynski and Imbalance Ratio (IR) together present a


clear picture for all the three datasets D4 through D6
◼ D4 is balanced & neutral

◼ D5 is imbalanced & neutral

◼ D6 is very imbalanced & neutral


Continuous and Categorical Attributes
How to apply association analysis to non-asymmetric binary
variables?

Example of Association Rule:


{Gender=Male, Age ∈ [21,30)} → {No of hours online ≥ 10}
Handling Categorical Attributes

◼ Example: Internet Usage Data

{Level of Education=Graduate, Online


Banking=Yes}
→ {Privacy Concerns = Yes}
Handling Categorical Attributes

◼ Introduce a new “item” for each distinct


attribute-value pair
Handling Categorical Attributes

◼ Some attributes can have many possible values


◼ Many of their attribute values have very low

support
◼ Potential solution: Aggregate the low-support
attribute values
Handling Categorical Attributes
◼ Distribution of attribute values can be highly skewed
◼ Example: 85% of survey participants own a computer at home
◼ Most records have Computer at home = Yes
◼ Computation becomes expensive; many frequent itemsets involving
the binary item (Computer at home = Yes)
◼ Potential solution:
◼ discard the highly frequent items

◼ Use alternative measures such as h-confidence

◼ Computational Complexity
◼ Binarizing the data increases the number of items
◼ But the width of the “transactions” remain the same as the
number of original (non-binarized) attributes
◼ Produce more frequent itemsets but maximum size of frequent
itemset is limited to the number of original attributes
Handling Continuous Attributes
◼ Different methods:
◼ Discretization-based

◼ Statistics-based

◼ Non-discretization based

◼ minApriori

◼ Different kinds of rules can be produced:


◼ {Age ∈ [21,30), No of hours online ∈ [10,20)}
   → {Chat Online = Yes}

◼ {Age ∈ [15,30), Covid-Positive = Yes}
   → Full_recovery
Discretization-based Methods
Discretization-based Methods

◼ Unsupervised:
  ◼ Equal-width binning: <1 2 3> <4 5 6> <7 8 9>
  ◼ Equal-depth binning: <1 2> <3 4 5 6 7> <8 9>
  ◼ Cluster-based
◼ Supervised discretization

Continuous attribute, v      1    2    3    4    5    6    7    8    9
Chat Online = Yes            0    0   20   10   20    0    0    0    0
Chat Online = No           150  100    0    0    0  100  100  150  100
                           (supervised bins: bin1, bin2, bin3)


Discretization Issues
◼ Interval width
(Figure: (a) original age data with three patterns A, B, and C located in high-support regions over ages 10–70; (b) binning with width 30 years; (c) binning with width 2 years)
Discretization Issues
◼ Interval too wide (e.g., Bin size= 30)
◼ May merge several disparate patterns
◼ Patterns A and B are merged together
◼ May lose some of the interesting patterns
◼ Pattern C may not have enough confidence

◼ Interval too narrow (e.g., Bin size = 2)


◼ Pattern A is broken up into two smaller patterns

◼ Can recover the pattern by merging adjacent subpatterns


◼ Pattern B is broken up into smaller patterns

◼ Cannot recover the pattern by merging adjacent subpatterns


◼ Some windows may not meet support threshold
Discretization: all possible intervals
Number of intervals = k
Total number of Adjacent intervals = k(k-1)/2

◼ Execution time
◼ If the range is partitioned into k intervals, there are O(k2) new items
◼ If an interval [a,b) is frequent, then all intervals that subsume [a,b)
must also be frequent
◼ E.g.: if {Age ∈ [21,25), Chat Online=Yes} is frequent,
then {Age ∈ [10,50), Chat Online=Yes} is also frequent
◼ Improve efficiency:
◼ Use maximum support to avoid intervals that are too wide
Statistics-based Methods
◼ Example:
{Income > 100K, Online Banking=Yes} → Age: μ = 34
◼ Rule consequent consists of a continuous variable,
characterized by their statistics
◼ mean, median, standard deviation, etc.
◼ Approach:
◼ Withhold the target attribute from the rest of the data
◼ Extract frequent itemsets from the rest of the attributes
◼ Binarize the continuous attributes (except for the target attribute)
◼ For each frequent itemset, compute the corresponding descriptive
statistics of the target attribute
◼ Frequent itemset becomes a rule by introducing the target variable
as rule consequent
◼ Apply statistical test to determine interestingness of the rule
Statistics-based Methods

Frequent Itemsets:                              Association Rules:

{Male, Income > 100K}                           {Male, Income > 100K} → Age: μ = 30
{Income < 40K, No hours ∈ [10,15)}              {Income < 40K, No hours ∈ [10,15)} → Age: μ = 24
{Income > 100K, Online Banking = Yes}           {Income > 100K, Online Banking = Yes} → Age: μ = 34
….                                              ….
Statistics-based Methods

◼ How to determine whether an association rule is interesting?
  ◼ Compare the statistics for the segment of the population covered by the rule versus the segment of the population not covered by the rule:

        A ⇒ B: μ    versus    ¬A ⇒ B: μ′

        Z = (μ′ − μ − Δ) / sqrt(s1²/n1 + s2²/n2)

◼ Statistical hypothesis testing:
  ◼ Null hypothesis: H0: μ′ = μ + Δ
  ◼ Alternative hypothesis: H1: μ′ > μ + Δ
  ◼ Z has zero mean and variance 1 under the null hypothesis
Statistics-based Methods
◼ Example:
r: Covid-Positive & Quick_Recovery = Yes → Age: μ = 23
  ◼ The rule is interesting if the difference between μ and μ′ is more than 5 years (i.e., Δ = 5)
  ◼ For r, suppose n1 = 50, s1 = 3.5
  ◼ For r′ (complement): n2 = 250, s2 = 6.5, μ′ = 30

        Z = (μ′ − μ − Δ) / sqrt(s1²/n1 + s2²/n2) = (30 − 23 − 5) / sqrt(3.5²/50 + 6.5²/250) = 3.11

◼ For 1-sided test at 95% confidence level, critical Z-value for


rejecting null hypothesis is 1.64.
◼ Since Z is greater than 1.64, r is an interesting rule
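The Z value in the example can be reproduced in a few lines (plain Python):

from math import sqrt

def z_statistic(mu_prime, mu, delta, s1, n1, s2, n2):
    # Z = (mu' - mu - delta) / sqrt(s1^2/n1 + s2^2/n2)
    return (mu_prime - mu - delta) / sqrt(s1**2 / n1 + s2**2 / n2)

z = z_statistic(mu_prime=30, mu=23, delta=5, s1=3.5, n1=50, s2=6.5, n2=250)
print(round(z, 2))  # ~3.11, which exceeds the 1.64 critical value at the 95% level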
Min-Apriori
Document-term matrix:

TID W1 W2 W3 W4 W5
D1 2 2 0 0 1
D2 0 0 1 2 2
D3 2 3 0 0 0
D4 0 0 1 0 1
D5 1 1 1 0 2
Example:
W1 and W2 tend to appear together in the same document
Min-Apriori
◼ Data contains only continuous attributes of the same
“type”
◼ e.g., frequency of words in a document
TID W1 W2 W3 W4 W5
D1 2 2 0 0 1
D2 0 0 1 2 2
D3 2 3 0 0 0
D4 0 0 1 0 1
D5 1 1 1 0 2
◼ Potential solution:
◼ Convert into 0/1 matrix and then apply existing algorithms
◼ lose word frequency information
◼ Discretization does not apply as users want association among words
based on how frequently they co-occur, not if they occur with similar
frequencies
Min-Apriori

◼ How to determine the support of a word?


◼ If we simply sum up its frequency, support

count will be greater than total number of


documents!
◼ Normalize the word vectors – e.g., using L1 norms
◼ Each word has a support equals to 1.0
TID W1 W2 W3 W4 W5 TID W1 W2 W3 W4 W5
Normalize
D1 2 2 0 0 1 D1 0.40 0.33 0.00 0.00 0.17
D2 0 0 1 2 2 D2 0.00 0.00 0.33 1.00 0.33
D3 2 3 0 0 0 D3 0.40 0.50 0.00 0.00 0.00
D4 0 0 1 0 1 D4 0.00 0.00 0.33 0.00 0.17
D5 1 1 1 0 2 D5 0.20 0.17 0.33 0.00 0.33
Min-Apriori

◼ New definition of support:

        sup(C) = Σ_{i ∈ T} min_{j ∈ C} D(i, j)

TID W1 W2 W3 W4 W5 Example:
D1 0.40 0.33 0.00 0.00 0.17
Sup(W1,W2)
D2 0.00 0.00 0.33 1.00 0.33
D3 0.40 0.50 0.00 0.00 0.00 = .33 + 0 + .4 + 0 + 0.17
D4 0.00 0.00 0.33 0.00 0.17 = 0.9
D5 0.20 0.17 0.33 0.00 0.33
Anti-monotone property of Support

TID W1 W2 W3 W4 W5
D1 0.40 0.33 0.00 0.00 0.17
D2 0.00 0.00 0.33 1.00 0.33
D3 0.40 0.50 0.00 0.00 0.00
D4 0.00 0.00 0.33 0.00 0.17
D5 0.20 0.17 0.33 0.00 0.33

Example:
Sup(W1) = 0.4 + 0 + 0.4 + 0 + 0.2 = 1
Sup(W1, W2) = 0.33 + 0 + 0.4 + 0 + 0.17 = 0.9
Sup(W1, W2, W3) = 0 + 0 + 0 + 0 + 0.17 = 0.17
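A direct implementation of this Min-Apriori support definition, reproducing the numbers above (the normalized document-term matrix is copied from the slide):

D = {  # normalized document-term matrix: document -> {word: weight}
    "D1": {"W1": 0.40, "W2": 0.33, "W3": 0.00, "W4": 0.00, "W5": 0.17},
    "D2": {"W1": 0.00, "W2": 0.00, "W3": 0.33, "W4": 1.00, "W5": 0.33},
    "D3": {"W1": 0.40, "W2": 0.50, "W3": 0.00, "W4": 0.00, "W5": 0.00},
    "D4": {"W1": 0.00, "W2": 0.00, "W3": 0.33, "W4": 0.00, "W5": 0.17},
    "D5": {"W1": 0.20, "W2": 0.17, "W3": 0.33, "W4": 0.00, "W5": 0.33},
}

def min_apriori_support(itemset):
    # sup(C) = sum over documents of the minimum weight of the items in C
    return sum(min(row[w] for w in itemset) for row in D.values())

print(round(min_apriori_support({"W1"}), 2))              # 1.0
print(round(min_apriori_support({"W1", "W2"}), 2))        # 0.9
print(round(min_apriori_support({"W1", "W2", "W3"}), 2))  # 0.17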
Sequential Patterns Examples of Sequence
◼ Sequence of different transactions by a customer
at an online store:
< {Digital Camera,iPad} {memory card} {headphone,iPad cover} >

◼ Sequence of initiating events causing the nuclear


accident at 3-mile Island:
(http://stellar-one.com/nuclear/staff_reports/summary_SOE_the_initiating_event.htm)
< {clogged resin} {outlet valve closure} {loss of feedwater}
{condenser polisher outlet valve shut} {booster pumps trip}
{main waterpump trips} {main turbine trips} {reactor pressure
increases}>

◼ Sequence of books checked out at a library:


<{Fellowship of the Ring} {The Two Towers} {Return of the King}>
Sequential Pattern Discovery: Examples
◼ In telecommunications alarm logs,
– Inverter_Problem:

(Excessive_Line_Current) (Rectifier_Alarm) --> (Fire_Alarm)

◼ In point-of-sale transaction sequences,


– Computer Bookstore:

(Intro_To_Visual_C) (C++_Primer) -->


(Perl_for_dummies,Tcl_Tk)
– Athletic Apparel Store:

(Shoes) (Racket, Racketball) --> (Sports_Jacket)


Sequence Data
Sequence Sequence Element Event
Database (Transaction) (Item)
Customer Purchase history of a given A set of items bought by Books, diary products,
customer a customer at time t CDs, etc

Web Data Browsing activity of a A collection of files Home page, index


particular Web visitor viewed by a Web visitor page, contact info, etc
after a single mouse click
Event data History of events generated Events triggered by a Types of alarms
by a given sensor sensor at time t generated by sensors

Genome DNA sequence of a An element of the DNA Bases A,T,G,C


sequences particular species sequence

(Figure: a sequence is an ordered list of elements (transactions); each element contains one or more events (items))
Sequence Data

Sequence Database:

Sequence ID   Timestamp   Events
A             10          2, 3, 5
A             20          6, 1
A             23          1
B             11          4, 5, 6
B             17          2
B             21          7, 8, 1, 2
B             28          1, 6
C             14          1, 8, 7

(Figure: timelines for objects A, B, and C, with each object's events plotted at its timestamps on a 10–35 time axis)
Sequence Data vs. Market-basket Data
Sequence Database: Market- basket Data

Customer Date Items bought Events


A 10 2, 3, 5 2, 3, 5
A 20 1,6 1,6
A 23 1 1
B 11 4, 5, 6 4,5,6
B 17 2 2
B 21 1,2,7,8 1,2,7,8
B 28 1, 6 1,6
C 14 1,7,8 1,7,8
Formal Definition of a Sequence
◼ A sequence is an ordered list of elements
s = < e1 e2 e3 … >
◼ Each element contains a collection of events
(items)
ei = {i1, i2, …, ik}

◼ Length of a sequence, |s|, is given by the number


of elements in the sequence
◼ A k-sequence is a sequence that contains k
events (items)
◼ <{a,b} {a}> has a length of 2 and it is a 3-sequence
Formal Definition of a Subsequence
◼ A sequence t: <a1 a2 … an> is contained in another sequence s:
<b1 b2 … bm> (m ≥ n) if there exist integers
i1 < i2 < … < in such that a1 ⊆ bi1, a2 ⊆ bi2, …, an ⊆ bin
◼ Illustrative Example:
s: b1 b2 b3 b4 b5
t: a1 a2 a3
t is a subsequence of s if a1 ⊆ b2, a2 ⊆ b3, a3 ⊆ b5.
Data sequence Subsequence Contain?
< {2,4} {3,5,6} {8} > < {2} {8} > Yes
< {1,2} {3,4} > < {1} {2} > No
< {2,4} {2,4} {2,5} > < {2} {4} > Yes
<{2,4} {2,5} {4,5}> < {2} {4} {5} > No
<{2,4} {2,5} {4,5}> < {2} {5} {5} > Yes
<{2,4} {2,5} {4,5}> < {2, 4, 5} > No
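A small helper implementing this containment test: it matches each element of the candidate subsequence against the earliest later element of the data sequence that contains it, which is sufficient for deciding containment. The calls reproduce rows of the table above:

def contains(data_seq, sub_seq):
    """True if sub_seq is contained in data_seq (elements matched as subsets, in order)."""
    i = 0  # index into data_seq
    for element in sub_seq:
        element = set(element)
        while i < len(data_seq) and not element <= set(data_seq[i]):
            i += 1
        if i == len(data_seq):
            return False
        i += 1  # the next element of sub_seq must match a strictly later element
    return True

print(contains([{2, 4}, {3, 5, 6}, {8}], [{2}, {8}]))        # Yes
print(contains([{1, 2}, {3, 4}], [{1}, {2}]))                # No
print(contains([{2, 4}, {2, 5}, {4, 5}], [{2}, {4}, {5}]))   # No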
Sequential Pattern Mining: Definition
◼ The support of a subsequence w is defined as the
fraction of data sequences that contain w
◼ A sequential pattern is a frequent subsequence
(i.e., a subsequence whose support is ≥ minsup)

◼ Given:
◼ a database of sequences

◼ a user-specified minimum support threshold,

minsup
◼ Task:
◼ Find all subsequences with support ≥ minsup
Sequential Pattern Mining: Example

Object Timestamp Events


A 1 1,2,4 Minsup = 50%
A 2 2,3
A 3 5 Examples of Frequent Subsequences:
B 1 1,2
B 2 2,3,4 < {1,2} > s=60%
< {2,3} > s=60%
C 1 1, 2
< {2,4}> s=80%
C 2 2,3,4
< {3} {5}> s=80%
C 3 2,4,5 < {1} {2} > s=80%
D 1 2 < {2} {2} > s=60%
D 2 3, 4 < {1} {2,3} > s=60%
D 3 4, 5 < {2} {2,3} > s=60%
E 1 1, 3 < {1,2} {2,3} > s=60%
E 2 2, 4, 5
Sequence Data vs. Market-basket Data
Sequence Database: Market- basket Data

Customer Date Items bought Events


A 10 2, 3, 5 2, 3, 5
A 20 1,6 1,6
A 23 1 1
B 11 4, 5, 6 4,5,6
B 17 2 2
B 21 1,2,7,8 1,2,7,8
B 28 1, 6 1,6
C 14 1,7,8 1,7,8

{2} -> {1} (1,8) -> (7)


Extracting Sequential Patterns

◼ Given n events: i1, i2, i3, …, in


◼ Candidate 1-subsequences:
<{i1}>, <{i2}>, <{i3}>, …, <{in}>

◼ Candidate 2-subsequences:
<{i1, i2}>, <{i1, i3}>, …,
<{i1} {i1}>, <{i1} {i2}>, …, <{in} {in}>

◼ Candidate 3-subsequences:
<{i1, i2 , i3}>, <{i1, i2 , i4}>, …,
<{i1, i2} {i1}>, <{i1, i2} {i2}>, …,
<{i1} {i1 , i2}>, <{i1} {i1 , i3}>, …,
<{i1} {i1} {i1}>, <{i1} {i1} {i2}>, …
Extracting Sequential Patterns: Simple example

◼ Given 2 events: a, b

◼ Candidate 1-subsequences:
<{a}>, <{b}>

◼ Candidate 2-subsequences:
<{a} {a}>, <{a} {b}>, <{b} {a}>, <{b} {b}>, <{a, b}>

◼ Candidate 3-subsequences:
<{a} {a} {a}>, <{a} {a} {b}>, <{a} {b} {a}>, <{a} {b} {b}>,
<{b} {b} {b}>, <{b} {b} {a}>, <{b} {a} {b}>, <{b} {a} {a}>

(Figure: lattice of item-set patterns (), (a), (b), (a,b))
Generalized Sequential Pattern (GSP)
◼ Step 1:
◼ Make the first pass over the sequence database D to yield all the 1-
element frequent sequences
◼ Step 2:
Repeat until no new frequent sequences are found
◼ Candidate Generation:
◼ Merge pairs of frequent subsequences found in the (k-1)th pass to generate
candidate sequences that contain k items
◼ Candidate Pruning:
◼ Prune candidate k-sequences that contain infrequent (k-1)-subsequences
◼ Support Counting:
◼ Make a new pass over the sequence database D to find the support for these
candidate sequences
◼ Candidate Elimination:
◼ Eliminate candidate k-sequences whose actual support is less than minsup
Candidate Generation
◼ Base case (k=2):
◼ Merging two frequent 1-sequences <{i1}> and <{i2}> will produce the
following candidate 2-sequences: <{i1} {i1}>, <{i1} {i2}>, <{i2} {i2}>,
<{i2} {i1}> and <{i1, i2}>. (Note: <{i1}> can be merged with itself to
produce: <{i1} {i1}>)

◼ General case (k>2):


◼ A frequent (k-1)-sequence w1 is merged with another frequent
(k-1)-sequence w2 to produce a candidate k-sequence if the subsequence
obtained by removing an event from the first element in w1 is the same as
the subsequence obtained by removing an event from the last element in
w2
Candidate Generation
◼ Base case (k=2):
◼ Merging two frequent 1-sequences <{i1}> and <{i2}> will produce the
following candidate 2-sequences: <{i1} {i1}>, <{i1} {i2}>, <{i2} {i2}>,
<{i2} {i1}> and <{i1 i2}>. (Note: <{i1}> can be merged with itself to
produce: <{i1} {i1}>)

◼ General case (k>2):


◼ A frequent (k-1)-sequence w1 is merged with another frequent
(k-1)-sequence w2 to produce a candidate k-sequence if the subsequence
obtained by removing an event from the first element in w1 is the same as
the subsequence obtained by removing an event from the last element in
w2
◼ The resulting candidate after merging is given by extending the
sequence w1 as follows-
◼ If the last element of w2 has only one event, append it to w1
◼ Otherwise add the event from the last element of w2 (which is absent
in the last element of w1) to the last element of w1
Candidate Generation Examples
◼ Merging w1=<{1 2 3} {4 6}> and w2 =<{2 3} {4 6} {5}>
produces the candidate sequence < {1 2 3} {4 6} {5}> because the
last element of w2 has only one event
◼ Merging w1=<{1} {2 3} {4}> and w2 =<{2 3} {4 5}>
produces the candidate sequence < {1} {2 3} {4 5}> because the
last element in w2 has more than one event
◼ Merging w1=<{1 2 3} > and w2 =<{2 3 4} >
produces the candidate sequence < {1 2 3 4}> because the last
element in w2 has more than one event
◼ We do not have to merge the sequences
w1 =<{1} {2 6} {4}> and w2 =<{1} {2} {4 5}>
to produce the candidate < {1} {2 6} {4 5}> because if the latter is
a viable candidate, then it can be obtained by merging w1 with
< {2 6} {4 5}>
Candidate Generation: Examples (ctd)

◼ Can <{a},{b},{c}> merge with <{b},{c},{f}> ?

◼ Can <{a},{b},{c}> merge with <{b,c},{f}>?

◼ Can <{a},{b},{c}> merge with <{b},{c,f}>?

◼ Can <{a,b},{c}> merge with <{b},{c,f}> ?

◼ Can <{a,b,c}> merge with <{b,c,f}>?

◼ Can <{a}> merge with <{a}>?


Candidate Generation: Examples (ctd)
◼ <{a},{b},{c}> can be merged with <{b},{c},{f}> to produce
<{a},{b},{c},{f}>
◼ <{a},{b},{c}> cannot be merged with <{b,c},{f}>
◼ <{a},{b},{c}> can be merged with <{b},{c,f}> to produce
<{a},{b},{c,f}>
◼ <{a,b},{c}> can be merged with <{b},{c,f}> to produce
<{a,b},{c,f}>
◼ <{a,b,c}> can be merged with <{b,c,f}> to produce <{a,b,c,f}>
◼ <{a}{b}{a}> can be merged with <{b}{a}{b}> to produce
<{a},{b},{a},{b}>
◼ <{b}{a}{b}> can be merged with <{a}{b}{a}> to produce
<{b},{a},{b},{a}>
GSP Example

Frequent
3-sequences

< {1} {2} {3} >


< {1} {2 5} >
< {1} {5} {3} >
< {2} {3} {4} > Candidate
< {2 5} {3} > Generation
< {3} {4} {5} >
< {5} {3 4} > < {1} {2} {3} {4} >
< {1} {2 5} {3} >
< {1} {5} {3 4} >
< {2} {3} {4} {5} >
< {2 5} {3 4} >
GSP Example

Frequent
3-sequences

< {1} {2} {3} >


< {1} {2 5} >
< {1} {5} {3} >
< {2} {3} {4} > Candidate
< {2 5} {3} > Generation
< {3} {4} {5} >
< {5} {3 4} > < {1} {2} {3} {4} >
< {1} {2 5} {3} >
< {1} {5} {3 4} >
< {2} {3} {4} {5} > Candidate
< {2 5} {3 4} > Pruning

< {1} {2 5} {3} >


Timing Constraints (I)

(Figure: sequence <{A B} {C} {D E}> annotated with the constraints)

xg: max-gap — the gap between consecutive elements must be <= xg
ng: min-gap — the gap between consecutive elements must be > ng
ms: maximum span — the time between the first and last elements must be <= ms

Example: xg = 2, ng = 0, ms = 4

Data sequence, d Sequential Pattern, s d contains s?


< {2,4} {3,5,6} {4,7} {4,5} {8} > < {6} {5} > Yes

< {1} {2} {3} {4} {5}> < {1} {4} > No
< {1} {2,3} {3,4} {4,5}> < {2} {3} {5} > Yes
< {1,2} {3} {2,3} {3,4} {2,4} {4,5}> < {1,2} {5} > No
Mining Sequential Patterns with Timing Constraints

◼ Approach 1:
◼ Mine sequential patterns without timing

constraints
◼ Postprocess the discovered patterns

◼ Approach 2:
◼ Modify GSP to directly prune candidates that

violate timing constraints


◼ Question:

◼ Does Apriori principle still hold?


Apriori Principle for Sequence Data

Object Timestamp Events Suppose:


A 1 1,2,4 xg = 1 (max-gap)
A 2 2,3
A 3 5 ng = 0 (min-gap)
B 1 1,2 ms = 5 (maximum span)
B 2 2,3,4
minsup = 60%
C 1 1, 2
C 2 2,3,4
C 3 2,4,5 <{2} {5}> support = 40%
D 1 2
D 2 3, 4 but
D 3 4, 5 <{2} {3} {5}> support = 60%
E 1 1, 3
E 2 2, 4, 5

Problem exists because of max-gap constraint


No such problem if max-gap is infinite
Contiguous Subsequences
◼ s is a contiguous subsequence of
w = <e1>< e2>…< ek>
if any of the following conditions hold:
1. s is obtained from w by deleting an item from either e1 or ek
2. s is obtained from w by deleting an item from any element ei
that contains at least 2 items
3. s is a contiguous subsequence of s’ and s’ is a contiguous
subsequence of w (recursive definition)

◼ Examples: s = < {1} {2} >


◼ is a contiguous subsequence of
< {1} {2 3}>, < {1 2} {2} {3}>, and < {3 4} {1 2} {2 3}
{4} >
◼ is not a contiguous subsequence of
< {1} {3} {2}> and < {2} {1} {3} {2}>
Modified Candidate Pruning Step

◼ Without maxgap constraint:


◼ A candidate k-sequence is pruned if at least

one of its (k-1)-subsequences is infrequent

◼ With maxgap constraint:


◼ A candidate k-sequence is pruned if at least

one of its contiguous (k-1)-subsequences is


infrequent
Timing Constraints (II)
(Figure: sequence <{A B} {C} {D E}> annotated with the constraints)

xg: max-gap, ng: min-gap, ws: window size, ms: maximum span

Example: xg = 2, ng = 0, ws = 1, ms = 5

Data sequence, d Sequential Pattern, s d contains s?


< {2,4} {3,5,6} {4,7} {4,5} {8} > < {3,4,5}> Yes
< {1} {2} {3} {4} {5}> < {1,2} {3,4} > No
< {1,2} {2,3} {3,4} {4,5}> < {1,2} {3,4} > Yes
Modified Support Counting Step

◼ Given a candidate sequential pattern: <{a, c}>


◼ Any data sequences that contain

<… {a c} … >,
<… {a} … {c}…> ( where time({c}) –
time({a}) ≤ ws)
<…{c} … {a} …> (where time({a}) –
time({c}) ≤ ws)
will contribute to the support count of
candidate pattern
Spade algorithm

https://www.youtube.com/watch?v=ny7Cn1Ttncc&ab_channel=GRIETCSEPROJECTS

102
Unit 4: Cluster detection

Prepared by:
Dr. Nivedita Palia

1
What is Cluster Analysis?
■ Cluster: A collection of data objects
■ similar (or related) to one another within the same group

■ dissimilar (or unrelated) to the objects in other groups

■ Cluster analysis (or clustering, data segmentation, …)


■ Finding similarities between data according to the

characteristics found in the data and grouping similar


data objects into clusters
■ Unsupervised learning: no predefined classes (i.e., learning
by observations vs. learning by examples: supervised)
■ Typical applications
■ As a stand-alone tool to get insight into data distribution

■ As a preprocessing step for other algorithms

2
Clustering for Data Understanding and
Applications
■ Biology: taxonomy of living things: kingdom, phylum, class, order,
family, genus and species
■ Information retrieval: document clustering
■ Land use: Identification of areas of similar land use in an earth
observation database
■ Marketing: Help marketers discover distinct groups in their customer
bases, and then use this knowledge to develop targeted marketing
programs
■ City-planning: Identifying groups of houses according to their house
type, value, and geographical location
■ Earth-quake studies: Observed earth quake epicenters should be
clustered along continent faults
■ Climate: understanding earth climate, find patterns of atmospheric
and ocean
■ Economic Science: market research
3
Clustering as a Preprocessing Tool (Utility)

■ Summarization:
■ Preprocessing for regression, PCA, classification, and
association analysis
■ Compression:
■ Image processing: vector quantization
■ Finding K-nearest Neighbors
■ Localizing search to one or a small number of clusters
■ Outlier detection
■ Outliers are often viewed as those “far away” from any
cluster

4
Quality: What Is Good Clustering?

■ A good clustering method will produce high quality


clusters
■ high intra-class similarity: cohesive within clusters
■ low inter-class similarity: distinctive between clusters
■ The quality of a clustering method depends on
■ the similarity measure used by the method
■ its implementation, and
■ Its ability to discover some or all of the hidden patterns

5
Measure the Quality of Clustering
■ Dissimilarity/Similarity metric
■ Similarity is expressed in terms of a distance function,
typically metric: d(i, j)
■ The definitions of distance functions are usually rather
different for interval-scaled, boolean, categorical,
ordinal, ratio, and vector variables
■ Weights should be associated with different variables
based on applications and data semantics
■ Quality of clustering:
■ There is usually a separate “quality” function that
measures the “goodness” of a cluster.
■ It is hard to define “similar enough” or “good enough”
■ The answer is typically highly subjective
6
Considerations for Cluster Analysis
■ Partitioning criteria
■ Single level vs. hierarchical partitioning (often, multi-level
hierarchical partitioning is desirable)
■ Separation of clusters
■ Exclusive (e.g., one customer belongs to only one region) vs.
non-exclusive (e.g., one document may belong to more than one
class)
■ Similarity measure
■ Distance-based (e.g., Euclidian, road network, vector) vs.
connectivity-based (e.g., density or contiguity)
■ Clustering space
■ Full space (often when low dimensional) vs. subspaces (often in
high-dimensional clustering)

7
Requirements and Challenges
■ Scalability
■ Clustering all the data instead of only on samples
■ Ability to deal with different types of attributes
■ Numerical, binary, categorical, ordinal, linked, and mixture of
these
■ Constraint-based clustering
■ User may give inputs on constraints
■ Use domain knowledge to determine input parameters
■ Interpretability and usability
■ Others
■ Discovery of clusters with arbitrary shape
■ Ability to deal with noisy data
■ Incremental clustering and insensitivity to input order
■ High dimensionality

8
Major Clustering Approaches
■ Partitioning approach:
■ Construct various partitions and then evaluate them by some
criterion, e.g., minimizing the sum of square errors
■ Typical methods: k-means, k-medoids, CLARANS
■ Hierarchical approach:
■ Create a hierarchical decomposition of the set of data (or objects)
using some criterion
■ Agglomerative approach(bottom-up) or divisive approach(top-down)
■ Typical methods: Diana, Agnes, BIRCH, CAMELEON
■ Density-based approach:
■ Based on connectivity and density functions
■ Typical methods: DBSACN, OPTICS, DenClue
■ Grid-based approach:
■ based on a multiple-level granularity structure
■ Typical methods: STING, WaveCluster, CLIQUE
9
Partitioning Algorithms: Basic Concept

■ Partitioning method: Partitioning a database D of n objects into a set of


k clusters, such that the sum of squared distances is minimized (where
ci is the centroid or medoid of cluster Ci)

■ Given k, find a partition of k clusters that optimizes the chosen


partitioning criterion
■ Global optimal: exhaustively enumerate all partitions
■ Heuristic methods: k-means and k-medoids algorithms
■ k-means :Each cluster is represented by the center of the cluster

10
The K-Means Clustering Method

■ Given k, the k-means algorithm is implemented in four


steps:
■ Partition objects into k nonempty subsets
■ Compute seed points as the centroids of the
clusters of the current partitioning (the centroid is
the center, i.e., mean point, of the cluster)
■ Assign each object to the cluster with the nearest
seed point
■ Go back to Step 2, stop when the assignment does
not change

11
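A bare-bones NumPy sketch of these steps (seed centroids, assign each object to its nearest centroid, recompute centroids, repeat until stable); the toy points are illustrative, and a library routine such as scikit-learn's KMeans would normally be used instead:

import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Minimal K-means: pick k seed points, assign, update, repeat until stable."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]   # initial seed points
    for _ in range(max_iter):
        # assign each object to the cluster with the nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each centroid as the mean of its assigned points
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):               # stop when nothing changes
            break
        centroids = new_centroids
    return labels, centroids

X = np.array([[1.0, 1.0], [1.5, 2.0], [3.0, 4.0], [5.0, 7.0], [3.5, 5.0], [4.5, 5.0]])
labels, centroids = kmeans(X, k=2)
print(labels)
print(centroids)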
An Example of K-Means Clustering

K = 2

(Figure: K-means iterations on the initial data set — arbitrarily partition the objects into k groups, update the cluster centroids, reassign objects to the nearest centroid, and loop if needed)

◼ Partition objects into k nonempty subsets
◼ Repeat
  ◼ Compute the centroid (i.e., mean point) for each partition
  ◼ Assign each object to the cluster of its nearest centroid
◼ Until no change
K-means Numerical

13
K-means Numerical

• The new cluster center is computed by taking the mean of all the points contained in that cluster.

14
K-means Numerical

15
Variations of the K-Means Method
■ Most of the variants of the k-means which differ in
■ Selection of the initial k means

■ Dissimilarity calculations
■ Strategies to calculate cluster means
■ Handling categorical data: k-modes

■ Replacing means of clusters with modes


■ Using new dissimilarity measures to deal with categorical objects
■ Using a frequency-based method to update modes of clusters

■ A mixture of categorical and numerical data: k-prototype method

16
Hierarchical Clustering
■ Use distance matrix as clustering criteria. This method
does not require the number of clusters k as an input, but
needs a termination condition
(Figure: agglomerative clustering (AGNES) merges objects a, b, c, d, e bottom-up — {a,b}, {d,e}, {c,d,e}, and finally {a,b,c,d,e} over steps 0–4 — while divisive clustering (DIANA) splits in the reverse order)
17
AGNES (Agglomerative Nesting)
■ Introduced in Kaufmann and Rousseeuw (1990)
■ Implemented in statistical packages, e.g., Splus
■ Use the single-link method and the dissimilarity matrix
■ Merge nodes that have the least dissimilarity
■ Go on in a non-descending fashion
■ Eventually all nodes belong to the same cluster

18
Dendrogram: Shows How Clusters are Merged

Decompose data objects into a several levels of nested


partitioning (tree of clusters), called a dendrogram

A clustering of the data objects is obtained by cutting


the dendrogram at the desired level, then each
connected component forms a cluster

19
DIANA (Divisive Analysis)

■ Introduced in Kaufmann and Rousseeuw (1990)


■ Implemented in statistical analysis packages, e.g., Splus
■ Inverse order of AGNES
■ Eventually each node forms a cluster on its own

20
Distance between Clusters

■ Single link: smallest distance between an element in one cluster


and an element in the other, i.e., dist(Ki, Kj) = min(tip, tjq)
■ Complete link: largest distance between an element in one cluster
and an element in the other, i.e., dist(Ki, Kj) = max(tip, tjq)
■ Average: avg distance between an element in one cluster and an
element in the other, i.e., dist(Ki, Kj) = avg(tip, tjq)
■ Centroid: distance between the centroids of two clusters, i.e.,
dist(Ki, Kj) = dist(Ci, Cj)
■ Medoid: distance between the medoids of two clusters, i.e., dist(Ki,
Kj) = dist(Mi, Mj)
■ Medoid: a chosen, centrally located object in the cluster

21
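These inter-cluster distance choices map onto SciPy's hierarchical clustering routines; a brief sketch with illustrative points (single link shown; 'complete', 'average', and 'centroid' are the other methods named above):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([[1.0, 1.0], [1.2, 1.1], [5.0, 5.0], [5.1, 4.9], [9.0, 1.0]])

# 'single' uses the smallest distance between members of two clusters;
# swap in 'complete', 'average', or 'centroid' for the other definitions.
Z = linkage(X, method='single')
print(Z)                                        # merge history (dendrogram data)
print(fcluster(Z, t=2, criterion='maxclust'))   # cut the dendrogram into 2 clusters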
Centroid, Radius and Diameter of a Cluster
(for numerical data sets)
■ Centroid: the “middle” of a cluster

■ Radius: square root of average distance from any point


of the cluster to its centroid

■ Diameter: square root of average mean squared


distance between all pairs of points in the cluster

22
Extensions to Hierarchical Clustering
■ Major weakness of agglomerative clustering methods

■ Can never undo what was done previously

■ Do not scale well: time complexity of at least O(n2),


where n is the number of total objects
■ Integration of hierarchical & distance-based clustering
■ BIRCH (1996): uses CF-tree and incrementally adjusts
the quality of sub-clusters
■ CHAMELEON (1999): hierarchical clustering using
dynamic modeling
23
Density-Based Clustering Methods

■ Clustering based on density (local cluster criterion), such


as density-connected points
■ Major features:
■ Discover clusters of arbitrary shape
■ Handle noise
■ One scan

■ Need density parameters as termination condition

24
Density-Based Clustering: Basic Concepts
■ Two parameters:
■ Eps: Maximum radius of the neighbourhood
■ MinPts: Minimum number of points in an
Eps-neighbourhood of that point
■ NEps(p): {q belongs to D | dist(p,q) ≤ Eps}
■ Directly density-reachable: A point p is directly
density-reachable from a point q w.r.t. Eps, MinPts if

■ p belongs to NEps(q)
p MinPts = 5
■ core point condition:
Eps = 1 cm
|NEps (q)| ≥ MinPts q

25
Density-Reachable and Density-Connected

■ Density-reachable:
  ■ A point p is density-reachable from a point q w.r.t. Eps, MinPts if there is a chain of points p1, …, pn, with p1 = q and pn = p, such that pi+1 is directly density-reachable from pi
■ Density-connected:
  ■ A point p is density-connected to a point q w.r.t. Eps, MinPts if there is a point o such that both p and q are density-reachable from o w.r.t. Eps and MinPts
26
DBSCAN: Density-Based Spatial Clustering of
Applications with Noise
■ Relies on a density-based notion of cluster: A cluster is
defined as a maximal set of density-connected points
■ Discovers clusters of arbitrary shape in spatial databases
with noise

Outlier

Border
Eps = 1cm
Core MinPts = 5

27
DBSCAN: The Algorithm
■ Arbitrary select a point p
■ Retrieve all points density-reachable from p w.r.t. Eps and
MinPts
■ If p is a core point, a cluster is formed
■ If p is a border point, no points are density-reachable
from p and DBSCAN visits the next point of the database
■ Continue the process until all of the points have been
processed

28
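For illustration, DBSCAN as implemented in scikit-learn on a toy data set; eps and min_samples play the roles of Eps and MinPts, and the label -1 marks noise points (the data and parameter values are assumptions made for the example):

import numpy as np
from sklearn.cluster import DBSCAN

X = np.array([[1, 1], [1.1, 1.2], [0.9, 1.0], [1.2, 0.8],
              [8, 8], [8.1, 8.2], [7.9, 8.0],
              [15, 0]])              # the last point is an isolated outlier

db = DBSCAN(eps=0.5, min_samples=3).fit(X)
print(db.labels_)                    # expected: [0 0 0 0 1 1 1 -1]; -1 = noise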
DBSCAN: Sensitive to Parameters

29
Assessing Clustering Tendency
■ Assess if non-random structure exists in the data by measuring the
probability that the data is generated by a uniform data distribution
■ Test spatial randomness by a statistical test: the Hopkins Statistic
■ Given a dataset D regarded as a sample of a random variable o,
determine how far away o is from being uniformly distributed in the
data space
■ Sample n points, p1, …, pn, uniformly from D. For each pi, find its
nearest neighbor in D: xi = min{dist (pi, v)} where v in D
■ Sample n points, q1, …, qn, uniformly from D. For each qi, find its
nearest neighbor in D – {qi}: yi = min{dist (qi, v)} where v in D and
v ≠ qi
■ Calculate the Hopkins Statistic:  H = ∑ yi / (∑ xi + ∑ yi)
■ If D is uniformly distributed, ∑ xi and ∑ yi will be close to each other and H is close to 0.5. If D is highly skewed, H is close to 0
30
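A rough sketch of the Hopkins statistic computation described above (NumPy only); sampling the first point set uniformly over the bounding box of the data is a simplifying assumption:

import numpy as np

def hopkins(D, n=50, seed=0):
    """Hopkins statistic as on the slide: H = sum(y_i) / (sum(x_i) + sum(y_i))."""
    rng = np.random.default_rng(seed)
    D = np.asarray(D, dtype=float)

    # p_i: points sampled uniformly over the bounding box of the data space
    p = rng.uniform(D.min(axis=0), D.max(axis=0), size=(n, D.shape[1]))
    x = np.array([np.linalg.norm(D - pi, axis=1).min() for pi in p])

    # q_i: points sampled from the data set; nearest neighbour searched in D - {q_i}
    idx = rng.choice(len(D), size=n, replace=False)
    y = []
    for i in idx:
        d = np.linalg.norm(D - D[i], axis=1)
        d[i] = np.inf                          # exclude the point itself
        y.append(d.min())
    y = np.array(y)

    return y.sum() / (x.sum() + y.sum())

uniform = np.random.default_rng(1).uniform(size=(200, 2))
clustered = np.vstack([np.random.default_rng(2).normal(loc=c, scale=0.02, size=(100, 2))
                       for c in ([0.2, 0.2], [0.8, 0.8])])
print(round(hopkins(uniform), 2))    # close to 0.5
print(round(hopkins(clustered), 2))  # close to 0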
Determine the Number of Clusters
■ Empirical method
■ # of clusters ≈√n/2 for a dataset of n points
■ Elbow method
■ Use the turning point in the curve of sum of within cluster variance
w.r.t the # of clusters
■ Cross validation method
■ Divide a given data set into m parts
■ Use m – 1 parts to obtain a clustering model
■ Use the remaining part to test the quality of the clustering
■ E.g., For each point in the test set, find the closest centroid, and

use the sum of squared distance between all points in the test set
and the closest centroids to measure how well the model fits the
test set
■ For any k > 0, repeat it m times, compare the overall quality measure
w.r.t. different k’s, and find # of clusters that fits the data the best
31
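A sketch of the elbow method using scikit-learn's KMeans and its inertia_ attribute (the within-cluster sum of squared distances); the synthetic data with three true clusters is an assumption for the example, and one looks for the k at which the curve bends:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2))
               for c in ([0, 0], [5, 5], [0, 5])])    # 3 true clusters

for k in range(1, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1))   # inertia drops sharply up to k = 3, then flattens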
Measuring Clustering Quality

■ Two methods: extrinsic vs. intrinsic


■ Extrinsic: supervised, i.e., the ground truth is available
■ Compare a clustering against the ground truth using
certain clustering quality measure
■ Ex. BCubed precision and recall metrics
■ Intrinsic: unsupervised, i.e., the ground truth is unavailable
■ Evaluate the goodness of a clustering by considering
how well the clusters are separated, and how compact
the clusters are
■ Ex. Silhouette coefficient

32
Measuring Clustering Quality: Extrinsic Methods

■ Clustering quality measure: Q(C, Cg), for a clustering C


given the ground truth Cg.
■ Q is good if it satisfies the following 4 essential criteria
■ Cluster homogeneity: the purer, the better

■ Cluster completeness: should assign objects belong to

the same category in the ground truth to the same


cluster
■ Rag bag: putting a heterogeneous object into a pure

cluster should be penalized more than putting it into a


rag bag (i.e., “miscellaneous” or “other” category)
■ Small cluster preservation: splitting a small category

into pieces is more harmful than splitting a large


category into pieces
33
What Are Outliers?

■ Outlier: A data object that deviates significantly from the normal


objects as if it were generated by a different mechanism
■ Ex.: Unusual credit card purchase, sports: Michael Jordon,
Wayne Gretzky, ...
■ Outliers are different from the noise data
■ Noise is random error or variance in a measured variable
■ Noise should be removed before outlier detection
■ Outliers are interesting: It violates the mechanism that generates the
normal data
■ Outlier detection vs. novelty detection: early stage, outlier; but later
merged into the model
■ Applications:
■ Credit card fraud detection
■ Telecom fraud detection
■ Customer segmentation
34 ■ Medical analysis
Types of Outliers (I)

■ Three kinds: global, contextual and collective outliers Global Outlier


■ Global outlier (or point anomaly)
■ Object is Og if it significantly deviates from the rest of the data set
■ Ex. Intrusion detection in computer networks
■ Issue: Find an appropriate measurement of deviation
■ Contextual outlier (or conditional outlier)
■ Object is Oc if it deviates significantly based on a selected context
■ Ex. 80o F in Urbana: outlier? (depending on summer or winter?)
■ Attributes of data objects should be divided into two groups
■ Contextual attributes: defines the context, e.g., time & location

■ Behavioral attributes: characteristics of the object, used in outlier

evaluation, e.g., temperature


■ Can be viewed as a generalization of local outliers—whose density
significantly deviates from its local area
■ Issue: How to define or formulate meaningful context?
35
Types of Outliers (II)
■ Collective Outliers
■ A subset of data objects collectively deviate
significantly from the whole data set, even if the
individual data objects may not be outliers
■ Applications: E.g., intrusion detection:
Collective Outlier
■ When a number of computers keep sending
denial-of-service packages to each other

■ Detection of collective outliers


■ Consider not only behavior of individual objects, but also that of

groups of objects
■ Need to have the background knowledge on the relationship

among data objects, such as a distance or similarity measure


on objects.
■ A data set may have multiple types of outlier
■ One object may belong to more than one type of outlier
36
Challenges of Outlier Detection

■ Modeling normal objects and outliers properly


■ Hard to enumerate all possible normal behaviors in an application
■ The border between normal and outlier objects is often a gray area
■ Application-specific outlier detection
■ Choice of distance measure among objects and the model of
relationship among objects are often application-dependent
■ E.g., clinic data: a small deviation could be an outlier; while in
marketing analysis, larger fluctuations
■ Handling noise in outlier detection
■ Noise may distort the normal objects and blur the distinction
between normal objects and outliers. It may help hide outliers and
reduce the effectiveness of outlier detection
■ Understandability
■ Understand why these are outliers: Justification of the detection
■ Specify the degree of an outlier: the unlikelihood of the object being
generated by a normal mechanism 37
Outlier Detection I: Supervised Methods

■ Two ways to categorize outlier detection methods:


■ Based on whether user-labeled examples of outliers can be obtained:
■ Supervised, semi-supervised vs. unsupervised methods

■ Based on assumptions about normal data and outliers:


■ Statistical, proximity-based, and clustering-based methods

■ Outlier Detection I: Supervised Methods


■ Modeling outlier detection as a classification problem
■ Samples examined by domain experts used for training & testing

■ Methods for Learning a classifier for outlier detection effectively:


■ Model normal objects & report those not matching the model as

outliers, or
■ Model outliers and treat those not matching the model as normal

■ Challenges
■ Imbalanced classes, i.e., outliers are rare: Boost the outlier class

and make up some artificial outliers


■ Catch as many outliers as possible, i.e., recall is more important

than accuracy (i.e., not mislabeling normal objects as outliers) 38


Outlier Detection II: Unsupervised Methods
■ Assume the normal objects are somewhat “clustered” into multiple
groups, each having some distinct features
■ An outlier is expected to be far away from any groups of normal objects
■ Weakness: Cannot detect collective outlier effectively
■ Normal objects may not share any strong patterns, but the collective
outliers may share high similarity in a small area
■ Ex. In some intrusion or virus detection, normal activities are diverse
■ Unsupervised methods may have a high false positive rate but still
miss many real outliers.
■ Supervised methods can be more effective, e.g., identify attacking
some key resources
■ Many clustering methods can be adapted for unsupervised methods
■ Find clusters, then outliers: not belonging to any cluster
■ Problem 1: Hard to distinguish noise from outliers
■ Problem 2: Costly since first clustering: but far less outliers than
normal objects
■ Newer methods: tackle outliers directly

39
Outlier Detection III: Semi-Supervised Methods
■ Situation: In many applications, the number of labeled data is often
small: Labels could be on outliers only, normal objects only, or both
■ Semi-supervised outlier detection: Regarded as applications of
semi-supervised learning
■ If some labeled normal objects are available
■ Use the labeled examples and the proximate unlabeled objects to
train a model for normal objects
■ Those not fitting the model of normal objects are detected as outliers
■ If only some labeled outliers are available, a small number of labeled
outliers many not cover the possible outliers well
■ To improve the quality of outlier detection, one can get help from
models for normal objects learned from unsupervised methods

40
Outlier Detection (1): Statistical Methods
■ Statistical methods (also known as model-based methods) assume that the normal data follow some statistical model (a stochastic model)
■ The data not following the model are outliers
■ Example (right figure): first use a Gaussian distribution to model the normal data
■ For each object y in region R, estimate gD(y), the probability that y fits the Gaussian distribution
■ If gD(y) is very low, y is unlikely to have been generated by the Gaussian model and is thus an outlier (see the sketch below)
■ Effectiveness of statistical methods: highly depends on whether the assumed statistical model holds for the real data
■ A rich variety of alternative statistical models can be used
■ E.g., parametric vs. non-parametric
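A minimal sketch of the parametric case above, assuming scipy: fit a Gaussian to the data, score each object by its density gD(y) under the fitted model, and flag objects with very low density. The bottom-1% density threshold is an illustrative choice.

# Sketch: statistical (model-based) outlier detection with a fitted Gaussian.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(loc=10.0, scale=2.0, size=500),
                       np.array([25.0, -4.0])])          # two suspicious values

mu, sigma = data.mean(), data.std()            # fit the Gaussian model
density = norm.pdf(data, loc=mu, scale=sigma)  # g_D(y) for every object y

# Flag objects whose density under the model is very low (bottom 1% here).
threshold = np.quantile(density, 0.01)
print("outliers:", data[density <= threshold])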
Outlier Detection (2): Proximity-Based Methods
■ An object is an outlier if its nearest neighbors are far away, i.e., the proximity of the object deviates significantly from the proximity of most of the other objects in the same data set
■ Example (right figure): model the proximity of an object using its 3 nearest neighbors (see the sketch below)
■ Objects in region R are substantially different from the other objects in the data set
■ Thus the objects in R are outliers
■ The effectiveness of proximity-based methods relies heavily on the proximity measure
■ In some applications, proximity or distance measures cannot be obtained easily
■ Such methods often have difficulty finding a group of outliers that stay close to each other
■ Two major types of proximity-based outlier detection
■ Distance-based vs. density-based
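A minimal distance-based sketch of this idea, assuming scikit-learn: score each object by the distance to its 3rd nearest neighbor and report the objects with the largest scores. The value of k and the number of reported objects are illustrative choices.

# Sketch: proximity-based outlier scores from the 3 nearest neighbors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(loc=0.0, scale=1.0, size=(200, 2)),
               np.array([[8.0, 8.0], [9.0, 7.5]])])      # small remote group

k = 3
# k + 1 neighbors because each point's nearest neighbor is itself.
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
distances, _ = nn.kneighbors(X)

# Outlier score = distance to the k-th nearest neighbor (excluding the point itself).
scores = distances[:, k]
top = np.argsort(scores)[-2:]                # report the 2 highest-scoring objects
print("outlier candidates:", X[top])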
Outlier Detection (3): Clustering-Based Methods
■ Normal data belong to large and dense clusters, whereas outliers belong to small or sparse clusters, or do not belong to any cluster (see the sketch below)
■ Example (right figure): two clusters
■ All points not in R form a large cluster
■ The two points in R form a tiny cluster, and thus are outliers
■ Since there are many clustering methods, there are many clustering-based outlier detection methods as well
■ Clustering is expensive: a straightforward adaptation of a clustering method for outlier detection can be costly and does not scale up well for large data sets
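A minimal sketch of the clustering-based idea, assuming scikit-learn: cluster with k-means, then flag objects that lie unusually far from their assigned centroid. The data, the number of clusters, and the mean-plus-three-standard-deviations threshold are illustrative choices.

# Sketch: clustering-based outlier detection with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(loc=(0, 0), scale=0.5, size=(150, 2)),
               rng.normal(loc=(5, 5), scale=0.5, size=(150, 2)),
               np.array([[2.5, 9.0], [9.0, 0.0]])])       # far from both clusters

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Distance of each object to its assigned centroid.
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)

# Flag objects much farther from their centroid than is typical for the data.
threshold = dist.mean() + 3 * dist.std()
print("outliers:", X[dist > threshold])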
Avoiding False Discoveries
■ Statistical Background
■ Significance Testing
■ Hypothesis Testing
■ Multiple Hypothesis Testing
Motivation
■ An algorithm applied to a set of data will usually produce some result(s)
■ There have been claims that the results reported in more than 50% of published papers are false (Ioannidis)
■ Results may simply reflect random variation
■ Any particular data set is a finite sample from a larger population
■ There is often significant variation among instances in a data set, or heterogeneity in the population
■ Unusual events or coincidences do happen, especially when looking at lots of events
■ For this and other reasons, results may not replicate, i.e., generalize to other samples of data
■ Results may not have domain significance
■ Finding a difference that makes no difference
■ Data scientists need to help ensure that the results of data analysis are not false discoveries, i.e., results that are not meaningful or not reproducible
Statistical Testing
■ Statistical approaches are used to help avoid many of these problems
■ Statistics has well-developed procedures for evaluating the results of data analysis
■ Significance testing
■ Hypothesis testing
■ Domain knowledge, careful data collection and preprocessing, and proper methodology are also important
■ Bias and poor-quality data
■ Fishing for good results
■ Reporting how the analysis was done
■ Ultimate verification lies in the real world
Probability and Distributions
■ Variables are characterized by a set of possible values
■ Called the domain of the variable
■ Examples:
■ True or False for binary variables
■ Subset of integers for variables that are counts, such as the number of students in a class
■ Range of real numbers for variables such as weight or height
■ A probability distribution function describes the relative frequency with which the values are observed
■ A variable with a distribution is called a random variable
Probability and Distributions ..

Binomial Distribution (e.g., the number of heads R in 10 flips of a fair coin; see the snippet after the table)

k        P(R = k)
0 0.001
1 0.01
2 0.044
3 0.117
4 0.205
5 0.246
6 0.205
7 0.117
8 0.044
9 0.01
10 0.001
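The table can be reproduced with scipy, assuming R counts the successes (e.g., heads) in n = 10 trials with success probability p = 0.5, which matches the values shown:

# Reproduce the binomial table: P(R = k) for n = 10 trials, p = 0.5.
from scipy.stats import binom

n, p = 10, 0.5
for k in range(n + 1):
    print(k, round(binom.pmf(k, n, p), 3))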
Probability and Distributions ..

Gaussian Distribution
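For reference, a Gaussian (normal) random variable with mean \mu and standard deviation \sigma has the density

p(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)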
Statistical Testing

Examples of Null Hypotheses
■ A coin or a die is a fair coin or die
■ The difference between the means of two samples is 0
■ The purchase of a particular item in a store is unrelated to the purchase of a second item, e.g., the purchases of bread and milk are unconnected
■ The accuracy of a classifier is no better than random
Significance Testing
■ Significance testing was devised by the statistician Fisher
■ It is only concerned with whether the null hypothesis is true
■ Significance testing was intended only for exploratory analyses of the null hypothesis in the preliminary stages of a study
■ For example, to refine the null hypothesis or modify future experiments
■ For many years, significance testing has been a key approach for justifying the validity of scientific results
■ It introduced the concept of the p-value, which is widely used and misused
How Significance Testing Works
■ State a null hypothesis H0 that describes the default, "nothing interesting" situation, e.g., "the coin is fair"
■ Compute a test statistic from the observed data, e.g., the number of heads S in 10 flips
■ Compute the p-value: the probability, under H0, of observing a value of the test statistic at least as extreme as the one actually observed
■ If the p-value is smaller than a chosen significance level α (e.g., 0.05 or 0.01), the result is declared statistically significant and H0 is rejected

How Significance Testing Works …
Example: Testing a coin for fairness
■ Let S be the number of heads in 10 flips; under the null hypothesis of a fair coin, S has the binomial distribution below (see the sketch after the table)

k        P(S = k)
0 0.001
1 0.01
2 0.044
3 0.117
4 0.205
5 0.246
6 0.205
7 0.117
8 0.044
9 0.01
10 0.001
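A small sketch of the fairness test, assuming scipy; the observed count of 9 heads is an illustrative choice. Under H0 (a fair coin), the p-value is the probability of a result at least as extreme as the one observed.

# Sketch: significance test for coin fairness with n = 10 flips.
from scipy.stats import binom

n, p0 = 10, 0.5          # H0: the coin is fair
observed = 9             # illustrative observed number of heads

# One-sided p-value: probability of 9 or more heads under H0.
p_one_sided = binom.sf(observed - 1, n, p0)                            # P(S >= 9)

# Two-sided p-value: deviations in either direction count (symmetric distribution).
p_two_sided = binom.sf(observed - 1, n, p0) + binom.cdf(n - observed, n, p0)  # P(S >= 9) + P(S <= 1)

print(p_one_sided, p_two_sided)   # ~0.0107 and ~0.0215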
One-sided and Two-sided Tests
■ One-sided test: only deviations in one direction count as evidence against H0, e.g., the p-value is P(S ≥ observed number of heads)
■ Two-sided test: deviations in either direction count, e.g., both unusually many and unusually few heads contribute to the p-value

One-sided and Two-sided Tests …
Neyman-Pearson Hypothesis Testing
■ In addition to the null hypothesis H0, an explicit alternative hypothesis H1 is specified
■ A critical region of test-statistic values is chosen; if the observed statistic falls in it, H0 is rejected in favor of H1

Hypothesis Testing …
■ α (the significance level): the probability of the critical region under H0, i.e., the probability of a Type I error (rejecting H0 when it is true)
■ β: the probability of a Type II error (failing to reject H0 when H1 holds)

Hypothesis Testing …
■ Power: the probability of the critical region under H1, i.e., 1 − β (see the numeric illustration below)
■ Power indicates how effective a test will be at correctly rejecting the null hypothesis
■ Low power means that many results that actually show the desired pattern or phenomenon will not be considered significant and thus will be missed
■ Thus, if the power of a test is low, it may not be appropriate to ignore results that fall outside the critical region
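A small numeric illustration of power in the coin-flip setting, assuming scipy: with a critical region of {9, 10} heads out of 10 and an alternative H1 under which heads occur with probability 0.8 (both illustrative choices), the power is the probability that the observed count falls in the critical region under H1.

# Sketch: power of the coin test against a specific alternative.
from scipy.stats import binom

n = 10
critical_region_min = 9          # reject H0 if 9 or 10 heads are observed

alpha = binom.sf(critical_region_min - 1, n, 0.5)   # Type I error rate under H0 (~0.011)
power = binom.sf(critical_region_min - 1, n, 0.8)   # P(reject H0 | H1: p = 0.8)

print(f"alpha = {alpha:.3f}, power = {power:.3f}, beta = {1 - power:.3f}")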
Example: Classifying Medical Results

Hypothesis Testing: Effect Size
■ We can often find a result that is statistically significant but not significant from a domain point of view
■ E.g., a drug that lowers blood pressure by one percent
■ Effect size measures the magnitude of the effect or characteristic being evaluated, and is often the magnitude of the test statistic
■ It brings in domain considerations
■ The desired effect size impacts the choice of the critical region, and thus the significance level and power of the test
Effect Size: Example Problem
■ Consider several new treatments for a rare disease, each with a particular probability of success. If we only have a sample size of 10 patients, what effect size will be needed to clearly distinguish a new treatment from the baseline, which is 60% effective? (See the snippet after the table.)

R \ p(X=1)   0.60     0.70     0.80     0.90
0 0.0001 0.0000 0.0000 0.0000
1 0.0016 0.0001 0.0000 0.0000
2 0.0106 0.0014 0.0001 0.0000
3 0.0425 0.0090 0.0008 0.0000
4 0.1115 0.0368 0.0055 0.0001
5 0.2007 0.1029 0.0264 0.0015
6 0.2508 0.2001 0.0881 0.0112
7 0.2150 0.2668 0.2013 0.0574
8 0.1209 0.2335 0.3020 0.1937
9 0.0403 0.1211 0.2684 0.3874
10 0.0060 0.0282 0.1074 0.3487
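The table can be reproduced with scipy; as one illustrative way to read it, the code also prints P(R >= 9) for each success rate, i.e., the chance of landing in a candidate critical region of {9, 10} successes out of 10 patients.

# Reproduce the effect-size table and check a candidate critical region {9, 10}.
from scipy.stats import binom

n = 10
for p in (0.60, 0.70, 0.80, 0.90):
    pmf = [round(binom.pmf(r, n, p), 4) for r in range(n + 1)]
    tail = binom.sf(8, n, p)                 # P(R >= 9)
    print(f"p = {p}: P(R >= 9) = {tail:.3f}", pmf)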
Multiple Hypothesis Testing

Summarizing the Results of Multiple Tests
■ The following confusion table defines how the results of multiple tests are summarized
■ We assume the results fall into two classes, + and –, which follow the alternative and null hypotheses, respectively
■ The focus is typically on the number of false positives (FP), i.e., the results that belong to the null distribution (– class) but are declared significant (+ class)
Family-wise Error Rate
■ The family-wise error rate (FWER) is the probability of making at least one false positive (Type I error) among all m tests performed

Bonferroni Procedure
■ To keep the FWER at most α across m tests, test each individual result at significance level α/m
Example: Bonferroni versus Naïve approach
■ The naïve approach is to evaluate statistical significance for each result without adjusting the significance level (see the sketch below)
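A minimal sketch of the contrast, using m hypothetical p-values drawn from the null: the naïve approach compares each p-value to α, while Bonferroni compares each to α/m.

# Sketch: naive testing vs. the Bonferroni correction for m tests.
import numpy as np

rng = np.random.default_rng(6)
m = 100
p_values = rng.uniform(size=m)        # hypothetical p-values, all from the null
alpha = 0.05

naive_hits = np.sum(p_values < alpha)          # expect ~5 false positives
bonferroni_hits = np.sum(p_values < alpha / m) # controls the FWER at alpha

print("naive significant:     ", naive_hits)
print("Bonferroni significant:", bonferroni_hits)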
False Discovery Rate
■ The false discovery rate (FDR) is the expected fraction of false positives among all results declared significant
■ Controlling the FDR is less strict than controlling the FWER, but it allows more true positives to be found

Benjamini-Hochberg Procedure
■ Sort the m p-values in increasing order, p(1) ≤ p(2) ≤ … ≤ p(m)
■ Find the largest k such that p(k) ≤ (k/m)·α
■ Declare the k results with the smallest p-values significant (a minimal sketch follows below)
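A minimal sketch of the step-up procedure described above; the example p-values are illustrative.

# Sketch: Benjamini-Hochberg procedure for controlling the false discovery rate.
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)                       # indices of p-values, smallest first
    ranked = p[order]
    # Largest k such that p_(k) <= (k / m) * alpha (k is 1-based).
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    if not below.any():
        return np.zeros(m, dtype=bool)          # nothing declared significant
    k = np.max(np.nonzero(below)[0]) + 1
    significant = np.zeros(m, dtype=bool)
    significant[order[:k]] = True               # reject the k smallest p-values
    return significant

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74]))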
FDR Example: Picking a stockbroker

FDR Example: Picking a stockbroker …
■ The following figure compares the naïve approach, Bonferroni, and the BH FDR procedure with respect to power for various numbers of tests, m. One third of the samples were from the alternative distribution.
Comparison of FWER and FDR
■ FWER is appropriate when it is important to avoid any error
■ But an FWER procedure such as Bonferroni makes many Type II errors and thus has poor power
■ An FWER approach has a very low false discovery rate
■ FDR is appropriate when it is important to identify positive results, i.e., those belonging to the alternative distribution
■ By construction, the false discovery rate is good for an FDR procedure such as the BH approach
■ An FDR approach also has good power
SOM: Self-Organizing Maps
● Self-organizing maps (SOM)
– Centroid-based clustering scheme
– Like K-means, a fixed number of clusters are specified
– However, the spatial relationship of clusters is also specified, typically as a grid
– Points are considered one by one
– Each point is assigned to the closest centroid, and this centroid is updated
– Other centroids are updated based on their spatial proximity to the closest centroid

Kohonen, Teuvo. Self-Organizing Maps. Springer Series in Information Sciences, vol. 30. Springer, 1995.
SOM: Self-Organizing Maps
● Updates are weighted by distance
– Centroids farther away are affected less
● The impact of the updates decreases with each time step
– At some point the centroids will not change much
● SOM can be viewed as a type of dimensionality reduction
– If a 2D (or 3D) grid is used, the results can be easily visualized, and this can facilitate the interpretation of clusters (see the sketch below)
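A minimal numpy sketch of the update scheme just described: a small 2D grid of centroids, points processed one at a time, and the closest centroid plus its grid neighbors pulled toward each point, with the learning rate and neighborhood radius shrinking over time. The grid size and decay schedules are illustrative choices.

# Sketch: a tiny self-organizing map on 2D points.
import numpy as np

rng = np.random.default_rng(7)
X = rng.uniform(size=(500, 2))                 # data points in the unit square

rows, cols = 4, 4
grid = np.array([(i, j) for i in range(rows) for j in range(cols)])  # grid coordinates
centroids = rng.uniform(size=(rows * cols, 2))                       # one centroid per grid cell

n_steps = 2000
for t in range(n_steps):
    x = X[rng.integers(len(X))]                # consider one point at a time
    bmu = np.argmin(np.linalg.norm(centroids - x, axis=1))   # closest centroid

    # Learning rate and neighborhood radius both decay over time.
    lr = 0.5 * (1 - t / n_steps)
    radius = 2.0 * (1 - t / n_steps) + 0.5

    # Update every centroid, weighted by its grid distance to the closest centroid.
    grid_dist = np.linalg.norm(grid - grid[bmu], axis=1)
    weight = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
    centroids += lr * weight[:, None] * (x - centroids)

print(centroids)        # centroids arranged so that grid neighbors stay close in data space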
SOM Clusters of LA Times Document Data
Another SOM Example: 2D Points
Issues with SOM
● High computational complexity
● No guarantee of convergence
● Choice of grid and other parameters is somewhat arbitrary
● Lack of a specific objective function
Comparison of DBSCAN and K-means
● Both are partitional.
● K-means is complete; DBSCAN is not.
● K-means has a prototype-based notion of a cluster; DBSCAN uses a density-based notion.
● K-means can find clusters that are not well separated. DBSCAN will merge clusters that touch.
● DBSCAN handles clusters of different shapes and sizes; K-means prefers globular clusters.
Comparison of DBSCAN and K-means
● DBSCAN can handle noise and outliers; K-means performs poorly in the presence of outliers (see the sketch below).
● K-means can only be applied to data for which a centroid is meaningful; DBSCAN requires a meaningful definition of density.
● DBSCAN works poorly on high-dimensional data; K-means works well for some types of high-dimensional data.
● Both techniques were designed for Euclidean data, but have been extended to other types of data.
● DBSCAN makes no distribution assumptions; K-means is really assuming spherical Gaussian distributions.
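A short illustration of the shape-and-noise contrast, assuming scikit-learn: on two half-moon clusters with a couple of added outliers, DBSCAN recovers the non-globular shapes and labels the outliers as noise (-1), while k-means assigns every point, including the outliers, to one of two roughly globular clusters. The parameter values are illustrative.

# Sketch: DBSCAN vs. k-means on non-globular clusters with noise.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.cluster import DBSCAN, KMeans

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
X = np.vstack([X, [[2.5, 1.5], [-1.5, -1.0]]])      # add two outliers

db_labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# DBSCAN labels outliers/noise as -1; k-means assigns every point to a cluster.
print("DBSCAN clusters:", sorted(set(db_labels)))
print("k-means clusters:", sorted(set(km_labels)))
print("points DBSCAN calls noise:", np.sum(db_labels == -1))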
Comparison of DBSCAN and K-means
● K-means has an O(n) time complexity; DBSCAN is O(n^2).
● Because of random initialization, the clusters found by K-means can vary from one run to another; DBSCAN always produces the same clusters.
● DBSCAN automatically determines the number of clusters; K-means does not.
● K-means has only one parameter; DBSCAN has two.
● K-means clustering can be viewed as an optimization problem and as a special case of EM clustering; DBSCAN is not based on a formal model.