
Data Mining:

Concepts and Techniques

— Chapter 5 —

Jiawei Han
Department of Computer Science
University of Illinois at Urbana-Champaign
www.cs.uiuc.edu/~hanj
©2006 Jiawei Han and Micheline Kamber, All rights reserved

June 3, 2024 Data Mining: Concepts and Techniques 1


Chapter 5: Mining Frequent Patterns,
Association and Correlations

 Basic concepts and a road map


 Efficient and scalable frequent itemset mining
methods
 Mining various kinds of association rules
 From association mining to correlation
analysis
 Constraint-based association mining
 Summary



What Is Frequent Pattern Analysis?
 Frequent pattern: a pattern (a set of items, subsequences, substructures, etc.)
that occurs frequently in a data set
 First proposed by Agrawal, Imielinski, and Swami [AIS93] in the context of
frequent itemsets and association rule mining
 Motivation: Finding inherent regularities in data
 What products were often purchased together?— Beer and diapers?!
 What are the subsequent purchases after buying a PC?
 What kinds of DNA are sensitive to this new drug?
 Can we automatically classify web documents?
 Applications
 Basket data analysis, cross-marketing, catalog design, sales campaign analysis,
Web log (click stream) analysis, DNA sequence analysis, and customer
shopping behavior analysis



Market Basket Analysis: A Motivating
Example



Basic Concepts: Frequent Patterns and
Association Rules
 Let I = {I1, I2, ..., Im} be a set of items.
 Let D, the task-relevant data, be a set of database transactions, where each
transaction T is a set of items such that T ⊆ I. Each transaction is associated
with an identifier, called a TID.
 Let A be a set of items. A transaction T is said to contain A if and only if A ⊆ T.
 An association rule is an implication of the form A ⇒ B, where A ⊂ I, B ⊂ I,
and A ∩ B = ∅.
 The rule A ⇒ B holds in the transaction set D with support s, where s is the
percentage of transactions in D that contain A ∪ B (i.e., the union of sets A
and B, or both A and B). This is taken to be the probability P(A ∪ B).
 The rule A ⇒ B has confidence c in the transaction set D, where c is the
percentage of transactions in D containing A that also contain B. This is taken
to be the conditional probability P(B|A).
 support(A ⇒ B) = P(A ∪ B); the rule must meet a minimum support threshold (min_sup)
 confidence(A ⇒ B) = P(B|A) = support(A ∪ B)/support(A); the rule must meet a
minimum confidence threshold (min_conf)
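As a quick sanity check of these definitions, here is a minimal Python sketch (the helper names are mine; the five-transaction database is the one used in the example that follows) computing support and confidence for a candidate rule:

```python
# Sketch: support and confidence of a rule A => B over a toy
# transaction database. Helper names are illustrative.

def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(a, b, transactions):
    """support(A ∪ B) / support(A), i.e., P(B|A)."""
    return support(a | b, transactions) / support(a, transactions)

transactions = [
    {"A", "B", "D"}, {"A", "C", "D"}, {"A", "D", "E"},
    {"B", "E", "F"}, {"B", "C", "D", "E", "F"},
]
print(support({"A", "D"}, transactions))       # 3 of 5 transactions contain A and D
print(confidence({"A"}, {"D"}, transactions))  # every A-transaction also contains D
```

This reproduces the rule A ⇒ D (60%, 100%) from the example that follows.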



Basic Concepts: Frequent Patterns and
Association Rules
 Itemset X = {x1, ..., xk}
 Find all rules X ⇒ Y with minimum support and confidence
  support, s: probability that a transaction contains X ∪ Y
  confidence, c: conditional probability that a transaction having X also contains Y

Transaction-id  Items bought
10              A, B, D
20              A, C, D
30              A, D, E
40              B, E, F
50              B, C, D, E, F

Let sup_min = 50%, conf_min = 50%
Frequent patterns: {A:3, B:3, D:4, E:3, AD:3}
Association rules:
 A ⇒ D (60%, 100%)
 D ⇒ A (60%, 75%)
 Itemset
 k-itemset
 Occurrence frequency of an itemset (also called its frequency, support
count, count, or absolute support)
 Relative support
 Frequent itemset
 Minimum support threshold
 Minimum support count threshold
 Lk: the set of frequent k-itemsets
 confidence(A ⇒ B) = P(B|A) = support(A ∪ B)/support(A) =
support_count(A ∪ B)/support_count(A)


 Mining association rules reduces to mining frequent itemsets
 Association rule mining: a two-step process
  Find all frequent itemsets
  Generate strong association rules from the frequent itemsets


Closed Patterns and Max-Patterns
 A long pattern contains a combinatorial number of sub-patterns;
e.g., {a1, ..., a100} contains C(100,1) + C(100,2) + ... +
C(100,100) = 2^100 − 1 ≈ 1.27×10^30 sub-patterns!
 Solution: mine closed patterns (closed frequent itemsets)
and max-patterns (maximal frequent itemsets) instead
 An itemset X is closed if X is frequent and there exists no
super-pattern Y ⊃ X with the same support as X
(proposed by Pasquier, et al. @ ICDT’99)
 An itemset X is a max-pattern if X is frequent and there
exists no frequent super-pattern Y ⊃ X (proposed by
Bayardo @ SIGMOD’98)


Closed Patterns and Max-Patterns
 Exercise. DB = {<a1, ..., a100>, <a1, ..., a50>}, Min_sup = 1.
 What is the set of closed itemsets?
  <a1, ..., a100>: 1
  <a1, ..., a50>: 2
 What is the set of max-patterns?
  <a1, ..., a100>: 1
 What is the set of all frequent patterns? Far too many to enumerate,
e.g., {a2, a45}: 2, {a8, a55}: 1, ...
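The exercise can be verified mechanically. The following brute-force sketch (function names are mine; a real miner never enumerates all subsets like this) marks closed and maximal frequent itemsets on a two-transaction toy database with the same shape as the exercise:

```python
from itertools import combinations

# Brute-force sketch: find closed and maximal frequent itemsets.
# Exponential in the number of items; for illustration only.

def frequent_itemsets(transactions, min_sup):
    items = sorted(set().union(*transactions))
    freq = {}
    for k in range(1, len(items) + 1):
        for combo in combinations(items, k):
            s = sum(1 for t in transactions if set(combo) <= t)
            if s >= min_sup:
                freq[frozenset(combo)] = s
    return freq

def closed_and_maximal(freq):
    closed, maximal = set(), set()
    for x, s in freq.items():
        supersets = [y for y in freq if x < y]
        if all(freq[y] < s for y in supersets):
            closed.add(x)      # no frequent superset with the same support
        if not supersets:
            maximal.add(x)     # no frequent superset at all
    return closed, maximal

# Analogue of the exercise: one long transaction, one prefix of it.
db = [{"a", "b", "c"}, {"a", "b"}]
freq = frequent_itemsets(db, min_sup=1)
closed, maximal = closed_and_maximal(freq)
# closed:  {a,b}:2 and {a,b,c}:1; maximal: {a,b,c} only
```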



Criteria for Classifying Frequent Pattern
Mining
 Completeness of patterns to be mined
 Complete set of frequent itemsets

 Closed frequent itemsets

 Maximal frequent itemsets

 Constrained frequent itemsets

 Approximate frequent itemsets

 Near-match frequent itemsets

 Top-k frequent itemsets



 Levels of abstraction involved in the rule set
  Multilevel association rules
   buys(X, “computer”) ⇒ buys(X, “HP printer”)
   buys(X, “laptop computer”) ⇒ buys(X, “HP printer”)
  Single-level association rules
 Number of data dimensions involved in the rule
  Single-dimensional association rules
  Multidimensional association rules
   age(X, “30...39”) ∧ income(X, “42K...48K”) ⇒ buys(X, “high resolution TV”)
 Types of values handled in the rule
  Boolean association rules
  Quantitative association rules
 Kinds of rules to be mined
  Association rules
  Correlation rules
  Strong gradient relationships (the gradient is the ratio of an item’s
measure to that of its parent, its child, or its sibling)
   e.g., “The average sales of Sony Digital Camera increase over 16%
when sold together with Sony Laptop Computer”
 Kinds of patterns to be mined
 Frequent itemset mining

 Sequential pattern mining

 Structured pattern mining



Scalable Methods for Mining Frequent Patterns
 The downward closure property of frequent patterns
  Any subset of a frequent itemset must be frequent
  If {beer, diaper, nuts} is frequent, so is {beer, diaper}
  i.e., every transaction having {beer, diaper, nuts} also contains {beer, diaper}
 Scalable mining methods: three major approaches
  Apriori (Agrawal & Srikant @ VLDB’94)
  Frequent pattern growth (FPgrowth; Han, Pei & Yin @ SIGMOD’00)
  Vertical data format approach (CHARM; Zaki & Hsiao @ SDM’02)
Apriori: A Candidate Generation-and-Test Approach

 Apriori pruning principle: if any itemset is infrequent, its supersets
should not be generated or tested!
(Agrawal & Srikant @ VLDB’94; Mannila, et al. @ KDD’94)
 Method:
  Initially, scan DB once to get the frequent 1-itemsets
  Generate length-(k+1) candidate itemsets from length-k
frequent itemsets
  Test the candidates against DB
  Terminate when no frequent or candidate set can be
generated
Important Details of Apriori
 How to generate candidates?
 Step 1: self-joining Lk
 Step 2: pruning
 How to count supports of candidates?
 Example of Candidate-generation
 L3={abc, abd, acd, ace, bcd}
 Self-joining: L3*L3
 abcd from abc and abd
 acde from acd and ace
 Pruning:
  acde is removed because ade is not in L3

 C4={abcd}
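The join-and-prune steps can be sketched as follows. This is a simplified variant (names are mine): rather than the classic join on matching (k−1)-prefixes, it unions pairs of k-itemsets and relies entirely on the prune step to discard bad candidates:

```python
from itertools import combinations

# Sketch of Apriori candidate generation: join L_k with itself, then
# prune candidates that have an infrequent k-subset.

def apriori_gen(Lk, k):
    Lk = {frozenset(x) for x in Lk}
    candidates = set()
    for a in Lk:
        for b in Lk:
            u = a | b
            if len(u) == k + 1:        # join step (simplified: any union of size k+1)
                candidates.add(u)
    # prune step: every k-subset of a candidate must itself be frequent
    return {c for c in candidates
            if all(frozenset(s) in Lk for s in combinations(c, k))}

L3 = [{"a","b","c"}, {"a","b","d"}, {"a","c","d"}, {"a","c","e"}, {"b","c","d"}]
C4 = apriori_gen(L3, 3)
# acde is pruned because ade is not in L3; only abcd survives
```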



The Apriori Algorithm— Example (1)

The Apriori Algorithm— Example (2)

Database TDB (Sup_min = 2):
Tid  Items
10   A, C, D
20   B, C, E
30   A, B, C, E
40   B, E

1st scan → C1: {A}:2, {B}:3, {C}:3, {D}:1, {E}:3
L1: {A}:2, {B}:3, {C}:3, {E}:3

C2: {A,B}, {A,C}, {A,E}, {B,C}, {B,E}, {C,E}
2nd scan → C2: {A,B}:1, {A,C}:2, {A,E}:1, {B,C}:2, {B,E}:3, {C,E}:2
L2: {A,C}:2, {B,C}:2, {B,E}:3, {C,E}:2

C3: {B,C,E}; 3rd scan → {B,C,E}:2
L3: {B,C,E}:2
Generating Association Rules from
Frequent Itemsets
 For each frequent itemset l, generate all
nonempty subsets of l.
 For every nonempty subset s of l, output the rule
“s ⇒ (l − s)” if support_count(l)/support_count(s)
≥ min_conf, where min_conf is the minimum
confidence threshold.
 Example: l = {I1, I2, I5}
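A sketch of this rule-generation step in Python; the support counts below are hypothetical values chosen so that l = {I1, I2, I5} is frequent:

```python
from itertools import combinations

# Sketch: generate strong rules s => (l - s) from one frequent itemset l,
# given precomputed support counts. Counts below are illustrative.

def gen_rules(l, sup_count, min_conf):
    l = frozenset(l)
    rules = []
    for k in range(1, len(l)):
        for s in combinations(sorted(l), k):
            s = frozenset(s)
            conf = sup_count[l] / sup_count[s]   # support_count(l)/support_count(s)
            if conf >= min_conf:
                rules.append((set(s), set(l - s), conf))
    return rules

# Hypothetical support counts for l = {I1, I2, I5} and its subsets
sup_count = {
    frozenset(x): c for x, c in [
        (("I1",), 6), (("I2",), 7), (("I5",), 2),
        (("I1", "I2"), 4), (("I1", "I5"), 2), (("I2", "I5"), 2),
        (("I1", "I2", "I5"), 2),
    ]
}
rules = gen_rules({"I1", "I2", "I5"}, sup_count, min_conf=0.7)
# keeps the three rules with confidence 1.0, e.g. {I1, I5} => {I2}
```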



Challenges of Frequent Pattern Mining

 Challenges
 Multiple scans of transaction database
 Huge number of candidates
 Tedious workload of support counting for candidates
 Improving Apriori: general ideas
 Reduce passes of transaction database scans
 Shrink number of candidates
 Facilitate support counting of candidates



Bottleneck of Frequent-pattern Mining

 Multiple database scans are costly


 Mining long patterns needs many passes of
scanning and generates lots of candidates
  To find the frequent itemset i1 i2 ... i100
   # of scans: 100
   # of candidates: C(100,1) + C(100,2) + ... + C(100,100) = 2^100 − 1
≈ 1.27×10^30!
 Bottleneck: candidate-generation-and-test
 Can we avoid candidate generation?



Mining Frequent Patterns Without
Candidate Generation

 Grow long patterns from short ones using local
frequent items
  “abc” is a frequent pattern
  Get all transactions having “abc”: DB|abc
  “d” is a local frequent item in DB|abc ⇒ abcd is
a frequent pattern
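The growth idea can be sketched without an FP-tree at all, by recursing directly on conditional (projected) databases. This toy version (names are mine) re-derives each pattern along several item orders and is meant only to show the recursion, not FP-growth's compression or efficiency:

```python
from collections import Counter

# Sketch of pattern growth: for each local frequent item, project the
# database on it and recurse, instead of generating candidates.

def pattern_growth(db, min_sup, prefix=frozenset()):
    counts = Counter(item for t in db for item in t)
    patterns = {}
    for item, sup in counts.items():
        if sup < min_sup:
            continue
        new_pat = prefix | {item}
        patterns[new_pat] = sup
        # conditional database: transactions containing `item`,
        # with the grown pattern's items removed
        cond_db = [t - new_pat for t in db if item in t]
        cond_db = [t for t in cond_db if t]
        patterns.update(pattern_growth(cond_db, min_sup, new_pat))
    return patterns

db = [{"a", "b", "c"}, {"a", "b", "d"}, {"a", "c", "d"}]
pats = pattern_growth(db, min_sup=2)
# e.g. {a}:3, {a,b}:2, {a,c}:2, {a,d}:2; {b,c} is not frequent
```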



Construct FP-tree from a Transaction Database (min_support = 3)

TID  Items bought                  (ordered) frequent items
100  {f, a, c, d, g, i, m, p}      {f, c, a, m, p}
200  {a, b, c, f, l, m, o}         {f, c, a, b, m}
300  {b, f, h, j, o, w}            {f, b}
400  {b, c, k, s, p}               {c, b, p}
500  {a, f, c, e, l, p, m, n}      {f, c, a, m, p}

1. Scan DB once, find the frequent 1-itemsets (single-item patterns)
2. Sort frequent items in frequency-descending order: the f-list
3. Scan DB again, construct the FP-tree

Header table (item: frequency): f:4, c:4, a:3, b:3, m:3, p:3
F-list = f-c-a-b-m-p

Resulting FP-tree:
{}
├─ f:4
│  ├─ c:3
│  │  └─ a:3
│  │     ├─ m:2
│  │     │  └─ p:2
│  │     └─ b:1
│  │        └─ m:1
│  └─ b:1
└─ c:1
   └─ b:1
      └─ p:1
Find Patterns Having p From p’s Conditional Pattern Base

 Starting at the frequent-item header table in the FP-tree
 Traverse the FP-tree by following the link of each frequent item p
 Accumulate all of the transformed prefix paths of item p to form p’s
conditional pattern base

Conditional pattern bases:
item  conditional pattern base
c     f:3
a     fc:3
b     fca:1, f:1, c:1
m     fca:2, fcab:1
p     fcam:2, cb:1
From Conditional Pattern Bases to Conditional FP-trees

 For each pattern base:
  Accumulate the count for each item in the base
  Construct the FP-tree for the frequent items of the pattern base

m-conditional pattern base: fca:2, fcab:1
m-conditional FP-tree: {} → f:3 → c:3 → a:3 (b is dropped: its count 1 < min_support)
All frequent patterns relating to m: m, fm, cm, am, fcm, fam, cam, fcam
Recursion: Mining Each Conditional FP-tree

 Conditional pattern base of “am”: (fc:3) → am-conditional FP-tree: {} → f:3 → c:3
 Conditional pattern base of “cm”: (f:3) → cm-conditional FP-tree: {} → f:3
 Conditional pattern base of “cam”: (f:3) → cam-conditional FP-tree: {} → f:3


Mining Frequent Patterns With FP-trees
 Idea: frequent pattern growth
  Recursively grow frequent patterns by pattern and
database partition
 Method
  For each frequent item, construct its conditional
pattern base, and then its conditional FP-tree
  Repeat the process on each newly created conditional
FP-tree
  Until the resulting FP-tree is empty, or it contains only
one path; a single path generates all the combinations of
its sub-paths, each of which is a frequent pattern


Example-2

Why Is FP-Growth the Winner?

 Divide-and-conquer:
 decompose both the mining task and DB according to
the frequent patterns obtained so far
 leads to focused search of smaller databases
 Other factors
 no candidate generation, no candidate test
 compressed database: FP-tree structure
 no repeated scan of entire database
 basic ops—counting local freq items and building sub
FP-tree, no pattern search and matching



Mining Frequent Itemsets Using Vertical
Data Format
 horizontal data format
 vertical data format

 minimum support count=2
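A minimal sketch of the vertical idea: store a TID-set per item, and obtain the support count of any itemset by intersecting TID-sets instead of rescanning transactions. The database below is the four-transaction TDB from the Apriori example; the function name is mine:

```python
# Sketch: build the vertical (item -> TID-set) representation and get
# support counts by TID-set intersection (Eclat/CHARM style).

def vertical_format(transactions):
    tidsets = {}
    for tid, items in transactions.items():
        for item in items:
            tidsets.setdefault(item, set()).add(tid)
    return tidsets

transactions = {
    10: {"A", "C", "D"}, 20: {"B", "C", "E"},
    30: {"A", "B", "C", "E"}, 40: {"B", "E"},
}
tid = vertical_format(transactions)
# support count of {B, E} = |tidset(B) ∩ tidset(E)|, no DB rescan needed
print(len(tid["B"] & tid["E"]))  # 3
```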

Chapter 5: Mining Frequent Patterns,
Association and Correlations
 Basic concepts and a road map
 Efficient and scalable frequent itemset mining
methods
 Mining various kinds of association rules
 From association mining to correlation
analysis
 Constraint-based association mining
 Summary



Mining Various Kinds of Association Rules

 Mining multilevel association

 Mining multidimensional association

 Mining quantitative association

 Mining interesting correlation patterns



Mining Multiple-Level Association Rules



Mining Multiple-Level Association Rules
 Uniform minimum support
 Reduced minimum support
 Group-based minimum support

Uniform support: min_sup = 5% at both levels
 Level 1: computer [support = 10%]
 Level 2: laptop computer [support = 6%], desktop computer [support = 4%]
Reduced support: min_sup = 5% at Level 1, min_sup = 3% at Level 2


Multi-level Association: Redundancy Filtering

 Some rules may be redundant due to “ancestor”


relationships between items.
 Example
 Laptop computer ⇒ HP printer [support = 8%, confidence = 70%]
 IBM laptop computer ⇒ HP printer [support = 2%, confidence = 72%]
 We say the first rule is an ancestor of the second rule.
 A rule is redundant if its support and confidence are close
to the “expected” value, based on the rule’s ancestor.



Mining Multi-Dimensional Association
 Single-dimensional (intradimensional) rules:
buys(X, “milk”) ⇒ buys(X, “bread”)
 Multidimensional rules: ≥ 2 dimensions or predicates
  Inter-dimension association rules (no repeated predicates)
age(X, ”19-25”) ∧ occupation(X, “student”) ⇒ buys(X, “coke”)
  Hybrid-dimension association rules (repeated predicates)
age(X, ”19-25”) ∧ buys(X, “popcorn”) ⇒ buys(X, “coke”)
 Categorical (nominal) attributes: finite number of possible values, no
ordering among values (occupation, brand, color); data cube approach
 Quantitative attributes: numeric, implicit ordering among values
(age, income, price); discretization, clustering, and gradient approaches
Mining Quantitative Associations

 Techniques can be categorized by how numerical


attributes, such as age or salary are treated
1. Static discretization based on predefined concept
hierarchies (data cube methods)
2. Dynamic discretization based on data distribution
(quantitative rules, e.g., Agrawal & Srikant@SIGMOD96)



Static Discretization of Quantitative Attributes

 Discretized prior to mining using a concept hierarchy.
 Numeric values are replaced by ranges.
 In a relational database, finding all frequent k-predicate sets
requires k or k+1 table scans.
 A data cube is well suited for mining: the cells of an n-dimensional
cuboid correspond to the predicate sets, e.g., the lattice
(), (age), (income), (buys), (age, income), (age, buys),
(income, buys), (age, income, buys).
 Mining from data cubes can be much faster.
Quantitative Association Rules
 Proposed by Lent, Swami and Widom, ICDE’97
 Numeric attributes are dynamically discretized such that the
confidence or compactness of the rules mined is maximized
 2-D quantitative association rules: A_quan1 ∧ A_quan2 ⇒ A_cat
 Cluster adjacent association rules to form general rules using a 2-D grid
 Example:
age(X, ”34-35”) ∧ income(X, ”30-50K”) ⇒ buys(X, ”high resolution TV”)


Chapter 5: Mining Frequent Patterns,
Association and Correlations
 Basic concepts and a road map
 Efficient and scalable frequent itemset mining
methods
 Mining various kinds of association rules
 From association mining to correlation analysis
 Constraint-based association mining
 Summary



Interestingness Measure: Correlations (Lift)
 play basketball ⇒ eat cereal [40%, 66.7%] is misleading
  The overall % of students eating cereal is 75% > 66.7%.
 play basketball ⇒ not eat cereal [20%, 33.3%] is more accurate,
although with lower support and confidence
 Measure of dependent/correlated events: lift

lift(A, B) = P(A ∪ B) / (P(A) × P(B))

            Basketball  Not basketball  Sum (row)
Cereal      2000        1750            3750
Not cereal  1000        250             1250
Sum (col.)  3000        2000            5000

lift(B, C)  = (2000/5000) / ((3000/5000) × (3750/5000)) = 0.89
lift(B, ¬C) = (1000/5000) / ((3000/5000) × (1250/5000)) = 1.33


buys(X, “computer games”)⇒buys(X, “videos”) [support = 40%, confidence = 66%]
P({game}) = 0.60
P({video}) = 0.75
P({game,video}) = 0.40
P({game, video})/(P({game})×P({video})) = 0.40/(0.60×0.75) = 0.89
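Lift computations like these reduce to a one-line function over raw counts (a sketch; the helper name is mine), shown here on the basketball/cereal contingency table:

```python
# Sketch: lift(A, B) = P(A and B) / (P(A) * P(B)) from raw counts.

def lift(n_ab, n_a, n_b, n):
    return (n_ab / n) / ((n_a / n) * (n_b / n))

n = 5000
print(round(lift(2000, 3000, 3750, n), 2))  # basketball & cereal: 0.89
print(round(lift(1000, 3000, 1250, n), 2))  # basketball & no cereal: 1.33
```

A lift below 1 indicates negative correlation, above 1 positive correlation, and exactly 1 independence.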



 χ² = Σ (observed − expected)² / expected =
(4,000 − 4,500)²/4,500 + (3,500 − 3,000)²/3,000 +
(2,000 − 1,500)²/1,500 + (500 − 1,000)²/1,000 = 555.6
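The same computation as a tiny sketch (the helper name is mine):

```python
# Sketch: chi-square statistic from observed and expected cell counts.

def chi_square(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

x2 = chi_square([4000, 3500, 2000, 500], [4500, 3000, 1500, 1000])
print(round(x2, 1))  # 555.6
```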



all_confidence and cosine
 Given an itemset X = {i1, i2, ..., ik}, the all_confidence of X is
defined as all_conf(X) = sup(X) / max{sup(ij) | ∀ij ∈ X},
where max{sup(ij) | ∀ij ∈ X} is the maximum (single) item support
of all the items in X, and hence is called the max_item_sup of the
itemset X.


 Given two itemsets A and B, the cosine measure of A and B is
defined as cosine(A, B) = sup(A ∪ B) / √(sup(A) × sup(B)), the
support-count form of P(A ∪ B) / √(P(A) × P(B)).



Chapter 5: Mining Frequent Patterns,
Association and Correlations
 Basic concepts and a road map
 Efficient and scalable frequent itemset mining
methods
 Mining various kinds of association rules
 From association mining to correlation analysis
 Constraint-based association mining
 Summary



Constraint-based (Query-Directed) Mining

 Finding all the patterns in a database autonomously? —


unrealistic!
 The patterns could be too many but not focused!
 Data mining should be an interactive process
 User directs what to be mined using a data mining
query language (or a graphical user interface)
 Constraint-based mining
 User flexibility: provides constraints on what to be
mined
 System optimization: explores such constraints for
efficient mining—constraint-based mining
Constraints in Data Mining

 Knowledge type constraint:
  classification, association, etc.
 Data constraint (using SQL-like queries):
  find product pairs sold together in stores in Chicago in Dec.’02
 Dimension/level constraint:
  in relevance to region, price, brand, customer category
 Rule (or pattern) constraint:
  small sales (price < $10) triggers big sales (sum > $200)
 Interestingness constraint:
  strong rules: min_support ≥ 3%, min_confidence ≥ 60%
Anti-Monotonicity in Constraint Pushing
 Anti-monotonicity: when an itemset S violates the
constraint, so does any of its supersets
 sum(S.price) ≤ v is anti-monotone
 sum(S.price) ≥ v is not anti-monotone
 Example. C: range(S.profit) ≤ 15 is anti-monotone
  Itemset ab violates C
  So does every superset of ab

TDB (min_sup = 2)
TID  Transaction
10   a, b, c, d, f
20   b, c, d, f, g, h
30   a, c, d, e, f
40   c, e, f, g

Item  Profit
a     40
b     0
c     -20
d     10
e     -30
f     30
g     20
h     -10
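A sketch of the pruning test, using the profit table above (helper names are mine): once an itemset violates the anti-monotone constraint range(S.profit) ≤ 15, no superset needs to be examined.

```python
# Sketch: checking the anti-monotone constraint range(S.profit) <= 15.
# Once an itemset violates it, every superset can be pruned.

profit = {"a": 40, "b": 0, "c": -20, "d": 10,
          "e": -30, "f": 30, "g": 20, "h": -10}

def range_profit(itemset):
    vals = [profit[i] for i in itemset]
    return max(vals) - min(vals)

def violates(itemset, v=15):
    return range_profit(itemset) > v

print(violates({"a", "b"}))       # True: range = 40 - 0 = 25 > 15
print(violates({"a", "b", "c"}))  # True: a superset of a violator also violates
print(violates({"b", "d"}))       # False: range = 10 <= 15
```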
Monotonicity for Constraint Pushing
 Monotonicity: when an itemset S satisfies the
constraint, so does any of its supersets
 sum(S.price) ≥ v is monotone
 min(S.price) ≤ v is monotone
 Example. C: range(S.profit) ≥ 15 is monotone
  Itemset ab satisfies C
  So does every superset of ab

TDB (min_sup = 2): 10: a,b,c,d,f; 20: b,c,d,f,g,h; 30: a,c,d,e,f; 40: c,e,f,g
Item profits: a 40, b 0, c -20, d 10, e -30, f 30, g 20, h -10
Succinctness

 Succinctness:
  Given A1, the set of items satisfying a succinctness
constraint C, any set S satisfying C is based on A1,
i.e., S contains a subset belonging to A1
  Idea: whether an itemset S satisfies constraint C can be
determined based on the selection of items, without
looking at the transaction database
  min(S.price) ≤ v is succinct
  sum(S.price) ≤ v is not succinct
 Optimization: if C is succinct, C is pre-counting pushable
The Apriori Algorithm — Example

Database D:
TID  Items
100  1 3 4
200  2 3 5
300  1 2 3 5
400  2 5

Scan D → C1: {1}:2, {2}:3, {3}:3, {4}:1, {5}:3
L1: {1}:2, {2}:3, {3}:3, {5}:3

C2: {1 2}, {1 3}, {1 5}, {2 3}, {2 5}, {3 5}
Scan D → C2: {1 2}:1, {1 3}:2, {1 5}:1, {2 3}:2, {2 5}:3, {3 5}:2
L2: {1 3}:2, {2 3}:2, {2 5}:3, {3 5}:2

C3: {2 3 5}; scan D → L3: {2 3 5}:2
Naïve Algorithm: Apriori + Constraint
 Run Apriori exactly as in the example above (D → C1 → L1 → C2 → L2 →
C3 → L3 = {2 3 5}:2), and check the constraint Sum{S.price} < 5 only on
the final frequent itemsets.
The Constrained Apriori Algorithm: Push
an Anti-monotone Constraint Deep
 Same database D and run as above, but the anti-monotone constraint
Sum{S.price} < 5 is checked during candidate generation, so any itemset
violating it is pruned immediately, together with all of its supersets.
The Constrained Apriori Algorithm: Push a
Succinct Constraint Deep
 Same database D; the constraint min{S.price} <= 1 is succinct, so the
items that can satisfy it are selected up front, and candidates that
cannot qualify are "not immediately to be used", saving support counting.
Converting “Tough” Constraints
 Convert tough constraints into anti-monotone or
monotone by properly ordering items
 Examine C: avg(S.profit) ≥ 25
  Order items in value-descending order:
<a, f, g, d, b, h, c, e>
  If an itemset afb violates C, so do afbh and afb*:
the constraint becomes anti-monotone!

TDB (min_sup = 2): 10: a,b,c,d,f; 20: b,c,d,f,g,h; 30: a,c,d,e,f; 40: c,e,f,g
Item profits: a 40, b 0, c -20, d 10, e -30, f 30, g 20, h -10


Strongly Convertible Constraints
 avg(X) ≥ 25 is convertible anti-monotone w.r.t. the
item-value-descending order R: <a, f, g, d, b, h, c, e>
  If an itemset af violates a constraint C, so does every
itemset with af as a prefix, such as afd
 avg(X) ≥ 25 is convertible monotone w.r.t. the item-value-
ascending order R⁻¹: <e, c, h, b, d, g, f, a>
  If an itemset d satisfies a constraint C, so do itemsets
df and dfa, which have d as a prefix
 Thus, avg(X) ≥ 25 is strongly convertible

Item profits: a 40, b 0, c -20, d 10, e -30, f 30, g 20, h -10
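A sketch of why the value-descending order makes avg(X) ≥ 25 anti-monotone (helper names are mine): extending a prefix of R can only lower the running average, so the first violating prefix prunes everything after it.

```python
# Sketch: the profit table ordered by descending value; along this order,
# appending the next item can only lower the average profit of a prefix.

profit = {"a": 40, "f": 30, "g": 20, "d": 10,
          "b": 0, "h": -10, "c": -20, "e": -30}
order = sorted(profit, key=profit.get, reverse=True)  # a, f, g, d, b, h, c, e

def avg_profit(prefix):
    return sum(profit[i] for i in prefix) / len(prefix)

print(order[:3], avg_profit(order[:3]))  # a, f, g: avg 30, satisfies avg >= 25
print(order[:4], avg_profit(order[:4]))  # adding d: avg 25, still satisfies
print(order[:5], avg_profit(order[:5]))  # adding b: avg 20, violates; prune here
```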
Can Apriori Handle Convertible Constraints?
 A convertible constraint that is neither monotone,
anti-monotone, nor succinct cannot be pushed deep
into the Apriori mining algorithm
  Within the level-wise framework, no direct
pruning based on the constraint can be made
  Itemset df violates constraint C: avg(X) ≥ 25
  Since adf satisfies C, Apriori needs df to
assemble adf, so df cannot be pruned
 But the constraint can be pushed into the
frequent-pattern growth framework!

Item values: a 40, b 0, c -20, d 10, e -30, f 30, g 20, h -10


Mining With Convertible Constraints
 C: avg(X) ≥ 25, min_sup = 2
 List items in every transaction in value-descending
order R: <a, f, g, d, b, h, c, e>
  C is convertible anti-monotone w.r.t. R
 Scan TDB once
  Remove infrequent items: item h is dropped
  Itemsets a and f are good, ...
 Projection-based mining
  Impose an appropriate order on item projection
  Many tough constraints can be converted into
(anti-)monotone constraints

Item values: a 40, f 30, g 20, d 10, b 0, h -10, c -20, e -30

TDB (min_sup = 2), items listed in R order:
TID  Transaction
10   a, f, d, b, c
20   f, g, d, b, c
30   a, f, d, c, e
40   f, g, h, c, e


Handling Multiple Constraints

 Different constraints may require different or even


conflicting item-ordering
 If there exists an order R s.t. both C1 and C2 are
convertible w.r.t. R, then there is no conflict between
the two convertible constraints
 If there exists conflict on order of items
 Try to satisfy one constraint first
 Then using the order for the other constraint to
mine frequent itemsets in the corresponding
projected database
What Constraints Are Convertible?

Constraint                                  Convertible anti-monotone  Convertible monotone  Strongly convertible
avg(S) ≤ v, ≥ v                             Yes                        Yes                   Yes
median(S) ≤ v, ≥ v                          Yes                        Yes                   Yes
sum(S) ≤ v (items of any value, v ≥ 0)      Yes                        No                    No
sum(S) ≤ v (items of any value, v ≤ 0)      No                         Yes                   No
sum(S) ≥ v (items of any value, v ≥ 0)      No                         Yes                   No
sum(S) ≥ v (items of any value, v ≤ 0)      Yes                        No                    No
……


Constraint-Based Mining—A General Picture

Constraint                       Antimonotone  Monotone     Succinct
v ∈ S                            no            yes          yes
S ⊇ V                            no            yes          yes
S ⊆ V                            yes           no           yes
min(S) ≤ v                       no            yes          yes
min(S) ≥ v                       yes           no           yes
max(S) ≤ v                       yes           no           yes
max(S) ≥ v                       no            yes          yes
count(S) ≤ v                     yes           no           weakly
count(S) ≥ v                     no            yes          weakly
sum(S) ≤ v (∀a ∈ S, a ≥ 0)       yes           no           no
sum(S) ≥ v (∀a ∈ S, a ≥ 0)       no            yes          no
range(S) ≤ v                     yes           no           no
range(S) ≥ v                     no            yes          no
avg(S) θ v, θ ∈ {=, ≤, ≥}        convertible   convertible  no
support(S) ≥ ξ                   yes           no           no
support(S) ≤ ξ                   no            yes          no


A Classification of Constraints

Antimonotone Monotone

Strongly
convertible
Succinct

Convertible Convertible
anti-monotone monotone

Inconvertible



Chapter 5: Mining Frequent Patterns,
Association and Correlations
 Basic concepts and a road map
 Efficient and scalable frequent itemset mining
methods
 Mining various kinds of association rules
 From association mining to correlation analysis
 Constraint-based association mining
 Summary



Frequent-Pattern Mining: Summary

 Frequent pattern mining—an important task in data mining


 Scalable frequent pattern mining methods
 Apriori (Candidate generation & test)
 Projection-based (FPgrowth, CLOSET+, ...)
 Vertical format approach (CHARM, ...)
 Mining a variety of rules and interesting patterns
 Constraint-based mining
 Mining sequential and structured patterns
 Extensions and applications
Frequent-Pattern Mining: Research Problems

 Mining fault-tolerant frequent, sequential and structured


patterns
 Patterns allow limited faults (insertion, deletion, mutation)
 Mining truly interesting patterns
 Surprising, novel, concise, …
 Application exploration
 E.g., DNA sequence analysis and bio-pattern
classification
 “Invisible” data mining



Ref: Basic Concepts of Frequent Pattern Mining

 (Association Rules) R. Agrawal, T. Imielinski, and A. Swami. Mining


association rules between sets of items in large databases.
SIGMOD'93.
 (Max-pattern) R. J. Bayardo. Efficiently mining long patterns from
databases. SIGMOD'98.
 (Closed-pattern) N. Pasquier, Y. Bastide, R. Taouil, and L. Lakhal.
Discovering frequent closed itemsets for association rules. ICDT'99.
 (Sequential pattern) R. Agrawal and R. Srikant. Mining sequential
patterns. ICDE'95



Ref: Apriori and Its Improvements

 R. Agrawal and R. Srikant. Fast algorithms for mining association rules.


VLDB'94.
 H. Mannila, H. Toivonen, and A. I. Verkamo. Efficient algorithms for
discovering association rules. KDD'94.
 A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm for
mining association rules in large databases. VLDB'95.
 J. S. Park, M. S. Chen, and P. S. Yu. An effective hash-based algorithm
for mining association rules. SIGMOD'95.
 H. Toivonen. Sampling large databases for association rules. VLDB'96.
 S. Brin, R. Motwani, J. D. Ullman, and S. Tsur. Dynamic itemset
counting and implication rules for market basket analysis. SIGMOD'97.
 S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule
mining with relational database systems: Alternatives and implications.
SIGMOD'98.
Ref: Depth-First, Projection-Based FP Mining

 R. Agarwal, C. Aggarwal, and V. V. V. Prasad. A tree projection algorithm for generation of frequent itemsets. J. Parallel and Distributed Computing:02.
 J. Han, J. Pei, and Y. Yin. Mining frequent patterns without candidate generation. SIGMOD'00.
 J. Pei, J. Han, and R. Mao. CLOSET: An Efficient Algorithm for Mining Frequent Closed Itemsets. DMKD'00.
 J. Liu, Y. Pan, K. Wang, and J. Han. Mining Frequent Item Sets by Opportunistic Projection. KDD'02.
 J. Han, J. Wang, Y. Lu, and P. Tzvetkov. Mining Top-K Frequent Closed Patterns without Minimum Support. ICDM'02.
 J. Wang, J. Han, and J. Pei. CLOSET+: Searching for the Best Strategies for Mining Frequent Closed Itemsets. KDD'03.
 G. Liu, H. Lu, W. Lou, and J. X. Yu. On Computing, Storing and Querying Frequent Patterns. KDD'03.
Ref: Vertical Format and Row Enumeration Methods

 M. J. Zaki, S. Parthasarathy, M. Ogihara, and W. Li. Parallel algorithms for discovery of association rules. DAMI:97.
 M. J. Zaki and C.-J. Hsiao. CHARM: An Efficient Algorithm for Closed Itemset Mining. SDM'02.
 C. Bucila, J. Gehrke, D. Kifer, and W. White. DualMiner: A Dual-Pruning Algorithm for Itemsets with Constraints. KDD'02.
 F. Pan, G. Cong, A. K. H. Tung, J. Yang, and M. Zaki. CARPENTER: Finding Closed Patterns in Long Biological Datasets. KDD'03.
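The vertical (TID-list) representation used by the ECLAT/CHARM family above replaces transaction scans with set intersections: each item maps to the set of transaction IDs containing it, and an itemset's support is the size of the intersection of its items' TID sets. A minimal sketch (illustrative names, not from the cited papers):

```python
def to_vertical(transactions):
    """Convert horizontal transactions to the vertical format: item -> set of TIDs."""
    tidlists = {}
    for tid, t in enumerate(transactions):
        for item in t:
            tidlists.setdefault(item, set()).add(tid)
    return tidlists

def support(tidlists, itemset):
    """Support of an itemset = size of the intersection of its items' TID sets."""
    tids = None
    for item in itemset:
        tids = tidlists[item] if tids is None else tids & tidlists[item]
    return len(tids)
```

For example, over `[['a','b'], ['a','c'], ['a','b','c']]`, `support(to_vertical(tx), ['a','b'])` is 2 without ever rescanning the transactions, which is the efficiency argument these papers develop.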
Ref: Mining Multi-Level and Quantitative Rules

 R. Srikant and R. Agrawal. Mining generalized association rules. VLDB'95.
 J. Han and Y. Fu. Discovery of multiple-level association rules from large databases. VLDB'95.
 R. Srikant and R. Agrawal. Mining quantitative association rules in large relational tables. SIGMOD'96.
 T. Fukuda, Y. Morimoto, S. Morishita, and T. Tokuyama. Data mining using two-dimensional optimized association rules: Scheme, algorithms, and visualization. SIGMOD'96.
 K. Yoda, T. Fukuda, Y. Morimoto, S. Morishita, and T. Tokuyama. Computing optimized rectilinear regions for association rules. KDD'97.
 R. J. Miller and Y. Yang. Association rules over interval data. SIGMOD'97.
 Y. Aumann and Y. Lindell. A Statistical Theory for Quantitative Association Rules. KDD'99.
Ref: Mining Correlations and Interesting Rules

 M. Klemettinen, H. Mannila, P. Ronkainen, H. Toivonen, and A. I. Verkamo. Finding interesting rules from large sets of discovered association rules. CIKM'94.
 S. Brin, R. Motwani, and C. Silverstein. Beyond market basket: Generalizing association rules to correlations. SIGMOD'97.
 C. Silverstein, S. Brin, R. Motwani, and J. Ullman. Scalable techniques for mining causal structures. VLDB'98.
 P.-N. Tan, V. Kumar, and J. Srivastava. Selecting the Right Interestingness Measure for Association Patterns. KDD'02.
 E. Omiecinski. Alternative Interest Measures for Mining Associations. TKDE'03.
 Y. K. Lee, W. Y. Kim, Y. D. Cai, and J. Han. CoMine: Efficient Mining of Correlated Patterns. ICDM'03.
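A central idea in the correlation papers above (e.g., Brin et al., SIGMOD'97) is to go beyond support/confidence to a correlation measure such as lift. As a small illustrative sketch (assuming set-valued transactions; names are mine, not the papers'):

```python
def lift(transactions, a, b):
    """lift(A => B) = P(A and B) / (P(A) * P(B)).
    > 1 suggests positive correlation, = 1 independence, < 1 negative correlation."""
    n = len(transactions)
    p_a = sum(1 for t in transactions if a <= t) / n      # fraction containing A
    p_b = sum(1 for t in transactions if b <= t) / n      # fraction containing B
    p_ab = sum(1 for t in transactions if (a | b) <= t) / n  # fraction containing both
    return p_ab / (p_a * p_b)
```

For example, over `[{'beer','diaper'}, {'beer','diaper'}, {'milk'}, {'bread'}]`, lift({beer}, {diaper}) = 0.5 / (0.5 * 0.5) = 2.0, indicating positive correlation; the cited papers compare lift against chi-square and other interestingness measures.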
Ref: Mining Other Kinds of Rules

 R. Meo, G. Psaila, and S. Ceri. A new SQL-like operator for mining association rules. VLDB'96.
 B. Lent, A. Swami, and J. Widom. Clustering association rules. ICDE'97.
 A. Savasere, E. Omiecinski, and S. Navathe. Mining for strong negative associations in a large database of customer transactions. ICDE'98.
 D. Tsur, J. D. Ullman, S. Abiteboul, C. Clifton, R. Motwani, and S. Nestorov. Query flocks: A generalization of association-rule mining. SIGMOD'98.
 F. Korn, A. Labrinidis, Y. Kotidis, and C. Faloutsos. Ratio rules: A new paradigm for fast, quantifiable data mining. VLDB'98.
 K. Wang, S. Zhou, and J. Han. Profit Mining: From Patterns to Actions. EDBT'02.
Ref: Constraint-Based Pattern Mining

 R. Srikant, Q. Vu, and R. Agrawal. Mining association rules with item constraints. KDD'97.
 R. Ng, L. V. S. Lakshmanan, J. Han, and A. Pang. Exploratory mining and pruning optimizations of constrained association rules. SIGMOD'98.
 M. N. Garofalakis, R. Rastogi, and K. Shim. SPIRIT: Sequential Pattern Mining with Regular Expression Constraints. VLDB'99.
 G. Grahne, L. Lakshmanan, and X. Wang. Efficient mining of constrained correlated sets. ICDE'00.
 J. Pei, J. Han, and L. V. S. Lakshmanan. Mining Frequent Itemsets with Convertible Constraints. ICDE'01.
 J. Pei, J. Han, and W. Wang. Mining Sequential Patterns with Constraints in Large Databases. CIKM'02.
Ref: Mining Sequential and Structured Patterns

 R. Srikant and R. Agrawal. Mining sequential patterns: Generalizations and performance improvements. EDBT'96.
 H. Mannila, H. Toivonen, and A. I. Verkamo. Discovery of frequent episodes in event sequences. DAMI:97.
 M. Zaki. SPADE: An Efficient Algorithm for Mining Frequent Sequences. Machine Learning:01.
 J. Pei, J. Han, H. Pinto, Q. Chen, U. Dayal, and M.-C. Hsu. PrefixSpan: Mining Sequential Patterns Efficiently by Prefix-Projected Pattern Growth. ICDE'01.
 M. Kuramochi and G. Karypis. Frequent Subgraph Discovery. ICDM'01.
 X. Yan, J. Han, and R. Afshar. CloSpan: Mining Closed Sequential Patterns in Large Datasets. SDM'03.
 X. Yan and J. Han. CloseGraph: Mining Closed Frequent Graph Patterns. KDD'03.
Ref: Mining Spatial, Multimedia, and Web Data

 K. Koperski and J. Han. Discovery of Spatial Association Rules in Geographic Information Databases. SSD'95.
 O. R. Zaiane, M. Xin, and J. Han. Discovering Web Access Patterns and Trends by Applying OLAP and Data Mining Technology on Web Logs. ADL'98.
 O. R. Zaiane, J. Han, and H. Zhu. Mining Recurrent Items in Multimedia with Progressive Resolution Refinement. ICDE'00.
 D. Gunopulos and I. Tsoukatos. Efficient Mining of Spatiotemporal Patterns. SSTD'01.
Ref: Mining Frequent Patterns in Time-Series Data

 B. Ozden, S. Ramaswamy, and A. Silberschatz. Cyclic association rules. ICDE'98.
 J. Han, G. Dong, and Y. Yin. Efficient Mining of Partial Periodic Patterns in Time Series Database. ICDE'99.
 H. Lu, L. Feng, and J. Han. Beyond Intra-Transaction Association Analysis: Mining Multi-Dimensional Inter-Transaction Association Rules. TOIS:00.
 B.-K. Yi, N. Sidiropoulos, T. Johnson, H. V. Jagadish, C. Faloutsos, and A. Biliris. Online Data Mining for Co-Evolving Time Sequences. ICDE'00.
 W. Wang, J. Yang, and R. Muntz. TAR: Temporal Association Rules on Evolving Numerical Attributes. ICDE'01.
 J. Yang, W. Wang, and P. S. Yu. Mining Asynchronous Periodic Patterns in Time Series Data. TKDE'03.
Ref: Iceberg Cube and Cube Computation

 S. Agarwal, R. Agrawal, P. M. Deshpande, A. Gupta, J. F. Naughton, R. Ramakrishnan, and S. Sarawagi. On the computation of multidimensional aggregates. VLDB'96.
 Y. Zhao, P. M. Deshpande, and J. F. Naughton. An array-based algorithm for simultaneous multidimensional aggregates. SIGMOD'97.
 J. Gray, et al. Data cube: A relational aggregation operator generalizing group-by, cross-tab and sub-totals. DAMI:97.
 M. Fang, N. Shivakumar, H. Garcia-Molina, R. Motwani, and J. D. Ullman. Computing iceberg queries efficiently. VLDB'98.
 S. Sarawagi, R. Agrawal, and N. Megiddo. Discovery-driven exploration of OLAP data cubes. EDBT'98.
 K. Beyer and R. Ramakrishnan. Bottom-up computation of sparse and iceberg cubes. SIGMOD'99.
Ref: Iceberg Cube and Cube Exploration

 J. Han, J. Pei, G. Dong, and K. Wang. Computing Iceberg Data Cubes with Complex Measures. SIGMOD'01.
 W. Wang, H. Lu, J. Feng, and J. X. Yu. Condensed Cube: An Effective Approach to Reducing Data Cube Size. ICDE'02.
 G. Dong, J. Han, J. Lam, J. Pei, and K. Wang. Mining Multi-Dimensional Constrained Gradients in Data Cubes. VLDB'01.
 T. Imielinski, L. Khachiyan, and A. Abdulghani. Cubegrades: Generalizing association rules. DAMI:02.
 L. V. S. Lakshmanan, J. Pei, and J. Han. Quotient Cube: How to Summarize the Semantics of a Data Cube. VLDB'02.
 D. Xin, J. Han, X. Li, and B. W. Wah. Star-Cubing: Computing Iceberg Cubes by Top-Down and Bottom-Up Integration. VLDB'03.
Ref: FP for Classification and Clustering

 G. Dong and J. Li. Efficient mining of emerging patterns: Discovering trends and differences. KDD'99.
 B. Liu, W. Hsu, and Y. Ma. Integrating Classification and Association Rule Mining. KDD'98.
 W. Li, J. Han, and J. Pei. CMAR: Accurate and Efficient Classification Based on Multiple Class-Association Rules. ICDM'01.
 H. Wang, W. Wang, J. Yang, and P. S. Yu. Clustering by pattern similarity in large data sets. SIGMOD'02.
 J. Yang and W. Wang. CLUSEQ: Efficient and effective sequence clustering. ICDE'03.
 B. Fung, K. Wang, and M. Ester. Large Hierarchical Document Clustering Using Frequent Itemsets. SDM'03.
 X. Yin and J. Han. CPAR: Classification Based on Predictive Association Rules. SDM'03.
Ref: Stream and Privacy-Preserving FP Mining

 A. Evfimievski, R. Srikant, R. Agrawal, and J. Gehrke. Privacy Preserving Mining of Association Rules. KDD'02.
 J. Vaidya and C. Clifton. Privacy Preserving Association Rule Mining in Vertically Partitioned Data. KDD'02.
 G. Manku and R. Motwani. Approximate Frequency Counts over Data Streams. VLDB'02.
 Y. Chen, G. Dong, J. Han, B. W. Wah, and J. Wang. Multi-Dimensional Regression Analysis of Time-Series Data Streams. VLDB'02.
 C. Giannella, J. Han, J. Pei, X. Yan, and P. S. Yu. Mining Frequent Patterns in Data Streams at Multiple Time Granularities. Next Generation Data Mining:03.
 A. Evfimievski, J. Gehrke, and R. Srikant. Limiting Privacy Breaches in Privacy Preserving Data Mining. PODS'03.
Ref: Other Frequent Pattern Mining Applications

 Y. Huhtala, J. Kärkkäinen, P. Porkka, and H. Toivonen. Efficient Discovery of Functional and Approximate Dependencies Using Partitions. ICDE'98.
 H. V. Jagadish, J. Madar, and R. Ng. Semantic Compression and Pattern Extraction with Fascicles. VLDB'99.
 T. Dasu, T. Johnson, S. Muthukrishnan, and V. Shkapenyuk. Mining Database Structure; or How to Build a Data Quality Browser. SIGMOD'02.