CS590D 4
Why Is Association Mining
Important?
• Foundation for many essential data mining tasks
– Association, correlation, causality
– Sequential patterns, temporal or cyclic association,
partial periodicity, spatial and multimedia association
– Associative classification, cluster analysis, iceberg
cube, fascicles (semantic data compression)
• Broad applications
– Basket data analysis, cross-marketing, catalog
design, sale campaign analysis
– Web log (click stream) analysis, DNA sequence
analysis, etc.
CS590D 5
Basic Concepts:
Association Rules

Transaction-id   Items bought
10               A, B, C
20               A, C
30               A, D
40               B, E, F

• Itemset X = {x1, …, xk}
• Find all the rules X ⇒ Y with min confidence and support
  – support, s, probability that a transaction contains X ∪ Y
  – confidence, c, conditional probability that a transaction having X also
    contains Y

Let min_support = 50%, min_conf = 50%:
  A ⇒ C (50%, 66.7%)
  C ⇒ A (50%, 100%)

[Figure: Venn diagram of customers who buy beer, customers who buy diapers,
and customers who buy both.]
6
Mining Association Rules:
Example

Transaction-id   Items bought
10               A, B, C
20               A, C
30               A, D
40               B, E, F

Min. support 50%
Min. confidence 50%

Frequent pattern   Support
{A}                75%
{B}                50%
{C}                50%
{A, C}             50%

For rule A ⇒ C:
  support = support({A} ∪ {C}) = 50%
  confidence = support({A} ∪ {C}) / support({A}) = 66.6%
CS590D 7
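The arithmetic above can be checked directly. A minimal Python sketch (not
from the slides; names are illustrative) that recomputes support and
confidence for A ⇒ C over the four transactions:

transactions = [
    {"A", "B", "C"},   # tid 10
    {"A", "C"},        # tid 20
    {"A", "D"},        # tid 30
    {"B", "E", "F"},   # tid 40
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

# Rule A => C: support = support(A ∪ C), confidence = support(A ∪ C) / support(A)
sup_ac = support({"A", "C"})
conf_ac = sup_ac / support({"A"})
print(f"support = {sup_ac:.0%}, confidence = {conf_ac:.1%}")  # 50%, 66.7%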
Mining Association Rules:
What We Need to Know
• Goal: Rules with high support/confidence
• How to compute?
– Support: Find sets of items that occur
frequently
– Confidence: Find frequency of subsets of
supported itemsets
• If we have all frequently occurring sets of
items (frequent itemsets), we can compute
support and confidence!
CS590D 8
Mining Association Rules in
Large Databases
• Association rule mining
• Algorithms for scalable mining of (single-
dimensional Boolean) association rules in
transactional databases
• Mining various kinds of association/correlation
rules
• Constraint-based association mining
• Sequential pattern mining
• Applications/extensions of frequent pattern
mining
• Summary
CS590D 11
Apriori: A Candidate Generation-
and-Test Approach
• Any subset of a frequent itemset must be frequent
– if {beer, diaper, nuts} is frequent, so is {beer, diaper}
– Every transaction having {beer, diaper, nuts} also contains {beer,
diaper}
• Apriori pruning principle: If there is any itemset which is
infrequent, its superset should not be generated/tested!
• Method:
– generate length (k+1) candidate itemsets from length k frequent
itemsets, and
– test the candidates against DB
• Performance studies show its efficiency and scalability
• Agrawal & Srikant 1994; Mannila et al. 1994
CS590D 12
The Apriori Algorithm—An Example

Database TDB
Tid   Items
10    A, C, D
20    B, C, E
30    A, B, C, E
40    B, E

1st scan → C1
Itemset   sup
{A}       2
{B}       3
{C}       3
{D}       1
{E}       3

L1
Itemset   sup
{A}       2
{B}       3
{C}       3
{E}       3

C2 (generated from L1): {A, B}, {A, C}, {A, E}, {B, C}, {B, E}, {C, E}

2nd scan → C2
Itemset   sup
{A, B}    1
{A, C}    2
{A, E}    1
{B, C}    2
{B, E}    3
{C, E}    2

L2
Itemset   sup
{A, C}    2
{B, C}    2
{B, E}    3
{C, E}    2

C3: {B, C, E}

3rd scan → L3
Itemset     sup
{B, C, E}   2

Frequency ≥ 50%, Confidence 100%: A ⇒ C, B ⇒ E, BC ⇒ E, CE ⇒ B
13
The Apriori Algorithm
• Pseudo-code:
Ck: Candidate itemset of size k
Lk : frequent itemset of size k
L1 = {frequent items};
for (k = 1; Lk != ∅; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in database do
        increment the count of all candidates in Ck+1 that are contained in t
    Lk+1 = candidates in Ck+1 with min_support
end
return ∪k Lk;
CS590D 14
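A runnable Python sketch of the loop above (helper names are assumptions, not
the course's reference code); on the TDB of the previous slide it reproduces
L1 through L3:

from itertools import combinations

def apriori(transactions, min_support):
    """Return {frozenset: count} for all frequent itemsets."""
    min_count = min_support * len(transactions)

    def count(cands):
        counts = {c: sum(c <= t for t in transactions) for c in cands}
        return {c: s for c, s in counts.items() if s >= min_count}

    items = {i for t in transactions for i in t}
    L = count({frozenset([i]) for i in items})          # L1
    frequent = dict(L)
    k = 2
    while L:
        prev_items = sorted({i for c in L for i in c})
        # Candidates: k-sets whose (k-1)-subsets are all in L(k-1).
        cands = {frozenset(c) for c in combinations(prev_items, k)
                 if all(frozenset(s) in L for s in combinations(c, k - 1))}
        L = count(cands)
        frequent.update(L)
        k += 1
    return frequent

tdb = [{"A","C","D"}, {"B","C","E"}, {"A","B","C","E"}, {"B","E"}]
for itemset, sup in sorted(apriori(tdb, 0.5).items(),
                           key=lambda x: (len(x[0]), sorted(x[0]))):
    print(sorted(itemset), sup)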
Important Details of Apriori
• How to generate candidates?
– Step 1: self-joining Lk
– Step 2: pruning
• How to count supports of candidates?
• Example of Candidate-generation
– L3={abc, abd, acd, ace, bcd}
– Self-joining: L3*L3
• abcd from abc and abd
• acde from acd and ace
– Pruning:
• acde is removed because ade is not in L3
– C4={abcd}
CS590D 15
How to Generate Candidates?
• Suppose the items in Lk-1 are listed in an order
• Step 1: self-joining Lk-1
insert into Ck
select p.item1, p.item2, …, p.itemk-1, q.itemk-1
from Lk-1 p, Lk-1 q
where p.item1=q.item1, …, p.itemk-2=q.itemk-2, p.itemk-1 < q.itemk-1
• Step 2: pruning
forall itemsets c in Ck do
    forall (k-1)-subsets s of c do
        if (s is not in Lk-1) then delete c from Ck
CS590D 20
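A Python sketch of the two-step generation above (join, then prune), with
itemsets as sorted tuples and illustrative names; it reproduces the L3 → C4
example of the previous slide:

from itertools import combinations

def gen_candidates(Lk):
    """Lk: frequent k-itemsets as sorted tuples -> candidate (k+1)-itemsets."""
    Lk = {tuple(sorted(s)) for s in Lk}
    k = len(next(iter(Lk)))
    # Step 1: self-join -- merge pairs agreeing on the first k-1 items.
    joined = {p[:-1] + (p[-1], q[-1])
              for p in Lk for q in Lk
              if p[:-1] == q[:-1] and p[-1] < q[-1]}
    # Step 2: prune -- drop candidates having an infrequent k-subset.
    return {c for c in joined if all(s in Lk for s in combinations(c, k))}

L3 = {("a","b","c"), ("a","b","d"), ("a","c","d"), ("a","c","e"), ("b","c","d")}
print(gen_candidates(L3))   # {('a','b','c','d')}; acde is pruned since ade is not in L3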
How to Count Supports of
Candidates?
• Why is counting supports of candidates a problem?
– The total number of candidates can be huge
– One transaction may contain many candidates
• Method:
– Candidate itemsets are stored in a hash-tree
– Leaf node of hash-tree contains a list of itemsets and counts
– Interior node contains a hash table
– Subset function: finds all the candidates contained in a
transaction
CS590D 21
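As a simplification of the hash-tree, the sketch below gets the same
per-transaction behavior from a flat hash set of candidates plus k-subset
enumeration (data is illustrative):

from itertools import combinations
from collections import Counter

def count_supports(transactions, candidates, k):
    cand = set(candidates)                     # stand-in for the hash tree
    counts = Counter()
    for t in transactions:
        for s in combinations(sorted(t), k):   # all k-subsets of the transaction
            if s in cand:
                counts[s] += 1
    return counts

tdb = [{1, 2, 3, 5, 6}, {1, 3, 5}, {2, 3, 5}]
C3 = {(1, 2, 3), (1, 3, 5), (2, 3, 5), (3, 5, 6)}
print(count_supports(tdb, C3, 3))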
Example: Counting Supports of
Candidates
[Figure: hash tree over candidate 3-itemsets. Interior nodes hash on item
(1,4,7 / 2,5,8 / 3,6,9); leaves hold candidate lists such as 145, 124, 457,
125, 458, 159, 345, 356, 357, 689, 367, 368, 234, 567, 136. The subset
function traces transaction 1 2 3 5 6 through the tree (1+2356, 12+356,
13+56, …) to reach only the leaves that can contain its subsets.]
CS590D 22
Efficient Implementation of Apriori
in SQL
• Hard to get good performance out of pure SQL (SQL-92)
based approaches alone
• Make use of object-relational extensions like UDFs,
BLOBs, Table functions etc.
– Get orders of magnitude improvement
• S. Sarawagi, S. Thomas, and R. Agrawal. Integrating
association rule mining with relational database systems:
Alternatives and implications. In SIGMOD’98
CS590D 23
Challenges of Frequent Pattern
Mining
• Challenges
– Multiple scans of transaction database
– Huge number of candidates
– Tedious workload of support counting for candidates
• Improving Apriori: general ideas
– Reduce passes of transaction database scans
– Shrink number of candidates
– Facilitate support counting of candidates
CS590D 24
DIC: Reduce Number of Scans
• Once both A and D are determined frequent, the counting of AD begins
• Once all length-2 subsets of BCD are determined frequent, the counting of
  BCD begins

[Figure: itemset lattice from {} up to ABCD alongside a transaction stream;
Apriori starts counting 1-itemsets, then 2-itemsets, … one full scan at a
time, while DIC starts counting 2- and 3-itemsets part-way through a scan.]

S. Brin, R. Motwani, J. Ullman, and S. Tsur. Dynamic itemset counting and
implication rules for market basket data. In SIGMOD'97
25
Partition: Scan Database Only
Twice
• Any itemset that is potentially frequent in DB
must be frequent in at least one of the partitions
of DB
– Scan 1: partition database and find local frequent
patterns
– Scan 2: consolidate global frequent patterns
• A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm for
mining association rules in large databases. In VLDB'95
CS590D 26
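A two-scan sketch of the Partition idea under the property above; the
brute-force local miner and all names are only illustrative:

from itertools import combinations

def local_frequent(part, min_sup):
    """All itemsets meeting min_sup (a fraction) inside one partition."""
    items = sorted({i for t in part for i in t})
    return {c for k in range(1, len(items) + 1)
              for c in combinations(items, k)
              if sum(set(c) <= t for t in part) >= min_sup * len(part)}

def partition_mine(db, n_parts, min_sup):
    size = -(-len(db) // n_parts)              # ceiling division
    parts = [db[i:i + size] for i in range(0, len(db), size)]
    # Scan 1: local frequent patterns per partition form the global candidates.
    candidates = set().union(*(local_frequent(p, min_sup) for p in parts))
    # Scan 2: count the candidates once against the whole database.
    return {c for c in candidates
            if sum(set(c) <= t for t in db) >= min_sup * len(db)}

db = [{"A","C"}, {"A","B"}, {"B","C"}, {"A","C"}]
print(partition_mine(db, 2, 0.5))   # {A}, {B}, {C}, {A,C}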
Sampling for Frequent Patterns
• Select a sample of original database, mine
frequent patterns within sample using Apriori
• Scan database once to verify frequent itemsets
found in sample, only borders of closure of
frequent patterns are checked
– Example: check abcd instead of ab, ac, …, etc.
• Scan database again to find missed frequent
patterns
• H. Toivonen. Sampling large databases for
association rules. In VLDB’96
CS590D 28
DHP: Reduce the Number of
Candidates
• A k-itemset whose corresponding hashing
bucket count is below the threshold cannot be
frequent
– Candidates: a, b, c, d, e
– Hash entries: {ab, ad, ae} {bd, be, de} …
– Frequent 1-itemset: a, b, d, e
– ab is not a candidate 2-itemset if the sum of count of
{ab, ad, ae} is below support threshold
• J. Park, M. Chen, and P. Yu. An effective hash-
based algorithm for mining association rules. In
SIGMOD’95
CS590D 29
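A sketch of the DHP first pass (assumed names, not the paper's code): while
1-itemsets are counted, every pair in each transaction is hashed into a small
bucket array; since bucket counts only overestimate a pair's support, a pair
whose bucket is below min_count can be safely excluded from C2:

from itertools import combinations

def dhp_pass(transactions, n_buckets, min_count):
    item_count, buckets = {}, [0] * n_buckets
    for t in transactions:
        for i in t:
            item_count[i] = item_count.get(i, 0) + 1
        for pair in combinations(sorted(t), 2):
            buckets[hash(pair) % n_buckets] += 1
    frequent_items = {i for i, c in item_count.items() if c >= min_count}
    def may_be_frequent(pair):     # pruning test for candidate 2-itemsets
        return (set(pair) <= frequent_items
                and buckets[hash(pair) % n_buckets] >= min_count)
    return frequent_items, may_be_frequent

tdb = [{"a","b"}, {"a","c"}, {"a","c"}, {"b","d"}]
freq, test = dhp_pass(tdb, 16, 2)
# ("a","c") always passes; ("a","b") is pruned unless its bucket collides.
print(freq, test(("a", "c")), test(("a", "b")))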
Eclat/MaxEclat and VIPER:
Exploring Vertical Data Format
• Use tid-list, the list of transaction-ids containing an itemset
• Compression of tid-lists
– Itemset A: t1, t2, t3, sup(A)=3
– Itemset B: t2, t3, t4, sup(B)=3
– Itemset AB: t2, t3, sup(AB)=2
• Major operation: intersection of tid-lists
• M. Zaki et al. New algorithms for fast discovery of association rules.
In KDD’97
• P. Shenoy et al. Turbo-charging vertical mining of large databases.
In SIGMOD’00
CS590D 30
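A minimal vertical-format sketch (tid-lists are illustrative): support
counting for an itemset becomes tid-list intersection, the Eclat-style core
operation:

from functools import reduce

vertical = {"A": {1, 2, 3}, "B": {2, 3, 4}}    # item -> tid-list

def tidlist(itemset):
    return reduce(set.intersection, (vertical[i] for i in itemset))

print(tidlist({"A"}), len(tidlist({"A"})))              # {1, 2, 3}, sup = 3
print(tidlist({"A", "B"}), len(tidlist({"A", "B"})))    # {2, 3},    sup = 2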
Bottleneck of Frequent-pattern
Mining
• Multiple database scans are costly
• Mining long patterns needs many passes of
scanning and generates lots of candidates
– To find frequent itemset i1i2…i100
  • # of scans: 100
  • # of candidates: (100 choose 1) + (100 choose 2) + … + (100 choose 100)
    = 2^100 − 1 ≈ 1.27×10^30 !
• Bottleneck: candidate-generation-and-test
• Can we avoid candidate generation?
CS590D 31
CS590D: Data Mining
Prof. Chris Clifton
CS590D 35
Partition Patterns and
Databases
• Frequent patterns can be partitioned into
subsets according to f-list
– F-list=f-c-a-b-m-p
– Patterns containing p
– Patterns having m but no p
–…
– Patterns having c but no a nor b, m, p
– Pattern f
• Completeness and non-redundancy
CS590D 36
Find Patterns Having P From P-
conditional Database
• Starting at the frequent item header table in the FP-tree
• Traverse the FP-tree by following the link of each frequent item p
• Accumulate all of the transformed prefix paths of item p to form p's
  conditional pattern base

[Figure: FP-tree with root {} and paths f:4 → c:3 → a:3 → m:2 → p:2,
f:4 → c:3 → a:3 → b:1 → m:1, f:4 → b:1, and c:1 → b:1 → p:1; the header table
lists f:4, c:4, a:3, b:3, m:3, p:3 with node-links into the tree.]

Conditional pattern bases
item   cond. pattern base
c      f:3
a      fc:3
b      fca:1, f:2, c:2
m      fca:2, fcab:1
p      fcam:2, cb:1
CS590D 37
From Conditional Pattern-bases to
Conditional FP-trees
• For each pattern-base
  – Accumulate the count for each item in the base
  – Construct the FP-tree for the frequent items of the pattern base

m-conditional FP-tree:   {} → f:3 → c:3 → a:3
am-conditional FP-tree:  {} → f:3 → c:3
Cond. pattern base of “cm”: (f:3);  cm-conditional FP-tree: {} → f:3
Cond. pattern base of “cam”: (f:3); cam-conditional FP-tree: {} → f:3
CS590D 39
A Special Case: Single Prefix Path
in FP-tree
• Suppose a (conditional) FP-tree T has a shared single prefix-path P
• Mining can be decomposed into two parts
– Reduction of the single prefix path into one node
– Concatenation of the mining results of the two parts

[Figure: a tree whose single prefix path a1:n1 → a2:n2 → a3:n3 branches into
b1:m1 and C1:k1 (with C2:k2, C3:k3 below) is split into a single node r1
standing for the prefix path plus the branching part rooted at r1.]
40
Mining Frequent Patterns With
FP-trees
• Idea: Frequent pattern growth
– Recursively grow frequent patterns by pattern and
database partition
• Method
– For each frequent item, construct its conditional
pattern-base, and then its conditional FP-tree
– Repeat the process on each newly created conditional
FP-tree
– Until the resulting FP-tree is empty, or it contains only
one path—single path will generate all the
combinations of its sub-paths, each of which is a
frequent pattern
CS590D 41
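A compact sketch of the divide-and-conquer idea, with one simplification: it
projects plain transaction lists instead of building a compressed FP-tree, so
the recursion mirrors conditional pattern bases without the tree structure.
Transactions (the slides' fcamp example) are assumed to list each item at most
once, in f-list order:

def pattern_growth(db, min_count, suffix=()):
    """db: list of item tuples in f-list order; yields (pattern, support)."""
    counts = {}
    for t in db:
        for i in t:
            counts[i] = counts.get(i, 0) + 1
    for item, sup in counts.items():
        if sup < min_count:
            continue
        pattern = (item,) + suffix
        yield pattern, sup
        # Conditional (projected) database: the prefix before each occurrence.
        proj = [t[:t.index(item)] for t in db if item in t]
        yield from pattern_growth([p for p in proj if p], min_count, pattern)

db = [("f","c","a","m","p"), ("f","c","a","b","m"), ("f","b"),
      ("c","b","p"), ("f","c","a","m","p")]
for pat, sup in sorted(pattern_growth(db, 3)):
    print(pat, sup)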
Scaling FP-growth by DB
Projection
• FP-tree cannot fit in memory?—DB
projection
• First partition a database into a set of
projected DBs
• Then construct and mine FP-tree for each
projected DB
• Parallel projection vs. Partition projection
techniques
– Parallel projection is space costly
CS590D 42
Partition-based Projection
• Parallel projection needs a lot of disk space
• Partition projection saves it

[Figure: a transaction DB (fcamp, fcabm, fb, cbp, fcamp) is split into
projected DBs, e.g. am-proj DB {fc, fc, fc} and cm-proj DB {f, f, f}.]
43
FP-Growth vs. Apriori: Scalability
With the Support Threshold
[Figure: run time (seconds, 0–70) vs. support threshold (0–3%), comparing
FP-growth and Apriori.]
CS590D 44
FP-Growth vs. Tree-Projection:
Scalability with the Support Threshold
[Figure: runtime (sec., 0–100) vs. support threshold (0–2%), comparing
FP-growth and Tree-Projection.]
CS590D 45
Why Is FP-Growth the Winner?
• Divide-and-conquer:
– decompose both the mining task and DB according to the
frequent patterns obtained so far
– leads to focused search of smaller databases
• Other factors
– no candidate generation, no candidate test
– compressed database: FP-tree structure
– no repeated scan of entire database
– basic ops—counting local freq items and building sub FP-tree,
no pattern search and matching
CS590D 46
Implications of the
Methodology
• Mining closed frequent itemsets and max-patterns
– CLOSET (DMKD’00)
• Mining sequential patterns
– FreeSpan (KDD’00), PrefixSpan (ICDE’01)
• Constraint-based mining of frequent patterns
– Convertible constraints (KDD’00, ICDE’01)
• Computing iceberg data cubes with complex measures
– H-tree and H-cubing algorithm (SIGMOD’01)
CS590D 47
Max-patterns
• Frequent pattern {a1, …, a100} has (100 choose 1) + (100 choose 2) + … +
  (100 choose 100) = 2^100 − 1 ≈ 1.27×10^30 frequent sub-patterns!
• Max-pattern: a frequent pattern without any proper frequent super-pattern

Min_sup = 2
Tid   Items
10    A, B, C, D, E
20    B, C, D, E
30    A, C, D, F

– BCDE, ACD are max-patterns
– BCD is not a max-pattern
CS590D 48
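A sketch that recovers the slide's max-patterns: enumerate the frequent
itemsets by brute force (fine at this size), then keep those with no frequent
proper superset:

from itertools import combinations

db = [{"A","B","C","D","E"}, {"B","C","D","E"}, {"A","C","D","F"}]
min_sup = 2

items = sorted({i for t in db for i in t})
frequent = {frozenset(c)
            for k in range(1, len(items) + 1)
            for c in combinations(items, k)
            if sum(set(c) <= t for t in db) >= min_sup}

max_patterns = {p for p in frequent
                if not any(p < q for q in frequent)}  # no frequent proper superset
print(sorted("".join(sorted(p)) for p in max_patterns))   # ['ACD', 'BCDE']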
MaxMiner: Mining Max-
patterns
• 1st scan: find frequent items
  – A, B, C, D, E
• 2nd scan: find support for …

Tid   Items
10    A, B, C, D, E
20    B, C, D, E
CS590D 49
Frequent Closed Patterns
• conf(ac ⇒ d) = 100% ⇒ record acd only
• For frequent itemset X, if there exists no item y s.t. every transaction
  containing X also contains y, then X is a frequent closed pattern
  – “acd” is a frequent closed pattern (Min_sup = 2)
CS590D 51
Mining Frequent Closed
Patterns: CHARM
• Use vertical data format: t(AB) = {T1, T12, …}
• Derive closed pattern based on vertical intersections
  – t(X) = t(Y): X and Y always happen together
  – t(X) ⊂ t(Y): transaction having X always has Y
• Use diffset to accelerate mining
  – Only keep track of differences of tids
  – t(X) = {T1, T2, T3}, t(Xy) = {T1, T3}
  – Diffset(Xy, X) = {T2}
• M. Zaki. CHARM: An Efficient Algorithm for Closed Association Rule Mining,
CS-TR99-10, Rensselaer Polytechnic Institute
• M. Zaki, Fast Vertical Mining Using Diffsets, TR01-1, Department of
Computer Science, Rensselaer Polytechnic Institute
CS590D 52
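A small sketch of the vertical operations CHARM builds on (tid-lists are
illustrative): equal tid-lists mean two itemsets always occur together, and a
diffset stores only what adding an item removes from the parent's tid-list:

t = {                       # tid-lists in vertical format
    "X":  {"T1", "T2", "T3"},
    "Y":  {"T1", "T2", "T3"},
    "Xy": {"T1", "T3"},
}

print(t["X"] == t["Y"])             # True: X and Y always happen together
diffset = t["X"] - t["Xy"]          # Diffset(Xy, X)
print(diffset)                      # {'T2'}
print(len(t["X"]) - len(diffset))   # sup(Xy) = 2, recovered from the diffset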
Visualization of Association Rules:
Pane Graph
CS590D 53
Visualization of Association Rules: Rule Graph
CS590D 54
Mining Association Rules in
Large Databases
• Association rule mining
• Algorithms for scalable mining of (single-dimensional Boolean)
association rules in transactional databases
• Mining various kinds of association/correlation rules
• Constraint-based association mining
• Sequential pattern mining
• Applications/extensions of frequent pattern mining
• Summary
CS590D 55
Mining Various Kinds of Rules or
Regularities
CS590D 56
Multiple-level Association
Rules
• Items often form hierarchy
• Flexible support settings: Items at the lower level
are expected to have lower support.
• Transaction database can be encoded based on
dimensions and levels
• explore shared multi-level mining
[Figure: uniform vs. reduced support. Level 1 (Milk, support = 10%) uses
min_sup = 5% under both schemes; under uniform support the lower level keeps
min_sup = 5%, under reduced support the lower level gets a smaller min_sup.]
CS590D 58
Multi-dimensional Association
• Single-dimensional rules:
  buys(X, “milk”) ⇒ buys(X, “bread”)
• Multi-dimensional rules: ≥ 2 dimensions or predicates
  – Inter-dimension assoc. rules (no repeated predicates)
    age(X, “19-25”) ∧ occupation(X, “student”) ⇒ buys(X, “coke”)
  – Hybrid-dimension assoc. rules (repeated predicates)
    age(X, “19-25”) ∧ buys(X, “popcorn”) ⇒ buys(X, “coke”)
• Categorical Attributes
– finite number of possible values, no ordering among values
• Quantitative Attributes
– numeric, implicit ordering among values
CS590D 59
Multi-level Association:
Redundancy Filtering
• Some rules may be redundant due to “ancestor”
relationships between items.
• Example
– milk ⇒ wheat bread [support = 8%, confidence = 70%]
– 2% milk ⇒ wheat bread [support = 2%, confidence = 72%]
• We say the first rule is an ancestor of the second rule.
• A rule is redundant if its support is close to the
“expected” value, based on the rule’s ancestor.
CS590D 60
CS590D: Data Mining
Prof. Chris Clifton
CS590D 62
Maximal vs Closed Itemsets
TID   Items
1     ABC
2     ABCD
3     BCE
4     ACDE
5     DE

[Figure: the itemset lattice from null up to ABCDE, each node annotated with
the ids of the transactions containing it (e.g. A: 124, B: 123, C: 1234,
D: 245, E: 345, AC: 124, CD: 24, …); closed and maximal itemsets are
highlighted.]

# Closed = 9
# Maximal = 4
CS590D 64
Maximal vs Closed Itemsets
[Figure: nested sets — maximal frequent itemsets ⊆ closed frequent itemsets ⊆
frequent itemsets.]
CS590D 65
Multi-Level Mining: Progressive
Deepening
• A top-down, progressive deepening approach:
– First mine high-level frequent items:
milk (15%), bread (10%)
– Then mine their lower-level “weaker” frequent
itemsets:
2% milk (5%), wheat bread (4%)
• Different min_support thresholds across multi-levels lead to different
algorithms:
– If adopting the same min_support across multi-levels
then toss t if any of t’s ancestors is infrequent.
– If adopting reduced min_support at lower levels
then examine only those descendents whose ancestor’s
support is frequent/non-negligible.
CS590D 66
Techniques for Mining MD
Associations
• Search for frequent k-predicate set:
– Example: {age, occupation, buys} is a 3-predicate set
– Techniques can be categorized by how age is treated
1. Using static discretization of quantitative attributes
– Quantitative attributes are statically discretized by using
predefined concept hierarchies
2. Quantitative association rules
– Quantitative attributes are dynamically discretized into
“bins”based on the distribution of the data
3. Distance-based association rules
– This is a dynamic discretization process that considers the
distance between data points
CS590D 67
Static Discretization of
Quantitative Attributes
• Discretized prior to mining using concept hierarchy.
• Numeric values are replaced by ranges.
• In a relational database, finding all frequent k-predicate sets will
  require k or k+1 table scans.
• Data cube is well suited for mining.
• The cells of an n-dimensional cuboid correspond to the predicate sets.
• Mining from data cubes can be much faster.

              ()
  (age)   (income)   (buys)
(age, income) (age, buys) (income, buys)
      (age, income, buys)
CS590D 69
Quantitative Association
Rules
• Numeric attributes are dynamically discretized
– Such that the confidence or compactness of the rules mined is
maximized
• 2-D quantitative association rules: A_quan1 ∧ A_quan2 ⇒ A_cat
• Cluster “adjacent” association rules to form general rules using a 2-D grid
• Example:
  age(X, “30-34”) ∧ income(X, “24K-48K”) ⇒ buys(X, “high resolution TV”)
Mining Distance-based
Association Rules
• Binning methods do not capture the semantics of interval
data
Price($)   Equi-width (width $10)   Equi-depth (depth 2)   Distance-based
7          [0,10]                   [7,20]                 [7,7]
20         [11,20]                  [22,50]                [20,22]
22         [21,30]                  [51,53]                [50,53]
50         [31,40]
51         [41,50]
53         [51,60]
CS590D 72
Mining Association Rules in
Large Databases
• Association rule mining
• Algorithms for scalable mining of (single-dimensional Boolean)
association rules in transactional databases
• Mining various kinds of association/correlation rules
• Constraint-based association mining
• Sequential pattern mining
• Applications/extensions of frequent pattern mining
• Summary
CS590D 73
Constraint-based Data
Mining
• Finding all the patterns in a database
autonomously? — unrealistic!
– The patterns could be too many but not focused!
• Data mining should be an interactive process
– User directs what to be mined using a data mining
query language (or a graphical user interface)
• Constraint-based mining
– User flexibility: provides constraints on what to be
mined
– System optimization: explores such constraints for
efficient mining—constraint-based mining
CS590D 74
Constraints in Data Mining
• Knowledge type constraint:
– classification, association, etc.
• Data constraint — using SQL-like queries
– find product pairs sold together in stores in Vancouver in Dec.’00
• Dimension/level constraint
– in relevance to region, price, brand, customer category
• Rule (or pattern) constraint
– small sales (price < $10) triggers big sales (sum > $200)
• Interestingness constraint
– strong rules: min_support ≥ 3%, min_confidence ≥ 60%
CS590D 75
Constrained Mining vs. Constraint-
Based Search
• Constrained mining vs. constraint-based search/reasoning
– Both are aimed at reducing search space
– Finding all patterns satisfying constraints vs. finding some (or
one) answer in constraint-based search in AI
– Constraint-pushing vs. heuristic search
– It is an interesting research problem on how to integrate them
• Constrained mining vs. query processing in DBMS
– Database query processing requires finding all
– Constrained pattern mining shares a similar philosophy as
pushing selections deeply in query processing
CS590D 76
Constrained Frequent Pattern Mining:
A Mining Query Optimization Problem
• Given a frequent pattern mining query with a set of constraints C,
the algorithm should be
– sound: it only finds frequent sets that satisfy the given
constraints C
– complete: all frequent sets satisfying the given constraints C are
found
• A naïve solution
– First find all frequent sets, and then test them for constraint
satisfaction
• More efficient approaches:
– Analyze the properties of constraints comprehensively
– Push them as deeply as possible inside the frequent pattern
computation.
CS590D 77
Application of Interestingness
Measure
[Figure: mining pipeline — data → selection → preprocessed data (a products ×
features table) → mining → patterns → postprocessing with interestingness
measures → knowledge.]
CS590D 79
Computing Interestingness
Measure
• Given a rule X ⇒ Y, the information needed to compute rule interestingness
  can be obtained from a contingency table

Contingency table for X ⇒ Y
       Y     ¬Y
X      f11   f10   f1+
¬X     f01   f00   f0+
       f+1   f+0   |T|

f11: support of X and Y
f10: support of X and ¬Y
f01: support of ¬X and Y
f00: support of ¬X and ¬Y

        Coffee   ¬Coffee
Tea     15       5         20
¬Tea    75       5         80
        90       10        100

Association Rule: Tea ⇒ Coffee
CS590D 84
Drawback of Lift & Interest
      Y    ¬Y              Y    ¬Y
X     10   0    10   X     90   0    90
¬X    0    90   90   ¬X    0    10   10
      10   90   100        90   10   100

Lift = 0.1 / ((0.1)(0.1)) = 10        Lift = 0.9 / ((0.9)(0.9)) = 1.11

Statistical independence:
If P(X,Y) = P(X)P(Y) => Lift = 1
CS590D 85
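A sketch computing lift from a 2×2 contingency table; it reproduces both
tables above as well as the earlier tea/coffee example:

def lift(f11, f10, f01, f00):
    n = f11 + f10 + f01 + f00
    p_xy, p_x, p_y = f11 / n, (f11 + f10) / n, (f11 + f01) / n
    return p_xy / (p_x * p_y)

print(lift(10, 0, 0, 90))   # 10.0  -> looks strongly dependent
print(lift(90, 0, 0, 10))   # ~1.11 -> nearly independent despite 90% overlap
print(lift(15, 5, 75, 5))   # ~0.83 -> tea and coffee are negatively correlated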
There are lots of
measures proposed
in the literature
CS590D 86
Properties of A Good
Measure
• Piatetsky-Shapiro:
  3 properties a good measure M must satisfy:
  – M(A,B) = 0 if A and B are statistically independent
  – M(A,B) increases monotonically with P(A,B) when P(A) and P(B) remain
    unchanged
  – M(A,B) decreases monotonically with P(A) (or P(B)) when P(A,B) and P(B)
    (or P(A)) remain unchanged
CS590D 87
Comparing Different Measures

10 examples of contingency tables:

Example   f11    f10    f01    f00
E1        8123   83     424    1370
E2        8330   2      622    1046
E3        9481   94     127    298
E4        3954   3080   5      2961
E5        2886   1363   1320   4431
E6        1500   2000   500    6000
E7        4000   2000   1000   3000
E8        4000   2000   2000   2000
E9        1720   7121   5      1154
E10       61     2483   4      7452

Rankings of contingency tables using various measures:
[Figure: ranking of E1–E10 under each interestingness measure; different
measures order the same tables differently.]
CS590D 88
Property under Variable
Permutation
      B   ¬B           A   ¬A
A     p   q      B     p   r
¬A    r   s      ¬B    q   s

Symmetric measures (M(A,B) = M(B,A)):
  support, lift, collective strength, cosine, Jaccard, etc.
Asymmetric measures:
  confidence, conviction, Laplace, J-measure, etc.
CS590D 89
Property under Row/Column
Scaling
Grade-Gender Example (Mosteller, 1968):
[Table: 2×2 grade-by-gender counts, shown before and after scaling one column
by 2x and the other by 10x.]

Mosteller: Underlying association should be independent of the relative
number of male and female students in the samples
CS590D 90
Property under Inversion Operation

[Figure: binary item vectors A–F over transactions 1…N; inversion flips every
bit of a vector (each 1 to 0 and each 0 to 1).]
Invariant measures:
support, cosine, Jaccard, etc
Non-invariant measures:
correlation, Gini, mutual information, odds ratio, etc
CS590D 93
Different Measures have Different
Properties
Symbol   Measure               Range               P1    P2   P3   O1     O2   O3    O3'   O4
φ        Correlation           -1 … 0 … 1          Yes   Yes  Yes  Yes    No   Yes   Yes   No
λ        Lambda                0 … 1               Yes   No   No   Yes    No   No*   Yes   No
α        Odds ratio            0 … 1 … ∞           Yes*  Yes  Yes  Yes    Yes  Yes*  Yes   No
Q        Yule's Q              -1 … 0 … 1          Yes   Yes  Yes  Yes    Yes  Yes   Yes   No
Y        Yule's Y              -1 … 0 … 1          Yes   Yes  Yes  Yes    Yes  Yes   Yes   No
κ        Cohen's               -1 … 0 … 1          Yes   Yes  Yes  Yes    No   No    Yes   No
M        Mutual Information    0 … 1               Yes   Yes  Yes  Yes    No   No*   Yes   No
J        J-Measure             0 … 1               Yes   No   No   No     No   No    No    No
G        Gini Index            0 … 1               Yes   No   No   No     No   No*   Yes   No
s        Support               0 … 1               No    Yes  No   Yes    No   No    No    No
c        Confidence            0 … 1               No    Yes  No   Yes    No   No    No    Yes
L        Laplace               0 … 1               No    Yes  No   Yes    No   No    No    No
V        Conviction            0.5 … 1 … ∞         No    Yes  No   Yes**  No   No    Yes   No
I        Interest              0 … 1 … ∞           Yes*  Yes  Yes  Yes    No   No    No    No
IS       IS (cosine)           0 … 1               No    Yes  Yes  Yes    No   No    No    Yes
PS       Piatetsky-Shapiro's   -0.25 … 0 … 0.25    Yes   Yes  Yes  Yes    No   Yes   Yes   No
F        Certainty factor      -1 … 0 … 1          Yes   Yes  Yes  No     No   No    Yes   No
AV       Added value           0.5 … 1 … 1         Yes   Yes  Yes  No     No   No    No    No
S        Collective strength   0 … 1 … ∞           No    Yes  Yes  Yes    No   Yes*  Yes   No
ζ        Jaccard               0 … 1               No    Yes  Yes  Yes    No   No    No    Yes
K        Klosgen's             …                   Yes   Yes  Yes  No     No   No    No    No
CS590D 94
Anti-Monotonicity in Constraint-Based Mining

TDB (min_sup = 2)
TID   Transaction
10    a, b, c, d, f
20    b, c, d, f, g, h
30    a, c, d, e, f
40    c, e, f, g

• Anti-monotonicity
  – When an itemset S violates the constraint, so does any of its superset
  – Example. C: range(S.profit) ≤ 15 is anti-monotone
    • Itemset ab violates C, and so does every superset of ab

Item   Profit
a      40
b      0
c      -20
d      10
e      -30
f      30
g      20
h      -10
CS590D 95
Which Constraints Are Anti-
Monotone?
Constraint                           Antimonotone
v ∈ S                                No
S ⊇ V                                no
S ⊆ V                                yes
min(S) ≤ v                           no
min(S) ≥ v                           yes
max(S) ≤ v                           yes
max(S) ≥ v                           no
count(S) ≤ v                         yes
count(S) ≥ v                         no
sum(S) ≤ v (∀a ∈ S, a ≥ 0)           yes
sum(S) ≥ v (∀a ∈ S, a ≥ 0)           no
range(S) ≤ v                         yes
range(S) ≥ v                         no
avg(S) θ v, θ ∈ {=, ≤, ≥}            convertible
support(S) ≥ ξ                       yes
support(S) ≤ ξ                       no
96
Monotonicity in Constraint-
Based Mining TDB (min_sup=2)
TDB (min_sup = 2)
TID   Transaction
10    a, b, c, d, f
20    b, c, d, f, g, h
30    a, c, d, e, f
40    c, e, f, g

• Monotonicity
  – When an itemset S satisfies the constraint, so does any of its superset
  – sum(S.Price) ≥ v is monotone
  – min(S.Price) ≤ v is monotone
• Example. C: range(S.profit) ≥ 15
  – Itemset ab satisfies C
  – So does every superset of ab

Item   Profit
a      40
b      0
c      -20
d      10
e      -30
f      30
g      20
h      -10
CS590D 97
Which Constraints Are
Monotone?
Constraint                           Monotone
v ∈ S                                yes
S ⊇ V                                yes
S ⊆ V                                no
min(S) ≤ v                           yes
min(S) ≥ v                           no
max(S) ≤ v                           no
max(S) ≥ v                           yes
count(S) ≤ v                         no
count(S) ≥ v                         yes
sum(S) ≤ v (∀a ∈ S, a ≥ 0)           no
sum(S) ≥ v (∀a ∈ S, a ≥ 0)           yes
range(S) ≤ v                         no
range(S) ≥ v                         yes
avg(S) θ v, θ ∈ {=, ≤, ≥}            convertible
support(S) ≥ ξ                       no
support(S) ≤ ξ                       yes
98
Succinctness
• Succinctness:
– Given A1, the set of items satisfying a succinctness constraint C,
then any set S satisfying C is based on A1 , i.e., S contains a
subset belonging to A1
– Idea: Without looking at the transaction database, whether an
itemset S satisfies constraint C can be determined based on the
selection of items
– min(S.Price) ≤ v is succinct
– sum(S.Price) ≥ v is not succinct
• Optimization: If C is succinct, C is pre-counting pushable
CS590D 99
Which Constraints Are
Succinct?
Constraint                           Succinct
v ∈ S                                yes
S ⊇ V                                yes
S ⊆ V                                yes
min(S) ≤ v                           yes
min(S) ≥ v                           yes
max(S) ≤ v                           yes
max(S) ≥ v                           yes
count(S) ≤ v                         weakly
count(S) ≥ v                         weakly
sum(S) ≤ v (∀a ∈ S, a ≥ 0)           no
sum(S) ≥ v (∀a ∈ S, a ≥ 0)           no
range(S) ≤ v                         no
range(S) ≥ v                         no
avg(S) θ v, θ ∈ {=, ≤, ≥}            no
support(S) ≥ ξ                       no
support(S) ≤ ξ                       no
100
The Apriori Algorithm —
Example
Database D
TID   Items
100   1 3 4
200   2 3 5
300   1 2 3 5
400   2 5

Scan D → C1
itemset   sup.
{1}       2
{2}       3
{3}       3
{4}       1
{5}       3

L1
itemset   sup.
{1}       2
{2}       3
{3}       3
{5}       3

C2: {1 2}, {1 3}, {1 5}, {2 3}, {2 5}, {3 5}

Scan D → C2
itemset   sup
{1 2}     1
{1 3}     2
{1 5}     1
{2 3}     2
{2 5}     3
{3 5}     2

L2
itemset   sup
{1 3}     2
{2 3}     2
{2 5}     3
{3 5}     2

C3: {2 3 5}

Scan D → L3
itemset   sup
{2 3 5}   2
101
Naïve Algorithm: Apriori +
Constraint
Constraint: Sum{S.price} < 5

[Same Apriori run as on the previous slide; the constraint is tested only on
the final frequent itemsets.]
The Constrained Apriori Algorithm: Push an
Anti-monotone Constraint Deep
Constraint: Sum{S.price} < 5

[Same Apriori run, but candidates violating the anti-monotone sum constraint
are pruned as soon as they are generated, before counting.]
103
The Constrained Apriori Algorithm:
Push a Succinct Constraint Deep
Constraint: min{S.price} ≤ 1

[Same Apriori run, but the succinct constraint is enforced before counting:
only candidates containing at least one item with price ≤ 1 are generated.]
104
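A sketch of pushing the anti-monotone constraint of the slides above into the
Apriori loop; item prices are assumed equal to the item numbers, purely for
illustration. A candidate violating sum(price) < 5 is dropped before counting,
since no superset can satisfy the constraint again:

from itertools import combinations

price = {i: i for i in [1, 2, 3, 4, 5]}        # assumed prices
db = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
min_count, max_sum = 2, 5

def constrained_apriori():
    frequent, L = {}, set()
    for i in price:                             # L1, constraint already applied
        sup = sum(i in t for t in db)
        if sup >= min_count and price[i] < max_sum:
            L.add(frozenset([i])); frequent[frozenset([i])] = sup
    k = 2
    while L:
        items = sorted({i for c in L for i in c})
        cands = {frozenset(c) for c in combinations(items, k)
                 if all(frozenset(s) in L for s in combinations(c, k - 1))
                 and sum(price[i] for i in c) < max_sum}   # pushed constraint
        L = set()
        for c in cands:
            sup = sum(c <= t for t in db)
            if sup >= min_count:
                L.add(c); frequent[c] = sup
        k += 1
    return frequent

print(constrained_apriori())   # only itemsets whose price sum is < 5 are counted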
Converting “Tough” Constraints

TDB (min_sup = 2)
TID   Transaction
10    a, b, c, d, f
20    b, c, d, f, g, h
30    a, c, d, e, f
40    c, e, f, g

• Convert tough constraints into anti-monotone or monotone by properly
  ordering items
• Examine C: avg(S.profit) ≥ 25
  – Order items in value-descending order
    • <a, f, g, d, b, h, c, e>
  – If an itemset afb violates C
    • So does afbh, afb*
    • It becomes anti-monotone!

Item   Profit
a      40
b      0
c      -20
d      10
e      -30
f      30
g      20
h      -10
CS590D
Convertible Constraints
• Let R be an order of items
• Convertible anti-monotone
  – If an itemset S violates a constraint C, so does every itemset having S
    as a prefix w.r.t. R
  – Ex. avg(S) ≥ v w.r.t. item value descending order
• Convertible monotone
  – If an itemset S satisfies constraint C, so does every itemset having S
    as a prefix w.r.t. R
  – Ex. avg(S) ≤ v w.r.t. item value descending order
CS590D 106
Strongly Convertible
Constraints
• avg(X) ≥ 25 is convertible anti-monotone w.r.t. item value descending order
  R: <a, f, g, d, b, h, c, e>
  – If an itemset af violates a constraint C, so does every itemset with af
    as prefix, such as afd
• avg(X) ≥ 25 is convertible monotone w.r.t. item value ascending order R⁻¹:
  <e, c, h, b, d, g, f, a>
  – If an itemset d satisfies a constraint C, so do itemsets df and dfa,
    which have d as a prefix
• Thus, avg(X) ≥ 25 is strongly convertible

Item   Profit
a      40
b      0
c      -20
d      10
e      -30
f      30
g      20
h      -10
CS590D 107
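A sketch of the conversion: enumerating itemsets as prefixes w.r.t. the
profit-descending order R makes avg(S.profit) ≥ 25 anti-monotone, so a
violating branch can be pruned (enumeration only; support counting is omitted
to keep it short):

profit = {"a": 40, "b": 0, "c": -20, "d": 10,
          "e": -30, "f": 30, "g": 20, "h": -10}
R = sorted(profit, key=profit.get, reverse=True)   # <a, f, g, d, b, h, c, e>

def grow(prefix, start, total=0):
    """Enumerate prefixes w.r.t. R, pruning once avg(profit) drops below 25."""
    for idx in range(start, len(R)):
        s = total + profit[R[idx]]
        if s / (len(prefix) + 1) < 25:   # violated: later items are even smaller,
            break                        # so the rest of this level fails too
        yield prefix + [R[idx]]
        yield from grow(prefix + [R[idx]], idx + 1, s)

for itemset in grow([], 0):
    print(itemset)   # only itemsets whose every prefix keeps avg(profit) >= 25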
What Constraints Are Convertible?
Constraint                                         Convertible      Convertible   Strongly
                                                   anti-monotone    monotone      convertible
avg(S) ≤ v, ≥ v                                    Yes              Yes           Yes
median(S) ≤ v, ≥ v                                 Yes              Yes           Yes
sum(S) ≤ v (items could be of any value, v ≥ 0)    Yes              No            No
sum(S) ≤ v (items could be of any value, v ≤ 0)    No               Yes           No
sum(S) ≥ v (items could be of any value, v ≥ 0)    No               Yes           No
sum(S) ≥ v (items could be of any value, v ≤ 0)    Yes              No            No
……
CS590D 108
Combining Them Together—A
General Picture

Constraint     Antimonotone   Monotone   Succinct
v ∈ S          no             yes        yes
S ⊇ V          no             yes        yes
S ⊆ V          yes            no         yes
min(S) ≤ v     no             yes        yes
min(S) ≥ v     yes            no         yes
max(S) ≤ v     yes            no         yes

[Figure: relationships among constraint classes — succinct, antimonotone,
monotone, strongly convertible, convertible anti-monotone, convertible
monotone, and inconvertible.]
CS590D 110
CS590D: Data Mining
Prof. Chris Clifton
February 2, 2006
Association Rules
Mining With Convertible Constraints

TDB (min_sup = 2)
TID   Transaction
10    a, f, d, b, c
20    f, g, d, b, c
30    a, f, d, c, e
40    f, g, h, c, e

• C: avg(S.profit) ≥ 25
• List items in every transaction in value-descending order R:
  <a, f, g, d, b, h, c, e>
CS590D 115
Interestingness via
Unexpectedness
• Need to model expectation of users (domain knowledge)
[Figure: patterns marked + (expected to be frequent) or − (expected to be
infrequent), compared against the mined result; agreements are expected
patterns, disagreements are unexpected patterns.]
CS590D 119
Handling Categorical Attributes
• Potential Issues
– What if attribute has many possible values
• Example: attribute country has more than 200 possible
values
• Many of the attribute values may have very low support
– Potential solution: Aggregate the low-support attribute values
– What if distribution of attribute values is highly skewed
• Example: 95% of the visitors have Buy = No
• Most of the items will be associated with (Buy=No) item
– Potential solution: drop the highly frequent items
CS590D 120
Handling Continuous Attributes
• Different kinds of rules:
– Age ∈ [21,35) ∧ Salary ∈ [70k,120k) ⇒ Buy
– Salary ∈ [70k,120k) ∧ Buy ⇒ Age: μ=28, σ=4
• Different methods:
– Discretization-based
– Statistics-based
– Non-discretization based
• minApriori
CS590D 121
Handling Continuous
Attributes
• Use discretization
• Unsupervised:
– Equal-width binning
– Equal-depth binning
– Clustering
• Supervised:

            Attribute values, v
Class       v1    v2    v3   v4   v5   v6    v7    v8    v9
Anomalous   0     0     20   10   20   0     0     0     0
Normal      150   100   0    0    0    100   100   150   100
122
CS590D 123
Discretization Issues
• Execution time
– If intervals contain n values, there are on average O(n²) possible ranges
CS590D 126
Mining Association Rules in
Large Databases
• Association rule mining
• Algorithms for scalable mining of (single-dimensional Boolean)
association rules in transactional databases
• Mining various kinds of association/correlation rules
• Constraint-based association mining
• Sequential pattern mining
• Applications/extensions of frequent pattern mining
• Summary
CS590D 135
Sequence Databases and
Sequential Pattern Analysis
• Transaction databases, time-series databases vs. sequence
databases
• Frequent patterns vs. (frequent) sequential patterns
• Applications of sequential pattern mining
– Customer shopping sequences:
• First buy computer, then CD-ROM, and then digital camera, within 3
months.
– Medical treatment, natural disasters (e.g., earthquakes), science &
engineering processes, stocks and markets, etc.
– Telephone calling patterns, Weblog click streams
– DNA sequences and gene structures
CS590D 136
What Is Sequential Pattern
Mining?
• Given a set of sequences, find the complete set of frequent subsequences

A sequence: < (ef) (ab) (df) c b >
An element may contain a set of items. Items within an element are unordered
and we list them alphabetically.

A sequence database
SID   sequence
10    <a(abc)(ac)d(cf)>
20    <(ad)c(bc)(ae)>
30    <(ef)(ab)(df)cb>
40    <eg(af)cbc>

<a(bc)dc> is a subsequence of <a(abc)(ac)d(cf)>
CS590D 138
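The containment test can be written directly; a sketch for sequences
represented as lists of item sets (greedy left-to-right matching suffices):

def is_subsequence(s, t):
    it = iter(t)
    return all(any(set(e) <= set(te) for te in it) for e in s)

T = [{"a"}, {"a","b","c"}, {"a","c"}, {"d"}, {"c","f"}]   # <a(abc)(ac)d(cf)>
S = [{"a"}, {"b","c"}, {"d"}, {"c"}]                      # <a(bc)dc>
print(is_subsequence(S, T))              # True
print(is_subsequence([{"a","d"}], T))    # False: no single element holds a and d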
Studies on Sequential Pattern
Mining
• Concept introduction and an initial Apriori-like algorithm
– R. Agrawal & R. Srikant. “Mining sequential patterns,” ICDE’95
• GSP—An Apriori-based, influential mining method (developed at
IBM Almaden)
– R. Srikant & R. Agrawal. “Mining sequential patterns: Generalizations
and performance improvements,” EDBT’96
• From sequential patterns to episodes (Apriori-like + constraints)
– H. Mannila, H. Toivonen & A.I. Verkamo. “Discovery of frequent
episodes in event sequences,” Data Mining and Knowledge Discovery,
1997
• Mining sequential patterns with constraints
– M.N. Garofalakis, R. Rastogi, K. Shim: SPIRIT: Sequential Pattern
Mining with Regular Expression Constraints. VLDB 1999
CS590D 139
A Basic Property of Sequential
Patterns: Apriori
• A basic property: Apriori (Agrawal & Srikant’94)
– If a sequence S is not frequent
– Then none of the super-sequences of S is frequent
– E.g., <hb> is infrequent, so are <hab> and <(ah)b>
CS590D 141
Finding Length-1 Sequential
Patterns
• Examine GSP using an example
• Initial candidates: all singleton sequences
  – <a>, <b>, <c>, <d>, <e>, <f>, <g>, <h>
• Scan database once, count support for candidates

min_sup = 2
Seq. ID   Sequence
10        <(bd)cb(ac)>
20        <(bf)(ce)b(fg)>
30        <(ah)(bf)abf>
40        <(be)(ce)d>
50        <a(bd)bcb(ade)>

Cand   Sup
<a>    3
<b>    5
<c>    4
<d>    3
<e>    3
<f>    2
<g>    1
<h>    1
142
Generating Length-2 Candidates
CS590D 145
The GSP Mining Process
5th scan: 1 cand. → 1 length-5 seq. pat.    <(bd)cba>   (cand. cannot pass sup. threshold)
4th scan: 8 cand. → 6 length-4 seq. pat.    <abba> <(bd)bc> …   (cand. not in DB at all)
3rd scan: 46 cand. → 19 length-3 seq. pat.  <abb> <aab> <aba> <baa> <bab> …   (20 cand. not in DB at all)
2nd scan: 51 cand. → 19 length-2 seq. pat.  <aa> <ab> … <af> <ba> <bb> … <ff> <(ab)> … <(ef)>   (10 cand. not in DB at all)
1st scan: 8 cand. → 6 length-1 seq. pat.    <a> <b> <c> <d> <e> <f> <g> <h>

min_sup = 2
Seq. ID   Sequence
10        <(bd)cb(ac)>
20        <(bf)(ce)b(fg)>
30        <(ah)(bf)abf>
40        <(be)(ce)d>
50        <a(bd)bcb(ade)>
146
Bottlenecks of GSP
• A huge set of candidates could be generated
  – 1,000 frequent length-1 sequences generate
    1000 × 1000 + (1000 × 999)/2 = 1,499,500 length-2 candidates!
CS590D 148
FreeSpan: Frequent Pattern-Projected
Sequential Pattern Mining
• A divide-and-conquer approach
– Recursively project a sequence database into a set of smaller
databases based on the current set of frequent patterns
– Mine each projected database to find its patterns
• J. Han, J. Pei, B. Mortazavi-Asl, Q. Chen, U. Dayal, M.-C. Hsu. FreeSpan:
Frequent pattern-projected sequential pattern mining. In KDD'00.
<a> <(abc)(ac)d(cf)>
<aa> <(_bc)(ac)d(cf)>
<ab> <(_c)(ac)d(cf)>
CS590D 151
Mining Sequential Patterns by
Prefix Projections
• Step 1: find length-1 sequential patterns
  – <a>, <b>, <c>, <d>, <e>, <f>
• Step 2: divide search space. The complete set of seq. pat. can be
  partitioned into 6 subsets:
  – The ones having prefix <a>;
  – The ones having prefix <b>;
  – …
  – The ones having prefix <f>

SID   sequence
10    <a(abc)(ac)d(cf)>
20    <(ad)c(bc)(ae)>
30    <(ef)(ab)(df)cb>
40    <eg(af)cbc>
CS590D 152
Finding Seq. Patterns with
Prefix <a>
• Only need to consider projections w.r.t. <a>
  – <a>-projected database: <(abc)(ac)d(cf)>, <(_d)c(bc)(ae)>, <(_b)(df)cb>,
    <(_f)cbc>
• Find all the length-2 seq. pat. having prefix <a>: <aa>, <ab>, <(ab)>,
  <ac>, <ad>, <af>
  – Further partition into 6 subsets
    • Having prefix <aa>;
    • …

SID   sequence
10    <a(abc)(ac)d(cf)>
20    <(ad)c(bc)(ae)>
CS590D 153
Completeness of PrefixSpan
SDB
SID   sequence
10    <a(abc)(ac)d(cf)>
20    <(ad)c(bc)(ae)>
30    <(ef)(ab)(df)cb>
40    <eg(af)cbc>

Length-1 sequential patterns: <a>, <b>, <c>, <d>, <e>, <f>

Having prefix <a> → <a>-projected database:
  <(abc)(ac)d(cf)>, <(_d)c(bc)(ae)>, <(_b)(df)cb>, <(_f)cbc>
  Length-2 sequential patterns: <aa>, <ab>, <(ab)>, <ac>, <ad>, <af>
  Having prefix <aa> → <aa>-proj. db; … having prefix <af> → <af>-proj. db
Having prefix <b> → <b>-projected database; …
Having prefix <c>, …, <f> → …
CS590D 154
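A simplified PrefixSpan-style sketch for sequences of single items (the
slides' sequences of itemsets need extra bookkeeping for (_x) elements,
omitted here); each recursive call mines one prefix's projected database:

def prefix_span(db, min_sup, prefix=()):
    """db: list of item tuples; yields (pattern, support) pairs."""
    counts = {}
    for seq in db:
        for item in set(seq):               # count each sequence once per item
            counts[item] = counts.get(item, 0) + 1
    for item, sup in counts.items():
        if sup < min_sup:
            continue
        pattern = prefix + (item,)
        yield pattern, sup
        # Projected DB: the suffix after the first occurrence of the item.
        proj = [seq[seq.index(item) + 1:] for seq in db if item in seq]
        yield from prefix_span([s for s in proj if s], min_sup, pattern)

db = [("a","b","c"), ("a","c","b"), ("a","b"), ("b","c")]
for pat, sup in sorted(prefix_span(db, 2)):
    print(pat, sup)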
Efficiency of PrefixSpan
CS590D 155
Optimization Techniques in
PrefixSpan
• Physical projection vs. pseudo-projection
– Pseudo-projection may reduce the effort of
projection when the projected database fits in
main memory
• Parallel projection vs. partition projection
– Partition projection may avoid the blowup of
disk space
CS590D 156
Speed-up by Pseudo-
projection
• Major cost of PrefixSpan: projection
  – Postfixes of sequences often appear repeatedly in recursive projected
    databases
• When (projected) database can be held in main memory, use pointers to form
  projections
  – Pointer to the sequence, plus the offset of the postfix:
    s = <a(abc)(ac)d(cf)>
    s|<a>:  (pointer to s, offset 2) → <(abc)(ac)d(cf)>
    s|<ab>: (pointer to s, offset 4) → <(_c)(ac)d(cf)>
CS590D 157
Pseudo-Projection vs. Physical
Projection
• Pseudo-projection avoids physically copying
postfixes
– Efficient in running time and space when database
can be held in main memory
• However, it is not efficient when database
cannot fit in main memory
– Disk-based random accessing is very costly
• Suggested Approach:
– Integration of physical and pseudo-projection
– Swapping to pseudo-projection when the data set fits
in memory
CS590D 158
PrefixSpan Is Faster than GSP
and FreeSpan
[Figure: runtime (seconds, 0–400) vs. support threshold (0–3%) for
PrefixSpan-1, PrefixSpan-2, FreeSpan, and GSP.]
CS590D 159
Effect of Pseudo-Projection
[Figure: runtime (seconds, 0–200) vs. support threshold (0.2–0.6%) for
PrefixSpan-1, PrefixSpan-2, PrefixSpan-1 (Pseudo), and PrefixSpan-2 (Pseudo).]
CS590D 161
Associative Classification
• Mine possible association rules (PRs) of the form condset ⇒ c
  – condset: a set of attribute-value pairs
  – c: class label
• Build Classifier
– Organize rules according to decreasing precedence
based on confidence and support
• B. Liu, W. Hsu & Y. Ma. Integrating classification and
association rule mining. In KDD’98
CS590D 162
Spatial and Multi-Media Association: A
Progressive Refinement Method
• Why progressive refinement?
– Mining operator can be expensive or cheap, fine or
rough
– Trade speed with quality: step-by-step refinement.
• Superset coverage property:
– Preserve all the positive answers—allow a false positive test but not a
false negative test.
• Two- or multi-step mining:
– First apply rough/cheap operator (superset coverage)
– Then apply expensive algorithm on a substantially
reduced candidate set (Koperski & Han, SSD’95).
CS590D 166
Progressive Refinement Mining
of Spatial Associations
• Hierarchy of spatial relationship:
– “g_close_to”: near_by, touch, intersect, contain, etc.
– First search for rough relationship and then refine it.
• Two-step mining of spatial association:
– Step 1: rough spatial computation (as a filter)
• Using MBR or R-tree for rough estimation.
– Step 2: detailed spatial algorithm (as refinement)
• Apply only to those objects which have passed the rough
spatial association test (no less than min_support)
167
Mining Multimedia Associations
CS590D 168
Further Evolution of PrefixSpan
CS590D 170
Methods for Mining Closed-
and Max- Sequential Patterns
• PrefixSpan or FreeSpan can be viewed as projection-
guided depth-first search
• For mining max- sequential patterns, any sequence
which does not contain anything beyond the already
discovered ones will be removed from the projected DB
– {<a1 a2 … a50>, <a1 a2 … a100>}, with min_sup = 1
– If we have found a max-sequential pattern <a1 a2 …
a100>, nothing will be projected in any projected DB
• Similar ideas can be applied for mining closed-
sequential-patterns
CS590D 171
Constraint-Based Sequential
Pattern Mining
• Constraint-based sequential pattern mining
– Constraints: User-specified, for focused mining of desired patterns
– How to explore efficient mining with constraints? — Optimization
• Classification of constraints
– Anti-monotone: E.g., value_sum(S) < 150, min(S) > 10
– Monotone: E.g., count(S) > 5, S ⊇ {PC, digital_camera}
– Succinct: E.g., length(S) ≤ 10, S ⊆ {Pentium, MS/Office, MS/Money}
– Convertible: E.g., value_avg(S) < 25, profit_sum(S) > 160,
  max(S)/avg(S) < 2, median(S) – min(S) > 5
– Inconvertible: E.g., avg(S) – median(S) = 0
CS590D 172
Sequential Pattern Growth for
Constraint-Based Mining
• Efficient mining with convertible constraints
– Not solvable by candidate generation-and-test methodology
– Easily push-able into the sequential pattern growth framework
• Example: push avg(S) < 25 in frequent pattern growth
– project items in value (price/profit depending on mining semantics)
ascending/descending order for sequential pattern growth
– Grow each pattern by sequential pattern growth
– If avg(current_pattern) ≥ 25, toss the current_pattern
• Why?—future growths always make it bigger
• But why not candidate generation?—no structure or ordering in growth
CS590D 173
From Sequential Patterns to
Structured Patterns
• Sets, sequences, trees and other structures
– Transaction DB: Sets of items
• {{i1, i2, …, im}, …}
– Seq. DB: Sequences of sets:
• {<{i1, i2}, …, {im, in, ik}>, …}
– Sets of Sequences:
• {{<i1, i2>, …, <im, in, ik>}, …}
– Sets of trees (each element being a tree):
• {t1, t2, …, tn}
• Applications: Mining structured patterns in XML documents
CS590D 174
Mining Association Rules in
Large Databases
• Association rule mining
• Algorithms for scalable mining of (single-dimensional Boolean)
association rules in transactional databases
• Mining various kinds of association/correlation rules
• Constraint-based association mining
• Sequential pattern mining
• Applications/extensions of frequent pattern mining
• Summary
CS590D 175
Frequent-Pattern Mining:
Achievements
• Frequent pattern mining—an important task in data mining
• Frequent pattern mining methodology
– Candidate generation & test vs. projection-based (frequent-pattern
growth)
– Vertical vs. horizontal format
– Various optimization methods: database partition, scan reduction, hash
tree, sampling, border computation, clustering, etc.
• Related frequent-pattern mining algorithm: scope extension
– Mining closed frequent itemsets and max-patterns (e.g., MaxMiner,
CLOSET, CHARM, etc.)
– Mining multi-level, multi-dimensional frequent patterns with flexible
support constraints
– Constraint pushing for mining optimization
– From frequent patterns to correlation and causality
CS590D 176
Frequent-Pattern Mining:
Applications
• Related problems which need frequent pattern mining
– Association-based classification
– Iceberg cube computation
– Database compression by fascicles and frequent
patterns
– Mining sequential patterns (GSP, PrefixSpan, SPADE,
etc.)
– Mining partial periodicity, cyclic associations, etc.
– Mining frequent structures, trends, etc.
• Typical application examples
– Market-basket analysis, Weblog analysis, DNA
mining, etc.
CS590D 177
Frequent-Pattern Mining:
Research Problems
• Multi-dimensional gradient analysis: patterns regarding
changes and differences
– Not just counts—other measures, e.g., avg(profit)
• Mining top-k frequent patterns without support constraint
• Mining fault-tolerant associations
– “3 out of 4 courses excellent” leads to A in data mining
• Fascicles and database compression by frequent pattern
mining
• Partial periodic patterns
• DNA sequence analysis and pattern classification
CS590D 178
References: Frequent-pattern
Mining Methods
• R. Agarwal, C. Aggarwal, and V. V. V. Prasad. A tree projection algorithm for
generation of frequent itemsets. Journal of Parallel and Distributed
Computing, 2000.
• R. Agrawal, T. Imielinski, and A. Swami. Mining association rules between
sets of items in large databases. SIGMOD'93, 207-216, Washington, D.C.
• R. Agrawal and R. Srikant. Fast algorithms for mining association rules.
VLDB'94 487-499, Santiago, Chile.
• J. Han, J. Pei, and Y. Yin: “Mining frequent patterns without candidate
generation”. In Proc. ACM-SIGMOD’2000, pp. 1-12, Dallas, TX, May 2000.
• H. Mannila, H. Toivonen, and A. I. Verkamo. Efficient algorithms for
discovering association rules. KDD'94, 181-192, Seattle, WA, July 1994.
CS590D 179
References: Frequent-pattern
Mining Methods
• A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm for
mining association rules in large databases. VLDB'95, 432-443, Zurich,
Switzerland.
• C. Silverstein, S. Brin, R. Motwani, and J. Ullman. Scalable techniques for
mining causal structures. VLDB'98, 594-605, New York, NY.
• R. Srikant and R. Agrawal. Mining generalized association rules. VLDB'95,
407-419, Zurich, Switzerland, Sept. 1995.
• R. Srikant and R. Agrawal. Mining quantitative association rules in large
relational tables. SIGMOD'96, 1-12, Montreal, Canada.
• H. Toivonen. Sampling large databases for association rules. VLDB'96,
134-145, Bombay, India, Sept. 1996.
• M.J. Zaki, S. Parthasarathy, M. Ogihara, and W. Li. New algorithms for fast
discovery of association rules. KDD’97. August 1997.
CS590D 180
References: Frequent-pattern
Mining (Performance
Improvements)
• S. Brin, R. Motwani, J. D. Ullman, and S. Tsur. Dynamic itemset counting
and implication rules for market basket analysis. SIGMOD'97, Tucson,
Arizona, May 1997.
• D.W. Cheung, J. Han, V. Ng, and C.Y. Wong. Maintenance of discovered
association rules in large databases: An incremental updating technique.
ICDE'96, New Orleans, LA.
• T. Fukuda, Y. Morimoto, S. Morishita, and T. Tokuyama. Data mining using
two-dimensional optimized association rules: Scheme, algorithms, and
visualization. SIGMOD'96, Montreal, Canada.
• E.-H. Han, G. Karypis, and V. Kumar. Scalable parallel data mining for
association rules. SIGMOD'97, Tucson, Arizona.
• J.S. Park, M.S. Chen, and P.S. Yu. An effective hash-based algorithm for
mining association rules. SIGMOD'95, San Jose, CA, May 1995.
CS590D 181
References: Frequent-pattern Mining
(Performance Improvements)
• G. Piatetsky-Shapiro. Discovery, analysis, and presentation of strong rules. In G.
Piatetsky-Shapiro and W. J. Frawley, Knowledge Discovery in Databases,. AAAI/MIT
Press, 1991.
• J.S. Park, M.S. Chen, and P.S. Yu. An effective hash-based algorithm for mining
association rules. SIGMOD'95, San Jose, CA.
• S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with
relational database systems: Alternatives and implications. SIGMOD'98, Seattle, WA.
• K. Yoda, T. Fukuda, Y. Morimoto, S. Morishita, and T. Tokuyama. Computing
optimized rectilinear regions for association rules. KDD'97, Newport Beach, CA, Aug.
1997.
• M. J. Zaki, S. Parthasarathy, M. Ogihara, and W. Li. Parallel algorithms for discovery of
association rules. Data Mining and Knowledge Discovery, 1:343-374, 1997.
CS590D 182
References: Frequent-pattern Mining (Multi-
level, correlation, ratio rules, etc.)
• S. Brin, R. Motwani, and C. Silverstein. Beyond market basket: Generalizing association rules to correlations.
SIGMOD'97, 265-276, Tucson, Arizona.
• J. Han and Y. Fu. Discovery of multiple-level association rules from large databases. VLDB'95, 420-431, Zurich,
Switzerland.
• M. Klemettinen, H. Mannila, P. Ronkainen, H. Toivonen, and A.I. Verkamo. Finding interesting rules from large
sets of discovered association rules. CIKM'94, 401-408, Gaithersburg, Maryland.
• F. Korn, A. Labrinidis, Y. Kotidis, and C. Faloutsos. Ratio rules: A new paradigm for fast, quantifiable data mining.
VLDB'98, 582-593, New York, NY
• B. Lent, A. Swami, and J. Widom. Clustering association rules. ICDE'97, 220-231, Birmingham, England.
• R. Meo, G. Psaila, and S. Ceri. A new SQL-like operator for mining association rules. VLDB'96, 122-133, Bombay,
India.
• R.J. Miller and Y. Yang. Association rules over interval data. SIGMOD'97, 452-461, Tucson, Arizona.
• A. Savasere, E. Omiecinski, and S. Navathe. Mining for strong negative associations in a large database of
customer transactions. ICDE'98, 494-502, Orlando, FL, Feb. 1998.
• D. Tsur, J. D. Ullman, S. Abitboul, C. Clifton, R. Motwani, and S. Nestorov. Query flocks: A generalization of
association-rule mining. SIGMOD'98, 1-12, Seattle, Washington.
• J. Pei, A.K.H. Tung, J. Han. Fault-Tolerant Frequent Pattern Mining: Problems and Challenges. SIGMOD
DMKD’01, Santa Barbara, CA.
CS590D 183
References: Mining Max-patterns
and Closed itemsets
• R. J. Bayardo. Efficiently mining long patterns from databases. SIGMOD'98,
85-93, Seattle, Washington.
• J. Pei, J. Han, and R. Mao, "CLOSET: An Efficient Algorithm for Mining
Frequent Closed Itemsets", Proc. 2000 ACM-SIGMOD Int. Workshop on
Data Mining and Knowledge Discovery (DMKD'00), Dallas, TX, May 2000.
• N. Pasquier, Y. Bastide, R. Taouil, and L. Lakhal. Discovering frequent
closed itemsets for association rules. ICDT'99, 398-416, Jerusalem, Israel,
Jan. 1999.
• M. Zaki. Generating Non-Redundant Association Rules. KDD'00. Boston,
MA. Aug. 2000
• M. Zaki. CHARM: An Efficient Algorithm for Closed Association Rule Mining,
SIAM’02
CS590D 184
References: Constraint-base
Frequent-pattern Mining
• G. Grahne, L. Lakshmanan, and X. Wang. Efficient mining of constrained correlated sets. ICDE'00, 512-521, San
Diego, CA, Feb. 2000.
• Y. Fu and J. Han. Meta-rule-guided mining of association rules in relational databases. KDOOD'95, 39-46,
Singapore, Dec. 1995.
• J. Han, L. V. S. Lakshmanan, and R. T. Ng, "Constraint-Based, Multidimensional Data Mining", COMPUTER
(special issues on Data Mining), 32(8): 46-50, 1999.
• L. V. S. Lakshmanan, R. Ng, J. Han and A. Pang, "Optimization of Constrained Frequent Set Queries with 2-
Variable Constraints", SIGMOD’99
• R. Ng, L.V.S. Lakshmanan, J. Han & A. Pang. “Exploratory mining and pruning optimizations of constrained
association rules.” SIGMOD’98
• J. Pei, J. Han, and L. V. S. Lakshmanan, "Mining Frequent Itemsets with Convertible Constraints", Proc. 2001 Int.
Conf. on Data Engineering (ICDE'01), April 2001.
• J. Pei and J. Han "Can We Push More Constraints into Frequent Pattern Mining?", Proc. 2000 Int. Conf. on
Knowledge Discovery and Data Mining (KDD'00), Boston, MA, August 2000.
• R. Srikant, Q. Vu, and R. Agrawal. Mining association rules with item constraints. KDD'97, 67-73, Newport Beach,
California
CS590D 185
References: Sequential Pattern
Mining Methods
• R. Agrawal and R. Srikant. Mining sequential patterns. ICDE'95, 3-
14, Taipei, Taiwan.
• R. Srikant and R. Agrawal. Mining sequential patterns:
Generalizations and performance improvements. EDBT’96.
• J. Han, J. Pei, B. Mortazavi-Asl, Q. Chen, U. Dayal, M.-C. Hsu,
"FreeSpan: Frequent Pattern-Projected Sequential Pattern Mining",
Proc. 2000 Int. Conf. on Knowledge Discovery and Data Mining
(KDD'00), Boston, MA, August 2000.
• H. Mannila, H Toivonen, and A. I. Verkamo. Discovery of frequent
episodes in event sequences. Data Mining and Knowledge
Discovery, 1:259-289, 1997.
CS590D 186
References: Sequential Pattern
Mining Methods
• J. Pei, J. Han, H. Pinto, Q. Chen, U. Dayal, and M.-C. Hsu, "PrefixSpan:
Mining Sequential Patterns Efficiently by Prefix-Projected Pattern Growth",
Proc. 2001 Int. Conf. on Data Engineering (ICDE'01), Heidelberg, Germany,
April 2001.
• B. Ozden, S. Ramaswamy, and A. Silberschatz. Cyclic association rules.
ICDE'98, 412-421, Orlando, FL.
• S. Ramaswamy, S. Mahajan, and A. Silberschatz. On the discovery of
interesting patterns in association rules. VLDB'98, 368-379, New York, NY.
• M.J. Zaki. Efficient enumeration of frequent sequences. CIKM’98.
Novermber 1998.
• M.N. Garofalakis, R. Rastogi, K. Shim: SPIRIT: Sequential Pattern Mining
with Regular Expression Constraints. VLDB 1999: 223-234, Edinburgh,
Scotland.
CS590D 187
References: Frequent-pattern Mining
in Spatial, Multimedia, Text & Web
Databases
• K. Koperski, J. Han, and G. B. Marchisio, "Mining Spatial and Image Data through Progressive Refinement
Methods", Revue internationale de gomatique (European Journal of GIS and Spatial Analysis), 9(4):425-440,
1999.
• A. K. H. Tung, H. Lu, J. Han, and L. Feng, "Breaking the Barrier of Transactions: Mining Inter-Transaction
Association Rules", Proc. 1999 Int. Conf. on Knowledge Discovery and Data Mining (KDD'99), San Diego, CA,
Aug. 1999, pp. 297-301.
• J. Han, G. Dong and Y. Yin, "Efficient Mining of Partial Periodic Patterns in Time Series Database", Proc. 1999 Int.
Conf. on Data Engineering (ICDE'99), Sydney, Australia, March 1999, pp. 106-115
• H. Lu, L. Feng, and J. Han, "Beyond Intra-Transaction Association Analysis:Mining Multi-Dimensional Inter-
Transaction Association Rules", ACM Transactions on Information Systems (TOIS’00), 18(4): 423-454, 2000.
• O. R. Zaiane, M. Xin, J. Han, "Discovering Web Access Patterns and Trends by Applying OLAP and Data Mining
Technology on Web Logs," Proc. Advances in Digital Libraries Conf. (ADL'98), Santa Barbara, CA, April 1998, pp.
19-29
• O. R. Zaiane, J. Han, and H. Zhu, "Mining Recurrent Items in Multimedia with Progressive Resolution
Refinement", ICDE'00, San Diego, CA, Feb. 2000, pp. 461-470
CS590D 188
References: Frequent-pattern Mining
for Classification and Data Cube
Computation
• K. Beyer and R. Ramakrishnan. Bottom-up computation of sparse and iceberg cubes.
SIGMOD'99, 359-370, Philadelphia, PA, June 1999.
• M. Fang, N. Shivakumar, H. Garcia-Molina, R. Motwani, and J. D. Ullman. Computing
iceberg queries efficiently. VLDB'98, 299-310, New York, NY, Aug. 1998.
• J. Han, J. Pei, G. Dong, and K. Wang, “Computing Iceberg Data Cubes with Complex
Measures”, Proc. ACM-SIGMOD’2001, Santa Barbara, CA, May 2001.
• M. Kamber, J. Han, and J. Y. Chiang. Metarule-guided mining of multi-dimensional
association rules using data cubes. KDD'97, 207-210, Newport Beach, California.
• K. Beyer and R. Ramakrishnan. Bottom-up computation of sparse and iceberg cubes.
SIGMOD’99
• T. Imielinski, L. Khachiyan, and A. Abdulghani. Cubegrades: Generalizing association
rules. Technical Report, Aug. 2000
CS590D 189