
CS590D: Data Mining

Prof. Chris Clifton

January 24, 2006


Association Rules
Mining Association Rules in
Large Databases
• Association rule mining
• Algorithms for scalable mining of (single-
dimensional Boolean) association rules in
transactional databases
• Mining various kinds of association/correlation
rules
• Constraint-based association mining
• Sequential pattern mining
• Applications/extensions of frequent pattern
mining
• Summary
CS590D 3
What Is Association Mining?
• Association rule mining:
– Finding frequent patterns, associations, correlations, or causal
structures among sets of items or objects in transaction
databases, relational databases, and other information
repositories.
– Frequent pattern: pattern (set of items, sequence, etc.) that
occurs frequently in a database [AIS93]
• Motivation: finding regularities in data
– What products were often purchased together? — Beer and
diapers?!
– What are the subsequent purchases after buying a PC?
– What kinds of DNA are sensitive to this new drug?
– Can we automatically classify web documents?

CS590D 4
Why Is Association Mining
Important?
• Foundation for many essential data mining tasks
– Association, correlation, causality
– Sequential patterns, temporal or cyclic association,
partial periodicity, spatial and multimedia association
– Associative classification, cluster analysis, iceberg
cube, fascicles (semantic data compression)
• Broad applications
– Basket data analysis, cross-marketing, catalog
design, sale campaign analysis
– Web log (click stream) analysis, DNA sequence
analysis, etc.

CS590D 5
Basic Concepts:
Association Rules
Transaction-id   Items bought
10               A, B, C
20               A, C
30               A, D
40               B, E, F

• Itemset X = {x1, …, xk}
• Find all the rules X ⇒ Y with minimum confidence and support
– support, s: probability that a transaction contains X ∪ Y
– confidence, c: conditional probability that a transaction having X also contains Y

[Figure: customer buys beer / buys diaper / customer buys both]

Let min_support = 50%, min_conf = 50%:
A ⇒ C (50%, 66.7%)
C ⇒ A (50%, 100%)
6
Mining Association Rules:
Example
Min. support 50%, min. confidence 50%

Transaction-id   Items bought
10               A, B, C
20               A, C
30               A, D
40               B, E, F

Frequent pattern   Support
{A}                75%
{B}                50%
{C}                50%
{A, C}             50%

For rule A ⇒ C:
support = support({A} ∪ {C}) = 50%
confidence = support({A} ∪ {C}) / support({A}) = 66.7%
CS590D 7
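To make the definitions concrete, here is a minimal Python sketch (our own illustration, not part of the original slides) that computes support and confidence for the toy database above.

# Support and confidence on the four-transaction example above.
transactions = {
    10: {"A", "B", "C"},
    20: {"A", "C"},
    30: {"A", "D"},
    40: {"B", "E", "F"},
}

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    hits = sum(1 for items in transactions.values() if itemset <= items)
    return hits / len(transactions)

def confidence(antecedent, consequent):
    """Conditional probability that a transaction with the antecedent also has the consequent."""
    return support(antecedent | consequent) / support(antecedent)

print(support({"A", "C"}))        # 0.5   -> support of A => C is 50%
print(confidence({"A"}, {"C"}))   # 0.666 -> confidence of A => C is ~66.7%
print(confidence({"C"}, {"A"}))   # 1.0   -> confidence of C => A is 100%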
Mining Association Rules:
What We Need to Know
• Goal: Rules with high support/confidence
• How to compute?
– Support: Find sets of items that occur
frequently
– Confidence: Find frequency of subsets of
supported itemsets
• If we have all frequently occurring sets of
items (frequent itemsets), we can compute
support and confidence!
CS590D 8
Mining Association Rules in
Large Databases
• Association rule mining
• Algorithms for scalable mining of (single-
dimensional Boolean) association rules in
transactional databases
• Mining various kinds of association/correlation
rules
• Constraint-based association mining
• Sequential pattern mining
• Applications/extensions of frequent pattern
mining
• Summary
CS590D 11
Apriori: A Candidate Generation-
and-Test Approach
• Any subset of a frequent itemset must be frequent
– if {beer, diaper, nuts} is frequent, so is {beer, diaper}
– Every transaction having {beer, diaper, nuts} also contains {beer,
diaper}
• Apriori pruning principle: If there is any itemset which is
infrequent, its superset should not be generated/tested!
• Method:
– generate length (k+1) candidate itemsets from length k frequent
itemsets, and
– test the candidates against DB
• Performance studies show its efficiency and scalability
• Agrawal & Srikant 1994, Mannila, et al. 1994

CS590D 12
The Apriori Algorithm—An Example
Database TDB:
Tid   Items
10    A, C, D
20    B, C, E
30    A, B, C, E
40    B, E

1st scan → C1:                L1:
Itemset   sup                 Itemset   sup
{A}       2                   {A}       2
{B}       3                   {B}       3
{C}       3                   {C}       3
{D}       1                   {E}       3
{E}       3

C2: {A,B} {A,C} {A,E} {B,C} {B,E} {C,E}

2nd scan → C2:                L2:
Itemset   sup                 Itemset   sup
{A, B}    1                   {A, C}    2
{A, C}    2                   {B, C}    2
{A, E}    1                   {B, E}    3
{B, C}    2                   {C, E}    2
{B, E}    3
{C, E}    2

C3: {B, C, E};  3rd scan → L3: {B, C, E} with sup 2

Rules with frequency ≥ 50% and confidence 100%:
A ⇒ C,  B ⇒ E,  BC ⇒ E,  CE ⇒ B,  BE ⇒ C
13
The Apriori Algorithm
• Pseudo-code:
Ck: Candidate itemset of size k
Lk : frequent itemset of size k

L1 = {frequent items};
for (k = 1; Lk != ∅; k++) do begin
Ck+1 = candidates generated from Lk;
for each transaction t in database do
increment the count of all candidates in Ck+1
that are contained in t
Lk+1 = candidates in Ck+1 with min_support
end
return ∪k Lk;

CS590D 14
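The pseudo-code above translates almost line for line into Python. Below is a hedged sketch of the level-wise loop on the example database from two slides back; the candidate generation used here enumerates (k+1)-itemsets over the frequent items and keeps those whose k-subsets are all frequent, which gives the same result as the self-join-and-prune rule detailed a few slides later.

# Level-wise Apriori loop (sketch). Absolute support counts, itemsets as frozensets.
from itertools import combinations

db = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"}, {"B", "E"}]
min_sup = 2

def count_support(candidates):
    return {c: sum(1 for t in db if c <= t) for c in candidates}

def gen_candidates(Lk, size):
    """Candidates of length size+1 whose size-subsets are all frequent."""
    items = sorted({i for s in Lk for i in s})
    return {frozenset(c) for c in combinations(items, size + 1)
            if all(frozenset(s) in Lk for s in combinations(c, size))}

singletons = {frozenset([i]) for t in db for i in t}
L = [{c for c, n in count_support(singletons).items() if n >= min_sup}]   # L1
k = 0
while L[k]:
    Ck1 = gen_candidates(L[k], k + 1)                 # L[k] holds (k+1)-itemsets
    sup = count_support(Ck1)                          # one pass over the database
    L.append({c for c, n in sup.items() if n >= min_sup})
    k += 1

print([sorted(map(sorted, level)) for level in L if level])
# [[['A'],['B'],['C'],['E']], [['A','C'],['B','C'],['B','E'],['C','E']], [['B','C','E']]]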
Important Details of Apriori
• How to generate candidates?
– Step 1: self-joining Lk
– Step 2: pruning
• How to count supports of candidates?
• Example of Candidate-generation
– L3={abc, abd, acd, ace, bcd}
– Self-joining: L3*L3
• abcd from abc and abd
• acde from acd and ace
– Pruning:
• acde is removed because ade is not in L3
– C4={abcd}

CS590D 15
How to Generate Candidates?
• Suppose the items in Lk-1 are listed in an order
• Step 1: self-joining Lk-1
insert into Ck
select p.item1, p.item2, …, p.itemk-1, q.itemk-1
from Lk-1 p, Lk-1 q
where p.item1=q.item1, …, p.itemk-2=q.itemk-2, p.itemk-1 < q.itemk-1

• Step 2: pruning
for all itemsets c in Ck do
for all (k-1)-subsets s of c do
if (s is not in Lk-1) then delete c from Ck

CS590D 20
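A small sketch of the two steps on the L3 example from the previous slide (function names are ours, not from the paper); itemsets are kept as item tuples in lexicographic order so the join condition mirrors the SQL above.

# Self-join then prune, mirroring the SQL-style candidate generation.
from itertools import combinations

L3 = [("a", "b", "c"), ("a", "b", "d"), ("a", "c", "d"), ("a", "c", "e"), ("b", "c", "d")]

def self_join(Lk):
    """Join p, q that agree on the first k-1 items, with p's last item < q's last item."""
    return [p + (q[-1],) for p in Lk for q in Lk
            if p[:-1] == q[:-1] and p[-1] < q[-1]]

def prune(candidates, Lk):
    """Drop any candidate with a (k-1)-subset that is not frequent."""
    frequent = set(Lk)
    return [c for c in candidates
            if all(s in frequent for s in combinations(c, len(c) - 1))]

joined = self_join(L3)     # [('a','b','c','d'), ('a','c','d','e')]
C4 = prune(joined, L3)     # [('a','b','c','d')] -- acde dropped because ade is not in L3
print(joined, C4)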
How to Count Supports of
Candidates?
• Why counting supports of candidates a problem?
– The total number of candidates can be very huge
– One transaction may contain many candidates
• Method:
– Candidate itemsets are stored in a hash-tree
– Leaf node of hash-tree contains a list of itemsets and counts
– Interior node contains a hash table
– Subset function: finds all the candidates contained in a
transaction

CS590D 21
Example: Counting Supports of
Candidates
Subset function: h(item) hashes items into the branches 1,4,7 / 2,5,8 / 3,6,9
Transaction: 1 2 3 5 6

[Figure: candidate hash tree with leaf itemsets 234, 567, 145, 345, 356, 367, 136, 368, 357, 689, 124, 457, 125, 159, 458; the transaction is matched against the tree by recursively splitting it as 1+2356, 12+356, 13+56, …]

CS590D 22
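The hash tree itself is not reproduced here; the sketch below (our own simplification) shows the effect of the subset function with a plain dictionary: for each transaction only its own k-subsets are generated and looked up, so a transaction never touches candidates it cannot contain.

# Dictionary-based candidate counting, a simplified stand-in for the hash tree.
from itertools import combinations

def count_candidates(transactions, candidates, k):
    counts = {c: 0 for c in candidates}                 # candidate k-itemsets
    for t in transactions:
        for subset in combinations(sorted(t), k):       # all k-subsets of the transaction
            key = frozenset(subset)
            if key in counts:
                counts[key] += 1
    return counts

transactions = [{1, 2, 3, 5, 6}, {2, 3, 5}, {1, 4, 7}]
candidates = [frozenset(c) for c in [(1, 2), (2, 3), (3, 5), (5, 6), (1, 4)]]
print(count_candidates(transactions, candidates, 2))
# {1,2}:1  {2,3}:2  {3,5}:2  {5,6}:1  {1,4}:1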
Efficient Implementation of Apriori
in SQL
• Hard to get good performance out of pure SQL (SQL-92)
based approaches alone
• Make use of object-relational extensions like UDFs,
BLOBs, Table functions etc.
– Get orders of magnitude improvement
• S. Sarawagi, S. Thomas, and R. Agrawal. Integrating
association rule mining with relational database systems:
Alternatives and implications. In SIGMOD’98

CS590D 23
Challenges of Frequent Pattern
Mining
• Challenges
– Multiple scans of transaction database
– Huge number of candidates
– Tedious workload of support counting for candidates
• Improving Apriori: general ideas
– Reduce passes of transaction database scans
– Shrink number of candidates
– Facilitate support counting of candidates

CS590D 24
DIC: Reduce Number of Scans
• Once both A and D are determined frequent, the counting of AD begins
• Once all length-2 subsets of BCD are determined frequent, the counting of BCD begins

[Figure: itemset lattice {} — A, B, C, D — AB, AC, AD, BC, BD, CD — ABC, ABD, ACD, BCD — ABCD; a timeline over the transactions shows Apriori counting 1-itemsets and then 2-itemsets in separate passes, while DIC starts counting 2-itemsets and 3-itemsets partway through a single scan]

S. Brin, R. Motwani, J. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket data. In SIGMOD’97
25
Partition: Scan Database Only
Twice
• Any itemset that is potentially frequent in DB
must be frequent in at least one of the partitions
of DB
– Scan 1: partition database and find local frequent
patterns
– Scan 2: consolidate global frequent patterns
• A. Savasere, E. Omiecinski, and S. Navathe. An
efficient algorithm for mining association in large
databases. In VLDB’95

CS590D 26
Sampling for Frequent Patterns
• Select a sample of original database, mine
frequent patterns within sample using Apriori
• Scan database once to verify frequent itemsets
found in sample, only borders of closure of
frequent patterns are checked
– Example: check abcd instead of ab, ac, …, etc.
• Scan database again to find missed frequent
patterns
• H. Toivonen. Sampling large databases for
association rules. In VLDB’96

CS590D 28
DHP: Reduce the Number of
Candidates
• A k-itemset whose corresponding hashing
bucket count is below the threshold cannot be
frequent
– Candidates: a, b, c, d, e
– Hash entries: {ab, ad, ae} {bd, be, de} …
– Frequent 1-itemset: a, b, d, e
– ab is not a candidate 2-itemset if the sum of count of
{ab, ad, ae} is below support threshold
• J. Park, M. Chen, and P. Yu. An effective hash-
based algorithm for mining association rules. In
SIGMOD’95
CS590D 29
Eclat/MaxEclat and VIPER:
Exploring Vertical Data Format
• Use tid-list, the list of transaction-ids containing an itemset
• Compression of tid-lists
– Itemset A: t1, t2, t3, sup(A)=3
– Itemset B: t2, t3, t4, sup(B)=3
– Itemset AB: t2, t3, sup(AB)=2
• Major operation: intersection of tid-lists
• M. Zaki et al. New algorithms for fast discovery of association rules.
In KDD’97
• P. Shenoy et al. Turbo-charging vertical mining of large databases.
In SIGMOD’00

CS590D 30
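A minimal sketch of the vertical representation (our own illustration): tid-lists as Python sets, with support obtained by intersection. The actual data structures and compression used in Eclat/VIPER differ.

# Vertical data format: each item maps to the set of transaction ids containing it.
tid = {
    "A": {1, 2, 3},
    "B": {2, 3, 4},
}

def tidlist(itemset):
    """Intersect the tid-lists of the items; support is the size of the result."""
    result = None
    for item in itemset:
        result = tid[item] if result is None else result & tid[item]
    return result

ab = tidlist(["A", "B"])
print(ab, len(ab))   # {2, 3} 2  -> sup(AB) = 2, as in the slide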
Bottleneck of Frequent-pattern
Mining
• Multiple database scans are costly
• Mining long patterns needs many passes of
scanning and generates lots of candidates
– To find frequent itemset i1 i2 … i100
• # of scans: 100
• # of candidates: C(100,1) + C(100,2) + … + C(100,100) = 2^100 − 1 ≈ 1.27×10^30 !
• Bottleneck: candidate-generation-and-test
• Can we avoid candidate generation?

CS590D 31
CS590D: Data Mining
Prof. Chris Clifton

January 26, 2006


Association Rules
Mining Frequent Patterns
Without Candidate Generation
• Grow long patterns from short ones using
local frequent items
– “abc” is a frequent pattern

– Get all transactions having “abc”: DB|abc

– “d” is a local frequent item in DB|abc → abcd is a frequent pattern
CS590D 33
Construct FP-tree from a
Transaction Database
TID   Items bought                   (Ordered) frequent items
100   {f, a, c, d, g, i, m, p}       {f, c, a, m, p}
200   {a, b, c, f, l, m, o}          {f, c, a, b, m}
300   {b, f, h, j, o, w}             {f, b}
400   {b, c, k, s, p}                {c, b, p}
500   {a, f, c, e, l, p, m, n}       {f, c, a, m, p}

min_support = 3

1. Scan DB once, find frequent 1-itemsets (single item patterns)
2. Sort frequent items in frequency descending order → f-list
3. Scan DB again, construct FP-tree

F-list = f-c-a-b-m-p
Header table: f:4, c:4, a:3, b:3, m:3, p:3

[Figure: FP-tree rooted at {} — branch f:4 → c:3 → a:3 with children m:2 → p:2 and b:1 → m:1; branch f:4 → b:1; branch c:1 → b:1 → p:1; header-table node-links connect occurrences of each item]
34
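A compact sketch of steps 1–3 on this example database (our own illustration); node fields are simplified and the node-links of the full structure are omitted.

# FP-tree construction sketch: count items, order each transaction by the f-list,
# then insert it as a path sharing prefixes with earlier transactions.
from collections import Counter

db = [list("facdgimp"), list("abcflmo"), list("bfhjow"), list("bcksp"), list("afcelpmn")]
min_support = 3

counts = Counter(i for t in db for i in t)                 # step 1
# Step 2: frequency-descending f-list (f:4, c:4, a:3, b:3, m:3, p:3); ties broken as on the slide.
flist = list("fcabmp")

class Node:
    def __init__(self, item):
        self.item, self.count, self.children = item, 0, {}

root = Node(None)
for t in db:                                               # step 3
    ordered = [i for i in flist if i in t]                 # frequent items only, f-list order
    node = root
    for item in ordered:
        node = node.children.setdefault(item, Node(item))
        node.count += 1

def show(node, depth=0):
    for child in node.children.values():
        print("  " * depth + f"{child.item}:{child.count}")
        show(child, depth + 1)

show(root)   # f:4, c:3, a:3, m:2, p:2 ... plus the f:4->b:1 and c:1->b:1->p:1 branches, as in the figure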
Benefits of the FP-tree
Structure
• Completeness
– Preserve complete information for frequent pattern
mining
– Never break a long pattern of any transaction
• Compactness
– Reduce irrelevant info—infrequent items are gone
– Items in frequency descending order: the more
frequently occurring, the more likely to be shared
– Never larger than the original database (not counting
node-links and the count field)
– For Connect-4 DB, compression ratio could be over
100

CS590D 35
Partition Patterns and
Databases
• Frequent patterns can be partitioned into
subsets according to f-list
– F-list=f-c-a-b-m-p
– Patterns containing p
– Patterns having m but no p
–…
– Patterns having c but no a nor b, m, p
– Pattern f
• Completeness and non-redundancy
CS590D 36
Find Patterns Having P From P-
conditional Database
• Starting at the frequent item header table in the FP-tree
• Traverse the FP-tree by following the link of each frequent item p
• Accumulate all of the transformed prefix paths of item p to form p’s conditional pattern base

Conditional pattern bases:
item   cond. pattern base
c      f:3
a      fc:3
b      fca:1, f:2, c:2
m      fca:2, fcab:1
p      fcam:2, cb:1

[Figure: the FP-tree and header table from the previous slide]
CS590D 37
From Conditional Pattern-bases to
Conditional FP-trees
• For each pattern-base
– Accumulate the count for each item in the base
– Construct the FP-tree for the frequent items of the pattern base

m-conditional pattern base: fca:2, fcab:1
m-conditional FP-tree: {} → f:3 → c:3 → a:3

All frequent patterns relating to m:
m, fm, cm, am, fcm, fam, cam, fcam
CS590D 38
Recursion: Mining Each
Conditional FP-tree
m-conditional FP-tree: {} → f:3 → c:3 → a:3

Cond. pattern base of “am”: (fc:3) → am-conditional FP-tree: {} → f:3 → c:3
Cond. pattern base of “cm”: (f:3)  → cm-conditional FP-tree: {} → f:3
Cond. pattern base of “cam”: (f:3) → cam-conditional FP-tree: {} → f:3
CS590D 39
A Special Case: Single Prefix Path
in FP-tree
• Suppose a (conditional) FP-tree T has a shared single prefix-path P
• Mining can be decomposed into two parts
– Reduction of the single prefix path into one node
– Concatenation of the mining results of the two parts

[Figure: a tree whose top is a single path a1:n1 → a2:n2 → a3:n3 followed by a branching part (b1:m1, C1:k1, C2:k2, C3:k3); it is split into the single prefix path plus a smaller tree r1 rooted where the branching begins]
40
Mining Frequent Patterns With
FP-trees
• Idea: Frequent pattern growth
– Recursively grow frequent patterns by pattern and
database partition
• Method
– For each frequent item, construct its conditional
pattern-base, and then its conditional FP-tree
– Repeat the process on each newly created conditional
FP-tree
– Until the resulting FP-tree is empty, or it contains only
one path—single path will generate all the
combinations of its sub-paths, each of which is a
frequent pattern
CS590D 41
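A hedged sketch of the pattern-growth recursion (our own simplification): conditional pattern bases are kept as (prefix, count) lists rather than compressed FP-trees, but the recursion structure matches the method described above and reproduces the frequent itemsets of the running example.

# Pattern-growth sketch: for each frequent item, project the ordered transactions on it
# (prefix before the item = its conditional pattern base) and recurse.
from collections import Counter

def fp_growth(patterns, transactions, suffix, min_support):
    counts = Counter()
    for items, cnt in transactions:
        for i in set(items):
            counts[i] += cnt
    for item, cnt in counts.items():
        if cnt < min_support:
            continue
        pattern = suffix | {item}
        patterns[frozenset(pattern)] = cnt
        conditional = [(items[:items.index(item)], c)        # conditional pattern base of `item`
                       for items, c in transactions if item in items]
        fp_growth(patterns, conditional, pattern, min_support)

db = [list("fcamp"), list("fcabm"), list("fb"), list("cbp"), list("fcamp")]  # ordered by f-list
patterns = {}
fp_growth(patterns, [(t, 1) for t in db], set(), 3)
print(patterns[frozenset("fcam")])   # 3 -> fcam is frequent, as derived on the earlier slides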
Scaling FP-growth by DB
Projection
• FP-tree cannot fit in memory?—DB
projection
• First partition a database into a set of
projected DBs
• Then construct and mine FP-tree for each
projected DB
• Parallel projection vs. Partition projection
techniques
– Parallel projection is space costly
CS590D 42
Partition-based Projection
• Parallel projection needs a lot of disk space
• Partition projection saves it

[Figure: the transaction DB {fcamp, fcabm, fb, cbp, fcamp} is projected into p-proj DB (fcam, cb, fcam), m-proj DB (fcab, fca, fca), b-proj DB (f, cb, …), a-proj DB (fc, …), c-proj DB (f, …), f-proj DB (…); the m-projected DB is further projected into am-proj DB (fc, fc, fc), cm-proj DB (f, f, f), …]
43
FP-Growth vs. Apriori: Scalability
With the Support Threshold

[Chart: data set T25I20D10K — runtime (sec., 0–100) vs. support threshold (%, 0–3) for D1 FP-growth and D1 Apriori]
CS590D 44
FP-Growth vs. Tree-Projection:
Scalability with the Support Threshold

[Chart: data set T25I20D100K — runtime (sec., 0–140) vs. support threshold (%, 0–2) for D2 FP-growth and D2 TreeProjection]
CS590D 45
Why Is FP-Growth the Winner?
• Divide-and-conquer:
– decompose both the mining task and DB according to the
frequent patterns obtained so far
– leads to focused search of smaller databases
• Other factors
– no candidate generation, no candidate test
– compressed database: FP-tree structure
– no repeated scan of entire database
– basic ops—counting local freq items and building sub FP-tree,
no pattern search and matching

CS590D 46
Implications of the
Methodology
• Mining closed frequent itemsets and max-patterns
– CLOSET (DMKD’00)
• Mining sequential patterns
– FreeSpan (KDD’00), PrefixSpan (ICDE’01)
• Constraint-based mining of frequent patterns
– Convertible constraints (KDD’00, ICDE’01)
• Computing iceberg data cubes with complex measures
– H-tree and H-cubing algorithm (SIGMOD’01)

CS590D 47
Max-patterns
• Frequent pattern {a1, …, a100} ⇒ C(100,1) + C(100,2) + … + C(100,100) = 2^100 − 1 ≈ 1.27×10^30 frequent sub-patterns!
• Max-pattern: frequent pattern without any proper frequent super-pattern
– BCDE, ACD are max-patterns
– BCD is not a max-pattern

Min_sup = 2
Tid   Items
10    A, B, C, D, E
20    B, C, D, E
30    A, C, D, F
CS590D 48
MaxMiner: Mining Max-
patterns
• 1st scan: find frequent items
– A, B, C, D, E
• 2nd scan: find support for
– AB, AC, AD, AE, ABCDE
– BC, BD, BE, BCDE       (potential max-patterns)
– CD, CE, CDE, DE
• Since BCDE is a max-pattern, no need to check BCD, BDE, CDE in a later scan

Tid   Items
10    A, B, C, D, E
20    B, C, D, E
30    A, C, D, F
• R. Bayardo. Efficiently mining long patterns from
databases. In SIGMOD’98

CS590D 49
Frequent Closed Patterns
• Conf(acd)=100%  record acd only
• For frequent itemset X, if there exists no
item y s.t. every transaction containing X
also contains y, then X is a frequent closed
pattern Min_sup=2
– “acd” is a frequent closed pattern TID Items

• Concise rep. of freq pats 10 a, c, d, e, f


20 a, b, e
• Reduce # of patterns and rules 30 c, e, f

• N. Pasquier et al. In ICDT’99 40 a, c, d, f


50 c, e, f
CS590D 50
Mining Frequent Closed Patterns:
CLOSET
• Flist: list of all frequent items in support ascending order
– Flist: d-a-f-e-c
• Divide search space
– Patterns having d
– Patterns having d but no a, etc.
• Find frequent closed patterns recursively
– Every transaction having d also has cfa ⇒ cfad is a frequent closed pattern
• J. Pei, J. Han & R. Mao. CLOSET: An Efficient Algorithm for Mining Frequent Closed Itemsets. DMKD’00.

Min_sup = 2
TID   Items
10    a, c, d, e, f
20    a, b, e
30    c, e, f
40    a, c, d, f
50    c, e, f
CS590D 51
Mining Frequent Closed
Patterns: CHARM
• Use vertical data format: t(AB) = {T1, T12, …}
• Derive closed patterns based on vertical intersections
– t(X) = t(Y): X and Y always happen together
– t(X) ⊂ t(Y): a transaction having X always has Y
• Use diffsets to accelerate mining
– Only keep track of differences of tids
– t(X) = {T1, T2, T3}, t(Xy) = {T1, T3}
– Diffset(Xy, X) = {T2}
• M. Zaki. CHARM: An Efficient Algorithm for Closed Association Rule Mining,
CS-TR99-10, Rensselaer Polytechnic Institute
• M. Zaki, Fast Vertical Mining Using Diffsets, TR01-1, Department of
Computer Science, Rensselaer Polytechnic Institute

CS590D 52
Visualization of Association Rules:
Pane Graph

CS590D 53
Visualization of Association Rules: Rule Graph

CS590D 54
Mining Association Rules in
Large Databases
• Association rule mining
• Algorithms for scalable mining of (single-dimensional Boolean)
association rules in transactional databases
• Mining various kinds of association/correlation rules
• Constraint-based association mining
• Sequential pattern mining
• Applications/extensions of frequent pattern mining
• Summary

CS590D 55
Mining Various Kinds of Rules or
Regularities

• Multi-level, quantitative association rules,

correlation and causality, ratio rules, sequential

patterns, emerging patterns, temporal

associations, partial periodicity

• Classification, clustering, iceberg cubes, etc.

CS590D 56
Multiple-level Association
Rules
• Items often form hierarchy
• Flexible support settings: Items at the lower level
are expected to have lower support.
• Transaction database can be encoded based on
dimensions and levels
• explore shared multi-level mining
Level 1: Milk [support = 10%]
Level 2: 2% Milk [support = 6%], Skim Milk [support = 4%]

Uniform support: min_sup = 5% at both levels
Reduced support: min_sup = 5% at level 1, min_sup = 3% at level 2
57
ML/MD Associations with Flexible
Support Constraints
• Why flexible support constraints?
– Real life occurrence frequencies vary greatly
• Diamond, watch, pens in a shopping basket
– Uniform support may not be an interesting model
• A flexible model
– The lower the level, the more dimension combinations, and the longer
the pattern length, the smaller the support usually is
– General rules should be easy to specify and understand
– Special items and special group of items may be specified
individually and have higher priority

CS590D 58
Multi-dimensional Association
• Single-dimensional rules:
buys(X, “milk”) ⇒ buys(X, “bread”)
• Multi-dimensional rules: ≥ 2 dimensions or predicates
– Inter-dimension assoc. rules (no repeated predicates)
age(X, ”19-25”) ∧ occupation(X, “student”) ⇒ buys(X, “coke”)
– Hybrid-dimension assoc. rules (repeated predicates)
age(X, ”19-25”) ∧ buys(X, “popcorn”) ⇒ buys(X, “coke”)
• Categorical Attributes
– finite number of possible values, no ordering among values
• Quantitative Attributes
– numeric, implicit ordering among values

CS590D 59
Multi-level Association:
Redundancy Filtering
• Some rules may be redundant due to “ancestor”
relationships between items.
• Example
– milk ⇒ wheat bread [support = 8%, confidence = 70%]
– 2% milk ⇒ wheat bread [support = 2%, confidence = 72%]
• We say the first rule is an ancestor of the second rule.
• A rule is redundant if its support is close to the
“expected” value, based on the rule’s ancestor.

CS590D 60
CS590D: Data Mining
Prof. Chris Clifton

January 31, 2006


Association Rules
Closed Itemset
• An itemset is closed if none of its immediate supersets has the same support as the itemset

TID   Items
1     {A, B}
2     {B, C, D}
3     {A, B, C, D}
4     {A, B, D}
5     {A, B, C, D}

Itemset   Support        Itemset        Support
{A}       4              {A, B, C}      2
{B}       5              {A, B, D}      3
{C}       3              {A, C, D}      2
{D}       4              {B, C, D}      3
{A, B}    4              {A, B, C, D}   2
{A, C}    2
{A, D}    3
{B, C}    3
{B, D}    4
{C, D}    3

CS590D 62
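A small sketch of the definition (our own illustration): an itemset is closed iff no immediate superset has the same support. The check runs on the five-transaction example above.

# Closed-itemset check on the example database.
db = [{"A","B"}, {"B","C","D"}, {"A","B","C","D"}, {"A","B","D"}, {"A","B","C","D"}]
items = set().union(*db)

def support(itemset):
    return sum(1 for t in db if itemset <= t)

def is_closed(itemset):
    s = support(itemset)
    # closed iff every superset obtained by adding one item has strictly lower support
    return all(support(itemset | {x}) < s for x in items - itemset)

print(support({"A", "B"}), is_closed({"A", "B"}))   # 4 True
print(support({"A", "C"}), is_closed({"A", "C"}))   # 2 False: {A,C,D} also has support 2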
Maximal vs Closed Itemsets
TID   Items
1     ABC
2     ABCD
3     BCE
4     ACDE
5     DE

[Figure: itemset lattice from null to ABCDE, each node annotated with the ids of the transactions containing it — A: 124, B: 123, C: 1234, D: 245, E: 345; AB: 12, AC: 124, AD: 24, AE: 4, BC: 123, BD: 2, BE: 3, CD: 24, CE: 34, DE: 45; ABC: 12, ABD: 2, ACD: 24, ACE: 4, ADE: 4, BCD: 2, BCE: 3, CDE: 4; ABCD: 2, ACDE: 4; ABCDE is not supported by any transaction]
CS590D 63
Maximal vs Closed Frequent
Itemsets
Minimum support = 2

[Figure: the same transaction-id-annotated lattice, with frequent itemsets marked, closed frequent itemsets circled, and maximal frequent itemsets shaded — e.g., C is closed but not maximal, while ABC and ACD are both closed and maximal]

# Closed = 9
# Maximal = 4
CS590D 64
Maximal vs Closed Itemsets

Frequent Itemsets ⊇ Closed Frequent Itemsets ⊇ Maximal Frequent Itemsets

[Figure: nested sets — maximal frequent itemsets inside closed frequent itemsets inside frequent itemsets]
CS590D 65
Multi-Level Mining: Progressive
Deepening
• A top-down, progressive deepening approach:
– First mine high-level frequent items:
milk (15%), bread (10%)
– Then mine their lower-level “weaker” frequent
itemsets:
2% milk (5%), wheat bread (4%)
• Different min_support thresholds across multi-levels
lead to different algorithms:
– If adopting the same min_support across multi-levels,
then toss t if any of t’s ancestors is infrequent.
– If adopting reduced min_support at lower levels,
then examine only those descendants whose ancestors are
frequent/non-negligible.

CS590D 66
Techniques for Mining MD
Associations
• Search for frequent k-predicate set:
– Example: {age, occupation, buys} is a 3-predicate set
– Techniques can be categorized by how quantitative attributes such as age are treated
1. Using static discretization of quantitative attributes
– Quantitative attributes are statically discretized by using
predefined concept hierarchies
2. Quantitative association rules
– Quantitative attributes are dynamically discretized into
“bins”based on the distribution of the data
3. Distance-based association rules
– This is a dynamic discretization process that considers the
distance between data points

CS590D 67
Static Discretization of
Quantitative Attributes
• Discretized prior to mining using concept hierarchy.
• Numeric values are replaced by ranges.
• In a relational database, finding all frequent k-predicate sets will
require k or k+1 table scans.
• Data cube is well suited for mining.
• The cells of an n-dimensional cuboid correspond to the predicate sets.
• Mining from data cubes can be much faster.

[Figure: lattice of cuboids — (), (age), (income), (buys), (age, income), (age, buys), (income, buys), (age, income, buys)]
CS590D 69
Quantitative Association
Rules
• Numeric attributes are dynamically discretized
– Such that the confidence or compactness of the rules mined is
maximized
• 2-D quantitative association rules: A_quan1 ∧ A_quan2 ⇒ A_cat
• Cluster “adjacent” association rules to form general rules using a 2-D grid
• Example

age(X, ”30-34”) ∧ income(X, ”24K-48K”)
⇒ buys(X, ”high resolution TV”)
Mining Distance-based
Association Rules
• Binning methods do not capture the semantics of interval
data
Price($)   Equi-width (width $10)   Equi-depth (depth 2)   Distance-based
7          [0,10]                   [7,20]                 [7,7]
20         [11,20]                  [22,50]                [20,22]
22         [21,30]                  [51,53]                [50,53]
50         [31,40]
51         [41,50]
53         [51,60]

• Distance-based partitioning, more meaningful


discretization considering:
– density/number of points in an interval
– “closeness” of points in an interval
CS590D 71
Interestingness Measure:
Correlations (Lift)
• play basketball ⇒ eat cereal [40%, 66.7%] is misleading
– The overall percentage of students eating cereal is 75%, which is higher than 66.7%.
• play basketball ⇒ not eat cereal [20%, 33.3%] is more accurate, although with lower support and confidence
• Measure of dependent/correlated events: lift

corr(A,B) = P(A ∪ B) / ( P(A) P(B) )

              Basketball   Not basketball   Sum (row)
Cereal        2000         1750             3750
Not cereal    1000         250              1250
Sum (col.)    3000         2000             5000
CS590D 72
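The numbers from the contingency table above plug directly into the lift formula; a quick check in Python (our own illustration, values from the slide):

# Lift (correlation) for basketball and cereal, using the contingency table above.
N = 5000.0
p_basketball = 3000 / N     # P(A)
p_cereal = 3750 / N         # P(B)
p_both = 2000 / N           # P(A and B)

lift = p_both / (p_basketball * p_cereal)
print(lift)   # ~0.89 < 1: negatively correlated, so the 66.7%-confidence rule is misleading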
Mining Association Rules in
Large Databases
• Association rule mining
• Algorithms for scalable mining of (single-dimensional Boolean)
association rules in transactional databases
• Mining various kinds of association/correlation rules
• Constraint-based association mining
• Sequential pattern mining
• Applications/extensions of frequent pattern mining
• Summary

CS590D 73
Constraint-based Data
Mining
• Finding all the patterns in a database
autonomously? — unrealistic!
– The patterns could be too many but not focused!
• Data mining should be an interactive process
– User directs what to be mined using a data mining
query language (or a graphical user interface)
• Constraint-based mining
– User flexibility: provides constraints on what to be
mined
– System optimization: explores such constraints for
efficient mining—constraint-based mining
CS590D 74
Constraints in Data Mining
• Knowledge type constraint:
– classification, association, etc.
• Data constraint — using SQL-like queries
– find product pairs sold together in stores in Vancouver in Dec.’00
• Dimension/level constraint
– in relevance to region, price, brand, customer category
• Rule (or pattern) constraint
– small sales (price < $10) triggers big sales (sum > $200)
• Interestingness constraint
– strong rules: min_support ≥ 3%, min_confidence ≥ 60%

CS590D 75
Constrained Mining vs. Constraint-
Based Search
• Constrained mining vs. constraint-based search/reasoning
– Both are aimed at reducing search space
– Finding all patterns satisfying constraints vs. finding some (or
one) answer in constraint-based search in AI
– Constraint-pushing vs. heuristic search
– It is an interesting research problem on how to integrate them
• Constrained mining vs. query processing in DBMS
– Database query processing requires to find all
– Constrained pattern mining shares a similar philosophy as
pushing selections deeply in query processing

CS590D 76
Constrained Frequent Pattern Mining:
A Mining Query Optimization Problem
• Given a frequent pattern mining query with a set of constraints C,
the algorithm should be
– sound: it only finds frequent sets that satisfy the given
constraints C
– complete: all frequent sets satisfying the given constraints C are
found
• A naïve solution
– First find all frequent sets, and then test them for constraint
satisfaction
• More efficient approaches:
– Analyze the properties of constraints comprehensively
– Push them as deeply as possible inside the frequent pattern
computation.

CS590D 77
Application of Interestingness
Measure
[Figure: mining pipeline — Data → (Selection) → Selected Data → (Preprocessing) → Preprocessed Data (a product × feature table) → (Mining) → Patterns → (Postprocessing, where interestingness measures are applied) → Knowledge]
CS590D 79
Computing Interestingness
Measure
• Given a rule X ⇒ Y, the information needed to compute rule interestingness can be obtained from a contingency table

Contingency table for X ⇒ Y:
        Y     ¬Y
X       f11   f10    f1+
¬X      f01   f00    f0+
        f+1   f+0    |T|

f11: support of X and Y
f10: support of X and ¬Y
f01: support of ¬X and Y
f00: support of ¬X and ¬Y

Used to define various measures: support, confidence, lift, Gini, J-measure, etc.
CS590D 80
Drawback of Confidence

        Coffee   ¬Coffee
Tea     15       5          20
¬Tea    75       5          80
        90       10         100

Association rule: Tea ⇒ Coffee

Confidence = P(Coffee|Tea) = 0.75
but P(Coffee) = 0.9
⇒ Although confidence is high, the rule is misleading
⇒ P(Coffee|¬Tea) = 0.9375
CS590D 81
Statistical Independence
• Population of 1000 students
– 600 students know how to swim (S)
– 700 students know how to bike (B)
– 420 students know how to swim and bike (S,B)

– P(S∧B) = 420/1000 = 0.42
– P(S) × P(B) = 0.6 × 0.7 = 0.42
– P(S∧B) = P(S) × P(B) => Statistical independence
– P(S∧B) > P(S) × P(B) => Positively correlated
– P(S∧B) < P(S) × P(B) => Negatively correlated
CS590D 82
Statistical-based Measures
• Measures that take into account statistical dependence:

Lift = P(Y|X) / P(Y)
Interest = P(X,Y) / ( P(X) P(Y) )
PS = P(X,Y) − P(X) P(Y)
φ-coefficient = ( P(X,Y) − P(X) P(Y) ) / sqrt( P(X) [1 − P(X)] P(Y) [1 − P(Y)] )
CS590D 83
Example: Lift/Interest

        Coffee   ¬Coffee
Tea     15       5          20
¬Tea    75       5          80
        90       10         100

Association rule: Tea ⇒ Coffee

Confidence = P(Coffee|Tea) = 0.75
but P(Coffee) = 0.9
⇒ Lift = 0.75/0.9 = 0.8333 (< 1, therefore Tea and Coffee are negatively associated)

CS590D 84
Drawback of Lift & Interest
        Y    ¬Y              Y    ¬Y
X       10   0     10   X    90   0     90
¬X      0    90    90   ¬X   0    10    10
        10   90    100       90   10    100

Lift = 0.1 / (0.1 × 0.1) = 10        Lift = 0.9 / (0.9 × 0.9) = 1.11

Statistical independence:
If P(X,Y) = P(X) P(Y) => Lift = 1
CS590D 85
• There are lots of measures proposed in the literature
• Some measures are good for certain applications, but not for others
• What criteria should we use to determine whether a measure is good or bad?
• What about Apriori-style support-based pruning? How does it affect these measures?
CS590D 86
Properties of A Good
Measure
• Piatetsky-Shapiro:
3 properties a good measure M must satisfy:
– M(A,B) = 0 if A and B are statistically independent
– M(A,B) increases monotonically with P(A,B) when P(A) and P(B) remain unchanged
– M(A,B) decreases monotonically with P(A) [or P(B)] when P(A,B) and P(B) [or P(A)] remain unchanged

CS590D 87
Comparing Different Measures
10 examples of contingency tables:

Example   f11    f10    f01    f00
E1        8123   83     424    1370
E2        8330   2      622    1046
E3        9481   94     127    298
E4        3954   3080   5      2961
E5        2886   1363   1320   4431
E6        1500   2000   500    6000
E7        4000   2000   1000   3000
E8        4000   2000   2000   2000
E9        1720   7121   5      1154
E10       61     2483   4      7452

Rankings of the contingency tables using various measures: [figure omitted]
CS590D 88
Property under Variable
Permutation
        B   ¬B              A   ¬A
A       p   q          B    p   r
¬A      r   s          ¬B   q   s

Does M(A,B) = M(B,A)?

Symmetric measures:
 support, lift, collective strength, cosine, Jaccard, etc
Asymmetric measures:
 confidence, conviction, Laplace, J-measure, etc
CS590D 89
Property under Row/Column
Scaling
Grade-Gender Example (Mosteller, 1968):

        Male   Female             Male   Female
High    2      3        5   High  4      30       34
Low     1      4        5   Low   2      40       42
        3      7        10        6      70       76

(right table: male column ×2, female column ×10)
Mosteller:
Underlying association should be independent of
the relative number of male and female students
in the samples

CS590D 90
Property under Inversion
[Figure: inversion operation — six binary transaction vectors A–F listed over transactions 1..N, grouped into pairs (a), (b), (c); inversion flips each vector's 0s to 1s and 1s to 0s]


CS590D 91
Example: φ-Coefficient
• The φ-coefficient is analogous to the correlation coefficient for continuous variables

        Y    ¬Y              Y    ¬Y
X       60   10    70   X    20   10    30
¬X      10   20    30   ¬X   10   60    70
        70   30    100       30   70    100

φ = (0.6 − 0.7 × 0.7) / sqrt(0.7 × 0.3 × 0.7 × 0.3) = 0.5238
φ = (0.2 − 0.3 × 0.3) / sqrt(0.7 × 0.3 × 0.7 × 0.3) = 0.5238

⇒ The φ coefficient is the same for both tables
CS590D 92
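A quick numerical check of the two tables using the φ-coefficient formula from the "Statistical-based Measures" slide (our own illustration):

# phi-coefficient for the two contingency tables above.
from math import sqrt

def phi(f11, f10, f01, f00):
    n = f11 + f10 + f01 + f00
    pxy, px, py = f11 / n, (f11 + f10) / n, (f11 + f01) / n
    return (pxy - px * py) / sqrt(px * (1 - px) * py * (1 - py))

print(round(phi(60, 10, 10, 20), 4))   # 0.5238
print(round(phi(20, 10, 10, 60), 4))   # 0.5238 -- the same for both tables, as the slide notes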
Property under Null Addition
        B   ¬B              B   ¬B
A       p   q          A    p   q
¬A      r   s          ¬A   r   s + k

Invariant measures:
 support, cosine, Jaccard, etc
Non-invariant measures:
 correlation, Gini, mutual information, odds ratio, etc

CS590D 93
Different Measures have Different
Properties
Symbol   Measure                Range                                            P1   P2   P3   O1    O2   O3    O3'  O4
φ        Correlation            -1 … 0 … 1                                       Yes  Yes  Yes  Yes   No   Yes   Yes  No
λ        Lambda                 0 … 1                                            Yes  No   No   Yes   No   No*   Yes  No
α        Odds ratio             0 … 1 … ∞                                        Yes* Yes  Yes  Yes   Yes  Yes*  Yes  No
Q        Yule's Q               -1 … 0 … 1                                       Yes  Yes  Yes  Yes   Yes  Yes   Yes  No
Y        Yule's Y               -1 … 0 … 1                                       Yes  Yes  Yes  Yes   Yes  Yes   Yes  No
κ        Cohen's                -1 … 0 … 1                                       Yes  Yes  Yes  Yes   No   No    Yes  No
M        Mutual Information     0 … 1                                            Yes  Yes  Yes  Yes   No   No*   Yes  No
J        J-Measure              0 … 1                                            Yes  No   No   No    No   No    No   No
G        Gini Index             0 … 1                                            Yes  No   No   No    No   No*   Yes  No
s        Support                0 … 1                                            No   Yes  No   Yes   No   No    No   No
c        Confidence             0 … 1                                            No   Yes  No   Yes   No   No    No   Yes
L        Laplace                0 … 1                                            No   Yes  No   Yes   No   No    No   No
V        Conviction             0.5 … 1 … ∞                                      No   Yes  No   Yes** No   No    Yes  No
I        Interest               0 … 1 … ∞                                        Yes* Yes  Yes  Yes   No   No    No   No
IS       IS (cosine)            0 .. 1                                           No   Yes  Yes  Yes   No   No    No   Yes
PS       Piatetsky-Shapiro's    -0.25 … 0 … 0.25                                 Yes  Yes  Yes  Yes   No   Yes   Yes  No
F        Certainty factor       -1 … 0 … 1                                       Yes  Yes  Yes  No    No   No    Yes  No
AV       Added value            0.5 … 1 … 1                                      Yes  Yes  Yes  No    No   No    No   No
S        Collective strength    0 … 1 … ∞                                        No   Yes  Yes  Yes   No   Yes*  Yes  No
ζ        Jaccard                0 .. 1                                           No   Yes  Yes  Yes   No   No    No   Yes
K        Klosgen's              ((2/√3) − 1)^(1/2) [2 − √3 − 1/√3] … 0 … 2/(3√3)  Yes  Yes  Yes  No    No   No    No   No
CS590D 94
Anti-Monotonicity in Constraint-Based Mining
• Anti-monotonicity
– When an itemset S violates the constraint, so does any of its supersets
– sum(S.Price) ≤ v is anti-monotone
– sum(S.Price) ≥ v is not anti-monotone
• Example. C: range(S.profit) ≤ 15 is anti-monotone
– Itemset ab violates C
– So does every superset of ab

TDB (min_sup = 2)
TID   Transaction
10    a, b, c, d, f
20    b, c, d, f, g, h
30    a, c, d, e, f
40    c, e, f, g

Item   Profit
a      40
b      0
c      -20
d      10
e      -30
f      30
g      20
h      -10
CS590D 95
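A minimal sketch (our own illustration, not from the slides) of how an anti-monotone constraint is pushed into level-wise mining: once an itemset violates range(S.profit) ≤ 15, no superset of it needs to be generated or tested.

# Anti-monotone constraint check: a violating candidate can be pruned immediately,
# because every superset would violate range(profit) <= v as well.
profit = {"a": 40, "b": 0, "c": -20, "d": 10, "e": -30, "f": 30, "g": 20, "h": -10}

def profit_range(itemset):
    values = [profit[i] for i in itemset]
    return max(values) - min(values)

def satisfies_constraint(itemset, v=15):
    return profit_range(itemset) <= v

print(satisfies_constraint({"a", "b"}))   # False: range = 40, so ab and all its supersets are pruned
print(satisfies_constraint({"a", "f"}))   # True:  range = 10, af may still be extended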
Which Constraints Are Anti-
Monotone?
Constraint                               Antimonotone
v ∈ S                                    No
S ⊇ V                                    no
S ⊆ V                                    yes
min(S) ≤ v                               no
min(S) ≥ v                               yes
max(S) ≤ v                               yes
max(S) ≥ v                               no
count(S) ≤ v                             yes
count(S) ≥ v                             no
sum(S) ≤ v (∀a ∈ S, a ≥ 0)               yes
sum(S) ≥ v (∀a ∈ S, a ≥ 0)               no
range(S) ≤ v                             yes
range(S) ≥ v                             no
avg(S) θ v, θ ∈ {=, ≤, ≥}                convertible
support(S) ≥ ξ                           yes
support(S) ≤ ξ                           no
96
Monotonicity in Constraint-Based Mining
• Monotonicity
– When an itemset S satisfies the constraint, so does any of its supersets
– sum(S.Price) ≥ v is monotone
– min(S.Price) ≤ v is monotone
• Example. C: range(S.profit) ≥ 15
– Itemset ab satisfies C
– So does every superset of ab

TDB (min_sup = 2)
TID   Transaction
10    a, b, c, d, f
20    b, c, d, f, g, h
30    a, c, d, e, f
40    c, e, f, g

Item   Profit
a      40
b      0
c      -20
d      10
e      -30
f      30
g      20
h      -10
CS590D 97
Which Constraints Are
Monotone?
Constraint                               Monotone
v ∈ S                                    yes
S ⊇ V                                    yes
S ⊆ V                                    no
min(S) ≤ v                               yes
min(S) ≥ v                               no
max(S) ≤ v                               no
max(S) ≥ v                               yes
count(S) ≤ v                             no
count(S) ≥ v                             yes
sum(S) ≤ v (∀a ∈ S, a ≥ 0)               no
sum(S) ≥ v (∀a ∈ S, a ≥ 0)               yes
range(S) ≤ v                             no
range(S) ≥ v                             yes
avg(S) θ v, θ ∈ {=, ≤, ≥}                convertible
support(S) ≥ ξ                           no
support(S) ≤ ξ                           yes
98
Succinctness
• Succinctness:
– Given A1, the set of items satisfying a succinctness constraint C,
then any set S satisfying C is based on A1 , i.e., S contains a
subset belonging to A1
– Idea: Without looking at the transaction database, whether an
itemset S satisfies constraint C can be determined based on the
selection of items
– min(S.Price) ≤ v is succinct
– sum(S.Price) ≥ v is not succinct
• Optimization: If C is succinct, C is pre-counting pushable

CS590D 99
Which Constraints Are
Succinct?
Constraint                               Succinct
v ∈ S                                    yes
S ⊇ V                                    yes
S ⊆ V                                    yes
min(S) ≤ v                               yes
min(S) ≥ v                               yes
max(S) ≤ v                               yes
max(S) ≥ v                               yes
count(S) ≤ v                             weakly
count(S) ≥ v                             weakly
sum(S) ≤ v (∀a ∈ S, a ≥ 0)               no
sum(S) ≥ v (∀a ∈ S, a ≥ 0)               no
range(S) ≤ v                             no
range(S) ≥ v                             no
avg(S) θ v, θ ∈ {=, ≤, ≥}                no
support(S) ≥ ξ                           no
support(S) ≤ ξ                           no
100
The Apriori Algorithm —
Example
Database D:
TID   Items
100   1 3 4
200   2 3 5
300   1 2 3 5
400   2 5

Scan D → C1: {1}:2, {2}:3, {3}:3, {4}:1, {5}:3
L1: {1}:2, {2}:3, {3}:3, {5}:3

C2: {1 2}, {1 3}, {1 5}, {2 3}, {2 5}, {3 5}
Scan D → C2 supports: {1 2}:1, {1 3}:2, {1 5}:1, {2 3}:2, {2 5}:3, {3 5}:2
L2: {1 3}:2, {2 3}:2, {2 5}:3, {3 5}:2

C3: {2 3 5};  Scan D → L3: {2 3 5} with sup 2
101
Naïve Algorithm: Apriori +
Constraint
Same database and Apriori trace as on the previous slide (C1 → L1 → C2 → L2 → C3 → L3).

Constraint: Sum{S.price} < 5
In the naïve approach, all frequent itemsets are found first and the constraint is only tested afterwards.
The Constrained Apriori Algorithm: Push an
Anti-monotone Constraint Deep
Same database and Apriori trace as above (C1 → L1 → C2 → L2 → C3 → L3).

Constraint: Sum{S.price} < 5
With the anti-monotone constraint pushed deep, a candidate whose price sum already violates the constraint is pruned as soon as it is generated, and none of its supersets is ever produced.
103
The Constrained Apriori Algorithm:
Push a Succinct Constraint Deep
Same database and Apriori trace as above (C1 → L1 → C2 → L2 → C3 → L3).

Constraint: min{S.price} ≤ 1
With the succinct constraint pushed deep, only itemsets that contain at least one item satisfying the constraint are ever generated as candidates; the constraint is enforced by the selection of items before counting starts.
104
Converting “Tough” Constraints
• Convert tough constraints into anti-monotone or monotone by properly ordering items
• Examine C: avg(S.profit) ≥ 25
– Order items in value-descending order
• <a, f, g, d, b, h, c, e>
– If an itemset afb violates C
• So do afbh, afb*
• It becomes anti-monotone!

TDB (min_sup = 2)
TID   Transaction
10    a, b, c, d, f
20    b, c, d, f, g, h
30    a, c, d, e, f
40    c, e, f, g

Item   Profit
a      40
b      0
c      -20
d      10
e      -30
f      30
g      20
h      -10
CS590D
Convertible Constraints
• Let R be an order of items
• Convertible anti-monotone
– If an itemset S violates a constraint C, so does every
itemset having S as a prefix w.r.t. R
– Ex. avg(S) ≥ v w.r.t. item value descending order
• Convertible monotone
– If an itemset S satisfies constraint C, so does every
itemset having S as a prefix w.r.t. R
– Ex. avg(S) ≤ v w.r.t. item value descending order
CS590D 106
Strongly Convertible
Constraints
• avg(X) ≥ 25 is convertible anti-monotone w.r.t. item value descending order R: <a, f, g, d, b, h, c, e>
– If an itemset af violates a constraint C, so does every itemset with af as prefix, such as afd
• avg(X) ≥ 25 is convertible monotone w.r.t. item value ascending order R⁻¹: <e, c, h, b, d, g, f, a>
– If an itemset d satisfies a constraint C, so do itemsets df and dfa, which have d as a prefix
• Thus, avg(X) ≥ 25 is strongly convertible

Item   Profit
a      40
b      0
c      -20
d      10
e      -30
f      30
g      20
h      -10
CS590D 107
What Constraints Are Convertible?
Constraint                                          Convertible anti-monotone   Convertible monotone   Strongly convertible
avg(S) ≤ v, ≥ v                                     Yes                         Yes                    Yes
median(S) ≤ v, ≥ v                                  Yes                         Yes                    Yes
sum(S) ≤ v (items could be of any value, v ≥ 0)     Yes                         No                     No
sum(S) ≤ v (items could be of any value, v ≤ 0)     No                          Yes                    No
sum(S) ≥ v (items could be of any value, v ≥ 0)     No                          Yes                    No
sum(S) ≥ v (items could be of any value, v ≤ 0)     Yes                         No                     No
……
CS590D 108
Combining Them Together—A General Picture

Constraint                               Antimonotone   Monotone      Succinct
v ∈ S                                    no             yes           yes
S ⊇ V                                    no             yes           yes
S ⊆ V                                    yes            no            yes
min(S) ≤ v                               no             yes           yes
min(S) ≥ v                               yes            no            yes
max(S) ≤ v                               yes            no            yes
max(S) ≥ v                               no             yes           yes
count(S) ≤ v                             yes            no            weakly
count(S) ≥ v                             no             yes           weakly
sum(S) ≤ v (∀a ∈ S, a ≥ 0)               yes            no            no
sum(S) ≥ v (∀a ∈ S, a ≥ 0)               no             yes           no
range(S) ≤ v                             yes            no            no
range(S) ≥ v                             no             yes           no
avg(S) θ v, θ ∈ {=, ≤, ≥}                convertible    convertible   no
support(S) ≥ ξ                           yes            no            no
support(S) ≤ ξ                           no             yes           no
Classification of Constraints

Monotone
Antimonotone

Strongly
convertible
Succinct

Convertible Convertible
anti-monotone monotone

Inconvertible
CS590D 110
CS590D: Data Mining
Prof. Chris Clifton

February 2, 2006
Association Rules
Mining With Convertible Constraints
• C: avg(S.profit) ≥ 25
• List the items in every transaction in value descending order R: <a, f, g, d, b, h, c, e>
– C is convertible anti-monotone w.r.t. R
• Scan the transaction DB once
– remove infrequent items
• Item h in transaction 40 is dropped
– Itemsets a and f are good

TDB (min_sup = 2)
TID   Transaction
10    a, f, d, b, c
20    f, g, d, b, c
30    a, f, d, c, e
40    f, g, h, c, e

Item   Profit
a      40
f      30
g      20
d      10
b      0
h      -10
c      -20
e      -30
CS590D
Can Apriori Handle Convertible
Constraint?
• A convertible constraint that is neither monotone, nor anti-monotone, nor succinct cannot be pushed deep into an Apriori mining algorithm
– Within the level-wise framework, no direct pruning based on the constraint can be made
– Itemset df violates constraint C: avg(X) ≥ 25
– Since adf satisfies C, Apriori needs df to assemble adf, so df cannot be pruned
• But it can be pushed into the frequent-pattern growth framework!

Item   Value
a      40
b      0
c      -20
d      10
e      -30
f      30
g      20
h      -10
CS590D
Mining With Convertible
Constraints
• C: avg(X) ≥ 25, min_sup = 2
• List the items in every transaction in value descending order R: <a, f, g, d, b, h, c, e>
– C is convertible anti-monotone w.r.t. R
• Scan the TDB once
– remove infrequent items
• Item h is dropped
– Itemsets a and f are good, …
• Projection-based mining
– Impose an appropriate order on item projection
– Many tough constraints can be converted into (anti-)monotone ones

TDB (min_sup = 2)
TID   Transaction
10    a, f, d, b, c
20    f, g, d, b, c
30    a, f, d, c, e
40    f, g, h, c, e

Item   Value
a      40
f      30
g      20
d      10
b      0
h      -10
c      -20
e      -30
CS590D
Handling Multiple Constraints
• Different constraints may require different or even
conflicting item-ordering
• If there exists an order R s.t. both C1 and C2 are
convertible w.r.t. R, then there is no conflict between the
two convertible constraints
• If there exists conflict on order of items
– Try to satisfy one constraint first
– Then using the order for the other constraint to mine frequent
itemsets in the corresponding projected database

CS590D 115
Interestingness via
Unexpectedness
• Need to model the expectation of users (domain knowledge)
+ Pattern expected to be frequent
− Pattern expected to be infrequent
⊕ Pattern found to be frequent
⊖ Pattern found to be infrequent
– Expected patterns: the finding agrees with the expectation (+⊕ or −⊖)
– Unexpected patterns: the finding contradicts the expectation (−⊕ or +⊖)

• Need to combine expectation of users with evidence


from data (i.e., extracted patterns)
CS590D 116
Interestingness via
Unexpectedness
• Web Data (Cooley et al 2001)
– Domain knowledge in the form of site structure
– Given an itemset F = {X1, X2, …, Xk} (Xi: Web pages)
• L: number of links connecting the pages
• lfactor = L / (k × (k − 1))
• cfactor = 1 (if graph is connected), 0 (disconnected graph)
– Structure evidence = cfactor × lfactor
– Usage evidence = P(X1 ∧ X2 ∧ … ∧ Xk) / P(X1 ∨ X2 ∨ … ∨ Xk)

– Use Dempster-Shafer theory to combine domain


knowledge and evidence from data
CS590D 117
Continuous and Categorical
Attributes
How to apply association analysis formulation to non-
asymmetric binary variables?
Session Id   Country     Session Length (sec)   Number of Web Pages viewed   Gender   Browser Type   Buy
1            USA         982                    8                            Male     IE             No
2            China       811                    10                           Female   Netscape       No
3            USA         2125                   45                           Female   Mozilla        Yes
4            Germany     596                    4                            Male     IE             Yes
5            Australia   123                    9                            Male     Mozilla        No
…            …           …                      …                            …        …              …

Example of an association rule:
{Number of Pages ∈ [5,10) ∧ (Browser = Mozilla)} ⇒ {Buy = No}
CS590D 118
Handling Categorical Attributes
• Transform categorical attribute into
asymmetric binary variables
• Introduce a new “item” for each distinct
attribute-value pair
– Example: replace Browser Type attribute with
• Browser Type = Internet Explorer
• Browser Type = Mozilla
• Browser Type = Netscape

CS590D 119
Handling Categorical Attributes
• Potential Issues
– What if attribute has many possible values
• Example: attribute country has more than 200 possible
values
• Many of the attribute values may have very low support
– Potential solution: Aggregate the low-support attribute values
– What if distribution of attribute values is highly skewed
• Example: 95% of the visitors have Buy = No
• Most of the items will be associated with (Buy=No) item
– Potential solution: drop the highly frequent items

CS590D 120
Handling Continuous Attributes
• Different kinds of rules:
– Age[21,35)  Salary[70k,120k)  Buy
– Salary[70k,120k)  Buy  Age: =28, =4
• Different methods:
– Discretization-based
– Statistics-based
– Non-discretization based
• minApriori

CS590D 121
Handling Continuous
Attributes
• Use discretization
• Unsupervised:
– Equal-width binning
– Equal-depth binning
– Clustering
• Supervised:

                    Attribute values, v
Class        v1    v2    v3    v4    v5    v6    v7    v8    v9
Anomalous    0     0     20    10    20    0     0     0     0
Normal       150   100   0     0     0     100   100   150   100
             (bin1)      (bin2)            (bin3)
122
Discretization Issues
• Size of the discretized intervals affect support &
confidence
{Refund = No, (Income = $51,250)} ⇒ {Cheat = No}
{Refund = No, (60K ≤ Income ≤ 80K)} ⇒ {Cheat = No}
{Refund = No, (0K ≤ Income ≤ 1B)} ⇒ {Cheat = No}

– If intervals too small


• may not have enough support
– If intervals too large
• may not have enough confidence
• Potential solution: use all possible intervals

CS590D 123
Discretization Issues
• Execution time
– If intervals contain n
values, there are on
average O(n2) possible
ranges

• Too many rules


{Refund = No, (Income = $51,250)} ⇒ {Cheat = No}
{Refund = No, (51K ≤ Income ≤ 52K)} ⇒ {Cheat = No}
{Refund = No, (50K ≤ Income ≤ 60K)} ⇒ {Cheat = No}
CS590D 124
Approach by Srikant & Agrawal
• Preprocess the data
– Discretize attribute using equi-depth
partitioning
• Use partial completeness measure to determine
number of partitions
• Merge adjacent intervals as long as support is
less than max-support
• Apply existing association rule mining
algorithms
• Determine interesting rules in the output
CS590D 125
Approach by Srikant & Agrawal
• Discretization will lose information
– Use a partial completeness measure to determine how much information is lost

C: frequent itemsets obtained by considering all ranges of attribute values
P: frequent itemsets obtained by considering all ranges over the partitions

P is K-complete w.r.t. C if P ⊆ C, and for every X ∈ C there exists X′ ∈ P such that:
1. X′ is a generalization of X and support(X′) ≤ K × support(X)  (K ≥ 1)
2. for every Y ⊆ X there exists Y′ ⊆ X′ such that support(Y′) ≤ K × support(Y)

Given K (the partial completeness level), the number of intervals (N) can be determined
CS590D 126
Mining Association Rules in
Large Databases
• Association rule mining
• Algorithms for scalable mining of (single-dimensional Boolean)
association rules in transactional databases
• Mining various kinds of association/correlation rules
• Constraint-based association mining
• Sequential pattern mining
• Applications/extensions of frequent pattern mining
• Summary

CS590D 135
Sequence Databases and
Sequential Pattern Analysis
• Transaction databases, time-series databases vs. sequence
databases
• Frequent patterns vs. (frequent) sequential patterns
• Applications of sequential pattern mining
– Customer shopping sequences:
• First buy computer, then CD-ROM, and then digital camera, within 3
months.
– Medical treatment, natural disasters (e.g., earthquakes), science &
engineering processes, stocks and markets, etc.
– Telephone calling patterns, Weblog click streams
– DNA sequences and gene structures

CS590D 136
What Is Sequential Pattern
Mining?
• Given a set of sequences, find the complete set of frequent subsequences

A sequence: < (ef) (ab) (df) c b >
– An element may contain a set of items. Items within an element are unordered and we list them alphabetically.
– <a(bc)dc> is a subsequence of <a(abc)(ac)d(cf)>

A sequence database:
SID   sequence
10    <a(abc)(ac)d(cf)>
20    <(ad)c(bc)(ae)>
30    <(ef)(ab)(df)cb>
40    <eg(af)cbc>

Given support threshold min_sup = 2, <(ab)c> is a sequential pattern
CS590D 137
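A hedged sketch (our own illustration) of the containment test underlying these definitions: a sequence of elements is a subsequence of another if its elements can be matched, in order, to containing elements of the other sequence.

# Subsequence test for sequences whose elements are itemsets.
def is_subsequence(sub, seq):
    i = 0
    for element in seq:
        if i < len(sub) and sub[i] <= element:   # sub's next element fits inside this element
            i += 1
    return i == len(sub)

s = [{"a"}, {"a", "b", "c"}, {"a", "c"}, {"d"}, {"c", "f"}]    # <a(abc)(ac)d(cf)>
print(is_subsequence([{"a"}, {"b", "c"}, {"d"}, {"c"}], s))     # True: <a(bc)dc> is a subsequence
print(is_subsequence([{"a"}, {"d"}, {"b"}], s))                 # False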
Challenges on Sequential
Pattern Mining
• A huge number of possible sequential patterns
are hidden in databases
• A mining algorithm should
– find the complete set of patterns, when possible,
satisfying the minimum support (frequency) threshold
– be highly efficient, scalable, involving only a small
number of database scans
– be able to incorporate various kinds of user-specific
constraints

CS590D 138
Studies on Sequential Pattern
Mining
• Concept introduction and an initial Apriori-like algorithm
– R. Agrawal & R. Srikant. “Mining sequential patterns,” ICDE’95
• GSP—An Apriori-based, influential mining method (developed at
IBM Almaden)
– R. Srikant & R. Agrawal. “Mining sequential patterns: Generalizations
and performance improvements,” EDBT’96
• From sequential patterns to episodes (Apriori-like + constraints)
– H. Mannila, H. Toivonen & A.I. Verkamo. “Discovery of frequent
episodes in event sequences,” Data Mining and Knowledge Discovery,
1997
• Mining sequential patterns with constraints
– M.N. Garofalakis, R. Rastogi, K. Shim: SPIRIT: Sequential Pattern
Mining with Regular Expression Constraints. VLDB 1999

CS590D 139
A Basic Property of Sequential
Patterns: Apriori
• A basic property: Apriori (Agrawal & Srikant’94)
– If a sequence S is not frequent
– Then none of the super-sequences of S is frequent
– E.g., <hb> is infrequent, so <hab> and <(ah)b> are infrequent too

Given support threshold min_sup = 2:
Seq. ID   Sequence
10        <(bd)cb(ac)>
20        <(bf)(ce)b(fg)>
30        <(ah)(bf)abf>
40        <(be)(ce)d>
50        <a(bd)bcb(ade)>
CS590D 140
GSP—A Generalized Sequential
Pattern Mining Algorithm
• GSP (Generalized Sequential Pattern) mining algorithm
– proposed by Agrawal and Srikant, EDBT’96
• Outline of the method
– Initially, every item in DB is a candidate of length-1
– for each level (i.e., sequences of length-k) do
• scan database to collect support count for each
candidate sequence
• generate candidate length-(k+1) sequences from
length-k frequent sequences using Apriori
– repeat until no frequent sequence or no candidate can
be found
• Major strength: Candidate pruning by Apriori

CS590D 141
Finding Length-1 Sequential
Patterns
• Examine GSP using an example
• Initial candidates: all singleton sequences
– <a>, <b>, <c>, <d>, <e>, <f>, <g>, <h>
• Scan the database once, count support for candidates (min_sup = 2)

Cand   Sup          Seq. ID   Sequence
<a>    3            10        <(bd)cb(ac)>
<b>    5            20        <(bf)(ce)b(fg)>
<c>    4            30        <(ah)(bf)abf>
<d>    3            40        <(be)(ce)d>
<e>    3            50        <a(bd)bcb(ade)>
<f>    2
<g>    1
<h>    1
142
Generating Length-2 Candidates

51 length-2 candidates:

       <a>    <b>    <c>    <d>    <e>    <f>
<a>    <aa>   <ab>   <ac>   <ad>   <ae>   <af>
<b>    <ba>   <bb>   <bc>   <bd>   <be>   <bf>
<c>    <ca>   <cb>   <cc>   <cd>   <ce>   <cf>
<d>    <da>   <db>   <dc>   <dd>   <de>   <df>
<e>    <ea>   <eb>   <ec>   <ed>   <ee>   <ef>
<f>    <fa>   <fb>   <fc>   <fd>   <fe>   <ff>

       <a>    <b>      <c>      <d>      <e>      <f>
<a>           <(ab)>   <(ac)>   <(ad)>   <(ae)>   <(af)>
<b>                    <(bc)>   <(bd)>   <(be)>   <(bf)>
<c>                             <(cd)>   <(ce)>   <(cf)>
<d>                                      <(de)>   <(df)>
<e>                                               <(ef)>
<f>

Without the Apriori property: 8*8 + 8*7/2 = 92 candidates.
Apriori prunes 44.57% of the candidates.
Generating Length-3 Candidates and
Finding Length-3 Patterns
• Generate Length-3 Candidates
– Self-join length-2 sequential patterns
• Based on the Apriori property
• <ab>, <aa> and <ba> are all length-2 sequential patterns ⇒ <aba> is a length-3 candidate
• <(bd)>, <bb> and <db> are all length-2 sequential patterns ⇒ <(bd)b> is a length-3 candidate
– 46 candidates are generated
• Find Length-3 Sequential Patterns
– Scan database once more, collect support counts for
candidates
– 19 out of 46 candidates pass support threshold

CS590D 145
The GSP Mining Process
min_sup = 2

1st scan: 8 cand. → 6 length-1 seq. pat. (candidates: <a> <b> <c> <d> <e> <f> <g> <h>)
2nd scan: 51 cand. → 19 length-2 seq. pat.; 10 cand. not in DB at all (<aa> <ab> … <af> <ba> <bb> … <ff> <(ab)> … <(ef)>)
3rd scan: 46 cand. → 19 length-3 seq. pat.; 20 cand. not in DB at all (<abb> <aab> <aba> <baa> <bab> …)
4th scan: 8 cand. → 6 length-4 seq. pat. (<abba> <(bd)bc> …); some cand. not in DB at all
5th scan: 1 cand. → 1 length-5 seq. pat. (<(bd)cba>); candidates that cannot pass the support threshold are pruned

Seq. ID   Sequence
10        <(bd)cb(ac)>
20        <(bf)(ce)b(fg)>
30        <(ah)(bf)abf>
40        <(be)(ce)d>
50        <a(bd)bcb(ade)>
146
Bottlenecks of GSP
• A huge set of candidates could be generated
– 1,000 frequent length-1 sequences generate
1000 × 1000 + (1000 × 999)/2 = 1,499,500 length-2 candidates!
• Multiple scans of the database in mining
• Real challenge: mining long sequential patterns
– An exponential number of short candidates
– A length-100 sequential pattern needs
Σ (i = 1..100) C(100, i) = 2^100 − 1 ≈ 10^30 candidate sequences!
CS590D 148
FreeSpan: Frequent Pattern-Projected
Sequential Pattern Mining
• A divide-and-conquer approach
– Recursively project a sequence database into a set of smaller
databases based on the current set of frequent patterns
– Mine each projected database to find its patterns
• J. Han J. Pei, B. Mortazavi-Asi, Q. Chen, U. Dayal, M.C. Hsu, FreeSpan:
Frequent pattern-projected sequential pattern mining. In KDD’00.

f_list: b:5, c:4, a:3, d:3, e:3, f:2


Sequence Database SDB
All seq. pat. can be divided into 6 subsets:
< (bd) c b (ac) > •Seq. pat. containing item f
< (bf) (ce) b (fg) > •Those containing e but no f
< (ah) (bf) a b f > •Those containing d but no e nor f
< (be) (ce) d > •Those containing a but no d, e or f
< a (bd) b c b (ade) > •Those containing c but no a, d, e or f
•Those containing only item b
CS590D 149
From FreeSpan to PrefixSpan:
Why?
• Freespan:
– Projection-based: No candidate sequence needs to be
generated
– But projection can be performed at any point in the
sequence, and the projected sequences do not
shrink much
• PrefixSpan
– Projection-based
– But only prefix-based projection: less projections and
quickly shrinking sequences
CS590D 150
Prefix and Suffix (Projection)
• <a>, <aa>, <a(ab)> and <a(abc)> are prefixes of
sequence <a(abc)(ac)d(cf)>
• Given sequence <a(abc)(ac)d(cf)>

Prefix Suffix (Prefix-Based Projection)

<a> <(abc)(ac)d(cf)>
<aa> <(_bc)(ac)d(cf)>
<ab> <(_c)(ac)d(cf)>
CS590D 151
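A small sketch (our own illustration) of prefix-based projection for single-item prefixes: a sequence is a list of item sets, items inside an element follow a fixed (alphabetical) order, and when the projection point falls inside an element only the items after it remain (the "(_x)" part of the slide's notation). This mirrors the table above, not PrefixSpan's internal representation.

# Prefix projection on a single item.
def project(sequence, item):
    for pos, element in enumerate(sequence):
        if item in element:
            rest = {x for x in element if x > item}          # partial element "(_...)"
            return ([rest] if rest else []) + list(sequence[pos + 1:])
    return None                                               # item absent: empty projection

s = [{"a"}, {"a", "b", "c"}, {"a", "c"}, {"d"}, {"c", "f"}]   # <a(abc)(ac)d(cf)>
p_a = project(s, "a")
print(p_a)                 # <(abc)(ac)d(cf)>
print(project(p_a, "a"))   # <(_bc)(ac)d(cf)>
print(project(p_a, "b"))   # <(_c)(ac)d(cf)>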
Mining Sequential Patterns by
Prefix Projections
• Step 1: find length-1 sequential patterns
– <a>, <b>, <c>, <d>, <e>, <f>
• Step 2: divide the search space. The complete set of sequential patterns can be partitioned into 6 subsets:
– The ones having prefix <a>;
– The ones having prefix <b>;
– …
– The ones having prefix <f>

SID   sequence
10    <a(abc)(ac)d(cf)>
20    <(ad)c(bc)(ae)>
30    <(ef)(ab)(df)cb>
40    <eg(af)cbc>
CS590D 152
Finding Seq. Patterns with
Prefix <a>
• Only need to consider projections w.r.t. <a>
– <a>-projected database: <(abc)(ac)d(cf)>, <(_d)c(bc)(ae)>, <(_b)(df)cb>, <(_f)cbc>
• Find all the length-2 sequential patterns having prefix <a>: <aa>, <ab>, <(ab)>, <ac>, <ad>, <af>
– Further partition into 6 subsets
• Having prefix <aa>;
• …
• Having prefix <af>

SID   sequence
10    <a(abc)(ac)d(cf)>
20    <(ad)c(bc)(ae)>
30    <(ef)(ab)(df)cb>
40    <eg(af)cbc>
CS590D 153
Completeness of PrefixSpan
SDB:
SID   sequence
10    <a(abc)(ac)d(cf)>
20    <(ad)c(bc)(ae)>
30    <(ef)(ab)(df)cb>
40    <eg(af)cbc>

Length-1 sequential patterns: <a>, <b>, <c>, <d>, <e>, <f>
Partition by prefix: having prefix <a>, having prefix <b>, …, having prefix <f>

<a>-projected database:         <b>-projected database:   …
<(abc)(ac)d(cf)>
<(_d)c(bc)(ae)>
<(_b)(df)cb>
<(_f)cbc>

Length-2 sequential patterns with prefix <a>: <aa>, <ab>, <(ab)>, <ac>, <ad>, <af>
Further partition: having prefix <aa> → <aa>-projected DB, …, having prefix <af> → <af>-projected DB
CS590D 154
Efficiency of PrefixSpan

• No candidate sequence needs to be generated

• Projected databases keep shrinking

• Major cost of PrefixSpan: constructing projected


databases
– Can be improved by bi-level projections

CS590D 155
Optimization Techniques in
PrefixSpan
• Physical projection vs. pseudo-projection
– Pseudo-projection may reduce the effort of
projection when the projected database fits in
main memory
• Parallel projection vs. partition projection
– Partition projection may avoid the blowup of
disk space
CS590D 156
Speed-up by Pseudo-
projection
• Major cost of PrefixSpan: projection
– Postfixes of sequences often appear
repeatedly in recursive projected databases
• When (projected) database can be held in main
memory, use pointers to form projections
– Pointer to the sequence s=<a(abc)(ac)d(cf)>
<a>
– Offset of the postfix
s|<a>: ( , 2) <(abc)(ac)d(cf)>
<ab>
s|<ab>: ( , 4) <(_c)(ac)d(cf)>
CS590D 157
Pseudo-Projection vs. Physical
Projection
• Pseudo-projection avoids physically copying
postfixes
– Efficient in running time and space when database
can be held in main memory
• However, it is not efficient when database
cannot fit in main memory
– Disk-based random accessing is very costly
• Suggested Approach:
– Integration of physical and pseudo-projection
– Swapping to pseudo-projection when the data set fits
in memory
CS590D 158
PrefixSpan Is Faster than GSP
and FreeSpan
[Chart: runtime in seconds vs. support threshold (%) for PrefixSpan-1, PrefixSpan-2, FreeSpan, and GSP]
CS590D 159
Effect of Pseudo-Projection
[Chart: runtime in seconds vs. support threshold (%) for PrefixSpan-1, PrefixSpan-2, PrefixSpan-1 (Pseudo), and PrefixSpan-2 (Pseudo)]
CS590D 160
Mining Association Rules in
Large Databases
• Association rule mining
• Algorithms for scalable mining of (single-dimensional Boolean)
association rules in transactional databases
• Mining various kinds of association/correlation rules
• Constraint-based association mining
• Sequential pattern mining
• Applications/extensions of frequent pattern mining
• Summary

CS590D 161
Associative Classification
• Mine possible association rules (PRs) of the form condset → c
  – condset: a set of attribute-value pairs
  – c: a class label
• Build Classifier
– Organize rules according to decreasing precedence
based on confidence and support
• B. Liu, W. Hsu & Y. Ma. Integrating classification and
association rule mining. In KDD’98

CS590D 162
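A hedged sketch of the rule-organization step described above (not the full CBA classifier builder): candidate rules are ordered by decreasing confidence, ties broken by decreasing support, and the first rule whose condset is satisfied classifies a record. All rules, attribute values, and names here are invented.

# Order class-association rules by precedence (confidence, then support)
rules = [
    ({"age=young", "income=high"}, "buys_pc", 0.80, 0.10),  # (condset, class, conf, sup)
    ({"age=young"},                "buys_pc", 0.75, 0.25),
    ({"income=low"},               "no_pc",   0.80, 0.15),
]

def precedence(rule):
    condset, label, conf, sup = rule
    return (-conf, -sup)              # higher confidence first, then higher support

classifier = sorted(rules, key=precedence)

def classify(record, default_label="no_pc"):
    # The first rule whose condset is contained in the record fires
    for condset, label, conf, sup in classifier:
        if condset <= record:
            return label
    return default_label

print(classify({"age=young", "income=high", "student=yes"}))   # buys_pc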
Spatial and Multi-Media Association: A
Progressive Refinement Method
• Why progressive refinement?
– Mining operator can be expensive or cheap, fine or
rough
– Trade speed with quality: step-by-step refinement.
• Superset coverage property:
  – Preserve all the positive answers—allow false positives but no false negatives.
• Two- or multi-step mining:
– First apply rough/cheap operator (superset coverage)
– Then apply expensive algorithm on a substantially
reduced candidate set (Koperski & Han, SSD’95).
CS590D 166
Progressive Refinement Mining
of Spatial Associations
• Hierarchy of spatial relationship:
– “g_close_to”: near_by, touch, intersect, contain, etc.
– First search for rough relationship and then refine it.
• Two-step mining of spatial association:
– Step 1: rough spatial computation (as a filter)
• Using MBR or R-tree for rough estimation.
– Step 2: detailed spatial algorithm (as refinement)
• Apply only to those objects which have passed the rough
spatial association test (no less than min_support)

CS590D 167
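The two-step idea can be illustrated with a toy sketch: a cheap bounding-box (MBR) test that may admit false positives but never drops a true pair, followed by an exact distance check on the surviving candidates. Object geometries, the eps threshold, and all function names are invented for the example; support counting is omitted.

def mbr(points):
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs), max(ys))

def mbr_close(a, b, eps):
    ax0, ay0, ax1, ay1 = mbr(a)
    bx0, by0, bx1, by1 = mbr(b)
    # Rough test: eps-expanded bounding boxes overlap (superset coverage)
    return (ax0 - eps <= bx1 and bx0 - eps <= ax1 and
            ay0 - eps <= by1 and by0 - eps <= ay1)

def exact_close(a, b, eps):
    # Expensive test: some pair of points lies within distance eps
    return any((px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2
               for px, py in a for qx, qy in b)

objects = {"park": [(0, 0), (1, 0)], "school": [(1.5, 0.2)], "mall": [(9, 9)]}
eps = 1.0
candidates = [(x, y) for x in objects for y in objects
              if x < y and mbr_close(objects[x], objects[y], eps)]   # step 1: filter
refined = [(x, y) for x, y in candidates
           if exact_close(objects[x], objects[y], eps)]              # step 2: refine
print(refined)   # [('park', 'school')]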
Mining Multimedia Associations

• Correlations with color, spatial relationships, etc.
• From coarse-resolution to fine-resolution mining
CS590D 168
Further Evolution of PrefixSpan

• Closed- and max-sequential patterns
  – Finding only the most meaningful (longest) sequential patterns
• Constraint-based sequential pattern growth
  – Adding user-specified constraints
• From sequential patterns to structured patterns
  – Beyond sequential patterns: mining structured patterns in XML documents
CS590D 169
Closed- and Max- Sequential
Patterns
• A closed sequential pattern is a frequent sequence s such that no proper super-sequence of s has the same support count as s
• A max sequential pattern is a sequential pattern p such that no proper super-pattern of p is frequent
• Benefit of the notion of closed sequential patterns:
  – {<a1 a2 … a50>, <a1 a2 … a100>}, with min_sup = 1
  – There are 2^100 sequential patterns, but only 2 are closed
• Similar benefits hold for max-sequential patterns

CS590D 170
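Operationally, the two notions can be checked by comparing each frequent pattern against the others, as in the naive sketch below (single-item sequences; the pattern list is invented). Real miners fold these checks into the search rather than post-processing the full result.

def is_subseq(s, t):
    """True if s is a (not necessarily contiguous) subsequence of t."""
    it = iter(t)
    return all(x in it for x in s)

def closed_and_max(patterns):
    """patterns: list of (pattern tuple, support). Returns (closed, max) lists."""
    closed, maximal = [], []
    for p, sup in patterns:
        super_same_sup = any(p != q and is_subseq(p, q) and sup == s2
                             for q, s2 in patterns)
        super_frequent = any(p != q and is_subseq(p, q) for q, _ in patterns)
        if not super_same_sup:
            closed.append((p, sup))
        if not super_frequent:
            maximal.append((p, sup))
    return closed, maximal

pats = [(("a",), 3), (("a", "b"), 3), (("a", "b", "c"), 2)]
print(closed_and_max(pats))
# closed: ('a','b') with support 3 and ('a','b','c') with support 2; max: ('a','b','c')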
Methods for Mining Closed-
and Max- Sequential Patterns
• PrefixSpan or FreeSpan can be viewed as projection-
guided depth-first search
• For mining max- sequential patterns, any sequence
which does not contain anything beyond the already
discovered ones will be removed from the projected DB
– {<a1 a2 … a50>, <a1 a2 … a100>}, with min_sup = 1
– If we have found a max-sequential pattern <a1 a2 …
a100>, nothing will be projected in any projected DB
• Similar ideas can be applied for mining closed-
sequential-patterns
CS590D 171
Constraint-Based Sequential
Pattern Mining
• Constraint-based sequential pattern mining
– Constraints: User-specified, for focused mining of desired patterns
– How to explore efficient mining with constraints? — Optimization
• Classification of constraints
– Anti-monotone: e.g., value_sum(S) < 150, min(S) > 10
– Monotone: e.g., count(S) > 5, S ⊇ {PC, digital_camera}
– Succinct: e.g., length(S) ≤ 10, S ⊆ {Pentium, MS/Office, MS/Money}
– Convertible: e.g., value_avg(S) < 25, profit_sum(S) > 160, max(S)/avg(S) < 2, median(S) – min(S) > 5
– Inconvertible: e.g., avg(S) – median(S) = 0

CS590D 172
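For concreteness, the constraint classes can be read as predicates over a pattern S. The sketch below encodes three of the examples above; the item values are invented for illustration.

value = {"PC": 120, "digital_camera": 60, "MS/Office": 30}   # invented item values

def anti_monotone(S):    # value_sum(S) < 150: once violated, it stays violated as S grows
    return sum(value[x] for x in S) < 150

def monotone(S):         # S ⊇ {PC, digital_camera}: once satisfied, it stays satisfied
    return {"PC", "digital_camera"} <= S

def succinct(S):         # length(S) <= 10: satisfying patterns can be enumerated directly
    return len(S) <= 10

S = {"PC", "digital_camera"}
print(anti_monotone(S), monotone(S), succinct(S))   # False True True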
Sequential Pattern Growth for
Constraint-Based Mining
• Efficient mining with convertible constraints
– Not solvable by candidate generation-and-test methodology
– Easily push-able into the sequential pattern growth framework
• Example: push avg(S) < 25 in frequent pattern growth
– project items in value (price/profit depending on mining semantics)
ascending/descending order for sequential pattern growth
– Grow each pattern by sequential pattern growth
– If avg(current_pattern) ≥ 25, toss the current_pattern
• Why?—future growths always make it bigger
• But why not candidate generation?—no structure or ordering in growth

CS590D 173
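A hedged sketch of pushing avg(S) < 25 into pattern growth as described above: items are enumerated in value-ascending order, and a branch is abandoned once the running average reaches 25, since every later item has a value at least as large as the current maximum. Frequency counting is omitted and the item values are invented.

values = {"a": 5, "b": 10, "c": 20, "d": 30, "e": 40}   # invented item values

def grow(prefix, remaining, results):
    """Grow patterns in value-ascending order, pruning once avg >= 25."""
    for i, item in enumerate(remaining):
        pattern = prefix + [item]
        avg = sum(values[x] for x in pattern) / len(pattern)
        if avg >= 25:
            # Every unused item has value >= the current maximum, so any
            # further growth of this branch keeps the average >= 25: prune.
            continue
        results.append(pattern)
        grow(pattern, remaining[i + 1:], results)
    return results

items = sorted(values, key=values.get)      # value-ascending projection order
print(grow([], items, []))                  # only patterns with avg(S) < 25 survive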
From Sequential Patterns to
Structured Patterns
• Sets, sequences, trees and other structures
– Transaction DB: Sets of items
• {{i1, i2, …, im}, …}
– Seq. DB: Sequences of sets:
• {<{i1, i2}, …, {im, in, ik}>, …}
– Sets of Sequences:
• {{<i1, i2>, …, <im, in, ik>}, …}
– Sets of trees (each element being a tree):
• {t1, t2, …, tn}
• Applications: Mining structured patterns in XML documents

CS590D 174
Mining Association Rules in
Large Databases
• Association rule mining
• Algorithms for scalable mining of (single-dimensional Boolean)
association rules in transactional databases
• Mining various kinds of association/correlation rules
• Constraint-based association mining
• Sequential pattern mining
• Applications/extensions of frequent pattern mining
• Summary

CS590D 175
Frequent-Pattern Mining:
Achievements
• Frequent pattern mining—an important task in data mining
• Frequent pattern mining methodology
– Candidate generation & test vs. projection-based (frequent-pattern
growth)
– Vertical vs. horizontal format
– Various optimization methods: database partition, scan reduction, hash
tree, sampling, border computation, clustering, etc.
• Related frequent-pattern mining algorithm: scope extension
– Mining closed frequent itemsets and max-patterns (e.g., MaxMiner,
CLOSET, CHARM, etc.)
– Mining multi-level, multi-dimensional frequent patterns with flexible
support constraints
– Constraint pushing for mining optimization
– From frequent patterns to correlation and causality

CS590D 176
Frequent-Pattern Mining:
Applications
• Related problems which need frequent pattern mining
– Association-based classification
– Iceberg cube computation
– Database compression by fascicles and frequent
patterns
– Mining sequential patterns (GSP, PrefixSpan, SPADE,
etc.)
– Mining partial periodicity, cyclic associations, etc.
– Mining frequent structures, trends, etc.
• Typical application examples
– Market-basket analysis, Weblog analysis, DNA
mining, etc.

CS590D 177
Frequent-Pattern Mining:
Research Problems
• Multi-dimensional gradient analysis: patterns regarding
changes and differences
– Not just counts—other measures, e.g., avg(profit)
• Mining top-k frequent patterns without support constraint
• Mining fault-tolerant associations
– “3 out of 4 courses excellent” leads to A in data mining
• Fascicles and database compression by frequent pattern
mining
• Partial periodic patterns
• DNA sequence analysis and pattern classification
CS590D 178
References: Frequent-pattern
Mining Methods
• R. Agarwal, C. Aggarwal, and V. V. V. Prasad. A tree projection algorithm for
generation of frequent itemsets. Journal of Parallel and Distributed
Computing, 2000.
• R. Agrawal, T. Imielinski, and A. Swami. Mining association rules between
sets of items in large databases. SIGMOD'93, 207-216, Washington, D.C.
• R. Agrawal and R. Srikant. Fast algorithms for mining association rules.
VLDB'94 487-499, Santiago, Chile.
• J. Han, J. Pei, and Y. Yin: “Mining frequent patterns without candidate
generation”. In Proc. ACM-SIGMOD’2000, pp. 1-12, Dallas, TX, May 2000.
• H. Mannila, H. Toivonen, and A. I. Verkamo. Efficient algorithms for
discovering association rules. KDD'94, 181-192, Seattle, WA, July 1994.

CS590D 179
References: Frequent-pattern
Mining Methods
• A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm for
mining association rules in large databases. VLDB'95, 432-443, Zurich,
Switzerland.
• C. Silverstein, S. Brin, R. Motwani, and J. Ullman. Scalable techniques for
mining causal structures. VLDB'98, 594-605, New York, NY.
• R. Srikant and R. Agrawal. Mining generalized association rules. VLDB'95,
407-419, Zurich, Switzerland, Sept. 1995.
• R. Srikant and R. Agrawal. Mining quantitative association rules in large
relational tables. SIGMOD'96, 1-12, Montreal, Canada.
• H. Toivonen. Sampling large databases for association rules. VLDB'96,
134-145, Bombay, India, Sept. 1996.
• M.J. Zaki, S. Parthasarathy, M. Ogihara, and W. Li. New algorithms for fast
discovery of association rules. KDD’97. August 1997.

CS590D 180
References: Frequent-pattern
Mining (Performance
Improvements)
• S. Brin, R. Motwani, J. D. Ullman, and S. Tsur. Dynamic itemset counting
and implication rules for market basket analysis. SIGMOD'97, Tucson,
Arizona, May 1997.
• D.W. Cheung, J. Han, V. Ng, and C.Y. Wong. Maintenance of discovered
association rules in large databases: An incremental updating technique.
ICDE'96, New Orleans, LA.
• T. Fukuda, Y. Morimoto, S. Morishita, and T. Tokuyama. Data mining using
two-dimensional optimized association rules: Scheme, algorithms, and
visualization. SIGMOD'96, Montreal, Canada.
• E.-H. Han, G. Karypis, and V. Kumar. Scalable parallel data mining for
association rules. SIGMOD'97, Tucson, Arizona.
• J.S. Park, M.S. Chen, and P.S. Yu. An effective hash-based algorithm for
mining association rules. SIGMOD'95, San Jose, CA, May 1995.

CS590D 181
References: Frequent-pattern Mining
(Performance Improvements)
• G. Piatetsky-Shapiro. Discovery, analysis, and presentation of strong rules. In G. Piatetsky-Shapiro and W. J. Frawley, Knowledge Discovery in Databases. AAAI/MIT Press, 1991.
• J.S. Park, M.S. Chen, and P.S. Yu. An effective hash-based algorithm for mining
association rules. SIGMOD'95, San Jose, CA.
• S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with
relational database systems: Alternatives and implications. SIGMOD'98, Seattle, WA.
• K. Yoda, T. Fukuda, Y. Morimoto, S. Morishita, and T. Tokuyama. Computing
optimized rectilinear regions for association rules. KDD'97, Newport Beach, CA, Aug.
1997.
• M. J. Zaki, S. Parthasarathy, M. Ogihara, and W. Li. Parallel algorithm for discovery of
association rules. Data Mining and Knowledge Discovery, 1:343-374, 1997.

CS590D 182
References: Frequent-pattern Mining (Multi-
level, correlation, ratio rules, etc.)
• S. Brin, R. Motwani, and C. Silverstein. Beyond market basket: Generalizing association rules to correlations.
SIGMOD'97, 265-276, Tucson, Arizona.
• J. Han and Y. Fu. Discovery of multiple-level association rules from large databases. VLDB'95, 420-431, Zurich,
Switzerland.
• M. Klemettinen, H. Mannila, P. Ronkainen, H. Toivonen, and A.I. Verkamo. Finding interesting rules from large
sets of discovered association rules. CIKM'94, 401-408, Gaithersburg, Maryland.
• F. Korn, A. Labrinidis, Y. Kotidis, and C. Faloutsos. Ratio rules: A new paradigm for fast, quantifiable data mining.
VLDB'98, 582-593, New York, NY
• B. Lent, A. Swami, and J. Widom. Clustering association rules. ICDE'97, 220-231, Birmingham, England.
• R. Meo, G. Psaila, and S. Ceri. A new SQL-like operator for mining association rules. VLDB'96, 122-133, Bombay,
India.
• R.J. Miller and Y. Yang. Association rules over interval data. SIGMOD'97, 452-461, Tucson, Arizona.
• A. Savasere, E. Omiecinski, and S. Navathe. Mining for strong negative associations in a large database of
customer transactions. ICDE'98, 494-502, Orlando, FL, Feb. 1998.
• D. Tsur, J. D. Ullman, S. Abitboul, C. Clifton, R. Motwani, and S. Nestorov. Query flocks: A generalization of
association-rule mining. SIGMOD'98, 1-12, Seattle, Washington.
• J. Pei, A.K.H. Tung, J. Han. Fault-Tolerant Frequent Pattern Mining: Problems and Challenges. SIGMOD
DMKD’01, Santa Barbara, CA.

CS590D 183
References: Mining Max-patterns
and Closed itemsets
• R. J. Bayardo. Efficiently mining long patterns from databases. SIGMOD'98,
85-93, Seattle, Washington.
• J. Pei, J. Han, and R. Mao, "CLOSET: An Efficient Algorithm for Mining
Frequent Closed Itemsets", Proc. 2000 ACM-SIGMOD Int. Workshop on
Data Mining and Knowledge Discovery (DMKD'00), Dallas, TX, May 2000.
• N. Pasquier, Y. Bastide, R. Taouil, and L. Lakhal. Discovering frequent
closed itemsets for association rules. ICDT'99, 398-416, Jerusalem, Israel,
Jan. 1999.
• M. Zaki. Generating Non-Redundant Association Rules. KDD'00. Boston,
MA. Aug. 2000
• M. Zaki. CHARM: An Efficient Algorithm for Closed Association Rule Mining,
SIAM’02

CS590D 184
References: Constraint-base
Frequent-pattern Mining
• G. Grahne, L. Lakshmanan, and X. Wang. Efficient mining of constrained correlated sets. ICDE'00, 512-521, San
Diego, CA, Feb. 2000.
• Y. Fu and J. Han. Meta-rule-guided mining of association rules in relational databases. KDOOD'95, 39-46,
Singapore, Dec. 1995.
• J. Han, L. V. S. Lakshmanan, and R. T. Ng, "Constraint-Based, Multidimensional Data Mining", COMPUTER
(special issues on Data Mining), 32(8): 46-50, 1999.
• L. V. S. Lakshmanan, R. Ng, J. Han and A. Pang, "Optimization of Constrained Frequent Set Queries with 2-
Variable Constraints", SIGMOD’99
• R. Ng, L.V.S. Lakshmanan, J. Han & A. Pang. “Exploratory mining and pruning optimizations of constrained
association rules.” SIGMOD’98
• J. Pei, J. Han, and L. V. S. Lakshmanan, "Mining Frequent Itemsets with Convertible Constraints", Proc. 2001 Int.
Conf. on Data Engineering (ICDE'01), April 2001.
• J. Pei and J. Han "Can We Push More Constraints into Frequent Pattern Mining?", Proc. 2000 Int. Conf. on
Knowledge Discovery and Data Mining (KDD'00), Boston, MA, August 2000.
• R. Srikant, Q. Vu, and R. Agrawal. Mining association rules with item constraints. KDD'97, 67-73, Newport Beach,
California

CS590D 185
References: Sequential Pattern
Mining Methods
• R. Agrawal and R. Srikant. Mining sequential patterns. ICDE'95, 3-
14, Taipei, Taiwan.
• R. Srikant and R. Agrawal. Mining sequential patterns:
Generalizations and performance improvements. EDBT’96.
• J. Han, J. Pei, B. Mortazavi-Asl, Q. Chen, U. Dayal, M.-C. Hsu,
"FreeSpan: Frequent Pattern-Projected Sequential Pattern Mining",
Proc. 2000 Int. Conf. on Knowledge Discovery and Data Mining
(KDD'00), Boston, MA, August 2000.
• H. Mannila, H Toivonen, and A. I. Verkamo. Discovery of frequent
episodes in event sequences. Data Mining and Knowledge
Discovery, 1:259-289, 1997.

CS590D 186
References: Sequential Pattern
Mining Methods
• J. Pei, J. Han, H. Pinto, Q. Chen, U. Dayal, and M.-C. Hsu, "PrefixSpan:
Mining Sequential Patterns Efficiently by Prefix-Projected Pattern Growth",
Proc. 2001 Int. Conf. on Data Engineering (ICDE'01), Heidelberg, Germany,
April 2001.
• B. Ozden, S. Ramaswamy, and A. Silberschatz. Cyclic association rules.
ICDE'98, 412-421, Orlando, FL.
• S. Ramaswamy, S. Mahajan, and A. Silberschatz. On the discovery of
interesting patterns in association rules. VLDB'98, 368-379, New York, NY.
• M.J. Zaki. Efficient enumeration of frequent sequences. CIKM’98, November 1998.
• M.N. Garofalakis, R. Rastogi, K. Shim: SPIRIT: Sequential Pattern Mining
with Regular Expression Constraints. VLDB 1999: 223-234, Edinburgh,
Scotland.

CS590D 187
References: Frequent-pattern Mining
in Spatial, Multimedia, Text & Web
Databases
• K. Koperski, J. Han, and G. B. Marchisio, "Mining Spatial and Image Data through Progressive Refinement Methods", Revue Internationale de Géomatique (European Journal of GIS and Spatial Analysis), 9(4):425-440, 1999.
• A. K. H. Tung, H. Lu, J. Han, and L. Feng, "Breaking the Barrier of Transactions: Mining Inter-Transaction
Association Rules", Proc. 1999 Int. Conf. on Knowledge Discovery and Data Mining (KDD'99), San Diego, CA,
Aug. 1999, pp. 297-301.
• J. Han, G. Dong and Y. Yin, "Efficient Mining of Partial Periodic Patterns in Time Series Database", Proc. 1999 Int.
Conf. on Data Engineering (ICDE'99), Sydney, Australia, March 1999, pp. 106-115
• H. Lu, L. Feng, and J. Han, "Beyond Intra-Transaction Association Analysis: Mining Multi-Dimensional Inter-Transaction Association Rules", ACM Transactions on Information Systems (TOIS’00), 18(4): 423-454, 2000.
• O. R. Zaiane, M. Xin, and J. Han, "Discovering Web Access Patterns and Trends by Applying OLAP and Data Mining Technology on Web Logs", Proc. Advances in Digital Libraries Conf. (ADL'98), Santa Barbara, CA, April 1998, pp. 19-29.
• O. R. Zaiane, J. Han, and H. Zhu, "Mining Recurrent Items in Multimedia with Progressive Resolution
Refinement", ICDE'00, San Diego, CA, Feb. 2000, pp. 461-470

CS590D 188
References: Frequent-pattern Mining
for Classification and Data Cube
Computation
• K. Beyer and R. Ramakrishnan. Bottom-up computation of sparse and iceberg cubes.
SIGMOD'99, 359-370, Philadelphia, PA, June 1999.
• M. Fang, N. Shivakumar, H. Garcia-Molina, R. Motwani, and J. D. Ullman. Computing
iceberg queries efficiently. VLDB'98, 299-310, New York, NY, Aug. 1998.
• J. Han, J. Pei, G. Dong, and K. Wang, “Computing Iceberg Data Cubes with Complex
Measures”, Proc. ACM-SIGMOD’2001, Santa Barbara, CA, May 2001.
• M. Kamber, J. Han, and J. Y. Chiang. Metarule-guided mining of multi-dimensional
association rules using data cubes. KDD'97, 207-210, Newport Beach, California.
• K. Beyer and R. Ramakrishnan. Bottom-up computation of sparse and iceberg cubes.
SIGMOD’99
• T. Imielinski, L. Khachiyan, and A. Abdulghani. Cubegrades: Generalizing association
rules. Technical Report, Aug. 2000

CS590D 189
