L2: Frequent Itemsets Mining and Association Rules
Note to other teachers and users of these slides: we would be delighted if you found this
material useful for giving your own lectures. Feel free to use these slides verbatim, or to
modify them to fit your own needs. If you make use of a significant portion of these slides
in your own lecture, please include this message, or a link to our web site: http://www.mmds.org
◾ Items = products; Baskets = sets of products
someone bought in one trip to the store
◾ Real market baskets: Chain stores keep TBs of
data about what customers buy together
▪ Tells how typical customers navigate stores, lets
them position tempting items together:
▪ Apocryphal story of “diapers and beer” discovery
▪ Used to position potato chips between diapers and beer to
enhance sales of potato chips
◾ Amazon’s ‘people who bought X also bought Y’
◾ Baskets = sentences; Items = documents in
which those sentences appear
▪ Items that appear together too often could
represent plagiarism
▪ Notice items do not have to be “in” baskets
◾ Items = {milk, coke, pepsi, beer, juice}
◾ Support threshold = 3 baskets
B1 = {m, c, b} B2 = {m, p, j}
B3 = {m, b} B4 = {c, j}
B5 = {m, p, b} B6 = {m, c, b, j}
B7 = {c, b, j} B8 = {b, c}
◾ Frequent itemsets: {m}, {c}, {b}, {j},
{m,b} , {b,c} , {c,j}.
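As a concrete illustration (not part of the slides), a minimal brute-force Python sketch that recovers exactly these frequent itemsets from the eight example baskets; all names are illustrative:

```python
from itertools import combinations

# The eight example baskets (m=milk, c=coke, p=pepsi, b=beer, j=juice)
baskets = [{'m','c','b'}, {'m','p','j'}, {'m','b'}, {'c','j'},
           {'m','p','b'}, {'m','c','b','j'}, {'c','b','j'}, {'b','c'}]
s = 3  # support threshold (number of baskets)

items = sorted(set().union(*baskets))
frequent = []
# Enumerate every candidate itemset and count the baskets containing it
for k in range(1, len(items) + 1):
    for cand in combinations(items, k):
        support = sum(1 for b in baskets if set(cand) <= b)
        if support >= s:
            frequent.append((set(cand), support))

for itemset, support in frequent:
    print(itemset, support)   # e.g. {'m'} 5 ... {'b', 'm'} 4 ...
```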
◾ Define: Association Rules:
If-then rules about the contents of baskets
◾ {i1, i2,…,ik} → {j} means: “if a basket
contains all of i1,…,ik then it is likely to
contain {j}”
◾ In practice there are many rules, want to find
significant/interesting ones!
◾ Confidence of association rule is the
probability of j given I = {i1,…,ik}
conf(I → j) = support(I ∪ j) / support(I) = P(j | I) = P(I, j) / P(I)
◾ Not all high-confidence rules are interesting
▪ The rule X → milk may have high confidence for many
itemsets X, because milk is just purchased very often
(independent of X)
◾ Interest of an association rule I → j:
abs. difference between its confidence and
the fraction of baskets that contain j
Interest(I → j) = |conf(I → j) − P(j)| = |P(j | I) − P(j)|
▪ Interesting rules: those with high interest values
(usually above 0.5)
▪ Why absolute value? Want to capture both positive
and negative associations between itemsets and items
B1 = {m, c, b} B2 = {m, p, j}
B3 = {m, b} B4= {c, j}
B5 = {m, p, b} B6 = {m, c, b, j}
B7 = {c, b, j} B8 = {b, c}
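Using these baskets, a minimal sketch (not from the slides) that computes confidence and interest for one illustrative rule, {m, b} → c:

```python
from fractions import Fraction

baskets = [{'m','c','b'}, {'m','p','j'}, {'m','b'}, {'c','j'},
           {'m','p','b'}, {'m','c','b','j'}, {'c','b','j'}, {'b','c'}]

def support(itemset):
    """Number of baskets containing every item of the itemset."""
    return sum(1 for b in baskets if itemset <= b)

def confidence(I, j):
    """conf(I -> j) = support(I ∪ {j}) / support(I)."""
    return Fraction(support(I | {j}), support(I))

def interest(I, j):
    """|conf(I -> j) - P(j)|, where P(j) is the fraction of baskets containing j."""
    return abs(confidence(I, j) - Fraction(support({j}), len(baskets)))

print(confidence({'m','b'}, 'c'))  # 1/2
print(interest({'m','b'}, 'c'))    # 1/8 -> low interest: not a very interesting rule
```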
Problem: Find all association rules with
support ≥s and confidence ≥c
▪ Note: Support of an association rule is the support of
the entire set of items in the rule (left side + right
side)
◾ Hard part: Finding the frequent itemsets!
▪ If {i1, i2,…, ik} → {j} has high support and
confidence, then both {i1, i2,…, ik} and
{i1, i2,…,ik, j} will be “frequent”
conf(I → j) = support(I ∪ j) / support(I)
◾ Step 1: Find all frequent itemsets I
▪ (we will explain this next)
◾ Step 2: Rule generation
▪ For every subset A of I, generate a rule A → I \ A
▪ Since I is frequent, A is also frequent
▪ Variant 1: Single pass to compute the rule confidence
▪ confidence(A,B→C,D) = support(A,B,C,D) / support(A,B)
▪ Variant 2:
▪ Observation: If A,B,C→D is below confidence, then so is A,B→C,D
▪ Can generate “bigger” rules from smaller ones!
▪ Output the rules above the confidence threshold
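A minimal sketch of Step 2 (rule generation) above, under the assumption that frequent itemsets and their support counts have already been computed into a dict; the function name and dict layout are illustrative:

```python
from itertools import combinations

def generate_rules(support, min_conf):
    """support: dict mapping frozenset itemsets to support counts
       (must contain every frequent itemset and all of its subsets).
       Yields (A, B, conf) meaning rule A -> B with confidence conf >= min_conf."""
    for I, supp_I in support.items():
        if len(I) < 2:
            continue
        # Every non-empty proper subset A of I yields a candidate rule A -> I \ A
        for r in range(1, len(I)):
            for A in map(frozenset, combinations(I, r)):
                conf = supp_I / support[A]   # support(I) / support(A)
                if conf >= min_conf:
                    yield A, I - A, conf

# Supports from the example below (s = 3, confidence threshold c = 0.75)
support = {frozenset('b'): 6, frozenset('c'): 6, frozenset('m'): 5, frozenset('j'): 4,
           frozenset('bm'): 4, frozenset('bc'): 5, frozenset('cm'): 3, frozenset('cj'): 3,
           frozenset('bcm'): 3}
for A, B, conf in generate_rules(support, 0.75):
    print(set(A), '->', set(B), conf)   # e.g. {'b'} -> {'c'} 0.833..., {'b', 'm'} -> {'c'} 0.75
```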
B1 = {m, c, b} B2 = {m, p, j}
B3 = {m, c, b, n} B4= {c, j}
B5 = {m, p, b} B6 = {m, c, b, j}
B7 = {c, b, j} B8 = {b, c}
◾ Support threshold s = 3, confidence c = 0.75
◾ Step 1) Find frequent itemsets:
▪ {b,m} {b,c} {c,m} {c,j} {m,c,b}
◾ Step 2) Generate rules:
▪ b→m: c=4/6   b→c: c=5/6   b,c→m: c=3/5
▪ m→b: c=4/5   …   b,m→c: c=3/4
▪ b→c,m: c=3/6
◾ To reduce the number of rules, we can
post-process them and only output:
▪ Maximal frequent itemsets:
No immediate superset is frequent
▪ Gives more pruning
or
▪ Closed itemsets:
No immediate superset has the same support (> 0)
▪ Stores not only frequent information, but exact
supports/counts
Itemset   Support   Maximal (s=3)   Closed
A         4         No              No
B         5         No              Yes
C         3         No              No
AB        4         Yes             Yes
ABC       2         No              Yes
▪ C is frequent, but superset BC is also frequent (so C is not maximal), and BC has the same support (so C is not closed)
▪ AB is frequent, and its only superset, ABC, is not frequent (so AB is maximal)
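A small illustrative sketch that derives the Maximal/Closed columns from support counts; the AC and BC counts below are assumptions (they are referenced by the notes above but not listed in the table):

```python
def classify(support, s):
    """support: dict {frozenset: count} for every itemset with count > 0.
       Returns {itemset: (is_maximal, is_closed)} for the frequent ones (count >= s)."""
    result = {}
    for I, cnt in support.items():
        if cnt < s:
            continue
        # Immediate supersets of I: itemsets with exactly one extra item
        supersets = [J for J in support if len(J) == len(I) + 1 and I < J]
        is_maximal = all(support[J] < s for J in supersets)    # no frequent superset
        is_closed = all(support[J] < cnt for J in supersets)   # no superset with equal support
        result[I] = (is_maximal, is_closed)
    return result

# Counts as in the table above; AC and BC are assumed values for completeness
support = {frozenset('A'): 4, frozenset('B'): 5, frozenset('C'): 3,
           frozenset('AB'): 4, frozenset('AC'): 2, frozenset('BC'): 3,
           frozenset('ABC'): 2}
for I, (maximal, closed) in classify(support, 3).items():
    print(''.join(sorted(I)), maximal, closed)   # True/False correspond to Yes/No above
```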
◾ Back to finding frequent itemsets
◾ In practice, association-rule algorithms read the basket data in passes, so the main cost is the number of passes over the data
◾ For many frequent-itemset algorithms,
main-memory is the critical resource
▪ As we read baskets, we need to count
something, e.g., occurrences of pairs of items
▪ The number of different things we can count
is limited by main memory
▪ Swapping counts in/out of main-memory is a bad
idea
▪ Q: Why?
◾ The hardest problem often turns out to be
finding the frequent pairs of items {i1, i2}
▪ Why? Freq. pairs are common, freq. triples are rare
▪ Why? Probability of being frequent drops exponentially
with size; number of sets grows more slowly with size
◾ The approach:
▪ We always need to “generate” all the itemsets
▪ But we would like to count (keep track of) only
those itemsets that in the end turn out to be frequent
◾ Scenario:
▪ Imagine we aim to identify frequent pairs
▪ We will need to enumerate all pairs of items
▪ For every basket, enumerate all pairs of items in that basket
▪ But, rather than keeping a count for every pair, we
hope to discard a lot of pairs and only keep track of
the ones that will in the end turn out to be frequent
◾ Naïve approach to finding frequent pairs
◾ Read file once, counting in main memory
the occurrences of each pair:
▪ From each basket b of nb items, generate its
nb(nb-1)/2 pairs by two nested loops
▪ A data structure then keeps count of every pair
◾ Fails if (#items)² exceeds main memory
▪ Remember: #items can be 1M (Wal-Mart) or 10B (Web pages)
▪ Suppose 10⁶ items, counts are 4-byte integers
▪ Number of pairs of items: 10⁶(10⁶ − 1)/2 ≈ 5·10¹¹
▪ Therefore, 2·10¹² bytes (2 terabytes) of memory is needed
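For concreteness, a sketch of this naïve counting approach; it is only viable while all pair counts fit in main memory, which is exactly the limitation described above:

```python
from itertools import combinations
from collections import Counter

def naive_pair_counts(baskets):
    """One pass over the baskets, keeping a count for every pair seen.
       Needs memory for up to ~n(n-1)/2 pair counts, so it fails for large n."""
    counts = Counter()
    for basket in baskets:
        # nb*(nb-1)/2 pairs per basket of nb items
        for pair in combinations(sorted(basket), 2):
            counts[pair] += 1
    return counts

baskets = [{'m','c','b'}, {'m','p','j'}, {'m','b'}, {'c','j'}]
print(naive_pair_counts(baskets).most_common(3))
```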
Goal: Count the number of occurrences of
each pair of items (i,j):
[Figure: matrix of pair counts indexed by Item i and Item j; 4 bytes per pair (Approach 1) vs. 12 bytes per occurring pair (Approach 2).]
◾ Approach 1: Triangular Matrix
▪ n = total number of items
▪ Count pair of items {i, j} only if i < j
▪ Keep pair counts in lexicographic order:
▪ {1,2}, {1,3},…, {1,n}, {2,3}, {2,4},…, {2,n}, {3,4},…
▪ Pair {i, j} is at position: [n(n − 1) − (n − i)(n − i + 1)]/2 + (j − i)
▪ Total number of pairs n(n − 1)/2; total bytes = O(n²)
▪ Triangular Matrix requires 4 bytes per pair
◾ Approach 2 uses 12 bytes per occurring pair
(but only for pairs with count > 0)
◾ Approach 2 beats Approach 1 if less than 1/3 of
possible pairs actually occur
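A minimal sketch of the triangular-matrix layout using the position formula above; class and method names are illustrative. The slide's formula gives 1-based positions, so the code subtracts 1 for a 0-based array index:

```python
class TriangularCounts:
    """Counts for all pairs {i, j}, 1 <= i < j <= n, in a flat array (4 bytes/pair idea)."""
    def __init__(self, n):
        self.n = n
        self.a = [0] * (n * (n - 1) // 2)

    def _pos(self, i, j):
        # Position of pair {i, j} with i < j, per the formula on the slide,
        # converted to a 0-based index into the flat array.
        n = self.n
        return (n * (n - 1) - (n - i) * (n - i + 1)) // 2 + (j - i) - 1

    def add(self, i, j):
        if i > j:
            i, j = j, i
        self.a[self._pos(i, j)] += 1

    def count(self, i, j):
        if i > j:
            i, j = j, i
        return self.a[self._pos(i, j)]

t = TriangularCounts(5)
t.add(2, 4)
print(t.count(4, 2))  # 1
```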
◾ Problem: when there are too many items, all the pairs do not fit into main memory
▪ Can we do better?
Key concepts:
• Monotonicity of “Frequent”
• Notion of Candidate Pairs
• Extension to Larger Itemsets
◾ A two-pass approach called
A-Priori limits the need for
main memory
◾ Key idea: monotonicity
▪ If a set of items I appears at
least s times, so does every subset J of I
◾ Contrapositive for pairs:
If item i does not appear in s baskets, then no
pair including i can appear in s baskets
◾ So, how does A-Priori find freq. pairs?
◾ Pass 1: Read baskets and count in main memory
the # of occurrences of each individual item
▪ Requires only memory proportional to #items
◾ Pass 2: Read baskets again and count in main memory only those pairs where both elements are frequent (from Pass 1); these are the candidate pairs
[Figure: main-memory layout across the two passes. The green box represents the amount of available main memory; smaller boxes show how the memory is used: item counts on Pass 1, counts of pairs of frequent items (candidate pairs) on Pass 2.]
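A compact illustrative sketch of the two passes of A-Priori for frequent pairs; it keeps pair counts in a dictionary rather than a triangular matrix, and the function name apriori_pairs is ours:

```python
from itertools import combinations
from collections import Counter

def apriori_pairs(baskets, s):
    # Pass 1: count individual items
    item_counts = Counter(item for basket in baskets for item in basket)
    frequent_items = {i for i, c in item_counts.items() if c >= s}

    # Pass 2: count only pairs whose elements are both frequent
    pair_counts = Counter()
    for basket in baskets:
        kept = sorted(i for i in basket if i in frequent_items)
        for pair in combinations(kept, 2):
            pair_counts[pair] += 1
    return {p: c for p, c in pair_counts.items() if c >= s}

baskets = [{'m','c','b'}, {'m','p','j'}, {'m','b'}, {'c','j'},
           {'m','p','b'}, {'m','c','b','j'}, {'c','b','j'}, {'b','c'}]
print(apriori_pairs(baskets, 3))  # {('b', 'c'): 4, ('b', 'm'): 4, ('c', 'j'): 3}
```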
◾ You can use the triangular matrix method with n = number of frequent items
▪ May save space compared with storing triples
◾ Trick: re-number frequent items 1, 2, … and keep a table relating new numbers to original item numbers
[Figure: main-memory layout. Pass 1: item counts. Pass 2: table of frequent items (old item IDs) and counts of pairs of frequent items.]
◾ For each k, we construct two sets of
k-tuples (sets of size k):
▪ Ck = candidate k-tuples = those that might be
frequent sets (support > s) based on information
from the pass for k–1
▪ Lk = the set of truly frequent k-tuples
[Figure: C1 = all items; count the items → L1. C2 = all pairs of items from L1; count the pairs → L2. Construction of later candidate sets: to be explained.]
** Note here we generate new candidates by
generating Ck from Lk-1 and L1.
But one can be more careful with candidate
generation. For example, in C3 we know {b,m,j}
cannot be frequent since {m,j} is not frequent.
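A sketch of candidate generation with the more careful pruning mentioned in this note: Ck is built from Lk−1 and L1, then candidates with an infrequent (k−1)-subset are dropped. Names and the example sets are illustrative:

```python
from itertools import combinations

def candidates(L_prev, L1):
    """Generate C_k from L_{k-1} (frozensets of size k-1) and L_1 (frequent items),
       then prune candidates that have an infrequent (k-1)-subset."""
    k = len(next(iter(L_prev))) + 1
    C = {I | {item} for I in L_prev for item in L1 if item not in I}
    # Pruning: every (k-1)-subset of a frequent k-set must itself be in L_{k-1}
    return {c for c in C
            if all(frozenset(sub) in L_prev for sub in combinations(c, k - 1))}

L1 = {'b', 'c', 'm', 'j'}                                              # frequent items
L2 = {frozenset('bm'), frozenset('bc'), frozenset('cm'), frozenset('cj')}  # frequent pairs
print(candidates(L2, L1))  # only {b,c,m} survives; {b,m,j} is pruned since {m,j} not in L2
```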
◾ Observation: If a bucket contains a frequent pair,
then the bucket is surely frequent
◾ However, even without any frequent pair,
a bucket can still be frequent
▪ So, we cannot use the hash to eliminate any
member (pair) of a “frequent” bucket
◾ But, for a bucket with total count less than s,
none of its pairs can be frequent ☺
▪ Pairs that hash to this bucket can be eliminated as
candidates (even if the pair consists of 2 frequent items)
◾ Pass 2:
Only count pairs that hash to frequent buckets
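An illustrative sketch of the PCY idea; the number of buckets and the hash function are placeholders (in practice you use as many buckets as fit in main memory):

```python
from itertools import combinations
from collections import Counter

NUM_BUCKETS = 50     # illustrative; in practice as many as fit in main memory

def bucket(pair):
    # Any hash function of the pair works; only consistency within a run matters
    return hash(pair) % NUM_BUCKETS

def pcy_pairs(baskets, s):
    # Pass 1: count items, and hash each pair to a bucket and count the buckets
    item_counts = Counter()
    bucket_counts = [0] * NUM_BUCKETS
    for b in baskets:
        item_counts.update(b)
        for pair in combinations(sorted(b), 2):
            bucket_counts[bucket(pair)] += 1
    frequent_items = {i for i, c in item_counts.items() if c >= s}
    bitmap = [c >= s for c in bucket_counts]   # True = frequent bucket

    # Pass 2: count only candidate pairs (both items frequent AND frequent bucket)
    pair_counts = Counter()
    for b in baskets:
        for pair in combinations(sorted(i for i in b if i in frequent_items), 2):
            if bitmap[bucket(pair)]:
                pair_counts[pair] += 1
    return {p: c for p, c in pair_counts.items() if c >= s}
```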
◾ Replace the buckets by a bit-vector:
▪ 1 means the bucket count exceeded the support s
(call it a frequent bucket); 0 means it did not
◾ Count all pairs {i, j} that meet the
conditions for being a candidate pair:
1. Both i and j are frequent items
2. The pair {i, j} hashes to a bucket whose bit in
the bit vector is 1 (i.e., a frequent bucket)
[Figure: main-memory picture of PCY. Pass 1: item counts and a hash table for pair counts. Pass 2: frequent items, a bitmap of frequent buckets, and counts of candidate pairs.]
Key concepts:
• Random Sampling Algorithm
• Savasere-Omiecinski-Navathe (SON) Algorithm
• Toivonen’s Algorithm
◾ A-Priori, PCY, etc., take k
passes to find
frequent itemsets of size k
◾ Can we use fewer passes?
◾ Take a random sample of the market baskets
◾ Run the in-memory algorithm on the sample, so we don't pay for disk I/O each time we increase the size of itemsets
▪ Reduce support threshold proportionally to match the sample size
▪ Example: if your sample is 1/100 of the baskets, use s/100 as your support threshold instead of s.
[Figure: main memory holds a copy of the sample baskets plus space for counts.]
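A sketch of this sampling approach, reusing the hypothetical apriori_pairs helper from the A-Priori sketch; the optional second pass over the full data removes false positives:

```python
import random

def sample_frequent_pairs(baskets, s, fraction=0.01, verify=True):
    """Run the in-memory algorithm on a random sample with a scaled-down threshold,
       then optionally verify candidates against the full data in a second pass."""
    sample = [b for b in baskets if random.random() < fraction]
    candidates = apriori_pairs(sample, max(1, int(s * fraction)))   # scaled threshold
    if not verify:
        return set(candidates)   # may contain false positives (and miss false negatives)
    # Second pass over the full data removes false positives
    return {p for p in candidates
            if sum(1 for b in baskets if set(p) <= b) >= s}
```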
◾ To avoid false positives: Optionally, verify that
the candidate pairs are truly frequent in the
entire data set by a second pass
◾ On a second pass, count all the candidate
itemsets and determine which are frequent in
the entire set.
◾ Key “monotonicity” idea: We set the per-chunk
support threshold such that an
itemset cannot be frequent in the
entire dataset unless it is frequent
in at least one subset.
▪ Pigeonhole principle
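A sketch of the SON idea for frequent pairs, again reusing the hypothetical apriori_pairs helper; the chunking and the per-chunk threshold follow the description above:

```python
def son_pairs(baskets, s, num_chunks=4):
    chunk_size = (len(baskets) + num_chunks - 1) // num_chunks
    candidates = set()
    # Pass 1: itemsets frequent in at least one chunk, with threshold s/num_chunks
    for start in range(0, len(baskets), chunk_size):
        chunk = baskets[start:start + chunk_size]
        candidates |= set(apriori_pairs(chunk, max(1, s // num_chunks)))
    # Pass 2: count the candidates over the whole dataset and keep the truly frequent ones
    return {p for p in candidates
            if sum(1 for b in baskets if set(p) <= b) >= s}
```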
Pass 1:
◾ Start with a random sample, but lower the
threshold slightly for the sample:
▪ Example: If the sample is 1% of the baskets, use
s/125 as the support threshold rather than s/100
◾ Find frequent itemsets in the sample
◾ Add the negative border to the itemsets that
are frequent in the sample:
▪ Negative border: An itemset is in the negative
border if it is not frequent in the sample, but all its
immediate subsets are
▪ Immediate subset = “delete exactly one element”
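A sketch that computes the negative border of a collection of sample-frequent itemsets, per the definition above; the itemsets in the usage example are illustrative:

```python
def negative_border(frequent, items):
    """frequent: set of frozensets frequent in the sample; items: all items.
       An itemset is in the negative border if it is not frequent in the sample,
       but every immediate subset (delete exactly one element) is."""
    border = set()
    for I in frequent:
        for item in items:
            if item in I:
                continue
            J = I | {item}             # candidate just outside the frequent collection
            if J in frequent:
                continue
            if all(J - {x} in frequent for x in J):
                border.add(J)
    # Any single item not frequent in the sample is also in the negative border
    border |= {frozenset({i}) for i in items if frozenset({i}) not in frequent}
    return border

freq = {frozenset('A'), frozenset('B'), frozenset('C'),
        frozenset('AB'), frozenset('AC'), frozenset('BC')}
print(negative_border(freq, {'A', 'B', 'C', 'D'}))
# {frozenset({'A', 'B', 'C'}), frozenset({'D'})} (order may vary)
```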
◾ {A,B,C,D} is in the negative border if and only if:
1. It is not frequent in the sample, but
2. All of {A,B,C}, {B,C,D}, {A,C,D}, and {A,B,D} are.
[Figure: frequent itemsets from the sample (singletons, doubletons, tripletons, …) surrounded by the negative border.]
◾ Pass 1:
▪ Start with the random sample, but lower the threshold slightly
for the subset
▪ To the itemsets that are frequent in the sample, add the
negative border of these itemsets
◾ Pass 2:
▪ Count all candidate frequent itemsets from the first pass, and
also count sets in their negative border
◾ If no itemset from the negative border turns out to be
frequent, then we found all the frequent itemsets.
▪ What if we find that something in the negative border is
frequent?
▪ We must start over again with another sample!
▪ Try to choose the support threshold so the probability of failure is low,
while the number of itemsets checked on the second pass fits in main-
memory.
We broke through the negative border. How far does the problem go?
[Figure: frequent itemsets from the sample (singletons, doubletons, tripletons, …) surrounded by the negative border; an itemset beyond the border has turned out to be frequent in the full data.]
◾ If there is an itemset S that is frequent in full data, but not
frequent in the sample, then the negative border contains
at least one itemset that is frequent in the full data.
Proof by contradiction:
◾ Suppose not; i.e.,
1. There is an itemset S frequent in the full data but not
frequent in the sample, and
2. Nothing in the negative border is frequent in the full data
◾ Let T be a smallest subset of S that is not frequent in the sample (by minimality, every proper subset of T is frequent in the sample)
◾ T is frequent in the whole data (S is frequent + monotonicity)
◾ But then T is in the negative border: it is not frequent in the sample, yet all its immediate subsets are (contradiction)