ch03 Assocrules
Chenhao Ma
[email protected]
Supermarket shelf management – Market-basket
model:
¡ Goal: Identify items that are bought together by
sufficiently many customers
¡ Approach: Process the sales data collected with
barcode scanners to find dependencies among
items
¡ A classic rule:
§ If someone buys diapers and milk, then he/she is
likely to buy beer
§ Don’t be surprised if you find six-packs next to diapers!
2
¡ A large set of items
§ e.g., things sold in a supermarket
¡ A large set of baskets
¡ Each basket is a small subset of items

Input:
TID  Items
1    Bread, Coke, Milk
2    Beer, Bread
3    Beer, Coke, Diaper, Milk
4    Beer, Bread, Diaper, Milk
5    Coke, Diaper, Milk
3
¡ Items = products; Baskets = sets of products
someone bought in one trip to the store
¡ Real market baskets: Chain stores keep TBs of
data about what customers buy together
§ Tells how typical customers navigate stores, lets
them position tempting items
§ Suggests tie-in “tricks”, e.g., run sale on diapers
and raise the price of beer
§ Need the rule to occur frequently
¡ Amazon’s “people who bought X also bought Y”
4
¡ Baskets = sentences; Items = documents
containing those sentences
§ Items that appear together too often could
represent plagiarism
§ Notice items do not have to be “in” baskets
¡ For example:
§ Finding communities in graphs (e.g., Twitter)
6
¡ Finding communities in graphs (e.g., Twitter)
¡ Baskets = nodes; Items = outgoing neighbors
§ Searching for complete bipartite subgraphs Ks,t of a
big graph
¡ How?
§ View each node i as a basket Bi (which contains
the nodes that i points to)
§ Ks,t = a set Y of t nodes that occurs in s buckets Bi
[Figure: a complete bipartite subgraph Ks,t, with s nodes on one side and t nodes on the other]
10
¡ Association Rules:
If-then rules about the contents of baskets
¡ {i1, i2,…,ik} → j means: “if a basket contains
all of i1,…,ik then it is likely to contain j”
¡ In practice there are many rules, want to find
significant/interesting ones!
¡ Confidence of this association rule is the
probability of j given I = {i1,…,ik}
conf(I → j) = support(I ∪ j) / support(I)
11
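As an illustration (not part of the slides), a minimal Python sketch of these two quantities, treating each basket as a set of items:

```python
# Minimal sketch (illustration only): support and confidence.
def support(itemset, baskets):
    """Number of baskets that contain every item of `itemset`."""
    return sum(1 for b in baskets if itemset <= b)

def confidence(I, j, baskets):
    """conf(I -> j) = support(I ∪ {j}) / support(I)."""
    return support(I | {j}, baskets) / support(I, baskets)
```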
¡ Not all high-confidence rules are interesting
§ The rule X → milk may have high confidence for
many itemsets X, because milk is just purchased very
often (independent of X) and the confidence will be
high
¡ Interest of an association rule I → j:
difference between its confidence and the
fraction of baskets that contain j
Interest(I → j) = conf(I → j) − Pr[j]
§ Interesting rules are those with a high positive or
negative interest value (usually |Interest| above 0.5)
12
B1 = {m, c, b} B2 = {m, p, j}
B3 = {m, b} B4= {c, j}
B5 = {m, p, b} B6 = {m, c, b, j}
B7 = {c, b, j} B8 = {b, c}
13
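To make the definitions concrete, a worked sketch on these eight baskets (the rule {m, b} → c is chosen here for illustration; it is not fixed by the slide):

```python
# Worked sketch on the eight baskets above; rule {m,b} -> c is our choice.
baskets = [{'m','c','b'}, {'m','p','j'}, {'m','b'}, {'c','j'},
           {'m','p','b'}, {'m','c','b','j'}, {'c','b','j'}, {'b','c'}]

def support(itemset):
    return sum(1 for b in baskets if itemset <= b)

I, j = {'m', 'b'}, 'c'
conf = support(I | {j}) / support(I)            # 2/4 = 0.5
interest = conf - support({j}) / len(baskets)   # 0.5 - 5/8 = -0.125
```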
¡ Problem: Find all association rules with
support ≥s and confidence ≥c
§ Note: Support of an association rule is the support
of the union of the itemsets on both sides
§ Specifically, support of a rule A → B is the support
of A ∪ B
¡ Hard part: Finding the frequent itemsets!
§ If {i1, i2,…, ik} → j has high support and
confidence, then both {i1, i2,…, ik} and
{i1, i2,…, ik, j} will be “frequent”

conf(I → j) = support(I ∪ j) / support(I)
14
¡ Step 1: Find all frequent itemsets I
§ (we will explain this next)
¡ Step 2: Rule generation
§ For every subset A of I, generate a rule A → I \ A
§ Since I is frequent, A is also frequent
§ Variant 1: Single pass to compute the rule confidence
§ confidence(A,B→C,D) = support(A,B,C,D) / support(A,B)
§ Variant 2:
§ Observation: If A,B,C→D is below confidence, so is A,B→C,D
§ Can generate “bigger” rules from smaller ones!
§ Output the rules above the confidence threshold
15
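A sketch of Step 2 with Variant 1 (one pass over the subsets of each frequent itemset; the `supports` mapping from frozensets to counts is an assumed input):

```python
from itertools import combinations

def generate_rules(frequent_itemsets, supports, min_conf):
    """Sketch of Step 2 (Variant 1). `supports` is assumed to map each
    frequent itemset (a frozenset) to its count."""
    for I in frequent_itemsets:
        for r in range(1, len(I)):
            for A in map(frozenset, combinations(I, r)):
                # A is frequent because I is, so supports[A] exists
                conf = supports[I] / supports[A]
                if conf >= min_conf:
                    yield A, I - A, conf
```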
B1 = {m, c, b} B2 = {m, p, j}
B3 = {m, c, b, n} B4= {c, j}
B5 = {m, p, b} B6 = {m, c, b, j}
B7 = {c, b, j} B8 = {b, c}
¡ Support threshold s = 3, confidence c = 0.75
¡ 1) Frequent itemsets:
§ {b,m} {b,c} {c,m} {c,j} {m,c,b}
¡ 2) Generate rules:
§ b→m: c=4/6 b→c: c=5/6 b,c→m: c=3/5
§ m→b: c=4/5 … b,m→c: c=3/4
§ b→c,m: c=3/6
16
¡ To reduce the number of rules we can
post-process them and only output:
§ Maximal frequent itemsets:
No immediate superset is frequent
§ Gives more pruning
or
§ Closed itemsets:
No immediate superset has the same count (> 0)
§ Stores not only frequent information, but exact counts
17
Itemset  Support  Maximal (s=3)  Closed  Note
A        4        No             No
B        5        No             Yes
C        3        No             No      Frequent, but superset BC also frequent; superset BC has same count
AB       4        Yes            Yes     Frequent, and its only superset, ABC, not frequent
ABC      2        No             Yes
18
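A small sketch deriving the Maximal/Closed columns from itemset counts; BC (count 3) is included because the table's notes refer to it, and AC is omitted since it does not appear in the table:

```python
# Sketch: classify itemsets as maximal/closed from their counts.
counts = {frozenset('A'): 4, frozenset('B'): 5, frozenset('C'): 3,
          frozenset('AB'): 4, frozenset('BC'): 3,  # BC count implied by the notes
          frozenset('ABC'): 2}
s = 3

def immediate_supersets(I):
    return [J for J in counts if len(J) == len(I) + 1 and I < J]

for I, c in counts.items():
    sup = immediate_supersets(I)
    maximal = c >= s and all(counts[J] < s for J in sup)  # frequent, no frequent superset
    closed = all(counts[J] != c for J in sup)             # no superset with same count
    print(''.join(sorted(I)), c, maximal, closed)
```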
First: Define
Frequent itemsets
Association rules:
Confidence, Support, Interestingness
Then: Algorithms for finding frequent itemsets
Finding frequent pairs
A-Priori algorithm
Improvements: PCY algorithm + 2 refinements
19
¡ Back to finding frequent itemsets
21
¡ For many frequent-itemset algorithms,
main-memory is the critical resource
§ As we read baskets, we need to count
something, e.g., occurrences of pairs of items
§ The number of different things we can count
is limited by main memory
§ Swapping counts in/out is a disaster (why?)
22
¡ The hardest problem often turns out to be
finding the frequent pairs of items {i1, i2}
§ Why? Freq. pairs are common, freq. triples are rare
§ Why? Probability of being frequent drops exponentially
with size of the itemset
¡ Let’s first concentrate on pairs, then extend to
larger sets
¡ The approach:
§ We always need to generate all the itemsets
§ But we would only like to count (keep track of) those
itemsets that in the end turn out to be frequent
23
¡ Naïve approach to finding frequent pairs
¡ Read file once, counting in main memory
the occurrences of each pair:
§ From each basket of n items, generate its
n(n-1)/2 pairs by two nested loops
¡ Fails if (#items)^2 exceeds main memory
§ Remember: #items can be
100K (Wal-Mart) or 10B (Web pages)
§ Suppose 10^5 items, counts are 4-byte integers
§ Number of pairs of items: 10^5(10^5 − 1)/2 ≈ 5·10^9
§ Therefore, 2·10^10 bytes (20 gigabytes) of memory needed
24
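A sketch of this naïve pass (itertools.combinations stands in for the two nested loops):

```python
from itertools import combinations
from collections import Counter

def count_all_pairs(baskets):
    """Naïve approach (sketch): one pass over the file, all pair counts in RAM."""
    counts = Counter()
    for basket in baskets:
        # n(n-1)/2 pairs per basket; sorting yields a canonical (i, j) with i < j
        counts.update(combinations(sorted(set(basket)), 2))
    return counts
```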
Two approaches:
¡ Approach 1: Count all pairs using a matrix
¡ Approach 2: Keep a table of triples [i, j, c] =
“the count of the pair of items {i, j} is c.”
§ If integers and item ids are 4 bytes, we need
approximately 12 bytes for pairs with count > 0
§ Plus some additional overhead for the hashtable
Note:
¡ Approach 1 only requires 4 bytes per pair
¡ Approach 2 uses 12 bytes per pair
(but only for pairs with count > 0)
25
[Figure: memory layout comparison. Approach 1: 4 bytes per pair. Approach 2: 12 bytes per occurring pair.]
26
¡ Approach 1: Triangular Matrix
§ n = total number items
§ Count pair of items {i, j} only if i<j
§ Keep pair counts in lexicographic order:
§ {1,2}, {1,3},…, {1,n}, {2,3}, {2,4},…,{2,n}, {3,4},…
§ Pair {i, j} is at position (i − 1)(n − i/2) + j − i
§ Total number of pairs n(n − 1)/2; total bytes = 2n^2
§ Triangular Matrix requires 4 bytes per pair
¡ Approach 2 uses 12 bytes per occurring pair
(but only for pairs with count > 0)
§ Beats Approach 1 if less than 1/3 of
possible pairs actually occur
27
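A sketch of the indexing into a flat array (1-based position per the formula above; item ids 1..n are assumed):

```python
def pair_position(i, j, n):
    """1-based position of pair {i, j}, 1 <= i < j <= n, in lexicographic
    order: (i-1)(n - i/2) + (j - i), written with exact integer arithmetic."""
    assert 1 <= i < j <= n
    return (i - 1) * (2 * n - i) // 2 + (j - i)

# Usage sketch: a flat array of n(n-1)/2 counts (4-byte ints in C; Python ints here).
n = 5
counts = [0] * (n * (n - 1) // 2)
counts[pair_position(2, 3, n) - 1] += 1   # increment count of pair {2, 3}
```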
¡ Approach 1: Triangular Matrix
§ n = total number of items
§ Count pair of items {i, j} only if i<j
§ Keep pair counts in lexicographic order:
§ {1,2}, {1,3},…, {1,n}, {2,3}, {2,4},…,{2,n}, {3,4},…
§ Pair {i, j} is at position (i − 1)(n − i/2) + j − i
§ Total number of pairs n(n − 1)/2; total bytes = 2n^2
§ Problem: if we have too many items, the pairs do not fit into memory
[Figure: A-Priori main-memory layout. Pass 1: item counts. Pass 2: counts of pairs of frequent items (candidate pairs).]
32
¡ You can use the triangular matrix method
with n = number of frequent items
§ May save space compared with storing triples
¡ Trick: re-number frequent items 1,2,…
and keep a table relating new numbers
to original item numbers
[Figure: Pass 1: item counts. Pass 2: a table mapping old item #’s (1, 2, 3, …) to new #’s (1, -, 2, …) for the frequent items, plus counts of pairs of frequent items.]
33
¡ For each k, we construct two sets of
k-tuples (sets of size k):
§ Ck = candidate k-tuples = those that might be
frequent sets (support ≥ s) based on information
from the pass for k–1
§ Lk = the set of truly frequent k-tuples
[Figure: All items → count the items → L1 → all pairs of items from L1 → count the pairs → L2 → … (further levels to be explained)]
34
** Note: here we generate new candidates by
building Ck from Lk−1 and L1.
But one can be more careful with candidate
generation. For example, in C3 we know {b,c,j}
cannot be frequent since {b,j} is not frequent
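Putting the passes together, a compact sketch of A-Priori using the simple Lk−1 × L1 candidate generation described above (s is the support threshold):

```python
from collections import Counter

def apriori(baskets, s):
    """Sketch of A-Priori with C_k built from L_{k-1} and L_1."""
    baskets = [frozenset(b) for b in baskets]
    item_counts = Counter(i for b in baskets for i in b)   # Pass 1
    L1 = {i for i, c in item_counts.items() if c >= s}
    frequent = {frozenset([i]) for i in L1}
    L = set(frequent)
    while L:
        # Candidate generation: extend each frequent (k-1)-set by one L1 item
        C = {I | {x} for I in L for x in L1 if x not in I}
        counts = Counter()
        for b in baskets:                                  # Pass k
            counts.update(c for c in C if c <= b)
        L = {I for I, cnt in counts.items() if cnt >= s}   # L_k
        frequent |= L
    return frequent
```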
¡ Pass 1 of PCY (in addition to counting items): hash
each pair in every basket to a bucket and add 1 to
that bucket’s count
¡ Pass 2:
Only count pairs that hash to frequent buckets
40
¡ Replace the buckets by a bit-vector:
§ 1 means the bucket count exceeded the support s
(call it a frequent bucket); 0 means it did not
41
¡ Count all pairs {i, j} that meet the
conditions for being a candidate pair:
1. Both i and j are frequent items
2. The pair {i, j} hashes to a bucket whose bit in
the bit vector is 1 (i.e., a frequent bucket)
42
[Figure: PCY main-memory layout. Pass 1: item counts plus a hash table for pair counts. Pass 2: frequent items, the bitmap replacing the hash table, and counts of candidate pairs.]
43
¡ Buckets require a few bytes each:
§ Note: we do not have to count past s
§ #buckets is O(main-memory size)
44
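A sketch of the two PCY passes (the modulus and hash function are arbitrary illustrative choices; a real implementation would pack the bitmap into actual bits):

```python
from itertools import combinations
from collections import Counter

def pcy(baskets, s, n_buckets=100003):
    """Sketch of PCY. Pass 1: item counts + bucket counts of hashed pairs.
    Pass 2: count only pairs meeting both candidate conditions."""
    baskets = [sorted(set(b)) for b in baskets]
    bucket = lambda pair: hash(pair) % n_buckets   # illustrative hash

    item_counts, bucket_counts = Counter(), Counter()
    for b in baskets:                              # Pass 1
        item_counts.update(b)
        bucket_counts.update(bucket(p) for p in combinations(b, 2))

    freq = {i for i, c in item_counts.items() if c >= s}
    bitmap = {h for h, c in bucket_counts.items() if c >= s}  # stands in for the bit-vector

    pair_counts = Counter()
    for b in baskets:                              # Pass 2
        pair_counts.update(p for p in combinations(b, 2)
                           if p[0] in freq and p[1] in freq and bucket(p) in bitmap)
    return {p: c for p, c in pair_counts.items() if c >= s}
```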
¡ Limit the number of candidates to be counted
§ Remember: Memory is the bottleneck
§ Still need to generate all the itemsets but we only
want to count/keep track of the ones that are frequent
¡ Key idea: After Pass 1 of PCY, rehash only those
pairs that qualify for Pass 2 of PCY
§ i and j are frequent, and
§ {i, j} hashes to a frequent bucket from Pass 1
¡ On the middle pass, fewer pairs contribute to
buckets, so fewer false positives
¡ Requires 3 passes over the data
45
[Figure: Multistage main-memory layout. Pass 1: item counts plus the first hash table. Pass 2: frequent items, Bitmap 1, and the second hash table. Pass 3: frequent items, Bitmap 1, Bitmap 2, and counts of candidate pairs.]
47
1. The two hash functions have to be
independent
2. We need to check both hashes on the
third pass
§ If not, we would end up counting pairs of
frequent items that hashed first to an
infrequent bucket but happened to hash
second to a frequent bucket
48
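A sketch of the resulting pass-3 candidate test; the bitmaps are represented as sets of frequent-bucket ids, and h1, h2 are the two independent hash functions (Multihash applies the same both-bitmaps test, but already on its second pass):

```python
def is_candidate(i, j, freq_items, bitmap1, bitmap2, h1, h2):
    """Multistage pass-3 test (sketch): count {i, j} only if both items are
    frequent AND the pair hashed to a frequent bucket under BOTH hashes."""
    pair = (min(i, j), max(i, j))
    return (i in freq_items and j in freq_items
            and h1(pair) in bitmap1 and h2(pair) in bitmap2)
```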
¡ Key idea: Use several independent hash
tables on the first pass
¡ Risk: Halving the number of buckets doubles
the average count
§ We have to be sure most buckets will still not
reach count s
49
[Figure: Multihash main-memory layout. Pass 1: item counts plus two hash tables. Pass 2: frequent items, Bitmap 1, Bitmap 2, and counts of candidate pairs.]
50
¡ Either multistage or multihash can use more
than two hash functions
51
¡ A-Priori, PCY, etc., take k passes to find
frequent itemsets of size k
53
¡ Take a random sample of the market baskets
[Figure: the sample baskets are copied into main memory]
55
¡ Repeatedly read small subsets of the baskets
into main memory and run an in-memory
algorithm to find all frequent itemsets
§ Note: we are not sampling, but processing the
entire file in memory-sized chunks
§ An itemset becomes a candidate if it is found to be
frequent in any one (or more) of the chunks
56
¡ On a second pass, count all the candidate
itemsets and determine which are frequent in
the entire set
¡ Key “monotonicity” idea: an itemset cannot
be frequent in the entire set of baskets unless
it is frequent in at least one subset.
57
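A two-pass sketch of SON (the chunking scheme is schematic; `apriori` can be the earlier sketch, run with the threshold scaled down to the chunk size):

```python
from collections import Counter

def son(baskets, s, n_chunks=10):
    """Sketch of SON: Pass 1 collects candidates chunk by chunk;
    Pass 2 counts every candidate over the entire file."""
    chunks = [baskets[i::n_chunks] for i in range(n_chunks)]

    candidates = set()
    for chunk in chunks:                            # Pass 1: memory-sized chunks
        local_s = s * len(chunk) / len(baskets)     # threshold scaled to the chunk
        candidates |= apriori(chunk, local_s)       # any in-memory algorithm works

    counts = Counter()                              # Pass 2: entire data set
    for b in map(frozenset, baskets):
        counts.update(c for c in candidates if c <= b)
    return {I for I, c in counts.items() if c >= s}
```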
¡ SON (the algorithm just described) lends itself to distributed data mining
58
¡ Phase 1: Find candidate itemsets
§ Map?
§ Reduce?
59