The Apriori Algorithm
Key Concepts:
• Frequent Itemsets: The sets of items that satisfy minimum support (the set of frequent k-itemsets is denoted Lk).
• Apriori Property: Any subset of a frequent itemset must itself be frequent.
• Join Operation: To find Lk, a set of candidate k-itemsets is generated by joining Lk-1 with itself.
The Apriori Algorithm in a Nutshell
• Find the frequent itemsets: the sets of items that have minimum support.
  – A subset of a frequent itemset must also be a frequent itemset.
    • i.e., if {A, B} is a frequent itemset, both {A} and {B} must also be frequent itemsets.
  – Iteratively find frequent itemsets with cardinality from 1 to k (k-itemsets).
• Use the frequent itemsets to generate association rules.
The Apriori Algorithm: Pseudo-code
• Join Step: Ck is generated by joining Lk-1 with itself.
• Prune Step: Any (k-1)-itemset that is not frequent cannot be a subset of a frequent k-itemset.
• Pseudo-code:
    Ck: candidate itemsets of size k
    Lk: frequent itemsets of size k

    L1 = {frequent items};
    for (k = 1; Lk != ∅; k++) do begin
        Ck+1 = candidates generated from Lk;
        for each transaction t in database do
            increment the count of all candidates in Ck+1 that are contained in t;
        Lk+1 = candidates in Ck+1 with min_support;
    end
    return ∪k Lk;
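
To make the pseudo-code concrete, here is a minimal runnable sketch in Python. It is one possible reading of the slides, not their official implementation: the function name apriori, the list-of-sets transaction format, and the absolute min_support_count threshold are all illustrative choices.

    from itertools import combinations

    def apriori(transactions, min_support_count):
        """Return every frequent itemset (as a frozenset) with its support count."""
        # L1: count single items and keep those meeting minimum support
        counts = {}
        for t in transactions:
            for item in t:
                key = frozenset([item])
                counts[key] = counts.get(key, 0) + 1
        Lk = {s: c for s, c in counts.items() if c >= min_support_count}
        frequent = dict(Lk)
        k = 1
        while Lk:
            # Join step: union pairs of frequent k-itemsets into (k+1)-candidates
            items = list(Lk)
            candidates = {items[i] | items[j]
                          for i in range(len(items))
                          for j in range(i + 1, len(items))
                          if len(items[i] | items[j]) == k + 1}
            # Prune step: drop any candidate that has an infrequent k-subset
            candidates = {c for c in candidates
                          if all(frozenset(s) in Lk for s in combinations(c, k))}
            # Scan the database once to count the surviving candidates
            counts = {c: 0 for c in candidates}
            for t in transactions:
                for c in candidates:
                    if c <= t:  # candidate contained in transaction
                        counts[c] += 1
            Lk = {c: n for c, n in counts.items() if n >= min_support_count}
            frequent.update(Lk)
            k += 1
        return frequent

Running apriori with min_support_count = 2 on a nine-transaction database consistent with the support counts quoted in Step 5 (e.g., db = [{"I1","I2","I5"}, {"I2","I4"}, {"I2","I3"}, {"I1","I2","I4"}, {"I1","I3"}, {"I2","I3"}, {"I1","I3"}, {"I1","I2","I3","I5"}, {"I1","I2","I3"}]) reproduces L1 through L3 of the worked example below.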
The Apriori Algorithm: Example
Step 1: Generating 1-itemset Frequent Pattern
[Figure: D is scanned for the count of each candidate item to form C1; candidates whose support count meets the minimum support count form L1.]
Step 2: Generating 2-itemset Frequent Pattern
[Figure: L1 is joined with itself to form C2; D is scanned for the count of each candidate (e.g., {I4, I5} has a count of 0); candidates meeting minimum support form L2.]
Step 3: Generating 3-itemset Frequent Pattern
[Figure: joining L2 with itself and pruning leaves C3 = {{I1, I2, I3}, {I1, I2, I5}}; scanning D gives each a support count of 2, which meets minimum support, so L3 = {{I1, I2, I3}: 2, {I1, I2, I5}: 2}.]
• The join L2 ⋈ L2 yields six candidates: {I1, I2, I3}, {I1, I2, I5}, {I1, I3, I5}, {I2, I3, I4}, {I2, I3, I5}, and {I2, I4, I5}. Based on the Apriori property that all subsets of a frequent itemset must also be frequent, we can determine that the latter four candidates cannot possibly be frequent. How?
• For example, take {I1, I2, I3}. Its 2-item subsets are {I1, I2}, {I1, I3}, and {I2, I3}. Since all 2-item subsets of {I1, I2, I3} are members of L2, we keep {I1, I2, I3} in C3.
• Now take {I2, I3, I5}, which shows how the pruning is performed. Its 2-item subsets are {I2, I3}, {I2, I5}, and {I3, I5}.
• However, {I3, I5} is not a member of L2, so {I2, I3, I5} violates the Apriori property and must be removed from C3.
• Therefore, after checking all members of the join result for pruning, C3 = {{I1, I2, I3}, {I1, I2, I5}}.
• Finally, the transactions in D are scanned to determine L3, consisting of those candidate 3-itemsets in C3 that have minimum support. The prune check itself is sketched in code after this list.
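
As a small sketch (assuming Python; L2 is the set of frequent 2-itemsets listed in Step 5, and survives_prune is an illustrative helper name, not from the slides):

    from itertools import combinations

    # L2: the frequent 2-itemsets from the worked example
    L2 = {frozenset(p) for p in [("I1", "I2"), ("I1", "I3"), ("I1", "I5"),
                                 ("I2", "I3"), ("I2", "I4"), ("I2", "I5")]}

    def survives_prune(candidate, L_prev, k):
        # A (k+1)-candidate survives only if every one of its k-subsets is frequent
        return all(frozenset(s) in L_prev for s in combinations(candidate, k))

    print(survives_prune({"I1", "I2", "I3"}, L2, 2))  # True:  kept in C3
    print(survives_prune({"I2", "I3", "I5"}, L2, 2))  # False: {I3, I5} is not in L2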
Step 4: Generating 4-itemset Frequent Pattern
• The algorithm joins L3 with itself to generate C4. Although the join yields {{I1, I2, I3, I5}}, this candidate is pruned because its subset {I2, I3, I5} is not frequent (see Step 3). Thus C4 = ∅, the algorithm terminates, and all frequent itemsets have been found. A two-line check of this pruning follows.
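
(Again a hedged Python sketch, self-contained; the names are illustrative.)

    from itertools import combinations

    L3 = {frozenset(p) for p in [("I1", "I2", "I3"), ("I1", "I2", "I5")]}
    c4 = frozenset(["I1", "I2", "I3", "I5"])  # the only join result of L3 with itself
    # c4 survives pruning only if every 3-subset of it is in L3
    print(all(frozenset(s) in L3 for s in combinations(c4, 3)))  # False: {I2, I3, I5} missing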
Step 5: Generating Association Rules from Frequent Itemsets
• Procedure:
  – For each frequent itemset l, generate all nonempty proper subsets of l.
  – For every nonempty proper subset s of l, output the rule "s ⇒ (l − s)" if support_count(l) / support_count(s) ≥ min_conf, where min_conf is the minimum confidence threshold.
• Back to the example:
  We had L = {{I1}, {I2}, {I3}, {I4}, {I5}, {I1, I2}, {I1, I3}, {I1, I5}, {I2, I3}, {I2, I4}, {I2, I5}, {I1, I2, I3}, {I1, I2, I5}}.
  – Let's take l = {I1, I2, I5}.
  – Its nonempty proper subsets are {I1, I2}, {I1, I5}, {I2, I5}, {I1}, {I2}, and {I5}.
• Let the minimum confidence threshold be, say, 70%.
• The resulting association rules are shown below, each listed with its confidence.
– R1: I1 ∧ I2 ⇒ I5
  • Confidence = sc{I1, I2, I5} / sc{I1, I2} = 2/4 = 50%
  • R1 is rejected.
– R2: I1 ∧ I5 ⇒ I2
  • Confidence = sc{I1, I2, I5} / sc{I1, I5} = 2/2 = 100%
  • R2 is selected.
– R3: I2 ∧ I5 ⇒ I1
  • Confidence = sc{I1, I2, I5} / sc{I2, I5} = 2/2 = 100%
  • R3 is selected.
– R4: I1 ⇒ I2 ∧ I5
  • Confidence = sc{I1, I2, I5} / sc{I1} = 2/6 = 33%
  • R4 is rejected.
– R5: I2 ⇒ I1 ∧ I5
  • Confidence = sc{I1, I2, I5} / sc{I2} = 2/7 = 29%
  • R5 is rejected.
– R6: I5 ⇒ I1 ∧ I2
  • Confidence = sc{I1, I2, I5} / sc{I5} = 2/2 = 100%
  • R6 is selected.
In this way, we have found three strong association rules (R2, R3, and R6). The short sketch below reproduces this computation.
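
(A hedged Python sketch; sc holds the support counts quoted above, and rules_from is an illustrative name, not from the slides.)

    from itertools import combinations

    # Support counts quoted in the worked example
    sc = {frozenset(["I1"]): 6, frozenset(["I2"]): 7, frozenset(["I5"]): 2,
          frozenset(["I1", "I2"]): 4, frozenset(["I1", "I5"]): 2,
          frozenset(["I2", "I5"]): 2, frozenset(["I1", "I2", "I5"]): 2}

    def rules_from(l, sc, min_conf):
        # Emit s => (l - s) for every nonempty proper subset s that clears min_conf
        l = frozenset(l)
        for r in range(1, len(l)):
            for s in combinations(l, r):
                s = frozenset(s)
                conf = sc[l] / sc[s]
                if conf >= min_conf:
                    yield sorted(s), sorted(l - s), conf

    for ante, cons, conf in rules_from({"I1", "I2", "I5"}, sc, 0.70):
        print(ante, "=>", cons, f"({conf:.0%})")
    # Prints exactly the three selected rules: R2, R3, and R6.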
Methods to Improve Apriori’s Efficiency