
Association Rules &

Apriori Algorithm

Road map

• Basic concepts of Association Rules
• Apriori algorithm
• Different data formats for mining
• Mining with multiple minimum supports
• Mining class association rules
Association rule mining
• Proposed by Agrawal et al. in 1993
• An important data mining model, studied extensively by the database and data mining community
• Assumes all data are categorical; there is no good algorithm for numeric data
• Initially used for market basket analysis to find how items purchased by customers are related

Bread → Milk [sup = 5%, conf = 100%]
The model: data
• I = {i1, i2, …, im}: a set of items.
• Transaction t: a set of items such that t ⊆ I.
• Transaction database T: a set of transactions T = {t1, t2, …, tn}.
Transaction data: supermarket data
• Market basket transactions:
  t1: {bread, cheese, milk}
  t2: {apple, eggs, salt, yogurt}
  … …
  tn: {biscuit, eggs, milk}
• Concepts:
  • An item: an item/article in a basket
  • I: the set of all items sold in the store
  • A transaction: items purchased in a basket; it may have a TID (transaction ID)
  • A transactional dataset: a set of transactions
Transaction data: a set of documents
A text document data set. Each document is treated as a "bag" of keywords:
doc1: Student, Teach, School
doc2: Student, School
doc3: Teach, School, City, Game
doc4: Baseball, Basketball
doc5: Basketball, Player, Spectator
doc6: Baseball, Coach, Game, Team
doc7: Basketball, Team, City, Game

The model: rules
• A transaction t contains z, a set of items (an itemset) in I, if z ⊆ t.
• An association rule is an implication of the form:
  X → Y, where X, Y ⊂ I, and X ∩ Y = ∅
• An itemset is a set of items.
  E.g., z = {milk, bread, cereal} is an itemset.
• A k-itemset is an itemset with k items.
  E.g., {milk, bread, cereal} is a 3-itemset.
Support Formula
• Support: the rule holds with support sup in T (the transaction data set) if sup% of transactions contain X ∪ Y.
• sup = Pr(X ∪ Y)

  support = (X ∪ Y).count / n

  where n is the number of transactions in T.
Confidence Formula

• Confidence: the rule holds in T with confidence conf if conf% of transactions that contain X also contain Y.
• conf = Pr(Y | X)

  confidence = (X ∪ Y).count / X.count
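A minimal Python sketch of these two measures, treating each transaction as a set of items (helper names are illustrative, not from the slides):

def support(itemset, transactions):
    # fraction of transactions that contain every item in itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(X, Y, transactions):
    # Pr(Y | X) = count(X ∪ Y) / count(X)
    count_X = sum(X <= t for t in transactions)
    count_XY = sum((X | Y) <= t for t in transactions)
    return count_XY / count_X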
Association Rule
An association rule is a pattern stating that when X occurs, Y occurs with a certain probability.

The association rule problem is to identify all association rules X → Y with a minimum support and confidence.

Clothes → Milk, Chicken [sup = 3/7, conf = 3/3]
An example
• Transaction data:
  t1: Beef, Chicken, Milk
  t2: Beef, Cheese
  t3: Cheese, Boots
  t4: Beef, Chicken, Cheese
  t5: Beef, Chicken, Clothes, Cheese, Milk
  t6: Chicken, Clothes, Milk
  t7: Chicken, Milk, Clothes
• Assume:
  minsup = 30%
  minconf = 80%
• An example frequent itemset:
  {Chicken, Clothes, Milk} [sup = 3/7]
• Association rules from the itemset:
  Clothes → Milk, Chicken [sup = 3/7, conf = 3/3]
  … …
  Clothes, Chicken → Milk [sup = 3/7, conf = 3/3]
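Applying the support and confidence sketch above to this transaction data reproduces the bracketed numbers:

transactions = [
    {"Beef", "Chicken", "Milk"},
    {"Beef", "Cheese"},
    {"Cheese", "Boots"},
    {"Beef", "Chicken", "Cheese"},
    {"Beef", "Chicken", "Clothes", "Cheese", "Milk"},
    {"Chicken", "Clothes", "Milk"},
    {"Chicken", "Milk", "Clothes"},
]
print(support({"Chicken", "Clothes", "Milk"}, transactions))       # 3/7 ≈ 0.43
print(confidence({"Clothes"}, {"Milk", "Chicken"}, transactions))  # 3/3 = 1.0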
Apriori Goal
• Goal: find all rules that satisfy the user-specified minimum support (minsup) and minimum confidence (minconf).
Road map

• Basic concepts of Association Rules
• Apriori algorithm
• Different data formats for mining
• Mining with multiple minimum supports
The Apriori algorithm
• The best-known algorithm
• Two steps:
  1. Find all itemsets that have minimum support (frequent itemsets, also called large itemsets).
  2. Use frequent itemsets to generate rules.
• E.g., a frequent itemset
  {Chicken, Clothes, Milk} [sup = 3/7]
  and one rule from the frequent itemset
  Clothes → Milk, Chicken [sup = 3/7, conf = 3/3]
Step 1: Mining all frequent itemsets
• A frequent itemset is an itemset whose support is ≥ minsup.
• Key idea: the apriori property (downward closure property): every subset of a frequent itemset is also a frequent itemset.

Itemset lattice (if ABC is frequent, so are AB, AC, BC, A, B, and C):
  ABC  ABD  ACD  BCD
  AB  AC  AD  BC  BD  CD
  A  B  C  D
The Algorithm
• Iterative algorithm (also called level-wise search): find all 1-item frequent itemsets, then all 2-item frequent itemsets, and so on.
• In each iteration k, only consider itemsets that contain some frequent (k-1)-itemset.
• Find frequent itemsets of size 1: F1
• From k = 2:
  Ck = candidates of size k: those itemsets of size k that could be frequent, given Fk-1
  Fk = those itemsets that are actually frequent, Fk ⊆ Ck (need to scan the database once)
Example – Finding frequent itemsets (minsup = 0.5)

Dataset T:
  TID   Items
  T100  1, 3, 4
  T200  2, 3, 5
  T300  1, 2, 3, 5
  T400  2, 5

itemset : count
1. scan T → C1: {1}:2, {2}:3, {3}:3, {4}:1, {5}:3
   → F1: {1}:2, {2}:3, {3}:3, {5}:3
   → C2: {1,2}, {1,3}, {1,5}, {2,3}, {2,5}, {3,5}
2. scan T → C2: {1,2}:1, {1,3}:2, {1,5}:1, {2,3}:2, {2,5}:3, {3,5}:2
   → F2: {1,3}:2, {2,3}:2, {2,5}:3, {3,5}:2
   → C3: {2,3,5}
3. scan T → C3: {2,3,5}:2 → F3: {2,3,5}
Details: ordering of items

• The items in I are sorted in lexicographic order (which is a total order).
• The order is used throughout the algorithm in each itemset.
• {w[1], w[2], …, w[k]} represents a k-itemset w consisting of items w[1], w[2], …, w[k], where w[1] < w[2] < … < w[k] according to the total order.
Details: the algorithm
Algorithm Apriori(T)
  C1 ← init-pass(T);
  F1 ← {f | f ∈ C1, f.count/n ≥ minsup};   // n: no. of transactions in T
  for (k = 2; Fk-1 ≠ ∅; k++) do
    Ck ← candidate-gen(Fk-1);
    for each transaction t ∈ T do
      for each candidate c ∈ Ck do
        if c is contained in t then
          c.count++;
      end
    end
    Fk ← {c ∈ Ck | c.count/n ≥ minsup}
  end
  return F ← ∪k Fk;
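A runnable Python sketch of this loop, assuming the candidate_gen helper sketched after the candidate-gen slide below (itemsets as frozensets, transactions as sets; names are illustrative):

def apriori(transactions, minsup):
    # transactions: list of item sets; minsup: fraction in [0, 1]
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    # init pass: frequent 1-itemsets
    F = [frozenset([i]) for i in items
         if sum(i in t for t in transactions) / n >= minsup]
    all_frequent = list(F)
    k = 2
    while F:
        Ck = candidate_gen(F, k)  # join + prune, see below
        counts = {c: sum(c <= t for t in transactions) for c in Ck}
        F = [c for c, cnt in counts.items() if cnt / n >= minsup]
        all_frequent.extend(F)
        k += 1
    return all_frequent

On the dataset above it reproduces the trace:

transactions = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
print(apriori(transactions, 0.5))  # ends with frozenset({2, 3, 5}) = F3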
Apriori candidate generation
The candidate-gen function takes Fk-1 and returns a superset (called the candidates) of the set of all frequent k-itemsets. It has two steps:
• join step: generate all possible candidate itemsets Ck of length k
• prune step: remove those candidates in Ck that cannot be frequent.
Candidate-gen function
Function candidate-gen(Fk-1)
  Ck ← ∅;
  forall f1, f2 ∈ Fk-1
      with f1 = {i1, …, ik-2, ik-1}
      and f2 = {i1, …, ik-2, i'k-1}
      and ik-1 < i'k-1 do
    c ← {i1, …, ik-1, i'k-1};   // join f1 and f2
    Ck ← Ck ∪ {c};
    for each (k-1)-subset s of c do
      if (s ∉ Fk-1) then
        delete c from Ck;   // prune
    end
  end
  return Ck;
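The same function as a Python sketch, as used by the apriori loop above:

from itertools import combinations

def candidate_gen(F_prev, k):
    # F_prev: frequent (k-1)-itemsets as frozensets
    F_set = set(F_prev)
    Ck = set()
    for f1 in F_prev:
        for f2 in F_prev:
            a, b = sorted(f1), sorted(f2)
            # join step: f1 and f2 share the first k-2 items
            if a[:-1] == b[:-1] and a[-1] < b[-1]:
                c = f1 | f2
                # prune step: every (k-1)-subset of c must be frequent
                if all(frozenset(s) in F_set for s in combinations(c, k - 1)):
                    Ck.add(c)
    return Ck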
An example
• F3 = {{1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {1, 3, 5}, {2, 3, 4}}
• After the join step:
  C4 = {{1, 2, 3, 4}, {1, 3, 4, 5}}
• After pruning:
  C4 = {{1, 2, 3, 4}}
  because {1, 4, 5} is not in F3 ({1, 3, 4, 5} is removed)
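The candidate_gen sketch above reproduces this result (it prunes as it joins, so the intermediate join-only C4 is never materialized):

F3 = [frozenset(s) for s in ({1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {1, 3, 5}, {2, 3, 4})]
print(candidate_gen(F3, 4))  # {frozenset({1, 2, 3, 4})}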
Step 2: Generating rules from frequent itemsets
• Frequent itemsets ≠ association rules
• One more step is needed to generate association rules
• For each frequent itemset X,
  for each proper nonempty subset A of X:
    let B = X − A;
    A → B is an association rule if
      confidence(A → B) ≥ minconf,
    where
      support(A → B) = support(A ∪ B) = support(X)
      confidence(A → B) = support(A ∪ B) / support(A)
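A Python sketch of this step (it assumes a dict mapping every frequent itemset to its support; by downward closure, all subsets of a frequent itemset are present):

from itertools import combinations

def generate_rules(freq_supports, minconf):
    # freq_supports: {frozenset(itemset): support}
    rules = []
    for X, supX in freq_supports.items():
        for r in range(1, len(X)):              # proper nonempty subsets A
            for A in map(frozenset, combinations(X, r)):
                conf = supX / freq_supports[A]  # support(X) / support(A)
                if conf >= minconf:
                    rules.append((set(A), set(X - A), supX, conf))
    return rules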
Generating rules: an example
• Suppose {2,3,4} is frequent, with sup = 50%
• Proper nonempty subsets: {2,3}, {2,4}, {3,4}, {2}, {3}, {4}, with sup = 50%, 50%, 75%, 75%, 75%, 75% respectively
• These generate the following association rules:
  2,3 → 4, confidence = 100%
  2,4 → 3, confidence = 100%
  3,4 → 2, confidence = 67%
  2 → 3,4, confidence = 67%
  3 → 2,4, confidence = 67%
  4 → 2,3, confidence = 67%
• All rules have support = 50%
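Feeding these supports to the generate_rules sketch above with minconf = 80% (the threshold from the earlier example) keeps exactly the two 100%-confidence rules:

supports = {
    frozenset({2, 3, 4}): 0.50,
    frozenset({2, 3}): 0.50, frozenset({2, 4}): 0.50, frozenset({3, 4}): 0.75,
    frozenset({2}): 0.75, frozenset({3}): 0.75, frozenset({4}): 0.75,
}
for A, B, sup, conf in generate_rules(supports, minconf=0.8):
    if A | B == {2, 3, 4}:
        print(A, "->", B, f"conf = {conf:.0%}")  # {2, 3} -> {4} and {2, 4} -> {3}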
On Apriori Algorithm
• Seems to be very expensive:
  • Level-wise search
  • K = the size of the largest itemset
  • It makes at most K passes over the data
• In practice, K is bounded (around 10).
• The algorithm is very fast. Under some conditions, all rules can be found in linear time.
• Scales up to large data sets
Problems with association rule mining
• Clearly the space of all association rules is exponential, O(2^m), where m is the number of items in I.
• The mining exploits sparseness of data, and high minimum support and high minimum confidence values.
• Still, it always produces a huge number of rules: thousands, tens of thousands, millions, ...
Problems with association mining
• Single minsup: it assumes that all items in the data are of the same nature and/or have similar frequencies.
• Not true: in many applications, some items appear very frequently in the data, while others rarely appear.
  E.g., in a supermarket, people buy food processors and cooking pans much less frequently than they buy bread and milk.
Problems with Apriori: the rare item problem
• If the frequencies of items vary a great deal, we encounter two problems:
  • If minsup is set too high, rules that involve rare items will not be found.
  • To find rules that involve both frequent and rare items, minsup has to be set very low. This may cause combinatorial explosion, because the frequent items will be associated with one another in all possible ways.
Road map

• Basic concepts of Association Rules
• Apriori algorithm
• Different data formats for mining
• Mining with multiple minimum supports
Different data formats for mining
• The data can be in transaction form or table form

  Transaction form:
    a, b
    a, c, d, e
    a, d, f

  Table form:
    Attr1  Attr2  Attr3
    a      b      d
    b      c      e

• Table data need to be converted to transaction form for association mining
From a table to a set of transactions

  Table form:
    Attr1  Attr2  Attr3
    a      b      d
    b      c      e

  ⇒ Transaction form:
    (Attr1, a), (Attr2, b), (Attr3, d)
    (Attr1, b), (Attr2, c), (Attr3, e)

candidate-gen can be slightly improved. Why? (See the note after the sketch below.)
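A minimal conversion sketch in Python (the helper name is illustrative):

def table_to_transactions(rows, attrs):
    # each table row becomes a set of (attribute, value) items
    return [{(a, v) for a, v in zip(attrs, row)} for row in rows]

rows = [("a", "b", "d"), ("b", "c", "e")]
print(table_to_transactions(rows, ["Attr1", "Attr2", "Attr3"]))

One answer to the question above: two items derived from the same attribute can never co-occur in a transaction, so candidate-gen can skip joins that would put two values of one attribute into the same candidate.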
Road map

• Basic concepts of Association Rules
• Apriori algorithm
• Different data formats for mining
• Mining with multiple minimum supports
Multiple minsups model
• The minimum support of a rule is expressed in terms of minimum item supports (MIS) of the items that appear in the rule.
• Each item can have a minimum item support.
• By providing different MIS values for different items, the user effectively expresses different support requirements for different rules.
• To prevent very frequent items and very rare items from appearing in the same itemsets, we introduce a support difference constraint:

  max{sup(i) : i ∈ s} − min{sup(i) : i ∈ s} ≤ φ

  where s is an itemset and φ ≥ 0 is a user-specified bound on the allowed support difference.
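A sketch of the two tests this model implies, following the common multiple-minsup formulation in which an itemset is frequent if its support reaches the smallest MIS among its items (that rule and all names here are assumptions from the multiple-minsup literature, not from these slides):

def satisfies_multiple_minsup(itemset, sup, MIS, item_sup, phi):
    # frequency test: support must reach the lowest MIS among the items
    if sup < min(MIS[i] for i in itemset):
        return False
    # support difference constraint: item supports may differ by at most phi
    sups = [item_sup[i] for i in itemset]
    return max(sups) - min(sups) <= phi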
Example
Transactions (1 = the item is in the basket):

  Transaction ID  Onion  Potato  Burger  Milk  Beer
  t1              1      1       1       0     0
  t2              0      1       1       1     0
  t3              0      0       0       1     1
  t4              1      1       0       1     0
  t5              1      1       1       0     1
  t6              1      1       1       1     1

• An example rule in this scenario would be {Onion, Potato} → {Burger}, which means that if onion and potato are bought, customers also buy a burger.
Support
• The support of an itemset X, supp(X), is the proportion of transactions in the database in which the itemset X appears. It signifies the popularity of an itemset.
• If the sales of a particular product (item) above a certain proportion have a meaningful effect on profits, that proportion can be considered as the support threshold.
Confidence

• Confidence signifies the likelihood of itemset Y being purchased when itemset X is purchased. So, for the rule {Onion, Potato} → {Burger}:

  conf = supp({Onion, Potato, Burger}) / supp({Onion, Potato}) = 3/4 = 75%

• This implies that the rule is correct for 75% of the transactions containing onion and potato. It can also be interpreted as the conditional probability P(Y|X), i.e., the probability of finding the itemset Y in transactions given that the transaction already contains X.
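A quick check of these numbers against the table above (one set per transaction; variable names are illustrative):

table = [
    {"Onion", "Potato", "Burger"},                  # t1
    {"Potato", "Burger", "Milk"},                   # t2
    {"Milk", "Beer"},                               # t3
    {"Onion", "Potato", "Milk"},                    # t4
    {"Onion", "Potato", "Burger", "Beer"},          # t5
    {"Onion", "Potato", "Burger", "Milk", "Beer"},  # t6
]
X, Y = {"Onion", "Potato"}, {"Burger"}
supp = sum((X | Y) <= t for t in table) / len(table)                  # 3/6 = 0.5
conf = sum((X | Y) <= t for t in table) / sum(X <= t for t in table)  # 3/4
print(supp, conf)  # 0.5 0.75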
