DMDW 3rd Module
3.1 Introduction
Data Mining Association Analysis: Basic Concepts and Algorithms
Association Rule Mining
Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences
of other items in the transaction
Market-Basket transactions
{Diaper} -> {Beer}
{Milk, Bread} -> {Eggs, Coke}
{Beer, Bread} -> {Milk}
Frequent Itemset: an itemset whose support is greater than or equal to a minsup threshold.
Association Rule Mining Task: given a set of transactions T, find all rules having
– support ≥ minsup threshold
– confidence ≥ minconf threshold
Example: for the rule {Milk, Diapers} -> {Beer}, support is the fraction of transactions containing
{Milk, Diapers, Beer}, and confidence is the fraction of transactions containing {Milk, Diapers} that also contain Beer.
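To make these definitions concrete, here is a minimal Python sketch that computes support and confidence for one rule over a small illustrative transaction list (the data and helper names are ours, not from the text):

# Minimal sketch: computing the support and confidence of a rule X -> Y over a
# small illustrative set of market-basket transactions.

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diapers", "Beer", "Eggs"},
    {"Milk", "Diapers", "Beer", "Cola"},
    {"Bread", "Milk", "Diapers", "Beer"},
    {"Bread", "Milk", "Diapers", "Cola"},
]

def support_count(itemset):
    """Number of transactions that contain every item of the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def support(itemset):
    return support_count(itemset) / len(transactions)

def confidence(X, Y):
    """confidence(X -> Y) = support_count(X union Y) / support_count(X)."""
    return support_count(X | Y) / support_count(X)

X, Y = {"Milk", "Diapers"}, {"Beer"}
print(support(X | Y))     # 0.4      (2 of 5 transactions contain {Milk, Diapers, Beer})
print(confidence(X, Y))   # 0.666... (2 of the 3 transactions containing {Milk, Diapers})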
Brute-force approach: list all possible association rules, compute the support and confidence for each rule,
and prune the rules that fail the minsup and minconf thresholds.
More specifically, the total number of possible rules extracted from a data set that contains
d items is
R = 3^d - 2^(d+1) + 1.
Even for the small data set with 6 items, this approach requires us to compute the support and confidence
for 3^6 - 2^7 + 1 = 602 rules.
More than 80% of the rules are discarded after applying minsup = 20% and minconf = 50%, thus making
most of the computations wasted.
To avoid performing needless computations, it would be useful to prune the rules early
without having to compute their support and confidence values.
If an itemset such as {Beer, Diapers, Milk} is infrequent, then all six candidate rules derived from it can be
pruned immediately without our having to compute their confidence values.
Therefore, a common strategy adopted by many association rule mining algorithms is
to decompose the problem into two major subtasks:
1. Frequent Itemset Generation
– Generate all itemsets whose support ≥ minsup; these itemsets are called frequent itemsets.
2. Rule Generation
– Generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a
frequent itemset.
Frequent itemset generation is still computationally expensive.
Such an approach can be very expensive because it requires O(NMw) comparisons, where N is the
number of transactions, M = 2^k - 1 is the number of candidate itemsets (for k items), and w is the maximum
transaction width.
There are several ways to reduce the computational complexity of frequent itemset generation:
• Reduce the number of candidate itemsets (M) – the Apriori principle (if an itemset is frequent, then all of
its subsets must also be frequent) is used to prune candidates without counting their supports.
• Reduce the number of comparisons (NM) – instead of matching every candidate against every transaction,
use efficient data structures (such as a hash tree) to store the candidates or the transactions.
Frequent Itemset Generation in the Apriori Algorithm: Illustration with example.
Figure 6.5 provides a high-level illustration of the frequent item set generation part of the Apriori
algorithm for the transactions shown in Table 6.1. We assume that the support threshold is 60%, which is
equivalent to a minimum support count equal to 3.
Initially, every item is considered as a candidate 1-itemset. After counting their supports, the candidate
itemsets {Cola} and {Eggs} are discarded because they appear in fewer than three transactions.
In the next iteration, candidate 2-itemsets are generated using only the frequent 1-itemsets because the
Apriori principle ensures that all supersets of the infrequent 1-itemsets must be infrequent.
Because there are only four frequent 1-itemsets, the number of candidate 2-itemsets generated by the
algorithm is 6. Two of these six candidates, {Beer, Bread} and {Beer, Milk}, are subsequently found to be
infrequent after computing their support values. The remaining four candidates are frequent, and thus will
be used to generate candidate 3-itemsets.
Without support-based pruning, there are 20 candidate 3-itemsets that can be formed using the six items
given in this example. With the Apriori principle, we only need to keep candidate 3-itemsets whose
subsets are frequent. The only candidate that has this property is {Bread, Diapers, Milk}.
The effectiveness of the Apriori pruning strategy can be shown by counting the number of candidate
itemsets generated.
A brute-force strategy of enumerating all itemsets (up to size 3) as candidates will produce 6 + 15 + 20 = 41 candidates.
With the Apriori principle, this number decreases to 6 + 6 + 1 = 13 candidates, which represents a 68% reduction in the
number of candidate itemsets even in this simple example.
Apriori Algorithm:
Input: set of items I, set of transactions T, number of transactions N, minimum support minsup.
Output: frequent k-itemsets Fk, k=1…
Method:
k = 1; F1 = {frequent 1-itemsets}
repeat
    k = k + 1; Ck = candidate k-itemsets generated from Fk-1
    count the support of each candidate in Ck by scanning T
    Fk = {c in Ck : support count(c) ≥ N × minsup}
until Fk is empty; return the union of all Fk
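A compact Python sketch of this loop is given below. Candidate generation here is kept deliberately simple (unions of pairs of frequent (k-1)-itemsets, pruned with the Apriori principle); the dedicated Fk-1 x F1 and Fk-1 x Fk-1 strategies are described next. The data and the minsup value are illustrative.

# Sketch of the Apriori frequent-itemset generation loop.
from itertools import combinations

def apriori(transactions, minsup):
    """Return all frequent itemsets as a list of frozensets."""
    N = len(transactions)
    minsup_count = minsup * N

    def support_count(itemset):
        return sum(1 for t in transactions if itemset <= t)

    items = sorted({i for t in transactions for i in t})
    Fk = [frozenset([i]) for i in items
          if support_count(frozenset([i])) >= minsup_count]    # F1
    frequent = list(Fk)
    k = 2
    while Fk:
        # candidate generation: unions of pairs of frequent (k-1)-itemsets
        Ck = {a | b for a in Fk for b in Fk if len(a | b) == k}
        # candidate pruning (Apriori principle): every (k-1)-subset must be frequent
        Fk_prev = set(Fk)
        Ck = [c for c in Ck
              if all(frozenset(s) in Fk_prev for s in combinations(c, k - 1))]
        # support counting and candidate elimination
        Fk = [c for c in Ck if support_count(c) >= minsup_count]
        frequent.extend(Fk)
        k += 1
    return frequent

transactions = [frozenset(t) for t in (
    {"Bread", "Milk"},
    {"Bread", "Diapers", "Beer", "Eggs"},
    {"Milk", "Diapers", "Beer", "Cola"},
    {"Bread", "Milk", "Diapers", "Beer"},
    {"Bread", "Milk", "Diapers", "Cola"},
)]
for itemset in apriori(transactions, minsup=0.6):
    print(set(itemset))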
Brute-Force Method: The brute-force method considers every k-itemset as a potential candidate and then
applies the candidate pruning step to remove any unnecessary candidates (see Figure).
Fk-1 x F1 Method:
Combine each frequent (k-1)-itemset with frequent 1-itemsets.
Figure 6.7 illustrates how a frequent 2-itemset such as {Beer, Diapers} can be augmented with a frequent
item such as Bread to produce a candidate 3-itemset {Beer, Diapers, Bread}.
Satisfaction of our requirements
1) Although many k-itemsets are never generated, the method can still produce unnecessary candidates,
e.g. merging {Beer, Diapers} with {Milk} is unnecessary, since {Beer, Milk} is infrequent.
2) The method is complete: every frequent k-itemset consists of a frequent (k-1)-itemset and a frequent
1-itemset.
3) It can generate the same candidate itemset more than once,
e.g. {Bread, Diapers, Milk} can be generated by merging {Bread, Diapers} with {Milk}, or
{Bread, Milk} with {Diapers}, or {Diapers, Milk} with {Bread}.
This can be circumvented by keeping all frequent itemsets in lexicographical order:
- e.g. {Bread, Diapers} can be merged with {Milk}, as 'Milk' comes after 'Bread' and 'Diapers' in
lexicographical order;
- {Diapers, Milk} is not merged with {Bread}, and {Bread, Milk} is not merged with {Diapers}, as that
would violate the lexicographical ordering (see the sketch below).
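A minimal Python sketch of the Fk-1 x F1 merge with this lexicographic constraint follows; itemsets are kept as sorted tuples, and the data are just the frequent itemsets of the running example (the function name is ours, not from the text):

# Sketch: Fk-1 x F1 candidate generation with the lexicographic-order rule.
# Itemsets are represented as tuples kept in lexicographic order.

def merge_fk1_f1(frequent_k_minus_1, frequent_1):
    """Merge each frequent (k-1)-itemset with every frequent item that comes
    after all of its items in lexicographic order."""
    candidates = []
    for itemset in frequent_k_minus_1:
        for (item,) in frequent_1:
            if item > itemset[-1]:              # ordering rule: each candidate generated once
                candidates.append(itemset + (item,))
    return candidates

f2 = [("Beer", "Diapers"), ("Bread", "Diapers"), ("Bread", "Milk"), ("Diapers", "Milk")]
f1 = [("Beer",), ("Bread",), ("Diapers",), ("Milk",)]
print(merge_fk1_f1(f2, f1))
# [('Beer', 'Diapers', 'Milk'), ('Bread', 'Diapers', 'Milk')]
# Note: {Beer, Diapers, Milk} is still generated even though {Beer, Milk} is
# infrequent -- the unnecessary candidate mentioned in requirement 1 above.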
Fk-1 x Fk-1 Method:
Merge a pair of frequent (k-1)-itemsets only if their first k-2 items are identical.
• The resulting k-itemset has k subsets of size k-1, which will be checked against the support
threshold
o the merging ensures that at least two of these subsets are frequent
o an additional check is made that the remaining k-2 subsets are frequent as well.
In Figure 6.8, the frequent itemsets {Bread, Diapers} and {Bread, Milk} are merged to form the candidate
3-itemset {Bread, Diapers, Milk}.
Satisfaction of our requirements
1) Avoids the generation of many unnecessary candidates that are generated by the Fk-1 x F1 method,
e.g. it will not generate {Beer, Diapers, Milk}, as {Beer, Milk} is infrequent.
2) Method is complete: every frequent k-itemset can be formed of two frequent (k-1)-itemsets differing
in their last item.
3) Each candidate itemset is generated only once (see the sketch below).
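For comparison, here is a minimal sketch of the Fk-1 x Fk-1 merge, again with sorted tuples and the same illustrative frequent 2-itemsets; the subset check implements the Apriori-principle pruning described above:

# Sketch: Fk-1 x Fk-1 candidate generation. Two frequent (k-1)-itemsets
# (sorted tuples) are merged only if their first k-2 items are identical,
# and the candidate is kept only if all of its (k-1)-subsets are frequent.
from itertools import combinations

def merge_fk1_fk1(frequent_k_minus_1):
    freq = set(frequent_k_minus_1)
    k = len(frequent_k_minus_1[0]) + 1
    candidates = []
    for a in frequent_k_minus_1:
        for b in frequent_k_minus_1:
            # identical prefix of length k-2, differing only in the last item
            if a[:-1] == b[:-1] and a[-1] < b[-1]:
                c = a + (b[-1],)
                # additional check: the remaining (k-1)-subsets must also be frequent
                if all(s in freq for s in combinations(c, k - 1)):
                    candidates.append(c)
    return candidates

f2 = [("Beer", "Diapers"), ("Bread", "Diapers"), ("Bread", "Milk"), ("Diapers", "Milk")]
print(merge_fk1_fk1(f2))
# [('Bread', 'Diapers', 'Milk')] -- {Beer, Diapers, Milk} is never even considered,
# since {Beer, Diapers} and {Diapers, Milk} do not share their first item.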
Support counting using hash tree:
Given the candidate itemsets Ck and the set of transactions T, we need to compute the support counts
σ(X) for each itemset X in Ck.
A brute-force algorithm would compare each transaction against every candidate itemset,
• which requires a large number of comparisons.
An alternative approach:
• Partition the candidate itemsets in Ck into buckets using a hash function.
• For each transaction t, hash the itemsets contained in t into buckets using the same hash function.
• Compare the candidates in the corresponding buckets against the transaction.
• Increment the support count of each matching candidate itemset.
• A hash tree is used to implement this hashing.
An alternative approach is to enumerate the itemsets contained in each transaction and use them to update
the support counts of their respective candidate itemsets. To illustrate, consider a transaction t that contains
five items, {1,2,3,5,6}.
Figure 6.9 shows a systematic way for enumerating the 3-itemsets contained in t. Assuming that each
itemset keeps its items in increasing lexicographic order, an itemset can be enumerated by specifying the
smallest item first, followed by the larger items. For instance, given t = {1,2,3,5,6}, all the 3-itemsets
contained in t must begin with item 1, 2, or 3.
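In code, this enumeration step can be sketched directly with itertools.combinations; the candidate 3-itemsets below are only illustrative:

# Sketch: enumerating the 3-itemsets contained in a transaction (items kept in
# increasing order) and matching them against an illustrative candidate set.
from itertools import combinations

candidate_counts = {(1, 2, 3): 0, (1, 4, 5): 0, (2, 3, 5): 0, (5, 6, 7): 0}

transactions = [(1, 2, 3, 5, 6), (2, 3, 5, 7)]
for t in transactions:
    for subset in combinations(t, 3):        # every 3-itemset contained in t
        if subset in candidate_counts:
            candidate_counts[subset] += 1    # increment the matching candidate

print(candidate_counts)
# {(1, 2, 3): 1, (1, 4, 5): 0, (2, 3, 5): 2, (5, 6, 7): 0}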
Figure 6.11 shows an example of a hash tree structure.
Each internal node of the tree uses the following hash function, h(p) = p mod 3, to determine which branch
of the current node should be followed next.
For example, items 1, 4, and 7 are hashed to the same branch (i.e., the leftmost branch) because they have
the same remainder after dividing the number by 3.
All candidate itemsets are stored at the leaf nodes of the hash tree. The hash tree shown in Figure 6.11
contains 15 candidate 3-itemsets, distributed across 9 leaf nodes.
Consider a transaction t = {1,2,3,5,6}. To update the support counts of the candidate itemsets, the hash
tree must be traversed in such a way that all the leaf nodes containing candidate 3-itemsets belonging to t
are visited at least once.
At the root node of the hash tree, the items 1, 2, and 3 of the transaction are hashed separately. Item 1 is
hashed to the left child of the root node, item 2 is hashed to the middle child, and item 3 is hashed to the
right child.
At the next level of the tree, the transaction is hashed on the second item listed in the Level 2 structures
shown in Figure 6.9.
For example, after hashing on item 1 at the root node, items 2, 3, and 5 of the transaction are hashed.
Items 2 and 5 are hashed to the middle child, while item 3 is hashed to the right child, as shown in Figure
6.12. This process continues until the leaf nodes of the hash tree are reached.
The candidate item sets stored at the visited leaf nodes are compared against the transaction. If a candidate
is a subset of the transaction, its support count is incremented.
In this example, 5 out of the 9 leaf nodes are visited and 9 out of the 15 item sets are compared against the
transaction.
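The sketch below flattens the hash-tree idea into simple buckets: candidates and the transaction's 3-subsets are hashed with the same function h(p) = p mod 3 at every position, so only candidates in the matching bucket are compared. The candidate list loosely follows the figure and is illustrative; a real hash tree additionally limits leaf size and splits nodes level by level.

# Simplified sketch of the bucketing idea behind the hash tree.
from collections import defaultdict
from itertools import combinations

def h(item):
    return item % 3            # the hash function used at every tree level

# Candidate 3-itemsets (illustrative), each kept as a sorted tuple.
candidates = [(1, 4, 5), (1, 2, 4), (4, 5, 7), (1, 2, 5), (4, 5, 8),
              (1, 5, 9), (1, 3, 6), (2, 3, 4), (5, 6, 7), (3, 4, 5),
              (3, 5, 6), (3, 5, 7), (6, 8, 9), (3, 6, 7), (3, 6, 8)]

# Hash every candidate once into a bucket keyed by the item-wise hash values.
buckets = defaultdict(list)
for c in candidates:
    buckets[tuple(h(i) for i in c)].append(c)

# For a transaction, hash each contained 3-itemset with the same function and
# compare it only against the candidates in the matching bucket.
support = dict.fromkeys(candidates, 0)
t = (1, 2, 3, 5, 6)
for subset in combinations(t, 3):
    for c in buckets.get(tuple(h(i) for i in subset), []):
        if c == subset:
            support[c] += 1

print([c for c, n in support.items() if n > 0])
# [(1, 2, 5), (1, 3, 6), (3, 5, 6)]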
Equivalence classes: equivalence classes can also be defined according to the prefix or suffix labels of
an itemset.
In this case, two itemsets belong to the same equivalence class if they share a common prefix or suffix of
length k. In the prefix-based approach, the algorithm can search for frequent itemsets starting with the
prefix a before looking for those starting with prefixes b, c and so on.
Breadth-First versus Depth-First: The Apriori algorithm traverses the itemset lattice in a breadth-first manner,
as shown in Figure 6.21(a). It first discovers all the frequent 1-itemsets, followed by the frequent 2-
itemsets, and so on, until no new frequent itemsets are generated.
The lattice can also be traversed in a depth-first manner. The algorithm can start from, say, node a in
Figure 6.22, and count its support to determine whether it is
frequent. If so, the algorithm progressively expands the next level of nodes, i.e., ab, abc, and so on, until
an infrequent node is reached, say, abcd. It then backtracks to another branch, say, abce, and continues the
search from there.
3.5 FP-Growth Algorithm
Apriori: uses a generate-and-test approach – it generates candidate itemsets and tests whether they are frequent.
– Generation of candidate itemsets is expensive (in both space and time)
– Support counting is expensive
• Subset checking (computationally expensive)
• Multiple database scans
FP-Growth: allows frequent itemset discovery without candidate itemset generation. It is a two-step
approach:
Step 1: Build a compact data structure called the FP-tree
• Built using 2 passes over the data-set.
Step 2: Extracts frequent itemsets directly from the FP-tree
Figure 6.24 shows a data set that contains ten transactions and five items.
Initially, the FP-tree contains only the root node represented by the null symbol. The FP-tree is
subsequently extended in the following way:
1. The data set is scanned once to determine the support count of each item. Infrequent items are
discarded, while the frequent items are sorted in decreasing support counts. For the data set shown in
Figure 6.24, a is the most frequent item, followed by b, c, d, and e.
2. The algorithm makes a second pass over the data to construct the FP-tree. After reading the
first transaction, {a,b}, the nodes labeled a and b are created. A path is then formed from null -> a -> b to
encode the transaction. Every node along the path has a frequency count of 1.
3. After reading the second transaction, {b,c,d}, a new set of nodes is created for items b, c, and d. A
path is then formed to represent the transaction by connecting the nodes null -> b -> c -> d. Every node along this
path also has a frequency count equal to one. Although the first two transactions have an item in common,
which is b, their paths are disjoint because the transactions do not share a common prefix.
4. The third transaction, {a,c,d,e}, shares a common prefix item (which is a) with the first transaction. As a
result, the path for the third transaction, null -> a -> c -> d -> e, overlaps with the path for the first
transaction, null -> a -> b. Because of their overlapping path, the frequency count for node a is incremented to two, while the
frequency counts for the newly created nodes c, d, and e are equal to one.
This process continues until every transaction has been mapped onto one of the paths given in the FP-tree.
The resulting FP-tree after reading all the transactions is shown at the bottom of Figure 6.25.
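A minimal Python sketch of these two construction passes is given below, assuming set-valued transactions and tie-breaking by item name; the node class and helper names are ours, and mining the tree (FP-growth proper) is not shown.

# Minimal sketch of FP-tree construction (pass 1: item counts; pass 2: insert
# each transaction with its frequent items sorted by decreasing support).
from collections import Counter

class FPNode:
    def __init__(self, item):
        self.item = item
        self.count = 0
        self.children = {}          # item -> FPNode

def build_fp_tree(transactions, minsup_count):
    # Pass 1: count the support of each item and keep only the frequent ones
    counts = Counter(item for t in transactions for item in t)
    frequent = {item: c for item, c in counts.items() if c >= minsup_count}

    # Pass 2: insert every transaction, with its frequent items sorted in
    # decreasing support count (ties broken by item name)
    root = FPNode(None)
    for t in transactions:
        items = sorted((i for i in t if i in frequent),
                       key=lambda i: (-frequent[i], i))
        node = root
        for item in items:
            child = node.children.get(item)
            if child is None:                       # start a new branch
                child = node.children[item] = FPNode(item)
            child.count += 1                        # shared prefix: bump count
            node = child
    return root

def show(node, depth=0):
    for child in node.children.values():
        print("  " * depth + f"{child.item}:{child.count}")
        show(child, depth + 1)

transactions = [{"a", "b"}, {"b", "c", "d"}, {"a", "c", "d", "e"},
                {"a", "d", "e"}, {"a", "b", "c"}]
show(build_fp_tree(transactions, minsup_count=2))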
The value of correlation ranges from -1 (perfect negative correlation) to +1 (perfect positive correlation).
If the variables are statistically independent, then it is 0.
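For two binary variables this correlation is commonly computed as the phi-coefficient from the 2x2 contingency table, where f11, f10, f01, f00 are the cell counts and f1+, f+1, f0+, f+0 are the row and column totals (a standard form, stated here for reference):

phi = (f11 * f00 - f01 * f10) / sqrt(f1+ * f+1 * f0+ * f+0)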
3.8 Important Questions
1. What is association analysis? Define support and confidence with an example.
2. Develop the Apriori algorithm for frequent itemset generation, with an example.
3. Explain the various measures for evaluating association patterns.
4. Explain in detail frequent itemset generation and rule generation with reference to Apriori, along with
an example.
5. Define following: a) Support b) Confidence.
6. Explain the FP-growth algorithm for discovering frequent itemsets. What are its limitations?
7. Consider the following transaction data set.
Construct the FP-tree by showing the tree separately after reading each transaction.
8. Illustrate the limitations of the support-confidence framework for evaluating an association rule.
9. Define a cross-support pattern. Suppose the support for milk is 70%, the support for sugar is 10%, and the
support for bread is 0.04%. Given hc = 0.01, is the frequent itemset {milk, sugar, bread} a cross-support
pattern?
10. Which factors affect the computational complexity of the Apriori algorithm? Explain them.
11. Define a frequent pattern tree. Discuss the method of constructing an FP-tree, with an algorithm.
12. Give an example to show that items in a strong association rule may actually be negatively
correlated.
13. A database has five transactions. Let min-sup = 60% and min-conf = 80%.
Find all frequent itemsets using Apriori and FP-growth, respectively.
14. Explain various alternative methods for generating frequent itemsets.
15. A database has four transactions. Let min-sup = 40% and min-conf = 60%.
Find all frequent itemsets using the Apriori and FP-growth algorithms, and compare the efficiency of the two
mining processes.
16. Explain various Candidate Generation and Pruning techniques.
17. Explain the various properties of objective measures.
18. Explain Simpson's paradox.
19. Illustrate the nature of Simpson's paradox for the following two-way contingency table.
20. What is the Apriori algorithm? Give an example. A database has six transactions of purchases of books
from a book shop, as given below.
Construct the FP-tree and generate the list of frequent itemsets ordered by their corresponding suffixes.
22. Consider the following set of frequent 3-itemsets.
Item set = {Milk, Bread, Eggs, Cookies, Coffee, Butter, Juice}; use 0.2 for min-sup.