Association Analysis

Chapter 4 discusses association analysis in data mining, focusing on market basket transactions to uncover purchasing patterns. It defines key concepts such as itemsets, support, and confidence, and outlines the association rule mining task, emphasizing the computational challenges involved. The chapter also introduces the Apriori algorithm for frequent itemset generation, detailing its principles and methods for candidate generation and pruning.

Department of MCA Data Mining & Warehousing-CH-4 Notes KNS Institute of Technology

Chapter 4: Association analysis:


4.1 Introduction:

 Many business enterprises accumulate large quantities of data from their day-to-day
operations. For example, huge amounts of customer purchase data are collected daily at the
checkout counters of grocery stores. Such data are commonly known as market basket
transactions.
 Each row in Table 4.1 below corresponds to a transaction, which contains a unique identifier
labeled TID and the set of items bought by a given customer. Retailers are interested in
analyzing the data to learn about the purchasing behavior of their customers. Such
valuable information can be used to support a variety of business-related applications
such as marketing promotions, inventory management, and customer relationship
management.

TID Items

1 {Bread, Milk}

2 {Bread, Diapers, Beer, Eggs}

3 {Milk, Diapers, Beer, Cola}

4 {Bread, Milk, Diapers, Beer}

5 {Bread, Milk, Diapers, Cola}

Table 4.1: An example of market basket transactions.

Define Association Analysis?


Association analysis is useful for discovering interesting relationships hidden in
large amounts of data.

From Table 4.1 it can be seen that people who buy bread often buy milk as well:
{Bread} → {Milk}


There are two key issues that need to be addressed when applying association analysis to
market basket data.
o First, discovering patterns from a large transaction data set can be
computationally expensive.
o Second, some of the discovered patterns are potentially spurious (fake)
because they may happen simply by chance.

Problem Definition:
Basic terminology used in association analysis:

Binary Representation: Market basket data can be represented in a binary format where
each row corresponds to a transaction and each column corresponds to an item.
An item can be treated as a binary variable whose value is one if the item is present in a
transaction and zero otherwise.

TID  Bread  Milk  Diapers  Beer  Eggs  Cola

1    1      1     0        0     0     0
2    1      0     1        1     1     0
3    0      1     1        1     0     1
4    1      1     1        1     0     0
5    1      1     1        0     0     1

Binary representation of the market basket data in Table 4.1

 Itemset: In association analysis, a collection of zero or more items is termed an itemset.
o For instance, {Beer, Diapers, Milk} is an example of a 3-itemset.
o The null (or empty) set is an itemset that does not contain any items.

 Transaction Width: the number of items present in a transaction.

 An important property of an itemset is its support count, which refers to the number of
transactions that contain the particular itemset.


 Association Rule: An association rule is an implication expression of the form X → Y,
where X and Y are disjoint itemsets.

 The strength of an association rule can be measured in terms of its support and confidence.

o Support determines how often a rule is applicable to a given data set.

o Confidence determines how frequently items in Y appear in transactions that contain X.

 Consider the rule {Milk, Diapers} → {Beer}.

Since the support count for {Milk, Diapers, Beer} is 2 and the total number of transactions
is 5, the rule's support is 2/5 = 0.4.
The rule's confidence is obtained by dividing the support count for {Milk, Diapers, Beer}
by the support count for {Milk, Diapers}.
Since there are 3 transactions that contain milk and diapers, the confidence for this rule
is 2/3 ≈ 0.67.
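
A minimal Python sketch of this calculation (the transactions are those of Table 4.1; the helper name support_count is illustrative, not part of any standard library):

    # Transactions from Table 4.1
    transactions = [
        {"Bread", "Milk"},
        {"Bread", "Diapers", "Beer", "Eggs"},
        {"Milk", "Diapers", "Beer", "Cola"},
        {"Bread", "Milk", "Diapers", "Beer"},
        {"Bread", "Milk", "Diapers", "Cola"},
    ]

    def support_count(itemset, transactions):
        # Number of transactions that contain every item of the itemset
        return sum(1 for t in transactions if itemset <= t)

    X = {"Milk", "Diapers"}   # rule antecedent
    Y = {"Beer"}              # rule consequent
    N = len(transactions)

    support = support_count(X | Y, transactions) / N                                   # 2/5 = 0.4
    confidence = support_count(X | Y, transactions) / support_count(X, transactions)   # 2/3 ~ 0.67
    print(support, round(confidence, 2))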

Why use support and confidence:

Support is an important measure, because a rule that has very low support may occur
simply by chance.
Confidence, on the other hand, measures the reliability of the inference made by a
rule.
For a given rule X → Y, the higher the confidence, the more likely it is for Y to be
present in transactions that contain X.

4.2 Association Rule Mining Task:

Definition: Given a set of transactions T, the goal of association rule mining is to find all rules
having
• support ≥ minsup threshold
• confidence ≥ minconf threshold


 Brute-force approach for mining association rules:

• List all possible association rules
• Compute the support and confidence for each rule
• Prune (cut away) rules that fail the minsup and minconf thresholds
 This approach is computationally prohibitive!

 A common strategy adopted by many association rule mining algorithms is to decompose
the problem into two major subtasks:

Two-step approach:
1. Frequent Itemset Generation
– Generate all itemsets whose support ≥ minsup
2. Rule Generation
– Generate high confidence rules from each frequent itemset, where each rule is
a binary partitioning of a frequent itemset
Note: Frequent itemset generation is still computationally expensive

4.3 Frequent Itemset Generation:

A lattice structure can be used to enumerate the list of all possible itemsets. Figure 4.1
shows an itemset lattice for I = {a, b, c, d, e}. In general, a data set that contains k items
can potentially generate up to 2^k - 1 frequent itemsets, excluding the null set.

Given d items, there are 2^d possible candidate itemsets.


A brute-force approach for finding frequent itemsets is to determine the support count for
every candidate itemset in the lattice structure.
For example, the support for {Bread, Milk} is incremented three times because the
itemset is contained in transactions 1, 4, and 5 in Table 4.1.

• Brute-force approach:
– Each itemset in the lattice is a candidate frequent itemset
– Count the support of each candidate by scanning the database

[Figure: N transactions (the market basket data of Table 4.1) are matched against a list of M
candidate itemsets; w denotes the maximum transaction width.]

– Match each transaction against every candidate


– Complexity ~ O(NMw) => Expensive since M = 2^d !!!
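
To make the cost concrete, the following Python sketch performs the brute-force computation literally: it enumerates every non-empty candidate itemset over the d items (M = 2^d - 1 candidates) and scans all N transactions for each one. The data and the minimum support count of 3 follow the running example; the variable names are only illustrative.

    from itertools import combinations

    transactions = [
        {"Bread", "Milk"},
        {"Bread", "Diapers", "Beer", "Eggs"},
        {"Milk", "Diapers", "Beer", "Cola"},
        {"Bread", "Milk", "Diapers", "Beer"},
        {"Bread", "Milk", "Diapers", "Cola"},
    ]
    items = sorted(set().union(*transactions))   # the d distinct items

    # Enumerate all 2^d - 1 non-empty candidate itemsets ...
    support = {}
    for k in range(1, len(items) + 1):
        for candidate in combinations(items, k):
            c = set(candidate)
            # ... and scan every transaction for each candidate
            support[frozenset(c)] = sum(1 for t in transactions if c <= t)

    frequent = {s: n for s, n in support.items() if n >= 3}
    print(len(support), "candidates counted;", len(frequent), "are frequent")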

There are several ways to reduce the computational complexity of frequent itemset
generation.

• Reduce the number of candidates (M):


– The Apriori principle is used to reduce the number of candidate itemsets.
– Use pruning techniques to reduce M
• Reduce the number of transactions (N):
– Reduce size of N as the size of itemset increases
• Reduce the number of comparisons (NM):
– Use efficient data structures to store the candidates or transactions
– No need to match every candidate against every transaction


4.3.1 Apriori principle:


– If an itemset is frequent, then all of its subsets must also be frequent.
– Example: suppose {c, d, e} is a frequent itemset. Clearly, any transaction that contains
{c, d, e} must also contain its subsets {c, d}, {c, e}, {d, e}, {c}, {d}, and {e}.
– As a result, if {c, d, e} is frequent, then all subsets of {c, d, e} must also be frequent.


Conversely, if an itemset such as {a, b} is infrequent, then all of its supersets must
be infrequent as well.

[Figure: An itemset lattice over the items {A, B, C, D, E}. If the itemset {A, B} is found to be
infrequent, then all of its supersets ({A,B,C}, {A,B,D}, {A,B,E}, {A,B,C,D}, ..., {A,B,C,D,E})
can be pruned from the search space.]

4.3.2 Frequent itemset generation using the Apriori principle:

Items (1-itemsets):
Item     Count
Bread    4
Coke     2
Milk     4
Beer     3
Diaper   4
Eggs     1

Pairs (2-itemsets), generated only from the frequent items (no need to generate candidates
involving Coke or Eggs):
Itemset           Count
{Bread, Milk}     3
{Bread, Beer}     2
{Bread, Diaper}   3
{Milk, Beer}      2
{Milk, Diaper}    3
{Beer, Diaper}    3

Triplets (3-itemsets):
Itemset                  Count
{Bread, Milk, Diaper}    3

Minimum support count = 3


 Apriori is the first association rule mining algorithm that pioneered the use of support-based
pruning to systematically control the exponential growth of candidate itemsets.
 The figure above provides a high-level view of the frequent itemset generation part of the
Apriori algorithm for the transactions shown in Table 4.1.
Note: The support threshold is 60%, which is equivalent to a minimum support count of 3.

 Initially, every item is considered as a candidate 1-itemset. After counting their supports,
the candidate itemsets {Cola} and {Eggs} are discarded because they appear in fewer than
three transactions.
 In the next iteration, candidate 2-itemsets are generated using only the frequent 1-itemsets
because the Apriori principle ensures that all supersets of the infrequent 1-itemsets must be
infrequent.
 Two of these six candidates, {Beer, Bread} and {Beer, Milk}, are subsequently found to
be infrequent after computing their support values. The remaining four candidates are
frequent, and thus will be used to generate candidate 3-itemsets.
 With the Apriori principle, we only need to keep candidate 3-itemsets whose subsets are
frequent. The only candidate that has this property is {Bread, Diapers, Milk}.

Frequent itemset generation of the Apriori Algorithm:


Method:

Let k=1

Generate frequent itemsets of length 1

Repeat until no new frequent itemsets are identified

Generate length (k+1) candidate itemsets from length k frequent itemsets

Prune candidate itemsets containing subsets of length k that are infrequent

Count the support of each candidate by scanning the DB

Eliminate candidates that are infrequent, leaving only those that are frequent
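
The steps above can be rendered directly in Python. The sketch below is one possible implementation, not the textbook's pseudocode verbatim: candidates of length k+1 are formed by merging frequent k-itemsets that agree on their first k-1 items (in sorted order), candidates with an infrequent subset are pruned, and one database scan counts the survivors.

    from itertools import combinations

    def apriori(transactions, minsup_count):
        # k = 1: count individual items and keep the frequent 1-itemsets
        counts = {}
        for t in transactions:
            for item in t:
                key = frozenset([item])
                counts[key] = counts.get(key, 0) + 1
        frequent = {s: c for s, c in counts.items() if c >= minsup_count}
        all_frequent = dict(frequent)

        k = 1
        while frequent:
            # Candidate generation: merge frequent k-itemsets sharing their first k-1 items
            prev = sorted(tuple(sorted(s)) for s in frequent)
            candidates = set()
            for i in range(len(prev)):
                for j in range(i + 1, len(prev)):
                    if prev[i][:k - 1] == prev[j][:k - 1]:
                        cand = frozenset(prev[i]) | frozenset(prev[j])
                        # Candidate pruning: every k-subset must itself be frequent
                        if len(cand) == k + 1 and all(
                                frozenset(sub) in frequent
                                for sub in combinations(cand, k)):
                            candidates.add(cand)
            # Support counting: one scan of the database for this pass
            counts = {c: 0 for c in candidates}
            for t in transactions:
                for c in candidates:
                    if c <= t:
                        counts[c] += 1
            # Candidate elimination: keep only the frequent candidates
            frequent = {c: n for c, n in counts.items() if n >= minsup_count}
            all_frequent.update(frequent)
            k += 1
        return all_frequent

    transactions = [
        {"Bread", "Milk"},
        {"Bread", "Diapers", "Beer", "Eggs"},
        {"Milk", "Diapers", "Beer", "Cola"},
        {"Bread", "Milk", "Diapers", "Beer"},
        {"Bread", "Milk", "Diapers", "Cola"},
    ]
    for itemset, count in apriori(transactions, minsup_count=3).items():
        print(sorted(itemset), count)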


4.3.3 Candidate Generation and Pruning

• Candidate Generation: This operation generates new candidate k-itemsets based on the
frequent (k-1)-itemsets found in the previous iteration.

• Candidate Pruning: This operation eliminates some of the candidate k-itemsets using the
support-based pruning strategy

Principles for generating candidate itemsets:

1. It should avoid generating too many unnecessary candidates.
2. It must ensure that the candidate set is complete, i.e., no frequent itemset is left out.
3. It should not generate the same candidate itemset more than once.
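
As an illustration of these principles, the sketch below shows one common way to implement candidate generation and pruning (often called the F_{k-1} x F_{k-1} method): two frequent (k-1)-itemsets are merged only if they agree on their first k-2 items, and a merged candidate is kept only if all of its (k-1)-subsets are frequent. The sample data reuses the frequent 2-itemsets from the earlier example.

    from itertools import combinations

    def candidate_gen_and_prune(freq_prev, k):
        """Merge frequent (k-1)-itemsets that share their first k-2 items,
        then prune candidates that contain an infrequent (k-1)-subset."""
        prev = sorted(tuple(sorted(s)) for s in freq_prev)
        candidates = []
        for i in range(len(prev)):
            for j in range(i + 1, len(prev)):
                if prev[i][:k - 2] == prev[j][:k - 2]:          # shared prefix
                    cand = frozenset(prev[i]) | frozenset(prev[j])
                    if len(cand) == k and all(
                            frozenset(sub) in freq_prev
                            for sub in combinations(cand, k - 1)):
                        candidates.append(cand)
        return candidates

    # Frequent 2-itemsets of the running example (support count >= 3)
    F2 = {frozenset(s) for s in [("Bread", "Milk"), ("Bread", "Diaper"),
                                 ("Milk", "Diaper"), ("Beer", "Diaper")]}
    # Produces the single candidate 3-itemset {Bread, Diaper, Milk}
    print(candidate_gen_and_prune(F2, 3))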

4.3.4 Support Counting:

• Support counting is the process of determining the frequency of occurrence for every
candidate itemset that survives the candidate pruning step of the apriori-gen function.
• One approach for doing this is to compare each transaction against every candidate itemset.
• This approach is computationally expensive, especially when the numbers of transactions and
candidate itemsets are large.

• An alternative approach is to enumerate the itemsets contained in each transaction and use
them to update the support counts of their respective candidate itemsets.
• To illustrate, consider a transaction “t” that contains five items, {1,2,3,5,6}. There are 10
itemsets of size 3 contained in this transaction.
• Some of the itemsets may correspond to the candidate 3-itemsets under investigation, in
which case, their support counts are incremented.
• Other subsets of t that do not correspond to any candidates can be ignored.
• Figure-6.9 below shows a systematic way for enumerating the 3-itemsets contained in t.
Assuming that each itemset keeps its items in increasing lexicographic order, an itemset can
be enumerated by specifying the smallest item first, followed by the larger items.
• For instance, given t = {1,2,3,5,6}, all the 3-itemsets contained in t must begin with item 1, 2,
or 3.
• It is not possible to construct a 3-itemset that begins with items 5 or 6 because there are only
two items in t whose labels are greater than or equal to 5.


• The number of ways to specify the first item of a 3-itemset contained in t is illustrated by the
Level 1 prefix structures depicted in Figure 6.9. For instance, 1 2 3 5 6 represents a 3-itemset
that begins with item 1, followed by two more items chosen from the set {2,3,5,6}.
• After fixing the first item, the prefix structures at Level 2 represent the number of ways to
select the second item.

• For example, 1 2 3 5 6 corresponds to itemsets that begin with prefix (1 2) and are followed
by items 3, 5, or 6. Finally, the prefix structures at Level 3 represent the complete set of 3-
itemsets contained in t. For example, the 3-itemsets that begin with prefix {1 2} are {1,2,3},
{1,2,5}, and {1,2,6}, while those that begin with prefix {2 3} are {2,3,5} and {2,3,6}.
• The prefix structures shown in Figure-6.9 demonstrate how itemsets contained in a
transaction can be systematically enumerated, i.e., by specifying their items one by one, from
the leftmost item to the rightmost item. We still have to determine whether each enumerated
3-itemset corresponds to an existing candidate itemset. If it matches one of the candidates,
then the support count of the corresponding candidate is incremented. In the next section, we
illustrate how this matching operation can be performed efficiently using a hash tree
structure.
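
A short Python sketch of this enumeration, using itertools.combinations to generate the subsets of size 3 in lexicographic order (the candidate itemsets listed here are hypothetical, chosen only to show the matching step):

    from itertools import combinations

    t = [1, 2, 3, 5, 6]        # the transaction, items kept in increasing order

    # Hypothetical candidate 3-itemsets currently under investigation
    candidate_counts = {frozenset({1, 2, 3}): 0,
                        frozenset({1, 2, 5}): 0,
                        frozenset({2, 3, 6}): 0,
                        frozenset({3, 4, 5}): 0}

    # Enumerate all C(5, 3) = 10 subsets of size 3 contained in t ...
    for subset in combinations(t, 3):
        s = frozenset(subset)
        # ... and increment a candidate's support count only on a match
        if s in candidate_counts:
            candidate_counts[s] += 1

    # {1,2,3}, {1,2,5} and {2,3,6} are contained in t; {3,4,5} is not (4 is missing)
    print(candidate_counts)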


Support counting using a Hash tree:

• In the Apriori algorithm, candidate itemsets are partitioned into different buckets and stored
in a hash tree.
• During support counting, itemsets contained in each transaction are also hashed into their
appropriate buckets.
• That way, instead of comparing each itemset in the transaction with every candidate itemset,
it is matched only against candidate itemsets that belong to the same bucket.

4.3.5 Computational Complexity:

The computational complexity of the Apriori algorithm can be affected by the
following factors:

• Support Threshold: lowering the support threshold often results in more itemsets being
declared as frequent.
This has an adverse effect on the computational complexity of the algorithm because more
candidate itemsets must be generated and counted.


• Number of Items (Dimensionality): As the number of items increases, more space will be
needed to store the support counts of items. If the number of frequent items also grows with
the dimensionality of the data, the computation and I/O costs will increase because of the
larger number of candidate itemsets generated by the algorithm.

• Number of Transactions: Since the Apriori algorithm makes repeated passes over the data
set, its run time increases with a larger number of transactions.

• Average Transaction Width: For dense data sets, the average transaction width can be
very large. This affects the complexity of the Apriori algorithm in two ways.
o First, the maximum size of frequent itemsets tends to increase as the average
transaction width increases.
o Second, as the transaction width increases, more itemsets are contained in the
transaction.

• Generation of frequent 1-itemsets: For each transaction, we need to update the support
count for every item present in the transaction. Assuming that w is the average transaction
width, this operation requires O(Nw) time, where N is the total number of transactions.

• Candidate generation: To generate candidate k-itemsets, pairs of frequent (k-1)-itemsets
are merged to determine whether they have at least k-2 items in common. Each merging
operation requires at most k-2 equality comparisons. In the best-case scenario, every
merging step produces a viable candidate k-itemset. In the worst-case scenario, the
algorithm must merge every pair of frequent (k-1)-itemsets found in the previous iteration.

• Support counting: Each transaction of length |t| produces C(|t|, k) itemsets of size k. This is
also the effective number of hash tree traversals performed for each transaction.

4.4 Rule Generation:

• An association rule can be extracted by partitioning a frequent itemset Y into two non-empty
subsets, X and Y - X, such that X → Y - X satisfies the confidence threshold.

• Note that all such rules must have already met the support threshold because they are
generated from a frequent itemset.


Example:
• Let X = {1, 2, 3} be a frequent itemset. There are six candidate association rules that can be
generated from X: {1,2} → {3}, {1,3} → {2}, {2,3} → {1}, {1} → {2,3}, {2} → {1,3},
and {3} → {1,2}.
As the support of each of these rules is identical to the support of X, all of them must satisfy
the support threshold.

4.4.1 Confidence-Based Pruning:

• Theorem: If a rule X → Y - X does not satisfy the confidence threshold, then any rule
X' → Y - X', where X' is a subset of X, cannot satisfy the confidence threshold either.

4.4.2 Rule Generation in the Apriori Algorithm:

• The Apriori algorithm uses a level-wise approach for generating association rules, where
each level corresponds to the number of items that belong to the rule consequent.
• Initially, all the high-confidence rules that have only one item in the rule consequent are
extracted. These rules are then used to generate new candidate rules.

• For example, if {acd} → {b} and {abd} → {c} are high-confidence rules, then the candidate
rule {ad} → {bc} is generated by merging the consequents of both rules.

• If any node in the lattice has low confidence, then according to the confidence-based pruning
theorem, the entire subgraph spanned by the node can be pruned immediately.
• Suppose the confidence for {bcd} → {a} is low. All the rules containing item a in their
consequent, including {cd} → {ab}, {bd} → {ac}, {bc} → {ad} and {d} → {abc}, can be
discarded.
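
The sketch below is a simplified Python rendering of this level-wise procedure for a single frequent itemset: the consequent grows one item per level, and only the consequents of rules that met minconf are merged to form the next level, so the descendants of a low-confidence rule are never examined.

    def support_count(itemset, transactions):
        return sum(1 for t in transactions if itemset <= t)

    def gen_rules(freq_itemset, transactions, minconf):
        """Generate rules X -> Y from one frequent itemset, growing the
        consequent Y level by level with confidence-based pruning."""
        freq_itemset = frozenset(freq_itemset)
        sup_itemset = support_count(freq_itemset, transactions)
        rules = []
        consequents = [frozenset({item}) for item in freq_itemset]   # level 1
        while consequents:
            survivors = []
            for Y in consequents:
                X = freq_itemset - Y
                if not X:
                    continue
                conf = sup_itemset / support_count(X, transactions)
                if conf >= minconf:
                    rules.append((set(X), set(Y), conf))
                    survivors.append(tuple(sorted(Y)))   # eligible for merging
            # Merge surviving consequents to build the next level
            survivors.sort()
            size = len(survivors[0]) if survivors else 0
            next_level = set()
            for i in range(len(survivors)):
                for j in range(i + 1, len(survivors)):
                    merged = frozenset(survivors[i]) | frozenset(survivors[j])
                    if len(merged) == size + 1:
                        next_level.add(merged)
            consequents = list(next_level)
        return rules

    transactions = [
        {"Bread", "Milk"},
        {"Bread", "Diapers", "Beer", "Eggs"},
        {"Milk", "Diapers", "Beer", "Cola"},
        {"Bread", "Milk", "Diapers", "Beer"},
        {"Bread", "Milk", "Diapers", "Cola"},
    ]
    for X, Y, conf in gen_rules({"Milk", "Diapers", "Beer"}, transactions, 0.6):
        print(sorted(X), "->", sorted(Y), round(conf, 2))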


4.5 Compact Representation of Frequent Itemsets:

• In practice, the number of frequent itemsets produced from a transaction data set can be
very large.
• It is useful to identify a small representative set of itemsets from which all other frequent
itemsets can be derived.

4.5.1 Maximal Frequent Itemsets:

 Definition: A maximal frequent itemset is defined as a frequent itemset for which
none of its immediate supersets is frequent.


• The itemsets in the lattice are divided into two groups:


o Frequent
o Infrequent.
• A frequent itemset border, which is represented by a dashed line, is also illustrated in the
diagram. Every itemset located above the border is frequent, while those located below the
border (the shaded nodes) are infrequent.
• Among the itemsets residing near the border, {a, d}, {a, c, e} and {b, c, d, e} are considered
to be maximal frequent itemsets because their immediate supersets are infrequent.
• An itemset such as {a, d} is maximal frequent because all of its immediate supersets, {a, b,
d}, {a, c, d} and {a, d, e}, are infrequent.
• In contrast, {a, c} is non-maximal because one of its immediate supersets, {a, c, e}, is
frequent.
• Maximal frequent itemsets effectively provide a compact representation of frequent itemsets.
In other words, they form the smallest set of itemsets from which all frequent itemsets can be
derived.
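
Given the full collection of frequent itemsets, the maximal ones can be picked out with a straightforward check, as in this small sketch (the collection F below is illustrative, loosely based on the example; it is not the complete figure from the notes):

    def maximal_frequent(frequent_itemsets):
        """Keep only the frequent itemsets with no frequent proper superset."""
        sets = [frozenset(s) for s in frequent_itemsets]
        return [s for s in sets if not any(s < other for other in sets)]

    # Illustrative frequent itemsets (a subset of the example's lattice)
    F = [{"a"}, {"c"}, {"d"}, {"e"}, {"a", "c"}, {"a", "d"}, {"a", "e"},
         {"c", "e"}, {"a", "c", "e"}]
    # Only {a, d} and {a, c, e} have no frequent superset in this collection
    print([sorted(s) for s in maximal_frequent(F)])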


Example: The frequent itemsets can be divided into two groups:

1) Frequent itemsets that begin with item a and that may contain items c, d, or e.
This group includes itemsets such as {a}, {a, c}, {a, d}, {a, e} and {a, c, e}.
2) Frequent itemsets that begin with items b, c, d, or e. This group includes
itemsets such as {b}, {b, c}, {c, d}, {b, c, d, e}, etc.

4.5.2 Closed Frequent Itemsets:

Closed Itemset:
An itemset X is closed if none of its immediate supersets has exactly the same support
count as X

• For example, since the node {b, c} is associated with transaction IDs 1, 2, and 3, its support
count is equal to three. From the transactions given in this diagram, notice that every
transaction that contains b also contains c.


• Consequently, the support for {b} is identical to {b, c}, and {b} should not be considered a
closed itemset. Similarly, since c occurs in every transaction that contains both a and d, the
itemset {a, d} is not closed.
• On the other hand, {b, c} is a closed itemset because it does not have the same support count
as any of its supersets.

Closed Frequent Itemset:

An itemset is a closed frequent itemset if it is closed and its support is greater than or
equal to minsup.
• For example, assuming that the support threshold is 40%, {b, c} is a closed frequent itemset
because its support is 60%. The rest of the closed frequent itemsets are indicated by the
shaded nodes.
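
The definition translates directly into a (deliberately naive) check: an itemset is closed if adding any single extra item changes its support count. A small sketch, using an illustrative data set rather than the figure from the notes:

    from itertools import combinations

    def support_count(itemset, transactions):
        return sum(1 for t in transactions if itemset <= t)

    def closed_frequent(transactions, minsup_count):
        """Return frequent itemsets whose support differs from the support of
        every immediate superset (i.e. the closed frequent itemsets)."""
        items = sorted(set().union(*transactions))
        closed = {}
        for k in range(1, len(items) + 1):
            for cand in combinations(items, k):
                s = frozenset(cand)
                sup = support_count(s, transactions)
                if sup < minsup_count:
                    continue
                # Closed: no immediate superset has the same support count
                if all(support_count(s | {extra}, transactions) != sup
                       for extra in items if extra not in s):
                    closed[s] = sup
        return closed

    # Illustrative transactions (not the data set of the figure)
    transactions = [{"a", "b", "c"}, {"a", "b", "c", "d"}, {"b", "c", "e"},
                    {"a", "c", "d", "e"}, {"d", "e"}]
    for s, sup in closed_frequent(transactions, 2).items():
        print(sorted(s), sup)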

4.6 Alternative methods for Generating frequent itemsets:

Traversal of the itemset lattice: A search for frequent itemsets can be conceptually viewed as a
traversal of the itemset lattice.
The search algorithm decides how the lattice structure is traversed during the frequent itemset
generation process.

Search Algo-1:
General-to-specific vs. specific-to-general:
o The general-to-specific search strategy is effective, provided the maximum length
of a frequent itemset is not too long.
o A specific-to-general search strategy looks for the more specific frequent itemsets
first, before finding the more general frequent itemsets.
o This strategy is useful for discovering maximal frequent itemsets in dense
transaction data sets.


Search Algo-2:
Equivalence classes:
Another way to envision the traversal is to first partition the lattice into disjoint groups of
nodes (or equivalence classes). A frequent itemset generation algorithm searches for frequent
itemsets within a particular equivalence class first before moving to another equivalence class


Search Algo-3: BFS and DFS (breadth-first search and depth-first search):

The Apriori algorithm traverses the lattice in a breadth-first manner; the itemset lattice can
also be traversed in a depth-first manner.

Representation of the Database – horizontal vs. vertical data layout:

 There are many ways to represent a transaction data set. Figure 6.23 shows two different
ways of representing market basket transactions. The representation on the left is called a
horizontal data layout, which is adopted by many association rule mining algorithms,
including Apriori. Another possibility is to store the list of transaction identifiers (TID-list)
associated with each item. Such a representation is known as the vertical data layout.
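
A minimal sketch of the two layouts and the conversion between them (the transactions here are illustrative, not Figure 6.23 itself):

    # Horizontal layout: each transaction ID maps to the set of items it contains
    horizontal = {
        1: {"a", "b", "e"},
        2: {"b", "c", "d"},
        3: {"c", "e"},
        4: {"a", "c", "d"},
        5: {"a", "b", "c", "d"},
    }

    # Vertical layout: each item maps to the list of TIDs that contain it (its TID-list)
    vertical = {}
    for tid, items in horizontal.items():
        for item in items:
            vertical.setdefault(item, []).append(tid)

    for item in sorted(vertical):
        print(item, vertical[item])

    # With TID-lists, the support count of an itemset is the size of the
    # intersection of its items' TID-lists, e.g. support({a, b}) = |{1, 5}| = 2
    print(len(set(vertical["a"]) & set(vertical["b"])))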


4.7 FP-Growth Algorithm:


Definition: FP-growth is an algorithm that takes a radically different approach to discovering
frequent itemsets: it encodes the data set using a compact data structure called an FP-tree and
extracts frequent itemsets directly from this structure.

4.7.1 FP-Tree Representation:


• An FP-tree is a compressed representation of the input data. It is constructed by reading the
data set one transaction at a time and mapping each transaction onto a path in the FP-tree.
• As different transactions can have several items in common, their paths may overlap.
• The more the paths overlap with one another, the more compression we can achieve using the
FP-tree structure.

Transaction database (Figure 6.24):

TID   Items
1     {A, B}
2     {B, C, D}
3     {A, C, D, E}
4     {A, D, E}
5     {A, B, C}
6     {A, B, C, D}
7     {B, C}
8     {A, B, C}
9     {A, B, D}
10    {B, C, E}

[Figure 6.24: The FP-tree after reading TID=1, after reading TID=2, and after reading all ten
transactions. A header table keeps one pointer chain per item (A, B, C, D, E) linking the nodes
that carry that item; these pointers are used to assist frequent itemset generation.]


Figure 6.24 shows a data set that contains ten transactions and five items. The structures of the FP-
tree after reading the first three transactions are also depicted in the diagram.
Each node in the tree contains the label of an item along with a counter that shows the number of
transactions mapped onto the given path. Initially, the FP-tree contains only the root node
represented by the null symbol.

The FP-tree is subsequently extended in the following way:

1) The data set is scanned once to determine the support count of each item. Infrequent
items are discarded, while the frequent items are sorted in decreasing support
counts. For the data set shown in Figure 6.24, a is the most frequent item,
followed by b, c, d, and e.

2) The algorithm makes a second pass over the data to construct the FP-tree. After
reading the first transaction, {a, b}, the nodes labeled a and b are created. A
path is then formed from null → a → b to encode the transaction. Every node along
the path has a frequency count of 1.

3) After reading the second transaction, {b, c, d}, a new set of nodes is created for items
b, c, and d. A path is then formed to represent the transaction by connecting the
nodes null → b → c → d. Every node along this path also has a frequency count
equal to one. Although the first two transactions have an item in common, which is
b, their paths are disjoint because the transactions do not share a common prefix.

4) The third transaction, {a, c, d, e}, shares a common prefix item (which is a) with the
first transaction. As a result, the path for the third transaction, null → a → c → d → e,
overlaps with the path for the first transaction, null → a → b. Because of their
overlapping paths, the frequency count for node a is incremented to two, while the
frequency counts for the newly created nodes, c, d, and e, are equal to one.

5) This process continues until every transaction has been mapped onto one of the paths
given in the FP-tree. The resulting FP-tree after reading all the transactions is
shown at the bottom of Figure 6.24.
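
A compact Python sketch of this construction is given below. It follows the two passes just described: count item supports, drop infrequent items, sort each transaction's items by decreasing support, and insert the transaction as a path from the root, incrementing counts on shared prefixes. The ten transactions are those listed above; depending on how ties in support are broken, the resulting tree may differ in detail from Figure 6.24.

    class FPNode:
        def __init__(self, item, parent):
            self.item, self.parent = item, parent
            self.count = 0
            self.children = {}                 # item -> child FPNode

    def build_fp_tree(transactions, minsup_count):
        # Pass 1: support count of every item; infrequent items are discarded
        support = {}
        for t in transactions:
            for item in t:
                support[item] = support.get(item, 0) + 1
        support = {i: c for i, c in support.items() if c >= minsup_count}

        root = FPNode(None, None)
        header = {item: [] for item in support}   # item -> list of its nodes (pointers)

        # Pass 2: map each transaction onto a path, most frequent items first
        for t in transactions:
            items = sorted((i for i in t if i in support),
                           key=lambda i: (-support[i], i))
            node = root
            for item in items:
                child = node.children.get(item)
                if child is None:              # start a new branch
                    child = FPNode(item, node)
                    node.children[item] = child
                    header[item].append(child)
                child.count += 1               # overlapping prefix: bump the count
                node = child
        return root, header

    transactions = [
        {"a", "b"}, {"b", "c", "d"}, {"a", "c", "d", "e"}, {"a", "d", "e"},
        {"a", "b", "c"}, {"a", "b", "c", "d"}, {"b", "c"}, {"a", "b", "c"},
        {"a", "b", "d"}, {"b", "c", "e"},
    ]
    root, header = build_fp_tree(transactions, minsup_count=1)
    # Summing a node chain's counts recovers each item's total support count
    print({item: sum(node.count for node in header[item]) for item in header})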

 The size of an FP-tree is typically smaller than the size of the uncompressed data because
many transactions in market basket data often share a few items in common.
 In the best-case scenario, where all the transactions have the same set of items, the FP-tree
contains only a single branch of nodes.


 The worst-case scenario happens when every transaction has a unique set of items. As none
of the transactions have any items in common, the size of the FP-tree is effectively the same
as the size of the original data.
 However, the physical storage requirement for the FP-tree is higher because it requires
additional space to store pointers between nodes and counters for each item.

4.7.2 Frequent Itemset Generation in FP-Growth Algorithm

 FP-growth is an algorithm that generates frequent itemsets from an FP-tree by exploring the
tree in a bottom-up fashion.
 Given the example tree shown in Figure 6.24, the algorithm looks for frequent itemsets
ending in e first, followed by d, c, b, and finally, a.
 Since every transaction is mapped onto a path in the FP-tree, we can derive the frequent
itemsets ending with a particular item, say, e, by examining only the paths containing node e.
These paths can be accessed rapidly using the pointers associated with node e. The extracted
paths are shown in Figure 6.26(a).


Table 6.6. The list of frequent itemsets ordered by their corresponding suffixes:

Suffix   Frequent Itemsets
e        {e}, {d,e}, {a,d,e}, {c,e}, {a,e}
d        {d}, {c,d}, {b,c,d}, {a,c,d}, {b,d}, {a,b,d}, {a,d}
c        {c}, {b,c}, {a,b,c}, {a,c}
b        {b}, {a,b}
a        {a}

 After finding the frequent itemsets ending in e, the algorithm proceeds to look for frequent
itemsets ending in d by processing the paths associated with node d.
 The corresponding paths are shown in Figure 6.26(b). This process continues until all the
paths associated with nodes c, b, and finally a, are processed.
 The paths for these items are shown in Figures 6.26(c), (d), and (e), while their corresponding
frequent itemsets are summarized in Table 6.6.
FP-growth finds all the frequent itemsets ending with a particular suffix by employing a divide-and-
conquer strategy to split the problem into smaller sub-problems. For example, suppose we are
interested in finding all frequent itemsets ending in e. To do this, we must first check whether the
itemset {e} itself is frequent. If it is frequent, we consider the sub-problem of finding frequent
itemsets ending in de, followed by ce, be, and ae. In turn, each of these sub-problems are further
decomposed into smaller sub-problems. By merging the solutions obtained from the sub-problems,
all the frequent itemsets ending in e can be found. This divide-and-conquer approach is the key
strategy employed by the FP-growth algorithm.

For a more concrete example on how to solve the sub-problems, consider the task of
finding frequent itemsets ending with e .

1. The first step is to gather all the paths containing node e. These initial paths are called
prefix paths and are shown in Figure 6.27(a).

2. From the prefix paths shown in Figure 6.27(a), the support count for e is obtained by adding
the support counts associated with node e. Assuming that the minimum support count is 2, {e}
is declared a frequent itemset because its support count is 3.

3. Because {e} is frequent, the algorithm has to solve the sub-problems of finding frequent
itemsets ending in de, ce, be, and ae. Before solving these sub-problems, it must first convert
the prefix paths into a conditional FP-tree, which is structurally similar to an FP-tree, except it
is used to find frequent itemsets ending with a particular suffix.


A conditional FP-tree is obtained in the following way:

a) First, the support counts along the prefix paths must be updated because some of the counts
include transactions that do not contain item e. For example, the rightmost path shown in
Figure 6.27(a), null → b:2 → c:2 → e:1, includes a transaction {b, c} that does not contain
item e. The counts along the prefix path must therefore be adjusted to 1 to reflect the actual
number of transactions containing {b, c, e}.

b) The prefix paths are truncated by removing the nodes for e. These nodes can be removed
because the support counts along the prefix paths have been updated to reflect only
transactions that contain e and the sub-problems of finding frequent itemsets ending in de, ce,
be, and ae no longer need information about node e.
c) After updating the support counts along the prefix paths, some of the items may no longer be
frequent. For example, the node b appears only once and has a support count equal to 1, which
means that there is only one transaction that contains both b and e. Item b can be safely
ignored from subsequent analysis because all itemsets ending in be must be infrequent.


The conditional FP-tree for e is shown in Figure 6.27(b). The tree looks different than the original
prefix paths because the frequency counts have been updated and the nodes b and e have been
eliminated.

 FP-growth uses the conditional FP-tree for e to solve the sub-problems of finding frequent
itemsets ending in de, ce, and ae.

 To find the frequent itemsets ending in de, the prefix paths for d are gathered from the
conditional FP-tree for e (Figure 6.27(c)).

 By adding the frequency counts associated with node d, we obtain the support count for
{d, e}. Since the support count is equal to 2, {d,e} is declared a frequent itemset.

 Next, the algorithm constructs the conditional FP-tree for de using the approach described
in step 3.

 After updating the support counts and removing the infrequent item c, the conditional FP-
tree for de is shown in Figure 6.27(d).

 Since the conditional FP-tree contains only one item, a, whose support is equal to minsup,
the algorithm extracts the frequent itemset {a, d, e} and moves on to the next subproblem,
which is to generate frequent itemsets ending in ce.

 After processing the prefix paths for c, only {c, e} is found to be frequent. The algorithm
proceeds to solve the next subproblem and finds {a, e} to be the only frequent itemset
remaining.
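
The whole procedure can be condensed into a short recursive sketch. The version below is a simplification: instead of building explicit conditional FP-trees, it recurses directly on conditional pattern bases, i.e. lists of (prefix path, count) pairs. The starting pattern base is the set of prefix paths for suffix e read off the FP-tree built from the ten example transactions, with the counts already adjusted to the counts of the e nodes.

    def mine_suffix(pattern_base, suffix, minsup_count, results):
        """Recursively extend `suffix` using a conditional pattern base:
        a list of (path, count) pairs, where each path holds the items that
        precede the current suffix on some FP-tree path."""
        # Support of every item within this conditional pattern base
        support = {}
        for path, count in pattern_base:
            for item in path:
                support[item] = support.get(item, 0) + count
        for item, sup in support.items():
            if sup < minsup_count:
                continue                          # e.g. item b is dropped for suffix e
            new_suffix = [item] + suffix
            results[frozenset(new_suffix)] = sup
            # Build the conditional pattern base for the extended suffix
            conditional = [(path[:path.index(item)], count)
                           for path, count in pattern_base if item in path]
            mine_suffix(conditional, new_suffix, minsup_count, results)

    # Prefix paths ending in e (item order a, b, c, d before e), counts adjusted to e
    prefix_paths_e = [(["a", "c", "d"], 1), (["a", "d"], 1), (["b", "c"], 1)]
    results = {frozenset({"e"}): 3}               # {e} itself is frequent (count 3)
    mine_suffix(prefix_paths_e, ["e"], 2, results)
    for itemset, sup in results.items():
        print(sorted(itemset), sup)               # {e}, {a,e}, {c,e}, {d,e}, {a,d,e}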


Possible Questions from this chapter:

1. Define association analysis, itemset, transaction width, and association rule.
2. Explain the association rule mining task and the different strategies used.
3. Explain the frequent itemset generation procedure.
4. Explain the Apriori principle in detail.
5. Explain in detail the support counting procedure used in frequent itemset generation.
6. Explain the different computational complexities faced by the Apriori algorithm.
7. Explain rule generation in detail.
8. Write a note on the compact representation of frequent itemsets.
9. Explain alternative methods for generating frequent itemsets.
10. Explain the FP-growth algorithm in detail.
