
Mining Frequent Patterns

without Candidate Generation


Jiawei Han, Jian Pei and Yiwen Yin
School of Computer Science
Simon Fraser University

Presented by Song Wang, Data Mining class, March 18th, 2009


Slides Modified From Mohammed and Zhenyu’s Version
Outline of the Presentation

Outline
• Frequent Pattern Mining: Problem statement and an
example
• Review of Apriori-like Approaches
• FP-Growth:
– Overview
– FP-tree:
• structure, construction and advantages
– FP-growth:
• FP-tree → conditional pattern bases → conditional FP-trees → frequent patterns
• Experiments
• Discussion:
– Improvement of FP-growth
• Concluding Remarks

Frequent Pattern Mining Problem: Review

Frequent Pattern Mining: An Example


Given a transaction database DB and a minimum support threshold ξ, find
all frequent patterns (item sets) with support no less than ξ.

Input: transaction database DB:

TID    Items bought
100    {f, a, c, d, g, i, m, p}
200    {a, b, c, f, l, m, o}
300    {b, f, h, j, o}
400    {b, c, k, s, p}
500    {a, f, c, e, l, p, m, n}

Minimum support: ξ = 3

Output: all frequent patterns, i.e., f, a, …, fa, fac, fam, fm, am, …

Problem Statement: How to efficiently find all frequent patterns
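For concreteness, the support of a candidate itemset can be checked directly against this toy DB. A minimal Python sketch (the transactions and the threshold ξ = 3 are taken from this slide; the function name is illustrative):

# Example DB from this slide; minimum support ξ = 3.
DB = [
    {"f", "a", "c", "d", "g", "i", "m", "p"},
    {"a", "b", "c", "f", "l", "m", "o"},
    {"b", "f", "h", "j", "o"},
    {"b", "c", "k", "s", "p"},
    {"a", "f", "c", "e", "l", "p", "m", "n"},
]
MIN_SUP = 3

def support(itemset):
    """Number of transactions containing every item of `itemset`."""
    return sum(1 for t in DB if set(itemset) <= t)

print(support({"f", "c", "a"}))   # 3 -> frequent (>= MIN_SUP)
print(support({"b", "p"}))        # 1 -> not frequent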

Review of Apriori-like Approaches for finding complete frequent item-sets

Apriori

• Main Steps of the Apriori Algorithm:
– Candidate Generation: use the frequent (k-1)-itemsets (L_{k-1}) to generate candidates of frequent k-itemsets, C_k
– Candidate Test: scan the database and count each pattern in C_k to get the frequent k-itemsets (L_k)

• E.g., on the example DB (a minimal sketch of this loop follows the table):

TID    Items bought                 Apriori iterations
100    {f, a, c, d, g, i, m, p}     C1: f, a, c, d, g, i, m, p, l, o, h, j, k, s, b, e, n
200    {a, b, c, f, l, m, o}        L1: f, a, c, m, b, p
300    {b, f, h, j, o}              C2: fa, fc, fm, fp, ac, am, … bp
400    {b, c, k, s, p}              L2: fa, fc, fm, …
500    {a, f, c, e, l, p, m, n}     …
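A minimal generate-and-test sketch of this loop (illustrative only, not the paper's code; `transactions` is any iterable of item collections, and candidate generation is the plain join-and-prune form):

from itertools import combinations

def apriori(transactions, min_sup):
    """Return {frozenset(itemset): support} for all frequent itemsets."""
    transactions = [frozenset(t) for t in transactions]

    def count(candidates):
        # Candidate test: one full scan of the database per level k.
        return {c: sum(1 for t in transactions if c <= t) for c in candidates}

    items = {i for t in transactions for i in t}
    frequent = {c: s for c, s in count({frozenset([i]) for i in items}).items()
                if s >= min_sup}                                  # L1
    result, k = dict(frequent), 2
    while frequent:
        # Candidate generation: join L_{k-1} with itself, keep size-k unions,
        # then prune any candidate that has an infrequent (k-1)-subset.
        prev = list(frequent)
        cands = {a | b for a, b in combinations(prev, 2) if len(a | b) == k}
        cands = {c for c in cands
                 if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
        frequent = {c: s for c, s in count(cands).items() if s >= min_sup}  # Lk
        result.update(frequent)
        k += 1
    return result

On the example DB with ξ = 3 this reproduces L1 = {f, a, c, m, b, p} from the table above.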

Disadvantages of Apriori-like Approach

Performance Bottlenecks of Apriori

• Bottlenecks of Apriori: candidate generation and test

– Generating huge candidate sets:
• 10^4 frequent 1-itemsets will generate more than 10^7 candidate 2-itemsets
• To discover a frequent pattern of size 100, e.g., {a1, a2, …, a100}, one needs to generate on the order of 2^100 ≈ 10^30 candidates (a worked count follows below)
– Candidate testing incurs repeated scans of the database to check the candidates against it
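The candidate counts above follow from simple binomial counting (in LaTeX notation):

\binom{10^4}{2} = \frac{10^4\,(10^4 - 1)}{2} \approx 5 \times 10^{7},
\qquad
\sum_{k=1}^{100} \binom{100}{k} = 2^{100} - 1 \approx 1.27 \times 10^{30}.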

Overview: FP-tree based method

Overview of FP-Growth: Ideas


• Compress a large database into a compact, Frequent-Pattern
tree (FP-tree) structure
– highly compacted, but complete for frequent pattern mining
– avoid costly repeated database scans
• Develop an efficient, FP-tree-based frequent pattern mining
method (FP-growth)
– A divide-and-conquer methodology: decompose mining tasks into
smaller ones
– Avoid candidate generation: sub-database test only.

FP-Tree

FP-tree:
Construction and Design



FP-tree

Construct FP-tree
Two steps:
1. Scan the transaction DB for the first time, find the frequent items (single-item
patterns) and order them into a list L in descending order of frequency.
e.g., L = {f:4, c:4, a:3, b:3, m:3, p:3}, in the format (item-name: support)
2. Scan the DB a second time; for each transaction, order its frequent items according
to L and construct the FP-tree by inserting each frequency-ordered transaction into it.
(A preprocessing sketch follows; tree insertion itself is sketched after the FP-tree
definition slide.)
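A minimal sketch of these two scans in Python, using the example DB and ξ = 3 from the earlier slides (variable names are illustrative):

from collections import Counter

DB = [
    ["f", "a", "c", "d", "g", "i", "m", "p"],
    ["a", "b", "c", "f", "l", "m", "o"],
    ["b", "f", "h", "j", "o"],
    ["b", "c", "k", "s", "p"],
    ["a", "f", "c", "e", "l", "p", "m", "n"],
]
MIN_SUP = 3

# Scan 1: count item supports and build L, the frequency-descending item list.
counts = Counter(item for t in DB for item in set(t))
L = [item for item, c in counts.most_common() if c >= MIN_SUP]
rank = {item: i for i, item in enumerate(L)}
print(L)              # ['f', 'c', 'a', 'b', 'm', 'p'] (ties may come out in another order)

# Scan 2: keep only the frequent items of each transaction, ordered by L.
ordered_db = [sorted((i for i in t if i in rank), key=rank.get) for t in DB]
print(ordered_db[0])  # ['f', 'c', 'a', 'm', 'p']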

FP-tree

FP-tree Example: step 1

Step 1: Scan DB for the first time to generate L


TID    Items bought
100    {f, a, c, d, g, i, m, p}
200    {a, b, c, f, l, m, o}
300    {b, f, h, j, o}
400    {b, c, k, s, p}
500    {a, f, c, e, l, p, m, n}

L (item: frequency): f:4, c:4, a:3, b:3, m:3, p:3

L is a by-product of the first scan of the database.
FP-tree

FP-tree Example: step 2

Step 2: scan the DB for the second time, order frequent items
in each transaction

TID    Items bought                 (ordered) frequent items
100    {f, a, c, d, g, i, m, p}     {f, c, a, m, p}
200    {a, b, c, f, l, m, o}        {f, c, a, b, m}
300    {b, f, h, j, o}              {f, b}
400    {b, c, k, s, p}              {c, b, p}
500    {a, f, c, e, l, p, m, n}     {f, c, a, m, p}

FP-tree

FP-tree Example: step 2


Step 2: construct FP-tree

[Figure: constructing the tree from the first two ordered transactions.
Inserting {f, c, a, m, p} into the empty tree {} creates the path f:1 → c:1 → a:1 → m:1 → p:1.
Inserting {f, c, a, b, m} shares the prefix f, c, a (counts become f:2, c:2, a:2) and adds a
new branch b:1 → m:1 under a:2.]

NOTE: Each transaction corresponds to one path in the FP-tree.

FP-tree

FP-tree Example: step 2


Step 2: construct FP-tree

[Figure: the tree after inserting the remaining transactions, shown in three stages.
Inserting {f, b}: f's count becomes f:3 and a new child b:1 is added under f.
Inserting {c, b, p}: a new branch c:1 → b:1 → p:1 is added directly under the root.
Inserting {f, c, a, m, p}: the shared prefix becomes f:4, c:3, a:3, m:2, p:2.
Node-links (dashed) connect the nodes that carry the same item-name.]

[Figure: the five transactions inserted one by one, yielding the complete FP-tree
(shown again with its header table on the next slide).]
FP-tree

Construction Example
Final FP-tree

Header Table (item → head of node-links): f, c, a, b, m, p

[Figure: the final FP-tree. Paths from the root {}: f:4 → c:3 → a:3 → m:2 → p:2, with a
branch b:1 → m:1 under a:3 and a branch b:1 under f:4; and a second root branch
c:1 → b:1 → p:1. Node-links connect each header table entry to all nodes carrying the
same item-name.]

FP-tree

FP-Tree Definition
• An FP-tree is a frequent-pattern tree. Formally, an FP-tree is a tree structure
defined as follows (a data-structure sketch follows the definition):
1. One root labeled as "null", a set of item-prefix sub-trees as the children of the
root, and a frequent-item header table.
2. Each node in the item-prefix sub-trees has three fields:
– item-name: registers which item this node represents,
– count: the number of transactions represented by the portion of the path reaching
this node,
– node-link: links to the next node in the FP-tree carrying the same item-name, or
null if there is none.
3. Each entry in the frequent-item header table has two fields:
– item-name, and
– head of node-link, which points to the first node in the FP-tree carrying that
item-name.
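A minimal Python sketch of this structure (class and method names such as FPNode, FPTree, insert, and prefix_paths are illustrative, not from the paper; for brevity new nodes are prepended to the node-link chain, so the header points at the most recently added node rather than the first):

class FPNode:
    def __init__(self, item, parent):
        self.item = item            # item-name (the root uses item=None, i.e. "null")
        self.count = 0              # transactions represented by the path reaching here
        self.parent = parent
        self.children = {}          # item-name -> child FPNode
        self.node_link = None       # next node carrying the same item-name, or None

class FPTree:
    def __init__(self):
        self.root = FPNode(None, None)
        self.header = {}            # frequent-item header table: item -> head of node-links

    def insert(self, ordered_items, count=1):
        """Insert one frequency-ordered transaction (or prefix path) into the tree."""
        node = self.root
        for item in ordered_items:
            child = node.children.get(item)
            if child is None:
                child = FPNode(item, node)
                node.children[item] = child
                child.node_link = self.header.get(item)   # link into the item's chain
                self.header[item] = child
            child.count += count
            node = child

    def prefix_paths(self, item):
        """Conditional pattern base of `item`: (prefix path, count) pairs, collected by
        following the item's node-links and climbing parent pointers."""
        node, paths = self.header.get(item), []
        while node is not None:
            path, p = [], node.parent
            while p is not None and p.item is not None:
                path.append(p.item)
                p = p.parent
            if path:
                paths.append((list(reversed(path)), node.count))
            node = node.node_link
        return paths

# Usage with the ordered transactions from the construction example:
tree = FPTree()
for t in [["f","c","a","m","p"], ["f","c","a","b","m"], ["f","b"],
          ["c","b","p"], ["f","c","a","m","p"]]:
    tree.insert(t)
print(tree.prefix_paths("m"))   # [(['f','c','a','b'], 1), (['f','c','a'], 2)]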

FP-tree

Advantages of the FP-tree Structure

• The most significant advantage of the FP-tree:
– the DB is scanned only twice (and twice only).

• Completeness:
– the FP-tree contains all the information needed for mining frequent patterns
(given the min-support threshold). Why?

• Compactness:
– the size of the tree is bounded by the total number of occurrences of the frequent
items
– the height of the tree is bounded by the maximum number of frequent items in any
single transaction

FP-tree

Questions?
• Why descending order?
• Example 1: if the items in each transaction are not ordered consistently, common
prefixes are not shared.

TID    (unordered) frequent items
100    {f, a, c, m, p}
500    {a, f, c, p, m}

[Figure: inserting these two transactions as listed produces two completely separate
paths under the root, f:1 → a:1 → c:1 → m:1 → p:1 and a:1 → f:1 → c:1 → p:1 → m:1,
instead of one shared path.]

FP-tree

Questions?
• Example 2: ordering the frequent items in ascending order of frequency.

TID    (ascending-order) frequent items
100    {p, m, a, c, f}
200    {m, b, a, c, f}
300    {b, f}
400    {p, b, c}
500    {p, m, a, c, f}

[Figure: the resulting tree. The root now has three children (p:3, m:1, b:1), and the most
frequent items c and f end up near the leaves, so prefixes are shared much less.]

This tree is larger than the FP-tree, because in the FP-tree more frequent items are placed
higher, which yields fewer branches (more prefix sharing).

FP-Growth

FP-growth:
Mining Frequent Patterns
Using FP-tree



FP-Growth

Mining Frequent Patterns Using FP-tree


• General idea (divide-and-conquer):
Recursively grow frequent patterns using the FP-tree, mining shorter patterns
recursively and then concatenating the suffix (a recursive sketch follows this list):
– For each frequent item, construct its conditional pattern base and then its
conditional FP-tree;
– Repeat the process on each newly created conditional FP-tree until the resulting
FP-tree is empty or contains only one path (a single path generates all the
combinations of its sub-paths, each of which is a frequent pattern).
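A hedged sketch of this divide-and-conquer recursion. To stay self-contained it works on conditional pattern bases kept as (prefix-path, count) lists instead of physical FP-trees, but the recursion structure is the same; the initial "pattern base" is simply the frequency-ordered transactions from the construction example, and the function name is illustrative:

from collections import defaultdict

def pattern_growth(pattern_base, suffix, min_sup, out):
    """Mine all frequent patterns ending with `suffix` from a conditional pattern
    base given as a list of (items in L-order, count) pairs."""
    sup = defaultdict(int)
    for items, count in pattern_base:            # accumulate conditional supports
        for item in items:
            sup[item] += count
    for item, s in sup.items():
        if s < min_sup:
            continue
        pattern = (item,) + suffix                # grow the pattern by one item
        out[pattern] = s
        # Conditional pattern base of `pattern`: the prefixes that precede `item`.
        cond = [(items[:items.index(item)], count)
                for items, count in pattern_base
                if item in items and items.index(item) > 0]
        pattern_growth(cond, pattern, min_sup, out)

# Frequency-ordered transactions from the construction example, each with count 1.
ordered_db = [(("f", "c", "a", "m", "p"), 1), (("f", "c", "a", "b", "m"), 1),
              (("f", "b"), 1), (("c", "b", "p"), 1), (("f", "c", "a", "m", "p"), 1)]
results = {}
pattern_growth(ordered_db, (), 3, results)
print(results[("f", "c", "a", "m")])   # 3 -> {f, c, a, m} is frequent

In the actual algorithm the conditional pattern bases are read off the FP-tree via node-links (Step 1 on the following slides) and compressed into conditional FP-trees (Step 2), which is what keeps the recursion cheap.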

FP-Growth

3 Major Steps

Starting the processing from the end of list L:


Step 1:
Construct conditional pattern base for each item in the header table
Step 2
Construct conditional FP-tree from each conditional pattern base
Step 3
Recursively mine conditional FP-trees and grow frequent patterns
obtained so far. If the conditional FP-tree contains a single path, simply
enumerate all the patterns

FP-Growth: An Example

Step 1: Construct Conditional Pattern Base

• Start at the bottom of the frequent-item header table in the FP-tree
• Traverse the FP-tree by following the node-link of each frequent item
• Accumulate all the transformed prefix paths of that item to form its conditional
pattern base

Conditional pattern bases (read off the final FP-tree and its header table):

item    conditional pattern base
p       fcam:2, cb:1
m       fca:2, fcab:1
b       fca:1, f:1, c:1
a       fc:3
c       f:3
f       {}
FP-Growth

Properties of FP-Tree

• Node-link property
– For any frequent item ai, all the possible frequent patterns that contain
ai can be obtained by following ai's node-links, starting from ai's head
in the FP-tree header.
• Prefix path property
– To calculate the frequent patterns for a node ai in a path P, only the prefix
sub-path of ai in P needs to be accumulated, and its frequency count carries the same
count as node ai.

FP-Growth: An Example

Step 2: Construct Conditional FP-tree

• For each pattern base:
– Accumulate the count for each item in the base
– Construct the conditional FP-tree from the frequent items of the pattern base

Example: m's conditional pattern base is { fca:2, fcab:1 }. Accumulating counts gives
f:3, c:3, a:3, b:1; with minimum support 3, b is dropped, so the m-conditional FP-tree
is the single path f:3 → c:3 → a:3.

[Figure: the FP-tree with its header table (f:4, c:4, a:3, b:3, m:3, p:3) on the left and
the m-conditional FP-tree {} → f:3 → c:3 → a:3 on the right.]
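A small sketch of this step, assuming the pattern base is given as (prefix-path, count) pairs as in the conditional-pattern-base table (function name illustrative):

from collections import defaultdict

def conditional_items(pattern_base, min_sup):
    """Accumulate item counts in a conditional pattern base and keep only the
    frequent ones; these become the items of the conditional FP-tree."""
    counts = defaultdict(int)
    for path, count in pattern_base:
        for item in path:
            counts[item] += count
    return {item: c for item, c in counts.items() if c >= min_sup}

# m's conditional pattern base: fca:2, fcab:1
m_base = [(("f", "c", "a"), 2), (("f", "c", "a", "b"), 1)]
print(conditional_items(m_base, 3))   # {'f': 3, 'c': 3, 'a': 3}  -- b:1 is dropped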

FP-Growth
Step 3: Recursively mine the conditional FP-trees

Starting from the m-conditional FP-tree (f:3 → c:3 → a:3), each recursive step appends one
more item to the suffix and builds a smaller conditional FP-tree:

conditional FP-tree of "m": (fca:3)                   → frequent pattern m:3
  add "a": conditional FP-tree of "am": (fc:3)        → frequent pattern am:3
    add "c": conditional FP-tree of "cam": (f:3)      → frequent pattern cam:3
      add "f": conditional FP-tree of "fcam" is empty → frequent pattern fcam:3
    add "f": conditional FP-tree of "fam" is empty    → frequent pattern fam:3
  add "c": conditional FP-tree of "cm": (f:3)         → frequent pattern cm:3
    add "f": conditional FP-tree of "fcm" is empty    → frequent pattern fcm:3
  add "f": conditional FP-tree of "fm" is empty       → frequent pattern fm:3
FP-Growth

Principles of FP-Growth

• Pattern growth property
– Let α be a frequent itemset in DB, B be α's conditional pattern base, and β be an
itemset in B. Then α ∪ β is a frequent itemset in DB iff β is frequent in B.
• Is "fcabm" a frequent pattern?
– "fcab" is a branch of m's conditional pattern base
– "b" is NOT frequent in the transactions containing "fcab"
– so "bm" is NOT a frequent itemset.
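In symbols (following the paper's fragment-growth lemma), with B_α denoting α's conditional pattern base:

\alpha \cup \beta \text{ is frequent in } DB \iff \beta \text{ is frequent in } B_\alpha,
\qquad
\mathrm{support}_{DB}(\alpha \cup \beta) = \mathrm{support}_{B_\alpha}(\beta).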

FP-Growth
Conditional Pattern Bases and
Conditional FP-Tree

Item    Conditional pattern base       Conditional FP-tree
p       {(fcam:2), (cb:1)}             {(c:3)} | p
m       {(fca:2), (fcab:1)}            {(f:3, c:3, a:3)} | m
b       {(fca:1), (f:1), (c:1)}        Empty
a       {(fc:3)}                       {(f:3, c:3)} | a
c       {(f:3)}                        {(f:3)} | c
f       Empty                          Empty
(items processed from the end of L upward)
FP-Growth

Single FP-tree Path Generation

• Suppose an FP-tree T has a single path P. The complete set of frequent patterns of T
can be generated by enumerating all the combinations of the sub-paths of P.

Example: the m-conditional FP-tree is the single path {} → f:3 → c:3 → a:3.
All frequent patterns concerning m are the combinations of {f, c, a} appended to m:
m, fm, cm, am, fcm, fam, cam, fcam
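A one-screen check of this enumeration (the path items and the suffix m come from the example above):

from itertools import combinations

path = ["f", "c", "a"]        # the single path of the m-conditional FP-tree
suffix = ("m",)

patterns = [tuple(combo) + suffix
            for r in range(len(path) + 1)
            for combo in combinations(path, r)]
print(patterns)
# [('m',), ('f','m'), ('c','m'), ('a','m'), ('f','c','m'), ('f','a','m'),
#  ('c','a','m'), ('f','c','a','m')]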

Summary of FP-Growth Algorithm
• Mining frequent patterns can be viewed as first mining frequent 1-itemsets and
progressively growing each of them by recursively mining its conditional pattern base

• Transform a frequent k-itemset mining problem into a


sequence of k frequent 1-itemset mining problems via a
set of conditional pattern bases



FP-Growth

Efficiency Analysis
Facts (usually):
1. the FP-tree is much smaller than the size of the DB
2. a conditional pattern base is smaller than the original FP-tree
3. a conditional FP-tree is smaller than its pattern base
⇒ the mining process works on a set of usually much smaller pattern bases and
conditional FP-trees
⇒ divide-and-conquer, with a dramatic scale of shrinking

Experiments:
Performance Evaluation



Experiments

Experiment Setup
• Compare the runtime of FP-growth with classical Apriori and recent
TreeProjection
– Runtime vs. min_sup
– Runtime per itemset vs. min_sup
– Runtime vs. size of the DB (# of transactions)
• Synthetic data sets: the number of frequent itemsets grows exponentially as min_sup
goes down
– D1: T25.I10.D10K
• 1K items
• avg(transaction size) = 25
• avg(maximal potentially frequent itemset size) = 10
• 10K transactions
– D2: T25.I20.D100K
• 10K items, 100K transactions

Experiments

Scalability: runtime vs. min_sup


(w/ Apriori)

Experiments

Runtime/itemset vs. min_sup

Experiments
Scalability: runtime vs. # of Trans.
(w/ Apriori)

* Using D2 and min_support=1.5%


Experiments
Scalability: runtime vs. min_support
(w/ TreeProjection)

Experiments
Scalability: runtime vs. # of Trans.
(w/ TreeProjection)

Support = 1%

Discussion:
Improving the performance
and scalability of FP-growth



Discussion

Performance Improvement

• Projected DBs: partition the DB into a set of projected DBs, then construct an
FP-tree and mine it in each projected DB.
• Disk-resident FP-tree: store the FP-tree on hard disk using a B+-tree structure to
reduce I/O cost.
• FP-tree materialization: construct the FP-tree once with a low ξ, which may usually
satisfy most of the mining queries.
• Incremental update: how to update an FP-tree when there are new data?
– Reconstruct the FP-tree, or
– do not update the FP-tree.
Concluding Remarks
• FP-tree: a novel data structure storing compressed, crucial information about
frequent patterns; compact yet complete for frequent pattern mining.

• FP-growth: an efficient method for mining frequent patterns in large databases,
using a highly compact FP-tree and a divide-and-conquer approach.

Some Notes
• Association analysis has two main steps; finding the complete set of frequent
patterns is the first and more important step.

• Both Apriori and FP-Growth aim to find the complete set of frequent patterns.

• FP-Growth is more efficient and scalable than Apriori, especially for prolific
and long patterns.



Related info.
• The FP-growth method was already available in DBMiner as of 2000.

• The original paper appeared in SIGMOD 2000. An extended version was later published
as: "Mining Frequent Patterns without Candidate Generation: A Frequent-Pattern Tree
Approach", Data Mining and Knowledge Discovery, 8, 53–87, 2004, Kluwer Academic
Publishers.

• Textbook: "Data Mining: Concepts and Techniques", Chapter 6.2.4 (pages 239–243).

Exam Questions
• Q1: What are the main drawbacks of Apriori-like approaches, and why?
• A: The main disadvantages of Apriori-like approaches are:
1. It is costly to generate the candidate sets;
2. It incurs multiple scans of the database.
The reason is that Apriori is based on the following downward-closure heuristic:
if any length-k pattern is not frequent in the database, none of its length-(k+1)
super-patterns can ever be frequent.
The two steps in Apriori are candidate generation and candidate test. If the set of
frequent 1-itemsets in the database is huge, then generating the successive candidate
itemsets becomes quite costly, and so does testing them.
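The downward-closure (anti-monotone) property referred to here can be stated as:

X \subseteq Y \;\Rightarrow\; \mathrm{support}(Y) \le \mathrm{support}(X),
\quad\text{so if a length-}k\text{ pattern is infrequent, every superset of it is infrequent too.}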
Exam Questions
• Q2: What is an FP-Tree?
• Previous answer: An FP-Tree is a tree data structure that represents the database in
a compact way. It is constructed by mapping each frequency-ordered transaction onto a
path in the FP-Tree.
• My answer: An FP-Tree is an extended prefix-tree structure that represents the
transaction database in a compact and complete way. Only frequent length-1 items have
nodes in the tree, and the tree nodes are arranged so that more frequently occurring
items have better chances of sharing nodes than less frequently occurring ones. Each
transaction in the database is mapped to one path in the FP-Tree.

Exam Questions
• Q3: What is the most significant advantage of the FP-Tree? Why is the FP-Tree
complete with respect to frequent pattern mining?
• A: Efficiency: the most significant advantage of the FP-tree is that it requires
only two scans of the underlying database to construct. This efficiency is even more
apparent on databases with prolific and long patterns, or when mining frequent
patterns with a low support threshold.
• Since each transaction in the database is mapped to one path in the FP-Tree, the
frequent-itemset information of every transaction is completely stored in the FP-Tree.
Moreover, one path in the FP-Tree may represent the frequent itemsets of multiple
transactions without ambiguity, since the path representing each transaction must
start from the root of an item prefix sub-tree.
