
1

CS 43105 Data Mining Techniques


Chapter 6 Classification (1)
Xiang Lian
Department of Computer Science
Kent State University
Email: [email protected]
Homepage: http://www.cs.kent.edu/~xlian/
2

Outline
• Classification Definition
• Classification Techniques
• Decision Trees
• Practical Issues of Classification
3

A Programming Task
4

Classification: Definition
• Given a collection of records (training set)
  • Each record contains a set of attributes; one of the attributes is the class.
• Find a model for the class attribute as a function of the values of the other attributes.
• Goal: previously unseen records should be assigned a class as accurately as possible.
• A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.
5

Illustrating Classification Task


The flow: a learning algorithm is applied to the training set to learn a model (induction); the model is then applied to the test set (deduction).

Training Set:

Tid   Attrib1   Attrib2   Attrib3   Class
1     Yes       Large     125K      No
2     No        Medium    100K      No
3     No        Small     70K       No
4     Yes       Medium    120K      No
5     No        Large     95K       Yes
6     No        Medium    60K       No
7     Yes       Large     220K      No
8     No        Small     85K       Yes
9     No        Medium    75K       No
10    No        Small     90K       Yes

Test Set:

Tid   Attrib1   Attrib2   Attrib3   Class
11    No        Small     55K       ?
12    Yes       Medium    80K       ?
13    Yes       Large     110K      ?
14    No        Small     95K       ?
15    No        Large     67K       ?
6

Examples of Classification Task


• Predicting tumor cells as benign or malignant

• Classifying credit card transactions as legitimate or fraudulent

• Classifying secondary structures of protein as alpha-helix, beta-sheet, or random coil

• Categorizing news stories as finance, weather, entertainment, sports, etc.
7

Classification Using Distance


• Place items in the class to which they are “closest”
• Must determine the distance between an item and a class
• Classes can be represented by
  • Centroid: central value
  • Medoid: representative point
  • Individual points
• Algorithm: KNN
8

K Nearest Neighbor (KNN)


• Training set includes class labels
• Examine the K items nearest to the item to be classified
• The new item is placed in the class with the largest number of these close items
• O(q) for each tuple to be classified (here q is the size of the training set)
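A minimal sketch of KNN classification (assuming Euclidean distance over plain Python lists; names such as knn_classify are illustrative, not from the slides):

```python
import math
from collections import Counter

def euclidean(a, b):
    # Distance between two numeric feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(training_set, query, k=3):
    """training_set: list of (feature_vector, class_label) pairs.
    Scans all q training records, so each query costs O(q)."""
    neighbors = sorted(training_set, key=lambda rec: euclidean(rec[0], query))[:k]
    # Majority vote among the k nearest labels
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Example: two classes in 2-D
train = [((1, 1), "A"), ((1, 2), "A"), ((5, 5), "B"), ((6, 5), "B")]
print(knn_classify(train, (1.5, 1.5), k=3))  # -> "A"
```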
9

KNN
10

Classification Techniques
• Decision Tree based Methods
• Rule-based Methods
• Memory based Reasoning
• Neural Networks
• Naïve Bayes and Bayesian Belief Networks
• Support Vector Machines
11

Example of a Decision Tree


Training data (Refund and Marital Status are categorical attributes, Taxable Income is continuous, Cheat is the class):

Tid   Refund   Marital Status   Taxable Income   Cheat
1     Yes      Single           125K             No
2     No       Married          100K             No
3     No       Single           70K              No
4     Yes      Married          120K             No
5     No       Divorced         95K              Yes
6     No       Married          60K              No
7     Yes      Divorced         220K             No
8     No       Single           85K              Yes
9     No       Married          75K              No
10    No       Single           90K              Yes

Model: a decision tree whose splitting attributes are Refund, MarSt, and TaxInc:

Refund?
├─ Yes → NO
└─ No → MarSt?
        ├─ Single, Divorced → TaxInc?
        │                     ├─ < 80K → NO
        │                     └─ ≥ 80K → YES
        └─ Married → NO
12

Another Example of Decision Tree


The same training data also fits a tree that splits on MarSt first:

MarSt?
├─ Married → NO
└─ Single, Divorced → Refund?
                      ├─ Yes → NO
                      └─ No → TaxInc?
                              ├─ < 80K → NO
                              └─ ≥ 80K → YES

There could be more than one tree that fits the same data!
13

Decision Tree Classification Task


The same flow as before, instantiated for decision trees: a tree-induction algorithm learns a decision tree from the training set (Tids 1–10, induction); the tree is then applied to the test set (Tids 11–15, deduction).
14

Apply Model to Test Data


Test record:

Refund   Marital Status   Taxable Income   Cheat
No       Married          80K              ?

Start from the root of the tree and, at each internal node, follow the branch that matches the record:

1. Refund = No → take the No branch to MarSt.
2. MarSt = Married → take the Married branch, which is the leaf NO.

Assign Cheat = “No” to the test record.
20

Decision Tree Classification Task


Recap of the flow: the tree-induction algorithm learns a decision tree from the training set (induction); applying the model to the test set fills in the unknown class labels (deduction).
21

Decision Tree Induction


• Many Algorithms
• Hunt’s Algorithm (one of the earliest)
• CART
• ID3, C4.5
• SLIQ, SPRINT
22

General Structure of Hunt’s Algorithm


• Let Dt be the set of training records that reach a node t (at the root, Dt is the full Tid 1–10 training set shown above)
• General procedure, as sketched after this list:
  • If Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt
  • If Dt is an empty set, then t is a leaf node labeled by the default class, yd
  • If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets. Recursively apply the procedure to each subset.
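A minimal sketch of this recursion (assuming records are dicts with a 'class' key and a caller-supplied choose_split function, e.g. one that maximizes Gini gain; all names here are illustrative, not from the slides):

```python
from collections import Counter

def majority(records):
    """Most common class label among the records."""
    return Counter(r["class"] for r in records).most_common(1)[0][0]

def hunts(records, attributes, default_class, choose_split):
    """Grow a decision tree following Hunt's three cases.
    records: list of dicts, each with a 'class' key.
    choose_split: picks the attribute to test, e.g. by Gini gain."""
    if not records:                            # Case 2: empty -> default class
        return {"leaf": default_class}
    classes = {r["class"] for r in records}
    if len(classes) == 1:                      # Case 1: pure node -> leaf
        return {"leaf": classes.pop()}
    if not attributes:                         # No tests left -> majority class
        return {"leaf": majority(records)}
    attr = choose_split(records, attributes)   # Case 3: split and recurse
    node = {"split": attr, "children": {}}
    for value in sorted({r[attr] for r in records}):
        subset = [r for r in records if r[attr] == value]
        rest = [a for a in attributes if a != attr]
        node["children"][value] = hunts(subset, rest, majority(records), choose_split)
    return node
```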
23

Hunt’s Algorithm
Applied to the Tid 1–10 training data (class attribute: Cheat), the tree grows in stages:

1. Start with a single leaf predicting the majority class, Don’t Cheat.
2. Split on Refund: Yes → Don’t Cheat; No → Don’t Cheat (the No branch is still impure).
3. Under Refund = No, split on Marital Status: Married → Don’t Cheat; Single, Divorced → Cheat (still impure).
4. Under Single, Divorced, split on Taxable Income: < 80K → Don’t Cheat; >= 80K → Cheat.
24

Tree Induction
• Greedy strategy
• Split the records based on an attribute test that optimizes a certain criterion

• Issues
• Determine how to split the records
• How to specify the attribute test condition?
• How to determine the best split?
• Determine when to stop splitting
25

Tree Induction
• Greedy strategy
• Split the records based on an attribute test that optimizes a certain criterion

• Issues
• Determine how to split the records
• How to specify the attribute test condition?
• How to determine the best split?
• Determine when to stop splitting
26

How to Specify Test Condition?


• Depends on attribute types
• Nominal
• Ordinal
• Continuous

• Depends on number of ways to split


• 2-way split
• Multi-way split
27

Splitting Based on Nominal Attributes


• Multi-way split: use as many partitions as there are distinct values.

  CarType? → Family | Sports | Luxury

• Binary split: divides values into two subsets; need to find the optimal partitioning.

  CarType? → {Sports, Luxury} | {Family}   OR   CarType? → {Family, Luxury} | {Sports}
28

Splitting Based on Ordinal Attributes


• Multi-way split: use as many partitions as there are distinct values.

  Size? → Small | Medium | Large

• Binary split: divides values into two subsets; need to find the optimal partitioning.

  Size? → {Small, Medium} | {Large}   OR   Size? → {Small} | {Medium, Large}

• What about this split? Size? → {Small, Large} | {Medium} (it does not preserve the order of the ordinal values)
29

Splitting Based on Continuous Attributes


• Different ways of handling
  • Discretization to form an ordinal categorical attribute
    • Static – discretize once at the beginning
    • Dynamic – ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering
  • Binary decision: (A < v) or (A ≥ v)
    • Consider all possible splits and find the best cut
    • Can be more compute-intensive
30

Splitting Based on Continuous Attributes

(i) Binary split:    Taxable Income > 80K? → Yes | No
(ii) Multi-way split: Taxable Income? → < 10K | [10K, 25K) | [25K, 50K) | [50K, 80K) | > 80K


31

Tree Induction
• Greedy strategy
• Split the records based on an attribute test that optimizes a certain criterion

• Issues
• Determine how to split the records
• How to specify the attribute test condition?
• How to determine the best split?
• Determine when to stop splitting
32

How to determine the Best Split


Before splitting: 10 records of class C0, 10 records of class C1. Three candidate test conditions:

Own Car?     Yes: C0=6, C1=4    No: C0=4, C1=6
Car Type?    Family: C0=1, C1=3    Sports: C0=8, C1=0    Luxury: C0=1, C1=7
Student ID?  c1: C0=1, C1=0 ... c10: C0=1, C1=0    c11: C0=0, C1=1 ... c20: C0=0, C1=1

Which test condition is the best?


33

How to determine the Best Split


• Greedy approach: nodes with a homogeneous class distribution are preferred
• Need a measure of node impurity:

  C0=5, C1=5: non-homogeneous, high degree of impurity
  C0=9, C1=1: homogeneous, low degree of impurity
34

Measures of Node Impurity


• Gini Index

• Entropy

• Misclassification error
35

How to Find the Best Split


Before splitting, the node has counts C0 = N00 and C1 = N01, with impurity measure M0. Candidate test A? splits it into nodes N1 (C0 = N10, C1 = N11) and N2 (C0 = N20, C1 = N21), whose impurities M1 and M2 combine (weighted by node size) into M12. Candidate test B? splits it into N3 and N4, with impurities M3 and M4 combining into M34.

Gain = M0 − M12 for A vs. M0 − M34 for B: choose the test with the larger gain.
36

Measure of Impurity: GINI


• Gini index for a given node t:

  GINI(t) = 1 - \sum_j [p(j|t)]^2

  (NOTE: p(j|t) is the relative frequency of class j at node t.)

• Maximum (1 − 1/n_c) when records are equally distributed among all n_c classes, implying the least interesting information
• Minimum (0.0) when all records belong to one class, implying the most interesting information

  C1=0, C2=6: Gini = 0.000
  C1=1, C2=5: Gini = 0.278
  C1=2, C2=4: Gini = 0.444
  C1=3, C2=3: Gini = 0.500
37

Examples for computing GINI


GINI(t) = 1 - \sum_j [p(j|t)]^2

C1=0, C2=6:  P(C1) = 0/6 = 0,  P(C2) = 6/6 = 1
  Gini = 1 − P(C1)² − P(C2)² = 1 − 0 − 1 = 0

C1=1, C2=5:  P(C1) = 1/6,  P(C2) = 5/6
  Gini = 1 − (1/6)² − (5/6)² = 0.278

C1=2, C2=4:  P(C1) = 2/6,  P(C2) = 4/6
  Gini = 1 − (2/6)² − (4/6)² = 0.444
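These numbers fall out of a small helper (a minimal sketch in plain Python; the name gini is ours, not from the slides):

```python
def gini(counts):
    """Gini index of a node, given its per-class record counts."""
    n = sum(counts)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in counts)

print(gini([0, 6]))  # 0.0
print(gini([1, 5]))  # 0.2777...
print(gini([2, 4]))  # 0.4444...
```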
Splitting Based on GINI
• Used in CART, SLIQ, SPRINT.
• When a node p is split into k partitions (children), the quality of the split is computed as

  GINI_split = \sum_{i=1}^{k} (n_i / n) GINI(i)

  where n_i = number of records at child i, and n = number of records at node p.

38
Binary Attributes: Computing GINI Index
• Splits into two partitions
• Effect of weighing partitions: larger and purer partitions are sought

Parent: C1 = 6, C2 = 6, Gini = 0.500. Split B? sends 7 records to node N1 and 5 to node N2:

        N1   N2
  C1    5    1
  C2    2    4

Gini(N1) = 1 − (5/7)² − (2/7)² = 0.408
Gini(N2) = 1 − (1/5)² − (4/5)² = 0.320
Gini(Children) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371
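A sketch of the weighted computation, reusing the gini helper above (gini_split is an illustrative name):

```python
def gini_split(children):
    """Weighted Gini of a split; children is a list of per-class count lists."""
    n = sum(sum(c) for c in children)
    return sum(sum(c) / n * gini(c) for c in children)

# N1 holds (C1=5, C2=2); N2 holds (C1=1, C2=4)
print(gini([5, 2]))                  # 0.408...
print(gini([1, 4]))                  # 0.32
print(gini_split([[5, 2], [1, 4]]))  # 0.371...
```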
40

Categorical Attributes: Computing Gini Index
• For each distinct value, gather counts for each class in the dataset
• Use the count matrix to make decisions

Multi-way split:

  CarType   Family  Sports  Luxury
  C1        1       2       1
  C2        4       1       1
  Gini = 0.393

Two-way splits (find the best partition of values):

  CarType   {Sports, Luxury}  {Family}
  C1        3                 1
  C2        2                 4
  Gini = 0.400

  CarType   {Sports}  {Family, Luxury}
  C1        2         2
  C2        1         5
  Gini = 0.419
Continuous Attributes: Computing Gini Index
• Use binary decisions based on one value, e.g. Taxable Income > 80K? → Yes | No (using the Tid 1–10 data above)
• Several choices for the splitting value
  • Number of possible splitting values = number of distinct values
• Each splitting value v has a count matrix associated with it
  • Class counts in each of the partitions, A < v and A ≥ v
• Simple method to choose the best v:
  • For each v, scan the database to gather the count matrix and compute its Gini index
  • Computationally inefficient! Repetition of work.
42

Continuous Attributes: Computing Gini Index...

• For efficient computation, for each attribute:
  • Sort the attribute on its values
  • Linearly scan these values, each time updating the count matrix and computing the Gini index
  • Choose the split position that has the least Gini index

Cheat labels:           No   No   No   Yes  Yes  Yes  No   No   No   No
Sorted Taxable Income:  60   70   75   85   90   95   100  120  125  220
Split positions (midpoints): 55  65  72  80  87  92  97  110  122  172  230

Class counts (≤ | >) and Gini at each split position:

  Position  55    65    72    80    87    92    97    110   122   172   230
  Yes       0|3   0|3   0|3   0|3   1|2   2|1   3|0   3|0   3|0   3|0   3|0
  No        0|7   1|6   2|5   3|4   3|4   3|4   3|4   4|3   5|2   6|1   7|0
  Gini      0.420 0.400 0.375 0.343 0.417 0.400 0.300 0.343 0.375 0.400 0.420

The best split is at position 97, with Gini = 0.300.
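A sketch of that sorted linear scan (reusing the gini helper above; best_numeric_split and the midpoint convention are ours):

```python
def best_numeric_split(values, labels):
    """Best binary split A <= v: one sort, then one scan with incremental counts.
    Returns (threshold, weighted_gini)."""
    pairs = sorted(zip(values, labels))
    left = {lab: 0 for lab in set(labels)}
    right = {lab: labels.count(lab) for lab in set(labels)}
    n = len(labels)
    best = (None, 1.0)
    for i in range(n - 1):
        value, lab = pairs[i]
        left[lab] += 1                      # move record i into the left partition
        right[lab] -= 1
        if value == pairs[i + 1][0]:
            continue                        # cannot split between equal values
        k = i + 1
        w = (k / n) * gini(list(left.values())) + ((n - k) / n) * gini(list(right.values()))
        if w < best[1]:
            best = ((value + pairs[i + 1][0]) / 2, w)
    return best

income = [60, 70, 75, 85, 90, 95, 100, 120, 125, 220]
cheat = ["No", "No", "No", "Yes", "Yes", "Yes", "No", "No", "No", "No"]
print(best_numeric_split(income, cheat))  # -> (97.5, 0.3)
```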
43

Alternative Splitting Criteria Based on INFO
• Entropy at a given node t:

  Entropy(t) = - \sum_j p(j|t) \log p(j|t)

  (NOTE: p(j|t) is the relative frequency of class j at node t.)

• Measures the homogeneity of a node.
  • Maximum (log n_c) when records are equally distributed among all classes, implying the least information
  • Minimum (0.0) when all records belong to one class, implying the most information
• Entropy-based computations are similar to the GINI index computations
44

Examples for computing Entropy

  Entropy(t) = - \sum_j p(j|t) \log_2 p(j|t)

C1=0, C2=6:  P(C1) = 0/6 = 0,  P(C2) = 6/6 = 1
  Entropy = − 0 log 0 − 1 log 1 = − 0 − 0 = 0

C1=1, C2=5:  P(C1) = 1/6,  P(C2) = 5/6
  Entropy = − (1/6) log₂(1/6) − (5/6) log₂(5/6) = 0.65

C1=2, C2=4:  P(C1) = 2/6,  P(C2) = 4/6
  Entropy = − (2/6) log₂(2/6) − (4/6) log₂(4/6) = 0.92
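The same numbers from a small sketch (the entropy helper is ours, with the 0 log 0 = 0 convention):

```python
import math

def entropy(counts):
    """Entropy of a node from per-class counts (0 log 0 taken as 0)."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

print(entropy([0, 6]))  # 0.0
print(entropy([1, 5]))  # 0.650...
print(entropy([2, 4]))  # 0.918...
```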
Splitting Based on INFO...
• Information Gain:

  GAIN_split = Entropy(p) - \sum_{i=1}^{k} (n_i / n) Entropy(i)

  (parent node p is split into k partitions; n_i is the number of records in partition i)

• Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN).
• Used in ID3 and C4.5
• Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.

45
Splitting Based on INFO...
• Gain Ratio:

  GainRATIO_split = GAIN_split / SplitINFO,   SplitINFO = - \sum_{i=1}^{k} (n_i / n) \log (n_i / n)

  (parent node p is split into k partitions; n_i is the number of records in partition i)

• Adjusts Information Gain by the entropy of the partitioning (SplitINFO). Higher-entropy partitioning (a large number of small partitions) is penalized!
• Used in C4.5
• Designed to overcome the disadvantage of Information Gain
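Both criteria in a short sketch, reusing the entropy helper above (note that SplitINFO is just the entropy of the partition sizes; the function names are ours):

```python
def info_gain(parent_counts, children_counts):
    """Reduction in entropy from splitting a parent into the given children."""
    n = sum(parent_counts)
    weighted = sum(sum(c) / n * entropy(c) for c in children_counts)
    return entropy(parent_counts) - weighted

def gain_ratio(parent_counts, children_counts):
    """Information gain adjusted by the entropy of the partition sizes."""
    split_info = entropy([sum(c) for c in children_counts])
    return info_gain(parent_counts, children_counts) / split_info

# Splitting a (3 Yes, 3 No) parent into two pure children
print(info_gain([3, 3], [[3, 0], [0, 3]]))   # 1.0
print(gain_ratio([3, 3], [[3, 0], [0, 3]]))  # 1.0
```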
46
47

Splitting Criteria Based on Classification Error
• Classification error at a node t:

  Error(t) = 1 - \max_i P(i|t)

• Measures the misclassification error made by a node.
  • Maximum (1 − 1/n_c) when records are equally distributed among all classes, implying the least interesting information
  • Minimum (0.0) when all records belong to one class, implying the most interesting information
48

Examples for Computing Error

  Error(t) = 1 - \max_i P(i|t)

C1=0, C2=6:  P(C1) = 0/6 = 0,  P(C2) = 6/6 = 1
  Error = 1 − max(0, 1) = 1 − 1 = 0

C1=1, C2=5:  P(C1) = 1/6,  P(C2) = 5/6
  Error = 1 − max(1/6, 5/6) = 1 − 5/6 = 1/6

C1=2, C2=4:  P(C1) = 2/6,  P(C2) = 4/6
  Error = 1 − max(2/6, 4/6) = 1 − 4/6 = 1/3
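And the corresponding one-liner (the name is ours):

```python
def classification_error(counts):
    """Misclassification error of a node from per-class counts."""
    return 1 - max(counts) / sum(counts)

print(classification_error([0, 6]))  # 0.0
print(classification_error([1, 5]))  # 0.1666... (1/6)
print(classification_error([2, 4]))  # 0.3333... (1/3)
```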
49

Comparison among Splitting Criteria


For a 2-class problem, the figure (not reproduced here) plots all three measures against the fraction p of records in one class: each is maximized at p = 0.5 and zero at p = 0 or p = 1.
50

Misclassification Error vs Gini


Parent: C1 = 7, C2 = 3, Gini = 0.42. Split A? sends 3 records to node N1 and 7 to node N2:

        N1   N2
  C1    3    4
  C2    0    3

Gini(N1) = 1 − (3/3)² − (0/3)² = 0
Gini(N2) = 1 − (4/7)² − (3/7)² = 0.489
Gini(Children) = 3/10 × 0 + 7/10 × 0.489 = 0.342

Gini improves (0.342 < 0.42), but the misclassification error does not: it is 3/10 both before and after the split.
51

Tree Induction
• Greedy strategy.
• Split the records based on an attribute test that optimizes a certain criterion.

• Issues
• Determine how to split the records
• How to specify the attribute test condition?
• How to determine the best split?
• Determine when to stop splitting
52

Stopping Criteria for Tree Induction


• Stop expanding a node when all the records
belong to the same class

• Stop expanding a node when all the records have


similar attribute values

• Early termination (to be discussed later)


53

Decision Tree Based Classification


• Advantages:
• Inexpensive to construct
• Extremely fast at classifying unknown records
• Easy to interpret for small-sized trees
• Accuracy is comparable to other classification
techniques for many simple data sets
54

Example: C4.5
• Simple depth-first construction.
• Uses Information Gain
• Sorts Continuous Attributes at each node
• Needs entire data to fit in memory
• Unsuitable for Large Datasets
• Needs out-of-core sorting
55

Practical Issues of Classification


• Underfitting and Overfitting

• Missing Values

• Costs of Classification
56

Underfitting and Overfitting (Example)

500 circular and 500 triangular data points.

Circular points: 0.5 ≤ sqrt(x1² + x2²) ≤ 1

Triangular points: sqrt(x1² + x2²) > 1 or sqrt(x1² + x2²) < 0.5
57

Underfitting and Overfitting


(The figure plotted training and test error against the number of tree nodes: past a certain size the test error rises again while the training error keeps falling, i.e. overfitting.)

Underfitting: when the model is too simple, both training and test errors are large.
58

Overfitting due to Noise

Decision boundary is distorted by a noise point.


59

Overfitting due to Insufficient Examples

Lack of data points in the lower half of the diagram makes it difficult to correctly predict the class labels of that region:
- an insufficient number of training records in the region causes the decision tree to predict the test examples using other training records that are irrelevant to the classification task.
60

Notes on Overfitting
• Overfitting results in decision trees that are more
complex than necessary

• Training error no longer provides a good estimate


of how well the tree will perform on previously
unseen records

• Need new ways for estimating errors


61

Estimating Generalization Errors


• Re-substitution errors: error on the training set ( e(t) )
• Generalization errors: error on the test set ( e’(t) )
• Methods for estimating generalization errors:
  • Optimistic approach: e’(t) = e(t)
  • Pessimistic approach:
    • For each leaf node: e’(t) = e(t) + 0.5
    • Total errors: e’(T) = e(T) + N × 0.5 (N: number of leaf nodes)
    • For a tree with 30 leaf nodes and 10 errors on training (out of 1000 instances):
      Training error = 10/1000 = 1%
      Generalization error = (10 + 30 × 0.5)/1000 = 2.5%
  • Reduced error pruning (REP):
    • Uses a validation data set to estimate generalization error
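The pessimistic estimate as a one-line sketch (names are ours):

```python
def pessimistic_error(train_errors, num_leaves, num_instances, penalty=0.5):
    """Training errors plus a per-leaf complexity penalty, as a rate."""
    return (train_errors + num_leaves * penalty) / num_instances

print(pessimistic_error(10, 30, 1000))  # 0.025, i.e. 2.5%
```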
62

Occam’s Razor
• Given two models of similar generalization errors,
one should prefer the simpler model over the
more complex model

• For complex models, there is a greater chance that the model was fitted accidentally by errors in the data

• Therefore, one should include model complexity


when evaluating a model
63

Minimum Description Length (MDL)


(In the figure, party A holds records (X, y) while party B holds only the X values; A transmits an encoded model, e.g. a decision tree, plus its misclassified records, so that B can recover every y.)

• Cost(Model, Data) = Cost(Data|Model) + Cost(Model)
  • Cost is the number of bits needed for encoding.
  • Search for the least costly model.
• Cost(Data|Model) encodes the misclassification errors.
• Cost(Model) uses node encoding (number of children) plus splitting condition encoding.
64

How to Address Overfitting


• Pre-Pruning (Early Stopping Rule)
• Stop the algorithm before it becomes a fully-grown tree
• Typical stopping conditions for a node:
• Stop if all instances belong to the same class
• Stop if all the attribute values are the same
  • More restrictive conditions:
    • Stop if the number of instances is less than some user-specified threshold
    • Stop if the class distribution of the instances is independent of the available features (e.g., using a χ² test)
    • Stop if expanding the current node does not improve impurity measures (e.g., Gini or information gain).
65

How to Address Overfitting…


• Post-pruning
• Grow the decision tree in its entirety
• Trim the nodes of the decision tree in a bottom-up
fashion
• If generalization error improves after trimming, replace
sub-tree by a leaf node.
• Class label of leaf node is determined from majority
class of instances in the sub-tree
• Can use MDL for post-pruning
66

Example of Post-Pruning
Root node (before splitting): Class = Yes: 20, Class = No: 10
  Training error (before splitting) = 10/30
  Pessimistic error = (10 + 0.5)/30 = 10.5/30

Split on A into A1–A4:

             A1   A2   A3   A4
  Class=Yes   8    3    4    5
  Class=No    4    4    1    1

  Training error (after splitting) = 9/30
  Pessimistic error (after splitting) = (9 + 4 × 0.5)/30 = 11/30

Since 11/30 > 10.5/30: PRUNE!
67

Examples of Post-Pruning
Case 1: left child C0: 11, C1: 3; right child C0: 2, C1: 4
Case 2: left child C0: 14, C1: 3; right child C0: 2, C1: 2

• Optimistic error? Don’t prune in either case.
• Pessimistic error? Don’t prune case 1, prune case 2.
• Reduced error pruning? Depends on the validation set.
68

Handling Missing Attribute Values


• Missing values affect decision tree construction in
three different ways:
• Affects how impurity measures are computed
• Affects how to distribute instance with missing value to
child nodes
• Affects how a test instance with missing value is
classified
69

Computing Impurity Measure


Training data: the Tid 1–10 records, except that Refund is missing for Tid 10 (Refund = ?, Single, 90K, class Yes).

Before splitting: Entropy(Parent) = −0.3 log(0.3) − 0.7 log(0.7) = 0.8813

              Class=Yes   Class=No
  Refund=Yes  0           3
  Refund=No   2           4
  Refund=?    1           0

Split on Refund:
  Entropy(Refund=Yes) = 0
  Entropy(Refund=No) = −(2/6) log(2/6) − (4/6) log(4/6) = 0.9183
  Entropy(Children) = 0.3 (0) + 0.6 (0.9183) = 0.551

Gain = 0.9 × (0.8813 − 0.551) = 0.3303
(the factor 0.9 is the fraction of records whose Refund value is known)
70

Distribute Instances

Record Tid 10 (Refund = ?, Single, 90K, class Yes) must be sent down the Refund split. Among the 9 records with a known Refund value, 3 have Refund = Yes and 6 have Refund = No:

  Probability that Refund = Yes is 3/9
  Probability that Refund = No is 6/9

Assign the record to the left (Yes) child with weight 3/9 and to the right (No) child with weight 6/9, giving the class counts:

  Refund = Yes:  Class=Yes 0 + 3/9,  Class=No 3
  Refund = No:   Class=Yes 2 + 6/9,  Class=No 4
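A sketch of this fractional-weight bookkeeping (assuming records are dicts carrying an optional 'weight' field defaulting to 1, and '?' marks a missing value; all names are ours):

```python
def distribute(records, attr):
    """Partition records on attr; a record with a missing value is sent to
    every child, weighted by the observed frequency of that child's value."""
    known = [r for r in records if r[attr] != "?"]
    total = sum(r.get("weight", 1) for r in known)
    children = {}
    for r in known:
        children.setdefault(r[attr], []).append(dict(r))
    missing = [r for r in records if r[attr] == "?"]
    for value, subset in children.items():
        frac = sum(r.get("weight", 1) for r in subset) / total  # e.g. 3/9 or 6/9
        for r in missing:
            copy = dict(r)
            copy["weight"] = r.get("weight", 1) * frac
            subset.append(copy)
    return children
```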
71

Classify Instances

New record: Tid 11, Refund = No, Marital Status = ?, Taxable Income = 85K, Class = ?

Following the tree, Refund = No leads to the MarSt node, whose weighted class counts (including the fractional Tid 10 record) are:

              Married   Single   Divorced   Total
  Class=No    3         1        0          4
  Class=Yes   6/9       1        1          2.67
  Total       3.67      2        1          6.67

  Probability that Marital Status = Married is 3.67/6.67
  Probability that Marital Status = {Single, Divorced} is 3/6.67

The record is sent down both branches with these weights, and the leaf predictions are combined accordingly.
72

Scalable Decision Tree Induction Methods

• SLIQ (EDBT’96 — Mehta et al.)


• Builds an index for each attribute; only the class list and the current attribute list reside in memory
• SPRINT (VLDB’96 — J. Shafer et al.)
• Constructs an attribute list data structure
• PUBLIC (VLDB’98 — Rastogi & Shim)
• Integrates tree splitting and tree pruning: stop growing the tree earlier
• RainForest (VLDB’98 — Gehrke, Ramakrishnan & Ganti)
• Builds an AVC-list (attribute, value, class label)
• BOAT (PODS’99 — Gehrke, Ganti, Ramakrishnan & Loh)
• Uses bootstrapping to create several small samples
