Decision Trees

CSE 413
Presented by: Shahriar Parvej
Slide by: Jeff Storey
Overview
 What is a Decision Tree
 Sample Decision Trees
 How to Construct a Decision Tree
 Problems with Decision Trees
 Decision Trees in Gaming
 Summary
Classification: Definition
 Given a collection of records (the training set)
  Each record contains a set of attributes; one of the attributes is the class.
 Find a model for the class attribute as a function of the values of the other attributes.
 Goal: previously unseen records should be assigned a class as accurately as possible.
 A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.
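A minimal Python sketch of the train/test split just described; the record layout (a list of (attributes, label) pairs) and the classifier interface are illustrative assumptions, not part of the slides.

import random

def train_test_split(records, test_fraction=0.3, seed=42):
    # Shuffle the labelled records and split them into a training and a test set.
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def accuracy(model, test_set):
    # Fraction of test records whose predicted class matches the true class.
    correct = sum(1 for attributes, label in test_set if model(attributes) == label)
    return correct / len(test_set)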
Illustrating Classification Task
[Diagram: the training set is fed to a learning algorithm (induction) to learn a model; the model is then applied to the test set (deduction).]

Training Set:
Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

Test Set:
Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?
Examples of Classification Task
 Predicting tumor cells as benign or malignant
 Classifying credit card transactions as legitimate or fraudulent
 Classifying secondary structures of protein as alpha-helix, beta-sheet, or random coil
 Categorizing news stories as finance, weather, entertainment, sports, etc.
Classification Using Distance
 Place items in the class to which they are “closest”.
 Must determine the distance between an item and a class.
 Classes can be represented by
  Centroid: central value.
  Medoid: representative point.
  Individual points
 Algorithm: KNN
K Nearest Neighbor (KNN):
 Training set includes classes.
 Examine the K items nearest to the item to be classified.
 The new item is placed in the class with the largest number of those close items.
 O(q) for each tuple to be classified (where q is the size of the training set).
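A minimal sketch of the KNN rule above, assuming numeric attribute vectors and Euclidean distance (both illustrative choices):

import math
from collections import Counter

def knn_classify(training_set, item, k=3):
    # training_set: list of (attribute_vector, class_label) pairs.
    # Classifying one item scans all q training tuples, hence O(q).
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Take the k nearest neighbours and vote on the class.
    nearest = sorted(training_set, key=lambda pair: distance(pair[0], item))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]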
KNN
Classification Techniques
 Decision Tree based Methods
 Rule-based Methods
 Memory based reasoning
 Neural Networks
 Naïve Bayes and Bayesian Belief
Networks
 Support Vector Machines
What is a Decision Tree?
 An inductive learning task
  Use particular facts to make more generalized conclusions
 A predictive model based on a branching series of Boolean tests
  These smaller Boolean tests are less complex than a one-stage classifier
 Let’s look at a sample decision tree…
Predicting Commute Time
[Decision tree: the root tests Leave At. 8 AM → Long; 9 AM → Accident? (No → Medium, Yes → Long); 10 AM → Stall? (No → Short, Yes → Long).]

If we leave at 10 AM and there are no cars stalled on the road, what will our commute time be?
Inductive Learning
 In this decision tree, we made a series of Boolean decisions and followed the corresponding branch:
  Did we leave at 10 AM?
  Did a car stall on the road?
  Is there an accident on the road?
 By answering each of these yes/no questions, we then came to a conclusion on how long our commute might take
Decision Trees as Rules
 We did not have to represent this tree graphically
 We could have represented it as a set of rules. However, this may be much harder to read…
Decision Tree as a Rule Set
if hour == 8am
    commute time = long
else if hour == 9am
    if accident == yes
        commute time = long
    else
        commute time = medium
else if hour == 10am
    if stall == yes
        commute time = long
    else
        commute time = short

 Notice that not all attributes have to be used in each path of the decision.
 As we will see, all attributes may not even appear in the tree.
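The same rule set, transcribed as a small Python function (attribute values are passed in as strings; this is just a sketch of the rules above):

def commute_time(hour, accident, stall):
    if hour == "8am":
        return "long"
    elif hour == "9am":
        return "long" if accident == "yes" else "medium"
    else:  # hour == "10am"
        return "long" if stall == "yes" else "short"

print(commute_time("10am", accident="no", stall="no"))  # -> short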
How to Create a Decision Tree
 We first make a list of attributes that we can measure
  These attributes (for now) must be discrete
 We then choose a target attribute that we want to predict
 Then create an experience table that lists what we have seen in the past
Sample Experience Table
Example  Hour  Weather  Accident  Stall  Commute (Target)
D1 8 AM Sunny No No Long
D2 8 AM Cloudy No Yes Long
D3 10 AM Sunny No No Short
D4 9 AM Rainy Yes No Long
D5 9 AM Sunny Yes Yes Long
D6 10 AM Sunny No No Short
D7 10 AM Cloudy No No Short
D8 9 AM Rainy No No Medium
D9 9 AM Sunny Yes No Long
D10 10 AM Cloudy Yes Yes Long
D11 10 AM Rainy No No Short
D12 8 AM Cloudy Yes No Long
D13 9 AM Sunny No No Medium
Example of a Decision Tree
(Attribute types: Refund and Marital Status are categorical, Taxable Income is continuous, Cheat is the class.)

Training Data:
Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Model: Decision Tree (splitting attributes Refund, MarSt, TaxInc)
Refund = Yes → NO
Refund = No → MarSt
  MarSt = Married → NO
  MarSt = Single or Divorced → TaxInc
    TaxInc < 80K → NO
    TaxInc > 80K → YES
Another Example of Decision Tree

(Same training data as before.)

MarSt = Married → NO
MarSt = Single or Divorced → Refund
  Refund = Yes → NO
  Refund = No → TaxInc
    TaxInc < 80K → NO
    TaxInc > 80K → YES

There could be more than one tree that fits the same data!
Decision Tree Classification Task

[Diagram: the training set (Tid 1-10, as in the earlier classification-task figure) is fed to a tree induction algorithm to learn a model (a decision tree); applying the model to the test set (Tid 11-15) deduces their classes.]
Apply Model to Test Data
Start from the root of the tree.

Test Data:
Refund  Marital Status  Taxable Income  Cheat
No      Married         80K             ?

[Tree, as before: Refund (Yes → NO; No → MarSt); MarSt (Married → NO; Single, Divorced → TaxInc); TaxInc (< 80K → NO, > 80K → YES)]
Apply Model to Test Data

 Refund = No, so follow the No branch from the root to the MarSt node.
 Marital Status = Married, so follow the Married branch.
 The Married branch ends in a NO leaf, so we assign Cheat to “No”.
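The traversal above, sketched as a hard-coded Python function; the record layout is an illustrative assumption, and how a value of exactly 80K is handled on the income branch is also an assumption of this sketch:

def predict_cheat(record):
    if record["Refund"] == "Yes":
        return "No"
    if record["Marital Status"] == "Married":
        return "No"
    # Single or Divorced: fall through to the Taxable Income test.
    return "No" if record["Taxable Income"] < 80_000 else "Yes"  # boundary at 80K assumed

test_record = {"Refund": "No", "Marital Status": "Married", "Taxable Income": 80_000}
print(predict_cheat(test_record))  # -> No, matching the walk-through above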
Decision Tree Classification Task

[Same induction/deduction diagram as before: the tree induction algorithm learns the decision tree from the training set, and the model is then applied to the test set.]
Choosing Attributes
 The previous experience table showed 4 attributes: hour, weather, accident and stall
 But the decision tree only showed 3
attributes: hour, accident and stall
 Why is that?
Choosing Attributes
 Methods for selecting attributes (which
will be described later) show that
weather is not a discriminating
attribute
 We use the principle of Occam’s
Razor: Given a number of competing
hypotheses, the simplest one is
preferable
Choosing Attributes
 The basic structure of creating a
decision tree is the same for most
decision tree algorithms
 The difference lies in how we select
the attributes for the tree
 We will focus on the ID3 algorithm
developed by Ross Quinlan in 1975
Decision Tree Induction
 Many Algorithms:
 Hunt’s Algorithm
 CART
 ID3, C4.5
 SLIQ, SPRINT
Decision Tree Algorithms
 The basic idea behind any decision tree
algorithm is as follows:
 Choose the best attribute(s) to split the
remaining instances and make that attribute a
decision node
 Repeat this process recursively for each child (see the sketch after this list)
 Stop when:
 All the instances have the same target attribute value
 There are no more attributes
 There are no more instances
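A minimal recursive skeleton of this process (nested dicts for internal nodes, plain labels for leaves); choose_attribute is a placeholder for a selection heuristic such as the ID3 entropy criterion described later:

from collections import Counter

def build_tree(instances, attributes, target,
               choose_attribute=lambda insts, attrs, tgt: attrs[0]):  # naive placeholder
    labels = [inst[target] for inst in instances]
    if len(set(labels)) == 1:
        return labels[0]                             # all instances share the target value
    if not attributes:
        return Counter(labels).most_common(1)[0][0]  # no attributes left: majority class
    best = choose_attribute(instances, attributes, target)
    node = {best: {}}
    # Branch only on values that actually occur, so every child has instances.
    for value in set(inst[best] for inst in instances):
        subset = [inst for inst in instances if inst[best] == value]
        remaining = [a for a in attributes if a != best]
        node[best][value] = build_tree(subset, remaining, target, choose_attribute)
    return node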
Identifying the Best Attributes
 Refer back to our original decision tree
Leave At

10 AM 9 AM
8 AM

Stall? Accident?
Long
No Yes No Yes
Short Long Medium Long

 How did we know to split on leave at and then on stall and accident and
not weather?
ID3 Heuristic
 To determine the best attribute, we
look at the ID3 heuristic
 ID3 splits attributes based on their
entropy.
 Entropy is a measure of uncertainty (disorder) in the data…
Entropy
 Entropy is minimized when all values of the target attribute are the same.
  If we know that commute time will always be short, then entropy = 0
 Entropy is maximized when there is an equal chance of all values for the target attribute (i.e. the result is random)
  If commute time = short in 3 instances, medium in 3 instances, and long in 3 instances, entropy is maximized
Entropy
 Calculation of entropy
 Entropy(S) = Σ (i = 1 to l) −(|Si| / |S|) * log2(|Si| / |S|)
  S = the set of examples
  Si = the subset of S with value vi under the target attribute
  l = the size of the range of the target attribute
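A direct Python transcription of this formula (a sketch; examples are assumed to be dicts keyed by attribute name):

import math
from collections import Counter

def entropy(examples, target):
    # Entropy(S) = sum over target values of -(|Si|/|S|) * log2(|Si|/|S|)
    total = len(examples)
    counts = Counter(ex[target] for ex in examples)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())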
ID3
 ID3 splits on the attribute with the lowest expected entropy
 We calculate the expected entropy of an attribute as the weighted sum of subset entropies:
  Σ (i = 1 to k) (|Si| / |S|) * Entropy(Si), where k is the size of the range of the attribute we are testing
 We can also measure information gain (the reduction in entropy produced by the split):
  Entropy(S) − Σ (i = 1 to k) (|Si| / |S|) * Entropy(Si)
ID3
 Given our commute time sample set, we
can calculate the entropy of each attribute
at the root node

Attribute  Expected Entropy  Information Gain
Hour       0.6511            0.768449
Weather    1.28884           0.130719
Accident   0.92307           0.496479
Stall      1.17071           0.248842
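These numbers can be reproduced with the entropy() sketch from the previous slide (the helper names here are illustrative):

def expected_entropy(examples, attribute, target):
    # Weighted sum of subset entropies over the values of the attribute.
    total = len(examples)
    result = 0.0
    for value in set(ex[attribute] for ex in examples):
        subset = [ex for ex in examples if ex[attribute] == value]
        result += (len(subset) / total) * entropy(subset, target)
    return result

def information_gain(examples, attribute, target):
    return entropy(examples, target) - expected_entropy(examples, attribute, target)

# Run over the 13-row experience table (Hour, Weather, Accident, Stall -> Commute):
# Hour has the lowest expected entropy (~0.65) and the highest gain (~0.77),
# so ID3 splits the root on Hour, as in the sample decision tree.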


Pruning Trees
 There is another technique for
reducing the number of attributes used
in a tree - pruning
 Two types of pruning:
 Pre-pruning (forward pruning)
 Post-pruning (backward pruning)
Prepruning
 In prepruning, we decide during the building
process when to stop adding attributes
(possibly based on their information gain)

 However, this may be problematic – Why?


 Sometimes attributes individually do not
contribute much to a decision, but combined,
they may have a significant impact
Postpruning
 Postpruning waits until the full decision tree has been built and then prunes the attributes
 Two techniques:
 Subtree Replacement
 Subtree Raising
Subtree Replacement
 Entire subtree is replaced by a single
leaf node
[Diagram: node A has children C, 4, and 5; subtree C has leaves 1, 2, and 3.]
Subtree Replacement
 Node 6 replaced the subtree
 Generalizes the tree a little more, but may increase accuracy

[Diagram: node A now has children 6, 4, and 5 (leaf 6 replaces subtree C).]
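A minimal sketch of subtree replacement on a nested-dict tree like the ones built earlier; the path argument and the choice of the replacement label (typically the majority class of the training examples reaching that subtree) are assumptions of this sketch:

def replace_subtree(tree, path, leaf_label):
    # path alternates attribute, value, attribute, value, ... down to the subtree to prune.
    attribute, value = path[0], path[1]
    if len(path) == 2:
        tree[attribute][value] = leaf_label   # collapse the whole subtree into one leaf
    else:
        replace_subtree(tree[attribute][value], path[2:], leaf_label)
    return tree

# e.g. replace_subtree(commute_tree, ["Leave At", "9 AM"], "Long") would turn the
# Accident? subtree into a single "Long" leaf (commute_tree is hypothetical here).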
Subtree Raising
 Entire subtree is raised onto another
node
[Diagram: node A has children C, 4, and 5; subtree C has leaves 1, 2, and 3.]
Subtree Raising
 The entire subtree is raised onto another node
 This was not discussed in detail, as it is not clear whether it is really worthwhile (it is very time consuming)

[Diagram: subtree C has been raised, so node A now has children 1, 2, and 3.]
Problems with ID3
 ID3 is not optimal
 Uses expected entropy reduction, not
actual reduction
 Must use discrete (or discretized)
attributes
 What if we left for work at 9:30 AM?
 We could break down the attributes into
smaller values…
Problems with Decision Trees
 While decision trees classify quickly, the time for building a tree may be higher than for another type of classifier
 Decision trees suffer from a problem of errors propagating throughout the tree
  A very serious problem as the number of classes increases
Error Propagation
 Since decision trees work by a series
of local decisions, what happens when
one of these local decisions is wrong?
 Every decision from that point on may be
wrong
 We may never return to the correct path
of the tree
Error Propagation Example
Problems with ID3
 If we broke down leave time to the minute, we might get something like this:

8:02 AM → Long   8:03 AM → Medium   9:05 AM → Short   9:07 AM → Long   9:09 AM → Long   10:02 AM → Short

 Since entropy is very low for each branch, we have n branches with n leaves. This would not be helpful for predictive modeling.
Problems with ID3
 We can use a technique known as discretization
 We choose cut points, such as 9 AM, for splitting continuous attributes
 These cut points generally lie in a subset of boundary points: a boundary point is a point where two adjacent instances in a sorted list have different target attribute values (see the sketch below)
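A sketch of finding those boundary points: sort by the continuous attribute and keep a midpoint wherever two adjacent instances disagree on the target (the dict-based record layout is an assumption):

def boundary_cut_points(examples, attribute, target):
    ordered = sorted(examples, key=lambda ex: ex[attribute])
    cuts = []
    for prev, curr in zip(ordered, ordered[1:]):
        if prev[target] != curr[target]:
            cuts.append((prev[attribute] + curr[attribute]) / 2)  # midpoint as the cut point
    return cuts

# With the leave times on the next slide (encoded, say, as minutes after midnight),
# boundary points fall between 8:02/8:07, 8:07/9:00, and 10:00/10:02.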
Problems with ID3
 Consider the (continuous) leave-time attribute, with the commute-time target shown in parentheses:

8:00 (L), 8:02 (L), 8:07 (M), 9:00 (S), 9:20 (S), 9:25 (S), 10:00 (S), 10:02 (M)

 When we split at these boundary points rather than at every distinct value, each branch keeps some entropy, so we no longer get a decision tree with a leaf for every cut point
ID3 in Gaming
 Black & White, developed by Lionhead Studios and released in 2001, used ID3
 Used to predict a player’s reaction to a
certain creature’s action
 In this model, a greater feedback value
means the creature should attack
ID3 in Black & White

Example  Allegiance  Defense  Tribe  Feedback (Target)
D1 Friendly Weak Celtic -1.0
D2 Enemy Weak Celtic 0.4
D3 Friendly Strong Norse -1.0
D4 Enemy Strong Norse -0.2
D5 Friendly Weak Greek -1.0
D6 Enemy Medium Greek 0.2
D7 Enemy Strong Greek -0.4
D8 Enemy Medium Aztec 0.0
D9 Friendly Weak Aztec -1.0
ID3 in Black & White
Allegiance = Friendly → -1.0
Allegiance = Enemy → Defense
  Defense = Weak → 0.4
  Defense = Medium → 0.1
  Defense = Strong → -0.3

Note that this decision tree does not even use the tribe attribute.
ID3 in Black & White
 Now suppose we don’t want the entire decision tree, but just the 2 highest feedback values
 We can create a Boolean expression, such as:

((Allegiance = Enemy) ^ (Defense = Weak)) v ((Allegiance = Enemy) ^ (Defense = Medium))
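The same expression as a small Python predicate (a direct transcription; the function name is illustrative):

def should_attack(allegiance, defense):
    return (allegiance == "Enemy" and defense == "Weak") or \
           (allegiance == "Enemy" and defense == "Medium")

print(should_attack("Enemy", "Weak"))     # -> True  (feedback 0.4)
print(should_attack("Friendly", "Weak"))  # -> False (feedback -1.0)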
Summary
 Decision trees can be used to help
predict the future
 The trees are easy to understand
 Decision trees work more efficiently
with discrete attributes
 The trees may suffer from error
propagation
