Machine Learning Lecture 08: Decision Tree Learning
Classification Learning: Definition
Given a collection of records (the training set):
– Each record contains a set of attributes; one of the attributes is the class.
Find a model that expresses the class attribute as a function of the values of the other attributes.
Goal: previously unseen records should be assigned a class as accurately as possible.
– Use a test set to estimate the accuracy of the model.
– Often the given data set is divided into training and test sets: the training set is used to build the model and the test set is used to validate it.
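
To make this concrete, here is a minimal sketch of the build/validate loop using scikit-learn (an illustration, not part of the lecture); the Iris data stands in for any labelled record collection:

```python
# Minimal sketch: split data into training and test sets, build a
# decision tree on the training set, and estimate accuracy on the
# held-out test set.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Hold out 30% of the records as a test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = DecisionTreeClassifier()   # induction: learn from the training set
model.fit(X_train, y_train)

y_pred = model.predict(X_test)     # deduction: apply the model to the test set
print("test accuracy:", accuracy_score(y_test, y_pred))
```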
Illustrating Classification Learning
Figure: a learning algorithm induces a model from the training set; applying the model (deduction) to the unlabeled test set predicts each record's class.

Test Set:
Tid | Attrib1 | Attrib2 | Attrib3 | Class
11 | No | Small | 55K | ?
12 | Yes | Medium | 80K | ?
13 | Yes | Large | 110K | ?
14 | No | Small | 95K | ?
15 | No | Large | 67K | ?
Examples of Classification Tasks
Typical classification tasks include deciding whether an email is spam, whether a credit-card transaction is fraudulent, or whether a tumor is benign or malignant.

Decision Tree Terminology
Before learning more about decision trees, let's get familiar with some of the terminology:
• Root Node: The initial node at the beginning of a decision tree, where the entire
population or dataset starts dividing based on various features or conditions.
• Decision Nodes: Nodes resulting from the splitting of the root node (or of other decision nodes) are known as decision nodes. These nodes represent intermediate decisions or conditions within the tree.
• Leaf Nodes: Nodes where further splitting is not possible, often indicating the final
classification or outcome. Leaf nodes are also referred to as terminal nodes.
• Sub-Tree: Just as a subsection of a graph is called a sub-graph, a subsection of a decision tree is referred to as a sub-tree. It represents a specific portion of the decision tree.
• Pruning: The process of removing or cutting down specific nodes in a tree to prevent
overfitting and simplify the model.
• Branch / Sub-Tree: A subsection of the entire tree is referred to as a branch or sub-tree. It represents a specific path of decisions and outcomes within the tree.
• Parent and Child Node: In a decision tree, a node that is divided into sub-nodes is
known as a parent node, and the sub-nodes emerging from it are referred to as child
nodes. The parent node represents a decision or condition, while the child nodes
represent the potential outcomes or further decisions based on that condition.
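
A minimal sketch of how these node roles can be represented in code; the class and field names below are illustrative, not from the lecture:

```python
# Illustrative node structure for a decision tree. A node that is
# split is a parent/decision node; a node with no children is a
# leaf (terminal) node.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    attribute: Optional[str] = None    # attribute tested at a decision node
    label: Optional[str] = None        # class label stored at a leaf node
    children: dict = field(default_factory=dict)  # branch value -> child Node

    def is_leaf(self) -> bool:
        return not self.children

# The root node tests Refund; its sub-nodes are its children.
root = Node(attribute="Refund", children={
    "Yes": Node(label="No"),           # leaf node
    "No":  Node(attribute="MarSt"),    # decision node (child of the root)
})
```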
Example of a Decision Tree
Training data (the class attribute is Cheat):

Tid | Refund | Marital Status | Taxable Income | Cheat
1 | Yes | Single | 125K | No
2 | No | Married | 100K | No
3 | No | Single | 70K | No
4 | Yes | Married | 120K | No
5 | No | Divorced | 95K | Yes
6 | No | Married | 60K | No
7 | Yes | Divorced | 220K | No
8 | No | Single | 85K | Yes
9 | No | Married | 75K | No
10 | No | Single | 90K | Yes

A decision tree learned from this data:
– Refund = Yes → NO
– Refund = No → test MarSt:
  – MarSt = Married → NO
  – MarSt = Single or Divorced → test TaxInc:
    – TaxInc < 80K → NO
    – TaxInc > 80K → YES
Apply Model to Test Data
Test record:

Refund | Marital Status | Taxable Income | Cheat
No | Married | 80K | ?

Start at the root node and follow the branch that matches each attribute value of the test record: Refund = No, so take the No branch to the MarSt node; Marital Status = Married, so take the Married branch, which ends in the leaf NO. Assign Cheat to "No".
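
The same walk can be written as a few lines of code. This is a hand-coded sketch of the tree above, with illustrative attribute names; a learned model would normally be applied by a library, not hard-coded:

```python
# Sketch: apply the example tree to one test record (a dict whose
# keys mirror the training table's attributes).
def predict_cheat(record):
    # Root: test Refund.
    if record["Refund"] == "Yes":
        return "No"                     # leaf: NO
    # Refund = No: test marital status.
    if record["MarSt"] == "Married":
        return "No"                     # leaf: NO
    # MarSt = Single or Divorced: test taxable income.
    return "No" if record["TaxInc"] < 80_000 else "Yes"

test_record = {"Refund": "No", "MarSt": "Married", "TaxInc": 80_000}
print(predict_cheat(test_record))       # -> "No" (assign Cheat = "No")
```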
The decision tree algorithm works in a few simple steps:
1. Start at the root node with the full training set.
2. Select the attribute that best separates the classes (e.g., the one with the highest information gain).
3. Split the data into subsets, one per value (or range) of the chosen attribute.
4. Recurse on each subset to build the sub-trees.
5. Stop when a node is pure or no attributes remain; such a node becomes a leaf labelled with the majority class.
Example of a Decision Tree
Figure: the classic PlayTennis decision tree, with Outlook at the root (branches Sunny, Overcast, and Rain).
Decision Trees: ID3
Top-Down Induction of Decision Trees (ID3)
ID3 grows the tree top-down, greedily: at each node it chooses the attribute that best classifies the local training examples, splits on it, and recurses until the examples at a node all share one class.
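
A compact, hedged sketch of this top-down loop; best_attribute stands in for the information-gain selection defined on the next slides, and the nested-dict tree representation is an assumption for illustration:

```python
# Compact sketch of ID3-style top-down induction. Examples are
# (attribute-dict, label) pairs; best_attribute is assumed to pick
# the attribute with the highest information gain (defined below).
from collections import Counter

def id3(examples, attributes, best_attribute):
    labels = [label for _, label in examples]
    # Stop if the node is pure or no attributes remain: make a leaf
    # labelled with the majority class.
    if len(set(labels)) == 1 or not attributes:
        return Counter(labels).most_common(1)[0][0]
    a = best_attribute(examples, attributes)   # greedy choice at this node
    tree = {a: {}}
    for v in {x[a] for x, _ in examples}:      # one branch per observed value
        subset = [(x, y) for x, y in examples if x[a] == v]
        rest = [b for b in attributes if b != a]
        tree[a][v] = id3(subset, rest, best_attribute)
    return tree
```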
Entropy
For a sample S containing a proportion p+ of positive and p- of negative examples:

Entropy(S) = -p+ log2(p+) - p- log2(p-)

Why?
• Information theory: an optimal-length code assigns -log2(p) bits to a message having probability p.
• So the expected number of bits to encode the class (+ or -) of a random member of S is:
p+ (-log2(p+)) + p- (-log2(p-)) = Entropy(S)
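
A direct translation of the formula into a small helper (the function name is illustrative):

```python
# Entropy of a collection of class labels, matching the formula
# above: -sum_i p_i * log2(p_i), one term per class.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

print(entropy(["+"] * 9 + ["-"] * 5))   # ~0.940 for a 9+/5- sample
```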
Information Gain
Gain(S, A) is the expected reduction in the entropy of sample S caused by partitioning it on attribute A:

Gain(S, A) = Entropy(S) - sum over v in Values(A) of (|S_v| / |S|) * Entropy(S_v)

where S_v is the subset of S for which attribute A has value v.
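
And a matching sketch for the gain itself, repeating the entropy helper so the snippet is self-contained:

```python
# Gain(S, A) = Entropy(S) - sum_v (|S_v|/|S|) * Entropy(S_v).
# Examples are (attribute-dict, label) pairs, as in the id3 sketch.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attribute):
    total = entropy([y for _, y in examples])
    n = len(examples)
    for v in {x[attribute] for x, _ in examples}:
        sv = [y for x, y in examples if x[attribute] == v]
        total -= (len(sv) / n) * entropy(sv)   # weighted subset entropy
    return total

# Tiny check: an attribute that splits S perfectly has gain = Entropy(S).
data = [({"Refund": "Yes"}, "No"), ({"Refund": "No"}, "Yes")]
print(information_gain(data, "Refund"))        # 1.0
```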
Converting a Tree to Rules
Each path from the root to a leaf becomes one conjunctive IF-THEN rule. For the PlayTennis tree (leaves No, Yes, No, Yes), this yields rules such as:

IF Outlook = Sunny AND Humidity = High THEN PlayTennis = No
IF Outlook = Sunny AND Humidity = Normal THEN PlayTennis = Yes
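
A sketch of the conversion for nested-dict trees like the one id3() above returns; the tree_to_rules name and the dict encoding of the PlayTennis tree are assumptions for illustration:

```python
# Emit one IF-THEN rule per root-to-leaf path of a nested-dict tree.
def tree_to_rules(tree, conditions=()):
    if not isinstance(tree, dict):             # leaf: finish one rule
        cond = " AND ".join(f"{a} = {v}" for a, v in conditions)
        return [f"IF {cond} THEN class = {tree}"]
    rules = []
    (attribute, branches), = tree.items()
    for value, subtree in branches.items():
        rules += tree_to_rules(subtree, conditions + ((attribute, value),))
    return rules

playtennis = {"Outlook": {
    "Sunny": {"Humidity": {"High": "No", "Normal": "Yes"}},
    "Overcast": "Yes",
    "Rain": {"Wind": {"Strong": "No", "Weak": "Yes"}},
}}
for rule in tree_to_rules(playtennis):
    print(rule)
```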
Overfitting
A tree grown until it classifies every training example perfectly often fits noise in the training data: training error is low, but error on unseen data grows. A hypothesis h overfits the training data if some alternative h' has higher training error yet lower error over the whole distribution of instances.

Avoiding Overfitting
Two standard remedies: stop growing the tree before it fits the training data perfectly, or grow the full tree and then post-prune it (e.g., reduced-error pruning against a held-out validation set).

Effect of Reduced-Error Pruning
Figure: accuracy on the training and test data as a function of tree size, with and without reduced-error pruning; pruning trades a small loss in training accuracy for better test accuracy.
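
A simplified, hedged sketch of reduced-error pruning on the same nested-dict trees: working bottom-up, each internal node is collapsed to its majority-class leaf unless keeping the subtree gives strictly better accuracy on the validation set.

```python
# Simplified reduced-error pruning for nested-dict decision trees.
from collections import Counter

def classify(tree, x):
    while isinstance(tree, dict):
        (a, branches), = tree.items()
        tree = branches.get(x[a])      # unseen value -> None (counts as wrong)
    return tree

def accuracy(tree, data):
    return sum(classify(tree, x) == y for x, y in data) / len(data)

def leaves(tree):
    if not isinstance(tree, dict):
        return [tree]
    (_, branches), = tree.items()
    return [leaf for sub in branches.values() for leaf in leaves(sub)]

def prune(tree, validation):
    if not isinstance(tree, dict):
        return tree
    (a, branches), = tree.items()
    # Prune the children first, then consider collapsing this node.
    pruned = {a: {v: prune(sub, validation) for v, sub in branches.items()}}
    majority = Counter(leaves(pruned)).most_common(1)[0][0]
    keep = accuracy(pruned, validation) > accuracy(majority, validation)
    return pruned if keep else majority

# Usage: pruned_tree = prune(full_tree, validation_examples)
```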