Machine Learning Lecture 08: Decision Tree Learning

Decision Trees

Classification Learning: Definition

• Given a collection of records (training set)
  – Each record contains a set of attributes; one of the attributes is the class
• Find a model for the class attribute as a function of the values of the other attributes
• Goal: previously unseen records should be assigned a class as accurately as possible
  – Use a test set to estimate the accuracy of the model
  – Often, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it (a sketch follows below)
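To make the train/test workflow above concrete, here is a minimal sketch using scikit-learn's DecisionTreeClassifier on the Iris dataset; the dataset, split ratio, and random seed are illustrative choices, not part of the lecture.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Illustrative data set (the lecture's table exists only as a slide image)
X, y = load_iris(return_X_y=True)

# Divide the data into training and test sets, as described on the slide
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Induction: build the model from the training set
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Deduction: apply the model to unseen records and estimate its accuracy
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```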
Illustrating Classification Learning

[Slide diagram: a learning algorithm performs induction on the training set to learn a model; the model is then applied to the test set (deduction).]

Training Set
Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

Test Set
Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?
Examples of Classification Task

• Predicting tumor cells as benign or malignant
• Classifying credit card transactions as legitimate or fraudulent
• Classifying secondary structures of proteins as alpha-helix, beta-sheet, or random coil
• Categorizing news stories as finance, weather, entertainment, sports, etc.
Classification Learning Techniques

• Decision tree-based methods
• Rule-based methods
• Instance-based methods
• Probability-based methods
• Neural networks
• Support vector machines
• Logic-based methods
Decision Trees

Decision trees are a simple machine learning tool used for classification and regression tasks. They break complex decisions into smaller steps, making them easy to understand and implement.
Understanding Decision Tree

• A decision tree, which has a hierarchical structure made up of a root node, branches, internal nodes, and leaf nodes, is a non-parametric supervised learning approach used for classification and regression applications.
• It is a tool with applications spanning several different areas. These trees can be used for classification as well as regression problems. The name itself suggests a flowchart-like tree structure that shows the predictions resulting from a series of feature-based splits. It starts with a root node and ends with a decision made by the leaves.
Types of Decision Tree

• ID3: This algorithm measures how mixed up the data is at a node using a quantity called entropy. It then chooses the feature that helps to clarify the data the most.
• C4.5: This is an improved version of ID3 that can handle missing data and continuous attributes.
• CART: This algorithm uses a different measure, called Gini impurity, to decide how to split the data. It can be used for both classification (sorting data into categories) and regression (predicting continuous values) tasks (a scikit-learn sketch follows below).
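For reference, a minimal sketch of how these criteria map onto scikit-learn, which implements an optimized CART-style algorithm; switching criterion to "entropy" only changes the impurity measure, so this approximates ID3/C4.5-style splitting rather than reproducing those exact algorithms.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# CART-style tree: Gini impurity is the default splitting criterion
cart_like = DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)

# ID3/C4.5-style criterion: entropy (information gain)
id3_like = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)

print("Depth with Gini criterion:   ", cart_like.get_depth())
print("Depth with entropy criterion:", id3_like.get_depth())
```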
Decision Tree Terminologies

Before learning more about decision trees let’s get familiar with some of the
terminologies:

• Root Node: The initial node at the beginning of a decision tree, where the entire
population or dataset starts dividing based on various features or conditions.
• Decision Nodes: Nodes resulting from the splitting of root nodes are known as
decision nodes. These nodes represent intermediate decisions or conditions within the
tree.
• Leaf Nodes: Nodes where further splitting is not possible, often indicating the final
classification or outcome. Leaf nodes are also referred to as terminal nodes.
• Sub-Tree: Similar to a subsection of a graph being called a sub-graph, a subsection of this tree is referred to as a sub-tree. It represents a specific portion of the decision
tree.
• Pruning: The process of removing or cutting down specific nodes in a tree to prevent
overfitting and simplify the model.
• Branch / Sub-Tree: A subsection of the entire tree is referred to as a branch or sub-tree. It represents a specific path of decisions and outcomes within the tree.
• Parent and Child Node: In a decision tree, a node that is divided into sub-nodes is
known as a parent node, and the sub-nodes emerging from it are referred to as child
nodes. The parent node represents a decision or condition, while the child nodes
represent the potential outcomes or further decisions based on that condition.
Example of a Decision Tree

Training Data
Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Model: Decision Tree
• Refund = Yes → NO
• Refund = No  → test MarSt
  – MarSt = Single or Divorced → test TaxInc
    · TaxInc < 80K → NO
    · TaxInc > 80K → YES
  – MarSt = Married → NO
Apply Model to Test Data

Test Data
Refund  Marital Status  Taxable Income  Cheat
No      Married         80K             ?

Start at the root of the tree and follow the branch that matches the test record at each node (a code sketch of this traversal follows below):
• Refund = No → take the "No" branch to the MarSt node
• MarSt = Married → take the "Married" branch
• That branch ends in a leaf labelled NO, so assign Cheat = "No"
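The same traversal, written as a minimal Python sketch; the tree is hard-coded to mirror the slide's model, and the attribute names in the record dictionary are illustrative.

```python
def classify(record):
    """Walk the slide's decision tree for one record (a dict of attribute values)."""
    if record["Refund"] == "Yes":
        return "No"                      # Refund = Yes -> leaf NO
    if record["MarSt"] == "Married":
        return "No"                      # Refund = No, MarSt = Married -> leaf NO
    # Refund = No, MarSt = Single or Divorced: test Taxable Income
    return "No" if record["TaxInc"] < 80_000 else "Yes"

test_record = {"Refund": "No", "MarSt": "Married", "TaxInc": 80_000}
print(classify(test_record))  # -> "No", matching "assign Cheat = 'No'" on the slide
```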
Decision Tree algorithm works in simpler steps:

• Starting at the Root: The algorithm begins at the top, called the "root node," representing the entire dataset.
• Asking the Best Questions: It looks for the most important feature or question that splits the data into the most distinct groups. This is like asking a question at a fork in the tree.
• Branching Out: Based on the answer to that question, it divides the data into smaller subsets, creating new branches. Each branch represents a possible route through the tree.
• Repeating the Process: The algorithm continues asking questions and splitting the data at each branch until it reaches the final "leaf nodes," which represent the predicted outcomes or final classifications (see the sketch below).
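As a quick illustration of the questions and branches the algorithm chooses, the sketch below fits a shallow tree on the Iris dataset (an illustrative dataset, not the lecture's) and prints its structure with scikit-learn's export_text.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# Print the root question and the branches/leaves the algorithm chose
print(export_text(clf, feature_names=list(iris.feature_names)))
```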
Decision Tree Assumptions

• Binary Splits: Decision trees typically make binary splits, meaning each node divides the data into two subsets based on a single feature or condition. This assumes that each decision can be represented as a binary choice.
• Recursive Partitioning: Decision trees use a recursive partitioning process, where each node is divided into child nodes, and this process continues until a stopping criterion is met. This assumes that the data can be effectively subdivided into smaller, more manageable subsets.
• Feature Independence: These trees often assume that the features used for splitting nodes are independent. In practice, feature independence may not hold, but decision trees can still perform well when features are correlated.
• Homogeneity: Decision trees aim to create homogeneous subgroups in each node, meaning that the samples within a node are as similar as possible with respect to the target variable. This assumption helps in achieving clear decision boundaries.
Advantages of Decision Trees

• Easy to Understand: They are simple to visualize and interpret, making them easy to understand even for non-experts.
• Handles Both Numerical and Categorical Data: They can work with both types of data without needing much preprocessing.
• No Need for Data Scaling: These trees do not require normalization or scaling of the data.
• Automated Feature Selection: They automatically identify the most important features for decision-making.
• Handles Non-Linear Relationships: They can capture non-linear patterns in the data effectively.
Disadvantages of Decision Trees

• Overfitting Risk: They can easily overfit the training data, especially if they are too deep.
• Unstable with Small Changes: Small changes in the data can lead to completely different trees.
• Biased with Imbalanced Data: They tend to be biased if one class dominates the dataset.
• Limited to Axis-Parallel Splits: They struggle with diagonal or complex decision boundaries.
• Can Become Complex: Large trees can become hard to interpret and may lose their simplicity.
Gini Index
Example 1

How Can We Create A Simple Decision Tree?

In the above example, we are trying to understand whether a person gets a loan based on salary and property. We take the Y variable (loan approval) as the target column. There are two input parameters: the X1 variable, salary (in rupees), and the X2 variable, property (land or house). From these we have built a small decision tree.
Before moving forward, we need to understand a few important questions.

• Question 1: What are the terminologies used in a decision tree?
• Question 2: Why did we select the salary column first instead of the property column in Image 1?

We considered the salary column only as an example for building the tree. But when we work on a real-world dataset, we cannot choose the column randomly. Read the following section to learn what process is used in practice.
Splitting the Nodes in a Decision Tree

To answer the question above, we need to check how good each column is and what qualities it must have to be the root node. To decide which column to split on, we use:
1. Gini
2. Entropy and Information Gain
Gini Impurity in Decision Tree

First, we calculate the Gini impurity for column 1, credit history. Likewise, we must calculate the Gini impurity for the other columns, salary and property. The value we get tells us how impure an attribute is: the lower the value, the lower the impurity, and it ranges between 0 and 1.

We get an impurity of 0.171 for credit history.

After calculating Gini for the other features, we get an impurity of 0.440 for the column salary and 0.428 for property. Credit history has the lowest impurity, so it is the best attribute to split on first (a sketch of the computation follows below).
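The loan/credit-history data behind these numbers appears only as an image on the slides, so the sketch below just shows how a weighted Gini impurity for a candidate split would be computed; the label lists are hypothetical, not the lecture's data.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of one node: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

def gini_of_split(groups):
    """Weighted Gini impurity of a candidate split (one label list per child node)."""
    total = sum(len(g) for g in groups)
    return sum(len(g) / total * gini(g) for g in groups)

# Hypothetical split on a binary attribute; these labels are NOT the lecture's loan data
left  = ["Yes", "Yes", "Yes", "No"]        # e.g. credit history = good
right = ["No", "No", "No", "No", "Yes"]    # e.g. credit history = bad
print(round(gini_of_split([left, right]), 3))
```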
Gini Index
Example 2
Training Examples

Day  Outlook   Temp.  Humidity  Wind    Play Tennis
D1   Sunny     Hot    High      Weak    No
D2   Sunny     Hot    High      Strong  No
D3   Overcast  Hot    High      Weak    Yes
D4   Rain      Mild   High      Weak    Yes
D5   Rain      Cool   Normal    Weak    Yes
D6   Rain      Cool   Normal    Strong  No
D7   Overcast  Cool   Normal    Weak    Yes
D8   Sunny     Mild   High      Weak    No
D9   Sunny     Cool   Normal    Weak    Yes
D10  Rain      Mild   Normal    Strong  Yes
D11  Sunny     Mild   Normal    Strong  Yes
D12  Overcast  Mild   High      Strong  Yes
D13  Overcast  Hot    Normal    Weak    Yes
D14  Rain      Mild   High      Strong  No
Decision Tree for PlayTennis

• Outlook = Sunny → test Humidity
  – Humidity = High   → No
  – Humidity = Normal → Yes
• Outlook = Overcast → Yes
• Outlook = Rain → test Wind
  – Wind = Strong → No
  – Wind = Weak   → Yes

Each internal node tests an attribute.
Each branch corresponds to an attribute value.
Each leaf node assigns a classification.
Decision Trees: ID3

Top-Down Induction of Decision Trees (ID3)

1. A ← the "best" decision attribute for the next node
2. Assign A as the decision attribute for the node
3. For each value of A, create a new descendant
4. Sort the training examples to the leaf nodes according to the attribute value of the branch
5. If all training examples are perfectly classified (same value of the target attribute), stop; else iterate over the new leaf nodes (a sketch follows below)
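A compact Python sketch of steps 1–5 above for categorical attributes, using entropy and information gain as defined on the following slides; the helper names and the tiny example rows are illustrative, not from the lecture.

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy of a label list: -sum p * log2 p."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Expected reduction in entropy from splitting on attribute attr."""
    total, n = entropy(labels), len(rows)
    for value in set(r[attr] for r in rows):
        subset = [lab for r, lab in zip(rows, labels) if r[attr] == value]
        total -= len(subset) / n * entropy(subset)
    return total

def id3(rows, labels, attrs):
    """Steps 1-5 above: pick the best attribute, split, and recurse until pure."""
    if len(set(labels)) == 1:                      # perfectly classified -> leaf
        return labels[0]
    if not attrs:                                  # no attributes left -> majority leaf
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: info_gain(rows, labels, a))          # step 1
    tree = {best: {}}                                                    # step 2
    for value in set(r[best] for r in rows):                             # step 3
        idx = [i for i, r in enumerate(rows) if r[best] == value]        # step 4
        sub_rows = [rows[i] for i in idx]
        sub_labels = [labels[i] for i in idx]
        tree[best][value] = id3(sub_rows, sub_labels,
                                [a for a in attrs if a != best])         # step 5
    return tree

# Tiny illustrative run on two PlayTennis-style attributes
rows = [{"Outlook": "Sunny", "Wind": "Weak"}, {"Outlook": "Overcast", "Wind": "Weak"},
        {"Outlook": "Rain", "Wind": "Strong"}, {"Outlook": "Sunny", "Wind": "Strong"}]
labels = ["No", "Yes", "No", "No"]
print(id3(rows, labels, ["Outlook", "Wind"]))
```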
Which attribute is best?

Consider a collection S with 29 positive and 35 negative examples, [29+, 35-], and two candidate attributes:
• A1 splits S into [21+, 5-] (A1 = G) and [8+, 30-] (A1 = H)
• A2 splits S into [18+, 33-] (A2 = L) and [11+, 2-] (A2 = M)
Entropy

• S is a sample of training examples
• p+ is the proportion of positive examples in S
• p- is the proportion of negative examples in S
• Entropy measures the impurity of S:

  Entropy(S) = -p+ log2 p+ - p- log2 p-
Entropy

• Entropy(S) = the expected number of bits needed to encode the class (+ or -) of a randomly drawn member of S (under the optimal, shortest-length code)

Why?
• Information theory: an optimal-length code assigns -log2 p bits to a message having probability p.
• So the expected number of bits to encode (+ or -) for a random member of S is

  p+ (-log2 p+) + p- (-log2 p-) = -p+ log2 p+ - p- log2 p-

(A numeric sketch follows below.)
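A minimal numeric sketch of this definition, applied to the [29+, 35-] collection from the "Which attribute is best?" slide; the function name is illustrative.

```python
import math

def entropy(p_pos, p_neg):
    """Entropy(S) = -p+ log2 p+ - p- log2 p-  (0 log 0 is taken as 0)."""
    return -sum(p * math.log2(p) for p in (p_pos, p_neg) if p > 0)

# The [29+, 35-] collection: 29 positive and 35 negative out of 64 examples
print(round(entropy(29 / 64, 35 / 64), 3))   # about 0.994 bits
```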
Information Gain

• Gain(S, A): the expected reduction in entropy due to sorting S on attribute A

  Gain(S, A) = Entropy(S) - Σ over v in Values(A) of (|Sv| / |S|) · Entropy(Sv)

  where Sv is the subset of S for which attribute A has value v (see the sketch below).
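A minimal sketch that applies this definition to the [29+, 35-] example from the "Which attribute is best?" slide; the function names are illustrative.

```python
import math

def entropy(pos, neg):
    """Entropy of a node with pos positive and neg negative examples."""
    total, result = pos + neg, 0.0
    for count in (pos, neg):
        if count:
            p = count / total
            result -= p * math.log2(p)
    return result

def gain(parent, children):
    """Gain(S, A) = Entropy(S) - sum over children of |Sv|/|S| * Entropy(Sv)."""
    n = sum(pos + neg for pos, neg in children)
    return entropy(*parent) - sum((pos + neg) / n * entropy(pos, neg)
                                  for pos, neg in children)

# S = [29+, 35-]; A1 splits it into [21+, 5-] and [8+, 30-], A2 into [18+, 33-] and [11+, 2-]
print(round(gain((29, 35), [(21, 5), (8, 30)]), 3))   # gain from splitting on A1
print(round(gain((29, 35), [(18, 33), (11, 2)]), 3))  # gain from splitting on A2
# A1 yields the larger gain, so it is the better attribute to split on
```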
Converting a Tree to Rules

• Outlook = Sunny → test Humidity
  – Humidity = High   → No
  – Humidity = Normal → Yes
• Outlook = Overcast → Yes
• Outlook = Rain → test Wind
  – Wind = Strong → No
  – Wind = Weak   → Yes

R1: If (Outlook = Sunny) ∧ (Humidity = High) Then PlayTennis = No
R2: If (Outlook = Sunny) ∧ (Humidity = Normal) Then PlayTennis = Yes
R3: If (Outlook = Overcast) Then PlayTennis = Yes
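The same rules expressed as a small Python sketch; R1–R3 come straight from the slide, and the final branch for Outlook = Rain is read off the tree above rather than listed as a rule on the slide.

```python
def play_tennis(outlook, humidity, wind):
    """Rules R1-R3 from the slide, plus the Rain branch taken from the tree."""
    if outlook == "Sunny" and humidity == "High":      # R1
        return "No"
    if outlook == "Sunny" and humidity == "Normal":    # R2
        return "Yes"
    if outlook == "Overcast":                          # R3
        return "Yes"
    # Outlook = Rain: the tree tests Wind
    return "No" if wind == "Strong" else "Yes"

print(play_tennis("Sunny", "High", "Weak"))   # -> "No" (R1 fires)
```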
Continuous Valued Attributes

Create a discrete attribute to test the continuous one:
• Temperature = 24.5°C
• (Temperature > 20.0°C) ∈ {true, false}

Where to set the threshold? (see the sketch below)

Temperature  15°C  18°C  19°C  22°C  24°C  27°C
PlayTennis   No    No    Yes   Yes   Yes   No
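A small sketch of one common way to pick candidate thresholds: place them midway between adjacent sorted values whose labels differ (this heuristic is an assumption, not spelled out on the slide), applied to the slide's six Temperature examples.

```python
# The slide's six examples, sorted by temperature (degrees C)
temps  = [15, 18, 19, 22, 24, 27]
labels = ["No", "No", "Yes", "Yes", "Yes", "No"]

# Candidate thresholds: midpoints between adjacent examples with different classes
candidates = [
    (t1 + t2) / 2
    for (t1, l1), (t2, l2) in zip(zip(temps, labels), zip(temps[1:], labels[1:]))
    if l1 != l2
]
print(candidates)   # [18.5, 25.5] -> evaluate each with information gain and keep the best
```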
Overfitting

• One of the biggest problems with decision trees is overfitting: a tree that keeps splitting eventually fits noise in the training data, so it performs well on the training set but poorly on unseen records.
Avoid Overfitting

Two strategies:
• Stop growing the tree when a split is not statistically significant
• Grow the full tree, then post-prune

How to select the "best" tree:
• Measure performance over the training data
• Measure performance over a separate validation data set
• Minimize |tree| + |misclassifications(tree)|

(A post-pruning sketch follows below.)
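A post-pruning sketch in scikit-learn: grow the full tree, then use a separate validation set to choose how much to prune. scikit-learn exposes cost-complexity pruning (ccp_alpha), which is related to, but not the same as, the reduced-error pruning named on the next slide; the dataset and split are illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Grow the full tree, then enumerate the candidate pruning levels
full_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
alphas = full_tree.cost_complexity_pruning_path(X_train, y_train).ccp_alphas

best_alpha, best_score = 0.0, 0.0
for alpha in alphas:
    pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_train, y_train)
    score = pruned.score(X_val, y_val)   # validation accuracy decides how much to prune
    if score > best_score:
        best_alpha, best_score = alpha, score

print(f"best ccp_alpha={best_alpha:.5f}, validation accuracy={best_score:.3f}")
```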
Effect of Reduced-Error Pruning

[Figure on slide: effect of reduced-error pruning.]
