
Department of Computer Science and Engineering (Data Science)

Subject: Machine Learning – I (DJS23DCPC402)

AY: 2024-25

Experiment 2

(Decision Tree)

Aim: Implement a Decision Tree on the given datasets to build a classifier and a regressor. Apply an appropriate pruning method to overcome overfitting.

Theory:

A Decision Tree is a supervised learning technique that can be used for both classification and regression problems, though it is mostly preferred for classification. It is a tree-structured classifier in which internal nodes represent the features of a dataset, branches represent decision rules, and each leaf node represents an outcome. A decision tree contains two kinds of nodes: decision nodes and leaf nodes.
Decision nodes are used to make decisions and have multiple branches, whereas leaf nodes are the outputs of those decisions and contain no further branches.
The decisions or tests are performed on the basis of the features of the given dataset.
A decision tree is a graphical representation of all the possible solutions to a problem or decision under given conditions. It is called a decision tree because, like a tree, it starts with a root node that expands into further branches, forming a tree-like structure.
A decision tree simply asks a question and, based on the answer (Yes/No), splits further into subtrees.

Decision Tree Terminologies


Root Node: The root node is where the decision tree starts. It represents the entire dataset, which is further divided into two or more homogeneous sets.


Leaf Node: Leaf nodes are the final output nodes; the tree cannot be split further once a leaf node is reached.
Splitting: Splitting is the process of dividing a decision node or the root node into sub-nodes according to the given conditions.
Branch/Sub-Tree: A subtree formed by splitting the tree.
Pruning: Pruning is the process of removing unwanted branches from the tree.
Parent/Child Node: A node that splits into sub-nodes is called a parent node, and its sub-nodes are called child nodes.

Steps in building a Tree


Step-1: Begin the tree with the root node, say S, which contains the complete dataset.
Step-2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM).
Step-3: Divide S into subsets containing the possible values of the best attribute.
Step-4: Generate the decision tree node that contains the best attribute.
Step-5: Recursively make new decision trees using the subsets created in Step-3. Continue this process until a stage is reached where the nodes cannot be classified further; each such final node is a leaf node. A minimal sketch of this procedure appears below.
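
As a rough illustration of these steps, the Python sketch below builds a tree recursively from a list of row dictionaries, using information gain (defined in the next section) as the ASM. The names entropy, information_gain, and build_tree are our own illustrative choices, not part of the lab's required code.

    import math
    from collections import Counter

    def entropy(labels):
        # Entropy(S) = -sum(p * log2(p)) over the class proportions.
        total = len(labels)
        return -sum((c / total) * math.log2(c / total)
                    for c in Counter(labels).values())

    def information_gain(rows, labels, attr):
        # Gain = Entropy(S) - weighted average entropy of each split.
        total = len(labels)
        splits = {}
        for row, y in zip(rows, labels):
            splits.setdefault(row[attr], []).append(y)
        remainder = sum(len(ys) / total * entropy(ys)
                        for ys in splits.values())
        return entropy(labels) - remainder

    def build_tree(rows, labels, attributes):
        # Step-5 stopping rule: a pure node (or no attributes left)
        # becomes a leaf labelled with the majority class.
        if len(set(labels)) == 1 or not attributes:
            return Counter(labels).most_common(1)[0][0]
        # Step-2: pick the best attribute by the ASM (information gain).
        best = max(attributes,
                   key=lambda a: information_gain(rows, labels, a))
        node = {best: {}}
        # Steps 3-4: split S on each value of the best attribute and recurse.
        for value in set(row[best] for row in rows):
            idx = [i for i, row in enumerate(rows) if row[best] == value]
            sub_rows = [rows[i] for i in idx]
            sub_labels = [labels[i] for i in idx]
            remaining = [a for a in attributes if a != best]
            node[best][value] = build_tree(sub_rows, sub_labels, remaining)
        return node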

Example: Suppose a candidate has a job offer and wants to decide whether or not to accept it. To solve this problem, the decision tree starts with the root node (the Salary attribute, chosen by the ASM). The root node splits into a decision node (distance from the office) and a leaf node, based on the corresponding labels. That decision node splits further into another decision node (cab facility) and a leaf node. Finally, the last decision node splits into two leaf nodes (Accepted offer and Declined offer).

Attribute Selection Measures


While implementing a decision tree, the main issue that arises is how to select the best attribute for the root node and for the sub-nodes. To solve this problem, a technique called the Attribute Selection Measure (ASM) is used. With this measure, we can easily select the best attribute for each node of the tree. There are two popular ASM techniques:
1. Information Gain:
Information gain is the measure of the change in entropy after a dataset is segmented on an attribute. It calculates how much information a feature provides about a class.


According to the value of information gain, we split the node and build the decision tree.
A decision tree algorithm always tries to maximize information gain; the node/attribute with the highest information gain is split first. It can be calculated using the formula below:
Information Gain = Entropy(S) − [Weighted Avg × Entropy(each feature)]
Entropy: Entropy is a metric that measures the impurity of a given attribute; it specifies the randomness in the data. Entropy can be calculated as:
Entropy(S) = −P(yes)·log2 P(yes) − P(no)·log2 P(no)
where
S = the current set of samples
P(yes) = probability of yes
P(no) = probability of no
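
As a quick numerical check of these formulas, the entropy and information_gain helpers from the sketch above can be applied to a toy dataset (the counts below follow the classic play-tennis example and are purely illustrative):

    # 9 "yes" and 5 "no" labels.
    labels = ["yes"] * 9 + ["no"] * 5
    print(round(entropy(labels), 3))  # 0.94

    # Suppose windy=false gives 6 yes / 2 no, windy=true gives 3 yes / 3 no.
    rows = [{"windy": "false"}] * 8 + [{"windy": "true"}] * 6
    labels = ["yes"] * 6 + ["no"] * 2 + ["yes"] * 3 + ["no"] * 3
    print(round(information_gain(rows, labels, "windy"), 3))  # 0.048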

2. Gini Index:
The Gini index is a measure of impurity or purity used while creating a decision tree in the CART (Classification and Regression Tree) algorithm.
An attribute with a low Gini index should be preferred over one with a high Gini index.
CART creates only binary splits, and it uses the Gini index to choose them. The Gini index can be calculated using the formula below:
Gini Index = 1 − Σj (Pj)²
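
A minimal sketch of this formula in Python, computing the Gini index over class proportions (the function name gini_index is our own):

    from collections import Counter

    def gini_index(labels):
        # Gini = 1 - sum of squared class proportions.
        total = len(labels)
        return 1.0 - sum((c / total) ** 2
                         for c in Counter(labels).values())

    print(round(gini_index(["yes"] * 9 + ["no"] * 5), 3))  # 0.459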
Pruning: Getting an Optimal Decision Tree
Pruning is the process of deleting unnecessary nodes from a tree in order to obtain an optimal decision tree. A tree that is too large increases the risk of overfitting, while one that is too small may not capture all the important features of the dataset. Pruning is therefore a technique that decreases the size of the learned tree without reducing its accuracy. Two tree-pruning techniques are mainly used (a sketch of the first follows this list):
 Cost Complexity Pruning
 Reduced Error Pruning
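
A minimal sketch of cost complexity pruning with scikit-learn, assuming its built-in iris data as a stand-in for the lab datasets. The cost_complexity_pruning_path method and the ccp_alpha parameter are real scikit-learn APIs; the selection loop is our own illustration.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, random_state=42)

    # Compute the candidate ccp_alpha values from the training data.
    path = DecisionTreeClassifier(
        random_state=42).cost_complexity_pruning_path(X_train, y_train)

    # Refit one tree per alpha and keep the best test accuracy.
    best_alpha, best_score = 0.0, 0.0
    for alpha in path.ccp_alphas:
        tree = DecisionTreeClassifier(random_state=42, ccp_alpha=alpha)
        tree.fit(X_train, y_train)
        score = tree.score(X_test, y_test)
        if score > best_score:
            best_alpha, best_score = alpha, score
    print(best_alpha, best_score)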

Lab Assignments to complete in this session:

Use the given dataset and perform the following tasks:


Dataset 1: IRIS.csv
Dataset 2: car prediction.csv

1. Use Python libraries to build a decision tree classifier on Dataset 1. Analyze the results using a confusion matrix and accuracy. Plot the decision tree.
2. Write code to show overfitting in the decision tree classifier built on Dataset 1. Use sklearn and matplotlib.
3. Implement a decision tree regressor on Dataset 2.
A starter sketch for tasks 1 and 2 appears below.
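
As a starting point for tasks 1 and 2, the sketch below assumes IRIS.csv has four numeric feature columns and a "species" label column; adjust the column names to match the actual file.

    import matplotlib.pyplot as plt
    import pandas as pd
    from sklearn.metrics import accuracy_score, confusion_matrix
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, plot_tree

    df = pd.read_csv("IRIS.csv")
    X, y = df.drop(columns="species"), df["species"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)

    # Task 1: fit, evaluate, and plot the classifier.
    clf = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(confusion_matrix(y_test, pred))
    print("accuracy:", accuracy_score(y_test, pred))
    plot_tree(clf, feature_names=list(X.columns), filled=True)
    plt.show()

    # Task 2: train vs. test accuracy as depth grows; a widening
    # gap between the two curves indicates overfitting.
    depths = range(1, 11)
    train_acc, test_acc = [], []
    for d in depths:
        m = DecisionTreeClassifier(
            max_depth=d, random_state=42).fit(X_train, y_train)
        train_acc.append(m.score(X_train, y_train))
        test_acc.append(m.score(X_test, y_test))
    plt.plot(depths, train_acc, label="train")
    plt.plot(depths, test_acc, label="test")
    plt.xlabel("max_depth"); plt.ylabel("accuracy"); plt.legend()
    plt.show()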

Write-Up
1. Write the pseudocode for the overfitting analysis in the Decision Tree classifier.
