
ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING - 18CS71

Dr. Nasreen Fathima
Associate Professor
Dept. of Computer Science & Engineering
ATMECE, Mysuru

Contents
Chapter-3
Decision Tree Learning:
Introduction
Decision tree representation
Appropriate problems
ID3 algorithm.
Artificial Neural Network:
Introduction
NN representation
Appropriate problems
Perceptrons
Back propagation algorithm.
Decision tree learning is a method for
approximating discrete-valued target
functions, in which the learned function is
represented by a decision tree.

DECISION TREE REPRESENTATION
• Decision trees classify instances by sorting them down the tree from the root to some
leaf node, which provides the classification of the instance.
• Each node in the tree specifies a test of some attribute of the instance, and each
branch descending from that node corresponds to one of the possible values for this
attribute.
• An instance is classified by starting at the root node of the tree, testing the attribute
specified by this node, then moving down the tree branch corresponding to the value
of the attribute in the given example. This process is then repeated for the subtree
rooted at the new node.
DECISION TREE REPRESENTATION

FIGURE: A decision tree for the concept PlayTennis. An example is classified by sorting it through
the tree to the appropriate leaf node, then returning the classification associated with this leaf.
• Decision trees represent a disjunction of conjunctions of constraints on the attribute values
of instances.
• Each path from the tree root to a leaf corresponds to a conjunction of attribute tests, and
the tree itself to a disjunction of these conjunctions.

For example,
The decision tree shown in above figure corresponds to the expression
(Outlook = Sunny ∧ Humidity = Normal)
∨ (Outlook = Overcast)
∨ (Outlook = Rain ∧ Wind = Weak)
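
To make the representation concrete, the tree in the figure can be encoded in a few lines of Python. This is only an illustrative sketch (not from the slides); the nested-dictionary layout and the function name classify are arbitrary choices.

# The PlayTennis decision tree from the figure, encoded as nested
# dictionaries of the form {attribute: {value: subtree_or_leaf_label}}.
play_tennis_tree = {
    "Outlook": {
        "Sunny": {"Humidity": {"High": "No", "Normal": "Yes"}},
        "Overcast": "Yes",
        "Rain": {"Wind": {"Strong": "No", "Weak": "Yes"}},
    }
}

def classify(tree, instance):
    # Sort the instance down the tree from the root to a leaf,
    # following the branch that matches its attribute value.
    if not isinstance(tree, dict):        # reached a leaf: return its label
        return tree
    attribute = next(iter(tree))          # attribute tested at this node
    return classify(tree[attribute][instance[attribute]], instance)

# (Outlook = Sunny, Humidity = Normal) sorts to the leaf "Yes".
print(classify(play_tennis_tree, {"Outlook": "Sunny", "Temperature": "Hot",
                                  "Humidity": "Normal", "Wind": "Weak"}))

Each root-to-leaf path in this dictionary corresponds to one conjunct of the disjunctive expression above.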
APPROPRIATE PROBLEMS FOR DECISION TREE LEARNING

Decision tree learning is generally best suited to problems with the following
characteristics:

1. Instances are represented by attribute-value pairs – Instances are described by a fixed
set of attributes and their values (e.g., Temperature: Hot, Mild, Cold).

2. The target function has discrete output values – The decision tree assigns a Boolean
classification (e.g., yes or no) to each example. Decision tree methods easily extend to
learning functions with more than two possible output values.

3. Disjunctive descriptions may be required – Decision trees naturally represent
disjunctive expressions.

4. The training data may contain errors – Decision tree learning methods are robust to
errors, both errors in classifications of the training examples and errors in the attribute
values that describe these examples.

5. The training data may contain missing attribute values – Decision tree methods can
be used even when some training examples have unknown values.
• Decision tree learning has been applied to problems such as learning to classify
medical patients by their disease, equipment malfunctions by their cause, and loan
applicants by their likelihood of defaulting on payments.
• Such problems, in which the task is to classify examples into one of a discrete set of
possible categories, are often referred to as classification problems.
THE BASIC DECISION TREE LEARNING ALGORITHM

• Most algorithms that have been developed for learning decision trees are variations on
a core algorithm that employs a top-down, greedy search through the space of possible
decision trees. This approach is exemplified by the ID3 algorithm and its successor C4.5.
What is the ID3 algorithm?

• ID3 stands for Iterative Dichotomiser 3


• ID3 is a precursor to the C4.5 Algorithm.
• The ID3 algorithm was invented by Ross Quinlan in 1975
• Used to generate a decision tree from a given data set by employing a top-down,
greedy search, to test each attribute at every node of the tree.
• The resulting tree is used to classify future samples.

ID3 Algorithm
ID3(Examples, Target_attribute, Attributes)

Examples are the training examples. Target_attribute is the attribute whose value is to be predicted by
the tree. Attributes is a list of other attributes that may be tested by the learned decision tree. Returns a
decision tree that correctly classifies the given Examples.

• Create a Root node for the tree
• If all Examples are positive, Return the single-node tree Root, with label = +
• If all Examples are negative, Return the single-node tree Root, with label = -
• If Attributes is empty, Return the single-node tree Root, with label = most common value of
Target_attribute in Examples

• Otherwise Begin
    A ← the attribute from Attributes that best* classifies Examples
    The decision attribute for Root ← A
    For each possible value, vi, of A,
        Add a new tree branch below Root, corresponding to the test A = vi
        Let Examples_vi be the subset of Examples that have value vi for A
        If Examples_vi is empty
            Then below this new branch add a leaf node with label = most common value of
            Target_attribute in Examples
        Else below this new branch add the subtree ID3(Examples_vi, Target_attribute, Attributes – {A})
• End
• Return Root

* The best attribute is the one with highest information gain
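
The pseudocode above maps almost line for line onto a recursive function. The following Python sketch is only an illustration (it is not the slides' code); it assumes each training example is a dictionary of attribute values that also contains the target attribute, and it branches only on the attribute values that actually occur in the examples. The entropy and information-gain measures it uses are defined on the following slides.

import math
from collections import Counter

def entropy(labels):
    # Entropy of a list of class labels.
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(examples, attribute, target):
    # Gain(S, A): expected reduction in entropy from partitioning on `attribute`.
    labels = [e[target] for e in examples]
    gain = entropy(labels)
    for v in set(e[attribute] for e in examples):
        subset = [e[target] for e in examples if e[attribute] == v]
        gain -= (len(subset) / len(examples)) * entropy(subset)
    return gain

def id3(examples, target, attributes):
    labels = [e[target] for e in examples]
    # If all Examples share one label, return a single-node tree with that label.
    if len(set(labels)) == 1:
        return labels[0]
    # If Attributes is empty, return the most common value of Target_attribute.
    if not attributes:
        return Counter(labels).most_common(1)[0][0]
    # A <- the attribute that best classifies Examples (highest information gain).
    best = max(attributes, key=lambda a: information_gain(examples, a, target))
    tree = {best: {}}
    for v in set(e[best] for e in examples):
        # Branch for the test A = v; with a predeclared list of possible values,
        # an empty branch would instead get a leaf labelled with the majority class.
        subset = [e for e in examples if e[best] == v]
        tree[best][v] = id3(subset, target,
                            [a for a in attributes if a != best])
    return tree

Applied to the PlayTennis examples shown later, this sketch grows the same tree as the earlier figure (Outlook at the root, then Humidity and Wind below it).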



Which Attribute Is the Best Classifier?


• The central choice in the ID3 algorithm is selecting which attribute to test at each node in the tree.

• A statistical property called information gain measures how well a given attribute separates the training
examples according to their target classification.

• ID3 uses this information gain measure to select among the candidate attributes at each step while
growing the tree.

ENTROPY MEASURES HOMOGENEITY OF EXAMPLES

• To define information gain precisely, we begin by defining a measure called entropy.
Entropy measures the impurity of a collection of examples.

• Given a collection S, containing positive and negative examples of some target concept, the entropy of S
relative to this Boolean classification is

Entropy(S) = − p+ log2(p+) − p- log2(p-)

Where,
p+ is the proportion of positive examples in S
p- is the proportion of negative examples in S.

Example: Entropy

Suppose S is a collection of 14 examples of some boolean concept, including 9
positive and 5 negative examples. Then the entropy of S relative to this boolean
classification is

Entropy([9+, 5−]) = − (9/14) log2(9/14) − (5/14) log2(5/14) = 0.940
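
This calculation is easy to check with a few lines of Python. The snippet below is an illustrative sketch (not from the slides); the function name entropy is arbitrary.

import math

def entropy(pos, neg):
    # Entropy of a Boolean collection with `pos` positive and `neg` negative examples.
    total = pos + neg
    result = 0.0
    for count in (pos, neg):
        if count:                      # treat 0 * log2(0) as 0
            p = count / total
            result -= p * math.log2(p)
    return result

print(round(entropy(9, 5), 3))   # 0.940  -- the [9+, 5-] collection above
print(entropy(7, 7))             # 1.0    -- equal numbers of + and -
print(entropy(14, 0))            # 0.0    -- all members in the same class

The last two calls illustrate the boundary cases described on the next slide.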

• The entropy is 0 if all members of S belong to the same class

• The entropy is 1 when the collection contains an equal number of positive and
negative examples

• If the collection contains unequal numbers of positive and negative examples,
the entropy is between 0 and 1


INFORMATION GAIN MEASURES THE EXPECTED REDUCTION IN ENTROPY

• Entropy measures the impurity in a collection of training examples. Information gain is the expected
reduction in entropy caused by partitioning the examples according to the selected attribute.

• The information gain, Gain(S, A), of an attribute A relative to a collection of examples S, is defined as

Gain(S, A) = Entropy(S) − Σ (v ∈ Values(A)) (|Sv| / |S|) Entropy(Sv)

Where, Values(A) is the set of all possible values for attribute A and Sv is the subset of S for which attribute A
has value v, i.e., Sv = {s ϵ S | A(s) = v}
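
As a rough illustration (not part of the slides), Gain(S, A) can be written as a short Python function over one attribute, where the examples in S are given as (value-of-A, label) pairs; the entropy helper here generalises the earlier sketch to any number of classes.

import math
from collections import Counter

def entropy(labels):
    # Entropy of a list of class labels (works for more than two classes too).
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(pairs):
    # Gain(S, A) where `pairs` = [(value_of_A, label), ...] for every example in S.
    labels = [label for _, label in pairs]
    gain = entropy(labels)                                    # Entropy(S)
    for v in set(value for value, _ in pairs):                # v in Values(A)
        s_v = [label for value, label in pairs if value == v] # Sv
        gain -= (len(s_v) / len(pairs)) * entropy(s_v)
    return gain

The next slides apply this definition by hand to the Wind attribute.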

Example: Information gain

Let, Values(Wind) = {Weak, Strong}

S       = [9+, 5−]
SWeak   = [6+, 2−]
SStrong = [3+, 3−]

Information gain of attribute Wind:

Gain(S, Wind) = Entropy(S) − (8/14) Entropy(SWeak) − (6/14) Entropy(SStrong)

S = [9+, 5−],  SWeak = [6+, 2−],  SStrong = [3+, 3−]

Entropy(SWeak)  = −(6/8) log2(6/8) − (2/8) log2(2/8)
                = −0.75 × (−0.415) − 0.25 × (−2)
                = 0.311 + 0.5 = 0.811

Entropy(SStrong) = −(3/6) log2(3/6) − (3/6) log2(3/6) = 1.0

Information gain of attribute Wind:

Gain(S, Wind) = Entropy(S) − (8/14) Entropy(SWeak) − (6/14) Entropy(SStrong)
              = 0.940 − (8/14) × 0.811 − (6/14) × 1.00
              = 0.048
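
A quick arithmetic check of these figures (an illustrative sketch only):

import math

def H(p, n):
    # Boolean entropy from positive/negative counts.
    out = 0.0
    for c in (p, n):
        if c:
            q = c / (p + n)
            out -= q * math.log2(q)
    return out

gain_wind = H(9, 5) - (8/14) * H(6, 2) - (6/14) * H(3, 3)
print(round(H(6, 2), 3))     # 0.811
print(round(H(3, 3), 3))     # 1.0
print(round(gain_wind, 3))   # 0.048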

An Illustrative Example
• To illustrate the operation of ID3, consider the learning task represented by the
training examples in the table below.
• Here the target attribute is PlayTennis, which can have the values Yes or No for
different days.
• Consider the first step through the algorithm, in which the topmost node of the
decision tree is created.

Day Outlook Temperature Humidity Wind PlayTennis


D1 Sunny Hot High Weak No
D2 Sunny Hot High Strong No
D3 Overcast Hot High Weak Yes
D4 Rain Mild High Weak Yes
D5 Rain Cool Normal Weak Yes
D6 Rain Cool Normal Strong No
D7 Overcast Cool Normal Strong Yes
D8 Sunny Mild High Weak No
D9 Sunny Cool Normal Weak Yes
D10 Rain Mild Normal Weak Yes
D11 Sunny Mild Normal Strong Yes
D12 Overcast Mild High Strong Yes
D13 Overcast Hot Normal Weak Yes
D14 Rain Mild High Strong No

ID3 determines the information gain for each candidate attribute (i.e., Outlook, Temperature, Humidity,
and Wind), then selects the one with the highest information gain.

The information gain values for all four attributes are

• Gain(S, Outlook) = 0.246

• Gain(S, Humidity) = 0.151

• Gain(S, Wind) = 0.048

• Gain(S, Temperature) = 0.029

• According to the information gain measure, the Outlook attribute provides the
best prediction of the target attribute, PlayTennis, over the training examples.
Therefore, Outlook is selected as the decision attribute for the root node, and
branches are created below the root for each of its possible values i.e., Sunny,
Overcast, and Rain.

SRain = {D4, D5, D6, D10, D14}

Gain(SRain, Humidity)    = 0.970 – (2/5)(1.0) – (3/5)(0.918) = 0.019
Gain(SRain, Temperature) = 0.970 – (0/5)(0.0) – (3/5)(0.918) – (2/5)(1.0) = 0.019
Gain(SRain, Wind)        = 0.970 – (3/5)(0.0) – (2/5)(0.0) = 0.970
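
All of the gain values quoted in this example can be reproduced with a short script. The sketch below is illustrative (not from the slides): it hard-codes the 14 PlayTennis examples from the earlier table and repeats the entropy and gain helpers so it runs on its own. Because the slides round intermediate entropies (0.940, 0.811, 0.918, ...), the exact values printed here can differ from the quoted ones in the third decimal place.

import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def gain(rows, attribute, target="PlayTennis"):
    labels = [r[target] for r in rows]
    g = entropy(labels)
    for v in set(r[attribute] for r in rows):
        subset = [r[target] for r in rows if r[attribute] == v]
        g -= (len(subset) / len(rows)) * entropy(subset)
    return g

cols = ("Outlook", "Temperature", "Humidity", "Wind", "PlayTennis")
data = [  # D1..D14 from the table above
    ("Sunny", "Hot", "High", "Weak", "No"),      ("Sunny", "Hot", "High", "Strong", "No"),
    ("Overcast", "Hot", "High", "Weak", "Yes"),  ("Rain", "Mild", "High", "Weak", "Yes"),
    ("Rain", "Cool", "Normal", "Weak", "Yes"),   ("Rain", "Cool", "Normal", "Strong", "No"),
    ("Overcast", "Cool", "Normal", "Strong", "Yes"), ("Sunny", "Mild", "High", "Weak", "No"),
    ("Sunny", "Cool", "Normal", "Weak", "Yes"),  ("Rain", "Mild", "Normal", "Weak", "Yes"),
    ("Sunny", "Mild", "Normal", "Strong", "Yes"), ("Overcast", "Mild", "High", "Strong", "Yes"),
    ("Overcast", "Hot", "Normal", "Weak", "Yes"), ("Rain", "Mild", "High", "Strong", "No"),
]
S = [dict(zip(cols, row)) for row in data]

# Root-level gains (slides: Outlook 0.246, Humidity 0.151, Wind 0.048, Temperature 0.029)
for a in ("Outlook", "Humidity", "Wind", "Temperature"):
    print(a, round(gain(S, a), 3))

# Gains within SRain (slides: Humidity 0.019, Temperature 0.019, Wind 0.970) -- Wind wins
S_rain = [r for r in S if r["Outlook"] == "Rain"]
for a in ("Humidity", "Temperature", "Wind"):
    print(a, round(gain(S_rain, a), 3))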
1. Define decision tree learning. List and explain appropriate problems for decision tree learning. 6M
2. Explain the basic decision tree learning algorithm. 5M
3. Explain the concept of decision tree learning. Discuss the necessary measures required to select the
attribute for building a decision tree using ID3 algorithm. 11M
4. Write and explain decision tree for the following transactions:

Data set 1 Data set 2


5. What is a decision tree? Discuss the use of decision trees for classification, with an example. 8M
6. For the transactions shown in the table, compute the following:
(i) Entropy of the collection of transaction records of the table with respect to classification.
(ii) What are the information gains of a1 and a2 relative to the transactions of the table? 8M

7. Construct a decision tree to represent the following Boolean functions:


(i) A ∧ ~ B (ii) A ∨ [B ∧ C] (iii) A XOR B 6M

8. Write the ID3 algorithm. 6M


9. Define (i) Decision Tree, (ii) Entropy, (iii) Information Gain, (iv) Restriction Bias, (v) Preference Bias

10. Apply ID3 algorithm for constructing a decision tree for the following training examples. 10M

