
MODULE-4 CLASSIFICATION
 Introduction
 Decision Tree Induction
 Method for Comparing Classifiers
 Rule Based Classifiers
 Nearest Neighbor Classifiers
 Bayesian Classifiers

Introduction
Classification: Definition
Classification is the task of assigning objects to one of several predefined categories. The input data for a
classification task is a collection of records (the training set); each record contains a set of attributes, one
of which is the class attribute. The goal is to find a model for the class attribute as a function of the values
of the other attributes.

Goal: previously unseen records should be assigned a class as accurately as possible

A test set is used to determine the accuracy of the model. Usually, the given data set is divided into
training and test sets, with training set used to build the model and test set used to validate it.

Example applications:

 Detecting spam email messages based upon the message header and content.
 Categorizing cells as malignant or benign based upon the results of MRI scans.
 Classifying galaxies based upon their shapes.
 Categorizing news stories as finance, weather, entertainment, sports, etc.
 Classifying credit card transactions as legitimate or fraudulent.
General Approach to Solving a Classification Problem

 A classification technique (or classifier) is a systematic approach to building classification models
from an input data set.

 Each technique employs a learning algorithm to identify a model that best fits the relationship
between the attribute set and class label of the input data.
 The model generated by a learning algorithm should both fit the input data well and correctly
predict the class labels of records it has never seen before.
 Therefore, a key objective of the learning algorithm is to build models with good generalization
capability; i.e., models that accurately predict the class labels of previously unknown records.

Evaluation of the performance of a classification model:


Performance evaluation is based on the counts of test records correctly and incorrectly predicted by the
model. These counts are tabulated in a table known as a confusion matrix.

Most classification algorithms seek models that attain the highest accuracy, or equivalently, the lowest
error rate when applied to the test set.
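As a small illustration (the counts and function name below are hypothetical, not from these notes), the sketch computes accuracy and error rate from a binary confusion matrix with true-positive, false-negative, false-positive and true-negative counts:

def evaluate(tp, fn, fp, tn):
    # accuracy = fraction of test records predicted correctly
    # error rate = fraction of test records predicted incorrectly
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total
    error_rate = (fp + fn) / total
    return accuracy, error_rate

# Example: 50 test records, 45 of them predicted correctly
print(evaluate(tp=20, fn=2, fp=3, tn=25))   # -> (0.9, 0.1)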
Classification Techniques:
• Decision Tree based Methods
• Rule-based Methods
• Memory based reasoning
• Neural Networks
• Naïve Bayes and Bayesian Belief Networks
• Support Vector Machines

Decision Tree Induction


A decision tree has three types of nodes:
A root node that has no incoming edges and zero or more outgoing edges.
Internal nodes, each of which has exactly one incoming edge and two or more outgoing edges.
Leaf or terminal nodes, each of which has exactly one incoming edge and no outgoing edges.
In a decision tree, each leaf node is assigned a class label. The nonterminal nodes, which include the root
and other internal nodes, contain attribute test conditions to separate records that have different
characteristics.
In principle, there are exponentially many decision trees that can be constructed from a given set of
attributes.

Hunt's Algorithm
In Hunt's algorithm, a decision tree is grown in a recursive fashion by partitioning the training records
into successively purer subsets. Let Dt be the set of training records that are associated with node t and
y = {y1, y2, ..., yc} be the class labels. The following is a recursive definition of Hunt's algorithm.
Step 1: If all the records in Dt belong to the same class yt, then t is a leaf node labeled as yt.
Step 2: If Dt contains records that belong to more than one class, an attribute test condition is selected to
partition the records into smaller subsets. A child node is created for each outcome of the test condition
and the records in Dt are distributed to the children based on the outcomes. The algorithm is then
recursively applied to each child node.
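The recursion can be summarized in a short sketch (an illustrative outline, not the exact pseudocode from the text; the record layout and the deliberately naive choice of splitting attribute are assumptions for illustration):

from collections import Counter

def majority_class(labels):
    # most frequent class label among the given records
    return Counter(labels).most_common(1)[0][0]

def hunt(records, attrs, parent_labels=None):
    """records: list of (attribute_dict, class_label) pairs; attrs: set of attribute names.
    Returns a class label (leaf) or a nested (attribute, {value: subtree}) node."""
    labels = [y for _, y in records]
    if not records:                              # empty partition: fall back to the parent's majority class
        return majority_class(parent_labels)
    if len(set(labels)) == 1:                    # Step 1: all records in Dt belong to the same class
        return labels[0]
    # Step 2: choose an attribute test condition; here we simply take any attribute that still
    # splits the data (choosing the *best* test condition is discussed later in these notes).
    split_attr = next((a for a in attrs if len({x[a] for x, _ in records}) > 1), None)
    if split_attr is None:                       # identical attribute values: label with the majority class
        return majority_class(labels)
    children = {}
    for v in {x[split_attr] for x, _ in records}:
        subset = [(x, y) for x, y in records if x[split_attr] == v]
        children[v] = hunt(subset, attrs - {split_attr}, labels)
    return (split_attr, children)

# Tiny hypothetical data set in the flavor of the loan example:
data = [({"HomeOwner": "Yes", "Married": "No"}, "No"),
        ({"HomeOwner": "No",  "Married": "Yes"}, "No"),
        ({"HomeOwner": "No",  "Married": "No"},  "Yes")]
print(hunt(data, {"HomeOwner", "Married"}))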

To illustrate how the algorithm works, consider the problem of predicting whether a loan applicant will
repay her loan obligations or become delinquent, subsequently defaulting on her loan.
The initial tree for the classification problem contains a single node with class label Defaulted = No (see
Figure 4.7a), which means that most of the borrowers successfully repaid their loans. The tree, however,
needs to be refined since the root node contains records from both classes.
The records are subsequently divided into smaller subsets based on the outcomes of the Home Owner test

condition as shown in Figure 4.7(b). The justification for choosing this attribute test condition will be
discussed later. For now, we will assume that this is the best criterion for splitting the data at this point.
Hunt's algorithm is then applied recursively to each child of the root node. From the training set given in
Figure 4.6, notice that all borrowers who are home owners successfully repaid their loans. The left child
of the root is therefore a leaf node labeled Defaulted = No (see Figure 4.7(b)).
For the right child, we need to continue applying the recursive step of Hunt's algorithm until all the
records belong to the same class. The trees resulting from each recursive step are shown in Figures 4.7(c)
and (d).

Design Issues of Decision Tree Induction:


A learning algorithm for inducing decision trees must address the following two issues.
1) How should the training records be split?
Each recursive step of the tree-growing process must select an attribute test condition to divide the records
into smaller subsets. To implement this step, the algorithm must provide a method for specifying the test
condition for different attribute types as well as an objective measure for evaluating the goodness of each
test condition.
2) How should the splitting procedure stop?
A stopping condition is needed to terminate the tree-growing process. A possible strategy is to continue
expanding a node until either all the records belong to the same class or all the records have identical
attribute values. Although both conditions are sufficient to stop any decision tree induction algorithm,
other criteria can be imposed to allow the tree-growing procedure to terminate earlier.

Methods for Expressing Attribute Test Conditions:


Decision tree induction algorithms must provide a method for expressing an attribute test condition and its
corresponding outcomes for different attribute types.
Binary Attributes: The test condition for a binary attribute generates two potential outcomes, as shown
in Figure 4.8.

Nominal Attributes :Since a nominal attribute can have many values, its test condition can be expressed
in two ways, as shown in Figure 4.9. For a multiway split (Figure 4.9(a)), the number of outcomes
depends on the number of distinct values for the corresponding attribute. For example, if an attribute such
as marital status has three distinct values (single, married, or divorced), its test condition will produce a
three-way split.
Figure 4.9(b) illustrates three different ways of grouping the attribute values for marital status into two
subsets.

Ordinal Attributes: Ordinal attributes can also produce binary or multiway splits. Ordinal attribute
values can be grouped as long as the grouping does not violate the order property of the attribute values.
Figure 4.10 illustrates various ways of splitting training records based on the Shirt Size attribute.
The groupings shown in Figures 4.10(a) and (b) preserve the order among the attribute values, whereas
the grouping shown in Figure 4.10(c) violates this property because it combines the attribute values Small
and Large into the same partition while Medium and Extra Large are combined into another partition.

Continuous Attributes: For continuous attributes, the test condition can be expressed as a
comparison test (A < v) or (A >= v) with binary outcomes, or as a range query with outcomes of the
form vi <= A < vi+1, for i = 1, 2, ..., k. The difference between these approaches is shown in Figure 4.11.

How to determine the Best Split:


Greedy approach:
– Nodes with homogeneous class distribution are preferred
Need a measure of node impurity:
– Non-homogeneous: high degree of impurity
– Homogeneous: low degree of impurity

Measures of Node Impurity:


• Gini Index
• Entropy
• Misclassification error
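In standard form, these measures are defined for a node t with c classes as:

\mathrm{Gini}(t) = 1 - \sum_{i=0}^{c-1} \big[p(i|t)\big]^2

\mathrm{Entropy}(t) = -\sum_{i=0}^{c-1} p(i|t)\,\log_2 p(i|t)

\mathrm{Classification\ error}(t) = 1 - \max_i \big[p(i|t)\big]

(with 0 log2 0 taken to be 0 in the entropy calculation)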

where p(i|t) denotes the fraction of records belonging to class i at a given node t and c is the number of
classes.
The measures developed for selecting the best split are often based on the degree of impurity of the child
nodes. The smaller the degree of impurity, the more skewed the class distribution.

Node N1 has the lowest impurity value, followed by N2 and N3.


To determine how well a test condition performs, we need to compare the degree of impurity of the parent
node (before splitting) with the degree of impurity of the child nodes (after splitting). The larger their
difference, the better the test condition. The gain is a criterion that can be used to determine the goodness
of a split.
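In symbols (standard formulation), the gain of a split is

\Delta = I(\mathrm{parent}) - \sum_{j=1}^{k} \frac{N(v_j)}{N}\, I(v_j)

where I(·) is the impurity measure, N is the number of records at the parent node, k is the number of child nodes, and N(vj) is the number of records associated with child node vj. When entropy is used as the impurity measure, the gain is known as the information gain. The short sketch below (hypothetical class counts) computes the gain of one candidate split using the Gini index:

def gini(counts):
    # Gini impurity from a list of class counts at a node
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gain(parent_counts, children_counts):
    # gain = impurity(parent) - weighted impurity of the children
    n = sum(parent_counts)
    weighted = sum(sum(child) / n * gini(child) for child in children_counts)
    return gini(parent_counts) - weighted

# Parent node with class counts [7, 5]; a candidate split produces children [6, 1] and [1, 4]
print(round(gain([7, 5], [[6, 1], [1, 4]]), 4))   # -> approximately 0.21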

Characteristics of Decision Tree Based Classification:


Advantages :
 Decision tree induction is a nonparametric approach for building classification models. In other
words, it does not require any prior assumptions regarding the type of probability distributions satisfied by
the class and other attributes.
 Finding an optimal decision tree is an NP-complete problem; practical algorithms therefore rely on greedy, heuristic strategies.
 Techniques developed for constructing decision trees are computationally inexpensive, making
it possible to quickly construct models even when the training set size is very large. Once a decision tree
has been built, classifying a test record is extremely fast, with a worst-case complexity of O(w), where
w is the maximum depth of the tree.
 Decision trees, especially smaller-sized trees, are relatively easy to interpret.
 Decision tree algorithms are quite robust to the presence of noise.
 The presence of redundant attributes does not adversely affect the accuracy of decision trees.

Disadvantages:
 Since most decision tree algorithms employ a top-down, recursive partitioning approach, the number of
records becomes smaller as we traverse down the tree. At the leaf nodes, the number of records may be
too small to make a statistically significant decision about the class representation of the nodes.

 A subtree can be replicated multiple times in a decision tree, as illustrated in Figure 4.19. This makes
the decision tree more complex than necessary and perhaps more difficult to interpret. Such a situation
can arise from decision tree implementations that rely on a single attribute test condition at each internal
node.


Exercises:

Model Overfitting:


The errors committed by a classification model are generally divided into two types: training errors and
generalization errors.
Training error is the number of misclassification errors committed on the training records.
Generalization error is the expected error of the model on test records.
A good model must have low training error as well as low generalization error.

Underfitting : The training and test error rates of the model are large when the size of the tree is very
small. This situation is known as model underfitting.
Underfitting occurs because the model has yet to learn the true structure of the data. As a result, it
performs poorly on both the training and the test sets.

Overfitting: As the number of nodes in the decision tree increases, the tree will initially have fewer training
and test errors. However, once the tree becomes too large, its test error rate begins to increase even though its
training error rate continues to decrease. This phenomenon is known as model overfitting.

Reasons for overfitting:


• Presence of Noise
• Lack of Representative Samples

Figure shows the training and test error rates of the decision tree.

Estimating Generalization Errors:


Generalization error: the error on the test set, Σ e'(t).
Methods for estimating generalization errors:
1) Optimistic approach: e'(t) = e(t)
2) Pessimistic approach:
o For each leaf node: e'(t) = e(t) + 0.5
 Example: for a tree with 30 leaf nodes and 10 errors on training
 (out of 1000 instances): Training error = 10/1000 = 1%
 Generalization error = (10 + 30 × 0.5)/1000 = 2.5% (see the sketch below)
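A tiny sketch of these two estimates (illustrative only; the 0.5 penalty per leaf follows the pessimistic estimate described above):

def optimistic_error(train_errors, n_records):
    # optimistic estimate: generalization error equals the training error, e'(t) = e(t)
    return train_errors / n_records

def pessimistic_error(train_errors, n_leaves, n_records, penalty=0.5):
    # pessimistic estimate: add a penalty (here 0.5) for every leaf node
    return (train_errors + penalty * n_leaves) / n_records

print(optimistic_error(10, 1000))        # 0.01  (1%)
print(pessimistic_error(10, 30, 1000))   # 0.025 (2.5%)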

Reduced error pruning (REP):


Uses validation data set to estimate generalization error

How to Address Overfitting:


1) Pre-Pruning (Early Stopping Rule)
 Stop the algorithm before it becomes a fully-grown tree
 Typical stopping conditions for a node:
o Stop if all instances belong to the same class
o Stop if all the attribute values are the same
 More restrictive conditions:
o Stop if number of instances is less than some user-specified threshold
o Stop if the class distribution of instances is independent of the available features
(e.g., using a χ² test)
o Stop if expanding the current node does not improve impurity measures (e.g.,
Gini or information gain)

2) Post-pruning
 Grow decision tree to its entirety
 Trim the nodes of the decision tree in a bottom-up fashion
 If generalization error improves after trimming, replace the sub-tree by a leaf node whose class label
is determined from the majority class of instances in the sub-tree (see the sketch below).
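A minimal sketch of the pruning decision at one internal node (illustrative; the function name is an assumption, and validation error is used as the generalization-error estimate, as in reduced error pruning):

def prune_or_keep(subtree_validation_errors, leaf_validation_errors):
    # Replace the subtree with a majority-class leaf if the leaf makes
    # no more errors on the validation set than the subtree does.
    return "prune to leaf" if leaf_validation_errors <= subtree_validation_errors else "keep subtree"

# Example: the subtree misclassifies 9 validation records, while a single
# majority-class leaf would misclassify only 7 of them -> prune.
print(prune_or_keep(9, 7))   # 'prune to leaf'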

Exercises:

Rule-Based Classifier
A rule-based classifier is a technique for classifying records using a collection of "if . . . then . . ." rules.
The rules of the model are represented in disjunctive normal form, R = (r1 ∨ r2 ∨ ... ∨ rk), where R is
known as the rule set and the ri's are the classification rules or disjuncts.

The left-hand side of the rule is called the rule antecedent or precondition.
The right-hand side of the rule is called the rule consequent, which contains the predicted class yi


A lemur triggers rule R3, so it is classified as a mammal.
A turtle triggers both R4 and R5.
A dogfish shark triggers none of the rules.

Rule Coverage and Accuracy


Coverage of a rule:
– the fraction of records that satisfy the antecedent of the rule
Accuracy of a rule:
– the fraction of records satisfying the antecedent that also satisfy the consequent (see the sketch below)
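A small sketch of both measures over a data set (the rule, attribute names, and record layout are hypothetical):

def coverage_and_accuracy(antecedent, rule_class, records):
    """records: list of (attribute_dict, class_label).
    antecedent: function mapping an attribute_dict to True/False.
    Returns (coverage, accuracy) of the rule over the data set."""
    covered = [(x, y) for x, y in records if antecedent(x)]
    coverage = len(covered) / len(records)
    accuracy = (sum(1 for _, y in covered if y == rule_class) / len(covered)) if covered else 0.0
    return coverage, accuracy

# Hypothetical rule: (Status = Single) -> Defaulted = No
data = [({"Status": "Single"}, "No"), ({"Status": "Single"}, "Yes"),
        ({"Status": "Married"}, "No"), ({"Status": "Divorced"}, "No")]
print(coverage_and_accuracy(lambda x: x["Status"] == "Single", "No", data))   # -> (0.5, 0.5)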

Mutually Exclusive Rules: The rules in a rule set R are mutually exclusive if no two rules in R are
triggered by the same record. This property ensures that every record is covered by at most one rule in R.
Exhaustive Rules: A rule set R has exhaustive coverage if there is a rule for each combination of
attribute values. This property ensures that every record is covered by at least one rule in R.
Ordered Rules: In this approach, the rules in a rule set are ordered in decreasing order of their priority, which can
be defined in many ways (e.g., based on accuracy, coverage, total description length, or the order in which
the rules are generated). An ordered rule set is also known as a decision list. When a test record is
presented, it is classified by the highest-ranked rule that covers the record. This avoids the problem of
having conflicting classes predicted by multiple classification rules.
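A minimal decision-list classifier (a sketch; the rules, attribute names, and default class below are hypothetical and only loosely echo the vertebrate example, with rules assumed to be ordered from highest to lowest priority):

def classify(record, ordered_rules, default_class):
    # ordered_rules: list of (antecedent_fn, class_label), highest priority first.
    # The record gets the class of the first (highest-ranked) rule that covers it.
    for antecedent, label in ordered_rules:
        if antecedent(record):
            return label
    return default_class    # no rule covers the record

rules = [(lambda x: x["GiveBirth"] == "yes", "mammal"),
         (lambda x: x["LiveInWater"] == "yes", "fish")]
print(classify({"GiveBirth": "no", "LiveInWater": "yes"}, rules, "reptile"))   # -> fish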

Rule-Ordering Schemes
Rule-based ordering
Individual rules are ranked based on their quality

 This approach orders the individual rules by some rule quality measure.
 This ordering scheme ensures that every test record is classified by the "best" rule covering it.
Class-based ordering
Rules that belong to the same class appear together
In this approach, rules that belong to the same class appear together in the rule set R. The rules are then
collectively sorted on the basis of their class information.

Rule Evaluation:

Characteristics of Rule-Based Classifiers:


A rule-based classifier has the following characteristics:
 The expressiveness of a rule set is almost equivalent to that of a decision tree because a decision tree can
be represented by a set of mutually exclusive and exhaustive rules. Both rule-based and decision tree
classifiers create rectilinear partitions of the attribute space and assign a class to each partition.
Nevertheless, if the rule-based classifier allows multiple rules to be triggered for a given record, then a
more complex decision boundary can be constructed.

 Rule-based classifiers are generally used to produce descriptive models that are easier to interpret, but
they give comparable performance to the decision tree classifier.

Nearest-Neighbor Classifiers
Requires three things
– The set of stored records
– Distance Metric to compute distance between records
– The value of k, the number of nearest neighbors to retrieve

To classify an unknown record:


– Compute distance to other training records
– Identify k nearest neighbors
– Use class labels of nearest neighbors to determine the class label of unknown record
– (e.g., by taking majority vote)

The k-nearest neighbors of a record x are the data points that have the k smallest distances to x.
Compute the distance between two points:
– Euclidean distance

Determine the class from the nearest-neighbor list:
– take the majority vote of class labels among the k-nearest neighbors

Choosing the value of k (see the sketch below):
– If k is too small, the classifier is sensitive to noise points
– If k is too large, the neighborhood may include points from other classes
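A compact sketch of the procedure just described (the training points and labels are hypothetical):

import math
from collections import Counter

def knn_classify(x, training, k=3):
    """training: list of (feature_vector, class_label). Classify x by majority
    vote among its k nearest neighbors under Euclidean distance."""
    dist = lambda a, b: math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    neighbors = sorted(training, key=lambda rec: dist(x, rec[0]))[:k]   # k closest records
    votes = Counter(label for _, label in neighbors)                    # majority vote
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "+"), ((1.2, 0.8), "+"), ((5.0, 5.0), "-"), ((5.5, 4.5), "-")]
print(knn_classify((1.1, 0.9), train, k=3))   # -> '+': two of the three nearest neighbors are '+'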

Characteristics of Nearest-Neighbor Classifiers:


 Nearest-neighbor classification is part of a more general technique known as instance-based
learning, which uses specific training instances to make predictions without having to maintain an
abstraction (or model) derived from data. Instance-based learning algorithms require a proximity
measure to determine the similarity or distance between instances and a classification function that
returns the predicted class of a test instance based on its proximity to other instances.

 Lazy learners such as nearest-neighbor classifiers do not require model building. However,
classifying a test example can be quite expensive because we need to compute the proximity
values individually between the test and training examples.

 Nearest-neighbor classifiers can produce arbitrarily shaped decision boundaries. Such boundaries
provide a more flexible model representation compared to decision tree and rule-based classifiers
that are often constrained to rectilinear decision boundaries.
 Nearest-neighbor classifiers can produce wrong predictions unless the appropriate proximity
measure and data preprocessing steps are taken.

Bayesian Classifiers
Bayes’ Theorem:
Bayes' theorem is a way to figure out conditional probability. Conditional probability is the probability of
an event happening, given that it has some relationship to one or more other events.

Bayes’ Theorem Problems Example #1


In a particular pain clinic, 10% of patients are prescribed narcotic pain killers. Overall, five percent of the
clinic's patients are addicted to narcotics (including pain killers and illegal substances). Out of all the
people prescribed pain pills, 8% are addicts. If a patient is an addict, what is the probability that they will
be prescribed pain pills?
Step 1: Figure out what your event "A" is from the question. That information is in the italicized part
of this particular question. The event that happens first (A) is being prescribed pain pills. That is given as
10%.
Step 2: Figure out what your event "B" is from the question. That information is also in the italicized
part of this particular question. Event B is being an addict. That is given as 5%.
Step 3: Figure out the probability of event B (Step 2) given event A (Step 1). In other words, find
P(B|A). We want to know: given that people are prescribed pain pills, what is the probability they
are addicts? That is given in the question as 8%, or 0.08.
Step 4: Insert your answers from Steps 1, 2 and 3 into the formula and solve.
P(A|B) = P(B|A) * P(A) / P(B) = (0.08 * 0.1)/0.05 = 0.16
The probability of an addict being prescribed pain pills is 0.16 (16%).

Bayes’ Theorem Problems Example #2


Given:

– A doctor knows that meningitis causes stiff neck 50% of the time
– Prior probability of any patient having meningitis is 1/50,000
– Prior probability of any patient having stiff neck is 1/20

If a patient has a stiff neck, what is the probability that he/she has meningitis?
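Working this out with Bayes' theorem (M = meningitis, S = stiff neck), using the numbers given above:

P(M \mid S) = \frac{P(S \mid M)\, P(M)}{P(S)} = \frac{0.5 \times 1/50{,}000}{1/20} = 0.0002

So even given a stiff neck, the probability of meningitis is only 0.02%.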

Using the Bayes Theorem for Classification:


Let X denote the attribute set and Y denote the class variable. If the class variable has a non-deterministic
relationship with the attributes, then we can treat X and Y as random variables and capture their
relationship probabilistically using P(Y|X). This conditional probability is also known as the posterior
probability for Y, as opposed to its prior probability, P(Y).
During the training phase, we need to learn the posterior probabilities P(Y|X) for every combination of X
and Y based on information gathered from the training data.
By knowing these probabilities, a test record X' can be classified by finding the class Y' that
maximizes the posterior probability, P(Y'|X').
To illustrate this approach, consider the task of predicting whether a loan borrower will default on their
payments.
Figure 5.9 shows a training set with the following attributes: Home Owner, Marital Status, and Annual
Income. Loan borrowers who defaulted on their payments are classified as Yes, while those who repaid
their loans are classified as No.

Suppose we are given a test record with the following attribute set:
X :(Home Owner : No, Marital Status : Married, Annual Income : $120K).
To classify the record, we need to compute the posterior probabilities P(Yes|X) and P(No|X) based on the
information available in the training data.
If P(Yes|X) > P(No|X), then the record is classified as Yes; otherwise, it is classified as No.
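A naive Bayes classifier makes this computation tractable by assuming the attributes are conditionally independent given the class, so that (standard formulation, with d attributes):

P(Y \mid X) = \frac{P(Y)\, \prod_{i=1}^{d} P(X_i \mid Y)}{P(X)}

Since P(X) is the same for every class, it suffices to choose the class Y that maximizes P(Y) \prod_{i=1}^{d} P(X_i \mid Y).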


Estimating Conditional Probabilities for Continuous Attributes:


There are two ways to estimate the class-conditional probabilities for continuous attributes in naive
Bayes classifiers:
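The two approaches are not spelled out above; commonly they are (a) discretizing each continuous attribute into ordinal bins, or (b) assuming a parametric distribution, such as a Gaussian, for each class and estimating its parameters from the training records. A small sketch of the Gaussian approach (the income values and class are hypothetical):

import math

def gaussian_pdf(x, mean, var):
    # class-conditional density used in place of P(Xi = x | Y = yj) under a normal assumption
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Hypothetical Annual Income values (in $K) of the training records belonging to class No:
incomes_no = [125, 100, 70, 120, 60, 220, 75, 90, 75]
mean = sum(incomes_no) / len(incomes_no)
var = sum((x - mean) ** 2 for x in incomes_no) / (len(incomes_no) - 1)   # sample variance
print(gaussian_pdf(120, mean, var))   # density used for P(Annual Income = 120K | No)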

Example of Naïve Bayes Classifier: Consider the training record


M-estimate of Conditional Probability:

where n is the total number of training instances from class yj, nc is the number of training examples from class
yj that take on the value xi, m is a parameter known as the equivalent sample size, and p is a user-
specified parameter.
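In standard form (the equation itself does not appear above), the m-estimate of the conditional probability is:

P(x_i \mid y_j) = \frac{n_c + m\,p}{n + m}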

Characteristics of Naive Bayes Classifiers:


Naive Bayes classifiers generally have the following characteristics:
 They are robust to isolated noise points because such points are averaged out when estimating
conditional probabilities from data.
 Naive Bayes classifiers can also handle missing values by ignoring the example during model
building and classification.
 They are robust to irrelevant attributes.
 Correlated attributes can degrade the performance of naive Bayes classifiers because the
conditional independence assumption no longer holds for such attributes.

Bayesian Belief Networks


Bayesian networks are a more general and flexible form of Bayesian probabilistic modeling than the naive Bayes classifier.
A Bayesian network is a graphical model that encodes probabilistic relationships among variables of
interest.

A Bayesian belief network (BBN), or simply, Bayesian network, provides a graphical representation of
the probabilistic relationships among a set of random variables. There are two key elements of a Bayesian
network:
1. A directed acyclic graph (dag) encoding the dependence relationships among a set of variables.
2. A probability table associating each node to its immediate parent nodes.

Consider three random variables,A, B, and C, in which A and B are independent variables and each has a
direct influence on a third variable, C.
The relationships among the variables can be summarized into the directed acyclic graph shown in Figure
5.12(a).
Each node in the graph represents a variable, and each arc asserts the dependence relationship between the
pair of variables. If there is a directed arc from X to Y, then X is the parent of Y and Y is the child of X.
Furthermore, if there is a directed path in the network from X to Z, then X is an ancestor of Z, while Z is a
descendant of X.
For example, in the diagram shown in Figure 5.12(b), A is a descendant of D and D is an ancestor of
B. Both B and D are also non-descendants of A.

In the diagram shown in Figure 5.12(b), A is conditionally independent of both B and D given C because
the nodes for B and D are non-descendants of node A.

The conditional independence assumption made by a naive Bayes classifier can also be represented using
a Bayesian network, as shown in Figure 5.12(c), where y is the target class and {X1, X2, ..., Xd} is the
attribute set.
Besides the conditional independence conditions imposed by the network topology, each node is also
associated with a probability table.
1. If a node X does not have any parents, then the table contains only the prior probability P(X).
2. If a node X has only one parent, Y, then the table contains the conditional probability P(X|Y).
3. If a node X has multiple parents, {Y1, Y2, ..., Yn}, then the table contains the conditional
probability P(X|Y1, Y2, ..., Yn).

EXAMPLE:
You have a new burglar alarm installed at home.
 It is fairly reliable at detecting burglary, but also sometimes responds to minor earthquakes.
 You have two neighbors, Ali and Veli, who promised to call you at work when they hear the alarm.
 Ali always calls when he hears the alarm, but sometimes confuses telephone ringing with the alarm
and calls too.
 Veli likes loud music and sometimes misses the alarm.
 Given the evidence of who has or has not called, we would like to estimate the probability of a
burglary (see the factorization sketch below).
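A sketch of how the network organizes this computation, assuming the usual structure for this example (Burglary B and Earthquake E as parents of Alarm A, which in turn is the parent of AliCalls and VeliCalls):

P(B, E, A, \mathit{Ali}, \mathit{Veli}) = P(B)\,P(E)\,P(A \mid B, E)\,P(\mathit{Ali} \mid A)\,P(\mathit{Veli} \mid A)

The probability of a burglary given, say, that both neighbors called is then obtained by summing this joint probability over the unobserved variables E and A and normalizing over B.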

Characteristics of BBN
Following are some of the general characteristics of the BBN method:
 BBN provides an approach for capturing the prior knowledge of a particular domain using a
graphical model. The network can also be used to encode causal dependencies among variables.

 Constructing the network can be time consuming and requires a large amount of effort. However,
once the structure of the network has been determined, adding a new variable is quite straightforward.

 Bayesian networks are well suited to dealing with incomplete data. Instances with missing attributes
can be handled by summing or integrating the probabilities over all possible values of the attribute.

 Because the data is combined probabilistically with prior knowledge, the method is quite robust to
model overfitting.
