
Data Mining

Lecture 6:
Classification

Classification: Definition
 Given a collection of records (training set):
 Each record contains a set of attributes; one of the attributes is the class.
 Find a model for the class attribute as a function f of the values of the other attributes.
 Goal: previously unseen records should be assigned a class as accurately as possible.
 A test set is used to determine the accuracy of the model.
 Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.
What is classification?
 Classification is the task of learning a target function f that maps attribute set x to one of the predefined class labels y.
 One of the attributes is the class attribute; in this case: Cheat.
 Two class labels (or classes): Yes (1), No (0).

 Tid  Refund  Marital Status  Taxable Income  Cheat
  1   Yes     Single          125K            No
  2   No      Married         100K            No
  3   No      Single           70K            No
  4   Yes     Married         120K            No
  5   No      Divorced         95K            Yes
  6   No      Married          60K            No
  7   Yes     Divorced        220K            No
  8   No      Single           85K            Yes
  9   No      Married          75K            No
 10   No      Single           90K            Yes
Classification: A Two-Step Process
1. Model construction: describing a set of predetermined classes.
 Each tuple is assumed to belong to a predefined class, as determined by the class label attribute.
 The set of tuples used for model construction is the training set.
 The model is represented as classification rules, decision trees, etc.
2. Model usage: for classifying future or unknown objects.
 Estimate the accuracy of the model:
 The known label of each test sample is compared with the classified result from the model.
 The accuracy rate is the percentage of test set samples that are correctly classified by the model.
 The test set is independent of the training set.
 If the accuracy is acceptable, use the model to classify data tuples whose class labels are not known.
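The two steps map directly onto a typical machine-learning library workflow. Below is a minimal sketch using scikit-learn (an assumption; any comparable library would do), with a tiny invented encoding of the Refund/Income data; the feature values and column choices are illustrative only.

```python
# Minimal sketch of the two-step classification process (assumes scikit-learn is installed).
# The tiny toy dataset below is invented for illustration.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# A labeled data set: attribute values per record, e.g. [refund (1=yes), taxable income in K].
X = [[1, 125], [0, 100], [0, 70], [1, 120], [0, 95],
     [0, 60], [1, 220], [0, 85], [0, 75], [0, 90]]
y = [0, 0, 0, 0, 1, 0, 0, 1, 0, 1]   # class label (cheat: 1 = yes, 0 = no)

# Step 1: model construction -- learn a classifier from the training set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)
model = DecisionTreeClassifier()
model.fit(X_train, y_train)

# Step 2: model usage -- estimate accuracy on the independent test set,
# then classify records whose class labels are unknown.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("unseen record:", model.predict([[0, 80]]))
```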
General approach to classification

 Training set consists of records with known class labels.
 Training set is used to build a classification model.
 A labeled test set of previously unseen data records is used to evaluate the quality of the model.
 The classification model is applied to new records with unknown class labels.
Illustrating Classification Task

Training Set → Learning algorithm (Induction) → Model; the learned Model is then applied to the Test Set (Deduction).

Training Set:
 Tid  Attrib1  Attrib2  Attrib3  Class
  1   Yes      Large    125K     No
  2   No       Medium   100K     No
  3   No       Small     70K     No
  4   Yes      Medium   120K     No
  5   No       Large     95K     Yes
  6   No       Medium    60K     No
  7   Yes      Large    220K     No
  8   No       Small     85K     Yes
  9   No       Medium    75K     No
 10   No       Small     90K     Yes

Test Set (class labels unknown):
 Tid  Attrib1  Attrib2  Attrib3  Class
 11   No       Small     55K     ?
 12   Yes      Medium    80K     ?
 13   Yes      Large    110K     ?
 14   No       Small     95K     ?
 15   No       Large     67K     ?
Process (1): Model Construction

Training Data → Classification Algorithm → Classifier (Model)

Training Data:
 NAME   RANK            YEARS  TENURED
 Mike   Assistant Prof    3    no
 Mary   Assistant Prof    7    yes
 Bill   Professor         2    yes
 Jim    Associate Prof    7    yes
 Dave   Assistant Prof    6    no
 Anne   Associate Prof    3    no

Classifier (Model):
 IF rank = ‘professor’ OR years > 6 THEN tenured = ‘yes’
Process (2): Using the Model in Prediction

Testing Data → Classifier → accuracy estimate; Unseen Data → Classifier → predicted class

Testing Data:
 NAME     RANK            YEARS  TENURED
 Tom      Assistant Prof    2    no
 Merlisa  Associate Prof    7    no
 George   Professor         5    yes
 Joseph   Assistant Prof    7    yes

Unseen Data: (Jeff, Professor, 4) → Tenured?
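The learned model here is just the rule "IF rank = 'professor' OR years > 6 THEN tenured = 'yes'". A minimal sketch of using such a model on the unseen record (Jeff, Professor, 4); the function name and record encoding are illustrative, not part of the slide:

```python
# Hypothetical encoding of the rule learned in Process (1); plain Python, not library code.
def predict_tenured(rank, years):
    """Apply the learned classification rule to one record."""
    return "yes" if rank == "professor" or years > 6 else "no"

# Model usage (Process (2)): classify the unseen record (Jeff, Professor, 4).
print(predict_tenured("professor", 4))   # -> 'yes'
```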
Evaluating Classification Methods
 Accuracy:
 classifier accuracy: predicting the class label.
 Speed:
 time to construct the model (training time).
 time to use the model (classification/prediction time).
 Robustness:
 handling noise and missing values.
 Scalability:
 efficiency in disk-resident databases.
Classification Techniques

 Decision Tree-based Methods.
 Rule-based Methods.
 Memory-based reasoning.
 Neural Networks.
 Naïve Bayes and Bayesian Belief Networks.
 Support Vector Machines.
Decision Trees
 Decision tree:
 A flow-chart-like tree structure.
 Internal node denotes a test on an attribute.
 Branch represents an outcome of the test.
 Leaf nodes represent class labels or class distribution.

Decision Tree Classification Task

Training Set → Tree Induction algorithm (Induction) → Decision Tree Model; the model is then applied to the Test Set (Deduction). The training and test sets are the same Tid / Attrib1 / Attrib2 / Attrib3 / Class tables shown earlier.
Example of a Decision Tree

Training Data: the Tid / Refund / Marital Status / Taxable Income / Cheat table shown earlier.

Model: Decision Tree (splitting attributes at each internal node, test outcomes on the branches, class labels at the leaves):

 Refund?
   Yes → NO
   No  → MarSt?
           Single, Divorced → TaxInc?
                                < 80K → NO
                                > 80K → YES
           Married → NO
Another Example of a Decision Tree

Training Data: the same Tid / Refund / Marital Status / Taxable Income / Cheat table.

 MarSt?
   Married → NO
   Single, Divorced → Refund?
                        Yes → NO
                        No  → TaxInc?
                                < 80K → NO
                                > 80K → YES

There could be more than one tree that fits the same data!
Decision Tree Classification Task

As before: Training Set → Tree Induction algorithm (Induction) → Decision Tree Model; the model is then applied to the Test Set (Deduction).
Apply Model to Test Data

Test Data:
 Refund  Marital Status  Taxable Income  Cheat
 No      Married         80K             ?

Start from the root of the tree:
 Refund = No → take the No branch to MarSt.
 MarSt = Married → take the Married branch to the leaf NO.
 Assign Cheat to "No".
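The traversal above can be written out as nested conditionals. A small sketch (plain Python; the function name and record encoding are illustrative):

```python
# The example decision tree written as nested conditionals (illustrative sketch).
def classify_cheat(refund, marital_status, taxable_income):
    """Traverse the tree from the root and return the predicted Cheat label."""
    if refund == "Yes":
        return "No"
    else:                                    # Refund = No
        if marital_status == "Married":
            return "No"
        else:                                # Single or Divorced
            return "Yes" if taxable_income > 80 else "No"

# The test record from this slide: Refund = No, Married, Taxable Income = 80K.
print(classify_cheat("No", "Married", 80))   # -> 'No'
```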
Tree Induction
 Finding the best decision tree is NP-hard.

 Greedy strategy.
 Split the records based on an attribute test that optimizes a certain criterion.

 Many Algorithms:
 Hunt’s Algorithm (one of the earliest)
 CART
 ID3, C4.5
 SLIQ,SPRINT

Constructing decision trees: Hunt’s algorithm

 Xt: the set of training records for node t.
 Y = {y1,…,yc}: the set of class labels.
 Step 1: If all records in Xt belong to the same class yt, then t is a leaf node labeled as yt.
 Step 2: If Xt contains records that belong to more than one class:
 Select an attribute test condition to partition the records into smaller subsets.
 Create a child node for each outcome of the test condition.
 Apply the algorithm recursively to each child.
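A compact recursive sketch of the two steps above (illustrative Python; the dict-based record representation, the majority-class fallback, and the way the test condition is chosen are simplifying assumptions, not part of the algorithm as stated):

```python
# Simplified sketch of Hunt's algorithm: records are dicts, the class label is record[target].
def hunts(records, attributes, target):
    labels = {r[target] for r in records}
    # Step 1: all records belong to the same class (or no attributes remain) -> leaf node.
    if len(labels) == 1 or not attributes:
        # Fall back to the majority class when the records are not perfectly separable.
        return max(labels, key=lambda c: sum(r[target] == c for r in records))
    # Step 2: pick an attribute test condition and partition the records into smaller subsets.
    attr = attributes[0]   # placeholder choice; real algorithms optimize a splitting criterion
    children = {}
    for value in {r[attr] for r in records}:
        subset = [r for r in records if r[attr] == value]
        children[value] = hunts(subset, [a for a in attributes if a != attr], target)
    return (attr, children)   # internal node: one child per outcome of the test condition
```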

Hunt’s Algorithm
Growing the tree step by step on the Refund / Marital Status / Taxable Income data:

 Step 1: a single leaf labeled Don’t Cheat (the default class).
 Step 2: split on Refund: Yes → Don’t Cheat; No → Don’t Cheat.
 Step 3: under Refund = No, split on Marital Status: Married → Don’t Cheat; Single, Divorced → Cheat.
 Step 4: under Single, Divorced, split on Taxable Income: < 80K → Don’t Cheat; >= 80K → Cheat.
Hunt’s Algorithm (Example (1))

 Age  Car Type  Risk
 23   Family    High
 17   Sports    High
 43   Sports    High
 20   Family    High
 68   Family    Low
 32   Truck     Low

Resulting tree:
 Age < 25 → High
 Age > 25 → Car Type?
              {Sports} → High
              {Family, Truck} → Low
Hunt’s Algorithm (Example (2))
 Outlook   Temperature  Humidity  Windy  Play?
 sunny     hot          high      false  No
 sunny     hot          high      true   No
 overcast  hot          high      false  Yes
 rain      mild         high      false  Yes
 rain      cool         normal    false  Yes
 rain      cool         normal    true   No
 overcast  cool         normal    true   Yes
 sunny     mild         high      false  No
 sunny     cool         normal    false  Yes
 rain      mild         normal    false  Yes
 sunny     mild         normal    true   Yes
 overcast  mild         high      true   Yes
 overcast  hot          normal    false  Yes
 rain      mild         high      true   No
Hunt’s Algorithm (Example (2))

Resulting tree:
 Outlook?
   sunny    → Humidity?
                high   → No
                normal → Yes
   overcast → Yes
   rain     → Windy?
                true  → No
                false → Yes
Metrics for Performance Evaluation
 Accuracy:
 Accuracy of a classifier M, acc(M): the percentage of test set tuples that are correctly classified by the model M.
 Error Rate:
 Error rate (misclassification rate) of M = 1 – acc(M).
 Precision: of all instances predicted to have a given label X, how many were predicted correctly?
 Recall: of all instances that should have label X, how many were correctly captured?
Metrics for Performance Evaluation
 Counts of test records that are correctly (or incorrectly) predicted by the classification model.
 Confusion matrix:

                         Predicted Class
                         Class=Yes   Class=No
 Actual   Class=Yes      a (TP)      b (FN)
 Class    Class=No       c (FP)      d (TN)

 TP = true positives, FN = false negatives, FP = false positives, TN = true negatives.

 Accuracy   = (# correct predictions) / (total # of predictions) = (a + d) / (a + b + c + d) = (TP + TN) / (TP + TN + FP + FN)
 Error rate = (# wrong predictions) / (total # of predictions) = (b + c) / (a + b + c + d) = (FN + FP) / (TP + TN + FP + FN)
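As a quick check of these formulas, a small sketch computing accuracy and error rate from the four confusion-matrix counts (plain Python; the counts are those of model M1 on the Precision & Recall slide below):

```python
# Accuracy and error rate from confusion-matrix counts (model M1: 150/40/60/250).
tp, fn, fp, tn = 150, 40, 60, 250

accuracy = (tp + tn) / (tp + tn + fp + fn)
error_rate = (fn + fp) / (tp + tn + fp + fn)

print(f"accuracy   = {accuracy:.2f}")    # 0.80
print(f"error rate = {error_rate:.2f}")  # 0.20
```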
Limitation of Accuracy
 Consider a 2-class problem
 Number of Class 0 examples = 9990.
 Number of Class 1 examples = 10.

 If the model predicts everything to be class 0, accuracy is 9990/10000 = 99.9%.
 Accuracy is misleading because the model does not detect any class 1 example.
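To see the problem concretely, a tiny sketch (assuming scikit-learn) evaluating the always-class-0 model on such an imbalanced test set:

```python
# A model that always predicts class 0, evaluated on a 9990-vs-10 imbalanced test set.
from sklearn.metrics import accuracy_score, recall_score

y_true = [0] * 9990 + [1] * 10
y_pred = [0] * 10000

print(accuracy_score(y_true, y_pred))              # 0.999 -- looks excellent
print(recall_score(y_true, y_pred, pos_label=1))   # 0.0   -- not one class 1 example is detected
```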
Precision & Recall
 Precision (p) = a / (a + c) = TP / (TP + FP) = (# correctly predicted as X) / (# of all predicted as X)
 Recall (r)    = a / (a + b) = TP / (TP + FN) = (# correctly predicted as X) / (# of all actual X)

 Model M1 (Acc = 80%, ER = 20%, P = 0.7, R = 0.8):
                       Predicted +   Predicted -
 Actual Class   +      150           40
                -      60            250

 Model M2 (Acc = 90%, ER = 10%, P = 0.98, R = 0.85):
                       Predicted +   Predicted -
 Actual Class   +      250           45
                -      5             200

 classes             buy_computer=yes  buy_computer=no  total   recognition(%)
 buy_computer=yes    6954              46               7000    99.34
 buy_computer=no     412               2588             3000    86.27
 total               7366              2634             10000   95.52

 Find the Accuracy, Error Rate, Precision, and Recall for the buy_computer classifier.
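A short helper (plain Python) that applies the formulas above to the counts of model M1; the same function can be reused with the buy_computer counts to check your answer to the exercise:

```python
# Accuracy, error rate, precision and recall from a 2x2 confusion matrix (model M1's counts).
def summarize(tp, fn, fp, tn):
    total = tp + fn + fp + tn
    return {
        "accuracy": (tp + tn) / total,
        "error rate": (fn + fp) / total,
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
    }

print(summarize(tp=150, fn=40, fp=60, tn=250))
# {'accuracy': 0.8, 'error rate': 0.2, 'precision': 0.714..., 'recall': 0.789...}
```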
Precision and Recall for Multi-Class Classification
 While it is fairly straightforward to compute precision and
recall for a binary classification problem, it can be quite
confusing to compute these values for a multi-class
classification problem.
 First, let us assume that we have a 3-class classification
problem, with labels A, B and C.
 Once you have the confusion matrix, you need to compute
precision and recall for each class.
 Note that the values in the diagonal would always be the true
positives (TP).

Precision and Recall for Multi-Class Classification
 Now, let us compute precision for label A (in this example's confusion matrix, 30 records are correctly predicted as A, 60 records are predicted as A in total, and there are 100 actual A records):
 Precision(A) = TP_A / (TP_A + FP_A) = TP_A / (total predicted as A) = 30/60 = 0.5
 Now, let us compute recall for label A:
 Recall(A) = TP_A / (TP_A + FN_A) = TP_A / (total actual A) = 30/100 = 0.3
 So precision=0.5 and recall=0.3 for label A.
Precision = 0.5 means that, of the times label A was predicted, the system was in fact correct 50% of the time.
Recall = 0.3 means that, of all the times label A should have been predicted, only 30% of the labels were correctly predicted.
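Per-class precision and recall can be read off any multi-class confusion matrix mechanically. A sketch assuming NumPy; the 3x3 counts are invented so that label A matches the worked example (30 correct A predictions, 60 predicted as A in total, 100 actual A records):

```python
# Per-class precision and recall from a confusion matrix (rows = actual, columns = predicted).
import numpy as np

labels = ["A", "B", "C"]
cm = np.array([[30, 50, 20],    # actual A (invented counts)
               [20, 60, 20],    # actual B
               [10, 10, 80]])   # actual C

for i, label in enumerate(labels):
    tp = cm[i, i]                     # the diagonal entries are the true positives
    precision = tp / cm[:, i].sum()   # divide by everything predicted as this label
    recall = tp / cm[i, :].sum()      # divide by everything actually of this label
    print(f"{label}: precision={precision:.2f}, recall={recall:.2f}")
# A: precision=0.50, recall=0.30  -- as in the worked example
```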
Examples (1)
 #   Correct label   Classifier’s label
 1   T               T
 2   T               N
 3   N               T
 4   T               T
 5   N               N
 6   T               N

 In this case:
 TP = 2 (#1 and #4), FP = 1 (#3), TN = 1 (#5), FN = 2 (#2 and #6).
 Accuracy = (2 + 1) / (2 + 1 + 1 + 2) = 0.5
 Error rate = (1 + 2) / (2 + 1 + 1 + 2) = 0.5, OR = (1 - accuracy) = 0.5
 Precision = (2) / (2 + 1) = 0.67
 Recall = (2) / (2 + 2) = 0.5
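These counts can be double-checked with a library. A sketch assuming scikit-learn, treating label T as the positive class:

```python
# Re-computing Example (1) with scikit-learn; 'T' is taken as the positive class.
from sklearn.metrics import accuracy_score, precision_score, recall_score

correct    = ["T", "T", "N", "T", "N", "T"]   # true labels, records #1..#6
classifier = ["T", "N", "T", "T", "N", "N"]   # the classifier's labels

print(accuracy_score(correct, classifier))                   # 0.5
print(precision_score(correct, classifier, pos_label="T"))   # 0.666...
print(recall_score(correct, classifier, pos_label="T"))      # 0.5
```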
Examples (2)
 #   Correct label   Classifier’s label
 1   A               A
 2   A               B
 3   A               C
 4   B               C
 5   B               B
 6   B               B
 7   C               A
 8   C               C
 9   C               B
 10  C               C
 11  C               B
 12  C               A

 In this case:
 Accuracy = 5 / 12 = 0.42
 Error rate = (1 – accuracy) = (1 – 0.42) = 0.58
 Precision (A) = 1 / 3 = 0.33, Recall (A) = 1 / 3 = 0.33
 Precision (B) = 2 / 5 = 0.4,  Recall (B) = 2 / 3 = 0.67
 Precision (C) = 2 / 4 = 0.5,  Recall (C) = 2 / 6 = 0.33
Methods for Performance Evaluation

 Holdout:
 Reserve 2/3 of the data for training and 1/3 for testing.
 Random subsampling:
 Repeated holdout.
 Cross-validation:
 Partition the data into k disjoint subsets (k = 10 is most popular).
 k-fold: train on k-1 partitions, test on the remaining one.
 Bootstrap:
 Sampling with replacement.
 ~63% of records used for training, ~37% for testing.
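As a sketch of how the holdout and cross-validation schemes look in practice (assuming scikit-learn; the Iris data is just a stand-in for any labeled dataset):

```python
# Holdout vs. 10-fold cross-validation, sketched with scikit-learn on placeholder data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)            # any labeled dataset works here
model = DecisionTreeClassifier(random_state=0)

# Holdout: reserve 1/3 of the records for testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)
holdout_acc = model.fit(X_train, y_train).score(X_test, y_test)

# 10-fold cross-validation: every record is used for testing exactly once.
cv_scores = cross_val_score(model, X, y, cv=10)

print(f"holdout accuracy:         {holdout_acc:.2f}")
print(f"10-fold CV mean accuracy: {cv_scores.mean():.2f}")
```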
Examples (3)
 Construct a Decision Tree using the table below and verify that it is correct.

 Steps of the solution:
 Divide the data into:
 Training set.
 Test set.
 Construct a Decision Tree model.
 Apply the model to the test data.
 Evaluate the model performance:
 Accuracy.
 Error rate.
 Precision.
 Recall.

 #   A  B  C  Class
 1   0  0  0  +
 2   0  0  1  +
 3   0  1  0  -
 4   0  1  1  -
 5   1  0  0  +
 6   1  0  0  +
 7   1  1  0  +
 8   1  0  1  -
 9   1  0  1  -
 10  1  1  0  +
 11  0  0  0  +
 12  1  1  1  +
