05 Classification
Chapter 8. Classification: Basic Concepts
Learning vs. Classification
Supervised vs. Unsupervised Learning
Classification—A Two-Step Process
◼ Model construction: describing a set of predetermined classes
◼ Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute
◼ The model is represented as classification rules, decision trees, or mathematical formulae
◼ Model usage: for classifying future or unknown objects
◼ Estimate accuracy of the model
◼ Note: if the test set is used to select models, it is called a validation (test) set
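A minimal sketch of the two-step process, assuming scikit-learn is available; the toy feature matrix and labels below are illustrative and not from the slides.

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X = [[0, 1], [1, 1], [0, 0], [1, 0], [1, 1], [0, 0]]  # feature vectors (illustrative)
y = ["yes", "yes", "no", "yes", "yes", "no"]          # class labels

# Step 1: model construction on the training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
model = DecisionTreeClassifier().fit(X_train, y_train)

# Step 2: model usage -- estimate accuracy on held-out test data,
# then classify future/unknown objects
print("estimated accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("prediction for an unseen tuple:", model.predict([[1, 0]]))
```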
Process (1): Model Construction
[Diagram: Training Data → Classification Algorithms → Classifier (Model)]

Process (2): Using the Model in Prediction
[Diagram: the Classifier is applied to Testing Data and then to Unseen Data, e.g., (Jeff, Professor, 4) → Tenured?]

NAME     RANK            YEARS  TENURED
Tom      Assistant Prof  2      no
Merlisa  Associate Prof  7      no
George   Professor       5      yes
Joseph   Assistant Prof  7      yes
Chapter 8. Classification: Basic Concepts
Decision Tree Induction: An Example
❑ Training data set: Buys_computer
❑ The data set follows an example of Quinlan's ID3 (Playing Tennis)

age    income  student  credit_rating  buys_computer
<=30   high    no       fair           no
<=30   high    no       excellent      no
31..40 high    no       fair           yes
>40    medium  no       fair           yes
>40    low     yes      fair           yes
>40    low     yes      excellent      no
31..40 low     yes      excellent      yes
<=30   medium  no       fair           no
<=30   low     yes      fair           yes
>40    medium  yes      fair           yes
<=30   medium  yes      excellent      yes
31..40 medium  no       excellent      yes
31..40 high    yes      fair           yes
>40    medium  no       excellent      no

❑ Resulting tree:
[Decision tree: root node age? with branches <=30, 31..40, >40; the <=30 branch tests student? (no → no, yes → yes), the 31..40 branch predicts yes, and the >40 branch tests credit_rating? (excellent → no, fair → yes)]
Algorithm for Decision Tree Induction
◼ Decision tree induction is the learning of decision trees from class-
labeled training tuples.
◼ Basic algorithm (a greedy algorithm)
◼ Tree is constructed in a top-down recursive divide-and-conquer manner
◼ At start, all the training examples are at the root
◼ Attributes are categorical (if continuous-valued, they are discretized in advance)
◼ Examples are partitioned recursively based on selected attributes
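A compact sketch of this greedy, top-down, divide-and-conquer procedure, assuming categorical attributes and tuples stored as dicts; entropy and information gain are defined on the following slides but are included here so the snippet is self-contained, and all names are illustrative.

```python
from collections import Counter
from math import log2

def entropy(rows, label):
    counts = Counter(r[label] for r in rows)
    total = len(rows)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def info_gain(rows, attr, label):
    total = len(rows)
    split = Counter(r[attr] for r in rows)
    expected = sum((n / total) * entropy([r for r in rows if r[attr] == v], label)
                   for v, n in split.items())
    return entropy(rows, label) - expected

def build_tree(rows, attrs, label):
    classes = {r[label] for r in rows}
    if len(classes) == 1:                  # all samples in the same class -> leaf
        return classes.pop()
    if not attrs:                          # no attributes left -> majority-vote leaf
        return Counter(r[label] for r in rows).most_common(1)[0][0]
    best = max(attrs, key=lambda a: info_gain(rows, a, label))
    node = {best: {}}
    for value in {r[best] for r in rows}:  # partition recursively on the best attribute
        subset = [r for r in rows if r[best] == value]
        node[best][value] = build_tree(subset, [a for a in attrs if a != best], label)
    return node
```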
Attribute Selection Measure:
Information Gain (ID3/C4.5)
◼ Select the attribute with the highest information gain
◼ Let pi be the probability that an arbitrary tuple in D belongs to
class Ci, estimated by |Ci, D|/|D|
◼ Expected information (entropy) needed to classify a tuple in D:
$$Info(D) = -\sum_{i=1}^{m} p_i \log_2(p_i)$$
◼ Information needed (after using A to split D into v partitions) to classify D:
$$Info_A(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|} \times Info(D_j)$$
◼ Information gained by branching on attribute A:
$$Gain(A) = Info(D) - Info_A(D)$$
◼ Example (buys_computer data):
$$Info(D) = I(9,5) = -\frac{9}{14}\log_2\frac{9}{14} - \frac{5}{14}\log_2\frac{5}{14} = 0.940$$
$$Gain(income) = 0.029, \quad Gain(student) = 0.151, \quad Gain(credit\_rating) = 0.048$$
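A quick numeric check of the example in Python; the within-age class counts (2 yes/3 no for <=30, 4 yes/0 no for 31..40, 3 yes/2 no for >40) are read off the training table shown earlier, and Gain(age) follows from them.

```python
from math import log2

def info(*counts):
    """Expected information I(c1, ..., cm) = -sum p_i log2 p_i."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

info_D = info(9, 5)                                            # ~0.940
info_age = (5/14) * info(2, 3) + (4/14) * info(4, 0) + (5/14) * info(3, 2)
gain_age = info_D - info_age                                   # ~0.246
print(f"Info(D) = {info_D:.3f}, Gain(age) = {gain_age:.3f}")
```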
age (dis.) income student credit_rating buys_computer
youth high no fair no
youth high no excellent no
middle_aged high no fair yes
senior medium no fair yes
senior low yes fair yes
senior low yes excellent no
middle_aged low yes excellent yes
youth medium no fair no
youth low yes fair yes
senior medium yes fair yes
youth medium yes excellent yes
middle_aged medium no excellent yes
middle_aged high yes fair yes
senior medium no excellent no
Computing Information-Gain for
Continuous-Valued Attributes
◼ Let attribute A be a continuous-valued attribute
◼ Must determine the best split point for A
◼ Sort the values of A in increasing order
◼ Typically, the midpoint between each pair of adjacent values
is considered as a possible split point
◼ (ai+ai+1)/2 is the midpoint between the values of ai and ai+1
◼ The point with the minimum expected information
requirement for A is selected as the split-point for A
◼ Split:
◼ D1 is the set of tuples in D satisfying A ≤ split-point, and D2 is
the set of tuples in D satisfying A > split-point
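A sketch of this procedure for a single continuous attribute: sort the values, take the midpoint of each adjacent pair as a candidate split point, and keep the one with the minimum expected information requirement. The sample values and labels are illustrative.

```python
from collections import Counter
from math import log2

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def best_split_point(values, labels):
    pairs = sorted(zip(values, labels))
    best = (float("inf"), None)
    for i in range(len(pairs) - 1):
        split = (pairs[i][0] + pairs[i + 1][0]) / 2         # midpoint (a_i + a_{i+1}) / 2
        left = [lab for v, lab in pairs if v <= split]      # D1: A <= split-point
        right = [lab for v, lab in pairs if v > split]      # D2: A > split-point
        info_a = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        best = min(best, (info_a, split))
    return best  # (expected information requirement, chosen split point)

print(best_split_point([25, 32, 38, 41, 47], ["no", "yes", "yes", "yes", "no"]))
```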
Gain Ratio for Attribute Selection (C4.5)
◼ Information gain measure is biased towards attributes with a
large number of values
◼ C4.5 (a successor of ID3) uses gain ratio to overcome the
problem (normalization to information gain)
$$SplitInfo_A(D) = -\sum_{j=1}^{v} \frac{|D_j|}{|D|} \times \log_2\frac{|D_j|}{|D|}$$
◼ GainRatio(A) = Gain(A)/SplitInfo(A)
◼ Ex.: GainRatio(income) = Gain(income) / SplitInfo_income(D)
◼ The attribute with the maximum gain ratio is selected as the splitting attribute
Overfitting: an induced tree may overfit the training data
◼ Too many branches, some may reflect anomalies due to noise or outliers
◼ Poor accuracy for unseen samples
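Returning to the gain-ratio definition above, a small numeric sketch: the income partition sizes (4 high, 6 medium, 4 low tuples) are visible in the training table, and Gain(income) = 0.029 comes from the information-gain example.

```python
from math import log2

def split_info(partition_sizes):
    total = sum(partition_sizes)
    return -sum((n / total) * log2(n / total) for n in partition_sizes)

si = split_info([4, 6, 4])          # ~1.557 for income's three partitions
print(f"SplitInfo_income(D) = {si:.3f}, GainRatio(income) = {0.029 / si:.3f}")
```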
Scalability Framework for RainForest
Rainforest: Training Set and Its AVC Sets
Presentation of Classification Results
Prediction Based on Bayes’ Theorem
◼ Given training data X, the posterior probability of a hypothesis H, P(H|X), follows Bayes' theorem:
$$P(H|\mathbf{X}) = \frac{P(\mathbf{X}|H)\,P(H)}{P(\mathbf{X})}$$
Classification Is to Derive the Maximum Posteriori
◼ Let D be a training set of tuples and their associated class
labels, and each tuple is represented by an n-D attribute vector
X = (x1, x2, …, xn)
◼ Suppose there are m classes C1, C2, …, Cm.
◼ Classification is to derive the maximum posteriori, i.e., the
maximal P(Ci|X)
◼ This can be derived from Bayes’ theorem
$$P(C_i|\mathbf{X}) = \frac{P(\mathbf{X}|C_i)\,P(C_i)}{P(\mathbf{X})}$$
◼ Since P(X) is constant for all classes, only $P(\mathbf{X}|C_i)\,P(C_i)$ needs to be maximized
Naïve Bayes Classifier
◼ A simplified assumption: attributes are conditionally
independent (i.e., no dependence relation between
attributes):
$$P(\mathbf{X}|C_i) = \prod_{k=1}^{n} P(x_k|C_i) = P(x_1|C_i) \times P(x_2|C_i) \times \cdots \times P(x_n|C_i)$$
◼ This greatly reduces the computation cost: Only counts the
class distribution
◼ If Ak is categorical, P(xk|Ci) is the # of tuples in Ci having value xk
for Ak divided by |Ci, D| (# of tuples of Ci in D)
◼ If Ak is continuous-valued, P(xk|Ci) is usually computed based on a Gaussian distribution with mean μ and standard deviation σ:
$$g(x, \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
and P(xk|Ci) is
$$P(x_k|C_i) = g(x_k, \mu_{C_i}, \sigma_{C_i})$$
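This Gaussian density can be sketched directly; in practice the mean and standard deviation would be estimated from the tuples of class Ci, and the values in the usage line below are made up for illustration.

```python
from math import exp, pi, sqrt

def gaussian(x, mu, sigma):
    """Gaussian density g(x, mu, sigma) used for a continuous attribute."""
    return (1.0 / (sqrt(2 * pi) * sigma)) * exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# e.g., P(age = 35 | C_i) with an assumed class mean of 38 and standard deviation of 12
print(gaussian(35, mu=38, sigma=12))
```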
Naïve Bayes Classifier: Training Dataset
Class:
  C1: buys_computer = 'yes'
  C2: buys_computer = 'no'
Data to be classified:
  X = (age <= 30, income = medium, student = yes, credit_rating = fair)

age    income  student  credit_rating  buys_computer
<=30   high    no       fair           no
<=30   high    no       excellent      no
31..40 high    no       fair           yes
>40    medium  no       fair           yes
>40    low     yes      fair           yes
>40    low     yes      excellent      no
31..40 low     yes      excellent      yes
<=30   medium  no       fair           no
<=30   low     yes      fair           yes
>40    medium  yes      fair           yes
<=30   medium  yes      excellent      yes
31..40 medium  no       excellent      yes
31..40 high    yes      fair           yes
>40    medium  no       excellent      no
Naïve Bayes Classifier: An Example
◼ P(Ci): P(buys_computer = "yes") = 9/14 = 0.643
  P(buys_computer = "no") = 5/14 = 0.357
◼ Compute P(xk|Ci) for each class from the training table:
  P(age = "<=30" | buys_computer = "yes") = 2/9 = 0.222
  P(age = "<=30" | buys_computer = "no") = 3/5 = 0.6
  P(income = "medium" | buys_computer = "yes") = 4/9 = 0.444
  P(income = "medium" | buys_computer = "no") = 2/5 = 0.4
  P(student = "yes" | buys_computer = "yes") = 6/9 = 0.667
  P(student = "yes" | buys_computer = "no") = 1/5 = 0.2
  P(credit_rating = "fair" | buys_computer = "yes") = 6/9 = 0.667
  P(credit_rating = "fair" | buys_computer = "no") = 2/5 = 0.4
◼ For X = (age <= 30, income = medium, student = yes, credit_rating = fair):
  P(X | buys_computer = "yes") = 0.222 × 0.444 × 0.667 × 0.667 = 0.044
  P(X | buys_computer = "no") = 0.6 × 0.4 × 0.2 × 0.4 = 0.019
  P(X | "yes") × P("yes") = 0.028
  P(X | "no") × P("no") = 0.007
◼ Therefore, X belongs to class buys_computer = "yes"
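A sketch that reproduces these numbers by counting over the 14 training tuples listed above; the encoding of the table as Python tuples is mine, not from the slides.

```python
from collections import Counter

data = [  # (age, income, student, credit_rating, buys_computer)
    ("<=30", "high", "no", "fair", "no"), ("<=30", "high", "no", "excellent", "no"),
    ("31..40", "high", "no", "fair", "yes"), (">40", "medium", "no", "fair", "yes"),
    (">40", "low", "yes", "fair", "yes"), (">40", "low", "yes", "excellent", "no"),
    ("31..40", "low", "yes", "excellent", "yes"), ("<=30", "medium", "no", "fair", "no"),
    ("<=30", "low", "yes", "fair", "yes"), (">40", "medium", "yes", "fair", "yes"),
    ("<=30", "medium", "yes", "excellent", "yes"), ("31..40", "medium", "no", "excellent", "yes"),
    ("31..40", "high", "yes", "fair", "yes"), (">40", "medium", "no", "excellent", "no"),
]
X = ("<=30", "medium", "yes", "fair")   # tuple to classify

class_counts = Counter(row[-1] for row in data)
for c, count in class_counts.items():
    prior = count / len(data)                                   # P(Ci)
    likelihood = 1.0
    for k, value in enumerate(X):
        matches = sum(1 for row in data if row[k] == value and row[-1] == c)
        likelihood *= matches / count                           # P(x_k | Ci)
    print(c, round(prior * likelihood, 3))                      # "yes" -> 0.028, "no" -> 0.007
```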
◼ Disadvantages
◼ Assumption: class conditional independence, therefore loss
of accuracy
◼ Practically, dependencies exist among variables; such dependencies cannot be modeled by a Naïve
Bayes Classifier
◼ How to deal with these dependencies? Bayesian Belief Networks
(Chapter 9)
Chapter 8. Classification: Basic Concepts
Rule Extraction from a Decision Tree
◼ One rule is created for each path from the root to a leaf
◼ Each attribute-value pair along a path forms a conjunction: the leaf holds the class prediction
[Decision tree for buys_computer: age? splits into <=30 (then student?: no → no, yes → yes), 31..40 (→ yes), and >40 (then credit_rating?: excellent → no, fair → yes)]
◼ Rules are mutually exclusive and exhaustive
◼ Example: Rule extraction from our buys_computer decision-tree
IF age = young AND student = no THEN buys_computer = no
IF age = young AND student = yes THEN buys_computer = yes
IF age = mid-age THEN buys_computer = yes
IF age = old AND credit_rating = excellent THEN buys_computer = no
IF age = old AND credit_rating = fair THEN buys_computer = yes
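As a sketch (not the textbook's code), rule extraction can be written as a walk over a tree encoded as nested dicts; the tree literal below mirrors the buys_computer tree described above, and all function and variable names are illustrative.

```python
tree = {"age": {
    "<=30": {"student": {"no": "no", "yes": "yes"}},
    "31..40": "yes",
    ">40": {"credit_rating": {"excellent": "no", "fair": "yes"}},
}}

def extract_rules(node, conditions=()):
    if not isinstance(node, dict):                        # leaf: holds the class prediction
        antecedent = " AND ".join(f"{a} = {v}" for a, v in conditions)
        return [f"IF {antecedent} THEN buys_computer = {node}"]
    (attr, branches), = node.items()
    rules = []
    for value, child in branches.items():                 # one rule per root-to-leaf path
        rules.extend(extract_rules(child, conditions + ((attr, value),)))
    return rules

print("\n".join(extract_rules(tree)))
```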
Chapter 8. Classification: Basic Concepts
Classifier Evaluation Metrics:
Precision and Recall, and F-measures
◼ Precision: exactness – what % of tuples that the classifier labeled as positive are actually positive
◼ Recall: completeness – what % of positive tuples the classifier labeled as positive
◼ F measure (F1): the harmonic mean of precision and recall
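A direct computation of these metrics from raw counts; the TP/FP/FN numbers passed in are illustrative only.

```python
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)   # % of tuples labeled positive that are actually positive
    recall = tp / (tp + fn)      # % of actual positive tuples that were labeled positive
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(precision_recall_f1(tp=90, fp=140, fn=210))
```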
Classifier Evaluation Metrics: Example
Evaluating Classifier Accuracy:
Holdout & Cross-Validation Methods
◼ Holdout method
◼ Given data is randomly partitioned into two independent sets: a training set (e.g., 2/3) for model construction and a test set (e.g., 1/3) for accuracy estimation
◼ Cross-validation (k-fold, where k = 10 is most popular)
◼ Randomly partition the data into k mutually exclusive subsets of approximately equal size; at the i-th iteration, use Di as the test set and the others as the training set
◼ Ensemble methods
◼ Use a combination of models to increase accuracy
◼ Bagging: averaging the prediction over a collection of classifiers
◼ Boosting: weighted vote with a collection of classifiers
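The k-fold cross-validation scheme above can be sketched as follows; `train` and `evaluate` are placeholders (assumptions, not from the slides) for any classifier's construction and accuracy-estimation steps.

```python
import random

def cross_validate(data, k, train, evaluate):
    """train(rows) -> model; evaluate(model, rows) -> accuracy in [0, 1]."""
    data = data[:]
    random.shuffle(data)
    folds = [data[i::k] for i in range(k)]                 # k roughly equal partitions
    scores = []
    for i in range(k):
        test_fold = folds[i]
        train_folds = [row for j, fold in enumerate(folds) if j != i for row in fold]
        model = train(train_folds)
        scores.append(evaluate(model, test_fold))
    return sum(scores) / k                                 # average accuracy over the k runs
```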
Bagging: Bootstrap Aggregation
◼ Analogy: Diagnosis based on multiple doctors’ majority vote
◼ Training
◼ Given a set D of d tuples, at each iteration i, a training set Di of d tuples is sampled with replacement from D (a bootstrap sample)
◼ A classifier model Mi is learned for each training set Di
◼ The bagged classifier M* counts the votes and assigns the class with the
most votes to X
◼ Prediction: can be applied to the prediction of continuous values by taking
the average value of each prediction for a given test tuple
◼ Accuracy
◼ Often significantly better than a single classifier derived from D
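A sketch of the bagging procedure under the assumption that a base learner is supplied as a function; `train_classifier` and the returned `predict` closure are illustrative names.

```python
import random
from collections import Counter

def bagging(D, k, train_classifier):
    models = []
    for _ in range(k):
        bootstrap = [random.choice(D) for _ in range(len(D))]  # sample d tuples with replacement
        models.append(train_classifier(bootstrap))
    def predict(x):                                            # majority vote of the k classifiers
        votes = Counter(m(x) for m in models)
        return votes.most_common(1)[0][0]
    return predict
```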
Random Forest (Breiman 2001)
◼ During classification, each tree votes and the most popular class is returned
◼ Two methods to construct a Random Forest:
◼ Forest-RI (random input selection): randomly select, at each node, F attributes as candidates for the split at the node; the CART methodology is used to grow the trees to maximum size
◼ Forest-RC (random linear combinations): creates new attributes (or features) that are a linear combination of the existing attributes, which reduces the correlation between individual classifiers
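A hedged usage sketch with scikit-learn's RandomForestClassifier (assumed available), whose `max_features` setting plays the role of F, the number of attributes randomly considered at each node; the toy data is illustrative.

```python
from sklearn.ensemble import RandomForestClassifier

X = [[0, 1], [1, 1], [0, 0], [1, 0]]   # illustrative feature vectors
y = ["yes", "yes", "no", "yes"]
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt").fit(X, y)
print(forest.predict([[0, 1]]))
```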
Summary (II)
◼ Significance tests and ROC curves are useful for model selection.
◼ There have been numerous comparisons of the different
classification methods; the matter remains a research topic
◼ No single method has been found to be superior over all others
for all data sets
◼ Issues such as accuracy, training time, robustness, scalability,
and interpretability must be considered and can involve trade-
offs, further complicating the quest for an overall superior
method
References (1)
◼ C. Apte and S. Weiss. Data mining with decision trees and decision rules. Future
Generation Computer Systems, 13, 1997
◼ C. M. Bishop, Neural Networks for Pattern Recognition. Oxford University Press,
1995
◼ L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees.
Wadsworth International Group, 1984
◼ C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Data
Mining and Knowledge Discovery, 2(2): 121-168, 1998
◼ P. K. Chan and S. J. Stolfo. Learning arbiter and combiner trees from partitioned data
for scaling machine learning. KDD'95
◼ H. Cheng, X. Yan, J. Han, and C.-W. Hsu, Discriminative Frequent Pattern Analysis for
Effective Classification, ICDE'07
◼ H. Cheng, X. Yan, J. Han, and P. S. Yu, Direct Discriminative Pattern Mining for
Effective Classification, ICDE'08
◼ W. Cohen. Fast effective rule induction. ICML'95
◼ G. Cong, K.-L. Tan, A. K. H. Tung, and X. Xu. Mining top-k covering rule groups for
gene expression data. SIGMOD'05
References (2)
◼ A. J. Dobson. An Introduction to Generalized Linear Models. Chapman & Hall, 1990.
◼ G. Dong and J. Li. Efficient mining of emerging patterns: Discovering trends and
differences. KDD'99.
◼ R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification, 2ed. John Wiley, 2001
◼ U. M. Fayyad. Branching on attribute values in decision tree generation. AAAI’94.
◼ Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and
an application to boosting. J. Computer and System Sciences, 1997.
◼ J. Gehrke, R. Ramakrishnan, and V. Ganti. Rainforest: A framework for fast decision tree
construction of large datasets. VLDB’98.
◼ J. Gehrke, V. Gant, R. Ramakrishnan, and W.-Y. Loh, BOAT -- Optimistic Decision Tree
Construction. SIGMOD'99.
◼ T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data
Mining, Inference, and Prediction. Springer-Verlag, 2001.
◼ D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The
combination of knowledge and statistical data. Machine Learning, 1995.
◼ W. Li, J. Han, and J. Pei, CMAR: Accurate and Efficient Classification Based on Multiple
Class-Association Rules, ICDM'01.
References (3)
◼ T.-S. Lim, W.-Y. Loh, and Y.-S. Shih. A comparison of prediction accuracy, complexity,
and training time of thirty-three old and new classification algorithms. Machine
Learning, 2000.
◼ J. Magidson. The Chaid approach to segmentation modeling: Chi-squared
automatic interaction detection. In R. P. Bagozzi, editor, Advanced Methods of
Marketing Research, Blackwell Business, 1994.
◼ M. Mehta, R. Agrawal, and J. Rissanen. SLIQ : A fast scalable classifier for data
mining. EDBT'96.
◼ T. M. Mitchell. Machine Learning. McGraw Hill, 1997.
◼ S. K. Murthy, Automatic Construction of Decision Trees from Data: A Multi-
Disciplinary Survey, Data Mining and Knowledge Discovery 2(4): 345-389, 1998
◼ J. R. Quinlan. Induction of decision trees. Machine Learning, 1:81-106, 1986.
◼ J. R. Quinlan and R. M. Cameron-Jones. FOIL: A midterm report. ECML’93.
◼ J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.
◼ J. R. Quinlan. Bagging, boosting, and C4.5. AAAI'96.
References (4)
◼ R. Rastogi and K. Shim. Public: A decision tree classifier that integrates building and
pruning. VLDB’98.
◼ J. Shafer, R. Agrawal, and M. Mehta. SPRINT : A scalable parallel classifier for data
mining. VLDB’96.
◼ J. W. Shavlik and T. G. Dietterich. Readings in Machine Learning. Morgan Kaufmann,
1990.
◼ P. Tan, M. Steinbach, and V. Kumar. Introduction to Data Mining. Addison Wesley,
2005.
◼ S. M. Weiss and C. A. Kulikowski. Computer Systems that Learn: Classification and
Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert
Systems. Morgan Kaufmann, 1991.
◼ S. M. Weiss and N. Indurkhya. Predictive Data Mining. Morgan Kaufmann, 1997.
◼ I. H. Witten and E. Frank. Data Mining: Practical Machine Learning Tools and
Techniques, 2ed. Morgan Kaufmann, 2005.
◼ X. Yin and J. Han. CPAR: Classification based on predictive association rules. SDM'03
◼ H. Yu, J. Yang, and J. Han. Classifying large data sets using SVM with hierarchical
clusters. KDD'03.
CS412 Midterm Exam Statistics
◼ Opinion Question Answering:
◼ Like the style: 70.83%, dislike: 29.16%
◼ Score distribution: 80-89: 54; 70-79: 46; 50-59: 15; 40-49: 2
Issues Affecting Model Selection
◼ Speed: time to construct the model (training time)
Predictor Error Measures
◼ Measure predictor accuracy: measure how far off the predicted value is from
the actual known value
◼ Loss function: measures the error betw. yi and the predicted value yi’
◼ Absolute error: | yi – yi’|
◼ Squared error: (yi – yi’)2
◼ Test error (generalization error): the average loss over the test set
◼ Mean absolute error: $\frac{1}{d}\sum_{i=1}^{d}|y_i - y_i'|$   Mean squared error: $\frac{1}{d}\sum_{i=1}^{d}(y_i - y_i')^2$
◼ Relative absolute error: $\frac{\sum_{i=1}^{d}|y_i - y_i'|}{\sum_{i=1}^{d}|y_i - \bar{y}|}$   Relative squared error: $\frac{\sum_{i=1}^{d}(y_i - y_i')^2}{\sum_{i=1}^{d}(y_i - \bar{y})^2}$, where $\bar{y}$ is the mean of the actual values $y_i$
◼ The mean squared error exaggerates the presence of outliers
◼ Popularly used: the (square) root mean squared error and, similarly, the root relative squared error
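These measures can be computed directly from the predicted and actual values; the numbers passed in below are illustrative.

```python
from math import sqrt

def error_measures(y, y_pred):
    d = len(y)
    y_bar = sum(y) / d
    mae = sum(abs(a - p) for a, p in zip(y, y_pred)) / d
    mse = sum((a - p) ** 2 for a, p in zip(y, y_pred)) / d
    rae = sum(abs(a - p) for a, p in zip(y, y_pred)) / sum(abs(a - y_bar) for a in y)
    rse = sum((a - p) ** 2 for a, p in zip(y, y_pred)) / sum((a - y_bar) ** 2 for a in y)
    return {"MAE": mae, "MSE": mse, "RMSE": sqrt(mse), "RAE": rae, "RSE": rse}

print(error_measures(y=[3.0, 5.0, 2.5, 7.0], y_pred=[2.5, 5.0, 4.0, 8.0]))
```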
Scalable Decision Tree Induction Methods
◼ PUBLIC (VLDB'98, Rastogi & Shim): integrates tree splitting and tree pruning: stop growing the tree earlier
◼ RainForest (VLDB’98 — Gehrke, Ramakrishnan & Ganti)
◼ Builds an AVC-list (attribute, value, class label)
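A small sketch of what an AVC-set holds for a single attribute: the counts of each (attribute value, class label) pair, which is enough information to evaluate candidate splits at a node. The row layout and names are illustrative.

```python
from collections import Counter

def avc_set(rows, attr, label):
    """AVC-set for one attribute: counts of (attribute value, class label) pairs."""
    return Counter((row[attr], row[label]) for row in rows)

rows = [
    {"age": "youth", "buys_computer": "no"},
    {"age": "youth", "buys_computer": "no"},
    {"age": "middle_aged", "buys_computer": "yes"},
    {"age": "senior", "buys_computer": "yes"},
    {"age": "senior", "buys_computer": "no"},
]
print(avc_set(rows, "age", "buys_computer"))
# Counter({('youth', 'no'): 2, ('middle_aged', 'yes'): 1, ('senior', 'yes'): 1, ('senior', 'no'): 1})
```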
Data Cube-Based Decision-Tree Induction
◼ Integration of generalization with decision-tree induction
(Kamber et al.’97)
◼ Classification at primitive concept levels
◼ E.g., precise temperature, humidity, outlook, etc.
◼ Low-level concepts, scattered classes, bushy classification-
trees
◼ Semantic interpretation problems
◼ Cube-based multi-level classification
◼ Relevance analysis at multi-levels
◼ Information-gain analysis with dimension + level