Data Mining: Practical Machine Learning Tools and Techniques
Issues: training, testing, tuning
Predicting performance: confidence limits
Holdout, cross-validation, bootstrap
Comparing schemes: the t-test
Predicting probabilities: loss functions
Cost-sensitive measures
Evaluating numeric prediction
The Minimum Description Length principle
How predictive is the model we learned? Error on the training data is not a good indicator of performance on future data
Simple solution: split data into training and test set. However, (labeled) data is usually limited, so more sophisticated techniques need to be used
Issues in evaluation
Statistical reliability of estimated differences in performance (significance tests)
Choice of performance measure:
Number of correct classifications
Accuracy of probability estimates
Error in numeric predictions
Costs assigned to different types of errors: many practical applications involve costs
Success: instance's class is predicted correctly. Error: instance's class is predicted incorrectly. Error rate: proportion of errors made over the whole set of instances
Resubstitution error: error rate obtained from training data Resubstitution error is (hopelessly) optimistic!
Test set: independent instances that have played no part in formation of classifier
Assumption: both training data and test data are representative samples of the underlying problem Example: classifiers built using customer data from two different towns A and B
To estimate performance of classifier from town A in completely new town, test it on data from B
The test data can't be used for parameter tuning! Proper procedure uses three sets: training data, validation data, and test data
Dilemma: ideally both training set and test set should be large!
Predicting performance
Assume the estimated error rate is 25%. How close is this to the true error rate?
Statistical theory provides us with confidence intervals for the true underlying proportion
Confidence intervals
We can say: p lies within a certain specified interval with a certain specified confidence Example: S=750 successes in N=1000 trials
Estimated success rate: 75% How close is this to true success rate p?
Pr[−z ≤ X ≤ z] = c
Confidence limits
Confidence limits for the normal distribution with 0 mean and a variance of 1:

Pr[X ≥ z]    z
0.1%         3.09
0.5%         2.58
1%           2.33
5%           1.65
10%          1.28
20%          0.84
40%          0.25
Thus: Pr[−1.65 ≤ X ≤ 1.65] = 90%
To use this we have to reduce our random variable f to have 0 mean and unit variance
Transforming f
Transformed value for f: (f − p) / √(p(1 − p)/N)
(i.e. subtract the mean and divide by the standard deviation)
Resulting equation: Pr[−z ≤ (f − p)/√(p(1 − p)/N) ≤ z] = c
Solving for p:
p = ( f + z²/2N ± z·√( f/N − f²/N + z²/4N² ) ) / ( 1 + z²/N )
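As an illustration (not part of the original slides), a minimal Python sketch of this calculation; the function name and arguments are placeholders, and the normal-approximation formula above is assumed:

```python
# Sketch: confidence interval for the true success rate p, using the
# normal-approximation formula above (f = observed success rate,
# N = number of trials, z = normal-table value for the chosen confidence c).
import math

def success_rate_interval(f, N, z):
    center = f + z**2 / (2 * N)
    spread = z * math.sqrt(f / N - f**2 / N + z**2 / (4 * N**2))
    denom = 1 + z**2 / N
    return (center - spread) / denom, (center + spread) / denom

# f = 75%, N = 1000, c = 80% (z = 1.28) gives roughly (0.732, 0.767)
print(success_rate_interval(0.75, 1000, 1.28))
```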
Examples
f = 75%, N = 1000, c = 80% (so that z = 1.28): p ∈ [0.732, 0.767]
f = 75%, N = 100, c = 80% (so that z = 1.28): p ∈ [0.691, 0.801]
Note that the normal distribution assumption is only valid for large N (i.e. N > 100)
f = 75%, N = 10, c = 80% (so that z = 1.28): p ∈ [0.549, 0.881]
(should be taken with a grain of salt)
Holdout estimation
What to do if the amount of data is limited? The holdout method reserves a certain amount for testing and uses the remainder for training
Usually: one third for testing, the rest for training. Problem: the samples might not be representative; for example, a class might be missing in the test data. Advanced version uses stratification: ensures that each class is represented with approximately equal proportions in both subsets
The holdout estimate can be made more reliable by repeating the process with different subsamples:
In each iteration, a certain proportion is randomly selected for training (possibly with stratification). The error rates on the different iterations are averaged to yield an overall error rate.
This is called the repeated holdout method. Still not optimum: the different test sets overlap
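A possible sketch of the repeated holdout method in Python (not from the slides); it assumes a scikit-learn-style classifier with fit/predict, and clf, X, y are placeholder names:

```python
# Repeated (stratified) holdout: average the error rate over several
# random train/test splits; stratify=y keeps class proportions similar.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def repeated_holdout_error(clf, X, y, n_repeats=10, test_size=1/3, seed=0):
    rng = np.random.RandomState(seed)
    errors = []
    for _ in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, stratify=y,
            random_state=rng.randint(10**6))
        clf.fit(X_tr, y_tr)
        errors.append(1 - accuracy_score(y_te, clf.predict(X_te)))
    return np.mean(errors)  # overall error rate
```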
Cross-validation
First step: split data into k subsets of equal size Second step: use each subset in turn for testing, the remainder for training
Called k-fold cross-validation. Often the subsets are stratified before the cross-validation is performed. The error estimates are averaged to yield an overall error estimate
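As a hedged illustration (not part of the slides), a short Python sketch of stratified k-fold cross-validation; clf, X, y are assumed placeholders and scikit-learn stands in for the book's own tools:

```python
# Stratified k-fold cross-validation: each instance is used exactly once
# for testing; the k error estimates are averaged.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

def cross_validation_error(clf, X, y, k=10):
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    errors = []
    for train_idx, test_idx in skf.split(X, y):
        clf.fit(X[train_idx], y[train_idx])
        errors.append(1 - accuracy_score(y[test_idx], clf.predict(X[test_idx])))
    return np.mean(errors)  # overall error estimate
```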
More on cross-validation
Standard method for evaluation: stratified ten-fold cross-validation. Why ten? Extensive experiments have shown that this is the best choice to get an accurate estimate. There is also some theoretical evidence for this
Stratification reduces the estimate's variance. Even better: repeated stratified cross-validation
E.g. ten-fold cross-validation is repeated ten times and results are averaged (reduces the variance)
Leave-One-Out cross-validation
Set number of folds to number of training instances I.e., for n training instances, build classifier n times
Makes best use of the data Involves no random subsampling Very computationally expensive
(exception: NN, i.e. nearest-neighbor classifiers, where no explicit model needs to be rebuilt)
Disadvantage: stratification is not possible. It guarantees a non-stratified sample because there is only one instance in the test set!
Extreme (artificial) example: a completely random dataset split equally into two classes. The best inducer predicts the majority class, giving 50% accuracy on fresh data, yet the Leave-One-Out CV estimate is 100% error!
The bootstrap
Cross-validation uses sampling without replacement: the same instance, once selected, cannot be selected again for a particular training/test set.
The bootstrap uses sampling with replacement to form the training set:
Sample a dataset of n instances n times with replacement to form a new dataset of n instances
Use this data as the training set
Use the instances from the original dataset that don't occur in the new training set for testing
A particular instance has a probability of 1 − 1/n of not being picked. Thus its probability of ending up in the test data is:
(1 − 1/n)ⁿ ≈ e⁻¹ ≈ 0.368
This means the training data will contain approximately 63.2% of the instances
The error estimate on the test data will be rather pessimistic, since the classifier is trained on only ~63% of the instances. Therefore, combine it with the resubstitution error: the resubstitution error gets less weight than the error on the test data. Repeat the process several times with different replacement samples; average the results.
err = 0.632 × e_test instances + 0.368 × e_training instances
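A minimal sketch of the 0.632 bootstrap estimator (illustrative only; clf, X, y are placeholder names and scikit-learn metrics are assumed):

```python
# 0.632 bootstrap: train on n instances sampled with replacement, test on
# the left-out instances, combine test and resubstitution error.
import numpy as np
from sklearn.metrics import accuracy_score

def bootstrap_632_error(clf, X, y, n_repeats=50, seed=0):
    rng = np.random.RandomState(seed)
    n = len(y)
    estimates = []
    for _ in range(n_repeats):
        train_idx = rng.randint(0, n, size=n)             # sampling with replacement
        test_idx = np.setdiff1d(np.arange(n), train_idx)  # instances not picked
        clf.fit(X[train_idx], y[train_idx])
        e_test = 1 - accuracy_score(y[test_idx], clf.predict(X[test_idx]))
        e_train = 1 - accuracy_score(y[train_idx], clf.predict(X[train_idx]))
        estimates.append(0.632 * e_test + 0.368 * e_train)
    return np.mean(estimates)  # averaged over the replacement samples
```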
Probably the best way of estimating performance for very small datasets. However, it has some problems.
Consider the random dataset from above: a perfect memorizer will achieve 0% resubstitution error and ~50% error on test data. Bootstrap estimate for this classifier:
err = 0.632 × 50% + 0.368 × 0% = 31.6%
The true expected error is 50%
Comparing data mining schemes
Frequent question: which of two learning schemes performs better? Note: this is domain dependent! Obvious way: compare 10-fold CV estimates. Generally sufficient in applications (we don't lose much if the chosen method is not truly better). However, what about machine learning research? Need to show convincingly that a particular method works better
Comparing schemes II
Want to show that scheme A is better than scheme B in a particular domain, for a given amount of training data, on average across all possible training sets. Let's assume we have an infinite amount of data from the domain: sample infinitely many datasets of the specified size, obtain a cross-validation estimate on each dataset for each scheme, and check whether the mean accuracy for scheme A is better than the mean accuracy for scheme B
Paired t-test
In practice we have limited data and a limited number of estimates for computing the mean. Student's t-test tells us whether the means of two samples are significantly different. In our case the samples are cross-validation estimates for different datasets from the domain. Use a paired t-test because the individual samples are paired
William Gosset
Born: 1876 in Canterbury; Died: 1937 in Beaconsfield, England Obtained a post as a chemist in the Guinness brewery in Dublin in 1899. Invented the t-test to handle small samples for quality control in brewing. Wrote under the name "Student".
Estimated variances of the means are σx²/k and σy²/k. If μx and μy are the true means, then
(mx − μx) / √(σx²/k)   and   (my − μy) / √(σy²/k)
are approximately normally distributed with mean 0 and variance 1
Student's distribution
With small samples (k < 100) the mean follows Student's distribution with k − 1 degrees of freedom. Confidence limits:
Pr[X ≥ z]    z (9 degrees of freedom)    z (normal distribution)
0.1%         4.30                        3.09
0.5%         3.25                        2.58
1%           2.82                        2.33
5%           1.83                        1.65
10%          1.38                        1.28
20%          0.88                        0.84
Let md = mx − my.
The difference of the means (md) also has a Student's distribution with k − 1 degrees of freedom. Let σd² be the variance of the difference.
The standardized version of md is called the t-statistic:
t = md / √(σd² / k)
We use t to perform the t-test
Fix a significance level α (divide it by two because the test is two-tailed). Look up the value for z that corresponds to α/2. If t ≤ −z or t ≥ z then the difference is significant,
i.e. the null hypothesis (that the difference is zero) can be rejected
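For illustration, a small Python sketch of the paired t-test on k paired accuracy estimates (acc_a[i] and acc_b[i] coming from the same dataset i); scipy is assumed and the function name is a placeholder:

```python
# Paired t-test: t = m_d / sqrt(sigma_d^2 / k), compared against the
# Student's-distribution critical value with k-1 degrees of freedom.
import numpy as np
from scipy import stats

def paired_t_test(acc_a, acc_b, alpha=0.05):
    d = np.asarray(acc_a) - np.asarray(acc_b)
    k = len(d)
    t = d.mean() / np.sqrt(d.var(ddof=1) / k)
    z = stats.t.ppf(1 - alpha / 2, df=k - 1)  # two-tailed critical value
    return t, abs(t) >= z                     # True -> difference significant

# scipy's built-in equivalent: stats.ttest_rel(acc_a, acc_b)
```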
Unpaired observations
If the CV estimates are from different datasets, they are no longer paired (or maybe we have k estimates for one scheme and j estimates for the other). Then we have to use an unpaired t-test with min(k, j) − 1 degrees of freedom. The estimate of the variance of the difference of the means becomes:
σx²/k + σy²/j
Dependent estimates
We assumed that we have enough data to create several datasets of the desired size. Need to re-use data if that's not the case, e.g. running cross-validations with different randomizations on the same data. Samples become dependent → insignificant differences can become significant. A heuristic test is the corrected resampled t-test: assume we use the repeated hold-out method, with n1 instances for training and n2 for testing. The new test statistic is:
t = md / √( (1/k + n2/n1) · σd² )
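A sketch of the corrected resampled t-test statistic (illustrative only; it assumes k repeated hold-out runs with n1 training and n2 test instances, and acc_a, acc_b are placeholder accuracy arrays):

```python
# Corrected resampled t-test: the variance term is inflated by n2/n1 to
# account for the dependence between the k hold-out runs.
import numpy as np
from scipy import stats

def corrected_resampled_t_test(acc_a, acc_b, n1, n2, alpha=0.05):
    d = np.asarray(acc_a) - np.asarray(acc_b)
    k = len(d)
    t = d.mean() / np.sqrt((1.0 / k + n2 / n1) * d.var(ddof=1))
    z = stats.t.ppf(1 - alpha / 2, df=k - 1)
    return t, abs(t) >= z
```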
Predicting probabilities
Performance measure so far: success rate. Also called the 0-1 loss function: sum over all instances of 0 if the prediction is correct and 1 if it is incorrect.
Most classifiers produce class probabilities. Depending on the application, we might want to check the accuracy of the probability estimates; 0-1 loss is not the right thing to use in those cases
p1 … pk are probability estimates for an instance. c is the index of the instance's actual class. a1 … ak = 0, except for ac, which is 1. The quadratic loss is:
Σj (pj − aj)² = Σ_{j≠c} pj² + (1 − pc)²
We want to minimize E[ Σj (pj − aj)² ].
It can be shown that this is minimized when pj = pj*, the true probabilities
The informational loss function is −log₂(pc), where c is the index of the instance's actual class: the number of bits required to communicate the actual class. Let p1* … pk* be the true class probabilities. Then the expected value of the loss function is:
−p1* log₂ p1 − … − pk* log₂ pk
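A small illustrative sketch (not from the slides) computing both loss functions for a single instance; p is the vector of predicted class probabilities and c the index of the actual class:

```python
# Quadratic and informational loss for a single instance.
import numpy as np

def quadratic_loss(p, c):
    a = np.zeros_like(p)
    a[c] = 1.0
    return np.sum((p - a) ** 2)   # = sum_{j != c} p_j^2 + (1 - p_c)^2

def informational_loss(p, c):
    return -np.log2(p[c])         # bits needed to communicate the actual class

p = np.array([0.7, 0.2, 0.1])
print(quadratic_loss(p, 0), informational_loss(p, 0))  # 0.14, about 0.515
```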
Discussion
Both encourage honesty. The quadratic loss function takes into account all class probability estimates for an instance. The informational loss focuses only on the probability estimate for the actual class. The quadratic loss is bounded: it can never exceed 1 + Σj pj² ≤ 2. The informational loss can be infinite
In practice, different types of classification errors often incur different costs. Examples:
Terrorist profiling ("not a terrorist" is correct 99.99% of the time)
Loan decisions, fault diagnosis, promotional mailing
Two confusion matrices for a 3-class problem: actual predictor (left) vs. random predictor (right)
Kappa statistic: (D_observed − D_random) / (D_perfect − D_random), i.e. the improvement of the actual predictor over a random predictor, relative to a perfect predictor
Cost-sensitive classification
Basic idea: only predict high-cost class when very confident about prediction Normally we just predict the most likely class Here, we should make the prediction that minimizes the expected cost
Expected cost: dot product of vector of class probabilities and appropriate column in cost matrix Choose column (class) that minimizes expected cost
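An illustrative Python sketch of this rule (the probabilities and cost matrix are made-up examples; C[i, j] is assumed to be the cost of predicting class j when the true class is i):

```python
# Cost-sensitive prediction: choose the class with minimum expected cost.
import numpy as np

def min_expected_cost_class(probs, cost_matrix):
    expected_costs = probs @ cost_matrix   # dot product with each column
    return int(np.argmin(expected_costs))

probs = np.array([0.2, 0.8])               # P(class 0), P(class 1)
C = np.array([[0, 1],                      # true class 0: cheap errors
              [10, 0]])                    # true class 1: costly to miss
print(min_expected_cost_class(probs, C))   # expected costs [8.0, 0.2] -> class 1
```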
Cost-sensitive learning
So far we haven't taken costs into account at training time Most learning schemes do not perform costsensitive learning
They generate the same classifier no matter what costs are assigned to the different classes (example: standard decision tree learner). Simple methods for making a scheme cost-sensitive: resampling of instances according to costs, or weighting of instances according to costs.
Some schemes can take costs into account by varying a parameter, e.g. naïve Bayes
Lift charts
In practice, costs are rarely known. Decisions are usually made by comparing possible scenarios. Example: promotional mailout to 1,000,000 households.
Mail to all: 0.1% respond (1000). A data mining tool identifies a subset of 100,000 most promising households, of which 0.4% respond (400).
40% of the responses for 10% of the cost may pay off. A lift chart allows a visual comparison (see the sketch below)
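For illustration only, a sketch of how the points of a lift chart could be computed; probs (predicted response probabilities) and responded (0/1 outcomes) are assumed placeholder arrays:

```python
# Lift chart points: sort by predicted probability (most promising first)
# and plot the cumulative share of responses against the sample size.
import numpy as np

def lift_chart_points(probs, responded):
    order = np.argsort(probs)[::-1]
    cum_responses = np.cumsum(np.asarray(responded)[order])
    sample_fraction = np.arange(1, len(probs) + 1) / len(probs)
    return sample_fraction, cum_responses / cum_responses[-1]
```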
ROC curves
Stands for "receiver operating characteristic". Used in signal detection to show the tradeoff between hit rate and false alarm rate over a noisy channel. Differences to the lift chart: the y axis shows the percentage of true positives in the sample
rather than the absolute number, and the x axis shows the percentage of false positives rather than the sample size
Simple method of getting an ROC curve using cross-validation: collect the probabilities for the instances in the test folds and sort the instances according to these probabilities (a sketch follows below).
Another possibility is to generate an ROC curve for each fold and average them
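As a sketch of the first method (probabilities collected over the test folds; the array names are placeholders):

```python
# ROC curve points: sort by predicted probability of the positive class;
# each prefix of the sorted list gives one (FP rate, TP rate) point.
import numpy as np

def roc_points(probs, is_positive):
    order = np.argsort(probs)[::-1]
    labels = np.asarray(is_positive)[order]
    tp = np.cumsum(labels)        # true positives so far
    fp = np.cumsum(1 - labels)    # false positives so far
    return fp / fp[-1], tp / tp[-1]   # x = FP rate, y = TP rate
```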
ROC curves for two schemes: for a small, focused sample, use method A; for a larger one, use method B; in between, choose between A and B with appropriate probabilities
If scheme 1 (TP rate t1, FP rate f1) is used for a randomly chosen fraction q of the cases and scheme 2 (t2, f2) for the rest: TP rate for combined scheme: q × t1 + (1 − q) × t2. FP rate for combined scheme: q × f1 + (1 − q) × f2
More measures...
Percentage of retrieved documents that are relevant: precision = TP / (TP + FP)
Percentage of relevant documents that are returned: recall = TP / (TP + FN)
Precision/recall curves have hyperbolic shape
Summary measures: average precision at 20%, 50% and 80% recall (three-point average recall)
F-measure = (2 × recall × precision) / (recall + precision)
sensitivity × specificity = (TP / (TP + FN)) × (TN / (FP + TN))
Area under the ROC curve (AUC): probability that a randomly chosen positive instance is ranked above a randomly chosen negative one
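A minimal sketch computing several of the measures above from the four counts of a two-class confusion matrix (the counts in the example call are invented):

```python
# Precision, recall, F-measure and sensitivity x specificity from TP/FP/FN/TN.
def retrieval_measures(TP, FP, FN, TN):
    precision = TP / (TP + FP)
    recall = TP / (TP + FN)                     # = sensitivity = TP rate
    f_measure = 2 * recall * precision / (recall + precision)
    sens_spec = (TP / (TP + FN)) * (TN / (FP + TN))
    return precision, recall, f_measure, sens_spec

print(retrieval_measures(TP=40, FP=10, FN=20, TN=30))
```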
Cost curves
Cost curves plot expected costs directly. In the simplest case with uniform costs, the expected cost is just the classification error
Other measures
The mean absolute error is less sensitive to outliers than the mean-squared error:
mean absolute error = ( |p1 − a1| + … + |pn − an| ) / n
mean-squared error = ( (p1 − a1)² + … + (pn − an)² ) / n
Sometimes relative error values are more appropriate (e.g. 10% for an error of 50 when predicting 500)
How much does the scheme improve on simply predicting the average? The relative squared error is:
( (p1 − a1)² + … + (pn − an)² ) / ( (a1 − ā)² + … + (an − ā)² )
where ā is the average of the actual values
Correlation coefficient
Measures the statistical correlation between the predicted values and the actual values
Correlation coefficient = SPA / √(SP · SA), where
SPA = Σi (pi − p̄)(ai − ā) / (n − 1)
SP = Σi (pi − p̄)² / (n − 1)
SA = Σi (ai − ā)² / (n − 1)
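A sketch (illustrative, not from the slides) computing the numeric-prediction measures above from predicted values p and actual values a; here ā is simply the mean of the given actual values:

```python
# RMSE, MAE, relative squared/absolute error and correlation coefficient.
import numpy as np

def numeric_prediction_measures(p, a):
    p, a = np.asarray(p, float), np.asarray(a, float)
    n = len(a)
    rmse = np.sqrt(np.mean((p - a) ** 2))
    mae = np.mean(np.abs(p - a))
    rse = np.sum((p - a) ** 2) / np.sum((a - a.mean()) ** 2)    # relative squared error
    rae = np.sum(np.abs(p - a)) / np.sum(np.abs(a - a.mean()))  # relative absolute error
    s_pa = np.sum((p - p.mean()) * (a - a.mean())) / (n - 1)
    s_p = np.sum((p - p.mean()) ** 2) / (n - 1)
    s_a = np.sum((a - a.mean()) ** 2) / (n - 1)
    corr = s_pa / np.sqrt(s_p * s_a)
    return rmse, mae, rse, rae, corr
```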
Which measure?
Root mean-squared error Mean absolute error Root rel squared error Relative absolute error Correlation coefficient
The Minimum Description Length (MDL) principle: the description length is defined as the space required to describe a theory plus the space required to describe the theory's mistakes.
In our case the theory is the classifier and the mistakes are the errors on the training data. Aim: we seek a classifier with minimal description length. The MDL principle is a model selection criterion
Reasoning: a good model is a simple model that achieves high accuracy on the given data. Also known as Occam's Razor: the best theory is the smallest one that describes all the facts
William of Ockham, born in the village of Ockham in Surrey (England) about 1285, was the most influential philosopher of the 14th century and a controversial theologian.
Theory 1: a very simple, elegant theory that explains the data almost perfectly. Theory 2: a significantly more complex theory that reproduces the data without mistakes. Theory 1 is probably preferable. Classical example: Kepler's three laws of planetary motion,
which were less accurate than Copernicus's latest refinement of the Ptolemaic theory of epicycles, but far simpler
The best theory is the one that compresses the data the most I.e. to compress a dataset we generate a model and then store the model and its mistakes
We need to compute (a) size of the model, and (b) space needed to encode the errors (b) easy: use the informational loss function (a) need a method to encode the model
L[T] = length of the theory. L[E|T] = length of the training set encoded with respect to the theory. Description length = L[T] + L[E|T]. Bayes's theorem gives the a posteriori probability of a theory given the data:
Pr[T|E] = Pr[E|T] · Pr[T] / Pr[E]
Equivalent to (taking negative logarithms):
−log₂ Pr[T|E] = −log₂ Pr[E|T] − log₂ Pr[T] + log₂ Pr[E]
MAP stands for maximum a posteriori probability Finding the MAP theory corresponds to finding the MDL theory Difficult bit in applying the MAP principle: determining the prior probability Pr[T] of the theory Corresponds to difficult part in applying the MDL principle: coding scheme for the theory I.e. if we know a priori that a particular theory is more likely we need fewer bits to encode it
Advantage: makes full use of the training data when selecting a model. Disadvantage 1: an appropriate coding scheme/prior probabilities for theories are crucial. Disadvantage 2: no guarantee that the MDL theory is the one which minimizes the expected error. Note: Occam's Razor is an axiom! Epicurus's principle of multiple explanations: keep all theories that are consistent with the data
Description length of theory: bits needed to encode the clusters (e.g. cluster centers). Description length of data given theory: encode cluster membership and position relative to cluster (e.g. distance to cluster center)
Works if coding scheme uses less code space for small numbers than for large ones With nominal attributes, must communicate probability distributions for each cluster