Machine Learning With WEKA An Introduction
2005/9/30
WEKA is available at
http://www.cs.waikato.ac.nz/ml/weka/
Data can be imported from a file in various formats: ARFF, CSV, C4.5, binary
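ARFF, WEKA's native format, declares the relation name, its attributes, and the data rows. A minimal example (the well-known weather attributes, abbreviated here purely for illustration):

```
@relation weather

@attribute outlook {sunny, overcast, rainy}
@attribute temperature numeric
@attribute play {yes, no}

@data
sunny,85,no
overcast,83,yes
rainy,70,yes
```

Nominal attributes list their possible values in braces; numeric attributes are declared with the keyword `numeric`.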
Pre-processing tools in WEKA are called filters; these include discretization, normalization, resampling, attribute selection, and transforming and combining attributes.
Set parameters
Equal frequency
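Equal-frequency discretization places (roughly) the same number of instances in each bin, rather than splitting the value range into equal-width intervals. A minimal sketch of the idea (not WEKA's Discretize filter):

```python
def equal_frequency_bins(values, n_bins):
    """Return cut points that split the sorted values into
    n_bins bins with (roughly) equal instance counts."""
    ordered = sorted(values)
    n = len(ordered)
    cuts = []
    for i in range(1, n_bins):
        idx = round(i * n / n_bins)   # index where the next bin starts
        cuts.append(ordered[idx])
    return cuts

def discretize(value, cuts):
    """Map a numeric value to its bin index given the cut points."""
    for i, c in enumerate(cuts):
        if value < c:
            return i
    return len(cuts)
```

With ties in the data the bins can end up uneven; real filters also have to decide how to handle duplicated cut values.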
Building classifiers
Classifiers in WEKA are models for predicting nominal or numeric quantities. Implemented learning schemes include:
Decision trees and lists, instance-based classifiers, support vector machines, multi-layer perceptrons, logistic regression, and Bayes nets
Meta-classifiers include:
Bagging, boosting, stacking, error-correcting output codes, and locally weighted learning
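The idea behind a meta-classifier such as bagging can be sketched in a few lines: train each base model on a bootstrap sample and combine their predictions by majority vote. This sketch uses a deliberately trivial nearest-centroid base learner for illustration; it is not WEKA's implementation.

```python
import random
from collections import Counter

def nearest_centroid_fit(data):
    """Trivial base learner: one centroid per class (illustration only).
    data is a list of (feature_tuple, label) pairs."""
    centroids = {}
    for label in set(y for _, y in data):
        pts = [x for x, y in data if y == label]
        centroids[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return centroids

def nearest_centroid_predict(centroids, x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], x))

def bagging_fit(data, n_models, rng):
    """Train each base model on a bootstrap sample
    (same size as the data, drawn with replacement)."""
    models = []
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]
        models.append(nearest_centroid_fit(sample))
    return models

def bagging_predict(models, x):
    """Combine the base predictions by majority vote."""
    votes = Counter(nearest_centroid_predict(m, x) for m in models)
    return votes.most_common(1)[0][0]
```

Bagging mainly helps unstable learners (e.g. decision trees), where small changes in the training sample change the model a lot.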
Output setting
Fitted result
Clustering data
WEKA contains clusterers for finding groups of similar instances in a dataset. Implemented schemes include:
FarthestFirst
Clusters can be visualized and compared to true clusters (if given). Evaluation is based on log-likelihood if the clustering scheme produces a probability distribution.
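The idea behind a clusterer such as SimpleKMeans can be sketched as alternating assignment and centroid-update steps. This is a naive sketch (first-k seeding, plain Euclidean distance); WEKA's implementation also handles nominal attributes, missing values, and smarter seeding.

```python
def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Component-wise mean of a non-empty list of points."""
    return tuple(sum(col) / len(pts) for col in zip(*pts))

def kmeans(points, k, n_iter=20):
    """Naive k-means: seed with the first k points, then alternate
    assigning points to the nearest centroid and recomputing centroids."""
    centroids = list(points[:k])
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[idx].append(p)
        # keep the old centroid if a cluster went empty
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters
```

A fixed iteration count keeps the sketch short; a real implementation stops when the assignments no longer change.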
Finding associations
WEKA contains an implementation of the Apriori algorithm for learning association rules
Works only with discrete data
Apriori can compute all rules that have a given minimum support and exceed a given minimum confidence
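The support/confidence computation can be illustrated by a small sketch restricted to single-item rules derived from frequent pairs (full Apriori grows candidate itemsets of arbitrary size; this is not WEKA's implementation):

```python
from itertools import combinations

def pair_rules(transactions, min_support, min_confidence):
    """Find rules {a} -> {b} among item pairs that meet
    the minimum support and minimum confidence thresholds."""
    n = len(transactions)
    item_count = {}
    pair_count = {}
    for t in transactions:
        items = sorted(set(t))
        for a in items:
            item_count[a] = item_count.get(a, 0) + 1
        for a, b in combinations(items, 2):
            pair_count[(a, b)] = pair_count.get((a, b), 0) + 1
    rules = []
    for (a, b), c in pair_count.items():
        support = c / n                      # fraction of transactions with both items
        if support < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):
            confidence = c / item_count[lhs]  # P(rhs | lhs)
            if confidence >= min_confidence:
                rules.append((lhs, rhs, support, confidence))
    return rules
```

Support prunes the search (a superset can never be more frequent than its subsets), which is the key insight Apriori exploits.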
Attribute selection
A panel for investigating which (subsets of) attributes are the most predictive. Attribute selection methods consist of two parts:
A search method: best-first, forward selection, random, exhaustive, genetic algorithm, ranking
An evaluation method: correlation-based, wrapper, information gain, chi-squared
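The information-gain evaluator scores an attribute by how much knowing its value reduces the entropy of the class distribution. A minimal sketch of the computation for a nominal attribute:

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(attr_values, labels):
    """Entropy of the class minus the expected entropy
    after splitting on the attribute."""
    n = len(labels)
    split = {}
    for v, y in zip(attr_values, labels):
        split.setdefault(v, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder
```

An attribute that perfectly predicts the class scores the full class entropy; an irrelevant attribute scores zero.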
Data visualization
Visualization is very useful in practice: e.g. it helps to determine the difficulty of the learning problem. WEKA can visualize single attributes (1-d) and pairs of attributes (2-d)
To do: rotating 3-d visualizations (Xgobi-style)
Color-coded class values
Jitter option to deal with nominal attributes (and to detect hidden data points)
Zoom-in function
Performing experiments
The Experimenter makes it easy to compare the performance of different learning schemes
For classification and regression problems
Results can be written to a file or database
Evaluation options: cross-validation, learning curve, hold-out
Can also iterate over different parameter settings
Significance testing built in!
Step 1: set output file
Step 2: add dataset
Step 3: choose algorithm
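The built-in significance test compares two schemes over per-fold scores. A plain paired t statistic illustrates the idea (WEKA's Experimenter actually defaults to a corrected resampled t-test, which adjusts for the overlap between cross-validation folds):

```python
from math import sqrt

def paired_t(scores_a, scores_b):
    """Paired t statistic over per-fold score differences.
    Large |t| suggests the schemes differ significantly."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / sqrt(var / n)
```

The statistic is then compared against the t distribution with n - 1 degrees of freedom at the chosen significance level.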
Experiment status
Visualize data
Set parameter
Finish
Thank you