How to Become a Tree Hugger: Random Forests and
Predictive Modeling for Developers
@__mharrison__
@aaronj1331
Objective
Provide code so developers can see an end-to-end
machine learning example
About Matt
● Author, consultant and trainer in Python and
Data Science.
● Python since 2000
● Experience across Data Science, BI, Web, Open
Source Stack Management, and Search.
● http://metasnake.com/
Most Recent Book
Check out
Learning the Pandas Library.
Outline
● Machine Learning in a Nutshell
● Which algorithm to use?
● The Titanic
● Decision Trees
● Random Forests
● Conclusion
Machine Learning
Supervised Classification
Sometimes called "Predictive Modeling."
1. Identify patterns from labeled examples. (Training Set => Model)
2. Based on those patterns, try to guess labels for other examples. (Predict; see the sketch below)
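A minimal sketch of that train-then-predict flow, using a toy dataset with made-up [age, fare] values (illustration only, not the Titanic model built later):

>>> from sklearn.tree import DecisionTreeClassifier
>>> # Toy labeled examples: made-up [age, fare] pairs and survival labels
>>> X_toy = [[22, 7.25], [38, 71.3], [26, 7.92], [35, 53.1]]
>>> y_toy = [0, 1, 1, 1]
>>> toy = DecisionTreeClassifier(random_state=0)
>>> _ = toy.fit(X_toy, y_toy)          # 1. identify patterns (train)
>>> pred = toy.predict([[30, 10.0]])   # 2. guess a label for a new example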
Examples
Binary Classification
● Is this student at high risk for dropping out?
● Is this individual at high risk for defaulting on a loan?
● Is this person at high risk for becoming infected with a certain
disease?
Multi-class Classification
● What is the most likely disease given this individual's current
symptoms?
● Which ad is the user most likely to click on?
So which algorithm
should I use to build a
model?
No Free Lunch
If an algorithm performs well on a certain class of
problems then it necessarily pays for that with
degraded performance on the set of all remaining
problems.
David Wolpert and William Macready. No Free Lunch Theorems for
Optimization. IEEE Transactions on Evolutionary Computation, 1:67, 1997.
No Free Lunch
No single algorithm will build the best model on
every dataset.
Which algorithm?
But what if there is something better out there?
Premise
Manuel Fernández-Delgado, Eva Cernadas, Senén
Barro, and Dinani Amorim. Do we Need
Hundreds of Classifiers to Solve Real World
Classification Problems? Journal of Machine
Learning Research, 15(Oct):3133−3181, 2014.
http://jmlr.csail.mit.edu/papers/v15/delgado14a.ht
Premise
It turns out that Random Forests are usually a
good place to start.
Premise
Thunderdome with:
● 179 classifiers from 17 classifier families
● 121 datasets from the UCI repository
Premise
"The Random Forest is clearly the best family of
classifiers, followed by SVM, neural networks,
and boosting ensembles."
● On average, RF achieved 94.1% of the
theoretical maximum accuracy for each dataset.
● RF achieved over 90% of maximum accuracy in
84.3% of datasets.
The RMS Titanic
Classification Task
Predict who did and did not survive the disaster
on the Titanic
(and see if we can get some idea of why it turned
out that way)
Code!!!
Get Data
>>> import pandas as pd
>>> df = pd.read_excel('data/titanic3.xls')
http://biostat.mc.vanderbilt.edu/wiki/pub/Main/DataSets/titanic3.xls
Columns
● pclass: Passenger class (1 = first; 2 = second; 3 = third)
● name: Name
● sex: Sex
● age: Age
● sibsp: Number of siblings/spouses aboard
● parch: Number of parents/children aboard
● ticket: Ticket number
● fare: Passenger fare
● cabin: Cabin
● embarked: Port of embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
● boat: Lifeboat (if survived)
● body: Body number (if did not survive and body was recovered)
Label Column
The column giving the label for our classification
task:
● survived: Survival (0 = no; 1 = yes)
Exploring
>>> df.shape
(1309, 14)
>>> df.embarked.value_counts()
S 914
C 270
Q 123
Name: embarked, dtype: int64
Exploring
>>> df.cabin.value_counts()
C23 C25 C27 6
B57 B59 B63 B66 5
G6 5
B96 B98 4
F2 4
F4 4
C22 C26 4
F33 4
D 4
C78 4
E101 3
B58 B60 3
...
Name: cabin, dtype: int64
Question
Can we build a model that will predict survival?
Decision Trees
Decision Trees
>>> from sklearn import tree
>>> model = tree.DecisionTreeClassifier(random_state=42)
>>> ignore = set('boat,body,home.dest,name,ticket'.split(','))
>>> cols = [c for c in df.columns if c != 'survived' and c not in ignore]
>>> X = df[cols]
>>> y = df.survived
>>> model.fit(X, y)
Traceback (most recent call last):
. . .
ValueError: could not convert string to float: 'S'
Create Dummy Variables
>>> dummy_cols = 'pclass,sex,cabin,embarked'.split(",")
>>> df2 = pd.get_dummies(df, columns=dummy_cols)
>>> model = tree.DecisionTreeClassifier(random_state=42)
>>> ignore = set('boat,body,home.dest,name,ticket'.split(','))
>>> cols = [c for c in df2.columns if c != 'survived' and c
... not in ignore and c not in dummy_cols]
>>> X = df2[cols]
>>> y = df2.survived
Try Again
>>> model.fit(X, y)
Traceback (most recent call last):
. . .
ValueError: Input contains NaN, infinity or a value
too large for dtype('float32').
Imputing
A fancy term for filling in missing values. The mean is a reasonable
choice for decision trees because it doesn't skew the splits the way
an arbitrary value like 0 would.
Try Again
>>> X = X.fillna(X.mean())
>>> X.dtypes
age float64
sibsp int64
parch int64
fare float64
pclass_1 float64
pclass_2 float64
pclass_3 float64
sex_female float64
sex_male float64
cabin_A10 float64
cabin_A11 float64
cabin_A14 float64
Try Again
>>> model.fit(X, y)
DecisionTreeClassifier(class_weight=None,
criterion='gini', max_depth=None,
max_features=None, max_leaf_nodes=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0,
presort=False, random_state=42,
splitter='best')
What Does the Tree Look Like?
>>> tree.export_graphviz(model,
... out_file='/tmp/tree1.dot',
... feature_names=X.columns,
... class_names=['Died', 'Survived'],
... filled=True)
>>> import subprocess
>>> _ = subprocess.check_output(
... 'dot -Tpng -oimg/tree1.png /tmp/tree1.dot'.split())
[Figure: decision tree rendered to img/tree1.png]
Does it Generalize?
Need a Test Set
>>> from sklearn import cross_validation
>>> X_train, X_test, y_train, y_test = \
... cross_validation.train_test_split(
... X, y, test_size=.3, random_state=42)
>>> _ = model.fit(X_train, y_train)
>>> model.score(X_test, y_test)
0.76844783715012721
Another Model
>>> model2 = tree.DecisionTreeClassifier(
... random_state=42, max_depth=3)
>>> _ = model2.fit(X_train, y_train)
>>> model2.score(X_test, y_test)
0.81424936386768443
What Does the Tree Look Like?
>>> tree.export_graphviz(model2,
... out_file='/tmp/tree2.dot',
... feature_names=X.columns,
... class_names=['Died', 'Survived'],
... filled=True)
>>> import subprocess
>>> _ = subprocess.check_output(
... 'dot -Tpng -oimg/tree2.png /tmp/tree2.dot'.split())
[Figure: depth-3 decision tree rendered to img/tree2.png]
Adjusting Parameters
Adjust Parameters
>>> import numpy as np
>>> from sklearn.learning_curve import validation_curve
>>> model3 = tree.DecisionTreeClassifier(random_state=42)
>>> param_range = np.arange(1, 500, 20)
>>> param_name = 'min_samples_leaf'
>>> train_scores, test_scores = validation_curve(
... model3, X, y, param_name=param_name, param_range=param_range,
... cv=10, scoring="accuracy", n_jobs=1)
>>> train_scores_mean = np.mean(train_scores, axis=1)
>>> train_scores_std = np.std(train_scores, axis=1)
>>> test_scores_mean = np.mean(test_scores, axis=1)
>>> test_scores_std = np.std(test_scores, axis=1)
Plot Validation Curve
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111)
>>> plt.title("Validation Curve with Decision Tree")
>>> plt.xlabel(param_name)
>>> plt.ylabel("Score")
>>> plt.ylim(0.0, 1.1)
>>> plt.plot(param_range, train_scores_mean, label="Training score", color="r")
>>> plt.fill_between(param_range, train_scores_mean - train_scores_std,
... train_scores_mean + train_scores_std, alpha=0.2, color="r")
>>> plt.plot(param_range, test_scores_mean, label="Cross-validation score",
... color="g")
>>> plt.fill_between(param_range, test_scores_mean - test_scores_std,
... test_scores_mean + test_scores_std, alpha=0.2, color="g")
>>> plt.legend(loc="best")
>>> fig.savefig('img/ml-dt-param-features.png')
>>> # plt.clf()
[Figure: validation curve, img/ml-dt-param-features.png]
Overfitting & Underfitting
● Overfitting - memorizing the training data (fails to generalize)
● Underfitting - model not flexible enough (cannot capture the trend)
How Much Data Do
We Need?
Learning Curve
>>> from sklearn.learning_curve import learning_curve
>>> def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
... n_jobs=1, train_sizes=np.linspace(.01, 1.0, 10)):
... fig = plt.figure()
... plt.title(title)
... if ylim is not None:
... plt.ylim(*ylim)
... plt.xlabel("Training examples")
... plt.ylabel("Score")
... train_sizes, train_scores, test_scores = learning_curve(
... estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
... train_scores_mean = np.mean(train_scores, axis=1)
... train_scores_std = np.std(train_scores, axis=1)
... test_scores_mean = np.mean(test_scores, axis=1)
... test_scores_std = np.std(test_scores, axis=1)
... plt.grid()
...
... plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
... train_scores_mean + train_scores_std, alpha=0.1,
... color="r")
... plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
... test_scores_mean + test_scores_std, alpha=0.1, color="g")
... plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
... label="Training score")
... plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
... label="Cross-validation score")
...
... plt.legend(loc="best")
... return fig, plt
Plot it
>>> title = "Learning Curves (Decision Tree)"
>>> fig, plt = plot_learning_curve(model,
... title, X, y, ylim=(0.5, 1.01), cv=10, n_jobs=4)
>>> fig.savefig('img/ml-lc.png')
[Figure: learning curve, img/ml-lc.png]
Another Performance
Measure
ROC Curve
Receiver Operating Characteristic - the area under the curve (AUC)
summarizes performance (closer to 1.0 is better)
ROC
>>> import warnings
>>> from sklearn.metrics import auc, confusion_matrix, roc_curve
>>> def fig_with_title(ax, title, figkwargs):
... if figkwargs is None:
... figkwargs = {}
... if not ax:
... fig = plt.figure(**figkwargs)
... ax = plt.subplot(111)
... else:
... fig = plt.gcf()
... if title:
... ax.set_title(title)
... return fig, ax
ROC
>>> def plot_roc_curve_binary(clf, X, y, label='ROC Curve (area={area:.3})',
... title="ROC Curve", pos_label=None, sample_weight=None,
... ax=None, figkwargs=None, plot_guess=False):
... ax = ax or plt.subplot(111)
... ax.set_xlim([-.1, 1])
... ax.set_ylim([0, 1.1])
... y_score = clf.predict_proba(X)
... if y_score.shape[1] != 2 and not pos_label:
... warnings.warn("Shape is not binary {} and no pos_label".format(y_score.shape))
... return
... try:
... fpr, tpr, thresholds = roc_curve(y, y_score[:,1], pos_label=pos_label,
... sample_weight=sample_weight)
... except ValueError as e:
... if 'is not binary' in str(e):
... warnings.warn("Check if y is numeric")
... raise
...
... roc_auc = auc(fpr, tpr)
... fig, ax = fig_with_title(ax, title, figkwargs)
...
... ax.plot(fpr, tpr, label=label.format(area=roc_auc))
... if plot_guess:
... ax.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Guessing')
... ax.set_xlabel('False Positive Rate')
... ax.set_ylabel('True Positive Rate')
... ax.legend(loc="lower right")
... return fig, ax
ROC
>>> plt.clf()
>>> fig, ax = plot_roc_curve_binary(
... model, X_test, y_test,
... 'DT {area:.3}', plot_guess=1)
>>> fig.savefig('img/ml-roc.png')
[Figure: ROC curve for the decision tree, img/ml-roc.png]
Pros/Cons Decision Trees
Pros:
● Easy to explain
Cons:
● Tends to overfit
Random Forest
Random Forest
Introduced by Tin Kam Ho (1995) and extended by Leo Breiman and
Adele Cutler (2001).
Condorcet's Jury Theorem
From the 1785 Essay on the Application of Analysis to the
Probability of Majority Decisions. If each member of a jury has
probability p > .5 of making the correct choice, adding more jury
members increases the probability that the majority decision is
correct.
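A quick, illustrative simulation of the theorem (the 60% per-juror accuracy and the jury sizes below are made-up numbers):

>>> import numpy as np
>>> rng = np.random.RandomState(42)
>>> p = 0.6                                  # assumed per-juror accuracy
>>> for n in (1, 11, 101, 1001):
...     votes = rng.rand(10000, n) < p       # 10,000 simulated juries of size n
...     correct = (votes.sum(axis=1) > n / 2).mean()
...     print(n, round(correct, 3))          # majority accuracy climbs toward 1.0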
Random Forest
Algorithm:
● Sample N examples from the training set at random WITH
REPLACEMENT (the bootstrap - this is what enables OOB estimates)
● Select m input variables (a subset of the M total input variables) at each split
● Grow a tree
● Repeat the above to create an ensemble
● Predict by aggregating the predictions of the forest (majority vote for
classification, average for regression)
(A hand-rolled sketch follows.)
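A minimal sketch of that loop built from plain decision trees (illustration only; scikit-learn's RandomForestClassifier on the next slide is the real implementation, and the helper names here are made up):

>>> import numpy as np
>>> from sklearn import tree
>>> def grow_forest(X, y, n_trees=10, seed=42):
...     Xa, ya = np.asarray(X), np.asarray(y)   # works for DataFrames or arrays
...     rng = np.random.RandomState(seed)
...     forest, n = [], len(Xa)
...     for _ in range(n_trees):
...         idx = rng.randint(0, n, n)          # bootstrap: sample WITH replacement
...         t = tree.DecisionTreeClassifier(
...             max_features='sqrt',            # consider m of the M features per split
...             random_state=rng.randint(10**6))
...         forest.append(t.fit(Xa[idx], ya[idx]))
...     return forest
>>> def predict_forest(forest, X):
...     votes = np.array([t.predict(np.asarray(X)) for t in forest])
...     return (votes.mean(axis=0) > .5).astype(int)   # majority vote
>>> preds = predict_forest(grow_forest(X_train, y_train), X_test)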
Random Forest
>>> from sklearn import ensemble
>>> model3 = ensemble.RandomForestClassifier(random_state=42)
>>> model3.fit(X_train, y_train)
RandomForestClassifier(bootstrap=True, class_weight=None,
criterion='gini', max_depth=None, max_features='auto',
max_leaf_nodes=None, min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,
oob_score=False, random_state=42, verbose=0,
warm_start=False)
>>> model3.score(X_test, y_test)
0.75572519083969469
Feature Importance
The most important features tend to appear near the top of the decision trees
>>> print(sorted(zip(X.columns, model3.feature_importances_),
... key=lambda x: x[1], reverse=True))
[('age', 0.22344483424840464), ('fare', 0.19018725802080991), ('sex_male', 0.12990057398621174),
('sex_female', 0.12860349870512569), ('pclass_3', 0.051127382589271984), ('parch',
0.042403381656923547), ('sibsp', 0.041437135835858306), ('pclass_1', 0.026146920495887703),
('embarked_S', 0.016952460872998475), ('pclass_2', 0.014536895778953276), ('embarked_C',
0.011974575978148253), ('embarked_Q', 0.0066746190486480592), ('cabin_D56',
0.0050674850086476347), ('cabin_C22 C26', 0.0038209715167321157), ('cabin_F E57',
ROC
>>> fig, ax = plot_roc_curve_binary(
... model3, X_test, y_test,
... 'RF1 {area:.3}')
>>> fig.savefig('img/ml-roc3.png')
[Figure: ROC curve for the random forest, img/ml-roc3.png]
Tuning
Tuning
A fancy term for this is regularization - an attempt to prevent
overfitting
Tuning
● max_features - Don't use all of the features (all trees would look the same). Taking
samples of the features reduces the correlation among trees
● n_estimators - More is better, but with diminishing returns (too many jurors take
longer to train and use lots of memory)
● max_depth - Too deep and the tree overfits. You can't know a good depth ahead of time.
These parameters also constrain depth:
– min_samples_leaf - smaller values are more prone to overfitting (capturing noise)
– max_leaf_nodes - cap on the number of leaves
– min_weight_fraction_leaf - the minimum weighted fraction of the input samples required to be at a
leaf node. Note: this parameter is tree-specific.
(An example instantiation follows this list.)
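These knobs map directly onto RandomForestClassifier arguments. The values below are illustrative guesses, not recommendations (and model4 is just a name for this sketch); the grid search on the next slides picks values empirically:

>>> from sklearn import ensemble
>>> model4 = ensemble.RandomForestClassifier(
...     n_estimators=50,        # more trees, but diminishing returns
...     max_features=0.3,       # fraction of features considered at each split
...     min_samples_leaf=5,     # larger leaves are less prone to capturing noise
...     max_leaf_nodes=None,    # no cap on the number of leaves
...     random_state=42)
>>> _ = model4.fit(X_train, y_train)
>>> score = model4.score(X_test, y_test)   # accuracy on the held-out set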
Grid Search
Grid Search
>>> from sklearn.grid_search import GridSearchCV
>>> model5 = ensemble.RandomForestClassifier()
>>> params = {'max_features': [.1, .3, .5, 1],
... 'n_estimators': [10, 20, 50],
... 'min_samples_leaf': [3, 5, 9],
... 'random_state': [42]}
>>> cv = GridSearchCV(model5, params).fit(X, y)
>>> cv.best_params_
{'max_features': 0.1, 'random_state': 42, 'n_estimators': 20,
'min_samples_leaf': 3}
Grid Search
>>> model6 = ensemble.RandomForestClassifier(
... **cv.best_params_)
>>> _ = model6.fit(X_train, y_train)
>>> model6.score(X_test, y_test)
0.77608142493638677
ROC
>>> fig, ax = plot_roc_curve_binary(
... model6, X_test, y_test,
... 'RF (tuned) {area:.3}')
>>> fig.savefig('img/ml-roc6.png')
[Figure: ROC curve for the tuned random forest, img/ml-roc6.png]
Summary
Summary
● Scalable (trees can be built in parallel, one per CPU)
● Reduces variance compared to a single decision tree
● No normalization of data needed (a money range of $0 - $10,000,000 won't
override an age range of 0-100)
● Feature importance (via the mean decrease in impurity at the nodes where a
feature appears)
● Helps with missing data, outliers, and dimension reduction
● Works for both regression and classification
● Bootstrap sampling allows an "out of bag" (OOB) estimate, reducing the need for a
separate test set (sketch below)
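A short sketch of that out-of-bag estimate, reusing the X and y built earlier (the OOB accuracy approximates held-out performance but won't exactly match the test-set score):

>>> from sklearn import ensemble
>>> model_oob = ensemble.RandomForestClassifier(
...     n_estimators=50, oob_score=True, random_state=42)
>>> _ = model_oob.fit(X, y)            # no separate test set needed for this estimate
>>> oob_acc = model_oob.oob_score_     # accuracy estimated from out-of-bag samples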
Why Python?
● Efficient implementation in scikit-learn
● Close to the metal: 3000+ lines of Cython
● Faster than OpenCV (C++), Weka (Java), and
randomForest (R/Fortran)
Thanks
Feel free to follow up on Twitter
@__mharrison__
@aaronj1331