Machine Learning Quick Reference

You're reading from Machine Learning Quick Reference: Quick and essential machine learning hacks for training smart data models

Product type: Paperback
Published in: Jan 2019
Publisher: Packt
ISBN-13: 9781788830577
Length: 294 pages
Edition: 1st Edition
Author: Kumar Kumar
Table of Contents (13 chapters)

Preface
1. Quantifying Learning Algorithms
2. Evaluating Kernel Learning
3. Performance in Ensemble Learning
4. Training Neural Networks
5. Time Series Analysis
6. Natural Language Processing
7. Temporal and Sequential Pattern Discovery
8. Probabilistic Graphical Models
9. Selected Topics in Deep Learning
10. Causal Inference
11. Advanced Methods
12. Other Books You May Enjoy

Training data, development data, and test data

This is one of the most important steps in building a model, and it can lead to plenty of debate about whether we really need all three sets (train, dev, and test) and, if so, how those datasets should be divided. Let's understand these concepts.

After we have sufficient data to start modelling, the first thing we need to do is partition the data into three segments: the training set, the development set, and the test set.

Let's examine the goal of having these three sets:

  1. Training set: The training set is used to train the model. When we apply any algorithm, we fit its parameters on the training set; in the case of a neural network, this is where the weights are learned.

Say, for example, that we are trying to fit polynomials of various degrees:

    • f(x) = a + bx → 1st-degree polynomial
    • f(x) = a + bx + cx² → 2nd-degree polynomial
    • f(x) = a + bx + cx² + dx³ → 3rd-degree polynomial

After fitting these models, we calculate the training error for each of them; for regression fits like these, the training error is typically the mean squared error between the fitted values and the actual values on the training set.

We cannot assess how good a model is based on its training error alone. If we do that, we will end up with a biased choice: a model that may not perform well on unseen data. To counter this, we turn to the development set.

  2. Development set: This is also called the holdout set or validation set. Its goal is to tune the parameters obtained from the training set and to assess how well the model is performing. Based on that performance, we take steps to tune the model; for example, controlling the learning rate, minimizing overfitting, and selecting the best model of the lot all take place on the development set. Here, again, the development set error is calculated for each candidate, and tuning proceeds from whichever model gives the least error. Even the model with the least error at this stage still needs tuning to minimize overfitting. Once we are convinced we have the best model, we choose it and head toward the test set (a code sketch of this whole workflow follows the list).
  3. Test set: The test set is primarily used to assess the best selected model. At this stage, the model's accuracy is calculated, and if that accuracy does not deviate too much from the training and development accuracy, we send the model for deployment.
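
The following is a minimal sketch of this workflow, not code from the book: it uses synthetic data and NumPy, and the quadratic ground truth, the 60:20:20 index split, and the mse helper are all assumptions made for the illustration. It fits 1st-, 2nd-, and 3rd-degree polynomials on the training set, picks the degree with the lowest development error, and only then evaluates on the test set:

```python
# Minimal sketch of train/dev/test model selection with polynomial fits.
# Synthetic data; the true curve is quadratic, so degree 2 should win.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 1000)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0, 1.0, size=x.shape)

# 60:20:20 split by shuffled index slicing.
idx = rng.permutation(len(x))
train, dev, test = idx[:600], idx[600:800], idx[800:]

def mse(subset, coeffs):
    """Mean squared error of a fitted polynomial on the given index subset."""
    return np.mean((y[subset] - np.polyval(coeffs, x[subset])) ** 2)

# Fit each candidate degree on the training set only.
fits = {d: np.polyfit(x[train], y[train], d) for d in (1, 2, 3)}
for d, c in fits.items():
    print(f"degree {d}: train MSE {mse(train, c):.3f}, dev MSE {mse(dev, c):.3f}")

# Select on development error, never on training error.
best = min(fits, key=lambda d: mse(dev, fits[d]))
print(f"selected degree {best}; test MSE {mse(test, fits[best]):.3f}")
```

The test indices are touched exactly once, after model selection is complete, which is what keeps the reported test error an honest estimate of performance on unseen data.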

Sizes of the training, development, and test sets

Typically, machine learning practitioners split the three sets in a ratio of 60:20:20 or 70:15:15. However, there is no hard and fast rule that says the development and test sets must be of equal size.

But what about scenarios where we have big data to deal with? For example, if we have 10,000,000 records or observations, how should we partition the data? In such a scenario, ML practitioners allocate most of the data, as much as 98-99%, to the training set, and the rest gets divided between the development and test sets. This works out because, even if we keep just 1% of the data for development and 1% for the test set, we end up with 100,000 records each, and that is a good number.
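
As a quick sketch of that arithmetic (the index array below stands in for a real 10,000,000-row dataset, and scikit-learn's train_test_split is just one convenient way to do the partitioning), a 98:1:1 split can be produced in two steps:

```python
import numpy as np
from sklearn.model_selection import train_test_split

n = 10_000_000
indices = np.arange(n)  # stand-in for the rows of a real dataset

# Hold out 2% of the rows, then halve the holdout into dev and test sets.
train_idx, rest_idx = train_test_split(indices, test_size=0.02, random_state=42)
dev_idx, test_idx = train_test_split(rest_idx, test_size=0.5, random_state=42)

print(len(train_idx), len(dev_idx), len(test_idx))  # 9800000 100000 100000
```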
