
Underfitting & Overfitting

Supervised learning is the type of machine learning in which machines are trained using well "labelled" training data, and on the basis of that data, machines predict the output. Labelled data means that the input data is already tagged with the correct output.

Underfitting and Overfitting

Using unnecessary explanatory variables can lead to overfitting. Overfitting means that our algorithm works well on the training set but is unable to perform well on the test set. It is also known as the problem of high variance.

When our algorithm works so poorly that it is unable to fit even the training set well, it is said to underfit the data. It is also known as the problem of high bias.

• Bias: Assumptions made by a model to make the target function easier to learn. In practice it shows up as the error rate on the training data: when the training error is high, we call it high bias, and when it is low, we call it low bias.
• Variance: The error rate on the testing data is called variance. When the test error is high, we call it high variance, and when it is low, we call it low variance (a short numerical sketch of both error rates follows below).
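As a rough numerical sketch of these two error rates (a sketch only, assuming scikit-learn and NumPy are available; the synthetic sine data and the tree depths are illustrative assumptions, not part of the original text), a very shallow model shows a high training error (high bias), while a very deep one shows a low training error but a high test error (high variance):

# Illustrative sketch: compare training vs. test error for a too-simple
# and a too-complex model on the same noisy data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(300, 1))             # synthetic inputs (assumption)
y = np.sin(X).ravel() + rng.normal(0, 0.3, 300)   # noisy non-linear target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("high bias (depth=1)", DecisionTreeRegressor(max_depth=1)),
                    ("high variance (unlimited depth)", DecisionTreeRegressor(max_depth=None))]:
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    # high bias: both errors are high; high variance: train error low, test error high
    print(f"{name}: train MSE={train_err:.3f}, test MSE={test_err:.3f}")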

In the following diagram we can see that fitting a linear regression (the straight line in fig 1) would underfit the data, i.e., it will lead to large errors even on the training set. The polynomial fit in fig 2 is balanced, i.e., such a fit works well on both the training and test sets, while the fit in fig 3 will lead to low errors on the training set but will not work well on the test set.
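The original figures are not reproduced here, but the same three regimes can be sketched with polynomial fits of increasing degree (a minimal sketch assuming scikit-learn and NumPy; the degrees 1, 4 and 15 and the cosine data are assumptions chosen only for illustration):

# Illustrative sketch: polynomial fits of increasing degree show underfitting
# (degree 1), a balanced fit (degree 4) and overfitting (degree 15).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(1)
X = np.sort(rng.uniform(0, 1, size=(40, 1)), axis=0)
y = np.cos(1.5 * np.pi * X).ravel() + rng.normal(0, 0.1, 40)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

for degree in (1, 4, 15):   # degrees are illustrative assumptions
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    # degree 1 underfits (large train error), degree 15 overfits (large test error)
    print(f"degree {degree}: "
          f"train MSE={mean_squared_error(y_train, model.predict(X_train)):.3f}, "
          f"test MSE={mean_squared_error(y_test, model.predict(X_test)):.3f}")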
Underfitting and Overfitting

When we talk about a machine learning model, we are really talking about how well it performs and how accurate it is, which is measured by its prediction errors. Suppose we are designing a machine learning model. A model is said to be a good machine learning model if it generalizes properly to any new input data from the problem domain. This allows us to make predictions about future data that the model has never seen. Now, suppose we want to check how well our machine learning model learns and generalizes to new data. For that, we look at overfitting and underfitting, which are the main causes of poor performance in machine learning algorithms.

Underfitting: A statistical model or a machine learning algorithm is said to underfit when it cannot capture the underlying trend of the data, i.e., it performs poorly on the training data and also fails to generalize to the testing data. (It's just like trying to fit undersized pants!) Underfitting destroys the accuracy of our machine learning model. Its occurrence simply means that our model or algorithm does not fit the data well enough. It usually happens when we have too little data to build an accurate model, or when we try to fit a linear model to non-linear data. In such cases, the rules of the machine learning model are too simple and rigid to capture such data, and the model will therefore probably make a lot of wrong predictions. Underfitting can be avoided by using more data and by giving the model more capacity, for example through feature engineering.
In a nutshell, underfitting refers to a model that can neither perform well on the training data nor generalize to new data.

An underfitted model has high bias and low variance.

As we can see from the above diagram, the model is unable to capture the data points present in the plot.

Reasons for underfitting:
1. High bias and low variance.
2. The size of the training dataset is not large enough.
3. The model is too simple.
4. The training data is not cleaned and contains noise.

Techniques to reduce underfitting (a small sketch follows this list):
1. Increase model complexity.
2. Increase the number of features by performing feature engineering.
3. Remove noise from the data.
4. Increase the number of epochs or the duration of training to get better results.
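As a minimal sketch of items 1 and 2 above (assuming scikit-learn and NumPy; the quadratic data and the hand-added squared feature are illustrative assumptions), adding an engineered feature lets a linear model fit data it would otherwise underfit:

# Illustrative sketch: a plain linear model underfits quadratic data;
# adding an engineered x^2 feature removes the underfitting.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
x = rng.uniform(-3, 3, size=200)
y = x ** 2 + rng.normal(0, 0.2, 200)           # quadratic target (assumption)

X_simple = x.reshape(-1, 1)                    # original feature only
X_engineered = np.column_stack([x, x ** 2])    # feature engineering: add x^2

for name, X in [("underfit (x only)", X_simple),
                ("engineered (x and x^2)", X_engineered)]:
    model = LinearRegression().fit(X, y)
    err = mean_squared_error(y, model.predict(X))
    # the underfit model has a large training error; the engineered one does not
    print(f"{name}: train MSE={err:.3f}")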


Overfitting: A statistical model is said to be overfitted when it fits the training data too closely and, as a result, does not make accurate predictions on testing data. When a model is trained too much on the training data, it starts learning the noise and inaccurate entries in our data set, and testing on the test data then results in high variance. The model fails to categorize the data correctly because it has absorbed too many details and too much noise. Overfitting is often caused by non-parametric and non-linear methods, because these types of machine learning algorithms have more freedom in building the model from the dataset and can therefore build unrealistic models. A solution to avoid overfitting is to use a linear algorithm if we have linear data, or to limit parameters such as the maximal depth if we are using decision trees.
Example: the concept of overfitting can be understood from the graph of the linear regression output below:

Reasons for overfitting:
1. High variance and low bias.
2. The model is too complex.
3. The size of the training data is too small.

Techniques to reduce overfitting (a small sketch of item 4 follows this list):
1. Increase the training data.
2. Reduce model complexity.
3. Early stopping during the training phase (keep an eye on the loss over the training period; as soon as the validation loss begins to increase, stop training).
4. Ridge regularization and Lasso regularization.
5. Use dropout for neural networks to tackle overfitting.
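As a minimal sketch of item 4 (assuming scikit-learn and NumPy; the synthetic data and the alpha values are illustrative assumptions), Ridge and Lasso penalize large coefficients and typically narrow the gap between training and test error:

# Illustrative sketch: compare an unregularized linear model against
# Ridge (L2) and Lasso (L1) on noisy data with many features.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 50))            # many features, few samples (assumption)
true_w = np.zeros(50)
true_w[:5] = 2.0                          # only 5 features actually matter
y = X @ true_w + rng.normal(0, 1.0, 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("no regularization", LinearRegression()),
                    ("Ridge (alpha=1.0)", Ridge(alpha=1.0)),
                    ("Lasso (alpha=0.1)", Lasso(alpha=0.1))]:
    model.fit(X_train, y_train)
    # regularized models usually show a smaller train/test gap here
    print(f"{name}: "
          f"train MSE={mean_squared_error(y_train, model.predict(X_train)):.2f}, "
          f"test MSE={mean_squared_error(y_test, model.predict(X_test)):.2f}")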

Goodness of Fit
The "Goodness of fit" term is taken from the statistics, and the goal of the
machine learning models to achieve the goodness of fit. In statistics modeling,
it defines how closely the result or predicted values match the true values of
the dataset.
The model with a good fit is between the underfitted and overfitted model, and
ideally, it makes predictions with 0 errors, but in practice, it is difficult to
achieve it.
As when we train our model for a time, the errors in the training data go down,
and the same happens with test data. But if we train the model for a long
duration, then the performance of the model may decrease due to the
overfitting, as the model also learn the noise present in the dataset. The errors
in the test dataset start increasing, so the point, just before the raising of
errors, is the good point, and we can stop here for achieving a good model.

Good fit in a statistical model: Ideally, a model that makes predictions with zero error is said to have a good fit on the data. This situation is achievable at a spot between overfitting and underfitting. To understand it, we have to look at the performance of our model over time, while it is learning from the training dataset.
Over time, our model keeps on learning, and thus its error on the training and testing data keeps decreasing. If it learns for too long, the model becomes more prone to overfitting, because it starts picking up noise and less useful details, and its performance decreases. In order to get a good fit, we stop at a point just before the test error starts increasing. At this point, the model is said to perform well on the training dataset as well as on our unseen testing dataset.
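A minimal sketch of this stopping rule (assuming scikit-learn's SGDRegressor with partial_fit; the synthetic data, the epoch limit and the patience of 3 epochs are illustrative assumptions) monitors the validation error after every epoch and stops once it stops improving:

# Illustrative sketch: stop training just before the validation error
# begins to increase (simple early stopping).
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = rng.normal(size=(500, 20))
y = X @ rng.normal(size=20) + rng.normal(0, 0.5, 500)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = SGDRegressor(learning_rate="constant", eta0=0.01, random_state=0)
best_val, epochs_without_improvement, patience = np.inf, 0, 3   # patience is an assumption

for epoch in range(100):                        # up to 100 epochs (assumption)
    model.partial_fit(X_train, y_train)         # one pass over the training data
    val_err = mean_squared_error(y_val, model.predict(X_val))
    if val_err < best_val - 1e-4:               # validation error still improving
        best_val, epochs_without_improvement = val_err, 0
    else:
        epochs_without_improvement += 1
    if epochs_without_improvement >= patience:  # stopped improving: stop training
        break

print(f"stopped after epoch {epoch}, best validation MSE={best_val:.3f}")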
