Loss Function
The loss function quantifies how far the algorithm's current output is from the
desired output. It is the means by which we assess how well our algorithm
models the data, and it can be divided into two broad categories: one for
regression and one for classification.
Loss functions and metrics also differ slightly from one another. Loss
functions tell us how effectively our model is learning, but their raw values
may not be directly meaningful or simple for humans to interpret. This is
where metrics are useful. Although they are often poor choices for loss
functions, since they may not be differentiable, metrics such as accuracy are
considerably more useful for people trying to understand how well a neural
network performs.
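As a minimal sketch of this distinction (the array values below are illustrative assumptions, not from the text), the NumPy snippet compares a differentiable loss, cross-entropy, with accuracy, a metric that is easy to read but is a step function of the predictions and therefore unsuitable for gradient-based training:

```python
import numpy as np

# Ground-truth class labels and predicted class probabilities
# (hypothetical example data).
y_true = np.array([0, 1, 1, 0])
y_prob = np.array([[0.9, 0.1],   # confident, correct
                   [0.4, 0.6],   # hesitant, correct
                   [0.3, 0.7],   # hesitant, correct
                   [0.2, 0.8]])  # confident, wrong

# Cross-entropy loss: differentiable and suitable for training,
# but its raw value is hard to interpret on its own.
eps = 1e-12
cross_entropy = -np.mean(
    np.log(y_prob[np.arange(len(y_true)), y_true] + eps))

# Accuracy: immediately meaningful to a human reader, but not
# differentiable, so it serves as a metric rather than a loss.
accuracy = np.mean(np.argmax(y_prob, axis=1) == y_true)

print(f"cross-entropy loss: {cross_entropy:.4f}")
print(f"accuracy: {accuracy:.2%}")
```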
With respect to the problems we encounter in the real world, loss functions
can be broadly divided into those for classification and those for regression.
In classification problems, our task is to predict a probability for each of
the classes the problem involves. The goal of regression, on the other hand,
is to predict a continuous value from a given collection of independent
features supplied to the learning algorithm.
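To make the two problem types concrete, the sketch below (all values and names are assumptions chosen for illustration) shows a classification output converted to per-class probabilities via softmax, next to a regression output that is simply a continuous value:

```python
import numpy as np

# Classification: raw scores ("logits") become a probability per
# class via softmax; the probabilities sum to 1.
logits = np.array([2.0, 0.5, -1.0])
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print("class probabilities:", probs)

# Regression: the model outputs a continuous value directly,
# e.g. a predicted price, with no probability interpretation.
features = np.array([3.0, 120.0, 1.0])   # independent features
weights = np.array([10.0, 1.5, -4.0])    # learned parameters
predicted_value = features @ weights
print("continuous prediction:", predicted_value)
```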
The mean absolute error (MAE) is computed from the total absolute difference
between the actual and predicted values; it thus measures the average
magnitude of the errors in a set of predictions. While the mean absolute
error is much more resistant to outliers, the mean squared error is simpler
to optimize. Outliers are values that differ significantly from the other
recorded data points.
If the predictions and the ground truth were identical, the MAE would be zero,
which in practice it never is. Since you wish to reduce the error in your
predictions, this straightforward loss function can serve as one of your
measurements in a regression problem.
MAE averages the absolute differences between the actual and predicted
values. Given a data point x_i and its predicted value y_i, where n is the
total number of data points in the collection,

\[
\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \lvert x_i - y_i \rvert
\]
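A minimal NumPy implementation of this formula, assuming x holds the actual values and y the predictions (the names and sample values are chosen here for illustration):

```python
import numpy as np

def mae(x: np.ndarray, y: np.ndarray) -> float:
    """Mean absolute error between actual values x and predictions y."""
    return float(np.mean(np.abs(x - y)))

# Example: actual vs predicted values (hypothetical data).
x = np.array([3.0, -0.5, 2.0, 7.0])
y = np.array([2.5,  0.0, 2.0, 8.0])
print(mae(x, y))  # (0.5 + 0.5 + 0.0 + 1.0) / 4 = 0.5
```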
The mean squared error (MSE) is the average of the squared differences between
the actual and predicted values. Because squaring penalizes a few large errors
far more heavily than many small ones, models trained with mean squared error
tend to produce fewer extreme prediction errors, or at least less severe ones,
than models trained with mean absolute error.
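Paralleling the MAE formula above, MSE = (1/n) Σ (x_i − y_i)². The sketch below (illustrative values only) computes both losses on the same predictions, once with and once without an outlier, to show how much more strongly MSE reacts to a single large error:

```python
import numpy as np

def mae(x, y):
    # Mean absolute error: average magnitude of the errors.
    return float(np.mean(np.abs(x - y)))

def mse(x, y):
    # Mean squared error: squaring amplifies large errors.
    return float(np.mean((x - y) ** 2))

x = np.array([3.0, -0.5, 2.0, 7.0])
y_good = np.array([2.5, 0.0, 2.0, 8.0])      # small errors everywhere
y_outlier = np.array([2.5, 0.0, 2.0, 20.0])  # one large error

print(mae(x, y_good), mse(x, y_good))        # 0.5, 0.375
print(mae(x, y_outlier), mse(x, y_outlier))  # 3.5, 42.375
```

A single outlier roughly septuples the MAE but multiplies the MSE by more than a hundred, which is exactly why MSE-trained models work hard to avoid large individual errors.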