Session On Maximum Likelihood Estimation
Probability: This is a measure of the chance that a particular event will occur out
of all possible outcomes. It's usually presented as a ratio or fraction, and it ranges
from 0 (meaning the event cannot happen) to 1 (meaning the event is certain
to happen).
More Definitions
A likelihood quantifies how well a model, with a particular choice of parameters,
explains a set of data that's been observed.
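In symbols (this notation is ours, assuming independent observations x_1, …, x_n and model parameters θ), the likelihood is the probability of the observed data viewed as a function of the parameters, and MLE picks the parameters that make that probability as large as possible:

```latex
L(\theta \mid x_1, \dots, x_n) = \prod_{i=1}^{n} p(x_i \mid \theta),
\qquad
\hat{\theta}_{\mathrm{MLE}} = \arg\max_{\theta} \sum_{i=1}^{n} \log p(x_i \mid \theta).
```

(Working with the log-likelihood is standard: the sum is easier to optimize than the product, and both peak at the same θ.)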
However, there are some machine learning algorithms that don't rely on
MLE. For example, support vector machines (trained by minimizing a margin-based
hinge loss), decision trees (built with greedy, impurity-based splits), and
k-nearest neighbors (which fits no probabilistic model at all) are not derived
from a likelihood.
MLE and the concept of loss functions in machine learning are closely
related. Many common loss functions can be derived from the principle of
maximum likelihood estimation under certain assumptions about the data
or the model. By minimizing these loss functions, we're effectively
performing maximum likelihood estimation.
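As one standard illustration (a textbook-style sketch, not something stated in the session): if we assume the targets are generated as y_i = f_θ(x_i) + ε_i with Gaussian noise ε_i ~ N(0, σ²), the negative log-likelihood is, up to constants, the squared-error loss:

```latex
-\log L(\theta)
  = -\sum_{i=1}^{n} \log \mathcal{N}\!\left(y_i \mid f_\theta(x_i), \sigma^2\right)
  = \frac{1}{2\sigma^2} \sum_{i=1}^{n} \bigl(y_i - f_\theta(x_i)\bigr)^2 + \frac{n}{2} \log\left(2\pi\sigma^2\right).
```

So minimizing mean squared error is maximizing a Gaussian likelihood; in the same way, the cross-entropy loss used for classification corresponds to a Bernoulli (or categorical) likelihood.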
3. Then why do loss functions exist? Why don't we just maximize the likelihood?
The confusion arises because we are looking at the same problem from two
different perspectives: the optimization view, where we minimize a loss
function, and the statistical view, where we maximize a likelihood.
For many models, these two perspectives are equivalent: minimizing the
loss function is the same as maximizing the likelihood function. In fact,
many common loss functions can be derived from the principle of MLE
under certain assumptions about the data.
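To make this concrete, here is a minimal Python sketch (the coin-flip data and grid search are illustrative assumptions, not part of the session) showing that the parameter that maximizes a Bernoulli likelihood is exactly the one that minimizes the corresponding negative log-likelihood "loss":

```python
import numpy as np

# Hypothetical coin-flip data: 1 = heads, 0 = tails (made-up sample for illustration).
data = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])

# Candidate values for p = P(heads).
p_grid = np.linspace(0.01, 0.99, 99)

# Bernoulli log-likelihood of the data for each candidate p.
log_lik = np.array([
    np.sum(data * np.log(p) + (1 - data) * np.log(1 - p)) for p in p_grid
])

# The "loss" view: negative log-likelihood, the quantity we would minimize.
loss = -log_lik

p_max_likelihood = p_grid[np.argmax(log_lik)]  # statistical view: maximize likelihood
p_min_loss = p_grid[np.argmin(loss)]           # optimization view: minimize loss

print(p_max_likelihood, p_min_loss)  # both ~0.7 (the sample mean): same answer, two views
```

Whether we describe the fit as "maximizing the likelihood" or "minimizing the negative log-likelihood loss", the optimization lands on the same parameter.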
In short, while you can often get by with a practical understanding of loss
functions and optimization algorithms in applied machine learning,
understanding MLE can be extremely valuable for gaining a deeper
understanding of how and why these models work.