Machine Learning
Bias:
- Bias is the offset value added to the model.
- It is used to shift the model in a particular direction.
- It is similar to a Y-intercept.
- The bias 'b' is equal to the output 'Y' when all feature values are zero.
Example – Linear Regression (see the sketch below).
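Below is a minimal sketch in Python (NumPy and the values chosen for w and b are illustrative assumptions, not from the original) showing that a linear model's prediction at x = 0 is exactly the bias, i.e. the Y-intercept:

import numpy as np

# Simple linear model: y = w * x + b
w = 2.0   # weight (slope)
b = 5.0   # bias (offset / Y-intercept)

x = np.array([0.0, 1.0, 2.0, 3.0])
y_pred = w * x + b

print(y_pred)     # [ 5.  7.  9. 11.]
print(y_pred[0])  # 5.0 -> when all feature values are zero, the prediction equals the bias b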
Hyperparameters – for Optimization
Learning Rate: A tunable parameter used in loss-function optimization that
determines the step size at each iteration while moving toward a minimum of
the loss function.
Gradient descent – optimization algorithm.
Number of Epochs:
- Represents the number of times the model iterates over the entire dataset.
Proper values are required for the learning rate and the number of epochs to avoid
overfitting and underfitting (see the sketch below).
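Below is a minimal training-loop sketch (NumPy, the toy dataset, and the names learning_rate and num_epochs are illustrative assumptions) showing how the two hyperparameters control the step size and the number of passes over the data:

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 3.0 * x + 2.0 + 0.1 * rng.normal(size=100)   # toy data: y is roughly 3x + 2

learning_rate = 0.1   # step size of each gradient-descent update
num_epochs = 200      # number of full passes over the dataset

w, b = 0.0, 0.0       # initial parameters
for epoch in range(num_epochs):
    error = (w * x + b) - y
    grad_w = 2.0 * np.mean(error * x)   # d(MSE)/dw
    grad_b = 2.0 * np.mean(error)       # d(MSE)/db
    w -= learning_rate * grad_w         # step size is set by the learning rate
    b -= learning_rate * grad_b

print(w, b)   # approaches (3, 2)

Too large a learning rate makes the loop overshoot or diverge, while too small a rate or too few epochs leaves the model underfitted.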
Model Optimization.
Optimization: It refers to determining the best parameters for a model so that the
loss function of the model decreases, as a result of which the model can predict
more accurately.
- Finding the best parameters to get optimal results.
- Gradient descent is one of the most widely used algorithms for optimization (see
the one-step sketch below).
[Figure: gradient descent moves the initial parameters toward updated parameters with lower loss]
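Below is a minimal sketch (the quadratic loss and the numbers are illustrative assumptions) of a single gradient-descent update, comparing the loss at the initial parameter with the loss at the updated parameter:

def loss(theta):
    return (theta - 3.0) ** 2      # toy loss with its minimum at theta = 3

def grad(theta):
    return 2.0 * (theta - 3.0)     # derivative of the loss

learning_rate = 0.1
theta_initial = 0.0                                                  # initial parameter
theta_updated = theta_initial - learning_rate * grad(theta_initial)  # updated parameter

print(loss(theta_initial))   # 9.0
print(loss(theta_updated))   # 5.76 -> the update moved the parameter toward lower loss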
Working of Gradient Descent
Gradient Descent - Optimization algorithm
- An optimization algorithm used for minimizing the loss function in
various ML algorithms.
- It is used for updating the parameters of the learning model.
- The formula for updating w and b (implemented in the sketch below) is:
  w = w - L * (∂Loss/∂w)
  b = b - L * (∂Loss/∂b)
- w --> weight
- b --> bias
- L --> Learning rate
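Below is a minimal one-iteration sketch (assuming a mean-squared-error loss on a few made-up data points; all names are illustrative) of the update rule above applied to w and b:

# Data points lying on y = 2x + 1
xs = [1.0, 2.0, 3.0]
ys = [3.0, 5.0, 7.0]

w, b = 0.0, 0.0   # current parameters
L = 0.01          # learning rate

# Gradients of the mean-squared-error loss, Loss = mean((w*x + b - y)**2)
n = len(xs)
dLoss_dw = sum(2 * (w * xi + b - yi) * xi for xi, yi in zip(xs, ys)) / n
dLoss_db = sum(2 * (w * xi + b - yi) for xi, yi in zip(xs, ys)) / n

# Update rule: w = w - L * dLoss/dw,  b = b - L * dLoss/db
w = w - L * dLoss_dw
b = b - L * dLoss_db

print(w, b)   # roughly (0.227, 0.1); repeating the update moves (w, b) toward (2, 1)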