Machine Learning

This document summarizes lectures 13-14 from a machine learning course. It discusses concepts relevant to ML models including bias-variance tradeoff, loss functions, model evaluation, parameters vs hyperparameters, and gradient descent. Bias refers to the difference between predicted and actual values, while variance measures prediction variability. Loss functions determine how well models perform. Parameters like weights are estimated from data, while hyperparameters like learning rate control the learning process and require optimization. Gradient descent is an optimization algorithm that updates parameters to minimize the loss function.


CS446: Machine Learning

Lecture 13-14 (Concepts Relevant to ML Models)


Instructor:
Dr. Muhammad Kabir
Assistant Professor
[email protected]

School of Systems and Technology


Department of Computer Science
University of Management and Technology, Lahore
Previous Lectures…
 Selecting the right ML model
 Cross-validation – statistical measures
 Overfitting in ML – concept, signs, causes, and prevention
 Underfitting in ML – concept, signs, causes, and prevention
 Bias-variance tradeoff in ML
Today’s Lectures…
 Loss functions
 Model evaluation – accuracy score, mean squared error
 Model parameters and hyperparameters
 Gradient descent in ML
Bias-variance Tradeoff in ML Models….
Bias
 Bias is the difference between the values predicted by the ML model and the correct values.
 High bias gives a large error on both the training and the testing data.
 It is recommended that an algorithm have low bias to avoid the problem of underfitting.
Bias-variance Tradeoff in ML Models….
Variance
 The variability of model predictions for a given data point, which tells us the spread of our predictions, is called the variance of the model.
 A model with high variance has a very complex fit to the training data and is therefore unable to fit accurately on data it has not seen before.
 High variance leads to overfitting of the data.
(Figure: a high-variance fit that passes through every training point.)
Bias-variance Tradeoff in ML Models….
Tradeoff
 An algorithm that is too simple may have high bias and low variance.
 An algorithm that is too complex may have high variance and low bias.
 This tradeoff in complexity is why there is a tradeoff between bias and variance: an algorithm cannot be both more complex and less complex at the same time.
 On the graph, the ideal tradeoff lies at the intermediate complexity where total error (bias plus variance) is minimized.
Loss function in ML Models….

 A loss function measures how far the estimated values are from the true (actual) values – the difference between predicted and actual values.
 It helps determine which model performs better and which parameters are better.
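The idea above can be sketched in plain Python. MSE and MAE here are the standard squared-error and absolute-error definitions, given as an illustration rather than formulas taken from the slides:

```python
def mse_loss(y_true, y_pred):
    """Mean squared error: average of squared differences."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae_loss(y_true, y_pred):
    """Mean absolute error: average of absolute differences."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# A model whose predictions give a smaller loss fits these points better.
y_actual = [3.0, 5.0, 7.0]
y_predicted = [2.5, 5.0, 8.0]
print(mse_loss(y_actual, y_predicted))  # ≈ 0.4167
print(mae_loss(y_actual, y_predicted))  # 0.5
```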
Example - Loss function in ML Models….
ML Models evaluation – Accuracy & Error….
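A minimal sketch of the two evaluation measures named in the lecture – accuracy score for classification and mean squared error for regression – assuming plain-list inputs:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that exactly match the true labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def mean_squared_error(y_true, y_pred):
    """Average squared difference between true and predicted values."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))        # 0.75
print(mean_squared_error([2.0, 4.0], [2.5, 3.5]))  # 0.25
```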
Model parameters and hyperparameter….
A model parameter is a configuration variable that is internal to the model and whose value can be estimated from the given data. There are two types of parameters:

Model parameters:
 Determined by training on the training data.
 Considered internal parameters.

Hyperparameters:
 Parameters whose values control the learning process.
 Adjustable parameters used to obtain an optimal model.
 External parameters.
Model parameters and hyperparameter….
Weight: A weight decides how much influence an input will have on the output.
Different weights (numerical values) are assigned to each input: preferred inputs get positive values, unfavorable attributes get negative values, and irrelevant attributes (such as a name) get a weight of zero.
Model parameters….
Weight: Weight decides how much influence the input will have on the output. In a linear model, Y = wX + b, where:
X – feature (input) variable
Y – target (output) variable
w – weight
b – bias

Bias:
- Bias is the offset value given to the model.
- It is used to shift the model's output in a particular direction.
- Similar to a y-intercept.
- b is equal to Y when all the feature values are zero.
Example – Linear Regression….
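As a sketch of the linear-regression prediction Y = wX + b, with hypothetical values w = 2 and b = 1 (these numbers are illustrative, not taken from the slides):

```python
w = 2.0  # weight: how much influence the input has on the output
b = 1.0  # bias: offset; the output equals b when the input is zero

def predict(x):
    """Linear model prediction y = w*x + b."""
    return w * x + b

print(predict(0.0))  # 1.0  (bias alone, since the feature is zero)
print(predict(3.0))  # 7.0
```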
Hyperparameter – for Optimization….
Learning Rate: A tunable parameter used in loss-function optimization; it determines the step size at each iteration while moving toward a minimum of the loss function. It is used by gradient descent, an optimization algorithm.

Number of Epochs:
- Represents the number of times the model iterates over the entire dataset.

Proper values for the learning rate and the number of epochs are required to avoid overfitting and underfitting.
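To see why the learning-rate value matters, here is a sketch that minimizes a simple one-dimensional quadratic, f(w) = (w - 4)^2, whose derivative is 2(w - 4). The function and both learning rates are illustrative assumptions:

```python
def descend(learning_rate, epochs, w=0.0):
    """Gradient descent on f(w) = (w - 4)**2, minimum at w = 4."""
    for _ in range(epochs):
        w -= learning_rate * 2 * (w - 4)  # step against the gradient
    return w

print(descend(0.1, 50))  # converges close to the minimum at w = 4
print(descend(1.1, 50))  # step too large: each update overshoots and diverges
```

With a moderate learning rate each step shrinks the distance to the minimum; with too large a rate each step overshoots by more than it corrects, so the error grows every epoch.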
Model Optimization.
Optimization: Determining the best parameters for a model so that its loss function decreases and, as a result, the model can predict more accurately.
- Finding the best parameters to get optimal results.
- Gradient descent is one of the most widely used optimization algorithms.
(Figure: gradient descent iteratively moves from the initial parameters to the updated parameters.)
Working of Gradient Descent.
Gradient Descent - Optimization algorithm.
- An optimization algorithm used for minimizing the loss function in various ML algorithms.
- It is used for updating the parameters of the learning model.
- The formulas for updating w and b are:

  w = w - L * dw
  b = b - L * db

- w --> weight
- b --> bias
- L --> learning rate
- dw --> partial derivative of the loss function with respect to w
- db --> partial derivative of the loss function with respect to b
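The update rules above can be sketched for a linear model y = w*x + b trained with MSE loss; dw and db below are the standard partial derivatives of MSE, and the dataset and hyperparameter values are illustrative assumptions:

```python
def gradient_descent(xs, ys, learning_rate=0.05, epochs=1000):
    """Fit y = w*x + b by gradient descent on the MSE loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        preds = [w * x + b for x in xs]
        # Partial derivatives of MSE with respect to w and b.
        dw = (-2 / n) * sum(x * (y - p) for x, y, p in zip(xs, ys, preds))
        db = (-2 / n) * sum(y - p for y, p in zip(ys, preds))
        # Parameter updates: w = w - L*dw, b = b - L*db.
        w -= learning_rate * dw
        b -= learning_rate * db
    return w, b

# Data generated from y = 2x + 1; the fitted w and b should approach 2 and 1.
w, b = gradient_descent([0, 1, 2, 3], [1, 3, 5, 7])
print(w, b)
```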
Gradient Descent - 3D
Chapter Reading

Chapter 01 of:
- Machine Learning by Tom Mitchell
- Pattern Recognition and Machine Learning by Christopher M. Bishop
