Loss Functions

The document explains the concepts of loss and loss functions in machine learning, highlighting the difference between the actual output and predicted output for a single example. It categorizes various types of loss functions for regression, classification, and autoencoders, including Mean Square Error and Binary Cross Entropy. Additionally, it describes different types of gradient descent methods: Batch, Stochastic, and Mini-batch Gradient Descent.

LOSS VS LOSS FUNCTION

Loss:
For a single example (a single input), the difference between the actual output (target) and the predicted output.
Example: Your model predicts an image as "Dog", but it was actually "Cat" → this is an error!

Loss Function:
The average error across all training samples. This is the function that tells the model how wrong it is.

LOSS

Loss Function = (Σ Loss over all samples) / (Number of Observations)
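A minimal numpy sketch of this relationship (the squared-error per-example loss here is an illustrative choice):

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.5])   # actual outputs (targets)
y_pred = np.array([2.5, 5.0, 4.0])   # predicted outputs

per_example_loss = (y_true - y_pred) ** 2            # loss for each single example
loss_function_value = per_example_loss.sum() / len(y_true)  # average over observations

print(per_example_loss)      # [0.25 0.   2.25]
print(loss_function_value)   # 0.8333...
```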

TYPES OF LOSS FUNCTIONS

1. Regression:
• MSE (Mean Square Error)
• MAE (Mean Absolute Error)
• Huber Loss
2. Classification:
• Binary Cross-Entropy
• Categorical Cross-Entropy
3. Autoencoder:
• KL Divergence
MSE / SQUARED LOSS / L2 LOSS
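MSE averages the squared differences between actual and predicted values over all n data points:

MSE = (1/n) Σᵢ (yᵢ − ŷᵢ)²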
MAE / L1 LOSS
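MAE averages the absolute differences instead, which makes it less sensitive to outliers than MSE:

MAE = (1/n) Σᵢ |yᵢ − ŷᵢ|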
HUBER LOSS
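Huber loss combines the two: quadratic for small errors, linear for large ones. The standard definition is:

L_𝛿(y, ŷ) = ½ (y − ŷ)²          if |y − ŷ| ≤ 𝛿
L_𝛿(y, ŷ) = 𝛿 |y − ŷ| − ½ 𝛿²    otherwise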

• n – the number of data points.
• y – the actual value of a data point.
• ŷ – the predicted value of a data point.
• 𝛿 – the point where the Huber loss function transitions from quadratic to linear.
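A minimal numpy sketch of the three regression losses above (the function names and test values are my own):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Square Error: average of squared differences
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    # Mean Absolute Error: average of absolute differences
    return np.mean(np.abs(y_true - y_pred))

def huber(y_true, y_pred, delta=1.0):
    # Quadratic for small errors (|error| <= delta), linear for large ones
    error = y_true - y_pred
    small = np.abs(error) <= delta
    squared = 0.5 * error ** 2
    linear = delta * np.abs(error) - 0.5 * delta ** 2
    return np.mean(np.where(small, squared, linear))

y_true = np.array([3.0, 5.0, 2.5])
y_pred = np.array([2.5, 5.0, 4.0])
print(mse(y_true, y_pred), mae(y_true, y_pred), huber(y_true, y_pred))
```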
BINARY CROSS ENTROPY / LOG LOSS
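The standard formula, for targets yᵢ ∈ {0, 1} and predicted probabilities ŷᵢ, is:

BCE = −(1/n) Σᵢ [ yᵢ log(ŷᵢ) + (1 − yᵢ) log(1 − ŷᵢ) ]

A minimal numpy sketch (the clipping constant is my own choice, to avoid log(0)):

```python
import numpy as np

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    # Clip predicted probabilities away from 0 and 1 to avoid log(0)
    p = np.clip(p_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

print(binary_cross_entropy(np.array([1, 0, 1]), np.array([0.9, 0.2, 0.7])))
```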
CATEGORICAL CROSS ENTROPY vs SPARSE CATEGORICAL CROSS ENTROPY

If the target column is one-hot encoded into classes like 001, 010, use categorical cross-entropy.
If the target column is integer encoded into classes like 1, 2, 3, ..., N, use sparse categorical cross-entropy.
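A minimal sketch of the two options (assuming TensorFlow/Keras; the example values are illustrative). Both losses compute the same quantity, they just expect the targets in different encodings:

```python
import numpy as np
import tensorflow as tf

# One-hot targets -> categorical cross-entropy
y_onehot = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
# Integer targets -> sparse categorical cross-entropy
y_int = np.array([2, 1])
# Predicted class probabilities (each row sums to 1)
probs = np.array([[0.1, 0.2, 0.7], [0.2, 0.6, 0.2]])

cce = tf.keras.losses.CategoricalCrossentropy()
scce = tf.keras.losses.SparseCategoricalCrossentropy()
print(cce(y_onehot, probs).numpy())  # same value as below
print(scce(y_int, probs).numpy())
```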
TYPES OF GRADIENT DESCENT

1. Batch Gradient Descent
2. Stochastic Gradient Descent
3. Mini-batch Gradient Descent

BATCH GRADIENT DESCENT

1. We go through all training samples and calculate the cumulative error.
2. Then we backpropagate and adjust the weights.
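A minimal numpy sketch of batch gradient descent on simple linear regression (the data, learning rate, and epoch count are illustrative choices):

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])   # underlying relation: y = 2x + 1
w, b, lr = 0.0, 0.0, 0.05

for epoch in range(1000):
    y_pred = w * X + b               # forward pass over ALL training samples
    error = y_pred - y
    grad_w = 2 * np.mean(error * X)  # gradient of MSE w.r.t. w
    grad_b = 2 * np.mean(error)      # gradient of MSE w.r.t. b
    w -= lr * grad_w                 # one weight update per full pass
    b -= lr * grad_b

print(w, b)   # approaches 2 and 1
```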

STOCHASTIC GRADIENT DESCENT

1. Use only one (randomly picked) sample for a forward pass, and then adjust the weights.
2. Good when the training set is very big and we don't want too much computation.
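A minimal numpy sketch of stochastic gradient descent on the same illustrative linear-regression data, updating the weights after every single sample:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])   # underlying relation: y = 2x + 1
w, b, lr = 0.0, 0.0, 0.01

for step in range(5000):
    i = rng.integers(len(X))          # one randomly picked sample
    error = (w * X[i] + b) - y[i]     # forward pass on that sample only
    w -= lr * 2 * error * X[i]        # immediate weight update
    b -= lr * 2 * error

print(w, b)   # noisy, but approaches 2 and 1
```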
MINI-BATCH GRADIENT DESCENT

• Instead of choosing one randomly picked training sample, you use a batch of randomly picked training samples.

For example:
1) I have 20 training samples in total.
2) Let's say I use 5 random samples for one forward pass to calculate the cumulative error.
3) After that, adjust the weights.
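A minimal numpy sketch mirroring that example: 20 training samples in total, 5 random samples per update (data and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(0, 1, 20)             # 20 training samples in total
y = 2 * X + 1                         # illustrative targets
w, b, lr, batch_size = 0.0, 0.0, 0.1, 5

for step in range(2000):
    idx = rng.choice(len(X), size=batch_size, replace=False)  # 5 random samples
    error = (w * X[idx] + b) - y[idx]      # cumulative error over the mini-batch
    w -= lr * 2 * np.mean(error * X[idx])  # adjust the weights
    b -= lr * 2 * np.mean(error)

print(w, b)   # approaches 2 and 1
```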
