DL II

The document discusses various optimization techniques in machine learning, particularly focusing on gradient descent methods such as Stochastic Gradient Descent (SGD) and its variants. It highlights the importance of momentum in optimization and addresses challenges like overfitting and convergence to local minima. Additionally, it mentions adaptive methods like Adam and RMSprop for improving training efficiency.
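To make the summarized techniques concrete, below is a minimal sketch of gradient descent with (heavy-ball) momentum, the update the document highlights. The quadratic objective, step size, and momentum coefficient are illustrative assumptions, not values taken from the document itself.

```python
import numpy as np

# Hypothetical quadratic loss f(w) = 0.5 * w^T A w - b^T w, chosen only for
# illustration; the document does not specify a particular objective.
A = np.array([[3.0, 0.2], [0.2, 1.0]])
b = np.array([1.0, -0.5])

def grad(w):
    # Gradient of the quadratic loss: A w - b
    return A @ w - b

def sgd_momentum(w0, lr=0.1, beta=0.9, steps=100):
    """Gradient descent with momentum: v <- beta*v + grad(w); w <- w - lr*v."""
    w, v = w0.copy(), np.zeros_like(w0)
    for _ in range(steps):
        v = beta * v + grad(w)   # accumulate a velocity term to damp oscillations
        w = w - lr * v           # step along the accumulated direction
    return w

print(sgd_momentum(np.array([5.0, 5.0])))  # approaches the minimizer A^{-1} b
```

Adaptive methods such as Adam and RMSprop, also mentioned in the summary, replace the single global learning rate here with per-parameter step sizes derived from running averages of squared gradients.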
