
Gradient Descent - Stochastic GD - Regularization

This document describes the stochastic gradient descent algorithm. It works as follows: 1. Initialize the model parameters to random values. 2. Choose a random data point and estimate the gradient of the loss function with respect to the model parameters at that point. 3. Iteratively update the model parameters in the opposite direction of the gradient. 4. Repeat steps 2-3 until the model converges. Stochastic gradient descent can train models many times faster than simple gradient descent because each update uses a single data point (or a small minibatch) rather than the whole dataset at once.

Uploaded by Divyanshi Dubey

Gradient descent:

o Start with an initial value b_0.
o How to move from the current point b_k? Calculate the slope; with many parameters the slopes together form the gradient ∇f(b_k).
o Move in the direction of the negative gradient with step size α:
  update: b_{k+1} = b_k - α ∇f(b_k)
o Repeat until converged.
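The update rule above can be sketched in a few lines of Python. This is a minimal illustration on a simple quadratic loss f(b) = (b - 3)^2, whose gradient is known in closed form; the function names, step size, and iteration count are assumptions for illustration, not part of the notes.

```python
# Gradient descent on f(b) = (b - 3)^2, whose gradient is 2*(b - 3).
def grad_f(b):
    return 2 * (b - 3)

def gradient_descent(b0, alpha=0.1, steps=100):
    b = b0
    for _ in range(steps):
        b = b - alpha * grad_f(b)  # move against the gradient with step size alpha
    return b

b_final = gradient_descent(b0=0.0)  # converges toward the minimum at b = 3
```

Each step shrinks the error toward the minimum by a constant factor here, so convergence is fast on this toy loss.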
Drawback:
o With big data we will have to calculate the gradient at each data point.
o Gradient over all the data = sum of the gradients at each data point, so every single update is very expensive.
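A quick sketch of why the full gradient is expensive: for n data points, the gradient is the sum of n per-point gradients, so one update touches every example. The squared-error loss for a linear model and all names below are assumptions for illustration.

```python
import numpy as np

def full_gradient(b, X, y):
    # Gradient of the summed squared error sum_i (x_i . b - y_i)^2:
    # each data point contributes 2 * (x_i . b - y_i) * x_i, summed over all i.
    residuals = X @ b - y          # touches every data point
    return 2 * X.T @ residuals     # sum of the per-point gradients

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))     # 1000 data points, 3 parameters
y = rng.normal(size=1000)
g = full_gradient(np.zeros(3), X, y)  # one update requires all 1000 points
```

The matrix form hides the loop, but the cost is still proportional to the dataset size for every single parameter update.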
Stochastic gradient descent:
o Approximating the gradient:
  ∇f(b_k) ≈ ∇ℓ(y_j, x_j; b_k)
  i.e. the gradient at an arbitrarily chosen point j in place of the gradient at all points.
o This is many times faster than simple gradient descent; it works because data is often redundant.
Stochastic gradient descent algorithm:
o Start with an initial value b_0.
o Choose a random data entry (x_j, y_j).
o Estimate the gradient ∇f(b_k) from this single point.
o Iteratively update b_{k+1} = b_k - α ∇f(b_k).
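The SGD loop can be sketched under the same assumed squared-error setup; picking one random index per step is the only change from full-batch descent. The step size, step count, and data below are illustrative assumptions.

```python
import numpy as np

def sgd(X, y, alpha=0.01, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    b = np.zeros(X.shape[1])                   # start with an initial value b_0
    for _ in range(steps):
        j = rng.integers(len(y))               # choose a random data entry
        grad_j = 2 * (X[j] @ b - y[j]) * X[j]  # gradient at that single point
        b = b - alpha * grad_j                 # b_{k+1} = b_k - alpha * grad
    return b

# Recover b_true = [1, -2] from noise-free linear data.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = X @ np.array([1.0, -2.0])
b_hat = sgd(X, y)
```

Each update costs a single data point instead of the whole dataset, which is where the "many times faster" claim in the notes comes from.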
Regularization:
o Minimise reg(b) = f(b) + λ R(b)
  where R is the regularizer and reg(b) is the regularized loss function.
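As a concrete instance of reg(b) = f(b) + λR(b), a common choice is the L2 (ridge) regularizer R(b) = ||b||^2; the sketch below adds it to a squared-error loss. The value of λ and the choice of loss are assumptions for illustration.

```python
import numpy as np

def regularized_loss(b, X, y, lam=0.1):
    f = np.sum((X @ b - y) ** 2)   # original loss f(b)
    R = np.sum(b ** 2)             # L2 regularizer R(b) = ||b||^2
    return f + lam * R             # reg(b) = f(b) + lambda * R(b)

def regularized_gradient(b, X, y, lam=0.1):
    # The regularizer simply adds its own gradient 2*lambda*b to the loss gradient,
    # so gradient descent and SGD apply unchanged to the regularized objective.
    return 2 * X.T @ (X @ b - y) + 2 * lam * b
```

The extra term penalizes large parameter values, pulling the minimizer toward smaller weights as λ grows.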
