Role of Optimizer in Neural Network
1. RMS Propagation (RMSProp)
RMSProp stands for Root Mean Square Propagation. In RMSProp the learning rate is adjusted automatically, and a different effective learning rate is used for each parameter.
RMSProp divides the learning rate by an exponentially decaying average of squared gradients:
$$V_t = \rho V_{t-1} + (1 - \rho)\, g_t^{2}$$
$$\Delta \omega_t = -\frac{\eta}{\sqrt{V_t + \epsilon}}\, g_t$$
η: initial learning rate
V_t: exponential moving average of the squared gradients
ρ: decay rate of the moving average
g_t: gradient at time step t
ε: small constant added for numerical stability
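The following is a minimal NumPy sketch of the update rule above; the function name rmsprop_update and the toy quadratic objective are illustrative assumptions, not part of the original notes.

```python
import numpy as np

def rmsprop_update(w, grad, v, lr=0.001, rho=0.9, eps=1e-8):
    """One RMSProp step following the equations above:
    V_t = rho * V_{t-1} + (1 - rho) * g_t**2
    w_t = w_{t-1} - lr * g_t / sqrt(V_t + eps)
    """
    v = rho * v + (1 - rho) * grad ** 2      # exponential average of squared gradients
    w = w - lr * grad / np.sqrt(v + eps)     # per-parameter adaptive step
    return w, v

# Toy example (assumed for illustration): minimize f(w) = sum(w**2), gradient 2*w.
w = np.array([5.0, -3.0])
v = np.zeros_like(w)
for step in range(1000):
    grad = 2 * w
    w, v = rmsprop_update(w, grad, v, lr=0.01)
print(w)  # parameters end up oscillating near the minimum at [0, 0]
```

Because each parameter's step is scaled by its own running average of squared gradients, parameters with consistently large gradients take smaller effective steps and parameters with small gradients take larger ones.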
Advantages:
• The effective learning rate does not decay to zero too quickly, so training does not stall.
Disadvantages:
• Computationally more expensive than plain gradient descent, since a moving average must be stored and updated for every parameter.
• To achieve the same convergence as gradient descent, the learning rate still needs to be reduced slowly over time (see the sketch after this list).
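One common way to address this caveat is to pair RMSProp with an explicit learning-rate schedule. The sketch below is an illustrative assumption rather than part of the original notes: it reuses the hypothetical rmsprop_update function from the earlier sketch and applies a simple 1/t decay to the base learning rate.

```python
import numpy as np

base_lr = 0.01
w = np.array([5.0, -3.0])
v = np.zeros_like(w)
for step in range(1, 1001):
    lr_t = base_lr / (1 + 0.001 * step)      # slowly decay the learning rate
    grad = 2 * w                             # gradient of f(w) = sum(w**2)
    w, v = rmsprop_update(w, grad, v, lr=lr_t)
print(w)  # typically settles closer to [0, 0] than with a fixed learning rate
```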