Unit Online 1.3
(21MCA24DB3)
- L2 regularization
- L1 regularization
- Dropout regularization
- Early stopping regularization
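As a rough numerical sketch of the first three techniques (the weight values, penalty strength lam, and keep-probability p below are illustrative assumptions, not values from the slides):

```python
import random

# Hypothetical weight vector of one small layer (illustrative values).
weights = [0.8, -0.5, 0.3, -0.1]
lam = 0.01  # regularization strength (assumed)

# L2 regularization adds lam * sum(w^2) to the loss,
# shrinking all weights smoothly toward zero.
l2_penalty = lam * sum(w * w for w in weights)

# L1 regularization adds lam * sum(|w|), which tends to
# drive some weights exactly to zero (sparsity).
l1_penalty = lam * sum(abs(w) for w in weights)

# Dropout: during training each activation is kept with probability p
# and zeroed otherwise; inverted dropout rescales survivors by 1/p.
def dropout(activations, p=0.5, rng=random.Random(0)):
    return [a / p if rng.random() < p else 0.0 for a in activations]

print(l2_penalty, l1_penalty, dropout([1.0, 2.0, 3.0, 4.0]))
```

Early stopping has no penalty term; it simply halts training when validation error stops improving.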
- The topology of the deep neural network (i.e., the layers and their interconnections)
- The learned parameters (i.e., the learned weights and biases)
The model depends on the hyperparameters because the hyperparameters determine the learned parameters (weights and biases).
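A tiny sketch of this dependence (the data, model y = w·x, and learning rates below are assumptions for illustration): training the same model with a different hyperparameter value yields a different learned parameter.

```python
# Fit y = w*x by gradient descent on SSE; the learning rate (a
# hyperparameter) changes the weight w (a learned parameter) that
# training reaches within a fixed budget of epochs.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true w = 2

def train(lr, epochs=20):
    w = 0.0
    for _ in range(epochs):
        # Gradient of SSE = sum (t - w*x)^2 with respect to w
        grad = sum(-2 * x * (t - w * x) for x, t in data)
        w -= lr * grad
    return w

w_fast = train(lr=0.01)    # ends close to the true weight 2
w_slow = train(lr=0.001)   # same data, different hyperparameter,
                           # a different learned weight
print(w_fast, w_slow)
```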
Error Landscape
[Figure: the sum squared error (SSE), Σᵢ (tᵢ − zᵢ)², plotted against the weight values; the error axis starts at 0.]
Delta Learning Rule
(Widrow-Hoff Rule)
• The goal is to decrease the overall error each time a weight is changed.
• The total sum squared error (SSE) is the objective function: E = Σᵢ (tᵢ − zᵢ)²