The Delta Rule
1. Compute the Error: The error e is the difference between the target
output t and the actual output y:
e = t − y
2. Adjust the Weights: The weights are then updated to reduce this error,
according to the following formula:
w_new = w_old + ∆w
• where ∆w is the change in weight, calculated as:
∆w = η · e · x
• Here:
– η is the learning rate (a small constant that controls how large the
weight update steps are),
– e is the error,
– x is the input value corresponding to that weight.
3. Repeat: This process is repeated for each input until the network’s output
aligns closely with the target output, reducing the error progressively.
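The steps above can be sketched in a short training loop. This is a minimal illustration, not code from the text: the toy task (learning y = 2x), the sample data, the learning rate, and the epoch count are all assumptions chosen for demonstration.

```python
# Delta-rule training for a single linear neuron (hypothetical toy task:
# learn the mapping y = 2*x from a few (input, target) pairs).

def train_delta_rule(samples, eta=0.1, epochs=50):
    """Repeatedly apply the delta rule and return the learned weight."""
    w = 0.0  # initial weight (arbitrary starting point)
    for _ in range(epochs):
        for x, t in samples:
            y = w * x         # linear activation: output = weight * input
            e = t - y         # step 1: error e = t - y
            w += eta * e * x  # step 2: weight update, delta_w = eta * e * x
    return w                  # step 3: repetition happens via the loops above

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train_delta_rule(samples)
print(w)  # converges toward 2.0 as the error shrinks each pass
```

Each pass over the samples shrinks the error, so the weight settles near the value that reproduces the targets.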
Limitations
The delta rule applies primarily to single-layer neural networks with a
linear activation function. For more complex, multi-layer networks, the
backpropagation algorithm extends the delta rule to handle non-linear
activations, making it suitable for deep learning applications.