The Delta Rule

The delta rule, also known as the Widrow-Hoff rule, is a fundamental concept in neural networks, especially in training single-layer networks such as perceptrons. It is a learning rule that minimizes the error in the network's output by adjusting the weights during training. The main objective of the delta rule is to reduce the difference (the "delta") between the actual output and the desired output.

How the Delta Rule Works


1. Calculate the error: For each input, the error e is the difference between the target output t and the network's output y:

   e = t − y

2. Adjust the weights: The weights are then updated to reduce this error, according to the following formula:

   w_new = w_old + Δw

   where Δw is the change in weight, calculated as:

   Δw = η · e · x

   Here:
   - η is the learning rate (a small constant that controls how large the weight-update steps are),
   - e is the error,
   - x is the input value corresponding to that weight.

3. Repeat: This process is repeated for each input until the network's output aligns closely with the target output, reducing the error progressively.
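The three steps above can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation: the dataset, the learning rate η = 0.1, and the epoch count are illustrative choices, and the neuron uses a plain linear output (no bias term, no threshold).

```python
# Minimal sketch of delta-rule training for one linear neuron.
# Dataset, learning rate, and epoch count are illustrative assumptions.

def train_delta_rule(samples, eta=0.1, epochs=100):
    """samples: list of (inputs, target) pairs; returns learned weights."""
    n = len(samples[0][0])
    w = [0.0] * n                                     # start with zero weights
    for _ in range(epochs):
        for x, t in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))  # linear output y = w · x
            e = t - y                                 # step 1: error e = t - y
            for i in range(n):
                w[i] += eta * e * x[i]                # step 2: Δw = η · e · x
    return w                                          # step 3: repeated above

# Example: learn the linear target y = 2*x1 - 1*x2 from three samples.
data = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0), ((1.0, 1.0), 1.0)]
weights = train_delta_rule(data)
```

After training, `weights` should be close to (2, −1), the coefficients of the target function, since each update nudges the weights in the direction that shrinks the current error.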

Why the Delta Rule is Useful


The delta rule is a special case of gradient descent. By updating weights in the direction that reduces the error, it allows the network to "learn" from examples over multiple iterations. This rule works well in simple neural networks and is foundational for understanding more complex training methods in multi-layer networks, such as backpropagation.
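The connection to gradient descent can be made explicit with a short derivation, assuming a linear output y = Σᵢ wᵢxᵢ and a squared-error loss (standard assumptions for the Widrow-Hoff rule, not stated explicitly above):

```latex
E = \tfrac{1}{2}(t - y)^2, \qquad y = \sum_i w_i x_i

\frac{\partial E}{\partial w_i}
  = (t - y)\cdot\frac{\partial (t - y)}{\partial w_i}
  = -(t - y)\, x_i
  = -e\, x_i

\Delta w_i = -\eta\,\frac{\partial E}{\partial w_i} = \eta \cdot e \cdot x_i
```

Taking a step opposite the gradient of the squared error thus reproduces exactly the update Δw = η · e · x given earlier.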

Limitations
The delta rule is primarily used for single-layer neural networks with a linear activation function. For more complex, multi-layer networks, the backpropagation algorithm is an extension that can handle non-linear activations, making it more versatile for deep learning applications.
