ANN Example
Overview
For this tutorial, we're going to use a neural network with two inputs, two hidden neurons, and two output neurons. Additionally, the hidden and output neurons will each include a bias.
In order to have some numbers to work with, here are the initial weights, the biases, and training
inputs/outputs:
The goal of backpropagation is to optimize the weights so that the neural network can learn how to
correctly map arbitrary inputs to outputs. For the rest of this lecture we’re going to work with a single
training set: given inputs 0.05 and 0.10, we want the neural network to output 0.01 and 0.99.
Summary
Forward pass steps:
1. Calculate the net input at h1 and h2.
2. Calculate the output at h1 and h2.
3. Calculate the net input at o1 and o2.
4. Calculate the output at o1 and o2.
5. Calculate the total error: $E_{total} = \sum \frac{1}{2}(target - output)^2$.

Backward pass steps, to reduce the total error by updating the weights. We want to know how much a change in a weight (e.g. $w_5$) affects the total error (e.g. $\frac{\partial E_{total}}{\partial w_5}$):
1. Calculate $\frac{\partial E_{total}}{\partial w_5}$.
2. Update $w_5$: $w_5^{+} = w_5 - \eta \, \frac{\partial E_{total}}{\partial w_5}$.
3. Likewise find the gradients for, and update, the remaining weights into the output layer.
4. Using the hidden layer, find the gradients for, and update, the weights into the hidden layer.

A code sketch of these steps follows this summary.
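To make the summary concrete, here is a minimal Python sketch of one forward and one backward pass for this network, following the steps above. The initial weight and bias values used below (w1 = 0.15 through w8 = 0.55, b1 = 0.35, b2 = 0.60) are assumed illustrative values, since the figure listing them is not reproduced in this text; the inputs, targets, and learning rate are the ones given above.

```python
import math

# Inputs, targets, and learning rate from the text.
i1, i2 = 0.05, 0.10
t1, t2 = 0.01, 0.99
eta = 0.5

# Assumed example values: w1..w4 and b1 feed the hidden layer,
# w5..w8 and b2 feed the output layer.
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b1, b2 = 0.35, 0.60

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# ---- Forward pass ----
net_h1 = w1 * i1 + w2 * i2 + b1
net_h2 = w3 * i1 + w4 * i2 + b1
out_h1, out_h2 = logistic(net_h1), logistic(net_h2)

net_o1 = w5 * out_h1 + w6 * out_h2 + b2
net_o2 = w7 * out_h1 + w8 * out_h2 + b2
out_o1, out_o2 = logistic(net_o1), logistic(net_o2)

E_total = 0.5 * (t1 - out_o1) ** 2 + 0.5 * (t2 - out_o2) ** 2

# ---- Backward pass ----
# Output-layer node deltas: dE/dnet = -(target - out) * out * (1 - out)
delta_o1 = -(t1 - out_o1) * out_o1 * (1 - out_o1)
delta_o2 = -(t2 - out_o2) * out_o2 * (1 - out_o2)

# Gradients for the output-layer weights, e.g. dE_total/dw5 = delta_o1 * out_h1
dw5, dw6 = delta_o1 * out_h1, delta_o1 * out_h2
dw7, dw8 = delta_o2 * out_h1, delta_o2 * out_h2

# Hidden-layer deltas: each hidden output feeds both output neurons.
delta_h1 = (delta_o1 * w5 + delta_o2 * w7) * out_h1 * (1 - out_h1)
delta_h2 = (delta_o1 * w6 + delta_o2 * w8) * out_h2 * (1 - out_h2)

dw1, dw2 = delta_h1 * i1, delta_h1 * i2
dw3, dw4 = delta_h2 * i1, delta_h2 * i2

# Update every weight using gradients computed from the original weights.
w5, w6, w7, w8 = w5 - eta * dw5, w6 - eta * dw6, w7 - eta * dw7, w8 - eta * dw8
w1, w2, w3, w4 = w1 - eta * dw1, w2 - eta * dw2, w3 - eta * dw3, w4 - eta * dw4

print(f"output: ({out_o1:.8f}, {out_o2:.8f}), total error: {E_total:.9f}")
```

With these assumed values the forward pass gives out_o1 of roughly 0.7514, consistent with the 0.75136507 quoted later in this text.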
To begin, let's see what the neural network currently predicts given the weights and biases above and inputs of 0.05 and 0.10. To do this we'll feed those inputs forward through the network.
We figure out the total net input to each hidden layer neuron, squash the total net input using
an activation function (here we use the logistic function), then repeat the process with the output
layer neurons.
Total net input is also referred to as just net input by some sources.
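Written out for $h_1$, and using $w_1$, $w_2$, and $b_1$ as assumed labels for the weights and bias feeding the hidden layer, the two steps are:

$net_{h1} = w_1 \cdot i_1 + w_2 \cdot i_2 + b_1 \cdot 1$

$out_{h1} = \dfrac{1}{1 + e^{-net_{h1}}}$

The same calculation with the other pair of hidden-layer weights gives $net_{h2}$ and $out_{h2}$.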
We repeat this process for the output layer neurons, using the output from the hidden layer neurons
as inputs.
We then squash the total net input to o1 using the logistic function to get the output of o1.
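In the same notation, taking $w_5$ and $w_6$ to be the weights connecting $h_1$ and $h_2$ to $o_1$ and $b_2$ the output-layer bias (the labels beyond $w_5$ are assumptions), this step is:

$net_{o1} = w_5 \cdot out_{h1} + w_6 \cdot out_{h2} + b_2 \cdot 1$

$out_{o1} = \dfrac{1}{1 + e^{-net_{o1}}}$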
We can now calculate the error for each output neuron using the squared error function and sum them to get the total error:

$E_{total} = \sum \tfrac{1}{2}(target - output)^2 = \tfrac{1}{2}(target_{o1} - out_{o1})^2 + \tfrac{1}{2}(target_{o2} - out_{o2})^2$
Some sources refer to the target as the ideal and the output as the actual.
The $\tfrac{1}{2}$ is included so that the exponent is cancelled when we differentiate later on. The result is eventually multiplied by a learning rate anyway, so it doesn't matter that we introduce a constant here [1].
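As a quick check of that note, differentiating one term of the squared error shows the $\tfrac{1}{2}$ cancelling against the exponent:

$\dfrac{d}{d\,out}\left[\tfrac{1}{2}(target - out)^2\right] = 2 \cdot \tfrac{1}{2}(target - out) \cdot (-1) = -(target - out)$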
For example, the target output for $o_1$ is 0.01 but the neural network output 0.75136507, therefore its error is:

$E_{o1} = \tfrac{1}{2}(target_{o1} - out_{o1})^2 = \tfrac{1}{2}(0.01 - 0.75136507)^2 = 0.274811083$

Repeating this process for $o_2$ (remembering that the target is 0.99) we get $E_{o2}$. The total error for the neural network is the sum of these errors:

$E_{total} = E_{o1} + E_{o2}$
Our goal with backpropagation is to update each of the weights in the network so that they cause the actual output to be closer to the target output, thereby minimizing the error for each output neuron and for the network as a whole.
Output Layer
Consider $w_5$. We want to know how much a change in $w_5$ affects the total error, aka $\frac{\partial E_{total}}{\partial w_5}$.
$\frac{\partial E_{total}}{\partial w_5}$ is read as "the partial derivative of $E_{total}$ with respect to $w_5$". You can also say "the gradient with respect to $w_5$".
By applying the chain rule we can break this derivative into three pieces:

$\dfrac{\partial E_{total}}{\partial w_5} = \dfrac{\partial E_{total}}{\partial out_{o1}} \cdot \dfrac{\partial out_{o1}}{\partial net_{o1}} \cdot \dfrac{\partial net_{o1}}{\partial w_5}$

First, how much does the total error change with respect to the output? Differentiating the squared error with respect to $out_{o1}$ gives

$\dfrac{\partial E_{total}}{\partial out_{o1}} = -(target_{o1} - out_{o1})$

$-(target_{o1} - out_{o1})$ is sometimes expressed as $(out_{o1} - target_{o1})$.
When we take the partial derivative of the total error with respect to $out_{o1}$, the quantity $\tfrac{1}{2}(target_{o2} - out_{o2})^2$ becomes zero because $out_{o1}$ does not affect it, which means we're taking the derivative of a constant, which is zero.
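Evaluated with the numbers used earlier (target 0.01, output 0.75136507), this first factor works out to:

$\dfrac{\partial E_{total}}{\partial out_{o1}} = -(target_{o1} - out_{o1}) = -(0.01 - 0.75136507) = 0.74136507$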
Next, how much does the output of $o_1$ change with respect to its total net input?
The partial derivative of the logistic function is the output multiplied by 1 minus the output:

$\dfrac{\partial out_{o1}}{\partial net_{o1}} = out_{o1}(1 - out_{o1})$
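Using the logistic output $out_{o1} = 0.75136507$ from the forward pass, this second factor evaluates to:

$\dfrac{\partial out_{o1}}{\partial net_{o1}} = 0.75136507 \times (1 - 0.75136507) \approx 0.1868156$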
Finally, how much does the total net input of $o_1$ change with respect to $w_5$?
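Expanding the total net input to $o_1$ (again with $w_6$ and $b_2$ as assumed labels) makes this third factor clear: every term except the $w_5$ one is a constant with respect to $w_5$, so the derivative is simply the hidden output that $w_5$ multiplies:

$net_{o1} = w_5 \cdot out_{h1} + w_6 \cdot out_{h2} + b_2 \cdot 1$

$\dfrac{\partial net_{o1}}{\partial w_5} = out_{h1}$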
You'll often see this calculation combined in the form of the delta rule:

$\dfrac{\partial E_{total}}{\partial w_5} = -(target_{o1} - out_{o1}) \cdot out_{o1}(1 - out_{o1}) \cdot out_{h1}$

Alternatively, we have $\frac{\partial E_{total}}{\partial out_{o1}}$ and $\frac{\partial out_{o1}}{\partial net_{o1}}$, which can be written as $\frac{\partial E_{total}}{\partial net_{o1}}$, aka $\delta_{o1}$ (the Greek letter delta), aka the node delta. We can use this to rewrite the calculation above:

$\delta_{o1} = \dfrac{\partial E_{total}}{\partial out_{o1}} \cdot \dfrac{\partial out_{o1}}{\partial net_{o1}} = \dfrac{\partial E_{total}}{\partial net_{o1}} = -(target_{o1} - out_{o1}) \cdot out_{o1}(1 - out_{o1})$

Therefore:

$\dfrac{\partial E_{total}}{\partial w_5} = \delta_{o1} \, out_{h1}$

Some sources extract the negative sign from $\delta$ so it would be written as:

$\dfrac{\partial E_{total}}{\partial w_5} = -\delta_{o1} \, out_{h1}$
To decrease the error, we then subtract this value from the current weight (optionally multiplied by some learning rate, $\eta$, which we'll set to 0.5):

$w_5^{+} = w_5 - \eta \, \dfrac{\partial E_{total}}{\partial w_5}$

Some sources use $\alpha$ (alpha) to represent the learning rate, others use $\eta$ (eta), and others even use $\epsilon$ (epsilon).
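For a rough sense of scale, using the assumed example values from the Python sketch above ($w_5 = 0.40$, $out_{h1} \approx 0.593270$), the gradient is about $0.74136507 \times 0.1868156 \times 0.593270 \approx 0.082167$, giving $w_5^{+} \approx 0.40 - 0.5 \times 0.082167 \approx 0.358917$; the exact numbers depend on the initial weights, which are not reproduced in this text.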
We perform the actual updates in the neural network after we have the new weights leading into the hidden layer neurons (i.e., we use the original weights, not the updated weights, when we continue the backpropagation algorithm below).
Hidden Layer
Next, we'll continue the backwards pass by calculating new values for the four weights leading into the hidden layer, $w_1$, $w_2$, $w_3$, and $w_4$.
We're going to use a similar process as we did for the output layer, but slightly different to account for the fact that the output of each hidden layer neuron contributes to the output (and therefore error) of multiple output neurons. We know that $out_{h1}$ affects both $out_{o1}$ and $out_{o2}$, therefore $\frac{\partial E_{total}}{\partial out_{h1}}$ needs to take into consideration its effect on both output neurons:

$\dfrac{\partial E_{total}}{\partial out_{h1}} = \dfrac{\partial E_{o1}}{\partial out_{h1}} + \dfrac{\partial E_{o2}}{\partial out_{h1}}$
Starting with $\frac{\partial E_{o1}}{\partial out_{h1}}$:

$\dfrac{\partial E_{o1}}{\partial out_{h1}} = \dfrac{\partial E_{o1}}{\partial net_{o1}} \cdot \dfrac{\partial net_{o1}}{\partial out_{h1}}$

We can calculate $\frac{\partial E_{o1}}{\partial net_{o1}}$ from the values we found for the output layer. And $\frac{\partial net_{o1}}{\partial out_{h1}}$ is equal to $w_5$, because $w_5$ is the weight that multiplies $out_{h1}$ in $net_{o1}$:

$\dfrac{\partial net_{o1}}{\partial out_{h1}} = w_5$
Following the same process for $\frac{\partial E_{o2}}{\partial out_{h1}}$ and summing the two terms gives $\frac{\partial E_{total}}{\partial out_{h1}}$. Now that we have $\frac{\partial E_{total}}{\partial out_{h1}}$, we need to figure out $\frac{\partial out_{h1}}{\partial net_{h1}}$ and then $\frac{\partial net_{h1}}{\partial w}$ for each weight:
We calculate the partial derivative of the total net input to $h_1$ with respect to $w_1$ the same way as we did for the output neuron:
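Putting the hidden-layer pieces together, here is a sketch of the resulting expressions, using $i_1$ and $i_2$ for the inputs and $w_1$, $w_2$, and $b_1$ as the assumed labels for the weights and bias feeding $h_1$ (the original worked equations are not reproduced in this text):

$net_{h1} = w_1 \cdot i_1 + w_2 \cdot i_2 + b_1 \cdot 1, \qquad \dfrac{\partial net_{h1}}{\partial w_1} = i_1 = 0.05$

$\dfrac{\partial out_{h1}}{\partial net_{h1}} = out_{h1}(1 - out_{h1})$

$\dfrac{\partial E_{total}}{\partial w_1} = \dfrac{\partial E_{total}}{\partial out_{h1}} \cdot \dfrac{\partial out_{h1}}{\partial net_{h1}} \cdot \dfrac{\partial net_{h1}}{\partial w_1}, \qquad w_1^{+} = w_1 - \eta \, \dfrac{\partial E_{total}}{\partial w_1}$

The remaining hidden-layer weights are handled the same way, and once every gradient has been computed all of the weights are updated together, as noted above.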