ANN Calculations

The document outlines the architecture and calculations involved in training a neural network using backpropagation. It describes a simple case study with two inputs, hidden neurons, and output neurons, detailing the process of feeding inputs through the network and updating weights to minimize error. After multiple iterations, the network significantly reduces its error in predicting the target outputs from the given inputs.


ANN Calculations

Background calculations with a simple case study


ANN Architecture

• We use a neural network with two inputs, two hidden neurons, and two output neurons. Additionally, the hidden and output neurons each include a bias.
ANN Architecture

In order to have some numbers to work with, here are the initial weights, the biases, and the training inputs/outputs.

The goal of backpropagation is to optimize the weights so that the neural network can learn how to correctly map arbitrary inputs to outputs.

For the rest of this tutorial we're going to work with a single training set: given inputs 0.05 and 0.10, we want the neural network to output 0.01 and 0.99.
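The slide shows the initial weights and biases only in a figure, which is not preserved here. As a sketch, these are the starting values from the well-known worked example this deck follows, chosen because they reproduce the initial error of 0.298371109 quoted later; treat them as assumptions rather than values stated in the text:

```python
# Assumed initial parameters for the worked example (the slide's figure
# is not preserved in this text).
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30   # input -> hidden weights
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55   # hidden -> output weights
b1, b2 = 0.35, 0.60                        # hidden- and output-layer biases
i1, i2 = 0.05, 0.10                        # training inputs
t1, t2 = 0.01, 0.99                        # target outputs
```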
ANN Calculation

• To begin, let's see what the neural network currently predicts given the weights and biases above and inputs of 0.05 and 0.10.
• To do this we'll feed those inputs forward through the network.
ANN Calculation

• We figure out the total net input to each hidden-layer neuron, squash the total net input using an activation function (here we use the logistic function), then repeat the process with the output-layer neurons.
• Total net input is also referred to as just the net input.
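The forward pass described above can be sketched in Python. The weight and bias values are assumptions carried over from the standard worked example (the slide shows them only in a figure):

```python
import math

def sigmoid(x):
    # the logistic activation used to "squash" the net input
    return 1.0 / (1.0 + math.exp(-x))

# Assumed initial weights/biases and inputs (standard worked example)
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b1, b2 = 0.35, 0.60
i1, i2 = 0.05, 0.10

# hidden layer: total net input = weighted inputs + bias, then squash
net_h1 = w1 * i1 + w2 * i2 + b1     # 0.3775
out_h1 = sigmoid(net_h1)
net_h2 = w3 * i1 + w4 * i2 + b1
out_h2 = sigmoid(net_h2)

# output layer: repeat the process with the hidden activations as inputs
out_o1 = sigmoid(w5 * out_h1 + w6 * out_h2 + b2)
out_o2 = sigmoid(w7 * out_h1 + w8 * out_h2 + b2)
```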
ANN Calculation
ANN Calculation
Calculating the
Total Error
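The total error on these slides is the sum of squared errors over the output neurons, E_total = Σ ½(target − out)². A minimal check, where the output activations are assumed forward-pass results from the standard worked example:

```python
# Squared-error loss summed over both output neurons. The output values
# below are assumed forward-pass results (the slide shows them in a figure).
out_o1, out_o2 = 0.751365070, 0.772928465
t1, t2 = 0.01, 0.99

E_o1 = 0.5 * (t1 - out_o1) ** 2
E_o2 = 0.5 * (t2 - out_o2) ** 2
E_total = E_o1 + E_o2     # matches the 0.298371109 quoted later in the deck
```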
The Backwards Pass

• Our goal with backpropagation is to update each of the weights in the network so that they cause the actual output to be closer to the target output, thereby minimizing the error for each output neuron and the network as a whole.
The Backwards Pass : Output Layer

• First, how much does the total error change with respect to the output?
The Backwards Pass : Output Layer
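The output-layer derivation on these slides applies the chain rule weight by weight. A sketch for one weight (w5), where the activation values are assumed forward-pass results and the 0.5 learning rate is an assumption typical of this worked example:

```python
# Chain rule for one output-layer weight:
#   dE/dw5 = dE/dout_o1 * dout_o1/dnet_o1 * dnet_o1/dw5
# Numeric values are assumed forward-pass results; lr = 0.5 is an assumption.
out_h1 = 0.593269992      # hidden activation feeding w5
out_o1 = 0.751365070      # output activation
t1 = 0.01                 # target for o1
w5, lr = 0.40, 0.5

dE_dout   = out_o1 - t1               # derivative of 0.5*(t1 - out)^2
dout_dnet = out_o1 * (1 - out_o1)     # derivative of the logistic function
dnet_dw5  = out_h1                    # net_o1 is linear in w5
dE_dw5 = dE_dout * dout_dnet * dnet_dw5

w5_new = w5 - lr * dE_dw5             # gradient-descent update
```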
The Backwards Pass : Hidden Layer
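For a hidden-layer weight, the error signal sums over every output neuron the hidden unit feeds. A sketch for w1, with all numeric values assumed from the earlier passes and the 0.5 learning rate again an assumption:

```python
# Hidden-layer chain rule:
#   dE/dw1 = (d_o1*w5 + d_o2*w7) * out_h1*(1 - out_h1) * i1
# Numeric values are assumed results from the forward/output passes.
i1 = 0.05
out_h1 = 0.593269992
w1, w5, w7 = 0.15, 0.40, 0.50
d_o1 = 0.138498562        # (out_o1 - t1) * out_o1 * (1 - out_o1)
d_o2 = -0.038098236       # (out_o2 - t2) * out_o2 * (1 - out_o2)
lr = 0.5

dE_douth1 = d_o1 * w5 + d_o2 * w7          # error flowing back through w5, w7
d_h1 = dE_douth1 * out_h1 * (1 - out_h1)   # hidden-layer delta
dE_dw1 = d_h1 * i1

w1_new = w1 - lr * dE_dw1                  # gradient-descent update
```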
Update Weights

Finally, we've updated all of our weights! When we fed forward the 0.05 and 0.1 inputs originally, the error on the network was 0.298371109.

After this first round of backpropagation, the total error is now down to 0.291027924.

It might not seem like much, but after repeating this process 10,000 times, for example, the error plummets to 0.0000351085.

At this point, when we feed forward 0.05 and 0.1, the two output neurons generate 0.015912196 (vs 0.01 target) and
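The full procedure (forward pass, backward pass, simultaneous weight update, repeated 10,000 times) can be sketched end to end. The initial weights and the 0.5 learning rate are assumptions consistent with the error figures quoted above; biases are held fixed, as in the classic worked example:

```python
import math

def sigmoid(x):
    # logistic activation, as on the earlier slides
    return 1.0 / (1.0 + math.exp(-x))

# Assumed initial values, consistent with the quoted starting error
w = [0.15, 0.20, 0.25, 0.30, 0.40, 0.45, 0.50, 0.55]   # w1..w8
b1, b2 = 0.35, 0.60
i1, i2 = 0.05, 0.10
t1, t2 = 0.01, 0.99
lr = 0.5   # assumed learning rate

def step(w):
    """One forward pass plus one simultaneous backprop weight update."""
    w1, w2, w3, w4, w5, w6, w7, w8 = w
    # forward pass
    out_h1 = sigmoid(w1 * i1 + w2 * i2 + b1)
    out_h2 = sigmoid(w3 * i1 + w4 * i2 + b1)
    out_o1 = sigmoid(w5 * out_h1 + w6 * out_h2 + b2)
    out_o2 = sigmoid(w7 * out_h1 + w8 * out_h2 + b2)
    error = 0.5 * (t1 - out_o1) ** 2 + 0.5 * (t2 - out_o2) ** 2
    # backward pass: deltas for output and hidden layers
    d_o1 = (out_o1 - t1) * out_o1 * (1 - out_o1)
    d_o2 = (out_o2 - t2) * out_o2 * (1 - out_o2)
    d_h1 = (d_o1 * w5 + d_o2 * w7) * out_h1 * (1 - out_h1)
    d_h2 = (d_o1 * w6 + d_o2 * w8) * out_h2 * (1 - out_h2)
    grads = [d_h1 * i1, d_h1 * i2, d_h2 * i1, d_h2 * i2,
             d_o1 * out_h1, d_o1 * out_h2, d_o2 * out_h1, d_o2 * out_h2]
    return [wi - lr * g for wi, g in zip(w, grads)], error

errors = []
for _ in range(10000):
    w, e = step(w)
    errors.append(e)
```

Under these assumptions, `errors[0]` is the initial error of about 0.298, `errors[1]` the error after one round of backpropagation (about 0.291), and the final error is on the order of 3.5e-5, matching the figures quoted on the slide.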
