A Study on Backpropagation in Artificial Neural Networks
Article in Asia-Pacific Journal of Neural Networks and Its Applications · August 2020
DOI: 10.21742/AJNNIA.2020.4.1.03
Ch Sekhar
Vignan’s Institute of Information Technology
Abstract
Technology plays an essential role in human life today by reducing manual work; with technology, performance and accuracy are high. The backpropagation neural network is a multilayered, feedforward neural network and is by far the most widely used. It is also regarded as one of the simplest and most general methods used for supervised training of multilayered neural networks. Backpropagation works by approximating the non-linear relationship between the input and the output by adjusting the weight values internally. It can also generalize for input data that was not included in the training patterns (predictive abilities).
1. Introduction
A neural network is a group of connected input/output units in which every connection has a weight associated with its computer programs. It helps you to build predictive models from large databases. This model builds upon the human nervous system. It helps you to carry out image understanding, human learning, speech recognition, and so on.
Backpropagation is the essence of neural network training. It is the method of fine-tuning the weights of a neural network based on the error value obtained in the previous epoch (i.e., iteration). Proper tuning of the weights allows you to reduce the error value and to make the model reliable by increasing its generalization. Backpropagation is a short form for "backward propagation of errors." It is a standard method of training artificial neural networks [1]. This method helps to calculate the gradient of a loss function with respect to all the weights in the network. The backpropagation formulas follow from basic principles and actual values. The example neural network uses three input neurons, one hidden layer with two neurons, and an output layer with two neurons.
In 1961, the basic concept of continuous backpropagation was derived in the context of control theory by Henry J. Kelley and Arthur E. Bryson.
In 1969, Bryson and Ho gave a multi-stage dynamic system optimization method.
In 1982, Hopfield introduced his idea of a neural network.
In 1986, through the work of David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams, backpropagation gained recognition.
In 1993, Wan was the first person to win an international pattern recognition contest with the help of the backpropagation method.
Figure 1. Workflow of the backpropagation process: the neural network model produces an output, the error is checked for a minimum, the weights are updated if the error is too large, and the model is ready once the error is acceptable
The above [Figure 1] shows the typical workflow of the backpropagation mechanism. During backward propagation, the following computation activities take place.
Find the error rate: compare the model output with the actual (target) output.
Minimum error check: verify whether the error has been minimized.
Update the weights: if the error is larger than the acceptable range, update the weights and biases, then check the error again; repeat the process until the error becomes low.
Neural network model: once the error rate is within the acceptable range, the model is ready to use for forecasting on new data.
The generalized workflow and stepwise computation of backpropagation can be given as pseudo-code, as sketched below.
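A minimal runnable Python version of this workflow, assuming a two-input, two-hidden, two-output sigmoid network trained by plain gradient descent on a squared-error loss (the function names, the use of numpy, and all numeric defaults are illustrative choices, not taken from the paper):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(x, t, lr=0.5, tol=1e-4, max_epochs=10000, seed=0):
    # Initialize the weights and biases randomly in the range 0 to 1.
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(0, 1, (2, 2))   # input -> hidden weights
    W2 = rng.uniform(0, 1, (2, 2))   # hidden -> output weights
    b1 = rng.uniform(0, 1, 2)        # hidden-layer biases
    b2 = rng.uniform(0, 1, 2)        # output-layer biases

    for epoch in range(max_epochs):
        # Feed-forward pass through the hidden and output layers.
        h = sigmoid(W1 @ x + b1)
        o = sigmoid(W2 @ h + b2)

        # Find the error rate (squared-error loss against the targets).
        err = 0.5 * np.sum((o - t) ** 2)
        if err <= tol:               # "is the error minimal?" check
            break                    # if so, the model is ready

        # Backward pass: gradients of the error via the chain rule.
        delta_o = (o - t) * o * (1 - o)
        delta_h = (W2.T @ delta_o) * h * (1 - h)

        # Update the weights and biases against the gradient.
        W2 -= lr * np.outer(delta_o, h)
        b2 -= lr * delta_o
        W1 -= lr * np.outer(delta_h, x)
        b1 -= lr * delta_h

    return (W1, b1, W2, b2), err

params, final_err = train(np.array([1.0, 2.0]), np.array([0.5, 0.05]))
print(final_err)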
The above [Figure 2] shows a feedforward artificial neural network containing input, hidden, and output layers. Each layer contains two nodes with their respective weights, and all nodes are fully connected, including the bias nodes.
The notation used in the network is as follows:
X1, X2: input nodes
H1, H2: hidden layer nodes with the net output from their respective inputs
HA1, HA2: hidden layer nodes with the activation output
O1, O2: output layer nodes with the net output from their respective inputs
OA1, OA2: output layer nodes with the activation output
W1 to W8: weights of the respective layers from input to output
B1, B2: bias nodes for the hidden and output layers, respectively
Step 1
The input values for this problem are X1 = 1 and X2 = 2, and the target values are t1 = 0.5 and t2 = 0.05. The weights of the network are chosen randomly within the range 0 to 1. Here we initialize the weights as shown in the figure above in order to follow the process; a small sketch of this initialization is given below.
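As a concrete illustration of the initialization, the following Python assignments use the weight values that appear in the worked calculation further on (W1 = 0.1, W2 = 0.2, W3 = 0.3, W4 = 0.4, and a bias contribution of 1 x 0.5, consistent with equations (3) and (4)). The hidden-to-output weights W5 to W8 and the output-layer bias are assumed placeholder values, since their figure values are not listed in the text.

# Inputs and targets from Step 1
X1, X2 = 1.0, 2.0
t1, t2 = 0.5, 0.05

# In general the weights are drawn randomly from [0, 1]; here we fix the
# values used in the worked calculation below.
W1, W2, W3, W4 = 0.1, 0.2, 0.3, 0.4   # input -> hidden weights
B1, W0 = 1.0, 0.5                     # bias node output and bias weight
# Hidden -> output weights: assumed placeholder values (not given in the text)
W5, W6, W7, W8 = 0.5, 0.6, 0.7, 0.8
B2 = 1.0                              # output-layer bias node (assumed)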
Step 2
From Step 1 we have the inputs and their respective weights. As this is a feed-forward network, each neuron sends its value on to the next layer, i.e. the hidden-layer neurons, and as it is a fully connected network, each hidden node/neuron receives inputs from all the nodes/neurons of the input layer. We now calculate the summation output at each node of the hidden layer as follows,
$H_1 = (W_1 \times X_1) + (W_3 \times X_2) + (B_1 \times W_0)$ (1)
$H_2 = (W_2 \times X_1) + (W_4 \times X_2) + (B_1 \times W_0)$ (2)
Using the above equations, we perform the feed-forward computation from the input layer to the hidden layer and from the hidden layer to the output layer, respectively.
Input to the hidden layer
$H_1 = (1 \times 0.1) + (2 \times 0.3) + (1 \times 0.5) = 1.2$ (3)
$H_2 = (1 \times 0.2) + (2 \times 0.4) + (1 \times 0.5) = 1.5$ (4)
We now apply the activation function to both hidden nodes; here we use the sigmoid activation function
$S = \frac{1}{1 + e^{-x}}$
Figure 3. (a) Sigmoid activation function equation (b) with HA1 (c) with HA2
$HA_1 = \frac{1}{1 + e^{-H_1}} = \frac{1}{1 + e^{-1.2}} = 0.7685$ (5)
$HA_2 = \frac{1}{1 + e^{-H_2}} = \frac{1}{1 + e^{-1.5}} = 0.8176$ (6)
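The same hidden-layer computation, written as a small self-contained Python check of equations (1) to (6) (the variable names simply mirror the notation of Figure 2):

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Values from Step 1 (bias node output B1 = 1, bias weight W0 = 0.5)
X1, X2 = 1.0, 2.0
W1, W2, W3, W4 = 0.1, 0.2, 0.3, 0.4
B1, W0 = 1.0, 0.5

# Net input of each hidden node, equations (3) and (4)
H1 = (W1 * X1) + (W3 * X2) + (B1 * W0)   # 0.1 + 0.6 + 0.5 = 1.2
H2 = (W2 * X1) + (W4 * X2) + (B1 * W0)   # 0.2 + 0.8 + 0.5 = 1.5

# Sigmoid activations, equations (5) and (6)
HA1 = sigmoid(H1)   # approx. 0.7685
HA2 = sigmoid(H2)   # approx. 0.8176
print(HA1, HA2)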
Step 3
In this step, we compute the error value with respect to the target output and the feed-forward computation values.
$Error = Actual\ Output - Target\ Output$
$E_1 = OA_1 - T_1$
$E_2 = OA_2 - T_2$
$E_{total} = E_1 + E_2$ (13)
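The output-layer feed-forward step (equations (7) to (12) in the original numbering) proceeds in the same way as the hidden layer, followed by the error of equation (13). The Python sketch below assumes the same placeholder values for W5 to W8 and the output-layer bias as in the earlier initialization sketch, so the resulting numbers are illustrative only.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hidden activations from Step 2 and bias settings
HA1, HA2 = 0.7685, 0.8176
W0, B2 = 0.5, 1.0
# Hidden -> output weights: assumed placeholder values, not from the paper
W5, W6, W7, W8 = 0.5, 0.6, 0.7, 0.8
T1, T2 = 0.5, 0.05   # target values

# Output-layer net inputs and activations (the step summarized by eqs. (7)-(12))
O1 = (W5 * HA1) + (W7 * HA2) + (B2 * W0)
O2 = (W6 * HA1) + (W8 * HA2) + (B2 * W0)
OA1, OA2 = sigmoid(O1), sigmoid(O2)

# Error terms, equation (13)
E1, E2 = OA1 - T1, OA2 - T2
E_total = E1 + E2
print(OA1, OA2, E_total)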
Step 4
After the above operations, we need to start the backward step in order to update the weights based on the error value.
Figure 4. Error backpropagated from the output layer to the hidden layer, and from the hidden layer to the input layer
Here we compute how the error changes with respect to the weight W5:
$\frac{\partial E_{total}}{\partial w_5} = \frac{\partial E_{total}}{\partial out_{O1}} \times \frac{\partial out_{O1}}{\partial net_{O1}} \times \frac{\partial net_{O1}}{\partial w_5}$ (14)
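To make equation (14) concrete for a sigmoid output unit, the three factors can be written out as below. The first factor is shown for a squared-error term $E_1 = \frac{1}{2}(T_1 - OA_1)^2$, a common choice (with the simple difference error of equation (13) that factor would instead be constant), and the last factor assumes, as in [Figure 2], that $w_5$ connects $HA_1$ to $O_1$:
$\frac{\partial E_{total}}{\partial out_{O1}} = OA_1 - T_1$
$\frac{\partial out_{O1}}{\partial net_{O1}} = OA_1 (1 - OA_1)$
$\frac{\partial net_{O1}}{\partial w_5} = HA_1$
so that $\frac{\partial E_{total}}{\partial w_5} = (OA_1 - T_1) \times OA_1 (1 - OA_1) \times HA_1$.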
As we propagate backwards, the first thing we have to do is compute the change in the total error with respect to the outputs O1 and O2. The new weight is then calculated as
$W_{new} = W_{old} - \text{learning rate} \times \frac{\partial E_{total}}{\partial W_{old}}$ (15)
The above process is explained for one node or perceptron; it has to be repeated for all the nodes, updating their weights. With the new weights, the error is calculated again, and the process is repeated until the error becomes minimal, as sketched below.
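A small numeric sketch of equations (14) and (15) applied to W5, using the approximate quantities produced by the earlier snippets under the assumed placeholder weights; the learning rate of 0.5 is likewise an assumed value, not one given in the paper.

# Approximate values from the previous snippets (placeholder weights assumed)
OA1, HA1, T1, W5 = 0.811, 0.7685, 0.5, 0.5

# Chain rule, equation (14), with a squared-error term as in the derivation above
dE_dOA1 = OA1 - T1              # dE_total / dout_O1
dOA1_dO1 = OA1 * (1 - OA1)      # dout_O1 / dnet_O1 (sigmoid derivative)
dO1_dW5 = HA1                   # dnet_O1 / dW5
dE_dW5 = dE_dOA1 * dOA1_dO1 * dO1_dW5

# Weight update, equation (15): step against the gradient
learning_rate = 0.5             # assumed value
W5_new = W5 - learning_rate * dE_dW5
print(W5_new)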
6. Conclusions
In this paper, we have shown that the backpropagation neural network process performs well on large sets of data. The performance can be improved by changing the number of hidden neurons and the learning rate. Because of its iterative, gradient-based training, the overall speed is far slower than desired, so it takes a significant amount of time to train on an extensive data set. We cannot say that there is a single network suited to every kind of dataset out there, so keep testing your data on multiple neural networks and see what fits best.
References
[1] Budiharjo, S. Triyuni W., Agus Perdana, and H. Tutut, "Predicting tuition fee payment problem using backpropagation neural network model," (2018)
[2] M. Huan, C. Ming, and Z. Jianwei, "Study on the prediction of real estate price index based on HHGA-RBF neural network algorithm," International Journal of u- and e-Service, Science and Technology, SERSC Australia, ISSN: 2005-4246 (Print), 2207-9718 (Online), vol.8, no.7, July, (2015) DOI: 10.14257/ijunesst.2015.8.7.11
[3] A. Muhammad, A. Khubaib Amjad, and H. Mehdi, "Application of data mining using artificial neural network: Survey," International Journal of Database Theory and Application, vol.8, no.1, (2015) DOI: 10.14257/ijdta.2015.8.1.25
[4] P. Jong, "The characteristic function of CoreNet (multi-level single-layer artificial neural networks)," Asia-Pacific Journal of Neural Networks and Its Applications, vol.1, no.1, (2017) DOI: 10.21742/AJNNIA.2017.1.1.02
[5] L. Wei, "Neural network model for distortion buckling behaviour of cold-formed steel compression members," Helsinki University of Technology Laboratory of Steel Structures Publications 16, (2000)
[6] The concept of back-propagation learning by examples, http://hebb.cis.uoguelph.ca/~skremer/Teaching/27642/BP/node3.html