NNA Introduction
●Feed Forward ANNs: the information flow is unidirectional. A unit sends information to
other units from which it does not receive any information. In general, the feed-forward
networks are “static” (Fig. a).
●Recurrent ANNs: feedback loops are allowed. In this sense these neural networks are
“dynamic” (Fig. b).
How Do ANNs Mimic the Human Brain?
● ANNs are composed of multiple nodes, which imitate the biological neurons of the human brain.
● The neurons are connected by links and they interact with each other.
● The nodes can take input data and perform simple operations on the data.
● ANNs are capable of learning, which takes place by altering weight values.
The NNA resembles an ANN with Npop input data, each of dimension D, and only one
target (response). After setting the target solution (XTarget) among the other pattern
solutions, the target weight (WTarget), i.e., the weight corresponding to the target solution,
has to be selected from the population of weights (the weight matrix).
NNA: Weight matrix
In the ANNs, the artificial neurons or the processing units may have several input paths,
corresponding to the dendrites. Using a simple summation, the unit combines the weighted
values of these input paths. The result is an internal activity level for the unit.
Back to the NNA, the initial weights are defined as given in the following equation:

$$W(t) = [w_{ij}(t)]_{N_{pop} \times N_{pop}}, \quad w_{ij} \sim U(0, 1), \quad i, j = 1, 2, 3, \ldots, N_{pop}$$

W is a square matrix (Npop×Npop) whose entries are random numbers generated uniformly
between zero and one during the iterations. The first subscript of a weight refers to its own
pattern solution, and the second subscript refers to the pattern solution with which it is shared.
NNA: Weight matrix
• However, there is a constraint on the weight values: the summation of the weights
belonging to a pattern solution may not exceed one. Mathematically, it can be defined as
follows:

$$\sum_{j=1}^{N_{pop}} w_{ij}(t) = 1, \quad i = 1, 2, 3, \ldots, N_{pop}$$

Without the above constraint, the weight values tend to grow (i.e., take values greater
than one) in a specific direction, and the algorithm would then get stuck in a local
optimum point.
• Having this constraint gives the NNA’s agents controlled movement with
mild bias (varying from zero to one).
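As a concrete illustration, here is a minimal Python sketch of drawing the initial weight matrix and rescaling each row so the summation constraint holds (function and variable names such as `init_weight_matrix` are illustrative choices, not from the original slides):

```python
import numpy as np

def init_weight_matrix(n_pop, rng):
    """Draw w_ij ~ U(0, 1), then rescale each row so that sum_j w_ij = 1."""
    W = rng.random((n_pop, n_pop))
    return W / W.sum(axis=1, keepdims=True)

rng = np.random.default_rng(42)
W = init_weight_matrix(6, rng)
print(W.sum(axis=1))   # every row sums to 1
```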
NNA: Generation of New Pattern Solutions
After forming the weight matrix (W), new pattern solutions (XNew) are
calculated using the following equations, inspired by the weighted summation
technique used in ANNs:
$$X_j^{New}(t+1) = \sum_{i=1}^{N_{pop}} w_{ij}(t)\, X_i(t), \quad j = 1, 2, 3, \ldots, N_{pop}$$

$$X_i(t+1) = X_i(t) + X_i^{New}(t+1), \quad i = 1, 2, 3, \ldots, N_{pop}$$
For instance, if we have six pattern solutions (i.e., six neurons, a population
size of 6), the first new pattern solution is calculated as follows:
$$X_1^{New}(t+1) = w_{11}X_1(t) + w_{21}X_2(t) + w_{31}X_3(t) + w_{41}X_4(t) + w_{51}X_5(t) + w_{61}X_6(t)$$
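A short Python sketch of this weighted-summation step may help (identifiers are illustrative; `X` holds the Npop pattern solutions, one D-dimensional row each):

```python
import numpy as np

def generate_new_patterns(X, W):
    # X_new_j(t+1) = sum_i w_ij(t) * X_i(t), i.e. row j of W^T @ X
    X_new = W.T @ X
    # X_i(t+1) = X_i(t) + X_i^New(t+1)
    return X + X_new

rng = np.random.default_rng(0)
X = rng.random((6, 3))                 # six solutions, D = 3
W = rng.random((6, 6))
W /= W.sum(axis=1, keepdims=True)      # enforce the weight constraint
X_next = generate_new_patterns(X, W)
```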
Idea of updating new pattern solutions:
An Example
NNA: Updating of Weight Matrix
After creating the new pattern solutions from the previous population of
patterns, the weight matrix should be updated as well, based on the value of
the best weight, the so-called "target weight". The following equations suggest
an update rule driven by the target weight, subject to the same summation constraint:
$$W_i^{Updated}(t+1) = W_i(t) + 2 \times rand \times \left(W^{Target}(t) - W_i(t)\right), \quad i = 1, 2, 3, \ldots, N_{pop}$$

$$\sum_{j=1}^{N_{pop}} w_{ij}(t) = 1, \quad i = 1, 2, 3, \ldots, N_{pop}$$
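A hedged Python sketch of this step, assuming row i of `W` is the weight vector W_i and `target_idx` is the index of the target solution (both naming choices are mine):

```python
import numpy as np

def update_weights(W, target_idx, rng):
    n_pop = W.shape[0]
    W_target = W[target_idx].copy()
    # W_i(t+1) = W_i(t) + 2 * rand * (W_target(t) - W_i(t)),
    # one random step size in [0, 2) per weight vector
    W = W + 2.0 * rng.random((n_pop, 1)) * (W_target - W)
    # The move can break the row-sum constraint (and even produce small
    # negative entries), so clip and renormalize -- a safeguard assumed
    # here, not spelled out on the slide.
    W = np.clip(W, 1e-12, None)
    return W / W.sum(axis=1, keepdims=True)
```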
• In other words, the bias operator in the NNA is another way of exploring
the search space (the exploration process); it acts similarly to the
mutation operator in the GA.
NNA: Bias Operator
A suggested strategy for the bias operator, applied to the new input solutions and
the updated weight matrix:
For i = 1 to Npop
    If rand ≤ β
        %% ------------- Bias for New Pattern Solution ---------------------------------------------------------
        Nb = Round(D × β)        % Nb: number of biased variables in the new pattern solution
        For j = 1 to Nb
            XInput(i, Integer rand [1, D]) = LB + (UB − LB) × rand
        End For
        %% ------------- Bias for Updated Weight Matrix --------------------------------------------------------
        Nwb = Round(Npop × β)    % Nwb: number of biased weights in the updated weight matrix
        For j = 1 to Nwb
            WUpdated(i, Integer rand [1, Npop]) = U(0, 1)
        End For
    End If
End For
β is a modification factor that determines the percentage of the pattern
solutions to be altered. The initial value of β is set to 1 (meaning a 100
percent chance of modifying every individual in the population), and its value
is adaptively reduced at each iteration using any reduction formulation, as in
the sketch below.
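Putting the pseudocode above into Python gives roughly the following sketch (identifiers and the β-reduction rule at the end are assumptions for illustration):

```python
import numpy as np

def bias_individual(X, W, i, beta, LB, UB, rng):
    """Reset a few variables of solution i and a few of its weights."""
    D, n_pop = X.shape[1], W.shape[0]
    # Bias for the new pattern solution
    nb = round(D * beta)                      # Nb biased variables
    cols = rng.choice(D, size=nb, replace=False)
    X[i, cols] = LB + (UB - LB) * rng.random(nb)
    # Bias for the updated weight matrix
    nwb = round(n_pop * beta)                 # Nwb biased weights
    wcols = rng.choice(n_pop, size=nwb, replace=False)
    W[i, wcols] = rng.random(nwb)
    W[i] /= W[i].sum()                        # restore the weight constraint

# One possible reduction formulation (an assumption; the slides allow any
# decreasing schedule): beta = 0.99 * beta at every iteration.
```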
NNA: Transfer Function Operator
In the NNA, unlike in ANNs, the transfer function operator transfers the new
pattern solutions in the population from their current positions in the
search space to new positions, in order to update and generate better
quality solutions toward the target solution. The improvement is made by
moving the current new pattern solutions closer to the best solution (the
target solution):
$$X_i^*(t+1) = TF(X_i(t+1)) = X_i(t+1) + 2 \times rand \times \left(X^{Target}(t) - X_i(t+1)\right), \quad i = 1, 2, 3, \ldots, N_{pop}$$
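A one-step Python sketch of the TF operator (names are illustrative; `x_target` is the current target solution):

```python
import numpy as np

def transfer_function(X, x_target, rng):
    # Move every solution toward the target with a random step in [0, 2);
    # steps above 1 overshoot the target, which keeps some diversity.
    step = 2.0 * rng.random((X.shape[0], 1))
    return X + step * (x_target - X)
```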
NNA: Bias & Transfer Function Operators
Combination of Bias and TF operators in the NNA
For i = 1 to Npop
    If rand ≤ β
        %% ----------------- Bias Operator ----------------------------------------------------
        Apply the bias operator
    Else (rand > β)
        %% ----------------- Transfer Function (TF) Operator -------------------------
        $X_i^*(t+1) = TF(X_i(t+1)) = X_i(t+1) + 2 \times rand \times (X^{Target}(t) - X_i(t+1))$
    End If
End For
At early iterations, there are more chances for the bias operator to generate
new pattern solutions (more opportunities for discovering unvisited pattern
solutions) and new weight values. However, as the iteration number increases,
this chance decreases and the TF operator plays a more important role in the
NNA, especially at the final iterations.
NNA: Sequential Steps
Schematic view of NNA
NNA has self-feedback and global feedback.
Flowchart of NNA
Steps of NNA
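For reference, the steps above can be assembled into a compact, self-contained Python sketch of one NNA run on the sphere function. Parameter values, the clipping safeguard, and the β schedule are assumptions for illustration, not the reference implementation (the authors' own codes are available at the download link below):

```python
import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=-1)

def nna(obj, dim, n_pop=20, iters=200, LB=-10.0, UB=10.0, seed=1):
    rng = np.random.default_rng(seed)
    X = LB + (UB - LB) * rng.random((n_pop, dim))      # pattern solutions
    W = rng.random((n_pop, n_pop))                     # weight matrix
    W /= W.sum(axis=1, keepdims=True)
    fit = obj(X)
    best = int(np.argmin(fit))
    x_tgt, f_tgt = X[best].copy(), fit[best]           # target solution
    beta = 1.0
    for _ in range(iters):
        # 1) weighted summation: X_new_j = sum_i w_ij * X_i, then add
        X = X + W.T @ X
        # 2) pull each weight vector toward the target weight, renormalize
        W = W + 2.0 * rng.random((n_pop, 1)) * (W[best] - W)
        W = np.clip(W, 1e-12, None)
        W /= W.sum(axis=1, keepdims=True)
        # 3) bias (exploration) or transfer function (exploitation)
        for i in range(n_pop):
            if rng.random() <= beta:
                nb = round(dim * beta)
                cols = rng.choice(dim, size=nb, replace=False)
                X[i, cols] = LB + (UB - LB) * rng.random(nb)
                nwb = round(n_pop * beta)
                wc = rng.choice(n_pop, size=nwb, replace=False)
                W[i, wc] = rng.random(nwb)
                W[i] /= W[i].sum()
            else:
                X[i] = X[i] + 2.0 * rng.random() * (x_tgt - X[i])
        X = np.clip(X, LB, UB)
        # 4) evaluate, keep the best solution found so far, reduce beta
        fit = obj(X)
        best = int(np.argmin(fit))
        if fit[best] < f_tgt:
            x_tgt, f_tgt = X[best].copy(), fit[best]
        beta *= 0.99       # assumed reduction formulation
    return x_tgt, f_tgt

x_best, f_best = nna(sphere, dim=5)
print(x_best, f_best)
```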
First Published paper of NNA
More Published Papers So Far
NNA codes and more for download:
https://ali-sadollah.com/neural-network-algorithm-nna/
Metaheuristic Diagram (NNA and HS)
Q&A
Any Questions?
Emails: [email protected]
[email protected]