NNA Introduction

The Neural Network Algorithm (NNA) is a metaheuristic optimization algorithm inspired by artificial neural networks (ANNs) and the biological nervous system, designed to generate new candidate solutions through a population-based approach. It mimics the functioning of the human brain by utilizing interconnected nodes (neurons) that learn and adapt by altering weight values to minimize prediction error. The NNA employs operators such as the bias and transfer function operators to explore the solution space and iteratively improve solution quality.


Neural Network Algorithm (NNA)

● A population-based metaheuristic optimization algorithm.

● Inspired by artificial neural networks (ANNs) and the biological nervous system.

● Benefits from the complicated structure of ANNs and their operators to generate new candidate solutions.
How does the human brain work?
● The human brain is composed of about 86 billion nerve cells called neurons, each connected to thousands of other cells by axons.
● Stimuli from the external environment, or inputs from sensory organs, are accepted by dendrites.
● These inputs create electric impulses that travel quickly through the neural network. A neuron can then pass the message on to other neurons to handle the issue, or not forward it at all.
Artificial Neural Networks (ANNs)

Artificial neural networks (ANNs) are computational models (i.e., interconnected computing units) inspired by the structure of biological neural networks. As in nature, the connections among units largely determine the network function.
Artificial Neural Networks (ANNs)
There are two artificial neural network topologies: feed-forward and recurrent (feedback).

● Feed-forward ANNs: the information flow is unidirectional. A unit sends information to other units from which it does not receive any information. In general, feed-forward networks are "static" (Fig. a).

● Recurrent ANNs: feedback loops are allowed. In this sense, these neural networks are "dynamic" (Fig. b).
How do ANNs mimic the human brain?
● ANNs are composed of multiple nodes, which imitate the biological neurons of the human brain.

● The neurons are connected by links, and they interact with each other.

● The nodes can take input data and perform simple operations on the data.

● The result of these operations is passed to other neurons.

● The output at each node is called its activation or node value.

● Each link is associated with a weight.

● ANNs are capable of learning, which takes place by altering the weight values.

● An ANN iteratively tries to map input data to target data.


From ANN to NNA
An ANN, simply speaking, tries to map input data to target data. It therefore tries to reduce the error (e.g., mean square error) between the predicted and target outputs by iteratively changing the values of the weights (w_ij) (see Fig. 1b). In optimization, however, the goal is to find the optimum solution, and a metaheuristic algorithm should search for a feasible optimal solution using a defined strategy.
From ANN to NNA
• Therefore, inspired by the ANNs, in the NNA the best solution obtained at each iteration is taken as the target data, and the aim is to reduce the error between the target data and the other predicted pattern solutions (i.e., to move the other predicted pattern solutions towards the target solution).
• The NNA is formulated for minimization problems.
Basic idea of NNA
● Like other metaheuristics, the NNA works on a population.

● Initially, we generate predicted solutions (i.e., pattern solutions) to the problem.

● The best pattern solution obtained is set as the target data (best solution).

● The NNA aims to reduce the error between the target solution and the other predicted solutions by changing the values of the weights and the pattern solutions.
NNA: Formulation

● Initialization of a pattern solution: X = [x_1, x_2, x_3, ..., x_D]

● In general, the population of pattern solutions is stored in the following matrix:

$$\text{Population of pattern solutions: } X = \begin{bmatrix} x_1^1 & x_2^1 & x_3^1 & \cdots & x_D^1 \\ x_1^2 & x_2^2 & x_3^2 & \cdots & x_D^2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_1^{N_{pop}} & x_2^{N_{pop}} & x_3^{N_{pop}} & \cdots & x_D^{N_{pop}} \end{bmatrix}$$

$$C_i = f(x_1^i, x_2^i, \ldots, x_D^i), \quad i = 1, 2, 3, \ldots, N_{pop}$$

The NNA resembles an ANN with N_pop input data of dimension D and only one target data (response). After setting the target solution (X^Target) among the pattern solutions, the target weight (W^Target), the weight corresponding to the target solution, has to be selected from the population of weights (the weight matrix).
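The following is a minimal Python sketch of this setup. The sphere objective, the bounds, and the population size are illustrative assumptions, not values from the slides.

```python
import numpy as np

def sphere(x):
    """Illustrative objective function (assumed, not from the slides)."""
    return float(np.sum(x ** 2))

n_pop, dim = 6, 5            # population size N_pop and dimension D (assumed)
lb, ub = -10.0, 10.0         # assumed lower/upper bounds of the search space
rng = np.random.default_rng(0)

# Population of pattern solutions: one row per pattern solution (N_pop x D)
X = lb + (ub - lb) * rng.random((n_pop, dim))

# Cost of each pattern solution: C_i = f(x_1^i, ..., x_D^i)
C = np.array([sphere(x) for x in X])

# The best pattern solution (minimization) becomes the target solution X^Target
target_idx = int(np.argmin(C))
x_target = X[target_idx].copy()
```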
NNA: Weight matrix
In ANNs, the artificial neurons or processing units may have several input paths, corresponding to the dendrites. Using a simple summation, the unit combines the weighted values of these input paths. The result is an internal activity level for the unit.

Back to the NNA, initial weights are defined as given in the following equation:

$$W(t) = [W_1, W_2, \ldots, W_{N_{pop}}] = \begin{bmatrix} w_{11} & w_{12} & \cdots & w_{1N_{pop}} \\ w_{21} & w_{22} & \cdots & w_{2N_{pop}} \\ \vdots & \vdots & \ddots & \vdots \\ w_{N_{pop}1} & w_{N_{pop}2} & \cdots & w_{N_{pop}N_{pop}} \end{bmatrix}$$

W is a square matrix (N_pop × N_pop) whose entries are generated as uniform random numbers between zero and one during the iterations. The first subscript of a weight refers to its own pattern solution, and the second subscript refers to the pattern solution with which the weight is shared.
NNA: Weight matrix
• However, there is a constraint on the weight values: the summation of the weights belonging to a pattern solution should not exceed one. Mathematically, it can be defined as follows:
$$\sum_{j=1}^{N_{pop}} w_{ij}(t) = 1, \quad i = 1, 2, 3, \ldots, N_{pop}$$

$$w_{ij} \sim U(0, 1), \quad i, j = 1, 2, 3, \ldots, N_{pop}$$

Without the above constraint, the weight values would tend to grow (i.e., take values greater than one) in a specific direction, and the algorithm would get stuck at a local optimum.
• This constraint gives the NNA's agents a controlled movement with a mild bias (weights varying from zero to one).
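A minimal sketch of the constrained initialization. Rescaling each pattern solution's weights by their sum is an assumed way to satisfy the constraint; the slides do not prescribe a specific normalization.

```python
import numpy as np

def init_weight_matrix(n_pop, rng):
    """Weights w_ij ~ U(0, 1), then each pattern solution's weights are
    rescaled so that they sum to exactly one (the constraint above)."""
    W = rng.random((n_pop, n_pop))
    return W / W.sum(axis=1, keepdims=True)

# Usage: W = init_weight_matrix(6, np.random.default_rng(0))
```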
NNA: Generation of New Pattern Solutions
After forming the weight matrix (W), new pattern solutions (X^New) are calculated using the following equations, inspired by the weighted summation technique used in ANNs:
$$X_j^{New}(t+1) = \sum_{i=1}^{N_{pop}} w_{ij}(t)\, X_i(t), \quad j = 1, 2, 3, \ldots, N_{pop}$$

$$X_i(t+1) = X_i(t) + X_i^{New}(t+1), \quad i = 1, 2, 3, \ldots, N_{pop}$$
For instance, if we have six pattern solutions (i.e., six neurons, a population size of 6), the first new pattern solution is calculated as follows:

$$X_1^{New}(t+1) = w_{11}X_1(t) + w_{21}X_2(t) + w_{31}X_3(t) + w_{41}X_4(t) + w_{51}X_5(t) + w_{61}X_6(t)$$
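As a Python sketch: the slides' double-subscript convention is easy to trip over, so here row j of W is simply taken as the weight vector that blends all current patterns into new pattern j, which lets a single matrix product realize the weighted summation.

```python
def generate_new_patterns(X, W):
    """X and W are NumPy arrays (N_pop x D and N_pop x N_pop).
    New pattern j is a weighted sum of all current patterns, and each
    pattern is then shifted by its new counterpart: X(t+1) = X(t) + X_new."""
    X_new = W @ X
    return X + X_new
```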
Idea of updating new pattern solutions: an example
NNA: Updating of Weight Matrix
After creating the new pattern solutions from the previous population of patterns, the weight matrix should be updated as well, based on the value of the best weight, the so-called "target weight". The following equation updates the weights towards the target weight, subject to the same constraint as before:

$$W_i^{Updated}(t+1) = W_i(t) + 2 \times rand \times (W^{Target}(t) - W_i(t)), \quad i = 1, 2, 3, \ldots, N_{pop}$$

$$\sum_{j=1}^{N_{pop}} w_{ij}(t) = 1, \quad i = 1, 2, 3, \ldots, N_{pop}$$

$$w_{ij} \sim U(0, 1), \quad i, j = 1, 2, 3, \ldots, N_{pop}$$


NNA: Bias Operator
• A bias current is always tied to a surrounding condition (e.g., noise), so that the output of each neuron reflects its surroundings.

• In the NNA, the bias operator modifies a certain percentage of the pattern solutions in the new population of pattern solutions and in the updated weight matrix (acting as a noise).

• In other words, the bias operator in the NNA is another way to explore the search space (the exploration process), and it acts similarly to the mutation operator in the GA.
NNA: Bias Operator
Suggested strategy for the bias operator, applied to the new pattern solutions and the updated weight matrix (a runnable sketch is given below):

For i = 1 to Npop
    If rand ≤ β
        %% ---- Bias for new pattern solution ----
        Nb = Round(D × β)  % Nb: number of biased variables in a new pattern solution
        For j = 1 to Nb
            XInput(i, random integer in [1, D]) = LB + (UB − LB) × rand
        End For
        %% ---- Bias for updated weight matrix ----
        Nwb = Round(Npop × β)  % Nwb: number of biased weights in the updated weight matrix
        For j = 1 to Nwb
            WUpdated(j, random integer in [1, Npop]) = U(0, 1)
        End For
    End If
End For
β is a modification factor that determines the percentage of the pattern solutions to be altered. The initial value of β is set to 1 (meaning a 100 percent chance of modifying every individual in the population), and its value is adaptively reduced at each iteration using any reduction formula.
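A Python sketch of the bias operator for a single pattern solution and its weight vector. Sampling the biased indices without replacement and forcing at least one biased entry are simplifying assumptions, as is re-normalizing the weights afterwards.

```python
import numpy as np

def bias_operator(x, w_row, beta, lb, ub, rng):
    """Reset roughly a fraction beta of one pattern solution's variables,
    and of its weights, to fresh random values (modifies arrays in place)."""
    d, n_pop = x.size, w_row.size

    n_b = max(1, round(d * beta))              # biased solution variables
    idx = rng.choice(d, size=n_b, replace=False)
    x[idx] = lb + (ub - lb) * rng.random(n_b)

    n_wb = max(1, round(n_pop * beta))         # biased weight entries
    w_idx = rng.choice(n_pop, size=n_wb, replace=False)
    w_row[w_idx] = rng.random(n_wb)
    w_row /= w_row.sum()                       # re-impose the sum-to-one constraint
    return x, w_row
```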
NNA: Transfer Function Operator
In the NNA, unlike in ANNs, the transfer function operator transfers the new pattern solutions in the population from their current positions in the search space to new positions, in order to generate better-quality solutions. The improvement is made by moving the current new pattern solutions closer to the best solution (the target solution):

$$X_i^*(t+1) = TF(X_i(t+1)) = X_i(t+1) + 2 \times rand \times (X^{Target}(t) - X_i(t+1)), \quad i = 1, 2, 3, \ldots, N_{pop}$$
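A one-line Python sketch of the TF operator; treating rand as a single scalar per call is one common reading of the formula and is an assumption here.

```python
def transfer_function(x, x_target, rng):
    """Move one new pattern solution towards the target solution."""
    return x + 2.0 * rng.random() * (x_target - x)
```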
NNA: Bias & Transfer Function Operators
Combination of the bias and TF operators in the NNA:

For i = 1 to Npop
    If rand ≤ β
        %% ---- Bias operator ----
        Apply the bias operator
    Else (rand > β)
        %% ---- Transfer function (TF) operator ----
        $$X_i^*(t+1) = TF(X_i(t+1)) = X_i(t+1) + 2 \times rand \times (X^{Target}(t) - X_i(t+1))$$
    End If
End For

At early iterations, the bias operator has a greater chance of generating new pattern solutions (more opportunities for discovering unvisited pattern solutions) and new weight values. As the iteration number increases, this chance decreases and the TF operator plays a more important role in the NNA, especially at the final iterations. A sketch of one full iteration combining these pieces is given below.
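Reusing the helper sketches defined above, one full NNA iteration could look like the following; the 0.99 reduction factor for β is an illustrative choice, since the slides allow any reduction formula.

```python
def nna_step(X, W, x_target, target_idx, beta, lb, ub, rng):
    """One NNA iteration: weighted summation, weight update, then either
    bias (exploration) or transfer function (exploitation) per agent."""
    X = generate_new_patterns(X, W)
    W = update_weights(W, target_idx, rng)
    for i in range(X.shape[0]):
        if rng.random() <= beta:
            X[i], W[i] = bias_operator(X[i], W[i], beta, lb, ub, rng)
        else:
            X[i] = transfer_function(X[i], x_target, rng)
    return X, W, 0.99 * beta      # illustrative beta reduction
```

After each step, the costs would be re-evaluated, and the target solution and target weight would be updated whenever a better pattern solution is found.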
Sequential steps of the NNA
Schematic view of the NNA: the NNA has self-feedback and global feedback.
Flowchart of the NNA
Steps of the NNA
First published paper on the NNA
More published papers so far
NNA codes and more for download:
https://ali-sadollah.com/neural-network-algorithm-nna/
Metaheuristic diagram (NNA, HS)
Q&A
Any Questions?

Emails: [email protected]
[email protected]

Personal Website: www.ali-sadollah.com
