Soft Computing: UNIT I:Basics of Artificial Neural Networks
Reference Books:
1. B. Yegnanarayana (2006), Artificial Neural Networks, Prentice Hall of India, New
Delhi, India.
2. John Yen, Reza Langari (2006), Fuzzy Logic, Pearson Education, New Delhi, India.
3. S. Rajasekaran, G. A. Vijayalakshmi Pai (2003), Neural Networks, Fuzzy Logic and
Genetic Algorithms: Synthesis and Applications, Prentice Hall of India, New Delhi,
India.
The first application of fuzzy logic is to create a decision system that can
predict various kinds of risk. The second application uses fuzzy information to
select the areas that need replacement.
Soft computing uses the method of Artificial Neural Networks (ANN) to predict
any instability in the voltage of the power system.
DEPARTMENT OF INFORMATION TECHNOLOGY 11/09/2021
ANN (Artificial Neural Network)
Artificial intelligence and machine learning haven’t just grabbed headlines and made
for blockbuster movies; they’re poised to make a real difference in our everyday
lives, such as with self-driving cars and life-saving medical devices.
Artificial Intelligence is a term used for machines that can interpret the data, learn
from it, and use it to do such tasks that would otherwise be performed by humans.
• You’ve probably already been using neural networks on a daily basis. When you
ask your mobile assistant to perform a search for you—say, Google Assistant, Siri, or
Alexa—or use a self-driving car, these are all neural network-driven.
• Computer games also use neural networks on the back end, as part of the game
system and how it adjusts to the players, and so do map applications, in processing
map images and helping you find the quickest way to get to your destination.
Definition:
An artificial neural network (ANN) is a piece of a computing system designed to
simulate the way the human brain analyzes and processes information.
• ANNs have self-learning capabilities that enable them to produce better results
as more data becomes available.
• Before taking a look at the differences between Artificial Neural Networks (ANN)
and Biological Neural Networks (BNN), let us take a look at the similarities in
terminology between the two.
BNN          ANN
Soma         Node
Dendrites    Input
Synapse      Weights or Interconnections
Axon         Output
ANN versus BNN
The following table shows a comparison between ANN and BNN based on some criteria.

Criteria     BNN                                 ANN
Processing   Massively parallel; slow but        Massively parallel; fast but
             superior to ANN                     inferior to BNN
Size         10^11 neurons and                   10^2 to 10^4 nodes (depends mainly
             10^15 interconnections              on the type of application and the
                                                 network designer)
For the above general model of an artificial neural network, the net input can be
calculated as follows:
y_in = x1.w1 + x2.w2 + x3.w3 + ... + xm.wm
The output can be calculated by applying the activation function over the net
input.
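As a sketch, the net input and output computations above can be written as follows; the input values, weights, and the step threshold are illustrative assumptions:

```python
def net_input(inputs, weights):
    """Compute y_in = x1.w1 + x2.w2 + ... + xm.wm."""
    return sum(x * w for x, w in zip(inputs, weights))

def step(y_in, threshold=0.5):
    """A simple binary activation function (threshold value assumed)."""
    return 1 if y_in >= threshold else 0

x = [1, 0, 1]
w = [0.4, 0.3, 0.2]
y = step(net_input(x, w))   # net input is 0.6, so the unit outputs 1
```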
• Let us take a look at the structure of our brain. The human brain contains billions of
neurons that act as organic switches.
• All these neurons are interconnected to form a huge and complex structure
called Neural Network.
• The output of a single neuron is dependent on inputs from thousands of
interconnected neurons.
• The summing part receives N input values, weights each value, and computes a
weighted sum.
• The output part produces a signal from the activation value. The sign of the
weight for each input determines whether the input is excitatory (positive
weight) or inhibitory (negative weight).
• The inputs could be discrete or continuous data values, and
likewise the outputs also could be discrete or continuous.
• Therefore the inputs to a processing unit may come from the outputs of other
processing units, and/or from external sources.
• The output of each unit may be given to several units including itself.
• The amount of the output of one unit received by another unit depends on the
strength of the connection between the units, and it is reflected in the weight
value associated with the connecting link.
• If there are N units in a given ANN, then at any instant of time each unit will have
a unique activation value and a unique output value.
• The set of the N activation values of the network defines the activation state of
the network at that instant.
• Likewise, the set of the N output values of the network defines the output state
of the network at that instant. Depending on the discrete or continuous nature of
the activation and output values, the state of the network can be described by a
discrete or continuous point in an N-dimensional space.
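For instance, the state of a small network can be pictured as follows; the unit count and activation values are assumed for illustration:

```python
# Snapshot of a hypothetical 3-unit network (N = 3); values are assumed.
activation_state = [0.2, -0.7, 1.1]          # one activation value per unit
output_state = [1 if a > 0 else 0 for a in activation_state]  # binary outputs
# Each state is a point in N-dimensional space; here output_state is [1, 0, 1].
```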
Operations:
In operation, each unit of an ANN receives inputs from other connected units
and/or from an external source.
A weighted sum of the inputs is computed at a given instant of time.
The activation value determines the actual output from the output function
unit, i.e., the output state of the unit.
The output values and other external inputs in turn determine the activation
and output states of the other units.
Activation dynamics determines the activation values of all the units, i.e., the
activation state of the network as a function of time.
The activation dynamics also determines the dynamics of the output state of
the network.
The set of all activation states defines the activation state space of the
network.
The set of all output states defines the output state space
of the network. Activation dynamics determines the trajectory of the path of
the states in the state space of the network. For a given network, defined by
the units and their interconnections with appropriate weights, the activation
states determine the short-term memory function of the network.
• In order to store a pattern in a network, it is necessary to adjust the weights
of the connections in the network. The set of all weights on all connections in
a network forms a weight vector.
• The set of all possible weight vectors defines the weight space.
• When the weights are changing, then the synaptic dynamics of the network
determines the weight vector as a function of time.
• Synaptic dynamics is followed to adjust the weights in order to store the
given patterns in the network.
• The process of adjusting the weights is referred to as learning.
• Once the learning process is completed, the final set of weight values
corresponds to the long term memory function of the network.
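As a rough sketch of synaptic dynamics, weights can be adjusted as each pattern is presented; the Hebbian-style update and the learning rate below are illustrative assumptions, not the notes' specific rule:

```python
def hebbian_update(weights, inputs, output, eta=0.1):
    """Adjust each weight in proportion to its input and the unit's output."""
    return [w + eta * x * output for w, x in zip(weights, inputs)]

w = [0.0, 0.0]                        # initial weight vector (a point in weight space)
w = hebbian_update(w, [1, -1], 1)     # weights move toward the presented pattern
```

After repeated presentations, the final weight vector constitutes the network's long-term memory of the stored patterns.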
Update:
In implementation, there are several options available for both activation and
synaptic dynamics. In particular, the updating of the output states of all the
units could be performed synchronously.
In this case, the activation values of all the units are computed at the same instant.
• From the activation values, the new output state of the network is derived.
• For each unit, the output state can be determined from the activation value either
deterministically or stochastically.
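A unit's output can be derived from its activation either deterministically or stochastically; the threshold rule and the sigmoid firing probability below are assumed illustrations:

```python
import math
import random

def deterministic_output(activation, theta=0.0):
    """Fire exactly when the activation exceeds the threshold."""
    return 1 if activation > theta else 0

def stochastic_output(activation, rng=random.Random(0)):
    """Fire with a probability that grows with the activation."""
    p = 1.0 / (1.0 + math.exp(-activation))   # probability of firing
    return 1 if rng.random() < p else 0
```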
Models of Neuron:
1. McCulloch-Pitts Model
2. Perceptron model
3. Adaline model
1. McCulloch-Pitts Model
In the McCulloch-Pitts (MP) model (Figure 1.2), the activation (x) is given
by a weighted sum of its M input values (a_i) and a bias term (θ). The binary
output is obtained by applying the output function over x.
• For example, the output neuron fires (y = 1) only when the first input is high
and the second input is low:
x1 x2 y
0 0 0
0 1 0
1 0 1
1 1 0
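A minimal sketch of an MP neuron realizing this truth table; the weight values (+1 excitatory on x1, -1 inhibitory on x2) and the threshold of 1 are assumed:

```python
def mp_neuron(x1, x2, w1=1, w2=-1, theta=1):
    """Fire (output 1) when the weighted sum reaches the threshold."""
    return 1 if x1 * w1 + x2 * w2 >= theta else 0

# Reproduces the table: only (x1=1, x2=0) makes the neuron fire.
table = [(x1, x2, mp_neuron(x1, x2)) for x1 in (0, 1) for x2 in (0, 1)]
```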
• This unit delay property of the MP neuron can be used to build sequential
digital circuits.
• With feedback, it is also possible to have a memory cell (Figure 1.4c) which can
retain the output indefinitely in the absence of any input.
• Since the weights are fixed, a network using this model does not have the
capability of learning. Moreover, the original model allows only binary output
states, operating at discrete time steps.
2. Perceptron model
• The Rosenblatt's perceptron model (Figure 1.5) for an artificial neuron
consists of outputs from sensory units to a fixed set of association units, the
outputs of which are fed to an MP neuron.
• The association units perform predetermined manipulations on their inputs.
• The main deviation from the MP model is that learning (i.e., adjustment of
weights) is incorporated in the operation of the unit. The desired or target
output (b) is compared with the actual binary output (s), and the error (δ) is
used to adjust the weights.
• The following equations describe the operation of the perceptron model of
a neuron:
x = w1.a1 + w2.a2 + ... + wM.aM - θ, s = f(x)
δ = b - s
Δwi = η.δ.ai, where η is the learning rate
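The perceptron operation and learning described above can be sketched as follows; the learning rate, epoch count, and the AND training set are illustrative assumptions:

```python
def output(inputs, weights, theta):
    """Binary output of the perceptron: s = f(sum(wi.ai) - theta)."""
    s = sum(x * w for x, w in zip(inputs, weights)) - theta
    return 1 if s > 0 else 0

def train_perceptron(samples, eta=0.1, epochs=20):
    """Adjust weights by the error between target and actual output."""
    weights = [0.0, 0.0]
    theta = 0.0
    for _ in range(epochs):
        for inputs, b in samples:
            s = output(inputs, weights, theta)
            delta = b - s                      # error = target - actual
            weights = [w + eta * delta * x for w, x in zip(weights, inputs)]
            theta -= eta * delta               # threshold is adjusted too
    return weights, theta

# Learn the AND function (linearly separable, so the perceptron converges).
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, t = train_perceptron(and_data)
```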
Connections can be made either from the units of one layer to the units of
another layer (interlayer connections) or among the units within the layer
(intralayer connections) or both interlayer and intralayer connections.
Further, the connections across the layers and among the units within a layer
can be organized either in a feed forward manner or in a feedback manner.
In a feedback network the same processing unit may be visited more than
once.
Generally, in an ANN, the processing units are arranged into layers, and all the
units in a particular layer have the same activation dynamics and output function.
Connection can be made between layers in multiple ways like processing unit of
one layer connected to a unit of another layer, processing unit of a layer
connected to a unit of same layer, etc.
Perceptron learning Law – Network starts its learning by assigning a random value
to each weight.
Outstar learning rule – It is used when the nodes or neurons in a network are
arranged in a layer.
Winner-takes-all Law
It is concerned with unsupervised training, in which the output nodes compete
with each other to represent the input pattern.
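A minimal sketch of the winner-takes-all rule, in which only the best-matching output node updates its weights; the initial weights, input pattern, and learning rate are assumed:

```python
def winner_takes_all(weight_vectors, x, eta=0.5):
    """Pick the node whose weights best match x; move its weights toward x."""
    scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in weight_vectors]
    winner = scores.index(max(scores))
    weight_vectors[winner] = [w + eta * (xi - w)
                              for w, xi in zip(weight_vectors[winner], x)]
    return winner

weights = [[1.0, 0.0], [0.0, 1.0]]            # one weight vector per output node
win = winner_takes_all(weights, [0.9, 0.1])   # node 0 wins and is updated
```

Only the winner learns; losing nodes keep their weights, so over time different nodes come to represent different input patterns.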