Unit - 1 (Introduction)
Computing Techniques
Course Objectives and Course Outcomes
Course Objectives:
• To provide an introduction to the emerging area of intelligent control and optimization.
• To offer a basic knowledge on expert systems, fuzzy logic systems, artificial neural networks
and optimization techniques.
Course Outcomes:
Upon completion of the course, students will be able to:
– Elucidate the concept of soft computing techniques and their applications.
– Design neural network controllers for various systems.
– Apply Fuzzy Logic to real-world problems.
– Apply Evolutionary Algorithms for obtaining optimal values for real-world problems.
– Apply Genetic Algorithms and particle swarm optimization to power electronic optimization
problems.
19PEC08 – Advanced Soft Computing
Techniques
Unit – 1: Introduction to Soft Computing, Artificial Neural Networks
Unit – 1
• Introduction to Soft Computing: Introduction to soft computing -
soft computing vs. hard computing - various types of soft computing
techniques - applications of soft computing.
• Artificial Neural Networks: Neuron - Nerve structure and synapse -
Artificial Neuron and its model - activation functions - Neural network
architecture: single layer and multi-layer feed forward networks -
McCulloch-Pitts neuron model - perceptron model - Adaline and
Madaline - multilayer perceptron model - back propagation learning
methods - effect of learning rule co-efficient - back propagation
algorithm - factors affecting back propagation training - applications.
Introduction to Soft
Computing
Computing:
• Computing means that there is a certain input and a
procedure by which that input can be converted into
some output.
• The input is called the antecedent, the output is called
the consequence, and the computing itself is a mapping
between the two.
• f is the function that is responsible for converting
the input (x) into some output (y), i.e. y = f(x).
• This is the concept of computing.
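This input-to-output mapping can be sketched in a few lines of Python; the squaring procedure below is purely an illustrative choice, not from the source.

```python
# Computing as a mapping: the antecedent (input x) is converted to the
# consequence (output y) by a precisely defined procedure f.
def f(x):
    # an unambiguous, formally defined step (squaring, chosen for illustration)
    return x * x

y = f(4)
print(y)
```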
• The terms hard computing and soft computing were
introduced by the mathematician Lotfi A. Zadeh.
• He is often referred to as LAZ, and he was the first
to characterize conventional, precisely defined
computation as hard computing.
According to LAZ, the hard computing provides
• precise result
• step that is required to solve the problem is
unambiguous
• control action is formally defined by means
of some mathematical formula or some
algorithm.
Examples of Soft Computing Techniques
Fuzzy Logic
Neural Network
Genetic algorithm
Applications of Soft Computing
• Image Processing, pattern classification.
• Image Processing and Data Compression
• Automotive Systems and Manufacturing
• Soft Computing in Architecture
• Decision-support Systems
• Soft Computing in Power Systems
• Neuro Fuzzy systems
• Fuzzy Logic Control
• Machine Learning Applications
• Speech and Vision Recognition Systems
• Process Control and So On…
Hybrid Computing
• Hybrid computing is the combination of hard computing and soft computing.
• A few portions of the problem can be solved using hard computing, namely those for which we
have a mathematical formulation. The remaining portions, which must run in
real time, for which no good algorithm is available, and for which we do not require an
accurate result (a near-accurate result is sufficient), can be solved using
soft computing. Mixing the two together is basically hybrid
computing.
• The major three hybrid systems are as
follows:
– Hybrid Fuzzy Logic (FL) Systems
– Hybrid Neural Network (NN) Systems
– Hybrid Evolutionary Algorithm (EA) Systems
Human Brain
• The human brain is an amazing processor. The most
basic element of the human brain is a specific
type of cell, known as the neuron.
• The human brain comprises about 100 billion
neurons.
• Each neuron can connect with up to 200,000
other neurons, although 1,000 to 10,000
interconnections are typical.
• Neural Networks are networks of neurons, for example, as
found in real (i.e. biological) brains.
• Artificial neurons are basic approximations of the neurons
found in real brains. They may be physical devices, or
purely mathematical constructs.
• Artificial Neural Networks (ANNs): An artificial neural
network may be defined as an information-processing
model that is inspired by the way biological nervous
systems, such as the brain, process information. This model
tries to replicate only the most basic functions of the brain.
• From a practical point of view, an ANN is just a parallel computational system,
implemented in software or hardware, whose design was inspired by the design and functioning of animal
brains.
• Neural networks have the ability to learn by example, which makes them
very flexible and powerful; there is no need to devise an algorithm to perform a specific
task, that is, there is no need to understand the internal mechanism of that
task.
• These networks are well suited for real-time systems because of their fast
response and computational times, which are due to their parallel architecture;
they can even keep functioning when part of the network is damaged.
• Neural network can be viewed from a multi-
disciplinary point of view as shown in Figure.
Application of NN
• Air traffic control.
• Echo patterns.
• Employee hiring.
• Machinery control.
• Medical diagnosis.
• Voice recognition.
• Weather prediction.
[Figure: two connected biological neurons, showing the soma, dendrites, axon, and synapse of each]
• An artificial neural network consists of a number of
very simple processors, also called neurons, which are
analogous to the biological neurons in the brain.
• The brain contains on the order of 10^11 neurons with roughly 10^15 interconnections.
Activation Functions
Types:
1. Identity function
2. Binary step function (applicable to a single-layer network having binary data)
3. Bipolar step function
4. Sigmoidal functions
   a. Binary sigmoid function
   b. Bipolar sigmoid function
5. Ramp function
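The common activation functions above can be written directly in Python; the steepness parameter name `lam` and the default threshold values are assumed for illustration.

```python
import math

def identity(x):
    return x                                         # output equals net input

def binary_step(x, theta=0.0):
    return 1 if x >= theta else 0                    # fires 1 at or above threshold

def bipolar_step(x, theta=0.0):
    return 1 if x >= theta else -1                   # bipolar outputs +1 / -1

def binary_sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * x))          # range (0, 1)

def bipolar_sigmoid(x, lam=1.0):
    return 2.0 / (1.0 + math.exp(-lam * x)) - 1.0    # range (-1, 1)

def ramp(x):
    return 0.0 if x < 0 else (x if x <= 1 else 1.0)  # clipped linear segment
```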
Important Terminologies
1. Weight: Represents the strength of the synapse connecting input and output neurons.
2. Bias: Biases are constant; a bias is an additional input into the next layer. It acts as a weight
whose activation is always one, which improves the performance of the neural net.
3. Threshold: Determines, based on the inputs, whether the perceptron fires or not.
4. Learning rate (α): Used to control the amount of weight adjustment at each step of training.
5. Momentum factor: Used to converge faster even for a higher value of α, with less overshoot.
Used in BPN.
6. Vigilance parameter: Controls the degree of similarity required for patterns to be
assigned to the same cluster unit. Used in adaptive resonance theory. Ranges between 0.7 and 1.
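A small sketch of how the learning rate and momentum factor interact in a weight update; the symbol `mu` for the momentum factor and the numeric values are assumed, not from the source.

```python
# Weight-update sketch: the current change is the gradient step (scaled
# by alpha) plus a fraction mu of the previous change, so a run of
# consistent gradients is accelerated with less overshoot.
alpha, mu = 0.2, 0.9            # illustrative learning rate and momentum
w, prev_delta = 0.5, 0.0
for gradient in [1.0, 1.0, 1.0]:   # three consistent gradient signals
    delta = alpha * gradient + mu * prev_delta
    w += delta
    prev_delta = delta             # deltas grow: 0.2, then 0.38, then 0.542
print(w)
```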
Various Neural Network Models
McCulloch-Pitts Neuron Model
• All excitatory connections into a particular neuron have the same weight,
although differently weighted connections can be input to different neurons
• The weights of the neuron are set, along with the threshold, to make the neuron
perform a simple logic function
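A McCulloch-Pitts neuron set up for the AND function can be sketched as follows; equal excitatory weights of 1 and a threshold of 2 are the standard choice for AND.

```python
def mp_neuron(inputs, weights, threshold):
    # McCulloch-Pitts neuron: fires (1) when the weighted sum of its
    # inputs reaches the threshold, otherwise stays silent (0).
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

# AND function: both excitatory weights set to 1, threshold set to 2,
# so the neuron fires only when both inputs are 1.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, mp_neuron((x1, x2), (1, 1), threshold=2))
```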
Concept of Linear Separability
• Linear separability is the concept wherein the
separation of the input space into regions is based
on whether the network response is positive or
negative
• A decision line is drawn to separate the positive
and negative responses
• The decision line is also called the decision-making line
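The decision line can be sketched numerically: the line w1·x1 + w2·x2 + b = 0 splits the input plane into a positive-response region and a negative-response region. The weights below are illustrative values that realise the AND function, not trained ones.

```python
# Illustrative weights: the decision line x1 + x2 - 1.5 = 0 places only
# the pattern (1, 1) in the positive-response region.
w1, w2, b = 1, 1, -1.5

def response(x1, x2):
    net = w1 * x1 + w2 * x2 + b
    return "+" if net > 0 else "-"

for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pattern, response(*pattern))
```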
Concept of Linear Separability
with two inputs
Step 6: Train the network until there is no weight change for all
training pairs
The weights W1 = 1, W2 = 1, b = 1 are the final weights after the first input
pattern is presented. The same process is repeated for all the input
patterns. The process can be stopped when all the targets become equal
to the calculated outputs, or when a separating line is obtained using the
final weights for separating the positive responses from the negative
responses. Table 2 shows the training of the perceptron network until its
target and calculated output converge for all the patterns.
It can be easily found that the above straight
line separates the positive response and
negative response region, as shown in Figure 2.
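The training loop of Step 6 can be sketched for the AND function with bipolar inputs and targets, assuming a learning rate α = 1 and a zero threshold (these choices are assumptions for illustration). With them, the weights after the first pattern are W1 = 1, W2 = 1, b = 1, matching the text above.

```python
# Perceptron with bipolar activation: +1 above the threshold, -1 below,
# 0 inside the dead band (theta = 0 here).
def step(net, theta=0.0):
    if net > theta:
        return 1
    if net < -theta:
        return -1
    return 0

# Bipolar AND: target is +1 only for the pattern (1, 1).
patterns = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
w1 = w2 = b = 0
alpha = 1

changed = True
while changed:                       # stop when no weight changes (Step 6)
    changed = False
    for (x1, x2), t in patterns:
        y = step(w1 * x1 + w2 * x2 + b)
        if y != t:                   # update only on a wrong output
            w1 += alpha * t * x1
            w2 += alpha * t * x2
            b += alpha * t
            changed = True

print(w1, w2, b)
```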
MADALINE Network
• MADALINE which stands for Multiple Adaptive Linear Neuron
• It is a network which consists of many Adalines in parallel with a
single output unit
• It is an example for multilayer feed-forward architecture
• The weights that are connected from the Adaline layer to the
Madaline layer are fixed, positive, and possess equal values
• Weights between the input layer and Adaline layer are adjusted
during training process
• The network is trained using LMS algorithm
MADALINE Network
It consists of "n" units of input layer, "m" units of Adaline layer and
"1" unit of the Madaline layer.
MADALINE Network Training Algorithm
MADALINE Network
• Each neuron in the Adaline and Madaline layers has a bias of
excitation 1
• The Adaline layer is present between the input layer and the
Madaline (output) layer; hence, the Adaline layer can be considered
a hidden layer
• The use of the hidden layer gives the network higher computational
capability compared to single-layer nets,
but this complicates the training process to some extent
• It is used in adaptive noise cancellation and adaptive filters
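A forward-pass sketch of a small MADALINE with two Adalines and one Madaline output unit. As stated above, the Adaline-to-Madaline weights are fixed, positive and equal (0.5 each, with a bias of 0.5, so the output unit acts as a logical OR of the Adaline outputs). The Adaline weights below are illustrative values that realise XOR, not the result of running the LMS training rule.

```python
def bipolar_step(net):
    return 1 if net >= 0 else -1

def madaline(x1, x2):
    # Hidden (Adaline) layer: adjustable weights plus a bias per unit.
    z1 = bipolar_step( x1 - x2 - 0.5)   # fires for the pattern (1, -1)
    z2 = bipolar_step(-x1 + x2 - 0.5)   # fires for the pattern (-1, 1)
    # Output (Madaline) layer: fixed, equal, positive weights (an OR unit).
    return bipolar_step(0.5 * z1 + 0.5 * z2 + 0.5)

# Bipolar XOR truth table.
for p in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    print(p, madaline(*p))
```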
Back-Propagation Network
[Figure: back-propagation network architecture with an input layer (x_i, ..., x_n), a hidden layer, and an output layer (y_k, ..., y_l); weights w_ij connect input unit i to hidden unit j, weights w_jk connect hidden unit j to output unit k, and error signals propagate backward from the output layer]
Back-Propagation Network
• Neurons present in the hidden & output layers have
biases whose activation is always one
• Bias term also acts as weights
• Direction of information flow for feed-forward
phase is shown in figure
• During back propagation phase of learning signals
are sent in reverse direction
Back-Propagation Network
• Training of BPN is done in three stages.
• Feed-forward phase (Feed-forward of input
pattern & calculation of output)
• Back-propagation of Error phase
• Weight updation phase
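The three stages above can be sketched in a minimal Python implementation. The 2-3-1 network size, the XOR training data, the learning rate of 0.5 and the binary sigmoid activation are all assumed for illustration.

```python
import math
import random

random.seed(0)

def sig(x):
    return 1.0 / (1.0 + math.exp(-x))   # binary sigmoid activation

n_in, n_hid = 2, 3                      # assumed network size: 2-3-1
# V[i][j]: input-to-hidden weights; the extra row V[n_in] holds hidden biases.
V = [[random.uniform(-0.5, 0.5) for _ in range(n_hid)] for _ in range(n_in + 1)]
# W[j]: hidden-to-output weights; the extra entry W[n_hid] is the output bias.
W = [random.uniform(-0.5, 0.5) for _ in range(n_hid + 1)]
alpha = 0.5                             # assumed learning rate

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    # Stage 1: feed-forward of the input pattern.
    z = [sig(sum(x[i] * V[i][j] for i in range(n_in)) + V[n_in][j])
         for j in range(n_hid)]
    y = sig(sum(z[j] * W[j] for j in range(n_hid)) + W[n_hid])
    return z, y

def total_error():
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

err_before = total_error()
for _ in range(5000):
    for x, t in data:
        z, y = forward(x)                                     # stage 1
        # Stage 2: back-propagation of error (delta terms).
        dk = (t - y) * y * (1 - y)                            # output delta
        dj = [dk * W[j] * z[j] * (1 - z[j]) for j in range(n_hid)]
        # Stage 3: weight updates.
        for j in range(n_hid):
            W[j] += alpha * dk * z[j]
            for i in range(n_in):
                V[i][j] += alpha * dj[j] * x[i]
            V[n_in][j] += alpha * dj[j]                       # hidden bias
        W[n_hid] += alpha * dk                                # output bias
err_after = total_error()
print(err_before, "->", err_after)
```

Training drives the total squared error down from its random-initialization value; the exact final error depends on the seed and the number of epochs.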
Feed forward Training Phase