
Application of Soft Computing

Unit 1
PROBLEM SOLVING TECHNIQUES

HARD COMPUTING (Precise Models):
• Symbolic logic reasoning
• Traditional numerical modeling and search

SOFT COMPUTING (Approximate Models):
• Approximate reasoning
• Functional approximation and randomized search
Soft Computing
Soft computing combines different techniques and concepts. It can handle imprecision and
uncertainty. Fuzzy logic, neurocomputing, evolutionary and genetic programming, and
probabilistic computing are fields of soft computing. Soft computing is designed to model
and enable solutions to real-world problems that cannot be modelled mathematically.
The main computing paradigms of soft computing are:
• Neural Networks
• Fuzzy systems
• Genetic Algorithms

• Neural networks for learning and adaptivity.
• Fuzzy sets for knowledge representation via fuzzy If-Then rules.
• Genetic algorithms for evolutionary computation.

MULTIDISCIPLINARY VIEW OF NEURAL
NETWORKS
FUZZY LOGIC
• Origins: Multivalued Logic for treatment of
imprecision and vagueness
– 1930s: Post, Kleene, and Lukasiewicz attempted to
represent undetermined, unknown, and other possible
intermediate truth-values.
– 1937: Max Black suggested the use of a consistency profile
to represent vague (ambiguous) concepts.
– 1965: Zadeh proposed a complete theory of fuzzy sets
(and its isomorphic fuzzy logic), to represent and
manipulate ill-defined concepts.

Fuzzy logic gives us a language (with syntax and local semantics) into which we can
translate our qualitative domain knowledge.
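As a concrete illustration of such knowledge representation, the following is a minimal Python sketch (not from the slides) of a triangular membership function and a fuzzy AND realized with the min operator; the temperature sets and their breakpoints are assumptions chosen purely for illustration.

```python
# Illustrative sketch: a triangular fuzzy membership function and a fuzzy AND
# (min operator). Set names and breakpoints are assumed for the example.

def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy sets for a room-temperature variable (degrees Celsius).
def cold(t):
    return triangular(t, 0, 10, 20)

def warm(t):
    return triangular(t, 15, 25, 35)

t = 18
# The rule premise "temperature is cold AND temperature is warm" fires to the
# degree given by the minimum of the two partial memberships.
print(cold(t), warm(t), min(cold(t), warm(t)))
```

Unlike crisp logic, both memberships here are partial values in [0, 1] rather than hard 0/1 truth values, which is what lets fuzzy If-Then rules encode vague concepts.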
GENETIC ALGORITHM
EVOLUTIONARY PROCESS
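Since the slides show the evolutionary loop only as a figure, here is a hedged toy Python sketch of the selection-crossover-mutation cycle on the classic "OneMax" bit-string problem; the population size, genome length, and rates are assumptions chosen for illustration.

```python
import random

random.seed(0)

# Toy genetic algorithm maximizing the number of 1s in a bit string (OneMax).
POP_SIZE, GENOME_LEN, GENERATIONS = 20, 16, 30
MUTATION_RATE = 0.05

def fitness(genome):
    return sum(genome)

def select(population):
    # Tournament selection: keep the fitter of two randomly chosen individuals.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover.
    point = random.randint(1, GENOME_LEN - 1)
    return p1[:point] + p2[point:]

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

print(max(fitness(g) for g in population))  # best fitness approaches GENOME_LEN
```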
APPLICATIONS OF SOFT COMPUTING

Handwriting Recognition
Image Processing and Data Compression
Automotive Systems and Manufacturing
Soft Computing to Architecture
Decision-support Systems
Soft Computing to Power Systems
Neuro Fuzzy systems
Fuzzy Logic Control
Machine Learning Applications
Speech and Vision Recognition Systems
Process Control and So On

ANN: Inspired by the Biological Neural Network
BRAIN COMPUTATION
The human brain contains about 10 billion
nerve cells, or neurons. On average, each
neuron is connected to other neurons through
approximately 10,000 synapses.
Model of an ANN

ASSOCIATION OF BIOLOGICAL NET WITH ARTIFICIAL NET
Bipolar Sigmoid
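The slide shows the bipolar sigmoid only as a plot; as a quick reference, it is the activation f(x) = (1 - e^(-x)) / (1 + e^(-x)), with range (-1, 1) and equal to tanh(x/2). A minimal sketch:

```python
import numpy as np

def bipolar_sigmoid(x):
    """Bipolar sigmoid: f(x) = (1 - e^-x) / (1 + e^-x), range (-1, 1); equals tanh(x / 2)."""
    return (1.0 - np.exp(-x)) / (1.0 + np.exp(-x))

def bipolar_sigmoid_derivative(x):
    """Derivative expressed through the function value, convenient for backpropagation."""
    f = bipolar_sigmoid(x)
    return 0.5 * (1.0 + f) * (1.0 - f)

x = np.array([-3.0, 0.0, 3.0])
print(bipolar_sigmoid(x))  # approximately [-0.905, 0.0, 0.905]
```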
MULTILAYER FEED FORWARD NETWORK
LAYER PROPERTIES
• Input Layer: Each input unit may be designated by an attribute value possessed by the instance.

• Hidden Layer: Not directly observable; provides nonlinearities for the network.

• Output Layer: Encodes possible values.
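A minimal forward-pass sketch through such a layered network (a hypothetical 3-4-2 architecture with tanh hidden units, chosen purely for illustration) might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes for illustration: 3 inputs, 4 hidden units, 2 outputs.
W_hidden = rng.normal(size=(3, 4))   # input -> hidden weights
b_hidden = np.zeros(4)
W_out = rng.normal(size=(4, 2))      # hidden -> output weights
b_out = np.zeros(2)

def forward(x):
    # The hidden layer supplies the network's nonlinearity (tanh here).
    h = np.tanh(x @ W_hidden + b_hidden)
    # The output layer encodes the possible output values (kept linear here).
    return h @ W_out + b_out

print(forward(np.array([0.5, -1.0, 2.0])))
```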


FEEDBACK OR RECURRENT NEURAL NETWORK
Suppose that we are going to work on the AND gate problem using a perceptron. The gate returns
a true value if and only if both inputs are true.
We set the weights randomly, say w1 = 0.9 and w2 = 0.9, with learning rate = 0.5 and
bias = 0.5. (A training sketch follows the truth table below.)

X1 X2 Y
0 0 0
0 1 0
1 0 0
1 1 1
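A minimal sketch of the perceptron training loop for this table, using the slide's starting values (w1 = w2 = 0.9, learning rate 0.5) and reading "bias = 0.5" as the initial bias weight, which is an assumption:

```python
# Perceptron sketch for the AND gate; initial values follow the slide, and the
# interpretation of 0.5 as the initial bias weight is an assumption.

def step(net):
    return 1 if net >= 0 else 0

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = 0.9, 0.9, 0.5
lr = 0.5

for epoch in range(20):              # AND is linearly separable, so this converges
    errors = 0
    for (x1, x2), target in samples:
        out = step(w1 * x1 + w2 * x2 + b)
        if out != target:            # update weights only on a misclassification
            w1 += lr * (target - out) * x1
            w2 += lr * (target - out) * x2
            b += lr * (target - out)
            errors += 1
    if errors == 0:
        break

print(w1, w2, b)  # final weights realize the AND gate
```

With these values the loop reaches zero errors after a few epochs, ending with a positive net input only for the (1, 1) pattern.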
HEBB NETWORK
Donald Hebb stated in 1949 that in the brain, learning is performed by the change in the
synaptic gap. Hebb explained it as follows:

“When an axon of cell A is near enough to excite cell B, and repeatedly or persistently
takes part in firing it, some growth process or metabolic change takes place in one or
both cells such that A’s efficiency, as one of the cells firing B, is increased.”
HEBB LEARNING
• The weights between neurons whose activities are
positively correlated are increased:

dw_ij / dt ∝ correlation(x_i, x_j)
• Associative memory is produced automatically

The Hebb rule can be used for pattern association, pattern categorization, pattern
classification, and a range of other areas.
Hebbian Learning Algorithm
According to Hebb’s rule, the weights increase in proportion to the product of input and
output. This means that in a Hebb network, if two neurons are interconnected, then the
weights associated with them are increased through changes in the synaptic strength.

This network is suitable for bipolar data. The Hebbian learning rule is generally applied
to logic gates.

The weights are updated as:

w_i(new) = w_i(old) + x_i * y
Training Algorithm For Hebbian Learning Rule
The training steps of the algorithm are as follows:

1. Initially, the weights are set to zero, i.e. w_i = 0 for all inputs i = 1 to n, where n
is the total number of input neurons. The bias is also set to zero.

2. For each training pair, let s be the input vector and t the target output. The
activation function for the input units is generally the identity function (a linear
function of slope 1), so x_i = s_i.

3. The activation function for the output unit is set so that y = t.

4. The weights and bias are then adjusted as:

w_i(new) = w_i(old) + x_i * y
b(new) = b(old) + y

Steps 2 to 4 are repeated for each input vector and its target output, as in the sketch below.
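A minimal Python sketch of these steps, run on the bipolar AND data used in the example that follows (it reproduces the weight trajectory shown in the table below):

```python
# Hebbian training sketch for the bipolar AND example: weights start at zero
# and are updated as w_i(new) = w_i(old) + x_i*y, b(new) = b(old) + y.

samples = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]

w = [0.0, 0.0]   # step 1: weights initialized to zero
b = 0.0          # bias initialized to zero

for x, t in samples:
    y = t                                             # step 3: output activation y = t
    w = [w_i + x_i * y for w_i, x_i in zip(w, x)]     # step 4: weight update
    b = b + y                                         # bias update
    print(x, t, w, b)

# Final values after one pass: w = [2, 2], b = -2
```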
Example Of Hebbian Learning Rule

Let us implement the logical AND function with bipolar inputs using Hebbian learning.
X1 and X2 are the inputs, b is the bias (taken as 1), and the target value y is the output
of the logical AND operation over the inputs.

Input Input Bias Target


X1 X2 b y
1 (High) 1 (High) 1 1 (High)
1 (High) -1 (Low) 1 -1 (Low)
-1 (Low) 1 (High) 1 -1 (Low)
-1 (Low) -1 (Low) 1 -1 (Low)
#1) Initially, the weights are set to zero and the bias is also set to zero:
w1 = w2 = b = 0

#2) The first input vector is taken as [x1 x2 b] = [1 1 1] and the target value is 1.
The new weights will be:
w1 = 0 + 1*1 = 1, w2 = 0 + 1*1 = 1, b = 0 + 1 = 1

#3) The above weights are the final new weights for this step. When the second input is
passed, they become the initial weights.

#4) Take the second input = [1 -1 1]. The target is -1. The updated weights are:
w1 = 1 + 1*(-1) = 0, w2 = 1 + (-1)*(-1) = 2, b = 1 + (-1) = 0

#5) Similarly, the other inputs and weights are calculated.


Inputs            Target    Weight Changes     New Weights
X1    X2    b     y         Δw1   Δw2   Δb     W1    W2    b
 1     1    1      1          1     1     1      1     1     1
 1    -1    1     -1         -1     1    -1      0     2     0
-1     1    1     -1          1    -1    -1      1     1    -1
-1    -1    1     -1          1     1    -1      2     2    -2
Hebb Net for AND Function
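As a check, with the final weights w1 = 2, w2 = 2 and bias b = -2, the net input
2*x1 + 2*x2 - 2 is +2 for the input (1, 1) and -2, -2 and -6 for the other three bipolar
input pairs, so taking its sign reproduces the AND targets exactly.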
FEW APPLICATION AREAS OF NEURAL
NETWORKS
