Chapter 1
(173101)
UNIT-1
Introduction to Soft
Computing
The idea of Soft Computing was initiated by Lotfi A.
Zadeh.
Definition: Soft Computing is an emerging, promising approach to
computing which parallels the remarkable ability of the human
mind to reason and learn in an environment of uncertainty
and imprecision.
Zadeh defines SC as one multidisciplinary system formed by the
fusion (combination) of the fields of Fuzzy Logic,
Neuro-Computing, Genetic Computing and
Probabilistic Computing.
It is a fusion of methodologies designed to model and enable
solutions to real-world problems which are not modelled, or are
too difficult to model, mathematically.
These methodologies share two features: adaptivity &
knowledge.
Introduction to Soft
Computing
SC consists of: Neural Networks, Fuzzy Systems,
and Genetic Algorithms.
Neural Networks: for learning and adaptation.
Fuzzy Systems: for knowledge representation
via fuzzy if-then rules.
Genetic Algorithms: for evolutionary
computation.
1) Hard computing, i.e., conventional computing, requires a precisely stated analytical
model and often a lot of computation time. Soft computing differs from conventional (hard)
computing in that, unlike hard computing, it is tolerant of imprecision, uncertainty, partial truth,
and approximation. In effect, the role model for soft computing is the human mind.
2) Hard computing is based on binary logic, crisp systems, numerical analysis and crisp
software, whereas soft computing is based on fuzzy logic, neural nets and probabilistic reasoning.
3) Hard computing has the characteristics of precision and categoricity; soft computing,
those of approximation and dispositionality. Whereas in hard computing imprecision and
uncertainty are undesirable properties, in soft computing the tolerance for imprecision and
uncertainty is exploited to achieve tractability, lower cost, a high Machine Intelligence Quotient
(MIQ) and economy of communication.
4) Hard computing requires programs to be written; soft computing can evolve its own
programs
5) Hard computing uses two-valued logic; soft computing can use multivalued or fuzzy logic
6) Hard computing is deterministic; soft computing incorporates stochasticity
7) Hard computing requires exact input data; soft computing can deal with ambiguous and
noisy data
8) Hard computing is strictly sequential; soft computing allows parallel computations
9) Hard computing produces precise answers; soft computing can yield approximate
answers
Neural Networks (NN)
NN are simplified models of the biological neuron
system.
Neural network: an information processing
paradigm (model) inspired by biological nervous
systems, such as the brain.
Structure: large number of highly interconnected
processing elements (neurons) working together.
Inspired by brain.
Like people, they learn from experience (by
example); they are therefore trained with known
examples of the problem to acquire knowledge.
NNs adopt various learning mechanisms
(Supervised and Unsupervised learning are the
most popular).
Neural Networks (NN)
Characteristics, such as:
Mapping capabilities or Pattern recognition.
Data classification.
Generalization.
High speed information processing.
Parallel Distributed Processing.
In a biological system,
learning involves adjustments to the synaptic
connections between neurons.
Architecture:
Feed Forward (Single layer and Multi layer)
Recurrent.
Neural Networks (NN)
Where can neural network systems help?
[Diagram: a single neuron with inputs p1, p2, p3, weights w1, w2, w3, a bias b (constant input 1) and an activation function f producing the output.]
a = f(p1 w1 + p2 w2 + p3 w3 + b) = f(Σ pi wi + b)
Equivalently, for inputs x1, x2, x3 with weights w1, w2, w3 and a bias weight w0:
y = f(w0 + Σ wi xi), with the sum taken over i = 1..3
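As a concrete illustration, the following is a minimal Python sketch of this weighted-sum computation; the weight, bias and input values are made-up examples, and the step activation is just one possible choice for f.

```python
# Minimal sketch of a single artificial neuron: y = f(w0 + sum(wi * xi)).
# Weights, bias and inputs below are illustrative values, not from the slides.

def step(n):
    """A simple threshold activation: 1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def neuron_output(inputs, weights, bias, activation=step):
    # Weighted sum of the inputs plus the bias, passed through the activation function.
    n = bias + sum(w * x for w, x in zip(weights, inputs))
    return activation(n)

# Example: three inputs with three weights and a bias.
print(neuron_output([1, 0, 1], [0.5, -0.2, 0.3], bias=-0.6))  # -> 1
```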
Activation Function
Performs a mathematical operation on the signal output.
Linear: y = x
Logistic: y = 1 / (1 + exp(-x))
Hyperbolic tangent: y = (exp(x) - exp(-x)) / (exp(x) + exp(-x))
[Plots of the linear, logistic and hyperbolic tangent activation functions omitted.]
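For reference, the three activation functions can be written down directly; the small Python sketch below uses only the standard math module, and the sample input values are arbitrary.

```python
import math

# Linear activation: passes the signal through unchanged.
def linear(x):
    return x

# Logistic (sigmoid) activation: squashes the signal into (0, 1).
def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hyperbolic tangent activation: squashes the signal into (-1, 1).
def hyperbolic_tangent(x):
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

for x in (-2.0, 0.0, 2.0):
    print(x, linear(x), round(logistic(x), 3), round(hyperbolic_tangent(x), 3))
```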
Architecture of ANN
Feed Forward Neural Network.
Single Layer Feed Forward Neural Network.
Multi Layer Feed Forward Neural Network.
Recurrent Neural Network.
Feed Forward Neural
Networks
[Diagrams: single layer and multi layer feed forward networks, each with inputs x1 x2 .. xn connected forward through to the output layer.]
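A minimal sketch of a feed-forward pass with one hidden layer is shown below; the layer sizes, the random weights and the use of the logistic activation are illustrative assumptions, not part of the slides.

```python
import math
import random

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron computes f(bias + sum of weight * input) over all its inputs.
    return [logistic(b + sum(w * x for w, x in zip(ws, inputs)))
            for ws, b in zip(weights, biases)]

def feed_forward(inputs, hidden_w, hidden_b, out_w, out_b):
    # Signals flow in one direction only: inputs -> hidden layer -> output layer.
    hidden = layer(inputs, hidden_w, hidden_b)
    return layer(hidden, out_w, out_b)

# Illustrative sizes: 3 inputs, 4 hidden neurons, 1 output neuron (random weights).
random.seed(0)
hidden_w = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
out_w = [[random.uniform(-1, 1) for _ in range(4)]]
print(feed_forward([1.0, 0.5, -1.0], hidden_w, [0.0] * 4, out_w, [0.0]))
```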
Recurrent Neural Networks
There is at least one feedback loop.
[Diagram: inputs x1 x2 .. xn, 1st hidden layer, 2nd hidden layer and output layer, with feedback connections between the layers.]
Learning Methods
Supervised Learning:
A teacher is assumed to be present during the
learning process.
Each input pattern used to train the network is
associated with an output pattern (target pattern).
The error is determined by comparing the network's
calculated output with the expected target output.
The error can be used to change the network parameters,
which results in an improvement in performance (a small sketch follows below).
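As a rough sketch of this idea, the toy example below fits a single weight by repeatedly comparing the computed output with the target output and adjusting the weight from the error; the model (y = w * x), the data and the learning rate are illustrative assumptions.

```python
# Toy supervised learning: fit y = w * x to examples of y = 2 * x
# by repeatedly comparing the computed output with the target output.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
w = 0.0
learning_rate = 0.05

for _ in range(100):
    for x, target in examples:
        output = w * x
        error = target - output          # teacher-provided target vs computed output
        w += learning_rate * error * x   # change the parameter to reduce the error

print(round(w, 3))  # approaches 2.0
```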
Learning Methods
Unsupervised Learning:
No teacher is assumed to be present during the
learning process.
The target output is not presented to the network,
so the network learns by itself.
Reinforced learning:
A teacher is available but does not present the
expected answer; it only indicates whether the
computed output is correct or incorrect.
Perceptron
The perceptron neuron produces a 1 if the net
input into the transfer function is equal to or
greater than 0, otherwise it produces a 0.
It is a single-unit network.
Change the weight by an
amount proportional to
the difference between
the desired output and
the actual output:
ΔWi = η (D − Y) Ii
where η is the learning rate, D is the desired
output, Y is the actual output, and Ii is the
i-th input.
Perceptron Learning
Rule
Example: A simple single unit
adaptive network
The network has 2
inputs, and one
output. All are binary.
The output is
1 if W0I0 + W1I1 + Wb > 0
0 if W0I0 + W1I1 + Wb ≤ 0
We want it to learn
simple OR: output a 1
if either I0 or I1 is 1.
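A minimal Python sketch of this OR example, using the perceptron rule ΔWi = η (D − Y) Ii from above; the learning rate, the initial weights and the number of training passes are illustrative assumptions.

```python
# Perceptron with two inputs and a bias, trained on the OR truth table.
training_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w0, w1, wb = 0.0, 0.0, 0.0   # weights for I0, I1 and the bias input (always 1)
eta = 0.1                     # learning rate (illustrative value)

def predict(i0, i1):
    # Output 1 if W0*I0 + W1*I1 + Wb > 0, otherwise 0.
    return 1 if w0 * i0 + w1 * i1 + wb > 0 else 0

for _ in range(20):
    for (i0, i1), desired in training_data:
        y = predict(i0, i1)
        # Perceptron learning rule: delta_Wi = eta * (D - Y) * Ii
        w0 += eta * (desired - y) * i0
        w1 += eta * (desired - y) * i1
        wb += eta * (desired - y) * 1

for (i0, i1), desired in training_data:
    print(i0, i1, "->", predict(i0, i1), "(desired", desired, ")")
```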