
Soft Computing (173101)
UNIT-1
Introduction to Soft Computing
The idea of Soft Computing was initiated by Lotfi A. Zadeh.
Definition: Soft Computing is an emerging approach to computing which parallels the remarkable ability of the human mind to reason and learn in an environment of uncertainty and imprecision.
Zadeh describes SC as one multidisciplinary system: the fusion (combination) of the fields of Fuzzy Logic, Neuro-Computing, Genetic Computing and Probabilistic Computing.
It is a fusion of methodologies designed to model and enable solutions to real-world problems which are not modeled, or are too difficult to model, mathematically.
These methodologies share two features: adaptivity and knowledge.
Introduction to Soft Computing
SC consists of: Neural Networks, Fuzzy Systems, and Genetic Algorithms.
Neural Networks: for learning and adaptation.
Fuzzy Systems: for knowledge representation via fuzzy if-then rules.
Genetic Algorithms: for evolutionary computation.

Soft Computing is still growing and developing.


Goal of Soft Computing
It is a new multidisciplinary field that aims to construct a new generation of Artificial Intelligence, known as Computational Intelligence.
The main goal is to develop intelligent machines that provide solutions to real-world problems which are not modeled, or are too difficult to model, mathematically.
Its aim is to exploit the tolerance for Approximation, Uncertainty, Imprecision, and Partial Truth in order to achieve a close resemblance to human-like decision making.
Components of Soft Computing
Components of soft computing include:
Machine learning, including:
Neural networks (NN)
Perceptron
Support Vector Machines (SVM)
Fuzzy logic (FL)
Evolutionary computation (EC), including:
Evolutionary algorithms
Genetic algorithms
Differential evolution
Metaheuristics and Swarm Intelligence, including:
Ant colony optimization
Particle swarm optimization
Ideas about probability, including:
Bayesian network
Hard Computing vs Soft Computing

1) Hard computing, i.e., conventional computing, requires a precisely stated analytical model and often a lot of computation time. Soft computing differs from conventional (hard) computing in that, unlike hard computing, it is tolerant of imprecision, uncertainty, partial truth, and approximation. In effect, the role model for soft computing is the human mind.
2) Hard computing is based on binary logic, crisp systems, numerical analysis and crisp software; soft computing is based on fuzzy logic, neural nets and probabilistic reasoning.
3) Hard computing has the characteristics of precision and categoricity; soft computing, approximation and dispositionality. Although in hard computing imprecision and uncertainty are undesirable properties, in soft computing the tolerance for imprecision and uncertainty is exploited to achieve tractability, lower cost, a high Machine Intelligence Quotient (MIQ) and economy of communication.
4) Hard computing requires programs to be written; soft computing can evolve its own programs.
5) Hard computing uses two-valued logic; soft computing can use multivalued or fuzzy logic.
6) Hard computing is deterministic; soft computing incorporates stochasticity.
7) Hard computing requires exact input data; soft computing can deal with ambiguous and noisy data.
8) Hard computing is strictly sequential; soft computing allows parallel computations.
9) Hard computing produces precise answers; soft computing can yield approximate answers.
Neural Networks (NN)
NN are simplified models of the biological neuron system.
A neural network is an information processing paradigm (model) inspired by biological nervous systems, such as our brain.
Structure: a large number of highly interconnected processing elements (neurons) working together, inspired by the brain.
Like people, they learn from experience (by example); they are therefore trained with known examples of a problem in order to acquire knowledge.
NN adopt various learning mechanisms (supervised and unsupervised learning are very popular).
Neural Networks (NN)
Characteristics include:
Mapping capabilities or Pattern recognition.
Data classification.
Generalization.
High speed information processing.
Parallel Distributed Processing.
In a biological system,
learning involves adjustments to the synaptic
connections between neurons.
Architecture:
Feed Forward (Single layer and Multi layer)
Recurrent.
Neural Networks (NN)
Where can neural network systems help?
When we can't formulate an algorithmic solution.
When we can get lots of examples of the behavior we require (learning from experience).
When we need to pick out the structure from existing data.
Neural Networks (NN)
Biological Neuron.

The brain contains about 10^10 basic units called neurons (small cells).
Each neuron is connected to thousands of others, giving on the order of 10^14 connections in the brain.
A neuron receives electro-chemical signals from its various sources and transmits electrical impulses to other neurons.
The average brain weighs about 1.5 kg, and a neuron weighs about 1.5 * 10^-9 g.
Neural Networks (NN)
Biological Neuron.
While some of the neurons perform input and output operations, others form part of an interconnected network and are responsible for signal transformation and storage of information.
A neuron is composed of:
The cell body, known as the soma (behaves as the processing unit).
Dendrites (behave as input channels).
The axon (behaves as the output channel).
Key Elements of NN
Neural computing requires a number of neurons to be connected together into a neural network. Neurons are arranged in layers.
[Figure: a single neuron with inputs p1, p2, p3, weights w1, w2, w3, a bias input of 1 weighted by b, transfer function f, and output a]

a = f(p1 w1 + p2 w2 + p3 w3 + b) = f(Σ pi wi + b)

Each neuron within the network is usually a simple processing unit which takes one or more inputs and produces an output. At each neuron, every input has an associated weight which modifies the strength of that input. The neuron simply adds together all the weighted inputs and calculates an output to be passed on.
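As a hedged illustration of this summing behaviour (not part of the original slides), the short Python sketch below computes a = f(Σ pi wi + b) for a three-input neuron; the logistic activation and the specific weights are assumptions chosen only for the example.

```python
import math

def logistic(x):
    # Logistic (sigmoid) activation: squashes any real value into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias, activation=logistic):
    # a = f(p1*w1 + p2*w2 + p3*w3 + b): weighted sum of inputs plus bias,
    # passed through the activation (transfer) function f.
    net = sum(p * w for p, w in zip(inputs, weights)) + bias
    return activation(net)

# Example: three inputs p1..p3 with assumed weights w1..w3 and bias b.
print(neuron_output([0.5, 0.2, 0.1], [0.4, 0.7, -0.3], bias=0.1))
```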
Artificial Neural Network
What is Artificial Neuron?
Definition: a non-linear, parameterized function with a restricted output range.
The neuron calculates a weighted sum of its inputs and compares it to a threshold. If the sum is higher than the threshold, the output is set to 1, otherwise to 0.
[Figure: a neuron with inputs x1, x2, x3, bias weight w0, and output y]

y = f( w0 + Σ (i = 1 to n) wi xi )
Activation Function
The activation function performs a mathematical operation on the signal output. Common choices (plotted in the original slides):
Linear: y = x
Logistic: y = 1 / (1 + exp(-x))
Hyperbolic tangent: y = (exp(x) - exp(-x)) / (exp(x) + exp(-x))

[Plots of the linear, logistic and hyperbolic tangent activation functions over the range -10 to 10]
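The three activation functions above can be written directly in code. The following Python sketch is illustrative only; math.tanh would give the same result as the explicit hyperbolic tangent formula.

```python
import math

def linear(x):
    # Linear activation: y = x (the signal passes through unchanged).
    return x

def logistic(x):
    # Logistic activation: y = 1 / (1 + exp(-x)), output in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def hyperbolic_tangent(x):
    # Hyperbolic tangent: y = (exp(x) - exp(-x)) / (exp(x) + exp(-x)), output in (-1, 1).
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

for x in (-2.0, 0.0, 2.0):
    print(x, linear(x), round(logistic(x), 3), round(hyperbolic_tangent(x), 3))
```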
Architecture of ANN
Feed Forward Neural Network.
Single Layer Feed Forward Neural Network.
Multi Layer Feed Forward Neural Network.
Recurrent Neural Network.
Feed Forward Neural Networks
The information is propagated in one direction, from the inputs to the outputs.
No feedback connections between layers.
Information flows from the input layer to the output layer without looping back.
Commonly used neural network type for supervised learning problems such as pattern recognition and prediction.

[Figure: a feed-forward network with input layer x1, x2, .., xn, a 1st hidden layer, a 2nd hidden layer, and an output layer]
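To make the one-way flow concrete, here is a minimal Python sketch of a forward pass through a 1st hidden layer, a 2nd hidden layer and an output layer; the layer sizes, logistic activation and random weights are illustrative assumptions rather than anything specified in the slides.

```python
import math
import random

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    # Each neuron in the layer: logistic(weighted sum of all inputs + its bias).
    return [logistic(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
            for neuron_w, b in zip(weights, biases)]

def make_layer(n_inputs, n_neurons):
    # Illustrative random initialisation of one layer's weights and biases.
    weights = [[random.uniform(-1, 1) for _ in range(n_inputs)] for _ in range(n_neurons)]
    biases = [random.uniform(-1, 1) for _ in range(n_neurons)]
    return weights, biases

# A 3-input network: 1st hidden layer (4 neurons), 2nd hidden layer (3 neurons),
# output layer (1 neuron). The signal only ever moves forward, never back.
layers = [make_layer(3, 4), make_layer(4, 3), make_layer(3, 1)]
signal = [0.2, 0.7, 0.1]          # inputs x1, x2, x3
for weights, biases in layers:
    signal = layer_forward(signal, weights, biases)
print(signal)                     # output layer activations
```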
Recurrent Neural Networks

There is at least one feedback loop.

[Figure: a recurrent network with input layer x1, x2, .., xn, a 1st hidden layer, a 2nd hidden layer, an output layer, and feedback connections]
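A minimal sketch of the feedback idea, assuming a single recurrent unit with made-up weights: the hidden value produced at one time step is fed back as an extra input at the next step.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def recurrent_step(x_t, h_prev, w_in, w_rec, bias):
    # The new hidden value depends on the current input AND on the previous
    # hidden value fed back through the recurrent weight w_rec.
    return logistic(w_in * x_t + w_rec * h_prev + bias)

h = 0.0                                   # initial hidden state
for x_t in [1.0, 0.0, 1.0, 1.0]:          # a short input sequence
    h = recurrent_step(x_t, h, w_in=0.8, w_rec=0.5, bias=-0.2)
    print(round(h, 3))
```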
Learning Methods
Supervised Learning:
A teacher is assumed to be present during the learning process.
Each input pattern used to train the network is associated with an output pattern (target pattern).
To determine the error, the network's calculated output is compared with the expected target output.
The error can be used to change the network parameters, which results in an improvement in performance.
Learning Methods
Unsupervised Learning:
No teacher is assumed to be present during the learning process.
The target output is not presented to the network, so the network learns by itself.

Reinforced Learning:
A teacher is available but does not present the expected answer.
It only indicates whether the computed output is correct or incorrect.
Perceptron
The perceptron neuron produces a 1 if the net input into the transfer function is equal to or greater than 0; otherwise it produces a 0.
It is a single-unit network.
Change each weight by an amount proportional to the difference between the desired output and the actual output:

ΔWi = η (D − Y) Ii

where η is the learning rate, D the desired output, Y the actual output, and Ii the i-th input.
Perceptron Learning Rule
Example: a simple single-unit adaptive network.
The network has 2 inputs and one output. All are binary.
The output is:
1 if W0 I0 + W1 I1 + Wb > 0
0 if W0 I0 + W1 I1 + Wb ≤ 0
We want it to learn simple OR: output a 1 if either I0 or I1 is 1.
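A minimal Python sketch of this example: a single perceptron trained on the OR truth table with the update rule ΔWi = η (D − Y) Ii. The learning rate, initial weights and number of epochs are illustrative assumptions.

```python
def perceptron_output(inputs, weights, bias_weight):
    # Output 1 if W0*I0 + W1*I1 + Wb > 0, otherwise 0 (as in the example above).
    net = sum(w * i for w, i in zip(weights, inputs)) + bias_weight
    return 1 if net > 0 else 0

def train_or(learning_rate=0.1, epochs=20):
    # OR truth table: the desired output D is 1 if either input is 1.
    samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    weights, bias_weight = [0.0, 0.0], 0.0        # assumed initial values
    for _ in range(epochs):
        for inputs, desired in samples:
            actual = perceptron_output(inputs, weights, bias_weight)
            error = desired - actual              # (D - Y)
            # Perceptron learning rule: dWi = eta * (D - Y) * Ii
            weights = [w + learning_rate * error * i for w, i in zip(weights, inputs)]
            bias_weight += learning_rate * error  # the bias input is fixed at 1
    return weights, bias_weight

weights, bias_weight = train_or()
for inputs in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(inputs, perceptron_output(inputs, weights, bias_weight))
```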
