Unit 1

Soft computing is an emerging approach that provides approximate solutions to complex real-world problems for which exact solutions are not possible. Its principal techniques are fuzzy logic, artificial neural networks, and genetic algorithms.


What is soft computing

Soft computing is the opposite of hard (conventional) computing. It refers to a group of computational techniques based on artificial intelligence (AI) and natural selection, and it provides cost-effective solutions to complex real-life problems for which no hard computing solution exists.
Zadeh coined the term soft computing in 1992. The objective of soft computing is to provide good approximations and quick solutions to complex real-life problems.

In simple terms, soft computing can be understood as an emerging approach that mimics the remarkable ability of the human mind to reason and learn in an environment of uncertainty and imprecision; the human mind is its role model.

Basically, soft computing differs from traditional/conventional computing in that it deals with approximate models.
Some characteristics of soft computing
 Soft computing provides approximate yet usable solutions to real-life problems.
 It tolerates imprecision and uncertainty.
 Soft computing algorithms are adaptive, so they adjust to changes in the environment without disrupting the current process.
 Soft computing is based on learning from experimental data; it does not require an explicit mathematical model to solve a problem.
 It helps users solve real-world problems by providing approximate results where conventional and analytical models fail.
 It is based on fuzzy logic, genetic algorithms, machine learning, ANNs, and expert systems.
Example

string1 = "xyz" and string2 = "xyw"

Problem 1: Are string1 and string2 the same?
Solution: No. The answer is simply No; it does not require any algorithm to determine this.

Let's modify the problem a bit.
Problem 2: How similar are string1 and string2?
Solution: Through conventional programming, the answer is either Yes or No. But according to soft computing, the strings might be, say, 80% similar, as the sketch below illustrates.
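As a minimal illustration (the 80% figure above is only illustrative), Python's standard difflib module can grade the similarity of two strings as a ratio between 0 and 1 rather than a hard Yes/No:

import difflib

string1 = "xyz"
string2 = "xyw"

# SequenceMatcher computes a similarity ratio in [0, 1] based on
# the longest matching blocks shared by the two strings.
ratio = difflib.SequenceMatcher(None, string1, string2).ratio()
print(f"{string1} and {string2} are {ratio:.0%} similar")  # prints "67% similar"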
Applications of soft computing
There are several applications where soft computing is used. Some of them are listed below:
 It is widely used in game-playing programs, such as poker and checkers.
 In kitchen appliances, such as microwaves and rice cookers.
 In widely used home appliances: washing machines, heaters, refrigerators, and air conditioners.
 Apart from these, it is also used in robotics (for example, emotional pet robots).
 Image processing and data compression are also popular applications of soft computing.
 Biometric applications in image processing.
 Handwriting recognition.
 Stock market prediction in business.
 Computer-aided diagnosis in medicine.
Need for soft computing

 Hard computing is used for solving mathematical problems that need a precise answer, but it fails to provide solutions for some real-life problems. For real-life problems whose precise solution does not exist, soft computing helps.
 When conventional mathematical and analytical models fail, soft computing helps; for example, even the human mind can be modeled with soft computing.
 Analytical models can solve mathematical problems and are valid for ideal cases. But real-world problems do not have an ideal case; they exist in non-ideal environments.
 Soft computing is not limited to theory; it also gives insight into real-life problems.
 For all the above reasons, soft computing helps to model the human mind, which is not possible with conventional mathematical and analytical models.
Elements of soft computing
Soft computing is viewed as a foundation for the emerging field of conceptual intelligence. Fuzzy Logic (FL), Machine Learning (ML), Neural Networks (NN), Probabilistic Reasoning (PR), and Evolutionary Computation (EC) are its principal constituents, and these are the techniques soft computing uses to resolve complex problems.
The following are the three main techniques used by soft computing:
 Fuzzy Logic
 Artificial Neural Networks (ANN)
 Genetic Algorithms

Fuzzy Logic (FL)

Fuzzy logic is a form of mathematical logic that solves problems over an open, imprecise spectrum of data, making it possible to draw definite conclusions from vague or ambiguous input. It is designed to achieve the best possible solution to a complex problem from all the available information and input data, which is why fuzzy systems are regarded as strong solution finders.
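A minimal sketch of the core fuzzy idea: instead of a crisp true/false, an element belongs to a set to a degree between 0 and 1. The triangular membership function and the "warm" temperature set below are illustrative choices, not a standard:

def triangular(x, a, b, c):
    """Degree of membership in a fuzzy set shaped as a triangle
    rising from a to a peak at b and falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Illustrative fuzzy set "warm" over temperatures in degrees Celsius.
for t in (10, 18, 22, 30):
    print(f"{t} deg C is warm to degree {triangular(t, 15, 22, 29):.2f}")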
Neural Networks (ANN)
Neural networks were first developed in the 1950s and help soft computing solve real-world problems that a computer cannot handle by itself. A human brain can easily make sense of real-world conditions, but a conventional computer cannot.
An artificial neural network (ANN) emulates the network of neurons that makes up a human brain, so that a computer or machine can learn things and make decisions in a human-like manner.
An ANN consists of mutually connected units modeled on brain cells and is created using regular computer programming; it is analogous to the human nervous system.

Genetic Algorithms (GA)

Genetic algorithms take their inspiration directly from nature. They are search-based algorithms rooted in natural selection and the concepts of genetics.
In addition, a genetic algorithm is a subset of the larger branch of computation known as evolutionary computation, as the sketch below shows.
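A minimal sketch of a genetic algorithm, assuming the classic OneMax toy problem (evolve a bit string toward all ones); the population size, mutation rate, and tournament selection are illustrative choices:

import random

GENES, POP, GENERATIONS, MUTATION = 20, 30, 40, 0.05

def fitness(ind):
    # OneMax: fitness is simply the number of 1-bits.
    return sum(ind)

def tournament(pop):
    # Pick the fitter of two random individuals (selection pressure).
    a, b = random.sample(pop, 2)
    return max(a, b, key=fitness)

def crossover(p1, p2):
    # Single-point crossover combines two parents into a child.
    cut = random.randrange(1, GENES)
    return p1[:cut] + p2[cut:]

def mutate(ind):
    # Flip each bit with a small probability.
    return [g ^ 1 if random.random() < MUTATION else g for g in ind]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for gen in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP)]

best = max(population, key=fitness)
print("best individual:", best, "fitness:", fitness(best))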
Soft computing vs Hard computing

Parameters       | Soft Computing                                  | Hard Computing
Computation time | Takes less computation time.                    | Takes more computation time.
Dependency       | Depends on approximation and dispositionality.  | Mainly based on binary logic and numerical systems.
Computation type | Parallel computation.                           | Sequential computation.
Result/Output    | Approximate result.                             | Exact and precise result.
Example          | Neural networks such as Madaline, Adaline,      | Any numerical problem or traditional method
                 | and ART networks.                               | of solving using personal computers.
What is Artificial Neural Network?
The term "Artificial Neural Network" is derived from Biological neural networks that develop
the structure of a human brain. Similar to the human brain that has neurons interconnected to
one another, artificial neural networks also have neurons that are interconnected to one
another in various layers of the networks. These neurons are known as nodes.

[Figure: a typical Biological Neural Network.]
[Figure: a typical Artificial Neural Network.]

Biological Neural Network | Artificial Neural Network
Dendrites                 | Inputs
Cell nucleus              | Nodes
Synapse                   | Weights
Axon                      | Output
Features: Artificial Neural Network (ANN) vs Biological Neural Network (BNN)

 Definition: An ANN is a mathematical model inspired by the biological neuron system of the human brain. A BNN is composed of processing pieces, known as neurons, that are linked together via synapses.
 Processing: ANN processing is sequential and centralized. A BNN processes information in a parallel and distributed manner.
 Size: An ANN is small in size; a BNN is large.
 Control mechanism: An ANN has a control unit that keeps track of all computing-related operations. A BNN has no central control; processing is distributed across the network.
 Rate: An ANN processes information at a faster speed; a BNN processes it more slowly.
 Complexity: An ANN cannot perform complex pattern recognition. The large quantity and complexity of the connections allow the brain to perform complicated tasks.
 Feedback: A basic ANN does not provide feedback; a BNN does.
 Fault tolerance: An ANN has no fault tolerance; a BNN is fault-tolerant.
 Operating environment: An ANN's operating environment is well-defined and well-constrained; a BNN's is poorly defined and unconstrained.
 Memory: ANN memory is separate from the processor, localized, and not content-addressable. BNN memory is integrated into the processor, distributed, and content-addressable.
 Reliability: An ANN is very vulnerable; a BNN is robust.
 Learning: An ANN requires accurately structured and formatted data; a BNN is tolerant of ambiguity.
 Response time: An ANN's response time is measured in nanoseconds; a BNN's is measured in milliseconds.
Evolution of Neural Networks

Since the 1940s, there have been a number of noteworthy advancements in the
field of neural networks:

•1940s-1950s: Early Concepts


Neural networks began with the introduction of the first mathematical model of
artificial neurons by McCulloch and Pitts. But computational constraints made
progress difficult.

•1960s-1970s: Perceptrons
This era is defined by Rosenblatt's work on perceptrons. Perceptrons are single-layer networks whose applicability was limited to problems that are linearly separable.

•1980s: Backpropagation and Connectionism


Multi-layer network training was made possible by Rumelhart, Hinton, and
Williams’ invention of the backpropagation method. With its emphasis on learning
through interconnected nodes, connectionism gained appeal.
•1990s: Boom and Winter
With applications in image identification, finance, and other fields, neural
networks saw a boom. Neural network research did, however, experience a
“winter” due to exorbitant computational costs and inflated expectations.

•2000s: Resurgence and Deep Learning


Larger datasets, innovative structures, and enhanced processing capability
spurred a comeback. Deep learning has shown amazing effectiveness in a
number of disciplines by utilizing numerous layers.

•2010s-Present: Deep Learning Dominance


Convolutional neural networks (CNNs) and recurrent neural networks
(RNNs), two deep learning architectures, came to dominate machine
learning. Their power was demonstrated by breakthroughs in game
playing, image recognition, and natural language processing.
• An artificial neural network, in the field of artificial intelligence, attempts to mimic the network of neurons that makes up the human brain, so that computers can understand things and make decisions in a human-like manner.
• The artificial neural network is created by programming computers to behave simply like interconnected brain cells.
• There are roughly 100 billion neurons in the human brain, and each neuron has somewhere between 1,000 and 100,000 connection points.
• In the human brain, data is stored in a distributed manner, and we can extract more than one piece of this data from memory in parallel when necessary. We can say that the human brain is made up of incredibly amazing parallel processors.
• We can understand the artificial neural network by analogy with a digital logic gate that takes inputs and gives an output. Consider an "OR" gate with two inputs: if one or both inputs are "On," the output is "On"; if both inputs are "Off," the output is "Off."
• Here the output depends fixedly on the input. Our brain does not work this way: the relationship between outputs and inputs keeps changing, because the neurons in our brain are "learning," as the perceptron sketch below illustrates.
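A minimal sketch of this "learning" idea, assuming a classic single perceptron trained on the OR truth table (the learning rate and initial weights are illustrative): the input-output mapping is not fixed by the programmer but adjusted from examples.

# Truth table for OR: inputs -> target output.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, one per input
b = 0.0         # bias
lr = 0.1        # learning rate

for _ in range(20):                 # a few passes over the data
    for (x1, x2), target in data:
        # Step activation: fire if the weighted sum plus bias is positive.
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        # Perceptron learning rule: nudge weights toward the correct output.
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

for (x1, x2), _ in data:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)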
The architecture of an artificial neural network:
• A neural network consists of a large number of artificial neurons, termed units, arranged in a sequence of layers.

• Artificial Neural Network primarily consists of three layers:


Input Layer:
As the name suggests, it accepts inputs in several
different formats provided by the programmer.
Hidden Layer:
The hidden layer is present between the input and output
layers. It performs all the calculations needed to find
hidden features and patterns.

Output Layer:
The input goes through a series of transformations using
the hidden layer, which finally results in output that is
conveyed using this layer.
The artificial neural network takes the inputs, computes their weighted sum, and adds a bias. This computation is represented in the form of a transfer function.

The weighted total is then passed as input to an activation function, which produces the output. Activation functions decide whether a node should fire or not; only the nodes that fire contribute to the output layer. There are distinct activation functions available, chosen according to the sort of task we are performing, as sketched below.
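A minimal sketch of one artificial neuron under these definitions (the weights, bias, and step activation below are illustrative choices):

def neuron(inputs, weights, bias):
    # Transfer function: weighted sum of the inputs plus the bias.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Activation function (here a simple step): fire only if positive.
    return 1 if total > 0 else 0

# Example: three inputs with illustrative weights and bias.
print(neuron([0.5, 0.2, 0.9], weights=[0.4, -0.6, 0.3], bias=-0.1))  # prints 1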
Advantages of Artificial Neural Networks (ANN)
Parallel processing capability
Storing data across the entire network
Capability to work with incomplete knowledge
Distributed memory
Fault tolerance
Disadvantages of Artificial Neural Networks:
No assured method for determining the proper network structure
Unexplained (black-box) behavior of the network
Hardware dependence
Difficulty of presenting the problem to the network
Unknown duration of network training
How do artificial neural networks work?
An artificial neural network is best represented as a weighted directed graph in which the artificial neurons form the nodes, and the connections between neuron outputs and neuron inputs are the directed edges with weights.
The artificial neural network receives an input signal from an external source in the form of a pattern or image, represented as a vector. Each of the n inputs is denoted by the notation x(n).
Each input is then multiplied by its corresponding weight (these weights are the details the artificial neural network uses to solve a specific problem). In general terms, the weights represent the strength of the interconnections between neurons inside the network. All the weighted inputs are summed inside the computing unit.
If the weighted sum is zero, a bias is added to make the output non-zero, or to scale up the system's response; the bias has a fixed input of 1 with its own weight. The total of the weighted inputs can range from 0 to positive infinity. To keep the response within the limits of the desired value, a benchmark maximum value is set, and the total of the weighted inputs is passed through an activation function.
The activation function is a transfer function used to achieve the desired output. There are different kinds of activation functions, but they are primarily either linear or non-linear. Some commonly used activation functions are the binary (step), linear, and tan hyperbolic sigmoidal activation functions, sketched below.
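A minimal sketch of the three activation functions named above (the threshold and slope values are illustrative):

import math

def binary_step(x, threshold=0.0):
    # Binary (step) activation: outputs 1 once x crosses the threshold.
    return 1 if x >= threshold else 0

def linear(x, slope=1.0):
    # Linear activation: passes the weighted sum through, scaled.
    return slope * x

def tanh_sigmoid(x):
    # Tan hyperbolic sigmoidal activation: squashes x into (-1, 1).
    return math.tanh(x)

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(x, binary_step(x), linear(x), round(tanh_sigmoid(x), 3))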
The model of an artificial neural network can be specified by three entities:

•Interconnections
•Activation functions
•Learning rules

Interconnections:
• Interconnection can be defined as the way the processing elements (neurons) of an ANN are connected to each other. Hence, the arrangement of these processing elements and the geometry of their interconnections are essential in an ANN.
• Two layers are common to all network architectures: the input layer, which buffers the input signal, and the output layer, which generates the output of the network.
• The third layer is the Hidden layer, in which neurons are neither kept in the input layer
nor in the output layer. These neurons are hidden from the people who are interfacing
with the system and act as a black box to them.
• Adding hidden layers of neurons increases the system's computational and processing power, but it also makes training the system more complex.
There exist five basic types of neuron connection
architecture :

1.Single-layer feed-forward network


2.Multilayer feed-forward network
3.Single node with its own feedback
4.Single-layer recurrent network
5.Multilayer recurrent network
1. Single-layer feed-forward network

• In this type of network, we have only two layers, the input layer and the output layer, but the input layer is not counted because no computation is performed in it.
• The output layer is formed by applying different weights to the input nodes and taking the cumulative effect per node.
• After this, the neurons of the output layer collectively compute the output signals, as in the sketch below.
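A minimal sketch of a single-layer feed-forward pass using NumPy (the sizes, weights, and step activation are illustrative): the input layer only buffers the signal, and all computation happens in the output layer.

import numpy as np

x = np.array([0.5, -1.0, 0.25])        # input layer: buffers the signal only

W = np.array([[0.2, -0.4, 0.1],        # one weight row per output neuron
              [0.7,  0.3, -0.5]])
b = np.array([0.05, -0.1])             # one bias per output neuron

# Output layer: weighted sums plus bias, then a step activation.
y = (W @ x + b > 0).astype(int)
print(y)                               # prints [1 0]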
2. Multilayer feed-forward network
• This network also has a hidden layer that is internal to the network and has no direct contact with the external world.
• The existence of one or more hidden layers makes the network computationally stronger. It is a feed-forward network because information flows from the input, through the intermediate computations, to determine the output Z.
• There are no feedback connections in which outputs of the model are fed back into the model itself. A sketch follows.
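Extending the previous sketch with one hidden layer (the sizes and weights are again illustrative; tanh in the hidden layer is one common choice):

import numpy as np

x = np.array([0.5, -1.0, 0.25])            # input layer

W_hidden = np.array([[0.2, -0.4, 0.1],     # 2 hidden neurons, 3 inputs each
                     [0.7,  0.3, -0.5]])
W_out = np.array([[0.6, -0.8]])            # 1 output neuron, 2 hidden inputs

h = np.tanh(W_hidden @ x)                  # hidden layer: internal to the network
Z = W_out @ h                              # output Z, determined by the
print(Z)                                   # intermediate computations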
3. Single node with its own feedback

When outputs can be directed back as inputs to the same layer or to preceding-layer nodes, the result is a feedback network. Recurrent networks are feedback networks with closed loops. The simplest case is a single recurrent network: one neuron with feedback to itself.
4. Single-layer recurrent network

This is a single-layer network with feedback connections, in which a processing element's output can be directed back to itself, to another processing element, or to both. A recurrent neural network is a class of artificial neural networks in which the connections between nodes form a directed graph along a sequence.
5. Multilayer recurrent network

In this type of network, a processing element's output can be directed to processing elements in the same layer and in preceding layers, forming a multilayer recurrent network. Such networks perform the same task for every element of a sequence, with the output depending on the previous computations. Inputs are not needed at every time step. The main feature of a recurrent neural network is its hidden state, which captures information about a sequence, as the sketch below shows.
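A minimal sketch of the hidden-state idea, assuming a single recurrent unit with illustrative weights: the same update is applied to every element of the sequence, and the hidden state h carries information forward.

import numpy as np

sequence = [0.5, -0.2, 0.9, 0.1]   # inputs, one per time step

w_in, w_rec = 0.8, 0.5             # illustrative input and recurrent weights
h = 0.0                            # hidden state: the network's memory

for x in sequence:
    # The same computation at every step; h feeds back into itself.
    h = np.tanh(w_in * x + w_rec * h)
    print(f"input {x:+.1f} -> hidden state {h:+.3f}")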
