Soft computing is the use of approximate calculations to provide imprecise but usable solutions
to complex computational problems. The approach enables solutions for problems that may be
either unsolvable or just too time-consuming to solve with current hardware.
Zadeh coined the term soft computing in 1992. The objective of soft computing is to provide good approximations and quick solutions for complex real-life problems.
In simple terms, soft computing is an emerging approach inspired by the remarkable ability of the
human mind to reason and learn under uncertainty and imprecision; the human mind is its role
model.
o Soft computing provides approximate but usable solutions for real-life problems.
o The algorithms of soft computing are adaptive, so the current process is not affected by
any kind of change in the environment.
o The concept of soft computing is based on learning from experimental data. It means
that soft computing does not require any mathematical model to solve the problem.
o Soft computing helps users to solve real-world problems by providing approximate
solutions to problems that conventional and analytical models cannot solve.
o It is based on Fuzzy logic, genetic algorithms, machine learning, ANN, and expert
systems.
Example
Soft computing deals with approximation models. The following examples show how it deals
with approximation.
Let's consider a problem that actually does not have any solution via traditional computing, but
soft computing gives the approximate solution.
string1 = "xyz" and string2 = "xyw"
Problem 1: Are string1 and string2 the same?
Solution: No. The answer is simply No; it does not require any algorithm to determine this.
Let's modify the problem a bit.
Problem 2: How similar are string1 and string2?
Solution: Through conventional programming, the answer is either Yes or No. According to soft
computing, however, these strings might be about 80% similar, as the sketch below illustrates.
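The following is a minimal sketch of this idea in Python, using only the standard-library difflib module to quantify how similar the two strings are. Note that the 80% figure above is only illustrative; for these particular three-character strings, difflib's ratio comes out to roughly 67%.

# Crisp versus graded comparison of two strings (Python standard library only).
from difflib import SequenceMatcher

string1 = "xyz"
string2 = "xyw"

# Hard computing style answer: exactly equal or not.
print(string1 == string2)                                   # False

# Soft computing style answer: a similarity score between 0 and 1.
similarity = SequenceMatcher(None, string1, string2).ratio()
print(f"The strings are about {similarity:.0%} similar")    # about 67% here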
As we said earlier, soft computing provides solutions to real-world problems, and this example
illustrates that. Besides these, there are many other applications of soft computing.
Hard computing is the traditional approach used in computing, which requires an accurately
stated analytical model. The outcome of the hard computing approach is a guaranteed,
deterministic, exact result, and it defines definite control actions using a mathematical model or
algorithm. It deals with binary and crisp logic and requires exact input data supplied sequentially.
Hard computing is not well suited to solving real-world problems. Also, soft computing can
evolve its own programs, whereas hard computing requires programs to be written explicitly.
Soft Computing: Soft computing is a computing model evolved to solve non-linear problems that
involve uncertain, imprecise, and approximate solutions. These kinds of problems are regarded as
real-life problems, where human-like intelligence is needed to solve them.
Difference between AI and Soft Computing:
Artificial Intelligence mainly deals with making machines intelligent, whereas soft computing
mainly deals with imprecision and probabilities.
“Basically, soft computing is not a homogeneous body of concepts and techniques. Rather, it is a
partnership of distinct methods that in one way or another conform to its guiding principle. The
dominant aim of soft computing is to exploit the tolerance for imprecision and uncertainty to
achieve tractability, robustness and low solution cost. The principal constituents of soft
computing are fuzzy logic, neurocomputing, and probabilistic reasoning, with the latter
subsuming genetic algorithms, belief networks, chaotic systems, and parts of learning theory. In
the partnership of fuzzy logic, neurocomputing, and probabilistic reasoning, fuzzy logic is
mainly concerned with imprecision and approximate reasoning; neurocomputing with learning
and curve-fitting; and probabilistic reasoning with uncertainty and belief propagation”.
In the soft computing framework, the basic idea developed so far consists in supposing that there
is a set of solving agents, which are basically algorithms for solving combinatorial optimization
problems, and in executing them cooperatively by means of a coordinating agent to solve the
problem in question, taking generality based on minimum knowledge of the problem as a
fundamental premise. Each solving agent acts autonomously and
only communicates with a coordinating agent to send it the solutions as it finds them and to
receive guidelines about how to proceed. The coordinating agent receives the solutions found by
each solving agent for the problem, and following a fuzzy rule base to model its behavior, it
creates the guidelines which it then sends to them, thereby taking total control of the strategy.
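The following is a small, hypothetical Python sketch of this arrangement. The toy objective, the solver strategies, and the coordinator's simple "best solution so far" rule are all invented for illustration; a real system would generate the guidelines from a fuzzy rule base, as described above.

# Hypothetical sketch: solving agents propose solutions, a coordinating agent
# collects them and sends back a guideline about where to keep searching.
import random

class SolvingAgent:
    def __init__(self, name, step):
        self.name, self.step, self.best = name, step, None

    def search(self, guideline):
        # Propose a candidate near the coordinator's guideline (or at random).
        center = guideline if guideline is not None else random.uniform(-10, 10)
        candidate = center + random.uniform(-self.step, self.step)
        cost = (candidate - 3.0) ** 2          # toy objective: minimize (x - 3)^2
        if self.best is None or cost < self.best[1]:
            self.best = (candidate, cost)
        return self.best

class CoordinatingAgent:
    def __init__(self):
        self.guideline = None

    def update(self, solutions):
        # Keep the best solution reported so far and use it as the next guideline.
        best_value, _ = min(solutions, key=lambda s: s[1])
        self.guideline = best_value

coordinator = CoordinatingAgent()
solvers = [SolvingAgent("coarse", 4.0), SolvingAgent("fine", 0.5)]
for _ in range(50):
    coordinator.update([s.search(coordinator.guideline) for s in solvers])
print("Best solution found:", coordinator.guideline)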
7. Image Recognition:
Image recognition is also an important application of machine learning, used for identifying
objects, persons, places, etc. Face detection and automatic friend-tagging suggestions are among
the most famous applications of image recognition, used by Facebook, Instagram, etc. Whenever
we upload photos with our Facebook friends, the platform automatically suggests their names
through image recognition technology.
8. Product Recommendations:
Machine Learning is widely used in business industries for the marketing of various products.
Almost all big and small companies like Amazon, Alibaba, Walmart, Netflix, etc., are using
machine learning techniques for product recommendations to their users. Whenever we search
for a product on their websites, we soon start seeing lots of advertisements for similar products.
This is made possible by machine learning algorithms that learn users' interests and, based on
past data, suggest products to the user.
Automatic Translation:
Automatic language translation is also one of the most significant applications of machine
learning. It is based on sequence-to-sequence algorithms that translate text from one language
into another desired language. Google GNMT (Google Neural Machine Translation) provides
this feature and is based on neural machine translation. Further, you can also translate selected
text in images, as well as complete documents, through Google Lens.
9. Virtual Assistant:
A virtual personal assistant is also one of the most popular applications of machine learning.
First, it records our voice and sends it to a cloud-based server, then decodes it with the help of
machine learning algorithms. All big companies like Amazon, Google, etc., are using these
features for playing music, calling someone, opening an app and searching data on the internet,
etc.
10. Email Spam and Malware Filtering:
Machine learning also helps us to filter the various emails received in our mailbox according to
their category, such as important, normal, and spam. This is made possible by ML algorithms
such as the Multi-Layer Perceptron, Decision Tree, and Naïve Bayes classifiers.
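As a small illustration, the sketch below trains a Naïve Bayes spam filter with scikit-learn (assumed to be installed); the four-message corpus and its labels are invented purely for the example.

# Toy spam filter: bag-of-words features plus a multinomial Naive Bayes model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win a free prize now", "meeting agenda for tomorrow",
          "free lottery winner claim prize", "project status report attached"]
labels = ["spam", "normal", "spam", "normal"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)       # word-count features

model = MultinomialNB()
model.fit(X, labels)

test = vectorizer.transform(["claim your free prize"])
print(model.predict(test))                 # expected: ['spam']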
Commonly used Machine Learning Algorithms
A list of a few commonly used machine learning algorithms follows:
1. Linear Regression
Linear Regression is one of the simplest and most popular machine learning algorithms, and it is
often a data scientist's first choice. It is used for predictive analysis, making predictions for
real-valued variables such as experience, salary, cost, etc.
It is a statistical approach that represents the linear relationship between a dependent variable
and one or more independent variables, hence the name Linear Regression. It shows how the
value of the dependent variable changes with respect to the independent variable, and the fitted
straight line is called the line of regression.
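A minimal sketch with scikit-learn (assumed to be installed) is shown below; the experience and salary figures are made up purely to illustrate fitting a line and reading off its slope.

# Fit a straight line relating years of experience to salary.
import numpy as np
from sklearn.linear_model import LinearRegression

experience = np.array([[1], [2], [3], [4], [5]])   # independent variable
salary = np.array([30, 35, 41, 46, 52])            # dependent variable (in thousands)

model = LinearRegression().fit(experience, salary)
print("slope of the regression line:", model.coef_[0])
print("predicted salary at 6 years:", model.predict([[6]])[0])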
K-Means Clustering
K-Means Clustering is a subset of unsupervised learning techniques. It helps us to solve
clustering problems by grouping an unlabeled dataset into different clusters. Here K defines the
number of pre-defined clusters that need to be created in the process; for example, if K=2 there
will be two clusters, for K=3 there will be three clusters, and so on.
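A minimal sketch with scikit-learn (assumed to be installed): six invented 2-D points are grouped into K=2 clusters.

# Group unlabeled 2-D points into two clusters.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.2],   # first group
                   [8.0, 8.0], [8.5, 9.0], [9.0, 8.2]])  # second group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print("cluster labels:", kmeans.labels_)
print("cluster centers:", kmeans.cluster_centers_)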
Decision Tree
Decision Tree is another machine learning technique that comes under supervised learning.
Similar to KNN, the decision tree helps us to solve both classification and regression problems,
but it is mostly preferred for classification. It is called a decision tree because it consists of a
tree-structured classifier in which attributes are represented by internal nodes, decision rules are
represented by branches, and the outcome of the model is represented by each leaf of the tree.
The tree starts from the decision node, also known as the root node, and ends with the leaf nodes.
Decision nodes help us to make any decision, whereas leaves are used to determine the output of
those decisions.
A Decision Tree is a graphical representation for getting all the possible outcomes to a problem
or decision depending on certain given conditions.
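A minimal sketch with scikit-learn (assumed to be installed), trained on the built-in iris dataset; the printed rules show the internal (decision) nodes, branches, and leaves described above.

# Train a shallow decision tree and print its structure.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

print(export_text(tree, feature_names=list(iris.feature_names)))
print("prediction for the first sample:", tree.predict(iris.data[:1]))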
Random Forest
Random Forest is also one of the most preferred machine learning algorithms that come under
the supervised learning technique. Like KNN and Decision Tree, it allows us to solve both
classification and regression problems, but it is preferred whenever we need to solve a complex
problem and improve the performance of the model.
A random forest algorithm is based on the concept of ensemble learning, which is a process of
combining multiple classifiers.
A random forest classifier is built from a number of decision trees trained on various subsets of
the given dataset. It combines the predictions of all the trees (for example, by averaging or
majority vote), which improves the accuracy of the model. A greater number of trees in the forest
leads to higher accuracy and helps prevent overfitting. Further, it often takes less training time
than comparable algorithms.
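A minimal sketch with scikit-learn (assumed to be installed): an ensemble of 100 decision trees, each trained on a bootstrap subset of the data, whose combined votes give the final prediction.

# Random forest on the built-in iris dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)  # 100 trees
forest.fit(X_train, y_train)
print("test accuracy:", forest.score(X_test, y_test))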
Support Vector Machines (SVM)
It is also one of the most popular machine learning algorithms that come as a subset of the
Supervised Learning technique in machine learning. The goal of the support vector machine
algorithm is to create the best line or decision boundary that can segregate n-dimensional space
into classes so that we can easily put the new data point in the correct category in the future. This
best decision boundary is called a hyperplane. It is also used to solve classification as well as
regression problems. It is used for Face detection, image classification, text categorization, etc.
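A minimal sketch with scikit-learn (assumed to be installed): a linear-kernel SVC learns the separating hyperplane between two of the iris classes.

# Learn a separating hyperplane between two classes.
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
mask = y < 2                              # keep two classes for a simple example

clf = SVC(kernel="linear").fit(X[mask], y[mask])
print("hyperplane coefficients:", clf.coef_)   # normal vector of the boundary
print("prediction:", clf.predict(X[mask][:1]))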
Naïve Bayes
The naïve Bayes algorithm is one of the simplest and most effective machine learning algorithms
that come under the supervised learning technique. It is based on the concept of the Bayes
Theorem, used to solve classification-related problems. It helps to build fast machine learning
models that can make quick predictions with high accuracy. It is mostly preferred for text
classification problems with high-dimensional training datasets.
It is a probabilistic classifier, which means it predicts on the basis of the probability of an object.
Spam filtering, sentiment analysis, and classifying articles are some important applications of the
Naïve Bayes algorithm.
As noted, Naïve Bayes is based on Bayes' Theorem, which is also known as Bayes' Rule or Bayes'
Law. Mathematically, Bayes' Theorem can be expressed as follows (a small numeric example is
given after the definitions below):
P(A|B) = [P(B|A) × P(A)] / P(B)
Where,
o P(A) is Prior Probability
o P(B) is Marginal Probability
o P(A|B) is Posterior probability
o P(B|A) is Likelihood probability
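A small worked example of the formula above, with invented numbers for a spam-style scenario (the 0.01, 0.90, and 0.05 values are hypothetical):

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B), with made-up probabilities.
p_a = 0.01              # prior probability P(A), e.g. "the email is spam"
p_b_given_a = 0.90      # likelihood P(B|A), e.g. "contains 'prize' given spam"
p_b_given_not_a = 0.05  # chance of seeing 'prize' in a normal email

# Marginal probability P(B), via the law of total probability.
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

posterior = p_b_given_a * p_a / p_b      # posterior probability P(A|B)
print(f"P(A|B) = {posterior:.3f}")       # about 0.154 with these numbers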