SC
Ans:
In soft computing, optimization and efficiency should go hand in hand because of the
complex and often computationally demanding nature of the techniques used. Soft
computing methods like genetic algorithms, neural networks, and fuzzy logic are designed to
solve problems with uncertainty, imprecision, and approximation. To achieve practical
solutions, it's crucial to balance the quality of the result (optimization) with the resources
used to find that result (efficiency). Here's why they need to go together:
1. Optimization Ensures High-Quality Solutions
Optimization in soft computing involves finding the best possible solution (e.g.,
maximizing performance, minimizing errors) from a large or complex search space.
Soft computing algorithms often operate in dynamic, uncertain environments, where
the goal is to get close to the optimal solution rather than finding a mathematically
exact one.
However, finding the best solution can be computationally expensive due to the size of the
search space or the iterative nature of algorithms. This leads to the need for efficiency.
2. Efficiency Minimizes Computational Resources
Efficiency ensures that the search for the optimal solution is performed with minimal
use of time, memory, and computational power.
Many soft computing techniques involve multiple iterations or evaluations to
converge on a solution (e.g., training neural networks or evolving solutions in genetic
algorithms). If these processes are inefficient, they can take too long or consume too
many resources, making the system impractical for real-world applications.
3. Handling Large and Complex Problems
Soft computing techniques often need to handle large datasets or highly complex problems.
Optimization alone can be computationally intense in such cases. Efficiency becomes key to
keeping the process manageable, especially when:
Exploring large search spaces: Optimization needs efficient exploration strategies to
avoid getting stuck in local optima while keeping resource use in check.
Reducing computational cost: Soft computing models like deep neural networks can
be very resource-intensive. Without efficient training processes, optimization may
become impractical due to high computational costs.
4. Real-Time and Adaptive Systems
In real-time applications, such as robotics, financial trading, or control systems, the solution
needs to be not only optimal but also fast and adaptive. Optimization must happen quickly
for the system to adapt to changing conditions, and this is where efficiency becomes critical.
For example, in a real-time control system, an optimized but slow solution could cause
delays or failures. Efficient optimization ensures that the system responds fast enough while
still maintaining good performance.
5. Scalability
As problems or data sizes grow, the computational cost of finding optimal solutions also
increases. Optimization without efficiency might work for small problems but will likely
break down or become too slow for larger, more complex systems. Efficiency is necessary to
ensure that the system can scale while still finding optimal or near-optimal solutions.
For example, genetic algorithms can optimize complex problems by evolving populations of
solutions, but they need to be efficient to handle large populations or complex fitness
landscapes without becoming too slow.
6. Trade-offs Between Accuracy and Resource Use
Soft computing often involves trade-offs between accuracy (finding the optimal solution)
and resource use (time, memory, processing power). In many real-world problems, it may
not be feasible to achieve perfect optimization, especially if it requires too many resources.
Efficiency helps to achieve a balance where the solution is "good enough" (near-optimal)
while using a reasonable amount of resources.
Conclusion
In soft computing, optimization and efficiency must go hand in hand because:
Optimization ensures that you find the best or most effective solution for complex,
uncertain problems.
Efficiency ensures that this solution is found within a reasonable amount of time and
with minimal use of resources.
Without efficiency, optimization can become impractical due to high computational costs.
Conversely, without optimization, an efficient solution may not be useful if it isn’t effective
or accurate enough for the problem at hand. Both are essential to creating robust, scalable,
and practical soft computing systems.
Different GA Strategies:
Simple Genetic Algorithm (SGA)
Steady State Genetic Algorithm (SSGA)
Messy Genetic Algorithm (MGA)
Steady-State Genetic Algorithm (SSGA):
SSGA stands for Steady-State Genetic Algorithm. It is called steady-state because there are no
distinct generations. It differs from the Simple Genetic Algorithm in that tournament selection does
not remove the selected individuals from the population, and instead of adding the children of the
selected parents into the next generation, the two best individuals out of the two parents and two
children are added back into the population, so that the population size remains constant.
Pseudocode:
1. Generate an initial population of n individuals.
2. Evaluate the fitness of each individual.
3. Select two parents using tournament selection.
4. Perform crossover on the parents to produce two offspring.
5. Perform mutation on the offspring.
6. Evaluate the fitness of the offspring.
7. If offspring are better than the worst solutions then replace the worst individuals with the
offspring such that population size remains constant.
8. Update the best solution found so far.
9. If convergence criteria are met, then terminate the program; else continue with step 3.
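The steady-state loop above can be sketched in Python. This is a minimal illustration, not the full algorithm from the notes: the OneMax fitness function, tournament size k=2, and 5% bit-flip mutation rate are assumptions, and step 7's replace-the-worst rule is used rather than the best-two-of-four variant.

```python
import random

def steady_state_ga(fitness, pop, steps=300, k=2, p_mut=0.05):
    """Minimal steady-state GA: no generations; each step breeds two
    offspring and replaces the worst individuals only if the offspring
    are better, so the population size stays constant (step 7)."""
    for _ in range(steps):
        # Step 3: tournament selection of two parents (parents stay in the population).
        parents = [max(random.sample(pop, k), key=fitness) for _ in range(2)]
        # Step 4: single-point crossover.
        cut = random.randint(1, len(parents[0]) - 1)
        kids = [parents[0][:cut] + parents[1][cut:],
                parents[1][:cut] + parents[0][cut:]]
        # Step 5: bit-flip mutation.
        kids = [[1 - g if random.random() < p_mut else g for g in kid] for kid in kids]
        # Steps 6-7: evaluate offspring and replace the worst if improved.
        for kid in kids:
            worst = min(pop, key=fitness)
            if fitness(kid) > fitness(worst):
                pop[pop.index(worst)] = kid
    return max(pop, key=fitness)

random.seed(0)
ones = lambda c: sum(c)  # OneMax toy fitness: count the 1 bits
population = [[random.randint(0, 1) for _ in range(12)] for _ in range(10)]
best = steady_state_ga(ones, population)
print(ones(best))
```

Because replacement only ever swaps out the current worst individual, the population size never changes, which is the defining property of the steady-state strategy.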
Steady-State Genetic Algorithm Block Diagram
Features:
Applications:
To optimize a wide range of fitness functions that are not solvable using conventional hard
computing-based algorithms.
1. Binary Encoding:
In binary encoding, solutions are represented as binary strings of 0s and
1s. This is the most common form of encoding used in GAs.
Each gene in the binary string represents a specific aspect of the
solution, and the length of the string corresponds to the number of
variables or features in the problem.
In this, each bit represents a particular feature or decision variable in the
problem. Binary encoding is simple and works well for a wide range of
problems, especially those involving combinatorial optimization.
Applications: Binary encoding is often used for problems like the knapsack
problem, job scheduling, or feature selection.
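As a sketch of binary encoding in practice, the fitness function below decodes a bit string for a 0/1 knapsack problem. The item values, weights, and capacity are made-up numbers for illustration.

```python
# Hypothetical knapsack instance: values, weights, and capacity are made up.
values = [60, 100, 120, 30]
weights = [10, 20, 30, 15]
CAPACITY = 50

def fitness(chromosome):
    """Each bit decides whether the corresponding item is packed.
    Overweight solutions get fitness 0 (a simple penalty scheme)."""
    w = sum(wi for wi, g in zip(weights, chromosome) if g)
    v = sum(vi for vi, g in zip(values, chromosome) if g)
    return v if w <= CAPACITY else 0

print(fitness([1, 1, 0, 1]))  # items 0, 1, 3: weight 45 <= 50, value 190
print(fitness([1, 1, 1, 1]))  # weight 75 > 50, penalized to 0
```

Here each gene (bit) maps directly to one decision variable, which is why binary encoding suits combinatorial problems like this one.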
2. Real-Value (Floating-Point) Encoding:
Instead of using binary digits, real-value encoding uses real numbers
(floating-point numbers) to represent the solution. Each gene in the
chromosome corresponds to a real-valued variable in the problem.
Example:
Chromosome 1: [2.4, -1.6, 3.5, 0.8]
Chromosome 2: [1.1, 0.5, -3.2, 2.0]
This encoding is useful for optimization problems with continuous variables,
such as function optimization or parameter tuning, where decision variables
are real numbers.
Applications: Problems involving continuous optimization, such as machine
learning hyperparameter tuning, and engineering design problems.
3. Permutation Encoding:
In permutation encoding, the chromosome represents a permutation of
a sequence of numbers. This is particularly useful for ordering problems
where the order or sequence of items matters (e.g., scheduling, traveling
salesman problem).
Example:
Chromosome 1: [3, 1, 4, 2, 5]
Chromosome 2: [1, 2, 5, 4, 3]
In the above example, each number represents a unique entity, and the
sequence represents the order in which these entities are arranged.
Applications: Problems like the Traveling Salesman Problem (TSP), job
scheduling, and vehicle routing, where finding the best sequence or
permutation is critical.
4. Tree Encoding:
Tree encoding is used in Genetic Programming, where solutions are
represented as tree structures rather than linear sequences. Each node
in the tree represents an operator or function, and the leaves represent
operands or variables.
Example (for a mathematical expression):
      +
     / \
    *   5
   / \
  3   4
This tree encodes the expression (3 * 4) + 5. Tree encoding is used when the
solution involves expressions, decision trees, or structures like circuits.
Applications: Genetic Programming, symbolic regression, evolving decision
trees, or designing algorithms.
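A tree chromosome like the one above can be represented and evaluated recursively. The nested-tuple representation below is an illustrative choice, not a standard; it mirrors the tree in the notes, which encodes (3 * 4) + 5.

```python
import operator

# Internal nodes are (operator, left, right) tuples; leaves are numbers.
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate(node):
    """Recursively evaluate a tree-encoded expression."""
    if isinstance(node, (int, float)):  # leaf: operand
        return node
    op, left, right = node              # internal node: operator
    return OPS[op](evaluate(left), evaluate(right))

tree = ('+', ('*', 3, 4), 5)            # the tree from the notes: (3 * 4) + 5
print(evaluate(tree))                   # 17
```

Genetic Programming evolves such trees by swapping subtrees (crossover) and replacing nodes (mutation), so the recursive structure is what gets manipulated, not a flat string.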
SELECTION :
In a Genetic Algorithm (GA), the selection operator is responsible for choosing
which individuals (solutions or chromosomes) from the current population will
pass their genetic material (genes) to the next generation. The idea behind the
selection operator is to prefer individuals with better fitness values, while
maintaining some diversity by giving weaker individuals a chance to be
selected. The goal is to improve the overall fitness of the population over
successive generations.
The selection operator directly influences the balance between exploration
(searching new parts of the solution space) and exploitation (refining existing
good solutions). Different selection methods are designed to achieve this
balance in various ways.
Types of Selection Operators:
1. Roulette Wheel Selection (Fitness-Proportionate Selection)
o In roulette wheel selection, each individual is assigned a slice of a conceptual
wheel proportional to its fitness: the better an individual's fitness, the larger its
slice and the higher its probability of being selected. The wheel is spun once per
selection, like a roulette wheel, to pick an individual.
Example:
Individual 1: Fitness = 5
Individual 2: Fitness = 3
Individual 3: Fitness = 2
In this example, Individual 1 has a 50% chance of being selected (5/10),
Individual 2 has a 30% chance (3/10), and Individual 3 has a 20% chance (2/10).
Advantages: Simple to implement, allows individuals with higher fitness to
have a greater chance of reproduction.
Disadvantages: If one individual has a much higher fitness than others, it can
dominate the selection process, leading to premature convergence (low
diversity).
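The spin can be implemented with a cumulative running sum. The sketch below reuses the three-individual example above and checks the 50/30/20 proportions empirically over many spins.

```python
import random

def roulette_select(population, fitnesses):
    """One spin of the wheel: individual i wins with probability f_i / sum(f)."""
    total = sum(fitnesses)
    spin = random.uniform(0, total)
    running = 0.0
    for individual, f in zip(population, fitnesses):
        running += f
        if spin <= running:
            return individual
    return population[-1]  # guard against floating-point rounding

random.seed(1)
pop, fit = ['ind1', 'ind2', 'ind3'], [5, 3, 2]
picks = [roulette_select(pop, fit) for _ in range(10000)]
print(picks.count('ind1') / 10000)  # close to the expected 0.5
```

Over 10,000 spins the observed selection frequencies approach the 5/10, 3/10, 2/10 probabilities from the worked example.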
2. Stochastic Universal Sampling:
This is an extension of roulette wheel selection, where multiple equally
spaced pointers are used to select individuals. This method ensures a
more even selection of individuals, reducing the risk of selecting only the
fittest.
Example:
o Imagine a roulette wheel with multiple selection points instead of
a single spin. Each pointer is spaced evenly on the wheel, allowing
for a more consistent selection of individuals.
Advantages:
Provides a more uniform selection method than simple roulette wheel
selection, reducing selection pressure.
Disadvantages:
Slightly more complex to implement than traditional methods.
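A sketch of stochastic universal sampling: one random start position generates n equally spaced pointers, and each pointer picks the individual whose wheel slice contains it. The fitness values reuse the 5/3/2 example from roulette wheel selection.

```python
import random

def sus_select(population, fitnesses, n):
    """Stochastic universal sampling: one spin, n equally spaced pointers."""
    total = sum(fitnesses)
    step = total / n
    start = random.uniform(0, step)
    chosen, running, i = [], 0.0, 0
    for ptr in (start + j * step for j in range(n)):
        # Advance to the individual whose slice contains this pointer.
        while running + fitnesses[i] < ptr:
            running += fitnesses[i]
            i += 1
        chosen.append(population[i])
    return chosen

random.seed(0)
selected = sus_select(['A', 'B', 'C'], [5, 3, 2], 5)
print(selected)
```

Because the pointers are evenly spaced, the number of copies each individual receives can deviate from its expectation by at most one, which is why SUS reduces the selection noise of repeated roulette spins.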
3. Rank-Based Selection
In rank-based selection, the population is ranked based on their fitness
values. The selection probability is based on this ranking, rather than the
raw fitness values. The individual with the highest rank has the highest
chance of being selected, but the difference in selection probability is
less dramatic than in fitness-proportionate selection.
Steps:
1. Sort the population based on fitness.
2. Assign each individual a rank.
3. Assign selection probabilities based on the rank (not the raw
fitness values).
Example:
Rank 1: Individual with Fitness = 9
Rank 2: Individual with Fitness = 7
Rank 3: Individual with Fitness = 5
Rank 4: Individual with Fitness = 3
The individual ranked 1 has the highest chance of being selected, but rank-
based selection avoids over-favoring exceptionally fit individuals.
Advantages: Avoids the issue of overly dominant individuals with very high
fitness, making the selection process more balanced.
Disadvantages: May be slower to converge to the best solution compared to
fitness-proportionate methods.
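A sketch of linear rank selection using the four-individual example above. Assigning the worst individual rank 1 and the best rank N is one common convention; the spin itself works like a roulette wheel over ranks instead of raw fitness.

```python
import random

def rank_select(population, fitnesses):
    """Linear rank selection: worst individual gets rank 1, best gets rank N;
    selection probability is rank / sum(ranks)."""
    order = sorted(range(len(population)), key=lambda i: fitnesses[i])
    rank = {idx: r + 1 for r, idx in enumerate(order)}
    spin = random.uniform(0, sum(rank.values()))
    running = 0.0
    for i, individual in enumerate(population):
        running += rank[i]
        if spin <= running:
            return individual
    return population[-1]

random.seed(2)
pop, fit = ['r1', 'r2', 'r3', 'r4'], [9, 7, 5, 3]
picks = [rank_select(pop, fit) for _ in range(10000)]
print(picks.count('r1') / 10000)  # close to 4/10 = 0.4
```

With ranks 4, 3, 2, 1 the probabilities are 40%, 30%, 20%, 10%: the fitness-9 individual is still favored, but far less steeply than its raw fitness (9 out of 24) would dictate.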
4. Tournament Selection
In tournament selection, a group of individuals is randomly selected
from the population (usually 2 or more). The individual with the highest
fitness in the group is selected as a parent.
Steps:
1. Select k individuals from the population randomly (k is the
tournament size, typically 2 or 3).
2. Compare their fitness values.
3. Select the individual with the highest fitness to pass to the next
generation.
Example:
Tournament Size = 3
Suppose you have 5 individuals: A, B, C, D, and E. You randomly select 3
individuals (e.g., B, C, and D). If C has the highest fitness among them, C
is selected for reproduction.
Advantages: Simple and effective, allows control over selection pressure by
adjusting the tournament size. If the tournament size is small, the selection
process is more random, preserving diversity. If the tournament size is large,
selection becomes more deterministic, favoring fitter individuals.
Disadvantages: If the tournament size is too small, weaker individuals might
get selected too often, slowing down the convergence.
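Tournament selection is short enough to fit in one line of logic. The population of random bit strings below is an illustrative assumption, with the bit sum used as the fitness.

```python
import random

def tournament_select(population, fitness, k=3):
    """Draw k random competitors and return the fittest one."""
    return max(random.sample(population, k), key=fitness)

random.seed(3)
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(5)]
winner = tournament_select(pop, fitness=sum, k=3)
print(winner, sum(winner))
```

Note how the tournament size k tunes selection pressure: with k equal to the population size the global best always wins, while k=2 keeps selection nearly random.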
CROSSOVER: Crossover is one of the key genetic operators in Genetic
Algorithms (GA) that combines the genetic information of two parent solutions
to produce new offspring. This process mimics biological reproduction, allowing
for the exchange of genetic material, which can lead to the exploration of new
areas in the solution space. The main goal of crossover is to create better
solutions by combining desirable traits from two parent solutions.
1. Single-Point Crossover:
In single-point crossover, a single crossover point is randomly selected on
the parent chromosomes. The genes before this point are taken from
one parent, and the genes after the point are taken from the other
parent.
Applications: Works well with binary encoding and is one of the simplest
crossover methods.
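A sketch of single-point crossover on binary lists; the parent strings are made-up examples.

```python
import random

def single_point_crossover(p1, p2):
    """Cut both parents at one random point and exchange the tails."""
    cut = random.randint(1, len(p1) - 1)  # never cut at the very ends
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

random.seed(4)
c1, c2 = single_point_crossover([1, 0, 1, 0, 1, 0], [1, 1, 0, 1, 0, 0])
print(c1, c2)
```

Whatever the cut point, the two children together contain exactly the genes of the two parents at every position, so no genetic material is created or lost by the operator itself.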
2. Two-Point Crossover:
In two-point crossover, two crossover points are randomly selected,
and the segment between these points is swapped between the
parents.
Applications: Suitable for both binary and real-value encodings, as well
as for permutation encoding.
3. Multipoint Crossover:
Generalizes single- and two-point crossover: k crossover points are chosen,
and the segments between alternate points are exchanged between the parents.
4. Uniform Crossover:
In uniform crossover, each gene is selected randomly from either
parent with equal probability. This results in offspring that inherit a mix
of genes from both parents.
Example:
o Parent 1: 101010
o Parent 2: 110100
o Offspring: 111010 (where genes are chosen randomly from both
parents)
Applications: Effective for binary and real-value encodings, allowing for
more significant mixing of genetic material.
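A sketch of uniform crossover producing one child; the parent strings reuse the example above, though the child will vary with the random coin flips.

```python
import random

def uniform_crossover(p1, p2):
    """Each child gene is copied from either parent with probability 0.5."""
    return [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]

random.seed(5)
child = uniform_crossover([1, 0, 1, 0, 1, 0], [1, 1, 0, 1, 0, 0])
print(child)
```

Unlike point-based crossover, every position is an independent coin flip, which is what gives uniform crossover its stronger mixing of genetic material.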
5. Half-Uniform Crossover (HUX):
Exactly half of the bits that differ between the two parents are swapped, chosen at random.
6. Uniform Crossover with Crossover Mask:
A randomly generated binary mask decides, position by position, which parent contributes
each gene (e.g., 1 = take from Parent 1, 0 = take from Parent 2).
7. Shuffle Crossover:
The genes of both parents are shuffled with the same random permutation, single-point
crossover is applied, and the shuffle is then undone on the offspring.
8. Three-Parent Crossover:
Each bit of the first two parents is compared; if they agree, that bit goes to the offspring,
otherwise the corresponding bit of the third parent is used.
9. Matrix Crossover:
The chromosomes are arranged as two-dimensional matrices, and sub-matrices (blocks of
rows and columns) are exchanged between the parents.
10. Single Arithmetic Crossover (for Real-Value Encoding):
In arithmetic crossover, offspring are created by taking a weighted average
of the parents' gene values. This is suitable for real-value encoding, where
the genes are continuous variables.
Example:
o Parent 1: 2.0
o Parent 2: 4.0
o Offspring: 0.5 * Parent 1 + 0.5 * Parent 2 = 3.0
Applications: Commonly used in continuous optimization problems.
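A sketch of arithmetic crossover applied to every gene of the chromosome (the single-gene variant would apply the same weighted average at one randomly chosen position only). The alpha weight of 0.5 reproduces the worked example above.

```python
def arithmetic_crossover(p1, p2, alpha=0.5):
    """Weighted average of two real-valued parents: alpha*p1 + (1-alpha)*p2."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(p1, p2)]

print(arithmetic_crossover([2.0], [4.0]))  # [3.0], matching the example above
print(arithmetic_crossover([2.4, -1.6, 3.5, 0.8], [1.1, 0.5, -3.2, 2.0]))
```

Because the child is a convex combination of the parents, every child gene lies between the two parent values, which keeps offspring inside the region the parents span.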
11. Order Crossover (OX) (for Permutation Encoding):
Order crossover is specifically designed for permutation problems, such
as the Traveling Salesman Problem (TSP). It preserves the relative order
of genes from the parents.
Example:
o Parent 1: [3, 1, 4, 2, 5]
o Parent 2: [1, 5, 2, 4, 3]
o Segment kept from Parent 1 (between the two crossover points): [1, 4]
o The remaining positions are filled with Parent 2's genes in their original
order, skipping 1 and 4: offspring = [5, 1, 4, 2, 3]
Applications: Primarily used in combinatorial optimization problems
involving permutations.
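One common OX variant can be sketched as follows. Here the crossover points are fixed by hand for the illustration, and the unused positions are filled left to right; some texts instead start filling after the second cut point.

```python
def order_crossover(p1, p2, a, b):
    """Order crossover (OX): keep p1[a:b] in place, then fill the empty
    positions with p2's genes in p2's order, skipping genes already used."""
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = iter(g for g in p2 if g not in p1[a:b])
    return [next(fill) if gene is None else gene for gene in child]

print(order_crossover([3, 1, 4, 2, 5], [1, 5, 2, 4, 3], 1, 3))  # [5, 1, 4, 2, 3]
```

The child is always a valid permutation: the kept segment contributes each of its genes once, and the fill step skips exactly those genes, so no city or job is duplicated or dropped.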
12. Partially Mapped Crossover (PMX) (for Permutation Encoding):
PMX is another method designed for permutation problems that helps
preserve the unique properties of genes.
Example:
o Parent 1: [1, 2, 3, 4, 5]
o Parent 2: [4, 5, 1, 2, 3]
o After mapping segments, the offspring might look like [4, 5, 3, 2,
1] where the mappings are maintained.
Applications: Useful for problems like TSP and job scheduling.
https://www.youtube.com/watch?v=BTlB6ioeMSU&list=PL4gu8xQu0_5J3xTQDTZM_A17hTid4ahJ1&index=5
https://www.youtube.com/watch?v=kifA8gq0OGU&list=PL4gu8xQu0_5J3xTQDTZM_A17hTid4ahJ1&index=6
MUTATION:
Mutation is a genetic operator used in Genetic Algorithms (GA) to introduce
variability into the population of solutions. It serves to maintain diversity within
the population and prevent premature convergence, ensuring that the
algorithm continues to explore a broad range of potential solutions. Mutation
involves making small, random changes to an individual’s genetic makeup,
which can help the algorithm escape local optima and discover better
solutions.
Key Objectives of Mutation
1. Introduce Variation
2. Prevent Premature Convergence
3. Explore New Solutions
Common Mutation Methods
The choice of mutation operator depends on the encoding scheme used in the
Genetic Algorithm. Here are some common mutation methods:
1. Bit Flip Mutation (for Binary Encoding):
o In bit flip mutation, a randomly chosen bit in a binary string is
inverted (0 becomes 1, and 1 becomes 0).
o Example:
Original Chromosome: 101011
Mutated Chromosome: 101111 (where the third bit was
flipped)
o Applications: This method is commonly used in binary encoding
and is simple to implement.
2. Swap Mutation (for Permutation Encoding):
o In swap mutation, two randomly selected genes in a permutation
are swapped. This maintains the structure of the permutation
while introducing variation.
o Example:
Original Chromosome: [3, 1, 4, 2, 5]
Mutated Chromosome: [3, 4, 1, 2, 5] (where genes at
positions 1 and 2 were swapped)
o Applications: This method is often used in problems like the
Traveling Salesman Problem (TSP) and other permutation-based
optimization problems.
3. Scramble Mutation (for Permutation Encoding):
o In scramble mutation, a segment of the chromosome is randomly
selected, and the genes within that segment are shuffled to create
a new order.
o Example:
Original Chromosome: [3, 1, 4, 2, 5]
Mutated Chromosome: [3, 2, 4, 1, 5] (where the segment
containing 1, 4, and 2 was scrambled)
o Applications: Effective for permutation problems, maintaining the
diversity of solutions.
4. Gaussian Mutation (for Real-Value Encoding):
o In Gaussian mutation, a small random value drawn from a
Gaussian distribution (normal distribution) is added to a gene's
value, altering it slightly. This is particularly effective for real-
valued solutions.
o Example:
Original Chromosome: [2.5, 3.0, 4.1]
Mutated Chromosome: [2.7, 3.0, 3.9] (where small Gaussian
noise was added)
o Applications: Suitable for continuous optimization problems, such
as function optimization.
5. Uniform Mutation (for Real-Value Encoding):
o In uniform mutation, a gene is randomly replaced with a new
value chosen from a predefined range. This can introduce
significant changes and maintain diversity.
o Example:
Original Chromosome: [2.5, 3.0, 4.1]
Mutated Chromosome: [2.5, 5.0, 4.1] (where the second
gene was replaced with a new value)
o Applications: Useful for problems where genes can take on a range
of values.
6. Swap and Inversion Mutation (for Permutation Encoding):
o This mutation combines both swap and inversion methods. A pair
of genes is swapped, and then the order of genes between them is
inverted.
o Example:
Original Chromosome: [3, 1, 4, 2, 5]
Mutated Chromosome: [3, 2, 4, 1, 5] (genes at positions 1
and 3 were swapped, and the segment between them was
inverted)
o Applications: Useful in combinatorial optimization problems.
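Three of the mutation methods above can be sketched compactly. The mutation positions are passed explicitly here so the results are easy to check; a real GA would pick them at random.

```python
import random

def bit_flip(chrom, i):
    """Bit-flip mutation: invert bit i of a binary chromosome."""
    out = chrom[:]
    out[i] = 1 - out[i]
    return out

def swap(chrom, i, j):
    """Swap mutation: exchange the genes at positions i and j."""
    out = chrom[:]
    out[i], out[j] = out[j], out[i]
    return out

def gaussian(chrom, sigma=0.1):
    """Gaussian mutation: add small normal noise to each real-valued gene."""
    return [g + random.gauss(0, sigma) for g in chrom]

print(bit_flip([1, 0, 1, 0, 1, 1], 2))  # [1, 0, 0, 0, 1, 1]
print(swap([3, 1, 4, 2, 5], 1, 2))      # [3, 4, 1, 2, 5], as in the notes
random.seed(6)
print(gaussian([2.5, 3.0, 4.1]))
```

Note that each operator matches its encoding: bit flip preserves the binary alphabet, swap preserves the permutation property, and Gaussian mutation keeps real-valued genes close to their original values.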
https://www.youtube.com/watch?v=51lZ5jI0JbA&list=PL4gu8xQu0_5J3xTQDTZM_A17hTid4ahJ1&index=7
GA SOLVED EXAMPLE :
https://www.youtube.com/watch?v=udN28wPqaZI&list=PL4gu8xQu0_5J3xTQDTZM_A17hTid4ahJ1&index=8
https://www.youtube.com/watch?v=Dj1AZ0T-m-I&list=PL4gu8xQu0_5J3xTQDTZM_A17hTid4ahJ1&index=9
https://www.youtube.com/watch?v=Nvu7Klh_knM&list=PL4gu8xQu0_5J3xTQDTZM_A17hTid4ahJ1&index=10
ALSO REFER THESE FOR GA OPERATORS:
1. "C:\Users\hp-pc\Downloads\GA Module VI.pdf"
2. https://www.slideshare.net/slideshow/fundamentals-of-genetic-algorithms-soft-computing/267120602 == BOOKMARKED
3. PPTS PROVIDED BY SIR
UNIT 3
1. Perceptron:
+ Also pdf by tai
There are several different architectures for ANNs, each with their own
strengths and weaknesses. Some of the most common architectures include:
Feedforward Neural Networks: This is the simplest type of ANN architecture,
where the information flows in one direction from input to output. The layers
are fully connected, meaning each neuron in a layer is connected to all the
neurons in the next layer.
Recurrent Neural Networks (RNNs): These networks have a “memory”
component, where information can flow in cycles through the network. This
allows the network to process sequences of data, such as time series or
speech.
Convolutional Neural Networks (CNNs): These networks are designed to
process data with a grid-like topology, such as images. The layers consist of
convolutional layers, which learn to detect specific features in the data, and
pooling layers, which reduce the spatial dimensions of the data.
Autoencoders: These are neural networks that are used for unsupervised
learning. They consist of an encoder that maps the input data to a lower-
dimensional representation and a decoder that maps the representation back
to the original data.
Generative Adversarial Networks (GANs): These are neural networks that are
used for generative modeling. They consist of two parts: a generator that
learns to generate new data samples, and a discriminator that learns to
distinguish between real and generated data.
Perceptron: A single-layer neural network that works as an artificial neuron to
perform computations.
+ my notes book
2. Architecture of neural network:
1. Single-layer feed-forward network
In this type of network, there are only two layers, the input layer and the output
layer, but the input layer is not counted because no computation is performed
there. The output layer is formed by applying different weights to the input
nodes and taking the cumulative effect per node. The neurons of the output
layer then collectively compute the output signals.
2. Multilayer feed-forward network
This type of network also has one or more hidden layers, which are internal to
the network and have no direct contact with the external environment. The
existence of one or more hidden layers makes the network computationally
stronger. It is a feed-forward network because information flows from the
inputs, through the intermediate computations, to determine the output Z.
There are no feedback connections in which outputs of the model are fed back
into itself.
3. Single node with its own feedback
7. Weight Update:
o An optimizer (e.g., Stochastic Gradient Descent, Adam) updates
the weights and biases to minimize the loss function.
8. Iterative Learning:
o Forward propagation, loss calculation, backpropagation, and
weight updates are repeated over multiple iterations (epochs)
until the network learns to map inputs to outputs effectively.
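Steps 7 and 8 can be sketched as a bare-bones gradient-descent loop for a single linear neuron. The training data (points on the line y = 2x + 1), learning rate, and squared-error loss are illustrative assumptions, not part of the notes.

```python
# Minimal stochastic gradient descent for one linear neuron y = w*x + b,
# trained to fit y = 2x + 1 on a few made-up points.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b, lr = 0.0, 0.0, 0.05

for epoch in range(500):          # step 8: repeat over many epochs
    for x, target in data:
        y = w * x + b             # forward propagation
        error = y - target        # gradient of the loss 0.5*(y - target)^2
        w -= lr * error * x       # step 7: weight update
        b -= lr * error           # step 7: bias update
print(round(w, 2), round(b, 2))   # approaches 2.0 and 1.0
```

The same forward-pass / error / update cycle is what an optimizer like SGD or Adam performs across all weights of a multilayer network, with backpropagation supplying the per-weight gradients.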
Application Area | Use Case | ANN Model | Real-Life Example
Image Recognition | Object detection | CNN | Facial recognition by Facebook and Apple
Natural Language Processing | Language translation | RNN/Transformer | Google Translate
Speech Recognition | Voice commands | LSTM/Transformer | Siri, Alexa, Google Assistant
Autonomous Vehicles | Self-driving cars | CNN | Tesla, Waymo
Healthcare | Disease detection | DNN | Tumor detection in MRIs
Financial Sector | Fraud detection | FNN | Fraud prevention by Visa and Mastercard
Predictive Maintenance | Equipment failure prediction | RNN | Industrial machinery maintenance in manufacturing plants
Weather Forecasting | Climate prediction | DNN | Predicting rainfall and storms by meteorological organizations
Gaming | AI game agents | RL with DNN | AlphaGo by DeepMind
E-Commerce | Product recommendations | Collaborative Filtering with NN | Recommendations on Amazon and Netflix
Cybersecurity | Threat detection | Autoencoders | Malware detection in cybersecurity software
Energy Management | Smart grid optimization | FNN | Energy load management in utility companies
Robotics | Autonomous task execution | RL with DNN | Industrial robots by Boston Dynamics