BID Module4
MODULE 4
In mathematical folklore, the "no free lunch" (NFL) theorem (sometimes pluralized) of David
Wolpert and William Macready alludes to the saying "there is no such thing as a free lunch"; that is,
there are no easy shortcuts to success. It appeared in the 1997 paper "No Free Lunch Theorems for
Optimization". Wolpert had previously derived no free lunch theorems for machine learning
(statistical inference). The NFL theorem as popularly stated is an easily understood
consequence of the theorems Wolpert and Macready actually prove. It is weaker than
the proven theorems, and thus does not fully encapsulate them. Various investigators have
extended the work of Wolpert and Macready substantively. As a research area, "no free lunch
in search and optimization" names the field dedicated to mathematically analysing how
algorithm performance averages out across all possible problems, particularly in
search and optimization.
• NFLT states that no universally better algorithm can solve all types of optimization
problems.
• NFLT highlights the importance of adapting design principles to the unique challenges
presented by biological systems.
• NFLT originated in computer science, asserting that no one optimization algorithm is
universally superior.
• Relevance to biology-inspired design: Understanding the limitations of optimization
informs how we draw inspiration from nature.
• NFLT states that there is no universal optimization algorithm; performance is context-
dependent.
• In the realm of biology, this means no single design solution works for all environments.
• The diversity of biological systems challenges the idea of a one-size-fits-all approach.
• NFLT underscores the need for a diverse toolbox of design strategies to address the
complexity of natural systems.
Applying a single optimization strategy to all design problems is unrealistic. Biological
systems showcase a variety of adaptive strategies, emphasizing the need for context-
specific design in biomimicry. NFLT encourages a nuanced understanding of the
complexity of natural systems. Embracing the principles of NFLT prompts designers to
explore unconventional solutions that might be overlooked by traditional approaches.
The NFLT is a set of mathematical proofs and a general framework that explores the connection
between general-purpose "black-box" algorithms and the problems they solve.
The No Free Lunch Theorems state that any one algorithm that searches for an optimal cost or
fitness solution is not universally superior to any other algorithm.
This is due to the huge potential problem space for a general-purpose algorithm: if an
algorithm is particularly adept at solving one class of problems, and the fitness
surfaces that come with it, then it must perform worse on average over the remaining
problems. Wolpert and Macready wrote in their paper, No Free Lunch Theorems for
Optimisation:
"If an algorithm performs better than random search on some class of problems then it
must perform worse than random search on the remaining problems."
In short, because of the NFLT proof, we can state: a universally superior black-box
optimisation strategy is impossible.
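The averaging claim behind the NFLT can be checked exhaustively on a toy search space. The sketch below is an illustrative construction of ours (not taken from Wolpert and Macready's paper): it enumerates every objective function from a three-point domain into {0, 1} and shows that two different fixed search orders achieve exactly the same average best-found value across all problems.

```python
from itertools import product

# Toy NFL check: domain of 3 points, outputs in {0, 1}.
# Enumerate all 2**3 = 8 possible objective functions.
functions = list(product([0, 1], repeat=3))

def best_after(order, f, evals):
    """Best objective value found after `evals` fixed evaluations."""
    return max(f[i] for i in order[:evals])

# Two deterministic "algorithms": different fixed search orders.
algo_a = [0, 1, 2]
algo_b = [2, 1, 0]

mean_a = sum(best_after(algo_a, f, 2) for f in functions) / len(functions)
mean_b = sum(best_after(algo_b, f, 2) for f in functions) / len(functions)

# Averaged over ALL problems, both algorithms perform identically.
print(mean_a, mean_b)  # 0.75 0.75
```

Any advantage algorithm A gains on some functions is exactly cancelled on the rest, which is the content of the quote above.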
Bat Algorithm:
The Bat algorithm is a population-based metaheuristics algorithm for solving continuous
optimization problems. It’s been used to optimize solutions in cloud computing, feature
selection, image processing, and control engineering problems.
Metaheuristics are similar to heuristics in that both seek promising solutions to problems.
However, metaheuristics are generic and can deal with a variety of problems. Generally,
metaheuristic algorithms are designed for global optimization.
Yang proposed the Bat algorithm in 2010. The basic Bat algorithm is bio-inspired: it is based
on the characteristics of bat bio-sonar, or echolocation. Bats in the wild emit ultrasonic waves
into the environment to aid in hunting and navigation. After emitting these waves, a bat
receives their echoes, and it uses the received echoes to locate itself and identify obstacles.
It’s illustrated in the following figure:
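A minimal Python sketch of the basic Bat algorithm follows, loosely based on Yang's 2010 formulation: each bat's velocity is pulled toward the current best solution at a random frequency, with an occasional local random walk around the best. The objective (the sphere function), the sign convention in the velocity update, and all parameter values are illustrative assumptions, not recommended settings.

```python
import random

random.seed(1)

def sphere(x):
    """Toy objective to minimize: sum of squares, optimum at the origin."""
    return sum(v * v for v in x)

DIM, N_BATS, ITERS = 2, 15, 100
F_MIN, F_MAX = 0.0, 2.0          # frequency range
loudness, pulse_rate = 0.9, 0.5  # kept constant here; Yang varies them over time

# Initialize bat positions and velocities.
pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N_BATS)]
vel = [[0.0] * DIM for _ in range(N_BATS)]
best = min(pos, key=sphere)

for _ in range(ITERS):
    for i in range(N_BATS):
        freq = F_MIN + (F_MAX - F_MIN) * random.random()
        # Velocity pulled toward the current global best, scaled by frequency.
        vel[i] = [v + (b - x) * freq for v, x, b in zip(vel[i], pos[i], best)]
        cand = [x + v for x, v in zip(pos[i], vel[i])]
        if random.random() > pulse_rate:
            # Local random walk around the best solution (the "echo" refinement).
            cand = [b + 0.01 * random.gauss(0, 1) for b in best]
        # Accept improving moves, with loudness as the acceptance probability.
        if sphere(cand) < sphere(pos[i]) and random.random() < loudness:
            pos[i] = cand
        if sphere(pos[i]) < sphere(best):
            best = list(pos[i])

print(sphere(best))
```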
There are over a quarter of a million species of flowering plants in Nature, making up about
80% of all plant species. The main purpose of a flower is ultimately reproduction via pollination.
The flower pollination process involves the transfer of pollen by pollinators such as insects,
birds, and bats.
The Flower Pollination Algorithm (FPA) is a nature-inspired, population-based algorithm
proposed by Xin-She Yang (2012). The main objective of flower pollination is the optimal
reproduction of plants through the survival of the fittest flowers; in effect, it is an
optimization process within plant species. FPA is a swarm-based optimization technique that
has attracted the attention of many researchers in several optimization fields due to its
impressive characteristics: it has few parameters, has shown robust performance when applied
to various optimization problems, and is a flexible, adaptable, scalable, and simple
optimization method. Compared with other metaheuristic algorithms, FPA therefore shows good
results on various real-life optimization problems from different domains, such as electrical
and power systems, signal and image processing, wireless sensor networking, clustering and
classification, global function optimization, computer gaming, and structural and mechanical
engineering optimization.

FPA is a population-based optimization technique, initiated with a set of provisional or
random solutions. At each iteration, one of two operators is applied to each population
member: the local pollination operator or the global pollination operator. In local
pollination, the decision variables of the current solution are attracted toward two other
randomly selected solutions from the population. In global pollination, the decision variables
of the current solution are attracted toward the globally best solution found so far. A switch
probability decides whether the improvement step is local or global. This process repeats
until a predefined stopping criterion is met.

The procedural framework of the initial version of FPA has since undergone modification and
hybridization to enhance its performance on different types of problem landscapes. For
example, FPA has been investigated for determining the optimal loading of generators in power
systems; simulation results for small and large-scale power systems considering the valve
loading effect indicate the robustness of FPA.
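The local/global pollination loop described above can be sketched as follows, here minimizing the sphere function. The Lévy-flight step uses Mantegna's method; the switch probability and other parameter values are illustrative assumptions rather than recommended settings.

```python
import math
import random

random.seed(3)

def sphere(x):
    """Toy objective to minimize: sum of squares, optimum at the origin."""
    return sum(v * v for v in x)

def levy_step(beta=1.5):
    """Mantegna's method for drawing a Levy-distributed step size."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta *
              2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

DIM, N_FLOWERS, ITERS, P_SWITCH = 2, 20, 200, 0.8

pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N_FLOWERS)]
best = min(pop, key=sphere)

for _ in range(ITERS):
    for i in range(N_FLOWERS):
        if random.random() < P_SWITCH:
            # Global pollination: Levy flight toward the global best.
            cand = [x + levy_step() * (b - x) for x, b in zip(pop[i], best)]
        else:
            # Local pollination: move between two random population members.
            j, k = random.sample(range(N_FLOWERS), 2)
            eps = random.random()
            cand = [x + eps * (xj - xk)
                    for x, xj, xk in zip(pop[i], pop[j], pop[k])]
        if sphere(cand) < sphere(pop[i]):   # greedy acceptance
            pop[i] = cand
        if sphere(pop[i]) < sphere(best):
            best = list(pop[i])

print(sphere(best))
```

The switch probability `P_SWITCH` plays the role of the switch operator from the text, deciding for each flower whether the improvement step is global or local.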
Genetic algorithms are iterative optimization techniques that simulate natural selection to find
optimal solutions.
i) Single-point Crossover: A crossover point on the parent organism string is selected. All data
beyond that point in the organism string is swapped between the two parent organisms.
Single-point crossover is characterized by positional bias.
ii) Two-point Crossover: This is a specific case of an N-point crossover technique. Two
random points are chosen on the individual chromosomes (strings), and the genetic material is
exchanged at these points.
Mutation introduces small random changes in the genetic information of selected individuals.
This operation helps maintain genetic diversity within the population, allowing exploration of
different regions of the solution space.
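The crossover and mutation operators above can be sketched directly on bit-string chromosomes. The helper functions below are illustrative and not tied to any particular GA library:

```python
import random

def single_point_crossover(p1, p2, point):
    """Swap all genes beyond the crossover point."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def two_point_crossover(p1, p2, a, b):
    """Exchange the genetic material between two points."""
    return p1[:a] + p2[a:b] + p1[b:], p2[:a] + p1[a:b] + p2[b:]

def mutate(chrom, rate, rng=random):
    """Flip each bit independently with probability `rate`."""
    return [1 - g if rng.random() < rate else g for g in chrom]

c1, c2 = single_point_crossover([0, 0, 0, 0], [1, 1, 1, 1], 2)
print(c1, c2)  # [0, 0, 1, 1] [1, 1, 0, 0]
```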
Let’s consider an example that involves optimizing a common task: finding the best route to
commute from home to work.
Imagine you want to optimize your daily commute and find the shortest route to go from your
home to your workplace. You have multiple possible routes to choose from, each with different
distances, traffic conditions, and travel times. You can use a GA to help you find the optimal
route.
1. Encoding
In this case, potential solutions can be encoded as permutations of the cities or locations along
the commute route. For example, you can represent each possible route as a string of city
identifiers, such as “A-B-C-D-E-F,” where each letter represents a location (e.g., a street,
intersection, or landmark).
2. Initialization
Start by creating an initial population of potential routes. You can randomly generate a set of
routes or use existing routes as a starting point.
3. Evaluation
Evaluate each route in the population by considering factors such as distance, traffic conditions,
travel time, and other relevant criteria. The evaluation function should quantify the quality of
each route, where lower values indicate better solutions (e.g., shorter distance, less time spent
in traffic).
4. Selection
Perform a selection process to choose which routes will be part of the next generation. Selection
methods aim to favour fitter individuals, in this case, routes with lower evaluation values.
Common selection techniques include tournament selection, roulette wheel selection, or rank-
based selection.
5. Crossover
Apply crossover to create new routes by combining genetic material from two parent routes.
For instance, you can select two parent routes and exchange segments of the routes to create
two new offspring routes.
6. Mutation
Introduce random changes in the routes through mutation. This helps explore new possibilities
and avoid getting stuck in local optima. A mutation operation could involve randomly
swapping two cities in a route, inserting a new city, or randomly changing the order of a few
cities.
7. New generation
The offspring generated through crossover and mutation and a few fittest individuals from the
previous generation form the new population for the next iteration. This ensures that good
solutions are preserved and carried forward.
8. Termination
The GA continues the selection, crossover, and mutation process for a fixed number of
generations or until a termination criterion is met. Termination criteria can be a maximum
number of iterations or reaching a satisfactory solution (e.g., a route with a predefined low
evaluation value).
9. Final solution
Once the GA terminates, the best solution, typically the route with the lowest evaluation value,
represents the optimal or near-optimal route for your daily commute.
By iteratively applying selection, crossover, and mutation, GAs help explore and evolve the
population of routes, gradually converging toward the shortest and most efficient route for your
daily commute.
It’s important to note that GAs require appropriate parameter settings, such as population size,
selection strategy, crossover and mutation rates, and termination criteria, to balance exploration
and exploitation.
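The steps above can be combined into a compact sketch. The six locations, the distance table, and all GA parameters below are made-up illustrative values; the GA evolves permutations of the intermediate stops between a fixed start (A) and destination (F), using tournament selection, an order-based crossover, swap mutation, and elitism.

```python
import random

random.seed(7)

# Hypothetical symmetric distances between six locations on the commute.
DIST = {
    "AB": 2, "AC": 9, "AD": 10, "AE": 7, "AF": 12,
    "BC": 4, "BD": 8, "BE": 3, "BF": 9,
    "CD": 5, "CE": 6, "CF": 3,
    "DE": 2, "DF": 6,
    "EF": 4,
}

def dist(a, b):
    return DIST["".join(sorted(a + b))]

def route_length(middle):
    """Total length of the route A -> middle stops -> F (lower is better)."""
    route = "A" + "".join(middle) + "F"
    return sum(dist(route[i], route[i + 1]) for i in range(len(route) - 1))

def order_crossover(p1, p2):
    """Keep a slice of parent 1, fill the rest in parent 2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    kept = p1[a:b + 1]
    rest = [g for g in p2 if g not in kept]
    return rest[:a] + kept + rest[a:]

def swap_mutation(perm, rate=0.2):
    """Occasionally swap two stops to maintain diversity."""
    perm = list(perm)
    if random.random() < rate:
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

POP, GENS = 10, 30
population = [random.sample("BCDE", 4) for _ in range(POP)]

for _ in range(GENS):
    population.sort(key=route_length)
    next_gen = population[:2]                      # elitism: keep the 2 fittest
    while len(next_gen) < POP:
        # Tournament selection of two parents (tournament size 3).
        p1 = min(random.sample(population, 3), key=route_length)
        p2 = min(random.sample(population, 3), key=route_length)
        next_gen.append(swap_mutation(order_crossover(p1, p2)))
    population = next_gen

best = min(population, key=route_length)
print("A-" + "-".join(best) + "-F", route_length(best))
```

Note that a standard one-point crossover would break permutations (stops could repeat or vanish), which is why route problems use order-preserving operators like the one sketched here.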
1. Optimization problems: GAs excel at solving optimization problems, aiming to find the
best solution among a large set of possibilities. These problems include mathematical function
optimization, parameter tuning, portfolio optimization, resource allocation, and more. GAs
explore the solution space by enabling the evolution of a population of candidate solutions
using genetic operators such as selection, crossover, and mutation, gradually converging
towards an optimal or close-to-optimal solution.
2. Machine learning: GAs have applications in machine learning, particularly to optimize the
configuration and parameters of machine learning models. GAs can be used to optimize
hyperparameters, such as learning rate, regularization parameters, and network architectures
in neural networks. They can also be employed for feature selection, where the algorithm
evolves a population of feature subsets to identify the most relevant subset for a given task.
3. Evolutionary robotics: GAs find use in evolutionary robotics, which involves evolving
robot behavior and control strategies. By representing the robot’s control parameters or policies
as chromosomes, GAs can evolve solutions that maximize performance metrics such as speed,
stability, energy efficiency, or adaptability. GAs are particularly useful when the optimal
control strategies are difficult to determine analytically.
4. Image and signal processing: GAs are applied in image and signal processing tasks,
including image reconstruction, denoising, feature extraction, and pattern recognition. They
can optimize the parameters of reconstruction algorithms to enhance image quality. In signal
processing, they can optimize filtering parameters for denoising signals while preserving
important information. GAs can also be used for automatic feature extraction, evolving feature
extraction algorithms to identify relevant features/objects in images or signals.
5. Design and creativity: GAs have been used for design and creativity tasks, such as
generating artistic designs, music composition, and game design. By representing design
elements or musical notes as genes, GAs can evolve populations of designs or compositions
and evaluate their quality using fitness functions tailored to the specific domain. GAs have
demonstrated the ability to generate novel and innovative solutions in creative domains.
6. Financial modelling: GAs are applied in financial modeling for portfolio optimization,
algorithmic trading, and risk management tasks. GAs can optimize the allocation of assets in
an investment portfolio to maximize returns and minimize risk. They can also evolve trading
strategies by adjusting trading parameters to adapt to market conditions and maximize profits.
GAs provide a flexible and adaptive approach to modeling complex financial systems.
Companies across various industries have used genetic algorithms to tackle a range of
challenges. Here are a few recent noteworthy examples of GA:
1. Google’s DeepMind
2. Tesla’s self-driving tasks
3. Amazon’s logistics operations
4. Autodesk’s design optimization
5. Uber’s evolutionary optimizer
6. Boeing’s wing design optimization
7. Ford’s vehicle routing optimization
8. Siemens’ manufacturing process optimization
9. NVIDIA’s GPU architecture optimization
10. Toyota’s supply chain optimization
Bio-Inspired Optimisation:
The Ant Colony Optimization technique is inspired by the foraging behaviour of ant
colonies and was first introduced by Marco Dorigo in the 1990s. Ants are eusocial insects that
favour the survival and sustenance of the community over that of the individual.
Ant Colony Optimization (ACO) is a metaheuristic algorithm inspired by the foraging
behaviour of real ants. An ant deposits a chemical pheromone trail on the ground as it
forages. The quantity of pheromone deposited depends on the quantity and quality of the food,
and it guides other ants to the food source. Through this indirect communication via
pheromone trails, the ants find the shortest paths between their nest and food sources.
between them via pheromone trails. The below figure represents the operators, control
parameters and the application areas of the ant colony optimization algorithm. Pheromone trail
update and evaporation of pheromone trail are the operators of the ACO algorithm. The control
parameters used by the ACO algorithm includes the number of ants and number of iterations
considered in the algorithm. Some of the application areas, where Ant Colony Optimization
(ACO) algorithm can be used are represented as shown in the given figure.
i) Foraging Behavior
ACO algorithms simulate the foraging behaviour of ants to solve
optimization problems. Ants deposit pheromones as they search for food, and these
pheromones influence other ants' decisions.
ii) Exploration and Exploitation
ACO strikes a balance between exploration and exploitation to find optimal
solutions.
iii) Global Optimization
The algorithms focus on finding a global best solution rather than getting stuck in
local optima by utilizing the collective knowledge of the ant colony.
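A minimal sketch of these ideas on a small travelling-salesman instance follows. The distance matrix and all parameters are illustrative, and this stripped-down version omits the heuristic visibility term that full variants such as Dorigo's Ant System include; only pheromone deposit and evaporation are modelled.

```python
import random

random.seed(5)

# Hypothetical symmetric distance matrix for 5 cities.
D = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
N = len(D)
N_ANTS, ITERS, EVAP, Q = 10, 50, 0.5, 100.0

# Uniform initial pheromone on every edge.
tau = [[1.0] * N for _ in range(N)]

def tour_length(tour):
    """Length of the closed tour, including the return edge."""
    return sum(D[tour[i]][tour[(i + 1) % N]] for i in range(N))

def build_tour():
    """One ant builds a tour, picking each next city in proportion to pheromone."""
    tour = [0]
    while len(tour) < N:
        cur = tour[-1]
        choices = [c for c in range(N) if c not in tour]
        weights = [tau[cur][c] for c in choices]
        tour.append(random.choices(choices, weights=weights)[0])
    return tour

best, best_len = None, float("inf")
for _ in range(ITERS):
    tours = [build_tour() for _ in range(N_ANTS)]
    # Evaporation: all trails decay each iteration, forgetting poor paths.
    tau = [[t * (1 - EVAP) for t in row] for row in tau]
    for tour in tours:
        length = tour_length(tour)
        if length < best_len:
            best, best_len = tour, length
        # Deposit: shorter tours lay more pheromone on their edges.
        for i in range(N):
            a, b = tour[i], tour[(i + 1) % N]
            tau[a][b] += Q / length
            tau[b][a] += Q / length

print(best, best_len)
```

Evaporation is what balances exploration against exploitation here: without it, early pheromone dominates and the colony locks into the first tours it happens to find.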
Advantages:
Limitations:
Applications:
i) Routing and Networking: ACO has been used to optimize network routing such as
packet switching networks and telecommunication systems.
ii) Vehicle Routing: It is applied in the optimization of vehicle routing problems like
the famous Travelling Salesman Problem.
iii) Manufacturing: ACO enables efficient scheduling and resource allocation in
manufacturing processes, minimizing production time and cost.
iv) Robotics and Automation: Used to optimize robot motion planning and control,
improving autonomy and efficiency in robotic systems.