Chapter 1 Optimisation
1 Optimisation techniques
Optimisation is everywhere, from nature and engineering to the social sciences and economics, and even in everyday tasks such as traffic control and holiday planning. In fact, over the past century, optimisation has become the backbone of numerous applications. To optimise means to find the “best” design with respect to certain criteria under various constraints. Most real-life problems are complex in nature and may involve a number of design or decision variables, and finding the best possible combination of these variables is a daunting task. This is where mathematical optimisation techniques become indispensable. The process of finding the best solution to a problem involves first constructing a mathematical model that describes the problem and subsequently applying optimisation techniques to that model. However, there is no single optimisation algorithm that can effectively solve all optimisation problems, and over the years many different algorithms have been proposed. Optimisation algorithms can be classified as deterministic or stochastic. Deterministic algorithms follow a rigid mathematical procedure with no random components. For the same initial point, a deterministic algorithm will follow the same path and produce the same optimum point, irrespective of whether the program is run today or tomorrow. Stochastic algorithms, on the other hand, always involve some randomness and usually produce a different result every time they are executed, even when started from the same initial point. Some deterministic optimisation algorithms use gradient information and are therefore called gradient-based algorithms. One example is the Newton-Raphson algorithm, which uses function values and their derivatives and works well for smooth, unimodal, unconstrained optimisation problems.
Another gradient-based technique is the method of Lagrange multipliers, which is applicable to constrained optimisation problems with equality constraints. The basic idea behind the method is to convert a constrained problem into an unconstrained one by introducing a multiplier, called a Lagrange multiplier, for each constraint, and then applying the derivative test of an unconstrained problem. For inequality constraints, matters become more complicated and the Karush-Kuhn-Tucker (KKT) conditions must be used. However, for highly non-linear, multimodal and multivariate problems with a large number of constraints, implementing the KKT conditions is very difficult and often inefficient.
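As a small standard textbook illustration (not an example from this chapter), consider minimising f(x, y) = x^2 + y^2 subject to the equality constraint x + y = 1. The Lagrangian is L(x, y, λ) = x^2 + y^2 + λ(x + y - 1), and setting its partial derivatives to zero, ∂L/∂x = 2x + λ = 0, ∂L/∂y = 2y + λ = 0 and ∂L/∂λ = x + y - 1 = 0, yields x = y = 1/2 with λ = -1, which is the constrained minimum.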
Due to the growing demand for accuracy and the ever-increasing complexity of systems, the optimisation process becomes more and more time consuming. For many highly non-linear problems, deterministic methods may fail to deliver a solution within an acceptable time frame. Moreover, most real-world problems are multimodal in nature, and these deterministic algorithms converge only locally, i.e., the final result depends on the initial guess values. One strategy to ensure that all peaks are reachable is to run the optimisation process a number of times with different random initial guess values. For example, if an objective function has NP peaks, and if the optimisation algorithm is run NS (> NP) times with different initial start values selected from various search regions, then it is likely to reach all the peaks of this multimodal function. In reality, however, this is not easy to achieve, as the number of peaks and valleys of a function is not known, and often there is no guarantee that all peaks are sampled.
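A minimal Python sketch of this multi-start strategy is given below; the multimodal test function (a one-dimensional Rastrigin-type function), the crude local search and the value NS = 20 are illustrative assumptions only.

import math
import random

def f(x):
    # one-dimensional Rastrigin-type function: many local minima, global minimum at x = 0
    return 10.0 + x*x - 10.0*math.cos(2.0*math.pi*x)

def local_search(x, step=0.01, iters=2000):
    # crude deterministic descent: move left or right whenever it improves f
    for _ in range(iters):
        for trial in (x + step, x - step):
            if f(trial) < f(x):
                x = trial
    return x

NS = 20   # number of restarts, chosen larger than the expected number of peaks
starts = [random.uniform(-5.0, 5.0) for _ in range(NS)]
candidates = [local_search(x0) for x0 in starts]
best = min(candidates, key=f)
print(best, f(best))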
Also, many problems may involve both continuous and discrete variables, and their derivatives might not exist at the optimum point. Thus, another class of algorithms, called stochastic algorithms, has been developed. Stochastic algorithms are generally classified as heuristic or metaheuristic, although the difference between the two is small. Heuristic algorithms use trial and error to find acceptable-quality solutions to a complex problem in a reasonable amount of time. Among the quality solutions found, some may be nearly optimal, but there is no guarantee of such optimality. The hope is that these algorithms work most of the time, but not all of the time. A further development over heuristic algorithms is the so-called metaheuristic algorithms. 'Meta-' means 'higher level', and these generally perform better than plain heuristics. All metaheuristic algorithms work with a population of solutions and rely on a certain trade-off between exploration and exploitation. Exploration means generating diverse solutions so as to explore the entire search space on a global scale. This is achieved via randomisation, which prevents the solutions from being trapped at local optima. Exploitation, on the other hand, means searching locally in a region by exploiting the information that a good current solution has been found there. A good combination of exploration and exploitation will usually ensure that the global optimum is reachable. Metaheuristic algorithms never guarantee finding the exact global optimum, but they converge to nearly optimal values with less computational effort and in a reasonable amount of time. As time and computing resources are limited, the aim is to find good-quality solutions (not necessarily the best) within a reasonable and practical time limit.
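The following small Python sketch illustrates this trade-off in a generic way (it is not any specific published metaheuristic): with probability p_explore a completely random candidate is generated (exploration), otherwise the best solution found so far is perturbed slightly (exploitation); the objective function and all parameter values are arbitrary choices for the example.

import random

def f(x):
    # illustrative multimodal objective
    return (x*x - 4.0)**2 + 3.0*abs(x - 1.0)

def stochastic_search(p_explore=0.3, iters=5000, lo=-5.0, hi=5.0):
    best = random.uniform(lo, hi)
    for _ in range(iters):
        if random.random() < p_explore:
            cand = random.uniform(lo, hi)          # exploration: sample anywhere in the search space
        else:
            cand = best + random.gauss(0.0, 0.1)   # exploitation: small perturbation of the current best
        if f(cand) < f(best):
            best = cand
    return best

print(stochastic_search())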
Researchers have always been trying to find better optimisation algorithms, or even universally robust algorithms, especially for tough NP-hard optimisation problems. Nature often produces remarkably effective designs, and many researchers have taken inspiration from nature to develop optimisation algorithms called bio-inspired algorithms. The first breakthrough in metaheuristic optimisation was the development of the genetic algorithm by John Holland in the 1960s. This algorithm was inspired by Charles Darwin’s theory of natural evolution and is based on the process of natural selection, where the fittest individuals are selected for reproduction in order to produce the offspring of the next generation.
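A compact Python sketch of a genetic algorithm in this spirit is shown below; the bit-string encoding, the OneMax objective and the particular selection, crossover and mutation operators are common textbook choices used purely for illustration, not details taken from Holland's original work.

import random

def fitness(ind):
    # maximise the number of 1-bits in the chromosome ("OneMax")
    return sum(ind)

def select(pop):
    # tournament selection: the fitter of two random individuals survives
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # single-point crossover of two parent bit strings
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:]

def mutate(ind, rate=0.01):
    # flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in ind]

def genetic_algorithm(n_bits=30, pop_size=40, generations=100):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(select(pop), select(pop))) for _ in range(pop_size)]
    return max(pop, key=fitness)

print(genetic_algorithm())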
Another type of bio-inspired algorithm is the swarm intelligence (SI) algorithm, which adopts the collective intelligence exhibited by organised groups of insects such as bees, ants, termites, fireflies and wasps, as well as by flocks of birds or schools of fish. SI algorithms consist of a population of simple agents that interact locally with one another and with their environment according to certain simple rules. Each individual agent may not be intelligent, but the system of multiple agents can exhibit self-organising behaviour and thus act as a kind of collective intelligence. Particle swarm optimisation (PSO) uses the swarm behaviour of fish and birds, while the firefly algorithm (FA) is based on the flashing behaviour of fireflies. The cuckoo search (CS) algorithm is based on the brood parasitism of some cuckoo species, while the bat algorithm is inspired by the echolocation behaviour of microbats. There are a number of other SI algorithms, such as ant colony optimisation, the bee algorithms and the flower pollination algorithm, and in fact more than 40 nature-inspired algorithms have been developed to date.
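As an illustration of one such SI method, the following Python sketch implements the standard PSO velocity and position update; the sphere test function and the coefficient values (w, c1, c2) are common illustrative defaults rather than values prescribed in this chapter.

import random

def sphere(x):
    # simple convex test objective with minimum at the origin
    return sum(xi * xi for xi in x)

def pso(dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # best position found by each particle
    gbest = min(pbest, key=sphere)         # best position found by the whole swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # standard velocity update: inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
                if sphere(pbest[i]) < sphere(gbest):
                    gbest = pbest[i][:]
    return gbest

print(pso())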
The selection of an optimisation algorithm for a given problem is crucial. The algorithm chosen for a given optimisation task will depend on the type of problem, the nature of the algorithm, the desired quality of solutions, the available computing resources and the time limit.