Genetic Algorithm in MATLAB
ABSTRACT In this paper, an attractive approach for teaching genetic algorithm (GA) is presented. This approach is based primarily on using MATLAB in implementing the genetic operators:
crossover, mutation and selection. A detailed illustrative example is presented to demonstrate that
GA is capable of finding global or near-global optimum solutions of multi-modal functions. An
application of GA in designing a robust controller for uncertain control systems is also given to
show its potential in designing engineering intelligent systems.
1 INTRODUCTION
Genetic Algorithm (GA) is a major topic in a neural and evolutionary computing undergraduate (advanced level)/postgraduate course. GA plays an
important role in designing intelligent control systems, an area which is now very
attractive and stimulating to students in Electrical Engineering Departments.
The usual way to teach genetic algorithm is by the use of PASCAL, C and
C++ languages. In this paper, we present an attractive and easy way for
teaching such a topic using MATLAB [1]. It has been found that this approach
is quite acceptable to the students. Not only can they access the software
package conveniently, but also MATLAB provides many toolboxes to support
an interactive environment for modelling and simulating a wide variety of
dynamic systems including linear, nonlinear discrete-time, continuous-time and
hybrid systems. Together with SIMULINK, it also provides a graphical user
interface (GUI) for building models as block diagrams, using click-and-drag
mouse operations.
The rest of the paper is organised as follows. Section 2 introduces a simple
version of genetic algorithm, canonical genetic algorithm (CGA), which was
originally developed by Holland [2]. Section 3 describes the implementation
details of the genetic operators: crossover, mutation and selection. Section 4
demonstrates the performance of the CGA with a multi-modal function.
Section 5 provides an application of the developed genetic operators in designing a robust controller for uncertain plants, and section 6 gives a conclusion.
2 THE CANONICAL GENETIC ALGORITHM
2.1 Initialisation
In the initialisation, the first thing to do is to decide the coding structure. Coding
for a solution, termed a chromosome in GA literature, is usually described as a
string of symbols from {0, 1}. These components of the chromosome are then
labeled as genes. The number of bits that must be used to describe the parameters
is problem dependent. Let each solution x_i, i = 1, 2, ..., m, in the population of m such solutions be a string of symbols from {0, 1} of length l. Typically, the initial population of m solutions is selected completely at random, with each bit of each solution having a 50% chance of taking the value 0.
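In MATLAB, this random initialisation amounts to a single statement of the following kind (a sketch only; the full routine used in this paper, which also stores auxiliary columns, is given in section 3.1):

pop=round(rand(m, l));    % m random binary strings of length l; each bit is 0 or 1 with probability 0.5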
FIG. 1
2.2 Selection
CGA uses proportional selection: the population of the next generation is determined by m independent random experiments. The probability that individual x_i is selected from the tuple (x_1, x_2, ..., x_m) to be a member of the next generation at each experiment is given by

$$P\{x_i \text{ is selected}\} = \frac{f(x_i)}{\sum_{j=1}^{m} f(x_j)} > 0. \qquad (1)$$
This process is also called roulette wheel parent selection and may be viewed
as a roulette wheel where each member of the population is represented by a
slice that is directly proportional to the member's fitness. A selection step is
then a spin of the wheel, which in the long run tends to eliminate the least fit
population members.
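For example, in a population of four individuals with fitness values 1, 2, 3 and 4, equation (1) gives selection probabilities of 0.1, 0.2, 0.3 and 0.4 respectively, so the fittest individual is four times as likely as the least fit one to be chosen at each spin of the wheel.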
2.3 Crossover
Crossover is an important random operator in CGA and the function of the
crossover operator is to generate new or child chromosomes from two parent
chromosomes by combining the information extracted from the parents. The
method of crossover used in CGA is the one-point crossover as shown in
Fig. 2(a). By this method, for a chromosome of length l, a random integer c between 1 and l-1 is first generated. The first child chromosome is formed by appending the last l-c elements of the first parent chromosome to the first c elements of the second parent chromosome. The second child chromosome is formed by appending the last l-c elements of the second parent chromosome to the first c elements of the first parent chromosome. Typically, the probability of crossover ranges from 0.6 to 0.95.
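As a brief illustration (not taken from the original text), two 8-bit parents crossed at the point c = 3 exchange their tails as follows:

p1=[1 1 0 0 1 0 1 1]; p2=[0 0 1 1 0 1 0 0];   % two example parent chromosomes
c=3;                                          % crossover point
child1=[p1(1:c) p2(c+1:end)]                  % gives 1 1 0 1 0 1 0 0
child2=[p2(1:c) p1(c+1:end)]                  % gives 0 0 1 0 1 0 1 1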
2.4 Mutation
Mutation is another important component in CGA, though it is usually conceived as a background operator. It operates independently on each individual by probabilistically flipping each bit of the chromosome with a small mutation probability pm.
3 MATLAB IMPLEMENTATION OF THE GENETIC OPERATORS

3.1 Initialisation

In the MATLAB implementation, the population is stored in a matrix pop with one row per individual; each row holds the binary string followed by two auxiliary columns:

pop = [ binary string 1         x_1         f(x_1)
        binary string 2         x_2         f(x_2)
        ...                     ...         ...
        binary string popsize   x_popsize   f(x_popsize) ]
The first stringlength columns contain the bits which characterise the binary
codification of the real variable x. The strings are randomly generated, but a
test must be made to ensure that the corresponding values belong to the
function domain. The crossover and mutation operators will be applied to this
stringlength-bit sub-string. The (stringlength+1)-th and (stringlength+2)-th
columns contain the real value of x, used as auxiliary information in order to
check the algorithm's evolution, and the corresponding f(x), which is assumed
to be the fitness value of the string in this case. The initialisation process
can then be completed using the following code:
function [pop]=initialise(popsize, stringlength, fun)
% fun is the objective (fitness) function; a and b, the lower and upper
% bounds of the search domain, are assumed to be available inside this routine
pop=round(rand(popsize, stringlength+2));
% map each binary string to a real number x in [a, b] ...
pop(:, stringlength+1)=sum(2.^(size(pop(:,1:stringlength),2)-1:-1:0)...
    .*pop(:,1:stringlength), 2)*(b-a)/(2^stringlength-1)+a;
% ... and store the corresponding objective function value
pop(:, stringlength+2)=fun(pop(:, stringlength+1));
end
In the above routine, we first generate the binary bits randomly, and then
replace the (stringlength+1)-th and (stringlength+2)-th columns with the real x
values and the objective function values, where fun is the objective function,
usually defined in a .m file.
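As an illustration (not from the paper), the routine might be called as follows for the test function of section 4, assuming a recent MATLAB in which fun can be an anonymous function handle, and assuming the domain bounds a=0 and b=1 have been made visible inside initialise (for example as global variables):

fun=@(x) sin(10*x).^2./(1+x);    % test function of section 4, written as a handle
pop=initialise(30, 20, fun);     % 30 individuals, 20-bit strings
[bestf, k]=max(pop(:, 22));      % best fitness in the initial population (column stringlength+2)
bestx=pop(k, 21);                % corresponding value of x (column stringlength+1)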
3.2 Crossover
Crossover takes two individuals, parent1 and parent2, and produces two new individuals, child1 and child2. Let pc be the probability of crossover; then the crossover operator can be implemented as follows:
function [child1, child2]=crossover(parent1, parent2, pc)
% one-point crossover with probability pc; stringlength, fun, a and b
% are assumed to be available inside this routine, as in initialise
if (rand<pc)
    cpoint=round(rand*(stringlength-2))+1;    % crossover point in 1..stringlength-1
    child1=[parent1(:,1:cpoint) parent2(:,cpoint+1:stringlength)];
    child2=[parent2(:,1:cpoint) parent1(:,cpoint+1:stringlength)];
    % update the auxiliary columns: real value of x and its fitness
    child1(:, stringlength+1)=sum(2.^(size(child1(:,1:stringlength),2)-1:-1:0)...
        .*child1(:,1:stringlength))*(b-a)/(2^stringlength-1)+a;
    child2(:, stringlength+1)=sum(2.^(size(child2(:,1:stringlength),2)-1:-1:0)...
        .*child2(:,1:stringlength))*(b-a)/(2^stringlength-1)+a;
    child1(:, stringlength+2)=fun(child1(:, stringlength+1));
    child2(:, stringlength+2)=fun(child2(:, stringlength+1));
else
    child1=parent1;
    child2=parent2;
end
end
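3.3 Mutation
Mutation operates on each individual by flipping each bit of the string with the small probability pm and then updating the two auxiliary columns. A minimal sketch of such a routine, written in the same style as the other listings (and, like them, assuming that stringlength, fun, a and b are available inside the function), is:

function [child]=mutation(parent, pm)
% flip each bit of the parent chromosome with probability pm
mpoints=find(rand(1,stringlength)<pm);
child=parent;
child(mpoints)=1-child(mpoints);
% update the auxiliary columns: real value of x and its fitness
child(stringlength+1)=sum(2.^(size(child(1:stringlength),2)-1:-1:0)...
    .*child(1:stringlength))*(b-a)/(2^stringlength-1)+a;
child(stringlength+2)=fun(child(stringlength+1));
end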
3.4 Selection
The selection operator determines which of the individuals will survive and
continue in the next generation. The selection operator implemented here is
roulette wheel selection and this is perhaps the simplest way to implement
selection. We first calculate the probabilities of each individual being selected,
based on equation (1). Then the partial sum of the probabilities is accumulated
in the vector prob. We also generate a vector rns containing normalised random
numbers, by comparing the elements of the two vectors rns and prob, we decide
the individuals which will take part in the new population:
function [newpop]=roulette(oldpop)
% roulette wheel selection based on equation (1); stringlength and
% popsize are assumed to be available inside this routine
totalfit=sum(oldpop(:,stringlength+2));     % total fitness of the population
prob=oldpop(:,stringlength+2)/totalfit;     % selection probability of each individual
prob=cumsum(prob);                          % partial sums of the probabilities
rns=sort(rand(popsize,1));                  % sorted random numbers in [0, 1]
fitin=1; newin=1;
while newin<=popsize
    if (rns(newin)<prob(fitin))
        newpop(newin,:)=oldpop(fitin,:);    % individual fitin is selected
        newin=newin+1;
    else
        fitin=fitin+1;
    end
end
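The paper's main program that ties the operators together is not reproduced in the listings above. A compact, self-contained sketch of a complete run is given below; it is written as a single script with the decoding done inline rather than by calling the listed functions, so it should not be read as the paper's own main loop. The parameter values popsize=30, stringlength=20, pc=0.95 and pm=0.05 and the test function match those used in section 4; everything else is an assumption.

popsize=30; stringlength=20; pc=0.95; pm=0.05; maxgen=100;
a=0; b=1;                                            % search domain [a, b]
fun=@(x) sin(10*x).^2./(1+x);                        % test function of section 4
decode=@(bits) bits*(2.^(stringlength-1:-1:0))'*(b-a)/(2^stringlength-1)+a;
pop=round(rand(popsize,stringlength));               % random initial population
for gen=1:maxgen
    x=decode(pop); fit=fun(x);                       % decode and evaluate
    prob=cumsum(fit/sum(fit)); prob(end)=1;          % roulette wheel, equation (1)
    idx=arrayfun(@(r) find(prob>=r,1), rand(popsize,1));
    pop=pop(idx,:);                                  % selected parents
    for i=1:2:popsize-1                              % one-point crossover on pairs
        if rand<pc
            c=randi(stringlength-1);
            tmp=pop(i,c+1:end);
            pop(i,c+1:end)=pop(i+1,c+1:end);
            pop(i+1,c+1:end)=tmp;
        end
    end
    mask=rand(popsize,stringlength)<pm;              % bit-flip mutation
    pop=double(xor(pop,mask));
end
x=decode(pop);
[bestf,k]=max(fun(x));
fprintf('best x = %.4f, f(x) = %.4f\n', x(k), bestf);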
4 A MULTI-MODAL FUNCTION EXAMPLE

To demonstrate the performance of the CGA described above, consider the maximisation of the function

$$f(x) = \frac{\sin^2(10x)}{1+x} \qquad (2)$$
for x belonging to the domain [0, 1]. The characteristic of the function is
plotted in Fig. 4(a). As shown in the figure, the function has three maxima,
with a global maximum achieved at x=0.1527.
Several trials have been performed with different values of the fundamental parameters.
Below, a representative example with popsize=30, stringlength=20, pc=0.95 and pm=0.05 is reported. Fig. 4(a) shows the initial population for
this one-variable optimisation: the plotted markers represent the individuals of
the population, randomly generated in the function domain. Fig. 4(b) shows
the population after 50 generations of the CGA, and Fig. 4(c) gives the population after 100 generations of the CGA.
As can be observed from Fig. 4(a), at the first step the population is randomly distributed over the whole function
domain. During the evolution of the algorithm, Fig. 4(b), the individuals of the
population tend to concentrate at the peaks of the function, and at convergence,
Fig. 4(c), most of the individuals are clustered near the desired maximum value
of the function.
FIG. 4 Distribution of the population during the CGA run: (a) the initial population; (b) after 50 generations; (c) after 100 generations.
5 AN APPLICATION EXAMPLE
In this section, we provide an application example of the developed GA in
designing a robust controller for uncertain control systems. Robust control under
parametric uncertainty is an important research area with many practical
applications. When the system has a general nonlinear uncertainty structure,
the usual approach is to overbound it by an interval dynamical system. The
robust controller design can be transformed into solving an eigenvalue optimisation problem, which involves optimising the maximum real part of eigenvalues
of the characteristic polynomial for the uncertain plant. This kind of problem
is a typical non-smooth and nonconvex optimisation problem, and may have
several local optima. We demonstrate that GA is capable of solving this kind
of problem. A practical flywheel-shaft-flywheel system, including uncertainty
in the length of the shaft and shaft damping, is employed to carry out the
numerical simulation.
5.1 Uncertain control systems
Let us consider the following single-input single-output (SISO) linear system:

$$a_n(q)\frac{d^n y}{dt^n} + a_{n-1}(q)\frac{d^{n-1} y}{dt^{n-1}} + \cdots + a_0(q)\,y = b_m(q)\frac{d^m u}{dt^m} + b_{m-1}(q)\frac{d^{m-1} u}{dt^{m-1}} + \cdots + b_0(q)\,u \qquad (3)$$

where n is the order of the model, y is the output variable, u is the input
variable, and $q = \{q_1, q_2, \ldots, q_k\}$ is the design parameter vector. Furthermore,
we acknowledge uncertainties in the design parameters by noting that the design
parameter vector q lies in a box $Q = [q_1^-, q_1^+] \times [q_2^-, q_2^+] \times \cdots \times [q_k^-, q_k^+]$. The
transfer function of (3) is defined by

$$G(s, q) = \frac{N(s, q)}{D(s, q)} \qquad (4)$$

where N(s, q) and D(s, q) are the numerator and denominator polynomials in s whose coefficients are $b_i(q)$ and $a_i(q)$, respectively.
We also consider a feedback controller, $F(s, p) = F_1(s, p)/F_2(s, p)$, where the
numerator and denominator of F are polynomials in s, and $p = \{p_1, p_2, \ldots, p_l\}$
is the parameter vector lying in a box

$$P = [p_1^-, p_1^+] \times [p_2^-, p_2^+] \times \cdots \times [p_l^-, p_l^+].$$

The task of robust controller design considered in this section is to determine
the optimal value of p so that the feedback control system is stable for all
parameters $q \in Q$.
For the feedback system shown in Fig. 5, the characteristic polynomial is

$$H(s) = F_2(s, p)D(s, q) + F_1(s, p)N(s, q) \qquad (6)$$

and this polynomial varies over the corresponding uncertainty set

$$\{F_2(s, p)D(s, q) + F_1(s, p)N(s, q) : q \in Q\}. \qquad (7)$$

The closed-loop transfer function from the reference R(s) to the output Y(s) is

$$T_y(s, q, p) = \frac{Y(s)}{R(s)} = \frac{F(s, p)G(s, q)}{1 + F(s, p)G(s, q)}. \qquad (8)$$
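To make the eigenvalue optimisation concrete, a hedged sketch of a GA fitness function for this problem is given below: for a candidate controller parameter vector p it samples the uncertainty box Q on a grid, forms the closed-loop characteristic polynomial (6) for each sample, and returns the negative of the largest real part of its roots, so that maximising the fitness pushes all closed-loop poles into the left half-plane. The helper functions Npoly, Dpoly, F1poly and F2poly are placeholders for the problem at hand, not routines from the paper.

function fit = robustfitness(p, Qgrid)
% Illustrative GA fitness for the robust design problem (a sketch, not the
% paper's listing).  p: candidate controller parameter vector;
% Qgrid: matrix whose rows are plant parameter vectors q sampled from Q.
% Npoly, Dpoly, F1poly, F2poly are hypothetical helpers returning the
% coefficient vectors of N(s,q), D(s,q), F1(s,p), F2(s,p).
worst = -Inf;
for i = 1:size(Qgrid,1)
    q  = Qgrid(i,:);
    h1 = conv(F2poly(p), Dpoly(q));      % F2(s,p)D(s,q)
    h2 = conv(F1poly(p), Npoly(q));      % F1(s,p)N(s,q)
    n  = max(length(h1), length(h2));
    H  = [zeros(1,n-length(h1)) h1] + [zeros(1,n-length(h2)) h2];   % equation (6)
    worst = max(worst, max(real(roots(H))));
end
fit = -worst;    % larger fitness means poles further into the left half-plane
end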
$$\frac{\rho \pi l d^4}{32} \qquad (11)$$

$$K = \frac{G \pi d^4}{32\,l} \qquad (12)$$
FIG. 7
6 CONCLUSION
In this paper, an attractive approach for teaching genetic algorithm has been
presented. This approach is based primarily on using MATLAB in
implementing the genetic operators: crossover, mutation, and selection. An
advantage of using such an approach is that the student becomes familiar with
some advanced features of MATLAB; furthermore, with the availability of
other MATLAB toolboxes such as the Control System Toolbox, Neural
Network Toolbox and Fuzzy Logic Toolbox, it is possible for the student to
develop genetic algorithm-based approaches to designing intelligent systems,
which could form the basis of his/her final year or MSc project.
REFERENCES
[1] MATLAB 5, The MathWorks Inc., Natick, MA (1997).
[2] Holland, J., Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor (1975).
[3] Goldberg, D. E., Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley (1989).
[4] Borrie, J. A., Modern Control Systems, Prentice-Hall International (1986).
[5] Sebald, A. V. and Schlenzig, J., 'Minimax design of neural net controllers for highly uncertain plants', IEEE Trans. Neural Networks, 5, pp. 73-82 (1994).