Lecture 6 -1-transcript

The document outlines a lecture on evolutionary algorithms, focusing on benchmark functions and their applications in optimization. It discusses the structure of genetic and memetic algorithms, introduces various types of benchmark functions, and emphasizes their importance in testing algorithm performance. The lecture aims to equip students with the ability to design their own multi meme memetic algorithms and understand the implications of different representations and operators.


UNKNOWN

I. Think it is. Right.

SPEAKER 0
All right. Thank you.

UNKNOWN
Thank you. Hi.

SPEAKER 1
Hi, everyone. Hello. Right, sound check. Can everyone hear me clearly at the back? Yeah? Okay. Should I skip this slide? They just scan the code? Yes, thank you for the reminder. So, as I mentioned in the last lecture, there will be two lectures about evolutionary algorithms, and this is the second one. In this lecture we've got a fair bit of content to cover. We will start with some questions, questions for all of us to think about, and in the rest of the lecture we will try to answer them one by one, if we can. I will cover something called benchmark functions. You may have heard of them, but we will go through a few. Then we will get into benchmark function experiments. The last lecture was a bit of an introduction to genetic and memetic algorithms, and we covered their different components. This time we will go through an actual publication. We will go through the actual paper, see their experiment settings, which are a bit more advanced if you like, and then see their results. Around those results we will discuss our questions as well. After that comes another real-world experiment, the travelling salesman problem. Within that we will cover permutation-based operators, which are a somewhat more advanced version of the crossover and mutation we've seen so far. Then we will cover something called the multi-meme memetic algorithm. As you can tell, this is an enhanced version of the memetic algorithm, and we will revisit the benchmark functions again using it. So all these topics are connected, and I would suggest you follow carefully. If anything is unclear, please let me know, okay?

Our learning outcomes: by the end of this lecture you should be able to identify different types of evolutionary algorithms. You should be able to understand the potential issues with the choice of different representations or different genetic operators that we will cover. You should be able to understand the basic components of multi-meme memetic algorithms. Also, at the end of this lecture, you should be able to design your own multi-meme memetic algorithm for an unseen problem.

Just a bit of recap from the last lecture. I'm sure you remember this slide because we've seen it like 100 times, but these are the basic components of GAs: representation, initialisation, fitness values for each individual. We were selecting parents, applying crossover, mutating them, torturing them a little bit, then another evaluation. We replace our previous population with a new one, or we replace some of the individuals, and then we terminate. That was the basic, generic genetic algorithm. Then we've seen the memetic algorithm, which is a slightly more advanced version of it: we simply inject hill climbing, a local search, into it. In the previous lecture that local search was applied after the mutation, but there was a slide about this: it could be applied before the mutation, before the crossover, at different parts of the genetic algorithm. We could apply hill climbing, or local search, or a meme if you like, at any of those points.
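
A minimal sketch of the loop just described, assuming a bit-string representation and a simple bit-flip hill climber injected after mutation. All names and parameter values here are illustrative, not taken from the slides.

    import random

    def hill_climb(individual, fitness, steps=10):
        # A simple bit-flip hill climber: this plays the role of the "meme".
        best = individual[:]
        for _ in range(steps):
            i = random.randrange(len(best))
            candidate = best[:]
            candidate[i] = 1 - candidate[i]
            if fitness(candidate) > fitness(best):
                best = candidate
        return best

    def memetic_algorithm(fitness, length=30, pop_size=20, generations=100):
        # Maximising version; random initialisation of a bit-string population.
        population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            new_population = []
            while len(new_population) < pop_size:
                # Tournament selection of two parents (tournament size 2).
                p1 = max(random.sample(population, 2), key=fitness)
                p2 = max(random.sample(population, 2), key=fitness)
                # One-point crossover.
                cut = random.randrange(1, length)
                child = p1[:cut] + p2[cut:]
                # Bit-flip mutation with a low per-gene rate.
                child = [1 - g if random.random() < 1.0 / length else g for g in child]
                # Local search (the meme) applied after mutation.
                new_population.append(hill_climb(child, fitness))
            population = new_population
        return max(population, key=fitness)

    # Example: maximise the number of ones in the bit string.
    print(sum(memetic_algorithm(lambda bits: sum(bits), length=20, generations=50)))
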
So let's start with the questions for us. As I say, we cannot answer them directly without doing a set of experiments, or without undertaking heavy theoretical, mathematical research. The first question is: when does a genetic algorithm perform better than a memetic algorithm? The second one is: is the choice of meme important? As you remember, a meme means a local search or hill climbing, any type of hill climbing. The next one: which meme gives a better performance in a memetic algorithm? I think you've covered different local search methods in the local search lecture with Warren. And the last question: can we somehow combine several memes to obtain a synergy, a sort of interaction between memes, to improve the performance of our optimisation algorithm? So those are the questions, and for the next two hours we will try to answer them by checking the actual literature, by checking their results and seeing what they claim.

So let's start with benchmark functions. Have you heard of them? No? Okay. Why do we use benchmark functions? Let me give you a simple example. If you buy a laptop, or if you buy a high-speed car, what do you do? You test the speed. How do you test it? You need a clear highway without a speed limit. In the UK there are speed limits, but in Germany there are some highways with no speed limit, so you take your car and test it on that test track. In the same way, when you define your own algorithm or strategy, you test it, and benchmark functions are our test track. They serve as a test bed to compare your algorithm's performance to other algorithms' performances. They are quite useful. Why? Because we already know their global minimum, and they can be easily computed. Some of them are easier than others, but the good thing is that there are plenty of different benchmark functions, and each of them has different characteristics. Those characteristics kind of mimic or simulate real-world conditions. So you design your own algorithm, you apply it to a bunch of different benchmark functions, and you may fail at some of them, but you will see the characteristics of the benchmark functions you failed on and then improve your algorithm. We will get into their different characteristics as well. There is a link here to an organisation that provides different benchmark functions, or test beds, and trust me, there are plenty. We will see a paper link; I think they provide almost 200 different benchmark functions, which means that once you define your algorithm, you will be able to test it on 200 different functions. Question?

SPEAKER 2
So you use this for the search space?

SPEAKER 1
Yes. In your search space, the idea here is basically to test it and see whether you can find the global minimum, and in the meantime, obviously, you'll be testing your search as well. There are some visualisations, but before that let's get into the classification of benchmark functions. There are different classes, or characteristics if you like; on this slide there are four of them. The first is continuity and differentiability: whether the function is continuous or discontinuous, whether there's a jump between the values; we will see some visuals about that. Then dimensionality, or scalability. That means your benchmark function can be extended as much as you want. What does that mean? It can have as many variables as you want. When you define a problem you will have some variables, and after that you can actually add extra variables. But some benchmark functions don't allow you to do that; they are only two dimensions, or five dimensions. Then separability. This is another important one. In this case we check whether the variables within the benchmark function are dependent or independent, whether changing one variable affects the other variables or not. If they are independent, then you can take one variable and optimise it on its own, then take another variable and optimise it, and you can do all of them in parallel. That saves you time and it also saves some computational cost. So separability is an important aspect, or class, of benchmark functions. Finally, modality. This is about the complexity of your benchmark function. There is something called unimodal, which means the function only has one global minimum, and it is relatively simpler than the other benchmark functions. Or there is something called multimodal, which means the function has a global minimum and some other local minima around it, which makes it difficult to solve in a way. By checking all of these you can test your own evolutionary algorithm and see in which cases it works and in which it does not.

Okay, let's see some visuals of these benchmark functions. We start with a really simple one, the sphere (square) function. This is basically squaring all the variables and adding them up, f(x) = x_1^2 + x_2^2 + ... + x_n^2, where the dimension is n. Here at the bottom you see a visualisation of the two-dimensional version, f(x_1, x_2) = x_1^2 + x_2^2, and this is how it looks. It's pretty simple, isn't it? Easy to solve. Now let's check through its characteristics; we will apply some algorithms to all these different benchmark functions in a minute. It is continuous, and it is differentiable. It is separable. What does that mean? It means we can take the different variables separately and visualise or optimise them individually. For example, when you look at it from one side, say we are only looking in this direction, then we only see one variable x, and we can optimise it on its own. It's easy. And it is scalable: we can increase n as much as we want.
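
A minimal sketch of the sphere function as just described, in plain Python. The names are illustrative; the global minimum is 0 at the origin.

    def sphere(x):
        # f(x) = sum of x_i^2; unimodal, separable, scalable in the number of variables.
        return sum(xi * xi for xi in x)

    print(sphere([0.0, 0.0]))      # 0.0, the global minimum
    print(sphere([3.0, -4.0]))     # 25.0
    print(sphere([1.0] * 10))      # scalable: works for any dimension n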

UNKNOWN
Question. What do you mean by differentiable?

SPEAKER 1
Oh, what do I mean by differentiable? Okay, good question. This is about taking the derivative of a function. I'll try to plot it; I think you've covered it in the machine learning module, I believe. Can I zoom in? No, I can't. Let's say one solution is around here, and let's say you are using gradient descent. It has nothing to do with evolutionary algorithms, but in gradient descent optimisation, what do you do? You take the derivative at this point to find which way to go. This is my mouse plotting, that's why it's horrible, I know. You take the derivative at that point, then you step one by one towards the global minimum, hopefully. So that is differentiability: if you can take the derivative at a point, the function is differentiable there, and it allows you to walk through your space. For evolutionary algorithms, I think we will cover that on the next slide. Let me complete this: scalable, which means we can have as many variables as we want. There are also non-differentiable functions, which we will get into in a second.

So for this kind of scalable and separable function there is a trick called delta evaluation; separable functions allow delta evaluation, and this is an example of it. Let me walk you through this example. Let's say we've got n variables, and this is our real-number chromosome. We throw it into our evolutionary algorithm; there is some crossover and mutation there, and it comes back with only one variable changed. Say that variable was previously six and now it is five. What do we do when we evaluate the fitness function, the benchmark function, right here? We don't have to add up all of those n variables again, because you may have a really massive, tremendous number of variables. You simply compute the delta. The previous sum was this, and the delta now is simply five squared minus six squared, because this is the only variable that has changed. Rather than adding them all up again, we get the delta and add it to the previous sum. That is a sort of trick, if you like; it's called delta evaluation, and it is useful. As I said, it saves you time and computational cost, and it can be applied if your benchmark function is separable. Yeah.
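
A minimal sketch of delta evaluation for the separable sphere function, assuming only one gene changes per move; the variable names and values are illustrative.

    def sphere(x):
        return sum(xi * xi for xi in x)

    x = [2.0, 1.0, 6.0, 3.0]
    current_sum = sphere(x)              # full evaluation done once

    # One gene changes: the value 6.0 becomes 5.0 at index 2.
    i, new_value = 2, 5.0
    delta = new_value ** 2 - x[i] ** 2   # 5^2 - 6^2 = -11
    current_sum += delta                 # incremental update, no full re-sum
    x[i] = new_value

    assert current_sum == sphere(x)      # same result as a full evaluation
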
Another unimodal benchmark function is the step function. In the step function we have an operator called the floor operator, which, if you like, rounds the decimal value down to the nearest integer. That's why you may end up with this sort of staircase curve; this is n = 1, basically. As you can see, there are gaps between the function values, so it is discontinuous, and it is not differentiable. For this kind of benchmark function you may not be able to apply the classical gradient-based optimisation algorithms. Luckily for us, evolutionary algorithms work for this kind of function as well, because you simply throw in a bunch of different candidate solutions and find your way by applying crossover and mutation. So this is called the step function, and it is unimodal, discontinuous, non-differentiable. It is separable, because a variable x_i doesn't really affect the other x's, and it's scalable: we can increase n as much as needed.
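
A minimal sketch of a step function in the spirit described here, using the floor operator. This particular form is illustrative; the slides may use a slightly different variant.

    import math

    def step(x):
        # Sum of squared floored values; the flat plateaus make it discontinuous
        # and non-differentiable, but it is still unimodal and separable.
        return sum(math.floor(xi) ** 2 for xi in x)

    print(step([0.2]))        # 0 : everything in [0, 1) maps to the same value
    print(step([0.9]))        # 0 : a gradient would give no direction here
    print(step([1.1, -2.5]))  # 1 + 9 = 10
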
Another benchmark function, and this is a funny one; I'm never sure how to pronounce it: Rastrigin. In this case things are getting a bit more complicated. As you can see, there is a global minimum around here, you can see it here, but there are also a lot of local minima around the global minimum. This is tricky: it can trap you in a local minimum, and you may think, okay, I found the solution, this is the global minimum, but no, it is not. I think this fluctuation comes from the cosine term here. This is another multimodal benchmark function on which you can test your algorithm and see whether you are able to find the global minimum or whether you get trapped in a local minimum. It is continuous; you can take the derivative at any point, so it is differentiable. It is separable: we can take each x_i, whatever it is, and optimise it by itself, independently, for all the variables. And it is scalable; n can increase as much as we need. Here at the bottom you see the two-dimensional visualisation of the Rastrigin function, with n equal to two.
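
For reference, a minimal sketch of the Rastrigin function in its common textbook form; the lecture slides may scale or shift it differently.

    import math

    def rastrigin(x):
        # Global minimum 0 at the origin, with many local minima created
        # by the cosine term (the "fluctuation").
        return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

    print(rastrigin([0.0, 0.0]))   # 0.0, the global minimum
    print(rastrigin([1.0, 1.0]))   # a nearby local minimum, value 2.0
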
Another one, and I think this is the last one. Compared to the previous one, when I look at the figure here, this seems to me relatively a bit easier, to be honest, because the global minimum is kind of more isolated from the local minima here. This function is continuous, and obviously it is differentiable at any point, but it is non-separable this time. Up until this point they were all separable, but this one is non-separable, and it is non-scalable: it is two-dimensional. Oh, how is it two-dimensional? Basically, this axis is the first dimension, x1, and this one, labelled y, is the second dimension; the third axis, if you are asking about that one, is actually f(x, y). Which means that if you take x1 to be minus three and y to be minus three as well, then this function at the top will give you this value, let's say 8 or 9. Do you see what I mean? The third axis is the function value itself, the outcome of the two variables; that's why this is two-dimensional. It is non-separable because one variable affects the other: when you change one variable, it changes how the other one affects the function. And it is non-scalable: you cannot add extra variables. This one is special, it is exponential, and it generally has only two variables, so you will not have extra variables in that case. Okay.
Let's see a quick example of square function optimisation. The square (sphere) function, as you've seen from the beginning, is unimodal and relatively easy to solve. The objective here is to find a set of integers x_i which maximise the function f given below, the square benchmark function, with n equal to three. This time we've got three variables, x1, x2 and x3, and our domain is between -512 and 512, so in the given interval we've got 1024 integers. Now we apply our evolutionary algorithm to this function. Quickly, what have we done so far in genetic and memetic algorithms? We used to work on bit strings; our chromosome was ones and zeros only. So now we will encode these values into ones and zeros to process them through the iterations of the evolutionary algorithm, and then decode them afterwards. I'm just going through that encoding bit quickly. As I said, we've got 1024 values here on the left-hand side, and we map all of them to bit strings: for example, all zeros maps to the first integer, which is -511, and all zeros ending in a one maps to -510. Those are the bits; now we can work on them, we can throw them into genetic or evolutionary algorithms. In the decoding part, once we get the binary back, we obviously decode it. For example, here we've got an x_i of all zeros ending in 11, which equals three on the left-hand side, and then we apply the subtraction operator, shifting by the offset, so three maps back to around -508. So this is how we apply encoding and decoding.

Now let's get back to our example. We've got a function with three parameters, x1, x2 and x3, and each parameter is represented by ten bits. We put them all together and end up with a 30-bit chromosome, which is right here. Remember, in the last lecture we had really small toy examples, but now we are working on 30 bits at a time, and together they represent x1, x2 and x3. Now suppose this is the outcome, one of the solutions from our genetic algorithm. What do we do? We decode it back to the actual values to see what they are: x1 decodes to 810, x2 to 768 and x3 to 6. Then we apply the subtraction operator, which shifts by the minimum value of the interval, and here is the benchmark function result. That is how you apply encoding and decoding on this kind of problem.
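
A minimal sketch of this encode/decode scheme, assuming 10 bits per variable and an offset of -511 as in the example above; the exact offset on the slide may differ by one.

    OFFSET = -511          # assumed lower bound of the interval
    BITS = 10              # bits per variable, giving 2**10 = 1024 integers

    def encode(value):
        # Map an integer in [OFFSET, OFFSET + 1023] to a 10-bit string.
        return format(value - OFFSET, "0{}b".format(BITS))

    def decode(bits):
        # Map a 10-bit string back to the integer it represents.
        return int(bits, 2) + OFFSET

    print(encode(-511))            # '0000000000'
    print(encode(-508))            # '0000000011'
    print(decode("0000000011"))    # -508

    # A 30-bit chromosome is just three 10-bit blocks, one per variable.
    chromosome = encode(-508) + encode(0) + encode(100)
    values = [decode(chromosome[i:i + BITS]) for i in range(0, len(chromosome), BITS)]
    print(values)                  # [-508, 0, 100]
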
Any questions so far? Nothing? All right. I'm going to talk a little bit more on this, and then we will get into the actual experiments. Yes?

UNKNOWN
So these are just like the other problems? They seem to be categorised as just another problem for the evolutionary algorithm to solve.

SPEAKER 3
Same as the knapsack problem?

SPEAKER 1
Yes, yes. I mean, you can imagine that all kinds of fitness functions define such problems. Such as?

SPEAKER 3
Yeah. So what makes these different, then?

SPEAKER 1
So the difference is, as I said, that these are obviously a bit more complicated, and also they are categorised: they've got different characteristics. You will have a chance to challenge your own algorithm in different settings, with different characteristics of the problem. That allows us to check our algorithm under different conditions and see in which conditions it works and in which it doesn't. That is the advantage.

Okay, a bit more on function optimisation, and after this we will get into the actual case studies. Previously our interval was over the integers, but what if we end up with a domain, an interval, of -5.12 to 5.12? How do we deal with that? One solution is again to use binary encoding and decoding, but in this case we keep a precision of two digits after the decimal point. We use the same 1024 numbers; however, this time, once we decode a value, we divide it by 100. It's an easy way to deal with that kind of real number. We can also use other representations to deal with real numbers, for example using the real number itself in the chromosome. But representation is important: when you encode and decode your interval, you have to stick to the actual boundaries, because the issue is that you may go over the boundaries when you apply mutation or crossover. That kind of issue is called redundancy. Let's say we are given this interval, we do some encoding and decoding, but in the end we cover a solution space between -540 and 540, which means we have expanded our solution space. We waste time on unnecessary regions, and we also waste some computation there. That's why we have to be careful when we encode and decode and when we choose our representation technique.
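
A minimal sketch of that trick, decoding 10-bit integers and dividing by 100 to cover roughly -5.12 to 5.12; the exact offset is again an assumption.

    BITS = 10

    def decode_real(bits, offset=-512):
        # Decode to an integer first, then scale to two decimal places.
        return (int(bits, 2) + offset) / 100.0

    print(decode_real("0000000000"))   # -5.12, the lower boundary
    print(decode_real("1111111111"))   #  5.11, the upper boundary
    print(decode_real("1000000000"))   #  0.0
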
Okay, let's get into an actual case study. This is benchmark function optimisation, and I will refer to this paper led by Ender Özcan, who is a professor in our school. The work is called "A comprehensive analysis of hyper-heuristics", and it was published in the Intelligent Data Analysis journal. What they did was test different benchmark functions and compare the results. Let me quickly explain this table to you. As I said, there are a bunch of different benchmark functions: Sphere, Foxhole, Goldberg, Whitley, and so on and so forth. There are 14 of them, and they label them from F1 to F14. These are the labels we will be using on the results table, so just be careful about this: F1 means the sphere function that we've seen, and F5, as we will see, is the Foxhole function, another benchmark function.

UNKNOWN
Question. Sorry, what algorithms did they test again?

SPEAKER 1
Which algorithms they tested? We will see that in a second, okay. These are the domains; they chose them by design. And this is the optimum value: for the sphere function, as you remember, the global optimum was zero. For some of them it is minus one, but most of them are zero, actually. And here are the characteristics: some of them are continuous, some of them are separable, some of them are multimodal while the others are unimodal, and so on. So here on the right-hand side you see the different characteristics. When you test your algorithm you will see for which of them you can achieve the global minimum and for which you cannot. Let's see some visualisations as well. This is F2, Rosenbrock, and this is the two-dimensional visualisation of the benchmark function. What do you think about this one, Foxhole? This is a really tricky one, actually, isn't it? Because it has one global minimum, and all the local minima are really close to the global minimum, and at the top it's quite neutral, quite flat. So your solution may get trapped in one of those foxholes, and it would be difficult to get out. What do you do? Well, you may increase the mutation probability; there was a question about that, about why it is low or high. For gradient-based optimisation this would be a challenging problem as well. What about this one, F10? It is quite neutral over most of the space, and there's only one little hole to get into to reach the global minimum. This is also another challenge for optimisation algorithms.
Okay, we will see the results in a second, but before that, we talked about encoding; I want to get into a bit of binary encoding versus Gray encoding. I don't know whether you have heard about them, but Gray encoding is something we prefer because, between neighbouring values, Gray encoding changes only one bit. For example, look at the binary representation and focus on three: it is represented by 011, okay. When we change the last bit we jump to two; we end up with 010. When I say change one bit, what does that mean? We may apply mutation and flip one of the bits, and then we can jump to the solution two here. But to be able to jump to four in binary encoding, we would have to change all three bits, which is not practical for us, because in mutation we generally change one bit and we want to explore the neighbouring values. Why do we want to explore the neighbouring values? Because that three is already a good solution for us, and we want to see its close neighbours, which potentially are also good solutions. In binary encoding, changing one bit doesn't let you get to four; you won't be able to reach the solution four, 100, by changing one bit. In Gray encoding you are able to, because it is designed so that neighbouring values differ in only one bit: for this one you change the last bit here, and to be able to jump to four you change only this bit. This is called Gray encoding, and in the experiment the representation was chosen to be Gray encoding, okay.
dimension for this fair function. Remember we've seen only two dimensions. But now
they've got ten different dimension for most of the benchmark function. And each
each of them, each dimension represented by 30 bits. Now they've got 300 bits for
that simple simple square function, 300 for most of them as well. Yeah. And here
you see the population size. This is their experiment settings. They use grey
encoding initialisation. Remember random made selection. We've covered this
tournament selection with size two. Crossover is one point crossover and
probability is one. Traditional mutation that we cover in the last lecture. And the
mutation rate is one over two. And whatever the chromosome length here for the
first example first benchmark function the chromosome length is um 300. And the
mutation rate is one over 600. It's pretty low. Transgenerational replacement. We
also cover this, um, in the TGA. This they use the keeping the only two best
individual from the previous generation. Remember we were deciding that you know
how many individual we will change from the previous generation. In this particular
settings. They only kept two best from the previous generation. They kill the rest
and they take the two best to the next generation. Yeah. This is their experiment
settings for 14 different benchmark function okay, this is the important bit. Also
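
A minimal sketch of that replacement scheme, keeping the two best of the old generation and filling the rest with offspring; the details here are a guess, and the paper may handle ties or ordering differently.

    def transgenerational_replace(old_pop, offspring, fitness, elites=2):
        # Keep the best `elites` individuals of the old population,
        # then fill up with the best offspring until the size matches.
        survivors = sorted(old_pop, key=fitness, reverse=True)[:elites]
        rest = sorted(offspring, key=fitness, reverse=True)[:len(old_pop) - elites]
        return survivors + rest

    # Example with fitness = the value itself (maximisation).
    print(transgenerational_replace([5, 9, 1, 7], [6, 2, 8, 3], fitness=lambda v: v))
    # [9, 7, 8, 6]
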
This is the important bit. There was also a question about what sort of algorithms they tested. They tested many. Basically they used a genetic algorithm, here, okay, the genetic algorithm. They also used a memetic algorithm with the steepest descent hill climbing method; in this case the meme, the local search, is different. They've got another memetic algorithm with another local search, a memetic algorithm with random mutation hill climbing, and another memetic algorithm with Davis' bit-wise hill climbing. So, different memetic algorithms with different local searches, and they label them MA0, MA1, MA2 and MA3. So here you see five different optimisation algorithms, and each of them has different settings with a different local search. They came up with even more algorithms to test: they inject some bias into the algorithm, such as a hill climber whose operator is biased towards zero, which means that, say in the mutation, instead of flipping a bit you inject a zero: if it is zero it stays zero, and if not, you make it zero. This is a kind of bias injected into the algorithm, or another type of local search with different operators. These are the expected poor performers, MA4, MA5, up to MA7. So how many algorithms did we end up with? Nine of them. They tested 14 different benchmark functions and nine different genetic or memetic algorithms, each with a different local search. This is quite an extensive experiment, if you like. This is the kind of computer they used, because in a publication you generally state that, and all runs are repeated 50 times. Remember, they take F1, the sphere function, apply the GA to it and repeat that 50 times. This is not the iteration count; there are iterations inside the GA. They complete the whole run and repeat the experiment 50 times, and then they move on to MA0: again another 50 experiments, again and again and again. So they ended up with a really massive set of experiments.

Performance indicators: they use something called the success rate, the effectiveness of an algorithm on a function. This is the ratio of runs in which you reach the optimum solution, in proportion to the total number of runs, 50. Let's say for F1, the sphere function, you apply the genetic algorithm 50 times; if you get the optimum solution 25 times out of 50, the success rate is one over two: in half of those 50 runs the GA was able to find the global minimum. So that is the success rate. They also have the average number of evaluations, or configurations, which is about efficiency. What does it mean? It's roughly how many configurations, how many evaluations, were done through all these experiments. And the bar charts show the results: if the success rate is 100% then the bar is drawn, as you will see in a second, and if not, it isn't, basically. You've got a benchmark function, you apply one algorithm, you repeat that 50 times, and if in all of those runs we get the global minimum, the best or optimum solution, then we plot the bar. You will see.
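
A minimal sketch of those two indicators, assuming each run reports whether it reached the known optimum and how many evaluations it used. The data is made up for illustration, and whether the paper averages over successful runs only is an assumption here.

    def success_rate(runs):
        # Fraction of runs that reached the known global optimum.
        return sum(1 for r in runs if r["found_optimum"]) / len(runs)

    def average_evaluations(runs):
        # Mean number of fitness evaluations, counted over successful runs only.
        successful = [r["evaluations"] for r in runs if r["found_optimum"]]
        return sum(successful) / len(successful) if successful else None

    runs = [{"found_optimum": True, "evaluations": 1200},
            {"found_optimum": True, "evaluations": 900},
            {"found_optimum": False, "evaluations": 5000},
            {"found_optimum": True, "evaluations": 1500}]
    print(success_rate(runs))          # 0.75
    print(average_evaluations(runs))   # 1200.0
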
Okay, here are the results. For the next 5 to 10 minutes or so I'll talk about these. As I said, this is extensive research with many different experiments, and I won't be able to cover them all, but I'll give you a bit of the flavour, and there's a link; if you're interested, I would suggest you check through the paper, as their discussion is an interesting one. So, as I said, they started with the sphere function, F1. This is the visualisation, and they applied the GA; the bar here, let me get the pointer, shows the GA's performance. Since the bar exists, what does that mean? It means that over 50 runs the GA was able to give us the optimum solution; for each run we got the optimum solution from the GA. The y axis here represents the number of configurations, which means that a lower bar is better. What does that mean? It means it finds the best solution earlier than the others; if the bar is high, the algorithm still finds the best solution, but a bit later than the others. So this is the GA's performance. As you know, they tested nine different algorithms, and here are the other results. What do we see here? Which one is the best? The lower the better, and the GA here is able to find the optimum solution earlier than the others. So that also answers one of our questions, when does a GA work better than a memetic algorithm: we will get into more results as well, but for a simple problem like the sphere, the GA seems to work well, maybe even better and faster than the others, given that all of them provide the optimum solution in every run.

Anyway, another benchmark function, Rosenbrock, is this one here. The GA was not able to provide the optimum result in every one of the 50 runs, and neither were MA4, 6 and 7; so these are the MA1 and MA3 results. Here is another set of results, for different benchmark functions: as you know, each chart represents a different function, across the nine different optimisation algorithms. Foxhole was the challenging one; obviously most of them failed to find the optimum solution in every one of the 50 runs, yet MA2, which is the memetic algorithm with random mutation hill climbing, works okay, and MA3, with Davis' bit-wise hill climbing, works even better, because it was able to find the optimal solution earlier than MA2. And these are the rest of the results. I mean, we could talk about all these results for the next five hours, but I'm not going to go through each of them. Easom was a challenging one, and MA3 again, with Davis' bit-wise hill climbing, seems to work well even though the problem is challenging.

Okay, let me summarise this and then we will have a ten-minute break. MA0 seems to be the best choice for these three different benchmark functions. MA1 works for F6 to F8. MA3 works for these plateau-like functions. And for functions like F1, F11 or F9 the genetic algorithm performs slightly better than the memetic algorithms; they also applied statistical analysis to the results, and the statistical tests say that the differences are insignificant, yet on the figure, as you can tell, the GA performs slightly better than the others, MA2 turns out to be worse, and MA3 is the best. When we go through all these results, the point is that different memes lead to different performances. That's why there was a question at the beginning: designing the right meme for the problem in hand is important, so we need to know which meme to use for which problem. But the question for us is, how do we know that? That is what we will cover in the next part. Okay, let's have a ten-minute break. Thank you.
