Literature Review
Khaled Elghayesh
School of Electrical Engineering and Computer Science
University of Ottawa
Ottawa, Canada K1N 6N5
[email protected]
February 9, 2014
Introduction
Industrial robots are reprogrammable multiuse manipulators that are used in several industrial applications. They are generally designed as multiple rigid links connected by joints, with one end attached to the ground and the other end acting as an end effector. The Programmable Universal Machine for Assembly (PUMA) is a robotic manipulator that was developed by Unimation in 1978. Shown in figure 1 [1], the PUMA 600 is one model of the PUMA series; it has six degrees of freedom and consists of seven links with six rotary joints. The problem of calculating the inverse kinematics of a PUMA 600 robot is the main topic of this study: it involves finding the set of joint variable values which realizes a desired end-effector pose for the robot. Determining the pose of the robotic arm includes calculating the vectors for position p and orientation, which are used to define the trajectory of the end effector of a PUMA robot. The reverse process, which calculates the joint variable values, is the inverse kinematics problem. It is a complex non-linear problem that is not well-posed and that has been tackled by a multitude of different approaches, from analytical to numerical to optimization-based ones. The success of each of the previously implemented approaches in the literature has often been limited to certain robotic configurations and environments. Therefore, the need has arisen for a general, robust and efficient mechanism that solves robotic inverse kinematics in a real-time or semi-real-time fashion.
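To make the optimization formulation concrete, the following minimal Python sketch shows the kind of pose-error objective that an optimization-based IK solver, such as the approaches studied here, would minimize. The forward-kinematics routine below is a hypothetical two-link stand-in, not the actual PUMA 600 model; a real implementation would use the robot's Denavit-Hartenberg parameters.

import numpy as np

def forward_kinematics(joints):
    # Hypothetical stand-in: a planar two-link arm with unit link
    # lengths; the real PUMA 600 forward model would map six joint
    # angles to a full position/orientation pose.
    x = np.cos(joints[0]) + np.cos(joints[0] + joints[1])
    y = np.sin(joints[0]) + np.sin(joints[0] + joints[1])
    return np.array([x, y])

def ik_error(joints, target):
    # Fitness for an optimization-based IK solver: distance between
    # the pose reached by `joints` and the desired pose `target`.
    return np.linalg.norm(forward_kinematics(joints) - target)

# A GA (or any other optimizer) would minimize ik_error over the
# joint variables to recover a configuration reaching the target.
print(ik_error(np.array([0.3, 0.7]), np.array([1.2, 0.8])))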
Literature Review
Parallel genetic algorithms (PGAs) are an important member of the wider family of parallel evolutionary algorithms (PEAs). Sequential evolutionary algorithms are population-based stochastic optimization search methods that have proven successful in many applications in engineering and machine learning. The family comprises a multitude of search algorithms and techniques, such as genetic algorithms, artificial neural networks, genetic programming and evolution strategies. The first attempts to propose a parallel computing architecture for a GA, and to investigate the issues underlying the design of an efficient PGA, date back to work by Grefenstette as early as 1981. This was followed by a large body of literature from different researchers in the field covering all aspects of PGAs, such as the work by Mühlenbein and Tanese in [2], [5] and [6]. The success of these new PGA ideas was demonstrated by their application to different well-known scheduling problems, such as the school timetabling problem [7] and the train scheduling problem [8]. Before reviewing the PGA literature, it is worth stating the purpose of PGAs and the objectives of parallelizing a problem through a PGA implementation. PGAs are used not only to obtain a faster solution, minimizing the wall-clock time needed to reach it, but also to maximize the quality of the solution reached, producing average solutions that are closer to the optimum. Having more processors that work on small data structures in parallel gives higher chances of achieving more accurate solutions [3].
However, a remaining challenge is that it is usually hard to measure the speedup of parallel evolutionary algorithms due to their probabilistic nature: each run differs from the others and produces different results, so performance evaluation has to be done in a statistical manner. Most of the literature discusses the different design aspects of PGAs in terms of topologies, subpopulation sizes and isolation time for synchronization between generations. The most fundamental concept of the PGA is migration. A genetic algorithm simulates the natural evolution process by having a population that stores certain characteristics encoded in genes and letting them evolve over time through genetic operators, so that only individuals with better fitness are kept. A PGA adds to that the ability to have multiple subpopulations, each operating independently and communicating according to a defined topology. Migration refers to the exchange of individuals between subpopulations in a manner that does not affect the behavior of the algorithm as a whole from the serial point of view [4]. It is a major point of research in PGAs and was highlighted in the work on migration and communication strategies by Tanese, one of the pioneers in the field [9], and by Belding [10]. Migration directly affects the GA performance. Another interesting paper, on synchronous vs. asynchronous migration and when to migrate strategically, was written by Hijaze and Corne [11].
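As a rough illustration of the island model and migration described above, the following Python sketch (a toy under assumed parameters, not an algorithm from any of the cited works) evolves four subpopulations independently and exchanges their best individuals over a ring topology after a fixed isolation time.

import random

def fitness(x):
    # Toy objective to maximize; the optimum is at x = 3.
    return -(x - 3.0) ** 2

def evolve(pop):
    # One generation: binary tournament selection plus Gaussian mutation.
    new = []
    for _ in pop:
        a, b = random.sample(pop, 2)
        parent = a if fitness(a) > fitness(b) else b
        new.append(parent + random.gauss(0, 0.1))
    return new

islands = [[random.uniform(-10, 10) for _ in range(20)] for _ in range(4)]
MIGRATION_INTERVAL = 5  # isolation time, in generations

for gen in range(1, 51):
    islands = [evolve(pop) for pop in islands]
    if gen % MIGRATION_INTERVAL == 0:
        # Ring topology: each island receives the best individual of its
        # predecessor, which replaces the receiver's worst individual.
        bests = [max(pop, key=fitness) for pop in islands]
        for i, pop in enumerate(islands):
            pop[pop.index(min(pop, key=fitness))] = bests[i - 1]

print(max((max(pop, key=fitness) for pop in islands), key=fitness))

Because migration is rare and only replaces the worst individual, each island still behaves like an independent GA between exchanges, which is the property referred to above as not affecting the serial behavior of the algorithm.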
Fine-grained PGAs rely on a neighborhood structure that is used in operators like selection and mating. Performance degrades as the size of the neighborhood increases; if it is too big, the behavior becomes almost like that of a panmictic population. Sarma and De Jong propose a useful parameter in [12], the ratio of the neighborhood radius to the population radius, to monitor this. PGAs are also categorized by topology. There are a few main streams of PGA topology design that the literature follows and agrees upon. The master-slave scheme is the most commonly used base concept: the master processor stores the population, selects the parents of the next generation, and applies genetic operators to individuals, which are sent to slave processors to be evaluated and returned at the end of each generation [13]. There are many variants on this principle, namely ones that use different communication and memory protocols. Higher processor utilization can be achieved by abolishing the division between generations, so that the master sends and receives without synchronization to account for different processor speeds [14]. Shared memory versus distributed memory is another variant applied in some topologies. This whole approach is termed coarse-grained PGA; in contrast to fine-grained PGA, it involves a higher computation-to-communication ratio, as illustrated in figure 2 [3]. Which of the two is better depends on many factors, most importantly the application itself. Detailed analytical comparisons between both approaches can be found in [14], [15] and [16].
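A minimal sketch of the synchronous master-slave scheme follows. It is an illustrative, assumption-based example: the master keeps the population and applies the genetic operators, while fitness evaluations of each generation are farmed out to a pool of worker ("slave") processes; the objective function and all parameters are arbitrary stand-ins.

import random
from multiprocessing import Pool

def fitness(x):
    # Stand-in for an expensive evaluation performed on a slave processor.
    return -(x - 3.0) ** 2

def next_generation(pop, scores):
    # Master-side genetic operators: binary tournament on the returned
    # scores followed by Gaussian mutation.
    scored = list(zip(pop, scores))
    new = []
    for _ in pop:
        a, b = random.sample(scored, 2)
        parent = a[0] if a[1] > b[1] else b[0]
        new.append(parent + random.gauss(0, 0.1))
    return new

if __name__ == "__main__":
    pop = [random.uniform(-10, 10) for _ in range(40)]
    with Pool(4) as workers:  # the slave processors
        for _ in range(30):
            # Synchronous variant: the master blocks until every
            # individual of the generation has been evaluated; the
            # asynchronous variant cited as [14] drops this barrier.
            scores = workers.map(fitness, pop)
            pop = next_generation(pop, scores)
    print(max(pop, key=fitness))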
Other factors to consider when designing and choosing a PGA for an optimization application include the type of communication between the different subpopulations. Most PGA implementations are synchronous, since the time required to send individuals back and forth for genetic operators already exceeds any performance gains. It has been shown that with a high level of communication, the overhead makes the solution quality almost the same as what a sequential GA would achieve. Another interesting distinction to note here is the difference between cellular evolutionary algorithms (cEA) and distributed evolutionary algorithms (dEA): dEAs are based on separate subpopulations that exchange individuals, while cEAs are based on a neighborhood approach. The more subpopulations we have, the closer we come to a cEA system; however, the bigger the subpopulation size we use, the closer we come to a normal sequential panmictic GA. Sefrioui and Périaux propose the idea of real-coded GAs with two-way migration, involving two types of migration, and also emphasize the distinction between dEA and cEA [18]. Alba and Troya show in [19] that a square or rectangular topology provides better performance for cEAs.

There have been quite a few ready-made successful implementations of PGAs in the literature, based on different architectures. Owing to the ease of the sequential implementation, a PGA can basically be run on most parallel frameworks, such as MPI, PVM and Java RMI, and even on metacomputing internet-based architectures like Globus. The idea of having independent equivalent runs of the algorithm with different initial conditions has been explored before and was quite successful [3]. Given how stochastic the GA is by nature, it is no surprise that this methodology can succeed, especially since statistical information is usually used to evaluate performance and speedup anyway. Calculating the speedup of a PGA is another challenge, again due to the probabilistic, stochastic nature of the algorithms: it is hard to identify which run produced the best execution of a sequential GA so that it can serve as the baseline in the speedup calculation. Therefore, the usual way to analyze the performance of PGAs is to gradually increase the number of processors and statistically measure the performance by recording the best, worst and average performance, both time-wise and solution-accuracy-wise [20].

This has been a general overview of PGA implementations and developments in the literature. There is also a theoretical body of work that describes the behavior and convergence of PGAs; it is somewhat outside the scope of this literature review, as the research methodology for investigating the feasibility of using PGAs for the PUMA 600 inverse kinematics optimization problem will be empirical analysis and comparison with different techniques, in particular those implemented by the author in [20]. A further review and implementation of particular algorithms from the state-of-the-art PGA literature is to be done as the next step in this study.
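The statistical evaluation methodology just described can be sketched as follows; run_pga is a hypothetical stand-in for launching a complete PGA run on a given number of processors, and a real experiment would record solution accuracy alongside wall-clock time.

import statistics
import time

def run_pga(num_workers):
    # Hypothetical placeholder for one full PGA run on `num_workers`
    # processors; here it merely sleeps to imitate shrinking runtimes.
    time.sleep(0.1 / num_workers)

REPEATS = 10  # single runs mean little for a stochastic algorithm

for workers in (1, 2, 4, 8):
    times = []
    for _ in range(REPEATS):
        start = time.perf_counter()
        run_pga(workers)
        times.append(time.perf_counter() - start)
    print(f"{workers} workers: best={min(times):.3f}s "
          f"worst={max(times):.3f}s mean={statistics.mean(times):.3f}s")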
References
1. Grefenstette, John J. Parallel adaptive algorithms for function optimization (Technical report). Vanderbilt University, 1981.
2. Mühlenbein, Heinz. "Evolution in time and space - the parallel genetic algorithm." Foundations of Genetic Algorithms. 1991.
3. Alba, Enrique, and Marco Tomassini. "Parallelism and evolutionary algorithms." IEEE Transactions on Evolutionary Computation 6.5 (2002): 443-462.
4. Mühlenbein, Heinz, M. Schomisch, and Joachim Born. "The parallel genetic algorithm as function optimizer." Parallel Computing 17.6 (1991): 619-632.
5. Mühlenbein, Heinz, Martina Gorges-Schleuter, and Ottmar Krämer. "Evolution algorithms in combinatorial optimization." Parallel Computing 7.1 (1988): 65-85.
6. Tanese, Reiko. "Parallel genetic algorithms for a hypercube." Proceedings of the Second International Conference on Genetic Algorithms, J. J. Grefenstette, Ed., 1987, p. 177.

7. Abramson, David, and J. Abela. "A parallel genetic algorithm for solving the school timetabling problem." 1992.

8. Abramson, David, Graham Mills, and Sonya Perkins. "Parallelisation of a genetic algorithm for the computation of efficient train schedules." Parallel Computing and Transputers 37 (1994): 139-149.
10. Belding, Theodore C. "The distributed genetic algorithm revisited." Proceedings of the Sixth International Conference on Genetic Algorithms, L. J. Eshelman, Ed., 1995, pp. 114-121.
11. Hijaze, Muhannad, and David Corne. "An investigation of topologies and migration schemes for asynchronous distributed evolutionary algorithms." Nature & Biologically Inspired Computing, 2009 (NaBIC 2009), World Congress on. IEEE, 2009.
12. Sarma, Jayshree, and Kenneth De Jong. "An analysis of the effects of neighborhood size and shape on local selection algorithms." Parallel Problem Solving from Nature - PPSN IV. Springer Berlin Heidelberg, 1996. 236-244.
13. Cantú-Paz, Erick. "A survey of parallel genetic algorithms." Calculateurs parallèles, réseaux et systèmes répartis 10.2 (1998): 141-171.
14. Gordon, V. Scott, and Darrell Whitley. "Serial and parallel genetic algorithms as function optimizers." ICGA. 1993.

15. ICGA.
16. Lin, Shyh-Chang, W. F. Punch III, and Erik D. Goodman. "Coarse-grain parallel genetic algorithms: categorization and new approach." Parallel and Distributed Processing, 1994. Proceedings. Sixth IEEE Symposium on. IEEE, 1994.
17. Alba, Enrique, and José M. Troya. "A survey of parallel distributed genetic algorithms." Complexity 4.4 (1999): 31-52.
18. Sefrioui, Mourad, and Jacques Périaux. "A hierarchical genetic algorithm using multiple models for optimization." Parallel Problem Solving from Nature - PPSN VI. Springer Berlin Heidelberg, 2000.
19. Alba, Enrique, and José M. Troya. "Improving flexibility and efficiency by adding parallelism to genetic algorithms." Statistics and Computing 12.2 (2002): 91-114.
20. Elghayesh, Khaled. Solving Inverse Kinematics of PUMA 600 Using Tools of Computational Intelligence. Unpublished Master's Thesis. School of Electrical Engineering and Computer Science, University of Ottawa, 2014.