
World Academy of Science, Engineering and Technology

International Journal of Electrical, Computer, Energetic, Electronic and Communication Engineering Vol:6, No:9, 2012

Evolutionary Techniques for Model Order Reduction of Large Scale Linear Systems

S. Panda, J. S. Yadav, N. P. Patidar and C. Ardil

Abstract—Recently, genetic algorithm (GA) and particle swarm optimization (PSO) techniques have attracted considerable attention among various modern heuristic optimization techniques. The GA has been popular in academia and the industry mainly because of its intuitiveness, ease of implementation, and the ability to effectively solve highly non-linear, mixed integer optimization problems that are typical of complex engineering systems. The PSO technique is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations. In this paper both PSO and GA optimization are employed for finding stable reduced order models of single-input single-output large-scale linear systems. Both techniques guarantee stability of the reduced order model if the original high order model is stable. The PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher order model and the reduced order model pertaining to a unit step input. Both methods are illustrated through a numerical example from the literature and the results are compared with a recently published conventional model reduction technique.

Keywords—Genetic Algorithm, Particle Swarm Optimization, Order Reduction, Stability, Transfer Function, Integral Squared Error.

I. INTRODUCTION

The exact analysis of high order systems (HOS) is both tedious and costly, as HOS are often too complicated to be used in real problems. Hence simplification procedures based on physical considerations or on mathematical approaches are generally employed to realize simple models for the original HOS. The problem of reducing a high order system to a lower order system is considered important in the analysis, synthesis and simulation of practical systems. Bosley and Lees [1] and others have proposed a method of reduction based on the fitting of the time moments of the system and its reduced model, but these methods have a serious disadvantage: the reduced order model may be unstable even though the original high order system is stable.

To overcome the stability problem, Hutton and Friedland [2], Appiah [3] and Chen et al. [4] gave different methods, called stability based reduction methods, which make use of some stability criterion. Other approaches in this direction include the methods of Shamash [5] and Gutman et al. [6]. These methods do not make use of any stability criterion but always lead to stable reduced order models for stable systems.

Some combined methods are also given, for example Shamash [7], Chen et al. [8] and Wan [9]. In these methods the denominator of the reduced order model is derived by some stability criterion method while the numerator of the reduced model is obtained by some other method [6, 8, 10].

In recent years, one of the most promising research fields has been "Evolutionary Techniques", an area utilizing analogies with nature or social systems. Evolutionary techniques are finding popularity within the research community as design tools and problem solvers because of their versatility and ability to optimize in complex multimodal search spaces applied to non-differentiable objective functions. Recently, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) techniques have appeared as promising algorithms for handling optimization problems. GA can be viewed as a general-purpose search method, an optimization method, or a learning mechanism, based loosely on Darwinian principles of biological evolution, reproduction and "the survival of the fittest" [11]. GA maintains a set of candidate solutions called a population and repeatedly modifies them. At each step, the GA selects individuals from the current population to be parents and uses them to produce the children for the next generation. In general, the fittest individuals of any population tend to reproduce and survive to the next generation, thus improving successive generations. However, inferior individuals can, by chance, survive and also reproduce. GA is well suited to, and has been extensively applied to, complex design optimization problems because it can handle both discrete and continuous variables and non-linear objective and constraint functions without requiring gradient information [12-16].

PSO is inspired by the ability of flocks of birds, schools of fish, and herds of animals to adapt to their environment, find rich sources of food, and avoid predators by implementing an information sharing approach. The PSO technique was invented in

S. Panda is Professor in the Department of Electrical and Electronics Engineering, National Institute of Science and Technology, Berhampur, Orissa, India (e-mail: [email protected]).
J. S. Yadav is Assistant Professor in the Electronics and Communication Engineering Department, MANIT, Bhopal, India (e-mail: [email protected]).
N. P. Patidar is Senior Lecturer in the Electrical Engineering Department, MANIT, Bhopal, India (e-mail: [email protected]).
C. Ardil is with the National Academy of Aviation, AZ1045, Baku, Azerbaijan, Bina, 25th km, NAA (e-mail: [email protected]).

International Scholarly and Scientific Research & Innovation 6(9) 2012 1105 scholar.waset.org/1999.5/5760

the mid 1990s while attempting to simulate the choreographed, graceful motion of swarms of birds as part of a sociocognitive study investigating the notion of collective intelligence in biological populations [17]. In PSO, a set of randomly generated solutions propagates in the design space towards the optimal solution over a number of iterations, based on a large amount of information about the design space that is assimilated and shared by all members of the swarm [12, 18-20]. GA and PSO are similar in the sense that both are population-based search methods that search for the optimal solution by updating generations. Since the two approaches are supposed to find a solution to a given objective function but employ different strategies and computational effort, it is appropriate to compare their performance.

In this paper, two evolutionary methods for order reduction of large scale linear systems are presented. In both methods, evolutionary optimization techniques are employed for the order reduction, where both the numerator and denominator coefficients of the reduced order model (ROM) are obtained by minimizing an Integral Squared Error (ISE) criterion. The obtained results are compared with a recently published conventional method to show their superiority.

II. STATEMENT OF THE PROBLEM

Let the n-th order system and its reduced model (r < n) be given by the transfer functions:

G(s) = \frac{\sum_{i=0}^{n-1} d_i s^i}{\sum_{j=0}^{n} e_j s^j}    (1)

R(s) = \frac{\sum_{i=0}^{r-1} a_i s^i}{\sum_{j=0}^{r} b_j s^j}    (2)

where a_i, b_j, d_i, e_j are scalar constants. The objective is to find a reduced r-th order model R(s) such that it retains the important properties of G(s) for the same types of inputs.

III. OVERVIEW OF GENETIC ALGORITHM (GA)

Genetic algorithm (GA) has been used to solve difficult engineering problems that are complex and hard to solve by conventional optimization methods. GA maintains and manipulates a population of solutions and implements a survival-of-the-fittest strategy in its search for better solutions. The fittest individuals of any population tend to reproduce and survive to the next generation, thus improving successive generations. Inferior individuals can also survive and reproduce.

Implementation of GA requires the determination of six fundamental issues: chromosome representation, selection function, genetic operators, initialization, termination and evaluation function. Brief descriptions of these issues are provided in the following sections.

A. Chromosome representation

The chromosome representation scheme determines how the problem is structured in the GA and also determines the genetic operators that are used. Each individual or chromosome is made up of a sequence of genes. Various types of representations of an individual or chromosome are: binary digits, floating point numbers, integers, real values, matrices, etc. Generally, natural representations are more efficient and produce better solutions. Real-coded representation is more efficient in terms of CPU time and offers higher precision with more consistent results.

B. Selection function

To produce successive generations, the selection of individuals plays a very significant role in a genetic algorithm. The selection function determines which of the individuals will survive and move on to the next generation. A probabilistic selection is performed based upon the individual's fitness, such that superior individuals have a higher chance of being selected. There are several schemes for the selection process: roulette wheel selection and its extensions, scaling techniques, tournament, normalized geometric, elitist models and ranking methods.

The selection approach assigns a probability of selection P_j to each individual based on its fitness value. In the present study, the normalized geometric selection function has been used. In normalized geometric ranking, the probability of selecting an individual, P_i, is defined as:

P_i = q' (1 - q)^{r-1}    (3)

q' = \frac{q}{1 - (1 - q)^P}    (4)

where
q = probability of selecting the best individual
r = rank of the individual (with best equals 1)
P = population size

C. Genetic operators

The basic search mechanism of the GA is provided by the genetic operators. There are two basic types of operators: crossover and mutation. These operators are used to produce new solutions based on existing solutions in the population. Crossover takes two individuals to be parents and produces two new individuals, while mutation alters one individual to produce a single new solution.
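As a concrete illustration of the normalized geometric ranking of Eqs. (3) and (4), the sketch below (our own Python rendering, not the authors' code; the function name and the values q = 0.15, P = 10 are illustrative assumptions) computes the selection probability for every rank:

```python
def normalized_geometric_probs(pop_size, q):
    """Selection probability P_i = q' * (1 - q)**(r - 1) for the individual
    of rank r (best rank = 1), with q' = q / (1 - (1 - q)**P) -- Eqs. (3)-(4)."""
    q_prime = q / (1.0 - (1.0 - q) ** pop_size)
    return [q_prime * (1.0 - q) ** (r - 1) for r in range(1, pop_size + 1)]

# With q = 0.15 and P = 10 the best-ranked individual is picked with
# probability q' (about 0.187), decaying geometrically with rank.
probs = normalized_geometric_probs(10, 0.15)
print([round(p, 4) for p in probs])
```

Because the probabilities form a finite geometric series normalized by q', they sum exactly to 1 and can be fed directly to a roulette-style draw.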


The following genetic operators are usually employed: simple crossover, arithmetic crossover and heuristic crossover as crossover operators, and uniform mutation, non-uniform mutation, multi-non-uniform mutation and boundary mutation as mutation operators. Arithmetic crossover and non-uniform mutation are employed in the present study as the genetic operators. Simple crossover generates a random number r from a uniform distribution from 1 to m and creates two new individuals using the equations:

x_i' = \begin{cases} x_i, & \text{if } i < r \\ y_i, & \text{otherwise} \end{cases}    (5)

y_i' = \begin{cases} y_i, & \text{if } i < r \\ x_i, & \text{otherwise} \end{cases}    (6)

Arithmetic crossover produces two complementary linear combinations of the parents, where r = U(0, 1):

\bar{X}' = r \bar{X} + (1 - r) \bar{Y}    (7)

\bar{Y}' = r \bar{Y} + (1 - r) \bar{X}    (8)

Non-uniform mutation randomly selects one variable j and sets it equal to a non-uniform random number:

x_i' = \begin{cases} x_i + (b_i - x_i) f(G), & \text{if } r_1 < 0.5 \\ x_i - (x_i - a_i) f(G), & \text{if } r_1 \geq 0.5 \\ x_i, & \text{otherwise} \end{cases}    (9)

f(G) = \left( r_2 \left( 1 - \frac{G}{G_{max}} \right) \right)^b    (10)

where
r_1, r_2 = uniform random numbers between 0 and 1
G = current generation
G_max = maximum number of generations
b = shape parameter

D. Initialization, termination and evaluation function

An initial population is needed to start the genetic algorithm procedure. The initial population can be randomly generated or can be taken from other methods.

The GA moves from generation to generation until a stopping criterion is met. The stopping criterion could be a maximum number of generations, a population convergence criterion, lack of improvement in the best solution over a specified number of generations, or a target value for the objective function.

Evaluation functions or objective functions of many forms can be used in a GA so that the function can map the population into a partially ordered set. The computational flowchart of the GA optimization process employed in the present study is given in Fig. 1.

[Fig. 1. Flowchart of the genetic algorithm: Start -> Specify the parameters for GA -> Generate initial population (Gen. = 1) -> Find the fitness of each individual in the current population -> if Gen. > Max. Gen., Stop; otherwise apply the GA operators (selection, crossover and mutation), set Gen. = Gen. + 1, and repeat.]

IV. PARTICLE SWARM OPTIMIZATION METHOD

In conventional mathematical optimization techniques, the problem formulation must satisfy mathematical restrictions, advanced computer algorithms may be required, and the solution may suffer from numerical problems. Further, in a complex system consisting of a number of controllers, the optimization of several controller parameters using conventional optimization is a very complicated process and sometimes gets stuck at a local minimum, resulting in sub-optimal controller parameters. In recent years, one of the most promising research fields has been "Heuristics from Nature", an area utilizing analogies with nature or social systems. These heuristic optimization methods a) may find a global optimum, b) can produce a number of alternative solutions, c) place no mathematical restrictions on the problem formulation, and are d) relatively easy to implement and e) numerically robust. Several modern heuristic tools have evolved in the last two decades that facilitate solving optimization problems that were previously difficult or impossible to solve. These tools include evolutionary computation, simulated annealing, tabu search, genetic algorithms, particle swarm optimization, etc. Among these heuristic techniques, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) have appeared as promising algorithms for handling optimization problems. These techniques are finding popularity within the research community as design tools and problem solvers because of their versatility and ability to optimize in complex multimodal search spaces applied to non-differentiable objective functions.

The PSO method is a member of the wide category of swarm intelligence methods for solving optimization problems. It is a population based search algorithm where each individual is referred to as a particle and represents a candidate solution.
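The arithmetic crossover of Eqs. (7)-(8) and the non-uniform mutation of Eqs. (9)-(10) can be sketched in Python as follows. This is our own illustrative sketch, not the authors' code: the function names and the shape parameter b = 3.0 are assumptions, and Eq. (9) is taken in its standard form, with the second branch stepping toward the lower bound a_j.

```python
import random

def arithmetic_crossover(x, y):
    """Eqs. (7)-(8): two complementary linear combinations of the
    parents x and y, with r drawn from U(0, 1)."""
    r = random.random()
    child1 = [r * xi + (1 - r) * yi for xi, yi in zip(x, y)]
    child2 = [r * yi + (1 - r) * xi for xi, yi in zip(x, y)]
    return child1, child2

def nonuniform_mutation(x, lo, hi, gen, max_gen, b=3.0):
    """Eqs. (9)-(10): pick one variable j and perturb it toward a bound;
    the step size f(G) shrinks as gen approaches max_gen."""
    j = random.randrange(len(x))
    f = (random.random() * (1.0 - gen / max_gen)) ** b   # Eq. (10)
    mutant = list(x)
    if random.random() < 0.5:
        mutant[j] = x[j] + (hi[j] - x[j]) * f   # step toward upper bound b_j
    else:
        mutant[j] = x[j] - (x[j] - lo[j]) * f   # step toward lower bound a_j
    return mutant
```

Because the two children are complementary combinations, child1[i] + child2[i] always equals x[i] + y[i], and a mutant always stays inside the box [lo, hi].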


Each particle in PSO flies through the search space with an adaptable velocity that is dynamically modified according to its own flying experience and the flying experience of the other particles. In PSO, each particle strives to improve itself by imitating traits from its successful peers. Further, each particle has a memory and hence is capable of remembering the best position in the search space it has ever visited. The position corresponding to the best fitness is known as pbest, and the overall best out of all the particles in the population is called gbest [12].

The modified velocity and position of each particle can be calculated using the current velocity and the distances from pbest_{j,g} and gbest_g, as shown in the following formulas [12, 17-20]:

v_{j,g}^{(t+1)} = w v_{j,g}^{(t)} + c_1 r_1() (pbest_{j,g} - x_{j,g}^{(t)}) + c_2 r_2() (gbest_g - x_{j,g}^{(t)})    (11)

x_{j,g}^{(t+1)} = x_{j,g}^{(t)} + v_{j,g}^{(t+1)}    (12)

with j = 1, 2, ..., n and g = 1, 2, ..., m, where
n = number of particles in the swarm
m = number of components of the vectors v_j and x_j
t = number of iterations (generations)
v_{j,g}^{(t)} = the g-th component of the velocity of particle j at iteration t, with v_g^{min} <= v_{j,g}^{(t)} <= v_g^{max}
w = inertia weight factor
c_1, c_2 = cognitive and social acceleration factors respectively
r_1, r_2 = random numbers uniformly distributed in the range (0, 1)
x_{j,g}^{(t)} = the g-th component of the position of particle j at iteration t
pbest_j = pbest of particle j
gbest = gbest of the group

The j-th particle in the swarm is represented by a d-dimensional vector x_j = (x_{j,1}, x_{j,2}, ..., x_{j,d}) and its rate of position change (velocity) is denoted by another d-dimensional vector v_j = (v_{j,1}, v_{j,2}, ..., v_{j,d}). The best previous position of the j-th particle is represented as pbest_j = (pbest_{j,1}, pbest_{j,2}, ..., pbest_{j,d}). The index of the best particle among all the particles in the swarm is represented by gbest_g. In PSO, each particle moves in the search space with a velocity according to its own previous best solution and its group's previous best solution. The velocity update in PSO consists of three parts, namely the momentum, cognitive and social parts. The balance among these parts determines the performance of a PSO algorithm. The parameters c_1 and c_2 determine the relative pull of pbest and gbest, and the parameters r_1 and r_2 help in stochastically varying these pulls. In the above equations, superscripts denote the iteration number. Fig. 2 shows the velocity and position updates of a particle for a two-dimensional parameter space. The computational flowchart of the PSO algorithm employed in the present study for the model reduction is shown in Fig. 3.

[Fig. 2. Description of the velocity and position updates in particle swarm optimization for a two-dimensional parameter space: the new velocity v_{j,g}^{(t+1)} combines the momentum (current motion) part w v_{j,g}^{(t)}, the cognitive part pulling toward pbest_j and the social part pulling toward gbest.]

[Fig. 3. Flowchart of PSO for order reduction: Start -> Specify the parameters for PSO -> Generate initial population (Gen. = 1) -> Find the fitness of each particle in the current population -> if Gen. > Max. Gen., Stop; otherwise update the particle positions and velocities using Eqns. (11) and (12), set Gen. = Gen. + 1, and repeat.]
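One iteration of the velocity and position updates (11)-(12), including the velocity limits v_g^{min} <= v <= v_g^{max}, can be sketched as follows. This is our illustrative sketch: the defaults w = 0.7, c_1 = c_2 = 2.0 and the symmetric clamp [-v_max, v_max] are assumptions, not the paper's tuned settings.

```python
import random

def pso_update(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, v_max=1.0):
    """Eqs. (11)-(12): momentum + cognitive + social parts, with each
    velocity component clamped to [-v_max, v_max]."""
    new_x, new_v = [], []
    for g in range(len(x)):
        vel = (w * v[g]
               + c1 * random.random() * (pbest[g] - x[g])   # cognitive part
               + c2 * random.random() * (gbest[g] - x[g]))  # social part
        vel = max(-v_max, min(v_max, vel))                  # velocity limits
        new_v.append(vel)
        new_x.append(x[g] + vel)                            # position update, Eq. (12)
    return new_x, new_v
```

A particle sitting exactly on both its pbest and the swarm's gbest with zero velocity stays put, since every term of Eq. (11) vanishes.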


V. NUMERICAL EXAMPLES

Let us consider the system described by the transfer function [21, 22]:

G(s) = \frac{s^3 + 7s^2 + 24s + 24}{(s + 1)(s + 2)(s + 3)(s + 4)}    (13)

for which a second order reduced model R_2(s) is desired.

A. Application of PSO and GA

While applying PSO and GA, a number of parameters are required to be specified; an appropriate choice of the parameters affects the speed of convergence of the algorithm. For the implementation of PSO, several parameters must be specified, such as c_1 and c_2 (the cognitive and social acceleration factors, respectively), the initial inertia weight, the swarm size, and the stopping criteria. These parameters should be selected carefully for efficient performance of PSO. The constants c_1 and c_2 represent the weighting of the stochastic acceleration terms that pull each particle toward the pbest and gbest positions. Low values allow particles to roam far from the target regions before being tugged back, while high values result in abrupt movement toward, or past, the target regions. Hence, the acceleration constants were set to 2.0 according to past experience. Suitable selection of the inertia weight w provides a balance between global and local exploration, thus requiring fewer iterations on average to find a sufficiently optimal solution. As originally developed, w often decreases linearly from about 0.9 to 0.4 during a run [17, 18]. One more point that affects the optimal solution is the range chosen for the unknowns. For the very first execution of the program a wider solution space can be given, and after obtaining a solution, one can narrow the solution space around the values obtained in the previous iterations.

For the implementation of GA, normalized geometric selection is employed, which is a ranking selection function based on the normalized geometric distribution. Arithmetic crossover takes two parents and performs an interpolation along the line formed by the two parents. Non-uniform mutation changes one of the parameters of the parent based on a non-uniform probability distribution. This Gaussian distribution starts wide and narrows to a point distribution as the current generation approaches the maximum generation.

The objective function J is defined as the integral squared error of the difference between the responses, given by the expression:

J = \int_0^{\infty} [y(t) - y_r(t)]^2 \, dt    (14)

where y(t) and y_r(t) are the unit step responses of the original and reduced order systems.

B. Results

The reduced 2nd order model obtained employing the PSO technique is:

R_2(s) = \frac{2.9319s + 7.8849}{3.8849s^2 + 11.4839s + 7.8849}    (15)

The reduced 2nd order model obtained employing the GA technique is:

R_2(s) = \frac{5.2054s + 8.989}{6.6076s^2 + 14.8941s + 8.989}    (16)

The convergence of the objective function with the number of generations for both PSO and GA is shown in Fig. 4. The unit step responses of the original and reduced systems by both methods are shown in Figs. 5 and 6 for the PSO and GA methods respectively. For comparison, the unit step response of a recently published ROM obtained by the conventional Routh Approximation method [21] is also shown in Figs. 5 and 6.

[Fig. 4. Convergence of the objective function (fitness, x10^-3) with generations for GA and PSO over 50 generations.]

[Fig. 5. Step responses of the original 4th order model, the reduced 2nd order model by PSO and the reduced 2nd order model by Pade approximation.]
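The reported ISE of Eq. (14) for the PSO model (15) can be checked numerically. The sketch below is our own pure-Python check, not the authors' code: it builds a controllable-canonical state-space realization of each transfer function, integrates the unit-step responses with a fixed-step RK4 scheme, and accumulates the squared error. The settings dt = 0.01 s and a 10 s horizon are our assumptions; with them the result should land near the paper's 8.2316x10^-5.

```python
def step_response(num, den, t_end=10.0, dt=0.01):
    """Unit-step response of num(s)/den(s) (coefficients in descending
    powers of s, strictly proper) via a controllable canonical
    state-space realization integrated with classical RK4."""
    lead = den[0]
    den = [c / lead for c in den]            # make the denominator monic
    num = [c / lead for c in num]
    n = len(den) - 1
    a = list(reversed(den[1:]))              # ascending coeffs a_0..a_{n-1}
    c = list(reversed(num)) + [0.0] * (n - len(num))  # output row C

    def deriv(x):                            # x' = A x + B u with u = 1
        dx = [x[i + 1] for i in range(n - 1)]
        dx.append(1.0 - sum(a[i] * x[i] for i in range(n)))
        return dx

    x, ys = [0.0] * n, []
    for _ in range(int(t_end / dt) + 1):
        ys.append(sum(c[i] * x[i] for i in range(n)))
        k1 = deriv(x)
        k2 = deriv([x[i] + 0.5 * dt * k1[i] for i in range(n)])
        k3 = deriv([x[i] + 0.5 * dt * k2[i] for i in range(n)])
        k4 = deriv([x[i] + dt * k3[i] for i in range(n)])
        x = [x[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(n)]
    return ys

dt = 0.01
y = step_response([1, 7, 24, 24], [1, 10, 35, 50, 24])            # G(s), Eq. (13)
yr = step_response([2.9319, 7.8849], [3.8849, 11.4839, 7.8849])   # R2(s), Eq. (15)
ise = sum((p - q) ** 2 for p, q in zip(y, yr)) * dt               # Eq. (14)
print(ise)   # expected to be on the order of 1e-4 or smaller
```

Both models have unit DC gain (the numerator and denominator constant terms are equal), which is why the steady-state responses match exactly in Figs. 5 and 6.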


It can be seen that the steady state responses of both the proposed reduced order models exactly match that of the original model. Also, compared to the conventionally reduced models, the transient responses of the evolutionary reduced models by PSO and GA are very close to that of the original model.

VI. COMPARISON OF METHODS

The performance comparison of both the proposed order reduction algorithms is given in Table I. The comparison is made by computing the error index known as the integral square error (ISE) [23, 24] between the transient parts of the original and reduced order models; it measures the quality of the reduced model (the smaller the ISE, the closer R(s) is to G(s)) and is given by:

ISE = \int_0^{\infty} [y(t) - y_r(t)]^2 \, dt    (17)

where y(t) and y_r(t) are the unit step responses of the original and the second-order reduced systems respectively. This error index is calculated for the reduced order models obtained here and compared with other well known order reduction methods available in the literature.

TABLE I: COMPARISON OF METHODS

| Method | Reduced model | ISE |
|---|---|---|
| Proposed evolutionary method: PSO | (2.9319s + 7.8849) / (3.8849s^2 + 11.4839s + 7.8849) | 8.2316x10^-5 |
| Proposed evolutionary method: GA | (5.2054s + 8.989) / (6.6076s^2 + 14.8941s + 8.989) | 8.6581x10^-5 |
| Routh Approx. [21] | (4.4713 - 0.189762s) / (4.4713 + 4.76187s + s^2) | 0.008 |
| Parthasarathy et al. [25] | (0.6997 + s) / (0.6997 + 1.45771s + s^2) | 0.0303 |
| Shieh and Wei [26] | (2.3014 + s) / (2.3014 + 5.7946s + s^2) | 0.1454 |
| Prasad and Pal [27] | (34.2465 + s) / (34.2465 + 239.8082s + s^2) | 1.6885 |
| J. Pal [28] | (24 + 16.0008s) / (24 + 42s + 30s^2) | 0.0118 |

[Fig. 6. Step responses of the original 4th order model, the reduced 2nd order model by GA and the reduced 2nd order model by Pade approximation.]

VII. CONCLUSION

In this paper, two evolutionary methods for reducing a high order large scale linear system into a lower order system have been proposed. Evolutionary optimization techniques based on particle swarm optimization and the genetic algorithm are employed for the order reduction, where both the numerator and denominator coefficients of the reduced order model are obtained by minimizing an Integral Squared Error (ISE) criterion. The obtained results are compared with a recently published conventional method and other existing well known methods of model order reduction to show their superiority. It is clear from the results presented in Table I that both the proposed methods give a smaller ISE than any of the other order reduction techniques considered.

REFERENCES

[1] M. J. Bosley and F. P. Lees, "A survey of simple transfer function derivations from high order state variable models", Automatica, Vol. 8, pp. 765-775, 1978.
[2] M. F. Hutton and B. Friedland, "Routh approximations for reducing order of linear time-invariant systems", IEEE Trans. Auto. Control, Vol. 20, pp. 329-337, 1975.
[3] R. K. Appiah, "Linear model reduction using Hurwitz polynomial approximation", Int. J. Control, Vol. 28, No. 3, pp. 477-488, 1978.
[4] T. C. Chen, C. Y. Chang and K. W. Han, "Reduction of transfer functions by the stability equation method", Journal of the Franklin Institute, Vol. 308, pp. 389-404, 1979.
[5] Y. Shamash, "Truncation method of reduction: a viable alternative", Electronics Letters, Vol. 17, pp. 97-99, 1981.
[6] P. O. Gutman, C. F. Mannerfelt and P. Molander, "Contributions to the model reduction problem", IEEE Trans. Auto. Control, Vol. 27, pp. 454-455, 1982.
[7] Y. Shamash, "Model reduction using the Routh stability criterion and the Pade approximation technique", Int. J. Control, Vol. 21, pp. 475-484, 1975.
[8] T. C. Chen, C. Y. Chang and K. W. Han, "Model reduction using the stability-equation method and the Pade approximation method", Journal of the Franklin Institute, Vol. 309, pp. 473-490, 1980.
[9] Bai-Wu Wan, "Linear model reduction using Mihailov criterion and Pade approximation technique", Int. J. Control, Vol. 33, pp. 1073-1089, 1981.
[10] V. Singh, D. Chandra and H. Kar, "Improved Routh-Pade approximants: a computer-aided approach", IEEE Trans. Auto. Control, Vol. 49, No. 2, pp. 292-296, 2004.
[11] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, 1989.


[12] S. Panda and N. P. Padhy, "Comparison of Particle Swarm Optimization and Genetic Algorithm for FACTS-based Controller Design", Applied Soft Computing, Vol. 8, Issue 4, pp. 1418-1427, 2008.
[13] S. Panda and N. P. Padhy, "Application of Genetic Algorithm for PSS and FACTS based Controller Design", International Journal of Computational Methods, Vol. 5, Issue 4, pp. 607-620, 2008.
[14] S. Panda and R. N. Patel, "Transient Stability Improvement by Optimally Located STATCOMs Employing Genetic Algorithm", International Journal of Energy Technology and Policy, Vol. 5, No. 4, pp. 404-421, 2007.
[15] S. Panda and R. N. Patel, "Damping Power System Oscillations by Genetically Optimized PSS and TCSC Controller", International Journal of Energy Technology and Policy, Inderscience, Vol. 5, No. 4, pp. 457-474, 2007.
[16] S. Panda and R. N. Patel, "Optimal Location of Shunt FACTS Controllers for Transient Stability Improvement Employing Genetic Algorithm", Electric Power Components and Systems, Taylor and Francis, Vol. 35, No. 2, pp. 189-203, 2007.
[17] J. Kennedy and R. C. Eberhart, "Particle swarm optimization", IEEE Int. Conf. on Neural Networks, Vol. IV, pp. 1942-1948, Piscataway, NJ, 1995.
[18] S. Panda, N. P. Padhy and R. N. Patel, "Power System Stability Improvement by PSO Optimized SSSC-based Damping Controller", Electric Power Components & Systems, Taylor and Francis, Vol. 36, No. 5, pp. 468-490, 2008.
[19] S. Panda and N. P. Padhy, "Optimal location and controller design of STATCOM using particle swarm optimization", Journal of the Franklin Institute, Elsevier, Vol. 345, pp. 166-181, 2008.
[20] S. Panda, N. P. Padhy and R. N. Patel, "Robust Coordinated Design of PSS and TCSC using PSO Technique for Power System Stability Enhancement", Journal of Electrical Systems, Vol. 3, No. 2, pp. 109-123, 2007.
[21] C. B. Vishwakarma and R. Prasad, "Clustering Method for Reducing Order of Linear System using Pade Approximation", IETE Journal of Research, Vol. 54, Issue 5, pp. 326-330, 2008.
[22] S. Mukherjee and R. N. Mishra, "Order reduction of linear systems using an error minimization technique", Journal of the Franklin Institute, Vol. 323, No. 1, pp. 23-32, 1987.
[23] S. Panda, S. K. Tomar, R. Prasad and C. Ardil, "Reduction of Linear Time-Invariant Systems Using Routh-Approximation and PSO", International Journal of Applied Mathematics and Computer Sciences, Vol. 5, No. 2, pp. 82-89, 2009.
[24] S. Panda, S. K. Tomar, R. Prasad and C. Ardil, "Model Reduction of Linear Systems by Conventional and Evolutionary Techniques", International Journal of Computational and Mathematical Sciences, Vol. 3, No. 1, pp. 28-34, 2009.
[25] R. Parthasarathy and K. N. Jayasimha, "System reduction using stability equation method and modified Cauer continued fraction", Proceedings of the IEEE, Vol. 70, No. 10, pp. 1234-1236, Oct. 1982.
[26] L. S. Shieh and Y. J. Wei, "A mixed method for multivariable system reduction", IEEE Trans. Autom. Control, Vol. AC-20, pp. 429-432, 1975.
[27] R. Prasad and J. Pal, "Stable reduction of linear systems by continued fractions", J. Inst. Eng. India, IE(I) Journal-EL, Vol. 72, pp. 113-116, Oct. 1991.
[28] J. Pal, "Stable reduced-order Pade approximants using the Routh-Hurwitz array", Electronics Letters, Vol. 15, No. 8, pp. 225-226, April 1979.

Sidhartha Panda is a Professor at National Institute of Science and Technology, Berhampur, Orissa, India. He received the Ph.D. degree from Indian Institute of Technology, Roorkee, India in 2008, the M.E. degree in Power Systems Engineering in 2001 and the B.E. degree in Electrical Engineering in 1991. Earlier he worked as Associate Professor at KIIT University, Bhubaneswar, India and VITAM College of Engineering, Andhra Pradesh, India, and as Lecturer in the Department of Electrical Engineering, SMIT, Orissa, India. His areas of research include power system transient stability, power system dynamic stability, FACTS, optimisation techniques, distributed generation and wind energy.

Jigyendra Sen Yadav is working as Assistant Professor in the Electronics and Communication Engineering Department, MANIT, Bhopal, India. He received the B.Tech. degree from GEC Jabalpur in Electronics and Communication Engineering in 1995 and the M.Tech. degree from MANIT, Bhopal in Digital Communication in 2002. Presently he is pursuing the Ph.D. His areas of interest include optimization techniques, model order reduction, digital communication, signals and systems, and control systems.

Narayana Prasad Patidar obtained his Ph.D. from Indian Institute of Technology, Roorkee, in 2008. Presently, he is working as a Senior Lecturer in the Electrical Engineering Department, MANIT, Bhopal, India. His research interests are voltage stability, security analysis, power system stability and intelligent techniques.

C. Ardil is with the National Academy of Aviation, AZ1045, Baku, Azerbaijan, Bina, 25th km, NAA.
