This section presents the analysis and design of the proposed controllers in two parts. The first part covers the design and implementation of the proportional–integral–derivative (PID) and linear quadratic regulator (LQR) controllers, both tuned with the genetic algorithm (GA) optimization method; it reviews the theoretical foundations of the PID and LQR controllers and then explains in detail how the genetic algorithm is applied to enhance their performance and efficiency. The second part covers the design and development of the fuzzy logic controller and the fuzzy integral controller; it reviews the principles of fuzzy logic, explains their application in control systems, and shows how fuzzy logic can be integrated with traditional control methods to improve system responsiveness and robustness. The fuzzy integral controller is introduced as an advanced technique that combines fuzzy logic with integral control to achieve superior control outcomes.
3.1. PID and LQR Controllers with the GA Optimization Method
The genetic algorithm (GA) is considered one of the most prominent and effective optimization techniques across numerous domains, including control system design. It has been extensively employed to find optimal values for controller parameters, a critical step in the controller design process. The method belongs to the category of evolutionary algorithms, which are inspired by the principles of natural selection and genetics. By emulating natural selection, genetic algorithms iteratively evolve solutions to optimization problems, ultimately converging on optimal or near-optimal solutions.
In the context of controller design, the genetic algorithm plays an integral role in augmenting the performance and efficiency of control systems. It commences with a population of potential solutions, each represented as a chromosome, or individual. These individuals are evaluated with a predefined fitness function, which assesses the extent to which each solution satisfies the desired control objectives. The algorithm then applies genetic operators, such as selection, crossover, and mutation, to generate a new population of potential solutions. Through selection, the most proficient individuals are chosen to transmit their attributes to subsequent generations, analogous to the concept of survival of the fittest. Crossover combines characteristics from different individuals, facilitating the creation of potentially superior offspring. Mutation introduces stochastic variations in the offspring's chromosomes, ensuring genetic diversity and helping the algorithm escape local optima. By iteratively applying these operators over multiple generations, the genetic algorithm incrementally refines the solutions, steering them toward optimal parameter values for the controllers. The outcome is a set of optimized controller parameters that enhance the overall performance, stability, and robustness of the control system.
To use the GA, a population of candidate solutions to the optimization problem is first generated; each candidate is encoded as a chromosome, typically a binary string, that contains all of the controller parameters to be optimized. In this paper, the GA is applied to the PID and LQR controllers: for the PID controller, the gains Kp, Ki, and Kd are optimized, while for the LQR controller, the Q and R matrices are optimized.
After the chromosomes are formed, each one is evaluated with the fitness function to measure its performance. Crossover is then applied between pairs of selected individuals, exchanging information between the two chromosomes to create new offspring. Finally, mutation is applied randomly to only some of the individuals in order to increase the diversity of the population.
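The loop described above (population, fitness evaluation, selection, crossover, mutation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the plant (a simple first-order system under unit-step reference), the ISE-style fitness, the gain ranges, and all GA settings are assumptions chosen only for the example.

```python
import random

DT, STEPS = 0.01, 500  # Euler step and horizon for the simulated response

def ise(gains):
    """ISE-style fitness: simulate PID on the plant dy/dt = -y + u."""
    kp, ki, kd = gains
    y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(STEPS):
        e = 1.0 - y                          # error vs. unit-step reference
        integ += e * DT
        u = kp * e + ki * integ + kd * (e - prev_e) / DT
        prev_e = e
        y += (-y + u) * DT                   # forward-Euler plant update
        cost += e * e * DT                   # Riemann sum of e^2
        if abs(y) > 1e6:
            return 1e9                       # penalize unstable candidates
    return cost

def evolve(pop_size=30, gens=40, seed=1):
    rng = random.Random(seed)
    # Chromosome = [Kp, Ki, Kd]; real-coded here instead of a binary string.
    pop = [[rng.uniform(0, 10), rng.uniform(0, 10), rng.uniform(0, 1)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=ise)                    # selection: keep the fittest half
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, 3)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:           # mutation for diversity
                i = rng.randrange(3)
                child[i] = max(0.0, child[i] + rng.gauss(0, 0.5))
            children.append(child)
        pop = parents + children
    return min(pop, key=ise)

best = evolve()
print([round(g, 3) for g in best], round(ise(best), 4))
```

After a few dozen generations the fittest chromosome should comfortably outperform a hand-picked baseline such as Kp = 1 with no integral or derivative action.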
In the domain of control system design utilizing genetic algorithms, the selection of an appropriate fitness function is of the utmost importance. The fitness function serves as a quantitative metric that evaluates the performance of a given set of controller parameters against the specified objectives. The choice of fitness function influences the outcome of the optimization process, as different functions can produce different results. Identifying the most suitable fitness function for the model in use is therefore crucial for achieving optimal performance: each fitness function has its strengths and weaknesses, and its applicability is contingent upon the specific control objectives and evaluation criteria.
Commonly utilized fitness functions include mean squared error (MSE), integral squared error (ISE), integral time squared error (ITSE), and integral time absolute error (ITAE). Each of these functions is designed to address specific control evaluation scenarios by emphasizing performance aspects such as the transient response, steady-state response, or both.
The mean squared error (MSE) function is frequently employed due to its straightforward nature and efficacy in minimizing the average of the squared deviations between the desired and actual system outputs. This function is particularly beneficial when the primary objective is to reduce the overall deviation of the system's response from the desired trajectory. Integral squared error (ISE), on the other hand, integrates the square of the error over time, thereby placing emphasis on mitigating substantial deviations throughout the entire response period. This characteristic renders ISE suitable for systems in which both transient and steady-state errors must be minimized. Integral time squared error (ITSE) further refines this approach by weighting errors that occur later in the response more heavily, making it advantageous for applications where later-stage precision and steady-state accuracy are essential. Meanwhile, the integral time absolute error (ITAE) emphasizes the minimization of time-weighted absolute errors, prioritizing rapid error correction and minimal overshoot. This makes ITAE particularly well-suited for scenarios where a balance between transient and steady-state performance is desirable.
The equations of these fitness functions are represented in Equations (11)–(14).
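For reference, these four criteria take their standard textbook forms (the paper's Equations (11)–(14) are assumed to match these up to notation), where e denotes the error between the desired and actual output over the evaluation horizon T:

```latex
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} e_i^{2}, \qquad
\mathrm{ISE} = \int_{0}^{T} e^{2}(t)\,dt, \qquad
\mathrm{ITSE} = \int_{0}^{T} t\,e^{2}(t)\,dt, \qquad
\mathrm{ITAE} = \int_{0}^{T} t\,\lvert e(t)\rvert\,dt
```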
To use the GA with the PID controller, it is crucial to understand the parts of the PID controller and their effects on the response. The controller consists of three parts (proportional, integral, and derivative), and the sum of their effects forms the controller output. Increasing the proportional gain Kp makes the system respond faster but increases the overshoot, while increasing the derivative gain Kd reduces the overshoot and damps sudden changes in the response. Finally, the integral part eliminates the steady-state error, even though adding it has a negative effect on the transient response.
Equation (15) shows the representation of the control signal of the PID controller, while Figure 4 shows the block diagram of the GA with the PID controller.
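For reference, the textbook parallel form of the PID control signal, which the control law referenced above is assumed to match up to notation, is:

```latex
u(t) = K_{p}\,e(t) + K_{i}\int_{0}^{t} e(\tau)\,d\tau + K_{d}\,\frac{de(t)}{dt}
```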
For the LQR controller, the weighting matrices Q and R are optimized to minimize the quadratic cost function. They play a crucial role in determining the contribution of each state and each control input to the cost, as Q is the state-cost weight matrix and R is the control weight matrix.
Figure 5 shows the block diagram of the LQR controller, a state feedback control technique that uses the quadratic cost function to determine the optimal control input minimizing the performance index of the system. The role of the feedback loop is to measure the state variables of the system so that the optimal control inputs can be calculated.
As the LQR controller uses the state-space representation shown in Equation (16), the quadratic cost function to be minimized is given by Equation (17).
where x(t) is the state vector, u(t) is the control input vector, A is the state matrix, and B is the input matrix.
where Q is the state cost matrix and R is the control cost matrix. Using the optimal values of the Q and R matrices, the controller gain K can be computed using Equations (18)–(20).
where P is the solution to the continuous-time algebraic Riccati equation.
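For reference, the standard continuous-time LQR relations, which the cited equations are assumed to match up to notation, comprise the quadratic cost, the algebraic Riccati equation, and the resulting gain and control law:

```latex
J = \int_{0}^{\infty} \left( x^{T} Q\, x + u^{T} R\, u \right) dt, \qquad
A^{T} P + P A - P B R^{-1} B^{T} P + Q = 0, \qquad
K = R^{-1} B^{T} P, \qquad u(t) = -K x(t)
```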
Finally, the block diagram representing the LQR controller with the GA is shown in Figure 6.
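The gain computation described above can be sketched numerically as follows. This is a minimal illustration under assumed values: the double-integrator plant and the particular Q and R weights are chosen only for the example (in the paper's scheme the GA would supply Q and R), and SciPy's `solve_continuous_are` is used to solve the Riccati equation for P.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant: x1' = x2, x2' = u.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Q weights state deviations, R weights control effort; these values
# are placeholders for whatever the GA would select.
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation for P,
# then form the optimal gain K = R^{-1} B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

# Closed-loop matrix A - B K must have all eigenvalues in the left
# half-plane for the regulated system to be stable.
eigs = np.linalg.eigvals(A - B @ K)
print("K =", K)
print("closed-loop eigenvalue real parts:", eigs.real)
```

For this plant the Riccati equation has a closed-form solution, giving K = [sqrt(10), sqrt(1 + 2*sqrt(10))], which provides a convenient sanity check on the numerical result.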