5.2 - Design Based On State Space

This document provides an overview of state space control design using pole placement. It discusses how to formulate the control design problem using state feedback controllers and the conditions of controllability and observability. It then describes the pole placement technique to compute the state feedback gain matrix K by assigning desired closed-loop poles. The document also discusses how to deal with partial state information using state estimators. Finally, it provides an example to illustrate the pole placement design process.


Diseño Mecatrónico

Design Based on State Space


MCTG1013

Marcelo Fajardo-Pruna, Francisco Yumbla, PhD


Marcelo Fajardo-Pruna, PhD: [email protected], ORCID ID: 0000-0002-5348-4032, SCOPUS ID: 57195539927
Francisco Yumbla, PhD: [email protected], ORCID ID: 0000-0003-4220-010X, SCOPUS ID: 57201852791
Introduction

• This approach requires additional assumptions that we do not need when using the transfer function approach.

• We require that the system be controllable and that the state vector be accessible.

• If some of the states are not available for feedback, an estimator can be built to compute an estimate of the whole state vector, or of part of it, and this estimate is then used for feedback in place of the state vector.

• Separation principle: the controller and the estimator can be designed independently.
Formulation of the Control Design Problem

• Most systems, as built, are either unstable or do not have the desired performance.

• State feedback controller

• It requires extra assumptions, which are summarized as:

• complete access to the state vector,

• and controllability (or stabilizability).

• The condition of accessibility to the state vector may be relaxed, while the second one cannot be.
• If we have only partial access to the state vector, an estimator can be developed to estimate it, so that we can still use state feedback control.

• Another alternative consists of using output feedback control.

• The structure of the state feedback controller is given by:

u(k) = −K x(k)

• where K is the gain matrix that we need to compute.


State Feedback Controller Design

• One of the methods we can use to design a controller that guarantees the desired performance is the pole assignment technique.

• This technique requires complete access to the state vector, or that the system be observable so that we can use an estimator to reconstruct the state; otherwise the control law cannot be computed.

• The gain K in the controller expression needs to be determined.


• The main idea behind this technique consists of transforming the
desired performances to a desired characteristic polynomial that
will provide the desired eigenvalues.

• For the single-input single-output case, if the system is of dimension n, then the gain K contains n scalar gains to be determined, i.e.:

K = [k1 k2 ··· kn]
• The pole placement technique consists first of all of obtaining the
poles of the closed-loop dynamics that give the desired
performances, then using these poles the controller gain K is
computed.

Block diagram of discrete-time linear system


• Let us assume that the desired poles that give the performances are z1, z2, ..., zn,

• and that we can get them from the desired specifications.

• The corresponding desired characteristic polynomial is given by:

Δd(z) = (z − z1)(z − z2)···(z − zn) = z^n + d_{n−1} z^{n−1} + ··· + d1 z + d0

• The closed-loop characteristic polynomial is given by:

Δc(z) = det(zI − A + BK)

• The design approach of pole assignment consists of equating the two characteristic polynomials. Performing this we get the gains in K.
• More often, the specifications of the system are given in continuous time and can combine stability with the overshoot, the settling time, the steady-state error, etc. To get the desired poles in this case, the transformation is made in continuous time to get the desired poles in the s-domain, and with the mapping z = e^{sT}, where T is the sampling period of the system, we can compute the corresponding poles in the z-domain, which should be inside the unit circle.
Example

• Let us assume that we have a dynamical system with two states, x1(k) and x2(k), and suppose that the dynamics of the system have been transformed to the following discrete-time form:

• This system has its poles outside the unit circle and therefore, it is
unstable.

• Firstly, we need to check the controllability of the system.


• Let us also assume that the poles that give the desired performances are given by:

• The corresponding desired characteristic equation is given by:

• Let the controller gain be K = [k1 k2].


• The characteristic equation of the closed-loop dynamics is given
by:

• Equating the two characteristic equations gives:


• If the desired poles are located at 0.1 ± 0.1j, the controller gain is given by:

• To understand the relationship between the pole location and the system response, let us consider the following cases, obtained from the poles 0.1 ± 0.1j by acting on the real and/or imaginary parts:
• desired poles located at 0.4 ± 0.4j
• desired poles located at 0.025 ± 0.025j
• desired poles located at 0.4 ± 0.1j
• desired poles located at 0.025 ± 0.1j
• desired poles located at 0.1 ± 0.4j
• desired poles located at 0.1 ± 0.025j
• poles at 0.4 ± 0.4j: k1 = −6.4, k2 = 8.4
• poles at 0.025 ± 0.025j: k1 = −0.4937, k2 = 9.937
• poles at 0.4 ± 0.1j: k1 = −7.15, k2 = 9.15
• poles at 0.025 ± 0.1j: k1 = −0.4469, k2 = 9.9469
• poles at 0.1 ± 0.4j: k1 = −1.15, k2 = 9.15
• poles at 0.1 ± 0.025j: k1 = −1.9469, k2 = 9.9469
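As a sketch of the procedure above, the following pure-Python example places the poles of a hypothetical unstable two-state system in controllable (companion) form. The matrices A and B, and therefore the resulting gains, are illustrative only; the slides' own example system is not reproduced here.

```python
# Pole placement for a hypothetical 2-state discrete-time system
# x(k+1) = A x(k) + B u(k) with u(k) = -K x(k).
# A is in controllable (companion) form; its values are illustrative.

A = [[0.0, 1.0],
     [-2.0, 3.0]]    # det(zI - A) = z^2 - 3z + 2 -> poles 1 and 2 (unstable)
B = [0.0, 1.0]       # input column

def closed_loop(A, B, K):
    """Return A - B K for a two-state, single-input system."""
    return [[A[i][j] - B[i] * K[j] for j in range(2)] for i in range(2)]

def char_poly(M):
    """Coefficients (c1, c0) of det(zI - M) = z^2 + c1 z + c0."""
    return (-(M[0][0] + M[1][1]),
            M[0][0] * M[1][1] - M[0][1] * M[1][0])

# Desired poles 0.1 +/- 0.1j -> Delta_d(z) = z^2 - 0.2 z + 0.02
d1, d0 = -0.2, 0.02

# For a companion-form A with open-loop polynomial z^2 + a1 z + a0,
# equating coefficients gives k1 = d0 - a0 and k2 = d1 - a1.
a1, a0 = char_poly(A)
K = [d0 - a0, d1 - a1]

print("K =", K)
print("closed-loop polynomial:", char_poly(closed_loop(A, B, K)))
```

Checking that `char_poly(closed_loop(A, B, K))` equals `(d1, d0)` confirms the assignment; for higher-order systems the same idea is usually delegated to a numerical routine such as `scipy.signal.place_poles` or MATLAB's `place`.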
• When the state space description is put in the controllable form,
the computation of the controller’s gains becomes easier.

• In fact, referring to the previous chapter, the controllable form for the open-loop dynamics is given by:
• The corresponding characteristic polynomial of the closed-loop dynamics is:

z^n + (a_{n−1} + k_n) z^{n−1} + ··· + (a_1 + k_2) z + (a_0 + k_1)

• The desired characteristic polynomial can also be put in the following form:

z^n + d_{n−1} z^{n−1} + ··· + d_1 z + d_0

• By equating the characteristic polynomial with the desired characteristic polynomial we get:

k_{i+1} = d_i − a_i, for i = 0, 1, ..., n − 1
• In the previous sessions, we presented a transformation that puts the system description in the controllable canonical form. A question that arises is what relationship exists between the controller gain of the original description, K, and that of the controllable canonical form, K̄.

• To answer this question, notice that the characteristic equation for the closed loop of the system in the controllable canonical form is given by:

det(zI − Ā + B̄K̄) = 0

• where Ā and B̄ are the matrices of the controllable canonical form obtained after the transformation η(k) = Px(k).

• Using the fact that the matrix P is nonsingular and PP⁻¹ = I,

• we can write this as follows:

det(zI − Ā + B̄K̄) = det(zI − PAP⁻¹ + PBK̄) = det(P(zI − A + B K̄P)P⁻¹)

• This gives in turn:

det(zI − Ā + B̄K̄) = det(zI − A + B K̄P)

• This characteristic equation will have the same poles as the characteristic equation of the original description, det(zI − A + BK) = 0,

• if we have the following relation between the controller gains:

K = K̄P
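The relation in question, K = K̄P, can be checked numerically on a small example. The matrices A, B, P and the canonical-form gain K̄ below are illustrative choices, not the slides' system:

```python
# Check that K = Kbar P gives the same closed-loop poles in the original
# coordinates as Kbar gives in the canonical coordinates eta(k) = P x(k).
# All numbers here are illustrative.

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d,  M[0][0] / d]]

def char_poly(M):
    """Coefficients (c1, c0) of det(zI - M) = z^2 + c1 z + c0."""
    return (-(M[0][0] + M[1][1]),
            M[0][0] * M[1][1] - M[0][1] * M[1][0])

A = [[0.0, 1.0], [-2.0, 3.0]]
B = [0.0, 1.0]                      # input column
P = [[1.0, 1.0], [0.0, 1.0]]       # assumed transformation eta = P x

Abar = matmul(matmul(P, A), inv2(P))          # P A P^-1
Bbar = [P[0][0] * B[0] + P[0][1] * B[1],
        P[1][0] * B[0] + P[1][1] * B[1]]      # P B

Kbar = [1.0, 2.0]                   # gain designed in canonical coordinates
K = [Kbar[0] * P[0][0] + Kbar[1] * P[1][0],
     Kbar[0] * P[0][1] + Kbar[1] * P[1][1]]   # K = Kbar P

def cl(A, B, K):
    return [[A[i][j] - B[i] * K[j] for j in range(2)] for i in range(2)]

print(char_poly(cl(A, B, K)))           # same characteristic polynomial ...
print(char_poly(cl(Abar, Bbar, Kbar)))  # ... in both coordinate systems
```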


Example

• Let us assume that we have a dynamical system with two states, x1(k) and x2(k), and suppose that the dynamics of the system have been transformed to the following discrete-time form:

• Firstly, we need to check the controllability of the system.


• It is important to notice that the system in open loop is unstable, since its poles are outside the unit circle.

• Let us also assume that the poles that give the desired
performances are given by:

• The corresponding desired characteristic equation is given by:

• Let the controller gain be K = [k1 k2].


• Using the relationship k_{i+1} = d_i − a_i, for i = 0, 1, we get:

• More often, the computation for a general state space description using the pole assignment technique is tedious, and one of the methods used to overcome this problem is Ackermann's method.

• This method is based on the following relations:


• For the closed-loop dynamics we have also:

• To use this relation we first need to expand the terms (A − BK)^i.

• In order to use it, multiply these relations respectively by d_0, d_1, ..., d_{n−1} and 1, and sum them; we get:
• Now if the system is controllable, which means that the inverse of the controllability matrix exists, and using the fact that Δd(A − BK) = 0, this relation can be rewritten as follows:

• To extract the controller gain, K, from this relation we multiply both sides by [0 0 ··· 0 1], which gives in turn:

K = [0 0 ··· 0 1] C⁻¹ Δd(A)

• where C = [B AB ··· A^{n−1}B] is the controllability matrix.
• In summary, Ackermann's method consists of the following steps:

1. compute the desired characteristic polynomial Δd(z) as before, i.e.:

Δd(z) = (z − z1)···(z − zn) = z^n + d_{n−1} z^{n−1} + ··· + d_0

2. use the Cayley–Hamilton theorem to compute Δd(A), i.e.:

Δd(A) = A^n + d_{n−1} A^{n−1} + ··· + d_1 A + d_0 I

3. use the following formula to compute the gain K:

K = [0 0 ··· 0 1] C⁻¹ Δd(A)
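The three steps can be sketched in pure Python for a hypothetical two-state system. The matrices and desired poles below are illustrative; the formula itself, K = [0 ··· 0 1] C⁻¹ Δd(A), is the standard Ackermann formula:

```python
# Ackermann's formula K = [0 1] C^-1 Delta_d(A) for a 2-state,
# single-input system; A, B and the desired poles are illustrative.

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d,  M[0][0] / d]]

A = [[0.0, 1.0], [-2.0, 3.0]]
B = [0.0, 1.0]

# Controllability matrix C = [B, A B]
AB = [A[0][0] * B[0] + A[0][1] * B[1],
      A[1][0] * B[0] + A[1][1] * B[1]]
Cmat = [[B[0], AB[0]], [B[1], AB[1]]]

# Step 1: desired poles 0.1 +/- 0.1j -> Delta_d(z) = z^2 - 0.2 z + 0.02
d1, d0 = -0.2, 0.02

# Step 2: Delta_d(A) = A^2 + d1 A + d0 I  (Cayley-Hamilton expansion)
A2 = matmul(A, A)
I = [[1.0, 0.0], [0.0, 1.0]]
DeltaA = [[A2[i][j] + d1 * A[i][j] + d0 * I[i][j] for j in range(2)]
          for i in range(2)]

# Step 3: K = (last row of C^-1) * Delta_d(A)
last = inv2(Cmat)[1]
K = [last[0] * DeltaA[0][j] + last[1] * DeltaA[1][j] for j in range(2)]
print("K =", K)
```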


• In this example, we will show that the design of the state feedback
controller is affected by the canonical forms i.e. the controller
gains are different. For this purpose, let us consider a dynamical
system with output y(k) and input u(k) has the following dynamics:

• Let us firstly establish the canonical forms:


• controllable canonical form:

• observable canonical form:

• Jordan canonical form:


DC motor driving a mechanical load

G(s) = k / (s(τs + 1)), with k = 1 and τ = 50 ms

• Our aim in this example is to stabilize the closed-loop dynamics.

• Improve the 5% settling time to 50 ms while guaranteeing that the overshoot is less than or equal to 5%.

• Since the time constant is equal to 50 ms, a proper choice for T is


5 ms.

• This value will be used to get the different canonical forms.
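As a sketch of the discretization step, the plant G(s) = k/(s(τs + 1)) with states x1 (the output) and x2 (its derivative) has Ac = [[0, 1], [0, −1/τ]] and Bc = [[0], [k/τ]], and its zero-order-hold discretization with period T admits a closed form (a standard result). The canonical forms used by the slides would then be derived from the Ad, Bd computed below:

```python
import math

# Exact zero-order-hold discretization of G(s) = k / (s (tau s + 1)):
#   Ad = e^{Ac T},  Bd = integral_0^T e^{Ac s} Bc ds
# which for this triangular Ac has the closed form used below.

k, tau, T = 1.0, 0.05, 0.005      # gain, time constant 50 ms, T = 5 ms
a = math.exp(-T / tau)

Ad = [[1.0, tau * (1.0 - a)],
      [0.0, a]]
Bd = [k * (T - tau * (1.0 - a)),
      k * (1.0 - a)]

print("Ad =", Ad)   # discrete poles 1 and e^{-T/tau}, read off the diagonal
print("Bd =", Bd)
```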


• To solve this design problem, we will use all the canonical forms.
Therefore we have

• the controllable form of this system is given by:

• with
• the observable form of this system is given by:

• With
• the Jordan form of this system is given by:

• with
• From the specifications, we get:

e^{−ζπ/√(1−ζ²)} ≤ 0.05, t_s(5%) ≈ 3/(ζω_n)

• From these relations we obtain ζ and ω_n,

• which gives the following poles:

s_{1,2} = −ζω_n ± jω_n√(1−ζ²)

• Their corresponding poles in the discrete-time domain, when the sampling period T is chosen equal to 0.1, are given by:

• The corresponding characteristic polynomial is given by

• The closed-loop characteristic polynomial is given by:


• By equating the two characteristic polynomials we get the
controller gain depending on the considered representation as
follows:

• controllable form

• observable form

• Jordan form
Output Feedback Controller Design

• It may happen in some circumstances that we do not have complete access to the state vector.

• In that case, the approach we used for state feedback control cannot be applied, and an alternative is required.

• We will develop an approach that estimates the state vector and


use this estimate as the state vector for the actual control.
• We will firstly focus on the design of the observer that can be used
to estimate the state vector which can be used for feedback.

• We will see how to combine the controller and the observer


designs.

• Let us consider the following system:

x(k + 1) = A x(k) + B u(k), y(k) = C x(k)

• where x(k) ∈ ℝ^n, u(k) ∈ ℝ^m and y(k) ∈ ℝ^p represent respectively the state, the input and the output of the system.
• One easy way to build an estimate of the state x(k) is to use the following structure for the estimator:

x̂(k + 1) = A x̂(k) + B u(k)

• where x̂(k) ∈ ℝ^n is the estimate of the state vector x(k).

• The estimation error e(k) = x(k) − x̂(k) then satisfies e(k + 1) = A e(k).


• Notice that the error dynamics do not depend on the control u(k).

• The behavior of the error depends only on the stability of the matrix A.

• We have no way to make this behavior faster, should that be necessary to guarantee the convergence of the estimator, since we cannot place the poles of the matrix A at appropriate locations.
• To overcome this we should change the structure of the estimator; a natural one is given by the following dynamics:

x̂(k + 1) = A x̂(k) + B u(k) + L(y(k) − C x̂(k))

• where x̂(k) ∈ ℝ^n is the estimate of the state vector x(k) and L is a constant gain matrix to be designed, which will be referred to as the observer gain.
• The new dynamics of the estimation error, e(k + 1) = (A − LC) e(k), depend on the choice of the gain matrix L.

• The behavior can therefore be controlled by the choice of this observer gain L.

• It is important to notice that the eigenvalues of the matrix Aᵀ − CᵀLᵀ are the same as those of the matrix A − LC.
• If we denote by 𝑧1 ,···, 𝑧𝑛 , the poles that permit the design of the
matrix L, the characteristic equation is given by:

• We can design the gain matrix L using the Ackermann formula for the dual dynamics x(k + 1) = Aᵀ x(k) + Cᵀ u(k),

• with the control u(k) = −Lᵀ x(k).
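A minimal sketch of such an observer on a hypothetical two-state system follows. A, C and the initial error are illustrative; the gain L was obtained by equating det(zI − (A − LC)) with the desired polynomial z² − 0.2z + 0.02, i.e. observer poles at 0.1 ± 0.1j:

```python
# Luenberger observer error dynamics e(k+1) = (A - L C) e(k) for a
# hypothetical 2-state system where only x1 is measured.

A = [[0.0, 1.0],
     [-2.0, 3.0]]      # open-loop poles 1 and 2: unstable
C = [1.0, 0.0]         # y(k) = x1(k)
L = [2.8, 6.42]        # places the observer poles at 0.1 +/- 0.1j

# Error dynamics matrix F = A - L C (independent of the control u(k))
F = [[A[i][j] - L[i] * C[j] for j in range(2)] for i in range(2)]

e = [1.0, -1.0]        # initial estimation error
for _ in range(20):
    e = [F[0][0] * e[0] + F[0][1] * e[1],
         F[1][0] * e[0] + F[1][1] * e[1]]

print("estimation error after 20 steps:", e)
```

Because the observer poles sit well inside the unit circle, the estimation error decays toward zero regardless of the applied input, even though the open-loop plant is unstable.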


Block diagram of discrete-time linear system
• To show how we design simultaneously the controller and the
observer gains, let us consider the following dynamical system:

• The objective is to design a state feedback that assures the


following performances:
• stable

• an overshoot less than 5 %

• a settling time at 5 % of 3 s
• First of all, it is important to notice that the system is unstable, since all the poles are on the unit circle.

• The controllability and observability matrices are given by:

• Both are full rank. The system is therefore controllable and observable.
• It is possible to place the poles where we want to guarantee the
desired performances.

• Let us now convert the performances to desired poles. As we did previously, we have:

e^{−ζπ/√(1−ζ²)} ≤ 0.05, t_s(5%) ≈ 3/(ζω_n) = 3 s

• From these relations we obtain ζ and ω_n,

• which gives the following poles:

s_{1,2} = −ζω_n ± jω_n√(1−ζ²)

• Since the order of the system is equal to 3, a third pole can be


chosen equal to 𝑠3 = −5

• Their corresponding poles in discrete-time domain when the


sampling period 𝑇 is chosen equal to 0.1𝑠 are given by:
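The conversion from specifications to discrete poles can be sketched as follows, using the standard second-order approximations (the overshoot formula for ζ and t_s(5%) ≈ 3/(ζω_n)); a third pole and the gain computation via Ackermann or place then follow as in the text:

```python
import math, cmath

# Specifications: overshoot <= 5%, 5% settling time of 3 s, T = 0.1 s.
Mp = 0.05      # maximum overshoot
ts = 3.0       # 5% settling time [s]
T = 0.1        # sampling period [s]

# Standard second-order formulas (approximations):
zeta = -math.log(Mp) / math.sqrt(math.pi**2 + math.log(Mp)**2)
wn = 3.0 / (zeta * ts)              # from ts(5%) ~ 3 / (zeta * wn)

s1 = complex(-zeta * wn, wn * math.sqrt(1.0 - zeta**2))  # dominant pole
z1 = cmath.exp(s1 * T)              # map to the z-domain: z = e^{sT}

print("zeta =", round(zeta, 4), " wn =", round(wn, 4))
print("s1 =", s1, " z1 =", z1, " |z1| =", abs(z1))  # |z1| < 1: stable
```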

• The corresponding characteristic polynomial is given by


• The controller gain can be computed either using the Ackermann formula or the MATLAB function place. The controller gain is given by:

K = [0.0071 0.4963 0.0894]

• Since we do not have access to all the states, we need to estimate the state in order to apply the state feedback control.

• We will design an observer for this purpose.

• The poles we will consider for the observer design are derived
from those of the specifications.
• As we said earlier, these poles can be chosen faster compared to the ones used in the controller design. We will make them four times faster.

• Their corresponding poles using the same sampling period 𝑇 are


given by:

• The corresponding characteristic polynomial is given by


• The observer gain can be computed either using the Ackermann formula or the MATLAB function place. The observer gain is given by:
• Let us now consider the multi-input multi-output case. For this purpose, we consider the following dynamical system:

• First of all it is important to notice that this system is unstable. Our


goal in this example is to design a state feedback controller that
stabilizes the system and place the poles of the closed-loop of this
system at 0.2, 0.1 and 0.2 ± 0.2 j.
• To design the state feedback controller, we will search for the
transformation 𝜂(𝑘) = 𝑃𝑥 𝑘 that gives the controllable canonical
form and then use the procedure we presented earlier to design
the controller gain.

• It is important to notice that we don’t have access to all the states


and therefore an observer is required to estimate the state for
feedback. To design the controller and the observer gains, the
system must be controllable and observable. The system is of
order 4 and has two inputs and two outputs.
• The controllability and the observability matrices are given by:
• Let us firstly focus on the design of the state feedback controller.

• We need to transform the actual description to a controllable


canonical form.

• To determine the controllability indices, notice that by inspection the first four columns of the controllability matrix are linearly independent, and therefore the matrix formed from them is nonsingular.
• From this we conclude that the controllability indices associated with the first and the second columns of the matrix B are equal to 2, and therefore the controllability index of the system is equal to 2.

• The inverse of this matrix is given by:

• From this we get:


• The matrix P of the transformation, 𝜂(𝑘) = 𝑃𝑥 𝑘 is given by:
• The new description is given by:
• First of all, notice that the system has four states and two inputs. Therefore, the controller gain has 2 × 4 components:
• On the other hand, notice that we can form the following two separate characteristic polynomials:

• From these we can construct the following matrix that will have the same desired poles:
Design of the observer

• For this purpose we consider poles for the design of the observer gain that are faster than those used for the design of the controller gains. Let us select the following ones:

• To design the observer gain, we need to search for the


transformation that gives the observable canonical form.
Linear Quadratic Regulator

• We will try to design the optimal state feedback that minimizes a certain fixed criterion.

• Let us focus first of all on the finite-horizon optimal control problem. The cost function is given by:

J = Σ_{k=0}^{N−1} [xᵀ(k) Q x(k) + uᵀ(k) R u(k)] + xᵀ(N) S x(N)

• where the matrices Q and S are symmetric and positive semi-definite, and the matrix R is symmetric and positive definite.
• The matrix R is supposed to be symmetric and positive-definite
because as we will see at the design phase of the optimal
controller, we need to compute the inverse of this matrix.

• The first term in the cost is used to penalize the state and the
second one for the control.

• The last term is used to penalize the final state.

• For general matrices Q and R it is difficult to give an interpretation, but for diagonal matrices the largest coefficient on the diagonal of the appropriate matrix penalizes its state or control component most heavily, and thus makes it the smallest.
• The linear regulator problem can be stated as: given a linear time-invariant system with the following dynamics:

x(k + 1) = A x(k) + B u(k)

• find a control law:

u(k) = K(k) x(k)

• that minimizes the cost function J.
• To solve this optimization problem, three approaches can be used; among them we mention the Bellman principle of optimality.
• To establish the optimality conditions that will give us the optimal solution, we proceed recursively.

• For this purpose, suppose the initial state is x(N − 1) and we want to drive it to the final state x(N).

• In this case, the cost becomes:

J_{N−1} = xᵀ(N) S x(N) + xᵀ(N − 1) Q x(N − 1) + uᵀ(N − 1) R u(N − 1)

• Using the system dynamics, x(N) = A x(N − 1) + B u(N − 1), we can rewrite this as follows:

J_{N−1} = [A x(N − 1) + B u(N − 1)]ᵀ S [A x(N − 1) + B u(N − 1)] + xᵀ(N − 1) Q x(N − 1) + uᵀ(N − 1) R u(N − 1)

• where the decision variable is u(N − 1), which we would like to determine so as to make the criterion smaller.

• Using the fact that the cost is continuous in the decision variable, the necessary condition for optimality is ∂J_{N−1}/∂u(N − 1) = 0,

• which gives in turn the optimal control, u(N − 1), to drive the state from x(N − 1) to x(N):

u(N − 1) = −(R + BᵀSB)⁻¹ BᵀSA x(N − 1)
• It is well known from optimization theory that this solution will be the minimizing one if the following holds:

R + BᵀSB > 0

• We can see that the control is indeed a state feedback one, which we can rewrite as:

u(N − 1) = K(N − 1) x(N − 1)

• with K(N − 1) = −(R + BᵀSB)⁻¹ BᵀSA.
• The corresponding optimal cost is:

J*_{N−1} = xᵀ(N − 1) S_{N−1} x(N − 1)

• with

S_{N−1} = Q + Aᵀ S_N A − Aᵀ S_N B (R + Bᵀ S_N B)⁻¹ Bᵀ S_N A

• Notice that with this choice we have S_N = S.

• If we now take another step backward and use the principle of optimality, we have:

J_{N−2} = xᵀ(N − 2) Q x(N − 2) + uᵀ(N − 2) R u(N − 2) + J*_{N−1}

• Proceeding similarly as before we get:

u(N − 2) = K(N − 2) x(N − 2), with K(N − 2) = −(R + Bᵀ S_{N−1} B)⁻¹ Bᵀ S_{N−1} A

• and

S_{N−2} = Q + Aᵀ S_{N−1} A − Aᵀ S_{N−1} B (R + Bᵀ S_{N−1} B)⁻¹ Bᵀ S_{N−1} A
• In a similar manner, if we would like to drive the state from x(k) to x(N), we get:

u(k) = K(k) x(k)

• with

K(k) = −(R + Bᵀ S_{k+1} B)⁻¹ Bᵀ S_{k+1} A

• To get the solution of this optimization problem we should solve backward the following equation:

S_k = Q + Aᵀ S_{k+1} A − Aᵀ S_{k+1} B (R + Bᵀ S_{k+1} B)⁻¹ Bᵀ S_{k+1} A

• with the following initial condition:

S_N = S

• The corresponding optimal control at each step is given by u(k) = K(k) x(k), with K(k) as above.
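The backward recursion can be sketched on a scalar system, where the Riccati recursion and gain formula reduce to simple arithmetic; the values of a, b, q, r below are illustrative, not the slides' example:

```python
# Finite-horizon LQR by backward recursion on a scalar system
# x(k+1) = a x(k) + b u(k), with u(k) = K(k) x(k):
#   K(k) = -(r + b^2 S_{k+1})^-1 * b S_{k+1} a
#   S_k  = q + a^2 S_{k+1} - (a b S_{k+1})^2 / (r + b^2 S_{k+1}),  S_N = s_f

a, b = 1.2, 1.0          # unstable open loop (pole at 1.2)
q, r, s_f = 1.0, 1.0, 1.0
N = 10

S = s_f
gains = []               # K(N-1), K(N-2), ..., K(0)
for _ in range(N):
    gains.append(-(b * S * a) / (r + b**2 * S))
    S = q + a**2 * S - (a * b * S)**2 / (r + b**2 * S)

print("gains:", [round(K, 4) for K in gains])
print("closed-loop pole with last gain:", a + b * gains[-1])
```

The gains converge backward in time toward a steady-state value, and the resulting closed-loop pole a + bK lies inside the unit circle, which is the behavior reported for the example that follows.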
Example

• To show how to solve the optimal control problem for a finite


horizon, let us consider the following dynamical system:

• Let the weighting matrices Q, R and S be given by:


• Our objective is then to search for a stabilizing controller

• We will use the optimal control approach

• The optimization problem is a finite horizon with N = 10

• As shown in the next figure, the two components of the gain converge to finite values.

• These values are then used for simulation.

• It can be verified that the closed-loop dynamics has all its poles
inside the unit circle.
• We stabilize the system using the optimal control. The steady-state gain is given by:
• For the infinite-horizon case, the cost function becomes:

J = Σ_{k=0}^{∞} [xᵀ(k) Q x(k) + uᵀ(k) R u(k)]

• Now if we assume the optimal cost is given by:

J* = xᵀ(k) P x(k)

• where P is an unknown matrix,

• notice that the cost can be rewritten as follows:

xᵀ(k) P x(k) = xᵀ(k) Q x(k) + uᵀ(k) R u(k) + xᵀ(k + 1) P x(k + 1)
• Using now the expression of the control and the system dynamics, we get:

xᵀ(k) P x(k) = xᵀ(k) Q x(k) + uᵀ(k) R u(k) + [A x(k) + B u(k)]ᵀ P [A x(k) + B u(k)]

• Based on the optimality conditions we get:

(R + Bᵀ P B) u(k) + Bᵀ P A x(k) = 0

• which gives the optimal control law:

u(k) = −K x(k)

• with

K = (R + Bᵀ P B)⁻¹ Bᵀ P A

• Using this expression for the control law and the previous one for the cost function, we get the following, which must hold for all x(k):

xᵀ(k) P x(k) = xᵀ(k) [Q + Kᵀ R K + (A − BK)ᵀ P (A − BK)] x(k)

• This implies in turn the following:

P = Q + Kᵀ R K + (A − BK)ᵀ P (A − BK)

• Replacing K by its expression we obtain the following Riccati equation:

P = Q + Aᵀ P A − Aᵀ P B (R + Bᵀ P B)⁻¹ Bᵀ P A
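One common way to solve this algebraic Riccati equation numerically is to iterate it to a fixed point. The scalar example below (illustrative a, b, q, r) shows the idea:

```python
# Solve the scalar discrete algebraic Riccati equation
#   P = q + a^2 P - (a b P)^2 / (r + b^2 P)
# by fixed-point iteration, then form K = (r + b^2 P)^-1 * b P a.

a, b, q, r = 1.5, 1.0, 1.0, 1.0     # unstable open loop (pole at 1.5)

P = q
for _ in range(200):                 # iterate the Riccati map to convergence
    P = q + a**2 * P - (a * b * P)**2 / (r + b**2 * P)

K = (b * P * a) / (r + b**2 * P)     # optimal gain, u(k) = -K x(k)
print("P =", round(P, 5), " K =", round(K, 5))
print("closed-loop pole a - bK =", a - b * K)   # inside the unit circle
```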