CAS 2023 Lecture5 LectureNotes
ẋ = f (x, u) (2.63)
ẋ = A x + B u (2.64)
If this linear system is not stable, then it can be stabilized with a linear state-feedback law given by

u(t) = -K x(t) (2.65)

where we know how to compute the values of the gain row vector K so that we have the desired poles in closed loop. However, remember that u(t) in (2.65) represents only small deviations around a nominal input u*, and therefore that u(t) should not be applied directly to the nonlinear system. Indeed, the signal driving the nonlinear system must also contain the nominal part. Hence, the control input applied to the nonlinear system is given by

u* + u(t). (2.66)
Similarly, the variable x(t) used for feedback in (2.65) still represents small deviations around x*, not the state x(t) as measured from the nonlinear system itself. As a consequence, what enters our feedback controller (2.65) is

x(t) - x* (2.67)

where x(t) here denotes the measured state. Thus, combining (2.65), (2.66) and (2.67), a way to implement our linear state-feedback stabilizer of a nonlinear system would be given by the following expression
u(t) = -K(x(t) - x*) + u* (2.68)
This linear controller and its nonlinear plant are represented together in the
block diagram below (see figure 2.6).
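As a sanity check of controller (2.68), the following short simulation sketch stabilizes a hypothetical scalar nonlinear plant ẋ = sin(x) + u around x* = π/4 (the plant, the target and the gain are illustrative choices, not taken from these notes):

```python
import math

# Hypothetical nonlinear plant: x' = sin(x) + u.
# Target equilibrium x* = pi/4; the nominal input follows from
# 0 = sin(x*) + u*, hence u* = -sin(x*).
x_star = math.pi / 4
u_star = -math.sin(x_star)

# Linearization around (x*, u*): A = cos(x*), B = 1.
# Choose k so that the closed-loop pole A - B*k sits at -2.
k = math.cos(x_star) + 2.0

def controller(x):
    # Expression (2.68): feedback on the deviation plus the nominal input.
    return -k * (x - x_star) + u_star

# Forward-Euler simulation of the nonlinear plant under this controller.
x, dt = 0.0, 1e-3
for _ in range(10_000):              # 10 seconds
    x += dt * (math.sin(x) + controller(x))

print(f"final state {x:.6f}, target {x_star:.6f}")
```

Note how the nominal pair (x*, u*) enters exactly as in (2.68): the feedback acts on the deviation x(t) - x*, and u* is added back before the input reaches the plant.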
[Figure 2.6: block diagram of the linear state-feedback controller in closed loop with the nonlinear system.]

2.8. TRACKING CONTROL FOR LINEAR SYSTEMS
First, we need to make sure that it is actually possible at all for the system to follow this trajectory, at least when there is no disturbance or difference in initial conditions. Hence, we will call a desired trajectory xd(t) a feasible trajectory for the state-space representation ẋ = Ax + Bu if an open-loop control signal ud(t) can be found such that xd(t) is a solution of the system with ud(t) as input. That is, we have xd(t) and ud(t) verifying

ẋd(t) = A xd(t) + B ud(t). (2.69)
Note that this expression can be seen as an extension of the equilibrium relation
(2.46) we saw in section 2.6. Indeed, if xd (t) is constant, then ẋd (t) = 0 and
expression (2.69) reduces to 0 = Axd + Bud , which is nothing else than (2.46).
Once the feasible trajectory xd(t) and its associated ud(t) are defined, we can introduce, similarly again to section 2.6, the “delta” variables

δx(t) := x(t) - xd(t), δu(t) := u(t) - ud(t), (2.70)

which again lead to the error dynamics around (xd(t), ud(t))

δẋ(t) = A δx(t) + B δu(t). (2.71)
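To make feasibility concrete, here is a minimal sketch (a hypothetical example, not from these notes): for the double integrator ẋ1 = x2, ẋ2 = u, the desired trajectory xd(t) = [sin t, cos t]^T is feasible since ẋd1 = xd2, and ud(t) = ẋd2(t) = -sin t satisfies (2.69); the combined feedforward/feedback law u = ud - K(x - xd) then drives the tracking error to zero:

```python
import math

# Hypothetical plant: double integrator x1' = x2, x2' = u.
# Desired trajectory xd(t) = [sin t, cos t]^T is feasible: xd1' = cos t = xd2,
# and ud(t) = xd2'(t) = -sin(t), so (2.69) holds.
def xd(t):
    return math.sin(t), math.cos(t)

def ud(t):
    return -math.sin(t)

# Tracking controller u = ud - K (x - xd), with K = [1, 2] placing both
# poles of the error dynamics at -1.
k1, k2 = 1.0, 2.0

x1, x2 = 1.5, 0.0                 # initial condition off the trajectory
t, dt = 0.0, 1e-3
for _ in range(15_000):           # 15 seconds
    d1, d2 = xd(t)
    u = ud(t) - k1 * (x1 - d1) - k2 * (x2 - d2)
    x1, x2 = x1 + dt * x2, x2 + dt * u
    t += dt

print(f"tracking error at t = {t:.0f}s: {abs(x1 - math.sin(t)):.2e}")
```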
[Block diagram: open-loop/feedforward controller providing ud(t), combined with a linear state-feedback controller acting on the deviation from xd(t), around the linear system.]

2.9. INTEGRATION USING LINEAR STATE-FEEDBACK
Consider the scalar system ẋ = ax + bu + d, where a and b are the constant parameters of the system and d is a constant disturbance. Applying the state-feedback controller u(t) = -kx(t), we get the closed-loop dynamics

ẋ(t) = (a - bk)x(t) + d

whose equilibrium x* = d/(bk - a) is nonzero whenever d ≠ 0. Hence, the simple feedback law u(t) = -kx(t) cannot completely compensate for constant disturbances.
Let us now introduce a state-feedback law where an integral term is added:

u(t) = -kx(t) - kI ∫_0^t x(τ)dτ. (2.78)

The closed-loop dynamics then read ẋ = (a - bk)x - bkI ∫_0^t x(τ)dτ + d, and differentiating once with respect to time gives

ẍ = (a - bk)ẋ - bkI x,

where it is important to notice that the constant disturbance term has disappeared since ḋ = 0. Converting this second-order system into a state-space representation by defining the state coordinates [x1, x2]^T := [x, ẋ]^T, we get
d/dt [x1; x2] = [0, 1; -bkI, a - bk] [x1; x2]. (2.81)
We know that the whole state [x1(t), x2(t)]^T, and hence x1(t) = x(t), will converge to the origin of the state space provided each eigenvalue of the matrix

[0, 1; -bkI, a - bk] (2.83)

has a strictly negative real part. This result means that, provided that k and kI are chosen properly, ie so that the dynamics (2.81) are stable, controller (2.78) can stabilize our scalar system around the origin, and this despite the presence of an additive constant disturbance.
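The scalar discussion above can be checked numerically. In the sketch below (all numbers are hypothetical), an unstable plant ẋ = ax + bu + d with constant disturbance d is driven to the origin by controller (2.78), with k and kI chosen so that (2.81) has poles at -1 and -2:

```python
# Hypothetical numbers: unstable plant x' = a x + b u + d with a constant
# disturbance d.  With plain feedback u = -k x the steady state would be
# x* = d / (b k - a) != 0; the integral term in (2.78) removes this offset.
a, b, d = 1.0, 1.0, 0.5
k, kI = 4.0, 2.0           # (2.81) then has poles at -1 and -2

x, xi, dt = 1.0, 0.0, 1e-3
for _ in range(30_000):    # 30 seconds
    u = -k * x - kI * xi   # controller (2.78), xi = integral of x
    xi += dt * x
    x += dt * (a * x + b * u + d)

print(f"steady state with integral action: x = {x:.2e}, xi = {xi:.4f}")
```

At steady state the integrator settles at xi = d/(b kI): it is the integral term that absorbs the disturbance.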
Consider now a general linear system subject to a constant input disturbance d, ie

ẋ = Ax + B(u + d) (2.84)

with measured output

y = Cx. (2.85)

[Block diagram: linear system in closed loop with state-feedback gain K and an integral branch KI ∫ acting on the output y = Cx.]
Similarly to the scalar case of the previous subsection, we then propose the following state-feedback controller with integration

u(t) = -Kx(t) - KI ∫_0^t Cx(τ)dτ = -Kx(t) - KI ∫_0^t y(τ)dτ. (2.86)

In the above equation, note that we do not integrate the whole state, but only the output, which itself matches the dimension of both the input and the disturbance.
In order to see the impact of this controller on the overall steady-state regime, let us first define a new state variable representing the dynamics associated with the integral part of the controller, ie we have

xI := ∫_0^t Cx(τ)dτ. (2.87)
Defining then the extended state vector xE := [xI^T, x^T]^T, its corresponding state-space representation can be obtained by gathering (2.84) and (2.88) together, ie we have

d/dt [xI; x] = [0, C; 0, A] [xI; x] + [0; B](u + d). (2.89)
Regarding state-feedback, note that, given the definition of the new state variable xI in (2.87), controller (2.86) can be rewritten as

u = -Kx - KI xI. (2.90)
Putting this equation into the extended dynamics (2.89), we get the closed-loop dynamics

[ẋI; ẋ] = [0, C; -BKI, A - BK] [xI; x] + [0; B] d. (2.91)
In order to know the steady state induced by controller (2.86), let us now extract the equilibrium points from (2.91). Thus, setting all derivatives in (2.91) to zero, we have

0 = Cx*
0 = -BKI xI* + (A - BK)x* + Bd. (2.92)
Let us take some time to examine the implications of the above equation. The first line should be valid for any matrix C, which means that we have x* = 0. For the second line, note that x* = 0 implies that the second term in the RHS disappears, so that we are left with

0 = -BKI xI* + Bd (2.93)

so that we have

KI xI* = d. (2.94)
These two results, ie x* = 0 and expression (2.94), are actually quite essential. The first one, similarly to the scalar case, means that the steady state is at the origin despite the presence of a disturbance. But the second one tells us which part of the state-feedback controller (2.86) is doing the job of rejecting the disturbance: the integrator, of course. Indeed, expression (2.94) simply means that the steady state at the output of the integrator corresponds to the actual disturbance on the system. In some way, one might also say that the integrator provides an estimate of the disturbance, without requiring any prior knowledge of its amplitude.
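Expression (2.94) can also be verified numerically. The sketch below (a hypothetical double-integrator example) simulates the extended closed loop (2.91) with gains placing all three poles at -1, and checks that the integrator output converges so that KI xI* = d:

```python
# Hypothetical plant: double integrator with input disturbance,
# A = [0, 1; 0, 0], B = [0; 1], C = [1, 0], so that (2.84)-(2.85) read
# x1' = x2, x2' = u + d, y = x1.
d = 0.7

# Gains for u = -K x - KI xI (controller (2.90)): K = [3, 3], KI = 1 give
# the extended closed loop (2.91) the characteristic polynomial (s + 1)^3.
k1, k2, kI = 3.0, 3.0, 1.0

x1, x2, xI, dt = 0.0, 0.0, 0.0, 1e-3
for _ in range(30_000):                  # 30 seconds
    u = -k1 * x1 - k2 * x2 - kI * xI
    xI += dt * x1                        # xI' = C x = x1
    x1, x2 = x1 + dt * x2, x2 + dt * (u + d)

# Expression (2.94): the integrator output reconstructs the disturbance.
print(f"x* = ({x1:.1e}, {x2:.1e}), KI*xI = {kI * xI:.4f}, d = {d}")
```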
Let us come back to the scalar system

ẋ = ax + bu (2.95)

and assume this time that we want to stabilize it around an equilibrium point not at the origin, ie we have x* ≠ 0. For convenience, let us rewrite this equilibrium point as r := x* (often called the “reference”). As an alternative to what we saw in section 2.6, ie a combined feedforward/state-feedback control strategy, consider instead the following dynamic state-feedback controller (with integral term)

u(t) = -kx(t) - kI ∫_0^t (x(τ) - r) dτ. (2.96)
Let us now look at the equilibrium points of system (2.98) or (2.99). Setting the derivatives of the state components to 0, we get

0 = x2*
0 = (a - bk)x2* - bkI(x1* - r)
⇒ x1* = r, x2* = 0, (2.100)

which means that we are free to choose any r we want! The important implication of this is that, provided of course that k and kI are chosen so that the system is stabilized, using the integral controller means that the system will be stabilized around the desired reference r = x*.
This particular technique is quite interesting because, contrary to the method that we saw in section 2.6 for stabilizing around an equilibrium point x*, it does not require the computation of u*: it is implicitly computed by the dynamic controller (2.96).
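A minimal sketch of this implicit feedforward (hypothetical numbers): controller (2.96) steers ẋ = ax + bu to the reference r without u* = -ar/b ever being computed explicitly:

```python
# Hypothetical numbers for plant x' = a x + b u and reference r.
a, b, r = -1.0, 2.0, 1.5

# Controller (2.96): u = -k x - kI * integral of (x - r).
# No feedforward term is computed; the integrator finds u* by itself.
k, kI = 2.0, 1.0

x, xi, dt = 0.0, 0.0, 1e-3
for _ in range(20_000):        # 20 seconds
    u = -k * x - kI * xi
    xi += dt * (x - r)
    x += dt * (a * x + b * u)

# The input settles at the value u* = -a*r/b that section 2.6 would have
# required us to compute in advance.
print(f"x = {x:.4f} (r = {r}), u = {u:.4f} (u* = {-a * r / b})")
```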
The above discussion can be generalized to higher-order systems. In this case, we simply replace (2.86) with

u(t) = -Kx(t) - KI ∫_0^t (y(τ) - r(τ))dτ. (2.101)

Letting then

xI := ∫_0^t (Cx(τ) - r(τ))dτ (2.102)

and proceeding with the same kind of analysis as in the previous section, we obtain that y* = Cx* = r.
ẋ = ax + bu + d (2.103)
The PID controller acting on the tracking error e(t) := r(t) - y(t) is given by

u(t) = kp e(t) + ki ∫_0^t e(τ)dτ + kd (de/dt)(t). (2.105)
In order to compare a PID controller with linear state-feedback techniques, let us first assume, for simplicity, that the reference signal is such that r(t) = 0 (i.e. we want to stabilize the system output around the origin). In this case, controller (2.105) simplifies into

u(t) = -kp y(t) - ki ∫_0^t y(τ)dτ - kd ẏ(t). (2.106)
Furthermore, we will also assume that the plant is a second-order system represented by the following state-space representation (in component form)

ẋ1 = x2
ẋ2 = -a0 x1 - a1 x2 + bu (2.107)
y = x1.

Applying the state-feedback controller with integration (2.86) to this plant, and using y = x1 together with ẏ = x2, gives

u(t) = -k1 x1(t) - k2 x2(t) - kI ∫_0^t x1(τ)dτ = -k1 y(t) - kI ∫_0^t y(τ)dτ - k2 ẏ(t).

Note the striking resemblance between the above expression and equation (2.106). Indeed, we have exactly kp = k1, kd = k2 and ki = kI, ie k1 plays the same role as the proportional gain kp, k2 the same as the derivative gain kd, and kI the same as ki.
A quite similar analysis can be carried out for second-order systems stabilized around a non-zero reference input signal. However, for higher-order systems, a state-feedback controller will have more gains, allowing it to control more complex dynamics (roughly corresponding to adding higher derivative terms in the PID control framework), while the PID controller will basically remain the same. Nevertheless, PID control turns out to be quite effective, and in many cases performs sufficiently well while remaining quite simple, hence its popularity.