CAS 2023 Lecture 5: Lecture Notes

2.7 Linear state-feedback for nonlinear systems


The interest of linear state-feedback control lies essentially in its simplicity: take
the current state as information, and multiply it by a few gains to compute the
control input which is then applied back to the plant. As we have seen, the
computation of the gains can be done in a quite simple way.
Interestingly, linear state-feedback can also be applied to nonlinear systems. The basic idea consists in choosing a point of interest where one wishes to stabilize the system, i.e. an equilibrium point, and then applying the right linear state-feedback controller based on the linear approximation of the system around the chosen equilibrium point.
More concretely, start with a nonlinear system described by

ẋ = f (x, u) (2.63)

and compute a linear approximation of (2.63) around an equilibrium point (x*, u*) to obtain

δẋ = A δx + B δu (2.64)

If this linear system is not stable, then it can be stabilized with a linear state-feedback law given by

δu(t) = −K δx(t) (2.65)

where we know how to compute the values of the gain row vector K so that we have the desired poles in closed-loop. However, remember that δu(t) represents only small deviations around a nominal u*, and therefore that δu(t) should not be applied directly to the nonlinear system. Indeed, the control input signal is and will always be u(t). Hence, using the way δu(t) is defined, the control input applied to the nonlinear system is given by

u(t) = u* + δu(t) (2.66)

Similarly, the variable δx(t) used for feedback in (2.65) still represents small deviations around x*, not the state x(t) as measured from the nonlinear system itself. As a consequence, what enters our feedback controller (2.65) is

δx(t) = x(t) − x* (2.67)

Thus, combining (2.65), (2.66) and (2.67), a way to implement our linear state-feedback stabilizer of a nonlinear system would be given by the following expression

u(t) = −K(x(t) − x*) + u* (2.68)
This linear controller and its nonlinear plant are represented together in the
block diagram below (see figure 2.6).


Figure 2.6: Linear state-feedback controller for a nonlinear system
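As an illustration of controller (2.68), the following sketch stabilizes a simple pendulum (an assumed example plant, not one from the notes) around its upright equilibrium. The parameter values and the pole locations are chosen arbitrarily for the demonstration; the gains are obtained by matching the closed-loop characteristic polynomial by hand.

```python
import math

# Illustrative plant (assumed, not from the notes): a pendulum
#   theta'' = -(g/l) sin(theta) + u/(m l^2)
# with upright equilibrium x* = (pi, 0) and u* = 0.
g, l, m = 9.81, 1.0, 1.0
theta_star, u_star = math.pi, 0.0

# Linearizing around x* gives delta_theta'' = (g/l) delta_theta + u/(m l^2),
# i.e. A = [[0, 1], [g/l, 0]], B = [0, 1/(m l^2)]. Placing the closed-loop
# poles at -2 and -3 (char. poly s^2 + 5 s + 6) by matching coefficients:
k1 = m * l**2 * (6.0 + g / l)
k2 = m * l**2 * 5.0

def control(theta, omega):
    # u(t) = -K (x(t) - x*) + u*, cf. expression (2.68)
    return -k1 * (theta - theta_star) - k2 * omega + u_star

# Apply the linear controller to the *nonlinear* pendulum (forward Euler)
theta, omega, dt = math.pi - 0.3, 0.0, 1e-3
for _ in range(10000):          # 10 s of simulation
    u = control(theta, omega)
    theta += dt * omega
    omega += dt * (-(g / l) * math.sin(theta) + u / (m * l**2))
```

Even though the controller is designed on the linear approximation only, the nonlinear simulation converges to the upright equilibrium for this initial condition; of course, this only holds locally around x*.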

2.8 Tracking control for linear systems


One can also extend the result of section 2.6 so that a system tracks, or follows, a particular reference trajectory xd(t). In this case, the feedback controller will have to compensate for errors around this trajectory.

First, we need to make sure it is actually possible at all for the system to follow this trajectory, especially when there is no disturbance or difference in initial conditions. Hence, for the state-space representation ẋ = Ax + Bu, we will call a desired trajectory xd(t) a feasible trajectory if an open-loop control signal ud(t) can be found such that xd(t) is a solution of the system with ud(t) as input. That is, we have xd(t) and ud(t) verifying

ẋd (t) = Axd (t) + Bud (t) (2.69)

Note that this expression can be seen as an extension of the equilibrium relation
(2.46) we saw in section 2.6. Indeed, if xd (t) is constant, then ẋd (t) = 0 and
expression (2.69) reduces to 0 = Axd + Bud , which is nothing else than (2.46).
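To make the feasibility relation (2.69) concrete, here is a small numerical check on an assumed example: for a double integrator ẋ1 = x2, ẋ2 = u, the candidate trajectory xd(t) = (sin t, cos t) is feasible, with ud(t) = −sin t as the associated open-loop input.

```python
import math

# Feasibility check (2.69) for a double integrator (illustrative system):
#   x1' = x2,  x2' = u,  i.e.  A = [[0, 1], [0, 0]], B = [0, 1]
# Candidate desired trajectory xd(t) = (sin t, cos t); then
# xd1' = cos t = xd2 and xd2' = -sin t, so ud(t) = -sin t makes xd feasible.

def xd(t):
    return (math.sin(t), math.cos(t))

def ud(t):
    return -math.sin(t)

# Verify xd'(t) = A xd(t) + B ud(t) at a few sample times,
# approximating xd'(t) with a central finite difference.
h = 1e-6
ok = True
for t in [0.0, 0.7, 1.9, 3.2]:
    d1 = (xd(t + h)[0] - xd(t - h)[0]) / (2 * h)
    d2 = (xd(t + h)[1] - xd(t - h)[1]) / (2 * h)
    ok = ok and abs(d1 - xd(t)[1]) < 1e-6 and abs(d2 - ud(t)) < 1e-6
```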

Once the feasible trajectory xd(t) and its associated ud(t) are defined, we can introduce, similarly again to section 2.6, the “delta” variables given by

δx(t) := x(t) − xd(t) and δu(t) := u(t) − ud(t) (2.70)

which again leads to the error dynamics around (xd(t), ud(t))

δẋ(t) = A δx(t) + B δu(t) (2.71)

from which we obtain the tracking controller expression

u(t) = −K(x(t) − xd(t)) + ud(t) (2.72)

Interestingly, note that tracking controller (2.72) thus combines an open-loop or feedforward controller with a standard linear state-feedback controller, as represented in figure 2.7.


Figure 2.7: Open-loop/feedforward controller + linear state-feedback controller = tracking controller.
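A minimal simulation of tracking controller (2.72), on an assumed scalar integrator plant ẋ = u (so A = 0, B = 1): the feasible trajectory xd(t) = sin t has feedforward ud(t) = cos t, since ẋd = cos t = ud satisfies (2.69). The gain value is arbitrary.

```python
import math

# Tracking controller (2.72) on the scalar integrator x' = u
# (an illustrative plant, not one from the notes).
k = 5.0                     # feedback gain: error dynamics e' = -5 e
x, t, dt = 2.0, 0.0, 1e-3   # start far from the desired trajectory

for _ in range(10000):      # 10 s of forward Euler
    xd, ud = math.sin(t), math.cos(t)
    u = -k * (x - xd) + ud  # expression (2.72): feedback + feedforward
    x += dt * u
    t += dt
```

The feedforward term ud keeps the state moving along the trajectory, while the feedback term only has to kill the initial error, which decays like e^(−5t).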

2.9 Integration using linear state-feedback


2.9.1 Basic principle
Through our simulations and experiments, we have seen that linear state-feedback is quite good at coping with transitory disturbances, such as high-frequency noise or differences in initial conditions.
However, other, more persistent disturbances can perturb the system. For example, a constant wind will influence the behavior of a plane, and a heavy passenger in a car might force the driver to press harder on the throttle to maintain speed.
To see how persistent disturbances affect systems, let us start with the simple linear scalar system represented by

ẋ(t) = ax(t) + bu(t) (2.73)

where a and b are the constant parameters of the system. Applying the state-feedback controller u(t) = −kx(t), we will get the closed-loop dynamics

ẋ(t) = (a − bk) x(t) (2.74)

which will be stable around the equilibrium point x* = 0 provided a − bk < 0.


Assume now that a constant disturbance d affects the system additively, i.e. we have

ẋ(t) = ax(t) + bu(t) + d (2.75)

Using the same feedback law u(t) = −kx(t) means that, because of the disturbance d, the closed-loop dynamics (2.74) become

ẋ(t) = (a − bk) x(t) + d (2.76)

for which we have a new equilibrium point given by

x* = −d / (a − bk) (2.77)

Expression (2.77) means that, unless we set the gain of our controller to k = ∞ (both unrealistic and unreasonable), the equilibrium cannot be x* = 0 anymore.


Hence, the simple feedback law u(t) = −kx(t) cannot completely compensate for constant disturbances.
Let us now introduce a state-feedback law where an integral term is added:

u(t) = −kx(t) − kI ∫₀ᵗ x(τ) dτ (2.78)

with kI a constant scalar gain tuning the influence of the integral term. Putting (2.78) into the disturbed system (2.75), we get the closed-loop dynamics

ẋ = ax − bkx − bkI ∫₀ᵗ x(τ) dτ + d (2.79)

Expression (2.79), combining both integral and differential terms, is simply called an integro-differential equation. In order to obtain a differential equation, differentiate (2.79) to get

ẍ = (a − bk)ẋ − bkI x (2.80)

where it is important to notice that the constant disturbance term has disappeared since ḋ = 0. Converting this second-order system into a state-space representation by defining the state coordinates [x1, x2]ᵀ := [x, ẋ]ᵀ, we get (writing matrices row-wise, with rows separated by semicolons)

d/dt [x1; x2] = [0, 1; −bkI, a − bk] [x1; x2] (2.81)

which has equilibrium point

[x1*; x2*] = [x*; ẋ*] = [0; 0]. (2.82)

We know that the whole state [x1(t), x2(t)]ᵀ, and hence x1(t) = x(t), will converge to the origin of the state-space provided each eigenvalue of the matrix

[0, 1; −bkI, a − bk] (2.83)

has a strictly negative real part. The result means that, provided that k and kI are chosen properly, i.e. so that the dynamics (2.81) are stable, controller (2.78) can stabilize our scalar system around the origin, and this despite the presence of an additive constant disturbance.
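A quick numerical sketch of this disturbance rejection, with assumed values a = 1 (an unstable plant), b = 1 and d = 2, and gains obtained by placing the eigenvalues of the matrix in (2.83) at −1 and −2:

```python
# Integral state-feedback (2.78) on the disturbed scalar system (2.75):
#   x' = a x + b u + d,   u = -k x - kI * (integral of x)
# Illustrative numbers, not from the notes:
a, b, d = 1.0, 1.0, 2.0

# Gains so that [0, 1; -b kI, a - bk] has eigenvalues -1 and -2
# (char. poly s^2 + 3 s + 2): a - bk = -3 => k = 4;  b kI = 2 => kI = 2.
k, kI = 4.0, 2.0

x, integ, dt = 0.5, 0.0, 1e-3
for _ in range(20000):          # 20 s of forward Euler
    u = -k * x - kI * integ     # controller (2.78)
    x += dt * (a * x + b * u + d)
    integ += dt * x             # running integral of x
```

At steady state, x returns to 0 and the integrator output settles so that kI times the integral equals d, i.e. the integral term alone cancels the disturbance.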

2.9.2 A more general case


More generally, let us start again with a disturbed system described by the
following single-input state-space representation

ẋ = Ax + B(u + d), (2.84)

to which is added the corresponding output equation

y = Cx. (2.85)


Figure 2.8: State-feedback control with integration

Similarly to the scalar case of the previous subsection, we then propose the following state-feedback controller with integration

u(t) = −Kx(t) − KI ∫₀ᵗ Cx(τ) dτ = −Kx(t) − KI ∫₀ᵗ y(τ) dτ. (2.86)

In the above equation, note that we do not integrate the whole state, but only the output, which matches the dimension of both the input and the disturbance.
In order to see the impact of this controller on the overall steady-state regime, let us first define the new state variable representing the dynamics associated with the integral part of the controller, i.e. we have

xI := ∫₀ᵗ Cx(τ) dτ. (2.87)

Differentiating this new state variable, we get

ẋI = Cx. (2.88)

Defining then the extended state vector xE := [xI, xᵀ]ᵀ, its corresponding state-space representation can be obtained by gathering (2.84) and (2.88) together, i.e. we have

d/dt [xI; x] = [0, C; 0, A] [xI; x] + [0; B] (u + d). (2.89)

Regarding state-feedback, note that, given the definition of the new state variable xI in (2.87), controller (2.86) can be rewritten as

u = −Kx − KI xI (2.90)

Putting this equation into the extended dynamics (2.89), we get the closed-loop dynamics

[ẋI; ẋ] = [0, C; −BKI, A − BK] [xI; x] + [0; B] d. (2.91)


In order to know the steady-state induced by controller (2.86), let us now extract the equilibrium points from (2.91). Thus, set all derivatives in (2.91) to zero, so that we have

[0; 0] = [Cx*; −BKI xI* + (A − BK)x* + Bd]. (2.92)

Let us take some time to examine the implications of the above equation. The first line should be valid for any matrix C, which means that we have x* = 0. For the second line, note that x* = 0 implies that the second term in the RHS disappears, so that we are left with

0 = −BKI xI* + Bd (2.93)

so that we have

KI xI* = d. (2.94)

These two results, i.e. x* = 0 and expression (2.94), are actually quite essential. The first one, similarly to the scalar case, means that the steady-state is the origin despite the presence of a disturbance. But the second one tells us which part of the state-feedback controller (2.86) is doing the job of rejecting the disturbance: the integrator, of course. Indeed, expression (2.94) simply means that the steady-state at the output of the integrator corresponds to the actual disturbance on the system. In some way, one might also say that the integrator provides an estimate of the disturbance, without requiring any prior knowledge of its amplitude.
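The general result (2.94) can be checked numerically on an assumed second-order plant, a disturbed double integrator with A = [0, 1; 0, 0], B = [0; 1], C = [1, 0]; the gains below place the roots of the closed-loop characteristic polynomial s³ + k2 s² + k1 s + kI at −1, −1 and −2.

```python
# General case (2.86)/(2.90): u = -K x - KI xI on a disturbed
# double integrator (illustrative plant, not from the notes):
#   x1' = x2,  x2' = u + d,  y = x1,  constant disturbance d.
d = 0.7
k1, k2, kI = 5.0, 4.0, 2.0      # char. poly (s+1)^2 (s+2) = s^3+4s^2+5s+2

x1, x2, xI, dt = 1.0, 0.0, 0.0, 1e-3
for _ in range(30000):              # 30 s of forward Euler
    u = -k1 * x1 - k2 * x2 - kI * xI   # controller (2.90)
    x1 += dt * x2
    x2 += dt * (u + d)
    xI += dt * x1                      # xI' = C x = y, cf. (2.88)
```

As predicted, the state converges to the origin while KI times the integrator state converges to the disturbance d, in agreement with (2.94).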

2.9.3 Stabilization around a constant reference


Let us come back to our scalar system

ẋ = ax + bu (2.95)

and assume this time that we want to stabilize it around an equilibrium point not at the origin, i.e. we have x* ≠ 0. For convenience, let us rewrite this equilibrium point as r := x* (often called the “reference”). As an alternative to what we saw in section 2.6, i.e. a combined feedforward/state-feedback control strategy, consider instead the following dynamic state-feedback controller (with integral term)

u(t) = −kx(t) − kI ∫₀ᵗ (x(τ) − r) dτ (2.96)

This gives the closed-loop dynamics, after differentiation,

ẍ = (a − bk) ẋ − bkI (x − r) (2.97)

whose state-space representation is

d/dt [x1; x2] = [0, 1; −bkI, a − bk] [x1; x2] + [0; bkI] r (2.98)

or, in component form

ẋ1 = x2
ẋ2 = (a − bk) x2 − bkI (x1 − r) (2.99)


Let us now look at the equilibrium points of system (2.98) or (2.99). Setting the derivatives of the state components to 0, we get

0 = x2*,  0 = (a − bk)x2* − bkI(x1* − r)  ⇒  x1* = r,  x2* = 0 (2.100)

which means that we are free to choose any r we want! The important implication of this is that, provided of course that k and kI are chosen so that the system is stabilized, using the integral controller means that the system will be stabilized around the desired reference r = x*.
This particular technique is quite interesting because, contrary to the method that we saw in section 2.6 for stabilizing around an equilibrium point x*, it does not require the computation of u*. It is implicitly computed by the dynamic controller (2.96).
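A short sketch of this implicit feedforward computation, reusing the assumed scalar plant and gains from earlier (a = 1, b = 1, poles at −1 and −2), with a non-zero reference r:

```python
# Dynamic controller (2.96) stabilizing x' = a x + b u around a
# non-zero reference r, without computing u* by hand.
# Illustrative numbers, not from the notes:
a, b, r = 1.0, 1.0, 3.0
k, kI = 4.0, 2.0                     # closed-loop poles at -1 and -2

x, integ, dt = 0.0, 0.0, 1e-3
for _ in range(20000):               # 20 s of forward Euler
    u = -k * x - kI * integ          # controller (2.96)
    x += dt * (a * x + b * u)
    integ += dt * (x - r)            # integral of the error x - r
```

At steady state x = r, and the controller output settles at u = −a·r/b, which is exactly the feedforward value u* that section 2.6 would have required us to compute explicitly.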
The above discussion can be generalized to higher-order systems. In this case, we simply replace (2.86) with

u(t) = −Kx(t) − KI ∫₀ᵗ (y(τ) − r(τ)) dτ. (2.101)

Letting then

xI := ∫₀ᵗ (Cx(τ) − r(τ)) dτ (2.102)

and proceeding with the same kind of analysis as in the previous section, we obtain that y* = Cx* = r.

2.9.4 Polynomial disturbances


More complex disturbances than the simple constant signals seen in section 2.9.1 can be handled using the same basic integral idea. Indeed, consider again our scalar example, affected this time by a constantly increasing disturbance

ẋ = ax + bu + d·t (2.103)

where the constant term d is multiplied by the time variable t. To compensate for this first-order polynomial disturbance, one will then use the following linear state-feedback controller with a double integrator

u(t) = −kx(t) − kI ∫₀ᵗ x(τ) dτ − kII ∫₀ᵗ ∫₀^τ x(σ) dσ dτ (2.104)

and similarly for higher-order polynomial disturbances, where we will simply need more integrators.
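To see the double integrator (2.104) reject a ramp, here is a sketch with assumed values a = 1, b = 1, d = 0.5. Differentiating the closed loop twice (as in section 2.9.1) gives x''' = (a − bk)x'' − bkI x' − bkII x, and the gains below place its roots at −1, −1 and −2.

```python
# Double-integral controller (2.104) rejecting the ramp disturbance
# in (2.103): x' = a x + b u + d*t. Illustrative numbers:
a, b, d = 1.0, 1.0, 0.5
# Matching x''' = (a - bk) x'' - b kI x' - b kII x against
# (s+1)^2 (s+2) = s^3 + 4 s^2 + 5 s + 2 gives:
k, kI, kII = 5.0, 5.0, 2.0

x, I1, I2, t, dt = 1.0, 0.0, 0.0, 0.0, 1e-3
for _ in range(30000):                   # 30 s of forward Euler
    u = -k * x - kI * I1 - kII * I2      # controller (2.104)
    x += dt * (a * x + b * u + d * t)
    I1 += dt * x                         # single integral of x
    I2 += dt * I1                        # double integral of x
    t += dt
```

The state x still converges to 0: the outer integrator I2 grows linearly so that its term tracks and cancels the ramp d·t, just as the single integrator canceled a constant.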

2.9.5 A link with PID control


Linear state-feedback controllers with integration can be related to PID controllers, which we briefly discussed at the beginning of this chapter (see again figure 2.9 for reference). Recall that, mathematically, a PID controller is


Figure 2.9: PID Controller

given by

u(t) = kp e(t) + ki ∫₀ᵗ e(τ) dτ + kd ė(t). (2.105)

In order to compare a PID controller with linear state-feedback techniques,
let us first assume, for simplicity, that the reference signal is such that r(t) = 0
(i.e. we want to stabilize the system output around the origin). In this case,
controller (2.105) simplifies into
u(t) = −kp y(t) − ki ∫₀ᵗ y(τ) dτ − kd ẏ(t) (2.106)

Furthermore, we will also assume that the plant is a second-order system represented by the following state-space representation (in component form)

ẋ1 = x2
ẋ2 = −a0 x1 − a1 x2 + bu (2.107)
y = x1

As we have seen, in this case, a linear state-feedback controller with integral term takes the form

u(t) = −k1 x1(t) − k2 x2(t) − kI ∫₀ᵗ x1(τ) dτ (2.108)

which, since we have a controllable canonical form where x1 = y and x2 = ẏ, can also be written as

u(t) = −k1 y(t) − k2 ẏ(t) − kI ∫₀ᵗ y(τ) dτ (2.109)

Note the striking resemblance between the above expression and equation (2.106). Indeed, we have exactly kp = k1, kd = k2 and ki = kI, i.e. k1 plays the same role as the proportional gain kp, k2 the same as the derivative gain kd, and kI the same as ki.
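Since y = x1 and ẏ = x2 on plant (2.107), the two laws are not merely similar but identical under this gain identification; a quick check at a few arbitrary states (all numbers below are illustrative):

```python
# Gain correspondence between (2.106) and (2.109) on plant (2.107):
# with kp = k1, kd = k2, ki = kI, both laws compute the same input.
k1, k2, kI = 4.0, 1.5, 2.0      # state-feedback + integral gains (2.108)
kp, kd, ki = k1, k2, kI         # corresponding PID gains

def u_state_feedback(x1, x2, integ):
    return -k1 * x1 - k2 * x2 - kI * integ      # (2.108)

def u_pid(y, ydot, integ):
    return -kp * y - kd * ydot - ki * integ     # (2.106) with r = 0

# Evaluate both laws at a few arbitrary (x1, x2, integral) triples
same = all(
    u_state_feedback(x1, x2, i) == u_pid(x1, x2, i)
    for (x1, x2, i) in [(1.0, -0.5, 0.2), (0.0, 3.0, -1.0), (2.5, 2.5, 2.5)]
)
```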


A quite similar analysis can be carried out for second-order systems stabilized around a non-zero reference input signal. However, for higher-order systems, a state-feedback controller will have more gains, allowing it to control more complex dynamics (roughly corresponding to adding higher derivative terms in the PID control framework), while the PID controller will basically remain the same. Nevertheless, in many cases PID control turns out to be quite effective and sufficiently performant while remaining quite simple, hence its popularity.

