This document discusses the use of integral action in state feedback control to enable reference tracking and disturbance rejection. It introduces augmenting the plant model with an integral of the output or error to modify the closed-loop system's DC gain. This allows the output to track a constant reference signal or reject constant disturbances by driving the integral term to zero at steady state. An example demonstrates how integral action allows tracking of a reference and disturbance rejection. The concept is also extended to continuous-time systems. Integral action is shown to ensure the output asymptotically converges to the reference for constant inputs.

Lecture: Integral action in state feedback control

Automatic Control 1

Integral action in state feedback control


Prof. Alberto Bemporad
University of Trento

Academic year 2010-2011



Adjustment of DC-gain for reference tracking

Reference tracking
Assume the open-loop system is completely reachable and observable
We know that with state feedback u(k) = Kx(k) we can bring the output y(k) to zero asymptotically
How can we make the output y(k) track a generic constant set-point r(k) ≡ r?
Solution: set u(k) = Kx(k) + v(k), with v(k) = Fr(k)
We need to choose the gain F properly to ensure reference tracking
[Block diagram: linear dynamical process (A, B) with state x(k) and output y(k); the controller combines the state feedback K x(k) with v(k) = F r(k) to form u(k)]


x(k + 1) = (A + BK)x(k) + BFr(k)
y(k) = Cx(k)



To have y(k) → r we need a unit DC-gain from r to y:

C(I - (A + BK))^-1 BF = I

Assume we have as many inputs as outputs (example: u, y ∈ R)
Assume the DC-gain from u to y is invertible, that is, C Adj(I - A)B is invertible
Since state feedback doesn't change the zeros in closed loop,

C Adj(I - A - BK)B = C Adj(I - A)B

so C Adj(I - A - BK)B is also invertible
Set

F = (C(I - (A + BK))^-1 B)^-1
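The formula for F can be checked numerically; a minimal sketch (a hypothetical standalone script, using the example system and feedback gain that appear on the next slide):

```python
import numpy as np

# Example plant and state-feedback gain (poles of A+BK at 0.8 +/- 0.2j)
A = np.array([[1.1, 1.0],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-0.13, -0.3]])

# F = (C (I - (A+BK))^-1 B)^-1, the inverse of the closed-loop DC gain
I2 = np.eye(2)
G0 = C @ np.linalg.inv(I2 - (A + B @ K)) @ B
F = np.linalg.inv(G0)

# With this F, the DC gain from r to y is the identity (here scalar: 1)
dc_gain = C @ np.linalg.inv(I2 - (A + B @ K)) @ B @ F
print(F.item(), dc_gain.item())
```

For this example the computation gives F = 0.08, matching the gain used on the next slide.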



Example
Poles placed in 0.8 ± 0.2j. Resulting closed-loop:

x(k + 1) = [1.1 1; 0 0.8] x(k) + [0; 1] u(k)
y(k) = [1 0] x(k)
u(k) = -[0.13 0.3] x(k) + 0.08 r(k)

The transfer function G(z) from r to y is

G(z) = 2 / (25z^2 - 40z + 17),  and G(1) = 1

[Figure: unit step response of the closed-loop system (= evolution of the system from initial condition x(0) = [0 0]' and reference r(k) ≡ 1, k ≥ 0)]
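The step response can be reproduced with a short simulation; a sketch assuming the closed-loop equations given in the example:

```python
import numpy as np

A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-0.13, -0.3]])
F = 0.08

x = np.zeros((2, 1))          # x(0) = [0 0]'
ys = []
for k in range(40):
    ys.append((C @ x).item())
    u = K @ x + F * 1.0       # r(k) = 1 for all k
    x = A @ x + B @ u
print(ys[-1])                 # approaches 1 as k grows
```

The closed-loop poles have magnitude sqrt(0.68) ≈ 0.825, so after 40 steps the tracking error has decayed to roughly a thousandth of its transient size.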


Reference tracking

Problem: we have no direct feedback on the tracking error e(k) = y(k) - r(k)
Will this solution be robust with respect to model uncertainties and exogenous disturbances?
Consider an input disturbance d(k) (modeling for instance a non-ideal actuator, or an unmeasurable disturbance)
[Block diagram: as before, but with an input disturbance d(k) added to u(k) at the input of the plant (A, B)]



Example (cont'd)
Let the input disturbance d(k) = 0.01, k = 0, 1, . . .

[Figure: closed-loop step response with the input disturbance; the output settles at a value different from the reference]

The reference is not tracked!
The unmeasurable disturbance d(k) has modified the nominal conditions for which we designed our controller
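The steady-state offset can be verified in simulation; a sketch with the same closed loop and the constant input disturbance, also checking the value predicted by the DC-gain formula:

```python
import numpy as np

A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-0.13, -0.3]])
F = 0.08
d = 0.01                      # constant input disturbance

x = np.zeros((2, 1))
for k in range(100):
    u = K @ x + F * 1.0       # r(k) = 1
    x = A @ x + B @ (u + d)   # disturbance enters at the plant input
y_ss = (C @ x).item()

# Predicted steady state: C (I - (A+BK))^-1 B (F*r + d)
pred = (C @ np.linalg.inv(np.eye(2) - (A + B @ K)) @ B).item() * (F + d)
print(y_ss, pred)             # both differ from the reference r = 1
```

Here the output settles at 1.125 instead of 1: the feedforward gain F was computed for the nominal (disturbance-free) model.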


Integral action

Integral action for disturbance rejection


Consider the problem of regulating the output y(k) to r(k) 0 under the
action of the input disturbance d(k)
Lets augment the open-loop system with the integral of the output vector
q(k + 1) = q(k) + y(k)
|
{z
}
integral action

The augmented system is

[x(k + 1); q(k + 1)] = [A 0; C I] [x(k); q(k)] + [B; 0] u(k) + [B; 0] d(k)
y(k) = [C 0] [x(k); q(k)]

Design a stabilizing feedback controller for the augmented system

u(k) = [K H] [x(k); q(k)]
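The augmentation is mechanical to set up in code; a sketch that builds the augmented matrices for the running example and checks that the gains used in the example later in the lecture, [K H] = [-0.48 -1 -0.056], place the augmented closed-loop poles at (0.8 ± 0.2j, 0.3):

```python
import numpy as np

A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Augmented state [x(k); q(k)] with q(k+1) = q(k) + C x(k)
A_aug = np.block([[A, np.zeros((2, 1))],
                  [C, np.eye(1)]])
B_aug = np.vstack([B, np.zeros((1, 1))])

# Stabilizing gain from the example later in the lecture
KH = np.array([[-0.48, -1.0, -0.056]])
cl = A_aug + B_aug @ KH
poles = np.linalg.eigvals(cl)
print(sorted(poles, key=lambda p: p.real))
```

The characteristic polynomial of the closed-loop matrix is (z - 0.3)(z^2 - 1.6z + 0.68) = z^3 - 1.9z^2 + 1.16z - 0.204, confirming the pole placement.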


Rejection of constant disturbances


[Block diagram: input disturbance d(k) added to u(k) at the input of the plant (A, B); state feedback on x(k) and integral action on q(k), the integral of the output y(k)]

Theorem
Assume a stabilizing gain [K H] can be designed for the system augmented with integral action. Then lim_{k→+∞} y(k) = 0 for all constant disturbances d(k) ≡ d̄



Proof:
The state-update matrix of the closed-loop system is

[A 0; C I] + [B; 0] [K H]

The matrix has asymptotically stable eigenvalues by construction
For a constant excitation d(k) ≡ d̄ the extended state [x(k); q(k)] converges to a steady-state value; in particular lim_{k→∞} q(k) = q̄
Hence lim_{k→∞} y(k) = lim_{k→∞} (q(k + 1) - q(k)) = q̄ - q̄ = 0


Example (cont'd): now with integral action
Poles placed in (0.8 ± 0.2j, 0.3) for the augmented system. Resulting closed-loop:

x(k + 1) = [1.1 1; 0 0.8] x(k) + [0; 1] (u(k) + d(k))
q(k + 1) = q(k) + y(k)
y(k) = [1 0] x(k)
u(k) = -[0.48 1] x(k) - 0.056 q(k)

Closed-loop simulation for x(0) = [0 0]', d(k) ≡ 1:

[Figure: closed-loop response; after a transient the output converges to 0 despite the constant disturbance]
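The simulation shown above can be reproduced directly from these equations; a sketch:

```python
import numpy as np

A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
d = 1.0                       # constant input disturbance

x = np.zeros((2, 1))          # x(0) = [0 0]'
q = 0.0                       # integral of the output
ys = []
for k in range(80):
    y = (C @ x).item()
    ys.append(y)
    u = -0.48 * x[0, 0] - 1.0 * x[1, 0] - 0.056 * q
    x = A @ x + B * (u + d)   # disturbance enters at the plant input
    q = q + y                 # integral action
print(max(ys), ys[-1])        # transient peak, then decay toward 0
```

The output rises during the transient and then returns to zero, exactly as the theorem predicts: at steady state q(k+1) = q(k) forces y(k) = 0.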


Integral action for set-point tracking


[Block diagram: the tracking error y(k) - r(k) drives the integral action q(k); the state feedback on x(k), the integral feedback on q(k), and the disturbance d(k) sum at the input of the plant (A, B)]

Idea: use the same feedback gains (K, H) designed earlier, but instead of feeding back the integral of the output, feed back the integral of the tracking error

q(k + 1) = q(k) + (y(k) - r(k))   (integral action)



Example (cont'd)

x(k + 1) = [1.1 1; 0 0.8] x(k) + [0; 1] (u(k) + d(k))
q(k + 1) = q(k) + (y(k) - r(k))   (tracking error)
y(k) = [1 0] x(k)
u(k) = -[0.48 1] x(k) - 0.056 q(k)

[Figure: response for x(0) = [0 0]', d(k) ≡ 1, r(k) ≡ 1; the output converges to the reference]

Looks like it's working . . . but why?


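A quick check in code: same plant, same gains, but the integrator now accumulates the tracking error; a sketch:

```python
import numpy as np

A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
d, r = 1.0, 1.0               # constant disturbance and set-point

x = np.zeros((2, 1))          # x(0) = [0 0]'
q = 0.0
for k in range(80):
    y = (C @ x).item()
    u = -0.48 * x[0, 0] - 1.0 * x[1, 0] - 0.056 * q
    x = A @ x + B * (u + d)
    q = q + (y - r)           # integrate the tracking error
print(y)                      # converges to r = 1
```

At steady state q(k+1) = q(k) now forces y(k) = r(k), which is the content of the theorem on the next slide.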


Tracking & rejection of constant disturbances/set-points


Theorem
Assume a stabilizing gain [K H] can be designed for the system augmented with integral action. Then lim_{k→+∞} y(k) = r̄ for all constant disturbances d(k) ≡ d̄ and set-points r(k) ≡ r̄
Proof:
The closed-loop system

[x(k + 1); q(k + 1)] = [A + BK BH; C I] [x(k); q(k)] + [B 0; 0 -I] [d(k); r(k)]
y(k) = [C 0] [x(k); q(k)]

has input [d(k); r(k)] and is asymptotically stable by construction
For a constant excitation [d(k); r(k)] ≡ [d̄; r̄] the extended state [x(k); q(k)] converges to a steady-state value; in particular lim_{k→∞} q(k) = q̄
Hence lim_{k→∞} (y(k) - r(k)) = lim_{k→∞} (q(k + 1) - q(k)) = q̄ - q̄ = 0


Integral action for continuous-time systems


The same reasoning can be applied to continuous-time systems

ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t)

Augment the system with the integral of the output q(t) = ∫₀ᵗ y(τ) dτ, i.e.,

q̇(t) = y(t) = Cx(t)   (integral action)

The augmented system is

d/dt [x(t); q(t)] = [A 0; C 0] [x(t); q(t)] + [B; 0] u(t)
y(t) = [C 0] [x(t); q(t)]

Design a stabilizing controller [K H] for the augmented system

Implement

u(t) = Kx(t) + H ∫₀ᵗ (y(τ) - r(τ)) dτ
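The continuous-time version can be sketched with a hypothetical first-order plant and forward-Euler integration; none of these numbers come from the lecture — the plant ẋ = -x + u + d, y = x and the gains K = -1, H = -2 (placing the augmented closed-loop poles at -1 ± j) are chosen purely for illustration:

```python
# Hypothetical scalar plant xdot = a*x + b*(u + d), y = c*x (not from the lecture)
a, b, c = -1.0, 1.0, 1.0
K, H = -1.0, -2.0   # places the augmented closed-loop poles at -1 +/- 1j
d, r = 1.0, 1.0     # constant input disturbance and set-point

dt, T = 0.001, 10.0
x, q = 0.0, 0.0     # q(t) = integral of the tracking error y - r
for _ in range(int(T / dt)):
    y = c * x
    u = K * x + H * q           # u(t) = K x(t) + H q(t)
    x += dt * (a * x + b * (u + d))   # forward-Euler step of the plant
    q += dt * (y - r)                 # forward-Euler step of the integrator
print(y)            # converges to the set-point r
```

As in discrete time, the equilibrium condition q̇ = 0 forces y = r regardless of the constant disturbance d.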


English-Italian Vocabulary

reference tracking: inseguimento del riferimento
steady state: regime stazionario
set point: livello di riferimento

Translation is obvious otherwise.
