Control of Systems with Constraints
Ph.D. Thesis
by Martin Bak
Department of Automation
Technical University of Denmark
November 2000
Department of Automation
Technical University of Denmark
Building 326
DK-2800 Kongens Lyngby
Denmark
Phone: +45 45 25 35 48
First edition
Copyright
© Department of Automation, 2000
The document was typeset using LaTeX; drawings were made in Xfig and graphs were generated in MATLAB, a registered trademark of The MathWorks Inc.
This thesis is submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Ph.D.) in the subject area of electrical engineering. The work was carried out from September 1997 to November 2000 at the Department of Automation, Technical University of Denmark. Supervisors on the project were Associate Professor Ole Ravn, Department of Automation, and Associate Professor Niels Kjølstad Poulsen, Department of Mathematical Modelling.
Several people have contributed to make the project and the writing of this thesis
much more pleasant than I expected it to be. I am thankful to my supervisors for
their guidance and constructive criticism during the study, and to Claude Samson
and Pascal Morin who made my stay in 1998 at INRIA in Sophia Antipolis, France
possible. A special thanks goes to my colleague and office-mate Henrik Skovsgaard
Nielsen for many comments and valuable discussions throughout the three years.
Thanks also to Thomas Bak for proofreading an early and more challenging version
of the manuscript and for contributing to improve the standard of the thesis.
This thesis deals with the problem of designing controllers for systems with constraints. All control systems have constraints, and these should be handled appropriately, since the unintended consequences include overshoots, long settling times, and even instability. Focus is put on level and rate saturation in actuators, but aspects concerning constraints in autonomous robot control systems are also treated. The complexity of robotic systems, where several subsystems are integrated, puts special demands on the controller design.
The thesis has a number of contributions. In the light of a critical comparison of existing methods for constrained controller design (anti-windup, predictive control, nonlinear methods), a time-varying gain scheduling controller is developed. It is computationally cheap and shows good constrained closed-loop performance.
Subsequently, the architecture of a robot control system is discussed. Importance
is attached to how and where to integrate the handling of constraints with respect
to trajectory generation and execution. This leads to the layout of a closed-loop
sensor-based trajectory control system.
Finally, a study of a mobile robot is given. Among other things, a receding horizon
approach to path following is investigated. The predictive property of the controller
makes the robot capable of following sharp turns while scaling the velocities. As a
result, the vehicle may follow an arbitrary path while actuator saturation is avoided.
Contents
Preface
Abstract
Resumé
List of Tables
Nomenclature
1 Introduction
1.1 Motivation and Background
1.1.1 Constraints in Control Systems
1.2 Research in Constrained Control Systems
1.3 Contributions
1.4 Objectives and Organization of the Thesis
5 Conclusions
Bibliography
List of Figures
1.1 Input and output response to a reference step change for a PI controlled unstable system with actuator saturation (left) and without saturation (right).
1.2 General control system with various constraints, limitations, and disturbances.
1.3 Typical saturation characteristic.
2.23 Time response for the constrained system for different choices of the anti-windup gain matrix. The unconstrained response (dashed), the constrained response (solid), and the input (dotted) are shown for anti-windup gains (a) Ma, (b) Mk, (c) Mh, and (d) Ms.
2.24 Eigenvalues of FM = F − M(λ)H using the conditioning technique with cautiousness and λ varying from λ = 0 (o) to λ → ∞. Eigenvalues for the other designs Ma, Mk, and Ms (+) are also shown.
2.25 Time response for the unconstrained system with a pole placement controller (solid) and with a PID controller (dashed).
2.26 Time response for the constrained system with pole placement controller for different choices of the anti-windup gain matrix. The unconstrained response (dashed), the constrained response (solid), and the input (dotted) are shown for anti-windup gains (a) Mk, (b) Ms, (c) Mc, and (d) no AWBT.
2.27 Time response for constrained predictive controller (solid) and AWBT compensated pole placement controller (dashed), both with magnitude and rate saturation in the actuator.
2.28 Time response for output constrained predictive controller (solid) exposed to reference changes (dashed). The output is constrained such that no overshoots exist.
4.15 Scaled velocities. Top plot: forward (solid) and angular (dashed) velocity. Bottom plot: wheel velocities vl (solid) and vr (dashed). Constraints are shown with dotted lines.
4.16 Path following for different turns.
4.17 Different values of the two controller parameters: small (solid), medium (dashed), and large (dash-dotted) values.
4.18 Results from a path following experiment with the test vehicle. Reference path (dotted), measured path (solid).
4.19 The actual measured velocities (solid) compared to the desired velocity references (dashed).
4.20 Posture stabilization with exponential robust controller.
4.21 Noisy y2-measurements (dots), x1 (dashed), x2 (dotted), and x3 (solid).
4.22 (a) Convergence of the estimation error for the time-varying (solid) and the constant (dashed) parameter. (b) Steady-state for the time-varying (solid) and constant (dashed) parameter, and measurement noise (dotted).
List of Tables
4.1 Condition numbers for the observability matrix for different trajectories.
The following lists contain the most important symbols, abbreviations, and terminology used in the text.
Symbols
t    Continuous time
k    Discrete time
q⁻¹  Backward shift or delay operator: q⁻¹ f(k) = f(k−1)
T    Sampling period or sampling time
tk   Sampling instant: tk = kT
u    Controller output
û    Process input, possibly constrained
y    Process output
x    Process state vector
Δ    Differencing operator: Δ = 1 − q⁻¹
Abbreviations
Terminology
In terms of time responses for dynamic systems the following cases are considered:
Chapter 1
Introduction
Constraints are present in all control systems and can have damaging effects on the
system performance unless accounted for in the controller design process. Most
often constraints are identified as actuator magnitude and rate saturations, or output and state variable constraints. This thesis considers the above-mentioned constraints but also broadens the scope to include issues such as nonholonomic constraints and limited/constrained sensor packages and readings.
In some cases constraints are handled in a static way by over-designing the system components such that saturation or other limitations are unlikely to be activated during normal operation. From a practical point of view, however, this is highly inefficient: it unnecessarily increases the cost of the overall system and is not a recommendable approach. For example, by choosing a larger, more powerful actuator than needed, saturation is avoided. However, if at some point the production speed or the load disturbances are increased, constraints may again be of concern and the over-design has failed. Moreover, applications where the production speed is limited by the controller performance (e.g. a welding robot) will look for time optimal solutions and hence exploit the full range of the system's actuators.
Figure 1.1. Input and output response to a reference step change for a PI controlled unstable system with actuator saturation (left) and without saturation (right).
Figure 1.2. General control system with various constraints, limitations, and disturbances. (Block diagram: reference → controller(s) → actuator → plant → output, with sensors and data processing in the feedback path; annotations include controller selection criteria, magnitude and rate limits, work space constraints, model errors, safety limits, disturbances, measurement noise, miscalibration, and time delays.)
In the following, some of the issues in the figure are discussed in detail.
Actuators
Control systems rely on actuators to manipulate the state or output of the controlled
process such that performance requirements are met. In all physical systems actu-
ators are subject to saturation as they can only deliver a certain amount of force
(or equivalent). Most frequently encountered are level or magnitude constraints, where the actuator can only apply a value between an upper and a lower limit. Less frequently accounted for are rate constraints, where the change in the actuator output is limited.
Two examples of constrained actuators are a DC motor and a valve. For control of
rotary motion, a common actuator is the DC motor. Such a device is constrained
in input voltage and current which without load is equivalent to a velocity and
acceleration. Another common device is the valve. Here the level limits are inher-
ently given as fully closed or fully opened. The rate constraints are governed by
the mechanism (e.g. a motor) driving the valve.
Figure 1.3 shows a typical saturation characteristic for an input level or magnitude
saturated actuator.
(Figure 1.3: the saturation characteristic û = sat(u), linear between the limits umin and umax.)
For a vector-valued input u, the saturation acts elementwise:

sat(u) = ( sat(u1), sat(u2), …, sat(un) )ᵀ.   (1.2)
The values umin and umax are chosen to correspond to actual actuator limits either
by measuring the actuator output or simply by estimation. Input rate saturations can in a similar way be modelled by applying the differencing operator Δ = 1 − q⁻¹ to the saturation function:

sat(Δu) = Δumax   if Δu ≥ Δumax
sat(Δu) = Δu      if Δumin < Δu < Δumax   (1.3)
sat(Δu) = Δumin   if Δu ≤ Δumin,

where Δumax and Δumin are the upper and lower limits of the rate constraints, respectively.
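As a concrete illustration, the magnitude saturation of (1.2) and the rate saturation of (1.3) can be sketched in a few lines of code; the numerical limits below are illustrative assumptions, not values from the text.

```python
import numpy as np

def sat(u, u_min, u_max):
    # Elementwise magnitude saturation as in (1.2)
    return np.clip(u, u_min, u_max)

def rate_sat(u, u_prev, du_min, du_max):
    # Rate saturation as in (1.3): limit the increment du = u - u_prev
    du = np.clip(u - u_prev, du_min, du_max)
    return u_prev + du

# A large commanded step is first magnitude-limited, then rate-limited
u_hat = rate_sat(sat(5.0, -2.0, 2.0), u_prev=0.0, du_min=-0.5, du_max=0.5)
print(u_hat)  # 0.5
```

In a sampled-data loop, `rate_sat` would be applied at every sample with `u_prev` set to the previously applied input.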
Very few general theoretical results exist on stability for actuator constrained systems; the following is a well-known exception:
In other cases, restrictions on the initial conditions and external disturbances (reference changes, noise) can ensure closed-loop stability for the system.
Constraints can be classified as either hard or soft. Hard constraints are characterized by the requirement that no violation of the limits is permissible or feasible at any time during operation, whereas soft constraints may temporarily be allowed to violate the specified limits.
Sensor package
Mobility/Steerability
Some applications must take into account constraints in the state of the system
and in the working environment, that is state and output constraints. In robotics,
manipulators and mobile robots, traveling from one point to another, frequently
encounter obstacles which impose motion constraints.
Some constraints are self-imposed by the designer or user of the system. For safety
reasons it may be advantageous to limit states such as temperatures, pressures, ve-
locities, turning rates, and voltages. Furthermore, in order to avoid unnecessary
mechanical wear and tear, constraints can be imposed on movements causing sud-
den jerks and yanks.
Many control strategies are model-based², and their success is therefore limited by the accuracy of the models. Error sources include inadequate calibration, inaccessible physical parameters, unmodelled dynamics, and linearization of nonlinear dynamics.
¹The robot is restricted in its mobility, i.e. the robot can move in 3 dimensions (position and orientation) but can only directly control two (heading and angular speed).
²Model-based design implies that a mathematical dynamical model of the process is used in the design of a controller. An example is predictive control, where the process model predicts the system's behavior and enables the controller to select a control signal based on what the future will bring.
1.2 Research in Constrained Control Systems
The term constrained control system usually refers to control systems with input, output or state constraints. The research within this field primarily focuses on otherwise linear plants. Two well-known techniques have received much attention:
it may choose to keep the velocity but deviate from the nominal path. Either way has advantages, but the selected behavior should be a design option. This suggests handling the constraints in the reference or trajectory generator instead of in the low-level motion controllers. This way the system has direct control of how the saturation compensation affects the resulting trajectory.
How to design and implement such a closed-loop event-based trajectory scheme
which can deal with both constraints and unexpected events is a difficult question.
Challenges include development of algorithms and stability investigations of the
required feedback loops.
1.3 Contributions
The architecture and building blocks of a robotic control system are pre-
sented in the context of how and where to handle various constraints in the
system.
The thesis is organized with three main chapters concerning the above-mentioned issues. Details are as follows:
Chapter 5. Conclusions
Conclusions and future work recommendations are given.
Chapter 2
Controller Design for Constrained Systems
Predictive control and nonlinear control are included in the group of strategies classified as embedded constraint handling. This reflects that the design is not just an extra compensation loop added to an original controller. However, it does not exclude the constrained controller from being a linear controller during normal unconstrained operation or from originating from a linear controller.
Constraints exist in all control systems with actuator saturation being by far the
most common one. This explains why most approaches described in the literature
are concerned with actuator saturation.
It has been recognized for many years that actuator saturations, which are present in
most control systems, can cause undesirable effects such as excessive overshoots,
long settling times, and in some cases instability. These phenomena were initially
observed for systems controlled by conventional PID control but will be present in
any dynamical controller with relatively slow or unstable states as pointed out in
Doyle et al. (1987).
The problem arises when an actuator saturates, resulting in a mismatch between the controller output and the system input. This means that the feedback loop is broken: changes in the controller output (outside the linear range of the actuator) do not affect the system; the loop gain is zero. The integrator in the PID controller, for example, is an unstable state, and saturations can cause the integrator
to drift to undesirable values as the integrator will continue to integrate a tracking
error and produce even larger control signals. In other words, we have integra-
tor windup which can lock the system in saturation. In process control, integrator
windup is often referred to as reset windup since integral control is called reset
control1 . Control techniques designed to reduce the effects from integrator windup
are known as anti-windup (AW).
¹Historical note (Franklin et al., 1991): In control applications before integral control became standard, an operator would remove a potential steady-state output error by resetting the reference
2.2 Anti-windup and Bumpless Transfer
Another control problem is often exposed in parallel with windup, at least from a
theoretical point of view. When substituting one controller with another in a con-
trol system, there may, at the moment of transfer, exist a mismatch between the
output of the two controllers causing bumps on the output of the system. This is
equivalent to the case with a saturating actuator where we had the same mismatch.
The problem of retaining a smooth control signal and system output when switch-
ing controllers is known as bumpless transfer (BT). Controller substitution is, from
a practical point of view, interesting for a number of reasons (Graebe and Ahlen,
1996):
Most often the two areas are referred to jointly as anti-windup and bumpless transfer (AWBT), considering the problem where a nonlinearity operates on the controller output, be it a saturation element, relay, dead-zone, hysteresis, or a controller substitution due to switching or logic. Figure 2.1 shows a common system notation for AWBT schemes, where u is the controller output and û is the actual system input. In short, the AWBT problem can be expressed as keeping the controller output u close to the system input û under all operational conditions.
As a contrast to predictive control, which is a control strategy capable of handling actuator saturation as an embedded part of the controller (see Section 2.3), an AWBT scheme is a modification to an already designed, possibly nonlinear, controller, and the AWBT modification will only interfere when saturation or substitution occurs. During normal operation, the closed-loop performance is equal to that of the original design.
¹(cont.) to a new value, reducing the error. For example, if the reference is 1 and the output reads 0.9, the operator would reset the reference to, say, 1.1. With integral control, the integrator automatically does the resetting.
Figure 2.1. Common system notation for AWBT schemes (controller → nonlinearity → plant). The unconstrained controller output is denoted u while the (possibly) constrained process input is denoted û. The controller may be 1 or 2 DoF.
2. Fit an AWBT scheme on top of the controller. This will (should) not alter the existing controller during normal operation.
During the last 40 years or so, a substantial number of papers have launched a wide range of different approaches to AWBT. Initially the propositions were, like the nature of the problem itself, all based on practical problems experienced during operation of control systems and would therefore relate only to specific controllers and systems. In the late eighties, general methods with respect to the controller
and the system evolved. These include the classical anti-windup (Fertik and Ross,
1967; Astrom and Wittenmark, 1990), the Hanus Conditioning Technique (Hanus,
1980; Hanus et al., 1987) and refinements (Walgama et al., 1992; Hanus and Peng,
1992), and the observer-based method (Astrom and Rundqwist, 1989; Walgama
and Sternby, 1990). Recently, unifying frameworks embracing all existing linear
time-invariant AWBT schemes have been put forward (Kothare et al., 1994; Ed-
wards and Postlethwaite, 1996, 1998; Peng et al., 1998). In Graebe and Ahlen
(1994, 1996) bumpless transfer is investigated with comments on the practical dif-
ferences between anti-windup and bumpless transfer.
As already indicated, a large number of ways exist to keep the controller output u and the system input û close. The following lists some basic guidelines for dealing with actuator saturation:
²Some AWBT schemes are specific to certain controller structures, such as a PID controller, and will not work for a nonlinear controller.
Of course, the designer does not usually choose the system, so guideline number one should be regarded more as a caution. Saturation compensation is particularly important in this case.
Specifically regarding AWBT design, the following three criteria should be kept in mind during the design process:
The closed-loop system must be stable, also under saturation. This, of course,
assumes that the system is globally stabilizable with the available actua-
tor range which is only guaranteed for open-loop stable systems (Sussmann
et al., 1994).
Stability considerations for saturated systems are somewhat complex and only few
authors have attempted to analyze the resulting closed-loop behavior. Some no-
table references are: Kothare and Morari (1997) give a review and unification of
various stability criteria such as the circle criterion, the small gain theorem, describ-
ing functions, and the passivity theorem; Edwards and Postlethwaite (1997) apply
the small gain theorem to their general AWBT framework; Astrom and Rundqwist
(1989) make use of describing function analysis and the Nyquist stability crite-
rion to determine stability for the observer-based approach; Niu and Tomizuka
(1998) are application oriented and are concerned with Lyapunov stability and ro-
bust tracking in the presence of actuator saturation; and finally, Kapoor et al. (1998)
present a novel synthesis procedure and consider stability.
³A conditionally stable system is characterized as one in which a reduction in the loop gain may cause instability.
A number of more or less ad hoc methods exist and will briefly be described in the
following.
An obvious drawback of this method is that the control signal may lock in satura-
tion indefinitely since the integrator is not being updated. A solution to this prob-
lem would be to stop the integration only when the control signal actually becomes
more saturated and to update the integrator if the control signal de-saturates.
This, of course, depends on the sign of the tracking error and on which actuator limit is being violated, see Table 2.1.
Table 2.1. Stop integration at saturation depending on saturation violation and tracking
error.
Incremental Algorithms
Usually, a controller calculates an absolute value that is fed to the actuator, and as seen, this may cause windup. An alternative is to implement an incremental algorithm that only evaluates the increments in the control signal since the previous sample. Basically, the integrator is moved outside of the controller. This is, for example, inherent when using a stepper motor since it accepts incremental signals. Note that only the risk of windup due to the integrator is removed, whereas any slow or unstable state remaining in the incremental controller may still cause problems. For a PID controller, though, moving the integrator outside of the controller is probably sufficient.
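A minimal sketch of such an incremental (velocity-form) PI algorithm is shown below; the controller only outputs increments, and the integration, with its limits, lives in the actuator. The gains and limits are illustrative assumptions.

```python
def pi_increment(e, e_prev, K, tau_i, T):
    # Control increment du(k) = K*(e(k) - e(k-1)) + (K*T/tau_i)*e(k);
    # no absolute control value (and hence no integrator) is stored here
    return K * (e - e_prev) + (K * T / tau_i) * e

# The actuator integrates the increments and enforces its own limits,
# so there is no controller state left to wind up
u, e_prev = 0.0, 0.0
for e in [1.0, 0.8, 0.5]:
    u = min(max(u + pi_increment(e, e_prev, K=2.0, tau_i=1.0, T=0.1), -1.0), 1.0)
    e_prev = e
```

The saturation is applied where the integration happens, which is exactly why the integrator cannot drift away from the applied input.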
Conditional Integration
Astrom and Rundqwist (1989) mention this method. The idea is to apply integral action only when the tracking error is within certain specified bounds. Intuitively, the bounds can be set such that the actuator does not saturate given the instantaneous or predicted value of the system output. Basically, the limits on the actuator are translated into limits on the tracking error. These limits form the proportional band, and the integration is then conditioned on the tracking error being within the proportional band. A disadvantage of this method is the possibility of chattering, though the introduction of a hysteresis or similar can help reduce it.
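The idea can be sketched as follows; the proportional band `e_band` is an assumed tuning quantity, derived in practice from the actuator limits as described above.

```python
def pi_conditional(e, I, K, tau_i, T, e_band):
    # Integrate only while the tracking error lies inside the
    # proportional band; otherwise the integrator state I is frozen
    if abs(e) <= e_band:
        I = I + (K * T / tau_i) * e
    return K * e + I, I

u, I = pi_conditional(0.5, 0.0, K=2.0, tau_i=1.0, T=0.1, e_band=1.0)   # integrates
u2, I2 = pi_conditional(2.0, I, K=2.0, tau_i=1.0, T=0.1, e_band=1.0)   # I frozen
```

Adding a hysteresis would amount to using two different bands for entering and leaving the integrating mode.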
(Figure: PI controller with tracking anti-windup: the gain K and integrator 1/(τi s) act on e = r − y, and the tracking error et = û − u is fed back through the gain 1/τt to the integrator input.)
The PI controller computes

u = K ( e + (1/τi) ∫₀ᵗ e(τ) dτ )   (2.2)
e = r − y,   (2.3)
applies to integral action since the transfer function from e to u remains unstable if
the controller has any unstable poles besides the integrator.
No theoretical guidelines exist for choosing the gain τt. The designer must rely exclusively on simulations and on-line tuning based on the closed-loop nonlinear system response. A simple choice would be to select the AW gain τt = τi. This simplifies (2.8) to

u(s) = K e(s) + 1/(1 + τi s) û(s).   (2.10)
This alternative AWBT implementation is shown in Figure 2.3 for a scalar PI controller.
(Figure 2.3: the gain K acts on e = r − y, and the saturated input û is fed back through the filter 1/(1 + τi s).)
For a PID controller of the form u(s) = P(s)I(s)D(s)e(s), the following decomposition can be carried out:

u(s) = K · (1 + τi s)/(τi s) · (1 + τd s)/(1 + ατd s) e(s)   (2.11)

⇒ u(s) = K ( 1/(τi s) + 1 + (1 − α)τd/τi + (1 − α)(1 − ατd/τi) · τd s/(1 + ατd s) ) e(s).   (2.12)

With derivative action applied to the measurement only and the tracking anti-windup term added, the implemented controller becomes

u(s) = K ( r(s) − y(s) + 1/(τi s) (r(s) − y(s)) − τd s/(1 + ατd s) y(s) ) + 1/(τt s) (û(s) − u(s)),   (2.13)

where τi, τd, τt, and α are tunable parameters.
An example of a discretization of this controller with the AWBT scheme and sampling time T is:

P(k) = K (r(k) − y(k))   (2.14)
D(k) = τd/(τd + T) D(k−1) − K τd/(τd + T) (1 − q⁻¹) y(k)   (2.15)
I(k+1) = I(k) + (K T/τi) e(k) + (T/τt) (û(k) − u(k))   (2.16)
u(k) = P(k) + I(k) + D(k)   (2.17)
û(k) = sat(u(k)),   (2.18)

which attractively implements the proportional, derivative and integral terms separately.
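Equations (2.14)-(2.18) translate directly into code. The sketch below is one possible arrangement; the gains and limits in the example call are illustrative assumptions.

```python
def pid_awbt_step(state, r, y, K, tau_i, tau_d, tau_t, T, u_min, u_max):
    # One sample of the discrete PID with back-calculation AWBT,
    # following (2.14)-(2.18); state = (I, D, y_prev)
    I, D, y_prev = state
    P = K * (r - y)                                                       # (2.14)
    D = tau_d / (tau_d + T) * D - K * tau_d / (tau_d + T) * (y - y_prev)  # (2.15)
    u = P + I + D                                                         # (2.17)
    u_hat = min(max(u, u_min), u_max)                                     # (2.18)
    I = I + (K * T / tau_i) * (r - y) + (T / tau_t) * (u_hat - u)         # (2.16)
    return u_hat, (I, D, y)

u_hat, state = pid_awbt_step((0.0, 0.0, 0.0), r=1.0, y=0.0,
                             K=1.0, tau_i=1.0, tau_d=0.0, tau_t=1.0,
                             T=0.1, u_min=-0.5, u_max=0.5)
```

Note that I(k+1) is updated after the saturation, so the back-calculation term (T/τt)(û − u) is zero whenever the actuator is inside its limits.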
A number of approaches to AWBT use state space theory. Astrom and Rundqwist (1989) describe an observer-based approach to anti-windup. Figure 2.4 shows a traditional state space controller consisting of a full-order state observer and a state feedback. When the plant input û is different from the controller output u (upon saturation or transfer) the windup problem is easily understood: the estimated states given by the observer will not reflect the control signal fed to the plant and therefore will not reflect the actual system states. This may lead to inconsistency and an incorrect control signal, resulting in windup. Given such a controller structure it is simple to avoid the problem by feeding back the saturated control signal û to the observer rather than the controller output u. The estimated states will then be correct, the consistency between observer and system states is intact, and windup is avoided.
The result is given by:

x̂' = A x̂ + B û + K (y − C x̂)
u = L x̂   (2.19)
û = sat(u),

(Figure 2.4: observer and state feedback L in the controller, followed by actuator and plant.)
where x̂ is the estimated state vector, (A, B, C) describes the system dynamics, K is the observer gain, and L is the state feedback gain.
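A forward-Euler sketch of (2.19) follows; the essential point is only that the saturated û, not u, drives the observer. The matrices, limits, and the sign convention of L are illustrative assumptions.

```python
import numpy as np

def observer_aw_step(x_hat, y, A, B, C, K, L, u_min, u_max, T):
    # One Euler-discretized step of (2.19); the observer is driven by
    # the saturated input u_hat so its state stays consistent with the plant
    u = float(L @ x_hat)
    u_hat = min(max(u, u_min), u_max)
    x_hat = x_hat + T * (A @ x_hat + B * u_hat + K * (y - float(C @ x_hat)))
    return u_hat, x_hat

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])
K = np.array([1.0, 1.0])
L = np.array([1.0, 1.0])
u_hat, x_hat = observer_aw_step(np.zeros(2), 1.0, A, B, C, K, L, -10.0, 10.0, 0.1)
```

Replacing `u_hat` by `u` in the observer update reproduces the windup-prone configuration described above.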
Not all state space controllers are given as a combination of an observer and a state feedback. As described in Astrom and Rundqwist (1989), the controller in (2.19) has high frequency roll-off, meaning the transfer function of the controller is strictly proper⁴. The technique can be extended to a general state space controller with constant high frequency gain, given by the following structure:

xc' = F xc + Gr r − Gy y   (2.20)
u = H xc + Jr r − Jy y,   (2.21)

where xc is the state vector for the controller and F, Gr, Gy, H, Jr, and Jy represent the dynamics of the controller.
To avoid windup, the controller is changed by feeding back the difference between u and û to the controller. This results in the following controller:

xc' = F xc + Gr r − Gy y + M (û − u)   (2.22)
    = (F − MH) xc + (Gr − MJr) r − (Gy − MJy) y + M û   (2.23)
u = H xc + Jr r − Jy y   (2.24)
û = sat(u),   (2.25)
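For completeness, (2.22)-(2.25) in discretized sketch form (forward Euler, single input); all numerical matrices below are illustrative assumptions.

```python
import numpy as np

def awbt_controller_step(xc, r, y, F, Gr, Gy, H, Jr, Jy, M, u_min, u_max, T):
    # One Euler step of the AWBT-compensated controller (2.22)-(2.25)
    u = H @ xc + Jr @ r - Jy @ y                                 # (2.24)
    u_hat = np.clip(u, u_min, u_max)                             # (2.25)
    xc = xc + T * (F @ xc + Gr @ r - Gy @ y + M @ (u_hat - u))   # (2.22)
    return u_hat, xc

F, Gr, Gy = np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]])
H, Jr, Jy = np.array([[1.0]]), np.array([[1.0]]), np.array([[0.0]])
M = np.array([[0.5]])
u_hat, xc = awbt_controller_step(np.zeros(1), np.array([2.0]), np.array([0.0]),
                                 F, Gr, Gy, H, Jr, Jy, M, -1.0, 1.0, 0.1)
```

With M = 0 the compensation term vanishes and the original controller (2.20)-(2.21) is recovered.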
Figure 2.5. 1 DoF state space controller with observer-based AWBT compensation.
For an input-output RST controller structure, a similar result exists (Walgama and
Sternby, 1990):
In Astrom and Rundqwist (1989) no design guidelines are given on how to select the gain M except for the fact that F − MH should be stable. A more direct design procedure was recently proposed in Kapoor et al. (1998). The idea is to analyze the
where (A, B, C) and (F, G, H, J) represent the dynamics of the system and the controller, respectively. The constrained closed-loop system yields:

z' = [ A + BJC  BH ; GC  F ] z + [ BJ ; G ] r + [ B ; M ] (û − u)   (2.32)
   = Acl z + Bcl r + Mcl (û − u)   (2.33)
u = Kcl z + J r   (2.34)
y = Ccl z,   (2.35)

with deviations from the unconstrained trajectory defined as

z̃ = z − zu,  ũ = u − uu,  ỹ = y − yu.   (2.36)
Suppose Acl can be Schur decomposed (Acl = U S U⁻¹) and the Schur matrix S has real diagonal matrix blocks. Given a matrix T = [0 I] U⁻¹, where I is an identity matrix with as many rows and columns as controller states nc, Kapoor et al. (1998) select the gain as

M = T2⁻¹ T1 B,   (2.41)

where T2 is the last nc columns of T and T1 the remaining. This makes the closed-loop system asymptotically stable. Note that the choice of M is not necessarily unique or optimal in terms of performance but only chosen from a stabilizing point of view.
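The construction can be sketched with SciPy's real Schur decomposition. The closed-loop matrix below is an illustrative assumption (two plant states, one controller state), and only the defining identity T1 B = T2 M of (2.41) is exercised, since sign and ordering conventions may differ from the presentation here.

```python
import numpy as np
from scipy.linalg import schur

def kapoor_gain(A_cl, B, n_c):
    # M = T2^{-1} T1 B as in (2.41), with T = [0 I] U^{-1}
    # and A_cl = U S U^{-1} a real Schur decomposition
    n = A_cl.shape[0]
    S, U = schur(A_cl, output='real')  # A_cl = U @ S @ U.T, U orthogonal
    T = np.hstack([np.zeros((n_c, n - n_c)), np.eye(n_c)]) @ np.linalg.inv(U)
    T1, T2 = T[:, :n - n_c], T[:, n - n_c:]
    return np.linalg.solve(T2, T1 @ B)

A_cl = np.array([[-1.0, 0.2, 0.0],
                 [ 0.0, -2.0, 0.5],
                 [ 0.1, 0.0, -3.0]])   # assumed stable closed-loop matrix
B = np.array([[1.0], [0.5]])           # plant input matrix (2 states, 1 input)
M = kapoor_gain(A_cl, B, n_c=1)
```

The construction requires T2 to be invertible, which may fail if the Schur form has a 2x2 block straddling the partition; the real-eigenvalue assumption stated above avoids this.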
The controller output is assumed to equal the plant input due to the modified reference. Hanus et al. (1987) assume present realizability, giving

u = H xcʳ + Jr r − Jy y,   (2.44)

when using the correct states xcʳ resulting from applying the realizable reference rʳ. Subtracting (2.44) from (2.43) yields:

û − u = Jr (rʳ − r).   (2.45)
This is called the AWBT conditioned controller because the controller is conditioned to return to normal mode as soon as it can (Hanus et al., 1987). Refer to Figure 2.6 for a schematic view of the method.
Figure 2.6. Hanus conditioning technique. C(s) represents the controller transfer function; the reference r is modified to rʳ through feedback of û − u via Jr⁻¹ before entering C(s).
Comparing (2.47) with (2.23) shows that the Hanus conditioning technique is a special case of the observer-based approach with M = Gr Jr⁻¹.
The matrix Jr has to be nonsingular, i.e. no time delay can exist in the controller from reference to output (Hanus and Peng, 1992; Walgama et al., 1992).
The case with a controller with time delays (Jr singular) is handled in Hanus and
Peng (1992) where the conditioning technique is modified for a general time delay
structure.
Walgama et al. (1992) propose two modifications to the technique to overcome its three inherent problems. The first approach introduces cautiousness so that the change in the modified reference is made smoother and a nonsingular Jr is guaranteed. This is done by introducing a tuning parameter λ as follows:
rʳ = r + (Jr + λI)⁻¹ (û − u),  0 ≤ λ < ∞,   (2.50)

which leads to the controller:

xc' = (F − Gr(Jr + λI)⁻¹H) xc + λ Gr(Jr + λI)⁻¹ r − (Gy − Gr(Jr + λI)⁻¹Jy) y + Gr(Jr + λI)⁻¹ û   (2.51)
u = H xc + Jr r − Jy y   (2.52)
û = sat(u).   (2.53)
The parameter λ indicates the degree of cautiousness: λ = 0 results in the original method, and λ → ∞ results in rʳ = r, whereby the effect of the conditioning is completely removed. The method is referred to as the Conditioning Technique with Cautiousness.
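The modified reference (2.50) is essentially a one-liner; the numbers below are illustrative, and the example only exercises the two limiting behaviors noted above.

```python
import numpy as np

def realizable_reference(r, u, u_hat, Jr, lam):
    # Cautious realizable reference (2.50): r' = r + (Jr + lam*I)^{-1} (u_hat - u)
    return r + np.linalg.solve(Jr + lam * np.eye(Jr.shape[0]), u_hat - u)

Jr = np.array([[2.0]])
r, u, u_hat = np.array([1.0]), np.array([3.0]), np.array([2.0])
rr_hanus = realizable_reference(r, u, u_hat, Jr, lam=0.0)   # original conditioning
rr_caut = realizable_reference(r, u, u_hat, Jr, lam=1e6)    # cautious limit, r' -> r
```

For λ = 0 the original Hanus modification is recovered, while a very large λ leaves the reference essentially untouched.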
A second extension by Walgama et al. (1992) leads to the so-called Generalized
Conditioning Technique (GCT) which performs conditioning on a filtered reference
signal rf instead of directly on the reference r. Walgama et al. (1992) use an RST controller structure and modify the T-polynomial, but here we shall adopt the state space approach used in Kothare et al. (1994) as it stays close to the original presentation of the conditioning technique.
Let the 2 DoF state space controller with the filtered reference be defined by

xc' = F xc − Gy y   (2.54)
xf' = Ff xf + Gf r   (2.55)
u = H xc + Hf xf + Jf r + Jy y.   (2.56)

The realizable reference follows from the direct feedthrough Jf:

rʳ = r + Jf⁻¹ (û − u).   (2.57)

Replacing r in (2.55) with this expression for rʳ yields the following resulting controller:

xc' = F xc − Gy y   (2.58)
xf' = (Ff − Gf Jf⁻¹Hf) xf − Gf Jf⁻¹H xc + Gf Jf⁻¹ û − Gf Jf⁻¹Jy y   (2.59)
u = H xc + Hf xf + Jf r + Jy y   (2.60)
û = sat(u).   (2.61)
As pointed out by Walgama et al. (1992) no guidelines are available for choosing
the filter in terms of stability and closed-loop performance.
Several authors have attempted to set up a general unifying framework for existing AWBT methods. The benefits from such a unification are many. A unification would open up for establishing a general analysis and synthesis theory for AWBT schemes, with connection to issues such as how design parameters influence stability, robustness, and closed-loop performance. Moreover, the designer can compare and contrast existing methods.
The first strides towards a generalization were taken in Walgama and Sternby (1990), who identified an observer inherent in a number of existing schemes (classical, observer-based, Hanus conditioning, and others). This allowed the authors to generalize the considered schemes as special cases of the observer-based approach.
In Kothare et al. (1994) a more methodical framework was put forward which unified all known linear time-invariant AWBT schemes. A short introduction will be given here. The parameterization is in terms of two constant matrices Λ_1 and Λ_2, as opposed to the one parameter M of the observer-based approach. This adds freedom to the design at the expense of clarity and usability. Consider first the following state space notation

28 Chapter 2 / Controller Design for Constrained Systems

C(s) = H (sI − F)^{-1} G + J = [ F  G ; H  J ]   (2.62)

[ C_1(s)  C_2(s) ] = [ F  G_1  G_2 ; H  J_1  J_2 ],   (2.63)
representing a 1 DoF and a 2 DoF controller, respectively. For a controller with dynamics described by (F, G, H, J), Kothare et al. (1994) propose the AWBT conditioned controller given by

(Figure: the general AWBT compensation scheme, with controller C, plant G, saturation û = sat(u), and AWBT compensator R.)

C̄(s) = [ C̄_1(s)  C̄_2(s) ]
      = [ H̄ (sI − F̄)^{-1} Ḡ_1 + J̄_1   H̄ (sI − F̄)^{-1} Ḡ_2 + J̄_2 ],   (2.68)
Table 2.2 shows the resulting R compensators for some of the mentioned techniques. The framework does not necessarily represent the way the various techniques should be implemented in practice, since the choice of R may cause unstable pole/zero cancellations which are not necessarily exact. However, the framework is valuable as a platform for comparison of methods and establishment of stability results.
Scheme                                       R compensator
Classical anti-windup                        1/(t_t s)
Hanus conditioning technique                 [ F  G_r J_r^{-1} ; H  0 ]
Conditioning technique with cautiousness     [ F  G_r (J_r + μI)^{-1} ; H  0 ]
Observer-based anti-windup                   [ F  M ; H  0 ]
Framework by Kothare et al. (1994)           [ F  K_1 K_2^{-1} ; H  K_2 − I ]
Most authors simply state that anti-windup and bumpless transfer are the same. This section will elaborate on some similarities and discrepancies and illustrate specific bumpless transfer techniques. Consider the following two statements:

Anti-windup should keep the controller output close to the plant input under all operational circumstances in order to avoid performance degradation and stability problems upon saturation.

Bumpless transfer should keep the latent (inactive) controller's output close to the active controller's output (which is the plant input) in order to avoid transients after switching from one controller to another.

The techniques considered here initialize the latent controller's states such that at the point of transfer the latent controller's output u_L equals the active controller's output u_A. So, basically, using the words of Graebe and Ahlén (1996), the essence of bumpless transfer is the desire to compute the state of a dynamical system so its output will match another signal's value.
Initial Conditions

For a PI controller, with only one state, the initial conditions are easily determined. Consider the discrete-time PI controller

u_L(k) = (s_0 + s_1 q^{-1}) / (1 − q^{-1}) e(k).   (2.70)

At the time of transfer t = kT we aim to have u_L(k) = u_A, where u_A is the active controller's output. Trivial calculations give the corresponding expression for u_L(k−1), which is the only unknown quantity.

The signals e(k) and e(k−1) are infected by noise from the measurement y, and generally, for higher order controllers, the noise sensitivity will increase.
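The required value of u_L(k−1) follows directly from (2.70): writing the PI controller as u_L(k) = u_L(k−1) + s_0 e(k) + s_1 e(k−1), the previous output must be set to u_A − s_0 e(k) − s_1 e(k−1). A minimal sketch with hypothetical coefficients:

```python
def pi_transfer_init(u_A, e_k, e_km1, s0, s1):
    """Previous output u_L(k-1) that makes u_L(k) = u_A at transfer,
    for the PI controller (2.70): (1 - q^-1) u_L = (s0 + s1 q^-1) e."""
    return u_A - s0 * e_k - s1 * e_km1

# hypothetical controller coefficients and measured control errors
s0, s1 = 1.2, -0.9
e_k, e_km1 = 0.3, 0.5
u_A = 2.0

u_prev = pi_transfer_init(u_A, e_k, e_km1, s0, s1)
u_L = u_prev + s0 * e_k + s1 * e_km1   # step the PI once from the set state
```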
For a general higher order controller, the values of all the controller states cannot be determined from the output at the point of transfer alone. For an n_c-order controller we need a data history of n_c samples. Bumpless transfer requires that the controller states are initialized such that
[ u_L(k)  u_L(k−1)  …  u_L(k−n_c+1) ]^T = [ u_A(k)  u_A(k−1)  …  u_A(k−n_c+1) ]^T,   (2.72)

or

U_L = U_A.   (2.73)
the state x_c must be determined at the time of transfer such that equality (2.73) is fulfilled. To do so we need to calculate x_c(k−i|k), i ≥ 0, which gives previous state vectors based on the current one. From (2.74) we get

x_c(k−1|k) = F^{-1} (x_c(k) − G e(k−1)),   (2.76)

and trivial iterations give the following general expression

x_c(k−i|k) = F^{-i} x_c(k) − Σ_{j=0}^{i−1} F^{−(i−j)} G e(k−j−1),   x_c(k|k) = x_c(k).   (2.77)

Using (2.75) we have

u_L(k−i|k) = H x_c(k−i|k) + J e(k−i).   (2.78)

If (2.77) and (2.78) are put into (2.73) we get

U_A = 𝓗 x_c(k) + 𝓙 E,   (2.79)
where

𝓗 = [ H ; H F^{-1} ; … ; H F^{−(n_c−1)} ],   E = [ e(k) ; e(k−1) ; … ; e(k−n_c+1) ],   (2.80)

and

𝓙 = [ J    0               0              …   0
      0    J − H F^{-1} G   0              …   0
      0    −H F^{-2} G      J − H F^{-1} G  …   0
      ⋮    ⋮                               ⋱   ⋮
      0    −H F^{−(n_c−1)} G   …   −H F^{-2} G   J − H F^{-1} G ].   (2.81)
Now, the state vector is easily determined as

x_c(k) = 𝓗^{-1} (U_A − 𝓙 E),   (2.82)

given full rank of 𝓗.
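The computation (2.76)-(2.82) can be sketched as follows for a SISO controller x_c(k+1) = F x_c(k) + G e(k), u(k) = H x_c(k) + J e(k). The matrices and error history below are hypothetical; the recovered state is checked against a forward simulation.

```python
import numpy as np

def recover_state(F, G, H, J, u_hist, e_hist):
    """Solve (2.79)/(2.82) for x_c(k): build the rows H F^-i of (2.80) and
    move the known error terms of (2.81) to the right-hand side.
    u_hist[i] = u(k-i), e_hist[i] = e(k-i), i = 0..nc-1."""
    nc = F.shape[0]
    Fi = np.linalg.inv(F)
    rows, rhs = [], []
    for i in range(nc):
        rows.append(H @ np.linalg.matrix_power(Fi, i))
        b = u_hist[i] - J * e_hist[i]
        for j in range(i):   # add back the H F^-(i-j) G e(k-j-1) terms
            b += (H @ np.linalg.matrix_power(Fi, i - j) @ G).item() * e_hist[j + 1]
        rhs.append(b)
    return np.linalg.solve(np.vstack(rows), np.array(rhs))

# hypothetical second-order controller and error history e(k-2), e(k-1), e(k)
F = np.array([[0.9, 0.2], [0.0, 0.8]])
G = np.array([[1.0], [0.5]])
H = np.array([[1.0, -1.0]])
J = 0.1
e = [0.3, -0.2, 0.5]

xs = [np.array([0.4, -0.7])]            # x_c(k-2)
us = []
for ek in e:                            # forward simulation: u(k-2)..u(k)
    us.append((H @ xs[-1]).item() + J * ek)
    xs.append(F @ xs[-1] + (G * ek).ravel())

x_k = xs[2]                             # true x_c(k)
x_rec = recover_state(F, G, H, J, [us[2], us[1]], [e[2], e[1]])
```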
This method has an obvious drawback, however. Its applicability relies on access to the latent controller, since the controller states must be overwritten with the computed values at the time of transfer. The next method overcomes this problem.
Dynamic Transfer

Graebe and Ahlén (1994) suggest a method which was developed for bumpless transfer between alternative controllers and was not intended for use in an anti-windup context. In Graebe and Ahlén (1996), similarities between anti-windup and bumpless transfer are discussed. It can be argued that the method is a special case of the generic frameworks, but the practical usefulness of the method should give it more credit.

The scenario is the switch from an active controller C_A to a new latent (inactive) controller C_L. The idea is to let the output u_L of the latent controller C_L track the output u_A of the active controller C_A. This is done by using a 2 DoF tracking loop as shown in Figure 2.8, where u_A is the reference to the tracking controller, C_L is regarded as the system to be controlled, and the control error (r − y) acts as a disturbance.
(Figure 2.8: the latent controller C_L placed in a 2 DoF tracking loop (R_L, S_L, T_L) with the active controller output u_A as reference; the active loop consists of C_A and the plant G.)

The output of the latent controller is then

u_L = (T_L C_L)/(R_L + S_L C_L) u_A + (R_L C_L)/(R_L + S_L C_L) (r − y).   (2.83)
Assuming that u_L tracks u_A, the transfer can now take place without bumps. By adding another tracking loop, in which C_A tracks C_L, the transfer can be made bi-directional. The bi-directional scheme is presented in Figure 2.9, where C_A^T and C_L^T denote the active and latent tracking controllers, respectively.
Figure 2.9. Bi-directional bumpless transfer. The active control loop is indicated with bold lines. Subscripts A and L denote active and latent controller, respectively, and superscript T indicates tracking controller. At the point of transfer, the four switches (dashed lines) are activated.
However, for simple low order controllers, implemented such that access to (and overriding of) internal signals is easy, the initial condition approach is an uncomplicated alternative.

Numerous methods have been mentioned in the previous sections. As seen, they all have similarities, since generic frameworks can encompass them all. However, each method has different properties regarding simplicity, applicability, tuning, etc. Table 2.3 sums up features and peculiarities of the mentioned methods.
A predictive controller computes its control action based on current state information, and its output is a sequence of future control actions where only the first element in the sequence is applied to the system.

Predictive control belongs to the class of model-based designs⁷, where a mathematical model of the system is used to predict the behavior of the system and to calculate the controller output such that a given control criterion is minimized, see Figure 2.10.
Figure 2.10. Model and criterion based control for a possibly time-varying system.
For some time the predictive control strategy has been a popular choice for constrained control systems because of its systematic way of handling hard constraints on system inputs and states. Especially the process industries have applied constrained predictive control, as plants there are often sampled slowly enough to permit implementation.
The theory of predictive control has evolved immensely over the last couple of decades, with many suggestions sharing related features such as system output prediction. A survey of methods is given in García et al. (1989). One version has turned out to be particularly widely accepted: the now well-known Generalized Predictive Controller (GPC). It was introduced in Clarke et al. (1987a,b) and was conceptually based on earlier algorithms in Richalet et al. (1978) and Cutler and Ramaker (1980). Since the appearance of the GPC, many extensions and add-ons have been suggested. These were unified in Soeterboek (1992) within the so-called Unified Predictive Controller (UPC). This controller is slightly more flexible in terms of model structure, criterion selection, and parameter tuning than the GPC. Bitmead et al. (1990) provided a critical examination of generalized predictive control.

⁷ Predictive Control is also referred to as Model Predictive Control (MPC).
Predictive controllers, be it the GPC, the UPC, or others, are conceived by minimizing a criterion cost function J. This is an alternative to specifying the closed-loop poles, as is the case for the pole placement controller, or fitting frequency responses, as is the case for lead/lag controllers. The criterion must be selected so that the called-for performance and robustness are accomplished.
The pioneer in this field was the Minimum Variance (MV) controller by Åström (1970), based on the following criterion,

J(k) = E{ (y(k+d) − r(k+d))² },   (2.85)

where y(k) is the system output, r(k) is the reference signal, d is the time delay from input to output, and E{·} denotes the expectation. As is well known, this simple criterion yields controllers only suitable for minimum phase systems, as excessive control signals are not penalized. Consequently, the criterion was slightly modified,
2.3 / Predictive Controller Design 39
J(k) = E{ (y(k+d) − r(k+d))² + ρ u(k)² },   (2.86)

where u(k) is the controller output and ρ is a weight. This criterion is likely to reduce the magnitude of the control signal but introduces the disadvantage of penalizing any non-zero u(k). For systems lacking an integrator this leads to steady-state errors when tracking a non-zero reference.
The GPC criterion introduced in Clarke et al. (1987a) is of the form:

J(k) = E{ Σ_{j=N_1}^{N_2} (y(k+j) − r(k+j))² + ρ Σ_{j=1}^{N_u} (Δu(k+j−1))² },   (2.87)

subject to the equality constraint

Δu(k+i) = 0,   i = N_u, …, N_2,   (2.88)

where N_1 is the minimum costing horizon, N_2 the maximum costing (or prediction) horizon, N_u the control horizon, ρ a weight, and Δ = 1 − q^{-1} denotes the differencing operator. Notice that the minimization now involves several future system and controller outputs. This leads to controllers suitable for a much larger class of systems than the MV controller. Since only the differenced controller output Δu(k) is penalized in the criterion, the resulting controller allows non-zero outputs, and reference tracking is improved.
Although the criterion (2.87) is versatile, numerous extensions to it have been made. In Soeterboek (1992) this led to the following unified criterion function

J(k) = E{ Σ_{j=N_1}^{N_2} ( (P_n(q^{-1})/P_d(q^{-1})) y(k+j) − r(k+j) )² + ρ Σ_{j=1}^{N_u} ( (Q_n(q^{-1})/Q_d(q^{-1})) u(k+j−1) )² },   (2.89)
where x(k) is the system state, Q and W are non-negative definite symmetric user-defined weight matrices of appropriate dimensions, and u_f(k) denotes the filtered control signal. This criterion function does not include deviations of the output from a reference signal r(k). One way of overcoming this is to augment the system state vector with the states of the reference signal, by which the criterion (2.91) again can be made to describe the problem.
The criteria portrayed in the previous section are all based on a dynamical model description. Given a time-invariant linear (or linearized) model, an analytical solution to the minimization of the criterion exists and can be found once and for all. The predictive controllers in Clarke et al. (1987a) and Soeterboek (1992) were all based on polynomial input-output models such as the ARIMAX and Box-Jenkins models, while Hansen (1996) has derived the predictive control law based on the following somewhat more general structure

A(q^{-1}) y(k) = q^{-d} (B(q^{-1})/F(q^{-1})) u(k) + (C(q^{-1})/D(q^{-1})) e(k),   (2.93)
with

A(q^{-1}) = 1 + a_1 q^{-1} + a_2 q^{-2} + … + a_{n_a} q^{-n_a}   (2.94)
B(q^{-1}) = b_0 + b_1 q^{-1} + b_2 q^{-2} + … + b_{n_b} q^{-n_b}   (2.95)
C(q^{-1}) = 1 + c_1 q^{-1} + c_2 q^{-2} + … + c_{n_c} q^{-n_c}   (2.96)
D(q^{-1}) = 1 + d_1 q^{-1} + d_2 q^{-2} + … + d_{n_d} q^{-n_d}   (2.97)
F(q^{-1}) = 1 + f_1 q^{-1} + f_2 q^{-2} + … + f_{n_f} q^{-n_f}   (2.98)
and d > 0. The system has input u(k) and output y(k), and is disturbed by a noise sequence e(k) with appropriate characteristics. Table 2.4 shows how this and other less general models relate.
Table 2.4. Process models obtained from the general linear structure.
This section shall only consider the discrete-time state space model.

2.3.3 Predictions

where

U = [ u(k)  u(k+1)  …  u(k+j−1) ]^T.   (2.102)

Here N_1 = 1, but if this is not the case, the first N_1 − 1 rows of G and F will be null.
Introducing the control horizon N_u into the criterion changes (2.103) slightly. Define the vector of future controller outputs U_u,

U_u = [ u(k)  u(k+1)  …  u(k+N_u) ]^T,   (2.105)

and recall that we have set u(k+i) = u(k+N_u) for i ≥ N_u. This gives us

X̂ = F x(k) + G_u U_u,   (2.106)

with
G_u = [ B             0        …    0
        AB            B        …    0
        ⋮                      ⋱    ⋮
        A^{N_u} B     …        AB   B
        A^{N_u+1} B   …        A²B  AB + B
        ⋮                           ⋮
        A^{N_2−1} B   …   A^{N_2−N_u} B   Σ_{i=0}^{N_2−N_u−1} A^i B ].   (2.107)
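The structure of (2.107) can be generated programmatically. The sketch below (with an arbitrary example system) builds G_u by accumulating A^i B into the column of the input acting at each step, with all inputs beyond the control horizon mapped to the last column, and is verified against a brute-force simulation.

```python
import numpy as np

def build_Gu(A, B, N2, Nu):
    """Prediction matrix of (2.106)-(2.107): X_hat = F x(k) + G_u U_u,
    where U_u = [u(k), ..., u(k+Nu)]^T and u(k+i) = u(k+Nu) for i >= Nu."""
    n, m = B.shape
    Gu = np.zeros((N2 * n, (Nu + 1) * m))
    for j in range(1, N2 + 1):            # row block for x(k+j)
        for i in range(j):                # term A^i B u(k+j-1-i)
            col = min(j - 1 - i, Nu)      # frozen inputs share the last column
            Gu[(j - 1) * n:j * n, col * m:(col + 1) * m] += \
                np.linalg.matrix_power(A, i) @ B
    return Gu

# hypothetical discretized double integrator
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
N2, Nu = 5, 2
Gu = build_Gu(A, B, N2, Nu)

Fmat = np.vstack([np.linalg.matrix_power(A, j) for j in range(1, N2 + 1)])
x0 = np.array([1.0, -1.0])
Uu = np.array([0.5, -0.3, 0.8])           # u(k), u(k+1), u(k+Nu)
X_hat = Fmat @ x0 + Gu @ Uu
```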
At times, control signal filtering can improve performance. We assume that the filtered version u_f(k) of u(k) can be written in terms of the state model

x_u(k+1) = A_u x_u(k) + B_u u(k)
u_f(k) = C_u x_u(k) + D_u u(k).   (2.112)

The j-step prediction of u_f(k) is given by

u_f(k+j) = C_u A_u^j x_u(k) + [ C_u A_u^{j−1} B_u  …  C_u A_u B_u  C_u B_u  D_u  0  …  0 ] U.   (2.113)
Unlike the unconstrained linear problem, an analytic solution rarely exists for the constrained optimal control problem. Consequently, the constrained minimization problem must be solved in every sample, and efficient, fast algorithms are needed. The most common approach is to apply an iterative Quadratic Programming (QP) method, which minimizes a quadratic objective function subject to linear constraints. Such a problem is described as

min_u  (1/2) u^T H u + c u,   (2.118)

subject to

P u ≤ q,   (2.119)
J(k) = X̂^T I_Q X̂ + U_u^T I_W U_u
     = U_u^T (I_W + G_u^T I_Q G_u) U_u + 2 x^T F^T I_Q G_u U_u + x^T F^T I_Q F x.   (2.120)

Constant terms have no influence on the value of U_u that minimizes the criterion, which leads to the reformulated criterion, minimized by the same U_u:

J̄(k) = (1/2) U_u^T (I_W + G_u^T I_Q G_u) U_u + x^T F^T I_Q G_u U_u
     = (1/2) U_u^T H U_u + c U_u,   (2.121)

which is a standard QP criterion function.
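For the unconstrained case the minimizer of this QP form is U_u = −H^{-1} c^T, which gives a quick consistency check of the H and c assembled from G_u and F. The sketch below uses a full control horizon (N_u = N_2), a scalar input, and hypothetical weights.

```python
import numpy as np

def qp_data(A, B, N2, Q, w, x):
    """Assemble H and c of (2.121), J = 1/2 Uu^T H Uu + c Uu, from
    X_hat = F x + G Uu with full control horizon and scalar input."""
    n = A.shape[0]
    F = np.vstack([np.linalg.matrix_power(A, j) for j in range(1, N2 + 1)])
    G = np.zeros((N2 * n, N2))
    for j in range(1, N2 + 1):
        for i in range(j):
            G[(j - 1) * n:j * n, j - 1 - i] = \
                (np.linalg.matrix_power(A, i) @ B).ravel()
    IQ = np.kron(np.eye(N2), Q)           # block-diagonal state weight
    IW = w * np.eye(N2)                   # input weight
    H = IW + G.T @ IQ @ G
    c = x @ F.T @ IQ @ G
    return H, c

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # hypothetical system
B = np.array([[0.005], [0.1]])
Q = np.diag([1.0, 0.1])
H, c = qp_data(A, B, N2=10, Q=Q, w=0.01, x=np.array([1.0, 0.0]))
U_star = -np.linalg.solve(H, c)           # unconstrained minimizer
```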
Modelling Constraints

Besides magnitude limits on the control signal, rate limits and limits on the states also occur. The constraints can be described by

Magnitude:  u_min ≤ u(k) ≤ u_max,   ∀k
Rate:       Δu_min ≤ u(k) − u(k−1) ≤ Δu_max,   ∀k
State:      z_min ≤ z(k) ≤ z_max,   ∀k

Instead of using the state vector x directly we may introduce a new vector z = C_z x which contains the constrained combinations of states. The following shows how to model the constraints such that a QP problem can be solved.

First we introduce the vector 1_m(x) = [x, x, …, x]^T of length m. Then, over a receding horizon N_u, we can describe the magnitude constraints as
P_m U_u ≤ q_m,   (2.123)

with

P_m = [ I ; −I ],   q_m = [ 1_{N_u}(u_max) ; −1_{N_u}(u_min) ],   (2.124)

where I is an identity matrix of dimension N_u × N_u. For the rate constraints we get

P_r U_u ≤ q_r,   (2.125)
where

P_r = [ P_r⁰ ; −P_r⁰ ],   (2.126)

and

P_r⁰ = [  1   0   0  …  0
         −1   1   0  …  0
          0  −1   1  …  0
          ⋮       ⋱  ⋱  ⋮
          0   …   0  −1  1 ],
q_r = [ 1_{N_u}(Δu_max) ; −1_{N_u}(Δu_min) ] + [ u(k−1)  0  …  0  −u(k−1)  0  …  0 ]^T.   (2.127)
Similarly, the state constraints over the horizon are expressed as

P_z U_u ≤ q_z,   (2.128)

with

P_z = [ I_{C_z} G_u ; −I_{C_z} G_u ],   q_z = [ 1_{N_2}(z_max) − I_{C_z} F x(k) ; −1_{N_2}(z_min) + I_{C_z} F x(k) ],   (2.129)

where n_z is the number of outputs in z.
In condensed form the constraints can be expressed as

P U_u ≤ q,   (2.130)

with

P = [ P_m ; P_r ; P_z ],   q = [ q_m ; q_r ; q_z ].   (2.131)
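The input-related blocks of (2.130) can be sketched as follows for a scalar input (example limits assumed): identity blocks for the magnitude constraints and a first-difference operator for the rate constraints, with the known u(k−1) terms moved into q_r.

```python
import numpy as np

def input_constraints(Nu, umax, umin, dumax, dumin, u_prev):
    """Stack magnitude (2.124) and rate (2.126)-(2.127) constraints
    into P Uu <= q for a scalar input over the horizon Nu."""
    I = np.eye(Nu)
    Pm = np.vstack([I, -I])
    qm = np.concatenate([np.full(Nu, umax), np.full(Nu, -umin)])
    Pr0 = np.eye(Nu) - np.eye(Nu, k=-1)          # rows give u(k+i) - u(k+i-1)
    Pr = np.vstack([Pr0, -Pr0])
    off = np.zeros(Nu)
    off[0] = u_prev                              # known previous input
    qr = np.concatenate([np.full(Nu, dumax) + off,
                         np.full(Nu, -dumin) - off])
    return np.vstack([Pm, Pr]), np.concatenate([qm, qr])

P, q = input_constraints(Nu=3, umax=1.0, umin=-1.0,
                         dumax=0.5, dumin=-0.5, u_prev=0.2)
ok_seq = np.array([0.3, 0.5, 0.4])     # satisfies both constraint types
bad_seq = np.array([0.5, 0.9, 0.2])    # rate at the last step is -0.7 < -0.5
```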
State Constraints

The constraints on the states of the controlled system are normally imposed for two reasons:

Safety concern
Keeping the system within certain operating conditions may prevent unnecessary wear and tear of the installations, and hence system states are constrained from a safety viewpoint. For example, a temperature or a velocity may have to lie within certain limits to prevent stressing the equipment, and a space telescope normally has certain exclusion zones towards the sun (regions of the state space) where the optics may take damage.
Control performance
As shown in Kuznetsov and Clarke (1994) and Camacho and Bordons (1999), constraints on the controlled output can be used to force the response of the process to have certain characteristics. The overshoot, for instance, can be limited by imposing

z(k+j) ≤ k_1 r(k),   j = 1, …, N_2,   k_1 ≥ 1,   (2.132)

where k_1 determines the maximum allowable size of the overshoot and N_2 defines the horizon during which the overshoot may occur. Using the predictions of x we similarly have

z(k+j) ≥ k_2 r(k),   j = 1, …, N_2,   k_2 ≤ 1,   (2.134)
Often, constraints on the output signals are soft constraints, meaning the output can physically exceed the imposed constraints, and violations can actually be allowed for short periods of time without jeopardizing safety. Soft constraints could to some extent be handled by tuning the controller, i.e. by adjusting the weights on the signals of interest in the criterion. But, as seen, imposing constraints may help in fulfilling closed-loop performance objectives.
The algorithm shows that basically two features characterize the optimization algorithm: the search direction and the step size in the search direction.

Various quadratic programming techniques are available that will solve the stated minimization problem (see for example Luenberger (1984)) and have been used in a predictive control context, e.g. by García and Morshedi (1984), Camacho (1993), and Chisci and Zappa (1999). However, QP methods are complex and time-consuming (likely to be a problem in electro-mechanical real-time applications) and may lead to numerical problems, as pointed out in Rossiter et al. (1998). Thus, simpler gradient projection methods such as Rosen's have been explored (see for example Bazaraa et al. (1993)) and applied to predictive control applications, e.g. by Soeterboek (1992) and Hansen (1996). However, in the case of rate constraints, the complexity of this algorithm increases significantly.
Suboptimal constrained predictive control (SCPC) has been the issue of Kouvaritakis et al. (1998) and Poulsen et al. (1999). The search for an optimal solution is renounced in favor of a decreased computational burden. One such suboptimal approach is based on a linear interpolation between the unconstrained optimal solution and a solution that is known to be feasible (a solution that does not activate the constraints).
So far, the techniques considered in sections 2.2 and 2.3 have been linear and have generated linear controllers (at least in the linear area of operation). This has some shortcomings. The anti-windup and bumpless transfer approach does not guarantee control signals within the limits, since the compensation is not activated until a constraint becomes active. The predictive controller, on the other hand, does satisfy the constraints, but the computational burden is significant, and the actual implementation of the controller is difficult, requiring resource-optimal programming of the algorithms.
Since the nature of constraints is itself nonlinear, one might expect a nonlinear controller to achieve better performance than a linear one. Few nonlinear control tools exist for incorporating actuator saturations into the design process, as even simple linear systems subject to actuator saturations generate complex nonlinear problems. Very few general results exist even for constrained linear systems; as a matter of fact, most results are very specific to a small class of systems. Examples of such systems are chains of integrators and driftless systems (ẋ = f(x)u). Some significant results on nonlinear bounded control can be found in Teel (1992), Sussmann et al. (1994), Lauvdal et al. (1997), Lauvdal and Murray (1999), and Morin et al. (1998a). The different approaches have different characteristics, but most of them rely on a scaling of the controller where the feedback gains converge to zero as the norm of the states tends to infinity.

2.4 / Nonlinear and Time-Varying Methods 51
2.4.1 Rescaling

One of the more promising ideas explored within the nonlinear approaches is the so-called dynamic rescaling, initially proposed in Morin et al. (1998a). The method is based on recent results on stabilization of homogeneous systems in M'Closkey and Murray (1997) and Praly (1997), where a rescaling method has been developed to transform a smooth feedback with (slow) polynomial stability into a homogeneous feedback yielding (fast) exponential stability.

In the dynamic rescaling technique a linear stabilizing controller (for a linear system) is rescaled as a function of the state, generating a new stabilizing and bounded control law. The concept is depicted in Figure 2.11.

(Figure 2.11: the dynamic rescaling block adjusts the controller as a function of the state x, in cascade with the actuator and plant.)
The family of systems considered is single-input, linear, open-loop stable, controllable systems in Rⁿ:

ẋ = Ax + Bu,   (2.136)

A = [ 0     1     0    …   0
      ⋮           ⋱    ⋱   ⋮
      0     …     0        1
     −a_1  −a_2   …   −a_{n−1}  −a_n ],   B = [ 0 ; ⋮ ; 0 ; 1 ].   (2.137)
In general, the bounded control law for this system turns out to be rather complex (for instance, the controller for a four-dimensional system is parameterized by some 12 parameters). Although tedious, the method is interesting, as the following example will illustrate.
For simplicity, consider the double integrator

ẋ_1 = x_2
ẋ_2 = u,   (2.138)

stabilized by a linear state feedback controller, which, rescaled by a parameter λ, becomes

u(λ, x) = −(l_1/λ²) x_1 − (l_2/λ) x_2,   (2.140)

with the Lyapunov function

V(λ, x) = (l_1/λ⁴) x_1² + (1/λ²) x_2².   (2.141)

The controller is again a stabilizing controller for (2.138) for any λ > 0, and the Lyapunov function can be shown to be non-increasing along the trajectories of (2.138)-(2.140). The concept of the rescaling procedure is now to choose λ such that the Lyapunov function (2.141) stays at a chosen contour curve as x tends to infinity. This will bound the control signal.
Let λ(x) be a scalar function given by

λ(x) = { 1,                          V(1, x) ≤ 1
       { solution of V(λ, x) = 1,   otherwise.   (2.142)

The rescaled control law then becomes

u(λ(x), x) = { 0,                                       x = 0
             { −(l_1/λ(x)²) x_1 − (l_2/λ(x)) x_2,       x ≠ 0.   (2.143)

On the contour V(λ, x) = 1 each term of the controller is bounded; for the second term,

max_x | (l_2/λ(x)) x_2 | = l_2.   (2.146)

The respective extrema have opposite signs. A conservative estimate of the maximum of u is then

max |u(λ, x)| ≤ √l_1 + l_2.   (2.147)
The Lyapunov function V(λ, x) controls the rescaling process and hence affects the overall performance of the constrained system. Many possible Lyapunov functions are available, but how to choose the best one is not clear. Another design difficulty is the fact that the control law is guaranteed bounded but the specific bounds are not given. Thus, the designer must, by simulation or direct calculation, determine the bounds, and if they are too large or too narrow, the Lyapunov function must be modified.

The bounds on the rescaled controller (2.142)-(2.143) are governed by the Lyapunov function equality V(λ, x) = 1 and are difficult to calculate explicitly, except for a cautious assessment. Furthermore, as in the shown example, the bounds depend on the controller feedback gains l_i. This dependency is unfortunate, as the method conceptually could be an add-on to an existing stabilizing linear controller rather than an integrated part of the design of the controller. As we will show, a small extension to the approach allows the controller to adaptively adjust the right-hand side of the Lyapunov equality such that independently chosen actuator bounds can be specified.
Consider the equation V(λ, x) = γ, where V(λ, x) is defined in (2.141) and γ > 0. For x ≠ 0 this equation has the unique positive solution

λ = sqrt( ( x_2² + sqrt( x_2⁴ + 4γ l_1 x_1² ) ) / (2γ) ).   (2.148)

Now choose λ(x) in the following way

λ(x) = { 1,                      if V(1, x) ≤ γ
       { the solution (2.148),   otherwise.   (2.149)

If γ is increased, larger control signals are allowed before the rescaling is activated, and vice versa if γ is decreased.
By letting γ be time-varying we can adapt γ such that a value is found that corresponds to the given actuator saturation limits. It is imperative that γ remains positive. Let γ be adjusted by the adaptation law (2.150), where κ > 0 is a tuning parameter that governs the convergence rate and û = sat(u) is the saturated control signal. The initial condition γ(0) should be chosen large, since the adaptation law only allows γ to become smaller over time.

For the double integrator we can make use of the fact that we know the bounds on the maximum value of u(λ, x). For the controller with λ given by (2.149) we get the upper bound

max |u(λ, γ, x)| ≤ max |(l_1/λ²) x_1| + max |(l_2/λ) x_2|   (2.151)
                = √(γ l_1) + √γ l_2.   (2.152)
Define for an actuator ū = min(u_max, |u_min|), where u_max and u_min are the actuator saturation limits. A lower bound for γ is then given from max |u(λ, γ, x)| = ū:

γ = ( ū / (√l_1 + l_2) )².   (2.153)
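The bound (2.152)-(2.153) can be verified numerically. The sketch below implements λ(x) from (2.148)-(2.149) and the rescaled control law (2.143) for the double integrator, using for illustration the gains l_1 = 0.5, l_2 = 1 and ū = 1; γ is then set by (2.153) so that |u| ≤ 1.

```python
import math

def lam(x1, x2, l1, gamma):
    """Scaling lambda(x) of (2.148)-(2.149): the positive root of
    V(lambda, x) = gamma, clipped at 1 while V(1, x) <= gamma."""
    if l1 * x1**2 + x2**2 <= gamma:       # V(1, x) <= gamma
        return 1.0
    lam2 = (x2**2 + math.sqrt(x2**4 + 4.0 * gamma * l1 * x1**2)) / (2.0 * gamma)
    return math.sqrt(lam2)

def u_rescaled(x1, x2, l1, l2, gamma):
    """Bounded control law (2.143)."""
    if x1 == 0.0 and x2 == 0.0:
        return 0.0
    s = lam(x1, x2, l1, gamma)
    return -(l1 / s**2) * x1 - (l2 / s) * x2

l1, l2 = 0.5, 1.0
u_bar = 1.0
gamma = (u_bar / (math.sqrt(l1) + l2)) ** 2      # (2.153)
```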
Simulation Study

To illustrate the adaptive determination of the saturation bounds, consider the double integrator affected by an aggressive (unrealistic, but interesting for the study) disturbance on the state vector. The system is assumed stabilized by a feedback controller with gains l_1 = 0.5 and l_2 = 1, and the input is magnitude saturated with |u| ≤ 1. Based on these settings we initialize γ(0) = 2 and set κ = 0.5. Figure 2.12 shows the control signal u(λ, γ, x) and the convergence of γ. It is seen how u at first saturates, but as γ tends to a smaller value the control signal remains within the limits.
The rescaling approach guarantees stability, but performance and even actuator bounds are difficult to specify a priori. The following will consider a gain scheduling method similar to rescaling but with a different approach to selecting the scaling. The aim is good performance of the constrained closed-loop system in terms of rise time, settling time, and overshoot, which often are deteriorated by constraints. Optimally, these quantities should be as close as possible to those of the unconstrained system.
ẋ_1 = x_2
ẋ_2 = x_3
 ⋮   (2.157)
ẋ_n = û,

where x_i, i = 1, …, n, are the states and û is the input subject to saturation, that is, û = sat(u). Assume we have designed a linear controller given by

u = −Lx,   L = [ l_1  l_2  …  l_n ],   (2.158)
which stabilizes the chain of integrators when no saturation is present. The objective of the gain scheduling is to modify the state feedback matrix L such that settling time and in particular the overshoot are improved during saturation compared to the uncompensated constrained system. The overshoot of a constrained dynamic time response is mainly governed by the windup in the states, while the settling time is governed by the undamped natural frequency. These observations lead to the following schematic gain scheduling approach:

1. During saturation, schedule the controller such that the closed-loop poles move towards the origin while keeping a constant damping ratio. This will decrease the needed control signal and hence reduce windup without increasing the overshoot. Unfortunately, the rise time is increased.

2. When the controller de-saturates, move the poles back towards the original locations. This will help improve the overall settling time.
A pair of complex poles, say p_1 and p_2, is in terms of the damping ratio ζ and the undamped natural frequency ω_n given by

p_{1,2} = ω_n ( −ζ ± j√(1 − ζ²) ).   (2.159)

This shows that it is possible to scale the frequency while maintaining the damping ratio with only one parameter. The gain scheduled poles are given by

p_{1,2}(α) = (ω_n/α) ( −ζ ± j√(1 − ζ²) ),   (2.160)

where α ∈ [1, ∞) is the scaling factor. For α = 1 no scheduling is applied, while for α → ∞ the poles move towards the origin, reducing the magnitude of the control signal.
Applying this scheduling approach to the chain of integrators, we get the following gain scheduled characteristic polynomial C(s),

C(s) = ( s + p_1/α )( s + p_2/α ) ⋯ ( s + p_n/α ),   (2.161)

where

K(α(t)) = diag( α^{−n}(t), α^{−(n−1)}(t), …, α^{−1}(t) ),   (2.164)
Figure 2.13. Root locus for a fourth order integrator chain with closed-loop poles as a function of α for nonlinear (left) and linear (right) scaling. The markers indicate nominal (α = 1) and open-loop poles (α → ∞), respectively.
Notice that for systems with only one or two integrators, linear scaling will also produce stable poles for all feasible values of α. The problem with instability only exists for dimension three or larger.
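The effect of the nonlinear scaling (2.160)-(2.164) can be sketched for the integrator chain: with u = −L K(α) x and K = diag(α^{−n}, …, α^{−1}), the closed-loop poles are exactly the nominal poles divided by α. The gains below are those of (2.171).

```python
import numpy as np

def scheduled_gains(L, alpha):
    """L(alpha) = L K(alpha), K = diag(alpha^-n, ..., alpha^-1), per (2.164)."""
    n = len(L)
    K = np.diag([alpha ** (-(n - i)) for i in range(n)])
    return np.asarray(L) @ K

L = np.array([1.0, 3.0 / np.sqrt(2.0), 3.0, 3.0 / np.sqrt(2.0)])  # (2.171)

def cl_poles(gains):
    """Closed-loop poles of the integrator chain under u = -Lx:
    roots of s^n + l_n s^(n-1) + ... + l_1."""
    return np.roots(np.concatenate(([1.0], gains[::-1])))

p_nom = cl_poles(L)
p_sched = cl_poles(scheduled_gains(L, alpha=2.0))
```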
Having selected the scaling method, we need to specify how to determine the scaling factor α. Based on the two previously stated performance observations, we suggest making the scaling factor partly state- and partly time-dependent. Define α̇(t, u) as

α̇ = { σ|û − u|,      u > u_max + ε or u < u_min − ε
     { 0,             |u_max − u| ≤ ε or |u_min − u| ≤ ε   (2.165)
     { −(α − 1)/N,    otherwise,

where σ > 0, N > 0, ε > 0, and û denotes the usual saturated control signal. The first case increases α during saturation, which moves the poles towards the origin and thereby reduces the control signal. Upon de-saturation the third case forces α back to the nominal value α = 1. ε creates a dead-band around the saturation limits which is introduced to smooth the changes in α. Figure 2.14 shows α̇ as a function of u.
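The scheduling law (2.165) can be sketched as a simple update function (the symbols σ, N, ε are as used above); the checks below confirm that α grows during saturation, freezes in the dead-band, and decays back towards 1 otherwise.

```python
def alpha_dot(u, alpha, umax=1.0, umin=-1.0, sigma=1.0, N=10.0, eps=0.1):
    """Right-hand side of (2.165) for the scaling factor alpha."""
    u_hat = min(max(u, umin), umax)               # saturated control signal
    if u > umax + eps or u < umin - eps:
        return sigma * abs(u_hat - u)             # saturating: grow alpha
    if abs(umax - u) <= eps or abs(umin - u) <= eps:
        return 0.0                                # dead-band around the limits
    return -(alpha - 1.0) / N                     # drift back to alpha = 1
```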
In Figure 2.15 an example of a time response for u and α is shown. During saturation, α increases until u is within the specified bounds. In terms of performance it is desirable to bring α back to unity. This must be done at a sufficiently slow rate since, if done too fast, u will hit the other saturation limit hard.

The parameters are easy to choose.
Figure 2.15. Sketch illustrating the scaling factor α when the input is magnitude saturated.
a_i + l_i(α) = (a_i + l_i) / α^{n−i+1}   (2.168)

⇒  l_i(α) = (1/α^{n−i+1}) ( (1 − α^{n−i+1}) a_i + l_i ).   (2.169)
Simulation Study

To illustrate the developed gain scheduling approach, we shall investigate the properties through a simulation study. Consider an integrator chain of length 4, i.e.

ẋ_1 = x_2
ẋ_2 = x_3
ẋ_3 = x_4   (2.170)
ẋ_4 = u
y = x_1,

where y is the output. We assume unity input magnitude limits |u| ≤ 1 and a linear stabilizing controller u = −Lx. Initially, we place the four poles of the linear system such that the undamped natural frequencies are ω_n = 1 and the damping ratios are ζ = 1/√2 and ζ = 1/(2√2), respectively, which yields the gains

L = [ 1  3/√2  3  3/√2 ].   (2.171)
The following simulations assume x(0) = (x_1(0), 2, 0, 0)^T, where x_1(0) may vary. The saturation element causes the linear controller with gains (2.171) to go unstable for x_1(0) > 1.25. Throughout the simulations, ε is set to 0.1. Generally, its influence is negligible, but it may help to remove possible chattering in α. Note that the uncompensated constrained system with no gain scheduling is unstable in all of the following examples.
Figure 2.16 shows a simulation result with initial condition x_1(0) = 10. For comparison the unconstrained linear response is also shown. It is seen that the scheduled response is slower but the overshoot (undershoot in this case) is similar.
To illustrate the robustness in the choice of the parameters \alpha and N, Figures 2.17
and 2.18 show the effects of varying \alpha and N, respectively.
62 Chapter 2 / Controller Design for Constrained Systems
Figure 2.16. Time response with saturation and gain scheduling (solid) using \alpha = 1 and
N = 10. For comparison the unconstrained linear response is shown (dashed). The uncompensated constrained response is unstable.
Figure 2.17. Time response for N = 10, \alpha = 0.2 (solid), \alpha = 1 (dashed), and \alpha = 3
(dash-dotted).
Figure 2.18. Time response for different values of N.
For all parameter choices the responses are good, but for smaller values of \alpha the
overshoot increases and for large N the response becomes slower.
Finally, Figure 2.19 shows robustness for different initial states. The inputs are
scaled so that saturation only occurs during the first seconds. It is seen that, particularly for x_1(0) = 100, some oscillations show up. Choosing N larger would reduce
these.
Figure 2.19. Time response for \alpha = 1, N = 10, and x_1(0) = 1 (solid), x_1(0) = 50
(dashed), and x_1(0) = 100 (dash-dotted).
2.4.3 Conclusions
The presented rescaling and gain scheduling methods have much in common such
as the way the scaling is incorporated into the controller but the selection of the
scaling/scheduling is different.
The rescaling approach guarantees controller output within the saturation limits
whereas the gain scheduling is more like the AWBT compensation since the sched-
uler is activated only when constraints become active.
The previous three sections have presented four different methods of handling con-
straints in the control design process
The last two approaches are listed separately, although they have some similarities
such as how the scaling is introduced into the linear controller.
Table 2.5 aims to compare the strategies on a variety of properties. Some properties
such as the computational burden of an implementation and the number of design
parameters depend to some degree on the order of the system and the controller
and on which design options are applied. For example, a predictive controller has
a number of optional design parameters (N_1, N_2, N_u, weights, filters) but is fairly
independent of the order of the system. On the other hand, an AWBT compensation
is directly linked to the order of the controller. Therefore, some of the entries in
the table are debatable and, in general, the table mainly applies to low-order
systems.
The comparison is far from complete since constrained control approaches other
than those discussed are available in the literature. Examples are sliding
mode control, mode-switching systems, specific nonlinear controllers, bang-bang
control, and supplementary gain scheduling approaches. However, they will not
be discussed further.
Three cases are studied in the following, the last two being of greatest importance.
The first study briefly investigates a constrained single integrator. The last two
studies use a cascaded double tank system to examine the observer-based approach
(Åström and Rundqwist, 1989) and a predictive controller, and aim to verify the
usefulness of the procedures and comment on the selection of parameters.
The objectives of this example are twofold: Firstly, we aim to show how actuator
constraints can be handled explicitly in the design of a PI controller if the applied
references and disturbances are known at the design level. Secondly, the example
should demonstrate that this is not a practicable design approach.
For simplicity the system G(s) under consideration is a single integrator and the
controller C (s) is a PI controller. The transfer functions are given as
G(s) = \frac{1}{s}, \qquad C(s) = k \, \frac{1 + \tau_i s}{\tau_i s},   (2.172)
where k and \tau_i are the controller parameters. The transfer function from reference
to controller output is found to be
where \omega_d is the damped natural frequency. We have assumed complex poles (\zeta < 1).
Solving \frac{du}{dt} = 0 for t = t_{\max}, we get
2.6 / Case Studies 67
t_{\max} = \frac{1}{\omega_d} \arctan\left( \frac{\sqrt{1 - \zeta^2}\,(1 - 4\zeta^2)}{\zeta (3 - 4\zeta^2)} \right).   (2.178)
For 0.5 \le \zeta \le 1 we have t_{\max} \le 0, which means that the maximum of the time
response u(t), t \ge 0, occurs at t = 0. For \zeta < 0.5 we have t_{\max} > 0 and the maximum
is u(t_{\max}).
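A quick numerical check of (2.178) (a Python sketch, not part of the thesis) confirms these sign statements:

```python
import math

def t_max(zeta, wn=1.0):
    """Time of the peak of u(t) after a reference step, equation (2.178)."""
    wd = wn * math.sqrt(1.0 - zeta ** 2)                # damped natural frequency
    num = math.sqrt(1.0 - zeta ** 2) * (1.0 - 4.0 * zeta ** 2)
    den = zeta * (3.0 - 4.0 * zeta ** 2)
    return math.atan(num / den) / wd

assert t_max(0.5) == 0.0      # boundary case: peak exactly at t = 0
assert t_max(0.3) > 0.0       # zeta < 0.5: peak after t = 0
assert t_max(0.7) < 0.0       # 0.5 <= zeta <= 1: maximum already at t = 0
```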
For t = 0 we get u(0) = 2\zeta\omega_n = k. In order to avoid saturation, k should be
selected smaller than the smallest saturation level, that is
Having selected k we need to select \tau_i based on the wanted damping ratio, which
governs the overshoot, and the natural frequency, which governs the rise time. A
small rise time implies a small damping ratio, so a compromise must be made. The damping
ratio, though, also governs the closed-loop zero (-\omega_n/(2\zeta) = -1/\tau_i), which moves to
the right towards the poles for increasing \zeta. This will add overshoot to the step
response.
In case t_{\max} > 0 we can, for a selected \zeta, solve u(t_{\max}) = \min(|u_{\min}|, u_{\max}) for \omega_n
and then determine k and \tau_i.
This design approach might be extended to higher order systems but still some
obvious disadvantages are present:
This case study aims to demonstrate how the choice of the AWBT compensation
matrix M in an observer-based anti-windup scheme is of great importance for the
closed-loop constrained response. For the double tank system considered, we show
a way of selecting the AWBT parameters such that superior performance is ob-
tained.
Both Åström and Rundqwist (1989) and Kapoor et al. (1998) have illustrated anti-windup on the same process, namely a system consisting of two identical cascaded
tanks. The input to the system is the pump speed, which determines the flow rate to
the upper tank. The process output considered is the lower tank's fluid level. The
process is shown in Figure 2.20, where a pump transports the fluid from a reservoir
to the upper tank.
Linearized around an operating point, the double tank can be described by the state
space model
\dot{x} = \begin{bmatrix} -\alpha & 0 \\ \alpha & -\alpha \end{bmatrix} x + \begin{bmatrix} \beta \\ 0 \end{bmatrix} \hat{u} = Ax + B\hat{u}   (2.180)
y = \begin{bmatrix} 0 & 1 \end{bmatrix} x = Cx,   (2.181)
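As a small numerical sketch of this model (Python/NumPy), with hypothetical values of \alpha and \beta chosen so that the stationary gain \beta/\alpha equals the 10/3 quoted later in this section:

```python
import numpy as np

alpha, beta = 0.015, 0.05      # assumed values, with beta/alpha = 10/3
A = np.array([[-alpha, 0.0],
              [alpha, -alpha]])
B = np.array([[beta], [0.0]])
C = np.array([[0.0, 1.0]])

# Stationary gain from input u to lower-tank level y: -C A^{-1} B = beta/alpha
dc_gain = (-C @ np.linalg.inv(A) @ B).item()
assert abs(dc_gain - beta / alpha) < 1e-9
```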
The objective of this experiment is to evaluate AWBT settings and determine the optimal AWBT setting for various references and disturbances. The experiment is
conducted as follows.
Figure 2.21. Time response for the unconstrained system (solid) and the constrained system
without anti-windup (dashed).
For the experiment, Figure 2.21 shows the output of the closed-loop system without saturation and of the closed-loop system with saturation but no anti-windup
compensation. The response with saturation is obviously poor, with unsatisfactory
overshoots and oscillations. A necessary condition for stability of the constrained
closed-loop system is stable eigenvalues of the matrix
F_M = F - MH = \begin{bmatrix} -m_1 K/\tau_i & -m_1 N \\ -m_2 K/\tau_i & -m_2 N - N/\tau_d \end{bmatrix},   (2.189)
which describes the closed-loop dynamics of the controller during saturation. It is,
however, not a sufficient condition. As an example, choose M such that the
eigenvalues of F_M are (-0.0020 \pm i0.0198) and expose the saturated tank system
to a unity reference step. Figure 2.22 shows the marginally stable result.
Figure 2.22. The marginally stable response for the constrained system with a saturation-wise
stable controller.
For the double tank system with the described experiment, Åström and Rundqwist
(1989) have considered three different choices of M which place the eigenvalues
of the matrix F_M at (-0.05, -0.05), (-0.10, -0.10), and (-0.15, -0.15). As
pointed out in Åström and Rundqwist (1989), only the first choice of M gives satisfactory results for the impulse disturbance, and hence only the first choice will
be considered here. Kapoor et al. (1998) advocate choosing the eigenvalues of
F_M at (-0.2549, -0.0569), which guarantees stability for feasible references. Using Hanus' conditioning technique described in section 2.2.4 we get M = G_r J_r^{-1},
which gives the eigenvalues (-0.0118 \pm i0.0379) of F_M. Finally, we suggest
choosing the eigenvalues of F_M equal to the two slowest poles of the closed-loop
system, namely (-0.0257 \pm i0.0386). We will refer to these specific anti-windup
matrices as M_a, M_k, M_h and M_s for the Åström, Kapoor, Hanus, and slowest
poles designs, respectively.
Figure 2.23 shows input and output profiles for the four design choices. All four
choices improve the response, although Hanus' conditioning technique tends to
oscillate since the control signal immediately saturates at the other level after
de-saturation. This is the inherent short-sightedness mentioned in section 2.2.4. The
performance of the system exposed to the load disturbance is almost identical for
all gain selections and will not be investigated any further.
Figure 2.23. Time response for the constrained system for different choices of the anti-windup gain matrix. The unconstrained response (dashed), the constrained response
(solid), and the input (dotted) are shown for anti-windup gains (a) M_a, (b) M_k, (c) M_h,
and (d) M_s.
Since the purpose of AWBT is to reduce the effects of saturation (maintaining
stability and good performance of the closed-loop system under saturation), we will
consider the error between the constrained and the unconstrained time response.
Define the output error \tilde{y} = y - y_u, where y and y_u are the constrained and unconstrained outputs, respectively. The following performance index is considered
I = \frac{1}{N_2 - N_1 + 1} \sum_{i=N_1}^{N_2} |\tilde{y}(i)|,   (2.190)
where \tilde{y}(i) is the sampled error. To ease the comparison of the different designs, the
generated performance indices from an experiment are normed with the smallest
index from the experiment. This implies that the best design choice for a specific
experiment has a normed error equal to 1. Note that the normed errors from one
experiment to another cannot be compared.
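The index (2.190) and the normalization described above can be sketched as follows (Python, illustrative):

```python
def perf_index(y, y_u, N1, N2):
    """Mean absolute error (2.190) between constrained and unconstrained output."""
    return sum(abs(y[i] - y_u[i]) for i in range(N1, N2 + 1)) / (N2 - N1 + 1)

def normed_errors(indices):
    """Norm each design's index with the smallest one; the best design maps to 1.0."""
    best = min(indices.values())
    return {design: I / best for design, I in indices.items()}
```

For instance, `normed_errors({'Ma': 0.053, 'Ms': 0.048})` would report `Ms` as the unit-error design (the values here are made up for illustration).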
The four designs are compared in Table 2.6 for the full experiment as well as for the
different sub-experiments. Notice that the slowest poles selection yields superior
performance under all tried conditions, while Kapoor's selection performs the worst,
except with regard to the load disturbance where Hanus' selection takes the credit.
Experiments
Design Full exp. Reference Impulse Load
Ma 1.11 1.11 1.11 1.07
Mk 1.34 1.50 1.24 1.04
Mh 1.17 1.12 1.21 1.23
Ms 1.00 1.00 1.00 1.00
no AWBT 2.19 2.44 1.96 1.93
min(I ) 0.0478 0.0696 0.0582 0.0155
Table 2.6. Normed errors between constrained and unconstrained response exposed to ref-
erence step, impulse disturbance, and load disturbance.
The conditioning technique with cautiousness mentioned in section 2.2.4 may help to improve Hanus' conditioning technique. Figure 2.24 shows a root locus for the poles
of F_M using cautiousness. We see that cautiousness does not give the designer full
control of the pole locations and can move the poles neither to the
locations of the Åström design nor to those of the slowest poles design.
Figure 2.24. Root locus of the poles of F_M using cautiousness.
The objective of this part of the study is to investigate the robustness to different
sized reference steps.
The input-output gain of the process is 10/3 and \hat{u} \in [0, 1]. Hence, the feasible
references for which the constraints are inactive in stationarity are restricted to the
interval [0, 10/3], assuming no load disturbance on u is present.
The setting for the experiment is:
1. t = 0: Reference step r \in \{0.5, 1, 1.5, 2, 2.5, 3\}; process and controller start
from stationarity with x(0) = 0 and x_c(0) = 0.
Table 2.7 shows the normed errors for the experiment. For all reference steps M_s
yields the minimum error between the constrained and unconstrained system. Especially for small step sizes we note a significant proportional improvement from
a well-selected anti-windup matrix. This suggests that anti-windup design should
not only be considered for large saturations but anytime saturation occurs in the actuators. For large saturations, the greatest concern is to maintain stability, whereas
for small saturations it is more a matter of performance.
Experiments
Design r = 0:5 r=1 r = 1:5 r=2 r = 2:5 r=3
Ma 1.33 1.11 1.09 1.04 1.03 1.01
Mk 2.21 1.50 1.34 1.22 1.13 1.08
Mh 1.76 1.12 1.26 1.12 1.02 1.01
Ms 1.00 1.00 1.00 1.00 1.00 1.00
no AWBT 1.99 2.46 2.56 2.20 2.02 1.36
min(I ) 0.0098 0.0348 0.0728 0.1434 0.2500 0.4009
Table 2.7. Normed errors between constrained and unconstrained response to reference
steps.
The objective of this part of the study is to investigate the robustness to different
sized impulse disturbances on x_2. The settings for the experiment are:
Table 2.8 shows the normed errors for the experiment, where M_s again provides
the system with the best performance in all cases.
So far a 2 DoF PID controller has been used which places three of the four closed-loop poles at almost the same frequency. We have seen that placing the anti-windup
compensated poles of the controller during saturation at the same location as the
two slowest of the closed-loop poles gives superior performance for various reference steps and disturbances.
Experiments
Design x2 = 0:1 x2 = 0:5 x2 = 1 x2 = 1:5 x2 = 2
Ma 1.24 1.11 1.11 1.10 1.08
Mk 2.21 1.24 1.09 1.06 1.05
Mh 1.05 1.21 1.35 1.38 1.38
Ms 1.00 1.00 1.00 1.00 1.00
no AWBT 2.26 1.96 1.97 1.98 2.00
min(I ) 0.0024 0.0291 0.0698 0.1127 0.1568
Table 2.8. Normed errors between constrained and unconstrained response to impulse dis-
turbances on x 2 .
The following seeks to investigate how a change in the location of the closed-loop
poles influences the selection of the anti-windup matrix. In order to get better
flexibility in placing the closed-loop poles we exchange the PID controller with a
pole placement strategy parameterized by the controller R(s)u(s) = T(s)r(s) - S(s)y(s),
with a desired closed-loop characteristic polynomial A_{cl}(s) = A_o(s)A_c(s) where
A_o(s) = s^2 + a_1 s + a_2 and A_c(s) = s^2 + a_3 s + a_4. We choose T(s) = \frac{a_4}{\alpha\beta} A_o(s),
which gives the simple reference-to-output transfer function
y(s) = \frac{a_4}{s^2 + a_3 s + a_4} \, r(s).   (2.192)
For the double tank system we need to solve the so-called Diophantine equation
A(s)R(s) + B(s)S(s) = A_{cl}(s).
For a second order controller with integral action we have R(s) = s(s + r_1) and
S(s) = s_0 s^2 + s_1 s + s_2, and by identifying coefficients of powers of equal degree
we find the solution
r_1 = a_1 + a_3 - 2\alpha   (2.194)
s_2 = \frac{a_2 a_4}{\alpha\beta}   (2.195)
s_1 = \frac{-\alpha^2 (a_1 + a_3) + 2\alpha^3 + a_2 a_3 + a_1 a_4}{\alpha\beta}   (2.196)
s_0 = \frac{a_1 a_3 - 2\alpha (a_1 + a_3) + 3\alpha^2 + a_2 + a_4}{\alpha\beta}.   (2.197)
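The solution (2.194)–(2.197) can be verified by multiplying the polynomials back together. The sketch below (Python/NumPy, with arbitrary illustrative values for \alpha, \beta and the desired coefficients) checks that A R + B S reproduces A_cl, using A(s) = (s + \alpha)^2 and B(s) = \alpha\beta from the linearized model:

```python
import numpy as np

alpha, beta = 0.015, 0.05                        # illustrative plant parameters
a1, a2, a3, a4 = 0.3, 0.02, 0.1, 0.004           # illustrative Acl coefficients

r1 = a1 + a3 - 2 * alpha                                          # (2.194)
s2 = a2 * a4 / (alpha * beta)                                     # (2.195)
s1 = (-alpha**2 * (a1 + a3) + 2 * alpha**3
      + a2 * a3 + a1 * a4) / (alpha * beta)                       # (2.196)
s0 = (a1 * a3 - 2 * alpha * (a1 + a3)
      + 3 * alpha**2 + a2 + a4) / (alpha * beta)                  # (2.197)

A = np.polymul([1, alpha], [1, alpha])           # A(s) = (s + alpha)^2
AR = np.polymul(A, [1, r1, 0])                   # A(s) * s(s + r1)
BS = alpha * beta * np.array([s0, s1, s2])       # B(s) * S(s), B = alpha*beta
lhs = AR + np.concatenate(([0, 0], BS))          # pad BS up to degree 4
Acl = np.polymul([1, a1, a2], [1, a3, a4])       # A_o(s) * A_c(s)
assert np.allclose(lhs, Acl)                     # Diophantine equation holds
```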
The previous full experiment is repeated with a unity reference step and an impulse and a load disturbance. The closed-loop poles of the unconstrained system are
placed at the same locations as with the PID controller, except that both real observer poles are now placed at -0.2549. The response of the unconstrained system
exposed to a reference step yields a 12% overshoot due to the relatively undamped
poles of the original PID controller design, but the responses to the impulse and load
disturbances are improved. Figure 2.25 compares the unconstrained responses of
the PID controlled and the pole placement controlled double tank exposed to the
full experiment.
Figure 2.25. Time response for the unconstrained system with a pole placement controller
(solid) and with a PID controller (dashed).
Figure 2.26. Time response for the constrained system with pole placement controller for
different choices of the anti-windup gain matrix. The unconstrained response (dashed), the
constrained response (solid), and the input (dotted) are shown for anti-windup gains (a)
M_k, (b) M_s, (c) M_c, and (d) no AWBT.
Conclusions
A second order double tank system with saturating input has been considered. A
second order PID controller and a second order pole placement controller with
Experiments
Design Full exp. Reference Impulse Load
Mk 1.26 1.01 1.89 1.31
Ms 1.12 1.41 1.00 1.00
Mc 1.00 1.00 1.22 1.22
no AWBT 2.04 3.06 1.15 2.08
min(I ) 0.0551 0.0841 0.0617 0.0048
Table 2.9. Normed errors between constrained and unconstrained response with pole place-
ment controller exposed to reference step, impulse disturbance, and load disturbance.
integral action have been fitted to the system. The following conclusions regarding
anti-windup design are drawn:
The following case study aims to illustrate how predictive control can handle con-
straints in a natural and systematic way. The double tank system from section 2.6.2
is used to compare AWBT with constrained predictive control.
It is paramount that the resulting controller has integral action so that it
can compensate for unknown disturbances. Since the system has no integrators,
the criterion should only penalize changes in the control signal, which leads to the
criterion function
J(k) = \sum_{j=N_1}^{N_2} \left( y(k+j) - r(k+j) \right)^2 + \rho \sum_{j=0}^{N_u} \Delta u(k+j)^2.   (2.199)
This, however, does not guarantee a zero steady-state error if, for example, the control signal is disturbed by a constant load. Hence, the system has to be augmented
with an integral state, z(k+1) = z(k) + r(k) - y(k), in order to accomplish real
integral action in the controller.
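The augmentation can be sketched as follows (Python/NumPy; `Ad`, `Bd`, `Cd` denote assumed discrete-time system matrices, and the reference r(k) enters the augmented model as a separate input, omitted here):

```python
import numpy as np

def augment_with_integrator(Ad, Bd, Cd):
    """Augment x(k+1) = Ad x + Bd u, y = Cd x with z(k+1) = z(k) - y(k) (+ r(k))."""
    n = Ad.shape[0]
    Aa = np.block([[Ad, np.zeros((n, 1))],
                   [-Cd, np.ones((1, 1))]])   # last row implements the z-update
    Ba = np.vstack([Bd, np.zeros((1, 1))])
    Ca = np.hstack([Cd, np.zeros((1, 1))])
    return Aa, Ba, Ca
```

The predictor of the predictive controller is then built from the augmented matrices, so a constant load disturbance is integrated away.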
The following controller parameters have been chosen: N_2 = 50, N_1 = 1,
N_u = 49, and \rho = 2.25, which gives an unconstrained time response with performance
similar to that of the pole placement controller used in section 2.6.2 when exposed
to the full experiment in the said section.
The virtue of the predictive controller is its ability not only to handle magnitude
constraints but also rate constraints on the actuator output. If only magnitude con-
straints are present, the performance of the constrained predictive controller is sim-
ilar to that of the best of the AWBT compensated controllers in section 2.6.2. How-
ever, since AWBT feedback does not compensate for rate saturation, the predictive
controller is superior. This will be illustrated using the experiment with a reference
step, an impulse disturbance on the lower tank, and a load disturbance on the upper
tank. The input is in the interval [0, 1] and it is assumed that a full speed actuator
change from 0 to 1, or vice versa, takes a minimum of 3 seconds.
Figure 2.27 shows the response for the predictive controller and the AWBT compensated pole placement controller. As expected, the predictive controller compensates for the rate saturation and avoids the excessive overshoots that characterize the AWBT compensated controller. Note that the predictive controller's
ability to react to reference changes ahead of time is not used in this simulation.
The quadratic minimization problem subject to constraints was solved using qp()
from MATLAB.
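As a sketch of how such limits enter the quadratic program (in Python rather than MATLAB's qp(); the function and parameter names are illustrative), magnitude and rate constraints translate into bounds and linear inequalities on the move sequence \Delta u:

```python
import numpy as np

def move_constraints(Nu, u_prev, Ts, u_min=0.0, u_max=1.0, slew_time=3.0):
    """Bounds and inequalities on du = [du(k), ..., du(k+Nu)] (illustrative)."""
    m = Nu + 1
    du_max = (Ts / slew_time) * np.ones(m)   # rate limit per sample of length Ts
    S = np.tril(np.ones((m, m)))             # u(k+j) = u_prev + sum_{i<=j} du(k+i)
    lb = (u_min - u_prev) * np.ones(m)       # magnitude: lb <= S @ du <= ub
    ub = (u_max - u_prev) * np.ones(m)
    return -du_max, du_max, S, lb, ub
```

A QP solver then minimizes (2.199) subject to -du_max \le \Delta u \le du_max and lb \le S \Delta u \le ub; this is exactly the structure a full-range travel time of at least 3 s imposes.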
Figure 2.27. Time response for constrained predictive controller (solid) and AWBT compensated pole placement controller (dashed), both with magnitude and rate saturation in
the actuator.
Overshoot
It was described in section 2.3.5 how constraints on the output variables of a system can be used to shape the closed-loop response. Figure 2.28 shows the
response of the constrained predictive controller exposed to step reference changes.
The actuator is magnitude and rate saturated and the system output has been constrained such that no overshoot occurs. The controller here has knowledge of reference changes ahead of time.
Figure 2.28. Time response for the output constrained predictive controller (solid) exposed to
reference changes (dashed). The output is constrained such that no overshoot occurs.
2.7 Summary
give the most flexible and versatile management of constraints including input and
output magnitude as well as rate constraints.
Section 2.4 presented nonlinear approaches characterized by being specific to in-
put constraints and certain classes of linear systems. It was pointed out that the
dynamic rescaling method by Morin et al. (1998a) makes the determination of the
controller bounds difficult and hence an adaptive estimation of the bounds was
presented. A nonlinear gain scheduling approach was also presented. The method
ensures a constant damping ratio and stable poles during the scheduling. A simulation
study revealed good performance of the method.
In Section 2.5 a comparison of the presented strategies was made.
Section 2.6 contains case studies and investigated in particular the observer-based
AWBT approach and the predictive controller applied to a double tank system. It
was argued that the parameter selection for the AWBT compensation is of great
importance for the overall performance. Specifically, the poles of the constrained
controller should be chosen close to the slowest poles of the unconstrained closed-loop
system. The predictive controller's management of constraints is functional.
Imposing constraints on the output of the system is a straightforward and useful
way of forming the closed-loop response.
Chapter 3
The subject of this chapter is robotic control systems and how to incorporate
closed-loop constraint-handling into a real-time trajectory generation and execution
system.
The term robotic system is widely used to describe a diversity of automatic control systems such as industrial robots, mobile robots (for land, air, or sea use), and
mechanical manipulators. Robotics covers a large area of applications ranging from
manufacturing tasks such as welding, batch assembly, inspection, and order picking, to laboratory automation, agricultural applications, toxic waste cleaning, and
space or underwater automation.
The word robot originates from Czech and translates to slave or forced labour.
Originally, the robot was introduced as a human-like machine carrying out mechanical work, but robotics has since gained a more versatile interpretation. Two
modern definitions of a robotic system are given in McKerrow (1991).
The Robot Institute of America:
The first definition seems to be mainly concerned with production robots such as
mechanical manipulators, whereas the second definition has a false ring of worker
displacement being the overall objective of automation. Definitions are always
debatable, but in general a robot is considered a general-purpose, programmable
machine with certain human-like characteristics, such as intelligence (judgment,
reasoning) and the ability to sense its surroundings, but not necessarily human-like
appearance.
Three functionalities are sought integrated in a robotic system such that it may
move and respond to sensory input at an increasing level of competence: i) perception, which helps the robot gather information about the work environment and itself;
ii) reasoning, which makes the robot capable of processing information and making knowledge-based decisions; and iii) control, with which the robot interacts with
objects and the surroundings. Hence, a robotic control system is often complex
and will typically exhibit some degree of autonomy, i.e. the capability
to adapt to and act on changes or uncertainties in the environment (Lildballe, 1999)
in terms of task planning, execution, and exception handling.
The focus of this chapter is to identify control relevant constraints in the robotic
control system and discuss how and where to handle these. From a control point of
view, some generic elements of a robot control system such as motion controllers,
trajectory planning and execution, task planning, sensors, and data processing are
described. The problems with off-line trajectory planning and non-sensor-based execution are discussed and suggestions for on-line trajectory control are made. The
emphasis is put on the layout of such a system while the actual control algorithms
to some degree are investigated in chapter 4.
Figure 3.1 shows a typical structure of a robotic control system consisting of the
three functionalities perception, reasoning, and control as mentioned before. As
indicated in the figure with the dashed and overlapping line, reasoning in the robot
may take a more distributed form than control and perception. A great part of this
chapter is concerned with integrating path planning and trajectory control which
3.1 / Elements of a Robot Control System 85
Figure 3.1. Typical structure of a robotic control system comprising reasoning (management, supervision, learning, planning, obstacle avoidance), control (actuators), and perception (sensors), interacting with the environment through robot motion.
clearly advances some of the intelligence to the controller. Likewise, sensors may include some intelligence in terms of, for instance, self-calibration and fault-detection.
Each functionality can be decomposed into a set of functional subsystems where
some are mechanical/electrical such as sensors and actuators, some are control
algorithms such as motion control, trajectory execution, and sensor fusion, and
others are integrations of knowledge and high level task generation and decision
making such as planning and supervision. A short description is given:
Actuators
Robot motion is accomplished by electrical, hydraulic, or pneumatic actuators, comprising devices such as motors, valves, piston cylinders, and chain
drives.
Sensors
Sensors provide measurements of variables within the robot (internal sen-
sors) and from the environment (external sensors) used in various tasks
such as controlling, guidance, and planning. The sensing devices include
encoders, potentiometers, tachometers, accelerometers, strain gauges, laser
86 Chapter 3 / Robot Controller Architecture
range scanners, ultrasonic sensors, sonars, cameras, and GPS (Global Posi-
tioning System) receivers.
Motion controllers
The motion controllers command the robot's actuators such that the intended motion is achieved. Control is concentrated within three different approaches: 1) joint space control where references are joint coordinates, 2)
task space control where references are Cartesian coordinates, and 3) force
control.
Reasoning
Technically, this is where the main part of a robotic system's intelligence is
located. The objective is to enable the robot to plan and supervise various actions and in particular to react to changing and uncertain events in the robot's
work environment and, to some degree, in the robot itself (fault-tolerance).
This short review of the elements of a robot system leads to an examination of the
constraints acting on the system.
3.2 / Trajectory Generation and Execution 87
3.1.1 Constraints
Constraints in a complex system such as a robot are numerous, ranging from computer power and actuators to payload capacity and safety concerns. Most relevant
for the control system are constraints associated with dynamic conditions during
operation. These include:
Saturation in actuators.
The input, state, and output constraints are in general difficult to handle during the
design of the control system, and even during the planning phase of a new motion
task. Thus, on-line compensators should be present.
In robotics a common task is motion along a predefined path. A robot solves tasks
and missions by executing a sequence of motions in a workspace. Due to
limited actuator capabilities and mechanical wear from rough and jerky motion, it
is disadvantageous and unrealistic to reach an arbitrary target position within one
sample interval, equivalent to applying a step reference. Often the motion from one
position to another must be controlled in terms of intermediate positions, velocities,
and accelerations in space. Thus, the motion of the robot should be planned as a
function of time, known as trajectory generation. A trajectory represents the time
history of positions, velocities, and accelerations desired to achieve a particular
motion.
A number of challenges exist in specifying trajectories and applying these to a
robot control system. Typically, trajectories are generated in an off-line fashion.
This calls for a structured, deterministic, and static environment and a perfect modelling of the robot itself such that the motion can be planned in a collision-free
manner.
In trajectory generation and motion planning, the robot (or the system designer/operator) has to consider a number of design criteria. Some of these are:
minimum time
minimum energy
sufficient actuator torque to handle modelling errors and disturbances.
The solution should take into account actuator constraints as well as system
and controller dynamics.
The relevance of these criteria depends on the robot's application and the structure
of the work space. For instance, for an AGV (Autonomous Guided Vehicle) used
in a warehouse facility, collision-free motion is of major concern and temporary
deviations from the nominal path are acceptable, whereas for a welding robot the
path tracking has first priority.
Some of these objectives can be conflicting. For example, in case of saturating actuators the robot must choose whether to follow the path as well as possible or
whether to keep up the velocity and get to the end target in minimum time. This, of
course, depends on the application, but through the closed-loop trajectory control
system it should be possible to specify the conditions and the choice.
Usually, the trajectory is specified either in the joint space or in the Cartesian space,
also known as the configuration space. In the joint space the references are functions of the joint angles for a robot arm. For example, for a mobile robot the references are either functions of the left and right wheel velocities (v_l, v_r) or of the
angular and forward velocities (v, \omega). The relation is given as (see Section 4.3 for
more details):
\omega = \frac{v_r - v_l}{b}
v = \frac{v_r + v_l}{2},
where b is the wheel base. In the Cartesian space the references are functions of
the two- or three-dimensional path which the end effector (or mobile robot) must
follow.
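These relations and their inverse can be sketched as follows (Python, illustrative):

```python
def wheels_to_body(vl, vr, b):
    """(vl, vr) -> (v, omega): forward velocity and turn rate; b is the wheel base."""
    return (vr + vl) / 2.0, (vr - vl) / b

def body_to_wheels(v, omega, b):
    """(v, omega) -> (vl, vr): the inverse mapping used to form wheel references."""
    return v - omega * b / 2.0, v + omega * b / 2.0
```

A trajectory generator working in (v, \omega) would use `body_to_wheels` to produce the joint space references for the wheel velocity controllers.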
Both approaches have advantages. Joint space schemes are usually the easiest to
compute, as the motion controllers take joint space variables as references, but it
can be difficult to generate smooth straight motions through space. Most tasks are
defined in terms of a path in the configuration space. Furthermore, the environment,
including obstacles, is described with respect to this frame. Cartesian space generated references must be translated to joint space references for the controllers. This
involves inverse kinematics. The main problem here is redundancy, which means
that one point can be reached with different joint configurations; see Craig (1989)
or McKerrow (1991) for more details. For mobile robots with nonholonomic constraints, especially stabilization of the robot to a given posture in the configuration
space makes generating the necessary joint references difficult.
Traditionally, trajectories are generated off-line. Based on initial and desired target positions and velocities and the time constraints on the motion, the history of
references can be calculated and fed to the robot controllers at a given rate. This approach assumes a static, structured environment with non-saturating actuators, such
that the robot has no problem in following the reference. In case of unexpected
events the motion is stopped and replanned. Hence, from the time of planning, the
execution is open-loop, or merely the playback of a predefined plan.
3.2 / Trajectory Generation and Execution 91
Figure 3.2. The path velocity controller (PVC). The feedback from the controller to the
PVC module modifies the reference update rate upon saturating actuators.
Tarn et al. (1994) and Tarn et al. (1996) suggest a path-based approach where the
time-dependency in the trajectory is replaced with a path-dependency making the
trajectory event-based. The execution of the trajectory follows the actual position
of the robot along the path. This way a robot is capable of avoiding an obstacle
without replanning the path and without having to catch up with a reference far
ahead of it. The trajectory waits for the robot, so to speak.
The general architecture presented in this section is based on the discussion in the
previous section.
Figure 3.3 shows a traditional planning and control scheme. It consists of a stan-
dard feedback loop around the controlled system. The reference to the controller is
provided by the execution module which simply at some predefined rate picks the
next reference in the trajectory table. The planning module calculates the time his-
tory of references. Occasionally, there can be a feedback from some sensor system
reporting events like mission accomplished or replanning needed.
This scheme has a number of drawbacks. First of all, since controller saturation
cannot be detected, the scheme relies on the planned motion being sufficiently
nice. Otherwise, we could experience windup in the reference, so to speak. That is, the trajectory executor continues to supply new references ahead of the robot's actual position, which could keep the actuator in saturation and cause the tracking
error to grow. Therefore, we need an extra feedback loop from the controller to the
execution module in order to be able to detect saturation and to act on it by for
example slowing down the execution rate.
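The effect of such a feedback loop can be sketched as follows; a hypothetical executor (my illustration, not the thesis' implementation) that simply freezes the trajectory index while the controller reports saturation:

```python
class TrajectoryExecutor:
    """Feeds precomputed references to the controller, but holds the
    trajectory index while the controller reports saturation, so the
    reference cannot wind up far ahead of the robot."""

    def __init__(self, trajectory):
        self.trajectory = trajectory  # list of reference points
        self.index = 0

    def next_reference(self, saturated):
        # advance only when the actuators are not saturated
        if not saturated and self.index < len(self.trajectory) - 1:
            self.index += 1
        return self.trajectory[self.index]
```

The saturation flag is assumed to come from the motion controller at each sampling instant; a real executor would also scale velocities rather than just pause.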
Furthermore, we should add a feedback loop from some sensor system to the ex-
ecution module to recalculate/replan/adjust the reference depending on how close
the robot is to the target position or path. This can be seen as an on-line trajectory
control. Finally, the planning module must have status information from the system
and environment regarding the mission, obstacles, etc.
These suggestions lead to the scheme in Figure 3.4. We could further introduce
a feedback from the execution module directly to the planning module. However,
this is already accomplished through the sensor system.
The following describes the two main modules (generation and execution) in the
general trajectory planning and executing architecture (Figure 3.4).
Figure 3.4. Overview of general generation/execution scheme for sensor- and event-based
trajectory control.
Trajectory Generation
Task Description
Initialization: task criteria
Input: observations of the environment (obstacles), initial and target position, velocities, mission status
Action: calculates smooth history of references or simply defines the target of the motion along with constraints on the trajectory
Output: history of references, target
Sampling time: event-based
Sampling: occurs when mission accomplished or replanning needed
Execution
For each controller sampling instant this module provides a reference to the controllers based on the state of the system and the controllers.
Task Description
Initialization: history of references
Input: saturation in controllers, motion observations
Action: changes the reference itself or the update rate based on the system state and saturation in actuators. May switch to semi-autonomous control while for instance passing an obstacle
Output: reference to motion controllers
Sampling time: similar to the sampling time in the controllers. May be interrupted by the planning module
Sampling: constant
3.3.1 Considerations
Some bumpless transfer problems exist when using the trajectory control scheme described above. Upon replanning a new trajectory in the trajectory generation module it is desirable to have a smooth transfer from the old trajectory to the new one. This is easily done by stopping the robot before switching, but this approach could be too slow, and some trajectory planning schemes may not allow zero velocities (for example a path following unit for a mobile robot). A change in the control task (for example from path following to posture stabilization for a mobile robot) may likewise cause unwanted transients. Finally, the sampling of the execution module can change from one rate to another due to, for example, switching from encoder readings to camera measurements.
The extra feedback loops in the trajectory control scheme prompt some theoretical questions (Tarn et al., 1996).
3.4 Architecture for Mobile Robot

This section deals with the layout of a trajectory control system for mobile robots.
First, we identify some classic motion control tasks in mobile robotics. Table 3.3
shows four different ways of controlling a robot.
In general, the task of tracking has time constraints while the task of following
is unconcerned with the time base. For both of these the velocities must be non-
zero while in stabilization the velocities tend to zero. Depending on the motion
task different strategies for dealing with constraints and unexpected events must be
applied.
Figure 3.5 shows the layout of a trajectory control scheme for motion control of a
mobile robot. The robot consists of two motors for driving the wheels and one or two castor wheels. The motors are controlled by the motor controllers. The robot is equipped with a variety of sensors such as encoders, camera, ultrasound, and laser
range scanner. A sensor fusion and data processing module extracts information
from the sensors. The intelligence of the robot is placed in the task and trajectory
planning module and the execution module. The following lists details on the dif-
ferent modules but of course this is very dependent on the robot, the application,
and the nature of the environment.
[Figure 3.5 block labels: task planning and trajectory generation (event-based sampled, taking events such as mission accomplished or a new obstacle, producing path, velocity, target, and constraints); execution with obstacle avoidance and velocity scaling, supporting the task-dependent modes path following, posture stabilization, path tracking, and point tracking; perception with sensor fusion, data processing, modelling, dead-reckoning, and localization, producing absolute posture, distance and orientation errors, and relative posture to target; signals include saturation, encoder readings, data, and forward and angular velocity references.]
Figure 3.5. General overview of sensor- and event-based trajectory control on mobile
robots.
the path and the velocity profile. In case of posture stabilization the output would just be the target position or, in case of an inaccurately known target position, an initial value of the target to search for.
Trajectory execution
This module is part of the real-time closed-loop control system and supplies
references to the motor controllers. The reference update rate or the references themselves may be altered in case the actuators saturate or updated information arrives on the target position and obstacles (that may change the path to avoid
collisions). In some cases exceptions will have to go through the trajectory generation module or even further up the control system hierarchy (task planning module, teleoperation, etc.) for a complete replanning, but the idea is to have the execution module handle basic exceptions itself. An example is obstacle avoidance. During the actual avoidance, the robot may be semi-autonomously teleoperated or running an autonomous recovery plan
but there is no need to abort the original plan since the executor will track the
progress of the avoidance and will be ready to continue the motion task.
Motor controllers
Here, the low level control signals to the robot's motors are calculated. This
may be with or without some anti-windup strategy. Typically, the references
are angular and linear velocities (or equivalent: left and right wheel veloci-
ties). The controllers take readings from the encoders as input.
The benefits from sensor-based trajectory control are obvious but the realization
of such an architecture is difficult. This treatment of trajectory control in mobile
robots continues in the case study in chapter 4 and will in particular look at some
specific solutions to path following and posture stabilization.
3.5 Summary
This chapter has discussed sensor-based closed-loop trajectory planning and exe-
cution and illustrated an architecture for such a system. In section 3.1 the basis was established with a description of the elements of a robot control system. From this, trajectory generation and execution were considered in section 3.2. Pros and cons of off-line versus on-line trajectory control were pointed out. Section 3.3 summed up the discussion in a general architecture for trajectory control. Finally, section 3.4
focused on a mobile robot and special considerations regarding such a vehicle.
The chapter has, however, not provided guidelines for the actual design of the algorithms used in the new trajectory feedback loops. The following chapter looks to
some degree at that problem.
Chapter 4
Case Study: a Mobile Robot
Mobile robotics has in the past decades been a popular platform for testing advanced control solutions covering areas like motion control, sensor fusion and management, navigation, and map building, to name a few. This interest is triggered by
the countless challenges encountered by researchers when implementing mobile
robotic applications. The desired autonomous operation of the vehicle necessitates
multiple sensors, fault-tolerant motion control systems, and reasoning capabilities.
The mobility itself induces new annoyances (or stimuli depending on how you look
at it) since the environment is most likely to be dynamic and partially unstructured.
This calls for collision avoidance, extended perception skills, and makes task plan-
ning and execution difficult.
Where the discussion in chapter three was a fairly general description of a robotic
system and in particular trajectory control, this chapter goes into details with a
number of practical problems and solutions in mobile robotics. This includes design, implementation, and experimental verification. The chapter considers the following
aspects:
encoder gains and the wheel base can reduce the drifting of the posture (position and orientation) estimate and extend the period of time where the
robot for example can perform blind exploration, that is manoeuvering with-
out external position information. The calibration is based on the existing
sensor package and appropriate filters.
4.1 The Mobile Robot

The mobile robot of interest is a so-called unicycle robot which has two differential-
drive fixed wheels on the same axle and one castor wheel, see Figure 4.1. A robot
with this kind of wheel configuration has an underlying nonholonomic property
that constrain the mobility of the robot in the sideways direction. This adds signif-
icantly to the complexity of the solutions to various motion control problems for
the robot.
A mobile robot of this type is currently in use at the Department of Automation
and is equipped with a sensor package consisting of a camera among other things
capable of detecting artificial guide marks for positional measurements, optical
encoders on the driving wheels measuring wheel rotations, and a laser range scan-
ner providing distance measurements with a field of view of 180°.

(Footnote: The position and orientation of a vehicle is in the following referred to as the posture. In the literature, the term pose is also used.)

[Figure 4.1: (a) posture and velocity definitions (v_l, v_r, v, ω, wheel base b, wheel diameter 2r_l, coordinates x, y); (b) the department's self-contained vehicle with camera and pan/tilt unit, guide mark, communication links, laser scanner, ultrasonic sensors, castor, and wheel encoders.]

A thorough overview of sensors and sensor systems for mobile robots is found in Borenstein
et al. (1996).
The posture of the mobile robot is given by the kinematic equations

\[
\dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = \omega, \tag{4.1}
\]

where (x, y) indicates the position of the robot center in the Cartesian space and θ is the orientation or heading angle of the robot (the angle between the x-axis and the forward velocity axis of the robot). The inputs are the heading or forward velocity v (defined as v = \dot{x}\cos\theta + \dot{y}\sin\theta) and the angular velocity ω. Combined, the triplet (x, y, θ)^T defines the posture of the robot. See Figure 4.1 for details.
The kinematic model (4.1) is easily sampled with the assumption of constant inputs v and ω during the sampling periods,

\[
x(k+1) = x(k) + \frac{\sin\!\left(\frac{T\omega(k)}{2}\right)}{\frac{T\omega(k)}{2}}\, T v(k)\, \cos\!\left(\theta(k) + \frac{T\omega(k)}{2}\right)
\]
\[
y(k+1) = y(k) + \frac{\sin\!\left(\frac{T\omega(k)}{2}\right)}{\frac{T\omega(k)}{2}}\, T v(k)\, \sin\!\left(\theta(k) + \frac{T\omega(k)}{2}\right) \tag{4.2}
\]
\[
\theta(k+1) = \theta(k) + T\omega(k),
\]

where T is the sampling period.
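The update (4.2) can be sketched in Python (my illustration; the sinc-like factor is replaced by its limit 1 for very small Tω to avoid division by zero):

```python
import math

def kinematics_step(x, y, theta, v, omega, T):
    """One step of the sampled unicycle model (4.2), exact for
    constant v and omega over the sampling period T."""
    half = T * omega / 2.0
    factor = math.sin(half) / half if abs(half) > 1e-9 else 1.0
    x += factor * T * v * math.cos(theta + half)
    y += factor * T * v * math.sin(theta + half)
    theta += T * omega
    return x, y, theta
```

For ω = 0 the factor is 1 and the update reduces to straight-line motion, as expected from (4.1).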
4.2 Localization with Calibration of Odometry

A more thorough discussion of the following has been published in Bak et al. (1999).

In mobile robotics knowledge of a robot's posture is essential to navigation and
path planning. This information is most often obtained by combining data from
two measurements systems: (1) the well-known odometry model (Wang, 1988)
based on the wheel encoder readings and (2) an absolute positioning method based
on sensors such as a camera (Chenavier and Crowley, 1992; Murata and Hirose,
1993), a laser range finder (Cox, 1989) or ultrasonic beacons (Kleeman, 1992). The
tool for fusing data is usually a Kalman filter (Larsen, 1998; Bak et al., 1998).
The need to combine two measurement systems, a relative and an absolute, comes
from the use of the efficient indispensable odometry model which has the unfor-
tunate property of unbounded accumulation of errors. Typically, these errors are
caused by factors such as irregular surfaces, inaccurate vehicle specific physical
parameters (wheel radii, wheel base, gear ratios), and limited encoder sampling
and resolution. See Borenstein and Feng (1995, 1996) for a thorough discussion of
the subject. Obviously, it is beneficial to make either of the measurement systems
as accurate as possible. Given a reliable precise odometry system the robot can
increase operation time in environments where absolute measurements for some reason are lacking, and fewer of the often time-consuming absolute measurements are needed, allowing for enhanced data processing or reduced costs. Likewise, precise
absolute measurements provide a better and faster correction of the posture and
again fewer measurements are needed.
Borenstein and Feng (1996) describe a procedure to calibrate the physical param-
eters in the odometry model. The method uses a set of test runs where the mobile
robot travels along a predefined trajectory. Given measurements of initial and fi-
nal postures of the robot, a set of correction factors are determined. The procedure
has proven to be precise and straightforward to carry out in practice but it is also
found to be rather time-consuming as the suggested trajectory for the experiment
is 160 m long (10 runs of 16 m). Furthermore, it relies on precise measurements
as the experiment only gives 10 initial and 10 final measurements from which the
calibration information is extracted.
In Larsen et al. (1998) an augmented extended Kalman filter estimates the three
physical parameters in the odometry model along with the posture estimation. This
allows for a more automated calibration procedure which can track time-varying
parameters caused by for example payload changes. Unfortunately, the observabil-
ity of the parameters is poor and the calibration quality relies strongly on the chosen
trajectory.
The following presents a two step calibration procedure based on the filter in Larsen
et al. (1998). Step one determines the average encoder gain while step two deter-
mines the wheel base and the left (or right) encoder gain during which the average
value is maintained. This gives far better observability of the parameters and still
allows for automation of the procedure.
104 Chapter 4 / Case Study: a Mobile Robot
Let the posture of a mobile robot's center with respect to a given global reference system be described by the state vector z = (x, y, θ)^T. Given a two-wheeled mo-
bile robot with encoders mounted on each motor shaft of the two drive wheels the
odometry model is useful to describe the propagation of the state. Based on (4.2)
we have
\[
x(k+1) = x(k) + u_1(k)\cos(\theta(k) + u_2(k))
\]
\[
y(k+1) = y(k) + u_1(k)\sin(\theta(k) + u_2(k)) \tag{4.3}
\]
\[
\theta(k+1) = \theta(k) + 2u_2(k),
\]
where input u1 equals the translational displacement while input u2 equals half the
rotational displacement, both since previous sample. The inputs are functions of
the encoder readings and the physical parameters of the robot. The definitions are
\[
u_1(k) = \frac{k_r\,\phi_r(k) + k_l\,\phi_l(k)}{2}, \qquad
u_2(k) = \frac{k_r\,\phi_r(k) - k_l\,\phi_l(k)}{2b}, \tag{4.4}
\]

where \phi_r(k) and \phi_l(k) denote the encoder readings, k_r and k_l the gains from the encoder readings to the linear wheel displacements for the right and left wheel, respectively, and b the wheel base of the vehicle. The encoder gains k_r and k_l are defined as

\[
k_r = \frac{r_r}{h_r N_r}, \qquad k_l = \frac{r_l}{h_l N_l}, \quad \text{[m/pulse]} \tag{4.5}
\]
with h the encoder resolution [pulses/rad], N the gear ratio from motor shaft to
wheel, and r the wheel radius.
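Equations (4.3)-(4.5) can be sketched directly in Python (variable names are mine; the per-sample encoder increments stand in for the readings in the model):

```python
import math

def odometry_step(x, y, theta, dphi_r, dphi_l, k_r, k_l, b):
    """Odometry update (4.3)-(4.4): encoder increments [pulses] to
    posture. k_r, k_l are encoder gains [m/pulse], b the wheel base [m]."""
    u1 = (k_r * dphi_r + k_l * dphi_l) / 2.0        # translational displacement
    u2 = (k_r * dphi_r - k_l * dphi_l) / (2.0 * b)  # half rotational displacement
    x += u1 * math.cos(theta + u2)
    y += u1 * math.sin(theta + u2)
    theta += 2.0 * u2
    return x, y, theta

def encoder_gain(r, h, N):
    """Encoder gain (4.5): wheel radius r [m], encoder resolution
    h [pulses/rad], gear ratio N from motor shaft to wheel."""
    return r / (h * N)
```

Equal increments on both wheels give pure translation (u2 = 0); opposite increments give pure rotation (u1 = 0).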
From the odometry model the accumulation of errors is easily seen. Any uncer-
tainty in u1 or u2 will be added to the previous posture estimate and will over time
cause drift.
A number of potential error sources can be identified from the model. The encoder
readings may not correspond to the actual displacement of the robot for different
reasons such as uneven floors, wheel slippage, limited sampling rate and resolution.
The nature of these errors can be classified as stochastic. The calculation of the
inputs u1 and u2 may also be erroneous if for example the physical parameters
(the wheel base and the left/right encoder gain) are incorrectly determined. The
influence from such error sources is systematic (correlated over time and not zero-mean) and therefore very difficult to deal with in a Kalman filter, which is often used for posture estimation in mobile robotics. The good news about systematic errors is that they originate from physical parameters, so it is a matter of proper calibration to reduce the effects.
Now, let a bar denote the true, actual physical value as opposed to the nominal value. The following will model the uncertainties in the three physical parameters in the odometry model. The wheel base b is modelled as

\[
\bar{b} = \delta_b\, b, \tag{4.6}
\]

and the encoder gains as

\[
\bar{k}_r = \delta_r\, k_r, \qquad \bar{k}_l = \delta_l\, k_l. \tag{4.7}
\]

The average of the true encoder gains is

\[
\bar{k}_a = \frac{\bar{k}_r + \bar{k}_l}{2}. \tag{4.8}
\]

As will be seen later this value is easily calibrated by a simple experiment and we can therefore impose the constraint of a constant average gain on the estimation of the encoder gains. From (4.7) and (4.8) we get

\[
\delta_l = \frac{2\bar{k}_a - \delta_r k_r}{k_l}. \tag{4.9}
\]

This implies that initial knowledge of \bar{k}_a allows us to estimate only two correction factors, namely \delta_b and \delta_r.
In this section we discuss ways of measuring the average encoder gain ka by means
of a simple extended Kalman filter for doing the calibration on-board the robot.
Borenstein and Feng (1996) model ka by means of a scaling error factor that relates
the true value to the nominal. We shall adopt this approach in the following way:
\[
\bar{k}_a = \frac{\bar{k}_r + \bar{k}_l}{2} = \delta_a\, \frac{k_r + k_l}{2}, \tag{4.10}
\]

where δ_a is the correction factor and must be determined.
The scaling error δ_a is often measured by means of a straight-line motion experiment (von der Hardt et al., 1996; Borenstein and Feng, 1995). Let the robot travel in a straight line over a given distance measured by means of the odometry model. Call this distance s_odo. Now, measure the actual driven distance s_actual by tape measure or similar and calculate the scaling error as δ_a = s_actual / s_odo. Of course, this experiment relies on the vehicle's trajectory being close to the presumed straight line. One should pay attention to two things:
2. The uncertainties on the encoder gains should not be too great. Iterating the
calibration procedure can reduce this error source.
This experiment can be implemented on the robot in order to automate the process of calibration, as will be shown in the following. Let s(k) be the accumulated displacement of the robot's center. Now, model the propagation of s(k) and δ_a(k) as follows:

\[
s(k+1) = s(k) + \delta_a(k)\, \frac{k_r\,\phi_r(k) + k_l\,\phi_l(k)}{2}, \qquad
\delta_a(k+1) = \delta_a(k), \tag{4.11}
\]

with s(0) = 0 and δ_a(0) = 1. It is advantageous to use s(k) as opposed to the three-state odometry model because here we have no dependency on the initial orientation or a given reference system.
Now, let a process noise q ∈ ℝ³ have covariance Q and add it to the encoder readings and δ_a(k). The linearized system matrix A_s(k) and process noise distribution matrix G_s(k) then follow:

\[
A_s(k) = \begin{bmatrix} 1 & \dfrac{k_r \phi_r(k) + k_l \phi_l(k)}{2} \\ 0 & 1 \end{bmatrix} \tag{4.12}
\]

\[
G_s(k) = \begin{bmatrix} \dfrac{\delta_a(k)\, k_r}{2} & \dfrac{\delta_a(k)\, k_l}{2} & 0 \\ 0 & 0 & 1 \end{bmatrix}. \tag{4.13}
\]
where ‖·‖₂ denotes the Euclidean norm and e(k) is the uncertainty on the measurement. The quantity y₀ is the robot's absolute initial position, either produced as y(0) or as a mean value of several measurements, possibly from more than one sensor. Assume e(k) is an uncorrelated white noise process with distribution e(k) ∈ N(0, R).
This filter will estimate δ_a(k) as the robot moves along a straight line and receives
correcting measurements of the displacement. It is vital that the robot travels as
straight as possible.
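The filter can be sketched as follows: a minimal two-state extended Kalman filter in Python following (4.11)-(4.13), with the absolute driven distance as the scalar measurement, i.e. H = [1, 0]. Variable names and the tuning values in the test below are my assumptions, not the thesis' implementation:

```python
def ekf_scaling_step(xhat, P, dphi_r, dphi_l, k_r, k_l, q, R, z=None):
    """One predict/update cycle for the state xhat = [s, delta_a].
    q = (q_r, q_l, q_a) are process noise variances on the encoder
    readings and on delta_a; R is the measurement variance; z, when
    given, is an absolute measurement of the driven distance."""
    s, da = xhat
    u = (k_r * dphi_r + k_l * dphi_l) / 2.0

    # time update: propagate state, P = A P A' + G Q G'  (4.11)-(4.13)
    s = s + da * u
    p00 = P[0][0] + u * (P[1][0] + P[0][1]) + u * u * P[1][1]
    p01 = P[0][1] + u * P[1][1]
    p10 = P[1][0] + u * P[1][1]
    p11 = P[1][1]
    p00 += (da * k_r / 2.0) ** 2 * q[0] + (da * k_l / 2.0) ** 2 * q[1]
    p11 += q[2]

    # measurement update with H = [1, 0]
    if z is not None:
        S = p00 + R
        k0, k1 = p00 / S, p10 / S
        innov = z - s
        s += k0 * innov
        da += k1 * innov
        p00, p01, p10, p11 = ((1 - k0) * p00, (1 - k0) * p01,
                              p10 - k1 * p00, p11 - k1 * p01)

    return [s, da], [[p00, p01], [p10, p11]]
```

With a tight distance measurement and a large initial variance on δ_a, the estimate snaps to the true scaling factor within a few updates.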
In this section we assume that the scaling error δ_a has been determined and we can rely on the fact that the average value of k_r and k_l is correct. To simplify the notation we assume δ_a = 1, or equivalently that the values of k_r and k_l have been corrected with δ_a: k_r = δ_a k_{r,old}, k_l = δ_a k_{l,old}.
Now, the idea is to estimate the remaining uncertainty parameters simultaneously
with an existing posture estimation system on the robot. See Bak et al. (1998) for
a description of such a system. By augmenting the system state vector z with the two uncertainty parameters δ_r and δ_b we get

\[
z_{aug} = \begin{bmatrix} x & y & \theta & \delta_r & \delta_b \end{bmatrix}^T, \tag{4.15}
\]

where the two extra states propagate as

\[
\delta_r(k+1) = \delta_r(k), \qquad \delta_b(k+1) = \delta_b(k). \tag{4.17}
\]
The process noise q ∈ ℝ⁴ has covariance Q and is added to the encoder readings (φ_r and φ_l) and to the two extra states (δ_r and δ_b).
The linearized augmented system consists of the system matrix Aaug and the pro-
cess noise distribution matrix Gaug :
" #
A F
Aaug (k) = (4.18)
0 I
" #
G 0
Gaug (k) = ; (4.19)
0 I
with details given in Appendix A. The matrices A, F and G are the non-augmented
linearized system, input and noise distribution matrices respectively while I de-
notes an identity matrix of appropriate dimensions.
In order to have convergence of the filter we use an absolute measurement y(k) ∈ ℝ³ of the true state vector z.

The determination of δ_r and δ_b relies only on the knowledge of k_a and the absolute measurements. The trajectory is free to choose.
This section validates the described calibration procedure through simulations using an advanced Simulink model. Based on simulations, suggestions for a useful test trajectory are made.

The employed Simulink model includes motor dynamics, encoder quantization, viscous and Coulomb friction, stiction as well as image quantization. See Nørgaard et al. (1998) for further details.
As a measure of convergence we use the relative estimation error, denoted ε(k) and defined as

\[
\varepsilon_a(k) = \frac{\hat{\delta}_a(k)\, k_a - \bar{k}_a}{\bar{k}_a}, \qquad
\varepsilon_r(k) = \frac{\hat{\delta}_r(k)\, k_r - \bar{k}_r}{\bar{k}_r}, \qquad
\varepsilon_b(k) = \frac{\hat{\delta}_b(k)\, b - \bar{b}}{\bar{b}}, \tag{4.22}
\]

for the scaling, the right wheel, and the wheel base correction factors, respectively.

For Monte Carlo experiments with a number of simulations with stochastically determined parameters we use the average of the numerical value of ε(k), denoted S(k) and defined as

\[
S(k) = E\{|\varepsilon(k)|\}, \tag{4.23}
\]

where E{·} is the expectation. To show convergence we use the average and standard deviation of ε(k) over a number of samples. These quantities we denote m(i) and σ(i), respectively, and define them as

\[
m(i) = \frac{1}{i+1} \sum_{k=M-i}^{M} \varepsilon(k), \qquad
\sigma^2(i) = \frac{1}{i} \sum_{k=M-i}^{M} \left(\varepsilon(k) - m(i)\right)^2, \tag{4.24}
\]

where M is the total number of samples in a simulation.

Consequently, ε(k) and m(i) should tend to zero and S(k) should become small as k becomes large.
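The error measures above are straightforward to compute; a small Python sketch of (4.22) and (4.24) (function names are my own):

```python
def relative_error(delta_hat, nominal, true_value):
    """Relative estimation error (4.22): compare the corrected nominal
    parameter (delta_hat * nominal) with the true physical value."""
    return (delta_hat * nominal - true_value) / true_value

def tail_stats(eps, i):
    """Mean m(i) and variance over the last i+1 samples of the error
    sequence eps, following (4.24) with M = len(eps) - 1."""
    M = len(eps) - 1
    tail = eps[M - i: M + 1]
    m = sum(tail) / (i + 1)
    var = sum((e - m) ** 2 for e in tail) / i
    return m, var
```

Applied to a Monte Carlo batch, `tail_stats` over the last samples of each run indicates whether the correction factors have settled.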
Figure 4.2. One realization of the scaling correction factor estimation. Initially, a 2% relative estimation error exists. After the experiment, the error is 0.004%.
Figure 4.3 shows results from the 100 Monte Carlo experiments, where (a) gives the average trajectory of the estimation error while (b) shows initial and final values of ε_a(k). The drift on the estimate of the distance is due to the robot not traveling in a straight line, caused by inaccurate encoder gains (we only estimate the average) and the controller dynamics. The second step of the procedure will correct this mismatch. For all simulations we have good convergence, as the average and standard deviation over the last 50 samples are
Before turning to the systematic odometry errors we will look at the observability
of the two remaining parameters. The observability of the augmented filter depends
on the chosen trajectory. Determination of observability for time-varying systems
is somewhat involved and will not be pursued further here. But, given certain con-
ditions such as slow constant speed and turning rate the linearized system matrix is
slowly time-varying and can be assumed constant for observability purposes. This
Figure 4.3. Scaling correction factor estimation with 100 Monte Carlo estimations using a Simulink model and an up to 2 percent initial error on the physical parameters of the odometry model. (a) The top plot shows the average distance estimation error while the bottom plot shows the average relative scaling estimation error S_a(k). (b) Initial (cross) and final (square) relative scaling estimation errors for the 100 simulations. On average the scaling correction factor is improved by a factor of 54.
way the observability matrix can be calculated. For most trajectories and parameter configurations, the observability matrix will numerically have the same rank as the system matrix (which is the criterion for observability) even though the observability may be poor. Consequently, the ratio between the largest and smallest singular value will be used to indicate the degree of observability for different trajectories.
Here three fundamental test runs are examined: (1) straight-line driving, (2) turn-
ing on the spot, and (3) driving with a constant turning and heading speed. The
averaged condition numbers for the simulations are given in Table 4.1.
This shows that turns are good for both correction factors while straight-line driv-
ing practically only provides information to the correction factor on the encoder
gains. Some mixture of the two seems rational.
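The singular-value ratio can be computed as sketched below (using NumPy, which is my choice here; the thesis used MATLAB). The observability matrix is stacked from the frozen linearized pair (A, C):

```python
import numpy as np

def observability_condition(A, C):
    """Ratio sigma_max / sigma_min of the observability matrix
    O = [C; CA; CA^2; ...; CA^(n-1)] for an n-state linearized system,
    used as a degree-of-observability indicator (larger = worse)."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    O = np.vstack(blocks)
    s = np.linalg.svd(O, compute_uv=False)
    return s[0] / s[-1]
```

Evaluating this at linearization points along candidate trajectories gives a quick ranking like the one in Table 4.1.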
Table 4.1. Condition numbers for the observability matrix for different trajectories.
The chosen trajectory for the following experiments is shown in Figure 4.4 and
will be traveled in both directions. For real experiments this might not be a suitable
trajectory, as sensors may require a certain orientation, e.g. a camera should be pointed in the direction of a guide mark in order to obtain measurements from
images of the mark. Other trajectories will also work.
[Figure 4.4 shows a trajectory of roughly 1 m by 1 m.]
Figure 4.4. Trajectory for mobile robot for calibrating systematic errors. The forward velocity is kept under 0.2 m/s and the total time to complete the course in both directions is about 50 seconds. An absolute measurement is provided every second.
In Figure 4.5, a simulation with the filter for estimating the systematic errors is
shown. In order to verify convergence the absolute measurements are not corrupted
with noise, although the measurement covariance matrix in the filter is non-zero.
Due to these perfect, unrealistic measurements the convergence is fast and exact. Also, as the systematic odometry errors become exactly estimated, the posture estimation error is zero, even between measurements.
Figure 4.5. One realization of systematic odometry factors estimation. The measurements are noiseless. The relative odometry correction estimation errors ε_r(k) and ε_b(k) are shown.
On average the correction factors have improved by a factor of 75 and 81 for δ_r and δ_b, respectively.
Figure 4.6. Odometry correction factors estimation with 100 Monte Carlo estimations using a Simulink model and an up to 2 percent initial error on the physical parameters of the odometry model. The plot shows the average relative estimation errors S_r(k) and S_b(k).
In this section real world experiments are carried out on the department's mobile robot, described and illustrated in section 4.1.

The mobile robot's drive system consists of two DC motors mounted on the front wheels through a gearing. They are equipped with encoders with a resolution of 800 pulses/turn. The control and coordination of the robot are handled by an on-board MVME-162A main board with a MC68040 CPU running the OS-9 operating system. Two ways of acquiring absolute measurements exist:
1. On-board guide mark vision system. A standard commercial 512 × 512 pixel grey-level CCD camera is mounted on top of the robot with an on-board CPU for image processing. Based on images of artificial guide marks, absolute postures of the robot can be calculated.
2. External tracking vision system. The robot is equipped with two light diodes,
one on each side, which can be tracked with a surveillance camera placed
in the ceiling of the laboratory. The posture of the robot can be monitored
[Figure 4.7 plots the estimate of k_r (top, apparently around 21-22 μm/pulse) and of the wheel base b (bottom, around 530-550 mm) against time.]
Figure 4.7. Calibration with three different initializations. Note different time-scales for
top and bottom plot.
In Larsen et al. (1998) a method was proposed that in one step tries to estimate the wheel base and the two encoder gains. Real world experiments equivalent to those described above revealed some difficulties in estimating all three parameters at the same time, as the wheel base was almost 10% off after more than 200 vision measurements. By splitting the procedure into two steps we only need 50-100 measurements and the robustness of the convergence is improved significantly.
4.2.6 Conclusions
of the estimated values are of the same order as calibrations using existing manual
procedures.
The two filters are easy to implement and the method is well suited for automat-
ing the calibration task allowing for more autonomous vehicles. It is especially
applicable to vehicles with frequent load or wheel configuration changes.
[Figure: illustration of the path following scenario; the labels Path and Robot are the only recoverable elements.]
Sharp turns raise a problem. The vehicle velocities, heading and angular, must be constrained such that the turn is appropriately restrained and smooth. A large heading velocity together with a large angular velocity will jeopardize the stability and safety of the robot, or cause saturation in the motors, which in turn will cause overshoots and long settling times. The velocity constraints can either be self-imposed due to desired vehicle behavior and safety concerns, or physical due to actual limitations caused by, for instance, currents and voltages in the motors.
To avoid excessive overshooting and to have time to decelerate when turning, the
presented controller is based on a strategy that forecasts the intersection using a
receding horizon approach where the controller predicts the posture of the robot
and together with knowledge of an upcoming intersection compensates the con-
trol signals. Predictive path planning was discussed in Normey-Rico et al. (1999);
Ferruz and Ollero (1998) where smooth paths were considered.
The general path following problem is characterized by the forward velocity not
being part of the control problem opposed to the path tracking problem where
typically a virtual reference cart is tracked (de Wit et al., 1996; Koh and Cho, 1999;
Samson and Ait-Abderrahim, 1991) and both the forward and angular velocity are
controlled. Hence, path following has an extra degree of freedom (but only controls
two degrees) which allows handling of constraints by scaling the forward velocity.
This has been exploited in Bemporad et al. (1997) for a wall-following mobile
robot. In Koh and Cho (1999) the constrained path tracking problem was discussed.
At first, the problem of following a straight line without turns is considered. A lin-
ear controller is presented that handles constraints by means of a simple velocity
scaling. Next, a nonlinear receding horizon approach to the general path following problem is considered. The resulting controller cannot be solved explicitly and has to rely on an on-line minimization of a criterion function. This is time-consuming, even without constraints. As a consequence, a simplified, faster, linear but approximate approach is presented. By using the velocity scaling, the constraints are handled in a simple way. The section concludes with experimental results.
A more condensed version of the research in this section is to appear in Bak et al.
(2001).
Given a path P in the xy-plane and the mobile robot's forward velocity v(t), the path following problem consists in finding a feedback control law ω(t) such that the distance to the path and the orientation error tend to zero. Thus the traveled distance along the path is not itself of interest to this problem.
4.3 / Receding Horizon Approach to Path Following 119
The path following problem is illustrated in Figure 4.9 where P is the orthogonal projection of the robot point R onto the path. The signed distance between P and R is denoted d. An intersection is placed at C and the signed distance from C to P along the path is denoted s. The orientation error is defined as θ̃ = θ − θ_r, where θ_r is the orientation reference. The two path sections have orientations θ_1 and θ_2, respectively, with |θ_2 − θ_1| < π assumed. Also shown are two bisection lines defined by the angles α = (θ_2 − θ_1)/2 and α + π/2. These lines will later be used to determine at what point the reference should change.
[Figure sketch: the bisection lines satisfy d = s tan(α) and d = s tan(α + π/2); the intersection C at (x_c, y_c), robot point R, distances s and d, projection P, orientation θ_1, the castor wheel, and the x and y axes are marked.]
Figure 4.9. The path following problem with a path consisting of two straight intersected
lines.
4.3.2 Constraints
Constraints exist at different levels of a control system for a mobile robot. At the
motor level, voltages and currents are magnitude limited, and at trajectory level,
the same goes for velocities and accelerations. Since the path following algorithm
is a velocity trajectory generator that generates references to the underlying motor
controllers, only constraints on the velocities of the robot are considered. This is
partly justified by the fact that, typically, the motors of a mobile robot are capable of delivering larger velocities than desirable during normal operation. Hence, velocity constraints are often imposed, and magnitude saturation in the actual actuators (the motors) is generally not of concern, except when fast high-performance trajectory generators are designed. Furthermore, only magnitude saturations are considered.
Let u = (v, ω)^T and u_w = (v_r, v_l)^T, where v_r and v_l denote the right and left wheel velocities, respectively. Discarding wheel slippage and uneven floors, the heading and angular velocities relate to the left and right wheel velocities in the following way

$$ u = \begin{bmatrix} v \\ \omega \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{b} & -\frac{1}{b} \end{bmatrix} \begin{bmatrix} v_r \\ v_l \end{bmatrix} = F_w u_w, \qquad (4.28) $$

where b is the length of the wheel base of the robot. Let v^w_{max} and v^w_{min} be the maximum and minimum allowable velocities for the left and right wheels (we assume equal constraints on the two wheels), i.e.

$$ v^w_{min} \le v_r \le v^w_{max}, \qquad v^w_{min} \le v_l \le v^w_{max}. \qquad (4.29) $$
Figure 4.10 illustrates how the constraints relate. It is assumed that zero belongs to the set of valid velocities. In combined compact form the velocity constraints are

$$ \begin{bmatrix} F_w^{-1} \\ -F_w^{-1} \\ \begin{bmatrix} 0 & 1 \end{bmatrix} \\ \begin{bmatrix} 0 & -1 \end{bmatrix} \\ \begin{bmatrix} 1 & 0 \end{bmatrix} \\ \begin{bmatrix} -1 & 0 \end{bmatrix} \end{bmatrix} u \le \begin{bmatrix} \mathbf{1}_2\, v^w_{max} \\ -\mathbf{1}_2\, v^w_{min} \\ \omega_{max} \\ -\omega_{min} \\ v_{max} \\ -v_{min} \end{bmatrix}, \qquad (4.31) $$

or
[Figure 4.10: the set of admissible (ω, v) pairs defined by the wheel, angular, and forward velocity limits.]
$$ P u \le q. \qquad (4.32) $$
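The stacked constraint matrices in (4.28)–(4.32) can be built and checked numerically. The sketch below assumes symmetric wheel limits and illustrative numbers for all bounds; only the structure of P and q follows the text.

```python
# Sketch of the stacked velocity constraints P u <= q from (4.28)-(4.32).
# Wheel base and all bounds are illustrative assumptions.
import numpy as np

b = 0.535                                       # wheel base [m]
Fw = np.array([[0.5, 0.5], [1.0 / b, -1.0 / b]])  # (v, w) = Fw (vr, vl)
Fw_inv = np.linalg.inv(Fw)                      # (vr, vl) = Fw_inv (v, w)

vw_max, vw_min = 0.3, -0.3                      # wheel speed bounds [m/s]
w_max, w_min = 1.0, -1.0                        # angular velocity bounds [rad/s]
v_max, v_min = 0.25, -0.25                      # forward velocity bounds [m/s]

ones2 = np.ones(2)
P = np.vstack([Fw_inv, -Fw_inv,
               [[0, 1]], [[0, -1]], [[1, 0]], [[-1, 0]]])   # 8 x 2
q = np.hstack([vw_max * ones2, -vw_min * ones2,
               w_max, -w_min, v_max, -v_min])               # 8-vector

u = np.array([0.2, 0.3])                        # candidate (v, w)
print(np.all(P @ u <= q + 1e-12))
```

Note that Fw_inv works out to vr = v + (b/2)ω and vl = v − (b/2)ω, so the first four rows are exactly the wheel limits of (4.29).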
In this section the path is assumed perfectly straight and infinitely long, that is,
the turns are not under consideration. We consider a velocity scaling approach to
a standard linear controller such that the constraints are satisfied. In Section 4.3.5
the method will be applied to a receding horizon controller.
The velocity scaling was implicitly introduced in Dickmanns and Zapp (1987) and
further explored in the control context in Sampei et al. (1991) and Samson (1992).
Linear Controller
When the path is straight, a nonlinear parameterization of the path following problem is

$$ \dot d = v \sin(\tilde\theta) $$
$$ \dot{\tilde\theta} = \omega, \qquad (4.34) $$
where d and θ̃ are the distance and orientation errors, respectively. In the neighborhood of the origin (d = 0, θ̃ = 0), a linearization of (4.34) gives

$$ \dot d = v \tilde\theta $$
$$ \dot{\tilde\theta} = \omega. \qquad (4.35) $$
Assuming that v is different from zero (but not necessarily constant), this system
(4.35) is controllable and stabilizable when using a linear state feedback controller
of the form

$$ \omega = -l_1 v d - l_2 |v| \tilde\theta, \qquad (4.36) $$
with l1 > 0 and l2 > 0. For a constant v , this controller reverts to a classical linear
time-invariant state feedback. The velocity v is included in the controller gains
such that the closed-loop xy -trajectory response is independent of the velocity of
the vehicle. As will be demonstrated, the gains l_1 and l_2 are chosen with respect to the distance response instead of the time response that would correspond to the time-domain equations. For a given v, consider the closed-loop equation for the output d:

$$ \ddot d + l_2 |v| \dot d + l_1 v^2 d = 0, \qquad (4.37) $$
where we identify the undamped natural frequency ω_n and the damping ratio ζ as

$$ \omega_n = |v| \sqrt{l_1}, \qquad \zeta = \frac{l_2}{2\sqrt{l_1}}. \qquad (4.38) $$
For a second-order linear system, the transient peak time (time from reference change to the maximum value) t_peak is a function of the natural frequency ω_n and the damping ratio ζ:

$$ t_{peak} = \frac{1}{\omega_n} \exp\!\left( \frac{\zeta \cos^{-1}(\zeta)}{\sqrt{1-\zeta^2}} \right), \qquad 0 < \zeta \le 1. \qquad (4.39) $$
Define the peak distance as d_peak = |v| t_peak. Thus, (4.38) and (4.39) suggest selecting the gains l_1 and l_2 as

$$ l_1 = \left( \frac{\exp\!\left( \zeta \cos^{-1}(\zeta) / \sqrt{1-\zeta^2} \right)}{d_{peak}} \right)^2 \qquad (4.40) $$
$$ l_2 = 2 \zeta \sqrt{l_1}. $$
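A minimal numerical sketch of this gain selection, using illustrative values for ζ and d_peak, and checking that (4.38) then recovers the chosen damping ratio:

```python
# Sketch of the gain selection (4.38)-(4.40); zeta and d_peak are illustrative.
import math

def path_gains(zeta, d_peak):
    c = math.exp(zeta * math.acos(zeta) / math.sqrt(1.0 - zeta * zeta))
    l1 = (c / d_peak) ** 2          # from (4.40)
    l2 = 2.0 * zeta * math.sqrt(l1)
    return l1, l2

l1, l2 = path_gains(zeta=0.9, d_peak=0.5)

# Consistency with (4.38): zeta = l2 / (2 sqrt(l1)), wn = |v| sqrt(l1).
v = 0.2
wn = abs(v) * math.sqrt(l1)
print(round(l2 / (2.0 * math.sqrt(l1)), 6))   # recovers zeta = 0.9
```

The point of the construction is that d_peak, not a time constant, is the tuning knob, so the xy-response is independent of the vehicle speed.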
Nonlinear Extension
de Wit et al. (1996) suggest the following extension to the linear controller (4.36) that globally stabilizes the nonlinear model (4.34):

$$ \omega = \begin{cases} -l_1 v d, & \tilde\theta = 0 \\[4pt] -l_1 v \dfrac{\sin(2\tilde\theta)}{2\tilde\theta}\, d - l_2 |v| \tilde\theta, & \text{otherwise.} \end{cases} \qquad (4.41) $$
Note that the linear controller (4.36) and the nonlinear controller (4.41) behave similarly around (d = 0, θ̃ = 0).
Velocity Scaling
The controller (4.36) only determines the angular velocity ω while the forward velocity v is left to the operator to specify. This extra degree of freedom, and the fact that for the controller (4.36) ω → 0 for v → 0, allow us to handle the velocity constraints by scaling the forward velocity such that v = α v_des, where v_des is the desired velocity of the vehicle and α ∈ [0, 1] is a scaling factor. This way the constrained xy-trajectory will remain the same as the unconstrained one; only the traverse will be slower.
For a given distance error d and orientation error θ̃, the scaled control law (4.36) has the form

$$ u = \begin{bmatrix} v \\ \omega \end{bmatrix} = \begin{bmatrix} 1 \\ k(d, \tilde\theta) \end{bmatrix} \alpha\, v_{des}, $$

with k(d, θ̃) = −l_1 d − l_2 sign(v) θ̃. We need to determine the scaling factor α such that the following inequality is satisfied,

$$ P \begin{bmatrix} 1 \\ k(d, \tilde\theta) \end{bmatrix} v_{des}\, \alpha \equiv P' \alpha \le q. \qquad (4.44) $$

Since P' is a vector and α ∈ [0, 1], the inequality is satisfied by setting

$$ \alpha = \min\left\{ 1,\ \frac{(q)_i}{(P')_i} \;\middle|\; i \in \{\, j \mid (P')_j > 0,\ j = 1, \dots, 8 \,\} \right\}. \qquad (4.45) $$
The selection of α can be interpreted as a time-varying velocity scaling, and by determining it on-line it is guaranteed that the constraints on the velocities are not violated.
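The scaling rule of (4.45) is a one-liner in practice. The sketch below uses an illustrative stacked vector P' and bound vector q; the logic (ignore non-positive rows, clip at 1) follows the text.

```python
# Sketch of the scaling rule (4.45): shrink the desired forward velocity so
# that P' * alpha <= q holds componentwise. Numbers are illustrative.
import numpy as np

def scaling_factor(P_prime, q):
    pos = P_prime > 0                       # only positive rows can bind
    if not np.any(pos):
        return 1.0
    return min(1.0, float(np.min(q[pos] / P_prime[pos])))

P_prime = np.array([0.30, -0.10, 0.45, 0.05])   # stand-in for P [1, k]^T v_des
q = np.array([0.30, 0.30, 0.30, 0.30])
alpha = scaling_factor(P_prime, q)
print(alpha)   # binding row is 0.45: alpha = 0.30/0.45 = 2/3
```

Rows with non-positive entries of P' are satisfied for any α ∈ [0, 1] (zero velocity is assumed feasible), which is why they are excluded from the minimum.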
It seems natural to anticipate the corner and to embark on the turn before reaching the actual turning point. This will smooth the turn. The anticipation of the corner suggests a receding horizon approach where the control signals are based on predictions of the robot's posture, while the decrease in velocity suggests the use of scaling.
Figure 4.11 shows a local coordinate system (x, y) attached to the considered intersection. With respect to this coordinate system, the following model (based on the simplified odometry model in (4.2)) describes the motion of the vehicle. In the neighborhood of the intersection, two different error sets are needed depending on where the vehicle is. The first error set is used along the incoming path section while the second error set is used when the vehicle travels along the outgoing section. The switch from one error set to another is defined in Figure 4.11 by the two bisection lines, and markers in the figure indicate in which area each error set is used. The error sets are defined as follows:
$$ \begin{bmatrix} \hat d(\hat x, \hat y) \\ \hat{\tilde\theta}(\hat x, \hat y) \end{bmatrix} = \begin{cases} \begin{bmatrix} \hat y \\ \hat\theta \end{bmatrix}, & \hat y < \hat x \tan(\alpha) \ \text{or}\ \hat y > \hat x \tan\!\left(\alpha + \frac{\pi}{2}\right) \\[6pt] \begin{bmatrix} \hat x \\ \hat\theta - \frac{\pi}{2} \end{bmatrix}, & \text{otherwise.} \end{cases} \qquad (4.48) $$
The prediction of the posture is easily calculated by extrapolating the nonlinear
model in (4.46). For example
Since the predictions are nonlinear and the errors are position-dependent, no explicit minimization of the criterion (4.47) exists, and the controller has to rely on on-line minimization. This nonlinear approach is not of practical interest because the mandatory minimization at run-time, depending on the size of the prediction horizon, is so time-consuming that a real-time implementation is out of the question.
The nonlinear solution to the intersection problem presented in Section 4.3.4 has some run-time problems due to the on-line minimization of the criterion (4.47). The following attempts to solve the same problem by means of a
simple and fast linear predictive strategy. The presentation uses the definitions in
Figure 4.9.
The following approach is approximate and based on the linearized model (4.35)

$$ \dot d = v \tilde\theta $$
$$ \dot{\tilde\theta} = \omega, \qquad (4.52) $$

which was used for the linear state feedback controller in Section 4.3.3. A discretized version of (4.52) with sampling period T is found by integration:

$$ d(k+1) = d(k) + T v(k) \left( \theta(k) - \theta_r(k) + \frac{T}{2} \omega(k) \right) \qquad (4.53) $$
$$ \theta(k+1) = \theta(k) + T \omega(k), $$
where we have assumed θ̇_r = 0.
Since we eventually want to apply the velocity scaling to the receding horizon approach, we introduce a new control signal φ defined by vφ = ω, so that for a constant φ we have ω → 0 for v → 0.
Define the state vector z(k) = (d(k), θ(k))^T and the reference vector r(k) = (0, θ_r(k))^T, and rewrite (4.53) to

$$ z(k+1) = \begin{bmatrix} 1 & Tv \\ 0 & 1 \end{bmatrix} z(k) + \begin{bmatrix} \frac{T^2 v^2}{2} \\ Tv \end{bmatrix} \varphi(k) + \begin{bmatrix} 0 & -Tv \\ 0 & 0 \end{bmatrix} r(k), \qquad (4.54) $$
or
Figure 4.12. Straight path following with (solid) and without (dashed) nonlinear scheduling
for an initial distance error of 1m.
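The equivalence of the scalar recursion (4.53) and its state-space form (4.54) can be verified directly. The numbers below (sampling period, speed, state, control) are illustrative:

```python
# Sketch checking that the state-space form (4.54) reproduces the scalar
# discretized model (4.53). All numerical values are illustrative.
import numpy as np

T, v = 0.04, 0.2
A = np.array([[1.0, T * v], [0.0, 1.0]])
B = np.array([T * T * v * v / 2.0, T * v])
E = np.array([[0.0, -T * v], [0.0, 0.0]])

z = np.array([0.1, 0.05])       # (d, theta)
r = np.array([0.0, 0.02])       # (0, theta_r)
phi = -0.5                      # new control signal, omega = v * phi

z_next = A @ z + B * phi + E @ r

# Scalar form (4.53) with omega = v*phi:
omega = v * phi
d_next = z[0] + T * v * (z[1] - r[1] + T / 2.0 * omega)
th_next = z[1] + T * omega
print(np.allclose(z_next, [d_next, th_next]))
```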
The scaling approach from Section 4.3.3 can be applied straightforwardly. The scaling vector α = (α(0), …, α(N_2))^T is selected such that

$$ P \begin{bmatrix} 1 \\ \varphi(k+n) \end{bmatrix} v_{des}\, \alpha(n) \equiv P'(n)\, \alpha(n) \le q, \qquad n = 0, \dots, N_2, \qquad (4.63) $$

which is satisfied by using (4.45).
Reference Estimation
where θ_1 and θ_2 are given by the orientation and direction of the corner. If, for example, the path is oriented along the x-axis with a left turn along the y-axis, then θ_1 = 0 and θ_2 = π/2. Since the velocity changes due to the velocity scaling, the arrival at the corner, and thus the sampling instance where the reference should change, must be based on an estimate of the robot's posture. Based on the odometry model (4.2) we have
$$ s(k+1) = s(k) + T v(k) \cos\!\left( \theta(k) - \theta_r(k) + \frac{T \omega(k)}{2} \right) $$
$$ d(k+1) = d(k) + T v(k) \sin\!\left( \theta(k) - \theta_r(k) + \frac{T \omega(k)}{2} \right) \qquad (4.67) $$
$$ \theta(k+1) = \theta(k) + T \omega(k). $$
This model's n-step predictor is easily found by iterating the equations (4.67) as in (4.51):

$$ \hat\theta(k+n|k) = \theta(k) + T \sum_{i=k}^{k+n-1} \omega(i) $$
$$ \hat d(k+n|k) = d(k) + T \sum_{i=k}^{k+n-1} v(i) \sin\!\left( \theta(k) - \theta_r(k) + \frac{T}{2}\omega(i) + T \sum_{j=k}^{i-1} \omega(j) \right) \qquad (4.68) $$
$$ \hat s(k+n|k) = s(k) + T \sum_{i=k}^{k+n-1} v(i) \cos\!\left( \theta(k) - \theta_r(k) + \frac{T}{2}\omega(i) + T \sum_{j=k}^{i-1} \omega(j) \right). $$
From (4.65) and (4.68) we can estimate n̂(k). For θ_2 − θ_1 > 0 we get

$$ \hat n(k) = \min\Big\{ n \;\Big|\; \hat d(k+n|k) \ge \tan\!\left(\alpha + \tfrac{\pi}{2}\right) \hat s(k+n|k) \;\vee\; \hat d(k+n|k) \le \tan(\alpha)\, \hat s(k+n|k) \Big\}. \qquad (4.69) $$
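The predictor of (4.68) is just the one-step model (4.67) iterated over a planned input sequence. The sketch below does this and detects the first prediction step at which a simple switching condition fires; the planned inputs and the condition (here: crossing s = 0 while driving straight) are illustrative stand-ins for the bisection-line test:

```python
# Sketch of the n-step predictor built by iterating (4.67), with an
# illustrative corner-detection condition in the spirit of (4.69).
import math

T = 0.04

def predict(s, d, th, th_r, v_seq, w_seq):
    """Iterate the one-step model (4.67) over planned velocity sequences."""
    for v, w in zip(v_seq, w_seq):
        s += T * v * math.cos(th - th_r + T * w / 2.0)
        d += T * v * math.sin(th - th_r + T * w / 2.0)
        th += T * w
    return s, d, th

N = 100
v_seq, w_seq = [0.2] * N, [0.0] * N   # planned inputs: drive straight
s0 = -0.5                             # 0.5 m before the corner at s = 0
n_hat = min(n for n in range(1, N + 1)
            if predict(s0, 0.0, 0.0, 0.0, v_seq[:n], w_seq[:n])[0] >= 0.0)
print(n_hat)
```

At 0.2 m/s and T = 0.04 s the robot advances 8 mm per sample, so the 0.5 m gap is predicted to close at step 63 of the horizon, consistent with the 0.8 m look-ahead quoted later for N = 100.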
The Algorithm
This concludes the linear predictive receding horizon controller defined by the control law (4.62), the scaling (4.63), the estimate n̂ in (4.69), and the reference vector (4.66). Algorithm 4.3.1 illustrates a sampled controller using this approach. Clearly, the number of calculations per sample is reduced significantly compared to the nonlinear approach, since the controller can be solved explicitly.
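The shape of such a sampled loop can be sketched as follows. The gains, bounds, and the reference-switch stub are all illustrative placeholders; only the structure (predictive reference switch, explicit law, velocity scaling) mirrors Algorithm 4.3.1:

```python
# Illustrative skeleton of the sampled receding horizon loop; the helper
# functions are placeholders for the explicit law (4.62), the scaling
# (4.63)/(4.45), and the switch estimate (4.69). Gains are made up.
def estimate_reference_switch(state):
    # Placeholder for the predictive estimate n-hat of (4.69): here the
    # reference simply stays at the incoming section's orientation.
    return 0.0

def control_step(state, corner_known, v_des=0.2, w_bound=0.05):
    """One sample: reference update, explicit control law, velocity scaling."""
    d, th, th_r = state
    if corner_known:
        th_r = estimate_reference_switch(state)
    phi = -2.0 * d - 1.5 * (th - th_r)      # stand-in explicit RH law
    # stand-in scaling: keep omega = v*phi within an assumed bound
    alpha = min(1.0, w_bound / max(abs(phi) * v_des, 1e-9))
    v = alpha * v_des
    return v, v * phi                        # (v, omega)

v, w = control_step((0.1, 0.05, 0.0), corner_known=False)
print(v, w)
```

Everything inside `control_step` is closed-form, which is exactly why this linear variant is real-time feasible where the nonlinear minimization is not.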
This section verifies the usefulness of the linear receding horizon approach by means of a simulation study. Unless stated otherwise, the parameters are chosen as T = 0.04 s, N = 100, a control weighting of 0.0001, and an error weight matrix Q = diag(1, κ) with κ = 0.02. The desired velocity is 0.2 m/s, and the prediction horizon is equivalent to detecting a corner 0.8 m before reaching it, assuming the desired speed. The velocity constraints on the robot are chosen as
Straight Path
Consider the task of following a wall with θ_1 = 0 and initial starting point (0, 1 m, 0)^T. Figure 4.13 shows the trajectory and the scaled controller outputs along with the constraints. The resulting xy-trajectories for the unconstrained and constrained closed-loop systems are equal; only the time responses differ. For the unconstrained controller, x = 1 is reached after 7.6 seconds, while for the constrained controller the time is 9.56 seconds due to the scaling.
[Figure 4.13: the xy-trajectory (left) and the scaled outputs v [m/s] and ω [rad/s] with their constraints (right).]
Intersection
At first, a 90° turn is considered. Figure 4.14 shows the xy-trajectories for a number of different initial starting positions and indicates a good robustness to initial conditions. The trajectory with x_0 = (−0.5, 0.5, 0)^T breaks off after a short time due to an early change in the orientation reference, caused by the predicted position of the vehicle being closer to the second path section than to the first. The trajectory is therefore in full compliance with the intended behavior.
To illustrate the scaling of the velocities and the fulfillment of the constraints, Figure 4.15 displays time histories of the scaled velocities (both left/right and forward/angular) for the initial position x_0 = (−1, 0, 0)^T.
It is seen how the forward velocity gives way to an increase in the angular velocity, which secures a safe, smooth turn while fulfilling the imposed constraints. The on-line estimation of n̂ induces a small fluctuation in the signals. This, however, could be reduced by low-pass filtering the estimate.
Now, consider different turns with θ_1 = 0 and θ_2 = 30°, 60°, 90°, 120°, and 150°, respectively. Figure 4.16 shows the xy-trajectories for the different turns, where the exact same parameter setting has been used for all the turns. This demonstrates
[Figure 4.14: xy-trajectories for a 90° turn from different initial positions.]
Figure 4.15. Scaled velocities. Top plot: forward (solid) and angular (dashed) velocity.
Bottom plot: wheel velocities v l (solid) and vr (dashed). Constraints are shown with dotted
lines.
[Figure 4.16: xy-trajectories for turns of 30°–150°.]
a good robustness. Even for very sharp turns (θ_2 ≥ 120°) there is hardly any overshoot.
Parameter Robustness
Next, we consider parameter robustness and tuning capabilities. Figure 4.17 shows a 90° turn with different values of the two weighting parameters (small, medium, and large values). Variations of the two parameters have similar effects on the closed-loop xy-trajectory. In particular, for large values of the one or small values of the other, the response tends to overlap the path sections at all times. This is possible in spite of the velocity constraints because the forward velocity is allowed to tend to zero at that point. The drawback is that the deceleration of the vehicle becomes very abrupt.
The mobile robot test bed described in Sections 4.1 and 4.2.5 is used in the following to experimentally verify the linear receding horizon approach.
Figure 4.17. Different values of the two weighting parameters: small (solid), medium (dashed), and large (dash-dotted) values.
Figure 4.18. Results from a path following experiment with the test vehicle. Reference path
(dotted), measured path (solid).
Figure 4.19. The actual measured velocities (solid) compared to the desired velocity refer-
ences (dashed).
- The motor controllers are badly tuned or system parameters are incorrectly determined.
- Friction (stiction and Coulomb) introduces a delay in the tracking due to the required integral action in the motor controllers.
No significant effort has been put into trying to reduce the velocity tracking prob-
lem because, despite the overshooting, the usability of the receding horizon ap-
proach is justified.
Figure 4.19 also shows how the forward velocity v is reduced (scaled) when large
angular velocities ! are required.
4.3.8 Conclusions
This section has presented a receding horizon controller for the path following problem for a mobile robot. The key results are the following.
The presented linear algorithm is simple and fast and is easily implemented in a robot control system.
The simulation study indicates good robustness to the degree of turn and the
initial starting position.
Experimental results have shown real-time usability of the approach but also
indicated, for this specific implementation, problems regarding the velocity
tracking capabilities.
4.4 / Posture Stabilization with Noise on Measurements 139
The kinematic model of the unicycle robot in (4.1) can be changed into a simpler
form by the following local change of coordinates and control inputs
$$ \dot x_1 = u_1 $$
$$ \dot x_2 = x_3 u_1 \qquad (4.70) $$
$$ \dot x_3 = u_2. $$
This system belongs to the class of chained systems or driftless systems. Note that the model is only valid for θ ∈ ]−π/2, π/2[. The original control signals are easily reconstructed in the following way:

$$ v = \frac{u_1}{\cos\theta} \qquad (4.71) $$
$$ \omega = \cos^2(\theta)\, u_2. $$
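The transformation can be exercised numerically. The sketch below assumes the standard chained-form change of variables for the unicycle, (x_1, x_2, x_3) = (x, y, tan θ) with u_1 = v cos θ and u_2 = ω/cos²θ, and checks that (4.71) recovers the original controls:

```python
# Sketch of the chained-form transformation behind (4.70)-(4.71),
# assuming the standard change of variables (x1, x2, x3) = (x, y, tan(theta)).
import math

def to_chained(x, y, theta, v, omega):
    assert -math.pi / 2 < theta < math.pi / 2       # validity domain
    states = (x, y, math.tan(theta))
    inputs = (v * math.cos(theta), omega / math.cos(theta) ** 2)
    return states, inputs

def from_chained(theta, u1, u2):
    """Recover the original controls as in (4.71)."""
    return u1 / math.cos(theta), math.cos(theta) ** 2 * u2

theta, v, omega = 0.3, 0.25, 0.1
_, (u1, u2) = to_chained(0.0, 0.0, theta, v, omega)
v2, w2 = from_chained(theta, u1, u2)
print(abs(v2 - v) < 1e-12 and abs(w2 - omega) < 1e-12)
```

With these variables, ẋ₁ = v cos θ = u₁, ẋ₂ = v sin θ = x₃u₁, and ẋ₃ = ω/cos²θ = u₂, which is exactly (4.70); the singularity at θ = ±π/2 is why the validity domain is restricted.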
The controllability of the system is guaranteed by the full rank of the Control Lie Algebra³, but since the linearization of (4.70) is not stabilizable, it is necessary to rely on nonlinear techniques for stabilizing control design. Brockett's condition (Brockett, 1983) furthermore implies that no pure-state feedback law u(x) can asymptotically stabilize the system.
The Algorithm
Morin and Samson (1999) suggest an algorithm that achieves exponential stabilization of (4.70) along with robustness with respect to imperfect knowledge of the system's control vector fields. The control strategy consists in applying periodically updated open-loop controls that are continuous with respect to the initial conditions of the state. Let the state z = (x_1, x_2, x_3)^T be observed by a measurement y(k) = (y_1, y_2, y_3)^T = z(k) at time t = kT, where k ∈ Z and T is a constant sampling time. On the time interval [kT, (k+1)T[ the control signals are then defined by:
$$ u_1(y, t) = \frac{1}{T} \left[ -y_1(k) + 2a\, |y_2(k)|^{\frac{1}{2}} \sin(\omega t) \right] $$
$$ u_2(y, t) = \frac{1}{T} \left[ -y_3(k) + 2(k_2 - 1) \frac{y_2(k)}{a\, |y_2(k)|^{\frac{1}{2}}} \cos(\omega t) \right], \qquad (4.72) $$

with the constraints

$$ T = 2\pi/\omega \quad (\omega \ne 0) $$
$$ |k_2| < 1 \qquad (4.73) $$
$$ a > 0. $$
³The dimension of the vector space spanned at zero by all the Lie brackets of the vector fields f_i must equal the system order n (Isidori, 1995).
4.4 / Posture Stabilization with Noise on Measurements 141
The control parameters are thus a and k_2, and to some degree T (or ω), where k_2 governs the convergence rate of x_2 while a governs the size of the oscillations in the x_1 and x_3 directions. Figure 4.20 illustrates the mode of operation of the controller with k_2 = 0.1, a = 0.25, T = 10 seconds, and initial posture z_0 = (−0.1 m, 0.5 m, 10°)^T.
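The periodic open-loop strategy can be reproduced in a small simulation. The trigonometric arguments and coefficient normalization in the control law below are reconstructed assumptions (the extraction lost the Greek symbols), so the run only illustrates the qualitative behavior: x_1 and x_3 return to zero at the sampling instants while |x_2| contracts from period to period.

```python
# Simulation sketch of the periodically updated open-loop controls (4.72)
# on the chained system (4.70). Coefficients are reconstructed assumptions;
# parameters match the Figure 4.20 example (k2 = 0.1, a = 0.25, T = 10 s).
import math

a, k2, T = 0.25, 0.1, 10.0
w_f = 2.0 * math.pi / T                  # frequency of the periodic terms
x1, x2, x3 = -0.1, 0.5, math.radians(10.0)

dt = T / 2000.0
for period in range(5):
    y1, y2, y3 = x1, x2, x3              # perfect measurements at t = kT
    root = math.sqrt(abs(y2)) + 1e-12    # guard against y2 = 0
    for i in range(2000):                # Euler integration over one period
        t = i * dt
        u1 = (-y1 + 2.0 * a * root * math.sin(w_f * t)) / T
        u2 = (-y3 + 2.0 * (k2 - 1.0) * y2 / (a * root) * math.cos(w_f * t)) / T
        x1 += dt * u1                    # chained dynamics (4.70)
        x2 += dt * x3 * u1
        x3 += dt * u2
print(abs(x1) < 1e-6, abs(x3) < 1e-6, abs(x2) < 0.4)
```

The constant parts of u₁ and u₂ integrate to −y₁ and −y₃ over one period while the sinusoids average out, which is why x₁ and x₃ are driven to zero at each sampling instant regardless of the oscillation amplitudes.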
[Figure 4.20: the control inputs u_1 and u_2 over time [secs] (left) and the resulting robot trajectory in the xy-plane (right).]
Stability
$$ x_1(k+1) = x_1(k) - y_1(k) $$
$$ x_2(k+1) = x_2(k) + \tfrac{1}{2} y_1(k) y_3(k) + a\, y_3(k) |y_2(k)|^{\frac{1}{2}} + (k_2 - 1) y_2(k) - x_3(k) y_1(k) \qquad (4.75) $$
$$ x_3(k+1) = x_3(k) - y_3(k). $$
Assuming perfect measurements (y(k) = z(k)) we get

$$ x_1(k+1) = 0 \qquad (4.76) $$
$$ x_2(k+1) = k_2 x_2(k) + x_3(k) \left( a |x_2(k)|^{\frac{1}{2}} - \tfrac{1}{2} x_1(k) \right) = k_2^{k+1} x_{20} + k_2^{k} x_{30} \left( a |x_{20}|^{\frac{1}{2}} - \tfrac{1}{2} x_{10} \right) \qquad (4.77) $$
$$ x_3(k+1) = 0, \qquad (4.78) $$

with initial conditions z(0) = (x_{10}, x_{20}, x_{30})^T. For |k_2| < 1, as required, x_2(k) → 0 for k → ∞ and hence stability is obtained.
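A quick numerical check (with illustrative initial conditions and k_2) that the one-step recursion implied by (4.76)–(4.78) matches the closed-form expression in (4.77):

```python
# Check: after the first step x1 = x3 = 0, so x2 decays geometrically;
# the sequential recursion must match the closed form of (4.77).
a, k2 = 0.25, 0.6
x10, x20, x30 = -0.1, 0.5, 0.2          # illustrative initial conditions

x2 = k2 * x20 + x30 * (a * abs(x20) ** 0.5 - 0.5 * x10)   # first step
for _ in range(9):
    x2 = k2 * x2                                          # x1 = x3 = 0 now
closed = k2 ** 10 * x20 + k2 ** 9 * x30 * (a * abs(x20) ** 0.5 - 0.5 * x10)
print(abs(x2 - closed) < 1e-12)
```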
[Figure: the three states versus time [secs].]
Assume now that the system has reached the equilibrium x1 = x3 = 0 and that the y2 measurement is corrupted by a noise term e2, i.e. y2(k) = x2(k) + e2(k), while x1 and x3 are measured without noise. The closed-loop solution on the interval then reduces to:

    x1(t) = a |y2(k)|^(1/2) ( 1 - cos(ωt) )
    x2(t) = x2(k) + ((k2 - 1)/(2π)) y2(k) ( ω(t - kT) - cos(ωt) sin(ωt) )   (4.79)
    x3(t) = ((k2 - 1)/(πa)) ( y2(k) / |y2(k)|^(1/2) ) sin(ωt),
and for t = (k+1)T we get:

    x1(k+1) = 0                                                         (4.80)
    x2(k+1) = k2 x2(k) + (k2 - 1) e2(k)
            = k2^(k+1) x20 + (k2 - 1) Σ_{i=0}^{k} k2^(k-i) e2(i)        (4.81)
    x3(k+1) = 0.                                                        (4.82)
The peak value for x1 clearly occurs at t = (k + 1/2)T due to the cosine term, while x3 peaks at either t = (k + 1/4)T or t = (k + 3/4)T due to the sine term. Finally, x2 takes its maximum at t = kT or t = (k+1)T due to the non-decreasing term ω(t - kT) - cos(ωt) sin(ωt). Hence, from (4.79) the relevant equations are:

    x1(k + 1/2) = 2a |x2(k) + e2(k)|^(1/2)
    x2(k + 1)   = k2 x2(k) + (k2 - 1) e2(k)                             (4.83)
    x3(k + 1/4) = -x3(k + 3/4) = (k2 - 1)(x2(k) + e2(k)) / (πa |x2(k) + e2(k)|^(1/2)).
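From (4.83), worst-case limit-cycle amplitudes under bounded noise |e2(k)| ≤ e follow directly: the x2 recursion gives sup|x2| = (1 - k2) e / (1 - |k2|), and inserting sup|y2| = sup|x2| + e into the x1 peak reproduces the supremum quoted later in this section for a = 0.25, k2 = 0.1, and a 1 mm error on x2. A small sketch of the arithmetic:

```python
import math

# Worst-case limit-cycle amplitudes from (4.83) under bounded noise |e2(k)| <= eps.
# sup|x2| = (1 - k2)*eps/(1 - |k2|) follows from x2(k+1) = k2*x2(k) + (k2 - 1)*e2(k).
a, k2, eps = 0.25, 0.1, 0.001          # controller settings and a 1 mm error on x2

sup_x2 = (1.0 - k2) * eps / (1.0 - abs(k2))
sup_y2 = sup_x2 + eps                   # y2 = x2 + e2
sup_x1 = 2.0 * a * math.sqrt(sup_y2)    # peak of x1 at t = (k + 1/2)T

print(round(sup_x1 * 1000, 1))          # peak of x1 in mm -> 22.4
```

A 1 mm bounded measurement error thus already produces a limit cycle of roughly 22 mm in the x1 direction, which motivates the filtering considered in Section 4.4.3.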
Table 4.3 lists the suprema, mean values, and variances for the states in (4.83) given the two kinds of noise disturbance. Details on the calculations can be found in Bak (2000), where the case of noise on all states is also examined, though only approximate expressions are established.
144 Chapter 4 / Case Study: a Mobile Robot
                   Uniform                          Gaussian
    State          Supremum                         Expectation                                      Variance^a
    x1(k+1/2)      2a (sup{y2})^(1/2)               2a 2^(1/4) Γ(3/4) π^(-1/2) (2r²/(1+k2))^(1/4)    4a² (2/π)^(1/2) (2r²/(1+k2))^(1/2)
    x2(k)          (1-k2) e / (1-|k2|)              0                                                ((1-k2)/(1+k2)) r²
    x3(k+1/4)      ((1-k2)/(πa)) (sup{y2})^(1/2)    0                                                ((1-k2)/(πa))² (2/π)^(1/2) (2r²/(1+k2))^(1/2)
    x3(k+3/4)      ((1-k2)/(πa)) (sup{y2})^(1/2)    0                                                ((1-k2)/(πa))² (2/π)^(1/2) (2r²/(1+k2))^(1/2)
    y2(k)          (1-k2) e / (1-|k2|) + e          0                                                (2/(1+k2)) r²

    ^a For x1(k+1/2), the second moment E{x1 x1} is given.

Table 4.3. Quantification of the influence of noisy y2 measurements on the system states; e denotes the bound of the uniform noise and r² the variance of the Gaussian noise. Γ(n) = ∫₀^∞ x^(n-1) e^(-x) dx.
4.4.3 Filtering
The closed-loop system (4.75) can be written compactly as

    z(k+1) = z(k) + B u(k) + f(z(k), u(k)),

where u(k) denotes the vector fed to the controller, i.e. u(k) = ẑ(k) when state estimation is used. The vector function f includes all nonlinear terms while B is a constant matrix. They are given by

    B = [ -1    0     0
           0  k2-1    0                                                 (4.86)
           0    0    -1 ]

    f(z(k), u(k)) = [ 0
                      0.5 u1(k) u3(k) + a u3(k) |u2(k)|^(0.5) - x3(k) u1(k)   (4.87)
                      0 ].
For this system we will consider a Luenberger-like observer whose gain matrix cancels the nonlinearities, so that the estimation error z̃ obeys the linear dynamics

    z̃(k+1) = diag( 1-λ1, 1-λ2, 1-λ3 ) z̃(k) - K(k) e(k+1).              (4.93)

Convergence of the estimation error z̃ is then guaranteed for 0 < λi < 2, i = 1, 2, 3.
Obviously, the choice of the λi's is a compromise between fast convergence and steady-state noise rejection. Small gains provide excellent noise rejection capabilities but slow convergence of the estimation error, since more trust is put in the kinematic model of the mobile vehicle. The process noise or model uncertainties for the kinematic model mainly stem from movement of the robot due to inaccurate physical parameters (covered in section 4.2) and finite encoder resolution. When the speed of the robot tends to zero, the certainty of the model increases. All this suggests letting the gains be a function of some norm of the posture error

    λi = f(||z||),                                                      (4.94)

such that noise rejection is increased as the robot comes to a halt. Since the error is unknown, the measurement is used instead. The usability of the measurements depends on the estimation error being small upon reaching the equilibrium z = 0.
One way of selecting such a function is

    λi = λ0i e^(-1/||y||),   i = 1, 2, 3,                               (4.95)

with

    0 < λ0i < 2,   i = 1, 2, 3
                                                                        (4.96)
    ||y|| = ( Σ_{j=1}^{3} aj |yj|^p )^(1/q),   p, q > 0,  aj > 0.

This gives the limits

    λi → 0    for ||y|| → 0
                                                                        (4.97)
    λi → λ0i  for ||y|| → ∞,

and since (4.95) is monotone, each λi will be within the stability bounds.
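A minimal sketch of the gain schedule (4.95)-(4.96), using the parameter values of the simulation study below (λ0i = 1, ai = 18, p = 1, q = 2); the helper name `lam` is for illustration only:

```python
import math

def lam(y, lam0=1.0, a=(18.0, 18.0, 18.0), p=1.0, q=2.0):
    """Time-varying observer gain (4.95): lam = lam0 * exp(-1/||y||),
    with the weighted norm ||y|| = (sum_j a_j |y_j|^p)^(1/q) of (4.96)."""
    n = sum(aj * abs(yj) ** p for aj, yj in zip(a, y)) ** (1.0 / q)
    return lam0 * math.exp(-1.0 / n) if n > 0.0 else 0.0

# far from the goal the gain approaches lam0; near z = 0 it vanishes,
# so the filter trusts the kinematic model more as the robot halts
far = lam((1.0, 1.0, 1.0))    # ||y|| large -> lam close to lam0
near = lam((1e-4, 0.0, 0.0))  # ||y|| small -> lam close to 0
```

Since 0 < lam < lam0 < 2 for every measurement, the schedule never leaves the stability interval established for (4.93).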
Simulation Study
Figure 4.22 shows the Euclidean norm⁴ of the estimation errors with time-varying and constant λ's. The plots were created with the following parameter settings: p = 1, q = 2, ai = 18, and λ0i = 1 for i = 1, 2, 3. For the constant gains, λi = 0.2, i = 1, 2, 3, have been chosen. At steady-state, the mean value of the normed estimation error with time-varying gains is a factor 28 better than with no filtering and a factor 9 better than with constant-gain filtering.
Figure 4.22. (a) Convergence of the estimation error for time-varying λ's (solid) and constant λ's (dashed). (b) Steady-state for time-varying λ's (solid), constant λ's (dashed), and measurement noise (dotted).
Consider again the example from Section 4.4.2 where, with controller settings a = 0.25 and k2 = 0.1 and a 1 mm measurement error on x2, the supremum of x1(k + 1/2) was estimated at 22 mm. With time-varying filtering, the same measurement error gives a supremum of 1.6 mm, and with constant gains the supremum is 8.4 mm.
⁴The Euclidean norm is defined as ||x||₂ = ( Σ_{i=1}^{n} xi² )^(1/2).
4.4.4 Conclusions
This section has looked at the effect of noise on measurements used in closed loop to stabilize a 3 DoF chained system. Two kinds of noise have been considered: 1) bounded noise and 2) Gaussian white noise. The second objective of this section has been filter design for reducing the effect of noise on the measurements in a closed-loop controlled 3 DoF chained system.
The first part of the study considered noise on x2 but noiseless measurements of x1 and x3, and evaluated expressions for suprema, expectations, and variances for the states.
Regarding filtering, a Luenberger-like observer was investigated with a nonlinear
gain matrix canceling nonlinearities in the system model and resulting in a linear
estimation error system. This has the advantage that convergence of the estimation
error is easily guaranteed. A time-varying observer gain has been proposed which
is a function of the norm of the posture error.
An alternative to the presented nonlinear filter is given in Nørgaard et al. (2000a), where state estimators for nonlinear systems are derived based on an interpolation formula.
4.5 Summary
This chapter examined a mobile robot. Three sub-studies covered separate issues
on constraints in robotics. Section 4.2 looked at how to bypass difficulties concern-
ing posture estimation caused by the limitations in the sensor package on-board
the robot. An on-line auto-calibration procedure was presented which can reduce
drifting of the odometry and hence extend exploration time and increase trust in
the overall estimation.
In section 4.3 a receding horizon approach was introduced for path following in
the presence of velocity constraints. The receding horizon enables the vehicle to
anticipate sharp turns and smooth the turning. By means of a velocity scaler, ve-
locity constraints can be imposed on the wheels as well as the forward and angular
velocities. The velocities are set such that constraints are satisfied and the nominal
path followed.
Finally, section 4.4 was concerned with posture stabilization of nonholonomically constrained systems. It was established that noisy measurements induce large limit cycles in the closed-loop stabilization. Appropriate filtering was suggested.
Chapter 5
Conclusions
This thesis is concerned with aspects involved in designing and implementing con-
trollers for systems with constraints. This includes a comparative study of design
methodologies, development of a gain scheduling constrained controller, an out-
line of a robot controller architecture which incorporates closed-loop handling of
constraints, and a substantial mobile robot case study with explicit solutions to
problems arising from limited sensor readings, velocity constraints, and wheel-
configuration induced nonholonomic constraints. This chapter summarizes the re-
search effort presented in the thesis.
Primarily, the studied constraints originate from saturating actuators or restrictions on state or output variables. Both level and rate constraints are under consideration. In robotic systems with multi-layered controllers and integrated subsystems, constraints may take many shapes. Sufficient and reliable sensor information has turned out to be a hurdle for closed-loop robot controllers. In mobile robotics, a consistent flow of posture information is vital to solving many motion tasks. Much effort has been put into fusing sensors, but sensor differences such as signal-to-noise ratio, sampling rate, delays, and data processing impede the performance of the solutions. Another constraint is the so-called nonholonomic constraint that stems from the mechanical construction of the system. It is basically a restriction on the manoeuvrability of the system.
150 Chapter 5 / Conclusions
The thesis started with an overview and comparison of three fundamentally different existing methodologies for constrained control systems, namely Anti-Windup and Bumpless Transfer (AWBT), predictive control, and nonlinear control, which includes rescaling and gain scheduling. Predictive control is the only systematic approach but mainly applies to linear systems and is computationally heavy due to the on-line solution of the minimization problem. AWBT is a versatile approach applicable to already implemented controllers, but it can only manage input level saturations. Nonlinear solutions originate from a desire to guarantee stability and to improve performance during constraints. Most nonlinear solutions are system specific and limited in applicability.
A new nonlinear gain scheduling approach to handling level actuator constraints was introduced. The class of systems is restricted to chains of integrators and systems in lower companion form. The saturation is handled by scheduling the closed-loop poles upon saturation. Good closed-loop performance is obtained through a partly state, partly time dependent scheduler. The saturation limits are easy to specify and the saturation compensation is determined by only three parameters. The controller does not guarantee that the control signal stays within the limits at all times; in this respect it is equivalent to AWBT compensation.
In a case study, the design of an AWBT compensation for an actuator-constrained, PID-controlled cascaded double-tank system was treated. It was argued that the compensation matrix has great influence on the closed-loop performance and hence should be carefully selected. Moreover, the poles of the constrained controller should be chosen close to the slowest poles of the unconstrained closed-loop system. The generality of this study is debatable except for systems with similar characteristics.
An architecture for robot control systems with sensor-based closed-loop trajectory control has been described. Traditionally in robotics, trajectories are generated off-line and fed to a tracking controller at a certain given rate. Such a system will fail to handle internal active system constraints or external uncertain events such as
upcoming obstacles. However, by incorporating relevant sensory information into
the trajectory executor, the trajectory can be modified or the execution rate changed
allowing for appropriate handling of the constraints. This way the robot can in a
closed-loop fashion react to events and constraints without aborting the motion and
embarking on replanning or switching to some contingency plan. Furthermore, the
handling of actuator constraints in the trajectory module provides good control of
the actual trajectory outcome for the constrained system.
Appendix A

The structure of the linearized augmented system for determination of the uncertainties k and b is given in equations (4.18) and (4.19). Here the matrices are given explicitly. Note that c(θ) and s(θ) denote cos(θ) and sin(θ), respectively.
    A = [ 1   0   -u1 s(θ)
          0   1    u1 c(θ)                                              (A.1)
          0   0    1 ]
    F = [ c(θ) - (1/2) u1 s(θ)     (1/2) u1 s(θ)
          s(θ) + (1/2) u1 c(θ)     (1/2) u1 c(θ) ]                      (A.2)

    G = [ c(θ) - (1/2) u1 s(θ)     c(θ) - (1/2) u1 s(θ)
          s(θ) + (1/2) u1 c(θ)     s(θ) + (1/2) u1 c(θ) ],              (A.3)

with the following definitions
154 Chapter A / Linearized System for Estimating Systematic Errors
= x3 + u2 (A.4)
k k
= r r r l (A.5)
2
kr r + kr l
= (A.6)
x5 b
x k (2ka x4 kr )l
= 4 r r (A.7)
x25 b
1
= (ka xk)
2 4 r
(A.8)
xk
= 4 r (A.9)
x5 b
2ka + x4 kr
= (A.10)
x5 b
1
= x4 kr : (A.11)
2
Appendix B

This appendix determines the closed-loop solution to the posture stabilized chained system described in section 4.4.
Let a 3 DoF chained system be described by the equations

    ẋ1 = u1
    ẋ2 = x3 u1                                                          (B.1)
    ẋ3 = u2.

The state z = (x1, x2, x3)^T is measured with y(k) = (y1, y2, y3)^T = z(k) at time t = kT, where k ∈ Z and T is a constant sampling time. On the time interval [kT, (k+1)T) the control signals (u1, u2) are then defined by:

    u1(y, t) = (1/T) [ -y1(k) + 2πa |y2(k)|^(1/2) sin(ωt) ]
                                                                        (B.2)
    u2(y, t) = (1/T) [ -y3(k) + 2(k2 - 1) ( y2(k) / (a |y2(k)|^(1/2)) ) cos(ωt) ].

The control parameters are a and k2 and, to some degree, T (or ω), where k2 governs the convergence rate of x2 while a governs the size of the oscillations in the x1 and x3 directions.
By applying the controller (B.2) to the system (B.1), the solution to ż on the time interval [kT, (k+1)T) can be found by first integrating ẋ1, then ẋ3, and finally ẋ2:

    x1(t) = x1(k) + ∫_{kT}^{t} u1(y, τ) dτ
          = x1(k) - y1(k) (t - kT)/T + a |y2(k)|^(1/2) ( 1 - cos(ωt) )          (B.4)

    x3(t) = x3(k) + ∫_{kT}^{t} u2(y, τ) dτ
          = x3(k) - y3(k) (t - kT)/T + ((k2 - 1)/(πa)) ( y2(k)/|y2(k)|^(1/2) ) sin(ωt).   (B.5)
Now, given x3 and u1 we get for x2(t):

    x2(t) = x2(k) + ∫_{kT}^{t} x3(τ) u1(y, τ) dτ
          = x2(k) + ((k2 - 1)/(2π)) y2(k) ( ω(t - kT) - cos(ωt) sin(ωt) )
            - y1(k) x3(k) (t - kT)/T
            + y3(k) y1(k) ( 0.5 (t² - (kT)²) - kT (t - kT) ) / T²
            - ((k2 - 1)/(2π²a)) ( y1(k) y2(k) / |y2(k)|^(1/2) ) ( 1 - cos(ωt) )
            + a |y2(k)|^(1/2) x3(k) ( 1 - cos(ωt) )
            + (a/(2π)) |y2(k)|^(1/2) y3(k) ( ω(t - kT) cos(ωt) - sin(ωt) ).     (B.6)
Bibliography
Bak, M., Larsen, T. D., Andersen, N. A., and Ravn, O. (1999). Auto-calibration of systematic odometry errors in mobile robots. In Proceedings of SPIE, Mobile Robots XIV, pp. 252-263, Boston, Massachusetts.
Bak, M., Poulsen, N. K., and Ravn, O. (2001). Receding horizon approach to path following mobile robot in the presence of velocity constraints. Submitted to the European Control Conference, Porto, Portugal.
Bazaraa, M., Sherali, H., and Shetty, C. (1993). Nonlinear Programming. Theory and Algorithms. John Wiley and Sons, New York.
Bemporad, A., Marco, M. D., and Tesi, A. (1997). Wall-following controllers for sonar-based mobile robots. In Proceedings of the 36th IEEE Conference on Decision and Control, pp. 3063-3068, San Diego, California.
Bentsman, J., Tse, J., Manayathara, T., Blaukanp, R., and Pellegrinetti, G. (1994). State-space and frequency domain predictive controller design with application to power plant control. In Proceedings of the IEEE Conference on Control Applications, volume 1, pp. 729-734, Glasgow, Scotland.
Bitmead, R. R., Gevers, M., and Wertz, V. (1990). Adaptive Optimal Control, The Thinking Man's GPC. Prentice Hall, New York.
Borenstein, J., Everett, H. R., and Feng, L. (1996). Where am I? Sensors and methods for mobile robot positioning. Technical report, The University of Michigan.
Chisci, L., Lombardi, A., Mosca, E., and Rossiter, J. A. (1996). State-space approach to stabilizing stochastic predictive control. International Journal of Control, 65(4), 619-637.
Clarke, D., Mohtadi, C., and Tuffs, P. (1987a). Generalized predictive control - Part I. The basic algorithm. Automatica, 23(2), 137-148.
Clarke, D., Mohtadi, C., and Tuffs, P. (1987b). Generalized predictive control - Part II. Extensions and interpretations. Automatica, 23(2), 149-160.
Dahl, O. (1992). Path Constrained Robot Control. PhD thesis, Lund Institute of Technology.
de Wit, C. C., Siciliano, B., and Bastin, G. (1996). Theory of Robot Control. Springer-Verlag, London.
Dickmanns, E. and Zapp, A. (1987). Autonomous high speed road vehicle guidance by computer vision. In Proceedings of the 10th IFAC World Congress, volume 4, pp. 232-237, München, Germany.
Doyle, J., Smith, R., and Enns, D. (1987). Control of plants with input saturation nonlinearities. In Proceedings of the American Control Conference, pp. 2147-2152, Minneapolis, USA.
Fertik, H. and Ross, C. (1967). Direct digital control algorithms with anti-windup feature. ISA Transactions, 6(4), 317-328.
Franklin, G., Powell, J., and Emami-Naeini, A. (1991). Feedback Control of Dynamic Systems. Addison-Wesley, Reading, Massachusetts.
García, C. E., Prett, D. M., and Morari, M. (1989). Model predictive control: Theory and practice - a survey. Automatica, 25(3), 335-348.
Graebe, S. F. and Ahlén, A. (1996). The Control Handbook, chapter 20.2 Bumpless Transfer, pp. 381-388. CRC Press, Boca Raton, Florida.
Hanus, R., Kinnaert, M., and Henrotte, J.-L. (1987). Conditioning technique, a general anti-windup and bumpless transfer method. Automatica, 23(6), 729-739.
Hanus, R. and Peng, Y. (1992). Conditioning technique for controllers with time delays. IEEE Transactions on Automatic Control, 37(5), 689-692.
Kapoor, N., Teel, A., and Daoutides, P. (1998). An anti-windup design for linear systems with input saturation. Automatica, 34(5), 559-574.
Kleeman, L. (1992). Optimal estimation of position and heading for mobile robots using ultrasonic beacons and dead-reckoning. In Proceedings of the 1992 IEEE International Conference on Robotics and Automation, volume 3, pp. 2582-2587, Nice, France.
Koh, K. and Cho, H. (1999). A smooth path tracking algorithm for wheeled mobile robots with dynamic constraints. Journal of Intelligent and Robotic Systems: Theory and Applications, 24(4), 367-385.
Kothare, M. V., Campo, P. J., Morari, M., and Nett, C. N. (1994). A unified framework for the study of anti-windup designs. Automatica, 30, 1869-1883.
Kouvaritakis, B., Rossiter, J., and Cannon, M. (1998). Linear quadratic feasible predictive control. Automatica, 34(12), 1583-1592.
Larsen, T., Bak, M., Andersen, N., and Ravn, O. (1998). Location estimation for an autonomously guided vehicle using an augmented Kalman filter to autocalibrate the odometry. In Proceedings of the 1998 International Conference on Multisource-Multisensor Information Fusion, pp. 245-250, Las Vegas, Nevada.
Mayne, D. Q., Rawlings, J. B., Rao, C. V., and Scokaert, P. O. M. (2000). Constrained model predictive control: Stability and optimality. Automatica, 36(6), 789-814.
Middleton, R. H. (1996). The Control Handbook, chapter 20.1 Dealing with Actuator Saturation, pp. 377-381. CRC Press, Boca Raton, Florida.
Morin, P., Murray, R., and Praly, L. (1998a). Nonlinear rescaling and control laws with application to stabilization in the presence of magnitude saturation. In Proceedings of the IFAC NOLCOS, volume 3, pp. 691-696, Enschede, The Netherlands.
Murata, S. and Hirose, T. (1993). Onboard locating system using real-time image processing for a self-navigating vehicle. IEEE Transactions on Industrial Electronics, 40(1), 145-154.
Nørgaard, M., Poulsen, N., and Ravn, O. (1998). The AGV-sim version 1.0 - a simulator for autonomous guided vehicles. Technical report, Department of Automation, Technical University of Denmark.
Nørgaard, M., Poulsen, N., and Ravn, O. (2000a). New developments in state estimation for nonlinear systems. Automatica, 36(11), 1627-1638.
Nørgaard, M., Ravn, O., Poulsen, N. K., and Hansen, L. K. (2000b). Neural Networks for Modelling and Control of Dynamic Systems. Springer-Verlag, London.
Peng, Y., Vrancic, D., Hanus, R., and Weller, S. (1998). Anti-windup designs for multivariable controllers. Automatica, 34(12), 1559-1565.
Richalet, J., Rault, A., Testud, J. L., and Papon, J. (1978). Model predictive heuristic control: applications to industrial processes. Automatica, 14(5), 413-428.
Rossiter, J., Kouvaritakis, B., and Rice, M. (1998). A numerically robust state-space approach to stable-predictive control strategies. Automatica, 34(1), 65-73.
Sampei, M., Tamura, T., Itoh, T., and Nakamichi, M. (1991). Path tracking control of trailer-like mobile robot. In Proceedings of IEEE/RSJ International Workshop in Intelligent Robots and Systems, pp. 193-198, Osaka, Japan.
Stein, G. (1989). Bode lecture: Respect the unstable. In Proceedings of the 28th IEEE Conference on Decision and Control, Tampa, Florida.
Sussmann, H. J., Sontag, E. D., and Yang, Y. (1994). A general result on the stabilization of linear systems using bounded controls. IEEE Transactions on Automatic Control, 39(12), 2411-2425.
Tarn, T.-J., Bejczy, A. K., Guo, C., and Xi, N. (1994). Intelligent planning and control for telerobotic operations. In Proceedings of the IEEE/RSJ/GI International Conference on Intelligent Robots and Systems, volume 1, pp. 389-396.
Tarn, T.-J., Xi, N., and Bejczy, A. K. (1996). Path-based approach to integrated planning and control for robotic systems. Automatica, 32(12), 1675-1687.
Teel, A. R. (1992). Global stabilization and restricted tracking for multiple integrators with bounded controls. Systems & Control Letters, 18, 165-171.
Tsakiris, D. P., Samson, C., and Rives, P. (1996). Vision-based time-varying stabilization of a mobile manipulator. In Proceedings of the Fourth International Conference on Control, Automation, Robotics and Vision, ICARCV, Singapore.
von der Hardt, H.-J., Wolf, D., and Husson, R. (1996). The dead reckoning localization system of the wheeled mobile robot: Romane. In Proceedings of the 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems, pp. 603-610, Washington D.C.
Index

Over-design, 1
Overshoot, 12, 48, 55, 56, 61, 67, 70, 80

P
Path following, 117-139
Path planning, 102
Path velocity controller, 91
Perception, 84
PI controller, 2, 31, 66
PID controller, 12, 16, 19, 69, 74
Pole placement, 58
Pole placement controller, 38, 74-77, 79
Posture
  definition, 101
  stabilization, 100, 139
Predictive control, 36-50, 65, 78
  constrained, 7, 45, 63, 78
  generalized, 37
  suboptimal constrained, 49
  unified, 37, 50
Predictor
  j-step, 42
Process noise, 106
Pump, 68

Q
Quadratic programming, 38, 45, 49

R
Reasoning, 84, 86
Receding horizon, 36, 117-139
  linear, 127
  nonlinear, 124
Rescaling, 51-55
Reset windup, 12
Retro-fitted design, 11, 80
Rise time, 55
Robot
  definition of, 83
Robot control system
  elements of, 84-86
RST controller, 22, 24

S
Saturation, 12
  actuator, 12, 14
  bounds, 55
  definition of, 4
  effect on performance, 12
  guidelines, 14
  instability, 2
  level, 26
Schur decomposed, 23
Schur matrix, 23
Sensor fusion, 86, 99
Settling time, 12, 55, 56
Single integrator, 66
Sliding mode control, 65
Sonar, 86
Stability
  asymptotical, 24, 53, 140
  general result, 5
Stepper motor, 17

T
Time-varying, 50
Trajectory
  execution, 86
  generation, 86
Trajectory control
  general architecture, 92
  non-timebased, 91
  velocity control, 91

V
Velocity scaling, 127

Y
YF-22 aircraft, 2