
Encyclopedia of Systems and Control

DOI 10.1007/978-1-4471-5102-9_133-2
© Springer-Verlag London 2014

Randomized Methods for Control of Uncertain Systems


Fabrizio Dabbene and Roberto Tempo
CNR-IEIIT, Politecnico di Torino, Torino, Italy

Abstract

In this article, we study the tools and methodologies for the analysis and design of control systems
in the presence of random uncertainty. For analysis, the methods are largely based on the Monte
Carlo simulation approach, while for design new randomized algorithms have been developed.
These methods have been successfully employed in various application areas, which include
systems biology; aerospace control; control of hard disk drives; high-speed networks; quantized,
embedded, and electric circuits; structural design; and automotive and driver assistance.

Preliminaries
Randomized methods for control deal with the design of uncertain and complex systems. They were originally developed for linear systems affected by structured uncertainty, usually expressed in the so-called M–Δ configuration. A similar approach may be followed when dealing with uncertainty in other contexts, such as uncertainty in the environment (random disturbances), or even when there is no uncertainty in the problem formulation but the complexity of the problem is such that randomized methods may be the best approach, since these methods are known to break the curse of dimensionality; see Tempo et al. (2013) for details.
For the sake of simplicity, we consider here an uncertain plant transfer function P(s, q) affected by parametric uncertainty

q = [q_1 ⋯ q_ℓ]^T

bounded in a set Q ⊆ R^ℓ. The objective is to design the parameters θ ∈ R^{n_θ} of a controller transfer function C(s, θ) so as to robustly guarantee some desired performance. This is reformulated as the problem of finding a design satisfying some uncertain constraints of the form

f(θ, q) ≤ 0 for all q ∈ Q.

In other words, the goal is to design a robust controller which satisfies the uncertain constraints. Specific examples of these constraints include an H∞ or H2 norm bound on the closed-loop sensitivity function, or time-domain specifications.


E-mail: [email protected]


Since this objective may be too hard to achieve in many situations, we relax it as follows: we would like to design controller parameters θ ∈ R^{n_θ} such that a certain violation is allowed, i.e.,

f(θ, q) ≤ 0 for all q ∈ Q_good;
f(θ, q) > 0 for all q ∈ Q_bad

where the good and bad sets satisfy

Q_good ∪ Q_bad = Q;
Q_good ∩ Q_bad = ∅,

and the goal is to guarantee that the bad set Q_bad is "small" enough. To state this concept more precisely, we assume that q ∈ Q is a random vector with given probability density function (pdf), and we introduce the probability of violation and the controller reliability.

Definition 1 (Probability of Violation and Reliability). The probability of violation for the controller parameters θ ∈ R^{n_θ} is defined as

V(θ) = Prob{q ∈ Q : f(θ, q) > 0}.

The reliability of the design θ ∈ R^{n_θ} is given by

R(θ) = 1 − V(θ).

In this context, we are satisfied if, given a violation level ρ ∈ (0, 1), the probability of violation is sufficiently small, i.e., V(θ) ≤ ρ. We remark that relaxing the requirement of robust satisfaction of the uncertain constraints f(θ, q) ≤ 0 to a probabilistic one (by means of the probability of violation) is not by itself helpful computationally: computing the probability V(θ) exactly is very hard in general, since it requires solving a multidimensional integral over the nonconvex domain defined by f(θ, q) > 0, with q ∈ Q ⊆ R^ℓ. The problem is then resolved by introducing Monte Carlo randomized algorithms (formally defined in the next section). This is a computational approach which leads to solutions often denoted as PAC (probably approximately correct) (Vidyasagar 2002).
More precisely, for fixed design θ ∈ R^{n_θ}, to compute a Monte Carlo approximation based on N random simulations, we generate N independent identically distributed (iid) random samples of the uncertainty q ∈ Q according to the given probability density function, called the multisample

q^(1…N) = {q^(1), …, q^(N)} ∈ Q^N.

The cardinality N of the multisample q^(1…N) is often referred to as the sample complexity (Vidyasagar 2001). The empirical violation of the design θ is then defined.

Definition 2 (Empirical Violation). For given θ ∈ R^{n_θ}, the empirical violation of V(θ) with respect to the multisample q^(1…N) = {q^(1), …, q^(N)} ∈ Q^N is given by


V̂_N(θ, q^(1…N)) = (1/N) Σ_{i=1}^{N} I_f(θ, q^(i)),

where I_f(θ, q^(i)) is the indicator function

I_f(θ, q^(i)) = 0 if f(θ, q^(i)) ≤ 0, and 1 otherwise.
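As an illustration of Definition 2, the following is a minimal sketch of the empirical violation computed by plain Monte Carlo sampling. The constraint function, the uncertainty sampler, and the names `empirical_violation`, `f`, and `sample_q` are hypothetical choices for this example, not part of the article.

```python
import numpy as np

def empirical_violation(f, theta, sample_q, N, seed=0):
    """Monte Carlo estimate of V(theta) = Prob{q in Q : f(theta, q) > 0}
    as the empirical violation (1/N) * sum_i I_f(theta, q^(i))."""
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(N):
        q = sample_q(rng)        # draw q according to the given pdf over Q
        if f(theta, q) > 0:      # indicator of constraint violation
            count += 1
    return count / N

# Toy constraint f(theta, q) = q - theta with q uniform on Q = [0, 1];
# the exact violation probability is V(theta) = 1 - theta.
v_hat = empirical_violation(lambda th, q: q - th, theta=0.8,
                            sample_q=lambda rng: rng.uniform(0.0, 1.0),
                            N=100_000)
print(v_hat)  # close to the exact value 0.2
```

The toy constraint is chosen so the exact V(θ) is known, which lets the estimate be checked directly.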

Monte Carlo Randomized Algorithms for Analysis


In this section, we study Monte Carlo randomized algorithms for analysis, i.e., when the controller parameters are fixed, and in particular we concentrate on a PAC computation of the probability of violation. In agreement with classical notions in computer science (Mitzenmacher and Upfal 2005; Motwani and Raghavan 1995), a randomized algorithm (RA) is formally defined as an algorithm that makes random choices during its execution to produce a result. This implies that, even for the same input data, the algorithm might produce different results at different runs and, moreover, the results may be incorrect. Therefore, statements regarding properties of these algorithms are necessarily of a probabilistic nature.

Formally, the probabilistic parameters ε, δ ∈ (0, 1), called accuracy and confidence, respectively, are introduced. For any θ, the PAC approach provides an empirical violation V̂_N(θ, q^(1…N)) which is an approximation to V(θ) within accuracy ε, and this event holds with confidence 1 − δ.

Monte Carlo Randomized Algorithm


Given a design θ ∈ R^{n_θ}, a Monte Carlo randomized algorithm (MCRA) is a randomized algorithm that provides an approximation V̂_N(θ, q^(1…N)) to V(θ) based on the multisample q^(1…N). Given accuracy ε and confidence δ, the approximation may be incorrect, i.e.,

|V(θ) − V̂_N(θ, q^(1…N))| > ε,

but the probability of such an event is bounded, and it is smaller than δ.


In general, the results obtained by an MCRA, as well as its running time, may differ from one run to another, since the algorithm is based on random sampling. As a consequence, the computational complexity of such an algorithm is usually measured in terms of its expected running time. MCRA are efficient because the expected running time is of polynomial order in the problem size (Tempo et al. 2013). One-sided and two-sided Monte Carlo randomized algorithms may also be defined (Tempo and Ishii 2007).

To derive the probabilistic properties of MCRA, we need to state the so-called Hoeffding inequality, which provides a bound on the error between the probability of violation and the empirical violation (Vidyasagar 2002).


Two-Sided Hoeffding Inequality


For fixed θ ∈ R^{n_θ} and ε ∈ (0, 1), we have

Prob{ q^(1…N) ∈ Q^N : |V(θ) − V̂_N(θ, q^(1…N))| > ε } ≤ 2 e^{−2Nε²}.

For fixed accuracy ε, we observe that the right-hand side of this inequality approaches zero exponentially as N increases. Furthermore, if we bound the right-hand side of this inequality by the confidence δ, we immediately obtain the classical (additive) Chernoff bound (Chernoff 1952), which is stated next.
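The Hoeffding inequality can be checked numerically: draw many independent multisamples, compute the empirical violation of each, and compare the observed frequency of large deviations with the bound 2e^{−2Nε²}. The true violation probability V = 0.3 below is an assumed value for illustration only.

```python
import numpy as np

# Empirical check of the two-sided Hoeffding inequality: the fraction of
# multisamples whose empirical violation deviates from V by more than eps
# should stay below 2*exp(-2*N*eps**2).
rng = np.random.default_rng(0)
V, N, eps, runs = 0.3, 500, 0.05, 20_000
deviations = 0
for _ in range(runs):
    v_hat = rng.binomial(N, V) / N   # empirical violation from one multisample
    if abs(v_hat - V) > eps:
        deviations += 1
freq = deviations / runs
bound = 2 * np.exp(-2 * N * eps ** 2)
print(freq, bound)  # observed deviation frequency vs. Hoeffding bound
```

As the inequality suggests, the observed frequency is typically far below the bound, since Hoeffding's inequality is distribution-free and hence conservative.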

Chernoff Bound
For any ε ∈ (0, 1) and δ ∈ (0, 1), if

N ≥ (1 / (2ε²)) log(2/δ),

then, with probability greater than 1 − δ, we have

|V(θ) − V̂_N(θ, q^(1…N))| ≤ ε.

The Chernoff bound provides an indication of the required sample size, i.e., it provides the so-called sample complexity. More precisely, the sample complexity of a randomized algorithm is defined as the minimum cardinality of the multisample q^(1…N) that needs to be drawn in order to achieve the desired accuracy ε and confidence δ. Notice that the confidence enters the Chernoff bound logarithmically, while the accuracy enters as 1/ε², and is therefore much more expensive computationally. Other large deviation inequalities and sample complexity bounds are discussed in the literature, including in particular the (multiplicative) Chernoff bound and the log-over-log bound for computing the so-called empirical maximum (Tempo et al. 1997). We refer to Vidyasagar (2002) for additional details.
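The Chernoff sample complexity is a one-line computation; the sketch below (the function name is a hypothetical choice) makes the logarithmic dependence on δ and the 1/ε² dependence on ε concrete.

```python
import math

def chernoff_sample_size(eps, delta):
    """Smallest integer N with N >= (1/(2*eps**2)) * log(2/delta): drawing
    this many iid samples guarantees |V(theta) - Vhat_N| <= eps with
    probability at least 1 - delta (additive Chernoff bound)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

# Accuracy 1% with confidence 1 - 1e-6 requires 72544 samples, regardless
# of the number of uncertain parameters (the bound breaks the curse of
# dimensionality in that sense).
print(chernoff_sample_size(eps=0.01, delta=1e-6))  # 72544
```

Tightening δ from 1e-6 to 1e-12 merely doubles the log term, while halving ε quadruples N.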

Remark 1 (Las Vegas Randomized Algorithms). Las Vegas randomized algorithms (LVRA) are
based on random samples generated according to a discrete probability density function, instead
of a continuous pdf as in the case of Monte Carlo. Therefore, contrary to MCRA, LVRA provide
the “correct answer” with probability one because the entire search space can be fully explored.
However, because of randomization, the running time of an LVRA is random (similarly to MCRA)
and may be different in each execution. Hence, it is of interest to study the expected running time
of the algorithm. It is noted that the expectation is with respect to the random samples generated
during the execution of the algorithm and not to the problem data. Classical examples of LVRA arise in computer science and include the well-known randomized quicksort (RQS) algorithm for sorting numbers, which is implemented in a C library of the UNIX operating system (Knuth 1998). Other more recent developments in systems and control regarding these algorithms are for
the PageRank computation in the Google search engine (Ishii and Tempo 2010), consensus over
large-scale networks (Fagnani and Zampieri 2008), localization and coverage control of robotic
networks (Bullo et al. 2012), and opinion dynamics (Frasca et al. 2013). These problems are
generally formulated in a graph theoretic setting consisting of nodes and links, and either the


nodes or the links are randomly selected according to a given “local” protocol (often called gossip)
based on a given discrete pdf.
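The randomized quicksort mentioned in the remark illustrates the Las Vegas property well: the output is always correct, but the running time depends on the random pivot choices. A minimal sketch (not the UNIX C-library implementation, which sorts in place):

```python
import random

def randomized_quicksort(a):
    """Las Vegas algorithm: always returns a correctly sorted list, with
    O(n log n) expected running time over the random pivot choices."""
    if len(a) <= 1:
        return list(a)
    pivot = random.choice(a)                  # uniformly random pivot
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

Contrast this with an MCRA: here randomness affects only the running time, never the correctness of the answer.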

Randomized Algorithms for Control Design


This section deals with control problems which require computing a design  2 Rn satisfying some
probabilistic properties on the uncertain constraints f .; q/. Two classes of problems, feasibility
and optimization, are considered.

Feasibility Problem
Given uncertain constraints f(θ, q) and a level ρ ∈ (0, 1), compute θ ∈ R^{n_θ} such that

V(θ) = Prob{q ∈ Q : f(θ, q) > 0} ≤ ρ. (1)

The second problem relates to the optimization of a linear function of the design parameters under
probability constraints.

Optimization Problem
Given uncertain constraints f(θ, q), a linear objective function c^T θ, and a level ρ ∈ (0, 1), solve the constrained optimization problem

min c^T θ
subject to V(θ) = Prob{q ∈ Q : f(θ, q) > 0} ≤ ρ. (2)

Optimization problems subject to constraints of the form V(θ) = Prob{q ∈ Q : f(θ, q) > 0} ≤ ρ are often called chance constrained optimization (Uryasev 2000).
Most of the algorithms that have been studied in the literature follow two main paradigms and
are often based on the following convexity assumption.

Convexity Assumption
The uncertain constraint f(θ, q) is convex in θ for any fixed value of q ∈ Q.
The two solution paradigms that have been proposed are now summarized. The algorithms
have been implemented in the Toolbox RACT (Randomized Algorithms Control Toolbox) for
probabilistic analysis and control design in the presence of uncertainty (Tremba et al. 2008).

Paradigm 1 (Sequential Approach)


Under the convexity assumption, we study the Feasibility Problem (1). The algorithms presented in the literature for finding a probabilistically feasible design (see, e.g., Calafiore et al. 2011) follow a general iterative scheme (Fig. 1), which consists of successive randomization steps to handle


Fig. 1 Paradigm for sequential design consisting of probabilistic oracle and update rule

uncertainty and optimization steps to update the design parameters. In particular, these algorithms
share two fundamental ingredients:

1. A probabilistic oracle, which performs a random check with the objective of assessing whether the probability of violation V(θ̂^(k)) of the current candidate solution θ̂^(k) is smaller than a given level ρ, and which returns a certificate of unfeasibility, that is, a value q^(k) such that f(θ̂^(k), q^(k)) > 0, when the candidate solution is found unfeasible

2. An update rule upd, which exploits the convexity of the problem to construct a new candidate solution θ̂^(k+1) based on the probabilistic oracle outcome

In this paradigm, the algorithm returns a design θ̂ such that the probability that

V(θ̂) = Prob{q ∈ Q : f(θ̂, q) > 0} ≤ ρ

is larger than 1 − δ. That is, the violation probability associated to the design θ̂ is smaller than the level ρ, and this event holds with large confidence 1 − δ.
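The oracle/update interplay can be sketched on a toy scalar problem. This is only an illustrative caricature under assumed choices: the oracle here is a fixed-budget scenario check (real oracles control the level and confidence explicitly), the update is a plain gradient step (the literature uses gradient, ellipsoid, and cutting-plane rules), and the constraint f(θ, q) = (θ − q)² − 0.3 with q uniform on [0, 1] is invented for the example.

```python
import numpy as np

def probabilistic_oracle(f, theta, sample_q, rng, M=200):
    """Randomized check: test M random scenarios and return a certificate
    of unfeasibility (a q with f(theta, q) > 0), or None if none is found."""
    for _ in range(M):
        q = sample_q(rng)
        if f(theta, q) > 0:
            return q
    return None

def sequential_design(f, grad_f, theta0, sample_q, step=0.1, iters=1000, seed=0):
    """Alternate the oracle with a (sub)gradient update step, exploiting
    convexity of f(., q) in theta, until the oracle finds no violation."""
    rng = np.random.default_rng(seed)
    theta = theta0
    for _ in range(iters):
        q = probabilistic_oracle(f, theta, sample_q, rng)
        if q is None:
            return theta                         # candidate passed the check
        theta = theta - step * grad_f(theta, q)  # move against the violated constraint
    return theta

# Toy problem: f(theta, q) = (theta - q)^2 - 0.3, q ~ U[0, 1]; values of
# theta near 0.5 make violations rare or impossible.
f = lambda th, q: (th - q) ** 2 - 0.3
grad_f = lambda th, q: 2.0 * (th - q)
theta_hat = sequential_design(f, grad_f, theta0=5.0,
                              sample_q=lambda rng: rng.uniform(0.0, 1.0))
print(theta_hat)  # a value near 0.5
```

Each certificate of unfeasibility pulls the candidate toward the sampled scenario, and convexity guarantees such steps make progress toward the probabilistically feasible set.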

Paradigm 2 (Scenario Approach)


Under the convexity assumption, we study the Optimization Problem (2). We remark that, even under this assumption, solving this problem is very hard computationally because the

probabilistic constraint is nonconvex. To alleviate this difficulty, we reformulate problem (2) as a so-called scenario problem, introduced in Calafiore and Campi (2006), which is now described. For randomly extracted scenarios q^(1…N), this approach requires computing θ ∈ R^{n_θ} that solves the convex optimization problem subject to a finite number of sampled constraints

θ̂_N = arg min c^T θ
subject to f(θ, q^(i)) ≤ 0, i = 1, …, N. (3)

In this paradigm, the algorithm returns in one shot a design θ̂_N and the sample complexity N such that the probability that

V(θ̂_N) = Prob{q ∈ Q : f(θ̂_N, q) > 0} ≤ ρ

is larger than 1 − δ. That is, the violation probability associated to the design θ̂_N is smaller than the level ρ, and this event holds with large confidence 1 − δ.
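A minimal instance of the scenario problem (3) helps fix ideas. The toy problem below is an assumed one-dimensional example, not from the article: minimize θ subject to f(θ, q) = q − θ ≤ 0 for N sampled scenarios, whose scenario solution is simply the largest sampled q, so no general-purpose solver is needed.

```python
import numpy as np

# Scenario approach sketch: replace the chance constraint of problem (2)
# by N sampled constraints and solve the resulting convex program one-shot.
rng = np.random.default_rng(1)
N = 1000
scenarios = rng.uniform(0.0, 1.0, size=N)   # multisample q^(1...N)

# min theta  s.t.  q_i - theta <= 0 for all i  ==>  theta_N = max_i q_i
theta_N = scenarios.max()

# A posteriori Monte Carlo check of the violation probability V(theta_N);
# for this toy problem the exact value is 1 - theta_N.
test_q = rng.uniform(0.0, 1.0, size=100_000)
v_hat = np.mean(test_q > theta_N)           # empirical violation of theta_N
print(theta_N, v_hat)
```

With N = 1000 scenarios the empirical violation comes out near 1/(N+1), in line with the scenario-approach guarantee that the violation probability of θ̂_N shrinks as N grows.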

Concluding Remarks
Other probabilistic approaches have been proposed in the literature for control design which are not based on the convexity assumption. A notable example is the strategy based on statistical learning theory (Valiant 1984; Vapnik 1998), which aims to design a controller without any convexity assumptions (Alamo et al. 2009). In particular, in Alamo et al. (2013), the general class of sequential probabilistic validation (SPV) algorithms has been introduced. A specific SPV algorithm tailored to scenario problems, providing a sequential scheme for dealing with the optimization problem, has been recently studied in Chamanbaz et al. (2013).

Bibliography
Alamo T, Tempo R, Camacho E (2009) A randomized strategy for probabilistic solutions of
uncertain feasibility and optimization problems. IEEE Trans Autom Control 54:2545–2559
Alamo T, Tempo R, Luque A, Ramirez D (2013) The sample complexity of randomized methods for analysis and design of uncertain systems. arXiv:1304.0678 (accepted for publication)
Bullo F, Carli R, Frasca P (2012) Gossip coverage control for robotic networks: dynamical systems
on the space of partitions. SIAM J Control Optim 50(1):419–447
Calafiore G, Campi M (2006) The scenario approach to robust control design. IEEE Trans Autom
Control 51(1):742–753
Calafiore G, Dabbene F, Tempo R (2011) Research on probabilistic methods for control system
design. Automatica 47:1279–1293
Chamanbaz M, Dabbene F, Tempo R, Venkataramanan V, Wang Q (2013) Sequential randomized
algorithms for convex optimization in the presence of uncertainty. arXiv: 1304.2222
Chernoff H (1952) A measure of asymptotic efficiency for tests of a hypothesis based on the sum
of observations. Ann Math Stat 23:493–507


Fagnani F, Zampieri S (2008) Randomized consensus algorithms over large scale networks. IEEE
J Sel Areas Commun 26(4):634–649
Frasca P, Ravazzi C, Tempo R, Ishii H (2013) Gossips and prejudices: ergodic randomized
dynamics in social networks. In: Proceedings of the 4th IFAC workshop on distributed estimation
and control in networked systems, Koblenz
Ishii H, Tempo R (2010) Distributed randomized algorithms for the PageRank computation. IEEE
Trans Autom Control 55:1987–2002
Knuth D (1998) The art of computer programming. Sorting and searching, vol 3. Addison-Wesley,
Reading
Mitzenmacher M, Upfal E (2005) Probability and computing: randomized algorithms and proba-
bilistic analysis. Cambridge University Press, Cambridge
Motwani R, Raghavan P (1995) Randomized algorithms. Cambridge University Press, Cambridge
Tempo R, Ishii H (2007) Monte Carlo and Las Vegas randomized algorithms for systems and
control: an introduction. Eur J Control 13:189–203
Tempo R, Bai EW, Dabbene F (1997) Probabilistic robustness analysis: explicit bounds for the
minimum number of samples. Syst Control Lett 30:237–242
Tempo R, Calafiore G, Dabbene F (2013) Randomized algorithms for analysis and control of
uncertain systems, with applications. Communications and control engineering series, 2nd edn.
Springer, London
Tremba A, Calafiore G, Dabbene F, Gryazina E, Polyak B, Shcherbakov P, Tempo R (2008)
RACT: randomized algorithms control toolbox for MATLAB. In: Proceedings 17th IFAC world
congress, Seoul, pp 390–395
Uryasev SP (ed) (2000) Probabilistic constrained optimization: methodology and applications.
Kluwer Academic, New York
Valiant L (1984) A theory of the learnable. Commun ACM 27(11):1134–1142
Vapnik V (1998) Statistical learning theory. Wiley, New York
Vidyasagar M (2001) Randomized algorithms for robust controller synthesis using statistical
learning theory. Automatica 37:1515–1528
Vidyasagar M (2002) Learning and generalization: with applications to neural networks, 2nd edn.
Springer, New York
