Stochastic Tubes in Model Predictive Control with Probabilistic Constraints

Mark Cannon, Basil Kouvaritakis, Saša V. Raković and Qifeng Cheng

Abstract—Stochastic Model Predictive Control (MPC) strategies can provide guarantees of stability and constraint satisfaction, but their online computation can be formidable. This difficulty is avoided in the current paper through the use of tubes of fixed cross section and variable scaling. A model describing the evolution of predicted tube scalings facilitates the computation of stochastic tubes; furthermore this procedure can be performed offline. The resulting MPC scheme has a low online computational load even for long prediction horizons, thus allowing for performance improvements. The efficacy of the approach is illustrated by numerical examples.

Keywords: constrained control; stochastic systems; probabilistic constraints; model predictive control.

I. INTRODUCTION

A common framework for robust model predictive control (MPC) is based on the assumption that model uncertainty (whether multiplicative or additive) can be described in terms of bounded compact sets (e.g. [13], [22]) without any specific reference to information that may be available on the probability distribution of the uncertainty. Yet this information (if available) has a key role to play in the treatment of soft constraints of a probabilistic nature. Such constraints can be handled through the use of second order cone inequalities (see for example [20], [8], both of which deal with the case of open loop stable systems with finite impulse responses), or confidence ellipsoids (e.g. [21], [15]), or through estimates of the conditional mean of the state [23], or a transformation that leads to conditions involving standard multivariate normal distributions [12], or by multivariate integration [1]. However none of these approaches considered the issues of recursive feasibility or stability of the MPC strategy in closed loop operation. Recent work [4], [3], [5] considered constraints on the average probability of constraint violation over a given horizon. This involved minimizing a performance index subject to constraints that restrict the predicted plant state either to nested ellipsoidal sets ([4]) or to nested layered tubes with variable polytopic cross sections ([3], [5]). By constraining the probability of transition between tubes and the probability of constraint violation within each tube, this strategy ensures satisfaction of soft (and hard) constraints as well as recursive feasibility with respect to these constraints. Tubes have been proposed for different MPC problem formulations: [10], [14] consider linear dynamics with additive bounded uncertainty, whereas [19] and [11] address nonlinear systems with and without additive uncertainty, respectively.
The main drawback of [3], [5] is that the constraints on transitional and violation probabilities were invoked using confidence polytopes for model uncertainty. This results in large numbers of variables and linear inequalities in the MPC optimization, which can limit application to low-dimensional systems with short horizons even if the number of tube layers is small. Moreover, for small numbers of tube layers the handling of probabilistic constraints becomes conservative. The current paper uses tubes with fixed cross sections (for convenience ellipsoids are used, though more general forms are possible), but the scalings and centres of these cross sections are allowed to vary with time. As in the deterministic case [18], the evolution of the tubes is described by a scalar dynamic system, which implies a significant reduction in the number of optimization variables. In this context the dynamics governing the tube scalings are stochastic, and the computation of the predicted distributions of tube scalings is facilitated by a process of discretization. The distributions enable bounds to be imposed on the probability of violation of state constraints; these are computed offline and are invoked online by tightening the constraints on the predictions of a nominal model. The offline computation of distributions for the stochastic variables allows many discretized levels to be used, thus allowing for the implicit use of many layered tubes without any concomitant increase in the online computation. Unlike [21] and [23], which did not guarantee closed loop feasibility, the current paper ensures feasibility in a recursive manner. This means that, given feasibility at initial time, the proposed approach ensures that the online constrained optimization problem is feasible at the next time instant and therefore remains feasible at all subsequent times.

II. PROBLEM DEFINITION

Consider the linear time-invariant model with state x_k ∈ R^{n_x}, control input u_k, and disturbance input w_k,

x_{k+1} = A x_k + B u_k + B_w w_k    (1)

Here w_k, for k = 0, 1, ..., are zero-mean, independent and identically distributed (i.i.d.) random variables, and it is known that w_k ∈ E(W, α_k), where E(W, α) is the ellipsoidal set

E(W, α) = {w : w^T W w ≤ α},  W = W^T > 0.    (2)

Here α_k ≥ 0, k = 0, 1, ..., are i.i.d. random variables. The distribution function F(a) = Pr(α ≤ a) is assumed to be known, either in closed form or by numerically integrating the density of w, and we make the following assumption on α.

Assumption 1. F is continuously differentiable and α ≤ ᾱ (w.p. 1) for a finite bound ᾱ > 0.

The system is subject to soft constraints of the form Pr(g_j^T x_k ≤ h_j) ≥ p_j, j = 1, ..., n_c, for given g_j ∈ R^{n_x}, h_j ∈ R and given probabilities p_j > 0.5. General linear constraints on states and inputs can be handled using the paper's framework, and hard constraints can be included as a special case of soft constraints invoked with probability 1 (w.p. 1). To simplify presentation, the approach is developed for the case of a single soft constraint (n_c = 1),

Pr(g^T x_k ≤ h) ≥ p,  p > 0.5,    (3)

and the case of n_c > 1 can be treated simply by taking the intersection of n_c constraint sets. The control problem is to minimize, at each time k, the expected quadratic cost

J_k = Σ_{i=0}^{∞} E_k(x_{k+i}^T Q x_{k+i} + u_{k+i}^T R u_{k+i})    (4)

subject to Pr(g^T x_{k+i} ≤ h) ≥ p for all i ≥ 0, while ensuring closed loop stability (in a suitable sense) and convergence of x_k to a neighbourhood of the origin as k → ∞.
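To make the setup of (1)–(4) concrete, the short Python sketch below simulates the disturbed model under a fixed linear feedback and estimates the violation frequency of the soft constraint (3) by Monte Carlo. All numerical values (A, B, B_w, K, g, h and the disturbance model in sample_w) are hypothetical illustrations, not data taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for x_{k+1} = A x_k + B u_k + B_w w_k (not the paper's example values)
A  = np.array([[1.2, 0.3],
               [0.0, 0.9]])
B  = np.array([[0.0],
               [1.0]])
Bw = np.eye(2)
g, h, p = np.array([1.0, 0.5]), 2.0, 0.8      # soft constraint Pr(g^T x_k <= h) >= p

K   = np.array([[-1.867, -1.2]])              # any gain making Phi = A + B K strictly stable
Phi = A + B @ K
assert np.max(np.abs(np.linalg.eigvals(Phi))) < 1.0

def sample_w():
    # zero-mean, bounded disturbance (truncated normal), consistent with Assumption 1
    return np.clip(rng.normal(scale=0.05, size=2), -0.15, 0.15)

def empirical_violation(x0, step=5, n_runs=5000):
    """Monte Carlo estimate of Pr(g^T x_step > h) under the pure feedback u_k = K x_k."""
    viol = 0
    for _ in range(n_runs):
        x = np.array(x0, dtype=float)
        for _ in range(step):
            u = float(K @ x)
            x = A @ x + B[:, 0] * u + Bw @ sample_w()
        viol += g @ x > h
    return viol / n_runs

print("violation frequency at step 5:", empirical_violation([1.0, 0.0]))
```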
We decompose the state and input of the model (1) into

x_k = z_k + e_k,  u_k = K x_k + c_k,    (5)

so that z_k and e_k evolve according to

z_{k+1} = Φ z_k + B c_k    (6)
e_{k+1} = Φ e_k + B_w w_k    (7)

where Φ = A + B K is assumed to be strictly stable and {c_{k+i|k}, i = 0, ..., N−1} are the free variables in a receding horizon optimization at time k. This allows the effect of disturbances on the i-step-ahead predicted state, x_{k+i}, to be considered separately (via e_{k+i}) from the nominal prediction, z_{k+i}, and thus simplifies the handling of constraints.

III. UNCERTAIN PREDICTIONS

The uncertainty in the i-step-ahead prediction e_{k+i} can be characterized using (2) and (7) in terms of the scalings β_i of ellipsoidal sets containing e_{k+i}, denoted e_{k+i} ∈ E(V, β_i):

E(V, β) = {e : e^T V e ≤ β},  V = V^T > 0.    (8)

This section constructs a dynamic system to define the random sequence {β_i, i = 1, 2, ...} and proposes a method of approximating numerically the distribution functions of β_i, i ≥ 1. Given e_{k+i} ∈ E(V, β_i) and w_{k+i} ∈ E(W, α), we have e_{k+i+1} ∈ E(V, β_{i+1}) if and only if

max_{e ∈ E(V,β_i), w ∈ E(W,α)} (Φe + B_w w)^T V (Φe + B_w w) ≤ β_{i+1}.    (9)

The problem of computing the minimum β_{i+1} satisfying (9) is NP-complete, but sufficient conditions for (9) are as follows.

Lemma 1. If β_i, β_{i+1}, λ and V satisfy

β_{i+1} = λ β_i + α    (10)

V^{-1} − λ^{-1} Φ V^{-1} Φ^T − (1−λ)^{-1} B_w W^{-1} B_w^T ⪰ 0    (11)

for some λ > 0, then e_{k+i+1} ∈ E(V, β_{i+1}) whenever e_{k+i} ∈ E(V, β_i) and w_{k+i} ∈ E(W, α).

Proof: Using the S-procedure [2], sufficient conditions for (9) are obtained in the form (Φe + B_w w)^T V (Φe + B_w w) ≤ λ e^T V e + μ w^T W w, with β_{i+1} = λβ_i + μα, for some λ ≥ 0 and μ ≥ 0. However μ can be removed from these inequalities by scaling β_i, β_{i+1} and V, and conditions (10) and (11) follow directly. □

The distribution of β_i for any i > 0 can be determined from the distributions of α and β_0 using (10). In the sequel λ and V in (10), (11) are taken to be constants independent of i and k (a procedure for optimizing the values of λ and V is described in Section IV). We further assume that the distribution function F_{β_0} of β_0 is known and has the following properties.

Assumption 2. F_{β_0} is right-continuous with only a finite number of discontinuities, and β_0 ∈ [0, β̄_0] for some finite β̄_0.

Note that Assumption 2 allows for fixed, deterministic β_0 (e.g. β_0 = 0 corresponds to F_{β_0}(x) = 1 for all x ≥ 0). The following result shows that F_{β_i} is well-defined if λ ∈ (0, 1).

Theorem 2. If 0 < λ < 1, then
(i) β_i is bounded for all i: β_i ∈ [0, β̄_i), where β̄_{i+1} = λ β̄_i + ᾱ, and β̄_i ≤ β̄ = max{β̄_0, ᾱ/(1−λ)};
(ii) F_{β_i} is continuously differentiable for all i ≥ 1;
(iii) β_i converges in distribution to the random variable β_∞ = Σ_{j=0}^{∞} λ^j α_j as i → ∞.

Proof: Assumption 1 and (10) together imply that β_i ∈ [0, β̄_i) with β̄_{i+1} = λ β̄_i + ᾱ. Hence if λ ∈ (0, 1), then {β̄_i} is monotonic and β̄_i → ᾱ/(1−λ) as i → ∞, which proves (i). The distribution of β_{i+1} is given for any i ≥ 0 by the convolution integral (see e.g. [6])

F_{β_{i+1}}(x) = (1/λ) ∫_0^x F_α(x − y) f_{β_i}(y/λ) dy,    (12)

where f_{β_i} denotes the density of β_i. It follows from Assumption 1 and the dominated convergence principle that F_{β_{i+1}} is continuously differentiable, as claimed in (ii). To prove (iii), let β_i^† denote the random variable β_i^† = λ^i β_0 + ψ_i for i = 0, 1, ..., where

ψ_{i+1} = ψ_i + λ^i α_i,  ψ_0 = 0.    (13)

Then β_i^† = Σ_{j=0}^{i−1} λ^j α_j + λ^i β_0, and since (10) gives β_i = Σ_{j=0}^{i−1} λ^j α_{i−1−j} + λ^i β_0, where {α_k} is i.i.d., the distributions of β_i and β_i^† are identical for all i ≥ 0. Furthermore the bounds on α and β_0 in Assumptions 1 and 2 imply that for every ε > 0 there exists m such that Pr(|β_i^† − β_j^†| > ε) = 0 for all i, j ≥ m, and it follows that β_i^† converges a.e. to a limit as i → ∞ [17]. From the definition of β_∞ we therefore have β_i^† → β_∞ a.e., and hence β_i converges in distribution to β_∞ [17]. □
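The scalar recursion (10) is easy to explore numerically. The sketch below draws i.i.d. samples of α (a uniform distribution on [0, ᾱ] is assumed purely for illustration; the paper only requires Assumption 1), iterates β_{i+1} = λβ_i + α_i, checks the bound β̄ = max{β̄_0, ᾱ/(1−λ)} of Theorem 2(i), and gives an empirical picture of the limiting distribution of Theorem 2(iii).

```python
import numpy as np

rng = np.random.default_rng(1)

lam       = 0.6     # lambda in (0,1), as required by Theorem 2
alpha_bar = 1.0     # bound on alpha (Assumption 1); uniform alpha assumed for illustration
beta0     = 0.0     # deterministic initial scaling (allowed by Assumption 2)

# Monte Carlo realisations of the scaling recursion (10): beta_{i+1} = lam*beta_i + alpha_i
n_mc, horizon = 100_000, 40
beta = np.full(n_mc, beta0)
for i in range(horizon):
    beta = lam * beta + rng.uniform(0.0, alpha_bar, size=n_mc)

beta_bar = max(beta0, alpha_bar / (1.0 - lam))   # bound from Theorem 2(i)
assert beta.max() < beta_bar

# empirical distribution function of beta_i at i = horizon (close to the limit of Thm 2(iii))
xs    = np.linspace(0.0, beta_bar, 6)
F_emp = [(beta <= x).mean() for x in xs]
for x, F in zip(xs, F_emp):
    print(f"Pr(beta_40 <= {x:.2f}) ~ {F:.3f}")
```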
The simple form of (10) facilitates the approximate computation of the distribution of β_i. Consider for example a numerical integration scheme based on a set of points {x_j, j = 0, 1, ..., p} in the interval [0, β̄] with

0 = x_0 < x_1 < ... < x_p = β̄.    (14)

Let π_{i,j} be an approximation to F_{β_i}(x) in the interval x ∈ [x_j, x_{j+1}) for j = 0, ..., p−1, and let π_{i,p} = 1. Then, since the convolution in (12) can be written equivalently as

F_{β_{i+1}}(x) = λ ∫_0^{x/λ} F_{β_i}(y) f_α(x − λy) dy,    (15)

a generic quadrature method enables the vectors π_i = [π_{i,0} ... π_{i,p}]^T to be computed for i = 1, ..., N by setting π_{0,j} = F_{β_0}(x_j) for j = 0, ..., p and using the recursion

π_{i+1} = P π_i,  i = 0, 1, ..., N−1,    (16)

where the (p+1) × (p+1) elements of the matrix P are determined by the particular numerical integration scheme employed and the density, f_α, of α. If the data points x_j satisfy |x_{j+1} − x_j| ≤ δ for j = 0, ..., p−1, then the approximation error can be made arbitrarily small if δ is sufficiently small, as we show below. We denote by F_{β_i,δ} the piecewise constant function with F_{β_i,δ}(x) = π_{i,j} for x ∈ [x_j, x_{j+1}), j = 0, ..., p−1.

Lemma 3. For any finite horizon N we have F_{β_i,δ} → F_{β_i} as δ → 0 for i = 1, ..., N. Also F_{β_∞,δ} → F_{β_∞} as δ → 0, where F_{β_∞,δ} is the piecewise constant function defined by π_∞, the eigenvector of P associated with eigenvalue 1.

Proof: By Assumptions 1-2 and Theorem 2, the integrand in (15) is piecewise continuous for i = 0 and continuous for i > 0. It can therefore be shown that, as δ → 0,

max_x |F_{β_i,δ}(x) − F_{β_i}(x)| = O(mδ^c) + O(iδ^{c'}),    (17)

where m denotes the number of discontinuities in F_{β_0} and c, c' ≥ 1 are constants dictated by the numerical integration scheme employed. This implies F_{β_i,δ} → F_{β_i} as δ → 0, since m and N are finite. Combining the error bounds used to derive (17) with π_{i+1} = P π_i, we have, as δ → 0,

F_{β_{i+1},δ}(x) = λ ∫_0^{x/λ} F_{β_i,δ}(y) f_α(x − λy) dy + O(δ^{c'}),

and since π_∞ satisfies π_∞ = P π_∞, it follows that, as δ → 0,

F_{β_∞,δ}(x) = λ ∫_0^{x/λ} F_{β_∞,δ}(y) f_α(x − λy) dy + O(δ^{c'}).    (18)

But by Theorem 2, F_{β_∞} is the unique solution of

F_{β_∞}(x) = λ ∫_0^{x/λ} F_{β_∞}(y) f_α(x − λy) dy.

Therefore (18) implies F_{β_∞,δ} → F_{β_∞} as δ → 0. □

Lemma 3 shows that the eigenvector π_∞ provides an approximation to the steady state distribution of (10) despite the linear growth of the error bound in (17) with N. In the sequel we assume that δ is chosen sufficiently small that the approximation errors associated with π_i for i = 0, ..., N and π_∞ may be neglected.

Remark 1. The matrix T^{-1} P T, where T is a lower-triangular matrix of 1's, is the transition matrix associated with a Markov chain. The (j, k)th element of T^{-1} P T gives the probability of x_{j−1} ≤ β_{i+1} < x_j given x_{k−1} ≤ β_i < x_k. The elements of T^{-1} P T are therefore non-negative and each column sums to 1, so P necessarily [16] has an eigenvalue equal to 1.

Remark 2. The definitions of P and π_0 ensure that each π_i generated by (16) belongs to the set S = {π ∈ R^{p+1} : 0 ≤ π_0 ≤ ... ≤ π_p = 1}.

We next show how the sequence {π_i} can be used to construct ellipsoidal sets that contain the predicted state of (7) with a specified level of confidence. For a given sequence {x_j} satisfying (14), and for any π ∈ S and p ∈ (0, 1], let ind(π, p) and b(π, p) denote the functions that extract respectively the index and the value of x_j corresponding to a confidence level p:

ind(π, p) = min{j : π_j ≥ p},    (19a)
b(π, p) = x_j,  j = ind(π, p).    (19b)

Theorem 4. The i-step-ahead prediction error e_{k+i} satisfies e_{k+i} ∈ E(V, b(π_i, p)) with probability p.

Proof: This follows from e_{k+i} ∈ E(V, β_i), since, if π_0 is determined using the known distribution for β_0, then by construction β_i ≤ b(π_i, p) holds with probability p. □

Theorem 4 implies that the predicted values of e_{k+i} lie inside tubes with ellipsoidal cross sections (defined by V and b(π_i, p)) with probability p; hence we refer to these tubes as stochastic tubes. Although the stochastic tubes apply to e_{k+i}, they can be referred to x_{k+i} through the use of (5), which translates their centres to the nominal state prediction z_{k+i}.
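A minimal numerical sketch of this construction is given below. It assumes a uniform density for α on [0, ᾱ] (purely for illustration) and builds one possible quadrature matrix P: with F_{β_i} held piecewise constant on each subinterval, the contribution of the segment [x_k, x_{k+1}) to (15) evaluated at x_j is exactly π_{i,k}[F_α(x_j − λx_k) − F_α(x_j − λx_{k+1})]. The functions ind and b implement (19a)-(19b); a small tolerance is used when testing π_j ≥ p to guard against floating-point rounding.

```python
import numpy as np

lam, alpha_bar = 0.6, 1.0
beta_bar = alpha_bar / (1.0 - lam)            # upper bound beta_bar from Theorem 2
n_sub = 500                                   # number of subintervals (p in the paper)
xs = np.linspace(0.0, beta_bar, n_sub + 1)    # grid 0 = x_0 < ... < x_p = beta_bar, cf. (14)

def F_alpha(s):
    # distribution function of alpha; uniform on [0, alpha_bar] assumed for illustration
    return np.clip(s / alpha_bar, 0.0, 1.0)

# One possible quadrature matrix P for the recursion (16), built segment by segment
P = np.zeros((n_sub + 1, n_sub + 1))
for j, xj in enumerate(xs):
    P[j, :-1] = F_alpha(xj - lam * xs[:-1]) - F_alpha(xj - lam * xs[1:])
    P[j, -1]  = F_alpha(xj - lam * xs[-1])

def ind(pi, prob, tol=1e-9):   # (19a): smallest index j with pi_j >= prob
    return int(np.argmax(pi >= prob - tol))

def b(pi, prob):               # (19b): corresponding grid value x_j
    return xs[ind(pi, prob)]

# beta_0 = 0 (deterministic), so F_{beta_0}(x) = 1 for all x >= 0 and pi_0 = [1, ..., 1]
pi = np.ones(n_sub + 1)
for i in range(1, 9):
    pi = P @ pi                               # pi_i = P^i pi_0
    print(f"i = {i}:  b(pi_i, 0.8) = {b(pi, 0.8):.4f}")

# steady-state distribution: eigenvector of P for eigenvalue 1 (Lemma 3, Remark 1)
evals, evecs = np.linalg.eig(P)
pi_inf = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi_inf = pi_inf / pi_inf[-1]                  # normalise so that the last entry equals 1
print("asymptotic robust scaling b(pi_inf, 1) =", round(b(pi_inf, 1.0), 4))
```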
IV. MPC STRATEGY

To avoid an infinite dimensional optimization problem, we adopt the usual dual prediction mode MPC paradigm [13]: at time k, mode 1 consists of the first N steps of the prediction horizon, over which the control moves c_{k+i} in (6)-(7) are free, and mode 2 comprises the remainder of the horizon, with c_{k+i} = 0 for all i ≥ N. Because of the additive uncertainty appearing in (1), the quadratic cost (4) is unbounded. Hence the modified cost of [3] is used:

J_k = Σ_{i=0}^{∞} [E_k(L_{k+i}) − L_ss],  L_ss = lim_{i→∞} E_k(L_{k+i}),    (20)

where L_{k+i} = x_{k+i}^T Q x_{k+i} + u_{k+i}^T R u_{k+i}. This cost is shown in [3] to be a quadratic function of the vector c_k of free variables {c_{k+i}, i = 0, ..., N−1} at time k:

J_k = [x_k^T  c_k^T  1] Θ [x_k^T  c_k^T  1]^T,    (21)

for a suitably defined positive definite matrix Θ. Constraints are invoked explicitly in mode 1 and implicitly, through the use of a terminal set, S, in mode 2.

A. Constraint handling in mode 1

The distribution of β_0 is dictated by the information available on the plant state at the beginning of the prediction horizon. Assuming that x_k is known at time k, we set z_k = x_k, e_k = 0 and β_0 = 0, so that F_{β_0}(x) = 1 for all x ≥ 0. Then, using v_{i|k} to denote the prediction of a variable v at time k+i made at time k, we have from (7)

e_{k|k} = 0,  e_{k+i|k} = Φ e_{k+i−1|k} + B_w w_{k+i−1},  ∀ i ≥ 1.

These predictions are independent of k, so the sets that define the stochastic tube containing e_{k+i|k} can be computed offline, namely e_{k+i|k} ∈ E(V, b(π_{i|0}, p)) with probability p, where π_{i|0} = P^i π_0, i = 1, 2, ..., and π_0 = [1 ... 1]^T. In this setting the probabilistic constraint of (3) can be enforced as follows.

Lemma 5. The constraint Pr(g^T x_{k+i|k} ≤ h) ≥ p for p > 0.5 is ensured by the condition

g^T z_{k+i|k} ≤ h − [b(π_{i|0}, q)]^{1/2} (g^T V^{-1} g)^{1/2},    (22)

where q = 2p − 1, with z_{k|k} = x_k and z_{k+i+1|k} = Φ z_{k+i|k} + B c_{k+i|k}.

Proof: Assume that the distribution of g^T e_{k+i|k} is symmetric about 0 (note that there is no loss of generality in this assumption because of the symmetric bound employed in (8)). Then g^T z_{k+i|k} + g^T e_{k+i|k} ≤ h holds with probability p > 0.5 if and only if g^T z_{k+i|k} + |g^T e_{k+i|k}| ≤ h holds with probability q = 2p − 1. The sufficient condition (22) then follows from e_{k+i|k} ∈ E(V, b(π_{i|0}, q)) and max_{e ∈ E(V,β)} |g^T e| = β^{1/2} (g^T V^{-1} g)^{1/2}. □

Note that the replacement of |g^T e| by its maximum value b^{1/2} (g^T V^{-1} g)^{1/2} describes the process of constraint tightening, whereby the constraint g^T z_{k+i|k} ≤ h on the nominal predicted state is replaced by the tighter condition given in (22).
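As a small illustration of the tightening in (22), the following sketch converts the soft constraint bound h into the tightened bounds applied to the nominal predictions. The values of g, h, V and of the scalings b(π_{i|0}, q) are hypothetical; in practice the latter come from the offline computations of Section III.

```python
import numpy as np

# Hypothetical data illustrating the constraint tightening (22)
g, h, p = np.array([1.0, 0.5]), 2.0, 0.8
q = 2 * p - 1                                     # q = 2p - 1, as in Lemma 5
V = np.array([[0.8, 0.3],
              [0.3, 0.2]])
b_q = [0.010, 0.035, 0.052, 0.060, 0.063, 0.064]  # assumed values of b(pi_{i|0}, q), i = 1..6

# max over E(V, b) of |g^T e| equals sqrt(b) * sqrt(g^T V^{-1} g)
gVg = float(g @ np.linalg.solve(V, g))
for i, b in enumerate(b_q, start=1):
    h_tight = h - np.sqrt(b * gVg)
    print(f"i = {i}:  impose g^T z_(k+{i}|k) <= {h_tight:.4f}  (original bound h = {h})")
```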
Although (22), when invoked for i = 1, ..., N, ensures that at time k the predicted sequence {z_{k+i|k}} satisfies (3), it does not ensure recursive feasibility, namely that there exists a predicted sequence at time k+1 satisfying (3). This is because constraints of the form Pr(g^T x_{k+i|k} ≤ h) ≥ p do not guarantee that g^T x_{k+i|k+1} ≤ h holds with probability p. However the future state is necessarily contained at all times within the robust tube associated with upper bounds on β_i:

e_{k+i|k} ∈ E(V, b(π_{i|0}, 1)),  i ≥ 1,    (23)

with probability 1. The stochastic tubes that define constraints such as (22) at times k+1, k+2, ... are therefore based on initial distributions for β which can be inferred from the robust tube at time k, and this enables the construction of constraints that ensure recursive feasibility. Let π_{i|j} denote the discrete approximation to the distribution of β_i that is obtained if β_j (j ≤ i) is equal to the upper bound b(π_{j|0}, 1), so that F_{β_j}(x) = 0 for x ∈ [0, b(π_{j|0}, 1)) and F_{β_j}(x) = 1 for x ≥ b(π_{j|0}, 1). Then applying (16) for i > j ≥ 1 gives

π_{i|j} = P^{i−j} π_{j|j},  π_{j|j} = u(ind(π_{j|0}, 1)),    (24)

where u(j) = [0 ... 0 1 ... 1]^T denotes the jth column of the lower triangular matrix of 1's.

Lemma 6. For any i > j, the value of e_{k+i|k+j} predicted at time k (given x_k) satisfies

e_{k+i|k+j} ∈ E(V, b(π_{i|j}, p)) with probability p.    (25)

Proof: From e_{k+j+1} = Φ e_{k+j} + B_w w_{k+j} and (23), the prediction at time k of e_{k+j+1|k+j} satisfies

e_{k+j+1|k+j} ∈ Φ E(V, b(π_{j|0}, 1)) ⊕ E(V, b(π_{1|0}, p))  w.p. p,

where ⊕ denotes Minkowski addition. But by Lemma 1 and (24) we have Φ E(V, b(π_{j|0}, 1)) ⊕ E(V, b(π_{1|0}, p)) ⊆ E(V, b(π_{j+1|j}, p)), which establishes (25) for i = j+1. Similarly

e_{k+j+2|k+j} ∈ Φ E(V, b(π_{j+1|j}, p)) ⊕ E(V, b(π_{1|0}, p))  w.p. p,

so e_{k+j+2|k+j} ∈ E(V, b(π_{j+2|j}, p)) with probability p, and (25) follows by induction for all i > j. □

A simple way to ensure the recurrence of feasibility is to require the probabilistic constraints (3) to hold for all future predictions. Lemma 6 shows that the worst case predictions of (25) for j = 1, ..., i−1 can be handled by selecting at each prediction time the largest scaling:

β̃_i(q) = max{b(π_{i|0}, q), ..., b(π_{i|i−1}, q)},    (26)

and by replacing b(π_{i|0}, q) with β̃_i(q) in (22):

g^T z_{k+i|k} ≤ h − [β̃_i(q)]^{1/2} (g^T V^{-1} g)^{1/2},  q = 2p − 1.    (27)

The sets E(V, β̃_i(p)) for i = 1, 2, ... define the cross sections of a recurrently feasible stochastic tube. The preceding argument is summarized as follows.

Theorem 7. If at k = 0 there exists c_0 satisfying (27) for all i > 0, then for all k > 0 there exists c_k satisfying (27) for all i > 0, and hence the probabilistic constraint (3) is feasible for all k ≥ 0.

Proof: Follows directly from Lemma 5 and Lemma 6. □

Remark 3. Note that the computation of recurrently feasible stochastic tubes does not depend on the information available at time k, and hence can be performed offline. Consequently the online computation does not depend on the dimension of π, so this can be chosen sufficiently large that the approximation errors discussed in Lemma 3 are negligible.
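The worst-case scalings (26) can be tabulated offline, as the sketch below illustrates. It reuses the same illustrative assumptions as the earlier sketches (uniform α on [0, ᾱ], β_0 = 0 and the segment-wise quadrature matrix P); the helper u(j) builds the step distribution used in (24).

```python
import numpy as np

# Offline tabulation of the recursively feasible scalings beta_tilde_i(q) of (26),
# under illustrative assumptions (uniform alpha on [0, alpha_bar], beta_0 = 0).
lam, alpha_bar, p = 0.6, 1.0, 0.8
q = 2 * p - 1
beta_bar = alpha_bar / (1.0 - lam)
xs = np.linspace(0.0, beta_bar, 301)
F_alpha = lambda s: np.clip(s / alpha_bar, 0.0, 1.0)

P = np.zeros((xs.size, xs.size))
for j, xj in enumerate(xs):
    P[j, :-1] = F_alpha(xj - lam * xs[:-1]) - F_alpha(xj - lam * xs[1:])
    P[j, -1]  = F_alpha(xj - lam * xs[-1])

ind = lambda pi, prob: int(np.argmax(pi >= prob - 1e-9))   # (19a) with rounding tolerance
b   = lambda pi, prob: xs[ind(pi, prob)]                   # (19b)

def u(j):
    """Step distribution of (24): zeros below index j, ones from index j upwards."""
    v = np.zeros(xs.size)
    v[j:] = 1.0
    return v

N = 8
pi_i0 = [np.ones(xs.size)]                 # pi_{0|0}, corresponding to beta_0 = 0
for _ in range(N):
    pi_i0.append(P @ pi_i0[-1])            # pi_{i|0} = P^i pi_0

beta_tilde = []
for i in range(1, N + 1):
    cand = [b(pi_i0[i], q)]                                              # j = 0 term of (26)
    for j in range(1, i):
        pi_ij = np.linalg.matrix_power(P, i - j) @ u(ind(pi_i0[j], 1.0)) # (24)
        cand.append(b(pi_ij, q))
    beta_tilde.append(max(cand))                                         # (26)

print("beta_tilde_i(q), i = 1..8:", np.round(beta_tilde, 4))
```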
B. Constraint handling in mode 2

The constraint handling framework of mode 1 can also be used to define a terminal constraint, z_{k+N|k} ∈ S, which ensures that (27) is satisfied for all i > N. Since (27) constitutes a set of linear constraints on the deterministic predicted trajectory {z_{k+i|k}}, the infinite horizon of mode 2 can be accounted for using techniques similar to those deployed in [7] for the computation of maximal invariant sets. Before describing the construction of S, we first derive some fundamental properties of the constraints (27).

Lemma 8. The scaling of the tube cross section defined in (26) satisfies β̃_i(q) ≤ β̃_{i+1}(q) for all i and all q ∈ [0, 1], and converges as i → ∞ to a limit which is bounded by the asymptotic scaling β̂_∞ of the robust tube:

lim_{i→∞} β̃_i(q) ≤ β̂_∞.    (28)

Proof: The monotonic increase with i of the scaling b(π_{i|0}, 1) of the robust tube in (23) and the definition of π_{j|j} in terms of the robust tube in (24) imply (by Lemma 1) that b(P^{i−j} π_{j|j}, q) ≤ b(P^{i−j} π_{j+1|j+1}, q), and hence b(π_{i|j}, q) ≤ b(π_{i+1|j+1}, q) for any i > j. This implies β̃_i(q) ≤ β̃_{i+1}(q) for all i, and since β̃_i(q) ≤ b(π_{i|0}, 1) for all i, β̃_i(q) converges as i → ∞ to a limit no greater than the asymptotic value of the robust tube scaling, β̂_∞. □

Theorem 9. If S is defined so that (27) holds for all i > N whenever z_{k+N|k} ∈ S, then a sufficient condition for S to be non-empty is

h > [β̂_∞]^{1/2} (g^T V^{-1} g)^{1/2}.    (29)

Proof: Satisfaction of the constraints (27) over the infinite horizon of mode 2 requires that

g^T Φ^i z ≤ h − [β̃_{N+i}(q)]^{1/2} (g^T V^{-1} g)^{1/2},  i = 1, 2, ...,    (30)

where for convenience the initial state of mode 2 is labelled z. By Lemma 8, β̃_i(q) increases monotonically with i, and Φ is strictly stable by assumption, so (29) is obtained by taking the limit of (30) as i → ∞ and using the upper bound in (28). □

Lemma 8 implies that β̃_{N+i}(q) in (30) can be replaced by β̂_∞ for all i > N_2, for some finite N_2, in order to define the terminal set. In this setting, the terminal set is given by S = S_{n*}, where

S_ν = {z : g^T Φ^j z ≤ h − [β̃_{N+j}(q)]^{1/2} (g^T V^{-1} g)^{1/2}, j = 1, ..., N_2;  g^T Φ^j z ≤ h − [β̂_∞]^{1/2} (g^T V^{-1} g)^{1/2}, j = N_2+1, ..., ν}.    (31)

The constraints in (31) are necessarily conservative, but the implied constraint tightening can be made insignificant by choosing N_2 sufficiently large; this follows from the assumption that Φ is strictly stable. Although (31) involves an infinite number of inequalities, S has the form of a maximal admissible set. The procedure of [7] can therefore be used to determine an equivalent representation of S in terms of a finite number of inequalities, the existence of which is ensured if S is bounded. This involves solving a sequence of linear programs to find the smallest integer n* such that the inequality of (31) for j = n* + 1 is implied by those of S_{n*}, in which case S is described by the finite set of inequalities

S = S_{n*}.    (32)

The values of V and λ are determined offline via the optimization (33). For fixed λ, (33) is a semidefinite program (SDP) in the variable V^{-1}, and since λ is univariate and restricted to the interval (0, 1), (33) can be solved by solving successive SDPs for V^{-1} with alternate iterations of a univariate optimization (such as bisection) for λ.

Given the definition of the cost (21), the constraints (27), and the terminal set (31), the MPC algorithm is as follows.

Algorithm 1 (Stochastic MPC).
Offline: Compute V, λ via (33). Determine P in (16), and hence, for i = 1, ..., N + N_2, compute π_{i|j}, j = 0, ..., i−1, in (24) and the scalings β̃_i(q) in (26). Compute the smallest n* such that S in (31) can be expressed as (32).
Online: At each time-step k = 0, 1, ...:
1. solve the quadratic programming (QP) optimization

c_k* = arg min_{c_k} J_k  subject to (27) for i = 1, ..., N and z_{k+N|k} ∈ S;    (34)

2. implement u_k = K x_k + c_k*, where c_k* denotes the first element of the minimizing sequence.

Theorem 10. Given feasibility at k = 0, Algorithm 1 is feasible for all k ≥ 1. The closed loop system satisfies the probabilistic constraint (3) and the quadratic stability criterion

lim_{T→∞} (1/T) Σ_{k=0}^{T} E_0(L_k) ≤ L_ss.    (35)

Proof: Recurrence of feasibility follows from Theorem 7, which implies that the tail {c*_{k+1|k}, ..., c*_{k+N−1|k}, 0} of the minimizing sequence c_k* = {c*_{k|k}, ..., c*_{k+N−1|k}} of (34) at time k is feasible for (34) at time k+1. The feasibility of the tail also implies that the optimal cost satisfies the bound J_k* − E_k(J*_{k+1}) ≥ L_k − L_ss, which implies (35). □
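A sketch of the online optimization in step 1 of Algorithm 1 is given below, using CVXPY and hypothetical problem data. For brevity the terminal constraint z_{k+N|k} ∈ S is omitted and the exact cost matrix Θ of (21) is replaced by a simple finite-horizon surrogate, so this illustrates only the structure of the QP: nominal predictions driven by the free moves c, the tightened constraints (27), and a quadratic objective.

```python
import numpy as np
import cvxpy as cp

# Hypothetical data; beta_tilde would come from the offline computations of Algorithm 1.
A  = np.array([[1.2, 0.3], [0.0, 0.9]])
B  = np.array([[0.0], [1.0]])
K  = np.array([[-1.867, -1.2]])
Phi = A + B @ K
Q, R = np.eye(2), np.array([[1.0]])
g, h, N = np.array([1.0, 0.5]), 2.0, 6
V = np.array([[0.8, 0.3], [0.3, 0.2]])
beta_tilde = [0.012, 0.038, 0.055, 0.062, 0.065, 0.066]   # assumed scalings from (26)
gVg = float(g @ np.linalg.solve(V, g))

def mpc_step(x):
    """One online step: solve a QP like (34) without the terminal set, return the first move."""
    c = cp.Variable(N)
    z, cost, cons = x, 0, []
    for i in range(N):
        u = K @ z + c[i]                      # u_{k+i|k} = K z_{k+i|k} + c_{k+i|k}, cf. (5)
        cost += cp.quad_form(u, R)            # input cost
        z = Phi @ z + B[:, 0] * c[i]          # nominal prediction (6)
        cost += cp.quad_form(z, Q)            # state cost on z_{k+i+1|k}
        # tightened constraint (27) on z_{k+i+1|k}
        cons.append(g @ z <= h - np.sqrt(beta_tilde[i] * gVg))
    cp.Problem(cp.Minimize(cost), cons).solve()
    return float(c.value[0])

x0 = np.array([1.0, 0.0])
c0 = mpc_step(x0)
print("first free move c_0* =", round(c0, 4), " applied input u_0 =", round(float(K @ x0) + c0, 4))
```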
V. ILLUSTRATIVE EXAMPLES

Example 1. Consider a 2nd order system of the form (1), subject to the probabilistic constraint

Pr(g^T x_k ≤ h) ≥ p,  g = [−1.75  1]^T,  h = 1.5,  p = 0.8.

The disturbance w_k has a truncated normal distribution, with zero mean, covariance E(w_k w_k^T) = W^{-1}/12² and componentwise bounds −2.68 ≤ (w_k)_{1,2} ≤ 2.68; the distribution of α is then a modified χ² distribution, with ᾱ = 0.1. The cost weights are Q = I, R = 5, and K in (5) is chosen as the LQG-optimal feedback for the unconstrained case: K = [1.518  −0.645]. The offline optimization of V, λ in (33) gives

V = [0.782  0.348; 0.348  0.168].

The robust tube has asymptotic scaling β̂_∞ = 0.170, and the interval (0, 0.178] is divided into 1000 equal subintervals to define {x_j} and hence compute P and β̃_i(q). Horizons N = 6, N_2 = 7 give n* = 0, and the scalings of the recurrently feasible stochastic tube are

{β̃_i(q)} = {0.81, 4.41, 6.09, 6.78, 7.06, 7.18, ..., 7.26} × 10^{-2}.

Algorithm 1 meets constraints with probability greater than 0.8, while allowing some constraint violations (17% at k = 1). As expected, LQG control achieves lower average cost

J_cl = lim_{T→∞} (1/T) Σ_{k=1}^{T} E(x_k^T Q x_k + u_k^T R u_k),

the average values over 200 realizations of the disturbance sequence being J_cl = 306 for LQG and J_cl = 330 for Algorithm 1, but this is at the expense of violating the probabilistic constraint (LQG had 100% violations at k = 1). Conversely, robust MPC (with p = 1) ensures zero violations, but achieves this at the expense of higher average cost: J_cl ≈ 360.

Fig. 1. Closed loop responses of Algorithm 1 and LQG control for 10³ disturbance sequence realizations, showing ellipsoidal tube cross-sections predicted at k = 0 for p = 1 and p = 0.8.

The online optimization is a QP with 6 optimization variables and 13 constraints, requiring an average CPU time of 3.5 ms (Matlab, 2.4 GHz processor). By contrast, the approach of [20] requires the solution of a second order cone program (SOCP), which for the same number of free variables and constraints requires an average of 105 ms CPU time.

Example 2. This example considers the DC-DC converter system described in [9]. For the operating point defined by i⁰ = 0.380 A, v⁰ = −16 V, u⁰ = 0.516, sampling interval T = 0.65 ms and problem parameters V_in = 15 V, R = 85 Ω, L = 4.2 mH, C = 2200 µF, the state space model is obtained by linearization about this operating point. The LQG-optimal feedback gain for Q = diag{1, 10}, R = 1 is K = [0.211  0.0024]. Fluctuations in the inductor current, which arise due to random variations in the source input voltage, are subject to the probabilistic constraint defined by Pr([1  0](x_k − x⁰) ≤ 2) ≥ 0.8. For additive uncertainty with covariance W^{-1}/25² and ᾱ = 0.02, λ and V are determined by the solution of (33). The robust tube has asymptotic scaling β̂_∞ = 1.02, and a horizon of N = 8 was employed. For initial condition x_0 − x⁰ = (2.5, 2.8) and 10³ disturbance sequence realizations, Algorithm 1 produced an average cost of J_cl = 221, as compared with J_cl = 280 for robust MPC (with p = 1) and J_cl = 166 for LQG in closed loop operation. Robust MPC gave no constraint violations (Fig. 2), which is clearly over-cautious given the limit of 20% violations at each sample, whereas Algorithm 1 (Fig. 1) allowed constraint violations with a frequency of 14.4% over k = 1, ..., 6, and LQG ignored the constraint (100% violation rate for k ≤ 3).

Fig. 2. Closed loop responses of robust MPC and LQG control for 10³ disturbance sequence realizations.
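The closed-loop statistics reported in the examples (average cost J_cl and the frequency of constraint violations at each step) can be estimated with a simple Monte Carlo harness of the kind sketched below. The system data are hypothetical, and a plain feedback u = Kx stands in for the receding-horizon law of Algorithm 1, so this only illustrates how such figures are computed.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data; the controller below is a placeholder for Algorithm 1 (u = K x + c_0*).
A  = np.array([[1.2, 0.3], [0.0, 0.9]])
B  = np.array([[0.0], [1.0]])
Bw = np.eye(2)
K  = np.array([[-1.867, -1.2]])
Q, R = np.eye(2), 1.0
g, h = np.array([1.0, 0.5]), 2.0

def simulate(x0, T=50, n_runs=2000):
    """Average stage cost (estimate of J_cl) and violation frequency at each step."""
    costs, viol = np.zeros(n_runs), np.zeros(T)
    for r in range(n_runs):
        x = np.array(x0, dtype=float)
        for k in range(T):
            u = float(K @ x)                                  # replace by Algorithm 1's input
            costs[r] += x @ Q @ x + R * u * u
            w = np.clip(rng.normal(scale=0.05, size=2), -0.134, 0.134)  # truncated at ~2.68 sigma
            x = A @ x + B[:, 0] * u + Bw @ w
            viol[k] += g @ x > h
    return costs.mean() / T, viol / n_runs

Jcl, freq = simulate([1.0, 0.0])
print("average cost per step:", round(Jcl, 2))
print("violation frequency at k = 1..5:", np.round(freq[:5], 3))
```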
VI. CONCLUSIONS

The proposed stochastic MPC strategy handles probabilistic constraints with an online computational load similar to that of nominal MPC. This is achieved by fixing the cross sectional shape of a tube containing predicted trajectories, but allowing the centres and scalings of the cross sections to vary. The resulting MPC law ensures recursive feasibility and convergence.

REFERENCES

[1] H. Arellano-Garcia and G. Wozny. Chance constrained optimization of process systems under uncertainty: I. Strict monotonicity. Computers & Chemical Engineering, 33(10):1568-1583, 2009.
[2] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. SIAM, 1994.
[3] M. Cannon, B. Kouvaritakis, and D. Ng. Probabilistic tubes in linear stochastic model predictive control. Systems & Control Letters, 58(10):747-753, 2009.
[4] M. Cannon, B. Kouvaritakis, and X. Wu. Probabilistic constrained MPC for multiplicative and additive stochastic uncertainty. IEEE Trans. Automatic Control, 54(7):1626-1632, 2009.
[5] M. Cannon, D. Ng, and B. Kouvaritakis. Successive linearization NMPC for a class of stochastic nonlinear systems. In Nonlinear Model Predictive Control, volume 384 of Lecture Notes in Control and Information Sciences. Springer, 2009.
[6] W. Feller. An Introduction to Probability Theory and Its Applications, volume 2. John Wiley, 1971.
[7] E.G. Gilbert and K.T. Tan. Linear systems with state and control constraints: The theory and application of maximal output admissible sets. IEEE Trans. Automatic Control, 36(9):1008-1020, 1991.
[8] D.E. Kassmann, T.A. Badgwell, and R.B. Hawkins. Robust steady-state target calculation for model predictive control. AIChE Journal, 46(5):1007-1024, 2000.
[9] M. Lazar, W. Heemels, B. Roset, H. Nijmeijer, and P. van den Bosch. Input-to-state stabilizing sub-optimal NMPC with an application to DC-DC converters. Int. J. Robust Nonlinear Control, 18(8):890-903, 2008.
[10] Y.I. Lee and B. Kouvaritakis. Robust receding horizon predictive control for systems with uncertain dynamics and input saturation. Automatica, 36(10):1497-1505, 2000.
[11] Y.I. Lee, B. Kouvaritakis, and M. Cannon. Constrained receding horizon predictive control for nonlinear systems. Automatica, 38(12), 2002.
[12] P. Li, M. Wendt, and G. Wozny. A probabilistically constrained model predictive controller. Automatica, 38(7):1171-1176, 2002.
[13] D.Q. Mayne, J.B. Rawlings, C.V. Rao, and P.O.M. Scokaert. Constrained model predictive control: Stability and optimality. Automatica, 36(6):789-814, 2000.
[14] D.Q. Mayne, M.M. Seron, and S.V. Raković. Robust model predictive control of constrained linear systems with bounded disturbances. Automatica, 41(2):219-224, 2005.
[15] Z.K. Nagy and R.D. Braatz. Robust nonlinear model predictive control of batch processes. AIChE Journal, 49(7):1776-1786, 2003.
[16] J.R. Norris. Markov Chains. Cambridge University Press, 1997.
[17] A. Papoulis. Probability, Random Variables and Stochastic Processes. McGraw-Hill, 1985.
[18] S.V. Raković and M. Fiacchini. Approximate reachability analysis for linear discrete time systems using homothety and invariance. In Proc. 17th IFAC World Congress, Seoul, 2008.
[19] S.V. Raković, A.R. Teel, D.Q. Mayne, and A. Astolfi. Simple robust control invariant tubes for some classes of nonlinear discrete time systems. In Proc. IEEE Conf. Decision and Control, pages 6397-6402, 2006.
[20] A. Schwarm and M. Nikolaou. Chance-constrained model predictive control. AIChE Journal, 45(8):1743-1752, 1999.
[21] D.H. van Hessem and O.H. Bosgra. A conic reformulation of model predictive control including bounded and stochastic disturbances under state and input constraints. In Proc. IEEE Conf. Decision and Control, 2002.
[22] Y.J. Wang and J.B. Rawlings. A new robust model predictive control method I: Theory and computation. J. Process Control, 14(3):231-247, 2004.
[23] J. Yan and R.R. Bitmead. Incorporating state estimation into model predictive control and its application to network traffic control. Automatica, 41(4):595-604, 2005.
