
Model Predictive Control using MATLAB

Midhun T. Augustine
Department of Electrical Engineering
Indian Institute of Technology Delhi, India
[email protected]
arXiv:2309.00293v1 [math.OC] 1 Sep 2023

Date of initial version: 15 - 12 - 2021


Date of current version: 01 - 09 - 2023

Abstract
This tutorial consists of a brief introduction to the modern control approach called model predictive
control (MPC) and its numerical implementation using MATLAB. We discuss the basic concepts and
numerical implementation of the two major classes of MPC: Linear MPC (LMPC) and Nonlinear
MPC (NMPC). This includes the various aspects of MPC such as formulating the optimization
problem, constraints handling, feasibility, stability, and optimality.

Keywords Optimal Control · Model Predictive Control · Numerical Optimization.

1 Introduction
MPC is a feedback control approach that uses model-based optimization for computing the control input. In MPC, a
model of the system along with the current state (measured or estimated) is used to predict the future behavior (states)
of the system, for a control input sequence over a short period. The predicted behavior is characterized by a cost
function which is a function of the predicted state and control sequence. Then an optimization algorithm is used to
find the control sequence which optimizes the predicted behavior or cost function. The first element of the control
sequence is applied to the system which gives the next state, and the algorithm is repeated at the next time instant,
which results in a receding horizon scheme. Model predictive control (MPC) is also known as receding horizon
control (RHC). The name MPC originated from the model-based predictions used for optimization, whereas the name
RHC comes from the receding horizon nature of the control scheme. MPC, which originated from the optimal control approach, has the following advantages over the latter:
1. It gives closed-loop control schemes whereas optimal control mostly results in open-loop control schemes.
2. MPC can handle complex systems such as nonlinear, higher-order, multi-variable, etc.
3. MPC can incorporate constraints easily.
Notations: N, Z and R denote the set of natural numbers, integers, and real numbers respectively. Rn stands for
n - dimensional Euclidean space and Rm×n refers to the space of m × n real matrices. Matrices and vectors are
represented by boldface letters (A, a), scalars by normal font (A, a), and sets by blackboard bold font (A, B, C). The
notation P > 0 (P ≥ 0) indicates that P is a real symmetric positive definite (semidefinite) matrix. Finally, I and 0 represent the identity matrix and the zero matrix of appropriate order.
MPC is associated with a number of terminologies, which are defined below:
1. Sampling time (T): It is the time difference between two consecutive state measurements or control updates.
In general T ∈ R+ , and for discrete-time systems T > 0, whereas for continuous-time systems T = 0.
2. Time horizon (NT ): It is the number of time instants the control input is applied to the system. In general,
NT ∈ N and if NT is infinity, the problem is called an infinite horizon problem, otherwise finite horizon
problem.
3. Prediction horizon (N ): It is the length of the prediction window over which the states are predicted and
optimized. In general N ∈ N and usually 2 ≤ N ≤ NT .
4. Control horizon (N_C): It is the length of the control window in which the control input is optimized, and normally N_C ≤ N. In this tutorial, we mainly focus on the case for which the control horizon is the same as the prediction horizon, i.e., N_C = N. If N_C < N, we optimize the control sequence over the first N_C instants and the remaining control inputs (of length N − N_C) are normally chosen as zero.

Figure 1: (a) MPC general block diagram. (b) Basic MPC strategy.

The general block diagram of a dynamical system with MPC is given in Fig. 1(a) and the basic strategy of MPC is
given in Fig. 1(b). In MPC during the current time instant k, we consider the optimization over the next N instants
where N is the prediction horizon, i.e., the optimization window is from k to k+N. This indicates that the optimization
window moves with time and this feature is called moving horizon or receding horizon. In MPC, during every time
instant, we compute the sequence of control inputs over the control horizon, which optimizes the future performance
of the system over the prediction horizon. Then the first element of the optimal control sequence is applied to the
system, which results in a receding horizon scheme. The first element of the control sequence and the next state under
the MPC scheme are represented in black color in Fig. 1(b). By repeating this at each time instant, we obtain the
control inputs and states with MPC over the time horizon.
Based on the nature of the system model used in the optimization, MPC can be grouped into the following two classes:

1. Linear MPC: For which the system model and the constraints are linear. The cost function can be linear
or quadratic which results in linear programming or quadratic programming problems which are convex
optimization problems.
2. Nonlinear MPC: For which the system model is nonlinear and the constraints can be either linear or nonlinear.
The cost function is usually chosen as a linear or quadratic function of states and control inputs which results
in a nonlinear programming problem that can be non-convex.

Another classification is based on the implementation of the MPC algorithm which results in the following categories:
1. Implicit MPC: This is also known as the traditional MPC in which the control input at each time instant is
computed by solving an optimization problem online. In this tutorial, we will be focusing on implicit MPC
which is the most general MPC scheme.
2. Explicit MPC: In this, the online computation is reduced by transferring the optimization problem offline. In
explicit MPC the state constraint set is divided into a finite number of regions and the optimization problem
is solved offline for each of the regions which gives the control input as a function of the state. This simplifies
the online computation to just a function evaluation.

When it comes to optimization the approaches can be classified into two categories:

1. Iterative approach: In which the elements of the decision vector are optimized together. Here the optimal
decision vector is computed iteratively by starting with an initial guess which is then improved in each it-
eration. Most of the linear programming and nonlinear programming algorithms are based on the iterative
approach.
2. Recursive approach: In which the elements of the decision vector are optimized recursively, i.e., one at a
time. The popular optimization algorithm which uses the recursive approach is dynamic programming. Even
though both the iterative approach and recursive approach are used in MPC, in this tutorial we focus on the
former.

2
M ODEL P REDICTIVE C ONTROL USING MATLAB

2 MPC of Linear Systems


In this section, we discuss the basic concept of Linear MPC (LMPC) and its numerical implementation.

2.1 LMPC: Problem Formulation

Consider the discrete-time linear time-invariant (LTI) system:


xk+1 = Axk + Buk (1)
where k ∈ T = {0, 1, ..., NT − 1} is the discrete time instant, xk ∈ X ⊆ Rn is the state vector, uk ∈ U ⊆ Rm is the
control input vector, A ∈ Rn×n is the system matrix and B ∈ Rn×m is the input matrix. The sets X and U are the
constraint sets for the states and control inputs which are usually represented by linear inequalities:
X = {x ∈ R^n : F_x x ≤ g_x}
U = {u ∈ R^m : F_u u ≤ g_u}.        (2)
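For instance, a box constraint such as −10 ≤ x_1 ≤ 10, −10 ≤ x_2 ≤ 10 fits the polyhedral form F_x x ≤ g_x. A minimal NumPy sketch of building such a set and testing membership (illustrative; not taken from the paper's MATLAB code):

```python
import numpy as np

# Box constraint -10 <= x_i <= 10 written as F_x x <= g_x:
# stack [I; -I] against [10*ones; 10*ones]
Fx = np.vstack([np.eye(2), -np.eye(2)])
gx = 10.0 * np.ones(4)

def in_X(x):
    """Check membership x in X = {x : F_x x <= g_x}."""
    return bool(np.all(Fx @ x <= gx))

print(in_X(np.array([3.0, -7.0])))   # True  (inside the box)
print(in_X(np.array([11.0, 0.0])))   # False (violates x1 <= 10)
```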

The cost function is chosen as a quadratic sum of the states and control inputs:
\[
J = x_{N_T}^T Q_{N_T} x_{N_T} + \sum_{k=0}^{N_T-1} \left( x_k^T Q x_k + u_k^T R u_k \right) \tag{3}
\]

where QNT ∈ Rn×n , Q ∈ Rn×n , R ∈ Rm×m are the weighting matrices used for relatively weighting the states
and control inputs, and are to be chosen such that Q_{N_T} ≥ 0, Q > 0, R > 0. The state and control sequences are defined as X = (x_0, x_1, ..., x_{N_T}), U = (u_0, u_1, ..., u_{N_T−1}), which contain the states and control inputs over the time horizon.
Now, the optimal control problem for the LTI system is defined as follows which is also known as the constrained
linear quadratic regulator (CLQR) problem:
Problem 1. For the linear system (1) with the initial state x_0, compute the control sequence U by solving the optimization problem
inf_U J
subject to U ∈ U^{N_T}, X ∈ X^{N_T+1}        (4)
x_{k+1} = Ax_k + Bu_k, k ∈ T.

As N_T → ∞, the problem is called the infinite-horizon constrained LQR. One can solve the constrained LQR with a large time horizon (N_T → ∞) using the MPC approach, which usually results in a suboptimal solution with less computation. MPC uses a prediction horizon N ≤ N_T (in practice N ≪ N_T), and at every time instant the control sequence for the next N instants is computed by minimizing the cost over those instants. The cost function for MPC with a prediction horizon N at time instant k is defined as
\[
J_k = x_{k+N|k}^T Q_N x_{k+N|k} + \sum_{i=k}^{k+N-1} \left( x_{i|k}^T Q x_{i|k} + u_{i|k}^T R u_{i|k} \right) \tag{5}
\]

in which x_{i|k}, u_{i|k} denote the state and control input at time instant i as predicted or computed at time instant k. Note that here k denotes the time instants within the time horizon and i denotes the time instants within the prediction horizon. Similarly, the state and control sequences for the MPC at time instant k are defined as X_k = (x_{k|k}, x_{k+1|k}, ..., x_{k+N|k}), U_k = (u_{k|k}, u_{k+1|k}, ..., u_{k+N−1|k}). Then the MPC problem for linear systems is defined as follows:
Problem 2. For the linear system (1) with the current state xk|k = xk given, compute the control sequence Uk , by
solving the optimization problem
inf_{U_k} J_k
subject to U_k ∈ U^N, X_k ∈ X^{N+1}, k ∈ T        (6)
x_{i+1|k} = Ax_{i|k} + Bu_{i|k}, k ∈ T, i = k, ..., k+N−1.


2.2 LMPC: Algorithm


Here we represent the MPC optimization problem as a quadratic programming problem. From the solution of the state equation for LTI systems, we obtain

\[
\begin{bmatrix} x_{k|k} \\ x_{k+1|k} \\ \vdots \\ x_{k+N|k} \end{bmatrix}
=
\begin{bmatrix} I \\ A \\ \vdots \\ A^N \end{bmatrix} x_k
+
\begin{bmatrix} 0 & 0 & \cdots & 0 \\ B & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ A^{N-1}B & A^{N-2}B & \cdots & B \end{bmatrix}
\begin{bmatrix} u_{k|k} \\ u_{k+1|k} \\ \vdots \\ u_{k+N-1|k} \end{bmatrix}. \tag{7}
\]
By defining the matrices

\[
X_k = \begin{bmatrix} x_{k|k} \\ x_{k+1|k} \\ \vdots \\ x_{k+N|k} \end{bmatrix}, \quad
U_k = \begin{bmatrix} u_{k|k} \\ u_{k+1|k} \\ \vdots \\ u_{k+N-1|k} \end{bmatrix}, \quad
A_X = \begin{bmatrix} I \\ A \\ \vdots \\ A^N \end{bmatrix}, \quad
B_U = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ B & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ A^{N-1}B & A^{N-2}B & \cdots & B \end{bmatrix} \tag{8}
\]

equation (7) is rewritten as

\[
X_k = A_X x_k + B_U U_k. \tag{9}
\]
This indicates that the predicted state sequence X_k can be represented as a function of the current state x_k and the input sequence U_k. Similarly, by defining

\[
Q_X = \begin{bmatrix} Q & \cdots & 0 & 0 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \cdots & Q & 0 \\ 0 & \cdots & 0 & Q_N \end{bmatrix}, \quad
R_U = \begin{bmatrix} R & 0 & \cdots & 0 \\ 0 & R & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & R \end{bmatrix} \tag{10}
\]

the cost function (5) can be represented in terms of X_k and U_k as

\[
J_k = X_k^T Q_X X_k + U_k^T R_U U_k. \tag{11}
\]
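The stacked prediction matrices of (8) can be built directly from A and B. The NumPy sketch below mirrors the structure of (8) (the paper's actual code is in MATLAB; this is an illustration) and verifies (9) against a step-by-step rollout of the dynamics:

```python
import numpy as np

def prediction_matrices(A, B, N):
    """Build A_X (stacked powers of A) and B_U (block lower-triangular
    matrix of A^i B terms) from equation (8), so X_k = A_X x_k + B_U U_k."""
    n, m = B.shape
    AX = np.vstack([np.linalg.matrix_power(A, i) for i in range(N + 1)])
    BU = np.zeros(((N + 1) * n, N * m))
    for i in range(1, N + 1):          # block row i predicts x_{k+i|k}
        for j in range(i):             # block column j multiplies u_{k+j|k}
            BU[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i-1-j) @ B
    return AX, BU

# sanity check: the stacked prediction must match a step-by-step rollout
A = np.array([[0.9, 0.2], [-0.4, 0.8]]); B = np.array([[0.1], [0.01]])
N = 5
AX, BU = prediction_matrices(A, B, N)
x = np.array([10.0, 5.0]); U = 0.1 * np.ones((N, 1))
X_pred = AX @ x + BU @ U.ravel()
x_roll = [x]
for i in range(N):
    x_roll.append(A @ x_roll[-1] + B @ U[i])
assert np.allclose(X_pred, np.concatenate(x_roll))
```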
Finally, by defining

\[
F_X = \begin{bmatrix} F_x & 0 & \cdots & 0 \\ 0 & F_x & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & F_x \end{bmatrix}, \quad
g_X = \begin{bmatrix} g_x \\ g_x \\ \vdots \\ g_x \end{bmatrix}, \quad
F_U = \begin{bmatrix} F_u & 0 & \cdots & 0 \\ 0 & F_u & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & F_u \end{bmatrix}, \quad
g_U = \begin{bmatrix} g_u \\ g_u \\ \vdots \\ g_u \end{bmatrix} \tag{12}
\]

the state and control constraints in (2) can be represented in terms of X_k and U_k as

F_X X_k ≤ g_X
F_U U_k ≤ g_U.        (13)
Now, by combining X_k and U_k, we can represent the cost function with a single decision vector. For that we define

\[
z = \begin{bmatrix} X_k \\ U_k \end{bmatrix}, \quad
H = \begin{bmatrix} Q_X & 0 \\ 0 & R_U \end{bmatrix}, \quad
F = \begin{bmatrix} F_X & 0 \\ 0 & F_U \end{bmatrix}, \quad
g = \begin{bmatrix} g_X \\ g_U \end{bmatrix}, \quad
F_{eq} = \begin{bmatrix} I & -B_U \end{bmatrix}, \quad
g_{eq} = A_X x_k \tag{14}
\]

using which we can rewrite the cost function (11) and the constraints (9), (13), and represent the optimization problem (6) as the quadratic programming problem

inf_z z^T H z
subject to F z ≤ g        (15)
F_{eq} z = g_{eq}
which can be solved using standard numerical optimization algorithms such as the steepest-descent method, Newton's method, etc. For faster convergence of the numerical optimization method, the optimal solution at the current instant can be used as the initial guess for the next instant. Note that here g_{eq} is a function of the state vector x_k; therefore, the current state information is required for solving the optimization problem. In MPC, this optimization problem is solved at each time instant k and the first element of U_k^* is applied to the system, i.e., the control input with MPC is

u_k = [U_k^*]_1 = u_{k|k}^*.        (16)
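When the inequality constraints happen to be inactive, (15) reduces to minimizing z^T H z subject to F_eq z = g_eq, which can be solved exactly through the KKT linear system [2H, F_eq^T; F_eq, 0][z; λ] = [0; g_eq]. A NumPy sketch of this reduced case, using the structure of (14) with Q = Q_N = I, R = 1 (illustrative; the paper itself calls fmincon):

```python
import numpy as np

def eq_qp_solve(H, Feq, geq):
    """Solve min z^T H z  s.t.  Feq z = geq via the KKT system
    [2H  Feq^T; Feq  0] [z; lam] = [0; geq]."""
    nz, nc = H.shape[0], Feq.shape[0]
    KKT = np.block([[2*H, Feq.T], [Feq, np.zeros((nc, nc))]])
    rhs = np.concatenate([np.zeros(nz), geq])
    return np.linalg.solve(KKT, rhs)[:nz]

# small instance with the structure of (14): z = [X_k; U_k]
A = np.array([[0.9, 0.2], [-0.4, 0.8]]); B = np.array([[0.1], [0.01]])
n, m, N = 2, 1, 3
AX = np.vstack([np.linalg.matrix_power(A, i) for i in range(N + 1)])
BU = np.zeros(((N+1)*n, N*m))
for i in range(1, N+1):
    for j in range(i):
        BU[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i-1-j) @ B
H = np.eye((N+1)*n + N*m)                 # Q = Q_N = I, R = 1  =>  H = I
Feq = np.hstack([np.eye((N+1)*n), -BU])   # F_eq = [I  -B_U] from (14)
xk = np.array([10.0, 5.0])
z = eq_qp_solve(H, Feq, AX @ xk)
# the dynamics constraint X_k = A_X x_k + B_U U_k must hold at the optimum
assert np.allclose(Feq @ z, AX @ xk)
```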


Note that this algorithm is based on the assumption that an optimal control sequence exists at each time instant. The existence of an optimal control sequence depends on the system model and constraints, and this will be discussed in the feasibility analysis section. The algorithm for linear MPC is given below:

Algorithm 1: LMPC
1: Require A, B, N_T, N, n, m, Q, R, Q_N, F_x, g_x, F_u, g_u
2: Initialize x_0, z_0
3: Construct A_X, B_U, Q_X, R_U, H, F, g
4: for k = 0 to N_T − 1 do
5:   Obtain x_k from measurement or estimation
6:   Compute F_{eq}, g_{eq}
7:   Compute z^* = [X_k^*; U_k^*] by solving the optimization problem (15)
8:   Apply u_k = [U_k^*]_1 to the system
9:   Update z_0 = z^*
10: end for

The optimization problem can be solved using the MATLAB function fmincon for constrained optimization problems, which takes the form

z^* = fmincon(f, z_0, F, g, F_{eq}, g_{eq}, lb, ub)        (17)

in which lb, ub are vectors containing the lower and upper bounds of each element of the decision vector z.

2.3 Reducing online computation


Here we discuss some methods for reducing the online computation, in which the basic idea is to reduce the number of optimization variables and constraints. The first method eliminates the states from the decision vector z. This method is useful when we have only control constraints, i.e., the state is unconstrained (x_k ∈ R^n) or the state constraints can be transferred to control constraints. From (11), the cost J_k is a function of the state sequence X_k and the control sequence U_k. Now, by substituting (9) in (11), we obtain
\[
\begin{aligned}
J_k &= (A_X x_k + B_U U_k)^T Q_X (A_X x_k + B_U U_k) + U_k^T R_U U_k \\
&= U_k^T (B_U^T Q_X B_U + R_U) U_k + 2 x_k^T A_X^T Q_X B_U U_k + x_k^T A_X^T Q_X A_X x_k \\
&= U_k^T H U_k + q_k^T U_k + r_k
\end{aligned} \tag{18}
\]

where H = B_U^T Q_X B_U + R_U, q_k^T = 2x_k^T A_X^T Q_X B_U, and r_k = x_k^T A_X^T Q_X A_X x_k. Therefore, we can represent the cost J_k as a function of the current state x_k and the control sequence U_k, in which U_k is the decision vector. Similarly, the constraint inequalities (13) can be rewritten as
 
F_X (A_X x_k + B_U U_k) ≤ g_X  ⟹  F_X B_U U_k ≤ g_X − F_X A_X x_k
F_U U_k ≤ g_U.        (19)

Now, by defining

\[
z = U_k, \quad F = \begin{bmatrix} F_X B_U \\ F_U \end{bmatrix}, \quad g = \begin{bmatrix} g_X − F_X A_X x_k \\ g_U \end{bmatrix}
\]

we can represent the optimization problem (15) as the quadratic programming problem

inf_z z^T H z + q_k^T z + r_k
subject to F z ≤ g.        (20)
Note that here the parameters q_k, r_k, and g are functions of x_k; therefore, the current state information is required for solving this optimization problem.
Another way to reduce the online computation is to use a control horizon N_C smaller than the prediction horizon N, which in turn reduces the number of optimization variables. In this case, we define the control sequence as U_k = (u_{k|k}, ..., u_{k+N_C−1|k}, 0, ..., 0), which reduces the number of decision variables in z to mN_C.
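With only input box constraints, the condensed problem (20) can be solved, for example, by projected gradient descent, since projection onto a box is a simple elementwise clamp. The solver below is not from the paper (which uses fmincon); it is a minimal NumPy sketch for the example system of Section 2.5:

```python
import numpy as np

def condensed_qp_box(H, q, lo, hi, iters=500):
    """Minimize U^T H U + q^T U subject to lo <= U <= hi (elementwise)
    by projected gradient descent; step 1/L with L = 2*lambda_max(H)."""
    L = 2 * np.linalg.eigvalsh(H).max()
    U = np.clip(np.zeros_like(q), lo, hi)
    for _ in range(iters):
        U = np.clip(U - (2*H @ U + q)/L, lo, hi)
    return U

# condensed matrices H = B_U^T Q_X B_U + R_U and q_k from (18), horizon N = 5
A = np.array([[0.9, 0.2], [-0.4, 0.8]]); B = np.array([[0.1], [0.01]])
N = 5
AX = np.vstack([np.linalg.matrix_power(A, i) for i in range(N + 1)])
BU = np.zeros((2*(N+1), N))
for i in range(1, N+1):
    for j in range(i):
        BU[2*i:2*i+2, j] = (np.linalg.matrix_power(A, i-1-j) @ B).ravel()
QX = np.eye(2*(N+1)); RU = np.eye(N)       # Q = Q_N = I_2, R = 1
H = BU.T @ QX @ BU + RU
xk = np.array([10.0, 5.0])
q = 2 * (AX @ xk) @ QX @ BU                # q_k^T = 2 x_k^T A_X^T Q_X B_U
U = condensed_qp_box(H, q, lo=-np.ones(N), hi=np.ones(N))
assert np.all(U >= -1 - 1e-9) and np.all(U <= 1 + 1e-9)
```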


2.4 LMPC: Set point tracking


So far, we have considered the stabilization problem in MPC, for which the reference x_r = 0. In this section we discuss the set point tracking problem, for which the reference x_r ≠ 0, and the objective is to track the nonzero set point. For a nonzero reference x_r, the steady-state value of the control input will be nonzero, i.e., u_r ≠ 0, and at steady state we have x_{k+1} = x_k = x_r. Substituting this in (1) gives

x_r = Ax_r + Bu_r  ⟹  u_r = B^{−1}(I − A)x_r        (21)
where B^{−1} is the pseudo-inverse of B. The set point tracking problem can be transferred to a stabilization problem by defining the error state and control x_{ek} = x_k − x_r, u_{ek} = u_k − u_r, and considering the error dynamics for the MPC design, which gives

\[
x_{e,k+1} = x_{k+1} − x_r = Ax_k + Bu_k − x_r = A(x_k − x_r) + Bu_k − (I − A)x_r = Ax_{ek} + B\left(u_k − B^{−1}(I − A)x_r\right) = Ax_{ek} + Bu_{ek}. \tag{22}
\]
Using the error state and control vectors, the constraints can be rewritten as

F_x x ≤ g_x  ⟹  F_x(x_{ek} + x_r) ≤ g_x  ⟹  F_x x_{ek} ≤ g_x − F_x x_r
F_u u ≤ g_u  ⟹  F_u(u_{ek} + u_r) ≤ g_u  ⟹  F_u u_{ek} ≤ g_u − F_u u_r.        (23)

Now, the matrices F_X, g_X, F_U, g_U can be defined as in (12) with g_x, g_u replaced by g_x − F_x x_r, g_u − F_u u_r. We define

\[
z = \begin{bmatrix} X_{ek} \\ U_{ek} \end{bmatrix} = \begin{bmatrix} X_k − X_r \\ U_k − U_r \end{bmatrix}
\]

and the optimization problem is obtained as in (15), solving which the optimal control input for the MPC problem is obtained as

u_k = [U_{ek}^*]_1 + u_r.        (24)
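As a quick numerical check of (21): for the example system of Section 2.5 with x_r = [3 2]^T, computing u_r = B^+(I − A)x_r with the Moore–Penrose pseudo-inverse reproduces the value u_r ≈ 0.59 reported there:

```python
import numpy as np

A = np.array([[0.9, 0.2], [-0.4, 0.8]])
B = np.array([[0.1], [0.01]])
xr = np.array([3.0, 2.0])

# u_r = B^+ (I - A) x_r, with B^+ the Moore-Penrose pseudo-inverse, as in (21)
ur = np.linalg.pinv(B) @ (np.eye(2) - A) @ xr
print(round(float(ur[0]), 2))   # 0.59
```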

2.5 LMPC: Numerical examples


We consider an LTI system with system and input matrices

\[
A = \begin{bmatrix} 0.9 & 0.2 \\ −0.4 & 0.8 \end{bmatrix}, \quad B = \begin{bmatrix} 0.1 \\ 0.01 \end{bmatrix}. \tag{25}
\]

The simulation parameters are chosen as N_T = 50, N = 5, Q = I_2, R = 1, and x_0 = [10 5]^T. The constraint set is defined as in (2) with
\[
F_x = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ −1 & 0 \\ 0 & −1 \end{bmatrix}, \quad
g_x = \begin{bmatrix} 10 \\ 10 \\ 10 \\ 10 \end{bmatrix}, \quad
F_u = \begin{bmatrix} 1 \\ −1 \end{bmatrix}, \quad
g_u = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \tag{26}
\]
which is equivalent to −10 ≤ x_{1k} ≤ 10, −10 ≤ x_{2k} ≤ 10, −1 ≤ u_k ≤ 1. The response of the LTI system with the MPC scheme is given in Fig. 2(a). The response shows that the states converge to the origin and the constraints are satisfied. Similarly, for the set-point tracking problem, the state reference is chosen as x_r = [3 2]^T, for which the steady-state control input obtained by solving (21) for the linear system (25) is u_r = 0.59, which satisfies the control constraints. The simulation response for the set-point tracking is given in Fig. 2(b), which shows that the state converges to the desired reference.

Figure 2: LMPC response. (a) Stabilization. (b) Set point tracking.
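The stabilization run of Fig. 2(a) can be approximated with a condensed-QP MPC loop. The sketch below enforces only the input constraint −1 ≤ u_k ≤ 1 via a projected-gradient solver (state constraints are not enforced in this sketch, and the solver differs from the paper's fmincon, so it is illustrative rather than a reproduction):

```python
import numpy as np

A = np.array([[0.9, 0.2], [-0.4, 0.8]]); B = np.array([[0.1], [0.01]])
NT, N = 50, 5
AX = np.vstack([np.linalg.matrix_power(A, i) for i in range(N + 1)])
BU = np.zeros((2*(N+1), N))
for i in range(1, N+1):
    for j in range(i):
        BU[2*i:2*i+2, j] = (np.linalg.matrix_power(A, i-1-j) @ B).ravel()
H = BU.T @ BU + np.eye(N)                 # Q = Q_N = I_2, R = 1
L = 2 * np.linalg.eigvalsh(H).max()       # gradient Lipschitz constant

x = np.array([10.0, 5.0])
for k in range(NT):
    q = 2 * (AX @ x) @ BU                 # condensed linear term (Q_X = I)
    U = np.zeros(N)
    for _ in range(300):                  # projected gradient, box |u| <= 1
        U = np.clip(U - (2*H @ U + q)/L, -1.0, 1.0)
    x = A @ x + B.ravel()*U[0]            # apply first element (receding horizon)

print(np.round(x, 3))                     # state driven near the origin
```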


3 MPC of Nonlinear Systems


In this section, we discuss the basic concept and numerical implementation of Nonlinear MPC (NMPC).

3.1 NMPC: Problem formulation

Consider the discrete-time nonlinear system defined by the state equation:


xk+1 = f(xk , uk ) (27)
where k ∈ {0, 1, ..., NT − 1} is the discrete time instant, xk ∈ X ⊆ Rn is the state vector, uk ∈ U ⊆ Rm is the input
vector and f : X × U → X is the nonlinear mapping which maps the current state xk to the next state xk+1 under
the control action uk . The constraint sets X and U are defined as in (2) and the cost function is chosen as a quadratic
function as in (5). Then the MPC problem for nonlinear systems is defined as follows:
Problem 3. For the nonlinear system (27) with the current state xk|k = xk , compute the control sequence Uk by
solving the optimization problem

inf_{U_k} J_k
subject to U_k ∈ U^N, X_k ∈ X^{N+1}, k ∈ T        (28)
x_{i+1|k} = f(x_{i|k}, u_{i|k}), k ∈ T, i = k, ..., k+N−1.

3.2 NMPC: Algorithm

By defining X_k and U_k as in (8), we can rewrite the cost function and constraints for the nonlinear MPC problem as

J_k = X_k^T Q_X X_k + U_k^T R_U U_k        (29)

and

F_X X_k ≤ g_X
F_U U_k ≤ g_U        (30)
f_{eq}(X_k, U_k) = 0

where

\[
f_{eq}(X_k, U_k) = \begin{bmatrix} x_{k|k} − x_k \\ x_{k+1|k} − f(x_{k|k}, u_{k|k}) \\ \vdots \\ x_{k+N|k} − f(x_{k+N−1|k}, u_{k+N−1|k}) \end{bmatrix}. \tag{31}
\]
Now, by defining z, H, F, g as in (14), the optimization problem is represented as the nonlinear programming problem

inf_z z^T H z
subject to F z ≤ g        (32)
f_{eq}(z) = 0.

Here the equality constraint is nonlinear, which makes the optimization problem a nonlinear programming problem. In MPC, this optimization problem is solved at every time instant k and the first element of U_k^* is applied to the system, i.e., the control input with MPC is

u_k = [U_k^*]_1 = u_{k|k}^*.        (33)
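The equality constraint f_eq in (31) vanishes exactly when the stacked decision variables encode a trajectory of the model. A NumPy sketch checking this for the pendulum model of Section 3.4 (illustrative helper, not from the paper's code):

```python
import numpy as np

def feq(Xk, Uk, xk, f):
    """Stacked equality constraint (31): residuals of the dynamics
    x_{i+1|k} = f(x_{i|k}, u_{i|k}) plus the initial condition x_{k|k} = x_k."""
    res = [Xk[0] - xk]
    for i in range(len(Uk)):
        res.append(Xk[i+1] - f(Xk[i], Uk[i]))
    return np.concatenate(res)

# simple pendulum model from Section 3.4 (M = B = l = 1, g = 9.8, T = 0.1)
def f(x, u):
    return np.array([x[0] + 0.1*x[1],
                     x[1] + 0.1*(-9.8*np.sin(x[0]) - x[1] + u)])

xk = np.array([2.0, 1.0]); N = 4
Uk = 0.05 * np.ones(N)
Xk = [xk]
for i in range(N):                        # roll the model forward
    Xk.append(f(Xk[i], Uk[i]))
# a true model trajectory satisfies the constraint exactly
assert np.allclose(feq(np.array(Xk), Uk, xk, f), 0.0)
```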
The algorithm for nonlinear MPC is summarized below:

Algorithm 2: NMPC
1: Require f, N_T, N, n, m, Q, R, Q_N, F_x, g_x, F_u, g_u
2: Initialize x_0, z_0
3: Construct Q_X, R_U, H, F, g
4: for k = 0 to N_T − 1 do
5:   Obtain x_k from measurement or estimation
6:   Compute z^* = [X_k^*; U_k^*] by solving the optimization problem (32)
7:   Apply u_k = [U_k^*]_1 to the system
8:   Update z_0 = z^*
9: end for


The optimization problem (32) can be solved using the MATLAB function for constrained optimization problems:

z^* = fmincon(f, z_0, F, g, [ ], [ ], lb, ub, f_{eq})        (34)

in which the empty arguments correspond to the linear equality constraints (absent here) and f_{eq} enters as the nonlinear constraint function.

3.3 NMPC: Set point tracking

Here we discuss the set point tracking problem for nonlinear systems, for which the reference x_r ≠ 0. The reference or steady-state value of the control input u_r is computed by solving the steady-state equation

x_r = f(x_r, u_r).        (35)

By defining the error state and control vectors as x_{ek} = x_k − x_r, u_{ek} = u_k − u_r, the constraints can be rewritten as in (23). Similarly, the equality constraint becomes

x_{e,k+1} = f(x_k, u_k) − x_r = f(x_{ek} + x_r, u_{ek} + u_r) − x_r.        (36)

Now, by defining

\[
z = \begin{bmatrix} X_{ek} \\ U_{ek} \end{bmatrix} = \begin{bmatrix} X_k − X_r \\ U_k − U_r \end{bmatrix}
\]

the optimization problem is obtained as in (32), solving which the optimal control input for the MPC problem is obtained as

u_k = [U_{ek}^*]_1 + u_r.        (37)

3.4 NMPC: Numerical examples


We consider the discrete-time model of the simple pendulum system, which is defined by the state equation

\[
x_{k+1} = f(x_k, u_k) = \begin{bmatrix} x_{1k} + T x_{2k} \\ x_{2k} + T\left( −\dfrac{g}{l}\sin(x_{1k}) − \dfrac{B}{Ml^2}x_{2k} + \dfrac{1}{Ml^2}u_k \right) \end{bmatrix} \tag{38}
\]

where M is the mass of the simple pendulum, B is the friction coefficient, l is the length of the pendulum, g is the acceleration due to gravity, and T is the sampling time. The system parameters are chosen as M = 1, B = 1, l = 1, g = 9.8, T = 0.1, and the simulation parameters are chosen as Q = I_2, R = 1, and x_0 = [2 1]^T. The constraint set parameters are defined as

\[
F_x = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ −1 & 0 \\ 0 & −1 \end{bmatrix}, \quad
g_x = \begin{bmatrix} 5 \\ 5 \\ 5 \\ 5 \end{bmatrix}, \quad
F_u = \begin{bmatrix} 1 \\ −1 \end{bmatrix}, \quad
g_u = \begin{bmatrix} 0.1 \\ 0 \end{bmatrix} \tag{39}
\]
which is equivalent to −5 ≤ x_{1k} ≤ 5, −5 ≤ x_{2k} ≤ 5, 0 ≤ u_k ≤ 0.1. The response of the simple pendulum with the MPC scheme is given in Fig. 3(a). The response shows that the states converge to the origin and the constraints are satisfied. Similarly, for the set-point tracking problem, the state reference is chosen as x_r = [0.5 0]^T, for which the steady-state control input obtained by solving (35) for the nonlinear system (38) is u_r = Mgl sin(x_{1r}) = 4.69. Hence, we set the maximum value of the control input as 5 for the set-point tracking problem. The simulation response for the set-point tracking is given in Fig. 3(b), which shows that the state converges to the desired reference.

Figure 3: NMPC response. (a) Stabilization. (b) Set point tracking.
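The steady-state input reported above can be verified numerically: substituting x_r = [0.5 0]^T into the pendulum model (38) and solving the second component of (35) for u_r gives the closed form u_r = Mgl sin(x_{1r}):

```python
import numpy as np

M, Bf, l, g, T = 1.0, 1.0, 1.0, 9.8, 0.1    # Bf: friction coefficient B

def f(x, u):
    """Discrete-time pendulum model (38)."""
    return np.array([x[0] + T*x[1],
                     x[1] + T*(-(g/l)*np.sin(x[0]) - Bf/(M*l**2)*x[1] + u/(M*l**2))])

xr = np.array([0.5, 0.0])
ur = M*g*l*np.sin(xr[0])          # closed form from x_r = f(x_r, u_r)
assert np.allclose(f(xr, ur), xr)  # x_r is indeed a fixed point under u_r
print(round(ur, 2))                # 4.7 (reported as 4.69 in the text)
```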


4 Feasibility, Stability, and Optimality


In this section, we study the feasibility, stability, and optimality of the MPC scheme. We start with feasibility, which deals with the existence of the optimal solution. The MPC problem is feasible if there exists an optimal solution z^* for the optimization problem at each time instant that satisfies all the constraints. Whenever there are constraints on the states, the optimization problem becomes more complicated. In that case, we have to select the control sequence U_k in such a way that the corresponding predicted state sequence X_k does not violate the state constraints. This leads to the idea of feasibility and feasible sets. We denote by U_{fk} ⊆ U^N the feasible set of control sequences

U_{fk} = {U_k ∈ U^N : X_k(x_k, U_k) ∈ X^{N+1}}.        (40)
Note that U_{fk} depends on the current state x_k, i.e., U_{fk} = U_f(x_k); we denote it as U_{fk} to simplify the notation. It also depends on the prediction horizon N, which we consider fixed here. The set U_{fk} shrinks as x_k gets closer to the boundary of X. As x_k moves away from the boundary, more and more control sequences U_k become feasible, and when x_k is sufficiently far from the boundary we have U_{fk} = U^N, i.e., all control sequences are feasible, which is the same as the unconstrained state case. This situation is demonstrated in Fig. 4, in which for Fig. 4(a) the current state x_k is close to the boundary of the constraint set. In this case, the predicted state sequences 2 and 3 violate the state constraints; hence the corresponding control sequences are not feasible. For Fig. 4(b), the current state x_k is sufficiently far from the boundary of the constraint set, which makes all three predicted state sequences stay within the constraint set. Consequently, all three control sequences are feasible.
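Membership in the feasible set (40) can be tested directly: roll the candidate control sequence through the model and check every predicted state against X and every input against U. A NumPy sketch for the linear example of Section 2.5 (the helper `is_feasible` is hypothetical, not from the paper):

```python
import numpy as np

def is_feasible(xk, Uk, A, B, Fx, gx, Fu, gu):
    """Check U_k ∈ U_fk from (40): inputs in U and predicted states in X."""
    x = xk
    for u in Uk:
        if not np.all(Fu @ np.atleast_1d(u) <= gu):
            return False                  # input constraint violated
        x = A @ x + B.ravel()*u           # predict the next state
        if not np.all(Fx @ x <= gx):
            return False                  # predicted state leaves X
    return True

A = np.array([[0.9, 0.2], [-0.4, 0.8]]); B = np.array([[0.1], [0.01]])
Fx = np.vstack([np.eye(2), -np.eye(2)]); gx = 10*np.ones(4)
Fu = np.array([[1.0], [-1.0]]); gu = np.ones(2)

print(is_feasible(np.array([0.0, 0.0]), np.zeros(5), A, B, Fx, gx, Fu, gu))  # True
print(is_feasible(np.array([9.9, 5.0]), np.ones(5), A, B, Fx, gx, Fu, gu))   # False
```

Here the second call is infeasible because, starting near the boundary at x_1 = 9.9, applying u = 1 pushes the predicted x_1 beyond the bound of 10.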
The MPC problem is said to be feasible for x_k ∈ X if U_{fk} is nonempty. This also ensures the existence of a solution to the optimization problem. Clearly, for the unconstrained state case, the MPC problem is always feasible, whereas for the constrained state case feasibility depends on the current state. We denote the set of feasible states by X_{fk} ⊆ X, which is defined as

X_{fk} = {x_k ∈ X : U_{fk} ≠ ∅}.        (41)

In general, if X_{fk} and U_{fk} are the feasible sets of states and control sequences at time instant k, then the MPC control law is computed by solving the optimization problem

inf_{U_k ∈ U_{fk}} J_k(x_k, U_k)
subject to x_{i+1|k} = f(x_{i|k}, u_{i|k}), k ∈ T, i = k, ..., k+N−1.        (42)
Clearly, every control sequence in the set U_{fk} results in a predicted state sequence that satisfies the state constraints; hence there is no need to include the state constraints in the optimization problem here. The notation X_{fk} is more general and also covers time-varying systems; for time-invariant systems, the index k can be omitted, which simplifies the notation to X_f.
Another important concept associated with feasibility is persistent feasibility. The MPC problem is said to be persistently feasible if feasibility of the initial state x_0 guarantees feasibility of the future states x_k, k = 1, 2, ..., N_T under the dynamics, i.e., U_{f0} ≠ ∅ ⟹ U_{fk} ≠ ∅ for all k = 1, 2, ..., N_T. Persistent feasibility depends on the system dynamics, the prediction horizon N, and the constraint sets X, U.
Figure 4: Feasibility. (a) State near the boundary of X. (b) State away from the boundary of X.


Next, we discuss stability, which deals with the convergence of the solution, i.e., whether the state trajectory under the MPC scheme converges to the desired reference or equilibrium point. In MPC, the stability analysis is mainly based on the Lyapunov approach, in which the basic idea is to design the control scheme in such a way that the optimal cost function becomes a Lyapunov function, i.e., V_k = J_k^*, and it satisfies

ΔV = J_{k+1}^*(x_{k+1}) − J_k^*(x_k) < 0.        (43)
There exist different variants of the criterion (43) which give different upper bounds for ΔV. In general, for stabilizable LTI systems, by properly selecting the terminal weighting matrix and terminal constraints, the value function of the MPC scheme can be made a Lyapunov function. However, this may not always be possible for nonlinear systems. The terminal weighting matrix Q_N and terminal constraints F_{xN}, g_{xN} can be easily incorporated in the MPC algorithm by adding them to Q_X, F_X, g_X, which results in

\[
Q_X = \begin{bmatrix} Q & \cdots & 0 & 0 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \cdots & Q & 0 \\ 0 & \cdots & 0 & Q_N \end{bmatrix}, \quad
F_X = \begin{bmatrix} F_x & \cdots & 0 & 0 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \cdots & F_x & 0 \\ 0 & \cdots & 0 & F_{xN} \end{bmatrix}, \quad
g_X = \begin{bmatrix} g_x \\ \vdots \\ g_x \\ g_{xN} \end{bmatrix}. \tag{44}
\]

Finally, optimality is associated with the performance of the solution: how fast the trajectory converges to the equilibrium point and how much control effort is required. When it comes to optimality, MPC schemes usually result in a suboptimal solution. This is because, in MPC, at every time instant we optimize the performance over the prediction horizon, not the entire time horizon. Consequently, as the prediction horizon increases, the MPC control law becomes closer to optimal and, in general, as N → N_T, the control law becomes optimal.

5 Further Reading
This tutorial attempts to discuss the basic theory of MPC and its numerical implementation using MATLAB. For
a more detailed study on linear MPC and nonlinear MPC, one can refer to [1] and [2], respectively. For a better
understanding of the numerical optimization methods for solving linear programming, quadratic programming, and
nonlinear programming problems, one may refer to [3]. For related and advanced topics in MPC such as the LQR, Kalman filter, adaptive MPC, robust MPC, and distributed MPC, one can refer to [4]–[8]. A lecture series based on this tutorial can be found at [9]. The MATLAB codes for the MPC examples discussed in this paper are available at [10].

References
[1] F. Borrelli, A. Bemporad and M. Morari “Predictive Control for Linear and Hybrid Systems”, Cambridge Univer-
sity Press, 2017.
[2] L. Grüne and J. Pannek, "Nonlinear Model Predictive Control: Theory and Algorithms", Springer, 2011.
[3] D. Luenberger and Y. Ye “Linear and Nonlinear Programming: Fourth edition”, Springer, 2008.
[4] D. Mayne, "Model predictive control: Recent developments and future promise", Automatica, vol. 50, pp. 2967-2986, 2014.
[5] M. Guay, V. Adetola and D. Dehaan,“Robust and Adaptive Model Predictive Control of Non-linear Systems”, The
Institution of Engineering and Technology, 2015.
[6] B. Kouvaritakis and M. Cannon,“Model Predictive Control: Classical, Robust and Stochastic”, Springer, 2016.
[7] S. Rakovic and W. Levine, “Handbook of Model Predictive Control”, Springer, 2019.
[8] M. Augustine, “A Note on Linear Quadratic Regulator and Kalman Filter”, arXiv, 2023.
http://arxiv.org/abs/2308.15798.
[9] M. Augustine, “Model Predictive Control using MATLAB”, YouTube, 2021.
https://www.youtube.com/playlist?list=PL0IUz_pjFlJ2LsSTLY4I1yBhgGJeo9Baf.
[10] M. Augustine, “MPC-MATLAB”, GitHub, 2021.
https://github.com/MIDHUNTA30/MPC-MATLAB.
