April 2017
Part –A
1. Write the need for sample and hold device.
Analog signals are converted to digital form using an A/D conversion system. The A/D converter converts a voltage amplitude at its input into a binary code representing a quantized amplitude closest to the amplitude of the input. Input signal variation during the time of conversion can lead to erroneous results. Therefore, high-performance A/D systems are preceded by a S/H device which keeps the input to the A/D converter constant.
2. Define the State Transition Matrix of a discrete system.
x(k+1) = F x(k)
x(k) = F^k x(0)
Φ(k) = F^k = Z⁻¹[(zI − F)⁻¹ z]
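These relations can be checked numerically; in the sketch below, the system matrix F and the initial state x(0) are hypothetical values chosen only for illustration:

```python
import numpy as np

# Hypothetical discrete-time system matrix F and initial state (assumed,
# not from the question).
F = np.array([[0.0, 1.0],
              [-0.16, -1.0]])
x0 = np.array([1.0, 0.0])

# State transition matrix at step k: Phi(k) = F^k
k = 3
Phi_k = np.linalg.matrix_power(F, k)

# x(k) = F^k x(0)
x_k = Phi_k @ x0

# Cross-check by iterating x(k+1) = F x(k)
x = x0.copy()
for _ in range(k):
    x = F @ x

print(np.allclose(x, x_k))  # True
```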
3. Compare Parametric and Nonparametric methods of system identification
Parametric method:
- Suitable for controller design, simulation, prediction etc.
- Needs a priori assumptions about the model structure.
Nonparametric method:
- Simple and fast; needs only a little a priori information.
- Gives only some general ideas about the system (bandwidth, order).
J(N1, N2, ..., Nu) = Σ_{j=N1}^{N2} δ(j) [ŷ(t+j | t) − w(t+j)]² + Σ_{j=1}^{Nu} λ(j) [Δu(t+j−1)]²
Where,
ŷ(t+j | t) is an optimum j-step-ahead prediction of the system output on data up to time t,
N1 and N2 are the minimum and maximum costing horizons,
Nu is the control horizon,
δ(j) and λ(j) are weighting sequences, and w(t+j) is the future reference trajectory, which can be considered to be constant.
The objective of predictive control is to compute the future control sequence
u(t),u(t+1),…..u(t+Nu) in such a way that the future plant output y(t+j) is driven close to
w(t+j). This is accomplished by minimizing J(N1,N2,….Nu)
Part –B
11 Sketch the block diagram of a typical sampled-data controlled system and explain the 10
a functions performed by each block.
i Ans:
ADC: The analog signal is converted into digital form by an A/D conversion system. The conversion system usually consists of an A/D converter preceded by a sample-and-hold device.
DAC: The digital signal coming from digital device is converted into analog signal.
Sample and Hold Device:
Sampler: It is a device which converts an analog signal into a train of amplitude modulated pulses.
Hold device: A hold device simply maintains the value of the pulse for a prescribed time
duration.
Sampling: Sampling is the conversion of a continuous-time signal into a discrete-time signal obtained by taking samples of the continuous-time signal (or analog signal) at discrete time instants.
Sampling frequency: The sampling frequency should be greater than two times the signal frequency: Fs ≥ 2 × Fin.
Types of sampling:
Periodic sampling: In this sampling, samples are obtained uniformly at intervals of T seconds.
Multiple-order sampling: A particular sampling pattern is repeated periodically.
Multiple-rate sampling: In this type two simultaneous sampling operations with different time
periods are carried out on the signal to produce the sampled output.
Final Control Element: A final control element changes a process in response to a change in the controller output. Examples of final control elements are actuators such as valves, dampers, fluid couplings, gates, and burner tilts, to name a few.
Sensor: A sensor is a device that detects and responds to some type of input from the physical
environment. The specific input could be light, heat, motion, moisture, pressure, or any one of a
great number of other environmental phenomena.
11 Test the controllability of the following system. 6
a
ii x1 (k 1) 1 2 x1 (k )
x (k 1) 3 4 x (k )
2 2
x k
y k 1 2 1
x 2 k
Ans:
The controllability test is
U = [g  Fg  F²g  ...  F^(n−1) g]
G matrix is not given. Hence controllability cannot be tested.
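If an input vector g were specified, the test would proceed as below; the g used here is purely an assumption made to illustrate the procedure (NumPy sketch):

```python
import numpy as np

# F from the question; g is NOT given in the question, so a hypothetical
# input vector is assumed here only to illustrate the procedure.
F = np.array([[1.0, 2.0],
              [3.0, 4.0]])
g = np.array([[0.0],
              [1.0]])  # assumed

# Controllability matrix U = [g  Fg  ...  F^(n-1) g]
n = F.shape[0]
U = np.hstack([np.linalg.matrix_power(F, i) @ g for i in range(n)])

# Full rank (rank n) means the pair (F, g) is completely state controllable
print(np.linalg.matrix_rank(U) == n)  # True for this assumed g
```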
OR
11 Describe the principle and design procedure for state feedback control scheme with block 10
b diagram.
i Ans:
Consider the state-space model of a SISO system
x(k + 1) = Ax(k) + Bu(k) (1)
y(k) = Cx(k)
where x(k) ∈ Rn , u(k) and y(k) are scalar.
In state feedback design, the states are fed back to the input side to place the closed-loop poles at desired locations.
Regulation Problem:
When we want the states to approach zero starting from any arbitrary initial state, the design
problem is known as regulation where the internal stability of the system, with desired
transients, is achieved.
Control input: u(k) = −Kx(k) (2)
Tracking Problem:
When the output has to track a reference signal, the design problem is known as tracking
problem. Control input: u(k) = −Kx(k) + Nr(k) where r(k) is the reference signal.
First we will discuss designing a state feedback control law using pole placement
technique for regulation problem. By substituting the control law (2) in the system state model
(1), the closed loop system becomes x(k + 1) = (A − BK)x(k). If K can be designed such that
eigenvalues of A − BK are within the unit circle, then the problem of regulation will be solved.
The control problem can thus be defined as: Design a state feedback gain matrix K such that
the control law given by equation (2) places poles of the closed loop system x(k+1) =
(A−BK)x(k) in desired locations.
Design Procedure (Ackermann's Method):
1. Determine the desired characteristic polynomial from the desired poles μ1, μ2, ..., μn:
   (z − μ1)(z − μ2) ... (z − μn) = z^n + α1 z^(n−1) + ... + α(n−1) z + αn
2. Determine the matrix φ(F) using the coefficients of the desired characteristic polynomial:
   φ(F) = F^n + α1 F^(n−1) + ... + α(n−1) F + αn I
3. Calculate the state feedback gain matrix K using Ackermann's formula:
   K = [0 0 ... 0 1] Qc⁻¹ φ(F)
   where Qc = [g  Fg  ...  F^(n−1) g] is the controllability matrix.
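The three steps can be carried out numerically; the pair (F, g) and the desired pole locations below are assumptions chosen only to illustrate the procedure:

```python
import numpy as np

# Hypothetical controllable pair (F, g) and desired poles (assumed).
F = np.array([[0.0, 1.0],
              [-0.16, -1.0]])
g = np.array([[0.0],
              [1.0]])
desired_poles = [0.5 + 0.5j, 0.5 - 0.5j]

# Step 1: desired characteristic polynomial z^2 + a1 z + a2
coeffs = np.poly(desired_poles)          # [1, -1, 0.5]

# Step 2: phi(F) = F^2 + a1 F + a2 I
n = F.shape[0]
phi_F = sum(c * np.linalg.matrix_power(F, n - i) for i, c in enumerate(coeffs))

# Step 3: K = [0 ... 0 1] Qc^{-1} phi(F)
Qc = np.hstack([np.linalg.matrix_power(F, i) @ g for i in range(n)])
K = np.array([[0.0, 1.0]]) @ np.linalg.inv(Qc) @ phi_F

# Check: the eigenvalues of F - gK should be the desired poles
print(np.linalg.eigvals(F - g @ K))
```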
11 Test the stability of the following system: P(z) = z⁴ − 1.2z³ + 0.07z² + 0.3z − 0.08 = 0. 6
b
ii Ans:
Check for necessary conditions:
P(1) = 1 − 1.2 + 0.07 + 0.3 − 0.08 = 0.09 > 0
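The necessary-condition check (and the overall stability conclusion) can be verified numerically; NumPy is assumed here for illustration:

```python
import numpy as np

# P(z) = z^4 - 1.2 z^3 + 0.07 z^2 + 0.3 z - 0.08
coeffs = [1, -1.2, 0.07, 0.3, -0.08]

# Necessary conditions of the Jury test
P1 = np.polyval(coeffs, 1)      # P(1) = 0.09 > 0
Pm1 = np.polyval(coeffs, -1)    # (-1)^4 P(-1) must also be > 0
print(round(P1, 2), round(Pm1, 2))  # 0.09 1.89

# Direct check: the system is stable iff all roots lie inside the unit circle
roots = np.roots(coeffs)
print(np.all(np.abs(roots) < 1))  # True
```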
A parametric method can be characterized as a mapping from the recorded data to the estimated parameter vector. The estimated parameters do not give any physical insight into the process. The various parametric methods of system identification are
1. Least squares (LS) estimate
2. Prediction error method (PEM)
3. Instrumental variable (IV) method
LEAST SQUARES ESTIMATION
The method of least squares estimates parameters by minimizing the squared error between the observed data and their expected values. Linear regression is the simplest type of parametric model. This model structure can be written as
Y(t) = φ^T(t) θ (1)
Where,
Y(t) = measured quantity.
For example, for the first-order model y(t) + a y(t−1) = b u(t−1),
θ = [a  b]^T and φ(t) = [−y(t−1)  u(t−1)]^T.
The elements of φ(t) are often called regression variables or regressors, while y(t) is called the regressed variable. θ is called the parameter vector. The variable t takes integer values.
Example 2:
Consider a truncated weighting function model:
Y(t) = h0 u(t) + h1 u(t−1) + ... + h(M−1) u(t−M+1)
The input signals u(t), u(t−1), ..., u(t−M+1) are recorded during the experiment. Hence the regression vector
φ(t) = [u(t)  u(t−1)  ...  u(t−M+1)]^T is an M-vector of known quantities, and
θ = [h0  h1  ...  h(M−1)]^T is an M-vector of unknown parameters.
The problem is to find an estimate θ̂ of the parameter vector θ, as shown in Fig. 1, from the experimental measurements Y(1), φ(1), Y(2), φ(2), ..., Y(N), φ(N). Here N represents the number of experimental data and n represents the number of unknown quantities in φ(t), i.e. the number of unknown parameters in θ.
The measurements satisfy
Y(1) = φ^T(1) θ
Y(2) = φ^T(2) θ
...
Y(N) = φ^T(N) θ
This can be written in matrix notation as
Y = Φ θ (4)
Where,
Y = [Y(1)  ...  Y(N)]^T is an (N × 1) vector (5)
Φ = [φ^T(1); ...; φ^T(N)] is an (N × n) matrix (6)
Define the equation error
ε(t) = y(t) − φ^T(t) θ̂ (7)
where y(t) is the observed value and φ^T(t) θ̂ is the expected value, and stack these in a vector defined as
ε = [ε(1)  ...  ε(N)]^T
In statistical literature the equation errors are often called residuals. The least squares estimate of θ is defined as the vector θ̂ that minimizes the loss function
V(θ) = (1/2) Σ_{t=1}^{N} ε²(t) = (1/2) Σ_{t=1}^{N} [Y(t) − φ^T(t) θ]² (8)
Note:
Other forms of the loss function are
V(θ) = (1/2) ε^T ε (9)
V(θ) = (1/2) ‖ε‖² (10)
where ‖·‖ denotes the Euclidean vector norm.
The estimate θ̂ is obtained from the experimental measurements Y(1), φ(1), Y(2), φ(2), ..., Y(N), φ(N) by minimizing the loss function V(θ) in (8) with the residuals (7). The solution to this optimization problem is
θ̂ = (Φ^T Φ)⁻¹ Φ^T Y (11)
For this solution, the minimum value of V is
min V(θ) = V(θ̂) = (1/2) [Y^T Y − Y^T Φ (Φ^T Φ)⁻¹ Φ^T Y] (12)
Note:
The matrix Φ^T Φ is positive definite.
The form (11) of the least squares estimate can be rewritten in the equivalent form
θ̂ = [Σ_{s=1}^{t} φ(s) φ^T(s)]⁻¹ [Σ_{s=1}^{t} φ(s) Y(s)] (13)
In many cases φ(t) is known as a function of t. Then (13) might be easier to implement than (11), since the matrix Φ of large dimension is not needed in (13). The form (13) is also the starting point in deriving several recursive estimates.
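Equation (11) can be exercised on simulated data; the first-order ARX model and the "true" parameter values below are assumptions made only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated first-order model y(t) = -a*y(t-1) + b*u(t-1) + noise
# (hypothetical true parameters, chosen for illustration)
a_true, b_true = -0.8, 0.5
N = 200
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a_true * y[t - 1] + b_true * u[t - 1] + 0.01 * rng.standard_normal()

# Build the regression Y = Phi @ theta with phi(t) = [-y(t-1), u(t-1)]^T
Phi = np.column_stack([-y[:-1], u[:-1]])
Y = y[1:]

# Least squares estimate, eq. (11): theta_hat = (Phi^T Phi)^{-1} Phi^T Y
theta_hat, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
print(theta_hat)  # close to [a_true, b_true]
```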
OR
12 With an example for each, explain any one parametric and one non-parametric method of 16
b
system identification.
PARAMETRIC METHOD OF SYSTEM IDENTIFICATION
A parametric method can be characterized as a mapping from the recorded data to the estimated parameter vector. The estimated parameters do not give any physical insight into the process. The various parametric methods of system identification are
1. Least squares (LS) estimate
2. Prediction error method (PEM)
3. Instrumental variable (IV) method
RECURSIVE IDENTIFICATION METHOD
In recursive (also called on-line) identification methods, the parameter estimates are computed recursively in time. This means that if there is an estimate θ̂(t−1) based on data up to time t−1, then θ̂(t) is computed by a simple modification of θ̂(t−1) when the new data at time t arrive.
Most adaptive systems, for example adaptive control systems as shown in Fig. 1, are based (explicitly or implicitly) on recursive identification.
A current estimated model of the process is then available at all times. This time-varying model is used to determine the parameters of the (also time-varying) regulator (also called the controller).
In this way the regulator depends on the previous behavior of the process through the information flow: process → model → regulator.
If an appropriate principle is used to design the regulator, then the regulator should adapt to the changing characteristics of the process.
The various recursive identification methods are:
Recursive least squares method
Real time identification method
Recursive instrumental variable method
Recursive prediction error method.
The argument t has been used to stress the dependence of θ̂ on time. The least squares estimate
θ̂(t) = [Σ_{s=1}^{t} φ(s) φ^T(s)]⁻¹ Σ_{s=1}^{t} φ(s) Y(s) (3)
can be computed in a recursive fashion.
Define
P(t) = [Σ_{s=1}^{t} φ(s) φ^T(s)]⁻¹ (4)
Then
P⁻¹(t) = Σ_{s=1}^{t} φ(s) φ^T(s)
       = Σ_{s=1}^{t−1} φ(s) φ^T(s) + φ(t) φ^T(t)
so that
P⁻¹(t) = P⁻¹(t−1) + φ(t) φ^T(t)
P⁻¹(t−1) = P⁻¹(t) − φ(t) φ^T(t) (5)
Then, using eq. (4), eq. (3) can be written as
θ̂(t) = P(t) Σ_{s=1}^{t} φ(s) Y(s) (6)
Note: if we replace t by t−1 in eq. (6), we get
θ̂(t−1) = P(t−1) Σ_{s=1}^{t−1} φ(s) Y(s) (7)
i.e. P⁻¹(t−1) θ̂(t−1) = Σ_{s=1}^{t−1} φ(s) Y(s).
Equation (6) can be written as
θ̂(t) = P(t) [Σ_{s=1}^{t−1} φ(s) Y(s) + φ(t) y(t)] (8)
By substituting eq. (7) into eq. (8) we get
θ̂(t) = P(t) [P⁻¹(t−1) θ̂(t−1) + φ(t) y(t)]
By substituting eq. (5),
θ̂(t) = P(t) [P⁻¹(t) θ̂(t−1) − φ(t) φ^T(t) θ̂(t−1) + φ(t) y(t)]
     = θ̂(t−1) + P(t) φ(t) [y(t) − φ^T(t) θ̂(t−1)]
The gain K(t) = P(t) φ(t) can be computed from the previous P(t−1) as
K(t) = P(t−1) φ(t) / [1 + φ^T(t) P(t−1) φ(t)] (12)
The recursive least squares (RLS) algorithm consists of
1. θ̂(t) = θ̂(t−1) + K(t) ε(t)
2. ε(t) = y(t) − φ^T(t) θ̂(t−1)
3. P(t) = P(t−1) − P(t−1) φ(t) φ^T(t) P(t−1) / [1 + φ^T(t) P(t−1) φ(t)]
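The three RLS equations map directly to code; the simulated first-order plant and its parameters below are assumptions made only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ARX model y(t) = -a*y(t-1) + b*u(t-1) + noise
a_true, b_true = -0.8, 0.5
N = 300
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a_true * y[t - 1] + b_true * u[t - 1] + 0.01 * rng.standard_normal()

# RLS recursion
theta = np.zeros(2)
P = 1000.0 * np.eye(2)                 # large initial covariance
for t in range(1, N):
    phi = np.array([-y[t - 1], u[t - 1]])
    eps = y[t] - phi @ theta                              # step 2
    K = P @ phi / (1.0 + phi @ P @ phi)                   # gain, eq. (12)
    theta = theta + K * eps                               # step 1
    P = P - np.outer(P @ phi, phi @ P) / (1.0 + phi @ P @ phi)  # step 3

print(theta)  # converges near [a_true, b_true]
```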
Consider a sinusoidal input
u(t) = a sin(ωt)
Where,
a = amplitude of the sinusoidal input u(t)
ω = frequency of the sinusoidal input u(t) in rad/sec
In steady state the output is y(t) = b sin(ωt + φ), with
b = a |G(iω)| (3a)
φ = arg G(iω) (3b)
This can be proved as follows. Assume the system is initially at rest. Then the system G(s) can be represented using a weighting function h(t) as follows:
y(t) = ∫₀^t h(τ) u(t−τ) dτ (4)
G(s) = ∫₀^∞ h(τ) e^(−sτ) dτ (5)
Since
sin(ωt) = (e^(iωt) − e^(−iωt)) / (2i) (6)
equations (1), (4), (5) and (6) give
y(t) = ∫₀^t h(τ) u(t−τ) dτ
     = ∫₀^t h(τ) a sin(ω(t−τ)) dτ
     = (a/2i) ∫₀^t h(τ) (e^(iω(t−τ)) − e^(−iω(t−τ))) dτ
     = (a/2i) [e^(iωt) ∫₀^t h(τ) e^(−iωτ) dτ − e^(−iωt) ∫₀^t h(τ) e^(iωτ) dτ]
For large t the integrals approach
G(iω) = ∫₀^∞ h(τ) e^(−iωτ) dτ
G(−iω) = ∫₀^∞ h(τ) e^(iωτ) dτ
so that in steady state
y(t) = (a/2i) [e^(iωt) G(iω) − e^(−iωt) G(−iω)]
Since we can represent
G(iω) = r e^(iφ)
Where,
r = |G(iω)| is the magnitude of G(iω) and φ = arg G(iω),
and since |G(−iω)| = |G(iω)| and arg G(−iω) = −arg G(iω),
y(t) = (a/2i) [e^(iωt) |G(iω)| e^(i arg G(iω)) − e^(−iωt) |G(iω)| e^(−i arg G(iω))]
     = a |G(iω)| sin(ωt + arg G(iω))
     = b sin(ωt + φ) (7)
with b = a |G(iω)| and φ = arg G(iω), which proves the relations above.
By measuring the amplitudes a and b as well as the phase difference φ, one can draw a Bode plot (or Nyquist or equivalent plot) for different values of ω. From the Bode plot, one can easily estimate the transfer function model G(s) of the system.
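The procedure (excite with a sinusoid, measure b and φ, repeat over ω) can be sketched numerically. The plant G(s) = 1/(s+1) and the correlation-based amplitude/phase extraction below are assumptions chosen for illustration:

```python
import numpy as np

# Assumed plant G(s) = 1/(s+1): at w = 1 rad/s, |G| = 1/sqrt(2), arg G = -45 deg
w, a = 1.0, 1.0
t = np.linspace(0.0, 8 * np.pi, 80001)      # 4 full periods of sin(t)
u = a * np.sin(w * t)

# Steady-state response to a*sin(wt) for this plant
b_exact = a / np.sqrt(1 + w**2)
phi_exact = -np.arctan(w)
y = b_exact * np.sin(w * t + phi_exact)

# Correlation method over the last full periods of the record
mask = t >= 4 * np.pi
I = 2 * np.mean(y[mask] * np.sin(w * t[mask]))   # b*cos(phi)
Q = 2 * np.mean(y[mask] * np.cos(w * t[mask]))   # b*sin(phi)
b_est, phi_est = np.hypot(I, Q), np.arctan2(Q, I)

print(round(b_est, 3), round(np.degrees(phi_est), 1))  # 0.707 -45.0
```

Repeating this for several ω values gives the points of the Bode plot.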
The figure shows a continuous-data PID controller acting on an error signal e(t). The proportional control simply multiplies the error signal e(t) by a constant Kp. The integral control multiplies the time integral of e(t) by Ki, and the derivative control generates a signal equal to Kd times the time derivative of e(t). The function of the integral control is to provide action to reduce the area
under e(t) which leads to reduction of steady state error. The derivative control provides
anticipatory action to reduce the overshoots and oscillations in time response. In digital control,
P-control is still implemented by a proportional constant Kp. The integrator and differentiator
can be implemented by various schemes.
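One possible digital implementation (rectangular integration and a backward-difference derivative) is sketched below; the gains, sample time and first-order plant are assumptions made only for illustration:

```python
# A minimal positional digital PID (assumed gains and sample time;
# rectangular integration, backward-difference derivative).
class DigitalPID:
    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.ts                   # integral term
        derivative = (error - self.prev_error) / self.ts   # derivative term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: regulate a first-order plant x' = -x + u to the setpoint 1
pid = DigitalPID(kp=2.0, ki=1.0, kd=0.1, ts=0.01)
x = 0.0
for _ in range(3000):                # 30 s of simulated time
    u = pid.update(1.0 - x)
    x += 0.01 * (-x + u)             # Euler step of the plant
print(round(x, 2))  # 1.0 -- the integral action removes the offset
```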
Numerical Integration:
Since integration is the most time-consuming and difficult of the basic mathematical operations to perform on a digital computer, continuous integration operations are performed by numerical methods at strategic locations in a control system.
X(s)/R(s) = 1/s represents the integrator, where r(t) is the input. The area under the curve of r(t) between t = 0 and t = T is the output x(t).
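As one concrete rule (an assumption; several schemes exist), the trapezoidal (Tustin) approximation of the integrator X(s)/R(s) = 1/s is x[k] = x[k−1] + (T/2)(r[k] + r[k−1]):

```python
# Trapezoidal (Tustin) approximation of the integrator X(s)/R(s) = 1/s:
#   x[k] = x[k-1] + (T/2) * (r[k] + r[k-1])
T = 0.01
r = lambda t: t**2        # example input signal r(t)

x, t = 0.0, 0.0
while t < 1.0 - 1e-9:
    x += (T / 2) * (r(t) + r(t + T))
    t += T
print(x)  # ~0.3333; the exact integral of t^2 over [0, 1] is 1/3
```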
13 Design a dead beat controller for the following process: Gp(s) = e^(−2s) / (s + 1). 8
a
ii
A dead beat controller is obtained from
D(z) = (1/G(z)) · M(z) / (1 − M(z))
where M(z) is the desired closed-loop pulse transfer function.
G(z) = Z{Gh0(s) Gp(s)} = Gh0Gp(z)
G(z) = Z{ (1 − e^(−sT))/s · e^(−2s)/(s + 1) } = (1 − z^(−1)) z^(−2) Z{ 1/(s(s + 1)) }
     = (1 − z^(−1)) z^(−2) [ z/(z − 1) − z/(z − e^(−T)) ],  take T = 1 sec
     = z^(−2) (1 − e^(−1)) / (z − e^(−1))
     = 0.633 / (z²(z − 0.367))
The plant has two samples of dead time plus one sample of delay from the lag, so the fastest realizable response is M(z) = z^(−3). Hence
D(z) = M(z) / [G(z)(1 − M(z))]
     = z^(−3) · z²(z − 0.367) / [0.633 (1 − z^(−3))]
     = 1.579 z²(z − 0.367) / (z³ − 1)
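A quick numerical sanity check (a sketch; the exact coefficients e^(−1) ≈ 0.368 and 1 − e^(−1) ≈ 0.632 are rounded to 0.367/0.633 in the working above) simulates the loop and confirms the dead beat response:

```python
import numpy as np

p = np.exp(-1.0)       # plant pole e^{-T}, T = 1 s
g = 1.0 - p            # plant gain 1 - e^{-T}

N = 10
r = np.ones(N)                                   # unit-step set point
y, u, e = np.zeros(N), np.zeros(N), np.zeros(N)
for k in range(N):
    # Plant: y[k] = p*y[k-1] + g*u[k-3]  (two samples dead time + one lag)
    y[k] = p * y[k - 1] * (k >= 1) + g * u[k - 3] * (k >= 3)
    e[k] = r[k] - y[k]
    # Dead beat controller D(z) = (1 - p z^-1) / (g (1 - z^-3))
    u[k] = u[k - 3] * (k >= 3) + (e[k] - p * e[k - 1] * (k >= 1)) / g

print(np.round(y, 3))  # [0. 0. 0. 1. 1. 1. 1. 1. 1. 1.] -- settles in 3 samples
```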
OR
13 Sketch the block diagram for IMC. 6
b
i
A list of the variables used in the above block diagram:
d(s) = disturbance
d~(s) = estimated disturbance
gp(s) = process
gp~(s) = process model
q(s) = internal model controller
r(s) = set-point
r~(s) = modified set-point
u(s) = manipulated input
y(s) = measured process output
y~(s) = model output
13 10
b Describe the simplified Smith Predictor scheme with the steps.
ii
As shown in the figure, the process is conceptually split into a pure lag and a pure dead time. If the fictitious variable (b) could be measured somehow, it could be connected to the controller as shown in fig. 7.40(b). This would move the dead time outside the loop. The controlled variable (c) would repeat whatever b did after a delay of θd. Since there is no delay in the feedback signal (b), the response of the system would be greatly improved. This scheme, of course, cannot be implemented, because b is an unmeasurable signal. Now a model of the process is developed and the manipulated variable (m) is applied to the model as shown in the figure. If the model were perfect and the disturbance L = 0, then the controlled variable c would become equal to the model output cm, and em = c − cm = 0. The arrangement reveals that although the fictitious process variable b is unavailable, the value of bm can be derived, which will be equal to b unless modelling errors or load upsets are present. It is used as the feedback signal. The difference (c − cm) is the error which arises because of modelling errors or load upsets. To compensate for these errors, a second feedback loop is implemented using em. This is called the Smith predictor control strategy. Gc(s) is a conventional PI or PID controller which can be tuned much more tightly because of the elimination of dead time from the loop. Thus the system consists of a feedback PI algorithm (Gc) that controls a simulated process Gm(s), which is easier to control than the real process.
14 Explain how to obtain the RGA matrix that helps to pair inputs and outputs.
a The relative gain (λij) between input j and output i is defined as
λij = (∂yi/∂uj)|uk, k≠j  /  (∂yi/∂uj)|yk, k≠i
i.e. the ratio of the gain with all other inputs held constant (all other loops open) to the gain with all other outputs held constant (all other loops closed).
The relative-gain array provides a methodology whereby we select pairs of input and output variables in order to minimize the interaction among the resulting loops. It is a square matrix that contains the individual relative gains λij as elements. For a 2x2 system, the RGA is
Λ = [λ11  λ12; λ21  λ22]
Its rows and columns each sum to 1:
λ11 + λ12 = 1,  λ11 + λ21 = 1,  λ12 + λ22 = 1,  λ21 + λ22 = 1
so for a 2x2 system only one relative gain must be calculated for the entire array:
Λ = [λ11  1−λ11; 1−λ11  λ11]
Consider a relative gain array
Λ = [0.05  0.95; 0.95  0.05]
Since pairings are made on the relative gains closest to 1, we pair y1 with u2 and y2 with u1 in this case.
Consider the whiskey blending problem, which has steady-state process gain matrix and RGA
[y1; y2] = K [u1; u2],  K = [0.025  0.075; −1  1],  Λ = [0.25  0.75; 0.75  0.25]
indicating that the output-input pairings should be y1–u2 and y2–u1. In order to achieve this pairing we could use the following block diagram. The difference between r2 and y2 is used to adjust u1 using a PID controller (gc1); hence we refer to this pairing as y2–u1. The difference between r1 and y1 is used to adjust u2 using a PID controller (gc2); hence we refer to this pairing as y1–u2. This corresponds to the following diagram. This can also be done by redefining the variables.
Consider the following RGA for a system with three inputs and three outputs:
Λ = [ 1   1  −1;
      3  −4   2;
     −3   4   0]
1. We should not pair on a negative relative gain.
2. We should not pair with a relative gain of 0 because that means that particular input
does not have an effect on the particular output when all other loops are open.
3. In row 3 of the RGA, which corresponds to output 3, we would not pair y3 with u3 because of the 0 term. We cannot pair y3 with u1 because of the −3 term, which means y3 is paired with u2.
Λ = [ 1   1  −1;
      3  −4   2;
     −3   4   0]
4. From the first row, we cannot pair y1 with u3 because of the −1 term. So our only choice is to pair y1 with u1.
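The RGA can be computed directly from a steady-state gain matrix as Λ = K ∘ (K⁻¹)ᵀ (elementwise product); the gain matrix below, including the assumed sign of its (2,1) entry, is chosen to reproduce the 0.25/0.75 array quoted above:

```python
import numpy as np

def rga(K):
    # Lambda = K (elementwise *) (K^{-1})^T
    return K * np.linalg.inv(K).T

# Blending-type gain matrix (the sign of K[1, 0] is an assumption here)
K = np.array([[0.025, 0.075],
              [-1.0, 1.0]])
print(rga(K))  # [[0.25 0.75]
               #  [0.75 0.25]]  -> pair y1-u2 and y2-u1
```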
OR
14 8
b Explain how to design decouplers for a 2x2 process.
i The purpose of decouplers is to cancel the interaction effects between the two loops and thus render two non-interacting loops. Let us consider the input-output relationships of a process with two inputs and two outputs, with control loops pairing m1 with y1 and m2 with y2. To keep y1 constant when m2 changes, m1 should change by the following amount:
m1 = −[H12(s)/H11(s)] m2 ----------(1)
We introduce a dynamic element
D1(s) = −H12(s)/H11(s) ----------(2)
This uses m2 as input and provides an output which gives the amount by which m1 should be varied to keep y1 constant and cancel the effect of m2 on y1. In this way the decoupler cancels the effect of loop 2 on loop 1.
From fig. (b), with the two feedback loops it is possible to get the input-output relationships of the two closed loops:
y1(s) = Gc1[H11 − H12H21/H22] / (1 + Gc1[H11 − H12H21/H22]) · y1sp ----------(4)
y2(s) = Gc2[H22 − H12H21/H11] / (1 + Gc2[H22 − H12H21/H11]) · y2sp ----------(5)
Equations (4) and (5) show that the output of loop 1 and of loop 2 depends only on its own set point and not on that of the other loop.
NOTE:
1. Two interacting control loops are perfectly decoupled only if H11, H12, H21, H22 are perfectly known. Practically this is not possible, so only partial decoupling is possible.
2. For non-linear processes, like chemical processes, the loops may initially be perfectly decoupled; as the process parameters keep changing, interaction increases. The solution is to use adaptive decouplers.
3. Perfect decoupling allows independent tuning of each controller.
4. Decouplers are feedforward control elements.
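At steady state, eq. (2) reduces to a ratio of static gains, and the decoupling can be checked with a small calculation; the gain values below are assumptions made only for illustration:

```python
import numpy as np

# Illustrative (assumed) steady-state gains k11, k12, k21, k22
K = np.array([[2.0, 0.5],
              [0.3, 1.5]])

D1 = -K[0, 1] / K[0, 0]    # eq. (2) at steady state: cancels m2's effect on y1
D2 = -K[1, 0] / K[1, 1]    # symmetric decoupler for loop 2

# Controller outputs m1', m2' reach the process through [1 D1; D2 1]
T = np.array([[1.0, D1],
              [D2, 1.0]])
print(np.round(K @ T, 3))  # off-diagonal (interaction) entries become 0
```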
14 Explain the biggest log modulus method. 8
b BIGGEST LOG MODULUS (BLT) METHOD
ii Decouplers are elements used to compensate for interaction among loops, but they are practical mainly for low-order systems. To reduce the interactions in an n×n system with a decentralized controller, such that the control loop responses reach the set point, the BLT method is used. Luyben proposed this method.
STEP 1:
Calculate the Ziegler-Nichols (Z-N) settings for each individual loop. The ultimate gain and ultimate frequency of each diagonal transfer function Gjj(s) are calculated in the classical way. To do this, a value of frequency ω is guessed, the phase angle is calculated, and the frequency is varied to find the point where the Nyquist plot of Gjj(iω) crosses the negative real axis (i.e. −180 degree phase angle). The frequency at which this occurs is ωu, and the reciprocal of the real part of Gjj(iωu) is the ultimate gain.
STEP 2:
A detuning factor F is assumed, always > 1 (between 1.5 and 4). The gains of all feedback controllers Kci are calculated by dividing the Z-N gains KZNi by F:
KCi = KZNi / F, where KZNi = Kui / 2.2
Then the reset times of all feedback controllers are calculated by multiplying the Z-N settings by the factor F.
The factor F can be considered a detuning factor for all the loops. The larger the value of F, the more stable the system, but the more sluggish the set point and load responses.
STEP 3:
Using the guessed value of F and the resulting controller settings, a multivariable Nyquist plot of the scalar function
W(iω) = −1 + det[I + GM(iω) B(iω)]
is made for the multiloop system. The closed-loop log modulus Lcm is defined as
Lcm = 20 log |W / (1 + W)|
The peak in the plot of Lcm over the entire frequency range is Lcm,max.
STEP 4:
The F factor is varied until Lcm,max = 2N, where N is the order of the system. For N = 1, the SISO case, the familiar +2 dB maximum closed-loop log modulus criterion is obtained; for N = 2, +4 dB.
If Lcm,max is not equal to 2N, then find a new value of F and return to step 2.
MPC is based on iterative, finite-horizon optimization of a plant model. At time t the current plant state is sampled and a cost-minimizing control strategy is computed (via a numerical minimization algorithm) for a relatively short time horizon in the future. Specifically, an online or on-the-fly calculation is used to explore state trajectories that emanate from the current state and to find a cost-minimizing control strategy until time t + T. Only the first step of the control strategy is implemented; then the plant state is sampled again and the calculations are repeated starting from the new current state, yielding a new control and a new predicted state path. The prediction horizon keeps being shifted forward, and for this reason MPC is also called receding horizon control. Although this approach is not optimal, in practice it has given very good results. Much academic research has been done to find fast methods of solution of Euler-Lagrange type equations, to understand the global stability properties of MPC's local optimization, and in general to improve the MPC method. To some extent the theoreticians have been trying to catch up with the control engineers when it comes to MPC.
Principles of MPC
Model Predictive Control (MPC) is a multivariable control algorithm that uses:
an internal dynamic model of the process
a history of past control moves and
an optimization cost function J over the receding prediction horizon,
to calculate the optimum control moves.
An example of a cost function for the optimization is given by:
J = (r(k+1) − ŷ(k+1))² + (r(k+2) − ŷ(k+2))² + (r(k+3) − ŷ(k+3))² + w Δu(k)² + w Δu(k+1)²
where
ŷ = model predictive output
r = set point
Δu = change in the manipulated input
w = weight for the changes in the manipulated input
The indices k+1, k+2, ... indicate the sample time.
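The receding-horizon idea can be illustrated with a deliberately simple brute-force search over a grid of candidate moves; the scalar plant, two-move horizon, grid and weight are all assumptions made only for illustration:

```python
import numpy as np
from itertools import product

# Assumed scalar plant x[k+1] = a*x[k] + b*u[k], set point r, move weight w
a, b, w, r = 0.9, 0.5, 0.05, 1.0
moves = np.linspace(-1.0, 1.0, 21)     # candidate values for each du

def cost(x, u, du_seq):
    # J = sum over the horizon of (r - x)^2 + w*du^2
    J = 0.0
    for du in du_seq:
        u += du
        x = a * x + b * u
        J += (r - x) ** 2 + w * du ** 2
    return J

x, u = 0.0, 0.0
for _ in range(40):
    # Optimize the next 2 moves, implement only the first, then repeat
    best = min(product(moves, repeat=2), key=lambda s: cost(x, u, s))
    u += best[0]
    x = a * x + b * u
print(abs(x - r) < 0.15)  # True: the output is held near the set point
```

In practice the grid search is replaced by a numerical optimizer (e.g. quadratic programming), but the shifting-horizon structure is the same.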
OR
15 Describe the multivariable Dynamic Matrix Control scheme with detailed algorithmic 16
b steps.
The corrected prediction adds a disturbance estimate to the model prediction:
ŷc(k+1) = ŷ(k+1) + d̂(k+1)
ŷc(k+1) = Σ_{i=1}^{N−1} si Δu(k−i+1) + sN u(k−N+1) + d̂(k+1) (5)
ŷc(k+1) = s1 Δu(k) + Σ_{i=2}^{N−1} si Δu(k−i+1) + sN u(k−N+1) + d̂(k+1)
So for the jth step into the future, we find
ŷc(k+j) = ŷ(k+j) + d̂(k+j)
        = Σ_{i=1}^{j} si Δu(k−i+j)   [effect of future control moves]
        + Σ_{i=j+1}^{N−1} si Δu(k−i+j) + sN u(k−N+j)   [effect of past control moves]
        + d̂(k+j)   [correction term] (6)
and we can separate the effect of past and future control moves as shown in the above equation. The most common assumption is that the correction term is constant in the future (this is the constant additive disturbance assumption):
d̂(k+j) = d̂(k+j−1) = ... = d̂(k) = y_meas(k) − ŷ(k) (7)
Also, realize that there are no control moves beyond the control horizon of M steps, so
Δu(k+M) = Δu(k+M+1) = ... = Δu(k+P−1) = 0 (8)
In matrix-vector form, a prediction horizon of P steps and a control horizon of M steps yields
Ŷc = Sf Δuf + Spast Δupast + sN uP + d̂ (9)
(corrected predicted outputs = effect of current and future moves + effect of past moves + predicted disturbances)
where
Ŷc = [ŷc(k+1) ... ŷc(k+P)]^T is the (P × 1) vector of corrected output predictions,
Δuf = [Δu(k) ... Δu(k+M−1)]^T is the (M × 1) vector of current and future control moves,
Sf is the (P × M) dynamic matrix
Sf = [ s1     0      0   ...  0
       s2     s1     0   ...  0
       ...
       sP   s(P−1)  ...  s(P−M+1) ]
Δupast = [Δu(k−1) ... Δu(k−N+2)]^T is the ((N−2) × 1) vector of past control moves,
Spast is the (P × (N−2)) matrix of step response coefficients, whose (j, m) element is s(j+m) for j+m ≤ N−1 and 0 otherwise:
Spast = [ s2   s3   s4  ...  s(N−2)  s(N−1)
          s3   s4   s5  ...  s(N−1)  0
          ...
          s(P+1)  s(P+2)  ...  0     0 ]
uP = [u(k−N+1) ... u(k−N+P)]^T is the (P × 1) vector of past inputs, and
d̂ = [d̂(k+1) ... d̂(k+P)]^T is the (P × 1) vector of predicted disturbances.
The least-squares objective function penalizes the corrected predicted errors over the prediction horizon P and the control moves over the control horizon M:
Φ = Σ_{i=1}^{P} (êc(k+i))² + w Σ_{i=0}^{M−1} (Δu(k+i))² (12)
Notice that the quadratic terms can be written in matrix-vector form as
Σ_{i=1}^{P} (êc(k+i))² = (Êc)^T Êc, where Êc = [êc(k+1) ... êc(k+P)]^T (13)
and
w Σ_{i=0}^{M−1} (Δu(k+i))² = Δuf^T W Δuf, where W = diag(w, w, ..., w) (14)
Therefore the objective function can be written in the form
Φ = (Êc)^T Êc + Δuf^T W Δuf,
subject to the modelling equality constraint (11),
Êc = E − Sf Δuf (15)
where E is the vector of unforced errors (the predicted errors if no current or future control moves are made). Substituting (15), the objective function can be written
Φ = (E − Sf Δuf)^T (E − Sf Δuf) + Δuf^T W Δuf (16)
The solution that minimizes this objective function is
Δuf = (Sf^T Sf + W)⁻¹ Sf^T E = K E (17)
The current and future control move vector is proportional to the unforced error vector. Because only the current control move is actually implemented, we use the first row K1 of the matrix K and
Δu(k) = K1 E (18)
where K1 represents the first row of K = (Sf^T Sf + W)⁻¹ Sf^T.