ELEC4632 - Lab - 01 - 2022 v1

This document provides an introduction and overview for Lab 1 of ELEC4632, which involves using linear least squares methods to identify models for input/output data from physical systems. Students will complete a pre-lab exercise to generate and plot sinusoidal data in MATLAB. The main goals of the lab are then introduced: using experimental input/output data and linear least squares to determine the parameters of linear discrete-time models, represented as transfer functions or state space models, in order to model and understand system dynamics. Parametric system identification and the linear least squares method are described for estimating the coefficients of second-order discrete-time models from measured data.


School of Electrical Engineering &

Telecommunications

ELEC4632 Lab 1

An introduction to linear least square method and


system identification
In this lab, you will use linear least square method to identify a model for an input/output data
collected from a physical system.
This lab is also conducted remotely through Microsoft Teams for online students. Before attending the online lab, make sure to read the "Guide to Remotely Access ELEC4632 Labs", which you can find on the course Moodle page.

Pre-lab Exercise: MATLAB Refresher


You should write a MATLAB code (m-file) to create a set of sinusoidal data and plot them exactly
as shown in Fig. 1, through the following steps.

Fig. 1. Comparison of a set of sinusoidal data, (a) Original data, (b) Cut-off data.

1. Create a column vector representing time from 0 to 100 seconds with a time spacing of 0.1 second
and name it t. Use either the linspace function or a direct vector definition like t = a:b:c.
Make sure to transpose it, as vectors created this way are row vectors in MATLAB by default. Type
help followed by a function name, such as linspace, in the MATLAB Command Window to learn
how to use the function; you can also see the full help on any function by searching for it in the
main MATLAB help page. If you do not know which function achieves your objective, a web search
is a good starting point.
2. Use the sin and cos functions to generate two sinusoids, y1 and y2, respectively, with a period of
100 seconds, and add uniformly distributed noise bounded by ±0.2 using the rand function.
3. Cut off the first 200 samples from all data and assign the results to new variables t_new, y1_new,
and y2_new, respectively. Make sure t_new starts from zero. You can extract part of the data
stored in a variable by using the colon operator to index rows and columns, i.e., A(a:b,:)
extracts the data in matrix A from row a to row b for all columns (recall that A(c,d) in MATLAB
returns the element stored in row c and column d of A).
4. Store these three cut-off vectors in a new matrix variable named data_new. Use data_new
= [t_new y1_new y2_new] to attach column vectors of the same size and create an n-by-3
matrix, i.e., data_new is n×3. This method is known as matrix concatenation.
5. Plot the original data you created in Step 2 against time, i.e., y1 versus t and y2 versus t, on the
top side of the figure with the same colors as shown in Fig. 1(a), using the following functions:
figure, subplot, plot, stairs, hold on, hold off, xlim, ylim, title, grid, xlabel,
ylabel, legend. To display Greek letters, like π, read the Text Properties in MATLAB help.
Limit the x-axis and y-axis to (0, 140) and (−1.5, 1.5), respectively.
6. Use the same functions above (except figure) to plot the cut-off data at the bottom of the figure
as shown in Fig. 1(b). The color code for cos(0.02*pi*t) is [0.6 0.7 0.8] (see the help
for plot and RGB color codes)

Introduction to System Identification


In this lab, you will learn how to determine or identify a suitable dynamic model for a process using
linear least squares method [1], [2]. In control systems theory, to design a control system for a
process using a so-called Model-Based Control method, a mathematical model of the process is
needed. In general, there are two methods to model a process (also known as system identification).
One is by using the physical relationships that describe the dynamics of the process. For instance,
using Newton’s laws to obtain equations of motion for a mass-spring-damper process, or using
Kirchhoff’s Voltage and Current laws in addition to Newton’s laws to derive a permanent magnet DC
motor equation [3]. This method can sometimes result in complex dynamic equations for
complicated processes. Another way of finding a dynamic model for a process is by using
experimental input and output data, and then trying to match them with a mathematical equation,
either linear or nonlinear, depending on the nature of the system and its operating conditions. This
method is useful for the cases where there is little information about the physics of the process, or
due to the limitation in accessing different parts of the process for parameter measurements, only
input and output signals are available to be used. A process reveals its dynamic characteristics
through its outputs and state variables when a proper input signal is applied to excite all of its
internal modes.
One of the most common methods of empirical system identification in industry using input/output
data is the so-called Step Response modelling. This method is suitable for processes with slow
dynamics, which are mostly controlled for regulation purposes or set-point control. An example in
the Appendix shows how to find a first-order model from recorded data in the continuous-time
domain.

Parametric System Identification Using Linear Least Squares
Method
Most of the processes in industry are controlled in a way to keep the systems near their operating
points, and as a result, they can be represented by linear differential or difference equations (rational
transfer function). Therefore, we can determine the finite number of parameters that characterize
such a dynamic model, i.e., polynomial coefficients or zeroes and poles. This method is known as
parametric system identification. However, it is not always possible to model these processes with
a first-order transfer function using one simple step response as discussed before, since their
dynamics can be more complicated. Therefore, by choosing a suitable input signal and recording
the corresponding output using a digital computer, we can determine causal discrete-time models1,
as a dynamic model for a process to be controlled. The reason for identifying the models in discrete-
time form is that the control system will eventually be implemented on a digital computer, so it would
be beneficial to have a discrete-time model of the process and design the control system directly in
discrete-time domain, even though the real nature of the process is in continuous-time domain. In
this lab, our focus is on second-order discrete-time models with strictly proper form2. It should be
mentioned that, when no information is available about the process, we need to vary the order of
the model to get the best approximation of the actual process in terms of closeness of the responses.
Linear discrete-time dynamic models in second-order form are given as follows,
▪ Transfer function in the Z domain (z⁻¹ is called the "delay operator"):

G(z) = Y(z)/U(z) = (b1 z + b2)/(z² + a1 z + a2) = (b1 z⁻¹ + b2 z⁻²)/(1 + a1 z⁻¹ + a2 z⁻²).   (1)
▪ State Space:

x(k + 1) = Gx(k) + Hu(k)
y(k) = Cx(k) + Du(k)
with G = [0 1; −a2 −a1], H = [0; 1], C = [b2 b1], D = 0.   (2)
(Canonical Controllable Form)

▪ Difference Equation:
y(k) + a1 y(k−1) + a2 y(k−2) = b1 u(k−1) + b2 u(k−2).   (3)
Thus, the unknown parameters of the process model to be determined are {a1, a2, b1, b2}. To
estimate these parameters, the so-called linear least squares method is used in this lab, which is
a well-known and commonly used method for this purpose. More details on the least squares method
can be found in [1] and [2]. Of the three discrete-time model forms above, the difference equation
in Eq. (3) is the most suitable, since it can be written as a set of linear equations containing discrete-
time values of the measured input/output data. Later on, for the purpose of controller design, the state
space model will be used primarily. Thus, the difference equation can be rearranged as follows,

1 A causal system (also known as a physical or non-anticipative system) is a system whose output depends on previous
and current values of the input, and perhaps previous values of the output [3].
2 A strictly proper transfer function is one whose numerator order is less than its denominator order, i.e., it has
fewer zeros than poles [3]. In the discrete-time domain, it means that the output depends only on previous values of the
input and output.

y(k) = −a1 y(k−1) − a2 y(k−2) + b1 u(k−1) + b2 u(k−2),

y(k) = [y(k−1)  y(k−2)  u(k−1)  u(k−2)] [−a1; −a2; b1; b2]  ⇒  y(k) = φᵀ(k)θ,   (4)

where φᵀ(k) = [y(k−1)  y(k−2)  u(k−1)  u(k−2)] and θ = [−a1  −a2  b1  b2]ᵀ.
The model y(k) = φᵀ(k)θ is called a regression model, and the variables inside φᵀ(k) are called
regression variables or regressors. They contain the previous values of the input and the
corresponding output at time steps k − 1 and k − 2 (k is the discrete time step, related to
continuous time as t = kTs for k = 1, 2, 3, …, with Ts the sampling time). Since a computer applies
the input signal with a fixed sampling rate and then measures the corresponding output, N linear
equations are constructed, as shown in Fig. 2, with only four unknown parameters for a second-
order model, where N is the number of samples.

Fig. 2. System identification with input/output data: the input sequence {u(1), u(2), …, u(N)} is applied to the unknown
process G(z) = ?, producing the output sequence {y(1), y(2), …, y(N)}.

Therefore, the problem becomes solving the following set of linear equations,

y(1) = [y(0)    y(−1)    u(0)    u(−1)] θ
y(2) = [y(1)    y(0)     u(1)    u(0)] θ
y(3) = [y(2)    y(1)     u(2)    u(1)] θ
  ⋮
y(N) = [y(N−1)  y(N−2)  u(N−1)  u(N−2)] θ                              (5)

[y(1); y(2); y(3); …; y(N)] =
[y(0)    y(−1)   u(0)    u(−1);
 y(1)    y(0)    u(1)    u(0);
 y(2)    y(1)    u(2)    u(1);
   ⋮
 y(N−1)  y(N−2)  u(N−1)  u(N−2)] [−a1; −a2; b1; b2]                    (6)

⇒  Y = Φθ.                                                             (7)
As can be seen in Eq. (7), the vector Y and the matrix Φ contain all recorded input and output
information, and θ is the vector of unknown model parameters to be found. Since N is much larger
than four, we are dealing with a set of linear equations having more equations than unknown
variables, which indicates that, in general, no exact solution exists. The least squares method
offers the best solution for the model parameters {a1, a2, b1, b2} in the sense of minimum error
between the left-hand side and the right-hand side of those linear equations. Moreover, we know
that the real process is not perfectly linear, and the actual measurement y(k) at each time step
does not have an exactly linear relationship with the previous measurements and input values as the
model implies, i.e., Y ≈ Φθ. Thus, the error between the actual output values and the model output
values, i.e., E = Y − Φθ, includes both measurement noise and model uncertainty. Finally, the
least squares solution that gives the best estimate θ̂ of the model parameters is obtained as
follows,

θ̂ = (ΦᵀΦ)⁻¹ΦᵀY.   (8)

The obtained solution θ̂ in Eq. (8) is the best estimate of the model parameters in the least
squares sense (note that θ = [−a1 −a2 b1 b2]ᵀ, whereas we seek [a1 a2 b1 b2]ᵀ). The main
condition for solving the above matrix equation (also known as the normal equations) is that the
matrix ΦᵀΦ has full rank and is positive-definite (well-conditioned), which is known as the
Persistent Excitation condition. More details on the least squares solution in Eq. (8) are provided
in the Appendix.
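Eqs. (4)–(8) can be sketched numerically. The following NumPy example (parameter values are made-up, not from the lab's data) simulates the difference equation (3), builds Y and Φ as in Eqs. (5)–(6), and solves the normal equations of Eq. (8):

```python
import numpy as np

# Assumed example parameters (illustrative only): a1, a2, b1, b2
a1, a2, b1, b2 = -1.5, 0.7, 1.0, 0.5
rng = np.random.default_rng(1)

# Simulate the difference equation (3) driven by a random binary input
N = 200
u = rng.choice([-1.0, 1.0], size=N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = -a1*y[k-1] - a2*y[k-2] + b1*u[k-1] + b2*u[k-2]

# Build Y and Phi as in Eqs. (5)-(6), starting from k = 2 (zero initial conditions)
Y = y[2:]
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])

# Normal-equation solution, Eq. (8): theta = [-a1, -a2, b1, b2]^T
theta = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
print(np.round(theta, 6))   # recovers [ 1.5 -0.7  1.0  0.5] on this noiseless data
```

With noiseless data the true parameters are recovered exactly; with measured data the estimate is the least-squares best fit instead.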

Selection of a Suitable Input


So far, we have learned how to find the best estimate of coefficients for a system model using linear
least squares. In the next step, we want to know how to perform data collection for system
identification. The most recommended types of input signals for system identification are Pseudo
Random Binary Signal (PRBS), Square-Wave Signals, or the sum of many sinusoidal waves. The
amplitude of the input signal should be bounded to keep the system in its linear operation region. It
should be mentioned that any process has constraints on its input in terms of the maximum range
of the input amplitude allowed to be applied before the process is damaged. In addition, the resulting
output y(k) should be much larger than the measurement noise. Otherwise, the output would not be
usable for system identification, and it might result in an ill-conditioned ΦᵀΦ; the persistent
excitation condition would then no longer be met.
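A PRBS-like excitation of the kind described above can be sketched as follows; the amplitude and hold length are illustrative choices, not values prescribed by the lab:

```python
import numpy as np

# Random binary excitation (PRBS-like): random +/- levels, each held for
# several samples so the signal is slow enough to excite the dynamics.
rng = np.random.default_rng(2)
amplitude = 1.0   # must stay within the plant's linear operating range
hold = 5          # samples per level (illustrative)
levels = rng.choice([-amplitude, amplitude], size=40)
u = np.repeat(levels, hold)   # 200-sample input signal
print(u.shape)   # (200,)
```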

DC Offset Compensation
The output of a linear time-invariant (LTI) system is always zero when zero input is applied
(assuming zero initial conditions). Thus, if a system responds to zero input with a nonzero value,
the system is said to have an offset in its output (DC offset), which is common in industrial
processes. The other form of offset occurs when the input range is limited to only positive or only
negative values. In this case, the value in the middle of the input range is called the input offset, and the
output corresponding to this input is called the output offset. In both cases, the offsets
should be removed from the input and output data before filling the matrix Φ, so that the identified
LTI model accepts both negative and positive input values. However, in actual control
operation, the control input u(k) should be calculated from the offset-free output (the output offset yoffset
should be subtracted from the measured output ym(k) before it is used in the computation of u(k)). Then,
the input offset uoffset should be added to the calculated control input to obtain the actual process
input uin(k), as shown in Fig. 3.
Fig. 3. Compensation of the input and output offsets: the controller output u(k) plus uoffset forms the process input
uin(k); subtracting yoffset from the measured output ym(k) gives the offset-free output y(k) fed back to the controller.

Model Verification
After performing data collection and then identifying the unknown parameters of a process model
using linear least squares, it is time to verify the obtained dynamic model to see whether it
demonstrates similar behavior to the actual process or not. Hence, the same input signal should be
applied to both the model and the process, and then the output responses should be compared as
illustrated in Fig. 4. It is preferred to use a different set of input/output data for validation than those
used for system identification. The smaller the error between the output responses, the better the
quality of the identified model will be. One good criterion for verification of the identified model is the
application of Mean Squared Error method (MSE) on the difference between output responses or
the error signal.
Fig. 4. Model verification after performing system identification: the same input signal drives both the identified model
(producing simulated output data) and the actual process (producing experimental data); the error signal is the
difference between the two outputs.

Lab Exercise (2 marks)


In this lab, you are going to identify a second-order discrete-time linear model as in Eqs. (1)-(3) using
pre-collected data from one of the W-T setups, through the following steps,
1. Data extraction and analysis (0.6 marks, checked at 45 minutes)
a. Download the pre-collected data from Moodle and save it in the current directory of MATLAB.
The data file is named SysIdenData_StudentVersion.mat. Load the data into MATLAB
Workspace using load function. The loaded data should appear in Workspace with the
name LogData in Structure format. Use the following code to extract individual data in vector
array form,
t = LogData.time;
y_act = LogData.signals(1).values(:,2);
y_actm = LogData.signals(1).values(:,1);
u_act = LogData.signals(2).values;
As you can see, the data contain the recorded time in t, the actual noise-reduced output in y_act,
the original measured output in y_actm, and the actual input data in u_act. Then, find the sampling
time at which the data were recorded and name it Ts or h.
b. Plot these data using the functions you practiced in Pre-lab exercise as shown in Fig. 5.

Fig. 5. Original data, (a) Comparison of noise-reduced and measured output signals, (b) Actual input signal.

c. Find the best estimate of the offset value from the noise-reduced output y_act and name it
y_offset. Then, remove the output offset by subtracting y_offset from y_act, and name the
result y. Find the input offset as well, name it u_offset, remove it from the input data u_act,
and name the result u. Then, plot them in one figure as shown in Fig. 6. As you can see, both the
offset-free input and output signals in Fig. 6(a) and (b), respectively, begin from zero.

Fig. 6. Offset-free data, (a) Offset-free output signal (noise-reduced), (b) Offset-free input signal.

Hint: As explained in the DC Offset Compensation section, for a set of input/output data that
contains only positive values, like u_act and y_act here, the offset values have to be detected
and removed from the data before filling the matrix Φ. If no information is given about the
range of the input signal, we assume the first value of the input signal is the input offset, as
you can clearly see in Fig. 5(b). However, we cannot simply do the same for the output
offset, as there is always some noise in the output signal, even in the noise-reduced one
(y_act here). The best way to estimate the actual output offset is to average the first period of
the output data, which corresponds to the first period of the input data before its first change.
Thus, you have to write a loop in MATLAB to automatically detect the number of samples in the
first period of the output to be used in the average. Use a while loop and the mean function
for the average.
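The hint's offset-detection logic can be sketched as follows; the signals below are made-up stand-ins shaped like u_act/y_act (the names follow the lab, but the values are not the lab's recorded data):

```python
import numpy as np

# Made-up data: input held at 2.0 for 30 samples, then stepped; noisy output
rng = np.random.default_rng(3)
u_act = np.concatenate([np.full(30, 2.0), np.full(30, 3.0), np.full(30, 2.0)])
y_act = np.concatenate([np.full(30, 0.5), np.full(30, 0.8), np.full(30, 0.5)])
y_act = y_act + 0.01 * rng.standard_normal(y_act.size)

# Input offset: the first value of the input signal
u_offset = u_act[0]

# Output offset: average of the output over the first period of the input,
# i.e. the samples before the input's first change (while loop + mean)
n = 0
while n < u_act.size and u_act[n] == u_act[0]:
    n += 1
y_offset = np.mean(y_act[:n])

# Offset-free data
u = u_act - u_offset
y = y_act - y_offset
print(n, round(float(y_offset), 2))
```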
2. Identifying a second-order discrete-time linear model (0.6 marks, checked at 1 hour 30
minutes)
a. Create the matrix Φ as shown in Eq. (6) using the first half of the offset-free input/output data. If
starting from k = 1, you need to choose values for y(−1), y(0), u(−1), and u(0) as
initial conditions; they should be chosen rationally. You do not necessarily need to start from
k = 1: you can start some samples ahead, e.g., from k = 10. You can use matrix concatenation
to create Φ, as explained in the pre-lab exercise.

b. Find the least squares solution θ̂ as given in Eq. (8) to obtain the estimates of the
second-order discrete-time model parameters {a1, a2, b1, b2}.
c. Create a second-order discrete-time transfer function and state space representation of the
identified model in MATLAB, as given in Eq. (1) and (2), and display them on Command
Window using tf and ss functions.
3. Model verification and simulation (0.8 marks, checked at 2 hours 30 minutes)
a. Using the transfer function or state space model you defined in previous part, simulate the
identified model, which means finding the response of the identified model to the offset-free
input u as explained in Model Verification section.
b. You should simulate the model and plot the results: first, use the second half of the
offset-free input u and compare the simulated output with the second half of the offset-free
output y; then, use the entire offset-free input u and compare the simulated output
with the entire offset-free output y, as shown in Fig. 7(a) and (b), respectively. Use the lsim or
filter functions to find the simulated response. You could also try Simulink to simulate the
model for extra practice.
c. In Fig. 7(a), can you explain why the simulated output does not start from the same point as
the offset-free output y?

Fig. 7. Model verification, (a) comparison of 2nd half of the simulated output with the 2nd half of actual offset-free output,
(b) comparison of the entire simulated output with actual offset-free output.
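The lab's tf/ss + lsim workflow has a SciPy analogue that can serve as a sanity-check sketch; the parameter values, sampling time, and input below are made-up placeholders, not the lab's identified model or recorded data:

```python
import numpy as np
from scipy import signal

# Hypothetical parameters; in the lab these come from the Eq. (8) estimate
a1, a2, b1, b2 = -1.5, 0.7, 1.0, 0.5
Ts = 1.0   # assumed sampling time

# Discrete transfer function of Eq. (1): (b1 z + b2) / (z^2 + a1 z + a2)
G = signal.dlti([b1, b2], [1.0, a1, a2], dt=Ts)

# Made-up offset-free input; in the lab this would be the vector u
rng = np.random.default_rng(4)
u = np.repeat(rng.choice([-1.0, 1.0], size=40), 5)
t = np.arange(u.size) * Ts

# Equivalent of MATLAB's lsim: simulate the identified model's response
tout, y_sim = signal.dlsim(G, u, t=t)
y_sim = y_sim.ravel()
```

Comparing y_sim against the recorded output (e.g., via MSE) then completes the verification step.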

Post-lab Exercise
In this exercise, we want to compare the effect of different model orders on the accuracy of system
identification.
1. Follow the same procedure you did in Exercise 2 to identify a first-order model using offset-free
data u and y. The structure of a first-order difference equation is given as below,

y(k) = −a1 y(k−1) + b1 u(k−1).   (9)


2. Simulate the first-order model using the entire offset-free input and compare its simulated output
with the second order one you obtained before and the offset-free output y as shown in Fig. 8.

Fig. 8. Comparison of different order models with the actual offset-free output for accuracy purposes.

3. Use the mean squared error method to numerically compare these two models, as can be
seen in the top-left corner of Fig. 8. Which model would you choose for controller design, and
why?

Appendix
In step response modelling, the process is approximated by a first-order or second-order transfer
function, mostly with a pure delay. As an example, if the response of a process to a step input with
amplitude a = 2 is given as in Fig. A1, we can approximate the process with a first-order transfer
function G(s). The unknown parameters are the gain K, the time constant τ, and the time delay td,
which can easily be computed as in Fig. A1.
In this example, it is interesting to see that the original system was a second-order transfer function,
as shown in Fig. A2(a). The approximate first-order transfer function produces a response that
closely matches the original system's, as illustrated in Fig. A2(b). However, if the response
has oscillations, second-order or higher-order models should be considered, and the unknown
parameters should be determined using different approaches.

G(s) = K e^(−td·s) / (1 + τs),   U(s) = a/s

Y(s) = G(s)U(s) = [K e^(−td·s) / (1 + τs)] · (a/s)
⇒ y(t) = aK(1 − e^(−(t − td)/τ)) for t ≥ td.

As t → ∞, yss = lim y(t) = aK, thus K = yss/a.
At t = τ + td, y(τ + td) = 0.63aK, thus τ = y⁻¹(0.63aK) − td (geometrically from the graph).

From the plotted response (yss = 0.4 with a = 2; the output leaves zero at t = 0.2 s and reaches
0.63aK ≈ 0.252 at t ≈ 0.4 s): K = 0.2, τ = 0.2 s, td = 0.2 s.

Fig. A1. The response of the original system to a step input and the values of the approximate first-order model.
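The graphical reading of Fig. A1 can be reproduced numerically; the step response below is synthesized from the same assumed values (a = 2, K = 0.2, τ = 0.2 s, td = 0.2 s), and the parameters are then recovered from the curve:

```python
import numpy as np

# Synthetic first-order-plus-delay step response matching Fig. A1's values
a, K, tau, td = 2.0, 0.2, 0.2, 0.2
t = np.arange(0, 2.5, 0.001)
y = np.where(t < td, 0.0, a * K * (1 - np.exp(-(t - td) / tau)))

y_ss = y[-1]                       # steady-state value ~ aK
K_est = y_ss / a                   # K = yss / a
td_est = t[np.argmax(y > 0)]       # first sample where the output moves
t63 = t[np.argmax(y >= 0.63 * y_ss)]
tau_est = t63 - td_est             # tau = y^-1(0.63 aK) - td
print(round(K_est, 2), round(tau_est, 2), round(td_est, 2))   # ≈ 0.2 0.2 0.2
```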

Fig. A2. (a) Simulation block diagram, (b) Comparison of the step responses generated by the original and approximate
models over 0-2.5 s (legend: Original System Response, Approximate Model Response).

Linear Least Squares Method
One of the most common methods for parametric identification is the so-called Least Squares (LS)
method. The method of least squares grew out of the fields of astronomy and geodesy as scientists
and mathematicians sought to provide solutions to the challenges of navigating the Earth's oceans
during the age of exploration. The accurate description of the behavior of celestial bodies was the
key to enabling ships to sail in open seas, where sailors could no longer rely on land sightings for
navigation (source: Wikipedia). In this lab, our focus is on the linear least squares method in
mathematics rather than statistics known as linear regression, even though both have the same
concept but different interpretations.
Mathematically, the linear least squares method is mostly applied to the problem of approximately
solving a set of linear equations in which there are more equations than unknown variables or
parameters. The best approximation is obtained by minimizing the sum of squared differences
between the data values (right-hand side of the equations) and their corresponding modelled values
(left-hand side of the equations). The approach is called "linear" least squares since the assumed
equations to be estimated are linear in the parameters. Linear least squares problems
are convex and have a unique closed-form solution, provided that the number of data points
used for constructing the equations is equal to or greater than the number of unknown parameters.
One of the most common applications of linear least squares is in curve-fitting problems, where a set
of data points from an observation or experiment is fitted with a curve having a linear-in-parameter
form. To see how the linear least squares method works, consider the following
simple problem of solving two linear equations with two unknowns,

a1 u11 + a2 u12 = y1
a1 u21 + a2 u22 = y2.   (A1)
What are the possible answers for a1 and a2 that satisfy both equations if the uij and yi are known
real numbers (for i, j = 1, 2)? From linear algebra, we know that if the equations are linearly
independent, there exists a unique solution, as below,

[u11 u12; u21 u22][a1; a2] = [y1; y2]  ⇒  [a1; a2] = [u11 u12; u21 u22]⁻¹[y1; y2],
a1 = (u22 y1 − u12 y2)/(u11 u22 − u12 u21),   a2 = (u11 y2 − u21 y1)/(u11 u22 − u12 u21),   (A2)
provided det([u11 u12; u21 u22]) = u11 u22 − u12 u21 ≠ 0.
If the above equations are linearly dependent (determinant equals zero), no specific solution could
be found (one variable can be chosen randomly/arbitrarily in order to find the other one). Now
consider the following three equations,

a1 u11 + a2 u12 = y1
a1 u21 + a2 u22 = y2
a1 u31 + a2 u32 = y3.   (A3)

It is not usually possible to find a unique solution that satisfies all equations at the same time (there
are more equations than the unknown variables/parameters a1 and a2). However, it is possible to
find a solution that approximately solves the above equations in some best sense, i.e., the right-hand
side of each equation has the closest value to its left-hand side. Thus, the error equations can
be defined as follows,


e1 = y1 − (a1 u11 + a2 u12) ≠ 0
e2 = y2 − (a1 u21 + a2 u22) ≠ 0
e3 = y3 − (a1 u31 + a2 u32) ≠ 0

[e1; e2; e3] = [y1; y2; y3] − [u11 u12; u21 u22; u31 u32][a1; a2],   (A4)

i.e., E = Y − Φθ = Y − Ŷ, where ŷi = a1 ui1 + a2 ui2 are the modelled values.

Therefore, the least squares problem is defined as finding the best solution θ̂ which minimizes the sum
of the squared errors ei (for i = 1, 2, 3). This can be written as the minimization of the following
quadratic cost/loss/objective function V(θ),
V(θ) = (1/2) Σᵢ₌₁³ ei² = (1/2)(e1² + e2² + e3²) = (1/2)EᵀE = (1/2)‖E‖²,   θ̂ = argminθ V(θ) = [â1 â2]ᵀ.   (A5)

To find the solution that gives the minimum value of V(θ), you just need to take its derivative
with respect to the independent variable θ and find the zeros of the resulting equation. From
calculus, however, we know that V(θ) is a scalar function of two independent variables a1 and a2
(in the form of a vector, θ = [a1 a2]ᵀ). Therefore, you need to take the partial derivative of the scalar
cost function V(θ) with respect to the vector θ, also known as the gradient of V(θ), as follows,

θ̂ = argminθ V(θ) = argminθ (1/2)EᵀE = argminθ (1/2)(Y − Φθ)ᵀ(Y − Φθ),

∂V(θ)/∂θ = 0  ⇒  (1/2) ∂(EᵀE)/∂θ = 0
(1/2) ∂((Y − Φθ)ᵀ(Y − Φθ))/∂θ = 0
(1/2) ∂((Yᵀ − θᵀΦᵀ)(Y − Φθ))/∂θ = 0
(1/2) ∂(YᵀY − YᵀΦθ − θᵀΦᵀY + θᵀΦᵀΦθ)/∂θ = 0
(1/2) ∂(YᵀY − 2θᵀΦᵀY + θᵀΦᵀΦθ)/∂θ = 0   (since YᵀΦθ = (YᵀΦθ)ᵀ = θᵀΦᵀY is a scalar)
(1/2)(−2ΦᵀY + 2ΦᵀΦθ) = 0
⇒  θ̂ = (ΦᵀΦ)⁻¹ΦᵀY.   (A6)

Reminder from algebra: ∂(xᵀQx)/∂x = 2Qx for a symmetric matrix Q, and ∂(xᵀq)/∂x = ∂(qᵀx)/∂x = q for vectors x and q.

References
[1] K. J. Astrom and B. Wittenmark, Adaptive Control. 2nd ed., Upper Saddle River, NJ: Prentice Hall, 1995.
[2] L. Ljung, System Identification: Theory for the User. 2nd ed., Upper Saddle River, NJ: Prentice Hall, 1999.
[3] N. S. Nise, Control Systems Engineering. 7th ed., Hoboken, NJ: John Wiley & Sons, 2015.

V1 by Dr. Hailong Huang, July 2018.
V0 by Dr. Arash KHATAMIANFAR, July 2017.
