
Chapter 7: Development of Empirical Models from Process Data


• In some situations it is not feasible to develop a theoretical (physically-based) model due to:
  1. Lack of information
  2. Model complexity
  3. Engineering effort required.
• An attractive alternative: develop an empirical dynamic model from input-output data.
• Advantage: less effort is required.
• Disadvantage: the model is only valid (at best) for the range of data used in its development; i.e., empirical models usually do not extrapolate very well.
Simple Linear Regression: Steady-State Model
• As an illustrative example, consider a simple linear model between an output variable y and an input variable u,

  y = β1 + β2 u + ε

  where β1 and β2 are the unknown model parameters to be estimated and ε is a random error.

• Predictions of y can be made from the regression model,

  ŷ = β̂1 + β̂2 u     (7-3)

  where β̂1 and β̂2 denote the estimated values of β1 and β2, and ŷ denotes the predicted value of y.

• Let Y denote the measured value of y. Each pair of observations (ui, Yi) satisfies:

  Yi = β1 + β2 ui + εi     (7-1)
The Least Squares Approach
• The least squares method is widely used to calculate the values of β1 and β2 that minimize the sum of the squares of the errors, S, for an arbitrary number of data points, N:

  S = Σ_{i=1}^{N} εi² = Σ_{i=1}^{N} (Yi − β1 − β2 ui)²     (7-2)

• Replace the unknown values of β1 and β2 in (7-2) by their estimates. Then, using (7-3), S can be written as:

  S = Σ_{i=1}^{N} ei²

  where the i-th residual, ei, is defined as,

  ei ≜ Yi − ŷi     (7-4)
The Least Squares Approach (continued)

• The least squares solution that minimizes the sum of squared errors, S, is given by:

  β̂1 = (Suu Sy − Suy Su) / (N Suu − Su²)     (7-5)

  β̂2 = (N Suy − Su Sy) / (N Suu − Su²)     (7-6)

  where:

  Su ≜ Σ_{i=1}^{N} ui,   Suu ≜ Σ_{i=1}^{N} ui²,   Sy ≜ Σ_{i=1}^{N} Yi,   Suy ≜ Σ_{i=1}^{N} ui Yi
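The closed-form solution in Eqs. 7-5 and 7-6 translates directly into a few lines of code. The following is a minimal Python/NumPy sketch; the function name and the example data are illustrative assumptions, not from the text:

```python
import numpy as np

def fit_simple_linear(u, Y):
    """Estimate beta1 (intercept) and beta2 (slope) from Eqs. 7-5 and 7-6."""
    u = np.asarray(u, dtype=float)
    Y = np.asarray(Y, dtype=float)
    N = len(u)
    Su, Suu = u.sum(), (u**2).sum()      # S_u and S_uu
    Sy, Suy = Y.sum(), (u * Y).sum()     # S_y and S_uy
    denom = N * Suu - Su**2
    beta1 = (Suu * Sy - Suy * Su) / denom    # Eq. 7-5
    beta2 = (N * Suy - Su * Sy) / denom      # Eq. 7-6
    return beta1, beta2

# Example: noisy data roughly following y = 2 + 0.5*u
u = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.6, 3.1, 3.4, 4.1, 4.4])
print(fit_simple_linear(u, Y))   # approximately (2.14, 0.46)
```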
Extensions of the Least Squares Approach
• Least squares estimation can be extended to more general models with:
  1. More than one input or output variable.
  2. Functionals of the input variables u, such as polynomials and exponentials, as long as the unknown parameters appear linearly.

• A general nonlinear steady-state model which is linear in the parameters has the form,

  y = Σ_{j=1}^{p} βj Xj + ε     (7-7)

  where each Xj is a nonlinear function of u.
The sum-of-squares function analogous to (7-2) is

  S = Σ_{i=1}^{N} (Yi − Σ_{j=1}^{p} βj Xij)²     (7-8)

which can be written as,

  S = (Y − Xβ)ᵀ (Y − Xβ)     (7-9)

where the superscript T denotes the matrix transpose and:

  Y = [Y1, Y2, …, YN]ᵀ,   β = [β1, β2, …, βp]ᵀ
Here X is the N × p matrix whose element in row i and column j is Xij, i.e., the value of the j-th regressor for the i-th observation.

The least squares estimate β̂ is given by,

  β̂ = (XᵀX)⁻¹ XᵀY     (7-10)

provided that the matrix XᵀX is nonsingular so that its inverse exists.

Note that the matrix X is comprised of functions of the input u; for example, if:

  y = β1 + β2 u + β3 u² + ε

this model is in the form of (7-7) with X1 = 1, X2 = u, and X3 = u².
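As an illustration of Eq. 7-10, the sketch below fits the quadratic example y = β1 + β2 u + β3 u² by building X column by column and solving the normal equations. The simulated data and variable names are assumptions; in practice np.linalg.lstsq is preferred over forming (XᵀX)⁻¹ explicitly for numerical reasons.

```python
import numpy as np

# Simulated steady-state data (hypothetical): y = 1 + 2u + 0.5u^2 plus noise
rng = np.random.default_rng(0)
u = np.linspace(0.0, 5.0, 20)
Y = 1.0 + 2.0 * u + 0.5 * u**2 + rng.normal(scale=0.1, size=u.size)

# Build X with columns X1 = 1, X2 = u, X3 = u^2  (Eq. 7-7 with p = 3)
X = np.column_stack([np.ones_like(u), u, u**2])

# Least squares estimate, Eq. 7-10: beta_hat = (X^T X)^{-1} X^T Y
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Equivalent, but better conditioned numerically:
beta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)

print(beta_hat)    # approximately [1.0, 2.0, 0.5]
```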
Fitting First- and Second-Order Models Using Step Tests

• Simple transfer function models can be obtained graphically from step response data.
• A plot of the output response of a process to a step change in input is sometimes referred to as a process reaction curve.
• If the process of interest can be approximated by a first- or second-order linear model, the model parameters can be obtained by inspection of the process reaction curve.
• The response of a first-order model, Y(s)/U(s) = K/(τs + 1), to a step change of magnitude M is:

  y(t) = KM (1 − e^(−t/τ))     (5-18)
• The initial slope of the normalized step response is given by:

  d(y/KM)/dt |_(t=0) = 1/τ     (7-15)

• The gain can be calculated from the steady-state changes in u and y:

  K = Δy/Δu = Δy/M

  where Δy is the steady-state change in y.
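These two relations give a simple recipe for a first-order fit: K from the steady-state change and τ from the time at which the response reaches 63.2% of that change (Eq. 5-18 with t = τ). The sketch below is a minimal, assumed implementation; the function name, data handling, and use of interpolation are illustrative choices:

```python
import numpy as np

def fit_first_order_step(t, y, M, y0=0.0):
    """Estimate K and tau from step-response data for a step of magnitude M."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    dy = y[-1] - y0                 # steady-state change in y
    K = dy / M                      # process gain
    y_632 = y0 + 0.632 * dy         # 63.2% response level
    tau = np.interp(y_632, y, t)    # assumes a monotonically rising response
    return K, tau

# Example: synthetic first-order response with K = 2, tau = 5, M = 1
t = np.linspace(0.0, 40.0, 400)
y = 2.0 * (1.0 - np.exp(-t / 5.0))
print(fit_first_order_step(t, y, M=1.0))   # approximately (2.0, 5.0)
```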
Figure 7.3 Step response of a first-order system and graphical constructions used to estimate the time constant, τ.
First-Order Plus Time Delay (FOPTD) Model

  G(s) = K e^(−θs) / (τs + 1)

For this FOPTD model, we note the following characteristics of its step response:

1. The response attains 63.2% of its final value at time t = τ + θ.
2. The line drawn tangent to the response at the point of maximum slope (t = θ) intersects the y/KM = 1 line at t = τ + θ.
3. The step response is essentially complete at t = 5τ; in other words, the settling time is ts = 5τ.
Figure 7.5 Graphical analysis of the process reaction curve to obtain parameters of a first-order plus time delay model.
There are two generally accepted graphical techniques for determining the model parameters τ, θ, and K.

Method 1: Slope-intercept method

First, a tangent line is drawn through the inflection point of the process reaction curve in Fig. 7.5. Then τ and θ are determined by inspection.

Alternatively, τ can be found from the time at which the normalized response is 63.2% complete, or from the settling time ts by setting τ = ts/5.

Method 2: Sundaresan and Krishnaswamy's method

This method avoids use of the inflection-point construction entirely to estimate the time delay.
Sundaresan and Krishnaswamy’s Method
• They proposed that two times, t1 and t2, be estimated from a step response curve, corresponding to the 35.3% and 85.3% response times, respectively.
• The time delay and time constant are then estimated from the following equations:

  θ = 1.3 t1 − 0.29 t2
  τ = 0.67 (t2 − t1)     (7-19)

• These values of θ and τ approximately minimize the difference between the measured response and the model, based on a correlation for many data sets.
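Because the two characteristic times are read directly from the response data, the method is easy to automate. The sketch below is a minimal, assumed implementation in Python; the function name, interpolation approach, and synthetic data are illustrative, not from the text:

```python
import numpy as np

def sundaresan_krishnaswamy(t, y, M, y0=0.0):
    """Estimate K, tau, theta of a FOPTD model from step-response data
    using the 35.3% and 85.3% response times (Eq. 7-19)."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    dy = y[-1] - y0
    K = dy / M
    # times at which the response covers 35.3% and 85.3% of its total change
    t1 = np.interp(y0 + 0.353 * dy, y, t)   # assumes a non-decreasing response
    t2 = np.interp(y0 + 0.853 * dy, y, t)
    theta = 1.3 * t1 - 0.29 * t2            # Eq. 7-19
    tau = 0.67 * (t2 - t1)
    return K, tau, theta

# Example: synthetic FOPTD response with K = 1, tau = 4, theta = 2
t = np.linspace(0.0, 40.0, 2000)
y = np.where(t > 2.0, 1.0 - np.exp(-(t - 2.0) / 4.0), 0.0)
print(sundaresan_krishnaswamy(t, y, M=1.0))   # roughly (1.0, 4.0, 2.1)
```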
Estimating Second-order Model Parameters
Using Graphical Analysis
• In general, a better approximation to an experimental step response can be obtained by fitting a second-order model to the data.
• Figure 7.6 shows the range of shapes that can occur for the step response of the model,

  G(s) = K / ((τ1 s + 1)(τ2 s + 1))     (5-39)

• Figure 7.6 includes two limiting cases: τ2/τ1 = 0, where the system becomes first order, and τ2/τ1 = 1, the critically damped case.
• The larger of the two time constants, τ1, is called the dominant time constant.
Figure 7.6 Step response for several overdamped second-order systems.
Smith’s Method
• Assumed model:

  G(s) = K e^(−θs) / (τ²s² + 2ζτs + 1)

• Procedure:
  1. Determine the 20% and 60% response times, t20 and t60, from the step response.
  2. Find ζ and t60/τ from Fig. 7.7, using the ratio t20/t60.
  3. Calculate τ from the value of t60/τ, since t60 is known.
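The first step of the procedure, reading t20 and t60 off the response data, is easy to automate; the lookup in Fig. 7.7 is then done by hand. Below is a minimal sketch assuming a non-oscillatory (overdamped) response; the function name and interpolation approach are illustrative assumptions:

```python
import numpy as np

def smith_times(t, y, y0=0.0):
    """Return t20, t60, and the ratio t20/t60 from step-response data.
    The ratio is then used with Fig. 7.7 to read off zeta and t60/tau."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    dy = y[-1] - y0
    t20 = np.interp(y0 + 0.20 * dy, y, t)   # 20% response time (monotonic y assumed)
    t60 = np.interp(y0 + 0.60 * dy, y, t)   # 60% response time
    return t20, t60, t20 / t60

# After reading t60/tau from Fig. 7.7 for the computed ratio:
#   tau = t60 / (value of t60/tau from the figure)
```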
Figure 7.7 Correlation used in Smith's method to determine ζ and t60/τ from the ratio t20/t60.
Fitting an Integrator Model
to Step Response Data

In Chapter 5 we considered the response of a first-order process to a step change in input of magnitude M:

  y1(t) = KM (1 − e^(−t/τ))     (5-18)

For short times, t < τ, the exponential term can be approximated by

  e^(−t/τ) ≈ 1 − t/τ

so that the approximate response is:

  y1(t) ≈ KM [1 − (1 − t/τ)] = (KM/τ) t     (7-22)
This ramp-like response is virtually indistinguishable from the step response of the integrating element

  G2(s) = K2/s     (7-23)

In the time domain, the step response of an integrator is

  y2(t) = K2 M t     (7-24)

Hence an approximate way of modeling a first-order process is to find the single parameter

  K2 = K/τ     (7-25)

that matches the early ramp-like response to a step change in input.
If the original process transfer function contains a time delay (cf. Eq. 7-16), the approximate short-term response to a step input of magnitude M would be

  y(t) = (KM/τ) (t − θ) S(t − θ)

where S(t − θ) denotes a delayed unit step function that starts at t = θ.
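The quality of this short-time approximation can be checked numerically. The sketch below compares the FOPTD step response with the integrator-plus-time-delay approximation using K2 = K/τ (Eq. 7-25), in the spirit of the comparison in Fig. 7.10; the parameter values are illustrative assumptions:

```python
import numpy as np

# Illustrative FOPTD parameters and step magnitude (assumed values)
K, tau, theta, M = 2.0, 10.0, 1.0, 1.0
K2 = K / tau                       # Eq. 7-25

t = np.linspace(0.0, 5.0, 11)      # short times, t - theta < tau
step = (t > theta).astype(float)   # delayed unit step S(t - theta)

y_foptd = K * M * (1.0 - np.exp(-np.clip(t - theta, 0.0, None) / tau)) * step
y_integ = K2 * M * (t - theta) * step   # integrator-plus-delay approximation

for ti, a, b in zip(t, y_foptd, y_integ):
    print(f"t = {ti:4.1f}   FOPTD: {a:.3f}   integrator: {b:.3f}")
```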
Figure 7.10 Comparison of step responses for a FOPTD model (solid line) and the approximate integrator plus time delay model (dashed line).
Development of Discrete-Time
Dynamic Models
• A digital computer by its very nature deals internally with discrete-time data, i.e., numerical values of functions at equally spaced intervals determined by the sampling period.
• Thus, discrete-time models such as difference equations are widely used in computer control applications.
• One way a continuous-time dynamic model can be converted to discrete-time form is by employing a finite difference approximation.
• Consider a nonlinear differential equation,

  dy(t)/dt = f(y, u)     (7-26)

  where y is the output variable and u is the input variable.
• This equation can be numerically integrated (though with some error) by introducing a finite difference approximation for the derivative.
• For example, the first-order, backward difference approximation to the derivative at t = kΔt is

  dy/dt ≅ [y(k) − y(k − 1)] / Δt     (7-27)

  where Δt is the integration interval specified by the user and y(k) denotes the value of y(t) at t = kΔt. Substituting Eq. 7-27 into (7-26) and evaluating f(y, u) at the previous values of y and u (i.e., y(k − 1) and u(k − 1)) gives:

  [y(k) − y(k − 1)] / Δt ≅ f(y(k − 1), u(k − 1))     (7-28)

  y(k) = y(k − 1) + Δt f(y(k − 1), u(k − 1))     (7-29)
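Equation 7-29 is simply an explicit Euler-type recursion, so it is straightforward to code. The sketch below applies it to an assumed first-order example, dy/dt = (−y + Ku)/τ; the function name and parameter values are illustrative, not from the text:

```python
import numpy as np

def simulate_discrete(f, y0, u_seq, dt):
    """March Eq. 7-29 forward: y(k) = y(k-1) + dt * f(y(k-1), u(k-1))."""
    y = np.empty(len(u_seq) + 1)
    y[0] = y0
    for k in range(1, len(y)):
        y[k] = y[k - 1] + dt * f(y[k - 1], u_seq[k - 1])
    return y

# Assumed continuous-time model: first-order process dy/dt = (-y + K*u)/tau
K, tau = 2.0, 5.0
f = lambda y, u: (-y + K * u) / tau

dt = 0.1                          # integration interval specified by the user
u_seq = np.ones(200)              # unit step input held for 20 time units
y = simulate_discrete(f, y0=0.0, u_seq=u_seq, dt=dt)
print(y[-1])                      # approaches the steady-state value K = 2.0
```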
Second-Order Difference
Equation Models
• Parameters in a discrete-time model can be estimated directly from input-output data based on linear regression.
• This approach is an example of system identification (Ljung, 1999).
• As a specific example, consider the second-order difference equation in (7-36). It can be used to predict y(k) from data available at times (k − 1)Δt and (k − 2)Δt:

  y(k) = a1 y(k − 1) + a2 y(k − 2) + b1 u(k − 1) + b2 u(k − 2)     (7-36)

• In developing a discrete-time model, the model parameters a1, a2, b1, and b2 are considered to be unknown.
• This model can be expressed in the standard form of Eq. 7-7,

  y = Σ_{j=1}^{p} βj Xj + ε     (7-7)

  by defining:

  β1 ≜ a1,  β2 ≜ a2,  β3 ≜ b1,  β4 ≜ b2
  X1 ≜ y(k − 1),  X2 ≜ y(k − 2),  X3 ≜ u(k − 1),  X4 ≜ u(k − 2)

• The parameters are estimated by minimizing a least squares error criterion:

  S = Σ_{i=1}^{N} (Yi − Σ_{j=1}^{p} βj Xij)²     (7-8)
Equivalently, S can be expressed as,

  S = (Y − Xβ)ᵀ (Y − Xβ)     (7-9)

where the superscript T denotes the matrix transpose and:

  Y = [Y1, Y2, …, YN]ᵀ,   β = [β1, β2, …, βp]ᵀ

The least squares solution of (7-9) is:

  β̂ = (XᵀX)⁻¹ XᵀY     (7-10)
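Putting the last three slides together, identifying the second-order difference equation (7-36) amounts to stacking lagged outputs and inputs into the X matrix and solving the least squares problem (7-10). The sketch below is a minimal, assumed implementation; the simulated "true" parameters and the use of np.linalg.lstsq are illustrative choices, not from the text:

```python
import numpy as np

def identify_second_order(y, u):
    """Estimate a1, a2, b1, b2 in Eq. 7-36 from input-output data by least squares."""
    y = np.asarray(y, dtype=float)
    u = np.asarray(u, dtype=float)
    # Regressor matrix: columns X1..X4 are y(k-1), y(k-2), u(k-1), u(k-2)
    X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
    Y = y[2:]                                   # outputs to be predicted, k = 2, ..., N-1
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta                                 # [a1, a2, b1, b2]

# Generate data from an assumed "true" model; a random input is used so that the
# regressors are sufficiently exciting (a constant step input would make the
# u(k-1) and u(k-2) columns identical and the problem rank-deficient).
rng = np.random.default_rng(1)
a1, a2, b1, b2 = 1.2, -0.35, 0.25, 0.15
N = 200
u = rng.normal(size=N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = a1 * y[k-1] + a2 * y[k-2] + b1 * u[k-1] + b2 * u[k-2]

print(identify_second_order(y, u))   # recovers approximately [1.2, -0.35, 0.25, 0.15]
```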
