
CONTROLLER PERFORMANCE MONITORING

- Performance classification with low frequency data

Submitted by

Lekshmi P
ABSTRACT

Control engineering deals with the theory, design and application of control systems. The
primary objective of control systems is to maximize profits by transforming raw materials
into products while satisfying criteria such as product-quality specifications, operational
constraints, safety and environmental regulations. The design, tuning and implementation
of control strategies and controllers are undertaken in the first phase of the solution of
control problems. After some time in operation, however, even a well functioning control
system is subject to changes in the characteristics of the material/product being processed,
modifications of the operation strategy and changes in the status of the plant equipment
due to aging, wear, fouling, component modifications, etc., all of which lead to the
degradation of control performance. Degradation may also be caused by inadequate
controller tuning, lack of maintenance, poor design or equipment malfunction. Monitoring
each of the loops individually is not an easy task. Optimal process control can only be
achieved when all the components of a control system are working properly. Hence, before
tuning a loop, one must verify that each component is operating as specified and that the
design is appropriate. Even for single control loops, the task of getting and keeping all
components in good health is not trivial. The fact that a plant in the process industry
typically comprises hundreds to thousands of control loops reveals the huge challenge of
monitoring and ensuring top performance of such complex control systems.

Controller performance monitoring is an online automated procedure that evaluates the
performance of the control system and delivers information to plant personnel for
determining whether specified performance targets and response characteristics are being
met. The objective of the present work is to obtain a framework which will help to reduce
the load on the DCS and to perform high-level loop performance classification with low
frequency plant data.

Keywords: Controller performance monitoring; loop performance classification; low frequency plant data.
CHAPTER 1

INTRODUCTION

1.1 CONTROLLER PERFORMANCE MONITORING

1.1.1 Introduction

A control system is an interconnection of components, i.e., sensor, process/plant, actuator
and controller, forming a system configuration whose general objective is to influence the
behavior of the system in a desired way. The central component is the process whose
output is to be controlled. The controller seeks to maintain the measured process variable
(PV) at a specified set point (SP) in spite of the disturbances acting on the process. The
actuator is the device that includes the final control element (a valve, damper, etc.) and its
associated equipment (such as a positioner). It receives the controller output (OP) signal,
reacts in an appropriate fashion to affect the process, and consequently causes the PV to
respond in the desired manner. The combination of process and actuator is called the
plant.

Figure 1.1 Component block diagram of a closed-loop system


Optimal process control can only be achieved when all the aforementioned components are
working properly. Hence, before tuning a loop, one must verify that each component is
operating as specified and that the design is appropriate. Even for single control loops,
the task of getting and keeping all components in good health is not trivial. The fact that
a plant in the process industry typically comprises hundreds to thousands of control loops
reveals the huge challenge of monitoring and ensuring top performance of such complex
control systems.

1.1.2 Control loop issues

The state of control loop performance has been analysed in many surveys. It is often
concluded that basic control principles are ignored, that control algorithms are
incorrectly chosen and tuned, and that sensors and actuators are poorly selected or
maintained. Consequently, the control performance of many loops can be significantly
improved by proper loop retuning, controller redesign or equipment maintenance. The
major control loop issues are

 External disturbances
 Limited maintenance and inadequate controller tuning
 Equipment malfunctioning or poor design
 Inappropriate control structure

1.1.3 Control performance monitoring - Definition and advantages

The main objective of control performance monitoring (CPM) is to provide online
automated procedures that evaluate the performance of the control system and deliver
information to plant personnel for determining whether specified performance targets and
response characteristics are being met by the controlled process variables. This should
help detect and avoid performance deterioration owing to variations in the process and
operation. Recommendations and/or actions are generated to inspect/maintain control
loop components, e.g., sensors and actuators, or to re-tune the controller based on the
performance metrics calculated within the assessment step.
Controller performance monitoring will help in achieving

 Safer operation and reduced environmental impact
 Sustainable manufacturing
 Efficiency gains
 Quality gains
 Agility gains

1.2 OBJECTIVES

The aim of the project work is to perform loop classification with low frequency data.
The major objectives are listed below:

 To propose a framework which will help to reduce the load on DCS

 To see if a single performance index can do the high level loop classification

 To check if the same index can classify the loops reliably with low frequency data

 To determine the least possible frequency that can be used to sample the loops
without losing the loop characteristics

1.3 ORGANISATION OF THE REPORT

Chapter 2: Literature review
Chapter 3: Control performance monitoring
Chapter 4: Analysis with plant data
Chapter 5: Summary and conclusions
CHAPTER 2

LITERATURE REVIEW

Thomas J. Harris (1989) [1] described a very simple technique for ascertaining the best
theoretically achievable feedback control performance as measured by the output mean
square error. The minimum variance controller is regarded as the best possible control in
the mean square sense for processes described by linear transfer functions with additive
disturbances.

E.H. Bristol (1990) [2] explained a new data compression technique that permits computer
trending to effectively store and analyse practically unlimited amounts of process history
as trend records for later evaluation. The swinging door algorithm is a very simple and
effective example of these techniques.

Lane Desborough and Thomas Harris (1992) [3] introduced a normalized performance
index to characterize the performance of feedback control schemes. It provides a measure
of the proximity of the control to minimum variance control. A fast, simple, on-line
method for estimating the index is given.

T. Hagglund (1995) [4] explained a procedure for automatic monitoring of control loop
performance, in which oscillations in the loop are detected. The procedure is automatic in
the sense that no additional parameters except the normal controller parameters have to
be specified.

R. Russell Rhinehart (1995) [5] proposed a method that provides automated recognition
whenever a controller is not performing well and that is insensitive to changes in process
noise.

Tina Miao and Dale E. Seborg (1999) [6] proposed a method to detect excessively
oscillatory feedback control loops. The technique is simple and requires only normal
operating data.

Alexander Horch (2000) [7] proposed the development of methods for automatic
condition monitoring of control loops with application to the process industry. This
enables both detection and diagnosis of malfunctioning control loops. The Harris index is
modified to cover a large range of processes.

Michael A. Paulonis and John W. Cox (2003) [8] developed a large-scale controller
performance assessment system in which the controllers can be sorted in the order of
their performance to identify which need attention. Performance history is available to
track improvement or degradation, and reports are automatically generated and sent to
subscribers to keep them informed of relevant changes with minimal investment of their
time.

Nina F. Thornhill et al. (2004) [9] presented the impact of compression on data-driven
methods and an automated algorithm by which the presence of piecewise linear
compression may be inferred during the pre-processing phase of a data-driven analysis.

Alexander Horch and Friedrun Heiber (2004) [10] evaluated control performance by
means of performance indices computed from large amounts of measurement data. The
focus is twofold: firstly to assess the information that can be deduced from many data
sets, and secondly to investigate the usefulness of simple performance measures using
established methods and some useful new ideas.

Riku Pollanen et al. (2005) [11] demonstrated a simulation environment consisting of
Matlab-based process simulation, control and monitoring tools connected to a commercial
PC automation system with the aid of OPC. A summary of typical indices and case
studies of real-time control performance assessment are given.
Alan J. Hugo (2006) [12] explained various controller performance assessment
techniques, their applicability to the requirements of control engineers, and their
advantages and limitations.

Rachid A. Ghraizi et al. (2007) [13] focused on performance assessment of industrial
controllers using process data collected at regular intervals of time. A methodology based
on the predictability of controller errors is also proposed for performance monitoring.

Habil Jelali (2010) [14] gives an overall idea about control performance monitoring:
its techniques, need and challenges. The different metrics used for controller performance
monitoring are also explained.

Jacques F. Smuts (2011) [15] explains how to get the best performance from challenging
loops. Problems originating from within the control loop and outside of the control loop
are studied, and appropriate corrective actions to solve the problems and to improve the
control performance are presented.

Alexey Zakharov and Sirkka-Liisa Jamsa-Jounela (2014) [16] explain a method for the
detection of oscillation, which is a necessary step for determining valve stiction. The
paper proposes an oscillation detection method that directly evaluates the similarity of
the shapes of subsequent oscillation periods by means of a correlation coefficient.

Karl J. Astrom [17] explains stochastic control theory, which provides the basic control
principles that form the basis for the analysis.

The book by Douglas C. Montgomery [18] gives the basics of statistical techniques. The
principles of statistical theory, linear algebra and analysis guide the development of
efficient experimental designs for factor settings.
CHAPTER 3

CONTROL PERFORMANCE MONITORING

3.1 INTRODUCTION

Most modern industrial plants have hundreds or even thousands of automatic control
loops. These loops can be simple proportional-integral-derivative (PID) loops or more
sophisticated model-based linear and non-linear control loops. It has been reported that as
many as 60% of all industrial controllers have performance problems. Having an
automated means of detecting when a loop is not performing well and then diagnosing
the root cause is essential, because these loops play a vital role in product quality, safety
and ultimately economics. Some of the obstacles that prevent this automatic assessment
from being a part of the day-to-day maintenance program include the lack of a
user-friendly interface, readily understandable report generation, diagnosis information in
text form, a single composite index ranking the loop performance, and reliable
computational software tools. In addition, education of the operations staff is essential to
make full use of some of the currently available time and frequency domain methodologies.

3.2 NEED FOR CONTROLLER PERFORMANCE MONITORING


A primary difficulty of controller performance monitoring is the sheer number of loops to
be monitored: a typical large processing operation consists of hundreds of control loops,
often operating under varying conditions. The majority of the controllers use the PID
algorithm, but there may also be advanced multivariable model-based controllers and
other application-specific controllers. Maintenance of these loops is generally the
responsibility of either a control engineer or an instrument technician, but other
responsibilities, coupled with the tediousness of consistently monitoring a large number
of loops, often result in control problems being overlooked for long periods of time.
However, this task is well suited for automation. The data already resides in the DCS or
plant historian, and plant tests are not required, as it is the closed-loop response of the
process that is of interest. A complication arises from the fact that any deviation from set
point is a function of both the controller performance and the plant disturbance spectrum.
Any controller performance methodology must separate the effects of plant disturbances
(which are external to the controller) from tuning, equipment problems and out-of-service
issues. Control performance monitoring ensures that the process control assets remain
reliable and efficient. It is a condition-based application that monitors, identifies,
diagnoses and remedies control asset issues across all plant layers. In addition, it has the
following advantages: safer operation and reduced environmental impact, more
sustainable manufacturing, efficiency gains, quality gains and agility gains.

3.3 PRINCIPLE OF CONTROL PERFORMANCE MONITORING

The primary objective of control systems is to maximize profits by transforming raw
materials into products while satisfying criteria such as product-quality specifications,
operational constraints, safety and environmental regulations. The design, tuning and
implementation of control strategies and controllers are undertaken in the first phase of
the solution of control problems. When properly carried out, the result of this phase
should be a well functioning and well performing control system. After some time in
operation, changes in the characteristics of the material/product being used, modifications
of the operation strategy and changes in the status of the plant equipment (aging, wear,
fouling, component modifications, etc.) may lead to the degradation of control
performance. Poor control performance leads to poor plant performance, which in turn
implies poor financial performance. This, again, underlines the need for some form of
regularly scheduled maintenance of control loops to ensure consistently high levels of
performance.
Figure 3.1 Effect of poor control performance

The second phase in the solution of control problems should be the supervision of the
control loops and the early detection of performance deterioration. The process industries
are faced with ever-increasing demands on product quality, productivity and
environmental regulations. These force companies to operate their plants at top
performance, hence the need for control systems with consistently high performance.
Control systems are thus increasingly recognized as capital assets that should be
maintained, monitored and revised routinely and automatically. These tasks are
performed today within the framework of control performance monitoring (CPM), which
has received considerable attention from both the academic and industrial communities in
the last decade.

The main objective of CPM is to provide online automated procedures that evaluate the
performance of the control system and deliver information to plant personnel for
determining whether specified performance targets and response characteristics are being
met by the controlled process variables. Control performance assessment techniques
consist of benchmark selection, assessment, diagnosis and improvement.

3.4 PROCEDURE FOR CONTROL PERFORMANCE MONITORING

Loop performance assessment is a complex task comprising multiple steps. The first step
is the selection of a benchmark against which the control performance will be evaluated;
this is the desired or best-possible performance given the existing plant and control
equipment. The second step is the assessment of the loops: based on calculations using
measured data, the closeness of the current control performance to the selected
benchmark is tested. This results in a performance classification of the control loop as
excellent/good/fair/poor based on the performance index. The third step is diagnosis of
the underlying causes. When the analysis indicates that the performance of a running
controller deviates from good or desired performance, i.e., when the control loop
performance is classified as 'fair' or 'poor', the reasons for this should be found. The
diagnostic step is the most difficult task of CPM. Performance improvement is the final
step. After isolating the causes of poor performance, corrective actions should be
suggested to restore the health of the control system. In most cases, poorly working
controllers can be improved by retuning, i.e., adjusting their parameter settings. When the
assessment procedure indicates that the desired control performance is not possible with
the current process and control structure, more substantial modifications to improve the
control system performance are required. Figure 3.2 shows the framework for loop
performance assessment.

Figure 3.2 Existing framework for loop performance assessment

The plant data PV, SP and OP (high frequency data) is used to perform control
performance analysis. This is done by calculating various indices such as CPI, RPI, OI
and SI, which help to categorize the loops into excellent/good/fair/poor. Of these loops,
only those classified as fair/poor are considered in the next step, which detects the cause
of the poor performance. All the aforementioned indices are required for the diagnosis
step, which is a very crucial part. These malfunctioning loops have to be retuned or
redesigned to give better performance. Such a framework deals with a large number of
data samples, which requires a large amount of data storage. It also increases the
processing time. Thus the entire setup increases the load on the DCS, which is not
desirable. This led to the formulation of another framework, which is given in Figure 3.3.

Figure 3.3 Proposed framework for loop performance assessment

In the proposed framework, lower frequency plant data is given for control performance
analysis. This step is done by calculating a single index which performs the overall
performance analysis and thus classifies the loops into excellent/good/fair/poor. Then,
only for the fair and poor loops, all the indices that are required for proper diagnosis of
the causes are calculated. At this stage high frequency data is required. The next step is
taking corrective action for the poorly performing loops. This setup requires less data
compared to the existing framework and therefore less space for data storage. As all the
indices are calculated only for the fair/poor loops, it takes less processing time. This
reduces the load on the DCS and thus improves the overall efficiency. This framework is
used in this work for control loop performance assessment. A minimal sketch of the
two-stage control flow is given below.
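The sketch illustrates the control flow only, under stated assumptions: a std/mean ratio (the single index used later in Section 4.5) stands in for the low-frequency classification index, and the detailed CPI/RPI/OI/SI computation is a placeholder. All function names are illustrative, not part of the thesis.

```python
import numpy as np

def single_index(pv, sp):
    # Stand-in single index on low-frequency data: the std/mean ratio of
    # Section 4.5 (an assumption; any single classification index fits here).
    pv = np.asarray(pv, dtype=float)
    sp = np.asarray(sp, dtype=float)
    return np.std(sp - pv) / abs(np.mean(pv))

def detailed_indices(pv, sp, op):
    # Placeholder for the full CPI/RPI/OI/SI suite on high-frequency data.
    return {}

def assess(loops, step=30, threshold=0.01):
    """loops: iterable of (pv, sp, op) high-frequency record triples."""
    results = []
    for pv, sp, op in loops:
        # Stage 1: classify using downsampled (low-frequency) data only.
        if single_index(pv[::step], sp[::step]) < threshold:
            results.append(("excellent/good", None))   # no further work needed
        else:
            # Stage 2: high-frequency data used only for fair/poor loops.
            results.append(("fair/poor", detailed_indices(pv, sp, op)))
    return results
```

The design point is that the expensive indices are computed only for the minority of loops flagged by the cheap first stage, which is what reduces the storage and processing load on the DCS.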

3.5 DIFFERENT PERFORMANCE INDICES FOR CPM

Many indices are used to determine loop performance in a plant. Some are based on
variability, others on response time, the presence of oscillation or stiction, etc.

3.5.1 CPI - Controller Performance Index or Minimum Variance Index or Harris Index

MVC-based assessment, first described by Harris (1989), compares the actual system
output variance $\sigma_y^2$ to the output variance $\sigma_{MV}^2$ obtained under a
minimum variance controller, as estimated from a time-series model identified from
measured output data.

The Harris index is defined as

$$\eta_{MV} = \frac{\sigma_{MV}^2}{\sigma_y^2} \qquad (3.1)$$

This index always lies within the interval [0, 1], where values close to unity indicate
good control with respect to the theoretically achievable output variance and 0 means the
worst performance, including unstable control. No matter what the current controller is,
we need only the following information about the system: appropriately collected
closed-loop data for the controlled variable and the known or estimated system time
delay (τ).
3.5.2 RPI-Relative Performance Index

The Relative Performance Index (RPI) is a measure of the ratio of a user-defined
benchmark response speed to the actual response speed of the closed-loop system. An
RPI equal to 1 implies that the control system performance meets the specifications. An
RPI greater than 1 implies that the control system is removing disturbances (or tracking
the set point) faster than desired. An RPI less than 1 implies that the control system is
taking longer than desired to settle down after a disturbance (or to track the set point).
Although the RPI can vary between 0 and infinity, it is reported between 0.1 and 10.

$$RPI = \frac{\tau_{des}}{\tau_{act}} \qquad (3.2)$$

where $\tau_{des}$ is the desired closed-loop rise time and $\tau_{act}$ is the actual
closed-loop rise time. A minimal sketch of this definition is given below.
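The sketch simply forms the ratio of Equation 3.2 and clips it to the reported range; the function name and the clipping choice are illustrative assumptions, since the text only states that values are reported between 0.1 and 10.

```python
def relative_performance_index(tau_des: float, tau_act: float) -> float:
    """RPI = desired / actual closed-loop rise time (Equation 3.2),
    clipped to the reporting range [0.1, 10] mentioned in the text."""
    return min(max(tau_des / tau_act, 0.1), 10.0)
```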

3.5.3 OI-Oscillation Index

The oscillation index (OI) is an important metric that is used for performance
classification and diagnosis. It gives a measure of the degree of sustenance of an
oscillation. It is a value between 0 and 1, with 0 indicating no oscillation and 1 indicating
a perfectly regular oscillation. Any OI above 0.5 indicates an oscillation that is sustained
or decaying slowly. Oscillations that decay faster typically have an oscillation index
value between 0.2 and 0.5. The OI is calculated from the autocorrelation function applied
to the error data and indicates the nature of the oscillation.
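The text does not give an explicit OI formula, so the following minimal sketch uses an autocorrelation-based decay-ratio estimate as an illustrative stand-in: the ratio of successive positive peaks of the error autocorrelation is close to 1 for a sustained oscillation and well below 1 for one that decays quickly. The function name and lag limit are assumptions.

```python
import numpy as np

def oscillation_index(error, max_lag=500):
    """Illustrative ACF-based stand-in for the oscillation index."""
    e = np.asarray(error, dtype=float)
    e = e - e.mean()
    acf = np.correlate(e, e, mode="full")[len(e) - 1:]
    if acf[0] == 0.0:
        return 0.0                        # constant signal: no oscillation
    acf = acf[:max_lag] / acf[0]          # normalised autocorrelation
    # Positive local maxima of the ACF, roughly one per oscillation period.
    peaks = [k for k in range(1, len(acf) - 1)
             if acf[k] > acf[k - 1] and acf[k] >= acf[k + 1] and acf[k] > 0]
    if len(peaks) < 2:
        return 0.0                        # no repeating pattern detected
    # Decay ratio of successive peaks: near 1 when the oscillation is sustained.
    return float(min(acf[peaks[1]] / acf[peaks[0]], 1.0))
```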

3.5.4 SI- Stiction Index

Stiction is the static friction that needs to be overcome to enable relative motion of
stationary objects in contact. When a control valve exhibits high stiction, two phases can
be evident: a stick phase and a slip phase. In the stick phase, the valve stem is stuck
because the pneumatic force applied on the stem has not exceeded the static friction. This
makes the controller output (OP) keep moving in one direction without a change in the
valve opening. In the slip phase, the applied force overcomes the stiction and the valve
jumps to a new position before moving smoothly again. This jump is due to the large
amount of energy accumulated in the valve during the stick phase. The Stiction Index
indicates whether a valve is sticky or not. Stiction can also be detected readily from the
pattern of the PV, SP and OP data.
CHAPTER 4

ANALYSIS WITH THE PLANT DATA

4.1 PLANT DATA

The high frequency plant data considered for the analysis was taken from four major loop
types: level, flow, pressure and temperature. Process variable (PV), set point (SP) and
controller output (OP) were the inputs to the proposed framework given in Figure 3.3.
100 loops from each loop type were considered as the test data set. All the loops were
classified manually into excellent/good/fair/poor by a group of experts in Honeywell
considering their PV, SP and OP data. This manual classification was the benchmark
against which the loops were compared to see whether the applied control performance
analysis technique gives reliable results. Table 4.1 gives the details of the data set taken
for the simulation.

Si. No. | Loop Type   | Sampling time (high frequency) in sec | No. of test data sets | No. of validation data sets
1       | Flow        | 1                                     | 105                   | 100
2       | Level       | 30                                    | 100                   | 100
3       | Pressure    | 5                                     | 105                   | 100
4       | Temperature | 30                                    | 100                   | 100

Table 4.1 Details of the loops considered for simulation

4.2 METHODS USED FOR LOOP PERFORMANCE CLASSIFICATION

Three methods were tested in this project work to classify the loops into
excellent/good/fair/poor. Variance in the process variable is one major parameter
considered to get the loops classified reliably. The other two methods were based on PV
and SP data, but all the methods were grounded in the variability of the process
measurements. The method which passed the test data set, i.e., the method which gave a
good percentage match with the manual classification of the loops, was tested on the
validation data set. It was then tested again on a set of 10000 loops and checked against
the manual classification.

4.3 CPI - CONTROLLER PERFORMANCE INDEX / HARRIS INDEX / MINIMUM VARIANCE INDEX

4.3.1 Theory

The minimum variance index proposed by Thomas J. Harris [1] takes into account the
variability in the process variable for loop performance assessment. Minimum-variance
control (MVC), also referred to as optimal H2 control and first derived by Astrom (1979),
is the best possible feedback control for linear systems in the sense that it achieves the
smallest possible closed-loop output variance. More specifically, the MVC task is
formulated as minimisation of the variance of the error between the set point and the
actual output at k + τ, given all the information up to time k:

$$J = E\{[r - y(k+\tau)]^2\} \qquad (4.1)$$

or

$$J = E\{y^2(k+\tau)\} \qquad (4.2)$$

when the set point is assumed zero (without loss of generality), i.e., when the case of
regulation or disturbance rejection is considered. The discrete time delay τ is defined as
the number of whole periods of delay in the process (Harris 1989), i.e.

$$\tau = 1 + f = 1 + \mathrm{int}(T_d / T_s), \qquad (4.3)$$

where $T_d$ is the (continuous) process delay arising from true process dead time or
analysis delay, $T_s$ denotes the sampling time, and $f$ is the number of integer periods
of delay.

MVC-based assessment, first described by Harris (1989), compares the actual system
output variance $\sigma_y^2$ to the output variance $\sigma_{MV}^2$ obtained under a
minimum variance controller, as estimated from a time-series model identified from
measured output data.

The Harris index is defined as

$$\eta_{MV} = \frac{\sigma_{MV}^2}{\sigma_y^2} \qquad (4.4)$$

It is the ratio of the minimum achievable variance to the actual variance of the system.
This index always lies within the interval [0, 1], where values close to unity indicate
good control with respect to the theoretically achievable output variance and 0 means the
worst performance, including unstable control. No matter what the current controller is,
we need only the following information about the system:
• Appropriately collected closed-loop data for the controlled variable.
• Known or estimated system time delay (τ).

From the measured (closed-loop) output data, a time-series model, typically of
AR/ARMA type, is estimated:

$$y(k) = \frac{\hat{C}(q)}{\hat{A}(q)}\,\varepsilon(k) \qquad (4.5)$$

A series expansion, i.e., the impulse response, of this model gives

$$y(k) = \Big(\sum_{i=0}^{\infty} e_i q^{-i}\Big)\varepsilon(k)
      = \big(e_0 + e_1 q^{-1} + \cdots + e_{\tau-1} q^{-(\tau-1)}\big)\varepsilon(k)
      + \big(e_\tau q^{-\tau} + e_{\tau+1} q^{-(\tau+1)} + \cdots\big)\varepsilon(k) \qquad (4.6)$$
The first τ impulse response coefficients can be estimated through τ-term polynomial long
division, or equivalently via resolution of the Diophantine identity:

$$\hat{C}(q) = \hat{E}(q)\hat{A}(q) + q^{-\tau}\hat{F}(q) \qquad (4.7)$$

The feedback-invariant terms are not a function of the process model or the controller;
they depend only on the characteristics of the disturbance acting on the process. Since the
first τ terms are invariant irrespective of the controller (Figure 4.1), the minimum
variance estimate corresponding to the feedback-invariant part is given by

$$\sigma_{MV}^2 = \sum_{i=0}^{\tau-1} e_i^2\, \sigma_\varepsilon^2 \qquad (4.8)$$

The first coefficient of the impulse response, $e_0$, is often normalised to be equal to
unity. The actual output variance can be estimated directly from the collected output
samples using the standard relation. However, it is suggested to use the (already)
estimated time-series model also for evaluating the current variance. From the series
expansion of the time-series model (Equation 4.6), we obtain

$$\sigma_y^2 = \sum_{i=0}^{\infty} e_i^2\, \sigma_\varepsilon^2 \qquad (4.9)$$

Since the noise variance cancels in Equation 4.4, it is neither needed nor has an effect on
the performance index. The index compares the sum of the first τ squared
impulse-response coefficients to the total sum; see Figure 4.1.
The performance index $\eta_{MV}$ corresponds to the ratio of the variance which could
theoretically be achieved under minimum variance control to the actual variance.
$\eta_{MV}$ is a number between 0 (far from minimum-variance performance) and 1
(minimum-variance performance) that reflects the inflation of the output variance over
the theoretical minimum variance bound. As indicated in Desborough and Harris (1992),
it is more useful to replace $\sigma_y^2$ by the mean-squares error of y to account for
offset. If $\eta_{MV}$ is considerably less than 1, re-tuning the controller will yield
benefits. If $\eta_{MV}$ is close to 1, the performance cannot be improved by re-tuning
the existing controller; only process or plant changes, such as changes in the location of
sensors and actuators, inspection of valves and other control loop components, or even
alterations to the control structure, can lead to better performance.

Figure 4.1 An impulse response of the time series data showing the contributions to the
Harris Index

There are two advantages to using this index over a simple error variance metric:
1. Taking the ratio of the two variances results in a metric that is (supposedly)
independent of the underlying disturbances, a key feature in an industrial situation
where the disturbances can vary widely.
2. The metric is scale independent, bounded between 0 and 1. This is an important
consideration for a plant user, who might be faced with evaluating hundreds or even
thousands of control loops.
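To make the procedure concrete, the following is a minimal sketch of Harris-index estimation along the lines of Equations 4.5 to 4.9, assuming a mean-centred closed-loop output record and a known discrete time delay tau. The pure AR model, its order, the impulse-response length and the function name are illustrative choices, not prescriptions from the thesis.

```python
import numpy as np

def harris_index(y, tau, ar_order=20, n_impulse=200):
    """Estimate eta_MV = sigma_MV^2 / sigma_y^2 (Equation 4.4)."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()                      # work with deviations from set point

    # AR(p) fit by least squares: y[k] = a1*y[k-1] + ... + ap*y[k-p] + eps[k]
    p = ar_order
    X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    a, *_ = np.linalg.lstsq(X, y[p:], rcond=None)

    # Impulse response of 1/A(q): e0 = 1, e_j = sum_i a_i * e_{j-i}  (Eq. 4.6)
    e = np.zeros(n_impulse)
    e[0] = 1.0
    for j in range(1, n_impulse):
        e[j] = sum(a[i] * e[j - i - 1] for i in range(min(p, j)))

    # Equations 4.8 and 4.9: the noise variance cancels in the ratio.
    return np.sum(e[:tau] ** 2) / np.sum(e ** 2)
```

Under these assumptions, a return value near 1 indicates performance close to minimum variance and a value near 0 indicates poor performance, matching the interpretation of the index above.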

4.3.2 Simulation Results

The method was applied to the high frequency data set of 100 loops from each of the four
loop types, namely level, flow, pressure and temperature, and the CPI was calculated for
all the loops. Based on the value of the index, the loops were classified into
Excellent/Good/Fair/Poor. The classification was then compared against the manual
classification to check the percentage match for all the loop types. The same method was
also applied on lower frequency data sets to check whether the loops get classified
reliably with the lower frequency data. The results are summarized in Table 4.2.

Si. No. | Loop Type            | No. of loops (manual classification): E | G  | F  | P  | Total | % loops with 2/3 level changes
1       | Flow (1 sec)         | 25 | 25 | 26 | 29 | 105 | 15.23
2       | Level (30 sec)       | 25 | 25 | 25 | 25 | 100 | 38
3       | Pressure (5 sec)     | 25 | 25 | 28 | 27 | 105 | 24.76
4       | Temperature (30 sec) | 25 | 25 | 25 | 25 | 100 | 21

Table 4.2 Summary of the loops (high frequency loop data) considering CPI
Figure 4.2 Plot of the PV, SP data of an excellent loop (CPI = 0.81)

Figure 4.3 Plot of the OP data of an excellent loop (CPI = 0.81)

Figure 4.4 Plot of the PV, SP data of a poor loop (CPI = 0.187)

Figure 4.5 Plot of the OP data of a poor loop (CPI = 0.187)


4.3.3 Discussions

Table 4.2 gives the summary of the results after calculating the CPI with high frequency
PV, SP and OP data. Around 25% of the pressure loops deviated from the manual
classification, whereas only 15.23% of the flow loops gave wrong results. The percentage
deviation of the level loops was the maximum among the four loop types, at 38%.
Overall, a match of 75.25% was observed when the CPI was used on the high frequency
data set. The CPI was then applied on sampled data sets, i.e., lower frequency data was
tested to see whether it gives a classification similar to the manual classification. It was
observed that a large number of fair and poor loops became good on sampling. This
implies that there is a chance of losing some of the loop characteristics when the data set
is compressed. In this case, the sampled lower frequency data did not get classified
properly. Data compression increases the predictability of the signal and thus affects the
Harris index. So this method cannot be adopted in the proposed framework, as it fails
when the data is compressed or when lower frequency data is considered. Figures 4.2
and 4.3 show the plots of the PV, SP and OP data of an excellent loop with a CPI value
of 0.81. This loop is performing well and does not need attention; it is tracking the set
point well, with very few or no significant deviations. Figures 4.4 and 4.5 show the PV,
SP and OP data of a poor loop, with a CPI value of only 0.187. This implies that the loop
has large variance. It can be inferred from the PV and OP plots that the loop has stiction.

4.4 USING RATIO BETWEEN PROCESS VARIABLE AND SET POINT

4.4.1 Theory

This method is similar to the test done by R. Russell Rhinehart in his paper "A watchdog
for controller performance monitoring" [5]. It provides automated recognition whenever a
controller is not performing well and is insensitive to changes in process noise.
Re-tuning of linear controllers is needed when set points, loads, etc. change. The three
common situations which indicate ineffective control are: 1) an extended period of
controlled variable oscillations about the set point, 2) an extended period where the
controlled variable is offset from the set point, and 3) a persistent succession of
disturbances or load changes. This method is also grounded in an analysis of variance.

Figure 4.6 Illustration of terms

Figure 4.6 shows a process variable which is initially at set point and subsequently
changes. Two classes of deviation are indicated in this figure. The first deviation, $d_1$,
is the difference between the set point and the process variable, and the second deviation,
$d_2$, is the difference between two successive process variable values. If the process is
at set point and the measurement is subject to random, independent, zero-mean
fluctuations (noise), then the set point is the time-averaged measurement value, and the
process variance can be estimated as

$$S_1^2 = \frac{1}{N-1}\sum_{i=1}^{N} (d_{1,i})^2 \qquad (4.11)$$

If N is large, the process variance can also be estimated as

$$S_2^2 = \frac{1}{2}\,\frac{1}{N-1}\sum_{i=1}^{N} (d_{2,i})^2 \qquad (4.12)$$

As Equations 4.11 and 4.12 require storing, updating and manipulating the past N data
points, the averaging can be replaced by a first-order filtering operation:

$$S_{1f,i}^2 = \lambda (d_{1,i})^2 + (1-\lambda) S_{1f,i-1}^2 \qquad (4.13)$$

$$S_{2f,i}^2 = 0.5\,\lambda (d_{2,i})^2 + (1-\lambda) S_{2f,i-1}^2 \qquad (4.14)$$

If the process is at set point and only subject to noise, the values of $S_1^2$ and $S_2^2$
are identical and their ratio will be near unity. With the straightforward averaging of
$d_1^2$ and $d_2^2$, r would be calculated as

$$r_i = S_{1f,i}^2 / S_{2f,i}^2 \qquad (4.15)$$

Autocorrelation in the noise shifts the distribution toward higher values of r, but
processes with the PV at the set point will still have a value of r less than 3. The value of
r will be greater than 3 for processes whose process variable is not at the set point. Also,
after an upset, when the controller is functioning and trying to recover control, the value
of r will be greater than 3. If the number of readings with an r value greater than 3 is
large, this implies that there is a problem with the controller. The number of consecutive
bad readings (i.e., readings with an r value greater than 3) required to trigger the
watchdog was set to 1000. A minimal sketch of this watchdog is given below.
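The sketch implements Equations 4.13 to 4.15 on equally spaced PV/SP samples and counts consecutive bad readings; the initialisation of the filtered variances and the function name are illustrative assumptions.

```python
import numpy as np

def watchdog_bad_run(pv, sp, lam=0.1, r_limit=3.0, trigger=1000):
    """Longest run of consecutive readings with r > r_limit, and whether the
    watchdog (trigger = 1000 consecutive bad readings) would fire."""
    pv = np.asarray(pv, dtype=float)
    sp = np.asarray(sp, dtype=float)
    d1 = sp - pv                          # deviation from set point
    d2 = np.diff(pv, prepend=pv[0])       # deviation between successive PVs
    s1f = s2f = np.var(d1) + 1e-12        # crude initialisation (an assumption)
    run = longest = 0
    for k in range(len(pv)):
        s1f = lam * d1[k] ** 2 + (1 - lam) * s1f        # Equation 4.13
        s2f = 0.5 * lam * d2[k] ** 2 + (1 - lam) * s2f  # Equation 4.14
        r = s1f / s2f                                    # Equation 4.15
        run = run + 1 if r > r_limit else 0
        longest = max(longest, run)
    return longest, longest >= trigger
```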

4.4.2 Simulation Results

The method was applied to the high frequency data set of 100 loops from each of the four
loop types, namely level, flow, pressure and temperature. The test was done to group the
loops into either good or bad, i.e., the loops were analysed to see whether the number of
bad readings goes beyond 1000. Loops which deviate from the set point will have this
number greater than 1000 and are the poorly performing ones. The result was then
compared against the manual classification to check the percentage match for all the loop
types. The value of lambda was fixed after training on a set of 100 loops. The results are
summarized in Table 4.3.
Si. No. | Loop type            | Lambda value | % of excellent/good loops classified reliably | % of fair/poor loops classified reliably | Overall % classified reliably
1       | Flow (1 sec)         | 0.1          | 60 | 72.72 | 66.66
2       | Level (30 sec)       | 0.2          | 48 | 46    | 47
3       | Pressure (5 sec)     | 0.1          | 66 | 56.36 | 60.95
4       | Temperature (30 sec) | 0.2          | 74 | 56    | 65

Table 4.3 Summary of the loops (high frequency loop data) considering the ratio between SP-PV and PV-PV deviations

Figure 4.7 Plot of the PV, SP data of an excellent loop (watchdog method)

Figure 4.8 Plot of the OP data of an excellent loop (watchdog method)

Figure 4.9 Plot of the PV, SP data of a poor loop (watchdog method)

Figure 4.10 Plot of the OP data of a poor loop (watchdog method)

4.4.3 Discussions

For flow and pressure loops the value of lambda was found to be 0.1, and for level and
temperature loops it was found to be 0.2. Flow loops were found to give the best results
when the ratio between set point and process variable deviations was considered: almost
67% of the flow loops got classified correctly. Level loops gave the worst results with
this method. The test gave an overall match of 59.9% with the manual classification. As
this method did not give satisfactory results, it was not applied on lower frequency data
sets to check whether the loops get classified reliably with lower frequency data. Figures
4.7 and 4.8 show the plots of the PV, SP and OP data of an excellent loop; the number of
consecutive bad readings has not gone beyond 1000. This loop is performing well and
does not need attention; it is tracking the set point well, with very few or no significant
deviations. Figures 4.9 and 4.10 show the PV, SP and OP data of a poor loop; the number
of consecutive bad readings is more than 1000 in this case. It can be inferred from the PV
and OP plots that the loop is saturated.
4.5 USING THE RATIO BETWEEN STANDARD DEVIATION OF ERROR AND
MEAN OF PROCESS VARIABLE

4.5.1 Theory

In statistics and probability theory, the standard deviation (SD, represented by the Greek
letter sigma, σ) shows how much variation or dispersion from the average exists. A low
standard deviation indicates that the data points tend to be very close to the mean (also
called the expected value); a high standard deviation indicates that the data points are
spread out over a large range of values. The standard deviation of a random variable,
statistical population, data set or probability distribution is the square root of its variance.
It is algebraically simpler, though in practice less robust, than the average absolute
deviation. A useful property of the standard deviation is that, unlike the variance, it is
expressed in the same units as the data.

The mean is a measure of central tendency. It is the value usually described as the
average, determined by summing all of the numbers and dividing the result by the
number of values. The mean of a population of N values (scores) is defined as the sum of
all the scores of the population, Σx, divided by the number of scores, N. The population
mean is represented by the Greek letter μ (mu) and calculated as shown below. Often it
is not possible to obtain data from an entire population; in such cases, a sample of the
population is taken.

$$\mu = \frac{1}{N}(x_1 + x_2 + \cdots + x_N) \qquad (4.16)$$
To further describe data sets, measures of spread or dispersion are used. One of the most
commonly used measures is the standard deviation. This value gives information on how
the values of the data set vary, or deviate, from the mean of the data set. Deviations are
calculated by subtracting the mean, $\bar{x}$, from each of the sample values, x, i.e.
deviation = $x - \bar{x}$. As some values are less than the mean, negative deviations
will result, and for values greater than the mean positive deviations will be obtained. By
simply adding the values of the deviations from the mean, the positive and negative
values will cancel to give a value of zero. By squaring each of the deviations, the
problem of positive and negative values is avoided. To calculate the standard deviation,
the deviations are squared, summed and divided by the appropriate number of values,
and then finally the square root of this result is taken to counteract the initial squaring of
the deviations. The standard deviation of a population, σ, of N data items is defined by
the following formula:

$$\sigma = \sqrt{\frac{1}{N}\big[(x_1-\mu)^2 + (x_2-\mu)^2 + \cdots + (x_N-\mu)^2\big]} \qquad (4.17)$$

The standard deviation is measured in the same units as the mean. It is usual to assume
that data is from a sample unless it is stated that a population is being used. The variance
is the average of the squared deviations when the data given represents the population;
the lower-case Greek letter sigma squared, $\sigma^2$, is used to represent the population
variance.

This method takes into account the ratio of the standard deviation of the error (set point
minus process variable) to the mean of the process variable. It gives a measure of the
dispersion of the PV from the average value of the set point. The analysis is done by
comparing this ratio to a threshold value. For excellent and good loops, the ratio needs to
be less than the threshold, and for fair and poor loops, the ratio needs to be greater than
the threshold. The threshold was found to be different for different loop types. It was
obtained by training on the test data set and was also tested on the validation data set. A
set of 10000 loops was then considered to test this method further. A minimal sketch of
this classification rule is given below.

4.5.2 Simulation Results

The method was applied to the high frequency data set of 100 loops from each of the four
loop types, namely level, flow, pressure and temperature. The test was done to group the
loops into two: excellent and good loops into one group, and fair and poor loops into
another. The test determined the percentage of excellent/good loops with a standard
deviation of the error less than the threshold fraction of the mean of the process variable,
and the percentage of fair/poor loops with a standard deviation of the error greater than
that threshold. The results were then compared against the manual classification to check
the percentage match for all the loop types. The threshold value was fixed after training
on a set of 10000 loops and is different for different loop types. The results are
summarized in Table 4.4.

Si. No. | Loop type   | Threshold (fraction of mean of PV) | Sampling (high / lower) | E/G match % (high / lower) | F/P match % (high / lower) | Overall % (high / lower)
1       | Pressure    | 0.005 | 5 sec / 60 sec   | 71.19 / 70.92 | 70.32 / 70.19 | 70.75 / 70.55
2       | Temperature | 0.003 | 30 sec / 200 sec | 79.36 / 79.73 | 80.82 / 80.62 | 80.09 / 80.175
3       | Level       | 0.02  | 30 sec / 200 sec | 66.12 / 66.17 | 60.7 / 60.36  | 63.41 / 63.265
4       | Flow        | 0.006 | 1 sec / 20 sec   | 71 / 70.03    | 68.88 / 68.08 | 69.94 / 69.05

Table 4.4 Summary of the loops considering the ratio between the standard deviation of error and the mean of PV


Figure 4.11 Plot of the PV, SP data of an excellent loop (std deviation = 0.0015 × mean(PV))

Figure 4.12 Plot of the OP data of an excellent loop (std deviation = 0.0015 × mean(PV))

Figure 4.13 Plot of the PV, SP data of a poor loop (std deviation = 0.01414 × mean(PV))

Figure 4.14 Plot of the OP data of a poor loop (std deviation = 0.01414 × mean(PV))


4.5.3 Discussion

Table 4.4 gives the summary of the results when the standard deviation of the error and
the mean of the process variable were considered for the analysis. The threshold values
for the loop types are different: temperature loops have the lowest threshold, only 0.003,
and level loops have a threshold of 0.02, the highest among the four loop types. Table 4.4
shows the simulation results for both high frequency and lower frequency data. With the
high frequency plant data, all the excellent and good loops with a standard deviation of
error less than the threshold were counted, and similarly all the fair and poor loops with a
standard deviation of error greater than the threshold were counted. Both values were
compared against the manual classification, and an overall match of 71% was observed.
Flow and pressure loops individually gave a match of around 70%, while the temperature
loops gave a match of around 80%. Level loops gave poor results, with a match of 66%
against the manual classification. After testing the high frequency data, the lower
frequency data was tested using the same procedure to find the percentage match with
the manual classification. With the lower frequency data, flow and pressure loops gave a
match of 70% and temperature loops gave 80%, but the match percentage dropped for
the level loops to only 60%. Nevertheless, the sampled or compressed lower frequency
data gave an overall match of 71%. Figures 4.11 and 4.12 show the plots of the PV, SP
and OP data of an excellent loop; the standard deviation of the error is less than the
threshold fraction of the mean of the PV. This loop is performing well and does not need
attention; it is tracking the set point well, with very few or no significant deviations.
Figures 4.13 and 4.14 show the PV, SP and OP data of a poor loop. This is a flow loop,
which has a threshold of 0.006, and the ratio implies that the loop has large variance. It
can be inferred from the PV and OP plots that the loop is saturated. This is the simplest
of the three methods: it analyses the error value to perform high-level loop classification.
4.6 PERFORMANCE COMPARISON

Three methods were used for high-level loop classification in this work; all three are
grounded in the variability of the process variable.

The Harris index [1], also known as the minimum variance index, determines the
smallest possible closed-loop output variance. MVC minimizes the error between the set
point and the process variable. This method requires only the closed-loop plant data (SP,
PV, OP) and the process time delay. The index is the ratio of the minimum achievable
variance to the actual variance of the system. The major advantage of the CPI is that it
lies in the range between 0 and 1, which makes the analysis of thousands of loops
simpler: classification can be done easily with the scale-independent metric obtained.
However, the CPI fails to classify the loops reliably when compressed data is used for
the analysis, so this method can be applied only to high frequency plant data.

A method was proposed by R. Russell Rhinehart [5] to determine the need for re-tuning
in case there is a change in the load or set point. A watchdog is set to give permission to
re-tune the controllers at times of ineffective control. This is done by counting the
number of unusual process states (bad readings) in a process; the watchdog acts only
when the number exceeds a threshold value. This method, which uses the closed-loop
plant data SP and PV together with a filter parameter lambda, did not classify the loops
reliably, as it was unable to capture some of the properties of the loops. However, it can
be used to find tracking loops, i.e., loops where the PV tracks the SP; those are classified
as excellent loops.

With the knowledge gained after implementing the above two methods, another method
was used to carry out loop classification. It takes into account the standard deviation of
the error and the mean of the process variable to perform high-level loop classification. It
is a rather simple method, which classified 71% of the loops properly, and it requires
only closed-loop PV and SP data. Compressed plant data can also be used for loop
assessment. It was observed that loops with very small error but poor performance
(oscillating loops), and tracking loops that are excellent in nature, were not getting
classified properly. This is because the method uses the variation of the process variable
from its average, so small variations that indicate poor performance will be
misinterpreted. Nevertheless, this is a computationally simple method that can be
adopted to classify loops reliably.
Figure 4.15 PV tracking SP loop (excellent loop)

Figure 4.16 Oscillating loop (poor loop; plot of PV and SP versus time in hrs)


4.7 STATISTICAL TESTING

4.7.1 Theory

A statistical hypothesis test is a method of statistical inference using data from a scientific
study. In statistics, a result is called statistically significant if it is unlikely to have
occurred by chance alone, according to a pre-determined threshold probability, the
significance level. These tests are used in determining what outcomes of a study would
lead to a rejection of the null hypothesis for a pre-specified level of significance; this can
help decide whether results contain enough information to cast doubt on conventional
wisdom, given that conventional wisdom has been used to establish the null hypothesis.
The critical region of a hypothesis test is the set of all outcomes which cause the null
hypothesis to be rejected in favor of the alternative hypothesis. Statistical hypothesis
testing is sometimes called confirmatory data analysis, in contrast to exploratory data
analysis, which may not have pre-specified hypotheses. In the Neyman-Pearson
framework, the process of distinguishing between the null and alternative hypotheses is
aided by identifying two conceptual types of errors (type 1 and type 2), and by specifying
parametric limits on, e.g., how much type 1 error will be permitted.

In statistics, a confidence interval (CI) is a type of interval estimate of a population
parameter and is used to indicate the reliability of an estimate. It is an observed interval
(i.e. it is calculated from the observations), in principle different from sample to sample,
that frequently includes the parameter of interest if the experiment is repeated. How
frequently the observed interval contains the parameter is determined by the confidence
level or confidence coefficient. More specifically, the meaning of the term "confidence
level" is that, if confidence intervals are constructed across many separate data analyses
of repeated (and possibly different) experiments, the proportion of such intervals that
contain the true value of the parameter will match the confidence level; this is guaranteed
by the reasoning underlying the construction of confidence intervals. Whereas two-sided
confidence limits form a confidence interval, their one-sided counterparts are referred to
as lower or upper confidence bounds. The level of confidence of the confidence interval
indicates the probability that the confidence range captures the true population parameter
given a distribution of samples. Greater levels of variance yield larger confidence
intervals and hence less precise estimates of the parameter. Confidence intervals of
difference parameters not containing 0 imply that there is a statistically significant
difference between the populations.

A Confidence Interval is an interval of numbers containing the most plausible values for
our Population Parameter. The probability that this procedure produces an interval that
contains the actual true parameter value is known as the Confidence Level and is
generally chosen to be 0.9, 0.95 or 0.99.

The large-sample confidence interval for a population mean is given by the formula

$$\bar{x} \pm z_{\text{critical value}}\,\frac{\sigma}{\sqrt{n}} \qquad (4.18)$$

where $\bar{x}$ is the mean, $\sigma$ the standard deviation and $n$ the sample size.

Si. No. | Confidence interval (%) | Z critical value
1       | 99.5                    | 2.81
2       | 99                      | 2.58
3       | 95                      | 1.96
4       | 90                      | 1.645

Table 4.5 The Z critical value for different confidence intervals


Figure 4.17 Normal distribution curve with confidence intervals

A statistical test was done to determine the lowest frequency that can be used for
sampling the loops. This is done by calculating the confidence interval for the given set
of data and checking whether the mean of the sampled data still lies within it. A minimal
sketch of this check is given below.
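The sketch assumes that keeping every step-th sample stands in for lower frequency sampling; the 95% z-value follows Table 4.5 and the function name is illustrative.

```python
import numpy as np

def sampling_interval_ok(y, step, z=1.96):
    """True if the mean of every step-th sample lies inside the confidence
    interval of Equation 4.18 built from the full high-frequency record."""
    y = np.asarray(y, dtype=float)
    half_width = z * y.std(ddof=1) / np.sqrt(len(y))
    return abs(y[::step].mean() - y.mean()) <= half_width
```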

4.7.2 Simulation Results

The test was done on all four loop types. High frequency data sets were sampled at
different frequencies and the mean of each sampled set was checked to see whether it
lies within the confidence interval. Sampling frequencies for which the sampled data did
not lie within the confidence interval were not considered.

Si. No. | Loop Type   | Sampling time in sec
1       | Flow        | 8
2       | Level       | 240
3       | Temperature | 270
4       | Pressure    | 35

Table 4.6 Lowest possible sampling frequencies


4.7.3 Discussion

A statistical test was done on all four loop types to determine the lowest frequency that
can be used to sample the loops. Flow loops that are normally sampled every 1 second
can be sampled as infrequently as every 8 seconds. Likewise, level loops, normally
sampled every 30 seconds, can be sampled every 240 seconds. In the case of pressure
loops, the high frequency plant data is taken every 5 seconds; according to the statistical
test, they can be sampled up to every 35 seconds. Temperature loops, originally sampled
every 30 seconds like the level loops, can be sampled every 270 seconds. Table 4.6 gives
the details of the lowest frequencies that can be used to sample the loops.
CHAPTER 5

SUMMARY AND CONCLUSIONS

5.1 CONCLUSIONS

 A framework was proposed in such a way that it reduces the load on the DCS.
This is achieved by considering only the lower frequency plant data for analysis
in the first step. High frequency data is considered only for fair and poor loops,
for the diagnosis of the fault. This reduces the amount of data that is considered
for analysis; therefore less storage space is required and the processing time is
reduced. This enhances the overall efficiency of the control system.

 Three common situations can be used to identify ineffective control; which are an
extended period of controlled variable oscillations about the set point, an extended
period where the controlled variable is offset from the set point and the third
situation is a persistent succession of disturbances or load changes which cannot
be handled by the existing control.

 Variance is a good measure that can be used to assess loop performance, as it
indicates how far a set of numbers is spread out. A small variance indicates that
the data points tend to be very close to the mean value.

 CPI gives the best results when high frequency plant data is used for loop
performance classification. 85.25% match was obtained when controller
performance index was used for loop classification. However, sampled lower
frequency data will not get classified properly. The data compression increases
predictability of the signal and thus affects the Harris index.
 The ratio between the set-point error and successive process variable differences
can also be used to measure the variance in the process, but it could not capture
the loop dynamics reliably. Only a 59.9% match was obtained with this method.

 The ratio between the standard deviation of the error and the mean of the process
variable is another method which can be used for loop performance assessment.
Using this method, the variance of around 71% of the loops was captured
effectively. This method takes into account the dispersion of the process variable
around the average value, and therefore loops with small variation around the set
point will not get classified reliably. The same method can be used to classify the
loops with lower frequency plant data, where it also gives an overall match of
around 71%.

 A confidence interval is an interval of numbers containing the most plausible
values for the population parameter; it is not an exact value of the parameter.

 Sampling of the plant data affects the statistical properties of a population, i.e.,
data sampling has an impact on the mean and variance. However, compression of
the plant data can be done up to a particular frequency, beyond which the loop
loses its statistical properties.

 It is difficult to obtain a single metric or method that performs high-level loop
classification perfectly (100%) with compressed data, as a single method cannot
capture the properties of all the loop types and all the faults that can occur.
REFERENCES

Thomas J. Harris (1989), Assessment of control loop performance, Canadian Journal of
Chemical Engineering, 67, October 1989, 856-861.

E.H. Bristol (1990), Robust swinging door trending: adaptive trend recording,
International Society of Automation, 0065-2814/90, 749-754.

L. Desborough and T. Harris (1992), Performance assessment measures for univariate
feedback control, Canadian Journal of Chemical Engineering, 70 (1992), 1186-1197.

R.R. Rhinehart (1995), A watchdog for controller performance monitoring, Proceedings
of the 1995 American Control Conference, Volume 3, 2239-2240.

Alan J. Hugo (2006), Performance assessment of single-loop industrial controllers,
Journal of Process Control, 16 (2006), 785-794.

Michael A. Paulonis and John W. Cox (2003), A practical approach for large-scale
controller performance assessment, diagnosis, and improvement, Journal of Process
Control, 13 (2003), 155-168.

N.F. Thornhill et al. (2004), The impact of compression on data-driven process analysis,
Journal of Process Control, 14 (2004), 389-398.

Rachid A. Ghraizi (2005), Performance monitoring of industrial controllers based on the
predictability of controller behaviour, European Symposium on Computer Aided Process
Engineering, 15 (2005).

T. Hagglund (1995), A control-loop performance monitor, Control Engineering Practice,
Vol. 3, No. 11, 1543-1551.

Tina Miao and Dale E. Seborg (1999), Automatic detection of excessively oscillatory
feedback control loops, Proceedings of the 1999 IEEE, 22-27.

Alexey Zakharov and Sirkka-Liisa Jamsa-Jounela (2014), Robust oscillation detection
index and characterization of oscillating signals for valve stiction detection, American
Chemical Society, 53, 5973-5981.
