
Keep Advanced Control Systems Online

Know the key parameters required for successful implementation


By Mark Brewer, BP Chemicals Ltd.; Edited by Kristine Chin


Advanced controls are rapidly becoming part of the landscape of plants in the chemical process industries. The attractions are increased process reliability and process yields, both of which improve profitability. Additionally, when process equipment is well controlled, it is subject to less stress, which results in longer life.
While owners understand the potential benefits of this technology, little has been
documented about how to optimally operate such advanced controls. This article identifies the
underlying principles of optimal operation of advanced controls. These principles are applicable
to the full range of advanced control techniques, including traditional advanced control (TAC),
model-based predictive control (MBPC) and other "exotic" technologies (e.g., neural
networks, expert systems and multivariate statistical process control).
TAC, also known as complex algorithm arrangements, uses a collection of simple
conventional controllers arranged in a structure that allows them to work in a coordinated way.
These range from simple ratio controls to simple optimization (e.g., selecting the energy
source from two alternatives). MBPC, on the other hand, allows several control objectives to
be achieved simultaneously through manipulation of several process variables, each of which
affects one or more of the control objectives.
Human factors play a significant role in effectively operating advanced controls. The key is to involve the process operators, the users who ultimately decide whether or not such systems are used. If advanced controls work reliably and the operators like them, they will use them. It's that simple. However, where the advanced-control environment fails and lets the operators down, an atmosphere of mistrust will develop.
Many projects fail because owners assume that advanced controls can be installed and
then forgotten. Similar to the situation with most process equipment, ownership of such
systems is a lifetime commitment. They must be treated like living organisms; neglect them,
and they will die.
Steps to success
Keeping advanced-control systems online depends on a number of factors. These form a
chain of linked activities, referred to as the "Advanced-Control Support Chain" (Figure 1).
This chain is looped to show that advanced controls must be continually maintained and
developed. Note that, in general, the more extensive the maintenance and development, the
larger the loop.
A chain is only as strong as its weakest link, so a simple way of guaranteeing successful
implementation of advanced-control technology is to identify the key links and make sure they
are strong. However, no matter how robust the support chain is, there has to be a force that
drives the loop. This is provided by an owner with a clear understanding of the business
environment. Establishing this ownership is the key to success.
Identify the Objectives
The first step of a successful control-system project is to decide what business need is
being addressed by the control system. Many projects fail because the designer does not take

into account the business needs. For example, in a production unit for a product that can
always be sold, it makes sense to install controls designed to push throughput constraints. In
a production unit that is restricted by feedstock availability, increasing the yield might be the
main objective.
If the control system is attempting to manipulate the wrong business driver, it will be
turned off or, worse, forced into use, thereby adversely affecting business. Remember that
business drivers can change during the lifetime of the plant and that flexibility in the control
objectives may need to be provided.
Select the Technology
It is important to get this step right because mistakes here will cause problems for many
years to come. Currently, there is a debate among users over whether to use TAC or MBPC.
Both techniques can be used to control a process where several measurements are
simultaneously used to control one or more variables. But, while the overall effect of the two
approaches is similar, the mechanisms used are fundamentally different.
In MBPC, the process is typically modeled as a series of cause-and-effect relationships
between measurements and controlled variables. A mathematical algorithm is used to
determine the optimal controller action. The result is to attain the desired process state, while
satisfying constraints, using the smallest process manipulation.
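The core of an MBPC calculation can be illustrated with a small numerical sketch. The example below is not any vendor's algorithm; it assumes a single-input, single-output process described by a step-response model and computes a sequence of future moves by least squares, with a move-suppression weight standing in for the "smallest process manipulation" objective. The model coefficients, horizons and weights are all illustrative assumptions.

import numpy as np

# Hedged sketch of an unconstrained model-predictive move calculation.
# The step-response coefficients, horizons and weights are illustrative
# assumptions, not taken from the article.

def dynamic_matrix(step_response, p, m):
    """Build the p x m dynamic matrix from unit step-response coefficients."""
    a = np.zeros((p, m))
    for i in range(p):
        for j in range(m):
            if i - j >= 0:
                a[i, j] = step_response[i - j]
    return a

def mpc_moves(step_response, error_trajectory, m=3, move_weight=0.1):
    """Least-squares future moves that drive the predicted error to zero
    while penalizing large manipulations (move suppression)."""
    p = len(error_trajectory)
    a = dynamic_matrix(step_response, p, m)
    # Solve (A'A + lambda*I) du = A'e, the standard DMC-style normal equations
    lhs = a.T @ a + move_weight * np.eye(m)
    rhs = a.T @ np.asarray(error_trajectory)
    return np.linalg.solve(lhs, rhs)

if __name__ == "__main__":
    # First-order-like unit step response sampled at the controller interval
    step = 1.0 - np.exp(-np.arange(1, 21) / 5.0)
    # Predicted deviation from setpoint over the horizon if no action is taken
    error = np.full(20, 2.0)
    moves = mpc_moves(step, error)
    print("planned moves:", np.round(moves, 3))
    # Only the first move would be implemented; the calculation repeats next cycle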
TAC uses a number of discrete control-loops to achieve the desired objective. These loops
are made to operate together using signal exchange in order to counteract controller
interaction. This is sometimes called dynamic decoupling. Counteracting controller interaction
can be important in some processes where there is no simple one-to-one relationship between
measurements and actuators (e.g., a furnace with multiple firing zones). In most cases, if one
valve is moved, all the process measurements are affected. Dynamic decoupling of more than
two or three interacting controls is very complex and difficult.
These discrete-control loops often rely on switches, signal selectors and calculated
measured variables, in order to give constraint control and to operate under different process
scenarios. Complicated processes require very complex control schemes and there is a
practical limitation of size. A single distillation column, and its associated equipment, about
defines the limit that can be controlled using TAC.
In addition to consideration of the scale of the control problem, there is a rule-of-thumb
that can determine which of the two should be used. In principle, if the problem can be solved
by predicting the effect on a process measurement based on changes in other process
measurements, also known as model inversion, then TAC is appropriate.
TAC can use a combination of ratio and proportional-integral-derivative (PID) style of
algorithms, which can be easily set up to give optimal control. Such applications tend to be
simple and have moderate to fast dynamics. A common example is a gas-main pressure
control. If flows are measured throughout the main, then it is easy to calculate the required
change in the manipulated variable at the control point, based on the other flows and the
pressure.
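As a concrete illustration of the model-inversion idea behind TAC, the sketch below computes the flow setpoint at the control point of a gas main from the other measured flows plus a pressure correction. The simple mass-balance model and the pressure gain are illustrative assumptions, not figures from the article.

# Hedged sketch of a TAC-style inverted-model calculation for gas-main
# pressure control. The linear mass-balance model and the pressure gain
# are illustrative assumptions.

def required_supply_flow(consumer_flows, pressure, pressure_setpoint,
                         pressure_gain=50.0):
    """Flow demanded at the control point: the sum of the measured consumer
    flows plus a correction proportional to the pressure error."""
    feedforward = sum(consumer_flows)          # what the main is consuming now
    correction = pressure_gain * (pressure_setpoint - pressure)
    return feedforward + correction

if __name__ == "__main__":
    flows = [120.0, 85.0, 42.5]                # measured offtakes, e.g., in kg/h
    setpoint = required_supply_flow(flows, pressure=4.8, pressure_setpoint=5.0)
    print(f"supply-flow setpoint: {setpoint:.1f}")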
Conversely, if the control problem cannot be inverted (e.g., processes with long deadtimes, large and complex interactions, varying degrees of freedom or multiple constraints),
then an MBPC that optimizes some control function is the only practical approach. An example
is controlling a product-property in a polymer reactor. The structure of each controller is shown
in Figure 2.

A model-based approach will have a wider application than a traditional approach. This is
because a model-based approach can theoretically provide an adequate controller, even for a
simple application that can be inverted, whereas there are a significant number of cases where
the traditional approach cannot be made to work at all.
Nevertheless, there are practical limitations that come from the complexity of the
mathematics used in MBPC. At the moment, computing platforms with the reliability and speed
to perform the mathematics required for MBPC limit the technique to applications with process
response times measured in many seconds. A cycle time of one minute is currently typical.
Control systems with enough processing power to allow cycle times measured in milliseconds
will make MBPC a practical option for most chemical processes. These computers will be
available in a few years' time.
The two techniques, MBPC and TAC, should not be seen as competitive, but rather as
complementary. Since MBPC is less reliable than TAC, the latter is used to act as a back-up, so
that a process can survive failure.
Furthermore, since MBPC uses very simple dynamic relationships, some form of TAC is
usually required to eliminate local nonlinear effects. It is difficult to say if MBPC will ever
replace TAC entirely. What is certain is that there is a role for TAC for many years to come.
In addition, selecting the appropriate technology is subject to a range of non-technical
influences. Factors that can influence choice of technology include past experience, installed
infrastructure, agreements with vendors and support capability, as well as technology
capability. There are few cases where the technology choice on a project is completely free.
The technology-specification phase can be helped by having a clear technology strategy that
ensures that decisions are made with all factors properly considered.
Provide Infrastructure
Nobody would dispute that firm foundations are important when it comes to erecting a
building. However, advanced control is often installed where the underlying control
infrastructure is not working correctly. Many plants have poorly tuned regulatory controllers,
or instrumentation that is unreliable or installed in the wrong place. Valves of an inappropriate
trim size or type, or with malfunctioning positioners, are also common. These problems are
often not obvious, since process operators have resolved problems that stop them from
operating the plant, but have learned to live with less severe problems. Installing advanced
controls always exposes these cases, and the older the plant, the worse the problems are.
Thus, there may be nothing wrong with the advanced controls, but they cannot function
correctly because the measurements fed into them are inaccurate, or the actuators are
defective.
An example that demonstrates such poor fundamental control was in a case where the pH
of a water stream was being controlled by direct acid injection (Figure 3). One suggestion to
resolve the poor control was to apply simple advanced-control techniques. However,
investigation of the problem determined that while the dynamics of the process (mixing of acid
and water in a fast-flowing system) were compatible with the response time of the pH probe,
they were much faster than the ability of the actuator to adjust the flow of acid. Improved
control was achieved by changing the process, rather than by installing advanced control.
Design and Install
Once the business need is defined and the appropriate technology is chosen, design
becomes a matter of applying control engineering. This article cannot go into such detail here. For further information, see F.G. Shinskey's "Process Control Systems: Application, Design and Tuning." Alternatively, contact any of the MBPC vendors, such as Aspen Technology, Honeywell Hi-Spec Solutions, Invensys, and MDC Technology, for further information.
However, there are some principles that need to be emphasized. Besides the principles of
controller mechanics, and choosing, installing and tuning a control algorithm, there are other
equally important issues that often get neglected. Remember, controls are not there just to
stabilize the operation of the plant; they earn their keep by aligning the plant to business
needs, and, on a more-immediate level, in making process operators' jobs easier.
Designing the interfaces so as to meet operators' needs is vital. Operators must be able
to understand what the controller is doing, and why. This includes the operations associated
with controller monitoring, turning the controller on and off, and changing set points.
Performing these activities must be easy and intuitive. The effort required to produce good
operator interfaces should not be underestimated. Control-system vendors can be very poor at
providing good operator interfaces. In most successful applications, operator screens are
designed by the end user with input from the process operators. This is because displays
should be task-oriented (as determined by the user), and not system-oriented (as typically
provided by the equipment supplier).
An example that demonstrates this point is an MBPC application at BP Amoco's
Grangemouth, U.K., site. There had been several failed attempts to get a successful
application in use in various parts of the site. However, better interfaces have helped tip the
balance, and, consequently, recent attempts worked well, whereas earlier attempts had failed.
Another factor that should be considered in the design is robustness. The ability of a
controller to operate under a range of conditions, with minimal maintenance and good
reliability, is essential for user acceptance. There are two elements that impact reliability, the
control technology and the application.
Keep the system as simple as possible. Don't over-engineer models. Remember that if the
system develops a fault, someone is going to have to fix it, under pressure, with all the stress
of a live production environment. Nevertheless, controller initialization and tolerance to
instrument failure (where possible) are important contributors to correct controller-design, and
should not be compromised.
Train the Users
This seems like an obvious step in making sure that controls are understood and used.
However, the difficulty of this process is almost always underestimated.
This training involves teaching sophisticated new technical concepts, while at the same
time challenging established process-operating methods and changing operators'
responsibilities and accountabilities. It requires 100% commitment and support from
manufacturing and business managers, as well as from operating team members. Because the
training will be done in a busy, difficult environment, where everyone has other priorities, it
will require control engineers with well-developed communication skills. Accordingly, training
tools are regarded as critical to success at Grangemouth, where both high-fidelity training
simulators and generic computer-based training programs are used to carry out structured
training for all operators.
Maintenance
Adequate maintenance and support is the main operating cost of using control, and this
cost must be adequately budgeted. Software does not degrade in the same way as physical
equipment, so it is tempting to fit it and forget it. However, the control system is full of
process models (even a simple PID control loop is an inverted process model). The accuracy of

these models with respect to the real plant behavior does deteriorate because of changes in
process-operating conditions such as fouling, internal damage and wear. This results in
controller operation deteriorating over time. Since controllers are designed to be tolerant of
model errors, this deterioration can extend over a long time, sometimes months or years,
without being noticed, until one day, the controller enters an unstable state and appears to
malfunction without apparent cause.
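One practical way to catch this slow degradation before the controller appears to go unstable is to track a simple performance statistic over time. The sketch below is an assumption about how such monitoring might be set up, not a method named in the article; it compares the recent variance of the control error against a baseline recorded at commissioning, with the window size and alarm ratio chosen arbitrarily.

import statistics

# Hedged sketch: flag a slowly degrading control loop by comparing the recent
# variance of (setpoint - measurement) with a commissioning baseline.
# The window size and alarm ratio are illustrative assumptions.

def degradation_alarm(errors, baseline_variance, window=200, ratio_limit=4.0):
    """Return True if the variance of the most recent errors has grown to
    more than ratio_limit times the baseline recorded when the loop was new."""
    recent = errors[-window:]
    if len(recent) < 2:
        return False
    return statistics.variance(recent) > ratio_limit * baseline_variance

if __name__ == "__main__":
    commissioning_errors = [0.1, -0.2, 0.05, 0.15, -0.1, 0.0, 0.2, -0.15]
    baseline = statistics.variance(commissioning_errors)
    # Later samples with a slowly growing spread, as models drift from the plant
    later_errors = [e * 3.0 for e in commissioning_errors] * 30
    print("maintenance needed:", degradation_alarm(later_errors, baseline))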
Like any decaying equipment, the greater the extent of deterioration, the more difficult and
expensive it becomes to recover. Solving controller performance problems is almost always
easy, if tackled immediately, but can become very difficult if the control system is left alone for
long periods of time. This is because distrust grows among operators as a controller
malfunctions, and, in a worst-case scenario, the users turn the device off and learn to live
without the advanced controller that you have spent months developing. This situation can be
more difficult to resolve than getting the original installation accepted, since the controller's
failure becomes part of the folklore of the plant. The controller then has to be fixed and recommissioned in an environment of mistrust.
Note that automated tuning methods are available, and under many circumstances can be
used to great benefit. However, there are several cases where these methods can cause
problems, even causing an unplanned plant shutdown. At BP Amoco's Grangemouth plant, a
self-tuning algorithm had been installed and had been left alone with the autotune function
turned on. Nobody had realized that the process operators had used controller performance to
determine when an exchanger was fouled. The operators determined fouling based on the flow
controller's ability to keep tight control, rather than using laborious monitoring and calculation
methods. With the new controller, the heat-exchanger performance remained good and
operators did not realize that the exchanger was fouling until it was too late, and the plant
choked completely. This resulted in an unplanned plant shutdown that was more extensive,
and therefore more expensive, than the normal limited shutdown just to clean the exchanger.
Measure and Report
Measurement of controller use and benefit is vital to the successful use of advanced
control. It is this activity that provides the owner of the control system with the motivation to
drive the support process. Without that process, the controls will be neglected and will be
turned off. A neglected advanced controller will work for only a matter of months, on average,
before it falls into disuse. It only takes a minor problem, such as an instrument failure, for
operators to turn off an advanced-control system, and unless it is fixed and put back into use
immediately, it will never get used. And if it does get recommissioned, it will need a complete
checkout. This is because the original reason for the controller failure gets forgotten,
sometimes within a matter of days, and the controller gets a permanent "broken" label stuck
onto it. Also, people will not turn on something that they know they can manage without and
don't understand the need for, anyhow.
A controller gives a benefit to the business, but only when it is used. The crudest measure
of controller benefit is service factor, the assumption being that if the controller is being used,
then it is giving a certain level of benefit. In the absence of anything else, it is worth
monitoring this, and the control engineer should work with the control system owner to put
online monitoring in place. This should generate reports that are used to pick out controls that
have a poor reliability record, and maintenance can then be targeted there.
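A service-factor report of this kind can be generated from nothing more than periodic samples of each controller's on/off status. The sketch below is a minimal example of that bookkeeping; the log format, tag names and the 90% target are assumptions made for illustration.

# Hedged sketch of online service-factor monitoring. Each controller's log is
# a list of True/False samples (True = advanced control in service) taken at a
# fixed interval; the 90% target used to flag poor performers is an assumption.

def service_factor(status_samples):
    """Fraction of the reporting period for which the controller was in use."""
    return sum(status_samples) / len(status_samples) if status_samples else 0.0

def poor_performers(status_logs, target=0.90):
    """Controllers whose service factor falls below the target, worst first."""
    factors = {tag: service_factor(log) for tag, log in status_logs.items()}
    return sorted((sf, tag) for tag, sf in factors.items() if sf < target)

if __name__ == "__main__":
    logs = {
        "reactor_temperature_mpc": [True] * 95 + [False] * 5,
        "column_pressure_tac": [True] * 60 + [False] * 40,
    }
    for sf, tag in poor_performers(logs):
        print(f"{tag}: service factor {sf:.0%} -- target maintenance here")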
Such a program assumes, however, that advanced controls are beneficial, but does not tell
just how beneficial they are. The benefits that advanced control can deliver are often not very
well understood. Therefore, quantifying their value will be advantageous when limited
resources are being prioritized.
The approach favored at BP's Grangemouth plant for quantifying the effect that advanced
controls have on business is to calculate benefits using real data for several months after

controls are turned on. While it is unusual for a single advanced-control system to provide
observable effects on the business bottom line, a controller will usually have a significant
effect on individual process measurements. Process flows and compositions, in particular, can
be translated, more or less, directly into financial terms. Again, these measures should be
made online, and integrated into the reporting system to give a good feel for control benefits.
Another way to determine the benefits of advanced control systems is to look at the online
calculation of controller operation. More specifically, examine the statistical measures of
deviation of the process measurement from the setpoint. In some cases, these can also be
used to detect early signs of controller problems.
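The deviation statistics mentioned above are straightforward to compute online. The sketch below shows one possible form: the standard deviation of the control error and the fraction of time spent outside an operator-defined band, either of which can also serve as an early warning of controller trouble. The band width and sample data are assumptions.

import math

# Hedged sketch of setpoint-deviation statistics for benefit reporting and
# early fault detection. The tolerance band and data are illustrative assumptions.

def deviation_statistics(measurements, setpoint, band=1.0):
    """Standard deviation of the error and fraction of samples outside the band."""
    errors = [m - setpoint for m in measurements]
    mean = sum(errors) / len(errors)
    std = math.sqrt(sum((e - mean) ** 2 for e in errors) / len(errors))
    outside = sum(1 for e in errors if abs(e) > band) / len(errors)
    return std, outside

if __name__ == "__main__":
    samples = [99.2, 100.4, 100.9, 99.7, 100.1, 101.6, 98.8, 100.2]
    std, outside = deviation_statistics(samples, setpoint=100.0)
    print(f"error std dev: {std:.2f}, time outside band: {outside:.0%}")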
There are also less-tangible benefits that advanced controls provide. These are mostly
derived from increased process reliability. Process equipment is subject to less stress when
well controlled, and it enjoys longer life. In addition, well-controlled plants will be easier to
operate and there will be fewer operational problems, which will lead to increased reliability.
Efforts should be made to measure all effects that provide business value.
Ongoing Developments
Typically, manufacturing processes are subject to change. It is obvious that if the process
design is changed (a reasonably frequent occurrence), then the advanced controls will need to
be appropriately modified. Advanced controls need to be treated as an integral part of the
plant. There have been cases where apparently minor process modifications have resulted in
advanced controls not working.
An example of this was in the rerouting of a gas-feed header that made a certain process
stream subject to composition changes, where previously its composition had been steady. To
make matters worse, the composition became a function of flowrate, and this invalidated the
assumptions made about the plant mass balance in the plant's closed-loop optimizer.
Optimization was adversely affected by this simple mechanical modification and several weeks
of optimizer operation were lost until the "fault" was corrected. The engineers who designed
the plant modification did not involve control engineers because the modification was a
mechanical piping change and -- seemingly -- no physical control equipment was affected. All
plant modifications should be looked at by control engineers, so that they can assess any
effect on the plant controls.
In addition to physical plant changes, the plant's business drivers can change. This can be
a more serious problem. The owner of the advanced control system must understand the
business drivers that are being addressed, so that the advanced-control model remains in step
with the business needs. Once the need for development of an advanced control system is
identified, it is necessary to go back to the design phase of the "Advanced-Control Support Chain."
This loop must be a never-ending process. If it comes to a halt, operation of the advanced
controls will begin to decay as the process models and the assumptions they use become
obsolete.

Online Particle Sizing as a Route to Process Optimization


Consider the potential advantages and design limitations to determine if
a switch is right for you
By George Crawley and Andrew Malcolmson, Malvern Instruments Ltd.

The abundance of offline systems for particle-size analysis is a testament to that method's
longstanding popularity in the chemical process industries (CPI). It is also an indication,
however, that far too many processes are currently performing below their potential. Given the
influence of operator variability and sampling inconsistency, the results of offline analysis are
inherently uncertain. Meanwhile, time delays associated with offline analysis mask true
process behavior and disturb manpower schedules in the laboratory, particularly during
transient periods such as startup.
By switching to online analysis, however, these problems can in many cases be avoided.
Much greater quantities of data can be produced more rapidly with little, if any, operator
intervention. Automation of sample extraction, preparation and transport, together with analysis, data reporting and some maintenance routines (cleaning and background-data verification, for example), eliminates any associated variability. Also, the sample size is typically much larger (sample flowrates are in the kg/h rather than the g/sample range), and hence, representative data can be generated at more timely intervals.
To determine if an upgrade from offline to online analysis is justified, consider both the
potential process improvements and analyzer system-design constraints.

Justifying replacement
Figure 1 highlights how different the results of offline and online analyses can be. The
graph shows operation of a mill, with the red dots indicating regular manual sampling for
particle-size analysis. From an offline-analysis point of view, operation is steady. The overlaid
online analysis, however, is collected in realtime and, thus, tells a different story. If quality
assurance of particle size was the goal in this application, offline analysis would give a false
positive result.
Even so, if online measurement simply reported that operation was unsteady, an
instrumentation switch would hardly be justified. What really sets the online method apart is
that it enables process optimization of conditions such as the following:
Steady-state operation. The process can be run more smoothly and closer to target specifications
Upset identification. Any significant changes in operation are immediately observed, leading to more-rapid problem resolution
Transient-state operation. Constant monitoring allows required changes in process operation, such as product changeover, to be implemented more efficiently
Process end-point determination. Overwork of the product is prevented, as the required end point can be easily detected
Online analysis typically delivers improved process knowledge, which leads to the
identification of one-shot quick wins, such as hardware optimization or upgrades. It also
enables long-term benefits via smarter process control.

Improved process control. Without online analysis, the impact of different process variables
on final-product properties often eludes manufacturers of solids products. For instance, in
spray drying and atomization, the material properties are influenced by fluid flow, heat
transfer, droplet dynamics, solidification kinetics and other factors. Likewise, fluidized-bed
processes, including agglomeration and granulation, are successfully used in the production of
a range of solid products but are frequently difficult to fully optimize and understand. In either
case, reliance on extensive operator experience makes any extrapolation to new conditions
time-consuming and problematic.
In contrast, online analysis allows direct observation of the impact of a given variable on
particle size. Figure 2, p. 40, shows an example of a simple study, where a rotating-disc
granulator contained in a spray dryer unit is used to produce ceramic particulates of a given
diameter. The trends in the figure clearly indicate the effect of disk rotation speed (the red
line) on particle size. As step changes in the disc speed are made, corresponding changes in
particle size can be observed.
Correlations such as these can be used to develop improved manual operating instructions,
to implement automated control loops or to develop background data for soft sensors.
Soft sensors come into play when the hardware for online analysis is not always available (if it needs to be shared among several operating lines, for example). In such cases, an
online analysis can initially be used to determine the particle-size effects of other process
parameters that are continuously measured. Then, in the absence of the analyzer, particle size
can be predicted from the other measured parameters and an appropriate control strategy can
be followed.
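A soft sensor of this kind can be as simple as a regression fitted while the analyzer is temporarily installed on a line. The least-squares sketch below assumes particle size correlates linearly with a few continuously measured parameters (disc speed and feedrate are used as examples); the data, the predictors and the model form are all illustrative assumptions.

import numpy as np

# Hedged sketch of a linear soft sensor: fit particle size against continuously
# measured process parameters while the online analyzer is available, then
# predict size once the analyzer is moved to another line. All data and
# predictor choices below are illustrative assumptions.

def fit_soft_sensor(process_data, measured_size):
    """Least-squares coefficients (including an intercept term)."""
    x = np.column_stack([np.ones(len(process_data)), process_data])
    coeffs, *_ = np.linalg.lstsq(x, measured_size, rcond=None)
    return coeffs

def predict_size(coeffs, process_point):
    return float(coeffs[0] + np.dot(coeffs[1:], process_point))

if __name__ == "__main__":
    # Columns: disc speed (rpm), slurry feedrate (kg/h) -- assumed predictors
    x_train = np.array([[3000, 50], [3500, 50], [4000, 55], [4500, 60]])
    d50_train = np.array([85.0, 74.0, 66.0, 58.0])   # measured median size, um
    model = fit_soft_sensor(x_train, d50_train)
    print(f"predicted d50 at 3800 rpm, 52 kg/h: {predict_size(model, [3800, 52]):.1f} um")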
Improved safety and reduced risk. Sampling of severely toxic materials is a potentially
hazardous operation that requires extensive personnel protection. Online analysis can provide
an immediate improvement in safety by eliminating manual removal of material from the plant
and the risk of operator exposure.
With online analysis, issues such as material plugging, bridging and flooding, mechanical
failure and utility upsets can be rapidly detected. Online analysis can also be used for
predictive maintenance, reducing the risk of equipment failure and increasing plant reliability.
In pharmaceutical manufacturing, inadvertent exposure to the product is a significant
danger, but also, product contamination is totally unacceptable. Ease-of-cleaning norms have
been applied to certain embodiments of process particle-size analyzers, making them
appropriate for use in a regulated good-automated-manufacturing-practices (GAMP)
environment. The use of online analysis also reduces batch-to-batch variation, thereby
improving product safety and allowing manufacturers to more easily meet their regulatory
obligations.

Design considerations
Representative sampling. Sample presentation and procurement is a particularly important
part of any analysis. The procurement of a representative sample is frequently difficult for
solids handling processes, and sampling is typically responsible for the majority of the error
associated with an analysis. For online sampling systems, generic automated devices have
been designed to extract a representative sample from a flowing line, and subsequently return
it to the process. Take special care to ensure that your sampling system can accommodate any
special material characteristics.
Consider, for instance, that coarse particles are especially affected by the size of the sample: too small a sample, and there may not be enough material to be representative. Likewise,

biasing due to flow conditions may also lead to artificial skewing of the particle-size
distribution and reported concentration.
For wet processes, sample dilution may be required prior to analysis, depending on the
analytical technique used. If so, an automated dilution system will be more likely to present a
representative sample. Similarly, samples from dry powder processes may clump together due
to electrostatic attraction, and require dispersion of these loosely bound particulate aggregates
using an air venturi so as to ensure accurate measurement of the primary (non-aggregated) particle size.
Ideally, the samples presented for measurement should not undergo any denaturing of the
particles. Dilution in an inappropriate diluent, for instance, could lead to dissolution of the
particles or flocculation. Over-dispersion in a dry system could lead to milling of the particles.
Even incorrect temperature control could lead to increases in relative humidity and, thus,
clumping.
Process-relevant information. A key requirement of an online analysis system is that it must measure process-relevant information. Particle size can be measured using many different methods, including low-angle laser-light scattering, ultrasound, optical image analysis and direct mechanical measurement. Each of these methods is capable of producing accurate particle-size data, but only within its intended set of parameters. The technique should
therefore be chosen with care.
The ultimate judge of product quality is your company's product-release protocols. Since
these protocols were probably defined in an era of offline laboratory-based methods, any
upgraded online quality-assurance system must use either the same or correlatable analytical
technology and produce data with the same or comparable precision. In short, the golden rule
is that the online process device should lessen, not increase, the laboratory workload.
Measurement timeframe. To maximize the usefulness of an online system, data must be
produced within a process-relevant timeframe. Two variables are important in considering this issue: the data acquisition time (DAT) and the measurement acquisition time (MAT). DAT is defined as the time taken for the instrument to acquire a single piece of data, whereas MAT is defined as the time taken for the system to acquire sufficient data to produce an accurate result and reset the system ready for the next measurement cycle. There are two main types of analysis:
Ensemble (bulk) measurement using online laser diffraction, where MAT is approximately equivalent to DAT. This is the ideal case, where MAT equals DAT at speeds of up to two entire particle-size distributions per second, based on the characterization of millions of particles
Counters, where MAT is much greater than DAT. Particle-size analysis methods based on the measurement of individual particles involve the application of statistics to a much-reduced dataset of measurements. A typical image analyzer, for instance, must acquire 250,000 to 500,000 images of individual particles to accurately represent a single particle-size distribution. That number of counts can take 10-20 min on some systems
When MAT equals or exceeds the process time constants, it is impossible to resolve temporal changes in the process
When MAT is somewhat less than the process time constants, results will be non-quantitative, but process trending can still be accomplished
When MAT is much less than the process time constants, true process behavior can be resolved, including infrequent blips due to process upsets. If the frequency of the measurement cycle is high, then the analyzer measurements can be classed as quasi-continuous or realtime (a simple screening check along these lines is sketched below)
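The three cases above can be captured in a simple screening check when evaluating an analyzer for a given process. The ratio thresholds in the sketch below are illustrative assumptions, not published rules.

# Hedged sketch of the MAT-versus-process-time-constant screening described
# above. The factor-of-ten and factor-of-one thresholds are illustrative
# assumptions, not published rules.

def measurement_regime(mat_seconds, process_time_constant_seconds):
    """Classify what an online analyzer can resolve for a given process."""
    ratio = mat_seconds / process_time_constant_seconds
    if ratio >= 1.0:
        return "temporal changes cannot be resolved"
    if ratio > 0.1:
        return "non-quantitative, but process trending is possible"
    return "quasi-continuous: true process behavior can be resolved"

if __name__ == "__main__":
    # Ensemble laser diffraction (MAT ~ 0.5 s) vs. an image counter (MAT ~ 900 s)
    for mat in (0.5, 900.0):
        print(mat, "->", measurement_regime(mat, process_time_constant_seconds=300.0))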
Process appropriate device. The final key requirement of an online system is that it is
based around a process-appropriate device. A process-appropriate device is one that meets
the reliability, maintenance, and safety requirements of the local manufacturing environment

(hardware robustness), is able to communicate with the existing control systems, and
consistently produces analysis to the required precision (process robustness).

TERMINOLOGY
The terms online, offline, inline and atline are frequently used in discussions of process analysis. In many discussions on this topic, the terms online and inline analysis are used interchangeably, so for clarity, the following definitions are used in this article:
Online analysis. In this approach, a sample stream is routed from the bulk flow for
analysis in an analytical loop. The sample is then typically returned to the process. The
instrument is more independent of the process than inline equipment and can include extra
sample preparation, such as dispersion, without increasing the footprint of the analyzer in the
process stream
Inline analysis. The analytical instrument is installed directly in the process line, negating
the requirement for any bypass. Typically, sampling still occurs as the measurement will be
carried out on only a sub-set of the entire process stream
Offline analysis. The offline approach utilizes laboratory-based instrumentation that is not connected to the process in any way. Measurements are carried out on batch samples, with process sampling and transport to the instrument carried out manually
Atline analysis. The sample is manually removed from the plant and subsequently
analyzed close by. Measurement is similar to conventional offline batch analysis, but the
analyzer can be mobile or portable, must be rugged in design, and is more focused on the
measurement of a particular type of sample than off-line instrumentation. Additionally, atline
systems can have links to the process control system.
Typically, offline and atline instrumentation is easier to install than online and inline units. On the other hand, online and inline analyzers are capable of making much more frequent measurements.

DESIGN CONSIDERATIONS FOR AN ONLINE PARTICLE-SIZE ANALYSIS SYSTEM
Reliability and robustness compared to plant requirements
Integration with existing control systems and industrial communication protocols
Ongoing calibration, verification and maintenance requirements
Special plant requirements, such as classified-zone operation and Good Automated Manufacturing Practices (GAMP) for pharmaceuticals
Measurement acquisition time, compared to process time constants
Analytical consistency with existing process protocols
Sample system design, including sample extraction, preparation and transport, process footprint, automation complexity, and ease of maintenance

Get Up To Speed On Digital Buses


These non-proprietary field networks offer a host of benefits. Find out,
once and for all, which ones are best suited to your applications
By Gary A. Law, Emerson Process Management, and James Mitchell, Cognis Corp.; Edited by Rebekkah Marshall

The first reason most chemical process engineers investigate a proposed bus-structured system is its reduced wiring cost. A multi-drop or tree-shaped bus structure emanating from a controller communications card makes point-to-point wiring unnecessary, and can cut the installed cost of field devices by as much as 40%, or even more in some circumstances. But it is a bus's many other advantages that are the most important in the long run (see the box, at right).
Choosing the right bus from among the several described below requires that the process
control scheme first be fully understood to determine if a bus is actually warranted; and, then
if so, what type. Each digital-bus format has particular strengths and weaknesses, and no
single bus can satisfy all requirements (Figure). Fortunately, the most advanced automation
platforms can support several buses, running simultaneously in the same controller.
Factors to take into account in evaluating buses include the following:

Impact on existing plant controls

Hazardous area requirements


Transmission distances and speed
Conduit runs and wall seals
Electromagnetic interference (EMI)
Data-logging, recipe-management and process-validation requirements
Training and maintenance
Future growth

Also, keep in mind that even in the most bus-centric control system (see the box, Enabling truly distributed control, opposite page), a certain number of traditionally hardwired analog and discrete (on-off) devices are usually necessary. Consider, for example, devices without bus communications capability, or extremely fast-response equipment, such as gas valves and emergency-safety-shutdown (ESS) devices.
Due to the requirement for unique microchip-based communication packages, bus-connected field devices cost more than conventional analog and discrete designs, with the
more-complex buses imposing a greater differential in price. Also, the installed cost of the first
bus-connected device is almost always higher than the installed cost of the first traditionally
wired point-to-point device. At some number of devices, however, the cost curves intersect,
and installation of each additional bus-connected device provides a savings.
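The cross-over point between bus and point-to-point installed costs can be estimated with a simple break-even calculation. The per-device and infrastructure figures in the sketch below are placeholder assumptions; actual numbers vary widely by site and bus type.

# Hedged sketch of the bus-versus-point-to-point break-even calculation.
# All cost figures are placeholder assumptions for illustration only.

def break_even_device_count(bus_infrastructure_cost, bus_cost_per_device,
                            conventional_cost_per_device):
    """Smallest number of devices at which the bus installation becomes cheaper."""
    saving_per_device = conventional_cost_per_device - bus_cost_per_device
    if saving_per_device <= 0:
        return None  # the bus never pays back on wiring alone
    count = 1
    while bus_infrastructure_cost + count * bus_cost_per_device >= \
            count * conventional_cost_per_device:
        count += 1
    return count

if __name__ == "__main__":
    n = break_even_device_count(bus_infrastructure_cost=6000.0,
                                bus_cost_per_device=900.0,
                                conventional_cost_per_device=1500.0)
    print("bus becomes cheaper at", n, "devices")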
Following is a description of the instrumentation buses that have been adopted most
broadly throughout the chemical process industries (CPI). They are divided into bit-level, byte-level and block-level categories, often respectively termed sensor buses, device buses and field buses.

Sensor Buses
AS-i
AS-i (Actuator/Sensor-interface) is the de facto bit-level bus structure. Introduced in
Europe for factory automation, it is becoming popular in the U.S. for discrete devices in process control. For simple on-off service (in valves, switches, solenoids, starters, pushbuttons and photoeyes, for example) and short controller-to-device distances, AS-i is a rugged, fast,
noise-resistant, and low-cost bus that allows several devices to be attached to one node. Its
deficiencies are small message size and relatively limited diagnostic capability.
AS-i is a two-wire master-slave network with power and signals carried on a single pair of
wires (see box, Master-slave vs. peer-to-peer networking, above). It allows for separate
device power if that is required for higher-power devices. Even though AS-i is economical, we
estimate the bus cost curves versus conventional wiring cost curves intersect at about 10 AS-i
slaves per installation.
Although the power required for AS-i-bus devices exceeds the limit specified for intrinsic-safety certification (one means of ensuring safety in a hazardous area), several manufacturers protect the use of their products in hazardous areas by other means. For example, a solid-state relay was recently certified for FM Class I, Division 2, allowing localized four-float tank-level indication. A variety of AS-i factory-automation devices are available for non-hazardous areas; many of these are expected to be certified in the future.
The AS-i master communications card in the controller backplane can speak directly to the
control system's database. One segment will support up to 31 slaves at distances as great as
100 m, or farther if repeaters are used.
Message structure is compact and transmission fast: four bits in each direction per slave
and message (plus status), with a maximum cycle time of 5 msec for 31 slaves; and cycle
time is faster when fewer slaves are attached. Garbled messages are automatically identified
and repeated.
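Because the scan time scales with the number of slaves polled, a rough cycle-time estimate follows directly from the figures above. The sketch below assumes simple linear scaling from the 5-msec, 31-slave maximum; real installations should rely on vendor data.

# Hedged sketch: estimate AS-i scan time by scaling the quoted 5-msec maximum
# for 31 slaves linearly with the number of slaves actually attached.
# Linear scaling is an assumption; consult vendor documentation for real designs.

MAX_CYCLE_MS = 5.0
MAX_SLAVES = 31

def asi_cycle_time_ms(attached_slaves):
    if not 1 <= attached_slaves <= MAX_SLAVES:
        raise ValueError("an AS-i segment supports 1 to 31 slaves")
    return MAX_CYCLE_MS * attached_slaves / MAX_SLAVES

if __name__ == "__main__":
    for n in (5, 15, 31):
        print(f"{n} slaves: about {asi_cycle_time_ms(n):.1f} msec per scan")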
The bus supplies up to 8 A per segment to power slaves, sufficient to drive low-power
solenoid-actuated valves and significantly more than the 4-20-mA range supplied by an
analog system. If more current is needed, the AS-i bus can drive a slave unit containing a
remote relay module to switch to auxiliary power for higher-load devices, such as contactors.
Each slave can handle up to four sensors or actuators, giving a bus-segment total of 124
inputs and 124 outputs.
A molded bus cable is available to provide foolproof insulation-displacement connections.
Conventional round cable can also be specified, as can simple 16-gauge building wire. Twisted
pairs (typical of analog wiring) are not required, and the bus can be run in a conduit with
other control circuits.

Device Buses
The next step up the ladder of instrumentation-bus proficiency and cost is occupied by
open byte-level serial communications, most commonly represented by DeviceNet and
Profibus DP. Both are supported by numerous hardware vendors.
A number of other open device-level buses are available, and they are largely used in
factory automation. These include Interbus, FIP, Seriplex, Modbus, LonWorks and Sercos. The
Modbus protocol, nonetheless, is familiar to many in the CPI for providing serial links in
process control.

Both DeviceNet and Profibus DP are cyclical, bi-directional networks that are optimized for
on-off communications in industrial automation, as opposed to process control. But unlike bit-level AS-i, these buses transmit small-to-midsize packets of information. So, in addition to
sensor and actuator duty, typical applications include variable-frequency-drive (VFD) control,
current and voltage monitoring, barcode reading, device-level diagnostics, local panel displays
and intelligent motor-control centers (MCC).
Due to their larger data handling capability, device buses should be wired with special
cables that are manufactured for them. For DeviceNet, power and signals are carried on two
separate pairs of wire. But for Profibus DP, power and signals can travel on either wire of a
single pair. Profibus DP additionally allows independent power supply.
In the CPI, a relatively inexpensive, Class I, Division 2, local graphical panel (or even just a classified start-stop panel) residing on a device bus can be very helpful, by permitting an operator to interface with the process without having to walk back to the control room. An intelligent MCC, especially one with VFD starters, that has been factory-wired and connected with a byte-level bus can provide a large amount of status, health and diagnostic information.

DeviceNet
DeviceNet is a realtime network based on the Controller Area Network (CAN) protocol,
which was originally developed by The Bosch Group (Karlsruhe, Germany) for automotive
onboard electronics, an application where speed and reliability for engine timing, antilock-brake systems and air bags are paramount. After CAN was established, Allen-Bradley engineered it for industrial automation by adding an application protocol on top of the CAN communications stack. The company later transferred DeviceNet to the Open DeviceNet Vendors Association, a move that made the protocol free and non-proprietary for all users.
DeviceNet is the most popular byte-level bus in North America; and, because of the high
volume of CAN and DeviceNet hardware in manufacture, its communications chips are
relatively inexpensive.
DeviceNet offers both peer-to-peer and master-slave data exchange; and a given device
may behave as a client, a server, or both. The data field is 0 to 8 bytes of user information,
ideal for frequent exchange of small amounts of input-output (I/O) data such as VFD speed
references, display updates and simple diagnostics. For the highest transmission reliability, the
bus has several types of error detection and fault confinement, including cyclic redundancy checking (CRC) and automatic retries, to prevent a faulty node from disrupting the network. As with most instrumentation buses, devices can be swapped under power, if the control system supports such functionality.
Transmission distances vary with the baud rate: 500 m at 125 kb/s down to 100 m at 500
kb/s. DeviceNet can have as many as 64 nodes, with each node theoretically supporting an
infinite number of I/O points. In peer-to-peer operation, DeviceNet is similar to Ethernet, in
that any node can attempt to transmit at any time. However, bit-wise non-destructive
arbitration resolves collisions in favor of the higher-priority node, with no loss in data or
bandwidth by this node.
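The bit-wise arbitration idea is easy to see in a small simulation. The sketch below treats identifiers as bit strings in which a dominant 0 overrides a recessive 1, as in CAN; the node identifiers used are made-up values, and the code is an illustration of the principle rather than a protocol implementation.

# Hedged sketch of CAN-style bit-wise, non-destructive arbitration: nodes
# transmit their identifiers bit by bit, a dominant 0 overrides a recessive 1,
# and a node that sends 1 but reads back 0 drops out. Identifiers are made up.

def arbitrate(node_identifiers, id_bits=11):
    """Return the identifier that wins the bus (the lowest numeric value)."""
    contenders = list(node_identifiers)
    for bit in range(id_bits - 1, -1, -1):        # most-significant bit first
        sent = {node: (node >> bit) & 1 for node in contenders}
        bus_level = min(sent.values())            # dominant 0 wins the wire
        # Nodes that sent recessive 1 while the bus shows dominant 0 back off
        contenders = [n for n in contenders if sent[n] == bus_level]
    assert len(contenders) == 1
    return contenders[0]

if __name__ == "__main__":
    nodes = [0x15A, 0x0F3, 0x2B1]
    winner = arbitrate(nodes)
    print(f"node 0x{winner:03X} wins; the others retry with no data lost")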
Like the AS-i protocol, DeviceNet supplies power exceeding the restrictions for intrinsic
safety. And like manufacturers of AS-i products, a relatively large number of DeviceNet
vendors have taken other steps to ensure that their products operate safely in hazardous
areas.

Profibus DP

Profibus DP is the most widely specified open byte-level bus in Europe. It is one of several
offshoots of the basic Profibus protocol; another is Profibus PA, which is discussed later.
Profibus DP is a master-slave bus that can support up to 126 masters or slaves and data
fields as large as 244 bytes. It is based on the RS-485 serial-communications carrier, which
means distance and interference rejection are good. The bus also permits the addition and
removal of devices and step-by-step commissioning, without disturbing other devices; and
expansions have no effect on devices already in operation, again assuming the control
system supports such functionality.
Profibus DP's transmission speeds range from 9.6 kb/s (at distances up to 1,200 m) to 12 Mb/s (at distances below 100 m), although only one speed can be selected for all devices on a given segment. At 12 Mb/s, 512 bits each of input and output data for 32 stations can be
transmitted in just 1 msec.
Meanwhile, Profibus DP diagnostics are extensive. Unusual for a device-level bus, Profibus
DP includes optional extended functions that permit acyclic parallel transmission of alarms and
read-write functions. This allows device statuses to be read and slave parameters to be
optimized, without disturbing control. Isolators and I/O multiplexers are available to make
Profibus DP intrinsically safe (IS). Further, numerous classified devices, using Profibus DP, are
available for Class-I, Division-2 hazardous areas.

Field Buses
The most versatile of the open digital-instrumentation buses are the block-level types that
carry large packets of information. Falling into this category are Foundation Fieldbus and
Profibus PA. The older HART protocol is sometimes thought of as a field bus, but its use of a
4-20-mA signal sets it apart from the strictly digital buses discussed in this article (see box, p.
46).
Field buses are bi-directional and primarily intended to communicate with intelligent field
devices. Unlike bit- and byte-level buses, field bus segments are designed to carry both
signals and device power on the same type of shielded-pair wires used for analog control. If
necessary, devices can also be externally powered.
The primary objectives for developing field buses have been to:
Create more-capable and lower-wiring-cost replacements for conventional 4-20-mA transmission of process variables
Permit the control algorithms to be performed in field instruments, as well as in the controller
Permit remote calibration, commissioning, diagnostics and maintenance of field devices
Provide true device interoperability

Unlike sensor- and device-level buses, field buses are optimized for continuously
transmitting messages that contain multiple, floating-point process variables, all sampled at the same time and carrying their respective statuses. Being digital, field buses do not incur the
drift that can arise in conveying analog signals.
Because of the more intense communications required during process control compared to factory automation (mainly from the larger amounts of data that need to be continuously transmitted), and the requirement for intrinsic safety together with power and communications over the same wire pair, field buses are slower than sensor and device buses. Both
Foundation Fieldbus and Profibus PA operate at 31.25 kb/s.

Field buses were designed from the start to support IS connections, although the number
of devices permitted on an IS segment must be sharply reduced compared to a segment
serving a non-IS area. Fortunately, manufacturers of power supplies have recognized this
challenge and are developing solutions to overcome it.

Foundation Fieldbus
Foundation Fieldbus is considered by many to be the most versatile of the block-level
buses. It was developed from the start as a field bus, and it is the only instrumentation bus
having a user layer at the top of the conventional Open-Systems-Interconnection (OSI)
communications stack to allow for standard field-bus function blocks for control (21 at
present). The user layer also carries device descriptions and system management, allowing
data-acquisition and control functions to be performed across the bus between devices from
different manufacturers, a truly interoperable bus architecture.
Typically, a standard field-bus segment, commonly referred to as an H1, can be as long as 1,900 m without repeaters and accommodate 16 devices in a non-hazardous area. A passive-model IS segment can handle four devices. The Foundation has just added the FISCO active IS model (a German IS model), which increases the potential to accommodate up to eight IS devices per segment.
Foundation Fieldbus also recently became available with flexible function blocks (FFBs) to
perform complex batch, discrete, and hybrid operations. These application-specific blocks are
available pre-configured or fully configurable and can implement overall control strategies,
such as batch sequencing, burner management and VFD coordination. FFBs reportedly can
also be used to map information from byte- and device-level buses to Foundation Fieldbus,
should this be desirable. Though, at this time, their use has yet to be verified in that capacity.
Foundation Fieldbus is highly deterministic. For one thing, it provides explicit
synchronization of control and communications that allow periodic execution of control
functions without communications-related deadtime or jitter. And, it delivers time distribution
to instrumentation for support of function-block scheduling and alarm time-stamping at the
point of detection. Fieldbus instrumentation is also hot-swappable.
When Foundation Fieldbus is teamed with an advanced process automation system, a
consistent scan time that is independent of logic-memory size helps avoid the need for loop
retuning when control modifications are made. A discrete I/O card can now also be distributed
on a field-bus segment for local on-off control, avoiding the need for a bit- or byte-level bus,
or point-to-point wiring back to the controller.
The Fieldbus Foundation (Austin, Tex.) has also introduced a high-speed Ethernet version
of field bus for use as a communications backbone. This is discussed later in the section on
Ethernet.

Profibus PA
Profibus PA is an IS add-on to the Profibus DP bus and is primarily used in Europe, with the
bulk of its installations in Germany. The upstream Profibus DP portion, and the two-wire
downstream Profibus PA portion, are connected by either an intelligent link or a segment
coupler, depending on the Profibus-DP speed needed. However, the fact that Profibus PA is an
add-on to a device-level bus and does not incorporate a user layer in its communications stack
limits its versatility compared to Foundation Fieldbus.
Like its DP sibling, Profibus PA permits the addition and removal of devices and step-by-step commissioning, without disturbing other devices. Cable lengths can be up to 1,000 m in
hazardous areas, 1,900 m otherwise.

Without a user layer, Profibus PA is defined by a device application profile on the OSI
application layer, which describes the functions, parameters, and behavior of field devices
typically used in process control and is designed to facilitate device interchangeability and
vendor independence. Descriptions of device attributes are based on the function-block model.
Profibus PA function blocks are designated for physical, transducer, analog input (AI), analog
output (AO), discrete input (DI) and discrete output (DO). The application profile is divided
into communications-relevant and applications-relevant definitions.

Ethernet
Ethernet TCP/IP is the fastest-growing and lowest-cost digital-network technology yet developed, which has made it ubiquitous in the IT and Internet worlds. It is also stimulating intense interest in process control and factory automation. Factory automation has for several years offered Ethernet options for connecting controllers to remote I/O bricks (physical groupings of I/Os) using controls-based data representations. Contributors to Ethernet's popularity in all areas of automation include its low-cost, computer-store-available components, switched ports, more-secure firewalls, increasing speeds to 1 Gbit/s, support by PC operating systems, and ability to include web pages in field devices having embedded processors.
Ethernet today is only partially capable of handling process instrumentation, however. Admittedly, the Fieldbus Foundation has developed an Ethernet communications stack, called high-speed Ethernet (HSE), that maintains all of the advantages of the H1 Fieldbus user layer. But HSE alone is not intended for controlling processes, because it cannot connect directly to field devices or communications bricks at this time. Also, a method for bus-powering field devices that satisfies intrinsic safety has not yet been developed.
Instead, HSE is suggested as a 100-Mb backbone (redundant if desired) for tying several
H1 segments back to a controller via an HSE/H1 linking device. The linking device also permits
peer-to-peer communications between devices on the H1-connected segments without
controller intervention.
HSE is additionally recommended as a backbone for high-performance applications,
subsystem integration, high-density data generators, and data servers, and for
communications with enterprise resource planning (ERP) and management information
systems (MIS). With copper wire, HSE length is limited to 100 m; with fiber optics, a segment
can reach 2,000 m.
Some control-industry prognosticators have gone so far as to say that the days of today's popular control buses are numbered because they are too slow in comparison to Ethernet. That's a stretch, for several reasons:

Ethernet and instrumentation buses are evolving as complementary rather than competitive
and should remain that way for the foreseeable future. Foundation Fieldbus HSE is a good
example

Super-fast response (every msec) or a need to quickly transfer reams of data is seldom necessary in process automation, where data is transmitted once every 50-100 msec for the fastest devices and once every 5 sec for the average ones
Speed imposes penalties. To assure control reliability, for example, 10-Mb/s and faster Ethernet (or any high-speed network, for that matter) generally requires a star topology and short runs. This topology can make Ethernet field wiring as expensive to install as conventional point-to-point wiring, thereby negating a major advantage of choosing any kind of network or bus in the first place.

Extending to the enterprise

Process-automation vendors and standards organizations, particularly a joint effort of the American National Standards Institute (ANSI; Washington, D.C.), the Instrumentation, Systems and Automation Society (ISA; Research Triangle Park, N.C.) and the OPC Foundation (Scottsdale, Ariz.), are developing ways to provide seamless, Ethernet-based, transactional (and eventually near-realtime) data exchange from the field device to the enterprise system. As an example, the Foundation Fieldbus, DeviceNet and Profibus organizations have recently signed on to the OPC Foundation's Data Exchange standard for Ethernet.
The device-to-enterprise data-exchange capability seems to be evolving as follows:
OPC, to avoid writing drivers for server-to-client communications
OPC bridges, to permit server-to-server communications in process control
The Internet's structured (computer-understandable) XML language
Process-automation vendor software to translate native process-automation-system data to XML
Transaction servers, such as Microsoft BizTalk 2000, to bridge older ERP systems to the Internet, ease the development of pure XML/Internet ERP systems, and provide an infrastructure for rules-based business-document creation, transformation, routing and tracking

A typical transactional data exchange, for instance, might start when an intelligent field device, such as a flow transmitter connected by Foundation Fieldbus, detects a blocked impulse line. The transmitter sends a device alert to the process-automation system controller to indicate that a maintenance condition has arisen. The alert may also contain recommended actions, plus information such as the transmitter's tag number or maintenance history.
As a priority message, the alert is quickly relayed to a process-automation-system workstation for operator action. Ideally, the alert would also be forwarded to a computerized maintenance-management system (CMMS) to automatically generate a work order. If urgent, the alert might additionally be transmitted to a maintenance technician's pager or mobile phone for immediate attention.
Should the transmitter need to be replaced (based on the type of alert, such as notification of a failure), the message could be sent over the Internet to the company's ERP system at the head office, and interfaced with the ERP's inventory-control module to check availability of a replacement part. If necessary, the field device could then interface with the ERP's order-processing module to place an order for a new part. The ERP's accounting system might then issue a purchase order, again via the Internet, to a supplier. After installation, the new device could be automatically added to the maintenance-system database, auto-commissioned over the fieldbus, and auto-recorded in the maintenance log. All of these steps could be completed without human intervention.
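To make the idea of structured, computer-understandable alerts more concrete, the short Python sketch below builds a hypothetical device-alert document in XML of the kind that might flow from the process-automation system toward a CMMS or ERP. The element names, tag number and values are illustrative assumptions, not part of any Foundation Fieldbus or OPC specification.

```python
# Minimal sketch: package a field-device maintenance alert as XML so that
# downstream systems (CMMS, ERP) can parse it without custom drivers.
# All element names and values are illustrative assumptions.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def build_device_alert(tag, condition, recommended_action, priority="high"):
    alert = ET.Element("DeviceAlert", attrib={"priority": priority})
    ET.SubElement(alert, "Tag").text = tag
    ET.SubElement(alert, "Condition").text = condition
    ET.SubElement(alert, "RecommendedAction").text = recommended_action
    ET.SubElement(alert, "Timestamp").text = datetime.now(timezone.utc).isoformat()
    return ET.tostring(alert, encoding="unicode")

if __name__ == "__main__":
    xml_message = build_device_alert(
        tag="FT-1204",                      # hypothetical transmitter tag
        condition="Blocked impulse line",
        recommended_action="Inspect and clear impulse line; verify zero",
    )
    # In practice, this payload would be posted to a CMMS work-order interface.
    print(xml_message)
```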

Selecting Your Control System Step-by-Step


Broken down for the CPI user and its consulting engineer, this practical
approach covers the process from beginning to end

By Harmik Begi, Baxter BioScience; and Jeff Comstock, CRB Engineers

In today's competitive manufacturing environment, control systems in the chemical process industries (CPI) account for an ever-larger percentage of construction costs, and are increasing in sophistication to meet growing automation needs. In addition to meeting basic technical and functional requirements, users are seeking to overcome other commonly experienced problems. These include maintenance difficulties, lack of local product and systems-integration support, high implementation and maintenance costs, startup delays, and an insufficiently user-friendly interface.
Following is a step-by-step process to assist in selecting the right control-system platform, from the perspective of the user's organization, as well as that of the consulting engineer who supports it.

User's process steps


1. Assign an automation technical lead position within the project team. Representing the
user with inside knowledge of the organizational and project requirements, this individual will
champion the control-system selection and implementation processes. Qualifications should
include experience in automation project engineering and construction. The technical lead
position must also be linked to corporate engineering standards, be able to get approval from
the project team and senior management, and facilitate system knowledge transfer within the
organization at project completion.
2. Collaborate with the consulting engineering firm's technical lead. Teaming up with an
automation technical lead from the consulting engineering company brings additional industry
insight and experience to the project. In addition, the consultant can be directly linked to the
process mechanical design effort, which for large projects is typically led by the same
consulting engineering company. The consultant also plays a key role in assisting the user with
the development of preliminary system conceptual design, bid packages, and other design
documents.
3. Develop the User Requirement Specification (URS). The URS defines the basic
requirements for the control system. It should follow internal company guidelines or those set
by the industry, such as Good Automated Manufacturing Practice (GAMP) standards for the
pharmaceutical industry. The best method for capturing the requirements is getting input and
approval from a cross-functional core team consisting of representatives from project
engineering, manufacturing (system user), plant engineering, maintenance, quality control,
and other appropriate departments. Once the basic requirements are set, it is also important to get feedback and buy-in from senior management on the URS.
At this phase of the process, it is appropriate to develop a preliminary network architecture for the control system based on the URS. For that purpose, the URS should contain, at a minimum, the following basic requirements for the control system (a simple way of capturing them in structured form is sketched after the list):
Control methodology: batch or continuous
I/O size and scalability
Networking requirements (architecture)
Operator interface requirements
Redundancy (hardware, communication, power)
Interfacing to process equipment skids
Interfacing to an existing control system
Interfacing to other business systems or laboratory information systems
Type of instrumentation and I/O
Data historian requirements
Asset management, and interfacing to a maintenance system
Reporting requirements (for example, production, cleaning and maintenance)
Security (physical access, data)
Audit-trail or version control
Platform lifecycle
Electronic records or electronic signatures (21 CFR Part 11 compliance for the pharmaceutical industry)
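As a purely illustrative aid (not drawn from GAMP or any standard), the Python sketch below shows one way the core team could capture these minimum URS entries as structured data, so the same entries can later seed the technical-evaluation matrix. All field names and example values are assumptions.

```python
# Illustrative sketch: capture minimum URS requirements as structured data.
# Category names and example values are assumptions for illustration only.
urs_requirements = {
    "control_methodology": "batch",            # batch or continuous
    "io_size": {"points": 2500, "spare_capacity_pct": 20},
    "network_architecture": "redundant backbone with fieldbus segments",
    "redundancy": ["controller hardware", "communication", "power"],
    "interfaces": ["equipment skids", "existing control system", "LIMS", "ERP"],
    "data_historian": True,
    "reporting": ["production", "cleaning", "maintenance"],
    "security": ["physical access", "data", "audit trail"],
    "part11_compliance": True,                 # electronic records/signatures
}

# Simple completeness check before the URS goes to senior management for buy-in
missing = [k for k, v in urs_requirements.items() if v in (None, "", [], {})]
print("Missing URS entries:" if missing else "All minimum URS entries populated.", missing or "")
```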

4. Develop the technical evaluation criteria. Establishing baseline criteria is critical for technical comparison of available platforms. In this step, in addition to conventional measures, the latest instrumentation and control-system technologies should be explored. Some of the current areas that should be considered are:
Digital bus networks (such as Foundation Fieldbus, DeviceNet, Profibus, AS-i)
Communication technology (such as Ethernet, ControlNet or Modbus)
Built-in redundancy (hardware, power supply, communication)
Integrated, ISA S88-compliant batch engine for batch control
Integrated software tools (for control, data analysis, asset management, reporting, audit
trail, open connectivity and so on)
Once the specific criteria are established, an evaluation matrix should be developed. The
matrix provides an all-inclusive summary of all the key evaluation criteria. For a typical
example of important considerations, see box, p.70.
Other evaluation categories from the URS, such as system maintainability, product
technical and implementation support and platform lifecycle, should also be incorporated into
the matrix. For more on technical criteria and vendor selection, see Part 1 of this report, pp. 62–66.
5. Evaluate available systems against the technical criteria. After an initial scan of potential
platforms, at a minimum three reputable suppliers should be considered for further evaluation.
Onsite supplier presentations, with participation from the internal cross-functional team as well as the consulting engineer, should begin the formal evaluation process. The evaluation matrix is used by the team as a scorecard, both for grading the suppliers in each category and for generating overall supplier rankings (a simple scoring sketch appears below). After evaluating the suppliers, narrow the field to the top platforms before proceeding.
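The weighting and roll-up mechanics behind such a scorecard are simple enough to sketch in a few lines. In the Python example below, the categories, weights and grades are hypothetical placeholders, not values taken from the box referenced above.

```python
# Minimal sketch of a weighted evaluation matrix; all weights, categories
# and scores are hypothetical placeholders.
weights = {
    "digital_bus_support": 0.20,
    "redundancy": 0.15,
    "batch_engine_s88": 0.15,
    "integrated_software_tools": 0.20,
    "maintainability_and_support": 0.20,
    "platform_lifecycle": 0.10,
}

# Consolidated team grades per supplier, on a 1-10 scale
grades = {
    "Supplier A": {"digital_bus_support": 8, "redundancy": 9, "batch_engine_s88": 7,
                   "integrated_software_tools": 8, "maintainability_and_support": 6,
                   "platform_lifecycle": 7},
    "Supplier B": {"digital_bus_support": 7, "redundancy": 8, "batch_engine_s88": 9,
                   "integrated_software_tools": 6, "maintainability_and_support": 8,
                   "platform_lifecycle": 8},
}

def weighted_score(grade_row):
    # Sum of (category grade x category weight); weights sum to 1.0
    return sum(grade_row[c] * w for c, w in weights.items())

ranking = sorted(grades, key=lambda s: weighted_score(grades[s]), reverse=True)
for supplier in ranking:
    print(f"{supplier}: {weighted_score(grades[supplier]):.2f}")
```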
6. Conduct reference checks for the top platforms. Obtain a list of industry references from
each system supplier, with the following minimum criteria:
User requirements are similar
System implementation is complete and the process is running (It should be validated if
it is a pharmaceutical facility)
If possible, the vendor should offer references for multiple system installations
Contact the key references and identify both positive and negative aspects of their overall
experience with the system, including whether or not they would select and install the same
system in the future. If the platform is already being used by a facility within the customer's
own organization, then that facility should certainly be benchmarked as a reference.
7. Perform cost evaluation. Develop a preliminary bid package to include, as a minimum,
the following sections:
Bid instructions (scope, cost breakdown and schedule)
URS
Process flow charts, process steps, descriptions
Formal process flow diagrams (PFDs) and P&IDs
Equipment summary
Instrument count and I/O count by type
Preliminary network architecture
Distribute bid packages to the top suppliers prior to pre-bid meetings. Conduct such
meetings with suppliers and review the bid package. It is advantageous to include all suppliers
in one meeting, to save time and to set a clear and consistent expectation. Also, consistent
answers to all questions raised both during and after the pre-bid meeting, heard by or delivered to all suppliers, bring more uniformity to the bidding process.
Post-meeting questions are best answered through a formal request for information
(RFI) process with all Q&As forwarded to all bidders.
After the receipt of bids, the results are tabulated and analyzed. The box on p. 69 shows a sample cost-breakdown structure specified in the bid instructions. Use of such a structure allows for both component-based-cost and total-cost comparison within hardware, software and integration categories. In addition, ratios such as cost per I/O point, total hardware cost to total cost, total software cost to total cost, and total integration cost to total cost can be calculated and included in the tabulation to put the figures in perspective, as in the sketch below.
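For illustration only (the figures and category names below are invented, not taken from the sample cost-breakdown box), this short Python sketch computes the kinds of ratios described above from a tabulated bid.

```python
# Illustrative bid tabulation; all cost figures are hypothetical.
bids = {
    "Supplier A": {"hardware": 420_000, "software": 260_000, "integration": 310_000},
    "Supplier B": {"hardware": 380_000, "software": 300_000, "integration": 290_000},
}
io_count = 2500  # estimated I/O points from the preliminary engineering documents

for supplier, costs in bids.items():
    total = sum(costs.values())
    print(f"{supplier}: total ${total:,.0f}, cost per I/O point ${total / io_count:,.2f}")
    for category, cost in costs.items():
        # Category cost expressed as a fraction of the total bid
        print(f"  {category}: {cost / total:.1%} of total")
```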

8. Select the best system based on technical and cost evaluation. The best system is
recommended based on an overall comparison of technical scorecards, bid tabulation,
reference checks, and team member feedback. The chosen system should be the one that best
meets all user requirements.
9. Document the process steps and final justification. A final justification paper must be
written, to include a summary of activities in each process step and key points of comparison,
evaluation and conclusion. The justification paper should also reference the following
attachments:
Bid package and bid tabulation
Evaluation matrix
Reference checks and trip reports
Supplier presentation copies
10. Get approval from the project team and senior management. At this stage of the process,
the project team is already on board and agrees with the recommendation. Depending on the
organizational structure, the justification paper and the attachments should be presented to
senior and corporate management for review and approval.

Consulting Engineer's Process Steps


1. Assign an automation technical lead. This person should be assigned to collaborate with
and support the customer in implementing the process. The lead's qualifications should include
experience in automation-systems implementation and extensive knowledge of multiple
systems (PLC, DCS, hybrid), project engineering and construction. The lead should also be a
team player by temperament.
2. Assist in development of the customer's automation goals. The consulting engineer can assist in facilitating the development and prioritization of the customer's automation goals and
desired benefits from the level of automation expected in the project. These goals may include
production efficiency improvement, product quality improvement, process optimization, future
expansion and flexibility.
3. Develop or assist in development of URS. The consulting engineer should take the lead if
the customer does not have resources to develop the URS.
4. Develop preliminary engineering documents. As part of the system and process design
effort, several documents are generated that are critical to the platform selection process.
These documents include:
Integration or integrator scope of work
PFDs and P&IDs
Equipment list
Preliminary network architecture
Preliminary sequence of operations for each system

Estimated instrument and I/O count (for an example, see Tables 1 and 2; a simple tallying sketch follows this list)
By I/O type
Per controller
Per piece of equipment
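Since Tables 1 and 2 are not reproduced here, the Python sketch below is a hypothetical stand-in showing how an instrument list could be tallied by I/O type, controller and piece of equipment; the instrument records are invented.

```python
# Hypothetical instrument list; each record carries the attributes needed
# to roll up I/O counts by type, controller and piece of equipment.
from collections import Counter

instruments = [
    {"tag": "FT-1204", "io_type": "AI", "controller": "C01", "equipment": "Reactor R-100"},
    {"tag": "FV-1204", "io_type": "AO", "controller": "C01", "equipment": "Reactor R-100"},
    {"tag": "LSH-2101", "io_type": "DI", "controller": "C02", "equipment": "Tank T-200"},
    {"tag": "XV-2102", "io_type": "DO", "controller": "C02", "equipment": "Tank T-200"},
]

by_io_type = Counter(i["io_type"] for i in instruments)
by_controller = Counter(i["controller"] for i in instruments)
by_equipment = Counter(i["equipment"] for i in instruments)

print("By I/O type:   ", dict(by_io_type))
print("Per controller:", dict(by_controller))
print("Per equipment: ", dict(by_equipment))
```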
5. Collaborate with the customer on which suppliers to consider. The consulting engineer's
experience in working with several different automation suppliers helps the customer to
narrow down the list of potential suppliers to the ones that are reputable in the industry and
have a larger installed base.
The consultant also plays an important role in helping the customer develop the evaluation
matrix and establish the importance levels assigned to each criterion.
6. Attend and grade supplier presentations. By attending the supplier presentations, the
consultant helps the customer's team ask questions to challenge the proposed solutions, bring focus to conformance to the URS, and provide answers to technical questions.
As part of the evaluation team, the consultant grades each supplier and assists in
consolidating the grades from the overall project team.
7. Perform cost evaluations of the top candidates. In this step the consultant plays a critical
role by supporting the customer in development and distribution of bid packages, conducting
pre-bid meetings with suppliers, receiving bids and performing bid tabulation.
The bid package incorporates all the documents developed in Step 4 along with other
customer-supplied documents such as the URS and process description. The pre-bid meetings
are critical in ensuring that all suppliers are provided with all necessary information to properly
bid on the project.
The consulting engineer can also take the lead in answering RFIs during the bidding
process.
8. Conduct reference checks on top candidates. It is important that the consultant gather
supplier references and conduct reference checks independent of the customer, and then
compare the information collected with the customer to maximize the effectiveness of the
assessment.
The consultant should join the customer in visiting any reference sites.
9. Assist in the selection based on project requirements and cost constraints. As part of the
customer's evaluation team, the consultant helps the team in documenting the pros and cons
of each system, analyzing systems against the project requirements, summarizing findings
from reference checks, consolidating project team feedback, reviewing the budgetary bids
and, ultimately, selecting the best system and documenting the justification for that selection.

Control System Selection

With profits at stake, you must consider all the factors, and then choose
an automation provider with the knowledge, experience, technology and
resources to deliver a total solution
By Jack Gregg, Honeywell Industry Solutions

In the process industries, control-system technology is constantly evolving. New solutions for optimization, advanced control and predictive maintenance provide continuous improvements in production efficiency. However, selecting a new control system as part of a plant modernization, expansion or greenfield project involves more than just a technology
decision. Users should also consider factors such as system architecture, enterprise
integration, vendor experience, customer support and ease of future system upgrades.
When planning to implement a new control system, manufacturers face a number of
important considerations that will have a major impact on the success of their project over the
long term. The following is a brief discussion of the key factors involved, including initial
considerations, vendor selection and technology requirements.

Initial considerations
1. Potential suppliers
A level playing field for all? On a typical control-system project, the user obtains
competitive bids from a number of qualified automation providers. The user, whether working
independently or assisted by a systems integrator, may employ a strategy of feasibility,
conceptual and preliminary engineering to develop project scope, schedules, costs and
benefits, as well as identify, plan and estimate work items.
Because price tends to be an overriding factor in vendor selection, project specifications
should be developed carefully, so as to ensure a level playing field for all prospective system
suppliers. Selection of the low-cost bidder often comes with an unexpected price tag. A
supplier with limited expertise in the customer's industry and applications may overlook requirements not covered in the specifications, resulting in numerous change orders over the course of the project. These additional, incremental costs can add up quickly and negate savings provided by the supplier's bid.
Vendor selection is discussed in considerably more detail below.

2. Alliances with vendors


What are the long-term benefits? Manufacturers can avoid competitive bidding by establishing an alliance agreement with a single automation provider. Such alliances frequently tie project results into the customer's operating measures, with the automation provider's compensation varying according to the outcome. (For example, rather than specify its own solution, the automation provider may choose third-party technology that is better suited to a particular application.) Thus, the supplier has an equal stake in the success, or failure, of the work performed.
Many customers prefer vendor alliances because they allow for development of best-in-class control solutions that optimize total lifecycle performance and minimize project and lifecycle costs. Together, the customer and supplier develop best practices to ensure continual improvement from project to project.

A strategic alliance can be structured so that the customer does not actually purchase the
control system, but rather leases it from the automation provider. Under this scenario, the
customer benefits from enhanced, long-term automation functionality, as opposed to one-time
products, projects, hardware and software. Additionally, a multiyear service contract alleviates
the constraints of technology obsolescence (through accelerated and justified technology
upgrades), lack of skilled personnel (through co-sourcing or outsourcing) and lack of enough
capital (through appropriate and innovative financial structuring).

3. Project methodology
Who's accountable for results? Depending upon the size and scope of the control-system
project (ranging from automation of a single process, to construction of a new unit or
expansion of an entire plant), the user may elect to have an engineering and construction firm
serve as project manager and supervise the work through completion. For a step-by-step
guide to the owner-consultant approach, see Part 2 of this report, pp. 67–71.
As an option, a main automation contractor can act as the primary source of supply for the
project, providing a single source of accountability. The benefits of this approach come from
the automation contractors familiarity with the integrated control solution, ensuring that the
user gets the most out of new technology. In the role of main automation contractor, the
control vendor provides or obtains all necessary automation equipment and oversees design,
engineering, installation and commissioning of the new system.

4. System integration
How can everything work together? In many companies, the accountants, schedulers and
other non-plant-floor personnel need to access information from the manufacturing process
(with the proper security precautions) in order to make more informed decisions and analyses.
That means systems from the control room to the business office must be tightly integrated, not only across the plant architecture but, in many cases, around the world.
Companies wanting to unite the process and business worlds, and thus achieve true
enterprise-wide integration, should choose an automation solution that allows them to
establish a single, global database for all manufacturing information. This database must be
available to, and easily accessible by, all interested users, from the supervisory to the operator
level. The solution should also include applications synchronizing the database for use in the
business environment.

VENDOR SELECTION
1. Technology base
Does the vendor offer a complete solution? When it comes to selecting a control-system
provider, bigger is usually better. That is to say, the provider should offer a broad, well-supported product portfolio, or have in place strategic alliances with third-party suppliers, in order to meet the full scope of the customer's automation requirements.
The supplier's control-system platform should be based on a flexible, non-proprietary
architecture that can deliver robust process control, unlimited connectivity, reliable safety, and
seamless encapsulation of installed systems. The supplier should also offer a suite of
applications supporting advanced process control (APC), optimization, asset management, Six
Sigma methodologies and other business-driven initiatives. In addition, the supplier should
provide all of the instrumentation, control devices, value-added programs, services and
training necessary to support their solution.

Keep in mind that an automation provider may offer a state-of-the-art computing platform
but fall short when it comes to integrating advanced tools and applications in its solution.
Applications must then be purchased separately and bolted-on to the system.

2. Domain expertise
How well does the vendor know your industry? Domain expertise is one of the most
important criteria when evaluating prospective control-system providers. Only a supplier with
a proven, demonstrated understanding of your industry, company and business can help you
succeed.
For instance, the chosen supplier should have project teams, estimators, sales staff and
consultants that are focused on your market. They should be capable of deploying a project
engineering group familiar with your production strategies, knowledgeable about your business
practices, and able to apply automation technology to improve your financial results.
A supplier with long-standing customer domain expertise can also uncover implementation
cost savings. This supplier will ensure their project proposal covers all of the bases, thus
minimizing change orders and resulting in a proposed initial price and finishing price that are
relatively similar.

3. Application knowledge
How well does the vendor understand your processes? Once the supplier's domain
expertise has been established, the next factor to consider is application knowledge.
Specialized processes in, for instance, the chemical, fine-chemical and pharmaceutical
industries require extensive applications-engineering experience.
As your automation partner, the control-system supplier must have a sufficient knowledge
of your manufacturing processes to effectively apply advanced control strategies, and then put
this knowledge to use improving plant efficiency and business performance.

4. Joint accountability
Does the vendor believe in shared risks and shared rewards? Automation providers who
believe in joint accountability are so confident of the benefits of their solution they are willing
to join you in sharing risks. Rewards to the provider come from sharing in financial results
attributed to improved manufacturing performance.
In this type of business relationship, the customer no longer devotes capital to control-system purchases. Rather, the supplier owns, installs and maintains all plant automation
equipment. This includes updating technology as needed to capture the largest performance
benefits. The supplier also assumes control-related payroll costs, and is responsible for
recruiting, employing and training support personnel for the automation solution.
Contracts based on joint accountability remove the risks and costs of process automation
from the customer. Furthermore, they eliminate budget constraints that keep users from
modernizing their control-system architecture.

5. System upgrades
Does the vendor provide a secure path to the future? Pay close attention to whether
potential suppliers provide a secure, affordable migration path to the latest technology. Too
often, customers invest in physical and intellectual assets, only to find out later that their
supplier is discontinuing or no longer supporting a particular control-system architecture. This can require bulldozing your plant's data highway to make way for an entirely new and costly
platform.
Automation providers are wise to develop investment protection strategies that help
customers leverage the value of their existing assets. Some offer integration tools that provide
operators with a common view of data, applications, alarms, events and messages across
different platforms. Consequently, new and legacy systems can coexist while the customer
migrates at a pace it determines to the latest control technology.

6. Support services
Will the vendor maintain (and enhance) its solution? In the competitive process-control
field, a vendor's success hinges not only on the performance of its system solution, but also on
the quality of its support services. Despite rapid changes in automation technology, suppliers
must be willing and able to support legacy equipment dating back 10, 20 or even 30 years
(relatively old analog systems and instrumentation are still the norm in many plants).
Automation vendors should provide service offerings that support and protect their
customer base. A case in point: service contracts allowing customers to lock in their
automation budget at a fixed rate and over an extended period of time, and receive
guaranteed hardware and software upgrades to keep technology up-to-date.

TECHNOLOGY REQUIREMENTS
1. Control architecture
Does the solution provide an integrated automation platform? Users implementing a
modern enterprise-automation system can achieve unprecedented connectivity through all
levels of their plants. But an integrated architecture is only part of the solution global
manufacturers need for business agility, responsiveness and quality control.
New systems should also manage process knowledge through a combination of advanced
technologies, industrial domain expertise, and Six Sigma methodologies.
Other recommendations: choose an open, scalable control system that is fully redundant,
includes robust control algorithms, and provides on-process upgrades to minimize plant
downtime. (For more on open systems, see box, p. 66.) The system should be embedded with
best-in-class applications for advanced control, asset management and control monitoring,
and include a human interface integrating plantwide information and delivering real-time
process data. Additionally, the system should comply with open industry standards.

2. Field instrumentation
Does the solution integrate smart devices? Throughout the process industries, users are
now seeking control solutions that support digital integration of field instruments, allowing
processes to be linked with monitoring and control equipment, and providing the platform
needed to operate plants more profitably.
However, users may not be able to justify the increased implementation cost for installing
fieldbus technology in all areas of the plant. Some processes simply do not warrant the added
capabilities of smart field devices. For this reason, your automation provider should offer a
maintenance-management program incorporating all of your field assets, traditional and fieldbus alike, and providing tools for integrating all device information in a single database.

3. User interface

Does the solution support complex HMI requirements? Manufacturers with an older human-machine interface (HMI) platform may have a limited view of critical processes, and thus
cannot take steps to optimize performance. Nevertheless, replacing existing operator stations
to accommodate a new control system can be both costly and disruptive to plant operations.
Instead of requiring customers to support an outdated HMI or abandon it entirely, control-system suppliers should provide the means to leverage existing investments and intellectual
property, and at the same time migrate plant control rooms and engineering stations to newer,
more robust technology. This can include field upgrade kits allowing users to retain their
existing hardware and industrial-class furniture, while expediting the transition to the latest
operator environment.

4. Networks
Does the solution employ open or proprietary protocols? Control systems employing open
network protocols provide process plants with new levels of connectivity. Users have the
freedom to select the best control and instrumentation solutions for a given task. They can
mix and match devices from a variety of manufacturers, and transparently integrate them in a
field network strategy that suits their needs.
Be sure the control system you choose makes full use of recognized open standards, and is
equipped to integrate the industry-leading field-network protocols. These include Foundation Fieldbus, Profibus, HART, DeviceNet and ControlNet, among others.

5. Optimization
Does the solution support redesigned work processes? Increased competition is causing
manufacturers to look for ways to squeeze additional productivity out of their operations. This
requires a solution that redesigns work processes across the enterprise to get the most out of
your personnel, processes and technology.
When selecting a new control system, it is important that the vendor offer a solution tightly
integrating optimization, multivariable control and APC. Moreover, these tools should be
embedded in a system architecture that captures and leverages process knowledge over time.
In this manner, you will have access to the information needed to involve the right people in
the process, at the right time. You will also have a methodology in place for continuous
improvement.

6. Asset management
Does the solution focus on the entire process? Asset-management systems are designed to
support a reliability-centered maintenance program with automated decision support for
identification and repair of equipment problems. Benefits include increased plant availability
and uptime, focused maintenance efforts, and reduced equipment failure.
Users should determine whether a supplier's asset-management solution is field-centric or process-centric. A field-centric solution relies on device diagnostics to enable preventive
maintenance on valves, transmitters and other intelligent instruments. Although a good first
step, this approach does not provide an understanding of how device status impacts process
performance.
With a process-centric solution, users have an enterprise-wide view of the relationships
between all installed assets, and as such, can make informed decisions affecting plant
availability. This approach allows the user to determine: 1) the impact of equipment problems

on the process; 2) the association between these problems and the business; and 3) the
priority of needed repairs.

OTHER CONSIDERATIONS
1. Commissioning and startup
How long does it take? Proper planning is necessary to keep commissioning and startup
time off the critical path of a control-system project. Start-ups vary based on whether they are
greenfield or revamp projects. Revamp projects are generally done by hot cut-over. This
means moving one loop at a time from the existing to the new system, while the unit
operates, thereby eliminating production losses.
During the cut-over, operators will have to operate two systems. Proper training and
preparation for this is important. A comprehensive cut-over plan should be developed that is
aligned to the process units and operational requirements. Plant operations, maintenance,
engineering and project personnel need to be involved in the plan development. Your supplier
should have a long reference list in planning, supporting and executing hot cut-overs.
On greenfield projects, the entire loop, from transmitter to control system and back to the valve, can be checked out beforehand. This provides the opportunity to verify the
operational integrity of the loop before the process begins operating. Proper planning is
needed to keep the control system from creating any operation difficulties during plant
startup. A supplier should have a dedicated team of engineers and craftsmen with extensive
experience in both greenfield startups and hot cutovers.

2. Outside services
How much engineering assistance is required? Many chemical companies are concentrating
on their core competencies and at the same time cutting costs. That means fewer internal
engineering resources. External engineering assistance can be critical to a successful project.
Does the supplier have a comprehensive organization in place to provide this assistance? Safe
project execution is of paramount importance. Does the supplier have a documented safety
program in place with a proven track record of safe project implementation? Also, does it have
engineering standards, methods and tools in place to efficiently provide project engineering
assistance?
Many aspects of control-system design may be unfamiliar to in-house engineering
resources. External assistance can fill these gaps. Consider assistance in system network
design, I/O layouts, graphics and human interface designs. The experience of a knowledgeable
supplier can be invaluable here. Much of the mundane work, such as database configuration
and graphics implementation, can also be handled externally. Make sure your supplier can
provide these services in an efficient and cost-effective manner.

3. Technology refresh
What is the cost of system upgrade? At some point in the life of a control system, it
becomes apparent that migrating to current technology is necessary. In many cases, the benefits realized from the migration make the economic justification evident.
For example, a chemical plant might use a control system upgrade to simplify its control
scheme and push constraints harder. Even if the plant was running at capacity prior to the
migration, it can increase production afterwards. The value of the increased production will
quickly pay for the system-upgrade costs.

Customers should expect their automation provider to have a defined plan or road map for
control-system upgrades. With guaranteed product support, multiyear service agreements and
an overall plant modernization strategy, this approach can be a win-win proposition for both
the supplier and customer.

4. Training
How much and how often? Plants modernizing their process controls face a difficult
dilemma: how to instruct employees in the use of new technology while at the same time
reducing training-related costs and minimizing disruptions of day-to-day operations. Scarcity
of personnel competent in process control is a notable problem today; for more see pp. 119,
120.
Your automation provider should tailor a training program to meet the requirements of your
site strategy and the constraints of your work environment. Conducting courses at the
customers site, for instance, can minimize employee travel expense and time away from the
job.
In addition, e-learning programs provide comprehensive training via the Internet. They not only save time and money, but also allow a larger number of employees to participate in courses than would have been possible with site-specific training (see p. 191 for more on e-learning tools).
