PID Controller: Control Loop Basics
The PID controller calculation (algorithm) involves three separate parameters, and is accordingly
sometimes called three-term control: the proportional, the integral and derivative values, denoted P,
I, and D. Heuristically, these values can be interpreted in terms of time: P depends on the present
error, I on the accumulation of past errors, and D is a prediction of future errors, based on current
rate of change. The weighted sum of these three actions is used to adjust the process via a control
element such as the position of a control valve or the power supply of a heating element.
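As a concrete illustration of this weighted sum, the following minimal Python sketch computes one update of a discrete-time PID controller. It assumes a fixed sample period dt, and the function and variable names (pid_step, the state dictionary holding the running integral and previous error, and so on) are illustrative rather than taken from any particular library.

    def pid_step(setpoint, measurement, state, kp, ki, kd, dt):
        # Present error: e = SP - PV
        error = setpoint - measurement
        # Accumulation of past errors (integral action)
        state["integral"] += error * dt
        # Rate of change of the error (derivative action, predicts the trend)
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        # Weighted sum of the three actions becomes the manipulated variable (MV)
        return kp * error + ki * state["integral"] + kd * derivative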
In the absence of knowledge of the underlying process, a PID controller has historically been considered the best choice. By tuning the three constants in the PID controller algorithm, the controller can provide control action
designed for specific process requirements. The response of the controller can be described in terms
of the responsiveness of the controller to an error, the degree to which the controller overshoots the
setpoint and the degree of system oscillation. Note that the use of the PID algorithm for control does
not guarantee optimal control of the system or system stability.
Some applications may require using only one or two modes to provide the appropriate system
control. This is achieved by setting the gain of undesired control outputs to zero. A PID controller
will be called a PI, PD, P or I controller in the absence of the respective control actions. PI controllers are fairly common, since derivative action is sensitive to measurement noise, whereas the absence of an integral term may prevent the system from reaching its target value.
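For example, the pid_step sketch above can act as a PI controller simply by setting the derivative gain to zero; the gain values below are arbitrary placeholders.

    state = {"integral": 0.0, "prev_error": 0.0}
    # kd = 0 removes derivative action, giving a PI controller;
    # setting ki = 0 as well would give a pure proportional controller.
    mv = pid_step(setpoint=50.0, measurement=42.0, state=state,
                  kp=2.0, ki=0.5, kd=0.0, dt=0.1)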
A familiar example of a control loop is adjusting the hot and cold valves of a faucet to obtain water at a desired temperature. Sensing the water temperature is analogous to taking a measurement of the process value or process
variable (PV). The desired temperature is called the setpoint (SP). The input to the process (the water
valve position) is called the manipulated variable (MV). The difference between the temperature
measurement and the setpoint is the error (e) and quantifies whether the water is too hot or too cold
and by how much.
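In code, these quantities map onto a few simple variables; the numbers below are made up for the faucet analogy.

    SP = 40.0        # setpoint: the desired water temperature
    PV = 35.0        # process value: the measured water temperature
    e = SP - PV      # error: positive here, so the water is too cold by 5 degrees
    # MV, the valve position, is whatever the controller outputs in response to e.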
After measuring the temperature (PV), and then calculating the error, the controller decides when to
change the tap position (MV) and by how much. When the controller first turns the valve on, it may
turn the hot valve only slightly if warm water is desired, or it may open the valve all the way if very
hot water is desired. This is an example of simple proportional control. In the event that hot water does not arrive quickly, the controller may try to speed up the process by opening the hot water valve more and more as time goes by. This is an example of integral control.
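The difference between the two behaviours can be seen with the pid_step sketch above: while the error persists, the proportional contribution stays constant, but the integral contribution keeps growing, so the commanded valve opening increases step by step. The gains and the fixed error below are illustrative.

    state = {"integral": 0.0, "prev_error": 0.0}
    for step in range(5):
        # The measurement is held at 20 while the setpoint is 40,
        # i.e. a persistent error of 20 that proportional action alone cannot clear.
        mv = pid_step(setpoint=40.0, measurement=20.0, state=state,
                      kp=1.0, ki=0.2, kd=0.0, dt=1.0)
        print(f"step {step}: commanded valve opening = {mv:.1f}")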
Making a change that is too large when the error is small is equivalent to a high gain controller and
will lead to overshoot. If the controller were to repeatedly make changes that were too large and
repeatedly overshoot the target, the output would oscillate around the setpoint in either a constant,
growing, or decaying sinusoid. If the oscillations increase with time then the system is unstable,
whereas if they decrease the system is stable. If the oscillations remain at a constant magnitude the
system is marginally stable.
In the interest of achieving a gradual convergence to the desired temperature (SP), the controller may wish to damp the anticipated future oscillations. To compensate for this effect, the controller may temper its adjustments. This can be thought of as a derivative control method.
If a controller starts from a stable state at zero error (PV = SP), then further changes by the controller
will be in response to changes in other measured or unmeasured inputs that affect the process, and hence the PV. Variables other than the MV that affect the process are known as disturbances. Generally, controllers are used to reject disturbances and/or implement setpoint
changes. Changes in feedwater temperature constitute a disturbance to the faucet temperature control
process.
In theory, a controller can be used to control any process which has a measurable output (PV), a
known ideal value for that output (SP) and an input to the process (MV) that will affect the relevant
PV. Controllers are used in industry to regulate temperature, pressure, flow rate, chemical
composition, speed and practically every other variable for which a measurement exists.
The PID control scheme is named after its three correcting terms, whose sum constitutes the manipulated variable (MV). Hence:

    MV(t) = Pout + Iout + Dout

where Pout, Iout, and Dout are the contributions to the output from the PID controller from each of the three terms, as defined below.
Proportional term
The proportional term (sometimes called gain) makes a change to the output that is proportional to the current error value. The proportional response can be adjusted by multiplying the error by a constant Kp, called the proportional gain. The proportional term is given by:

    Pout = Kp e(t)

where
Pout: Proportional term of output
Kp: Proportional gain, a tuning parameter
e: Error = SP − PV
t: Time or instantaneous time (the present)
A pure proportional controller will not always settle at its target value, but may retain a steady-state error. Specifically, the process gain (the drift of the process in the absence of control, such as a furnace cooling towards room temperature) biases a pure proportional controller. If the drift is downward, as in cooling, the controller settles below the set point, hence the term "droop".
Droop is proportional to the process gain and inversely proportional to the proportional gain. Specifically, the steady-state error is given by:

    e = G / Kp

where G is the process gain.
Droop is an inherent defect of purely proportional control. Droop may be mitigated by adding a
compensating bias term (setting the setpoint above the true desired value), or corrected by adding an
integration term (in a PI or PID controller), which effectively computes a bias adaptively.
Despite droop, both tuning theory and industrial practice indicate that it is the proportional term that
should contribute the bulk of the output change.
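Droop can be demonstrated with a small simulation. The first-order process model and all numbers below are assumptions chosen only to make the effect visible; under pure proportional control the process settles well below the setpoint.

    ambient, setpoint = 20.0, 100.0      # assumed ambient and desired temperatures
    pv, kp, dt = 20.0, 2.0, 0.1
    for _ in range(2000):
        mv = kp * (setpoint - pv)                        # proportional action only
        pv += dt * (-0.5 * (pv - ambient) + 0.5 * mv)    # assumed first-order process
    print(f"steady-state PV = {pv:.1f}, droop = {setpoint - pv:.1f}")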
Integral term
The contribution from the integral term (sometimes called reset) is proportional to both the
magnitude of the error and the duration of the error. Summing the instantaneous error over time
(integrating the error) gives the accumulated offset that should have been corrected previously. The
accumulated error is then multiplied by the integral gain and added to the controller output. The
magnitude of the contribution of the integral term to the overall control action is determined by the
integral gain, Ki.
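Written in the same notation as the other terms, the integral contribution takes the standard form:

    Iout = Ki ∫₀ᵗ e(τ) dτ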
Derivative term
The rate of change of the process error is calculated by determining the slope of the error over time (i.e., its first derivative with respect to time) and multiplying this rate of change by the derivative gain Kd. The magnitude of the contribution of the derivative term (sometimes called rate) to the overall control action is determined by the derivative gain, Kd. The derivative term is given by:

    Dout = Kd de(t)/dt

where
Dout: Derivative term of output
Kd: Derivative gain, a tuning parameter
SP: Setpoint, the desired value
PV: Process value (or process variable), the measured value
e: Error = SP − PV
t: Time or instantaneous time (the present)
The derivative term slows the rate of change of the controller output and this effect is most
noticeable close to the controller setpoint. Hence, derivative control is used to reduce the magnitude
of the overshoot produced by the integral component and improve the combined controller-process
stability. However, differentiation of a signal amplifies noise and thus this term in the controller is
highly sensitive to noise in the error term, and can cause a process to become unstable if the noise
and the derivative gain are sufficiently large. Hence an approximation to a differentiator with a
limited bandwidth is more commonly used. Such a circuit is known as a phase-lead compensator.
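A common way to obtain such a band-limited derivative in software is to low-pass filter the finite-difference slope before using it. The sketch below is one simple first-order filter; the smoothing factor alpha and the function name are illustrative choices, not a standard API.

    def filtered_derivative(error, prev_error, prev_deriv, dt, alpha=0.1):
        raw = (error - prev_error) / dt                  # noisy finite-difference slope
        return alpha * raw + (1.0 - alpha) * prev_deriv  # first-order low-pass smoothing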
Summary
The proportional, integral, and derivative terms are summed to calculate the output of the PID controller. Defining u(t) as the controller output, the final form of the PID algorithm is:

    u(t) = MV(t) = Kp e(t) + Ki ∫₀ᵗ e(τ) dτ + Kd de(t)/dt

where the tuning parameters are:
Proportional gain, Kp
Larger values typically mean faster response since the larger the error, the larger the
proportional term compensation. An excessively large proportional gain will lead to process
instability and oscillation.
Integral gain, Ki
Larger values imply steady state errors are eliminated more quickly. The trade-off is larger
overshoot: any negative error integrated during transient response must be integrated away by
positive error before reaching steady state.
Derivative gain, Kd
Larger values decrease overshoot, but slow down the transient response and may lead to instability due to amplification of signal noise in the differentiation of the error.
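The pieces above can be combined into a closed loop. The sketch below reuses the pid_step function and the assumed first-order process from the droop example; the gains are illustrative starting points rather than tuned values, but they show the integral action removing the droop that the pure proportional controller left behind.

    ambient, setpoint, pv, dt = 20.0, 100.0, 20.0, 0.1
    state = {"integral": 0.0, "prev_error": 0.0}
    for _ in range(3000):
        mv = pid_step(setpoint, pv, state, kp=2.0, ki=0.5, kd=0.2, dt=dt)
        pv += dt * (-0.5 * (pv - ambient) + 0.5 * mv)    # same assumed process model
    print(f"final PV = {pv:.1f} (integral action has removed the droop)")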